Bing's Webmaster Blog

The new evergreen Bingbot simplifying SEO by leveraging Microsoft Edge

Today we're announcing that Bing is adopting Microsoft Edge as the engine Bing uses to run JavaScript and render web pages. Doing so will reduce fragmentation of the web and ease Search Engine Optimization (SEO) for all web developers. As you may already know, the next version of Microsoft Edge is adopting the Chromium open source project. This update means Bingbot will be evergreen: we are committing to regularly update our web page rendering engine to the most recent stable version of Microsoft Edge.

Easing Search Engine Optimization

By adopting Microsoft Edge, Bingbot will now render all web pages using the same underlying web platform technology already used today by Googlebot, Google Chrome, and other Chromium-based browsers. This makes it easier for developers to ensure their web sites and their Content Management Systems work across all these solutions without having to investigate each one in depth. By disclosing our new Bingbot web page rendering technology, we are ensuring fewer SEO compatibility problems moving forward and increasing satisfaction in the SEO community. If your feedback can benefit the greater SEO community, Bing and Edge will propose and contribute to the open source Chromium project to make the web better for all of us. Head to GitHub to check out our explainers!

What happens next

Over the next few months, we will gradually switch to Microsoft Edge "under the hood". The key aspects of this evolution will be transparent for most sites. We may change our bingbot crawler user-agent as appropriate to allow rendering on some sites. For most web sites, there is nothing you need to worry about: we will carefully test that they render correctly before switching them to Microsoft Edge. We invite you to install and test Microsoft Edge and register your site with Bing Webmaster Tools to get insights about your site, to be notified if we detect issues, and to investigate your site using our upcoming tools based on our new rendering engine.

We look forward to sharing more details in the future. We are excited about the opportunity to be an even more active part of the SEO community and to continue to make the web better for everyone, including all search engines.

Thanks,
Fabrice Canel
Principal Program Manager
Microsoft - Bing

Import sites from Search Console to Bing Webmaster Tools

At Bing Webmaster Tools, we actively listen to the needs of webmasters. Verifying a website's ownership had been reported as a pain point in Bing Webmaster Tools. To simplify this process, we recently introduced a new method for webmasters to verify sites in Bing Webmaster Tools: webmasters can now import their verified sites from Google Search Console into Bing Webmaster Tools. The imported sites are auto-verified, eliminating the need to go through the manual verification process. Both Bing Webmaster Tools and Google Search Console use similar methods to verify the ownership of a website. Using this new functionality, webmasters can log into their Google Search Console account and import all the verified sites and their corresponding sitemaps into their Bing Webmaster Tools account.

Webmasters just need to follow 4 simple steps to import their sites:

Step 1: Sign in to your Bing Webmaster Tools account or create a new one here.
Step 2: Navigate to the My Sites page on Bing Webmaster Tools and click Import.
Step 3: Sign in with your Google Search Console account and click Allow to give Bing Webmaster Tools access to your list of verified sites and sitemaps.
Step 4: After authentication, Bing Webmaster Tools will display the list of verified sites present in your Google Search Console account, along with the number of sitemaps and the corresponding role for each site. Select the sites you want to add to Bing Webmaster Tools and click Import.

Webmasters can import multiple sites from multiple Google Search Console accounts. On successful completion, the selected sites will be added and automatically verified in Bing Webmaster Tools. Please note that it might take up to 48 hours for traffic data to appear for the newly verified websites. A maximum of 100 websites can be imported in one go; follow the steps above again if you want to add more than 100 sites. The limit of 1,000 sites per Bing Webmaster Tools account still applies.

Bing Webmaster Tools will periodically validate your site ownership status by syncing with your Google Search Console account. Therefore, your Bing Webmaster Tools account needs ongoing access to your Google Search Console account. If access to your Google Search Console account is revoked, you will have to either import your sites again or verify your sites using other verification methods.

We hope that both the Import from Google Search Console and Domain Connect verification methods will make the onboarding process easier for webmasters. We encourage you to sign up and leverage Bing Webmaster Tools to help drive more users to your sites.

We want to hear from you! As a reminder, you can always reach out to us and share feedback on Twitter and Facebook. If you encounter issues using this solution, please raise a service ticket with our support team.

Thanks!
Bing Webmaster Tools team

Bing Webmaster Tools simplifies site verification using Domain Connect

In order to submit site information to Bing, get performance reports, or access diagnostic tools, webmasters need to verify their site ownership in Bing Webmaster Tools. Traditionally, Bing Webmaster Tools has supported three verification options:

Option 1: XML file authentication
Option 2: Meta tag authentication
Option 3: Add a CNAME record to DNS

Options 1 and 2 require the webmaster to access the site's source code to complete the verification. With Option 3, the webmaster can avoid touching the site's source code but needs access to the domain hosting account to edit the CNAME record so that it holds the verification code provided by Bing Webmaster Tools.

To simplify Option 3, we are announcing support for the Domain Connect open standard, which allows webmasters to seamlessly verify their sites in Bing Webmaster Tools. Domain Connect is an open standard that makes it easy for a user to configure DNS for a domain running at a DNS provider (e.g. GoDaddy, 1&1 Ionos, etc.) to work with a service running at an independent service provider (e.g. Bing, O365, etc.). The protocol presents a simple experience to the user, isolating them from the details and complexity of DNS settings. Bing Webmaster Tools verification using Domain Connect is already live for users whose domains are hosted with supported DNS providers, and Bing Webmaster Tools will gradually integrate this capability with other DNS providers that support the Domain Connect open standard.

Quick guide on how to use the Domain Connect feature to verify your site in Bing Webmaster Tools:

Step 1: Open a Bing Webmaster Tools account. You can open a free Bing Webmaster Tools account by going to the Bing Webmaster Tools sign-in or sign-up page. You can sign up using a Microsoft, Google or Facebook account.

Step 2: Add your website. Once you have a Bing Webmaster Tools account, you can add sites to your account by entering the URL of your site into the Add a Site input box and clicking Add.

Step 3: Check if your site is supported by the Domain Connect protocol. When you add the website, Bing Webmaster Tools performs a background check to identify whether the domain/website is hosted with a DNS provider that has integrated the Domain Connect solution with Bing Webmaster Tools. If the site is supported, the Domain Connect verification view is shown; if not, you will see the default verification options mentioned at the top of this blog.

Step 4: Verify using your DNS provider credentials. On clicking Verify, you will be redirected to the DNS provider's site. Sign in using the account credentials associated with the domain/website under verification. On successful sign-in, your site will be verified by Bing Webmaster Tools within a few seconds. In certain cases, it may take longer for the DNS provider to send the site ownership signal to the Bing Webmaster Tools service.

Using this new verification option significantly reduces the time taken and simplifies the site verification process in Bing Webmaster Tools. We encourage you to try out this solution and get more users for your sites on Bing via Bing Webmaster Tools. If you face any challenges using this solution, you can raise a service ticket with our support team. We are building another solution to further simplify the site verification process and help webmasters easily add and verify their sites in Bing Webmaster Tools.

Watch this space for more!

Additional references:
https://www.plesk.com/extensions/domain-connect/
https://www.godaddy.com/engineering/2019/04/25/domain-connect/

Thanks!
Bing Webmaster Tools team

bingbot Series: Introducing Batch mode for Adaptive URL submission API

We launched the Adaptive URL submission capability that allows webmasters to submit up to 10,000 URLs using the online API or through the Bing Webmaster portal (Submit URLs option). Since the launch, we have received multiple requests from webmasters for the ability to submit URLs in batches. As we actively listen to webmasters and their needs, we are delighted to announce the Batch mode capability for the Adaptive URL Submission API, which allows webmasters and site managers to submit URLs in batches, saving them from the excessive API calls made when submitting URLs individually.

The Batch URL Submission API is very similar to the individual URL Submission API (blog post), so integrating the Batch API is very easy and follows the same steps.

Example requests for the Batch URL Submission API for the supported protocols are shown below.

JSON request sample:

POST /webmaster/api.svc/json/SubmitUrlbatch?apikey=sampleapikeyEDECC1EA4AE341CC8B6 HTTP/1.1
Content-Type: application/json; charset=utf-8
Host: ssl.bing.com

{
  "siteUrl": "http://yoursite.com",
  "urlList": [
    "http://yoursite.com/url1",
    "http://yoursite.com/url2",
    "http://yoursite.com/url3"
  ]
}

XML request sample:

POST /webmaster/api.svc/pox/SubmitUrlBatch?apikey=sampleapikeyEDECC1EA4AE341CC8B6 HTTP/1.1
Content-Type: application/xml; charset=utf-8
Host: ssl.bing.com

<SubmitUrlBatch xmlns="http://schemas.datacontract.org/2004/07/Microsoft.Bing.Webmaster.Api">
  <siteUrl>http://yoursite.com</siteUrl>
  <urlList>
    <string xmlns="http://schemas.microsoft.com/2003/10/Serialization/Arrays">http://yoursite.com/url1</string>
    <string xmlns="http://schemas.microsoft.com/2003/10/Serialization/Arrays">http://yoursite.com/url2</string>
    <string xmlns="http://schemas.microsoft.com/2003/10/Serialization/Arrays">http://yoursite.com/url3</string>
  </urlList>
</SubmitUrlBatch>

You will get an HTTP 200 response on successful submission of the URLs. The URLs will then be checked for compliance with the Bing Webmaster Guidelines and, if they pass, they will be crawled and indexed within minutes.

Please refer to the documentation for generating the API key and to the Batch URL Submission API documentation for more details. Note that the maximum supported batch size for this API is 500 URLs per request, and the total limit on the number of URLs submitted per day still applies.

So, integrate the APIs today to get your content indexed in real time by Bing, and let us know what you think of this capability. Please reach out to bwtsupport@microsoft.com if you face any issues while integrating.

Thanks!
Bing Webmaster Tools Team
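As a companion to the request samples in the post above, here is a minimal sketch of calling the JSON batch endpoint from a script using Python's requests library; the API key and URLs are the placeholders from the samples, so substitute your own values:

import requests

API_KEY = "sampleapikeyEDECC1EA4AE341CC8B6"  # placeholder -- use your own key
SITE_URL = "http://yoursite.com"             # placeholder site

def submit_url_batch(site_url, urls):
    # Mirrors the JSON sample above: POST the site URL and the list of page URLs.
    endpoint = f"https://ssl.bing.com/webmaster/api.svc/json/SubmitUrlbatch?apikey={API_KEY}"
    response = requests.post(endpoint, json={"siteUrl": site_url, "urlList": urls}, timeout=30)
    response.raise_for_status()  # an HTTP 200 indicates a successful submission
    return response

if __name__ == "__main__":
    submit_url_batch(SITE_URL, [f"{SITE_URL}/url1", f"{SITE_URL}/url2", f"{SITE_URL}/url3"])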

bingbot Series: Easy set-up guide for Bing’s Adaptive URL submission API

In February, we announced the launch of the adaptive URL submission capability. As called out during the launch, as an SEO manager or website owner you do not need to wait for the crawler to discover new links; you can submit those links automatically to Bing to get your content indexed as soon as it is published! Who in SEO didn't dream of that? In the last few months we have seen rapid adoption of this capability, with thousands of websites submitting millions of URLs and getting them indexed on Bing instantly.

At the same time, a few webmasters have asked for guidance on integrating the adaptive URL submission API. This blog shows how easy it is to set up.

Step 1: Generate an API key

Webmasters need an API key to be able to access and use the Bing Webmaster APIs. This API key can be generated from Bing Webmaster Tools by following these steps:

1. Sign in to your account on Bing Webmaster Tools. In case you do not already have a Bing Webmaster account, sign up today using any Microsoft, Google or Facebook ID.
2. Add and verify the site that you want to submit URLs for through the API, if not already done.
3. Select and open any verified site through the My Sites page on Bing Webmaster Tools and click on Webmaster API in the left-hand navigation menu.
4. If you are generating the API key for the first time, click Generate to create an API key. Otherwise you will see the key generated previously.

Note: Only one API key can be generated per user. You can change your API key at any time; the change is picked up by the system within 30 minutes.

Step 2: Integrate with your website

You can use either of the protocols below to easily integrate the Submit URL API into your system.

JSON request sample:

POST /webmaster/api.svc/json/SubmitUrl?apikey=sampleapikeyEDECC1EA4AE341CC8B6 HTTP/1.1
Content-Type: application/json; charset=utf-8
Host: ssl.bing.com

{
  "siteUrl": "http://example.com",
  "url": "http://example.com/url1.html"
}

XML request sample:

POST /webmaster/api.svc/pox/SubmitUrl?apikey=sampleapikey341CC57365E075EBC8B6 HTTP/1.1
Content-Type: application/xml; charset=utf-8
Host: ssl.bing.com

<SubmitUrl xmlns="http://schemas.datacontract.org/2004/07/Microsoft.Bing.Webmaster.Api">
  <siteUrl>http://example.com</siteUrl>
  <url>http://example.com/url1.html</url>
</SubmitUrl>

If the URL submission is successful, you will receive an HTTP 200 response. This ensures that your pages will be discovered for indexing, and if the Bing Webmaster Guidelines are met, the pages will be crawled and indexed in real time. Using either of the above methods you can directly and automatically let Bing know whenever new links are created on your website. We encourage you to integrate such a solution into your Web Content Management System to let Bing auto-discover your new content at publication time.

In case you face any challenges during the integration, you can reach out to bwtsupport@microsoft.com to raise a service ticket. Feel free to contact us if your web site requires more than 10,000 URLs submitted per day. We will adjust as needed.

Thanks!
Bing Webmaster Tools team
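For readers who prefer a script over raw HTTP, here is a minimal sketch of the same JSON SubmitUrl call using Python's requests library; the API key, site and URL values are the placeholders from the samples in the post above:

import requests

def submit_url(api_key, site_url, url):
    # Mirrors the JSON sample above: one site URL and one page URL per call.
    endpoint = f"https://ssl.bing.com/webmaster/api.svc/json/SubmitUrl?apikey={api_key}"
    response = requests.post(endpoint, json={"siteUrl": site_url, "url": url}, timeout=30)
    response.raise_for_status()  # an HTTP 200 response means the URL was accepted

if __name__ == "__main__":
    # Placeholder values copied from the samples above.
    submit_url("sampleapikeyEDECC1EA4AE341CC8B6", "http://example.com", "http://example.com/url1.html")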

bingbot Series: Get your content indexed fast by now submitting up to 10,000 URLs per day to Bing

Today, we are excited to announce a significant increase in the number of URLs webmasters can submit to Bing to get their content crawled and indexed immediately. We believe that this change will trigger a fundamental shift in the way that search engines, such as Bing, retrieve and are notified of new and updated content across the web. Instead of Bing frequently monitoring RSS and similar feeds or frequently crawling websites to check for new pages, discover content changes and/or new outbound links, websites will notify Bing directly about relevant URLs changing on their website. This means that eventually search engines can reduce how often they crawl sites to detect changes and refresh the indexed content.

For many years, Bing has offered all webmasters the ability to submit their site URLs through the Bing Webmaster Tools portal as well as the Bing Webmaster Tools API for immediate crawl and indexation. Until today, this feature was throttled for all sites to a maximum of 10 URLs per day and 50 URLs per month. Today we are releasing the Adaptive URL submission feature, which increases the daily quota by 1,000x, allowing you to submit up to 10,000 URLs per day, with no monthly quota. The daily quota per site will be determined based on the site's verified age in Bing Webmaster Tools, site impressions and other signals that are available to Bing; we will tweak this logic as needed based on the usage and behavior we observe.

Key things to note:

- Site verified age is one of the signals, but not the only signal, used to determine the daily URL quota per site.
- Webmasters can see the revised limit for their site on the Bing Webmaster Tools portal (Submit URLs option) or by using the Get URL submission quota API.
- In the Bing Webmaster Tools portal, under the Submit URLs option, webmasters will see a maximum of the 1,000 most recently submitted URLs, even though the permissible quota per day may be greater than 1,000.
- As per existing functionality, the Submit URL option is not applicable for sub-domain level sites. If there are sites mapped to a sub-domain, the quota is allocated at the domain level, not at the sub-domain level.

So, log in to Bing Webmaster Tools or integrate the Bing Webmaster APIs into your Content Management System now to benefit from the increase, and contact us with feedback. Feel free to contact us as well if your web site requires more than 10,000 URLs submitted per day. We will adjust as needed.

Thanks!
Bing Webmaster Tools team
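For illustration, here is a hedged sketch of checking a site's daily quota from a script. The post names a "Get URL submission quota" API; the GetUrlSubmissionQuota method name and parameters below are assumptions modeled on the SubmitUrl samples elsewhere in this series, so check the Bing Webmaster API documentation for the exact signature:

import requests

def get_url_submission_quota(api_key, site_url):
    # Assumed method name and parameters -- verify against the official docs.
    endpoint = "https://ssl.bing.com/webmaster/api.svc/json/GetUrlSubmissionQuota"
    response = requests.get(endpoint, params={"apikey": api_key, "siteUrl": site_url}, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(get_url_submission_quota("sampleapikeyEDECC1EA4AE341CC8B6", "http://example.com"))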

Introducing Clarity, a web analytics product for webmasters

Today, we are announcing the beta release of Clarity, a web analytics product which enables website developers to understand user behavior at scale. Web developers face many challenges in building compelling and user-friendly websites. Understanding why users struggle, where users run into issues or why they abandon a website is difficult. When making updates to a web experience, A/B experimentation helps developers decide which way to go. While A/B experiments allow developers to see when their key metrics are moving, the primary drawback is the lack of visibility into why the metrics moved in any given direction. This gap in understanding user behavior led us to build Clarity.

Session Replay

The session replay capability of Clarity allows web developers to view a user's page impression to understand user interactions such as mouse movement, touch gestures, click events and much more. Being able to replay user sessions allows web developers to empathize with users and understand their pain points.

Clarity Case Studies

Bing uses Clarity to detect poor user experience due to malware

In the Wild West of devices and browsers, the experience you think you are serving and what your users see can be completely different: devices, plugins, proxies and networks can all change and degrade the quality of your experience. These problems are expensive to diagnose and fix. (The first image shows the Bing page with malware, while the second shows the default Bing experience after removal of the malware.) Clarity has been used by the Bing UX team at Microsoft to delve into sessions that had negative customer satisfaction and determine what went wrong. In some cases, engineers were able to detect pages that had multiple overlays of advertisements and looked nothing like the expected experience for customers. Looking through the layout anomalies and network request logs that Clarity provides, Bing developers were able to diagnose the cause: malware installed on the end user's machine was hijacking the page and inserting bad content. With the knowledge of what was causing these negative experiences, Bing engineers were able to design and implement changes which defended the Bing page. By doing so they increased revenue while decreasing page load time, all while giving their customers a significantly improved experience.

Cook with Manali uses Clarity to improve user engagement

Cook with Manali is a food blog and, like many other blogs dedicated to cooking, posts begin with a story about the inspiration behind the recipe. Posts have detailed instructions to prepare the meal, high-quality photographs of the finished dish and potentially step-by-step pictures to help explain the more complicated parts. Near the bottom of the page is a shorthand recipe card summarizing ingredients, instructions and nutritional information. While this long post format enables food bloggers to emotionally connect with their readers and preemptively address any complication in the recipe, some readers would rather get straight to the recipe. When the Cook with Manali team started using Clarity, they were able to investigate real user sessions and realized that almost thirty percent of users were abandoning the page before reaching the bottom of these recipe pages, which holds important information about the recipe. In many cases, it seemed that users felt they had to scroll too far to get to the recipe they really cared about and lost patience before making it far enough down the page.
The developers realized their strategy was backfiring and creating a bad experience for some of their users, prompting them to add a "Jump to Recipe" button at the top of these pages. With the new button deployed, the team was able to see traffic going up and abandonment going down. When they dug into the new session replays, they were able to see users utilizing the new button and getting directly to the content they cared about. They saw abandonment for these pages drop to roughly ten percent, signaling a significant increase in user satisfaction. Interestingly, many users now utilize the "Jump to Recipe" button and then scroll back up to read the larger story afterwards.

How does Clarity work?

Clarity works on any HTML webpage (desktop or mobile) after adding a small piece of JavaScript to the website. This JavaScript code listens to browser events and instruments layout changes, network requests and user interactions. The instrumentation data is then uploaded and stored in the Clarity server running on Microsoft Azure.

Other capabilities coming soon

Interesting sessions are automatically bubbled up based on Clarity's AI and machine learning capabilities to help web developers review user sessions with abnormal click or scroll behavior, session length, JavaScript errors and more. Web developers can spend less time and gain more insight into their users by focusing on the sessions that Clarity marks as most relevant.

Related sessions are groupings of similar sessions that are recommended based on a single session. This feature allows web developers to quickly understand the scope of a specific user behavior and find other occurrences for the same user as well as other users.

Heatmaps provide a view into user behavior at an aggregate level through click/touch and scroll heatmaps. Click/touch heatmaps show the distribution of user interactions across a webpage. Scroll heatmaps show how far users scroll on your webpage.

How do I get started?

Sign up at the Clarity website using your Microsoft Account! (In case you don't have one, you can sign up here.) When you create a new project, it will be added to the waitlist. A notification will be sent when your project is approved for onboarding, and you can then log in to Clarity to retrieve the uniquely generated JS code for your project. Once you have added the code to your website, you can use the Clarity dashboard to start replaying user sessions and gaining insights. Please reach out to ClarityMS@microsoft.com if you have any questions.

Contributing to Clarity

The Clarity team has also open sourced the JavaScript library which instruments pages to help understand user behavior on websites on GitHub. As Clarity is in active development with continuous improvements, join our community and contribute. Getting started is easy: just visit GitHub and read through our README.

Thank you,

bingbot Series: Getting most of Bingbot via Bing Webmaster Tools

There are multiple features in Bing Webmaster Tools that allow webmasters to check Bingbot's performance and issues on their site, provide input to Bingbot's crawl schedule, and check whether that random bot hitting their pages frequently is actually Bingbot or not. In part 4 of our bingbot series, Nikunj Daga, Program Manager for Bing Webmaster Tools, revisits some of the tools and features that assist webmasters in troubleshooting and optimizing Bingbot's performance on their site.

Crawl Information - Webmasters can get data about Bingbot's performance on their site in the Reports and Data section in Bing Webmaster Tools. The site activity chart on this page shows an overlapping view of total pages indexed, pages crawled and pages with crawl errors for the last six months, along with impressions and clicks data. Through this chart, it is easy for webmasters to see visually whether the changes they made on their sites had any impact on page crawling. Further, to get more information on the pages with crawl errors, webmasters can go to the Crawl Information page. This page provides an aggregated count of the pages with each type of error that Bingbot faced, along with the list of those URLs. This makes it simple for webmasters to troubleshoot why a particular page they are looking for in Bing does not appear when searching.

Crawl Errors – In addition to webmasters checking the Crawl Information page for crawl errors, Bing Webmaster Tools also proactively notifies webmasters in case Bingbot faces a significant number of issues while crawling the site. These notifications are sent to the Message Center in Bing Webmaster Tools. These alerts can also be sent by email to users who do not visit Webmaster Tools on a regular basis, so it is not a bad idea for webmasters to opt in to email communication from Bing Webmaster Tools through the Profile page. Webmasters can set preferences for the kinds of alerts they want to receive emails for, along with the preferred contact frequency. Bingbot can face different kinds of errors while crawling a site; a detailed list, along with descriptions and recommended actions, can be found here.

Crawl Control – The Crawl Control feature allows webmasters to provide input to Bingbot about the crawl speed and timing for their site. It can be found under the Configure My Site section in Bing Webmaster Tools. Using this feature, you can set an hourly crawl rate for your site and notify Bingbot to crawl slowly during peak business hours and faster during off-peak hours. There are preset schedules to choose from based on the most common business hours followed across the globe. In addition to the preset schedules, webmasters also have the option to fully customize the crawl schedule based on their site's traffic pattern. Customizing the crawl pattern is very easy and can be done by dragging and clicking on the graph in the Crawl Control feature.

Fetch as Bingbot – The Fetch as Bingbot tool returns the code that Bingbot sees when it crawls the page. Webmasters can find this feature under the Diagnostics and Tools section and submit a request to fetch as Bingbot. Once the fetch is completed, the status changes from Pending to Completed, and webmasters can see the code that Bingbot receives when it tries to crawl the site. This is a useful feature for webmasters who use dynamic content on their sites, and it is the basic check to see what data Bingbot receives among all the dynamic and static content on the site.

Verify Bingbot - Found under the Diagnostics and Tools section in Bing Webmaster Tools, the Verify Bingbot tool lets webmasters check whether a Bing user agent string appearing in their server logs is actually from Bing or not. This can help webmasters determine if someone is hiding their true identity and attacking the site using Bing's name. It also helps webmasters who have manually configured IP whitelists for Bingbot on their servers. Since Bing does not release its list of IPs, webmasters can use this tool to check whether the IPs allowed on the server belong to Bing and whether they are whitelisting the right set of IPs.
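Alongside the Verify Bingbot tool, a commonly used offline check is a reverse DNS lookup followed by a forward confirmation. The sketch below assumes that genuine bingbot IP addresses reverse-resolve to hostnames under search.msn.com, as described in Bing's crawler documentation; treat the Verify Bingbot tool in the portal as the authoritative check:

import socket

def is_probably_bingbot(ip_address):
    # Reverse DNS: a genuine bingbot IP should resolve to *.search.msn.com.
    try:
        hostname, _, _ = socket.gethostbyaddr(ip_address)
    except socket.herror:
        return False
    if not hostname.endswith(".search.msn.com"):
        return False
    # Forward-confirm: the hostname must resolve back to the original IP.
    try:
        _, _, forward_ips = socket.gethostbyname_ex(hostname)
    except socket.gaierror:
        return False
    return ip_address in forward_ips

if __name__ == "__main__":
    print(is_probably_bingbot("203.0.113.10"))  # documentation-range IP, expected False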
Thus, there is a lot webmasters can do to improve Bingbot's performance on their site using the features in Bing Webmaster Tools. These features were developed and have evolved over the years based on feedback we receive from the webmaster community. So, log in to Bing Webmaster Tools now to use the features and let us know what you think.

Thanks!
Nikunj Daga
Program Manager, Bing Webmaster Tools

bingbot Series: JavaScript, Dynamic Rendering, and Cloaking. Oh My!

Last week, we posted the second blog of our bingbot series: Optimizing Crawl Frequency. Today is Halloween, and like every day, our crawler (also known as a "spider") is wandering outside, browsing the world wide web, following links, seeking to efficiently discover, index and refresh the best web content for our Bing users.

Occasionally, bingbot encounters websites relying on JavaScript to render their content. Some of these sites link to many JavaScript files that need to be downloaded from the web server. In this setup, instead of making only one HTTP request per page, bingbot has to make several requests. Some sites are spider traps, with dozens of HTTP calls required to render each page! Yikes. That's not optimal, now is it?

As we shared last week at SMX East, bingbot is generally able to render JavaScript. However, bingbot does not necessarily support all the same JavaScript frameworks that are supported in the latest version of your favorite modern browser. Like other search engine crawlers, it is difficult for bingbot to process JavaScript at scale on every page of every website while minimizing the number of HTTP requests at the same time.

Therefore, in order to increase the predictability of crawling and indexing by Bing, we recommend dynamic rendering as a great alternative for websites relying heavily on JavaScript. Dynamic rendering is about detecting the user agent and rendering content differently for humans and for search engine crawlers. For such sites, we encourage detecting our bingbot user agent, prerendering the content on the server side and outputting static HTML, helping us minimize the number of HTTP requests and ensuring we get the best and most complete version of your web pages every time bingbot visits your site.

Is using JavaScript for dynamic rendering considered cloaking?

When it comes to rendering content specifically for search engine crawlers, we inevitably get asked whether this is considered cloaking... and there is nothing scarier for the SEO community than getting penalized for cloaking, even during Halloween! The good news is that as long as you make a good faith effort to return the same content to all visitors, with the only difference being that the content is rendered on the server for bots and on the client for real users, this is acceptable and not considered cloaking. So if your site relies a lot on JavaScript and you want to improve your crawling and indexing on Bing, look into dynamic rendering: you will certainly benefit immensely, receiving only treats and no tricks!

Happy Halloween!
Fabrice Canel and Frédéric Dubut
Program Managers
Microsoft - Bing
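To make the recommendation above concrete, here is a minimal dynamic rendering sketch. It assumes a Python web app using Flask, and the two render helpers are hypothetical placeholders for your prerenderer and your client-side app shell; the same content is returned to every visitor, only where it is rendered differs, which is the good-faith behavior described in the post:

from flask import Flask, request

app = Flask(__name__)

BOT_TOKENS = ("bingbot",)  # user-agent substrings that should receive prerendered HTML

def render_prerendered_html(path):
    # Hypothetical helper: return server-side rendered, static HTML for this path.
    return f"<html><body><h1>Prerendered content for /{path}</h1></body></html>"

def render_app_shell(path):
    # Hypothetical helper: return the JavaScript-driven app shell for real users.
    return "<html><body><div id='app'></div><script src='/app.js'></script></body></html>"

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def serve(path):
    user_agent = request.headers.get("User-Agent", "").lower()
    if any(token in user_agent for token in BOT_TOKENS):
        return render_prerendered_html(path)   # crawlers get static HTML
    return render_app_shell(path)              # humans get the client-rendered page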

bingbot Series: Optimizing Crawl Frequency

Last week, we posted the first blog of our bingbot series, Maximizing Crawl Efficiency, highlighting the main goal for bingbot and its core metric: crawl efficiency. In part 2 of our bingbot series, Cheng Lu, Principal Software Engineering Manager of the crawl team, shares one example of how we've optimized our processes to maximize crawl efficiency for large web sites whose content remains static, or unchanging.

Keeping indexed content current while limiting crawling of content that has not changed

When most people conduct a search, they are typically looking for the most recent content published; however, the search engine results may link to webpages that were published anywhere from days ago to years ago. This is a challenge, especially when searchers want to keep up with breaking news and the latest trends by accessing the most up-to-date content online. The internet and the search index are full of ghosts of yester-years past that are often resurrected by the power of search engines. For instance, I was able to retrieve the Microsoft 1996 annual report. Interesting, yes, especially if I need to do a historical report, but if I'm looking for the current annual investment report, it is not so useful. The crawler also needs to have discovered, crawled and indexed the latest Microsoft annual report in order for me to discover it when I do a search. The challenge for bingbot is that it can't fetch a web page only once. Once a page is published, the search engine must fetch the page regularly to verify that the content has not been updated and that the page is not a dead link. Defining when to fetch the web page next is the hard problem we are looking to optimize, with your help.

Case study: Cornell University Library, a great source of knowledge with many static, unchanging web pages

One challenge that we are trying to address is how often bingbot should crawl a site to fetch its content. The answer depends on the frequency with which the content is edited and updated. The Cornell University Library empowers Cornell's research and learning community with deep expertise, innovative services, and outstanding collections strengthened by strategic partnerships. Their web site https://arxiv.org/ is a mine of relevant information, and it contains millions of web pages on a range of topics from Physics to Science to Economics. Not only do they have millions of webpages and PDF files related to computer science, they even have content related to crawling and indexing websites.

Identifying patterns to allow bingbot to reduce crawl frequency

While new web pages may be posted daily, and some pages are updated on a regular basis, most of the content within the Cornell University Library is not edited for months or even years. The content is by and large static and unedited. By unedited, I mean that the HTML may change a little (for example, "copyright 2018" will become "copyright 2019" on January 1st) and the CSS and style sheets may change a little; however, such changes are not relevant for updating the indexed content within Bing. The content of the page is still the same. Additionally, only a few articles are deleted every year. Their library index increases in size with new and updated research, without substantially changing the content of the historically indexed research. Reviewing our crawling data, we discovered that bingbot was over-crawling the content and we were using more resources than necessary to check and re-check that the historical pages remained static in nature. What we learned was that we could optimize our system to avoid fetching the same content over and over, and instead check periodically for major changes. This resulted in about a 40% crawl saving on this site!
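To make the idea concrete, here is an illustrative sketch, not Bing's actual scheduler, of how a crawler might fingerprint a page's meaningful content and back off its re-crawl interval while the fingerprint stays the same; the helper names and thresholds are hypothetical:

import hashlib
import re

def content_fingerprint(html):
    # Strip markup and collapse whitespace so styling or boilerplate tweaks
    # are less likely to change the hash than real content edits.
    text = re.sub(r"<[^>]+>", " ", html)
    text = re.sub(r"\s+", " ", text).strip().lower()
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def next_crawl_interval(previous_interval_days, changed):
    # Re-check sooner when content changed; back off (up to a cap) when it did not.
    if changed:
        return max(1.0, previous_interval_days / 2)
    return min(60.0, previous_interval_days * 2)

if __name__ == "__main__":
    old = content_fingerprint("<html><body><p>Paper list, 2018 edition</p></body></html>")
    new = content_fingerprint("<html><body class='dark'><p>Paper list, 2018 edition</p></body></html>")
    print(next_crawl_interval(7.0, changed=(old != new)))  # 14.0: only markup changed, so back off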
Our work in identifying patterns for largely static content revealed an opportunity to reduce crawling for this "class" of websites (slow and rarely changing content), and in the following posts we'll share more learnings. Our work on improving crawler efficiency is not done yet, and we have a lot of opportunity ahead of us to continue to improve our crawler's efficiency and abilities across the hundreds of different types of data that are used to evaluate our crawler scheduling algorithms. The next step is to continue to identify patterns that apply to a multitude of websites, so we can scale our efforts and be more efficient with crawling everywhere.

Stay tuned! Next in this series of posts related to bingbot and our crawler, we'll provide visibility into the main criteria involved in defining bingbot's crawl quota and crawl frequency per site. I hope you are still looking forward to learning more about how we improve crawl efficiency and, as always, we look forward to seeing your comments and feedback.

Thanks!
Cheng Lu
Principal Software Engineering Manager
Microsoft - Bing
Fabrice Canel
Principal Program Manager
Microsoft - Bing

bingbot Series: Maximizing Crawl Efficiency

At the SMX Advanced conference in June, I announced that over the next 18 months my team will focus on improving our Bing crawler, bingbot. I asked the audience to share data to help us optimize our plans. First, I want to say "thank you" to those of you who responded and provided us with great insights. Please keep them coming! To keep you informed of the work we've done so far, we are starting this series of blog posts related to our crawler, bingbot. In this series we will share best practices, demonstrate improvements, and unveil new crawler abilities. Before drilling into details about how our team is continuing to improve our crawler, let me explain why we need bingbot and how we measure bingbot's success.

First things first: What is the goal of bingbot?

Bingbot is Bing's crawler, sometimes also referred to as a "spider". Crawling is the process by which bingbot discovers new and updated documents or content to be added to Bing's searchable index. Its primary goal is to maintain a comprehensive index updated with fresh content. Bingbot uses an algorithm to determine which sites to crawl, how often, and how many pages to fetch from each site. The goal is to minimize bingbot's crawl footprint on your web sites while ensuring that the freshest content is available. How do we do that? The algorithmic process selects URLs to be crawled by prioritizing relevant known URLs that may not be indexed yet, and already-indexed URLs that we re-check to ensure that the content is still valid (for example, not a dead link) and has not changed. We also crawl content specifically to discover links to new URLs that have yet to be discovered. Sitemaps and RSS/Atom feeds are examples of URLs fetched primarily to discover new links.

Measuring bingbot success: Maximizing crawl efficiency

Bingbot crawls billions of URLs every day. It's a hard task to do this at scale, globally, while satisfying all webmasters, web sites and content management systems, while handling site downtimes, and while ensuring that we aren't crawling too frequently. We've heard concerns that bingbot doesn't crawl some sites frequently enough and their content isn't fresh within the index, while at the same time we've heard that bingbot crawls other sites too often, putting a strain on the websites' resources. It's an engineering problem that hasn't fully been solved yet. Often, the issue is in managing the frequency with which bingbot needs to crawl a site to ensure new and updated content is included in the search index. Some webmasters request to have their sites crawled daily by bingbot to ensure that Bing has the freshest version of their site in the index, whereas the majority of webmasters would prefer bingbot to crawl their site only when new URLs have been added or content has been updated and changed. The challenge we face is how to model the bingbot algorithms based on what a webmaster wants for their specific site and the frequency with which content is added or updated, and how to do this at scale. To measure how smart our crawler is, we measure bingbot crawl efficiency. Crawl efficiency is how often we crawl and discover new and fresh content per page crawled. Our crawl efficiency north star is to crawl a URL only when the content has been added (URL not crawled before) or updated (fresh on-page content or useful outbound links). The more we crawl duplicated, unchanged content, the lower our crawl efficiency metric is.
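As a toy illustration of how such a ratio could be tracked, based on one reading of the metric as stated above rather than Bing's internal formula:

def crawl_efficiency(new_or_updated_pages_fetched, total_pages_fetched):
    # Useful fetches divided by all fetches; duplicated, unchanged fetches drag it down.
    if total_pages_fetched == 0:
        return 0.0
    return new_or_updated_pages_fetched / total_pages_fetched

# Example: 1,200 of 10,000 fetched URLs were new or had genuinely changed content.
print(crawl_efficiency(1_200, 10_000))  # 0.12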
Later this month, Cheng Lu, our engineering lead for the crawler team, will continue this series of blog posts by sharing examples of how crawl efficiency has improved over the last few months. I hope you are looking forward to learning more about how we improve crawl efficiency and, as always, we look forward to seeing your comments and feedback.

Thanks!
Fabrice Canel
Principal Program Manager, Webmaster Tools
Microsoft - Bing

Introducing Bing AMP viewer and Bing AMP cache

In 2016, Bing joined the Accelerated Mobile Pages (AMP for short) open-source effort to help you "find" and "do" searches faster, regardless of where you are and on any device, when you are looking for information. Today, we are pleased to announce the release of the Bing AMP viewer and Bing AMP Cache, enabling AMP-enabled web pages to work directly from Bing's mobile search results and allowing Bing to provide faster mobile experiences to Bing users. On Monday the 17th, we started the first phase of the global roll out of this AMP viewer and AMP carousel in the United States for the news carousel. We will continue the phased roll out to more web sites, more countries and regions, and other links in the search results pages. If you are in the United States, try it out on your mobile device by navigating to https://www.bing.com, searching for news-related queries and tapping the search results labelled with the AMP icon.

Advice for AMP webmasters and AMP advertisers

The AMP protocol offers the ability to cache and serve cached copies of AMP content that is published on the web, providing faster user experiences on Bing. In order to enable your AMP published content within Bing, you need to allow Bingbot (our crawler) to fetch AMP content and allow cross-origin resource sharing (CORS) for the bing-amp.com domain. Most AMP-enabled sites and advertisers have already authorized CORS sharing for the ampproject.org domain, but now also need to add bing-amp.com to the allowed list.

Thank you,
Fabrice Canel
Principal Program Manager
Microsoft Bing
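For illustration, here is a generic CORS sketch (assuming a Python/Flask server) showing an existing allow-list that already contains the ampproject.org origin being extended with bing-amp.com. This is not the full AMP CORS specification, and real deployments typically also need to match cache subdomains; consult the AMP project's CORS documentation for the exact headers your AMP endpoints must return:

from flask import Flask, request

app = Flask(__name__)

ALLOWED_ORIGINS = {
    "https://ampproject.org",  # already allowed by most AMP-enabled sites
    "https://bing-amp.com",    # newly required for the Bing AMP cache
}

@app.after_request
def add_cors_headers(response):
    # Echo the origin back only when it is on the allow-list.
    origin = request.headers.get("Origin", "")
    if origin in ALLOWED_ORIGINS:
        response.headers["Access-Control-Allow-Origin"] = origin
    return response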

Anonymous URL Submission Tool Being Retired

Saying goodbye is never easy, but the time has come to announce the withdrawal of anonymous (non-signed-in) support for Bing's URL submission tool. Webmasters will still be able to log in and access the Submit URL tool in Bing Webmaster Tools, and this is easier than ever as the tool now supports Google and Facebook authentication in addition to existing Microsoft accounts.

Why say goodbye? The URLs received anonymously were of far too low quality to be trustable, and webmasters prefer having more ownership of the URLs submitted for their sites. To use the tool, webmasters just need to log in, add and verify their site, and then navigate to the Submit URL tool within the Configure My Site menu options. If webmasters want to use our Bing Webmaster Tools API, they have to generate an API key through Bing Webmaster Tools and follow the guidelines for its usage here. In case you haven't signed up for the tool yet, please click here to sign up.

Thank you,
The Bing Webmaster Tools Team

Introducing JSON-LD Support in Bing Webmaster Tools

Bing is proud to introduce JSON-LD support as part of Bing Webmaster Tools, as announced at the 2018 SMX Advanced. Users can log in to Bing Webmaster Tools and validate their JSON-LD implementation through the Markup Validator tool in the Diagnostics and Tools section. With the inclusion of JSON-LD support, the Markup Validator now supports seven markup languages, including Schema.org, HTML Microdata, Microformats, Open Graph and RDFa.

Bing works hard to understand the content of a page, and one of the clues that Bing uses is structured data. JSON-LD, or JavaScript Object Notation for Linked Data, is a JSON-based data format that can be used to implement structured data on your site so that Bing and other search engines can better understand your content. One of its advantages is that JSON-LD can be implemented without modifying the HTML content of your pages and can be placed in the header, body or footer of the page. This effectively means that webmasters can go about designing their pages as they like without having to worry about arranging the information for markup implementations. JSON-LD also makes it easy to define links and relationships between the data and entities present on your pages because it supports nested data. However, with the wide array of possibilities that comes with JSON-LD, webmasters should be careful not to put invalid or incorrect information in the markup. Remember, even though the markup is not visible on your page, it is still read by the search engines, and putting spam data in the markup can hamper your presence on the search engines.

Let us know what you think of the JSON-LD support. If you don't have a Webmaster Tools account, you can sign up today.

Thank you,
The Bing Webmaster Team
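For illustration, here is a small sketch that emits a schema.org JSON-LD block ready to paste into a page; the Article type and its field values are hypothetical examples, not taken from the post:

import json

article_markup = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Introducing JSON-LD Support in Bing Webmaster Tools",
    "datePublished": "2018-06-12",
    "author": {"@type": "Organization", "name": "Bing Webmaster Team"},
}

# JSON-LD lives in a script tag and does not change the visible HTML of the page.
print('<script type="application/ld+json">')
print(json.dumps(article_markup, indent=2))
print("</script>")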

Intermittent Webmaster Tools API Issues Resolved

The Bing Webmaster team recently received feedback that our APIs were intermittently failing, and we deeply regret any inconvenience caused by the API failures. We recognize the frustration that this may have caused. Upon investigation, we discovered a technical glitch that led to the API call failures; it is now resolved. We are very grateful to you, our users, who brought this to our attention, and thank you for your continued feedback and support.

We're Listening

Bing and Bing Webmaster Tools are actively listening to you, and we value your feedback. It is important to how we continually improve Bing, and it helps notify us of potential issues. It's easy to provide feedback: just look for the Feedback button or link at the bottom of each page, in the footer or the lower-right corner. We are using advances in technology to make it easier to quickly find what you are looking for, from answers to life's big questions to an item in an image you want to learn more about. At Bing, our goal is to help you get answers with less effort. We appreciate your feedback, and the more you can send, the more we can use it to improve Bing. Have a suggestion? Tell us! The more feedback the merrier.

The Bing Webmaster Tools Team

Introducing Social Login for Bing Webmaster Tools

Bing is actively listening to you, our customers, to improve our user experience and to understand what features you want within Bing Webmaster Tools. One of the top requests we have received is to expand the Webmaster Tools login capabilities to include a social login option. We are excited to begin the first phase of the global roll out of social login to Webmaster Tools on Friday, February 9th for webmasters in the United States. We expect to continue the roll out to webmasters in Europe, the Middle East and Africa, Latin America and the Asia Pacific regions shortly thereafter. Webmasters will be able to log in to Bing Webmaster Tools using their Facebook and Google accounts in addition to their existing Microsoft account. Additionally, this means that the messages Webmaster Tools may occasionally send you about your managed properties will be sent to the email account associated with the Webmaster Tools account you are logged in with. The most recent messages will still be available in your Webmaster Tools Message Center. If you don't have a Webmaster Tools account, you can sign up today. Let us know what you think of the new social login option. Tweet to us @Bing.

Thank you,
The Bing Webmaster Team

Bing adds Fact Check label in SERP to support the ClaimReview markup

Bing is adding a new UX element to the search results, called the "Fact Check" label, to help users find fact-checking information on news and on major stories and webpages within the Bing search results. The label may be used on both news articles and web pages that Bing has determined contain fact check information, to allow users to have additional information to judge for themselves what information on the internet is trustworthy. The label may be used on a broad category of queries including news, health, science and politics. Bing may apply this label to any page that has schema.org ClaimReview markup included on the page.

When determining whether you should use this tag for your articles or webpages, consider whether they meet the following criteria, which are characteristics we consider for fact-checking sites:

•    The analysis must be transparent about sources and methods, with citations and references to primary sources included.
•    Claims and claim checks must be easily identified within the body of fact-check content. Readers should be able to determine and understand what was checked and what conclusions were reached.
•    The page hosting the ClaimReview markup must have at least a brief summary of the fact check and the evaluation, if not the full text.

Bing determines whether an article might contain fact checks by looking for the schema.org ClaimReview markup. In addition to the ClaimReview markup being contained on the page, Bing also looks for sites that follow commonly accepted criteria for fact checks, including those of third-party fact-checking organizations. Please note that we may not show the Fact Check label for all pages that include the ClaimReview schema markup. If we find sites not following the criteria for the ClaimReview markup, we might ignore the markup. We will consider the reputation of the site as well as other factors to determine if and when the tag should show. Use of the ClaimReview tag when appropriate fact checking has not been done is a violation of our webmaster guidelines, and Bing may penalize sites for such abuse or take other actions. More information on how to implement and use this tag can be found at https://schema.org/ClaimReview
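For illustration, here is a sketch that emits ClaimReview markup as JSON-LD; the claim, rating and organization values are hypothetical, and https://schema.org/ClaimReview (linked above) remains the authoritative reference for the properties:

import json

claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "datePublished": "2017-09-14",
    "url": "https://example.com/fact-checks/sample-claim",
    "claimReviewed": "A sample claim under review",
    "author": {"@type": "Organization", "name": "Example Fact Checker"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 2,
        "bestRating": 5,
        "alternateName": "Mostly false",
    },
}

print('<script type="application/ld+json">')
print(json.dumps(claim_review, indent=2))
print("</script>")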

Bing refines its copyright removals process

In a continuous effort to enhance the Bing user experience and promote the expression of free speech while simultaneously upholding the rights of intellectual property and copyright holders, Bing has streamlined the copyright removals process. We heard your feedback and understand that sometimes websites that experience alleged copyright infringement issues have a difficult time gaining visibility into the problem and getting relisted. Webmasters now have the ability to see which pages on their site have been impacted by copyright removal notices and to appeal those decisions.

Enhanced visibility

This new feature provides webmasters with more visibility into how DMCA takedowns impact their site and gives them the opportunity to either address the infringement allegation or remove the offending material. All requests will be evaluated in a new appeals process.

More information

For more information on Bing's copyright infringement policies and how Bing delivers search results, visit Bing's copyright infringement policies. Bing also provides full transparency of takedown requests in a bi-annual Content Removal Requests Report with associated FAQs; access the latest version here: Bing Content Removal Requests Report.

-The Bing Webmaster Tools Team

Increasing the size limit of Sitemaps file to address evolving webmaster needs

For years, the sitemaps protocol defined at www.sitemaps.org has stated that each sitemap file (or each sitemap index file) may contain no more than 50,000 URLs and must not be larger than 10 MB (10,485,760 bytes). While most sitemaps are under this 10 MB file limit, these days our systems occasionally encounter sitemaps exceeding it. Most often this is caused by sitemap files listing very long URLs, or attributes listing long extra URLs (such as alternate language URLs, image URLs, etc.), which inflates the size of the sitemap file.

To address these evolving needs, we are happy to announce that we are increasing the sitemap file and sitemap index file size limit from 10 MB to 50 MB (52,428,800 bytes). Webmasters can still compress their sitemap files using gzip to reduce file size based on bandwidth requirements; however, each sitemap file, once uncompressed, must still be no larger than 50 MB. The updated file size limit is now reflected in the sitemap protocol on www.sitemaps.org. This update only impacts the file size; each sitemap file and index file still cannot exceed the maximum of 50,000 URLs (<loc> entries).

Having sitemaps updated daily, listing all and only relevant URLs, is key to webmaster SEO success. To help you, please find our most recent sitemaps blog posts:

- Sitemaps Best Practices Including Large Web Sites
- Sitemaps – 4 Basics to Get You Started

Fabrice Canel
Principal Program Manager
Microsoft - Bing
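For illustration, here is a small sketch (an assumed helper, not an official tool) that checks a local sitemap file against the limits described above, at most 50,000 <loc> entries and at most 50 MB uncompressed:

import gzip
import os
import xml.etree.ElementTree as ET

MAX_BYTES = 52_428_800  # 50 MB, applied to the uncompressed file
MAX_URLS = 50_000       # maximum <loc> entries per sitemap or sitemap index file

def check_sitemap(path):
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rb") as f:
        data = f.read()
    # Count <loc> elements regardless of the sitemap namespace.
    root = ET.fromstring(data)
    url_count = sum(1 for el in root.iter() if el.tag == "loc" or el.tag.endswith("}loc"))
    return len(data) <= MAX_BYTES and url_count <= MAX_URLS

if __name__ == "__main__":
    if os.path.exists("sitemap.xml.gz"):
        print(check_sitemap("sitemap.xml.gz"))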

Bing Enhances the Copyright Infringement Claims Process with New, Easy-to-Use Dashboard

Lack of communication is a leading cause of divorce. Communication is vital, and sharing the status of a copyright infringement notice is no exception, which is why Bing just made this easier. A new online dashboard provides insight into the status of a copyright removal request, as well as overall historical submission statistics. This dashboard is now available for users who submit DMCA notices via our online form or API.

How Bing Receives Copyright Notices

Bing typically receives requests to remove links due to copyright infringement claims, also known as DMCA notices, through three different channels: email, an online form, and, for certain users, an API. Email is the least efficient and is prone to errors such as missing or incomplete information. When submitters leverage the online form or the API, they decrease the chance of rejection due to incomplete or incorrect information. Bing's online form solves the problems of email submissions by providing submitters a fill-in form with guides for all of the required information; most submitters are recommended to use the online form. After hitting the submit button, an email will arrive with a submission reference number, for example, 604ab644-2a38-4bbc-a839-2034471731c1. Individual submissions such as this, as well as overall historical statistics, are viewable through the dashboard. For rights owners who submit high volumes of DMCA notices, Bing's API program is the most efficient method for requesting link removals due to copyright infringement. The API program is reserved for frequent submitters with a demonstrated history of valid submissions.

Submitter Dashboard

The dashboard's top table shows submission statistics for all notices received from a copyright owner or their authorized agent. A submission is accepted if it contains all of the information required by the DMCA. That does not mean, however, that the (alleged) infringing URLs specified within the notice are automatically removed. Submissions in the pending state indicate that Bing is currently processing the notice. The next table shows statistics for all alleged infringing URLs within all notices sent by a copyright owner; it depicts the overall number of URLs accepted, rejected or still being processed. The final table shows the status of individual submissions and the current status of the URLs contained within each submission. Clicking on an individual submission ID will display the details for that specific submission.

In Conclusion

Bing wants to ensure that copyright owners send valid DMCA notices and that those notices are acted upon promptly. The online form and API help accomplish this. Having insight into the status of these notices helps copyright owners stay better informed and, in turn, promotes the use of such tools to help Bing respond in an expeditious manner.

Chad Foster
Bing Program Manager
