Bing's Webmaster Blog

bingbot Series: Introducing Batch mode for Adaptive URL submission API

We launched the Adaptive URL submission capability, which allows webmasters to submit up to 10,000 URLs using the online API or through the Bing Webmaster portal (Submit URLs option). Since the launch we have received multiple requests from webmasters for the ability to submit URLs in batches. As we actively listen to webmasters and their needs, we are delighted to announce the Batch mode capability for the Adaptive URL Submission API, which allows webmasters and site managers to submit URLs in batches, saving them the excessive API calls made when submitting URLs individually.

The Batch URL Submission API is very similar to the individual URL Submission API (see that blog post), so integrating the Batch API is easy and follows the same steps.

Example requests for the Batch URL Submission API for the supported protocols are shown below.

JSON request sample:

POST /webmaster/api.svc/json/SubmitUrlbatch?apikey=sampleapikeyEDECC1EA4AE341CC8B6 HTTP/1.1
Content-Type: application/json; charset=utf-8
Host: 

{
  "siteUrl": "",
  "urlList": [
    "",
    "",
    ""
  ]
}

XML request sample:

POST /webmaster/api.svc/pox/SubmitUrlBatch?apikey=sampleapikeyEDECC1EA4AE341CC8B6 HTTP/1.1
Content-Type: application/xml; charset=utf-8
Host: 

<SubmitUrlBatch xmlns="">
  <siteUrl></siteUrl>
  <urlList>
    <string xmlns=""></string>
    <string xmlns=""></string>
    <string xmlns=""></string>
  </urlList>
</SubmitUrlBatch>

You will get an HTTP 200 response on successful submission of the URLs. The URLs will then be checked for compliance with the Bing Webmaster Guidelines and, if they pass, will be crawled and indexed in minutes.

Please refer to the documentation for generating the API key and to the Batch URL Submission API documentation for more details. Do note that the maximum supported batch size in this API is 500 URLs per request. The total limit on the number of URLs submitted per day still applies.
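As an illustration of the JSON sample above, the request can be assembled with Python's standard library. The host, site URL and page URLs below are placeholders (the samples above leave them blank), and the 500-URL cap from the post is enforced client-side:

```python
import json
import urllib.request

def build_batch_submission(host, api_key, site_url, urls):
    """Build (but do not send) a JSON SubmitUrlbatch request.

    host, api_key, site_url and urls are placeholders -- use your own
    verified site and the API host from the Bing Webmaster documentation.
    """
    if len(urls) > 500:
        raise ValueError("the batch API accepts at most 500 URLs per request")
    body = json.dumps({"siteUrl": site_url, "urlList": urls}).encode("utf-8")
    return urllib.request.Request(
        f"{host}/webmaster/api.svc/json/SubmitUrlbatch?apikey={api_key}",
        data=body,
        headers={"Content-Type": "application/json; charset=utf-8"},
        method="POST",
    )

req = build_batch_submission(
    "https://example-bing-host",            # placeholder host
    "sampleapikeyEDECC1EA4AE341CC8B6",      # sample key from the post
    "https://www.example.com",
    ["https://www.example.com/page1", "https://www.example.com/page2"],
)
# Sending with urllib.request.urlopen(req) should return HTTP 200 on success.
```

One batched POST like this replaces hundreds of individual SubmitUrl calls, which is the point of the Batch mode.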
So, integrate the APIs today to get your content indexed in real time by Bing, and let us know what you think of this capability. Please reach out to us if you face any issues while integrating.

Thanks!
Bing Webmaster Tools Team

bingbot Series: Easy set-up guide for Bing’s Adaptive URL submission API

In February, we announced the launch of the adaptive URL submission capability. As called out during the launch, as an SEO manager or website owner you do not need to wait for the crawler to discover new links; you can submit those links automatically to Bing to get your content indexed as soon as it is published! Who in SEO didn't dream of that? In the last few months we have seen rapid adoption of this capability, with thousands of websites submitting millions of URLs and getting them indexed on Bing instantly.

At the same time, a few webmasters have asked for guidance on integrating the adaptive URL submission API. This blog shows how easy it is to set up the adaptive URL submission API.

Step 1: Generate an API key

Webmasters need an API key to be able to access and use Bing Webmaster APIs. This API key can be generated from Bing Webmaster Tools by following these steps:

Sign in to your account on Bing Webmaster Tools. In case you do not already have a Bing Webmaster account, sign up today using any Microsoft, Google or Facebook ID.
Add & verify the site that you want to submit URLs for through the API, if not already done.
Select and open any verified site through the My Sites page on Bing Webmaster Tools and click on Webmaster API on the left-hand navigation menu.
If you are generating the API key for the first time, click Generate to create an API key. Otherwise you will see the previously generated key.

Note: Only one API key can be generated per user. You can change your API key anytime; the change is picked up by the system within 30 minutes.

Step 2: Integrate with your website

You can use any of the below protocols to easily integrate the Submit URL API into your system.

JSON request sample:

POST /webmaster/api.svc/json/SubmitUrl?apikey=sampleapikeyEDECC1EA4AE341CC8B6 HTTP/1.1
Content-Type: application/json; charset=utf-8
Host: 

{
  "siteUrl": "http:\/\/",
  "url": "http:\/\/\/url1.html"
}

XML request sample:

POST /webmaster/api.svc/pox/SubmitUrl?apikey=sampleapikey341CC57365E075EBC8B6 HTTP/1.1
Content-Type: application/xml; charset=utf-8
Host: 

<SubmitUrl xmlns="">
  <siteUrl></siteUrl>
  <url></url>
</SubmitUrl>

If the URL submission is successful, you will receive an HTTP 200 response. This ensures that your pages will be discovered for indexing, and if the Bing Webmaster Guidelines are met, the pages will be crawled and indexed in real time. Using any of the above methods you can directly and automatically let Bing know whenever new links are created on your website. We encourage you to integrate such a solution into your web content management system to let Bing auto-discover your new content at publication time.

In case you face any challenges during the integration, you can reach out to raise a service ticket. Feel free to contact us if your website requires more than 10,000 URLs submitted per day. We will adjust as needed.

Thanks!
Bing Webmaster Tools team

bingbot Series: Get your content indexed fast by now submitting up to 10,000 URLs per day to Bing

Today, we are excited to announce a significant increase in the number of URLs webmasters can submit to Bing to get their content crawled and indexed immediately. We believe that this change will trigger a fundamental shift in the way search engines such as Bing retrieve and are notified of new and updated content across the web. Instead of Bing frequently monitoring RSS and similar feeds or crawling websites to check for new pages, discover content changes and/or new outbound links, websites will notify Bing directly about relevant URLs changing on their website. This means that eventually search engines can reduce how often they crawl sites to detect changes and refresh the indexed content.

For many years, Bing has offered all webmasters the ability to submit their site URLs through the Bing Webmaster Tools portal as well as the Bing Webmaster Tools API for immediate crawl and indexation. Until today, this feature was throttled for all sites to a maximum of 10 URLs per day and a maximum of 50 URLs per month. Today we are releasing the Adaptive URL submission feature that increases the daily quota by 1,000x, allowing you to submit up to 10,000 URLs per day, with no monthly quotas. The daily quota per site will be determined based on the site's verified age in Bing Webmaster Tools, site impressions and other signals available to Bing. Today the logic is as follows, and we will tweak it as needed based on the usage and behavior we observe.

Key things to note:

Site verified age is one of the signals, but not the only signal, used to determine the daily URL quota per site.
Webmasters can see the revised limit for their site on the Bing Webmaster Tools portal (Submit URLs option) or by using the Get URL submission quota API.
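A minimal sketch of calling the quota API just mentioned, assuming the JSON endpoint name GetUrlSubmissionQuota and the siteUrl/apikey query parameters from the Bing Webmaster API documentation; the host, key and site URL below are placeholders:

```python
import json
import urllib.parse
import urllib.request

def build_quota_request(host, api_key, site_url):
    """Build a GET request for the URL submission quota of a verified site.

    host, api_key and site_url are placeholders -- substitute values from
    your own Bing Webmaster Tools account and the current API docs.
    """
    query = urllib.parse.urlencode({"siteUrl": site_url, "apikey": api_key})
    url = f"{host}/webmaster/api.svc/json/GetUrlSubmissionQuota?{query}"
    return urllib.request.Request(url, method="GET")

req = build_quota_request(
    "https://example-bing-host",
    "sampleapikeyEDECC1EA4AE341CC8B6",
    "https://www.example.com",
)
# To actually fetch the quota you would do something like:
# with urllib.request.urlopen(req) as resp:
#     quota = json.load(resp)  # response shape per the API documentation
```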
In the Bing Webmaster Tools portal, under the "Submit URLs" option, webmasters will see a maximum of the 1,000 most recently submitted URLs, even though the permissible quota per day could be greater than 1,000.
As per existing functionality, the "Submit URL" option is not applicable for sub-domain-level sites. If there are sites mapped to a sub-domain, quota is allocated at the domain level, not the sub-domain level.

So, log in to Bing Webmaster Tools or integrate the Bing Webmaster APIs in your content management system now to benefit from the increase, and contact us with feedback. Feel free to contact us also if your website requires more than 10,000 URLs submitted per day. We will adjust as needed.

Thanks!
Bing Webmaster Tools team

Introducing Clarity, a web analytics product for webmasters

Today, we are announcing the beta release of Clarity, a web analytics product which enables website developers to understand user behavior at scale. Web developers face many challenges in building compelling and user-friendly websites. Understanding why users struggle, where they run into issues, or why they abandon a website is difficult. When making updates to a web experience, A/B experimentation helps developers decide which way to go. While A/B experiments allow developers to see when their key metrics are moving, the primary drawback is the lack of visibility into why the metrics moved in any given direction. This gap in understanding user behavior led us to build Clarity.

Session Replay

The session replay capability of Clarity allows web developers to view a user's page impression to understand user interactions such as mouse movements, touch gestures, click events and much more. Being able to replay user sessions allows web developers to empathize with users and understand their pain points.

Clarity Case Studies

Bing uses Clarity to detect poor user experience due to malware

In the Wild West of devices and browsers, the experience you think you are serving and what your users see can be completely different: devices, plugins, proxies and networks can all change and degrade the quality of your experience. These problems are expensive to diagnose and fix. (The first image shows the Bing page with malware, while the second shows the default Bing experience after removal of the malware.) Clarity has been used by the Bing UX team at Microsoft to delve into sessions that had negative customer satisfaction and determine what went wrong. In some cases, engineers were able to detect pages that had multiple overlays of advertisements and looked nothing like the expected experience for customers.
Looking through the layout anomalies and network request logs that Clarity provides, Bing developers were able to diagnose the cause: malware installed on the end user's machine was hijacking the page and inserting bad content. With the knowledge of what was causing these negative experiences, Bing engineers were able to design and implement changes which defended the Bing page. By doing so they increased revenue while decreasing page load time, all while giving their customers a significantly improved experience.

Cook with Manali uses Clarity to improve user engagement

Cook with Manali is a food blog, and like many other blogs dedicated to cooking, its posts begin with a story about the inspiration behind the recipe. Posts have detailed instructions to prepare the meal, high-quality photographs of the finished dish and potentially step-by-step pictures to help explain the more complicated parts. Near the bottom of the page is a shorthand recipe card summarizing ingredients, instructions and nutritional information. While this long post format enables food bloggers to emotionally connect with their readers and preemptively address any complication in the recipe, some readers would rather get straight to the recipe. When the Cook with Manali team started using Clarity, they were able to investigate real user sessions and realized that almost thirty percent of users were abandoning these recipe pages before reaching the bottom, which contains important information about the recipe. In many cases, it seemed that users felt they had to scroll too far to get to the recipe they really cared about and lost patience before making it far enough down the page. The developers realized their strategy was backfiring and creating a bad experience for some of their users, prompting them to add a "Jump to Recipe" button at the top of these pages. With the new button deployed, the team was able to see traffic going up and abandonment going down.
When they dug into the new session replays, they were able to see users utilizing the new button and getting directly to the content they cared about. They saw abandonment for these pages drop to roughly ten percent, signaling a significant increase in user satisfaction. Interestingly, many users now use the "Jump to Recipe" button and then scroll back up to read the larger story afterwards.

How does Clarity work?

Clarity works on any HTML webpage (desktop or mobile) after adding a small piece of JavaScript to the website. This JavaScript code listens to browser events and instruments layout changes, network requests and user interactions. The instrumentation data is then uploaded and stored in the Clarity server running on Microsoft Azure.

Other capabilities coming soon

Interesting sessions are automatically bubbled up based on Clarity's AI and machine learning capabilities to help web developers review user sessions with abnormal click or scroll behavior, session length, JavaScript errors and more. Web developers can spend less time and gain more insight into their users by focusing on the sessions that Clarity marks as most relevant.

Related sessions are groupings of similar sessions recommended based on a single session. This feature allows web developers to quickly understand the scope of a specific user behavior and find other occurrences for the same user as well as other users.

Heatmaps provide a view into user behavior at an aggregate level through click/touch and scroll heatmaps. Click/touch heatmaps show the distribution of user interactions across a webpage, and scroll heatmaps show how far users scroll on your webpage.

How do I get started?

Sign up at the Clarity website using your Microsoft account! (In case you don't have one, you can sign up here.) When you create a new project, it will be added to the waitlist.
A notification will be sent when your project is approved for onboarding, and you can log in to Clarity to retrieve the uniquely generated JS code for your project. Once you have added the code to your website, you can use the Clarity dashboard to start replaying user sessions and gaining insights.

Please reach out to us if you have any questions.

Contributing to Clarity

The Clarity team has also open sourced on GitHub the JavaScript library which instruments pages to help understand user behavior on websites. As Clarity is in active development with continuous improvements, join our community and contribute. Getting started is easy: just visit GitHub and read through our README.

Thank you,

bingbot Series: Getting the most out of Bingbot via Bing Webmaster Tools

There are multiple features in Bing Webmaster Tools that allow webmasters to check Bingbot's performance and issues on their site, provide input to Bingbot's crawl schedules, and check whether that random bot hitting their pages frequently is actually Bingbot. In part 4 of our bingbot series, Nikunj Daga, Program Manager for Bing Webmaster Tools, revisits some of the tools and features that assist webmasters in troubleshooting and optimizing Bingbot's performance on their site.

Crawl Information - Webmasters can get data about Bingbot's performance on their site in the Reports and Data section of Bing Webmaster Tools. The site activity chart on this page can show an overlapping view of total pages indexed, pages crawled and pages with crawl errors for the last six months, along with impressions and clicks data. Through this chart, webmasters can visually see whether the changes they made on their sites had any impact on page crawling. Further, to get more information on the pages with crawl errors, webmasters can go to the Crawl Information page. This page provides an aggregated count of pages for each error type Bingbot encountered, along with the list of affected URLs. This makes it simple for webmasters to troubleshoot why a particular page they are looking for is not appearing in Bing.

Crawl Errors - In addition to webmasters checking the Crawl Information page for crawl errors, Bing Webmaster Tools also proactively notifies webmasters in case Bingbot faces a significant number of issues while crawling the site. These notifications are sent to the Message Center in Bing Webmaster Tools. These alerts can also be sent through email to users who do not visit Webmaster Tools on a regular basis, so it is not a bad idea for webmasters to opt in to email communication from Bing Webmaster Tools through the Profile page.
Webmasters can set preferences for the kinds of alerts they want to receive emails for, along with the preferred contact frequency. Further, Bingbot can face different kinds of errors while crawling a site; a detailed list, along with descriptions and recommended actions, can be found here.

Crawl Control - The Crawl Control feature allows webmasters to give Bingbot input about the crawl speed and timing for their site. It can be found under the "Configure My Site" section in Bing Webmaster Tools. Using this feature, you can set an hourly crawl rate for your site and tell Bingbot to crawl slowly during peak business hours and faster during off-peak hours. There are preset schedules to choose from, based on the most common business hours followed across the globe. In addition to the preset schedules, webmasters also have the option to fully customize the crawl schedule based on their site's traffic pattern. Customizing the crawl pattern is easy and can be done by just dragging and clicking on the graph in the Crawl Control feature.

Fetch as Bingbot - The Fetch as Bingbot tool returns the code that Bingbot sees when it crawls the page. Webmasters can find this feature under the "Diagnostics and Tools" section and submit a request to fetch as Bingbot. Once the fetch is completed, the status changes from "Pending" to "Completed" and webmasters can see the code that Bingbot receives when it tries to crawl the site. This is a useful feature for webmasters who use dynamic content on their sites and a basic check to see what data Bingbot sees among all the dynamic and static content on the site.

Verify Bingbot - Found under the "Diagnostics and Tools" section in Bing Webmaster Tools, the Verify Bingbot tool lets webmasters check whether a Bing user agent string appearing in their server logs is actually from Bing or not.
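For server-side checks, Bing's published guidance for verifying its crawler is a reverse-then-forward DNS lookup (genuine bingbot IPs resolve to hostnames under search.msn.com). A minimal sketch of that check, with the resolver functions injectable for testing; this is an illustration of the technique, not the Verify Bingbot tool itself:

```python
import socket

def is_verified_bingbot(ip, reverse_dns=None, forward_dns=None):
    """Return True if `ip` passes the reverse/forward DNS check for bingbot.

    reverse_dns and forward_dns default to the standard socket lookups and
    can be swapped out for testing or caching.
    """
    reverse_dns = reverse_dns or (lambda addr: socket.gethostbyaddr(addr)[0])
    forward_dns = forward_dns or socket.gethostbyname
    try:
        hostname = reverse_dns(ip)
        # Genuine bingbot hosts live under search.msn.com.
        if not hostname.endswith(".search.msn.com"):
            return False
        # Forward-confirm: the hostname must resolve back to the same IP.
        return forward_dns(hostname) == ip
    except OSError:
        # Lookup failures mean the caller cannot be verified.
        return False
```

Checking both directions matters: anyone can point a PTR record at search.msn.com, but only Microsoft controls the forward records under that domain.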
Such a check can help webmasters determine whether someone is hiding their true identity and attacking the site using Bing's name. It also helps webmasters who have manually configured IP whitelisting for Bingbot on their server. Since Bing does not release its list of IPs, webmasters can use this tool to check whether the IPs allowed on the server belong to Bing and whether they are whitelisting the right set of IPs.

Thus, a lot can be done by webmasters to improve Bingbot's performance on their site using the features in Bing Webmaster Tools. These features were developed and have evolved over the years based on feedback we receive from the webmaster community. So, log in to Bing Webmaster Tools now to use the features and let us know what you think.

Thanks!
Nikunj Daga
Program Manager, Bing Webmaster Tools

bingbot Series: JavaScript, Dynamic Rendering, and Cloaking. Oh My!

Last week, we posted the second blog of our bingbot series: Optimizing Crawl Frequency. Today is Halloween, and like every day, our crawler (also known as a "spider") is wandering outside, browsing the world wide web, following links, seeking to efficiently discover, index and refresh the best web content for our Bing users.

Occasionally, bingbot encounters websites relying on JavaScript to render their content. Some of these sites link to many JavaScript files that need to be downloaded from the web server. In this setup, instead of making only one HTTP request per page, bingbot has to make several requests. Some sites are spider traps, with dozens of HTTP calls required to render each page! Yikes. That's not optimal, now is it?

As we shared last week at SMX East, bingbot is generally able to render JavaScript. However, bingbot does not necessarily support all the JavaScript frameworks supported in the latest version of your favorite modern browser. Like other search engine crawlers, it is difficult for bingbot to process JavaScript at scale on every page of every website while minimizing the number of HTTP requests at the same time.

Therefore, in order to increase the predictability of crawling and indexing by Bing, we recommend dynamic rendering as a great alternative for websites relying heavily on JavaScript. Dynamic rendering is about detecting the user agent and rendering content differently for humans and search engine crawlers. We encourage such sites to detect our bingbot user agent, prerender the content on the server side and output static HTML, helping us minimize the number of HTTP requests and ensuring we get the best and most complete version of your web pages every time bingbot visits your site.

Is using JavaScript for dynamic rendering considered cloaking?

When it comes to rendering content specifically for search engine crawlers, we inevitably get asked whether this is considered cloaking...
and there is nothing scarier for the SEO community than getting penalized for cloaking, even on Halloween! The good news is that as long as you make a good-faith effort to return the same content to all visitors, with the only difference being that the content is rendered on the server for bots and on the client for real users, this is acceptable and not considered cloaking. So if your site relies heavily on JavaScript and you want to improve your crawling and indexing on Bing, look into dynamic rendering: you will certainly benefit immensely, receiving only treats and no tricks!

Happy Halloween!
Fabrice Canel and Frédéric Dubut
Program Managers
Microsoft - Bing

bingbot Series: Optimizing Crawl Frequency

Last week, we posted the first blog of our bingbot series, Maximizing Crawl Efficiency, highlighting the main goal for bingbot and its core metric: crawl efficiency.

In part 2 of our bingbot series, Cheng Lu, Principal Software Engineering Manager of the crawl team, shares one example of how we've optimized our processes to maximize crawl efficiency for large websites whose content remains static, or unchanging.

Keeping indexed content current while limiting crawling of content that has not changed

When most people conduct a search, they are typically looking for the most recently published content; however, the search engine results may link to webpages published anywhere from days to years ago. This is a challenge, especially when searchers want to keep up with breaking news and the latest trends by accessing the most up-to-date content online. The internet and the search index are full of ghosts of yester-years, often resurrected by the power of search engines. For instance, I was able to retrieve the Microsoft 1996 annual report. Interesting, yes, especially if I need to write a historical report, but if I'm looking for the current annual investment report, it is not so useful. The crawler also needs to have discovered, crawled and indexed the latest Microsoft annual report in order for me to find it when I search. The challenge for bingbot is that it can't fetch a web page just once. Once a page is published, the search engine must fetch the page regularly to verify that the content has not been updated and that the page is not a dead link. Defining when to fetch the web page next is the hard problem we are looking to optimize, with your help.

Case study: Cornell University Library, a great source of knowledge with many static, unchanging web pages

One challenge we are trying to address is how often bingbot should crawl a site to fetch its content. The answer depends on the frequency with which the content is edited and updated.
The Cornell University Library empowers Cornell's research and learning community with deep expertise, innovative services, and outstanding collections strengthened by strategic partnerships. Their website is a mine of relevant information, containing millions of web pages on a range of topics, from Physics to Economics. Not only do they have millions of webpages and PDF files related to computer science, they even have content related to crawling and indexing websites.

Identifying patterns that allow bingbot to reduce crawl frequency

While new web pages may be posted daily, and some pages are updated on a regular basis, most of the content within the Cornell University Library is not edited for months or even years. The content is by and large static and unedited. By unedited, I mean that the HTML may change a little (for example, a {copyright 2018} notice will become {copyright 2019} on January 1st, and the CSS and style sheet may change a little); however, such changes are not relevant for updating the indexed content within Bing. The content of the page is still the same. Additionally, only a few articles are deleted every year. Their library index grows with new and updated research, without substantially changing the content of the historically indexed research. Reviewing our crawling data, we discovered that bingbot was over-crawling this content, and we were using more resources than necessary to check and re-check that the historical pages remained static. What we learned was that we could optimize our system to avoid fetching the same content over and over, and instead check periodically for major changes. This resulted in about a 40% crawl saving on this site! Our work identifying patterns in largely static content revealed an opportunity to reduce crawling for this "class" of website (slowly and rarely changing content), and in the following posts we'll share more learnings.
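The scheduling idea described above, backing off on pages that keep coming back unchanged and tightening up when they change, can be sketched with a simple multiplicative rule. This is an illustration of the concept only, not Bing's actual scheduling algorithm, and the interval bounds are made-up assumptions:

```python
import hashlib

def content_fingerprint(html):
    """Hash the page body. A real system would first strip boilerplate
    (copyright years, CSS tweaks) that doesn't affect indexed content."""
    return hashlib.sha256(html.encode("utf-8")).hexdigest()

def next_crawl_interval_days(current_interval, old_fp, new_fp,
                             min_days=1, max_days=180):
    """Double the recrawl interval when a page is unchanged, halve it when
    it changed. The 1..180-day bounds are illustrative assumptions."""
    if old_fp == new_fp:
        return min(current_interval * 2, max_days)
    return max(current_interval // 2, min_days)
```

Run against a mostly static archive, a rule like this quickly pushes unchanged pages toward the maximum interval, which is the kind of crawl saving described above.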
Our work on improving crawler efficiency is not done yet, and we have a lot of opportunity ahead of us to continue to improve our crawler's efficiency and abilities across the hundreds of different types of data used to evaluate our crawler-scheduling algorithms. The next step is to continue to identify patterns that apply to a multitude of websites, so we can scale our efforts and crawl more efficiently everywhere.

Stay tuned! Next in this series of posts related to bingbot and our crawler, we'll provide visibility into the main criteria involved in defining bingbot's crawl quota and crawl frequency per site. I hope you are still looking forward to learning more about how we improve crawl efficiency, and as always, we look forward to seeing your comments and feedback.

Thanks!
Cheng Lu
Principal Software Engineering Manager
Microsoft - Bing
Fabrice Canel
Principal Program Manager
Microsoft - Bing

bingbot Series: Maximizing Crawl Efficiency

At the SMX Advanced conference in June, I announced that over the next 18 months my team will focus on improving our Bing crawler, bingbot. I asked the audience to share data to help us optimize our plans. First, I want to say "thank you" to those of you who responded and provided us with great insights. Please keep them coming!

To keep you informed of the work we've done so far, we are starting this series of blog posts related to our crawler, bingbot. In this series we will share best practices, demonstrate improvements, and unveil new crawler abilities. Before drilling into details about how our team is continuing to improve the crawler, let me explain why we need bingbot and how we measure its success.

First things first: what is the goal of bingbot?

Bingbot is Bing's crawler, sometimes also referred to as a "spider". Crawling is the process by which bingbot discovers new and updated documents or content to be added to Bing's searchable index. Its primary goal is to maintain a comprehensive index updated with fresh content. Bingbot uses an algorithm to determine which sites to crawl, how often, and how many pages to fetch from each site. The goal is to minimize bingbot's crawl footprint on your websites while ensuring that the freshest content is available. How do we do that? The algorithmic process selects URLs to be crawled by prioritizing relevant known URLs that may not be indexed yet, and already-indexed URLs that we check for updates to ensure the content is still valid (for example, not a dead link) and has not changed. We also crawl content specifically to discover links to new URLs that have yet to be discovered. Sitemaps and RSS/Atom feeds are examples of URLs fetched primarily to discover new links.

Measuring bingbot success: maximizing crawl efficiency

Bingbot crawls billions of URLs every day.
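Crawl efficiency, the share of fetches that actually turn up new or freshly changed content, can be illustrated with a toy calculation. This is a simplification for illustration, not Bing's internal formula, and the URLs are hypothetical:

```python
def crawl_efficiency(fetches):
    """fetches: list of (url, useful) pairs, where `useful` means the fetch
    found new or changed content. Returns the fraction of useful fetches."""
    if not fetches:
        return 0.0
    useful = sum(1 for _, u in fetches if u)
    return useful / len(fetches)

log = [
    ("/new-article", True),    # newly discovered URL
    ("/home", False),          # unchanged since last crawl
    ("/updated-post", True),   # fresh on-page content
    ("/about", False),         # unchanged since last crawl
]
```

Every duplicated or unchanged fetch drives the metric down, which is why this series focuses on skipping fetches that are unlikely to find anything new.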
It's a hard task to do this at scale, globally, while satisfying all webmasters, websites and content management systems, while handling site downtimes, and while ensuring that we aren't crawling too frequently. We've heard concerns that bingbot doesn't crawl frequently enough and content isn't fresh within the index; at the same time we've heard that bingbot crawls too often, putting constraints on websites' resources. It's an engineering problem that hasn't fully been solved yet. Often, the issue is in managing the frequency with which bingbot needs to crawl a site to ensure new and updated content is included in the search index. Some webmasters request to have their sites crawled daily by bingbot to ensure that Bing has the freshest version of their site in the index, whereas the majority of webmasters would prefer bingbot to crawl their site only when new URLs have been added or content has been updated and changed. The challenge we face is how to model the bingbot algorithms based on what a webmaster wants for their specific site and the frequency with which content is added or updated, and how to do this at scale. To measure how smart our crawler is, we measure bingbot's crawl efficiency. Crawl efficiency is how often we crawl and discover new and fresh content per page crawled. Our crawl-efficiency north star is to crawl a URL only when the content has been added (URL not crawled before) or updated (fresh on-page content or useful outbound links). The more we crawl duplicated, unchanged content, the lower our crawl efficiency metric is. Later this month, Cheng Lu, our engineering lead for the crawler team, will continue this series of blog posts by sharing examples of how crawl efficiency has improved over the last few months. I hope you are looking forward to learning more about how we improve crawl efficiency, and as always, we look forward to seeing your comments and feedback. Thanks!
Fabrice Canel Principal Program Manager, Webmaster Tools Microsoft - Bing  

Introducing Bing AMP viewer and Bing AMP cache

In 2016, Bing joined the Accelerated Mobile Pages (AMP for short) open-source effort to help you "find" and "do" searches faster, regardless of where you are and on any device. Today, we are pleased to announce the release of the Bing AMP viewer and Bing AMP Cache, enabling AMP-enabled web pages to work directly from Bing's mobile search results and allowing Bing to provide faster mobile experiences to Bing users. On Monday the 17th we started the first phase of the global rollout of the AMP viewer and AMP carousel in the United States, for the news carousel. We will continue the phased rollout to more websites, more countries and regions, and other links in the search results pages. If you are in the United States, try it out on your mobile device by going to Bing, searching for news-related queries and tapping the search results labelled with the AMP icon.

Advice for AMP webmasters and AMP advertisers

The AMP protocol offers the ability to cache and serve cached copies of AMP content published on the web, providing faster user experiences on Bing. In order to enable your published AMP content within Bing, you need to allow Bingbot (our crawler) to fetch AMP content and allow cross-origin resource sharing (CORS) for the Bing AMP cache domain. Most AMP-enabled sites and advertisers have already authorized CORS sharing for the AMP project's cache domain, but now need to add the Bing AMP cache domain to the allowed list as well.

Thank you,
Fabrice Canel
Principal Program Manager
Microsoft Bing

Anonymous URL Submission Tool Being Retired

Saying goodbye is never easy, but the time has come to announce the withdrawal of anonymous (non-signed-in) support for Bing's URL submission tool. Webmasters will still be able to log in and access the Submit URL tool in Bing Webmaster Tools, and this is easier than ever as the tool now supports Google and Facebook authentication in addition to existing Microsoft accounts. Why say goodbye? The URLs received anonymously were of far too low quality to be trusted, and webmasters prefer having more ownership of the URLs submitted for their site. To use the tool, webmasters just need to log in, add and verify their site, and then navigate to the Submit URL tool within the Configure My Site menu options. Webmasters who want to use the Bing Webmaster Tools API have to generate an API key through Bing Webmaster Tools and follow the guidelines for its usage here. If you haven't signed up for the tool yet, please click here to sign up. Thank you, The Bing Webmaster Tools Team
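For illustration, a sketch of assembling an API submission request following the JSON request shape documented for Bing's URL Submission API; the host, key, site, and URLs here are all placeholder values supplied by the caller:

```python
import json
from urllib.parse import urlencode

def build_submission(host, api_key, site_url, urls):
    """Assemble the endpoint and JSON body for a URL submission call.
    Mirrors the documented request shape; nothing here is an
    official default -- every value comes from the caller."""
    endpoint = "https://{}/webmaster/api.svc/json/SubmitUrlbatch?{}".format(
        host, urlencode({"apikey": api_key}))
    body = json.dumps({"siteUrl": site_url, "urlList": list(urls)})
    return endpoint, body
```

The returned body would then be POSTed with Content-Type: application/json; charset=utf-8, as in the request samples shown at the top of this blog.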

Introducing JSON-LD Support in Bing Webmaster Tools

Bing is proud to introduce JSON-LD support as part of Bing Webmaster Tools, as announced at SMX Advanced 2018. Users can log in to Bing Webmaster Tools and validate their JSON-LD implementation through the Markup Validator tool in the Diagnostics and Tools section. With the inclusion of JSON-LD support, the Markup Validator now supports seven markup languages, including HTML Microdata, Microformats, Open Graph, and RDFa.

Bing works hard to understand the content of a page, and one of the clues Bing uses is structured data. JSON-LD, or JavaScript Object Notation for Linked Data, is a JSON-based data format that can be used to implement structured data on your site so Bing and other search engines can better understand your content. One of its advantages is that JSON-LD can be implemented without modifying the visible HTML content of your pages: it can be placed in the head, body, or footer of the page. This effectively means that webmasters can design their pages as they like without having to worry about arranging the information for markup implementations. JSON-LD also makes it easy to define links and relationships between the data and entities present on your pages because it supports nested data. However, with the wide array of possibilities that comes with JSON-LD, webmasters should take care not to put invalid or incorrect information in the markup. Remember, even though the markup is not visible on your page, it is still read by the search engines, and putting spam data in the markup can hamper your presence in search results. Let us know what you think of the JSON-LD support. If you don’t have a Webmaster Tools account, you can sign up today. Thank you, The Bing Webmaster Team
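For illustration, a minimal JSON-LD block with hypothetical values; it sits in a script tag, so the visible HTML of the page is untouched:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Corp",
  "url": "https://www.example.com",
  "logo": "https://www.example.com/logo.png"
}
</script>
```

Nested objects (for example, an address object inside the organization) are what make it easy to express relationships between entities.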

Intermittent Webmaster Tools API Issues Resolved

The Bing Webmaster team recently received feedback that our APIs were intermittently failing, and we deeply regret any inconvenience caused by the API failures. We recognize the frustrations that this may have caused. Upon investigation, we discovered a technical glitch that led to the API call failures; it is now resolved. We are very grateful to you, our users, who brought this to our attention, and we thank you for your continued feedback and support.

We're Listening

Bing and Bing Webmaster Tools are actively listening to you, and we value your feedback. It’s important to how we continually improve Bing, and it helps notify us of potential issues. It’s easy to provide feedback: just look for the Feedback button or link at the bottom of each page, in the footer or the lower-right corner. We are using advances in technology to make it easier to quickly find what you are looking for – from answers to life's big questions to an item in an image you want to learn more about. At Bing, our goal is to help you get answers with less effort. We appreciate your feedback, and the more you can send, the more we can use it to improve Bing. Have a suggestion? Tell us! The more feedback the merrier. The Bing Webmaster Tools Team

Introducing Social Login for Bing Webmaster Tools

Bing is actively listening to you, our customers, to improve our user experience and to understand what features you want within Bing Webmaster Tools. One of the top requests we have received is to expand the Webmaster Tools login capabilities to include a social login option. We are excited to begin the first phase of the global roll out of social login to Webmaster Tools on Friday, February 9th for webmasters in the United States. We expect to continue the roll out to webmasters in Europe, the Middle East and Africa, Latin America, and Asia Pacific shortly thereafter. Webmasters will be able to log in to Bing Webmaster Tools using their Facebook and Google accounts in addition to their existing Microsoft account. Additionally, this means that the messages Webmaster Tools may occasionally send you about your managed properties will be sent to the email account associated with the Webmaster Tools account you are logged in with. The most recent messages will still be available in your Webmaster Tools Message Center. If you don’t have a Webmaster Tools account, you can sign up today. Let us know what you think of the new social login option. Tweet to us @Bing. Thank you, The Bing Webmaster Team

Bing adds Fact Check label in SERP to support the ClaimReview markup

Bing is adding a new UX element to the search results, called the “Fact Check” label, to help users find fact-checking information on news, major stories, and webpages within the Bing search results. The label may be used on both news articles and web pages that Bing has determined contain fact-check information, giving users additional information to judge for themselves which information on the internet is trustworthy. The label may be used on a broad category of queries, including news, health, science, and politics. Bing may apply this label to any page that includes ClaimReview markup.

Example of the Fact Check label for a news article in the SERP:

Example of the Fact Check label on a website:

When determining whether you should use this tag for your articles or webpages, consider whether they meet the following criteria, which are characteristics we consider for fact-checking sites:

- The analysis must be transparent about sources and methods, with citations and references to primary sources included.
- Claims and claim checks must be easily identified within the body of fact-check content. Readers should be able to determine and understand what was checked and what conclusions were reached.
- The page hosting the ClaimReview markup must have at least a brief summary of the fact check and the evaluation, if not the full text.

Bing determines whether an article might contain fact checks by looking for the ClaimReview markup. In addition to the ClaimReview markup being present on the page, Bing also looks for sites that follow commonly accepted criteria for fact checks, including those of third-party fact-checking organizations. Please note that we may not show the Fact Check label for all pages that include the ClaimReview schema markup. If we find sites not following the criteria for the ClaimReview markup, we might ignore the markup.
We will consider the reputation of the site as well as other factors to determine if and when the tag should show. Using the ClaimReview tag when appropriate fact checking has not been done is a violation of our webmaster guidelines, and Bing may penalize sites for such abuse or take other actions. More information on how to implement and use this tag can be found at
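For illustration, a minimal ClaimReview block with hypothetical values, of the general shape this markup takes (see the schema.org ClaimReview documentation for the full set of properties):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "ClaimReview",
  "url": "https://factcheck.example.com/claims/123",
  "claimReviewed": "An example claim being checked",
  "author": { "@type": "Organization", "name": "Example Fact Checker" },
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": "1",
    "bestRating": "5",
    "alternateName": "False"
  }
}
</script>
```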

Bing refines its copyright removals process

In a continuous effort to enhance the Bing user experience and promote free expression while upholding the rights of intellectual property and copyright holders, Bing has streamlined the copyright removals process. We heard your feedback and understand that websites facing alleged copyright infringement issues sometimes have a difficult time gaining visibility into the problem and getting relisted. Webmasters now have the ability to see which pages on their site have been impacted by copyright removal notices and to appeal those decisions.

Enhanced visibility

This new feature provides webmasters with more visibility into how DMCA takedowns impact their site and gives them the opportunity to either address the infringement allegation or remove the offending material. All requests will be evaluated in a new appeals process.

More information

For more information on Bing’s copyright infringement policies and how Bing delivers search results, visit Bing's copyright infringement policies. Bing also provides full transparency of takedown requests in a bi-annual Content Removal Requests Report with associated FAQs; access the latest version here: Bing Content Removal Requests Report. -The Bing Webmaster Tools Team

Increasing the size limit of Sitemaps file to address evolving webmaster needs

For years, the sitemaps protocol stated that each sitemap file (or each sitemap index file) may contain no more than 50,000 URLs and must be no larger than 10 MB (10,485,760 bytes). While most sitemaps are under this 10 MB limit, these days our systems occasionally encounter sitemaps exceeding it. Most often this happens when sitemap files list very long URLs, or when they carry attributes listing additional long URLs (alternate-language URLs, image URLs, etc.), which inflates the size of the sitemap file. To address these evolving needs, we are happy to announce that we are increasing the maximum sitemap file and sitemap index file size from 10 MB to 50 MB (52,428,800 bytes). Webmasters can still compress their sitemap files using gzip to reduce bandwidth; however, each sitemap file, once uncompressed, must still be no larger than 50 MB. The updated file size limit is now reflected in the sitemaps protocol. This update only impacts the file size; each sitemap file and index file still cannot exceed the maximum of 50,000 URLs (<loc> elements).

Having sitemaps updated daily, listing all and only relevant URLs, is key to webmaster SEO success. To help you, please find our most recent sitemaps blog posts:

- Sitemaps Best Practices Including Large Web Sites
- Sitemaps – 4 Basics to Get You Started

Fabrice Canel Principal Program Manager Microsoft - Bing
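The two limits above can be checked mechanically. A rough sketch (ours, not an official validator) that tests sitemap content against the 50 MB uncompressed size cap and the 50,000-URL cap:

```python
import gzip

MAX_BYTES = 52_428_800   # 50 MB, the new uncompressed size limit
MAX_URLS = 50_000        # unchanged per-file URL limit

def check_sitemap(data, gzipped=False):
    """Return a list of limit violations for sitemap content (bytes).
    The size limit applies to the *uncompressed* content, so gzipped
    input is decompressed before checking."""
    if gzipped:
        data = gzip.decompress(data)
    problems = []
    if len(data) > MAX_BYTES:
        problems.append("exceeds 50 MB uncompressed")
    if data.count(b"<loc>") > MAX_URLS:
        problems.append("lists more than 50,000 URLs")
    return problems
```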

Bing Enhances the Copyright Infringement Claims Process with New, Easy-to-Use Dashboard

Lack of communication is a leading cause of divorce. Communication is vital, and sharing the status of a copyright infringement notice is no exception. That is why Bing just made this easier: a new online dashboard provides insight into the status of a copyright removal request, as well as overall historical submission statistics. This dashboard is now available to users who submit DMCA notices via our online form or API.

How Bing Receives Copyright Notices

Bing typically receives requests to remove links due to copyright infringement claims, also known as DMCA notices, through three different channels: email, an online form, and, for certain users, an API. Email is the least efficient and is prone to errors such as missing or incomplete information. When submitters use the online form or the API, they decrease the chance of rejection due to incomplete or incorrect information. Bing’s online form solves the problems of email submissions by providing submitters a fill-in form with guides for all of the required information. We recommend that most submitters use the online form. After hitting the submit button, an email will arrive with a submission reference number, for example, 604ab644-2a38-4bbc-a839-2034471731c1. Individual submissions such as this, as well as overall historical statistics, are viewable through the dashboard. For rights owners who submit high volumes of DMCA notices, Bing’s API program is the most efficient method for requesting link removals due to copyright infringement; the API program is reserved for frequent submitters with a demonstrated history of valid submissions.

Submitter Dashboard

The dashboard’s top table shows submission statistics for all notices received from a copyright owner or their authorized agent. A submission is accepted if it contains all of the information required by the DMCA. That does not mean, however, that the (alleged) infringing URLs specified within the notice are automatically removed.
Finally, submissions in the pending state indicate that Bing is currently processing the notice. The next table shows statistics for all alleged infringing URLs within all notices sent by a copyright owner; it depicts the overall number of URLs accepted, rejected, or still being processed. The final table shows the status of individual submissions and the current status of the URLs contained within each submission. Clicking on an individual submission ID will display the details for that specific submission.

In Conclusion

Bing wants to ensure that copyright owners send valid DMCA notices and that those notices are acted upon promptly. The online form and API help accomplish this. Having insight into the status of these notices helps copyright owners stay better informed and, in turn, promotes the use of tools that help Bing respond in an expeditious manner. Chad Foster Bing Program Manager

Warning! Bing now offers enhanced malware warnings!

Malware can be a confusing term. A survey of what “malware” means leads to a slew of incoherent answers. Microsoft uses malware as an umbrella term for the threats listed in the following glossary, which, of course, Bing also uses. Bing has been warning users about malware for a long time, and webmasters receive notifications when a threat is detected on their site. Previously, a generic warning was used to cover all of the different malware threat types. By refining the generic malware warning, Bing now gives more details about the type of threat the user is facing. Furthermore, this improvement enables webmasters to clean their site more quickly by providing stronger insight into why their site was flagged.

Phishing site warnings

The trick to fly fishing is making the fly float through the air as if it were alive. Done right, the hungry trout eyeballing the fly is convinced to take the bait. It is not a coincidence that a criminal activity shares a similar name: phishing. The bait is fake websites designed to look and feel like legitimate ones. These sites catch people by exploiting a user’s trust to capture information such as passwords, usernames, and credit card numbers. Bing has refined the generic warning to specifically call out this threat. When users click a URL suspected of phishing, a warning will appear; it looks similar to the generic warning except that it now warns that the site might steal personal information. Webmasters still get notified through the dashboard, and they can ask for a review after performing the cleanup.

Sites that link to malware

Sites might not always be malicious themselves; however, they might link to malicious binaries. While such a page is safe to load into the browser, there is a hidden bomb waiting to be clicked. (In contrast, some hacked pages cause infections just by being visited.) The generic warning is now refined to specifically call out pages that are likely safe to visit as long as links are not clicked.
This refined warning has a similar look and feel to the generic warning – the biggest change is to the webmaster dashboard page. The webmaster dashboard shows which binaries are causing the warning; removing the harmful links leads to the warning being removed. Clicking (View), under Additional Details, displays the path to the malicious binaries.

Sites with warnings are not always bad actors

We understand that sites with warnings are not always bad actors. Websites are vulnerable to being hacked, and webmasters are vulnerable to being tricked, just like any other customer. By refining our generic malware warning, our hope is that users are better informed and webmasters are able to clean their sites more efficiently. Chad Foster Bing Program Manager

Sitemaps – 4 Basics to Get You Started

1 - Why You Need a Sitemap

If you have a website and want to be recognized in the search engines, then you absolutely need a sitemap. If your site does not have a sitemap, you run a high risk of not being indexed appropriately. Well-managed sitemaps greatly increase Bing’s ability to locate, access, and index all of the relevant URLs on your website. When creating a sitemap, you can refer to our sitemaps best practices article, which is particularly helpful for large sites. The first question we ask when communicating with a website is, “Do you list all your relevant URLs in your sitemap?” Telling us how many URLs you think you have is not enough. Be sure to tell the search engines where your sitemap is located by listing it via robots.txt or via Bing Webmaster Tools to promote getting listed on Bing.

2 - Avoid Stalled Sitemaps

Oftentimes, site owners make aesthetic changes to their sites but forget to update the URLs in their sitemap. Too often, Bing discovers stalled sitemaps that have had the same URLs listed for months – sometimes years – even as the website keeps changing. This is frequently due to broken or unmonitored processes for updating your sitemap, switching to a new Content Management System, or simply forgetting to stop referring to previous sitemaps. As an SEO best practice, you should regularly verify that the sitemaps referred to in your robots.txt and Webmaster Tools are the appropriate ones, and ensure the sitemap content lists only the relevant URLs posted on your site. Your sitemap should ideally be regenerated automatically at least once a day. Complementary to sitemaps, you should also have real-time RSS feeds to tell Bing about all of your fresh URLs, which enables Bing to discover new URLs in a matter of minutes rather than up to 24 hours.

3 – Test Your Sitemap with the “View Source” Feature

Be sure to pay attention to how your URLs are encoded and follow the protocol guidelines.
We see lots of ampersand characters not encoded (invalid XML files) or encoded twice (which looks like &amp;amp;). As a trick, view your sitemap in your favorite browser using the browser’s “View Source” feature. This shows you the sitemap that the search engines see. Your browser’s default view may decode the URLs listed in your sitemap, so URLs may look fine without actually being fine.

4 – Use Sitemaps Attribute Values Correctly

Bing is constantly monitoring and adapting to benefit from URL signals, including sitemaps attribute values. As a best practice, do not output any of these attributes in your sitemaps if you are not able to set appropriate values. For instance, do not set the <lastmod> value to the time you generate the sitemap; <lastmod> should be the date of the last modification of the content linked from your sitemap. Be sure to follow the protocol: <lastmod> values must be in the W3C Datetime format. If you don’t know how to generate this format, use YYYY-MM-DD, and do not use a country-specific format. Also avoid setting the <changefreq> and <priority> attributes to identical values across URLs if you don’t really know when the content will change and cannot differentiate priority between URLs.

We encourage you to visit Bing Webmaster Tools regularly to get the latest information and data about your site. And remember, you don’t need to log into Bing Webmaster Tools and publish your sitemaps each time you generate them. Instead, refer to your sitemaps once in your robots.txt or in Bing Webmaster Tools and we will process them regularly. As always, we would love to hear your ideas and feedback! So do let us know your thoughts at Bing Listens. Fabrice Canel Principal Program Manager Microsoft - Bing
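The encoding and <lastmod> advice above can be sketched as a small helper (illustrative only) that escapes XML-special characters such as '&' exactly once and emits a W3C date:

```python
from datetime import date
from xml.sax.saxutils import escape

def sitemap_url_entry(loc, lastmod):
    """Build one <url> entry: escape XML-special characters in the URL
    (so '&' becomes '&amp;' exactly once) and format <lastmod> as a
    W3C date, YYYY-MM-DD."""
    return "<url><loc>{}</loc><lastmod>{}</lastmod></url>".format(
        escape(loc), lastmod.isoformat())
```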

Announcing the Bing Mobile Friendliness Test Tool

Earlier this year, we shared our approach to mobile-friendly search and ranking and talked about our plans for Webmaster Tools additions that will aid site owners in their efforts to create mobile-friendly sites. Today, I am happy to introduce Charu Puhazholi from the Webmaster Team and Shyam Jayasankar from the Mobile Ranking Team. In today’s post, Charu and Shyam will talk about the availability of the Bing Webmaster Mobile Friendliness Test tool, which allows you to validate your site’s pages for mobile-device compatibility. Enjoy! – Vincent Wehren – Senior Product Lead, Bing Webmaster & Publisher Experiences

In our earlier post about our approach to mobile-friendly search, we talked about some of the factors our algorithm takes into account to determine whether a page is considered “mobile friendly”, such as readability and navigation. Today, we are happy to announce the general availability of the Bing Mobile Friendliness Test Tool. We would also like to dive a little deeper into the specific factors that determine mobile friendliness and provide more understanding of how these factors impact your site. You can find the Bing Mobile Friendliness Test Tool in Bing Webmaster Tools under Tools & Diagnostics, or navigate to it directly using this link.

How is a Page Determined to be Mobile Friendly?

When evaluating a webpage for mobile friendliness, the following key factors are considered:

- Viewport and zoom control configuration
- Width of page content
- Readability of text on the page
- Spacing of links and other elements on the page
- Use of incompatible plug-ins

The Mobile Friendliness Test tool runs checks on all of these key factors and additionally checks for, and reports on, resources that are needed to analyze the page fully but that we weren’t able to crawl due to robots.txt constraints.
This way, rendering issues (as seen in the page preview) can be fixed by webmasters by updating robots.txt so that Bing can accurately determine the mobile-friendliness of the site. When you submit the URL of a page to the Mobile Friendliness Test tool, our Bing mobile crawler fetches and renders the page, extracting the important features the tool uses to determine how the page performs against each of the above factors. The outcomes are then aggregated into a consolidated mobile-friendliness verdict for the page. A sample result for a page that meets all mobile-friendliness checks might look like this: On the other hand, when a page fails to meet one of these criteria, the verdict provides detailed information on the failure (see the example below). A preview snapshot of what the page looks like on a mobile device is also shown, along with pointers that can be used to identify what needs to be fixed. Let us jump in and see what these checks mean and how you can ensure your page passes all of them.

Viewport

The viewport meta tag needs to be set correctly in order for mobile-friendly pages to work well on devices of different sizes and orientations. In general, this means that the viewport is set with the content width equal to “device-width”, as shown below. <meta name=viewport content="width=device-width, initial-scale=1"> While it is possible for pages with an alternate viewport configuration to be mobile-friendly on certain devices, they might not work equally well on all devices. In the example below, the viewport configuration check failed, so even though some checks passed, the page is flagged as not being ready for mobile experiences:

Zoom Control

The Zoom Control check verifies whether the configuration of the viewport hampers the user’s ability to pinch and zoom the page.
In general, not using the scale-related viewport settings should result in your page being zoomable on most mobile browsers, but improper use of these settings (user-scalable, maximum-scale, minimum-scale) could hamper access to some content on the page. Some mobile-friendly pages prevent user zoom by design, and we take that into account before flagging it as an error.

Content Width

One of the more important signals we use in determining the overall mobile-friendliness verdict is the page’s content width and how it relates to the device’s screen width. In general, the content width should not exceed the screen width. We have some tolerance built in, but any page that requires excessive horizontal panning will be flagged with the error “Page content does not fit device width”.

Readability of Text on the Page

Another important factor is the readability of text on the page. The readability factor was determined after analyzing hundreds of thousands of pages (mobile-friendly and otherwise) to determine appropriate features and thresholds. In any case, it is important to understand that readability is not just a function of font size, but also of viewport scaling. It is useful to think of readability as the average area occupied by text when the page is fully zoomed out to fit the device width.

Spacing of Links and Other Elements on the Page

With regard to touch-friendliness we take a similar approach: we look at all input elements and hyperlinks on the page to see whether they occupy an area considered “tap-friendly” at maximum zoom out. If that is not the case, we will call out that “Links and tap targets are too small”.

Compatibility Issues

Another warning you might see is when we detect that your page is using incompatible plugins (e.g. Flash), or the page is otherwise not intended for use on mobile devices.
We detect any error messages surfaced by the page on a typical mobile device and currently capture those as a warning in the Mobile Friendliness Test Tool. It’s important, though, to take this as a serious warning; in some cases, we may decide to interpret it as a true error in the future.

Resources Blocked by Robots.txt

While the factors discussed above impact the mobile-friendliness verdict of the page, the tool also validates a couple of other related factors. For example, the tool checks for page resources blocked by robots.txt rules and reports instances thereof as warnings. Possible rendering issues could be due to these blocked resources, so armed with this information, you can look into updating your robots.txt so that Bing can accurately determine the mobile-friendliness of your site's pages.

Parting Words

The Mobile Friendliness Test tool is another important step in our commitment to helping site owners create mobile-friendly experiences, and we hope it greatly aids you in making your website mobile-friendly. Each time you run the tool against a URL, we crawl the page the same way Bingbot does, download the necessary and allowed resources, dynamically render the mobile page, extract features, and run them through our mobile classification algorithms to produce the verdict for your page. Since all of this happens in real time, it might take the tool a few seconds to complete processing and show results. For any reported issues that have quick fixes (like robots.txt changes or viewport tag updates), or when you are actively working on making your website mobile-friendly, you can re-run the tool to immediately see the outcome. As always, we would love to hear your ideas and feedback! So do let us know your thoughts at Bing Listens. Happy mobile-friendly testing! Charu Puhazholi – Senior Program Manager, Webmaster Tools Shyam Jayasankar – Program Manager II, Mobile Ranking

