Bing's Webmaster Blog

Bing partners with the ecosystem to drive fresh signals

Bing Webmaster Tools launched the Adaptive URL submission capability, which allows webmasters to submit up to 10,000 URLs using the online toolkit through the Bing Webmaster portal (Submit URLs option) or in batch mode using the Batch API. Since launch we have seen a high adoption rate by large websites as well as small and medium websites. We have been working with multiple partners to further Bing's vision of driving a fundamental shift in how search engines find content, using direct notification from websites whenever content is created or updated.

A few examples to note: during the recent SEO conference (Pubcon Pro, Las Vegas), partners showcased how they used the URL Submission API, ensuring search users find timely, relevant and trustworthy information on Bing. Similarly, we have been working with leading SEO platforms like Botify to integrate the URL Submission API into their product offerings. This integration is part of Botify's new FastIndex solution, the first within the Botify Activation product suite, and the partnership builds upon Bing's new programmatic URL submission process. For more information, please refer to the announcement from Botify.

The URL Submission API reviews the performance of sites registered on Bing Webmaster Tools and adaptively increases their daily quota for submitting content to Bing. We encourage websites to register on Bing Webmaster Tools using standard methods, Google Search Console Import, or Domain Connect based verification. Apart from integrating the URL Submission API, Botify is participating in Bing's content submission API pilot, which allows for the direct push of HTML, images, and other site content to the search engine, reducing the need for crawling.

Please refer to the documentation (easy set-up guide, Batch URL Submission API, cURL code example) for more details on the URL Submission API. We are happy to bring in more partners to accelerate this shift in content discovery. Thanks!
Bing Webmaster Tools Team 

Announcing future user-agents for Bingbot

As announced in October, Bing is adopting the new Microsoft Edge as the engine to run JavaScript and render web pages. We have already switched to Microsoft Edge for thousands of web sites "under the hood". This evolution was transparent for most of the sites, and we carefully tested each website to check whether it rendered fine after switching to Microsoft Edge. Over the coming months, we will scale this migration to cover all sites. So far, we have been crawling using our existing bingbot user-agents. With this change, we will start the transition to a new bingbot user-agent, first for sites which require it for rendering and then gradually and carefully for all sites.

Bingbot user-agents today:

Mozilla/5.0 (compatible; bingbot/2.0; +
Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X) AppleWebKit/537.51.1 (KHTML, like Gecko) Version/7.0 Mobile/11A465 Safari/9537.53 (compatible; bingbot/2.0; +
Mozilla/5.0 (Windows Phone 8.1; ARM; Trident/7.0; Touch; rv:11.0; IEMobile/11.0; NOKIA; Lumia 530) like Gecko (compatible; bingbot/2.0; +

In addition to the existing user-agents listed above, the following are the new evergreen Bingbot user-agents.

Desktop:
Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; bingbot/2.0; + Chrome/W.X.Y.Z Safari/537.36 Edg/W.X.Y.Z

Mobile:
Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/W.X.Y.Z Mobile Safari/537.36 Edg/W.X.Y.Z (compatible; bingbot/2.0; +

We are committed to regularly updating our web page rendering engine to the most recent stable version of Microsoft Edge, thus making the above user-agent strings evergreen. "W.X.Y.Z" will be substituted with the latest Microsoft Edge version we're using, for example "80.0.345.0".

How to test your web site

For most web sites, there is nothing to worry about, as we will carefully test that sites dynamically render fine before switching them to Microsoft Edge and our new user-agent.
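For log analysis, the evergreen user-agent pattern above can be recognized programmatically. The sketch below (illustrative only, not official Bing code) detects an evergreen Bingbot user-agent and extracts the Microsoft Edge version that replaces "W.X.Y.Z"; the sample string is the desktop UA from this post with the crawler-info URL and the example version number filled in.

```python
import re

# Match an evergreen Bingbot UA and capture the Edge version that replaces
# the "W.X.Y.Z" placeholder described in the post.
EVERGREEN_BINGBOT = re.compile(
    r"compatible; bingbot/2\.0;.*Edg/(?P<version>\d+(?:\.\d+){1,3})"
)

def edge_version(user_agent: str):
    """Return the Edge version of an evergreen Bingbot UA, or None."""
    match = EVERGREEN_BINGBOT.search(user_agent)
    return match.group("version") if match else None

# Desktop evergreen UA from the post, with a sample version substituted.
sample = ("Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; "
          "bingbot/2.0; +http://www.bing.com/bingbot.htm) "
          "Chrome/80.0.345.0 Safari/537.36 Edg/80.0.345.0")
```

Because the strings are evergreen, matching on the `bingbot/2.0` token rather than on a fixed Edge version keeps such checks working as the version advances.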
We invite you to install and test the new Microsoft Edge to check if your site looks fine with it. If it does then you will not be affected by the change. You can also register your site on Bing Webmaster Tools to get insights about your site, to be notified if we detect issues and to investigate your site using our upcoming tools based on our new rendering engine.  We look forward to sharing more details in the future.  Thanks,  Fabrice Canel  Principal Program Manager  Microsoft - Bing

Accessing Bing Webmaster Tools API using cURL

Thank you, webmasters, for effectively using the Adaptive URL solution to notify bingbot about your website's freshest and most relevant content. But did you know you don't have to use the Bing Webmaster Tools portal to submit URLs? Bing Webmaster Tools exposes programmatic access to its APIs for webmasters to integrate into their workflows. Here is an example using the popular command-line utility cURL that shows how easy it is to integrate the Submit URL single and Submit URL batch API endpoints. You can use the Get URL submission quota API to check the remaining daily quota for your account. The Bing API can be integrated and called from all modern languages (C#, Python, PHP…); however, cURL can help you prototype and test the API in minutes and also build complete solutions with minimal effort. cURL is considered one of the most versatile tools for command-line API calls and is supported by all major Linux shells – simply run the commands below in a terminal window. If you're a Windows user, you can run cURL commands in Git Bash, the popular git client for Windows (no need to install cURL separately; Git Bash comes with cURL). If you are a Mac user, you can install cURL using a package manager such as Homebrew. When you try the examples below, be sure to replace API_KEY with your API key string obtained from Bing Webmaster Tools > Webmaster API > Generate. Refer to the easy set-up guide for Bing's Adaptive URL submission API for more details.

Submitting new URLs – Single
curl -X POST "" -H "Content-Type: application/json" -H "charset: utf-8" -d '{"siteUrl":"", "url": ""}'
Response: {"d": null}

Submitting new URLs – Batch
curl -X POST "" -H "Content-Type: application/json" -H "charset: utf-8" -d '{"siteUrl":"", "urlList":["", ""]}'
Response: {"d": null}

Check remaining API quota
curl ""
Response: {"d": {"__type": "UrlSubmissionQuota:#Microsoft.Bing.Webmaster.Api", "DailyQuota": 973, "MonthlyQuota": 10973}}

So, integrate the APIs today to get your content indexed in real time by Bing.
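If you move beyond cURL to one of the languages mentioned above, the request bodies are plain JSON. The following minimal sketch builds the bodies for the single and batch submission endpoints; the site and page URLs are placeholders you would replace with your own verified site, and the resulting strings are what you would POST with Content-Type: application/json and your API key.

```python
import json

# Build the JSON body for the single SubmitUrl endpoint.
def submit_url_body(site_url: str, url: str) -> str:
    return json.dumps({"siteUrl": site_url, "url": url})

# Build the JSON body for the SubmitUrlBatch endpoint.
def submit_url_batch_body(site_url: str, urls: list) -> str:
    return json.dumps({"siteUrl": site_url, "urlList": urls})
```

These are the same payload shapes shown in the cURL examples above, just generated programmatically so a CMS can fill them in at publication time.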
Please reach out to Bing webmaster tools support if you face any issues.  Thanks, Bing Webmaster Tools team  

Some Thoughts on Website Boundaries

In the coming weeks, we will update the Bing Webmaster Guidelines to make them clearer and more transparent to the SEO community. This major update will be accompanied by blog posts that share more details and context around some specific violations. In the first article of this series, we are introducing a new penalty to address "inorganic site structure" violations. This penalty will apply to malicious attempts to obfuscate website boundaries, which covers some old attack vectors (such as doorways) and new ones (such as subdomain leasing).

What is a website anyway?

One of the most fascinating aspects of building a search engine is developing the infrastructure that gives us a deep understanding of the structure of the web. We're talking trillions and trillions of URLs, connected with one another by hyperlinks. The task is herculean, but fortunately we can use some logical grouping of these URLs to make the problem more manageable – and understandable by us, mere humans! The most important of these groupings is the concept of a "website". We all have some intuition of what a website is. For reference, Wikipedia defines a website as "a collection of related network web resources, such as web pages, multimedia content, which are typically identified with a common domain name." It is indeed very typical that the boundary of a website is the domain name. For example, everything that lives under a single domain name is a single website. Fig. 1 – Everything under the same domain name is part of the same website. A common alternative is the case of a hosting service where each subdomain is its own website. And there are some (less common) cases where each subdirectory is its own website, similar to what GeoCities was offering in the late 90s. Fig. 2 – Each subdomain is its own separate website.

Why does it matter?

Some fundamental algorithms used by search engines differentiate between URLs that belong to the same website and URLs that don't.
For example, it is well known that most algorithms based on the link graph propagate link value differently depending on whether a link is internal (same site) or external (cross-site). These algorithms also use site-level signals (among many others) to infer the relevance and quality of content. That's why pages on a very trustworthy, high-quality website tend to rank more reliably and higher than others, even if such pages are new and haven't accumulated a lot of page-level signals.

When things go wrong

Stating the obvious, we can't have people manually review billions of domains in order to assess what is a website. To solve this problem, like many of the other problems we need to solve at the scale of the web, we developed sophisticated algorithms to determine website boundaries. The algorithm gets it right most of the time. Occasionally it gets it wrong, either conflating two websites into one or viewing a single website as two different ones. And sometimes there's no obvious answer, even for humans! For example, if your business operates in both the US and the UK, with content hosted on two separate domains (respectively a .com domain and a .co.uk domain), you can be seen as running either one or two websites depending on how independent your US and UK entities are, how much content is shared across the two domains, how much they link to each other, etc. However, when we reviewed sample cases where the algorithm got it wrong, we noticed that the most common root cause was that the website owner actively tried to misrepresent the website boundary. It can indeed be very tempting to try to fool the algorithm. If your internal links are viewed as external, you can get a nice rank boost. And if you can propagate some of the site-level signals to pages that don't technically belong to your website, these pages can get an unfair advantage.
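The internal-versus-external distinction described above can be sketched in a few lines. This is only an illustrative model: it treats "same hostname" as "same website", whereas a real search engine uses much more sophisticated boundary detection (subdomains, subfolders, cross-domain entities, and so on).

```python
from urllib.parse import urlparse

# Naive boundary model: a link is internal when source and target share a
# hostname, external otherwise. Real boundary detection is far richer.
def link_type(source_url: str, target_url: str) -> str:
    source_host = urlparse(source_url).hostname
    target_host = urlparse(target_url).hostname
    return "internal" if source_host == target_host else "external"
```

Under this naive model, every trick discussed later in the post amounts to making this function return the wrong answer at scale.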
Making things right

In order to maintain the quality of our search results while being transparent with the SEO community, we are introducing new penalties to address "inorganic site structure". In short, creating a website structure that actively misrepresents your website boundaries is going to be considered a violation of the Bing Webmaster Guidelines and will potentially result in a penalty. Some "inorganic site structure" violations were already covered by other categories, whereas some were not. To better understand what active misrepresentation is, let's look at three examples.

PBNs and other link networks

While not all link networks misrepresent website boundaries, there are many cases where a single website is artificially split across many different domains, all cross-linking to one another, for the obvious purpose of rank boosting. This is particularly true of PBNs (private blog networks). Fig. 3 – All these domains are effectively the same website. This kind of behavior is already in violation of our link policy. Going forward, it will also be in violation of our "inorganic site structure" policy and may receive additional penalties.

Doorways and duplicate content

Doorways are pages that are overly optimized for specific search queries but only redirect or point users to a different destination. The typical situation is someone spinning up many different sites hosted under different domain names, each targeting its own set of search queries but all redirecting to the same destination or hosting the same content. Fig. 4 – All these domains are effectively the same website (again). Again, this kind of behavior is already in violation of our webmaster guidelines. In addition, it is also a clear-cut example of "inorganic site structure", since there is ultimately only one real website, but the webmaster tried to make it look like several independent websites, each specialized in its own niche.
Note that we will be looking for malicious intent before flagging sites in violation of our "inorganic site structure" policy. We acknowledge that duplicate content is unavoidable (e.g. HTTP vs. HTTPS); however, there are simple ways to declare one website or destination as the source of truth, whether it's redirecting duplicate pages with an HTTP 301 or adding canonical tags pointing to the destination. Violators, on the other hand, will generally implement none of these, or will instead use sneaky redirects.

Subdomain or subfolder leasing

Over the past few months, we have heard concerns from the SEO community around the growing practice of hosting third-party content, or letting a third party operate a designated subdomain or subfolder, generally in exchange for compensation. This practice, which some people call "subdomain (or subfolder) leasing", tends to blur website boundaries: most of the domain is a single website, except for a single subdomain or subfolder, which is a separate website operated by a third party. In most cases that we reviewed, the subdomain had very little visibility for direct navigation from the main website. Concretely, there were very few links from the main domain to the subdomain, and these links were generally tucked all the way at the bottom of the main domain's pages or in other obscure places. The intent was clearly to benefit from site-level signals, even though the content on the subdomain had very little to do with the content on the rest of the domain. Fig. 5 – The domain is mostly a single website, with the exception of one subdomain. Some people in the SEO community argue that it's fair game for a website to monetize its reputation by letting a third party buy and operate from a subdomain. However, in this case the practice equates to buying ranking signals, which is not much different from buying links.
Therefore, we decided to consider "subdomain leasing" a violation of our "inorganic site structure" policy when it is clearly used to bring a completely unrelated third-party service into the website boundary for the sole purpose of leaking site-level signals to that service. In most cases, the penalties issued for this violation will apply only to the leased subdomain, not the root domain.

Your responsibility as domain owner

This article is also an opportunity to remind domain owners that they are ultimately responsible for the content hosted under their domain, regardless of the website boundaries that we identify. This is particularly true when subdomains or subfolders are operated by different entities. While clear website boundaries will prevent negative signals caused by a single bad actor from leaking to other content hosted under the same domain, the overall domain reputation will be affected if a disproportionate number of websites end up in violation of our webmaster guidelines. Taking an extreme case, if you offer free hosting on your subdomains and 95% of your subdomains are flagged as spam, we will expand penalties to the entire domain, even if the root website itself is not spam. Another unfortunate case is hacked sites. Once a website is compromised, it is typical for hackers to create subfolders containing spam content, sometimes unbeknownst to the legitimate owner. When we detect this, we generally penalize the entire website until it is cleaned up.

Learning from you

If you believe you have been unfairly penalized, you can contact Bing Webmaster Support and file a reconsideration request. Please document the situation as thoroughly and transparently as possible, listing all the domains involved. However, we cannot guarantee that we will lift the penalty. Your feedback is valuable to us!
Clarifying our existing duplicate content policy and clarifying our stance on subdomain leasing were two pieces of feedback we heard from the SEO community, and we hope this article addressed both. As we are in the middle of a major update of the Bing Webmaster Guidelines, please feel free to reach out to us and share feedback on Twitter or Facebook. Thank you, Frederic Dubut and the Bing Webmaster Tools Team

The new evergreen Bingbot simplifying SEO by leveraging Microsoft Edge

Today we're announcing that Bing is adopting Microsoft Edge as the Bing engine to run JavaScript and render web pages. Doing so will create less fragmentation of the web and ease Search Engine Optimization (SEO) for all web developers. As you may already know, the next version of Microsoft Edge is adopting the Chromium open source project. This update means Bingbot will be evergreen, as we are committing to regularly update our web page rendering engine to the most recent stable version of Microsoft Edge.

Easing Search Engine Optimization

By adopting Microsoft Edge, Bingbot will now render all web pages using the same underlying web platform technology already used today by Googlebot, Google Chrome, and other Chromium-based browsers. This will make it easy for developers to ensure their web sites and their Content Management Systems work across all these solutions without having to spend time investigating each solution in depth. By disclosing our new Bingbot web page rendering technology, we are ensuring fewer SEO compatibility problems moving forward and increasing satisfaction in the SEO community. If your feedback can benefit the greater SEO community, Bing and Edge will propose and contribute to the open source Chromium project to make the web better for all of us. Head to GitHub to check out our explainers!

What happens next

Over the next few months, we will be switching to Microsoft Edge "under the hood", gradually over time. The key aspects of this evolution will be transparent for most sites. We may change our bingbot crawler user-agent as appropriate to allow rendering on some sites. For most web sites, there is nothing you should need to worry about, as we will carefully test that they dynamically render fine before switching them to Microsoft Edge.
We invite you to install and test Microsoft Edge and register your site with Bing Webmaster Tools to get insights about your site, to be notified if we detect issues, and to investigate your site using our upcoming tools based on our new rendering engine. We look forward to sharing more details in the future. We are excited about the opportunity to be an even-more-active part of the SEO community and to continue to make the web better for everyone, including all search engines. Thanks, Fabrice Canel, Principal Program Manager, Microsoft - Bing

Import sites from Search Console to Bing Webmaster Tools

At Bing Webmaster Tools, we actively listen to the needs of webmasters. Verifying a website's ownership had been reported as a pain point in Bing Webmaster Tools. To simplify this process, we recently introduced a new method for webmasters to verify sites in Bing Webmaster Tools. Webmasters can now import their verified sites from Google Search Console into Bing Webmaster Tools. The imported sites will be auto-verified, eliminating the need to go through the manual verification process. Both Bing Webmaster Tools and Google Search Console use similar methods to verify the ownership of a website. Using this new functionality, webmasters can log into their Google Search Console account and import all the verified sites and their corresponding sitemaps into their Bing Webmaster Tools account. Webmasters just need to follow 4 simple steps to import their sites:

Step 1: Sign in to your Bing Webmaster Tools account or create a new one.
Step 2: Navigate to the My Sites page on Bing Webmaster Tools and click Import.
Step 3: Sign in with your Google Search Console account and click Allow to give Bing Webmaster Tools access to your list of verified sites and sitemaps.
Step 4: After authentication, Bing Webmaster Tools will display the list of verified sites present in your Google Search Console account, along with the number of sitemaps and the corresponding role for each site. Select the sites which you want to add to Bing Webmaster Tools and click Import.

Webmasters can import multiple sites from multiple Google Search Console accounts. On successful completion, the selected sites will be added and automatically verified in Bing Webmaster Tools. Please note that it might take up to 48 hours for traffic data to appear for the newly verified websites. A maximum of 100 websites can be imported in one go; please follow the above steps again if you want to add more than 100 sites. The limit of 1,000 site additions per Bing Webmaster Tools account still applies.
Bing Webmaster Tools will periodically validate your site ownership status by syncing with your Google Search Console account. Therefore, it is necessary for your Bing Webmaster Tools account to have ongoing access to your Google Search Console account. If access to your Google Search Console account is revoked, you will have to either import your sites again or verify your sites using other verification methods. We hope that both the Import from Google Search Console and Domain Connect verification methods will make the onboarding process easier for webmasters. We encourage you to sign up and leverage Bing Webmaster Tools to help drive more users to your sites. We want to hear from you! As a reminder, you can always reach out to us and share feedback on Twitter and Facebook. If you encounter issues using this solution, please raise a service ticket with our support team. Thanks! Bing Webmaster Tools team

Bing Webmaster Tools simplifies site verification using Domain Connect

In order to submit site information to Bing, get performance reports, or access diagnostic tools, webmasters need to verify their site ownership in Bing Webmaster Tools. Traditionally, Bing Webmaster Tools has supported three verification options:

Option 1: XML file authentication
Option 2: Meta tag authentication
Option 3: Add a CNAME record to DNS

Options 1 and 2 require the webmaster to access the site's source code to complete the site verification. With Option 3, the webmaster can avoid touching the site's source code but needs access to the domain hosting account to edit the CNAME record so it holds the verification code provided by Bing Webmaster Tools. To simplify Option 3, we are announcing support for the Domain Connect open standard, which allows webmasters to seamlessly verify their site in Bing Webmaster Tools. Domain Connect is an open standard that makes it easy for a user to configure DNS for a domain running at a DNS provider (e.g. GoDaddy, 1&1 Ionos, etc.) to work with a service running at an independent service provider (e.g. Bing, O365, etc.). The protocol presents a simple experience to the user, isolating them from the details and complexity of DNS settings. Bing Webmaster Tools verification using Domain Connect is already live for users whose domains are hosted with supported DNS providers, and Bing Webmaster Tools will gradually integrate this capability with other DNS providers that support the Domain Connect open standard.

Quick guide on how to use the Domain Connect feature to verify your site in Bing Webmaster Tools:

Step 1: Open a Bing Webmaster Tools account
You can open a free Bing Webmaster Tools account by going to the Bing Webmaster Tools sign-in or sign-up page. You can sign up using a Microsoft, Google or Facebook account.

Step 2: Add your website
Once you have a Bing Webmaster Tools account, you can add sites to your account.
You can do so by entering the URL of your site into the Add a Site input box and clicking Add.

Step 3: Check if your site is supported by the Domain Connect protocol
When you add the website information, Bing Webmaster Tools will do a background check to identify whether that domain/website is hosted on a DNS provider that has integrated the Domain Connect solution with Bing Webmaster Tools. If the site is supported, a Verify option will be shown; if the site is not supported by the Domain Connect protocol, the user will see the default verification options mentioned at the top of this blog.

Step 4: Verify using DNS provider credentials
On clicking Verify, the user will be redirected to the DNS provider's site. The webmaster should sign in using the account credentials associated with the domain/website under verification. On successful sign-in, the site will be verified by Bing Webmaster Tools within a few seconds. In certain cases, it may take longer for the DNS provider to send the site ownership signal to the Bing Webmaster Tools service.

Using the new verification option will significantly reduce the time taken and simplify the site verification process in Bing Webmaster Tools. We encourage you to try out this solution and get more users for your sites on Bing via Bing Webmaster Tools. In case you face any challenges using this solution, you can raise a service ticket with our support team. We are building another solution to further simplify the site verification process and help webmasters easily add and verify their sites in Bing Webmaster Tools. Watch this space for more!

Thanks! Bing Webmaster Tools team

bingbot Series: Introducing Batch mode for Adaptive URL submission API

We launched the Adaptive URL submission capability, which allows webmasters to submit up to 10,000 URLs using the online API or through the Bing Webmaster portal (Submit URLs option). Since the launch, we have received multiple requests from webmasters for the ability to submit URLs in batches. As we actively listen to webmasters and their needs, we are delighted to announce the Batch mode capability for the Adaptive URL Submission API, which allows webmasters and site managers to submit URLs in batches, saving them from the excessive API calls made when submitting URLs individually.

The Batch URL Submission API is very similar to the individual URL Submission API (blog post), and hence integrating the Batch API is very easy and follows the same steps. Example requests for the Batch URL Submission API for the supported protocols can be seen below.

JSON request sample

POST /webmaster/api.svc/json/SubmitUrlBatch?apikey=sampleapikeyEDECC1EA4AE341CC8B6 HTTP/1.1
Content-Type: application/json; charset=utf-8
Host: 
{
  "siteUrl":"",
  "urlList":[
    "",
    "",
    ""
  ]
}

XML request sample

POST /webmaster/api.svc/pox/SubmitUrlBatch?apikey=sampleapikeyEDECC1EA4AE341CC8B6 HTTP/1.1
Content-Type: application/xml; charset=utf-8
Host: 
<SubmitUrlBatch xmlns="">
  <siteUrl></siteUrl>
  <urlList>
    <string xmlns=""></string>
    <string xmlns=""></string>
    <string xmlns=""></string>
  </urlList>
</SubmitUrlBatch>

You will get an HTTP 200 response on successful submission of the URLs. The URLs will then be checked for compliance with the Bing Webmaster Guidelines and, if they pass, will be crawled and indexed within minutes.

Please refer to the documentation for generating the API key and to the Batch URL Submission API documentation for more details. Do note that the maximum supported batch size for this API is 500 URLs per request. The total limit on the number of URLs submitted per day still applies.
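A common integration task is splitting a large URL list into requests that respect the 500-URLs-per-request limit mentioned above. The sketch below does exactly that, producing one JSON body per request; the site URL is a placeholder, and enforcing the overall daily quota is left to the caller.

```python
import json

MAX_BATCH_SIZE = 500  # maximum URLs per SubmitUrlBatch request, per the post

def batch_bodies(site_url, urls, batch_size=MAX_BATCH_SIZE):
    """Yield SubmitUrlBatch JSON bodies, each with at most batch_size URLs."""
    for start in range(0, len(urls), batch_size):
        chunk = urls[start:start + batch_size]
        yield json.dumps({"siteUrl": site_url, "urlList": chunk})
```

Each yielded string is a complete request body, so a client can POST them one at a time and stop (or back off) if a request fails.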
So, integrate the APIs today to get your content indexed in real time by Bing, and let us know what you think of this capability. Please reach out if you face any issues while integrating. Thanks! Bing Webmaster Tools Team

bingbot Series: Easy set-up guide for Bing’s Adaptive URL submission API

In February, we announced the launch of the adaptive URL submission capability. As called out during the launch, as an SEO manager or website owner you do not need to wait for the crawler to discover new links; you can simply submit those links to Bing automatically to get your content indexed as soon as it is published! Who in SEO didn't dream of that? In the last few months we have seen rapid adoption of this capability, with thousands of websites submitting millions of URLs and getting them indexed on Bing instantly. At the same time, a few webmasters have asked for guidance on integrating the adaptive URL submission API. This blog explains how easy it is to set up the adaptive URL submission API.

Step 1: Generate an API key

Webmasters need an API key to be able to access and use the Bing Webmaster APIs. This API key can be generated from Bing Webmaster Tools by following these steps:

Sign in to your account on Bing Webmaster Tools. In case you do not already have a Bing Webmaster account, sign up today using any Microsoft, Google or Facebook ID.
Add and verify the site that you want to submit URLs for through the API, if not already done.
Select and open any verified site through the My Sites page on Bing Webmaster Tools and click on Webmaster API in the left-hand navigation menu.
If you are generating the API key for the first time, please click Generate to create an API key. Otherwise, you will see the key previously generated.

Note: Only one API key can be generated per user. You can change your API key anytime; the change is picked up by the system within 30 minutes.

Step 2: Integrate with your website

You can use any of the below protocols to easily integrate the Submit URL API into your system.

JSON request sample

POST /webmaster/api.svc/json/SubmitUrl?apikey=sampleapikeyEDECC1EA4AE341CC8B6 HTTP/1.1
Content-Type: application/json; charset=utf-8
Host: 
{
  "siteUrl":"http:\/\/",
  "url":"http:\/\/\/url1.html"
}

XML request sample

POST /webmaster/api.svc/pox/SubmitUrl?apikey=sampleapikey341CC57365E075EBC8B6 HTTP/1.1
Content-Type: application/xml; charset=utf-8
Host: 
<SubmitUrl xmlns="">
  <siteUrl></siteUrl>
  <url></url>
</SubmitUrl>

If the URL submission is successful, you will receive an HTTP 200 response. This ensures that your pages will be discovered for indexing and, if the Bing webmaster guidelines are met, the pages will be crawled and indexed in real time. Using any of the above methods, you should be able to directly and automatically let Bing know whenever new links are created on your website. We encourage you to integrate such a solution into your Web Content Management System to let Bing auto-discover your new content at publication time.

In case you face any challenges during the integration, you can raise a service ticket. Feel free to contact us if your web site requires more than 10,000 URLs submitted per day. We will adjust as needed. Thanks! Bing Webmaster Tools team
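If your CMS generates the XML body programmatically rather than by hand, a minimal sketch looks like the following. The namespace attributes are deliberately left out because they are elided in the sample above; fill them in from the official documentation, and substitute your own site and page URLs, before using this for real requests.

```python
import xml.etree.ElementTree as ET

# Build a SubmitUrl XML body matching the shape of the sample request.
# Namespace attributes are omitted here (they are elided in the post).
def submit_url_xml(site_url: str, url: str) -> str:
    root = ET.Element("SubmitUrl")
    ET.SubElement(root, "siteUrl").text = site_url
    ET.SubElement(root, "url").text = url
    return ET.tostring(root, encoding="unicode")
```

Generating the body this way avoids escaping mistakes that are easy to make when concatenating XML strings by hand.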

bingbot Series: Get your content indexed fast by now submitting up to 10,000 URLs per day to Bing

Today, we are excited to announce a significant increase in the number of URLs webmasters can submit to Bing to get their content crawled and indexed immediately. We believe that enabling this change will trigger a fundamental shift in the way that search engines, such as Bing, retrieve and are notified of new and updated content across the web. Instead of Bing frequently monitoring RSS and similar feeds or frequently crawling websites to check for new pages, discover content changes and/or new outbound links, websites will notify Bing directly about relevant URLs changing on their website. This means that eventually search engines can reduce the crawling frequency of sites to detect changes and refresh the indexed content. For many years, Bing has offered all webmasters the ability to submit their site URLs through the Bing Webmaster Tools portal as well as the Bing Webmaster Tools API for immediate crawl and indexation. Until today, this feature was throttled for all sites to a maximum of 10 URLs per day and a maximum of 50 URLs per month. Today we are releasing the Adaptive URL submission feature, which increases the daily quota by 1,000x, allowing you to submit up to 10,000 URLs per day, with no monthly quotas. The daily quota per site will be determined based on the site's verified age in Bing Webmaster Tools, site impressions, and other signals that are available to Bing. Today the logic is as follows, and we will tweak this logic as needed based on the usage and behavior we observe. Key things to note: site verified age is one of the signals, but not the only signal, used to determine the daily URL quota per site. Webmasters should be able to see the revised limit for their site on the Bing Webmaster Tools portal (Submit URLs option) or by using the Get URL submission quota API.
In the Bing Webmaster Tools portal, under the “Submit URLs” option, webmasters will see a maximum of the 1,000 most recently submitted URLs, even though the permissible quota per day could be greater than 1,000. As per existing functionality, the “Submit URL” option is not applicable for sub-domain-level sites; if there are sites mapped to a sub-domain, the quota is allocated at the domain level, not at the sub-domain level. So, log in to Bing Webmaster Tools or integrate the Bing Webmaster APIs into your content management system now to benefit from the increase, and contact us with feedback. Feel free to contact us as well if your website requires more than 10,000 URLs submitted per day. We will adjust as needed.

Thanks!
Bing Webmaster Tools team
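As a sketch of how a site might check its current limit programmatically, the following assumes the Get URL submission quota endpoint name from the Bing Webmaster API documentation and a hypothetical JSON response shape:

```python
import json
from urllib import parse, request

def quota_request(api_key, site_url):
    """Build the GET request for the Get URL submission quota endpoint
    (endpoint name per Bing docs; host and query parameters assumed)."""
    qs = parse.urlencode({"apikey": api_key, "siteUrl": site_url})
    return request.Request(
        f"https://ssl.bing.com/webmaster/api.svc/json/GetUrlSubmissionQuota?{qs}"
    )

def parse_quota(response_body):
    # Response shape is an assumption: {"d": {"DailyQuota": ...}}
    return json.loads(response_body)["d"]["DailyQuota"]

daily_quota = parse_quota('{"d": {"DailyQuota": 10000}}')  # hypothetical response
```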

Introducing Clarity, a web analytics product for webmasters

Today, we are announcing the beta release of Clarity, a web analytics product that enables website developers to understand user behavior at scale. Web developers face many challenges in building compelling and user-friendly websites. Understanding why users struggle, where they run into issues or why they abandon a website is difficult. When making updates to a web experience, A/B experimentation helps developers decide which way to go. While A/B experiments allow developers to see when their key metrics are moving, the primary drawback is the lack of visibility into why the metrics moved in any given direction. This gap in understanding user behavior led us to build Clarity.

Session Replay

The session replay capability of Clarity allows web developers to view a user's page impression to understand user interactions such as mouse movement, touch gestures, click events and much more. Being able to replay user sessions allows web developers to empathize with users and understand their pain points.

Clarity Case Studies

Bing uses Clarity to detect poor user experience due to malware

In the Wild West of devices and browsers, the experience you think you are serving and what your users see can be completely different: devices, plugins, proxies and networks can all change and degrade the quality of your experience. These problems are expensive to diagnose and fix. The first image shows the Bing page with malware, while the second image shows Bing's default experience after removal of the malware. Clarity has been used by the Bing UX team at Microsoft to delve into sessions that had negative customer satisfaction and determine what went wrong. In some cases, engineers were able to detect pages that had multiple overlays of advertisements and looked nothing like the expected experience for customers.
Looking through the layout anomalies and network request logs that Clarity provides, Bing developers were able to diagnose the cause: malware installed on the end user's machine was hijacking the page and inserting bad content. With the knowledge of what was causing these negative experiences, Bing engineers were able to design and implement changes that defended the Bing page. By doing so they increased revenue while decreasing page load time, all while giving their customers a significantly improved experience.

Cook with Manali uses Clarity to improve user engagement

Cook with Manali is a food blog, and like many other blogs dedicated to cooking, its posts begin with a story about the inspiration behind the recipe. Posts have detailed instructions to prepare the meal, high-quality photographs of the finished dish and potentially step-by-step pictures to help explain the more complicated parts. Near the bottom of the page is a shorthand recipe card summarizing ingredients, instructions and nutritional information. While this long post format enables food bloggers to emotionally connect with their readers and preemptively address any complication in the recipe, some readers would rather get straight to the recipe. When the Cook with Manali team started using Clarity, they were able to investigate real user sessions and realized that almost thirty percent of users were abandoning the page before reaching the bottom of these recipe pages, which holds important information about the recipe. In many cases, it seemed that users felt they had to scroll too far to get to the recipe they really cared about and lost patience before making it far enough down the page. The developers realized their strategy was backfiring and creating a bad experience for some of their users, prompting them to add a "Jump to Recipe" button at the top of these pages. With the new button deployed, the team was able to see traffic going up and abandonment going down.
When they dug into the new session replays, they were able to see users utilizing the new button and getting directly to the content they cared about. They saw abandonment for these pages drop to roughly ten percent, signaling a significant increase in user satisfaction. Interestingly, many users now utilize the "Jump to Recipe" button and then scroll back up to read the larger story afterwards.

How does Clarity work?

Clarity works on any HTML webpage (desktop or mobile) after adding a small piece of JavaScript to the website. This JavaScript code listens to browser events and instruments layout changes, network requests and user interactions. The instrumentation data is then uploaded and stored in the Clarity server running on Microsoft Azure.

Other capabilities coming soon

Interesting sessions are automatically bubbled up based on Clarity's AI and machine learning capabilities to help web developers review user sessions with abnormal click or scroll behavior, session length, JavaScript errors and more. Web developers can spend less time and gain more insight into their users by focusing on the sessions that Clarity marks as most relevant. Related sessions are groupings of similar sessions that are recommended based on a single session. This feature allows web developers to quickly understand the scope of a specific user behavior and find other occurrences for the same user as well as other users. Heatmaps provide a view into user behavior at an aggregate level through click/touch and scroll heatmaps. Click/touch heatmaps show the distribution of user interactions across a webpage; scroll heatmaps show how far users scroll on your webpage.

How do I get started?

Sign up at the Clarity website using your Microsoft Account! (In case you don't have one, you can sign up here.) When you create a new project, it will be added to the waitlist.
A notification will be sent when your project is approved for onboarding, and you can log in to Clarity to retrieve the uniquely generated JS code for your project. Once you have added the code to your website, you can use the Clarity dashboard to start replaying user sessions and gain insights. Please reach out to if you have any questions.

Contributing to Clarity

The Clarity team has also open sourced the JavaScript library which instruments pages to help understand user behavior on websites on GitHub. As Clarity is in active development with continuous improvements, join our community and contribute. Getting started is easy: just visit GitHub and read through our README.

Thank you,

bingbot Series: Getting the most out of Bingbot via Bing Webmaster Tools

There are multiple features in Bing Webmaster Tools that allow webmasters to check Bingbot's performance and issues on their site, provide input to Bingbot's crawl schedules, and check whether that random bot hitting their pages frequently is actually Bingbot. In part 4 of our bingbot series, Nikunj Daga, Program Manager for Bing Webmaster Tools, revisits some of the tools and features that assist webmasters in troubleshooting and optimizing Bingbot's performance on their site.

Crawl Information - Webmasters can get data about Bingbot's performance on their site in the Reports and Data section of Bing Webmaster Tools. The site activity chart on this page shows an overlapping view of total pages indexed, pages crawled and pages with crawl errors for the last six months, along with impressions and clicks data. Through this chart, webmasters can visually see whether the changes they made on their sites had any impact on page crawling. Further, to get more information on the pages with crawl errors, webmasters can go to the Crawl Information page, which provides an aggregated count of the pages with different errors that Bingbot faced, along with the list of those URLs. This makes it simple for webmasters to troubleshoot why a particular page they are looking for is not appearing in Bing search results.

Crawl Errors - In addition to checking the Crawl Information page for crawl errors, Bing Webmaster Tools also proactively notifies webmasters when Bingbot faces a significant number of issues while crawling the site. These notifications are sent to the Message Center in Bing Webmaster Tools. These alerts can also be sent through email to users who do not visit Webmaster Tools on a regular basis, so it is not a bad idea for webmasters to opt in to email communication from Bing Webmaster Tools through the Profile page.
Webmasters can set preferences for the kinds of alerts they want to receive emails about, along with the preferred contact frequency. Bingbot can face different kinds of errors while crawling a site; a detailed list, along with descriptions and recommended actions, can be found here.

Crawl Control - The Crawl Control feature allows webmasters to provide input to Bingbot about the crawl speed and timing for their site. It can be found under the “Configure My Site” section in Bing Webmaster Tools. Using this feature, you can set an hourly crawl rate for your site, telling Bingbot to crawl slowly during peak business hours and faster during off-peak hours. There are preset schedules to choose from based on the most common business hours followed across the globe. In addition to the preset schedules, webmasters also have the option to fully customize the crawl schedule based on their site's traffic pattern. Customizing the crawl pattern is easy and can be done by just dragging and clicking on the graph in the Crawl Control feature.

Fetch as Bingbot - The Fetch as Bingbot tool returns the code that Bingbot sees when it crawls a page. Webmasters can find this feature under the “Diagnostics and Tools” section and submit a request to fetch as Bingbot. Once the fetch is completed, the status changes from “Pending” to “Completed” and webmasters can see the code that Bingbot receives when it crawls the page. This is a useful feature for webmasters whose sites use dynamic content, and it is the basic check for seeing what data Bingbot gets among all the dynamic and static content on the site.

Verify Bingbot - Found under the “Diagnostics and Tools” section in Bing Webmaster Tools, the Verify Bingbot tool lets webmasters check whether a Bing user agent string appearing in their server logs is actually from Bing or not.
This can help webmasters determine if someone is hiding their true identity and attacking the site using Bing's name. It also helps webmasters who have manually configured IPs to whitelist Bingbot on their server: since Bing does not release its list of IPs, webmasters can use this tool to check whether the IPs allowed on the server belong to Bing and whether they are whitelisting the right set of IPs.

Thus, a lot can be done by webmasters to improve Bingbot's performance on their site using the features in Bing Webmaster Tools. These features were developed and have evolved over the years based on feedback we receive from the webmaster community. So, log in to Bing Webmaster Tools now to use the features and let us know what you think.

Thanks!
Nikunj Daga
Program Manager, Bing Webmaster Tools
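For readers who want to replicate what the Verify Bingbot tool does on their own servers, a common approach is a reverse-then-forward DNS check (genuine bingbot hosts resolve under search.msn.com). A minimal sketch, with the lookup functions injectable so the logic can be exercised offline:

```python
import socket

def is_verified_bingbot(ip, reverse_dns=None, forward_dns=None):
    """Reverse/forward DNS check for a claimed bingbot IP.

    The reverse lookup must yield a *.search.msn.com hostname, and that
    hostname must resolve back to the same IP. Lookup functions default
    to the socket module but can be injected for testing.
    """
    reverse_dns = reverse_dns or (lambda addr: socket.gethostbyaddr(addr)[0])
    forward_dns = forward_dns or socket.gethostbyname
    try:
        hostname = reverse_dns(ip)
    except OSError:
        return False
    if not hostname.endswith(".search.msn.com"):
        return False
    try:
        return forward_dns(hostname) == ip
    except OSError:
        return False
```

In production you would call it with just the IP from your server logs; the forward-confirmation step is what defeats spoofed reverse DNS records.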

bingbot Series: JavaScript, Dynamic Rendering, and Cloaking. Oh My!

Last week, we posted the second blog of our bingbot series: Optimizing Crawl Frequency. Today is Halloween, and like every day, our crawler (also known as a "spider") is wandering outside, browsing the world wide web, following links, seeking to efficiently discover, index and refresh the best web content for our Bing users.

Occasionally, bingbot encounters websites relying on JavaScript to render their content. Some of these sites link to many JavaScript files that need to be downloaded from the web server. In this setup, instead of making only one HTTP request per page, bingbot has to make several requests. Some sites are spider traps, with dozens of HTTP calls required to render each page! Yikes. That's not optimal, now is it?

As we shared last week at SMX East, bingbot is generally able to render JavaScript. However, bingbot does not necessarily support all the same JavaScript frameworks that are supported in the latest version of your favorite modern browser. Like other search engine crawlers, it is difficult for bingbot to process JavaScript at scale on every page of every website while minimizing the number of HTTP requests at the same time.

Therefore, in order to increase the predictability of crawling and indexing by Bing, we recommend dynamic rendering as a great alternative for websites relying heavily on JavaScript. Dynamic rendering is about detecting the user agent and rendering content differently for humans and search engine crawlers. For such sites, we encourage detecting our bingbot user agent, prerendering the content on the server side and outputting static HTML, helping us minimize the number of HTTP requests and ensuring we get the best and most complete version of your web pages every time bingbot visits your site.

Is using JavaScript for Dynamic Rendering considered Cloaking?

When it comes to rendering content specifically for search engine crawlers, we inevitably get asked whether this is considered cloaking...
and there is nothing scarier for the SEO community than getting penalized for cloaking, even during Halloween! The good news is that as long as you make a good faith effort to return the same content to all visitors, with the only difference being that the content is rendered on the server for bots and on the client for real users, this is acceptable and not considered cloaking. So if your site relies heavily on JavaScript and you want to improve your crawling and indexing on Bing, look into dynamic rendering: you will certainly benefit immensely, receiving only treats and no tricks!

Happy Halloween!
Fabrice Canel and Frédéric Dubut
Program Managers
Microsoft - Bing
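The user-agent detection step at the heart of dynamic rendering can be sketched as follows; the token list and routing rule are illustrative assumptions, not an official list of Bing user-agent strings:

```python
# Tokens to route to the prerendered path; extend with other crawlers you
# want to serve static HTML (this list is an assumption for illustration).
BOT_UA_TOKENS = ("bingbot",)

def wants_prerendered_html(user_agent):
    """Return True when the request should get the server-side prerendered
    (static HTML) version of a page instead of the client-rendered app.

    Good-faith rule: both paths must serve the same content,
    otherwise this becomes cloaking.
    """
    ua = (user_agent or "").lower()
    return any(token in ua for token in BOT_UA_TOKENS)
```

A web server or middleware would check the incoming `User-Agent` header with this predicate and either return the prerendered HTML snapshot or fall through to the normal JavaScript application.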

bingbot Series: Optimizing Crawl Frequency

Last week, we posted the first blog of our bingbot series, Maximizing Crawl Efficiency, highlighting the main goal for bingbot and its core metric: crawl efficiency. In part 2 of our bingbot series, Cheng Lu, Principal Software Engineering Manager of the crawl team, shares one example of how we've optimized our processes to maximize crawl efficiency for large websites whose content remains static, or unchanging.

Keeping indexed content current while limiting crawling of content that has not changed

When most people conduct a search, they are typically looking for the most recent content published; however, the search engine results may link to webpages that were published anywhere from days to years ago. This is a challenge, especially when searchers want to keep up with breaking news and the latest trends by accessing the most up-to-date content online. The internet and the search index are full of ghosts of yester-years past that are often resurrected by the power of search engines. For instance, I was able to retrieve the Microsoft 1996 annual report. Interesting, yes, especially if I need to do a historical report, but if I'm looking for the current annual investment report, it is not so useful. The crawler needs to have discovered, crawled and indexed the latest Microsoft annual report in order for me to find it when I search. The challenge for bingbot is that it can't fetch a web page only once: once a page is published, the search engine must fetch the page regularly to verify that the content has not been updated and that the page is not a dead link. Defining when to fetch the web page next is the hard problem we are looking to optimize with your help.

Case study: Cornell University Library - a great source of knowledge with many static, unchanging web pages

One challenge that we are trying to address is how often bingbot should crawl a site to fetch its content. The answer depends on the frequency with which the content is edited and updated.
The Cornell University Library empowers Cornell's research and learning community with deep expertise, innovative services, and outstanding collections strengthened by strategic partnerships. Their web site is a mine of relevant information, containing millions of web pages on a range of topics, from Physics to Science to Economics. Not only do they have millions of webpages and PDF files related to computer science, they even have content related to crawling and indexing websites.

Identifying patterns to allow bingbot to reduce crawl frequency

While new web pages may be posted daily, and some pages are updated on a regular basis, most of the content within the Cornell University Library is not edited for months or even years. The content is by and large static and unedited. By unedited, I mean that the HTML may change a little - for example, “copyright 2018” will become “copyright 2019” on January 1st, and the CSS and style sheet may change a little - but such changes are not relevant for updating the indexed content within Bing. The content of the page is still the same. Additionally, only a few articles are deleted every year: the library index increases in size with new and updated research, without substantially changing the content of the historically indexed research. Reviewing our crawling data, we discovered that bingbot was over-crawling the content, using more resources than necessary to check and re-check that the historical pages remained static. What we learned was that we could optimize our system to avoid fetching the same content over and over, and instead check periodically for major changes. This resulted in about a 40% crawl saving on this site! Our work identifying patterns for largely static content revealed an opportunity to reduce crawling for this “class” of websites (slow and rarely changing content), and in the following posts we'll share more learnings.
Our work on improving crawler efficiency is not done yet, and we have a lot of opportunity ahead of us to continue improving our crawler's efficiency and abilities across the hundreds of different types of data that are used to evaluate our crawler scheduling algorithms. The next step is to continue to identify patterns that apply to a multitude of websites, so we can scale our efforts and be more efficient with crawling everywhere. Stay tuned! Next in this series of posts related to bingbot and our crawler, we'll provide visibility into the main criteria involved in defining bingbot's crawl quota and crawl frequency per site. I hope you are still looking forward to learning more about how we improve crawl efficiency and, as always, we look forward to seeing your comments and feedback.

Thanks!
Cheng Lu
Principal Software Engineering Manager
Microsoft - Bing
Fabrice Canel
Principal Program Manager
Microsoft - Bing

bingbot Series: Maximizing Crawl Efficiency

At the SMX Advanced conference in June, I announced that over the next 18 months my team will focus on improving our Bing crawler, bingbot. I asked the audience to share data to help us optimize our plans. First, I want to say "Thank you" to those of you who responded and provided us with great insights. Please keep them coming! To keep you informed of the work we've done so far, we are starting this series of blog posts related to our crawler, bingbot. In this series we will share best practices, demonstrate improvements, and unveil new crawler abilities. Before drilling into details about how our team is continuing to improve our crawler, let me explain why we need bingbot and how we measure bingbot's success.

First things first: What is the goal of bingbot?

Bingbot is Bing's crawler, sometimes also referred to as a "spider". Crawling is the process by which bingbot discovers new and updated documents or content to be added to Bing's searchable index. Its primary goal is to maintain a comprehensive index updated with fresh content. Bingbot uses an algorithm to determine which sites to crawl, how often, and how many pages to fetch from each site. The goal is to minimize bingbot's crawl footprint on your web sites while ensuring that the freshest content is available. How do we do that? The algorithmic process selects URLs to be crawled by prioritizing relevant known URLs that may not be indexed yet, and URLs that have already been indexed that we are checking for updates, to ensure that the content is still valid (for example, not a dead link) and that it has not changed. We also crawl content specifically to discover links to new URLs that have yet to be discovered. Sitemaps and RSS/Atom feeds are examples of URLs fetched primarily to discover new links.

Measuring bingbot success: Maximizing crawl efficiency

Bingbot crawls billions of URLs every day.
It's a hard task to do this at scale, globally, satisfying all webmasters, web sites and content management systems, while handling site downtimes and ensuring that we aren't crawling too frequently. We've heard concerns that bingbot doesn't crawl frequently enough and that content isn't fresh within the index; at the same time, we've heard that bingbot crawls too often, causing constraints on a website's resources. It's an engineering problem that hasn't fully been solved yet. Often, the issue is in managing the frequency with which bingbot needs to crawl a site to ensure new and updated content is included in the search index. Some webmasters request to have their sites crawled daily by bingbot to ensure that Bing has the freshest version of their site in the index, whereas the majority of webmasters would prefer bingbot to crawl their site only when new URLs have been added or content has been updated and changed. The challenge we face is how to model the bingbot algorithms based on what a webmaster wants for their specific site and the frequency with which content is added or updated, and how to do this at scale. To measure how smart our crawler is, we measure bingbot's crawl efficiency. Crawl efficiency is how often we crawl and discover new and fresh content per page crawled. Our crawl efficiency north star is to crawl a URL only when content has been added (URL not crawled before) or updated (fresh on-page content or useful outbound links). The more we crawl duplicated, unchanged content, the lower our crawl efficiency metric is. Later this month, Cheng Lu, our engineering lead for the crawler team, will continue this series of blog posts by sharing examples of how crawl efficiency has improved over the last few months. I hope you are looking forward to learning more about how we improve crawl efficiency and, as always, we look forward to seeing your comments and feedback.

Thanks!
Fabrice Canel Principal Program Manager, Webmaster Tools Microsoft - Bing  
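The crawl-efficiency metric described in this post can be approximated with a simple content-hash comparison. This is an illustrative sketch, not Bing's actual implementation:

```python
import hashlib

def crawl_efficiency(fetches, seen_hashes):
    """Fraction of fetched pages that were new or changed - a simple proxy
    for the crawl-efficiency metric described above.

    `fetches` is an iterable of (url, content) pairs from one crawl pass;
    `seen_hashes` maps each URL to the content hash from its previous fetch
    and is updated in place.
    """
    useful = total = 0
    for url, content in fetches:
        total += 1
        digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
        if seen_hashes.get(url) != digest:  # new URL or updated content
            useful += 1
        seen_hashes[url] = digest
    return useful / total if total else 0.0
```

Re-crawling pages whose hashes never change drives the ratio toward zero, which is exactly the wasted work the crawl team is trying to eliminate.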

Introducing Bing AMP viewer and Bing AMP cache

In 2016, Bing joined the Accelerated Mobile Pages (AMP for short) open-source effort to help you “find” and “do” searches faster, regardless of where you are and on any device. Today, we are pleased to announce the release of the Bing AMP viewer and Bing AMP cache, enabling AMP-enabled web pages to work directly from Bing's mobile search results and allowing Bing to provide faster mobile experiences to Bing users. On Monday the 17th, we started the first phase of the global roll out of the AMP viewer and AMP carousel in the United States for the news carousel. We will continue the phased roll out to more web sites, more countries and regions, and other links in the search results pages. If you are in the United States, try it out on your mobile device by searching Bing for news-related queries and tapping the search results labelled with the AMP icon.

Advice for AMP webmasters and AMP advertisers

The AMP protocol offers the ability to cache and serve cached copies of AMP content that is published on the web, providing faster user experiences on Bing. In order to enable your AMP published content within Bing, you need to allow Bingbot (our crawler) to fetch AMP content and allow cross-origin resource sharing (CORS) for the domain. Most AMP enabled sites and advertisers have already authorized CORS sharing for the domain, but now need to add it to the allowed list as well.

Thank you,
Fabrice Canel
Principal Program Manager
Microsoft Bing
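As an illustration of the CORS allow-list requirement mentioned above, a server can echo the request origin only when it appears on an explicit allow list. The cache origin below is a hypothetical placeholder (use the domain from Bing's announcement in production), and this is a generic CORS sketch rather than the full AMP CORS protocol:

```python
ALLOWED_AMP_ORIGINS = {
    "https://www.example.com",        # your own origin (placeholder)
    "https://amp-cache.example.net",  # hypothetical AMP cache origin
}

def cors_headers(request_origin):
    """Return CORS headers for an AMP document/resource response,
    echoing the origin only when it is on the allow list."""
    if request_origin not in ALLOWED_AMP_ORIGINS:
        return {}
    return {
        "Access-Control-Allow-Origin": request_origin,
        "Vary": "Origin",
    }
```

`Vary: Origin` keeps intermediaries from caching a response tied to one origin and serving it to another.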

Anonymous URL Submission Tool Being Retired

Saying goodbye is never easy, but the time has come to announce the withdrawal of anonymous (non-signed-in) support for Bing's URL submission tool. Webmasters will still be able to log in and access the Submit URL tool in Bing Webmaster Tools, and this is easier than ever as the tool now supports Google and Facebook authentication in addition to existing Microsoft accounts. Why say goodbye? The URLs received anonymously were of far too low quality to be trusted, and webmasters prefer having more ownership of the URLs for their site. In order to use the tool, webmasters just need to log in, add and verify their site, and then navigate to the Submit URL tool within the Configure My Site menu options. In case webmasters want to use our Bing Webmaster Tools API, they have to generate an API key through Bing Webmaster Tools and follow the guidelines for its usage here. If you haven't signed up for the tool yet, please click here to sign up.

Thank you,
The Bing Webmaster Tools Team

Introducing JSON-LD Support in Bing Webmaster Tools

Bing is proud to introduce JSON-LD support in Bing Webmaster Tools, as announced at the 2018 SMX Advanced. Users can log in to Bing Webmaster Tools and validate their JSON-LD implementation through the Markup Validator tool in the Diagnostics and Tools section. With the inclusion of JSON-LD support, the Markup Validator now supports seven markup languages, including HTML Microdata, Microformats, Open Graph and RDFa.

Bing works hard to understand the content of a page, and one of the clues that Bing uses is structured data. JSON-LD, or JavaScript Object Notation for Linked Data, is a JSON-based data format that can be used to implement structured data on your site so Bing and other search engines can better understand your content. One of its advantages is that JSON-LD can be implemented without modifying the visible HTML content of your pages: it can be placed in the head, body or footer of the page. This effectively means that webmasters can design their pages as they like without having to worry about arranging the information for markup implementations. JSON-LD also makes it easy to define links and relationships between the data and entities present on your pages because it supports nested data. However, with the wide array of possibilities that comes with JSON-LD, webmasters should be careful not to put invalid or incorrect information in the markup. Remember, even though the markup is not visible on your page, it is still read by the search engines, and putting spam data in the markup can hamper your presence on them. Let us know what you think of the JSON-LD support. If you don't have a Webmaster Tools account, you can sign up today.

Thank you,
The Bing Webmaster Team
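As a sketch of what a JSON-LD block looks like in practice, the following builds a minimal schema.org Article and wraps it in the script tag that search engines read; all field values are placeholders:

```python
import json

# A minimal schema.org Article in JSON-LD; every value is a placeholder.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example headline",
    "datePublished": "2018-06-01",
    "author": {"@type": "Person", "name": "Example Author"},
}

# Embed in a page without touching the visible HTML:
script_tag = (
    '<script type="application/ld+json">'
    + json.dumps(article)
    + "</script>"
)
```

The resulting `<script>` element can sit in the head, body or footer of the page, and the nested `author` object shows how JSON-LD expresses relationships between entities.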

Intermittent Webmaster Tools API Issues Resolved

The Bing Webmaster team recently received feedback that our APIs were intermittently failing, and we deeply regret any inconvenience caused by the API failures. We recognize the frustration this may have caused. Upon investigation, we discovered a technical glitch that led to the API call failures; it is now resolved. We are very grateful to you, our users, who brought this to our attention, and thank you for your continued feedback and support.

We're Listening

Bing and Bing Webmaster Tools are actively listening to you, and we value your feedback. It's important to how we continually improve Bing, and it helps notify us of potential issues. It's easy to provide feedback: just look for the Feedback button or link at the bottom of each page, in the footer or the lower-right corner. We are using advances in technology to make it easier to quickly find what you are looking for, from answers to life's big questions to an item in an image you want to learn more about. At Bing, our goal is to help you get answers with less effort. We appreciate your feedback, and the more you can send, the more we can use it to improve Bing. Have a suggestion? Tell us! The more feedback the merrier.

The Bing Webmaster Tools Team

Introducing Social Login for Bing Webmaster Tools

Bing is actively listening to you, our customers, to improve our user experience and to understand what features you want within Bing Webmaster Tools. One of the top requests we have received is to expand the webmaster tools login capabilities to include a social login option. We are excited to begin the first phase of the global roll out of social login to webmaster tools on Friday, February 9th for webmasters in the United States. We expect to continue the roll out to webmasters in Europe, the Middle East and Africa, Latin America and Asia Pacific regions shortly thereafter. Webmasters will be able to log in to Bing Webmaster Tools using their Facebook and Google accounts in addition to their existing Microsoft account. Additionally, this means that the messages Webmaster Tools may occasionally send you about your managed properties will be sent to the email account associated with the webmaster tools account you are logged in with. The most recent messages will still be available in your Webmaster Tools Message Center. If you don't have a Webmaster Tools account, you can sign up today. Let us know what you think of the new social login option. Tweet to us @Bing.

Thank you,
The Bing Webmaster Team

