Search Engine Blogs

Accessing Bing Webmaster Tools API using cURL

Bing's Webmaster Blog -

Thank you, webmasters, for effectively using the Adaptive URL solution to notify bingbot about your website's freshest and most relevant content. But did you know you don't have to use the Bing Webmaster Tools portal to submit URLs? Bing Webmaster Tools exposes programmatic access to its APIs so webmasters can integrate URL submission into their own workflows. Here is an example using the popular command-line utility cURL that shows how easy it is to integrate the Submit URL (single) and Submit URL (batch) API endpoints. You can use the Get URL submission quota API to check the remaining daily quota for your account. The Bing API can be called from all modern languages (C#, Python, PHP and so on); however, cURL can help you prototype and test the API in minutes, and also build complete solutions with minimal effort. cURL is considered one of the most versatile tools for command-line API calls and is supported by all major Linux shells – simply run the commands below in a terminal window. If you are a Windows user, you can run cURL commands in Git Bash, the popular Git client for Windows (no need to install cURL separately, as Git Bash ships with it). If you are a Mac user, you can install cURL using a package manager such as Homebrew.

When you try the examples below, be sure to replace API_KEY with the API key string obtained from Bing Webmaster Tools > Webmaster API > Generate. Refer to the easy set-up guide for Bing's Adaptive URL submission API for more details.

Submitting new URLs – Single

curl -X POST "https://ssl.bing.com/webmaster/api.svc/json/SubmitUrl?apikey=API_KEY" -H "Content-Type: application/json" -H "charset: utf-8" -d '{"siteUrl":"https://www.example.com", "url":"https://www.example.com/about"}'

Response: {"d": null}

Submitting new URLs – Batch

curl -X POST "https://ssl.bing.com/webmaster/api.svc/json/SubmitUrlBatch?apikey=API_KEY" -H "Content-Type: application/json" -H "charset: utf-8" -d '{"siteUrl":"https://www.example.com", "urlList":["https://www.example.com/about", "https://www.example.com/projects"]}'

Response: {"d": null}

Check remaining API quota

curl "https://ssl.bing.com/webmaster/api.svc/json/GetUrlSubmissionQuota?siteUrl=https://www.example.com&apikey=API_KEY"

Response: {"d": {"__type": "UrlSubmissionQuota:#Microsoft.Bing.Webmaster.Api", "DailyQuota": 973, "MonthlyQuota": 10973}}

So, integrate the APIs today to get your content indexed in real time by Bing. Please reach out to Bing Webmaster Tools support if you face any issues. Thanks, Bing Webmaster Tools team
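Since the post notes that the API can be called from any modern language, here is a minimal Python sketch of the same batch submission and quota check. It assumes the third-party requests package is installed; the endpoints and JSON payloads are taken from the cURL examples above, and the key and site values are placeholders to replace with your own.

import requests  # third-party HTTP client, assumed to be installed (pip install requests)

API_KEY = "API_KEY"  # replace with the key from Bing Webmaster Tools > Webmaster API > Generate
SITE_URL = "https://www.example.com"

def submit_urls(urls):
    """Submit a batch of URLs via the SubmitUrlBatch endpoint shown above."""
    endpoint = "https://ssl.bing.com/webmaster/api.svc/json/SubmitUrlBatch"
    payload = {"siteUrl": SITE_URL, "urlList": urls}
    response = requests.post(
        endpoint,
        params={"apikey": API_KEY},
        json=payload,            # sends Content-Type: application/json
        timeout=30,
    )
    response.raise_for_status()  # surface HTTP errors (e.g. bad key, quota exceeded)
    return response.json()       # expected to be {"d": null} on success

def remaining_quota():
    """Check the remaining daily/monthly quota via GetUrlSubmissionQuota."""
    endpoint = "https://ssl.bing.com/webmaster/api.svc/json/GetUrlSubmissionQuota"
    response = requests.get(endpoint, params={"siteUrl": SITE_URL, "apikey": API_KEY}, timeout=30)
    response.raise_for_status()
    return response.json()["d"]

if __name__ == "__main__":
    print(remaining_quota())
    print(submit_urls(["https://www.example.com/about", "https://www.example.com/projects"]))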

Some Thoughts on Website Boundaries

Bing's Webmaster Blog -

In the coming weeks, we will update the Bing Webmaster Guidelines to make them clearer and more transparent to the SEO community. This major update will be accompanied by blog posts that share more details and context around some specific violations. In the first article of this series, we are introducing a new penalty to address “inorganic site structure” violations. This penalty will apply to malicious attempts to obfuscate website boundaries, which covers some old attack vectors (such as doorways) and new ones (such as subdomain leasing).

What is a website anyway?

One of the most fascinating aspects of building a search engine is developing the infrastructure that gives us a deep understanding of the structure of the web. We’re talking trillions and trillions of URLs, connected with one another by hyperlinks. The task is herculean, but fortunately we can use some logical grouping of these URLs to make the problem more manageable – and understandable by us, mere humans! The most important of these groupings is the concept of a “website”. We all have some intuition of what a website is. For reference, Wikipedia defines a website as “a collection of related network web resources, such as web pages and multimedia content, which are typically identified with a common domain name.” It is indeed very typical for the boundary of a website to be the domain name. For example, everything that lives under the xbox.com domain name is a single website.

Fig. 1 – Everything under the same domain name is part of the same website.

A common alternative is the case of a hosting service where each subdomain is its own website, such as wordpress.com or blogspot.com. And there are some (less common) cases where each subdirectory is its own website, similar to what GeoCities offered in the late 90s.

Fig. 2 – Each subdomain is its own separate website.

Why does it matter?

Some fundamental algorithms used by search engines differentiate between URLs that belong to the same website and URLs that don’t. For example, it is well known that most algorithms based on the link graph propagate link value differently depending on whether a link is internal (same site) or external (cross-site). These algorithms also use site-level signals (among many others) to infer the relevance and quality of content. That’s why pages on a very trustworthy, high-quality website tend to rank higher and more reliably than others, even if those pages are new and haven’t accumulated many page-level signals.

When things go wrong

Stating the obvious, we can’t have people manually review billions of domains in order to assess what constitutes a website. To solve this problem, like many of the other problems we need to solve at the scale of the web, we developed sophisticated algorithms to determine website boundaries. The algorithm gets it right most of the time. Occasionally it gets it wrong, either conflating two websites into one or viewing a single website as two different ones. And sometimes there’s no obvious answer, even for humans! For example, if your business operates in both the US and the UK, with content hosted on two separate domains (a .com domain and a .co.uk domain, respectively), you can be seen as running either one or two websites depending on how independent your US and UK entities are, how much content is shared across the two domains, how much they link to each other, and so on.
However, when we reviewed sample cases where the algorithm got it wrong, we noticed that the most common root cause was that the website owner actively tried to misrepresent the website boundary. It can indeed be very tempting to try to fool the algorithm. If your internal links are viewed as external, you can get a nice rank boost. And if you can propagate some of the site-level signals to pages that don’t technically belong to your website, those pages can get an unfair advantage.

Making things right

In order to maintain the quality of our search results while being transparent to the SEO community, we are introducing new penalties to address “inorganic site structure”. In short, creating a website structure that actively misrepresents your website boundaries is going to be considered a violation of the Bing Webmaster Guidelines and will potentially result in a penalty. Some “inorganic site structure” violations were already covered by other categories, whereas others were not. To better understand what active misrepresentation looks like, let’s look at three examples.

PBNs and other link networks

While not all link networks misrepresent website boundaries, there are many cases where a single website is artificially split across many different domains, all cross-linking to one another, for the obvious purpose of rank boosting. This is particularly true of PBNs (private blog networks).

Fig. 3 – All these domains are effectively the same website.

This kind of behavior is already in violation of our link policy. Going forward, it will also be in violation of our “inorganic site structure” policy and may receive additional penalties.

Doorways and duplicate content

Doorways are pages that are overly optimized for specific search queries, but which only redirect or point users to a different destination. The typical situation is someone spinning up many different sites hosted under different domain names, each targeting its own set of search queries but all redirecting to the same destination or hosting the same content.

Fig. 4 – All these domains are effectively the same website (again).

Again, this kind of behavior is already in violation of our webmaster guidelines. In addition, it is also a clear-cut example of “inorganic site structure”, since there is ultimately only one real website, but the webmaster tried to make it look like several independent websites, each specialized in its own niche. Note that we will be looking for malicious intent before flagging sites in violation of our “inorganic site structure” policy. We acknowledge that duplicate content is unavoidable (e.g. HTTP vs. HTTPS); however, there are simple ways to declare one website or destination as the source of truth, whether it’s redirecting duplicate pages with HTTP 301 or adding canonical tags pointing to the destination. Violators, on the other hand, will generally implement none of these, or will instead use sneaky redirects.

Subdomain or subfolder leasing

Over the past few months, we heard concerns from the SEO community about the growing practice of hosting third-party content, or letting a third party operate a designated subdomain or subfolder, generally in exchange for compensation. This practice, which some people call “subdomain (or subfolder) leasing”, tends to blur website boundaries. Most of the domain is a single website, except for a single subdomain or subfolder, which is a separate website operated by a third party.
In most cases that we reviewed, the subdomain had very little visibility for direct navigation from the main website. Concretely, there were very few links from the main domain to the subdomain, and these links were generally tucked all the way at the bottom of the main domain’s pages or in other obscure places. Therefore, the intent was clearly to benefit from site-level signals, even though the content on the subdomain had very little to do with the content on the rest of the domain.

Fig. 5 – The domain is mostly a single website, with the exception of one subdomain.

Some people in the SEO community argue that it’s fair game for a website to monetize its reputation by letting a third party buy and operate from a subdomain. However, in this case the practice equates to buying ranking signals, which is not much different from buying links. Therefore, we decided to consider “subdomain leasing” a violation of our “inorganic site structure” policy when it is clearly used to bring a completely unrelated third-party service into the website boundary, for the sole purpose of leaking site-level signals to that service. In most cases, the penalties issued for that violation will apply only to the leased subdomain, not the root domain.

Your responsibility as domain owner

This article is also an opportunity to remind domain owners that they are ultimately responsible for the content hosted under their domain, regardless of the website boundaries that we identify. This is particularly true when subdomains or subfolders are operated by different entities. While clear website boundaries will prevent negative signals caused by a single bad actor from leaking to other content hosted under the same domain, the overall domain reputation will be affected if a disproportionate number of websites end up in violation of our webmaster guidelines. Taking an extreme case, if you offer free hosting on your subdomains and 95% of your subdomains are flagged as spam, we will expand penalties to the entire domain, even if the root website itself is not spam. Another unfortunate case is hacked sites. Once a website is compromised, it is typical for hackers to create subdirectories containing spam content, sometimes unbeknownst to the legitimate owner. When we detect this case, we generally penalize the entire website until it is cleaned up.

Learning from you

If you believe you have been unfairly penalized, you can contact Bing Webmaster Support and file a reconsideration request. Please document the situation as thoroughly and transparently as possible, listing all the domains involved. However, we cannot guarantee that we will lift the penalty. Your feedback is valuable to us! Clarifying our existing duplicate content policy and our stance on subdomain leasing were two pieces of feedback we heard from the SEO community, and we hope this article addressed both. As we are in the middle of a major update of the Bing Webmaster Guidelines, please feel free to reach out to us and share feedback on Twitter or Facebook.

Thank you,
Frederic Dubut and the Bing Webmaster Tools Team

The new evergreen Bingbot simplifying SEO by leveraging Microsoft Edge

Bing's Webmaster Blog -

Today we’re announcing that Bing is adopting Microsoft Edge as the Bing engine to run JavaScript and render web pages. Doing so will create less fragmentation of the web and ease Search Engine Optimization (SEO) for all web developers. As you may already know, the next version of Microsoft Edge is adopting the Chromium open source project. This update means Bingbot will be evergreen, as we are committing to regularly update our web page rendering engine to the most recent stable version of Microsoft Edge.

Easing Search Engine Optimization

By adopting Microsoft Edge, Bingbot will now render all web pages using the same underlying web platform technology already used today by Googlebot, Google Chrome, and other Chromium-based browsers. This will make it easy for developers to ensure their websites and their Content Management Systems work across all these solutions without having to spend time investigating each solution in depth. By disclosing our new Bingbot web page rendering technology, we are ensuring fewer SEO compatibility problems moving forward and increasing satisfaction in the SEO community. If your feedback can benefit the greater SEO community, Bing and Edge will propose and contribute to the open source Chromium project to make the web better for all of us. Head to GitHub to check out our explainers!

What happens next

Over the next few months, we will be gradually switching to Microsoft Edge “under the hood”. The key aspects of this evolution will be transparent for most sites. We may change our bingbot crawler user agent as appropriate to allow rendering on some sites. For most websites, there is nothing you need to worry about, as we will carefully test that they render correctly before switching them to Microsoft Edge. We invite you to install and test Microsoft Edge and register your site with Bing Webmaster Tools to get insights about your site, to be notified if we detect issues, and to investigate your site using our upcoming tools based on our new rendering engine. We look forward to sharing more details in the future. We are excited about the opportunity to be an even more active part of the SEO community and to continue to make the web better for everyone, including all search engines.

Thanks,
Fabrice Canel
Principal Program Manager
Microsoft – Bing

Import sites from Search Console to Bing Webmaster Tools

Bing's Webmaster Blog -

At Bing Webmaster Tools, we actively listen to the needs of webmasters. Verifying a website’s ownership has been reported as a pain point in Bing Webmaster Tools. To simplify this process, we recently introduced a new method for webmasters to verify sites in Bing Webmaster Tools. Webmasters can now import their verified sites from Google Search Console into Bing Webmaster Tools. The imported sites will be auto-verified, eliminating the need to go through the manual verification process. Both Bing Webmaster Tools and Google Search Console use similar methods to verify the ownership of a website. Using this new functionality, webmasters can log into their Google Search Console account and import all the verified sites and their corresponding sitemaps into their Bing Webmaster Tools account. Webmasters just need to follow four simple steps to import their sites:

Step 1: Sign in to your Bing Webmaster Tools account or create a new one here.
Step 2: Navigate to the My Sites page on Bing Webmaster Tools and click Import.
Step 3: Sign in with your Google Search Console account and click Allow to give Bing Webmaster Tools access to your list of verified sites and sitemaps.
Step 4: After authentication, Bing Webmaster Tools will display the list of verified sites present in your Google Search Console account, along with the number of sitemaps and the corresponding role for each site. Select the sites you want to add to Bing Webmaster Tools and click Import.

Webmasters can import multiple sites from multiple Google Search Console accounts. On successful completion, the selected sites will be added and automatically verified in Bing Webmaster Tools. Please note that it might take up to 48 hours for traffic data to appear for the newly verified websites. A maximum of 100 websites can be imported in one go; follow the steps above again if you want to add more than 100 sites. The limit of 1,000 sites per Bing Webmaster Tools account still applies. Bing Webmaster Tools will periodically validate your site ownership status by syncing with your Google Search Console account. Therefore, your Bing Webmaster Tools account needs ongoing access to your Google Search Console account. If access to your Google Search Console account is revoked, you will have to either import your sites again or verify your sites using other verification methods. We hope that both the Import from Google Search Console and Domain Connect verification methods will make the onboarding process easier for webmasters. We encourage you to sign up and leverage Bing Webmaster Tools to help drive more users to your sites.

We want to hear from you! As a reminder, you can always reach out to us and share feedback on Twitter and Facebook. If you encounter issues using this solution, please raise a service ticket with our support team.

Thanks!
Bing Webmaster Tools team

Introducing Auto-DNS verification in the new Search Console

Google Webmaster Central Blog -

Back in February, we announced domain-wide data in Search Console to give site owners a comprehensive view of their site, removing the need to switch between different properties to get the full picture of your data. We’ve seen lots of positive reactions from users who verified domain properties. Common feedback we heard from users is that before moving to domain properties they were underestimating their traffic, and the new method helped them understand their aggregated clicks and impressions data more effectively. When we asked Domain property users about their satisfaction with the feature, almost all of them seem to be satisfied. Furthermore, most of these users reported that they find domain properties more useful than the traditional URL-prefix properties. However, changing a DNS record is not always trivial, especially for small and medium businesses. We heard that the main challenge preventing site owners from switching to Domain properties is getting their domain verified. To help with this challenge, we collaborated with various domain name registrars to automate part of the verification flow. The flow will guide you through the necessary steps to update your registrar configuration so that your DNS record includes the verification token we provide. This will make the verification process a lot easier.

How to use Auto-DNS verification

To verify your domain using the new flow, click ‘Add property’ from the property selector (the drop-down at the top of the Search Console sidebar). Then, choose the ‘Domain’ option. The system will guide you through a series of steps, including a visit to the registrar site where you need to apply changes – there will be fewer and easier steps than before for you to go through. You can learn more about verifying your site at the Help Center.

Image: Auto-DNS verification flow

We hope you can use this new capability and gain ownership of your Domain property today. As always, please let us know if there is anything we can do to improve via the product feedback button, the Webmasters community, or by mentioning us on Twitter.

Posted by Ruty Mundel, Search Console engineering team

Minor cleaning up in the Search Console API

Google Webmaster Central Blog -

With the move to the new Search Console, we've decided to clean up some parts of the Search Console API as well. In the Search Analytics API, going forward we'll no longer support these Android app search appearance types: Is Install, Is App Universal, and Is Opened. Since these appearance types are no longer used in the UI, they haven't been populated with data recently. Going forward, we won't be showing these types at all through the API. Additionally, for the Sitemaps API, we're no longer populating data on indexing status of submitted sitemap files in the "Indexed" field. We're still committed to the Search Console API. In particular, we're working on updating the Search Console API to the new Search Console. We don't have any specific timeframes to share at the moment, but stay tuned to find out more!

Posted by Ziv Hodak, Search Console product manager
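For readers who pull these reports programmatically, here is a minimal Python sketch of a Search Analytics API call using the google-api-python-client library and the "webmasters" v3 service. The credentials object, site URL, and date range are placeholder assumptions; a real integration would also handle OAuth consent and pagination.

from googleapiclient.discovery import build  # pip install google-api-python-client

# `creds` is assumed to be an authorized OAuth2 credentials object for the
# Search Console read-only scope (https://www.googleapis.com/auth/webmasters.readonly).
def top_queries(creds, site_url="https://www.example.com/"):
    service = build("webmasters", "v3", credentials=creds)
    request_body = {
        "startDate": "2019-06-01",
        "endDate": "2019-06-30",
        "dimensions": ["query"],  # other options include "page", "device", "searchAppearance"
        "rowLimit": 25,
    }
    response = service.searchanalytics().query(siteUrl=site_url, body=request_body).execute()
    # Each row carries clicks, impressions, CTR and position for the dimension values.
    return response.get("rows", [])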

Bing Webmaster Tools simplifies site verification using Domain Connect

Bing's Webmaster Blog -

In order to submit site information to Bing, get performance reports, or access diagnostic tools, webmasters need to verify their site ownership in Bing Webmaster Tools. Traditionally, Bing Webmaster Tools has supported three verification options:

Option 1: XML file authentication
Option 2: Meta tag authentication
Option 3: Add a CNAME record to DNS

Options 1 and 2 require the webmaster to access the site's source code to complete the verification. With Option 3, the webmaster can avoid touching the source code but needs access to the domain hosting account to edit the CNAME record so that it holds the verification code provided by Bing Webmaster Tools (a DNS-check sketch for this traditional route appears at the end of this post).

To simplify Option 3, we are announcing support for the Domain Connect open standard, which allows webmasters to seamlessly verify their site in Bing Webmaster Tools. Domain Connect is an open standard that makes it easy for a user to configure DNS for a domain running at a DNS provider (e.g. GoDaddy, 1&1 IONOS, etc.) to work with a service running at an independent service provider (e.g. Bing, O365, etc.). The protocol presents a simple experience to the user, isolating them from the details and complexity of DNS settings. Bing Webmaster Tools verification using Domain Connect is already live for users whose domains are hosted with supported DNS providers, and Bing Webmaster Tools will gradually integrate this capability with other DNS providers that support the Domain Connect open standard.

Quick guide on how to use the Domain Connect feature to verify your site in Bing Webmaster Tools:

Step 1: Open a Bing Webmaster Tools account. You can open a free Bing Webmaster Tools account by going to the Bing Webmaster Tools sign-in or sign-up page. You can sign up using a Microsoft, Google or Facebook account.

Step 2: Add your website. Once you have a Bing Webmaster Tools account, you can add sites to your account. You can do so by entering the URL of your site into the Add a Site input box and clicking Add.

Step 3: Check if your site is supported by the Domain Connect protocol. When you add the website information, Bing Webmaster Tools will do a background check to identify whether that domain/website is hosted on a DNS provider that has integrated the Domain Connect solution with Bing Webmaster Tools. The corresponding view will be shown if the site is supported. If the site is not supported by the Domain Connect protocol, the user will see the default verification options mentioned at the top of this blog.

Step 4: Verify using DNS provider credentials. After clicking Verify, the user will be redirected to the DNS provider's site. The webmaster should sign in using the account credentials associated with the domain/website under verification. On successful sign-in, the site will be verified by Bing Webmaster Tools within a few seconds. In certain cases, it may take longer for the DNS provider to send the site ownership signal to the Bing Webmaster Tools service.

Using the new verification option will significantly reduce the time taken and simplify the site verification process in Bing Webmaster Tools. We encourage you to try out this solution and get more users for your sites on Bing via Bing Webmaster Tools. In case you face any challenges using this solution, you can raise a service ticket with our support team. We are building another solution to further simplify the site verification process and help webmasters easily add and verify their sites in Bing Webmaster Tools.
Watch this space for more!

Additional references:
https://www.plesk.com/extensions/domain-connect/
https://www.godaddy.com/engineering/2019/04/25/domain-connect/

Thanks!
Bing Webmaster Tools team
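For webmasters still using the traditional Option 3, here is a minimal sketch that checks whether a verification CNAME record is visible in DNS, assuming the third-party dnspython package. The verification hostname and target below are hypothetical placeholders; substitute the exact values shown in your Bing Webmaster Tools account.

import dns.resolver  # pip install dnspython

# Hypothetical record: the CNAME option asks you to point a verification
# hostname at a Bing-provided target. Both values below are placeholders.
VERIFICATION_HOST = "1234567890abcdef.www.example.com"
EXPECTED_TARGET = "verify.bing.com."

def cname_is_in_place():
    try:
        answers = dns.resolver.resolve(VERIFICATION_HOST, "CNAME")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False  # record not published yet, or not visible at this resolver
    return any(str(rdata.target).lower() == EXPECTED_TARGET for rdata in answers)

if __name__ == "__main__":
    print("CNAME present:", cname_is_in_place())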

You #AskGoogleWebmasters, we answer

Google Webmaster Central Blog -

We love to help folks make awesome websites. For a while now, we've been answering questions from developers, site-owners, webmasters, and of course SEOs in our office hours hangouts, in the help forums, and at events. Recently, we've (re-)started answering your questions in a video series called #AskGoogleWebmasters on our YouTube channel.  (At Google, behind the scenes, during the recording of one of the episodes.) When we started with the webmaster office-hours back in 2012, we thought we'd be able to get through all questions within a few months, or perhaps a year. Well ... the questions still haven't stopped -- it's great to see such engagement when it comes to making great websites!  To help make it a bit easier to find answers, we've started producing shorter videos answering individual questions. Some of the questions may seem fairly trivial to you, others don't always have simple answers, but all of them are worth answering. Curious about the first episodes? Check out the videos below and the playlist for all episodes! To ask a question, just use the hashtag #AskGoogleWebmasters on Twitter. While we can't get to all submissions, we regularly pick up the questions there to use in future episodes. We pick questions primarily about websites & websearch, which are relevant to many sites. Want to stay in the loop? Make sure to subscribe to our channel. If you'd like to discuss the questions or other important webmaster topics, feel free to drop by our webmaster help forums and chat with the awesome experts there.  Posted by John Mueller, Google Switzerland

When indexing goes wrong: how Google Search recovered from indexing issues & lessons learned since.

Google Webmaster Central Blog -

Most of the time, our search engine runs properly. Our teams work hard to prevent technical issues that could affect our users who are searching the web, or webmasters whose sites we index and serve to users. Similarly, the underlying systems that we use to power the search engine also run as intended most of the time. When small disruptions happen, they are largely not visible to anyone except our teams who ensure that our products are up and running. However, like all complex systems, sometimes larger outages can occur, which may lead to disruptions for both users and website creators. In the last few months, such a situation occurred with our indexing systems, which had a ripple effect on some other parts of our infrastructure. While we worked as quickly as possible to remedy the situation, we apologize for the disruption, as our goal is to continuously provide high-quality products to our users and to the web ecosystem. Since then, we took a closer, careful look into the situation. In the process, we learned a few lessons that we'd like to share with you today. In this blog post, we will go into more detail about what happened, clarify how we plan to communicate better if such things happen in the future, and remind website owners of the channels they can use to communicate with us.

So, what happened a few months ago?

In April, we had several issues related to our index. The Search index is the database that holds the hundreds of billions of web pages that we crawled on the web and that we think could answer some of our users’ queries. When a user enters a query in the Google search engine, our ranking algorithms sort through those pages in our Search index to find the most relevant, useful results in a fraction of a second. Here is more information on what happened.

1. The indexing issue

To start off, we temporarily lost part of the Search index. Wait... what? What do you mean, “lost part of the index”? Is that even possible? Basically, when serving search results to users, to accelerate the speed of the service, the query of the user only “travels” as far as the closest of our data centers supporting the Google Search product, from which the Search Engine Results Page (SERP) is generated. So when there are modifications to the composition of the index (some pages added or removed, documents merged, or other types of data modification), those modifications need to be reflected in all of those data centers. The consequence is that users all over the world are consistently served pages from the most recent version of the index.

Google owns and operates data centers (like the one pictured above) around the world, to keep our products running 24 hours a day, 7 days a week – source.

Keeping the index unified across all those data centers is a non-trivial task. For large user-facing services, we may deploy updates by starting in one data center and expand until all relevant data centers are updated. For sensitive pieces of infrastructure, we may extend a rollout over several days, interleaving them across instances in different geographic regions (source).

So, as we pushed some planned changes to the Search index, on April 5th parts of the deployment system broke, on a Friday no less! More specifically: as we were updating the index over some of our data centers, a small number of documents ended up being accidentally dropped from the index.
Hence: “we lost part of the index.” Luckily, our on-call engineers caught the issue pretty quickly, at the same time as we started picking up chatter on social media (thanks to everyone who notified us over that weekend!). As a result, we were able to start reverting the Search index to its previous stable state in all data centers only a few hours after the issue was uncovered (we keep back-ups of our indexes just in case such events happen). We communicated on Sunday, April 7th that we were aware of the issue and that things were starting to get back to normal. As data centers progressively reverted back to a stable index, we continued updating on Twitter (on April 8th and April 9th), until we were confident that all data centers were fully back to a complete version of the index on April 11th.

2. The Search Console issue

Search Console is the set of tools and reports any webmaster can use to access data about their website’s performance in Search. For example, it shows how many impressions and clicks a website gets in the organic search results every day, or information on which pages of a website are included in and excluded from the Search index. As a consequence of the Search index having the issues we described above, Search Console also started to show inconsistencies. This is because some of the data that surfaces in Search Console originates from the Search index itself:
- the Index Coverage report depends on the Search index being consistent across data centers;
- when we store a page in the Search index, we can annotate the entry with key signals about the page, like the fact that the page contains rich results markup, for example. Therefore, an issue with the Search index can have an impact on the Rich Results reports in Search Console.

Basically, many individual Search Console reports read data from a dedicated database. That database is partially built using information that comes from the Search index. As we had to revert back to a previous version of the Search index, we also had to pause the updating of the Search Console database. This resulted in plateauing data for some reports (and flakiness in others, like the URL inspection tool).

Index coverage report for indexed pages, which shows an example of the data freshness issues in Search Console in April 2019, with a longer time between two updates than is usually observed.

Because the whole Search index issue took several days to roll back (see the explanation above), we were delayed in fixing the Search Console database until a few days later, after the indexing issues were resolved. We communicated on April 15th (tweet) that Search Console was having trouble and that we were working on fixing it, and we completed our fixes on April 28th (the day on which the reports started gathering fresh data again, see the graph above). We communicated on Twitter on April 30th that the issue was resolved (tweet).

3. Other issues unrelated to the main indexing bug

Google Search relies on a number of systems that work together. While some of those systems can be tightly linked to one another, in some cases different parts of the system experience unrelated problems around the same time. In the present case, for example, around the same time as the main indexing bug explained above, we also had brief problems gathering fresh Google News content. Additionally, while rendering pages, certain URLs started to redirect Googlebot to other unrelated pages.
These issues were entirely unrelated to the indexing bug, and were quickly resolved (tweet 1 & tweet 2).

Our communication and how we intend to do better

In addition to communicating on social media (as highlighted above) during those few weeks, we also gave webmasters more details in two other channels: Search Console and the Search Console Help Center.

In the Search Console Help Center: We updated our “Data anomalies in Search Console” help page after the issue was fully identified. This page is used to communicate information about data disruptions to our Search Console service when the impact affects a large number of website owners.

In Search Console: Because we know that not all our users read social media or the external Help Center page, we also added annotations on Search Console reports to notify users that the data might not be accurate (see image below). We added this information after the resolution of the bugs. Clicking on “see here for more details” sends users to the “Data anomalies” page in the Help Center.

Index coverage report for indexed pages, which shows an example of the data annotations that we can include to notify users of specific issues.

Communications going forward

When things break at Google, we have a strong “postmortem” culture: creating a document to debrief on the breakage and to try to prevent it from happening again. The whole process is described in more detail on the Google Site Reliability Engineering website. In the wake of the April indexing issues, we included in the postmortem how to better communicate with webmasters in case of large system failures. Our key decisions were:
- Explore ways to more quickly share information within Search Console itself about widespread bugs, and have that information serve as the main point of reference for webmasters to check in case they suspect outages.
- More promptly post to the Search Console data anomalies page, when relevant (if the disturbance is going to be seen over the long term in Search Console data).
- Continue tweeting as quickly as we can about such issues to reassure webmasters that we’re aware and that the issue is on our end.

Those commitments should make potential future similar situations more transparent for webmasters as a whole.

Putting our resolutions into action: the “new URLs not indexed” case study

On May 22nd, we tested our new communications strategy as we experienced another issue. Here’s what happened: while processing certain URLs, our duplicate management system ran out of memory after a planned infrastructure upgrade, which caused all incoming URLs to stop being processed. Here is a timeline of how we thought about communications, following the three points highlighted above:
- We noticed the issue (around 5:30am California time, May 22nd).
- We tweeted about the ongoing issue (around 6:40am California time, May 22nd).
- We tweeted about the resolution (around 10pm California time, May 22nd).
- We evaluated updating the “Data anomalies” page in the Help Center, but decided against it since we did not expect any long-term impact on the majority of webmasters’ Search Console data.

The confusion that this issue created for many confirmed our earlier conclusion that we need a way to signal more clearly in Search Console itself that there might be a disruption to one of our systems which could impact webmasters. Such a solution might take longer to implement. We will communicate on this topic in the future, as we have more news. Last week, we also had another indexing issue.
As with May 22nd, we tweeted to let people know there was an issue, that we were working to fix it, and when the issue was resolved.

How to debug and communicate with us

We hope that this post brings more clarity to how our systems are complex and can sometimes break, and that it also helps you understand how we communicate about these matters. But while this post focuses on a widespread breakage of our systems, it’s important to keep in mind that most website indexing issues are caused by an individual website’s configuration, which can create difficulties for Google Search to index that website properly. For those cases, all webmasters can debug issues using Search Console and our Help Center. After doing so, if you still think that an issue is not coming from your site or don’t know how to resolve it, come talk to us and our community; we always want to hear feedback from our users. Here is how to signal an issue to us:
- Check our Webmaster Community; sometimes other webmasters have highlighted an issue that also impacts your site.
- In person! We love contact, so come and talk to us at events (calendar).
- Within our products! The Search Console feedback tool is very useful to our teams.
- Twitter and YouTube!

Posted by Vincent Courson, Google Search Outreach

Googlebot evergreen rendering in our testing tools

Google Webmaster Central Blog -

Today we updated most of our testing tools so they are using the evergreen Chromium renderer. This affects our testing tools like the mobile-friendly test or the URL inspection tool in Search Console. In this post we look into what this means and what went into making this update happen.

The evergreen Chromium renderer

At Google I/O this year we were happy to announce the new evergreen Googlebot. At its core, the update is a switch from Chrome 41 as the rendering engine to the latest stable Chromium. Googlebot is now using the latest stable Chromium to run JavaScript and render pages. We will continue to update Googlebot along with stable Chromium, hence we call it "evergreen".

A JavaScript-powered demo website staying blank in the old Googlebot but working fine in the new Googlebot.

What this means for your websites

We are very happy to bring the latest features of the web platform not only to Googlebot but also to the tools that let you see what Googlebot sees. This means websites using ES6+, Web Components, and 1000+ new web platform features are now rendered with the latest stable Chromium, both in Googlebot and in our testing tools.

While the previous version of the mobile-friendly test doesn't show the page content, the new version does.

What the update changes in our testing tools

Our testing tools reflect how Googlebot processes your pages as closely as possible. With the update to the new Googlebot, we had to update them to use the same renderer as Googlebot. The change will affect the rendering within the following tools:
- Search Console's URL inspection tool
- Mobile-friendly test
- Rich results test
- AMP test

We tested these updates and, based on the feedback, we have switched the tools listed above to the new evergreen Googlebot. A lot of the feedback came from Googlers and the community. Product Experts and Google Developer Experts helped us make sure the update works well.

Note: The new Googlebot still uses the same user agent as before the update. There will be more information about an update to the user agent in the near future. For now, Googlebot's user agent and the user agent used in the testing tools do not change.

We are excited about this update and are looking forward to your feedback and questions on Twitter, the webmaster forum, or in our webmaster office hours.

Posted by Zoe Clifford, Software Engineer in the Web Rendering Service team & Martin Splitt, friendly internet fairy at Google WTA

What webmasters should know about Google’s “core updates”

Google Webmaster Central Blog -

Each day, Google usually releases one or more changes designed to improve our search results. Most aren’t noticeable but help us incrementally continue to improve. Sometimes, an update may be more noticeable. We aim to confirm such updates when we feel there is actionable information that webmasters, content producers or others might take in relation to them. For example, when our “Speed Update” happened, we gave months of advance notice and advice. Several times a year, we make significant, broad changes to our search algorithms and systems. We refer to these as “core updates.” They’re designed to ensure that overall, we’re delivering on our mission to present relevant and authoritative content to searchers. These core updates may also affect Google Discover. We confirm broad core updates because they typically produce some widely notable effects. Some sites may note drops or gains during them. We know those with sites that experience drops will be looking for a fix, and we want to ensure they don’t try to fix the wrong things. Moreover, there might not be anything to fix at all.

Core updates & reassessing content

There’s nothing wrong with pages that may perform less well in a core update. They haven’t violated our webmaster guidelines nor been subjected to a manual or algorithmic action, as can happen to pages that do violate those guidelines. In fact, there’s nothing in a core update that targets specific pages or sites. Instead, the changes are about improving how our systems assess content overall. These changes may cause some pages that were previously under-rewarded to do better. One way to think of how a core update operates is to imagine you made a list of the top 100 movies in 2015. A few years later, in 2019, you refresh the list. It’s going to naturally change. Some new and wonderful movies that never existed before will now be candidates for inclusion. You might also reassess some films and realize they deserved a higher place on the list than they had before. The list will change, and films previously higher on the list that move down aren’t bad. There are simply more deserving films that are coming before them.

Focus on content

As explained, pages that drop after a core update don’t have anything wrong to fix. That said, we understand those who do less well after a core update may still feel they need to do something. We suggest focusing on ensuring you’re offering the best content you can. That’s what our algorithms seek to reward. A starting point is to revisit the advice we’ve offered in the past on how to self-assess if you believe you’re offering quality content.
We’ve updated that advice with a fresh set of questions to ask yourself about your content.

Content and quality questions
- Does the content provide original information, reporting, research or analysis?
- Does the content provide a substantial, complete or comprehensive description of the topic?
- Does the content provide insightful analysis or interesting information that is beyond obvious?
- If the content draws on other sources, does it avoid simply copying or rewriting those sources and instead provide substantial additional value and originality?
- Does the headline and/or page title provide a descriptive, helpful summary of the content?
- Does the headline and/or page title avoid being exaggerated or shocking in nature?
- Is this the sort of page you’d want to bookmark, share with a friend, or recommend?
- Would you expect to see this content in or referenced by a printed magazine, encyclopedia or book?

Expertise questions
- Does the content present information in a way that makes you want to trust it, such as clear sourcing, evidence of the expertise involved, and background about the author or the site that publishes it, such as through links to an author page or a site’s About page?
- If you researched the site producing the content, would you come away with an impression that it is well-trusted or widely recognized as an authority on its topic?
- Is this content written by an expert or enthusiast who demonstrably knows the topic well?
- Is the content free from easily verified factual errors?
- Would you feel comfortable trusting this content for issues relating to your money or your life?

Presentation and production questions
- Is the content free from spelling or stylistic issues?
- Was the content produced well, or does it appear sloppy or hastily produced?
- Is the content mass-produced by or outsourced to a large number of creators, or spread across a large network of sites, so that individual pages or sites don’t get as much attention or care?
- Does the content have an excessive amount of ads that distract from or interfere with the main content?
- Does content display well for mobile devices when viewed on them?

Comparative questions
- Does the content provide substantial value when compared to other pages in search results?
- Does the content seem to be serving the genuine interests of visitors to the site, or does it seem to exist solely because someone was attempting to guess what might rank well in search engines?

Beyond asking yourself these questions, consider having others you trust, but who are unaffiliated with your site, provide an honest assessment. Also consider an audit of the drops you may have experienced. What pages were most impacted and for what types of searches? Look closely at these to understand how they’re assessed against some of the questions above.

Get to know the quality rater guidelines & E-A-T

Another resource for advice on great content is to review our search quality rater guidelines. Raters are people who give us insights on whether our algorithms seem to be providing good results, a way to help confirm our changes are working well. It’s important to understand that search raters have no control over how pages rank. Rater data is not used directly in our ranking algorithms. Rather, we use them as a restaurant might use feedback cards from diners. The feedback helps us know if our systems seem to be working. If you understand how raters learn to assess good content, that might help you improve your own content.
In turn, you might perhaps do better in Search. In particular, raters are trained to understand if content has what we call strong E-A-T. That stands for Expertise, Authoritativeness and Trustworthiness. Reading the guidelines may help you assess how your content is doing from an E-A-T perspective and what improvements to consider. Here are a few articles written by third parties who share how they’ve used the guidelines as advice to follow:
- E-A-T and SEO, from Marie Haynes
- Google Updates Quality Rater Guidelines Targeting E-A-T, Page Quality & Interstitials, from Jennifer Slegg
- Leveraging E-A-T for SEO Success, presentation from Lily Ray
- Google’s Core Algorithm Updates and The Power of User Studies: How Real Feedback From Real People Can Help Site Owners Surface Website Quality Problems (And More), from Glenn Gabe
- Why E-A-T & Core Updates Will Change Your Content Approach, from Fajr Muhammad

Recovering and more advice

A common question after a core update is: how long does it take for a site to recover, if it improves content? Broad core updates tend to happen every few months. Content that was impacted by one might not recover – assuming improvements have been made – until the next broad core update is released. However, we’re constantly making updates to our search algorithms, including smaller core updates. We don’t announce all of these because they’re generally not widely noticeable. Still, when released, they can cause content to recover if improvements warrant it. Do keep in mind that improvements made by site owners aren’t a guarantee of recovery, nor do pages have any static or guaranteed position in our search results. If there’s more deserving content, that will continue to rank well with our systems. It’s also important to understand that search engines like Google do not understand content the way human beings do. Instead, we look for signals we can gather about content and understand how those correlate with how humans assess relevance. How pages link to each other is one well-known signal that we use. But we use many more, which we don’t disclose, to help protect the integrity of our results. We test any broad core update before it goes live, including gathering feedback from the aforementioned search quality raters, to see if how we’re weighing signals seems beneficial. Of course, no improvement we make to Search is perfect. This is why we keep updating. We take in more feedback, do more testing and keep working to improve our ranking systems. This work on our end can mean that content might recover in the future, even if a content owner makes no changes. In such situations, our continued improvements might assess such content more favorably. We hope the guidance offered here is helpful. You’ll also find plenty of advice about good content in the resources we offer from Google Webmasters, including tools, help pages and our forums. Learn more here.

Posted by Danny Sullivan, Public Liaison for Search

Helping publishers and users get more out of visual searches on Google Images with AMP

Google Webmaster Central Blog -

Google Images has made a series of changes to help people explore, learn and do more through visual search. An important element of visual search is the ability for users to scan many ideas before coming to a decision, whether it’s purchasing a product, learning more about a stylish room, or finding instructions for a DIY project. Often this involves loading many web pages, which can slow down a search considerably and prevent users from completing a task. As previewed at Google I/O, we’re launching a new AMP-powered feature in Google Images on the mobile web, Swipe to Visit, which makes it faster and easier for users to browse and visit web pages. After a Google Images user selects an image to view on a mobile device, they will get a preview of the website header, which can be easily swiped up to load the web page instantly. Swipe to Visit uses AMP's prerender capability to show a preview of the page displayed at the bottom of the screen. When a user swipes up on the preview, the web page is displayed instantly and the publisher receives a pageview. The speed and ease of this experience makes it more likely for users to visit a publisher's site, while still allowing users to continue their browsing session. Publishers who support AMP don’t need to take any additional action for their sites to appear in Swipe to Visit on Google Images. Publishers who don’t support AMP can learn more about getting started with AMP here. In the coming weeks, publishers will also be able to view their traffic data from AMP in Google Images in Search Console’s performance report for Google Images, in a new search area named “AMP on Image result”. We look forward to continuing to support the Google Images ecosystem with features that help users and publishers alike.

Posted by Assaf Broitman, Google Images PM

A note on unsupported rules in robots.txt

Google Webmaster Central Blog -

Yesterday we announced that we're open-sourcing Google's production robots.txt parser. It was an exciting moment that paves the road for potential Search open-sourcing projects in the future! Feedback is helpful, and we're eagerly collecting questions from developers and webmasters alike. One question stood out, which we'll address in this post: why isn't a code handler for other rules like crawl-delay included in the code?

The internet draft we published yesterday provides an extensible architecture for rules that are not part of the standard. This means that if a crawler wanted to support its own line like "unicorns: allowed", it could. To demonstrate how this would look in a parser, we included a very common line, sitemap, in our open-source robots.txt parser. While open-sourcing our parser library, we analyzed the usage of robots.txt rules. In particular, we focused on rules unsupported by the internet draft, such as crawl-delay, nofollow, and noindex. Since these rules were never documented by Google, naturally, their usage in relation to Googlebot is very low. Digging further, we saw their usage was contradicted by other rules in all but 0.001% of all robots.txt files on the internet. These mistakes hurt websites' presence in Google's search results in ways we don't think webmasters intended.

In the interest of maintaining a healthy ecosystem and preparing for potential future open source releases, we're retiring all code that handles unsupported and unpublished rules (such as noindex) on September 1, 2019. For those of you who relied on the noindex directive in the robots.txt file, which controls crawling, there are a number of alternative options:
- Noindex in robots meta tags: supported both in the HTTP response headers and in HTML, the noindex directive is the most effective way to remove URLs from the index when crawling is allowed.
- 404 and 410 HTTP status codes: both status codes mean that the page does not exist, which will drop such URLs from Google's index once they're crawled and processed.
- Password protection: unless markup is used to indicate subscription or paywalled content, hiding a page behind a login will generally remove it from Google's index.
- Disallow in robots.txt: search engines can only index pages that they know about, so blocking the page from being crawled usually means its content won't be indexed. While the search engine may also index a URL based on links from other pages, without seeing the content itself, we aim to make such pages less visible in the future.
- Search Console Remove URL tool: the tool is a quick and easy method to remove a URL temporarily from Google's search results.

For more guidance about how to remove information from Google's search results, visit our Help Center. If you have questions, you can find us on Twitter and in our Webmaster Community, both offline and online.

Posted by Gary
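To illustrate the first alternative above, here is a minimal sketch of serving noindex outside robots.txt, both as a robots meta tag and as an X-Robots-Tag response header. It assumes a small Flask application; the route and page content are purely illustrative placeholders.

from flask import Flask, make_response  # pip install flask

app = Flask(__name__)

@app.route("/members-only")
def members_only():
    # Option 1: robots meta tag in the HTML itself.
    html = (
        "<!doctype html><html><head>"
        '<meta name="robots" content="noindex">'
        "<title>Members only</title></head>"
        "<body>Crawlable, but should not be indexed.</body></html>"
    )
    resp = make_response(html)
    # Option 2: the equivalent HTTP response header (also works for non-HTML files).
    resp.headers["X-Robots-Tag"] = "noindex"
    return resp

if __name__ == "__main__":
    app.run()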

Google's robots.txt parser is now open source

Google Webmaster Central Blog -

For 25 years, the Robots Exclusion Protocol (REP) was only a de-facto standard. This had frustrating implications sometimes. On one hand, for webmasters, it meant uncertainty in corner cases, like when their text editor included BOM characters in their robots.txt files. On the other hand, for crawler and tool developers, it also brought uncertainty; for example, how should they deal with robots.txt files that are hundreds of megabytes large?

Today, we announced that we're spearheading the effort to make the REP an internet standard. While this is an important step, it means extra work for developers who parse robots.txt files. We're here to help: we open-sourced the C++ library that our production systems use for parsing and matching rules in robots.txt files. This library has been around for 20 years and it contains pieces of code that were written in the 90s. Since then, the library has evolved; we learned a lot about how webmasters write robots.txt files and the corner cases that we had to cover for, and we also added what we learned over the years to the internet draft when it made sense.

We also included a testing tool in the open source package to help you test a few rules. Once built, the usage is very straightforward:

robots_main <robots.txt content> <user_agent> <url>

If you want to check out the library, head over to our GitHub repository for the robots.txt parser. We'd love to see what you can build using it! If you built something using the library, drop us a comment on Twitter, and if you have comments or questions about the library, find us on GitHub.

Posted by Edu Pereda, Lode Vandevenne, and Gary, Search Open Sourcing team

Formalizing the Robots Exclusion Protocol Specification

Google Webmaster Central Blog -

For 25 years, the Robots Exclusion Protocol (REP) has been one of the most basic and critical components of the web. It allows website owners to exclude automated clients, for example web crawlers, from accessing their sites, either partially or completely.

In 1994, Martijn Koster (a webmaster himself) created the initial standard after crawlers were overwhelming his site. With more input from other webmasters, the REP was born, and it was adopted by search engines to help website owners manage their server resources more easily.

However, the REP was never turned into an official Internet standard, which means that developers have interpreted the protocol somewhat differently over the years. And since its inception, the REP hasn't been updated to cover today's corner cases. This is a challenging problem for website owners, because the ambiguous de-facto standard made it difficult to write the rules correctly.

We wanted to help website owners and developers create amazing experiences on the internet instead of worrying about how to control crawlers. Together with the original author of the protocol, webmasters, and other search engines, we've documented how the REP is used on the modern web and submitted it to the IETF.

The proposed REP draft reflects over 20 years of real-world experience of relying on robots.txt rules, used both by Googlebot and other major crawlers, as well as by about half a billion websites that rely on the REP. These fine-grained controls give the publisher the power to decide what they'd like to be crawled on their site and potentially shown to interested users. The draft doesn't change the rules created in 1994, but rather defines essentially all undefined scenarios for robots.txt parsing and matching, and extends the protocol for the modern web. Notably:

Any URI-based transfer protocol can use robots.txt. For example, it's not limited to HTTP anymore and can be used for FTP or CoAP as well.
Developers must parse at least the first 500 kibibytes of a robots.txt file. Defining a maximum file size ensures that connections are not open for too long, alleviating unnecessary strain on servers.
A new maximum caching time of 24 hours (or the cache directive value, if available) gives website owners the flexibility to update their robots.txt whenever they want, while crawlers don't overload websites with robots.txt requests. For example, in the case of HTTP, Cache-Control headers could be used for determining caching time.
The specification now provides that when a previously accessible robots.txt file becomes inaccessible due to server failures, known disallowed pages are not crawled for a reasonably long period of time.

Additionally, we've updated the augmented Backus–Naur form in the internet draft to better define the syntax of robots.txt, which is critical for developers to parse the lines.

RFC stands for Request for Comments, and we mean it: we uploaded the draft to the IETF to get feedback from developers who care about the basic building blocks of the internet. As we work to give web creators the controls they need to tell us how much information they want to make available to Googlebot, and by extension, eligible to appear in Search, we have to make sure we get this right.

If you'd like to drop us a comment, ask us questions, or just say hi, you can find us on Twitter and in our Webmaster Community, both offline and online.

Posted by Henner Zeller, Lizzi Harvey, and Gary
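As a rough illustration of two of the provisions above (parse at least the first 500 kibibytes, and cache the file for up to 24 hours), here is a minimal Python sketch of how a crawler might fetch robots.txt. It deliberately ignores Cache-Control directives and the server-failure rule, and the host name is a placeholder.

import time
import urllib.request

MAX_BYTES = 500 * 1024       # the draft requires parsing at least the first 500 KiB
DEFAULT_TTL = 24 * 60 * 60   # cache for up to 24 hours unless a cache directive says otherwise

_cache = {}                  # host -> (fetched_at, robots_txt_text)

def get_robots_txt(host):
    # Reuse a cached copy while it is younger than 24 hours.
    now = time.time()
    cached = _cache.get(host)
    if cached and now - cached[0] < DEFAULT_TTL:
        return cached[1]

    with urllib.request.urlopen(f"https://{host}/robots.txt") as resp:
        # Read only the first 500 KiB; this sketch ignores any rules beyond that.
        text = resp.read(MAX_BYTES).decode("utf-8", errors="replace")

    _cache[host] = (now, text)
    return text

print(get_robots_txt("www.example.com")[:200])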

Bye Bye Preferred Domain setting

Google Webmaster Central Blog -

As we progress with the migration to the new Search Console experience, we will be saying farewell to one of our settings: the preferred domain.

It's common for a website to have the same content on multiple URLs. For example, it might have the same content on http://example.com/ as on https://www.example.com/index.html. To make things easier, when our systems recognize that, we'll pick one URL as the "canonical" for Search. You can still tell us your preference in multiple ways if there's something specific you want us to pick (see the options below). But if you don't have a preference, we'll choose the best option we find. Note that with the deprecation we will no longer use any existing Search Console preferred domain configuration.

You can find detailed explanations on how to tell us your preference in the Consolidate duplicate URLs help center article. Here are some of the options available to you:

Use a rel="canonical" link tag on HTML pages
Use a rel="canonical" HTTP header
Use a sitemap
Use 301 redirects for retired URLs

Send us any feedback either through Twitter or our forum.

Posted by Daniel Waisberg, Search Advocate
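For illustration only, here is a small WSGI sketch (assuming a hypothetical canonical host of www.example.com) that combines two of the options listed above: a 301 redirect away from retired or duplicate host names, and a rel="canonical" HTTP Link header on the canonical pages.

from wsgiref.simple_server import make_server

CANONICAL_HOST = "www.example.com"  # assumed canonical host for this sketch

def app(environ, start_response):
    host = environ.get("HTTP_HOST", "")
    path = environ.get("PATH_INFO", "/")

    if host != CANONICAL_HOST:
        # Option: 301 redirect retired/duplicate hosts to the canonical URL.
        location = f"https://{CANONICAL_HOST}{path}"
        start_response("301 Moved Permanently", [("Location", location)])
        return [b""]

    # Option: declare the canonical URL in a rel="canonical" HTTP Link header.
    canonical = f"https://{CANONICAL_HOST}{path}"
    headers = [
        ("Content-Type", "text/html; charset=utf-8"),
        ("Link", f'<{canonical}>; rel="canonical"'),
    ]
    start_response("200 OK", headers)
    return [b"<html><body>Hello</body></html>"]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()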

bingbot Series: Introducing Batch mode for Adaptive URL submission API

Bing's Webmaster Blog -

We launched the Adaptive URL submission capability that allowed webmasters to submit up to 10,000 URLs using the online API or through the Bing webmaster portal (Submit URLs option). Since the launch we have received multiple requests from webmasters for the ability to submit URLs in batches. As we are actively listening to webmasters and their needs, we are delighted to announce the Batch mode capability for the Adaptive URL Submission API, which allows webmasters and site managers to submit URLs in batches, saving them from the excessive API calls made when submitting URLs individually.

The Batch URL Submission API is very similar to the individual URL Submission API (blog post), and hence integrating the Batch API is very easy and follows the same steps.

Example requests for the Batch URL Submission API for the supported protocols can be seen below.

JSON Request Sample

POST /webmaster/api.svc/json/SubmitUrlBatch?apikey=sampleapikeyEDECC1EA4AE341CC8B6 HTTP/1.1
Content-Type: application/json; charset=utf-8
Host: ssl.bing.com

{
  "siteUrl": "http://yoursite.com",
  "urlList": [
    "http://yoursite.com/url1",
    "http://yoursite.com/url2",
    "http://yoursite.com/url3"
  ]
}

XML Request Sample

POST /webmaster/api.svc/pox/SubmitUrlBatch?apikey=sampleapikeyEDECC1EA4AE341CC8B6 HTTP/1.1
Content-Type: application/xml; charset=utf-8
Host: ssl.bing.com

<SubmitUrlBatch xmlns="http://schemas.datacontract.org/2004/07/Microsoft.Bing.Webmaster.Api">
  <siteUrl>http://yoursite.com</siteUrl>
  <urlList>
    <string xmlns="http://schemas.microsoft.com/2003/10/Serialization/Arrays">http://yoursite.com/url1</string>
    <string xmlns="http://schemas.microsoft.com/2003/10/Serialization/Arrays">http://yoursite.com/url2</string>
    <string xmlns="http://schemas.microsoft.com/2003/10/Serialization/Arrays">http://yoursite.com/url3</string>
  </urlList>
</SubmitUrlBatch>

You will get an HTTP 200 response on successful submission of the URLs. Meanwhile, the URLs will be checked for compliance with the Bing Webmaster Guidelines and, if they pass, they will be crawled and indexed in minutes.

Please refer to the documentation for generating the API key and to the Batch URL Submission API documentation for more details. Do note that the maximum supported batch size in this API is 500 URLs per request. The total limit on the number of URLs submitted per day still applies.

So, integrate the APIs today to get your content indexed in real time by Bing, and let us know what you think of this capability. Please reach out to bwtsupport@microsoft.com if you face any issues while integrating.

Thanks!
Bing Webmaster Tools Team
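If you prefer calling the endpoint from code rather than issuing a raw HTTP request, here is a sketch of the same JSON batch submission using Python's standard library. The API key, site URL, and URL list are placeholders you must replace with your own values.

import json
import urllib.request

API_KEY = "API_KEY"  # placeholder; see the documentation above for generating a key

payload = {
    "siteUrl": "http://yoursite.com",
    "urlList": [
        "http://yoursite.com/url1",
        "http://yoursite.com/url2",
        "http://yoursite.com/url3",
    ],
}

req = urllib.request.Request(
    f"https://ssl.bing.com/webmaster/api.svc/json/SubmitUrlBatch?apikey={API_KEY}",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json; charset=utf-8"},
    method="POST",
)

with urllib.request.urlopen(req) as resp:
    # A successful submission returns HTTP 200 with a {"d": null} body.
    print(resp.status, resp.read().decode("utf-8"))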

Webmaster Conference: an event made for you

Google Webmaster Central Blog -

Over the years we have attended hundreds of conferences, spoken to thousands of webmasters, and recorded hundreds of hours of video to help web creators find information about how to perform better in Google Search results. Now we'd like to go further and help those who aren't able to travel internationally access the same information. Today we're officially announcing the Webmaster Conference, a series of local events around the world.

These events are primarily located where it's difficult to access search conferences or information about Google Search, or where there's a specific need for a Search event. For example, if we identify that a region has problems with hacked sites, we may organize an event focusing on that specific topic. We want web creators to have equal opportunity in Google Search regardless of their language, financial status, gender, location, or any other attribute. The conferences are always free and easily accessible in the region where they're organized, and, based on feedback from the local communities and analyses, they're tailored for the audience that signed up for the events. That means it doesn't matter how much you already know about Google Search; the event you attend will have takeaways tailored to you. The talks will be in the local language, in the case of international speakers through interpreters, and we'll do our best to also offer sign language interpretation if requested.

Webmaster Conference Okinawa

The structure of the event varies from region to region. For example, in Okinawa, Japan, we had a wonderful half-day event with novice and advanced web creators where we focused on how to perform better in Google Images. At Webmaster Conference India and Indonesia, that might change and we may focus more on how to create faster websites. We will also host web communities in Europe and North America later this year, so keep an eye out for the announcements!

We will continue attending external events as usual; we are doing these events to complement the existing ones. If you want to learn more about our upcoming events, visit the Webmaster Conference site, which we'll update monthly, and follow our blogs and @googlewmc on Twitter!

Posted by Takeaki Kanaya and Gary

A video series on SEO myths for web developers

Google Webmaster Central Blog -

We invited members of the SEO and web developer community to join us for a new video series called "SEO mythbusting". In this series, we discuss various topics around SEO from a developer's perspective, how we can work to make the "SEO black box" more transparent, and what technical SEO might look like as the web keeps evolving.

We already published a few episodes:

Web developer's 101
A look at Googlebot
Microformats and structured data
JavaScript and SEO

We have a few more episodes for you, and we will launch the next episodes weekly on the Google Webmasters YouTube channel, so don't forget to subscribe to stay in the loop. You can also find all published episodes in this YouTube playlist.

We look forward to hearing your feedback, topic suggestions, and guest recommendations in the YouTube comments as well as on our Twitter account!

Posted by Martin Splitt, friendly web fairy & series host, WTA team

Mobile-First Indexing by default for new domains

Google Webmaster Central Blog -

Over the years since announcing mobile-first indexing - Google's crawling of the web using a smartphone Googlebot - our analysis has shown that new websites are generally ready for this method of crawling. Accordingly, we're happy to announce that mobile-first indexing will be enabled by default for all new websites (those previously unknown to Google Search) starting July 1, 2019. It's fantastic to see that new websites are now generally showing users - and search engines - the same content on both mobile and desktop devices!

You can continue to check for mobile-first indexing of your website by using the URL Inspection Tool in Search Console. By looking at a URL on your website there, you'll quickly see how it was last crawled and indexed. For older websites, we'll continue monitoring and evaluating pages for their readiness for mobile-first indexing, and will notify them through Search Console once they're seen as being ready. Since the default state for new websites will be mobile-first indexing, there's no need to send a notification.

Using the URL Inspection Tool to check the mobile-first indexing status

Our guidance on making all websites work well for mobile-first indexing continues to be relevant, for new and existing sites. For existing websites we determine their readiness for mobile-first indexing based on parity of content (including text, images, videos, links), structured data, and other meta-data (for example, titles and descriptions, robots meta tags). We recommend double-checking these factors when a website is launched or significantly redesigned.

While we continue to support responsive web design, dynamic serving, and separate mobile URLs for mobile websites, we recommend responsive web design for new websites. Because of issues and confusion we've seen from separate mobile URLs over the years, both from search engines and users, we recommend using a single URL for both desktop and mobile websites.

Mobile-first indexing has come a long way. We're happy to see how the web has evolved from being focused on desktop, to becoming mobile-friendly, and now to being mostly crawlable and indexable with mobile user-agents! We realize it has taken a lot of work from your side to get there, and on behalf of our mostly-mobile users, we appreciate that. We'll continue to monitor and evaluate this change carefully. If you have any questions, please drop by our Webmaster forums or our public events.

Posted by John Mueller, Developer Advocate, Google Zurich
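As a rough way to double-check the content parity mentioned above, the sketch below (not a Google tool) fetches a page with a desktop and a smartphone User-Agent and compares the title, the robots meta tag, and the number of links. The User-Agent strings and URL are placeholders, and a real check should also cover images, videos, and structured data.

import re
import urllib.request

DESKTOP_UA = "Mozilla/5.0 (X11; Linux x86_64) ExampleDesktopBrowser/1.0"
MOBILE_UA = "Mozilla/5.0 (Linux; Android 10; Mobile) ExampleMobileBrowser/1.0"

def fetch(url, user_agent):
    # Fetch the page as a given user agent and return the HTML.
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8", errors="replace")

def extract(html):
    # Pull out a few comparable signals with simple regexes (an HTML parser would be more robust).
    title = re.search(r"<title[^>]*>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
    robots = re.search(r'<meta[^>]+name=["\']robots["\'][^>]*>', html, re.IGNORECASE)
    return {
        "title": title.group(1).strip() if title else None,
        "robots_meta": robots.group(0) if robots else None,
        "link_count": len(re.findall(r"<a\s", html, re.IGNORECASE)),
    }

if __name__ == "__main__":
    url = "https://www.example.com/"
    desktop = extract(fetch(url, DESKTOP_UA))
    mobile = extract(fetch(url, MOBILE_UA))
    for key in desktop:
        match = "OK" if desktop[key] == mobile[key] else "MISMATCH"
        print(f"{key}: {match} (desktop={desktop[key]!r}, mobile={mobile[key]!r})")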
