Cloudflare Blog

Cloudflare Network Interconnection Partnerships Launch

Today we’re excited to announce Cloudflare’s Network Interconnection Partner Program, in support of our new CNI product. As ever more enterprises turn to Cloudflare to secure and accelerate their branch and core networks, the ability to connect privately and securely becomes increasingly important. Today's announcement significantly increases the interconnection options for our customers, allowing them to connect with us in the location of their choice using the method or vendor they prefer. In addition to our physical locations, our customers can now interconnect with us in any of 23 metro areas across five continents using software-defined layer 2 networking technology. Following the recent release of CNI (which includes PNI support for Magic Transit), customers can now order layer 3 DDoS protection in any of the markets below, without requiring physical cross connects, providing private and secure links with simpler setup.

Launch Partners

We’re very excited to announce that five of the world's premier interconnection platforms are available at launch: Console Connect by PCCW Global in 14 locations, Megaport in 14 locations, PacketFabric in 15 locations, Equinix ECX Fabric™ in 8 locations, and Zayo Tranzact in 3 locations, spanning North America, Europe, Asia, Oceania, and Africa.

What is an Interconnection Platform?

Like much of the networking world, the interconnection space has many terms for the same thing: Cloud Exchange, Virtual Cross Connect Platform, and Interconnection Platform are all synonyms. These are platforms that allow two networks to interconnect privately at layer 2, without requiring additional physical cabling. Instead, the customer orders a port and a virtual connection on a dashboard, and the interconnection ‘fabric’ establishes the connection.
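To make that ordering flow concrete, here is a toy model of a fabric in Python. The class, field names, and the single validation rule are all hypothetical illustrations of the concept, not any partner's actual API:

```python
# Toy model of a layer 2 interconnection fabric: order a port, then a
# virtual connection (VC) between two ports. Purely illustrative.

from dataclasses import dataclass, field

@dataclass
class Fabric:
    ports: dict = field(default_factory=dict)
    vcs: list = field(default_factory=list)

    def order_port(self, name, metro, speed_mbps):
        # A physical port into the fabric at a given metro.
        self.ports[name] = {"metro": metro, "speed": speed_mbps}

    def order_vc(self, a_port, z_port, vlan, speed_mbps):
        # A fabric would refuse a VC faster than either endpoint port.
        limit = min(self.ports[a_port]["speed"], self.ports[z_port]["speed"])
        if speed_mbps > limit:
            raise ValueError("VC speed exceeds port speed")
        self.vcs.append({"a": a_port, "z": z_port, "vlan": vlan,
                         "speed": speed_mbps})

fabric = Fabric()
fabric.order_port("customer-fra", metro="Frankfurt", speed_mbps=1000)
fabric.order_port("cloudflare-fra", metro="Frankfurt", speed_mbps=10000)
# A "sub-rate" 100 Mbps virtual connection over a 1 Gbps port:
fabric.order_vc("customer-fra", "cloudflare-fra", vlan=100, speed_mbps=100)
print(len(fabric.vcs))  # 1
```

The key idea the sketch captures is that once the physical port exists, connections are purely software-defined: no new cabling is needed per virtual connection.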
Since many large customers are already connected to these fabrics for their connections to traditional cloud providers, they are a very convenient way to establish private connectivity with Cloudflare.

Why interconnect virtually?

Cloudflare has an extensive peering infrastructure and already has private links to thousands of other networks. Virtual private interconnection is particularly attractive to customers with strict security postures and demanding performance requirements who want to avoid the added burden of ordering and managing additional physical cross connects and expanding their physical infrastructure.

Key Benefits of Interconnection Platforms

Secure

As with a physical PNI, traffic does not pass across the Internet. Rather, it flows from the customer router to the Interconnection Platform’s network and ultimately to Cloudflare. So while there is still some element of shared infrastructure, it’s not the public Internet.

Efficient

Modern PNIs are typically a minimum of 1Gbps, but if you have the security motivation without the sustained 1Gbps data transfer rates, you will have idle capacity. Virtual connections provide “sub-rate” speeds (less than 1Gbps, such as 100Mbps), meaning you only pay for what you use. Most providers also allow some level of “burstiness”, meaning you can exceed that 100Mbps limit for short periods.

Performance

By avoiding the public Internet, virtual links avoid Internet congestion.

Price

The major cloud providers typically price egress to an Interconnection Platform lower than egress to the Internet. By connecting to your cloud via an interconnection partner, you can benefit from those reduced egress fees between your cloud and the Interconnection Platform. This builds on our Bandwidth Alliance to give customers more options to continue to drive down their network costs.

Less Overhead

By virtualizing, you reduce physical cable management to just one connection into the Interconnection Platform.
From there, everything is defined and managed in software. For example, ordering a 100Mbps link to Cloudflare can be a few clicks in a dashboard, as can a 100Mbps link into Salesforce.

Data Center Independence

Is your infrastructure in the same metro as Cloudflare, but in a different facility? An Interconnection Platform can bring us together without the need for additional physical links.

Where can I connect?

In any of our physical facilities
In any of the 23 metro areas where we are currently connected to an Interconnection Platform (see below)

If you’d like to connect virtually in a location not yet listed below, simply get in touch via our interconnection page and we’ll work out the best way to connect.

Metro Areas

The metro areas below currently have active connections. New providers and locations can be turned up on request.

What’s next?

Our customers have been asking for direct on-ramps to our global network for a long time, and we’re excited to deliver that today with both physical connectivity and virtual connectivity via the world’s leading Interconnection Platforms.

Already a Cloudflare customer and connected with one of our interconnection partners? Then contact your account team today to get connected and benefit from the improved reliability, security, and privacy of Cloudflare Network Interconnect via our interconnection partners.

Are you an Interconnection Platform with customers demanding direct connectivity to Cloudflare? Head to our partner program page and click “Become a partner”. We’ll continue to add platforms and partners according to customer demand.

"Equinix and Cloudflare share the vision of software-defined, virtualized and API-driven network connections. The availability of Cloudflare on the Equinix Cloud Exchange Fabric demonstrates that shared vision and we’re excited to offer it to our joint customers today."
– Joseph Harding, Equinix, Vice President, Global Product & Platform Marketing

"Cloudflare and Megaport are driven to offer greater flexibility to our customers. In addition to accessing Cloudflare’s platform on Megaport’s global internet exchange service, customers can now provision on-demand, secure connections through our Software Defined Network directly to Cloudflare Network Interconnect on-ramps globally. With over 700 enabled data centres in 23 countries, Megaport extends the reach of CNI on-ramps to the locations where enterprises house their critical IT infrastructure. Because Cloudflare is interconnected with our SDN, customers can point, click, and connect in real time. We’re delighted to grow our partnership with Cloudflare and bring CNI to our services ecosystem — allowing customers to build multi-service, securely-connected IT architectures in a matter of minutes."
– Matt Simpson, Megaport, VP of Cloud Services

“The ability to self-provision direct connections to Cloudflare’s network from Console Connect is a powerful tool for enterprises as they come to terms with new demands on their networks. We are really excited to bring together Cloudflare’s industry-leading solutions with PCCW Global’s high-performance network on the Console Connect platform, which will deliver much higher levels of network security and performance to businesses worldwide.”
– Michael Glynn, PCCW Global, VP of Digital Automated Innovation

"Our customers can now connect to Cloudflare via a private, secure, and dedicated connection via the PacketFabric Marketplace. PacketFabric is proud to be a launch partner for Cloudflare's interconnection program. Our large U.S. footprint provides the reach and density that Cloudflare customers need."
– Dave Ward, PacketFabric CEO

Introducing Cloudflare Network Interconnect

Today we’re excited to announce Cloudflare Network Interconnect (CNI). CNI allows our customers to interconnect branch and HQ locations directly with Cloudflare wherever they are, bringing Cloudflare’s full suite of network functions to their physical network edge. Using CNI to interconnect provides security, reliability, and performance benefits compared to using the public Internet to connect to Cloudflare. And because of Cloudflare’s global network reach, connecting to our network is straightforward no matter where on the planet your infrastructure and employees are.

At its most basic level, an interconnect is a link between two networks. Today, we’re offering customers the following options to interconnect with Cloudflare’s network:

Via a private network interconnect (PNI): a physical cable (or a virtual “pseudo-wire”; more on that later) that connects two networks.
Over an Internet Exchange (IX): a common switch fabric where multiple Internet Service Providers (ISPs) and Internet networks can interconnect with each other.

To use a real-world analogy: over the years, Cloudflare has built a network of highways across the Internet to handle all our customers' traffic. We’re now providing dedicated on-ramps for our customers’ on-prem networks to get onto those highways.

Why interconnect with Cloudflare?

CNI provides more reliable, faster, and more private connectivity between your infrastructure and Cloudflare’s. This delivers benefits across our product suite. Here are some examples of specific products and how you can combine them with CNI:

Cloudflare Access: Cloudflare Access replaces corporate VPNs with Cloudflare’s network. Instead of placing internal tools on a private network, teams deploy them in any environment, including hybrid or multi-cloud models, and secure them consistently with Cloudflare’s network.
CNI allows you to bring your own MPLS network to meet ours, allowing your employees to connect to your network securely and quickly no matter where they are.
CDN: Cloudflare’s CDN places content closer to visitors, improving site speed while minimizing origin load. CNI improves cache fill performance and reduces costs.
Magic Transit: Magic Transit protects datacenter and branch networks from unwanted attack and malicious traffic. Pairing Magic Transit with CNI decreases jitter, drives throughput improvements, and further hardens infrastructure from attack.
Cloudflare Workers: Workers is Cloudflare’s serverless compute platform. Integrating with CNI provides a secure connection to serverless cloud compute that does not traverse the public Internet, allowing customers to use Cloudflare’s unique set of Workers services with tighter network performance tolerances.

Let’s talk more about how CNI delivers these benefits.

Improving performance through interconnection

CNI is a great way to boost performance for many existing Cloudflare products. By using CNI to set up interconnection with Cloudflare wherever a customer’s origin infrastructure is, customers can get increased performance and security at lower cost than using public transit providers.

CNI makes things faster

As an example of the performance improvements network interconnects can deliver for Cloudflare customers, consider an HTTP application workload that flows through Cloudflare’s CDN and WAF. Many of our customers rely on our CDN to make their HTTP applications more responsive.

Cloudflare caches content very close to end users to provide the best performance possible. But if content is not in cache, Cloudflare edge PoPs must contact the origin server to retrieve cacheable content. This can be slow, and it places more load on the origin server compared to serving directly from cache.
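A rough way to see why origin pulls dominate response time is to combine the cache hit ratio with the cost of a miss. This is a minimal sketch; the latency figures are illustrative assumptions, not Cloudflare measurements:

```python
# Sketch: expected time to first byte (TTFB) as a function of cache
# hit ratio and origin round trip time. All numbers are illustrative.

def expected_ttfb_ms(hit_ratio, edge_ms, origin_rtt_ms):
    """Cache hits are served from the edge; misses pay an extra
    round trip to the origin on top of the edge latency."""
    miss_ratio = 1.0 - hit_ratio
    return hit_ratio * edge_ms + miss_ratio * (edge_ms + origin_rtt_ms)

# Same 90% hit ratio; a faster origin path shrinks the penalty paid
# on every miss, pulling the average down substantially.
print(round(expected_ttfb_ms(0.90, 10, 80), 1))  # slow origin pull
print(round(expected_ttfb_ms(0.90, 10, 5), 1))   # fast origin pull
```

Even at a high hit ratio, a slow origin path drags the average up, which is why shortening the path for cache fills pays off.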
With CNI, these origin pulls can be completed over a dedicated link, improving throughput and reducing the overall time needed for origin pulls. Using Argo Tiered Cache, customers can manage tiered cache topologies and specify upstream cache tiers that correspond with locations where network interconnects are in place. Using Tiered Cache in this fashion lowers origin load and increases cache hit rates, thereby improving performance and reducing origin infrastructure costs.

Here’s anonymized and sampled data from a real Cloudflare customer who recently provisioned interconnections between our network and theirs to further improve performance. Heavy users of our CDN, they were able to shave precious milliseconds off their origin round trip time (RTT) by adding PNIs in multiple locations.

As an example, their 90th percentile round trip time in Warsaw, Poland decreased by 6.5ms as a result of provisioning a private network interconnect (from 7.5ms to 1ms), a performance win of 87%! The jitter (variation in delay of received packets) on the link decreased from 82.9 to 0.3, which speaks to the dedicated, reliable nature of the link. CNI helps deliver reliable and performant network connectivity to your customers and employees.

Enhanced security through private connectivity

Customers with large on-premise networks want to move to the cloud: it’s cheaper, with less hassle, overhead, and maintenance. However, customers also want to preserve their existing security and threat models.

Traditionally, CIOs trying to connect their IP networks to the Internet do so in two steps:

Source connectivity to the Internet from transit providers (ISPs).
Purchase, operate, and maintain network-function-specific hardware appliances. Think hardware load balancers, firewalls, DDoS mitigation equipment, WAN optimization, and more.

CNI allows CIOs to provision security services on Cloudflare and connect their existing networks to Cloudflare in a way that bypasses the public Internet.
Because Cloudflare integrates with on-premise networks and the cloud, customers can enforce security policies across both networks and create a consistent, secure boundary.

CNI increases cloud and network security by providing a private, dedicated link to the Cloudflare network. Since this link is reserved exclusively for the customer that provisions it, the customer’s traffic is isolated and private.

CNI + Magic Transit: Removing public Internet exposure

To use a product-specific example: through CNI’s integration with Magic Transit, customers can take advantage of private connectivity to minimize their network’s exposure to the public Internet.

Magic Transit attracts customers’ IP traffic to our data centers by advertising their IP addresses from our edge via BGP. When traffic arrives, it’s filtered and sent along to customers’ data centers. Before CNI, all Magic Transit traffic was sent from Cloudflare to customers via Generic Routing Encapsulation (GRE) tunnels over the Internet. Because GRE endpoints are publicly routable, there is some risk that these endpoints could be discovered and attacked, bypassing Cloudflare’s DDoS mitigation and security tools.

Using CNI removes this exposure to the Internet. Advantages of using CNI with Magic Transit include:

Reduced threat exposure. Although there are many steps companies can take to increase network security, some risk-sensitive organizations prefer not to expose endpoints to the public Internet at all. CNI allows Cloudflare to absorb that risk and forward only clean traffic (via Magic Transit) through a truly private interface.
Increased reliability. Traffic traveling over the public Internet is subject to factors outside of your control, including latency and packet loss on intermediate networks. Removing steps between Cloudflare’s network and yours means that after Magic Transit processes traffic, it’s forwarded directly and reliably to your network.
Simplified configuration.
Soon, Magic Transit + CNI customers will have the option to skip making MSS (maximum segment size) changes when onboarding. Because GRE encapsulation adds header overhead to each packet, GRE-over-Internet deployments must lower the TCP MSS to keep packets within the path MTU, a step that can be challenging for customers who need to consider their downstream customers’ MSS as well (e.g. service providers).

Example deployment: Penguin Corp uses Cloudflare for Teams, Magic Transit, and CNI to protect branch and core networks, and employees

Imagine Penguin Corp, a hypothetical company with a fully connected private MPLS network. Maintaining the network is difficult, and they have a dedicated team of network engineers to do it. They are currently paying a lot of money to run their own private cloud. To minimize costs, they limit themselves to two network egress points worldwide. This creates a major performance problem for their users, whose bits have to travel a long way to accomplish basic tasks while still traversing Penguin’s network boundary.

SASE (Secure Access Service Edge) models look attractive to them because they can, in theory, move away from their traditional MPLS network and toward the cloud. SASE deployments provide firewall, DDoS mitigation, and encryption services at the network edge, and bring security as a service to any cloud deployment, as seen in the diagram below.

CNI allows Penguin to use Cloudflare as their true network edge, hermetically sealing their branch office locations and datacenters off from the Internet. Penguin can adopt a SASE-like model while keeping exposure to the public Internet at zero. Penguin establishes PNIs with Cloudflare from their branch office in San Jose to Cloudflare’s San Jose location to take advantage of Cloudflare for Teams, and from their core colocation facility in Austin to Cloudflare’s Dallas location to use Magic Transit to protect their core networks.

Like Magic Transit, Cloudflare for Teams replaces traditional on-premise security hardware with Cloudflare’s global network.
Customers who relied on VPN appliances to reach internal applications can instead connect securely through Cloudflare Access. Organizations maintaining physical web gateway boxes can send Internet-bound traffic to Cloudflare Gateway for filtering and logging.

Cloudflare for Teams services run in every Cloudflare data center, bringing filtering and authentication closer to your users and locations to avoid compromising performance. CNI improves that even further with a direct connection from your offices to Cloudflare. With a simple configuration change, all branch traffic reaches Cloudflare’s edge, where Cloudflare for Teams policies can be applied. The link improves speed and reliability for users and removes the need to backhaul traffic to centralized filtering appliances.

Once interconnected this way, Penguin’s network and employees realize two benefits:

They get to use Cloudflare’s full set of security services without having to provision expensive, centralized physical or virtualized network appliances.
Their security and performance services run across Cloudflare’s global network in over 200 cities. This brings performance and usability improvements by putting security functions closer to users.

Scalable, global, and flexible interconnection options

CNI offers a big benefit to customers because it allows them to take advantage of our global footprint spanning 200+ cities: their branch office and datacenter infrastructure can connect to Cloudflare wherever they are.

This matters for two reasons: our globally distributed network makes it easy to interconnect locally, no matter where a customer’s branches and core infrastructure are, and it allows a globally distributed workforce to reach our edge network with low latency and improved performance. Customers don’t have to worry about securely expanding their network footprint: that’s our job.

To this point, global companies need to interconnect at many points around the world.
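Part of the value of a footprint in 200+ cities is simple physics: propagation delay alone puts a floor on round trip time, so interconnecting close to your infrastructure matters. A small sketch, assuming the commonly cited figure of roughly 200,000 km/s for light in optical fiber (about two thirds of c):

```python
# Sketch: lower bound on RTT from fiber propagation delay alone
# (ignoring queuing, routing detours, and processing time).

def min_rtt_ms(fiber_km, fiber_speed_km_per_s=200_000):
    """Round trip propagation delay in milliseconds over a fiber
    path of the given one-way length."""
    return 2 * fiber_km * 1000 / fiber_speed_km_per_s

print(min_rtt_ms(50))    # same metro: 0.5 ms floor
print(min_rtt_ms(8000))  # cross-continent backhaul: 80.0 ms floor
```

Real paths add queuing and routing overhead on top of this floor, which is why backhauling branch traffic to a distant egress point is so costly for users.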
Cloudflare Network Interconnect is priced for global network scale: Cloudflare doesn't charge enterprise customers anything to provision CNI. Customers may need to pay for access to an interconnection platform or for a datacenter cross-connect. We’ll work with you and any other parties involved to make the ordering and provisioning process as smooth as possible. In other words, CNI’s pricing is designed to accommodate complicated enterprise network topologies and modern IT budgets.

How to interconnect

Customers can interconnect with Cloudflare in one of three ways: over a private network interconnect (PNI), over an IX, or through one of our interconnection platform partners. We have worked closely with our global partners to meet our customers where they are, in the way they want.

Private Network Interconnects

Private Network Interconnects are available at any of our listed private peering facilities. Getting a physical connection to Cloudflare is easy: specify where you want to connect, port speeds, and target VLANs. From there, we’ll authorize it, you’ll place the order, and we’ll do the rest. Customers should choose a PNI if they want higher throughput than a virtual connection or a connection over an IX, or want to eliminate as many intermediaries from an interconnect as possible.

Internet Exchanges

Customers who want to use existing Internet Exchanges can interconnect with us at any of the 235+ Internet Exchanges we participate in. To connect with Cloudflare via an Internet Exchange, follow the IX’s instructions to connect, and Cloudflare will spin up our side of the connection.
Customers should choose an Internet Exchange if they are already peered at an IX, or if they want to interconnect in a place where an interconnection platform isn’t present.

Interconnection Platform Partners

Cloudflare is proud to be partnering with Equinix, Megaport, PCCW Console Connect, PacketFabric, and Zayo to provide you with easy ways to connect with us virtually in any of the partner-supported locations. Customers should choose an interconnection platform if they are already using one of these providers or want a quick and easy way to onboard onto a secure cloud experience. If you’re interested in learning more, please see this blog post about all the different ways you can interconnect.

For all of the interconnection methods described above, the BGP session establishment and IP routing are the same. The only difference is the physical way in which we interconnect with other networks.

How do I find the best places to interconnect?

Our product page for CNI includes tools to better understand the right places for your network to interconnect with ours. Customers can use this data to figure out the optimal places to interconnect for the most connectivity with other cloud providers and ISPs in general.

What’s the difference between CNI and peering?

Technically, peering and CNI use similar mechanisms and technical implementations behind the scenes. We have had an open peering policy with any network for years and will continue to abide by it: it helps us build a better Internet for everyone by interconnecting networks, making the Internet more reliable. Traditional networks use interconnection/peering to drive better performance and connectivity for their customers while driving down costs.
With CNI, we are opening up our infrastructure to extend the same benefits to our customers as well.

How do I learn more?

CNI provides customers with better performance, reliability, scalability, and security than using the public Internet. A customer can interconnect with Cloudflare in any of our physical locations today, getting dedicated links to Cloudflare that deliver security benefits and more stable latency, jitter, and available bandwidth through each interconnection point.

Contact our enterprise sales team about adding Cloudflare Network Interconnect to your existing offerings.

My living room intern experience at Cloudflare

This was an internship unlike any other. With a backdrop of a pandemic, protests, and a puppy that interrupted just about every Zoom meeting, it was also an internship that demonstrated Cloudflare’s leadership in giving students meaningful opportunities to explore their interests and contribute to the company’s mission: to help build a better Internet.

For the past twelve weeks, I’ve had the pleasure of working as a Legal Intern at Cloudflare. A few key things set this internship apart from even those in which I’ve been able to connect with people in person:

Communication
Community
Commingling
Collaboration

Ever since I formally accepted my internship, the Cloudflare team has been in frequent and thorough communication about what to expect and how to make the most of my experience. This approach to communication was in stark contrast to the approach taken by several other companies and law firms. The moment COVID-19 hit, Cloudflare not only reassured me that I’d still have a job, the company also doubled down on bringing on more interns. Comparatively, many of my fellow law school students were left in limbo: unsure whether they had a job, the extent to which they’d be able to do it remotely, and whether it would be a worthwhile experience.

This approach has continued through the duration of the internship. I know I speak for my fellow interns when I say that we were humbled to be included in company-wide initiatives to openly communicate about the trying times our nation, and particularly members of communities of color, have experienced this summer. We weren’t left on the sidelines but rather invited into the fold. I’m so grateful to my manager, Jason, for clearing my schedule to participate in Cloudflare’s “Day On: Learning and Inclusion.” On June 18, the day before Juneteenth, Cloudflare employees around the world joined together for transformative and engaging sessions on how to listen, learn, participate, and take action to be better members of our communities.
That day illustrated Cloudflare’s commitment to fostering communication as well as to building community and diversity. The company’s desire to foster a sense of community pervades each team. Case in point: members of the Legal, Policy, and Trust & Safety (LPT) team were ready and eager to help my fellow legal interns and me better understand the team’s mission and day-to-day activities. I went a perfect 11/11 on asks to LPT members for 1:1 Zoom meetings -- these meetings had nothing to do with a specific project but were merely meant to create a stronger community by talking with employees about how they ended up at this unique company. From what I’ve heard from fellow interns, this sense of community was a common thread woven throughout their experiences as well.

Similarly, other interns shared my appreciation for being given more than just “shadowing” opportunities. We were invited to commingle with our teammates and encouraged to take active roles in meetings and on projects. In my own case, I got to dive into exciting research on privacy laws such as the GDPR and so much more. This research required that I do more than just be a fly on the wall; I was invited to actively converse with and brief folks directly involved with making key decisions for the LPT. For instance, when Tilly came on in July as Privacy Counsel, I had the opportunity to brief her on the research I’d done related to Data Privacy Impact Assessments (DPIAs). In the same way, when Edo and Ethan identified some domain names that likely infringed on Cloudflare’s trademark, my fellow intern, Elizabeth, and I were empowered to draft WIPO complaints per the Uniform Domain Name Dispute Resolution Policy. Fingers crossed our work continues Cloudflare’s strong record before the WIPO (here’s an example of a recent favorable decision).
These seemingly small tasks introduced me to a wide range of fascinating legal topics that will inform my future coursework and, possibly, even my career goals.

Finally, collaboration distinguished this internship from other opportunities. By way of example, I was assigned projects that required working with others toward a successful outcome. In particular, I was excited to work with Jocelyn and Alissa on research related to the intersection of law and public policy. This dynamic duo fielded my queries, sent me background materials, and invited me to join meetings with stakeholders. This was a very different experience from previous internships, in which collaboration was confined to an email assigning the research and a casual invitation to reach out if any questions came up. At Cloudflare, I had the support of a buddy, a mentor, and my manager on all of my assignments and general questions.

When I walked out of Cloudflare’s San Francisco office back in December after my in-person interview, I was thrilled to potentially have the opportunity to return and help build a better Internet. Though I’ve yet to make it back to the office due to COVID-19 and have therefore worked entirely remotely, this internship nevertheless allowed me and my fellow interns to advance Cloudflare’s mission. Whatever normal looks like in the coming weeks, months, and years, so long as Cloudflare prioritizes communication, community, commingling, and collaboration, I know it will be a great place to work.

Commit to Diversity, Equity and Inclusion, Every Day

The world is waking up

Protesting in the name of Black Lives Matter.
Reading the book “White Fragility”.
Watching the documentary “13th”.

The world is waking up to the fight against racism and I couldn’t be happier! But let’s be clear: learning about anti-racism and being anti-racist are not the same things. Learning is a good first step and a necessary one. But if you don’t apply the knowledge you acquire, then you are not helping to move the needle.

Since the murder of George Floyd at the hands/knees of the Minneapolis police, people all over the world have been focused on Black Lives Matter and anti-racism. At Cloudflare, we’ve seen an increase in cyberattacks, we’ve heard from the leadership of Afroflare, our Employee Resource Group for employees of African descent, and we held our first ever Day On, on June 18, Cloudflare’s employee day of learning about bias, the history and psychological effects of racism, and how racism can get baked into algorithms. By way of this blog post, I want to share my thoughts about where I think we go from here and how I believe we can truly embody Diversity, Equity and Inclusion (DEI) in our workplace.

Is diversity recruiting the answer to anti-racism in the workplace?

Many Cloudflarians said we should increase our diversity recruiting efforts as part of the feedback we received after our Day On event. But recruiting more diverse candidates only solves one part of the problem. There are still two major hurdles to overcome:

Employees need to feel welcome and have a sense of belonging
Employees need to feel valued and have an equal opportunity for career advancement

Employee Resource Groups (ERGs) offer opportunities to foster community and a sense of belonging. But it is beyond the scope of an ERG to ensure all employees have equal opportunities for advancement. And honestly, this is where a lot of companies fall short.
It’s the reason you see people sharing pictures and calling out management teams and boards of directors all over social media: there is a lack of visible diversity at senior levels. Numbers can be misleading. A company might state, “We have 11% employees of this group or 8% of that group.” That’s great, but how many of these employees are thriving in their current roles and getting promoted at the same pace as their white counterparts? Or being compensated at the same rate as their male counterparts? The answers to those questions are much more telling, yet seldom shared.

Folks, if we are going to see meaningful change, we all need to get on board with Diversity, Equity and Inclusion. It’s really not the type of thing that people can opt in or out of. It won’t work. And even if, and when, everyone opts in to make DEI a priority, that won’t be enough. We won’t start to see real change until we are all living and breathing DEI day in and day out.

What does committing to DEI every day look like?

Doing something (anything) every day that flexes our DEI muscles and gets us closer to meaningful outcomes. Examples include:

Mentoring a person from an underrepresented group or asking someone from an underrepresented group to mentor you.
Scheduling coffee meetings with underrepresented people around the company and finding out how you can help to amplify their voices.
Providing candid, timely coaching to underrepresented employees to help them grow in their field or area of expertise.
Learning to value the different approaches and styles that people from underrepresented groups bring to the workplace.
Watching Cloudflare TV segments like “Everyone at the Table”, which airs weekly and promotes open dialogue about everyday topics among people with different perspectives.
Hosting office-wide or team-wide “listening circles” where employees can share what a just and equitable workplace looks like to them.
Requesting educational opportunities for your
team or whole company such as implicit bias workshops or allyship workshops. Asking if your company’s leaders have attended similar workshops.Asking your manager/team leadership how you may help increase the diversity of your team. Suggesting ideas for building a more inclusive culture within your team such as running meetings in a manner where everyone has an equal opportunity to speak, keeping meetings and work social activities within working hours, and regularly hosting conversations about how the team can be more inclusive.And finally - asking the opinion of someone from an under-represented group. This one is especially important since so many of us are not present when critical decisions are being made.Why is committing to DEI on a daily basis important?Because it’s easier for us to do nothing. Keeping the status quo is easy. Coming together to change the system is hard work. Especially if everyone is not on board.Because having a company full of underrepresented people who are not being heard, seen, celebrated, or promoted is not going to get us the outcomes we want. And trust me, it doesn’t take long to realize that you are not going to make it at a company. Racism, discrimination, and unfair treatment can be very subtle but under-represented people can tell when they are valued and appreciated. And when they are being set up to fail.Because we know too much. The system is broken. Underrepresented groups have always known this. But now that it is a fact most people acknowledge and accept, we can’t ignore it. A wise woman once said, "Do the best you can until you know better. Then when you know better, do better." (Maya Angelou)I’ll end my commentary with this: I view DEI as a journey that we must commit to every day. Here at Cloudflare. Across the tech industry. And in our world.Notice I used the word journey. It’s not a destination in the sense that we do these 10 things and we have “arrived”. 
Instead, I believe it is a journey that we will always be on, with milestones and achievements to be celebrated along the way.

To help you start flexing your DEI muscle, I’m kicking off a 21-Day DEI Challenge starting today! Every day, for the next 21 days, I challenge you to share in a public forum (bonus points for doing it on LinkedIn) how you are helping to move DEI forward. You can take a small step or a really big one. What matters is that you are flexing that muscle and challenging yourself (and others) to start the journey. #21DayDEIChallenge #BeAntiRacist #MoveTheNeedle

I hope you are up for the challenge that DEI offers us because the future of our company, industry, and society depends on it.

Postscript: This blog post is dedicated to the memory of the late Congressman John Lewis, a great civil rights leader and so much more, who challenged all of us to be brave enough to make noise and get into “good trouble” for the sake of justice and equality. Rest in Power, Mr. Lewis.

Making magic: Reimagining Developer Experience for the World of Serverless

This week we’ve talked about how Workers provides a step function improvement in the TTFB (time to first byte) of applications, by running lightweight isolates in over 200 cities around the world, free of cold starts. Today I’m going to talk about another metric, one that’s arguably even more important: TTFD, or time to first dopamine, and announce a huge improvement to the Workers development experience — wrangler dev, our edge-based development environment with all the perks of a local environment.

There’s nothing quite like the rush of getting your first few lines of code to work — no matter how many times you’ve done it before, there’s something so magical about the computer understanding exactly what you wanted it to do and doing it! This is the kind of magic I expected of “serverless”, and while it’s true that most serverless offerings today get you to that feeling faster than setting up a virtual server ever would, I still can’t help but be disappointed with how lackluster developing with most serverless platforms is today.

Some of my disappointment can be attributed to the leaky nature of the abstraction: the journey to getting you to the point of writing code is drawn out by forced decision-making about servers (regions, memory allocation, etc.). Servers, however, are not the only thing holding developers back from getting to the delightful magical feeling in the serverless world today. My recent “serverless” experience on AWS Lambda looked like this: between configuring the right access policy to invoke my own test application, and deciding whether an HTTP or REST API was better suited for my needs, 30 minutes had easily passed, and I still didn’t have a URL I could call to invoke my application. I did, however, spin up five different services, and was already worrying about cleaning them up lest I be charged for them.
That doesn’t feel like magic!

In building what we believe to be the serverless platform of the future — a promise that feels very magical — we wanted to bring back that magical feeling to every step of the development journey. If serverless is about empowering developers, then they should be empowered every step of the way: from proof of concept to MVP and beyond.

We’re excited to share with you today our approach to making our developer experience delightful. We recognize we still have plenty of room to continue to grow and innovate (and we can’t wait to tell you about everything we have currently in the works as well!), but we’re proud of all the progress we’ve made in making Workers the easiest development platform for developers to use.

Defining “developer experience”

To get us started, let’s look at what the journey of a developer entails. Today, we’ll be defining the developer experience as the following four stages:
- Getting started: All the steps we have to take before putting in some keystrokes
- Iteration: Does my code do what I expect it to do? What do I need to do to get it there?
- Release: I’ve tested what I can — time to hit the big red button!
- Observe: Is anything broken? And how do I fix it?

When approaching each stage of development, we wanted to reimagine the experience the way that we’ve always wanted our development flow to work, and fix the places along the way where existing platforms have let us down.

Zero to Hello World

With Workers, we want to get you to that aforementioned delightful feeling as quickly as possible, and remove every obstacle in the way of writing and deploying your code. The first deployment experience is really important — if you’ve done it once and haven’t given up along the way, you can do it again. We’re very proud to say our TTFD — even for a new user without a Cloudflare account — is as low as three minutes. If you’re an existing customer, you can have your first Worker running in seconds.
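That first Worker can be only a handful of lines. Here is a sketch of a minimal Worker script (the handler body is illustrative, not the exact scaffolded template):

```javascript
// Minimal Worker-style script (illustrative sketch, not the exact
// template that `wrangler generate` produces).
// buildGreeting is plain JavaScript; the registration below only runs
// inside the Workers runtime, where addEventListener and Response exist.
function buildGreeting(name) {
  return `Hello, ${name}!`;
}

if (typeof addEventListener === "function") {
  addEventListener("fetch", (event) => {
    event.respondWith(new Response(buildGreeting("World")));
  });
}
```

Publishing a script like this is all it takes to serve the greeting from the edge.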
No regions to choose, no IAM rules to configure, and no API Gateways to set up or worry about paying for. If you’re new to Workers and still trying to get a feel for it, you can instantly deploy your Worker to 200 cities around the world within seconds, with the simple click of a button. If you’ve already decided on Workers as the choice for building your next application, we want to make you feel at home by allowing you to use all of your favorite IDEs, be it vim or emacs or VSCode (we don’t care!). With the release of wrangler, the official command-line tool for Workers, getting started is just as easy as:

wrangler generate hello
cd hello
wrangler publish

Again, in seconds your code is up and running, and easily accessible all over the world. “Hello, World!”, of course, doesn’t have to be quite so literal. We provide a range of tutorials to help get you started and get familiar with developing with Workers. To save you that last bit of time in getting started, our template gallery provides starter templates so you can dive straight into building the products you’re excited about — whether it’s a new GraphQL server or a brand new static site, we’ve got you covered.

Local(ish) development: code, test, repeat

We can’t promise to get the code right on your behalf, but we can promise to do everything we can to get you the feedback you need to help you get your code right. The development journey requires lots of experimentation, trial and error, and debugging. If my Computer Science degree came with instructions on the back of the bottle, they would read: “code, print, repeat.” Getting code right is an extremely iterative, feedback-driven process. We would all love to get code right the first time around and move on, but the reality is, computers are bad mind-readers, and you’ve ended up with an extraneous parenthesis or a stray comma in your JSON, so your code is not going to run. Found where the loose parenthesis was introduced? Great!
Now your code is running, but the output is not right — time to go find that off-by-one error.

Local development has traditionally been the way for developers to get a tight feedback loop during the development process. The crucial components that make an effective local development environment a great testing ground are: a fast feedback loop, its sandboxed nature (the ability to develop without affecting production), and accuracy. As we started thinking about accomplishing all three of those goals, we realized that being local actually wasn’t itself a requirement — speed is the real requirement, and running on the client seemed like the only way to achieve acceptable speed for a good-enough feedback loop.

One option was to provide a traditional local development environment for the Workers runtime, but one thing didn’t sit well with us: we knew there was more to handling a request than just the runtime, and leaving the rest out would compromise accuracy. We didn’t want to set our users up to fail with code that works on their machine but not ours. Shipping the rest of our edge infrastructure to the user would pose its own challenges of keeping it up to date, and it would require the user to install hundreds of unnecessary dependencies, all potentially to end up with the most frustrating experience of all: running into some installation bug whose explanation couldn’t be found on StackOverflow. This experience didn’t sit right with us.

As it turns out, this is a very similar problem to one we commonly solve for our customers: running code on the client is fast, but it doesn’t give me the control I need; running code on the server gives me the control I need, but it requires a slow round-trip to the origin. All we had to do was take our own advice and run it on the edge!
It’s the best of both worlds: your code runs so close to your end user that you get the same performance as running it on the client, without having to lose control. To provide developers access to this tight feedback loop, we introduced wrangler dev earlier this year! wrangler dev has the look and feel of a local development environment: it runs on localhost but tunnels to the edge, and provides output directly to your IDE of choice. Since wrangler dev runs on the edge, it works on your machine and ours exactly the same!

Our release candidate for wrangler dev is live and waiting for you to take it for a test drive, as easily as:

npm i @cloudflare/wrangler@beta -g

Let us know what you think.

Release

After writing all the code, testing every edge case imaginable, and going through code review, at some point the code needs to be released for the rest of the world to reap the fruits of your hard labor and enjoy the features you’ve built. For smaller, quick applications, it’s exciting to hit the “Save & deploy” button and let fate take the wheel. For production-level projects, however, the process of deploying to production may be a bit different. Different organizations adopt different processes for code release. For those using GitHub, last year we introduced our GitHub Action to make it easy to configure an integrated release process. With Wrangler, you can configure Workers to deploy using your existing CI, to automate deployments and minimize human intervention.

When deploying to production, again, feedback becomes extremely important. Some platforms today still take as long as a few minutes to deploy your code. A few minutes may seem trivial, but a few minutes of nervously refreshing, wondering whether your code is live yet and which version of your code your users are seeing, is stressful. This is especially true in a rollback or a bug-fix situation where you want the new version to be live ASAP.
New Workers are deployed globally in less than five seconds, which means new changes are live almost instantly. Better yet, since Workers runs on lightweight isolates, newly deployed Workers don’t experience dreaded cold starts, which means you can release code as frequently as you’re able to ship it, without having to invest additional time in auxiliary gadgets to pre-warm your Worker — more time for you to start working on your next feature!

Observe & Resolve

The big red button has been pushed. Dopamine has been replaced with adrenaline: the instant question on your mind is: “Did I break anything? And if so, what, and how do I fix it?” These questions are at the core of what the industry calls “observability”. There are different ways things can break and incidents can manifest themselves: increases in errors, drops in traffic, even a drop in performance could be considered a regression. To identify these kinds of issues, you need to be able to spot a trend. Raw data, however, is not a very useful medium for spotting trends — humans simply cannot parse raw lines of logs to identify a subtle increase in errors. This is why we took a two-pronged approach to helping developers identify and fix issues: exposing trend data through analytics, while also providing the ability to tail production logs for forensics and investigation.

Earlier this year, we introduced Workers Metrics: an easy way for developers to identify trends in their production traffic. With request metrics, you can easily spot any increases in errors, or drastic changes in traffic patterns after a given release. Additionally, sometimes new code can introduce unforeseen regressions in the overall performance of the application.
With CPU time metrics, our developers are now able to spot changes in the performance of their Worker, as well as use that information to guide and optimize their code. Once you’ve identified a regression, we wanted to provide the tools needed to find your bug and fix it, which is why we also recently launched `wrangler tail`: production logs in a single command. wrangler tail can help diagnose where code is failing or why certain customers are getting unexpected outcomes, because it exposes console.log() output and exceptions. By having access to this output, developers can immediately diagnose, fix, and resolve any issues occurring in production.

We know how precious every moment can be when a bad code deploy impacts customer traffic. Luckily, once you’ve found and fixed your bug, it’s only a matter of seconds for users to start benefiting from the fix — unlike other platforms which make you wait as long as five minutes, Workers get deployed globally within five seconds.

Repeat

As you’re thinking about your next feature, you check out a new branch, and the cycle begins all over. We’re excited for you to check out all the improvements we’ve made to the development experience with Workers, all to reduce your time to first dopamine (TTFD). We are always working on improving it further, looking for where we can remove every additional bit of friction, and we’d love to hear your feedback as we do so.
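To tie the observability pieces above together, here is a sketch of a Worker handler instrumented so that `wrangler tail` has something useful to show. The handler shape, helper names, and log format are illustrative choices, not a prescribed Workers API:

```javascript
// Sketch: instrument a handler so `wrangler tail` surfaces useful
// forensics. classifyStatus and the log format are illustrative.
function classifyStatus(status) {
  if (status >= 500) return "error";
  if (status >= 400) return "client-error";
  return "ok";
}

async function handleWithLogging(request, fetchFn) {
  const response = await fetchFn(request);
  // console.log output (and any thrown exceptions) show up in
  // `wrangler tail` while the Worker runs in production.
  console.log(`status=${response.status} class=${classifyStatus(response.status)}`);
  return response;
}
```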

Bringing Your Own IPs to Cloudflare (BYOIP)

Today we’re thrilled to announce general availability of Bring Your Own IP (BYOIP) across our Layer 7 products as well as our Spectrum and Magic Transit services. When BYOIP is configured, the Cloudflare edge will announce a customer’s own IP prefixes, and the prefixes can be used with our Layer 7 services, Spectrum, or Magic Transit. If you’re not familiar with the term, an IP prefix is a range of IP addresses. Routers create a table of reachable prefixes, known as a routing table, to ensure that packets are delivered correctly across the Internet.

As part of this announcement, we are listing BYOIP on the relevant product pages and developer documentation, and adding UI support for controlling your prefixes. Previously, support was API-only.

Customers choose BYOIP with Cloudflare for a number of reasons. It may be the case that your IP prefix is already allow-listed in many important places, and updating firewall rules to also allow Cloudflare address space may represent a large administrative hurdle. Additionally, you may have hundreds of thousands, or even millions, of end users pointed directly to your IPs via DNS, and it would be hugely time consuming to get them all to update their records to point to Cloudflare IPs.

Over the last several quarters we have been building tooling and processes to support customers bringing their own IPs at scale. At the time of writing this post we’ve successfully onboarded hundreds of customer IP prefixes. Of these, 84% have been for Magic Transit deployments, 14% for Layer 7 deployments, and 2% for Spectrum deployments.

When you BYOIP with Cloudflare, we announce your IP space in over 200 cities around the world and tie your IP prefix to the service (or services!) of your choosing. Your IP space will be protected and accelerated as if it were Cloudflare’s own.
We can support regional deployments for BYOIP prefixes as well if you have technical and/or legal requirements limiting where your prefixes can be announced, such as data sovereignty. You can turn on advertisement of your IPs from the Cloudflare edge with a click of a button and be live across the world in a matter of minutes.

All BYOIP customers receive network analytics on their prefixes. Additionally, all IPs in BYOIP prefixes can be considered static IPs. There are also benefits specific to the service you use with your IP prefix on Cloudflare.

Layer 7 + BYOIP: Cloudflare has a robust Layer 7 product portfolio, including products like Bot Management, Rate Limiting, Web Application Firewall, and Content Delivery, to name just a few. You can choose to BYOIP with our Layer 7 products and receive all of their benefits on your IP addresses. For Layer 7 services, we can support a variety of IP-to-domain mapping requests, including sharing IPs between domains or putting domains on dedicated IPs, which can help meet requirements for things such as non-SNI support. If you are also an SSL for SaaS customer, using BYOIP gives you increased flexibility to change IP address responses for custom_hostnames in the event an IP is unserviceable for some reason.

Spectrum + BYOIP: Spectrum is Cloudflare’s solution to protect and accelerate applications that run any UDP or TCP protocol. The Spectrum API supports BYOIP today. Spectrum customers who use BYOIP can specify, through Spectrum’s API, which IPs they would like associated with a Spectrum application.

Magic Transit + BYOIP: Magic Transit is a Layer 3 security service which processes all your network traffic by announcing your IP addresses and attracting that traffic to the Cloudflare edge for processing. Magic Transit supports sophisticated packet filtering and firewall configurations. BYOIP is a requirement for using the Magic Transit service.
As Magic Transit is an IP-level service, Cloudflare must be able to announce your IPs in order to provide this service.

Bringing Your IPs to Cloudflare: What is Required?

Before Cloudflare can announce your prefix, we require some documentation to get started. The first is something called a ‘Letter of Authorization’ (LOA), which details information about your prefix and how you want Cloudflare to announce it. We then share this document with our Tier 1 transit providers in advance of provisioning your prefix. This step is done to ensure that the Tier 1s are aware we have authorization to announce your prefixes. Secondly, we require that your Internet Routing Registry (IRR) records are up to date and reflect the data in the LOA. This typically means ensuring the entry in your regional registry is updated (e.g. ARIN, RIPE, APNIC).

Once the administrivia is out of the way, work with your account team to learn when your prefixes will be ready to announce. We also encourage customers to use RPKI and can support this for customer prefixes. We have blogged about and built extensive tooling to make adoption of this protocol easier. If you’re interested in BYOIP with RPKI support, just let your account team know!

Configuration

Each customer prefix can be announced via the ‘dynamic advertisement’ toggle in either the UI or API, which will cause the Cloudflare edge to either announce or withdraw a prefix on your behalf. This can be done as soon as your account team lets you know your prefixes are ready to go.

Once the IPs are ready to be announced, you may want to set up ‘delegations’ for your prefixes. Delegations manage how the prefix can be used across multiple Cloudflare accounts and have slightly different implications depending on which service your prefix is bound to. A prefix is owned by a single account, but a delegation can extend some of the prefix functionality to other accounts. This is also captured in our developer docs.
Today, delegations can affect Layer 7 and Spectrum BYOIP prefixes.

Layer 7: If you use BYOIP + Layer 7 and also use the SSL for SaaS service, a delegation to another account will allow that account to also use that prefix to validate custom hostnames, in addition to the original account which owns the prefix. This means that multiple accounts can use the same IP prefix to serve up custom hostname traffic. Additionally, all of your IPs can serve traffic for custom hostnames, which means you can easily change IP addresses for these hostnames if an IP is blocked for any reason.

Spectrum: If you use BYOIP + Spectrum, via the Spectrum API you can specify which IP in your prefix you want to create a Spectrum app with. If you create a delegation for a prefix to another account, that second account will also be able to specify an IP from that prefix to create an app.

If you are interested in learning more about BYOIP across Magic Transit, CDN, or Spectrum, please reach out to your account team if you’re an existing customer, or contact us if you’re a new prospect.
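As a footnote to the configuration section above, the ‘dynamic advertisement’ toggle maps to a small API call. The endpoint path and field name below are assumptions sketched from this post’s description; check the developer docs for the authoritative shape:

```javascript
// Hypothetical sketch of toggling BYOIP prefix advertisement via the API.
// The URL path and the `advertised` field are assumptions, not the
// documented contract.
function buildAdvertisementRequest(accountId, prefixId, advertised) {
  return {
    method: "PATCH",
    url:
      `https://api.cloudflare.com/client/v4/accounts/${accountId}` +
      `/addressing/prefixes/${prefixId}/bgp/status`,
    body: { advertised },
  };
}
```

Sending the same request with `advertised: false` would ask the edge to withdraw the prefix.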

Eliminating cold starts with Cloudflare Workers

A “cold start” is the time it takes to load and execute a new copy of a serverless function for the first time. It’s a problem that’s both complicated to solve and costly to fix. Other serverless platforms make you choose between suffering from random increases in execution time, or paying your way out with synthetic requests to keep your function warm. Cold starts are a horrible experience, especially when serverless containers can take full seconds to warm up.

Unlike containers, Cloudflare Workers utilize isolate technology, where cold starts are measured in single-digit milliseconds. Well, at least they were. Today, we’re removing the need to worry about cold starts entirely, by introducing support for Workers that have no cold starts at all — that’s right, zero. Forget about cold starts, warm starts, or... any starts; with Cloudflare Workers you get always-hot, raw performance in more than 200 cities worldwide.

Why is there a cold start problem?

It’s impractical to keep everyone’s functions warm in memory all the time. Instead, serverless providers only warm up a function after the first request is received. Then, after a period of inactivity, the function becomes cold again and the cycle continues. For Workers, this has never been much of a problem. In contrast to containers that can spend full seconds spinning up a new containerized process for each function, the isolate technology behind Workers allows it to warm up a function in under 5 milliseconds. Learn more about how isolates enable Cloudflare Workers to be performant and secure here.

Cold starts are ugly. They’re unexpected, unavoidable, and cause unpredictable code execution times. You shouldn’t have to compromise your customers’ experience to enjoy the benefits of serverless.
In a collaborative effort between our Workers and Network teams, we set out to create a solution where you never have to worry about cold starts, warm starts, or pre-warming ever again.

How is a zero cold start even possible?

Like many features at Cloudflare, security and encryption make our network more intelligent. Since 95% of Worker requests are securely handled over HTTPS, we engineered a solution that uses the Internet’s encryption protocols to our advantage. Before a client can send an HTTPS request, it needs to establish a secure channel with the server. This process is known as “handshaking” in the TLS, or Transport Layer Security, protocol. Most clients also send a hostname in that handshake, which is referred to as the SNI, or Server Name Indication. The server receives the handshake, sends back a certificate, and now the client is allowed to send its original request, encrypted.

Previously, Workers would only load and compile after the entire handshake process was complete, which involves two round-trips between the client and server. But wait, we thought: if the hostname is present in the handshake, why wait until the entire process is done to preload the Worker? Since the handshake takes some time, there is an opportunity to warm up resources during the waiting time before the request arrives. With our newest optimization, when Cloudflare receives the first packet during TLS negotiation, the “ClientHello”, we signal the Workers runtime to eagerly load that hostname’s Worker. After the handshake is done, the Worker is warm and ready to receive requests. Since it only takes 5 milliseconds to load a Worker, and the average latency between a client and Cloudflare is more than that, the effective cold start is zero. The Worker starts executing code the moment the request is received from the client.

When are zero cold starts available?

Now, and for everyone! We’ve rolled out this optimization to all Workers customers and it is in production today.
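The arithmetic behind “zero” is worth spelling out: preloading starts at the ClientHello, so the only cold start a client can observe is whatever part of the ~5 ms load is not hidden by the remaining handshake time. A sketch with illustrative numbers:

```javascript
// Effective cold start once the Worker starts loading at ClientHello:
// the portion of the load time not hidden by the rest of the handshake.
function effectiveColdStartMs(loadMs, handshakeRemainingMs) {
  return Math.max(0, loadMs - handshakeRemainingMs);
}

// With a ~5 ms isolate load and more than 5 ms of handshake round-trip
// remaining, the client observes no cold start at all:
// effectiveColdStartMs(5, 20) === 0
```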
There’s no extra fee and no configuration change required. When you build on Cloudflare Workers, you build on an intelligent, distributed network that is constantly pushing the bounds of what’s possible in terms of performance. For now, this is only available for Workers that are deployed to a “root” hostname, not to specific paths. We plan to introduce more optimizations in the future that can preload specific paths.

What about performance beyond cold starts?

We also recognize that performance is more than just zero cold starts. That’s why we announced the beta of Workers Unbound earlier this week. Workers Unbound has the simplicity and performance of Workers with no limits, just raw performance. Workers, equipped with zero cold starts, no CPU limits, and a network that spans over 200 cities, is primed and ready to take on any serious workload. Now that’s performance.

Interested in deploying with Workers?
- Learn more about Cloudflare Workers
- Join the Workers Unbound Beta
- Try our new language support for Python and Kotlin

Workers Security

Hello, I'm an engineer on the Workers team, and today I want to talk to you about security. Cloudflare is a security company, and the heart of Workers is, in my view, a security project. Running code written by third parties is always a scary proposition, and the primary concern of the Workers team is to make that safe. For a project like this, it is not enough to pass a security review, say "ok, we're secure", and move on. It's not even enough to consider security at every stage of design and implementation. For Workers, security in and of itself is an ongoing project, and that work is never done. There are always things we can do to reduce the risk and impact of future vulnerabilities. Today, I want to give you an overview of our security architecture, and then address two specific issues that we are frequently asked about: V8 bugs, and Spectre.

Architectural Overview

Let's start with a quick overview of the Workers Runtime architecture. There are two fundamental parts of designing a code sandbox: secure isolation and API design.

Isolation

First, we need to create an execution environment where code can't access anything it's not supposed to. For this, our primary tool is V8, the JavaScript engine developed by Google for use in Chrome. V8 executes code inside "isolates", which prevent that code from accessing memory outside the isolate -- even within the same process. Importantly, this means we can run many isolates within a single process. This is essential for an edge compute platform like Workers, where we must host many thousands of guest apps on every machine and rapidly switch between these guests thousands of times per second with minimal overhead. If we had to run a separate process for every guest, the number of tenants we could support would be drastically reduced, and we'd have to limit edge compute to a small number of big enterprise customers who could pay a lot of money. With isolate technology, we can make edge compute available to everyone.
Sometimes, though, we do decide to schedule a worker in its own private process. We do this if it uses certain features that we feel need an extra layer of isolation. For example, when a developer uses the devtools debugger to inspect their worker, we run that worker in a separate process. This is because historically, in the browser, the inspector protocol has only been usable by the browser's trusted operator, and therefore has not received as much security scrutiny as the rest of V8. In order to hedge against the increased risk of bugs in the inspector protocol, we move inspected workers into a separate process with a process-level sandbox. We also use process isolation as an extra defense against Spectre, which I'll describe later in this post.

Additionally, even for isolates that run in a shared process with other isolates, we run multiple instances of the whole runtime on each machine, which we call "cordons". Workers are distributed among cordons by assigning each worker a level of trust and separating low-trust workers from those we trust more highly. As one example of this in operation: a customer who signs up for our free plan will not be scheduled in the same process as an enterprise customer. This provides some defense-in-depth in case a zero-day security vulnerability is found in V8. But I'll talk more about V8 bugs, and how we address them, later in this post.

At the whole-process level, we apply another layer of sandboxing for defense in depth. The "layer 2" sandbox uses Linux namespaces and seccomp to prohibit all access to the filesystem and network. Namespaces and seccomp are commonly used to implement containers. However, our use of these technologies is much stricter than what is usually possible in container engines, because we configure namespaces and seccomp after the process has started (but before any isolates have been loaded).
This means, for example, we can (and do) use a totally empty filesystem (mount namespace) and use seccomp to block absolutely all filesystem-related system calls. Container engines can't normally prohibit all filesystem access, because doing so would make it impossible to use exec() to start the guest program from disk; in our case, our guest programs are not native binaries, and the Workers runtime itself has already finished loading before we block filesystem access.

The layer 2 sandbox also totally prohibits network access. Instead, the process is limited to communicating only over local Unix domain sockets, to talk to other processes on the same system. Any communication to the outside world must be mediated by some other local process outside the sandbox. One such process in particular, which we call the "supervisor", is responsible for fetching worker code and configuration from disk or from other internal services. The supervisor ensures that the sandbox process cannot read any configuration except that which is relevant to the workers that it should be running. For example, when the sandbox process receives a request for a worker it hasn't seen before, that request includes the encryption key for that worker's code (including attached secrets). The sandbox can then pass that key to the supervisor in order to request the code. The sandbox cannot request any worker for which it has not received the appropriate key. It cannot enumerate known workers. It also cannot request configuration it doesn't need; for example, it cannot request the TLS key used for HTTPS traffic to the worker.

Aside from reading configuration, the other reason for the sandbox to talk to other processes on the system is to implement APIs exposed to Workers. Which brings us to API design.

API Design

There is a saying: "If a tree falls in the forest, but no one is there to hear it, does it make a sound?"
I have a related saying: "If a Worker executes in a fully-isolated environment in which it is totally prevented from communicating with the outside world, does it actually run?" Complete code isolation is, in fact, useless. In order for Workers to do anything useful, they have to be allowed to communicate with users. At the very least, a Worker needs to be able to receive requests and respond to them. It would also be nice if it could send requests to the world, safely. For that, we need APIs. In the context of sandboxing, API design takes on a new level of responsibility. Our APIs define exactly what a Worker can and cannot do. We must be very careful to design each API so that it can only express operations which we want to allow, and no more. For example, we want to allow Workers to make and receive HTTP requests, while we do not want them to be able to access the local filesystem or internal network services. Let's dig into the easier example first. Currently, Workers does not allow any access to the local filesystem. Therefore, we do not expose a filesystem API at all. No API means no access. But, imagine if we did want to support local filesystem access in the future. How would we do that? We obviously wouldn't want Workers to see the whole filesystem. Imagine, though, that we wanted each Worker to have its own private directory on the filesystem where it can store whatever it wants. To do this, we would use a design based on capability-based security. Capabilities are a big topic, but in this case, what it would mean is that we would give the worker an object of type Directory, representing a directory on the filesystem. This object would have an API that allows creating and opening files and subdirectories, but does not permit traversing "up" the tree to the parent directory. Effectively, each worker would see its private Directory as if it were the root of their own filesystem. How would such an API be implemented? 
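Before getting to the implementation, it may help to sketch the worker-facing shape of such a capability. The following Python sketch is purely illustrative -- Workers has no filesystem API today, and the Directory/subdir names are invented here -- but it shows how a capability object can grant access only "downward":

```python
import posixpath

class Directory:
    """A capability object: holds one directory, grants access only downward."""

    def __init__(self, root):
        self._root = root  # private; the worker never sees the real path

    def subdir(self, name):
        # Only plain child names are accepted, so a worker cannot
        # traverse "up" with ".." or jump elsewhere with "/".
        if "/" in name or name in ("", ".", ".."):
            raise ValueError("invalid directory name")
        return Directory(posixpath.join(self._root, name))

    def open_file(self, name, mode="r"):
        # A real implementation would delegate the actual I/O to a
        # privileged process; here we only show the capability check.
        if "/" in name or name in ("", ".", ".."):
            raise ValueError("invalid file name")
        return (posixpath.join(self._root, name), mode)  # placeholder handle
```

Because the object holds no reference to its parent and rejects path components like "..", the only way to reach a directory is to be handed a capability for it or for one of its ancestors.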
As described above, the sandbox process cannot access the real filesystem, and we'd prefer to keep it that way. Instead, file access would be mediated by the supervisor process. The sandbox talks to the supervisor using Cap'n Proto RPC, a capability-based RPC protocol. (Cap'n Proto is an open source project currently maintained by the Cloudflare Workers team.) This protocol makes it very easy to implement capability-based APIs, so that we can strictly limit the sandbox to accessing only the files that belong to the Workers it is running. Now what about network access? Today, Workers are allowed to talk to the rest of the world only via HTTP -- both incoming and outgoing. There is no API for other forms of network access, therefore it is prohibited (though we plan to support other protocols in the future). As mentioned before, the sandbox process cannot connect directly to the network. Instead, all outbound HTTP requests are sent over a Unix domain socket to a local proxy service. That service implements restrictions on the request. For example, it verifies that the request is either addressed to a public Internet service, or to the Worker's zone's own origin server, not to internal services that might be visible on the local machine or network. It also adds a header to every request identifying the worker from which it originates, so that abusive requests can be traced and blocked. Once everything is in order, the request is sent on to our HTTP caching layer, and then out to the Internet. Similarly, inbound HTTP requests do not go directly to the Workers Runtime. They are first received by an inbound proxy service. That service is responsible for TLS termination (the Workers Runtime never sees TLS keys), as well as identifying the correct Worker script to run for a particular request URL. Once everything is in order, the request is passed over a Unix domain socket to the sandbox process. 
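As a rough illustration of the kind of checks such an outbound proxy performs -- the specific rules and the header name here are invented for the sketch, not Cloudflare's actual internals:

```python
# Illustrative sketch of outbound-request vetting by a local proxy:
# allow the worker's own origin, block internal/private addresses,
# and tag every request with the originating worker's identity.
import ipaddress
from urllib.parse import urlparse

def check_outbound(url, worker_id, zone_origin_hosts):
    host = urlparse(url).hostname or ""
    if host not in zone_origin_hosts:
        # If the destination is a literal IP, it must not be
        # private or loopback (i.e. an internal service).
        try:
            ip = ipaddress.ip_address(host)
            if ip.is_private or ip.is_loopback:
                raise PermissionError("internal address blocked")
        except ValueError:
            pass  # a hostname; DNS-level checks are omitted in this sketch
    # Tag the request so abusive traffic can be traced back to the worker.
    # (Hypothetical header name, for illustration only.)
    return {"X-Worker-Id": worker_id}
```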
V8 bugs and the "patch gap"

Every non-trivial piece of software has bugs, and sandboxing technologies are no exception. Virtual machines have bugs, containers have bugs, and yes, isolates (which we use) also have bugs. We can't live life pretending that no further bugs will ever be discovered; instead, we must assume they will and plan accordingly. We rely heavily on isolation provided by V8, the JavaScript engine built by Google for use in Chrome. This has good sides and bad sides. On one hand, V8 is an extraordinarily complicated piece of technology, creating a wider "attack surface" than virtual machines. More complexity means more opportunities for something to go wrong. On the bright side, though, an extraordinary amount of effort goes into finding and fixing V8 bugs, owing to its position as arguably the most popular sandboxing technology in the world. Google regularly pays out 5-figure bounties to anyone finding a V8 sandbox escape. Google also operates fuzzing infrastructure that automatically finds bugs faster than most humans can. Google's investment does a lot to minimize the danger of V8 "zero-days" -- bugs that are found by the bad guys and not known to Google. But, what happens after a bug is found and reported by the good guys? V8 is open source, so fixes for security bugs are developed in the open and released to everyone at the same time -- good guys and bad guys. It's important that any patch be rolled out to production as fast as possible, before the bad guys can develop an exploit. The time between publishing the fix and deploying it is known as the "patch gap". Earlier this year, Google announced that Chrome's patch gap had been reduced from 33 days to 15 days. Fortunately, we have an advantage here, in that we directly control the machines on which our system runs.
We have automated almost our entire build and release process, so the moment a V8 patch is published, our systems automatically build a new release of the Workers Runtime and, after one-click sign-off from the necessary (human) reviewers, automatically push that release out to production. As a result, our patch gap is now under 24 hours. A patch published by V8's team in Munich during their work day will usually be in production before the end of our work day in the US.

Spectre: Introduction

We get a lot of questions about Spectre. The V8 team at Google has stated in no uncertain terms that V8 itself cannot defend against Spectre. Since Workers relies on V8 for sandboxing, many have asked if that leaves Workers vulnerable. However, we do not need to depend on V8 for this; the Workers environment presents many alternative approaches to mitigating Spectre. Spectre is complicated and nuanced, and there's no way I can cover everything there is to know about it or how Workers addresses it in a single blog post. But, hopefully I can clear up some of the confusion and concern.

What is it?

Spectre is a class of attacks in which a malicious program can trick the CPU into "speculatively" performing computation using data that the program is not supposed to have access to. The CPU eventually realizes the problem and does not allow the program to see the results of the speculative computation. However, the program may be able to derive bits of the secret data by looking at subtle side effects of the computation, such as the effects on cache. For more background about Spectre, check out our Learning Center page on the topic.

Why does it matter for Workers?

Spectre encompasses a wide variety of vulnerabilities present in modern CPUs. The specific vulnerabilities vary by architecture and model, and it is likely that many vulnerabilities exist which haven't yet been discovered. These vulnerabilities are a problem for every cloud compute platform.
Any time you have more than one tenant running code on the same machine, Spectre attacks come into play. However, the "closer together" the tenants are, the more difficult it can be to mitigate specific vulnerabilities. Many of the known issues can be mitigated at the kernel level (protecting processes from each other) or at the hypervisor level (protecting VMs), often with the help of CPU microcode updates and various tricks (many of which, unfortunately, come with serious performance impact). In Cloudflare Workers, we isolate tenants from each other using V8 isolates -- not processes nor VMs. This means that we cannot necessarily rely on OS or hypervisor patches to "solve" Spectre for us. We need our own strategy.

Why not use process isolation?

Cloudflare Workers is designed to run your code in every single Cloudflare location, of which there are currently 200 worldwide and growing. We wanted Workers to be a platform that is accessible to everyone -- not just big enterprise customers who can pay megabucks for it. We need to handle a huge number of tenants, where many tenants get very little traffic. Combine these two points, and things get tricky. A typical, non-edge serverless provider could handle a low-traffic tenant by sending all of that tenant's traffic to a single machine, so that only one copy of the application needs to be loaded. If the machine can handle, say, a dozen tenants, that's plenty. That machine can be hosted in a mega-datacenter with literally millions of machines, achieving economies of scale. However, this centralization incurs latency and worldwide bandwidth costs when the users don't happen to be nearby. With Workers, on the other hand, every tenant, regardless of traffic level, currently runs in every Cloudflare location. And in our quest to get as close to the end user as possible, we sometimes choose locations that only have space for a limited number of machines.
The net result is that we need to be able to host thousands of active tenants per machine, with the ability to rapidly spin up inactive ones on-demand. That means that each guest cannot take more than a couple megabytes of memory -- hardly enough space for a call stack, much less everything else that a process needs. Moreover, we need context switching to be extremely cheap. Many Workers resident in memory will only handle an event every now and then, and many Workers spend only a fraction of a millisecond on any particular event. In this environment, a single core can easily find itself switching between thousands of different tenants every second. Moreover, to handle one event, a significant amount of communication needs to happen between the guest application and its host, meaning still more switching and communications overhead. If each tenant lives in its own process, all this overhead is orders of magnitude larger than if many tenants live in a single process. When using strict process isolation in Workers, we find the CPU cost can easily be 10x what it is with a shared process. In order to keep Workers inexpensive, fast, and accessible to everyone, we must solve these issues, and that means we must find a way to host multiple tenants in a single process.

There is no "fix" for Spectre

A dirty secret that the industry doesn't like to admit: no one has "fixed" Spectre. Not even when using heavyweight virtual machines. Everyone is still vulnerable. The current approach being taken by most of the industry is essentially a game of whack-a-mole. Every couple months, researchers uncover a new Spectre vulnerability. CPU vendors release some new microcode, OS vendors release kernel patches, and everyone has to update. But is it enough to merely deploy the latest patches? It is abundantly clear that many more vulnerabilities exist, but haven't yet been publicized. Who might know about those vulnerabilities?
Most of the bugs being published are being found by (very smart) graduate students on a shoestring budget. Imagine, for a minute, how many more bugs a well-funded government agency, able to buy the very best talent in the world, could be uncovering. To truly defend against Spectre, we need to take a different approach. It's not enough to block individual known vulnerabilities. We must address the entire class of vulnerabilities at once.

We can't stop it, but we can slow it down

Unfortunately, it's unlikely that any catch-all "fix" for Spectre will be found. But for the sake of argument, let's try. Fundamentally, all Spectre vulnerabilities use side channels to detect hidden processor state. Side channels, by definition, involve observing some non-deterministic behavior of a system. Conveniently, most software execution environments try hard to eliminate non-determinism, because non-deterministic execution makes applications unreliable. However, there are a few sorts of non-determinism that are still common. The most obvious among these is timing. The industry long ago gave up on the idea that a program should take the same amount of time every time it runs, because deterministic timing is fundamentally at odds with heuristic performance optimization. Sure enough, most Spectre attacks focus on timing as a way to detect the hidden microarchitectural state of the CPU. Some have proposed that we can solve this by making timers inaccurate or adding random noise. However, it turns out that this does not stop attacks; it only makes them slower. If the timer tracks real time at all, then anything you can do to make it inaccurate can be overcome by running an attack multiple times and using statistics to filter out the noise. Many security researchers see this as the end of the story. What good is slowing down an attack, if the attack is still possible? Once the attacker gets your private key, it's game over, right?
What difference does it make if it takes them a minute or a month?

Cascading Slow-downs

We find that, actually, measures that slow down an attack can be powerful. Our key insight is this: as an attack becomes slower, new techniques become practical to make it even slower still. The goal, then, is to chain together enough techniques that an attack becomes so slow as to be uninteresting. Much of cryptography, after all, is technically vulnerable to "brute force" attacks -- technically, with enough time, you can break it. But when the time required is thousands (or even billions) of years, we decide that this is good enough. So, what do we do to slow down Spectre attacks to the point of meaninglessness?

Freezing a Spectre Attack

Step 0: Don't allow native code

We do not allow our customers to upload native-code binaries to run on our network. We only accept JavaScript and WebAssembly. Of course, many other languages, like Python, Rust, or even Cobol, can be compiled or transpiled to one of these two formats; the important point is that we do another pass on our end, using V8, to convert these formats into true native code. This, in itself, doesn't necessarily make Spectre attacks harder. However, I present this as step 0 because it is fundamental to enabling everything else below. Accepting native code programs implies being beholden to an existing CPU architecture (typically, x86). In order to execute code with reasonable performance, it is usually necessary to run the code directly on real hardware, severely limiting the host's control over how that execution plays out. For example, a kernel or hypervisor has no ability to prohibit applications from invoking the CLFLUSH instruction, an instruction which is very useful in side channel attacks and almost nothing else.
Moreover, supporting native code typically implies supporting whole existing operating systems and software stacks, which bring with them decades of expectations about how the architecture works under them. For example, x86 CPUs allow a kernel or hypervisor to disable the RDTSC instruction, which reads a high-precision timer. Realistically, though, disabling it will break many programs because they are implemented to use RDTSC any time they want to know the current time. Supporting native code would bind our hands in terms of mitigation techniques. By using an abstract intermediate format, we have much greater freedom.

Step 1: Disallow timers and multi-threading

In Workers, you can get the current time using the JavaScript Date API, for example by calling Date.now(). However, the time value returned by this is not really the current time. Instead, it is the time at which the network message was received which caused the application to begin executing. While the application executes, time is locked in place. For example, say an attacker writes:

let start = Date.now();
for (let i = 0; i < 1e6; i++) {
  doSpectreAttack();
}
let end = Date.now();

The values of start and end will always be exactly the same. The attacker cannot use Date to measure the execution time of their code, which they would need to do to carry out an attack. As an aside: This is a measure we actually implemented in mid-2017, long before Spectre was announced (and before we knew about it). We implemented this measure because we were worried about timing side channels in general. Side channels have been a concern of the Workers team from day one, and we have designed our system from the ground up with this concern in mind. Related to our taming of Date, we also do not permit multi-threading or shared memory in Workers. Everything related to the processing of one event happens on the same thread -- otherwise, it would be possible to "race" threads in order to "MacGyver" an implicit timer.
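The frozen-clock behavior described above can be modeled in a few lines. This Python sketch is illustrative, not the actual runtime implementation:

```python
# Toy model of the frozen clock: the "current time" is the time at
# which the event arrived, and it does not advance while code runs.
class FrozenClock:
    def __init__(self):
        self._now = 0.0

    def on_event(self, arrival_time):
        # Time only moves forward when a new event is delivered.
        self._now = arrival_time

    def now(self):
        return self._now

clock = FrozenClock()
clock.on_event(1000.0)
start = clock.now()
for _ in range(10_000):  # stand-in for doSpectreAttack()
    pass
end = clock.now()
assert end - start == 0.0  # elapsed time is unobservable from inside
```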
We don't even allow multiple Workers operating on the same request to run concurrently. For example, if you have installed a Cloudflare App on your zone which is implemented using Workers, and your zone itself also uses Workers, then a request to your zone may actually be processed by two Workers in sequence. These run in the same thread. So, we have prevented code execution time from being measured locally. However, that doesn't actually prevent it from being measured: it can still be measured remotely. For example, the HTTP client that is sending a request to trigger the execution of the Worker can measure how long it takes for the Worker to respond. Of course, such a measurement is likely to be very noisy, since it would have to traverse the Internet. Such noise can be overcome, in theory, by executing the attack many times and taking an average. Another aside: Some people have suggested that if a serverless platform like Workers were to completely reset an application's state between requests, so that every request "starts fresh", this would make attacks harder. That is, imagine that a Worker's global variables were reset after every request, meaning you cannot store state in globals in one request and then read that state in the next. Then, doesn't that mean the attack has to start over from scratch for every request? If each request is limited to, say, 50ms of CPU time, does that mean that a Spectre attack isn't possible, because there's not enough time to carry it out? Unfortunately, it's not so simple. State doesn't have to be stored in the Worker; it could instead be stored in a conspiring client. The server can return its state to the client in each response, and the client can send it back to the server in the next request. But is an attack based on remote timers really feasible in practice? In adversarial testing, with help from leading Spectre experts, we have not been able to develop an attack that actually works in production. 
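To see why noise and averaging interact the way the paragraph above describes, consider a toy simulation: a timer with heavy random jitter still reveals the true value once enough samples are averaged, because the noise shrinks roughly with the square root of the sample count. (Illustrative Python, not an actual attack.)

```python
# Toy simulation: averaging defeats timer jitter.
import random

random.seed(1)

JITTER_US = 50.0  # pretend the timer has +/-50us of added noise

def noisy_measure(true_duration_us):
    # A single measurement is dominated by the injected noise.
    return true_duration_us + random.uniform(-JITTER_US, JITTER_US)

def averaged_measure(true_duration_us, samples):
    # Averaging n samples shrinks the noise by roughly sqrt(n).
    total = sum(noisy_measure(true_duration_us) for _ in range(samples))
    return total / samples
```

With 10,000 samples the estimate lands within a couple of microseconds of the true value, even though any single reading can be off by 50.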
However, we don't feel the lack of a working attack means we should stop building defenses. Instead, we're currently testing some more advanced measures, which we plan to roll out in the coming weeks.

Step 2: Dynamic Process Isolation

We know that if an attack is possible at all, it would take a very long time to run -- hours at the very least, maybe as long as weeks. But once an attack has been running even for a second, we have a huge amount of new data that we can use to trigger further measures. Spectre attacks, you see, do a lot of "weird stuff" that you wouldn't usually expect to see in a normal program. These attacks intentionally try to create pathological performance scenarios in order to amplify microarchitectural effects. This is especially true when the attack has already been forced to run billions of times in a loop in order to overcome other mitigations, like those discussed above. This tends to show up in metrics like CPU performance counters. Now, the usual problem with using performance metrics to detect Spectre attacks is that sometimes you get false positives. Sometimes, a legitimate program behaves really badly. You can't go around shutting down every app that has bad performance. Luckily, we don't have to. Instead, we can choose to reschedule any Worker with suspicious performance metrics into its own process. As I described above, we can't do this with every Worker, because the overhead would be too high. But, it's totally fine to process-isolate just a few Workers, defensively. If the Worker is legitimate, it will keep operating just fine, albeit with a little more overhead. Fortunately for us, the nature of our platform is such that we can reschedule a Worker into its own process at basically any time. In fact, fancy performance-counter based triggering may not even be necessary here.
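In pseudocode, a performance-counter trigger of this kind might look like the following. The metric and threshold are invented for illustration; the real detection criteria are internal:

```python
# Illustrative trigger for dynamic process isolation: flag a worker
# whose cache-miss rate looks pathological. The 0.4 threshold and the
# counter names are made up for this sketch.
def should_isolate(counters, miss_rate_threshold=0.4):
    """counters: dict with 'cache_misses' and 'cache_refs' samples."""
    refs = counters.get("cache_refs", 0)
    if refs == 0:
        return False  # not enough data; don't isolate
    return counters["cache_misses"] / refs > miss_rate_threshold
```

A false positive here is cheap: the flagged worker keeps running correctly, just in its own process with a bit more overhead.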
If a Worker merely uses a large amount of CPU time per event, then the overhead of isolating it in its own process is relatively less, because it switches context less often. So, we might as well use process isolation for any Worker that is CPU-hungry. Once a Worker is isolated, then we can rely on the operating system's Spectre defenses, just as, for example, most desktop web browsers now do. Over the past year we've been working with the experts at Graz Technical University to develop this approach. (TU Graz's team co-discovered Spectre itself, and has been responsible for a huge number of the follow-on discoveries since then.) We have developed the ability to dynamically isolate workers, and we have identified metrics which reliably detect attacks. The whole system is currently undergoing testing to work out any remaining bugs, and we expect to roll it out fully within the next several weeks. But wait, didn't I say earlier that even process isolation isn't a complete defense, because it only addresses known vulnerabilities? Yes, this is still true. However, the trend over time is that new Spectre attacks tend to be slower and slower to carry out, and hence we can reasonably guess that by imposing process isolation we have further slowed down even attacks that we don't know about yet.

Step 3: Periodic Whole-Memory Shuffling

After Step 2, we already think we've prevented all known attacks, and we're only worried about hypothetical unknown attacks. How long does a hypothetical unknown attack take to carry out? Well, obviously, nobody knows. But with all the mitigations in place so far, and considering that new attacks have generally been slower than older ones, we think it's reasonable to guess attacks will take days or longer. On a time scale of a day, we have new things we can do.
In particular, it's totally reasonable to restart the entire Workers runtime on a daily basis, which resets the locations of everything in memory, forcing attacks to restart the process of discovering the locations of secrets. We can also reschedule Workers across physical machines or cordons, so that the window to attack any particular neighbor is limited. In general, because Workers are fundamentally preemptible (unlike containers or VMs), we have a lot of freedom to frustrate attacks. Once we have dynamic process isolation fully deployed, we plan to develop these ideas next. We see this as an ongoing investment, not something that will ever be "done".

Conclusion

Phew. You just read twelve pages about Workers security. Hopefully I've convinced you that designing a secure sandbox is only the beginning of building a secure compute platform, and the real work is never done. Popular security culture often dwells on clever hacks and clean fixes. But for the difficult real-world problems, often there is no right answer or simple fix, only the hard work of building defenses thicker and thicker.

Cloudflare Workers Announces Broad Language Support

We initially launched Cloudflare Workers with support for JavaScript and languages that compile to WebAssembly, such as Rust, C, and C++. Since then, Cloudflare and the community have improved the usability of TypeScript on Workers. But we haven't talked much about the many other popular languages that compile to JavaScript. Today, we're excited to announce support for Python, Scala, Kotlin, Reason and Dart. You can build applications on Cloudflare Workers using your favorite language starting today.

Getting Started

Getting started is as simple as installing Wrangler, then running generate for the template for your chosen language: Python, Scala, Kotlin, Dart, or Reason. For Python, this looks like:

wrangler generate my-python-project

Follow the installation instructions in the README inside the generated project directory, then run wrangler publish. You can see the output of your Worker at your workers.dev subdomain. You can sign up for a free Workers account if you don't have one yet. That's it. It is really easy to write in your favorite languages. But, this wouldn't be a very compelling blog post if we left it at that. Now, I'll shift the focus to how we added support for these languages and how you can add support for others.

How it all works under the hood

Language features are important. For instance, it's hard to give up the safety and expressiveness of pattern matching once you've used it. Familiar syntax matters to us as programmers. You may also have existing code in your preferred language that you'd like to reuse. Just keep in mind that the advantages of running on V8 come with the limitation that if you use libraries that depend on native code or language-specific VM features, they may not translate to JavaScript. WebAssembly may be an option in that case.
But for memory-managed languages you're usually better off compiling to JavaScript, at least until the story around garbage collection for WebAssembly stabilizes. I'll walk through how the Worker language templates are made using a representative example of a dynamically typed language, Python, and a statically typed language, Scala. If you want to follow along, you'll need to have Wrangler installed and configured with your Workers account. If it's your first time using Workers it's a good idea to go through the quickstart.

Dynamically typed languages: Python

You can generate a starter "hello world" Python project for Workers by running:

wrangler generate my-python-project

This will create a my-python-project directory and helpfully remind you to configure your account_id in the wrangler.toml file inside it. The README file in the directory links to instructions on setting up Transcrypt, the Python to JavaScript compiler we're using. If you already have Python 3.7 and virtualenv installed, this just requires running:

cd my-python-project
virtualenv env
source env/bin/activate
pip install transcrypt
wrangler publish

The main requirement for compiling to JavaScript on Workers is the ability to produce a single JS file that fits in our bundle size limit of 1MB. Transcrypt adds about 70k for its Python runtime in this case, which is well within that limit. But by default, running Transcrypt on a Python file will produce multiple JS and source map files in a __target__ directory. Thankfully, Wrangler has built-in support for webpack, and there's a webpack loader for Transcrypt, making it easy to produce a single file.
See the webpack.config.js file for the setup. The point of all this is to run some Python code, so let's take a look at the Worker itself:

def handleRequest(request):
    return __new__(Response('Python Worker hello world!', {
        'headers' : { 'content-type' : 'text/plain' }
    }))

addEventListener('fetch', (lambda event: event.respondWith(handleRequest(event.request))))

In most respects this is very similar to any other Worker hello world, just in Python syntax. Dictionary literals take the place of JavaScript objects, lambda is used instead of an anonymous arrow function, and so on. If using __new__ to create instances of JavaScript classes seems awkward, the Transcrypt docs discuss an alternative. Clearly, addEventListener is not a built-in Python function; it's part of the Workers runtime. Because Python is dynamically typed, you don't have to worry about providing type signatures for JavaScript APIs. The downside is that mistakes will result in failures when your Worker runs, rather than when Transcrypt compiles your code. Transcrypt does have experimental support for some degree of static checking using mypy.

Statically typed languages: Scala

You can generate a starter "hello world" Scala project for Workers by running:

wrangler generate my-scala-project

The Scala to JavaScript compiler we're using is Scala.js. It has a plugin for the Scala build tool, so installing sbt and a JDK is all you'll need. Running sbt fullOptJS in the project directory will compile your Scala code to a single index.js file. The build configuration in build.sbt is set up to output to the root of the project, where Wrangler expects to find an index.js file. After that you can run wrangler publish as normal. Scala.js uses the Google Closure Compiler to optimize for size when running fullOptJS. For the hello world, the file size is 14k.
A more realistic project involving async fetch weighs in around 100k, still well within Workers limits. In order to take advantage of static type checking, you're going to need type signatures for the JavaScript APIs you use. There are existing Scala signatures for fetch and service worker related APIs. You can see those being imported in the entry point for the Worker, Main.scala:

import org.scalajs.dom.experimental.serviceworkers.{FetchEvent}
import org.scalajs.dom.experimental.{Request, Response, ResponseInit}
import scala.scalajs.js

The import of scala.scalajs.js allows easy access to Scala equivalents of JavaScript types, such as js.Array or js.Dictionary. The remainder of Main looks fairly similar to a TypeScript Worker hello world, with syntactic differences such as Unit instead of Void and square brackets instead of angle brackets for type parameters:

object Main {
  def main(args: Array[String]): Unit = {
    Globals.addEventListener("fetch", (event: FetchEvent) => {
      event.respondWith(handleRequest(event.request))
    })
  }

  def handleRequest(request: Request): Response = {
    new Response("Scala Worker hello world", ResponseInit(
      _headers = js.Dictionary("content-type" -> "text/plain")))
  }
}

Request, Response and FetchEvent are defined by the previously mentioned imports. But what's this Globals object? There are some Worker-specific extensions to JavaScript APIs. You can handle these in a statically typed language by either automatically converting existing TypeScript type definitions for Workers or by writing type signatures for the features you want to use. Writing the type signatures isn't hard, and it's good to know how to do it, so I included an example in Globals.scala:

import scalajs.js
import js.annotation._

@js.native
@JSGlobalScope
object Globals extends js.Object {
  def addEventListener(`type`: String, f: js.Function): Unit = js.native
}

The annotation @js.native indicates that the implementation is in existing JavaScript code, not in Scala.
That's why the body of the addEventListener definition is just js.native. In a JavaScript Worker you'd call addEventListener as a top-level function in global scope. Here, the @JSGlobalScope annotation indicates that the function signatures we're defining are available in the JavaScript global scope. You may notice that the type of the function passed to addEventListener is just js.Function, rather than specifying the argument and return types. If you want more type safety, this could be done as js.Function1[FetchEvent, Unit]. If you're trying to work quickly at the expense of safety, you could use def addEventListener(any: Any*): Any to allow anything. For more information on defining types for JavaScript interfaces, see the Scala.js docs.

Using Workers KV and async Promises

Let's take a look at a more realistic example using Workers KV and asynchronous calls. The idea for the project is our own HTTP API to store and retrieve text values. For simplicity's sake I'm using the first slash-separated component of the path for the key, and the second for the value. Usage of the finished project will look like PUT /meaning of life/42 or GET /meaning of life/. The first thing I need is to add type signatures for the parts of the KV API that I'm using, in Globals.scala. My KV namespace binding in wrangler.toml is just going to be named KV, resulting in a corresponding global object:

object Globals extends js.Object {
  def addEventListener(`type`: String, f: js.Function): Unit = js.native
  val KV: KVNamespace = js.native
}

The finished API can then be used like this:

bash$ curl -w "\n" -X PUT '…/meaning of life/42'
bash$ curl -w "\n" -X GET '…/meaning of life/'
42

So what's the definition of the KVNamespace type? It's an interface, so it becomes a Scala trait with a @js.native annotation. The only methods I need to add right now are the simple versions of KV.get and KV.put that take and return strings. The return values are asynchronous, so they're wrapped in a js.Promise.
I'll make that wrapped string a type alias, KVValue, just in case we want to deal with the array or stream return types in the future:

```scala
object KVNamespace {
  type KVValue = js.Promise[String]
}

@js.native
trait KVNamespace extends js.Object {
  import KVNamespace._
  def get(key: String): KVValue = js.native
  def put(key: String, value: String): js.Promise[Unit] = js.native
}
```

With type signatures complete, I'll move on to Main.scala and how to handle interaction with JavaScript Promises. It's possible to use js.Promise directly, but I'd prefer to use Scala semantics for asynchronous Futures. The methods toJSPromise and toFuture from js.JSConverters can be used to convert back and forth:

```scala
def get(key: String): Future[Response] = {
  Globals.KV.get(key).toFuture map { (value: String) =>
    new Response(value, okInit)
  } recover {
    case err => new Response(s"error getting a value for '$key': $err", errInit)
  }
}
```

The function for putting values makes similar use of toFuture to convert the return value from KV into a Future. I use map to transform the value into a Response, and recover to handle failures. If you prefer async / await syntax instead of using combinators, you can use scala-async.

Finally, the new definition of handleRequest is a good example of how pattern matching makes code more concise and less error-prone at the same time. We match on exactly the combinations of HTTP method and path components that we want, and default to an informative error for any other case:

```scala
def handleRequest(request: Request): Future[Response] = {
  (request.method, request.url.split("/")) match {
    case (HttpMethod.GET, Array(_, _, _, key)) => get(key)
    case (HttpMethod.PUT, Array(_, _, _, key, value)) => put(key, value)
    case _ => Future.successful(
      new Response("expected GET /key or PUT /key/value", errInit))
  }
}
```

You can get the complete code for this example by running wrangler generate projectname

How to contribute

I'm a fan of programming languages, and will continue to add more Workers templates.
You probably know your favorite language better than I do, so pull requests are welcome for a simple hello world or a more complex example.

And if you're into programming languages, check out the latest language rankings from RedMonk, where Python is the first language other than Java or JavaScript ever to place in the top two.

Stay tuned for the rest of Serverless Week!

The Migration of Legacy Applications to Workers

As Cloudflare Workers and other serverless platforms continue to drive down costs while making it easier for developers to stand up globally scaled applications, the migration of legacy applications is becoming increasingly common. In this post, I want to show how easy it is to migrate such an application onto Workers. To demonstrate, I'm going to use a common migration scenario: moving a legacy application — on an old compute platform behind a VPN or in a private cloud — to a serverless compute platform behind zero-trust security.

Wait, but why?

Before we dive further into the technical work, however, let me address up front: why would someone want to do this? What benefits would they get from such a migration? In my experience, there are two sets of reasons: (1) factors that are "pushing" off legacy platforms, i.e. the constraints and problems of the legacy approach; and (2) factors that are "pulling" onto serverless platforms like Workers, which speak to the many benefits of this new approach.

In terms of the push factors, we often see three core ones:

- Legacy compute resources are not flexible and must be constantly maintained, often leading to capacity constraints or excess cost;
- Maintaining VPN credentials is cumbersome, and can introduce security risks if not done properly;
- VPN client software can be challenging for non-technical users to operate.

Similarly, there are some very key benefits "pulling" folks onto serverless applications and zero-trust security:

- Instant scaling, up or down, depending on usage. No capacity constraints, and no excess cost;
- No separate credentials to maintain; users can use Single Sign-On (SSO) across many applications;
- VPN hardware / private cloud, and existing compute, can be retired to simplify operations and reduce cost.

While the benefits of this more modern end-state are clear, there's one other thing that causes organizations to pause: the costs in time and migration effort seem daunting.
Often what organizations find is that migration is not as difficult as they fear. In the rest of this post, I will show you how Cloudflare Workers, and the rest of the Cloudflare platform, can greatly simplify migrations and help you modernize all of your applications.

Getting Started

To take you through this, we will use a contrived application I've written in Node.js to illustrate the steps we would take with a real, and far more complex, example. The goal is to show the different tools and features you can use at each step, and how our platform design supports development and cutover of an application. We'll use four key Cloudflare technologies as we see how to move this application off of my laptop and into the cloud:

- Serverless compute through Workers
- Robust developer-focused tooling for Workers via Wrangler
- Zero-trust security through Access
- Instant, secure origin tunnels through Argo Tunnel

Our example application for today is called Post Process, and it performs business logic on input provided in an HTTP POST body. It takes the input data from authenticated clients, performs a processing task, and responds with the result in the body of an HTTP response. The server runs in Node.js on my laptop.

Since the example application is written in Node.js, we will be able to directly copy some of the JavaScript assets for our new application. You could follow this "direct port" method not only for JavaScript applications, but even for applications in our other WASM-supported languages. For other languages, you first need to rewrite or transpile into one with WASM support.

Into our Application

Our basic example will perform only simple text processing, so that we can focus on the broad features of the migration. I've set up an unauthenticated copy (using Workers, to give us a scalable and reliable place to host it) at where you can see how it operates.
Here is an example cURL:

```bash
curl -X POST --data '{"operation":"2","data":"Data-Gram!"}'
```

The relevant takeaways from the code itself are pretty simple:

- There are two code modules, which conveniently split the application logic completely from the preprocessing / HTTP interface.
- The application logic module exposes one function, postProcess(object), where object is the parsed JSON of the POST body. It returns a JavaScript object, ready to be encoded into a string in the JSON HTTP response. This module can run on Workers JavaScript with no changes; it only needs a new preprocessing / HTTP interface.
- The preprocessing / HTTP interface runs on raw Node.js and exposes a local HTTPS server. The server does not directly take inbound traffic from the Internet, but sits behind a gateway which controls access to the service.

Code snippet from the Node.js HTTP module:

```javascript
const server = http.createServer((req, res) => {
  if (req.url == '/postprocess') {
    if (req.method == 'POST') {
      gatherPost(req, data => {
        try {
          jsonData = JSON.parse(data)
        } catch (e) {
          res.end('Invalid JSON payload! \n')
          return
        }
        result = postProcess(jsonData)
        res.write(JSON.stringify(result) + '\n');
        res.end();
      })
    } else {
      res.end('Invalid Method, only POST is supported! \nPlease send a POST with data in format {"operation":"1","data":"Data-Gram!"}\n')
    }
  } else {
    res.end('Invalid request. Did you mean to POST to /postprocess? \n');
  }
});
```

Code snippet from the Node.js logic module:

```javascript
function postProcess(postJson) {
  const ServerVersion = "2.5.17"
  if (postJson != null && 'operation' in postJson && 'data' in postJson) {
    var output
    var operation = postJson['operation']
    var data = postJson['data']
    switch (operation) {
      case "1":
        output = String(data).toLowerCase()
        break
      case "2":
        d = data + "\n"
        output = d + d + d
        break
      case "3":
        output = ServerVersion
        break
      default:
        output = "Invalid Operation"
    }
    return {'Output': output}
  } else {
    return {'Error': 'Invalid request, invalid JSON format'}
  }
}
```

Current State Application Architecture

Design Decisions

With all this information in hand, we can arrive at the details of our new Cloudflare-based design:

- Keep the business logic completely intact, and specifically use the same .js asset
- Build a new preprocessing layer in Workers to replace the Node.js module
- Use Cloudflare Access to authenticate users to our application

Target State Application Architecture

Finding the first win

One good way to make a migration successful is to find a quick win early on: a useful task which can be executed while other work is still ongoing. It is even better if the quick win also benefits the eventual cutover. We can find a quick win here if we solve the zero-trust security problem ahead of the compute problem, by putting Cloudflare's security in front of the existing application. We will do this by using Cloudflare's Argo Tunnel feature to securely connect to the existing application, and Access for zero-trust authentication. Below, you can see how easy this process is for any command-line user, with our cloudflared tool.

I open up a terminal and use cloudflared tunnel login, which presents me with an authentication flow.
I then use the cloudflared tunnel --hostname --url localhost:8080 command to connect an Argo Tunnel between the "url" (my local server) and the "hostname" (the new, public address we will use on my Cloudflare zone).

Next I flip over to my Cloudflare dashboard, and attach an Access Policy to the "hostname" I specified before. We will be using the Service Token mode of Access, which generates a client-specific security token that the client can attach to each HTTP POST. Other modes are better suited to interactive browser use cases.

Now, without using the VPN, I can send a POST to the service, still running in Node.js on my laptop, from any Internet-connected device which has the correct token! It has taken only a few minutes to add zero-trust security to this application, and to safely expose it to the Internet while it is still running on a legacy compute platform (my laptop!).

"Quick Win" Architecture

Beyond the direct benefit of a huge security upgrade, we've also made our eventual application migration much easier by putting the traffic through the target-state API gateway already. We will see later how we can surgically move traffic to the new application for testing, in this state.

Lift to the Cloud

With our zero-trust security benefits in hand, and our traffic running through Cloudflare, we can now proceed with the migration of the application itself to Workers. We'll be using the Wrangler tooling to make this process very easy.

As noted when we first looked at the code, this contrived application exposes a very clean interface between the Node.js-specific HTTP module, which we need to replace, and the business logic postProcess module, which we can use as-is with Workers. We'll first need to re-write the HTTP module, and then bundle it with the existing business logic into a new Workers application.

Here is a handwritten example of the basic pattern we'll use for the HTTP module.
We can see how the Service Workers API makes it very easy to grab the POST body with await, and how the JSON interface lets us easily pass the data to the postProcess module we took directly from the initial Node.js app.

```javascript
addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  try {
    requestData = await request.json()
  } catch (e) {
    return new Response("Invalid JSON", {status: 500})
  }
  const response = new Response(JSON.stringify(postProcess(requestData)))
  return response
}
```

For our work on the mock application, we will go a slightly different route, more in line with a real application which would be more complex. Instead of writing this by hand, we will use Wrangler and our Router template to build the new front end from a robust framework.

We'll run wrangler generate post-process-workers to initialize a new Wrangler project with the Router template. Most of the configuration for this template will work as-is; we just have to update account_id in our wrangler.toml and make a few small edits to the code in index.js.

Below is our index.js after my edits. Note the line const postProcess = require('./postProcess.js') at the start of the new HTTP module; this tells Wrangler to include the original business logic, from the legacy app's postProcess.js module, which I will copy to our working directory.

```javascript
const Router = require('./router')
const postProcess = require('./postProcess.js')

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handler(request) {
  const init = {
    headers: { 'content-type': 'application/json' },
  }
  const body = JSON.stringify(postProcess(await request.json()))
  return new Response(body, init)
}

async function handleRequest(request) {
  const r = new Router()
  r.post('.*/postprocess*', request => handler(request))
  r.get('/', () => new Response('Hello worker!')) // return a default message for the root route
  const resp = await r.route(request)
  return resp
}
```

Now we can simply run wrangler publish to put our application on for testing! The Router template's defaults, and the small edits made above, are all we need. Since Wrangler automatically exposes the test application to the Internet (note that we can *also* put the test application behind Access, with a slightly modified method), we can easily send test traffic from any device.

Shift, Safely!

With our application up for testing on, we finally come to the last and most daunting migration step: cutting over traffic from the legacy application to the new one without any service interruption. Luckily, we had our quick win earlier and are already routing our production traffic through the Cloudflare network (to the legacy application via Argo Tunnel). This provides huge benefits now that we are at the cutover step. Without changing our IP address, SSL configuration, or any other client-facing properties, we can route traffic to the new application with just one wrangler command.

Seamless cutover from Transition to Target state

We simply modify wrangler.toml to indicate the production domain / route we'd like the application to operate on, and run wrangler publish. As soon as Cloudflare receives this update, it will send production traffic to our new application instead of the Argo Tunnel. We have configured the application to send a 'version' header, which lets us verify this easily using curl.

Rollback, if it is needed, is also very easy. We can either set the wrangler.toml back to the only mode and wrangler publish again, or delete our route manually. Either will send traffic back to the Argo Tunnel.

In Conclusion

Clearly, a real application will be more complex than our example above. It may have multiple components, with complex interactions, which must each be handled in turn. Argo Tunnel might remain in use, to connect to a data store or other application outside of our network. We might use WASM to support modules written in other languages.
In any of these scenarios, Cloudflare's Wrangler tooling and serverless capabilities will help us work through the complexities and achieve success.

I hope that this simple example has helped you to see how Wrangler, cloudflared, Workers, and our entire global network can work together to make migrations as quick and hassle-free as possible. Whether for this case of an old application behind a VPN, or another application that has outgrown its current home, our Workers platform, Wrangler tooling, and underlying network will scale to meet your business needs.
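One practical footnote on the Access piece above: once the VPN is gone, a non-browser client authenticates by sending its service token on every request. Cloudflare Access looks for the token in the CF-Access-Client-Id and CF-Access-Client-Secret request headers. Here is a minimal sketch of a client-side helper that attaches them; the id/secret values and the hostname in the usage note below are placeholders, not real credentials:

```javascript
// Return a copy of a fetch() init object with Cloudflare Access
// service-token headers attached. The id and secret are supplied by
// the caller; nothing here is a real credential.
function withServiceToken(init, clientId, clientSecret) {
  return {
    ...init,
    headers: {
      ...(init.headers || {}),
      'CF-Access-Client-Id': clientId,
      'CF-Access-Client-Secret': clientSecret,
    },
  };
}
```

A client would then call, for example, fetch('https://postprocess.example.com/postprocess', withServiceToken({ method: 'POST', body: data }, id, secret)), where postprocess.example.com stands in for the real hostname behind Access.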

Introducing Workers Unbound

We launched Cloudflare Workers® in 2017 with the goal of building the development platform that we wished we had. We want to enable developers to build great software while Cloudflare manages the overhead of configuring and maintaining the infrastructure. Workers is with you from the first line of code, to the first application, all the way to a globally scaled product. By making our Edge network programmable and providing servers in 200+ locations around the world, we offer you the power to execute on even the biggest ideas.

Behind the scenes at Cloudflare, we've been steadily working towards making development on the Edge even more powerful and flexible. Today, we are excited to announce the next phase of this effort: the launch of our new platform, Workers Unbound, without restrictive CPU limits, in a private beta (sign up for details here).

What is Workers Unbound? How is it different from Cloudflare Workers?

Workers Unbound is like our classic Cloudflare Workers (now referred to as Workers Bundled), but for applications that need longer execution times. We are extending our CPU limits to allow customers to bring all of their workloads onto Workers, no matter how intensive. It eliminates the choice developers often have to make: running fast, simple work on the Edge, or running heavy computation in a centralized cloud with unlimited resources. This platform will unlock a new class of intensive applications with heavy computation burdens, like image processing or complex algorithms. In fact, this is a highly requested feature that we've previously unlocked for a number of our enterprise customers, and we are now in the process of making it widely available to the public.

Workers Unbound is built to be a general purpose computing platform, not just an alternative to niche edge computing offerings.
We want to be more compelling for any workload you'd previously think to run on traditional, centralized serverless platforms — faster, more affordable, and more flexible.

Neat! How can I try it?

We are excited to offer Workers Unbound to a select group of developers in a private beta. Please reach out via this form with some details about your use case, and we'll be in touch! We'd love to hear your feedback and can't wait to see what you build.

What's going on behind the scenes?

Serverless as it's known today is constrained by being built on top of old paradigms. Most serverless platforms have inherited containers from their cloud computing origins. Cloudflare has had the opportunity to rethink serverless by building on the Edge, making this technology more performant at scale for complex applications. We reap performance benefits by running code on V8 Isolates, which are designed to start very quickly with minimal cold start times. Isolates are a technology built by the Google Chrome team to power the JavaScript engine in the browser, and they introduce a new model for running multi-tenant code. They provide lightweight contexts that group variables with the code allowed to mutate them.

Isolates are far more lightweight than containers, a central tenet of most other serverless providers' architectures. Containers effectively run a virtual machine, and there's a lot of overhead associated with them. That, in turn, makes it very hard to run the workload outside of a centralized environment.

Moreover, a single process on Workers can run hundreds or thousands of isolates, making switching between them seamless. That means it is possible to run code from many different customers within a single operating system process. This low runtime overhead is part of the story of how Workers scales to support many tenants. The other part of the story is code distribution.
The ability to serve customers from anywhere in the world is a key difference between an edge-based and a region-based serverless paradigm, but it requires us to ship customer code to every server at once. Isolates come to the rescue again: we embed V8 with the same standard JavaScript APIs you can find in browsers, meaning a serverless edge application is both lightweight and performant. This means we can distribute Worker scripts to every server in every datacenter around the world, so that any server, anywhere, can serve requests bound for any customer.

How does this affect my bill?

Performance at scale is top of mind for us, because improving performance on our Edge means we can pass those cost savings down to you. We pay the overhead of a JavaScript runtime once, and then are able to run essentially limitless scripts with almost no individual overhead.

Workers Unbound is a truly cost effective platform when compared to AWS Lambda. With serverless, you should only pay for what you use, with no hidden fees. Workers will not charge you for hidden extras like API gateway or DNS request fees.

Serverless Pricing Comparison*

|  | Workers Unbound | AWS Lambda | AWS Lambda@Edge |
|---|---|---|---|
| Requests (per MM requests) | $0.15 | $0.20 - $0.28 | $0.60 |
| Duration (per MM GB-sec) | $12.50 | $16.67 - $22.92 | $50.01 |
| Data Transfer (per egress GB) | $0.09 | $0.09 - $0.16 | $0.09 - $0.16 |
| API Gateway (per MM requests) | $0 | $3.50 - $4.68 | CloudFront pricing |
| DNS Queries (per MM requests) | $0 | $0.40 | $0.40 |

* Based on pricing disclosed on as of July 24, 2020. AWS' published duration pricing is based on 1 GB-sec, which has been multiplied by one million in this table for readability. AWS price ranges reflect different regional pricing. All prices rounded to the nearest two decimal places. Data Transfer for AWS is based on Data Transfer OUT From Amazon EC2 to Internet above 1 GB / month, for up to 9.999 TB / month. API Gateway for AWS is based on REST APIs above 1MM / month, for up to 333MM / month. Both the Workers Unbound and AWS Lambda services provide 1MM free requests per month and 400,000 GB-seconds of compute time per month. DNS Queries rate for AWS is based on the listed price for up to 1 Billion queries / month.

How much can I save?

To put our numbers to the test, we deployed a hello world GraphQL server to both Workers and Lambda. The median execution time on Lambda was 1.54ms, whereas the same workload took 0.90ms on Workers. After crunching the numbers and factoring in all the opaque fees that AWS charges (including API Gateway to allow requests from the Internet), we found that using Workers Unbound can save you up to 75%, and that's just for a hello world! Imagine the cost savings when you're running complex workloads for millions of users.

You might be wondering how we're able to be so competitive. It all comes down to efficiency. The lightweight nature of Workers allows us to do the same work, but with less platform overhead and resource consumption. The execution times from this GraphQL hello world test are shown below and put platform providers' overhead on display. Since the test is truly a hello world, the variation is explained by architectural differences between providers (e.g. isolates v. containers).

GraphQL hello world Execution Time (ms) across Serverless Platforms*

|  | Cloudflare Workers | AWS Lambda | Google Cloud Functions | Azure Functions |
|---|---|---|---|---|
| Min | 0.58 | 1.22 | 6.16 | 5.00 |
| p50 | 0.90 | 1.54 | 10.41 | 21.00 |
| p90 | 1.24 | 7.45 | 15.93 | 110.00 |
| p99 | 3.32 | 57.51 | 20.25 | 207.96 |
| Max | 16.39 | 398.54 | 31933.18 | 2768.00 |

* The 128MB memory tier was used for each platform. This testing was run in us-east for AWS, us-central for Google, and us-west for Azure. Each platform test was run at a throughput of 1 request per second over the course of an hour. The execution times were taken from each provider's logging system.

These numbers speak for themselves and highlight the efficiency of the Workers architecture.
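As a rough sanity check on the pricing table above, here is a back-of-envelope cost sketch using the published per-million rates. The workload size (10MM requests and 5MM GB-sec per month) is a hypothetical example, and free tiers and data transfer are ignored for simplicity:

```javascript
// Monthly cost from per-million-unit rates (rates taken from the
// pricing comparison table above; workload numbers are hypothetical).
function monthlyCost({ requests, gbSeconds }, rates) {
  const mm = 1e6;
  return (requests / mm) * rates.requestsPerMM +
         (gbSeconds / mm) * rates.durationPerMM +
         (requests / mm) * (rates.gatewayPerMM || 0);
}

// Hypothetical workload: 10MM requests, 5MM GB-sec per month.
const workload = { requests: 10e6, gbSeconds: 5e6 };

const unbound = monthlyCost(workload, { requestsPerMM: 0.15, durationPerMM: 12.50 });
const lambda = monthlyCost(workload, {
  requestsPerMM: 0.20,   // low end of the Lambda range
  durationPerMM: 16.67,  // low end of the Lambda range
  gatewayPerMM: 3.50,    // API Gateway, low end
});

console.log(unbound.toFixed(2)); // 64.00
console.log(lambda.toFixed(2));  // 120.35
```

Even using the low end of AWS's price ranges, the per-request API Gateway fee and the higher duration rate add up quickly for this workload; the up-to-75% figure quoted above additionally factors in the measured execution-time difference between the platforms.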
On Workers, you don't just get faster results, you also benefit from the cost savings we pass on to you.

When can I use it?

Workers Unbound is a major change to our platform, so we'll be rolling it out slowly and tweaking it over time. If you'd like to get early access or want to be notified when it's ready, sign up for details here!

We've got some exciting announcements to share this week. Stay tuned for the rest of Serverless Week!

The Edge Computing Opportunity: It’s Not What You Think

Cloudflare Workers® is one of the largest, most widely used edge computing platforms. We announced Cloudflare Workers nearly three years ago and it's been generally available for the last two years. Over that time, we've seen hundreds of thousands of developers write tens of millions of lines of code that now run across Cloudflare's network.

Just last quarter, 20,000 developers deployed a new application using Cloudflare Workers for the first time. More than 10% of all requests flowing through our network today use Cloudflare Workers. And, among our largest customers, approximately 20% are adopting Cloudflare Workers as part of their deployments. It's been incredible to watch the platform grow.

Over the course of the coming week, which we're calling Serverless Week, we're going to be announcing a series of enhancements to the Cloudflare Workers platform to allow you to build much more complicated applications, lower your serverless computing bills, make your applications even faster, and prove that the Workers platform is secure to its core.

Matthew's Hierarchy of Developers' Needs

Before the week begins, I wanted to step back and talk a bit about what we've learned about edge computing over the course of the last three years. When we launched Cloudflare Workers, we thought the killer feature was speed. Workers run across the Cloudflare network, closer to end users, so they inherently have faster response times than legacy, centralized serverless platforms.

However, we've learned by watching developers use Cloudflare Workers that there are a number of attributes of a development platform that are far more important than just speed. Speed is the icing on the cake, but it's not, for most applications, an initial requirement, and focusing only on it is a mistake that will doom edge computing platforms to obscurity.

Today, almost everyone who talks about the benefits of edge computing still focuses on speed.
So did Akamai, which launched its Java- and .NET-based EdgeComputing platform in 2002, only to shut it down in 2009 after failing to find enough customers for whom a bit less network latency alone justified the additional cost and complexity of running code at the edge. That's a cautionary tale much of the industry has forgotten.

Today, I'm convinced that we were wrong, when we launched Cloudflare Workers, to think of speed as the killer feature of edge computing, and that much of the rest of the industry's focus remains misplaced and risks missing a much larger opportunity.

I'd propose instead that what developers on any platform need, from least to most important, is actually: Speed < Consistency < Cost < Ease of Use < Compliance. Call it Matthew's Hierarchy of Developers' Needs. While nearly everyone talking about edge computing has focused on speed, I'd argue that consistency, cost, ease of use, and especially compliance will ultimately be far more important. In fact, I predict the real killer feature of edge computing over the next three years will have to do with the relatively unsexy but foundationally important matter of regulatory compliance.

Speed As the Killer Feature?

Don't get me wrong, speed is great. Making an application fast is the self-actualization of a developer's experience. And we built Workers to be extremely fast. By moving computing workloads closer to where an application's users are, we can, effectively, overcome the limitations imposed by the speed of light. Cloudflare's network spans more than 200 cities in more than 100 countries globally, and we continue to build that network out to be a few milliseconds from every human on Earth.

Since we're unlikely to make the speed of light any faster, the ability for any developer to write code and have it run across our entire network means we will always have a performance advantage over legacy, centralized computing solutions — even those that run in the "cloud."
If you have to pick an "availability zone" for where to run your application, you're always going to be at a performance disadvantage compared to an application built on a platform like Workers, which runs everywhere Cloudflare's network extends. We believe Cloudflare Workers is already the fastest serverless platform, and we'll continue to build out our network to ensure it remains so.

Speed Alone Is Niche

But let's be real a second: only a limited set of applications are sensitive to network latency of a few hundred milliseconds. That's not to say that, under the model of a modern major serverless platform, network latency doesn't matter; it's just that the applications that require that extra performance are niche.

Applications like credit card processing, ad delivery, gaming, and human-computer interactions can be very latency sensitive. Amazon's Alexa and Google Home, for instance, are better than many of their competitors in part because they can take advantage of their corporate parents' edge networks to handle voice processing, and therefore have lower latency and feel more responsive.

But beyond applications like that, it gets pretty "hand wavy." People who talk a lot about edge computing quickly start talking about IoT and driverless cars. Embarrassingly, when we first launched the Workers platform, I caught myself doing that all the time. Pro tip: when you're talking to an edge computing evangelist, you can win Buzzword BINGO every time so long as you have "IoT" and "driverless cars" on your BINGO card.

Donald Knuth, the famed Stanford computer science professor (along with Tony Hoare, Edsger Dijkstra, and many others), said something to the effect of "premature optimization is the root of all evil in programming." It shouldn't be surprising, then, that speed alone isn't a compelling enough reason for most developers to choose to use an edge computing platform. Doing so for most applications is premature optimization, aka.
the "root of all evil." So what's more important than speed?

Consistency

While minimizing network latency is not enough to get most developers to move to a new platform, there is one source of latency that is endemic to nearly all serverless platforms: cold start time. A cold start is how long it takes to run an application the first time it executes on a particular server. Cold starts hurt because they make an application unpredictable and inconsistent. Sometimes a serverless application can be fast, if it's hitting a server where the code is hot, but other times it's slow, when a container on a new server needs to be spun up and code loaded from disk into memory. Unpredictability really hurts user experience; it turns out humans love consistency even more than they love speed.

The problem of cold starts is not unique to edge computing platforms. Inconsistency from cold starts is the bane of all serverless platforms; it is the tax you pay for not having to maintain and deploy your own instances. But edge computing platforms can actually make the cold start problem worse, because they spread the computing workload across more servers in more locations. As a result, it's less likely that code will be "warm" on any particular server when a request arrives.

In other words, the more distributed a platform is, the more likely it is to have a cold start problem. And to work around that on most serverless platforms, developers have to resort to horrible hacks, like sending idle requests to their own application from around the world so that their code stays hot. Adding insult to injury, the legacy cloud providers charge for those throw-away requests, or charge even more for their own hacky pre-warming / "reserved" solutions. It's absurd!

Zero Nanosecond Cold Starts

We knew cold starts were important, so, from the beginning, we worked to ensure that cold starts with Workers were under 5 milliseconds.
That compares extremely favorably to other serverless platforms like AWS Lambda, where cold starts can take as long as 5 seconds (1,000x slower than Workers).

But we wanted to do better. So, this week, we'll be announcing that Workers now supports zero nanosecond cold starts. Since, unless someone invents a time machine, it's impossible to take less time than that, we're confident that Workers now has the fastest cold starts of any serverless platform. This makes Cloudflare Workers the consistency king, beating even the legacy, centralized serverless platforms.

But, again, in Matthew's Hierarchy of Developers' Needs, while consistency is more important than speed, there are other factors that are even more important than consistency when choosing a computing platform.

Cost

If you have to choose between a platform that is fast or one that is cheap, all else being equal, most developers will choose cheap. Developers are only willing to start paying extra for speed when they see user experience being harmed to the point of costing them even more than what a speed upgrade would cost. Until then, cheap beats fast.

For the most part, edge computing platforms charge a premium for being faster. For instance, a request processed via AWS's Lambda@Edge costs approximately three times more than a request processed via AWS Lambda, and basic Lambda is already outrageously expensive. That may seem to make sense in some ways — we all assume we need to pay more to be faster — but it's a pricing rationale that will always make edge computing a niche product, servicing only those limited applications extremely sensitive to network latency.

But edge computing doesn't necessarily need to be more expensive. In fact, it can be cheaper. To understand why, look at the cost of delivering services from the edge. If you're well-peered with local ISPs, like Cloudflare's network is, it can be less expensive to deliver bandwidth locally than it is to backhaul it around the world.
There can be additional savings on the cost of power and colocation when running at the edge. Those are savings that we can use to help keep the price of the Cloudflare Workers platform low.

More Efficient Architecture Means Lower Costs

But the real cost win comes from a more efficient architecture. Back in the early '90s, when I was a network administrator at my college, adding a new application meant ordering a new server. (We bought servers from Gateway; I thought their cardboard shipping boxes with the cow print were fun.) Then virtual machines (VMs) came along, and you could run multiple applications on the same server. Effectively, the overhead per application went down because you needed fewer physical servers per application.

VMs gave rise to the first public clouds. Quickly, however, cloud providers looked for ways to reduce their overhead further. Containers provided a lighter-weight option for running multiple customers’ workloads on the same machine, with dotCloud, which went on to become Docker, leading the way and nearly everyone else eventually following. Again, the win with containers over VMs was reducing the overhead per application.

At Cloudflare, we knew history doesn’t stop, so as we started building Workers we asked ourselves: what comes after containers? The answer was isolates. Isolates are the sandboxing technology that your browser uses to keep processes separate. They are extremely fast and lightweight. It’s why, when you visit a website, your browser can take code it’s never seen before and execute it almost instantly.

By using isolates, rather than containers or virtual machines, we're able to keep computation overhead much lower than traditional serverless platforms, which allows us to handle compute workloads much more efficiently. We, in turn, can pass the savings from that efficiency on to our customers. Our aim isn't to be less expensive than Lambda@Edge; it's to be less expensive than Lambda.
Much less expensive.

From Limits to Limitless

Originally, we wanted Workers’ pricing to be very simple and cost effective. Instead of charging for requests, CPU time, and bandwidth, like other serverless providers, we just charged per request. Simple. The tradeoff was that we were forced to impose maximum CPU, memory, and application size restrictions. What we’ve seen over the last three years is that developers want to build more complicated, sophisticated applications using Workers — some of which pushed the boundaries of these limits. So this week we’re taking the limits off.

Tomorrow we’ll announce a new Workers option that allows you to run much more complicated compute workloads following the same pricing model that other serverless providers use, but at much more compelling rates. We’ll continue to support our simplified option for users who can live within the previous limits. I’m especially excited to see how developers will harness our technology to build new applications, all at a lower cost and with better performance than other legacy, centralized serverless platforms. Faster, more consistent, and cheaper are great, but even together those aren't enough to win over most developers' workloads. So what’s more important than cost?

Ease of Use

Developers are lazy. I know firsthand, because when I need to write a program I still reach for a trusty language I know, like Perl (don't judge me), even if it's slower and more costly. I am not alone.

That's why with Cloudflare Workers we knew we needed to meet developers where they were already comfortable. That starts with supporting the languages developers know and love. We've previously announced support for JavaScript, C, C++, Rust, Go, and even COBOL. This week we'll be announcing support for Python, Scala, and Kotlin. We want to make sure you don't have to learn a new language and a new platform to get the benefits of Cloudflare Workers.
(I’m still pushing for Perl support.)

Ease also means spending less time on things like technical operations. That's where serverless platforms have excelled. Being able to simply deploy code and let the platform scale up and down with load is magical. We’ve seen this with long-time users of Cloudflare Workers like Discord, which has experienced several thousand percent usage growth over the last three years while the Workers platform has automatically scaled to meet their needs.

One challenge of serverless platforms, however, is debugging. Since, as a developer, it can be difficult to replicate the entire serverless platform locally, debugging your applications can be more difficult. This is compounded when deploying code to a platform takes as long as 5 minutes, as it can with AWS's Lambda@Edge. If you’re a developer, you know how painful waiting for your code to be deployed and testable can be. That's why it was critical to us that code changes be deployed globally to our entire network, across more than 200 cities, in less than 15 seconds.

The Bezos Rule

One of the most important decisions we made internally was to implement what we call the Bezos Rule. It requires two things: 1) that new features Cloudflare engineers build for ourselves must be built using Workers if at all possible; and 2) that any APIs or tools we build for ourselves must be made available to third-party Workers developers.

Building a robust testing and debugging framework requires input from developers. Over the last three years, Cloudflare Workers' development toolkit has matured significantly based on feedback from the hundreds of thousands of developers using our platform, including our own team, who have used Workers to quickly build innovative new features like Cloudflare Access and Gateway.
History has shown that the first, best customer of any platform needs to be the development team at the company building the platform. Wrangler, the command-line tool to provision, deploy, and debug your Cloudflare Workers, has developed into a robust developer experience based on extensive feedback from our own team. In addition to being the fastest, most consistent, and most affordable, I'm excited that, given the momentum behind Cloudflare Workers, it is quickly becoming the easiest serverless platform to use.

Generally, whatever platform is easiest to use wins. But there is one thing that trumps even ease of use, and that, I predict, will prove to be edge computing’s actual killer feature.

Compliance

If you’re an individual developer, you may not think a lot about regulatory compliance. However, if you work as a developer at a big bank, an insurance company, a health care company, or any other company that touches sensitive data at meaningful scale, then you think about compliance a lot. You may want to use a particular platform because it’s fast, consistent, cheap, and easy to use, but if your CIO, CTO, CISO, or General Counsel says “no,” then it’s back to the drawing board.

Most computing resources that run on cloud computing platforms, including serverless platforms, are created by developers who work at companies where compliance is a foundational requirement. And, up until now, that’s meant ensuring that platforms follow government regulations like GDPR (European privacy guidelines) or have certifications proving that they follow industry regulations such as PCI DSS (required if you accept credit cards), FedRAMP (US government procurement requirements), ISO 27001 (security risk management), SOC 1/2/3 (security, confidentiality, and availability controls), and many more.

The Coming Era of Data Sovereignty

But there’s a looming new set of regulatory requirements that legacy cloud computing solutions are ill-equipped to satisfy.
Increasingly, countries are pursuing regulations that ensure their laws apply to their citizens’ personal data. One way to ensure you’re in compliance with these laws is to store and process a country’s citizens’ data entirely within that country’s borders.

The EU, India, and Brazil are all major markets that have adopted or are currently considering regulations asserting legal sovereignty over their citizens’ personal data. China has already imposed data localization regulations on many types of data. Whether you think regulations that appear to require local data storage and processing are a good idea or not — and I personally think they are bad policies that will stifle innovation — my sense is that the momentum behind them is significant enough that they are, at this point, likely inevitable. And once a few countries begin requiring data sovereignty, it will be hard to stop nearly every country from following suit.

The risk is that such regulations could cost developers much of the efficiency gains serverless computing has achieved. If whole teams are required to coordinate between different cloud platforms in different jurisdictions to ensure compliance, it will be a nightmare.

Edge Computing to the Rescue

Herein lies the killer feature of edge computing. As governments impose new data sovereignty regulations, having a network that, with a single platform, spans every regulated geography will be critical for companies seeking to keep and process data locally to comply with these new laws while remaining efficient.

While the regulations are just beginning to emerge, Cloudflare Workers can already run locally in more than 100 countries worldwide. That positions us to help developers meet data sovereignty requirements as they see fit.
And we’ll continue to build tools that give developers options for satisfying their compliance obligations, without having to sacrifice the efficiencies the cloud has enabled. The ultimate promise of serverless has been to allow any developer to say, “I don’t care where my code runs, just make it scale.” Increasingly, another promise will need to be, “I do care where my code runs, and I need more control to satisfy my compliance department.” Cloudflare Workers gives you the best of both worlds: instant scaling, locations spanning more than 100 countries around the world, and the granularity to choose exactly what you need.

Serverless Week

The best part? We’re just getting started. Over the coming week, we’ll discuss our vision for serverless and show you how we’re building Cloudflare Workers into the fastest, most cost effective, secure, flexible, robust, easy to use serverless platform. We’ll also highlight use cases from customers who are using Cloudflare Workers to build and scale applications in a way that was previously impossible. And we’ll outline enhancements we’ve made to the platform to make it even better for developers going forward.

We’ve truly come a long way over the last three years of building out this platform, and I can’t wait to see all the new applications developers build with Cloudflare Workers. You can get started for free right now by visiting:

Reflecting on my first year at Cloudflare as a Field Marketer in APAC

Hey there! I am Els (short for Elspeth) and I am the Field Marketing and Events Manager for APAC. I am responsible for building brand awareness and supporting our lovely sales team in acquiring new logos across APAC.

I was inspired to write about my first year at Cloudflare because John, our CTO, encouraged more women to write for the Cloudflare blog after reviewing our blogging statistics and finding that more men than women blog for Cloudflare. I jumped at the chance because I thought this was a great way to share the side stories people might not know about what it feels like to work at Cloudflare.

Why Cloudflare?

Before I continue, I must mention that I really wanted to join Cloudflare after reading our co-founder Michelle’s reply on Quora to "What is it like to work at Cloudflare?" Michelle’s answer was as follows:

“my answer is 'adult-like.' While we haven’t adopted this as our official company-wide mantra, I like the simplicity of that answer. People work hard, but go home at the end of the day. People care about their work and want to do a great job. When someone does a good job, their teammate tells them. When someone falls short, their colleague will let them know. I like that we communicate directly, no matter what seniority level you are.”

The main themes were centered around high curiosity, the ability to get things done, and empathy. The answer took me by surprise. I have read so many replies by top leaders of leading companies in the world, and I have never seen such a down-to-earth reply! I was eager to join the company and test it out.

Day 1 - Onboarding in our San Francisco Headquarters

Every new hire at Cloudflare attends a two-week orientation in San Francisco (well, they used to until COVID-19 hit and orientation went virtual), with a comprehensive program that exposes them to all the different functions of the company.
My most memorable session was the one conducted by Matthew Prince, who delivered a very engaging and theatrical crash course on the origins of Cloudflare and the competitive landscape surrounding cloud computing. Even though the session took 1.5 hours, I enjoyed every second of it, and I was very impressed with Matthew’s passion and conviction behind Cloudflare’s mission to build a better Internet.

There was also a very impressive session conducted by Joe Sullivan, our Chief Security Officer. Joe introduced us to the importance of cybersecurity through several real life examples and guided us through some key steps to protect ourselves. Joe left a deep impression on me because he spoke in a very simple manner, which mattered for someone like me who didn’t come from a security background: I felt it was important to understand why I was joining this company and why my contribution matters.

I also had the chance to meet the broader marketing team. I had about twenty meetings arranged in the span of one week, and I am thankful to everyone who took time out of their busy schedules to help me understand how the global team works together. Needless to say, everyone was really smart, nice, and down to earth. I left the San Francisco office feeling really good about my start at Cloudflare, but little did I know that was just the tip of the iceberg.

Back to Singapore, where the fun happens!

After I returned to Singapore, Krishna, my manager, quickly put me to work building a pipeline for the APAC region. In a short span of six months, I had to come up to speed on the systems and processes in place, in addition to executing events across the region to ensure a continuous pipeline for our ever-growing sales team. I am going to be completely transparent here: it was overwhelming and stressful, and I was expected to deliver results in a short period of time.
However, it has also been the most exciting period of personal and professional growth for me, and I am so grateful for the opportunity to join an amazing team at one of the most exciting companies of the century. As a new team member, I had to quickly understand the needs of the sales leaders from the ASEAN countries, ANZ, the Greater China region, India, Japan, and Korea. There were so many things to learn, and everyone was very supportive and helpful. More importantly, although there were many challenges and mistakes made along the way, I felt supported by the entire team throughout.

In my first six months, I had to plan and execute an average of 28 events per quarter, ranging from flagship events like the Gartner Security & Risk Management conferences in Sydney and Mumbai, the largest gaming conference ChinaJoy in Shanghai, and the AWS series across the ASEAN countries, to leading security conferences in Korea and Japan. When Cloudflare IPO-ed on September 13, 2019, I was tasked with organizing an IPO party for over 150 people in our Singapore office in a short span of 3 weeks. What an adventure!

At our largest event in Singapore, over 30 Cloudflarians from the Singapore team took time to help out.

Just when I thought 28 events per quarter was an achievement (for myself), my team and I were given a once-in-a-lifetime opportunity to lead a series of projects related to our Japan office opening. "As the third largest economy, and one of the most Internet-connected countries in the world, Japan was a clear choice when considering expansion locations for our next APAC office,” said Matthew Prince, co-founder and CEO of Cloudflare. “Our new facility and team in Tokyo present a unique opportunity to be closer to our customers, and help even more businesses and users experience a better Internet across Japan and throughout the world.”

Japan is a new market for me and I had to start everything from scratch.
I started off by launching our very first Japan brand campaign, where the team worked closely with leading Japanese media companies to launch digital advertisements, advertorials, and video campaigns to spread awareness across Japan in just under 3 months. While it was a completely unknown path for us, the team was really good at experimenting with new ideas, analyzing results, and iterating and improving on our campaigns week by week.

Check out our amazing Japan city cloud, designed by our very talented team.

I also had the opportunity to be part of our very first hybrid (physical and virtual) press conference, held across Singapore and Tokyo, where we had 35 journalists participate (6 top-tier media in attendance and 29 journalists online). News of the office opening was covered in Japan's most influential business newspaper, Nikkei, in an article titled "US IT giant Cloudflare establishes Japanese corporation." I cannot wait to tell you more about what’s coming down the line!

Career Planning - Take charge of your career!

With so many things going on, it is easy to lose sight of the long term goal. Jake, our CMO, is very focused on ensuring the team remains engaged and motivated throughout their time at Cloudflare. He launched a mandatory career conversations program where everyone has at least one discussion with their manager about how they envision their future within the company. This was a very useful exercise for me, as I was able to have an open discussion with my manager about the various options I could consider, since Cloudflare supports cross-departmental and cross-border transitions. It is great to know that I can explore different opportunities going forward and lock down some next steps on how I will get there. Exciting times!

Inclusivity - Women for Women and Diversity

As a young woman, I am very fortunate to be part of the APAC team led by Aliza Knox.
Aliza is extremely passionate about encouraging women to pursue opportunities in business and tech. Gender discrimination is real and most companies are predominantly led by men, but as a woman I have never felt more comfortable than under her leadership. With Aliza, all opinions and ideas are strongly welcomed, and I never felt bound by my age, seniority, or experience in reaching for the skies. It is OK to be ambitious, to do more, to ask questions, or to do something as simple as getting 15 minutes of her time to ask if I should pursue an online course at MIT (and I did!).

Did I also mention Cloudflare's Employee Resource Groups (ERGs)? I am the APAC lead for Womenflare, whose mission is to cultivate an inclusive, inspiring, and safe environment that supports, elevates, and ensures equal opportunities for success for all who identify as women at Cloudflare. As part of our global Womenflare initiative, I organised an International Women’s Day luncheon in March this year where members of our APAC leadership team shared their experiences of managing career and family commitments. Other ERGs at Cloudflare include Proudflare, where we support and provide resources for the LGBTQIA+ community, Afroflare, where we aim to build a better global Afro-community at Cloudflare and beyond, and many more!

COVID-19

I am writing this blog post as we all embrace the challenges and opportunities presented by COVID-19. When COVID-19 first hit APAC, I was very impressed with how the global team adapted to everyday challenges: with great empathy for how hard working from home can be, and an understanding that it is OK to try new things and make mistakes as long as we learn from them. Our Business Continuity Team provided regular employee communication on local guidelines and work-from-home next steps. Our office support team immediately supplied computer equipment and office chairs that employees could bring home for their remote working needs.
Our Site Leads came up with different initiatives to keep the team connected, through a series of virtual yoga sessions, Friday wine-downs, and lunch and games. The latest activity we ran was Activeflare, where a group of us from the Singapore and Australia offices exercised together on a Saturday and drew a map of our activities using tracking technology. That was fun!

The global team has also launched a series of fireside chats where we hear from leaders of leading companies. It's a really nice touch, giving us exposure to the minds of great leaders we otherwise would not have access to. My favourites so far are from Doug, our Chief Legal Officer, and Katrin Suder, one of our Board Members.

My very first experience as a TV host on Cloudflare TV

Matthew, Cloudflare co-founder and CEO, recently launched Cloudflare TV for the team to experiment and connect with the Cloudflare community, even while we're locked down. And that community shares common interests in topics like web performance, Internet security, edge computing, and network reliability. Aliza and I are hosting a series of Zoomelier episodes in APAC soon to connect with winemakers and sommeliers across the region and share some interesting wine recommendations that one can drink with technology. So I hope you'll tune in, geek out, feel part of our community, and learn more about Cloudflare and the people who are building it. Check out the Cloudflare TV Guide:

Moving forward, second year at Cloudflare, what’s next?

I am at the point where I feel I have enough experience to do a good job, but not enough to be where I want to be. At Cloudflare, I strongly feel that “The more I learn, the less I realise I know” (Socrates).
I aim to continuously learn and build up my capabilities to strategize and deliver results for the present and the future. I must end this blog post with my learnings from John: “overnight success takes at least 10 years. I read a lot to stay up to date on what’s happening internally and externally. The gym (exercise) is really important to me. It's challenging and takes my mind off everything. Many people seem to view the gym as dead time to fill with TED videos, podcasts or other “useless” activities. I love the fact that it’s the one time I stop thinking.” I have applied this learning to both my personal and professional life and it has made a huge difference. Thank you John.

If you’re keen to join an impressive team and work for a very dynamic company to help create a better Internet, we’re looking for many different profiles in our offices all over the planet! Have a look!

Diversity Welcome - A Latinx journey into Cloudflare

I came to the United States in 2015, chasing the love of my life, today my wife. A native Spanish speaker with Portuguese as my second language, born in the Argentine city of Córdoba more than 6,000 miles from San Francisco, there is no doubt that the definition of "Latino" fits me very well, and I wear it with pride. Cloudflare was not my first job in this country, but it has been the organization where I have learned many of the things that have allowed me to understand the corporate culture of a society totally different from the one I come from.

I was hired in January 2018 as the first Business Development Representative for the Latin America (LATAM) region, based in San Francisco. This was long before the company went public in September 2019. The organization was looking for a specialist in Latin American markets with not only good experience and knowledge beyond languages (Spanish/Portuguese), but also an understanding of the region's economy, politics, culture, history, go-to-market strategies, and more. I was lucky enough to be chosen as "that person". Cloudflare invested in me to a great extent, and I was amazed at the freedom I had to propose ideas and bring them to reality. I have been able to experience far beyond my role as a sales representative: I have translated marketing materials, helped with campaigns, participated in various trainings, traveled to different countries to attend conferences and visit clients, and so on. Later, I was promoted to a sales executive role for the North America (NAMER) region.

Cloudflare poster signed by colleagues after our company retreat in 2018

I have been very fortunate to closely observe the growth and maturity of the organization throughout my time here.
Today, Cloudflare has three times more employees than when I started, and I can say that much of what makes this organization unique has remained intact: Cloudflare's core mission to help build a better Internet, its transparency, its protection of vulnerable yet important voices online through Project Galileo, our open door policy, and the importance of investing in people, among many others.

Myself with Matthew Prince and Michelle Zatlyn, co-founders of Cloudflare

In recent weeks I have participated in conversations around "how do we recruit more under-represented groups and avoid bias in the selection process". This has really filled me with joy, but it is certainly not the first initiative of its kind at Cloudflare. The company takes pride in having several Employee Resource Groups (ERGs) created and led by employees and executive sponsors, and highly encouraged by the organization: Afroflare, Desiflare, Nativeflare, Latinflare, Proudflare, Soberflare and Vetflare are just some of those groups (we have over 16 ERGs to date!).

At Cloudflare I have found a space where I can develop professionally, where my ideas count, and where I am allowed to make mistakes. This is not something I experienced in my previous roles with other employers. I am not afraid to admit that in other organizations I have felt the stigma of being a person of color, and that my working conditions were unfair compared to my colleagues'.

Cloudflare's values have continued to shine through during the current COVID-19 situation, and we have strengthened overall as an organization.

As an immigrant and a person of color, it is a challenge to decide to work for organizations that don't fully understand the value of adding more diversity to their workforce.
Cloudflare is a company that does value diversity in its workforce and has demonstrated a genuine interest in recruiting and retaining under-represented groups, and in creating a collective learning environment for them and the rest of the teams within the organization. The company is committed to increasing the diversity of our teams, and we want more diverse candidates in our selection processes. To achieve this, we invite you (and please encourage others) to visit our careers page for more information on full-time positions and internship roles at our locations across the globe, and apply. And if you have questions, I will leave you my email: It would be a pleasure to guide you and put you in touch with the right people within Cloudflare to better understand our technology and where we are going. Your experience and skills are what we need to continue improving the Internet. Come join me at Cloudflare!

Our team culture lives inside and outside the company - Here is our soccer team!

Internationalizing the Cloudflare Dashboard

Cloudflare’s dashboard now supports four new languages (and multiple locales): Spanish (with country-specific locales: Chile, Ecuador, Mexico, Peru, and Spain), Brazilian Portuguese, Korean, and Traditional Chinese. Our customers are global and diverse, so in helping build a better Internet for everyone, it is imperative that we bring our products and services to customers in their native language.

Since last year, Cloudflare has been hard at work internationalizing our dashboard. At the end of 2019, we launched our first language other than US English: German. At the end of March 2020, we released three additional languages: French, Japanese, and Simplified Chinese. If you want to start using the dashboard in any of these languages, you can change your language preference in the top right of the Cloudflare dashboard. The preference selected will be saved and used across all sessions.

In this blog post, I want to help those unfamiliar with internationalization and localization better understand how it works. I also would like to tell the story of how we made internationalizing and localizing our application a standard and repeatable process, along with sharing a few tips that may help you as you do the same.

Beginning the journey

The first step in internationalization is externalizing all the strings in your application. In concrete terms, this means taking any text that could be read by a user and extracting it from your application code into separate, stand-alone files. This needs to be done for a few reasons:

It enables translation teams to work on translating these strings without needing to view or change any application code.

Most translators typically use Translation Management applications, which automate aspects of the workflow and provide them with useful utilities (like translation memory, change tracking, and a number of useful parsing and formatting tools).
These applications expect standardized text formats (such as json, xml, md, or csv files).

From an engineering perspective, separating application code from translations allows for making changes to strings without re-compiling and/or re-deploying code. In our React based application, externalizing most of our strings boiled down to changing blocks of code like this:

<Button>Cancel</Button>
<Button>Next</Button>

Into this:

<Button><Trans id="signup.cancel" /></Button>
<Button><Trans id="signup.next" /></Button>

// And in a separate catalog.json file for en_US:
{
  "signup.cancel": "Cancel",
  "signup.next": "Next",
  // ...many more keys
}

The <Trans> component shown above is the fundamental i18n building block in our application. In this scheme, translated strings are kept in large dictionaries keyed by a translation id. We call these dictionaries “translation catalogs”, and there is a set of translation catalogs for each language that we support.

At runtime, the <Trans> component looks up the translation in the correct catalog for the provided key and then inserts this translation into the page (via the DOM). All of an application's static text can be externalized with simple transformations like these.

However, when dynamic data needs to be intermixed with static text, the solution becomes a little more complicated. Consider the following seemingly straightforward example, which is riddled with i18n landmines:

<span>You've selected { totalSelected } Page Rules.</span>

It may be tempting to externalize this sentence by chopping it up into a few parts, like so:

<span>
  <Trans id="selected.prefix" /> { totalSelected } <Trans id="pageRules" />
</span>

// English catalog.json
{
  "selected.prefix": "You've selected",
  "pageRules": "Page Rules",
  // ...
}

// Japanese catalog.json
{
  "selected.prefix": "選択しました",
  "pageRules": "ページ ルール",
  // ...
}

// German catalog.json
{
  "selected.prefix": "Sie haben ausgewählt",
  "pageRules": "Page Rules",
  // ...
}

// Portuguese (Brazil) catalog.json
{
  "selected.prefix": "Você selecionou",
  "pageRules": "Page Rules",
  // ...
}

This gets the job done and may even seem like an elegant solution. After all, both the selected.prefix and pageRules strings seem like they are destined to be reused. Unfortunately, chopping sentences up and then concatenating translated bits back together like this turns out to be the single largest pitfall when externalizing strings for internationalization.

The problem is that, when translated, the various words that make up a sentence can be morphed in different ways based on context (singular vs plural contexts, word gender, subject/verb agreement, etc.). This varies significantly from language to language, as does word order. For example, in English the sentence “We like them” follows a subject-verb-object order, while other languages might follow subject-object-verb (We them like), verb-subject-object (Like we them), or even other orderings. Because of these nuanced differences between languages, concatenating translated phrases into a sentence will almost always lead to localization errors.

The code example above contains actual translations we got back from our translation teams when we supplied them with “You’ve selected” and “Page Rules” as separate strings. Here’s how this sentence would look when rendered in the different languages:

Japanese: 選択しました { totalSelected } ページ ルール。
German: Sie haben ausgewählt { totalSelected } Page Rules
Portuguese (Brazil): Você selecionou { totalSelected } Page Rules.

To compare, we also gave them the sentence as a single string using a placeholder for the variable, and here’s the result:

Japanese: %{ totalSelected } 件のページ ルールを選択しました。
German: Sie haben %{ totalSelected } Page Rules ausgewählt.
Portuguese (Brazil): Você selecionou %{ totalSelected } Page Rules.

As you can see, the translations differ for Japanese and German.
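To make the word-order problem concrete, here is a minimal sketch of the difference between concatenating fragments and interpolating into a full-sentence template. The `interpolate` helper and the catalog shape are illustrative assumptions, not Cloudflare's actual implementation:

```javascript
// Hypothetical German catalog holding both styles of externalized string.
const de = {
  // Concatenation approach: word order is frozen in English order.
  "selected.prefix": "Sie haben ausgewählt",
  pageRules: "Page Rules",
  // Full-sentence approach: the translator controls word order.
  "pageRules.selected": "Sie haben %{ count } Page Rules ausgewählt.",
};

// Replace %{ name } placeholders with values from `values`.
function interpolate(template, values) {
  return template.replace(/%\{\s*(\w+)\s*\}/g, (_, name) => String(values[name]));
}

// Concatenated: produces English word order, which is wrong in German.
const concatenated = `${de["selected.prefix"]} 5 ${de.pageRules}`;
console.log(concatenated); // "Sie haben ausgewählt 5 Page Rules"

// Single template: the verb lands at the end, as German requires.
const templated = interpolate(de["pageRules.selected"], { count: 5 });
console.log(templated); // "Sie haben 5 Page Rules ausgewählt."
```

The two outputs contain the same words, but only the template version lets the translator move the placeholder where German grammar needs it.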
We’ve got a localization bug on our hands.

So, in order to guarantee that translators can convey the true meaning of your text with fidelity, it's important to keep each sentence intact as a single externalized string. Our <Trans> component allows for easy injection of values into template strings, which lets us do exactly that:

```jsx
<span>
  <Trans id="pageRules.selected" values={{ count: totalSelected }} />
</span>
```

```json
// English catalog.json
{
  "pageRules.selected": "You've selected %{ count } Page Rules.",
  // ...
}

// Japanese catalog.json
{
  "pageRules.selected": "%{ count } 件のページ ルールを選択しました。",
  // ...
}

// German catalog.json
{
  "pageRules.selected": "Sie haben %{ count } Page Rules ausgewählt.",
  // ...
}

// Portuguese (Brazil) catalog.json
{
  "pageRules.selected": "Você selecionou %{ count } Page Rules.",
  // ...
}
```

This gives translators the full context of the sentence, ensuring that all words will be translated with the correct inflection.

You may have noticed another potential issue. What happens in this example when totalSelected is just 1? With the above code, the user would see “You've selected 1 Page Rules”. We need to conditionally pluralize the sentence based on the value of our dynamic data. This turns out to be a fairly common use case, and our <Trans> component handles it automatically via the smart_count feature:

```jsx
<span>
  <Trans id="pageRules.selected" values={{ smart_count: totalSelected }} />
</span>
```

```json
// English catalog.json
{
  "pageRules.selected": "You've selected %{ smart_count } Page Rule. |||| You've selected %{ smart_count } Page Rules.",
}

// Japanese catalog.json
{
  "pageRules.selected": "%{ smart_count } 件のページ ルールを選択しました。 |||| %{ smart_count } 件のページ ルールを選択しました。",
}

// German catalog.json
{
  "pageRules.selected": "Sie haben %{ smart_count } Page Rule ausgewählt. |||| Sie haben %{ smart_count } Page Rules ausgewählt.",
}

// Portuguese (Brazil) catalog.json
{
  "pageRules.selected": "Você selecionou %{ smart_count } Page Rule. |||| Você selecionou %{ smart_count } Page Rules.",
}
```

Here, the singular and plural versions are delimited by ||||. <Trans> automatically selects the right translation to use depending on the value of the passed-in totalSelected variable.

Yet another stumbling block occurs when markup is mixed in with a block of text we'd like to externalize as a single string. For example, what if you need some phrase in your sentence to be a link to another page?

```jsx
<VerificationReminder>
  Don't forget to <Link>verify your email address.</Link>
</VerificationReminder>
```

To solve this use case, the <Trans> component allows arbitrary elements to be injected into placeholders in a translation string, like so:

```jsx
<VerificationReminder>
  <Trans
    id="notification.email_verification"
    Components={[Link]}
    componentProps={[{ to: '/profile' }]}
  />
</VerificationReminder>
```

```json
// catalog.json
{
  "notification.email_verification": "Don't forget to <0>verify your email address.</0>",
  // ...
}
```

In this example, the <Trans> component replaces placeholder elements (<0>, <1>, etc.) with instances of the component type located at that index in the Components array. It also passes along any data specified in componentProps to that instance. The example above boils down to the following in React:

```jsx
// en-US
<VerificationReminder>
  Don't forget to <Link to="/profile">verify your email address.</Link>
</VerificationReminder>

// es-ES
<VerificationReminder>
  No olvide <Link to="/profile">verificar la dirección de correo electrónico.</Link>
</VerificationReminder>
```

Safety third!

The functionality outlined above was enough for us to externalize our strings. However, it did at times result in bulky, repetitive code that was easy to mess up.
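To make the mechanics concrete, here is a simplified, hypothetical sketch of the smart_count lookup described above. It is not Polyglot's or our <Trans> component's actual implementation, and it hardcodes an English-like two-form plural rule, whereas real libraries consult per-locale plural rules:

```javascript
// Pick a plural form from a "singular |||| plural" template and
// interpolate %{ name } placeholders. Simplified sketch only.
function translate(catalog, id, values = {}) {
  let phrase = catalog[id];
  if (phrase === undefined) return id; // fall back to the key itself

  // Choose a plural form when smart_count is supplied. A binary
  // singular/plural split; real plural rules are locale-specific.
  if ('smart_count' in values) {
    const forms = phrase.split('||||').map((s) => s.trim());
    phrase = values.smart_count === 1 ? forms[0] : forms[forms.length - 1];
  }

  // Interpolate %{ name } placeholders from the values object.
  return phrase.replace(/%\{\s*(\w+)\s*\}/g, (_, name) =>
    name in values ? String(values[name]) : `%{ ${name} }`
  );
}

const catalog = {
  'pageRules.selected':
    "You've selected %{ smart_count } Page Rule. |||| You've selected %{ smart_count } Page Rules.",
};

console.log(translate(catalog, 'pageRules.selected', { smart_count: 1 }));
// You've selected 1 Page Rule.
console.log(translate(catalog, 'pageRules.selected', { smart_count: 3 }));
// You've selected 3 Page Rules.
```

The key design point is that the entire sentence, in every plural form, stays in one catalog entry, so translators always see full sentences.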
A couple of pitfalls quickly became apparent.

The first was that small hardcoded strings were now easier to hide in plain sight, and because they weren't glaringly obvious to a developer until the rest of the page had been translated, the feedback loop in finding them was often days or weeks. A common solution for surfacing these issues is introducing a pseudolocalization mode into your application during development, which transforms all properly internationalized strings by replacing each character with a similar-looking Unicode character.

For example, You've selected 3 Page Rules. might be transformed to Ýôú'Ʋè ƨèℓèçƭèδ 3 Þáϱè Rúℓèƨ.

Another handy feature of a pseudolocalization mode is the ability to shrink or lengthen all strings by a fixed amount in order to plan for content width differences. Here's the same pseudolocalized sentence increased in length by 50%: Ýôú'Ʋè ƨèℓèçƭèδ 3 Þáϱè Rúℓèƨ. ℓôřè₥ ïƥƨú₥ δô. This is useful in helping both engineers and designers spot places where content length could potentially be an issue. We first recognized this problem when rolling out support for German, which tends to have somewhat longer words than English. In a lot of places, text in page elements would overflow, such as in this "Add" button:

There aren't a lot of easy fixes for these types of problems that don't compromise the user experience. For best results, variable content width needs to be baked into the design itself. Since fixing these bugs often means sending the issue back upstream to request a new design, the process tends to be time consuming. If you haven't given much thought to content design in general, an internationalization effort can be a good time to start.
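A pseudolocalization pass like the one described above can be sketched in a few lines. This is a toy version, not the tool we use; the character map and filler text here are purely illustrative:

```javascript
// Toy pseudolocalization: swap selected ASCII letters for accented
// lookalikes, then pad the string to simulate longer translations.
// The map is deliberately tiny and illustrative.
const LOOKALIKES = {
  a: 'á', e: 'è', i: 'ï', o: 'ô', u: 'ú',
  A: 'Á', E: 'È', I: 'Ï', O: 'Ô', U: 'Ú',
  c: 'ç', n: 'ñ', y: 'ý', Y: 'Ý',
};

function pseudolocalize(str, lengthenBy = 0.5) {
  // Replace each mapped character; leave everything else intact.
  const transformed = [...str].map((ch) => LOOKALIKES[ch] || ch).join('');
  // Append filler to simulate translations ~50% longer than English.
  const padding = ' ℓôřè₥ ïƥƨú₥'.repeat(10)
    .slice(0, Math.ceil(str.length * lengthenBy));
  return transformed + padding;
}

console.log(pseudolocalize("You've selected 3 Page Rules."));
```

Because every properly externalized string passes through this transform while hardcoded strings do not, untranslated text jumps out immediately during development.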
Having standards and consistency around the copy used for various elements in your app can not only cut down on the number of words that need translating, but also eliminate the need to think through the content length pitfalls of using a novel phrase.

The other pitfall we ran into was that translation ids — especially long and repetitive ones — are highly susceptible to typos. Pop quiz: could you spot a mistyped translation key in a diff? Nestled among hundreds of other lines of changes, typos like these are hard to catch in code review. Most apps have a fallback so missing translations don't result in a page-breaking error, so a bug like this might go unnoticed entirely if it's hidden well enough (in, say, a help text flyout).

Fortunately, with a growing percentage of our codebase in TypeScript, we were able to leverage the type-checker to give developers feedback as they wrote the code. Here’s an example where our code editor is helpfully showing us a red underline to indicate that the id property is invalid (due to the missing “l”):

Not only did this make the problems more obvious, but it also meant that violations would cause builds to fail, preventing bad code from entering the codebase.

Scaling locale files

In the beginning, you'll probably start out with one translation file per locale that you support, and the naming scheme you use for your keys can remain somewhat simple. As your app scales, your translation file will grow too large and need to be broken up into separate files. Files that are too large will overwhelm Translation Management applications or, if left unchecked, your code editor. All of our translation strings (not including keys), when lumped together into a single file, add up to around 50,000 words.
For comparison, that's roughly the same size as a copy of "The Hitchhiker's Guide to the Galaxy" or "Slaughterhouse-Five".

We break up our translations into a number of "catalog" files roughly corresponding to feature verticals (like Firewall or Cloudflare Workers). This works out well for our developers since it provides a predictable place to find strings, and it keeps the line count of a translation catalog down to a manageable length. It also works out well for the outside translation teams, since a single feature vertical is a good unit of work for a translator (or a small team).

In addition to per-feature catalogs, we have a common catalog file to hold strings that are re-used throughout the application. It allows us to keep ids short (common.delete vs. some_page.some_tab.some_feature.thing.delete) and lowers the likelihood of duplication, since developers habitually check the common catalog before adding new strings.

Libraries

So far we've talked at length about our <Trans> component and what it can do. Now, let's talk about how it's built.

Perhaps unsurprisingly, we didn't want to reinvent the wheel and come up with a base i18n library from scratch. Due to prior efforts to internationalize the legacy parts of our application written in Backbone, we were already using Airbnb's Polyglot library, a "tiny I18n helper library written in JavaScript" which, among other things, "provides a simple solution for interpolation and pluralization, based off of Airbnb’s experience adding I18n functionality to its Backbone.js and Node apps".

We took a look at a few of the most popular libraries that had been purpose-built for internationalizing React applications, but ultimately decided to stick with Polyglot. We created our <Trans> component to bridge the gap to React.
We chose this direction for a few reasons:

- We didn't want to re-internationalize the legacy code in our application in order to migrate to a new i18n support library.
- We also didn't want the combined overhead of supporting two different i18n schemes for new vs. legacy code.
- Writing our own <Trans> component gave us the flexibility to design the interface we wanted. Since <Trans> is used just about everywhere, we wanted to make sure it was as ergonomic as possible for developers.

If you're just getting started with i18n in a new React-based web app, react-intl and i18next are two popular libraries that supply a component similar to the <Trans> described above.

The biggest pain point of the <Trans> component as outlined is that strings have to be kept in a separate file from your source code. Switching between multiple files as you author new code or modify existing features is just plain annoying. It's even more annoying if the translation files are kept far away in the directory structure, as they often need to be.

There are some newer i18n libraries, such as jslingui, that obviate this problem by taking an extraction-based approach to handling translation catalogs. In this scheme, you still use a <Trans> component, but you keep your strings in the component itself, not in a separate catalog:

```jsx
<span>
  <Trans>Hmm... We couldn't find any matching websites.</Trans>
</span>
```

A tool that you run at build time then does the work of finding all of these strings and extracting them into catalogs for you. For example, the above would result in the following generated catalogs:

```json
// locales/en_US.json
{
  "Hmm... We couldn't find any matching websites.": "Hmm... We couldn't find any matching websites.",
}

// locales/de_DE.json
{
  "Hmm... We couldn't find any matching websites.": "Hmm... Wir konnten keine übereinstimmenden Websites finden."
}
```

The obvious advantage to this approach is that we no longer have separate files!
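A build-time extractor of this sort boils down to scanning source for <Trans> contents and seeding a catalog keyed by the message itself. Here is a drastically simplified, hypothetical sketch; real tools like jslingui's CLI parse the syntax tree rather than using a regex:

```javascript
// Simplified sketch of an extraction pass: find default messages
// inside <Trans>...</Trans> tags and seed a base-locale catalog
// where key and value coincide. Illustrative only.
function extractMessages(sourceFiles) {
  const catalog = {};
  const pattern = /<Trans>([\s\S]*?)<\/Trans>/g;
  for (const source of sourceFiles) {
    for (const match of source.matchAll(pattern)) {
      const message = match[1].trim();
      catalog[message] = message; // base locale: message is its own key
    }
  }
  return catalog;
}

const files = [
  "<span><Trans>Hmm... We couldn't find any matching websites.</Trans></span>",
];
console.log(extractMessages(files));
```

Catalogs for other locales start as copies of this output and are overwritten by translators, which is why the English value doubles as the lookup key.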
The other advantage is that there's no longer any need for type-checking ids, since typos can't happen anymore.

However, at least for our use case, there were a few downsides.

First, human translators sometimes appreciate the context of the translation keys. They help with organization, and they give some clues about a string's purpose.

And although we no longer have to worry about typos in translation ids, we're just as susceptible to slight copy mismatches (e.g. "Verify your email" vs. "Verify your e-mail"). This is almost worse, since it would introduce a near-duplication that would be hard to detect. We'd also have to pay for it.

Whichever tech stack you're working with, there are likely a few i18n libraries that can help you out. Which one to pick is highly dependent on the technical constraints of your application and the context of your team's goals and culture.

Numbers, Dates, and Times

Earlier, when we talked about injecting data into translated strings, we glossed over a major issue: the data we're injecting may also need to be formatted to conform to the user's local customs. This is true for dates, times, numbers, currencies, and some other types of data.

Let's take our simple example from earlier:

```jsx
<span>You've selected { totalSelected } Page Rules.</span>
```

Without proper formatting, this will appear correct for small numbers, but as soon as things get into the thousands, localization problems will arise, since the way that digits are grouped and separated with symbols varies by culture.
Here's how three hundred thousand and three hundredths is formatted in a few different locales:

| Language (Country) | Code | Formatted Number |
| --- | --- | --- |
| German (Germany) | de-DE | 300.000,03 |
| English (US) | en-US | 300,000.03 |
| English (UK) | en-GB | 300,000.03 |
| Spanish (Spain) | es-ES | 300.000,03 |
| Spanish (Chile) | es-CL | 300.000,03 |
| French (France) | fr-FR | 300 000,03 |
| Hindi (India) | hi-IN | 3,00,000.03 |
| Indonesian (Indonesia) | in-ID | 300.000,03 |
| Japanese (Japan) | ja-JP | 300,000.03 |
| Korean (South Korea) | ko-KR | 300,000.03 |
| Portuguese (Brazil) | pt-BR | 300.000,03 |
| Portuguese (Portugal) | pt-PT | 300 000,03 |
| Russian (Russia) | ru-RU | 300 000,03 |

The way that dates are formatted also varies significantly from country to country. If you've developed your UI mainly with a US audience in mind, you're probably displaying dates in a way that will feel foreign and perhaps unintuitive to users from just about any other place in the world. Among other things, date formatting can vary in terms of separator choice, whether single digits are zero-padded, and the way the day, month, and year portions are ordered. Here's how March 4th of the current year is formatted in a few different locales:

| Language (Country) | Code | Formatted Date |
| --- | --- | --- |
| German (Germany) | de-DE | 4.3.2020 |
| English (US) | en-US | 3/4/2020 |
| English (UK) | en-GB | 04/03/2020 |
| Spanish (Spain) | es-ES | 4/3/2020 |
| Spanish (Chile) | es-CL | 04-03-2020 |
| French (France) | fr-FR | 04/03/2020 |
| Hindi (India) | hi-IN | 4/3/2020 |
| Indonesian (Indonesia) | in-ID | 4/3/2020 |
| Japanese (Japan) | ja-JP | 2020/3/4 |
| Korean (South Korea) | ko-KR | 2020. 3. 4. |
| Portuguese (Brazil) | pt-BR | 04/03/2020 |
| Portuguese (Portugal) | pt-PT | 04/03/2020 |
| Russian (Russia) | ru-RU | 04.03.2020 |

Time format varies significantly as well.
Here's how time is formatted in a few selected locales:

| Language (Country) | Code | Formatted Time |
| --- | --- | --- |
| German (Germany) | de-DE | 14:02:37 |
| English (US) | en-US | 2:02:37 PM |
| English (UK) | en-GB | 14:02:37 |
| Spanish (Spain) | es-ES | 14:02:37 |
| Spanish (Chile) | es-CL | 14:02:37 |
| French (France) | fr-FR | 14:02:37 |
| Hindi (India) | hi-IN | 2:02:37 pm |
| Indonesian (Indonesia) | in-ID | 14.02.37 |
| Japanese (Japan) | ja-JP | 14:02:37 |
| Korean (South Korea) | ko-KR | 오후 2:02:37 |
| Portuguese (Brazil) | pt-BR | 14:02:37 |
| Portuguese (Portugal) | pt-PT | 14:02:37 |
| Russian (Russia) | ru-RU | 14:02:37 |

Libraries for Handling Numbers, Dates, and Times

Ensuring the correct format for all of these types of data across all supported locales is no easy task. Fortunately, there are a number of mature, battle-tested libraries that can help you out.

When we kicked off our project, we were using the Moment.js library extensively for date and time formatting. This handy library abstracts away the details of formatting dates to different lengths ("Jul 9th 20", "July 9th 2020", vs. "Thursday"), displaying relative dates ("2 days ago"), among many other things. Since almost all of our dates were already being formatted via Moment.js for readability, and since Moment.js already has i18n support for a large number of locales, we were able to flip a couple of switches and have properly localized dates with very little effort.

There are some strong criticisms of Moment.js (mainly bloat), but ultimately the cost of redoing every date and time outweighed the benefits of switching to a lower-footprint alternative.

Numbers were a very different story. We had, as you might imagine, thousands of raw, unformatted numbers being displayed throughout the dashboard.
Hunting them down was a laborious and often manual process.

To handle the actual formatting of numbers, we used the Intl API (the internationalization library defined by the ECMAScript standard):

```javascript
var number = 300000.03;
var formatted = number.toLocaleString('hi-IN'); // 3,00,000.03
// This probably works in the browser you're using right now!
```

Fortunately, browser support for Intl has come quite a long way in recent years, with all modern browsers having full support. Some modern JavaScript engines like V8 have even moved away from self-hosted JavaScript implementations of these libraries in favor of C++-based builtins, resulting in significant speedups.

Support for older browsers can be somewhat lacking, however. Here's a simple demo site (source code) built with Cloudflare Workers that shows how dates, times, and numbers are rendered in a handful of locales. Some combinations of old browsers and OSes will yield less than ideal results. For example, here's how the same dates and times from above are rendered on Windows 8 with IE 10:

If you need to support older browsers, this can be solved with a polyfill.

Translating

With all strings externalized and all injected data carefully formatted to locale-specific standards, the bulk of the engineering work is complete. At this point, we can claim that we’ve internationalized our application, since we’ve adapted it in a way that makes it easy to localize.

Next comes the process of localization, where we actually create varying content based on the user’s language and cultural norms. This is no small feat. Like we mentioned before, the strings in our application added together are the size of a small novel.
It takes a significant amount of coordination and human expertise to create a translated copy that both captures the information with fidelity and speaks to the user in a familiar way.

There are many ways to handle the translation work: leveraging multilingual staff members, contracting the work out to individual translators or agencies, or even going all in and hiring teams of in-house translators. Whatever the case may be, there needs to be a smooth process for workflow signalling and for moving assets between the translation and development teams.

A healthy i18n program provides developers with a black-box interface to the process: they put new strings in a translation catalog file and commit the change, and without any more effort on their part, the feature code they wrote is available in production for all supported locales a few days later. Similarly, in a well-run process, translators remain blissfully unaware of the particulars of the development process and application architecture. They receive files that load easily in their tools and clearly indicate what translation work needs to be done.

So, how does it actually work in practice?

We have a set of automated scripts that can be run on demand by the localization team to package up a snapshot of our localization catalogs for all supported languages. During this process, a few things happen:

- JSON files are generated from catalog files authored in TypeScript.
- If any new catalog files were added in English, placeholder copies are created for all other supported languages.
- Placeholder strings are added for all languages when new strings are added to our base catalog.

From there, the translation catalogs are uploaded to the Translation Management system via the UI or automated calls to the API. Before the files are handed off to translators, they are pre-processed by comparing each new string against a Translation Memory (a cache of previously translated strings and substrings).
If a match is found, the existing translation is used. Not only does this save cost by not re-translating strings, but it also improves quality by ensuring that previously reviewed and approved translations are used whenever possible.

Suppose your locale files end up looking something like this:

```json
{
  "verify.button": "Verify Email",
  "other.verify.button": "Verify Email",
  "": "Verify Email to proceed",
  // ...
}
```

Here, we have strings that are duplicated verbatim, as well as substrings that are copied. Translation services are billed by the word; you don’t want to pay for something twice and run the risk of a consistency issue arising. To this end, a well-maintained Translation Memory ensures that these strings are taken care of in the pre-translation steps before translators even see the file.

Once the translation job is marked as ready, it can take translation teams anywhere from hours to weeks to complete and return translated copies, depending on a number of factors such as the size of the job, the availability of translators, and the contract terms. The concerns of this phase could fill another blog article of similar length: sourcing the right translation team, controlling costs, ensuring quality and consistency, making sure the company’s brand is properly conveyed, and so on. Since the focus of this article is largely technical, we’ll gloss over the details here, but make no mistake: getting this part wrong will tank your entire effort, even if you’ve achieved your technical objectives.

After translation teams signal that new files are ready for pickup, the assets are pulled from the server and unpacked into their correct locations in the application code. We then run a suite of automated checks to make sure that all files are valid and free of any formatting issues.

An optional (but highly recommended) step takes place at this stage: in-context review.
A team of translation reviewers looks at the translated output in context to make sure everything looks perfect in its finalized state. Having support staff who are both highly proficient with the product and fluent in the target language is especially useful in this effort. Shoutout to all our team members from around the company who have taken the time and effort to do this. To make this possible for outside contractors, we prepare special preview versions of our app that allow them to test with development-mode locales enabled.

And there you have it: everything it takes to deliver a localized version of your application to your users all around the world.

Continual Localization

It would be great to stop here, but what we’ve discussed up until this point is the effort required to do it once. As we all know, code changes. New strings will be gradually added, modified, and deleted over the course of time as new features are launched and tweaked.

Since translation is a highly human process that often involves effort from people in different corners of the world, there is a lower bound to the timeframe in which turnover is possible. Since our release cadence (daily) is often faster than this turnover rate (2-5 days), developers making changes to features have to make a choice: slow down to match this cadence, or ship slightly ahead of the localization schedule without full coverage.

In order to ensure that features shipping ahead of translations don’t cause application-breaking errors, we fall back to our base locale (en_US) if a string doesn’t exist for the configured language. Some applications have a slightly different fallback behavior: displaying raw translation keys (perhaps you've seen this in an app you're using). There's a tradeoff between velocity and correctness here, and we chose to optimize for velocity and minimal overhead. In some apps, correctness is important enough to slow down the release cadence for i18n.
In our case, it wasn't.

Finishing Touches

There are a few more things we can do to optimize the user experience in our newly localized application. First, we want to make sure there isn’t any performance degradation. If our application made the user fetch all of its translated strings before rendering the page, that would surely happen. So, in order to keep everything running smoothly, the translation catalogs are fetched asynchronously and only as the application needs them to render some content on the page. This is easy to accomplish nowadays with the code-splitting features available in module bundlers that support dynamic import statements, such as Parcel or Webpack.

We also want to eliminate any friction the user might experience with needing to constantly select their desired language when visiting different Cloudflare properties. To this end, we made sure that any language preference a user selects on our marketing site or our support site persists as they navigate to and from our dashboard (all links are in French to belabor the point).

What’s next?

It’s been an exciting journey, and we’ve learned a lot from the process. It’s difficult (perhaps impossible) to call an i18n project truly complete. Expanding into new languages will surface slippery bugs and expose new challenges. Budget pressure will challenge you to find ways of cutting costs and increasing efficiency. In addition, you will discover ways in which you can enhance the localized experience even more for users.

There’s a long list of things we’d like to improve upon, but here are some of the highlights:

- Collation. String comparison is language-sensitive, so the code you’ve written to lexicographically sort lists and tables of data in your app is probably doing the wrong thing for some of your users.
This is especially apparent in languages that use logographic writing systems (such as Chinese or Japanese) as opposed to languages that use alphabets (like English or Spanish).

- Support for right-to-left languages like Arabic and Hebrew.
- Localizing API responses. This is harder than localizing static copy in your user interface, as it takes a coordinated effort between teams. In the age of microservices, finding a solution that works well across the myriad tech stacks that power each service can be very challenging.
- Localizing maps. We’ll be working on making sure all content in our map-based visualizations is translated.
- Machine translation. It has come a long way in recent years, but not far enough to churn out our translations unsupervised. We would, however, like to experiment more with using machine translation as a first pass that translation reviewers then edit for correctness and tone.

I hope you have enjoyed this overview of how Cloudflare internationalized and localized our dashboard. Check out our careers page for more information on full-time positions and internship roles across the globe.

Introducing IP Lists

Authentication on the web has been steadily moving to the application layer using services such as Cloudflare Access to establish and enforce software-controlled, zero trust perimeters. However, there are still several important use cases for restricting access at the network level by source IP address, autonomous system number (ASN), or country. For example, some businesses are prohibited from doing business with customers in certain countries, while others maintain a blocklist of problematic IPs that have previously attacked them.

Enforcing these network restrictions at centralized chokepoints using appliances—hardware or virtualized—adds unacceptable latency and complexity, but doing so performantly for individual IPs at the Cloudflare edge is easy. Today we’re making it just as easy to manage tens of thousands of IPs across all of your zones by grouping them in data structures known as IP Lists. Lists can be stored with metadata at the Cloudflare edge, replicated within seconds to our data centers in 200+ cities, and used as part of our powerful, expressive Firewall Rules engine to take action on incoming requests.

Creating and using an IP List

Previously, these sorts of network-based security controls were configured using IP Access or Zone Lockdown rules. Both tools have a number of shortcomings that we’ve eliminated with the introduction of IP Lists, including:

IP prefix boundaries

Our legacy IP Access rules allow the use of a limited number of IP prefix lengths: /16 and /24 for IPv4; and /32, /48, and /64 for IPv6. These restrictions typically result in users creating far more rules than needed, e.g., if you want to block a /20 IPv4 network you must create 16 separate /24 entries.

With IP Lists we’ve removed this restriction entirely, allowing users to create Lists with any prefix length: /2 through /32 for IPv4 and /4 through /64 for IPv6.
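Why do arbitrary prefix lengths come cheaply? Matching an address against a prefix is just a bitwise comparison of the network bits. Here's a hypothetical, IPv4-only sketch in JavaScript (not Cloudflare's implementation; production code would also handle IPv6 and use a longest-prefix-match structure rather than a linear scan):

```javascript
// Convert a dotted-quad IPv4 address to an unsigned 32-bit integer.
function ipv4ToInt(ip) {
  return ip.split('.').reduce((acc, octet) => (acc << 8) | Number(octet), 0) >>> 0;
}

// Check whether an address falls inside any CIDR (or bare IP) in a list.
function inList(ip, cidrs) {
  const addr = ipv4ToInt(ip);
  return cidrs.some((cidr) => {
    const [network, prefixStr] = cidr.split('/');
    const prefix = Number(prefixStr ?? 32); // bare IPs act like /32
    if (prefix === 0) return true;          // /0 matches everything
    const mask = (0xffffffff << (32 - prefix)) >>> 0;
    return ((addr & mask) >>> 0) === ((ipv4ToInt(network) & mask) >>> 0);
  });
}

const blocklist = ['198.51.100.0/20', '203.0.113.7'];
console.log(inList('198.51.111.56', blocklist)); // true: inside the /20
console.log(inList('198.51.112.1', blocklist));  // false: just outside it
console.log(inList('203.0.113.7', blocklist));   // true: exact match
```

A single /20 entry here covers what would have taken 16 separate /24 rules under the old prefix restrictions.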
Lists can contain both IPv4 and IPv6 networks, as well as individual IP addresses.

Order of evaluation

Perhaps the most limiting factor in the use of IP Access rules today is that they are evaluated before Firewall Rules. You can elect to Block or Challenge the request based on the source IP address, country, or ASN, or you can allow the request to bypass all subsequent L7 mitigations: DDoS, Firewall Rules, Zone Lockdown, User Agent, Browser Integrity Check, Hotlink Protection, IP Reputation (including “Under Attack” Mode), Rate Limiting, and Managed Rules.

IP Lists introduce much more flexibility. For example, with IP Lists you can combine a check of a source IP address with a Bot Management score, the contents of an HTTP request header, or any other filter criteria that the Firewall Rules engine supports to implement more complex logic. Below is a rule that blocks requests to /login with a bot score below 30, unless the request is coming from the probe servers of Pingdom, an external monitoring solution.

Shared use across zones

Zone Lockdown rules are defined exclusively at the zone level and cannot be re-used across zones, so if you wanted to allow only a specific set of IPs into the same hundred zones, you’d have to recreate the rules and IPs in each zone. IP Lists are stored at the account level, not the zone level, so the same list can be referenced—and updated—in Firewall Rules across multiple zones. We’re also hard at work on letting you create account-wide Firewall Rules, which will streamline your security configuration even further.

Organization, labeling, and bulk uploading

IP Access and Zone Lockdown rules must be created one at a time, whereas IP Lists can be uploaded in bulk through the UI using a CSV file (or by pasting multiple lines, as shown below), or via the API. Individual items are timestamped and can be given descriptions, as can the List itself.

In the clip below, the contents of Pingdom's IPv4 list are copied to the clipboard and then pasted into the Lists UI.
Multiple rows will automatically be created, as shown:

Actions available for use in rules

Because IP Lists are used within Firewall Rules, users can take advantage of all the actions available today, as well as those we add in the future. In the coming months we plan to migrate all of the capabilities under Firewall → Tools into Firewall Rules, including Rate Limiting, which will require the addition of the Custom Response action. This action, which allows users to specify the status code, content type, and payload that gets delivered to the eyeball, will then be usable with IP Lists.

Planned Enhancements

We wanted to get IP Lists into your hands as soon as possible, but we’re still working on adding additional capabilities. If you have thoughts on our ideas below, or have other suggestions, please comment at the end of this blog post—we’d love to hear what would make Lists more useful to you!

Multiple Lists and increased quotas for paid plans

As of today, every account can create one (1) IP List with a total of 1,000 entries. In the near future we plan to increase both the number of Lists that can be created and the total count of entries. If you have a specific use case for multiple (or larger) Lists today, please contact your Customer Success Manager or file a support ticket.

Additional types of custom Lists

Lists are assigned a type during creation, and the first type available is the IP List. We plan to add Country and ASN Lists, and are monitoring feedback to see what other types may be useful.

Expiring List entries

We’ve heard a few requests from beta testers who are interested in expiring individual List entries after some specified period of time, e.g., 24 hours from addition to the List. If this is something of interest to you, please let us know along with your use cases.

Managed Lists

In addition to Lists that you create and manage yourself, we plan to curate Lists that you can subscribe to and use in your rules.
Our initial ideas revolve around surfacing intelligence gleaned from the 27M properties reverse proxying traffic through the Cloudflare edge, e.g., equipping you with lists of IPs that are known open proxies so requests from these can be treated differently.

In addition to intelligent lists, we’re planning on creating other managed lists for your convenience, but we need your help in identifying what those are. Are there lists of IPs you find yourself manually inputting? We’d like to hear about those as candidates for Cloudflare Managed Lists. Some examples from beta testers include third-party performance monitoring tools that should never have security enforcements applied to them. Are you paying for a third-party List today that you’d like to subscribe to and have automatically updated within Cloudflare? Let us know in the comments below.

Get started today and let us know what you think

IP Lists are now available in all Cloudflare accounts. We’re excited to let you start using them, and look forward to your feedback.

Why I’m Helping Cloudflare Grow in Japan

If you'd like to read this post in Japanese click here.

I’m excited to say that I’ve recently joined the Cloudflare team as Head of Japan. Cloudflare has had a presence in Japan for a while now, not only with its network spanning the country, but also with many Japanese customers and partners, whose numbers I’m now looking forward to growing. In this new role, I’m focused on expanding our capabilities in the Japanese market, building upon our current efforts, and helping more companies in the region address and put an end to the technical pain points they are facing. This is an exciting time for me and an important time for the company. Today, I’m particularly eager to share that we are opening Cloudflare’s first Japan office, in Tokyo! I can’t wait to grow the Cloudflare business and team here.

Why Cloudflare?

The web was built 25 years ago. This invention changed the way people connected—to anyone and anywhere—and the way we work, play, live, and learn. Since then, the web has become more and more complex, and with complexity come difficulties, such as ensuring security, performance, and reliability online. Cloudflare is helping to solve the challenges that businesses face in a very effective way, and I wanted to be a part of it. Even back in my days at Cisco, where I got to know many people in the network technical community, many of those people mentioned Cloudflare as the vendor for the future of the Internet. Cloudflare is in a unique position to help make the Internet better for everyone across the globe.

I want online users to have a better experience, one that’s fast, secure, and reliable, and I’m excited to help make this a reality while working at Cloudflare. I believe the team here is providing the tools to make the Internet better and easier, and is making customers happier. One thing that is important for me, one of my values you could say, is focusing on solving customers’ problems.
This is something that I have seen Cloudflare be deeply involved with as well. I’m passionate about helping more and more customers in Japan, and now in this new role, I’m ready to help make a better Internet part of their reality.

Cloudflare Japan

One of the challenges I see in Japan is that Japanese enterprises still run old on-prem systems and have been late to move to the cloud. This includes companies that rely heavily on the Internet and may be facing complexities or difficulties, which shouldn’t be the case. Cloudflare provides solutions that make the move to multi-cloud environments much faster and easier. We have been working with various customers in Japan already, and I’m excited to begin helping more and more businesses in the region. We’ve been committed to our partner network as well, which I’m excited to now be involved with and help grow even more. We have a number of channel partners in Japan, including large system integrators and mid-size cloud integrators, which cover various industries in the region. Cloudflare’s massive network, one of the largest in the world, currently spans 206 cities and more than 100 countries across the globe, including many in Asia-Pacific, and Osaka and Tokyo in Japan. This global network and team provides Japanese customers and partners (in various verticals and of all sizes) with the security, performance, and reliability solutions that are needed for their business-critical applications to connect to their users all across the world.

We are continuing to grow the Cloudflare team and are now hiring for roles in our first Japan office, in Tokyo. If you're interested in joining this ambitious mission to help build a better Internet, for everyone, including companies and users in Japan, please visit our Tokyo careers page here. You can see the open roles for this office, which include Sales, Marketing, Technical Support, and more.
I can’t wait to see what the Cloudflare team does for the region and beyond.

Our opportunities in Japan and beyond

I’m looking forward to enabling Japanese customers with the network and tools to scale their businesses. There are still many users building their security protections and other solutions by themselves, in on-prem and cloud environments. If you are facing complex issues, seeking security features in multi-cloud environments, or looking to reduce cost, reach out to me. We have a solution for that. We are here to help you.

Cloudflare outage on July 17, 2020

Today a configuration error in our backbone network caused an outage for Internet properties and Cloudflare services that lasted 27 minutes. We saw traffic drop by about 50% across our network. Because of the architecture of our backbone this outage didn’t affect the entire Cloudflare network and was localized to certain geographies. The outage occurred because, while working on an unrelated issue with a segment of the backbone from Newark to Chicago, our network engineering team updated the configuration on a router in Atlanta to alleviate congestion. This configuration contained an error that caused all traffic across our backbone to be sent to Atlanta. This quickly overwhelmed the Atlanta router and caused Cloudflare network locations connected to the backbone to fail.

The affected locations were San Jose, Dallas, Seattle, Los Angeles, Chicago, Washington, DC, Richmond, Newark, Atlanta, London, Amsterdam, Frankfurt, Paris, Stockholm, Moscow, St. Petersburg, São Paulo, Curitiba, and Porto Alegre. Other locations continued to operate normally.

For the avoidance of doubt: this was not caused by an attack or breach of any kind.

We are sorry for this outage and have already made a global change to the backbone configuration that will prevent it from happening again.

The Cloudflare Backbone

Cloudflare operates a backbone between many of our data centers around the world. The backbone is a series of private lines between our data centers that we use for faster and more reliable paths between them. These links allow us to carry traffic between different data centers without going over the public Internet. We use this, for example, to reach a website origin server sitting in New York, carrying requests over our private backbone from locations as far away as San Jose, California, Frankfurt, or São Paulo. This additional option to avoid the public Internet allows a higher quality of service, as the private network can be used to avoid Internet congestion points.
With the backbone, we have far greater control over where and how to route Internet requests and traffic than the public Internet provides.

Timeline

All timestamps are UTC.

First, an issue occurred on the backbone link between Newark and Chicago which led to backbone congestion between Atlanta and Washington, DC. In responding to that issue, a configuration change was made in Atlanta. That change started the outage at 21:12. Once the outage was understood, the Atlanta router was disabled and traffic began flowing normally again at 21:39. Shortly after, we saw congestion at one of our core data centers that processes logs and metrics, causing some logs to be dropped. During this period the edge network continued to operate normally.

20:25: Loss of backbone link between EWR and ORD
20:25: Backbone between ATL and IAD is congesting
21:12 to 21:39: ATL attracted traffic from across the backbone
21:39 to 21:47: ATL dropped from the backbone, service restored
21:47 to 22:10: Core congestion caused some logs to drop, edge continues operating
22:10: Full recovery, including logs and metrics

Here’s a view of the impact from Cloudflare’s internal traffic manager tool. The red and orange region at the top shows CPU utilization in Atlanta reaching overload, and the white regions show affected data centers seeing CPU drop to near zero as they were no longer handling traffic. This is the period of the outage.

Other, unaffected data centers show no change in their CPU utilization during the incident. That’s indicated by the fact that the green color does not change during the incident for those data centers.

What happened and what we’re doing about it

As there was backbone congestion in Atlanta, the team had decided to remove some of Atlanta’s backbone traffic.
But instead of removing the Atlanta routes from the backbone, a one line change started leaking all BGP routes into the backbone.

{master}[edit]
atl01# show | compare
[edit policy-options policy-statement 6-BBONE-OUT term 6-SITE-LOCAL from]
!    inactive: prefix-list 6-SITE-LOCAL { ... }

The complete term looks like this:

from {
    prefix-list 6-SITE-LOCAL;
}
then {
    local-preference 200;
    community add SITE-LOCAL-ROUTE;
    community add ATL01;
    community add NORTH-AMERICA;
    accept;
}

This term sets the local-preference, adds some communities, and accepts the routes that match the prefix-list. Local-preference is a transitive property on iBGP sessions (it will be transferred to the next BGP peer). The correct change would have been to deactivate the term instead of the prefix-list.

By removing the prefix-list condition, the router was instructed to send all its BGP routes to all other backbone routers, with an increased local-preference of 200. Unfortunately at the time, local routes that the edge routers received from our compute nodes had a local-preference of 100. As the higher local-preference wins, all of the traffic meant for local compute nodes went to Atlanta compute nodes instead. With the routes sent out, Atlanta started attracting traffic from across the backbone.

We are making the following changes:

Introduce a maximum-prefix limit on our backbone BGP sessions - this would have shut down the backbone in Atlanta, but our network is built to function properly without a backbone. This change will be deployed on Monday, July 20.
Change the BGP local-preference for local server routes. This change will prevent a single location from attracting other locations’ traffic in a similar manner. This change has been deployed following the incident.

Conclusion

We’ve never experienced an outage on our backbone before, and our team responded quickly to restore service in the affected locations, but this was a very painful period for everyone involved.
We are sorry for the disruption to our customers and to all the users who were unable to access Internet properties while the outage was happening.We’ve already made changes to the backbone configuration to make sure that this cannot happen again, and further changes will resume on Monday.

Serverless Rendering with Cloudflare Workers

Cloudflare’s Workers platform is a powerful tool: a single compute platform for tasks as simple as manipulating requests or as complex as bringing application logic to the network edge. Today I want to show you how to do server-side rendering at the network edge using Workers Sites, Wrangler, HTMLRewriter, and tools from the broader Workers platform. Each page returned to the user will be static HTML, with dynamic content being rendered on our serverless stack upon user request. Cloudflare’s ability to run this across the global network allows pages to be rendered in a distributed fashion, close to the user, with minuscule cold start times for the application logic. Because this is all built into Cloudflare’s edge, we can implement caching logic to significantly reduce load times, support link previews, and maximize SEO rankings, all while allowing the site to feel like a dynamic application.

A Brief History of Web Pages

In the early days of the web, pages were almost entirely static - think raw HTML. As Internet connections, browsers, and hardware matured, so did the content on the web. The world went from static sites to more dynamic content, powered by technologies like CGI, PHP, Flash, CSS, JavaScript, and many more. A common paradigm in those maturing days was Server Side Rendering of web pages. To accomplish this, a user would request a page with some supplied parameters, a server would generate a static web page using those incoming parameters, and return that static HTML back to the user. These web pages were easily cacheable by proxies and other downstream services, an important benefit in the world of slower Internet connection speeds. Time to Interactive (TTI) in this model is usually faster than other rendering methods, as render-blocking JavaScript is avoided.

This paradigm fell out of style as the web standardized and powerful hardware became easier to access.
Time To First Byte (TTFB) is a concern with Server Side Rendering, as this model incurs latency across the Internet plus the latency of rendering pages on the server itself. Client side rendering allowed for a more seamless user experience for dynamic content. As a result of this shift, client applications became larger and larger, and SEO crawlers quickly had to adopt frameworks to be able to emulate the browser logic that runs and renders these client applications. Tied into this is the idea of AJAX requests, allowing content on the single page application to change without the need for a full page reload. Application state is changed by requesting asynchronous updates from the server and allowing the client side application to update state based on the data returned by the server. This was great; it gave us amazingly interactive applications like Google Mail.

While this is a great structure for dynamic applications, rendering on the client side has the side effects of reducing shareability of content via link previews, increasing time to interactive (TTI), and reducing SEO rankings on many search engines.

With Cloudflare’s Workers platform, you can get the benefits of server side rendering with greatly reduced latency concerns. The dynamic web pages in this example are delivered from any one of Cloudflare’s edge nodes, with application logic running upon request from the user. Server side rendering often leads to content that is more easily cacheable by downstream appliances, delivering better SEO rankings and obfuscating application logic from savvy users. You get all the benefits of the old way things were done, with all the speed of the modern web.

Peer With Cloudflare, a Dynamic Web App

Without further ado, let’s dive into building a dynamic web page using the Cloudflare Workers platform! This example leverages Workers Sites, which allows you to serve static web pages from Cloudflare’s Key Value store.
From there, Workers application logic (using HTMLRewriter) transforms that static response based on user input to deliver modified responses with the requested data embedded in the returned web page.

The Peer With Cloudflare application, hosted on peering.rad.workers.dev

PeeringDB is a user-maintained public database of networks, exchanges, facilities, and interconnection on the Internet. The Peer With Cloudflare (PWC) application leverages the PeeringDB API to query live information on facilities and exchange points from multiple ASNs, compares the resulting networks, and lists to the user shared exchanges and facilities. In this example, we’ll also explore using templating languages in conjunction with Cloudflare’s HTMLRewriter.

Generate a Workers Site

We’ll start by generating a Workers site using wrangler.

> wrangler generate --site peering

PWC will be entirely served from index.html, which will be generated in the /public directory. Next, ensure that we only serve index.html, regardless of the path supplied by the user. Modify index.js to serve a single page application, using the serveSinglePageApp method.

import { getAssetFromKV, serveSinglePageApp } from '@cloudflare/kv-asset-handler'

addEventListener('fetch', event => {
  try {
    event.respondWith(handleEvent(event))
  } catch (e) {
    if (DEBUG) {
      return event.respondWith(
        new Response(e.message || e.toString(), {
          status: 500,
        }),
      )
    }
    event.respondWith(new Response('Internal Error', { status: 500 }))
  }
})

async function handleEvent(event) {
  /**
   * You can add custom logic to how we fetch your assets
   * by configuring the function `mapRequestToAsset`.
   * In this case, we serve a single page app from index.html.
   */
  const response = await getAssetFromKV(event, {
    mapRequestToAsset: serveSinglePageApp
  })
  return response
}

Workers Sites will now load up index.html (in the /public directory) regardless of the supplied URL path. This means we can apply the application to any route on the site, and have the same user experience.
We define this in our wrangler.toml under the [site] section.

[site]
bucket = "./public"
entry-point = "./"

Use URL Parameters to Control Application State

The application itself needs a way to store state between requests. There are multiple methods to do so, but in this case URL query parameters are used for two primary reasons:

Users can use browser-based search functionality to quickly look up an ASN and compare it with Cloudflare’s network
State can be stored in a single search parameter for the purposes of this application, and the null state can be handled easily

Modify index.js to read in the asn search parameter:

async function handleEvent(event) {
  const response = await getAssetFromKV(event, {
    mapRequestToAsset: serveSinglePageApp
  })
  const url = new URL(event.request.url) // create a URL object from the request url
  const asn = url.searchParams.get('asn') // get the 'asn' parameter
}

PWC will have three cases to cover with regards to application state:

A) Null state (no ASN is provided). In this case we can simply return the vanilla index.html page
B) ASN is provided and has an entry on PeeringDB’s API
C) ASN is provided but is malformed or has no PeeringDB entry

try {
  if (asn) {
    // B) asn is provided
  } else {
    return response // A) no asn is provided; return index.html
  }
} catch (e) {
  // C) error state
}

To provide the initial state and styling of the PWC application, index.html uses a third-party framework called milligram, chosen for its lightweight nature, which requires normalize.css and the Roboto font family. Also defined is a custom style for basic formatting. For state storage, a form is defined such that upon submission a GET request is sent to #, which is effectively a request to self with the supplied parameters.
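As a standalone sketch of the three-case state handling described above (the hostname is the example deployment's; the regex validity check is purely illustrative - the real application relies on the form's number input and a try/catch around the PeeringDB lookup):

```javascript
// Distinguish the three application states carried by the `asn` query parameter.
function appState(requestUrl) {
  const url = new URL(requestUrl)
  const asn = url.searchParams.get('asn')
  if (asn === null) return 'null'        // A) no asn: serve vanilla index.html
  if (!/^\d+$/.test(asn)) return 'error' // C) malformed asn
  return 'lookup'                        // B) asn present: query PeeringDB
}

console.log(appState('https://peering.rad.workers.dev/'))           // null
console.log(appState('https://peering.rad.workers.dev/?asn=13335')) // lookup
console.log(appState('https://peering.rad.workers.dev/?asn=abc'))   // error
```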
The parameter in this case is named asn and must be a number:

<!doctype html>
<html>
  <head>
    <link rel="stylesheet" href="//,300italic,700,700italic">
    <link rel="stylesheet" href="//">
    <link href="" rel="stylesheet"/>
    <style>
      .centered {
        max-width: 80rem;
      }
    </style>
  </head>
  <body>
    <div id="formContainer" class="centered container">
      <h2 class="title">Peer With Cloudflare</h2>
      <p class="description">Welcome to the peering calculator, built entirely on Cloudflare Workers. Input an ASN below to see where it peers with Cloudflare's network.</p>
      <form action="#" method="GET">
        <fieldset>
          <label for="asnField" class="">ASN</label>
          <input type="number" placeholder="13335" id="asnField" name="asn">
        </fieldset>
      </form>
    </div>
  </body>
</html>

Modelling Data from a Third Party API

The PeeringDB API defines networks primarily with metadata outlining key information about the network and owners, as well as two lists of public peering exchange points and private peering facilities. The PWC application will list any peering points (exchanges and facilities) shared between the user-provided network and Cloudflare’s network in a single table. PWC uses a model-view paradigm to retrieve, store, and display these data from the PeeringDB API. Defined below are the three data models representing a Network, Facility, and Exchange.

To define a network, first inspect a sample response from the PeeringDB API (use for a sample from Cloudflare's network). Some key pieces of information displayed in PWC are the network name, website, notes, exchanges, and facilities. Network begins with a constructor to initialize itself with an Autonomous System Number. This is used for lookup of the network from the PeeringDB API:

export class Network {
  constructor(asn) {
    this.asn = asn
  }

A populate() function is then implemented to fetch information from a third party API and fill in required data.
The populate() method additionally creates instances of NetworkFacility and NetworkExchange objects to be stored as attributes of the Network model.

async populate() {
  const net = await findAsn(this.asn)
  this.id = net['id']
  this.name = net['name']
  this.website = net['website']
  this.notes = net['notes']
  this.exchanges = {}
  for (let i in net['netixlan_set']) {
    const netEx = new NetworkExchange(net['netixlan_set'][i])
    this.exchanges[netEx.id] = netEx
  }
  this.facilities = {}
  for (let i in net['netfac_set']) {
    const netFac = new NetworkFacility(net['netfac_set'][i])
    this.facilities[netFac.id] = netFac
  }
  return this
}

Any Network defined in the PWC application can compare itself to another Network object. This generic approach allows PWC to be extended to arbitrary network comparison in the future. To accomplish this, implement a compare() and compareItems() function to compare both NetworkExchanges and NetworkFacilities.

compareItems(listA, listB, sharedItems) {
  for (let key in listA) {
    if (listB[key]) {
      sharedItems[key] = listA[key]
    }
  }
  return sharedItems
}

async compare(network) {
  const sharedFacilities = this.compareItems(this.facilities, network.facilities, {})
  const sharedExchanges = this.compareItems(this.exchanges, network.exchanges, {})
  return await fetchAdditionalDetails(sharedFacilities, sharedExchanges)
}

Both the NetworkFacility and NetworkExchange models implement a constructor to initialize with supplied data, as well as a populate method to add in extra information.
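As an aside before the facility and exchange models, here is the comparison logic above in isolation as a minimal, runnable sketch (the IDs and names below are made up for illustration):

```javascript
// Keep only the entries whose keys (PeeringDB IDs) appear in both maps.
function compareItems(listA, listB, sharedItems) {
  for (let key in listA) {
    if (listB[key]) {
      sharedItems[key] = listA[key]
    }
  }
  return sharedItems
}

// Hypothetical facility maps keyed by PeeringDB facility ID.
const cfFacilities = { 101: { name: 'Facility A' }, 202: { name: 'Facility B' } }
const otherFacilities = { 202: { name: 'Facility B' }, 303: { name: 'Facility C' } }

const shared = compareItems(cfFacilities, otherFacilities, {})
console.log(Object.keys(shared)) // [ '202' ]
```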
These models also take care of converting PeeringDB API information into more human-readable formats.

export class NetworkFacility {
  constructor(netfac) {
    this.name = netfac['name']
    this.id = netfac['fac_id']
    this.type = 'Facility'
    this.url = `https://www.peeringdb.com/fac/${this.id}`
    this.location = netfac['city'] + ", " + netfac['country']
  }

  populate(details) {
    this.networks = details['net_count']
    this.website = details['website']
  }
}

export class NetworkExchange {
  constructor(netixlan) {
    this.id = netixlan['ix_id']
    this.name = netixlan['name']
    this.type = 'Exchange'
    this.url = `https://www.peeringdb.com/ix/${this.id}`
  }

  populate(details) {
    this.website = details['website']
    this.networks = details['net_count']
    this.location = details['city'] + ", " + details['country']
  }
}

Notice that the compare() and populate() functions call out to fetchAdditionalDetails and findAsn methods; these are implemented to gather additional information for each model. Both methods are implemented in an ‘interface’ under src/utils/.

import {peeringDb} from './constants'

async function fetchPdbData(path) {
  const response = await fetch(new Request(peeringDb['baseUrl'] + path))
  const body = await response.json()
  return body['data']
}

async function fetchAdditionalDetails(facilities, exchanges) {
  const sharedItems = []
  if (Object.keys(facilities).length > 0) {
    const facilityDetails = await fetchPdbData(peeringDb['facEndpoint'] + "?id__in=" + Object.keys(facilities).join(","))
    for (const facility of facilityDetails) {
      facilities[facility['id']].populate(facility)
      sharedItems.push(facilities[facility['id']])
    }
  }
  if (Object.keys(exchanges).length > 0) {
    const exchangeDetails = await fetchPdbData(peeringDb['ixEndpoint'] + "?id__in=" + Object.keys(exchanges).join(","))
    for (const exchange of exchangeDetails) {
      exchanges[exchange['id']].populate(exchange)
      sharedItems.push(exchanges[exchange['id']])
    }
  }
  return sharedItems
}

async function findAsn(asn) {
  const data = await fetchPdbData(peeringDb['netEndpoint'] + "?"
    + `asn__in=${asn}&depth=2`)
  return data[0]
}

export {findAsn, fetchAdditionalDetails}

Presenting Results using HTMLRewriter

In building a single page application with Workers, the PWC application needs the ability to modify HTML responses returned to the user. To accomplish this, PWC uses Cloudflare’s HTMLRewriter interface. HTMLRewriter streams any supplied response through a transformer, applying any supplied transformations to the raw response object. This returns a modified response object that can then be returned to the user.

In the case of PWC, three cases need to be handled, and two of them require some form of transformation before returning index.html to the user. Define a generic AsnHandler to provide to the user their supplied ASN. The element() method in this handler will simply set a value attribute on the target element.

class AsnHandler {
  constructor(asn) {
    this.asn = asn
  }

  element(element) {
    element.setAttribute("value", this.asn)
  }
}

The AsnHandler fills the form field with the user-supplied ASN.

For error cases, PWC needs to provide feedback to the user that the supplied ASN was not found on PeeringDB. In this case a simple header tag is appended to the target element.

class ErrorConditionHandler {
  constructor(asn) {
    this.asn = asn
  }

  element(element) {
    element.append(`<h4>ASN ${this.asn} Not Found on PeeringDB</h4>`, {html: true})
  }
}

The ErrorConditionHandler provides feedback on invalid user-supplied input.

For cases where a result needs to be returned, a NetworkComparisonHandler is implemented. Instead of defining raw HTML in a string format, NetworkComparisonHandler uses a templating language (Handlebars) to provide a dynamic transformation based on data returned from PeeringDB.
First, install both handlebars and handlebars-loader with npm:

> npm install handlebars handlebars-loader

Now define the NetworkComparisonHandler, including an import of the networkTable template.

import networkTable from '../templates/networktable.hbs'

class NetworkComparisonHandler {
  constructor({cfNetwork, otherNetwork, sharedItems}) {
    this.sharedItems = sharedItems
    this.otherNetwork = otherNetwork
    this.cfNetwork = cfNetwork
  }

  element(element) {
    element.append(networkTable(this), { html: true })
  }
}

The Handlebars template itself uses conditional logic to handle cases where there is no direct overlap between the two supplied networks, and a custom helper to provide references to each piece of returned data. Handlebars provides an easy-to-read interface for conditional logic, iteration, and custom views.

{{#if this.sharedItems.length}}
<h4>Shared facilities and exchanges between {{this.cfNetwork.name}} and {{this.otherNetwork.name}}</h4>
<table>
  <thead>
    <tr>
      <th>Name</th>
      <th>Location</th>
      <th>Networks</th>
      <th>Type</th>
    </tr>
  </thead>
  <tbody>
    {{#each this.sharedItems}}
    <tr>
      <td>{{link this.name this.url}}</td>
      <td>{{this.location}}</td>
      <td>{{this.networks}}</td>
      <td>{{this.type}}</td>
    </tr>
    {{/each}}
  </tbody>
</table>
{{else}}
<h4>No shared exchanges or facilities between {{this.cfNetwork.name}} and {{this.otherNetwork.name}}</h4>
{{/if}}

A custom link helper is used to display an <a> tag with a reference to each datum.

import handlebars from 'handlebars'

export default function(text, url) {
  return new handlebars.SafeString("<a href='" + handlebars.escapeExpression(url) + "'>" + handlebars.escapeExpression(text) + "</a>");
}

Great! Handlebars and other templating languages are extremely useful for building complex view logic into Cloudflare’s HTMLRewriter.
To tie Handlebars into our build process, and have wrangler understand the currently foreign code, modify wrangler.toml to use a custom webpack configuration:

type = "webpack"
webpack_config = "webpack.config.js"

In webpack.config.js, configure any .hbs files to be compiled using the handlebars-loader module. Custom webpack configurations can be used in conjunction with Wrangler to create more complex build schemes, including environment-specific schemes.

module.exports = {
  target: 'webworker',
  entry: './index.js',
  module: {
    rules: [{ test: /\.hbs$/, loader: 'handlebars-loader' }],
  }
}

Time to tie it all together in index.js! Handle each case by returning to the user either a raw HTML response or a modified response using HTMLRewriter. The #asnField will be updated, and the #formContainer will be used to present either an error message or a table of results.

async function handleEvent(event) {
  const response = await getAssetFromKV(event, {
    mapRequestToAsset: serveSinglePageApp
  })
  const url = new URL(event.request.url)
  const asn = url.searchParams.get('asn')

  try {
    if (asn) {
      const cfNetwork = await new Network(cloudflare['asn']).populate()
      const otherNetwork = await new Network(asn).populate()
      const sharedItems = await cfNetwork.compare(otherNetwork)
      return await new HTMLRewriter()
        .on('#asnField', new AsnHandler(asn))
        .on('#formContainer', new NetworkComparisonHandler({cfNetwork, otherNetwork, sharedItems}))
        .transform(response)
    } else {
      return response
    }
  } catch (e) {
    return await new HTMLRewriter()
      .on('#asnField', new AsnHandler(asn))
      .on('#formContainer', new ErrorConditionHandler(asn))
      .transform(response)
  }
}

The NetworkComparisonHandler and associated Handlebars template allow PWC to present PeeringDB information in a user-friendly format.

Publish to Cloudflare

You can view the final code on Github, or navigate to peering.rad.workers.dev to see a working example.
The final wrangler.toml includes instructions to publish the code to a workers.dev site, allowing you to easily build, deploy, and test without a domain - simply by setting workers_dev to "true".

name = "peering"
type = "webpack"
webpack_config = "webpack.config.js"
account_id = "<REDACTED>"
workers_dev = true
route = "<REDACTED>"
zone_id = "<REDACTED>"

[site]
bucket = "./public"
entry-point = "./"

Finally, publish your code using wrangler.

> wrangler publish

Cache At The Edge

Taking advantage of our server-rendered content is as simple as matching the request against any previously cached assets. To accomplish this, add a few simple lines to the top of our handleEvent function using Cloudflare’s Cache API. If an asset is found, return the response without going into the application logic.

async function handleEvent(event) {
  let cache = caches.default
  let response = await cache.match(event.request)
  if (response) {
    return response
  }
  response = await getAssetFromKV(event, {
    mapRequestToAsset: serveSinglePageApp
  })

What’s Next?

Using the Workers platform to deploy applications allows users to load lightweight, static HTML, with all application logic residing on the network edge. While there are certainly a host of improvements that could be made to the Peer With Cloudflare application (use of Workers KV, more input validation, or mixing in other APIs to present more interesting information), it should present a compelling introduction to the possibilities of Workers!

Check out Built With Workers for more examples of applications built on the Workers platform, or build your own projects! For more information on peering with Cloudflare, please visit our Peering Portal.

Cloudflare's first year in Lisbon

A year ago I wrote about the opening of Cloudflare’s office in Lisbon; it’s hard to believe that a year has flown by. At the time I wrote:

Lisbon’s combination of a large and growing existing tech ecosystem, attractive immigration policy, political stability, high standard of living, as well as logistical factors like time zone (the same as the UK) and direct flights to San Francisco made it the clear winner.

We landed in Lisbon with a small team of transplants from other Cloudflare offices. Twelve of us moved from the UK, US and Singapore to bootstrap here. Today we are 35 people with another 10 having accepted offers; we’ve almost quadrupled in a year and we intend to keep growing to around 80 by the end of 2020.

If you read back to my description of why we chose Lisbon, only one item hasn’t turned out quite as we expected. Sure enough, TAP Portugal does have direct flights to San Francisco, but the pandemic put an end to all business flying worldwide for Cloudflare. We all look forward to getting back to being able to visit our colleagues in other locations.

The pandemic also put us in the odd position of needing to move from one empty office to another. Back in January the Cloudflare Lisbon office was in the Chiado and only had capacity for about 14 people. With our rapid growth we moved, in February, to a larger, temporary location on Avenida da Liberdade which had room for about 25 people.

Leaving the Chiado

And in early April, we moved to our longer term office on Praça Marquês de Pombal. Of course, by that time the State of Emergency had been declared in Portugal and the office move took place in our absence. But it sits waiting for our return sometime in early 2021.

The team that landed in Lisbon covered Customer Support, Security, IT, Technology, and Emerging Technology and Incubation, but, as we suspected, we’ve grown in many other departments and the rest of Cloudflare is realizing how much Lisbon and Portugal have to offer.
In addition to the original team we now have people in SRE, Payroll, Accounting, Trust and Safety, People and Places, Product Management and Infrastructure.

View from the Cloudflare Lisbon office

Despite the pandemic we’re continuing to invest in Lisbon, with 24 open roles in Customer Support, Infrastructure, People and Places, Engineering, Accounting and Finance, Security, Business Intelligence, Product Management and Emerging Technology and Incubation.

As I said in an interview with AICEP earlier this year, “É nosso objetivo construir em Lisboa um dos maiores escritórios da Cloudflare” (“It’s our objective to build in Lisbon one of the major Cloudflare offices”). You can read the full Portuguese-language interview here. We continue to believe that Lisbon is a vital part of Cloudflare’s growth.

I’ve spent a huge amount of my career on aircraft and the last few months have felt very odd, but I couldn’t have been happier to find myself temporarily stuck in Lisbon. No doubt we’ll all be traveling again, but this last year has confirmed my impression that Lisbon is a great place to live.

I asked our team what they’d found they love about living in Lisbon and Portugal. They came back with: pasteis de nata, sunshine every day, the jacaranda trees, feijoada, empada de galinha, Joker, Super Bock, chocolate mousse being an everyday staple, Maria biscuits, quality fresh produce, dolphins, lizards in the gardens, MB Way, ovos moles de Aveiro, lovely beaches like those in Setubal, Sintra, Cascais and Sesimbra only 30-40 minutes away, bica, sardines, the Alentejo coastline, the chicken from Bonjardim, family friendliness and how nice it is to raise children here, fast, reliable and cheap Internet access, and so much more.

If you’d like to join us, please visit our careers page for Lisbon.

