Corporate Blogs

AWS Ground Station – Ready to Ingest & Process Satellite Data

Amazon Web Services Blog -

Last fall I told you about AWS Ground Station and gave you a sneak preview of the steps that you would take to downlink data from a satellite. I am happy to report that the first two ground stations are now in operation, and that you can start using AWS Ground Station today.

Using AWS Ground Station

As I noted at the time, the first step is to Add satellites to your AWS account by sharing the satellite's NORAD ID and other information with us. The on-boarding process generally takes a couple of days. For testing purposes, the Ground Station team added three satellites to my account:

Terra (NORAD ID 25994) – This satellite was launched in 1999 and orbits at an altitude of 705 km. It carries five sensors that are designed to study the Earth's surface.

Aqua (NORAD ID 27424) – This satellite was launched in 2002 and also orbits at an altitude of 705 km. It carries six sensors that are designed to study surface water.

NOAA-20 (NORAD ID 43013) – This satellite was launched in 2017 and orbits at an altitude of 825 km. It carries five sensors that observe both land and water.

While the on-boarding process is under way, the next step is to choose the ground station that you will use to receive your data. This depends on the path your satellite takes as it orbits the Earth and the time at which you want to receive data. Our first two ground stations are located in Oregon and Ohio, with other locations under construction. Each ground station is associated with an adjacent AWS region, and you need to set up your AWS infrastructure in that region ahead of time. I'm going to use the US East (Ohio) Region for this blog post.

Following the directions in the AWS Ground Station User Guide, I use a CloudFormation template to set up my infrastructure within my VPC. The stack includes an EC2 instance, three Elastic Network Interfaces (ENIs), and the necessary IAM roles, EC2 security groups, and so forth. The EC2 instance hosts Kratos DataDefender (a lossless UDP transport mechanism). I can also use the instance to host the code that processes the incoming data stream. DataDefender makes the incoming data stream available on a Unix domain socket at port 55892. My code is responsible for reading the raw data, splitting it into packets, and then processing each packet.

You can also create one or more Mission Profiles. Each profile outlines the timing requirements for a contact, lists the resources needed for the contact, and defines how data flows during the contact. You can use the same Mission Profile for multiple satellites, and you can also use different profiles (as part of distinct contacts) for the same satellite.

Scheduling a Contact

With my satellite configured and my AWS infrastructure in place, I am ready to schedule a contact! I open the Ground Station Console, make sure that I am in the AWS Region that corresponds to the ground station that I want to use, and click Contacts. I review the list of upcoming contacts, select the desired one (if you are not accustomed to thinking in Zulu time, a World Clock / Converter is helpful), and click Reserve contact. Then I confirm my intent by clicking Reserve. The status of the connection goes to SCHEDULING and then to SCHEDULED, all within a minute or so.

The next step is to wait for the satellite to come within range of the chosen ground station.
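While I wait, it's worth sanity-checking the packet-processing code on the instance. As an illustration only, a sketch along these lines could read the stream and fan packets out to Kinesis. It assumes the stream is reachable as a local TCP listener on port 55892 and carries CCSDS space packets; the stream name, partition key, and framing details here are assumptions, not the exact DataDefender setup:

```python
import socket

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-2")  # US East (Ohio)

def packets(host="127.0.0.1", port=55892):
    # Read the raw downlink byte stream and split it into CCSDS space
    # packets: a 6-byte primary header whose bytes 4-5 hold the packet
    # data length minus one.
    sock = socket.create_connection((host, port))
    buf = b""
    while chunk := sock.recv(65536):
        buf += chunk
        while len(buf) >= 6:
            total = 6 + int.from_bytes(buf[4:6], "big") + 1
            if len(buf) < total:
                break  # wait for the rest of this packet
            packet, buf = buf[:total], buf[total:]
            yield packet

for pkt in packets():
    # Immediate low-level processing would happen here; then hand each
    # packet off downstream (hypothetical stream name and partition key).
    kinesis.put_record(StreamName="downlink", Data=pkt, PartitionKey="terra")
```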
During this time, I can connect to the EC2 instance in two ways:

SSH – I can SSH to the instance's IP address, verify that my code is in place and ready to run, and confirm that DataDefender is running.

Web – I can open up a web browser and see the DataDefender web interface.

One thing to note: you may need to edit the security group attached to the instance in order to allow it to be accessed from outside of the VPC.

3-2-1 Contact!

Now I need to wait for Terra to come within range of the ground station that I selected. While not necessary, it can be fun (and educational) to use a real-time satellite tracker such as the one at n2yo.com. When my satellite comes into range, DataDefender shows me that the data transfer is under way (at an impressive 781 Mbps), as indicated by the increased WAN Data Rate.

As I noted earlier, the incoming data stream is available within the instance in real time on a Unix domain socket. After my code takes care of all immediate, low-level processing, it can route the data to Amazon Kinesis Data Streams for real-time processing, store it in Amazon S3 for safe-keeping or further analysis, and so forth.

Customer Perspective – Spire

While I was writing this blog post I spoke with Robert Sproles, a Program Manager with AWS customer Spire, to learn about their adoption of Ground Station. Spire provides data & analytics from orbit, and runs the space program behind it. They design and build their own cubesats in-house, and currently have about 70 in orbit. Collectively, the satellites have more sensors than any of Spire's competitors, and collect maritime, aviation, and weather data.

Although Spire already operates a network of 30 ground stations, they were among the first to see the value of (and to start using) AWS Ground Station. In addition to being able to shift from a CapEx (capital expense) to an OpEx (operating expense) model, Ground Station gives them the ability to collect fresh data more quickly, with the goal of making it available to their customers even more rapidly.

Spire's customers are wide-ranging and global, but can all benefit from rapid access to high-quality data. Their LEMUR (Low Earth Multi-Use Repeater) satellites go around the globe every 90 minutes, but this is a relatively long time when the data is related to aviation or weather. Robert told me that they can counter this by adding additional satellites in the same orbit or by making use of additional ground stations, all with the goal of reducing latency and delivering the freshest possible data.

Spire applies machine learning to the raw data, with the goal of going from a "lump of data" to actionable insights. For example, they use ML to make predictions about the future positions of cargo ships, using a combination of weather and historical data. The predicted ship positions can be used to schedule dock slots and other scarce resources ahead of time.

Now Available

You can get started with AWS Ground Station today. We have two ground stations in operation, with ten more in the works and planned for later this year.

— Jeff;

Three Tools That Test WordPress Themes For Code Quality and Accessibility

Nexcess Blog -

WordPress contributor teams recently released Theme Sniffer and WP Theme Auditor, tools that help developers create themes that adhere to coding and accessibility best practices. There are thousands of free WordPress themes and thousands more premium themes. Some are excellent, and some are terrible, but most are somewhere in between on the quality scale. Installing…

What Is a Domain Name Registrar?

HostGator Blog -

Every website you visit online has a domain name, which means that every website owner went through the process of buying and registering that domain name. It's one of the first necessary steps involved in starting a new website, along with getting web hosting and building out your site. And it's a step that requires working with a domain registrar.

What Is a Domain Registrar?

A domain registrar, sometimes called a DNS registrar (short for Domain Name System), is a business that sells domain names and handles the business of registering them. Domain names are the main address a website uses on the web—they're the thing that usually starts with www and most often ends with .com.

While technically, computers identify websites with a different sort of address—an IP address that's a long string of numbers separated by periods (e.g. 111.111.111.111)—humans wouldn't be much good at remembering and using that kind of address. So for us, websites also have an address made up of alphanumeric characters that usually spell out a word or brand name.

And there's a specific process behind how people claim domain names. There are registries that manage the different top-level domains. The registries are large, centralized databases with information about which domain names have been claimed and by whom. But the registries don't sell the names directly; they delegate that job to DNS registrars. Registrars must be accredited by the Internet Corporation for Assigned Names and Numbers (ICANN). Then, each time they sell a domain to a customer, they're expected to register it with the appropriate registry by updating the record with the customer's information.

Domain Registration FAQs

For the most part, this process happens behind the scenes for website owners. Part of the service a good domain name registrar provides is making the process of finding, buying, and managing a domain (or multiple) simple and intuitive. You don't have to know how the sausage is made, but if you're curious to learn more, we've got the answers to the most common questions about domain name registrars.

What is the role of a domain name registrar?

The domain name registrar handles the process of updating the registry when a customer purchases a new domain name. As part of that, they keep track of which domain names are available and typically provide customers with an intuitive search tool to find out what options they have. They handle the financial transaction with the customer, and provide the tools needed to maintain the domain name subscription over time.

You can't buy a domain name outright; you can only rent it for up to ten years at a time. DNS registrars usually provide the option of annual renewals or multi-year subscriptions, sometimes offering a discount for registering the name for a longer period upfront. Domain registrars will often provide a user account where you can keep up with your domain registration status, and features like automatic renewal or email reminders.

What is a domain registrant?

That's you! Well, assuming that you, the person reading this, are planning to buy a domain name or already have one. Once you take the step of selecting and purchasing a domain name from a domain registrar, you become the domain registrant. And the title will continue to apply for as long as you keep up your domain subscription. In most contexts, though, people are more likely to call a "domain registrant" a domain owner, or a website owner once their site is up.

What is a domain registry?

A domain registry is the database that includes all the information about a specific top-level domain (TLD). The term is also sometimes used to refer to the organization that manages the database, as well as the database itself. Domain registries have relationships with domain registrars, who submit domain name registration requests and record updates to them on behalf of customers. One of the biggest examples of a domain registry is Verisign, which manages the databases for several of the most common TLDs, including .com, .net, .gov, and .edu.

What is private domain name registration?

Part of the domain registration process includes providing the registrant's information to the database of domain owners. In addition to the domain registries, the WHOIS directory tracks information on every website domain that's registered, who owns it, and their main contact information. That's because someone needs to be able to identify website owners who use their site for illegal purposes.

But in our age of high-profile data breaches and growing concern around internet privacy issues, not every website owner wants to put their name and contact information out on the open web. And it shouldn't be a requirement for running a website. Thanks to the private domain name registration options now offered by many DNS registrars, it's not. Domain registrars usually charge a little more to shield you from having your own name and information included in the directory. They provide enough contact information to the WHOIS to keep you on the right side of the law, typically an email address associated with the registrant's company, and keep the rest of it private.

What is a domain name server?

We talked earlier about how computers don't use domain names to recognize website addresses; they use IP addresses. Domain name servers are the technology that translates between the two. The domain name system is the protocol established to ensure machines exchange the right data so that the average internet viewer sees the correct webpage when they type a domain name into their browser or click on a link. Domain name servers play an important role in that system, storing all the information required to connect a specific domain name address to the correct IP address. Each time a computer queries a domain name server for a particular domain name, it finds the appropriate IP address to serve up.

How do I register a new domain name?

Now that we've covered much of the back-end technical stuff, you're probably wondering how this all translates into what you, a would-be website owner, need to do to get the domain name that you want for your site. Luckily, the process for you is pretty easy. Start by finding a domain registrar you want to work with (more on how to do that in a bit). Most of them make it easy to search for available names, see the different top-level domain options you can consider, and go through the purchasing process. Provide your name, contact, and payment information through a secure form on the registrar's website, and you should be set!

How do I find an available domain name?

This part can be trickier. With billions of websites already out there, all of them with a unique domain name, a lot of your options are already taken.
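One quick way to check whether a name is already claimed is to ask the registry's WHOIS service directly. Here's a minimal sketch that queries Verisign's public WHOIS server for .com over TCP port 43; the response handling is simplified, and your registrar's search tool does all of this for you:

```python
import socket

def whois(domain, server="whois.verisign-grs.com"):
    # WHOIS is a plain-text request/response protocol on TCP port 43.
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall(f"{domain}\r\n".encode())
        reply = b""
        while chunk := sock.recv(4096):
            reply += chunk
    return reply.decode(errors="replace")

reply = whois("example.com")
# For .com, a reply containing "No match for" generally means the
# name is still unregistered.
print("available" if "No match for" in reply else "taken")
```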
Finding an available domain name that's easy to remember and describes what your website does can take some work and creativity. Expect to spend some time using your domain registrar's domain name search tool. Try out different variations on the names you have in mind. Consider synonyms and creative spellings. While a .com is usually the easiest option for visitors to remember, consider whether you're willing to go with another top-level domain like .website or .biz. The TLDs that aren't as common will have more domain name options available.

What is a top-level domain?

A top-level domain is the last part of the domain that follows a period, such as .com or .net. ICANN controls which TLDs are available, and used to be pretty strict about opening up new ones. Early on, most specialty TLDs related to a specific industry, type of website, or geographic location. For example, .com was for commercial businesses, .gov for government websites, and .org for nonprofit websites. But as the internet has grown, the need for more available domain names has led ICANN to lift the restrictions on how many TLDs are available, and who can use different ones. As such, when you do a domain name search on your chosen registrar's website, you'll see an array of TLD options at different price points. If the name you want isn't available as a .com, you may be able to get it cheaper at a .us or .site TLD address.

How does domain name transfer work?

When you choose a domain registrar to purchase your domain name with, you don't have to make a long-term commitment to working with them. You have the option of switching to a different registrar down the line, although you have to wait at least 60 days, due to an ICANN policy designed to reduce domain hijacking. If you're past that 60-day point, you can transfer your domain name to a new provider by unlocking your domain name at your current registrar, disabling any other privacy protections such as WHOIS domain name privacy, and obtaining a domain authorization code from your current registrar. Once that's done, follow the domain transfer steps provided by the new registrar you're switching to. For HostGator, you can start the domain name transfer process here.

What to Look for in a Domain Registrar

Now that you know the ins and outs of what a domain registrar is and how domain registration works, you're probably ready to find a good domain registrar and get started. You have a lot of different options. Some companies only provide domain registration services. Others, like HostGator, offer domain registration along with other services like web hosting, so you can take care of multiple basic website needs under one account. With so many options to choose from, you need to know what to look for. Here are some of the most important factors to consider.

1. Pricing

Some of the cost of registering a new domain name is related to the name you choose. In particular, different top-level domains come at different prices. But you'll also see some variety in what different companies charge. When considering the pricing of different domain registrars, there are a couple of important things to keep in mind. First, the prices advertised are generally for a one-year period, but you should check to be sure. A domain name isn't a one-time purchase; you have to plan on continuing to pay for as long as you keep your website. You want to make sure you're comparing apples to apples, and not putting one company's one-year price against the price another advertises for a longer period. Also, it's fairly normal for companies to advertise an introductory price for year one that goes up in the second year. Don't just consider what you're paying right now; think about what you can afford on an ongoing basis. And as with most things, sometimes a cheaper price means you pay in other ways, such as weaker customer service or a worse customer experience. Don't jump at the first low price you see without researching the company to find out if they're cheap for a reason.

2. Reputation

While domain name management doesn't involve that much interaction with the company, you still want to choose a domain registrar that will be easy to work with and reliable. Spend some time reading customer reviews and doing general research on the company. Are they well known as a legitimate domain registrar? Do they have a reputation for solid customer service? Do people find the registration and renewal processes intuitive? Your domain name is an important part of running your website and maintaining it over time. You can always transfer your domain later, but you'll be better off picking the right DNS registrar from day one.

3. Extras

Most domain name registrars provide services beyond just domain name registration. It's very common for registrars to also be web hosting providers, and bundling the two services can make each easier to manage. Other good add-ons to look for are:

- Domain name privacy, which helps you avoid spam and any risk that comes with making your personal information more public.
- Auto-renewals, which let you put the renewal process on autopilot so you don't have to worry about forgetting or doing any extra work to keep your domain name registration up to date.
- Email addresses that you can set up for yourself and people in your organization at the domain, making your communications look more official.
- A multi-year purchase option, so you can secure your domain name for longer without worrying about renewal.

If any of these are features you know you want, find a domain registrar that provides them.

Register Your Domain Today

As you know by now, HostGator is a domain name registrar that provides an intuitive domain name search function and an easy registration process. We offer domain name privacy, automatic renewals, and the option to buy your domain for up to three years at a time. And on top of all that, we're one of the most respected web hosting providers in the industry. If you want the convenience of managing your web hosting and domain name registration in one place, you can count on HostGator to be a reliable option for both. If you're ready to move forward and buy a new domain name, get started searching.

Agency Spotlight Series: Power Digital Marketing

WP Engine -

A key part of our business at WP Engine is the partnerships we've built with digital agencies. With emerging technologies and trends, increasing competitiveness, and the pressure to deliver memorable digital experiences, agencies have enough to worry about. WP Engine allows agencies to focus on creation and execution instead of worrying about performance and security.…

NGINX structural enhancements for HTTP/2 performance

CloudFlare Blog -

Introduction

My team, the Cloudflare PROTOCOLS team, is responsible for termination of HTTP traffic at the edge of the Cloudflare network. We deal with features related to TCP, QUIC, TLS and Secure Certificate management, and HTTP/1 and HTTP/2. Over Q1, we were responsible for implementing the Enhanced HTTP/2 Prioritization product that Cloudflare announced during Speed Week.

This is a very exciting project to be part of, and doubly exciting to see the results of, but during the course of the project, we had a number of interesting realisations about NGINX: the HTTP-oriented server onto which Cloudflare currently deploys its software infrastructure. We quickly became certain that our Enhanced HTTP/2 Prioritization project could not achieve even moderate success if the internal workings of NGINX were not changed.

Due to these realisations we embarked upon a number of significant changes to the internal structure of NGINX, in parallel to the work on the core prioritization product. This blog post describes the motivation behind the structural changes, how we approached them, and what impact they had. We also identify additional changes that we plan to add to our roadmap, which we hope will improve performance further.

Background

Enhanced HTTP/2 Prioritization aims to do one thing to web traffic flowing between a client and a server: it provides a means to shape the many HTTP/2 streams as they flow from upstream (server or origin side) into a single HTTP/2 connection that flows downstream (client side).

Enhanced HTTP/2 Prioritization allows site owners and the Cloudflare edge systems to dictate the rules about how various objects should combine into the single HTTP/2 connection: whether a particular object should have priority and dominate that connection and reach the client as soon as possible, or whether a group of objects should evenly share the capacity of the connection and put more emphasis on parallelism.

As a result, Enhanced HTTP/2 Prioritization allows site owners to tackle two problems that exist between a client and a server: how to control the precedence and ordering of objects, and how to make the best use of a limited connection resource, which may be constrained by a number of factors such as bandwidth, volume of traffic and CPU workload at the various stages on the path of the connection.

What did we see?

The key to prioritisation is being able to compare two or more HTTP/2 streams in order to determine which one's frame is to go down the pipe next. The Enhanced HTTP/2 Prioritization project necessarily drew us into the core NGINX codebase, as our intention was to fundamentally alter the way that NGINX compared and queued HTTP/2 data frames as they were written back to the client.

Very early in the analysis phase, as we rummaged through the NGINX internals to survey the site of our proposed features, we noticed a number of shortcomings in the structure of NGINX itself, in particular: how it moved data from upstream (server side) to downstream (client side) and how it temporarily stored (buffered) that data in its various internal stages. The main conclusion of our early analysis was that NGINX largely failed to give the stream data frames any 'proximity'. Either streams were processed in the NGINX HTTP/2 layer in isolated succession, or frames of different streams spent very little time in the same place: a shared queue, for example. The net effect was a reduction in the opportunities for useful comparison.

We coined a new, barely scientific but useful measurement: Potential, to describe how effectively the Enhanced HTTP/2 Prioritization strategies (or even the default NGINX prioritization) can be applied to queued data streams. Potential is not so much a measurement of the effectiveness of prioritization per se (that metric would be left for later in the project); it is more a measurement of the levels of participation during the application of the algorithm. In simple terms, it considers the number of streams, and frames thereof, that are included in an iteration of prioritization, with more streams and more frames leading to higher Potential.

What we could see from early on was that, by default, NGINX displayed low Potential, rendering prioritization instructions from either the browser, as is the case in the traditional HTTP/2 prioritization model, or from our Enhanced HTTP/2 Prioritization product, fairly useless.

What did we do?

With the goal of improving the specific problems related to Potential, and also improving the general throughput of the system, we identified some key pain points in NGINX. These points, which are described below, have either been worked on and improved as part of our initial release of Enhanced HTTP/2 Prioritization, or have now branched out into meaningful projects of their own that we will put engineering effort into over the course of the next few months.

HTTP/2 frame write queue reclamation

Write queue reclamation was successfully shipped with our release of Enhanced HTTP/2 Prioritization, and ironically it wasn't a change made to the original NGINX; it was in fact a change made against our Enhanced HTTP/2 Prioritization implementation when we were part way through the project. It serves as a good example of something one may call conservation of data, which is a good way to increase Potential.

Similar to the original NGINX, our Enhanced HTTP/2 Prioritization algorithm will place a cohort of HTTP/2 data frames into a write queue as a result of an iteration of the prioritization strategies being applied to them. The contents of the write queue are destined to be written to the downstream TLS layer. Also similar to the original NGINX, the write queue may only be partially written to the TLS layer, due to back-pressure from the network connection that has temporarily reached write capacity.

Early on in our project, if the write queue was only partially written to the TLS layer, we would simply leave the frames in the write queue until the backlog was cleared, then we would re-attempt to write that data to the network in a future write iteration, just like the original NGINX.

The original NGINX takes this approach because the write queue is the only place that waiting data frames are stored. However, in our NGINX modified for Enhanced HTTP/2 Prioritization, we have a unique structure that the original NGINX lacks: per-stream data frame queues where we temporarily store data frames before our prioritization algorithms are applied to them. We came to the realisation that in the event of a partial write, we were able to restore the unwritten frames back into their per-stream queues.
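In pseudocode terms (an illustrative sketch with invented names, not Cloudflare's actual implementation), reclamation after a partial write looks something like this:

```python
from collections import deque

stream_queues = {}  # stream id -> deque of waiting data frames

def prioritise(streams):
    # Stand-in policy: round-robin one frame per stream per pass; the
    # real code applies the prioritization strategies instead.
    write_queue = deque()
    while any(stream_queues[s] for s in streams):
        for s in streams:
            if stream_queues[s]:
                write_queue.append((s, stream_queues[s].popleft()))
    return write_queue

def flush(write_queue, tls_capacity):
    # Write as many frames as the TLS layer will take right now.
    written = 0
    while write_queue and written < tls_capacity:
        write_queue.popleft()       # frame handed to the TLS layer
        written += 1
    # Reclamation: frames the TLS layer could not take go back to the
    # front of their per-stream queues, preserving order, so they can
    # be re-compared against any frames that arrive later.
    while write_queue:
        stream_id, frame = write_queue.pop()
        stream_queues[stream_id].appendleft(frame)
```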
If a subsequent data cohort arrives behind the partially unwritten one, the previously unwritten frames can participate in an additional round of prioritization comparisons, thus raising the Potential of our algorithms.

We were very pleased to ship Enhanced HTTP/2 Prioritization with the reclamation feature included, as this single enhancement greatly increased Potential and made up for the fact that we had to withhold the next enhancement for Speed Week due to its delicacy.

HTTP/2 frame write event re-ordering

In Cloudflare infrastructure, we map the many streams of a single HTTP/2 connection from the eyeball to multiple HTTP/1.1 connections to the upstream Cloudflare control plane. As a note: it may seem counterintuitive that we downgrade protocols like this, and it may seem doubly counterintuitive when I reveal that we also disable HTTP keepalive on these upstream connections, resulting in only one transaction per connection. However, this arrangement offers a number of advantages, particularly in the form of improved CPU workload distribution.

When NGINX monitors its upstream HTTP/1.1 connections for read activity, it may detect readability on many of those connections and process them all in a batch. However, within that batch, each of the upstream connections is processed sequentially, one at a time, from start to finish: from HTTP/1.1 connection read, to framing in the HTTP/2 stream, to HTTP/2 connection write to the TLS layer.

By committing each stream's frames to the TLS layer one stream at a time, many frames may pass entirely through the NGINX system before backpressure on the downstream connection allows the queue of frames to build up, providing an opportunity for these frames to be in proximity and allowing prioritization logic to be applied. This negatively impacts Potential and reduces the effectiveness of prioritization.

The Cloudflare Enhanced HTTP/2 Prioritization modified NGINX aims to re-arrange this internal workflow: although we continue to frame upstream data into HTTP/2 data frames in separate iterations for each upstream connection, we no longer commit these frames to a single write queue within each iteration; instead we arrange the frames into the per-stream queues described earlier. We then post a single event to the end of the per-connection iterations, and perform the prioritization, queuing and writing of the HTTP/2 data frames of all streams in that single event.

This single event finds the cohort of data conveniently stored in their respective per-stream queues, all in close proximity, which greatly increases the Potential of the Edge Prioritization algorithms.

In a form closer to actual code, the core of this modification changes this:

```c
ngx_http_v2_process_data(ngx_http_v2_connection *h2_conn,
                         ngx_http_v2_stream *h2_stream,
                         ngx_buffer *buffer)
{
    /* Frame the upstream bytes for this one stream... */
    while ( ! ngx_buffer_empty(buffer) ) {
        ngx_http_v2_frame_data(h2_conn, h2_stream->frames, buffer);
    }

    /* ...then prioritise and write immediately, stream by stream. */
    ngx_http_v2_prioritise(h2_conn->queue, h2_stream->frames);
    ngx_http_v2_write_queue(h2_conn->queue);
}
```

to this:

```c
ngx_http_v2_process_data(ngx_http_v2_connection *h2_conn,
                         ngx_http_v2_stream *h2_stream,
                         ngx_buffer *buffer)
{
    /* Frame the upstream bytes for this one stream... */
    while ( ! ngx_buffer_empty(buffer) ) {
        ngx_http_v2_frame_data(h2_conn, h2_stream->frames, buffer);
    }

    /* ...but defer prioritisation and writing to a single
       per-connection event that sees all active streams at once. */
    ngx_list_add(h2_conn->active_streams, h2_stream);
    ngx_call_once_async(ngx_http_v2_write_streams, h2_conn);
}

ngx_http_v2_write_streams(ngx_http_v2_connection *h2_conn)
{
    ngx_http_v2_stream *h2_stream;

    while ( ! ngx_list_empty(h2_conn->active_streams) ) {
        h2_stream = ngx_list_pop(h2_conn->active_streams);
        ngx_http_v2_prioritise(h2_conn->queue, h2_stream->frames);
    }

    ngx_http_v2_write_queue(h2_conn->queue);
}
```

There is a high level of risk in this modification, for even though it is remarkably small, we are taking the well-established and debugged event flow in NGINX and switching it around to a significant degree. Like taking a number of Jenga pieces out of the tower and placing them in another location, we risk race conditions, event misfires and event blackholes leading to lockups during transaction processing. Because of this level of risk, we did not release this change in its entirety during Speed Week, but we will continue to test and refine it for future release.

Upstream buffer partial re-use

NGINX has an internal buffer region to store connection data it reads from upstream. To begin with, the entirety of this buffer is Ready for use. When data is read from upstream into the Ready buffer, the part of the buffer that holds the data is passed to the downstream HTTP/2 layer. Since HTTP/2 takes responsibility for that data, that portion of the buffer is marked as Busy, and it will remain Busy for as long as it takes the HTTP/2 layer to write the data into the TLS layer, which is a process that may take some time (in computer terms!).

During this gulf of time, the upstream layer may continue to read more data into the remaining Ready sections of the buffer and continue to pass that incremental data to the HTTP/2 layer until there are no Ready sections available. When Busy data is finally finished in the HTTP/2 layer, the buffer space that contained that data is then marked as Free.

You may ask: when the leading part of the upstream buffer is marked as Free, even though the trailing part of the upstream buffer is still Busy, can the Free part be re-used for reading more data from upstream? The answer to that question is: NO. Because just a small part of the buffer is still Busy, NGINX will refuse to allow any of the entire buffer space to be re-used for reads. Only when the entirety of the buffer is Free can the buffer be returned to the Ready state and used for another iteration of upstream reads. So in summary, data can be read from upstream into Ready space at the tail of the buffer, but not into Free space at the head of the buffer.

This is a shortcoming in NGINX and is clearly undesirable, as it interrupts the flow of data into the system. We asked: what if we could cycle through this buffer region and re-use parts at the head as they became Free? We seek to answer that question in the near future by testing a cyclic, ring-like buffering model in NGINX.
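As a rough sketch of the idea (invented names, and tracking only capacity accounting rather than real bytes), the cyclic model would let the head of the buffer wrap around like this:

```python
class RingBuffer:
    # Hypothetical model of the proposed cyclic upstream buffer: Free
    # space at the head becomes writable again even while the tail of
    # the region is still Busy in the HTTP/2 layer.
    def __init__(self, size):
        self.size = size
        self.head = 0   # oldest byte still owned by the HTTP/2 layer
        self.tail = 0   # where the next upstream read lands
        self.used = 0   # bytes currently Busy

    def writable(self):
        return self.size - self.used  # Free plus never-used capacity

    def read_from_upstream(self, n):
        assert n <= self.writable()
        self.tail = (self.tail + n) % self.size  # may wrap past the end
        self.used += n

    def release(self, n):
        # The HTTP/2 layer finished the n leading bytes: mark them Free
        # and immediately re-usable, instead of waiting for the whole
        # buffer to drain as NGINX does today.
        self.head = (self.head + n) % self.size
        self.used -= n
```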
TLS layer Buffering

On a number of occasions above, I have mentioned the TLS layer and how the HTTP/2 layer writes data into it. In the OSI network model, TLS sits just below the protocol (HTTP/2) layer, and in many consciously designed networking software systems such as NGINX, the software interfaces are separated in a way that mimics this layering.

The NGINX HTTP/2 layer will collect the current cohort of data frames, place them in priority order into an output queue, then submit this queue to the TLS layer. The TLS layer makes use of a per-connection buffer to collect HTTP/2 layer data before performing the actual cryptographic transformations on that data.

The purpose of the buffer is to give the TLS layer a more meaningful quantity of data to encrypt, for if the buffer were too small, or the TLS layer simply relied on the units of data from the HTTP/2 layer, then the overhead of encrypting and transmitting the multitude of small blocks might negatively impact system throughput.

If the TLS buffer is too big, then an excessive amount of HTTP/2 data will be committed to encryption, and if it fails to write to the network due to backpressure, it will be locked into the TLS layer and unavailable to return to the HTTP/2 layer for the reclamation process, thus reducing the effectiveness of reclamation.

In the coming months, we will embark on a process to find the 'goldilocks' spot for TLS buffering: to size the TLS buffer so it is big enough to maintain efficiency of encryption and network writes, but not so big as to reduce the responsiveness to incomplete network writes and the efficiency of reclamation.

Thank you - Next!

The Enhanced HTTP/2 Prioritization project has the lofty goal of fundamentally re-shaping how we send traffic from the Cloudflare edge to clients, and as the results of our testing and feedback from some of our customers show, we have certainly achieved that! However, one of the most important lessons we took away from the project was the critical role that the internal data flow within our NGINX software infrastructure plays in the traffic observed by our end users. We found that changing a few lines of (albeit critical) code could have significant impacts on the effectiveness and performance of our prioritization algorithms. Another positive outcome is that, in addition to improving HTTP/2, we are looking forward to carrying our newfound skills and lessons learned over to HTTP/3 over QUIC.

We are eager to share our modifications to NGINX with the community, so we have opened this ticket, through which we will discuss upstreaming the event re-ordering change and the buffer partial re-use change with the NGINX team.

As Cloudflare continues to grow, our requirements on our software infrastructure also shift. Cloudflare has already moved beyond proxying of HTTP/1 over TCP to support termination and Layer 3 and 4 protection for any UDP and TCP traffic. Now we are moving on to other technologies and protocols such as QUIC and HTTP/3, and full proxying of a wide range of other protocols such as messaging and streaming media.

For these endeavours we are looking at new ways to answer questions on topics such as scalability, localised performance, wide-scale performance, introspection and debuggability, release agility, and maintainability.

If you would like to help us answer these questions, and know a bit about hardware and software scalability, network programming, asynchronous event and futures-based software design, TCP, TLS, QUIC, HTTP, RPC protocols, Rust, or maybe something else, then have a look here.

4 Free or Inexpensive Resources to Help You Start Your Online Business

HostGator Blog -

There's a lot to learn before (and after) you start your own business, and if you don't have a business degree or previous experience running an online business, your exciting plans can feel a bit overwhelming. So can sorting through all the advice and information out there for new and would-be business owners. To help you get off to a strong start on a small budget, here are some reliable free and low-cost resources to help you plan, launch, and grow your new business.

1. Mentoring from Experienced Professionals

Want answers to specific business questions or insights from someone who's been there and done that? SCORE is a nonprofit supported by the US Small Business Administration that provides free, confidential mentoring for entrepreneurs in person, online, and by phone. With more than 10,000 volunteers providing advice nationwide, the odds are good that you can connect with someone in your niche. You can enter your location on SCORE's Find a Mentor page to see all the SCORE volunteer mentors near you, search for mentors by industry or keyword, and find the closest SCORE office. The SCORE website also has a resource library full of blog posts, webinars, podcasts, videos, and templates on thousands of topics. Some of the webinars charge a small fee, but most of the resources are free.

2. Courses to Build Your Business Skills

Khan Academy has a group of videos in its Careers section that feature different small business owners and freelancers talking about what they do, how much they earn, how they work, and how they got started. The range of careers covered is relatively small, but even if your niche isn't included, there's good advice on running a business in several of the presentations, and you can get an idea of all the tasks that go into being your own boss.

If you're ready to tackle business topics at the college level, check out OpenCourseWare from the Massachusetts Institute of Technology. The site provides free access to the materials for most of MIT's undergraduate and graduate-level courses. You can search by academic department for classes on accounting, marketing, and other business topics. Or you can explore OpenCourseWare's Entrepreneurship portal, which includes dozens of classes covering planning, pricing, finance and accounting, marketing, patents, sales, operations, and much more. The only catch? It's up to you to download and work through the course materials on your own.

Coursera also offers college-level instruction, and it provides graded assignments and feedback in courses from universities around the world. Unlike traditional distance-learning classes, Coursera courses don't come with a traditional tuition price tag. Some courses can be audited for free, and if you want to earn a certificate or access all the course features, a subscription plan runs about $50 per month. One Coursera option for budding business owners is Michigan State University's six-course specialization called How to Start Your Own Business, which is designed to walk students through the process of starting a business as they launch it.

The classes you may need will depend on the type of business you want to run. Planning an e-commerce business? OpenCourseWare's undergrad-level Economics and E-Commerce course materials cover pricing, sales taxes, different types of e-commerce, advertising, and search. One recommendation from me: if you're planning a service business like freelance design or writing, event planning, or repairs, it's a good idea to learn as much as you can about negotiation before you begin, both to earn what you're worth and to build good relationships with good clients. Becoming a good negotiator can help you in many areas of your business, from setting rates and writing bids to working with vendors and hammering out the fine print in contracts. Coursera offers more than 50 negotiation courses, and MIT OpenCourseWare offers materials for several negotiation classes from the Sloan School of Management's curriculum.

Whatever you decide to study now, remember that successful business owners are always learning. Free and low-cost courses are a low-stress way to keep up with trends and innovations in your niche.

3. Guidance for Building a User-Friendly eCommerce Website

In late 2018, Google published its UX Playbook for Retail: Collection of best practices to delight your users. Google reviewed hundreds of retail sites to come up with its recommendations, and the result is probably the best free resource you're going to find for learning what to include on your site and why. The free-to-download playbook uses Sephora, Warby Parker, Boots, ThredUp and other best-in-class e-commerce sites to show you exactly what works for six key areas: the homepage or landing page, menus and navigation, search, products and categories, conversion, and forms. For each area, there are details on what to include and what to avoid, to help you create a site that looks professional and is frustration-free for shoppers. There are also charts showing the ease of implementation, impact, and key metrics to track for each suggestion in the playbook. Don't let the playbook's 108-page length discourage you from diving in. The guide's design—lots of screenshots, checklists, and charts—makes it a fast, informative read you can consult as you plan each section of your site.

4. Easy Tools to Create Your Website

DIY website design used to be reserved for hardy amateurs who enjoy coding and don't mind spending time tinkering and consulting support forums. For the rest of us, website builders have opened up high-quality site design to anyone who can drag and drop. Site builders like Gator Website Builder make setting up a small business website or even an online store fast and easy by packaging everything you need to get started and making the design process a snap. For example, every Gator plan includes site hosting, domain name registration, an SSL certificate to protect your data and your customers, analytics to help you measure and improve your site's performance, and support. You also get unlimited pages, storage, and bandwidth, so there's no limit to how much your site can grow as you add products, services, and testimonials from your best customers. You can also upgrade to Gator Premium for priority support, or to Gator eCommerce for priority support plus online store functionality. Ready to get started? Choose your Gator Website Builder plan now.

5 Best Online Payment Gateways in 2019 for your E-commerce Website

Reseller Club Blog -

Running an online e-commerce business takes a lot of strategizing and planning: setting up your website, selecting the right web hosting, and, most importantly, choosing a payment gateway for your store. To make your work easier we have compiled a list of the 5 best online payment gateways in 2019.

Do Payment Gateways Affect your Business?

Choosing the right payment gateway is crucial to the success of your e-commerce store. Payment gateways are like a bridge between the buyer and seller. They permit fund transfer directly to the seller, keeping the security and comfort of the buyer in mind. As per a survey by the Baymard Institute, 6% of customers abandoned their cart because there weren't enough payment methods available. Added to this, most users these days prefer mobile payment options as they are quick and effortless. Thus, it is imperative that you choose a payment gateway keeping these points in mind: secure, well-known, and easy to use.

Let us have a look at the top 5 online payment gateways to look forward to in 2019.

PayPal

PayPal is a global online payments platform that assists in online money transfer. It currently has over 277 million users worldwide and operates in 202 markets. Moreover, PayPal allows customers to send, receive, and hold funds in 25 currencies worldwide. One of the advantages of PayPal is that the customer does not need a PayPal account to make a payment. This is a great advantage for e-commerce stores, as they need not worry about whether their customer has a PayPal account.

Key Features of PayPal:
- Doesn't require users to have a PayPal account to process payment
- Supports international payments and credit cards
- Multi-currency support
- No withdrawal fee
- Fast mobile payment

PayU

PayU is a prominent online payment service provider that processes payments faster for both merchants and buyers. PayU covers 18 markets across Asia, Central and Eastern Europe, the Middle East, Latin America and Africa, catering to over 2.3 billion consumers. They have over 300 payment methods for fast, simple and secure electronic payments across platforms. It supports one-click buy, which allows users to purchase with a single click, improving customer conversion rates on your e-commerce website.

Key Features of PayU:
- Easily integrate and receive all local payment methods instantly
- Supports one-click buy, allowing users to purchase with a single click
- Mobile integration
- Web checkout
- Multi-currency support

Amazon Payments

Amazon Payments is a payment service offered by the e-commerce giant Amazon. The payment gateway is available to Amazon users, both sellers and buyers, to help smooth their online purchase process. With Amazon Pay, merchants can accept payment either online or on mobile. Moreover, it lets buyers re-use the information already stored in their Amazon account on the merchant's site, so they don't need to re-enter details like their name, shipping address, or credit card number, without compromising security. This makes the payment process smoother and faster.

Key Features of Amazon Payments:
- Faster checkout process
- Top-notch security
- Merchant website integration
- Supports automatic payments
- Fraud protection

Braintree

Braintree is an online payment gateway and a division of PayPal designed to make your payment process simpler. Braintree supports over 45 countries and 130+ currencies worldwide. One of the benefits of Braintree is that users can tailor their checkout flows any way they would like while remaining PCI compliant.
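To give a feel for the server side of that flow, here is a hedged sketch using Braintree's Python SDK; the credentials and the client-supplied payment nonce are placeholders, and details may differ across SDK versions:

```python
import braintree

# Sandbox credentials are placeholders; Braintree issues real ones
# per merchant account.
gateway = braintree.BraintreeGateway(
    braintree.Configuration(
        braintree.Environment.Sandbox,
        merchant_id="your_merchant_id",
        public_key="your_public_key",
        private_key="your_private_key",
    )
)

result = gateway.transaction.sale({
    "amount": "10.00",
    "payment_method_nonce": "nonce-from-the-client",  # sent by your checkout UI
    "options": {"submit_for_settlement": True},
})

if result.is_success:
    print("Charged, transaction id:", result.transaction.id)
else:
    print("Declined:", result.message)
```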
Braintree also saves your customers the time and hassle of re-entering their payment information every time they make a purchase.

Key Features of Braintree:
- Merchants can customize their checkout workflow
- Easy data migration
- Dynamic control panel
- Easy and faster repeat billing
- Advanced fraud protection

Authorize.Net

Authorize.Net is a payment gateway platform serving more than 300,000 customers. It provides the security and complex infrastructure needed for fast, secure and reliable data transfer, and it offers plenty of options to its users for both accepting and processing payments, online as well as at retail locations. The online payment system accepts credit cards and electronic cheques from websites and deposits the money directly into the merchant account.

Key Features of Authorize.Net:
- Supports multiple payment options: mobile, retail, mail and phone payment
- Employs an advanced Fraud Detection Suite
- Supports recurring billing
- Does not have a fixed enterprise pricing scheme
- Supports sync with QuickBooks

Conclusion

We at ResellerClub use the PayPal and PayU payment gateways, as we have a global presence. However, as an e-commerce store owner you need to figure out which online payment gateway is best for you. It might be one from our top 5 list or any other payment gateway. The right choice is the one that is the most beneficial to your customers. Satisfied customers mean more and better conversions, which in turn lead to improved business.

Which online payment gateway do you use? Is it one of these or something else? Tell us in the comments section below.

bingbot Series: Easy set-up guide for Bing’s Adaptive URL submission API

Bing's Webmaster Blog -

In February, we announced the launch of the adaptive URL submission capability. As called out during the launch, as an SEO manager or website owner you do not need to wait for the crawler to discover new links; you can simply submit those links to Bing automatically, to get your content indexed as soon as it is published! Who in SEO didn't dream of that?

In the last few months we have seen rapid adoption of this capability, with thousands of websites submitting millions of URLs and getting them indexed on Bing instantly. At the same time, a few webmasters have asked for guidance on integrating the adaptive URL submission API. This blog shows how easy it is to set up the adaptive URL submission API.

Step 1: Generate an API Key

Webmasters need an API key to be able to access and use Bing Webmaster APIs. This API key can be generated from Bing Webmaster Tools by following these steps:

- Sign in to your account on Bing Webmaster Tools. In case you do not already have a Bing Webmaster account, sign up today using any Microsoft, Google or Facebook ID.
- Add and verify the site that you want to submit URLs for through the API, if not already done.
- Select and open any verified site through the My Sites page on Bing Webmaster Tools, and click on Webmaster API on the left-hand side navigation menu.
- If you are generating the API key for the first time, please click Generate to create an API key. Otherwise, you will see the key previously generated.

Note: Only one API key can be generated per user. You can change your API key anytime; the change is picked up by the system within 30 minutes.

Step 2: Integrate with your website

You can use either of the below protocols to easily integrate the Submit URL API into your system.

JSON request sample:

```
POST /webmaster/api.svc/json/SubmitUrl?apikey=sampleapikeyEDECC1EA4AE341CC8B6 HTTP/1.1
Content-Type: application/json; charset=utf-8
Host: ssl.bing.com

{
  "siteUrl": "http:\/\/example.com",
  "url": "http:\/\/example.com\/url1.html"
}
```

XML request sample:

```
POST /webmaster/api.svc/pox/SubmitUrl?apikey=sampleapikey341CC57365E075EBC8B6 HTTP/1.1
Content-Type: application/xml; charset=utf-8
Host: ssl.bing.com

<SubmitUrl xmlns="http://schemas.datacontract.org/2004/07/Microsoft.Bing.Webmaster.Api">
  <siteUrl>http://example.com</siteUrl>
  <url>http://example.com/url1.html</url>
</SubmitUrl>
```

If the URL submission is successful, you will receive an HTTP 200 response. This ensures that your pages will be discovered for indexing and, if Bing webmaster guidelines are met, the pages will be crawled and indexed in real time.

Using either of the above methods you should be able to directly and automatically let Bing know whenever new links are created on your website. We encourage you to integrate such a solution into your web content management system to let Bing auto-discover your new content at publication time.

In case you face any challenges during the integration, you can reach out to bwtsupport@microsoft.com to raise a service ticket. Feel free to contact us if your website requires more than 10,000 URLs submitted per day. We will adjust the limit as needed.

Thanks!
Bing Webmaster Tools team

New – Updated Pay-Per-Use Pricing Model for AWS Config Rules

Amazon Web Services Blog -

AWS Config rules give you the power to perform Dynamic Compliance Checking on your cloud resources. Building on the AWS Resource Configuration Tracking provided by AWS Config, you can use a combination of predefined and custom rules to continuously and dynamically check that all changes made to your AWS resources are compliant with the conditions specified in the rules, and take action (either automatic or manual) to remediate non-compliant resources.

You can currently select from 84 different predefined rules, with more in the works. These are managed rules that are refined and updated from time to time. Custom rules are built upon AWS Lambda functions, and can be run periodically or triggered by a configuration change. Rules can optionally be configured to execute a remediation action when a noncompliant resource is discovered. There are many built-in actions, and the option to write your own action using AWS Systems Manager documents as well.

New Pay-Per-Use Pricing

Today I am happy to announce that we are switching to a new, pay-per-use pricing model for AWS Config rules. Effective August 1st, 2019, you will be charged based on the number of rule evaluations that you run each month. Here is the new pricing for AWS Public Regions:

    Rule Evaluations Per Month    Price Per Evaluation
    0 - 100,000                   $0.0010
    100,001 - 500,000             $0.0008
    500,001 and above             $0.0005

You will no longer pay for active Config rules, which can grow costly when used across multiple accounts and regions. You will continue to pay for configuration items recorded, and any additional costs such as use of S3 storage, SNS messaging, and the invocation of Lambda functions.

The pricing works in conjunction with AWS Consolidated Billing, and is designed to provide almost all AWS customers with a significant reduction in their Config rules bill. The new model will let you expand globally and cost-effectively, and will probably encourage you to make even more use of AWS Config rules!

— Jeff;

Transitioning to a Career in AR/VR Design

Facebook Design -

By Jake Blakeley

A couple of years ago, I made a silly prototype that let people shoot virtual foam darts at their friends' faces in augmented reality. Although it was a small and fun project, it was the start of my transition from designing 2D UI products for advertisers to being one of the first handful of product designers helping shape what is now the Spark AR platform. It was exciting to see such a simple experience spark joy in people when they used it. Working at Facebook, I can bring these types of experiences to scale on a platform that enables creators to build and share similar augmented reality experiences with their friends and followers. Two years later, I'm still designing for augmented reality and virtual reality — AR/VR — at Facebook, but now I'm working on Oculus products and learning how to design for all of the ways our brains perceive the world.

This transition wasn't unique to me, and I see it as an industry trend. Based on the number of people reaching out to me recently, it seems more designers than ever are entering the AR/VR space as people realize how transformational this technology is becoming. Let's take a peek at some key concepts, the general process AR/VR designers at Facebook use and how you can apply it to your own work, how to choose the right tools and platforms to build for, and how to mind the skill gap to avoid frustration when taking on this new challenge.

Key Concepts to Start Your Journey

It can be quite daunting to look at AR/VR as a completely new space, with a whole new language and concepts, but I often find myself leaning on knowledge from other fields. Architecture taught me about positive and negative space, visual effects taught me how to create spectacle to delight the viewer and, most of all, the games industry taught me how to think about interaction in a 3D environment. Playing video games for 10 hours a week was actually useful for my career — take that, Mom! Let's start with what underlies all these fields: 3D.

The Basics of 3D

In spatial computing, all modeling and interaction is represented on a 3-axis grid along x, y and z. On top of that, we construct the rest of our object by adding textures, materials and shaders. This is one of the key differences many designers struggle with when learning a 3D design tool. Unlike with 2D design tools, we're not applying an image against a flat screen anymore. It's a texture, applied to a material, tied to a UV map, rendered by a shader. That sentence probably didn't make much sense, so let's break it down.

Say we want to model the "angry reaction" in 3D. We start with a simple sphere model, then unwrap the sphere mesh to create a UV map, where every edge of the mesh lines up to a part of the flat map so the two can be realigned later. Next, we take our 2D image of an angry reaction and apply it to a material on a shader. We then apply this material to the sphere mesh, and the texture wraps around the sphere nicely.

When it comes to 3D, shaders are probably the hardest component to wrap your head around, but one of the most fundamental. Shaders are the instructions given to your device to tell it how to render an image, based on all the inputs we mentioned earlier: materials, mesh, vertices, color and light, among others. This happens in every frame to create an animation.
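To make the mesh-to-UV relationship concrete, here is a small illustrative sketch (my own, not from any Spark AR API) of the standard spherical unwrap, which maps a point on a unit sphere to a (u, v) texture coordinate:

```python
import math

def sphere_uv(x, y, z):
    # Longitude drives the horizontal texture axis, latitude the
    # vertical one, so the flat image wraps cleanly around the sphere.
    u = 0.5 + math.atan2(z, x) / (2 * math.pi)
    v = 0.5 - math.asin(y) / math.pi
    return u, v

# The renderer samples the texture pixel at (u * width, v * height)
# for each point on the sphere's surface.
print(sphere_uv(1.0, 0.0, 0.0))  # a point on the equator: (0.5, 0.5)
```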
The easiest way to think about this is to think about your favorite 3D video games. You’ve probably seen a game styled more like a cartoon, such as The Legend of Zelda: The Wind Waker, and one styled more realistically, such as The Elder Scrolls V: Skyrim. These styles were determined by the shaders used. Here is the “angry reaction” with three different shaders and the material we applied.

Just like in the real world, lighting defines the brightness, shadows and other properties of an object and surfaces. Lighting is very important for AR/VR as it creates grounding, believability and also helps guide users. There’s a lot more to 3D, such as rigging, animating and the use of different material types, but this should be enough to help you grasp the basics before diving into a 3D tool.

The Tale of Two Spaces

In a 2D app, everything is tied to the screen position. But in AR/VR, there are (mostly) two spaces. The first is screen space, where an object is tied to the screen, like in 2D apps. The second is world space — it’s an object sitting on your desk or placed in your hand. The concept is simple, but the implications are significant. Let’s look at typography as an example. A 12-pixel font in screen space is generally 12 pixels all the time, but if we wanted to put text in world space, it changes size and readability drastically, based on how close the user is to it.
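There is no single fix for that, but one common trick is to rescale world-space text against camera distance so it keeps a roughly constant on-screen size. The sketch below uses the same three.js assumption as the earlier example; the label object and scale factor are illustrative choices, not anything from this article.

```javascript
import * as THREE from 'three';

// World-space text shrinks as the viewer moves away from it. Calling this each
// frame rescales a label by its distance to the camera, holding its apparent
// on-screen size roughly constant. 'label' is any THREE.Object3D that contains
// your text mesh; both the name and the default factor are hypothetical.
function keepLabelReadable(label, camera, screenFactor = 0.05) {
  const distance = camera.position.distanceTo(label.position);
  label.scale.setScalar(distance * screenFactor);
}
```

Whether to do this at all is a design call: constant-size labels stay readable, but they break the depth cues that help ground objects in the world.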
What is AR/VR actually?

Let’s break down what virtual reality and augmented reality actually are. Although they are quite similar in underlying technology and exist on a spectrum of immersive technology, let’s simplify and discuss them separately so we can understand the constraints of the systems we’re working in. AR is about recognizing and understanding the world as seen by the device’s camera. It superimposes media onto the user’s view, combining the real world and a computer-generated one. Because the system only understands the pixels seen by the camera, it doesn’t interpret the world like people do. Occlusion is an example of an AR constraint. It means the device doesn’t automatically interpret the depth of the world. In this example, the system first has to understand a face. Then we track a mesh to it to occlude — or mask — the crown so the back side doesn’t show through the head.

While AR superimposes a new world onto ours, VR transports us into a digital one. It does this through a stereoscopic display and headset tracking to make your head into a virtual camera for a digitally rendered world. The biggest constraint in VR comes from the fact that we’re tricking our eyes and brain into thinking we’re in a virtual world. We need the rules of this world to match our concept of reality. To simplify, when there’s a disconnect between what our body is feeling versus what we’re seeing, user comfort can be impacted. For example, if you make someone fall in VR when their body knows they’re standing up, this can result in reduced comfort due to the disconnect. Here are examples of how to allow movement while maintaining user comfort. From left to right: teleporting by pointing and pressing a button in Robo Recall; pushing yourself through space in Echo VR; using your hands at a distance to pull yourself in To The Top.

A design consideration you’ve probably thought about for mobile but that’s exaggerated in VR is designing for the human body. Spatial interfaces use your head and hands to allow you to interact with the world, which is a magical experience and intuitive if done right. However, our bodies have limitations. Looking down, turning around, keeping our arms up — these become tiring over time.

There are numerous domain-specific AR/VR languages and concepts that are best learned while experimenting with the many tools on the market. For example, if you want to tackle mobile AR, Spark AR will cover many capabilities and best practices, Oculus outlines concepts specific to VR, and whatever video tool you are using will likely highlight how to do compositing to put objects in your real-world footage. While the language of AR/VR is evolving, this outlines the basics. Now, let’s dive into what it takes to do the work.

Our Team’s AR/VR Design Process

AR/VR designers at Facebook divide our efforts into three phases: ideation, vision work and prototyping.

If you’re a designer, ideation is probably familiar. It’s a quick and iterative way to generate lots of ideas to address a problem and learn rapidly. We use collaborative brainstorming, storyboarding to tell a narrative and — unique to AR/VR — bodystorming. For storyboarding, our team is fond of Procreate for creating digital sketches in 2D and Quill for sketching in 3D. For bodystorming, we use real-world props and activities to act out interactions and narratives. This is especially effective in AR/VR, because you get a spatial feel for objects and scale while iterating much faster than in digital prototyping.

Vision work is the second phase and occurs early in our process. It involves gathering our ideation and combining it in a tighter package, usually a video, to share more broadly within the team or cross-functionally. However, we can share a vision in other ways, such as style-boarding to agree on a visual language, or high-fidelity storyboards to discuss steps in great detail. Vision work helps our multidisciplinary team align around a north star, so we can also work fast and sometimes semi-autonomously toward the same solution. The vision may evolve as we learn more through prototyping and research, but it allows us to work in parallel instead of blocking other team functions. For vision work, we generally use 3D modeling and animation apps, such as Cinema 4D, Blender or Maya, to render videos on top of recorded footage.

The third phase, prototyping, is the highest fidelity of the three phases and is usually reserved for smaller, more high-touch interactions or project details. Prototypes are also usually the best artifacts to bring into user research, since they allow participants to test our work and give tangible, direct feedback. AR/VR prototyping contains a couple of key differences compared to other disciplines. First, interactions take longer to build, as best practices have yet to be defined completely, and second, there are significantly more variables to consider when designing in 3D than 2D. In this phase, our team usually uses a 3D modeling app — the same ones mentioned above — to create low poly assets for our real-time engines. We generally do interaction prototyping in the same tool we use for the end product so we can test, learn and iterate fast. This usually means using Spark AR Studio for mobile AR, adding interactivity through either visual programming or scripting with code, and using Unity or Unreal Engine for HMD-based AR/VR for products like the Oculus Rift. Whether you select Unity or Unreal as your tool of choice is a hotly debated topic, so I’ll leave it up to you to decide. This may seem like a broad skill set, but luckily I didn’t have to become an expert on all phases.
Each of my team members has a strong domain expertise that helps raise the rest of the team up. I have a team member who is amazing at motion graphics and visualizing ideas, a coworker and friend who knows shaders and real-time engines inside and out, a teammate who is a master of design processes and practices, and, of course, there’s me. I’m more a generalist and know these skills more broadly but not as deeply in any one category. A multidisciplinary team like ours shows how broad and open the skill sets are for an AR/VR designer. The real magic happens when we apply our different areas of expertise to the challenge and collaborate to find a solution. Now that I’ve shared one approach to designing for AR/VR, let’s dig into some unique learning methods.

The Skill Gap and How to Learn Effectively

When I started in AR/VR, my biggest struggle was staying motivated in an emerging technology that had so many unknowns. I was at the point in my product design career where I was adept at iterating quickly on UI, had the confidence to defend my product decisions, had a strong intuition for user needs and felt pretty secure in my career. But when it came to AR/VR, I felt like an impostor. Making the leap to AR/VR was hard when I knew staying in my old role was safe. I had to persevere, and accept that my AR/VR work wasn’t great yet but that someday I would get there. What eventually pushed me to where I am today wasn’t thinking about *what* I learned but *how* I learned.

A great framework for understanding the learning process is the four stages of competence, which describes how we learn and the struggles that come with the journey. My friend and coworker Emilia explored this in depth in her article “How to Feel All the Feelings and Still Kick Ass.” The role of conscious incompetence in learning particularly resonates with me. This is the learning stage where you understand enough to grasp how much you don’t actually know. It’s like feeling accomplished when you learn to play “Chopsticks” on the piano, then suddenly realizing how much more you need to learn before you can perform “Für Elise.” This is the stage where most people give up.

The biggest favor I did myself was treating learning as play — taking the pressure off by doing small, fun projects. This meant taking grand ideas, such as creating a fully immersive AR shopping experience, and breaking them down into smaller projects. I started with questions like “How do I signal to users that they can place their objects into the world?” or “How do we allow users to manipulate an object?” or even “How do I get a 3D model into the engine?” There’s a ton to learn from small projects like these, especially in an early industry like AR/VR, where patterns aren’t fully cemented. These small projects also helped me realize what excited me the most about AR/VR, what I excelled at and where I had skill gaps.

What’s great about this time in our industry is that we’re all learning together, and people are eager to help and mentor. Especially at a place like Facebook, we tap into each other’s unique skills to help ourselves grow. If you’re looking for a helping hand, I’d be more than happy to pass the baton and help you get started. Reach out!

Summing It Up

If you’ve made it this far, congrats! This is only a short summary of the foundational concepts of 3D and AR/VR and the processes and tools my team and I find useful. What makes this industry a bit overwhelming is also what makes it so exciting — it’s evolving extremely fast, and there’s always a ton to learn.
It’s a long journey, and the skill gap will be frustrating, but remember to start small, find ways to play as you learn and seek out a buddy or mentor to help guide you. If you jump in now, you’ll be years ahead of other designers once spatial computing is ubiquitous. Is there anything else you feel that designers starting in this field should know? Or is there anything you wish you knew early in your AR/VR career?

· · ·

Thank you to everyone who helped to compile this and supported me in my design career transition: Matt S., Matt M., Hayden S., Emilia D., James T.!

Using WordPress for Professional Websites

WP Engine -

When starting a business, it’s now more important than ever to establish and maintain an online presence. There’s more than one choice when choosing a platform for building your professional website. Could WordPress be the solution you are looking for? In this white paper, we’ll help answer the question: is WordPress suitable for professional websites?…

What Are The New TLDs

The Domain.com Blog -

As the internet has matured, the sheer number of relevant domains has started to dwindle. If you’ve registered a .com web address, chances are you’ve felt the pain of trying to find an applicable one-word or two-word domain that’s still available. With only about 22 generic top-level domains, the domain space was beginning to feel a bit crowded. Enter new TLDs:

.tech
.space
.actor
.yoga

New TLDs provide novel territory for individuals and businesses who want to distinguish themselves among other websites. Some of these domain extensions have incredible utility by offering companies a more niche website or a creative take on their original TLDs.

What Are the Original TLDs?

The old TLDs are the original domain extensions that are still commonly in use today. Each has a specific purpose and a certain domain space to which it is connected. A few of the most well-known examples are:

.com – Often used for commercial businesses and individuals who are marketing themselves.
.net – Short for “network,” these are commonly associated with internet providers, emails, and umbrella sites that are connected to various smaller sites.
.org – Nonprofits and charities will often use the .org domain extension. Other organizations like sports teams, community groups, and religions will often use .org.
.edu – “Education.” Most schools, universities, and other learning centers will use this TLD.
.gov – This is a restricted TLD used by the U.S. government. Any government site must have a .gov domain extension.

For a long time, these sorts of top-level domains were considered sufficient for covering all the subsections of the internet. But, of course, as the internet expanded, so did the need for new TLDs.

ICANN and Its Role in New TLDs

The Internet Corporation for Assigned Names and Numbers (ICANN) is a non-profit that helps maintain the Domain Name System (DNS). ICANN is the organization responsible for the decision to expand the internet domain space by allowing new TLDs to operate. Back in 2012, ICANN decided to allow businesses to apply for new top-level domains to promote growth. Some of the earliest applied-for TLDs included:

.art
.app
.love
.shop
.baby

Since then, more than a thousand new TLDs have entered the public domain. Now the question is, which one should you choose?

6 Considerations For Choosing a New TLD

While these new TLDs are unconventional compared to the standard .com or .net, they have many benefits. Maybe you’re looking to stand out creatively from the other websites in your competitive space. Or maybe every domain idea you’ve had has already been taken. However you want to use them, new TLDs have incredible potential to boost your web presence.

Modernized TLDs

Every so often some new technological advancement will come along that shifts the way something is done. In this case, what’s changed is the possibility of a new and innovative web address. Businesses have always evolved, and by using these new TLDs, companies can stay ahead of the curve.

New TLDs — A Fresh Take

It’s important to have perspective. Sure, right now certain traditionalists consider anything but the core group of gTLDs (.com, .net, .org, etc.) to be less desirable. But as these new TLDs become more commonplace, this view is changing, and having an up-to-date domain will save time. Companies who lag might change their opinion too late and find out their desired domain has already been taken. Of course, use this perspective with caution.
How a business is perceived is always essential. Be sure to understand your audience and take them into account when registering a domain.

Knowing Your Audience

Not all businesses are created equal. Different demographics will be attracted to different facets of a company. Marketing strategies aimed at senior citizens, for example, will be much different than marketing toward millennials. Understanding your audience can help push you toward the right TLD. As a yoga center, one option is to register a .com domain extension. However, it would also be appropriate to register a .yoga TLD. This would generate authority within the yoga space. Some other new TLDs that fit a niche market are:

.coffee
.tennis
.pizza
.toys
.photography

Each of these domain extensions hits its target market with a certain exactness.

Specificity

Knowing what sort of business you run or what kind of service you are providing can help narrow down the TLD you want. The perfect domain extension indicates precisely what to expect when users stumble upon your website. Not only this, using a more specific domain extension can reduce the length of the website URL. Some new TLDs that can help specify your web address are:

.tech – With the increasing number of tech start-ups out there, having a .tech TLD can set your website apart from the pack.
.design – Spice up an artist portfolio page with a .design URL. Or use this new TLD for any number of design professions like interior decorator, web designer, graphic designer, and more.
.luxury – Fashion brands, high-end accessories, car companies, furniture — these are all services that can succeed under the .luxury domain extension.
.restaurant – This TLD can separate your restaurant from all the other .com eateries. It allows the name of your restaurant to exist as the domain name and leaves the “description” to the domain extension.

These are just a few of the numerous TLDs available on Domain.com. Each has its own space where it provides value. It’s all a matter of finding the right one and getting creative.

Increased Creativity

With the sheer number of available TLDs nowadays, it’s possible to use them to upgrade your web address and boost it to the next level. Some examples of new creative web addresses include:

[Your Name].cool
[Clothing Brand].fashion
[Cooking Site].recipes
[Anything].pizza

As you can see, these are just a few examples of possible combinations. With over a thousand of these new TLDs, it’s hard to imagine not finding the perfect domain that is both creative and descriptive.

Brand Protection

For those companies who already have their generic TLD domain name, it can be beneficial to scoop up similar TLDs that are available on the market. If a coffee business owns its brand name with the .com domain extension, it might also wish to purchase the .coffee domain extension as well.

The Necessity of Brand Protection

Unfortunately, with each new TLD, it becomes harder to protect a brand from those trying to benefit off of it.

Brandjacking – Individuals will purchase relevant domains based around a popular website and use its popularity to drive traffic away from the intended website. An example of this would be trying to register starbucks.coffee before Starbucks does in order to exploit them or drive traffic to an opposing site. (In this case, Starbucks is a trademarked entity, so this would not be possible. It is more of a problem for smaller companies.)

Typosquatting – Another threat that becomes harder to manage is typosquatting.
This is when individuals will purchase web domains based on common misspellings of certain words. If enough traffic is driven away from the main site, companies are often forced to buy out that individual for the rights to the web address. More companies are having to purchase additional domains despite already owning their business website.

Availability

With each additional TLD available, the domain space grows and more companies can purchase a short, memorable and descriptive web address. This is incredibly useful, as almost half of all domains are registered under the .com domain extension while the next few TLDs don’t quite scratch 5% usage.

With Availability Comes Variable Pricing

Because there are so many TLDs available now, there are multiple organizations who manage different domain extensions. This means that there is no one standard price for registering a domain name. That is great for those domains that happen to be cheap. Others, however, can be quite expensive depending on how in-demand they are.

New TLDs vs. Old gTLDs

So far, the focus has been on new TLDs, but how do they compare with the old, standard gTLDs?

Benefits of gTLDs – Traditional TLDs are tried and true. There’s a reason .com still reigns supreme in terms of how many sites are registered each year. Having a .com domain extension ensures a certain quality and reliability. Everybody knows and understands what’s involved when accessing a .com site.
Downside of gTLDs – That being said, it is much harder to land a desired web address with a gTLD. It’s then equally difficult for your website to stand out among other websites.
Benefits of new TLDs – New TLDs are creative and fun. With new TLDs, it’s possible to express more than with the older gTLDs. The level of specificity achieved is more significant than what can be provided by standard gTLDs like .com and .net, and there are a lot more domains available.
Downside of new TLDs – Because of how many new TLDs are being created, the demand for particular domain extensions can be significantly high. This pushes prices up in an unpredictable way. Those who happen upon a popular TLD might end up paying considerably more than for a traditional gTLD (whose prices stay relatively even over time).

Registering New TLDs

Each new TLD goes through a procedure before it becomes available to the general public. Domain.com offers its members the chance to be part of the early access and pre-registration groups, which is great for businesses and individuals seeking out highly contested domain names. Here are a few different methods of registering for new TLDs:

General Availability (GA) – This is the list of new TLDs and gTLDs that are currently available to the general public. Of course, these can be purchased if no other entity has secured the domain already. You can search by domain name on Domain.com to see if the desired name is available.
Early Access – The Early Access Period (EAP) is usually during the first week that a new TLD is available. As the week progresses, domains with this extension decrease in cost. This allows individuals and businesses to spend more in order to purchase a domain earlier. The time length generally doesn’t exceed a week.
Pre-Registration / Priority Pre-Registration – There is another way to gain a new TLD earlier than general availability: pre-registering (or paying a premium with priority).
This gives users the best chance to acquire hotly contested web addresses.

Trademarks and the Sunrise Period

The earliest possible time to register a domain under a new TLD is known as the sunrise period. This is a period of 30 days during which an entity with a registered trademark can register early for a new TLD (trademarks must be registered with the Trademark Clearinghouse — an international trademark database). By trademarking part of a business and incorporating it into the web domain, companies can further protect themselves against brandjacking.

Other Types of TLDs Available

There are some other types of top-level domains available that cover a different angle of web addresses. These include:

ccTLDs – These are known as “country-code top-level domains.” They signify websites that are associated with a specific country or region. Common examples include .us (United States), .uk (United Kingdom), and .eu (European Union).
gTLDs – These are generic top-level domains. There are over twenty of these common gTLDs (.com, .net, etc.).
sTLDs – Or “sponsored top-level domains.” Private organizations manage these, and, in general, they are not available to the public (.edu, .gov, etc.).

Conclusion

New TLDs are a fun, creative way for businesses to express their identity with the perfect website address. By sprinkling in some spice with a new domain extension, companies can upgrade their website and stand out among the countless sites around today. With how many new TLDs are available, the options are starting to seem unlimited. If you’re looking to use the perfect new TLD for your web address, know that Domain.com has over 300 new TLDs from which to choose!

Sources:

LinkedIn. (2017, Jan.). Brandjacking: What It Is and How to Avoid It. https://www.linkedin.com/pulse/brandjacking-what-how-avoid-wink-faulkner/
Domain Name Stat. Domain name registration statistics. https://domainnamestat.com/
ICANN. (2011, June). ICANN Approves Historic Change to Internet’s Domain Name System | Board Votes to Launch New Generic Top-Level Domains. https://www.icann.org/news/announcement-2011-06-20-en

The Top 15 Benefits of a Website for Small Businesses

DreamHost Blog -

These days there’s no excuse for not having a website, even if your business is only just getting off the ground. Many potential customers and clients won’t take you seriously without one. Plus, there are so many upsides to setting one up that not doing so is almost irresponsible. One of the most obvious benefits of having a small business website is that it enables people to find you online and get in touch with you easily. Having an effective and compelling site can even lead to sales you wouldn’t have made otherwise. In this post, we’re going to discuss 15 reasons why it makes sense to set up a website for your small business. Then we’ll show you how a website builder, like Remixer, can help business owners create a small-biz site in a matter of minutes.

The Top 15 Benefits of a Website for Small Business

1. You Can Develop an Online Presence

These days, the first thing a lot of people do when they hear about a business is to look it up online. If you don’t have a website set up — or at least some social media profiles — you might as well not exist for all those potential clients. Moreover, having a website can help shape the way people perceive your business. For example, you can fill your site up with reviews, photos of your locations, helpful information, and anything else that will bolster your image. We’re not exaggerating when we tell you that online marketing is a critical component of business success in today’s market.

2. It’s Possible to Target Local Customers

If you’re anything like us, you look up the closest businesses to you when you’re trying to make a specific purchase. For example, let’s say you need a haircut and you don’t know the neighborhood. You’ll probably jump online and look up nearby barbers or hair salons. If your website shows up among the first search engine results for these types of local queries, then you might land yourself some extra business (building a strategy to rank well for keywords is known as search engine optimization, or SEO). On top of drumming up more customers, your site can also help build brand awareness of your local business in the community.

Related: 7 Steps to Identify a Target Market for Your Online Business

3. You Can Share Your Address and Contact Information With Customers

Imagine that someone knows your business exists, but they’re not sure how to get there. Ideally, your website should include your full address, instructions on how to find you, and (if you’re looking for extra points) a map of the area. Armed with that information, it’s almost impossible for anyone to get lost along the way. It’s also useful to have a place for your business’ phone number, email address, and other contact details. That way people can call in if they have any quick questions.

4. It Enables You to Receive Online Queries

A lot of small business owners these days prefer online queries over phone calls. It’s easy to understand why. After all, you can answer emails on your own time, and it doesn’t matter if 20 people contact you at once online — you can still get to all of them. Ideally, your website will provide visitors with multiple ways to contact you. We already mentioned that it should include your phone number and email address, but a contact form is also an excellent addition that lets customers get in touch without leaving the site.
Some businesses even go as far as to set up live chat.

5. You Can Save Money on Paper Advertisements

It used to be that if you wanted to advertise your business, your options were limited. You could hand out flyers, take out ads in the local newspaper, or maybe pay for a TV spot. However, the web provides you with entirely new ways to reach your audience. Even if you don’t want to pay for online ads, your website itself can help market your business. You can, for example, reach out to visitors when you’re running offers they might be interested in. At the very least, you can publish the latest news on your site, so people have an incentive to visit your business.

Related: The 30 Best Apps for Small Businesses in 2019

6. Online Content Can Help You Build a Reputation

There are plenty of successful businesses that give back to their community by helping to keep them informed through content marketing. Take DreamHost’s blog, for example — it’s all about updating you on the latest news and sharing knowledge to power your website. Over the long term, you can also use your website as a platform to publish content and blog posts that help your clients. Content marketing not only makes you look like an authority in your field, but it can also build goodwill.

Related: Blog Your Way to an Awesome Reputation: The 10 Best Company Blogs

7. You Can Use It to Learn More About Your Customers

Websites aren’t only about sharing your business with the world. If used correctly, they can also help you learn more about your customers. Then you can use that information to drive more sales and conversions. You can, for example, set up polls on your website to find out what your visitors are interested in. There are also plenty of online tools that can help you set up full-fledged surveys. You can even track your site’s analytics and get lots of data on how your visitors behave.

8. It’s the Perfect Way to Advertise Employment Opportunities

Good help is hard to find, regardless of what field you do business in. If you’re looking for a new hire, there are plenty of platforms where you can advertise online. However, it also makes sense to use your own website to get the word out about employment opportunities. After all, it’s likely that plenty of people who visit your site are going to be interested in work opportunities. Plus, this way you cut out any middlemen. When someone applies for a job, you can vet them right away.

9. You Can Provide Personalized Email Addresses for Your Employees

When you buy a domain for your website, you can use it to set up personalized email addresses. This is very useful, since an email address such as johndoe@yourlocalbusiness.com looks much more professional than johndoe324@gmail.com. This may seem like a small detail. However, having personalized email addresses can give people the impression that you’re running a professional business (which of course you are!).

10. It Can Help Expand Your Business’ Reach

If you’re running a small store, most of your business will probably come from locals. They’ll get to know what you provide and what your prices are, and hopefully keep coming back for more. To put it another way, most small businesses have a restricted area of influence. Setting up a website enables you to bypass the limitations of running a small operation. You’ll be able to reach more of your target audience than you might have otherwise, and attract business from outside your local area.
11. You Can Make Sales Online

Aside from expanding your business’ reach, having a website also provides you with an entirely new channel you can use to make sales. These days, you’re no longer restricted to only selling products through your physical shop. Setting up an online store is actually relatively simple, and you can even combine it with your regular business site. That way, you’ll be able to make sales even when your operating hours end.

12. Social Media Can Help Promote Your Business

A lot of people think that social media can be a replacement for a website. As far as we’re concerned, however, you need both a site and a social presence if you want to maximize your reach online. Plus, you’ll want to advertise all your social pages right from your website. To put it another way, think about your website as a place where you can publish any content you want, in any format you can imagine. Social media marketing, on the other hand, is a useful tool to get the word out, build a following, and drive traffic back to your website. The two work in perfect harmony, so it doesn’t make sense to limit yourself to one or the other.

13. Email Lists Can Help You Stay in Touch With Customers

Email marketing is one of the most effective tools when it comes to staying in touch with your customers, driving sales, and getting conversions. In fact, you can get a lot of mileage out of creating an ongoing email campaign. What’s more, your website provides you with the perfect way to get people to sign up for your email list. Once you build an audience, you can send them as many emails as you want, as often as you’d like.

14. You Can Educate Users About Your Business

Customers don’t always know what they’re looking for. If you’re new to website hosting, for example, it can be hard to figure out which plan would best suit your needs. There is plenty of information available on the subject, but judging who’s right and who doesn’t know what they’re talking about can be a challenge.

Now, imagine that you’re on the other side of that dilemma. You’re running a hosting service, and you need to figure out how to help people choose the plans they need. A website is the perfect tool for this task. You can use it to educate your audience on what the best products are, depending on their requirements and goals. It doesn’t matter what type of business you’re running, of course. Your website can help you teach your customer base everything they need to know so they’ll make smarter purchases.

15. You Can Build a Community

One of the best things about having a website to call your own is that it can provide a place for your visitors to talk to each other. For example, if you’re running a blog for your business, you can enable a comments section for it so visitors can ask you questions and discuss your posts with each other. Depending on which platform you use, you can also set up more complex community features, such as forums and even public chats.

How to Create a Small Business Website Quickly (And on a Budget)

The upsides of having a website for your business speak for themselves. However, the potential costs and time investment of launching such a project may be holding you back. It’s true that creating a website from scratch can be expensive and can take a lot of time. However, there are alternative ways to launch professional websites quickly, even on a small budget.
Website builders are tools designed to help you create stylish websites, even if you don’t have any experience in development. They’re especially well suited to creating your small business site, since you probably aren’t looking to implement a lot of complex features. Our Remixer service, for example, can help you set up a basic business website in a matter of hours — even if you’ve never touched a line of code in your life.

If you’re working on your homepage, you can use one of Remixer’s professionally designed themes. Then you can customize your site to match your brand. With a few clicks, you can add multiple elements — contact forms, galleries, and more — and rearrange them until the page looks just the way you want it to. You can even customize each web section, so you have full control over what your visitors see. As your small business grows, you can export your Remixer site directly to WordPress to take advantage of the platform’s best features: SSL certificates, blogging tools, e-commerce store add-ons, and WordPress plugins.

Open for Business

Every small business owner needs a website. If you don’t have one yet, now is the perfect time to get started on it. While it is possible for your business to succeed without a website, a web presence can help you open so many doors. If you don’t know anything about website design, don’t worry. You don’t need to spend months and thousands of dollars to set something up. Our Remixer site builder enables you to create a powerful and professional-looking site — even if you’re a complete beginner. Start building your own Remixer site for free.

What Is phpBB Hosting?

HostGator Blog -

The idea of community used to be much more tied to geography. For a group of people to come together to have discussions, share information, and learn from each other, they had to live close enough to do so in person. The internet has changed all that. The idea of connecting with people in online communities is now second nature to us. And one of the common options for creating and maintaining communities online is the forum software phpBB. Below we dive into everything there is to know about phpBB hosting, from how it works to why you might choose it to power your own forum.

What Is phpBB?

phpBB, which stands for PHP bulletin board, is an open source forum software that enables users to create a space online where communities can gather and share information in an organized format. Forums created with phpBB let creators define the main categories and topics the community forum will cover. Users can create accounts, load their own questions and content in the appropriate categories, and respond to the posts of other community members. All participants in the discussion board can track the number of conversations and posts in all of the categories, and click to view each category and conversation they’re interested in. The forum model is basic and intuitive, making it easy for just about anyone to use. And for individuals or businesses interested in starting a forum, phpBB makes setting one up relatively simple as well.

6 Reasons to Start a Forum

phpBB is best used for the specific function of building and running an online forum. Before you decide if it’s the best application for your purposes, you should figure out if creating a forum is right for your particular needs. It’s possible to create a discussion board for its own sake, in order to create a gathering place for people with common interests. It’s also common to create one as part of a business or brand that already exists, to give your customers or followers a place to connect with each other. If you already have a website and are wondering if a forum makes sense, there are a few main benefits it can offer.

1. Forums create community.

Humans are social animals who have always sought out community and connection. An online forum gives people a chance to get to know other users, help each other out, and feel like a part of something. In some cases, that will take on a straightforward and professional tone, and in others it will have a more fun or emotional one. In either case, when you can provide people with a sense of community, you’re adding real value to their lives. And if you become a regular participant in it as well, you may find it brings value to yours also.

2. You can get to know your site visitors better.

For brands, this is a big benefit. For both businesses and media sites, understanding your audience is a big priority. In order to deliver information and products that are relevant and useful to the people you want to reach, you need to know who they are and what they care about. When your audience comes together to interact with each other in a forum, they’ll voice the common questions and opinions they have. All that information you’re always trying to learn about them when crafting buyer personas or creating a content strategy? With a forum, they’ll bring it straight to you.

3. Your visitors can learn from each other.

For many tech and software products, this is one of the big roles online forums play.
When you’re struggling to figure out how to take a particular action in your software and head to Google to find the answer, often what you’ll find is a forum page where one user of the software asked the same question, and a fellow user provided an answer. This is a major reason why we have forums here at HostGator. That use case extends beyond tech products to any type of issue your forum users might be able to help each other with. A forum for teachers could include advice about lesson plans or how to handle particular student issues. A forum for fans of a TV show could include conversations about the show’s influences or themes that introduce ideas some fans hadn’t considered before. In every case, both the people learning something new and the people providing the information (and in many forums, members will take turns in each role) are getting something valuable out of the interaction.

4. Forums can increase traffic to your site.

If you add a forum to your website, you give all the members that participate reasons to keep coming back again and again. And when users have questions that relate to content you already have on your website, you have the perfect opportunity to share the link and drive more traffic to those pages on your site. All that adds up to more visits to your website. And even better, much of it will be return visitors who are actively engaging with your brand during their visits. For website owners, that’s a big win.

5. Forums can improve search engine optimization (SEO).

Search engine algorithms like fresh content. When they see that a website is updated often, it shows that the website is current, which suggests it has more value than one that’s potentially outdated. Forums generate tons of fresh content — every time someone posts a new message or response, they’re creating new content on your site. And if your forum is available to the public, it creates a lot of new pages that can be indexed by search engines and show up in search results themselves. If one of the conversations in your forum answers a question searchers have better than other websites do, it could get onto the first page of the search results, driving more traffic and increasing the authority of your website.

6. Forums help you generate new content ideas.

One of the best ways to improve your website’s authority is to consistently create content your visitors care about. But creating large amounts of high-value, relevant content is challenging. In order to keep it up, you need a way to brainstorm topics your audience is most interested in. A forum gives you a window into the topics your audience is thinking and talking about. You can learn the common questions they have, and then create content that answers them. And by spending time following the conversations they have and the language they use, you’ll get better at creating content in a style that they can relate to.

Why Use phpBB?

If a forum sounds like the right choice for your website, you do have a few different options to choose from in selecting your software. But there’s a reason that phpBB is one of the most popular options — a good number of reasons, in fact. In particular:

It’s free. Setting up a forum with phpBB costs nothing. You can save your money for hosting and marketing.

It’s open source. The code for phpBB is freely available to use and change as you need. And because it’s open source, other users can also develop features and extensions you can benefit from.

It’s secure.
The team behind phpBB runs security audits and works to quickly release updates to the software any time they spot a vulnerability in the code. As long as you keep your software up to date and take basic website security measures, like choosing a trusted web hosting provider and using strong passwords, you’ll be able to keep your website safe from hackers.

It enables user preferences. For your forum to attract the community you want and get them to stick around, you want it to be user friendly. phpBB allows individual users some control over their experience of the forum. For example, they can load unique avatars and signatures to personalize their accounts, and customize the order they view categories in.

It gives you the power of moderation. In our era of spam bots and trolls, a good community is a well-moderated one. To keep your forum valuable, positive, and on topic, you need the power to review and approve the posts that go live. phpBB gives both the forum owner and any users you assign moderator status the ability to remove or approve specific posts.

It offers public and private messaging options. Much of the value of a forum is the visibility of the conversations to all members, but sometimes individual members may want the ability to take part of the conversation private. phpBB allows the option of private messaging between members as well as public discussions.

It allows posters to include rich features. These days, online conversations are rarely just text. phpBB lets users complement their written messages with popular gifs, images, and emojis, as well as adding rich media like video or interactive features like polls.

It has anti-spam features. It’s hard to go anywhere online without encountering spam, but phpBB can help you avoid dealing with too much of it in your forum with features like captcha confirmation and the ability to ban users as needed.

You can customize your forum’s look. You can use your own coding skills to change up the design of your forum, if you’re able. Or you can choose from the hundreds of styles other phpBB users have developed and made available, mostly for free.

You can control permissions. The forum owner, administrators, and moderators will need different types of access and abilities in the forum than everyday users. phpBB makes it easy for you to determine which users are able to access which features.

There are lots of extensions available. The functionality available in the core phpBB software is rich enough, but many users have created extensions that add additional features and functions to phpBB, many of them for free. You can expand what you and your members are able to do, based on your priorities.

What Is phpBB Hosting?

phpBB hosting is a type of web hosting that’s compatible with the phpBB software. phpBB provides a lot of the important functionality you need to build a forum, but it doesn’t come with web hosting. For your forum to become available online and accessible to your members, you’ll need to invest in application web hosting. If you already have a website, you may be able to get started by adding your phpBB forum to the hosting plan you already have, but if you didn’t choose an application web hosting plan specifically designed to work with phpBB, there’s a chance you’ll face compatibility issues. A good phpBB web hosting plan will promise easy one-click installation, so you can spend your time focusing on getting your forum started, not on messing with complicated technical processes to get everything working.
A phpBB web hosting plan will be 100% compatible with your phpBB forum — so there’s no chance of surprise issues down the line. Plus, a strong phpBB hosting choice will provide adequate security through strong firewalls and security software options, so that you can trust your forum will remain active without the threat of malicious hackers or viruses. Finally, you’ll also want to be sure you choose a web hosting option that provides enough bandwidth for the website to handle regular visits from all your users, especially if the community starts to grow in the months and years to come — which is exactly what you want! While a number of web hosting plans may work for hosting your phpBB forum, your life will be a little easier if you go with a reputable web hosting company that offers a phpBB-specific plan.

Get Started with phpBB Hosting Services

With the right phpBB hosting plan, you can get your forum up and running fast, and keep it running without issues for as long as you want. When considering your options for phpBB hosting, make sure you go with a web host that has a strong reputation for providing quality service. HostGator can promise 24/7 customer support, a 99.9% uptime guarantee, and a great reputation in the industry. Many of the benefits of having a community forum fall by the wayside if your visitors can’t access it at the moment they need it. With the right phpBB web hosting plan, you can be confident that your forum will deliver on the speed and accessibility that your users want and expect. Get started today with HostGator application hosting.

Building a To-Do List with Workers and KV

CloudFlare Blog -

In this tutorial, we’ll build a todo list application in HTML, CSS and JavaScript, with a twist: all the data should be stored inside of the newly-launched Workers KV, and the application itself should be served directly from Cloudflare’s edge network, using Cloudflare Workers.

To start, let’s break this project down into a couple different discrete steps. In particular, it can help to focus on the constraint of working with Workers KV, as handling data is generally the most complex part of building an application:

- Build a todos data structure
- Write the todos into Workers KV
- Retrieve the todos from Workers KV
- Return an HTML page to the client, including the todos (if they exist)
- Allow creation of new todos in the UI
- Allow completion of todos in the UI
- Handle todo updates

This task order is pretty convenient, because it’s almost perfectly split into two parts: first, understanding the Cloudflare/API-level things we need to know about Workers and KV, and second, actually building up a user interface to work with the data.

Understanding Workers

In terms of implementation, a great deal of this project is centered around KV - although that may be the case, it’s useful to break down what Workers are exactly. Service Workers are background scripts that run in your browser, alongside your application. Cloudflare Workers are the same concept, but super-powered: your Worker scripts run on Cloudflare’s edge network, in-between your application and the client’s browser. This opens up a huge amount of opportunity for interesting integrations, especially considering the network’s massive scale around the world. Here are some of the use-cases that I think are the most interesting:

- Custom security/filter rules to block bad actors before they ever reach the origin
- Replacing/augmenting your website’s content based on the request content (i.e. user agents and other headers)
- Caching requests to improve performance, or using Cloudflare KV to optimize high-read tasks in your application
- Building an application directly on the edge, removing the dependence on origin servers entirely

For this project, we’ll lean heavily towards the latter end of that list, building an application that clients communicate with, served on Cloudflare’s edge network. This means that it’ll be globally available, with low latency, while still allowing the ease of use of building applications directly in JavaScript.

Setting up a canvas

To start, I wanted to approach this project from the bare minimum: no frameworks, JS utilities, or anything like that. In particular, I was most interested in writing a project from scratch and serving it directly from the edge. Normally, I would deploy a site to something like GitHub Pages, but avoiding the need for an origin server altogether seems like a really powerful (and performant) idea - let’s try it!

I also considered using TodoMVC as the blueprint for building the functionality for the application, but even the Vanilla JS version is a pretty impressive amount of code, including a number of Node packages - it wasn’t exactly a concise chunk of code to just dump into the Worker itself. Instead, I decided to approach the beginnings of this project by building a simple, blank HTML page, and including it inside of the Worker.
To start, we’ll sketch something out locally, like this:

<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width,initial-scale=1">
    <title>Todos</title>
  </head>
  <body>
    <h1>Todos</h1>
  </body>
</html>

Hold on to this code - we’ll add it later, inside of the Workers script. For the purposes of the tutorial, I’ll be serving up this project at todo.kristianfreeman.com. My personal website was already hosted on Cloudflare, and since I’ll be serving this project from a subdomain of it, it was time to create my first Worker.

Creating a worker

Inside of my Cloudflare account, I hopped into the Workers tab and launched the Workers editor. This is one of my favorite features of the editor - working with your actual website, understanding how the worker will interface with your existing project. The process of writing a Worker should be familiar to anyone who’s used the fetch library before. In short, the default code for a Worker hooks into the fetch event, passing the request of that event into a custom function, handleRequest:

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

Within handleRequest, we make the actual request, using fetch, and return the response to the client. In short, we have a place to intercept the response body, but by default, we let it pass through:

async function handleRequest(request) {
  console.log('Got request', request)
  const response = await fetch(request)
  console.log('Got response', response)
  return response
}

So, given this, where do we begin actually doing stuff with our worker? Unlike the default code given to you in the Workers interface, we want to skip fetching the incoming request: instead, we’ll construct a new Response, and serve it directly from the edge:

async function handleRequest(request) {
  const response = new Response("Hello!")
  return response
}

Given that very small functionality we’ve added to the worker, let’s deploy it. Moving into the “Routes” tab of the Worker editor, I added the route https://todo.kristianfreeman.com/* and attached it to the cloudflare-worker-todos script. Once attached, I deployed the worker, and voila! Visiting todo.kristianfreeman.com in-browser gives me my simple “Hello!” response back.

Writing data to KV

The next step is to populate our todo list with actual data. To do this, we’ll make use of Cloudflare’s Workers KV - it’s a simple key-value store that you can access inside of your Worker script to read (and write, although it’s less common) data. To get started with KV, we need to set up a “namespace”. All of our cached data will be stored inside that namespace, and given just a bit of configuration, we can access that namespace inside the script with a predefined variable. I’ll create a new namespace called KRISTIAN_TODOS, and in the Worker editor, I’ll expose the namespace by binding it to the variable KRISTIAN_TODOS. Given the presence of KRISTIAN_TODOS in my script, it’s time to understand the KV API. At the time of writing, a KV namespace has three primary methods you can use to interface with your cache: get, put, and delete. Pretty straightforward!
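Before putting them to work, here is the whole surface area in one place, as a quick sketch using the KRISTIAN_TODOS binding from above. All three calls are asynchronous and return promises, and get resolves to a string (or null when the key is missing):

```javascript
// Inside an async handler in a Worker with the KRISTIAN_TODOS namespace bound:
await KRISTIAN_TODOS.put("data", '{"todos":[]}') // write a string value under a key
const raw = await KRISTIAN_TODOS.get("data")     // read it back; null if the key doesn't exist
await KRISTIAN_TODOS.delete("data")              // remove the key entirely
```

Only put and get appear in the rest of this tutorial; delete is shown just to round out the API.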
Let’s start storing data by defining an initial set of data, which we’ll put inside of the cache using the put method. I’ve opted to define an object, defaultData, instead of a simple array of todos: we may want to store metadata and other information inside of this cache object later on. Given that data object, I’ll use JSON.stringify to put a simple string into the cache:

async function handleRequest(request) {
  // ...previous code
  const defaultData = {
    todos: [
      { id: 1, name: 'Finish the Cloudflare Workers blog post', completed: false }
    ]
  }
  KRISTIAN_TODOS.put("data", JSON.stringify(defaultData))
}

The Workers KV data store is eventually consistent: writing to the cache means that it will become available eventually, but it’s possible to attempt to read a value back from the cache immediately after writing it, only to find that the cache hasn’t been updated yet. Given the presence of data in the cache, and the assumption that our cache is eventually consistent, we should adjust this code slightly: first, we should actually read from the cache, parsing the value back out, and using it as the data source if it exists. If it doesn’t, we’ll refer to defaultData, setting it as the data source for now (remember, it should be set in the future… eventually), while also setting it in the cache for future use. After breaking out the code into a few functions for simplicity, the result looks like this:

const defaultData = {
  todos: [
    { id: 1, name: 'Finish the Cloudflare Workers blog post', completed: false }
  ]
}

const setCache = data => KRISTIAN_TODOS.put("data", data)
const getCache = () => KRISTIAN_TODOS.get("data")

async function getTodos(request) {
  // ...previous code
  let data;
  const cache = await getCache()
  if (!cache) {
    await setCache(JSON.stringify(defaultData))
    data = defaultData
  } else {
    data = JSON.parse(cache)
  }
}

Rendering data from KV

Given the presence of data in our code, which is the cached data object for our application, we should actually take this data and make it available on screen.
<script> </html> ` In handleRequest, we can use the retrieved KV data to call the html function, and generate a Response based on it:async function handleRequest(request) { let data; // Set data using cache or defaultData from previous section... const body = html(JSON.stringify(data.todos)) const response = new Response(body, { headers: { 'Content-Type': 'text/html' } }) return response } The finished product looks something like this:Adding todos from the UIAt this point, we’ve built a Cloudflare Worker that takes data from Cloudflare KV and renders a static page based on it. That static page reads the data, and generates a todo list based on that data. Of course, the piece we’re missing is creating todos, from inside the UI. We know that we can add todos using the KV API - we could simply update the cache by saying KRISTIAN_TODOS.put(newData), but how do we update it from inside the UI?It’s worth noting here that Cloudflare’s Workers documentation suggests that any writes to your KV namespace happen via their API - that is, at its simplest form, a cURL statement:curl "<https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/storage/kv/namespaces/$NAMESPACE_ID/values/first-key>" \ -X PUT \ -H "X-Auth-Email: $CLOUDFLARE_EMAIL" \ -H "X-Auth-Key: $CLOUDFLARE_AUTH_KEY" \ --data 'My first value!' We’ll implement something similar by handling a second route in our worker, designed to watch for PUT requests to /. When a body is received at that URL, the worker will send the new todo data to our KV store.I’ll add this new functionality to my worker, and in handleRequest, if the request method is a PUT, it will take the request body and update the cache:addEventListener('fetch', event => { event.respondWith(handleRequest(event.request)) }) const setCache = data => KRISTIAN_TODOS.put("data", data) async function updateTodos(request) { const body = await request.text() const ip = request.headers.get("CF-Connecting-IP") const cacheKey = `data-${ip}`; try { JSON.parse(body) await setCache(body) return new Response(body, { status: 200 }) } catch (err) { return new Response(err, { status: 500 }) } } async function handleRequest(request) { if (request.method === "PUT") { return updateTodos(request); } else { // Defined in previous code block return getTodos(request); } } The script is pretty straightforward - we check that the request is a PUT, and wrap the remainder of the code in a try/catch block. First, we parse the body of the request coming in, ensuring that it is JSON, before we update the cache with the new data, and return it to the user. If anything goes wrong, we simply return a 500. 
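Before wiring the UI up to this route, you can sanity-check it from the browser's developer console on the deployed site. This is a quick hypothetical smoke test, not part of the tutorial code:

// Run from the browser console on the deployed site: PUT a todo
// list to the Worker and log what the route echoes back.
fetch("/", {
  method: "PUT",
  body: JSON.stringify({
    todos: [{ id: 1, name: "Test todo", completed: false }]
  })
})
  .then(response => response.text())
  .then(body => console.log("Stored:", body))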
If the route is hit with an HTTP method other than PUT - that is, GET, DELETE, or anything else - the handler simply falls through to getTodos and serves the rendered page. With this script in place, we can now add some "dynamic" functionality to our HTML page to actually hit this route.

First, we'll create an input for our todo "name", and a button for "submitting" the todo:

<div>
  <input type="text" name="name" placeholder="A new todo">
  <button id="create">Create</button>
</div>

Given that input and button, we can add a corresponding JavaScript function to watch for clicks on the button - once the button is clicked, the browser will PUT to / and submit the todo:

var createTodo = function() {
  var input = document.querySelector("input[name=name]");
  if (input.value.length) {
    fetch("/", {
      method: 'PUT',
      body: JSON.stringify({ todos: window.todos })
    });
  }
};

document.querySelector("#create")
  .addEventListener('click', createTodo);

This code updates the cache, but what about our local UI? Remember that the KV cache is eventually consistent - even if we were to update our worker to read from the cache and return it, we have no guarantee it would actually be up-to-date. Instead, let's just update the list of todos locally: we'll take our original code for rendering the todo list, make it a re-usable function called populateTodos, and call it both when the page loads and when the cache request has finished:

var populateTodos = function() {
  var todoContainer = document.querySelector("#todos");
  todoContainer.innerHTML = null;
  window.todos.forEach(todo => {
    var el = document.createElement("li");
    el.innerText = todo.name;
    todoContainer.appendChild(el);
  });
};

populateTodos();

var createTodo = function() {
  var input = document.querySelector("input[name=name]");
  if (input.value.length) {
    window.todos = [].concat(window.todos, {
      id: window.todos.length + 1,
      name: input.value,
      completed: false,
    });
    fetch("/", {
      method: 'PUT',
      body: JSON.stringify({ todos: window.todos })
    });
    populateTodos();
    input.value = "";
  }
};

document.querySelector("#create")
  .addEventListener('click', createTodo);

With the client-side code in place, deploying the new Worker should put all these pieces together. The result is an actual dynamic todo list!

Updating todos from the UI

For the final piece of our (very) basic todo list, we need to be able to update todos - specifically, marking them as completed.

Luckily, a great deal of the infrastructure for this work is already in place. We can currently update the todo list data in our cache, as evidenced by our createTodo function. Performing updates on a todo is, in fact, much more of a client-side task than a Worker-side one!

To start, let's update the client-side code for generating a todo. Instead of a ul-based list, we'll migrate the todo container and the todos themselves to divs:

<!-- <ul id="todos"></ul> becomes... -->
<div id="todos"></div>

The populateTodos function can be updated to generate a div for each todo. In addition, we'll move the name of the todo into a child element of that div:

var populateTodos = function() {
  var todoContainer = document.querySelector("#todos");
  todoContainer.innerHTML = null;
  window.todos.forEach(todo => {
    var el = document.createElement("div");
    var name = document.createElement("span");
    name.innerText = todo.name;
    el.appendChild(name);
    todoContainer.appendChild(el);
  });
}

So far, we've designed the client-side part of this code to take an array of todos in and, given that array, render out a list of simple HTML elements.
There are a number of things we've been doing that we haven't quite had a use for yet: specifically, the inclusion of IDs and the completed value on each todo. Luckily, these things work well together to support actually updating todos in the UI.

To start, it would be useful to signify the ID of each todo in the HTML. By doing this, we can refer to the element later in order to match it to the corresponding todo in the JavaScript part of our code. Data attributes, and the corresponding dataset property in JavaScript, are a perfect way to implement this. When we generate our div element for each todo, we can simply attach a data attribute called todo to each div:

window.todos.forEach(todo => {
  var el = document.createElement("div");
  el.dataset.todo = todo.id
  // ... more setup

  todoContainer.appendChild(el);
});

Inside our HTML, each div for a todo now has an attached data attribute, which looks like:

<div data-todo="1"></div>
<div data-todo="2"></div>

Now we can generate a checkbox for each todo element. This checkbox will default to unchecked for new todos, of course, but we can mark it as checked as the element is rendered in the window:

window.todos.forEach(todo => {
  var el = document.createElement("div");
  el.dataset.todo = todo.id

  var name = document.createElement("span");
  name.innerText = todo.name;

  var checkbox = document.createElement("input")
  checkbox.type = "checkbox"
  checkbox.checked = todo.completed;

  el.appendChild(checkbox);
  el.appendChild(name);
  todoContainer.appendChild(el);
})

The checkbox is set up to correctly reflect the value of completed on each todo, but it doesn't yet update when we actually check the box! To do this, we'll add an event listener on the click event, calling completeTodo. Inside the function, we'll inspect the checkbox element, find its parent (the todo div), and use its todo data attribute to find the corresponding todo in our data. Given that todo, we can toggle the value of completed, update our data, and re-render the UI:

var completeTodo = function(evt) {
  var checkbox = evt.target;
  var todoElement = checkbox.parentNode;

  var newTodoSet = [].concat(window.todos)
  var todo = newTodoSet.find(
    t => t.id == todoElement.dataset.todo
  );
  todo.completed = !todo.completed;
  window.todos = newTodoSet;
  updateTodos()
}

The updateTodos call here is a small client-side helper, defined in the full script at the end of this post, that PUTs the complete todo list back to the Worker and re-renders the UI. The final result of our code is a system that simply checks the window.todos variable, updates our Cloudflare KV cache with that value, and then does a straightforward re-render of the UI based on the data it has locally.

Conclusions and next steps

With this, we've created a pretty remarkable project: an almost entirely static HTML/JS application, transparently powered by Cloudflare KV and Workers, served at the edge. There are a number of additions you could make to this application, whether a better design (I'll leave that as an exercise for readers - you can see my version at todo.kristianfreeman.com), security, speed, etc.

One interesting and fairly trivial addition is per-user caching. Right now, the cache key is simply "data": anyone visiting the site will share a todo list with every other user. Because we have the request information inside of our worker, it's easy to make this data user-specific.
For instance, we can implement per-user caching by generating the cache key based on the requesting IP:

const ip = request.headers.get("CF-Connecting-IP")
const cacheKey = `data-${ip}`

const getCache = key => KRISTIAN_TODOS.get(key)
getCache(cacheKey)

One more deploy of our Workers project, and we have a full todo list application, with per-user functionality, served at the edge!

The final version of our Workers script looks like this:

const html = todos => `
<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width,initial-scale=1">
    <title>Todos</title>
    <link href="https://cdn.jsdelivr.net/npm/tailwindcss/dist/tailwind.min.css" rel="stylesheet">
  </head>
  <body class="bg-blue-100">
    <div class="w-full h-full flex content-center justify-center mt-8">
      <div class="bg-white shadow-md rounded px-8 pt-6 py-8 mb-4">
        <h1 class="block text-grey-800 text-md font-bold mb-2">Todos</h1>
        <div class="flex">
          <input class="shadow appearance-none border rounded w-full py-2 px-3 text-grey-800 leading-tight focus:outline-none focus:shadow-outline" type="text" name="name" placeholder="A new todo">
          <button class="bg-blue-500 hover:bg-blue-800 text-white font-bold ml-2 py-2 px-4 rounded focus:outline-none focus:shadow-outline" id="create" type="submit">Create</button>
        </div>
        <div class="mt-4" id="todos"></div>
      </div>
    </div>
  </body>
  <script>
    window.todos = ${todos || []}

    var updateTodos = function() {
      fetch("/", {
        method: 'PUT',
        body: JSON.stringify({ todos: window.todos })
      })
      populateTodos()
    }

    var completeTodo = function(evt) {
      var checkbox = evt.target
      var todoElement = checkbox.parentNode
      var newTodoSet = [].concat(window.todos)
      var todo = newTodoSet.find(t => t.id == todoElement.dataset.todo)
      todo.completed = !todo.completed
      window.todos = newTodoSet
      updateTodos()
    }

    var populateTodos = function() {
      var todoContainer = document.querySelector("#todos")
      todoContainer.innerHTML = null
      window.todos.forEach(todo => {
        var el = document.createElement("div")
        el.className = "border-t py-4"
        el.dataset.todo = todo.id

        var name = document.createElement("span")
        name.className = todo.completed ? "line-through" : ""
        name.innerText = todo.name

        var checkbox = document.createElement("input")
        checkbox.className = "mx-4"
        checkbox.type = "checkbox"
        checkbox.checked = todo.completed
        checkbox.addEventListener('click', completeTodo)

        el.appendChild(checkbox)
        el.appendChild(name)
        todoContainer.appendChild(el)
      })
    }

    populateTodos()

    var createTodo = function() {
      var input = document.querySelector("input[name=name]")
      if (input.value.length) {
        window.todos = [].concat(window.todos, {
          id: window.todos.length + 1,
          name: input.value,
          completed: false
        })
        input.value = ""
        updateTodos()
      }
    }

    document.querySelector("#create").addEventListener('click', createTodo)
  </script>
</html>
`

const defaultData = { todos: [] }

const setCache = (key, data) => KRISTIAN_TODOS.put(key, data)
const getCache = key => KRISTIAN_TODOS.get(key)

async function getTodos(request) {
  const ip = request.headers.get('CF-Connecting-IP')
  const cacheKey = `data-${ip}`
  let data
  const cache = await getCache(cacheKey)
  if (!cache) {
    await setCache(cacheKey, JSON.stringify(defaultData))
    data = defaultData
  } else {
    data = JSON.parse(cache)
  }
  const body = html(JSON.stringify(data.todos || []))
  return new Response(body, {
    headers: { 'Content-Type': 'text/html' },
  })
}

async function updateTodos(request) {
  const body = await request.text()
  const ip = request.headers.get('CF-Connecting-IP')
  const cacheKey = `data-${ip}`
  try {
    JSON.parse(body)
    await setCache(cacheKey, body)
    return new Response(body, { status: 200 })
  } catch (err) {
    return new Response(err, { status: 500 })
  }
}

async function handleRequest(request) {
  if (request.method === 'PUT') {
    return updateTodos(request)
  } else {
    return getTodos(request)
  }
}

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

You can find the source code for this project, as well as a README with deployment instructions, on GitHub.
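One caveat with IP-based keys: users behind a shared NAT or corporate proxy will all see the same list. A sketch of an alternative, keying on a hypothetical session cookie and falling back to the IP - my own variation, not part of the original project:

// Hypothetical variation: derive the cache key from a "session"
// cookie when present, falling back to the client IP. The cookie
// name and helper are assumptions, not part of the tutorial.
const getCacheKey = request => {
  const cookies = request.headers.get('Cookie') || ''
  const match = cookies.match(/(?:^|;\s*)session=([^;]+)/)
  const id = match ? match[1] : request.headers.get('CF-Connecting-IP')
  return `data-${id}`
}

getTodos and updateTodos would then call getCacheKey(request) instead of building the key inline.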

Workers KV — Cloudflare's distributed database

CloudFlare Blog -

Today, we're excited to announce that Workers KV is entering general availability and is ready for production use!

What is Workers KV?

Workers KV is a highly distributed, eventually consistent, key-value store that spans Cloudflare's global edge. It allows you to store billions of key-value pairs and read them with ultra-low latency anywhere in the world. Now you can build entire applications with the performance of a CDN static cache.

Why did we build it?

Workers is a platform that lets you run JavaScript on Cloudflare's global edge of 175+ data centers. With only a few lines of code, you can route HTTP requests, modify responses, or even create new responses without an origin server.

// A Worker that handles a single redirect,
// such a humble beginning...
addEventListener("fetch", event => {
  event.respondWith(handleOneRedirect(event.request))
})

async function handleOneRedirect(request) {
  let url = new URL(request.url)
  let device = request.headers.get("CF-Device-Type")
  // If the device is mobile, add a prefix to the hostname.
  // (eg. example.com becomes mobile.example.com)
  if (device === "mobile") {
    url.hostname = "mobile." + url.hostname
    return Response.redirect(url, 302)
  }
  // Otherwise, send the request to the original hostname.
  return await fetch(request)
}

Customers quickly came to us with use cases that required a way to store persistent data. Following our example above, it's easy to handle a single redirect, but what if you want to handle billions of them? You would have to hard-code them into your Workers script, fit it all in under 1 MB, and re-deploy it every time you wanted to make a change — yikes! That's why we built Workers KV.

// A Worker that can handle billions of redirects,
// now that's more like it!
addEventListener("fetch", event => {
  event.respondWith(handleBillionsOfRedirects(event.request))
})

async function handleBillionsOfRedirects(request) {
  let prefix = "/redirect"
  let url = new URL(request.url)
  // Check if the URL is a special redirect.
  // (eg. example.com/redirect/<random-hash>)
  if (url.pathname.startsWith(prefix)) {
    // REDIRECTS is a custom variable that you define,
    // it binds to a Workers KV "namespace" (aka. a storage bucket).
    let redirect = await REDIRECTS.get(url.pathname.replace(prefix, ""))
    if (redirect) {
      url.pathname = redirect
      return Response.redirect(url, 302)
    }
  }
  // Otherwise, send the request to the original path.
  return await fetch(request)
}

With only a few changes from our previous example, we scaled from one redirect to billions - that's just a taste of what you can build with Workers KV.

How does it work?

Distributed data stores are often modeled using the CAP theorem, which states that a distributed system can only guarantee 2 of the following 3 properties:

Consistency - is my data the same everywhere?
Availability - is my data accessible all the time?
Partition tolerance - is my data resilient to regional outages?

Workers KV chooses to guarantee Availability and Partition tolerance. This combination is known as eventual consistency, which presents Workers KV with two unique competitive advantages:

Reads are ultra fast (median of 12 ms) since they're powered by our caching technology.
Data is available across 175+ edge data centers and resilient to regional outages.

There are tradeoffs to eventual consistency, though. If two clients write different values to the same key at the same time, the last client to write eventually "wins" and its value becomes globally consistent.
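A minimal sketch of that last-write-wins behavior, using a hypothetical CONFIG namespace binding (not from this post):

// Hypothetical: two Workers in different locations update the same key.
async function writeFromTokyo() {
  await CONFIG.put("lunch", "sushi")  // first write
}
async function writeFromLondon() {
  await CONFIG.put("lunch", "pizza")  // slightly later write
}
// Once the system settles, every location reading the key
// sees the last write:
async function readAnywhere() {
  return CONFIG.get("lunch")          // eventually "pizza"
}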
This also means that if a client writes to a key and that same client reads that same key, the values may be inconsistent for a short amount of time.

To help visualize this scenario, here's a real-life example amongst three friends:

Suppose Matthew, Michelle, and Lee are planning their weekly lunch. Matthew decides they're going out for sushi. Matthew tells Michelle their sushi plans, and Michelle agrees. Lee, not knowing the plans, tells Michelle they're actually having pizza.

An hour later, Michelle and Lee are waiting at the pizza parlor while Matthew is sitting alone at the sushi restaurant — what went wrong? We can chalk this up to eventual consistency: after waiting for a few minutes, Matthew looks at his updated calendar and eventually finds the new truth, that they're going out for pizza instead.

While it may take minutes in real life, Workers KV is much faster: it can achieve global consistency in less than 60 seconds. Additionally, when a Worker writes to a key and then immediately reads that same key, it can expect the values to be consistent if both operations came from the same location.

When should I use it?

Now that you understand the benefits and tradeoffs of using eventual consistency, how do you determine if it's the right storage solution for your application? Simply put, if you want global availability with ultra-fast reads, Workers KV is right for you.

However, if your application is frequently writing to the same key, there is an additional consideration. We call it "the Matthew question": are you okay with the Matthews of the world occasionally going to the wrong restaurant?

You can imagine use cases (like our redirect Worker example) where this doesn't make any material difference. But if you decide to keep track of a user's bank account balance, you would not want the possibility of two balances existing at once, since the user could purchase something with money they've already spent.

What can I build with it?

Here are a few examples of applications that have been built with KV:

Mass redirects - handle billions of HTTP redirects.
User authentication - validate user requests to your API.
Translation keys - dynamically localize your web pages.
Configuration data - manage who can access your origin.
Step functions - sync state data between multiple API functions.
Edge file store - host large amounts of small files.

We've highlighted several of those use cases in our previous blog post. We also have some more in-depth code walkthroughs, including a recently published blog post on how to build an online to-do list with Workers KV.

What's new since beta?

By far, our most common request was to make it easier to write data to Workers KV. That's why we're releasing three new ways to make that experience even better:

1. Bulk Writes

If you want to import your existing data into Workers KV, you don't want to go through the hassle of sending an HTTP request for every key-value pair. That's why we added a bulk endpoint to the Cloudflare API. Now you can upload up to 10,000 pairs (up to 100 MB of data) in a single PUT request.

curl "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/storage/kv/namespaces/$NAMESPACE_ID/bulk" \
  -X PUT \
  -H "X-Auth-Key: $CLOUDFLARE_AUTH_KEY" \
  -H "X-Auth-Email: $CLOUDFLARE_AUTH_EMAIL" \
  -d '[
    {"key": "built_by", "value": "kyle, alex, charlie, andrew, and brett"},
    {"key": "reviewed_by", "value": "joaquin"},
    {"key": "approved_by", "value": "steve"}
  ]'

Let's walk through an example use case: you want to off-load your website translation to Workers.
Since you're reading translation keys frequently and only occasionally updating them, this application works well with the eventual consistency model of Workers KV.

In this example, we hook into Crowdin, a popular platform for managing translation data. This Worker responds to a /translate endpoint, downloads all your translation keys, and bulk writes them to Workers KV so you can read them later on our edge:

addEventListener("fetch", event => {
  if (new URL(event.request.url).pathname === "/translate") {
    event.respondWith(uploadTranslations())
  }
})

async function uploadTranslations() {
  // Ask Crowdin for all of our translations.
  var response = await fetch(
    "https://api.crowdin.com/api/project" +
    "/:ci_project_id/download/all.zip?key=:ci_secret_key")
  // If Crowdin is responding, parse the response into
  // a single JSON with all of our translations.
  if (response.ok) {
    var translations = await zipToJson(response)
    return await bulkWrite(translations)
  }
  // Return the errored response from Crowdin.
  return response
}

async function bulkWrite(keyValuePairs) {
  return fetch(
    "https://api.cloudflare.com/client/v4/accounts" +
    "/:cf_account_id/storage/kv/namespaces/:cf_namespace_id/bulk",
    {
      method: "PUT",
      headers: {
        "Content-Type": "application/json",
        "X-Auth-Key": ":cf_auth_key",
        "X-Auth-Email": ":cf_email"
      },
      body: JSON.stringify(keyValuePairs)
    }
  )
}

async function zipToJson(response) {
  // ... omitted for brevity ...
  // (eg. https://stuk.github.io/jszip)
  return [
    {key: "hello.EN", value: "Hello World"},
    {key: "hello.ES", value: "Hola Mundo"}
  ]
}

Now, when you want to translate a page, all you have to do is read from Workers KV:

async function translate(keys, lang) {
  // You bind your translations namespace to the TRANSLATIONS variable.
  return Promise.all(keys.map(key => TRANSLATIONS.get(key + "." + lang)))
}

2. Expiring Keys

By default, key-value pairs stored in Workers KV last forever. However, sometimes you want your data to auto-delete after a certain amount of time. That's why we're introducing the expiration and expirationTtl options for write operations.

// Key expires 60 seconds from now.
NAMESPACE.put("myKey", "myValue", {expirationTtl: 60})

// Key expires once the given UNIX timestamp (in seconds) is in the past.
NAMESPACE.put("myKey", "myValue", {expiration: 1247788800})

# You can also set keys to expire from the Cloudflare API.
curl "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/storage/kv/namespaces/$NAMESPACE_ID/values/$KEY?expiration_ttl=$EXPIRATION_IN_SECONDS" \
  -X PUT \
  -H "X-Auth-Key: $CLOUDFLARE_AUTH_KEY" \
  -H "X-Auth-Email: $CLOUDFLARE_AUTH_EMAIL" \
  -d "$VALUE"

Let's say you want to block users that have been flagged as inappropriate from your website, but only for a week. With an expiring key, you can set the expiry time and not have to worry about deleting the key later.

In this example, we assume users and IP addresses are one and the same. If your application has authentication, you could use access tokens as the key identifier.

addEventListener("fetch", event => {
  var url = new URL(event.request.url)
  // An internal API that blocks a new user IP.
  // (eg. example.com/block/1.2.3.4)
  if (url.pathname.startsWith("/block")) {
    var ip = url.pathname.split("/").pop()
    event.respondWith(blockIp(ip))
  } else {
    // Other requests check if the IP is blocked.
    event.respondWith(handleRequest(event.request))
  }
})

async function blockIp(ip) {
  // Values are allowed to be empty in KV,
  // we don't need to store any extra information anyway.
  await BLOCKED.put(ip, "", {expirationTtl: 60*60*24*7})
  return new Response("ok")
}

async function handleRequest(request) {
  var ip = request.headers.get("CF-Connecting-IP")
  if (ip) {
    // A null result means the key is absent; note that the stored
    // value itself is an empty (falsy) string.
    var blocked = await BLOCKED.get(ip)
    // If we detect an IP and it's blocked, respond with a 403 error.
    if (blocked !== null) {
      return new Response("You are blocked!", {status: 403})
    }
  }
  // Otherwise, pass through the original request.
  return fetch(request)
}

3. Larger Values

We've increased our size limit on values from 64 kB to 2 MB. This is quite useful if you need to store buffer-based or file data in Workers KV.

Consider this scenario: you want to let your users upload their favorite GIF to their profile, without having to store these GIFs as binaries in your database or manage another cloud storage bucket.

Workers KV is a great fit for this use case! You can create a Workers KV namespace for your users' GIFs that is fast and reliable wherever your customers are located.

In this example, users upload a link to their favorite GIF, then a Worker downloads it and stores it in Workers KV.

addEventListener("fetch", event => {
  var url = new URL(event.request.url)
  var arg = url.pathname.split("/").pop()
  // User sends a URI-encoded link to the GIF they wish to upload.
  // (eg. example.com/api/upload_gif/<encoded-uri>)
  if (url.pathname.startsWith("/api/upload_gif")) {
    event.respondWith(uploadGif(arg))
  // Profile contains a link to view the GIF.
  // (eg. example.com/api/view_gif/<username>)
  } else if (url.pathname.startsWith("/api/view_gif")) {
    event.respondWith(getGif(arg))
  }
})

async function uploadGif(url) {
  // Fetch the GIF from the Internet.
  var gif = await fetch(decodeURIComponent(url))
  var buffer = await gif.arrayBuffer()
  // Upload the GIF as a buffer to Workers KV.
  // (user is assumed to come from your application's auth context.)
  await GIFS.put(user.name, buffer)
  return gif
}

async function getGif(username) {
  var gif = await GIFS.get(username, "arrayBuffer")
  // If the user has set one, respond with the GIF.
  if (gif) {
    return new Response(gif, {headers: {"Content-Type": "image/gif"}})
  } else {
    return new Response("User has no GIF!", {status: 404})
  }
}

Lastly, we want to thank all of our beta customers. It was your valuable feedback that led us to develop these changes to Workers KV. Make sure to stay in touch with us - we're always looking ahead for what's next, and we love hearing from you!

Pricing

We're also ready to announce our GA pricing. If you're one of our Enterprise customers, your pricing remains unchanged.

$0.50 / GB of data stored, 1 GB included
$0.50 / million reads, 10 million included
$5 / million write, list, and delete operations, 1 million included

As a rough example, assuming the included amounts are free monthly allowances: an application storing 5 GB, serving 50 million reads, and performing 3 million writes in a month would pay (4 × $0.50) + (40 × $0.50) + (2 × $5) = $32.

During the beta period, we learned customers don't want to just read values at our edge; they want to write values from our edge too. Since there is high demand for these edge operations, which are more costly, we have started charging for non-read operations per month.

Limits

As mentioned earlier, we increased our value size limit from 64 kB to 2 MB. We've also removed our cap on the number of keys per namespace — it's now unlimited. Here are our GA limits:

Up to 20 namespaces per account, each with unlimited keys
Keys of up to 512 bytes and values of up to 2 MB
Unlimited writes per second for different keys
One write per second to the same key
Unlimited reads per second per key

Try it out now!

Now open to all customers, you can start using Workers KV today from your Cloudflare dashboard under the Workers tab. You can also look at our updated documentation. We're really excited to see what you all can build with Workers KV!

SOAR Allows Cybersecurity Talent to Focus on Highest Value Tasks

The Rackspace Blog & Newsroom -

The cybersecurity industry needs relief — and it may be here, thanks to SOAR technology. In 2018, the cybersecurity workforce gap reached 2.9 million globally, with a shortage of almost half a million skilled personnel in North America alone, according to one study. At the same time, cyber threats continue to grow in sophistication and cost, leading […] The post SOAR Allows Cybersecurity Talent to Focus on Highest Value Tasks appeared first on The Official Rackspace Blog.
