Corporate Blogs

Join WP Engine at SXSW 2019

WP Engine -

SXSW 2019 is just days away. Beginning this Friday, digital pioneers, business leaders, and artists from around the world will head to Austin for the annual festival. For ten days, the capital city will be bustling with the best of music, technology, and film. WP Engine is proud to take part in SXSW 2019, and… The post Join WP Engine at SXSW 2019 appeared first on WP Engine.

4 Best Free WordPress Themes for Photography Blogs

HostGator Blog -

The post 4 Best Free WordPress Themes for Photography Blogs appeared first on HostGator Blog. A well-designed blog is a must, by definition, for photo bloggers, but professional photographers and Instagram addicts can benefit from having a photo blog, too. That’s because a blog that’s set up to show off images does more than connect bloggers and fans. A regularly updated blog also helps pro photographers keep their portfolio current and helps them rank better in search results. And photographers with a following on Instagram can use a blog to reach a wider audience with their images and build a list of prospects who may want to buy prints, products, or how-to know-how. To create a photo-friendly blog, you’ll need a theme that’s designed with images in mind. Here are four of our favorite free WordPress themes for photography blogs.

1. Camer

Camer is an image-grid based theme from Blogging Theme Styles. Images on Camer’s pages only display text when site visitors tap or mouse over them, which keeps visitors’ focus on your work, not your words. Camer’s layout for computer screens features a full-width text header above a 4-column image grid. On phones, Camer displays images in a single column. The free version of Camer is designed to work with Gutenberg, the new modular editor for WordPress that’s meant to make it easier for users without web design backgrounds to create and update their websites. Camer’s free version includes an unusually wide array of design options, such as five page templates, thirteen sidebar position options, a built-in menu for social media feeds, and more. To get tools that let you adjust the width of each section on your pages, plus additional layouts, page templates, and sidebar positions, you can upgrade to Camer Pro ($49).

2. Himalayas

Himalayas from Theme Grill is one of the most popular single-page themes around, and it’s a great option for photo bloggers who want to keep their site simple while showing off their best work. The full-width banner slider is followed by a blog section with featured images and text snippets and a portfolio section that’s all images with mouse-over/tap text display. There’s a built-in call-to-action button so you can invite your visitors to sign up for your newsletter, contact you to book a sitting, or visit your online store. Services and portfolio widgets help showcase your work, too. The pro version ($69) is WooCommerce compatible and includes Google fonts along with font size, color palette, and slider options not available in the free version.

3. Image Gridly

Photographers can display their work and their words with Image Gridly from Superb Themes. The name probably gives away that the layout for this theme is an image grid. Unlike Camer (above), Image Gridly overlays titles on the lower third of each image, so users can see text related to each photo without having to tap or mouse over. Image Gridly’s desktop display includes a full-width banner photo with a three-column image grid below it. On smartphones, Image Gridly’s display has a full-width banner followed by featured post images displayed in a single column. Image Gridly’s free version is a great choice for showing off photography, but it lacks some of the features that other free themes include, like tools to customize the theme’s appearance, Google fonts, and speed and search optimization. Upgrading to the premium version (starting at $26) adds these features and tools.

4. Juliet
Juliet is a minimalist, image-heavy, feminine theme from Lyra Themes that’s a solid choice for photo bloggers who enjoy writing about their work. It’s designed as a fashion blog theme, but the full-width image header followed by a 4-column row of featured images from different blog categories works for other types of photography, too. Juliet is responsive, WooCommerce compatible, and lightweight for fast image load times. The free version also gives you options for image and text logos, overlay colors for the banner, background color and image, sidebars, headers and footers, and two different skins. Although the free version has enough features to get most photo bloggers off to a strong start, the pro version ($35 plus $8/month for support and updates) has a lot to recommend it, like a lookbook template that could make a great portfolio tool, additional video display options, Jetpack-powered social media sharing tools, and an email subscription widget.

Picturing Your Ideal Photo Blog Theme

Each theme publisher offers a live demo so you can see how their designs look and work on computers, tablets, and phones. However, it’s a good idea to try out the themes you like with your own blog content before you commit to one theme. As you try them out, ask yourself a few questions:

How does the theme look with your content? Does the design of the theme work with the subject matter and mood of your photos? For example, a soft-looking theme like Juliet might be a great showcase for portrait photography but not so much for shots of brutalist architecture.

Do you want to make money with your blog? If you plan to sell prints of your work on your site, display ads, or set up a customer service chatbot to connect with potential clients, does the theme integrate easily with the tools you’ll need to use?

How quickly does your site load with the theme installed? Images can dramatically slow down page load times, which can lead to lower search-results rankings, more bounces, and less traffic overall. Ideally, each page on your photography site should load in less than 3 seconds.

Once you start using a theme, keep an eye on your blog’s bounce rate, the average length of time visitors spend on your site, and whether conversions are increasing, falling, or staying flat to get a sense of whether your theme is helping visitors get the most from your content. You can also listen for feedback from your visitors to see what they think of it. Do the images display properly for them? Can they navigate around the site easily? Use their questions and comments to get a clear picture of where the theme is working for you and where it may need improvement. Then, optimize your photo blog with these essential tools. Find the post on the HostGator Blog

WP Engine CEO Heather Brunner Named DivInc Executive of the Year

WP Engine -

WP Engine CEO Heather Brunner has been named Executive of the Year by DivInc, the first tech pre-accelerator in Texas focused on championing diversity within the startup ecosystem. Their mission is to empower people of color and women entrepreneurs and help them overcome barriers to building successful high growth businesses by providing them with access… The post WP Engine CEO Heather Brunner Named DivInc Executive of the Year appeared first on WP Engine.

WordPress/Joomla!/Drupal - A Security Comparison

cPanel Blog -

One of the more popular methods of publishing content on a website is a CMS (Content Management System). A CMS generally has a graphical user interface where a user can log in, create or upload content, update existing content, design how they want their website to appear, and perform other related tasks. The three most popular CMS choices by usage are WordPress, Joomla, and Drupal. A cursory glance at these three different pieces of software shows …

Why Your Small Business Needs a Website

InMotion Hosting Blog -

Are you wondering why your small business needs a website? According to a study by B2B research firm Clutch.co, only 63% of small businesses have a website. The reasons cited often include “Websites cost too much,” or, “I don’t know how to build a website on my own.” Yet in 2019, with so many easy-to-use website builders available, those excuses couldn’t be further from the truth. Here’s why your small business absolutely NEEDS a website and how you can build one quickly and easily (without any coding experience!): Why Your Small Business Needs a Website There are lots of reasons your business needs to be online. Continue reading Why Your Small Business Needs a Website at The Official InMotion Hosting Blog.

Start Attracting Clients Without Spending All Your Money

The Domain.com Blog -

Now that you’ve vetted your business idea, and have your website up and running, it’s time to grow your client list. The larger your list of potential clients, the better your chances at bringing in revenue. Reaching out to people you don’t know to ask for their business can seem daunting, but there are ways to make it easier. Learn how to get started below. It all starts with the right domain. Get yours today at Domain.com.

Building client relationships is all about being social

As great as your business idea is, it won’t go far on its own — you need the financial support only a growing client list can provide. But where do you find those clients? If you’re like most of the population, you have a free Facebook account. You can take that free tool and start to leverage it to grow your client list. We bet you’re thinking, “But my Facebook Business Page doesn’t get much engagement; it only reaches a minuscule percentage of my followers!” Well, yes, you’re right. Keep in mind that you’re not restricted to only using your Facebook Business Page for promotion. If you want to build relationships with potential clients then you need to find them where they’re already spending their time. If you provide local services then consider joining a local Facebook group for your town, or state. Don’t start bombarding each and every local group you can find; instead, find ways to meaningfully contribute. Can you answer someone’s question and prove to everyone else who reads it that you’re an authority on the matter? At the root of every relationship is trust, so start building yours with prospective clients sooner rather than later.

Content and SEO are your best friends

SEO. It’s a pretty big buzzword, and it’s not going anywhere. SEO stands for “Search Engine Optimization,” and it’s the process of tweaking your website to improve its chances of reaching the top of organic search results – getting you more, and better, leads. There are many great resources where you can learn the ins and outs of SEO, like Search Engine Land. Content greatly influences SEO, so start a blog on your website. Write quality pieces that are relevant to your audience of prospective clients and current customers. Make sure to use the correct terminology and keywords. If your articles provide value then folks are less likely to bounce off the page, which also helps your search engine rankings.

Toot your own horn

When you’re in business for yourself, you need to be your biggest advocate — never forget that. You don’t need to become a full-fledged braggart, but you do need to show off a little. Make sure your website has a complete “About Me” section, and it’d be a good idea to provide examples of work you’ve done. If someone is considering hiring you, then they need to know who you are and the caliber of your work. There’s nothing more unprofessional than not taking the time to build a website for your business.

Take the first step to growing your client list by getting social

You can take the pressure off of growing your client list by trying the methods we discussed above. Nothing happens in a silo, so be social and connect with prospective clients where they enjoy spending their time. By building trust and relationships, you start building a client list too. It all starts with the right domain. Get yours today at Domain.com. The post Start Attracting Clients Without Spending All Your Money appeared first on Domain.com | Blog.

Facebook Best Practices for Musicians

InMotion Hosting Blog -

As a musician, having an active, engaged fanbase is the key to success. A loyal following allows you to be a professional performer and make money off your art. But with hundreds of thousands of other artists out there, how can you manage to attract anyone’s attention? The answer: use social media to build a fanbase from the ground up. Social media is a powerful resource used not only by musicians and other artists but by professionals in almost every other industry. Continue reading Facebook Best Practices for Musicians at The Official InMotion Hosting Blog.

International Women’s Day: Working Together to Create #BalanceForBetter

LinkedIn Official Blog -

By now you’re probably familiar with the data showing that if we continue progressing at our current pace, true gender equality in the workplace won’t exist for more than 200 years. While we have seen progress in some industries, gaps still persist in leadership representation (as illustrated by a UK campaign highlighting that there are as many CEOs who are women as there are CEOs who are men named Dave) and in wages. According to the Census Bureau, in 2017 the gender wage gap for all women remained…

Building serverless apps with components from the AWS Serverless Application Repository

Amazon Web Services Blog -

Guest post by AWS Serverless Hero Aleksandar Simovic. Aleksandar is a Senior Software Engineer at Science Exchange and co-author of “Serverless Applications with Node.js” with Slobodan Stojanovic, published by Manning Publications. He also writes on Medium on both business and technical aspects of serverless.

Many of you have built a user login or an authorization service from scratch a dozen times. And you’ve probably built another dozen services to process payments and another dozen to export PDFs. We’ve all done it, and we’ve often all done it redundantly. Using the AWS Serverless Application Repository, you can now spend more of your time and energy developing business logic to deliver the features that matter to customers, faster.

What is the AWS Serverless Application Repository?

The AWS Serverless Application Repository allows developers to deploy, publish, and share common serverless components among their teams and organizations. Its public library contains community-built, open-source, serverless components that are instantly searchable and deployable with customizable parameters and predefined licensing. They are built and published using the AWS Serverless Application Model (AWS SAM), the infrastructure-as-code YAML language used for templating AWS resources.

How to use AWS Serverless Application Repository in production

I wanted to build an application that enables customers to select a product and pay for it. Sounds like a substantial effort, right? Using AWS Serverless Application Repository, it didn’t actually take me much time. Broadly speaking, I built:

A product page with a Buy button, automatically tied to the Stripe Checkout SDK. When a customer chooses Buy, the page displays the Stripe Checkout payment form.
A Stripe payment service with an API endpoint that accepts a callback from Stripe, charges the customer, and sends a notification for successful transactions.

For this post, I created a pre-built sample static page that displays the product details and has the Stripe Checkout JavaScript on the page. Even with the pre-built page, integrating the payment service is still work. But many other developers have built a payment application at least once, so why should I spend time building identical features? This is where AWS Serverless Application Repository came in handy.

Find and deploy a component

First, I searched for an existing component in the AWS Serverless Application Repository public library. I typed “stripe” and opted in to see applications that created custom IAM roles or resource policies. From the results, I selected the application titled api-lambda-stripe-charge and chose Deploy on the component’s detail page. Before I deployed any component, I inspected it to make sure it was safe and production-ready.

Evaluate a component

The recommended approach for evaluating an AWS Serverless Application Repository component is a four-step process:

1. Check component permissions.
2. Inspect the component implementation.
3. Deploy and run the component in a restricted environment.
4. Monitor the component’s behavior and cost before using it in production.

This might appear to negate the quick delivery benefits of AWS Serverless Application Repository, but in reality, you only verify each component one time. Then you can easily reuse and share the component throughout your company. Here’s how to apply this approach while adding the Stripe component.

1. Check component permissions

There are two types of components: public and private.
Public components are open source, while private components do not have to be. In this case, the Stripe component is public. I reviewed the code to make sure that it doesn’t give unnecessary permissions that could potentially compromise security.

In this case, the Stripe component is on GitHub. On the component page, I opened the template.yaml file. There was only one AWS Lambda function there, so I found the Policies attribute and reviewed the policies that it uses.

CreateStripeCharge:
  Type: AWS::Serverless::Function
  Properties:
    Handler: index.handler
    Runtime: nodejs8.10
    Timeout: 10
    Policies:
      - SNSCrudPolicy:
          TopicName: !GetAtt SNSTopic.TopicName
      - Statement:
          Effect: Allow
          Action:
            - ssm:GetParameters
          Resource: !Sub arn:${AWS::Partition}:ssm:${AWS::Region}:${AWS::AccountId}:parameter/${SSMParameterPrefix}/*

The component was using a predefined AWS SAM policy template and a custom one. These predefined policy templates are sets of AWS permissions that are verified and recommended by the AWS security team. Using these policies to specify resource permissions is one of the recommended practices for serverless components on AWS Serverless Application Repository. The other, custom IAM policy allows the function to retrieve AWS Systems Manager parameters, which is the best practice to store secure values, such as the Stripe secret key.

2. Inspect the component implementation

I wanted to ensure that the component’s main business logic did only what it was meant to do, which was to create a Stripe charge. It’s also important to look out for unknown third-party HTTP calls to prevent leaks. Then I reviewed this project’s dependencies. For this inspection, I used PureSec, but tools like those offered by Protego are another option.

The main business logic was in the charge-customer.js file. It revealed straightforward logic to simply invoke the Stripe create charge and then publish a notification with the created charge. I saw this reflected in the following code:

return paymentProcessor.createCharge(token, amount, currency, description)
  .then(chargeResponse => {
    createdCharge = chargeResponse;
    return pubsub.publish(createdCharge, TOPIC_ARN);
  })
  .then(() => createdCharge)
  .catch((err) => {
    console.log(err);
    throw err;
  });

The paymentProcessor and pubsub values are adapters for the communication with Stripe and Amazon SNS, respectively. I always like to look and see how they work.

3. Deploy and run the component in a restricted environment

Maintaining a separate, restricted AWS account in which to test your serverless applications is a best practice for serverless development. I always ensure that my test account has strict AWS Billing and Amazon CloudWatch alarms in place. I signed in to this separate account, opened the Stripe component page, and manually deployed it. After deployment, I needed to verify how it ran. Because this component only has one Lambda function, I looked for that function in the Lambda console and opened its details page so that I could verify the code.

4. Monitor behavior and cost before using a component in production

When everything works as expected in my test account, I usually add monitoring and performance tools to my component to help diagnose any incidents and evaluate component performance. I often use Epsagon and Lumigo for this, although adding those steps would have made this post too long. I also wanted to track the component’s cost. To do this, I added a strict Billing alarm that tracked the component cost and the cost of each AWS resource within it.
After the component passed these four tests, I was ready to deploy it by adding it to my existing product-selection application.

Deploy the component to an existing application

To add my Stripe component into my existing application, I re-opened the component Review, Configure, and Deploy page and chose Copy as SAM Resource. That copied the necessary template code to my clipboard. I then added it to my existing serverless application by pasting it into my existing AWS SAM template, under Resources. It looked like the following:

Resources:
  ShowProduct:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs8.10
      Timeout: 10
      Events:
        Api:
          Type: Api
          Properties:
            Path: /product/:productId
            Method: GET

  apilambdastripecharge:
    Type: AWS::Serverless::Application
    Properties:
      Location:
        ApplicationId: arn:aws:serverlessrepo:us-east-1:375983427419:applications/api-lambda-stripe-charge
        SemanticVersion: 3.0.0
      Parameters:
        # (Optional) Cross-origin resource sharing (CORS) Origin. You can specify a single origin, all origins with "*", or leave it empty and no CORS is applied.
        CorsOrigin: YOUR_VALUE
        # This component assumes that the Stripe secret key needed to use the Stripe Charge API is stored as SecureStrings in Parameter Store under the prefix defined by this parameter. See the component README.
        # SSMParameterPrefix: lambda-stripe-charge # Uncomment to override the default value

Outputs:
  ApiUrl:
    Value: !Sub https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Stage/product/123
    Description: The URL of the sample API Gateway

I copied and pasted an AWS::Serverless::Application AWS SAM resource, which points to the component by ApplicationId and its SemanticVersion. Then, I defined the component’s parameters. I set CorsOrigin to “*” for demonstration purposes. I didn’t have to set the SSMParameterPrefix value, as it picks up a default value. But I did set up my Stripe secret key in the Systems Manager Parameter Store, by running the following command:

aws ssm put-parameter --name lambda-stripe-charge/stripe-secret-key --value --type SecureString --overwrite

In addition to parameters, components also contain outputs. An output is an externalized component resource or value that you can use with other applications or components. For example, the output for the api-lambda-stripe-charge component is SNSTopic, an Amazon SNS topic. This enables me to attach another component or business logic to get a notification when a successful payment occurs. For example, a lambda-send-email-ses component that sends an email upon successful payment could be attached, too.

To finish, I ran the following two commands:

aws cloudformation package --template-file template.yaml --output-template-file output.yaml --s3-bucket YOUR_BUCKET_NAME
aws cloudformation deploy --template-file output.yaml --stack-name product-show-n-pay --capabilities CAPABILITY_IAM

For the second command, you could add parameter overrides as needed. My product-selection and payment application was successfully deployed!

Summary

AWS Serverless Application Repository enables me to share and reuse common components, services, and applications so that I can really focus on building core business value. In a few steps, I created an application that enables customers to select a product and pay for it. It took a matter of minutes, not hours or days! You can see that it doesn’t take long to cautiously analyze and check a component.
That component can now be shared with other teams throughout my company so that they can eliminate their redundancies, too. Now you’re ready to use AWS Serverless Application Repository to accelerate the way that your teams develop products, deliver features, and build and share production-ready applications.

Four Atomic Blocks for Creating Dynamic Website Content

WP Engine -

At WP Engine, our goal is to help you create stunning, functional, and highly performant WordPress sites. We want to facilitate an easy path for our customers to build and maintain beautiful websites. One of the reasons we acquired StudioPress and the Genesis Framework was because we wanted our customers to have easily accessible, highly regarded… The post Four Atomic Blocks for Creating Dynamic Website Content appeared first on WP Engine.

How to Safeguard Your Website from Malware

InMotion Hosting Blog -

With huge companies like Facebook and Lyft getting hacked on the daily, it’s no wonder you’re concerned about web security. As a business owner yourself, it’s important to keep your information (and that of your customers) safe. After all, just one small incident could end up costing you plenty of valuable time – and money. Unfortunately, there are lots of ways a virus or malware could find its way onto your site. Continue reading How to Safeguard Your Website from Malware at The Official InMotion Hosting Blog.

Learn about AWS Services & Solutions – March AWS Online Tech Talks

Amazon Web Services Blog -

Join us this March to learn about AWS services and solutions. The AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. These tech talks, led by AWS solutions architects and engineers, feature technical deep dives, live demonstrations, customer examples, and Q&A with AWS experts. Register now! Note – All sessions are free and in Pacific Time.

Tech talks this month:

Compute
March 26, 2019 | 11:00 AM – 12:00 PM PT – Technical Deep Dive: Running Amazon EC2 Workloads at Scale – Learn how you can optimize your workloads running on Amazon EC2 for cost and performance, all while handling peak demand.
March 27, 2019 | 9:00 AM – 10:00 AM PT – Introduction to AWS Outposts – Learn how you can run AWS infrastructure on-premises with AWS Outposts for a truly consistent hybrid experience.
March 28, 2019 | 1:00 PM – 2:00 PM PT – Deep Dive on OpenMPI and Elastic Fabric Adapter (EFA) – Learn how you can optimize your workloads running on Amazon EC2 for cost and performance, all while handling peak demand.

Containers
March 21, 2019 | 11:00 AM – 12:00 PM PT – Running Kubernetes with Amazon EKS – Learn how to run Kubernetes on AWS with Amazon EKS.
March 22, 2019 | 9:00 AM – 10:00 AM PT – Deep Dive Into Container Networking – Dive deep into microservices networking and how you can build, secure, and manage the communications into, out of, and between the various microservices that make up your application.

Data Lakes & Analytics
March 19, 2019 | 9:00 AM – 10:00 AM PT – Fuzzy Matching and Deduplicating Data with ML Transforms for AWS Lake Formation – Learn how to use ML Transforms for AWS Glue to link and de-duplicate matching records.
March 20, 2019 | 9:00 AM – 10:00 AM PT – Customer Showcase: Perform Real-time ETL from IoT Devices into your Data Lake with Amazon Kinesis – Learn best practices for how to perform real-time extract-transform-load into your data lake with Amazon Kinesis.
March 20, 2019 | 11:00 AM – 12:00 PM PT – Machine Learning Powered Business Intelligence with Amazon QuickSight – Learn how Amazon QuickSight leverages powerful ML and natural language capabilities to generate insights that help you discover the story behind the numbers.

Databases
March 18, 2019 | 9:00 AM – 10:00 AM PT – What’s New in PostgreSQL 11 – Find out what’s new in PostgreSQL 11, the latest major version of the popular open source database, and learn about AWS services for running highly available PostgreSQL databases in the cloud.
March 19, 2019 | 1:00 PM – 2:00 PM PT – Introduction on Migrating your Oracle/SQL Server Databases over to the Cloud using AWS’s New Workload Qualification Framework – Get an introduction on how AWS’s Workload Qualification Framework can help you with your application and database migrations.
March 20, 2019 | 1:00 PM – 2:00 PM PT – What’s New in MySQL 8 – Find out what’s new in MySQL 8, the latest major version of the world’s most popular open source database, and learn about AWS services for running highly available MySQL databases in the cloud.
March 21, 2019 | 9:00 AM – 10:00 AM PT – Building Scalable & Reliable Enterprise Apps with AWS Relational Databases – Learn how AWS Relational Databases can help you build scalable & reliable enterprise apps.

DevOps
March 19, 2019 | 11:00 AM – 12:00 PM PT – Introduction to Amazon Corretto: A No-Cost Distribution of OpenJDK – Learn about Amazon Corretto, a no-cost distribution of OpenJDK.
End-User Computing
March 28, 2019 | 9:00 AM – 10:00 AM PT – Fireside Chat: Enabling Today’s Workforce with Cloud Desktops – Learn how to transform your approach to secure desktop delivery with a cloud desktop solution like Amazon WorkSpaces.

Enterprise
March 26, 2019 | 1:00 PM – 2:00 PM PT – Speed Your Cloud Computing Journey With the Customer Enablement Services of AWS: ProServe, AMS, and Support – Learn how to accelerate your cloud journey with AWS’s Customer Enablement Services.

IoT
March 26, 2019 | 9:00 AM – 10:00 AM PT – How to Deploy AWS IoT Greengrass Using Docker Containers and Ubuntu-snap – Learn how to bring cloud services to the edge using containerized microservices by deploying AWS IoT Greengrass to your device using Docker containers and Ubuntu snaps.

Machine Learning
March 18, 2019 | 1:00 PM – 2:00 PM PT – Orchestrate Machine Learning Workflows with Amazon SageMaker and AWS Step Functions – Learn about how ML workflows can be orchestrated with the rich features of Amazon SageMaker and AWS Step Functions.
March 21, 2019 | 1:00 PM – 2:00 PM PT – Extract Text and Data from Any Document with No Prior ML Experience – Learn how to extract text and data from any document with no prior machine learning experience.
March 22, 2019 | 11:00 AM – 12:00 PM PT – Build Forecasts and Individualized Recommendations with AI – Learn how you can build accurate forecasts and individualized recommendation systems using our new AI services, Amazon Forecast and Amazon Personalize.

Management Tools
March 29, 2019 | 9:00 AM – 10:00 AM PT – Deep Dive on Inventory Management and Configuration Compliance in AWS – Learn how AWS helps with effective inventory management and configuration compliance management of your cloud resources.

Networking & Content Delivery
March 25, 2019 | 1:00 PM – 2:00 PM PT – Application Acceleration and Protection with Amazon CloudFront, AWS WAF, and AWS Shield – Learn how to secure and accelerate your applications using AWS’s Edge services in this demo-driven tech talk.

Robotics
March 28, 2019 | 11:00 AM – 12:00 PM PT – Build a Robot Application with AWS RoboMaker – Learn how to improve your robotics application development lifecycle with AWS RoboMaker.

Security, Identity, & Compliance
March 27, 2019 | 11:00 AM – 12:00 PM PT – Remediating Amazon GuardDuty and AWS Security Hub Findings – Learn how to build and implement remediation automations for Amazon GuardDuty and AWS Security Hub.
March 27, 2019 | 1:00 PM – 2:00 PM PT – Scaling Accounts and Permissions Management – Learn how to scale your accounts and permissions management efficiently as you continue to move your workloads to AWS Cloud.

Serverless
March 18, 2019 | 11:00 AM – 12:00 PM PT – Testing and Deployment Best Practices for AWS Lambda-Based Applications – Learn best practices for testing and deploying AWS Lambda based applications.

Storage
March 25, 2019 | 11:00 AM – 12:00 PM PT – Introducing a New Cost-Optimized Storage Class for Amazon EFS – Come learn how the new Amazon EFS storage class and Lifecycle Management automatically reduces cost by up to 85% for infrequently accessed files.

Introducing ‘Hello Monday’: LinkedIn’s New Podcast About Our Rapidly Changing Work Lives

LinkedIn Official Blog -

Mondays are about to get a whole lot better. Today, the LinkedIn editorial team is sharing the first episode of our new, 13-week podcast, called Hello Monday. Hello Monday is a show where Senior Editor at Large Jessi Hempel (that’s me) investigates how the nature of work is changing, and how that work is changing us. We’re asking our guests -- and ourselves -- the big questions: What does work mean to us? Should we love what we do? How do we make sure we still have jobs tomorrow, and that…

Building fast interpreters in Rust

CloudFlare Blog -

In the previous post we described the Firewall Rules architecture and how the different components are integrated together. We also mentioned that we created a configurable Rust library for writing and executing Wireshark®-like filters in different parts of our stack written in Go, Lua, C, C++ and JavaScript Workers.

With a mixed set of requirements of performance, memory safety, low memory use, and the capability to be part of other products that we’re working on like Spectrum, Rust stood out as the strongest option.

We have now open-sourced this library under our GitHub account: https://github.com/cloudflare/wirefilter. This post will dive into its design, explain why we didn’t use a parser generator and how our execution engine balances security, runtime performance and compilation cost for the generated filters.

Parsing Wireshark syntax

When building a custom Domain Specific Language (DSL), the first thing we need to be able to do is parse it. This should result in an intermediate representation (usually called an Abstract Syntax Tree) that can be inspected, traversed, analysed and, potentially, serialised.

There are different ways to perform such a conversion, such as:

Manual char-by-char parsing using state machines, regular expressions and/or native string APIs.
Parser combinators, which use higher-level functions to combine different parsers together (in Rust-land these are represented by nom, chomp, combine and others).
Fully automated generators which, provided with a grammar, can generate a fully working parser for you (examples are peg, pest, LALRPOP, etc.).

Wireshark syntax

But before trying to figure out which approach would work best for us, let’s take a look at some of the simple official Wireshark examples, to understand what we’re dealing with:

ip.len le 1500
udp contains 81:60:03
sip.To contains "a1762"
http.request.uri matches "gl=se$"
eth.dst == ff:ff:ff:ff:ff:ff
ip.addr == 192.168.0.1
ipv6.addr == ::1

You can see that the right hand side of a comparison can be a number, an IPv4 / IPv6 address, a set of bytes or a string. They are used interchangeably, without any special notion of a type, which is fine given that they are easily distinguishable… or are they?

Let’s take a look at some IPv6 forms on Wikipedia:

2001:0db8:0000:0000:0000:ff00:0042:8329
2001:db8:0:0:0:ff00:42:8329
2001:db8::ff00:42:8329

So IPv6 can be written as a set of up to 8 colon-separated hexadecimal numbers, each containing up to 4 digits with leading zeros omitted for convenience. This appears suspiciously similar to the syntax for byte sequences. Indeed, if we try writing out a sequence like 2f:31:32:33:34:35:36:37, it’s simultaneously a valid IPv6 address and a byte sequence in terms of Wireshark syntax.

There is no way of telling what this sequence actually represents without looking at the type of the field it’s being compared with, and if you try using this sequence in Wireshark, you’ll notice that it does just that:

ipv6.addr == 2f:31:32:33:34:35:36:37: the right hand side is parsed and used as an IPv6 address
http.request.uri == 2f:31:32:33:34:35:36:37: the right hand side is parsed and used as a byte sequence (it will match a URL "/1234567")

Are there other examples of such ambiguities?
Yup - for example, we can try using a single number with two decimal digits:

tcp.port == 80: matches any traffic on the port 80 (HTTP)
http.file_data == 80: matches any HTTP request/response with a body containing a single byte (0x80)

We could also do the same with an ethernet address, defined as a separate type in Wireshark, but, for simplicity, we represent it as a regular byte sequence in our implementation, so there is no ambiguity here.

Choosing a parsing approach

This is an interesting syntax design decision. It means that we need to store a mapping between field names and types ahead of time - a Scheme, as we call it - and use it for contextual parsing. This restriction also immediately rules out many if not most parser generators.

We could still use one of the more sophisticated ones (like LALRPOP) that allow replacing the default regex-based lexer with your own custom code, but at that point we’re so close to having a full parser for our DSL that the complexity outweighs any benefits of using a black-box parser generator.

Instead, we went with a manual parsing approach. While (for a good reason) this might sound scary in unsafe languages like C / C++, in Rust all strings are bounds checked by default. Rust also provides a rich string manipulation API, which we can use to build more complex helpers, eventually ending up with a full parser.

This approach is, in fact, pretty similar to parser combinators in that the parser doesn’t have to keep state and only passes the unprocessed part of the input down to smaller, narrower scoped functions. Just as in parser combinators, the absence of mutable state also allows us to easily test and maintain each of the parsers for different parts of the syntax independently of the others.

Compared with popular parser combinator libraries in Rust, one of the differences is that our parsers are not standalone functions but rather types that implement common traits:

pub trait Lex<'i>: Sized {
    fn lex(input: &'i str) -> LexResult<'i, Self>;
}

pub trait LexWith<'i, E>: Sized {
    fn lex_with(input: &'i str, extra: E) -> LexResult<'i, Self>;
}

The lex method or its contextual variant lex_with can either return a successful pair of (instance of the type, rest of input) or a pair of (error kind, relevant input span).

The Lex trait is used for target types that can be parsed independently of the context (like field names or literals), while LexWith is used for types that need a Scheme or a part of it to be parsed unambiguously.

A bigger difference is that, instead of relying on higher-level functions for parser combinators, we use the usual imperative function call syntax. For example, when we want to perform sequential parsing, all we do is call several parsers in a row, using tuple destructuring for intermediate results:

let input = skip_space(input);
let (op, input) = CombinedExpr::lex_with(input, scheme)?;
let input = skip_space(input);
let input = expect(input, ")")?;

And, when we want to try different alternatives, we can use native pattern matching and ignore the errors:

if let Ok(input) = expect(input, "(") {
    ...
    (SimpleExpr::Parenthesized(Box::new(op)), input)
} else if let Ok((op, input)) = UnaryOp::lex(input) {
    ...
} else {
    ...
}

Finally, when we want to automate parsing of some more complicated common cases - say, enums - Rust provides a powerful macro syntax:

lex_enum!(#[repr(u8)] OrderingOp {
    "eq" | "==" => Equal = EQUAL,
    "ne" | "!=" => NotEqual = LESS | GREATER,
    "ge" | ">=" => GreaterThanEqual = GREATER | EQUAL,
    "le" | "<=" => LessThanEqual = LESS | EQUAL,
    "gt" | ">" => GreaterThan = GREATER,
    "lt" | "<" => LessThan = LESS,
});

This gives an experience similar to parser generators, while still using native language syntax and keeping us in control of all the implementation details.

Execution engine

Because our grammar and operations are fairly simple, initially we used direct AST interpretation by requiring all nodes to implement a trait that includes an execute method:

trait Expr<'s> {
    fn execute(&self, ctx: &ExecutionContext<'s>) -> bool;
}

The ExecutionContext is pretty similar to a Scheme, but instead of mapping arbitrary field names to their types, it maps them to the runtime input values provided by the caller.

As with Scheme, initially ExecutionContext used an internal HashMap for registering these arbitrary String -> RhsValue mappings. During the execute call, the AST implementation would evaluate itself recursively, and look up each field reference in this map, either returning a value or raising an error on missing slots and type mismatches.

This worked well enough for an initial implementation, but using a HashMap has a non-trivial cost which we would like to eliminate. We already used a more efficient hasher - Fnv - because we are in control of all keys and so are not worried about hash DoS attacks, but there was still more we could do.

Speeding up field access

If we look at the data structures involved, we can see that the scheme is always well-defined in advance, and all our runtime values in the execution engine are expected to eventually match it, even if the order or a precise set of fields is not guaranteed.

So what if we ditch the second map altogether and instead use a fixed-size array of values? Array indexing should be much cheaper than looking up in a map, so it might be well worth the effort.

How can we do it? We already know the number of items (thanks to the predefined scheme) so we can use that for the size of the backing storage, and, in order to simulate HashMap “holes” for unset values, we can wrap each item in an Option<...>:

pub struct ExecutionContext<'e> {
    scheme: &'e Scheme,
    values: Box<[Option<LhsValue<'e>>]>,
}

The only missing piece is an index that could map both structures to each other. As you might remember, Scheme still uses a HashMap for field registration, and a HashMap is normally expected to be randomised and indexed only by the predefined key.

While we could wrap a value and an auto-incrementing index together into a custom struct, there is already a better solution: IndexMap. IndexMap is a drop-in replacement for a HashMap that preserves ordering and provides a way to get an index of any element and vice versa - exactly what we needed.

After replacing the HashMap in the Scheme with IndexMap, we can change parsing to resolve all the parsed field names to their indices in-place and store that in the AST:

impl<'i, 's> LexWith<'i, &'s Scheme> for Field<'s> {
    fn lex_with(mut input: &'i str, scheme: &'s Scheme) -> LexResult<'i, Self> {
        ...
        let field = scheme
            .get_field_index(name)
            .map_err(|err| (LexErrorKind::UnknownField(err), name))?;
        Ok((field, input))
    }
}

After that, in the ExecutionContext we allocate a fixed-size array and use these indices for resolving values during runtime:

impl<'e> ExecutionContext<'e> {
    /// Creates an execution context associated with a given scheme.
    ///
    /// This scheme will be used for resolving any field names and indices.
    pub fn new<'s: 'e>(scheme: &'s Scheme) -> Self {
        ExecutionContext {
            scheme,
            values: vec![None; scheme.get_field_count()].into(),
        }
    }

    ...
}

This gave significant (~2x) speed ups on our standard benchmarks:

Before:
test matching ... bench: 2,548 ns/iter (+/- 98)
test parsing ... bench: 192,037 ns/iter (+/- 21,538)

After:
test matching ... bench: 1,227 ns/iter (+/- 29)
test parsing ... bench: 197,574 ns/iter (+/- 16,568)

This change also improved the usability of our API, as any type errors are now detected and reported much earlier, when the values are just being set on the context, and not delayed until filter execution.

[not] JIT compilation

Of course, as with any respectable DSL, one of the other ideas we had from the beginning was “...at some point we’ll add native compilation to make everything super-fast, it’s just a matter of time...”.

In practice, however, native compilation is a complicated matter, but not due to lack of tools.

First of all, there is the question of storage for the native code. We could compile each filter statically into some sort of a library and publish it to a key-value store, but that would not be easy to maintain:

We would have to compile each filter to several platforms (x86-64, ARM, WASM, …).
The overhead of native library formats would significantly outweigh the useful executable size, as most filters tend to be small.
Each time we’d like to change our execution logic, whether to optimise it or to fix a bug, we would have to recompile and republish all the previously stored filters.
Finally, even if/though we’re sure of the reliability of the chosen store, executing dynamically retrieved native code on the edge as-is is not something that can be taken lightly.

The usual flexible alternative that addresses most of these issues is Just-in-Time (JIT) compilation. When you compile code directly on the target machine, you get to re-verify the input (still expressed as a restricted DSL), you can compile it just for the current platform in-place, and you never need to republish the actual rules.

Looks like a perfect fit? Not quite. As with any technology, there are tradeoffs, and you only get to choose those that make more sense for your use cases. JIT compilation is no exception.

First of all, even though you’re not loading untrusted code over the network, you still need to generate it into memory, mark that memory as executable and trust that it will always contain valid code and not garbage or something worse. Depending on your choice of libraries and the complexity of the DSL, you might be willing to trust it or put heavy sandboxing around it, but, either way, it’s a risk that one must explicitly be willing to take.

Another issue is the cost of compilation itself. Usually, when measuring the speed of native code vs interpretation, the cost of compilation is not taken into account because it happens out of the process. With JIT compilers though, it’s different, as you’re now compiling things the moment they’re used and caching the native code only for a limited time.
Turns out, generating native code can be rather expensive, so you must be absolutely sure that the compilation cost doesn’t offset any benefits you might gain from the native execution speedup.

I’ve talked a bit more about this at the Rust Austin meetup and, I believe, this topic deserves a separate blog post, so I won’t go into much more detail here, but feel free to check out the slides: https://www.slideshare.net/RReverser/building-fast-interpreters-in-rust. Oh, and if you’re in Austin, you should pop into our office for the next meetup!

Let’s get back to our original question: is there anything else we can do to get the best balance between security, runtime performance and compilation cost? Turns out, there is.

Dynamic dispatch and closures to the rescue

Introducing the Fn trait! In Rust, the Fn trait and friends (FnMut, FnOnce) are automatically implemented on eligible functions and closures. In the case of a simple Fn, the restriction is that they must not modify their captured environment and can only borrow from it.

Normally, you would want to use it in generic contexts to support arbitrary callbacks with given argument and return types. This is important because in Rust, each function and closure implements a unique type and any generic usage would compile down to a specific call just to that function:

fn just_call(me: impl Fn(), maybe: bool) {
    if maybe {
        me()
    }
}

Such behaviour (called static dispatch) is the default in Rust and is preferable for performance reasons.

However, if we don’t know all the possible types at compile-time, Rust allows us to opt in to dynamic dispatch instead:

fn just_call(me: &dyn Fn(), maybe: bool) {
    if maybe {
        me()
    }
}

Dynamically dispatched objects don’t have a statically known size, because it depends on the implementation details of the particular type being passed. They need to be passed as a reference or stored in a heap-allocated Box, and then used just like in a generic implementation.

In our case, this allows us to create, return and store arbitrary closures, and later call them as regular functions:

trait Expr<'s> {
    fn compile(self) -> CompiledExpr<'s>;
}

pub(crate) struct CompiledExpr<'s>(Box<dyn 's + Fn(&ExecutionContext<'s>) -> bool>);

impl<'s> CompiledExpr<'s> {
    /// Creates a compiled expression IR from a generic closure.
    pub(crate) fn new(closure: impl 's + Fn(&ExecutionContext<'s>) -> bool) -> Self {
        CompiledExpr(Box::new(closure))
    }

    /// Executes a filter against a provided context with values.
    pub fn execute(&self, ctx: &ExecutionContext<'s>) -> bool {
        self.0(ctx)
    }
}

The closure (an Fn box) will also automatically include the environment data it needs for the execution.

This means that we can optimise the runtime data representation as part of the “compile” process without changing the AST or the parser. For example, when we wanted to optimise IP range checks by splitting them for different IP types, we could do that without having to modify any existing structures:

RhsValues::Ip(ranges) => {
    let mut v4 = Vec::new();
    let mut v6 = Vec::new();
    for range in ranges {
        match range.clone().into() {
            ExplicitIpRange::V4(range) => v4.push(range),
            ExplicitIpRange::V6(range) => v6.push(range),
        }
    }
    let v4 = RangeSet::from(v4);
    let v6 = RangeSet::from(v6);

    CompiledExpr::new(move |ctx| {
        match cast!(ctx.get_field_value_unchecked(field), Ip) {
            IpAddr::V4(addr) => v4.contains(addr),
            IpAddr::V6(addr) => v6.contains(addr),
        }
    })
}

Moreover, boxed closures can be part of that captured environment, too.
This means that we can convert each simple comparison into a closure, and then combine it with other closures, and keep going until we end up with a single top-level closure that can be invoked as a regular function to evaluate the entire filter expression.

It’s closures all the way down:

let items = items
    .into_iter()
    .map(|item| item.compile())
    .collect::<Vec<_>>()
    .into_boxed_slice();

match op {
    CombiningOp::And => {
        CompiledExpr::new(move |ctx| items.iter().all(|item| item.execute(ctx)))
    }
    CombiningOp::Or => {
        CompiledExpr::new(move |ctx| items.iter().any(|item| item.execute(ctx)))
    }
    CombiningOp::Xor => CompiledExpr::new(move |ctx| {
        items
            .iter()
            .fold(false, |acc, item| acc ^ item.execute(ctx))
    }),
}

What’s nice about this approach is:

Our execution is no longer tied to the AST, and we can be as flexible with optimising the implementation and data representation as we want without affecting the parser-related parts of the code or the output format.

Even though we initially “compile” each node to a single closure, in future we can pretty easily specialise certain combinations of expressions into their own closures and so improve execution speed for common cases. All that would be required is a separate match branch returning a closure optimised for just that case.

Compilation is very cheap compared to real code generation. While it might seem that allocating many small objects (one Boxed closure per expression) is not very efficient and that it would be better to replace it with some sort of a memory pool, in practice we saw a negligible performance impact.

No native code is generated at runtime, which means that we execute only code that was statically verified by Rust at compile-time and compiled down to a static function. All that we do at runtime is call existing functions with different values.

Execution turns out to be faster, too. This initially came as a surprise, because dynamic dispatch is widely believed to be costly and we were worried that it would get slightly worse than AST interpretation. However, it showed an immediate ~10-15% runtime improvement in benchmarks and on real examples.

The only obvious downside is that each level of the AST requires a separate dynamically-dispatched call instead of a single inlined code path for the entire expression, like you would have even with a basic template JIT. Unfortunately, such output could be achieved only with real native code generation, and, for our case, the mentioned downsides and risks would outweigh the runtime benefits, so we went with the safe & flexible closure approach.

Bonus: WebAssembly support

As was mentioned earlier, we chose Rust as a safe high-level language that allows easy integration with other parts of our stack written in Go, C and Lua via C FFI. But Rust has one more target it invests in and supports exceptionally well: WebAssembly.

Why would we be interested in that? Apart from the parts of the stack where our rules would run, and the API that publishes them, we also have users who like to write their own rules. To do that, they use a UI editor that allows either writing raw expressions in Wireshark syntax or using a WYSIWYG builder.

We thought it would be great to expose the parser - the same one as we use on the backend - to the frontend JavaScript for a consistent real-time editing experience.
And, honestly, we were just looking for an excuse to play with WASM support in Rust.

WebAssembly could be targeted via regular C FFI, but in that case you would need to manually provide all the glue for the JavaScript side to hold and convert strings, arrays and objects back and forth. In Rust, this is all handled by wasm-bindgen. While it provides various attributes and methods for direct conversions, the simplest way to get started is to activate the “serde” feature, which will automatically convert types using JSON.parse, JSON.stringify and serde_json under the hood.

In our case, creating a wrapper for the parser with only 20 lines of code was enough to get started and have all the WASM code + JavaScript glue required:

#[wasm_bindgen]
pub struct Scheme(wirefilter::Scheme);

fn into_js_error(err: impl std::error::Error) -> JsValue {
    js_sys::Error::new(&err.to_string()).into()
}

#[wasm_bindgen]
impl Scheme {
    #[wasm_bindgen(constructor)]
    pub fn try_from(fields: &JsValue) -> Result<Scheme, JsValue> {
        fields.into_serde().map(Scheme).map_err(into_js_error)
    }

    pub fn parse(&self, s: &str) -> Result<JsValue, JsValue> {
        let filter = self.0.parse(s).map_err(into_js_error)?;
        JsValue::from_serde(&filter).map_err(into_js_error)
    }
}

And by using a higher-level tool called wasm-pack, we also got automated npm package generation and publishing, for free.

This is not used in the production UI yet because we still need to figure out some details for unsupported browsers, but it’s great to have all the tooling and packages ready with minimal effort. Extending and reusing the same package, it should even be possible to run filters in Cloudflare Workers too (which also support WebAssembly).

The future

The code in the current state is already doing its job well in production and we’re happy to share it with the open-source Rust community. This is definitely not the end of the road though - we have many more fields to add, features to implement and planned optimisations to explore. If you find this sort of work interesting and would like to help us by working on firewalls, parsers or just any Rust projects at scale, give us a shout!
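To make the design described above concrete, here is a small, self-contained Rust sketch of the overall approach: field names resolved to slot indices ahead of time, runtime values kept in a fixed-size array of optional slots, and each comparison “compiled” into a boxed Fn closure that is then combined into a single top-level closure. The type and function names here (Scheme, ExecutionContext, CompiledExpr, compile_eq, compile_and) are simplified stand-ins for illustration, not the actual wirefilter API.

// A minimal, illustrative sketch (not the real wirefilter types): fields are
// resolved to slot indices ahead of time, runtime values live in a fixed-size
// array of Option slots, and each expression compiles to a boxed Fn closure.

type Value = u64; // the real library supports IPs, byte sequences, strings, etc.

struct Scheme {
    fields: Vec<String>, // a field's position in this Vec doubles as its slot index
}

impl Scheme {
    fn field_index(&self, name: &str) -> Option<usize> {
        self.fields.iter().position(|f| f.as_str() == name)
    }
}

struct ExecutionContext {
    values: Vec<Option<Value>>, // one pre-allocated slot per field in the Scheme
}

type CompiledExpr = Box<dyn Fn(&ExecutionContext) -> bool>;

// "Compile" a single equality comparison into a closure that reads its slot by index.
fn compile_eq(field_index: usize, rhs: Value) -> CompiledExpr {
    Box::new(move |ctx: &ExecutionContext| ctx.values[field_index] == Some(rhs))
}

// Combine sub-expressions with a logical AND, mirroring the CombiningOp::And branch above.
fn compile_and(items: Vec<CompiledExpr>) -> CompiledExpr {
    Box::new(move |ctx: &ExecutionContext| items.iter().all(|item| item(ctx)))
}

fn main() {
    let scheme = Scheme {
        fields: vec!["tcp.port".to_string(), "ip.ttl".to_string()],
    };

    // Roughly what parsing and compiling `tcp.port == 80 && ip.ttl == 64` would produce.
    let filter = compile_and(vec![
        compile_eq(scheme.field_index("tcp.port").unwrap(), 80),
        compile_eq(scheme.field_index("ip.ttl").unwrap(), 64),
    ]);

    // At runtime, the caller fills in the slots and executes the single top-level closure.
    let mut ctx = ExecutionContext {
        values: vec![None; scheme.fields.len()],
    };
    ctx.values[scheme.field_index("tcp.port").unwrap()] = Some(80);
    ctx.values[scheme.field_index("ip.ttl").unwrap()] = Some(64);

    println!("filter matched: {}", filter(&ctx));
}

The point of the sketch is the shape of the design, not the specifics: array indexing replaces hash lookups at execution time, and combining boxed closures gives a “compiled” filter without generating any native code at runtime.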

How to Investigate Disk Space Usage From the Command Line

Nexcess Blog -

Each of our plans establishes a disk-usage limit. If you receive an email alerting you that you’re nearing this limit, don’t ignore it: exceeding your limit can hamper the operability of your website and other associated services, such as email. Before you call our Sales team to ask about an upgrade, it’s almost always worthwhile to investigate… Continue reading →
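The full article presumably walks through command-line utilities such as du and df. Purely as an illustration of the same idea, summing how much space each directory consumes, here is a small, self-contained Rust sketch that reports the size of each top-level entry, roughly in the spirit of du -s *. It is a hypothetical example, not taken from the Nexcess article.

use std::fs;
use std::io;
use std::path::Path;

// Recursively sum the sizes of all regular files under `path`.
fn dir_size(path: &Path) -> io::Result<u64> {
    let mut total = 0;
    for entry in fs::read_dir(path)? {
        let entry = entry?;
        let file_type = entry.file_type()?;
        if file_type.is_dir() {
            total += dir_size(&entry.path())?;
        } else if file_type.is_file() {
            total += entry.metadata()?.len();
        }
        // Symlinks and special files are skipped to keep the sketch simple.
    }
    Ok(total)
}

fn main() -> io::Result<()> {
    // Report the size of each top-level entry in the current directory,
    // similar in spirit to `du -s *`.
    for entry in fs::read_dir(".")? {
        let entry = entry?;
        let size = if entry.file_type()?.is_dir() {
            dir_size(&entry.path())?
        } else {
            entry.metadata()?.len()
        };
        println!("{:>12} bytes  {}", size, entry.path().display());
    }
    Ok(())
}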

Beauty Basics: How to Launch Your Own YouTube Beauty Channel and Website

InMotion Hosting Blog -

“Beauty is truth; truth is beauty.” Almost two hundred years ago, romantic poet John Keats expressed this sentiment. Its meaning never changes. It’s also true that beauty products are one of the biggest profit categories in the world. If you want to be involved in this field, then you should seriously consider launching your own beauty channel on YouTube along with an accompanying website. But setting up something like this can be a daunting task that could potentially scare people away from taking the risk. Continue reading Beauty Basics: How to Launch Your Own YouTube Beauty Channel and Website at The Official InMotion Hosting Blog.

Web Hosting With a Dedicated IP

HostGator Blog -

The post Web Hosting With a Dedicated IP appeared first on HostGator Blog. When you’re exploring your options for dedicated hosting, there are a lot of terms and technologies you’re probably not familiar with. One of those is a dedicated IP address. A dedicated IP is different from dedicated server hosting, but often the two are linked together. You can get a dedicated IP address without having to upgrade to dedicated hosting, but usually, a dedicated IP address will be paired with a dedicated hosting environment. If that sounds a little confusing, don’t worry. Below we’ll break down what a dedicated IP address actually is, how it differs from a standard shared IP address, and finally how web hosting with a dedicated IP works, and why you might want it.

What is a Dedicated IP Address?

First, we’ll look at what an IP address is. IP stands for Internet Protocol, so your IP address is your Internet Protocol Address. Essentially, an IP address is a locator or identifier for any computer that’s connected to the internet. Your web host’s server is technically a computer, so this computer will have an address that identifies and defines its location on the internet. These IP addresses are mapped to certain domain names. So, you can technically type an IP address in place of a domain name and actually land on the same website. A dedicated IP address is an IP address that’s assigned to your website and your website alone. Sometimes this can be paired to your server environment too, but usually, it’s connected directly to your site. On most hosting accounts the IP address that comes with your account will be a shared IP address. This IP address will be used by every website that’s sharing the server environment. So, in the case of shared hosting, this could be hundreds or thousands of other websites.

Dedicated vs Shared IP Address

The discussion about the advantages and disadvantages of dedicated and shared IP addresses is actually quite a lively debate. Part of this is due to the natural evolution of certain technologies. A lot of the big advantages that used to be attributed to using a dedicated IP address have disappeared. Now, using a dedicated IP can still be beneficial for certain users, but it just doesn’t carry the weight it once did. The difference between a dedicated IP address and a shared IP address is pretty straightforward. Their names alone suggest the biggest difference. A shared IP address will be used by multiple websites that are also sharing the same server environment, while a dedicated IP address will be used by a single website. You can think of a dedicated IP address like your cell phone. Chances are you have a unique number that you don’t share with anyone else. A shared IP address, on the other hand, is like an old-school landline or home phone. When people call that number they could be looking for a certain person in your house, but anyone can answer the phone. Here are the biggest ways that shared and dedicated IP addresses differ:

Email sending. Although not always the case, if you’re sending a ton of emails through your web host you could run into email deliverability issues. In some cases, your shared IP could be blacklisted. With a dedicated IP, you’ll have higher deliverability rates, and the chances of a blacklist are almost zero, unless you’re spamming people yourself.

SSL and site security. Dedicated IP addresses used to be completely necessary in order to do things like improve the security on your site.
Simply put, you would need a dedicated IP for SSL certificates, which may be a necessity for some site owners. However, with new technology like SNI, this is no longer the case. Still, some hosting providers will require a dedicated IP for SSL. Software requirements. This is also pretty niche, but some server software and scripts actually require that you utilize a dedicated IP address. This is usually related to processes that take a while to run or initiate. If you’re using a shared server, then the process might terminate before it completes, making a dedicated IP address a necessity.   How About Dedicated IP Addresses and SEO? It’s still hotly debated whether or not using a dedicated IP address will give you an SEO boost. Like most things SEO-related, there is no strict yes or no answer. If you’re using a low-quality host you can run into issues that could negatively impact your site, such as DDoS attacks against your IP, association with spammy websites, and more. These will have a greater impact on your rankings than your IP address choice alone. But if you’re using a high-quality host you probably won’t run into these issues, even if you’re using a shared IP address. Overall, there’s no real correlation between using a dedicated IP address and improved rankings. According to Google, using a shared IP address won’t negatively impact your rankings.   The Benefits of a Dedicated IP Address There are still many benefits of dedicated IP addresses. Overall, it depends upon your unique needs and the type of site you run. Depending on your provider you might be able to add a dedicated IP address to your shared hosting plan, but usually, if you can benefit from a dedicated IP address, then you can also benefit from a dedicated host. Here are the main benefits your site will receive by adding a dedicated IP address to your current hosting:   1. View Your Site By IP Address Alone One of the biggest advantages of using a dedicated IP address is that you can access your website without a domain name. So, if your domain is taking a while to propagate, you don’t have to sit around and wait for the process to complete. Or maybe you want to start building a website, but you haven’t done your domain name registration yet. With a dedicated IP you’ll be able to access your site and start building, then once you’ve settled on a domain name you can complete the process. Accessing your site via FTP and building without a domain isn’t a common practice, but for some users this will be an absolute necessity.   2. Reduce the Risk of IP Blacklisting One thing that could happen to your IP, if you’re using a shared IP, is something called IP blacklisting. IP addresses can become blacklisted if there’s activity that can be classified as spam from that IP. If you’re using a shared IP the chances are higher that this could occur. With a dedicated IP address, the chances of this are basically zero. This occurs most frequently with users who are sending emails through the host. All it takes is a single user sending spam emails from their domain, and your website could be affected.   3. Run Your Own FTP Server Some users might want to use FTP to share files with clients, friends, and family. With a dedicated IP, it’s much easier to run your own FTP server. FTP isn’t used very frequently when you’re running a small website, but it can be a very effective way to transfer and give access to files within an organization. With a dedicated IP, you’ll have easy access to your own FTP server.
You can even implement things like anonymous FTP, which allows users to access files that are publicly available. This lets you give people access without having to identify themselves to the server.   4. Avoid SSL Compatibility Issues As we mentioned above, a dedicated IP address is no longer completely necessary to install an SSL certificate on your site. For complete functionality it’s recommended, but it’s not 100% necessary. Today, you can install multiple SSL certificates on a single server with Server Name Indication (SNI) technology. This allows the web hosting company to issue multiple certificates under a single shared IP address (see the short sketch at the end of this post). However, there are still some issues with SNI; it’s not a perfect solution. The biggest issue is incompatibility with older browsers and operating systems like:

Windows XP and older versions of Internet Explorer
Safari running on Windows XP
BlackBerry mobile, Windows mobile, and Opera mobile browsers

This isn’t the biggest issue in the world, but for those who want to ensure all their bases are covered, it’s a good idea to upgrade to a dedicated IP address. There’s a chance your visitors might not even be using these browsers. If you want to know for sure, you can fire up your Google Analytics account and filter visitors by browser type. This will give you a better picture of what devices and browsers your users are using to access your site, which is useful knowledge to have regardless of your choice of IP address.   5. Create Your Own Gaming Server If you want to run your own gaming server for an online game, like a Minecraft server, then you’ll need your own dedicated IP address. You’ll also need a dedicated server for games if you want to create your own or share with friends. Some of the biggest issues with online gaming revolve around lag and lack of bandwidth. With a dedicated server and IP address, you can offer the highest level of gaming possible. It is possible to run a gaming server on a shared IP, but generally it’s frowned upon: it will be much more difficult to configure your server and you’ll probably run into performance issues.   What is Dedicated IP Address Web Hosting? By now you’re probably wondering how web hosting and dedicated IP addresses fit together. Dedicated IP addresses aren’t synonymous with dedicated hosting, but they are typically offered together. As a rule of thumb, you can assume that dedicated hosting packages will have dedicated IP addresses included as part of your package. Sometimes you’ll even have the ability to use multiple dedicated IPs from a single dedicated server. This can be useful if you want to migrate multiple sites you own to a single dedicated server. If you truly want to unlock the potential of using a dedicated IP address, then you’ll use it in tandem with a dedicated server. This will give you the benefits of using a dedicated server on top of the benefits of a dedicated IP address. Think of this as not only owning your own house but also owning an entire street, with no other properties around for miles. Think of everything you could do with all that freedom in a single service.   How a Dedicated Server and Dedicated IP Address Work Together Using a dedicated server will provide your site with a lot of benefits.
If you’ve been thinking about making the jump to a dedicated IP, then you’ll also want to consider whether dedicated hosting is right for you. You’ll get access to features like:

Incredible server performance, with the ability to choose between HDD and SSD drives (depending on your website needs)
Improved data center protection protocols to keep your physical and digital server components safe
Multiple server management options, such as cPanel or WHM, for easy server management and configuration
A dedicated support team that’s there to help sort through technical issues or answer your questions 24/7. Typically, hosting providers will prioritize those using dedicated hosting, or even have a separate support team entirely
The ability to have single or multiple dedicated IP addresses. A dedicated IP address is a common industry standard with almost every dedicated server

Basically, with a dedicated IP web host you get a lot more freedom with your hosting environment. Plus, you’ll get unparalleled levels of security, performance, reliability, and more.   Closing Thoughts Overall, you can add a dedicated IP address to most hosting plans. But usually dedicated hosting and a dedicated IP address work together. Not 100% of the time, but sites that need the added benefits of a dedicated IP will probably also require more advanced forms of web hosting. Using a dedicated IP address doesn’t pack as many benefits as it used to, but for some types of website owners this service still might be a necessity. Hopefully, you have a better understanding of why you might want a dedicated IP address, and how web hosting with a dedicated IP can benefit your website. To compare cheap dedicated server hosting plans or to learn more about our web hosting services, check out our website or talk to a HostGator representative today. Find the post on the HostGator Blog
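As an aside on the SNI point above: the reason a single IP address can serve certificates for many hostnames is that the browser sends the hostname it wants during the TLS handshake, and the server picks the matching certificate. Here is a minimal, hypothetical Go sketch of that selection logic; the file names and hostnames are placeholders for illustration, not anything HostGator-specific.

package main

import (
	"crypto/tls"
	"log"
	"net/http"
)

func main() {
	// One certificate per hostname, both served from the same IP address and port.
	// The certificate and key file names are placeholders.
	certA, err := tls.LoadX509KeyPair("example-com.pem", "example-com.key")
	if err != nil {
		log.Fatal(err)
	}
	certB, err := tls.LoadX509KeyPair("example-org.pem", "example-org.key")
	if err != nil {
		log.Fatal(err)
	}

	cfg := &tls.Config{
		// GetCertificate runs during the TLS handshake and receives the SNI
		// hostname the client asked for, so we can return the matching cert.
		GetCertificate: func(hello *tls.ClientHelloInfo) (*tls.Certificate, error) {
			if hello.ServerName == "www.example.org" {
				return &certB, nil
			}
			return &certA, nil // default, also used by clients that send no SNI
		},
	}

	srv := &http.Server{Addr: ":443", TLSConfig: cfg}
	log.Fatal(srv.ListenAndServeTLS("", ""))
}

Older clients that never send SNI (the browsers listed earlier) simply get the default certificate, which may not match the site they asked for; that mismatch is exactly why a dedicated IP can still matter for them.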

How we made Firewall Rules

CloudFlare Blog -

Recently we launched Firewall Rules, a new feature that allows you to construct expressions that perform complex matching against HTTP requests and then choose how that traffic is handled. As a Firewall feature you can, of course, block traffic. The expressions we support within Firewall Rules, along with powerful control over the order in which they are applied, allow complex new behaviour. In this blog post I tell the story of Cloudflare’s Page Rules mechanism and how Firewall Rules came to be. Along the way I’ll look at the technical choices that led to us building the new matching engine in Rust.

The evolution of the Cloudflare Firewall

Cloudflare offers two types of firewall for web applications: a managed firewall in the form of a WAF, where we write and maintain the rules for you, and a configurable firewall, where you write and maintain rules. In this article, we will focus on the configurable firewall. One of the earliest Cloudflare firewall features was the IP Access Rule. It dates back to the earliest versions of the Cloudflare Firewall and simply allows you to block traffic from specific IP addresses:

if request IP equals 203.0.113.1 then block the request

As attackers and spammers frequently launched attacks from a given network, we also introduced the ASN matching capability:

if request IP belongs to ASN 64496 then block the request

We also allowed blocking ranges of addresses defined by CIDR notation when an IP was too specific and an ASN too broad:

if request IP is within 203.0.113.0/24 then block the request

Blocking is not the only action you might need, and so other actions are available:

Whitelist = apply no other firewall rules and allow the request to pass this part of the firewall
Challenge = issue a CAPTCHA and, if it is passed, allow the request but otherwise deny. This would be used to determine if the request came from a human operator
JavaScript challenge = issue an automated JavaScript challenge and, if it is passed, allow the request. This would be used to determine if the request came from a simple stateless bot (perhaps a wget or curl script)
Block = deny the request

Cloudflare also has Page Rules. Page Rules allow you to match full URIs and then perform actions such as redirects or raising the security level, which can be considered firewall functions:

if request URI matches /nullroute then redirect to http://127.0.0.1

Cloudflare also added GeoIP information within an HTTP header, and the firewall was extended to include that:

if request IP originates from country GB then CAPTCHA the request

All of the above existed in Cloudflare pre-2014, and then during 2016 we set out to identify the most commonly requested firewall features (according to Customer Support tickets and feedback from paying customers) and provide a self-service solution. From that analysis, we added three new capabilities during late 2016: Rate Limiting, User Agent Rules, and Zone Lockdown. Whilst Cloudflare automatically stops very large denial of service attacks, rate limiting allowed customers to stop smaller attacks that were a real concern to them but were low enough volume that Cloudflare’s DDoS defences were not being applied:

if request method is POST and request URI matches /wp-admin/index.php and response status code is 403 and more than 3 requests like this occur in a 15 minute time period then block the traffic for 2 hours

User Agent rules are as simple as:

if request user_agent is `Fake User Agent` then CAPTCHA the request

Zone Lockdown, however, was the first default-deny feature.
The Cloudflare Firewall could be thought of as “allow all traffic, except where a rule exists to block it”. Zone Lockdown is the opposite: “for a given URI, block all traffic, except where a rule exists to allow it”. Zone Lockdown allowed customers to block access to a public website for all but a few IP addresses or IP ranges. For example, many customers wanted access to a staging website to only be available to their office IP addresses:

if request URI matches https://staging.example.com/ and request IP not in 203.0.113.0/24 then block the request

Finally, an Enterprise customer could also contact Cloudflare and have a truly bespoke rule created for them within the WAF engine.

Seeing the problem

The firewall worked well for simple mitigation, but it didn’t fully meet the needs of our customers. Each of the firewall features targeted a single attribute, and the interfaces and implementations reflected that. Whilst the Cloudflare Firewall had evolved to solve each problem as it arose, these features did not work together. In late 2017 you could sum up the firewall capabilities as: you can block any attack traffic on any criteria, so long as you only pick one of:

IP
CIDR
ASN
Country
User Agent
URI

We saw the problem, but how to fix it? We match our firewall rules in two ways:

Lookup matching
String pattern matching

Lookup matching covers the IP, CIDR, ASN, Country and User Agent rules. We would create a key in Quicksilver, our globally distributed key/value data store, and store the action in the value:

Key = zone:www.example.com_ip:203.0.113.1
Value = block

When a request for www.example.com is received, we look at the IP address of the client that made the request, construct the key and perform the lookup. If the key exists in the store, the value tells us what action to perform; in this case, if the client IP were 203.0.113.1 then we would block the request. Lookup matching is a joy to work with: it is O(1) complexity, meaning that a single request performs only a single lookup for an IP rule regardless of how many IP rules a customer has. Whilst most customers had a few rules, some customers had hundreds of thousands of rules (typically created automatically by combining fail2ban or similar with a Cloudflare API call). Lookups work well when you are only looking up a single value. If you need to combine an IP and a User Agent, you would need to produce keys that compose these values together, which massively increases the number of keys that you need to publish.

String pattern matching occurs where URI matching is required. For our Page Rules feature this meant combining all of the Page Rules into a single regular expression that we would apply to the request URI whilst handling a request. If you had Page Rules that said (in order):

Match */wp-admin/index.php and then block
Then match */xmlrpc.php and then block

These are converted into:

^(?<block__1>(?:.*/wp-admin/index.php))|(?<block__2>(?:.*/xmlrpc.php))$

Yes, you read that correctly. Each Page Rule was appended to a single regular expression in the order of execution, and the named group is used as an overload for the desired action. This works surprisingly well, as regular expression matching can be simple and fast, especially when the regular expression matches against a single value like the URI, but as soon as you want to match the URI plus an IP range it becomes less obvious how to extend this. This is what we had: a set of features that worked really well provided you only wanted to match a single property of a request.
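To make the single-regular-expression trick above concrete, here is a small illustrative Go sketch (not Cloudflare’s actual edge code): the named capture group that matched tells you which Page Rule fired and, by the naming convention above, which action to take. Note that Go’s RE2 syntax spells named groups (?P<name>…) rather than the (?<name>…) form shown above.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

func main() {
	// Two Page Rules appended into one pattern, in execution order; the group
	// name doubles as the action ("block") plus the rule's position.
	re := regexp.MustCompile(`^(?P<block__1>(?:.*/wp-admin/index\.php))|(?P<block__2>(?:.*/xmlrpc\.php))$`)

	uri := "/blog/wp-admin/index.php"
	match := re.FindStringSubmatch(uri)
	if match == nil {
		fmt.Println("no Page Rule matched")
		return
	}
	// The first non-empty named group is the rule that fired.
	for i, name := range re.SubexpNames() {
		if name != "" && match[i] != "" {
			action := strings.SplitN(name, "__", 2)[0]
			fmt.Printf("rule %s matched, action: %s\n", name, action)
			break
		}
	}
}

Every additional rule extends this single pattern, which is exactly why adding a second dimension to the match, such as an IP range, is awkward in this model.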
The implementation also meant that none of these features could be trivially extended to embrace multiple properties at a time. We needed something else: a fast way to compute whether a request matches a rule that could contain multiple properties as well as pattern matching.

A solution that works now and in the future

Over time Cloudflare engineers authored internal posts exploring how a new matching engine might work. The first thing that occurred to every engineer was that the matching must be an expression. These ideas followed a similar approach in which we would construct an expression within JSON as a DSL (Domain Specific Language) for our expression language. This DSL could describe matching a request, a UI could render it, and a backend could process it. Early proposals looked like this:

{
  "And": [
    { "Equals": { "host": "www.example.com" } },
    { "Or": [
      { "Regex": { "path": "^(?:.*/wp-admin/index.php)$" } },
      { "Regex": { "path": "^(?:.*/xmlrpc.php)$" } }
    ] }
  ]
}

The JSON describes an expression that computers can easily turn into a rule to apply, but people find this hard to read and work with. As we did not wish to display JSON like this in our dashboard, we thought about how we might summarise it for a UI:

if request host equals www.example.com and (request path matches ^(?:.*/wp-admin/index.php)$ or request path matches ^(?:.*/xmlrpc.php)$)

And there came an epiphany. As working engineers we’ve seen an expression language similar to this before, so may I introduce to you our old friend Wireshark®. Wireshark is a network protocol analyzer. To use it you must run a packet capture to record network traffic from a capture device (usually a network card). This is then saved to disk as a .pcap file which you subsequently open in the Wireshark GUI. The Wireshark GUI has a display filter entry box, and when you fill in a display filter the GUI will dissect the saved packet capture, determine which packets match the expression, and then show those in the GUI. But we don’t need to do that. In fact, for our scenario that approach does not work, as we have a firewall and need to make decisions in real time as part of the HTTP request handling rather than via the packet capture process. For Cloudflare, we would want to use something like the expression language of the Wireshark Display Filters but without the capture and dissection, as we would want to do this potentially thousands of times per request without noticeable delay. If we were able to use a Wireshark-style expression language, then we could reduce the JSON-encapsulated expression above to:

http.host eq "www.example.com" and (http.request.path ~ "wp-admin/index\.php" or http.request.path ~ "xmlrpc.php")

This is human readable, machine parseable, and succinct. It also benefits from being highly similar to Wireshark. For security engineers used to working with Wireshark when investigating attacks, it offers a degree of portability from an investigation tool to a mitigation engine. To make this work we would need to collect the properties of the request into a simple data structure to match the expressions against.
Unlike the packet capture approach, we run our firewall within the context of an HTTP server, and the web server has already computed the request properties, so we can avoid dissection and populate the fields from the web server's knowledge:

http.cookie             session=8521F670545D7865F79C3D7BEDC29CCE; background=light
http.host               www.example.com
http.referer
http.request.method     GET
http.request.uri        /articles/index?section=539061&expand=comments
http.request.uri.path   /articles/index
http.request.uri.query  section=539061&expand=comments
http.user_agent         Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36
http.x_forwarded_for
ip.src                  203.0.113.1
ip.geoip.asnum          64496
ip.geoip.country        GB
ssl                     true

With a table of HTTP request properties and an expression language that can provide a matching expression, we were 90% of the way towards a solution! All we needed for the last 90% was the matching engine itself, which would provide us with an answer to the question: does this request match one of the expressions?

Enter wirefilter.

Wirefilter is the name of the Rust library that Cloudflare has created, and it provides:

The ability for Cloudflare to define a set of fields and their types, i.e. ip.src is a field of type IPAddress
The ability to define a table of properties from all of the fields that are defined
The ability to parse an expression and to say whether it is syntactically valid, whether the fields in the expression are valid against the fields defined, and whether the operators used for a field are valid for the type of the field
The ability to apply an expression to a table and return a true|false response indicating whether the evaluated expression matches the request

It is named wirefilter as a hat tip towards Wireshark for inspiring our Wireshark-like expression language, and also because in our context of the Cloudflare Firewall these expressions act as a filter over traffic. The implementation of wirefilter allows us to embed this matching engine within our REST API, which is written in Go:

// scheme stores the list of fields and their types that an expression can use
var scheme = filterexpr.Scheme{
    "http.cookie":            filterexpr.TypeString,
    "http.host":              filterexpr.TypeString,
    "http.referer":           filterexpr.TypeString,
    "http.request.full_uri":  filterexpr.TypeString,
    "http.request.method":    filterexpr.TypeString,
    "http.request.uri":       filterexpr.TypeString,
    "http.request.uri.path":  filterexpr.TypeString,
    "http.request.uri.query": filterexpr.TypeString,
    "http.user_agent":        filterexpr.TypeString,
    "http.x_forwarded_for":   filterexpr.TypeString,
    "ip.src":                 filterexpr.TypeIP,
    "ip.geoip.asnum":         filterexpr.TypeNumber,
    "ip.geoip.country":       filterexpr.TypeString,
    "ssl":                    filterexpr.TypeBool,
}

Later we validate expressions provided to the API:

// expression here is a string that may look like:
// `ip.src eq 203.0.113.1`
expressionHash, err := filterexpr.ValidateFilter(scheme, expression)
if fve, ok := err.(*filterexpr.ValidationError); ok {
    validationErrs = append(validationErrs, fve.Ascii)
} else if err != nil {
    return nil, stderrors.Errorf("failed to validate filter: %v", err)
}

This tells us whether the expression is syntactically correct and also whether the field operators and values match the field type.
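To illustrate the kind of field/operator/type check described above in a self-contained way, here is a deliberately tiny, hypothetical Go sketch. It is not wirefilter or the internal filterexpr package; the field list and the operator sets per type are assumptions made for the example, and only the eq and ~ operators appear in the post itself.

package main

import (
	"fmt"
	"strings"
)

// A toy scheme: field name -> type, mirroring the idea of filterexpr.Scheme.
var scheme = map[string]string{
	"http.host":      "string",
	"ip.src":         "ip",
	"ip.geoip.asnum": "number",
	"ssl":            "bool",
}

// validOps lists which operators this toy accepts for each type (assumed).
var validOps = map[string][]string{
	"string": {"eq", "~"},
	"ip":     {"eq"},
	"number": {"eq"},
	"bool":   {"eq"},
}

// validate checks a single "field op value" clause against the scheme,
// roughly the kind of per-field check a real validator performs.
func validate(clause string) error {
	parts := strings.Fields(clause)
	if len(parts) < 3 {
		return fmt.Errorf("expected: <field> <op> <value>")
	}
	field, op := parts[0], parts[1]
	typ, ok := scheme[field]
	if !ok {
		return fmt.Errorf("unknown field %q", field)
	}
	for _, o := range validOps[typ] {
		if o == op {
			return nil
		}
	}
	return fmt.Errorf("operator %q not valid for %s field %q", op, typ, field)
}

func main() {
	fmt.Println(validate(`ip.src eq 203.0.113.1`))  // <nil>
	fmt.Println(validate(`ip.geoip.asnum ~ "foo"`)) // operator "~" not valid for number field
	fmt.Println(validate(`http.path eq "/admin"`))  // unknown field
}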
If the expression is valid, then we can use the returned hash to determine uniqueness (the hash is generated inside wirefilter so that uniqueness can ignore whitespace and minor differences). The expressions are then published to our global network of PoPs and are consumed by Lua within our web proxy. The web proxy has the same list of fields that the API does, and is now responsible for building the table of properties from the context within the web proxy:

-- The `traits` table defines the mapping between the fields and
-- the corresponding values from the nginx evaluation context.
local traits = {
    ['http.host'] = field.str(function(ctx)
        return ctx.host
    end),
    ['http.cookie'] = field.str(function(ctx)
        local value = ctx.req_headers.cookie or ''
        if type(value) == 'table' then
            value = table.concat(value, ";")
        end
        return value
    end),
    ['http.referer'] = field.str(function(ctx)
        return ctx.req_headers.referer or ''
    end),
    ['http.request.method'] = field.str(function(ctx)
        return ctx.method
    end),
    ['http.request.uri'] = field.str(function(ctx)
        return ctx.rewrite_uri or ctx.request_uri
    end),
    ['http.request.uri.path'] = field.str(function(ctx)
        return ctx.uri or '/'
    end),
    ...

With this per-request table describing a request, we can test the filters. In our case what we’re doing here is:

Fetch a list of all the expressions we would like to match against the request
Check whether any expression, when applied via wirefilter to the table above, returns true as having matched
For all matched expressions, check the associated actions and their priority

The actions are not part of the matching itself. Once we have a list of matched expressions, we determine which action takes precedence and that is the one that we will execute. Wirefilter, then, is a generic library that provides this matching capability; we’ve plugged it into our Go APIs and our Lua web proxy, and we use that to power the Cloudflare Firewall. We chose Rust for wirefilter because early in the project we recognised that if we attempted to make implementations of this in both Go and Lua, it would result in inconsistencies that attackers may be able to exploit. We needed our API and edge proxy to behave exactly the same, and for this we needed a library both could call. We could have chosen one of our existing languages at the edge like C, C++, Go or Lua, or even implemented this not as a library but as a worker in JavaScript. With a mixed set of requirements of performance, memory safety, low memory use, and the capability to be part of other products that we’re working on like Spectrum, Rust stood out as the strongest option.

With a library in place and the ability to now match all HTTP traffic, how do we get that to a public API and UI without diluting the capability? The problems that arose related to specificity and mutual exclusion. In the past all of our firewall rules had a single dimension to them, i.e. act on IP addresses. This meant that we had a single property of a single type, and whilst there were occasionally edge cases, for the most part there were strategies to answer the question “Which is the most specific rule?”. I.e. an IP address is more specific than a /24, which is more specific than a /8. Likewise with URI matching, an overly simplistic strategy is that the longer a URI, the more specific it is.
And if we had two IP rules, then only one could ever have matched, as a request does not come from two IPs at once, so mutual exclusion is in effect. The old system meant that given two rules, we could implicitly and trivially say “this rule is most specific, so use the action associated with this rule”. With wirefilter powering Firewall Rules, it isn’t obvious whether an IP address is more or less specific when compared to a URI. It gets even more complex when a rule can have negation, as a rule that matches a /8 is less specific than a rule that does not match a single IP (the whole address space except this IP). This is one of the gotchas of Firewall Rules, but it is also a source of its power: you can invert your firewall into a positive security model. As we couldn’t answer specificity using the expression alone, we needed another aspect of the Firewall Rule to provide us this guidance, and we realised that customers already had a mechanism to tell us which rules were important… the action. Given a set of rules, we logically order them according to their action (Log has the highest priority, Block the lowest):

Log
Allow
Challenge (CAPTCHA)
JavaScript Challenge
Block

For the vast majority of scenarios this proves to be good enough. What about when that isn’t good enough, though? Do we have examples of complex configuration that break that approach? Yes! Because the expression language within Firewall Rules is so powerful, and we can support many Firewall Rules, we can now create different firewall configurations for different parts of a web site (i.e. /blog could have wholly different rules than /shop) or for different audiences (i.e. visitors from your office IPs might be allowed on a given URI but everyone else trying to access that URI may be blocked). In this scenario you need the ability to say “run all of these rules first, and then run the other rules”. In single-machine firewalls like iptables, the OS X Firewall, or your home router firewall, the firewall rules are explicitly ordered, so that when you match the first rule it terminates execution and you won’t hit the next rule. When you add a new rule, the entire set of rules is republished, and this helps to guarantee this behaviour. But this approach does not work well for a cloud firewall, as a large website with many web applications typically also has a large number of firewall rules. Republishing all of these rules in a single transaction can be slow, and if you are adding lots of rules quickly this can lead to delays before the final state is live. If we published individual rules and supported explicit ordering, we risked race conditions where two rules that were both configured in position 4 might exist at the same time, and the behaviour if they matched the request would be non-deterministic. We solved this by introducing a priority value, where 1 is the highest priority and, as an int32, you can create low-priority rules all the way down to priority = 2147483647. Not providing a priority value is the equivalent of “lowest”, and such a rule runs after all rules that have a priority. Priority does not have to be a unique value within Firewall Rules.
If two rules are of equal priority, then we resort to the order of precedence of the actions as defined earlier (a small sketch of this resolution logic follows at the end of this post). This provides us a few benefits:

Because priority allows rules that share a priority to exist, we can publish rules one at a time… when you add a new rule, the speed at which we deploy it globally is not affected by the number of rules you already have.
If you do have existing rules in a system that does sequentially order the rules, you can import those into Firewall Rules and preserve their order, i.e. this rule should always run before that rule.
But you don’t have to use priority exclusively for ordering, as you can also use priority for grouping. For example, you may say that all spammers are priority = 10000 and all trolls are priority = 5000.

Finally… let’s look at those fields again: http.request.path. Notice that http prefix? By following the naming convention Wireshark has (see their Display Filter Reference), we have not limited this firewall capability solely to an HTTP web proxy. It is a small leap to imagine that if a Spectrum application declares itself as running SMTP, we could also define fields that understand SMTP and allow filtering of traffic on other application protocols, or even at layer 4. What we have built in Firewall Rules gives us these features today:

A rich expression language capable of targeting traffic precisely and in real time
Fast global deployment of individual rules
A lot of control over the management and organisation of Firewall Rules

And in the future, we have a product that can go beyond HTTP and be a true Cloud Firewall for all protocols… the Cloudflare Firewall with Firewall Rules.
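As promised above, here is a small, hypothetical Go sketch of the rule-selection logic described in this post: lowest priority value wins first, and action precedence breaks ties. The action identifiers and the rule struct are assumptions made for the example, not Cloudflare's actual types.

package main

import (
	"fmt"
	"sort"
)

// actionRank mirrors the precedence list above: Log wins ties, Block loses them.
var actionRank = map[string]int{
	"log":          0,
	"allow":        1,
	"challenge":    2,
	"js_challenge": 3,
	"block":        4,
}

type rule struct {
	priority int32  // 1 is the highest priority; an unset priority would sort after all others
	action   string // hypothetical action identifiers, see actionRank
}

// winner picks which matched rule's action to execute: lowest priority value
// first, then action precedence when two rules share a priority.
func winner(matched []rule) rule {
	sort.Slice(matched, func(i, j int) bool {
		if matched[i].priority != matched[j].priority {
			return matched[i].priority < matched[j].priority
		}
		return actionRank[matched[i].action] < actionRank[matched[j].action]
	})
	return matched[0]
}

func main() {
	matched := []rule{
		{priority: 10000, action: "block"},    // e.g. the "spammers" group
		{priority: 5000, action: "challenge"}, // e.g. the "trolls" group
		{priority: 5000, action: "log"},       // same priority: log wins on precedence
	}
	fmt.Printf("execute: %+v\n", winner(matched))
}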

How Much Is A Website Domain Name?

HostGator Blog -

The post How Much Is A Website Domain Name? appeared first on HostGator Blog. If you’re looking to build a website, then one of the first things you’ll need to do is buy a domain name for your new project. But how much is a domain name actually going to cost you? Well, that depends, as the cost of a domain name can vary based on several factors. For example, be prepared to spend a bit of money on domains that someone else already owns. Some older domains have gone for millions of dollars, specifically domains that are very broad, such as single-word domain names like insurance.com, hotels.com, and investing.com. So, if you want a single-word, previously owned domain, then be prepared to spend a lot of money on your domain name. On the other side of the equation, you have new domains. The average cost for a brand new domain will typically be anywhere from $10-$12, depending upon the registrar you choose and the length of your registration contract. Below you’ll learn why some domains are more valuable than others, the average costs for getting a domain name, and some tips to help you get started. How Much Does a Website Domain Cost? If you’re just getting started online, then the best course of action is to choose a new domain. A new domain will be cheaper, and it allows you to build your own brand from scratch. Typically, you’ll be paying around $10-$12 for a new domain, depending upon the domain name extension you choose. Some extensions like .biz, .xyz, or .info, along with hundreds of others, will end up being incredibly cheap because internet users aren’t quite used to those extensions and they don’t pack the same kind of punch. Some extensions will lead to even higher registration fees, like .co, .ai, .io and others. Other costs you’ll want to research include: Renewal fees. Some registrars will charge a cheap registration fee, along with a pricier renewal fee. Make sure you’re aware of the price increase upon renewal (if there is one). Privacy fees. Some registrars will charge additional fees to improve the privacy of your domain. This service will hide your contact information from public records. Transfer fees. Sometimes you might want to switch registrars. Make sure there aren’t any hefty fees associated with migrating your domain out of your current registrar. Buying a New Domain Name Most people go with the option of purchasing a new domain name. It’s the quickest and cheapest option, and with a little creativity, you can find a solid domain. First, you’ll need to choose a domain registrar. For the sake of example, let’s say you’re going to register your domain with HostGator:

Navigate to https://www.hostgator.com/domains
Type in your domain of choice. This will let you know if the domain, along with your desired extension, is available (a small availability-check sketch follows below).
If it’s available, then follow through with the purchase.

Buying an Old Domain Name If you’ve found the perfect domain, but someone else already owns it, follow these steps to see what you can do if your domain name is already taken. This is only recommended if you have more cash to spend and require a very specific domain for branding purposes. You may have to visit a domain auction site and make an offer for the domain. Keep in mind that you’ll probably have to do a lot of back-and-forth negotiation to secure the domain, and the price could be very steep. You can also find expired domains that might have an existing link profile and authority. Just make sure you thoroughly research the domain before making a purchase.
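For the curious, the availability check behind a registrar's search box can be approximated with a raw WHOIS query. Here is a minimal Go sketch assuming a .com domain and VeriSign's public WHOIS server; it is an illustration of the protocol, not how HostGator's search actually works.

package main

import (
	"bufio"
	"fmt"
	"net"
	"strings"
	"time"
)

// available does a raw WHOIS query for a .com domain against VeriSign's
// WHOIS server (TCP port 43) and checks for the "No match" marker.
func available(domain string) (bool, error) {
	conn, err := net.DialTimeout("tcp", "whois.verisign-grs.com:43", 5*time.Second)
	if err != nil {
		return false, err
	}
	defer conn.Close()

	// The WHOIS protocol is just the query followed by CRLF.
	fmt.Fprintf(conn, "%s\r\n", domain)

	var sb strings.Builder
	scanner := bufio.NewScanner(conn)
	for scanner.Scan() {
		sb.WriteString(scanner.Text())
		sb.WriteString("\n")
	}
	return strings.Contains(sb.String(), "No match for"), scanner.Err()
}

func main() {
	free, err := available("example.com")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("available:", free)
}

If the reply contains "No match for", the name is unregistered; registrars layer pricing, premium-name checks, and extension-specific servers on top of this basic lookup.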
Domain Name Fees to Watch Out For Sometimes you might not be getting as good a deal as you think when registering your domain name. Below you’ll learn about some hidden charges you’ll want to watch out for: Hidden fees. A lot of times, what might seem like a good deal up front won’t actually be a good deal. Sometimes fees will be buried within the terms of service. Look out for things like transfer fees, increased renewal fees, long-term domain contracts and more. Short-term discounts. Some domain registrars will offer seemingly good discounts, but the discount will only apply if you register for a long-term contract or pay for multiple years up front. Short-term coupons can help you get a cheap domain up front, but make sure the costs won’t increase drastically after the first year. Scams. Some domain registrars that offer ridiculously cheap, or even free, domain names will end up charging very high administration fees, or even manipulating your WHOIS records. Make sure you’re only buying your domain name from a reputable seller. Which Option is Best for Me? For most people, buying a new domain is going to be the preferred route to take. You’ll not only get the best deal, but the registration and purchase process will be the simplest. In time, as your experience and online assets grow, it might make sense to negotiate for a domain or purchase an expired domain. But when you’re just starting out, make it easy on yourself and find a solid new domain. Find the post on the HostGator Blog
