Industry Buzz

New – Convert Your Single-Region Amazon DynamoDB Tables to Global Tables

Amazon Web Services Blog -

Hundreds of thousands of AWS customers are using Amazon DynamoDB. In 2017, we launched DynamoDB global tables, a fully managed solution to deploy multi-region, multi-master DynamoDB tables without having to build and maintain your own replication solution. When you create a global table, you specify the AWS Regions where you want the table to be available. DynamoDB performs all of the necessary tasks to create identical tables in these regions and propagate ongoing data changes to all of them.

AWS customers are using DynamoDB global tables for two main reasons: to provide a low-latency experience to their clients and to facilitate their backup or disaster recovery process. Latency is the time it takes for a piece of information to travel through the network and back. Lower-latency apps have higher customer engagement and generate more revenue. Deploying your backend to multiple regions close to your customers allows you to reduce the latency in your app. Having a full copy of your data in another region also makes it easy to switch traffic to that region if you break your regional setup, or in the exceedingly rare case of a regional failure. As our CTO, Dr. Werner Vogels, wrote: “failures are a given, and everything will eventually fail over time.”

Starting today, you can convert your existing DynamoDB tables to global tables with a few clicks in the AWS Management Console, using the AWS Command Line Interface (CLI), or via the Amazon DynamoDB API. Previously, only empty tables could be converted to global tables, so you had to guess the regional usage of a table at the time you created it. Now you can go global, or extend existing global tables to additional regions, at any time. Your applications can continue to use the table while we set up the replication. When you add a region to your table, DynamoDB begins populating the new replica using a snapshot of your existing table.
Your applications can continue writing to your existing region while DynamoDB builds the new replica, and all in-flight updates are eventually replicated to the new replica.

To create a DynamoDB global table using the AWS Command Line Interface (CLI), I first create a local table in the US West (Oregon) Region (us-west-2):

aws dynamodb create-table --region us-west-2 \
    --table-name demo-global-table \
    --key-schema AttributeName=id,KeyType=HASH \
    --attribute-definitions AttributeName=id,AttributeType=S \
    --billing-mode PAY_PER_REQUEST

The command returns:

{
    "TableDescription": {
        "AttributeDefinitions": [
            { "AttributeName": "id", "AttributeType": "S" }
        ],
        "TableName": "demo-global-table",
        "KeySchema": [
            { "AttributeName": "id", "KeyType": "HASH" }
        ],
        "TableStatus": "CREATING",
        "CreationDateTime": 1570278914.419,
        "ProvisionedThroughput": {
            "NumberOfDecreasesToday": 0,
            "ReadCapacityUnits": 0,
            "WriteCapacityUnits": 0
        },
        "TableSizeBytes": 0,
        "ItemCount": 0,
        "TableArn": "arn:aws:dynamodb:us-west-2:400000000003:table/demo-global-table",
        "TableId": "0a04bd34-bbff-42dd-ae18-78d05ce641fd",
        "BillingModeSummary": { "BillingMode": "PAY_PER_REQUEST" }
    }
}

Once the table is created, I insert some items:

aws dynamodb batch-write-item --region us-west-2 --request-items file://./batch-write-items.json

(The JSON file is available as a gist.)

Then, I update the table to add a region, the US East (N. Virginia) Region (us-east-1):

aws dynamodb update-table --region us-west-2 --table-name demo-global-table \
    --cli-input-json '{ "ReplicaUpdates": [ { "Create": { "RegionName": "us-east-1" } } ] }'

The command returns a long JSON document; the attributes to pay attention to are:

{
    ...
    "TableStatus": "UPDATING",
    "TableSizeBytes": 124,
    "ItemCount": 3,
    "StreamSpecification": {
        "StreamEnabled": true,
        "StreamViewType": "NEW_AND_OLD_IMAGES"
    },
    "LatestStreamLabel": "2019-10-22T19:33:37.819",
    "LatestStreamArn": "arn:aws:dynamodb:us-west-2:400000000003:table/demo-global-table/stream/2019-10-22T19:33:37.819"
    ...
}

I can make the same update in the AWS Management Console. I select the table to update and click Global Tables. Enabling streaming is a requirement for global tables, so I first click Enable stream, then Add region. I choose the region I want to replicate to, for this example EU West (Ireland), and click Create replica.

DynamoDB asynchronously replicates the table to the new region. I monitor the progress of the replication in the AWS Management Console; the table’s status eventually changes from Creating to Active. I can also check the status by calling the DescribeTable API and verifying that TableStatus = Active. After a while, I can query the table in the new region:

aws dynamodb get-item --region eu-west-1 \
    --table-name demo-global-table \
    --key '{"id" : {"S" : "0123456789"}}'

{
    "Item": {
        "firstname": { "S": "Jeff" },
        "id": { "S": "0123456789" },
        "lastname": { "S": "Barr" }
    }
}

Starting today, you can update existing local tables to global tables. In a few weeks, we’ll release a tool that enables you to update your existing global tables to take advantage of this new capability. The update itself takes a few minutes at most, and your table remains available to your applications during the update process.

Other Improvements

We are also simplifying the internal mechanism used for data synchronization. Previously, DynamoDB global tables leveraged DynamoDB Streams and added three attributes (aws:rep:*) to your schema to keep your data in sync. DynamoDB now manages replication natively. It does not expose synchronization attributes in your data and it does not consume additional write capacity: only one write operation occurs in each region of your global table, which reduces the replicated write capacity required on your table. Because of that, a second DynamoDB Streams record is no longer published, and the three aws:rep:* attributes that were previously populated are no longer inserted in the item record. These changes have two consequences for your apps.
First, they reduce your DynamoDB costs when using global tables, because no extra write capacity is required to manage the synchronization. Second, if your application relies on the three technical attributes (aws:rep:*), it requires a slight code change; in particular, your DynamoDB mapper must not require the aws:rep:* attributes to exist in the item record.

With this change, we are also updating the UpdateTable API. Any operation that modifies global secondary indexes (GSIs), billing mode, server-side encryption, or write capacity units on a global table is applied to all other replicas asynchronously.

Availability

Improved Amazon DynamoDB global tables are available today in the 13 regions where DynamoDB global tables are available, and more regions are planned for the future. As of today, the list of AWS Regions is us-east-1 (Northern Virginia), us-west-2 (Oregon), us-east-2 (Ohio), us-west-1 (Northern California), ap-northeast-2 (Seoul), ap-southeast-1 (Singapore), ap-southeast-2 (Sydney), ap-northeast-1 (Tokyo), eu-central-1 (Frankfurt), eu-west-1 (Ireland), eu-west-2 (London), GovCloud (US-East), and GovCloud (US-West).

There is no change in pricing. You pay only for the resources you use in the additional regions and for the data transfer between regions.

This update addresses the most common feedback we have heard from you and will serve as the platform on which we will build additional features in the future. Continue to tell us how you are using global tables and what is important for your apps. -- seb
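The DescribeTable check described above is easy to automate while you wait for a new replica to come online. A minimal sketch in Python (the helper name is mine; the response shape follows the DescribeTable output, which lists each replica's status under Table.Replicas):

```python
def replication_ready(describe_table_response):
    """Return True when the table and all of its replicas are ACTIVE.

    Expects the dict shape returned by DynamoDB's DescribeTable API,
    e.g. boto3's client("dynamodb").describe_table(TableName=...).
    """
    table = describe_table_response["Table"]
    if table.get("TableStatus") != "ACTIVE":
        return False
    # Global tables list replicas under "Replicas"; each entry carries
    # its own ReplicaStatus (CREATING, ACTIVE, ...).
    replicas = table.get("Replicas", [])
    return all(r.get("ReplicaStatus") == "ACTIVE" for r in replicas)


# Trimmed example while the us-east-1 replica is still being built:
response = {
    "Table": {
        "TableName": "demo-global-table",
        "TableStatus": "ACTIVE",
        "Replicas": [{"RegionName": "us-east-1", "ReplicaStatus": "CREATING"}],
    }
}
print(replication_ready(response))  # False until the replica turns ACTIVE
```

In a polling loop, you would call DescribeTable every few seconds and route traffic to the new region only once this returns True.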

Announcing CloudTrail Insights: Identify and Respond to Unusual API Activity

Amazon Web Services Blog -

Building software in the cloud makes it easy to instrument systems for logging from the very beginning. With tools like AWS CloudTrail, tracking every action taken on AWS accounts and services is straightforward, providing a way to find the event that caused a given change. But not all log entries are useful. When things are running smoothly, those log entries are like the steady, reassuring hum of machinery on a factory floor. When things start going wrong, that hum can make it harder to hear which piece of equipment has gone a bit wobbly. The same is true with large-scale software systems: the volume of log data can be overwhelming, and sifting through those records to find actionable information is tedious. It usually requires a lot of custom software or custom integrations, and can result in false positives and alert fatigue when new services are added. That’s where software automation and machine learning can help.

Today, we’re launching AWS CloudTrail Insights in all commercial AWS regions. CloudTrail Insights automatically analyzes write management events from CloudTrail trails and alerts you to unusual activity. For example, if there is an increase in TerminateInstances events that differs from established baselines, you’ll see it as an Insight event. These events make finding and responding to unusual API activity easier than ever.

Enabling AWS CloudTrail Insights

CloudTrail tracks user activity and API usage. It provides an event history of AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. With the launch of AWS CloudTrail Insights, you can enable machine learning models that detect unusual activity in these logs with just a few clicks. AWS CloudTrail Insights will analyze historical API calls, identifying usage patterns and generating Insight events for unusual activity.
You can also enable Insights on a trail from the AWS Command Line Interface (CLI) by using the put-insight-selectors command:

$ aws cloudtrail put-insight-selectors --trail-name trail_name \
    --insight-selectors '[{"InsightType": "ApiCallRateInsight"}]'

Once enabled, CloudTrail Insights sends events to the S3 bucket specified on the trail details page. Events are also sent to CloudWatch Events, and optionally to a CloudWatch Logs log group, just like other CloudTrail events. This gives you options when it comes to alerting, from sophisticated rules that respond to CloudWatch Events to custom AWS Lambda functions. After you enable Insights, historical events for the trail are analyzed, and anomalous usage patterns appear in the CloudTrail console within 30 minutes.

Using CloudTrail Insights

In this post we’ll take a look at some AWS CloudTrail Insights events from the AWS console. If you’d like to view Insight events from the AWS CLI, you use the CloudTrail LookupEvents call with the event-category parameter:

$ aws cloudtrail lookup-events --event-category insight [--max-items] [--lookup-attributes]

Quickly scanning the list of CloudTrail Insights, the RunInstances event jumps out at me. Spinning up more EC2 instances can be expensive, and I’ve definitely misconfigured things and created more instances than needed before, so I want to take a closer look. Let’s filter the list down to just these events and see what we can learn from AWS CloudTrail Insights.

Let’s dig in to the latest event. Here we see that over the course of one minute, there was a spike in RunInstances API call volume. From the Insights graph, we can see the raw event as JSON.
{
    "Records": [
        {
            "eventVersion": "1.07",
            "eventTime": "2019-11-07T13:25:00Z",
            "awsRegion": "us-east-1",
            "eventID": "a9edc959-9488-4790-be0f-05d60e56b547",
            "eventType": "AwsCloudTrailInsight",
            "recipientAccountId": "-REDACTED-",
            "sharedEventID": "c2806063-d85d-42c3-9027-d2c56a477314",
            "insightDetails": {
                "state": "Start",
                "eventSource": "",
                "eventName": "RunInstances",
                "insightType": "ApiCallRateInsight",
                "insightContext": {
                    "statistics": {
                        "baseline": { "average": 0.0020833333 },
                        "insight": { "average": 6 }
                    }
                }
            },
            "eventCategory": "Insight"
        },
        {
            "eventVersion": "1.07",
            "eventTime": "2019-11-07T13:26:00Z",
            "awsRegion": "us-east-1",
            "eventID": "33a52182-6ff8-49c8-baaa-9caac16a96ce",
            "eventType": "AwsCloudTrailInsight",
            "recipientAccountId": "-REDACTED-",
            "sharedEventID": "c2806063-d85d-42c3-9027-d2c56a477314",
            "insightDetails": {
                "state": "End",
                "eventSource": "",
                "eventName": "RunInstances",
                "insightType": "ApiCallRateInsight",
                "insightContext": {
                    "statistics": {
                        "baseline": { "average": 0.0020833333 },
                        "insight": { "average": 6 },
                        "insightDuration": 1
                    }
                }
            },
            "eventCategory": "Insight"
        }
    ]
}

Here we can see that the baseline API call volume is 0.002. That means there’s usually one call to RunInstances roughly once every 500 minutes, so the activity we see in the graph is definitely not normal. By clicking over to the CloudTrail Events tab, we can see the individual events that are grouped into this Insight event. It looks like this was probably normal EC2 autoscaling activity, but I still want to dig in and confirm. By expanding an event in this tab and clicking “View Event,” I can head directly to the event in CloudTrail for more information. After reviewing the event metadata and associated EC2 and IAM resources, I’ve confirmed that while this behavior was unusual, it’s not a cause for concern. It looks like autoscaling did what it was supposed to and that the correct type of instance was created.
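The arithmetic behind that judgment is easy to script against the Insight record itself. A minimal sketch in Python (the function name and the trimmed record are mine; the field paths follow the JSON shown above):

```python
def spike_factor(insight_record):
    """Ratio of the observed API call rate to the baseline rate.

    Expects a single record from an AwsCloudTrailInsight event;
    both averages are API calls per minute.
    """
    stats = insight_record["insightDetails"]["insightContext"]["statistics"]
    baseline = stats["baseline"]["average"]  # normal calls per minute
    observed = stats["insight"]["average"]   # calls per minute during the spike
    return observed / baseline


# Trimmed to only the fields the function reads, using the values above.
record = {
    "insightDetails": {
        "insightContext": {
            "statistics": {
                "baseline": {"average": 0.0020833333},
                "insight": {"average": 6},
            }
        }
    }
}
print(round(spike_factor(record)))  # the observed rate is roughly 2880x baseline
```

A threshold on this ratio is one simple way to decide which Insight events are worth a human look.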
Things to Know

Before you get started, here are some important things to know:

- CloudTrail Insights costs $0.35 for every 100,000 write management events analyzed for each Insight type. At launch, API call volume insights are the only type available.
- Activity baselines are scoped to the region and account in which the CloudTrail trail is operating.
- After an account enables Insights events for the first time, if unusual activity is detected, you can expect to receive the first Insight events within 36 hours.
- New unusual activity is logged as it is discovered, sending Insight events to your destination S3 buckets and the AWS console within 30 minutes in most cases.

Let me know if you have any questions or feature requests, and happy building! — Brandon

The December 2019 promo code is an obligatory snack that’s not that great Blog -

2019 has flown past quicker than we can imagine, so here we are with the last .com/.net promo code of the year. Use this code throughout December to save on your upcoming .com and .net renewals. Use the promo code CANDYCANE Dec. 1 through 31, 2019 to renew your .com domains for $10.99 and .net […] The post The December 2019 promo code is an obligatory snack that’s not that great appeared first on Blog.

Cyber Monday 2019 deals are here Blog -

Cyber Monday kicks off early this year, with deals starting on Nov. 29. Save on some of our favorite domains and products over a four-day period when you shop our Cyber Monday sale. The sale includes $6.99 .COM, $1.99 .CO, 50% off G Suite, and more. From Friday, Nov. 29 at 12:01 a.m. MST through […] The post Cyber Monday 2019 deals are here appeared first on Blog.

Improving Containers by Listening to Customers

Amazon Web Services Blog -

At AWS, we build our product roadmap based upon feedback from our customers. The following three new features have all come about because customers asked us to solve specific issues they face when building and operating sophisticated container-based applications.

Managed Node Groups for Amazon Elastic Kubernetes Service

Our customers have told us that they want to focus on building innovative solutions for their customers, and focus less on the heavy lifting of managing Kubernetes infrastructure. Amazon Elastic Kubernetes Service already provides you with a standard, highly available Kubernetes cluster control plane, and now AWS can also manage the nodes (Amazon Elastic Compute Cloud (EC2) instances) for your Kubernetes cluster. Amazon Elastic Kubernetes Service makes it easy to apply bug fixes and security patches to nodes, and to update them to the latest Kubernetes versions along with the cluster. The Amazon Elastic Kubernetes Service console and API give you a single place to understand the state of your cluster; you no longer have to jump around different services to see all of the resources that make it up. You can provision managed nodes today when you create a new Amazon EKS cluster. There is no additional cost to use Amazon EKS managed node groups; you pay only for the Amazon EKS cluster and the AWS resources the node groups provision. To find out more, check out this blog: Extending the EKS API: Managed Node Groups.

Managing Your Container Logs with AWS FireLens

Customers building container-based applications told us that they wanted more flexibility when it came to logging; however, they didn’t want to install, configure, or troubleshoot logging agents. AWS FireLens gives you this flexibility: you can now forward container logs to storage and analytics tools by configuring your task definition in Amazon ECS or AWS Fargate.
This means that developers have their containers send logs to stdout, and FireLens picks up these logs and forwards them to the destination that has been configured. FireLens works with the open-source projects Fluent Bit and Fluentd, which means that you can send logs to any destination supported by either of those projects. There are a lot of configuration options with FireLens, and you can choose to filter logs and even have logs sent to multiple destinations. For more information, you can take a look at the demo I wrote earlier in the week: Announcing Firelens – A New Way to Manage Container Logs. If you would like a deeper understanding of how the technology works and was built, Wesley Pettit goes into even further depth on the Containers Blog in his article: Under the hood: FireLens for Amazon ECS Tasks.

Amazon Elastic Container Registry EventBridge Support

Customers using Amazon Elastic Container Registry have told us they want to be able to start a build process when new container images are pushed to Elastic Container Registry. We have therefore added Amazon Elastic Container Registry EventBridge support. Using the events that Elastic Container Registry now publishes to EventBridge, you can trigger actions such as starting a pipeline or posting a message somewhere like Amazon Chime or Slack when your image is successfully pushed. To learn more about this new feature, check out the following blog post, where I give a more detailed explanation and demo: EventBridge support in Amazon Elastic Container Registry.

More to Come

These three new releases add to other great releases we have already had this year, such as Savings Plans, Amazon EKS Windows Containers support, and Native Container Image Scanning in Amazon ECR. We are still listening, and we need your feedback, so if you have a feature request or a pain point with your container applications, please let us know by creating or commenting on issues in our public containers roadmap.
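The FireLens wiring described above lives in the ECS task definition: a Fluent Bit sidecar carries the firelensConfiguration, and the application container points its log driver at it. A hypothetical fragment (container names, image tags, and the CloudWatch destination options are illustrative, not a definitive configuration):

```json
{
  "containerDefinitions": [
    {
      "name": "log_router",
      "image": "amazon/aws-for-fluent-bit:latest",
      "essential": true,
      "firelensConfiguration": { "type": "fluentbit" }
    },
    {
      "name": "app",
      "image": "my-app:latest",
      "essential": true,
      "logConfiguration": {
        "logDriver": "awsfirelens",
        "options": {
          "Name": "cloudwatch",
          "region": "us-west-2",
          "log_group_name": "firelens-demo",
          "log_stream_prefix": "app-"
        }
      }
    }
  ]
}
```

Swapping the options under "Name" for another Fluent Bit output plugin is how you would route the same stdout logs to a different destination.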
Sometime in the future, I might write about a new feature that was inspired by you. — Martin

EventBridge Support in Amazon Elastic Container Registry

Amazon Web Services Blog -

Many of our customers require a secure and private place to store their container images, and that’s why they use our fully managed container registry, Amazon Elastic Container Registry. We recently added support for Amazon EventBridge so that you can trigger actions when images are pushed or deleted. These actions can trigger a continuous integration, continuous deployment pipeline when an image is pushed, or post a message to your DevOps team Slack channel when an image has been deleted.

This new capability can even enable complicated workflows. For example, customers can use the image push event on a base image to trigger a rebuild of images built on top of that base. In this scenario, a base image might be rebuilt weekly to pick up the latest security patches. A push event from the base image repository can trigger other builds, so that all derivative images are patched, too.

To show you how to go about using this new capability, I thought I’d open up the console and work through an example of how all the pieces fit together. In the Amazon EventBridge console, I create a new rule, and I enter a unique name and description. Next, I scroll down to Define pattern and begin to customise the type of event pattern that I want to use. I leave the default Event pattern radio button selected, and choose Pre-defined pattern by service. Since Elastic Container Registry is an AWS service, I select AWS as the Service Provider. In the Service Name section, you can select one of the many different AWS services as the event source; I am going to choose the newest addition to this list, Elastic Container Registry (ECR). Lastly in this section, I select ECR Image Action as the Event type. ECR Image Action covers both PUSH and DELETE action types. Next, I’m asked to configure which event bus I want to use. For this example, I select the AWS default event bus that comes with every AWS account.
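Written out as JSON, the pattern assembled in the console amounts to something like the following sketch (the detail fields follow my understanding of the ECR event format, so treat them as illustrative rather than definitive):

```json
{
  "source": ["aws.ecr"],
  "detail-type": ["ECR Image Action"],
  "detail": {
    "action-type": ["PUSH", "DELETE"],
    "result": ["SUCCESS"]
  }
}
```

The same pattern can be supplied to aws events put-rule --event-pattern if you prefer the CLI to the console.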
Now that I have identified where my events are coming from, I need to say where I want them to go. We call these targets, and there are plenty of options. For example, I could send the event to a Lambda function, a Kinesis stream, or any one of the wide variety of AWS targets. To keep things simple, I’m going to choose to invoke an Amazon Simple Notification Service (SNS) topic. This topic is called ImageAction, and I have subscribed to it so that I receive an email when new messages arrive.

Back over on my laptop, I push a new version of my container image to my repository in Elastic Container Registry. In the Elastic Container Registry console, I can see that my Docker image was successfully pushed. I’m now going to select the image and click the Delete button, which will delete my new image. This sends both a PUSH and a DELETE event through to my SNS topic, which in turn delivers two emails to me as a subscriber to that topic. If I open up Outlook, sure enough, I have two (admittedly not pretty) emails with the respective action types of PUSH and DELETE.

So there you have it: you can now wire up events in Elastic Container Registry and enable exciting and wonderful things to happen as a result. Amazon EventBridge support in Amazon Elastic Container Registry is available in all public AWS Regions and GovCloud (US). Try it now in the Amazon EventBridge console. Happy Eventing! — Martin

Going Viral Without the Downtime

WP Engine -

For most businesses, generating an influx of website traffic is a dream come true.  More visitors coming to your site likely means more people are interested in your products or services, and an increase in sales is just around the corner.   But what if your website is unable to handle the stress of a sudden… The post Going Viral Without the Downtime appeared first on WP Engine.

Our Expanded Transparency Report: First Half of 2019

LinkedIn Official Blog -

Today we’re publishing our bi-annual Transparency Report for the January-June 2019 reporting period. As always, this includes our Government Requests Report, outlining how we respond to government requests about our members or the content they share. For the first time, we’re also publishing a Community Report. We’re including this new report to provide more visibility into how we enforce our Professional Community Policies and address activity and content that isn’t allowed on our platform....

How Much Does WordPress Hosting Cost?

HostGator Blog -

So you’ve decided to build your website on WordPress. You’re in good company. Because of how robust and easy to use it is, it powers over 34% of all the websites on the web. And as an added bonus, WordPress itself is entirely free. But that doesn’t mean you can create a WordPress website without spending anything. You will still need to budget for some main expenses. Chief amongst them: WordPress hosting.

How Much Does WordPress Hosting Cost?

Most WordPress website owners can expect to spend in the range of $6 to $35 a month for WordPress hosting. But the spectrum of WordPress hosting prices is much broader than that. On the low end, hosting for a WordPress website starts at around $3 a month for a basic shared hosting plan, and it can go up to over $1,000 for dedicated WordPress hosting for enterprise businesses.

What is WordPress Hosting?

If you’re new to running a website, you may not understand why web hosting is important to invest in at all. But anyone figuring out their WordPress website pricing must treat the cost of web hosting as a non-negotiable expense. If you want your site to be published to the web and accessible to other people, it has to be hosted somewhere. While any web hosting plan you consider should work for a site built on WordPress, WordPress hosting refers to a subset of web hosting plans specifically designed for WordPress sites. Typically that means they offer WordPress-friendly features such as:

- One-click WordPress installation
- Faster loading times, because they’re configured for WordPress specifically
- Automatic WordPress updates
- WordPress-specific security features
- Customer support staff well versed in WordPress

If you’re building a basic hobbyist website on WordPress and just need the most affordable web hosting option you can find, a basic shared web hosting plan should work.
But if you want a higher level of performance for your WordPress site, seeking out WordPress hosting specifically is typically worth it.

5 Factors That Influence the Cost of WordPress Hosting

With such a wide range in the pricing of WordPress hosting, you may wonder what accounts for so much variety, and what it means for what you can expect to pay. There are five main factors that affect what a website owner will spend on WordPress hosting.

1. The web hosting provider you choose

One of the first things you’ll notice when you start looking into WordPress hosting options is that you have a lot of choices. With how ubiquitous WordPress is, it’s not too surprising that a lot of different companies offer web hosting plans that are specifically optimized for WordPress. Every web hosting provider sets their own prices, and most offer a number of different plans at different pricing levels. While the plans and features you see offered across web hosting companies may look similar, the company you choose will influence the experience you have with WordPress hosting. In particular, your choice of web hosting provider matters when it comes to:

- Uptime
- Speed
- Ease of use
- Customer support

We review each of these below.

Uptime

Uptime is the term used to describe the percentage of time your website is up and accessible to other people. All web hosting servers occasionally have to undergo maintenance, which will cause downtime that takes your website offline. And any servers that aren’t well maintained may experience additional periods of downtime when things break or don’t work at full capacity. A reputable web hosting company should promise at least 99% uptime, but most go further with 99.99% uptime or more. Before committing to a web hosting provider, research both what they promise and what their reputation for uptime is.
This matters for all websites, but it is especially important if your WordPress website is for a business, where website downtime costs you money and hurts your professional reputation.

Speed

Website speed plays a key role in how people experience your website. Think about it: when was the last time you patiently waited several seconds for a website to load? If you can remember a time, it must have been a web page you were really interested in. Otherwise, in the fast-moving world of the web, we’re all prone to click away if something doesn’t load fast. Your website speed is directly related to your web hosting provider and plan. The provider you choose isn’t the only factor, but it’s a big one. If their servers are overloaded or not functioning optimally, it could slow your website down. And that will cost you visitors who don’t care enough to stick around if it means waiting.

Ease of use

Web hosting providers typically provide an interface you can use to manage important aspects of your web account. Most call this the cPanel (short for control panel). It’s where you’ll take care of your billing, domain name management, backups, and website files, just to name some of the main things. A well-designed cPanel will make taking care of your website much easier. Before choosing a web hosting platform, you can generally find support materials from the company with information about what the cPanel looks like and how to use it. Make sure it looks intuitive, so you won’t have to waste time learning how to complete basic tasks.

Customer support

Even if the cPanel is easy to figure out and the company is great when it comes to uptime, you may run up against issues managing your website. When that happens, you want to know there’s someone trustworthy and knowledgeable you can turn to. A good web hosting provider will offer 24/7 customer support, with live chat included for emergency situations. Make sure they provide support in the channels of your choice (e.g. phone or live chat), and that their representatives have a reputation for knowing their stuff. A good customer support team can make the difference between loving your web host and hating them.

The best WordPress web hosting providers aren’t necessarily the most expensive. This is one area of life where you can get quality without having to shell out an exorbitant amount. Don’t choose on price alone, though; pay more attention to what you can learn about a hosting provider on their own website and via third-party sources like reviews and awards from fellow customers.

2. The level of storage and bandwidth you need

WordPress hosting companies typically offer a few different plans. While sometimes plans include different features, the main difference between the payment levels is how much storage and bandwidth they provide. A website that’s just a few pages and only gets a couple hundred visitors a month will take up much less space on a web hosting server than one with thousands of pages, lots of rich multimedia, and thousands of regular visitors. Web hosting plans that work great for the first website will therefore be much cheaper than a web hosting plan good enough for the second. Some of the common types of plans you’ll see are:

WordPress cloud hosting – Cloud hosting is a good option for WordPress because it’s more flexible than the other plans. WordPress sites hosted on the cloud tend to load fast, and it’s possible to scale how much you use (and pay) as your needs change.

WordPress shared hosting – Shared hosting is the best choice for smaller websites that don’t get a lot of visitors yet. You pay less in exchange for sharing a server with other websites. You don’t get as much bandwidth because of it, but many websites don’t need all that much bandwidth, particularly newer websites or those for small businesses.

WordPress VPS hosting – For WordPress sites a bit too big and popular for shared hosting, WordPress VPS hosting is a step up.
It costs more, but you share the server with fewer other websites, and each one has a space that’s partitioned off, so there’s no chance of your website being affected by someone else’s traffic.

WordPress dedicated hosting – Big businesses or popular media sites will require their own server to handle the amount of bandwidth a large site with a lot of traffic needs. Many web hosting companies offer plans where you get a dedicated server that’s still managed and maintained for you by the company. The costs of dedicated hosting go up as your needs increase, but to make sure your website performs at the level your visitors expect, higher costs can be worth it.

3. The level of security

We live in an era of rampant data breaches and identity theft. eCommerce website owners have to treat security as a top priority, but even if you don’t sell products or collect any personal information through your site, you still need to be thinking about it. All websites run the risk of getting hacked. Who you choose for WordPress hosting is only one factor in website security, but it matters. A legitimate web hosting provider will offer firewall protection for their servers and include security features like the ability to control file permissions. In addition, you can usually get an SSL certificate either included in your web hosting plan or as an add-on, which adds an extra level of protection to your website.

4. Number of sites

Many basic WordPress hosting plans only allow hosting for one website on your account. But if you’re planning to build multiple sites, you can invest in a plan that allows for two, three, or more. Plans that include more sites will also generally include more space and bandwidth to go with them, but be sure to confirm yours does. Running three sites on one web hosting account will require roughly three times as much space (depending on how big and popular each of the three are, of course).
Having the ability to host more sites is only worth it if all of them still perform at the level you need.

5. The features and extras included in your plan

The other big factor in how much a web hosting plan costs is the specific set of features it includes. All WordPress hosting plans come with some features included, and often you'll have the option to add extras for an additional fee as needed.

Some of the common features you'll see in WordPress hosting packages are:

eCommerce features – Anyone building an eCommerce website with WordPress will want to make sure they have all the eCommerce features required. A lot of those can be obtained with WordPress plugins. But your hosting plan needs to include at least an SSL certificate to ensure proper security, and it will need to be compatible with whatever eCommerce software you go with.

Webmail options – Having email addresses at your domain name makes you look more professional and provides an additional branding option. Some web hosting plans put a limit on the email addresses you can create and manage with your account. Make sure the plan you choose lets you create as many branded email addresses as you need.

Automated backups – Building and maintaining a website takes a lot of work. You could lose it all in a moment if you don't keep your website backed up. Web hosting plans that offer automated website backups reduce your risk of losing everything, without the work of having to remember to perform manual backups.

Security features – As already mentioned, web hosting plans will either come with an SSL certificate included or offer one as an add-on for an extra fee. Some will also include additional security features such as security software, automatic WordPress updates, and automatic malware removal.

Scalability – Some websites see fairly consistent traffic. Others see the number of visitors jump at certain times of year.
A web hosting plan that offers scalability will make it easy to increase the amount of bandwidth you need in real time, so your website's performance can match your needs even as they change.

There isn't always a direct correlation between these features and higher prices. In some cases, an affordable web hosting plan will include features like an SSL certificate or unlimited email addresses for free. Just make sure you're clear on what's included in any WordPress hosting plan you consider before you sign up, so you know whether you'll have to pay extra for something you need.

WordPress Website Pricing: Other Costs to Consider

WordPress hosting isn't the only cost to consider when you build with WordPress. While the CMS offers a lot for free, you should anticipate a few other expenses when working on your website budget:

Domain name

Your domain name—the main address you use on the web (the thing that usually starts with www and ends with something like .com)—is another required expense. Every website needs one if you want people to be able to find you. If you go with a domain name that's available (in other words, one that no one has already bought), you can expect the cost to register your domain to be somewhere in the range of $10-$20.

If you decide on a domain name that someone else already owns, though, expect to pay a much higher price to buy it from them, if it's even for sale. The amount will depend on how willing they are to part with it, and how valuable they consider the keywords included.

Domain privacy

Domain privacy isn't a required expense, but it's one many website owners will prefer to invest in. When you register a domain, you're expected to hand over personal details including your name, email address, and physical address. These get published to the ICANN WHOIS database, where anyone can see them.
If you care about keeping those details private, either for security reasons or simply to protect yourself from spammers and scammers, you'll need to pay for it. Many domain registrars offer domain privacy for a few bucks a year as an add-on when you register your domain name.

Plugins

The main way to add functionality to your WordPress website is with plugins. Developers have created a lot of plugins covering all sorts of features. Many of them are even free. But there's a good chance you'll find yourself needing to invest in one or more premium plugins to gain the full functionality you need for your WordPress site.

Themes

WordPress websites are most often built using a theme. You can find a number of free WordPress themes available. But a lot of website owners—especially those building professional or eCommerce websites—will benefit from investing in a premium theme that provides more extensive features.

Web design

One of the benefits of using a CMS like WordPress is that it's much easier to design a website with it than to work from scratch with a coding language like HTML. But that doesn't mean that building a website with WordPress is something just anyone can do.

If you're not particularly adept at web design, or if you have a specific vision in mind, you'll probably need to hire someone to help you with the web design process. Or at the very least, you'll want to invest in a website builder or a good theme that takes care of some of the design for you.

Sign Up for a WordPress Hosting Solution

Even with these various expenses, building a website with WordPress is a good deal. And by choosing the right WordPress hosting plan, all the benefits you get with WordPress—that it's affordable, easy, and flexible—will extend to your web hosting as well.

HostGator offers WordPress hosting plans that promise easy installation, scalability, unlimited emails, and fast webpage loading speed.
And all of it comes at affordable prices from one of the most respected names in web hosting. Find the post on the HostGator Blog

InMotion Hosting Purchases Building in Virginia Beach and Earns Economic Development Grant

InMotion Hosting Blog -

InMotion Hosting does more than provide web hosting. Real people use InMotion’s products to deploy apps and software services to people who need them. In other words, InMotion helps businesses achieve their wildly important goals. In honor of these efforts, the Virginia Beach Development Authority recently awarded InMotion Hosting an economic development grant. Along with employing a vibrant force of Virginia Beach talent, InMotion recently invested $10,797,500 into the local economy with the purchase of a 61,000 square foot building. InMotion Hosting’s founders, Todd Robinson and Sunil Saxena, recognize the importance of investing in one of America’s growing “digital” cities. Continue reading InMotion Hosting Purchases Building in Virginia Beach and Earns Economic Development Grant at The Official InMotion Hosting Blog.

WP Engine’s London Summit: How AI Can Build a More Meaningful Digital Experience

WP Engine -

Agency heads, digital marketers, and innovative thinkers came together for WP Engine’s third annual European Summit in London last week, which highlighted breakthrough developments in WordPress, personalization, and voice. WP Engine also unveiled a groundbreaking new study on Artificial Intelligence during the event, which took place at the historic Langham’s Hotel—where guests have included Mark… The post WP Engine’s London Summit: How AI Can Build a More Meaningful Digital Experience appeared first on WP Engine.

Welcome to AWS Storage Day

Amazon Web Services Blog -

Everyone on the AWS team has been working non-stop to make sure that re:Invent 2019 is the biggest and best one yet. Way back in September, the entire team of AWS News Bloggers gathered in Seattle for a set of top-secret briefings. We listened to the teams, read their PRFAQs (Press Release + FAQ), and chose the launches that we wanted to work on. We’ve all been heads-down ever since, reading the docs, putting the services to work, writing drafts, and responding to feedback.

Heads-Up

Today, a week ahead of opening day, we are making the first round of announcements, all of them related to storage. We are doing this in order to spread out the launches a bit, and to give you some time to sign up for the appropriate re:Invent sessions. We’ve written individual posts for some of the announcements, and are covering the rest in summary form in this post. Regardless of the AWS storage service that you use, I think you’ll find something interesting and useful here. We are launching significant new features for Amazon Elastic Block Store (EBS), Amazon FSx for Windows File Server, Amazon Elastic File System (EFS), AWS DataSync, AWS Storage Gateway, and Amazon Simple Storage Service (S3). As I never tire of saying, all of these features are available now and you can start using them today! Let’s get into it…

Elastic Block Store (EBS)

The cool new Fast Snapshot Restore (FSR) feature enables the creation of fully-initialized, full-performance EBS volumes.

Amazon FSx for Windows File Server

This file system now includes a long list of enterprise-ready features, including remote management, native multi-AZ file systems, user quotas, and more! Several new features make this file system even more cost-effective, including data deduplication and support for smaller SSD file systems.

Elastic File System (EFS)

Amazon EFS is now available in all commercial AWS regions.
Check out the EFS Pricing page for pricing in your region, or read my original post, Amazon Elastic File System – Production-Ready in Three Regions, to learn more.

AWS DataSync

AWS DataSync was launched at re:Invent 2018. It supports Automated and Accelerated Data Transfer, and can be used for migration, upload & process operations, and backup/DR. Effective November 1, 2019, we are reducing the per-GB price for AWS DataSync from $0.04/GB to $0.0125/GB. For more information, check out the AWS DataSync Pricing page. The new task scheduling feature allows you to periodically execute a task that detects changes and copies them from the source storage system to the destination, with options to run tasks on an hourly, daily, weekly, or custom basis. We recently added support in the Europe (London), Europe (Paris), and Canada (Central) Regions. Today we are adding support in the Europe (Stockholm), Asia Pacific (Mumbai), South America (São Paulo), Asia Pacific (Hong Kong), and AWS GovCloud (US-East) Regions. As a result, AWS DataSync is now available in all commercial and GovCloud regions. For more information, check out the AWS DataSync Storage Day post!

AWS Storage Gateway

AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage. With this launch, you now have access to a new set of enterprise features:

High Availability – Storage Gateway now includes a range of health checks when running within a VMware vSphere High Availability (VMware HA) environment, and can now recover from most service interruptions in under 60 seconds. In the unlikely event that a recovery is necessary, sessions will be maintained and applications should continue to operate unaffected after a pause. The new gateway health checks integrate automatically with VMware through the VM heartbeat.
You have the ability to adjust the sensitivity of the heartbeat from within VMware. To learn more, read Deploy a Highly Available AWS Storage Gateway on a VMware vSphere Cluster.

Enhanced Metrics – If you enable Amazon CloudWatch Integration, AWS Storage Gateway now publishes cache utilization, access pattern, throughput, and I/O metrics to Amazon CloudWatch and makes them visible in the Monitoring tab for each gateway. To learn more, read Monitoring Gateways.

More Maintenance Options – You now have additional control over the software updates that are applied to each Storage Gateway. Mandatory security updates are always applied promptly, and you can control the schedule for feature updates. You have multiple options including day of the week and day of the month, with more coming soon. To learn more, read Managing Storage Gateway Updates.

Increased Performance – AWS Storage Gateway now delivers higher read performance when used as a Virtual Tape Library, and for reading data and listing directories when used as a File Gateway, giving you faster access to data managed through these gateways.

Amazon S3

We launched Same-Region Replication (SRR) in mid-September, giving you the ability to configure in-region replication based on bucket, prefix, or object tag. When an object is replicated using SRR, the metadata, Access Control Lists (ACLs), and object tags associated with the object are also replicated. Once SRR has been configured on a source bucket, any changes to these elements will trigger a replication to the destination bucket. To learn more, read about S3 Replication. Today we are launching a Service Level Agreement (SLA) for S3 Replication, along with the ability to monitor the status of each of your replication configurations. To learn more, read S3 Replication Update: Replication SLA, Metrics, and Events.
AWS Snowball Edge This is, as I have already shared, a large-scale data migration and edge computing device with on-board compute and storage capabilities. We recently launched three free training courses that will help you to learn more about this unique and powerful device: AWS Snowball Edge Getting Started AWS Snowball Edge Logistics and Planning Using AWS Snowball Edge You may also enjoy reading about Data Migration Best Practices with AWS Snowball Edge. — Jeff;  

New – Amazon EBS Fast Snapshot Restore (FSR)

Amazon Web Services Blog -

Amazon Elastic Block Store (EBS) has been around for more than a decade and is a fundamental AWS building block. You can use it to create persistent storage volumes that can store up to 16 TiB and supply up to 64,000 IOPS (Input/Output Operations per Second). You can choose between four types of volumes, making the choice that best addresses your data transfer throughput, IOPS, and pricing requirements. If your requirements change, you can modify the type of a volume, expand it, or change the performance while the volume remains online and active. EBS snapshots allow you to capture the state of a volume for backup, disaster recovery, and other purposes. Once created, a snapshot can be used to create a fresh EBS volume. Snapshots are stored in Amazon Simple Storage Service (S3) for high durability. Our ever-creative customers are using EBS snapshots in many interesting ways. In addition to the backup and disaster recovery use cases that I just mentioned, they are using snapshots to quickly create analytical or test environments using data drawn from production, and to support Virtual Desktop Interface (VDI) environments. As you probably know, the AMIs (Amazon Machine Images) that you use to launch EC2 instances are also stored as one or more snapshots.

Fast Snapshot Restore

Today we are launching Fast Snapshot Restore (FSR) for EBS. You can enable it for new and existing snapshots on a per-AZ (Availability Zone) basis, and then create new EBS volumes that deliver their maximum performance and do not need to be initialized. This performance enhancement will allow you to build AWS-based systems that are even faster and more responsive than before. Faster boot times will speed up your VDI environments and allow your Auto Scaling Groups to come online and start processing traffic more quickly, even if you use large and/or custom AMIs. I am sure that you will soon dream up new applications that can take advantage of this new level of speed and predictability.
Fast Snapshot Restore can be enabled on a snapshot even while the snapshot is being created. If you create nightly backup snapshots, enabling them for FSR will allow you to do fast restores the following day regardless of the size of the volume or the snapshot.

Enabling & Using Fast Snapshot Restore

I can get started in minutes! I open the EC2 Console and find the first snapshot that I want to set up for fast restore. I select the snapshot and choose Manage Fast Snapshot Restore from the Actions menu. Then I select the Availability Zones where I plan to create EBS volumes, and click Save. After the settings are saved, I receive a confirmation, and the console shows me that my snapshot is being enabled for Fast Snapshot Restore. The status progresses from enabling to optimizing, and then to enabled. Behind the scenes and with no extra effort on my part, the optimization process provisions extra resources to deliver the fast restores, proceeding at a rate of one TiB per hour. By contrast, non-optimized volumes retrieve data directly from the S3-stored snapshot on an incremental, on-demand basis. Once the optimization is complete, I can create volumes from the snapshot in the usual way, confident that they will be ready in seconds and pre-initialized for full performance! Each FSR-enabled snapshot supports creation of up to 10 initialized volumes per hour per Availability Zone; additional volume creations will be non-initialized. As my needs change, I can enable Fast Snapshot Restore in additional Availability Zones and I can disable it in Zones where I had previously enabled it. When Fast Snapshot Restore is enabled for a snapshot in a particular Availability Zone, a bucket-based credit system governs the acceleration process. Creating a volume consumes a credit; the credits refill over time, and the maximum number of credits is a function of the FSR-enabled snapshot size.
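The credit system behaves like a token bucket sized so that each FSR-enabled snapshot supports roughly 1 TiB of initialized restores per hour per AZ. Here is a rough sketch of that relationship (my own illustrative model, not an official formula; it treats 1 TiB as 1,024 GiB and applies the 10-volume-per-hour cap mentioned above):

```python
TIB_GIB = 1024  # treat 1 TiB as 1,024 GiB for this sketch

def fsr_credit_bucket(snapshot_gib):
    """Rough model of the FSR credit bucket for one snapshot/AZ pair.

    Credits refill at roughly (1 TiB / snapshot size) per hour, capped by
    the 10-initialized-volumes-per-hour limit, and the bucket always holds
    at least 1 credit of capacity.
    """
    fill_per_hour = min(10, TIB_GIB / snapshot_gib)
    max_balance = max(1, int(min(10, TIB_GIB // snapshot_gib)))
    return max_balance, fill_per_hour

# A 100 GiB snapshot: bucket of 10 credits, refilling 10 per hour.
print(fsr_credit_bucket(100))   # (10, 10)
# A 4 TiB (4,096 GiB) snapshot: bucket of 1 credit, refilling 1 every 4 hours.
print(fsr_credit_bucket(4096))  # (1, 0.25)
```

The two printed examples match the data points quoted in the post; anything this model produces for other sizes is extrapolation.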
Here are some guidelines:

A 100 GiB FSR-enabled snapshot will have a maximum credit balance of 10, and a fill rate of 10 credits per hour.
A 4 TiB FSR-enabled snapshot will have a maximum credit balance of 1, and a fill rate of 1 credit every 4 hours.

In other words, you can do 1 TiB of restores per hour for a given FSR-enabled snapshot within an AZ.

Things to Know

Here are some things to know about Fast Snapshot Restore:

Regions & AZs – Fast Snapshot Restore is available in all Availability Zones of the US East (N. Virginia), US West (Oregon), US West (N. California), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Sydney), and Asia Pacific (Tokyo) Regions.

Pricing – You pay $0.75 for each hour that Fast Snapshot Restore is enabled for a snapshot in a particular Availability Zone, pro-rated and with a minimum of one hour.

Monitoring – You can use the following per-minute CloudWatch metrics to track the state of the credit bucket for each FSR-enabled snapshot:

FastSnapshotRestoreCreditsBalance – The number of volume creation credits that are available.
FastSnapshotRestoreCreditsBucketSize – The maximum number of volume creation credits that can be accumulated.

CLI & Programmatic Access – You can use the enable-fast-snapshot-restores, describe-fast-snapshot-restores, and disable-fast-snapshot-restores commands to create and manage your accelerated snapshots from the command line. You can also use the EnableFastSnapshotRestores, DescribeFastSnapshotRestores, and DisableFastSnapshotRestores API functions from your application code.

CloudWatch Events – You can use the EBS Fast Snapshot Restore State-change Notification event type to invoke Lambda functions or other targets when the state of a snapshot/AZ pair changes. Events are emitted on successful and unsuccessful transitions to the enabling, optimizing, enabled, disabling, and disabled states.
Data Lifecycle Manager – You can enable FSR on snapshots created by your DLM lifecycle policies, specify AZs, and specify the number of snapshots to be FSR-enabled. You can use an existing CloudFormation template to integrate FSR into your DLM policies (read about the AWS::DLM::LifecyclePolicy to learn more). In the Works We are launching with support for snapshots that you own. Over time, we intend to expand coverage and allow you to enable Fast Snapshot Restore for snapshots that you have been granted access to. Available Now Fast Snapshot Restore is available now and you can start using it today! — Jeff;  

S3 Replication Update: Replication SLA, Metrics, and Events

Amazon Web Services Blog -

S3 Cross-Region Replication has been around since early 2015 (New Cross-Region Replication for Amazon S3), and Same-Region Replication has been around for a couple of months. Replication is very easy to set up, and lets you use rules to specify that you want to copy objects from one S3 bucket to another one. The rules can specify replication of the entire bucket, or of a subset based on prefix or tag. You can use replication to copy critical data within or between AWS regions in order to meet regulatory requirements for geographic redundancy as part of a disaster recovery plan, or for other operational reasons. You can copy within a region to aggregate logs, set up test & development environments, and to address compliance requirements. S3’s replication features have been put to great use: since the launch in 2015, our customers have replicated trillions of objects and exabytes of data! Today I am happy to be able to tell you that we are making it even more powerful, with the addition of Replication Time Control. This feature builds on the existing rule-driven replication and gives you fine-grained control based on tag or prefix so that you can use Replication Time Control with the data set you specify. Here’s what you get:

Replication SLA – You can now take advantage of a replication SLA to increase the predictability of replication time.

Replication Metrics – You can now monitor the maximum replication time for each rule using new CloudWatch metrics.

Replication Events – You can now use events to track any object replications that deviate from the SLA.

Let’s take a closer look!

New Replication SLA

S3 replicates your objects to the destination bucket, with timing influenced by object size & count, available bandwidth, other traffic to the buckets, and so forth.
In situations where you need additional control over replication time, you can use our new Replication Time Control feature, which is designed to perform as follows: Most of the objects will be replicated within seconds. 99% of the objects will be replicated within 5 minutes. 99.99% of the objects will be replicated within 15 minutes. When you enable this feature, you benefit from the associated Service Level Agreement. The SLA is expressed in terms of a percentage of objects that are expected to be replicated within 15 minutes, and provides for billing credits if the SLA is not met: 99.9% to 98.0% – 10% credit 98.0% to 95.0% – 25% credit 95% to 0% – 100% credit The billing credit applies to a percentage of the Replication Time Control fee, replication data transfer, S3 requests, and S3 storage charges in the destination for the billing period. I can enable Replication Time Control when I create a new replication rule, and I can also add it to an existing rule: Replication begins as soon as I create or update the rule. I can use the Replication Metrics and the Replication Events to monitor compliance. In addition to the existing charges for S3 requests and data transfer between regions, you will pay an extra per-GB charge to use Replication Time Control; see the S3 Pricing page for more information. Replication Metrics Each time I enable Replication Time Control for a rule, S3 starts to publish three new metrics to CloudWatch. They are available in the S3 and CloudWatch Consoles: I created some large tar files, and uploaded them to my source bucket. I took a quick break, and inspected the metrics. Note that I did my testing before the launch, so don’t get overly concerned with the actual numbers. Also, keep in mind that these metrics are aggregated across the replication for display, and are not a precise indication of per-object SLA compliance. 
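The SLA credit tiers above amount to a simple threshold lookup. Here is a minimal sketch (illustrative only; the input is the percentage of objects replicated within 15 minutes over the billing period, and the handling of exact boundary values such as 99.9% is my assumption):

```python
def rtc_sla_credit(pct_within_15_min):
    """Service credit (percent) for an S3 Replication Time Control billing
    period, per the tiers quoted above."""
    if pct_within_15_min >= 99.9:
        return 0      # SLA met, no credit
    if pct_within_15_min >= 98.0:
        return 10
    if pct_within_15_min >= 95.0:
        return 25
    return 100

print(rtc_sla_credit(99.95))  # 0
print(rtc_sla_credit(99.0))   # 10
print(rtc_sla_credit(96.0))   # 25
print(rtc_sla_credit(50.0))   # 100
```

Keep in mind that the credit applies to the Replication Time Control fee, replication data transfer, S3 request, and destination storage charges for the period, not to your whole bill.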
BytesPendingReplication jumps up right after the upload, and then drops down as the replication takes place. ReplicationLatency peaks and then quickly drops down to zero after S3 Replication transfers over 37 GB from the United States to Australia with a maximum latency of 8.3 minutes. And OperationsPendingCount tracks the number of objects to be replicated. I can also set CloudWatch Alarms on the metrics. For example, I might want to know if I have a replication backlog larger than 75 GB. For this to work as expected, I must set the Missing data treatment to Treat missing data as ignore (maintain the alarm state). These metrics are billed as CloudWatch Custom Metrics.

Replication Events

Finally, you can track replication issues by setting up events on an SQS queue, SNS topic, or Lambda function. Start at the console’s Events section. You can use these events to monitor adherence to the SLA. For example, you could store Replication time missed threshold and Replication time completed after threshold events in a database to track occasions where replication took longer than expected. The first event will tell you that the replication is running late, and the second will tell you that it has completed, and how late it was. To learn more, read about Replication.

Available Now

You can start using these features today in all commercial AWS Regions, excluding the AWS China (Beijing) and AWS China (Ningxia) Regions. — Jeff; PS – If you want to learn more about how S3 works, be sure to attend the re:Invent session: Beyond Eleven Nines: Lessons from the Amazon S3 Culture of Durability.

Amazon FSx For Windows File Server Update – Multi-AZ, & New Enterprise-Ready Features

Amazon Web Services Blog -

Last year I told you about Amazon FSx for Windows File Server — Fast, Fully Managed, and Secure. That launch was well-received, and our customers (Neiman Marcus, Ancestry, Logicworks, and Qube Research & Technologies to name a few) are making great use of the service. They love the fact that they can access their shares from a wide variety of sources, and that they can use their existing Active Directory environment to authenticate users. They benefit from a native implementation with fast, SSD-powered performance, and no longer spend time attaching and formatting storage devices, updating Windows Server, or recovering from hardware failures. Since the launch, we have continued to enhance Amazon FSx for Windows File Server, largely in response to customer requests. Some of the more significant enhancements include: Self-Managed Directories – This launch gave you the ability to join your Amazon FSx file systems to on-premises or in-cloud self-managed Microsoft Active Directories. To learn how to get started with this feature, read Using Amazon FSx with Your Self-Managed Microsoft Active Directory. Fine-Grained File Restoration – This launch (powered by Windows shadow copies) gave your users the ability to easily view and restore previous versions of their files. To learn how to configure and use this feature, read Working with Shadow Copies. On-Premises Access – This launch gave you the power to access your file systems from on-premises using AWS Direct Connect or an AWS VPN connection. You can host user shares in the cloud for on-premises access, and you can also use it to support your backup and disaster recovery model. To learn more, read Accessing Amazon FSx for Windows File Server File Systems from On-Premises. Remote Management CLI – This launch focused on a set of CLI commands (PowerShell Cmdlets) to manage your Amazon FSx for Windows File Server file systems. 
The commands support remote management and give you the ability to fully automate many types of setup, configuration, and backup workflows from a central location.

Enterprise-Ready Features

Today we are launching an extensive list of new features that are designed to address the top-priority requests from our enterprise customers.

Native Multi-AZ File Systems – You can now create file systems that span AWS Availability Zones (AZs). You no longer need to set up or manage replication across AZs; instead, you select the multi-AZ deployment option when you create your file system, and then select the two subnets where your file system will reside. This will create an active file server and a hot standby, each with its own storage, and synchronous replication across AZs to the standby. If the active file server fails, Amazon FSx will automatically fail over to the standby, so that you can maintain operations without losing any data. Failover typically takes less than 30 seconds. The DNS name remains unchanged, making replication and failover transparent, even during planned maintenance windows. This feature is available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (London), Asia Pacific (Tokyo), Asia Pacific (Singapore), and Europe (Stockholm) Regions.

Support for SQL Server – Amazon FSx now supports the creation of Continuously Available (CA) file shares, which are optimized for use by Microsoft SQL Server. This allows you to store your active SQL Server data on a fully managed Windows file system in AWS.

Smaller Minimum Size – Single-AZ file systems can now be as small as 32 GiB (the previous minimum was 300 GiB).

Data Deduplication – You can optimize your storage by seeking out and eliminating low-level duplication of data, with the potential to reduce your storage costs.
The actual space savings will depend on your use case, but you can expect it to be around 50% for typical workloads (read Microsoft’s Data Deduplication Overview and Understanding Data Deduplication to learn more). Once enabled for a file system with Enable-FSxDedup, deduplication jobs are run on a default schedule that you can customize if desired. You can use the Get-FSxDedupStatus command to see some interesting stats about your file system. To learn more, read Using Data Deduplication.

Programmatic File Share Configuration – You can now programmatically configure your file shares using PowerShell commands (this is part of the Remote Management CLI that I mentioned earlier). You can use these commands to automate your setup, migration, and synchronization workflows. The commands include:

New-FSxSmbShare – Create a new shared folder.
Grant-FSxSmbShareAccess – Add an access control entry (ACE) to an ACL.
Get-FSxSmbSession – Get information about active SMB sessions.
Get-FSxSmbOpenFile – Get information about files opened on SMB sessions.

To learn more, read Managing File Shares.

Enforcement of In-Transit Encryption – You can insist that connections to your file shares make use of in-transit SMB encryption:

PS> Set-FSxSmbServerConfiguration -RejectUnencryptedAccess $True

To learn more, read about Encryption of Data in Transit.

Quotas – You can now use quotas to monitor and control the amount of storage space consumed by each user. You can set up per-user quotas, monitor usage, track violations, and choose to deny further consumption to users who exceed their quotas:

PS> Enable-FSxUserQuotas Mode=Enforce
PS> Set-FSxUserQuota jbarr ...

To learn more, read about Managing User Quotas.

Available Now

Putting it all together, this laundry list of new enterprise-ready features and the power to create Multi-AZ file systems makes Amazon FSx for Windows File Server a great choice when you are moving your existing NAS (Network Attached Storage) to the AWS Cloud.
All of these features are available now and you can start using them today in all commercial AWS Regions where Amazon FSx for Windows File Server is available, unless otherwise noted above. — Jeff;  

How to Optimize Your Videos for SEO [11 Tips]

HostGator Blog -

The post How to Optimize Your Videos for SEO [11 Tips] appeared first on HostGator Blog. Video is a big part of how people consume information online. On average, people spend 6.8 hours every day watching online video, and that number’s on an upward trajectory year to year. For businesses, video is essential. 54% of consumers said it’s their preferred format for brand content, making it the top choice—beating out email, social, and blogs. That means if you want to reach people online, video is a good way to do it. But as with any type of content you publish on the web, you should anticipate having a lot of competition. Over 400 hours of video are added to YouTube every minute. Anyone hoping to get their message out using video has to figure out how to rise above the rest of the noise to reach the right people.

How to Optimize Your Videos for SEO

Video SEO isn’t about doing one or two things. It involves a whole strategy. While taking steps to optimize each individual video you create is part of it, making sure you’re making the right videos and building out a channel that earns authority is just as important.

1. Perform keyword research for your videos.

You’re probably already doing keyword research for your overall SEO strategy, and may figure you can just apply that research to your video strategy as well. Sorry, it’s not that easy. The keywords that get a lot of traction on Google are different from the ones that are most popular on YouTube. And most searches on Google don’t produce results with video, unless the searcher makes a special point of clicking on the video option in the menu. Google’s algorithm tracks data on the type of results people click on when doing different types of searches. If they’re not showing video on page one of the search engine results page (SERP) for a keyword, that means people searching that term aren’t usually interested in watching a video for their answers.
Video keyword research is focused on learning what people are searching for on YouTube, and what keywords produce video results in Google. Within YouTube, you can gain a lot of helpful keyword suggestions by paying attention to the autofill feature: start to type a phrase relevant to your business and see what YouTube suggests. To find out what keywords produce video results, do SERP research. Simply type your top keywords into the search bar and see what shows up on the SERP. If videos show up on page one, that's a strong keyword for video SEO.

Both of these tactics for video keyword research can take a lot of time, so you can speed the process up a bit with SEO tools. Some general SEO tools will provide an analysis of what the SERPs look like for different keywords, so you can more easily learn when a keyword produces things like video results or an answer box. And there are keyword research tools that focus specifically on YouTube keywords, such as VidIQ and YTCockpit.

2. Research the competition.

Once you've identified a list of keywords worth focusing on, start doing competitor research. Identify who's ranking in both YouTube and Google for those keywords now. Watch their videos. Pay attention to the titles, descriptions, and tags they use. And visit their channels. Take notes on what you learn, so you can better spot trends in what the winning videos and channels have in common. Those insights will help you figure out how to compete effectively in your space.

3. Create a video SEO marketing strategy.

Use what you learned in the first two steps to make a plan that covers:

- What your YouTube channel's branding will be
- What topics to cover in your videos
- How long each one should be
- How often you'll release a new one
- How you'll promote your videos

Your plan will change and evolve as you collect more data on what works for your audience. But having a clear roadmap will help you get the early traction you need to collect that data to begin with.

4. Optimize your YouTube channel for SEO.

Ideally, you don't just want people to watch one of your videos and move on. You want them to click to see more after that first one. Or even better, click that Subscribe button so your new videos start showing up in their main feed. So before you worry about optimizing each of your videos, make sure you've built a strong channel page. Add an original header image that's visually arresting and says something about your channel's value. Write a killer channel description that tells people why they should subscribe. Consider making a trailer for your channel that tells people what it's all about, and why they should follow it. Having a strong channel will add some extra legitimacy to each video you put out there and help you use your video content to build a more ongoing connection with your audience.

5. Include your target keyword in the video title.

Take care in crafting the best possible video title. Your title needs to accomplish multiple things at once:

- Clearly communicate to potential viewers what the video is about
- Convince them that your video is worth clicking on
- Include your target keyword

If you've chosen good keywords, those three goals won't be in opposition.

6. Include your target keyword in your video script.

When writing the script for your video, include your target keyword somewhere in it. Don't overload it with keywords, of course. And don't try to shoehorn it in where it doesn't fit. But if your video's genuinely about the topic the keyword represents, including it naturally shouldn't be hard to do. This is important because YouTube can parse a lot of what's said in a video, which will influence which videos it decides to include in the results for a search. It also matters because of the next tip.

7. Include a transcription for your YouTube video.

Including a transcription of your video does a couple of important things at once. It ensures there's text that Google can understand.
That makes the page your video is on stronger in terms of Google SEO, since the algorithms have more information to learn what the page is about. It gives your audience more than one way to consume the content: obviously a lot of people like watching video, but some people prefer reading to watching. With a transcription, you give people a choice. And it makes your video more accessible to people with disabilities. You can upload a transcript file to YouTube that is used to provide closed captioning on the video itself, and one experiment found that videos with closed captioning get over 7% more views on average. And because you included your target keyword in your video script, your transcript gets it onto the page another time or two. Learn more about the benefits of adding closed captions to your videos.

8. Write a strong video description and include your video keywords.

Always fill in the description section for your videos. It gives you an additional opportunity to convince visitors that your video is worth watching, and provides another space for you to encourage people to subscribe to your channel. Your video description is one of the best places you have to give YouTube information on what your video is about. Use at least 200 words to describe your video. And of course, use this as another opportunity to get your keyword in there (naturally).

9. Add tags to your YouTube videos.

YouTube also lets you add tags to your video. These probably aren't as strong a ranking signal as the other parts of the page we've covered already, but it never hurts to make good use of this section. Use your main keyword as a tag, along with any secondary keywords on your list that are relevant. If you're not sure what to put here, go back to the notes you took when analyzing your competitors' videos to get some ideas.

10. Select the best thumbnail option.

While all this text is helpful for SEO, one of the main ways YouTube and Google will decide if your video is a helpful resource for the topics it covers is whether people actually watch it. Picking a good thumbnail for your video won't directly impact your SEO, but it's important for getting people to click on your video. Video is a visual medium, so you want the first image people see to be compelling enough to make them want to click to see more. Don't just settle for the default image YouTube grabs; take a minute to figure out the best frame to capture for your thumbnail and customize it.

11. Promote your YouTube videos and channel.

As with website SEO, some of the ranking signals that determine whether your videos show up have to do with communicating to YouTube and Google what your video is about. But others have more to do with gauging the quality of the video: the two search engines care whether or not people see something they like when they click. That means metrics like how many people subscribe to your channel, how many view your video, and how long they watch all have a role to play in whether or not your videos show up in search.

To start getting the kind of impressive metrics that prove to YouTube and Google that your videos are awesome, people have to watch your videos to begin with. So once you've created your channel and started releasing your first videos, actively promote them. Send them to your email list and share them on social media. Embed them on your WordPress website and in related blog posts. Consider whether it's worth promoting your channel via a paid advertising campaign to give it an initial boost. Your first viewers will help you get the metrics that signal quality to the search engines. And if they like the videos, they're likely to share and help promote them as well.

12. Analyze your YouTube metrics.

With every new marketing tactic you try, you'll probably get something wrong.
Even the best content creators and marketers can't fully predict what people will like and not like. But luckily, digital channels come with analytics that tell you what's working and what's not. Pay attention to your metrics on YouTube to learn what your audience likes. Which topics get the most views? Which videos do viewers tend to drop off from early, and at what point do they stop? Which ones do people give a thumbs-up or thumbs-down? Every video you launch will help you gain some new data on what your audience is interested in. Put that to work by revising your video strategy over time to better create a channel that's truly useful to your audience, and that performs better in the search engines.

Why SEO for Videos is Important

Creating great videos requires a significant investment in time and money. If no one ever finds the videos you create, nothing you spend making them will pay off. If you're going to put work into making videos, it's just as important to also put work into making sure people will be able to find them. Search engine optimization (SEO) is mostly associated with text, since so much of it is about using the right terminology to match the language your audience uses when they're searching for information. But video SEO is one of the best tactics you have to make your video content more discoverable.

What is Video SEO?

Video SEO is the collection of steps and best practices you can use to increase the odds that your video will show up in the search engines. But where SEO is typically focused on one main search engine, Google, video SEO has another that's at least as important: YouTube. YouTube is the most visited website in the world. So while you also want to get your videos to show up in Google as often as possible, YouTube should have a special place in how you approach your video SEO strategy. The good news is that what's good for YouTube SEO and what's good for Google SEO are essentially the same.
Google owns YouTube, and 88% of videos in the top 10 results on Google are pulled from YouTube.

Video SEO: One More Channel to Reach Your Audience

Optimizing your videos for SEO is important for getting them in front of more people. But it's always important to remember that showing up in the search engines is never the whole point. It's about connecting with your audience. Using video and promoting what you create via SEO are just another way to establish the initial connection required to provide something of value to your audience. What's even more important is what happens after they click. Strive to create videos that earn the attention and time people give them; that will both improve your SEO and help you gain a more loyal audience that cares about your content.

Find the post on the HostGator Blog
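The drop-off question raised in the metrics tip, at what point do viewers stop watching, can be answered with a few lines of analysis once you export audience-retention data. A hedged sketch: the retention curve below is made up, and it assumes you can export the fraction of viewers still watching at regular intervals (as YouTube Analytics retention reports provide) into a simple list.

```python
def steepest_dropoff(retention, bucket_seconds=10):
    """Find the interval where the largest share of viewers leaves.

    `retention` is a list of fractions (1.0 = everyone still watching),
    sampled every `bucket_seconds` of video. Returns the start second of
    the worst interval and the share of viewers lost during it.
    """
    drops = [retention[i] - retention[i + 1] for i in range(len(retention) - 1)]
    worst = max(range(len(drops)), key=lambda i: drops[i])
    return worst * bucket_seconds, drops[worst]

# Made-up retention curve: a slow decline with a sharp exit between the
# 20s and 30s marks, which often points at a spot worth re-editing
# (for example, a slow intro that runs too long).
curve = [1.00, 0.92, 0.88, 0.61, 0.58, 0.55]
second, lost = steepest_dropoff(curve)
```

For the sample curve above, the worst interval starts at the 20-second mark, where 27% of viewers leave, so that's the segment to inspect first when revising the video.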

How to Customize Facebook Ads for the Customer Journey

Social Media Examiner -

Are you targeting cold, warm, and hot audiences with Facebook ads? Wondering what types of ads work best with each audience? In this article, you’ll discover how to use six types of Facebook ads to move people further along the customer journey.

2 Facebook Ad Types That Work With Cold Audiences

Cold audiences contain new […] The post How to Customize Facebook Ads for the Customer Journey appeared first on Social Media Marketing | Social Media Examiner.

