Industry Buzz

S3 Replication Update: Replication SLA, Metrics, and Events

Amazon Web Services Blog -

S3 Cross-Region Replication has been around since early 2015 (New Cross-Region Replication for Amazon S3), and Same-Region Replication has been around for a couple of months. Replication is very easy to set up, and lets you use rules to specify that you want to copy objects from one S3 bucket to another one. The rules can specify replication of the entire bucket, or of a subset based on prefix or tag.

You can use replication to copy critical data within or between AWS Regions in order to meet regulatory requirements for geographic redundancy as part of a disaster recovery plan, or for other operational reasons. You can copy within a region to aggregate logs, set up test & development environments, and address compliance requirements. S3’s replication features have been put to great use: since the launch in 2015, our customers have replicated trillions of objects and exabytes of data!

Today I am happy to be able to tell you that we are making it even more powerful, with the addition of Replication Time Control. This feature builds on the existing rule-driven replication and gives you fine-grained control based on tag or prefix, so that you can use Replication Time Control with the data set you specify. Here’s what you get:

Replication SLA – You can now take advantage of a replication SLA to increase the predictability of replication time.
Replication Metrics – You can now monitor the maximum replication time for each rule using new CloudWatch metrics.
Replication Events – You can now use events to track any object replications that deviate from the SLA.

Let’s take a closer look!

New Replication SLA

S3 replicates your objects to the destination bucket, with timing influenced by object size & count, available bandwidth, other traffic to the buckets, and so forth.
In situations where you need additional control over replication time, you can use our new Replication Time Control feature, which is designed to perform as follows:

Most of the objects will be replicated within seconds.
99% of the objects will be replicated within 5 minutes.
99.99% of the objects will be replicated within 15 minutes.

When you enable this feature, you benefit from the associated Service Level Agreement. The SLA is expressed in terms of a percentage of objects that are expected to be replicated within 15 minutes, and provides for billing credits if the SLA is not met:

99.9% to 98.0% – 10% credit
98.0% to 95.0% – 25% credit
95% to 0% – 100% credit

The billing credit applies to a percentage of the Replication Time Control fee, replication data transfer, S3 requests, and S3 storage charges in the destination for the billing period.

I can enable Replication Time Control when I create a new replication rule, and I can also add it to an existing rule. Replication begins as soon as I create or update the rule, and I can use the Replication Metrics and the Replication Events to monitor compliance. In addition to the existing charges for S3 requests and data transfer between regions, you will pay an extra per-GB charge to use Replication Time Control; see the S3 Pricing page for more information.

Replication Metrics

Each time I enable Replication Time Control for a rule, S3 starts to publish three new metrics to CloudWatch. They are available in the S3 and CloudWatch Consoles. I created some large tar files, uploaded them to my source bucket, took a quick break, and inspected the metrics. Note that I did my testing before the launch, so don’t get overly concerned with the actual numbers. Also, keep in mind that these metrics are aggregated across the replication for display, and are not a precise indication of per-object SLA compliance.
BytesPendingReplication jumps up right after the upload, and then drops down as the replication takes place. ReplicationLatency peaks and then quickly drops down to zero after S3 Replication transfers over 37 GB from the United States to Australia with a maximum latency of 8.3 minutes. And OperationsPendingCount tracks the number of objects to be replicated.

I can also set CloudWatch Alarms on the metrics. For example, I might want to know if I have a replication backlog larger than 75 GB. For this to work as expected, I must set the Missing data treatment to “Treat missing data as ignore (maintain the alarm state)”. These metrics are billed as CloudWatch Custom Metrics.

Replication Events

Finally, you can track replication issues by setting up events on an SQS queue, SNS topic, or Lambda function; start at the console’s Events section. You can use these events to monitor adherence to the SLA. For example, you could store Replication time missed threshold and Replication time completed after threshold events in a database to track occasions where replication took longer than expected. The first event will tell you that the replication is running late, and the second will tell you that it has completed, and how late it was. To learn more, read about Replication.

Available Now

You can start using these features today in all commercial AWS Regions, excluding the AWS China (Beijing) and AWS China (Ningxia) Regions.

— Jeff;

PS – If you want to learn more about how S3 works, be sure to attend the re:Invent session: Beyond Eleven Nines: Lessons from the Amazon S3 Culture of Durability.
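In API terms, the rule-level setup described in this post maps to the S3 PutBucketReplication operation; here is a minimal boto3 sketch, with the bucket names and IAM role ARN as hypothetical placeholders. Note that enabling ReplicationTime requires Metrics to be enabled with a matching EventThreshold.

```python
# Sketch: enable S3 Replication Time Control (RTC) on a replication rule.
# Bucket names and the IAM role ARN are hypothetical placeholders.

def build_rtc_rule(destination_bucket_arn: str) -> dict:
    """Build a replication rule with Replication Time Control enabled."""
    return {
        "ID": "rtc-rule",
        "Status": "Enabled",
        "Priority": 1,
        "Filter": {"Prefix": ""},  # entire bucket; narrow with a prefix or tag
        "DeleteMarkerReplication": {"Status": "Disabled"},
        "Destination": {
            "Bucket": destination_bucket_arn,
            # RTC: designed to replicate 99.99% of objects within 15 minutes
            "ReplicationTime": {"Status": "Enabled", "Time": {"Minutes": 15}},
            # Replication Metrics must be enabled alongside ReplicationTime
            "Metrics": {"Status": "Enabled", "EventThreshold": {"Minutes": 15}},
        },
    }

def enable_replication(source_bucket: str, destination_bucket_arn: str, role_arn: str) -> None:
    """Apply the RTC rule to the source bucket (requires AWS credentials)."""
    import boto3  # imported lazily so building the rule needs no AWS SDK
    boto3.client("s3").put_bucket_replication(
        Bucket=source_bucket,
        ReplicationConfiguration={
            "Role": role_arn,
            "Rules": [build_rtc_rule(destination_bucket_arn)],
        },
    )
```

Once the rule is in place, the ReplicationLatency, BytesPendingReplication, and OperationsPendingCount metrics described above begin to appear in CloudWatch.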

Amazon FSx For Windows File Server Update – Multi-AZ, & New Enterprise-Ready Features

Amazon Web Services Blog -

Last year I told you about Amazon FSx for Windows File Server — Fast, Fully Managed, and Secure. That launch was well-received, and our customers (Neiman Marcus, Ancestry, Logicworks, and Qube Research & Technologies, to name a few) are making great use of the service. They love the fact that they can access their shares from a wide variety of sources, and that they can use their existing Active Directory environment to authenticate users. They benefit from a native implementation with fast, SSD-powered performance, and no longer spend time attaching and formatting storage devices, updating Windows Server, or recovering from hardware failures.

Since the launch, we have continued to enhance Amazon FSx for Windows File Server, largely in response to customer requests. Some of the more significant enhancements include:

Self-Managed Directories – This launch gave you the ability to join your Amazon FSx file systems to on-premises or in-cloud self-managed Microsoft Active Directories. To learn how to get started with this feature, read Using Amazon FSx with Your Self-Managed Microsoft Active Directory.

Fine-Grained File Restoration – This launch (powered by Windows shadow copies) gave your users the ability to easily view and restore previous versions of their files. To learn how to configure and use this feature, read Working with Shadow Copies.

On-Premises Access – This launch gave you the power to access your file systems from on-premises using AWS Direct Connect or an AWS VPN connection. You can host user shares in the cloud for on-premises access, and you can also use it to support your backup and disaster recovery model. To learn more, read Accessing Amazon FSx for Windows File Server File Systems from On-Premises.

Remote Management CLI – This launch focused on a set of CLI commands (PowerShell Cmdlets) to manage your Amazon FSx for Windows File Server file systems.
The commands support remote management and give you the ability to fully automate many types of setup, configuration, and backup workflows from a central location.

Enterprise-Ready Features

Today we are launching an extensive list of new features that are designed to address the top-priority requests from our enterprise customers.

Native Multi-AZ File Systems – You can now create file systems that span AWS Availability Zones (AZs). You no longer need to set up or manage replication across AZs; instead, you select the multi-AZ deployment option when you create your file system, and then select the two subnets where it will reside. This creates an active file server and a hot standby, each with their own storage, and synchronous replication across AZs to the standby. If the active file server fails, Amazon FSx will automatically fail over to the standby, so that you can maintain operations without losing any data. Failover typically takes less than 30 seconds, and the DNS name remains unchanged, making replication and failover transparent, even during planned maintenance windows. This feature is available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (London), Asia Pacific (Tokyo), Asia Pacific (Singapore), and Europe (Stockholm) Regions.

Support for SQL Server – Amazon FSx now supports the creation of Continuously Available (CA) file shares, which are optimized for use by Microsoft SQL Server. This allows you to store your active SQL Server data on a fully managed Windows file system in AWS.

Smaller Minimum Size – Single-AZ file systems can now be as small as 32 GiB (the previous minimum was 300 GiB).

Data Deduplication – You can optimize your storage by seeking out and eliminating low-level duplication of data, with the potential to reduce your storage costs.
The actual space savings will depend on your use case, but you can expect it to be around 50% for typical workloads (read Microsoft’s Data Deduplication Overview and Understanding Data Deduplication to learn more). Once enabled for a file system with Enable-FSxDedup, deduplication jobs run on a default schedule that you can customize if desired. You can use the Get-FSxDedupStatus command to see some interesting stats about your file system. To learn more, read Using Data Deduplication.

Programmatic File Share Configuration – You can now programmatically configure your file shares using PowerShell commands (this is part of the Remote Management CLI that I mentioned earlier). You can use these commands to automate your setup, migration, and synchronization workflows. The commands include:

New-FSxSmbShare – Create a new shared folder.
Grant-FSxSmbShareAccess – Add an access control entry (ACE) to an ACL.
Get-FSxSmbSession – Get information about active SMB sessions.
Get-FSxSmbOpenFile – Get information about files opened on SMB sessions.

To learn more, read Managing File Shares.

Enforcement of In-Transit Encryption – You can insist that connections to your file shares make use of in-transit SMB encryption:

PS> Set-FSxSmbServerConfiguration -RejectUnencryptedAccess $True

To learn more, read about Encryption of Data in Transit.

Quotas – You can now use quotas to monitor and control the amount of storage space consumed by each user. You can set up per-user quotas, monitor usage, track violations, and choose to deny further consumption to users who exceed their quotas:

PS> Enable-FSxUserQuotas Mode=Enforce
PS> Set-FSxUserQuota jbarr ...

To learn more, read about Managing User Quotas.

Available Now

Putting it all together, this laundry list of new enterprise-ready features and the power to create Multi-AZ file systems makes Amazon FSx for Windows File Server a great choice when you are moving your existing NAS (Network Attached Storage) to the AWS Cloud.
All of these features are available now and you can start using them today in all commercial AWS Regions where Amazon FSx for Windows File Server is available, unless otherwise noted above. — Jeff;  
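As a rough sketch of the Multi-AZ option in API terms, the CreateFileSystem request might be shaped like this in boto3. The subnet IDs are hypothetical placeholders, and the capacity and throughput values are illustrative rather than recommended settings.

```python
# Sketch: request parameters for a Multi-AZ Amazon FSx for Windows File Server
# file system. Subnet IDs are hypothetical; capacities are illustrative only.

def build_multi_az_request(subnet_ids: list, preferred_subnet_id: str) -> dict:
    """Build CreateFileSystem parameters for a Multi-AZ deployment."""
    return {
        "FileSystemType": "WINDOWS",
        "StorageCapacity": 300,  # GiB (illustrative value)
        "SubnetIds": subnet_ids,  # two subnets: active server and hot standby
        "WindowsConfiguration": {
            "DeploymentType": "MULTI_AZ_1",  # synchronous replication across AZs
            "PreferredSubnetId": preferred_subnet_id,  # where the active server lives
            "ThroughputCapacity": 8,  # MB/s (illustrative value)
        },
    }

def create_file_system(params: dict) -> dict:
    """Submit the request (requires AWS credentials and real subnet IDs)."""
    import boto3  # imported lazily so the builder above needs no AWS SDK
    return boto3.client("fsx").create_file_system(**params)
```

With the DNS name unchanged across failover, clients of the resulting file system do not need any reconfiguration when the standby takes over.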

How to Accept Payments Through Your WordPress Website

DreamHost Blog -

Being able to accept credit card payments for services or products is crucial to opening up new entrepreneurial opportunities on your WordPress website. But deciding which payment gateways to offer and whether they’re compatible with your e-commerce store’s structure can be tricky. To make things easier, we’ve gathered the research together all in one place for you.

After all, the checkout process for online customers can be a significant factor in lead conversion. Understanding what each payment gateway has to offer can help you create the best possible experience for customers. In this article, we’ll cover the basics of online payment platforms. We’ll also review a number of free and premium payment applications that you can add to your WordPress website. If you’re ready, let’s dive right in!

Why It’s Important to Choose the Right Payment Gateways for Your Website

Payment gateways are the services that facilitate processing credit and debit card payments for merchants. The gateway you choose is pretty important, especially when you consider that 75% of retail consumers abandon their carts during the checkout process. Offering your customers a variety of payment options is one excellent way to keep them happy.

A payment gateway can simply be a software application, or it can involve physical hardware to be used in Point of Sale (POS) transactions. Either way, there are a lot of essential elements to consider when choosing a gateway for your business, including:

Payment Card Industry Data Security Standard (PCI DSS). PCI DSS compliance means that the service provider adheres to the information security standard for anyone handling major credit cards.

Secure Sockets Layer (SSL) Certificates. It’s recommended that websites with online payment options also purchase SSL certificates. This means data exchanges will occur over a secure connection.
Plugin compatibility. You’ll want to make sure your chosen gateway has a plugin that will deliver what you need and be compatible with your site’s theme.

User experience. Make sure your chosen gateway can be tested to ensure its ability to deliver the most effective user experience during checkout and after.

If it seems too hard to choose, it’s not unusual to employ more than one gateway. In fact, customers are more likely to complete a purchase when they have lots of payment options.

Related: How to Build an Awesome WooCommerce Store with the OceanWP Theme

The 5 Best Payment Gateways for WordPress

Now that you understand what elements to keep in mind while shopping around for a payment gateway, we’ll take a look at five of the best options for WordPress store owners.

1. Authorize.Net

Authorize.Net offers advanced fraud detection services for free, among other services. You’ll also have a Quickbooks sync option and the ability to accept multiple currencies. In terms of POS services, you can easily turn any Windows-based computer into a POS terminal using Authorize.Net’s free software. You’ll need to purchase a card reader to complete the system, however.

Authorize.Net fees:

Account set-up fee: $49.00
Monthly gateway fee: $25.00
Per-transaction processing fee: $0.10

With a simple checkout experience and no contracts, Authorize.Net is an excellent gateway option.

2. PayPal

PayPal is probably the most trusted name in payment gateways. That’s a huge plus for new businesses. What’s more, PayPal is consistent and transparent in its fee schedule and offers an extensive suite of tools. You can also set up a POS checkout with several hardware options, which include mobile and traditional terminal possibilities.

PayPal fees:

All online sales: 2.9% + $0.30 per transaction
Non-keyed POS transactions: 2.7%
Keyed-in mobile and in-store transactions: 3.5% + $0.15
Virtual terminal: 3.1% + $0.30 per transaction

PayPal has 286 million active user accounts globally.
That’s a pretty big group of potential customers who are more likely to complete their checkouts if PayPal is an option.

Related: How to Pick the Right WordPress Theme for Your Website

3. Stripe

Stripe is a top choice if you plan on doing a lot of international business. It accepts over 100 foreign currencies and converts them automatically. The company has also created its own POS Terminal system. You’ll have to purchase this, but it’s a nice option if you plan on doing events or setting up a physical storefront.

Stripe fees:

Credit cards: 2.9% + $0.30 per transaction
Suite of tools: Stripe charges fees separately for each option
International conversion: 1% fee on top of normal Stripe transaction fees
Terminal transactions: 2.7% + $0.05 per transaction, plus the cost of hardware

Stripe is definitely a workhorse, with many options for developers.

4. Amazon Pay

Aside from brand recognition, one of the most significant benefits of using Amazon Pay is that, unlike PayPal, shoppers don’t have to leave your website to complete their payments. Users can log in with their Amazon accounts and complete “in-line” purchases in a familiar and smooth checkout process. While Amazon Pay does not offer a POS system, you’ll have access to other benefits. These include the ability to set up recurring payments, fraud detection, and donation settings.

Amazon Pay domestic-only fees:

Web and mobile processing: 2.9% + $0.30 per transaction
Alexa voice-activated purchases: 4% processing fee + $0.30 per transaction
Charitable organizations: 2.2% processing fee + $0.30 per transaction

Amazon Pay is an excellent choice for a small business looking to grow its reach and audience. The power of Amazon offers a lot of potential for just about any new business.

5. Square

Square is a leader in the payment gateway industry. There are so many features and tools you’ll get for free, such as the virtual terminal.
This application enables you to turn any web-connected device into a credit card payment terminal, even without a card swiper. Square also offers a sophisticated POS system, which incorporates iPad hardware into a stylish and easy-to-use digital terminal.

Square fees:

Keyed-in and card-on-file: 3.5% + $0.15 per transaction
E-commerce and invoices: 2.9% + $0.30 per transaction
Point of sale: 2.6% + $0.10 per transaction

Like some of the other gateway options, Square is definitely a top choice for startups and businesses with small monthly transaction numbers.

6 Free and Premium Payment Gateway Plugins

As you evaluate which gateway might be the best option for your business, you’ll also want to make sure it can integrate with your WordPress website. So let’s look at a few plugins that may help you simplify that process.

1. PayPal Payments Pro

The PayPal Payments Pro plugin solves one of the downsides of using PayPal: your customers will no longer be taken away from your website when they choose to use a credit card.

Key Features:

Easy to configure
Offers on-site checkout

Price: This is a free plugin.

2. WP Full Stripe

WP Full Stripe is powered (naturally) by Stripe. This plugin is designed to make it easy for you to add payment options to your website. This includes embedding payment forms into your site and setting up subscription options or recurring payments.

Key Features:

Accepts donations online
Lets you set up recurring payments for users
Saves payment information for repeat customers

Price: $39. At the time of publication, the free version hadn’t been tested with recent versions of WordPress, so it’s not a recommended solution.

3. WP Simple Pay

If you choose Stripe as your payment gateway, WP Simple Pay is another option for integrating payments into your WordPress site. This plugin is pretty much an “all-in-one” option, which eliminates the need for additional plugins to create a payment experience.
Key Features:

Displays product images on checkout pages
Supports over 135 currencies
Is mobile responsive

Price: The “lite” version is free, while the “pro” version is $99–$499.

4. Payment Gateway for WooCommerce

Explicitly designed for WooCommerce users, the Authorize.Net Payment Gateway for WooCommerce offers unique security features for your website. You’ll be able to take credit and debit card payments right on your store through Authorize.Net.

Key Features:

Does not require SSL
Secures payments through Authorize.Net servers
Includes optional success and failure messages

Price: This is another free option.

5. WooCommerce

WooCommerce is a perfect partner for your WordPress e-commerce website. Packed with features and options, WooCommerce also has over 300 add-ons available that extend its functionality. This includes extensions that connect it up with a wide variety of payment gateways.

Key Features:

Is an open-source option
Includes customizable product, cart, and checkout pages
Integrates with over 140 payment gateways

Price: Free for the base plugin, with payment gateway add-ons ranging from $0–$79.

Related: WooCommerce vs. Shopify: An In-Depth Guide

6. Amazon Pay WooCommerce Payment Gateway

Finally, the Amazon Pay WooCommerce Payment Gateway plugin is designed to specifically integrate Amazon Pay into a WooCommerce online store. The recognition and trust consumers have with Amazon can be a massive benefit to your website.

Key Features:

Lets you use customer-stored Amazon account information to make payments
Enables management of refunds from your WordPress admin dashboard
Handles single, recurring, and subscription payments

Price: You won’t pay a cent to use this particular plugin.

How to Avoid Mistakes When Adding Payment Options to Your Website

There are a few things you’ll want to avoid while setting up your payment gateways in WordPress. For example, you’ll want to avoid choosing options that lack easy WordPress integration.
You’ll also want to choose a web host that’s equipped to handle payments online. When shopping for a hosting provider, check to see if each one offers SSL/TLS certificates with its packages or if you’ll have to purchase one separately. Regardless, you’ll need to clearly inform site users about what kind of security you offer for payment transactions. It’s also necessary to understand that you don’t have to incorporate complete shopping cart functionality into your website. You can add just one or two simple payment options if that’s the approach that will work best with your customers and your products or services. Accepting Payments Made Easy Generating income through a website is not uncommon in today’s digital marketplace. To get the most out of your WordPress store’s earning potential, you’ll want to choose the best payment gateway options. Here at DreamHost, we want you to be able to focus on building your online business. That’s why we offer managed WordPress hosting plans for a variety of needs. We’ll take care of keeping your WordPress installation up-to-date and running smoothly! The post How to Accept Payments Through Your WordPress Website appeared first on Website Guides, Tips and Knowledge.
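To see how the per-transaction rates quoted earlier compare in practice, here is a small Python sketch. It uses only the standard online rates listed above (2.9% + $0.30 for PayPal, Stripe, and Amazon Pay web payments; 2.6% + $0.10 for Square point of sale), ignores monthly and setup fees, and treats the numbers as illustrative rather than current pricing.

```python
# Sketch: compare per-transaction gateway fees using the rates quoted above.
# Rates change over time; these figures are illustrative, not current pricing.

RATES = {
    "PayPal (online)": (0.029, 0.30),
    "Stripe (credit card)": (0.029, 0.30),
    "Amazon Pay (web/mobile)": (0.029, 0.30),
    "Square (point of sale)": (0.026, 0.10),
}

def transaction_fee(amount: float, percent: float, fixed: float) -> float:
    """Fee for a single transaction: percentage of amount plus a fixed charge."""
    return round(amount * percent + fixed, 2)

def compare(amount: float) -> dict:
    """Fee each gateway would charge on a single transaction of `amount` dollars."""
    return {name: transaction_fee(amount, p, f) for name, (p, f) in RATES.items()}
```

For a $100.00 sale, for example, the 2.9% + $0.30 gateways come to $3.20 while Square point of sale comes to $2.70; at low transaction volumes, fixed monthly fees (such as Authorize.Net’s $25 gateway fee) can matter more than the percentage.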

How to Optimize Your Videos for SEO [11 Tips]

HostGator Blog -

The post How to Optimize Your Videos for SEO [11 Tips] appeared first on HostGator Blog.

Video is a big part of how people consume information online. On average, people spend 6.8 hours every day watching online video, and that number’s on an upward trajectory year over year. For businesses, video is essential: 54% of consumers said it’s their preferred format for brand content, making it the top choice, beating out email, social, and blogs.

That means if you want to reach people online, video is a good way to do it. But as with any type of content you publish on the web, you should anticipate having a lot of competition. Over 400 hours of video are added to YouTube every minute. Anyone hoping to get their message out using video has to figure out how to rise above the rest of the noise to reach the right people.

How to Optimize Your Videos for SEO

Video SEO isn’t about doing one or two things. It involves a whole strategy. While taking steps to optimize each individual video you create is part of it, making sure you’re making the right videos and building out a channel that earns authority is just as important.

1. Perform keyword research for your videos.

You’re probably already doing keyword research for your overall SEO strategy, and may figure you can just apply that research to your video strategy as well. Sorry, it’s not that easy. The keywords that get a lot of traction on Google are different from the ones that are most popular on YouTube. And most searches on Google don’t produce results with video, unless the searcher makes a special point of clicking on the video option in the menu.

Google’s algorithm tracks data on the type of results people click on when doing different types of searches. If they’re not showing video on page one of the search engine results page (SERP) for a keyword, that means people searching that term aren’t usually interested in watching a video for their answers.
Video keyword research is focused on learning what people are searching for on YouTube, and what keywords produce video results in Google. Within YouTube, you can gain a lot of helpful keyword suggestions by paying attention to its autofill feature: start to type a phrase relevant to your business, and see what YouTube suggests. To find out what keywords produce video results, do SERP research. Simply type your top keywords into the search bar, and see what shows up on the SERP. If videos show up on page one, that’s a strong keyword for video SEO.

Both of these tactics for video keyword research can take a lot of time, so you can speed the process up a bit with SEO tools. Some general SEO tools will provide an analysis of what the SERPs look like for different keywords, so you can more easily learn when a keyword produces things like video results or an answer box. And there are keyword research tools that focus specifically on YouTube keywords, such as VidIQ and YTCockpit.

2. Research the competition.

Once you’ve identified a list of keywords worth focusing on, start doing competitor research. Identify who’s ranking in both YouTube and Google for those keywords now. Watch their videos. Pay attention to the titles, descriptions, and tags they use. And visit their channels. Take notes on what you learn, so you can better spot trends in what the winning videos and channels have in common. Those insights will help you figure out how to compete effectively in your space.

3. Create a video SEO marketing strategy.

Use what you learned in the first two steps to make a plan that covers:

What your YouTube channel’s branding will be
What topics to cover in your videos
How long each one should be
How often you’ll release a new one
How you’ll promote your videos

Your plan will change and evolve as you collect more data on what works for your audience. But having a clear roadmap will help you get the early traction you need to collect that data to begin with.

4.
Optimize your YouTube channel for SEO.

Ideally, you don’t just want people to watch one of your videos and move on. You want them to click to see more after that first one. Or even better, click that Subscribe button so your new videos start showing up in their main feed. So before you worry about optimizing each of your videos, make sure you’ve built a strong channel page. Add an original header image that’s visually arresting and says something about your channel’s value. Write a killer channel description that tells people why they should subscribe. Consider making a trailer for your channel that tells people what it’s all about, and why they should follow it. Having a strong channel will add some extra legitimacy to each video you put out there and help you use your video content to build a more ongoing connection with your audience.

5. Include your target keyword in the video title.

Take care in crafting the best possible video title. Your title needs to accomplish multiple things at once:

Clearly communicate to potential viewers what the video is about
Convince them that your video is worth clicking on
Include your target keyword

If you’ve chosen good keywords, those three goals won’t be in opposition.

6. Include your target keyword in your video script.

When writing the script for your video, include your target keyword somewhere in it. Don’t overload it with keywords, of course. And don’t try to shoehorn it in where it doesn’t fit. But if your video’s genuinely about the topic the keyword represents, including it in there shouldn’t be hard to do naturally. This is important because YouTube can parse a lot of what’s said in a video, which will influence which videos it decides to include in the results for a search. It also matters because of the next tip.

7. Include a transcription for your YouTube video.

Including a transcription of your video does a couple of important things at once: It ensures there’s text that Google can understand.
That makes the page your video is on stronger in terms of Google SEO, since their algorithms have more information to learn what the page is about. It gives your audience more than one way to consume the content: obviously a lot of people like watching video, but some people prefer reading to watching. With a transcription, you give people a choice. And you make your video more accessible to people with disabilities. You can load a transcript file to YouTube that is used to provide closed captioning on the video itself. One experiment found that videos with closed captioning get over 7% more views on average. And because you included your target keyword in your video script, your transcript gets it onto the page another time or two. Learn more about the benefits of adding closed captions to your videos.

8. Write a strong video description and include your video keywords.

Always fill in the description section for your videos. It gives you an additional opportunity to convince visitors that your video is worth watching, and provides another space for you to encourage people to subscribe to your channel. Your video description is one of the best places you have to give YouTube information on what your video is about. Use at least 200 words to describe your video. And of course, use this as another opportunity to get your keyword in there (naturally).

8. Add tags to your YouTube videos.

YouTube also lets you add tags to your video. These probably aren’t as strong of a ranking signal as the other parts of the page we’ve covered already, but it never hurts to make good use of this section. Use your main keyword as a tag, along with any secondary keywords on your list that are relevant. If you’re not sure what to put here, go back to the notes you took when analyzing your competitors’ videos to get some ideas.

9. Select the best thumbnail option.
While all this text is helpful for SEO, one of the main ways YouTube and Google will decide if your video is a helpful resource for the topics it covers is whether people actually watch it. Picking a good thumbnail for your video won’t directly impact your SEO, but it’s important for getting people to click on your video. Video is a visual medium, so you want the first image people see to be compelling enough to make them want to click to see more. Don’t just settle for the default image YouTube grabs; take a minute to figure out the best screen to capture for your thumbnail and customize it.

10. Promote your YouTube videos and channel.

As with website SEO, some of the ranking signals that determine whether your videos show up have to do with communicating to YouTube and Google what your video is about. But others have more to do with trying to gauge the quality of the video—the two search engines care whether or not people see something they like when they click. That means metrics like how many people subscribe to your channel, how many view your video, and how long they view it all have a role to play in whether or not your videos show up in search.

To start getting the kind of impressive metrics that prove to YouTube and Google that your videos are awesome, people have to watch your videos to begin with. So once you’ve created your channel and started releasing your first videos, actively promote them. Send them to your email list and share them on social media. Embed them on your WordPress website and in related blog posts. Consider if it’s worth promoting your channel via a paid advertising campaign to give it an initial boost. Your first viewers will help you get the metrics that signal quality to the search engines, and if they like the videos, they’re likely to share and help promote them as well.

11. Analyze your YouTube metrics.

With every new marketing tactic you try, you’ll probably get something wrong.
Even the best content creators and marketers can’t fully predict what people will like and not like. But luckily, digital channels come with analytics that tell you what’s working and what’s not. Pay attention to your metrics on YouTube to learn what your audience likes. Which topics get the most views? Which videos do viewers tend to drop off from early, and at what point do they stop? Which ones do people give the thumbs-up and thumbs-down? Every video you launch will help you gain some new data on what your audience is interested in. Put that to work by revising your video strategy over time to build a channel that’s truly useful to your audience, and that performs better in the search engines.

Why SEO for Videos is Important

Creating great videos requires a significant investment in time and money. If no one ever finds the videos you create, nothing you spend making them will pay off. If you’re going to put work into making videos, it’s just as important to put work into making sure people will be able to find them. Search engine optimization (SEO) is mostly associated with text, since so much of it is about using the right terminology to match the language your audience uses when they’re searching for information. But video SEO is one of the best tactics you have to make your video content more discoverable.

What is Video SEO?

Video SEO is the collection of steps and best practices you can use to increase the odds that your video will show up in the search engines. But where SEO is typically focused on one main search engine—Google—in video SEO, we have another that’s at least as important: YouTube. YouTube is the most visited website in the world. So while you also want to get your videos to show up in Google as often as possible, YouTube should have a special place in how you approach your video SEO strategy. The good news is that what’s good for YouTube SEO and what’s good for Google SEO are essentially the same.
Google owns YouTube, and 88% of videos in the top 10 results on Google are pulled from YouTube.

Video SEO: One More Channel to Reach Your Audience

Optimizing your videos for SEO is important for getting them in front of more people. But it’s always important to remember that showing up in the search engines is never the whole point; it’s about connecting with your audience. Using video and promoting what you create via SEO is just another way to establish that initial connection required to provide something of value to your audience. What’s even more important is what happens after they click. Strive to create videos that earn the attention and time people give to them, and that will both improve your SEO and help you gain a more loyal audience that cares about your content. Find the post on the HostGator Blog

What is InterWorx?

Liquid Web Official Blog -

InterWorx is a control panel for your server and sites. InterWorx provides tools to configure your web server, email, domains, and websites. You can use it to install WordPress, manage your files, and improve your security. InterWorx gives you everything you need to launch and manage your website. A lot goes into getting your website and server up and running. While it is possible to configure all of the relevant pieces directly from the command line, using a control panel like InterWorx makes the job much easier. InterWorx is broken into two main sections, NodeWorx and SiteWorx. NodeWorx is where you will focus on configuring and monitoring your server as a whole. SiteWorx provides access to the tools for managing your websites, email, and databases. Let’s explore who can use InterWorx, how it works, and what it can do. Check out our complete guide on How to Migrate from cPanel to InterWorx.

Who Can Use InterWorx?

Web Designers and Developers

InterWorx is a smart choice for anyone needing to get their site up and running on a cloud or dedicated server. It helps you spend less time administering the server and more time focused on delivering code and content. From configuring Apache to installing WordPress and everything in between, InterWorx has the tools to make your job easier.

Agencies and Resellers

Managing multiple sites across different customers is a big job, and InterWorx can help you keep everything straight. Install different sites into their own accounts and give users access to just what they need, so you can focus on running your business. With the ability to plug in to WHMCS, InterWorx is a perfect fit for a growing reseller or agency.

InterWorx Controls Your Server (with NodeWorx)

When you first log in with your email address and password (check your email after installation for instructions on logging in), you will be signed into NodeWorx.
The login homepage will give you an overview of running services as well as some usage charts for CPU, memory, and traffic. Below is a brief overview of each section within the NodeWorx navigation.

NodeWorx

If you have other users that need to manage your server with you, this is where you will manage their access. In this section you can configure access to the NodeWorx control panel, create additional users, and manage those users’ experience with themes, API access, and alert subscriptions.

SiteWorx

This heading does not put you in the SiteWorx interface, but rather allows you to configure users to log in to SiteWorx. SiteWorx users will be able to configure everything they need to run their website. When creating accounts in SiteWorx you can allocate access to different resources by creating Packages. Packages allow configuration of a variety of options including storage, bandwidth, email, domains, and databases. Each of these can be given specific limits or set to ‘unlimited’. If you are migrating to InterWorx, you can also import users and configurations from other popular control panels. Accounts can be imported one by one, or through mass import.

Resellers

InterWorx gives you everything you need to start reselling your server. Packages can be configured and assigned to reseller accounts. Resellers can then log in and create SiteWorx accounts under their reseller account.

System Services

This is the heart of configuring your server. All of the primary services can be configured and monitored from here. These services include Apache, FTP, SSH, Dovecot, qmail, SpamAssassin, ClamAV, MySQL, DNS, and NFS.

ConfigServer Plugins

This section is where your overarching firewall and security settings can be configured. CSF, LFD, and firewall options can be managed here.

Server

Here you can manage a few more details about your server as a whole. Software repositories can be configured and packages kept up to date.
You can review log files and crontabs, and configure IPs. Don’t just take our word for it. See what this customer had to say about using InterWorx.

InterWorx Controls Your Sites (with SiteWorx)

Once the basic setup for your server has been completed in InterWorx (which is mostly done by default on installation), you are ready to get your site going with SiteWorx. To log in to SiteWorx, you will need the email address, password, and domain name of your SiteWorx account. Once in, you will be able to see an overview of usage on your domain.

Hosting Features

This is where all of the magic happens. Email addresses, aliases, forwarders, and webmail can all be set up. Secondary domains, parked domains, and domain redirects can be configured. MySQL databases and their accompanying users, FTP accounts, and cron jobs can all be configured through the interface. Last, and definitely not least, Softaculous is available for the automated install of a large selection of tools, from blogs to wikis.

Administration

Additional accounts can be created and configured for your SiteWorx domain. You can also view recent visitors and web access logs here.

Backups

After all of the work you have done to build your site, you don’t want to lose anything. You can manually create backups, or configure them to run on a schedule. If anything happens, you can restore those backups from here as well.

Statistics

Everybody loves to have their content seen. Understanding your visitors can provide great insights into how your website is being used. AWStats and Webalizer are ready to help turn your server access logs into understanding.

File Manager

Move your files around and get them all into the right place.

Preferences

Here you can enable server logs, and subscribe to notifications on issues from your site such as SSL expiration or usage overages.

Conclusion

With such a deep feature set and intuitive interface, InterWorx is clearly a strong option for a control panel.
Whether you are a lone developer or a member of a large team, InterWorx has tools to help you do your job. With recent pricing increases plaguing the control panel market, InterWorx is the most affordable option with the biggest feature set. With built-in migrations from other control panels, switching is easy. See how to get started at our Help Center.

Get Started With InterWorx Today

With simple per-server pricing starting at $20/month, spinning up an InterWorx server couldn’t be easier or more affordable. Make the switch today! The post What is InterWorx? appeared first on Liquid Web.

How to Customize Facebook Ads for the Customer Journey

Social Media Examiner -

Are you targeting cold, warm, and hot audiences with Facebook ads? Wondering what types of ads work best with each audience? In this article, you’ll discover how to use six types of Facebook ads to move people further along the customer journey. 2 Facebook Ad Types That Work With Cold Audiences Cold audiences contain new […] The post How to Customize Facebook Ads for the Customer Journey appeared first on Social Media Marketing | Social Media Examiner.

New – Using Step Functions to Orchestrate Amazon EMR workloads

Amazon Web Services Blog -

AWS Step Functions allows you to add serverless workflow automation to your applications. The steps of your workflow can run anywhere, including in AWS Lambda functions, on Amazon Elastic Compute Cloud (EC2), or on-premises. To simplify building workflows, Step Functions is directly integrated with multiple AWS services: Amazon ECS, AWS Fargate, Amazon DynamoDB, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), AWS Batch, AWS Glue, Amazon SageMaker, and (to run nested workflows) with Step Functions itself. Starting today, Step Functions connects to Amazon EMR, enabling you to create data processing and analysis workflows with minimal code, saving time, and optimizing cluster utilization. For example, building data processing pipelines for machine learning is time-consuming and hard. With this new integration, you have a simple way to orchestrate workflow capabilities, including parallel executions and dependencies on the result of a previous step, and to handle failures and exceptions when running data processing jobs. Specifically, a Step Functions state machine can now:
- Create or terminate an EMR cluster, including the ability to change the cluster termination protection. In this way, you can reuse an existing EMR cluster for your workflow, or create one on demand during execution of a workflow.
- Add or cancel an EMR step for your cluster. Each EMR step is a unit of work that contains instructions to manipulate data for processing by software installed on the cluster, including tools such as Apache Spark, Hive, or Presto.
- Modify the size of an EMR cluster instance fleet or group, allowing you to manage scaling programmatically depending on the requirements of each step of your workflow. For example, you may increase the size of an instance group before adding a compute-intensive step, and reduce the size just after it has completed.
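To make these integrations concrete, here is a small illustrative Python sketch of building an Amazon States Language task state for one of them. The group name, instance count, and state names are assumptions for illustration only, and the exact parameter shape should be checked against the Step Functions EMR integration documentation:

```python
import json

# Illustrative helper: build a Step Functions task state for one of the
# EMR service integrations (createCluster, addStep, cancelStep,
# modifyInstanceGroupByName, ...). Appending ".sync" makes the state
# wait for the EMR activity to complete before moving on.
def emr_task_state(action, parameters, next_state=None, sync=False):
    """Return an Amazon States Language task state as a plain dict."""
    resource = f"arn:aws:states:::elasticmapreduce:{action}"
    if sync:
        resource += ".sync"
    state = {"Type": "Task", "Resource": resource, "Parameters": parameters}
    if next_state:
        state["Next"] = next_state
    else:
        state["End"] = True
    return state

# Example: resize an instance group before a heavy step.
# "CORE", the count of 4, and "Run_Heavy_Step" are hypothetical values.
resize = emr_task_state(
    "modifyInstanceGroupByName",
    {
        "ClusterId.$": "$.ClusterId",
        "InstanceGroupName": "CORE",            # assumed group name
        "InstanceGroup": {"InstanceCount": 4},  # assumed target size
    },
    next_state="Run_Heavy_Step",
)
print(json.dumps(resize, indent=2))
```

The same helper covers the synchronous variants, e.g. `emr_task_state("addStep", ..., sync=True)` yields the `addStep.sync` resource used later in this post.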
When you create or terminate a cluster or add an EMR step to a cluster, you can use synchronous integrations to move to the next step of your workflow only when the corresponding activity has completed on the EMR cluster. Reading the configuration or the state of your EMR clusters is not part of the Step Functions service integration. In case you need that, the EMR List* and Describe* APIs can be accessed using Lambda functions as tasks.

Building a Workflow with EMR and Step Functions

On the Step Functions console, I create a new state machine. The console renders it visually, which makes it much easier to understand. To create the state machine, I use the following definition, written in the Amazon States Language (ASL):

{
  "StartAt": "Should_Create_Cluster",
  "States": {
    "Should_Create_Cluster": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.CreateCluster",
          "BooleanEquals": true,
          "Next": "Create_A_Cluster"
        },
        {
          "Variable": "$.CreateCluster",
          "BooleanEquals": false,
          "Next": "Enable_Termination_Protection"
        }
      ],
      "Default": "Create_A_Cluster"
    },
    "Create_A_Cluster": {
      "Type": "Task",
      "Resource": "arn:aws:states:::elasticmapreduce:createCluster.sync",
      "Parameters": {
        "Name": "WorkflowCluster",
        "VisibleToAllUsers": true,
        "ReleaseLabel": "emr-5.28.0",
        "Applications": [{ "Name": "Hive" }],
        "ServiceRole": "EMR_DefaultRole",
        "JobFlowRole": "EMR_EC2_DefaultRole",
        "LogUri": "s3://aws-logs-123412341234-eu-west-1/elasticmapreduce/",
        "Instances": {
          "KeepJobFlowAliveWhenNoSteps": true,
          "InstanceFleets": [
            {
              "InstanceFleetType": "MASTER",
              "TargetOnDemandCapacity": 1,
              "InstanceTypeConfigs": [{ "InstanceType": "m4.xlarge" }]
            },
            {
              "InstanceFleetType": "CORE",
              "TargetOnDemandCapacity": 1,
              "InstanceTypeConfigs": [{ "InstanceType": "m4.xlarge" }]
            }
          ]
        }
      },
      "ResultPath": "$.CreateClusterResult",
      "Next": "Merge_Results"
    },
    "Merge_Results": {
      "Type": "Pass",
      "Parameters": {
        "CreateCluster.$": "$.CreateCluster",
        "TerminateCluster.$": "$.TerminateCluster",
        "ClusterId.$": "$.CreateClusterResult.ClusterId"
      },
      "Next": "Enable_Termination_Protection"
    },
    "Enable_Termination_Protection": {
      "Type": "Task",
      "Resource": "arn:aws:states:::elasticmapreduce:setClusterTerminationProtection",
      "Parameters": {
        "ClusterId.$": "$.ClusterId",
        "TerminationProtected": true
      },
      "ResultPath": null,
      "Next": "Add_Steps_Parallel"
    },
    "Add_Steps_Parallel": {
      "Type": "Parallel",
      "Branches": [
        {
          "StartAt": "Step_One",
          "States": {
            "Step_One": {
              "Type": "Task",
              "Resource": "arn:aws:states:::elasticmapreduce:addStep.sync",
              "Parameters": {
                "ClusterId.$": "$.ClusterId",
                "Step": {
                  "Name": "The first step",
                  "ActionOnFailure": "CONTINUE",
                  "HadoopJarStep": {
                    "Jar": "command-runner.jar",
                    "Args": [
                      "hive-script", "--run-hive-script", "--args",
                      "-f", "s3://eu-west-1.elasticmapreduce.samples/cloudfront/code/Hive_CloudFront.q",
                      "-d", "INPUT=s3://eu-west-1.elasticmapreduce.samples",
                      "-d", "OUTPUT=s3://MY-BUCKET/MyHiveQueryResults/"
                    ]
                  }
                }
              },
              "End": true
            }
          }
        },
        {
          "StartAt": "Wait_10_Seconds",
          "States": {
            "Wait_10_Seconds": {
              "Type": "Wait",
              "Seconds": 10,
              "Next": "Step_Two (async)"
            },
            "Step_Two (async)": {
              "Type": "Task",
              "Resource": "arn:aws:states:::elasticmapreduce:addStep",
              "Parameters": {
                "ClusterId.$": "$.ClusterId",
                "Step": {
                  "Name": "The second step",
                  "ActionOnFailure": "CONTINUE",
                  "HadoopJarStep": {
                    "Jar": "command-runner.jar",
                    "Args": [
                      "hive-script", "--run-hive-script", "--args",
                      "-f", "s3://eu-west-1.elasticmapreduce.samples/cloudfront/code/Hive_CloudFront.q",
                      "-d", "INPUT=s3://eu-west-1.elasticmapreduce.samples",
                      "-d", "OUTPUT=s3://MY-BUCKET/MyHiveQueryResults/"
                    ]
                  }
                }
              },
              "ResultPath": "$.AddStepsResult",
              "Next": "Wait_Another_10_Seconds"
            },
            "Wait_Another_10_Seconds": {
              "Type": "Wait",
              "Seconds": 10,
              "Next": "Cancel_Step_Two"
            },
            "Cancel_Step_Two": {
              "Type": "Task",
              "Resource": "arn:aws:states:::elasticmapreduce:cancelStep",
              "Parameters": {
                "ClusterId.$": "$.ClusterId",
                "StepId.$": "$.AddStepsResult.StepId"
              },
              "End": true
            }
          }
        }
      ],
      "ResultPath": null,
      "Next": "Step_Three"
    },
    "Step_Three": {
      "Type": "Task",
      "Resource": "arn:aws:states:::elasticmapreduce:addStep.sync",
      "Parameters": {
        "ClusterId.$": "$.ClusterId",
        "Step": {
          "Name": "The third step",
          "ActionOnFailure": "CONTINUE",
          "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": [
              "hive-script", "--run-hive-script", "--args",
              "-f", "s3://eu-west-1.elasticmapreduce.samples/cloudfront/code/Hive_CloudFront.q",
              "-d", "INPUT=s3://eu-west-1.elasticmapreduce.samples",
              "-d", "OUTPUT=s3://MY-BUCKET/MyHiveQueryResults/"
            ]
          }
        }
      },
      "ResultPath": null,
      "Next": "Disable_Termination_Protection"
    },
    "Disable_Termination_Protection": {
      "Type": "Task",
      "Resource": "arn:aws:states:::elasticmapreduce:setClusterTerminationProtection",
      "Parameters": {
        "ClusterId.$": "$.ClusterId",
        "TerminationProtected": false
      },
      "ResultPath": null,
      "Next": "Should_Terminate_Cluster"
    },
    "Should_Terminate_Cluster": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.TerminateCluster",
          "BooleanEquals": true,
          "Next": "Terminate_Cluster"
        },
        {
          "Variable": "$.TerminateCluster",
          "BooleanEquals": false,
          "Next": "Wrapping_Up"
        }
      ],
      "Default": "Wrapping_Up"
    },
    "Terminate_Cluster": {
      "Type": "Task",
      "Resource": "arn:aws:states:::elasticmapreduce:terminateCluster.sync",
      "Parameters": { "ClusterId.$": "$.ClusterId" },
      "Next": "Wrapping_Up"
    },
    "Wrapping_Up": {
      "Type": "Pass",
      "End": true
    }
  }
}

I let the Step Functions console create a new AWS Identity and Access Management (IAM) role for the executions of this state machine. The role automatically includes all permissions required to access EMR. This state machine can either use an existing EMR cluster, or create a new one. I can use the following input to create a new cluster that is terminated at the end of the workflow:

{ "CreateCluster": true, "TerminateCluster": true }

To use an existing cluster, I provide the cluster ID in the input, using this syntax:

{ "CreateCluster": false, "TerminateCluster": false, "ClusterId": "j-..." }

Let’s see how that works.
As the workflow starts, the Should_Create_Cluster Choice state looks into the input to decide if it should enter the Create_A_Cluster state or not. There, I use a synchronous call (elasticmapreduce:createCluster.sync) to wait for the new EMR cluster to reach the WAITING state before progressing to the next workflow state. The AWS Step Functions console shows the resource that is being created, with a link to the EMR console: After that, the Merge_Results Pass state merges the input state with the cluster ID of the newly created cluster to pass it to the next step in the workflow. Before starting to process any data, I use the Enable_Termination_Protection state (elasticmapreduce:setClusterTerminationProtection) to help ensure that the EC2 instances in my EMR cluster are not shut down by accident or error. Now I am ready to do something with the EMR cluster. I have three EMR steps in the workflow. For the sake of simplicity, these steps are all based on this Hive tutorial. For each step, I use Hive’s SQL-like interface to run a query on some sample CloudFront logs and write the results to Amazon Simple Storage Service (S3). In a production use case, you’d probably have a combination of EMR tools processing and analyzing your data in parallel (two or more steps running at the same time) or with some dependencies (the output of one step is required by another step). Let’s try to do something similar. First, I execute Step_One and Step_Two inside a Parallel state: Step_One runs the EMR step synchronously as a job (elasticmapreduce:addStep.sync). That means that the execution waits for the EMR step to be completed (or cancelled) before moving on to the next step in the workflow. You can optionally add a timeout to monitor that the execution of the EMR step happens within an expected time frame. Step_Two adds an EMR step asynchronously (elasticmapreduce:addStep).
In this case, the workflow moves to the next step as soon as EMR replies that the request has been received. After a few seconds, to try another integration, I cancel Step_Two (elasticmapreduce:cancelStep). This integration can be really useful in production use cases. For example, you can cancel an EMR step if you get an error from another step running in parallel that would make it useless to continue with the execution of this step. After those two steps have both completed and produced their results, I execute Step_Three as a job, similarly to what I did for Step_One. When Step_Three has completed, I enter the Disable_Termination_Protection step, because I am done using the cluster for this workflow. Depending on the input state, the Should_Terminate_Cluster Choice state is going to enter the Terminate_Cluster state (elasticmapreduce:terminateCluster.sync) and wait for the EMR cluster to terminate, or go straight to the Wrapping_Up state and leave the cluster running. Finally, I have a state for Wrapping_Up. I am not doing much in this final state, actually, but you can’t end a workflow from a Choice state. In the EMR console I see the status of my cluster and of the EMR steps: Using the AWS Command Line Interface (CLI), I find the results of my query in the S3 bucket configured as output for the EMR steps:

aws s3 ls s3://MY-BUCKET/MyHiveQueryResults/
...

Based on my input, the EMR cluster is still running at the end of this workflow execution. I follow the resource link in the Create_A_Cluster step to go to the EMR console and terminate it. In case you are following along with this demo, be careful not to leave your EMR cluster running if you don’t need it.

Available Now

Step Functions integration with EMR is available in all regions. There is no additional cost for using this feature on top of the usual Step Functions and EMR pricing. You can now use Step Functions to quickly build complex workflows for executing EMR jobs.
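As mentioned earlier, a synchronous addStep.sync task can optionally carry a timeout so that a stuck EMR step fails the workflow instead of hanging. A minimal sketch of such a task state, where the step name and the one-hour value are illustrative:

```python
import json

# Illustrative task state: the state-level TimeoutSeconds field causes
# this state to fail if the synchronous EMR step takes longer than an
# hour. The step name and Args placeholder are hypothetical.
step_with_timeout = {
    "Type": "Task",
    "Resource": "arn:aws:states:::elasticmapreduce:addStep.sync",
    "TimeoutSeconds": 3600,  # fail this state after 1 hour
    "Parameters": {
        "ClusterId.$": "$.ClusterId",
        "Step": {
            "Name": "Bounded step",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {"Jar": "command-runner.jar", "Args": ["..."]},
        },
    },
    "End": True,
}
print(json.dumps(step_with_timeout, indent=2))
```

Pairing the timeout with a Catch clause in the same state would let the workflow route timeout errors to a cleanup state instead of failing outright.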
A workflow can include parallel executions, dependencies, and exception handling. Step Functions makes it easy to retry failed jobs and terminate workflows after critical errors, because you can specify what happens when something goes wrong. Let me know what you are going to use this feature for! — Danilo

AWS Systems Manager Explorer – A Multi-Account, Multi-Region Operations Dashboard

Amazon Web Services Blog -

Since 2006, Amazon Web Services has been striving to simplify IT infrastructure. Thanks to services like Amazon Elastic Compute Cloud (EC2), Amazon Simple Storage Service (S3), Amazon Relational Database Service (RDS), AWS CloudFormation and many more, millions of customers can build reliable, scalable, and secure platforms in any AWS region in minutes. Having spent 10 years procuring, deploying and managing more hardware than I care to remember, I’m still amazed every day by the pace of innovation that builders achieve with our services. With great power comes great responsibility. The second you create AWS resources, you’re responsible for them: security, of course, but also cost and scaling. This makes monitoring and alerting all the more important, which is why we built services like Amazon CloudWatch, AWS Config and AWS Systems Manager. Still, customers told us that their operations work would be much simpler if they could just look at a single dashboard listing potential issues on AWS resources, no matter which of their accounts or regions they’ve been created in. We got to work, and today we’re very happy to announce the availability of AWS Systems Manager Explorer, a unified operations dashboard built as part of Systems Manager.

Introducing AWS Systems Manager Explorer

Collecting monitoring information and alerts from EC2, Config, CloudWatch and Systems Manager, Explorer presents you with an intuitive graphical dashboard that lets you quickly view and browse problems affecting your AWS resources. By default, this data comes from the account and region you’re running in, and you can easily include other regions as well as other accounts managed with AWS Organizations.
Specifically, Explorer can provide operational information about:
- EC2 issues, such as unhealthy instances,
- EC2 instances that have a non-compliant patching status,
- AWS resources that don’t comply with Config rules (predefined or your own),
- AWS resources that have triggered a CloudWatch Events rule (predefined or your own).
Each issue is stored as an OpsItem in AWS Systems Manager OpsCenter, and is assigned a status (open, in progress, resolved), a severity, and a category (security, performance, cost, etc.). Widgets let you quickly browse OpsItems, and a timeline of all OpsItems is also available. In addition to OpsItems, the Explorer dashboard also includes widgets that show consolidated information on EC2 instances:
- instance count, with a tag filter,
- instances managed by Systems Manager, as well as unmanaged instances,
- instances sorted by AMI ID.
As you would expect, all information can be exported to S3 for archival or further processing, and you can also set up Amazon Simple Notification Service (SNS) notifications. Last but not least, all data visible on the dashboard can be accessed from the AWS CLI or any AWS SDK with the GetOpsSummary API. Let’s take a quick tour.

A Look at AWS Systems Manager Explorer

Before using Explorer, we recommend that you first set up Config and Systems Manager. This will help populate your Explorer dashboard immediately. No setup is required for CloudWatch Events. Setting up Config is a best practice, and the procedure is extremely simple: don’t forget to enable EC2 rules too. Setting up Systems Manager is equally simple, thanks to the quick setup procedure: add managed instances and check for patch compliance in just a few clicks! Don’t forget to do this in all regions and accounts you want Explorer to manage. If you set these services up later, you’ll have to wait a little while for data to be retrieved and displayed. Now, let’s head out to the AWS console for Explorer.
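Before the console tour, here is a hedged sketch of what pulling the same data with GetOpsSummary looks like from Python via boto3. The filter key and values are assumptions for illustration; check the Systems Manager API reference for the exact OpsFilter shape:

```python
# Sketch of querying the data behind the Explorer dashboard with the
# GetOpsSummary API. The "Status"/"Open" filter below is an assumption
# based on the OpsItem statuses described above.
def open_opsitems_filter(status="Open"):
    """Build an OpsFilter list selecting OpsItems in a given status."""
    return [{"Key": "Status", "Values": [status], "Type": "Equal"}]

def summarize_opsitems(filters):
    """Call GetOpsSummary (requires AWS credentials; boto3 imported lazily)."""
    import boto3
    ssm = boto3.client("ssm")
    return ssm.get_ops_summary(Filters=filters)
```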
Once I’ve completed the one-click setup page, creating a service role and enabling data sources, a quick look at the CloudWatch Events console confirms that rules have been created automatically. Explorer recommends that I add regions and accounts in order to get a unified view. Of course, you can skip this step if you just want a quick taste of the service. If you’re keen on synchronizing data, you can easily create a resource data sync, which will fetch operations data coming from other regions and other accounts. I’m going all in here, but please make sure you tick the boxes that work for you. Once data has been retrieved and processed, my dashboard lights up. Good thing it’s only a test account! I can also see information on all EC2 instances. From here on, I can group OpsItems and instances according to different dimensions (accounts, regions, tags). I can also drill down on OpsItems, view their details in OpsCenter, and apply runbooks to fix them. If you want to know more about OpsCenter, here’s the launch post.

Now Available!

We believe AWS Systems Manager Explorer will help operations teams find and solve problems easier and faster, no matter the scale of their AWS infrastructure. This feature is available today in all regions where AWS Systems Manager OpsCenter is available. Give it a try, and please share your feedback in the AWS forum for AWS Systems Manager, or with your usual AWS support contacts. – Julien

WP Engine is Now Australia’s Largest WordPress Digital Experience Platform Provider

WP Engine -

BRISBANE, AUSTRALIA — Nov. 20, 2019 –  WP Engine, the WordPress Digital Experience Platform (DXP), today announced it is now the largest WordPress DXP in Australia and New Zealand (ANZ). As part of that growth, WP Engine’s customer base is now over 4,000 and the number of Australian and New Zealand (ANZ)-based agencies in the… The post WP Engine is Now Australia’s Largest WordPress Digital Experience Platform Provider appeared first on WP Engine.

New – Application Load Balancer Simplifies Deployment with Weighted Target Groups

Amazon Web Services Blog -

One of the benefits of cloud computing is the possibility to create infrastructure programmatically and to tear it down when it is no longer needed. This allows developers to radically change the way they deploy their applications. When developers used to deploy applications on premises, they had to reuse existing infrastructure for new versions of their applications. In the cloud, developers create new infrastructure for new versions of their applications, and keep the previous version running in parallel for a while before tearing it down. This technique is called blue/green deployment. It allows you to progressively switch traffic between two versions of your apps, to monitor business and operational metrics on the new version, and to switch traffic back to the previous version in case anything goes wrong. To adopt blue/green deployments, AWS customers have been using two strategies. The first strategy consists of creating a second application stack, including a second load balancer, and using some kind of weighted routing technique, such as DNS, to direct part of the traffic to each stack. The second strategy consists of replacing the infrastructure behind the load balancer. Both strategies can cause delays in moving traffic between versions, depending on DNS TTL and caching on client machines. They can also incur additional costs to run the extra load balancer, and potential delays to warm up the extra load balancer. A target group tells a load balancer where to direct traffic: to EC2 instances, fixed IP addresses, or AWS Lambda functions, among others. When creating a load balancer, you create one or more listeners and configure listener rules to direct the traffic to one target group. Today, we are announcing weighted target groups for Application Load Balancers. This allows developers to control how traffic is distributed across multiple versions of their application.
Multiple, Weighted Target Groups

You can now add more than one target group to the forward action of a listener rule, and specify a weight for each group. For example, when you define a rule having two target groups with weights of 8 and 2, the load balancer will route 80% of the traffic to the first target group and 20% to the other. To experiment with weighted target groups today, you can use this CDK code. It creates two auto scaling groups with EC2 instances and an Elastic Load Balancer in front of them, and also deploys a sample web app on the instances. The blue version of the web app is deployed to the blue instance and the green version of the web app is deployed to the green instance. The infrastructure looks like this: You can git clone the CDK project and type npm run build && cdk bootstrap && cdk deploy to deploy the above infrastructure. To show you how to configure the load balancer, the CDK code creates the auto scaling groups, the load balancer, and a generic target group. Let’s manually finish the configuration and create two weighted target groups, one for each version of the application. First, I navigate to the EC2 console, select Target Groups, and click the Create Target Group button. I create a target group called green. Be sure to select the correct Amazon Virtual Private Cloud (the one created by the CDK script has a name starting with “AlbWtgStack...”), then click Create. I repeat the operation to create a blue target group. My Target Groups console looks like this: Next, I change the two auto scaling groups to point them to the blue and green target groups. In the AWS Management Console, I click Auto Scaling Groups, select one of the two auto scaling groups, and pay attention to the name (it contains either ‘green’ or ‘blue’). I click Actions, then Edit. In the Edit details screen, I remove the target group that was created by the CDK script and add the target group matching the name of the auto scaling group (green or blue).
I click Save at the bottom of the screen and repeat the operation for the other auto scaling group. Next, I change the listener rule to add these two target groups, each with its own weight. In the EC2 console, I select Load Balancers on the left side, then I search for the load balancer created by the CDK code (the name starts with “alb”). I click Listeners, then View / edit rules: There is one rule created by the CDK script. I modify it by clicking the edit icon at the top, then again the edit icon on the left of the rule. I delete the Forward to rule by clicking the trash can icon. Then I click “+ Add Action” to add two Forward to rules, each with a target group (blue and green), weighted 50 and 50. Finally, I click Update on the right side. I am now ready to test the weighted load balancing. I point my browser to the DNS name of the load balancer and see either the green or the blue version of the web app. I force my browser to reload the page and observe the load balancer in action, sending 50% of the requests to the green application and 50% to the blue application. Some browsers might cache the page and not reflect the weights I defined; Safari and Chrome are less aggressive than Firefox at this exercise. Now, in the AWS Management Console, I change the weights to 80 and 20 and continue to refresh my browser. I observe that the blue version is displayed 8 times out of 10, on average. I can also adjust the weights with the ALB ModifyListener API, the AWS Command Line Interface (CLI), or AWS CloudFormation.
For example, I use the AWS Command Line Interface (CLI) like this:

aws elbv2 modify-listener \
    --listener-arn "<listener arn>" \
    --default-actions \
    '[{
        "Type": "forward",
        "Order": 1,
        "ForwardConfig": {
            "TargetGroups": [
                { "TargetGroupArn": "<target group 1 arn>", "Weight": 80 },
                { "TargetGroupArn": "<target group 2 arn>", "Weight": 20 }
            ]
        }
    }]'

Or I use AWS CloudFormation with this JSON extract:

"ListenerRule1": {
    "Type": "AWS::ElasticLoadBalancingV2::ListenerRule",
    "Properties": {
        "Actions": [{
            "Type": "forward",
            "ForwardConfig": {
                "TargetGroups": [{
                    "TargetGroupArn": { "Ref": "TargetGroup1" },
                    "Weight": 1
                }, {
                    "TargetGroupArn": { "Ref": "TargetGroup2" },
                    "Weight": 1
                }]
            }
        }],
        "Conditions": [{ "Field": "path-pattern", "Values": ["foo"] }],
        "ListenerArn": { "Ref": "Listener" },
        "Priority": 1
    }
}

If you are using an external service or tool to manage your load balancer, you may need to wait until the provider updates their APIs to support weighted routing configuration on Application Load Balancers. Other uses In addition to blue/green deployments, AWS customers can use weighted target groups for two other use cases: cloud migration, and migration between different AWS compute resources. When you migrate an on-premises application to the cloud, you may want to do it progressively, with a period where the application runs both in the on-premises data center and in the cloud. Eventually, when you have verified that the cloud version performs satisfactorily, you can completely retire the on-premises application. Similarly, when you migrate a workload from EC2 instances to Docker containers running on AWS Fargate, for example, you can easily bring up your new application stack on a new target group and gradually move the traffic by changing the target group weights, with no downtime for end users. 
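The gradual traffic shift described above is easy to script. Here is a hedged Python sketch that builds the modify-listener payload for a stepwise migration (the ARNs are placeholders, the step sizes are arbitrary, and the boto3 call is shown only in a comment; this is an illustration, not an official tool):

```python
def forward_action(weighted_groups):
    """Build the DefaultActions payload for elbv2 modify-listener,
    given a list of (target group ARN, weight) pairs."""
    return [{
        "Type": "forward",
        "ForwardConfig": {
            "TargetGroups": [
                {"TargetGroupArn": arn, "Weight": weight}
                for arn, weight in weighted_groups
            ]
        },
    }]

def shift_schedule(start=100, step=20):
    """Yield (old_weight, new_weight) pairs for a stepwise migration
    from 100/0 to 0/100 in increments of `step`."""
    for new in range(step, start + 1, step):
        yield (start - new, new)

# With boto3, each step would be applied like this (not executed here):
#   elbv2 = boto3.client("elbv2")
#   elbv2.modify_listener(ListenerArn=listener_arn,
#                         DefaultActions=forward_action(pairs))
for old_w, new_w in shift_schedule():
    actions = forward_action([("<old target group arn>", old_w),
                              ("<new target group arn>", new_w)])
```

Between steps you would watch your application metrics and roll back by restoring the previous weights if anything looks wrong.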
With Application Load Balancer supporting a variety of AWS resources as targets, such as EC2 instances, containers (Amazon ECS, Amazon Elastic Kubernetes Service, AWS Fargate), AWS Lambda functions, and IP addresses, you can choose to move traffic between any of these. Target Group Stickiness There are situations when you want clients to experience the same version of the application for a specified duration, or you want clients currently using the app not to switch to the newly deployed (green) version during their session. For these use cases, we are also introducing target group stickiness. When target group stickiness is enabled, the requests from a client are all sent to the same target group for the specified duration. When the duration expires, requests are again distributed to target groups according to their weights. ALB issues a cookie to maintain target group stickiness. Note that target group stickiness is different from the existing target stickiness (also known as Sticky Sessions). Sticky Sessions makes sure that requests from a client always stick to a particular target within a target group. Target group stickiness only ensures the requests are sent to a particular target group. Sticky Sessions can be used in conjunction with target group stickiness. To add or configure target group stickiness from the AWS Command Line Interface (CLI), you use the TargetGroupStickinessConfig parameter, as shown below:

aws elbv2 modify-listener \
    --listener-arn "<listener arn>" \
    --default-actions \
    '[{
        "Type": "forward",
        "Order": 1,
        "ForwardConfig": {
            "TargetGroups": [
                { "TargetGroupArn": "<target group 1 arn>", "Weight": 20 },
                { "TargetGroupArn": "<target group 2 arn>", "Weight": 80 }
            ],
            "TargetGroupStickinessConfig": {
                "Enabled": true,
                "DurationSeconds": 2000
            }
        }
    }]'

Availability Application Load Balancer supports up to five target groups per listener rule, each having its own weight. 
You can adjust the weights as many times as you need, up to the API throttling limit. There might be a slight delay before the actual traffic weights are updated. Weighted target groups are available in all AWS Regions today. There is no additional cost to use weighted target groups on Application Load Balancer. -- seb PS: do not forget to delete the example infrastructure created for this blog post, to stop accruing AWS charges. Because we manually modified the infrastructure created by the CDK, a simple cdk destroy will return immediately without deleting everything. Connect to the AWS CloudFormation console instead and delete the AlbWtgStack. You also need to manually delete the blue and green target groups in the EC2 console.

Why Agencies Prefer to Build with Genesis

WP Engine -

Since it was launched in 2009, the Genesis Framework has become a favorite solution among digital agencies that build websites with WordPress. Genesis offers a straightforward, programmatic approach to theme-building, and agencies around the world have found success putting it to use for their various client projects. Remkus de Vries, who together with his wife… The post Why Agencies Prefer to Build with Genesis appeared first on WP Engine.

Amazon Connect Introduces Web & Mobile Chat for a True Omnichannel Contact Center Experience

Amazon Web Services Blog -

When we started Amazon in 1995, it was with the mission to be the earth’s most customer-centric company. It obviously requires many talented individuals and technologies to deliver on that vision, including contact centers. As Amazon’s retail business scaled, we first shopped for third-party contact center solutions, but we could not find one that fit our needs, so we decided to build our own. After we built an initial version, we listened to feedback from our contact center teams and iterated for several years to meet our strict standards for security, elasticity, flexibility, reliability, and customer experience. Many AWS customers told us they face the same challenges procuring, installing, configuring, and operating their contact centers. They asked us to make our solution available to all businesses. Since we launched Amazon Connect, thousands of customers have created their own contact centers in the cloud. Amazon Connect makes it easy for non-technical customers to design contact flows, manage agents, and track performance metrics. It is easy to integrate Amazon Connect with other systems, such as customer relationship management (CRM) tools, or to integrate Amazon Lex intelligent conversational bots into contact flows. For example, Intuit integrates Amazon Connect with Salesforce to build contact flow experiences that adapt to their customer needs in real time. In the United Kingdom, the National Health Service (NHS) is using Amazon Connect and Amazon Lex to automatically answer the most frequently asked questions about the European Health Insurance Card (EHIC). During the first four weeks of operation, 42 percent of EHIC calls were resolved via the integrated Amazon Connect and Amazon Lex solution, and did not have to be passed back to a human agent. There was a 26 percent reduction in EHIC contact center calls handled by human agents. But voice-based contact centers are only one part of the story. 
Today, we communicate more and more with messaging, and customers use multiple channels to communicate with businesses. Often, simple questions can be answered by a short chat message and do not require a voice conversation with an agent. This is why we are announcing web and mobile chat for Amazon Connect. Your customers can now choose between using chat or making a phone call to get their questions or concerns addressed. When they choose to chat with a contact center agent, they can do it at their own pace, making it as familiar as messaging a friend. Conversation context is maintained across both chat and voice, giving customers the freedom to move between channels without forcing them to start all over again or to wait for an agent. Amazon Connect chat gives businesses a single unified contact center service for voice and chat. Amazon Connect provides a single routing engine, which distributes work efficiently among agents and reduces end-customer wait times. Agents have a single user interface to help end-customers using both voice and chat, reducing the number of tools they have to learn and the number of screens they have to interact with. Chat activities integrate nicely into your existing contact center flows and the automation you built for voice. You build your flows once and reuse them across multiple channels. Likewise, the metrics you collect and the dashboards you built automatically benefit from unified metrics across multiple channels. Your customers can start chatting with contact center agents from any of your business applications, web or mobile. Let’s see what it looks like. In the example below, chat is integrated into your company web site. Contact center agents receive chat requests in the same web-based Contact Control Panel (CCP) they use for voice engagements. Since it is web-based, agents can work from virtually anywhere. 
Customers can integrate the CCP directly into the applications their contact center agents use, such as their customer relationship management (CRM) system, using the CCP SDK. To add chat capabilities to your Amazon Connect contact center, open the console and enable your agents to take chats by enabling Chat in their Routing Profile; no code is required. Once this is done, they can begin accepting chats through the updated agent experience. Should you need help adding Amazon Connect chat capabilities to your website or applications, please reach out to one of the dozens of Amazon Connect partners available worldwide. Amazon Connect chat is charged on a per-use basis. There are no required up-front payments, long-term commitments, or minimum monthly fees. You pay per chat message, independently of the number of agents or customers using it. Regional pricing may vary; read the pricing page for the details. Amazon Connect chat will be generally available this week in all AWS regions where Amazon Connect is offered: US East (Northern Virginia), US West (Oregon), EU (Frankfurt), Asia Pacific (Sydney), Asia Pacific (Tokyo). As usual, we’re eager to hear your feedback; do not hesitate to share your thoughts with us. -- seb

Special Considerations for Streamlining Mobile Checkout on WooCommerce

Nexcess Blog -

While the data tells us that many sites are seeing more mobile traffic than desktop traffic, conversions on mobile devices still lag far behind. Today we’re going to look at some things you can do to make it easy for mobile users to convert to paying customers on your WooCommerce store. As you try these suggestions, remember to A/B test them for your site. Just because they worked for others doesn’t mean that they’ll have a positive impact on your particular checkout process. Make Form Fields Easy Mobile devices have this great thing called a software keyboard. That means that if we accurately identify our form fields, mobile users will be presented with a keyboard that’s appropriate to the data that needs to be entered. As you can see above, WooCommerce does this by default when it identifies an email field. To make it easier to enter my email, the @ symbol is readily accessible, rather than being hidden behind another set of keys. Other fields to review are the Zip/Postal Code fields and phone number fields. If you want these fields in a particular format, it’s best to pre-program the format, rather than leave it up to the user. More than once, my checkout has been denied because one of the fields was incorrectly formatted. It’s already annoying having to switch back and forth between letters and numbers on a phone keyboard. So, don’t frustrate your users further by forcing them to enter the data you want in an exact format. Your users should never have to deal with this. Take the time to write some JavaScript to format the field how you want after they’re done typing, or handle the formatting server side. Any other option puts an extra burden on your user, making them less likely to purchase. Format Field Errors Well Have you ever clicked “checkout” after wading through form fields only to be greeted by a stack of errors the site says you made? 
WooCommerce is just as guilty as any other platform out there of making users hunt for issues with their checkout information. Sure, they provide that little red * beside required fields, but if you miss one, all you’re going to get is a big red box at the top of the page. You’re on your own to hunt down the issue with the notification provided. There are a couple of better approaches to this system, with my favorite being validating the fields as they’re entered. Don’t make the user wait; show them right away if the field is right or not. The second option I like is showing the field errors directly inline. That means when you have an error, highlight the field in red and explain what the issue is in the same view. If you want to implement this, Business Bloomer has a great tutorial on adding errors inline with WooCommerce fields. I’ve used this on a few client sites and have been very happy with the results. Don’t Hide your Checkout with Notifications While you may be able to get away with some upsell tactics on a desktop checkout experience, it’s far too easy to ruin a mobile checkout experience. Touch targets are often far too small, and sometimes even off-screen when popups display on mobile devices. Instead of popups or other visual clutter, look at Smart Offers to increase your total order value. Rather than asking a user to add something to their cart before they’ve made a purchase, Smart Offers asks them after they’ve completed their initial checkout action. Check Reachability Research exists on how users interact with their phones, but I have yet to find anything on how gender affects phone interaction. This is a critical gap in research since women, in general, have smaller hands than men, potentially affecting reachability. Since programmers are overwhelmingly male, this means that the people building online stores aren’t testing the reachability of their interfaces for 50% of the population. 
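The inline-error approach can be modeled as a validator that returns one message per failing field, so each error can be rendered next to its input instead of in one big box at the top. This is a generic, hypothetical Python sketch of the idea; a real WooCommerce implementation would use PHP hooks like those in the Business Bloomer tutorial:

```python
import re

def validate_checkout(fields):
    """Return {field_name: error_message} so errors can be rendered
    inline next to each input, instead of one combined error box."""
    errors = {}
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", fields.get("email", "")):
        errors["email"] = "Please enter a valid email address."
    # Normalize the phone server-side rather than rejecting over formatting.
    phone_digits = re.sub(r"\D", "", fields.get("phone", ""))
    if len(phone_digits) < 10:
        errors["phone"] = "Please enter a 10-digit phone number."
    if not fields.get("postcode", "").strip():
        errors["postcode"] = "Postal code is required."
    return errors

print(validate_checkout({"email": "jo@example.com",
                         "phone": "(555) 123-4567",
                         "postcode": "K1A 0B1"}))  # → {}
```

Note that the phone check strips punctuation before validating, so “(555) 123-4567” passes without the user retyping it, which is exactly the kind of formatting burden the previous section warns against.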
Testing how people interact with your checkout form is crucial to ensuring that you have solid conversions. But don’t fall into the trap of only testing with those that are convenient. Make sure that you put effort into testing across a broad spectrum of hand sizes and device sizes. With one client, there was pushback on tweaking the checkout for smaller hands because they didn’t have a broad base of female customers that purchased on mobile. I convinced them to make a few small tweaks to help make the checkout process better for smaller hands, and within a few weeks, we saw an increase in purchases by female customers from their mobile devices. We didn’t see female customers before, because the checkout wasn’t built with them in mind. Site Speed Another consideration you need to take into account for mobile checkout is understanding site speed in the context of where your target market is. While users in cities will get 4G speeds, rural users may only have 3G connections and severely limited data plans. Even looking at the countries you are targeting can mean you need to think about different things in terms of site speed. In Canada, we have decent speeds, but anemic data plans compared to many other places in the world. When you’re developing your mobile checkout experience, make sure you test it on throttled internet connections. It’s fairly easy in Firefox and Chrome. If you’re testing on Safari, then you’re going to have to look at a third-party tool like Charles Proxy or install the Network Link Conditioner tool for Xcode to simulate slower connections. You can even use this tool in conjunction with your iOS test device to throttle the live connection as you test your site on an actual mobile device instead of a simulated one in the browser. Make sure you test your site against the slow connections that your users may have, instead of checking only against the connection you have at work. 
Visible Trust Marks A trust mark is an image from your SSL provider or some other icon that shows you have a secure and trustworthy payment provider. Often these are relegated to the bottom of a site for desktop users, but it’s worth reevaluating where you put them for your mobile layout. For one client I worked with, we experimented with putting small versions of them right at the top of the checkout. That way, when the user came to the checkout on their phone, the first thing they saw was the small marks saying we had a secure site free of malware. This small change produced a 1–2% increase in conversions, which adds up to a bunch of extra earnings over the year. Password Filling Applications One of the final ways to help users complete your mobile checkout process is to make sure that any user account fields work with tools like 1Password, Dashlane, and LastPass. Passwords are enough of a pain sitting in front of a full keyboard, but they get even worse when you enforce secure passwords that require switching back and forth between the different keyboards. Testing this is fairly easy: grab a free copy of each of the above tools and put your password into them. Then, try to check out using them to fill in any passwords or user fields in your checkout form. Don’t forget to use each of these applications to create an account at checkout as well. Possibly the worst mistake you can make here is blocking the ability to copy and paste passwords into your account fields. This is how password applications work, and any user that creates secure passwords is highly likely to leave once they see they can’t paste in their nice long random password without manually typing it. Building a good checkout process is crucial to having a profitable eCommerce site. And, with the rise in mobile purchasing, it’s even more important to make sure that you provide a top-notch experience to mobile purchasers. 
By working through the steps here, you can make sure that you do provide an excellent experience for your mobile users. They’ll be happy, and you’ll convert more purchases, which makes you happy. Build a High-Performing WooCommerce Store Create a store that converts traffic with Nexcess Managed WooCommerce Hosting solutions. They come standard with Jilt to help you recover abandoned carts, performance tests whenever you need them, and the platform reduces query loads by 95%, leading to a faster store. The post Special Considerations for Streamlining Mobile Checkout on WooCommerce appeared first on Nexcess Blog.

How To Build Your Music Side Hustle Website Quickly

HostGator Blog -

The post How To Build Your Music Side Hustle Website Quickly appeared first on HostGator Blog. Have you been thinking about taking your talent for music and building it into a successful side hustle? If you’re passionate about the music industry and are looking to start making extra money with music, now is the time to get started. According to the latest IFPI Global Music Report in 2019, the global recorded music market grew by 9.7% in 2018, which represents the fourth consecutive year of growth. Not to mention, total revenues for 2018 were $19.1 billion. The music industry is growing, leaving room for creatives like yourself to get in on a piece of the $19.1 billion music industry pie. There are diverse ways to grow a music side hustle—ways you may not have even considered. This article will present an overview of ways to make money with music and how a website can help you grow your music business. 3 Paths to Music Side Hustle Success The music industry is more than just getting paid to play music. Here are some top ways to grow a music side hustle that will help you earn extra cash each month. 1. Record and produce your own music A popular and fun way to make extra money with music is to write and produce music and work as your own music label owner. Calixto Gabriel Bravo, also known as Xcelencia, is one musician that has built his business by working as a recording artist and producer. Bravo explains his business: “I am an independent recording artist, a music producer, and music label owner. Coming from a family of four, I always had a rebellious and fiercely independent energy that translated into my passion for music. I understood that I wanted to make an impact in the music industry from an early age, and now I am accomplishing my dreams.” When you visit Xcelencia’s website, you quickly get an idea of who he is as a musician. 
His website acts as a way to showcase his own music, presents a gallery of his work, and provides a way for followers to get into contact with him. Additionally, his website acts as a method for him to receive constructive criticism from others. He says, “My side hustle is successful not just using metrics such as revenue or data/analytics, but also based on the amount of positive feedback and constructive criticism I receive on a weekly basis from listeners, colleagues, and more.” 2. Work as an independent singer-songwriter Your interest in music may start and stop with writing and singing your own music, and that’s great. Working as a singer-songwriter is an awesome side hustle as it helps you bring in a little extra cash each month with your primary passion. Grover Anderson is one musician who explains how he started and grew this type of music side hustle. He says, “For the last twelve years I’ve been an independent singer-songwriter. I started writing songs in my college dorm room, then eventually started taking them into bars and coffee shops, and finally onto large stages in Northern California.” Anderson continues, “I feel compelled to write, and love playing those songs for people, so I decided to try and make a go of it. Handling everything from booking to PR to finances on my own, I have released 4 albums and toured the country multiple times. I do most of my touring in the summertime, because during the school year I’m a high school English and Drama teacher.” You may be worried about whether or not there is money in this type of business. Anderson explains how he measures success: “I measure success by growth over time. For the first 5-10 years I played small gigs, sold a few albums, and was lucky to have a few website visitors a week. I never broke even on my expenses, but I loved making music so much that I pressed on. 
In the last two years I’ve had visitors in the thousands, made a modest profit, and played bigger and bigger stages, even opening for America and Creedence Clearwater Revisited last summer.” The best thing about a singer-songwriter music side hustle is you can grow at your own pace. Regardless of how quickly your business grows, you’ll enjoy the process. 3. Sell your music online Playing music at live events is only one way to participate in the music industry. There is also a huge market for selling your music online. Companies are always looking for jingles, podcasts are often in need of theme music, and YouTubers don’t want to run the risk of not earning money on videos by using a song they don’t own the license to.  If you’re looking to make money from music, consider selling your music online. There are several other ways to make money from music including rating music, teaching online music lessons, starting a YouTube channel, and more. Regardless of what you choose to do for your side hustle, one thing is for certain—you need a website to be successful. Why Does Every Musician Need a Website? While it’s vital that you keep up your music chops, music isn’t the only skill you need to have a successful side hustle. In order to grow your music business, you also need a website. Here are some of the top ways a website will help you with your music side hustle. 1. Showcase your music It’s difficult to get hired to come play if potential clients don’t know what your music sounds like. A website will act as an online portfolio of all of your work. With a website, you can include demo music, published music, videos, and any other information that is relevant to your music business. 2. Get found Sometimes the best way to catch a break or get hired is through Google search. For example, people are always looking for local musicians to play at cafes, corporate parties and events, city celebrations, weddings, and more. 
While it’s true many of your gigs will come via word of mouth, you’ll also find some of your gigs coming from being one of the first musicians to show up in a local search. 3. Display your schedule Do you already have a small following? Great! The next step is to let your fans know when and where they can come see you play. The best way to do this is to include a touring schedule on your website.  With a click of a button, fans will be able to learn about your upcoming shows, RSVP, and even buy tickets online, depending on your website capabilities. 4. Manage bookings Another reason to have a website is to have a place where people looking for a musician can book your services. What better place to manage this than right on your website? You can include a booking schedule, or a simple contact page on your website that shows when you are available and allows people to book your services with ease. 5. Sell merchandise Anyone in the music industry knows that the money you can earn from playing is just part of your income. If you have a small following, you can also make money on merchandise. Of course you’ll want to set up a merch table when you go play your gigs, but you can also sell your music, t-shirts, hats, etc. right on your website. When setting up your music side hustle, don’t forget the most important step—making sure your fans and clients can learn everything about you via your website. How to Build a Music Website It’s important to have a website for your music business if you’re going to promote your music or land any gigs. That means, the first step to success is getting your website up and running. The good news is to get a website up, you don’t have to know how to code or design or have a huge budget. With the help of HostGator’s Website Builder, you can get a nice website up in no time.  Additionally, HostGator has a team ready to help you navigate the process in the event you have any questions. 
If you follow the process outlined below, you can get your website published in less than a day. Step 1: Pick a hosting plan for your music website. The Gator Website Builder offers three plans. You can pick your plan depending on your needs. The starter plan includes a free domain, 200+ templates, a drag-and-drop editor, cloud hosting, and website analytics. It’s the perfect pick for someone looking to start a music website. If you are nervous about building your own music website and want priority support, you’ll want to select the premium plan. This plan provides everything in the Starter plan plus priority support. If you are planning on selling merchandise on your website, then select the eCommerce plan. This plan will help you set up an eCommerce store where you can collect payments online. Once you’ve picked a plan, click “buy now” and you can set up your account. Step 2: Pick a domain name for your music website. Every Gator Website Builder package includes a free domain. To pick your domain, all you have to do is type something in the “get domain” box. If your top choice isn’t available, select another. You’ll notice that many musicians use their own name, music production company, or band name as their domain name, but this isn’t a must. If you need help, here is an article on picking out the perfect domain name for your music website. If you already have a domain name, then you can connect it to your HostGator account by clicking “connect it here.” Step 3: Create your account. Once you have a domain name, you can connect your HostGator account. All you need is an email address or Facebook account to connect. Then, enter your payment information, and you’re ready to go. Step 4: Pick a template for your music website. The best news about creating your website is you don’t have to do any coding. Gator Website Builder comes with hundreds of templates. All you have to do is pick the one that matches the vibe of your music side hustle best. 
Once you create your account, you’ll be directed to the “choose a template” page. You can scroll through over 200 professionally-designed templates, and select the template that best fits the goals for your music website. You can also customize any of the templates to match the colors and theme of your music. There’s even a template made for musicians! Step 5: Add content to your music website. Once you have selected the perfect template, click “start editing.” This will send you to your dashboard where you can add, edit, and delete pages. Pages you might want to include are an about page, contact page, music portfolio, schedule of your shows, merchandise page, and more. The drag-and-drop builder makes it easy to design your website. However, if you have any questions, GatorBuilder also includes a free and easy step-by-step guide for reference that you can access at any time. To access this guide, click the “menu” icon next to the Gator by HostGator logo and select the “getting started tour.” Step 6: Review your content and launch your music website. The last step is to review your website and go live. By clicking “preview,” you can see your music website in full. This is when you can look at your website and make sure everything looks perfect. If everything is on target, then click the “finish preview” button at the top and then “publish website” at the top of the dashboard. Gator Website Builder will present a series of quick steps to help you go live, and you’ll be officially ready to land your first gig. Get Started on Your Music Website Don’t let your passion for music die out. The music industry is innovating, and with the help of the internet and an awesome website, it’s easier than ever to promote your music. For more information on starting a music side hustle and website, check out Gator Website Builder today. The process of building a website is intuitive, easy, and you’ll be pleased with the results. Find the post on the HostGator Blog

WP Engine’s Agency Partner Program, Largest In WordPress, Grows 8X In First Two Years

WP Engine -

AUSTIN, Texas – Nov. 19, 2019 – WP Engine, the WordPress digital experience platform, today announced the number of digital agencies participating in its Agency Partner Program (APP) has grown to 5,000+ globally, making it the largest in WordPress. This growth represents an 8X increase since the program began in 2017 and a 186% increase… The post WP Engine’s Agency Partner Program, Largest In WordPress, Grows 8X In First Two Years appeared first on WP Engine.

How to Go Live in HD Quality From Your Computer

Social Media Examiner -

Do you want to use high-quality live video in your marketing? Wondering if you have the right tech setup? In this article, you’ll find a checklist of tips and tools to create HD-quality live video broadcasts from your laptop or desktop. Why Broadcast Live Video in High Definition? If you want to organically reach as […] The post How to Go Live in HD Quality From Your Computer appeared first on Social Media Marketing | Social Media Examiner.

CloudFormation Update – CLI + Third-Party Resource Support + Registry

Amazon Web Services Blog -

CloudFormation was launched in 2011 (AWS CloudFormation – Create Your AWS Stack From a Recipe) and has become an indispensable tool for many AWS customers. They love the fact that they can define a template once and then use it to reliably provision their AWS resources. They also make frequent use of Change Sets, and count on them to provide insights into the actions (additions, changes, and deletions) that will take place when the change set is executed. As I have written about in the past, CloudFormation takes special care to implement a model that is consistent, stable, and uniform. This is an important, yet often overlooked, element of CloudFormation, but one that our customers tell us they highly value! Let’s take a look at a couple of the most frequent requests from our customers: Performance – Over the past year, the number of operations performed on CloudFormation stacks has grown by 30% per quarter! The development team has worked non-stop to make CloudFormation faster and more efficient, even as usage grows like a weed, using a combination of architectural improvements and low-level optimizations. Over the past couple of months, this work has allowed us to raise a number of soft and hard limits associated with CloudFormation, and has driven a significant reduction in average and maximum latency for Create and Update operations. Coverage – We release new services and new features very rapidly, and sometimes without CloudFormation support. Our goal is to support new services and new features as quickly as possible, and I believe that we are making progress. We are also using the new CloudFormation Coverage Roadmap as a primary source of input to our development process, and have already addressed 43 of the issues. Extensibility – Customers who make extensive use of CloudFormation tell us that they want to automate the creation of non-AWS resources. 
This includes resources created by their own development teams and by third-party suppliers of SaaS applications, monitoring tools, and so forth. They are already making good use of Custom Resources, but as always want even more control and power, along with a simpler way to manage them.

CloudFormation Registry and CloudFormation CLI

Today we are addressing your requests for more coverage and better extensibility with the launch of the CloudFormation CLI as an open source project. You can use it to define and create resource providers that automate the creation of resources in a safe & systematic way. You create a schema, define a handler for five core operations, test it locally, and then publish your provider to a new provider registry that is associated with your AWS account.

We have also been working with a select set of third-party vendors, helping them to create resource providers for their SaaS applications, monitoring tools, and so forth. You will be able to obtain the providers from the vendors of interest and add them to your provider registry. Finally, we are making a set of AWS resource providers available in open source form. You can use them to learn how to write a robust provider, and you can also extend them (in your own namespace), as desired. Let’s dive in!

CloudFormation CLI

This set of tools gives you everything you need to build your own resource providers, including detailed documentation and sample code. The cfn (CloudFormation Command Line Interface) command helps you to initialize your project, generate skeleton code, test your provider, and register it with CloudFormation. Here are the principal steps:

Model – Create and validate a schema that serves as the canonical description of your resource.

Develop – Write a handler (Java and Go now, with other languages to follow) that defines five core operations (Create, Read, Update, Delete, and List) on your resource, and test it locally.
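The handler contract in the Develop step can be sketched in miniature. The following is an illustrative, in-memory stand-in of my own (not the actual CloudFormation CLI framework interface, and the class and method names are hypothetical) that shows the shape of the five core operations a resource provider implements:

```python
# Illustrative sketch of the five CRUDL handler operations a resource
# provider implements. Real providers are written against the CloudFormation
# CLI framework (Java or Go at launch); this in-memory version only shows
# the shape of the contract, with a dict standing in for the downstream API.

class ConfigurationSetProvider:
    def __init__(self):
        self._store = {}  # stands in for calls to a downstream service API

    def create(self, name):
        if name in self._store:
            # The real framework surfaces this as an AlreadyExists error.
            raise ValueError("resource already exists")
        self._store[name] = {"Name": name}
        return self._store[name]

    def read(self, name):
        return self._store[name]

    def update(self, name, **props):
        self._store[name].update(props)
        return self._store[name]

    def delete(self, name):
        del self._store[name]

    def list(self):
        return sorted(self._store)
```

A real handler would call the downstream service’s API in each operation and report progress back to the framework, which takes care of throttling, credentials, and error reporting.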
Register – Register the provider with CloudFormation so that it can be used in your CloudFormation templates.

Modeling a Resource

The schema for a resource must conform to the Resource Provider Definition Schema. It defines the resource, its properties, and its attributes. The properties can be marked as read-only, write-only, or create-only; this gives CloudFormation the information it needs in order to modify existing resources when it is executing an operation on a stack. Here is a simple definition:

{
    "additionalProperties": false,
    "createOnlyProperties": [
        "/properties/Name"
    ],
    "primaryIdentifier": [
        "/properties/Name"
    ],
    "properties": {
        "Name": {
            "description": "The name of the configuration set.",
            "maxLength": 64,
            "pattern": "^[a-zA-Z0-9_-]{0,64}$",
            "type": "string"
        }
    },
    "typeName": "AWS::SES::ConfigurationSet",
    "description": "A sample resource"
}

Develop

The handlers make use of a framework that takes care of error handling, throttling of calls to downstream APIs, credential management, and so forth. The CloudFormation CLI contains complete sample code; you can also study the Amazon SES Resource Provider (or any of the others). For a step-by-step guide, read Walkthrough: Develop a Resource Provider in the CloudFormation CLI documentation.

Register

After you have developed and locally tested your resource provider, you need to tell CloudFormation about it. Using the CloudFormation CLI, you submit the package (schema and compiled handlers) to the desired AWS region(s). The acceptance process is asynchronous; once it completes, you can use the new resource type in your CloudFormation templates.

CloudFormation Registry

The CloudFormation Registry provides per-account, per-region storage for your resource providers. You can access it from the CloudFormation Console: select Public to view the native AWS resources (AWS::*::*); select Private to view resources that you create, and those that you obtain from third parties.
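As a rough illustration of how the schema’s constraints come into play (this is my own sketch, not the framework’s actual validation code), the Name property from the sample definition can be checked against its pattern and maxLength, and its create-only status queried, with a few lines of Python:

```python
import json
import re

# The sample schema from above, pared down to the parts we check.
SCHEMA = json.loads("""
{
    "createOnlyProperties": ["/properties/Name"],
    "properties": {
        "Name": {
            "maxLength": 64,
            "pattern": "^[a-zA-Z0-9_-]{0,64}$",
            "type": "string"
        }
    }
}
""")

def name_is_valid(name):
    """Check a candidate Name against the schema's pattern and maxLength."""
    prop = SCHEMA["properties"]["Name"]
    return (len(name) <= prop["maxLength"]
            and re.fullmatch(prop["pattern"], name) is not None)

def requires_replacement(changed_property):
    """Create-only properties cannot be updated in place; a change to one
    forces CloudFormation to replace the resource."""
    return f"/properties/{changed_property}" in SCHEMA["createOnlyProperties"]
```

CloudFormation uses exactly this kind of metadata during a stack update to decide whether a property change can be applied in place or forces a replacement.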
You can also access the registry programmatically using the RegisterType, DeregisterType, ListTypes, ListTypeRegistrations, ListTypeVersions, and DescribeType functions.

Third-Party Support

As I mentioned earlier, a select set of third-party vendors have been working to create resource providers ahead of today’s launch. Here’s the initial list:

Atlassian – DevOps Just Got a Whole Lot Easier with Opsgenie and AWS CloudFormation Registry and CLI.

Datadog – Implement monitoring as code with Datadog and CloudFormation Registry.

Densify – How to Adopt Continuous Optimization in AWS Using CloudFormation.

Dynatrace –

Fortinet – Fortinet Now Integrates with AWS CloudFormation Registry and CLI to Enhance Cloud Security.

New Relic – Create New Relic Alerts in AWS CloudFormation Templates.

Spotinst – AWS Cloudformation Custom Resource No Longer Required for Spotinst.

After registering the provider from a vendor, you will be able to reference the corresponding resource types in your CloudFormation templates. For example, you can use Datadog::Monitors::Monitor to create a Datadog monitor. If you are a third-party vendor and are interested in creating a resource provider for your product, please get in touch with us by email.

Available Now

You can use the CloudFormation CLI to build resource providers for use in all public AWS regions.

— Jeff;

