Industry Buzz

Announcing FireLens – A New Way to Manage Container Logs

Amazon Web Services Blog -

Today, the fantastic team that builds our container services at AWS has launched an excellent new tool called AWS FireLens that will make dealing with logs a whole lot easier. Using FireLens, customers can direct container logs to storage and analytics tools without modifying deployment scripts, manually installing extra software, or writing additional code. With a few configuration updates on Amazon ECS or AWS Fargate, you select the destination and optionally define filters to instruct FireLens to send container logs to where they are needed. FireLens works with either Fluent Bit or Fluentd, which means that you can send logs to any destination supported by either of those open-source projects. We maintain a web page where you can see a list of AWS Partner Network products that have been reviewed by AWS Solution Architects. You can send log data or events to any of these products using FireLens. I find the simplest way to understand FireLens is to use it, so in the rest of this blog post, I’m going to demonstrate using FireLens with a container in Amazon ECS, forwarding the container logs on to Amazon CloudWatch.

First, I need to configure a task definition. I got an example definition from the Amazon ECS FireLens Examples on GitHub. I replaced the AWS Identity and Access Management (IAM) roles with my own taskRoleArn and executionRoleArn IAM roles, and I also added port mappings so that I could access the NGINX container from a browser.

{
  "family": "firelens-example-cloudwatch",
  "taskRoleArn": "arn:aws:iam::365489000573:role/ecsInstanceRole",
  "executionRoleArn": "arn:aws:iam::365489300073:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "essential": true,
      "image": "906394416424.dkr.ecr.us-east-1.amazonaws.com/aws-for-fluent-bit:latest",
      "name": "log_router",
      "firelensConfiguration": {
        "type": "fluentbit"
      },
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "firelens-container",
          "awslogs-region": "us-west-2",
          "awslogs-create-group": "true",
          "awslogs-stream-prefix": "firelens"
        }
      },
      "memoryReservation": 50
    },
    {
      "essential": true,
      "image": "nginx",
      "name": "app",
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 80
        }
      ],
      "logConfiguration": {
        "logDriver": "awsfirelens",
        "options": {
          "Name": "cloudwatch",
          "region": "us-west-2",
          "log_group_name": "firelens-fluent-bit",
          "auto_create_group": "true",
          "log_stream_prefix": "from-fluent-bit"
        }
      },
      "memoryReservation": 100
    }
  ]
}

I saved the task definition to a local folder and then used the AWS Command Line Interface (CLI) to register the task definition.

aws ecs register-task-definition --cli-input-json file://cloudwatch_task_definition.json

I already have an ECS cluster set up, but if you don’t, you can learn how to do that from the ECS documentation. The command below creates a service on my ECS cluster using my newly registered task definition.

aws ecs create-service --cluster demo-cluster --service-name demo-service --task-definition firelens-example-cloudwatch --desired-count 1 --launch-type "EC2"

After logging into the Amazon ECS console and drilling into my service and my tasks, I find the container definition that exposes an External Link. This IP address is exposed because I asked the task definition to map container port 80 to host port 80. If I go to that IP address in a browser, the NGINX container that I used as my app serves its default page.
The NGINX container logs any requests that it receives to stdout, and so FireLens will now forward these logs on to CloudWatch. I added a little message to the URL so that when I take a look at the logs, I should be able to quickly identify this request from all the others. I then navigated over to the Amazon CloudWatch console and drilled down into the firelens-fluent-bit log group. If you remember, this is the log group name that I set up in the original task definition. Below you will notice I have several logs in my log stream, and the last one is the request that I just made in the browser. If you look closely at the log, you will find that “IT WORKS” is passed in as part of the GET request. So there we have it: I successfully set up FireLens and had it forward my container logs on to CloudWatch. I could, of course, have chosen a different destination, for example, a third-party provider like Datadog or an AWS destination like Amazon Kinesis Data Firehose. If you want to try FireLens, it is available today in all regions that support Amazon ECS and AWS Fargate. Happy Logging!
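If you would rather script the two CLI steps above than type them by hand, here is a minimal sketch using Python and boto3. It reuses the cloudwatch_task_definition.json file and the demo-cluster/demo-service names from the walkthrough; everything else is an assumption rather than part of the original post.

import json
import boto3

ecs = boto3.client("ecs", region_name="us-west-2")

# Register the FireLens task definition saved from the example above.
with open("cloudwatch_task_definition.json") as f:
    task_definition = json.load(f)
registered = ecs.register_task_definition(**task_definition)["taskDefinition"]

# Create a service that runs one copy of the task on an existing EC2-backed cluster.
ecs.create_service(
    cluster="demo-cluster",
    serviceName="demo-service",
    taskDefinition="{family}:{revision}".format(**registered),
    desiredCount=1,
    launchType="EC2",
)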

Codero’s Managed Hybrid Clouds – The Way of the Future

Codero Blog -

Our President and CEO, Bill King, recently spoke with Website Planet about a number of topics, including the flexibility, cost savings, and scalability of hybrid multi-cloud solutions, the factors for selecting AWS or Azure, small business solutions, and our Serious Support™ commitment. As a long-standing provider of hosting services, Codero has an experienced perspective on industry trends and customer objectives.…

Revisiting the DCKAP Summit with Jeries

Nexcess Blog -

Last week, our team attended DCKAP Summit, one of the few agency-organized conferences in North America. Not only did we sponsor the event, but we also volunteered our time to prepare and give presentations to the conference attendees on how managed ecommerce solutions are paving the way for merchant success. In attendance were Jeries Eadeh, VP of Channel Sales at Nexcess; Amber Hamad, Strategic Partner Development Manager; and Anna Brown, also in Strategic Partner Development. The event took place over one day and saw 15 speakers make their way to the stage to present on diverse topics, such as how to monetize customer support interactions and how to solve ecommerce pain points with integrations and PIM. In total, the event brought together some of the best minds in ecommerce across the US. The knowledge shared provided insights into core merchant pain points and offered actionable advice on how to realize the potential of a storefront. Below is a roundup of how it went, what we thought, and what we learned.

From Humble Beginnings

The morning keynote and general session were MC’d by Christopher Cuenza, who not only introduced the speakers but also kept stopping to remind us what a special opportunity this event was for everyone. Throughout the day he reminded us to make sure to meet someone new, learn a few things about them, and share some genuine interest in one another. Founder and CEO of DCKAP, Karthik Chidambaram, naturally kicked off the morning keynote with some preparation for the topics and discussions to be presented throughout the day. Karthik shared inspiring stories about DCKAP’s humble beginnings in Chicago, Illinois. Thanks to Karthik and the entire team at DCKAP, in attendance was an amazing group of agencies, technology providers, and merchants, all of them there to discuss various topics from modern development to cloud services, product customization platforms, QA testing, and much, much more.

Exploring Modern Ecommerce Trends

Karthik was followed by Marc Ostryniec, Global SVP of Sales at BigCommerce, who presented a riveting story around modern-day commerce trends. He connected today’s best practices back to some great American businessmen and provided advice on how merchants can drive growth with their businesses. Of particular interest was how folks like King C. Gillette and businesses like the Dollar Shave Club followed the same business model to disrupt an entire industry. Indeed an amazing keynote speech! Contemporary Commerce is Entrepreneurial, says Marc Ostryniec, Global SVP sales from @BigCommerce. #dckapsummit #digitalcommerce #eCommerce #conference #digitaltransformation pic.twitter.com/2MohzTrTsm — DCKAP (@DCKAP) November 12, 2019 Get started with Nexcess BigCommerce for WordPress solutions, and take control of your toolset. We learned from some other extraordinary people like Mohan Natarajan, Praveen Venugopal, and Bhavani Ramasubbu, who all took the opportunity to help merchants solve some challenging problems with product customization, management strategies, and simple quality assurance testing tools. It’s clear the DCKAP team has been putting their heads together and working incredibly hard to solve many common pain points for merchants.
Mohan Natarajan at #dckapsummit on @productimize how to drive revenue 3x pic.twitter.com/mgaZtQoCYF — DCKAP (@DCKAP) November 12, 2019 We finished up the day with another round of presentations from Sivaranjani Ramamoorthy with DCKAP, Stephen Cohan from Dot Digital, and Steve Hoffman from Avalara. All of these touched on important and relevant topics for today’s modern retailers. The afternoon group tackled challenging subjects like ADA compliance, growing your business with marketing automation, and managing both federal and local state sales taxes as a remote company operating across multiple state lines.

Speaking on the Tools and Features Modern Merchants Need

I greatly appreciate the opportunity to present and speak with everyone at the conference. As we continue to network and meet merchants, agencies, and technology providers, we’re learning that cloud computing, information security, and performance-tuned architecture remain some of the most critical challenges facing today’s merchant. I talked about the tools and features we’ve implemented for our clients. This included auto scaling, 1-click development sites, and more. The VP Global Channel Sales, Jeries Eadeh @ibnwadie from @nexcess talks about simple tools for Modern Development: made for Merchants, by Merchants. #dckapsummit #eCommerce #digitalcommerce #conference pic.twitter.com/2wuhWFqkF6 — DCKAP (@DCKAP) November 12, 2019 Thank you to DCKAP and the entire team who organized the event. It’s clear this event will be a must-attend next year. We look forward to working with everyone again to make the next event even more successful. Learn more about how managed hosting can help you create solutions that do more. The post Revisiting the DCKAP Summit with Jeries appeared first on Nexcess Blog.

In The Works – New AMD-Powered, Compute-Optimized EC2 Instances (C5a/C5ad)

Amazon Web Services Blog -

We’re getting ready to give you even more power and even more choices when it comes to EC2 instances. We will soon launch C5a and C5ad instances powered by custom second-generation AMD EPYC “Rome” processors running at frequencies as high as 3.3 GHz. You will be able to use these compute-optimized instances to run your batch processing, distributed analytics, web applications and other compute-intensive workloads. Like the existing AMD-powered instances in the M, R and T families, the C5a and C5ad instances are built on the AWS Nitro System and give you an opportunity to balance your instance mix based on cost and performance. The instances will be available in eight sizes and also in bare metal form, with up to 192 vCPUs and 384 GiB of memory. The C5ad instances will include up to 7.6 TiB of fast, local NVMe storage, making them perfect for video encoding, image manipulation, and other media processing workloads. The bare metal instances (c5an.metal and c5adn.metal) will offer twice as much memory and double the vCPU count of comparable instances, making them some of the largest and most powerful compute-optimized instances yet. The bare metal variants will have access to 100 Gbps of network bandwidth and will be compatible with Elastic Fabric Adapter — perfect for your most demanding HPC workloads! I’ll have more information soon, so stay tuned! — Jeff;

How to Set Up Sales Funnels in Google Analytics (Step-by-Step)

HostGator Blog -

The post How to Set Up Sales Funnels in Google Analytics (Step-by-Step) appeared first on HostGator Blog. This article is part of HostGator’s Web Pros Series. In this series, we feature articles from our team of experts here at HostGator. Our Product Managers, Linux Administrators, Marketers, and Tech Support engineers share their best tips for getting the most out of your website. Let’s say you’ve got a great promo video on the homepage of your B2B website, which does an excellent job advertising your services and encouraging leads to call you. Only problem is, no one’s watching it. Or maybe you run an eCommerce store. You have no problem getting people to visit from social media. Many of them even add items to their carts, but they’re still dropping off mid-checkout. Sound familiar? While frustrating, these are common experiences for any website owner. Fortunately, there’s a way to discover what’s keeping people from converting on your website. All you’ll need to figure it out is a free Google Analytics account and a solid internet connection. (If you haven’t set up Google Analytics for your website yet, you can do that here.) Then, it’s time to set up sales funnels in Google Analytics. Once that’s done, you’ll be able to track visitor behavior on your website, identify problem areas, and optimize the user experience to get more visitors to do more of the things you want—like make a purchase, fill out a lead form, or subscribe to your newsletter. If any of that sounds confusing, don’t worry. Below, I walk you through what sales funnels are, why they’re important, and how to track them in Google Analytics. We’ll end by discussing how you can take action based on the insights they provide.

What Is a Sales Funnel?

A sales funnel is the sequence of steps that a user takes to complete a conversion. A sales funnel on an eCommerce website might look something like this: The customer arrives at the website. Once there, the customer browses a few different product pages. Next, the customer adds an item or two to their cart. Finally, the customer purchases said item(s). The sales funnel looks different for different types of websites and different types of customers. That’s why it’s important to know who your customers are and outline the series of actions they might take on your website. All sales funnels end in a conversion. A conversion can have different definitions depending on what business you’re in and the type of website you’re operating. Traditionally, when people think of a conversion, they think of completing an order on a website. But a conversion can be broader than that, like signing up for your email newsletter or downloading a whitepaper. Ultimately, a conversion is any behavior you want your customer to take that results in some value for your business.

Why Every Site Owner Should Care About Sales Funnels

Sales funnels are essential for understanding the steps your customers take before reaching their final conversion, and any obstacles that prevent them from getting there. You can think of each step in your sales funnel as a pivotal touchpoint you want people to reach on their way to converting. Once you’ve defined each of those steps, you can identify the areas of friction: the places where people get stuck, leave, or otherwise don’t continue with the conversion process. When you have that information, you’re able to optimize the design and flow of your site, adjusting things to capture more conversions.
Suddenly, you know what’s working on your website, and what’s not—so you can get started fixing it.

Case in Point: Business Coach Example

Here’s an example to illustrate the value of sales funnels. Let’s say you’re a business coach. As part of your lead gen process, you offer a free 30-minute consultation, so clients can get a feel for what it’s like to work with you. You advertise this consultation throughout your website with a prominent CTA button. To sign up, people have to click to a separate registration page and complete a form. By analyzing your sales funnel in Google Analytics, you find that these CTA buttons have a high click-through rate. Regardless of which page they clicked from, the number of people who see the CTA button, compared with those who click on it to reach the registration page, hovers around 50%. That indicates that you’re doing a great job creating interest in your free consultation. However, once people reach the page with the registration form, fewer than 1% actually fill it out. Given that interest was so high, what explains this sudden loss of interest? Your consultation is free. What do people have to lose? Well, your registration form might have too many fields, discouraging people from filling it out in the first place. Or, the actual signup form is buried way down on the page where people can’t find it. Maybe the page loads slowly and people give up and leave. Each of these could be opportunities to improve your sales funnel. Currently, one or more of these things is turning people off and making them leave. Once they’re gone, they might as well be gone forever. It’s up to you to test different changes to see what boosts that 1%. Thanks to Google Analytics, you know exactly where the problem lies: the page with the form. People are clicking through to the page with the form, but they’re stopping there. The sales funnel helps you pinpoint the issue, so you can stay focused and make the changes that drive improvements—instead of wasting your time working on things that aren’t part of the problem, like changing up your advertising copy, or bumping up the consultation time from 30 minutes to 60.

What Is Funnel Analysis?

Funnel analysis means turning your sales funnel into something you can monitor and analyze. Let’s use an eCommerce site as a hypothetical. Below would be potential steps in your funnel: People arrive on your website. People navigate to a product page. People add a product to their cart. People make it to the checkout page. People actually complete their purchase. Funnel analysis involves quantifying each of those steps and seeing how many people made it to each. Essentially, you want to know two things: the percentage drop-off from step to step, and the cumulative percentage of the total. This gives you really good visibility into where your friction points are. Plug those steps into Google Analytics, and you can literally see the friction points. For our hypothetical eCommerce site, here’s what the data might look like in Google Analytics: The blue blocks represent the total number of people who reach each step, while the red arrows point to the number of people who drop off at each step. This data tells us a few things: Of all the people who reach the website, 80% leave without browsing any product pages. That represents a big opportunity for us. We might look into whether the homepage does a good job directing people toward product pages. Are product categories listed in the main menu? Are featured items highlighted on the front page?
These are all the things we could test to drive more people to visit the product pages. Of the people who visit a product page, 75% end up progressing to the next step and adding the item to their cart. Nice! This is a good sign that among the people who are interested enough to visit the product page, we do a great job convincing them they should buy it. Unfortunately, only about 6% of those people end up completing their checkout. So, something is off. Maybe there’s a technical issue with the checkout page, and people don’t feel like they can trust the site with their credit card information. Maybe there are too many fields for them to fill out, or it asks for information unrelated to their purchase. Whatever the reason, this is an issue worth looking into. The fact that people added the item to their cart indicates a strong intent to buy, so if none of them actually convert, there must be something blocking them from making a successful purchase. Just by looking at the raw data, we suddenly have a ton of information we can work from to optimize our website. This is what makes funnel analysis so relatable. Once you start to think of your website as a journey for your customers to take, you can get in their mindset and consider the incremental steps that keep them moving forward. Next, let’s talk about how to apply this strategy to your site.

How to Visualize the Sales Funnels for Your Website

Before you even open Google Analytics, step one is to get a good understanding of your site and what you want people to do. I recommend a brainstorming session where you map out your funnels. If you have an eCommerce site, your funnels likely look similar to the ones we outlined above. If you have a blog, the funnel concept still applies. There may be no product pages or “add to cart” button, but you still have a homepage, category pages, and blog articles. These blog articles are essentially the “products” of your site. You want to sit down and really think about your site, the journey you want people to take, and their ultimate destination or goal. Is that goal having them read the blog? Figuring out how to direct them toward your blog articles would be the top priority. Remember, you could have multiple funnels within one site. You may be a blogger who sells products on the side, so you’d have different sales funnels for your blog content vs. your online store. By the end of this brainstorming session, you should know what you want people to do, and have that broken out into steps (e.g. entering the site, visiting a blog page, downloading a whitepaper).

How to Set Up Sales Funnels in Google Analytics

Once you have your sales funnels mapped out, it’s time to collect the data. There are a lot of ways to skin the cat, but segments and goals are the easiest, so that’s what I’m going to show you today. We’ll start with the easiest option: creating a segment. Note: For the sake of expediency, we’re going to use product pages as an example in the following steps. If you’re setting up funnels for your blog, adjust accordingly.

Option #1: Create a Segment

You’re going to create four segments: one each for your home, product, cart, and thank you pages. In Google Analytics, navigate to Acquisition > All Traffic > Channels. This screen shows you all your website traffic, broken down by channel (social, organic, direct, etc.). Step 1: Create your homepage segment. Click the Add Segment button at the top left. This will take you to a new screen.
Here you’ll see that Google Analytics already offers a lot of options relevant to sales funnels. If you wanted to, you could click “Made a Purchase” and call it a day. But your site isn’t one-size-fits-all. You’ve already outlined the specific pages on your site that you want users to progress through. The easiest way to create a segment for that specific sales funnel, and avoid Google muddying up the data, is to create a custom segment. Click the red New Segment button in the upper left. Next, click Conditions under the Advanced menu on the left. In this screen, you’ll define each step of your sales funnel. Using the drop-down menu, search for and select “Page.” Then, select “exactly matches” from the second drop-down menu. This prevents Google from including other pages with similar URLs. Finally, either enter the URL of your homepage in the text field, or use the / suggested by Google. (In Google Analytics, / is shorthand for your homepage.) Name your segment in the “Segment Name” field at the top left, and click the blue Save button. Step 2: Create your product page segment. To see how many people go from your homepage to the product page, you’ll need to create another segment. To do that, you’ll repeat the same steps above. Here they are for easy reference: Click on Add Segment. Click the red New Segment button. Click Conditions under the Advanced menu on the left. Search for Page from the drop-down menu. Next, if you have a single product, or you want to create a funnel for just one product, you can continue the same process you used for your homepage. Select “exactly matches” from the drop-down menu and enter the exact URL of the product page in the text field. Alternately, if you sell multiple products, you might want to see how many people visit any product page on your site. In that case, you select “contains” instead of “exactly matches” from the drop-down menu, and use a common denominator in the text field (for example, if all your product pages include /shop/ in the URL, you would enter /shop/ in the text field). Give your segment a name you can identify, like the product name, or simply “Product Page,” and click Save. Step 3: Create your cart page segment. Follow the same steps again: Click on Add Segment. Click the red New Segment button. Click Conditions under the Advanced menu on the left. Search for Page from the drop-down menu. For your cart segment, you’ll likely need to use the “contains” option, since the URL may change for each individual cart. Find a common denominator like /checkout/ or /cart/ and enter that into the text field. Name your segment “Cart” or something easily identifiable, and click Save. You can probably guess what’s coming next. Step 4: Finally, create your purchase confirmation, or thank you, page segment. Follow the same steps again: Click on Add Segment. Click the red New Segment button. Click Conditions under the Advanced menu on the left. Search for Page from the drop-down menu. Again, since the specific URL may vary by user, you’ll want to use the “contains” option and find a common denominator in the URL, like /thank-you/. Name this your “Thank You” segment and click Save. Now, your Google Analytics should look something like this: Each segment is represented by a different-colored line, and you can visually see the drop-off from step to step. There’s also a wealth of numerical data for each segment outlined below.
In this sample, it looks like there’s a significant drop-off from the product page to the cart, and again a number of people who don’t complete checkout. Either of these would be good places to start optimizing.

Option #2: Set Up Goals in Google Analytics

Now, for the slightly more advanced option: setting up Goals. Once you do this, you’ll unlock additional Google Analytics capabilities, like the Funnel Visualization report (available under Conversions > Goals > Funnel Visualization). To set up goals, you’ll need to go to your Google Analytics Admin settings. Click on the gear icon in the bottom left corner of your screen, then click Goals. Next, click the red New Goal button. We’re going to use a Template goal here to track completed orders. Check the circle by Template, then click the blue Continue button. As with segments, you’ll find there are a bunch of template goals you can choose from, based on common conversion goals people have for their websites. For example, if we hosted product videos on our site, we might want to track how many people actually watch them. The Event goal would be useful for that. Since we want to track how many people ultimately place an order, we’ll stick with Destination. Enter a name for your goal, like “Placed an Order,” and check the circle by Destination. Then click Continue. In the final screen, we’ll use our thank you page as the URL and select “Equals to” from the drop-down menu: Now, you have the option of setting a Value for each conversion. If you primarily use your website for lead generation, your revenue likely isn’t captured through Google Analytics. You can still estimate it by turning the Value field to ON, and then specifying a dollar amount. For example, if you’re using this goal to track whitepaper downloads, you might guess that on average each download is worth $100 to your business, so you would enter that. Again, this field is optional. Next, turn Funnel to ON. Here, you’ll outline the different URLs for each step of your funnel, like we did when we created custom segments. Click Save when you’re done. Finally, click “Verify this goal” to check your work. Then, navigate back to Conversions > Goals and you’ll see you can now view the Funnel Visualization report, which shows you a visual representation of your funnel: This report tells us a few things at a high level: At the top, it tells you how many people completed the goal (120 sessions) and the all-up conversion rate for this funnel (14.69%). On the left, it tells us the prior page that brought visitors to the funnel in the first place. Often, this tends to be the homepage. That’s the most popular page for most websites, so it makes sense that it shows up here (indicated by “(entrance)”). On the right, it tells us how many people abandoned the funnel at each step, and what page they visited next, if they didn’t leave your site completely (indicated by “(exit)”). Google Analytics will show you this same information for every step. Where did your visitors come from before entering your funnel, did they leave at any step, and if so, where did they go?

How to Use Google Analytics Funnel Data to Optimize Your Site

Funnel analysis helps you quantify how many users make it to each step, and determine the abandonment vs. retention rate for each step. Funnel analysis gives you the what. The why is a bit harder to decipher. Again, let’s use our eCommerce funnel as an example. Our funnel analysis reveals we have a high percentage of people going from our homepage to our product page.
That’s great. If people didn’t make it there, that’d be roadblock #1. But of those people that reach the product page, a large number drop out before adding anything to their cart. We’d want to focus here first. What can we do to improve our product pages? Is it a design issue? Are users confused by how to add a product to the cart? We might need to tweak the placement, color, or size of the Add to Cart button. Is it a marketing issue? Maybe our positioning needs work and we need to do a better job emphasizing the benefits of this particular product. Is it a pricing issue? Perhaps the pricing is really high compared to our competitors. When people see it, they balk and leave. There are all sorts of reasons you might see friction at the funnel steps. That’s when this quantitative data works really well with the qualitative side, like adjusting the user experience, surveying your customers, and A/B testing your changes. Customer surveys are my favorite place to start. There are lots of great (and free) survey tools you can use for this. Run a survey on the problem page with one simple question: “Were you able to accomplish what you were looking for on the site? If not, why not?” From my experience with usability testing, you’ll find that in the survey lots of people say they can’t find something or don’t know where to go. This is a big opportunity to work on first. If you have a small budget, start with your friends and family. Ask them to walk through the process of buying something on your website, but don’t give them any hints. Do they run into any roadblocks?

Turning Insights Into Action with Sales Funnels in Google Analytics

Conceptually, sales funnels are not a very challenging thing to grasp. We’re all consumers and we’re familiar with conducting business online. We’ve all been in that position before where we find ourselves so confused by a website that we get frustrated and leave for a competitor site. That confusion represents a choking point in your sales funnel, and it can make or break your site. Funnel analysis helps you find those choke points. Then, it’s up to you to experiment with how to fix them. Fortunately, setting up sales funnels in Google Analytics is not difficult. Follow the steps above, and you might be surprised by what you find! Ready to do more with Google Analytics? Check out our guide to filtering out bot traffic. Find the post on the HostGator Blog
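To make the funnel arithmetic described above concrete, here is a minimal sketch in Python; the step names and counts are hypothetical placeholders rather than figures from the examples in this post. It prints the step-to-step retention and the cumulative conversion rate for each stage of a funnel.

# Hypothetical funnel: how many sessions reached each step.
funnel = [
    ("Arrived on site", 10000),
    ("Viewed a product page", 2000),
    ("Added to cart", 1500),
    ("Reached checkout", 400),
    ("Completed purchase", 90),
]

total = funnel[0][1]
previous = total
for step, count in funnel:
    step_retention = count / previous * 100   # percent kept from the previous step
    cumulative = count / total * 100          # percent of all visitors who got this far
    print(f"{step:22s} {count:6d}   step: {step_retention:5.1f}%   cumulative: {cumulative:5.1f}%")
    previous = count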

How to Avoid Hardware Failure

Liquid Web Official Blog -

It’s 3 a.m. on Sunday morning and your cell phone is ringing. It’s the CEO of your company. Quickly, you pinch yourself to see if you are dreaming. Nope, not a dream. You wipe the cold out of your eyes, clear your throat, and reluctantly answer the call. The CEO is obviously shaken, and by the sound of his tone, it appears something serious has occurred. He anxiously exclaims that the business is losing money by the second. As your CEO continues speaking, you, the IT Manager, vaguely remember a meeting you had a few weeks back with your marketing team about a social media influencer campaign they were going to run. They shared data with an estimated increase in traffic and wanted to make sure the infrastructure was going to be able to handle the large influx of visitors with the current setup. Unfortunately, you underestimated the traffic. It’s all a blur as the CEO explains the database is completely down and customers can’t reach the site, which means lost revenue. Your CEO continues and begins to question your competence in your role. You had one job: ensure that the company’s website is always online. After 10 minutes, you finally get a chance to get a word in and you let him know you will be contacting your hosting provider immediately. You fear your job is in jeopardy and feel the guilt in the pit of your stomach. The damage has been done. You start to ask yourself: could there have been a way to avoid this problem? Yes, you should have listened to your marketing team, but what if they had not brought this to your attention? What other precautions could you have taken to avoid downtime? Was there a way to avoid hardware failure altogether? Learn how highly available infrastructure can help your business. Download our white paper on Why High Availability Matters – And How You Can Achieve It Reliably and Affordably.

Can Hardware Failure Happen?

If you have read this far, you have either experienced a similar downtime problem firsthand or you fear that this could happen to you at any moment. What a lot of people do not understand is that hardware failure occurs all the time. It is inevitable. Hardware failure is defined as a malfunction of the circuits or mechanical components of the infrastructure or system. In this case, we are talking about hardware failure in a datacenter.

What Can Hardware Failure at a Datacenter Include?

Hardware failure in a datacenter can include failure of any of the following components: hard drive, motherboard, power source, or other internal components. Whether you are new to IT or very tenured, the fact remains that hardware failure happens to everyone at some point. What you do today in preparing for the worst can affect your job, but more importantly, it will also have an effect on your quality of life. Wouldn’t it be less stressful if you had the proper systems, procedures, and protocols in place so that when disaster strikes your system stays online? Even better, you would be able to rest easy at night and be fully away from work when out of the office, knowing that the infrastructure is prepared for anything (and your job is safe). Let’s look at some of the ways to keep your systems online despite hardware failure.

How to Avoid Hardware Failure

Initially, pricing for disaster recovery can become the focal point of internal conversations, especially in a business where budget constraints can be an obstacle to standing up a solution like the one described above.
Savvy operations teams and go-to-market leadership often run an analysis to determine how tolerant the business can be of downtime.

Questions to Consider During Your Downtime Tolerance Analysis

Two questions to consider when doing an analysis would be: 1. How Much Revenue is Lost Every Minute the Site(s) are Unreachable? For most businesses, calculating the revenue lost during hardware failure is not as simple as taking the number of customers that could not get to your site(s) multiplied by your typical conversion rate. Revenue, “the great equalizer,” is often affected in many indirect ways during hardware failure. It’s much more extensive (and expensive, unfortunately). Revenue loss can include employee productivity loss, loss of access to essential systems like POS, VOIP, and email, as well as lost customers and new potential sales. 2. How Much Brand Damage Occurs When the Site(s) are Down? Brand damage is likely the harder of the two to quantify. It really boils down to one question: How is your brand name perceived by the public now that your site(s) are down? Sometimes a single blip of downtime can cause the loss of a large opportunity. The impact on the brand naturally varies based on a host of different reasons. The leadership team at the business will be the best judge of the opportunities lost, brand perception damage, and potential future sales lost as a result.

Decide if the Benefits of a Highly Available System Outweigh Potential Downtime

Any way you slice it, you have to evaluate whether the benefit of purchasing high availability infrastructure outweighs the cost of potential revenue and brand losses due to downtime. We often insure the things that we personally value, like our cars, boats, homes, and health. Businesses survive disasters, and even thrive, based on revenue and brand reputation. Why risk taking blows to either from poor analysis and poor planning?

What Type of Solutions are There?

Did you know that Liquid Web has solutions for both disaster recovery and high availability? The good news is we can help tailor a custom solution suitable for your downtime tolerance, even if that tolerance is zero. Our high availability solutions can ensure that you can handle large influxes of traffic so that you can focus more on sales and employee productivity, even during hardware failure. Liquid Web’s disaster recovery options provide you with the infrastructure needed to develop policies and procedures that can quickly allow you to resume vital functions after a natural or human-related disaster. Both solutions include our world-class, rock-solid support by the Most Helpful Humans in Hosting. Learn More About High Availability Infrastructure The post How to Avoid Hardware Failure appeared first on Liquid Web.
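As a rough way to frame the first question in the downtime tolerance analysis above, here is a minimal Python sketch; every figure in it is a hypothetical placeholder rather than a Liquid Web number, and a real analysis would add brand damage and lost future sales on top of this direct cost.

# Hypothetical inputs for a downtime tolerance analysis.
revenue_per_minute = 250.0        # average online revenue per minute
employees_blocked = 40            # staff who cannot work during the outage
loaded_cost_per_hour = 55.0       # average loaded hourly cost per blocked employee
outage_minutes = 180              # assumed length of the outage

revenue_loss = revenue_per_minute * outage_minutes
productivity_loss = employees_blocked * loaded_cost_per_hour * (outage_minutes / 60)
estimated_direct_cost = revenue_loss + productivity_loss

print(f"Estimated direct cost of a {outage_minutes}-minute outage: ${estimated_direct_cost:,.2f}")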

How to Use Instagram Hashtags for Business: A Guide for Marketers

Social Media Examiner -

Wondering how to use hashtags to improve your visibility on Instagram? Looking for a guide to follow? In this article, you’ll discover a complete guide to using hashtags strategically across Instagram feed posts, story posts, and IGTV posts. The Basics of Using Instagram Hashtags for Business A lot of people struggle with Instagram hashtags. They […] The post How to Use Instagram Hashtags for Business: A Guide for Marketers appeared first on Social Media Marketing | Social Media Examiner.

Twitter Introduces Conversation Insights and Upcoming Features for 2020

Social Media Examiner -

Welcome to this week’s edition of the Social Media Marketing Talk Show, a news show for marketers who want to stay on the leading edge of social media. On this week’s Social Media Marketing Talk Show, we explore Twitter’s newest conversation tools and other features coming to the platform in 2020 with special guest, Madalyn […] The post Twitter Introduces Conversation Insights and Upcoming Features for 2020 appeared first on Social Media Marketing | Social Media Examiner.

Now available: Batch Recommendations in Amazon Personalize

Amazon Web Services Blog -

Today, we’re very happy to announce that Amazon Personalize now supports batch recommendations. Launched at AWS re:Invent 2018, Personalize is a fully managed service that allows you to create private, customized personalization recommendations for your applications, with little to no machine learning experience required. With Personalize, you provide the unique signals in your activity data (page views, sign-ups, purchases, and so forth) along with optional customer demographic information (age, location, etc.). You then provide the inventory of the items you want to recommend, such as articles, products, videos, or music: as explained in previous blog posts, you can use both historical data stored in Amazon Simple Storage Service (S3) and streaming data sent in real time from a JavaScript tracker or server-side. Then, entirely under the covers, Personalize will process and examine the data, identify what is meaningful, select the right algorithms, and train and optimize a personalization model that is customized for your data and is accessible via an API that can be easily invoked by your business application. However, some customers have told us that batch recommendations would be a better fit for their use cases. For example, some of them need the ability to compute recommendations for very large numbers of users or items in one go, store them, and feed them over time to batch-oriented workflows such as sending email or notifications: although you could certainly do this with a real-time recommendation endpoint, batch processing is simply more convenient and more cost-effective. Let’s do a quick demo.

Introducing Batch Recommendations

For the sake of brevity, I’ll reuse the movie recommendation solution trained in this post on the MovieLens data set. Here, instead of deploying a real-time campaign based on this solution, we’re going to create a batch recommendation job. First, let’s define users for whom we’d like to recommend movies. I simply list their user ids in a JSON file that I store in an S3 bucket.

{"userId": "123"}
{"userId": "456"}
{"userId": "789"}
{"userId": "321"}
{"userId": "654"}
{"userId": "987"}

Then, I apply a bucket policy to that bucket, so that Personalize may read and write objects in it. I’m using the AWS console here, and you can do the same thing programmatically with the PutBucketAcl API. Now let’s head out to the Personalize console, and create a batch inference job. As you would expect, I need to give the job a name, and select an AWS Identity and Access Management (IAM) role for Personalize in order to allow access to my S3 bucket. The bucket policy was taken care of already. Then, I select the solution that I want to use to recommend movies. Finally, I define the location of input and output data, with optional AWS Key Management Service (KMS) keys for decryption and encryption. After a little while, the job is complete, and I can fetch recommendations from my bucket.
$ aws s3 cp s3://jsimon-personalize-euwest-1/batch/output/batch/users.json.out -

{"input":{"userId":"123"}, "output": {"recommendedItems": ["137", "285", "14", "283", "124", "13", "508", "276", "275", "475", "515", "237", "246", "117", "19", "9", "25", "93", "181", "100", "10", "7", "273", "1", "150"]}}
{"input":{"userId":"456"}, "output": {"recommendedItems": ["272", "333", "286", "271", "268", "313", "340", "751", "332", "750", "347", "316", "300", "294", "690", "331", "307", "288", "304", "302", "245", "326", "315", "346", "305"]}}
{"input":{"userId":"789"}, "output": {"recommendedItems": ["275", "14", "13", "93", "1", "117", "7", "246", "508", "9", "248", "276", "137", "151", "150", "111", "124", "237", "744", "475", "24", "283", "20", "273", "25"]}}
{"input":{"userId":"321"}, "output": {"recommendedItems": ["86", "197", "180", "603", "170", "427", "191", "462", "494", "175", "61", "198", "238", "45", "507", "203", "357", "661", "30", "428", "132", "135", "479", "657", "530"]}}
{"input":{"userId":"654"}, "output": {"recommendedItems": ["272", "270", "268", "340", "210", "313", "216", "302", "182", "318", "168", "174", "751", "234", "750", "183", "271", "79", "603", "204", "12", "98", "333", "202", "902"]}}
{"input":{"userId":"987"}, "output": {"recommendedItems": ["286", "302", "313", "294", "300", "268", "269", "288", "315", "333", "272", "242", "258", "347", "690", "310", "100", "340", "50", "292", "327", "332", "751", "319", "181"]}}

In a real-life scenario, I would then feed these recommendations to downstream applications for further processing. Of course, instead of using the console, I would create and manage jobs programmatically with the CreateBatchInferenceJob, DescribeBatchInferenceJob, and ListBatchInferenceJobs APIs.

Now Available!

Using batch recommendations with Amazon Personalize is an easy and cost-effective way to add personalization to your applications. You can start using this feature today in all regions where Personalize is available. Please send us feedback, either on the AWS forum for Amazon Personalize, or through your usual AWS support contacts. — Julien
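If you prefer to script the job creation mentioned above rather than use the console, here is a minimal boto3 sketch; the job name, solution version ARN, role ARN, and S3 paths are hypothetical placeholders, not values from this post.

import boto3

personalize = boto3.client("personalize", region_name="eu-west-1")

# Create the batch inference job (equivalent to the console steps above).
response = personalize.create_batch_inference_job(
    jobName="movie-batch-recommendations",
    solutionVersionArn="arn:aws:personalize:eu-west-1:111122223333:solution/movies/version-id",
    roleArn="arn:aws:iam::111122223333:role/PersonalizeBatchRole",
    jobInput={"s3DataSource": {"path": "s3://my-bucket/batch/users.json"}},
    jobOutput={"s3DataDestination": {"path": "s3://my-bucket/batch/output/"}},
)
job_arn = response["batchInferenceJobArn"]

# Check on the job; once it is ACTIVE, the recommendations are in the output S3 path.
job = personalize.describe_batch_inference_job(batchInferenceJobArn=job_arn)
print(job_arn, job["batchInferenceJob"]["status"])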

New – Insert, Update, Delete Data on S3 with Amazon EMR and Apache Hudi

Amazon Web Services Blog -

Storing your data in Amazon S3 provides lots of benefits in terms of scale, reliability, and cost-effectiveness. On top of that, you can leverage Amazon EMR to process and analyze your data using open source tools like Apache Spark, Hive, and Presto. As powerful as these tools are, it can still be challenging to deal with use cases where you need to do incremental data processing, and record-level insert, update, and delete. Talking with customers, we found that there are use cases that need to handle incremental changes to individual records, for example: Complying with data privacy regulations, where users choose to exercise their right to be forgotten, or change their consent as to how their data can be used. Working with streaming data, when you have to handle specific data insertion and update events. Using change data capture (CDC) architectures to track and ingest database change logs from enterprise data warehouses or operational data stores. Reinstating late arriving data, or analyzing data as of a specific point in time. Starting today, EMR release 5.28.0 includes Apache Hudi (incubating), so that you no longer need to build custom solutions to perform record-level insert, update, and delete operations. Hudi development started at Uber in 2016 to address inefficiencies across ingest and ETL pipelines. In recent months the EMR team has worked closely with the Apache Hudi community, contributing patches that include updating Hudi to Spark 2.4.4 (HUDI-12), supporting Spark Avro (HUDI-91), and adding support for the AWS Glue Data Catalog (HUDI-306), as well as multiple bug fixes. Using Hudi, you can perform record-level inserts, updates, and deletes on S3, allowing you to comply with data privacy laws, consume real-time streams and change data captures, reinstate late arriving data, and track history and rollbacks in an open, vendor-neutral format. You create datasets and tables and Hudi manages the underlying data format. Hudi uses Apache Parquet and Apache Avro for data storage, and includes built-in integrations with Spark, Hive, and Presto, enabling you to query Hudi datasets using the same tools that you use today, with near real-time access to fresh data. When launching an EMR cluster, the libraries and tools for Hudi are installed and configured automatically any time at least one of the following components is selected: Hive, Spark, or Presto. You can use Spark to create new Hudi datasets, and insert, update, and delete data. Each Hudi dataset is registered in your cluster’s configured metastore (including the AWS Glue Data Catalog), and appears as a table that can be queried using Spark, Hive, and Presto. Hudi supports two storage types that define how data is written, indexed, and read from S3: Copy on Write – data is stored in columnar format (Parquet) and updates create a new version of the files during writes. This storage type is best used for read-heavy workloads, because the latest version of the dataset is always available in efficient columnar files. Merge on Read – data is stored with a combination of columnar (Parquet) and row-based (Avro) formats; updates are logged to row-based “delta files” and compacted later, creating a new version of the columnar files. This storage type is best used for write-heavy workloads, because new commits are written quickly as delta files, but reading the data set requires merging the compacted columnar files with the delta files. Let’s do a quick overview of how you can set up and use Hudi datasets in an EMR cluster.
Using Apache Hudi with Amazon EMR

I start creating a cluster from the EMR console. In the advanced options I select EMR release 5.28.0 (the first including Hudi) and the following applications: Spark, Hive, and Tez. In the hardware options, I add 3 task nodes to ensure I have enough capacity to run both Spark and Hive. When the cluster is ready, I use the key pair I selected in the security options to SSH into the master node and access the Spark Shell. I use the following command to start the Spark Shell to use it with Hudi:

$ spark-shell --conf "spark.serializer=org.apache.spark.serializer.KryoSerializer" --conf "spark.sql.hive.convertMetastoreParquet=false" --jars /usr/lib/hudi/hudi-spark-bundle.jar,/usr/lib/spark/external/lib/spark-avro.jar

There, I use the following Scala code to import some sample ELB logs in a Hudi dataset using the Copy on Write storage type:

import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.functions._
import org.apache.hudi.DataSourceWriteOptions
import org.apache.hudi.config.HoodieWriteConfig
import org.apache.hudi.hive.MultiPartKeysValueExtractor

// Set up various input values as variables
val inputDataPath = "s3://athena-examples-us-west-2/elb/parquet/year=2015/month=1/day=1/"
val hudiTableName = "elb_logs_hudi_cow"
val hudiTablePath = "s3://MY-BUCKET/PATH/" + hudiTableName

// Set up our Hudi Data Source Options
val hudiOptions = Map[String,String](
  DataSourceWriteOptions.RECORDKEY_FIELD_OPT_KEY -> "request_ip",
  DataSourceWriteOptions.PARTITIONPATH_FIELD_OPT_KEY -> "request_verb",
  HoodieWriteConfig.TABLE_NAME -> hudiTableName,
  DataSourceWriteOptions.OPERATION_OPT_KEY -> DataSourceWriteOptions.INSERT_OPERATION_OPT_VAL,
  DataSourceWriteOptions.PRECOMBINE_FIELD_OPT_KEY -> "request_timestamp",
  DataSourceWriteOptions.HIVE_SYNC_ENABLED_OPT_KEY -> "true",
  DataSourceWriteOptions.HIVE_TABLE_OPT_KEY -> hudiTableName,
  DataSourceWriteOptions.HIVE_PARTITION_FIELDS_OPT_KEY -> "request_verb",
  DataSourceWriteOptions.HIVE_ASSUME_DATE_PARTITION_OPT_KEY -> "false",
  DataSourceWriteOptions.HIVE_PARTITION_EXTRACTOR_CLASS_OPT_KEY -> classOf[MultiPartKeysValueExtractor].getName)

// Read data from S3 and create a DataFrame with Partition and Record Key
val inputDF = spark.read.format("parquet").load(inputDataPath)

// Write data into the Hudi dataset
inputDF.write
  .format("org.apache.hudi")
  .options(hudiOptions)
  .mode(SaveMode.Overwrite)
  .save(hudiTablePath)

In the Spark Shell, I can now count the records in the Hudi dataset:

scala> inputDF.count()
res1: Long = 10491958

In the options, I used the integration with the Hive metastore configured for the cluster, so that the table is created in the default database. In this way, I can use Hive to query the data in the Hudi dataset:

hive> use default;
hive> select count(*) from elb_logs_hudi_cow;
...
OK
10491958
...

I can now update or delete a single record in the dataset.
In the Spark Shell, I prepare some variables to find the record I want to update, and a SQL statement to select the value of the column I want to change:

val requestIpToUpdate = "243.80.62.181"
val sqlStatement = s"SELECT elb_name FROM elb_logs_hudi_cow WHERE request_ip = '$requestIpToUpdate'"

I execute the SQL statement to see the current value of the column:

scala> spark.sql(sqlStatement).show()
+------------+
|    elb_name|
+------------+
|elb_demo_003|
+------------+

Then, I select and update the record:

// Create a DataFrame with a single record and update column value
val updateDF = inputDF.filter(col("request_ip") === requestIpToUpdate)
  .withColumn("elb_name", lit("elb_demo_001"))

Now I update the Hudi dataset with a syntax similar to the one I used to create it. But this time, the DataFrame I am writing contains only one record:

// Write the DataFrame as an update to existing Hudi dataset
updateDF.write
  .format("org.apache.hudi")
  .options(hudiOptions)
  .option(DataSourceWriteOptions.OPERATION_OPT_KEY, DataSourceWriteOptions.UPSERT_OPERATION_OPT_VAL)
  .mode(SaveMode.Append)
  .save(hudiTablePath)

In the Spark Shell, I check the result of the update:

scala> spark.sql(sqlStatement).show()
+------------+
|    elb_name|
+------------+
|elb_demo_001|
+------------+

Now I want to delete the same record. To delete it, I pass the EmptyHoodieRecordPayload payload in the write options:

// Write the DataFrame with an EmptyHoodieRecordPayload for deleting a record
updateDF.write
  .format("org.apache.hudi")
  .options(hudiOptions)
  .option(DataSourceWriteOptions.OPERATION_OPT_KEY, DataSourceWriteOptions.UPSERT_OPERATION_OPT_VAL)
  .option(DataSourceWriteOptions.PAYLOAD_CLASS_OPT_KEY, "org.apache.hudi.EmptyHoodieRecordPayload")
  .mode(SaveMode.Append)
  .save(hudiTablePath)

In the Spark Shell, I see that the record is no longer available:

scala> spark.sql(sqlStatement).show()
+--------+
|elb_name|
+--------+
+--------+

How are all those updates and deletes managed by Hudi? Let’s use the Hudi Command Line Interface (CLI) to connect to the dataset and see how those changes are interpreted as commits. This dataset is a Copy on Write dataset, which means that each time there is an update to a record, the file that contains that record will be rewritten to contain the updated values. You can see how many records have been written for each commit. The bottom line of the table describes the initial creation of the dataset, above it is the single-record update, and at the top is the single-record delete. With Hudi, you can roll back to each commit. For example, I can roll back the delete operation with:

hudi:elb_logs_hudi_cow->commit rollback --commit 20191104121031

In the Spark Shell, the record is now back to where it was, just after the update:

scala> spark.sql(sqlStatement).show()
+------------+
|    elb_name|
+------------+
|elb_demo_001|
+------------+

Copy on Write is the default storage type. I can repeat the steps above to create and update a Merge on Read dataset type by adding this to our hudiOptions:

DataSourceWriteOptions.STORAGE_TYPE_OPT_KEY -> "MERGE_ON_READ"

If you update a Merge on Read dataset and look at the commits with the Hudi CLI, you can see how different Merge on Read is compared to Copy on Write.
With Merge on Read, you are only writing the updated rows and not whole files as with Copy on Write. This is why Merge on Read is helpful for use cases that require more writes, or update/delete-heavy workloads, with fewer reads. Delta commits are written to disk as Avro records (row-based storage), and compacted data is written as Parquet files (columnar storage). To avoid creating too many delta files, Hudi will automatically compact your dataset so that your reads are as performant as possible. When a Merge on Read dataset is created, two Hive tables are created: The first table matches the name of the dataset. The second table has the characters _rt appended to its name; the _rt postfix stands for real-time. When queried, the first table returns the data that has been compacted and will not show the latest delta commits. Using this table provides the best performance, but omits the freshest data. Querying the real-time table will merge the compacted data with the delta commits on read, hence this dataset being called “Merge on Read”. This will result in the freshest data being available, but incurs a performance overhead, and is not as performant as querying the compacted data. In this way, data engineers and analysts have the flexibility to choose between performance and data freshness.

Available Now

This new feature is available now in all regions with EMR 5.28.0. There is no additional cost in using Hudi with EMR. You can learn more about Hudi in the EMR documentation. This new tool can simplify the way you process, update, and delete data in S3. Let me know which use cases you are going to use it for! — Danilo
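To make the difference between the two Hive tables described above concrete, here is a minimal PySpark sketch. It assumes a Merge on Read dataset was synced to Hive under the hypothetical name elb_logs_hudi_mor, so its real-time view would be elb_logs_hudi_mor_rt, and it assumes the session is started with the same Hudi and Avro jars used for the spark-shell command earlier in this post.

# Launch with the Hudi bundle on the classpath, for example:
#   pyspark --jars /usr/lib/hudi/hudi-spark-bundle.jar,/usr/lib/spark/external/lib/spark-avro.jar \
#           --conf "spark.sql.hive.convertMetastoreParquet=false"
from pyspark.sql import SparkSession

# Hive-enabled session so the Hudi tables registered in the metastore are visible.
spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Read-optimized view: only compacted columnar data, fastest, but may omit the latest delta commits.
spark.sql("SELECT count(*) FROM elb_logs_hudi_mor").show()

# Real-time view (_rt): merges delta commits on read, freshest data at a performance cost.
spark.sql("SELECT count(*) FROM elb_logs_hudi_mor_rt").show()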

Build Your Next Online Course Using Course Maker Pro

WP Engine -

Today, WordPress is being used in a multitude of ways—from content hubs to e-commerce websites and everything in between—and more users are turning to WordPress to power their digital experiences than ever before. One of the reasons WordPress has become so popular across various use cases is the massive catalog of pre-built, industry-specific themes users can… The post Build Your Next Online Course Using Course Maker Pro appeared first on WP Engine.

Meet a Helpful Human – Wes Mills

Liquid Web Official Blog -

We’re the employees you would hire if you could. Responsive, helpful, and dedicated in ways automation simply can’t be. We’re your team. Each month we recognize one of our Most Helpful Humans in Hosting.

Meet Wes

Wes Mills is our amazing “mad” data scientist at Liquid Web, in charge of making the magic happen by curating and organizing the large amounts of data that come into Liquid Web. In other words, he makes our customer experience better through data analysis and objective process changes. Wes is obsessed with the behaviors of people and how they connect and intersect, making his work in analytics and marketing the perfect fit for him at Liquid Web.

Why did you join Liquid Web?

After graduating from the University of Texas with a B.B.A. in Marketing and Economics in 2013, I ventured out to find a career that married digital marketing with behavioral economics. Data was plentiful and interesting, which made the choice easy to start my career at an agency. I continued to expand my breadth of experiences across the business, which led me into other quantitative roles in a handful of other great organizations, including Rackspace. When the opportunity at Liquid Web presented itself, I knew it was a perfect fit—both in terms of the culture at Liquid Web and the solutions required for the business to grow. I had finally found a role that brought data science and marketing together along with an amazing group of brilliant people.

What draws you to the hosting industry as a career?

I’m drawn to the hosting industry because of two things: first, businesses are moving to the cloud and it’s enabling smaller companies to scale their output with minimal cost. Second, as the industry matures, differentiation becomes a lot more important, which brings unique challenges and problems to solve at Liquid Web. These are the exact challenges I knew I could help with when I walked in the door at Liquid Web.

Is there something specific at Liquid Web that you just love?

I love that Liquid Web is able to be agile, even in the area of data and analytics. For instance, we’ve adopted new tools that help us gather data from disparate sources, analyze data much more quickly, and tell our stories in a more compelling way with the latest visualization tools. That helps our teams make data-based decisions in real time, giving us an edge on the competition.

What’s your favorite part about the company culture at Liquid Web?

Customer-facing or not, everyone is truly a Helpful Human in Hosting. If I’m curious about how something works at Liquid Web, I know someone will be more than happy to walk me through it. And that isn’t common in today’s workplace. For instance, when I first joined, one of our Software Architects named Mike was patient in explaining how our customers check out online, and he answered every question I threw at him (some even late into the night!). Through our collaboration and his brilliance, we were able to gain insight that made buying Liquid Web products much easier online for customers. And that’s a win everyone is now enjoying.

In your eyes, what’s the difference between Liquid Web and other employers?

Liquid Web has a culture of allowing everyone to voice opinions or thoughts—everyone has a voice. We have a flat hierarchy, meaning everyone is treated equally whether you’re a manager or a VP. Everyone has a say in how things are done. This leads to better business decisions company-wide and higher morale for all employees.

What is the biggest milestone you’ve accomplished?
The biggest milestone I’ve accomplished at Liquid Web is being able to answer questions using data we’ve never consumed before. With analytics, we can ask better questions and form hypotheses that may solve critical roadblocks standing in the way of our revenue goals. A great example of this is better understanding the value of display ads. Giving marketing leaders insight into the value of display and other marketing channels makes our marketing team run more efficiently: we attribute value to the channels that are working and reroute funds away from the ones that are not. I also loved being part of the National Cybersecurity Awareness Month video series, helping to raise awareness about how data science can secure your data. Check it out!

Tell us about the most positive experience you have had at Liquid Web.

The most positive experience I’ve had at Liquid Web is expanding the analytics team and scaling the impact the team can have. A two-man analytics team turned into three, which exponentially increased the amount of information we shared across Liquid Web, along with the quality of the insights shared. In addition, watching our new team member develop and learn difficult skills quickly has been incredibly rewarding.

What are you known for at Liquid Web? What do people specifically come to you for?

I’m known at Liquid Web for being the go-to on marketing analytics, customer behavior, and automating manual processes (who doesn’t love automating mundane work?). For instance, we used to gather insights on our customers from a third-party organization by pulling a manual list of customers. Now we have automated the process, giving the team extra time back to achieve more.

Work aside, what are some of your hobbies?

I’m a musician (guitar is the primary tool of choice), and I love GarageBand—you can record, mix, and produce anywhere you are!

What is your favorite TV show?

The Bachelor. It’s like watching a behavioral economics experiment. How do people deal with scarcity? Or, better, how does one deal with excess supply? Let the drama commence.

If you could have dinner with one famous person [dead or alive], who would it be?

Cornelius Vanderbilt, the business magnate known for owning the New York Central Railroad. He was able to shape the very geography of the United States by laying the infrastructure that would make transportation across vast distances possible. He used the latest technologies to organize his businesses differently. This is what inspires me in my career—never sticking to the status quo and always being willing to do things differently.

You can follow Wes Mills on LinkedIn. We hope you enjoyed our series, and stay tuned for the next Most Helpful Human in Hosting profile. The post Meet a Helpful Human – Wes Mills appeared first on Liquid Web.

Facebook Ads Strategy: A New Approach for a Competitive Marketplace

Social Media Examiner -

Have your Facebook ads stopped working? Wondering what needs to change? To explore a new approach to Facebook ads strategy, I interview Nicholas Kusmich on the Social Media Marketing Podcast. Nicholas is a Facebook ads strategist with H2H Media Group. He hosts the Accelerated Results podcast and is the author of Give: The Ultimate Guide […] The post Facebook Ads Strategy: A New Approach for a Competitive Marketplace appeared first on Social Media Marketing | Social Media Examiner.

Domain Name System: Explained

Reseller Club Blog -

Domain Name System (DNS) is a distributed database that translates a computer’s registered domain name into an IP address, and vice versa. Networked computers use IP addresses to locate and connect to one another, but IP addresses are hard for people to remember. For instance, on the web it is a lot simpler to remember the website www.abc.com than it is to recall its corresponding IP address (for example, 203.0.113.77). DNS automatically converts the names we type into our web browsers into the IP addresses of the servers hosting those sites. DNS also lets you connect to another authorized computer, or manage it remotely, by using its easy-to-understand domain name rather than its numerical IP address. Going the other way, Reverse DNS (rDNS) translates an IP address into a domain name.

Every organization that runs a network of computers has a server dealing with DNS queries, called a domain server. It holds all the IP addresses within its network, in addition to the IP addresses of recently accessed computers outside the network. DNS can be compared to a telephone directory, where you look up phone numbers using easy-to-remember names.

How DNS Works

DNS resolution is similar to finding a house using its street address. Each device connected to the internet is given an IP address. When someone enters a query, the hostname is converted into an IP address to complete the query. This translation between a web address and a machine-friendly address is crucial for any webpage to load.

At the machine level, when a query is initiated, the browser first looks for the answer in a local cache. If the address is not found there, the query goes to a DNS server in the Local Area Network (LAN). If the DNS server in the LAN receives the query and knows the answer, a result is returned. If not, the local server forwards the query to a DNS cache server provided by the internet service provider. DNS cache servers hold temporary DNS records based on values previously obtained from authoritative DNS servers. An authoritative DNS server, as the name suggests, stores the definitive records for a domain and provides the final answer in a lookup. DNS works as a hierarchy, and it is worth knowing the servers involved.

Types of DNS Servers

DNS recursor – The DNS recursor receives requests from client machines via applications such as web browsers, then makes additional requests on the client’s behalf to satisfy the DNS query. Think of it as a librarian who goes to find a particular book somewhere in the library.

Root nameserver – This is the first step in translating human-readable hostnames into IP addresses. Think of it as the index in the library that gives you the shelf number based on the name of the book.

TLD nameserver – The top-level domain (TLD) nameserver is the next stage in the search for a specific IP address, and it handles the last segment of a hostname. Common TLDs are .com, .in, .org, etc.

Authoritative nameserver – This nameserver is the final stop in the query. If the authoritative nameserver has the requested record, it returns the IP address for the requested hostname to the recursor that made the original query.
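To make the lookup chain described above concrete, here is a small Python sketch using only the standard library. The domain example.com is just a placeholder, and the reverse lookup only succeeds if a PTR record has been published for the address.

import socket

# Forward lookup: resolve a hostname to its IP addresses. getaddrinfo goes
# through the same chain described above (local cache, configured resolver,
# recursive resolution).
hostname = "example.com"
addresses = sorted({info[4][0] for info in socket.getaddrinfo(hostname, None)})
print(hostname, "resolves to:", addresses)

# Reverse lookup (rDNS): map an IP address back to a hostname.
ip = addresses[0]
try:
    name, _aliases, _addrs = socket.gethostbyaddr(ip)
    print(ip, "reverse-resolves to:", name)
except socket.herror:
    print("No PTR record found for", ip)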
What Is DNS Propagation

If your IP address is like the street address used to find your house, what happens when you change addresses? How do DNS servers around the world learn the new IP address? This is where DNS propagation becomes relevant. In simple terms, DNS propagation is the time it takes for any change made to your nameservers to come into effect.

When you change the nameservers for your domain or change your hosting provider, ISP nodes across the world may take up to 72 hours to update their caches with the new DNS information for your domain, and the time required for a complete update of records across all nodes may differ. The new nameserver information will not be propagated immediately, and some of your users may still be directed to your old website. Each ISP node saves a cache to speed up loading times, and you have little option but to wait until all the nodes are updated. You can minimize the effect of DNS propagation by pointing your domain to the destination IP address with an “A record” at your current DNS provider, set with a minimal TTL. After updating the A record, you can wait for an hour and then change the nameservers for your domain. This ensures that your website will not have any downtime, as both hosts will serve the same new website.

DNS Security Extensions

Given that DNS is vital for directing queries to your website, it is hardly surprising that hackers and other bad actors will try to manipulate it. DNS inherently has no means of establishing whether data is coming from an authorized source or has been tampered with. This exposes the system to vulnerabilities and attacks such as DNS cache poisoning, DNS reflection attacks, and DNS amplification attacks. In a DNS cache poisoning attack, bad actors replace a valid IP address with a malicious one, so virtually all users trying to reach the genuine site are redirected to the new address. That new location could host an exact clone of the original site built to steal crucial data such as personal and banking information, or it could redirect to a website that downloads malware onto the local computer.

To address these serious concerns, DNS Security Extensions (DNSSEC) were put in place. DNSSEC is aimed at addressing the weaknesses in DNS by adding authentication, making the system more secure. DNSSEC uses cryptographic keys and digital signatures to enforce legitimate connections and accurate lookup data.

While DNSSEC can substantially reduce the vulnerabilities of DNS, administrative overhead, as well as time and cost, restricts its implementation. A better alternative for many organizations is cloud-based DNS. Similar to cloud web hosting, a cloud-based DNS service provides geographically diverse networks and DNS server infrastructure. It enables high availability, global performance, scalability, stronger security, and better resource management. Do let us know your thoughts, and whether you have used cloud-based DNS, in the comment section below.

The post Domain Name System: Explained appeared first on ResellerClub Blog.

New Automation Features In AWS Systems Manager

Amazon Web Services Blog -

Today we are announcing additional automation features inside of AWS Systems Manager. If you haven’t used Systems Manager yet, it’s a service that provides a unified user interface so you can view operational data from multiple AWS services, and it allows you to automate operational tasks across your AWS resources. With this new release, it just got even more powerful. We have added capabilities to AWS Systems Manager that enable you to build, run, and share automations with others on your team or inside your organisation — making managing your infrastructure more repeatable and less error-prone.

Inside the AWS Systems Manager console, on the navigation menu, there is an item called Automation. If I click this menu item, I will see the Execute automation button. When I click on this, I am asked what document I want to run. AWS provides a library of documents that I could choose from; however, today I am going to build my own, so I will click on the Create document button. This takes me to a new screen that allows me to create a document (sometimes referred to as an automation playbook) that, amongst other things, executes Python or PowerShell scripts. The console gives me two options for editing a document: a YAML editor, or the “Builder” tool that provides a guided, step-by-step user interface with the ability to include documentation for each workflow step. So, let’s take a look by building and running a simple automation.

When I create a document using the Builder tool, the first thing required is a document name. Next, I need to provide a description. As you can see below, I’m able to use Markdown to format the description. The description is an excellent opportunity to explain what your document does; this is valuable since most users will want to share these documents with others on their team and build a library of documents to solve everyday problems. Optionally, I am asked to provide parameters for my document. These parameters can be used in all of the scripts that you will create later. In my example, I have created three parameters: imageId, tagValue, and instanceType. When I come to execute this document, I will have the opportunity to provide values for these parameters that will override any defaults that I set.

When someone executes my document, the scripts that are executed will interact with AWS services. A document runs with the user’s permissions for most of its actions, along with the option of providing an Assume Role. However, for documents with the Run a Script action, the role is required when the script calls any AWS API. You can set the Assume role globally in the Builder tool; however, I like to add a parameter called assumeRole to my document, which gives anyone executing it the ability to provide a different one. You then wire this parameter up to the global assumeRole by using the {{assumeRole}} syntax in the Assume role property textbox (I have called my parameter assumeRole, but you could call it what you like; just make sure that the name you give the parameter is what you put inside the double curly braces, e.g. {{yourParamName}}).

Once my document is set up, I then need to create the first step of my document. Your document can contain one or more steps, and you can create sophisticated workflows with branching, for example based on a parameter or the failure of a step. Still, in this example, I am going to create three steps that execute one after another. Again, you need to give the step a name and a description. This description can also include Markdown. You also need to select an Action Type; for this example, I will choose Run a script. With the Run a script action type, I get to run a script in Python or PowerShell without requiring any infrastructure to run it. It’s important to realise that this script will not be running on one of your EC2 instances; the scripts run in a managed compute environment. You can configure an Amazon CloudWatch log group on the preferences page to receive the script output. In this demo, I write some Python that creates an EC2 instance. You will notice that this script uses the AWS SDK for Python. I create an instance based upon an image_id, tag_value, and instance_type that are passed in as parameters to the script.
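The post describes this launch script but does not reproduce it, so the following is only a minimal sketch of what such a handler might look like, assuming the values arrive through the step’s InputPayload as the events dictionary; the handler name, the run_instances arguments, and the tag placement are illustrative assumptions rather than the author’s actual code.

def launch_instance(events, context):
    # Hypothetical handler for the launch step. The image_id, tag_value and
    # instance_type values are assumed to arrive via the step's InputPayload,
    # which the Run a script action hands to the handler as 'events'.
    import boto3
    ec2 = boto3.client('ec2')
    response = ec2.run_instances(
        ImageId=events['image_id'],
        InstanceType=events['instance_type'],
        MinCount=1,
        MaxCount=1,
        # Tag key/value placement is illustrative; the post only mentions a
        # LaunchedBySsmAutomation tag on the resulting instance.
        TagSpecifications=[{
            'ResourceType': 'instance',
            'Tags': [{'Key': 'LaunchedBySsmAutomation', 'Value': events['tag_value']}],
        }],
    )
    # Whatever the handler returns becomes the step output (the "payload")
    # that later steps, such as the polling step below, can reference.
    return {'InstanceId': response['Instances'][0]['InstanceId']}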
To pass parameters into the script, in the Additional Inputs section I select InputPayload as the input type. I then use a particular YAML format in the Input Value text box to wire up the global parameters to the parameters that I am going to use in the script. You will notice that again I have used the double curly brace syntax to reference the global parameters, e.g. {{imageId}}. In the Outputs section, I also wire up an output parameter that can be used by subsequent steps.

Next, I will add a second step to my document. This time I will poll the instance to see if its status has switched to ok. The exciting thing about this code is that the InstanceId is passed into the script from a previous step; this is an example of how the execution steps can be chained together to use the outputs of earlier steps.

def poll_instance(events, context):
    import boto3
    import time
    ec2 = boto3.client('ec2')
    instance_id = events['InstanceId']
    print('[INFO] Waiting for instance to enter Status: Ok', instance_id)
    instance_status = "null"
    while True:
        res = ec2.describe_instance_status(InstanceIds=[instance_id])
        if len(res['InstanceStatuses']) == 0:
            print("Instance Status Info is not available yet")
            time.sleep(5)
            continue
        instance_status = res['InstanceStatuses'][0]['InstanceStatus']['Status']
        print('[INFO] Polling get status of the instance', instance_status)
        if instance_status == 'ok':
            break
        time.sleep(10)
    return {'Status': instance_status, 'InstanceId': instance_id}

To pass the parameters into the second step, notice that I use the double curly brace syntax to reference the output of a previous step. The value in the Input value textbox, {{launchEc2Instance.payload}}, is the name of the step, launchEc2Instance, followed by the name of the output parameter, payload.

Lastly, I will add a final step. This step will run a PowerShell script and use the AWS Tools for PowerShell. I’ve added this step purely to show that you can use PowerShell as an alternative to Python. You will note on the first line that I have to install the AWSPowerShell.NetCore module and use the -Force switch before I can start interacting with AWS services. All this step does is take the InstanceId output from the LaunchEc2Instance step and use it to return the InstanceType of the EC2 instance. It’s important to note that I have to pass the parameters from the LaunchEc2Instance step to this step by configuring the Additional inputs in the same way I did earlier.

Now that our document is created, we can execute it. I go to the Actions & Change section of the menu and select Automation. From this screen, I click on the Execute automation button. I then get to choose the document I want to execute. Since this is a document I created, I can find it on the Owned by me tab.
If I click the LaunchInstance document that I created earlier, I get a document details screen that shows me the description I added. This nicely formatted description allows me to generate documentation for my document and enables others to understand what it is trying to achieve. When I click Next, I am asked to provide any input parameters for my document. I add the imageId and the ARN of the role that I want to use when executing this automation. It’s important to remember that this role will need permissions to call any of the services that are requested by the scripts. In my example, that means it needs to be able to create EC2 instances. Once the document executes, I am taken to a screen that shows the steps of the document and gives me details about how long each step took and the success or failure of each step. I can also drill down into each step and examine the logs. As you can see, all three steps of my document completed successfully, and if I go to the Amazon Elastic Compute Cloud (EC2) console, I will now have an EC2 instance that I created with the tag LaunchedBySsmAutomation. These new features can be found today in all regions inside the AWS Systems Manager console, so you can start using them straight away. Happy Automating! — Martin;
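Console clicks are not the only way to start a run; the same document can also be executed through the Systems Manager API. Here is a minimal boto3 sketch, assuming the document and parameter names used in the example above; the AMI ID and role ARN below are placeholders.

import boto3

ssm = boto3.client('ssm')

# Start an execution of the automation document created above. Parameter
# values are passed as lists of strings, mirroring what the console asks for.
execution = ssm.start_automation_execution(
    DocumentName='LaunchInstance',
    Parameters={
        'imageId': ['ami-0123456789abcdef0'],   # placeholder AMI ID
        'tagValue': ['LaunchedBySsmAutomation'],
        'instanceType': ['t2.micro'],
        'assumeRole': ['arn:aws:iam::123456789012:role/AutomationRole'],  # placeholder ARN
    },
)
print('Started execution:', execution['AutomationExecutionId'])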

2019 Fall Hackathon: Propelling WP Engine Forward, Faster

WP Engine -

WP Engine, like any engine, needs fuel to press ahead. Innovation is the spark that ignites and propels us forward faster, and to keep that ingenious spark lit, we actively foster a creative and collaborative environment at WP Engine where cutting-edge ideas can take root and flourish. Our bi-annual Hackathons play an integral role in… The post 2019 Fall Hackathon: Propelling WP Engine Forward, Faster appeared first on WP Engine.

Impressions From WordCamp US 2019

InMotion Hosting Blog -

As a longtime sponsor of open-source projects, InMotion Hosting was thrilled to have the opportunity to sponsor WordCamp US 2019 in St. Louis, Missouri. WordCamp US is the year-end WordPress meetup for North America. There were over 2,000 attendees – including Cody, Harry, and Joseph from InMotion Hosting. Each of them attended an expert speaker session, and we wanted to take the opportunity to share their highlights:

Cody Murphy On Marketing and Automation

I attended a fascinating session about automation by Beka Rice, Head of Product at Skyverge. Continue reading Impressions From WordCamp US 2019 at The Official InMotion Hosting Blog.

What are PHP Workers and Why You Should Care

Nexcess Blog -

Have you ever browsed through your favorite coffee shop’s website and, as you check out with that new order of coffee, ended up getting a 504 error after a delay? Or maybe you were browsing your favorite sports website and, as you try to load the next page, it takes a while and comes back with a timeout error? These situations are frustrating and not what we expect when we visit a site. In both cases, the cause may be not having enough PHP workers allocated to the site. Without enough PHP workers, a site can’t keep up with its requests when a larger number of them come in. That’s not a good situation, as site speed is incredibly important for converting visitors into sales leads and customers.

What is a PHP Worker?

A PHP worker is essentially a mechanism that handles requests for a website that require back-end processing. Generally, any request that cannot be served from static or cached files is handled by a PHP worker. This is usually an active task such as an inventory check on a specific item, or it could be something as complex as viewing and listing all prior orders for a customer. When a PHP worker is started, it remains persistent until its processes are completed or certain conditions are met.

Think of PHP workers as checkout lines at a grocery store, where each item to be scanned is a PHP process. If you only have one PHP worker (one checkout line), then everything must go through that single lane, and the cashier can only work through one order at a time. PHP workers can limit the number of concurrent, or simultaneous, transactions on a site. If you have only four PHP workers (four checkout lines), the site can only process four transactions at once. However, this does not mean that the fifth customer (PHP process) or beyond does not get processed. PHP processes are placed in a queue for the worker, which processes the first request in line and then moves on to the next PHP process in the queue. In other words, a long line forms and people start waiting. Luckily, PHP workers process information faster than grocery store cashiers; they work very quickly and can clear most processes within milliseconds. By adding only a few additional PHP workers, you can run many more concurrent processes at one time, meaning more customer orders can be processed at once.

What Happens When You Have Too Few PHP Workers

Let’s say you have only two PHP workers on a site, along with several plugins and a heavy theme. Those two PHP workers will constantly be used just to process plugin and theme work, leaving a queue to build up immediately for new page requests from visitors to your site. If you are running an ecommerce site on top of this, it will only increase the queue. Much like customers waiting in line, some PHP processes will abandon the line. Processes that are not written to abandon the line, or to time out, will sit and wait, and they will begin to put a much higher load on server resources. It’s like the checkout line is now wrapping around the block! PHP processes on a WordPress website can be as simple as the submission of a contact form or a request to geolocate a visitor based upon their IP or zip code. For eCommerce websites, this can look a little different: items such as new orders being processed, carts, and customer logins would all utilize PHP workers. Product pages and descriptions will usually be cached, so viewing them generally does not require a PHP process.
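To make the checkout-line analogy concrete, here is a toy Python simulation; it is not how PHP-FPM actually schedules work, just an illustration of how the same number of requests drains faster with more workers.

import queue
import threading
import time

def simulate(num_workers, num_requests, seconds_per_request=0.05):
    # Fill a queue with pretend page requests.
    requests = queue.Queue()
    for i in range(num_requests):
        requests.put(i)

    def worker():
        # Each worker pulls the next request off the queue until it is empty.
        while True:
            try:
                requests.get_nowait()
            except queue.Empty:
                return
            time.sleep(seconds_per_request)  # pretend to render a page

    start = time.time()
    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.time() - start

# Twenty incoming requests: two workers versus eight workers.
print("2 workers:", round(simulate(2, 20), 2), "seconds")
print("8 workers:", round(simulate(8, 20), 2), "seconds")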
Having only three to five PHP workers means that you can only have that many simultaneous transactions on the website, and that the PHP workers will process requests in the order they were triggered (just like a shopping line).

How To Lighten The Load For Your PHP Workers

A common problem area to start with is having too many plugins and heavy themes. You can generally help alleviate issues caused by a bloated website with these tips:

- Add site caching with a plugin
- Reduce external calls to remote sites
- General site optimization

Site optimization can get complicated, especially with sites that experience heavier traffic, which requires more attention to detail. Generally, the larger the site, the more efficient the site must be in the way it requests its styles, products, orders, and customers. This way, you use the PHP workers less for general site functionality, and they can process what matters – your traffic – effectively. Nexcess plans come with enough concurrent users for even the largest of sites to manage traffic. With Nexcess, you already have 20 concurrent users as part of an XS plan. This increases in increments of 20 as you move up to the XXL plan (which has 120). Other managed application platforms offer anywhere from two to four PHP workers in introductory offerings. Nexcess Managed WordPress and WooCommerce also have server-side caching built in, which helps minimize the use of PHP workers for static content, allowing them to process requests from the people who matter most: your customers.

Maintain a Faster Site with More PHP Workers

PHP workers can manage thousands of processes each; however, many factors come into play, including how many external calls they are making and how many plugins are competing with queries to the database. Additionally, adding PHP workers to a site will also increase the resources used on the server. The more PHP processes running, the more RAM and CPU will be needed, creating heavier loads on the server; keeping the site as optimized as possible can reduce that load. PHP workers are key, but they are not a magic, one-size-fits-all solution. The more plugins you have (even inactive ones), the more PHP workers are used to process non-static requests. The same applies to feature-heavy themes. For this reason, it is always a good idea to use caching and a CDN to help reduce the task load for your PHP workers. This will optimize your site to process customer requests in the fastest manner possible.

Start your WooCommerce store knowing that it’s ready to handle traffic requirements. Learn more. The post What are PHP Workers and Why You Should Care appeared first on Nexcess Blog.

The 6 Best WordPress Plugins for Podcasts

DreamHost Blog -

Podcasting is an online content option that has steadily grown in popularity. If you’re just thinking about starting one, however, navigating what you need to produce and display a podcast can be a little daunting. Fortunately, podcasts and WordPress go together like peanut butter and jelly. Open-source developers have built many plugins for WordPress that make it easy to create an online experience for your listeners. Whether you want to import content or style a custom player for your website, there’s a plugin to make your job easier. In this article, we’ll review some of the reasons you should consider adding a podcast to your website. We’ll also provide an extensive list of the best podcast plugins that are available and weigh up their pros and cons. If you’re ready for a soundcheck, let’s get started!

Build a Website for Your Podcast: We offer budget-friendly Shared Hosting services with robust features and resources to help you create the perfect site. Plans start at $2.59/mo. Choose Your Plan.

Why You Should Consider Adding a Podcast to Your WordPress Website

It’s no secret that podcasting has become extremely popular. The Edison Research group found that more than 32% of the United States population listened to a podcast in the last month. This was a result of its 2019 study, which also showed a decline in social media use for the first time. There are plenty of reasons for this shift, including a rise in smart speaker ownership and a low barrier to entry for podcasting. What’s more, a podcast can be a highly effective way to boost customer engagement, increase Search Engine Optimization (SEO), and add interesting, shareable content to your website.

Related: How to Create Your First WordPress Plugin

6 Podcast Plugins for Your WordPress Site

Getting your podcast off the ground can take a little work, of course. The good news is that there are lots of WordPress plugins that can help you integrate your podcast with your website. Let’s take a look at six of the top options.

1. PowerPress

Created by Blubrry, a podcast host, the PowerPress plugin has an easy-to-use interface while also bringing powerful tools to your audio production. Whether you use Blubrry to host your podcast or not, you’ll get great import options with this tool. PowerPress comes with migration tools and is fully compliant for integration with both Google Play and Apple Podcasts. Additionally, you can make use of this plugin’s subscriber tools and podcast statistics features.

Key Features:
- Provides simple and advanced modes for flexible administration
- Supports embeds from video platforms like YouTube
- Includes podcast SEO tools

Pros: PowerPress is definitely an all-in-one tool for integrating podcasts into your WordPress website. If you’re looking for something beyond basic functionality, this plugin can get you started.

Cons: You’ll need to host your podcast on Blubrry to access advanced features, such as professional-level podcast statistics and premium audio players.

Price: The PowerPress plugin is free, but you’ll need a Blubrry hosting plan to access premium options.

2. Seriously Simple Podcasting

Seriously Simple Podcasting is brought to you by another excellent podcasting platform: Castos podcast hosting created this plugin to make it easy to integrate your podcast with WordPress. Considered a great option for beginners, it incorporates all the best parts of WordPress and focuses them on your podcast. You can use any post type for your podcast episodes, for example, and migrate your content with a one-click option.
Additionally, you can use the WordPress Block Editor to add podcast information and episodes easily to your website.

Key Features:
- Uses the WordPress interface to administrate podcasts
- Works with any podcast host
- Lets you gather statistics through a free add-on

Pros: A great beginner option with room to grow, Seriously Simple Podcasting definitely makes new users feel at home within the familiar WordPress workspace.

Cons: You’ll get more features if you host your podcast with Castos.

Price: This is a free plugin, but you’ll require a Castos hosting plan to access advanced features.

Related: Keep Your Content Fresh: How to Repurpose Old Blog Posts

3. Libsyn Publisher Hub

The Libsyn Publisher Hub plugin is designed to work with the Libsyn podcast hosting platform. If you host with Libsyn, this plugin ties your WordPress website to your hosting account and adds a new content block to the WordPress Editor. Additionally, this plugin makes it easy to store your media files off your website’s server, as Libsyn will host them for you. This keeps your WordPress site light and fast. You can also link to your shows, import them to WordPress, and create new posts to feature them.

Key Features:
- Enables you to use WordPress to upload your podcast files directly to Libsyn
- Includes Apple optimization tags for Apple Podcasts
- Provides advanced scheduling tools that help to create smooth workflows

Pros: Being able to upload media files from WordPress to Libsyn is a big plus. It means you only have to sign into one place, but you’ll get the benefit of both systems.

Cons: This plugin is for Libsyn users only.

Price: The plugin is free, but it requires a Libsyn hosting plan.

4. Podlove Podcast Publisher

Podlove Podcast Publisher is an independent plugin that’s not tied to any single podcast host. With this plugin, you’ll be able to create a separate post type to feature your episodes. You’ll also be able to use the WordPress interface to manage just about all aspects of your podcast. You can integrate Podlove with any podcast host where you have control over your file naming. This means you can use anything from simple File Transfer Protocol (FTP)-controlled storage to cloud storage solutions.

Key Features:
- Includes customizable template options
- Provides robust statistics features, including stats on downloads
- Offers an included player that supports multiple audio and video formats

Pros: Podlove gives you the option to forego purchasing a podcast hosting plan, and offers you a bit more flexibility and control.

Cons: There’s no official support, so you’ll need to get help from the community of users (if required).

Price: This plugin is free. In addition, Podlove used to offer paid-for professional support plans but has suspended new contracts for the time being.

5. Podcast Player

If you’re looking for a fast and easy way to load podcast episodes into your website’s pages or posts, Podcast Player is worth checking out. This plugin uses your podcast’s RSS feed to produce a simple but effective podcast player on your website. When you install the Podcast Player plugin, you’ll get a new podcast option in the WordPress Block Editor. Adding this to a post gives you a palette of options that you can use to customize your player.
Key Features:
- Lets you display more than one player on a page or post
- Supports video feeds
- Has a completely responsive design

Pros: This is a simple yet effective podcast plugin that is not partial to any particular podcasting host, leaving you free to move around or host with your favorite solution.

Cons: If you have more than one podcast you want to feature, you’ll need to use the Block Editor to create separate players. You can only display one podcast feed per player.

Price: This is a completely free plugin.

6. Buzzsprout Podcasting

The Buzzsprout Podcasting plugin is designed for users with Buzzsprout hosting accounts. This tool is unique in that its functionality lives in your WordPress Media library. When you want to embed a podcast, you’ll use the media insert option to place an attractive player onto your page or post. The way this plugin works is simple. When you use the embed option, the plugin will grab your Buzzsprout podcast feed information and display it on your website. Additionally, Buzzsprout is Apple Podcast compliant.

Key Features:
- Offers players designed with HTML5
- Provides access to show statistics
- Can display Apple Podcast artwork

Pros: This is a quick and easy way to embed a podcast on your website in a well-designed player.

Cons: This plugin is designed to work only with Buzzsprout accounts.

Price: The plugin is free – while you’ll have to have a Buzzsprout account, it does offer a free podcast hosting option, and the plugin comes with all of its plans.

How to Get Started With Podcasting

If you’ve decided to start a podcast, you can take various approaches. There’s a simple, budget-friendly path, in addition to a pricier option with decked-out recording equipment. However, let’s go over a few essentials you should consider before diving into production. There are a few things that are important when starting out, including:

- Gear. At a minimum, you’ll need a computer and a microphone. While you can use your computer’s built-in microphone, you’ll get better quality sound with an external one. If you have a bigger budget, there are lots of additional gadgets you can explore for your podcast as well.
- A podcast host. It’s important to understand that this is usually different from your website host. A podcast host specializes in media storage. This is the best way to keep your website free from the burden of heavy media files.
- A show strategy. Planning out your content helps you build a show that listeners will love. Creating a posting schedule is one way to form that strategy.
- Audio editing software. From Audacity to Adobe, there are quite a few options when it comes to audio editing.

Regardless of your budget, there are podcast strategies to suit just about any situation. With roughly 50% of American households tuning in to podcasts, there’s a good possibility that you’ll be able to build a loyal following.

Podcast Publisher, Here You Come

Podcasting is a fun, creative, and easy way to engage with your audience. In many ways, it can be a more personal experience for customers who are looking to learn about your products or services. Here at DreamHost, we love a good podcast. We offer a variety of shared hosting plans that are just right for getting your podcast website off the ground. Our Unlimited Shared plan starts at $5.95 per month and brings you peace of mind with secure backups and instant WordPress setup. You can podcast with confidence, knowing that your listeners will never miss an episode due to a crashed site!
The post The 6 Best WordPress Plugins for Podcasts appeared first on Website Guides, Tips and Knowledge.
