Industry Buzz

Create Snapshots From Any Block Storage Using EBS Direct APIs

Amazon Web Services Blog -

I am excited to announce that you can now create Amazon Elastic Block Store (EBS) snapshots from any block storage data, such as on-premises volumes, volumes from another cloud provider, existing block data stored on Amazon Simple Storage Service (S3), or even your own laptop :-)

AWS customers using the cloud for disaster recovery of on-premises infrastructure all have the same question: how can I transfer my on-premises volume data to the cloud efficiently and at low cost? Until now, you typically created temporary Amazon Elastic Compute Cloud (EC2) instances, attached EBS volumes, transferred the data at block level from on-premises to these new volumes, took a snapshot of every EBS volume created, and tore down the temporary infrastructure. Some of you chose to use CloudEndure to simplify this process. Others simply gave up and did not copy their on-premises volumes to the cloud because of the complexity.

To simplify this, we are announcing today three new APIs as part of EBS direct APIs, a set of APIs we announced at re:Invent 2019. We initially launched read and diff APIs; today we extend them with write capabilities. These three new APIs allow you to create EBS snapshots from your on-premises volumes, or from any block storage data that you want to be able to store and recover in AWS. With the addition of write capability, you can now create new snapshots from your on-premises volumes, create incremental snapshots, and delete them. Once a snapshot is created, it has all the benefits of snapshots created from EBS volumes: you can copy them, share them between AWS accounts, keep them available for Fast Snapshot Restore, or create EBS volumes from them.
Having EBS snapshots created from any volume, without the need to spin up EC2 instances and EBS volumes, simplifies and lowers the cost of creating and managing your disaster recovery copy in the cloud.

Let's have a closer look at the API. You first call StartSnapshot to create a new snapshot. When the snapshot is incremental, you pass the ID of the parent snapshot. You can also pass additional tags to apply to the snapshot, or encrypt the snapshot and manage the key, just like usual. If you choose to encrypt snapshots, be sure to check our technical documentation to understand the nuances and options.

Then, for each block of data, you call PutSnapshotBlock. This API has six mandatory parameters: snapshot-id, block-index, block-data, block-length, checksum, and checksum-algorithm. The API supports block lengths of 512 KiB. You can send your blocks in any order, and in parallel; block-index keeps the order correct.

After you send all the blocks, you call CompleteSnapshot, with the changed-blocks-count parameter set to the number of blocks you sent.

Let's put all this together. Here is the pseudocode you would write to create a snapshot:

AmazonEBS amazonEBS = AmazonEBSClientBuilder.standard()
    .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(endpointName, awsRegion))
    .withCredentials(credentialsProvider)
    .build();
response = amazonEBS.startSnapshot(startSnapshotRequest);
snapshotId = response.getSnapshotId();
for each (block in changeset) {
    putResponse = amazonEBS.putSnapshotBlock(putSnapshotBlockRequest);
}
amazonEBS.completeSnapshot(completeSnapshotRequest);

As usual, when using this code, you must have appropriate IAM policies allowing calls to the new APIs.
For example:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "ebs:StartSnapshot",
      "ebs:PutSnapshotBlock",
      "ebs:CompleteSnapshot"
    ],
    "Resource": "arn:aws:ec2:<Region>::snapshot/*"
  }]
}

You also need to include some KMS-related permissions when creating encrypted snapshots. In addition to the storage cost for snapshots, there is a charge per API call when you call PutSnapshotBlock. These new snapshot APIs are available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), China (Beijing), China (Ningxia), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), and South America (São Paulo). You can start using them today. -- seb
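To make the StartSnapshot / PutSnapshotBlock / CompleteSnapshot walkthrough above concrete, here is a minimal Python sketch of the same flow. The client object is passed in so the sketch stays runnable without AWS access; the method and parameter names mirror the EBS direct APIs as exposed by a boto3-style client, and should be verified against the SDK documentation before use.

```python
import base64
import hashlib

def upload_snapshot(ebs, volume_size_gib, blocks, parent_snapshot_id=None):
    """Drive the StartSnapshot / PutSnapshotBlock / CompleteSnapshot flow.

    `ebs` is any client object exposing the three EBS direct APIs (for
    example, a boto3 'ebs' client); `blocks` maps block index -> raw bytes.
    """
    kwargs = {"VolumeSize": volume_size_gib}
    if parent_snapshot_id is not None:
        # Passing a parent snapshot ID makes this an incremental snapshot.
        kwargs["ParentSnapshotId"] = parent_snapshot_id
    snapshot_id = ebs.start_snapshot(**kwargs)["SnapshotId"]

    # Blocks may be sent in any order (and in parallel); BlockIndex keeps
    # ordering correct. The checksum is the Base64-encoded SHA256 of the data.
    for index, data in sorted(blocks.items()):
        checksum = base64.b64encode(hashlib.sha256(data).digest()).decode("ascii")
        ebs.put_snapshot_block(
            SnapshotId=snapshot_id,
            BlockIndex=index,
            BlockData=data,
            DataLength=len(data),
            Checksum=checksum,
            ChecksumAlgorithm="SHA256",
        )

    # ChangedBlocksCount must match the number of blocks actually sent.
    ebs.complete_snapshot(SnapshotId=snapshot_id, ChangedBlocksCount=len(blocks))
    return snapshot_id
```

Because the client is injected, the same function can be exercised against a stub in tests and against a real boto3 client in production.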

AWS IoT SiteWise – Now Generally Available

Amazon Web Services Blog -

At AWS re:Invent 2018, we announced AWS IoT SiteWise in preview, a fully managed AWS IoT service that you can use to collect, organize, and analyze data from industrial equipment at scale. Getting performance metrics from industrial equipment is challenging because data is often locked into proprietary on-premises data stores and typically requires specialized expertise to retrieve and place in a format that is useful for analysis. AWS IoT SiteWise simplifies this process by providing software running on a gateway that resides in your facilities and automates the process of collecting and organizing industrial equipment data. With AWS IoT SiteWise, you can easily monitor equipment across your industrial facilities to identify waste, such as breakdowns of equipment and processes, production inefficiencies, and defects in products. Last year at AWS re:Invent 2019, several new features were launched, including SiteWise Monitor. Today, I am excited to announce that AWS IoT SiteWise is now generally available in the US East (N. Virginia), US West (Oregon), Europe (Frankfurt), and Europe (Ireland) Regions. Let's see how AWS IoT SiteWise works!

AWS IoT SiteWise – Getting Started
You can easily explore AWS IoT SiteWise by creating a demo wind farm with a single click on the AWS IoT SiteWise console. The demo deploys an AWS CloudFormation template to create assets and generate sample data for up to a week. You can find the SiteWise demo in the upper-right corner of the AWS IoT SiteWise console; choose Create demo. The demo takes around 3 minutes to create demo models and assets representing a wind farm. Once you see the created assets in the console, you can create virtual representations of your industrial operations with AWS IoT SiteWise assets. An asset can represent a device, a piece of equipment, or a process that uploads one or more data streams to the AWS Cloud.
For example, a wind turbine might send air temperature, propeller rotation speed, and power output time-series measurements to asset properties in AWS IoT SiteWise. You can also securely collect data from the plant floor (from sensors, equipment, or a local on-premises gateway) and upload it to the AWS Cloud using gateway software called AWS IoT SiteWise Connector. It runs on common industrial gateway devices running AWS IoT Greengrass, and reads data directly from servers and historians over the OPC Unified Architecture protocol. AWS IoT SiteWise also accepts MQTT data through AWS IoT Core, and supports direct ingestion using a REST API. You can learn how to collect data using AWS IoT SiteWise Connector in Part 1 of the AWS IoT Blog series and in the service documentation.

SiteWise Monitor – Creating Managed Web Applications
Once data is stored in AWS IoT SiteWise, you can stream live data in near real time and query historical data to build downstream IoT applications, but we also provide a no-code alternative with SiteWise Monitor. You can explore your library of assets, and create and share operational dashboards with plant operators for real-time monitoring and visualization of equipment health and output. In the SiteWise Monitor console, choose Create portal to create a web application that is accessible from a web browser on any web-enabled desktop, tablet, or phone; users sign in with their corporate credentials through the AWS Single Sign-On (SSO) experience. Administrators can create one or more web applications to easily share access to asset data with any team in your organization and accelerate insights. If you click a given portal link and sign in with your AWS SSO credentials, you can visualize and monitor your device, process, and equipment data to quickly identify issues and improve operational efficiency. You can create a dashboard in a new project for your team so they can visualize and understand your project data.
Choose a visualization type that best displays your data, and rearrange and resize visualizations to create a layout that fits your team's needs. The dashboard shows asset data and computed metrics in near real time, or you can compare and analyze historical time-series data from multiple assets and different time periods. There is also a new dashboard feature that lets you specify thresholds on the charts and have the charts change color when those thresholds are exceeded. You can learn how to monitor key measurements and metrics of your assets in near real time using SiteWise Monitor in Part 2 of the AWS IoT Blog series. Furthermore, you can subscribe to AWS IoT SiteWise modeled data via the AWS IoT Core rules engine, enable condition monitoring and send notifications or alerts using AWS IoT Events in near real time, and enable Business Intelligence (BI) reporting on historical data using Amazon QuickSight. For more details, please refer to the hands-on guide in Part 3 of the AWS IoT Blog series.

Now Available!
With AWS IoT SiteWise, you only pay for what you use, with no minimum fees or mandatory service usage. You are billed separately for usage of messaging, data storage, data processing, and SiteWise Monitor. This approach provides billing transparency because you only pay for the specific AWS IoT SiteWise resources you use. Please visit the pricing page to learn more and estimate your monthly bill using the AWS IoT SiteWise Calculator. You can watch interesting talks about business cases and solutions in 'Driving Overall Equipment Effectiveness (OEE) Across Your Industrial Facilities' and 'Building an End-to-End Industrial IoT (IIoT) Solution with AWS IoT'. To learn more, please visit the AWS IoT SiteWise website, the tutorial, and the developer guide. Explore AWS IoT SiteWise with Bill Vass and Cherie Wong! Please send us feedback either in the forum for AWS IoT SiteWise or through your usual AWS support contacts. – Channy;
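As a concrete illustration of the direct-ingestion path mentioned in this post, here is a small sketch that builds one payload entry for a BatchPutAssetPropertyValue-style request. The field names follow my reading of the AWS IoT SiteWise data API and should be treated as assumptions to verify against the service documentation.

```python
import time

def make_property_entry(entry_id, property_alias, value, time_in_seconds=None):
    """Build one entry for a BatchPutAssetPropertyValue-style request.

    NOTE: field names are assumptions based on the AWS IoT SiteWise data
    API; check the service documentation before using this for real.
    """
    ts = int(time_in_seconds if time_in_seconds is not None else time.time())
    return {
        "entryId": entry_id,
        "propertyAlias": property_alias,
        "propertyValues": [
            {
                "value": {"doubleValue": float(value)},
                "timestamp": {"timeInSeconds": ts, "offsetInNanos": 0},
                "quality": "GOOD",
            }
        ],
    }

# One measurement from a hypothetical wind turbine's rotation-speed stream.
entry = make_property_entry("turbine1-rpm", "/windfarm/turbine1/rpm", 13.7, 1594111541)
```

A list of such entries would then be passed to the batch ingestion call of an iotsitewise client; the property alias shown is an invented example.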

Choosing a hosting platform in 2020

cPanel Blog -

Choosing a hosting platform in 2020 is like navigating a labyrinth, and with so many options, it can seem like a daunting task. Over the past decade, the web hosting market has grown more than 100%, and it is currently valued at $62 billion in 2020. As with much of technology over the past decade, web hosting is in constant flux. From new technologies to consolidation and acquisitions, competition in the industry has never been so fierce. Aspects to consider ...

AWS Well-Architected Framework – Updated White Papers, Tools, and Best Practices

Amazon Web Services Blog -

We want to make sure that you are designing and building AWS-powered applications in the best possible way. Back in 2015 we launched AWS Well-Architected to make sure that you have all of the information that you need to do this right. The framework is built on five pillars:

Operational Excellence – The ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures.
Security – The ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.
Reliability – The ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues.
Performance Efficiency – The ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.
Cost Optimization – The ability to run systems to deliver business value at the lowest price point.

Whether you are a startup, a unicorn, or an enterprise, the AWS Well-Architected Framework will point you in the right direction and then guide you along the way as you build your cloud applications.

Lots of Updates
Today we are making a host of updates to the Well-Architected Framework! Here's an overview:

Well-Architected Framework – This update includes new and updated questions, best practices, and improvement plans, plus additional examples and architectural considerations. We have added new best practices in operational excellence (organization), reliability (workload architecture), and cost optimization (practice Cloud Financial Management). We are also making the framework available in eight additional languages (Spanish, French, German, Japanese, Korean, Brazilian Portuguese, Simplified Chinese, and Traditional Chinese). Read the Well-Architected Framework (PDF, Kindle) to learn more.
Pillar White Papers & Labs – We have updated the white papers that define each of the five pillars with additional content, including new & updated questions, real-world examples, additional cross-references, and a focus on actionable best practices. We also updated the labs that accompany each pillar: Operational Excellence (PDF, Kindle, Lab), Security (PDF, Kindle, Lab), Reliability (PDF, Kindle, Lab), Performance Efficiency (PDF, Kindle, Lab), and Cost Optimization (PDF, Kindle, Lab).

Well-Architected Tool – We have updated the AWS Well-Architected Tool to reflect the updates that we made to the Framework and to the White Papers.

Learning More
In addition to the documents that I linked above, you should also watch these videos. In the first video, AWS customer Cox Automotive talks about how they are using AWS Well-Architected to deliver results across over 200 platforms. In the second video, my colleague Rodney Lester tells you how to build better workloads with the Well-Architected Framework and Tool.

Get Started Today
If you are like me, a lot of interesting services and ideas are stashed away in a pile of things that I hope to get to "someday." Given the importance of the five pillars that I mentioned above, I'd suggest that Well-Architected does not belong in that pile, and that you should do all that you can to learn more and to become well-architected as soon as possible! — Jeff;

New – Label Videos with Amazon SageMaker Ground Truth

Amazon Web Services Blog -

Launched at AWS re:Invent 2018, Amazon SageMaker Ground Truth is a capability of Amazon SageMaker that makes it easy to annotate machine learning datasets. Customers can efficiently and accurately label image, text, and 3D point cloud data with built-in workflows, or any other type of data with custom workflows. Data samples are automatically distributed to a workforce (private, third-party, or MTurk), and annotations are stored in Amazon Simple Storage Service (S3). Optionally, automated data labeling may also be enabled, reducing both the amount of time required to label the dataset and the associated costs. As models become more sophisticated, AWS customers are increasingly applying machine learning prediction to video content. Autonomous driving is perhaps the most well-known use case, as safety demands that road conditions and moving objects be correctly detected and tracked in real time. Video prediction is also a popular application in sports, tracking players or racing vehicles to compute all kinds of statistics that fans are so fond of. Healthcare organizations also use video prediction to identify and track anatomical objects in medical videos. Manufacturing companies do the same to track objects on the assembly line, parcels for logistics, and more. The list goes on, and amazing applications keep popping up in many different industries. Of course, this requires building and labeling video datasets, where objects of interest need to be labeled manually. At 30 frames per second, one minute of video translates to 1,800 individual images, so the amount of work can quickly become overwhelming. In addition, specific tools have to be built to label images, manage workflows, and so on. All this work takes valuable time and resources away from an organization's core business. AWS customers have asked us for a better solution, and today I'm very happy to announce that Amazon SageMaker Ground Truth now supports video labeling.
Customer use case: the National Football League
The National Football League (NFL) has already put this new feature to work. Says Jennifer Langton, SVP of Player Health and Innovation, NFL: “At the National Football League (NFL), we continue to look for new ways to use machine learning (ML) to help our fans, broadcasters, coaches, and teams benefit from deeper insights. Building these capabilities requires large amounts of accurately labeled training data. Amazon SageMaker Ground Truth was truly a force multiplier in accelerating our project timelines. We leveraged the new video object tracking workflow in addition to other existing computer vision (CV) labeling workflows to develop labels for training a computer vision system that tracks all 22 players as they move on the field during plays. Amazon SageMaker Ground Truth reduced the timeline for developing a high quality labeling dataset by more than 80%”. Courtesy of the NFL, here are a couple of predicted frames showing helmet detection in a Seattle Seahawks video. This particular video has 353 frames; the first picture is frame #100, and the second is frame #110.

Introducing Video Labeling
With the addition of video task types, customers can now use Amazon SageMaker Ground Truth for:

Video clip classification
Video multi-frame object detection
Video multi-frame object tracking

The multi-frame task types support multiple labels, so that you may label different object classes present in the video frames. You can create labeling jobs to annotate frames from scratch, as well as adjustment jobs to review and fine-tune frames that have already been labeled. These jobs may be distributed either to a private workforce, or to a vendor workforce you pick on AWS Marketplace. Using the built-in GUI, workers can then easily label and track objects across frames.
Once they’ve annotated a frame, they can use an assistive labeling feature to predict the location of bounding boxes in the next frame, as you will see in the demo below. This significantly simplifies labeling work, saves time, and improves the quality of annotations. Last but not least, work is saved automatically.

Preparing Input Data for Video Object Detection and Tracking
As you would expect, input data must be located in S3. You may bring either video files, or sequences of video frames. The first option is the simplest, as Amazon SageMaker Ground Truth includes a tool that automatically extracts frames from your video files. Optionally, you can sample frames (1 in ‘n’) in order to reduce the amount of labeling work. The extraction tool also builds a manifest file describing sequences and frames. You can learn more about it in the documentation. The second option requires two steps: extracting frames, and building the manifest file. Extracting frames can easily be performed with the popular ffmpeg open source tool. Here's how you could convert the first 60 seconds of a video to a frame sequence:

$ ffmpeg -ss 00:00:00.00 -t 00:01:00.00 -i basketball.mp4 frame%04d.jpg

Each frame sequence should be uploaded to S3 under a different prefix, for example s3://my-bucket/my-videos/sequence1, s3://my-bucket/my-videos/sequence2, and so on, as explained in the documentation. Once you have uploaded your frame sequences, you may then either bring your own JSON files to describe them, or let Ground Truth crawl your sequences and build the JSON files and the manifest file for you automatically. Please note that a video sequence cannot be longer than 2,000 frames, which corresponds to about a minute of video at 30 frames per second. Each sequence should be described by a simple sequence file: a sequence number, an S3 prefix, and a number of frames, plus a list of frames (number, file name, and creation timestamp). Here's an example of a sequence file.
{"version": "2020-06-01", "seq-no": 1, "prefix": "s3://jsimon-smgt/videos/basketball", "number-of-frames": 1800, "frames": [ {"frame-no": 1, "frame": "frame0001.jpg", "unix-timestamp": 1594111541.71155}, {"frame-no": 2, "frame": "frame0002.jpg", "unix-timestamp": 1594111541.711552}, {"frame-no": 3, "frame": "frame0003.jpg", "unix-timestamp": 1594111541.711553}, {"frame-no": 4, "frame": "frame0004.jpg", "unix-timestamp": 1594111541.711555}, . . . Finally, the manifest file should point at the sequence files you’d like to include in the labeling job. Here’s an example. {"source-ref": "s3://jsimon-smgt/videos/seq1.json"} {"source-ref": "s3://jsimon-smgt/videos/seq2.json"} . . . Just like for other task types, the augmented manifest is available in S3 once labeling is complete. It contains annotations and labels, which you can then feed to your machine learning training job. Labeling Videos with Amazon SageMaker Ground Truth Here’s a sample video where I label the first ten frames of a sequence. You can see a screenshot below. I first use the Ground Truth GUI to carefully label the first frame, drawing bounding boxes for basketballs and basketball players. Then, I use the “Predict next” assistive labeling tool to predict the location of the boxes in the next nine frames, applying only minor adjustments to some boxes. Although this was my first try, I found the process easy and intuitive. With a little practice, I could certainly go much faster! Getting Started Now, it’s your turn. You can start labeling videos with Amazon Sagemaker Ground Truth today in the following regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Canada (Central), Europe (Ireland), Europe (London), Europe (Frankfurt), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Seoul), Asia Pacific (Sydney), Asia Pacific (Tokyo). We’re looking forward to reading your feedback. You can send it through your usual support contacts, or in the AWS Forum for Amazon SageMaker. - Julien

7 Best WordPress Survey Plugins

HostGator Blog -

There's nothing better than user feedback to improve your business and website. Often, when you're building your website, you're just making guesses about what your visitors want to see. If you have a deep understanding of your market and customers, then you might be correct. But being able to back up your opinions with data can go a long way toward building a better website and growing your business. Sending out a survey via email doesn't always work best. But what if you could integrate a survey into your existing website? Good news: with a WordPress survey plugin, you can. Below you'll learn what a WordPress survey plugin is, how it can benefit your website, and what you need to look for in a survey plugin. Finally, we'll break down some of the best WordPress survey plugins you can find today.

What is a WordPress Survey Plugin?
A WordPress survey plugin is a plugin that lets you run surveys for your website visitors. These plugins let you gather feedback on your products, services, and even your website as a whole. The overall feature set you'll get access to will depend upon the plugin you're using, but they'll generally have features similar to those below:

An easy survey and poll creation tool you can access from your dashboard
Multiple options to display the survey form on your site
Data reporting and graphical display features
Access to survey and poll data for exporting and analysis

As you'll soon learn, a lot of WordPress survey plugins integrate with popular form plugins, while others stand on their own as survey and polling tools.

What Features Should I Look for in a WordPress Survey Plugin?
There are a ton of different WordPress survey plugins out there on the market. This is both good and bad. Good, because this means you'll be able to find the perfect plugin to fit your needs.
Bad, because you'll have to sort through a ton of different plugins that might not work for you. Regardless of the plugin you decide to use, here are some features that you'll want to consider:

1. Survey Form Design
Take stock of your survey needs before you begin your plugin search. Some users will want nothing more than a simple pop-up form, while other website owners will prefer multi-page forms that let you save your answers throughout. There are all kinds of different survey design plugins out there. Some are simple, while others have advanced features like conditional logic for a personalized survey experience, built-in calculations, and more.

2. Built-in Survey Reporting
Survey reporting is how your survey results will display. Having to read through hundreds of survey responses on your own won't be very helpful. Instead, look for a survey plugin that'll help you quantify your results and make them actionable. Some features to look for include automated graph creation and percentage breakdowns, built-in embedding to display your results on your website, and more. A lot of the time, the data you generate can be transformed into engaging blog posts and other linkable assets.

3. How Easy It Is to Use
There's no point in installing a plugin if you're never going to be able to use it. Make sure you choose a plugin that's intuitive enough to use. The goal is to add useful surveys to your site, not spend all day fiddling with a plugin, trying to get it to work. However, ease of use will also be influenced by what you're trying to accomplish with the plugin. The greater the feature set and the more interactive you want to make your surveys, the higher the learning curve will be.

4. Overall Cost
Finally, you'll need to consider how much a plugin costs. Usually, most plugins (even premium ones) have free versions that contain fewer features overall. Sometimes all you need are the core features, and you can stick with the free version forever.
Other plugins only offer premium versions, with a flat rate, yearly, or monthly fee. Finally, there are forever-free plugins that cost nothing for as long as you use them. Make sure you choose a plugin that's in alignment with your budget. The beauty of WordPress having such a large plugin library is that you can always find a suitable plugin that fits within your budget.

The Best WordPress Survey Plugins
There are all kinds of WordPress survey plugins out there on the market today. Instead of having to sort through every available plugin, we've compiled a list of some of the best below. You'll find robust form plugins with survey functionality, simple polling plugins, interactive surveys, and more.

1. WPForms
WPForms is one of the most popular WordPress form plugins on the market. Beyond the ability to create online forms, you can also create in-depth surveys and polls for your visitors. To use the survey feature of this plugin, you'll need the Surveys and Polls add-on. This add-on combines the powerful form creation features with some very useful survey features like conditional logic, multi-page forms, email integration, and much more. When you're creating a form, you can easily convert it into a survey by clicking a button. Finally, you have access to the in-depth reporting features. This WordPress survey plugin will automatically create graphs and other visual reports from the survey responses you receive. You can even export these graphs to create reports or shareable social media graphics.

2. YOP Poll
YOP Poll helps you create simple polls and surveys and embed them across your website. The unique polling and data collection features let you run multiple polls at any given time. You can use either shortcodes or widgets to embed and run multiple polls on your site. There's also a template library, so you can find a theme that matches the design of your site.
If you're running a high volume of polls, you can also clone and duplicate polls you've created in the past. This plugin is very easy to use and integrates with virtually every theme on the market. You also have a high level of control over data reporting, and you can choose what information to display and which information to keep private.

3. Gravity Forms
Gravity Forms is one of the longest-running WordPress plugins. Like WPForms above, you'll need to get an add-on in order to add survey functionality to your site. Once you've activated the add-on, you can create robust survey forms with a variety of interactive survey fields. The results from your surveys and forms can only be viewed from within the admin area. To create charts and graphs with your data, you'll need to export the CSV data and use a third-party tool to create visual elements.

4. Quiz and Survey Master
Quiz and Survey Master is a plugin that'll help you add quizzes and surveys to your website. The interface of the plugin is a little bit difficult to use, but you will find extensive documentation to help you create engaging quizzes and surveys. The free version of this quiz plugin comes equipped with a ton of useful features like interactive answers, answer scoring, and even leaderboards. You can create multi-page forms, as well as quizzes, with this WordPress plugin. However, you won't be able to access in-depth data reporting and analysis unless you purchase the premium version of the plugin.

5. WP-Polls
WP-Polls is a very simple polling plugin. Unlike other form-based plugins on this list, this plugin doesn't include a form builder. If you want to customize the appearance of your polls, you can use the built-in templates or write custom CSS. You're only able to collect responses via checkboxes or radio buttons, so the depth of your data will be limited. However, this might be all you're looking for.
This plugin is best for simple voting polls when you're trying to see if users prefer one option over another. Once a user submits an answer, they'll automatically see the current results of the poll.

6. Formidable Forms
Formidable Forms is another feature-rich form plugin with survey and poll functionality. Even with the number of features this tool provides, it remains very easy to use. You'll get access to form response fields like checkboxes, radio buttons, dropdowns, and more. The form builder is equipped with a drag-and-drop interface, so you can quickly build complex forms. This plugin also has a lot of advanced data reporting features. You can showcase the results of your survey with charts, graphs, tables, pie charts, histograms, and more. This makes it super easy to share the data you've collected with your audience and across social media.

7. Poll, Survey, Form, & Quiz Maker by OpinionStage
Poll, Survey, Form & Quiz Maker is a great polling plugin that lets you add a variety of different interactive polls and surveys to your website. The surveys you can create with this WordPress plugin can even include images and videos. This can help make your surveys much more engaging than what some other plugins provide. You can create a variety of question types, like single or multiple answers, along with open-ended questions, and more. Once you're done creating your surveys, you can embed them across your site in pages, posts, sidebars, and widget areas.

Benefits of Using a WordPress Survey Plugin
Gathering data about your audience can be hard work. You can send out a link to a Google Form in an email blast to your list. Or, encourage your visitors to drop a comment or send you an email. Using a WordPress survey plugin helps to simplify this process. Not only will the survey forms you create have a better design, but you'll be able to collect more useful data as well. Here are a few smart reasons to use a WordPress survey plugin:

1.
Gain Better Market Understanding Uncovering more useful information about your market is never a bad thing. You can use this information to improve your website, build a better website, and even uncover new product and content opportunities. A great survey plugin will also make data collection easy. With appealing forms, your users will actually enjoy filling out your surveys and polls. 2. Improve Your Offerings/Website Once you’ve collected all that data, it’s time to put it to work. A WordPress survey plugin will help you transform responses into valuable insights. This can either be used internally, to improve your existing website, products, and services. Or, it can be transformed into engaging blog posts and social media graphics. Fresh data is always useful online, being able to generate it yourself and put it to work for you can be a valuable asset for your business. 3. 24/7 Data Collection Survey plugins make data collection easy. For example, you have a form that lives on your website 24/7/365. Plus, by making it easy to access and fun to fill out you’re encouraging your users to leave a response. Think about other forms of data collection, like passing out physical forms, or conducting interviews. All of these are time-intensive and make it very difficult to capture responses at scale. Hosting a survey on your site gives your visitors a chance to answer questions at their leisure.  Choosing the Best Survey Plugin for Your Website By now you know how survey plugins can benefit your site and the kinds of benefits you can receive when you start running your own surveys on your site. The WordPress survey plugins above all have slightly different offerings. Some are robust form plugins with survey functionality, while others are simple polling plugins. The plugin you choose to go with depends upon your needs as a website owner. Once again, refer to the list above to help determine which plugin is going to be the best fit for you. 
You can always try out a few plugins to see which fit best with your workflow and site design before you choose a plugin to use over the long-term. Find the post on the HostGator Blog

7 Tried and Tested Strategies to Make Your Website a Lead Magnet

Reseller Club Blog -

Building a customer base is one of the biggest challenges that new businesses face. While large enterprises spend thousands of dollars on pay-per-click ads and social media campaigns, small businesses don’t have those kinds of budgets and often have to rely on organic ways to find their customers. However, this should not dissuade you from starting your business. Fortunately, there are several easy-to-implement and inexpensive methods of capturing useful leads for your business, such as your official website. The advantage of using your site as a lead magnet is that you have full control over what you publish and how you track visits and conversions. Moreover, it improves the overall quality of your website, which is crucial when you are competing in cyberspace. If you are looking for ideas on how to turn your website into a lead magnet, read on for seven useful tips.

1. Pick the right domain name

This may seem unusual, but it is a great strategy for attracting quality leads. Your domain name is the first thing people see when they search for you. Finding the right one is a small yet crucial first step towards creating a strong web presence. By making it easier for people to find and remember you, you increase the likelihood of converting them into leads. When thinking about your domain name, consider the following:

Is it short? Long and clunky domain names are hard to remember, prone to being misspelt, and may also seem spammy. For instance, if you are looking to buy a secondhand car online and find two links in the search results, would you rather click on or  Pick a domain name that’s short and simple, ideally not more than 18 characters. Avoid using a hyphen, numbers, or modified spellings that may confuse people.

Is it contextual? Avoid going with something random and completely unrelated to your business just because the domain name of your choice wasn’t available.
Try finding your top preferences on new domain extensions such as .STORE, .ONLINE, .SITE, .SPACE or .TECH. Not only are they easier to acquire, but they also provide more context to your domain name. For instance, if you come across the domain name, you immediately know what the website is about, and the domain name has an appealing ring to it too.

Is it creative? You need a domain name that sets you apart from your rivals and intrigues people to click on your URL and learn more about you. Try to be creative without being cryptic. For instance, is a great choice for a wellness and fitness center.

2. Focus on your website’s design

Your website’s design should be clean, aesthetically appealing, and easy to navigate. Cluttered layouts can be off-putting for visitors and make you come across as unprofessional, whereas a well-designed site shows visitors that you are skilled and experienced and care about the finer details. Your design should inspire confidence in visitors that they have come to the right place. It should reflect your industry and your brand’s personality. For instance, a website that sells children’s learning aids should use playful elements and bright colors, while a news website should focus on high picture quality and clear, easy-to-read fonts with fewer distracting elements.

3. Reduce the loading speed

The Internet is all about instant gratification. If your website takes a long time to load, the bounce rate is bound to be high. According to this 2020 report by Ezoic, as load time grows from 1 to 3 seconds the probability of a bounce increases by 32%, and from 1 to 10 seconds it increases by 106%. Make sure that your website pages load within seconds by finding the right web host, compressing your images, installing fewer plugins, and minimizing HTTP requests.

4. Start and maintain a good blog

Customers spend a lot of time researching the products they need online. Why not provide them with everything they need on your own website?
Starting a blog is not only a great way to engage, educate, and inform your audience, but also a way to strengthen your credibility as an expert in the field. Post content related to your industry and products, such as the latest trends, product reviews, guides, and how-tos, in different formats such as articles, videos, infographics, and even podcasts. For inspiration, check out these blogs by Walmart, ExxonMobil, and Hewlett Packard.

5. Provide clear calls-to-action (CTAs) in your blog

Make it easy for visitors to take the desired action by providing relevant product links and CTAs throughout your blog posts. For instance, if you are an online fashion store and your blog post is about five stylish must-haves this monsoon, you can provide links to the products mentioned in your post along with short and simple CTAs, such as “Buy these boots at a flat 10% off”.

6. Optimize your website for mobile

A lot of online browsing and shopping takes place on mobile phones these days. According to a report published on Broadband Search, mobile traffic in 2019 was up 222% compared to 2013. With so much online traffic coming from mobile phones, every website needs to be optimized for use on phones. You don’t want visitors bouncing off your website just because it takes too long to load or the layout looks weird on their mobile devices. Test your website on various mobile devices such as smartphones and tablets to see how long it takes to load, how simple the navigation is, whether the layout looks good, and whether the pictures look high-quality and the text readable. You can even run it through an online test such as Google’s Mobile-Friendly Test to get a report on your website’s strong points and the areas that need improvement.

7. Ask for their email addresses

When visitors create accounts on your website, provide you with their email addresses, and subscribe to your newsletter, it helps you get repeat business from them.
You can use their information to track what they are browsing, which products and services interest them, their purchase history, and whether or not they have abandoned their shopping carts. This will help you retarget them with relevant information and products they are likely to buy, and even get them to complete the purchases in their abandoned carts through discounts and other special offers. Your website is the ideal place for capturing people’s email addresses. However, new visitors will often be wary of divulging personal contact details unless you give them an incentive to do so. For instance, you could offer them a free product such as an ebook, access to tutorial videos and other premium content, or attractive deals (Sign up now and get a flat 30% off your first purchase!). Once you have their information and a fair idea of their shopping and content-consumption behavior, your chances of converting them into leads, and eventually into repeat customers, will increase manifold.

Conclusion

Your website is like your storefront in the online world. It is a reflection of your identity and your personality. If there is one investment you must not compromise on, it is the quality of your website. A well-designed website with a professional look and feel and useful information can be your most useful tool for lead generation, which is the first step towards building a large, loyal customer base. The post 7 Tried and Tested Strategies to Make Your Website a Lead Magnet appeared first on ResellerClub Blog.

No Humans Involved: Mitigating a 754 Million PPS DDoS Attack Automatically

CloudFlare Blog -

On June 21, Cloudflare automatically mitigated a highly volumetric DDoS attack that peaked at 754 million packets per second. The attack was part of an organized four-day campaign starting on June 18 and ending on June 21: attack traffic was sent from over 316,000 IP addresses towards a single Cloudflare IP address that was mostly used for websites on our Free plan. No downtime or service degradation was reported during the attack, and no charges accrued to customers due to our unmetered mitigation guarantee. The attack was detected and handled automatically by Gatebot, our global DDoS detection and mitigation system, without any manual intervention by our teams. Notably, because our automated systems were able to mitigate the attack without issue, no alerts or pages were sent to our on-call teams and no humans were involved at all.

[Image: Attack Snapshot - Peaking at 754 Mpps. The two different colors in the graph represent two separate systems dropping packets.]

During those four days, the attack utilized a combination of three attack vectors over the TCP protocol: SYN floods, ACK floods, and SYN-ACK floods. The attack campaign was sustained for multiple hours at rates exceeding 400-600 million packets per second and peaked multiple times above 700 million packets per second, with a top peak of 754 million packets per second. Despite the high and sustained packet rates, our edge continued serving our customers during the attack without impacting performance at all.

The Three Types of DDoS: Bits, Packets & Requests

Attacks with high bits-per-second rates aim to saturate the Internet link by sending more bandwidth per second than the link can handle.
Mitigating a bit-intensive flood is similar to a dam blocking gushing water in a canal with limited capacity, allowing just a portion through.

[Image: Bit Intensive DDoS Attacks as a Gushing River Blocked By Gatebot]

In such cases, the Internet service provider may block or throttle the traffic above the allowance, resulting in denial of service for legitimate users who are trying to connect to the website but are blocked by the service provider. In other cases, the link is simply saturated and everything behind that connection is offline.

[Image: Swarm of Mosquitoes as a Packet Intensive DDoS Attack]

However, in this DDoS campaign the attack peaked at a mere 250 Gbps (I say “mere”, but ¼ Tbps is enough to knock pretty much anything offline if it isn’t behind some DDoS mitigation service), so it does not seem that the attacker intended to saturate our Internet links, perhaps because they know that our global capacity exceeds 37 Tbps. Instead, it appears the attacker attempted (and failed) to overwhelm our routers and data center appliances with high packet rates reaching 754 million packets per second. As opposed to water rushing towards a dam, a flood of packets can be thought of as a swarm of millions of mosquitoes that you need to zap one by one.

[Image: Zapping Mosquitoes with Gatebot]

Depending on the ‘weakest link’ in a data center, a packet-intensive DDoS attack may impact the routers, switches, web servers, firewalls, DDoS mitigation devices, or any other appliance that is used in-line. Typically, a high packet rate may cause the memory buffer to overflow, voiding the router’s ability to process additional packets. This is because there’s a small fixed CPU cost of handling each packet, so if you can send a lot of small packets you can block an Internet connection not by filling it but by overwhelming the hardware that handles the connection.

Another form of DDoS attack is one with a high HTTP request-per-second rate.
An HTTP request-intensive DDoS attack aims to overwhelm a web server’s resources with more HTTP requests per second than the server can handle. The goal of a DDoS attack with a high request-per-second rate is to max out the CPU and memory utilization of the server in order to crash it or prevent it from being able to respond to legitimate requests. Request-intensive DDoS attacks allow the attacker to generate much less bandwidth than bit-intensive attacks and still cause a denial of service.

Automated DDoS Detection & Mitigation

So how did we handle 754 million packets per second? First, Cloudflare’s network utilizes BGP Anycast to spread attack traffic globally across our fleet of data centers. Second, we built our own DDoS protection systems, Gatebot and dosd, which drop packets inside the Linux kernel for maximum efficiency in order to handle massive floods of packets. And third, we built our own L4 load balancer, Unimog, which uses our appliances’ health and various other metrics to load-balance traffic intelligently within a data center. In 2017, we published a blog post introducing Gatebot, one of our two DDoS protection systems. The post was titled Meet Gatebot - a bot that allows us to sleep, and that’s exactly what happened during this attack. The attack surface was spread out globally by our Anycast, then Gatebot detected and mitigated the attack automatically without human intervention. And traffic inside each data center was load-balanced intelligently to avoid overwhelming any one machine. As promised in the blog title, the attack peak did in fact occur while our London team was asleep. So how does Gatebot work? Gatebot asynchronously samples traffic from every one of our data centers in over 200 locations around the world. It also monitors our customers’ origin server health. It then analyzes the samples to identify patterns and traffic anomalies that can indicate attacks.
Once an attack is detected, Gatebot sends mitigation instructions to the edge data centers.

To complement Gatebot, last year we released a new system codenamed dosd (denial of service daemon), which runs in every one of our data centers around the world in over 200 cities. Similarly to Gatebot, dosd detects and mitigates attacks autonomously, but in the scope of a single server or data center. You can read more about dosd in our recent blog post.

The DDoS Landscape

While in recent months we’ve observed a decrease in the size and duration of DDoS attacks, highly volumetric and globally distributed DDoS attacks such as this one still persist. Regardless of the size, type, or sophistication of the attack, Cloudflare offers unmetered DDoS protection to all customers and plan levels, including the Free plans.
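The sample-analyze-mitigate loop described in the article can be sketched in a few lines. This is a deliberately simplified illustration, not Cloudflare's actual Gatebot logic: the sampling rate, threshold, data shapes, and function names are all invented for the example.

```python
from collections import defaultdict

# Simplified sketch of a sample-analyze-mitigate loop.
# All names and thresholds here are invented for illustration;
# this is NOT Cloudflare's actual Gatebot implementation.

SAMPLE_RATE = 1000          # 1 out of every 1000 packets is sampled
PPS_THRESHOLD = 1_000_000   # flag a destination above 1M packets/sec

def detect_attacks(samples, window_seconds):
    """Estimate per-destination packet rates from sampled packets and
    return destinations whose estimated rate exceeds the threshold."""
    counts = defaultdict(int)
    for pkt in samples:
        counts[pkt["dst_ip"]] += 1
    mitigations = []
    for dst, sampled in counts.items():
        estimated_pps = sampled * SAMPLE_RATE / window_seconds
        if estimated_pps > PPS_THRESHOLD:
            # A real system would push a drop or rate-limit rule
            # to the edge data centers at this point.
            mitigations.append((dst, estimated_pps))
    return mitigations

# Example: 5,000 sampled packets to one IP in a 1-second window
# implies an estimated ~5M pps, well above the threshold.
samples = [{"dst_ip": "203.0.113.1"} for _ in range(5000)]
print(detect_attacks(samples, window_seconds=1))
```

A production system additionally has to distinguish attack traffic from legitimate spikes such as flash crowds, which is one reason the article describes Gatebot analyzing traffic patterns and origin server health rather than raw rates alone.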

How To Fix “Error Establishing a Database Connection” in WordPress

cPanel Blog -

The “Error establishing a database connection” message strikes fear into a WordPress user’s heart, prompting many a panicked support request. You try to load a page, but all you see is a white box with a mysterious error message. WordPress is down, and the “helpful” suggestions beneath the error are more confusing than useful. How can you fix a database error when you can’t even open the admin dashboard to see what’s wrong? Fortunately, “Error ...

Smart Plugin Manager Does It Again: New Features Improve Productivity

WP Engine -

The best plugin manager for WordPress just got even better. WP Engine’s Smart Plugin Manager—the only comprehensive plugin manager for WordPress—now has several new features that make it even easier for site developers to take advantage of the massive plugin ecosystem available for WordPress sites. Launched in 2019, Smart Plugin Manager takes the headache out… The post Smart Plugin Manager Does It Again: New Features Improve Productivity appeared first on WP Engine.

How to Create a Link Building Strategy

DreamHost Blog -

Your website is not an island. While creating top-quality content is important, your website’s relationship with every other site on the vast sea of the internet is just as vital. You won’t get very far if no one is linking to your pages, and you can’t expect many people to do so without some effort on your part. Even if you can’t force people to link to your content (and you shouldn’t, because your mama taught you better than that), you can take some simple steps to encourage other sites to send visitors your way. All it takes to generate quality links is a little careful planning and a few proven techniques. In this post, we’ll talk about why you need a fully-developed link building strategy. Then we’ll explore how to create one effectively. Let’s get going!

What Is Link Building (And Why Does It Matter)?

Unless your website is very unusual, it’s going to contain a lot of links. Internal links point towards other pages on your own website, while external links point away from your site to other web pages. Having plenty of both is vital for your site’s User Experience (UX) and Search Engine Optimization (SEO). However, there’s another kind of link that should be on your radar as a website owner. Backlinks are links on other web pages that point towards your website. So if someone writes an article on their news site and includes a link to one of your blog posts, that’s a backlink. Backlinks are just as important as the links you include on your own site because:

Links to your site improve your visibility, helping to familiarize people with your brand. They also bring new visitors to your website, including those you might not have had an easy way to reach otherwise.
Google and other search engines view backlinks as a positive signal: they indicate that others find your content useful and worth linking to. Therefore, having plenty of quality links to your site (from relevant websites with high domain authority) can improve your search engine rankings. There’s no doubt that the more people link to your site, the better. However, there is one big problem when it comes to backlinks: you rarely control them. This means you’ll need to engage in some link building, or take steps to increase the number of backlinks pointing your way. Doing that isn’t always easy. There’s a lot of content for people to link to, and they may not even know about yours. So you’re most likely to succeed if you can put together a comprehensive, well-thought-out link building strategy.

Related: Don’t Let a Broken Link or These Other Common WordPress Errors Slow Your Site Down

The Dos and Don’ts of Link Building

In a moment, we’ll walk you through the process of putting together your link building strategy and successfully executing it. First, however, it’s important to cover some basics. There are things you’ll want to avoid (like the plague) while conducting your link building efforts. These include:

Avoid paying people to include your links on their sites. That’s generally considered unethical, and if Google finds out you’re doing it, you’ll be penalized severely.

Don’t mislead people about your links in an effort to get them featured. This is likely to backfire on you: if people click on a link leading to your site but find that your content isn’t relevant to them, they’re just going to leave.

Never manually spam other people’s sites with your links. It can be tempting to add links to your site’s content to as many other websites as possible. However, doing this too much can harm your credibility and get a lot of your links reported as spam.

Opt out of link directories and link exchange schemes.
These are shady techniques designed to generate a lot of links quickly; like the above methods, they can backfire and get the attention of Google (and not in a good way).

Familiarize yourself with “black hat” link building techniques, and don’t use them. This mostly means trying to get “hidden” links onto pages by cloaking them, making them hard to see, or even hacking directly into other sites. Pretty gross, right?

Some of these are obviously bad ideas, while others (such as link exchanges) might initially seem smart until you learn more about them. None are worth the risks involved. So what should you do? We’ll go into more detail shortly, but let’s lay the groundwork with these link building “dos”:

Encourage links from high-quality and high-ranking sites. The quality of your backlinks matters just as much to Google as the quantity. So where possible, try to get backlinks from sites that are trustworthy, well maintained, and visible.

Focus on relevant websites. You want to encourage new visitors likely to be interested in what your site has to offer. Backlinks on sites relevant to their needs are much more valuable than backlinks from random pages.

Reach out. You don’t have to simply hope for backlinks; you can ask for them directly, and there are several effective (and non-intrusive) ways to do so.

Use a variety of techniques. One link building method may not get you too far, but a combination of three or four smart techniques can make a big difference.

Create awesome content. Your content marketing matters! The better your content, the more likely people are to link to it, whether as a result of your efforts or simply by stumbling across it.

At this point, you’re probably wondering how to put all of this into practice. Without further ado, let’s jump into the practical portion of link building 101.
Related: How to Create a Content Marketing Strategy

How to Create a Successful Link Building Strategy (In 5 Steps)

First, a caveat: Like any way of promoting your website, there is no “one right way” to do link building. Likewise, there’s no golden ticket that will get you a hundred backlinks by next Thursday (if you find one, hit us up!). However, you can take some basic steps that will greatly improve your chances of successfully increasing backlinks. We recommend starting with the following five steps, molding them as needed to fit your unique needs.

Step 1: Take a Close Look at Your Target Audience

A lot of successful link building comes down to pursuing backlinks in relevant places. This means you have to be very familiar with your target audience. If you don’t know what they care about and where they hang out, you can’t encourage links they’re likely to see. If you haven’t done so already, this is a perfect time to put together a target audience profile. That’s a detailed description of the visitors you’d like to attract to your website. You’ll want to research them carefully and collect information on their demographics, behaviors, interests, needs, and so on. When it comes to link building, you’ll want to pay particular attention to where your target audience spends their time online. What sites do they visit and which social media platforms do they prefer? These are the places you’ll benefit most from including in your link building strategy. This is also a good point to research your competitors’ backlink strategy. If you can, find out what kinds of sites link to your top competitors. It’s also useful to know what online places and communities your competitors are ignoring, as those can contain audiences hungry for the quality content you’re offering. That’s a link opportunity you don’t want to miss!
Related: 13 Simple Ways to Get Started with Search Engine Optimization

Step 2: Audit Your Existing Content

Next up, it’s time to think like a content marketer. You can’t encourage links to your site unless you know what you want people to link to. Generally, you’ll want to focus on specific content, rather than simply your home page (which can appear more spammy and less authentic). So this is a great time to conduct a thorough audit of your site’s existing content. While doing this, you can:

Look for top-quality pages and posts (or even product pages) that you think other sites would want to link to. Add these to a list as you go, so you know what represents your best content.

Find any content that could be great but needs a little improvement. With a few tweaks, so-so articles can become a target for a quality backlink. This means ensuring that they’re up-to-date (for example, make sure you don’t have a broken link in the text and that stats are still accurate), match your brand’s style guide, and provide value to your audience. This publication checklist is a good way to make sure you don’t miss anything.

Search for “gaps” you can create new content to fill. There may be information or topics that you think other sites would be happy to link to, but you haven’t written about yet. You can follow our guide to writing a blog post to get started.

After auditing your content, the next natural step is to start improving and expanding it. Having lots of high-quality content makes link building a lot easier. It’s also worth noting that if you don’t have a blog on your website yet, now is the time to start one! There are few things better than a blog for generating lots of new, timely content that people will want to share with their audiences. If you’re not convinced, check out how these companies are using their blogs to increase brand awareness and build their reputations online.
Related: Keep Your Content Fresh: How to Repurpose an Old Blog Post

Step 3: Consider What Link Building You Can Perform Yourself

As we mentioned earlier, most link building involves getting other people to link to your site of their own volition. However, there is a little link building you can do on your own, without venturing into spammy territory. Who doesn’t love a little DIY? The first and most important part of this step is internal linking. You need to make sure all of your online presences are connected. This means your social media accounts should point to your site (and vice versa), and if you have more than one website, they should be interlinked as well. You can also include some links to your content on other people’s websites, particularly in forums and comments sections. But be careful: don’t create too many of these links, and make sure they’re always highly relevant. You don’t want to be that person shilling Bitcoin on every other post. Your best approach is to find the sites and communities where your target audience is present and engage genuinely with them. When organic link building opportunities come up and you can share a helpful link, don’t be afraid to do so. While these links are not considered as valuable by Google as a natural link created by someone not affiliated with your site, they still have an impact.

Step 4: Start Conducting Outreach

At this point, you’ve done a little link building of your own. You’ve also improved your site’s content marketing efforts, which will hopefully generate more links for you organically (as people stumble across and share your pages and awesome articles). However, the best way to build links is to ask for them. Yep, you can reach out to a website and simply ask them to link to your content. This is a common practice and can be very successful when approached carefully. It can even help create the foundation for mutually beneficial relationships between you and other relevant sites.
So, what does successful outreach look like? Everyone’s strategy is a little different, but the following tips and techniques are key:

Reach out to highly-relevant sites. This is where all your research back in Step No. 1 will come in handy. Sites that see your content and audience as relevant to them are more likely to welcome your request, rather than seeing it as intrusive.

Offer specific content they can link to. It’s not usually effective to just email blogs and write, “Link to my website, pretty please?” Instead, use the results of your content audit to identify specific pages and posts you’d like to share and request links to them specifically.

Share genuinely useful content. A link building request is obviously self-serving, but it doesn’t have to be all about you. Do some research on the site you’re reaching out to and find something you think would really be interesting or useful to its audience. Blog posts, tutorials, infographics, and videos are all great options.

Suggest specific places your links could be included. This shows that you’ve done your research and makes accepting the request easier on the target site. You can propose new links where none currently exist or even offer a better piece of content as a replacement for an existing link. Don’t forget to offer up some anchor text to make it even easier for the other site’s admin.

Most importantly, remember to be polite and conduct yourself professionally. Never demand that someone include a link to your site; people who manage successful websites learned not to feed internet trolls a long time ago. Instead, create a concise and friendly message that you can send to the sites and blogs on your list and try to personalize it for each one.

Step 5: Get Involved in Guest Blogging

Guest blogging can be one of the most powerful tools in your link building strategy. Also called “guest posting,” it involves writing a brand-new post specifically to be featured on another website.
This post can then contain one or more links back to your site and content. You can often get farther with guest blogging than with simple link requests. After all, you’ll be providing content to another website for free. In return, they’ll link back to your site. This is a very attractive proposition for blogs, in particular, since they’re always in need of fresh content. Just like with outreach, guest blogging is most effective if you follow some simple best practices. These include: Avoid sites that want you to pay them to publish your guest post. Most blogs will accept this kind of content for free, so there’s no need to pay for placement unless you’re desperate to be featured on a specific high-profile blog. Check the blog to see if they have guidelines for guest bloggers. Many will have a dedicated “write for us” page that outlines their requirements, what they will and won’t accept, and so on. By carefully following these guidelines, you’ll increase your chances of getting past a busy blogger’s spam filter. Do your research. Find out what the blog’s style is like and what kinds of topics they cover. This will help you come up with a topic idea that they’re more likely to accept. Reach out with a proposal first. Don’t simply write up a full post and submit it — these will often be rejected and can waste a lot of your time. Instead, reach out to the blog and let them know what topic you’d like to cover, what key points you’ll include, and what link(s) you’re hoping to see. Create quality, unique content. Never copy content from your own site or elsewhere (plagiarism is always a big no-no) and instead take the time to put together a unique, polished post for each blog. Also, avoid getting too “salesy” about your own website or products and focus on providing real value to the blog’s audience. This is the most time-intensive of our link building strategies. 
Still, it can pay off in increased visibility, improved authority, and links that are perfectly placed to capture your audience’s attention. Plus, you might develop mutually beneficial relationships with some of these blogs, providing further opportunities for interlinking in the future.

Measuring Your Link Building Efforts

The above steps should get you well on your way to running an effective link building campaign. However, like other digital marketing tactics, it’s important to measure your effectiveness. Otherwise, you won’t know if your efforts are paying off or if your approach needs to be adjusted. Trying to keep tabs on your backlinks manually can be very difficult — it’s best to use an analytics tool instead. Many solutions can tell you everything you need to know about your backlinks, quickly and with minimal fuss. Related: Improve Your Search Engine Rankings with These Tools. If you have a favorite analytics tool already, chances are it can help you out in this area. If not, a perfect place to start is with Google Analytics. This tool is free, accessible to beginners, and full of useful metrics and features. For instance, you can go to Acquisition > All Traffic > Referrals in your Google Analytics dashboard. Here, you’ll see data about the visitors who arrive on your site from external links — in other words, everyone who comes to your website via a backlink. This includes a summary of trends over time, as well as a detailed breakdown of all the links leading to your site and how popular they are. You can use this data to monitor the results of your backlink strategy. It’s also handy for seeing which sites are linking to yours, and which ones drive the most traffic your way. When combined with Google Analytics’ many other data points, this can even tell you how your link building strategy interacts with your other marketing and SEO efforts. 
Pass the Link Juice

If you want to improve your website’s traffic and attract more of your target audience, link building is necessary. A complete link building strategy helps you encourage relevant sites to share your content with their audiences. It’s a method that takes a little time to master but is inexpensive and highly trackable. Of course, bringing traffic to your website is just the start. You also want those new visitors to have an excellent experience — which starts with high-quality web hosting. Fortunately, our shared website hosting can do the trick! The post How to Create a Link Building Strategy appeared first on Website Guides, Tips and Knowledge.

How to Test a New WordPress Theme Without Crashing Your Current Site [4-Step Guide]

HostGator Blog -

The post How to Test a New WordPress Theme Without Crashing Your Current Site [4-Step Guide] appeared first on HostGator Blog. Wanna try out a new WordPress theme, but are afraid of how it will impact your current site? How do you ensure that while you’re testing out new theme ideas, you don’t crash your current site, lose precious copy, or mess up your website? Speaking as someone who accidentally erased her entire website back in 2015 (massive oops), it’s a valid concern. To help you avoid losing your website to a cyber black hole, here are some practical ways to test your new WordPress theme without crashing your current site.

How to test a WordPress theme for your current website without going live

If you’re ready to make changes to your current site and try a new WordPress theme, here are the steps to follow.

Step 1: Back it up (Just like Prince Royce and J-Lo recommend)

Remember when I said I accidentally deleted my entire WordPress website back in my novice freelance writing days? I don’t know how I deleted it. All I know is that my site was gone forever. It was particularly upsetting since I paid a chunk of change to a designer to create my site for me, and I had to pay someone AGAIN to get my new freelance writing website up and running. Had I followed a few simple steps and backed up my website, I could have saved myself a headache and some cash. Oh well. We live, we learn. Here’s how to back up your website:

1. Install a WordPress backup plugin
2. Download your backup to your computer

Yep, two steps. That’s it. Oh, how I wish the older, wiser version of myself could go back in time and tell my younger self to back up her website.

Step 2: Choose your WordPress theme testing option

When testing a WordPress theme, you have three options:

- Create a coming soon page
- Download a staging plugin
- Download and test in a local WordPress testing environment

We’ll review each of these below. 
Option A: Create a coming soon page

Let’s say you’re making changes to your website, but you’re not quite set on what new theme you want to use. You don’t want your visitors to see you changing back and forth between multiple theme options for obvious reasons. Instead of trying to make changes for everyone to see, you can activate what’s called a coming soon plugin. A WordPress coming soon plugin allows you to create a page with a custom message that says you’re making exciting changes to your site, and you’ll be back online as soon as your updates are complete. Here are some things to include on your coming soon page:

- A compelling headline of what is coming soon
- A brief description of upcoming changes and why it’s exciting
- A timeline of when your new site will launch
- A way to get in touch with you in the meantime (e.g., social media, phone, email, etc.)
- A sign-up form for your email list

Option B: Download a staging plugin

If you don’t want to take your website down, but still want to test new designs or a new WordPress theme, you have an alternative. You can install a WordPress staging plugin. Plugins like WP Staging, Duplicator, and WP Stagecoach clone a version of your site where you can make changes behind the scenes. With the help of a staging plugin, you can make any changes you want without any fear of erasing or breaking your current website. You also don’t have to take your website down while you’re experimenting, as your staging website is not visible to the public. 
Here is how to build a staging site with a plugin (e.g., WP Staging):

1. Download your preferred WordPress staging plugin
2. Install and activate your preferred staging plugin
3. Navigate to the staging plugin on the left navigation bar in your WordPress dashboard
4. Follow the prompts to start and create your staged website
5. Name your staged website
6. Wait for the plugin to clone your website
7. Open your new staged/cloned website
8. Click login
9. Use your WordPress login credentials
10. Notice the URL will include the staging website URL, followed by what you named your site
11. Make your changes

Directions may vary slightly depending on which plugin you use, but all the plugins listed above are fairly intuitive to navigate.

Option C: Download and test in a local WordPress testing environment

Let’s say you don’t dare mess around in WordPress at all, and you want your changes to stay as free and as clear from your current site as possible. There is an answer for you as well. It’s installing a local WordPress testing environment app on your computer. A local testing environment is like having a server on your own computer where you can make changes to your site and preview them without it ever touching your live site. In short, it’s just like a staging site, but not a WordPress plugin. It works on your computer instead. Popular WordPress testing environments include DesktopServer by ServerPress and Local by Flywheel. To set up local testing of your WordPress site, you’ll need to download a WordPress migration app, export your production site to a file, save it to your computer, and follow the steps your local testing software requires. Consider using the All-in-One WP Migration plugin to export your site. If you have a simple blog or website, it’s sufficient to test with a staging website. If you have a robust website with lots of content and an online store, for example, it’s a good idea to look into a local WordPress testing software. 
Step 3: Install your new theme into WordPress

To change the current design of your WordPress site to a new WordPress theme, you first have to select your new WordPress theme and install it into WordPress. There are several free WordPress themes, and many of them are gorgeous. If you are working on a budget, browse the free WordPress themes and pick something you like. If you want more options, you can purchase a WordPress theme and install it on your WordPress account. Here are some popular places to find a paid WordPress theme:

- StudioPress
- OceanWP
- ThemeFuse
- ThemeForest
- Creative Market
- Rara Theme

If you’d rather support a small business, many talented designers create awesome WordPress themes you can purchase. You can look into Etsy or your favorite independent designer. Regardless of what you choose, make sure the theme is well-designed and mobile-responsive. Once you have purchased your theme, here’s how to install it:

1. Download the .zip file of the new theme you purchased and save it to your desktop.
2. Log in to your WordPress site.
3. Navigate to the “Appearance” section on the left navigation bar of your WordPress dashboard.
4. Click on “Themes.”
5. At the top of the screen, you’ll see “Add New” next to the heading “Themes.” Click on “Add New.”
6. Next to the heading “Add New,” you’ll see “Upload Theme.” Click on it.
7. Either drag and drop the .zip file or click “choose file” and select the .zip file.
8. Click “Install Now.”
9. WordPress will return a result that your theme has been installed.

From here, you can either activate your new theme or press live preview. You can also add as many themes as you want using this same method and then preview them in the next step. If you want to see how your site would look in the theme you just downloaded, select live preview. Just don’t hit “activate and publish,” unless you are one-hundred percent sure you want this to be the new look of your website.

Step 4: Browse theme options and preview them live

WordPress has an awesome feature for its themes. 
With every theme you install, you can either “customize it” or “live preview” it. Let’s talk about how. If you navigate to “Appearance” in your dashboard and click on “Themes,” WordPress will show you all the available themes you have uploaded. Here is what my dashboard looks like: If you hover over the theme you’re interested in, you’ll see two options: activate and live preview. Activate will activate that theme for your website. Live preview will simply show you what your site would look like dressed up in that particular theme. It’s like taking your website shopping for a new outfit! Here is what my travel website looks like in its current theme: I know. Hold the applause. If I want to test out a new theme, I follow the process above and click “live preview.” Here is what my website would look like in that particular theme. Remember, I haven’t copied any of my custom code or tweaked anything yet. This just gives me a general idea. What do you think? Should I change my theme? Either way, I can follow this process continually, rinse and repeat, until I have settled on a theme. If I don’t press “activate and publish,” WordPress won’t save any of my changes. This is a great way to test a new WordPress theme without making permanent changes. Once you are finally ready to switch over to a new theme, you can activate it, customize it, and publish it.

Check out HostGator’s managed WordPress hosting plan

If you plan on making changes to your website, or just want the security that your website will have backups, then look into HostGator’s managed WordPress hosting plan. A managed WordPress hosting plan from HostGator includes 2.5X faster load times, an easy control panel for simple navigation, free website migration (so you don’t have to do the scary work yourself), 1GB-3GB of backups (depending on your plan), and more. Check out HostGator today, and get your website set up with a new, gorgeous, mobile-responsive WordPress theme. Find the post on the HostGator Blog

Sandboxing in Linux with zero lines of code

CloudFlare Blog -

Modern Linux operating systems provide many tools to run code more securely: there are namespaces (the basic building blocks for containers), Linux Security Modules, the Integrity Measurement Architecture, etc. In this post we will review Linux seccomp and learn how to sandbox any (even a proprietary) application without writing a single line of code.

Linux system calls

System calls (syscalls) are a well-defined interface between userspace applications and the operating system (OS) kernel. On modern operating systems most applications provide only application-specific logic as code. Applications do not, and most of the time cannot, directly access low-level hardware or networking when they need to store data or send something over the wire. Instead they use system calls to ask the OS kernel to do specific hardware and networking tasks on their behalf. Apart from providing a generic high-level way for applications to interact with the low-level hardware, the system call architecture allows the OS kernel to manage available resources between applications as well as enforce policies, like application permissions, networking access control lists, etc.

Linux seccomp

Linux seccomp is yet another syscall on Linux, but it is a bit special, because it influences how the OS kernel will behave when the application uses other system calls. By default, the OS kernel has almost no insight into userspace application logic, so it provides all the possible services it can. But not all applications require all services. Consider an application which converts image formats: it needs the ability to read and write data from disk, but in its simplest form probably does not need any network access. Using seccomp, an application can declare its intentions in advance to the Linux kernel. 
For this particular case it can notify the kernel that it will be using the read and write system calls, but never the send and recv system calls (because its intent is to work with local files and never with the network). It’s like establishing a contract between the application and the OS kernel. But what happens if the application later breaks the contract and tries to use one of the system calls it promised not to use? The kernel will “penalise” the application, usually by immediately terminating it. Linux seccomp also allows less restrictive actions for the kernel to take:

- instead of terminating the whole application, the kernel can be requested to terminate only the thread which issued the prohibited system call
- the kernel may just send a SIGSYS signal to the calling thread
- the seccomp policy can specify an error code, which the kernel will then return to the calling application instead of executing the prohibited system call
- if the violating process is under ptrace (for example, executing under a debugger), the kernel can notify the tracer (the debugger) that a prohibited system call is about to happen and let the debugger decide what to do
- the kernel may be instructed to allow and execute the system call, but log the attempt: this is useful when we want to verify that our seccomp policy is not too tight, without the risk of terminating the application and potentially creating an outage

Although there is a lot of flexibility in defining the potential penalty for the application, from a security perspective it is usually best to stick with complete application termination upon a seccomp policy violation. The reason for that is described later in the examples in this post. So why would the application take the risk of being abruptly terminated and declare its intentions beforehand, if it can just be “silent” and the OS kernel will allow it to use any system call by default? 
Of course, for a normally behaving application it makes no sense, but it turns out this feature is quite effective for protecting against rogue applications and arbitrary code execution exploits. Imagine our image format converter is written in some unsafe language and an attacker was able to take control of the application by making it process some malformed image. What the attacker might do is try to steal some sensitive information from the machine running our converter and send it to themselves via the network. By default, the OS kernel will most likely allow it and a data leak will happen. But if our image converter “confined” (or sandboxed) itself beforehand to only read and write local data, the kernel will terminate the application when the latter tries to leak the data over the network, thus preventing the leak and locking the attacker out of our system!

Integrating seccomp into the application

To see how seccomp can be used in practice, let’s consider a toy example program, myos.c:

```c
#include <stdio.h>
#include <sys/utsname.h>

int main(void)
{
    struct utsname name;
    if (uname(&name)) {
        perror("uname failed: ");
        return 1;
    }
    printf("My OS is %s!\n", name.sysname);
    return 0;
}
```

This is a simplified version of the uname command line tool, which just prints your operating system name. Like its full-featured counterpart, it uses the uname system call to actually get the name of the current operating system from the kernel. Let’s see it in action:

```shell
$ gcc -o myos myos.c
$ ./myos
My OS is Linux!
```

Great! We’re on Linux, so we can experiment further with seccomp (it is a Linux-only feature). Notice that we’re properly handling the error code after invoking the uname system call. However, according to the man page it can only fail when the passed-in buffer pointer is invalid, and in that case the error number will be EINVAL, which translates to “invalid parameter”. In our case, the “struct utsname” structure is being allocated on the stack, so our pointer will always be valid. 
In other words, in normal circumstances the uname system call should never fail in this particular program. To illustrate seccomp capabilities we will add a “sandbox” function to our program before the main logic, myos_raw_seccomp.c:

```c
#include <linux/seccomp.h>
#include <linux/filter.h>
#include <linux/audit.h>
#include <sys/ptrace.h>
#include <sys/prctl.h>
#include <stdlib.h>
#include <stdio.h>
#include <stddef.h>
#include <sys/utsname.h>
#include <errno.h>
#include <unistd.h>
#include <sys/syscall.h>

static void sandbox(void)
{
    struct sock_filter filter[] = {
        /* seccomp(2) says we should always check the arch */
        /* as syscalls may have different numbers on different architectures */
        /* see */
        /* for simplicity we only allow x86_64 */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS, (offsetof(struct seccomp_data, arch))),
        /* if not x86_64, tell the kernel to kill the process */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, AUDIT_ARCH_X86_64, 0, 4),
        /* get the actual syscall number */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS, (offsetof(struct seccomp_data, nr))),
        /* if "uname", tell the kernel to return EPERM, otherwise just allow */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, SYS_uname, 0, 1),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ERRNO | (EPERM & SECCOMP_RET_DATA)),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
    };
    struct sock_fprog prog = {
        .len = (unsigned short)(sizeof(filter) / sizeof(filter[0])),
        .filter = filter,
    };

    /* see seccomp(2) on why this is needed */
    if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)) {
        perror("PR_SET_NO_NEW_PRIVS failed");
        exit(1);
    }

    /* glibc does not have a wrapper for seccomp(2) */
    /* invoke it via the generic syscall wrapper */
    if (syscall(SYS_seccomp, SECCOMP_SET_MODE_FILTER, 0, &prog)) {
        perror("seccomp failed");
        exit(1);
    }
}

int main(void)
{
    struct utsname name;
    sandbox();
    if (uname(&name)) {
        perror("uname failed");
        return 1;
    }
    printf("My OS is %s!\n", name.sysname);
    return 0;
}
```

To sandbox itself, the application defines a BPF 
program, which implements the desired sandboxing policy. Then the application passes this program to the kernel via the seccomp system call. The kernel does some validation checks to ensure the BPF program is OK and then runs this program on every system call the application makes. The result of the execution of the program is used by the kernel to determine if the current call complies with the desired policy. In other words, the BPF program is the “contract” between the application and the kernel. In our toy example above, the BPF program simply checks which system call is about to be invoked. If the application is trying to use the uname system call, we tell the kernel to just return an EPERM (which stands for “operation not permitted”) error code. We also tell the kernel to allow any other system call. Let’s see if it works now:

```shell
$ gcc -o myos myos_raw_seccomp.c
$ ./myos
uname failed: Operation not permitted
```

uname failed now with the EPERM error code, and EPERM is not even described as a potential failure code in the uname man page! So we know that this happened because we “told” the kernel to prohibit us from using the uname syscall and to return EPERM instead. We can double-check this by replacing EPERM with some other error code which is totally inappropriate for this context, for example ENETDOWN (“network is down”). Why would we need the network to be up just to get the currently executing OS? Yet, recompiling and rerunning the program, we get:

```shell
$ gcc -o myos myos_raw_seccomp.c
$ ./myos
uname failed: Network is down
```

We can also verify that the other part of our “contract” works as expected. We told the kernel to allow any other system call, remember? In our program, when uname fails, we convert the error code to a human-readable message and print it on the screen with the perror function. 
To print on the screen, perror uses the write system call under the hood, and since we can actually see the printed error message, we know that the kernel allowed our program to make the write system call in the first place.

seccomp with libseccomp

While it is possible to use seccomp directly, as in the examples above, BPF programs are cumbersome to write by hand and hard to debug, review and update later. That’s why it is usually a good idea to use a more high-level library, which abstracts away most of the low-level details. Luckily such a library exists: it is called libseccomp and is even recommended by the seccomp man page. Let’s rewrite our program’s sandbox() function to use this library instead, myos_libseccomp.c:

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/utsname.h>
#include <seccomp.h>
#include <err.h>

static void sandbox(void)
{
    /* allow all syscalls by default */
    scmp_filter_ctx seccomp_ctx = seccomp_init(SCMP_ACT_ALLOW);
    if (!seccomp_ctx)
        err(1, "seccomp_init failed");

    /* kill the process, if it tries to use "uname" syscall */
    if (seccomp_rule_add_exact(seccomp_ctx, SCMP_ACT_KILL,
                               seccomp_syscall_resolve_name("uname"), 0)) {
        perror("seccomp_rule_add_exact failed");
        exit(1);
    }

    /* apply the composed filter */
    if (seccomp_load(seccomp_ctx)) {
        perror("seccomp_load failed");
        exit(1);
    }

    /* release allocated context */
    seccomp_release(seccomp_ctx);
}

int main(void)
{
    struct utsname name;
    sandbox();
    if (uname(&name)) {
        perror("uname failed: ");
        return 1;
    }
    printf("My OS is %s!\n", name.sysname);
    return 0;
}
```

Our sandbox() function not only became shorter and much more readable, but also gained the ability to reference syscalls in our rules by name rather than by internal number, and we no longer have to deal with other quirks, like setting the PR_SET_NO_NEW_PRIVS bit or handling system architectures. It is worth noting we have modified our seccomp policy a bit. 
In the raw seccomp example above we instructed the kernel to return an error code when the application tries to execute a prohibited syscall. This is good for demonstration purposes, but in most cases a stricter action is required. Just returning an error code and allowing the application to continue gives potentially malicious code a chance to bypass the policy. There are many syscalls in Linux and some of them do the same or similar things. For example, we might want to prohibit the application from reading data from disk, so we deny the read syscall in our policy and tell the kernel to return an error code instead. However, if the application does get exploited, the exploit code/logic might look like this:

```c
if (-1 == read(fd, buf, count)) {
    /* hm... read failed, but what about pread? */
    if (-1 == pread(fd, buf, count, offset)) {
        /* what about readv? */
        ...
    }
    /* bypassed the prohibited read(2) syscall */
}
```

Wait, what?! There is more than one read system call? Yes: there are read, pread and readv, as well as more obscure ones, like io_submit and io_uring_enter. Of course, it is our fault for providing an incomplete seccomp policy which does not block all possible read syscalls. But if at least we had instructed the kernel to terminate the process immediately upon violation of the first plain read, the malicious code above would not have had the chance to be clever and try other options. Given the above, in the libseccomp example we now have a stricter policy, which tells the kernel to terminate the process upon the policy violation. Let’s see if it works:

```shell
$ gcc -o myos myos_libseccomp.c -lseccomp
$ ./myos
Bad system call
```

Notice that we need to link against libseccomp when compiling the application. Also, when we run the application, we don’t see the “uname failed: Operation not permitted” error output anymore, because we don’t give the application the ability to even print a failure message. 
Instead, we see a Bad system call message from the shell, which tells us that the application was terminated with a SIGSYS signal. Great!

zero code seccomp

The previous examples worked fine, but both of them have one disadvantage: we actually needed to modify the source code to embed our desired seccomp policy into the application. This is because the seccomp syscall affects the calling process and its children, but there is no interface to inject the policy from “outside”. It is expected that developers will sandbox their code themselves as part of the application logic, but in practice this rarely happens. When developers start a new project, most of the time the focus is on primary functionality, and security features are usually either postponed or omitted altogether. Also, most real-world software is written using some high-level programming language and/or a framework, where the developers do not deal with system calls directly and are probably not even aware which system calls are being used by their code. On the other hand we have system operators, sysadmins, SREs and other folks who run the above code in production. They are more incentivized to keep production systems secure, and thus would probably want to sandbox the services as much as possible. But most of the time they don’t have access to the source code. So there are mismatched expectations: developers have the ability to sandbox their code, but are usually not incentivized to do so, while operators have the incentive to sandbox the code, but don’t have the ability. This is where “zero code seccomp” can help: an external operator can inject the desired sandbox policy into any process without needing to modify any source code. Systemd is one of the popular implementations of a “zero code seccomp” approach. Systemd-managed services can have a SystemCallFilter= directive defined in their unit files, listing all the system calls the managed service is allowed to make. 
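For a long-lived service, the same restrictions would normally live in the unit file itself rather than on a command line. A hypothetical unit sketch (the service name and path are illustrative; the directives are the ones documented in systemd.exec):

```ini
# /etc/systemd/system/myos.service (hypothetical)
[Service]
ExecStart=/usr/local/bin/myos

# The "~" prefix turns the list into a denylist: everything except
# uname is allowed; calling uname violates the policy.
SystemCallFilter=~uname

# Optionally return this errno on a violation instead of killing
# the process.
SystemCallErrorNumber=ENETDOWN
```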
As an example, let’s go back to our toy application without any sandboxing code embedded:

```shell
$ gcc -o myos myos.c
$ ./myos
My OS is Linux!
```

Now we can run the same code with systemd, but prohibit the application from using uname without changing or recompiling any code (we’re using systemd-run to create an ephemeral systemd service unit for us):

```shell
$ systemd-run --user --pty --same-dir --wait --collect --service-type=exec --property="SystemCallFilter=~uname" ./myos
Running as unit: run-u0.service
Press ^] three times within 1s to disconnect TTY.
Finished with result: signal
Main processes terminated with: code=killed/status=SYS
Service runtime: 6ms
```

We don’t see the normal My OS is Linux! output anymore, and systemd conveniently tells us that the managed process was terminated with a SIGSYS signal. We can even go further and use another directive, SystemCallErrorNumber=, to configure our seccomp policy not to terminate the application, but to return an error code instead, as in our first raw seccomp example:

```shell
$ systemd-run --user --pty --same-dir --wait --collect --service-type=exec --property="SystemCallFilter=~uname" --property="SystemCallErrorNumber=ENETDOWN" ./myos
Running as unit: run-u2.service
Press ^] three times within 1s to disconnect TTY.
uname failed: Network is down
Finished with result: exit-code
Main processes terminated with: code=exited/status=1
Service runtime: 6ms
```

systemd small print

Great! We can now inject almost any seccomp policy into any process without the need to write any code or recompile the application. However, there is an interesting statement in the systemd documentation:

...Note that the execve, exit, exit_group, getrlimit, rt_sigreturn, sigreturn system calls and the system calls for querying time and sleeping are implicitly whitelisted and do not need to be listed explicitly...

Some system calls are implicitly allowed and we don’t have to list them. This is mostly related to the way systemd manages processes and injects the seccomp policy. 
We established earlier that a seccomp policy applies to the current process and its children. So, to inject the policy, systemd forks itself, calls seccomp in the forked process and then execs the forked process into the target application. That’s why always allowing the execve system call is necessary in the first place: otherwise systemd could not do its job as a service manager. But what if we want to explicitly prohibit some of these system calls? If we continue with execve as an example, it can actually be a dangerous system call that most applications would want to prohibit. Seccomp is an effective tool to protect code from arbitrary code execution exploits, remember? If a malicious actor takes over our code, most likely the first thing they will try is to get a shell (or replace our code with any other application which is easier to control) by directing our code to call execve with the desired binary. So, if our code does not need execve for its main functionality, it would be a good idea to prohibit it. Unfortunately, that is not possible with systemd’s SystemCallFilter= approach...

Introducing Cloudflare sandbox

We really liked the “zero code seccomp” approach with the systemd SystemCallFilter= directive, but were not satisfied with its limitations. We decided to take it one step further and make it possible to prohibit any system call in any process externally, without touching its source code, so we came up with the Cloudflare sandbox. It’s a simple standalone toolkit consisting of a shared library and an executable. The shared library is intended for dynamically linked applications and the executable is for statically linked applications.

sandboxing dynamically linked executables

For dynamically linked executables it is possible to inject custom code into the process by utilizing the LD_PRELOAD environment variable. 
The shared library from our toolkit contains a so-called initialization routine, which is executed before the main logic. This is how we make the target application sandbox itself:

- LD_PRELOAD tells the dynamic loader to load our shared library as part of the application when it starts
- the runtime executes the initialization routine from our library before most of the main logic
- our initialization routine configures the sandbox policy described in special environment variables
- by the time the main application logic begins executing, the target process already has the configured seccomp policy enforced

Let’s see how it works with our myos toy tool. First, we need to make sure it is actually a dynamically linked application:

```shell
$ ldd ./myos
        (0x00007ffd8e1e3000)
        => /lib/x86_64-linux-gnu/ (0x00007f339ddfb000)
        /lib64/ (0x00007f339dfcf000)
```

Yes, it is. Now, let’s prohibit it from using the uname system call with our toolkit:

```shell
$ LD_PRELOAD=/usr/lib/x86_64-linux-gnu/ SECCOMP_SYSCALL_DENY=uname ./myos
adding uname to the process seccomp filter
Bad system call
```

Yet again, we’ve managed to inject our desired seccomp policy into the myos application without modifying or recompiling it. The advantage of this approach is that it doesn’t have the shortcomings of systemd’s SystemCallFilter= and we can block any system call (luckily Bash is a dynamically linked application as well):

```shell
$ /bin/bash -c 'echo I will try to execve something...; exec /usr/bin/echo Doing arbitrary code execution!!!'
I will try to execve something...
Doing arbitrary code execution!!!
$ LD_PRELOAD=/usr/lib/x86_64-linux-gnu/ SECCOMP_SYSCALL_DENY=execve /bin/bash -c 'echo I will try to execve something...; exec /usr/bin/echo Doing arbitrary code execution!!!'
adding execve to the process seccomp filter
I will try to execve something...
Bad system call
```

The only problem here is that we may accidentally forget to LD_PRELOAD our library and potentially run unprotected. 
Also, as described in the man page, LD_PRELOAD has some limitations. We can overcome all these problems by making our library a permanent part of the target application:

$ patchelf --add-needed /usr/lib/x86_64-linux-gnu/ ./myos
$ ldd ./myos
        linux-vdso.so.1 (0x00007fff835ae000)
        /usr/lib/x86_64-linux-gnu/ (0x00007fc4f55f2000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fc4f5425000)
        /lib64/ld-linux-x86-64.so.2 (0x00007fc4f5647000)

Again, we didn’t need access to the source code here; we patched the compiled binary instead. Now we can just configure our seccomp policy as before, without the need for LD_PRELOAD:

$ ./myos
My OS is Linux!
$ SECCOMP_SYSCALL_DENY=uname ./myos
adding uname to the process seccomp filter
Bad system call

sandboxing statically linked executables

The above method is quite convenient and easy, but it doesn’t work for statically linked executables:

$ gcc -static -o myos myos.c
$ ldd ./myos
        not a dynamic executable
$ LD_PRELOAD=/usr/lib/x86_64-linux-gnu/ SECCOMP_SYSCALL_DENY=uname ./myos
My OS is Linux!

This is because there is no dynamic loader involved in starting a statically linked executable, so LD_PRELOAD has no effect. For this case our toolkit contains a special application launcher, which injects the seccomp rules similarly to the way systemd does it:

$ sandboxify ./myos
My OS is Linux!
$ SECCOMP_SYSCALL_DENY=uname sandboxify ./myos
adding uname to the process seccomp filter

Note that we don’t see the Bad system call shell message anymore, because our target executable is started by the launcher instead of the shell directly. Unlike systemd, however, we can use this launcher to block dangerous system calls, like execve, as well:

$ sandboxify /bin/bash -c 'echo I will try to execve something...; exec /usr/bin/echo Doing arbitrary code execution!!!'
I will try to execve something...
Doing arbitrary code execution!!!
$ SECCOMP_SYSCALL_DENY=execve sandboxify /bin/bash -c 'echo I will try to execve something...; exec /usr/bin/echo Doing arbitrary code execution!!!'
adding execve to the process seccomp filter
I will try to execve something...

sandboxify vs LD_PRELOAD

From the examples above you may notice that it is possible to use sandboxify with dynamically linked executables as well, so why even bother with the LD_PRELOAD approach? The difference becomes visible when we stop using the “denylist” policy from most examples in this post and switch to the preferred “allowlist” policy, where we explicitly allow only the system calls we need and prohibit everything else. Let’s convert our toy application back into a dynamically linked one and try to come up with the minimal list of allowed system calls it needs to function properly:

$ gcc -o myos myos.c
$ ldd ./myos
        linux-vdso.so.1 (0x00007ffe027f6000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f4f1410a000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f4f142de000)
$ LD_PRELOAD=/usr/lib/x86_64-linux-gnu/ SECCOMP_SYSCALL_ALLOW=exit_group:fstat:uname:write ./myos
adding exit_group to the process seccomp filter
adding fstat to the process seccomp filter
adding uname to the process seccomp filter
adding write to the process seccomp filter
My OS is Linux!

So we need to allow four system calls: exit_group:fstat:uname:write. This is the tightest “sandbox” which still doesn’t break the application. If we remove any system call from this list, the application will terminate with the Bad system call message (try it yourself!). If we use the same allowlist, but with the sandboxify launcher, things do not work anymore:

$ SECCOMP_SYSCALL_ALLOW=exit_group:fstat:uname:write sandboxify ./myos
adding exit_group to the process seccomp filter
adding fstat to the process seccomp filter
adding uname to the process seccomp filter
adding write to the process seccomp filter

The reason is that sandboxify and the preloaded library inject seccomp rules at different stages of the process lifecycle. In a nutshell, every process has two runtime stages: the “runtime init” and the “main logic”.
The main logic is basically the code located in the program’s main() function, plus other code put there by the application developers. But the process usually needs to do some work before the code from the main() function is able to execute; we call this work the “runtime init” stage. Developers do not write this code directly; most of the time it is generated automatically by the compiler toolchain used to build the source code. To do its job, the “runtime init” stage uses a lot of different system calls, but most of them are not needed later, at the “main logic” stage. If we’re using the “allowlist” approach for our sandboxing, it does not make sense to allow these system calls for the whole duration of the program if they are only used once, at program init. This is where the difference between the LD_PRELOAD approach and sandboxify comes from: the preloaded library enforces the seccomp rules after the “runtime init” stage has mostly executed, so we don’t have to allow most system calls from that stage. sandboxify, on the other hand, enforces the policy before the “runtime init” stage, so we have to allow all the system calls from both stages, which usually results in a bigger allowlist and thus a wider attack surface.
Going back to our toy myos example, here is the minimal list of system calls we need to allow to make the application work under our sandbox:

$ SECCOMP_SYSCALL_ALLOW=access:arch_prctl:brk:close:exit_group:fstat:mmap:mprotect:munmap:openat:read:uname:write sandboxify ./myos
adding access to the process seccomp filter
adding arch_prctl to the process seccomp filter
adding brk to the process seccomp filter
adding close to the process seccomp filter
adding exit_group to the process seccomp filter
adding fstat to the process seccomp filter
adding mmap to the process seccomp filter
adding mprotect to the process seccomp filter
adding munmap to the process seccomp filter
adding openat to the process seccomp filter
adding read to the process seccomp filter
adding uname to the process seccomp filter
adding write to the process seccomp filter
My OS is Linux!

That’s 13 syscalls vs 4 syscalls, if we’re using the LD_PRELOAD approach!

Conclusions

In this post we discussed how to easily sandbox applications on Linux without the need to write any additional code. We introduced the Cloudflare sandbox toolkit and discussed the different approaches we take to sandboxing dynamically linked applications vs statically linked applications. Having safer code online helps to build a Better Internet, and we would be happy if you find our sandbox toolkit useful. We’re looking forward to feedback, improvements and other contributions!

How to Analyze Instagram Stories Ads

Social Media Examiner -

Do you know whether your Instagram Stories ads are working? Wondering which ad performance metrics to track and where to find the data? In this article, you’ll discover how to analyze Instagram Stories ads data so you can find out what’s working and what isn’t. To learn how to analyze Instagram Stories ads, read the […]

WordPress 5.5 Beta 1 News -

WordPress 5.5 Beta 1 is now available for testing! This software is still in development, so it’s not recommended to run this version on a production site. Consider setting up a test site to play with the new version. You can test the WordPress 5.5 beta in two ways:
- Try the WordPress Beta Tester plugin (choose the “bleeding edge nightlies” option)
- Or download the beta here (zip)

The current target for final release is August 11, 2020. This is only five weeks away. Your help is needed to ensure this release is tested properly. Testing for bugs is an important part of polishing the release during the beta stage and a great way to contribute. Here are some of the big changes and features to pay close attention to while testing.

Block editor: features and improvements

WordPress 5.5 will include ten releases of the Gutenberg plugin, bringing with it a long list of exciting new features. Here are just a few:
- Inline image editing – Crop, rotate, and zoom photos inline right from image blocks.
- Block patterns – Building elaborate pages can be a breeze with new block patterns. Several are included by default.
- Device previews – See how your content will look to users on many different screen sizes.
- End block overwhelm. The new block inserter panel displays streamlined categories and collections. As a bonus, it supports patterns and integrates with the new block directory right out of the box.
- Discover, install, and insert third-party blocks from your editor using the new block directory.
- A better, smoother editing experience with: refined drag-and-drop; block movers that you can see and grab; parent block selection; contextual focus highlights; multi-select formatting that lets you change a bunch of blocks at once; the ability to copy and relocate blocks easily; and better performance.
- An expanded design toolset for themes. Now add backgrounds and gradients to more kinds of blocks, like groups, columns, and media & text.
- And support for more types of measurements, not just pixels.
Choose ems, rems, percentages, vh, vw, and more! Plus, adjust line heights while typing, turning writing and typesetting into a seamless act. In all, WordPress 5.5 brings more than 1,500 useful improvements to the block editor experience. To see all of the features for each release in detail, check out the release posts: 7.5, 7.6, 7.7, 7.8, 7.9, 8.0, 8.1, 8.2, 8.3, and 8.4. Wait! There’s more!

XML sitemaps

XML sitemaps are now included in WordPress and enabled by default. Sitemaps are essential to search engines discovering the content on your website. Your site’s home page, posts, pages, custom post types, and more will be included to improve your site’s visibility.

Auto-updates for plugins and themes

WordPress 5.5 also brings auto-updates for plugins and themes. Easily control which plugins and themes keep themselves up to date on their own. It’s always recommended that you run the latest versions of all plugins and themes. The addition of this feature makes that easier than ever!

Lazy-loading images

WordPress 5.5 will include native support for lazy-loaded images utilizing new browser standards. With lazy-loading, images will not be sent to users until they approach the viewport. This saves bandwidth for everyone (users, hosts, ISPs), makes it easier for those with slower internet speeds to browse the web, saves electricity, and more.

Better accessibility

With every release, WordPress works hard to improve accessibility. Version 5.5 is no different and packs a parcel of accessibility fixes and enhancements.
Take a look:
- List tables now come with extensive, alternate view modes.
- Link-list widgets can now be converted to HTML5 navigation blocks.
- Copying links in media screens and modal dialogs can now be done with a simple click of a button.
- Disabled buttons now actually look disabled.
- Meta boxes can now be moved with the keyboard.
- A custom logo on the front page no longer links to the front page.
- Assistive devices can now see status messages in the Image Editor.
- The shake animation indicating a login failure now respects the user’s choices in the prefers-reduced-motion media query.
- Redundant Error: prefixes have been removed from error notices.

Miscellaneous Changes

- Plugins and themes can now be updated by uploading a ZIP file.
- More finely grained control of redirect_guess_404_permalink().
- Several packaged external libraries have been updated, including PHPMailer, SimplePie, Twemoji, Masonry, and more!

Keep your eyes on the Make WordPress Core blog for 5.5-related developer notes in the coming weeks, breaking down these and other changes in greater detail. So far, contributors have fixed more than 350 tickets in WordPress 5.5, including 155 new features and enhancements, and more bug fixes are on the way.

How You Can Help

Do you speak a language other than English? Help translate WordPress into more than 100 languages! If you think you’ve found a bug, please post to the Alpha/Beta area in the support forums. We would love to hear from you! If you’re comfortable writing a reproducible bug report, file one on WordPress Trac. That’s also where you can find a list of known bugs. Props to @webcommsat, @yvettesonneveld, @estelaris, and @marybaum for compiling/writing this post, @davidbaumwald for editing/proofreading, and @cbringmann, @desrosj, and @andreamiddleton for final review.

CVE-2020-5902: Helping to protect against the F5 TMUI RCE vulnerability

CloudFlare Blog -

Cloudflare has deployed a new managed rule protecting customers against a remote code execution vulnerability that has been found in F5 BIG-IP’s web-based Traffic Management User Interface (TMUI). Any customer who has access to the Cloudflare Web Application Firewall (WAF) is automatically protected by the new rule (100315), which has a default action of BLOCK. Initial testing on our network has shown that attackers started probing and trying to exploit this vulnerability on July 3. F5 has published detailed instructions on how to patch affected devices, how to detect whether attempts have been made to exploit the vulnerability on a device, and how to add a custom mitigation. If you have an F5 device, read their detailed mitigations before reading the rest of this blog post. The most popular probe URL appears to be /tmui/login.jsp/..;/tmui/locallb/workspace/fileRead.jsp, followed by /tmui/login.jsp/..;/tmui/util/getTabSet.jsp, /tmui/login.jsp/..;/tmui/system/user/authproperties.jsp and /tmui/login.jsp/..;/tmui/locallb/workspace/tmshCmd.jsp. All contain the critical pattern ..; which is at the heart of the vulnerability. On July 3 we saw O(1k) probes, ramping to O(1m) yesterday.
This is because simple test patterns have been added to scanning tools, and small test programs have been made available by security researchers.

The Vulnerability

The vulnerability was disclosed by the vendor on July 1 and allows both authenticated and unauthenticated users to perform remote code execution (RCE). Remote code execution is a type of code injection which gives the attacker the ability to run arbitrary code on the target application, allowing them, in most scenarios such as this one, to gain privileged access and perform a full system takeover. The vulnerability affects the administration interface only (the management dashboard), not the underlying data plane provided by the application.

How to Mitigate

If updating the application is not possible, the attack can be mitigated by blocking all requests whose URL matches the following regular expression:

.*\.\.;.*

The above regular expression matches two dot characters (.) followed by a semicolon, within any sequence of characters. Customers who are using the Cloudflare WAF and have their F5 BIG-IP TMUI interface proxied behind Cloudflare are already automatically protected from this vulnerability with rule 100315. If you wish to turn off the rule or change the default action:
- Head over to the Cloudflare Firewall, then click on Managed Rules and open the advanced link under the Cloudflare Managed Rule set,
- Search for rule ID 100315,
- Select any appropriate action or disable the rule.

7 Mobile Commerce Best Practices for Any Website

HostGator Blog -

There is a reason why brands are investing more and more time and energy into setting up efficient online stores. The popularity of eCommerce is soaring, and the online shopping business in the USA is growing 3X faster than the offline segment. Professionals expect worldwide retail eCommerce sales to reach $4.9 trillion by 2021, and only to grow from there. By 2040, 95% of shopping experiences will take place online. While eCommerce is growing, so are the functionality of online commerce tech and the competition from other brands. If you want to stay ahead in the rapid-paced online commerce world, you must keep up with eCommerce best practices and provide a top-notch customer shopping experience. One surefire way to do this is to optimize your website, not only for desktop purchases but for smooth mobile shopping experiences as well. This article will offer seven mobile commerce best practices that will help you create a stellar mobile shopping experience for your customers.

1. Optimize your app for mobile navigation by categorizing your products

Increasing your mobile sales all comes down to how you organize your mobile shopping experience. Your customers only have 6 inches by 2 inches of screen space to understand what products you are offering, so you have to be smart about your user design. This is especially critical if you sell hundreds of products. You can make it easy for customers to find what they are looking for by dividing your products into categories. I like the following example from Sephora’s mobile-friendly site. As you know, Sephora carries hundreds of products. Thankfully, they make it easy to find what you are looking for with the use of categories and thumbnails. Consumers can quickly scan these 12 different thumbnails, locate what they are looking for, and click on the most relevant thumbnail.
The Sephora shopping app will then redirect shoppers to another easy-to-navigate menu to fine-tune results. The Sephora shopping experience remains clear-cut and organized throughout the entire process, making it possible for make-up and skincare lovers to locate products in just a few clicks of a button.

2. Create a positive in-app search experience

Sometimes customers know exactly what they want when they come to your mobile-friendly website or visit your mobile app. The last thing you want customers to do is to waste time browsing through all your various categories to find the exact product they need. Instead, include a search bar on the navigation bar that remains visible no matter what page your customer visits. Here is an example from DoorDash. With a click of a button, DoorDashers can enter the restaurant or cuisine they want and can find what they are looking for without a hitch. When the product search bar is clearly and always visible, it makes it easy for customers to locate what they want in seconds.

3. Make it easy to check out

Ever been interested in a product and then couldn’t figure out how to add it to your cart and gave up? Worse, have you ever been too lazy to get up and get your credit card, so you abandoned your purchase? I’ve been in both of those situations. If the check-out process is even the tiniest bit difficult, I’ll be the first one to divert my attention elsewhere. I’m not the only one: the average cart abandonment rate is nearly 68%. If you want to push your consumers over the finish line, it’s up to you to make it convenient. Here’s how:
- Include as few steps as possible. The more steps consumers have to take to get from seeing a product to purchasing it, the less likely they will be to buy.
- Allow people to edit their order within the cart. I recently came across a website that didn’t allow me to edit the quantity in the cart. I had to remove the item, go back and re-find the item, and then re-add the correct quantity.
It seems simple, but those unnecessary extra steps turned me off to the mobile site.
- Present payment options. There are several mobile commerce technologies that integrate popular payment methods (e.g., PayPal, Apple Pay, Amazon Pay) into your website. Options make purchasing easier for the consumer.

4. Show off your security measures to shoppers

Eighty-two percent of Americans surveyed in a study said they worry about online security. Despite these concerns, 15% of websites are still not encrypted with SSL technology. While it’s an absolute must to encrypt your website with SSL technology, providing shoppers with extra security for your mobile store requires a bit more work. Show your customers you care about enhanced security by:
- Using lock icons to let shoppers know you use an encrypted payment system.
- Including the word “secure” in bold, noticeable type on your check-out page.
- Providing trusted payment platform options like Apple Pay, Amazon Pay, and PayPal.
- Looking into the use of cybersecurity providers like McAfee Secure and Norton Secured.
- Adding a “learn more” or a “privacy statement” icon customers can click on for review (see the example from Starbucks below).

Do everything you can to clearly communicate that you take extra security precautions for secure payment processing.

5. Emphasize promotions

There is a reason nearly every time you navigate to a new mobile commerce website, you see a pop-up offering a one-time coupon in exchange for an email address. Well, there are two reasons. One is to capture new email subscribers, but the second is to encourage shoppers to spend money the first time they visit the site. Offering mobile coupons works. In fact, 77% of customers spend about $10-$50 more when they’re redeeming a mobile coupon. Since mobile coupons can positively influence your sales revenues, it makes sense to draw attention to your promotions. Here’s how.
Include a coupon for first-time visitors

Twenty-eight percent of shoppers are likely to spend more money if a retailer offers a percent off their total. If you want customers to spend more, incentivize them. If you have a WordPress website, you can find a WordPress plugin that will help you create the perfect mobile-responsive coupon pop-up.

Highlight free shipping

Did you know that 61% of customers abandon their purchase when they discover extra costs? Additionally, 24% of shoppers will abandon their cart if they can’t see or calculate shipping costs upfront. A good way to avoid this debacle, besides full transparency, is to offer free shipping. If you provide this luxury to your customers, make a big deal about it. Put your “free shipping” notice front and center. Another strategy to increase sales is to offer free shipping for orders that reach a certain purchasing threshold. Sephora gets me every time. I always end up adding lip gloss, mascara, or a face mask to hit their free shipping minimum. If every consumer does the same, it translates into boosted sales revenues for Sephora.

Put holiday promotions front and center

Is it Black Friday? Valentine’s Day? Mother’s Day? Shopping always increases around the holidays. Smart brands will capitalize on these high sales days by offering a holiday promotion. Don’t make it difficult for your customers to find your holiday promotions on your mobile site. Place a link to your promotion at the top of the page, like the example above from Kizik.

6. Provide easy package tracking

Making sales is exciting, but repeat business is really where the money is at. Stats show that 65% of a company’s business comes from existing customers, and the probability of selling to a customer you already have is 60-70 percent. How do you use your mobile site as a tool for increased retention rates? You make the user experience a delight. This includes making it a cinch for customers to track their orders.
There are several shipping tracking apps you can partner with that will revolutionize the tracking experience. These apps give customers the option to enter their email or phone number for real-time shipping notifications.

7. Add a wishlist or save feature option

Let’s say you sell hundreds of clothing items on your website. How do you make it easy for customers to browse your collections, pick out what they like, and follow through with their purchase? That’s right! You add a wishlist or save feature option. You’ve used it before. It’s essentially a virtual shopping cart where people can add or eliminate products with the click of a button. It’s magic, and no mobile store is complete without this feature.

Boost Sales Now With These Mobile Commerce Best Practices

Creating an easy mobile shopping experience is integral to the success of your eCommerce business. With the right tools, it’s easy to create a mobile-responsive online store with full eCommerce support. For more information on how to build your mobile website, visit HostGator today.

Digital Realty Launches Development of Second Data Centre in Hong Kong

My Host News -

HONG KONG – Digital Realty (NYSE: DLR), a leading global provider of data centre, colocation and interconnection solutions, announced today the development of a new, carrier-neutral data centre in a purpose-built facility in Hong Kong – to be named Digital Realty Kin Chuen (HKG11). The move marks another significant expansion of PlatformDIGITAL across Asia Pacific, closely following the recent groundbreaking of Digital Realty’s new data centre in Seoul, Korea. The Hong Kong facility will enable customers to rapidly scale digital transformation strategies by deploying critical infrastructure with a leading global data centre provider at the heart of a growing community of interest. Digital Realty entered Hong Kong in 2012 with the acquisition of Digital TKO (HKG10), located within the Tseung Kwan O industrial estate and capable of delivering up to 18 megawatts of critical IT capacity. The new facility is strategically located in Kwai Chung, Hong Kong’s rapidly developing new data centre cluster and the primary auxiliary location outside Tseung Kwan O, providing the ability to cater to diverse, multi-site workloads. Upon completion, the new, 21,000 square-metre building will deliver up to 24 megawatts of critical IT capacity. The new data centre will support the continued development of Hong Kong as a key technology and data hub and drive the adoption of cloud computing services and solutions across the region. The facility is expected to be built out and ready for global and regional customers by mid-2021. “Our investment in Hong Kong is another important milestone on our global platform roadmap, enabling customers’ digital transformation strategies while demonstrating our commitment to supporting their future growth on PlatformDIGITAL,” said Digital Realty Chief Executive Officer A. William Stein.
“As we continue to expand in Asia, the launch of our second facility in Hong Kong underscores its importance as a major data hub, providing customers with the coverage, capacity and connectivity requirements to support their digital ambitions.” The HKG11 facility will be built up to a total of 12 floors, eight of which will be dedicated to customer deployments. The new facility will also offer superior connectivity through close access to various facilities-based operators. “Hong Kong is a regional leader in cloud readiness and has significant potential for further cloud adoption along with a strong base of customers with an appetite for digital technologies,” added Mark Smith, Managing Director, Asia Pacific for Digital Realty. “We are delighted to launch our new facility, which will go a long way towards meeting the rapidly growing demand and bringing value to customers across the region, especially from China.” Hong Kong is well placed among Asian cities in terms of cloud readiness. The city claimed the top spot in the recent Cloud Readiness Index (CRI) based on cloud infrastructure, security, and regulation, according to the Asia Cloud Computing Association (ACCA). The index found that Hong Kong is already a strong regional performer in fundamental readiness areas such as cloud regulation and infrastructure. An opportunity exists for the city to strengthen areas such as cloud governance and security to spur broader and faster cloud adoption, according to the study. Digital Realty is one of the world’s largest owners, developers and operators of highly reliable data centre facilities. The new Hong Kong development will strengthen Digital Realty’s presence within the Asia Pacific region, where the company currently operates a network of industry-leading data centres located in Tokyo, Osaka, Hong Kong, Singapore, Sydney and Melbourne and recently broke ground on its first facility in Seoul, currently scheduled to open for customers by the end of 2021.
About Digital Realty Digital Realty supports the data centre, colocation and interconnection strategies of customers across the Americas, EMEA and APAC, ranging from cloud and information technology services, communications and social networking to financial services, manufacturing, energy, healthcare and consumer products. To learn more about Digital Realty, please visit, or follow us on Twitter at @digitalapac and visit our industry insights at

Alibaba Cloud to Launch Third Datacentre in Indonesia in Early 2021

My Host News -

Jakarta, Indonesia – Alibaba Cloud, the digital technology and intelligence backbone of Alibaba Group, is to launch its third datacenter in Indonesia early next year. The move is in response to the fast-growing demand from its Indonesian customers, many of whom are increasingly adopting the cloud and rapidly progressing their digital transformation strategies. The expansion comes after Alibaba Cloud Indonesia built its first datacenter in 2018, with the second datacenter constructed in 2019. Upon completion of the third, Alibaba Cloud will have 64 availability zones across 21 regions worldwide. In addition, the trusted cloud computing giant revealed its plan to build its first data scrubbing centre in the country, helping Indonesian customers – especially those in the finance and gaming sectors – to fend off cyberattacks. “On this very occasion, I would like to extend my congratulations to Alibaba Cloud for the on-going development of the Availability Zone and the Data Scrubbing Centre in Indonesia, which is hoped to be launched by 2021,” said the Minister of Communication and Information Technology, Johnny G. Plate. Minister Johnny sees that this initiative will be one of the ways to accelerate the country’s digital transformation. “We hope that the data management process in the centre will be in line with the Indonesian Government principles in data management, such as lawfulness, fairness, and transparency,” he added. The new datacenter would also enrich Alibaba Cloud’s local offerings, providing a comprehensive suite of cloud product and services from database, security, network to machine learning and data analytics. “We aim to keep expanding our leading products and innovative services to meet the diverse demand from our customers across different sectors, including e-commerce, finance, online media, education and gaming.” Chen added. 
Meanwhile, the opening of the first data scrubbing centre in Indonesia, which is estimated to be completed in early 2021, will help detect, analyse and remove Tbps-level volumes of malicious traffic to defend against distributed denial of service (DDoS) attacks, especially for finance and gaming businesses, which are common targets of cyberattacks. “While cyberattacks such as DDoS have grown in intensity and sophistication, especially at a time when more businesses are moving their IT infrastructure onto cloud, Alibaba Cloud’s Anti-DDoS service – which is deployed on its globally distributed data scrubbing centres – can automatically mitigate attacks and reinforce the security of clients’ applications, significantly reducing the threat of malicious attacks.” Chen said. Alibaba Cloud has been ranked as the number one public cloud service provider in APAC and number three globally, according to Gartner. Since it started serving the market in 2016, Alibaba Cloud Indonesia has introduced proven technologies from the Alibaba ecosystem to serve a wide array of Indonesian customers, including those in e-commerce, finance, logistics, gaming, education and media. In January 2020, Alibaba Cloud announced the launch of a ‘Partner Alliance Program’, an initiative with local ecosystem partners to promote cloud adoption and the use of data intelligence among businesses of all sizes and kinds. It has also partnered with universities, incubator and training institutions to support digital talent development in Indonesia. For more information about how Alibaba Cloud has been supporting local Indonesian clients, please visit: About Alibaba Cloud Established in 2009, Alibaba Cloud (, the data intelligence backbone of Alibaba Group, is among the world’s top three IaaS providers, according to Gartner, and the largest provider of public cloud services in China, according to IDC.
Alibaba Cloud provides a comprehensive suite of cloud computing services to businesses worldwide, including merchants doing business on Alibaba Group marketplaces, start-ups, corporations and government organisations. Alibaba Cloud is the official Cloud Services Partner of the International Olympic Committee.

Green House Data and Zerto Fast-Track Digital Transformation with Hybrid Cloud Resilience and Portability

My Host News -

CHEYENNE, Wyo. & BOSTON – Green House Data, a leading provider of digital transformation consulting and managed IT services, today announced a strategic alignment with Zerto to accelerate digital transformation initiatives centered around hybrid cloud resilience and multi-cloud application migration.

Zerto is an industry-leading software solution that replaces legacy solutions with a single platform enabling disaster recovery, data protection, and workload mobility across hyperscale clouds, hosted services, and on-premises data centers, all of which reduces the risk and complexity of modernization and cloud adoption. Together, the two organizations help enterprises architect, test, migrate, and protect critical applications and data, even within complex, interdependent hybrid environments.

“Zerto is a cornerstone solution for our platform-agnostic cloud services,” said Green House Data CIO Cortney Thompson. “This partnership will bring our staff and solutions into close alignment with Zerto’s expertise for stronger service delivery and resilient hosting platforms that efficiently enable modern IT multi-cloud agility for our clients.”

As more IT environments span on-premises data centers, service provider partners, and hyperscale cloud platforms like Azure and AWS, workload portability and agility have become vital. Meanwhile, enterprise technology faces expectations of 100% continuous availability.

“Oftentimes the migration stages of a digital transformation effort turn into painful sticking points, with complex planning and systems testing required, especially when we deal with client-facing production workloads,” said Green House Data Senior Vice President of Digital Transformation Victor Tingler.
“Customers have been extremely satisfied with Zerto, and we attribute that to the ease of use when it comes to testing that environment cutover, gaining high confidence in your destination environment in terms of performance and configuration before you hit the button to migrate or fail over.”

Green House Data has leveraged the Zerto IT Resilience Platform to facilitate digital transformation in numerous customer engagements, including zero-downtime cloud migrations and ongoing business continuity with near real-time recovery time objective (RTO) requirements.

“As one of our most flexible vendor partners, Green House Data has demonstrated the versatility and value of Zerto for true hybrid cloud environments across a wide range of industries,” said Emily Weeks, director of sales, Cloud and Alliances at Zerto. “With longstanding expertise in disaster recovery, their engineering and support teams are highly proficient in the use of Zerto for both resilience and migration. We look forward to continuing our work together to help clients meet the challenges of modern IT service delivery.”

About Green House Data

As a leading managed service provider and consulting firm, Green House Data is focused on helping customers advance their digital transformation goals by modernizing business applications, migrating solutions to the cloud, designing hybrid cloud solutions, and applying agile and DevOps engineering practices to build new, innovative solutions. Our portfolio of services is designed to provide continuous improvement along each step of the IT journey to maximize business value and success. We are a Microsoft Gold Partner, Azure Expert MSP, and VMware Cloud Verified partner offering deep expertise in the Microsoft ecosystem and enterprise IT software and services. Visit us at to learn more and follow us on LinkedIn, Facebook, and Twitter.

About Zerto

Zerto helps customers accelerate IT transformation by reducing the risk and complexity of modernization and cloud adoption.
By replacing multiple legacy solutions with a single IT Resilience Platform, Zerto is changing the way disaster recovery, data protection and cloud are managed. With enterprise scale, Zerto’s software platform delivers continuous availability for an always-on customer experience while simplifying workload mobility to protect, recover and move applications freely across hybrid and multi-clouds. Zerto is trusted globally by over 8,000 customers, works with more than 1,500 partners and is powering resiliency offerings for 450 managed services providers. Learn more at

