Industry Buzz

Visualize and Monitor Highly Distributed Applications with Amazon CloudWatch ServiceLens

Amazon Web Services Blog -

Increasingly distributed applications, with thousands of metrics and terabytes of logs, can be a challenge to visualize and monitor. Gaining end-to-end insight into the applications and their dependencies, to enable rapid pinpointing of performance bottlenecks, operational issues, and customer impact, quite often requires the use of multiple dedicated tools, each presenting its own particular facet of information. This in turn leads to more complex data ingestion, manual stitching together of the various insights to determine overall performance, and increased costs from maintaining multiple solutions. Amazon CloudWatch ServiceLens, announced today, is a new fully managed observability solution that helps you visualize and analyze the health, performance, and availability of highly distributed applications, including those with dependencies on serverless and container-based technologies, all in one place. By enabling you to easily isolate endpoints and resources that are experiencing issues, together with analysis of correlated metrics, logs, and application traces, CloudWatch ServiceLens helps reduce Mean Time to Resolution (MTTR) by consolidating all of this data in a single place using a service map. From this map you can understand the relationships and dependencies within your applications, and dive deep into the various logs, metrics, and traces from a single tool to help you quickly isolate faults. Crucial time otherwise spent correlating metrics, logs, and trace data across various tools is saved, reducing downtime for end users.

Getting Started with Amazon CloudWatch ServiceLens

Let’s see how we can take advantage of Amazon CloudWatch ServiceLens to diagnose the root cause of an alarm triggered by an application. My sample application reads and writes transaction data to an Amazon DynamoDB table using AWS Lambda functions.
An Amazon API Gateway is my application’s front-end, with resources for GET and PUT requests, directing traffic to the corresponding GET and PUT Lambda functions. The API Gateway resources and the Lambda functions have AWS X-Ray tracing enabled, and the API calls to DynamoDB from within the Lambda functions are wrapped using the AWS X-Ray SDK. You can read more about how to instrument your code, and work with AWS X-Ray, in the Developer Guide. An error condition has triggered an alarm for my application, so my first stop is the Amazon CloudWatch Console, where I click the Alarm link. I can see that there is an availability issue with one or more of my API Gateway resources. Let’s drill down to see what might be going on. First, I want to get an overview of the running application, so I click Service Map under ServiceLens in the left-hand navigation panel. The map displays nodes representing the distributed resources in my application. The relative size of the nodes represents the amount of request traffic each is receiving, as does the thickness of the links between them. I can toggle the map between Requests mode and Latency mode. Using the same drop-down, I can also toggle the option to change the relative sizing of the nodes. The data shown for Requests mode or Latency mode helps me isolate the nodes I need to triage first. Clicking View connections can also aid in the process, since it helps me understand incoming and outgoing calls, and their impact on the individual nodes. I’ve closed the map legend in the screenshot so we can get a good look at all the nodes; for reference, here it is below. From the map I can immediately see that my front-end gateway is the source of the triggered alarm. The red indicator on the node shows me that there are 5xx faults associated with the resource, and the length of the indicator relative to the circumference gives me some idea of how many requests are faulting compared to succeeding.
Secondly, I can see that the Lambda functions handling PUT requests through the API are showing 4xx errors. Finally, I can see a purple indicator on the DynamoDB table, indicating that throttling is occurring. At this point I have a pretty good idea of what might be happening, but let’s dig a little deeper to see what CloudWatch ServiceLens can help me prove. In addition to Map view, I can also toggle to List view. This gives me at-a-glance information on average latency, faults, and requests/min for all nodes, and is ordered by default to show the “worst” node first, using a sort order of descending by fault rate, then descending by number of alarms in alarm, then ascending by node name. Returning to Map view, hovering my mouse over the node representing my API front-end gives me similar insight into traffic and the percentage of faulting requests, specific to that node. To see even more data for any node, clicking it opens a drawer below the map containing graphed data for that resource. In the screenshot below I have clicked the ApiGatewayPutLambdaFunction node. The drawer, for each resource, enables me to jump to logs for the resource (View logs), traces (View traces), or a dashboard (View dashboard). Below, I have clicked View dashboard for the same Lambda function. Scrolling through the data presented for that resource, I note that the duration of execution is not high, while all invokes are going into error in tandem. Returning to the API front-end that is showing the alarm, I’d like to take a look at the request traces, so I click the node to open the drawer, then click View traces. I already know from the map that 5xx and 4xx status codes are being generated in the code paths selected by PUT requests coming into my application, so I switch Filter type to Status code, then select both the 502 and 504 entries in the list, finally clicking Add to filter.
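Stepping back to the List view for a moment, its default “worst node first” ordering is just a multi-key sort: fault rate descending, then alarms in alarm descending, then node name ascending. Here is a minimal sketch with made-up node records (purely illustrative — not a ServiceLens API):

```python
# Sketch of the ServiceLens List view default sort order:
# fault rate descending, alarms-in-alarm descending, node name ascending.
# The node records below are hypothetical.
def worst_first(nodes):
    return sorted(
        nodes,
        key=lambda n: (-n["fault_rate"], -n["alarms_in_alarm"], n["name"]),
    )

nodes = [
    {"name": "GetLambda",  "fault_rate": 0.00, "alarms_in_alarm": 0},
    {"name": "ApiGateway", "fault_rate": 0.12, "alarms_in_alarm": 1},
    {"name": "PutLambda",  "fault_rate": 0.12, "alarms_in_alarm": 2},
    {"name": "DynamoDB",   "fault_rate": 0.05, "alarms_in_alarm": 0},
]

print([n["name"] for n in worst_first(nodes)])
# → ['PutLambda', 'ApiGateway', 'DynamoDB', 'GetLambda']
```

Note how the two nodes tied on fault rate are broken apart by the number of alarms in alarm, exactly as the console describes.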
The Traces view switches to show me the traces that resulted in those error codes, the response time distribution, and a set of traces. OK, now we’re getting close! Clicking the first trace ID, I get a wealth of data about that request, including exception messages – more than I can show in a single screenshot! For example, I can see the timelines of each segment that was traced as part of the request. Scrolling down further, I can view exception details (below this I can also see log messages specific to the trace) – and here lies my answer, confirming the throttling indicator that I saw in the original map. I can also see this exception message in the log data specific to the trace, shown at the foot of the page. Previously, I would have had to scan through logs for the entire application and hope to spot this message; being able to drill down from the map is a significant time saver. Now I know how to fix the issue and get the alarm resolved – increase the write provisioning for the table! In conjunction with CloudWatch ServiceLens, Amazon CloudWatch has also launched a preview of CloudWatch Synthetics, which helps monitor endpoints using canaries that run 24×7, 365 days a year, so that I am quickly alerted to issues my customers are facing. These are also visualized on the Service Map and, just as I did above, I can drill down to the traces to view transactions that originated from a canary. The faster I can dive deep into a consolidated view of an operational failure or an alarm, the faster I can find the root cause, reduce time to resolution, and mitigate the customer impact. Amazon CloudWatch ServiceLens is available now in all commercial AWS Regions. — Steve

22 New Languages And Variants, 6 New Regions For Amazon Translate

Amazon Web Services Blog -

Just a few weeks ago, I told you about 7 new languages supported by Amazon Translate, our fully managed service for machine translation. Well, here I am again, announcing no less than 22 new languages and variants, as well as 6 additional AWS Regions where Translate is now available.

Introducing 22 New Languages And Variants

That’s what I call an update! In addition to existing languages, Translate now supports: Afrikaans, Albanian, Amharic, Azerbaijani, Bengali, Bosnian, Bulgarian, Croatian, Dari, Estonian, Canadian French, Georgian, Hausa, Latvian, Pashto, Serbian, Slovak, Slovenian, Somali, Swahili, Tagalog, and Tamil. Congratulations if you can name all countries and regions of origin: I couldn’t! With these, Translate now supports a total of 54 languages and variants, and 2804 language pairs. The full list is available in the documentation. Whether you are expanding your retail operations globally like Regatta, analyzing employee surveys like Siemens, or enabling multilingual chat in customer engagement like Verint, the new language pairs will help you further streamline and automate your translation workflows, by delivering fast, high-quality, and affordable language translation.

Introducing 6 New AWS Regions

In addition to existing regions, you can now use Translate in US West (N. California), Europe (London), Europe (Paris), Europe (Stockholm), Asia Pacific (Hong Kong), and Asia Pacific (Sydney). This brings the number of regions where Translate is available up to 17. This expansion is great news for many customers who will now be able to translate data in the region where it’s stored, without having to invoke the service in another region. Again, this will make workflows simpler, faster, and even more cost-effective.

Using Amazon Translate

In the last post, I showed you how to use Translate with the AWS SDK for C++. In the continued spirit of language diversity, let’s use the SDK for Ruby this time. Just run gem install aws-sdk to install it.
The simple program below opens a text file, then reads and translates one line at a time. As you can see, translating only requires one simple API call. Of course, it’s the same with other programming languages: call an API and get the job done!

require 'aws-sdk'

if ARGV.length != 2
  puts "Usage: translate.rb <filename> <target language code>"
  exit
end

translate = Aws::Translate::Client.new(region: 'eu-west-1')

File.open(ARGV[0], "r") do |f|
  f.each_line do |line|
    resp = translate.translate_text({
      text: line,
      source_language_code: "auto",
      target_language_code: ARGV[1],
    })
    puts(resp.translated_text)
  end
end

Here’s an extract from “Notes on Structured Programming”, a famous computer science paper published by E.W. Dijkstra in 1972. In my life I have seen many programming courses that were essentially like the usual kind of driving lessons, in which one is taught how to handle a car instead of how to use a car to reach one’s destination. My point is that a program is never a goal in itself; the purpose of a program is to evoke computations and the purpose of the computations is to establish a desired effect. Although the program is the final product made by the programmer, the possible computations evoked by it – the “making” of which is left to the machine! – are the true subject matter of his trade. For instance, whenever a programmer states that his program is correct, he really makes an assertion about the computations it may evoke. Let’s translate it to a few languages: how about Albanian, Hausa, Pashto and Tagalog?

$ ruby translate.rb dijkstra.txt sq

Në jetën time kam parë shumë kurse programimi që ishin në thelb si lloji i zakonshëm i mësimeve të vozitjes, në të cilën mësohet se si të merret me një makinë në vend se si të përdorësh një makinë për të arritur destinacionin e dikujt. Pika ime është se një program nuk është kurrë një qëllim në vetvete; qëllimi i një programi është të ndjell llogaritjet dhe qëllimi i llogaritjeve është të krijojë një efekt të dëshiruar.
Megjithëse programi është produkti përfundimtar i bërë nga programuesi, llogaritjet e mundshme të evokuara nga ai - “bërja” e të cilit i është lënë makinë! - janë çështja e vërtetë subjekt i tregtisë së tij. Për shembull, sa herë që një programues thotë se programi i tij është i saktë, ai me të vërtetë bën një pohim në lidhje me llogaritjet që mund të ndjell.

$ ruby translate.rb article.txt ha

A rayuwata na ga kwasa-kwasai da dama da suka kasance da gaske kamar irin darussan tuki da aka saba da su, inda ake koya wa mutum yadda zai rike mota maimakon yadda zai yi amfani da mota don kaiwa mutum makoma. Dalilina shi ne, shirin ba shi da wata manufa a kanta; manufar shirin shi ne tayar da komfuta kuma manufar ƙididdigar ita ce kafa tasirin da ake so. Ko da yake shirin shine samfurin karshe da mai shiryawa ya yi, ƙididdigar da za a iya amfani da ita - “yin” wanda aka bar shi zuwa na'ura! - su ne batun gaskiya game da cinikinsa. Alal misali, duk lokacin da mai shiryawa ya ce shirinsa daidai ne, yana yin tabbaci game da ƙididdigar da zai iya fitarwa.

$ ruby translate.rb dijkstra.txt ps

زما په ژوند کې ما د پروګرام کولو ډیری کورسونه لیدلي دي چې په اصل کې د معمول ډول ډول چلولو درسونو په څیر وو، په کوم کې چې دا درس ورکول کیږي چې څنګه د موټر سره معامله وکړي ترڅو د چا منزل ته ورسیږي. زما ټکی دا دی چې یو پروګرام هېڅکله هم په ځان کې هدف نه دی؛ د یوه پروګرام هدف دا دی چې محاسبه راوباسي، او د محاسبې هدف دا دی چې یو مطلوب اثر رامنځته کړي. که څه هم دا پروګرام وروستی محصول دی چې د پروګرام لخوا جوړ شوی، هغه ممکنه حسابونه چې د هغه لخوا رامینځته شوي - د چا «جوړولو» ماشین ته پریښودل کیږي! - اصلي مسله د هغه د سوداګرۍ موضوع ده. د مثال په توګه، کله چې یو پروګرام کوونکی وايي چې د هغه پروګرام سم دی، هغه په حقیقت کې د هغه محاسبې په اړه یو ادعا کوي چې هغه یې کولی شي.
$ ruby translate.rb dijkstra.txt tl

Sa aking buhay nakita ko ang maraming mga kurso sa programming na karaniwang tulad ng karaniwang uri ng mga aralin sa pagmamaneho, kung saan itinuturo kung paano haharapin ang isang kotse upang makapunta sa patutunguhan ng isang tao. Ang aking punto ay ang isang programa ay hindi kailanman naglalayong mismo; ang layunin ng isang programa ay upang gumuhit ng mga kalkulasyon, at ang layunin ng accounting ay upang lumikha ng nais na epekto. Kahit na ang programa ay ang huling produkto na nilikha ng programa, ang mga posibleng kalkulasyon na nilikha niya - na nag-iiwan ng isang tao na “bumuo” ng makina! - Ang pangunahing isyu ay ang paksa ng kanyang negosyo. Halimbawa, kapag sinabi ng isang programmer na tama ang kanyang programa, talagang gumagawa siya ng claim tungkol sa pagkalkula na maaari niyang gawin.

Available Now!

The new languages and the new regions are available today. If you’ve never tried Amazon Translate, did you know that the free tier offers 2 million characters per month for the first 12 months, starting from your first translation request? Also, which languages should Translate support next? We’re looking forward to your feedback: please post it to the AWS Forum for Amazon Translate, or send it to your usual AWS support contacts. — Julien

Safe Deployment of Application Configuration Settings With AWS AppConfig

Amazon Web Services Blog -

A few years ago, we identified a need for an internal service to enable us to make configuration changes faster than traditional code deployments, but with the same operational scrutiny as code changes. So, we built a tool to address this need, and now most teams across AWS, Kindle, and Alexa use this configuration deployment system to perform changes dynamically in seconds, rather than minutes. The ability to deploy only configuration changes, separate from code, means we do not have to restart the applications or services that use the configuration, and changes take effect immediately. This service is used thousands of times a day inside Amazon and AWS. A common question we receive from customers is how they can operate like us, so we decided to take this internal tooling and externalize it for our customers to use. Today, we announced AWS AppConfig, a feature of AWS Systems Manager. AWS AppConfig enables customers to quickly roll out application configuration changes, independent of code, across any size application hosted on Amazon Elastic Compute Cloud (EC2) instances, containers, and serverless applications and functions. Configurations can be created and updated using the API or console, and you can have the changes validated prior to deployment using either a defined schema template or an AWS Lambda function. AWS AppConfig also includes automated safety controls to monitor the deployment of the configuration changes, and to roll back to the previous configuration if issues occur. Deployments of configuration updates are available immediately to your running application, which uses a simple API to periodically poll for and retrieve the latest available configuration.

Managing Application Configuration Settings with AWS AppConfig

Using AWS AppConfig to create and manage configuration settings for applications is a simple process.
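The poll-and-retrieve pattern just mentioned can be sketched as follows. The client object is injected so the sketch is self-contained and runnable anywhere; with boto3 it would be an appconfig client, and the parameter names follow my reading of the launch-time GetConfiguration API — treat them as assumptions:

```python
import json

# Sketch of an application polling AppConfig for its latest configuration.
# Sending the last-seen version lets the service reply with empty content
# when nothing has changed, so the cached copy is reused.
class ConfigPoller:
    def __init__(self, client, app, env, profile, client_id):
        self.client = client
        self.app, self.env, self.profile, self.client_id = app, env, profile, client_id
        self.version = None   # last configuration version we received
        self.cached = None    # last parsed configuration

    def poll(self):
        resp = self.client.get_configuration(
            Application=self.app, Environment=self.env,
            Configuration=self.profile, ClientId=self.client_id,
            ClientConfigurationVersion=self.version)
        if resp["Content"]:   # empty content means "unchanged, use the cache"
            self.version = resp["ConfigurationVersion"]
            self.cached = json.loads(resp["Content"])
        return self.cached

# A fake client standing in for boto3, so the example runs without AWS.
class FakeAppConfig:
    def get_configuration(self, **kwargs):
        if kwargs["ClientConfigurationVersion"] == "2":
            return {"Content": b"", "ConfigurationVersion": "2"}
        return {"Content": b'{"featureX": true}', "ConfigurationVersion": "2"}

poller = ConfigPoller(FakeAppConfig(), "MyApp", "Production", "FeatureFlags", "instance-1")
print(poller.poll())  # → {'featureX': True}
print(poller.poll())  # second call is a cache hit: → {'featureX': True}
```

In a real deployment the poll would run on a timer, and the returned dictionary would gate features in application code.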
First I create an Application, and then for that application I define one or more Environments, Configuration profiles, and Deployment strategies. An environment represents a logical deployment group, such as Beta or Production, or it might be a subcomponent of an application, for example Web, Mobile, and Back-end components. I can configure Amazon CloudWatch alarms for each environment, which will be monitored by AWS AppConfig and trigger a rollback if an alarm fires during deployment. A configuration profile defines the source of the configuration data to deploy, together with optional Validators that will ensure your configuration data is correct prior to deployment. A deployment strategy defines how the deployment is to be performed. To get started, I go to the AWS Systems Manager dashboard and select AppConfig from the navigation panel. Clicking Create configuration data takes me to a page where I specify a name for my application, together with an optional description, and I can also apply tags to the application if I wish. Once I have created my application, I am taken to a page with Environments and Configuration Profiles tabs. Let’s start by creating an environment to target my production environment for the application. With the Environments tab selected, I click Create environment. On the resulting page, I give my environment a name, and then, under Monitors, select Enable rollback on CloudWatch alarms. I then supply an IAM role that enables AWS AppConfig to monitor the selected alarms during deployments, and click Add. Details about the permissions needed for the role can be found here. Next, I set up a configuration profile. Returning to the details view for my application, I select the Configuration Profiles tab, and click Create configuration profile. The first details needed are a name for the configuration and a service role that AWS AppConfig will assume to access the configuration.
I can choose to create a new role, or choose an existing role. Clicking Next, I then select the source of the configuration data that will be deployed. I can choose from a Systems Manager Document, or a parameter in Parameter Store. For this walk-through, I’m going to select a parameter that, when set to true, will enable the functionality related to that feature for users of this sample application. Next, I can optionally add one or more validators to the profile, so that AWS AppConfig can verify that when I attempt to deploy an updated setting, the value is correct and not going to cause an application failure. Here, I am going to use a JSON schema, a syntactic check. If I wanted to run code to perform validation, a semantic check, I would configure a Lambda function instead. Once I have entered the schema, I click Create configuration profile to complete the process. One more task remains: to configure a Deployment strategy that will govern the rollout. I can do this from the AWS AppConfig dashboard by selecting the Deployment Strategies tab and clicking Create deployment strategy. Here I have configured a strategy that will cause AWS AppConfig to deploy to one-fifth (20%) of my instances hosting the application at a time. Monitoring is performed during the deployment and during a ‘bake’ time that follows once all deployments complete. If anything goes wrong during deployment or the bake time, the alarms associated with the environment will trigger and a rollback will occur. You can read more about the options you can set for a strategy here. Clicking Create deployment strategy completes this step and makes the strategy available for selection during deployment. With an environment and a configuration profile configured for my application, and a deployment strategy in place – I can add more of each – I am ready to start a deployment. Navigating to my configuration profile, I click Start deployment.
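A strategy like this – linear growth, 20% of targets per step until everyone has the new configuration, then a bake period under alarm monitoring – comes down to simple arithmetic. Here is a sketch with hypothetical numbers (this is not the AppConfig API):

```python
# Sketch of a linear deployment strategy: grow the set of targets by a
# fixed percentage each interval until 100% have the new configuration.
# After the final step, AppConfig monitors alarms for a 'bake' time.
# Numbers are hypothetical illustrations.
def rollout_steps(total_targets, growth_percent):
    covered = 0
    steps = []
    while covered < total_targets:
        batch = max(1, total_targets * growth_percent // 100)
        covered = min(total_targets, covered + batch)
        steps.append(covered)
    return steps

# 20% growth over 10 instances: five steps of two instances each.
print(rollout_steps(10, 20))  # → [2, 4, 6, 8, 10]
```

If an environment alarm fires at any point along those steps (or during the bake time), the deployment rolls back instead of advancing.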
I select the environment I want to deploy to, the version of the parameter that contains the configuration change I want to deploy, the deployment strategy, and I can enter an optional description. The parameter version I selected is 2. Version 1 of the parameter had the value { “featureX”: false }. Version 2 of the parameter has the value { “featureX”: true }, enabling the functionality associated with that feature for my users as soon as it is deployed. Clicking Start deployment starts the validation and deployment process. I know that the value of version 2 of the parameter is valid, so deployment will proceed after validation. As soon as the deployment becomes available to a target, on its next poll for configuration the target will receive the updated configuration, and the new features associated with featureX will be safely enabled for my users! In this post I presented a very simple experience based around a feature toggle, enabling me to instantly turn on features that might require a timely rollout (for example, a new product launch or announcement). AWS AppConfig can be used for many different use cases; here are a couple more examples to consider:

A/B Testing: perform experiments on which version of an application earns more revenue.
User Membership: allow Premium Subscribers to access an application’s paid content.

Note that the targets receiving deployments from AWS AppConfig do not need to be configured with the Systems Manager Agent, or have an AWS Identity and Access Management (IAM) instance profile, which is required by other Systems Manager capabilities. This enables me to use AWS AppConfig with managed and unmanaged instances. You can read more about AWS AppConfig in the User Guide, and pricing information is available here. AWS AppConfig is available now to customers in all commercial AWS Regions. — Steve

Announcing AWS Managed Rules for AWS WAF

Amazon Web Services Blog -

Building and deploying secure applications is critical work, and the threat landscape is always shifting. We’re constantly working to reduce the pain of maintaining a strong cloud security posture. Today we’re launching a new capability called AWS Managed Rules for AWS WAF that helps you protect your applications without needing to create or manage the rules directly. We’ve also made multiple improvements to AWS WAF with the launch of a new, improved console and API that makes it easier to keep your applications safe. AWS WAF is a web application firewall. It lets you define rules that give you control over which traffic to allow or deny to your application. You can use AWS WAF to help block common threats like SQL injection or cross-site scripting attacks. You can use AWS WAF with Amazon API Gateway, Amazon CloudFront, and Application Load Balancer. Today it’s getting a number of exciting improvements. Creating rules is more straightforward with the introduction of the OR operator, allowing evaluations that would previously require multiple rules. The API experience has been greatly improved, and complex rules can now be created and updated with a single API call. We’ve removed the limit of ten rules per web access control list (ACL) with the introduction of the WAF Capacity Unit (WCU). The switch to WCUs allows the creation of hundreds of rules. Each rule added to a web ACL consumes capacity based on the type of rule being deployed, and each web ACL has a defined WCU limit.

Using the New AWS WAF

Let’s take a look at some of the changes and turn on AWS Managed Rules for AWS WAF. First, I’ll go to AWS WAF and switch over to the new version. Next, I’ll create a new web ACL and add it to an existing API Gateway resource on my account. Now I can start adding some rules to our web ACL. With the new AWS WAF, the rules engine has been improved. Statements can be combined with AND, OR, and NOT operators, allowing for more complex rule logic.
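Jumping ahead a little: the simple demo rule I create next – block any request using the HTTP POST method – could be expressed as one of the new JSON rule documents along these lines. The field names here follow my reading of the WAFv2 rule shape (ByteMatchStatement, FieldToMatch, chained TextTransformations) and should be verified against the current API reference before use:

```python
import json

# Sketch of a WAFv2-style JSON rule that blocks the HTTP POST method,
# with two chained text transformations (HTML-entity decode, then
# lowercase) applied before matching. Field names are my reading of the
# WAFv2 rule JSON shape; verify against the current API reference.
rule = {
    "Name": "BlockHttpPost",
    "Priority": 0,
    "Statement": {
        "ByteMatchStatement": {
            "FieldToMatch": {"Method": {}},
            "PositionalConstraint": "EXACTLY",
            "SearchString": "post",
            "TextTransformations": [
                {"Priority": 0, "Type": "HTML_ENTITY_DECODE"},
                {"Priority": 1, "Type": "LOWERCASE"},
            ],
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "BlockHttpPost",
    },
}

# Because the rule is plain JSON, it can live in version control next to
# the application code and be deployed with a single API call.
print(json.dumps(rule, indent=2))
```

The transformations run in priority order, so the matched value is first HTML-entity decoded and then lowercased before the comparison against the search string.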
I’m going to create a simple rule that blocks any request that uses the HTTP method POST. Another cool feature is support for multiple text transformations, so, for example, you could have all your requests transformed to decode HTML entities, and then made lowercase. JSON objects now define web ACL rules (and web ACLs themselves), making them versionable assets you can match with your application code. You can also use these JSON documents to create or update rules with a single API call.

Using AWS Managed Rules for AWS WAF

Now let’s play around with something totally new: AWS Managed Rules. AWS Managed Rules give you instant protection. The AWS Threat Research Team maintains the rules, with new ones being added as additional threats are identified. Additional rule sets are available on the AWS Marketplace. Choose a managed rule group, add it to your web ACL, and AWS WAF immediately helps protect against common threats. I’ve selected a rule group that protects against SQL attacks, and also enabled the core rule set. The core rule set covers some of the common threats and security risks described in the OWASP Top 10 publication. As soon as I create the web ACL and the changes are propagated, my app will be protected from a whole range of attacks such as SQL injections. Now let’s look at both rules that I’ve added to our ACL and see how things are shaping up. Since my demo rule was quite simple, it doesn’t require much capacity. The managed rules use a bit more, but we’ve got plenty of room to add many more rules to this web ACL.

Things to Know

That’s a quick tour of the benefits of the new and improved AWS WAF. Before you head to the console to turn it on, there are a few things to keep in mind. The new AWS WAF supports AWS CloudFormation, allowing you to create and update your web ACL and rules using CloudFormation templates. There is no additional charge for using AWS Managed Rules.
If you subscribe to managed rules from an AWS Marketplace seller, you will be charged the managed rules price set by the seller. Pricing for AWS WAF has not changed. As always, happy (and secure) building, and I’ll see you at re:Invent or on the re:Invent livestreams soon! — Brandon

AWS Cloud Development Kit (CDK) – Java and .NET are Now Generally Available

Amazon Web Services Blog -

Today, we are happy to announce that Java and .NET support inside the AWS Cloud Development Kit (CDK) is now generally available. The AWS CDK is an open-source software development framework to model and provision your cloud application resources through AWS CloudFormation. The AWS CDK also offers support for TypeScript and Python. With the AWS CDK, you can design, compose, and share your own custom resources that incorporate your unique requirements. For example, you can use the AWS CDK to model a VPC, with its associated routing and security configurations. You could then wrap that code into a construct and share it with the rest of your organization. In this way, you can start to build up libraries of these constructs that you can use to standardize the way your organization creates AWS resources. I like that by using the AWS CDK, you can build your application, including the infrastructure, in your favorite IDE, using the same programming language that you use for your application code. As you code your AWS CDK model in either .NET or Java, you get productivity benefits like code completion and inline documentation, which make it faster to model your infrastructure.

How the AWS CDK Works

Everything in the AWS CDK is a construct. You can think of constructs as cloud components that can represent architectures of any complexity: a single resource, such as an Amazon Simple Storage Service (S3) bucket or an Amazon Simple Notification Service (SNS) topic, a static website, or even a complex, multi-stack application that spans multiple AWS accounts and regions. You compose constructs together into stacks, which you can deploy into an AWS environment, and into apps – collections of one or more stacks. The AWS CDK includes the AWS Construct Library, which contains constructs representing AWS resources.
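That composition model – constructs nested inside stacks, stacks inside an app – can be sketched as a plain tree. These are toy classes for illustration only, not the real AWS CDK types:

```python
# Conceptual sketch of the CDK composition model: an app holds stacks,
# stacks hold constructs, and constructs can hold other constructs.
# Toy classes for illustration; NOT the real AWS CDK classes.
class Construct:
    def __init__(self, scope, cid):
        self.node_id = cid
        self.children = []
        if scope is not None:
            scope.children.append(self)  # attach ourselves to the parent

    def walk(self):
        # depth-first traversal of the construct tree
        yield self
        for child in self.children:
            yield from child.walk()

class App(Construct):
    def __init__(self):
        super().__init__(None, "App")  # the app is the root of the tree

class Stack(Construct):
    pass

app = App()
stack = Stack(app, "MyStack")
Construct(stack, "Queue")
Construct(stack, "Topic")

print([c.node_id for c in app.walk()])
# → ['App', 'MyStack', 'Queue', 'Topic']
```

Synthesis in the real CDK is, conceptually, a walk like this over the tree: each construct contributes its piece of the resulting CloudFormation template.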
How to use the AWS CDK

I’m going to use the AWS CDK to build a simple queue, but rather than handcraft a CloudFormation template in YAML or JSON, the AWS CDK allows me to use a familiar programming language to generate and deploy AWS CloudFormation templates. To get started, I need to install the AWS CDK command-line interface using NPM. Once this download completes, I can code my infrastructure in TypeScript, Python, JavaScript, Java, or .NET.

npm i -g aws-cdk

On my local machine, I create a new folder and navigate into it.

mkdir cdk-newsblog-dotnet && cd cdk-newsblog-dotnet

Now that I have installed the CLI, I can execute commands such as cdk init and pass a language switch; in this instance, I am creating the sample app in .NET, using the csharp language switch.

cdk init sample-app --language csharp

If I wanted to use Java rather than .NET, I would change the --language switch to java.

cdk init sample-app --language java

Since I am in the terminal, I type code . which is a shortcut to open the current folder in VS Code. You could, of course, use any editor, such as Visual Studio or JetBrains Rider. As you can see below, the init command has created a basic .NET AWS CDK project. If I look into Program.cs, the Main method creates an App and then a CDKDotnetStack. This stack, CDKDotnetStack, is defined in the CDKDotnetStack.cs file. This is where the meat of the project resides and where all the AWS resources are defined. Inside the CDKDotnetStack.cs file, some code creates an Amazon Simple Queue Service (SQS) queue, then a topic, and then finally adds an Amazon Simple Notification Service (SNS) subscription to the topic. Now that I have written the code, the next step is to deploy it. When I do, the AWS CDK will compile and execute this project, converting my .NET code into an AWS CloudFormation template.
If I were to just deploy this now, I wouldn’t actually see the CloudFormation template, so the AWS CDK provides a command, cdk synth, that takes my application, compiles it, executes it, and then outputs a CloudFormation template. This is just standard CloudFormation; if you look through it, you will find the following items:

AWS::SQS::Queue – The queue I added.
AWS::SQS::QueuePolicy – An IAM policy that allows my topic to send messages to my queue. I didn’t actually define this in code, but the AWS CDK is smart enough to know I need one of these, and so creates one.
AWS::SNS::Topic – The topic I created.
AWS::SNS::Subscription – The subscription between the queue and the topic.
AWS::CDK::Metadata – This section is specific to the AWS CDK and is automatically added by the toolkit to every stack. It is used by the AWS CDK team for analytics and to allow us to identify versions if there are any issues.

Before I deploy this project to my AWS account, I will use cdk bootstrap. The bootstrap command will create an Amazon Simple Storage Service (S3) bucket for me, which will be used by the AWS CDK to store any assets that might be required during deployment. In this example, I am not using any assets, so technically, I could skip this step. However, it is good practice to bootstrap your environment from the start, so you don’t get deployment errors later if you choose to use assets. I’m now ready to deploy my project, and to do that I issue the following command:

cdk deploy

This command first creates the AWS CloudFormation template, then deploys it into my account. Since my project will make a security change, it asks me if I wish to deploy these changes. I select yes, and a CloudFormation changeset is created, and my resources start building. Once complete, I can go over to the CloudFormation console and see that all the resources are now part of an AWS CloudFormation stack. That’s it, my resources have been successfully built in the cloud, all using .NET.
With the addition of Java and .NET, the AWS CDK now supports 5 programming languages in total, giving you more options in how you build your AWS resources. Why not install the AWS CDK today and give it a try in whichever language is your favorite? — Martin  

Welcome to AWS IoT Day – Eight Powerful New Features

Amazon Web Services Blog -

Just as we did for the recent AWS Storage Day, we are making several pre-re:Invent announcements related to AWS IoT. Here’s what I’ve got for you today:

Secure Tunneling – You can set up and use secure tunnels between devices, even if they are behind restrictive network firewalls.
Configurable Endpoints – You can create multiple AWS IoT endpoints within a single AWS account, and set up a unique feature configuration on each one.
Custom Domains for Configurable Endpoints – You can register your own domains and server certificates and use them to create custom AWS IoT Core endpoints.
Enhanced Custom Authorizers – You can now use callbacks to invoke your own authentication and authorization code for MQTT connections.
Fleet Provisioning – You can onboard large numbers of IoT devices to the cloud, providing each one with a unique digital identity and any required configuration on its first connection to AWS IoT Core.
Alexa Voice Service (AVS) Integration – You can reduce the cost of producing an Alexa built-in device by up to 50% and bring Alexa to devices that have very limited amounts of local processing power and storage.
Container Support for AWS IoT Greengrass – You can now deploy, run, and manage Docker containers and applications on your AWS IoT Greengrass-enabled devices. You can deploy containers and Lambda functions on the same device, and you can use your existing build tools and processes for your IoT work. To learn more, read about the Docker Application Deployment Connector.
Stream Manager for AWS IoT Greengrass – You can now build AWS IoT Greengrass applications that collect, process, and export streams of data from IoT devices. Your applications can do first-tier processing at the edge, and then route all or selected data to an Amazon Kinesis Data Stream or AWS IoT Analytics for cloud-based, second-tier processing.
To learn more, read AWS IoT Greengrass Adds Docker Support and Streams Management at the Edge and Manage Data Streams on the AWS IoT Greengrass Core. Let’s dive into these powerful new features!

Secure Tunneling

This feature addresses a very common customer request: the need to access, troubleshoot, and fix IoT devices that are behind a restrictive firewall. This feature is of special relevance to medical devices, utility meters, and specialized hardware with an IoT aspect. You can set up a secure tunnel on port 443 that uses TLS 1.2 encryption, and then use a local proxy to move commands and data across the tunnel. There are three elements to this feature:

Management APIs – A new set of APIs allows you to open (OpenTunnel), close (CloseTunnel), list (ListTunnels), and describe (DescribeTunnel) secure tunnels. Opening a tunnel generates a tunnel ID and a pair of client access tokens: one for the source device and another for the destination device.
Local Proxy – A proxy that runs on both ends of the connection, available here in open source form. For IoT-registered devices, the client access token is passed via MQTT to the device, which launches a local proxy to initiate a connection to the tunneling service. The client access token and tunnel ID are used to authenticate the connection for each source and destination device to the tunneling service.
Tunneling Service – This service implements the tunnel and connects the source and destination devices.

I can use the AWS IoT Console to create and manage tunnels. I click on Manage and Tunnels, then Open New. I give my tunnel a description, select a thing, configure a timeout, add a tag, and click Open New. Then I download the client access tokens for the source and destination. To learn more, read the Developer Guide and Introducing Secure Tunneling for AWS IoT Device Management.
Custom Domains for Configurable Endpoints

This feature gives you the ability to create custom AWS IoT Core endpoints, each with its own DNS CNAME and server certificate. This allows you to keep your brand identity without any software updates. To learn more, read Custom Domains.

Configurable Endpoints

This feature gives you additional control over your AWS IoT endpoints by enabling you to customize aspects such as the domain and authentication mechanism, as well as create multiple endpoints in the same account. This allows you to migrate to AWS IoT while keeping your existing endpoints (perhaps hardwired into devices already out in the field) unchanged, maintaining backwards compatibility with hard-to-update devices. To learn more, read Configurable Endpoints.

Enhanced Custom Authorizers

This feature allows you to use your own identity and access management implementation to authenticate and authorize traffic to and from your IoT devices. It works with all protocols supported by AWS IoT Core, with support for identities based on a bearer token or on a username/password pair. The enhancement supports custom authentication & authorization over MQTT connections, and also includes simplified token passing for HTTP and WSS connections. To learn more, read Custom Authorizers.

Fleet Provisioning

This feature makes it easy for you to onboard large fleets of IoT devices to AWS IoT Core. Instead of spending time and resources to uniquely configure each IoT device at the time of manufacturing, you can now use AWS IoT Core’s fleet provisioning feature to get your generic devices uniquely configured when each device makes its first connection to AWS IoT Core. The configuration can include X.509 certificates, MQTT client identity, serial numbers, and so forth. Fleet Provisioning uses JSON templates that contain device-side configuration, cloud-side configuration, and a proof-of-trust criterion.
Trust can be based on an attestable entity such as a bootstrap X.509 certificate, or on a local connection to a branded mobile app or onboarding beacon. To learn more, read the Fleet Provisioning Developer Guide.

Alexa Voice Service (AVS) Integration

This feature lowers the cost of producing an Alexa built-in device by up to 50% by offloading compute- and memory-intensive audio workloads from the physical device to a new virtual Alexa built-in device in the cloud. This lets you integrate AVS on devices with less than 1 MB of RAM and ARM Cortex-M class microcontrollers (including the new NXP MCU-Based Solution for Alexa Voice Service), and brings Alexa to products such as light switches, thermostats, and small appliances. The devices can take advantage of the new AWS IoT Core features listed above, as well as existing AWS IoT features such as Over the Air (OTA) updates. To learn more, read Introducing Alexa Voice Service (AVS) Integration for AWS IoT Core. — Jeff;

New – AWS IoT Greengrass Adds Container Support and Management of Data Streams at the Edge

Amazon Web Services Blog -

AWS IoT Greengrass extends cloud capabilities to edge devices, so that they can respond to local events in near real-time, even with intermittent connectivity. Today, we are adding two features that make it easier to build IoT solutions:

Container support to deploy applications using the Greengrass Docker application deployment connector.
Collect, process, and export data streams from edge devices and manage the lifecycle of that data with the Stream Manager for AWS IoT Greengrass.

Let’s see how these new features work and how to use them.

Deploying a Container-Based Application to a Greengrass Core Device

You can now run AWS Lambda functions and container-based applications on your AWS IoT Greengrass core device. This makes it easier to migrate applications from on-premises, or to build new applications that include dependencies such as libraries, other binaries, and configuration files, using container images. This provides a consistent deployment environment for your applications that enables portability across development environments and edge locations. You can easily deploy legacy and third-party applications by packaging the code or executables into the container images. To use this feature, I describe my container-based application using a Docker Compose file. I can reference container images in public or private repositories, such as Amazon Elastic Container Registry (ECR) or Docker Hub. To start, I create a simple web app using Python and Flask that counts the number of times it is visited.

from flask import Flask

app = Flask(__name__)
counter = 0

@app.route('/')
def hello():
    global counter
    counter += 1
    return 'Hello World! I have been seen {} times.\n'.format(counter)

My requirements.txt file contains a single dependency, flask. I build the container image using this Dockerfile and push it to ECR.

FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP app.py
ENV FLASK_RUN_HOST 0.0.0.0
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["flask", "run"]

Here is the docker-compose.yml file referencing the container image in my ECR repository. Docker Compose files can describe applications using multiple containers, but for this example I am using just one.

version: '3'
services:
  web:
    image: ""
    ports:
      - "80:5000"

I upload the docker-compose.yml file to an Amazon Simple Storage Service (S3) bucket. Now I create an AWS IoT Greengrass group using an Amazon Elastic Compute Cloud (EC2) instance as the core device. Usually your core device is outside of the AWS cloud, but using an EC2 instance can be a good way to set up and automate a dev & test environment for your deployments at the edge. When the group is ready, I run an “empty” deployment, just to check that everything is working as expected. After a few seconds, my first deployment has completed and I start adding a connector. In the connector section of the AWS IoT Greengrass group, I select Add a connector and search for “Docker”. I select Docker Application Deployment and hit Next. Now I configure the parameters for the connector. I select my docker-compose.yml file on S3. The AWS Identity and Access Management (IAM) role used by the AWS IoT Greengrass group needs permissions to get the file from S3 and to get the authorization token and download the image from ECR. If you use a private repository such as Docker Hub, you can leverage the integration with AWS Secrets Manager to make it easy for your connectors and Lambda functions to use local secrets to interact with services and applications. I deploy my changes, similarly to what I did before. This time, the new container-based application is installed and started on the AWS IoT Greengrass core device. To test the web app that I deployed, I open access to the HTTP port in the Security Group of the EC2 instance I am using as the core device. When I connect with my browser, I see the Flask app starting to count the visits.
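The ports entry in the Compose file is what makes the app reachable on the core device: the short "HOST:CONTAINER" syntax publishes container port 5000 (Flask’s default) on host port 80. As a quick illustration (the helper function is just for this sketch, not part of any SDK):

```python
def parse_port_mapping(mapping: str):
    """Split a Docker Compose "HOST:CONTAINER" short-syntax port string."""
    host, container = mapping.split(":")
    return int(host), int(container)

# The compose file above publishes the Flask app (container port 5000)
# on the device's standard HTTP port:
host_port, container_port = parse_port_mapping("80:5000")
print(host_port, container_port)  # prints: 80 5000
```

This is why opening plain HTTP (port 80) on the EC2 instance’s Security Group is enough to reach the app, even though Flask itself listens on 5000 inside the container.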
My container-based application is running on the AWS IoT Greengrass core device! You can deploy much more complex applications than what I did in this example. Let’s see that as we go through the other feature released today. Using the Stream Manager for AWS IoT Greengrass For common use cases like video processing, image recognition, or high-volume data collection from sensors at the edge, you often need to build your own data stream management capabilities. The new Stream Manager simplifies this process by adding a standardized mechanism to the Greengrass Core SDK that you can use to process data streams from IoT devices, manage local data retention policies based on cache size or data age, and automatically transmit data directly into AWS cloud services such as Amazon Kinesis and AWS IoT Analytics. The Stream Manager also handles disconnected or intermittent connectivity scenarios by adding configurable prioritization, caching policies, bandwidth utilization, and time-outs on a per-stream basis. In situations where connectivity is unpredictable or bandwidth is constrained, this new functionality enables you to define the behavior of your applications’ data management while disconnected, reconnecting, or connected, allowing you to prioritize important data’s path to the cloud and make efficient use of a connection when it is available. Using this feature, you can focus on your specific application use cases rather than building data retention and connection management functionality. Let’s see now how the Stream Manager works with a practical use case. For example, my AWS IoT Greengrass core device is receiving lots of data from multiple devices. I want to do two things with the data I am collecting: Upload all raw data with low priority to AWS IoT Analytics, where I use Amazon QuickSight to visualize and understand my data.
Aggregate data locally based on time and location of the devices, and send the aggregated data with high priority to a Kinesis Data Stream that is processed by a business application for predictive maintenance.

Using the Stream Manager in the Greengrass Core SDK, I create two local data streams:

The first local data stream has a configured low-priority export to IoT Analytics and can use up to 256 MB of local disk (yes, it’s a constrained device). You can use memory to store the local data stream if you prefer speed to resilience. When local space is filled up, for example because I lost connectivity to the cloud and I continue to cache locally, I can choose to either reject new data or overwrite the oldest data.
The second local data stream is exporting data with high priority to a Kinesis Data Stream and can use up to 128 MB of local disk (it’s aggregated data, so I need less space for the same amount of time).

Here’s how the data flows in this architecture:

Sensor data is collected by a Producer Lambda function that is writing to the first local data stream.
A second Aggregator Lambda function is reading from the first local data stream, performing the aggregation, and writing its output to the second local data stream.
A Reader container-based app (deployed using the Docker application deployment connector) is rendering the aggregated data in real time for a display panel.

The Stream Manager takes care of the ingestion to the cloud, based on the configuration and the policies of the local data streams, so that developers can focus their efforts on the logic on the device. The use of Lambda functions or container-based apps in the previous architecture is just an example. You can mix and match, or standardize on one or the other, depending on your development best practices.

Available Now

The Docker application deployment connector and the Stream Manager are available with Greengrass version 1.10.
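The retention behavior described above (a size-capped local stream that, when full, either rejects new data or overwrites the oldest) can be modeled in a few lines of pure Python. To be clear, the class and method names below are invented for this sketch; the real API lives in the Greengrass Core SDK’s Stream Manager client.

```python
from collections import deque

class LocalStreamModel:
    """Toy model of a size-capped local data stream (not the SDK API)."""

    def __init__(self, capacity_bytes, strategy="overwrite_oldest"):
        self.capacity = capacity_bytes
        self.strategy = strategy  # "overwrite_oldest" or "reject_new"
        self.entries = deque()
        self.used = 0

    def append(self, payload: bytes) -> bool:
        size = len(payload)
        if self.strategy == "reject_new" and self.used + size > self.capacity:
            return False  # cache full while disconnected: drop the new record
        while self.used + size > self.capacity and self.entries:
            evicted = self.entries.popleft()  # overwrite (evict) the oldest data
            self.used -= len(evicted)
        self.entries.append(payload)
        self.used += size
        return True

# A 10-byte "disk" cache while the device is disconnected:
stream = LocalStreamModel(capacity_bytes=10)
stream.append(b"aaaa")    # 4 bytes cached
stream.append(b"bbbb")    # 8 bytes cached
stream.append(b"cccccc")  # would reach 14 bytes, so the oldest record is evicted
print(len(stream.entries), stream.used)  # prints: 2 10
```

In the real Stream Manager you pick the equivalent policy per stream (along with priority and export target), which is how the low-priority IoT Analytics stream and the high-priority Kinesis stream in the example can behave differently under the same loss of connectivity.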
The Stream Manager is available in the Greengrass Core SDK for Java and Python. We are adding support for other platforms based on customer feedback. These new features are independent from each other, but can be used together as in my example. They can simplify the way you build and deploy applications on edge devices, making it easier to process data locally and integrate with streaming and analytics services in the backend. Let me know what you are going to use these features for! — Danilo

Website Speed Optimization for 2020

Nexcess Blog -

From SEO to conversions, to user experience and beyond, page speed has a direct correlation to the success of your website. The investment of time and resources into developing a well-thought-through website speed optimization plan is worth the effort – because when done right, the actions you take can expand your reach, increase click-through, and ultimately lead to revenue. Page speed first came to prominence in 2010, when Google officially announced that it would become a factor in search ranking calculations. SEO experts quickly began optimizing on-page elements to maintain and improve page rankings. Those that didn’t fell behind. Before this, speed was significant for one particular reason: conversions. Today, many users expect pages to load in 2 seconds or less, abandoning visits if load times take too long. A 1 second delay in page response can lead to a 7% reduction in conversions. Let’s take a look at website optimizations that anyone can do. We’ll explore the tools, techniques, and technology available to site owners, and provide actionable strategies for implementing speed improvements. This way, you’re able to create the user experience you want and drive towards the site growth you’re looking for.

Optimize Images

Too often, web designers create and upload image files with high resolutions. High-resolution images mean bigger file sizes. Bigger file sizes mean longer loading times. One of the fastest (and easiest) website speed optimization techniques is image compression. It’s important to consider two core attributes during the optimization process: size and quality. If images are over-optimized, their quality suffers. According to ConversionXL, users spend an average of 5.94 seconds looking at a site’s main image. If that image isn’t high-quality, those users are going to instantly look elsewhere.
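A rough rule of thumb makes the size cost concrete: an uncompressed image weighs roughly width × height × bytes per pixel, so serving an image at its display size shrinks the starting point that compression works from. The numbers below are illustrative, not measurements of any particular format.

```python
def raw_size_bytes(width, height, bytes_per_pixel=3):
    """Approximate uncompressed footprint of an RGB image (3 bytes/pixel)."""
    return width * height * bytes_per_pixel

# A 2000x2000 upload destined for a 100x100 slot on the page:
full = raw_size_bytes(2000, 2000)  # 12,000,000 bytes before compression
fitted = raw_size_bytes(100, 100)  # 30,000 bytes before compression
print(f"Resizing to the display size shrinks the raw data {full // fitted}x")
```

Compression (JPEG, PNG, WebP) then reduces either figure further, but it can never recover the 400x head start you give away by shipping pixels the layout will simply throw away.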
Poor-quality images and design can be just as problematic in terms of bounce rate as images not loading in the first place. Image optimization is simple with managed hosting. All WordPress and WooCommerce plans come with an image optimization plugin that runs automatically every time you upload a new asset. This saves your time and allows you to work on what you want to, instead of having to focus on how best to optimize an image. If you want to manage image files manually:

PNG files are good for graphics and illustrations, as they are designed to compress images as much as possible without quality loss.
JPEG files are best for photographs. JPEG compression works well with complex images – just make sure to check that they remain a suitable quality.
Measure how much space an image requires beforehand. If it’s going to sit in a 100×100 pixel space, use a canvas of that size when building it.
If possible, use SVGs; they are effective for minimizing file size and maintaining quality because they are code rather than pixels.

Simplify Web Design

When it comes to website speed optimization, less is almost always more. Instead of adding additional functionality where it’s not needed, consider how features will affect site delivery to users. That being said, simple website design doesn’t mean featureless. Rather, it means considering where you want a user to go and how you can make their journey to that point as simple and relevant as possible. In an early UX study conducted by Google, which has set the scene for UX design in the last several years, it was found that users tend to judge a website’s aesthetics within 1/50th – 1/20th of a second. Visually complex sites were almost always judged as being less beautiful than their simpler counterparts. We fundamentally believe that making simple websites should be easy.
To that end, we’ve bundled the Beaver Builder plugin with all of our WordPress and WooCommerce managed hosting plans. Beaver Builder gives site designers a simple, easy-to-use drag-and-drop page builder, along with the customization options site owners need. As you simplify your website’s design, pay attention to these things:

What is the goal of your website? Where do you want users to land? Considering how to get them from point A to point B is critical, not only for simplifying a site, but also for optimizing the user experience.
Earlier this year, we found that 85% of enterprise stores do not use hero images. This is a core component of their website speed optimization strategy. How important are your hero images? Can they be simpler?
JavaScript code often works behind the scenes on modern websites. Is the JavaScript code on your site relevant and needed?

Enable Caching

Caching is site speed’s silver bullet. It helps website owners automatically deliver content to more users at faster speeds. It works by storing page elements on a visitor’s computer the first time they visit a site. During subsequent visits, instead of having to re-download them from the server, the user will be able to use the copy stored on their computer. However, there are limits to what caching can and can’t do. Traditional caching only affects static elements. This includes images and some types of code. It does not help with dynamic elements like shopping carts. There are dynamic caching options available for ecommerce stores, but these tend to require more in-depth customization and setup. Nexcess solutions come with caching options enabled and optimized by default. The Nexcess Cloud Accelerator allows our advanced Nginx caching system to be activated with one click in your client portal, significantly improving website speed.
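Under the hood, browser caching of static elements is steered by HTTP response headers such as Cache-Control: max-age. Here is a deliberately simplified sketch of the freshness check a browser performs (real HTTP caching also honors validators like ETag and Last-Modified, which this sketch ignores):

```python
def is_fresh(age_seconds: int, cache_control: str) -> bool:
    """Simplified freshness test: a cached copy is reusable while its
    age stays below the max-age the server declared."""
    for directive in cache_control.split(","):
        directive = directive.strip()
        if directive.startswith("max-age="):
            return age_seconds < int(directive.split("=", 1)[1])
    return False  # no max-age present: treat as stale in this sketch

# A static asset the server allowed to be cached for one day (86400 s):
print(is_fresh(3600, "public, max-age=86400"))   # True: reuse the local copy
print(is_fresh(90000, "public, max-age=86400"))  # False: re-download it
```

This is why caching helps static elements but not a shopping cart: you can safely declare a long max-age on an image that never changes, but per-user dynamic content would go stale (or leak between users) the moment it was cached this way.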
As you’re considering caching tools for your website, think about the following elements:

Which caching tools are available to you, and which is right for your site? If you’re unsure, talk with your hosting provider. Our tech experts are always happy to explain your options to help you make a good decision.
Should you also be caching dynamic assets? This is often a good idea for ecommerce stores. Varnish is a good option for Magento storefronts.
Equally important as caching is the number of PHP workers supporting your site (this matters even more for ecommerce stores). Check how many your solution offers and see if you need to upgrade.

Explore Different Integration Options

Integrations and functionality add-ons can be just as detrimental to site speed as on-page elements. A well-executed optimization strategy considers to what effect any integrations are utilized and whether they have been implemented suitably. Integrations can include plugins, extensions, or add-ons, and they may live on the same server as your site, or they may exist in an external container. Regardless of where they exist, it’s important to consider two things: What are an integration’s resource requirements? What effect does an integration have on the user experience? Nexcess solutions come with two options for optimizing your integrations. The first of these is container add-ons. These are designed to run outside of your core hosting account, saving resources for site visitors and website speed. Integrations, when employed effectively, can provide solid functionality without sacrificing much-needed resources. The second of these is specific to WordPress and WooCommerce solutions – plugin packages optimized for site speed. From analytics that runs outside of the server to automatic image optimization, each plugin has been selected based on its ability to improve site speed and the user experience.
When choosing which integrations to add to a site, consider:

What are its resource requirements? Analytics software can be particularly resource heavy.
Scheduling tools like RabbitMQ. These can help reduce strain from resource-heavy integrations by scheduling them to run during off-peak times.
Container-based integrations that run outside of your main hosting account. Explore the different container options offered by Nexcess.

Use a CDN

Have you ever visited an international site and been faced with a homepage that crawls? Chances are that the site is delivering content to you from somewhere else in the world. It’s the time it takes to reach you that’s causing the longer load times. The answer is to implement a CDN (Content Delivery Network). A CDN caches static elements (like images) in locations around the world, so visitors to your site can download them from their nearest location. This can increase speed significantly. A CDN allows for localized delivery of assets to site visitors based anywhere in the world. Nexcess offers a CDN service with all of our hosting solutions. Depending on your base plan, this may cost extra. If you’re uncertain which plan is right for your site, talk to a Nexcess team member. When choosing a CDN, pay attention to:

How many locations does it offer? Are they locations near your target audience?
What bandwidth does the CDN have? If you’re unsure what you need, talk with one of our experts who can help determine what’s optimal for your business.
Does the CDN include an SSL? An SSL certificate will help ensure your site is secure.

Prioritizing Website Speed Optimization

Effective website speed optimization strategies are targeted. Optimization is often done with a core objective in mind. To that end, think about which pages are the most important to your site experience and focus on those as a top priority. In most cases, homepages are vital. They often function as a starting point for visitors.
Ensuring they load efficiently can engage a visitor when they first arrive at your site, and significantly reduce your bounce rate.  If you’re running an ecommerce store, product pages are also important. They serve as solid, bottom of funnel touchpoints for conversion. If they load slowly, you are going to see a higher than expected bounce rate.  Optimization should have an effect on your site as a whole. However, focusing on core pages will help you to improve specific, high-value user experiences quickly and effectively.  Website speed optimization is core to delivering the right user experience. There are numerous methods for optimizing a site, each of which can be adjusted to align with your core objective. Working through each and testing your site speed is key to securing the best results.  Get started with a managed hosting solution that provides optimizations by default. Learn more. The post Website Speed Optimization for 2020 appeared first on Nexcess Blog.

5 Best Discount Wheel Popup Plugins for WordPress

HostGator Blog -

Looking for ways to convert more visitors into buyers and subscribers? You’ve done everything right. You have a well-designed website and you have an awesome popup form on your site. But you’re still not getting the conversions you wanted. Where do you go from here? Try gamifying your popups to get your visitors’ attention. How do you do that exactly? Add a little ‘spin’ to your usual popups by using a discount wheel popup on your WordPress website.

What are Discount Wheel Popup Plugins for WordPress?

Discount wheel popups are fun opt-in forms that make users feel like they are playing a game where they can win something. All they have to do is submit their email and click a button. The wheel spins to land on a prize, or not, depending on whether you want every wedge to offer a reward. This sounds pretty neat, right? So, let’s take a look at five of the best discount wheel popup tools available for WordPress.

1. OptinMonster

OptinMonster is a complete opt-in forms tool that helps you get more email subscribers. It has a drag and drop interface to build discount wheel popups without any coding. You can control all aspects of the popup wheel campaign with its intuitive features. OptinMonster’s discount wheel popup has three templates. You can choose any template and customize it to match your brand. You get complete design control. You can change the fonts, colors, layouts, and edit the prize details. It also gives you the option to add your logo to the center of the wheel. For added information, you can insert icons and videos into the discount wheel popup. Once you’ve created your popup wheel, you can automate when and how the discount wheel popup appears on your WordPress website. You can choose the exact moment users get the discount wheel popup by setting display rules. Use Geo-Location Targeting to control which locations see the discount wheel popup.
You can also set the popup wheel to appear only on certain pages. OptinMonster makes creating a discount wheel popup easy. It’s a leading solution if you’re looking for a discount wheel popup plugin that works on all major platforms and that you can scale.

2. WP Optin Wheel

WP Optin Wheel is a plugin for WordPress and WooCommerce. Using it is intuitive, and you can create your discount wheel in minutes. For your email marketing needs, WP Optin Wheel integrates with several email marketing providers and lead capturing software. With WP Optin Wheel you can choose to display your discount wheel as a popup or inline in a post or page. You can customize a number of elements or use its predefined templates. Here’s what you need to know about pricing. WP Optin Wheel currently supports:

A single site for $39/year
Three sites for $99/year
Unlimited sites for $199/year

WP Optin Wheel is a great choice for anyone who wants to create a discount wheel popup on their WordPress website.

3. OptinSpin

OptinSpin is a plugin for coupon code wheels that works on WordPress and WooCommerce sites. It has a great feature where users get their coupon code delivered to their email. This works well because it serves as a reminder in a user’s inbox. They can think about using the code or come back later when they are ready to buy. It has great features like:

Full customization and branding options
Email marketing integration
Retargeting customers through Facebook Messenger
Drip CRM integration
Triggers like time delay, clickable tab, and desktop and mobile exit intent

OptinSpin is affordable, too. It costs $29 for six months and $38 for a whole year.

4. Popup Maker

Popup Maker is another full-featured tool for creating popups. It works on several website platforms, including WordPress! Popup Maker has a Spin-to-Win popup option. It has five templates for its coupon wheel popup.
It offers several useful features under each pricing plan:

Silver Plan – $14.95/month: 100,000 pageviews, popup effects, and popups based on several triggers like page load, click, and hover
Gold Plan – $6.95/month: 500,000 pageviews, social media popup, age restriction popup, and popup scheduling by week
Platinum Plan – $21.95/month: 1 million pageviews, country-based popups, Google Map popup, push notifications, and custom branding

Popup Maker is an all-round popup tool that works on several platforms. It’s ideal for anyone who wants versatility and multiple features as they build discount wheel popups on their WordPress website.

5. Wheel of Popups

Wheel of Popups is an opt-in tool that works for all platforms. You can use it on WooCommerce and WordPress. All you need to do is add one line of code on your website to launch your popups. It is great for lead generation and for increasing conversions. Its exit-intent technology triggers the coupon wheel popup before a visitor leaves and allows them to win a code, have fun, and perhaps buy a product. It also captures the visitor’s email so you can build your email list. Wheel of Popups also integrates with several email marketing providers. What is its pricing plan? You can sign up on Wheel of Popups for free with a 10-day trial of its services. It has 4 different packages:

Personal for $19 a month for 1 website
Developer for $49 a month for 5 websites
Agency for $149 a month for 20 websites
Reseller for $249 a month for 50 websites

What’s the Best Discount Wheel Popup Plugin for Your WordPress Site?

You’ve just had a look at some of the best discount wheel popup tools available for WordPress. All of them have attractive features and different pricing plans. There isn’t one right option for everyone. There are as many ‘correct’ options as there are businesses. Here are a couple of things for you to consider before you choose any of these tools: What are your business goals? What do you want to achieve?
Will you need a whole host of features in the future? Or will a single gamification popup do? How many pageviews do you expect to get per month? Will you work on just one WordPress site or multiple? Are advanced tools like geo-targeting and age restriction popups even relevant to you? When you have a clear idea of what your goals are, you will be able to select the best tool for your business. If you’re carrying out extensive marketing activities, then OptinMonster is right for you. It is comprehensive and scalable for your future business needs. Its lead generation tools and analytics give you complete control over your marketing campaigns. You can read more about OptinMonster on SeedPress. Try using more than one strategy to boost traffic and conversions. You can also create an online contest to make your website go viral. Work with the best tools to make your eCommerce business successful! Find the post on the HostGator Blog

How to Use Facebook Ads to Promote Limited-Time Offers

Social Media Examiner -

Do you want to promote limited-time offers this holiday season? Wondering how to create an automated Facebook ad sequence to deliver daily specials? In this article, you’ll learn how to set up a multi-day Facebook ad campaign to automatically deliver time-sensitive offers. Prepare Your Holiday Campaign Assets in Advance The holiday shopping season in the […] The post How to Use Facebook Ads to Promote Limited-Time Offers appeared first on Social Media Marketing | Social Media Examiner.

How to Boost B2B Sales By Focusing on Customer Support

Reseller Club Blog -

Gone are the days when your sales personnel could drive the process of customer acquisition with their gift of the gab. Today, buyers rely on informative landing pages, websites, educational videos, blogs, and user reviews to choose vendors in an informed manner. Forrester indicates that 68% of B2B customers prefer to research online independently. In addition to up-to-date and relevant digital channels, buyers also expect speed and efficiency when it comes to customer service and query resolution. Consequently, B2B companies are rapidly adopting technologically advanced support systems such as live chat, social media, and cobrowsing to offer their customers real-time support. Online community platforms and support pages are also becoming popular ways to give customers the answers they need without reaching out to support, especially outside working hours.

Importance of Customer Support for B2B Sales

For most B2B companies, the products or services offered are more or less stable. Thus, when it comes to choosing between two brands offering the same product or service, the company that provides the best and most consistent customer support almost always wins. According to a survey, 70% of customers believe that the quality of support reflects how much an organization values them – which means that keeping your users waiting is a reflection of your indifference towards them. In terms of sales and transactional value, 7 out of 10 U.S. consumers will spend more money to buy from a company that delivers great customer service. Conversely, 51% of consumers will never buy from a company again after one bad service experience. The message is clear, isn't it? In an era where a better customer experience means higher engagement and sales, offering better and more consistent support to your customers across multiple touchpoints could be the key to trumping your competition in the market.
Below, we share a few tips and best practices that will help you optimise your customer support channels and boost your sales effectively.

4 Best Practices to Improve B2B Sales

1. Customize the support package

Not all B2B buyers are the same. Besides, the presence of multiple decision-makers in a single company makes it imperative to customize your support package to your customers' preferences. For example, if your target companies comprise young teams, you may benefit from integrating self-help options, such as AI-enabled chatbots that enable quick, round-the-clock customer support. Gartner points out that "By 2021, 15% of all customer service interactions will be completely handled by AI, an increase of 400% from 2017."

2. Serve your niche segment

Most businesses today work on wafer-thin margins and are continually looking for solutions to improve their operational efficiency. By establishing yourself as a thought leader in a particular niche, you can win the trust of businesses in that field, naturally inclining their purchase decisions in your favor. One of the best ways to achieve this is by publishing thoughtful, research-backed, useful content on your website and elsewhere, focused on resolving the key issues faced by your buyers. If you are looking for inspiration, visit Hubspot to check out their fantastic blog, with various 'how to' articles exemplifying the inbound marketing methodology. Remember, there's no point in being a jack of all trades and master of none. Instead, focus on a particular niche and establish yourself as a master!

3. Compete on "service quality" by defining a Service Level Agreement

A multi-level service level agreement, or SLA, could be the defining factor for your company by making customer service the heart of your organization. An SLA of this nature defines service quality, creating specific standards of service to be achieved by every department, including sales and marketing.
A good SLA also standardizes the sales process, helping each employee within a department understand his or her role better.

4. Invest in customer support tools

Customer support technologies such as live chat can reduce response times significantly while enabling you to connect with your users in real time. By integrating live chat software on your website, you empower users to connect with you at the precise moment they need help on their purchase journey – leading to higher customer satisfaction and more sales. No wonder, then, that vendors that use live chat increase their chances of conversion by up to 3X compared to businesses that don't. Enabling seamless live chat support across your website and mobile devices lets customers reach out to you on the go, making it easier for them to purchase from you, which could be a key differentiator in the market. Here are some stats that highlight the importance of live chat in B2B:

Reduce costs: Live chat enables your support staff to engage more than one customer at a time, making it almost 50% cheaper than handling phone calls.

Build trust and loyalty: A study by Oracle found that 90% of customers feel confident about buying on a website when they see a live chat button, and 63% of consumers are more likely to return to a website that offers live chat.

Of course, only incorporating live chat software on your website isn't enough. You need to train your support staff to use this feature optimally for best results. For example, your staff can ask for contact details at the start of any conversation to provide a more personalized experience. In case the chat user is not an existing customer, the staff can ask them whether they are interested in receiving news and promotional material from your company. Live chat software may also provide demographic information about users, as well as their recent browsing history, enabling sales agents to connect with them better.
Here are some tips to empower your customer-facing team for best results:

- Have a 'best-practices' manual for the support team to reduce effort within workflows.
- Adopt a customer-centric business approach through focused recruitment programs and tools.
- Use performance analytics to determine the best behaviors that click with clients.
- Provide digital assistants, such as chatbots, live chat, and co-browsing software, to reduce workload, optimize costs, and gain relevant and useful insights.

Integrate Your CRM Software with Customer Support for a Better Customer Experience

Many businesses use customer relationship management (CRM) software, such as SugarCRM, Salesforce, and Microsoft Dynamics CRM, to track data on individual clients. Such information helps the sales team understand customers better and offer more personalised services. However, most businesses use separate customer support software, and integrating their CRM system with that software can provide a much better customer experience. An integrated CRM system ensures consistent communication by creating a unified system that provides your agents with complete customer information across their sales journey. It also makes it easier to track and manage customer information, as data from multiple teams (marketing, sales, and customer service) is stored in a single place. Talking about customer experience, an integrated CRM system leads not only to higher personalisation but also to complete automation. With an integrated CRM, you can directly convert customer emails into tickets using a common email ID, send automated responses, create impactful email marketing campaigns, and save new data centrally.

Conclusion

The world of B2B is constantly changing. Today, customers are not only interested in knowing about your product or service.
They also want to know how your product or service can solve their existing problems or help them meet their business goals effectively. Thus, simply enumerating the benefits of your product will not take you anywhere. However, adding a personal touch to the sales process through targeted content and seamless omnichannel support can take your business a long way. The post How to Boost B2B Sales By Focusing on Customer Support appeared first on ResellerClub Blog.

Get into the Black Friday and Cyber Monday Spirit with the Best Web Hosting Deals!

HostGator India Blog -

Holiday season spells sales, and with Thanksgiving around the corner, it is time to welcome back Black Friday and Cyber Monday. For consumers, this is the perfect time to get great deals on a range of products: from fashion to furnishings and appliances to gadgets, there is something for everyone. If you're thinking of starting […]

Gear up for the Biggest Sale of the Season – Black Friday and Cyber Monday

BigRock Blog -

With Thanksgiving around the corner, the biggest shopping season, aka Black Friday and Cyber Monday, is not far behind. From consumers to businesses, this is the perfect time to get a sweet deal on products you've been eyeing for a long time. Thinking of improving business profits or looking to start your own blog? Think no more! This is the season to indulge and invest, as you won't get a better deal than Black Friday at any other time of the year. We at BigRock have a BIG Black Friday Sale lined up for the month of November. The BigRock Black Friday sale will be live from 28th November and end on 30th November. The Cyber Monday sale will be live on 2nd and 3rd December. So, without further ado, let us move on to the deals you don't want to miss!

Black Friday Domain Deals

As part of our Black Friday Domain Deals, enjoy discounts on some of our popular domain extensions such as .COM, .IN, .CO and .BIZ! Along with this, we're also running a Bonanza offer on new domain extensions.

.BIZ – ₹199
.CO – ₹399
.COM – ₹149 for 1st year (with 2-year purchase)
.IN – ₹199 for 1st year (with 2-year purchase)
nTLD Bonanza offer – Click here

Exciting Web Hosting Deals

It's raining discounts, with amazing Black Friday hosting offers of FLAT 60% off on all our web hosting packages! This is a 'once a year, not to be missed' opportunity. Additionally, you even get 25% off on our DIY website builder!

Special Offers on Email Hosting

Starting new and want to establish yourself? Choose a secure and professional email address. Get never-before BigRock discounts this sale season on email hosting plans, only from 28th November to 3rd December, at unimaginable prices!

Business Email – ₹24/Mo/Account
Enterprise Email – ₹49/Mo/Account
G Suite – ₹179/Mo/Account
Special Product Launch – MS Office 365 Email
* Applies to annual plans only

Enticing Domain Value Adds

Value additions are great, aren't they?
They are even greater when you get them at a discounted price without burning a hole in your pocket!

BigRock Instant – 50% off, effective price ₹149/yr
Privacy Protect – FLAT ₹299/yr

Bundled Deals

Domain and hosting deals: in fact, if you're a new website owner, we strongly recommend that you make the most of these BigRock Value Deals.

Addons for Everyone

Want Google to rank you better? Secure your website today! Leverage the Black Friday and Cyber Monday sale and get 20% off on CodeGuard, SiteLock and SSL.

CodeGuard Backup Service – 20% off
SiteLock Malware Protect – 20% off
SSL Certificates – 20% off
* Applies to annual plans only

We've slashed our prices to empower you to take the first step and establish yourself in this vast internet domain! Take advantage of our biggest Black Friday and Cyber Monday sale! We hope you make the most of these deals! Happy shopping!

Facebook Lets Advertisers Control Where Ads Appear

Social Media Examiner -

Welcome to this week’s edition of the Social Media Marketing Talk Show, a news show for marketers who want to stay on the leading edge of social media. On this week’s Social Media Marketing Talk Show, we explore Facebook’s new brand safety controls and transparency tools for advertisers, upcoming Facebook ad updates, and more with […] The post Facebook Lets Advertisers Control Where Ads Appear appeared first on Social Media Marketing | Social Media Examiner.

New for Identity Federation – Use Employee Attributes for Access Control in AWS

Amazon Web Services Blog -

When you manage access to resources on AWS or many other systems, you most probably use Role-Based Access Control (RBAC). When you use RBAC, you define access permissions to resources, group these permissions in policies, assign policies to roles, and assign roles to entities such as a person, a group of persons, a server, an application, etc. Many AWS customers told us they do so to simplify granting access permissions to related entities, such as persons sharing similar business functions in the organisation. For example, you might create a role for a finance database administrator and give that role access to the tables and compute resources necessary for finance. When Alice, a database admin, moves into that department, you assign her the finance database administrator role. On AWS, you use AWS Identity and Access Management (IAM) permissions policies and IAM roles to implement your RBAC strategy. However, the multiplication of resources makes this approach difficult to scale. When a new resource is added to the system, system administrators must add permissions for that new resource to all relevant policies. How do you scale this to thousands of resources and thousands of policies? How do you verify that a change in one policy does not grant unnecessary privileges to a user or application?

Attribute-Based Access Control

To simplify the management of permissions in the context of an ever-growing number of resources, a new paradigm emerged: Attribute-Based Access Control (ABAC). When you use ABAC, you define permissions based on matching attributes. You can use any type of attribute in the policies: user attributes, resource attributes, environment attributes. Policies are IF … THEN rules, for example: IF user attribute role == manager THEN she can access file resources having attribute sensitivity == confidential. Using ABAC permission control allows you to scale your permission system, as you no longer need to update policies when adding resources.
Instead, you ensure that resources have the proper attributes attached to them. ABAC also lets you manage fewer policies, because you do not need to create policies per job role. On AWS, attributes are called tags. You can attach tags to resources such as Amazon Elastic Compute Cloud (EC2) instances, Amazon Elastic Block Store (EBS) volumes, AWS Identity and Access Management (IAM) users, and many others. Having the ability to tag resources, combined with the ability to define permission conditions on tags, effectively allows you to adopt the ABAC paradigm to control access to your AWS resources. You can learn more about how to use ABAC permissions on AWS by reading the new ABAC section of the documentation, taking the tutorial, or watching Brigid's session at re:Inforce. This was a big step, but it only worked if your user attributes were stored in AWS. Many AWS customers manage identities (and their attributes) in another source and use federation to manage AWS access for their users.

Pass in Attributes for Federated Users

We're excited to announce that you can now pass user attributes in the AWS session when your users federate into AWS, using standards-based SAML. You can now use attributes defined in external identity systems as part of attribute-based access control decisions within AWS. Administrators of the external identity system manage user attributes and define the attributes to pass in during federation. The attributes you pass in are called "session tags". Session tags are temporary tags which are only valid for the duration of the federated session. Granting access to cloud resources using ABAC has several advantages. One of them is that you have fewer roles to manage. For example, imagine a situation where Bob and Alice share the same job function but different cost centers, and you want to grant access only to resources belonging to each individual's cost center. With ABAC, only one role is required, instead of two roles with RBAC.
Alice and Bob assume the same role. The policy will grant access to resources where their cost center tag value matches the resource cost center tag value. Imagine now you have over 1,000 people across 20 cost centers. ABAC can reduce the cost center roles from 20 to 1. Let us consider another example. Let's say your systems engineer configures your external identity system to include CostCenter as a session tag when developers federate into AWS using an IAM role. All federated developers assume the same role, but are granted access only to AWS resources belonging to their cost center, because permissions apply based on the CostCenter tag included in their federated session and on the resources. Let's illustrate this example with the diagram below: In the figure above, blue, yellow, and green represent the three cost centers my workforce users are attached to. To set up ABAC, I first tag all project resources with their respective CostCenter tags and configure my external identity system to include the CostCenter tag in the developer session. The IAM role in this scenario grants access to project resources based on the CostCenter tag. The IAM permissions might look like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:DescribeInstances"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": ["ec2:StartInstances", "ec2:StopInstances"],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/CostCenter": "${aws:PrincipalTag/CostCenter}"
        }
      }
    }
  ]
}

Access will be granted (Allow) only when the condition matches: when the value of the resource's CostCenter tag matches the value of the principal's CostCenter tag. Now, whenever my workforce users federate into AWS using this role, they only get access to the resources belonging to their cost center, based on the CostCenter tag included in the federated session.
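To make the tag-matching rule concrete, here is a small standalone Python sketch of the idea behind that StringEquals condition. The `is_allowed` helper is purely illustrative; it is not part of any AWS SDK, and the real IAM policy engine evaluates far more than this:

```python
# Illustrative simulation of the condition
# "ec2:ResourceTag/CostCenter" == "${aws:PrincipalTag/CostCenter}".
# This is NOT the AWS policy engine, just the matching idea.

def is_allowed(principal_tags, resource_tags, key="CostCenter"):
    """Allow only when the principal's session tag equals the resource's tag."""
    value = principal_tags.get(key)
    return value is not None and value == resource_tags.get(key)

# Alice federates in with CostCenter=blue as a session tag.
alice = {"CostCenter": "blue"}
print(is_allowed(alice, {"CostCenter": "blue"}))   # same cost center -> True
print(is_allowed(alice, {"CostCenter": "green"}))  # different cost center -> False
```

Because the decision depends only on the tags carried by the session and the resource, no per-cost-center role or policy is needed.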
If a user switches from cost center green to blue, your system administrator updates the external identity system with CostCenter = blue, and permissions in AWS automatically apply to grant access to the blue cost center's AWS resources, without requiring a permissions update in AWS. Similarly, when your system administrator adds a new workforce user in the external identity system, that user immediately gets access to the AWS resources belonging to her cost center. We have worked with Auth0, ForgeRock, IBM, Okta, OneLogin, Ping Identity, and RSA to ensure the attributes defined in their systems are correctly propagated to AWS sessions. You can refer to their published guidelines on configuring session tags for AWS for more details. If you are using other identity providers, you may still be able to configure session tags if they support the industry standards SAML 2.0 or OpenID Connect (OIDC). We look forward to working with additional identity providers to certify session tags with their identity solutions. Session tags are available in all AWS Regions today at no additional cost. You can read our new session tags documentation page to follow step-by-step instructions to configure an ABAC-based permission system. -- seb

Using Spatial Data with Amazon Redshift

Amazon Web Services Blog -

Today, Amazon Redshift announced support for a new native data type called GEOMETRY. This new type enables ingestion, storage, and queries against two-dimensional geographic data, together with the ability to apply spatial functions to that data. Geographic data (also known as georeferenced data) refers to data that has some association with a location relative to Earth. Coordinates, elevation, addresses, city names, zip (or postal) codes, and administrative and socioeconomic boundaries are all examples of geographic data. The GEOMETRY type enables us to easily work with coordinates such as latitude and longitude in our table columns, which can then be converted or combined with other types of geographic data using spatial functions. The type is abstract, meaning it cannot be directly instantiated, and polymorphic. The actual types supported for this data (and which will be used in table columns) are points, linestrings, polygons, multipoints, multilinestrings, multipolygons, and geometry collections. In addition to creating GEOMETRY-typed data columns in tables, the new support also enables ingestion of geographic data from delimited text files using the existing COPY command. The data in the files is expected to be in hexadecimal Extended Well-Known Binary (EWKB) format, which is a standard for representing geographic data. To show the new type in action, I imagined a scenario where I am working as a personal tour coordinator based in Berlin, Germany, and my client has supplied me with a list of attractions that they want to visit. My task is to locate accommodation for this client that is reasonably central to the set of attractions, and within a certain budget. Geographic data is ideal for solving this scenario. Firstly, the set of points representing the attractions combine to form one or more polygons which I can use to restrict my search for accommodation.
In a single query I can then join the data representing those polygons with data representing a set of accommodations to arrive at the results. This spatial query is actually quite expensive in CPU terms, yet Redshift is able to execute it in less than one second.

Sample Scenario Data

To show my scenario in action I needed to first source various geographic data related to Berlin. Firstly, I obtained the addresses and latitude/longitude coordinates of a variety of attractions in the city using several 'top X things to see' travel websites. For accommodation I used Airbnb data, licensed under the Creative Commons 1.0 Universal "Public Domain Dedication". I then added zip code data for the city, licensed under Creative Commons Attribution 3.0 Germany (CC BY 3.0 DE). The provider of this data is Amt für Statistik Berlin-Brandenburg. Any good tour coordinator would of course have a website or application with an interactive map, so as to be able to show clients the locations of the accommodation that matched their criteria. In real life, I'm not a tour coordinator (outside of my family!), so for this post I'm going to focus solely on the back-end processes: the loading of the data, and the eventual query to satisfy our client's request using the Redshift console.

Creating a Redshift Cluster

My first task is to load the various sample data sources into database tables in a Redshift cluster. To do this I go to the Redshift console dashboard and select Create cluster. This starts a wizard that walks me through the process of setting up a new cluster, starting with the type and number of nodes that I want to create. In Cluster details I fill out a name for my new cluster, set a password for the master user, and select an AWS Identity and Access Management (IAM) role that will give permission for Redshift to access one of my buckets in Amazon Simple Storage Service (S3) in read-only mode when I come to load my sample data later.
The new cluster will be created in my default Amazon Virtual Private Cloud for the region, and I also opted to use the defaults for node types and number of nodes. You can read more about the available options for creating clusters in the Management Guide. Finally I click Create cluster to start the process, which will take just a few minutes.

Loading the Sample Data

With the cluster ready to use, I can load the sample data into my database, so I head to the Query editor and, using the pop-up, connect to my default database for the cluster. My sample data will be sourced from delimited text files that I've uploaded as private objects to an S3 bucket and loaded into three tables. The first, accommodations, will hold the Airbnb data. The second, zipcodes, will hold the zip or postal codes for the city. The final table, attractions, will hold the coordinates of the city attractions that my client can choose from. To create and load the accommodations data I paste the following statements into tabs in the query editor, one at a time, and run them. Note that schemas in databases have access control semantics, and the public prefix shown on the table names below simply means I am referencing the public schema, accessible to all users, for the database in use. To create the accommodations table I use:

CREATE TABLE public.accommodations (
  id INTEGER PRIMARY KEY,
  shape GEOMETRY,
  name VARCHAR(100),
  host_name VARCHAR(100),
  neighbourhood_group VARCHAR(100),
  neighbourhood VARCHAR(100),
  room_type VARCHAR(100),
  price SMALLINT,
  minimum_nights SMALLINT,
  number_of_reviews SMALLINT,
  last_review DATE,
  reviews_per_month NUMERIC(8,2),
  calculated_host_listings_count SMALLINT,
  availability_365 SMALLINT
);

To load the data from S3:

COPY public.accommodations
FROM 's3://my-bucket-name/redshift-gis/accommodations.csv'
DELIMITER ';' IGNOREHEADER 1
CREDENTIALS 'aws_iam_role=arn:aws:iam::123456789012:role/RedshiftDemoRole';

Next, I repeat the process for the zipcodes table.
CREATE TABLE public.zipcode (
  ogc_field INTEGER,
  wkb_geometry GEOMETRY,
  gml_id VARCHAR,
  spatial_name VARCHAR,
  spatial_alias VARCHAR,
  spatial_type VARCHAR
);

COPY public.zipcode
FROM 's3://my-bucket-name/redshift-gis/zipcode.csv'
DELIMITER ';' IGNOREHEADER 1
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftDemoRole';

And finally I create the attractions table and load data into it.

CREATE TABLE public.berlin_attractions (
  name VARCHAR,
  address VARCHAR,
  lat DOUBLE PRECISION,
  lon DOUBLE PRECISION,
  gps_lat VARCHAR,
  gps_lon VARCHAR
);

COPY public.berlin_attractions
FROM 's3://my-bucket-name/redshift-gis/berlin-attraction-coordinates.txt'
DELIMITER '|' IGNOREHEADER 1
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftDemoRole';

Finding Somewhere to Stay!

With the data loaded, I can now put on my travel coordinator hat and select some properties for my client to consider for their stay in Berlin! Remember, in the real world this would likely be surfaced through a web or other application for the client. I'm simply going to make use of the query editor again. My client has decided they want the trip to focus on the city museums, and they have a budget of 200 EUR per night for accommodation. Opening a new tab in the editor, I paste in and run the following query.

WITH museums(name, loc) AS (
  SELECT name, ST_SetSRID(ST_Point(lon, lat), 4326)
  FROM public.berlin_attractions
  WHERE name LIKE '%Museum%'
)
SELECT a.name, a.price, avg(ST_DistanceSphere(m.loc, a.shape)) AS avg_distance
FROM museums m, public.accommodations a
WHERE a.price <= 200
GROUP BY a.name, a.price
ORDER BY avg_distance
LIMIT 10;

The query finds the accommodations that are "best located" for visiting all the museums and whose price is within the client's budget. Here, "best located" is defined as having the smallest average distance from all the selected museums.
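For intuition, the "smallest average distance under a budget" ranking that the query performs can be sketched outside the database. This illustrative Python snippet uses the haversine formula as a stand-in for ST_DistanceSphere (which also computes great-circle distance); the coordinates and prices below are made up, not the real dataset:

```python
import math

def distance_sphere(lon1, lat1, lon2, lat2, radius_m=6371000):
    """Great-circle distance in meters (haversine), similar in spirit
    to Redshift's ST_DistanceSphere."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * radius_m * math.asin(math.sqrt(a))

# Hypothetical museums and accommodations as (lon, lat); prices in EUR.
museums = [(13.3967, 52.5169), (13.4010, 52.5208)]
stays = [("A", 180, (13.40, 52.52)),   # central, in budget
         ("B", 150, (13.30, 52.45)),   # cheap but far away
         ("C", 250, (13.40, 52.52))]   # central but over budget

# Keep stays within budget, then rank by average distance to all museums.
in_budget = [(name, price,
              sum(distance_sphere(*m, *loc) for m in museums) / len(museums))
             for name, price, loc in stays if price <= 200]
best = min(in_budget, key=lambda t: t[2])
print(best[0])  # prints: A
```

The SQL version does the same filtering (WHERE a.price <= 200), averaging (avg + GROUP BY), and ranking (ORDER BY avg_distance), but pushes the work into the cluster instead of application code.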
In the query you can see some of the available spatial functions, ST_SetSRID and ST_Point, operating on the latitude and longitude columns for the attractions, and ST_DistanceSphere to determine distance. This yields the following results. Wrap a web or native application front end around this and we have a new geographic-data-based application that we can use to delight clients who have an idea of what they want to see in the city and also want convenient, in-budget accommodation best placed to enable that! Let's also consider another scenario. Imagine I have a client who wants to stay in the center of Berlin but isn't sure what attractions or accommodations are present in the central district, and has a budget of 150 EUR per night. How can we answer that question? First we need the coordinates of what we might consider to be the center of Berlin: latitude 52.516667, longitude 13.388889. Using the zipcode table, we can convert this coordinate location to a polygon enclosing that region of the city. Our query must then get all attractions within that polygon, plus all accommodations (within budget), ordered by average distance from the attractions. Here's the query:

WITH center(geom) AS (
  SELECT wkb_geometry
  FROM zipcode
  WHERE ST_Within(ST_SetSRID(ST_Point(13.388889, 52.516667), 4326), wkb_geometry)
),
pois(name, loc) AS (
  SELECT name, ST_SetSRID(ST_Point(lon, lat), 4326)
  FROM public.berlin_attractions, center
  WHERE ST_Within(ST_SetSRID(ST_Point(lon, lat), 4326), center.geom)
)
SELECT a.name, a.price,
  avg(ST_DistanceSphere(p.loc, a.shape)) AS avg_distance,
  LISTAGG(p.name, ';') AS pois
FROM pois p, public.accommodations a
WHERE a.price <= 150
GROUP BY a.name, a.price
ORDER BY avg_distance
LIMIT 10;

When I run this in the query editor, I get the following results. You can see the list of attractions in the area represented by the zipcode in the pois column.
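As noted earlier, COPY ingests geographic values as hexadecimal EWKB. As a rough illustration of that format, covering only the simplest case of a 2-D point with an SRID (real data would normally be produced by a GIS tool or library rather than by hand), standard-library Python can encode and decode it:

```python
import struct

def point_to_ewkb_hex(lon, lat, srid=4326):
    """Encode a 2-D point as hex EWKB: byte-order flag, geometry type
    with the SRID-present bit set, SRID, then two doubles."""
    # 0x01 = little-endian; type 1 (Point) | 0x20000000 (SRID present)
    return (struct.pack("<BII", 1, 0x20000001, srid)
            + struct.pack("<dd", lon, lat)).hex().upper()

def ewkb_hex_to_point(hex_str):
    """Decode a hex EWKB 2-D point back to (srid, lon, lat)."""
    data = bytes.fromhex(hex_str)
    endian = "<" if data[0] == 1 else ">"
    (geom_type,) = struct.unpack_from(endian + "I", data, 1)
    assert geom_type & 0x20000000, "expected the SRID flag to be set"
    (srid,) = struct.unpack_from(endian + "I", data, 5)
    lon, lat = struct.unpack_from(endian + "dd", data, 9)
    return srid, lon, lat

h = point_to_ewkb_hex(13.388889, 52.516667)  # central Berlin, as in the text
print(ewkb_hex_to_point(h))                  # prints: (4326, 13.388889, 52.516667)
```

Each value in the delimited file's GEOMETRY column is simply such a hex string, which Redshift parses during COPY.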
So there we have some scenarios for making use of geographic data in Amazon Redshift using the new GEOMETRY type and associated spatial functions, and I’m sure there are many more! The new type and functions are available now in all AWS Regions to all customers at no additional cost. — Steve

Customize Your LinkedIn Feed to Help You Accomplish Your Goals

LinkedIn Official Blog -

We designed LinkedIn around a simple idea: "People you know, talking about things you care about." It is at the heart of every product we build. One of the most vibrant and powerful tools we have to make this idea a reality is actually right in front of you every time you open the LinkedIn app: your feed! The LinkedIn Feed is a great way to get inspired by new insights and perspectives. But it's also an invitation to have conversations and to participate with a vibrant network of... .



Subscribe to Complete Hosting Guide aggregator