Industry Buzz

20 of The Best Video Ad Networks for Advertisers

Grow Traffic Blog -

Advertisers looking to expand into new channels can do a lot worse than investing in video. The cost and buy-in for video production are a thousand times lower now than they were a decade ago, with commonplace HD cameras in every smartphone, quick and easy video editing apps on every platform, and the skill necessary to use them just a few tutorials away. Once you’ve decided to embark on a quest of video advertising, though, you need to figure out where to advertise. Sure, you can just dump your videos on YouTube, Facebook, or Instagram, but those aren’t video advertising networks. They work, but they aren’t specialized. What you should actually do is invest in a couple of specific ad networks to test different audiences. What I’ve done is compile a list of 20 different video ad networks that have floated to the top as the leaders in the industry. All of them will work, so it’s up to you to pick a few and invest. Run a small, simple budget with your ads and see how the audiences respond. Ideally, you’ll find great groups of people and get more than your money’s worth in return.
What to Look For in an Advertising Network
When you’re vetting a video ad network, you should look at a few different factors to determine if it’s worth your time.
Display options. A good network does more than just display advertising; they have mobile placements, desktop placements, and other resources at their disposal. Since users often use different kinds of devices throughout the day, connecting with them on all of them is extremely important.
Notable clients. Good networks work with brands both large and small, and many of them will promote their list of big-name clients as a way to attract other businesses who want to be in good company.
Useful data. Every ad network has analytics available, but you want more than just the basic ad performance metrics. A good data set will give you audience information, targeting optimization, and a whole lot more.
Broad targeting. Speaking of targeting options, you want your ad network to give you as many options as possible. Part of the reason Facebook ads are so successful is the wide range of possible targeting options. Any good video advertising network will have their own selection of data to pick through and use for this exact purpose.
Of course, nothing beats an experiment. Set up a basic budget and run some ads to see how they do. Limit your investment until you’ve proven your success.
1. Social Networks
Rather than take up a third of this list with various social networks, I’ll just put them here under one banner. Pretty much every social network today offers some video advertising, and many of them have a good selection of targeting options and a broad audience to work with. Facebook, Twitter, and Instagram all do video ads quite well. YouTube, of course, works directly with Google Ads. Pinterest and Snapchat are also good options to consider.
2. App Lovin
This ad network is focused primarily on mobile games. The mobile game industry is huge, with everything from industry giants like King’s games, Hearthstone, and Fortnite to the massive swaths of Chinese shovelware. Mobile game ads tend to fall into two categories: videos and interactive ads. Videos showcasing games can be extremely compelling, and this ad network has a huge audience ready to go for your mobile app ads.
3. Ad Colony
This is one of the largest mobile ad networks in the world, with an audience of over 1.4 billion users worldwide.
They have a variety of different ad formats, including instant play video, end cards, display videos, and rich media. They’ve also worked with a huge array of different brands, from FX and UFC to Jack in the Box and Hilton. You can see galleries and examples of their ads before you even register.
4. Vungle
Vungle is one of the fastest growing mobile video ad companies out there. Right now they’re in a great place to invest, and it’s quite possible that there will be some beneficial changes coming down the pipe in the next couple years. Vungle is, as of this writing, being purchased for somewhere north of $750 million by the private equity firm Blackstone. With this kind of backing, the sky is the limit for a network like Vungle.
5. Verizon Advertising
Verizon, being one of the world’s largest telecom giants, has fingers in pretty much every pie they can reach. It should come as no surprise that you can run video advertising with them as well. Verizon’s advertising arm is Oath, a brand which you might recognize if you’ve paid attention to online advertising over the last few years. Oath is a company Verizon uses to head up their media wing, and includes AOL’s advertising network, Yahoo’s advertising platform, and all that entails. This includes advertising on Tumblr, the network formerly known as Brightroll, and several others.
6. Rhythm One
This video advertising network isn’t entirely focused on mobile, but rather takes a cross-platform unified approach. Through them, you can run a singular ad campaign that stretches across devices and media types, including display advertising, mobile advertising, and even TV commercial spots. They have both self-service and managed campaigns, and their audience is top-notch. Definitely worth giving them a look.
7. Hulu
Hulu is a household name by now, rivaling Netflix and Amazon’s video service. They have 65 million viewers watching ads with their videos, so their audience is pretty significant and engaged. With an average age of 32, you’re reaching primarily millennials with the variety of different ad formats and buying methods. Overall, you can do a lot worse than something like Hulu with your advertising money.
8. Tube Mogul
All the big-name video advertising platforms have been bought up by major brands looking to acquire a foot in the door. Tube Mogul was one such network, and their platform was pretty great. Don’t let the past tense fool you; they’re still around, they’re just operating under the banner of the Adobe Advertising Cloud. Their ads operate across channels and with excellent, detailed analytics and targeting. Adobe’s cloud services are quite solid, so I’d recommend giving this platform a look.
9. Tremor Video
It almost feels like cheating adding this network to the list, because they acquired Rhythm One not too long ago, giving them an even broader reach than they already had. Still, the two remain mostly separate, so you can use Tremor at the same time if you want. They have a lot of tech backing up their network, with contextual advertising, social reach, television, and placements all over the world. They also have very detailed targeting options, including unique geo-behavioral options.
10. Undertone
Undertone is a large premium ad network that works with clients ranging from Audi and Disney to smaller brands the world over. They have a wide range of ad units, including unique digital canvases.
They’re constantly pushing the cutting edge of digital advertising, with unique technology and broad targeting options to play with. Their goal is synchronization: making sure all elements of your ad campaigns are on the same page. They’re definitely one of the largest networks on this list, but if you can meet their requirements, they’re an excellent choice.
11. SpotX
SpotX is another great ad network, though they’re a managed services provider, which means they require you to apply and meet their standards before you can run ads on their network. They have custom targeting, strategic planning, and programmatic buying and management that can bring in incredible returns on investment. The application process is simple, but they’re pretty strict about what they accept, so don’t be too disappointed if they don’t let you in right away.
12. Chocolate
Chocolate is another one of those “new” ad networks that is made up of the remnants of other networks they bought up. Vdopia, for example, is part of the new Chocolate network. They have a marketplace made up of both publishers and advertisers, where you can manually or programmatically purchase your advertising. They’re designed to scale with your business as you grow, and work with a range of different mobile video ad formats.
13. Conversant
You may recognize the name Conversant from discussions of affiliate marketing, where CJ Affiliate is one of the top names. Conversant is the company behind CJ Affiliate ever since they bought Commission Junction. Conversant has since been purchased by Publicis Groupe, so much like Vungle, this is a network to watch moving forward. Given that they already work with big name brands like Cabela’s, Urban Outfitters, and GoDaddy, there’s a ton of potential here.
14. Say Media
Say specializes in “making ads people want to see”. Now, I’m always skeptical of claims like that, since the ads are only as good as the people creating them, but it can’t be denied that Say Media is a pretty great platform. They have full page ads, an alternative to banners, branded content, and a whole lot more. Since their focus is on content rather than on the call to action, you often find excellent stories and a great placement for video ads.
15. Exponential
Exponential is a relatively old ad network and hasn’t been acquired by another firm, which is always a good sign; they have the legs to stand on their own. Much like other modern networks, they specialize in cross-platform unified ads that sync up campaigns between mobile, desktop, and tablet advertising. Their audience is significant and their engagement rates are pretty good, so it’s a good network to dig into.
16. Amobee
Amobee is another ad network made up of the devoured scraps of other ad networks, mashed together to make something new and, more importantly, larger. They cover TV, digital marketing, and social media all in one platform, making it great for pretty much every device and audience you could want. As for their constituent parts, Amobee is made up of Adconion, Kontera, and some other components.
17. AppNexus
AppNexus is an advertising platform with a huge, open and transparent marketplace. You can advertise on a wide variety of platforms and devices, with different styles of content and media, including video. Their video inventory in particular is excellent, with unique video options, programmatic targeting, and a flexible path for purchasing your inventory.
18. Rubicon Project
Rubicon Project is a global ad network with video and other advertising options.
You can purchase ads in pretty much any region with thousands of publishers. They have a three-step process to verify your advertising, which helps keep out low-quality, spammy, disingenuous, or dangerous ads. Additionally, they have a bunch of different tools and automation options to help enhance and manage your advertising.
19. Aerserv
In operation since 2013, Aerserv has joined forces with InMobi to form a huge video and app-based advertising network. They have robust ad inventory management, great mobile targeting and implementation, programmatic buying, and optimization through varying means with real time data. They also provide a dedicated support team to answer questions and help with advertising at any time.
20. Unruly
Unruly is an interesting ad network in that they’re driven not just by engagement, but by sentiment. Their ad optimization is powered by emotional data in addition to other standard factors. Their network has a global audience of 1.2 billion people, with brand-safe premium sites at the forefront of the network. If you’re interested in something a little outside the box, Unruly is well worth an experiment.
The post 20 of The Best Video Ad Networks for Advertisers appeared first on Growtraffic Blog.

New – Trigger a Kernel Panic to Diagnose Unresponsive EC2 Instances

Amazon Web Services Blog -

When I was working on systems deployed in on-premises data centers, it sometimes happened that I had to debug an unresponsive server. It usually involved asking someone to physically press a non-maskable interrupt (NMI) button on the frozen server or to send a signal to a command controller over a serial interface (yes, serial, such as in RS-232). This command triggered the system to dump the state of the frozen kernel to a file for further analysis. Such a file is usually called a core dump or a crash dump. The crash dump includes an image of the memory of the crashed process, the system registers, program counter, and other information useful in determining the root cause of the freeze.
Today, we are announcing a new Amazon Elastic Compute Cloud (EC2) API allowing you to remotely trigger the generation of a kernel panic on EC2 instances. The EC2:SendDiagnosticInterrupt API sends a diagnostic interrupt, similar to pressing an NMI button on a physical machine, to a running EC2 instance. It causes the instance’s hypervisor to send a non-maskable interrupt (NMI) to the operating system. The behaviour of your operating system when an NMI interrupt is received depends on its configuration. Typically, it involves entering a kernel panic. The kernel panic behaviour also depends on the operating system configuration; it might trigger the generation of the crash dump data file, obtain a backtrace, load a replacement kernel, or restart the system. You can control who in your organisation is authorized to use that API through IAM policies; I will give an example below. Cloud and System Engineers, or specialists in kernel diagnosis and debugging, find invaluable information in the crash dump to analyse the causes of a kernel freeze. Tools like WinDbg (on Windows) and crash (on Linux) can be used to inspect the dump.
Using Diagnostic Interrupt
Using this API is a three-step process. First you need to configure the behavior of your OS when it receives the interrupt. By default, our Windows Server AMIs have memory dump already turned on. Automatic restart after the memory dump has been saved is also selected. The default location for the memory dump file is %SystemRoot%, which is equivalent to C:\Windows. You can access these options by going to Start > Control Panel > System > Advanced System Settings > Startup and Recovery.
On Amazon Linux 2, you need to install and configure kdump and kexec. This is a one-time setup.
$ sudo yum install kexec-tools
Then edit the file /etc/default/grub to allocate the amount of memory to be reserved for the crash kernel. In this example, we reserve 160M by adding crashkernel=160M. The amount of memory to allocate depends on your instance’s memory size. The general recommendation is to test kdump to see if the allocated memory is sufficient. The kernel doc has the full syntax of the crashkernel kernel parameter.
GRUB_CMDLINE_LINUX_DEFAULT="crashkernel=160M console=tty0 console=ttyS0,115200n8 net.ifnames=0 biosdevname=0 nvme_core.io_timeout=4294967295 rd.emergency=poweroff rd.shell=0"
And rebuild the grub configuration:
$ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
Finally, edit /etc/sysctl.conf and add the line kernel.unknown_nmi_panic=1. This tells the kernel to trigger a kernel panic upon receiving the interrupt. You are now ready to reboot your instance. Be sure to include these commands in your user data script or in your AMI to automatically configure this on all your instances; a sketch of such a user data script is shown below.
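A minimal sketch of such a user data script, folding together the Amazon Linux 2 commands above, might look like the following. It assumes the crashkernel=160M reservation used in this example, so adjust the size to your instance type and test kdump before relying on it.
#!/bin/bash
# Minimal sketch of a user data script that arms kdump on Amazon Linux 2
# (assumes the crashkernel=160M reservation from the example above).
set -euo pipefail

# Install the kexec/kdump tooling
yum install -y kexec-tools

# Reserve memory for the crash kernel by prepending crashkernel=160M to the
# existing GRUB_CMDLINE_LINUX_DEFAULT value, then rebuild the grub configuration
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&crashkernel=160M /' /etc/default/grub
grub2-mkconfig -o /boot/grub2/grub.cfg

# Panic (and therefore dump) when an unknown NMI such as the diagnostic interrupt arrives
echo 'kernel.unknown_nmi_panic=1' >> /etc/sysctl.conf

# Make sure kdump is armed on every boot, then reboot so the new kernel command line takes effect
systemctl enable kdump.service
reboot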
Once the instance is rebooted, verify that kdump is correctly started.
$ systemctl status kdump.service
● kdump.service - Crash recovery kernel arming
   Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; vendor preset: enabled)
   Active: active (exited) since Fri 2019-07-05 15:09:04 UTC; 3h 13min ago
  Process: 2494 ExecStart=/usr/bin/kdumpctl start (code=exited, status=0/SUCCESS)
 Main PID: 2494 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/kdump.service
Jul 05 15:09:02 ip-172-31-15-244.ec2.internal systemd[1]: Starting Crash recovery kernel arming...
Jul 05 15:09:04 ip-172-31-15-244.ec2.internal kdumpctl[2494]: kexec: loaded kdump kernel
Jul 05 15:09:04 ip-172-31-15-244.ec2.internal kdumpctl[2494]: Starting kdump: [OK]
Jul 05 15:09:04 ip-172-31-15-244.ec2.internal systemd[1]: Started Crash recovery kernel arming.
Our documentation contains the instructions for other operating systems. Once this one-time configuration is done, you’re ready for the second step, to trigger the API. You can do this from any machine where the AWS CLI or SDK is configured. For example:
$ aws ec2 send-diagnostic-interrupt --region us-east-1 --instance-id <value>
There is no return value from the CLI; this is expected. If you have a terminal session open on that instance, it disconnects. Your instance reboots. When you reconnect to your instance, you find the crash dump in /var/crash.
The third and last step is to analyse the content of the crash dump. On Linux systems, you need to install the crash utility and the debugging symbols for your version of the kernel. Note that the kernel version should be the same as the one captured by kdump. To find out which kernel you are currently running, use the uname -r command.
$ sudo yum install crash
$ sudo debuginfo-install kernel
$ sudo crash /usr/lib/debug/lib/modules/4.14.128-112.105.amzn2.x86_64/vmlinux /var/crash/127.0.0.1-2019-07-05-15\:08\:43/vmcore
crash 7.2.6-1.amzn2.0.1
... output suppressed for brevity ...
      KERNEL: /usr/lib/debug/lib/modules/4.14.128-112.105.amzn2.x86_64/vmlinux
    DUMPFILE: /var/crash/127.0.0.1-2019-07-05-15:08:43/vmcore  [PARTIAL DUMP]
        CPUS: 2
        DATE: Fri Jul 5 15:08:38 2019
      UPTIME: 00:07:23
LOAD AVERAGE: 0.00, 0.00, 0.00
       TASKS: 104
    NODENAME: ip-172-31-15-244.ec2.internal
     RELEASE: 4.14.128-112.105.amzn2.x86_64
     VERSION: #1 SMP Wed Jun 19 16:53:40 UTC 2019
     MACHINE: x86_64 (2500 Mhz)
      MEMORY: 7.9 GB
       PANIC: "Kernel panic - not syncing: NMI: Not continuing"
         PID: 0
     COMMAND: "swapper/0"
        TASK: ffffffff82013480 (1 of 2) [THREAD_INFO: ffffffff82013480]
         CPU: 0
       STATE: TASK_RUNNING (PANIC)
Collecting kernel crash dumps is often the only way to collect kernel debugging information, so be sure to test this procedure frequently, in particular after updating your operating system or when you create new AMIs.
Control Who Is Authorized to Send Diagnostic Interrupt
You can control who in your organisation is authorized to send the Diagnostic Interrupt, and to which instances, through IAM policies with resource-level permissions, like in the example below.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:SendDiagnosticInterrupt",
      "Resource": "arn:aws:ec2:region:account-id:instance/instance-id"
    }
  ]
}
Pricing
There are no additional charges for using this feature. However, as your instance continues to be in a ‘running’ state after it receives the diagnostic interrupt, instance billing will continue as usual.
Availability
You can send Diagnostic Interrupts to all EC2 instances powered by the AWS Nitro System, except A1 (Arm-based).
As I write this, that means C5, C5d, C5n, i3.metal, I3en, M5, M5a, M5ad, M5d, p3dn.24xlarge, R5, R5a, R5ad, R5d, T3, T3a, and Z1d. The Diagnostic Interrupt API is now available in all public AWS Regions and GovCloud (US); you can start using it today. -- seb

Upcoming Changes to Let’s Encrypt Plugin

cPanel Blog -

Earlier this year, Let’s Encrypt announced the end of life (EOL) plan for their original API. Starting this November, they will no longer allow new account registrations through the original API. After the original API reaches EOL, new account registrations must use Let’s Encrypt’s new API. Because of this, cPanel is migrating its Let’s Encrypt plugin to use that new API instead of the old API. Why change now? If we do not update our plugin, we ...

Content Not Performing On Social Media – How To Optimize Your Social Game

Pickaweb Blog -

As marketing discoveries go, it can be disappointing to realize that your social media channels aren’t having the impact you’d like. Instead of feeling disheartened, though, try seeing it as a good thing. How? Many businesses never make the discovery and spend both time and money investing in a channel that offers them no business. The post Content Not Performing On Social Media – How To Optimize Your Social Game appeared first on Pickaweb.

You #AskGoogleWebmasters, we answer

Google Webmaster Central Blog -

We love to help folks make awesome websites. For a while now, we've been answering questions from developers, site-owners, webmasters, and of course SEOs in our office hours hangouts, in the help forums, and at events. Recently, we've (re-)started answering your questions in a video series called #AskGoogleWebmasters on our YouTube channel.  (At Google, behind the scenes, during the recording of one of the episodes.) When we started with the webmaster office-hours back in 2012, we thought we'd be able to get through all questions within a few months, or perhaps a year. Well ... the questions still haven't stopped -- it's great to see such engagement when it comes to making great websites!  To help make it a bit easier to find answers, we've started producing shorter videos answering individual questions. Some of the questions may seem fairly trivial to you, others don't always have simple answers, but all of them are worth answering. Curious about the first episodes? Check out the videos below and the playlist for all episodes! To ask a question, just use the hashtag #AskGoogleWebmasters on Twitter. While we can't get to all submissions, we regularly pick up the questions there to use in future episodes. We pick questions primarily about websites & websearch, which are relevant to many sites. Want to stay in the loop? Make sure to subscribe to our channel. If you'd like to discuss the questions or other important webmaster topics, feel free to drop by our webmaster help forums and chat with the awesome experts there.  Posted by John Mueller, Google Switzerland

For the 12th Time, Liquid Web is Honored as an Inc. 5000 Fastest-Growing U.S. Company

Liquid Web Official Blog -

LANSING, Mich., Aug. 15, 2019 – Liquid Web, LLC, (https://www.liquidweb.com), the market leader in managed hosting and managed application services to SMBs, has been announced as a 12-time honoree for the Inc. 5000 Fastest-Growing US Companies. Of the tens of thousands of companies that have applied to the Inc. 5000 over the years, only a fraction have made the list more than once. Only 33 companies have made the list 12 times.
“We certainly take great pride in this unique accomplishment,” said Liquid Web CEO Jim Geiger. “Achieving this honor for the 12th time further validates our strategy to make technology more accessible and valuable for SMB entrepreneurs and the agencies, designers, and developers who create for them. We have an unwavering dedication to provide impeccable products, service, and support to power the online potential of our customers. Our growth is fueled by our vision to be the world’s most loved hosting provider, and our 2019 Net Promoter Score of 69 validates that our more than 30,000 customers rely on Liquid Web as their trusted technology partner,” Geiger said.
Inc. 5000 recognizes the fastest-growing companies in America, ranking each company by the rate of revenue growth over a span of three years. Factors include the number of employees, industry, location, and revenue. Liquid Web has continued its successful growth both organically and through strategic acquisitions. Earlier this year, Liquid Web announced the launch of its VMware Private Cloud Powered by NetApp to offer enterprise-level features and functionality at affordable prices to small to midsize businesses. They also introduced annual pricing to their VPS offering and expanded their Managed WordPress and WooCommerce offerings by adding features such as WPMerge and AffiliateWP and expanding locations in the EU to better serve their EU customers.
About Liquid Web
Marking its 22nd anniversary, Liquid Web powers online content, commerce, and potential for SMB entrepreneurs and the designers, developers, and digital agencies who create for them. An industry leader in managed hosting and cloud services, Liquid Web is known for its high-performance services and exceptional customer support. Liquid Web offers a broad portfolio designed so customers can choose a hosting solution that is hands-on or hands-off or a hybrid of the two. The company owns and manages its own core data centers, providing a diverse range of offerings, including bare metal servers, fully managed hosting, Managed WordPress, and Managed WooCommerce Hosting, and continues to evolve its service offerings to meet the ever-changing needs of its web-reliant, professional customers. With more than a million sites under management, Liquid Web serves over 30,000 customers spanning 150 countries. The company has assembled a world-class team, global data centers and an expert group of 24/7/365 solution engineers. As an industry leader in customer service*, the rapidly expanding company has been recognized among INC. Magazine’s 5000 Fastest-Growing Companies for twelve years. For more information, please visit www.liquidweb.com, or read our blog posts at https://www.liquidweb.com/blog. Stay up to date with all Liquid Web events on Twitter and LinkedIn.
*2019 Net Promoter Score of 69
Contact: Mayra Pena, mpena@liquidweb.com
The post For the 12th Time, Liquid Web is Honored as an Inc. 5000 Fastest-Growing U.S. Company appeared first on Liquid Web.

How Cloudflare can Amp up your SEO Efforts

Reseller Club Blog -

Imagine, in a sea of millions of websites, yours exists; however, you want it to do more than just exist – you want it to thrive, to be visible, to have people interact with it. And so, you decide to invest in SEO (Search Engine Optimisation). It’s a cost-effective, credible solution to get your website ranked on Google. Sounds great, right? What’s even better is that you can add to your efforts by using Cloudflare – it actually helps boost your SEO. So, let’s break down what that means by answering some simple questions.
What is Cloudflare?
Cloudflare is one of the world’s largest Content Delivery Networks (CDN). However, it does a lot more than that. Cloudflare also provides services like DDoS (Distributed Denial of Service) mitigation, distributed domain name services and internet security services.
How does it work?
Well, between the visitor and the Cloudflare user’s hosting provider sit the Cloudflare services. These services act as a reverse proxy server for websites. What does that mean? A reverse proxy server, like Cloudflare, sits in front of web servers and basically forwards the visitor’s request to those web servers, acting as a middleman.
To go a little further into reverse proxy servers, we first need to understand proxy servers. A proxy server, also known as a forward proxy, is a server that handles the client/visitor machines. Say, for instance, a client initiates a request to a server: the request may first go to one server (a proxy server) before being sent to the actual backend server. The backend server will process the request and send the response back. However, the client is unaware that there was a server in between while sending the request to the final server – this is known as a proxy server. The backend server will assume that the proxy server is the client and that the requests are coming from that particular server.
With a reverse proxy server, a client might initiate a request to the intermediate, proxy server. However, the difference is there are multiple backend servers, and the request from the proxy server will go to only one of them. The backend server will process the request and redirect it back to the client. The client doesn’t know how many servers there are. Each time a request is sent, it could be to a new server. A forward proxy is a single setup of multiple clients with one server. A reverse proxy has a single client (or multiple clients) with multiple servers. The customers are the clients, who only see the website; however, our requests will go internally to different servers based on where we reside.
So, what are the benefits of Cloudflare?
To begin with, Cloudflare is simple and easy to install. Furthermore, Cloudflare boosts your security and enhances your protection from attacks. By acting as a reverse proxy server, your website never needs to reveal the IP address of the origin server. This means that hackers can’t attack the server. It only allows genuine users the opportunity to access your website so that the resources of your website don’t get drained and the speed of your website remains intact. It protects your website from DDoS and DoS attacks. If your website slows down, it will serve cached files until your site speed returns to normal. Finally, it provides free SSL certificates.
Now, you’ve probably noticed that speed and security are two of Cloudflare’s largest benefits. It’s these two factors that contribute to your SEO efforts.
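If you want to see the reverse proxy in action, a quick header check from the command line shows that responses for a Cloudflare-proxied site come back from Cloudflare’s edge rather than directly from the origin. This is only an illustrative sketch: example.com is a placeholder for any site behind Cloudflare, and which headers you actually see (such as cf-ray or cf-cache-status) depends on that site’s configuration.
# Inspect the response headers of a site proxied by Cloudflare (replace the placeholder domain)
curl -sI https://example.com/ | grep -iE '^(server|cf-ray|cf-cache-status):'
# A proxied site typically answers with "server: cloudflare" plus CF-Ray and CF-Cache-Status
# headers, showing the request terminated at the CDN edge rather than at your origin server.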
Before we tell you how, let’s also run through the basics of SEO.
SEO defined: SEO, which stands for Search Engine Optimisation, is the practice of increasing the quality and quantity of traffic to your website through organic search engine results.
Quality of traffic: Quality traffic basically refers to having the right audience reach your website – those who are genuinely interested in the products you offer.
Quantity of traffic: Once you’ve accessed the right kind of audience, you want to increase the numbers so that more people are engaging and interacting with your website and the services you offer.
Organic Search Results: Organic traffic refers to the traffic that you didn’t have to pay for.
How does it work? Well, when you access Google (or Bing, Yahoo or any other Search Engine) and type in a request, it responds by sending you a long list of websites that could answer your request. However, there’s a method to the madness. The links that Google offers are decided on the basis of a crawler that it uses. This crawler goes out and collects the information related to what you typed in the search bar. The crawlers then bring back all that content to the search engine to build an index. The index is then run through an algorithm, which tries to match the data with your request.
The Optimisation part of SEO is when content writers on the web design that same content to help the Search Engine find it. Optimization can take the form of title tags, meta tags, the right keywords, internal links – all of which help the crawler move through and understand your site structure quickly.
So, what would it take for your page to rank highly on the index? Here’s where we return to Cloudflare, because speed and security are two factors that help determine your ranking on the page. Let’s see how:
Speed: Google has stated that website speed is an important factor considered in its internal algorithm to rank pages. This is because slow page speed means that the crawler goes through fewer pages, which could negatively affect your place on the index. Page speed also becomes a user-experience issue: pages with longer load times tend to have steep bounce rates, and website users spend less time on the page. Load times also have a direct impact on conversions.
How can Cloudflare CDN help? Cloudflare is one of the largest Content Delivery Networks in the world. This means that your site will be distributed amongst a large pool of servers, all of which will handle requests for your site. As a reverse proxy server, Cloudflare will balance the incoming traffic evenly amongst the servers so that none of them become overloaded. In the event that one server fails, another can take its place. The reverse proxy server will also cache your content, resulting in quicker performance. All of these factors can drastically improve your site speed – a key factor in determining your ranking on the index.
Security: Google is looking to make the web a secure and safe place and has advocated for websites moving to HTTPS by acquiring an SSL certificate. The reason is simple: data is encrypted in transit, and therefore private and sensitive information isn’t misused. It is said that HTTPS websites will achieve a boost in ranking, while also gaining the optimal level of security. In essence, Google looks to reward secure, high-performance websites by increasing their ranking on the index.
Also, threats and attacks can affect your rankings.
Although there are no direct penalties, your ranking does tend to suffer, and with repeated attacks, Google can flag or even blacklist your website for malware – thereby obliterating your spot in the rankings.
How do CDNs, like Cloudflare, help with these problems? Firstly, Cloudflare provides its users with SSL certificates. Secondly, CDNs like Cloudflare offer a reverse proxy, which keeps a website or server’s IP address hidden. This makes it harder for attackers to target them with a DDoS attack. These threats will instead be routed to the CDN, which comes equipped with the resources to fend off an attack. Therefore, your site remains safe and secure and should rank more consistently.
At ResellerClub, Cloudflare acts as a proxy between your visitors and our servers. As a result, you’ll be able to fend off malicious visitors, save bandwidth and greatly reduce page load times. We deploy high-performing Cloudflare CDN and security so that you and your customers stay secure. With our packages, you can easily set up your Reseller Hosting business, because we offer robust infrastructure, a fantastic team of support executives and effective management of your web hosting accounts.
If you have any questions or suggestions, feel free to comment below!
The post How Cloudflare can Amp up your SEO Efforts appeared first on ResellerClub Blog.

How to Boost SEO on Your WordPress Website [In 15 Steps]

HostGator Blog -

The post How to Boost SEO on Your WordPress Website [In 15 Steps] appeared first on HostGator Blog.
No matter how much work you put into making your WordPress website look good, it won’t pay off if you can’t get people to show up. The internet is packed full of websites vying for attention. For people to find yours, you have to put some effort into getting them there. A top tactic for doing that is search engine optimization (SEO).
SEO offers a number of important benefits:
It improves your website’s visibility.
It makes it easy for people already looking for what you do to find you.
It increases traffic.
It’s affordable.
Once achieved, SEO results are long-lasting.
SEO isn’t your only option for getting more eyes on your WordPress website, but it’s one of the best places to start.
15 Steps to Improving WordPress SEO
SEO is competitive and can take a lot of time to do well. But many of the most important steps for WordPress SEO are actually fairly simple. Some of these you can even get done today, while for some others you’ll want to create an SEO plan to implement over the coming weeks and months.
1. Make sure you have the right hosting provider and plan.
Search engine algorithms—the complex code that determines which order websites show up in when you do a search—aim to prioritize websites that provide the best experience to visitors. Now think about how you feel when you click on a website and it takes forever for the page to load. In the fast-moving world of the high-speed internet, “forever” can actually just mean a few seconds, but that’s long enough for waiting to feel like a nuisance. Search engines are well aware of how people feel about slow loading times, so site speed is a ranking factor they’re upfront about. For your website to load quickly, choosing the right web hosting provider and plan is paramount. If your website isn’t delivering the level of speed you need, consider if it’s time for either an upgrade or a switch to an all new web hosting company. Look for one that offers managed WordPress hosting and can promise reliable service and site speed.
2. Install an SEO plugin.
Choosing WordPress for your website means you don’t have to deal with HTML when making updates (thank goodness). In its place, you need to find the right plugins that provide the substitute functionality you need. Some SEO steps that you’d otherwise use HTML for can be completed via an intuitive interface with the right plugin. Some popular options include:
Yoast
All in One SEO Pack
SmartCrawl SEO
The SEO Framework
Many of the next steps on our list are much easier to complete with a good WordPress SEO plugin.
3. Create a sitemap.
For a page on your website to show up in a search, the search engine has to first know it’s there. The search engines have bots that continually crawl the web to find and index web pages. You can speed up the process of getting all the pages on your website indexed by creating and submitting a sitemap. All of the plugins shared above have features to help with this step. Use the plugin of your choice to generate a sitemap for your WordPress site, then submit it to each of the main search engines.
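If you manage your site from the command line, steps 2 and 3 can also be scripted. The sketch below is a hypothetical example using WP-CLI with the Yoast SEO plugin; the plugin slug and sitemap URL are assumptions based on Yoast’s defaults, and any of the plugins listed above works just as well.
# Hypothetical WP-CLI sketch: install and activate an SEO plugin (Yoast's slug is wordpress-seo)
wp plugin install wordpress-seo --activate
# Yoast generates an XML sitemap automatically; confirm it responds before submitting it
# to the search engines (replace the placeholder domain with your own)
curl -I https://example.com/sitemap_index.xml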
4. Do keyword research.
Keywords are the cornerstone of an SEO strategy. You don’t want to show up in the search engines for just any search; you want your website to show up when people are looking for what you do. When you optimize a web page for search, you’re optimizing it for a specific keyword. To determine which keywords to base your strategy on, use free keyword research tools like Google’s Keyword Planner and Answer the Public, or paid tools like Moz and SEMRush to gain data on the terms your audience is using in search. You want to identify keywords that get a decent number of monthly searches, but aren’t too competitive to rank for. For new websites and small businesses, long-tail keywords—terms that are specific and tend to be longer—are usually your best bet.
5. Choose a primary keyword for each page.
When you have a strong list of keywords to target, figure out the best primary keyword for each page on your website, along with a secondary keyword or two. Every page should have a different primary keyword so you aren’t competing with yourself for search engine rankings. Selecting your keyword is necessary for the next several steps.
6. Customize all your URLs.
When you create a new page in WordPress, it will automatically generate a URL for it—but one that provides no useful information. It will look something like an automatically numbered page ID rather than a readable address. The URL is part of the page search engines look at to learn what the page is about. Not only is a generic URL like that not useful to your visitors, who will never remember it, it also doesn’t communicate anything to Google about what’s on the page. Your SEO plugin should provide a field for you to customize the URL of each page, or it may even automatically generate a URL based on the page title you enter. Make sure to fill in a URL that uses your primary keyword and relates to what’s on the page.
7. Write a relevant meta description for each page.
Meta descriptions don’t have an effect on a web page’s rankings, but they’re still important because they often show up on the search engine results page when you rank for a term. The meta description is your opportunity to get someone to choose your website out of all the options that appear. Use the brief space you have here (around 155 characters) to make a case for why someone should click. And be sure to include your primary keyword. If your page starts showing up in the results when someone searches for your keyword, it will be bolded in the meta description, drawing more attention.
8. Use headings strategically.
Another part of the page the algorithms look at to understand what it’s about is the headings you use. With HTML, you would add headings to the page using the <h2>, <h3>, and <h4> tags. Within WordPress though, you can select a heading each time you add a text block to the page. Headings are useful for separating the page into different sections that make it easier for your readers to skim. In terms of SEO, they give you more options to include your keywords—but you should only do so if including your keyword in a heading also makes sense for your human readers.
9. Optimize your images for search.
Search engine algorithms can’t see images, but there are a few parts of an image file that they can read. That includes:
The image filename
The alt tag
The caption
The description
You can fill in these sections easily in WordPress each time you add a new image to your media library, allowing you to optimize your WordPress images for SEO. Look for the Attachment Details section on the right side of the screen. Each of these fields is another opportunity to communicate something about your web page to the search engines. Fill in this information for every image you add to your website, including your keyword where relevant. Most of these sections won’t be visible to your average visitor, but the caption will, so make sure anything you add there is useful to your human visitors.
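For sites with a large media library, these same image fields can also be set from the command line. The sketch below is a hypothetical example using WP-CLI; it relies on the _wp_attachment_image_alt meta key WordPress uses for alt text, and the attachment ID and description shown are placeholders.
# Hypothetical sketch: set the alt text of an existing image attachment with WP-CLI
# (WordPress stores alt text in the attachment's _wp_attachment_image_alt post meta).
wp post meta update 123 _wp_attachment_image_alt "Red ceramic mug on a walnut desk"
# List image attachments (ID and title) to review which ones still need alt text or captions
wp post list --post_type=attachment --fields=ID,post_title --format=table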
10. Optimize your images for speed.
We already established how important speed is for SEO. Even with the right web hosting plan, if you add a lot of large, high-resolution images to your website, they can slow your page loading time. But there are a number of tips to make your images load faster, while still looking good. Here are a few suggestions for optimizing your images for speed (a command-line sketch of the first two follows after step 13):
Save your images as .jpg rather than .png so they’re smaller.
Compress your images with a compression plugin.
Set up lazy load. This is another option a plugin can help with.
11. Create a blogging strategy.
Blogging is good for SEO because it keeps your website current and fresh, and gives you lots of opportunities to rank for different relevant keywords. And WordPress is well designed for hosting a blog on your website. Use your keyword research as a starting point to create a blogging strategy that targets relevant terms your audience is searching for. Create an editorial calendar to keep you consistent in your blogging and strive to make sure each piece you publish:
Is useful to the readers you most want to reach.
Is relevant in some way to the main thing your website offers, be it products, services, or a certain type of information.
Is written in a web-friendly format. That means lots of white space, short paragraphs, sections separated by headings, and bullets or numbered lists where appropriate (kind of like this one).
Is optimized for search (that just means following all the rules in this post).
Blog posts are a good way to increase your website’s visibility and gain the attention and trust of the category of people you want to reach.
12. Practice internal linking.
An internal link is any link on a web page that points to another page on your own website. Internal links are useful for SEO because, by showing Google which pages are related to each other, it’s yet another signal about what your page is about. And with internal links, you have the power to choose the anchor text you use—e.g. the words that are hyperlinked (those that usually show up in blue and underlined). The more context clues you give the algorithms, the better a job they do of understanding what terms your page should rank for. Internal links are also a way to spread link authority around your website. When you add an internal link to a popular page that ranks well now, it makes the linked page look a little more valuable in the eyes of Google.
13. Build backlinks.
Internal links are nice, but backlinks are where your web pages start to really gain points with the algorithms. A backlink is any link to a page on your website from another website. Every backlink from a well respected, relevant website works as an endorsement for your website. Google sees it as confirmation that what’s on the page is valuable. The more high-quality backlinks a website earns, the more SEO authority the website will have. More authority = higher search results. Link building is one of the hardest parts of SEO, but once you have all the on-site optimization covered (which is the category everything else on this list falls under), it’s the most important step for boosting SEO on your WordPress website.
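Here is the command-line sketch referenced in step 10. It is only an illustration of pre-compressing images before you upload them, and it assumes ImageMagick is installed; the filenames, target width, and quality setting are placeholders to adapt to your own site (a compression plugin inside WordPress achieves the same goal).
# Hypothetical sketch: shrink a large source image before uploading it to WordPress.
# ImageMagick's convert resizes it to a 1600px width and saves an ~82%-quality JPEG.
convert hero-original.png -resize 1600x -quality 82 hero.jpg
# Batch version for a folder of PNGs (each output keeps its name with a .jpg extension)
for f in *.png; do convert "$f" -resize 1600x -quality 82 "${f%.png}.jpg"; done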
14. Learn from your analytics.
This list covers all the best practices for improving SEO for your WordPress website, but the details of what will work best will depend on your particular website and audience. You can use a plugin like the Google Analyticator to put some of your most important analytics front-and-center in your WordPress dashboard to easily track how much traffic you get, and which pages are the most popular. Supplement that information by digging deeper in Google Analytics. And if it’s in your budget, you can gain even more in-depth information with SEO tools like Moz or Ahrefs that provide rankings data. Analyze that data as you go and learn what types of pages and content you create perform best in the search engines. Then apply what you learn to your strategy moving forward.
15. Perform regular content audits.
Creating new content for SEO takes a lot of time and energy. Make that work go further by performing a content audit at least once a year to find opportunities to make your old content stronger. High-performing pieces can be updated to make them more current and keep them strong. Low-performing pieces can either be scrapped or improved based on the insights you’ve learned from your analytics. Don’t publish and forget. Treat the content you have as a living thing—it should evolve and grow over time to strengthen your business and become ever more useful to your audience.
Build Your WordPress SEO on a Strong Foundation
For everything on this list to pay off, your website has to work consistently and load fast. For that, you need a web hosting plan that provides nearly constant uptime and promises the highest level of performance. HostGator’s managed WordPress hosting plan delivers. It’s compatible with all WordPress websites, delivers fast loading times, and has a 99.99% uptime guarantee or your money back. Start your WordPress website stronger with the right web hosting plan.
Find the post on the HostGator Blog

Cloudflare Global Network Expands to 193 Cities

CloudFlare Blog -

Cloudflare’s global network currently spans 193 cities across 90+ countries. With over 20 million Internet properties on our network, we increase the security, performance, and reliability of large portions of the Internet every time we add a location.
Expanding Network to New Cities
So far in 2019, we’ve added a score of new locations: Amman, Antananarivo*, Arica*, Asunción, Baku, Bengaluru, Buffalo, Casablanca, Córdoba*, Cork, Curitiba, Dakar*, Dar es Salaam, Fortaleza, Geneva, Göteborg, Guatemala City, Hyderabad, Kigali, Kolkata, Male*, Maputo, Nagpur, Neuquén*, Nicosia, Nouméa, Ottawa, Port-au-Prince, Porto Alegre, Querétaro, Ramallah, and Thessaloniki.
Our Humble Beginnings
When Cloudflare launched in 2010, we focused on putting servers at the Internet’s crossroads: large data centers with key connections, like the Amsterdam Internet Exchange and Equinix Ashburn. This not only provided the most value to the most people at once but was also easier to manage by keeping our servers in the same buildings as all the local ISPs, server providers, and other people they needed to talk to, streamlining our services. This is a great approach for bootstrapping a global network, but we’re obsessed with speed in general. There are over five hundred cities in the world with over one million inhabitants, but only a handful of them have the kinds of major Internet exchanges that we targeted. Our goal as a company is to help make a better Internet for all, not just those lucky enough to live in areas with affordable and easily-accessible interconnection points. However, we ran up against two broad, nasty problems: a) we were running out of major Internet exchanges and b) latency still wasn’t as low as we wanted. Clearly, we had to start scaling in new ways.
One of our first big steps was entering into partnerships around the world with local ISPs, who have many of the same problems we do: ISPs want to save money and provide fast Internet to their customers, but they often don’t have a major Internet exchange nearby to connect to. Adding Cloudflare equipment to their infrastructure effectively brought more of the Internet closer to them. We help them speed up millions of Internet properties while reducing costs by serving traffic locally. Additionally, since all of our servers are designed to support all our products, a relatively small physical footprint can also provide security, performance, reliability, and more.
Upgrading Capacity in Existing Cities
Though it may be obvious and easy to overlook, continuing to build out existing locations is also a key facet of building a global network. This year, we have significantly increased the computational capacity at the edge of our network. Additionally, by making it easier to interconnect with Cloudflare, we have increased the number of unique networks directly connected with us to over 8,000. This makes for a faster, more reliable Internet experience for the >1 billion IPs that we see daily.
To make these capacity upgrades possible for our customers, efficient infrastructure deployment has been one of our keys to success. We want our infrastructure deployment to be targeted and flexible.
Targeted Deployment
The next Cloudflare customer through our door could be a small restaurant owner on a Pro plan with thousands of monthly pageviews or a fast-growing global tech company like Discord.
As a result, we need to always stay one step ahead and synthesize a lot of data all at once for our customers.
To accommodate this expansion, our Capacity Planning team is learning new ways to optimize our servers. One key strategy is targeting exactly where to send our servers. However, staying on top of everything isn’t easy - we are a global anycast network, which introduces unpredictability as to where incoming traffic goes. To make things even more difficult, each city can contain as many as five distinct deployments. Planning isn’t just a question of what city to send servers to; it’s one of which address.
To make sense of it all, we tackle the problem with simulations. Some, but not all, of the variables we model include historical traffic growth rates, foreseeable anomalous spikes (e.g., Cyber Day in Chile), and consumption states from our live deal pipeline, as well as product costs, user growth, and end-customer adoption. We also add in site reliability, potential for expansion, and expected regional expansion and partnerships, as well as strategic priorities and, of course, feedback from our fantastic Systems Reliability Engineers.
Flexible Supply Chain
Knowing where to send a server is only the first challenge of many when it comes to a global network. Just like our user base, our supply chain must span the entire world while also staying flexible enough to quickly react to time constraints, pricing changes including taxes and tariffs, import/export restrictions and required certifications - not to mention local partnerships and many more dynamic location-specific variables. All the more reason we have to stay quick on our feet: there will always be unforeseen roadblocks and detours even in the most well-prepared plans. For example, a planned expansion in our Prague location might warrant an expanded presence in Vienna for failover.
Once servers arrive at our data centers, our Data Center Deployment and Technical Operations teams work with our vendors and on-site data center personnel (our “Remote Hands” and “Smart Hands”) to install the physical server, manage the cabling, and handle other early-stage provisioning processes. Our architecture, which is designed so that every server can support every service, makes it easier to withstand hardware failures and efficiently load balance workloads between equipment and between locations.
Join Our Team
If working at a rapidly expanding, globally diverse company interests you, we’re hiring for scores of positions, including in the Infrastructure group. If you want to help increase hardware efficiency, deploy and maintain servers, work on our supply chain, or strengthen ISP partnerships, get in touch.
*Represents cities where we have data centers with active Internet ports and where we are configuring our servers to handle traffic for more customers (at the time of publishing)

Amplify Console – Hosting for Fullstack Serverless Web Apps

Amazon Web Services Blog -

AWS Amplify Console is a fullstack web app hosting service, with continuous deployment from your preferred source code repository. Amplify Console was introduced at AWS re:Invent in November 2018. Since then, the team has been listening to customer feedback and iterated quickly to release several new features; here is a short re:Cap.
Instant Cache Invalidation
Amplify Console allows you to host single page web apps or static sites with serverless backends via a content delivery network, or CDN. A CDN is a network of distributed servers that cache files at edge locations across the world, enabling low latency distribution of your web file assets. Previously, updating content on the CDN required manually invalidating the cache and waiting 15-20 minutes for changes to propagate globally. To make frequent updates, developers found workarounds such as setting lower time-to-live (TTL) values on asset headers, which enables faster updates but adversely impacts performance. Now, you no longer have to make a tradeoff between faster deployments and faster performance. On every code commit to your repository, the Amplify Console builds and deploys changes to the CDN that are viewable immediately in the browser.
“Deploy To Amplify Console” Button
When publishing your project source code on GitHub, you can make it easy for other developers to build and deploy your application by providing a “Deploy To Amplify Console” button in the Readme document. Clicking on that button will open Amplify Console and propose a three-step process to deploy your code. You can test this yourself with these example projects and have a look at the documentation. Adding a button to your own code repository is as easy as adding this line in your Readme document (be sure to replace the username and repository name in the GitHub URL):
[![amplifybutton](https://oneclick.amplifyapp.com/button.svg)](https://console.aws.amazon.com/amplify/home#/deploy?repo=https://github.com/username/repository)
Manual Deploy
I think it is a good idea to version control everything, including a simple web site where you are the only developer. But just in case you do not want to use a source code repository as the source for your deployment, Amplify Console allows you to deploy a zip file, a local folder on your laptop, an Amazon S3 bucket or any HTTPS URL, such as a shared repository on Dropbox. When creating a new Amplify Console project, select the Deploy without Git Provider option. Then choose your source file (your laptop, Amazon S3 or an HTTPS URI).
AWS CloudFormation Integration
Developers love automation. Deploying code or infrastructure is no different: you must ensure your infrastructure deployments are automated and repeatable. AWS CloudFormation allows you to automate the creation of infrastructure in the cloud based on a YAML or JSON description. Amplify Console added three new resource types to AWS CloudFormation:
AWS::Amplify::App
AWS::Amplify::Branch
AWS::Amplify::Domain
These allow you, respectively, to create a new Amplify Console app, to define the Git branch, and to set the DNS domain name to use. AWS CloudFormation connects to your source code repository to add a webhook to it. You need to include your GitHub Personal Access Token to allow this to happen; this blog post has all the details. Remember not to hardcode credentials (or OAuth tokens) into your CloudFormation templates; use parameters instead. A minimal template sketch is shown below.
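The following hypothetical sketch wires the three new resource types together and deploys them from the shell. The repository URL, app name, branch, and domain are placeholders, the GitHub token is passed in as a parameter (via an assumed GITHUB_TOKEN environment variable) rather than hardcoded, and you should check the CloudFormation documentation for the full property schema before using it.
# Hypothetical sketch: create an Amplify app, branch, and domain with CloudFormation
cat > amplify-app.yaml <<'EOF'
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  OauthToken:
    Type: String
    NoEcho: true            # keep the GitHub token out of stack output and logs
Resources:
  AmplifyApp:
    Type: AWS::Amplify::App
    Properties:
      Name: my-amplify-app
      Repository: https://github.com/username/repository
      OauthToken: !Ref OauthToken
  AmplifyBranch:
    Type: AWS::Amplify::Branch
    Properties:
      AppId: !GetAtt AmplifyApp.AppId
      BranchName: master
  AmplifyDomain:
    Type: AWS::Amplify::Domain
    Properties:
      AppId: !GetAtt AmplifyApp.AppId
      DomainName: example.com
      SubDomainSettings:
        - Prefix: www
          BranchName: master
EOF
# Deploy the stack, supplying the token as a parameter instead of hardcoding it
aws cloudformation deploy --template-file amplify-app.yaml --stack-name my-amplify-app \
  --parameter-overrides OauthToken="$GITHUB_TOKEN"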
Deploy Multiple Git Branches
We believe your CI/CD tools must adapt to your team workflow, not the other way around. Amplify Console supports branch pattern deployments, allowing you to automatically deploy branches that match a specific pattern without any extra configuration. Pattern matching is based on regular expressions. When you want to test a new feature, you typically create a new branch in Git. Amplify Console and the Amplify CLI now detect this and will provision a separate backend and hosting infrastructure for your serverless app. To enable branch detection, use the left menu, click on General > Edit and turn on Branch Autodetection.
Custom HTTP Headers
You can customize Amplify Console to send custom HTTP response headers. Response headers can be used for debugging, security, or informational purposes. To add your custom headers, you select App Settings > Build Settings and then edit the buildspec. For example, to enforce TLS transport and prevent XSS attacks, you can add the following headers:
customHeaders:
  - pattern: '**/*'
    headers:
      - key: 'Strict-Transport-Security'
        value: 'max-age=31536000; includeSubDomains'
      - key: 'X-Frame-Options'
        value: 'X-Frame-Options: SAMEORIGIN'
      - key: 'X-XSS-Protection'
        value: 'X-XSS-Protection: 1; mode=block'
      - key: 'X-Content-Type-Options'
        value: 'X-Content-Type-Options: nosniff'
      - key: 'Content-Security-Policy'
        value: "default-src 'self'"
The documentation has more details.
Custom Containers for Build
Last but not least, we made several changes to the build environment. Amplify Console uses AWS CodeBuild behind the scenes. The default build container image is now based on Amazon Linux 2 and has the Serverless Application Model (SAM) CLI pre-installed. If, for whatever reason, you want to use your own container for the build, you can configure Amplify Console to do so. Select App Settings > Build Settings and then edit the build image setting. There are a few requirements on the container image: it has to have cURL, git, OpenSSH and, if you are building NodeJS projects, node and npm. As usual, the details are in the documentation. Each of these new features has been driven by your feedback, so please continue to tell us what is important for you, and expect to see more changes coming in the second part of the year and beyond. -- seb

SAP’s Migration Factory Certifies Rackspace for HANA Migrations

The Rackspace Blog & Newsroom -

The old adage "Never touch a running system" still contains a kernel of truth. Especially with mission-critical solutions like SAP's Enterprise Resource Planning solution, SAP ECC, and its Customer Relationship Management software, where the core business processes of most large enterprises are executed, concerns about downtime, data losses, time and cost to adapt custom […] The post SAP's Migration Factory Certifies Rackspace for HANA Migrations appeared first on The Official Rackspace Blog.

7 Ways to Improve Your Site Speed in WordPress

HostGator Blog -

The post 7 Ways to Improve Your Site Speed in WordPress appeared first on HostGator Blog. For the past several years, Google has been emphasizing site speed as a ranking factor in their algorithms. Given that, it's amazing to see the number of under-optimized WordPress sites that exist. People spend so much time on "SEO" and content generation, and they forget to do the one thing that will increase the ranking of all their pages. Well, it's never too late to get started. Here are seven ways to improve your site speed in WordPress. These will make Google sit up and take notice! They are listed in order of importance.

Method 1: Use a Datacenter Closest to Your Clients
The location of your server plays a big role in your site speed. For example, if your clients are based in the US, then HostGator is an ideal web host, since we have two data centers in the country – one in Texas, and one in Utah. You can view the speed with which your site is fetched by the Googlebot in your search console. Ideally, this should be just a few hundred milliseconds. When I switched my site to a server closer to my visitors, the fetch time dropped sharply. So don't ignore this aspect of site speed. It's crucial!

Method 2: Implement Dynamic Caching
WordPress generates its pages afresh each time a visitor comes to your site. This is quite a costly process and puts a strain on your database as well as your CPU. In addition, page generation takes time, so there's a small delay for each visitor. The solution to this is dynamic caching.

What is Dynamic Caching?
The idea behind dynamic caching is to save a copy of the generated page and serve that copy to the next visitor. This way, each page is generated just once instead of over and over again. Not only is this faster, it reduces the resource load on your server, which means other parts of your site will work faster. It also means that your site can handle many, many more visitors!

How to Implement Dynamic Caching on HostGator
Dynamic caching can be implemented either with a third-party plugin or on the server. Having it enabled on the server is much faster. Not many web hosts allow this, but HostGator offers server caching on their WordPress plans (see the product page). So if you use managed WordPress hosting with HostGator, just turn on the feature and you're good to go! Here's a complete review of HostGator WordPress, including all the special features! But even if you don't have WordPress-optimized hosting, you can implement dynamic caching with a plugin. I personally recommend WP Super Cache, which is an extremely popular WordPress plugin, is easy to use, and will get the job done without hassles.
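To make the dynamic caching idea concrete, here is a tiny conceptual sketch in Python. It is not WordPress code and not how WP Super Cache is implemented; it just illustrates the generate-once, serve-many idea with an assumed time-to-live.

import time

CACHE = {}          # path -> (rendered_html, timestamp)
TTL_SECONDS = 600   # hypothetical cache lifetime

def render_page(path):
    # Stand-in for WordPress building a page from the database (the slow part).
    time.sleep(0.5)
    return f"<html><body>Content for {path}</body></html>"

def serve(path):
    cached = CACHE.get(path)
    if cached and time.time() - cached[1] < TTL_SECONDS:
        return cached[0]                  # cache hit: no database or CPU work
    html = render_page(path)              # cache miss: generate once...
    CACHE[path] = (html, time.time())     # ...and store it for the next visitor
    return html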
Method 3: Use a CDN
A CDN is a "Content Distribution Network". Apart from dynamic pages, there are lots of things on your site that never change: images, JavaScript, and CSS. Well…almost never change. Because of this, it's best to deliver these resources from a server closest to your visitor. A CDN looks at the IP address of your visitor and sends static content from the server closest to that location, which means that people on opposite ends of the earth will receive the content equally fast. It's really quite a magical technology. As before, if you have WordPress hosting with HostGator, a CDN is available by default. But even without such a plan, you can use Cloudflare as your CDN. Despite being free, I think Cloudflare is one of the best CDNs on the market. HostGator has a tie-up with Cloudflare, which allows for easy integration. You can even do cool stuff like changing your nameservers for faster access. But that's beyond the scope of this tutorial.

Method 4: Deferring or Asyncing JavaScript
This one can be a bit tricky. Almost all websites use JavaScript. It's an essential part of the web, but it adds to the page load time. The key is to wait until the page has fully loaded and is visible before loading JavaScript. That's easier said than done, and each website works differently, which is why we need a plugin. The one I recommend is Autoptimize. It's open source and is almost universally recommended by WordPress gurus. After downloading and installing the plugin on WordPress, you can click the button to aggregate and asynchronously load JavaScript. The plugin has many options. Make sure to test them all so that your website's features work properly. Features like resizing tables are all enabled by JavaScript.

Method 5: Inlining and Deferring CSS
The CSS counterpart to Method 4, this refers to delaying the loading of CSS files until the page has downloaded and displayed. However, there's a catch. If we delay the loading of CSS, our page will look horrible and unstyled, since the CSS files are missing! The solution is called "inlining" above-the-fold CSS. What this means is that you need to isolate the CSS rules that apply to all visible elements when your page first loads, and then paste those rules directly into every page so that they're loaded instantly. Once your page has rendered, you can then load the CSS files at your leisure. So how do we do this?

Get the Critical CSS
This is pretty hard to do manually, so we're lucky that automatic online tools exist to do it for us! For example, there's an online tool from SiteLocity that's quite popular. Simply type in your URL, and it'll generate the critical above-the-fold CSS for you. Copy the rules that it gives you and use them in the next step.

Insert the CSS Inline
In Method 4, we used the tool Autoptimize. Just like before, there is a section in the main settings area to enter your critical CSS. Paste the CSS into the box and save your changes. Now when you load your page, all the important CSS will be downloaded immediately, but the external files will be loaded later, once the page has fully rendered. This makes your site blazing fast!

Method 6: Lazy Load Your Images
Images constitute the bulk of a web page's size. That's not surprising, since a single image can easily be hundreds of kilobytes. So it's important to only load those images when necessary. "Lazy loading" is the practice of downloading images only when the user has scrolled far enough to view them. Otherwise, if you have an image way down the article and the user leaves the page before reaching it, it's wasted bandwidth both for you and for the visitor, and it means your site was slowed down unnecessarily. Lazy loading is yet another feature that's difficult to implement manually. Luckily for us, the team behind WordPress.com has released a plugin called Jetpack. I highly recommend using it, since it has a ton of useful features that you can play around with, and lazy loading of images is one of them. It's just a single setting! Enable it and you're done. Now when you visit your page, the images won't be downloaded until you're far enough down to see them, at which point they'll appear as if by magic as your user scrolls. Neat, right?

Method 7: Removing Unnecessary Emoji Code
I didn't notice this myself until I combed through my HTML code.
WordPress adds a lot of extra emoji code to every page in order to render smiley faces and emojis. It's a useful feature, but it's a lot of wasted code, and it's loaded every single time. Luckily, the Autoptimize plugin that we saw earlier has a way to remove it in the "Extra" tab. Click that option, save your changes, and you're done! No more emoji code. The idea is to keep your WordPress installation neat and clean, without any unnecessary junk. The seven methods outlined here are a mix of server-level and page-level optimizations. Together, they should put your site on a fast track to higher rankings and better experiences for your visitors. Find the post on the HostGator Blog

How to Develop a Social Media Approval Process for Your Company

Social Media Examiner -

Do you work with a social media team? Wondering how to create a social media content approval process? In this article, you’ll learn how to set up a workflow to manage, schedule, and publish pre-approved social media content. #1: Share Social Media and Brand Assets With Your Team in One Place Whether you’re a rookie […] The post How to Develop a Social Media Approval Process for Your Company appeared first on Social Media Marketing | Social Media Examiner.

The 1% Errors that Kill your Freelance Business

Liquid Web Official Blog -

When you start a business, you’re buoyed by dreams. Of course, the business will be successful. You know you’re great at your job because you’ve been told that before. You’re a technical or design genius, and you’re just waiting to be able to work for yourself and have some more freedom. Unfortunately, for many beginning freelancers, there is a big wake up call coming. For me, there were a number of small things that almost killed my business, some of them many times over the time I’ve been working for myself. Business Killing Errors Let’s start by looking at the errors, then I’ll show you what the solutions to these business problems are so that you can run a successful freelance business. Too Much Chasing When I started my business I determined I’d make 10 solid contacts a day to keep my business running. When you’re starting, that’s the type of action it’s going to take. Unfortunately, most freelancers spend far too long talking to anyone who has a wallet and a pulse. It’s not about sending prospects away because you don’t want to work for the prices they want to pay, although that is part of it. Every client you finish working with should inform you more about the projects you do best and enjoy the most so that you can start taking more of those projects instead of spreading yourself thin across projects where you can’t bring high value. A side effect of this is that freelancers who continually chase every possible client are often undercharging for their services. When you chase everyone that comes knocking on your door, it’s far too easy to get into a race for the bottom as you try to win every possible contract. Subscribe to the Liquid Web weekly newsletter to get more Web Professional content like this sent straight to your inbox. Skimming Client Correspondence Once you’ve won a project, it’s time to start selling yourself and the value you bring all over again. This time, you’re not trying to convince a prospect to become a client, you’re trying to show a client that they made the right decision in hiring you. As you do this, make sure you carefully read every piece of correspondence your client sends you. More than once I’ve gotten myself into trouble when I didn’t read every little part of a note in Trello. I’ve ended up answering part of the question, and the client has to ask again to make sure that I answer the whole question. If you’re not careful when you address emails and notes from your clients, it’s easy to make yourself look unprofessional as you make your client do twice the work they should have to do. Too Much Freedom In my first few months working for myself, I would get up around 8am, eat breakfast, and check email. Then, I would walk the dog for an hour around 10am and have lunch at 12pm for an hour. After lunch, I’d realize I didn’t do any work to move client projects forward so I’d try to put in a flurry of work after 1pm. Almost invariably, I’d look up after what felt like a long work session to realize that it was only 2:30pm and I had checked social media a bunch instead of working. Many days in those first few months would end with maybe an hour or two billed to a client, but I had a nice tan from walking the dog. The opposite of this is also bad for you. Working every second of the day isn’t healthy. If you’re answering emails all night and on weekends, or if you’re diving into code for clients with every minute you can spare, you’re on track for burnout. 
Keep reading and I’ll show you how I schedule my days to balance work and rest so that I can be more productive than most. Late Delivery The latest I’ve ever delivered a project is a year late. Wait, it was 11 months, that makes a difference right? No, it doesn’t and I’m lucky my client was gracious and that I had worked with them well for years previously. We still work together, only because my client is gracious and because I’ve been delivering regularly for a few years again. Did you know that about 68% of software projects fail? Out of that 68%, half of them either take 180% more time to deliver or produce less than 70% of the intended functionality. The fact is that even 2 days late is late and if you do this regularly, your business isn’t going to survive. Keep reading to see how I manage projects so that they deliver on time. You Think Your Clients Will Remember You It’s easy to think that because you delivered a great project to a client, they’ll remember you the next time they have work to do. Unfortunately, this isn’t the truth. I can’t count the number of times in my 11 years of building WordPress sites I’ve looked at a client site two years later only to find that it’s totally changed and I had no idea. Many clients will work with whatever developer or designer is currently top of mind. This can be fixed by building a good follow up system. You Spend Too Much I love new shiny stuff. As I write this, DJI came out with a new action camera to compete with my GoPro Hero 7 Black, and boy do I want to purchase it. I don’t need it, but it’s pretty dang cool and the DJI Action Cam is cheaper than that GoPro. But I already own a GoPro Hero 7 Black, and there really isn’t a reason to purchase the DJI camera.  When I started my business, I would already have an order placed for the new DJI camera. It wouldn’t have mattered that I didn’t have the money. I would see some new shiny piece of technology and I’d order it for “work” reasons. More than once, I spent the money I needed to use to pay myself on something cool just because it was new and cool. Luckily, I’ve solved that problem for myself, despite still loving to purchase new shiny stuff. Fixing These Business Killing Errors Those are the big problems that kill businesses and if you read them and see yourself in them, remember I’ve made each and every mistake listed. Some of them more than once. Let me tell you about the systems and processes I use to help me not fall into these business-killing traps. Start Vetting Prospects The first problem was too much chasing prospects, and this is fixed by building a client vetting process. The way to start this is to establish a few ground rules about working with you. My process starts with a set of questions that you must answer if you want to work with me. I’ve shared the exact initial email I send every prospect. Book a Call Once a prospect has answered those questions, the next step is to book a call with me. No, I don’t work with first-time clients without this call. For specific reasons outlined below, I only book these calls on Friday before noon. Yes, some prospects don’t like this and choose not to work with me. To me, this means that we weren’t a good fit because any calls during the project will also take place on Friday before noon. Far from trying to be belligerent about calls, as we’ll talk about in a minute, I do this because I schedule the rest of my week for work on the projects I currently have on my plate. 
Collaborate on a Proposal The next step is producing a written proposal for a prospect. I start by writing out the initial draft, and then we work on it together. If my prospects aren’t up for a bit of collaborative work on a proposal, I bow out of the running.  I only do collaborative proposals because it’s a great test to work together before anything has been signed. In early May of this year, we got to this point with one particular client and working together on the proposal showed me that the client wasn’t thoroughly reading my emails. I learned this as they asked questions during the week for things that were clearly spelled out in the proposal at their original request. Without this step of working together on the proposal, I would have headed into a project that was way more management than I desired. Land the Project Once you’ve put this work in with a prospect, you’re highly likely to land the work. The prospects that were mostly “kicking the tires” bowed out earlier because of your requirements. A great side effect of a solid client vetting process is that it lets prospects know that they’re dealing with a professional that has an established process that works to deliver winning projects. By taking the time to talk with prospects two or three times, you’ll be better equipped to understand their problems so that you can solve them well. Both of these things show your prospects that you’re a high-value freelancer, so you can charge more. When I started to implement this process I almost doubled my rates in a few months. Simply because I showed that I was a professional, my prospects started to treat me like one. This whole process isn’t about weeding out certain clients as much as it’s about finding ideal clients where you can truly deliver high value. It’s about finding clients you can work with for years. Most of my current clients have been with me for 7+ years and while we’ve had rocky roads, we continue to work together because we treat each other as professionals and trust that we will continue to act professionally together. Read, Write, Then Read Again Another key aspect of showing that you’re a professional is being thorough with your client correspondence. This was something I struggled with early in my business and it was harming me because I looked unprofessional and used so much time communicating with clients. To combat this, I added a few rules to my client correspondence. First, I never reply inline to a client. If I’m replying to an email, I open Drafts (iOS or macOS) to write the reply. If I’m replying to something in Trello, I open up Drafts to write the reply. When I started I was so strict with this that even if my client had a six-word question and my answer was yes/no I would write it in another application. Second, read the customer request. Then write the reply in another app. Next, read the customer request in detail again and physically point to the sentence or paragraph that addresses their question. As you do this, make sure to read it again to ensure that the paragraph you’re pointing to does indeed answer the question completely. Third, copy and paste the answer into the email or project management system and… do step two again. While it may seem like a lot of work, it’s worth it. I first heard about pointing and calling in a book that talked about the Japanese transit system doing it so that workers didn’t miss a step. 
Yes, I felt silly, but it stopped me from wasting my client’s time and helped me to reply professionally to their requests the first time. Now, I’m more likely to hear that my responses are the most complete responses that a customer has ever had instead of getting a repeat of a part of the question I didn’t address. Schedule Work Time (Too Much Freedom) I’ve already alluded to this, but I schedule my day and I stick to my schedule. I start by scheduling out my week in a notebook, which you can see below. I start each day at around 5:30-6:00 am by reading for an hour. Then I write and do client work for around two hours. I follow this working block with a 2-3 hour break. Some days I run. Some days I take a kid to figure skating and some days my wife runs while I hang out with our kids who aren’t in school. Then from around noon until 3 pm, I get back to work and focus on only work. During this window, I’ll look at email or other tasks that are less mentally demanding than my morning work. Each section of this schedule is intentional and specific. In the book When by Daniel Pink, we learn that it doesn’t matter if you’re a morning person or a night person, you focus best shortly after you wake up. Morning people like me do it early, night people have better focus later in the day because they got up later in the day. Once you’re through your first peak of focus, you hit a trough where you don’t focus very well. This is my mid-day break. Later in the day, you hit a second peak of focus before slowly declining until you go to bed. I use my second peak for less cognitively demanding tasks like dealing with email and basic project management. When I started my business, I figured every minute was the same so I didn’t plan different types of tasks for different times in the day. I’d often start with email, using my best brain time on a task that wasn’t demanding. Then late in the day, I’d try to dig deep into a client problem and wonder why it was so much work? One of the keys to scheduling your work time well is cutting distractions. I do most of my work on an iPad with all the notifications turned off. Instead of huge screen real estate, I have a single window to view. I don’t have overlapping applications so I simply focus on the task at hand instead of looking around at which window on my screen is the most interesting currently. I also don’t put my phone on my desk. Yes, it’s in my office, but it’s on a shelf where I can’t see it and it’s set to only ring if my wife calls me. No, it doesn’t even make a sound when my wife sends me a text message. When I’m working, I focus on work. When I’m not working, I don’t let work creep into my life. The final way to make sure that you focus on your work is to start tracking your time. I track every minute I’m in the office, every day of the week. I can tell you that I spent 20 minutes before writing this article adjusting a few things on my desk so that my monitor stand could be mounted exactly where I want it mounted. I use Cushion for this, but most billing software has some form of time tracking built-in. At the very least, they have an integration with Toggl which does awesome with time tracking. While it may seem like a burden to track your time, it’s the only way you’re going to be able to find problems in the work you’re doing. I color-code all the work for my business in red. I know that if I have too much red in a day, I didn’t directly earn any money because I wasn’t working for clients. 
At the end of every week, I take a quick count of the hours I worked focusing on the number of hours I worked for clients so that I can be sure that I’m earning enough to pay the bills. Late Delivery Once you have a system that lets you get enough focused work hours in the day, you’re well on your way to delivering projects on time. You’re also going to need some type of project management system. If you’re on the lookout for that, check out my previous article on Project Management Basics for Freelancers. Those aren’t the only pieces you need though. You need to have a system to regularly review your projects and all your tasks so that you don’t drop any balls. For me, the most productive hour of the week is my Friday shutdown routine. The times I miss this shutdown, I can measure a 10-15% drop in my productivity during the following week. That’s not the only shutdown routine I have though. I have a daily process I use to check in with my projects and plan the work that needs to get done the following day. You’re not going to be surprised to hear that I also have a system to transition months so that I have a handle on the projects that should be the focus of each month. Let’s start by walking through my monthly routine because the other systems require information from my monthly routine. Each month, I plan an hour to survey the last month of notes in my Bullet Journal and look ahead based on the future log of my Bullet Journal. Armed with this information I take every project I have to work on in a month and start to list it out. The goal here is to have a reference for each week when I’m doing my planning. When I came to the time I had planned to write, I looked up the article that we had already planned because it was written in my monthly list. My monthly list trickles down into my weekly list, which is generated every Friday in about 30 minutes as I wind down from the week. The goal of the weekly shut down is to look at the week that has passed and see if anything got missed. If something got missed, I need to plan to fit it in the following week. I also use this time to build out a plan for the week so that at any given moment I know what type of work I should be doing. I start this weekly plan with my runs, then follow it with any family commitments that take up time I could be working. You can see skating listed here as something I need to take into account as I plan the week. Then I look at the projects and tasks that need to be accomplished in a week and slot them into times when I can do them. At any given moment of the week, I know what I should be working on. It’s possible that I’m not working on that item, but if I don’t start with a plan I spend a bunch of time trying to decide what I should be working on instead of doing something to push my work forward. My daily routine starts with 30 minutes left in the day. I stop my work and look through what was planned for the day. If I got everything done, I move on to the next day and make sure I have everything I need to get work done. If I’ve missed something in the day, I evaluate the time left in the week to see where I can get the item in. When I said above that I schedule everything down to my call time, you can see I was serious. Build a Follow Up Process It’s unfortunate, but many clients will work with whatever development shop or freelancer is most recent in memory. That means you need to have a long term follow up process so that you stay top of mind for your clients. 
In my business, I did a small project for a client in 2011. It was less than $1k, but I spent the next six years following up with them, and they turned into a $50k client in year six. There was more than one year where I only heard back from them once despite my regular communication. I didn't do anything special, I just dropped them into my follow-up process. While there is a lot of great software out there, you don't need to use it. I have used Contactually and Pipedrive in the past, but I've centralized everything in my Bullet Journal now. My process is as simple as reaching out to a prospect I still feel is worth reaching out to every three months. When I send them an email, I mention a resource that may be relevant to their business and then I bump them forward three months in the Future Log of my Bullet Journal. I got away from Contactually and Pipedrive because they ended up turning into huge lists of people I hadn't contacted in a long time, as they both kept bringing forward anyone I had talked to for any reason. When it comes to following up with prospects, I start with a weekly email for 3 weeks to see if they're ready to move forward with a project. After 3 weeks, I email them every month for a quarter, and then I move them to the quarterly follow-up. As long as I've heard from someone in the last calendar year, I keep following up until they tell me to stop. You may look at this and think it's not manageable, but I can get through all the follow-up I have to do in a week in 30 minutes, usually on Fridays as I'm between calls. If you can't set aside those 30 minutes to check in with old clients and prospects, then it's going to be hard to keep your business running over the long term.

Have a Business Budget
One of the final things that almost sank my business was budgeting. From the beginning of my business, when I took my wife on a "date" and just happened to be near a client where I could pick up a check that allowed me to pay us, all the way to 2018 when there was a bunch of shiny tech I wanted, I've made some dumb mistakes with my business. Luckily I read Profit First by Mike Michalowicz, adopted the system in the middle of 2018, and I couldn't have made a better decision. The basics of Profit First are that on the 10th and 25th of the month you deal with your finances. You take all the income you've had come in and divide it up based on percentages. Here are the percentages I use now:

Taxes: 15%
Pay: 60%
Expenses: 20%
Profit: 1%
Extra: 4%

Those numbers mean that I put 15% away for taxes, and since I'm a bit of a spender, I send that money directly to my tax account. I put aside 60% of everything I earn to pay myself. Another 20% goes towards any business expenses. I always have a profit because I put 1% away in a profit account, and that extra 4% heads to the government as I pay off some tax debt. I did mention that I made some bad financial decisions, and being in debt on taxes is the result of some of those. The great thing about this system is that I can purchase anything I want with the 20% for expenses. I don't need to feel bad about spending that on things that the business needs. By the same token, if there isn't money in the expense account, it's time to trim the fat in the business. Out of the 60% listed above, I pay myself a set amount on the 10th and the 25th. It's not everything I have in the account, and if I don't have enough to pay myself what is expected, we have to deal with less. The best part of the whole thing is the 1% profit that you put aside no matter what.
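To make the split concrete, here is a quick sketch using the percentages above; the monthly income figure is just an invented example.

# Hypothetical month of income, divided with the Profit First percentages above.
income = 12_500.00

split = {
    "taxes":    0.15,
    "pay":      0.60,
    "expenses": 0.20,
    "profit":   0.01,
    "extra":    0.04,   # the slice that goes toward the tax debt mentioned above
}

for account, pct in split.items():
    print(f"{account:>8}: ${income * pct:,.2f}")

# Output: taxes $1,875.00, pay $7,500.00, expenses $2,500.00,
# profit $125.00, extra $500.00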
Every quarter, I get to spend that on whatever seems like it would be cool for the family. We've put it towards a babysitter and dinner out, or a bicycle. The only rule is that I can't spend it on anything that's for the business. Back when I adopted Profit First, times were tight. By adopting this system I didn't start earning more, but there was instant relief in my stress about finances, and as I got less stressed, I made better business decisions, which in turn helped the business become more profitable again. I recommend Profit First to every freelancer I talk to, and they've all been surprised at how much better it makes them feel. The post The 1% Errors that Kill your Freelance Business appeared first on Liquid Web.

Building a GraphQL server on the edge with Cloudflare Workers

CloudFlare Blog -

Today, we're open-sourcing an exciting project that showcases the strengths of our Cloudflare Workers platform: workers-graphql-server is a batteries-included Apollo GraphQL server, designed to get you up and running quickly with GraphQL.

[Image: Testing GraphQL queries in the GraphQL Playground]

As a full-stack developer, I'm really excited about GraphQL. I love building user interfaces with React, but as a project gets more complex, it can become really difficult to manage how data flows inside of an application. GraphQL makes that really easy - instead of having to recall the REST URL structure of your backend API, or remember when your backend server doesn't quite follow REST conventions - you just tell GraphQL what data you want, and it takes care of the rest.

Cloudflare Workers is uniquely suited to hosting a GraphQL server. Because your code is running on Cloudflare's servers around the world, the average latency for your requests is extremely low, and by using Wrangler, our open-source command line tool for building and managing Workers projects, you can deploy new versions of your GraphQL server around the world within seconds.

If you'd like to try the GraphQL server, check out a demo GraphQL playground, deployed on Workers.dev. This optional add-on to the GraphQL server allows you to experiment with GraphQL queries and mutations, giving you a super powerful way to understand how to interface with your data, without having to hop into a codebase.

If you're ready to get started building your own GraphQL server with our new open-source project, we've added a new tutorial to our Workers documentation to help you get up and running - check it out here!

Finally, if you're interested in how the project works, or want to help contribute - it's open-source! We'd love to hear your feedback and see your contributions. Check out the project on GitHub.
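As a rough illustration of what "just tell GraphQL what data you want" looks like from a client, here is a hedged Python sketch that POSTs a query to a deployed Workers endpoint over standard GraphQL-over-HTTP. The URL and the query fields are placeholders; they depend on the schema you actually deploy.

import requests

GRAPHQL_URL = "https://your-graphql-server.example.workers.dev"  # placeholder URL

# Placeholder query; the available fields depend on your schema.
query = """
query {
  items {
    id
    name
  }
}
"""

# GraphQL over HTTP is a plain POST with a JSON body containing the query.
response = requests.post(GRAPHQL_URL, json={"query": query}, timeout=10)
response.raise_for_status()
print(response.json().get("data"))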

How Hackers Can Use Your Expired Domains to Steal Data

HostGator Blog -

The post How Hackers Can Use Your Expired Domains to Steal Data appeared first on HostGator Blog. When businesses and blogs rename or merge, old domains sometimes get left behind. Security researchers say expired domains can put data at risk. Scammers may set up fake shops on expired domains and use them to steal credit card data from unwary bargain hunters. Or they may target email accounts linked to the domain to scam clients, steal company secrets, and break into employees' shopping and travel accounts. Prevention is as easy as renewing and protecting all your domains—but that's not always simple, especially if you own a lot of domains. Here's what you need to know about your risks when a domain expires and how to keep yours current.

What Happens When Domains Expire?
The first thing you need to know is that when domains expire, they're available to anyone who wants to pay to register them. They're also easy to find online, through sites that offer expired domain name searches and lists of recently expired domains to bid on. Some buyers pick up expired domains for legitimate projects. Others are not so ethical.

Your expired domain could end up as a fake online store
Criminal gangs snap up expired domains to turn them into phishing sites. That damages the brands that lose their domains, the brands impersonated by the scammers, and the shoppers who fall for the scam. Security blogger Brian Krebs profiled a photographer whose old portfolio domain was turned into a fake athletic shoe store after her registration lapsed. Thieves used it to steal credit card data for resale on the dark web. For the photographer, the damage went beyond the loss of her website. She had no way to access social media accounts that were linked to her domain email address, because the scammers changed her passwords. Now the domain that used to host her portfolio redirects to the official adidas website, after adidas and Reebok sued the scammers who exploited her expired domain along with hundreds of others.

Your expired domain could let data thieves into your business
Last year, security researchers with Australian cybersecurity firm Iron Bastion proved that registering abandoned business and law firm domains could give criminals access to insider data. By setting up a catch-all email forwarding service for domains they re-register, criminals can access confidential client data and emails. They can run scams using this information or sell it on the dark web. They can also take over former employees' social media, banking, and professional accounts by changing the passwords linked to the old domain's email addresses.

What should you do with domains you don't use anymore?
Security experts say the best way to safeguard your old domains is to keep renewing them, even if you're not currently using them. Then you should close the email accounts associated with those domains and unlink those email accounts from alerts sent by banks, airlines, and other services that handle sensitive (and valuable) information. If you must let your old domains go, you'll need to be thorough about updating any online accounts you and your employees set up using old domain email addresses. Then you'll need to close those email accounts. In either case, it's wise to let your customers and vendors know about your change of email address. Give them some advance notice, ask them to whitelist your new email address, and then ask them to delete the old address when you've closed that account.
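Since all of these risks start with a lapsed registration, it can also help to keep an eye on expiry dates programmatically. Here is a small, hedged sketch that queries the public WHOIS service over TCP port 43; the server and the "Registry Expiry Date" field apply to .com and .net domains, while other TLDs use different registries and field names.

import socket

def whois_expiry(domain, server="whois.verisign-grs.com"):
    # Plain WHOIS protocol: send the domain name, read the response until close.
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall(f"{domain}\r\n".encode())
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    for line in b"".join(chunks).decode(errors="replace").splitlines():
        if "Registry Expiry Date" in line:
            return line.split(":", 1)[1].strip()
    return None

print(whois_expiry("example.com"))  # replace with a domain you own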
For any email account on any domain, it’s always a good idea to set up two-factor authentication (2FA). By requiring a code from an SMS message or an authenticator app, you reduce the risk of someone maliciously changing your password on your email account and other accounts you set up with your email address.  And speaking of passwords, don’t make it easy for hackers to guess or brute-force yours. Every email address on your domains should have a strong password that’s not used for any other accounts.  How can you keep all your domains current and safe? Follow these recommendations from domain security experts to keep your domains in your possession. Give your domain registrations fewer chances to lapse. Start by registering or renewing for the longest amount of time you can, like three years instead of one. Then set your registrations to auto-renew.  Keep your registration information up to date. Update your domain registration accounts when your email address, phone number, or other contact information changes. Changed credit cards or online payment services? Make sure you change your domain payment information, or your auto-renewals will fail. Keep your registration information private. Domain privacy protection costs a few dollars a year, and it’s worth it. If you add domain privacy when you register your domain, your registrar’s contact information is listed in the WHOIS public database. Without domain privacy, your name, email address, and other personal data are on display. That can put you at risk for spam, scams, and harassment.  Lock your domains. Domains must be unlocked when you’re transferring them to a new host. Otherwise, lock them to keep scammers from transferring them to a different web host without your consent.  In HostGator’s Customer Portal, you can lock your domains for free. Navigate to Domains in the left sidebar. Under Manage Domains, you have the option to lock all your domains at once. You can also click the More button for any of your domains to lock one at a time. Under Domain Overview, click the Change link next to Locking. That takes you to Domain Locking. Then you just move the switch to Locking ON and click Save Domain Locking. Now your domain is protected against theft by unauthorized transfer. And with auto-renew in place and good cybersecurity practices, your domains are safe from expiration and exploitation. Ready for a new domain? HostGator now offers new customers a year of free domain registration with selected hosting packages and top-level domains. Sign up for 12 or more months of hosting, register a .com, .net, or .org top-level domain, and get the first year’s domain registration for free. See complete offer details here.  Find the post on the HostGator Blog

Adding Scale to Digital Marketing Expertise

WP Engine -

Geek Powered Studios (GPS) is a full-fledged, comprehensive digital marketing agency based in Austin, TX. Founded in 2009, GPS has amassed an impressive client list that spans the U.S., and as they’ve grown over the past decade, WP Engine has been along for much of the ride, providing service and support for the agency’s many… The post Adding Scale to Digital Marketing Expertise appeared first on WP Engine.

Using callback URLs for approval emails with AWS Step Functions

Amazon Web Services Blog -

Guest post by Ben Kehoe, Cloud Robotics Research Scientist at iRobot and AWS Serverless Hero.

AWS Step Functions is a serverless workflow orchestration service that lets you coordinate processes using the declarative Amazon States Language. When you have a Step Functions task that takes more than fifteen minutes, you can't use an AWS Lambda function—Step Functions provides the callback pattern for this situation. Approval emails are a common use case in this category. In this post, I show you how to create a Step Functions state machine that uses the sfn-callback-urls application for an email approval step. The app is available in the AWS Serverless Application Repository. The state machine sends an email containing approve/reject links, and later a confirmation email. You can easily expand this state machine for your own use cases.

Solution overview
An approval email must include URLs that send the appropriate result back to Step Functions when the user clicks on them. The URLs should be valid for an extended period of time, longer than presigned URLs—what if the user is on vacation this week? Ideally, this doesn't involve storage of the token and the maintenance that requires. Luckily, there's an AWS Serverless Application Repository app for that! The sfn-callback-urls app allows you to generate one-time-use callback URLs through a call to either an Amazon API Gateway endpoint or a Lambda function. Each URL has an associated name, whether it causes success or failure, and what output should be sent back to Step Functions. Sending an HTTP GET or POST to a URL sends its output to Step Functions. sfn-callback-urls is stateless, and it also supports POST callbacks with JSON bodies for use with webhooks.

Deploying the app
First, deploy the sfn-callback-urls serverless app and make note of the ARN for the Lambda function that it exposes. In the AWS Serverless Application Repository console, select Show apps that create custom IAM roles or resource policies, and search for sfn-callback-urls. You can also go to the application directly. Under application settings, select the box to acknowledge the creation of IAM resources. By default, this app creates a KMS key. You can disable this by setting the DisableEncryption parameter to true, but first read the Security section in the Readme to the left. Scroll down and choose Deploy. On the deployment confirmation page, choose CreateUrls, which opens the Lambda console for that function. Make note of the function ARN because you need it later.

Create the application by doing the following:
1. Create an SNS topic and subscribe your email to it.
2. Create the Lambda function that handles URL creation and email sending, and add the proper permissions.
3. Create an IAM role for the state machine to invoke the Lambda function.
4. Create the state machine.
5. Start the execution and send yourself some emails!

Create the SNS topic
In the SNS console, choose Topics, Create Topic. Name the topic ApprovalEmailsTopic and choose Create Topic. Make a note of the topic ARN, for example arn:aws:sns:us-east-2:012345678912:ApprovalEmailsTopic. Now, set up a subscription to receive emails. Choose Create subscription. For Protocol, choose Email, enter an email address, and choose Create subscription. Wait for an email to arrive in your inbox with a confirmation link. It confirms the subscription, allowing messages published to the topic to be emailed to you.
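If you prefer scripting these console steps, here is a hedged boto3 sketch that creates the same topic and email subscription; the email address is a placeholder.

import boto3

sns = boto3.client("sns")

# Create the topic and note its ARN for the later steps.
topic = sns.create_topic(Name="ApprovalEmailsTopic")
topic_arn = topic["TopicArn"]
print("Topic ARN:", topic_arn)

# Subscribe an email address; SNS then sends a confirmation link that must be
# clicked before published messages are delivered.
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="email",
    Endpoint="you@example.com",   # placeholder address
)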
Create the Lambda function
Now create the Lambda function that handles the creation of callback URLs and the sending of emails. For this short post, create a single Lambda function that completes two separate steps: creating the callback URLs, and sending the approval email (and later the confirmation email). There's an if statement in the code to separate the two, which requires the state machine to tell the Lambda function which state is invoking it. The best practice here would be to use two separate Lambda functions.

To create the Lambda function in the Lambda console, choose Create function, name it ApprovalEmailsFunction, and select the latest Python 3 runtime. Under Permissions, choose Create a new role with basic permissions, then Create. Add permissions by scrolling down to Configuration and choosing the link to see the role in the IAM console.

Add IAM permissions
In the IAM console, select the new role and choose Add inline policy. Add permissions for sns:Publish on the topic that you created and lambda:InvokeFunction on the sfn-callback-urls CreateUrls function ARN. Back in the Lambda console, use the following code in the function:

import json, os, boto3

def lambda_handler(event, context):
    print('Event:', json.dumps(event))

    # Switch between the two blocks of code to run
    # This is normally in separate functions
    if event['step'] == 'SendApprovalRequest':
        print('Calling sfn-callback-urls app')
        input = {
            # Step Functions gives us this callback token
            # sfn-callback-urls needs it to be able to complete the task
            "token": event['token'],
            "actions": [
                # The approval action that transfers the name to the output
                {
                    "name": "approve",
                    "type": "success",
                    "output": {
                        # watch for re-use of this field below
                        "name_in_output": event['name_in_input']
                    }
                },
                # The rejection action that names the rejecter
                {
                    "name": "reject",
                    "type": "failure",
                    "error": "rejected",
                    "cause": event['name_in_input'] + " rejected it"
                }
            ]
        }

        response = boto3.client('lambda').invoke(
            FunctionName=os.environ['CREATE_URLS_FUNCTION'],
            Payload=json.dumps(input)
        )
        urls = json.loads(response['Payload'].read())['urls']
        print('Got urls:', urls)

        # Compose email
        email_subject = 'Step Functions example approval request'
        email_body = """Hello {name},

Click below (these could be better in HTML emails):

Approve: {approve}

Reject: {reject}
""".format(
            name=event['name_in_input'],
            approve=urls['approve'],
            reject=urls['reject']
        )
    elif event['step'] == 'SendConfirmation':
        # Compose email
        email_subject = 'Step Functions example complete'
        if 'Error' in event['output']:
            email_body = """Hello,

Your task was rejected: {cause}
""".format(
                cause=event['output']['Cause']
            )
        else:
            email_body = """Hello {name},

Your task is complete.
""".format(
                name=event['output']['name_in_output']
            )
    else:
        raise ValueError

    print('Sending email:', email_body)
    boto3.client('sns').publish(
        TopicArn=os.environ['TOPIC_ARN'],
        Subject=email_subject,
        Message=email_body
    )
    print('done')
    return {}

Now, set the environment variables TOPIC_ARN and CREATE_URLS_FUNCTION to the ARNs of your topic and the sfn-callback-urls function noted earlier. After updating the code and environment variables, choose Save.

Create the state machine
You first need a role for the state machine to assume that can invoke the new Lambda function. In the IAM console, create a role with Step Functions as its trusted entity. This requires the AWSLambdaRole policy, which gives it access to invoke your function. Name the role ApprovalEmailsStateMachineRole. Now you're ready to create the state machine.
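If you script your IAM setup instead of using the console, the same role can be created with boto3. This is a hedged sketch: the trust principal is the Step Functions service, and the AWSLambdaRole managed policy ARN is written from memory and worth double-checking.

import json
import boto3

iam = boto3.client("iam")

# Allow Step Functions (states.amazonaws.com) to assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "states.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

role = iam.create_role(
    RoleName="ApprovalEmailsStateMachineRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Attach the managed policy that grants lambda:InvokeFunction.
iam.attach_role_policy(
    RoleName="ApprovalEmailsStateMachineRole",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaRole",
)

print(role["Role"]["Arn"])   # needed when creating the state machine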
In the Step Functions console, choose Create state machine, name it ApprovalEmails, and use the following definition:

{
    "Version": "1.0",
    "StartAt": "SendApprovalRequest",
    "States": {
        "SendApprovalRequest": {
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke.waitForTaskToken",
            "Parameters": {
                "FunctionName": "ApprovalEmailsFunction",
                "Payload": {
                    "step.$": "$$.State.Name",
                    "name_in_input.$": "$.name",
                    "token.$": "$$.Task.Token"
                }
            },
            "ResultPath": "$.output",
            "Next": "SendConfirmation",
            "Catch": [
                {
                    "ErrorEquals": [ "rejected" ],
                    "ResultPath": "$.output",
                    "Next": "SendConfirmation"
                }
            ]
        },
        "SendConfirmation": {
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke",
            "Parameters": {
                "FunctionName": "ApprovalEmailsFunction",
                "Payload": {
                    "step.$": "$$.State.Name",
                    "output.$": "$.output"
                }
            },
            "End": true
        }
    }
}

This state machine has two states. It takes as input a JSON object with one field, "name". Each state is a Lambda task. To shorten this post, I combined the functionality for both states into a single Lambda function. You pass the state name as the step field to allow the function to choose which block of code to run. With the best practice of separate functions for separate responsibilities, this field would not be necessary.

The first state, SendApprovalRequest, expects an input JSON object with a name field. It packages that name along with the step and the task token (required to complete the callback task), and invokes the Lambda function with it. Whatever output is received as part of the callback, the state machine stores it under the output field. That output then becomes the input to the second state. The second state, SendConfirmation, takes that output field along with the step and invokes the function again. The second invocation does not use the callback pattern and doesn't involve a task token.

Start the execution
To run the example, choose Start execution and set the input to a JSON object that looks like the following:

{
  "name": "Ben"
}

You see the execution graph with the SendApprovalRequest state highlighted. This means it has started and is waiting for the task token to be returned. Check your inbox for an email with approve and reject links. Choose a link and you get a confirmation page in the browser saying that your response has been accepted. In the Step Functions console, you see that the execution has finished, and you also receive a confirmation email for the approval or rejection.

Conclusion
In this post, I demonstrated how to use the sfn-callback-urls app from the AWS Serverless Application Repository to create URLs for approval emails. I also showed you how to build a system that can create and send those emails and process the results. This pattern can be used as part of a larger state machine to manage your own workflow. This example is also available as an AWS CloudFormation template in the sfn-callback-urls GitHub repository.

Ben Kehoe
