Industry Buzz

The Second Edition of Our “Learn User Support” Workshop Is Open for Signups

News -

Back in January, we partnered with Support Driven and launched the first version of the Learn User Support Workshop, which helps women in the Asia-Pacific region develop the skills they need to succeed in a technical support role. We had 24 students enrolled in our first cohort.

Today, we’re happy to share that the next edition of the Learn User Support Workshop will launch on August 19, 2019. The course is entirely web-based — there’s no need to travel anywhere to attend — and completely free. So if you identify as a woman, are based in the Asia-Pacific region, and are serious about a career in user support, this might be a perfect match for you.

Building a better, bigger workshop

The strong positive feedback we received from our students earlier this year, as well as the increasingly long waitlist, inspired us to improve the course content and to design it to accommodate more learners.

What topics will we cover?

As a participant, expect to learn how to:

- Develop your own support philosophy.
- Build successful troubleshooting strategies.
- Manage challenging interactions.
- Implement productivity tools.
- Optimize your approach to applying and interviewing for jobs in support.

This six-module course will start on August 19 and run through September 29. We will publish a new module every Monday, and each learner will have one week to complete it. We’ll include lots of hands-on work, and by the end of the course each participant will also develop a résumé and a portfolio site on WordPress that they can then share with potential employers.

Meet your friendly organizers

As for your teachers, the people who lead this workshop are Automattic Happiness Engineers — master communicators with deep, wide-ranging experience in distributed technical support. Automattic, which offers the workshop, is a fully distributed company — there are more than 930 full-time Automatticians spread across 70 countries, speaking 88 languages.
We serve users from every corner of the world via products like Jetpack and WooCommerce, among others. As people who believe in the benefits of distributed work, we love helping remote professionals level up their skills. Our commitment to Diversity & Inclusion leads us to look for ways to make the tech sector more representative of the wide and varied world it serves. As a result, this virtual workshop will equip Asia-Pacific-based women who are — or want to become — support professionals with skills that are specifically tailored to the demands of remote work.

Are you ready to sign up? Just click below: SIGN UP NOW!

We have 20 slots for this cohort, available on a first-come, first-served basis. We will get in touch with you via email if you are selected for the course. If you know anyone who might be a good fit, feel free to share this post with them!

People of WordPress: Amanda Rush

News -

You’ve probably heard that WordPress is open source software, and may know that it’s created and run by volunteers. WordPress enthusiasts share many examples of how WordPress changed people’s lives for the better. This monthly series shares some of those lesser-known, amazing stories.

Meet Amanda Rush from Augusta, Georgia, USA. Amanda Rush is a WordPress advocate with a visual disability. She first started using computers in 1985, which enabled her to turn in homework to her sighted teachers. Screen reader technology for Windows was in its infancy then, so she worked in DOS almost exclusively. After graduating from high school, Amanda went to college to study computer science, programming with DOS-based tools since compilers for Windows were still inaccessible. As part of her computer science course of study, she learned HTML, which began her career in web development.

How Amanda got started with WordPress

Amanda began maintaining a personal website, and eventually began publishing her own content using LiveJournal. However, controlling the way the page around her content looked was hard, and she soon outgrew the hosted solution. So in 2005, Amanda bought her own domain, set up a very simple CMS for blogging, and started publishing there. She accepted the lack of design and the lack of easy customization because she wasn’t willing to code her own solution. Nor did she want to move to another hosted solution, as she liked being able to customize her own site as well as publish content.

Hebrew dates led her to WordPress

At some point, Amanda was looking for an easy way to display Hebrew dates alongside the Gregorian dates on her blog entries. Unfortunately, the blogging software she was using at the time did not offer customization options at that level. She decided to research alternative solutions and came across a WordPress plugin that did just that.
The fact that WordPress would not keep her locked into a visual editor, used themes to customize styling, and offered ways to mark up content immediately appealed to Amanda. She decided to give it a go.

Accessibility caused her to dive deeper

When the software Amanda used at work became completely inaccessible, she started learning about WordPress. While she was learning about this new software, Web 2.0 was introduced. The lack of support for it in the screen reader she used meant that WordPress administration was completely inaccessible. To get anything done, Amanda needed to learn to find her way in WordPress’ file structure.

Eventually Amanda started working as an independent contractor for the largest screen reader developer in the market, Freedom Scientific. She worked from home every day and hacked on WordPress after hours. Unfortunately, Amanda hit a rough patch when her job at Freedom Scientific ended. Using her savings, she undertook further studies for various Cisco and Red Hat certifications, only to discover that the required testing for these certifications was completely inaccessible. She could study all she wanted, but wasn’t able to receive grades to pass the courses. She lost her financial aid, her health took a turn for the worse, she was diagnosed with lupus, and she lost her apartment. Amanda relocated to Augusta, where she had supportive friends who offered her a couch and a roof over her head.

But Amanda refused to give up

Amanda continued to hack on WordPress through all of this. It was the only stable part of her life. She wanted to help make WordPress accessible for people with disabilities, and in 2012 joined the WordPress Accessibility Team. Shortly after that, she finally got her own place to live, and started thinking about what she was going to do with the rest of her working life. Listening to podcasts led her to take part in WordSesh, which was delivered completely online and enabled Amanda to participate without needing to travel.
She began to interact with WordPress people on Twitter, and continued to contribute to the community as part of the WordPress Accessibility Team. Things had finally started to pick up.

Starting her own business

In 2014, Amanda officially launched her own business, Customer Servant Consultancy. Since WordPress is open source, and becoming increasingly accessible, Amanda could modify WordPress to build whatever she wanted and not be at the mercy of web and application developers who know nothing about accessibility. And if she got stuck, she could tap into the community and its resources. Improving her circumstances and becoming more self-sufficient meant Amanda was able to take back some control over her life in general. She was able to gain independence and create her own business despite being part of the blind community, which has an 80% unemployment rate. In her own words:

We’re still fighting discrimination in the workplace, and we’re still fighting for equal access when it comes to the technology we use to do our jobs. But the beauty of WordPress and its community is that we can create opportunities for ourselves. I urge my fellow blind community members to join me inside this wonderful thing called WordPress. Because it will change your lives if you let it.

Amanda Rush, entrepreneur

This post is based on an article originally published on HeroPress, a community initiative created by Topher DeRosia. HeroPress highlights people in the WordPress community who have overcome barriers and whose stories would otherwise go unheard. Meet more WordPress community members over at HeroPress!

The Serverlist: Building out the SHAMstack

CloudFlare Blog -

Check out our seventh edition of The Serverlist below. Get the latest scoop on the serverless space, get your hands dirty with new developer tutorials, engage in conversations with other serverless developers, and find upcoming meetups and conferences to attend. Sign up below to have The Serverlist sent directly to your mailbox.

New gTLD Report – July 2019

Reseller Club Blog -

With .TOP surpassing .SITE and .ONLINE, July saw a slight change in the rankings as we march toward the second half of 2019. While 4 out of the 5 top new gTLDs are the same as in June, the new entrant this month is .SPACE, which is back on the list after a while! It now stands in fifth position with a substantial market share. In fact, it is noteworthy that July saw an overall increase of 40% in the total registration count of all the new gTLDs.

Let us first take a pictorial look at how the top 15 new gTLDs are placed:

New gTLD Report – July 2019 (Infogram)
*Registration Numbers Facilitated by ResellerClub

Let us now take a look at the top 5 new gTLDs in terms of registration count. Worth noting is the fact that the top 4 new gTLDs have also appeared in the top 5 in each of the last 3 months.

.TOP – .TOP was the largest contributor to the total number of registrations in July. It grabbed a 43% share of total registrations, a spike of 282% compared to the previous month. This is one new gTLD that has maintained consistency in adding registrations. The promo price of $0.99 has been a factor in its improved performance in the China market.

.SITE – .SITE secured second place with a 16% share of the total registration count. This is tremendous keeping in mind the promo price of $4.99.

.ONLINE – The promo price for .ONLINE was set at $6.99. In spite of a slightly higher promo price, it sits in third spot with 10% of the total registration count. The increase in registrations of this new gTLD can be credited to the global markets.
.XYZ – .XYZ held onto its fourth position. The promo price of $0.69 has been the reason for the surge in the global markets. Overall, this new gTLD contributed 8% of total registrations in July.

.SPACE – With a promo price of $1.99, .SPACE is back in the top 5 list and now assumes the fifth spot with a 3% share of the total registration count.

Registrations of the new entrant .SHOP saw a jump of 11%, helping it bounce back into the top 15. Along with this, .LIVE retained its seventh spot by contributing 2% of the total registration count. .STORE retained its ninth spot with a jump of 25% in its registrations, whereas .LIFE retained its tenth spot with a growth of 2%.

Here’s a quick glance at the exciting domain promos we’ve got lined up for the month of August:

- Help your customers build a website for their business with a .OOO extension at just $2.99
- Take your customers’ business online with a .SHOP domain at just $6.99

That’s all folks! Check out all our leading domain promos here and help your customers get the right one for their online business. You can also head to our Facebook or Twitter pages to get all the updates about our trending domain promos. Just look out for the posts with #domainpromos. See you there!

The post New gTLD Report – July 2019 appeared first on ResellerClub Blog.

Do I Need VPS Hosting?

HostGator Blog -

The post Do I Need VPS Hosting? appeared first on HostGator Blog.

When it comes to choosing the right kind of hosting package for your website, you’re going to have a lot of different options to choose from. Not only do you have to find a quality hosting provider, but you have to decide between a multitude of hosting types as well. You’ll have a lot of things to consider:

- The hosting features that your site requires
- The scalability of your web hosting environment
- The level of server resources you require
- If your traffic levels are rising quickly
- If you require any unique server software
- And a lot more

One of the most common forms of hosting you’ve come across in your search is VPS hosting. Below we break down what VPS hosting is, how it works, and its pros and cons, so you can decide if this style of hosting is the best fit for your needs. By the end of this post you’ll be able to answer the question, “Do I need VPS hosting?” with a resounding yes or no.

What Is VPS Hosting?

VPS stands for Virtual Private Server. That might give you a couple of clues as to what this type of hosting is all about. First, we’ll start with the “server” portion. A server is essentially a big computer that’s used to store website files. When you purchase any kind of web hosting, you’re renting server space from a hosting company that runs hundreds or thousands of servers, known as a datacenter. When someone types the URL of your website into their browser, the browser communicates with the server and displays your website’s files. All of this happens in a fraction of a second.

To understand the virtualized aspect of a virtual private server, let’s compare it to a few other forms of hosting. With shared hosting, you’re renting a portion of a server, which you split with other users. With dedicated hosting, you’re renting an entire physical server that’s entirely dedicated to your site. VPS hosting acts as a combination of the two.
Your virtual server pulls from multiple shared server environments, but it’s entirely private, so it operates similarly to a dedicated server. You’ll have access to a greater level of server resources, improved website performance, higher levels of security, and a lot more. You’ll learn more about the advantages of VPS hosting below.

How Does VPS Hosting Work?

As you learned above, the virtualization aspect of VPS is one of the biggest differentiating factors between VPS and other types of hosting. Instead of a physical server being divided up into tons of shared server environments, it’s broken down into a handful of virtual servers. So, yes, you’re still technically sharing a physical server environment. But there are much higher privacy protocols in place, so other VPS hosting users will never affect your site in any way. The virtualization aspect works to create a virtual dedicated server.

This gives your site advantages like:

- Better site performance. With a VPS server, you’ll have access to a guaranteed level of server resources, so you can always expect the same level of high performance.
- Higher security standards. Your server will be completely isolated from other websites, and you can implement stricter security firewalls and the like.
- Greater server customization and access. With a VPS server, you have direct root server access, with greater control over the server OS, scripts, and more.

Keep reading to learn more about the benefits of VPS hosting.

Pros of VPS Hosting

For some website owners, VPS hosting will be their dream hosting setup. It offers your website a great blend of server performance, security, and control, all in an affordable server package. Here are some of the biggest advantages of VPS hosting:

1. High Level of Server Resource Access

With a VPS server, you’re guaranteed greater access to server resources. This means higher levels of storage, bandwidth, CPU, RAM, and more.
Plus, access to the resources spelled out in your hosting plan is guaranteed. Sometimes, if you’re on a shared hosting plan, you might notice a drop in site performance due to other sites on the same server. With VPS hosting this will never be the case. So, not only are the plan limits much larger, but you’ll always have these resources available and dedicated to your site alone.

2. Greater Flexibility and Control

When you choose VPS hosting, you’re giving yourself greater server access and customization options. Essentially, you’re less limited with VPS vs. shared hosting when it comes to what you can do with your server. Using a VPS gives you more server customization options right out of the gate, plus the ability to customize your server down the road. For example, with most VPS hosts you’ll have a choice of operating system, as well as the type of software you’d like to install on your server. Most VPS plans will also give you SSH access, which is secure direct server access. Some users might not require this, but for those who do, it will be invaluable.

3. More Affordable than Dedicated Hosting

If you want some of the benefits offered by a dedicated server, but don’t quite have the budget for it, then VPS can be a great choice. Sure, technically VPS isn’t the same as a dedicated server, but it operates in mostly the same way. Basically, with VPS hosting, you’re getting a lot of the performance and features of a dedicated server, but without the high associated costs. Plus, by configuring your site to run on a VPS server now, you’ll gain the understanding you need if you ever do decide to upgrade to a dedicated server. When that day comes, you’ll have a leg up in terms of the learning curve.

4. More Scalable by Nature

If you’ve run into the limits of your shared hosting plan, then you’re probably looking for a form of hosting that will grow with you as your site grows.
VPS hosting is pretty scalable, meaning you can add more server resources if your site requires it. Plus, VPS servers can be quite large, so they can support very large and fast-growing websites. Now, it isn’t as instantly scalable as cloud hosting. But it’s still scalable; you’ll just need to notify your hosting company about the increase before you hit your plan limits.

Cons of VPS Hosting

VPS hosting is a very popular form of hosting for those who want a hybrid blend of shared and dedicated hosting. But it still won’t be the perfect form of hosting for everyone. Here are some of the biggest drawbacks of VPS hosting you’ll want to be aware of:

1. Requires More Technical Expertise

VPS hosting isn’t technically advanced, but it does require more tech skills than a basic shared hosting plan. Shared hosting is built from the ground up for beginners, and the intuitive nature of shared hosting reflects that. When you sign up for VPS hosting, it’s generally assumed that you have more experience with your site. At the very least you should be comfortable with the backend of your server. If you want to do more advanced things with your server, then you might have to hire the necessary technical help.

2. More Expensive than Shared Hosting

VPS hosting doesn’t usually fall into the “expensive” hosting category. But if you’re upgrading to VPS from a shared hosting plan, then get ready for a price increase. With the additional costs of VPS hosting, you will be getting access to a higher quality server, along with greater plan limits, great server performance, and improved security. But it will come at a cost. Be aware that if you require a higher performing style of hosting, then you’ll have to pay for it. But when looking at the feature set you have access to, compared to the overall price, it does end up being a pretty good deal.
Reasons to Upgrade to VPS Hosting

If your site has been experiencing any of the issues below, then it might be time to consider VPS hosting. Here are some of the most common reasons website owners decide to upgrade:

1. Your Site Is Loading Slowly

There are a number of reasons for your site to be loading slowly. But if you’ve taken the time to optimize your site’s performance and you’re still dealing with very slow loading times, then it might be time to upgrade your host. It could be an issue with your traffic levels (covered below), RAM consumption, server storage, or something else altogether. By migrating to VPS, you’ll give your site support for higher traffic levels, along with more storage to effectively store your site’s files.

2. Your Traffic Levels Are on the Rise

Shared hosting is meant for websites that don’t get much traffic. But as your traffic levels grow, you’ll also start to demand more from your web host. If you notice an upward trend in your traffic levels, then it might be worth upgrading your hosting. Rising traffic levels mean greater server resource consumption, so to avoid slow loading times, and even server crashes, it’s smart to upgrade sooner rather than later.

3. You Want a More Secure Host

Keeping your site secure is one of the most important things you can do. Right out of the box, VPS hosting plans offer you higher levels of security. This includes improved firewalls, dedicated malware scans, monitoring, and improved website backups (in case something goes wrong). Plus, your site will be operating in a completely isolated server environment, so you’ll never be impacted by other sites.

What to Look for in a VPS Host

If you’re the type of site owner who could benefit from VPS hosting, then you’ll also need to ensure you choose a VPS web host that offers the features you require and the quality you need.
Here are some of the key features to look for in a VPS hosting provider:

Sufficient Storage and Bandwidth

When choosing a VPS plan, make sure the plans you’re looking at have sufficient CPU, RAM, disk space, and bandwidth.

Server Security Features

VPS hosting should have a very high level of security. Look for features like DDoS attack protection, multiple firewalls, and regular offsite backups, in case a full website restore is needed.

Knowledgeable Support

A quality support team is a must-have. Look for a VPS host that offers multiple support channels, speedy response times, and technical team members who can help you through tough website or server issues.

High Reliability

Reliability is how often your site is online. The industry standard is above 99%, which seems high, but remember that any time your site is offline can actually lose your site money. Ideally, a VPS host should offer you an uptime around 99.99%.

Quality Server Hardware

The quality of your VPS server depends on the physical server hardware along with the network. Keep an eye out for Intel processors and RAID drives. On the network side, you’ll want a fully redundant network built with no single point of failure.

Of course, there are probably many more features that you’ll require. But at the very least, keep an eye out for the hosting plan features highlighted above.

So, Do I Need VPS Hosting Services?

Hopefully, you now have a better understanding of how VPS hosting works, the types of sites VPS is used for, and the benefits it can bring your website. VPS hosting services aren’t right for everyone. But website owners who have rising traffic levels, are currently experiencing slow loading speeds, or want a higher level of security can all benefit from VPS hosting. Finally, you should consider whether you have the technical means to manage your own VPS account.
It will be more intensive than what it took to configure your shared server, especially if you’re running any custom server elements. If you’re looking for very high levels of performance and demand the best for your site, VPS hosting delivers. Do you need VPS hosting services? Trust your site to a VPS hosting provider that checks all of the above boxes and more. Explore your VPS hosting options with HostGator.

Find the post on the HostGator Blog
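The uptime percentages discussed in this post translate into concrete downtime budgets. As a rough illustration (simple arithmetic, not from the original article):

```python
# Downtime budget implied by common uptime guarantees.
# Illustrative back-of-the-envelope calculation only.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(uptime_percent):
    """Minutes per year a site may be offline at the given uptime percentage."""
    return MINUTES_PER_YEAR * (1 - uptime_percent / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime allows ~{downtime_minutes_per_year(pct):,.0f} minutes of downtime per year")
```

At the 99% "industry standard," a site can be offline for roughly 5,256 minutes (about 3.7 days) per year; at 99.99%, the budget shrinks to under an hour.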

A Fresh Look for Rackspace’s Open Cloud Academy

The Rackspace Blog & Newsroom -

The Open Cloud Academy is always working to improve, and thanks to recent feedback from alumni, we’re excited to announce a series of upgrades. Since 2013, Rackspace’s Open Cloud Academy has helped prepare students to enter the fast-growing field of information technology with hands-on technical training. We began by offering courses, focused on Linux System […] The post A Fresh Look for Rackspace’s Open Cloud Academy appeared first on The Official Rackspace Blog.

Women in Technology: Rhonda Capone

Liquid Web Official Blog -

Liquid Web’s Head of Strategic Initiatives on being present at the start of brand new technology, her father’s lasting influence on her career, and the value of Servant Leadership.

“A common thread in my career is I have been in the company of strong women throughout,” she says. “Each of them, in various roles, has shaped me as a leader.”

Rhonda Capone knows a thing or two about stepping out of her comfort zone and trying new things. Her journey in tech began in the early 1990s, working for the Cellular Telephone Industry Association as a member of the Fraud and Roaming department. This was brand new technology and therefore brand new territory, and Rhonda Capone embraced the unknown, joining CTIA less than a decade after the first truly mobile telephone was launched.

Staffed by industry pioneers, the association worked with cellular manufacturers and license providers to build out coverage coast to coast in the United States. CTIA also worked alongside various regulatory agencies to determine how spectrum would be allocated and sold. They worked to determine how policies for different cellular companies would work together to provide a brand new service to consumers. Rhonda Capone is a reminder that the technology and services we now take for granted were built by individuals who believed in a vision, rolled up their sleeves, and got to work.

“It was such an exciting time. I was a part of an emerging market and had the opportunity to work with technical and business leaders. I was so green and eager to learn something new,” she says. “I will always be grateful to the early leaders of CTIA for giving me the opportunity to learn, grow, and set my path on this technology journey.”

The lessons she learned early in her career have been essential to her success. “Being a leader doesn’t mean you have all the answers.
A good leader knows when to ask for advice and help from their peers, within their team, and their management structure.”

Twenty-five years after helping to shepherd brand new technology to the American public, Rhonda Capone is putting to use her ability to collaborate and communicate, and her willingness to dive in and try new things, as Liquid Web’s Head of Strategic Initiatives, leading the execution of the company’s key projects and programs. From the beginning of her career to her current role at Liquid Web, she has always looked for the ways in which she can contribute best to each team and each role.

The unique opportunity to work on an array of exciting projects through her career is not lost on her. She has played a role in the creation of standards that helped shape the early days of the cellular industry, training and education for industry personnel, infrastructure deployments, software development, and large platform conversions. “Looking back, what I am most proud of is my ability to find the best way to contribute with each and every role,” she says.

She credits her father with planting the early seeds of career success. “He taught me so much about the value of a strong work ethic. I learned from him to always approach a new situation with an open mind and to treat others with kindness and empathy. He also taught me that the best relationships are built on mutual trust — trust is earned, not just given and never bought. And he taught me to trust your gut and find humor in every day!”

Fellow women in technology have also played an important role in her career. “A common thread in my career is I have been in the company of strong women throughout,” she says. “Each of them, in various roles, has shaped me as a leader.”

She has also been shaped by the concept of Servant Leadership. “This idea helped me to hone my skills and weave them into the way I build my teams and organizational functions.
I’ve learned that the desire to serve others and a positive attitude can take you a long way.” As she takes on leadership roles in technology, she strives to make a positive impact both within her team and within the company. To do this, she continues to leave her comfort zone and discover new ways to contribute.

In a shining case of leading by example, her advice to women just starting out in technology consists of a few simple concepts she has built her career around: Don’t be afraid. Raise your hand. Step into something new.

The post Women in Technology: Rhonda Capone appeared first on Liquid Web.

How to Develop Superfans Who Gladly Evangelize for You

Social Media Examiner -

Do you want to create superfans for your business? Wondering how to develop the kind of connected community that elevates your brand? To explore how to develop superfans who will gladly evangelize anything for you and your business, I interview Pat Flynn. Pat is an active keynote speaker and host of the popular Smart Passive […] The post How to Develop Superfans Who Gladly Evangelize for You appeared first on Social Media Marketing | Social Media Examiner.

Preview Release of the new AWS Tools for PowerShell

Amazon Web Services Blog -

In 2012 we announced the first version of the AWS Tools for PowerShell module for Windows PowerShell, containing around 550 cmdlets supporting 20 or so services. In the years since, the growth of AWS has expanded the module to almost 6000 cmdlets spanning 160+ services, plus an additional (but identical) module for users of PowerShell 6 or higher that is capable of being run cross-platform.

There are downsides to putting all those cmdlets into a single module (AWSPowerShell for PowerShell v2 through v5.1 on Windows; AWSPowerShell.NetCore for PowerShell v6 or higher on Windows, macOS, and Linux). First, the import time of the modules has grown significantly. On my 8th Gen Core i7 laptop, the time to import either module has grown beyond 25 seconds. Second, the team discovered an issue with listing all of the cmdlets in the module manifests and subsequently had to revert to specifying ‘*’ for the CmdletsToExport manifest property. This prevents PowerShell from determining the cmdlets in the modules until they are explicitly imported, impacting tab completion of cmdlet names.

In my shell profile I use the Set-AWSCredential and Set-DefaultAWSRegion cmdlets to set an initial scope for my shells. Thus I have to first explicitly import the module and then wait for the shell to become usable. This slow load time is obviously unsustainable, even more so when writing AWS Lambda functions in PowerShell, where we particularly want a fast startup time.

Announcing the Refactored AWS Tools for PowerShell Modules (Preview)

Today the team released a new set of modules to the PowerShell Gallery to address this issue. These modules are in preview so that the team can gather your feedback (good or bad), which we hope you’ll give! In the preview release, each AWS service now has its own PowerShell module, all depending on a common shared module named AWS.Tools.Common (this is the same modular approach that we take with the AWS SDK for .NET on NuGet).
This has a number of implications:

- Instead of downloading and installing a single large module for all services, you can now install only the modules for the services you actually need. The common module is installed automatically when you install a service-specific module.
- You no longer need to explicitly import any of the preview modules before use, as the CmdletsToExport manifest property for each module is now properly specified.
- The versioning strategy for the new modules currently follows the AWSPowerShell and AWSPowerShell.NetCore modules. The strategy is detailed on the team’s GitHub repository notice for the preview, and we welcome your feedback on it.
- Shell startup time is fast! On the same system I noted earlier, the load time for my command shells is now between 1 and 2 seconds on average. The only change to my shell profile was to remove the explicit module import.

The new modules follow the name pattern AWS.Tools.ServiceName. In some cases the more common contraction is used for the name. For example:

AWS.Tools.EC2
AWS.Tools.S3
AWS.Tools.DirectoryService
AWS.Tools.ElasticLoadBalancingV2
AWS.Tools.Polly
AWS.Tools.Rekognition
etc.

If you are writing PowerShell functions for AWS Lambda, be sure to update your script dependencies (using the #Requires statement) to use the new modules. You also need to add a #Requires statement for the common module. For example, if I am writing a Lambda function in PowerShell that uses Amazon Simple Storage Service (S3), then I need to add the following two statements to my function’s script file:

    #Requires -Modules @{ModuleName='AWS.Tools.Common';ModuleVersion='3.3.563.0'}
    #Requires -Modules @{ModuleName='AWS.Tools.S3';ModuleVersion='3.3.563.0'}

Mandatory Parameters

The team has also addressed another long-standing and popular request from users – that of marking parameters as mandatory.
Mandatory parameters are a great feature of PowerShell, helping guide users who are unfamiliar with APIs, and we’re very pleased to now be in a position to support them. The marking of mandatory parameters is dependent on data in the service models, so if you discover any issues, please let the team know at the link at the end of this post so that they can investigate and have the service models corrected if need be.

Other Preview Changes

The development team has also taken the opportunity to remove some old and obsolete cmdlets. If you need to use any of these removed cmdlets, you will need to continue using the existing modules for the time being, but be sure to raise an issue on GitHub so that the team can consider supporting them in the new version:

- CloudHSM (HSM) is removed in favor of CloudHSMV2 (HSM2)
- ElasticLoadBalancing (ELB) is removed in favor of ElasticLoadBalancingV2 (ELB2)
- CloudWatchEvents (CWE) is removed in favor of EventBridge (EVB)
- KinesisAnalytics (KINA) is removed in favor of KinesisAnalyticsV2 (KINA2)

What happens to the AWSPowerShell and AWSPowerShell.NetCore modules?

Nothing! These modules will remain and will be updated in sync with the preview for the foreseeable future. Backwards compatibility is taken very seriously at AWS, and we don’t want to deter use of these modules until we know the community is happy with the replacements. Note that you cannot mix the two different sets of modules: if you have the AWSPowerShell (or AWSPowerShell.NetCore) module loaded, then an attempt to load modules from the preview will fail with an error.

Get involved!

The new preview modules are now available in the PowerShell Gallery, and more details about the release can be found in Matteo’s notice on GitHub. The development team is eager to hear your feedback on the preview – do you like (or not like) the new modular format? Have you encountered any issues with the new support for marking mandatory parameters?
Any other backwards compatibility issues you’ve found? Thoughts on the versioning strategy that should be adopted? Be sure to let them know on their GitHub issues repository! — Steve  

AWS Lake Formation – Now Generally Available

Amazon Web Services Blog -

As soon as companies started to have data in digital format, it became possible for them to build a data warehouse: collecting data from their operational systems, such as customer relationship management (CRM) and enterprise resource planning (ERP) systems, and using this information to support their business decisions.

The reduction in the cost of storage, together with an even greater reduction in the complexity of managing large quantities of data, made possible by services such as Amazon S3, has allowed companies to retain more information, including raw data that is not structured, such as logs, images, video, and scanned documents. This is the idea of a data lake: to store all your data in one centralized repository, at any scale. We are seeing this approach with customers like Netflix, Zillow, NASDAQ, Yelp, iRobot, FINRA, and Lyft. They can run analytics on this larger dataset, from simple aggregations to complex machine learning algorithms, to better discover patterns in their data and understand their business.

Last year at re:Invent we introduced AWS Lake Formation in preview, a service that makes it easy to ingest, clean, catalog, transform, and secure your data and make it available for analytics and machine learning. I am happy to share that Lake Formation is generally available today!

With Lake Formation you have a central console to manage your data lake, for example to configure the jobs that move data from multiple sources, such as databases and logs, to your data lake. Having such a large and diversified amount of data also makes configuring the right access permissions critical. You can secure access to metadata in the Glue Data Catalog and data stored in S3 using a single set of granular data access policies defined in Lake Formation. These policies allow you to define table- and column-level data access. One of the things I like most about Lake Formation is that it works with your data already in S3!
You can easily register your existing data with Lake Formation, and you don’t need to change the existing processes loading your data to S3. Since the data remains in your account, you have full control.

You can also use Glue ML Transforms to easily deduplicate your data. Deduplication is important to reduce the amount of storage you need, but also to make analyzing your data more efficient, because you have neither the overhead nor the possible confusion of looking at the same data twice. This problem is trivial if duplicate records can be identified by a unique key, but becomes very challenging when you have to do a “fuzzy match”. A similar approach can be used for record linkage, that is, looking for similar items in different tables, for example to do a “fuzzy join” of two databases that do not share a unique key. In this way, implementing a data lake from scratch is much faster, and managing a data lake is much easier, making these technologies available to more customers.

Creating a Data Lake

Let’s build a data lake using the Lake Formation console. First I register the S3 buckets that are going to be part of my data lake. Then I create a database and grant permission to the IAM users and roles that I am going to use to manage my data lake. The database is registered in the Glue Data Catalog and holds the metadata required to analyze the raw data, such as the structure of the tables that are going to be automatically generated during data ingestion.

Managing permissions is one of the most complex tasks for a data lake. Consider, for example, the huge amount of data that can be part of it, the sensitive, mission-critical nature of some of the data, and the different structured, semi-structured, and unstructured formats in which data can reside.
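The “fuzzy match” deduplication problem described above can be illustrated locally. Below is a minimal sketch using Python’s standard-library difflib to flag likely duplicate records; it is a toy stand-in for what Glue ML Transforms does at scale, and the sample records and the 0.8 threshold are assumptions for the example:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1] of how closely two record strings match."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

records = [
    "John A. Smith, 123 Main St, Seattle",
    "Jon Smith, 123 Main Street, Seattle",
    "Alice Jones, 9 Elm Rd, Portland",
]

# Pairs above a similarity threshold are candidate duplicates:
# there is no unique key, so we compare the full record strings.
threshold = 0.8
duplicates = [
    (i, j)
    for i in range(len(records))
    for j in range(i + 1, len(records))
    if similarity(records[i], records[j]) >= threshold
]
print(duplicates)  # → [(0, 1)]
```

Real record-linkage systems combine many such similarity signals and learn the threshold from labeled pairs, which is exactly the part Glue ML Transforms automates.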
Lake Formation makes this easier with a central location where you can give IAM users, roles, groups, and Active Directory users (via federation) access to databases and tables, optionally allowing or denying access to specific columns within a table.

To simplify data ingestion, I can use blueprints that create the necessary workflows, crawlers, and jobs on AWS Glue for common use cases. Workflows enable orchestration of your data loading workloads by building dependencies between Glue entities, such as triggers, crawlers, and jobs, and allow you to visually track the status of the different nodes in the workflows on the console, making it easier to monitor progress and troubleshoot issues. Database blueprints help load data from operational databases. For example, if you have an e-commerce website, you can ingest all your orders into your data lake. You can load a full snapshot from an existing database, or incrementally load new data. In the case of an incremental load, you can select a table and one or more of its columns as bookmark keys (for example, a timestamp in your orders) to determine previously imported data. Log file blueprints simplify ingesting logging formats used by Application Load Balancers, Elastic Load Balancers, and AWS CloudTrail.

Let’s see how that works in more depth. Security is always a top priority, and I want a forensic log of all management operations across my account, so I choose the CloudTrail blueprint. As the source, I select a trail collecting my CloudTrail logs from all regions into an S3 bucket. In this way, I’ll be able to query account activity across all my AWS infrastructure. This works similarly for a larger organization with multiple AWS accounts: when configuring the trail in the CloudTrail console, they just need to apply the trail to their whole organization. I then select the target database, and the S3 location for the data lake.
As the data format I use Parquet, a columnar storage format that will make querying the data faster and cheaper. The import frequency can be hourly to monthly, with the option to choose the day of the week and the time. For now, I want to run the workflow on demand. I can do that from the console or programmatically, for example using any AWS SDK or the AWS Command Line Interface (CLI). Finally, I give the workflow a name, the IAM role to use during execution, and a prefix for the tables that will be automatically created by this workflow.

I start the workflow from the Lake Formation console and select to view the workflow graph. This opens the AWS Glue console, where I can visually see the steps of the workflow and monitor the progress of this run. When the workflow is completed, a new table is available in my data lake database. The source data remains as logs in the S3 bucket that CloudTrail writes to, but now I have it consolidated, in Parquet format and partitioned by date, in my data lake S3 location. To optimize costs, I can set up an S3 lifecycle policy that automatically expires data in the source S3 bucket after a safe amount of time has passed.

Securing Access to the Data Lake

Lake Formation provides secure and granular access to data stored in the data lake, via a new grant/revoke permissions model that augments IAM policies. It is simple to set up these permissions, for example using the console: I simply select the IAM user or role I want to grant access to. Then I select the database and, optionally, the tables and columns I want to provide access to. It is also possible to select which type of access to provide. For this demo, simple select permissions are sufficient.

Accessing the Data Lake

Now I can query the data using tools like Amazon Athena or Amazon Redshift. For example, I open the query editor in the Athena console.
First, I want to use my new data lake to look into which source IP addresses are most common in my AWS account activity:

    SELECT sourceipaddress, count(*)
    FROM my_trail_cloudtrail
    GROUP BY sourceipaddress
    ORDER BY 2 DESC;

Looking at the result of the query, I can see which AWS API endpoints I use the most. Then I’d like to check which user identity types are used. That information is stored in JSON format inside one of the columns. I can use some of the JSON functions available in Amazon Athena to get it in my SQL statements:

    SELECT json_extract_scalar(useridentity, '$.type'), count(*)
    FROM "mylake"."my_trail_cloudtrail"
    GROUP BY json_extract_scalar(useridentity, '$.type')
    ORDER BY 2 DESC;

Most of the time, AWS services are the ones generating the activity in my trail. These queries are just an example, but they quickly give me deeper insight into what is happening in my AWS account. Think of what a similar capability could mean for your business! Using database and log blueprints, you can quickly create workflows to ingest data from multiple sources within your organization, set the right column-level permissions on who can access any information collected, clean and prepare your data using machine learning transforms, and correlate and visualize the information using tools like Amazon Athena, Amazon Redshift, and Amazon QuickSight.

Customizing Data Access with Column-Level Permissions

To follow data privacy guidelines and compliance requirements, the mission-critical data stored in a data lake often requires custom views for different stakeholders inside the company. Let’s compare the visibility of two IAM users in my AWS account: one that has full permissions on a table, and one that has select access to only a subset of the columns of the same table. I already have a user with full access to the table containing my CloudTrail data; it’s called danilop. I create a new limitedview IAM user and give it access to the Athena console.
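To get a feel for what json_extract_scalar in the second query above is doing, here is a small local mimic in Python; the helper, the sample useridentity documents, and the limited '$.field' path support are all illustrative, not Athena itself:

```python
import json
from collections import Counter

def json_extract_scalar(document: str, path: str) -> str:
    """Minimal local stand-in for Athena's json_extract_scalar,
    supporting only simple '$.field' paths (an illustration)."""
    field = path.removeprefix("$.")
    value = json.loads(document).get(field)
    return value if isinstance(value, str) else None

# A few CloudTrail-like useridentity values (sample data, not real logs).
rows = [
    '{"type": "AWSService", "invokedBy": "cloudtrail.amazonaws.com"}',
    '{"type": "IAMUser", "userName": "danilop"}',
    '{"type": "AWSService", "invokedBy": "lambda.amazonaws.com"}',
]

# Equivalent of: SELECT json_extract_scalar(useridentity, '$.type'), count(*)
#                GROUP BY 1 ORDER BY 2 DESC
counts = Counter(json_extract_scalar(r, "$.type") for r in rows)
print(counts.most_common())  # → [('AWSService', 2), ('IAMUser', 1)]
```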
In the Lake Formation console, I give this new user select permissions on only three of the columns. To verify the different access to the data in the table, I log in with one user at a time and go to the Athena console. On the left I can explore which tables and columns the logged-in user can see in the Glue Data Catalog. Comparing the two users side by side, the limited user has access only to the three columns that I explicitly configured, plus the four columns used for partitioning the table, whose access is required to see any data. When I query the table in the Athena console with a select * SQL statement, logged in as the limitedview user, I only see data from those seven columns.

Available Now

There is no additional cost for using AWS Lake Formation; you pay for the use of the underlying services such as Amazon S3 and AWS Glue. One of the core benefits of Lake Formation is the security policies it introduces. Previously, you had to use separate policies to secure data and metadata access, and those policies only allowed table-level access. Now you can give each user, from a central location, access to only the columns they need to use.

AWS Lake Formation is now available in US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Tokyo). Redshift integration with Lake Formation requires Redshift cluster version 1.0.8610 or higher; your clusters should have been automatically updated by the time you read this. Support for Apache Spark with Amazon EMR will follow over the next few months.

I only scratched the surface of what you can do with Lake Formation. Building and managing a data lake for your business is now much easier; let me know how you are using these new capabilities!

— Danilo
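As a footnote to the column-level permissions demo above: the same grant can also be expressed through the AWS SDKs. The sketch below builds the request that the boto3 lakeformation client's grant_permissions call accepts; the principal ARN, database, table, and column names are illustrative, and the exact request shape should be verified against the current boto3 documentation:

```python
# Sketch of a column-level Lake Formation grant, expressed as the request
# boto3 (the AWS SDK for Python) would send. All identifiers below are
# illustrative placeholders, not values from a real account.
grant_request = {
    "Principal": {
        "DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:user/limitedview"
    },
    "Resource": {
        "TableWithColumns": {
            "DatabaseName": "mylake",
            "Name": "my_trail_cloudtrail",
            "ColumnNames": ["eventname", "eventtime", "sourceipaddress"],
        }
    },
    "Permissions": ["SELECT"],
}

# With AWS credentials configured, this would be submitted as:
#   import boto3
#   boto3.client("lakeformation").grant_permissions(**grant_request)
```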

Introducing Certificate Transparency Monitoring

CloudFlare Blog -

Today we’re launching Certificate Transparency Monitoring (my summer project as an intern!) to help customers spot malicious certificates. If you opt into CT Monitoring, we’ll send you an email whenever a certificate is issued for one of your domains. We crawl all public logs to find these certificates quickly. CT Monitoring is available now in public beta and can be enabled in the Crypto Tab of the Cloudflare dashboard.

Background

Most web browsers include a lock icon in the address bar. This icon is actually a button — if you’re a security advocate or a compulsive clicker (I’m both), you’ve probably clicked it before! Here’s what happens when you do just that in Google Chrome.

This seems like good news. The Cloudflare blog has presented a valid certificate, your data is private, and everything is secure. But what does this actually mean?

Certificates

Your browser is performing some behind-the-scenes work to keep you safe. When you request a website, the website should present a certificate that proves its identity. This certificate is like a stamp of approval: it says that your connection is secure. In other words, the certificate proves that content was not intercepted or modified while in transit to you. An altered Cloudflare site would be problematic, especially if it looked like the actual Cloudflare site. Certificates protect us by including information about websites and their owners.

We pass around these certificates because the honor system doesn’t work on the Internet. If you want a certificate for your own website, just request one from a Certificate Authority (CA), or sign up for Cloudflare and we’ll do it for you! CAs issue certificates just as real-life notaries stamp legal documents. They confirm your identity, look over some data, and use their special status to grant you a digital certificate. Popular CAs include DigiCert, Let’s Encrypt, and Sectigo.
This system has served us well: it has kept impostors in check and promoted trust between domain owners and their visitors.

Unfortunately, nothing is perfect. It turns out that CAs make mistakes. In rare cases, they become reckless. When this happens, illegitimate certificates are issued (even though they appear to be authentic). If a CA accidentally issues a certificate for your website, but you did not request the certificate, you have a problem. Whoever received the certificate might be able to:

- Steal login credentials from your visitors.
- Interrupt your usual services by serving different content.

These attacks do happen, so there’s good reason to care about certificates. More often, domain owners lose track of their certificates and panic when they discover unexpected certificates. We need a way to prevent these situations from ruining the entire system.

Certificate Transparency

Ah, Certificate Transparency (CT). CT solves the problem I just described by making all certificates public and easy to audit. When CAs issue certificates, they must submit certificates to at least two “public logs.” This means that collectively, the logs carry important data about all trusted certificates on the Internet. Several companies offer CT logs — Google has launched a few of its own. We announced Cloudflare’s Nimbus log last year.

Logs are really, really big, and often hold hundreds of millions of certificate records. The log infrastructure helps browsers validate websites’ identities. When you request a site in Safari or Google Chrome, the browser will actually require the site’s certificate to be registered in a CT log. If the certificate isn’t found in a log, you won’t see the lock icon next to the address bar. Instead, the browser will tell you that the website you’re trying to access is not secure. Are you going to visit a website marked “NOT SECURE”? Probably not.

There are systems that audit CT logs and report illegitimate certificates.
Therefore, if your browser finds a valid certificate that is also trusted in a log, everything is secure.

What We're Announcing Today

Cloudflare has been an industry leader in CT. In addition to Nimbus, we launched a CT dashboard called Merkle Town and explained how we made it. Today, we’re releasing a public beta of Certificate Transparency Monitoring. If you opt into CT Monitoring, we’ll send you an email whenever a certificate is issued for one of your domains. When you get an alert, don’t panic; we err on the side of caution by sending alerts whenever a possible domain match is found. Sometimes you may notice a suspicious certificate: maybe you won’t recognize the issuer, or the subdomain is not one you offer. Alerts are sent quickly so you can contact a CA if something seems wrong.

This raises the question: if services already audit public logs, why are alerts necessary? Shouldn’t errors be found automatically? Well, no, because auditing is not exhaustive. The best person to audit your certificates is you. You know your website. You know your personal information. Cloudflare will put relevant certificates right in front of you.

You can enable CT Monitoring on the Cloudflare dashboard. Just head over to the Crypto Tab and find the “Certificate Transparency Monitoring” card. You can always turn the feature off if you’re too popular in the CT world.

If you’re on a Business or Enterprise plan, you can tell us who to notify. Instead of emailing the zone owner (which we do for Free and Pro customers), we accept up to 10 email addresses as alert recipients. We do this to avoid overwhelming large teams. These emails do not have to be tied to a Cloudflare account and can be manually added or removed at any time.

How This Actually Works

Our Cryptography and SSL teams worked hard to make this happen; they built on the work of some clever tools mentioned earlier. Merkle Town is a hub for CT data.
We process all trusted certificates and present relevant statistics on our website. This means that every certificate issued on the Internet passes through Cloudflare, and all the data is public (so there are no privacy concerns here). Cloudflare Nimbus is our very own CT log; it contains more than 400 million certificates. Note: Cloudflare, Google, and DigiCert are not the only CT log providers.

So here’s the process... At some point in time, you (or an impostor) request a certificate for your website. A Certificate Authority approves the request and issues the certificate. Within 24 hours, the CA sends this certificate to a set of CT logs. This is where we come in: Cloudflare uses an internal process known as “The Crawler” to look through millions of certificate records. Merkle Town dispatches The Crawler to monitor CT logs and check for new certificates. When The Crawler finds a new certificate, it pulls the entire certificate through Merkle Town.

When we process the certificate in Merkle Town, we also check it against a list of monitored domains. If you have CT Monitoring enabled, we’ll send you an alert immediately. This is only possible because of Merkle Town’s existing infrastructure. Also, The Crawler is ridiculously fast.

I Got a Certificate Alert. What Now?

Good question. Most of the time, certificate alerts are routine. Certificates expire and renew on a regular basis, so it’s totally normal to get these emails. If everything looks correct (the issuer, your domain name, etc.), go ahead and toss that email in the trash.

In rare cases, you might get an email that looks suspicious. We provide a detailed support article that will help. The basic protocol is this:

- Contact the CA (listed as “Issuer” in the email).
- Explain why you think the certificate is suspicious.
- The CA should revoke the certificate (if it really is malicious).

We also have a friendly support team that can be reached here.
While Cloudflare is not a CA and cannot revoke certificates, our support team knows quite a bit about certificate management and is ready to help.

The Future

Certificate Transparency has started making regular appearances on the Cloudflare blog. Why? It’s required by Chrome and Safari, which dominate the browser market and set precedents for Internet security. But more importantly, CT can help us spot malicious certificates before they are used in attacks. This is why we will continue to refine and improve our certificate detection methods.

What are you waiting for? Go enable Certificate Transparency Monitoring!
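For the curious, the “possible domain match” check described above can be approximated in a few lines. This is a rough sketch of the idea, not Cloudflare’s actual matcher; the matching rules here (exact name, subdomain, one-label wildcard) are assumptions for illustration:

```python
def matches_monitored(cert_name, monitored):
    """Return True if a certificate name plausibly matches a monitored
    domain: exact match, subdomain, or a '*.' wildcard certificate.
    (Illustrative logic only, not Cloudflare's implementation.)"""
    name = cert_name.lower().rstrip(".")
    if name.startswith("*."):
        name = name[2:]  # a wildcard cert covers labels under this base
    for domain in monitored:
        if name == domain or name.endswith("." + domain):
            return True
    return False

print(matches_monitored("www.example.com", {"example.com"}))  # → True
print(matches_monitored("*.example.com", {"example.com"}))    # → True
print(matches_monitored("examples.com", {"example.com"}))     # → False
```

Note the last case: a look-alike name like examples.com does not match, which is why human review of the alert email is still the final step.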

Welcome to WebPros Summit 2019!

cPanel Blog -

Summit /ˈsəmət/ (noun) – the highest level or degree attainable; the highest stage of development.

This year the cPanel Conference is being transformed into the WebPros Summit. With the addition of cPanel to the WebPros family of companies, the natural progression for our annual conference was a combined one. Partnering with the communities of Plesk, WHMCS, and SolusVM will increase the size and impact of an annual hosting conference. Enter WebPros Summit 2019. With the power …

3 Reasons Why Your B2B Business Needs an eCommerce Store

Nexcess Blog -

Consumer-focused retailers were quick to embrace ecommerce when it became a practical sales platform more than two decades ago. Today, the majority of B2C retailers have an ecommerce store. B2B suppliers, however, were slow to adopt ecommerce. B2B sales relationships are quite different from those in the B2C world, often involving contractual negotiations, personal relationships… Continue reading →

Reseller vs. Affiliate Programs

HostGator Blog -

The post Reseller vs. Affiliate Programs appeared first on HostGator Blog.

If you’ve been looking for a way to make some additional cash by recommending products and services, then you’ve no doubt come across the terms reseller and affiliate. At their core, these two models share many similarities, but the mechanics required for each are very different. If you’re a reseller, you’re re-selling a service under your own brand name but don’t have to worry about the fulfillment of that service. As an affiliate, you’re sending customers to businesses in exchange for a commission on the sale. If that’s a little confusing, don’t worry. Below we take an in-depth look at both reseller and affiliate programs, so you can decide which route you want to take in our reseller vs. affiliate showdown.

What is a Reseller?

As a reseller, you’re more or less operating a traditional business. You purchase certain products or services and then sell them as if they were your own. Typically, you’re able to buy these at a discount and sell them at a higher price to make a profit.

Let’s look at an example of how reseller hosting works. Say you want to start your own web hosting company. If you were to start completely from scratch, you’d have to purchase physical servers, secure them in some form of datacenter, install server software, hire IT staff, build a website to sell hosting, hire customer support staff, and that’s just the beginning. Not only would your startup costs be extremely high, but there are many moving pieces you have to get right, even before you land your first customer.

But if you sign up to be a hosting reseller, you can greatly simplify the process. When you sign up for a reseller hosting plan, you can purchase hosting in bulk. Basically, it’s a lot of server space, which you can divide up however you wish. You’ll also get access to ancillary features like cPanel access, email management, dedicated support, and a lot more.
Plus, all of this is white labeled, so you can brand it under your own company. There’s no way your customers would ever know that you weren’t running the servers yourself.

Common Reseller Program Use Cases

There are reseller programs of all types, not just in the web hosting space. But, to continue our example, let’s look at a few different ways you can use reseller hosting:

1. Offer Additional Services

If you’re a web developer, or currently run an agency, then you’re likely always on the lookout for additional professional services you can offer your clients and customers that will make their lives easier (and your business richer). For a lot of people, managing their own hosting can be a nightmare. It’s overly complex, confusing, and a hassle. As a developer or web agency, you can replace your customer’s current hosting company and take care of everything yourself. The hosting parent company will take care of the heavy lifting for you, handling things like:

- Server maintenance
- Updating server software
- Ensuring high uptime
- Handling support requests

This leaves you free to focus on your client and customer sites while earning a monthly recurring fee for hosting them, letting you make more money per customer and build recurring revenue for your business.

2. Start a New Company

Another approach (which we mentioned above) is creating your own business. You purchase reseller hosting, also known as white label hosting, and sell those services under your own brand. The parent web host will take care of all the technical tasks, leaving you free to focus on marketing and customer acquisition. If you’ve been wanting to get into the hosting game, this is a great way to do so without high startup costs. If you do want to take this route, make sure you read our guide to making money with reseller hosting; you’ll learn how to best increase your chances of success in the reseller hosting business.

3. Grow a Side Hustle

Maybe you’re on the lookout for additional sources of income, or want to monetize your blog? If you commonly answer tech questions from your friends or help them out with website issues, then reseller hosting might be a worthwhile investment. Instead of having to figure out multiple hosting companies, you can simply have them sign up with your hosting company and better manage their sites. Instead of paying a hosting company, they’ll pay you for hosting. In some cases, you might even be able to offer them a better deal than what they’re getting through a hosting company.

Pros and Cons of Being a Reseller

Joining a reseller program won’t be perfect for everyone. It can be a great opportunity, but not everyone will be ready for the work required to operate successfully as a reseller. Here are some of the biggest advantages and disadvantages of a reseller program:

Pros of Being a Reseller

1. Low Startup Costs

When you start a reseller hosting business, all you need to invest in is enough hosting space for your first customers and a website. There are no infrastructure costs, and you can keep your costs as low as possible until you break even.

2. Create a Scalable Business

With a reseller business, you can expand near infinitely. In the hosting example, you can simply purchase more hosting as your needs grow. All you need to focus on is signing up more customers.

3. High-Quality Service with Less Work

By partnering with a quality reseller business, you’re selling their services under your name. If you partner with a company whose service you love, then you’ll be able to pass on this same quality of service.

Cons of Being a Reseller

1. You Don’t Have Complete Control

As a reseller, you won’t have complete control over all aspects of your business. You can control most of the front end, but the back-end service or product is all dependent on the company you align yourself with.
If there are issues with the service, there won’t be much you can do about it.

2. Can’t Compete on Price

If you’re in the reseller business, you’ll often be forced to compete on something other than price. Most of the big players in the space can offer cheaper prices, so you’ll have to find another way to differentiate yourself from the rest of your market.

What Is an Affiliate?

If being a reseller sounds like a lot of work, then you’ll probably be better suited to being an affiliate. Being a reseller can be likened to being a CEO, while being an affiliate is the equivalent of being in sales. To become an affiliate, all you have to do is join an affiliate network for a product or service you love. Once you join, you’ll be given a unique link with an embedded tracking code. Whenever you share this link and a person uses it to buy a product or service, you receive a commission.

Affiliate marketing is a very common and effective way to earn money online. It’s relatively hands-off: all you have to do is share your link and drive new customers to the affiliate offer. Businesses create their own affiliate programs because they’re a great way to generate new qualified leads and customers. It’s a win-win for both the company and the affiliate: the company gets a new customer, and the affiliate gets paid.

Let’s look at a quick example. One of the most well-known and widely used affiliate programs is Amazon Associates. You join the program and get access to a unique tracking code. Then, let’s say you’re writing a blog post about the best power tools for new dads. Any time you mention a product Amazon sells, you link to that product on the Amazon store with your unique link. Whenever a customer clicks on that link, a tracking cookie is set in their browser. For anything they purchase within that cookie’s window, you’ll receive a commission on the total sale price.

The same goes for most other affiliate networks.
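The link-tagging mechanics described above can be sketched in a few lines of JavaScript. The `tag` parameter name mirrors Amazon Associates; other networks use their own parameter names, and the tracking ID shown here is a made-up example:

```javascript
// Build an affiliate tracking link by appending the network's tracking
// parameter to a product URL. "tag" is Amazon Associates' parameter;
// swap it for whatever your network uses (e.g. "ref", "aff_id").
function buildAffiliateLink(productUrl, trackingId, param = "tag") {
  const url = new URL(productUrl);
  url.searchParams.set(param, trackingId); // overwrites any existing value
  return url.toString();
}

const link = buildAffiliateLink(
  "https://www.amazon.com/dp/B000EXAMPLE", // hypothetical product page
  "mysite-20"                              // hypothetical Associates ID
);
console.log(link); // https://www.amazon.com/dp/B000EXAMPLE?tag=mysite-20
```

When a visitor follows a link like this, the network reads the parameter, sets its tracking cookie, and credits your account for purchases made within the cookie window.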
The buyer window might differ, but the process remains the same. If you’re interested in becoming a hosting affiliate, there’s a great affiliate program right here at HostGator. Once you sign up as an affiliate, you’ll be able to recommend high-quality hosting to your readers. Drive clicks, make sales, and receive a commission.

Finding Success as an Affiliate

There are multiple paths to success as an affiliate, whether you want to create an entire site dedicated to affiliate marketing, sell affiliate products through an email list, or simply insert affiliate links throughout your content. Here are some of the most common paths to affiliate marketing success:

1. Creating an Affiliate Review Site

Affiliate review sites are one of the most common pathways to success. This style of site is dedicated to reviewing different products or services in a certain niche. These sites often also provide related content that answers common questions and serves the niche as a whole. Take a look at sites like NerdWallet, Gear Patrol, and the very unique 50em (which focuses on only two products). These sites make their income from reviewing and recommending products. When someone purchases a product or service through their affiliate link, they receive a commission.

2. Selling Affiliate Products via Email

Email marketing is a great way to drive traffic to an affiliate offer. By building an email list of people in a certain niche, you can recommend products they might be interested in. Email also gives you the opportunity to establish a relationship with your audience and even pre-sell certain products and services before you send subscribers to the product or landing page.

3. Add Links to Your Content or Social

If you’re not completely focused on affiliate marketing but would like to add some additional income to your online efforts, you can include affiliate links wherever they make sense.
This could be through a link on your social media profiles or stories, on a separate resource page on your website, or even just sprinkled throughout your affiliate content whenever you mention a product or service.

Pros and Cons of Affiliate Programs

Being an affiliate isn’t going to be right for everyone. It all depends on the goals of your website and what niche you’re in. Here are some of the most common advantages and disadvantages of joining an affiliate program.

Pros of Being an Affiliate

1. Generate Passive Revenue

When affiliate marketing is done right, it can generate consistent passive income. Of course, this depends on how your site is structured. But if you have high-ranking, review-style content that gets consistent traffic, you can expect a certain percentage of that traffic to convert into income.

2. Easy to Get Started

All you need to get started with affiliate marketing is a link. Once you have your unique tracking link, you can promote it to your existing social media audience or start creating content about the service you’re recommending on your own website.

3. Low Level of Responsibility

Once you’ve referred a customer to a company, your work stops there. You don’t have to deliver the product or service. All you have to do is sit back and wait for your commission to arrive.

Cons of Being an Affiliate Marketer

1. Commission-Only Sales

Unlike running your own business, even a reseller-based one, affiliate marketing doesn’t really give you the opportunity to grow your income via upsells or other products. Sure, you could recommend other products to your audience down the line, but right out of the gate your income might be a little limited. You’re also limited by the commission the affiliate company is willing to pay out.

2. Sales Aren’t Guaranteed

There are no guarantees in the world of affiliate marketing. Just because you sent a lead to a company doesn’t mean that person is going to follow through with the sale.
While there are click-based programs that pay affiliates based on the traffic they drive, most pay out only upon confirmed sales.

Reseller vs. Affiliate: Which Is Right for Me?

Whether you choose to become a reseller or an affiliate depends on your goals. Do you want to create your own business offering reseller services, or create an add-on service for your current business? Then reseller hosting might be right for you. Do you just want to recommend products and services and receive a percentage of the sale? Then becoming an affiliate marketer is probably the path for you. Hopefully, by now you’re leaning in one direction or the other. Whichever way you’re leaning, you can make it happen with HostGator’s highly reviewed reseller hosting services and affiliate programs.

Find the post on the HostGator Blog

SSRF Attacks: Difficult to Detect But Largely Preventable

The Rackspace Blog & Newsroom -

The security of Rackspace and our customers is of the utmost importance to us, and so, when a cybersecurity breach makes the news, we always want to put it in context, and offer recommendations when appropriate. First, a reassurance: it is possible to have a secure cloud environment, provided cloud users understand the threat landscape […] The post SSRF Attacks: Difficult to Detect But Largely Preventable appeared first on The Official Rackspace Blog.

Quick Guide to Best Practices for Data Backup

Liquid Web Official Blog -

Your data is of paramount importance. Whether you store sensitive customer data for your eCommerce business or simply have oodles of cat videos, no one wants to wake up one morning and discover that their data is gone. Given the ever-evolving nature of online attacks, it’s impossible to guarantee that your data will never be hacked or corrupted. The only way to fully protect yourself is to back up your data regularly so you can fully recover in the event of a disaster.

Follow these six best practices from The Most Helpful Humans in Hosting® when choosing your ideal backup solution. These pointers will help ensure that your data is safe and fully recoverable.

1. Use Remote Storage

A critical factor in your backup solution is remote backups. Backing up your data and storing it on the same disk as your original data is an exercise in futility. Off-site, or at least off-server, backups remain viable even if your central server is compromised, allowing you to recover your data entirely. Whether you’re on a physical Dedicated or Cloud-based server, off-site backups are crucial for real disaster recovery.

2. Take Backups Frequently and Regularly

Prevent the loss of your critical data by ensuring backups are taken frequently and on a regular schedule. On Fully Managed servers, your control panel gives you the flexibility to run account-level backups on your own schedule. Determining how often your data changes will help you decide how regularly it should be backed up. Critical data that is continuously updated needs a more frequent backup schedule, and a continuous backup solution works well in that case, whereas more static data may only need daily, nightly, or even weekly backups. Then, make sure your backup solution matches your business needs.

3.
Consider Retention Span

After determining the frequency, it’s also vital to consider how long you will retain each backup. Keeping every backup forever isn’t feasible, because storage space is limited. Most backup solutions offer a series of retention schedules, such as keeping hourly and daily backups for a week, weekly backups for a month, and monthly backups for a few months or even years. This type of schedule gives you multiple recent backups in case recovery is needed. Good business backup practice includes retaining specific backups, such as monthly or bi-annual ones, for as long as possible, if not forever. Also, we recommend researching your industry’s data retention standards and requirements. HIPAA-compliant solutions, or those for financial institutions, will have strict requirements for backup retention.

4. Keep Backups Encrypted and Protected

In some cases, it’s not enough to back up your data in an off-site location. Beyond the security of the facility holding your backups, encrypting the files adds another layer of data security. Encrypting backups in storage helps ensure that your data is what you expect it to be in the event you need to recover it.

5. Store Backups on RAID Arrays

For extra redundancy, store your backups on RAID arrays. Distributing your data across two or more drives in a RAID array allows for better performance, better reliability, and larger data sets in your backup solution. RAID also helps protect your stored data from the failure of a single drive. Redundancy, also known as high-availability infrastructure, is the best way to decrease your risk of going offline and/or losing data during a disaster.

6. Stack Your Backup Solutions

Because backup solutions differ in how they treat your data, it’s best to use multiple solutions.
For example, Liquid Web’s Dedicated backup solution takes backups of your entire server and stores them in a secure, remote location. Alternatively, cPanel backups take copies of only your cPanel account and can be stored either locally or remotely. Local backups via cPanel are available to every user. cPanel backups can be especially useful for users who have multiple accounts on one server but only need to restore one of them. Because the two solutions have different benefits, we recommend backing up full images of your server in addition to smaller snapshots of your cPanel accounts. Stacking your backup solutions in this manner will ensure your data can be recovered as quickly and efficiently as possible, no matter what kind of disaster hits.

Need a Backup Solution?

There are many backup options to choose from depending on your server type and business needs, but these six best practices should help you choose the best solution for you, whether that’s a remote or cloud backup solution.

The post Quick Guide to Best Practices for Data Backup appeared first on Liquid Web.
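The retention schedule described in tip 3 above (hourly/daily backups for a week, weekly for a month, monthly beyond that) can be sketched as a simple thinning rule. The cutoffs below are illustrative assumptions, not defaults of any particular backup product:

```javascript
// Decide whether a backup taken at `backupDate` should still be kept at
// time `now`, under an illustrative thinning policy:
//   - keep every backup for 7 days,
//   - keep one backup per week (Sundays) for a month,
//   - keep one backup per month (the 1st) indefinitely.
function shouldKeep(backupDate, now) {
  const ageDays = (now - backupDate) / 86400000; // ms per day
  if (ageDays <= 7) return true;
  if (ageDays <= 31) return backupDate.getUTCDay() === 0; // Sunday
  return backupDate.getUTCDate() === 1; // first of the month
}

const now = new Date("2019-08-01T00:00:00Z");
console.log(shouldKeep(new Date("2019-07-30T00:00:00Z"), now)); // true:  younger than a week
console.log(shouldKeep(new Date("2019-07-10T00:00:00Z"), now)); // false: mid-week backup older than a week
console.log(shouldKeep(new Date("2019-06-01T00:00:00Z"), now)); // true:  monthly backup kept long-term
```

A real backup rotation would run a rule like this on every cycle and delete the backups that fall out of policy; compliance regimes such as HIPAA may force much longer retention windows.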

How to Get a Custom Video Made for YouTube Ads

Grow Traffic Blog -

If you’re interested in YouTube ads, you need video. The little YouTube banner ads that pop up over videos are negligible; the real value comes from the pre-roll, mid-roll, and unskippable video ads. Videos are what people come to engage with, and videos are what they’re prepared to see. What happens if you want to advertise on YouTube but you don’t have videos on hand? You have to come up with some solution to the problem, and “not using YouTube ads” isn’t a valid solution. Thankfully, there are a few options you can pursue.

Do It Yourself

The first option is to make your own video for your YouTube ads. I know, I know: if you don’t know how to make nicely edited videos, you’re going to have a bit of a hard time with this. It’s completely understandable. To pull it off, you need to dedicate yourself to learning the craft, at least on a superficial level. First of all, I recommend that you do some reading. Check these out:

- YouTube’s Creator Academy. This page is specifically about ads on YouTube, teaching you about different ad formats, factors that impact advertising, and other basics you should know. If you can pass their quiz, you have the baseline knowledge to decide what to do next.
- YouTube Ads For Beginners. This is an article about how to launch and optimize a YouTube video ads campaign, published by HubSpot, one of the top marketing agencies in the world. It gives you a fairly advanced level of knowledge about running campaigns.
- The Complete Guide to YouTube Ads for Marketers. This Hootsuite post covers a lot of great information about YouTube ads. It overlaps somewhat with the HubSpot article, but not entirely, so it’s worth reading both.
- Disruptive Advertising’s How to Write a Video Ad People Actually Want to Watch. The title here is pretty self-explanatory. Your script and storyboard are important, so knowing how to produce them is crucial.
Additionally, you might want to pick a video editor and look up some tutorials for it. There are dozens of video editors out there, ranging from simple camera apps to full-on movie studio suites, so there’s something for everyone; which one you choose largely comes down to preference. The DIY option is serviceable if you have some video equipment, only want to make very simple ads, or otherwise don’t want to invest much in your videos. It’s unfortunately not a great option if you’re looking to invest heavily in YouTube ads, because your videos will hold you back until you’re much more experienced. As such, I’d recommend moving on to the next option unless you’re on an absolutely shoestring budget.

Hire a Cheap Freelancer

The second option is to hire a cheap freelancer to make something for you. In this case, a “cheap” freelancer could be anywhere from a $5 Fiverr hire to someone asking $30 an hour for a simple project. Obviously, skilled freelancers can charge much more.

- Fiverr. Normally, I wouldn’t recommend Fiverr for much of anything. However, you can get a full short video ad for very cheap, and it’s very unlikely to be plagiarized from another source. Some sellers have a series of customizable templates they use; others will simply put their video editing skills to work on something simple at a relatively low price. As of this writing, there are over 1,300 people selling “short video ads” as a service, starting as low as $35 for a basic project. Prices range all over the place; some are around $50, some $100, and a few sell for as much as $3,000 for a custom animated project.
- Upwork. Upwork is the combination of several former freelancer hubs and, as such, has one of the largest pools of freelancers out there. You can get video production for anywhere from $30/hr to $100/hr or more. Note that that’s not per hour of video; it’s per hour of freelancer time.
You’ll have to talk to the freelancer to see whether they’re willing to work on your pitch and how many hours it will take.

- This is another freelancer hub, except that rather than browsing and hiring freelancers directly, you develop a pitch and post it to the project board. Freelancers bid on the project, and you pick the one with the right mixture of skills and price for your needs. You might get a good deal, or you might struggle to find someone who works with your brand, and prices can vary wildly.

Additionally, you can use freelancers to handle different aspects of video production; you don’t need one do-it-all freelancer. You can hire one to do the script writing, another to do the voice-over and sound effects, and another to do the actual video, essentially having the individual parts “assembled” by yet another. Whether that’s better, or cheaper, depends on all of the different people involved. Obviously, freelancer costs can scale as high as your budget allows. Enterprise-level professional freelancers basically run as agencies and charge incredible prices for incredible work. It’s up to you to find the right balance.

Use a Template Video Service

The third option is a relatively cheap template-based video creation service. There are a lot of these services, and the variety of templates and the amount of customization they allow differ between them. If you’re not sure what I mean, consider something like Canva. Canva is a template-based web graphics editor that lets you create anything from a flyer to a social media post to an infographic with ease. You can use their free assets, upload your own, or pay for stock assets, in any combination. You build what you want, racking up charges for the assets you use along the way, and pay when you’ve finalized a design to export.
These video editors work in much the same way, except that instead of static images, they provide a combination of graphics, video clips, and audio (both sound effects and music) that won’t earn you a copyright violation. Here are some options you can look at:

- Animoto. A simple video template editor. You choose a template (or start from scratch) and upload the resources you want to use. You can bring your own video clips and images, or you can pay for stock assets. Customize everything and publish a well-formatted video perfect for YouTube ads. Pricing starts at $33 per month for white-label videos, or only $5 per month if you don’t mind their logo in your video.
- Biteable. Another simple template editor. You choose a template, upload assets or use stock assets, and render a finished video. Sound familiar? Pretty much all of these services work the same way. You can use Biteable for free, but to get non-watermarked videos and access to their asset library, plans start at $20 per month.
- Filmora. Unlike the two above, this is an app you download. That means it has a higher learning curve and requires you to have more of your own assets. You can find templates online from other agencies, or build something of your own from scratch.
- AdLaunch. Another template-based maker, this platform works best in Chrome and lets you start creating a video ad immediately. You can use it on a per-video basis for $10 per video, or buy a membership that starts at around $20 per month with a limit of 10 videos per month.

There are all sorts of other video editing apps out there as well. You can almost certainly find something that interests you.

Use a YouTube Partner Advertising Agency

YouTube, of course, knows full well that in order to run advertising on its platform, you need to be able to upload videos, and not everyone has video production skills in-house.
That’s why they keep a list of partner companies for a wide variety of budgets and skill levels; you can see their most up-to-date list here. A couple of the entries on the list are partners mentioned above, and a few are not. For the most part, these partner agencies are video production companies that offer a variety of services, from DIY apps to full-service video production. You can go to them with an idea, hire them to produce a video, and that’s that. The pricing depends on the length of the video, the depth of work the idea requires, and whatever other assets may be needed. Since you’re looking at a somewhat higher budget if you hire a company to do the work for you, it’s tricky to recommend this option unreservedly. If you have the budget for it, you’re pretty well guaranteed to get a great video. On the other hand, many small businesses operate on thin advertising budgets, so you might not be able to afford some of the higher-end companies.

Contract a Full-Scale Video Production Company

Speaking of high-end companies, the sky really is the limit when it comes to video production. You don’t think a company like Coke or McDonald’s is going to hire some $20-a-month service to handle its video ads, do you? Of course not. At the high end, you have companies charging tens of thousands of dollars an hour, or millions of dollars per project. There are a lot of such agencies out there. This directory lists over 8,500 firms with some element of video production in their specialty list. Prices for their services range from $1,000 to over $250,000. If you’re interested in hiring one of these companies, go right ahead. However, since the budgets are so high, the stakes are incredible, and you’ll want to vet these companies thoroughly before you sign a contract. Here are some questions you might ask before you hire one:
- Does the company sub-contract freelancers, or do they have their own team? Some mid-level companies are just fronts for a middleman arbitrage scheme that gives you mediocre results at an inflated price.
- Is the company familiar with the YouTube ad formats? Some of these companies don’t use YouTube for advertising and instead specialize in videos for television commercial spots. You want to hire a company that is familiar with the destination of your ads.
- Does the company have past clients you can talk to? You won’t always be able to contact high-end clients, but see if you can talk directly to some clients instead of just watching a hand-selected demo reel of successful ads. Even a great demo reel won’t disclose whether a company is hellish to work with.
- With your ideas in mind, what kind of budget would you need? For high-end video production, a sub-15-second YouTube ad spot shouldn’t be at the top of their service price range. You also want to make sure you won’t have to compromise your vision to stay within budget.
- Does the company have a history of working in your industry? Video is video, but different companies have different specialties. You want to make sure the company truly understands your business and your niche.

Once you’ve properly vetted a company, only then should you consider signing a contract. Make sure to shop around!

The post How to Get a Custom Video Made for YouTube Ads appeared first on Growtraffic Blog.

New – Local Mocking and Testing with the Amplify CLI

Amazon Web Services Blog -

The open source Amplify Framework provides a set of libraries, user interface (UI) components, and a command line interface (CLI) that make it easier to add sophisticated cloud features to your web or mobile apps by provisioning backend resources using AWS CloudFormation. A comment I often get when talking with our customers is that, when you are adding new features or fixing bugs, it is important to iterate as fast as possible, getting quick feedback on your actions. How can we improve their development experience? Well, last week the Amplify team launched the new Predictions category, to let you quickly add machine learning capabilities to your web or mobile app. Today, they are doing it again. I am very happy to share that you can now use the Amplify CLI to mock some of the most common cloud services it provides, and test your application 100% locally!

By mocking here I mean that instead of using the actual backend component (an API, in the case of cloud services), a local, simplified emulation of that API is used. This emulation provides the basic functionality you need for testing during development, but not the full behavior of the production service. With this new mocking capability, you can test your changes quickly, without having to provision or update the cloud resources you are using at every step. In this way, you can set up unit and integration tests that execute rapidly, without affecting your cloud backend. Depending on the architecture of your app, you can set up automatic testing in your CI/CD pipeline without provisioning backend resources. This is especially useful when editing AWS AppSync resolver mapping templates, written in Apache Velocity Template Language (VTL), which take your requests as input and output a JSON document containing the instructions for the resolver.
You can now get immediate feedback on your edits and test whether your resolvers work as expected without waiting for a deployment after every update. For this first release, the Amplify CLI can mock locally:

- AppSync GraphQL APIs, including resolver mapping templates, with storage backed by Amazon DynamoDB.
- AWS Lambda functions, invoked directly or as resolvers of a GraphQL API.
- Amazon Simple Storage Service (S3) buckets used as storage for your application.
- Amazon Cognito User Pool authentication for GraphQL APIs; you first need to get a JSON Web Token (JWT) from the actual service, and after that, the JWT is honored locally.

API Mocking

Let’s do a quick overview of what you can do. For example, let’s create a sample app that helps people store and share the location of those nice places that let you refill your reusable water bottle and reduce plastic waste. To install the Amplify CLI, I need Node.js (version >= 8.11.x) and npm (version >= 5.x):

npm install -g @aws-amplify/cli
amplify configure

Amplify supports lots of different frameworks; for this example I am using React, and I start with a sample app (npx requires npm >= 5.2.x):

npx create-react-app refillapp
cd refillapp

I use the Amplify CLI to initialize the project and add an API. The Amplify CLI is interactive, asking you questions that drive the configuration of your backend. In this case, when asked, I select to add a GraphQL API:

amplify init
amplify add api

During the creation of the API, I edit the GraphQL schema and define a RefillLocation in this way:

type RefillLocation @model {
  id: ID!
  name: String!
  description: String
  streetAddress: String!
  city: String!
  stateProvinceOrRegion: String
  zipCode: String!
  countryCode: String!
}

The fields with an exclamation mark ! at the end are mandatory. The other fields are optional and can be omitted when creating a new object.
The @model you see in the first line is a directive that uses GraphQL Transform to define top-level object types in your API that are backed by DynamoDB, and to generate for you all the necessary CRUDL (create, read, update, delete, and list) queries and mutations, plus the subscriptions to be notified of such mutations. Now, I would normally run amplify push to configure and provision the backend resources required by the project (AppSync and DynamoDB in this case). But to get quick feedback, I use the new local mocking capability by running this command:

amplify mock

Alternatively, I can use the amplify mock api command to mock just my GraphQL API. It would be the same at this stage, but it can be handy when using more than one mocking capability at a time. The output of the mock command gives you some information on what it does and what you can do, including the AppSync mock endpoint:

GraphQL schema compiled successfully.
Edit your schema at /MyCode/refillapp/amplify/backend/api/refillapp/schema.graphql or place .graphql files in a directory at /MyCode/refillapp/amplify/backend/api/refillapp/schema
Creating table RefillLocationTable locally
Running GraphQL codegen
✔ Generated GraphQL operations successfully and saved at src/graphql
AppSync Mock endpoint is running at http://localhost:20002

I keep the mock command running in a terminal window to get feedback on possible errors in my code. For example, when I edit a VTL template, the Amplify CLI recognizes the change immediately and generates the updated code for the resolver. In case of a mistake, I get an error from the running mock command. The AppSync mock endpoint gives you access to:

- the GraphQL transformations required by your API
- DynamoDB Local, to manage your API data locally
- the Amplify GraphQL Explorer, based on the open source OneGraph graphiql-explorer plugin

I can now run GraphQL queries, mutations, and subscriptions locally for my API, using a web interface.
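The local mock also accepts plain GraphQL-over-HTTP requests, which is handy for scripted integration tests outside the web interface. A minimal sketch: the `/graphql` path is an assumption (use whatever endpoint the CLI prints), and the input type name follows Amplify’s usual codegen convention:

```javascript
// Build a GraphQL request for the local AppSync mock started by
// `amplify mock api`. The "/graphql" path is an assumption; the CLI
// prints the actual endpoint (http://localhost:20002 above).
const endpoint = "http://localhost:20002/graphql";

const mutation = `
  mutation CreateRefillLocation($input: CreateRefillLocationInput!) {
    createRefillLocation(input: $input) { id name city }
  }
`;

const body = JSON.stringify({
  query: mutation,
  variables: {
    input: {
      name: "My Favorite Place",
      streetAddress: "123 Here or There",
      zipCode: "12345",
      city: "Seattle",
      countryCode: "US"
    }
  }
});

// Against a running mock, this POST returns the stored object with a
// generated id. Uncomment to run while `amplify mock` is up:
// fetch(endpoint, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body
// }).then(r => r.json()).then(console.log);

console.log(JSON.parse(body).variables.input.city); // Seattle
```

The same request shape works for queries and for the generated list operations, so a CI job can exercise resolvers against DynamoDB Local without touching the cloud.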
For example, to create a new RefillLocation, I build the mutation visually in the GraphQL Explorer. To get the list of the RefillLocation objects in a city, I build the query using the same web interface and run it against the local DynamoDB storage. When I am confident that my data model is correct, I start building the frontend code of my app, editing the App.js file of my React app, and I add functionality that I can immediately test, thanks to local mocking.

To add the Amplify Framework to my app, including the React extensions, I use Yarn:

yarn add aws-amplify
yarn add aws-amplify-react

Now, using the Amplify Framework library, I can write code like this to run a GraphQL operation:

import API, { graphqlOperation } from '@aws-amplify/api';
import { createRefillLocation } from './graphql/mutations';

const refillLocation = {
  name: "My Favorite Place",
  streetAddress: "123 Here or There",
  zipCode: "12345",
  city: "Seattle",
  countryCode: "US"
};

await API.graphql(graphqlOperation(createRefillLocation, { input: refillLocation }));

Storage Mocking

I now want to add a new feature to my app: letting users upload and share pictures of a RefillLocation. To do so, I add the Storage category to the configuration of my project and select “Content” to use S3:

amplify add storage

Using the Amplify Framework library, I can now, straight from the browser, put, get, or remove objects from S3 using the following syntax:

import Storage from '@aws-amplify/storage';

Storage.put(name, file, { level: 'public' })
  .then(result => console.log(result))
  .catch(err => console.log(err));

Storage.get(file, { level: 'public' })
  .then(result => {
    console.log(result);
    this.setState({ imageUrl: result });
    fetch(result);
  })
  .catch(err => alert(err));

All these interactions with S3 are marked as public, because I want my users to share their pictures with each other publicly, but the Amplify Framework supports different access levels, such as private, protected, and public.
You can find more information on this in the File Access Levels section of the Amplify documentation. Since S3 storage is supported by this new mocking capability, I again use amplify mock to test my whole application locally, including the backend used by my GraphQL API (AppSync and DynamoDB) and my content storage (S3). If I want to test only part of my application locally, I can use amplify mock api or amplify mock storage to mock only the GraphQL API or only the S3 storage.

Available Now

There are lots of other features that I didn’t have time to cover in this post; the best way to learn is to be curious and get hands-on! You can start using Amplify by following the get-started tutorial. Being able to mock and test your application locally can help you build and refine your ideas faster. Let us know what you think in the Amplify CLI GitHub repository.

— Danilo


