Industry Buzz

The Month in WordPress: June 2020

WordPress.org News -

June was an exciting month for WordPress! Major changes are coming to the Gutenberg plugin, and WordCamp Europe brought the WordPress community closer together. Read on to learn more and to get all the latest updates.

WordPress 5.4.2 released

We said hello to WordPress 5.4.2 on June 10. This security and maintenance release features 17 fixes and 4 enhancements, so we recommend that you update your sites immediately. To download WordPress 5.4.2, visit your Dashboard, click on Updates, then Update Now, or download the latest version directly from WordPress.org. For more information, visit this post, review the full list of changes on Trac, or check out the HelpHub documentation page for version 5.4.2. WordPress 5.4.2 is a short-cycle maintenance release. The next major release will be version 5.5, planned for August 2020. Want to get involved in building WordPress Core? Follow the Core team blog, and join the #core channel in the Making WordPress Slack group.

Gutenberg 8.3 and 8.4

The core team launched Gutenberg 8.3 and 8.4 this month, paving the way for some exciting block editor features. Version 8.3 introduced enhancements like a reorganized, more intuitive set of block categories, a parent block selector, an experimental spacing control, and user-controlled link color options. Version 8.4 comes with new image-editing tools and the ability to edit options for multiple blocks. The block directory search feature, previously available as an experimental feature, is now enabled for all Gutenberg installations. For full details on these Gutenberg releases, visit the posts about 8.3 and 8.4. Want to get involved in building Gutenberg? Follow the Core team blog, contribute to Gutenberg on GitHub, and join the #core-editor channel in the Making WordPress Slack group.

WordPress Bumps Minimum Recommended PHP Version to 7.2

In a major update, WordPress has bumped the minimum PHP recommendation to 7.2. The ServeHappy API has been updated to set the minimum acceptable PHP version to 7.2, while the WordPress downloads page recommends 7.3 or newer. Previously, the ServeHappy dashboard widget showed the upgrade notice to users of PHP 5.6 or lower. This decision comes after discussions with the core Site Health team and the Hosting team, both of which recommended that the upgrade notice be shown to users of PHP <= 7.1.

WordCamp Europe 2020 Moved Online

Following the success of a remote WordCamp Spain, WordCamp Europe was held fully online from June 4 to 6. The event drew a record 8,600 signups from people based in 138 countries, along with 2,500 signups for contributor day. WCEU Online also showcased 33 speakers and 40 sponsors, in addition to a Q&A with Matt Mullenweg. You can find the videos of the event on WordPress.tv, or you can catch the live stream recording of the entire event on the WP Europe YouTube channel. Want to get involved with the Community team? Follow the Community blog, or join them in the #community-events channel in the Making WordPress Slack group. To organize a Meetup or WordCamp, visit the handbook page.

Further Reading:

Josepha Haden (@chanthaboune), the executive director of the WordPress project, published a post that highlights resources on how the global WordPress community can focus on equity to help dismantle racial, societal, and systemic injustice.
PHP, the primary programming language in which WordPress is written, celebrated its 25th anniversary this month!

The Community team is updating the WordCamp code of conduct to address discrimination based on age, caste, social class, and other identifying characteristics.

The WordPress Core team is promoting more inclusive language by updating all git repositories to use `trunk` instead of `master`. Additionally, the team proposes to rename “invalid,” “worksforme,” and “wontfix” ticket resolutions to “not-applicable,” “not-reproducible” or “cannot-reproduce,” and “not-implemented,” respectively.

The Documentation team is working on an external linking policy and has started a discussion on how to allow linking to trusted sources to benefit users.

The Core team has put up a proposal to merge extensible core sitemaps to WordPress core in the 5.5 release. The feature is currently available as a feature plugin.

WordCamp Denver was held online May 26–27. The event sold over 2,400 tickets and featured 27 speakers and 20 sponsors. You can catch the recorded live stream on the event site.

The Core team is working on updating the version of jQuery used in WordPress core.

Have a story that we should include in the next “Month in WordPress” post? Please submit it here.

Open Source Software: Hearts, Minds, and Acquisitions

cPanel Blog -

How Open Source Software is changing the world: In the past decade, Open Source Software has become a legitimized business model and has taken the world by storm. What started back in the 1980s as a free software initiative has grown into massive volunteer communities and industry-leading software platforms. A recent CB Insights report estimates that the Open Source service industry will reach nearly $33 billion by 2022. The History of Open Source Software: Open Source Software has its roots ...

Taking EWWW IO Off WP Engine’s Disallowed Plugins List

WP Engine -

Plugins are a huge part of the WordPress ecosystem—they’re one of the key features that make it such a flexible platform. With more than 55,000 free plugins available in the WordPress Plugin Repository, as well as thousands of premium or “paid” plugins available in the market today, plugins represent an increasingly massive opportunity for adding…

Announcing the Porting Assistant for .NET

Amazon Web Services Blog -

.NET Core is the future of .NET! Version 4.8 of the .NET Framework is the last major version to be released, and Microsoft has stated it will receive only bug-, reliability-, and security-related fixes going forward. For applications where you want to continue to take advantage of future investments and innovations in the .NET platform, you need to consider porting your applications to .NET Core. There are also additional reasons to consider porting to .NET Core, such as benefiting from innovation in Linux and open source, improved application scaling and performance, and reduced licensing spend.

Porting can, however, entail significant manual effort, some of which is undifferentiated, such as updating references to project dependencies. When porting .NET Framework applications, developers need to search for compatible NuGet packages and update those package references in the application’s project files, which also need to be updated to the .NET Core project file format. Additionally, they need to discover replacement APIs, since .NET Core contains a subset of the APIs available in the .NET Framework. As porting progresses, developers have to wade through long lists of compile errors and warnings to determine the best or highest-priority places to continue chipping away at the task. Needless to say, this is challenging, and the added friction could be a deterrent for customers with large portfolios of applications.

Today we announced the Porting Assistant for .NET, a new tool that helps customers analyze and port their .NET Framework applications to .NET Core running on Linux. The Porting Assistant for .NET assesses both the application source code and the full tree of public API and NuGet package dependencies to identify those incompatible with .NET Core and guides developers to compatible replacements when available. The suggestion engine for API and package replacements is designed to improve over time as the assistant learns more about the usage patterns and frequency of missing packages and APIs. The Porting Assistant for .NET differs from other tools in that it is able to assess the full tree of package dependencies, not just incompatible APIs. It also uses solution files as the starting point, which makes it easier to assess monolithic solutions containing large numbers of projects, instead of having to analyze and aggregate information on individual binaries. These and other abilities give developers a jump start in the porting process.

Analyzing and porting an application

Getting started with porting applications using the Porting Assistant for .NET is easy, with just a couple of prerequisites. First, I need to install the .NET Core 3.1 SDK. Second, I need a credential profile (compatible with the AWS Command Line Interface (CLI), although the CLI is not used or required); a minimal sketch of such a profile appears below. The credential profile is used to collect compatibility information on the public APIs and packages (from NuGet and core Microsoft packages) used in your application and the public NuGet packages that it references. With those prerequisites taken care of, I download and run the installer for the assistant. With the assistant installed, I check out my application source code and launch the Porting Assistant for .NET from the Start menu. If I’ve previously assessed some solutions, I can view and open those from the Assessed solutions screen, enabling me to pick up where I left off. Or I can select Get started, as I’ll do here, from the home page to begin assessing my application’s solution file.
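For reference, the credential profile mentioned above uses the standard AWS shared credentials file format, typically stored at ~/.aws/credentials. The profile name and key values shown here are documentation-style placeholders, not real credentials:

[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

Any named profile works just as well; you simply select it when the assistant prompts for one.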
I’m asked to select the credential profile I want to use, and here I can also elect to opt in to sharing my telemetry data. Sharing this data helps to further improve suggestion accuracy for all users as time goes on, and is useful in identifying issues, so we hope you consider opting in. I click Next, browse to select the solution file that I want, and then click Assess to begin the analysis. For this post I’m going to use the open source NopCommerce project. When analysis is complete I am shown the overall results: the number of incompatible packages the application depends on, the APIs it uses that are incompatible, and an overall Portability score. This score is an estimation of the effort required to port the application to .NET Core, based on the number of incompatible APIs it uses. If I’m working on porting multiple applications I can use this to identify and prioritize the applications I want to start on first.

Let’s dig into the assessment overview to see what was discovered. Clicking on the solution name takes me to a more detailed dashboard, and here I can see the projects that make up the application in the solution file, and for each the number of incompatible package and API dependencies, along with the portability score for each particular project. The current port status of each project is also listed, if I’ve already begun porting the application and have reopened the assessment. Note that with no project selected in the Projects tab, the data shown in the Project references, NuGet packages, APIs, and Source files tabs is solution-wide, but I can scope the data if I wish by first selecting a project.

The Project references tab shows me a graphical view of the package dependencies, and I can see where the majority of the dependencies are consumed, in this case the Nop.Core, Nop.Services, and Nop.Web.Framework projects. This view can help me decide where I might want to start first, so as to get the most ‘bang for my buck’ when I begin. I can also select projects to see the specific dependencies more clearly. The NuGet packages tab gives me a look at the compatible and incompatible dependencies, and suggested replacements if available. The APIs tab lists the incompatible APIs, what package they are in, and how many times they are referenced. Source files lists all of the source files making up the projects in the application, with an indication of how many incompatible API calls can be found in each file. Selecting a source file opens a view showing me where the incompatible APIs are being used and suggested package versions to upgrade to, if they exist, to resolve the issue. If simply updating to a different package version does not resolve the issue, then I need to crack open a source editor and update the code to use a different API or approach. Here I’m looking at the report for DependencyRegistrar.cs, which exists in the Nop.Web project and uses the Autofac NuGet package.

Let’s start porting the application, starting with the Nop.Core project. First, I navigate back to the Projects tab, select the project, and then click Port project. During porting the tool will help me update project references to NuGet packages, and it also updates the project files themselves to the newer .NET Core format. I have the option of either making a copy of the application’s solution file, project files, and source files, or I can have the changes made in place. Here I’ve elected to make a copy.
Clicking Save copies the application source code to the selected location and opens the Port projects view, where I can set the new target framework version (in this case netcoreapp3.1) and see the list of NuGet dependencies for the project that I need to upgrade. For each incompatible package, the Porting Assistant for .NET gives me a list of possible version upgrades, and for each version I am shown the number of incompatible APIs that will either remain or will additionally become incompatible. For the package I selected here there’s no difference, but for cases where later versions potentially increase the number of incompatible APIs that I would then need to manually fix up in the source code, this indication helps me make a trade-off decision about whether to upgrade to the latest version of a package or stay with an older one. Once I select a version, the Deprecated API calls field alongside the package gives me a reminder of what I need to fix up in a code editor; clicking on the value summarizes the deprecated calls. I continue with this process for each package dependency and, when I’m ready, click Port to have the references updated. Using my IDE, I can then go into the source files and work on replacing the incompatible API calls, using the Porting Assistant for .NET’s source file and deprecated API list views as a reference, and follow a similar process for the other projects in my application.

Improving the suggestion engine

The suggestion engine behind the Porting Assistant for .NET is designed to learn and give improved results over time, as customers opt in to sharing their telemetry. The data models behind the engine, which are the result of analyzing hundreds of thousands of unique packages with millions of package versions, are available on GitHub. We hope that you’ll consider helping improve the accuracy and completeness of the results by contributing your data. The user guide gives more details on how the data is used. The Porting Assistant for .NET is free to use and is available now.

— Steve
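To make the project file conversion described in the walkthrough concrete, here is a hand-written, abridged sketch of the change from the classic .NET Framework csproj format to the SDK-style format used by .NET Core. It is an illustration of the two file formats, not output captured from the Porting Assistant, and the package version is just an example.

A classic .NET Framework project file (abridged):

<Project ToolsVersion="15.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <TargetFrameworkVersion>v4.7.2</TargetFrameworkVersion>
  </PropertyGroup>
  <ItemGroup>
    <!-- NuGet packages are typically tracked in a separate packages.config file -->
    <Reference Include="Newtonsoft.Json" />
  </ItemGroup>
</Project>

The equivalent SDK-style project file targeting .NET Core 3.1:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- NuGet dependencies become PackageReference entries in the project file itself -->
    <PackageReference Include="Newtonsoft.Json" Version="12.0.3" />
  </ItemGroup>
</Project>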

The July 2020 promo code is tasty and in season

Name.com Blog -

Can you believe we’re more than halfway through 2020? We certainly can’t—not that we’re complaining. But what we do know is that we’re back with another promo code to save you on your .com and .net renewals throughout the month of July. Use the promo code CARROT July 1 – 31, 2020 to renew your […]

7 Mobile-Friendly Design Tips for Your Blog or Website

HostGator Blog -

Think about the last time you were in public with a bunch of strangers. What were people doing when you looked around? I bet you a shiny silver dollar that they had their heads buried deep in their smartphones, checking social media or browsing websites. If you don’t like to rely on anecdotal evidence alone, recent research shows that the number of smartphone users in just the United States is over 257.3 million. That’s nearly 80 percent of the total US population. All of this is just to say that if you have a blog or website, or are planning to launch one soon, and want to capture the attention of smartphone users, you have to make your website mobile-friendly. In other words, you have to make it possible for smartphone users to easily read and navigate your website from a mobile device. None of this pinching, scrolling, and zooming business! This post will provide the top seven mobile-friendly design tips for your blog or website.

1. Pick a mobile-responsive website or blog theme

No one said you had to be a web designer to have a mobile-friendly website. They did say, however, that you have to be picky when it comes to choosing a theme. Save yourself a headache and pick a mobile-friendly theme. As a quick review, a theme is a preconfigured framework that defines how your website looks. Mobile-responsive themes are pre-coded for optimal desktop and mobile navigation alike. If you have a WordPress blog or website, most templates will already be optimized for mobile devices. However, you can double-check by reading the theme description and also by testing the demo URL for responsiveness. If you buy a third-party WordPress theme, the theme product page will usually include a picture or demo of how the theme will look on a desktop and mobile device. If you’re running your website on HostGator, take note that all of our drag-and-drop website builder templates are mobile-friendly.

2. Include a viewport meta tag

If you’re the “I’ll do what you say, no questions asked” type of human, here’s what you need to do. Copy the following code: <meta name="viewport" content="width=device-width, initial-scale=1">. Then paste the code into the HTML <head> area for each page of your website. Tada! You’re done. Pat yourself on the back. Now the explanation: the viewport code tells browsers how to correctly display the size and scale of your content based on the device an internet searcher is using. Put simply, this meta tag provides a top-notch, multiple-device navigation experience, so website visitors can enjoy your content from a large desktop and a small six-inch mobile screen alike.

3. Simplify your content

Any time you navigate a good mobile version of a website, you’ll notice the content isn’t as rich as it is on the desktop version. When creating copy and design elements for your mobile website, the hard and fast rule is: simplify everything. Here’s what I mean. When I search Krispy Kreme (I’m not eating a dozen doughnuts, you are!) on my desktop, the home page is complete and full of all the information you could ever want to learn about these delicious confections. What happens when I search for the same website via a mobile device? The home page now shows only the essentials.
Krispy Kreme puts the most important information at the top (“order now”) and eliminates any elements that aren’t critical to a mobile shopping experience. It’s awesome. Do as Krispy Kreme does.

4. Avoid fancy design elements

The goal of mobile search is to get answers from brands as quickly and efficiently as possible. This means when you are designing your mobile-friendly website, you have to let go of all your fancy design ideas and opt for the quick and simple. Here are the two things to forget about first.

Flash? Just say no. As it turns out, Flash is not an apt name for the technology. Why? Flash can slow down a site’s load time significantly, and the more you can do to speed up your site’s load time, the more attractive your website will be to users. Additionally, neither Android nor iOS supports Flash. This means if your website’s mobile experience depends on whether your viewers can see your Flash animation, you’re going to run into trouble when someone searches via a mobile device.

Avoid the use of pop-ups and refreshers. Pop-ups and refreshers can be a great tool for desktop viewing. They are especially helpful in capturing new subscribers, making announcements, and giving discount codes. However, these tools are distracting on a mobile device. You want your content to be as simple and easy for your visitors to navigate as possible. If your visitors see irrelevant pop-ups that take up the whole screen, a potential customer could get frustrated and click out of your website. Also, have you ever tried to hit the tiny close pop-up “x” on a small screen? It’s nearly impossible. A smart alternative is to include a small bar at the top of your mobile page for promotions, announcements, or free shipping offers.

5. Consider the size of your font and buttons

Have you ever visited a blog on a mobile device and had to zoom in to consume the content? How long did you keep reading? Chances are it wasn’t very long. The font on mobile sites should be at least 14px. This makes it easy for most people to read your content without any problems. If you have any copy that is supplemental, you can keep that font at 12px. Reading articles on a mobile device isn’t the only time when the size of a design element comes into question. It’s also important to size any clickable buttons correctly. The last thing you want is for your mobile visitors to have a difficult time selecting products or tapping on buttons. Bigger buttons are best: shoot for button sizes that are at least 44px by 44px.

6. Compress your images and CSS

Did you know that 47 percent of website visitors expect a site to load in less than 2 seconds, according to Kissmetrics? And 40 percent of visitors will exit the website if the loading process takes more than 3 seconds. You’ve already avoided using Flash and eliminated distracting elements like pop-ups, but what about images and CSS? Images and CSS take up a lot of server space, meaning they take longer to load. But you don’t want to get rid of visual design elements that make the mobile user experience better. The solution is not to leave out images or style sheets, but to compress them. When you compress your image files they load faster without negatively affecting the quality of your site. (A short image-compression sketch appears after this article.)

7. Include a search function if you sell products on your mobile site

Think about the popular eCommerce platform, Amazon, for a second.
There are over 12 million products on Amazon, yet anyone can find and purchase exactly what they want in a matter of seconds. One way Amazon accomplishes this is by including a search bar at the top of its mobile app. You may not have 12 million products, but that doesn’t mean the magic of a search bar can’t help you organize the products you do offer.

Design Your Mobile-Friendly Website

There are over a billion websites in the world, but not all of these websites are created equal. Website owners who work hard to improve the mobile user experience get rewarded with more traffic, referrals, and repeat visitors. As you design your mobile-friendly website, implement the design tips listed above. These tips will help keep your website working properly when someone comes searching from a desktop or mobile device. You can get started with these responsive web design tools. Remember, you don’t have to do all the hard work yourself. HostGator’s drag-and-drop website builder is already optimized for mobile viewing. All you have to do is pick a template and customize it to your liking. Or, you can have our web design pros create a professional, mobile-friendly web design that’s all your own.
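As a minimal illustration of the image-compression tip above, here is a short Python sketch using the Pillow library. The file names are placeholders, and this is independent of HostGator’s own tooling or any WordPress plugin:

from PIL import Image  # pip install Pillow

# Re-save the original photo as an optimized JPEG. A quality setting of
# about 80 usually shrinks the file size substantially with little
# visible loss, so mobile pages load faster.
img = Image.open("hero-photo.jpg")
img.save("hero-photo-compressed.jpg", "JPEG", optimize=True, quality=80)

In practice you would run something like this (or rely on an image-optimization plugin or your host’s CDN) on every large image before publishing it.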

How to Optimize Your Content for Search Questions using Deep Learning

Bing's Webmaster Blog -

One of the most exciting developments is how well Bing and other major search engines can answer questions typed by users in search boxes. Search is moving farther and farther away from the old ten blue links and from matching typed keywords with keywords in content. Answers to search questions can come up in the form of intelligent answers, where we get a single result with the answer, and/or “People Also Ask”, where we get a list of related questions and answers to explore further. This opens both opportunities and challenges for content producers and SEOs. First, there is no keyword mapping to do, as questions rarely include the same words as their corresponding answers. We also have the challenge that questions and answers can be phrased in many different ways. How do we make sure our content is selected when our target customers search for answers we can provide? I think one approach is to evaluate our content by following the same process that Bing’s answering engine follows and contrast it with an evaluation of competitors that are doing really well. In the process of doing these competitive evaluations we will learn about the strengths and weaknesses of the systems, and the building blocks that help search engines answer questions.

BERT - Bidirectional Encoder Representations from Transformers

One of the several fundamental systems that Bing and other major search engines use to answer questions is called BERT (Bidirectional Encoder Representations from Transformers). As stated by Jeffrey Zhu, Program Manager of the Bing Platform, in the article Bing delivers its largest improvement in search experience using Azure GPUs: “Recently, there was a breakthrough in natural language understanding with a type of model called transformers (as popularized by Bidirectional Encoder Representations from Transformers, BERT) … Starting from April of this year, we used large transformer models to deliver the largest quality improvements to our Bing customers in the past year. For example, in the query "what can aggravate a concussion", the word "aggravate" indicates the user wants to learn about actions to be taken after a concussion and not about causes or symptoms. Our search powered by these models can now understand the user intent and deliver a more useful result. More importantly, these models are now applied to every Bing search query globally making Bing results more relevant and intelligent.”
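Bing’s production question-answering stack is not public, but one quick way to get a feel for how a transformer model extracts answers from your copy is to run an open-source extractive question-answering model over a passage of your own content. The sketch below uses the Hugging Face transformers library with its default question-answering pipeline; the passage is an invented placeholder standing in for a paragraph from your page:

from transformers import pipeline  # pip install transformers

# Load a general-purpose extractive question-answering model.
qa = pipeline("question-answering")

# Placeholder page copy; swap in a paragraph from your own site.
context = (
    "Returning to sport too early, drinking alcohol, poor sleep, and "
    "looking at screens for long periods can all aggravate a concussion."
)

# The example query quoted from the Bing article above.
result = qa(question="What can aggravate a concussion?", context=context)
print(result["answer"], result["score"])

If the extracted answer or its confidence score looks wrong for passages you expect to rank, that is a hint the copy may need to state the answer more directly.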

Second Half of the Year Day

Liquid Web Official Blog -

It’s July. Wait, Whaaaat? Although March and April seemed to crawl along as we all learned to navigate brand new economic and personal landscapes, it’s somehow suddenly summer. The goals we set for ourselves and our businesses back in January seem a distant memory now. Wednesday, July 1st is precisely halfway through the year 2020. The 182nd day of the year, it is the perfect time to take stock of the past six months. In so many ways, the first half of this year has been challenging for businesses and organizations. There’s a reason that nearly every email we exchange lately contains words like “unprecedented” and “uncertainty.” But we know that times of upheaval can also be times of tremendous positive change—if we are intentional about the ways we engage with our business, with our employees, and with ourselves. 2020 thus far has been tumultuous. And in six months, it will be behind us. It’s never been more important to take the time to pause, reflect, reevaluate, and work out how to move forward in ways that recognize that the current state of the world may be the norm for some time. So, how can we advance our businesses, given the circumstances in which we find ourselves?

Take stock. Reassess how the year has gone so far. Look back on your goals for the year. When originally setting objectives and targets to measure the success of ongoing projects at the start of 2020, no organization could foresee what our lives and companies and work would look like in a few short months. Now is a great time to examine those objectives and targets and re-prioritize. What should you start doing, stop doing, and keep doing? What goals have been reached, and which ones just aren’t realistic anymore? Which projects need attention, what new opportunities have presented themselves, and which efforts are simply not adding value anymore? Be ruthless. If understandable delays have occurred in your business, think of July 1st as a time to get back on track. Consider July 1st as a restart – a New Mid-Year’s day, if you will. It’s a clean slate on which to adjust goals and come up with cohesive action plans that take our new “normal” into account.

Make an action plan for forward movement. Do you need to take a different approach? Recognize the ways you can continue to make progress in the midst of uncertain times. If there are aspects of your business that need attention or adjustment, think about trying a different approach if you’re falling short of some of the goalposts you’ve set. Think about new methods and actionable steps that could help you and your team find new and positive ways of working for the latter half of the year. Try setting SMART goals—ones that are specific, measurable, attainable, relevant, and time-based.

Keep security up to date. Take some time at the beginning of July to ensure that your online security is up to par. As most companies continue working remotely, cybersecurity protection is more important than ever. Make sure that your cybersecurity is up-to-date and that any necessary updates have been installed. Prevent security issues and make sure the second half of the year is as smooth as possible. Security is complex and can be viewed as “just another thing to worry about.” But mitigating risk is a critical component of any successful business, and you owe it to your customers to protect them – and to yourself to protect what you’ve worked so hard to achieve. Don’t make a hard year worse by succumbing to a security breach that could have been prevented.
If you need help, here are a few good resources: 3 Essential Cybersecurity Tips for 2020, Cybersecurity Best Practices, and 3 Steps to Mitigate Security Threats.

Holidays! Think ahead. Planning ahead for the holiday season will be essential for eCommerce stores. From your website optimization to products, July is the perfect time to plan out what your strategy should be for the 2020 holiday season. It is also a good time to get in touch with suppliers and distributors to understand any potential delays and restrictions due to the pandemic. Online business will be the order of the day. Is your digital commerce strategy and site ready to go? No? Let’s get on it. Here’s how we can help.

Prepare Your Site for Potential Spikes in Traffic: Liquid Web offers infrastructure that can scale quickly, ensuring server resources can meet demands. Don’t leave customers unable to check out with your products due to slow load times, or worse, a crashing site.

Mitigating a Malicious Attack on Your Server: Handling legitimate traffic can be quite enough without adding on a DDoS attack or code injection. Liquid Web offers basic and advanced DDoS protection to help, along with other add-ons such as firewalls, load balancing, or ServerSecurePlus for server hardening.

Get creative about “events.” At the beginning of 2020, I approved a Marketing plan that invested heavily in events, in-person Partner Summits, and travel to clients. Well, that’s not our world anymore. As conferences, face-to-face meetings, and business-related travel remain on hold, getting creative about ways to connect with your customers and employees is important. We’ve all had to make adjustments to our event planning for this year. While we are unable to come together for in-person conferences, consider using this time to completely rethink the way your business approaches these gatherings. A webinar format is a great option for the time being; give thought to how you can make webinars fun and interesting. But also consider brainstorming about how best to revamp in-person events when the time comes.

Connect with employees. It is vital to find ways to connect with employees who are working remotely. Outside of whatever sort of mid-year performance review you may do, consider reaching out to remote employees to have separate, open discussions about their professional goals. Though many things in our businesses have shifted and changed, our employees still have ambitions. Ask them about their hopes and think about ways you can support them. Many people are using this time to think about personal development. Perhaps there are training resources or seminars that you can offer remotely to help employees build skill sets or try new things.

Finally, be sure to celebrate your successes. What have you done well in these challenging times? How have you shown up for your employees and reminded them of their value? It is essential to look back at all you have accomplished during the first half of the year. Be sure to celebrate and congratulate yourselves and your colleagues as we continue working through this pandemic.

KIOXIA Demonstrates New EDSFF SSD Form Factor Purpose-Built for Servers and Storage

My Host News -

SAN JOSE, CA – Delivering a glimpse into the future of data center architectures, KIOXIA America, Inc. (formerly Toshiba Memory America, Inc.), has successfully demonstrated [1] an E3.S full-function development vehicle in conjunction with a leading server original equipment manufacturer (OEM). This breakthrough Enterprise and Datacenter SSD Form Factor (EDSFF), also known as E3, is designed to maximize system density, efficiency and simplicity. Being developed in the SNIA SFF-TA-1008 technical work group, in which KIOXIA is an active member, EDSFF E3.Short (E3.S) and E3.Long (E3.L) solutions are the future of SSD storage for servers and All Flash Arrays (AFAs) in cloud and enterprise data centers. Featuring one common connector, this innovative form factor standard for PCIe® technology-based devices, such as NVMe SSDs, graphics processing units (GPUs), and network interface cards (NICs), enables a complete array of footprint, power and capacity options, offering unprecedented system flexibility. In addition, EDSFF E3.x drives break free from the design limitations of the 2.5” form factor [2] by supporting higher power budgets (up to 40 watts) and better signal integrity to deliver the performance promised by PCIe Gen 5.0 and beyond, optimizing future generations of server and storage systems. Furthermore, these future-forward SSD designs feature improved thermal characteristics that address the cooling challenges that come with game-changing speeds. KIOXIA’s EDSFF full-function development vehicle is based on the E3.S thin (7.5mm) form factor, which offers increased flash storage density per drive for optimized power efficiency and rack consolidation. The drive features the same core components as the recently announced CM6 Series PCIe 4.0 NVMe 1.4 SSD and is configured with x8 lanes and up to 28W of power. Additional E3 size and width options will also be supported, including E3.S thick (16.8mm) and E3.L thin. “We are excited to work with the world’s leading server and storage system developers to bring new classes of systems to market that will be able to fully unleash the power of flash memory, NVMe and PCIe,” noted Shigenori Yanagi, Technology Executive, SSD at KIOXIA Corporation. “EDSFF E3.S will power the future generations of servers and storage, making the data center even more efficient.” Offering one of the broadest SSD product portfolios, KIOXIA is committed to being a leader in data center flash storage solutions through flash memory, SSD and software innovations. For more information, visit www.kioxia.com.

About KIOXIA America, Inc.

KIOXIA America, Inc. (formerly Toshiba Memory America, Inc.) is the U.S.-based subsidiary of KIOXIA Corporation, a leading worldwide supplier of flash memory and solid state drives (SSDs). From the invention of flash memory to today’s breakthrough BiCS FLASH 3D technology, KIOXIA continues to pioneer cutting-edge memory solutions and services that enrich people’s lives and expand society’s horizons. The company’s innovative 3D flash memory technology, BiCS FLASH, is shaping the future of storage in high-density applications, including advanced smartphones, PCs, SSDs, automotive, and data centers. For more information, please visit KIOXIA.com.

[1] Successful demonstration completed at a server OEM lab.
[2] “2.5-inch” indicates the form factor of the SSD. It does not indicate the drive’s physical size.

PCIe is a registered trademark of PCI-SIG. NVMe is a trademark of NVM Express, Inc.

Rocket Launches First WordPress Edge Cloud Service, with Built-in Website Security Suite, Increasing WordPress Speed by 2-3x Worldwide

My Host News -

WEST PALM BEACH, FL – Led by web hosting industry veteran and seasoned startup founder Ben Gabler, Rocket today emerged as an all-in-one Managed WordPress Hosting provider at the Edge of the cloud. Rocket is Gabler’s vision for bringing WordPress to the Edge of the Cloud, with global caching and website security inherently built into the platform. After spending years working with WordPress users seeking CDN and WAF solutions to layer on top of their hosting provider, Gabler realized there was an opportunity not only to integrate these services but also to simplify the experience so every WordPress website in the world can effortlessly benefit. “WordPress users should be able to focus on building and managing their website content without needing a degree in security and performance best practices,” said Gabler. “Our platform has a unique footprint at the Edge of the Cloud that not only brings WordPress hosting as close as possible to your website visitors, but it also provides Enterprise Website Optimization and Security tools at no additional cost.” Starting with the end result, Rocket’s platform enables WordPress users of all sizes to deliver maximum WordPress performance across the globe, while maintaining a secure experience. Rocket delivers a full suite of optimization tools within the platform, removing the skill sets and resources needed to manually configure separate plugins and operational settings during a WordPress deployment, optimization, or update. WordPress users can now focus on what matters most: making a digital impact.

Rocket platform benefits include:

Easy to use Control Panel: Modern interface built for WordPress users of all sizes, making it easier than ever to develop, stage, launch, and boost your WordPress website performance.

Premium Servers & Global Footprint: Directly connected with all major ISP networks, the platform’s enterprise-grade servers at the Edge of the cloud put your WordPress site within arm’s reach of your website visitors.

Built-in Caching (CDN) and Proxying: Rocket automatically caches all website content in over 200 locations, no plugin or configuration required. Rocket’s global caching fully supports dynamic content, including WooCommerce. We also proxy and cache several third-party scripts, like Google Fonts, to reduce DNS lookups and improve load times on your pages.

Always-on Website Security suite: Every WordPress installation includes a Website Firewall (WAF) and malware scanning/patching specifically tuned for WordPress at no additional cost, protecting every WordPress install from common HTTP attacks, weak password usage, brute-force attempts, and much more.

Optimized JavaScript and Images: Our platform can automatically optimize your website’s use of JavaScript and improve page load time by loading it asynchronously. The platform also applies lossless image optimization with WebP support.

Automated WordPress Updates: Save time, headache, and money with our automated WordPress core, plugin, and theme updates.

As a WordPress plus Edge solution, Rocket is the only provider to leverage over 200 locations around the world, extending the footprint of every WordPress installation. Not only are caching and security built in, the Rocket platform minimizes packet transfer, delivering premium performance at increased speeds to site users anywhere in the world. To build this innovative and easy-to-use WordPress hosting platform, Rocket teamed up with Total Server Solutions.
“We take great pride in customizing our global reach and hyper-converged cloud offering to support our clients’ innovations,” said Gary Simat, Total Server Solutions’ CEO. “The Rocket story is one of many partnerships where the result achieved more than each could accomplish individually. Even more, Total Server Solutions is using the Rocket platform to build our WordPress online presence.”

“We’re really excited to bring this product to market with our partners. While the name Rocket may be new, the team behind it is extremely seasoned,” said Gabler. “Seeing WordPress evolve from the early 2000s to where it is today, we’re thrilled to be a part of the WordPress community again. We strongly feel our platform will really make an impact and provide a better Internet experience for users all over the world.”

Rocket’s Simple, Fast, & Secure Managed WordPress Hosting is available today and is priced based on the resources required, starting at just $25 a month. See full pricing details on the Rocket website or contact the Sales team for more information.

About Rocket

Rocket is an all-in-one Managed WordPress Hosting platform built for WordPress websites of all sizes. We deploy and cache your entire website in over 200 locations with built-in website security tools. We primarily compete with WP Engine, Kinsta, and GoDaddy. Our management team comprises hosting industry experts who bring over 30 years of combined experience to the table. With an easy-to-use control panel on top of an enterprise-grade global footprint, we hope to help make the internet a safer place for WordPress users of all sizes. For more information, visit onrocket.com

Liquid Web Acquires ServerSide, a leading Microsoft Windows CMS Hosting Provider

My Host News -

LANSING, Mich. – Liquid Web, LLC, the market leader in managed hosting and managed application services for SMBs and entrepreneurs, is excited to announce the acquisition of ServerSide, adding proven experience in hosting the leading Microsoft Windows content management solutions to Liquid Web’s portfolio. The acquisition of ServerSide bolsters Liquid Web’s VMware cloud hosting capabilities for small to medium businesses, launched in 2019. It also accelerates the company’s entrance into the Progress Sitefinity, Kentico, and Sitecore hosting market. The ServerSide team, including founder and CEO Steve Oren, has joined Liquid Web and has helped lead the effort to migrate customers onto the Liquid Web platform. “The acquisition of ServerSide supports Liquid Web’s mission to power leading content management platforms. With ServerSide, we are excited about building upon the relationships ServerSide had with Sitefinity, Kentico, and Sitecore and their ecosystem partners,” said Joe Oesterling, CTO. “We are excited about joining the Liquid Web team. We’ve successfully migrated our customers to Liquid Web’s platform, and we are working hand in hand to deploy our VMware architecture more broadly within Liquid Web,” said Steve Oren, former CEO of ServerSide. “We look forward to using Liquid Web’s scale to be a bigger player in the leading Windows CMS ecosystems,” said Oren. To learn more about the Liquid Web Private Cloud powered by VMware and NetApp, visit: https://www.liquidweb.com/products/private-cloud/.

About the Liquid Web Family of Brands

Building on over 20 years of success, the Liquid Web Brand Family consists of four companies (Liquid Web, Nexcess, iThemes, and InterWorx), delivering software, solutions, and managed services for mission-critical sites, stores, and applications to SMBs and the designers, developers, and agencies who create for them. With more than 1.5 million sites under management, the Liquid Web Family of Brands serves over 45,000 customers spanning 150 countries. Collectively, the companies have assembled a world-class team of industry experts, provide unparalleled service from a dedicated group of solution engineers available 24/7/365, and own and manage 10 global data centers. As an industry leader in customer service*, the rapidly expanding brand family has been recognized among INC. Magazine’s 5000 Fastest-Growing Companies for twelve years. For more information, please visit https://www.liquidweb.com/.

*2019 Net Promoter Score of 67

TierPoint Announces Seattle Data Center Expansion Plan

My Host News -

SEATTLE – TierPoint, a leading provider of secure, connected data center and cloud solutions at the edge of the internet, today announced plans to expand its state-of-the-art data center in Seattle’s KOMO Plaza. The nearly 18,000 sq. ft. expansion will include new raised floor, office and support space, featuring fully redundant and generator-backed power; high-efficiency cooling; multi-layer physical security, meeting stringent regulatory compliance standards; and diverse network connectivity through a group of 15 carriers and onramp providers, including AWS Direct Connect. “We already have commitments from customers for some of the expanded capacity, and additional room to support the robust demand we’re seeing for colocation and cloud solutions in the Pacific Northwest,” said TierPoint Region Vice President Boyd Goodfellow. “Seattle is a key market for us and one of the fastest-growing markets for IT and other technology companies in the country.” TierPoint expects the expansion to be completed and available to clients later this year, with the total facility then featuring nearly 3.5 MW of installed critical load capacity, scalable to 5.0 MW. About TierPoint Meeting clients where they are on their journey to IT transformation, TierPoint (tierpoint.com) is a leading provider of secure, connected data center and cloud solutions at the edge of the internet. The company has one of the largest customer bases in the industry, with thousands of clients ranging from the public to private sectors, from small businesses to Fortune 500 enterprises. TierPoint also has one of the largest and most geographically diversified footprints in the nation, with over 40 world-class data centers in 20 U.S. markets and 8 multi-tenant cloud pods, connected by a coast-to-coast network. Led by a proven management team, TierPoint’s highly experienced IT professionals offer a comprehensive solution portfolio of private, multitenant, managed hyperscale, and hybrid cloud, plus colocation, disaster recovery, security, and other managed IT services.

Equinix Expands Dallas Infomart Campus with New $142M Data Center and 5G Proof of Concept Center

My Host News -

REDWOOD CITY, CA – Equinix, Inc. (Nasdaq: EQIX), the global interconnection and data center company, today announced the expansion of its Dallas Infomart Data Center campus with the opening of a new $142M International Business Exchange (IBX®) data center and the launch of its 5G and Edge Proof of Concept Center (POCC). The moves support the growing demand for companies to accelerate their evolution from traditional to digital businesses by rapidly scaling their infrastructure, easily adopting hybrid multicloud architectures and interconnecting with strategic business partners within the Platform Equinix® global ecosystem of nearly 10,000 customers. The Dallas region is a major communications hub for the southern United States, with a concentration of telecommunications companies. Many of these companies are part of the dense and diverse ecosystem of carriers, clouds and enterprises at Equinix’s Dallas Infomart campus. This ecosystem makes Equinix Dallas an ideal location for companies seeking to test and validate new 5G and edge innovations. The Equinix 5G and Edge Proof of Concept Center (POCC) will provide a 5G and edge “sandbox” environment, enabling Mobile Network Operators (MNOs), cloud platforms, technology vendors and enterprises to directly connect with the largest edge data center platform in order to test, demonstrate and accelerate complex 5G and edge deployment and interoperability scenarios. The Equinix 5G and Edge POCC aims to:

Develop 5G and edge architectures that leverage ecosystems already resident at Equinix.

Explore hybrid multicloud interconnectivity scenarios between MNOs, public clouds and private infrastructures.

Develop multiparty business models, partnering strategies and go-to-market motions for the nascent 5G and edge market.

The DA11 IBX is the ninth data center for Equinix in the Dallas metro area, and the second building on the growing Dallas Infomart campus. It is a four-story, state-of-the-art data center designed to deliver both small- and large-capacity deployments. The innovative, modular construction incorporates Equinix’s Flexible Data Center (FDC) principles, which leverage common design elements for space, power and cooling to reduce capital cost while ensuring long-term maintenance predictability. For Equinix customers, this approach supports delivery of the highest standards for uptime and availability while lowering operating risk and complexity. It will provide needed capacity for businesses seeking to architect hybrid multicloud infrastructures within a dense ecosystem of customers and partners. The $142 million first phase of DA11 provides a capacity of 1,975 cabinets and colocation space of approximately 72,000 square feet. Upon completion of the planned future phases, the facility is expected to provide a total capacity of more than 3,850 cabinets and colocation space of more than 144,000 square feet. The Dallas metro represents one of the largest enterprise and colocation markets in the Americas and includes nine Equinix IBX data centers that house more than 135 network service providers—more than any other data center provider in the Dallas metro area. Directly connected to Equinix’s Infomart Data Center, the fifth-most-dense interconnection hub in the United States, these colocation facilities provide proximity to banking, commerce, telecommunications, computer technology, energy, healthcare and medical research, transportation and logistics companies in the metro area.
Dallas is a major interconnection point for Latin America traffic with key terrestrial routes serving Central and South America. In combination with our operations in Miami, Los Angeles, Mexico, Bogotá, Sao Paulo and Rio de Janeiro, Equinix continues to expand solutions for enterprise, cloud and content providers looking to address the Latin America Market. According to the 2019 Global Interconnection Index (GXI) Report published by Equinix, enterprise consumption of interconnection bandwidth is expected to grow by 63 percent CAGR in LATAM by 2022 and will contribute up to 11 percent of interconnection bandwidth globally. In this region, content and digital media is expected to outpace other regions in interconnection bandwidth adoption. The Equinix Dallas IBX data centers offer access to Equinix Cloud Exchange Fabric (ECX Fabric), an on-demand platform that enables Equinix customers to discover and dynamically connect to any other customer across any Equinix location globally. Offered through an easy-to-use portal and a single connection to the Equinix platform, ECX Fabric offers access to more than 2,100 of the world’s largest enterprises, cloud service providers (including Alibaba Cloud, Amazon Web Services, Google Cloud Platform, IBM Cloud, Microsoft Azure and Oracle Cloud) and SaaS providers (including Salesforce, SAP and ServiceNow, among others). By reaching their entire digital ecosystem through a single private and secure connection, companies can rapidly scale their digital business operations globally. Customers can also locate their data close to the edge of their network, increasing performance by keeping data near consumption points. Equinix is a leader in data center sustainability and in greening the supply chains of its customers. Equinix’s long-term goal of using 100% clean and renewable energy for its global platform has resulted in significant increases in renewable energy coverage globally including 100% renewable throughout the United States. Equinix continues to make advancements in the way it designs, builds and operates its data centers with high energy efficiency standards. DA11 customers will benefit from reductions of their CO2 footprint through Equinix’s renewable energy procurement strategy and the use of energy-efficient systems throughout the facility. In the Americas, Equinix now operates more than 90 IBX data centers strategically located in Brazil, Canada, Colombia, Mexico and the United States. Globally, Platform Equinix is comprised of more than 210 IBX data centers across 56 markets and 26 countries, providing data center and interconnection services for more than 9,700 of the world’s leading businesses. About Equinix Equinix, Inc. (Nasdaq: EQIX) connects the world’s leading businesses to their customers, employees and partners inside the most-interconnected data centers. On this global platform for digital business, companies come together across more than 55 markets on five continents to reach everywhere, interconnect everyone and integrate everything they need to create their digital futures. Equinix.com.

Making the WAF 40% faster

CloudFlare Blog -

Cloudflare’s Web Application Firewall (WAF) protects against malicious attacks aiming to exploit vulnerabilities in web applications. It is continuously updated to provide comprehensive coverage against the most recent threats while ensuring a low false positive rate. As with all Cloudflare security products, the WAF is designed to not sacrifice performance for security, but there is always room for improvement. This blog post provides a brief overview of the latest performance improvements that were rolled out to our customers.

Transitioning from PCRE to RE2

Back in July of 2019, the WAF transitioned from using a regular expression engine based on PCRE to one inspired by RE2, which is based around using a deterministic finite automaton (DFA) instead of backtracking algorithms. This change came as a result of an outage where an update added a regular expression which backtracked enormously on certain HTTP requests, resulting in exponential execution time. After the migration was finished, we saw no measurable difference in CPU consumption at the edge, but noticed execution time outliers in the 95th and 99th percentiles decreased, something we expected given RE2's guarantees of a linear time execution with the size of the input. As the WAF engine uses a thread pool, we also had to implement and tune a regex cache shared between the threads to avoid excessive memory consumption (the first implementation turned out to use a staggering amount of memory). These changes, along with others outlined in the post-mortem blog post, helped us improve reliability and safety at the edge and have the confidence to explore further performance improvements. But while we’ve highlighted regular expressions, they are only one of the many capabilities of the WAF engine.

Matching Stages

When an HTTP request reaches the WAF, it gets organized into several logical sections to be analyzed: method, path, headers, and body. These sections are all stored in Lua variables. If you are interested in more detail on the implementation of the WAF itself you can watch this old presentation. Before matching these variables against specific malicious request signatures, some transformations are applied. These transformations are functions that range from simple modifications like lowercasing strings to complex tokenizers and parsers looking to fingerprint certain malicious payloads. As the WAF currently uses a variant of the ModSecurity syntax, this is what a rule might look like:

SecRule REQUEST_BODY "@rx /\x00+evil" "drop, t:urlDecode, t:lowercase"

It takes the request body stored in the REQUEST_BODY variable, applies the urlDecode() and lowercase() functions to it and then compares the result with the regular expression signature \x00+evil. In pseudo-code, we can represent it as:

rx( "/\x00+evil", lowercase( urlDecode( REQUEST_BODY ) ) )

Which in turn would match a request whose body contained percent encoded NULL bytes followed by the word "evil”, e.g.:

GET /cms/admin?action=post HTTP/1.1
Host: example.com
Content-Type: text/plain; charset=utf-8
Content-Length: 16

thiSis%2F%00eVil

The WAF contains thousands of these rules and its objective is to execute them as quickly as possible to minimize any added latency to a request. And to make things harder, it needs to run most of the rules on nearly every request. That’s because almost all HTTP requests are non-malicious and no rules are going to match.
So we have to optimize for the worst case: execute everything! To help mitigate this problem, one of the first matching steps executed for many rules is pre-filtering. By checking if a request contains certain bytes or sets of strings we are able to potentially skip a considerable number of expressions. In the previous example, doing a quick check for the NULL byte (represented by \x00 in the regular expression) allows us to completely skip the rule if it isn’t found:

contains( "\x00", REQUEST_BODY ) and rx( "/\x00+evil", lowercase( urlDecode( REQUEST_BODY ) ) )

Since most requests don’t match any rule and these checks are quick to execute, overall we aren’t doing more operations by adding them. Other steps can also be used to scan through and combine several regular expressions and avoid execution of rule expressions. As usual, doing less work is often the simplest way to make a system faster.

Memoization

Which brings us to memoization - caching the output of a function call to reuse it in future calls. Let’s say we have the following expressions:

1. rx( "\x00+evil", lowercase( url_decode( body ) ) )
2. rx( "\x00+EVIL", remove_spaces( url_decode( body ) ) )
3. rx( "\x00+evil", lowercase( url_decode( headers ) ) )
4. streq( "\x00evil", lowercase( url_decode( body ) ) )

In this case, we can reuse the result of the nested function calls (1) as they’re the same in (4). By saving intermediate results we are also able to take advantage of the result of url_decode( body ) from (1) and use it in (2) and (4). Sometimes it is also possible to swap the order functions are applied to improve caching, though in this case we would get different results. A naive implementation of this system can simply be a hash table with each entry having the function(s) name(s) and arguments as the key and its output as the value. Some of these functions are expensive and caching the result does lead to significant savings. To give a sense of magnitude, one of the rules we modified to ensure memoization took place saw its execution time reduced by about 95%:

[Chart: Execution time per rule]

The WAF engine implements memoization and the rules take advantage of it, but there’s always room to increase cache hits.

Rewriting Rules and Results

Cloudflare has a very regular cadence of releasing updates and new rules to the Managed Rulesets. However, as more rules are added and new functions implemented, the memoization cache hit rate tends to decrease. To improve this, we first looked into the rules taking the most wall-clock time to execute using some of our performance metrics:

[Chart: Execution time per rule]

Having these, we cross-referenced them with the ones having cache misses (output is truncated with [...]):

moura@cf $ ./parse.py --profile
Hit Ratio:
-------------
0.5608

Hot entries:
-------------
[urlDecode, replaceComments, REQUEST_URI, REQUEST_HEADERS, ARGS_POST]
[urlDecode, REQUEST_URI]
[urlDecode, htmlEntityDecode, jsDecode, replaceNulls, removeWhitespace, REQUEST_URI, REQUEST_HEADERS]
[urlDecode, lowercase, REQUEST_FILENAME]
[urlDecode, REQUEST_FILENAME]
[urlDecode, lowercase, replaceComments, compressWhitespace, ARGS, REQUEST_FILENAME]
[urlDecode, replaceNulls, removeWhitespace, REQUEST_URI, REQUEST_HEADERS, ARGS_POST]
[...]

Candidates:
-------------
100152A - replace t:removeWhitespace with t:compressWhitespace,t:removeWhitespace
100214 - replace t:lowercase with (?i)
100215 - replace t:lowercase with (?i)
100300 - consider REQUEST_URI over REQUEST_FILENAME
100137D - invert order of t:replaceNulls,t:lowercase
[...]
After identifying more than 40 rules, we rewrote them to take full advantage of memoization and added pre-filter checks where possible. Many of these changes were not immediately obvious, which is why we’re also creating tools to aid analysts in creating even more efficient rules. This also helps ensure they run in accordance with the latency budgets the team has set.

This change resulted in an increase of the cache hit percentage from 56% to 74%, which crucially included the most expensive transformations. Most importantly, we also observed a sharp decrease of 40% in the average time the WAF takes to process and analyze an HTTP request at the Cloudflare edge.

[Chart: WAF Request Processing - Time Average]

A comparable decrease was also observed for the 95th and 99th percentiles. Finally, we saw a drop in CPU consumption at the edge of around 4.3%.

Next Steps

While the Lua WAF has served us well throughout all these years, we are currently porting it to use the same engine powering Firewall Rules. It is based on our open-sourced wirefilter execution engine, which uses a filter syntax inspired by Wireshark®. In addition to allowing more flexible filter expressions, it provides better performance and safety.

The rule optimizations we’ve described in this blog post are not lost when moving to the new engine, however, as the changes were deliberately not specific to the current Lua engine’s implementation. And while we’re routinely profiling, benchmarking, and making complex optimizations to the Firewall stack, sometimes just relatively simple changes can have a surprisingly huge effect.

How to Encourage Employees to Share Your LinkedIn Content: 4 Tips

Social Media Examiner -

Need more visibility on LinkedIn? Wondering how to get employees involved with your LinkedIn content strategy? In this article, you’ll discover four ways to help your employees share more company content with their personal networks on LinkedIn. Why Encourage Employees to Share Company Content on Your LinkedIn Page? Getting your colleagues involved with your LinkedIn […] The post How to Encourage Employees to Share Your LinkedIn Content: 4 Tips appeared first on Social Media Examiner | Social Media Marketing.

FindMyHost Releases July 2020 Editors’ Choice Awards

My Host News -

OKLAHOMA CITY, OK – Web Hosting Directory and Review site www.FindMyHost.com released the July Editors’ Choice Awards for 2020 today. Web hosting companies strive to provide their customers with the very best service and support. We want to take the opportunity to acknowledge the hosts in each category who have excelled in their field. The FindMyHost Editors’ Choice Awards are chosen based on editor and consumer reviews. Customers who wish to submit positive reviews for their current or past web host are free to do so by visiting the customer review section of FindMyHost.com. By doing so, you nominate your web host for next month’s Editors’ Choice Awards. We would like to congratulate all the web hosts who participated, and in particular the following, who received top honors in their field:

Dedicated Servers: GlowHost.com (Visit GlowHost.com | View Report Card)
Business Hosting: KnownSRV.com (Visit KnownSRV.com | View Report Card)
SSD Hosting: KVCHosting.net (Visit KVCHosting.net | View Report Card)
VPS: MightWeb.net (Visit MightWeb.net | View Report Card)
Secure Hosting: VPSFX.com (Visit VPSFX.com | View Report Card)
Cloud Hosting: BudgetVM.com (Visit BudgetVM.com | View Report Card)
Reseller Hosting: ZipServers.com (Visit ZipServers.com | View Report Card)
Website Monitoring: UptimeSpy.com (Visit UptimeSpy.com | View Report Card)

About FindMyHost
FindMyHost, Inc. is an online magazine that provides editor reviews, consumer hosting news, interviews, discussion forums and more. FindMyHost.com was established in January 2001 to protect web host consumers and web developers from making the wrong choice when choosing a web host. FindMyHost.com showcases a selection of web hosting companies who have undergone its approved host program testing and provides reviews from customers. FindMyHost’s extensive website can be found at www.FindMyHost.com.

AWS App2Container – A New Containerizing Tool for Java and ASP.NET Applications

Amazon Web Services Blog -

Our customers are increasingly developing their new applications with containers and serverless technologies, and are using modern continuous integration and delivery (CI/CD) tools to automate the software delivery life cycle. They also maintain a large number of existing applications that are built and managed manually or using legacy systems. Maintaining these two sets of applications with disparate tooling adds to operational overhead and slows down the pace of delivering new business capabilities. As much as possible, they want to be able to standardize their management tooling and CI/CD processes across both their existing and new applications, and see the option of packaging their existing applications into containers as the first step toward accomplishing that goal.

However, containerizing existing applications requires a long list of manual tasks such as identifying application dependencies, writing Dockerfiles, and setting up build and deployment processes for each application. These manual tasks are time consuming, error prone, and can slow down modernization efforts.

Today, we are launching AWS App2Container, a new command-line tool that helps containerize existing applications that are running on premises, in Amazon Elastic Compute Cloud (EC2), or in other clouds, without needing any code changes. App2Container discovers applications running on a server, identifies their dependencies, and generates relevant artifacts for seamless deployment to Amazon ECS and Amazon EKS. It also provides integration with AWS CodeBuild and AWS CodeDeploy to enable a repeatable way to build and deploy containerized applications.

AWS App2Container generates the following artifacts for each application component: application files/folders, Dockerfiles, container images in Amazon Elastic Container Registry (ECR), ECS task definitions, Kubernetes deployment YAML, CloudFormation templates to deploy the application to Amazon ECS or EKS, and templates to set up a build/release pipeline in AWS CodePipeline, which also leverages AWS CodeBuild and CodeDeploy.

Starting today, you can use App2Container to containerize ASP.NET (.NET 3.5+) web applications running in IIS 7.5+ on Windows, and Java applications running on Linux: standalone JBoss, Apache Tomcat, and generic Java applications such as Spring Boot, IBM WebSphere, Oracle WebLogic, etc. By modernizing existing applications using containers, you can make them portable, increase development agility, standardize your CI/CD processes, and reduce operational costs. Now let’s see how it works!

AWS App2Container – Getting Started

AWS App2Container requires that the following prerequisites be installed on the server(s) hosting your application: AWS Command Line Interface (CLI) version 1.14 or later, Docker tools, and (in the case of ASP.NET) PowerShell 5.0+ for applications running on Windows. Additionally, you need to provide appropriate IAM permissions to App2Container to interact with AWS services.

For example, let’s look at how you containerize your existing Java applications. The App2Container CLI for Linux is packaged as a tar.gz archive. The archive includes an interactive shell script, install.sh, that installs the App2Container CLI. Running the script guides users through the install steps and also updates the user’s path to include the App2Container CLI commands.

First, run a one-time initialization of the App2Container CLI on the installed server with the init command:
$ sudo app2container init
Workspace directory path for artifacts[default: /home/ubuntu/app2container/ws]:
AWS Profile (configured using 'aws configure --profile')[default: default]:
Optional S3 bucket for application artifacts (Optional)[default: none]:
Report usage metrics to AWS? (Y/N)[default: y]:
Require images to be signed using Docker Content Trust (DCT)? (Y/N)[default: n]:
Configuration saved

This sets up a workspace to store application containerization artifacts (a minimum of 20 GB of disk space should be available). You can extract them into your Amazon Simple Storage Service (S3) bucket using your AWS profile configured to use AWS services.

Next, you can view the Java processes that are running on the application server by using the inventory command. Each Java application process has a unique identifier (for example, java-tomcat-9e8e4799) which is the application ID. You can use this ID to refer to the application with other App2Container CLI commands.

$ sudo app2container inventory
{
    "java-jboss-5bbe0bec": {
        "processId": 27366,
        "cmdline": "java ... /home/ubuntu/wildfly-10.1.0.Final/modules org.jboss.as.standalone -Djboss.home.dir=/home/ubuntu/wildfly-10.1.0.Final -Djboss.server.base.dir=/home/ubuntu/wildfly-10.1.0.Final/standalone ",
        "applicationType": "java-jboss"
    },
    "java-tomcat-9e8e4799": {
        "processId": 2537,
        "cmdline": "/usr/bin/java ... -Dcatalina.home=/home/ubuntu/tomee/apache-tomee-plume-7.1.1 -Djava.io.tmpdir=/home/ubuntu/tomee/apache-tomee-plume-7.1.1/temp org.apache.catalina.startup.Bootstrap start ",
        "applicationType": "java-tomcat"
    }
}

You can also run App2Container for ASP.NET applications in an administrator-run PowerShell session on Windows Servers with IIS version 7.0 or later. Note that Docker tools and container support are available on Windows Server 2016 and later versions. You can choose to run all app2container operations on the application server with Docker tools installed, or use a worker machine with Docker tools using Amazon ECS-optimized Windows Server AMIs.

PS> app2container inventory
{
    "iis-smarts-51d2dbf8": {
        "siteName": "nopCommerce39",
        "bindings": "http/*:90:",
        "applicationType": "iis"
    }
}

The inventory command displays all IIS websites on the application server that can be containerized. Each IIS website process has a unique identifier (for example, iis-smarts-51d2dbf8) which is the application ID. You can use this ID to refer to the application with other App2Container CLI commands.

You can choose a specific application by referring to its application ID and generate an analysis report for it by using the analyze command:

$ sudo app2container analyze --application-id java-tomcat-9e8e4799
Created artifacts folder /home/ubuntu/app2container/ws/java-tomcat-9e8e4799
Generated analysis data in /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/analysis.json
Analysis successful for application java-tomcat-9e8e4799
Please examine the same, make appropriate edits and initiate containerization using "app2container containerize --application-id java-tomcat-9e8e4799"

You can use the analysis.json template generated by the application analysis to gather information on the analyzed application: the analysisInfo section helps identify all system dependencies, and the containerParameters section lets you update containerization parameters to customize the container images generated for the application.
$ cat java-tomcat-9e8e4799/analysis.json
{
    "a2CTemplateVersion": "1.0",
    "createdTime": "2020-06-24 07:40:5424",
    "containerParameters": {
        "_comment1": "*** EDITABLE: The below section can be edited according to the application requirements. Please see the analysisInfo section below for details discovered regarding the application. ***",
        "imageRepository": "java-tomcat-9e8e4799",
        "imageTag": "latest",
        "containerBaseImage": "ubuntu:18.04",
        "coopProcesses": [ 6446, 6549, 6646 ]
    },
    "analysisInfo": {
        "_comment2": "*** NON-EDITABLE: Analysis Results ***",
        "processId": 2537,
        "appId": "java-tomcat-9e8e4799",
        "userId": "1000",
        "groupId": "1000",
        "cmdline": [...],
        "os": {...},
        "ports": [...]
    }
}

You can also run the extract command to generate an application archive for the analyzed application:

$ sudo app2container extract --application-id java-tomcat-9e8e4799

This depends on the analysis.json file generated earlier in the workspace folder for the application, and it adheres to any containerization parameter updates specified there. By using the extract command, you can continue the workflow on a worker machine after running the first set of commands on the application server.

Next, the containerize command generates Docker images for the selected application:

$ sudo app2container containerize --application-id java-tomcat-9e8e4799
AWS pre-requisite check succeeded
Docker pre-requisite check succeeded
Extracted container artifacts for application
Entry file generated
Dockerfile generated under /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/Artifacts
Generated dockerfile.update under /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/Artifacts
Generated deployment file at /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/deployment.json
Containerization successful. Generated docker image java-tomcat-9e8e4799
You're all set to test and deploy your container image.
Next Steps:
1. View the container image with "docker images" and test the application.
2. When you're ready to deploy to AWS, please edit the deployment file as needed at /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/deployment.json.
3. Generate deployment artifacts using app2container generate app-deployment --application-id java-tomcat-9e8e4799

After running this command, you can view the generated container images using "docker images" on the machine where the containerize command was run, and you can use the docker run command to launch the container and test application functionality.

Note that in addition to generating container images, the containerize command also generates a deployment.json template file that you can use with the next command, generate app-deployment. You can edit the parameters in the deployment.json template file to change the image repository name to be registered in Amazon ECR, the ECS task definition parameters, or the Kubernetes App name.
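Because deployment.json is plain JSON, such edits can also be scripted. Here is a minimal sketch in Python, assuming the workspace path from the walkthrough above; the field names match the generated template, but the new values are purely illustrative:

import json
from pathlib import Path

# Path to the template generated by the containerize command in the walkthrough above.
template = Path("/home/ubuntu/app2container/ws/java-tomcat-9e8e4799/deployment.json")
config = json.loads(template.read_text())

# Illustrative edits only: tag the ECR image and resize the ECS task definition.
config["ecrParameters"]["ecrRepoTag"] = "v1.0.0"
config["ecsParameters"]["cpu"] = 1
config["ecsParameters"]["memory"] = 2048
config["ecsParameters"]["enableCloudwatchLogging"] = True

template.write_text(json.dumps(config, indent=2))
print("Updated", template)

The full generated deployment.json template from this walkthrough is shown next.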
$ cat java-tomcat-9e8e4799/deployment.json
{
    "a2CTemplateVersion": "1.0",
    "applicationId": "java-tomcat-9e8e4799",
    "imageName": "java-tomcat-9e8e4799",
    "exposedPorts": [
        { "localPort": 8090, "protocol": "tcp6" }
    ],
    "environment": [],
    "ecrParameters": { "ecrRepoTag": "latest" },
    "ecsParameters": {
        "createEcsArtifacts": true,
        "ecsFamily": "java-tomcat-9e8e4799",
        "cpu": 2,
        "memory": 4096,
        "dockerSecurityOption": "",
        "enableCloudwatchLogging": false,
        "publicApp": true,
        "stackName": "a2c-java-tomcat-9e8e4799-ECS",
        "reuseResources": { "vpcId": "", "cfnStackName": "", "sshKeyPairName": "" },
        "gMSAParameters": { "domainSecretsArn": "", "domainDNSName": "", "domainNetBIOSName": "", "createGMSA": false, "gMSAName": "" }
    },
    "eksParameters": {
        "createEksArtifacts": false,
        "applicationName": "",
        "stackName": "a2c-java-tomcat-9e8e4799-EKS",
        "reuseResources": { "vpcId": "", "cfnStackName": "", "sshKeyPairName": "" }
    }
}

At this point, the application workspace where the artifacts are generated serves as an iteration sandbox. You can choose to edit the Dockerfile generated here to make changes to the application and use the docker build command to build new container images as needed.

You can generate the artifacts needed to deploy the application containers in Amazon EKS by using the generate app-deployment command:

$ sudo app2container generate app-deployment --application-id java-tomcat-9e8e4799
AWS pre-requisite check succeeded
Docker pre-requisite check succeeded
Created ECR Repository
Uploaded Cloud Formation resources to S3 Bucket: none
Generated Cloud Formation Master template at: /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/EksDeployment/amazon-eks-master.template.yaml
EKS Cloudformation templates and additional deployment artifacts generated successfully for application java-tomcat-9e8e4799
You're all set to use AWS Cloudformation to manage your application stack.
Next Steps:
1. Edit the cloudformation template as necessary.
2. Create an application stack using the AWS CLI or the AWS Console. AWS CLI command: aws cloudformation deploy --template-file /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/EksDeployment/amazon-eks-master.template.yaml --capabilities CAPABILITY_NAMED_IAM --stack-name java-tomcat-9e8e4799
3. Setup a pipeline for your application stack: app2container generate pipeline --application-id java-tomcat-9e8e4799

This command works from the deployment.json template file produced as part of running the containerize command. App2Container generates the ECS/EKS CloudFormation templates and offers an option to deploy those stacks: it registers the container image in the user-specified ECR repository and generates CloudFormation templates for Amazon ECS and EKS deployments. You can register the ECS task definition with Amazon ECS, or use kubectl to launch the containerized application on an existing Amazon EKS or self-managed Kubernetes cluster using the App2Container-generated amazon-eks-master.template.deployment.yaml.

Alternatively, you can deploy containerized applications directly to Amazon EKS with the --deploy option:

$ sudo app2container generate app-deployment --application-id java-tomcat-9e8e4799 --deploy
AWS pre-requisite check succeeded
Docker pre-requisite check succeeded
Created ECR Repository
Uploaded Cloud Formation resources to S3 Bucket: none
Generated Cloud Formation Master template at: /home/ubuntu/app2container/ws/java-tomcat-9e8e4799/EksDeployment/amazon-eks-master.template.yaml
Initiated Cloudformation stack creation. This may take a few minutes.
Please visit the AWS Cloudformation Console to track progress.
Deploying application to EKS

Handling ASP.NET Applications with Windows Authentication

Containerizing ASP.NET applications follows almost the same process as for Java applications, but Windows containers cannot be directly domain joined. They can, however, still use Active Directory (AD) domain identities to support various authentication scenarios. App2Container detects if a site is using Windows authentication and accordingly makes the IIS site’s application pool run as the network service identity, and it generates new CloudFormation templates for Windows-authenticated IIS applications. Creating the gMSA and AD security group, domain joining the ECS nodes, and making containers use this gMSA are all taken care of by those templates.

It also provides two PowerShell scripts as output of the app2container containerize command, along with an instruction file on how to use them. The following is an example output:

PS C:\Windows\system32> app2container containerize --application-id iis-SmartStoreNET-a726ba0b
Running AWS pre-requisite check...
Running Docker pre-requisite check...
Container build complete. Please use "docker images" to view the generated container images.
Detected that the Site is using Windows Authentication.
Generating powershell scripts into C:\Users\Admin\AppData\Local\app2container\iis-SmartStoreNET-a726ba0b\Artifacts required to setup Container host with Windows Authentication
Please look at C:\Users\Admin\AppData\Local\app2container\iis-SmartStoreNET-a726ba0b\Artifacts\WindowsAuthSetupInstructions.md for setup instructions on Windows Authentication.
A deployment file has been generated under C:\Users\Admin\AppData\Local\app2container\iis-SmartStoreNET-a726ba0b
Please edit the same as needed and generate deployment artifacts using "app2container generate-deployment"

The first PowerShell script, DomainJoinAddToSecGroup.ps1, joins the container host and adds it to an Active Directory security group. The second script, CreateCredSpecFile.ps1, creates a Group Managed Service Account (gMSA), grants it access to the Active Directory security group, generates the credential spec for this gMSA, and stores it locally on the container host. You can execute these PowerShell scripts on the ECS host. The following is an example usage of the scripts:

PS C:\Windows\system32> .\DomainJoinAddToSecGroup.ps1 -ADDomainName Dominion.com -ADDNSIp 10.0.0.1 -ADSecurityGroup myIISContainerHosts -CreateADSecurityGroup:$true
PS C:\Windows\system32> .\CreateCredSpecFile.ps1 -GMSAName MyGMSAForIIS -CreateGMSA:$true -ADSecurityGroup myIISContainerHosts

Before executing the app2container generate-deployment command, edit the deployment.json file to change the value of dockerSecurityOption to the name of the CredentialSpec file that the CreateCredSpecFile script generated, for example:

"dockerSecurityOption": "credentialspec:file://dominion_mygmsaforiis.json"

Effectively, any access to a network resource made by the IIS server inside the container for the site will now use the above gMSA to authenticate. The final step is to authorize this gMSA account on the network resources that the IIS server will access. A common example is authorizing this gMSA inside the SQL Server.

Finally, if the application must connect to a database to be fully functional and you run the container in Amazon ECS, ensure that the application container created from the Docker image generated by the tool has connectivity to the same database.
You can refer to this documentation for options on migrating: MS SQL Server from Windows to Linux on AWS, Database Migration Service, and backup and restore your MS SQL Server to Amazon RDS.

Now Available

AWS App2Container is offered free of charge; you only pay for the actual usage of AWS services such as Amazon EC2, ECS, EKS, and S3. For details, please refer to the App2Container FAQs and documentation. Give this a try, and please send us feedback either through your usual AWS Support contacts, on the AWS Forum for ECS, the AWS Forum for EKS, or on the container roadmap on GitHub.

— Channy;

Amazon RDS Proxy – Now Generally Available

Amazon Web Services Blog -

At AWS re:Invent 2019, we launched the preview of Amazon RDS Proxy, a fully managed, highly available database proxy for Amazon Relational Database Service (RDS) that makes applications more scalable, more resilient to database failures, and more secure. Following the preview for the MySQL engine, we extended it with PostgreSQL compatibility. Today, I am pleased to announce that RDS Proxy is now generally available for both engines.

Many applications, including those built on modern serverless architectures using AWS Lambda, Fargate, Amazon ECS, or EKS, can have a large number of open connections to the database server, and may open and close database connections at a high rate, exhausting database memory and compute resources. Amazon RDS Proxy allows applications to pool and share connections established with the database, improving database efficiency, application scalability, and security. With RDS Proxy, failover times for Amazon Aurora and RDS databases are reduced by up to 66%, and database credentials, authentication, and access can be managed through integration with AWS Secrets Manager and AWS Identity and Access Management (IAM).

Amazon RDS Proxy can be enabled for most applications with no code changes, you don’t need to provision or manage any additional infrastructure, and you only pay per vCPU of the database instance for which the proxy is enabled.

Amazon RDS Proxy – Getting started

You can get started with Amazon RDS Proxy in just a few clicks by going to the AWS Management Console and creating an RDS Proxy endpoint for your RDS databases. In the navigation pane, choose Proxies and then Create proxy.

To create your proxy, specify the Proxy identifier, a unique name of your choosing, and choose the database engine, either MySQL or PostgreSQL. Choose the encryption setting if you want the proxy to enforce TLS/SSL for all connections between the application and the proxy, and specify the time period that a client connection can be idle before the proxy closes it. A client connection is considered idle when the application doesn’t submit a new request within the specified time after the previous request completed. The underlying connection between the proxy and the database stays open and is returned to the connection pool, so it’s available to be reused for new client connections.

Next, choose one RDS DB instance or Aurora DB cluster in Database to access through this proxy. The list only includes DB instances and clusters with compatible database engines, engine versions, and other settings.

Specify Connection pool maximum connections, a value between 1 and 100. This setting represents the percentage of the max_connections value that RDS Proxy can use for its connections. If you only intend to use one proxy with this DB instance or cluster, you can set it to 100. For details about how RDS Proxy uses this setting, see Connection Limits and Timeouts.

Choose at least one Secrets Manager secret associated with the RDS DB instance or Aurora DB cluster that you intend to access with this proxy, and select an IAM role that has permission to access the Secrets Manager secrets you chose. If you don’t have an existing secret, please click Create a new secret before setting up the RDS Proxy.

After setting the VPC Subnets and a security group, click Create proxy. If you want to configure more settings in detail, please refer to the documentation.

You can see the new RDS Proxy after waiting a few minutes, and then point your application to the RDS Proxy endpoint. That’s it!
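Once the proxy is available, an application connects to the proxy endpoint exactly as it would to the database endpoint. As a minimal sketch, using the channy-proxy endpoint from the failover example below and placeholder credentials (in practice you would typically fetch these from AWS Secrets Manager or use IAM authentication rather than hard-coding a password), a Python client using PyMySQL might look like this:

import pymysql

# Placeholder values: substitute your own proxy endpoint, credentials, and database.
PROXY_ENDPOINT = "channy-proxy.proxy-abcdef123.us-east-1.rds.amazonaws.com"
DB_USER = "admin_user"
DB_PASSWORD = "example-password"  # in practice, fetch from Secrets Manager or use IAM auth
DB_NAME = "mydb"

# The application opens connections against the proxy; RDS Proxy pools and reuses
# the underlying database connections and preserves them across failovers.
connection = pymysql.connect(
    host=PROXY_ENDPOINT,
    user=DB_USER,
    password=DB_PASSWORD,
    database=DB_NAME,
    connect_timeout=5,
)

try:
    with connection.cursor() as cursor:
        cursor.execute("SELECT @@aurora_server_id")
        print(cursor.fetchone())
finally:
    connection.close()

Because pooling happens inside the proxy, connection-churning clients such as Lambda functions reuse warm database connections instead of exhausting database memory and compute resources.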
You can also create an RDS Proxy via the AWS CLI:

aws rds create-db-proxy \
    --db-proxy-name channy-proxy \
    --role-arn iam_role \
    --engine-family { MYSQL|POSTGRESQL } \
    --vpc-subnet-ids space_separated_list \
    [--vpc-security-group-ids space_separated_list] \
    [--auth ProxyAuthenticationConfig_JSON_string] \
    [--require-tls | --no-require-tls] \
    [--idle-client-timeout value] \
    [--debug-logging | --no-debug-logging] \
    [--tags comma_separated_list]

How RDS Proxy works

Let’s see an example that demonstrates how open connections continue working during a failover, when you reboot a database or it becomes unavailable due to a problem. This example uses a proxy named channy-proxy and an Aurora DB cluster with DB instances instance-8898 and instance-9814. When the failover-db-cluster command is run from the Linux command line, the writer instance that the proxy is connected to changes to a different DB instance. You can see that the DB instance associated with the proxy changes while the connection remains open.

$ mysql -h channy-proxy.proxy-abcdef123.us-east-1.rds.amazonaws.com -u admin_user -p
Enter password:
...
mysql> select @@aurora_server_id;
+--------------------+
| @@aurora_server_id |
+--------------------+
| instance-9814      |
+--------------------+
1 row in set (0.01 sec)

mysql>
[1]+ Stopped   mysql -h channy-proxy.proxy-abcdef123.us-east-1.rds.amazonaws.com -u admin_user -p
$ # Initially, instance-9814 is the writer.
$ aws rds failover-db-cluster --db-cluster-id cluster-56-2019-11-14-1399
JSON output
$ # After a short time, the console shows that the failover operation is complete.
$ # Now instance-8898 is the writer.
$ fg
mysql -h channy-proxy.proxy-abcdef123.us-east-1.rds.amazonaws.com -u admin_user -p

mysql> select @@aurora_server_id;
+--------------------+
| @@aurora_server_id |
+--------------------+
| instance-8898      |
+--------------------+
1 row in set (0.01 sec)

mysql>
[1]+ Stopped   mysql -h channy-proxy.proxy-abcdef123.us-east-1.rds.amazonaws.com -u admin_user -p
$ aws rds failover-db-cluster --db-cluster-id cluster-56-2019-11-14-1399
JSON output
$ # After a short time, the console shows that the failover operation is complete.
$ # Now instance-9814 is the writer again.
$ fg
mysql -h channy-proxy.proxy-abcdef123.us-east-1.rds.amazonaws.com -u admin_user -p

mysql> select @@aurora_server_id;
+--------------------+
| @@aurora_server_id |
+--------------------+
| instance-9814      |
+--------------------+
1 row in set (0.01 sec)

+---------------+---------------+
| Variable_name | Value         |
+---------------+---------------+
| hostname      | ip-10-1-3-178 |
+---------------+---------------+
1 row in set (0.02 sec)

With RDS Proxy, you can build applications that transparently tolerate database failures without needing to write complex failure-handling code. RDS Proxy automatically routes traffic to a new database instance while preserving application connections.

You can review the demo for an overview of RDS Proxy and the steps you need to take to access RDS Proxy from a Lambda function. If you want to know how your serverless applications maintain excellent performance even at peak loads, please read this blog post. For a deeper dive into using RDS Proxy for MySQL with serverless, visit this post.

The following are a few things that you should be aware of:

Currently, RDS Proxy is available for the MySQL and PostgreSQL engine families. This includes RDS for MySQL 5.6 and 5.7, and PostgreSQL 10.11 and 11.5.
In an Aurora cluster, all of the connections in the connection pool are handled by the Aurora primary instance. To perform load balancing for read-intensive workloads, you still use the reader endpoint directly for the Aurora cluster.

Your RDS Proxy must be in the same VPC as the database. Although the database can be publicly accessible, the proxy can’t be.

Proxies don’t support compressed mode. For example, they don’t support the compression used by the --compress or -C options of the mysql command.

Now Available!

Amazon RDS Proxy is generally available in the US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (London), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney) and Asia Pacific (Tokyo) regions for Aurora MySQL, RDS for MySQL, Aurora PostgreSQL, and RDS for PostgreSQL, and it includes support for Aurora Serverless and Aurora Multi-Master.

Take a look at the product page, pricing, and the documentation to learn more. Please send us feedback either in the AWS forum for Amazon RDS or through your usual AWS support contacts.

– Channy;

Employee Spotlight: Rachel Noonan

WP Engine -

In this ongoing blog series, we speak with WP Engine employees around the globe to learn more about their roles, what they love about the cities they work in, and what they like most about working at WP Engine.  In this interview, we talk to Rachel Noonan, an Engineering Manager at WP Engine’s Limerick office,… The post Employee Spotlight: Rachel Noonan appeared first on WP Engine.
