Corporate Blogs

Cloudflare architecture and how BPF eats the world

Cloudflare Blog -

Recently at Netdev 0x13, the conference on Linux networking in Prague, I gave a short talk titled "Linux at Cloudflare". The talk ended up being mostly about BPF. It seems that, no matter the question, BPF is the answer. Here is a transcript of a slightly adjusted version of that talk.

At Cloudflare we run Linux on our servers. We operate two categories of data centers: large "Core" data centers, which process logs, analyze attacks and compute analytics, and the "Edge" server fleet, which delivers customer content from 180 locations across the world.

In this talk, we will focus on the "Edge" servers. It's here that we use the newest Linux features, optimize for performance and care deeply about DoS resilience.

Our edge service is special due to our network configuration - we make extensive use of anycast routing. Anycast means that the same set of IP addresses is announced by all our data centers.

This design has great advantages. First, it guarantees the optimal speed for end users: no matter where you are located, you will always reach the closest data center. Second, anycast helps us spread out DoS traffic. During attacks each location receives a small fraction of the total traffic, making it easier to ingest and filter out unwanted traffic.

Anycast allows us to keep the networking setup uniform across all edge data centers. We applied the same design inside our data centers - our software stack is uniform across the edge servers. All software pieces run on all the servers.

In principle, every machine can handle every task - and we run many diverse and demanding tasks. We have a full HTTP stack, the magical Cloudflare Workers, two sets of DNS servers - authoritative and resolver - and many other publicly facing applications like Spectrum and Warp.

Even though every server has all the software running, requests typically cross many machines on their journey through the stack. For example, an HTTP request might be handled by a different machine during each of the five stages of processing.

Let me walk you through the early stages of inbound packet processing:

(1) First, the packets hit our router. The router does ECMP and forwards packets onto our Linux servers. We use ECMP to spread each target IP across many machines - at least 16 - as a rudimentary load balancing technique.

(2) On the servers we ingest packets with XDP eBPF. In XDP we perform two stages. First, we run volumetric DoS mitigations, dropping packets belonging to very large layer 3 attacks.

(3) Then, still in XDP, we perform layer 4 load balancing. All the non-attack packets are redirected across the machines. This works around the ECMP problems, gives us fine-grained load balancing and allows us to gracefully take servers out of service.

(4) Following the redirection, the packets reach a designated machine. At this point they are ingested by the normal Linux networking stack, go through the usual iptables firewall, and are dispatched to the appropriate network socket.

(5) Finally, packets are received by an application. For example, HTTP connections are handled by a "protocol" server responsible for performing TLS encryption and processing the HTTP, HTTP/2 and QUIC protocols.

It's in these early phases of request processing where we use the coolest new Linux features. We can group the useful modern functionality into three categories:

- DoS handling
- Load balancing
- Socket dispatch

Let's discuss DoS handling in more detail; the minimal XDP sketch below shows the general shape of the programs involved.
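To make the XDP stage more concrete, here is a minimal, hypothetical sketch of an XDP program in C - not Cloudflare's actual mitigation code - that drops IPv4 UDP packets aimed at one example port and passes everything else. The port value and the decision logic are placeholders; real mitigations match attack signatures generated by an automated pipeline.

```c
// Minimal XDP sketch (illustrative only): drop IPv4 UDP packets sent to an
// example port, pass everything else. Assumes no IP options for brevity.
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/udp.h>
#include <bpf/bpf_endian.h>
#include <bpf/bpf_helpers.h>

#define FLOODED_PORT 1234 /* placeholder value */

SEC("xdp")
int drop_udp_flood(struct xdp_md *ctx)
{
	void *data     = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;

	/* Every access must be bounds-checked to satisfy the verifier. */
	struct ethhdr *eth = data;
	if ((void *)(eth + 1) > data_end)
		return XDP_PASS;
	if (eth->h_proto != bpf_htons(ETH_P_IP))
		return XDP_PASS;

	struct iphdr *ip = (void *)(eth + 1);
	if ((void *)(ip + 1) > data_end)
		return XDP_PASS;
	if (ip->protocol != IPPROTO_UDP)
		return XDP_PASS;

	struct udphdr *udp = (void *)(ip + 1); /* ignores IP options */
	if ((void *)(udp + 1) > data_end)
		return XDP_PASS;

	if (udp->dest == bpf_htons(FLOODED_PORT))
		return XDP_DROP; /* dropped before the kernel stack ever sees it */

	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```

A program like this would be compiled with clang targeting BPF and attached to the NIC with a standard loader such as ip(8), which is roughly where the volumetric mitigations described below hook in.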
As mentioned earlier, the first step after ECMP routing is Linux's XDP stack where, among other things, we run DoS mitigations.

Historically our mitigations for volumetric attacks were expressed in classic BPF and iptables-style grammar. Recently we adapted them to execute in the XDP eBPF context, which turned out to be surprisingly hard. Read about our adventures:

- L4Drop: XDP DDoS Mitigations
- xdpcap: XDP Packet Capture
- XDP based DoS mitigation talk by Arthur Fabre
- XDP in practice: integrating XDP into our DDoS mitigation pipeline (PDF)

During this project we encountered a number of eBPF/XDP limitations. One of them was the lack of concurrency primitives. It was very hard to implement things like race-free token buckets. Later we found that Facebook engineer Julia Kartseva had the same issues. In February this problem was addressed with the introduction of the bpf_spin_lock helper.

While our modern volumetric DoS defenses are done in the XDP layer, we still rely on iptables for application-layer (layer 7) mitigations. Here, higher-level firewall features are useful: connlimit, hashlimits and ipsets. We also use the xt_bpf iptables module to run cBPF in iptables to match on packet payloads. We talked about this in the past:

- Lessons from defending the indefensible (PPT)
- Introducing the BPF tools

After XDP and iptables, we have one final kernel-side DoS defense layer.

Consider a situation where our UDP mitigations fail. In such a case we might be left with a flood of packets hitting our application's UDP socket. This might overflow the socket's receive queue, causing packet loss. This is problematic - both good and bad packets will be dropped indiscriminately. For applications like DNS it's catastrophic. In the past, to reduce the harm, we ran one UDP socket per IP address. An unmitigated flood was bad, but at least it didn't affect the traffic to other server IP addresses.

Nowadays that architecture is no longer suitable. We are running more than 30,000 DNS IPs, and running that number of UDP sockets is not optimal. Our modern solution is to run a single UDP socket with a complex eBPF socket filter on it, using the SO_ATTACH_BPF socket option (a simplified sketch of such a filter appears at the end of this section). We talked about running eBPF on network sockets in past blog posts:

- eBPF, Sockets, Hop Distance and manually writing eBPF assembly
- SOCKMAP - TCP splicing of the future

The mentioned eBPF program rate-limits the packets. It keeps its state - packet counts - in an eBPF map. We can be sure that a single flooded IP won't affect other traffic. This works well, though during work on this project we found a rather worrying bug in the eBPF verifier:

- eBPF can't count?!

I guess running eBPF on a UDP socket is not a common thing to do.

Apart from the DoS mitigations, in XDP we also run a layer 4 load balancer. This is a new project, and we haven't talked much about it yet. Without getting into many details: in certain situations we need to perform a socket lookup from XDP.

The problem is relatively simple - our code needs to look up the "socket" kernel structure for a 5-tuple extracted from a packet. This is generally easy - there is a bpf_sk_lookup helper available for this. Unsurprisingly, there were some complications. One problem was the inability to verify whether a received ACK packet was a valid part of a three-way handshake when SYN cookies are enabled. My colleague Lorenz Bauer is working on adding support for this corner case.

After the DoS and load balancing layers, the packets are passed onto the usual Linux TCP / UDP stack.
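The following is a simplified sketch of the kind of socket filter described above, assuming a kernel recent enough to provide bpf_skb_load_bytes_relative: it counts packets per destination IPv4 address in an eBPF map and drops packets for addresses that exceed an example threshold. The map size, the threshold and the plain counter (rather than a proper token bucket) are all simplifications and are not taken from Cloudflare's production filter.

```c
// Simplified SO_ATTACH_BPF socket filter sketch: per-destination-IP packet
// counting with a crude cutoff. A real rate limiter would use token buckets
// and decay the counters over time.
#include <stddef.h>
#include <linux/bpf.h>
#include <linux/ip.h>
#include <bpf/bpf_helpers.h>

#define PACKET_LIMIT 10000 /* placeholder threshold */

struct {
	__uint(type, BPF_MAP_TYPE_LRU_HASH);
	__uint(max_entries, 65536);
	__type(key, __u32);   /* destination IPv4 address */
	__type(value, __u64); /* packet count */
} counts SEC(".maps");

SEC("socket")
int udp_rate_limit(struct __sk_buff *skb)
{
	__u32 daddr;

	/* Read the destination address relative to the network header, so we
	 * don't depend on where skb->data points for a UDP socket. */
	if (bpf_skb_load_bytes_relative(skb, offsetof(struct iphdr, daddr),
					&daddr, sizeof(daddr),
					BPF_HDR_START_NET))
		return skb->len; /* not plain IPv4 - let it through */

	__u64 *cnt = bpf_map_lookup_elem(&counts, &daddr);
	if (!cnt) {
		__u64 one = 1;
		bpf_map_update_elem(&counts, &daddr, &one, BPF_ANY);
		return skb->len;
	}

	__sync_fetch_and_add(cnt, 1);
	if (*cnt > PACKET_LIMIT)
		return 0; /* returning 0 drops the packet */

	return skb->len; /* keep the whole packet */
}

char _license[] SEC("license") = "GPL";
```

Userspace would load this program with libbpf and attach the resulting program file descriptor to the shared UDP socket with setsockopt(sock_fd, SOL_SOCKET, SO_ATTACH_BPF, &prog_fd, sizeof(prog_fd)).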
Here we do socket dispatch - for example, packets going to port 53 are passed onto a socket belonging to our DNS server. We do our best to use vanilla Linux features, but things get complex when you use thousands of IP addresses on the servers.

Convincing Linux to route packets correctly is relatively easy with the "AnyIP" trick. Ensuring packets are dispatched to the right application is another matter. Unfortunately, standard Linux socket dispatch logic is not flexible enough for our needs. For popular ports like TCP/80 we want to share the port between multiple applications, each handling it on a different IP range. Linux doesn't support this out of the box: you can call bind() either on a specific IP address or on all IPs (with 0.0.0.0) - a limitation illustrated in the short sketch at the end of this section.

In order to fix this, we developed a custom kernel patch which adds a SO_BINDTOPREFIX socket option. As the name suggests, it allows us to call bind() on a selected IP prefix. This solves the problem of multiple applications sharing popular ports like 53 or 80.

Then we ran into another problem. For our Spectrum product we need to listen on all 65535 ports. Running so many listen sockets is not a good idea (see our old war story blog), so we had to find another way. After some experiments we learned to utilize an obscure iptables module - TPROXY - for this purpose. Read about it here:

- Abusing Linux's firewall: the hack that allowed us to build Spectrum

This setup is working, but we don't like the extra firewall rules. We are working on solving this problem correctly - by actually extending the socket dispatch logic. You guessed it - we want to extend the socket dispatch logic by utilizing eBPF. Expect some patches from us.

Then there is a way to use eBPF to improve applications. Recently we got excited about doing TCP splicing with SOCKMAP:

- SOCKMAP - TCP splicing of the future

This technique has great potential for improving tail latency across many pieces of our software stack. The current SOCKMAP implementation is not quite ready for prime time yet, but the potential is vast.

Similarly, the new TCP-BPF, aka BPF_SOCK_OPS, hooks provide a great way of inspecting performance parameters of TCP flows. This functionality is super useful for our performance team.

Some Linux features didn't age well and we need to work around them. For example, we are hitting limitations of the networking metrics. Don't get me wrong - the networking metrics are awesome, but sadly they are not granular enough. Things like TcpExtListenDrops and TcpExtListenOverflows are reported as global counters, while we need to know them on a per-application basis.

Our solution is to use eBPF probes to extract the numbers directly from the kernel. My colleague Ivan Babrou wrote a Prometheus metrics exporter called "ebpf_exporter" to facilitate this. Read on:

- Introducing ebpf_exporter
- https://github.com/cloudflare/ebpf_exporter

With "ebpf_exporter" we can generate all manner of detailed metrics. It is very powerful and has saved us on many occasions.

In this talk we discussed six layers of BPF running on our edge servers:

- Volumetric DoS mitigations running on XDP eBPF
- Iptables xt_bpf cBPF for application-layer attacks
- SO_ATTACH_BPF for rate limits on UDP sockets
- Load balancer, running on XDP
- eBPF running application helpers like SOCKMAP for TCP socket splicing, and TCP-BPF for TCP measurements
- "ebpf_exporter" for granular metrics

And we're just getting started! Soon we will be doing more with eBPF-based socket dispatch, eBPF running on the Linux TC (Traffic Control) layer and more integration with cgroup eBPF hooks.
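To make the socket dispatch limitation mentioned above concrete, here is a small sketch of the two things vanilla bind() can express: one specific address, or every address via 0.0.0.0. The addresses and port are example values, and the prefix-based option is only described in a comment, since SO_BINDTOPREFIX is Cloudflare's out-of-tree patch and its exact interface isn't documented here.

```c
// Sketch of standard bind() behaviour: either one address or all of them.
// There is no vanilla way to say "port 80, but only for 192.0.2.0/24".
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	struct sockaddr_in addr;
	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_port = htons(80);

	/* Option 1: bind to a single, specific IP address. */
	inet_pton(AF_INET, "192.0.2.10", &addr.sin_addr);

	/* Option 2: bind to every address on the machine instead. */
	/* addr.sin_addr.s_addr = htonl(INADDR_ANY); */

	/* What stock Linux cannot express is "bind to port 80 for the whole
	 * 192.0.2.0/24 prefix". That gap is what the SO_BINDTOPREFIX patch
	 * fills via an extra setsockopt() call, omitted here because the
	 * patch is not upstream. */

	if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("bind");
		close(fd);
		return 1;
	}

	listen(fd, 128);
	close(fd);
	return 0;
}
```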
Meanwhile, our SRE team maintains an ever-growing list of BCC scripts useful for debugging.

It feels like Linux has stopped developing new APIs, and all the new features are implemented as eBPF hooks and helpers. This is fine and has strong advantages: it's easier and safer to update an eBPF program than to recompile a kernel module. Some things, like TCP-BPF exposing high-volume performance tracing data, would probably be impossible without eBPF.

Some say "software is eating the world". I would say: "BPF is eating the software".

How to Use Social Data to Launch a Successful Video Marketing Campaign

Reseller Club Blog -

Video marketing is a hit. According to Oberlo, 87% of marketing professionals use video as a marketing tool. So, if you don't implement video content in your marketing campaigns, you're definitely missing out.

Essentially, video marketing is a component of an integrated marketing plan aimed at increasing audience engagement and boosting social activity, mostly through social media. The strategy is built around one video or a series of videos that are connected to a particular product and are characteristic of the company's values.

Why is video marketing on the rise when there are so many other content types that are just as engaging? What makes it stand out? Take a look at these stats:

- According to Renderforest, 5 billion videos are watched on YouTube every day
- According to the same source, 70% of marketers claim that videos convert better than any other type of content
- Optinmonster supports these numbers, adding that 94% of businesses use video as an effective marketing tool and 81% of surveyed businesses saw an increase in sales. Moreover, 53% of businesses saw a decrease in customer support issues after using explainer videos in their marketing campaigns

Social Data and Video Marketing

Today, not a single marketing campaign can be launched without social media, not only because social media gives brands extra exposure, but also because it provides brands with social data to help them keep up with their KPIs. Although mining social data is a pretty down-to-earth process, with tools like Google Analytics and Adobe Analytics making it possible, confusion may hit you when you actually try to use it for your video marketing campaign. So, here are some insights on how to use social data to launch a successful video marketing campaign.

1. Study Your Audience for Better Targeting

Knowing your target audience is crucial, and your future video marketing campaign is no exception. But audience analysis is also an important part of video production and plays an important role in creating a video script. Another reason why you should pay close attention to audience analysis in your video marketing campaign is delivering the right message to the right people, especially if you're a young brand. According to Social Bakers, this type of content is used primarily in the early stage of a marketing campaign, when customers are only learning about your brand.

Audience analysis is a multifaceted process that involves the discussion of crucial points that will influence the nature of your marketing videos. Together, these aspects describe one audience persona that characterizes the whole target audience you will address in your video. The more detailed this description is, the more targeted your marketing video will be, because you'll know exactly who you want to reach with your video message. So, for better targeting and for a more detailed marketing video, make sure that you have a proper audience analysis.

2. Explore Likes, Shares, and Comments for Content Ideas

If you choose to launch your video marketing campaign on one of the social media platforms, social data that describes the performance of your previous posts, as well as the data collected from likes and shares, will help you determine which platform will work best for your marketing needs and which content your followers on each platform prefer.
For instance, if your Facebook account stats show that posts with videos are the most active posts on your page, you can consider your Facebook account as the main platform for your video marketing campaign.

Activity on your posts, including comments, likes and shares, is the source of unique metrics mined from social media platforms. It gives you important information on what your existing social media audience likes and wants to see, and you can use it in your video marketing campaign. Social data mined from this activity can give a serious boost to your creativity, providing you with plenty of content ideas. Here's how you can use it.

Likes and Shares

This is an indirect form of feedback from your social media followers. This is how they express their perception of the content you're posting, so listen closely. The more likes a post gets, the more clues you receive as to what your audience wants to see, and you can later implement these ideas during the video production process. Analyzing likes and shares is a generator of engaging topics for your marketing videos. For instance, the international real estate company Flatfy saw increased interest in posts about real estate statistics on their Instagram account. That's how they came up with the idea for a marketing video covering global population and the future of real estate search.

Comments

Your followers may express their feedback using the comment section under your posts. Comments are a great way to find out who watches your content and make the necessary adjustments to your video marketing campaign. For instance, one comment in Spanish under a "What is inbound marketing?" video inspired us to create a whole video in Spanish. Such comments give you an idea not only of what your audience needs, but of who they are as well. So, watch comments closely to find inspiration for your marketing videos. They are a great source of valuable social data.

3. Perform Competitor Analysis

By taking a look at your competitors, you obtain their social data for your benefit. Analyze how their marketing videos are performing to learn from their mistakes and benefit from their wins as well. According to Statista, competitor analysis includes the following essential steps:

- Identifying relevant competitors in a specified market segment
- Describing key data concerning the competitors you've picked
- Analyzing the business strategies of each competitor. In our case, you take all the video marketing campaigns of each of your competitors and analyze their performance. You can use SWOT analysis to obtain the full scope of data
- Studying their corporate philosophy and comparing it to your brand's values
- Establishing your immediate goals based on their behaviours and marketing strategies

Including competitor analysis in your video marketing campaign is an important step to take before you even launch the production of the video itself. Competitor analysis can give you important clues and ideas for creating a video script, as well as knowledge of the things you will need to avoid. Your primary goal is to create a marketing video that stands out; competitor analysis will help you understand which characteristic features of your brand you want to emphasize.

4. Keep Track of Your KPIs

Lastly, valuable social data can be obtained from your own KPIs (Key Performance Indicators). Take a look at your previous video marketing campaigns. How did they perform? What were their advantages? What were the drawbacks?
How can they be fixed? Answering these and other related questions will help you figure out what should be included in your next video marketing campaign. When you launch it, keeping track of your KPIs will be a task you'll have to do on a regular basis to make sure that your video marketing campaign performs well.

Conclusion

Opting for a video marketing campaign is surely the right decision, and social data will help you do it right. Correctly applied social data will help you understand exactly who you want to target with your video marketing campaign. It will also give you important hints on what your followers want to see and even provide you with ideas for your marketing videos. Social data mined from competitor analysis and your previous KPIs will give you the knowledge of what you want to highlight and what should be avoided in your next video marketing campaign to make it successful. Hopefully, these tips will help you figure out how to use the immense amounts of valuable data you'll get from social media in order to create a successful video marketing campaign.

Join Cloudflare & Yandex at our Moscow meetup! Присоединяйтесь к митапу в Москве!

Cloudflare Blog -

Are you based in Moscow? Cloudflare is partnering with Yandex to produce a meetup this month at Yandex's Moscow headquarters. We would love to invite you to join us to learn about the newest developments in the Internet industry. You'll join Cloudflare's users, stakeholders from the tech community, and engineers and product managers from both Cloudflare and Yandex.

Cloudflare Moscow Meetup

Tuesday, May 30, 2019: 18:00 - 22:00

Location: Yandex - Ulitsa L'va Tolstogo, 16, Moskva, Russia, 119021

Talks will include "Performance and scalability at Cloudflare", "Security at Yandex Cloud", and "Edge computing".

Speakers will include Evgeny Sidorov, Information Security Engineer at Yandex; Ivan Babrou, Performance Engineer at Cloudflare; Alex Cruz Farmer, Product Manager for Firewall at Cloudflare; and Olga Skobeleva, Solutions Engineer at Cloudflare.

Agenda:

- 18:00 - 19:00 - Registration and welcome cocktail
- 19:00 - 19:10 - Cloudflare overview
- 19:10 - 19:40 - Performance and scalability at Cloudflare
- 19:40 - 20:10 - Security at Yandex Cloud
- 20:10 - 20:40 - Cloudflare security solutions and industry security trends
- 20:40 - 21:10 - Edge computing
- Q&A

The talks will be followed by food, drinks, and networking.

View Event Details & Register Here »

We hope to meet you soon.

Developers, join Cloudflare and Yandex at our upcoming meetup in Moscow! Cloudflare is partnering with Yandex to organize an event this month at Yandex's headquarters. We invite you to join a meetup dedicated to the newest developments in the Internet industry. The event will bring together Cloudflare customers, professionals from the tech community, and engineers from Cloudflare and Yandex.

Tuesday, May 30: 18:00 - 22:00

Location: Yandex, Ulitsa L'va Tolstogo, 16, Moscow, Russia, 119021

Talks will cover topics such as "Cloudflare security solutions and industry security trends", "Security at Yandex Cloud", "Performance and scalability at Cloudflare" and "Edge computing", with speakers from Cloudflare and Yandex.

Speakers will include Evgeny Sidorov, Deputy Head of the Service Security Group at Yandex; Ivan Babrou, Performance Engineer at Cloudflare; Alex Cruz Farmer, Product Manager for Firewall at Cloudflare; and Olga Skobeleva, Solutions Engineer at Cloudflare.

Agenda:

- 18:00 - 19:00 - Registration, drinks and networking
- 19:00 - 19:10 - Cloudflare overview
- 19:10 - 19:40 - Performance and scalability at Cloudflare
- 19:40 - 20:10 - Security solutions at Yandex
- 20:10 - 20:40 - Cloudflare security solutions and industry security trends
- 20:40 - 21:10 - Examples of serverless security solutions
- Q&A

The talks will be followed by networking, food and drinks.

View event details and register here »

We look forward to seeing you!

Scaling in WordPress Using Multiple Nodes [Webinar]

WP Engine -

WordPress has evolved into a full-featured CMS that can be suitable for a company or website of any size. That's because, over the years, WordPress has become fully scalable. High-traffic sites use multi-node infrastructure to establish redundancy and guarantee high availability. However, establishing and maintaining a multi-node setup isn't always straightforward. In this webinar, you'll learn from scaling…

Benefits of Magento Hosting

HostGator Blog -

Your eCommerce business is starting to come together. You have your business plan and an idea of what the design of your eCommerce website will look like. But you still need to figure out the right eCommerce software and web hosting plan to get your eCommerce website working and ready for business. When considering your web hosting options, you may come across an option that addresses both needs at once: Magento hosting.

What Is Magento?

Before you can know if Magento hosting is right for you, you need to decide whether you'll use Magento for your website. Magento is the third most popular eCommerce platform on the market (falling only behind WooCommerce and Shopify). It supplies the main functionality you need to run an eCommerce site, namely:

- A shopping cart
- Checkout functionality
- Account creation and guest checkout options
- Integration with payment processing apps to accept payments
- The ability to list your products and track availability

Those are the basics you need, but Magento has a vast array of features that go beyond the basics. Between the core functionality of the platform itself and the extensions you can add to it, the software packs a lot of power into the framework it provides for your eCommerce store.

Why You Need eCommerce Software

You know now that Magento is a type of eCommerce software, but maybe you're wondering if you can get by without an eCommerce platform to begin with. eCommerce websites have unique needs that won't be served by a web hosting plan or content management system alone. If you want to make sales through your website, then you need a way to list your products, track inventory, provide a secure checkout, and accept payments. If you think about the online stores you buy from most often, they probably have additional eCommerce features like the ability to set up a wishlist, mark your favorites, set up subscriptions, or take advantage of coupons. For all of that, you need the right software. If you're building an online store, consider eCommerce software a necessary part of the process.

12 Reasons to Use Magento for Your eCommerce Website

Magento isn't your only option, but it's one of the most popular CMSes for eCommerce websites for a number of reasons.

1. Magento is free.

Few things in life are free, and it's even rarer for something with as much usefulness as Magento to be free. But the Magento core is freely available for anyone to use. You may incur some costs for extensions you add on, or for developers you hire to help you use Magento. But the platform itself won't cost you a thing.

2. It's open source.

Magento is open source software, which means that anyone with the skills to develop a new module or extension can do so. Magento boasts a community of over 300,000 developers. They have an active forum with thousands of contributors, all working to make Magento do as much as business owners need and want it to. One of the nice things about using an open source solution is that you can count on it to improve in quality and functionality over time as people work to make it better for the benefit of all users.

3. Magento's advanced security is ideal for eCommerce.

Website security is important for all website owners, but when you run an online store that regularly takes sensitive financial and personal information from customers, security takes on an extra level of necessity. In the eCommerce community, Magento is widely considered a strong choice for security.
And while the core software provides security against hackers, you can make your Magento website more secure with any of the hundreds of security extensions available. In addition, Magento lets you control how much access you allow each person who updates your website to have. Security permissions help you limit the risk of an angry employee making malicious updates to your website, or of someone accidentally breaking something on the site.

4. Magento supports huge eCommerce product catalogs and order volumes.

It's understandable to wonder just how much a free product can actually do, especially if you have big goals and expect to see significant traffic numbers or list a high number of products. Magento really can handle a lot. You can add up to 500,000 products to your store on one Magento site. And the platform can handle over 80,000 orders every hour (as long as your web hosting plan is also up to the task). Plus, Magento's reporting dashboard shows you at a glance how those sales are translating to your bottom line. That means it should work for you in the early stages of building your business, and allow you tons of room to grow as your sales and customer numbers increase.

5. Magento's flexible extensions make it easy to customize your online store.

Magento is extremely customizable, especially when it comes to the changes you can make using the large library of extensions. You have a lot of power to make the backend of your website look the way you want, as well as to make the website itself intuitive for your visitors. And Magento has a lot of different features you can choose to incorporate if you so desire, such as using categories to better group your different products, allowing different payment options, and letting you manage orders through the platform. With Magento, you can configure your website to have the features you want, both on your end and the user end.

6. Magento's shopping cart functionality is responsive.

With over half of all web use now happening on mobile devices, it's essential that all online stores make their websites mobile friendly. Magento's shopping cart functionality is fully responsive, which means it works just as well on tablets and smartphones as it does on desktop computers. The last thing you want is for half of your visitors to bounce because they find your website difficult to use on their particular device - or worse, bounce right before a sale because the checkout process is a pain on the small screen. Magento will help you avoid that fate.

7. Magento's powerful product search helps customers quickly find what they need.

Nothing else about your eCommerce site will matter if customers find using it more difficult than it's worth. Your website has to provide an experience that's intuitive and pleasant from the moment they first land on the site to when they complete their purchase. Magento helps you achieve that by letting you organize your products in user-friendly ways, so visitors can browse and filter results based on their particular preferences. You can enable features like auto-suggestion for search terms or display popular search term clouds that can further help customers find what they need. And you can load multiple high-resolution images for each product to help customers make a better decision.

8. Built-in features are designed to boost your conversion rate.

Magento also has a number of options you can use to increase conversions and upsells. You can offer free shipping and discounts to incentivize sales.
You can also add areas for customers to enter their own promo codes at checkout. You can enable one-click purchases (just like Amazon!) with the Instant Purchase feature. You can set up your website to display recently viewed products, so customers are more likely to go back and buy items they'd considered. You can have your website show related products, or items commonly bought together, to encourage customers to buy more products at once. Finally, you can make it easy for customers to share items they like on social media or with friends.

9. Magento follows SEO best practices.

The minds behind Magento knew the importance of search engine optimization (SEO) for a website and built a platform that makes optimizing your pages for search simple. You can easily customize your URLs, fill in the relevant meta information on each page, and create an auto-generated sitemap to submit to the search engines.

10. Magento enables personalization.

Personalization is an eCommerce tactic that is gaining steam and showing significant results. Magento's core platform provides some personalization options, but you can go even further in tailoring the way customers experience your website with extensions that offer additional personalization features. With the Magento core, customers can create unique accounts so you can better track their behavior over time and show them products and ads based on past purchases or views. You can automate the process of serving up personalized recommendations for each visitor. With extensions, you can generate automated recommendation and reminder emails based on their past behavior and deliver personalized ads across other sites to help get past visitors back onto your site.

11. Magento works with third-party applications.

Magento helps power a lot of useful features and functionality on your own website, but what happens on your own website is only part of running an online store. You also need to think about marketing your store in various channels around the web to get customers to your website to begin with. Not to mention all the work you have to do to keep up with your business's finances. Magento is compatible with a wide range of third-party applications that help with those parts, including email marketing software, Google Analytics, accounting software products, and payment processing apps. Easy integration with the various apps you need to run your business makes all the extra business tasks you have to take care of easier.

12. Magento tutorials abound online.

Nothing is perfect, and one of the downsides of using Magento is that it has a learning curve. If you're not a professional developer, or you don't have the budget to hire one, learning the ropes of using Magento can take some time and work. But you can find a lot of useful information to help you get started online. Free tutorials are available on sites like TutorialsPoint. Or, for a more thorough introduction, you can take one of the courses offered at Magento U for a fee.

What is Magento Hosting?

Magento hosting is a web hosting plan that provides compatibility with the eCommerce software Magento. While Magento is free and provides a lot of useful features for running an online store, one thing it notably doesn't provide is web hosting. Magento hosting plans will often provide the additional important business features an eCommerce website needs, such as an SSL certificate and compatibility with third-party solutions commonly used by businesses, such as email marketing and Google Analytics.
While many web hosting plans that aren't specifically Magento hosting may make it possible to use Magento, an application web hosting plan that provides specific functionality related to using Magento can often better meet the needs of eCommerce stores that depend on the shopping cart software.

5 Benefits of Magento Hosting

When choosing the best web hosting option for your eCommerce store, you'll benefit from prioritizing Magento compatibility upfront. There are five main reasons to seek out an application hosting plan that works with Magento specifically.

1. Easy installation

With a Magento hosting plan, you can trust that adding Magento to your web hosting account will be quick and easy. With HostGator's Magento hosting, you can add the application in one click once you're signed into your web hosting account. You can focus your time on building your website, rather than figuring out how to get your eCommerce software and web hosting service to work together.

2. Assured compatibility

If you've already made the choice to power your eCommerce options with Magento - and especially if you've already put the work into using it in designing your website - you really don't want to realize after you sign up for a web hosting plan that there are compatibility issues in getting Magento to work right with your plan. Save yourself the trouble of having to work to get the two programs to support each other, or worse, having to switch over to a new web hosting provider when you're already eager to launch. Start with a plan you know will work seamlessly with Magento from the get-go.

3. No performance issues

One of the complaints you'll occasionally see about Magento is that some sites face performance issues, such as slow loading times. In fact, when that occurs the issue usually isn't Magento; it's that the website owner went with a web hosting plan that wasn't up to the task of running the Magento site. To avoid performance issues, find a Magento web hosting plan that provides the degree of power and bandwidth you need.

4. No hidden fees

Some web hosting plans will advertise an upfront price, then hit you with unexpected fees when it comes time to add the apps you need to your account. With Magento hosting, you not only know that your web hosting plan will be compatible with your Magento site, you can also trust that you won't have to pay anything extra to use Magento with the plan you buy.

5. Proper security

As we've already established, security is paramount when you're running an online store. Choosing a secure eCommerce platform like Magento is one step toward making your eCommerce site secure. Finding a web hosting plan that also promises top-notch security is another. A good Magento hosting plan will have strong firewalls in place to protect your website from hackers and offer additional security features, like an SSL certificate and security software you can add to your website.

Get Started with Magento Hosting

Starting an eCommerce website is a big deal. If you make the right choices, it can put you on the path to big profits. Early on, two of the most important choices you can make are which eCommerce software to go with and which web hosting plan to choose. If you go with Magento, picking a Magento hosting plan to go along with your choice just makes sense. Get started with HostGator's application hosting.

Faster script loading with BinaryAST?

Cloudflare Blog -

JavaScript cold starts

The performance of applications on the web platform is becoming increasingly bottlenecked by startup (load) time. Large amounts of JavaScript code are required to create the rich web experiences that we've become used to. When we look at the total size of JavaScript requested on mobile devices from HTTPArchive, we see that an average page loads 350KB of JavaScript, while 10% of pages go over the 1MB threshold. The rise of more complex applications can push these numbers even higher.

While caching helps, popular websites regularly release new code, which makes cold start (first load) times particularly important. With browsers moving to separate caches for different domains to prevent cross-site leaks, the importance of cold starts is growing even for popular subresources served from CDNs, as they can no longer be safely shared.

Usually, when talking about cold start performance, the primary factor considered is raw download speed. However, on modern interactive pages one of the other big contributors to cold starts is JavaScript parsing time. This might seem surprising at first, but makes sense - before starting to execute the code, the engine has to first parse the fetched JavaScript, make sure it doesn't contain any syntax errors and then compile it to the initial bytecode. As networks become faster, parsing and compilation of JavaScript could become the dominant factor.

Device capability (CPU or memory performance) is the most important factor in the variance of JavaScript parsing times and, correspondingly, the time to application start. A 1MB JavaScript file will take on the order of 100 ms to parse on a modern desktop or high-end mobile device, but can take over a second on an average phone (Moto G4).

A more detailed post on the overall cost of parsing, compiling and executing JavaScript shows how the JavaScript boot time can vary on different mobile devices. For example, in the case of news.google.com, it can range from 4s on a Pixel 2 to 28s on a low-end device.

While engines continuously improve raw parsing performance - with V8 in particular doubling it over the past year, as well as moving more things off the main thread - parsers still have to do lots of potentially unnecessary work that consumes memory and battery and might delay the processing of useful resources.

The "BinaryAST" Proposal

This is where BinaryAST comes in. BinaryAST is a new over-the-wire format for JavaScript, proposed and actively developed by Mozilla, that aims to speed up parsing while keeping the semantics of the original JavaScript intact. It does so by using an efficient binary representation for code and data structures, as well as by storing and providing extra information to guide the parser ahead of time.

The name comes from the fact that the format stores the JavaScript source as an AST encoded into a binary file. The specification lives at tc39.github.io/proposal-binary-ast and is being worked on by engineers from Mozilla, Facebook, Bloomberg and Cloudflare.

"Making sure that web applications start quickly is one of the most important, but also one of the most challenging parts of web development. We know that BinaryAST can radically reduce startup time, but we need to collect real-world data to demonstrate its impact.
Cloudflare's work on enabling use of BinaryAST with Cloudflare Workers is an important step towards gathering this data at scale."

- Till Schneidereit, Senior Engineering Manager, Developer Technologies, Mozilla

Parsing JavaScript

For regular JavaScript code to execute in a browser, the source is parsed into an intermediate representation known as an AST that describes the syntactic structure of the code. This representation can then be compiled into bytecode or native machine code for execution. Even a simple example of adding two numbers is represented as a small tree of AST nodes.

Parsing JavaScript is not an easy task; no matter which optimisations you apply, it still requires reading the entire text file character by character, while tracking extra context for syntactic analysis.

The goal of BinaryAST is to reduce the complexity and the amount of work the browser parser has to do overall, by providing additional information and context at the time and place where the parser needs it. Executing JavaScript delivered as BinaryAST requires far fewer steps, since much of the analysis has already been done ahead of time by the tools that produced it.

Another benefit of BinaryAST is that it makes it possible to parse only the critical code necessary for start-up, completely skipping over the unused bits. This can dramatically improve the initial loading time.

This post will now describe some of the challenges of parsing JavaScript in more detail, explain how the proposed format addresses them, and how we made it possible to run its encoder in Workers.

Hoisting

JavaScript relies on hoisting for all declarations - variables, functions, classes. Hoisting is a property of the language that allows you to declare items after the point where they're syntactically used.

Let's take the following example:

```js
function f() {
  return g();
}

function g() {
  return 42;
}
```

Here, when the parser is looking at the body of f, it doesn't yet know what g is referring to - it could be an already existing global function or something declared further down in the same file - so it can't finalise parsing of the original function and start the actual compilation.

BinaryAST fixes this by storing all the scope information and making it available upfront, before the actual expressions - visible as the difference between the initial AST and the enhanced AST in a JSON representation.

Lazy parsing

One common technique used by modern engines to improve parsing times is lazy parsing. It utilises the fact that lots of websites include more JavaScript than they actually need, especially for start-up.

Working around this involves a set of heuristics that try to guess when any given function body in the code can be safely skipped by the parser initially and delayed for later.
A common example of such a heuristic is immediately running the full parser for any function that is wrapped in parentheses:

```js
(function(
```

Such a prefix usually indicates that the following function is going to be an IIFE (immediately-invoked function expression), and so the parser can assume that it will be compiled and executed ASAP and wouldn't benefit from being skipped over and delayed for later.

```js
(function() { … })();
```

These heuristics significantly improve the performance of the initial parsing and cold starts, but they're not completely reliable or trivial to implement.

One of the reasons is the same as in the previous section - even with lazy parsing, you still need to read the contents, analyse them and store additional scope information for the declarations.

Another reason is that the JavaScript specification requires reporting any syntax errors immediately during load time, and not when the code is actually executed. A class of these errors, called early errors, checks for mistakes like usage of reserved words in invalid contexts, strict mode violations, variable name clashes and more. All of these checks require not only lexing the JavaScript source, but also tracking extra state even during lazy parsing.

Having to do such extra work means you need to be careful about marking functions as lazy too eagerly, especially if they actually end up being executed during the page load. Otherwise you're making cold start costs even worse, as now every function that is erroneously marked as lazy needs to be parsed twice - once by the lazy parser and then again by the full one.

Because BinaryAST is meant to be an output format of other tools such as Babel, TypeScript and bundlers such as Webpack, the browser parser can rely on the JavaScript being already analysed and verified by the initial parser. This allows it to skip function bodies completely, making lazy parsing essentially free.

It reduces the cost of completely unused code - while including it is still a problem in terms of network bandwidth (don't do this!), at least it's not affecting parsing times anymore. These benefits apply equally to code that is used later in the page lifecycle (for example, invoked in response to user actions) but is not required during startup.

Last but not least, an important benefit of this approach is that BinaryAST encodes lazy annotations as part of the format, giving tools and developers direct and full control over the heuristics. For example, a tool targeting the Web platform or a framework CLI can use its domain-specific knowledge to mark some event handlers as lazy or eager depending on the context and the event type.

Avoiding ambiguity in parsing

Using a text format for a programming language is great for readability and debugging, but it's not the most efficient representation for parsing and execution.

For example, parsing low-level types like numbers, booleans and even strings from text requires extra analysis and computation, which is unnecessary when you can just store them as native binary-encoded values in the first place and read them directly on the other side.

Another problem is ambiguity in the grammar itself. It was already an issue in the ES5 world, but could usually be resolved with some extra bookkeeping based on the previously seen tokens.
However, in ES6+ there are productions that can be ambiguous all the way through, until they're parsed completely.

For example, a token sequence like:

```js
(a, {b: c, d}, [e = 1])
```

can start either a parenthesized comma expression with nested object and array literals and an assignment:

```js
(a, {b: c, d}, [e = 1]); // it was an expression
```

or a parameter list of an arrow function expression with nested object and array patterns and a default value:

```js
(a, {b: c, d}, [e = 1]) => … // it was a parameter list
```

Both representations are perfectly valid, but have completely different semantics, and you can't know which one you're dealing with until you see the final token.

To work around this, parsers usually have to either backtrack, which can easily get exponentially slow, or parse the contents into intermediate node types that are capable of holding both expressions and patterns, with a subsequent conversion. The latter approach preserves linear performance, but makes the implementation more complicated and requires preserving more state.

In the BinaryAST format this issue doesn't exist in the first place, because the parser sees the type of each node before it even starts parsing its contents.

Cloudflare Implementation

Currently, the format is still in flux, but the very first version of the client-side implementation was released under a flag in Firefox Nightly several months ago. Keep in mind this is only an initial unoptimised prototype, and there are already several experiments changing the format to provide improvements to both size and parsing performance.

On the producer side, the reference implementation lives at github.com/binast/binjs-ref. Our goal was to take this reference implementation and consider how we would deploy it at Cloudflare scale.

If you dig into the codebase, you will notice that it currently consists of two parts.

One is the encoder itself, which is responsible for taking a parsed AST, annotating it with scope and other relevant information, and writing out the result in one of the currently supported formats. This part is written in Rust and is fully native.

The other part is what produces that initial AST - the parser. Interestingly, unlike the encoder, it's implemented in JavaScript.

Unfortunately, there is currently no battle-tested native JavaScript parser with an open API, let alone one implemented in Rust. There have been a few attempts, but, given the complexity of the JavaScript grammar, it's better to wait a bit and make sure they're well-tested before incorporating one into the production encoder.

On the other hand, over the last few years the JavaScript ecosystem has grown to rely extensively on developer tools implemented in JavaScript itself. In particular, this gave a push to rigorous parser development and testing. There are several JavaScript parser implementations that have been proven to work on thousands of real-world projects.

With that in mind, it makes sense that the BinaryAST implementation chose to use one of them - in particular, Shift - and integrated it with the Rust encoder, instead of attempting to use a native parser.

Connecting Rust and JavaScript

Integration is where things get interesting.

Rust is a native language that can compile to an executable binary, but JavaScript requires a separate engine to be executed. To connect them, we need some way to transfer data between the two without sharing memory.

Initially, the reference implementation generated JavaScript code with an embedded input on the fly, passed it to Node.js and then read the output when the process had finished.
That code contained a call to the Shift parser with an inlined input string and produced the AST back in a JSON format.

This doesn't scale well when parsing lots of JavaScript files, so the first thing we did was transform the Node.js side into a long-lived daemon. Now Rust could spawn the required Node.js process just once and keep passing inputs into it and getting responses back as individual messages.

Running in the cloud

While the Node.js solution worked fairly well after these optimisations, shipping both a Node.js instance and a native bundle to production requires some effort. It's also potentially risky and requires manual sandboxing of both processes to make sure we don't accidentally start executing malicious code.

On the other hand, the only thing we needed from Node.js was the ability to run the JavaScript parser code. And we already have an isolated JavaScript engine running in the cloud - Cloudflare Workers! By additionally compiling the native Rust encoder to Wasm (which is quite easy with the native toolchain and wasm-bindgen), we can even run both parts of the code in the same process, making cold starts and communication much faster than in the previous model.

Optimising data transfer

The next logical step is to reduce the overhead of data transfer. JSON worked fine for communication between separate processes, but with a single process we should be able to retrieve the required bits directly from the JavaScript-based AST.

To attempt this, first of all, we needed to move away from direct JSON usage to something more generic that would allow us to support various input formats. The Rust ecosystem already has an amazing serialisation framework for that - Serde.

Aside from allowing us to be more flexible in regard to the inputs, rewriting to Serde helped an existing native use case too. Now, instead of parsing JSON into an intermediate representation and then walking through it, all the native typed AST structures can be deserialized directly from the stdout pipe of the Node.js process in a streaming manner. This significantly improved both CPU usage and memory pressure.

But there is one more thing we can do: instead of serializing and deserializing from an intermediate format (let alone a text format like JSON), we should be able to operate [almost] directly on JavaScript values, saving memory and repetitive work.

How is this possible? wasm-bindgen provides a type called JsValue that stores a handle to an arbitrary value on the JavaScript side. This handle internally contains an index into a predefined array.

Each time a JavaScript value is passed to the Rust side as a result of a function call or a property access, it's stored in this array and an index is sent to Rust. The next time Rust wants to do something with that value, it passes the index back and the JavaScript side retrieves the original value from the array and performs the required operation.

By reusing this mechanism, we could implement a Serde deserializer that requests only the required values from the JS side and immediately converts them to their native representation.
It's now open-sourced at https://github.com/cloudflare/serde-wasm-bindgen.

At first, we got much worse performance out of this due to the overhead of more frequent calls between 1) Wasm and JavaScript - SpiderMonkey has improved these recently, but other engines still lag behind - and 2) JavaScript and C++, which also can't be optimised well in most engines.

The JavaScript <-> C++ overhead comes from the usage of TextEncoder to pass strings between JavaScript and Wasm in wasm-bindgen, and, indeed, it showed up as the largest cost in the benchmark profiles. This wasn't surprising - after all, strings can appear not only in the value payloads, but also in property names, which have to be serialized and sent between JavaScript and Wasm over and over when using a generic JSON-like structure.

Luckily, because our deserializer doesn't have to be compatible with JSON anymore, we can use our knowledge of Rust types and cache all the serialized property names as JavaScript value handles just once, and then keep reusing them for further property accesses.

This, combined with some changes to wasm-bindgen which we have upstreamed, allows our deserializer to be up to 3.5x faster in benchmarks than the original Serde support in wasm-bindgen, while saving ~33% off the resulting code size. Note that for string-heavy data structures it might still be slower than the current JSON-based integration, but the situation is expected to improve over time when the reference types proposal lands natively in Wasm.

After implementing and integrating this deserializer, we used the wasm-pack plugin for Webpack to build a Worker with both Rust and JavaScript parts combined and shipped it to some test zones.

Show me the numbers

Keep in mind that this proposal is in very early stages, and current benchmarks and demos are not representative of the final outcome (which should improve the numbers much further).

As mentioned earlier, BinaryAST can mark functions that should be parsed lazily ahead of time. By using different levels of lazification in the encoder (https://github.com/binast/binjs-ref/blob/b72aff7dac7c692a604e91f166028af957cdcda5/crates/binjs_es6/src/lazy.rs#L43) and running tests against some popular JavaScript libraries, we found the following speed-ups.

Level 0 (no functions are lazified)

With lazy parsing disabled in both parsers we got a raw parsing speed improvement of between 3 and 10%.

| Name | Source size (kb) | JavaScript parse time (average ms) | BinaryAST parse time (average ms) | Diff (%) |
| --- | --- | --- | --- | --- |
| React | 20 | 0.403 | 0.385 | -4.56 |
| D3 (v5) | 240 | 11.178 | 10.525 | -6.018 |
| Angular | 180 | 6.985 | 6.331 | -9.822 |
| Babel | 780 | 21.255 | 20.599 | -3.135 |
| Backbone | 32 | 0.775 | 0.699 | -10.312 |
| wabtjs | 1720 | 64.836 | 59.556 | -8.489 |
| Fuzzball (1.2) | 72 | 3.165 | 2.768 | -13.383 |

Level 3 (functions up to 3 levels deep are lazified)

But with the lazification set to skip nested functions of up to 3 levels, we see much more dramatic improvements in parsing time, between 90 and 97%. As mentioned earlier in the post, BinaryAST makes lazy parsing essentially free by completely skipping over the marked functions.

| Name | Source size (kb) | JavaScript parse time (average ms) | BinaryAST parse time (average ms) | Diff (%) |
| --- | --- | --- | --- | --- |
| React | 20 | 0.407 | 0.032 | -92.138 |
| D3 (v5) | 240 | 11.623 | 0.224 | -98.073 |
| Angular | 180 | 7.093 | 0.680 | -90.413 |
| Babel | 780 | 21.100 | 0.895 | -95.758 |
| Backbone | 32 | 0.898 | 0.045 | -94.989 |
| wabtjs | 1720 | 59.802 | 1.601 | -97.323 |
| Fuzzball (1.2) | 72 | 2.937 | 0.089 | -96.970 |

All the numbers are from manual tests on a Linux x64 Intel i7 with 16GB of RAM.

While these synthetic benchmarks are impressive, they are not representative of real-world scenarios.
Normally you will use at least some of the loaded JavaScript during startup. To check this scenario, we decided to test some realistic pages and demos on desktop and mobile Firefox and found speed-ups in page loads too.

For a sample application (https://github.com/cloudflare/binjs-demo, https://serve-binjs.that-test.site/) which weighed in at around 1.2 MB of JavaScript, we got the following numbers for initial script execution:

| Device | JavaScript | BinaryAST |
| --- | --- | --- |
| Desktop | 338ms | 314ms |
| Mobile (HTC One M8) | 2019ms | 1455ms |

A video of the demo shows the improvement as seen by a user on mobile Firefox (in this case, the entire page startup time).

The next step is to start gathering data on real-world websites, while improving the underlying format.

How do I test BinaryAST on my website?

We've open-sourced our Worker so that it can be installed on any Cloudflare zone: https://github.com/binast/binjs-ref/tree/cf-wasm.

One thing to currently be wary of is that, even though the result gets stored in the cache, the initial encoding is still an expensive process and might easily hit CPU limits on any non-trivial JavaScript files and fall back to the unencoded variant. We are working to improve this situation by releasing the BinaryAST encoder as a separate feature with more relaxed limits in the following few days.

Meanwhile, if you want to play with BinaryAST on larger real-world scripts, an alternative option is to use the static binjs_encode tool from https://github.com/binast/binjs-ref to pre-encode JavaScript files ahead of time. Then, you can use a Worker from https://github.com/cloudflare/binast-cf-worker to serve the resulting BinaryAST assets when supported and requested by the browser.

On the client side, you'll currently need to download Firefox Nightly, go to about:config and enable unrestricted BinaryAST support via the relevant preferences. Now, when opening a website with either of the Workers installed, Firefox will get BinaryAST instead of JavaScript automatically.

Summary

The amount of JavaScript in modern apps is presenting performance challenges for all consumers. Engine vendors are experimenting with different ways to improve the situation - some are focusing on raw decoding performance, some on parallelizing operations to reduce overall latency, some are researching new optimised formats for data representation, and some are inventing and improving protocols for network delivery.

No matter which one it is, we all have a shared goal of making the Web better and faster. On Cloudflare's side, we're always excited about collaborating with all the vendors and combining various approaches to bring that goal closer with every step.

How AWS helps our Customers to go Global – Report from Korea

Amazon Web Services Blog -

Amazon Web Services Korea LLC (AWS Korea) opened an office in Seoul, South Korea in 2012. This office has educated and supported many customers, from startups to large enterprises. Owing to high customer demand, we launched our Asia Pacific (Seoul) Region with two Availability Zones and two edge locations in January 2016. This Region has given AWS customers in Korea low-latency access to our suite of AWS infrastructure services.

Andy Jassy, CEO of Amazon Web Services, announcing the launch of the Seoul Region at AWS Cloud 2016

Following this launch, Amazon CloudFront announced two new edge locations and one edge cache: the third edge location in May 2016, and the fourth in February 2018. CloudFront’s expansion across Korea further improves the availability and performance of content delivery to users in the region.

Today I am happy to announce that AWS added a third Availability Zone (AZ) to the AWS Asia Pacific (Seoul) Region to support the high demand of our growing Korean customer base. This third AZ provides customers with additional flexibility to architect scalable, fault-tolerant, and highly available applications in AWS Asia Pacific (Seoul), and will support additional AWS services in Korea. This launch brings AWS’s global total to 66 AZs within 21 geographic Regions around the world. AZs located in AWS Regions consist of one or more discrete data centers, each with redundant power, networking, and connectivity, and each housed in separate facilities.

AWS now serves tens of thousands of active customers in Korea, ranging from startups and enterprises to educational institutions. One example that reflects this demand is AWS Summit Seoul 2019, part of our commitment to investing in education. More than 16,000 builders attended, a greater than tenfold increase from the 1,500 attendees of our first Summit in 2015.

AWS Summit 2018 – a photo of the keynote by Dr. Werner Vogels, CTO of Amazon.com

So, how have Korean customers migrated to the AWS Cloud, and what has motivated them? They have learned that the AWS Cloud is the new normal in the IT industry, and quick adoption in their businesses has allowed them to regain global competitiveness. Let us look at some examples of how our customers are utilizing the benefits of the broad and deep AWS Cloud platform to take their services from Korea to the global market.

Do you know the Korean Wave? The Korean Wave refers to the rise in global popularity of South Korean culture, such as K-Pop and K-Dramas. The top three broadcasting companies in Korea (KBS, MBC, and SBS) use AWS. They co-invested to found the Content Alliance Platform (CAP), which launched POOQ, an OTT service offering real-time broadcasting of TV programs, including popular K-Dramas, to more than 600,000 subscribers; it has been able to reduce buffer times on its streaming services by 20 percent. CAP also used AWS’s video processing and delivery services to stream Korea’s largest sports event, the PyeongChang 2018 Olympic Winter Games.

K-Pop fans at the KCON 2016 concert in France – Wikipedia

SM Entertainment is a South Korean entertainment company at the forefront of K-Pop, with artists such as NCT 127, EXO, Super Junior, and Girls’ Generation. The company uses AWS to deliver its websites and mobile applications. By using AWS, the company was able to scale to support more than 3 million new users of its EXO-L mobile app in three weeks. The company also developed its mobile karaoke app, Everysing, on AWS, saving more than 50 percent in development costs.
The scalability, flexibility, and pay-as-you-go pricing of AWS encouraged them to develop more mobile apps.

Global Enterprises on the Cloud

Korean enterprises have rapidly adopted the AWS Cloud to offer scalable, global services while focusing on their own business needs. Samsung Electronics uses the breadth of AWS services to save infrastructure costs and achieve rapid deployments, which provides high availability to customers and allows it to scale its services globally to support Galaxy customers worldwide. For example, Samsung Electronics increased reliability and reduced costs by 40 percent within a year after migrating its 860 TB Samsung Cloud database to AWS. Samsung chose Amazon DynamoDB for its stability, scalability, and low latency to maintain the database used by 300 million Galaxy smartphone users worldwide.

LG Electronics has selected AWS to run mission-critical services for more than 35 million LG Smart TVs across the globe, handling the dramatic instant traffic peaks that come with broadcasting live sports events such as the World Cup and Olympic Games. It also built a new home appliance IoT platform called ThinQ. LG Electronics uses a serverless architecture and secure provisioning on AWS to reduce the development costs for this platform by 80 percent through increased efficiency in managing its developer and maintenance resources.

Recently, Korean Air decided to move its entire infrastructure to AWS over the next three years – including its website, loyalty program, flight operations, and other mission-critical operations – and will shut down its data centers after this migration. “This will enable us to bring new services to market faster and more efficiently, so that customer satisfaction continues to increase,” said Kenny Chang, CIO of Korean Air.

AWS customers in Korea – from startups to enterprises across industries

AI/ML for Traditional Manufacturers

AWS is helping Korean manufacturing companies realize the benefits of digitalization and regain global competitiveness by leveraging the collective experience gained from working with customers and partners around the world. Kia Motors produces three million vehicles a year for customers worldwide. It uses Amazon Rekognition and Amazon Polly to develop a car log-in feature that combines face analysis and voice services. Introduced at CES 2018, this system welcomes drivers and adjusts settings such as seating, mirrors, and in-vehicle infotainment based on individual preferences to create a personalized driving experience.

Coway, a Korean home appliance company, uses AWS for IoCare, its IoT service for tens of thousands of air and water purifiers. It migrated IoCare from on-premises to AWS for speed and efficiency in handling increasing traffic as the business grew. Coway uses AWS managed services such as AWS IoT, Amazon Kinesis, Amazon DynamoDB, AWS Lambda, Amazon RDS, and Amazon ElastiCache, and has also integrated Alexa Skills, backed by AWS Lambda, with its high-end Airmega air purifier for the global market.

Play Amazing Games

AWS has transformed the nature of Korean gaming companies, allowing them to autonomously launch and expand their businesses globally without help from local publishers. As a result, the top 15 gaming companies in Korea are currently using AWS, including Nexon, NC Soft, Krafton, Netmarble, and Kakao Games. Krafton is the developer of the hit video game PlayerUnknown’s Battlegrounds (PUBG), which was developed on AWS in less than 18 months.
The game uses AWS Lambda, Amazon SQS, and AWS CodeDeploy for its core backend service, Amazon DynamoDB as its primary game database, and Amazon Redshift as its data analytics platform. PUBG broke records upon release, with more than 3 million concurrent players connected to the game.

Nexon is a top Korean gaming company producing hit mobile games such as Heroes of Incredible Tales (HIT). It achieved cost savings of more than 30 percent on global infrastructure management and can now launch new games more quickly by using AWS. Nexon uses Amazon DynamoDB for its game database and first started using AWS to respond to unpredictable spikes in user demand.

Startups Going Global

Lots of hot startups in Korea are using AWS to grow in the local market, but here are some great examples of companies going global even though they are based in Korea.

Azar, Hyperconnect’s video-based social discovery mobile app, has recorded 300 million downloads and is now widely accessible in over 200 countries around the world, with 20 billion cumulative matches in the last year. To overcome the complex matching issues involved in providing reliable video chats between users, Hyperconnect uses various AWS services efficiently: Amazon EC2, Amazon RDS, and Amazon SES to save costs in managing its global infrastructure, and Amazon S3 and Amazon CloudFront to store and deliver service data to global users faster. It also uses Amazon EMR to manage the vast amount of data generated by 40 million matches per day.

SendBird provides chat APIs and a messaging SDK used in more than 10,000 apps globally, processing about 700 million messages per month. It uses AWS global Regions to provide a top-class customer experience by keeping latency under 100 ms everywhere in the world. Amazon ElastiCache is used to handle large volumes of chat data, and all data are stored encrypted in Amazon Aurora for integrity and reliability. Server log data are analyzed and processed using Amazon Kinesis Data Firehose as well as Amazon Athena.

Freedom for the Local Financial Industry

We also see Korean enterprises in the financial services industry leverage AWS to digitally transform their businesses through data analytics, fintech, and digital banking initiatives. Financial services companies in Korea are leveraging AWS to deliver an enhanced customer experience; examples of these customers include Shinhan Financial Group, KB Kookmin Bank, Kakao Pay, Mirae Asset, and Yuanta Securities.

Shinhan Financial Group achieved a 50 percent cost reduction and a 20 percent response-time reduction after migrating its North American and Japanese online banking services to AWS. Shinhan’s new Digital Platform unit now uses Amazon ECS, Amazon CloudFront, and other services to reduce development time for new applications by 50 percent. Shinhan is currently pursuing an all-in migration to AWS, including moving more than 150 workloads.

Hyundai Card, a top Korean credit card company and a financial subsidiary of the Hyundai Kia Motor Group, built a dev/test platform called Playground on AWS so its development team can prototype new software and services. The company uses Amazon EMR, AWS Glue, and Amazon Kinesis for cost and architecture optimization. Playground allows quick testing of new projects without waiting for resource allocation from on-premises infrastructure, reducing the development period by three to four months.

Security and Compliance

At AWS, the security, privacy, and protection of customer data always come first; AWS addresses local requirements as well as global security and compliance standards.
Our most recent example of this commitment is that AWS became the first global cloud service provider to achieve the Korea-Information Security Management System (K-ISMS) certification in December 2017. With this certification, enterprises and organizations across Korea are able to meet their compliance requirements more effectively and accelerate business transformation by using best-in-class technology delivered from the highly secure and reliable AWS Cloud. AWS also completed its first annual surveillance audit for the K-ISMS certification in 2018.

In April 2019, AWS achieved the Multi-Tier Cloud Security Standard (MTCS) Level-3 certification for the Seoul Region, making AWS the first cloud service provider in Korea to do so. With the MTCS, financial services industry (FSI) customers in Korea can accelerate cloud adoption by no longer having to validate 109 controls, as required in the relevant regulations (the Financial Security Institute’s Guideline on Use of Cloud Computing Services in the Financial Industry and the Regulation on Supervision on Electronic Financial Transactions (RSEFT)). AWS also published a workbook for Korean FSI customers, covering those and 32 additional controls from the RSEFT.

How we support and enable Korean customers

AWS Korea has made significant investments in education and training in Korea. Tens of thousands of people, including IT professionals, developers, and students, have been trained in AWS cloud skills over the last two years. AWS Korea also supports community-driven activities to enhance the developer ecosystem of cloud computing in Korea. To date, the AWS Korean User Group has tens of thousands of members, who hold hundreds of meetups across Korea annually.

The AWS Educate program is expected to accelerate Korean students’ capabilities in cloud computing, helping them acquire cloud expertise that is becoming increasingly relevant for their future employment. Dozens of universities, including Sogang University, Yonsei University, and Seoul National University, have joined this program, with thousands of students participating in AWS-related classes and in non-profit e-learning programs such as Like a Lion, a non-profit organization that teaches coding to students.

AWS is building a vibrant cloud ecosystem with hundreds of partners. Systems Integrator (SI) partners include LG CNS, Samsung SDS, Youngwoo Digital, Saltware, NDS, and many others; among them, Megazone, GS Neotek, and Bespin Global are AWS Premier Consulting Partners. Independent Software Vendor (ISV) partners include AhnLab, Hancom, SK Infosec, SendBird, and IGAWorks. These partners help our customers adopt AWS services in their workloads, migrate from on-premises environments, and launch new services.

The customers’ celebration whiteboard for the 5th anniversary of AWS Summit Seoul

Finally, I want to share some of the customer feedback written on our whiteboard at AWS Summit 2019, although it was in Korean. Here is one voice from it: “It made me decide to voluntarily become an AWS customer, to climb onto the shoulders of the giant and see the world.” We will always listen to our customers’ voices and build the broadest and deepest cloud platform so they can leverage it and succeed in both the Korean and global markets.

– Channy Yun; This article was translated into Korean (한국어) on the AWS Korea Blog.

Search at Google I/O 2019

Google Webmaster Central Blog -

Google I/O is our yearly developer conference where we have the pleasure of announcing some exciting new Search-related features and capabilities. A good place to start is Google Search: State of the Union, which explains how to take advantage of the latest capabilities in Google Search. We also gave more details on how JavaScript and Google Search work together and what you can do to make sure your JavaScript site performs well in Search.

Try out new features today

Here are some of the new features, codelabs, and documentation that you can try out today:

Googlebot now runs the latest Chromium rendering engine: This means Googlebot now supports new features, like ES6, IntersectionObserver for lazy-loading, and Web Components v1 APIs (a minimal lazy-loading sketch appears at the end of this post). Googlebot will regularly update its rendering engine. Learn more about the update in our Google Search and JavaScript talk, our blog post, and our updated guidance on how to fix JavaScript issues for Google Search.

How-to & FAQ launched on Google Search and the Assistant: You can get started today by following the developer documentation: How-to and FAQ. We also launched supporting Search Console reports. Learn more about How-to and FAQ in our structured data talk.

Find and listen to podcasts in Search: Last week, we launched the ability to listen to podcasts directly on Google Search when you search for a certain show. In the coming months, we'll start surfacing podcasts in search results based on the content of the podcast, and let users save episodes for listening later. To enable your podcast in Search, follow the Podcast developer documentation.

Try our new codelabs: Check out our new codelabs about how to add structured data, fix a Single Page App for Search, and implement Dynamic Rendering with Rendertron.

Be among the first to test new features

Your help is invaluable to making sure our products work for everyone. We shared some new features that we're still testing and would love your feedback and participation.

Speed report: We're currently piloting the new Speed report in Search Console. Sign up to be a beta tester.

Mini-apps: We announced Mini-apps, which engage users with interactive workflows and live content directly on Search and the Assistant. Submit your idea for the Mini-app Early Adopters Program.

Learn more about what's coming soon

I/O is a place where we get to showcase new Search features, so we're excited to give you a heads-up on what's next on the horizon:

High-resolution images: In the future, you'll be able to opt in to highlight your high-resolution images for your users. Stay tuned for details.

3D and AR in Search: We are working with partners to bring 3D models and AR content to Google Search. Check out what it might look like and stay tuned for more details.

We hope these cool announcements help & inspire you to create even better websites that work well in Search. Should you have any questions, feel free to post in our webmaster help forums, contact us on Twitter, or reach out to us at any of the next events we're at.

Posted by Lizzi Harvey, Technical Writer
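As promised above, here is a minimal lazy-loading sketch using the IntersectionObserver API that the updated Googlebot can now execute. This is an illustration rather than Google's recommended snippet; the data-src attribute name is an arbitrary choice for the example.

// Images are written as <img data-src="real.jpg"> and only get a real src
// once they approach the viewport, so offscreen images are not downloaded
// up front.
const lazyImages = document.querySelectorAll('img[data-src]');

const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src;  // trigger the actual image download
      obs.unobserve(img);         // each image only needs to be upgraded once
    }
  }
}, { rootMargin: '200px' });      // start loading slightly before it scrolls into view

lazyImages.forEach(img => observer.observe(img));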

Nexcess and BigCommerce Announce eCommerce Partnership

Nexcess Blog -

May 2, 2019 – We’re proud to announce the addition of a new hosting solution to our lineup for merchants: BigCommerce. This new addition allows us to provide merchants with multiple options for creating, customizing, and delivering their online stores. As a powerful, headless eCommerce solution, BigCommerce allows merchants to employ a robust product catalog… Continue reading →

Live video just got more live: Introducing Concurrent Streaming Acceleration

CloudFlare Blog -

Today we’re excited to introduce Concurrent Streaming Acceleration, a new technique for reducing the end-to-end latency of live video on the web when using Stream Delivery.

Let’s dig into live-streaming latency, why it’s important, and what folks have done to improve it.

How “live” is “live” video?

Live streaming makes up an increasing share of video on the web. Whether it’s a TV broadcast, a live game show, or an online classroom, users expect video to arrive quickly and smoothly. And the promise of “live” is that the user is seeing events as they happen. But just how close to “real-time” is “live” Internet video?

Delivering live video on the Internet is still hard and adds lots of latency:

The content source records video and sends it to an encoding server;
The origin server transforms this video into a format like DASH, HLS or CMAF that can be delivered to millions of devices efficiently;
A CDN is typically used to deliver encoded video across the globe;
Client players decode the video and render it on the screen.

And all of this is under a time constraint – the whole process needs to happen in a few seconds, or video experiences will suffer. We call the total delay between when the video was shot and when it can be viewed on an end-user’s device “end-to-end latency” (think of it as the time from the camera lens to your phone’s screen).

Traditional segmented delivery

Video formats like DASH, HLS, and CMAF work by splitting video into small files, called “segments”. A typical segment duration is 6 seconds.

If a client player needs to wait for a whole 6s segment to be encoded, sent through a CDN, and then decoded, it can be a long wait! It takes even longer if you want the client to build up a buffer of segments to protect against any interruptions in delivery. A typical player buffer for HLS is 3 segments:

Clients may have to buffer three 6-second chunks, introducing at least 18s of latency

When you consider encoding delays, it’s easy to see why live streaming latency on the Internet has typically been about 20-30 seconds. We can do better.

Reduced latency with chunked transfer encoding

A natural way to solve this problem is to enable client players to start playing the chunks while they’re downloading, or even while they’re still being created. Making this possible requires a clever bit of cooperation to encode and deliver the files in a particular way, known as “chunked encoding.” This involves splitting up segments into smaller, bite-sized pieces, or “chunks”. Chunked encoding can typically bring live latency down to 5 or 10 seconds.

Confusingly, the word “chunk” is overloaded to mean two different things:

CMAF or HLS chunks, which are small pieces of a segment (typically 1s) that are aligned on key frames;
HTTP chunks, which are just a way of delivering any file over the web.

Chunked Encoding splits segments into shorter chunks

HTTP chunks are important because web clients have limited ability to process streams of data. Most clients can only work with data once they’ve received the full HTTP response, or at least a complete HTTP chunk. By using HTTP chunked transfer encoding, we enable video players to start parsing and decoding video sooner (see the client-side sketch at the end of this post).

CMAF chunks are important so that decoders can actually play the bits that are in the HTTP chunks. Without encoding video in a careful way, decoders would have random bits of a video file that can’t be played.

CDNs can introduce additional buffering

Chunked encoding with HLS and CMAF is growing in use across the web today.
Part of what makes this technique great is that HTTP chunked encoding is widely supported by CDNs – it’s been part of the HTTP spec for 20 years.

CDN support is critical because it allows low-latency live video to scale up and reach audiences of thousands or millions of concurrent viewers – something that’s currently very difficult to do with other, non-HTTP based protocols.

Unfortunately, even if you enable chunking to optimise delivery, your CDN may be working against you by buffering the entire segment. To understand why, consider what happens when many people request a live segment at the same time:

If the file is already in cache, great! CDNs do a great job at delivering cached files to huge audiences. But what happens when the segment isn’t in cache yet? Remember – this is the typical request pattern for live video!

Typically, CDNs are able to “stream on cache miss” from the origin. That looks something like this:

But again – what happens when multiple people request the file at once? CDNs typically need to pull the entire file into cache before serving additional viewers:

Only one viewer can stream video, while other clients wait for the segment to buffer at the CDN

This behavior is understandable. CDN data centers consist of many servers. To avoid overloading origins, these servers typically coordinate amongst themselves using a “cache lock” (mutex) that allows only one server to request a particular file from origin at a given time. A side effect of this is that while a file is being pulled into cache, it can’t be served to any user other than the first one that requested it. Unfortunately, this cache lock also defeats the purpose of using chunked encoding!

To recap thus far:

Chunked encoding splits up video segments into smaller pieces;
This can reduce end-to-end latency by allowing chunks to be fetched and decoded by players, even while segments are being produced at the origin server;
Some CDNs neutralize the benefits of chunked encoding by buffering entire files inside the CDN before they can be delivered to clients.

Cloudflare’s solution: Concurrent Streaming Acceleration

As you may have guessed, we think we can do better. Put simply, we now have the ability to deliver un-cached files to multiple clients simultaneously while we pull the file once from the origin server.

This sounds like a simple change, but there’s a lot of subtlety to do this safely. Under the hood, we’ve made deep changes to our caching infrastructure to remove the cache lock and enable multiple clients to be able to safely read from a single file while it’s still being written.

The best part is – all of Cloudflare now works this way! There’s no need to opt in, or even make a config change to get the benefit.

We rolled this feature out a couple months ago and have been really pleased with the results so far. We measure success by the “cache lock wait time,” i.e. how long a request must wait for other requests – a direct component of Time To First Byte. One OTT customer saw this metric drop from 1.5s at P99 to nearly 0, as expected:

This directly translates into a 1.5-second improvement in end-to-end latency. Live video just got more live!

Conclusion

New techniques like chunked encoding have revolutionized live delivery, enabling publishers to deliver low-latency live video at scale.
Concurrent Streaming Acceleration helps you unlock the power of this technique at your CDN, potentially shaving precious seconds off end-to-end latency.

If you’re interested in using Cloudflare for live video delivery, contact our enterprise sales team.

And if you’re interested in working on projects like this and helping us improve live video delivery for the entire Internet, join our engineering team!
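As referenced earlier, here is a minimal client-side sketch of how a web player can take advantage of chunked delivery: it reads an HTTP response incrementally with fetch and a ReadableStream, appending each piece to a Media Source Extensions buffer instead of waiting for the whole segment to download. This is an illustration only, not Cloudflare's or any particular player's code; the segment URL and the codec string are placeholders.

// Stream a CMAF/fMP4 segment into an MSE SourceBuffer as HTTP chunks arrive,
// rather than waiting for the complete segment download.
const video = document.querySelector('video');
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', async () => {
  const sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.64001f"');

  const response = await fetch('/live/segment-42.m4s');
  const reader = response.body.getReader();

  // Each `value` is a Uint8Array of bytes received so far; appending it lets
  // the decoder start working before the segment is fully delivered.
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    await appendChunk(sourceBuffer, value);
  }
});

// SourceBuffer.appendBuffer is asynchronous, so wait for 'updateend' before
// appending the next chunk.
function appendChunk(sourceBuffer, chunk) {
  return new Promise(resolve => {
    sourceBuffer.addEventListener('updateend', resolve, { once: true });
    sourceBuffer.appendBuffer(chunk);
  });
}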

The Career Pivot: Beat Burnout With A Job That’s Right for You

LinkedIn Official Blog -

Looking for a new job? You’re in the driver's seat. Unemployment is at a near 50-year low, and there are more than 20 million jobs available on LinkedIn right now. However, nearly half of all professionals say they don’t know what their career path should look like. If you’re not satisfied with where you are today, or not sure which way you should be steering, it could be time to take inventory of what you want professionally and make a switch. Perhaps you’re looking to change the type of work...

Email Marketing Basics: Tips to Launching A Successful Campaign

InMotion Hosting Blog -

No matter who you are or where you are, you’ve probably got an email account (or several). Even though we live in the age of social media apps and new digital landscapes, the email account is still the starting point for a myriad of services we subscribe to and buy from. This is why it’s critically important to adopt a well-developed email marketing strategy. A successful email campaign involves much more than technical solutions. Continue reading Email Marketing Basics: Tips to Launching A Successful Campaign at The Official InMotion Hosting Blog.

How to Write Blog Posts for Your Buyer Personas

HostGator Blog -

The post How to Write Blog Posts for Your Buyer Personas appeared first on HostGator Blog.

Quick quiz for business bloggers: In one sentence, describe the audience for your blog. If you had your answer ready, you’re ready to write must-read content for your customers. If you had to stop and think about who your audience is, or if you said “everybody,” it’s time to get a clear picture of your readers so you can create more effective content. In both cases, the key is to research, build, and use buyer personas.

Write for a Specific Persona

If you aced the quiz, it’s because you have a customer persona. Personas are like character sketches for marketers and bloggers. They define types of audience members by their interests, age range, online behaviors, and shopping habits. You create personas based on data from your site analytics, social media monitoring, site-visitor surveys, and interviews with your readers and customers. If you’re just starting out, research the types of people you’d like to have in your audience. Start with the persona that represents the largest part of your audience.

Let’s say you have a blog for your hobby farming supply business. Your primary persona might be a retired banking executive (let’s call her Daisy) in her early 60s whose partner is also retired. She recently bought a vintage farmhouse on a small acreage. Her interests are raising flowers and herbs for market and she’d also like to set up a duck pond and a rental cottage on her property. Daisy likes to carefully research purchases and she prioritizes quality over price.

Here’s a sample persona template you can use to create your own website personas:

Speak the Same Language as Your Customers

Whoever your persona is, write in a voice that they’ll understand. Let’s stick with the hobby farm supply example for a bit. Maybe your background is in agribusiness. Daisy, your retired banking-executive persona, won’t know the ag jargon that you do. She searches for terms like “how much to feed ducks,” not “how to formulate balanced poultry rations.” Include the keywords she’s likely to use in your posts to show her you’re speaking to her, so she’ll stick around. Bonus: Better SEO is a natural outcome of using the phrases your personas use.

Not sure how your persona talks about or searches for their interests? Look at your blog and social media comments and email messages from your customers. Monitor your Google Search Console data to see which keyphrases bring readers to your blog. And check out other blogs, vlogs, and podcasts in your niche. The goal isn’t to copy anyone else’s voice but to connect with prospective customers by speaking their language.

Tailor Post Length to Your Audience and Your Goals

How long should your business blog posts be? That depends on your goals for each post and the time your persona has to read it. Daisy is retired and has time to focus on her interests, but an audience of mid-career professionals with small children will have less time to read. Short and long posts both have their place on your posting schedule, but you’ll want to skew toward what your audience prefers.

The Case for Short Blog Posts

Short blog posts of at least 300 words are a great way to tackle niche topics. That’s good for readers who want specific information. It’s also good for SEO, because narrowly focused posts can help you rank well for longtail search phrases.
For example, if the persona you’re writing for is a pet rabbit owner, it’s going to be hard to rank well for “rabbit care,” which generates more than 443 million results. By going into more detail with posts on “elderly rabbit grooming,” “safe chew toys for rabbits,” “how to build a rabbit castle” and so on, you’re more likely to reach readers searching for those topics. You can later compile all your short posts on one topic into a PDF to give away to readers who join your list.

The Case for Long Blog Posts

Long posts—1,000 words and more—are more challenging to write and require a bigger time commitment from you and your customers. Long content typically does well in search results, so it’s worth your time to create at least a few. These can be mega-posts that combine and expand on previous short posts. They can also be new content, like a list or a how-to guide, to promote an upcoming launch or new product. For example, if you’re preparing to start selling an online course, a long post that includes a sample of the class material can help prospective students decide to register.

Take your time writing and editing long posts to make sure they deliver what your personas want to know, using the same language they do. And if you’re planning a product launch, review your current site hosting plan to make sure it can handle launch-related spikes in traffic. You may want to upgrade to a more powerful plan like HostGator Cloud Hosting for more speed and bandwidth, and add on CodeGuard daily backup service to easily restore your site if your launch-prep site changes temporarily break things.

Pace Your Blog Posts Properly

Ask your readers how often they want to hear from you, then build a calendar to match your persona’s preferences. If you don’t have a big audience yet, remember that most people are happy to read one or two new posts a week from a blog they value. Less than that is probably okay, too. Too-frequent posts may overwhelm subscribers and lead them to drop your blog. Save daily posting for when you can hire help, have a large audience, and have specific marketing goals that require lots of new content.

Keep an eye on your blog, email, and sales metrics. Over time, you should see how your publishing schedule affects page views, time on the site, email opens and clickthroughs, unsubscribes, and conversions. Tweak the schedule if you need to so your readers stick around.

Close with a Call to Action

What separates good bloggers from great bloggers? Great bloggers who build thriving online communities and businesses have a clear goal for each blog post before they write it. Before you write, decide what you want your readers to do when they reach the end of your post. Do you want them to join your email list? Share your post? Buy your duck brooders? Once you know, ask them to do it. Don’t assume it’s obvious. Life is filled with distractions, so make your calls to action clear: Join the list. Get the book. Register now. Reserve your appointment.

There’s one other benefit to building personas before you blog. It helps to make your posts more conversational and builds rapport with your audience. So, whenever you’re ready to write, think about your persona, what they want to know, how much time they have to read, and the keywords they search for. Then you’re ready to write posts that will connect.

Find the post on the HostGator Blog

WP Engine Launches Cloudflare Stream Video Plugin For WordPress

WP Engine -

AUSTIN, Texas – May X, 2018 – WP Engine, the WordPress Digital Experience Platform (DXP), today announced the launch of the Cloudflare Stream Video Plugin for WordPress. The plugin was built by WP Engine in partnership with Cloudflare to make it incredibly easy for WordPress users to publish and stream performance optimized videos on WordPress… The post WP Engine Launches Cloudflare Stream Video Plugin For WordPress appeared first on WP Engine.

Bringing Simplicity to Video Streaming

WP Engine -

By 2022, video will make up 82 percent of all IP traffic—a fourfold increase from 2017. This rise can be attributed in great part to younger generations like Gen Z, who are increasingly turning to video as their preferred method for consuming content online. Some of this has to do with the way Gen Z… The post Bringing Simplicity to Video Streaming appeared first on WP Engine.

Announcing Cloudflare Image Resizing: Simplifying Optimal Image Delivery

CloudFlare Blog -

In the past three years, the amount of image data on the median mobile webpage has doubled. Growing images translate directly to users hitting data transfer caps, experiencing slower websites, and even leaving if a website doesn’t load in a reasonable amount of time. The crime is that many of these images are slow because they are larger than they need to be, sending data over the wire which has absolutely no (positive) impact on the user’s experience.

To provide a concrete example, let’s consider this photo of Cloudflare’s Lava Lamp Wall: On the left you see the photo, scaled to 300 pixels wide. On the right you see the same image delivered in its original high resolution, scaled in a desktop web browser. On a regular-DPI screen, they both look the same, yet the image on the right takes more than twenty times more data to load. Even for the best and most conscientious developers, resizing every image to handle every possible device geometry consumes valuable time, and it’s exceptionally easy to forget to do this resizing altogether.

Today we are launching a new product, Image Resizing, to fix this problem once and for all.

Announcing Image Resizing

With Image Resizing, Cloudflare adds another important product to its suite of available image optimizations. This product allows customers to perform a rich set of key actions on images:

Resize - The source image will be resized to the specified height and width. This action allows multiple different sized variants to be created for each specific use.
Crop - The source image will be resized to a new size that does not maintain the original aspect ratio, and a portion of the image will be removed. This can be especially helpful for headshots and product images where different formats must be achieved by keeping only a portion of the image.
Compress - The source image will have its file size reduced by applying lossy compression. This should be used when slight quality reduction is an acceptable trade for file size reduction.
Convert to WebP - When the user’s browser supports it, the source image will be converted to WebP. Delivering a WebP image takes advantage of the modern, highly optimized image format.

By using a combination of these actions, customers store a single high quality image on their server, and Image Resizing can be leveraged to create specialized variants for each specific use case. Without any additional effort, each variant will also automatically benefit from Cloudflare’s global caching.

Examples

Ecommerce Thumbnails

Ecommerce sites typically store a high-quality image of each product. From that image, they need to create different variants depending on how that product will be displayed. One example is creating thumbnails for a catalog view. Using Image Resizing, if the high quality image is located here:

https://example.com/images/shoe123.jpg

This is how to display a 75x75 pixel thumbnail using Image Resizing:

<img src="/cdn-cgi/image/width=75,height=75/images/shoe123.jpg">

Responsive Images

When tailoring a site to work on various device types and sizes, it’s important to always use correctly sized images. This can be difficult when images are intended to fill a particular percentage of the screen. To solve this problem, <img srcset sizes> can be used.

Without Image Resizing, multiple versions of the same image would need to be created and stored.
In this example, a single high quality copy of hero.jpg is stored, and Image Resizing is used to resize it for each particular size as needed:

<img width="100%"
     srcset=" /cdn-cgi/image/fit=contain,width=320/assets/hero.jpg 320w,
              /cdn-cgi/image/fit=contain,width=640/assets/hero.jpg 640w,
              /cdn-cgi/image/fit=contain,width=960/assets/hero.jpg 960w,
              /cdn-cgi/image/fit=contain,width=1280/assets/hero.jpg 1280w,
              /cdn-cgi/image/fit=contain,width=2560/assets/hero.jpg 2560w, "
     src="/cdn-cgi/image/width=960/assets/hero.jpg">

Enforce Maximum Size Without Changing URLs

Image Resizing is also available from within a Cloudflare Worker. Workers allow you to write code which runs close to your users all around the world. For example, you might wish to add Image Resizing to your images while keeping the same URLs. Your users and clients would be able to use the same image URLs as always, but the images will be transparently modified in whatever way you need.

You can install a Worker on a route which matches your image URLs, and resize any images larger than a limit:

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  // Ask Image Resizing to scale down anything larger than 800x800;
  // smaller images are passed through untouched.
  return fetch(request, {
    cf: { image: { width: 800, height: 800, fit: 'scale-down' } }
  })
}

As a Worker is just code, it is also easy to run this Worker only on URLs with image extensions, or even to only resize images being delivered to mobile clients (a sketch of that idea appears at the end of this post).

Cloudflare and Images

Cloudflare has a long history of building tools to accelerate images. Our caching has always helped reduce latency by storing a copy of images closer to the user. Polish automates options for both lossless and lossy image compression to remove unnecessary bytes from images. Mirage accelerates image delivery based on device type. We are continuing to invest in all of these tools, as they all serve a unique role in improving the image experience on the web.

Image Resizing is different because it is the first image product at Cloudflare to give developers full control over how their images are served. You should choose Image Resizing if you are comfortable defining the sizes you wish your images to be served at in advance or within a Cloudflare Worker.

Next Steps and Simple Pricing

Image Resizing is available today for Business and Enterprise Customers. To enable it, log in to the Cloudflare Dashboard and navigate to the Speed tab. There you’ll find the section for Image Resizing, which you can enable with one click.

This product is included in the Business and Enterprise plans at no additional cost with generous usage limits. Business Customers have a 100k requests per month limit and will be charged $10 for each additional 100k requests per month. Enterprise Customers have a 10M requests per month limit with discounted tiers for higher usage. Requests are defined as a hit on a URI that contains Image Resizing or a call to Image Resizing from a Worker.

Now that you’ve enabled Image Resizing, it’s time to resize your first image.

Using your existing site, store an image here: https://yoursite.com/images/yourimage.jpg

Use this URL to resize that image: https://yoursite.com/cdn-cgi/image/width=100,height=100,quality=75/images/yourimage.jpg

Experiment with changing width=, height=, and quality=.

The instructions above use the Default URL Format for Image Resizing. For details on options, use cases, and compatibility, refer to our Developer Documentation.
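Expanding on the note above, here is a minimal sketch of a Worker that applies resizing only to image URLs, and only for clients whose User-Agent looks like a mobile browser. It is an illustration under those assumptions, not an official Cloudflare example; the extension list and the User-Agent check are deliberately simplistic.

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  const url = new URL(request.url)
  const isImage = /\.(jpe?g|png|gif)$/i.test(url.pathname)
  const userAgent = request.headers.get('User-Agent') || ''
  const isMobile = /Mobi|Android/i.test(userAgent)

  // Only ask Image Resizing to intervene for image URLs requested by
  // mobile clients; everything else is proxied unchanged.
  if (isImage && isMobile) {
    return fetch(request, {
      cf: { image: { width: 640, fit: 'scale-down', quality: 75 } }
    })
  }
  return fetch(request)
}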

Notice of MDS Vulnerabilities

The Rackspace Blog & Newsroom -

On 14 May 2019, Intel released information about a new group of vulnerabilities collectively called Microarchitectural Data Sampling (MDS). Left unmitigated, these vulnerabilities could potentially allow sophisticated attackers to gain access to sensitive data, secrets, and credentials that could allow for privilege escalation and unauthorized access to user data. Our highest priority is protection of […] The post Notice of MDS Vulnerabilities appeared first on The Official Rackspace Blog.

Removal of PHP 5.6 and PHP 7.0 in EasyApache Profiles

cPanel Blog -

Both PHP 5.6 and PHP 7.0 reached End of Life at the beginning of the year, and are no longer receiving any security patches from PHP. With cPanel & WHM Version 80 moving to the current tier, we are also encouraging users to upgrade to supported PHP versions in EasyApache 4. To help with that, we are removing PHP 5.6 and 7.0 from our default EasyApache profiles. This change only impacts servers running our default …
