Corporate Blogs

Optimize Your Reseller Hosting For WordPress

InMotion Hosting Blog -

Reseller hosting is a great opportunity to develop a hosting business of your own, whether you'd like to do it as a side hustle or a full-time job. Our reseller package gives you everything you need, and there's no need to fear working in a shared hosting environment: in theory, you can have multiple customers all running WordPress without worrying about bandwidth. We've compiled a list of steps you can take now to help your customers optimize their WordPress sites so that shared resources are not a problem for anyone. Continue reading Optimize Your Reseller Hosting For WordPress at The Official InMotion Hosting Blog.

Difference Between VPS SSD and VPS Cloud

Reseller Club Blog -

Over the past few posts, we have been writing about Virtual Private Server (VPS) hosting: the different types, the advantages, and the disadvantages. In today's post we'll go a step further and compare the infrastructure your VPS is based on, namely VPS HDD, VPS SSD and VPS Cloud. In our last article, we covered the difference between VPS HDD and VPS SSD. In this article, we will cover what VPS SSD and VPS Cloud are, compare VPS SSD vs VPS Cloud, and figure out which is best for your website.

VPS SSD: VPS SSD is a Virtual Private Server powered by a Solid State Drive (SSD). In this type of hosting, the hosting provider uses a physical SSD disk in their physical server. The advantage of an SSD is that it draws less power while website speed and performance are higher.

VPS Cloud: In VPS Cloud, as the name suggests, your server is set up on a cloud. Your hosting service provider combines the availability of a VPS with the scalability offered by the cloud. The advantage of using the cloud with a VPS is that your load is distributed fairly among the different servers, which ensures better performance and website speed.

Pros and Cons of VPS SSD vs VPS Cloud

Pros:

| Pros of VPS SSD | Pros of VPS Cloud |
| Powered by efficient SSD drives | Powered by scalable cloud architecture |
| Low risk of mechanical failure, as an SSD has no moving parts | No mechanical failure, as everything is stored in the cloud and can be accessed even if one server is down |
| Faster server boot time compared to VPS Cloud | Boot time is slightly slower, but VPS Cloud is instantly scalable |
| Comes with cPanel and is fully managed by the hosting provider | Offers unmanaged hosting, so you can customize it to your own needs |

Cons:

| Cons of VPS SSD | Cons of VPS Cloud |
| Costlier compared to VPS Cloud | Cheaper, but compromises on boot speed and time |
| Read speed is fast, but writes are slower | Read and write speeds are comparatively slower |
| Lasts longer than other VPS hosting plans, but gives no warning signal if it is about to fail, so you can lose your data if real-time backup isn't enabled | cPanel has to be added as an add-on, and only basic support is offered |

Which is best for your website? Both VPS SSD and VPS Cloud are good compared to the traditional/classic VPS, which used an HDD drive for the physical server, and they both have their own sets of advantages and disadvantages. If your website traffic is growing quickly, VPS Cloud is the go-to choice; if you are looking for more speed, VPS SSD is the go-to choice. We at ResellerClub offer VPS SSD with all our VPS Hosting plans and storage space from 20-120 GB. Moreover, with our SSD VPS Hosting you get an intuitive control panel, DDoS protection, easy upgrades and much more. VPS SSD vs VPS Cloud: irrespective of the hosting plan, choosing the right storage and infrastructure for your VPS Hosting depends on your business needs. In the end, it is your content, marketing, and website design that add to the success of your website. Therefore, research more before choosing! The post Difference Between VPS SSD and VPS Cloud appeared first on ResellerClub Blog.

Faster script loading with BinaryAST?

CloudFlare Blog -

JavaScript Cold starts

The performance of applications on the web platform is becoming increasingly bottlenecked by startup (load) time. Large amounts of JavaScript code are required to create the rich web experiences we've become used to. When we look at the total size of JavaScript requested on mobile devices from HTTPArchive, we see that an average page loads 350 KB of JavaScript, while 10% of pages go over the 1 MB threshold. The rise of more complex applications can push these numbers even higher.

While caching helps, popular websites regularly release new code, which makes cold start (first load) times particularly important. With browsers moving to separate caches for different domains to prevent cross-site leaks, the importance of cold starts is growing even for popular subresources served from CDNs, as they can no longer be safely shared.

Usually, when talking about cold start performance, the primary factor considered is raw download speed. However, on modern interactive pages one of the other big contributors to cold starts is JavaScript parsing time. This might seem surprising at first, but makes sense: before starting to execute the code, the engine has to parse the fetched JavaScript, make sure it doesn't contain any syntax errors, and then compile it to the initial bytecode. As networks become faster, parsing and compilation of JavaScript could become the dominant factor.

Device capability (CPU or memory performance) is the most important factor in the variance of JavaScript parsing times and, correspondingly, the time to application start. A 1 MB JavaScript file will take on the order of 100 ms to parse on a modern desktop or high-end mobile device, but can take over a second on an average phone (Moto G4).

A more detailed post on the overall cost of parsing, compiling and executing JavaScript shows how JavaScript boot time can vary across mobile devices. For example, in the case of news.google.com, it can range from 4 s on a Pixel 2 to 28 s on a low-end device.

While engines continuously improve raw parsing performance, with V8 in particular doubling it over the past year, as well as moving more things off the main thread, parsers still have to do lots of potentially unnecessary work that consumes memory and battery and might delay the processing of useful resources.

The "BinaryAST" Proposal

This is where BinaryAST comes in. BinaryAST is a new over-the-wire format for JavaScript, proposed and actively developed by Mozilla, that aims to speed up parsing while keeping the semantics of the original JavaScript intact. It does so by using an efficient binary representation for code and data structures, as well as by storing and providing extra information to guide the parser ahead of time.

The name comes from the fact that the format stores the JavaScript source as an AST encoded into a binary file. The specification lives at tc39.github.io/proposal-binary-ast and is being worked on by engineers from Mozilla, Facebook, Bloomberg and Cloudflare.

"Making sure that web applications start quickly is one of the most important, but also one of the most challenging parts of web development. We know that BinaryAST can radically reduce startup time, but we need to collect real-world data to demonstrate its impact.
Cloudflare's work on enabling use of BinaryAST with Cloudflare Workers is an important step towards gathering this data at scale."

– Till Schneidereit, Senior Engineering Manager, Developer Technologies, Mozilla

Parsing JavaScript

For regular JavaScript code to execute in a browser, the source is parsed into an intermediate representation known as an AST that describes the syntactic structure of the code. This representation can then be compiled into bytecode or native machine code for execution. A simple example of adding two numbers can be represented in an AST as a small tree: a binary expression node whose children are the two numeric literals.

Parsing JavaScript is not an easy task; no matter which optimisations you apply, it still requires reading the entire text file char by char, while tracking extra context for syntactic analysis.

The goal of BinaryAST is to reduce the complexity and the amount of work the browser parser has to do overall, by providing additional information and context at the time and place where the parser needs it. To execute JavaScript delivered as BinaryAST, only a few steps are required. Another benefit of BinaryAST is that it makes it possible to parse only the critical code necessary for start-up, completely skipping over the unused bits. This can dramatically improve the initial loading time.

This post will now describe some of the challenges of parsing JavaScript in more detail, explain how the proposed format addresses them, and how we made it possible to run its encoder in Workers.

Hoisting

JavaScript relies on hoisting for all declarations: variables, functions, classes. Hoisting is a property of the language that allows you to declare items after the point where they're syntactically used. Let's take the following example:

function f() { return g(); }
function g() { return 42; }

Here, when the parser is looking at the body of f, it doesn't know yet what g is referring to: it could be an already existing global function or something declared further down in the same file, so it can't finalise parsing of the original function and start the actual compilation.

BinaryAST fixes this by storing all the scope information and making it available upfront, before the actual expressions, as shown by the difference between the initial AST and the enhanced AST in a JSON representation:
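The snippet below is a simplified, illustrative sketch of that difference, not the actual BinaryAST encoding: in the enhanced form, the declared names travel ahead of the function bodies, so the parser already knows what g refers to.

```js
// Illustrative only -- a simplified, ESTree-flavoured sketch, not the real
// BinaryAST wire format.

// Initial AST: while parsing the body of `f`, nothing yet says what `g` is.
const initialAST = {
  type: "FunctionDeclaration",
  name: "f",
  body: [
    { type: "ReturnStatement", argument: { type: "CallExpression", callee: "g" } },
  ],
};

// Enhanced AST: scope information is stored up front, before the expressions,
// so the parser can resolve `g` without scanning ahead.
const enhancedAST = {
  scope: { declaredFunctionNames: ["f", "g"] },
  ...initialAST,
};
```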
Lazy parsing

One common technique used by modern engines to improve parsing times is lazy parsing. It takes advantage of the fact that lots of websites include more JavaScript than they actually need, especially for start-up. Working around this involves a set of heuristics that try to guess when any given function body in the code can be safely skipped by the parser initially and delayed until later.

A common example of such a heuristic is immediately running the full parser for any function that is wrapped in parentheses:

(function(...

Such a prefix usually indicates that the following function is going to be an IIFE (immediately-invoked function expression), so the parser can assume it will be compiled and executed as soon as possible and wouldn't benefit from being skipped over and delayed until later.

(function() { … })();

These heuristics significantly improve the performance of initial parsing and cold starts, but they're not completely reliable or trivial to implement.

One of the reasons is the same as in the previous section: even with lazy parsing, you still need to read the contents, analyse them and store additional scope information for the declarations.

Another reason is that the JavaScript specification requires reporting any syntax errors immediately during load time, not when the code is actually executed. One class of these errors, called early errors, checks for mistakes like usage of reserved words in invalid contexts, strict-mode violations, variable name clashes and more. All of these checks require not only lexing the JavaScript source, but also tracking extra state even during lazy parsing.

Having to do such extra work means you need to be careful about marking functions as lazy too eagerly, especially if they actually end up being executed during page load. Otherwise you're making cold start costs even worse, as every function that is erroneously marked as lazy needs to be parsed twice: once by the lazy parser and then again by the full one.

Because BinaryAST is meant to be an output format of tools such as Babel and TypeScript and bundlers such as Webpack, the browser parser can rely on the JavaScript having already been analysed and verified by the initial parser. This allows it to skip function bodies completely, making lazy parsing essentially free.

It reduces the cost of completely unused code: while including it is still a problem in terms of network bandwidth (don't do this!), at least it no longer affects parsing times. These benefits apply equally to code that is used later in the page lifecycle (for example, invoked in response to user actions) but is not required during startup.

A last but no less important benefit of this approach is that BinaryAST encodes lazy annotations as part of the format, giving tools and developers direct and full control over the heuristics. For example, a tool targeting the Web platform or a framework CLI can use its domain-specific knowledge to mark some event handlers as lazy or eager depending on the context and the event type.

Avoiding ambiguity in parsing

Using a text format for a programming language is great for readability and debugging, but it's not the most efficient representation for parsing and execution.

For example, parsing low-level types like numbers, booleans and even strings from text requires extra analysis and computation, which is unnecessary when you can store them as native binary-encoded values in the first place and read them directly on the other side.

Another problem is ambiguity in the grammar itself. It was already an issue in the ES5 world, but could usually be resolved with some extra bookkeeping based on the previously seen tokens.
However, in ES6+ there are productions that can be ambiguous all the way through until they're parsed completely. For example, a token sequence like:

(a, {b: c, d}, [e = 1])...

can start either a parenthesised comma expression with nested object and array literals and an assignment:

(a, {b: c, d}, [e = 1]); // it was an expression

or a parameter list of an arrow function expression with nested object and array patterns and a default value:

(a, {b: c, d}, [e = 1]) => … // it was a parameter list

Both representations are perfectly valid, but have completely different semantics, and you can't know which one you're dealing with until you see the final token.

To work around this, parsers usually have to either backtrack, which can easily get exponentially slow, or parse the contents into intermediate node types that are capable of holding both expressions and patterns, with a conversion afterwards. The latter approach preserves linear performance, but makes the implementation more complicated and requires preserving more state.

In the BinaryAST format this issue doesn't exist in the first place, because the parser sees the type of each node before it even starts parsing its contents.

Cloudflare Implementation

Currently, the format is still in flux, but the very first version of the client-side implementation was released under a flag in Firefox Nightly several months ago. Keep in mind this is only an initial unoptimised prototype, and there are already several experiments changing the format to improve both size and parsing performance.

On the producer side, the reference implementation lives at github.com/binast/binjs-ref. Our goal was to take this reference implementation and consider how we would deploy it at Cloudflare scale.

If you dig into the codebase, you will notice that it currently consists of two parts.

One is the encoder itself, which is responsible for taking a parsed AST, annotating it with scope and other relevant information, and writing out the result in one of the currently supported formats. This part is written in Rust and is fully native.

The other part is what produces that initial AST: the parser. Interestingly, unlike the encoder, it's implemented in JavaScript.

Unfortunately, there is currently no battle-tested native JavaScript parser with an open API, let alone one implemented in Rust. There have been a few attempts, but, given the complexity of the JavaScript grammar, it's better to wait a bit and make sure they're well-tested before incorporating one into the production encoder.

On the other hand, over the last few years the JavaScript ecosystem has grown to rely extensively on developer tools implemented in JavaScript itself. In particular, this gave a push to rigorous parser development and testing. There are several JavaScript parser implementations that have been proven to work on thousands of real-world projects.

With that in mind, it makes sense that the BinaryAST implementation chose to use one of them, in particular Shift, and integrated it with the Rust encoder, instead of attempting to use a native parser.

Connecting Rust and JavaScript

Integration is where things get interesting.

Rust is a native language that can compile to an executable binary, but JavaScript requires a separate engine to be executed. To connect them, we need some way to transfer data between the two without sharing memory.

Initially, the reference implementation generated JavaScript code with an embedded input on the fly, passed it to Node.js, and then read the output when the process had finished. That code contained a call to the Shift parser with an inlined input string and produced the AST back in a JSON format.
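As a rough sketch only, assuming the shift-parser package's parseScript API (this is not the actual generated code), that one-shot helper could have looked something like this:

```js
// Sketch of a one-shot "parse with Shift, print the AST as JSON" helper.
// Illustrative only: the real reference implementation generated this kind
// of code on the fly with the input string embedded in it.
const { parseScript } = require("shift-parser");

const source = "function f() { return g(); } function g() { return 42; }";

// Hand the AST back to the Rust side as JSON on stdout.
process.stdout.write(JSON.stringify(parseScript(source)));
```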
This doesn't scale well when parsing lots of JavaScript files, so the first thing we did was transform the Node.js side into a long-lived daemon. Now Rust could spawn the required Node.js process just once and keep passing inputs into it and getting responses back as individual messages.

Running in the cloud

While the Node.js solution worked fairly well after these optimisations, shipping both a Node.js instance and a native bundle to production requires some effort. It's also potentially risky and requires manual sandboxing of both processes to make sure we don't accidentally start executing malicious code.

On the other hand, the only thing we needed from Node.js was the ability to run the JavaScript parser code. And we already have an isolated JavaScript engine running in the cloud: Cloudflare Workers! By additionally compiling the native Rust encoder to Wasm (which is quite easy with the native toolchain and wasm-bindgen), we can even run both parts of the code in the same process, making cold starts and communication much faster than in the previous model.

Optimising data transfer

The next logical step is to reduce the overhead of data transfer. JSON worked fine for communication between separate processes, but with a single process we should be able to retrieve the required bits directly from the JavaScript-based AST.

To attempt this, first of all, we needed to move away from direct JSON usage to something more generic that would allow us to support various input formats. The Rust ecosystem already has an amazing serialisation framework for that: Serde.

Aside from allowing us to be more flexible in regard to the inputs, rewriting to Serde helped an existing native use case too. Now, instead of parsing JSON into an intermediate representation and then walking through it, all the native typed AST structures can be deserialized directly from the stdout pipe of the Node.js process in a streaming manner. This significantly improved both CPU usage and memory pressure.

But there is one more thing we can do: instead of serializing and deserializing from an intermediate format (let alone a text format like JSON), we should be able to operate [almost] directly on JavaScript values, saving memory and repetitive work.

How is this possible? wasm-bindgen provides a type called JsValue that stores a handle to an arbitrary value on the JavaScript side. This handle internally contains an index into a predefined array.

Each time a JavaScript value is passed to the Rust side as a result of a function call or a property access, it's stored in this array and an index is sent to Rust. The next time Rust wants to do something with that value, it passes the index back and the JavaScript side retrieves the original value from the array and performs the required operation.

By reusing this mechanism, we could implement a Serde deserializer that requests only the required values from the JS side and immediately converts them to their native representation.
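To make that mechanism concrete, here is a simplified JavaScript sketch of the handle table described above; it is illustrative only and not wasm-bindgen's actual implementation.

```js
// Simplified sketch of the JsValue handle table (illustrative only).
const heap = [];

// Store a JavaScript value and return the index ("handle") that is passed
// across the boundary to the Rust/Wasm side.
function addToHeap(value) {
  heap.push(value);
  return heap.length - 1;
}

// When Rust wants a property of a previously seen value, it sends the handle
// back; the JavaScript side resolves the property and returns a new handle.
function getPropertyHandle(handle, propertyName) {
  return addToHeap(heap[handle][propertyName]);
}
```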
The resulting deserializer is now open-sourced at https://github.com/cloudflare/serde-wasm-bindgen.

At first, we got much worse performance out of this due to the overhead of more frequent calls between 1) Wasm and JavaScript (SpiderMonkey has improved these recently, but other engines still lag behind) and 2) JavaScript and C++, which also can't be optimised well in most engines.

The JavaScript <-> C++ overhead comes from the usage of TextEncoder to pass strings between JavaScript and Wasm in wasm-bindgen, and, indeed, it showed up as the highest cost in the benchmark profiles. This wasn't surprising: after all, strings can appear not only in the value payloads, but also in property names, which have to be serialized and sent between JavaScript and Wasm over and over when using a generic JSON-like structure.

Luckily, because our deserializer doesn't have to be compatible with JSON anymore, we can use our knowledge of Rust types and cache all the serialized property names as JavaScript value handles just once, and then keep reusing them for further property accesses.

This, combined with some changes to wasm-bindgen which we have upstreamed, allows our deserializer to be up to 3.5x faster in benchmarks than the original Serde support in wasm-bindgen, while saving ~33% off the resulting code size. Note that for string-heavy data structures it might still be slower than the current JSON-based integration, but the situation is expected to improve over time when the reference types proposal lands natively in Wasm.

After implementing and integrating this deserializer, we used the wasm-pack plugin for Webpack to build a Worker with both the Rust and JavaScript parts combined and shipped it to some test zones.

Show me the numbers

Keep in mind that this proposal is in very early stages, and current benchmarks and demos are not representative of the final outcome (which should improve the numbers much further).

As mentioned earlier, BinaryAST can mark functions that should be parsed lazily ahead of time. By using different levels of lazification in the encoder (https://github.com/binast/binjs-ref/blob/b72aff7dac7c692a604e91f166028af957cdcda5/crates/binjs_es6/src/lazy.rs#L43) and running tests against some popular JavaScript libraries, we found the following speed-ups.

Level 0 (no functions are lazified)

With lazy parsing disabled in both parsers we got a raw parsing speed improvement of between 3 and 10%.

| Name | Source size (KB) | JavaScript parse time (average ms) | BinaryAST parse time (average ms) | Diff (%) |
| React | 20 | 0.403 | 0.385 | -4.56 |
| D3 (v5) | 240 | 11.178 | 10.525 | -6.018 |
| Angular | 180 | 6.985 | 6.331 | -9.822 |
| Babel | 780 | 21.255 | 20.599 | -3.135 |
| Backbone | 32 | 0.775 | 0.699 | -10.312 |
| wabtjs | 1720 | 64.836 | 59.556 | -8.489 |
| Fuzzball (1.2) | 72 | 3.165 | 2.768 | -13.383 |

Level 3 (functions up to 3 levels deep are lazified)

But with the lazification set to skip nested functions of up to 3 levels, we see much more dramatic improvements in parsing time, between 90 and 97%. As mentioned earlier in the post, BinaryAST makes lazy parsing essentially free by completely skipping over the marked functions.

| Name | Source size (KB) | JavaScript parse time (average ms) | BinaryAST parse time (average ms) | Diff (%) |
| React | 20 | 0.407 | 0.032 | -92.138 |
| D3 (v5) | 240 | 11.623 | 0.224 | -98.073 |
| Angular | 180 | 7.093 | 0.680 | -90.413 |
| Babel | 780 | 21.100 | 0.895 | -95.758 |
| Backbone | 32 | 0.898 | 0.045 | -94.989 |
| wabtjs | 1720 | 59.802 | 1.601 | -97.323 |
| Fuzzball (1.2) | 72 | 2.937 | 0.089 | -96.970 |

All the numbers are from manual tests on a Linux x64 Intel i7 with 16 GB of RAM. While these synthetic benchmarks are impressive, they are not representative of real-world scenarios.
Normally you will use at least some of the loaded JavaScript during startup. To check this scenario, we decided to test some realistic pages and demos on desktop and mobile Firefox and found speed-ups in page loads too.

For a sample application (https://github.com/cloudflare/binjs-demo, https://serve-binjs.that-test.site/) which weighed in at around 1.2 MB of JavaScript, we got the following numbers for initial script execution:

| Device | JavaScript | BinaryAST |
| Desktop | 338 ms | 314 ms |
| Mobile (HTC One M8) | 2019 ms | 1455 ms |

Here is a video that will give you an idea of the improvement as seen by a user on mobile Firefox (in this case showing the entire page startup time).

The next step is to start gathering data on real-world websites, while improving the underlying format.

How do I test BinaryAST on my website?

We've open-sourced our Worker so that it can be installed on any Cloudflare zone: https://github.com/binast/binjs-ref/tree/cf-wasm.

One thing to currently be wary of is that, even though the result gets stored in the cache, the initial encoding is still an expensive process, and might easily hit CPU limits on any non-trivial JavaScript file and fall back to the unencoded variant. We are working to improve this situation by releasing the BinaryAST encoder as a separate feature with more relaxed limits in the following few days.

Meanwhile, if you want to play with BinaryAST on larger real-world scripts, an alternative option is to use the static binjs_encode tool from https://github.com/binast/binjs-ref to pre-encode JavaScript files ahead of time. Then, you can use a Worker from https://github.com/cloudflare/binast-cf-worker to serve the resulting BinaryAST assets when supported and requested by the browser.

On the client side, you'll currently need to download Firefox Nightly, go to about:config and enable unrestricted BinaryAST support. Now, when opening a website with either of the Workers installed, Firefox will get BinaryAST instead of JavaScript automatically.

Summary

The amount of JavaScript in modern apps is presenting performance challenges for all consumers. Engine vendors are experimenting with different ways to improve the situation: some are focusing on raw decoding performance, some on parallelizing operations to reduce overall latency, some are researching new optimised formats for data representation, and some are inventing and improving protocols for network delivery.

No matter which one it is, we all have a shared goal of making the Web better and faster. On Cloudflare's side, we're always excited about collaborating with all the vendors and combining various approaches to bring that goal closer with every step.

How AWS helps our Customers to go Global – Report from Korea

Amazon Web Services Blog -

Amazon Web Services Korea LLC (AWS Korea) opened an office in Seoul, South Korea in 2012. This office has educated and supported many customers, from startups to large enterprises. Owing to high customer demand, we launched our Asia Pacific (Seoul) Region with two Availability Zones and two edge locations in January 2016. This Region has given AWS customers in Korea low-latency access to our suite of AWS infrastructure services.

Andy Jassy, CEO of Amazon Web Services, announcing the launch of the Seoul Region at AWS Cloud 2016

Following this launch, Amazon CloudFront announced two new edge locations and one edge cache: the third in May 2016 and the fourth in February 2018. CloudFront's expansion across Korea further improves the availability and performance of content delivery to users in the region. Today I am happy to announce that AWS has added a third Availability Zone (AZ) to the AWS Asia Pacific (Seoul) Region to support the high demand of our growing Korean customer base. This third AZ provides customers with additional flexibility to architect scalable, fault-tolerant, and highly available applications in AWS Asia Pacific (Seoul), and will support additional AWS services in Korea. This launch brings AWS's global AZ total to 66 AZs within 21 geographic Regions around the world. AZs located in AWS Regions consist of one or more discrete data centers, each with redundant power, networking, and connectivity, and each housed in separate facilities.

AWS now serves tens of thousands of active customers in Korea, ranging from startups and enterprises to educational institutions. One example that reflects this demand is AWS Summit Seoul 2019, part of our commitment to investing in education. More than 16,000 builders attended, a greater than tenfold increase from the 1,500 attendees of our first Summit in 2015.

AWS Summit 2018 – a photo of the keynote by Dr. Werner Vogels, CTO of Amazon.com

So, how have Korean customers migrated to the AWS Cloud, and what has motivated them? They have learned that the AWS Cloud is the new normal in the IT industry, and quick adoption in their business has allowed them to regain global competitiveness. Let us look at some examples of how our customers are utilizing the benefits of the broad and deep AWS Cloud platform in the global market by replicating the services they run in Korea.

Do you know the Korean Wave? The Korean Wave represents the increase in global popularity of South Korean culture, such as Korean pop and drama. The top three broadcasting companies in Korea (KBS, MBC, and SBS) use AWS. They co-invested to found Content Alliance Platform (CAP), which launched POOQ, a service offering real-time OTT broadcasting to 600,000+ subscribers for TV programs including popular K-dramas, and which has been able to reduce the buffer times on its streaming services by 20 percent. CAP also used AWS's video processing and delivery services to stream Korea's largest sports event, the PyeongChang 2018 Olympic Winter Games.

Lots of K-Pop fans at the KCON Concert 2016 in France – Wikipedia

SM Entertainment is a South Korean entertainment company leading the K-Pop wave with artists such as NCT 127, EXO, Super Junior, and Girls' Generation. The company uses AWS to deliver its websites and mobile applications. By using AWS, the company was able to scale to support more than 3 million new users of its EXO-L mobile app in three weeks. The company also developed its mobile karaoke app, Everysing, on AWS, saving more than 50 percent in development costs.
The scalability, flexibility, and pay-as-you-go pricing of AWS encouraged them to develop more mobile apps.

Global Enterprises on the Cloud

Korean enterprises have rapidly adopted the AWS Cloud to offer scalable, global services as well as to focus on their own business needs. Samsung Electronics uses the breadth of AWS services to save infrastructure costs and achieve rapid deployments, which provides high availability to customers and allows it to scale its services globally to support Galaxy customers worldwide. For example, Samsung Electronics increased reliability and reduced costs by 40 percent within a year after migrating its 860 TB Samsung Cloud database to AWS. Samsung chose Amazon DynamoDB for its stability, scalability, and low latency to maintain the database used by 300 million Galaxy smartphone users worldwide.

LG Electronics has selected AWS to run its mission-critical services for more than 35 million LG Smart TVs across the globe, to handle the dramatic instant traffic peaks that come with broadcasting live sports events such as the World Cup and Olympic Games. It also built a new home appliance IoT platform called ThinQ. LG Electronics uses a serverless architecture and secure provisioning on AWS to reduce the development costs for this platform by 80 percent through increased efficiency in managing its developer and maintenance resources.

Recently, Korean Air decided to move its entire infrastructure to AWS over the next three years, including its website, loyalty program, flight operations, and other mission-critical operations, and will shut down its data centers after this migration. "This will enable us to bring new services to market faster and more efficiently, so that customer satisfaction continues to increase," said Kenny Chang, CIO of Korean Air.

AWS customers in Korea – from startups to enterprises in each industry

AI/ML for Traditional Manufacturers

AWS is helping Korean manufacturing companies realize the benefits of digitalization and regain global competitiveness by leveraging the collective experience gained from working with customers and partners around the world. Kia Motors produces three million vehicles a year for customers worldwide. It uses Amazon Rekognition and Amazon Polly to develop a car log-in feature using face analysis and voice services. Introduced at CES 2018, this system welcomes drivers and adjusts settings such as seating, mirrors and in-vehicle infotainment based on individual preferences to create a personalized driving experience.

Coway, a Korean home appliance company, uses AWS for IoCare, its IoT service for tens of thousands of air and water purifiers. It migrated IoCare from on-premises to AWS for speed and efficiency in handling increasing traffic as the business grew. Coway uses AWS managed services such as AWS IoT, Amazon Kinesis, Amazon DynamoDB, AWS Lambda, Amazon RDS, and Amazon ElastiCache, and has also integrated Alexa Skills, via AWS Lambda, with its high-end Airmega air purifier for the global market.

Play Amazing Games

AWS has transformed the nature of Korean gaming companies, allowing them to autonomously launch and expand their businesses globally without help from local publishers. As a result, the top 15 gaming companies in Korea are currently using AWS, including Nexon, NCSOFT, Krafton, Netmarble, and Kakao Games. Krafton is the developer of the hit video game PlayerUnknown's Battlegrounds (PUBG), which was developed on AWS in less than 18 months.
The game uses AWS Lambda, Amazon SQS, and AWS CodeDeploy for its core backend service, Amazon DynamoDB as its primary game database, and Amazon Redshift as its data analytics platform. PUBG broke records upon release, with more than 3 million concurrent players connected to the game.

Nexon is a top Korean gaming company that produces top mobile games such as Heroes of Incredible Tales (HIT). It achieved cost savings of more than 30 percent for global infrastructure management and can now launch new games more quickly by using AWS. Nexon uses Amazon DynamoDB for its game database and first started using AWS to respond to unpredictable spikes in user demand.

Startups Going Global

Lots of hot startups in Korea are using AWS to grow in the local market, but here are some great examples that have gone global while being based in Korea. Azar, Hyperconnect's video-based social discovery mobile app, has recorded 300 million downloads, is now widely accessible in over 200 countries around the world, and had 20 billion cumulative matches in the last year. To overcome complex matching issues and provide reliable video chats between users, Hyperconnect uses various AWS services efficiently: Amazon EC2, Amazon RDS, and Amazon SES to reduce the cost of managing global infrastructure, and Amazon S3 and Amazon CloudFront to store and deliver service data to global users faster. It also uses Amazon EMR to manage the vast amount of data generated by 40 million matches per day.

SendBird provides chat APIs and a messaging SDK used in more than 10,000 apps globally, processing about 700 million messages per month. It uses AWS global Regions to provide a top-class customer experience by keeping latency under 100 ms everywhere in the world. Amazon ElastiCache is currently used to handle large volumes of chat data, and all the data is stored in encrypted Amazon Aurora for integrity and reliability. Server log data is analyzed and processed using Amazon Kinesis Data Firehose as well as Amazon Athena.

Freedom for the Local Financial Industry

We also see Korean enterprises in the financial services industry leverage AWS to digitally transform their businesses through data analytics, fintech, and digital banking initiatives. Financial services companies in Korea are leveraging AWS to deliver an enhanced customer experience; examples of these customers include Shinhan Financial Group, KB Kookmin Bank, Kakao Pay, Mirae Asset, and Yuanta Securities. Shinhan Financial Group achieved a 50 percent cost reduction and a 20 percent response-time reduction after migrating its North American and Japanese online banking services to AWS. Shinhan's new Digital Platform unit now uses Amazon ECS, Amazon CloudFront, and other services to reduce development time for new applications by 50 percent. Shinhan is currently pursuing an all-in migration to AWS, including moving more than 150 workloads.

Hyundai Card, a top Korean credit card company and a financial subsidiary of the Hyundai Kia Motor Group, built a dev/test platform called Playground on AWS so its development team can prototype new software and services. The company uses Amazon EMR, AWS Glue, and Amazon Kinesis for cost and architecture optimization. Playground allowed quick testing of new projects without waiting for resource allocation from on-premises infrastructure, reducing the development period by 3-4 months.

Security and Compliance

At AWS, the security, privacy, and protection of customer data always come first, and AWS addresses local needs as well as global security and compliance requirements.
Our most recent example of this commitment is that AWS became the first global cloud service provider to achieve the Korea-Information Security Management System (K-ISMS) certification in December 2017. With this certification, enterprises and organizations across Korea are able to meet their compliance requirements more effectively and accelerate business transformation by using best-in-class technology delivered from the highly secure and reliable AWS Cloud. AWS also completed its first annual surveillance audit for the K-ISMS certification in 2018. In April 2019, AWS achieved the Multi-Tier Cloud Security Standard (MTCS) Level-3 certification for the Seoul Region; AWS is also the first cloud service provider in Korea to do so. With the MTCS, FSI customers in Korea can accelerate cloud adoption by no longer having to validate 109 controls, as required in the relevant regulations (the Financial Security Institute's Guideline on Use of Cloud Computing Services in Financial Industry and the Regulation on Supervision on Electronic Financial Transactions (RSEFT)). AWS also published a workbook for Korean FSI customers, covering those and 32 additional controls from the RSEFT.

Supporting and Enabling Korean Customers

AWS Korea has made significant investments in education and training in Korea. Tens of thousands of people, including IT professionals, developers, and students, have been trained in AWS cloud skills over the last two years. AWS Korea also supports community-driven activities to enhance the developer ecosystem of cloud computing in Korea. To date, the AWS Korean User Group has tens of thousands of members, who hold hundreds of meetups across Korea annually. The AWS Educate program is expected to accelerate Korean students' capabilities in cloud computing, helping them acquire cloud expertise that is becoming increasingly relevant for their future employment. Dozens of universities, including Sogang University, Yonsei University, and Seoul National University, have joined this program, with thousands of students participating in AWS-related classes and non-profit e-learning programs such as Like a Lion, an organization that teaches coding to students.

AWS is building a vibrant cloud ecosystem with hundreds of partners ― Systems Integrator (SI) partners include LG CNS, Samsung SDS, Youngwoo Digital, Saltware, NDS, and many others. Among them, Megazone, GS Neotek, and Bespin Global are AWS Premier Consulting Partners. Independent Software Vendor (ISV) partners include AhnLab, Hancom, SK Infosec, SendBird, and IGAWorks. They help our customers enable AWS services in their workloads, migrate from on-premises environments, or launch new services.

The customers' celebration whiteboard for the 5th anniversary of AWS Summit Seoul

Finally, I want to share some of the customer feedback from our whiteboard at AWS Summit 2019, although it was written in Korean. Here is one voice among them ― "It made me decide to become an AWS customer voluntarily, to climb on the shoulders of the giant and see the world." We will always listen to our customers' voices and build the broadest and deepest cloud platform for them to leverage, so that they can be successful in both the Korean and global markets. – Channy Yun

This article was translated into Korean (한국어) on the AWS Korea Blog.

Search at Google I/O 2019

Google Webmaster Central Blog -

Google I/O is our yearly developer conference where we have the pleasure of announcing some exciting new Search-related features and capabilities. A good place to start is Google Search: State of the Union, which explains how to take advantage of the latest capabilities in Google Search. We also gave more details on how JavaScript and Google Search work together and what you can do to make sure your JavaScript site performs well in Search.

Try out new features today

Here are some of the new features, codelabs, and documentation that you can try out today:

- Googlebot now runs the latest Chromium rendering engine: This means Googlebot now supports new features like ES6, IntersectionObserver for lazy-loading, and Web Components v1 APIs. Googlebot will regularly update its rendering engine. Learn more about the update in our Google Search and JavaScript talk and blog post, and in our updated guidance on how to fix JavaScript issues for Google Search.
- How-to & FAQ launched on Google Search and the Assistant: You can get started today by following the developer documentation: How-to and FAQ. We also launched supporting Search Console reports. Learn more about How-to and FAQ in our structured data talk.
- Find and listen to podcasts in Search: Last week, we launched the ability to listen to podcasts directly on Google Search when you search for a certain show. In the coming months, we'll start surfacing podcasts in search results based on the content of the podcast, and let users save episodes for listening later. To enable your podcast in Search, follow the Podcast developer documentation.
- Try our new codelabs: Check out our new codelabs about how to add structured data, fix a Single Page App for Search, and implement Dynamic Rendering with Rendertron.

Be among the first to test new features

Your help is invaluable to making sure our products work for everyone. We shared some new features that we're still testing and would love your feedback and participation.

- Speed report: We're currently piloting the new Speed report in Search Console. Sign up to be a beta tester.
- Mini-apps: We announced Mini-apps, which engage users with interactive workflows and live content directly on Search and the Assistant. Submit your idea for the Mini-app Early Adopters Program.

Learn more about what's coming soon

I/O is a place where we get to showcase new Search features, so we're excited to give you a heads-up on what's next on the horizon:

- High-resolution images: In the future, you'll be able to opt in to highlight your high-resolution images for your users. Stay tuned for details.
- 3D and AR in Search: We are working with partners to bring 3D models and AR content to Google Search. Check out what it might look like and stay tuned for more details.

We hope these cool announcements help and inspire you to create even better websites that work well in Search. Should you have any questions, feel free to post in our webmaster help forums, contact us on Twitter, or reach out to us at any of the next events we're at. Posted by Lizzi Harvey, Technical Writer

InMotion Hosting’s WordPress Web Hosting vs GoDaddy’s Web Hosting

InMotion Hosting Blog -

As one of the more well-known companies in the space, GoDaddy is no longer just a domain registrar and has gotten into the WordPress web hosting game. Since we at InMotion are seasoned professionals in this field, and since we cherish good competition, we wanted to see how GoDaddy's plans stack up against InMotion Hosting's plans.

WordPress Web Hosting Plans Compared

We've come up with a few categories that we feel are good measuring points for a host: price, storage, email accounts, and growth options. Continue reading InMotion Hosting's WordPress Web Hosting vs GoDaddy's Web Hosting at The Official InMotion Hosting Blog.

Nexcess and BigCommerce Announce eCommerce Partnership

Nexcess Blog -

May 2, 2019 – We’re proud to announce the addition of a new hosting solution to our lineup for merchants: BigCommerce. This new addition allows us to provide merchants with multiple options for creating, customizing, and delivering their online stores. As a powerful, headless eCommerce solution, BigCommerce allows merchants to employ a powerful product catalog… Continue reading →

Live video just got more live: Introducing Concurrent Streaming Acceleration

CloudFlare Blog -

Today we're excited to introduce Concurrent Streaming Acceleration, a new technique for reducing the end-to-end latency of live video on the web when using Stream Delivery. Let's dig into live-streaming latency, why it's important, and what folks have done to improve it.

How "live" is "live" video?

Live streaming makes up an increasing share of video on the web. Whether it's a TV broadcast, a live game show, or an online classroom, users expect video to arrive quickly and smoothly. And the promise of "live" is that the user is seeing events as they happen. But just how close to "real-time" is "live" Internet video?

Delivering live video on the Internet is still hard and adds lots of latency:

- The content source records video and sends it to an encoding server;
- The origin server transforms this video into a format like DASH, HLS or CMAF that can be delivered to millions of devices efficiently;
- A CDN is typically used to deliver encoded video across the globe;
- Client players decode the video and render it on the screen.

And all of this is under a time constraint: the whole process needs to happen in a few seconds, or video experiences will suffer. We call the total delay between when the video was shot and when it can be viewed on an end-user's device "end-to-end latency" (think of it as the time from the camera lens to your phone's screen).

Traditional segmented delivery

Video formats like DASH, HLS, and CMAF work by splitting video into small files, called "segments". A typical segment duration is 6 seconds.

If a client player needs to wait for a whole 6-second segment to be encoded, sent through a CDN, and then decoded, it can be a long wait! It takes even longer if you want the client to build up a buffer of segments to protect against any interruptions in delivery. A typical player buffer for HLS is 3 segments.

Clients may have to buffer three 6-second chunks, introducing at least 18 s of latency

When you consider encoding delays, it's easy to see why live streaming latency on the Internet has typically been about 20-30 seconds. We can do better.

Reduced latency with chunked transfer encoding

A natural way to solve this problem is to enable client players to start playing the chunks while they're downloading, or even while they're still being created. Making this possible requires a clever bit of cooperation to encode and deliver the files in a particular way, known as "chunked encoding." This involves splitting up segments into smaller, bite-sized pieces, or "chunks". Chunked encoding can typically bring live latency down to 5 or 10 seconds.

Confusingly, the word "chunk" is overloaded to mean two different things:

- CMAF or HLS chunks, which are small pieces of a segment (typically 1 s) that are aligned on key frames;
- HTTP chunks, which are just a way of delivering any file over the web.

Chunked encoding splits segments into shorter chunks

HTTP chunks are important because web clients have limited ability to process streams of data. Most clients can only work with data once they've received the full HTTP response, or at least a complete HTTP chunk. By using HTTP chunked transfer encoding, we enable video players to start parsing and decoding video sooner.

CMAF chunks are important so that decoders can actually play the bits that are in the HTTP chunks. Without encoding video in a careful way, decoders would have random bits of a video file that can't be played.
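As a rough illustration of why this matters on the client, here is a minimal sketch, using the standard fetch and Streams APIs and a hypothetical segment URL, of a player consuming a response incrementally instead of waiting for the whole file:

```js
// Minimal sketch: read a media segment incrementally as HTTP chunks arrive,
// instead of waiting for the complete response. The URL is hypothetical.
async function readSegmentIncrementally(url, onChunk) {
  const response = await fetch(url);
  const reader = response.body.getReader();

  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    // `value` is a Uint8Array of the bytes received so far; a real player
    // would append it to a Media Source Extensions SourceBuffer.
    onChunk(value);
  }
}

// Usage sketch:
// readSegmentIncrementally("https://example.com/live/segment-42.m4s", (bytes) =>
//   console.log(`received ${bytes.length} bytes`)
// );
```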
CDNs can introduce additional buffering

Chunked encoding with HLS and CMAF is growing in use across the web today. Part of what makes this technique great is that HTTP chunked encoding is widely supported by CDNs – it's been part of the HTTP spec for 20 years.

CDN support is critical because it allows low-latency live video to scale up and reach audiences of thousands or millions of concurrent viewers – something that's currently very difficult to do with other, non-HTTP-based protocols.

Unfortunately, even if you enable chunking to optimise delivery, your CDN may be working against you by buffering the entire segment. To understand why, consider what happens when many people request a live segment at the same time. If the file is already in cache, great! CDNs do a great job at delivering cached files to huge audiences. But what happens when the segment isn't in cache yet? Remember – this is the typical request pattern for live video!

Typically, CDNs are able to "stream on cache miss" from the origin. But again – what happens when multiple people request the file at once? CDNs typically need to pull the entire file into cache before serving additional viewers.

Only one viewer can stream video, while other clients wait for the segment to buffer at the CDN

This behavior is understandable. CDN data centers consist of many servers. To avoid overloading origins, these servers typically coordinate amongst themselves using a "cache lock" (mutex) that allows only one server to request a particular file from the origin at a given time. A side effect of this is that while a file is being pulled into cache, it can't be served to any user other than the first one that requested it. Unfortunately, this cache lock also defeats the purpose of using chunked encoding!

To recap thus far:

- Chunked encoding splits up video segments into smaller pieces;
- This can reduce end-to-end latency by allowing chunks to be fetched and decoded by players, even while segments are being produced at the origin server;
- Some CDNs neutralize the benefits of chunked encoding by buffering entire files inside the CDN before they can be delivered to clients.

Cloudflare's solution: Concurrent Streaming Acceleration

As you may have guessed, we think we can do better. Put simply, we now have the ability to deliver un-cached files to multiple clients simultaneously while we pull the file once from the origin server.

This sounds like a simple change, but there's a lot of subtlety required to do this safely. Under the hood, we've made deep changes to our caching infrastructure to remove the cache lock and enable multiple clients to safely read from a single file while it's still being written.

The best part is, all of Cloudflare now works this way! There's no need to opt in, or even make a config change, to get the benefit.

We rolled this feature out a couple of months ago and have been really pleased with the results so far. We measure success by the "cache lock wait time," i.e. how long a request must wait for other requests – a direct component of Time To First Byte. One OTT customer saw this metric drop from 1.5 s at P99 to nearly 0, as expected. This directly translates into a 1.5-second improvement in end-to-end latency. Live video just got more live!

Conclusion

New techniques like chunked encoding have revolutionized live delivery, enabling publishers to deliver low-latency live video at scale.
Concurrent Streaming Acceleration helps you unlock the power of this technique at your CDN, potentially shaving precious seconds off end-to-end latency. If you're interested in using Cloudflare for live video delivery, contact our enterprise sales team. And if you're interested in working on projects like this and helping us improve live video delivery for the entire Internet, join our engineering team!

The Career Pivot: Beat Burnout With A Job That’s Right for You

LinkedIn Official Blog -

Looking for a new job? You're in the driver's seat. Unemployment is at a near 50-year low, and there are more than 20 million jobs available on LinkedIn right now. However, nearly half of all professionals say they don't know what their career path should look like. If you're not satisfied with where you are today, or not sure which way you should be steering, it could be time to take inventory of what you want professionally and make a switch. Perhaps you're looking to change the type of work...

Email Marketing Basics: Tips to Launching A Successful Campaign

InMotion Hosting Blog -

No matter who you are or where you are, you’ve probably got an email account (or several). Even though we live in the age of social media apps and new digital landscapes, the email account is still the starting point for a myriad of services we subscribe to and buy from. This is why it’s critically important to adopt a well-developed email marketing strategy. A successful email campaign involves much more than technical solutions. Continue reading Email Marketing Basics: Tips to Launching A Successful Campaign at The Official InMotion Hosting Blog.

How to Write Blog Posts for Your Buyer Personas

HostGator Blog -

Quick quiz for business bloggers: In one sentence, describe the audience for your blog. If you had your answer ready, you're ready to write must-read content for your customers. If you had to stop and think about who your audience is, or if you said "everybody," it's time to get a clear picture of your readers so you can create more effective content. In both cases, the key is to research, build, and use buyer personas.

Write for a Specific Persona

If you aced the quiz, it's because you have a customer persona. Personas are like character sketches for marketers and bloggers. They define types of audience members by their interests, age range, online behaviors, and shopping habits. You create personas based on data from your site analytics, social media monitoring, site-visitor surveys, and interviews with your readers and customers. If you're just starting out, research the types of people you'd like to have in your audience.

Start with the persona that represents the largest part of your audience. Let's say you have a blog for your hobby farming supply business. Your primary persona might be a retired banking executive (let's call her Daisy) in her early 60s whose partner is also retired. She recently bought a vintage farmhouse on a small acreage. Her interests are raising flowers and herbs for market, and she'd also like to set up a duck pond and a rental cottage on her property. Daisy likes to carefully research purchases, and she prioritizes quality over price. Here's a sample persona template you can use to create your own website personas.

Speak the Same Language as Your Customers

Whoever your persona is, write in a voice that they'll understand. Let's stick with the hobby farm supply example for a bit. Maybe your background is in agribusiness. Daisy, your retired banking-executive persona, won't know the ag jargon that you do. She searches for terms like "how much to feed ducks," not "how to formulate balanced poultry rations." Include the keywords she's likely to use in your posts to show her you're speaking to her, so she'll stick around. Bonus: better SEO is a natural outcome of using the phrases your personas use.

Not sure how your persona talks about or searches for their interests? Look at your blog and social media comments and email messages from your customers. Monitor your Google Search Console data to see which keyphrases bring readers to your blog. And check out other blogs, vlogs, and podcasts in your niche. The goal isn't to copy anyone else's voice but to connect with prospective customers by speaking their language.

Tailor Post Length to Your Audience and Your Goals

How long should your business blog posts be? That depends on your goals for each post and the time your persona has to read it. Daisy is retired and has time to focus on her interests, but an audience of mid-career professionals with small children will have less time to read. Short and long posts both have their place on your posting schedule, but you'll want to skew toward what your audience prefers.

The Case for Short Blog Posts

Short blog posts of at least 300 words are a great way to tackle niche topics. That's good for readers who want specific information. It's also good for SEO, because narrowly focused posts can help you rank well for longtail search phrases.
For example, if the persona you're writing for is a pet rabbit owner, it's going to be hard to rank well for "rabbit care," which generates more than 443 million results. By going into more detail with posts on "elderly rabbit grooming," "safe chew toys for rabbits," "how to build a rabbit castle" and so on, you're more likely to reach readers searching for those topics. You can later compile all your short posts on one topic into a PDF to give away to readers who join your list.

The Case for Long Blog Posts

Long posts—1,000 words and more—are more challenging to write and require a bigger time commitment from you and your customers. Long content typically does well in search results, so it's worth your time to create at least a few. These can be mega-posts that combine and expand on previous short posts. They can also be new content, like a list or a how-to guide, to promote an upcoming launch or new product. For example, if you're preparing to start selling an online course, a long post that includes a sample of the class material can help prospective students decide to register.

Take your time writing and editing long posts to make sure they deliver what your personas want to know, using the same language they do. And if you're planning a product launch, review your current site hosting plan to make sure it can handle launch-related spikes in traffic. You may want to upgrade to a more powerful plan like HostGator Cloud Hosting for more speed and bandwidth, and add on CodeGuard daily backup service to easily restore your site if your launch-prep site changes temporarily break things.

Pace Your Blog Posts Properly

Ask your readers how often they want to hear from you, then build a calendar to match your persona's preferences. If you don't have a big audience yet, remember that most people are happy to read one or two new posts a week from a blog they value. Less than that is probably okay, too. Too-frequent posts may overwhelm subscribers and lead them to drop your blog. Save daily posting for when you can hire help, have a large audience, and have specific marketing goals that require lots of new content.

Keep an eye on your blog, email, and sales metrics. Over time, you should see how your publishing schedule affects page views, time on the site, email opens and clickthroughs, unsubscribes, and conversions. Tweak the schedule if you need to so your readers stick around.

Close with a Call to Action

What separates good bloggers from great bloggers? Great bloggers who build thriving online communities and businesses have a clear goal for each blog post before they write it. Before you write, decide what you want your readers to do when they reach the end of your post. Do you want them to join your email list? Share your post? Buy your duck brooders? Once you know, ask them to do it. Don't assume it's obvious. Life is filled with distractions, so make your calls to action clear: Join the list. Get the book. Register now. Reserve your appointment.

There's one other benefit to building personas before you blog. It helps to make your posts more conversational and builds rapport with your audience. So, whenever you're ready to write, think about your persona, what they want to know, how much time they have to read, and the keywords they search for. Then you're ready to write posts that will connect. Find the post on the HostGator Blog

WP Engine Launches Cloudflare Stream Video Plugin For WordPress

WP Engine -

AUSTIN, Texas – May X, 2018 – WP Engine, the WordPress Digital Experience Platform (DXP), today announced the launch of the Cloudflare Stream Video Plugin for WordPress. The plugin was built by WP Engine in partnership with Cloudflare to make it incredibly easy for WordPress users to publish and stream performance-optimized videos on WordPress… The post WP Engine Launches Cloudflare Stream Video Plugin For WordPress appeared first on WP Engine.

Bringing Simplicity to Video Streaming

WP Engine -

By 2022, video will make up 82 percent of all IP traffic—a fourfold increase from 2017. This rise can be attributed in great part to younger generations like Gen Z, who are increasingly turning to video as their preferred method for consuming content online. Some of this has to do with the way Gen Z… The post Bringing Simplicity to Video Streaming appeared first on WP Engine.

Announcing Cloudflare Image Resizing: Simplifying Optimal Image Delivery

CloudFlare Blog -

In the past three years, the amount of image data on the median mobile webpage has doubled. Growing images translate directly to users hitting data transfer caps, experiencing slower websites, and even leaving if a website doesn't load in a reasonable amount of time. The crime is that many of these images are so slow because they are larger than they need to be, sending data over the wire that has absolutely no (positive) impact on the user's experience.

To provide a concrete example, consider a photo of Cloudflare's Lava Lamp Wall: on the left, the photo scaled to 300 pixels wide; on the right, the same image delivered in its original high resolution and scaled down in a desktop web browser. On a regular-DPI screen they both look the same, yet the image on the right takes more than twenty times as much data to load. Even for the best and most conscientious developers, resizing every image to handle every possible device geometry consumes valuable time, and it's exceptionally easy to forget to do this resizing altogether.

Today we are launching a new product, Image Resizing, to fix this problem once and for all.

Announcing Image Resizing

With Image Resizing, Cloudflare adds another important product to its suite of available image optimizations. This product allows customers to perform a rich set of key actions on images:

Resize - The source image will be resized to the specified height and width. This action allows multiple different sized variants to be created for each specific use.
Crop - The source image will be resized to a new size that does not maintain the original aspect ratio, and a portion of the image will be removed. This can be especially helpful for headshots and product images where different formats must be achieved by keeping only a portion of the image.
Compress - The source image will have its file size reduced by applying lossy compression. This should be used when a slight quality reduction is an acceptable trade for a smaller file size.
Convert to WebP - When the user's browser supports it, the source image will be converted to WebP. Delivering a WebP image takes advantage of this modern, highly optimized image format.

By using a combination of these actions, customers store a single high-quality image on their server, and Image Resizing can be leveraged to create specialized variants for each specific use case. Without any additional effort, each variant will also automatically benefit from Cloudflare's global caching.

Examples

Ecommerce Thumbnails

Ecommerce sites typically store a high-quality image of each product. From that image, they need to create different variants depending on how that product will be displayed. One example is creating thumbnails for a catalog view. Using Image Resizing, if the high-quality image is located at:

https://example.com/images/shoe123.jpg

this is how to display a 75x75 pixel thumbnail:

<img src="/cdn-cgi/image/width=75,height=75/images/shoe123.jpg">

Responsive Images

When tailoring a site to work on various device types and sizes, it's important to always use correctly sized images. This can be difficult when images are intended to fill a particular percentage of the screen. To solve this problem, <img srcset sizes> can be used. Without Image Resizing, multiple versions of the same image would need to be created and stored.
In this example, a single high-quality copy of hero.jpg is stored, and Image Resizing is used to resize it for each particular size as needed:

<img width="100%"
     srcset="/cdn-cgi/image/fit=contain,width=320/assets/hero.jpg 320w,
             /cdn-cgi/image/fit=contain,width=640/assets/hero.jpg 640w,
             /cdn-cgi/image/fit=contain,width=960/assets/hero.jpg 960w,
             /cdn-cgi/image/fit=contain,width=1280/assets/hero.jpg 1280w,
             /cdn-cgi/image/fit=contain,width=2560/assets/hero.jpg 2560w"
     src="/cdn-cgi/image/width=960/assets/hero.jpg">

Enforce Maximum Size Without Changing URLs

Image Resizing is also available from within a Cloudflare Worker. Workers allow you to write code which runs close to your users all around the world. For example, you might wish to apply Image Resizing to your images while keeping the same URLs. Your users and clients would be able to use the same image URLs as always, but the images will be transparently modified in whatever way you need. You can install a Worker on a route which matches your image URLs, and resize any images larger than a limit:

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  return fetch(request, {
    cf: { image: { width: 800, height: 800, fit: 'scale-down' } }
  })
}

As a Worker is just code, it is also easy to run this Worker only on URLs with image extensions, or even to resize only images being delivered to mobile clients.

Cloudflare and Images

Cloudflare has a long history of building tools to accelerate images. Our caching has always helped reduce latency by storing a copy of images closer to the user. Polish automates options for both lossless and lossy image compression to remove unnecessary bytes from images. Mirage accelerates image delivery based on device type. We are continuing to invest in all of these tools, as they all serve a unique role in improving the image experience on the web. Image Resizing is different because it is the first image product at Cloudflare to give developers full control over how their images are served. You should choose Image Resizing if you are comfortable defining the sizes you wish your images to be served at, either in advance or within a Cloudflare Worker.

Next Steps and Simple Pricing

Image Resizing is available today for Business and Enterprise customers. To enable it, log in to the Cloudflare Dashboard and navigate to the Speed tab. There you'll find the section for Image Resizing, which you can enable with one click.

This product is included in the Business and Enterprise plans at no additional cost, with generous usage limits. Business customers have a limit of 100k requests per month and will be charged $10 for each additional 100k requests per month. Enterprise customers have a 10M-request-per-month limit with discounted tiers for higher usage. Requests are defined as a hit on a URI that contains Image Resizing or a call to Image Resizing from a Worker.

Now that you've enabled Image Resizing, it's time to resize your first image. Using your existing site, store an image at https://yoursite.com/images/yourimage.jpg, then use this URL to resize it:

https://yoursite.com/cdn-cgi/image/width=100,height=100,quality=75/images/yourimage.jpg

Experiment with changing width=, height=, and quality=. The instructions above use the Default URL Format for Image Resizing. For details on options, use cases, and compatibility, refer to our Developer Documentation.
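As a rough illustration of that last point, here is a minimal sketch of a Worker that resizes only requests for common image extensions coming from mobile clients, assuming Image Resizing is already enabled on the zone. The extension list, the User-Agent test and the 600-pixel width are illustrative choices for the sketch, not part of the product:

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

// Illustrative list of extensions we consider "images" for this sketch.
const IMAGE_EXTENSIONS = /\.(jpe?g|png|gif)$/i

async function handleRequest(request) {
  const url = new URL(request.url)
  const userAgent = request.headers.get('User-Agent') || ''
  const isMobile = /Mobi/i.test(userAgent)  // crude mobile check, good enough for a sketch

  if (IMAGE_EXTENSIONS.test(url.pathname) && isMobile) {
    // Ask Image Resizing to cap the width for small screens.
    return fetch(request, {
      cf: { image: { width: 600, fit: 'scale-down' } }
    })
  }

  // Everything else passes through untouched.
  return fetch(request)
}

Because the Worker runs on a route matching the existing image URLs, no markup changes are needed; desktop visitors keep receiving the original files while mobile visitors get the scaled-down variants.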

Importance of a Website and the Best Web Hosting for Small Businesses

Reseller Club Blog -

Everything starts small and then grows big, even businesses. By definition, a small business is a business that is owned independently and is limited in terms of size and revenue generated. For instance, a cake shop or a web design agency with 10-12 employees can be called a small business. The upper limit to the employee count is 500, so if your business falls in the bracket of 1-500 employees, you are a small business.

Irrespective of your business size, in today's competitive online world a social presence is a must, be it a Facebook business page, Twitter or Instagram, to engage more users and reach a wider audience. A business website, however, is also a must and shouldn't be overlooked. In this article, we'll be covering the importance of a business website and the best hosting for small business websites. So let us begin!

Importance of a website

If we were to say that having a business website adds to a business's credibility, wouldn't you agree? You most likely would! A website listed on Google is easier to browse and more detailed than your YouTube channel, Facebook Marketplace listing or Instagram page. In fact, according to one report, 97% of consumers use the internet to find local businesses, whereas another finding says only 64% of small businesses have their own website. Going by these two findings, the importance of a business website is clear: if you are among the businesses without one, chances are the 64% that do have a website will outperform you in gaining an audience and broadening their reach. Retaining those customers is the next step.

Here are the top reasons why having a business website is important:

Your website shows up in local internet search results, so more people learn that you exist.
You can leverage your digital marketing strategy by running dedicated campaigns on social media and linking back to your website for more in-depth information.
You can collect customer information by asking visitors to subscribe, and keep in touch with them regularly with the right email marketing.
Content is king, and you can use it to sell products to your customers by attracting them with the right words and information.
Your website is like a one-stop shop for customers looking for a particular product, albeit a virtual one. Thriftbooks, an online bookstore, is one example of a website that acts as a virtual store and has grown steadily over the years.

Now that we've seen the importance of a website for your small business, let us move on to the next big thing: the backend of your website. You may wonder, why the backend? Because you need to understand the technicalities of running your website.

Things to look out for in web hosting

Imagine you finally decide to set up your website and are launching it at 11 AM local time. The invites have been mailed, the social media posters are up, and as the clock strikes 11, your current and potential future customers click on the much-awaited link. To their surprise and your utter horror, the website is down because the server you hosted it on crashed under the traffic, or the website loads painfully slowly. If this becomes a recurring issue, instead of helping you gain customers your website will end up losing them. Tragic, isn't it? Calm down: the right web hosting can help you avoid this problem. But what is the right web hosting?
When it comes to web hosting solutions for a small business, there is always the question of which hosting is best. Before tackling that question, let us first understand what you need to look for in web hosting.

Managed Servers

As a small business you will have a limited number of employees to manage your business and marketing activities. If, on top of that, you also had to manage the technical side of running your website, it would complicate matters. Web hosting that provides managed services keeps things simple and takes care of your business website by maintaining it, upgrading the software, and managing security, backups and much more, so that you can concentrate on running your business without worrying about the website.

RC Advantage: We offer managed hosting services that are simple and secure, along with an easy, intuitive dashboard, cPanel, for free to help you navigate through your orders easily. This means you can take care of running your business while we take care of website management.

Website loading speed

Website loading speed is an important factor in how your customers feel about your website. According to a survey, 40% of users abandon a site if it takes more than 3 seconds to load. To make sure your website visitors don't get that experience, which in turn can hamper your performance, choose a web hosting service that offers good load speed.

RC Advantage: Our web hosting services, especially Cloud Hosting, come with Varnish cache and Ceph storage. Varnish cache helps boost your website speed by up to 1000x, which means blazing-fast page loads, while Ceph ensures there is no single point of failure!

Support

System admin support is very important in case you are stuck with something and don't know how to proceed. A good web hosting service provider will always offer 24/7 website support.

RC Advantage: We offer round-the-clock expert website support to resolve any of your technical queries and provide uninterrupted service.

Resource upgradation

Today you are a small business; tomorrow your performance may multiply and you will outgrow your hosting resources. When that happens, it is important that your hosting provider offers resource upgrades, such as RAM and CPU, to accommodate your growing traffic.

RC Advantage: Our hosting plans are easily scalable. We offer easy single-click upgrades of hosting resources like RAM, OS and CPU with our Cloud Hosting.

Best web hosting for small business

Having seen what your chosen web hosting should offer, it is time to figure out which web hosting is best for small businesses. There are several options available, but the one we would recommend to small businesses is Cloud Hosting. In Cloud Hosting your website data is stored across multiple devices, which improves redundancy and website load time. It also offers all the features above, along with data mirroring, several one-click application installations (WordPress, Joomla, etc.) and an intuitive panel to monitor your resources. Above all, reliability is of utmost importance, and because Cloud Hosting delivers it, we believe it is the best hosting for a small business.

We hope this article helped you understand the importance of having your own business website, as well as how to choose the web hosting solution for your small business. If you have any queries or comments, feel free to leave them in the comments section below.
The post Importance of a Website and the Best Web Hosting for Small Businesses appeared first on ResellerClub Blog.

Notice of MDS Vulnerabilities

The Rackspace Blog & Newsroom -

On 14 May 2019, Intel released information about a new group of vulnerabilities collectively called Microarchitectural Data Sampling (MDS). Left unmitigated, these vulnerabilities could potentially allow sophisticated attackers to gain access to sensitive data, secrets, and credentials that could allow for privilege escalation and unauthorized access to user data. Our highest priority is protection of […] The post Notice of MDS Vulnerabilities appeared first on The Official Rackspace Blog.

Removal of PHP 5.6 and PHP 7.0 in EasyApache Profiles

cPanel Blog -

Both PHP 5.6 and PHP 7.0 reached End of Life at the beginning of the year, and are no longer receiving any security patches from PHP. With cPanel & WHM Version 80 moving to the current tier, we are also encouraging users to upgrade to supported PHP versions in EasyApache 4. To help with that, we are removing PHP 5.6 and 7.0 from our default EasyApache profiles. This change only impacts servers running our default …

Parallel streaming of progressive images

CloudFlare Blog -

Progressive image rendering and HTTP/2 multiplexing technologies have existed for a while, but now we've combined them in a new way that makes them much more powerful. With Cloudflare progressive streaming, images appear to load in half of the time, and browsers can start rendering pages sooner.

In HTTP/1.1 connections, servers didn't have any choice about the order in which resources were sent to the client; they had to send responses, as a whole, in the exact order they were requested by the web browser. HTTP/2 improved this by adding multiplexing and prioritization, which allows servers to decide exactly what data is sent and when. We've taken advantage of these new HTTP/2 capabilities to improve the perceived loading speed of progressive images by sending the most important fragments of image data sooner. This feature is compatible with all major browsers, and doesn't require any changes to page markup, so it's very easy to adopt. Sign up for the Beta to enable it on your site!

What is progressive image rendering?

Basic images load strictly from top to bottom. If a browser has received only half of an image file, it can show only the top half of the image. Progressive images have their content arranged not from top to bottom, but from a low level of detail to a high level of detail. Receiving a fraction of the image data allows browsers to show the entire image, only with a lower fidelity. As more data arrives, the image becomes clearer and sharper.

This works great in the JPEG format, where only about 10-15% of the data is needed to display a preview of the image, and at 50% of the data the image looks almost as good as when the whole file is delivered. Progressive JPEG images contain exactly the same data as baseline images, merely reshuffled in a more useful order, so progressive rendering doesn't add any cost to the file size. This is possible because JPEG doesn't store the image as pixels. Instead, it represents the image as frequency coefficients, which are like a set of predefined patterns that can be blended together, in any order, to reconstruct the original image. The inner workings of JPEG are really fascinating, and you can learn more about them from my recent performance.now() conference talk.

The end result is that images can look almost fully loaded in half of the time, for free! The page appears to be visually complete and can be used much sooner. The rest of the image data arrives shortly after, upgrading images to their full quality, before visitors have time to notice anything is missing.

HTTP/2 progressive streaming

But there's a catch. Websites have more than one image (sometimes even hundreds of images). When the server sends image files naïvely, one after another, progressive rendering doesn't help that much, because overall the images still load sequentially: having complete data for half of the images (and no data for the other half) doesn't look as good as having half of the data for all images.

And there's another problem: when the browser doesn't know image sizes yet, it lays the page out with placeholders instead, and relays out the page when each image loads. This can make pages jump during loading, which is inelegant, distracting and annoying for the user.

Our new progressive streaming feature greatly improves the situation: we can send all of the images at once, in parallel.
This way the browser gets size information for all of the images as soon as possible, can paint a preview of all images without having to wait for a lot of data, and large images don't delay loading of styles, scripts and other more important resources.

This idea of streaming progressive images in parallel is as old as HTTP/2 itself, but it needs special handling in low-level parts of web servers, and so far this hasn't been implemented at a large scale. When we were improving our HTTP/2 prioritization, we realized it can also be used to implement this feature. Image files as a whole are neither high nor low priority. The priority changes within each file, and dynamic re-prioritization gives us the behavior we want:

The image header that contains the image size is very high priority, because the browser needs to know the size as soon as possible to do page layout. The image header is small, so it doesn't hurt to send it ahead of other data.
The minimum amount of data required to show a preview of the image has a medium priority (we'd like to plug the "holes" left for unloaded images as soon as possible, but also leave some bandwidth available for scripts, fonts and other resources).
The remainder of the image data is low priority. Browsers can stream it last to refine image quality once there's no rush, since the page is already fully usable.

Knowing the exact amount of data to send in each phase requires understanding the structure of image files, but it seemed weird to us to make our web server parse image responses and have format-specific behavior hardcoded at the protocol level. By framing the problem as a dynamic change of priorities, we were able to elegantly separate low-level networking code from knowledge of image formats. We can use Workers or offline image processing tools to analyze the images, and instruct our server to change HTTP/2 priorities accordingly.

The great thing about parallel streaming of images is that it doesn't add any overhead. We're still sending the same data, the same amount of data; we're just sending it in a smarter order. This technique takes advantage of existing web standards, so it's compatible with all browsers.

The waterfall

Here are waterfall charts from WebPageTest comparing regular HTTP/2 responses and progressive streaming. In both cases the files were exactly the same, the amount of data transferred was the same, and the overall page loading time was the same (within measurement noise). In the charts, blue segments show when data was transferred, and green shows when each request was idle.

The first chart shows typical server behavior that makes images load mostly sequentially. The chart itself looks neat, but the actual experience of loading that page was not great; the last image didn't start loading until almost the end.

The second chart shows images loaded in parallel. The blue vertical streaks throughout the chart are image headers sent early, followed by a couple of stages of progressive rendering. You can see that useful data arrived sooner for all of the images. You may notice that one of the images has been sent in one chunk, rather than split like all the others.
That's because at the very beginning of a TCP/IP connection we don't know the true speed of the connection yet, and we have to sacrifice some opportunity to do prioritization in order to maximize the connection speed.

The metrics compared to other solutions

There are other techniques intended to provide image previews quickly, such as the low-quality image placeholder (LQIP), but they have several drawbacks. They add unnecessary data for the placeholders, usually interfere with browsers' preload scanner, and delay loading of full-quality images due to a dependence on the JavaScript needed to upgrade the previews to full images.

Our solution doesn't cause any additional requests and doesn't add any extra data, so overall page load time is not delayed. It doesn't require any JavaScript; it takes advantage of functionality supported natively in browsers. And it doesn't require any changes to a page's markup, so it's very safe and easy to deploy site-wide.

The improvement in user experience is reflected in performance metrics such as SpeedIndex and time to visually complete. Notice that with regular image loading the visual progress is linear, but with progressive streaming it quickly jumps to mostly complete.

Getting the most out of progressive rendering

Avoid ruining the effect with JavaScript. Scripts that hide images and wait until the onload event to reveal them (with a fade-in, etc.) will defeat progressive rendering. Progressive rendering works best with the good old <img> element.

Is it JPEG-only?

Our implementation is format-independent, but progressive streaming is useful only for certain file types. For example, it wouldn't make sense to apply it to scripts or stylesheets: these resources are rendered as all-or-nothing. Prioritizing of image headers (containing the image size) works for all file formats.

The benefits of progressive rendering are unique to JPEG (supported in all browsers) and JPEG 2000 (supported in Safari). GIF and PNG have interlaced modes, but these modes come at the cost of worse compression. WebP doesn't support progressive rendering at all. This creates a dilemma: WebP is usually 20%-30% smaller than a JPEG of equivalent quality, but progressive JPEG appears to load 50% faster. There are next-generation image formats that support progressive rendering better than JPEG and compress better than WebP, but they're not supported in web browsers yet. In the meantime you can choose between the bandwidth savings of WebP or the better perceived performance of progressive JPEG by changing the Polish settings in your Cloudflare dashboard.

Custom header for experimentation

We also support a custom HTTP header that allows you to experiment with, and optimize, streaming of other resources on your site. For example, you could make our servers send the first frame of animated GIFs with high priority and deprioritize the rest. Or you could prioritize loading of resources mentioned in the <head> of HTML documents before the <body> is loaded.

The custom header can be set only from a Worker. The syntax is a comma-separated list of file positions with priority and concurrency. The priority and concurrency are the same as in the whole-file cf-priority header described in the previous blog post:

cf-priority-change: <offset in bytes>:<priority>/<concurrency>, ...
For example, for a progressive JPEG we use something like this (a fragment of JS to use in a Worker):

let headers = new Headers(response.headers);
headers.set("cf-priority", "30/0");
headers.set("cf-priority-change", "512:20/1, 15000:10/n");
return new Response(response.body, {headers});

This instructs the server to use priority 30 initially, while it sends the first 512 bytes; then to switch to priority 20 with some concurrency (/1); and finally, after sending 15000 bytes of the file, to switch to low priority and high concurrency (/n) to deliver the rest of the file.

We'll try to split HTTP/2 frames to match the offsets specified in the header, so the sending priority changes as soon as possible. However, priorities don't guarantee that data of different streams will be multiplexed exactly as instructed, since the server can prioritize only when it has data of multiple streams waiting to be sent at the same time. If some of the responses arrive much sooner from the upstream server or the cache, the server may send them right away, without waiting for other responses.

Try it!

You can use our Polish tool to convert your images to progressive JPEG. Sign up for the beta to have them elegantly streamed in parallel.
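To make the fragment above easier to picture in context, here is a minimal sketch of a complete Worker that applies those priority hints to JPEG responses. It assumes the zone is enrolled in the beta described above, and it reuses the 512 and 15000 byte offsets from the fragment purely as illustrative values:

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  const response = await fetch(request)

  // Only touch JPEG responses; everything else passes through unchanged.
  const contentType = response.headers.get('Content-Type') || ''
  if (!contentType.includes('image/jpeg')) {
    return response
  }

  // Copy the headers so we can attach the priority hints described above.
  let headers = new Headers(response.headers)
  headers.set('cf-priority', '30/0')                         // whole-file starting priority
  headers.set('cf-priority-change', '512:20/1, 15000:10/n')  // header first, preview next, rest last

  return new Response(response.body, {
    status: response.status,
    statusText: response.statusText,
    headers
  })
}

In practice the byte offsets would be derived from the structure of each JPEG, for example with the offline image-processing tools mentioned earlier, rather than hardcoded.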
