Industry Buzz

AWS Launches & Previews at re:Invent 2019 – Wednesday, December 4th

Amazon Web Services Blog -

Here’s what we announced today: Amplify DataStore – This is a persistent, on-device storage repository that will help you to synchronize data across devices and to handle offline operations. It can be used as a standalone local datastore for web and mobile applications that have no connection to the cloud or an AWS account. When used with a cloud backend, it transparently synchronizes data with AWS AppSync. Amplify iOS and Amplify Android – These open source libraries enable you to build scalable and secure mobile applications. You can easily add analytics, AI/ML, API (GraphQL and REST), datastore, and storage functionality to your mobile and web applications. The use case-centric libraries provide a declarative interface that enables you to programmatically apply best practices with abstractions. The libraries, along with the Amplify CLI, a toolchain to create, integrate, and manage the cloud services used by your applications, are part of the Amplify Framework. Amazon Neptune Workbench – You can now query your graphs from within the Neptune Console using either Gremlin or SPARQL queries. You get a fully managed, interactive development environment that supports live code and narrative text within Jupyter notebooks. In addition to queries, the notebooks support bulk loading, query planning, and query profiling. To get started, visit the Neptune Console. Amazon Chime Meetings App for Slack – This new app allows Slack users to start and join Amazon Chime online meetings from their Slack workspace channels and conversations. Slack users who are new to Amazon Chime will be auto-registered with Chime when they use the app for the first time, and can get access to all of the benefits of Amazon Chime meetings from their Slack workspace. Administrators of Slack workspaces can install the Amazon Chime Meetings App for Slack from the Slack App Directory. To learn more, visit this blog post. HTTP APIs for Amazon API Gateway in Preview – This is a new API Gateway feature that will let you build cost-effective, high-performance RESTful APIs for serverless workloads using Lambda functions and other services with an HTTP endpoint. HTTP APIs are optimized for performance—they offer the core functionality of API Gateway at a cost savings of up to 70% compared to REST APIs in API Gateway. You will be able to create routes that map to multiple disparate backends, define & apply authentication and authorization to routes, set up rate limiting, and use custom domains to route requests to the APIs. Visit this blog post to get started. Windows gMSA Support in ECS – Amazon Elastic Container Service (ECS) now supports Windows group Managed Service Account (gMSA), a new capability that allows you to authenticate and authorize your ECS-powered Windows containers with network resources using Active Directory (AD). You can now easily use Integrated Windows Authentication with your Windows containers on ECS to secure services. — Jeff;
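To give a quick feel for the HTTP APIs preview described above, here is a minimal sketch of putting an HTTP API in front of an existing Lambda function with the AWS CLI. The function name, region, and account ID are placeholders, and the exact flags may evolve while the feature is in preview.

```bash
# Quick-create an HTTP API that proxies every request to an existing Lambda function
# (my-function, us-east-1, and 123456789012 are placeholder values)
aws apigatewayv2 create-api \
    --name my-http-api \
    --protocol-type HTTP \
    --target arn:aws:lambda:us-east-1:123456789012:function:my-function

# Grant API Gateway permission to invoke the function
aws lambda add-permission \
    --function-name my-function \
    --statement-id http-api-invoke \
    --action lambda:InvokeFunction \
    --principal apigateway.amazonaws.com
```

The create-api call returns an ApiEndpoint URL you can use right away; routes, authorizers, and custom domains can be added afterwards as described in the launch post.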

Amplify DataStore – Simplify Development of Offline Apps with GraphQL

Amazon Web Services Blog -

The open source Amplify Framework is a command line tool and a library allowing web & mobile developers to easily provision and access cloud-based services. For example, if I want to create a GraphQL API for my mobile application, I use amplify add api on my development machine to configure the backend API. After answering a few questions, I type amplify push to create an AWS AppSync API backend in the cloud. Amplify generates code allowing my app to easily access the newly created API. Amplify supports popular web frameworks, such as Angular, React, and Vue. It also supports mobile applications developed with React Native, Swift for iOS, or Java for Android. If you want to learn more about how to use Amplify for your mobile applications, feel free to attend one of the workshops (iOS or React Native) we prepared for the re:Invent 2019 conference. AWS customers told us that among the most difficult tasks when developing web & mobile applications are synchronizing data across devices and handling offline operations. Ideally, when a device is offline, your customers should be able to continue to use your application, not only to access data but also to create and modify it. When the device comes back online, the application must reconnect to the backend, synchronize the data and resolve conflicts, if any. It requires a lot of undifferentiated code to correctly handle all edge cases, even when using the AWS AppSync SDK’s on-device cache with offline mutations and delta sync. Today, we are introducing Amplify DataStore, a persistent on-device storage repository for developers to write, read, and observe changes to data. Amplify DataStore allows developers to write apps leveraging distributed data without writing additional code for offline or online scenarios. Amplify DataStore can be used as a stand-alone local datastore in web and mobile applications, with no connection to the cloud, or the need to have an AWS Account. However, when used with a cloud backend, Amplify DataStore transparently synchronizes data with an AWS AppSync API when network connectivity is available. Amplify DataStore automatically versions data and implements conflict detection and resolution in the cloud using AppSync. The toolchain also generates object definitions for your programming language based on the GraphQL schema you provide. Let’s see how it works. I first install the Amplify CLI and create a React App. This is standard React; you can find the script on my git repo. I add Amplify DataStore to the app with npx amplify-app. npx is specific to NodeJS; Amplify DataStore also integrates with native mobile toolchains, such as the Gradle plugin for Android Studio and CocoaPods, which creates custom Xcode build phases for iOS. Now that the scaffolding of my app is done, I add a GraphQL schema representing two entities: Posts and Comments on these posts. I install the dependencies and use the AWS Amplify CLI to generate the source code for the objects defined in the GraphQL schema. # add a graphql schema to amplify/backend/api/amplifyDatasource/schema.graphql echo "enum PostStatus { ACTIVE INACTIVE } type Post @model { id: ID! title: String! comments: [Comment] @connection(name: "PostComments") rating: Int! status: PostStatus! } type Comment @model { id: ID! 
content: String post: Post @connection(name: "PostComments") }" > amplify/backend/api/amplifyDatasource/schema.graphql # install dependencies npm i @aws-amplify/core @aws-amplify/DataStore @aws-amplify/pubsub # generate the source code representing the model npm run amplify-modelgen # create the API in the cloud npm run amplify-push @model and @connection are directives that the Amplify GraphQL Transformer uses to generate code. Objects annotated with @model are top-level objects in your API; they are stored in DynamoDB, and you can make them searchable, version them, or restrict their access to authorised users only. @connection allows you to express 1-n relationships between objects, similarly to what you would define when using a relational database (you can use the @key directive to model n-n relationships). The last step is to create the React app itself. I propose to download a very simple sample app to get started quickly: # download a simple react app curl -o src/App.js https://raw.githubusercontent.com/sebsto/amplify-datastore-js-e2e/master/src/App.js # start the app npm run start I connect my browser to the app at http://localhost:8080 and start to test the app. The demo app provides a basic UI (as you can guess, I am not a graphic designer!) to create, query, and delete items. Amplify DataStore provides developers with an easy-to-use API to store, query, and delete data. Reads and writes are propagated in the background to your AppSync endpoint in the cloud. Amplify DataStore uses a local data store via a storage adapter; we ship IndexedDB for web and SQLite for mobile. Amplify DataStore is open source, so you can add support for other databases if needed. From a code perspective, interacting with data is as easy as invoking the save(), delete(), or query() operations on the DataStore object (this is a JavaScript example; you would write similar code for Swift or Java). Notice that the query() operation accepts filters based on predicate expressions, such as item.rating("gt", 4) or Predicates.ALL. function onCreate() { DataStore.save( new Post({ title: `New title ${Date.now()}`, rating: 1, status: PostStatus.ACTIVE }) ); } function onDeleteAll() { DataStore.delete(Post, Predicates.ALL); } async function onQuery(setPosts) { const posts = await DataStore.query(Post, c => c.rating("gt", 4)); setPosts(posts) } async function listPosts(setPosts) { const posts = await DataStore.query(Post, Predicates.ALL); setPosts(posts); } I connect to the Amazon DynamoDB console and observe that the items are stored in my backend: There is nothing to change in my code to support offline mode. To simulate offline mode, I turn off my wifi. I add two items in the app and turn on the wifi again. The app continues to operate as usual while offline. The only noticeable change is that the _version field is not updated while offline, as it is populated by the backend. When the network is back, Amplify DataStore transparently synchronizes with the backend. I verify there are 5 items now in DynamoDB (the table name is different for each deployment, so be sure to adjust the name for your table below): aws dynamodb scan --table-name Post-raherug3frfibkwsuzphkexewa-amplify \ --filter-expression "#deleted <> :value" \ --expression-attribute-names '{"#deleted" : "_deleted"}' \ --expression-attribute-values '{":value" : { "BOOL": true} }' \ --query "Count" 5 // <= there are now 5 non-deleted items in the table! Amplify DataStore leverages GraphQL subscriptions to keep track of changes that happen on the backend. 
Your customers can modify the data from another device and Amplify DataStore takes care of synchronizing the local data store transparently. No GraphQL knowledge is required; Amplify DataStore takes care of the low-level GraphQL API calls for you automatically. Real-time data, connections, scalability, fan-out and broadcasting are all handled by the Amplify client and AppSync, using the WebSocket protocol under the covers. We are effectively using GraphQL as a network protocol to dynamically transform model instances to GraphQL documents over HTTPS. To refresh the UI when a change happens on the backend, I add the following code in the useEffect() React hook. It uses the DataStore.observe() method to register a callback function (msg => { ... }). Amplify DataStore calls this function when an instance of Post changes on the backend. const subscription = DataStore.observe(Post).subscribe(msg => { console.log(msg.model, msg.opType, msg.element); listPosts(setPosts); }); Now, I open the AppSync console. I query existing Posts to retrieve a Post ID. query ListPost { listPosts(limit: 10) { items { id title status rating _version } } } I choose the first post in my app, the one starting with 7d8… and I send the following GraphQL mutation: mutation UpdatePost { updatePost(input: { id: "7d80688f-898d-4fb6-a632-8cbe060b9691" title: "updated title 13:56" status: ACTIVE rating: 7 _version: 1 }) { id title status rating _lastChangedAt _version _deleted } } Immediately, I see the app receiving the notification and refreshing its user interface. Finally, I test with multiple devices. I first create a hosting environment for my app using amplify add hosting and amplify publish. Once the app is published, I open the iOS Simulator and Chrome side by side. Both apps initially display the same list of items. I create new items in both apps and observe the apps refreshing their UI in near real time. At the end of my test, I delete all items. I verify there are no more items in DynamoDB (the table name is different for each deployment, so be sure to adjust the name for your table below): aws dynamodb scan --table-name Post-raherug3frfibkwsuzphkexewa-amplify \ --filter-expression "#deleted <> :value" \ --expression-attribute-names '{"#deleted" : "_deleted"}' \ --expression-attribute-values '{":value" : { "BOOL": true} }' \ --query "Count" 0 // <= all the items have been deleted! When syncing local data with the backend, AWS AppSync keeps track of version numbers to detect conflicts. When there is a conflict, the default resolution strategy is to automerge the changes on the backend. Automerge is an easy strategy to resolve conflicts without writing client-side code. For example, let’s pretend I have an initial Post, and Bob & Alice update the post at the same time: The original item: { "_version": 1, "id": "25", "rating": 6, "status": "ACTIVE", "title": "DataStore is Available" } Alice updates the rating: { "_version": 2, "id": "25", "rating": 10, "status": "ACTIVE", "title": "DataStore is Available" } At the same time, Bob updates the title: { "_version": 2, "id": "25", "rating": 6, "status": "ACTIVE", "title": "DataStore is great !" } The final item after auto-merge is: { "_version": 3, "id": "25", "rating": 10, "status": "ACTIVE", "title": "DataStore is great !" } Automerge strictly defines merging rules at field level, based on type information defined in the GraphQL schema. 
For example, List and Map types are merged, and conflicting updates on scalars (such as numbers and strings) preserve the value existing on the server. Developers can choose other conflict resolution strategies: optimistic concurrency (conflicting updates are rejected) or custom (an AWS Lambda function is called to decide which version is the correct one). You can choose the conflict resolution strategy with amplify update api. You can read more about these different strategies in the AppSync documentation. The full source code for this demo is available on my git repository. The app has fewer than 100 lines of code, 20% of which is just UI related. Notice that I did not write a single line of GraphQL code; everything happens in the Amplify DataStore. Your Amplify DataStore cloud backend is available in all AWS Regions where AppSync is available, which, at the time I write this post, are: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (London). There are no additional charges to use Amplify DataStore in your application; you only pay for the backend resources you use, such as AppSync and DynamoDB (see here and here for the pricing details). Both services have a free tier allowing you to discover and to experiment for free. Amplify DataStore allows you to focus on the business value of your apps, instead of writing undifferentiated code. I can’t wait to discover the great applications you’re going to build with it. -- seb
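For easy reference, here is the end-to-end command sequence used in the walkthrough above, collected in one place (the npm scripts are the ones created by the npx amplify-app scaffolding):

```bash
# Scaffold Amplify DataStore into an existing React app
npx amplify-app

# Generate model classes from the GraphQL schema, then create the AppSync backend
npm run amplify-modelgen
npm run amplify-push

# Optionally switch the conflict resolution strategy
# (Automerge, optimistic concurrency, or a custom Lambda function)
amplify update api

# Publish the app to test synchronization across multiple devices
amplify add hosting
amplify publish
```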

EasyApache 3 to EasyApache 4 Autoconversion

cPanel Blog -

As you may have noticed on the front page of our website, we’ve added a new section about the “Up Next” initiative, explaining upcoming changes to cPanel & WHM. A significant change coming in early 2020 is the EasyApache 3 to EasyApache 4 autoconversion. When we last made a change in the Up Next initiative, we upgraded users on out-of-date and unsupported cPanel & WHM versions to Version 78. This upgrade did come …

The New Holiday Rules at Work

LinkedIn Official Blog -

The holiday season is in full swing, and the “most wonderful time of the year” is also known for being the busiest and most stressful. In fact, 88% of Americans* feel stressed during the holiday season due to balancing holiday commitments at both home and work.  So, here’s the good news: the world of work is changing, and the lines between our professional and personal lives are blurred as professionals today receive more work-life flexibility and balance than ever before. But how does this... .

How to Launch a Website for Your Etsy Shop

HostGator Blog -

You’ve heard of Etsy. You know, it’s the place you go to when you need an adorable print of your favorite quote. It’s the hub for gorgeous, unique, hand-made jewelry of all sorts. It’s the perfect eCommerce website to find organic products, vintage items, and craft supplies. You may even already have an Etsy side hustle of your own, and know everything that is involved with selling your own hand-crafted items online. What you may not know about Etsy, however, are the exact numbers that prove just how successful Etsy is. To fill you in, reports show that Etsy generated revenues worth 603.7 million U.S. dollars last year alone, according to Statista. Not to mention, in July 2019, the company had a market capitalization of 7.46 billion U.S. dollars, and Etsy is projected to grow its revenue at a 22 percent CAGR between 2019 and 2021. What this means in regular-people speak is this: Etsy is the perfect place for crafty individuals to start a successful side hustle. While using Etsy is an awesome way to list and sell your products online, it’s only half the battle of growing a successful online store side hustle. The other half lies in running and operating your own website and integrating your Etsy products into it. This article will cover why you need your own website along with your Etsy store, how to build your website, and how to integrate your Etsy store into your website. Why Every Etsy Shop Needs Its Own Website Etsy is an excellent eCommerce platform for listing and selling your products. However, that’s where Etsy starts and stops. If you are looking to grow your side hustle into a revenue-making machine, then it’s necessary to have a website. Let’s look into some of the ways a website helps you grow your side hustle. 1. You can increase the sales of your products The first reason to start your own website is so you can take complete control of the sales of your products. To illustrate this point, let’s meet a successful side hustler, Rebekah Welch. Rebekah Welch is the owner and operator of Cherish Bath + Body. Welch describes what she does in her own words. She says, “I make bath products and skin care from scratch using natural, organic, plant based ingredients.” While Welch started out selling products with only an Etsy store, she soon recognized that she lacked complete control of her sales. As a result, she used HostGator to build her own website and boost her revenues. Welch says, “I just recently finished building my own website. Previously, I relied on Etsy for sales. Now, having my own site, I can sell my products on my own terms.” Now that Welch has complete control of her online sales, she has plans for future growth. About her future plans, she says, “I am going to grow my business into a full time income. I would like to set up a few wholesale accounts and see my products in boutiques as well as building a brick-and-mortar store.” Key Takeaway: Building your own website will allow you to sell your products on your own terms. 2. You can gain complete control of your business Since Etsy is its own company with investors, pressures from stakeholders to grow rapidly, and a list of terms and conditions you agree to, Etsy often works on a different agenda than its sellers. For example, at any time Etsy can raise their transaction fees to boost profits, make design changes as desired, and change its search algorithm, which may result in dips in your Etsy store traffic. 
These potential changes can negatively impact the sales and overall success of your Etsy store, and there is nothing you can do about it. The best way to ensure your business isn’t subject to the “Big Wig Decision Makers” at Etsy is to create a website—a website you control fully. Key Takeaway: When your store only exists on Etsy, Etsy is the boss. When you build your own website and sell your products in your own online store, you’re the boss. 3. You can invest in robust online marketing Another benefit of having your own website is the ability to invest in online marketing, content marketing through your blog, and to grow at your own pace. When you have your own website, you have full control to optimize it for search, invest in paid advertising, and build a content marketing plan.  All of these endeavors will help you grow your fan base, boost your clientele, and make more sales. Key Takeaway: Operating your own website allows you to participate in online marketing to grow your traffic and boost sales. Grow Your Etsy Business by Building Your Own Website with HostGator One of the benefits of Etsy is how easy it is to get your store up and running. The good news is setting up your website with HostGator is also an intuitive process, especially if you’ve already set up an Etsy store. To set up your own website, you don’t need to know how to code, or be an expert on web design. You also don’t have to spend an arm and a leg to hire a professional to help you. All you need to do is follow six easy steps, and you can get your website up in less than a day. Here is a quick overview of each of the six steps. Step 1: Pick a hosting plan for your website. Every website needs hosting. Buying a hosting package essentially means you are renting space on a third-party company server to store your web files. HostGator is a web hosting company that offers three website builder plans you can choose from for your online store. You can pick your plan depending on your needs and how much functionality you need for your site. The Starter plan includes a free domain, 200+ templates that will work well for someone looking to promote a book, cloud hosting, a drag-and-drop editor, and website analytics.  The Premium plan includes everything the Start plan includes plus access to priority support when you need it. Since you are running an online store and will be selling products, the best package for you is the eCommerce plan. This package includes everything the Premium plan provides plus full eCommerce functionality. In other words, you can sell your products online with this plan. Once you’ve picked the eCommerce plan for your online store, click “buy now” and you can set up your account. Step 2: Pick a domain name for your website. Every Gator Website Builder package includes a free domain. This means you don’t have to purchase a domain from a separate domain hosting company. To pick your domain, simply type something in the “get domain” box.  Since you already have an Etsy shop, the best thing to do would be to choose the name of your Etsy store as your domain. For example, Rebekah Welch picked cherishbathandbody.com, the same name of her Etsy shop. If you are just getting started and don’t have an Etsy store yet or a domain name, here is a helpful article on how to choose the perfect domain name. If you already have a domain name you’ve been saving for when you launch your own website, then you can connect it to your HostGator account by clicking “connect it here.”  Step 3: Create your account. 
Once you have a domain name, it’s easy to connect your HostGator account. All you need is a Facebook account or an email address to connect. Then, enter your payment information for the package you selected, and you’ll be ready to pick a template for your website. Step 4: Pick a template for your eCommerce website. As mentioned above, you don’t have to build your website on your own. The Gator Website Builder comes with templates, and all you have to do is pick the one that matches the style of your Etsy store. Once you create your account, HostGator will direct you to the “choose a template” page. You can scroll through more than 200 professionally-designed templates, and select the template that you love. The next step is to customize it as you please with the drag and drop builder. Step 5: Add content to your online store. After you have selected a template, it’s time to start customizing your website with content. Click “start editing.” This step will send you to your dashboard where you can add, edit, and delete pages like your homepage, about page, online store, product pages, blog, and any other page you want to include. With the drag and drop builder, you can make your website look how you want it to look by pointing, clicking, dragging, and dropping the elements you want to include. It’s an intuitive process, but if you have any questions, HostGator provides a free and easy step-by-step guide for reference that you can access at any time. To access this helpful guide, click the “menu” icon next to the Gator by HostGator logo and select the “getting started tour.” Additionally, since you signed up for the eCommerce plan, you have access to priority support whenever you have questions. Step 6: Review your content and launch your website. The last step is to review your website, make any desired changes, and publish your website. By clicking “preview,” you can see your website in full. If everything looks perfect, then click the “finish preview” button at the top and then “publish website” at the top of the dashboard. Gator Website Builder will present a series of quick steps to help you go live. How to Integrate Your Etsy Shop into Your Website Do you already have your own website? Then, you’ve already done the hard part! All that’s left is for you to integrate your Etsy shop into your website. Etsy used to offer a service called Etsy Mini where you could copy and paste a unique code into your website, and the code would pull your products into your website. Unfortunately, Etsy no longer offers this service, and without some deep digging and intense workarounds, it’s difficult to integrate products with an Etsy code. However, even if Etsy Mini is no longer an option, you’re not out of luck. You can still add your Etsy store into your WordPress account. Here’s how. 1. Add the Etsy plugin The first step to integrating your Etsy store into WordPress is to install and activate the Etsy plugin. Upon activation, remember to go to Settings and then the Etsy Shop page and enter your Etsy API key to connect your shop. 2. Copy your API key Once you have connected your shop, you will see your Etsy API key. Copy this key for the next step. 3. Paste your API key Next, return to your WordPress admin area and paste the Etsy API key, and save changes. 4. Create a page or edit an existing page Once you have saved your changes, you’re ready to sell products from your Etsy shop on your WordPress site. 
You’ll just need to create a new page in WordPress or edit an existing page, and add your shortcode. And, that’s it! It’s worth mentioning that since Etsy is its own eCommerce platform, it won’t have the same functionality as WooCommerce. As your side hustle begins gaining traction and you are ready to grow it into a full-fledged business, you may want to consider ditching your Etsy shop entirely and building out your own eCommerce platform on your website. Grow Your Etsy Shop with Your Own Website If you have an Etsy side hustle and are looking to gain more control, make more sales, and market your products online with more freedom, now is the time to set up your own website. For more information on how to get started with building your own website, visit HostGator and check out the Gator Website Builder. With the help of the Gator Website Builder, you can get your own website up and running in less than a day.

How to Create Square Videos That Stand Out: 6 Useful Tools

Social Media Examiner -

Want to make square videos that work on any social platform? Looking for tools to help? In this article, you’ll discover six tools to crop, brand, and optimize square videos to perform better on Facebook, Instagram, and LinkedIn. Why Square Video on Social Media? Brands and businesses of all sizes are jumping on the video […]

AWS Launches & Previews at re:Invent 2019 – Tuesday, December 3rd

Amazon Web Services Blog -

Whew, what a day. This post contains a summary of the announcements that we made today. Launch Blog Posts Here are detailed blog posts for the launches: AWS Outposts Now Available – Order Your Racks Today! Inf1 Instances with AWS Inferentia Chips for High Performance Cost-Effective Inferencing. EBS Direct APIs – Programmatic Access to EBS Snapshot Content. AWS Compute Optimizer – Your Customized Resource Optimization Service. Amazon EKS on AWS Fargate Now Generally Available. AWS Fargate Spot Now Generally Available. ECS Cluster Auto Scaling is Now Generally Available. Easily Manage Shared Data Sets with Amazon S3 Access Points. Amazon Redshift Update – Next-Generation Compute Instances and Managed, Analytics-Optimized Storage. Amazon Redshift – Data Lake Export and Federated Queries. Amazon Rekognition Custom Labels. Amazon SageMaker Studio: The First Fully Integrated Development Environment For Machine Learning. Amazon SageMaker Model Monitor – Fully Managed Automatic Monitoring For Your Machine Learning Models. Amazon SageMaker Experiments – Organize, Track And Compare Your Machine Learning Trainings. Amazon SageMaker Debugger – Debug Your Machine Learning Models. Amazon SageMaker Autopilot – Automatically Create High-Quality Machine Learning Models. Now Available on Amazon SageMaker: The Deep Graph Library. Amazon SageMaker Processing – Fully Managed Data Processing and Model Evaluation. Deep Java Library (DJL). AWS Now Available from a Local Zone in Los Angeles. Lambda Provisioned Concurrency. AWS Step Functions Express Workflows: High Performance & Low Cost. AWS Transit Gateway – Build Global Networks and Centralize Monitoring Using Network Manager. AWS Transit Gateway Adds Multicast and Inter-regional Peering. VPC Ingress Routing – Simplifying Integration of Third-Party Appliances. Amazon Chime Meeting Regions. Other Launches Here’s an overview of some launches that did not get a blog post. I’ve linked to the What’s New or product information pages instead: EBS-Optimized Bandwidth Increase – Thanks to improvements to the Nitro system, all newly launched C5/C5d/C5n/C5dn, M5/M5d/M5n/M5dn, R5/R5d/R5n/R5dn, and P3dn instances will support 36% higher EBS-optimized instance bandwidth, up to 19 Gbps. In addition newly launched High Memory instances (6, 9, 12 TB) will also support 19 Gbps of EBS-optimized instance bandwidth, a 36% increase from 14Gbps. For details on each size, read more about Amazon EBS-Optimized Instances. EC2 Capacity Providers – You will have additional control over how your applications use compute capacity within EC2 Auto Scaling Groups and when using AWS Fargate. You get an abstraction layer that lets you make late binding decisions on capacity, including the ability to choose how much Spot capacity that you would like to use. Read the What’s New to learn more. Previews Here’s an overview of the previews that we revealed today, along with links that will let you sign up and/or learn more (most of these were in Andy’s keynote): AWS Wavelength – AWS infrastructure deployments that embed AWS compute and storage services within the telecommunications providers’ datacenters at the edge of the 5G network to provide developers the ability to build applications that serve end-users with single-digit millisecond latencies. You will be able to extend your existing VPC to a Wavelength Zone and then make use of EC2, EBS, ECS, EKS, IAM, CloudFormation, Auto Scaling, and other services. 
This low-latency access to AWS will enable the next generation of mobile gaming, AR/VR, security, and video processing applications. To learn more, visit the AWS Wavelength page. Amazon Managed Apache Cassandra Service (MCS) – This is a scalable, highly available, and managed Apache Cassandra-compatible database service. Amazon Managed Cassandra Service is serverless, so you pay for only the resources you use and the service automatically scales tables up and down in response to application traffic. You can build applications that serve thousands of requests per second with virtually unlimited throughput and storage. To learn more, read New – Amazon Managed Apache Cassandra Service (MCS). Graviton2-Powered EC2 Instances – New Arm-based general purpose, compute-optimized, and memory-optimized EC2 instances powered by the new Graviton2 processor. The instances offer a significant performance benefit over the 5th generation (M5, C5, and R5) instances, and also raise the bar on security. To learn more, read Coming Soon – Graviton2-Powered General Purpose, Compute-Optimized, & Memory-Optimized EC2 Instances. AWS Nitro Enclaves – AWS Nitro Enclaves will let you create isolated compute environments to further protect and securely process highly sensitive data such as personally identifiable information (PII), healthcare, financial, and intellectual property data within your Amazon EC2 instances. Nitro Enclaves uses the same Nitro Hypervisor technology that provides CPU and memory isolation for EC2 instances. To learn more, visit the Nitro Enclaves page. The Nitro Enclaves preview is coming soon and you can sign up now. Amazon Detective – This service will help you to analyze and visualize security data at scale. You will be able to quickly identify the root causes of potential security issues or suspicious activities. It automatically collects log data from your AWS resources and uses machine learning, statistical analysis, and graph theory to build a linked set of data that will accelerate your security investigation. Amazon Detective can scale to process terabytes of log data and trillions of events. Sign up for the Amazon Detective Preview. Amazon Fraud Detector – This service makes it easy for you to identify potential fraud that is associated with online activities. It uses machine learning and incorporates 20 years of fraud detection expertise from AWS and Amazon.com, allowing you to catch fraud faster than ever before. You can create a fraud detection model with a few clicks, and detect fraud related to new accounts, guest checkout, abuse of try-before-you-buy, and (coming soon) online payments. To learn more, visit the Amazon Fraud Detector page. Amazon Kendra – This is a highly accurate and easy to use enterprise search service that is powered by machine learning. It supports natural language queries and will allow users to discover information buried deep within your organization’s vast content stores. Amazon Kendra will include connectors for popular data sources, along with an API to allow data ingestion from other sources. You can access the Kendra Preview from the AWS Management Console. Contact Lens for Amazon Connect – This is a set of analytics capabilities for Amazon Connect that use machine learning to understand sentiment and trends within customer conversations in your contact center. 
Once enabled, specified calls are automatically transcribed using state-of-the-art machine learning techniques, fed through a natural language processing engine to extract sentiment, and indexed for searching. Contact center supervisors and analysts can look for trends, compliance risks, or contacts based on specific words and phrases mentioned in the call to effectively train agents, replicate successful interactions, and identify crucial company and product feedback. Sign up for the Contact Lens for Amazon Connect Preview. Amazon Augmented AI (A2I) – This service will make it easy for you to build workflows that use a human to review low-confidence machine learning predictions. The service includes built-in workflows for common machine learning use cases including content moderation (via Amazon Rekognition) and text extraction (via Amazon Textract), and also allows you to create your own. You can use a pool of reviewers within your own organization, or you can access the workforce of over 500,000 independent contractors who are already performing machine learning tasks through Amazon Mechanical Turk. You can also make use of workforce vendors that are pre-screened by AWS for quality and adherence to security procedures. To learn more, read about Amazon Augmented AI (Amazon A2I), or visit the A2I Console to get started. Amazon CodeGuru – This ML-powered service provides code reviews and application performance recommendations. It helps to find the most expensive (computationally speaking) lines of code, and gives you specific recommendations on how to fix or improve them. It has been trained on best practices learned from millions of code reviews, along with code from thousands of Amazon projects and the top 10,000 open source projects. It can identify resource leaks, data race conditions between concurrent threads, and wasted CPU cycles. To learn more, visit the Amazon CodeGuru page. Amazon RDS Proxy – This is a fully managed database proxy that will help you better scale applications, including those built on modern serverless architectures, without worrying about managing connections and connection pools, while also benefiting from faster failover in the event of a database outage. It is highly available and deployed across multiple AZs, and integrates with IAM and AWS Secrets Manager so that you don’t have to embed your database credentials in your code. Amazon RDS Proxy is fully compatible with MySQL protocol and requires no application change. You will be able to create proxy endpoints and start using them in minutes. To learn more, visit the RDS Proxy page. — Jeff;
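To make one of the launch posts listed above a little more concrete, here is a minimal sketch of Amazon S3 Access Points from the CLI. The account ID, bucket name, and access point name are placeholders, and recent CLI versions accept the access point ARN wherever a bucket name is expected.

```bash
# Create an access point for an existing bucket
# (123456789012, my-shared-bucket, and analytics-ap are placeholder values)
aws s3control create-access-point \
    --account-id 123456789012 \
    --name analytics-ap \
    --bucket my-shared-bucket

# Data-plane calls can then target the access point ARN instead of the bucket
aws s3api get-object \
    --bucket arn:aws:s3:us-east-1:123456789012:accesspoint/analytics-ap \
    --key reports/2019-12.csv report.csv
```

Each access point carries its own policy, so different applications sharing the same data set can each get a narrowly scoped entry point.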

New – AWS Step Functions Express Workflows: High Performance & Low Cost

Amazon Web Services Blog -

We launched AWS Step Functions at re:Invent 2016, and our customers took to the service right away, using them as a core element of their multi-step workflows. Today, we see customers building serverless workflows that orchestrate machine learning training, report generation, order processing, IT automation, and many other multi-step processes. These workflows can run for up to a year, and are built around a workflow model that includes checkpointing, retries for transient failures, and detailed state tracking for auditing purposes. Based on usage and feedback, our customers really like the core Step Functions model. They love the declarative specifications and the ease with which they can build, test, and scale their workflows. In fact, customers like Step Functions so much that they want to use them for high-volume, short-duration use cases such as IoT data ingestion, streaming data processing, and mobile application backends. New Express Workflows Today we are launching Express Workflows as an option to the existing Standard Workflows. The Express Workflows use the same declarative specification model (the Amazon States Language) but are designed for those high-volume, short-duration use cases. Here’s what you need to know: Triggering – You can use events and read/write API calls associated with a long list of AWS services to trigger execution of your Express Workflows. Execution Model – Express Workflows use an at-least-once execution model, and will not attempt to automatically retry any failed steps, but you can use Retry and Catch, as described in Error Handling. The steps are not checkpointed, so per-step status information is not available. Successes and failures are logged to CloudWatch Logs, and you have full control over the logging level. Workflow Steps – Express Workflows support many of the same service integrations as Standard Workflows, with the exception of Activity Tasks. You can initiate long-running services such as AWS Batch, AWS Glue, and Amazon SageMaker, but you cannot wait for them to complete. Duration – Express Workflows can run for up to five minutes of wall-clock time. They can invoke other Express or Standard Workflows, but cannot wait for them to complete. You can also invoke Express Workflows from Standard Workflows, composing both types in order to meet the needs of your application. Event Rate – Express Workflows are designed to support a per-account invocation rate greater than 100,000 events per second. Accounts are configured for 6,000 events per second by default and we will, as usual, raise it on request. Pricing – Standard Workflows are priced based on the number of state transitions. Express Workflows are priced based on the number of invocations and a GB/second charge based on the amount of memory used to track the state of the workflow during execution. While the pricing models are not directly comparable, Express Workflows will be far more cost-effective at scale. To learn more, read about AWS Step Functions Pricing. As you can see, most of what you already know about Standard Workflows also applies to Express Workflows! You can replace some of your Standard Workflows with Express Workflows, and you can use Express Workflows to build new types of applications. Using Express Workflows I can create an Express Workflow and attach it to any desired events with just a few minutes of work. 
I simply choose the Express type in the console: Then I define my state machine: I configure the CloudWatch logging, and add a tag: Now I can attach my Express Workflow to my event source. I open the EventBridge Console and create a new rule: I define a pattern that matches PutObject events on a single S3 bucket: I select my Express Workflow as the event target, add a tag, and click Create: The particular event will occur only if I have a CloudTrail trail that is set up to record object-level activity: Then I upload an image to my bucket, and check the CloudWatch Logs group to confirm that my workflow ran as expected: As a more realistic test, I can upload several hundred images at once and confirm that my Lambda functions are invoked with high concurrency: I can also use the new Monitoring tab in the Step Functions console to view the metrics that are specific to the state machine: Available Now You can create and use AWS Step Functions Express Workflows today in all AWS Regions! — Jeff;
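The console walkthrough above can also be scripted. Here is a minimal CLI sketch, assuming an existing IAM role and an Amazon States Language definition stored in state-machine.json (the names and ARNs are placeholders):

```bash
# Create an Express state machine from an ASL definition file
aws stepfunctions create-state-machine \
    --name my-express-workflow \
    --type EXPRESS \
    --definition file://state-machine.json \
    --role-arn arn:aws:iam::123456789012:role/MyStepFunctionsRole

# Start an execution; per-step status is not checkpointed, so results are
# observed through CloudWatch Logs if logging is enabled on the state machine
aws stepfunctions start-execution \
    --state-machine-arn arn:aws:states:us-east-1:123456789012:stateMachine:my-express-workflow \
    --input '{"bucket": "my-bucket", "key": "image.jpg"}'
```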

New – Provisioned Concurrency for Lambda Functions

Amazon Web Services Blog -

It’s really true that time flies, especially when you don’t have to think about servers: AWS Lambda just turned 5 years old and the team is always looking for new ways to help customers build and run applications in an easier way. As more mission critical applications move to serverless, customers need more control over the performance of their applications. Today we are launching Provisioned Concurrency, a feature that keeps functions initialized and hyper-ready to respond in double-digit milliseconds. This is ideal for implementing interactive services, such as web and mobile backends, latency-sensitive microservices, or synchronous APIs. When you invoke a Lambda function, the invocation is routed to an execution environment to process the request. When a function has not been used for some time, when you need to process more concurrent invocations, or when you update a function, new execution environments are created. The creation of an execution environment takes care of installing the function code and starting the runtime. Depending on the size of your deployment package, and the initialization time of the runtime and of your code, this can introduce latency for the invocations that are routed to a new execution environment. This latency is usually referred to as a “cold start”. For most applications this additional latency is not a problem. For some applications, however, this latency may not be acceptable. When you enable Provisioned Concurrency for a function, the Lambda service will initialize the requested number of execution environments so they can be ready to respond to invocations. Configuring Provisioned Concurrency I create two Lambda functions that use the same Java code and can be triggered by Amazon API Gateway. To simulate a production workload, these functions are repeating some mathematical computation 10 million times in the initialization phase and 200,000 times for each invocation. The computation is using java.Math.Random and conditions (if ...) to avoid compiler optimizations (such as “unlooping” the iterations). Each function has 1GB of memory and the size of the code is 1.7MB. I want to enable Provisioned Concurrency only for one of the two functions, so that I can compare how they react to a similar workload. In the Lambda console, I select one of the functions. In the configuration tab, I see the new Provisioned Concurrency settings. I select Add configuration. Provisioned Concurrency can be enabled for a specific Lambda function version or alias (you can’t use $LATEST). You can have different settings for each version of a function. Using an alias, it is easier to apply these settings to the correct version of your function. In my case I select the alias live that I keep updated to the latest version using the AWS SAM AutoPublishAlias function preference. For the Provisioned Concurrency, I enter 500 and Save. Now, the Provisioned Concurrency configuration is in progress. The execution environments are being prepared to serve concurrent incoming requests based on my input. During this time the function remains available and continues to serve traffic. After a few minutes, the concurrency is ready. With these settings, up to 500 concurrent requests will find an execution environment ready to process them. If I go above that, the usual scaling of Lambda functions still applies. To generate some load, I use an Amazon Elastic Compute Cloud (EC2) instance in the same region. 
To keep it simple, I use the ab tool bundled with the Apache HTTP Server to call the two API endpoints 10,000 times with a concurrency of 500. Since these are new functions, I expect that: For the function with Provisioned Concurrency enabled and set to 500, my requests are managed by pre-initialized execution environments. For the other function, that has Provisioned Concurrency disabled, about 500 execution environments need to be provisioned, adding some latency to the same amount of invocations, about 5% of the total. One cool feature of the ab tool is that it reports the percentage of the requests served within a certain time. That is a very good way to look at API latency, as described in this post on Serverless Latency by Tim Bray. Here are the results for the function with Provisioned Concurrency disabled:
Percentage of the requests served within a certain time (ms)
50% 351
66% 359
75% 383
80% 396
90% 435
95% 1357
98% 1619
99% 1657
100% 1923 (longest request)
Looking at these numbers, I see that 50% of the requests are served within 351ms, 66% of the requests within 359ms, and so on. It’s clear that something happens when I look at 95% or more of the requests: the time suddenly increases by about a second. These are the results for the function with Provisioned Concurrency enabled:
Percentage of the requests served within a certain time (ms)
50% 352
66% 368
75% 382
80% 387
90% 400
95% 415
98% 447
99% 513
100% 593 (longest request)
Let’s compare those numbers in a graph. As expected for my test workload, I see a big difference in the response time of the slowest 5% of the requests (between 95% and 100%), where the function with Provisioned Concurrency disabled shows the latency added by the creation of new execution environments and the (slow) initialization in my function code. In general, the amount of latency added depends on the runtime you use, the size of your code, and the initialization required by your code to be ready for a first invocation. As a result, the added latency can be more, or less, than what I experienced here. The number of invocations affected by this additional latency depends on how often the Lambda service needs to create new execution environments. Usually that happens when the number of concurrent invocations increases beyond what is already provisioned, or when you deploy a new version of a function. A small percentage of slow response times (generally referred to as tail latency) really makes a difference in end user experience. Over an extended period of time, most users are affected during some of their interactions. With Provisioned Concurrency enabled, user experience is much more stable. Provisioned Concurrency is a Lambda feature and works with any trigger. For example, you can use it with WebSockets APIs, GraphQL resolvers, or IoT Rules. This feature gives you more control when building serverless applications that require low latency, such as web and mobile apps, games, or any service that is part of a complex transaction. Available Now Provisioned Concurrency can be configured using the console, the AWS Command Line Interface (CLI), or AWS SDKs for new or existing Lambda functions, and is available today in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (N. 
California), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), and South America (São Paulo). You can use the AWS Serverless Application Model (SAM) and SAM CLI to test, deploy and manage serverless applications that use Provisioned Concurrency. With Application Auto Scaling you can automate configuring the required concurrency for your functions. Target Tracking and Scheduled Scaling policies are supported. Using these policies, you can automatically increase the amount of concurrency during times of high demand and decrease it when the demand decreases. You can also use Provisioned Concurrency today with AWS Partner tools, including configuring Provisioned Concurrency settings with the Serverless Framework and Terraform, or viewing metrics with Datadog, Epsagon, Lumigo, New Relic, SignalFx, SumoLogic, and Thundra. You only pay for the amount of concurrency that you configure and for the period of time that you configure it. Pricing in US East (N. Virginia) is $0.015 per GB-hour for Provisioned Concurrency and $0.035 per GB-hour for Duration. The number of requests is charged at the same rate as normal functions. You can find more information on the Lambda pricing page. This new feature enables developers to use Lambda for a variety of workloads that require highly consistent latency. Let me know what you are going to use it for! — Danilo
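For readers who prefer the CLI to the console walkthrough above, here is a minimal sketch of the same configuration; my-function and the live alias are placeholders matching the setup described in the post.

```bash
# Configure 500 provisioned concurrent executions on the "live" alias
aws lambda put-provisioned-concurrency-config \
    --function-name my-function \
    --qualifier live \
    --provisioned-concurrent-executions 500

# Check progress; the status moves from IN_PROGRESS to READY once the
# execution environments have been initialized
aws lambda get-provisioned-concurrency-config \
    --function-name my-function \
    --qualifier live
```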

AWS ECS Cluster Auto Scaling is Now Generally Available

Amazon Web Services Blog -

Today, we have launched AWS ECS Cluster Auto Scaling. This new capability improves your cluster scaling experience by increasing the speed and reliability of cluster scale-out, giving you control over the amount of spare capacity maintained in your cluster, and automatically managing instance termination on cluster scale-in. To enable ECS Cluster Auto Scaling, you will need to create a new ECS resource type called a Capacity Provider. A Capacity Provider can be associated with an EC2 Auto Scaling Group (ASG). When you associate an ECS Capacity Provider with an ASG and add the Capacity Provider to an ECS cluster, the cluster can now scale your ASG automatically by using two new features of ECS: Managed scaling, with an automatically-created scaling policy on your ASG, and a new scaling metric (Capacity Provider Reservation) that the scaling policy uses; and Managed instance termination protection, which enables container-aware termination of instances in the ASG when scale-in happens. These new features will give customers greater control of when and how Amazon ECS clusters scale in and scale out. Capacity Provider Reservation The new metric, called capacity provider reservation, measures the total percentage of cluster resources needed by all ECS workloads in the cluster, including existing workloads, new workloads, and changes in workload size. This metric enables the scaling policy to scale out more quickly and more reliably than it could when using CPU or memory reservation metrics. Customers can also use this metric to reserve spare capacity in their clusters. Reserving spare capacity allows customers to run more containers immediately if needed, without waiting for new instances to start. Managed Instance Termination Protection With instance termination protection, ECS controls which instances the scaling policy is allowed to terminate on scale-in, to minimize disruptions of running containers. These improvements help customers achieve lower operational costs and higher availability of their container workloads running on ECS. How This Helps Customers Customers running scalable container workloads on ECS often use metric-based scaling policies to automatically scale their ECS clusters. These scaling policies use generic metrics such as average cluster CPU and memory reservation percentages to determine when the policy should add or remove cluster instances. Clusters running a single workload, or workloads that scale out slowly, often work well with such policies. However, customers running multiple workloads in the same cluster, or workloads that scale out rapidly, are more likely to experience problems with cluster scaling. Ideally, increases in workload size that cannot be accommodated by the current cluster should trigger the policy to scale the cluster out to a larger size. Because the existing metrics are not container-specific and account only for resources already in use, this may happen slowly or be unreliable. Furthermore, because the scaling policy does not know where containers are running in the cluster, it can unnecessarily terminate containers when scaling in. These issues can reduce the availability of container workloads. Mitigations such as over-provisioning, custom tooling, or manual intervention often impose high operational costs. Enough Talk, Let’s Scale To understand these new features more clearly, I think it’s helpful to work through an example. Amazon ECS Cluster Auto Scaling can be set up and configured using the AWS Management Console, AWS CLI, or Amazon ECS API. 
I’m going to open up my terminal and create a cluster. Firstly, I create two files. The first file is called demo-launchconfig.json and defines the instance configuration for the Amazon Elastic Compute Cloud (EC2) instances that will make up my auto scaling group. { "LaunchConfigurationName": "demo-launchconfig", "ImageId": "ami-01f07b3fa86406c96", "SecurityGroups": [ "sg-0fa5be8c3749f3aa0" ], "InstanceType": "t2.micro", "BlockDeviceMappings": [ { "DeviceName": "/dev/xvdcz", "Ebs": { "VolumeSize": 22, "VolumeType": "gp2", "DeleteOnTermination": true, "Encrypted": true } } ], "InstanceMonitoring": { "Enabled": false }, "IamInstanceProfile": "arn:aws:iam::365489315573:role/ecsInstanceRole", "AssociatePublicIpAddress": true } The second file is demo-userdata.txt, and it contains the user data that will be added to each EC2 instance. The ECS_CLUSTER name included in the file must be the same as the name of the cluster we are going to create. In my case, the name is demo-news-blog-scale. #!/bin/bash echo ECS_CLUSTER=demo-news-blog-scale >> /etc/ecs/ecs.config Using the create-launch-configuration command, I pass the two files I created as inputs; this will create the launch configuration that I will use in my auto scaling group. aws autoscaling create-launch-configuration --cli-input-json file://demo-launchconfig.json --user-data file://demo-userdata.txt Next, I create a file called demo-asgconfig.json and define my requirements. { "LaunchConfigurationName": "demo-launchconfig", "MinSize": 0, "MaxSize": 100, "DesiredCapacity": 0, "DefaultCooldown": 300, "AvailabilityZones": [ "ap-southeast-1c" ], "HealthCheckType": "EC2", "HealthCheckGracePeriod": 300, "VPCZoneIdentifier": "subnet-abcd1234", "TerminationPolicies": [ "DEFAULT" ], "NewInstancesProtectedFromScaleIn": true, "ServiceLinkedRoleARN": "arn:aws:iam::111122223333:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling" } I then use the create-auto-scaling-group command to create an auto scaling group called demo-asg using the above file as an input. aws autoscaling create-auto-scaling-group --auto-scaling-group-name demo-asg --cli-input-json file://demo-asgconfig.json I am now ready to create a capacity provider. I create a file called demo-capacityprovider.json; importantly, I set the managedTerminationProtection property to ENABLED. { "name": "demo-capacityprovider", "autoScalingGroupProvider": { "autoScalingGroupArn": "arn:aws:autoscaling:ap-southeast-1:365489315573:autoScalingGroup:e9c2f0c4-9a4c-428e-b81e-b22411a52954:autoScalingGroupName/demo-ASG", "managedScaling": { "status": "ENABLED", "targetCapacity": 100, "minimumScalingStepSize": 1, "maximumScalingStepSize": 100 }, "managedTerminationProtection": "ENABLED" } } I then use the new create-capacity-provider command to create a provider using the file as an input. aws ecs create-capacity-provider --cli-input-json file://demo-capacityprovider.json Now that all the components have been created, I can finally create a cluster. I add the capacity provider and set the default capacity provider for the cluster to demo-capacityprovider. aws ecs create-cluster --cluster-name demo-news-blog-scale --capacity-providers demo-capacityprovider --default-capacity-provider-strategy capacityProvider=demo-capacityprovider,weight=1 I now need to wait until the cluster has moved into the active state. I use the following command to get details about the cluster. 
I use the following command to get details about the cluster.

aws ecs describe-clusters --clusters demo-news-blog-scale --include ATTACHMENTS

Now that my cluster is set up, I can register some tasks. First, I need to create a task definition. Below is a file I have created called demo-sleep-taskdef.json. All this definition does is define a container that sleeps indefinitely.

{
    "family": "demo-sleep-taskdef",
    "containerDefinitions": [
        {
            "name": "sleep",
            "image": "amazonlinux:2",
            "memory": 20,
            "essential": true,
            "command": [ "sh", "-c", "sleep infinity" ]
        }
    ],
    "requiresCompatibilities": [ "EC2" ]
}

I then register the task definition using the register-task-definition command.

aws ecs register-task-definition --cli-input-json file://demo-sleep-taskdef.json

Finally, I can create my tasks. In this case, I create 5 tasks based on the demo-sleep-taskdef:1 definition that I just registered.

aws ecs run-task --cluster demo-news-blog-scale --count 5 --task-definition demo-sleep-taskdef:1

Because no instances are yet available to run the tasks, they go into a provisioning state, which means they are waiting for capacity to become available. The capacity provider I configured will now scale out the auto scaling group so that instances start up and join the cluster, at which point the tasks get placed on the instances. This gives a true “scale from zero” capability, which did not previously exist.

Things To Know
AWS ECS Cluster Auto Scaling is now available in all regions where Amazon ECS and AWS Auto Scaling are available – check the region table for the latest list. Happy Scaling!

— Martin
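One way to watch the scale-out and task placement happen (a small addition to the walkthrough above, using the same cluster name; the task ARN placeholder must be filled in from the list-tasks output):

# Tasks start in PROVISIONING while the capacity provider scales the ASG out.
aws ecs list-tasks --cluster demo-news-blog-scale
aws ecs describe-tasks --cluster demo-news-blog-scale --tasks <task-arn> --query 'tasks[].lastStatus'

# Once instances have started and joined the cluster, they show up here and the tasks move to RUNNING.
aws ecs list-container-instances --cluster demo-news-blog-scale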

AWS Fargate Spot Now Generally Available

Amazon Web Services Blog -

Today at AWS re:Invent 2019 we announced AWS Fargate Spot. Fargate Spot is a new capability on AWS Fargate that can run interruption-tolerant Amazon Elastic Container Service (Amazon ECS) tasks at up to a 70% discount off the Fargate price. If you are familiar with EC2 Spot Instances, the concept is the same: we use spare capacity in the AWS cloud to run your tasks. When capacity for Fargate Spot is available, you will be able to launch tasks based on your specified request. When AWS needs the capacity back, tasks running on Fargate Spot will be interrupted with two minutes of notice. If the capacity for Fargate Spot stops being available, Fargate will scale down tasks running on Fargate Spot while maintaining any regular tasks you are running.

As your tasks could be interrupted, you should not run tasks on Fargate Spot that cannot tolerate interruptions. However, for your fault-tolerant workloads, this feature enables you to optimize your costs. The service is an obvious fit for parallelizable workloads like image rendering, Monte Carlo simulations, and genomic processing. However, customers can also use Fargate Spot for tasks that run as part of ECS services, such as websites and APIs, which require high availability. When configuring your Service Auto Scaling policy, you can specify the minimum number of regular tasks that should run at all times and then add tasks running on Fargate Spot to improve service performance in a cost-efficient way. When capacity for Fargate Spot is available, the scheduler will launch tasks to meet your request. If the capacity for Fargate Spot stops being available, Fargate Spot will scale down while maintaining the minimum number of regular tasks to ensure the application’s availability.

So let us take a look at how we can get started using AWS Fargate Spot. First, I create a new Fargate cluster inside the ECS console: I choose Networking only and follow the wizard to complete the process. Once my cluster is created, I need to choose a capacity provider; by default, my cluster has two capacity providers, FARGATE and FARGATE_SPOT. To use the FARGATE_SPOT capacity provider, I set it as the cluster’s default: I press the Update Cluster button, select FARGATE_SPOT as the default capacity provider, and click Update.

I then run a task in the cluster in the usual way. I select my task definition and enter that I want 10 tasks. Then, after configuring the VPC and security groups, I click Run Task. Now the 10 tasks run, but rather than using regular Fargate infrastructure, they use Fargate Spot. If I peek inside one of the tasks, I can verify that the task is indeed using the FARGATE_SPOT capacity provider.

So that’s how you get started with Fargate Spot; you can try it yourself right now. A few weeks ago we saw the release of Compute Savings Plans (of which Fargate is a part), and now with Fargate Spot, customers can save a great deal of money and run many different types of applications; there has never been a better time to be using Fargate. AWS Fargate Spot is available in all regions where AWS Fargate is available, so you can try it yourself today. — Martin
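For readers who prefer the CLI to the console steps above, here is a minimal sketch of running tasks on Fargate Spot. This is not part of the original post; the cluster name, task definition, subnet, and security group IDs are all hypothetical placeholders.

# Create a cluster with both Fargate capacity providers, defaulting to Fargate Spot.
aws ecs create-cluster --cluster-name demo-fargate \
    --capacity-providers FARGATE FARGATE_SPOT \
    --default-capacity-provider-strategy capacityProvider=FARGATE_SPOT,weight=1

# Run 10 tasks on Fargate Spot in the usual awsvpc networking mode (IDs are placeholders).
aws ecs run-task --cluster demo-fargate \
    --task-definition web-app:1 --count 10 \
    --network-configuration 'awsvpcConfiguration={subnets=[subnet-abcd1234],securityGroups=[sg-abcd1234],assignPublicIp=ENABLED}' \
    --capacity-provider-strategy capacityProvider=FARGATE_SPOT,weight=1

# For a service that must always keep some regular tasks, a mixed strategy on create-service,
# e.g. capacityProvider=FARGATE,base=2,weight=0 capacityProvider=FARGATE_SPOT,weight=1,
# keeps 2 tasks on regular Fargate and places the rest on Fargate Spot.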

New – EBS Direct APIs – Programmatic Access to EBS Snapshot Content

Amazon Web Services Blog -

EBS Snapshots are really cool! You can create them interactively from the AWS Management Console, create them from the command line (create-snapshot) or by making a call to the CreateSnapshot function, and use the Data Lifecycle Manager (DLM) to set up automated snapshot management.

All About Snapshots
The snapshots are stored in Amazon Simple Storage Service (S3), and can be used to quickly create fresh EBS volumes as needed. The first snapshot of a volume contains a copy of every 512K block on the volume. Subsequent snapshots contain only the blocks that have changed since the previous snapshot. The incremental nature of the snapshots makes them very cost-effective, since (statistically speaking) many of the blocks on an EBS volume do not change all that often.

Let’s look at a quick example. Suppose that I create and format an EBS volume with 8 blocks (this is smaller than the allowable minimum size, but bear with me), copy some files to it, and then create my first snapshot (Snap1). This snapshot contains all of the blocks. Then I add a few more files, delete one, and create my second snapshot (Snap2). This snapshot contains only the blocks that were modified after I created the first one. I make a few more changes, and create a third snapshot (Snap3). Keep in mind that the relationship between directories, files, and the underlying blocks is controlled by the file system, and is generally quite complex in real-world situations.

Ok, so now I have three snapshots, and want to use them to create a new volume. Each time I create a snapshot of an EBS volume, an internal reference to the previous snapshot is created. This allows CreateVolume to find the most recent copy of each block. EBS manages all of the details for me behind the scenes. For example, if I delete Snap2, the copy of Block 0 in that snapshot is also deleted since the copy in Snap3 is newer, but the copy of Block 4 in Snap2 becomes part of Snap3. By the way, the chain of backward references (Snap3 to Snap1, or Snap3 to Snap2 to Snap1) is referred to as the lineage of the set of snapshots. Now that I have explained all this, I should also tell you that you generally don’t need to know this, and can focus on creating, using, and deleting snapshots! However…

Access to Snapshot Content
Today we are introducing EBS direct APIs that provide you with access to the snapshot content, as described above. These APIs are designed for developers of backup/recovery, disaster recovery, and data management products & services, and will allow them to make their offerings faster and more cost-effective. The new APIs use a block index (0, 1, 2, and so forth) to identify a particular 512K block within a snapshot. The index is returned in the form of an encrypted token, which is meaningful only to the GetSnapshotBlock API. I have represented these tokens as T0, T1, and so forth below. The APIs currently work on blocks of 512K bytes, with plans to support more block sizes in the future.

Here are the APIs:

ListSnapshotBlocks – Identifies all of the blocks in a given snapshot as encrypted tokens. For Snap1, it would return [T0, T1, T2, T3, T4, T5, T6, T7] and for Snap2 it would return [T0, T4].

GetSnapshotBlock – Returns the content of a block. If the block is part of an encrypted snapshot, it will be returned in decrypted form.

ListChangedBlocks – Returns the list of blocks that have changed between two snapshots in a lineage, again as encrypted tokens. For (Snap1, Snap2) it would return [T0, T4], and for (Snap2, Snap3) it would return [T0, T5].
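As a concrete illustration of how a backup tool might use these calls, here is a minimal AWS CLI sketch (assuming a recent AWS CLI; the snapshot IDs are hypothetical, and the ebs commands simply mirror the three APIs above):

# List every block in the first (full) snapshot.
aws ebs list-snapshot-blocks --snapshot-id snap-1111aaaa

# List only the blocks that changed between two snapshots in the same lineage.
aws ebs list-changed-blocks --first-snapshot-id snap-1111aaaa --second-snapshot-id snap-2222bbbb

# Fetch the content of one changed block, using the block index and token returned
# by the previous call, and write it to a local file.
aws ebs get-snapshot-block --snapshot-id snap-2222bbbb --block-index 0 --block-token EXAMPLETOKEN block0.bin

An incremental backup tool can loop over the ListChangedBlocks results and call GetSnapshotBlock only for those blocks, instead of copying the whole volume.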
Like I said, these APIs were built to address one specialized yet very important use case. Having said that, I am now highly confident that new and unexpected ones will pop up within 48 hours (feel free to share them with me)!

Available Now
The EBS direct APIs are available now and you can start using them today in the US East (N. Virginia), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Singapore), and Asia Pacific (Tokyo) Regions; they will become available in the remaining regions in the next few weeks. There is a charge for calls to the List and Get APIs, and the usual KMS charges will apply when you call GetSnapshotBlock to access a block that is part of an encrypted snapshot. — Jeff;
