Amazon Web Services Blog

New Desktop Client for AWS Client VPN

We launched AWS Client VPN last year so that you could use your OpenVPN-based clients to securely access your AWS and on-premises networks from anywhere (read Introducing AWS Client VPN to Securely Access AWS and On-Premises Resources to learn more). As a refresher, this is a fully managed, elastic VPN service that scales the number of connections up and down according to demand. It allows you to provide easy connectivity to your workforce and your business partners, along with the ability to monitor and manage all of the connections from one console. You can create Client VPN endpoints, associate them with the desired VPC subnets, and set up authorization rules to enable your users to access the desired cloud resources.

New Desktop Client for AWS Client VPN

Today we are making it even easier for you to connect your Windows and macOS clients to AWS, with the launch of the desktop client by AWS. These applications can be installed on your desktop or laptop, and support mutual authentication, username/password via Active Directory, and the use of Multi-Factor Authentication (MFA). After you use the client to establish a VPN connection, the desktop or laptop is effectively part of the configured VPC, and can access resources as allowed by the authorization rules.

The client applications are available at no charge, and can be used to establish connections to any AWS Region where you have an AWS Client VPN endpoint. You can currently create these endpoints in the US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), Europe (Ireland), Europe (London), Europe (Frankfurt), Europe (Stockholm), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Seoul), Asia Pacific (Sydney), and Asia Pacific (Tokyo) Regions.

— Jeff;
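As a sketch of the endpoint-creation step described above, the snippet below assembles the request parameters for EC2's CreateClientVpnEndpoint API. The certificate ARN and directory ID are illustrative placeholders; with boto3 you would pass the resulting dict to ec2.create_client_vpn_endpoint(**params).

```python
def client_vpn_endpoint_params(client_cidr, server_cert_arn, directory_id):
    """Build the request for EC2's CreateClientVpnEndpoint API.

    The ARN and directory ID are placeholder values; with boto3 you
    would call ec2.create_client_vpn_endpoint(**params).
    """
    return {
        # CIDR range from which client IP addresses are assigned
        "ClientCidrBlock": client_cidr,
        # ACM certificate presented by the VPN endpoint
        "ServerCertificateArn": server_cert_arn,
        # Username/password authentication via Active Directory
        "AuthenticationOptions": [{
            "Type": "directory-service-authentication",
            "ActiveDirectory": {"DirectoryId": directory_id},
        }],
        "ConnectionLogOptions": {"Enabled": False},
    }

params = client_vpn_endpoint_params(
    "10.100.0.0/16",
    "arn:aws:acm:us-east-1:123456789012:certificate/example",
    "d-1234567890",
)
```

After creating the endpoint, you would still associate it with the desired VPC subnets and add authorization rules, as described above, before users can connect.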

Update on Amazon Linux AMI end-of-life

Launched in September 2010, the Amazon Linux AMI has helped numerous customers build Linux-based applications on Amazon Elastic Compute Cloud (EC2). In order to bring them even more security, stability, and productivity, we introduced Amazon Linux 2 in 2017. Adding many modern features, Amazon Linux 2 is backed by long-term support, and we strongly encourage you to use it for your new applications. As stated in the FAQ, we documented that the last version of the Amazon Linux AMI (2018.03) would reach end of life on June 30, 2020. Based on customer feedback, we are extending the end-of-life date, and we're also announcing a maintenance support period.

End-of-life Extension

The end of life for the Amazon Linux AMI is now extended to December 31, 2020: until then, we will continue to provide security updates and refreshed versions of packages as needed.

Maintenance Support

Beyond December 31, 2020, the Amazon Linux AMI will enter a new maintenance support period that extends to June 30, 2023. During this maintenance support period:

- The Amazon Linux AMI will only receive critical and important security updates for a reduced set of packages.
- It will no longer be guaranteed to support new EC2 platform capabilities or new AWS features.

Supported packages will include:

- The Linux kernel
- Low-level system libraries such as glibc and openssl
- Popular packages that are still in a supported state in their upstream sources, such as MySQL and PHP

We will provide a detailed list of supported and unsupported packages in future posts.

Questions?

If you need assistance or have feedback, please reach out to your usual AWS support contacts, or post a message in the AWS Forum for Amazon Linux. Thank you for using Amazon Linux AMI!

- Julien
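The timeline above can be captured in a few lines of Python; a minimal sketch, where the phase labels are my own shorthand rather than official AWS terminology:

```python
from datetime import date

# Dates from the announcement
END_OF_LIFE = date(2020, 12, 31)      # security updates and package refreshes until here
MAINTENANCE_END = date(2023, 6, 30)   # critical/important fixes for a reduced package set

def support_phase(day):
    """Return which Amazon Linux AMI support phase a given date falls in."""
    if day <= END_OF_LIFE:
        return "full updates"
    if day <= MAINTENANCE_END:
        return "maintenance support"
    return "unsupported"
```

A check like this can be useful in fleet-audit scripts that flag instances still running the Amazon Linux AMI past a given phase.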

AWS DataSync Update – Support for Amazon FSx for Windows File Server

AWS DataSync helps you to move large amounts of data into and out of the AWS Cloud. As I noted in New – AWS DataSync – Automated and Accelerated Data Transfer, our customers use DataSync for their large-scale migration, upload & process, archiving, and backup/DR use cases.

Amazon FSx for Windows File Server gives you network file storage that is fully compatible with your existing Windows applications and environments (read New – Amazon FSx for Windows File Server – Fast, Fully Managed, and Secure to learn more). It includes a very wide variety of enterprise-ready features including native multi-AZ file systems, support for SQL Server, data deduplication, quotas, and the ability to force the use of in-transit encryption. Our customers use Amazon FSx for Windows File Server to lift-and-shift their Windows workloads to the cloud, where they can benefit from consistent sub-millisecond performance and high throughput.

Inside AWS DataSync

The DataSync agent is deployed as a VM within your existing on-premises or cloud-based environment so that it can access your NAS or file system via NFS or SMB. The agent uses a robust, highly optimized data transfer protocol to move data back and forth at up to 10 times the speed of open source data transfer solutions. DataSync can be used for a one-time migration-style transfer, or it can be invoked on a periodic, incremental basis for upload & process, archiving, and backup/DR purposes. Our customers use DataSync for transfer operations that encompass hundreds of terabytes of data and millions of files. Since the launch of DataSync in November 2018, we have made several important updates and changes to DataSync, including:

- 68% Price Reduction – We reduced the data transfer charge to $0.0125 per gigabyte.
- Task Scheduling – We gave you the ability to schedule data transfer tasks using the AWS Management Console or the AWS Command Line Interface (CLI), with hourly, daily, and weekly options.
- Additional Region Support – We recently made DataSync available in the Europe (Stockholm), South America (São Paulo), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), and AWS GovCloud (US-East) Regions, bringing the total list of supported regions to 20.
- EFS-to-EFS Transfer – We added support for file transfer between a pair of Amazon Elastic File System (EFS) file systems.
- Filtering for Data Transfers – We gave you the ability to use file path and object key filters to control the data transfer operation.
- SMB File Share Support – We added support for file transfer between a pair of SMB file shares.
- S3 Storage Class Support – We gave you the ability to choose the S3 Storage Class when transferring data to an S3 bucket.

FSx for Windows Support

Today I am happy to announce that we are giving you the ability to use DataSync to transfer data to and from Amazon FSx for Windows File Server file systems. You can configure these file systems as DataSync Locations and then reference them in your DataSync Tasks. After I choose the desired FSx for Windows file system, I supply a username and password, and enter the name of the Windows domain for authentication. Then I create a task that uses one of my existing SMB shares as a source, and the FSx for Windows file system as a destination.
I give my task a name (MyTask) and configure any desired options. I can set up filtering and use a schedule, with many scheduling options to choose from. If I don't use a schedule, I can simply click Start to run my task on an as-needed basis. When I do this, I have the opportunity to review and refine the settings for the task. The task starts within seconds, and I can watch the data transfer and throughput metrics in the console.

In addition to the console-based access that I just showed you, you can also use the DataSync API and the DataSync CLI to create tasks (CreateTask), start them (StartTaskExecution), check on task status (DescribeTaskExecution), and much more.

Available Now

This important new feature is available now and you can start using it today!

— Jeff;
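The API workflow mentioned above (CreateTask, StartTaskExecution, DescribeTaskExecution) can be sketched in Python. To keep the snippet runnable without AWS credentials, the polling loop takes an injected describe function; with boto3 you would pass datasync.describe_task_execution instead. The status names follow the DataSync API's SUCCESS/ERROR convention, but treat the details as illustrative:

```python
def wait_for_task_execution(execution_arn, describe, max_polls=10):
    """Poll a DataSync task execution until it reaches a terminal status.

    `describe` is any callable that mimics the response shape of boto3's
    datasync.describe_task_execution(TaskExecutionArn=...). Real code
    would sleep between polls.
    """
    for _ in range(max_polls):
        status = describe(TaskExecutionArn=execution_arn)["Status"]
        if status in ("SUCCESS", "ERROR"):
            return status
    raise TimeoutError("task execution did not finish in time")

# Stand-in for the real API: reports LAUNCHING twice, then SUCCESS.
_responses = iter(["LAUNCHING", "LAUNCHING", "SUCCESS"])

def fake_describe(TaskExecutionArn):
    return {"Status": next(_responses)}

result = wait_for_task_execution(
    "arn:aws:datasync:us-east-1:123456789012:task/task-example/execution/exec-example",
    fake_describe,
)
```

Injecting the describe callable keeps the control flow testable; swapping in the real boto3 client is a one-line change.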

New – T3 Instances on Dedicated Single-Tenant Hardware

T3 instances use a burst pricing model that allows you to host general purpose workloads at low cost, with access to sustainable, full-core performance when needed. You can choose from seven different sizes and receive an assured baseline amount of processing power, courtesy of custom high frequency Intel® Xeon® Scalable Processors. Our customers use them to host many different types of production and development workloads including microservices, small and medium databases, and virtual desktops. Some of our customers launch large fleets of T3 instances and use them to test applications in a wide range of conditions, environments, and configurations.

We launched the first EC2 Dedicated Instances way back in 2011. Dedicated Instances run on single-tenant hardware, providing physical isolation from instances that belong to other AWS accounts. Our customers use Dedicated Instances to further their compliance goals (PCI, SOX, FISMA, and so forth), and also use them to run software that is subject to license or tenancy restrictions.

Dedicated T3

Today I am pleased to announce that we are now making all seven sizes (t3.nano through t3.2xlarge) of T3 instances available in dedicated form, in 14 regions. You can now save money by using T3 instances to run workloads that require the use of dedicated hardware, while benefiting from access to the AVX-512 instructions and other advanced features of the latest generation of Intel® Xeon® Scalable Processors.

Just like the existing T3 instances, the dedicated T3 instances are powered by the Nitro system, and launch with Unlimited bursting enabled. They use ENA networking and offer up to 5 Gbps of network bandwidth. You can launch dedicated T3 instances using the EC2 API, the AWS Management Console, or the AWS Command Line Interface (CLI):

$ aws ec2 run-instances --placement Tenancy=dedicated ...

You can also use a CloudFormation template (set tenancy to dedicated in your Launch Template).
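In boto3, the CLI command above corresponds to setting the Placement tenancy on RunInstances; a minimal sketch, where the AMI ID is a placeholder:

```python
def dedicated_t3_params(ami_id, size="t3.micro"):
    """Request parameters for launching a T3 instance on dedicated,
    single-tenant hardware; pass to boto3's ec2.run_instances(**params)."""
    return {
        "ImageId": ami_id,          # placeholder AMI ID
        "InstanceType": size,
        "MinCount": 1,
        "MaxCount": 1,
        # The only change versus a default launch: dedicated tenancy
        "Placement": {"Tenancy": "dedicated"},
    }

params = dedicated_t3_params("ami-0123456789abcdef0", "t3.large")
```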
Now Available

Dedicated T3 instances are available in the US East (N. Virginia), US East (Ohio), US West (N. California), South America (São Paulo), Canada (Central), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Europe (London), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Mumbai), and Asia Pacific (Seoul) Regions. You can purchase the instances in On-Demand or Reserved Instance form. There is an additional fee of $2 per hour when at least one Dedicated Instance of any type is running in a region, and $0.05 per hour when you burst above the baseline performance for an extended period of time.

— Jeff;
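To make the fee structure above concrete, here is a back-of-the-envelope estimator. It is a sketch only: it ignores Reserved Instance discounts and treats the burst charge as a simple fraction of the hour spent above baseline.

```python
def dedicated_fleet_hourly_cost(instance_rates, burst_hours=0.0):
    """Estimate the hourly cost of a fleet of dedicated T3 instances.

    instance_rates: per-instance On-Demand hourly rates (USD).
    burst_hours: hours spent above baseline performance, billed at
                 $0.05/hour for extended bursts.
    The flat $2/hour fee applies once per region while at least one
    Dedicated Instance of any type is running.
    """
    region_fee = 2.00 if instance_rates else 0.0
    return sum(instance_rates) + region_fee + 0.05 * burst_hours
```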

Amazon EKS Price Reduction

Since it launched 18 months ago, Amazon Elastic Kubernetes Service has delivered a staggering 62 features, expanded to 14 regions, and added support for 4 Kubernetes versions. While developers, like me, are loving the speed of innovation and the incredible new features, today we have an announcement that is going to bring a smile to the people in your finance department: we are reducing the price by 50%. As of January 21, the price drops from $0.20 per hour for each Amazon EKS cluster to $0.10 per hour. This new price applies to all new and existing Amazon EKS clusters.

Incredible Momentum

Last year, I wrote about a few of those 62 Amazon EKS features, such as Amazon EKS on AWS Fargate, EKS Windows Containers support, and Managed Node Groups for Amazon Elastic Kubernetes Service. It has been a pleasure to hear customers in the comments, in meetings, and at events tell me that features like these are enabling them to run different kinds of applications more reliably and more efficiently than ever before. I have also enjoyed watching customer feedback come in via the public containers roadmap and seeing the Amazon EKS team deliver requested features at a constant rate.

Customers are Flourishing on Amazon Elastic Kubernetes Service

Amazon EKS is used by big and small customers to run everything from simple websites to mission-critical systems and large-scale machine learning jobs. Below are three examples from the many customers that are seeing tremendous value from Amazon EKS.

Snap runs 100% on Kubernetes in the cloud and, in the last year, moved multiple parts of their app, including the core messaging architecture, to Amazon EKS as part of their move from a monolithic service-oriented architecture to microservices. In their words, “Undifferentiated Heavy Lifting is work that we have to do that doesn’t directly benefit our customers. It’s just work.
Amazon EKS frees us up to worry about delivering customer value and allows developers without operational experience to innovate without having to know where their code runs.” You can learn more about Snap’s journey in this video recorded at the AWS New York Summit.

HSBC sees Amazon EKS as a crucial component of its infrastructure strategy and a major factor in its migration of workloads to the cloud. They joined us on stage at AWS re:Invent 2019 to talk about why they bank on Amazon EKS.

Advalo is a predictive marketing platform company, reaching customers during the most influential moments in their purchase decision. Edouard Devouge, Lead SRE at Advalo, says: “We are running our applications on Amazon EKS, launching up to 2,000 nodes per day and running up to 75,000 pods for microservices and Machine Learning apps, allowing us to detect purchase intent through individualized Marketing in the website and shops of our customers.”

With today’s announcement, all the benefits that these customers describe are now available at a great new price, ensuring that AWS remains the best place in the world to run your Kubernetes clusters.

Amazon Elastic Kubernetes Service Resources

Here are some resources to help you to learn how to make great use of Amazon EKS in your organization:

- Deploy a Kubernetes Application (tutorial)
- Amazon EKS Microservices (workshop)
- Developer Guide (documentation)
- Amazon Elastic Kubernetes Service Customer Case Studies

Effective Immediately

The 50% price reduction is available in all regions effective immediately, and you do not have to do anything to take advantage of the new price. From today onwards, you will be charged the new lower price for Amazon Elastic Kubernetes Service. So sit back, relax, and enjoy the savings.

— Martin
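To put the new cluster price in context, here is a quick sketch of the cluster-hour math, using a 730-hour month as an approximation:

```python
OLD_RATE = 0.20  # USD per cluster-hour before the reduction
NEW_RATE = 0.10  # USD per cluster-hour as of January 21

def monthly_eks_control_plane_cost(clusters, rate, hours=730):
    """Approximate monthly EKS cluster charge; worker nodes and
    Fargate usage are billed separately."""
    return clusters * rate * hours

# Monthly savings for a fleet of 10 clusters at the new price.
savings = (monthly_eks_control_plane_cost(10, OLD_RATE)
           - monthly_eks_control_plane_cost(10, NEW_RATE))
```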

CloudEndure Highly Automated Disaster Recovery – 80% Price Reduction

AWS acquired CloudEndure last year, and after the acquisition we began working with our new colleagues to integrate their products into the AWS product portfolio.

CloudEndure Disaster Recovery is designed to help you minimize downtime and data loss. It continuously replicates the contents of your on-premises, virtual, or cloud-based systems to a low-cost staging area in the AWS Region of your choice, within the confines of your AWS account. The block-level replication encompasses essentially every aspect of the protected system including the operating system, configuration files, databases, applications, and data files. CloudEndure Disaster Recovery can replicate any database or application that runs on supported versions of Linux or Windows, and is commonly used with Oracle and SQL Server, as well as enterprise applications such as SAP. If you do an AWS-to-AWS replication, the AWS environment within a specified VPC is replicated; this includes the VPC itself, subnets, security groups, routes, ACLs, Internet Gateways, and other items.

Here are some of the most popular and interesting use cases for CloudEndure Disaster Recovery:

- On-Premises to Cloud Disaster Recovery – This model moves your secondary data center to the AWS Cloud without downtime or performance impact. You can improve your reliability, availability, and security without having to invest in duplicate hardware, networking, or software.
- Cross-Region Disaster Recovery – If your application is already on AWS, you can add an additional layer of cost-effective protection and improve your business continuity by setting up cross-region disaster recovery. You can set up continuous replication between regions or Availability Zones and meet stringent RPO (Recovery Point Objective) and RTO (Recovery Time Objective) requirements.
- Cross-Cloud Disaster Recovery – If you run workloads on other clouds, you can increase your overall resilience and meet compliance requirements by using AWS as your DR site.
CloudEndure Disaster Recovery will replicate and recover your workloads, including automatic conversion of your source machines so that they boot and run natively on AWS.

80% Price Reduction

Recovery is quick and robust, yet cost-effective. In fact, we are reducing the price for CloudEndure Disaster Recovery by about 80% today, making it more cost-effective than ever: $0.028 per hour, or about $20 per month per server. If you have tried to implement a DR solution in the traditional way, you know that it requires a costly set of duplicate IT resources (storage, compute, and networking) and software licenses. By replicating your workloads into a low-cost staging area in your preferred AWS Region, CloudEndure Disaster Recovery reduces compute costs by 95% and eliminates the need to pay for duplicate OS and third-party application licenses.

To learn more, watch the Disaster Recovery to AWS Demo Video. After that, be sure to visit the new CloudEndure Disaster Recovery page!

— Jeff;
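The arithmetic behind "about $20 per month per server" is straightforward (730-hour month, continuous replication); a minimal sketch:

```python
HOURLY_RATE = 0.028  # USD per server per hour after the ~80% reduction

def cloudendure_dr_monthly_cost(servers, hours=730):
    """Approximate monthly CloudEndure Disaster Recovery charge.
    The AWS resources in the staging area (EBS, EC2, and so on)
    are billed separately."""
    return servers * HOURLY_RATE * hours
```

One server works out to roughly $20.44 for a 730-hour month, which rounds to the "about $20" figure above.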

In the Works – AWS Osaka Local Region Expansion to Full Region

Today, we are excited to announce that, due to high customer demand for additional services in Osaka, the Osaka Local Region will be expanded into a full AWS Region with three Availability Zones by early 2021. Like all AWS Regions, each Availability Zone will be isolated with its own power source, cooling system, and physical security, and be located far enough apart to significantly reduce the risk of a single event impacting availability, yet near enough to provide low latency for high availability applications. We are constantly expanding our infrastructure to provide customers with sufficient capacity to grow and the necessary tools to architect a variety of system designs for higher availability and robustness. AWS now operates 22 Regions and 69 Availability Zones globally.

In March 2011, we launched the AWS Tokyo Region as our fifth AWS Region, with two Availability Zones. We launched a third Tokyo Availability Zone in 2012 and a fourth in 2018. In February 2018, we launched the Osaka Local Region as a new region construct that comprises an isolated, fault-tolerant infrastructure design contained in a single data center and complements an existing AWS Region. Located 400 km from the Tokyo Region, the Osaka Local Region has supported customers with applications that require in-country geographic diversity for disaster recovery purposes that could not be served with the Tokyo Region alone.

Osaka Local Region in the Future

When launched, the Osaka Region will provide the same broad range of services as other AWS Regions and will be available to all AWS customers. Customers will be able to deploy multi-region systems within Japan, and users located in western Japan will enjoy even lower latency than what they have today.
If you are interested in how AWS Global Infrastructure is designed and built to deliver the most flexible, reliable, scalable, and secure cloud computing environment with the highest global network performance, check out our Global Infrastructure site, which explains and visualizes it all.

Stay Tuned

I'll be sure to share additional news about this and other upcoming AWS Regions as soon as I have it, so stay tuned! We are working on 4 more Regions (Indonesia, Italy, South Africa, and Spain), and 13 more Availability Zones globally.

– Kame, Sr. Product Marketing Manager / Sr. Evangelist, Amazon Web Services Japan

AWS Backup: EC2 Instances, EFS Single File Restore, and Cross-Region Backup

Since we launched AWS Backup last year, over 20,000 AWS customers have used it to protect petabytes of data every day. AWS Backup is a fully managed, centralized backup service that simplifies the management of backups for your Amazon Elastic Block Store (EBS) volumes, your databases (Amazon Relational Database Service (RDS) or Amazon DynamoDB), AWS Storage Gateway, and your Amazon Elastic File System (EFS) file systems.

We continuously listen to your feedback, and today we are bringing additional enterprise data capabilities to AWS Backup: you can now back up entire Amazon Elastic Compute Cloud (EC2) instances, copy your backups to other AWS Regions, and restore a single file from your Elastic File System file system instead of the full file system. Here are the details.

EC2 Instance Backup

Backing up and restoring an EC2 instance requires more than protecting just the instance's individual EBS volumes. To restore an instance, you need to restore all EBS volumes but also recreate an identical instance: instance type, VPC, Security Group, IAM role, etc. Today, we are adding the ability to perform backup and recovery tasks on whole EC2 instances. When you back up an EC2 instance, AWS Backup will protect all EBS volumes attached to the instance, and it will attach them to an AMI that stores all parameters from the original EC2 instance except for two (Elastic Inference Accelerator and user data script). Once the backup is complete, you can easily restore the full instance using the console, API, or AWS Command Line Interface (CLI). You will be able to restore and edit all parameters using the API or CLI; in the console, you will be able to restore and edit 16 parameters from your original EC2 instance.

To get started, open the Backup console and select either a backup plan or an on-demand backup. For this example, I chose On-Demand backup.
I select EC2 from the list of services and select the ID of the instance I want to back up. Note that you need to stop write activity and flush file system caches if you're using RAID volumes or any other technique to group your volumes. After a while, I see the backup available in my vault. To restore the backup, I select the backup and click Restore. Before actually starting the restore, I can see the EC2 configuration options that have been backed up, and I have the opportunity to modify any value listed before re-creating the instance. After a few seconds, my restored instance starts and is available in the EC2 console.

Single File Restore for EFS

AWS Backup customers often need to restore an accidentally deleted or corrupted file or folder. Before today, you would need to perform a full restore of the entire file system, which makes it difficult to meet strict RTO objectives. Starting today, you can restore a single file or directory from your Elastic File System file system. You select the backup, type the relative path of the file or directory to restore, and AWS Backup will create a new Elastic File System recovery directory at the root of your file system, preserving the original path hierarchy. You can restore your files to an existing file system or to a new file system.

To restore a single file from an Elastic File System backup, I choose the backup from the vault and I click Restore. On the Restore backup window, I choose between restoring the full file system or individual items. I enter the path relative to the root of the file system (not including the mount point) for the files and directories I want to restore. I also choose whether I want to restore the items to the existing file system or to a new file system. Finally, I click Restore backup to start the restore job.

Cross-Region Backup

Many enterprise AWS customers have strict business continuity policies requiring a minimum distance between two copies of their backup.
To help enterprises meet this requirement, we're adding the capability to copy a backup to another Region, either on demand when you need it or automatically, as part of a backup plan. To initiate an on-demand copy of my backup to another Region, I use the console to browse my vaults, select the backup I want to copy, and click Copy. I choose the destination Region and the destination vault, and keep the default values for the other options. I click Copy at the bottom of the page. The time to make the copy depends on the size of the backup. I monitor the status on the new Copy Jobs tab of the Jobs section. Once the copy is finished, I switch my console to the target Region, see the backup in the target vault, and can initiate a restore operation, just like usual. I can also use the AWS Command Line Interface (CLI) or one of our AWS SDKs to automate these processes or integrate them into other applications.

Pricing

Pricing depends on the type of backup:

- For EC2 instance backup, there is no additional charge; you will be charged for the storage used by all EBS volumes attached to your instance.
- For Elastic File System single file restore, you will be charged a fixed fee per restore and for the number of bytes you restore.
- For cross-region backup, you will be charged for the cross-region data transfer bandwidth and for the new warm storage space in the target Region.

These three new features are available today in all commercial AWS Regions where AWS Backup is available (you can verify service availability per Region on this web page). As is usual with any backup system, it is best practice to regularly perform backups and backup testing. Restorable backups are the best kind of backups.

-- seb
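The console steps above map to the AWS Backup API. The sketch below assembles request parameters for StartBackupJob (to back up an EC2 instance) and StartCopyJob (for the cross-Region copy). The ARNs and vault names are placeholders; with boto3 you would pass each dict to the corresponding call on the backup client.

```python
def backup_job_params(instance_arn, vault, role_arn):
    """Parameters for backup.start_backup_job: protect one EC2 instance."""
    return {
        "BackupVaultName": vault,
        "ResourceArn": instance_arn,   # the EC2 instance to back up
        "IamRoleArn": role_arn,        # role AWS Backup assumes
    }

def copy_job_params(recovery_point_arn, source_vault, dest_vault_arn, role_arn):
    """Parameters for backup.start_copy_job: copy a recovery point
    to a vault in another Region."""
    return {
        "RecoveryPointArn": recovery_point_arn,
        "SourceBackupVaultName": source_vault,
        "DestinationBackupVaultArn": dest_vault_arn,
        "IamRoleArn": role_arn,
    }

job = backup_job_params(
    "arn:aws:ec2:us-east-1:123456789012:instance/i-0abcd1234example",
    "Default",
    "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
)
```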

New for Amazon EFS – IAM Authorization and Access Points

When building or migrating applications, we often need to share data across multiple compute nodes. Many applications use file APIs, and Amazon Elastic File System (EFS) makes it easy to use those applications on AWS, providing a scalable, fully managed Network File System (NFS) that you can access from other AWS services and on-premises resources. EFS scales on demand from zero to petabytes with no disruptions, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity. By using it, you get strong file system consistency across 3 Availability Zones. EFS performance scales with the amount of data stored, with the option to provision the throughput you need.

Last year, the EFS team focused on optimizing costs with the introduction of the EFS Infrequent Access (IA) storage class, with storage prices up to 92% lower compared to EFS Standard. You can quickly start reducing your costs by setting a Lifecycle Management policy to move to EFS IA the files that haven't been accessed for a certain number of days.

Today, we are introducing two new features that simplify managing access, sharing data sets, and protecting your EFS file systems:

- IAM authentication and authorization for NFS clients, to identify clients and use IAM policies to manage client-specific permissions.
- EFS access points, to enforce the use of an operating system user and group, optionally restricting access to a directory in the file system.

Using IAM Authentication and Authorization

In the EFS console, when creating or updating an EFS file system, I can now set up a file system policy. This is an IAM resource policy, similar to bucket policies for Amazon Simple Storage Service (S3), and can be used, for example, to disable root access, enforce read-only access, or enforce in-transit encryption for all clients. Identity-based policies, such as those used by IAM users, groups, or roles, can override these default permissions.
These new features work on top of EFS's current network-based access using security groups. I select the option to disable root access by default, click on Set policy, and then select the JSON tab. Here, I can review the policy generated based on my settings, or create a more advanced policy, for example to grant permissions to a different AWS account or a specific IAM role.

The following actions can be used in IAM policies to manage access permissions for NFS clients:

- ClientMount to give permission to mount a file system with read-only access
- ClientWrite to be able to write to the file system
- ClientRootAccess to access files as root

I look at the policy JSON. I see that I can mount and read (ClientMount) the file system, and I can write (ClientWrite) in the file system, but since I selected the option to disable root access, I don't have ClientRootAccess permissions. Similarly, I can attach a policy to an IAM user or role to give specific permissions. For example, I create an IAM role to give full access to this file system (including root access) with this policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "elasticfilesystem:ClientMount",
                "elasticfilesystem:ClientWrite",
                "elasticfilesystem:ClientRootAccess"
            ],
            "Resource": "arn:aws:elasticfilesystem:us-east-2:123412341234:file-system/fs-d1188b58"
        }
    ]
}

I start an Amazon Elastic Compute Cloud (EC2) instance in the same Amazon Virtual Private Cloud as the EFS file system, using Amazon Linux 2 and a security group that can connect to the file system. The EC2 instance is using the IAM role I just created.

The open source efs-utils are required to connect a client using IAM authentication, in-transit encryption, or both. Normally, on Amazon Linux 2, I would install efs-utils using yum, but the new version is still rolling out, so I am following the instructions to build the package from source in this repository. I'll update this blog post when the updated package is available.
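For comparison with the full-access policy above, here is a small helper that generates a read-only variant of the same file system policy, granting only ClientMount. The account and file system IDs mirror the example values used in this post:

```python
import json

def read_only_fs_policy(file_system_arn):
    """Build an EFS file system policy that allows clients to mount
    read-only (elasticfilesystem:ClientMount) and nothing else."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "*"},
            "Action": ["elasticfilesystem:ClientMount"],
            "Resource": file_system_arn,
        }],
    }

policy = read_only_fs_policy(
    "arn:aws:elasticfilesystem:us-east-2:123412341234:file-system/fs-d1188b58")
policy_json = json.dumps(policy, indent=4)
```

The generated JSON can be pasted into the console's JSON tab, or applied with the PutFileSystemPolicy API.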
To mount the EFS file system, I use the mount command. To leverage in-transit encryption, I add the tls option. I am not using IAM authentication here, so the permissions I specified for the "*" principal in my file system policy apply to this connection.

$ sudo mkdir /mnt/shared
$ sudo mount -t efs -o tls fs-d1188b58 /mnt/shared

My file system policy disables root access by default, so I can't create a new file as root:

$ sudo touch /mnt/shared/newfile
touch: cannot touch ‘/mnt/shared/newfile’: Permission denied

I now use IAM authentication by adding the iam option to the mount command (tls is required for IAM authentication to work):

$ sudo mount -t efs -o iam,tls fs-d1188b58 /mnt/shared

When I use this mount option, the IAM role from my EC2 instance profile is used to connect, along with the permissions attached to that role, including root access:

$ sudo touch /mnt/shared/newfile
$ ls -la /mnt/shared/newfile
-rw-r--r-- 1 root root 0 Jan  8 09:52 /mnt/shared/newfile

Here I used the IAM role to have root access. Other common use cases are to enforce in-transit encryption (using the aws:SecureTransport condition key) or to create different roles for clients needing write or read-only access. EFS IAM permission checks are logged by AWS CloudTrail to audit client access to your file system. For example, when a client mounts a file system, a NewClientConnection event is shown in my CloudTrail console.

Using EFS Access Points

EFS access points allow you to easily manage application access to NFS environments, specifying a POSIX user and group to use when accessing the file system, and restricting access to a directory within a file system. Use cases that can benefit from EFS access points include:

- Container-based environments, where developers build and deploy their own containers (you can also see this blog post for using EFS for container storage).
- Data science applications, that require read-only access to production data.
- Sharing a specific directory in your file system with other AWS accounts.

In the EFS console, I create two access points for my file system, each using a different POSIX user and group:

- /data – where I am sharing some data that must be read and updated by multiple clients.
- /config – where I share some configuration files that must not be updated by clients using the /data access point.

I used file permissions 755 for both access points. That means that I am giving read and execute access to everyone, and write access to the owner of the directory only. Permissions here are used when creating the directory. Within the directory, permissions are under full control of the user.

I mount the /data access point by adding the accesspoint option to the mount command:

$ sudo mount -t efs -o tls,accesspoint=fsap-0204ce67a2208742e fs-d1188b58 /mnt/shared

I can now create a file, because I am not doing that as root, but I am automatically using the user and group ID of the access point:

$ sudo touch /mnt/shared/datafile
$ ls -la /mnt/shared/datafile
-rw-r--r-- 1 1001 1001 0 Jan  8 09:58 /mnt/shared/datafile

I mount the file system again, without specifying an access point. I see that datafile was created in the /data directory, as expected considering the access point configuration. When using the access point, I was unable to access any files that were in the root or other directories of my EFS file system.
$ sudo mount -t efs -o tls fs-d1188b58 /mnt/shared
$ ls -la /mnt/shared/data/datafile
-rw-r--r-- 1 1001 1001 0 Jan  8 09:58 /mnt/shared/data/datafile

To use IAM authentication with access points, I add the iam option:

$ sudo mount -t efs -o iam,tls,accesspoint=fsap-0204ce67a2208742e fs-d1188b58 /mnt/shared

I can restrict an IAM role to a specific access point by adding a Condition on the AccessPointArn to the policy:

"Condition": {
    "StringEquals": {
        "elasticfilesystem:AccessPointArn" : "arn:aws:elasticfilesystem:us-east-2:123412341234:access-point/fsap-0204ce67a2208742e"
    }
}

Using IAM authentication and EFS access points together simplifies securely sharing data for container-based architectures and multi-tenant applications, because it ensures that every application automatically gets the right operating system user and group assigned to it, optionally limiting access to a specific directory, enforcing in-transit encryption, or giving read-only access to the file system.

Available Now

IAM authorization for NFS clients and EFS access points are available in all regions where EFS is offered, as described in the AWS Region Table. There is no additional cost for using them. You can learn more about using EFS with IAM and access points in the documentation.

It’s now easier to create scalable architectures sharing data and configurations. Let me know what you are going to use these new features for!

— Danilo
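For context, a Condition like the one shown in this post slots into a full file system policy statement along these lines. This is only a sketch: the action list and the resource ARN are illustrative (using the example file system and access point IDs from the post), not a policy copied from the service documentation.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "elasticfilesystem:ClientMount",
                "elasticfilesystem:ClientWrite"
            ],
            "Resource": "arn:aws:elasticfilesystem:us-east-2:123412341234:file-system/fs-d1188b58",
            "Condition": {
                "StringEquals": {
                    "elasticfilesystem:AccessPointArn": "arn:aws:elasticfilesystem:us-east-2:123412341234:access-point/fsap-0204ce67a2208742e"
                }
            }
        }
    ]
}
```

With a statement like this, clients assuming the role can mount and write only through the specified access point; dropping ClientWrite from the action list would be one way to grant read-only access.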

Urgent & Important – Rotate Your Amazon RDS, Aurora, and Amazon DocumentDB (with MongoDB compatibility) Certificates

Feb 5th, 2020: We’ve made an edit to this post. Previously, we had communicated that between February 5 and March 5, 2020, RDS would automatically stage the new certificates on RDS database instances without a restart. Based on customer feedback and to give customers as much time as possible to complete updates, RDS will neither stage nor update database certificates automatically ahead of March 5, 2020. This means that customers will be able to use the full time until March 5, 2020 to update applications and databases to use the new CA certificates.

You may have already received an email or seen a console notification, but I don’t want you to be taken by surprise!

Rotate Now

If you are using Amazon Aurora, Amazon Relational Database Service (RDS), or Amazon DocumentDB (with MongoDB compatibility) and are taking advantage of SSL/TLS certificate validation when you connect to your database instances, you need to download & install a fresh certificate, rotate the certificate authority (CA) for the instances, and then reboot the instances.

If you are not using SSL/TLS connections or certificate validation, you do not need to make any updates, but I recommend that you do so in order to be ready in case you decide to use SSL/TLS connections in the future. In this case, you can use a new CLI option that rotates and stages the new certificates but avoids a restart.

The new certificate (CA-2019) is available as part of a certificate bundle that also includes the old certificate (CA-2015) so that you can make a smooth transition without getting into a chicken and egg situation.

What’s Happening?

The SSL/TLS certificates for RDS, Aurora, and Amazon DocumentDB expire and are replaced every five years as part of our standard maintenance and security discipline. Here are some important dates to know:

September 19, 2019 – The CA-2019 certificates were made available.
January 14, 2020 – Instances created on or after this date will have the new (CA-2019) certificates. You can temporarily revert to the old certificates if necessary.
February 5 to March 5, 2020 – RDS will stage (install but not activate) new certificates on existing instances. Restarting the instance will activate the certificate.
March 5, 2020 – The CA-2015 certificates will expire. Applications that use certificate validation but have not been updated will lose connectivity.

How to Rotate

Earlier this month I created an Amazon RDS for MySQL database instance and set it aside in preparation for this blog post. As you can see from the screen shot above, the RDS console lets me know that I need to perform a Certificate update.

I visit Using SSL/TLS to Encrypt a Connection to a DB Instance and download a new certificate. If my database client knows how to handle certificate chains, I can download the root certificate and use it for all regions. If not, I download a certificate that is specific to the region where my database instance resides. I decide to download a bundle that contains the old and new root certificates.

Next, I update my client applications to use the new certificates. This process is specific to each app and each database client library, so I don’t have any details to share.

Once the client application has been updated, I change the certificate authority (CA) to rds-ca-2019. I can Modify the instance in the console, and select the new CA. I can also do this via the CLI:

$ aws rds modify-db-instance --db-instance-identifier database-1 \
    --ca-certificate-identifier rds-ca-2019

The change will take effect during the next maintenance window.
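Before making the change across a fleet, it can help to inventory which instances still use the old CA. Here is a minimal sketch, assuming the JSON shape returned by the RDS DescribeDBInstances API; the helper name and the sample data are my own, and in practice you would feed it real API output (for example, from the CLI or an SDK).

```python
# Sketch: find DB instances still using the old CA certificate.
# Assumes the JSON shape returned by `aws rds describe-db-instances`;
# the helper name and sample data below are illustrative only.

def instances_needing_rotation(describe_output, old_ca="rds-ca-2015"):
    """Return identifiers of instances whose CA certificate is still the old one."""
    return [
        db["DBInstanceIdentifier"]
        for db in describe_output.get("DBInstances", [])
        if db.get("CACertificateIdentifier") == old_ca
    ]

# Example with made-up data:
sample = {
    "DBInstances": [
        {"DBInstanceIdentifier": "database-1", "CACertificateIdentifier": "rds-ca-2015"},
        {"DBInstanceIdentifier": "database-2", "CACertificateIdentifier": "rds-ca-2019"},
    ]
}
print(instances_needing_rotation(sample))  # ['database-1']
```

Each identifier returned this way is a candidate for the modify-db-instance command shown above.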
I can also apply it immediately:

$ aws rds modify-db-instance --db-instance-identifier database-1 \
    --ca-certificate-identifier rds-ca-2019 --apply-immediately

After my instance has been rebooted (either immediately or during the maintenance window), I test my application to ensure that it continues to work as expected. If I am not using SSL and want to avoid a restart, I use --no-certificate-rotation-restart:

$ aws rds modify-db-instance --db-instance-identifier database-1 \
    --ca-certificate-identifier rds-ca-2019 --no-certificate-rotation-restart

The database engine will pick up the new certificate during the next planned or unplanned restart. I can also use the RDS ModifyDBInstance API function or a CloudFormation template to change the certificate authority. Once again, all of this must be completed by March 5, 2020 or your applications may be unable to connect to your database instance using SSL or TLS.

Things to Know

Here are a few important things to know:

Amazon Aurora Serverless – AWS Certificate Manager (ACM) is used to manage certificate rotations for this database engine, and no action is necessary.
Regions – Rotation is needed for database instances in all commercial AWS regions except Asia Pacific (Hong Kong), Middle East (Bahrain), and China (Ningxia).
Cluster Scaling – If you add more nodes to an existing cluster, the new nodes will receive the CA-2019 certificate if one or more of the existing nodes already have it. Otherwise, the CA-2015 certificate will be used.

Learning More

Here are some links to additional information:

Updating Your Amazon DocumentDB TLS Certificates
Updating Applications to Connect to MariaDB DB Instances Using New SSL/TLS Certificates
Updating Applications to Connect to Microsoft SQL Server DB Instances Using New SSL/TLS Certificates
Updating Applications to Connect to MySQL DB Instances Using New SSL/TLS Certificates
Updating Applications to Connect to Oracle DB Instances Using New SSL/TLS Certificates
Updating Applications to Connect to PostgreSQL DB Instances Using New SSL/TLS Certificates
Updating Applications to Connect to Aurora MySQL DB Clusters Using New SSL/TLS Certificates
Updating Applications to Connect to Aurora PostgreSQL DB Clusters Using New SSL/TLS Certificates

— Jeff;

Amazon at CES 2020 – Connectivity & Mobility

The Consumer Electronics Show (CES) starts tomorrow. Attendees will have the opportunity to learn about the latest and greatest developments in many areas including 5G, IoT, Advertising, Automotive, Blockchain, Health & Wellness, Home & Family, Immersive Entertainment, Product Design & Manufacturing, Robotics & Machine Intelligence, and Sports.

Amazon at CES

If you will be traveling to Las Vegas to attend CES, I would like to invite you to visit the Amazon Automotive exhibit in the Las Vegas Convention Center. Come to booth 5616 to learn about our work to help auto manufacturers and developers create the next generation of software-defined vehicles.

As you might know, this industry is working to reinvent itself, with manufacturers expanding from designing & building vehicles to a more expansive vision that encompasses multiple forms of mobility. At the booth, you will find multiple demos that are designed to show you what is possible when you mash up vehicles, connectivity, software, apps, sensors, and machine learning in new ways.

Cadillac Customer Journey – This is an interactive, immersive demo of a data-driven shopping experience to engage customers at every touchpoint. Powered by ZeroLight and running on AWS, the demo uses 3D imagery that is generated in real time on GPU-equipped EC2 instances.

Future Mobility – This demo uses the Alexa Auto SDK and several AWS Machine Learning services to create an interactive in-vehicle assistant. It stores driver profiles in the cloud, and uses Amazon Rekognition to load the proper profile for the driver. Machine learning is used to detect repeated behaviors, such as finding the nearest coffee shop each morning.

Rivian Alexa – This full-vehicle demo showcases the deep Alexa Auto SDK integration that Rivian is using to control core vehicle functions on their upcoming R1T Electric Truck.
Smart Home / Garage – This demo ensemble showcases several of the Alexa home-to-car and car-to-home integrations, and features multiple Amazon & Alexa offerings including Amazon Pay, Fire TV, and Ring.

Karma Automotive / Blackberry QNX – Built on AWS IoT and machine learning inference models developed using Amazon SageMaker, this demo includes two use cases. The first one shows how data from Karma’s fleet of electric vehicles is used to predict the battery state of health. The second one shows how cloud-trained models run at the edge (in the vehicle) to detect gestures that control vehicle functions.

Accenture Personalized Connected Vehicle Adventure – This demo shows how identity and personalization can be used to create unique transportation experiences. The journeys are customized using learned preferences and contextual data gathered in real time, powered by Amazon Personalize.

Accenture Data Monetization – This demo tackles data monetization while preserving customer privacy. Built around a data management reference architecture that uses Amazon QLDB and AWS Data Exchange, the demo enables consent and value exchange, with a focus on insights, predictions, and recommendations.

Denso Connected Vehicle Reference System – CVRS is an intelligent, end-to-end mobility service built on the AWS Connected Vehicle Solution. It uses a layered architecture that combines edge and cloud components to allow mobility service providers to build innovative products without starting from scratch.

WeRide – This company runs a fleet of autonomous vehicles in China. The ML training to support the autonomy runs on AWS, as does the overall fleet management system. The demo shows how the AWS cloud supports their connected & autonomous fleet.

Dell EMC / National Instruments – This jointly developed demo focuses on the Hardware-in-Loop phase of autonomous vehicle development, where actual vehicle hardware running in real-world conditions is used.
Unity – This demo showcases a Software-in-Loop autonomous vehicle simulation built with Unity. An accurate, photorealistic representation of Berlin, Germany is used, with the ability to dynamically vary parameters such as time, weather, and scenery. Using the Unity Simulation framework and AWS, 100 permutations of each scene are generated and used as training data in parallel.

Get in Touch

If you are interested in learning more about any of these demos, or if you are ready to build a connected or autonomous vehicle solution of your own, please feel free to contact us.

— Jeff;

Celebrating AWS Community Leaders at re:Invent 2019

Even though cloud computing is a global phenomenon, location still matters when it comes to community. For example, customers regularly tell me that they are impressed by the scale, enthusiasm, and geographic reach of the AWS User Group Community. We are deeply appreciative of the work that our user group and community leaders do.

Each year, leaders of local communities travel to re:Invent in order to attend a series of events designed to meet their unique needs. They attend an orientation session, learn about We Power Tech (“Building a future of tech that is diverse, inclusive and accessible”), watch the keynotes, and participate in training sessions as part of a half-day AWS Community Leader workshop. After re:Invent wraps up, they return to their homes and use their new knowledge and skills to do an even better job of creating and sharing technical content and of nurturing their communities.

Community Leadership Grants

In order to make it possible for more community leaders to attend and benefit from re:Invent, we launched a grant program in 2018. The grants covered registration, housing, and flights, and were awarded to technologists from emerging markets and underrepresented communities. Several of the recipients went on to become AWS Heroes, and we decided to expand the program for 2019. We chose 17 recipients from 14 countries across 5 continents, with an eye toward recognizing those who are working to build inclusive AWS communities.

Additionally, We Power Tech launched a separate Grant Program with Project Alloy to help support underrepresented technologists in the first five years of their careers to attend re:Invent by covering conference registration, hotel, and airfare. In total, there were 102 grantees from 16 countries.

The following attendees received Community Leadership Grants and were able to attend re:Invent:

Ahmed Samir – Riyadh, KSA (LinkedIn, Twitter) – Ahmed is a co-organizer of the AWS Riyadh User Group.
He is well known for his social media accounts, in which he translates all AWS announcements into Arabic.

Veronique Robitaille – Valencia, Spain (LinkedIn, Twitter) – Veronique is an SA certified cloud consultant in Valencia, Spain. She is the co-organizer of the AWS User Group in Valencia, and also translates AWS content into Spanish.

Dzenana Dzevlan – Mostar, Bosnia (LinkedIn) – Dzenana is an electrical engineering master’s student at the University of Sarajevo, and a co-organizer of the AWS User Group in Bosnia-Herzegovina.

Magdalena Zawada – Warsaw, Poland (LinkedIn) – Magdalena is a cloud consultant and co-organizer of the AWS User Group Poland.

Hiromi Ito – Osaka, Japan (Twitter) – Hiromi runs IT communities for women in Japan and elsewhere in Asia, and also contributes to JAWS-UG in Kansai. She is the founder of the Asian Woman’s Association Meetup in Singapore.

Lena Taupier – Columbus, Ohio, USA (LinkedIn) – Lena co-organizes the Columbus AWS Meetup, was on the organizing team for the 2018 and 2019 Midwest / Chicago AWS Community Days, and delivered a lightning talk on “Building Diverse User Communities” at re:Invent.

Victor Perez – Panama City, Panama (LinkedIn) – Victor founded the AWS Panama User Group after deciding that he wanted to make AWS Cloud the new normal for the country. He also created the AWS User Group Caracas.

Hiro Nishimura – New York, USA (LinkedIn, Twitter) – Hiro is an educator at heart. She founded AWS Newbies to teach beginners about AWS, and worked with LinkedIn to create video courses that introduce cloud computing to non-engineers.

Sridevi Murugayen – Chennai, India (LinkedIn) – Sridevi is a core leader of AWS Community Day Chennai. She managed a diversity session at the Community Day, and is a regular presenter and participant in the AWS Chennai User Group.

Sukanya Mandal – Mumbai, India (LinkedIn) – Sukanya leads the PyData community in Mumbai, and also contributes to the AWS User Group there.
She talked about “ML for IoT at the Edge” at the AWS Developer Lounge in the re:Invent 2019 Expo Hall.

Seohyun Yoon – Seoul, Korea (LinkedIn) – Seohyun is a founding member of the student division of the AWS Korea Usergroup (AUSG), one of the youngest active AWS advocates in Korea, and served as a judge for the re:Invent 2019 Non-Profit Hackathon for Good. Check out her hands-on AWS lab guides!

Farah Clara Shinta Rachmady – Jakarta, Indonesia (LinkedIn, Twitter) – Farah nurtures AWS Indonesia and other technical communities in Indonesia, and also organizes large-scale events & community days.

Sandy Rodríguez – Mexico City, Mexico (LinkedIn) – Sandy co-organized the AWS Mexico City User Group and focuses on making events great for attendees. She delivered a 20-minute session in the AWS Village Theater at re:Invent 2019. Her work is critical to the growth of the AWS community in Mexico.

Vanessa Alves dos Santos – São Paulo, Brazil (LinkedIn) – Vanessa is a powerful AWS advocate within her community. She helped to plan AWS Community Days Brazil and the AWS User Group in São Paulo.

The following attendees were chosen for grants, but were not able to attend due to issues with travel visas:

Ayeni Oluwakemi – Lagos, Nigeria (LinkedIn, Twitter) – Ayeni is the founder of the AWS User Group in Lagos, Nigeria. She is the organizer of AWSome Day in Nigeria, and writes for the Cloud Guru Blog.

Ewere Diagboya – Lagos, Nigeria (LinkedIn, Twitter) – Ewere is one of our most active advocates in Nigeria. He is very active in the DevOps and cloud computing community as an educator, and also organizes the DevOps Nigeria Meetup.

Minh Ha – Hanoi, Vietnam – Minh grows the AWS User Group Vietnam by organizing in-person meetups and online events. She co-organized AWS Community Day 2018, runs hackathons, and co-organized SheCodes Vietnam.

— Jeff;

New – Amazon Comprehend Medical Adds Ontology Linking

Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find insights in unstructured text. It is very easy to use, with no machine learning experience required. You can customize Comprehend for your specific use case, for example creating custom document classifiers to organize your documents into your own categories, or custom entity types that analyze text for your specific terms.

However, medical terminology can be very complex and specific to the healthcare domain. For this reason, last year we introduced Amazon Comprehend Medical, a HIPAA eligible natural language processing service that makes it easy to use machine learning to extract relevant medical information from unstructured text. Using Comprehend Medical, you can quickly and accurately gather information, such as medical condition, medication, dosage, strength, and frequency, from a variety of sources like doctors’ notes, clinical trial reports, and patient health records.

Today, we are adding the capability of linking the information extracted by Comprehend Medical to medical ontologies. An ontology provides a declarative model of a domain that defines and represents the concepts existing in that domain, their attributes, and the relationships between them. It is typically represented as a knowledge base, and made available to applications that need to use or share knowledge. Within health informatics, an ontology is a formal description of a health-related domain. The ontologies supported by Comprehend Medical are:

ICD-10-CM, to identify medical conditions as entities and link related information such as diagnosis, severity, and anatomical distinctions as attributes of that entity. This is a diagnosis code set that is very useful for population health analytics, and for getting payments from insurance companies based on medical services rendered.
RxNorm, to identify medications as entities and link attributes such as dose, frequency, strength, and route of administration to that entity. Healthcare providers use these concepts to enable use cases like medication reconciliation, which is the process of creating the most accurate list possible of all medications a patient is taking.

For each ontology, Comprehend Medical returns a ranked list of potential matches. You can use confidence scores to decide which matches make sense, or which might need further review. Let’s see how this works with an example.

Using Ontology Linking

In the Comprehend Medical console, I start by entering some unstructured doctor’s notes as input.

First, I use some features that were already available in Comprehend Medical to detect medical and protected health information (PHI) entities. Among the recognized entities (see this post for more info) there are some symptoms and medications. Medications are recognized as generics or brands. Let’s see how we can connect some of these entities to more specific concepts.

I use the new features to link those entities to RxNorm concepts for medications. In the text, only the parts mentioning medications are detected. In the details of the answer, I see more information. For example, let’s look at one of the detected medications: the first occurrence of the term “Clonidine” (in the second line of the input text above) is linked to the generic concept (on the left in the image below) in the RxNorm ontology. The second occurrence of the term “Clonidine” (in the fourth line of the input text above) is followed by an explicit dosage, and is linked to a more prescriptive format that includes dosage (on the right in the image below) in the RxNorm ontology.

To look for medical conditions using ICD-10-CM concepts, I give a different input. The idea again is to link the detected entities, like symptoms and diagnoses, to specific concepts.
As expected, diagnoses and symptoms are recognized as entities. In the detailed results those entities are linked to the medical conditions in the ICD-10-CM ontology. For example, the two main diagnoses described in the input text are the top results, and specific concepts in the ontology are inferred by Comprehend Medical, each with its own score. In production, you can use Comprehend Medical via API, to integrate these functionalities with your processing workflow. All the screenshots above render visually the structured information returned by the API in JSON format. For example, this is the result of detecting medications (RxNorm concepts): { "Entities": [ { "Id": 0, "Text": "Clonidine", "Category": "MEDICATION", "Type": "GENERIC_NAME", "Score": 0.9933062195777893, "BeginOffset": 83, "EndOffset": 92, "Attributes": [], "Traits": [], "RxNormConcepts": [ { "Description": "Clonidine", "Code": "2599", "Score": 0.9148101806640625 }, { "Description": "168 HR Clonidine 0.00417 MG/HR Transdermal System", "Code": "998671", "Score": 0.8215734958648682 }, { "Description": "Clonidine Hydrochloride 0.025 MG Oral Tablet", "Code": "892791", "Score": 0.7519310116767883 }, { "Description": "10 ML Clonidine Hydrochloride 0.5 MG/ML Injection", "Code": "884225", "Score": 0.7171697020530701 }, { "Description": "Clonidine Hydrochloride 0.2 MG Oral Tablet", "Code": "884185", "Score": 0.6776907444000244 } ] }, { "Id": 1, "Text": "Vyvanse", "Category": "MEDICATION", "Type": "BRAND_NAME", "Score": 0.9995427131652832, "BeginOffset": 148, "EndOffset": 155, "Attributes": [ { "Type": "DOSAGE", "Score": 0.9910679459571838, "RelationshipScore": 0.9999822378158569, "Id": 2, "BeginOffset": 156, "EndOffset": 162, "Text": "50 mgs", "Traits": [] }, { "Type": "ROUTE_OR_MODE", "Score": 0.9997182488441467, "RelationshipScore": 0.9993833303451538, "Id": 3, "BeginOffset": 163, "EndOffset": 165, "Text": "po", "Traits": [] }, { "Type": "FREQUENCY", "Score": 0.983681321144104, "RelationshipScore": 
0.9999642372131348, "Id": 4, "BeginOffset": 166, "EndOffset": 184, "Text": "at breakfast daily", "Traits": [] } ], "Traits": [], "RxNormConcepts": [ { "Description": "lisdexamfetamine dimesylate 50 MG Oral Capsule [Vyvanse]", "Code": "854852", "Score": 0.8883932828903198 }, { "Description": "lisdexamfetamine dimesylate 50 MG Chewable Tablet [Vyvanse]", "Code": "1871469", "Score": 0.7482635378837585 }, { "Description": "Vyvanse", "Code": "711043", "Score": 0.7041242122650146 }, { "Description": "lisdexamfetamine dimesylate 70 MG Oral Capsule [Vyvanse]", "Code": "854844", "Score": 0.23675969243049622 }, { "Description": "lisdexamfetamine dimesylate 60 MG Oral Capsule [Vyvanse]", "Code": "854848", "Score": 0.14077001810073853 } ] }, { "Id": 5, "Text": "Clonidine", "Category": "MEDICATION", "Type": "GENERIC_NAME", "Score": 0.9982216954231262, "BeginOffset": 199, "EndOffset": 208, "Attributes": [ { "Type": "STRENGTH", "Score": 0.7696017026901245, "RelationshipScore": 0.9999960660934448, "Id": 6, "BeginOffset": 209, "EndOffset": 216, "Text": "0.2 mgs", "Traits": [] }, { "Type": "DOSAGE", "Score": 0.777644693851471, "RelationshipScore": 0.9999927282333374, "Id": 7, "BeginOffset": 220, "EndOffset": 236, "Text": "1 and 1 / 2 tabs", "Traits": [] }, { "Type": "ROUTE_OR_MODE", "Score": 0.9981689453125, "RelationshipScore": 0.999950647354126, "Id": 8, "BeginOffset": 237, "EndOffset": 239, "Text": "po", "Traits": [] }, { "Type": "FREQUENCY", "Score": 0.99753737449646, "RelationshipScore": 0.9999889135360718, "Id": 9, "BeginOffset": 240, "EndOffset": 243, "Text": "qhs", "Traits": [] } ], "Traits": [], "RxNormConcepts": [ { "Description": "Clonidine Hydrochloride 0.2 MG Oral Tablet", "Code": "884185", "Score": 0.9600071907043457 }, { "Description": "Clonidine Hydrochloride 0.025 MG Oral Tablet", "Code": "892791", "Score": 0.8955953121185303 }, { "Description": "24 HR Clonidine Hydrochloride 0.2 MG Extended Release Oral Tablet", "Code": "885880", "Score": 0.8706559538841248 }, { 
"Description": "12 HR Clonidine Hydrochloride 0.2 MG Extended Release Oral Tablet", "Code": "1013937", "Score": 0.786146879196167 }, { "Description": "Chlorthalidone 15 MG / Clonidine Hydrochloride 0.2 MG Oral Tablet", "Code": "884198", "Score": 0.601354718208313 } ] } ], "ModelVersion": "0.0.0" } Similarly, this is the output when detecting medical conditions (ICD-10-CM concepts): { "Entities": [ { "Id": 0, "Text": "coronary artery disease", "Category": "MEDICAL_CONDITION", "Type": "DX_NAME", "Score": 0.9933860898017883, "BeginOffset": 90, "EndOffset": 113, "Attributes": [], "Traits": [ { "Name": "DIAGNOSIS", "Score": 0.9682672023773193 } ], "ICD10CMConcepts": [ { "Description": "Atherosclerotic heart disease of native coronary artery without angina pectoris", "Code": "I25.10", "Score": 0.8199513554573059 }, { "Description": "Atherosclerotic heart disease of native coronary artery", "Code": "I25.1", "Score": 0.4950370192527771 }, { "Description": "Old myocardial infarction", "Code": "I25.2", "Score": 0.18753206729888916 }, { "Description": "Atherosclerotic heart disease of native coronary artery with unstable angina pectoris", "Code": "I25.110", "Score": 0.16535982489585876 }, { "Description": "Atherosclerotic heart disease of native coronary artery with unspecified angina pectoris", "Code": "I25.119", "Score": 0.15222692489624023 } ] }, { "Id": 2, "Text": "atrial fibrillation", "Category": "MEDICAL_CONDITION", "Type": "DX_NAME", "Score": 0.9923409223556519, "BeginOffset": 116, "EndOffset": 135, "Attributes": [], "Traits": [ { "Name": "DIAGNOSIS", "Score": 0.9708861708641052 } ], "ICD10CMConcepts": [ { "Description": "Unspecified atrial fibrillation", "Code": "I48.91", "Score": 0.7011875510215759 }, { "Description": "Chronic atrial fibrillation", "Code": "I48.2", "Score": 0.28612759709358215 }, { "Description": "Paroxysmal atrial fibrillation", "Code": "I48.0", "Score": 0.21157972514629364 }, { "Description": "Persistent atrial fibrillation", "Code": "I48.1", 
"Score": 0.16996538639068604 }, { "Description": "Atrial premature depolarization", "Code": "I49.1", "Score": 0.16715925931930542 } ] }, { "Id": 3, "Text": "hypertension", "Category": "MEDICAL_CONDITION", "Type": "DX_NAME", "Score": 0.9993137121200562, "BeginOffset": 138, "EndOffset": 150, "Attributes": [], "Traits": [ { "Name": "DIAGNOSIS", "Score": 0.9734011888504028 } ], "ICD10CMConcepts": [ { "Description": "Essential (primary) hypertension", "Code": "I10", "Score": 0.6827990412712097 }, { "Description": "Hypertensive heart disease without heart failure", "Code": "I11.9", "Score": 0.09846580773591995 }, { "Description": "Hypertensive heart disease with heart failure", "Code": "I11.0", "Score": 0.09182810038328171 }, { "Description": "Pulmonary hypertension, unspecified", "Code": "I27.20", "Score": 0.0866364985704422 }, { "Description": "Primary pulmonary hypertension", "Code": "I27.0", "Score": 0.07662317156791687 } ] }, { "Id": 4, "Text": "hyperlipidemia", "Category": "MEDICAL_CONDITION", "Type": "DX_NAME", "Score": 0.9998835325241089, "BeginOffset": 153, "EndOffset": 167, "Attributes": [], "Traits": [ { "Name": "DIAGNOSIS", "Score": 0.9702492356300354 } ], "ICD10CMConcepts": [ { "Description": "Hyperlipidemia, unspecified", "Code": "E78.5", "Score": 0.8378056883811951 }, { "Description": "Disorders of lipoprotein metabolism and other lipidemias", "Code": "E78", "Score": 0.20186281204223633 }, { "Description": "Lipid storage disorder, unspecified", "Code": "E75.6", "Score": 0.18514418601989746 }, { "Description": "Pure hyperglyceridemia", "Code": "E78.1", "Score": 0.1438658982515335 }, { "Description": "Other hyperlipidemia", "Code": "E78.49", "Score": 0.13983778655529022 } ] }, { "Id": 5, "Text": "chills", "Category": "MEDICAL_CONDITION", "Type": "DX_NAME", "Score": 0.9989762306213379, "BeginOffset": 211, "EndOffset": 217, "Attributes": [], "Traits": [ { "Name": "SYMPTOM", "Score": 0.9510533213615417 } ], "ICD10CMConcepts": [ { "Description": "Chills (without 
fever)", "Code": "R68.83", "Score": 0.7460958361625671 }, { "Description": "Fever, unspecified", "Code": "R50.9", "Score": 0.11848161369562149 }, { "Description": "Typhus fever, unspecified", "Code": "A75.9", "Score": 0.07497859001159668 }, { "Description": "Neutropenia, unspecified", "Code": "D70.9", "Score": 0.07332006841897964 }, { "Description": "Lassa fever", "Code": "A96.2", "Score": 0.0721040666103363 } ] }, { "Id": 6, "Text": "nausea", "Category": "MEDICAL_CONDITION", "Type": "DX_NAME", "Score": 0.9993392825126648, "BeginOffset": 220, "EndOffset": 226, "Attributes": [], "Traits": [ { "Name": "SYMPTOM", "Score": 0.9175007939338684 } ], "ICD10CMConcepts": [ { "Description": "Nausea", "Code": "R11.0", "Score": 0.7333012819290161 }, { "Description": "Nausea with vomiting, unspecified", "Code": "R11.2", "Score": 0.20183530449867249 }, { "Description": "Hematemesis", "Code": "K92.0", "Score": 0.1203150525689125 }, { "Description": "Vomiting, unspecified", "Code": "R11.10", "Score": 0.11658868193626404 }, { "Description": "Nausea and vomiting", "Code": "R11", "Score": 0.11535880714654922 } ] }, { "Id": 8, "Text": "flank pain", "Category": "MEDICAL_CONDITION", "Type": "DX_NAME", "Score": 0.9315784573554993, "BeginOffset": 235, "EndOffset": 245, "Attributes": [ { "Type": "ACUITY", "Score": 0.9809532761573792, "RelationshipScore": 0.9999837875366211, "Id": 7, "BeginOffset": 229, "EndOffset": 234, "Text": "acute", "Traits": [] } ], "Traits": [ { "Name": "SYMPTOM", "Score": 0.8182812929153442 } ], "ICD10CMConcepts": [ { "Description": "Unspecified abdominal pain", "Code": "R10.9", "Score": 0.4959934949874878 }, { "Description": "Generalized abdominal pain", "Code": "R10.84", "Score": 0.12332479655742645 }, { "Description": "Lower abdominal pain, unspecified", "Code": "R10.30", "Score": 0.08319114148616791 }, { "Description": "Upper abdominal pain, unspecified", "Code": "R10.10", "Score": 0.08275411278009415 }, { "Description": "Jaw pain", "Code": "R68.84", "Score": 
0.07797083258628845 } ] }, { "Id": 10, "Text": "numbness", "Category": "MEDICAL_CONDITION", "Type": "DX_NAME", "Score": 0.9659366011619568, "BeginOffset": 255, "EndOffset": 263, "Attributes": [ { "Type": "SYSTEM_ORGAN_SITE", "Score": 0.9976192116737366, "RelationshipScore": 0.9999089241027832, "Id": 11, "BeginOffset": 271, "EndOffset": 274, "Text": "leg", "Traits": [] } ], "Traits": [ { "Name": "SYMPTOM", "Score": 0.7310190796852112 } ], "ICD10CMConcepts": [ { "Description": "Anesthesia of skin", "Code": "R20.0", "Score": 0.767346203327179 }, { "Description": "Paresthesia of skin", "Code": "R20.2", "Score": 0.13602739572525024 }, { "Description": "Other complications of anesthesia", "Code": "T88.59", "Score": 0.09990577399730682 }, { "Description": "Hypothermia following anesthesia", "Code": "T88.51", "Score": 0.09953102469444275 }, { "Description": "Disorder of the skin and subcutaneous tissue, unspecified", "Code": "L98.9", "Score": 0.08736388385295868 } ] } ], "ModelVersion": "0.0.0" }

Available Now

You can use Amazon Comprehend Medical via the console, AWS Command Line Interface (CLI), or AWS SDKs. With Comprehend Medical, you pay only for what you use. You are charged based on the amount of text processed on a monthly basis, depending on the features you use. For more information, please see the Comprehend Medical section in the Comprehend Pricing page. Ontology Linking is available in all regions where Amazon Comprehend Medical is offered, as described in the AWS Regions Table.

The new ontology linking APIs make it easy to detect medications and medical conditions in unstructured clinical text and link them to RxNorm and ICD-10-CM codes respectively. This new feature can help you reduce the cost, time, and effort of processing large amounts of unstructured medical text with high accuracy.

— Danilo
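As a follow-up to the JSON responses shown above, here is a minimal Python sketch of how the ranked RxNorm concepts can be post-processed on the client side. The field names (Text, RxNormConcepts, Description, Code, Score) match the Comprehend Medical output above; the helper name and the 0.5 score threshold are my own choices, not part of the service.

```python
# Sketch: keep only the best-scoring RxNorm concept per detected medication,
# ignoring concepts below a confidence threshold. Field names follow the
# Comprehend Medical JSON shown in this post; the threshold is arbitrary.

def top_concepts(entities, threshold=0.5):
    """Map each entity's text to its highest-scoring concept (description, code)."""
    result = {}
    for entity in entities:
        concepts = [
            c for c in entity.get("RxNormConcepts", [])
            if c["Score"] >= threshold
        ]
        if concepts:
            best = max(concepts, key=lambda c: c["Score"])
            result[entity["Text"]] = (best["Description"], best["Code"])
    return result

# Example with trimmed data from the response above:
entities = [
    {"Text": "Clonidine", "RxNormConcepts": [
        {"Description": "Clonidine", "Code": "2599", "Score": 0.9148},
        {"Description": "Clonidine Hydrochloride 0.2 MG Oral Tablet",
         "Code": "884185", "Score": 0.6777},
    ]},
]
print(top_concepts(entities))  # {'Clonidine': ('Clonidine', '2599')}
```

Concepts below the threshold can instead be routed to a human-review queue, which is the kind of triage the confidence scores are meant to support.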

AWS Links & Updates – Monday, December 9, 2019

With re:Invent 2019 behind me, I have a fairly light blogging load for the rest of the month. I do, however, have a collection of late-breaking news and links that I want to share while they are still hot out of the oven! AWS Online Tech Talks for December – We have 18 tech talks scheduled for the remainder of the month. You can learn about Running Kubernetes on AWS Fargate, What’s New with AWS IoT, Transforming Healthcare with AI, and much more! AWS Outposts: Ordering and Installation Overview – This video walks you through the process of ordering and installing an Outposts rack. You will learn about the physical, electrical, and network requirements, and you will get to see an actual install first-hand. NFL Digital Athlete – We have partnered with the NFL to use data and analytics to co-develop the Digital Athlete, a platform that aims to improve player safety & treatment, and to predict & prevent injury. Watch the video in this tweet to learn more: AWS JPL Open Source Rover Challenge – Build and train a reinforcement learning (RL) model on AWS to autonomously drive JPL’s Open-Source Rover between given locations in a simulated Mars environment with the least amount of energy consumption and risk of damage. To learn more, visit the web site or watch the Launchpad Video. Map for Machine Learning on AWS – My colleague Julien Simon created an awesome map that categorizes all of the ML and AI services. The map covers applied ML, SageMaker’s built-in environments, ML for text, ML for any data, ML for speech, ML for images & video, fraud detection, personalization & recommendation, and time series. The linked article contains a scaled-down version of the image; the original version is best! Verified Author Badges for Serverless App Repository – The authors of applications in the Serverless Application Repository can now apply for a Verified Author badge that will appear next to the author’s name on the application card and the detail page. 
Cloud Innovation Centers – We announced that we will open three more Cloud Innovation Centers in 2020 (one in Australia and two in Bahrain), bringing the global total to eleven. Machine Learning Embark – This new program is designed to help companies transform their development teams into machine learning practitioners. It is based on our own internal experience, and will help to address and overcome common challenges in the machine learning journey. Read the blog post to learn more. Enjoy! — Jeff;

Check out The Amazon Builders’ Library – This is How We Do It!

Amazon customers often tell us that they want to know more about how we build and run our business. On the retail side, they tour Amazon Fulfillment Centers and see how we organize our warehouses. Corporate customers often ask about our Leadership Principles, and sometimes adopt (and then adapt) them for their own use. I regularly speak with customers in our Executive Briefing Center (EBC), and talk to them about working backwards, PRFAQs, narratives, bar-raising, accepting failure as part of long-term success, and our culture of innovation. The same curiosity that surrounds our business surrounds our development culture. We are often asked how we design, build, measure, run, and scale the hardware and software systems that underlie AWS and our other businesses. New Builders’ Library Today I am happy to announce The Amazon Builders’ Library. We are launching with a collection of detailed articles that will tell you exactly how we build and run our systems, each one written by the senior technical leaders who have deep expertise in that part of our business. This library is designed to give you direct access to the theory and the practices that underlie our work. Students, developers, dev managers, architects, and CTOs will all find this content to be helpful. This is the content that is “not sold in stores” and not taught in school! The library is organized by category: Architecture – The design decisions that we make when designing a cloud service that help us to optimize for security, durability, high availability, and performance. Software Delivery & Operations – The process of releasing new software to the cloud and maintaining health & high availability thereafter. Inside the Library I took a quick look at two of the articles while writing this post, and learned a lot! 
Avoiding insurmountable queue backlogs – Principal Engineer David Yanacek explores the ins and outs of message queues, covering the benefits and the risks, including many of the failure modes that can arise. He talks about how queues are used to power AWS Lambda and AWS IoT Core, and describes the sophisticated strategies that are used to maintain responsiveness and to implement (in his words) “magical resource isolation.” David shares multiple patterns that are used to create asynchronous multitenant systems that are resilient, including use of multiple queues, shuffle sharding, delay queues, back-pressure, and more. Challenges with distributed systems – Senior Principal Engineer Jacob Gabrielson discusses the many ways that distributed systems can fail. After defining three distinct types (offline, soft real-time, and hard real-time) of systems, he uses an analogy with Bizarro to explain why hard real-time systems are (again, in his words) “frankly, a bit on the evil side.” Building on an example based on Pac-Man, he adds some request/reply communication and enumerates all of the ways that it can succeed or fail. He discusses fate sharing and how it can be used to reduce the number of test cases, and also talks about many of the other difficulties that come with testing distributed systems. These are just two of the articles; be sure to check out the entire collection. More to Come We’ve got a lot more content in the pipeline, and we are also interested in your stories. Please feel free to leave feedback on this post, and we’ll be in touch. — Jeff;  

AWS Launches & Previews at re:Invent 2019 – Wednesday, December 4th

Here’s what we announced today: Amplify DataStore – This is a persistent, on-device storage repository that will help you to synchronize data across devices and to handle offline operations. It can be used as a standalone local datastore for web and mobile applications that have no connection to the cloud or an AWS account. When used with a cloud backend, it transparently synchronizes data with AWS AppSync. Amplify iOS and Amplify Android – These open source libraries enable you to build scalable and secure mobile applications. You can easily add analytics, AI/ML, API (GraphQL and REST), datastore, and storage functionality to your mobile and web applications. The use case-centric libraries provide a declarative interface that enables you to programmatically apply best practices with abstractions. The libraries, along with the Amplify CLI, a toolchain to create, integrate, and manage the cloud services used by your applications, are part of the Amplify Framework. Amazon Neptune Workbench – You can now query your graphs from within the Neptune Console using either Gremlin or SPARQL queries. You get a fully managed, interactive development environment that supports live code and narrative text within Jupyter notebooks. In addition to queries, the notebooks support bulk loading, query planning, and query profiling. To get started, visit the Neptune Console. Amazon Chime Meetings App for Slack – This new app allows Slack users to start and join Amazon Chime online meetings from their Slack workspace channels and conversations. Slack users that are new to Amazon Chime will be auto-registered with Chime when they use the app for the first time, and can get access to all of the benefits of Amazon Chime meetings from their Slack workspace. Administrators of Slack workspaces can install the Amazon Chime Meetings App for Slack from the Slack App Directory. To learn more, visit this blog post. 
HTTP APIs for Amazon API Gateway in Preview – This is a new API Gateway feature that will let you build cost-effective, high-performance RESTful APIs for serverless workloads using Lambda functions and other services with an HTTP endpoint. HTTP APIs are optimized for performance—they offer the core functionality of API Gateway at a cost savings of up to 70% compared to REST APIs in API Gateway. You will be able to create routes that map to multiple disparate backends, define & apply authentication and authorization to routes, set up rate limiting, and use custom domains to route requests to the APIs. Visit this blog post to get started. Windows gMSA Support in ECS – Amazon Elastic Container Service (ECS) now supports Windows group Managed Service Account (gMSA), a new capability that allows you to authenticate and authorize your ECS-powered Windows containers with network resources using an Active Directory (AD). You can now easily use Integrated Windows Authentication with your Windows containers on ECS to secure services. — Jeff;  

Amplify DataStore – Simplify Development of Offline Apps with GraphQL

The open source Amplify Framework is a command line tool and a library allowing web & mobile developers to easily provision and access cloud-based services. For example, if I want to create a GraphQL API for my mobile application, I use amplify add api on my development machine to configure the backend API. After answering a few questions, I type amplify push to create an AWS AppSync API backend in the cloud. Amplify generates code allowing my app to easily access the newly created API. Amplify supports popular web frameworks, such as Angular, React, and Vue. It also supports mobile applications developed with React Native, Swift for iOS, or Java for Android. If you want to learn more about how to use Amplify for your mobile applications, feel free to attend one of the workshops (iOS or React Native) we prepared for the re:Invent 2019 conference. AWS customers told us that the most difficult tasks when developing web & mobile applications are synchronizing data across devices and handling offline operations. Ideally, when a device is offline, your customers should be able to continue to use your application, not only to access data but also to create and modify it. When the device comes back online, the application must reconnect to the backend, synchronize the data, and resolve conflicts, if any. It requires a lot of undifferentiated code to correctly handle all edge cases, even when using AWS AppSync SDK’s on-device cache with offline mutations and delta sync. Today, we are introducing Amplify DataStore, a persistent on-device storage repository for developers to write, read, and observe changes to data. Amplify DataStore allows developers to write apps leveraging distributed data without writing additional code for offline or online scenarios. Amplify DataStore can be used as a stand-alone local datastore in web and mobile applications, with no connection to the cloud, or the need to have an AWS Account. 
However, when used with a cloud backend, Amplify DataStore transparently synchronizes data with an AWS AppSync API when network connectivity is available. Amplify DataStore automatically versions data and implements conflict detection and resolution in the cloud using AppSync. The toolchain also generates object definitions for my programming language based on the GraphQL schema developers provide. Let’s see how it works. I first install the Amplify CLI and create a React App. This is standard React; you can find the script on my git repo. I add Amplify DataStore to the app with npx amplify-app. npx is specific to Node.js; Amplify DataStore also integrates with native mobile toolchains, such as the Gradle plugin for Android Studio and CocoaPods that creates custom Xcode build phases for iOS. Now that the scaffolding of my app is done, I add a GraphQL schema representing two entities: Posts and Comments on these posts. I install the dependencies and use the AWS Amplify CLI to generate the source code for the objects defined in the GraphQL schema. # add a graphql schema to amplify/backend/api/amplifyDatasource/schema.graphql echo 'enum PostStatus { ACTIVE INACTIVE } type Post @model { id: ID! title: String! comments: [Comment] @connection(name: "PostComments") rating: Int! status: PostStatus! } type Comment @model { id: ID! content: String post: Post @connection(name: "PostComments") }' > amplify/backend/api/amplifyDatasource/schema.graphql # install dependencies npm i @aws-amplify/core @aws-amplify/datastore @aws-amplify/pubsub # generate the source code representing the model npm run amplify-modelgen # create the API in the cloud npm run amplify-push @model and @connection are directives that the Amplify GraphQL Transformer uses to generate code. Objects annotated with @model are top-level objects in your API; they are stored in DynamoDB, and you can make them searchable, version them, or restrict their access to authorized users only. 
@connection allows you to express 1-n relationships between objects, similarly to what you would define when using a relational database (you can use the @key directive to model n-n relationships). The last step is to create the React app itself. I propose to download a very simple sample app to get started quickly: # download a simple react app curl -o src/App.js # start the app npm run start I connect my browser to the app at http://localhost:8080 and start to test the app. The demo app provides a basic UI (as you can guess, I am not a graphic designer!) to create, query, and delete items. Amplify DataStore provides developers with an easy-to-use API to store, query, and delete data. Reads and writes are propagated in the background to your AppSync endpoint in the cloud. Amplify DataStore uses a local data store via a storage adapter; we ship IndexedDB for web and SQLite for mobile. Amplify DataStore is open source; you can add support for other databases if needed. From a code perspective, interacting with data is as easy as invoking the save(), delete(), or query() operations on the DataStore object (this is a JavaScript example, you would write similar code for Swift or Java). Notice that the query() operation accepts filters based on predicate expressions, such as item.rating("gt", 4) or Predicates.ALL. function onCreate() { DataStore.save( new Post({ title: `New title ${Date.now()}`, rating: 1, status: PostStatus.ACTIVE }) ); } function onDeleteAll() { DataStore.delete(Post, Predicates.ALL); } async function onQuery(setPosts) { const posts = await DataStore.query(Post, c => c.rating("gt", 4)); setPosts(posts); } async function listPosts(setPosts) { const posts = await DataStore.query(Post, Predicates.ALL); setPosts(posts); } I connect to the Amazon DynamoDB console and observe that the items are stored in my backend: There is nothing to change in my code to support offline mode. To simulate offline mode, I turn off my wifi. I add two items in the app and turn on the wifi again. 
The app continues to operate as usual while offline. The only noticeable change is that the _version field is not updated while offline, as it is populated by the backend. When the network is back, Amplify DataStore transparently synchronizes with the backend. I verify there are 5 items now in DynamoDB (the table name is different for each deployment, be sure to adjust the name for your table below): aws dynamodb scan --table-name Post-raherug3frfibkwsuzphkexewa-amplify \ --filter-expression "#deleted <> :value" \ --expression-attribute-names '{"#deleted" : "_deleted"}' \ --expression-attribute-values '{":value" : { "BOOL": true} }' \ --query "Count" 5 // <= there are now 5 non-deleted items in the table! Amplify DataStore leverages GraphQL subscriptions to keep track of changes that happen on the backend. Your customers can modify the data from another device, and Amplify DataStore takes care of synchronizing the local data store transparently. No GraphQL knowledge is required; Amplify DataStore takes care of the low-level GraphQL API calls for you automatically. Real-time data, connections, scalability, fan-out, and broadcasting are all handled by the Amplify client and AppSync, using the WebSocket protocol under the covers. We are effectively using GraphQL as a network protocol to dynamically transform model instances to GraphQL documents over HTTPS. To refresh the UI when a change happens on the backend, I add the following code in the useEffect() React hook. It uses the DataStore.observe() method to register a callback function ( msg => { ... } ). Amplify DataStore calls this function when an instance of Post changes on the backend. const subscription = DataStore.observe(Post).subscribe(msg => { console.log(msg.model, msg.opType, msg.element); listPosts(setPosts); }); Now, I open the AppSync console. I query existing Posts to retrieve a Post ID. 
query ListPost { listPosts(limit: 10) { items { id title status rating _version } } } I choose the first post in my app, the one starting with 7d8… and I send the following GraphQL mutation: mutation UpdatePost { updatePost(input: { id: "7d80688f-898d-4fb6-a632-8cbe060b9691" title: "updated title 13:56" status: ACTIVE rating: 7 _version: 1 }) { id title status rating _lastChangedAt _version _deleted } } Immediately, I see the app receiving the notification and refreshing its user interface. Finally, I test with multiple devices. I first create a hosting environment for my app using amplify add hosting and amplify publish. Once the app is published, I open the iOS Simulator and Chrome side by side. Both apps initially display the same list of items. I create new items in both apps and observe the apps refreshing their UI in near real time. At the end of my test, I delete all items. I verify there are no more items in DynamoDB (the table name is different for each deployment, be sure to adjust the name for your table below): aws dynamodb scan --table-name Post-raherug3frfibkwsuzphkexewa-amplify \ --filter-expression "#deleted <> :value" \ --expression-attribute-names '{"#deleted" : "_deleted"}' \ --expression-attribute-values '{":value" : { "BOOL": true} }' \ --query "Count" 0 // <= all the items have been deleted! When syncing local data with the backend, AWS AppSync keeps track of version numbers to detect conflicts. When there is a conflict, the default resolution strategy is to automerge the changes on the backend. Automerge is an easy strategy to resolve conflicts without writing client-side code. 
For example, let’s pretend I have an initial Post, and Bob & Alice update the post at the same time: The original item: { "_version": 1, "id": "25", "rating": 6, "status": "ACTIVE", "title": "DataStore is Available" } Alice updates the rating: { "_version": 2, "id": "25", "rating": 10, "status": "ACTIVE", "title": "DataStore is Available" } At the same time, Bob updates the title: { "_version": 2, "id": "25", "rating": 6, "status": "ACTIVE", "title": "DataStore is great !" } The final item after auto-merge is: { "_version": 3, "id": "25", "rating": 10, "status": "ACTIVE", "title": "DataStore is great !" } Automerge strictly defines merging rules at the field level, based on type information defined in the GraphQL schema. For example, List and Map fields are merged, and conflicting updates on scalars (such as numbers and strings) preserve the value existing on the server. Developers can choose other conflict resolution strategies: optimistic concurrency (conflicting updates are rejected) or custom (an AWS Lambda function is called to decide which version is the correct one). You can choose the conflict resolution strategy with amplify update api. You can read more about these different strategies in the AppSync documentation. The full source code for this demo is available on my git repository. The app has less than 100 lines of code, 20% of it being just UI related. Notice that I did not write a single line of GraphQL code; everything happens in the Amplify DataStore. Your Amplify DataStore cloud backend is available in all AWS Regions where AppSync is available, which, at the time I write this post, are: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (London). 
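The field-level merge in the Bob & Alice example above can be sketched in a few lines. This is a simplified illustration of the automerge idea, not AppSync's actual implementation; the function name and the version-bump convention (one bump per accepted write) are assumptions made for the sketch:

```python
def automerge(base, alice, bob):
    """Field-level merge of two concurrent updates to the same base item.
    A side's change is taken when the other side kept the base value;
    when both sides changed the same scalar, the value already accepted
    by the server (here: the first writer, alice) is preserved."""
    merged = dict(base)
    for key in base:
        if key == "_version":
            continue
        a, b = alice.get(key), bob.get(key)
        if a != base[key]:
            merged[key] = a                  # alice changed this field
        if b != base[key] and a == base[key]:
            merged[key] = b                  # bob changed it, alice did not
    merged["_version"] = base["_version"] + 2  # one bump per accepted write
    return merged

base  = {"_version": 1, "id": "25", "rating": 6,
         "status": "ACTIVE", "title": "DataStore is Available"}
alice = {"_version": 2, "id": "25", "rating": 10,
         "status": "ACTIVE", "title": "DataStore is Available"}
bob   = {"_version": 2, "id": "25", "rating": 6,
         "status": "ACTIVE", "title": "DataStore is great !"}
print(automerge(base, alice, bob))
# {'_version': 3, 'id': '25', 'rating': 10, 'status': 'ACTIVE',
#  'title': 'DataStore is great !'}
```

Running the sketch on the example items reproduces the merged item shown above: Alice's rating and Bob's title both survive because they touched different fields.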
There are no additional charges for using Amplify DataStore in your application; you only pay for the backend resources you use, such as AppSync and DynamoDB (see here and here for pricing details). Both services have a free tier allowing you to discover and to experiment for free. Amplify DataStore allows you to focus on the business value of your apps, instead of writing undifferentiated code. I can’t wait to discover the great applications you’re going to build with it. -- seb

AWS Launches & Previews at re:Invent 2019 – Tuesday, December 3rd

Whew, what a day. This post contains a summary of the announcements that we made today. Launch Blog Posts Here are detailed blog posts for the launches: AWS Outposts Now Available – Order Your Racks Today! Inf1 Instances with AWS Inferentia Chips for High Performance Cost-Effective Inferencing. EBS Direct APIs – Programmatic Access to EBS Snapshot Content. AWS Compute Optimizer – Your Customized Resource Optimization Service. Amazon EKS on AWS Fargate Now Generally Available. AWS Fargate Spot Now Generally Available. ECS Cluster Auto Scaling is Now Generally Available. Easily Manage Shared Data Sets with Amazon S3 Access Points. Amazon Redshift Update – Next-Generation Compute Instances and Managed, Analytics-Optimized Storage. Amazon Redshift – Data Lake Export and Federated Queries. Amazon Rekognition Custom Labels. Amazon SageMaker Studio: The First Fully Integrated Development Environment For Machine Learning. Amazon SageMaker Model Monitor – Fully Managed Automatic Monitoring For Your Machine Learning Models. Amazon SageMaker Experiments – Organize, Track And Compare Your Machine Learning Trainings. Amazon SageMaker Debugger – Debug Your Machine Learning Models. Amazon SageMaker Autopilot – Automatically Create High-Quality Machine Learning Models. Now Available on Amazon SageMaker: The Deep Graph Library. Amazon SageMaker Processing – Fully Managed Data Processing and Model Evaluation. Deep Java Library (DJL). AWS Now Available from a Local Zone in Los Angeles. Lambda Provisioned Concurrency. AWS Step Functions Express Workflows: High Performance & Low Cost. AWS Transit Gateway – Build Global Networks and Centralize Monitoring Using Network Manager. AWS Transit Gateway Adds Multicast and Inter-regional Peering. VPC Ingress Routing – Simplifying Integration of Third-Party Appliances. Amazon Chime Meeting Regions. Other Launches Here’s an overview of some launches that did not get a blog post. 
I’ve linked to the What’s New or product information pages instead: EBS-Optimized Bandwidth Increase – Thanks to improvements to the Nitro system, all newly launched C5/C5d/C5n/C5dn, M5/M5d/M5n/M5dn, R5/R5d/R5n/R5dn, and P3dn instances will support 36% higher EBS-optimized instance bandwidth, up to 19 Gbps. In addition, newly launched High Memory instances (6, 9, 12 TB) will also support 19 Gbps of EBS-optimized instance bandwidth, a 36% increase from 14 Gbps. For details on each size, read more about Amazon EBS-Optimized Instances. EC2 Capacity Providers – You will have additional control over how your applications use compute capacity within EC2 Auto Scaling Groups and when using AWS Fargate. You get an abstraction layer that lets you make late binding decisions on capacity, including the ability to choose how much Spot capacity you would like to use. Read the What’s New to learn more. Previews Here’s an overview of the previews that we revealed today, along with links that will let you sign up and/or learn more (most of these were in Andy’s keynote): AWS Wavelength – AWS infrastructure deployments that embed AWS compute and storage services within the telecommunications providers’ datacenters at the edge of the 5G network to provide developers the ability to build applications that serve end-users with single-digit millisecond latencies. You will be able to extend your existing VPC to a Wavelength Zone and then make use of EC2, EBS, ECS, EKS, IAM, CloudFormation, Auto Scaling, and other services. This low-latency access to AWS will enable the next generation of mobile gaming, AR/VR, security, and video processing applications. To learn more, visit the AWS Wavelength page. Amazon Managed Apache Cassandra Service (MCS) – This is a scalable, highly available, and managed Apache Cassandra-compatible database service. 
Amazon Managed Cassandra Service is serverless, so you pay for only the resources you use and the service automatically scales tables up and down in response to application traffic. You can build applications that serve thousands of requests per second with virtually unlimited throughput and storage. To learn more, read New – Amazon Managed Apache Cassandra Service (MCS). Graviton2-Powered EC2 Instances – New Arm-based general purpose, compute-optimized, and memory-optimized EC2 instances powered by the new Graviton2 processor. The instances offer a significant performance benefit over the 5th generation (M5, C5, and R5) instances, and also raise the bar on security. To learn more, read Coming Soon – Graviton2-Powered General Purpose, Compute-Optimized, & Memory-Optimized EC2 Instances. AWS Nitro Enclaves – AWS Nitro Enclaves will let you create isolated compute environments to further protect and securely process highly sensitive data such as personally identifiable information (PII), healthcare, financial, and intellectual property data within your Amazon EC2 instances. Nitro Enclaves uses the same Nitro Hypervisor technology that provides CPU and memory isolation for EC2 instances. To learn more, visit the Nitro Enclaves page. The Nitro Enclaves preview is coming soon and you can sign up now. Amazon Detective – This service will help you to analyze and visualize security data at scale. You will be able to quickly identify the root causes of potential security issues or suspicious activities. It automatically collects log data from your AWS resources and uses machine learning, statistical analysis, and graph theory to build a linked set of data that will accelerate your security investigation. Amazon Detective can scale to process terabytes of log data and trillions of events. Sign up for the Amazon Detective Preview. Amazon Fraud Detector – This service makes it easy for you to identify potential fraud that is associated with online activities. 
It uses machine learning and incorporates 20 years of fraud detection expertise from AWS and Amazon.com, allowing you to catch fraud faster than ever before. You can create a fraud detection model with a few clicks, and detect fraud related to new accounts, guest checkout, abuse of try-before-you-buy, and (coming soon) online payments. To learn more, visit the Amazon Fraud Detector page. Amazon Kendra – This is a highly accurate and easy-to-use enterprise search service that is powered by machine learning. It supports natural language queries and will allow users to discover information buried deep within your organization’s vast content stores. Amazon Kendra will include connectors for popular data sources, along with an API to allow data ingestion from other sources. You can access the Kendra Preview from the AWS Management Console. Contact Lens for Amazon Connect – This is a set of analytics capabilities for Amazon Connect that use machine learning to understand sentiment and trends within customer conversations in your contact center. Once enabled, specified calls are automatically transcribed using state-of-the-art machine learning techniques, fed through a natural language processing engine to extract sentiment, and indexed for searching. Contact center supervisors and analysts can look for trends, compliance risks, or contacts based on specific words and phrases mentioned in the call to effectively train agents, replicate successful interactions, and identify crucial company and product feedback. Sign up for the Contact Lens for Amazon Connect Preview. Amazon Augmented AI (A2I) – This service will make it easy for you to build workflows that use a human to review low-confidence machine learning predictions. The service includes built-in workflows for common machine learning use cases including content moderation (via Amazon Rekognition) and text extraction (via Amazon Textract), and also allows you to create your own. 
You can use a pool of reviewers within your own organization, or you can access the workforce of over 500,000 independent contractors who are already performing machine learning tasks through Amazon Mechanical Turk. You can also make use of workforce vendors that are pre-screened by AWS for quality and adherence to security procedures. To learn more, read about Amazon Augmented AI (Amazon A2I), or visit the A2I Console to get started. Amazon CodeGuru – This ML-powered service provides code reviews and application performance recommendations. It helps to find the most expensive (computationally speaking) lines of code, and gives you specific recommendations on how to fix or improve them. It has been trained on best practices learned from millions of code reviews, along with code from thousands of Amazon projects and the top 10,000 open source projects. It can identify resource leaks, data race conditions between concurrent threads, and wasted CPU cycles. To learn more, visit the Amazon CodeGuru page. Amazon RDS Proxy – This is a fully managed database proxy that will help you better scale applications, including those built on modern serverless architectures, without worrying about managing connections and connection pools, while also benefiting from faster failover in the event of a database outage. It is highly available and deployed across multiple AZs, and integrates with IAM and AWS Secrets Manager so that you don’t have to embed your database credentials in your code. Amazon RDS Proxy is fully compatible with MySQL protocol and requires no application change. You will be able to create proxy endpoints and start using them in minutes. To learn more, visit the RDS Proxy page. — Jeff;

New – AWS Step Functions Express Workflows: High Performance & Low Cost

We launched AWS Step Functions at re:Invent 2016, and our customers took to the service right away, using it as a core element of their multi-step workflows. Today, we see customers building serverless workflows that orchestrate machine learning training, report generation, order processing, IT automation, and many other multi-step processes. These workflows can run for up to a year, and are built around a workflow model that includes checkpointing, retries for transient failures, and detailed state tracking for auditing purposes. Based on usage and feedback, our customers really like the core Step Functions model. They love the declarative specifications and the ease with which they can build, test, and scale their workflows. In fact, customers like Step Functions so much that they want to use it for high-volume, short-duration use cases such as IoT data ingestion, streaming data processing, and mobile application backends. New Express Workflows Today we are launching Express Workflows as an option to the existing Standard Workflows. Express Workflows use the same declarative specification model (the Amazon States Language) but are designed for those high-volume, short-duration use cases. Here’s what you need to know: Triggering – You can use events and read/write API calls associated with a long list of AWS services to trigger execution of your Express Workflows. Execution Model – Express Workflows use an at-least-once execution model, and will not attempt to automatically retry any failed steps, but you can use Retry and Catch, as described in Error Handling. The steps are not checkpointed, so per-step status information is not available. Successes and failures are logged to CloudWatch Logs, and you have full control over the logging level. Workflow Steps – Express Workflows support many of the same service integrations as Standard Workflows, with the exception of Activity Tasks. 
You can initiate long-running services such as AWS Batch, AWS Glue, and Amazon SageMaker, but you cannot wait for them to complete. Duration – Express Workflows can run for up to five minutes of wall-clock time. They can invoke other Express or Standard Workflows, but cannot wait for them to complete. You can also invoke Express Workflows from Standard Workflows, composing both types in order to meet the needs of your application. Event Rate – Express Workflows are designed to support a per-account invocation rate greater than 100,000 events per second. Accounts are configured for 6,000 events per second by default and we will, as usual, raise it on request. Pricing – Standard Workflows are priced based on the number of state transitions. Express Workflows are priced based on the number of invocations and a GB/second charge based on the amount of memory used to track the state of the workflow during execution. While the pricing models are not directly comparable, Express Workflows will be far more cost-effective at scale. To learn more, read about AWS Step Functions Pricing. As you can see, most of what you already know about Standard Workflows also applies to Express Workflows! You can replace some of your Standard Workflows with Express Workflows, and you can use Express Workflows to build new types of applications. Using Express Workflows I can create an Express Workflow and attach it to any desired events with just a few minutes of work. I simply choose the Express type in the console: Then I define my state machine: I configure the CloudWatch logging, and add a tag: Now I can attach my Express Workflow to my event source. 
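As a sketch of what a state machine definition looks like, here is a minimal Amazon States Language document built as a Python dict. The function name and ARN are hypothetical, and the explicit Retry block is included because, as noted above, Express Workflows do not retry failed steps on their own:

```python
import json

# Illustrative Amazon States Language definition (all names hypothetical).
# Express Workflows use an at-least-once model with no automatic retries,
# so the Retry/Catch policy is declared per state.
definition = {
    "Comment": "Minimal Express Workflow sketch",
    "StartAt": "ProcessRecord",
    "States": {
        "ProcessRecord": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ProcessRecord",
            "Retry": [
                {
                    "ErrorEquals": ["States.TaskFailed"],
                    "IntervalSeconds": 1,
                    "MaxAttempts": 2,
                    "BackoffRate": 2.0,
                }
            ],
            "Catch": [
                {"ErrorEquals": ["States.ALL"], "Next": "HandleFailure"}
            ],
            "End": True,
        },
        "HandleFailure": {"Type": "Fail", "Error": "RecordFailed"},
    },
}

# The console (and the API) accept the definition as a JSON string.
print(definition["States"]["ProcessRecord"]["Type"])  # Task
definition_json = json.dumps(definition)
```

With the AWS SDK for Python, a definition like this can be registered as an Express state machine by passing type='EXPRESS' to the Step Functions create_state_machine call.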
I open the EventBridge Console and create a new rule: I define a pattern that matches PutObject events on a single S3 bucket: I select my Express Workflow as the event target, add a tag, and click Create: The particular event will occur only if I have a CloudTrail trail that is set up to record object-level activity: Then I upload an image to my bucket, and check the CloudWatch Logs group to confirm that my workflow ran as expected: As a more realistic test, I can upload several hundred images at once and confirm that my Lambda functions are invoked with high concurrency: I can also use the new Monitoring tab in the Step Functions console to view the metrics that are specific to the state machine: Available Now You can create and use AWS Step Functions Express Workflows today in all AWS Regions! — Jeff;
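The rule's event pattern can be sketched roughly as follows (the bucket name is hypothetical). The toy matcher below handles only the exact-value subset of EventBridge pattern matching, just to illustrate how such a rule selects PutObject events delivered via CloudTrail:

```python
# Sketch of an EventBridge pattern like the one in the walkthrough.
# S3 object-level API calls reach EventBridge through CloudTrail,
# hence the "AWS API Call via CloudTrail" detail-type.
pattern = {
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": ["PutObject"],
        "requestParameters": {"bucketName": ["my-example-bucket"]},
    },
}

def matches(pattern, event):
    """Minimal matcher for the exact-value subset of EventBridge patterns."""
    for key, expected in pattern.items():
        if key not in event:
            return False
        if isinstance(expected, dict):
            if not matches(expected, event[key]):
                return False
        elif event[key] not in expected:
            return False
    return True

event = {
    "source": "aws.s3",
    "detail-type": "AWS API Call via CloudTrail",
    "detail": {
        "eventSource": "s3.amazonaws.com",
        "eventName": "PutObject",
        "requestParameters": {"bucketName": "my-example-bucket"},
    },
}
print(matches(pattern, event))  # True
```

Real EventBridge matching also supports prefix, numeric, and anything-but filters; the sketch only shows why the uploaded image's PutObject event, and nothing else, triggers the workflow.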

New – Provisioned Concurrency for Lambda Functions

It’s really true that time flies, especially when you don’t have to think about servers: AWS Lambda just turned 5 years old and the team is always looking for new ways to help customers build and run applications in an easier way. As more mission critical applications move to serverless, customers need more control over the performance of their applications. Today we are launching Provisioned Concurrency, a feature that keeps functions initialized and hyper-ready to respond in double-digit milliseconds. This is ideal for implementing interactive services, such as web and mobile backends, latency-sensitive microservices, or synchronous APIs. When you invoke a Lambda function, the invocation is routed to an execution environment to process the request. When a function has not been used for some time, when you need to process more concurrent invocations, or when you update a function, new execution environments are created. The creation of an execution environment takes care of installing the function code and starting the runtime. Depending on the size of your deployment package, and the initialization time of the runtime and of your code, this can introduce latency for the invocations that are routed to a new execution environment. This latency is usually referred to as a “cold start”. For most applications this additional latency is not a problem. For some applications, however, this latency may not be acceptable. When you enable Provisioned Concurrency for a function, the Lambda service will initialize the requested number of execution environments so they can be ready to respond to invocations. Configuring Provisioned Concurrency I create two Lambda functions that use the same Java code and can be triggered by Amazon API Gateway. To simulate a production workload, these functions are repeating some mathematical computation 10 million times in the initialization phase and 200,000 times for each invocation. 
The computation uses Java's Math.random() and conditionals (if ...) to avoid compiler optimizations (such as unrolling the iterations). Each function has 1GB of memory and the size of the code is 1.7MB. I want to enable Provisioned Concurrency for only one of the two functions, so that I can compare how they react to a similar workload. In the Lambda console, I select one of the functions. In the configuration tab, I see the new Provisioned Concurrency settings. I select Add configuration. Provisioned Concurrency can be enabled for a specific Lambda function version or alias (you can't use $LATEST). You can have different settings for each version of a function. Using an alias makes it easier to apply these settings to the correct version of your function. In my case I select the alias live, which I keep updated to the latest version using the AWS SAM AutoPublishAlias function property. For Provisioned Concurrency, I enter 500 and select Save. Now the Provisioned Concurrency configuration is in progress. The execution environments are being prepared to serve concurrent incoming requests based on my input. During this time the function remains available and continues to serve traffic. After a few minutes, the concurrency is ready. With these settings, up to 500 concurrent requests will find an execution environment ready to process them. If I go above that, the usual scaling of Lambda functions still applies. To generate some load, I use an Amazon Elastic Compute Cloud (EC2) instance in the same region. To keep it simple, I use the ab tool bundled with the Apache HTTP Server to call the two API endpoints 10,000 times with a concurrency of 500. Since these are new functions, I expect that: For the function with Provisioned Concurrency enabled and set to 500, my requests are managed by pre-initialized execution environments.
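The shape of those test functions can be sketched in a few lines (the blog's functions are Java; this is an illustrative Python sketch with scaled-down iteration counts). Work done at module scope runs once per execution environment, and that is exactly the cost Provisioned Concurrency pays ahead of the first request:

```python
import random
import time

def expensive_init(iterations=100_000):
    # Heavy setup work, analogous to the 10-million-iteration init phase
    # in the blog's test functions (scaled down here).
    total = 0.0
    for _ in range(iterations):
        x = random.random()
        if x > 0.5:          # a branch, to defeat trivial optimizations
            total += x
    return total

# Module scope: in Lambda this runs once per execution environment,
# so it is the "cold start" cost that Provisioned Concurrency pre-pays.
INIT_STATE = expensive_init()

def handler(event, context=None):
    # Lighter per-invocation work, analogous to the 200,000 iterations
    # per request in the blog's test.
    return {"init": INIT_STATE > 0, "work": expensive_init(2_000) >= 0}

start = time.perf_counter()
result = handler({})
elapsed = time.perf_counter() - start  # init cost is NOT paid here
```

A cold invocation pays for both the module-scope block and the handler body; a warm (or pre-provisioned) one pays only for the handler body, which is the gap the percentile tables below make visible.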
For the other function, which has Provisioned Concurrency disabled, about 500 execution environments need to be provisioned, adding latency to roughly the same number of invocations, about 5% of the total. One cool feature of the ab tool is that it reports the percentage of the requests served within a certain time. That is a very good way to look at API latency, as described in this post on Serverless Latency by Tim Bray. Here are the results for the function with Provisioned Concurrency disabled:

Percentage of the requests served within a certain time (ms)
  50%    351
  66%    359
  75%    383
  80%    396
  90%    435
  95%   1357
  98%   1619
  99%   1657
 100%   1923 (longest request)

Looking at these numbers, I see that 50% of the requests are served within 351ms, 66% of the requests within 359ms, and so on. It's clear that something happens when I look at 95% or more of the requests: the time suddenly increases by about a second. These are the results for the function with Provisioned Concurrency enabled:

Percentage of the requests served within a certain time (ms)
  50%    352
  66%    368
  75%    382
  80%    387
  90%    400
  95%    415
  98%    447
  99%    513
 100%    593 (longest request)

Let's compare those numbers in a graph. As expected for my test workload, I see a big difference in the response time of the slowest 5% of the requests (between 95% and 100%), where the function with Provisioned Concurrency disabled shows the latency added by the creation of new execution environments and the (slow) initialization in my function code. In general, the amount of latency added depends on the runtime you use, the size of your code, and the initialization required by your code to be ready for a first invocation. As a result, the added latency can be more, or less, than what I experienced here. The number of invocations affected by this additional latency depends on how often the Lambda service needs to create new execution environments.
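Those "served within" tables can be reproduced from raw latency samples. Here is a small percentile helper, with made-up sample data chosen so the results line up with the first table:

```python
def percentile(samples, pct):
    """Latency (ms) under which `pct` percent of requests completed."""
    ordered = sorted(samples)
    # take the value at the ceiling of the percentile rank, like ab does
    index = max(0, -(-pct * len(ordered) // 100) - 1)
    return ordered[index]

# 100 synthetic samples shaped to match the first table above
latencies = ([351] * 50 + [359] * 16 + [383] * 9 + [396] * 5 +
             [435] * 10 + [1357] * 5 + [1619] * 3 + [1657] + [1923])

print(percentile(latencies, 50),
      percentile(latencies, 95),
      percentile(latencies, 100))  # 351 1357 1923
```

The jump between the 90th and 95th percentiles in the synthetic data mirrors the real tables: the slow tail is exactly the invocations that landed on a freshly created execution environment.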
Usually that happens when the number of concurrent invocations increases beyond what is already provisioned, or when you deploy a new version of a function. A small percentage of slow response times (generally referred to as tail latency) really makes a difference in the end user experience. Over an extended period of time, most users are affected during some of their interactions. With Provisioned Concurrency enabled, the user experience is much more stable. Provisioned Concurrency is a Lambda feature and works with any trigger. For example, you can use it with WebSockets APIs, GraphQL resolvers, or IoT Rules. This feature gives you more control when building serverless applications that require low latency, such as web and mobile apps, games, or any service that is part of a complex transaction. Available Now Provisioned Concurrency can be configured using the console, the AWS Command Line Interface (CLI), or the AWS SDKs, for new or existing Lambda functions, and is available today in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), and South America (São Paulo). You can use the AWS Serverless Application Model (SAM) and the SAM CLI to test, deploy, and manage serverless applications that use Provisioned Concurrency. With Application Auto Scaling you can automate configuring the required concurrency for your functions. Two scaling policy types are supported: Target Tracking and Scheduled Scaling. Using these policies, you can automatically increase the amount of concurrency during times of high demand and decrease it when demand decreases.
You can also use Provisioned Concurrency today with AWS Partner tools, including configuring Provisioned Concurrency settings with the Serverless Framework and Terraform, or viewing metrics with Datadog, Epsagon, Lumigo, New Relic, SignalFx, SumoLogic, and Thundra. You only pay for the amount of concurrency that you configure and for the period of time that you configure it. Pricing in US East (N. Virginia) is $0.015 per GB-hour for Provisioned Concurrency and $0.035 per GB-hour for Duration. The number of requests is charged at the same rate as for normal functions. You can find more information on the Lambda pricing page. This new feature enables developers to use Lambda for a variety of workloads that require highly consistent latency. Let me know what you are going to use it for! — Danilo
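As a worked example of the pricing model, using the US East (N. Virginia) rates quoted above and made-up configuration numbers:

```python
# Worked cost example with the US East (N. Virginia) rates quoted above.
# The configuration (500 concurrency, 1 GB, 8 hours, 120 GB-hours of
# actual execution time) is made up for illustration.
PROVISIONED_RATE = 0.015   # USD per GB-hour of Provisioned Concurrency
DURATION_RATE = 0.035      # USD per GB-hour of Duration

concurrency = 500          # configured concurrent executions
memory_gb = 1.0            # memory per execution environment
hours = 8                  # time the configuration stays enabled
busy_gb_hours = 120.0      # actual execution time consumed

provisioned_cost = concurrency * memory_gb * hours * PROVISIONED_RATE
duration_cost = busy_gb_hours * DURATION_RATE
print(round(provisioned_cost, 2), round(duration_cost, 2))  # 60.0 4.2
```

In other words, the provisioned charge accrues for as long as the configuration is enabled, whether or not requests arrive, while the duration charge tracks actual execution, so it pays to scale the configured concurrency down (for example, with Scheduled Scaling) outside peak hours.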


Recommended Content