Amazon Web Services Blog

AWS Backup – Automate and Centrally Manage Your Backups

AWS gives you the power to easily and dynamically create file systems, block storage volumes, relational databases, NoSQL databases, and other resources that store precious data. You can create them on a moment’s notice as the need arises, giving you access to as much storage as you need and opening the door to large-scale cloud migration. When you bring your sensitive data to the cloud, you need to make sure that you continue to meet business and regulatory compliance requirements, and you definitely want to make sure that you are protected against application errors.

While you can build your own backup tools using the snapshot operations built into many of the services that I listed above, creating an enterprise-wide backup strategy and the tools to implement it still takes a lot of work. We are changing that.

New AWS Backup

AWS Backup is designed to help you automate and centrally manage your backups. You can create policy-driven backup plans, monitor the status of ongoing backups, verify compliance, and find and restore backups, all from a central console. Using a combination of the existing AWS snapshot operations and new, purpose-built backup operations, Backup backs up EBS volumes, EFS file systems, RDS & Aurora databases, DynamoDB tables, and Storage Gateway volumes to Amazon Simple Storage Service (S3), with the ability to tier older backups to Amazon Glacier. Because Backup includes support for Storage Gateway volumes, you can include your existing, on-premises data in the backups that you create.

Each backup plan includes one or more backup rules. The rules express the backup schedule, frequency, and backup window. Resources to be backed up can be identified explicitly or in a policy-driven fashion using tags. Lifecycle rules control storage tiering and expiration of older backups. Backup gathers the set of snapshots and the metadata that goes along with the snapshots into collections that define a recovery point.
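The plan-plus-rules structure described above maps directly onto the AWS Backup CreateBackupPlan API. Here is a minimal sketch in Python; the plan name, cron schedule, and lifecycle values are illustrative placeholders, and the boto3 call itself is left commented out because it requires credentials and an AWS account:

```python
import json

def build_backup_plan(name, vault="Default", schedule="cron(0 5 * * ? *)",
                      cold_after_days=30, delete_after_days=180):
    # Shape of the BackupPlan argument to AWS Backup's CreateBackupPlan API.
    # The rule mirrors the example above: run daily, move to cold storage
    # after a month, expire after six months.
    return {
        "BackupPlanName": name,
        "Rules": [{
            "RuleName": "MainBackup",
            "TargetBackupVaultName": vault,
            "ScheduleExpression": schedule,  # daily at 05:00 UTC
            "Lifecycle": {
                "MoveToColdStorageAfterDays": cold_after_days,
                "DeleteAfterDays": delete_after_days,
            },
        }],
    }

plan = build_backup_plan("MyDailyPlan")
# To create it for real (requires credentials):
#   import boto3
#   boto3.client("backup").create_backup_plan(BackupPlan=plan)
print(json.dumps(plan, indent=2))
```

The same dictionary can also be pasted into the console's "Define plan using JSON" option.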
You get lots of control, so that you can define your daily / weekly / monthly backup strategy, rest assured that your critical data is being backed up in accord with your requirements, and restore that data on an as-needed basis. Backups are grouped into vaults, each encrypted by a KMS key.

Using AWS Backup

You can get started with AWS Backup in minutes. Open the AWS Backup Console and click Create backup plan. I can build a plan from scratch, start from an existing plan, or define one using JSON. I’ll Build a new plan, and start by giving my plan a name.

Now I create the first rule for my backup plan. I call it MainBackup, indicate that I want it to run daily, define the lifecycle (transition to cold storage after 1 month, expire after 6 months), and select the Default vault. I can tag the recovery points that are created as a result of this rule, and I can also tag the backup plan itself. I’m all set, so I click Create plan to move forward.

At this point my plan exists and is ready to run, but it has just one rule and does not have any resource assignments (so there’s nothing to back up). Now I need to indicate which of my resources are subject to this backup plan. I click Assign resources, and then create one or more resource assignments. Each assignment is named and references an IAM role that is used to create the recovery point. Resources can be denoted by tag or by resource ID, and I can use both in the same assignment. I enter all of the values and click Assign resources to wrap up.

The next step is to wait for the first backup job to run (I cheated by editing my backup window in order to get this post done as quickly as possible). I can peek at the Backup Dashboard to see the overall status.

Backups On Demand

I also have the ability to create a recovery point on demand for any of my resources.
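Under the hood, an on-demand backup corresponds to the StartBackupJob API. A minimal sketch of the request it takes; the vault name, ARNs, and account number are placeholders, and the call is commented out since it needs credentials:

```python
def on_demand_backup_request(vault, resource_arn, iam_role_arn):
    # Request shape for AWS Backup's StartBackupJob API; every identifier
    # below is a placeholder, not a real resource.
    return {
        "BackupVaultName": vault,
        "ResourceArn": resource_arn,
        "IamRoleArn": iam_role_arn,
    }

req = on_demand_backup_request(
    "Default",
    "arn:aws:dynamodb:us-east-1:123456789012:table/MyTable",
    "arn:aws:iam::123456789012:role/BackupRole",
)
# import boto3
# boto3.client("backup").start_backup_job(**req)  # requires credentials
```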
I choose the desired resource and designate a vault, then click Create an on-demand backup. I indicated that I wanted to create the backup right away, so a job is created, and it runs to completion within minutes.

Inside a Vault

I can also view my collection of vaults, each of which contains multiple recovery points. I can examine the list of recovery points in a vault, inspect a recovery point, and then click Restore to restore my table (in this case). I’ve shown you the highlights, and you can discover the rest for yourself!

Things to Know

Here are a few things to keep in mind when you are evaluating AWS Backup:

Services – We are launching with support for EBS volumes, RDS databases, DynamoDB tables, EFS file systems, and Storage Gateway volumes. We’ll add support for additional services over time, and welcome your suggestions. Backup uses the existing snapshot operations for all services except EFS file systems.

Programmatic Access – You can access all of the functions that I showed you above using the AWS Command Line Interface (CLI) and the AWS Backup APIs. The APIs are powerful integration points for your existing backup tools and scripts.

Regions – Backups work within the scope of a particular AWS Region, with plans in the works to enable several different types of cross-region functionality in 2019.

Pricing – You pay the normal AWS charges for backups that are created using the built-in AWS snapshot facilities. For Amazon EFS, there’s a low, per-GB charge for warm storage and an even lower charge for cold storage.

Available Now

AWS Backup is available now and you can start using it today!

— Jeff;

Behind the Scenes & Under the Carpet – The CenturyLink Network that Powered AWS re:Invent 2018

If you are a long-time reader, you may have already figured out that I am fascinated by the behind-the-scenes and beneath-the-streets activities that enable and power so much of our modern world. For example, late last year I told you how The AWS Cloud Goes Underground at re:Invent and shared some information about the communication and network infrastructure that was used to provide top-notch connectivity to re:Invent attendees and to those watching the keynotes and live streams from afar.

Today, with re:Invent 2018 in the rear-view mirror (and planning for next year already underway), I would like to tell you how 5-time re:Invent Network Services Provider CenturyLink designed and built a redundant, resilient network that used AWS Direct Connect to provide 180 Gbps of bandwidth and supported over 81,000 devices connected across eight venues. Above the ground, we worked closely with ShowNets to connect their custom network and WiFi deployment in each venue to the infrastructure provided by CenturyLink.

The 2018 re:Invent Network

This year, the network included diverse routes to multiple AWS regions, with a brand-new multi-node metro fiber ring that encompassed the Sands Expo, Wynn Resort, Circus Circus, Mirage, Vdara, Bellagio, Aria, and MGM Grand facilities. Redundant 10 Gbps connections to each venue and to multiple AWS Direct Connect locations were used to ensure high availability. The network was provisioned using CenturyLink Cloud Connect Dynamic Connections. Here’s a network diagram (courtesy of CenturyLink) that shows the metro fiber ring and the connectivity:

The network did its job, and supported keynotes, live streams, breakout sessions, hands-on labs, hackathons, workshops, and certification exams. Here are the final numbers, as measured on-site at re:Invent 2018:

Live Streams – Over 60K views from over 100 countries.
Peak Data Transfer – 9.5 Gbps across six 10 Gbps connections.
Total Data Transfer – 160 TB.
Thanks again to our Managed Service Partner for building and running the robust network that supported our customers, partners, and employees at re:Invent! — Jeff;

Podcast #289: A Look at Amazon FSx For Windows File Server

In this episode, Simon speaks with Andrew Crudge (Senior Product Manager, FSx) about this newly released service, the capabilities available to customers, and how to make the best use of it in your environment.

Additional Resources

FSx for Windows File Server
FSx Getting Started
FSx Features
FSx Pricing
FSx FAQs
FSx for Windows Tech Talk
FSx Technical Documentation
Join the Discussion

About the AWS Podcast

The AWS Podcast is a cloud platform podcast for developers, dev ops, and cloud professionals seeking the latest news and trends in storage, security, infrastructure, serverless, and more. Join Simon Elisha and Jeff Barr for regular updates, deep dives and interviews. Whether you’re building machine learning and AI models, open source projects, or hybrid cloud solutions, the AWS Podcast has something for you. Like the Podcast? Rate us on iTunes and send us your suggestions, show ideas, and comments. We want to hear from you!

New – Amazon DocumentDB (with MongoDB Compatibility): Fast, Scalable, and Highly Available

A glance at the AWS Databases page will show you that we offer an incredibly wide variety of databases, each one purpose-built to address a particular need! In order to help you build the coolest and most powerful applications, you can mix and match relational, key-value, in-memory, graph, time series, and ledger databases.

Introducing Amazon DocumentDB (with MongoDB compatibility)

Today we are launching Amazon DocumentDB (with MongoDB compatibility), a fast, scalable, and highly available document database that is designed to be compatible with your existing MongoDB applications and tools. Amazon DocumentDB uses a purpose-built SSD-based storage layer, with 6x replication across 3 separate Availability Zones. The storage layer is distributed, fault-tolerant, and self-healing, giving you the performance, scalability, and availability needed to run production-scale MongoDB workloads.

Each MongoDB database contains a set of collections. Each collection (similar to a relational database table) contains a set of documents, each in the JSON-like BSON format. For example:

{
  name: "jeff",
  full_name: {first: "jeff", last: "barr"},
  title: "VP, AWS Evangelism",
  email: "",
  city: "Seattle",
  foods: ["chocolate", "peanut butter"]
}

Each document can have a unique set of field-value pairs and data; there are no fixed or predefined schemas. The MongoDB API includes the usual CRUD (create, read, update, and delete) operations along with a very rich query model. This is just the tip of the iceberg (the MongoDB API is very powerful and flexible), so check out the list of supported MongoDB operations, data types, and functions to learn more.

All About Amazon DocumentDB

Here’s what you need to know about Amazon DocumentDB:

Compatibility – Amazon DocumentDB is compatible with version 3.6 of MongoDB.

Scalability – Storage can be scaled from 10 GB up to 64 TB in increments of 10 GB.
You don’t need to preallocate storage or monitor free space; Amazon DocumentDB will take care of that for you. You can choose between six instance sizes (15.25 GiB to 488 GiB of memory), and you can create up to 15 read replicas. Storage and compute are decoupled, and you can scale each one independently and as needed.

Performance – Amazon DocumentDB stores database changes as a log stream, allowing you to process millions of reads per second with millisecond latency. The storage model provides a nice performance increase without compromising data durability, and greatly enhances overall scalability.

Reliability – The 6-way storage replication ensures high availability. Amazon DocumentDB can fail over from a primary to a replica within 30 seconds, and supports MongoDB replica set emulation so that applications can handle failover quickly.

Fully Managed – Like the other AWS database services, Amazon DocumentDB is fully managed, with built-in monitoring, fault detection, and failover. You can set up daily snapshot backups, take manual snapshots, and use either one to create a fresh cluster if necessary. You can also do point-in-time restores (with second-level resolution) to any point within the 1-35 day backup retention period.

Secure – You can choose to encrypt your active data, snapshots, and replicas with the KMS key of your choice when you create each of your Amazon DocumentDB clusters. Authentication is enabled by default, as is encryption of data in transit.

Compatible – As I said earlier, Amazon DocumentDB is designed to work with your existing MongoDB applications and tools. Just be sure to use drivers intended for MongoDB 3.4 or newer. Internally, Amazon DocumentDB implements the MongoDB 3.6 API by emulating the responses that a MongoDB client expects from a MongoDB server.

Creating An Amazon DocumentDB (with MongoDB compatibility) Cluster

You can create a cluster from the Console, Command Line, CloudFormation, or by making a call to the CreateDBCluster function.
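For the API route, here is a hedged sketch of what a CreateDBCluster request looks like via boto3's DocumentDB client. The cluster identifier, username, password, and retention period are placeholders, and the call itself is commented out because it needs credentials:

```python
def docdb_cluster_request(cluster_id, username, password):
    # Core parameters for the DocumentDB CreateDBCluster API; values are
    # illustrative placeholders. Instances are added to the cluster
    # separately (e.g. via CreateDBInstance).
    return {
        "DBClusterIdentifier": cluster_id,
        "Engine": "docdb",
        "MasterUsername": username,
        "MasterUserPassword": password,
        "StorageEncrypted": True,        # encryption is on by default
        "BackupRetentionPeriod": 7,      # days, within the 1-35 day range
    }

req = docdb_cluster_request("my-docdb-cluster", "master", "change-me")
# import boto3
# boto3.client("docdb").create_db_cluster(**req)  # requires credentials
```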
I’ll use the Amazon DocumentDB Console today. I open the console and click Launch Amazon DocumentDB to get started. I name my cluster, choose the instance class, and specify the number of instances (one is the primary and the rest are replicas). Then I enter a master username and password. I can use any of the available instance classes for my cluster.

At this point I can click Create cluster to use default settings, or I can click Show advanced settings for additional control. I can choose any desired VPC, subnets, and security group. I can also set the port and parameter group for the cluster, control encryption (enabled by default), set the backup retention period, and establish the backup window for point-in-time restores. I can also control the maintenance window for my new cluster. Once I am ready, I click Create cluster to proceed. My cluster starts out in creating status, and switches to available very quickly, as do the instances in the cluster.

Connecting to a Cluster

With the cluster up and running, I install the mongo shell on an EC2 instance (details depend on your distribution) and fetch a certificate so that I can make a secure connection:

$ wget

The console shows me the command that I need to use to make the connection. I simply customize the command with the password that I specified when I created the cluster. From there I can use any of the mongo shell commands to insert, query, and examine data. I inserted some very simple documents and then ran an equally simple query (I’m sure you can do a lot better).

Now Available

Amazon DocumentDB (with MongoDB compatibility) is available now and you can start using it today in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) Regions. Pricing is based on the instance class, storage consumption for current documents and snapshots, I/O operations, and data transfer.

— Jeff;

Western Digital HDD Simulation at Cloud Scale – 2.5 Million HPC Tasks, 40K EC2 Spot Instances

Earlier this month my colleague Bala Thekkedath published a story about Extreme Scale HPC and talked about how AWS customer Western Digital built a cloud-scale HPC cluster on AWS and used it to simulate crucial elements of upcoming head designs for their next-generation hard disk drives (HDD). The simulation described in the story encompassed a little over 2.5 million tasks, and ran to completion in just 8 hours on a million-vCPU Amazon EC2 cluster.

As Bala shared in his story, much of the simulation work at Western Digital revolves around the need to evaluate different combinations of technologies and solutions that comprise an HDD. The engineers focus on cramming ever-more data into the same space, improving storage capacity and increasing transfer speed in the process. Simulating millions of combinations of materials, energy levels, and rotational speeds allows them to pursue the highest density and the fastest read-write times. Getting the results more quickly allows them to make better decisions and lets them get new products to market more rapidly than before.

Here’s a visualization of Western Digital’s energy-assisted recording process in action. The top stripe represents the magnetism; the middle one represents the added energy (heat); and the bottom one represents the actual data written to the medium via the combination of magnetism and heat:

I recently spoke to my colleagues and to the teams at Western Digital and Univa who worked together to make this record-breaking run a reality. My goal was to find out more about how they prepared for this run, see what they learned, and to share it with you in case you are ready to run a large-scale job of your own.

Ramping Up

About two years ago, the Western Digital team was running clusters as big as 80K vCPUs, powered by EC2 Spot Instances in order to be as cost-effective as possible. They had grown to the 80K vCPU level after repeated, successful runs with 8K, 16K, and 32K vCPUs.
After these early successes, they decided to shoot for the moon, push the boundaries, and work toward a one million vCPU run. They knew that this would stress and tax their existing tools, and settled on a find/fix/scale-some-more methodology.

Univa’s Grid Engine is a batch scheduler. It is responsible for keeping track of the available compute resources (EC2 instances) and dispatching work to the instances as quickly and efficiently as possible. The goal is to get the job done in the smallest amount of time and at the lowest cost. Univa’s Navops Launch supports container-based computing and also played a critical role in this run by allowing the same containers to be used for Grid Engine and AWS Batch.

One interesting scaling challenge arose when 50K hosts created concurrent connections to the Grid Engine scheduler. Once running, the scheduler can dispatch up to 3000 tasks per second, with an extra burst in the (relatively rare) case that an instance terminates unexpectedly and signals the need to reschedule 64 or more tasks as quickly as possible. The team also found that referencing worker instances by IP address allowed them to sidestep some internal (AWS) rate limits on the number of DNS lookups per Elastic Network Interface.

The entire simulation is packed into a Docker container for ease of use. When newly launched instances come online they register their specs (instance type, IP address, vCPU count, memory, and so forth) in an ElastiCache for Redis cluster. Grid Engine uses this data to find and manage instances; this is more efficient and scalable than calling DescribeInstances continually. The simulation tasks read and write data from Amazon Simple Storage Service (S3), taking advantage of S3’s ability to store vast amounts of data and to handle any conceivable request rate.

Inside a Simulation Task

Each potential head design is described by a collection of parameters; the overall simulation run consists of an exploration of this parameter space.
The results of the run help the designers to find designs that are buildable, reliable, and manufacturable. This particular run focused on modeling write operations. Each simulation task ran for 2 to 3 hours, depending on the EC2 instance type. In order to avoid losing work if a Spot Instance is about to be terminated, the tasks checkpoint themselves to S3 every 15 minutes, with a bit of extra logic to cover the important case where the job finishes after the termination signal but before the actual shutdown.

Making the Run

After just 6 weeks of planning and prep (including multiple large-scale AWS Batch runs to generate the input files), the combined Western Digital / Univa / AWS team was ready to make the full-scale run. They used an AWS CloudFormation template to start Grid Engine and launch the cluster. Due to the Redis-based tracking that I described earlier, they were able to start dispatching tasks to instances as soon as they became available. The cluster grew to one million vCPUs in 1 hour and 32 minutes and ran full-bore for 6 hours:

When there were no more undispatched tasks available, Grid Engine began to shut the instances down, reaching the zero-instance point in about an hour. During the run, Grid Engine was able to keep the instances fully supplied with work over 99% of the time. The run used a combination of C3, C4, M4, R3, R4, and M5 instances. Here’s the overall breakdown over the course of the run:

The job spanned all six Availability Zones in the US East (N. Virginia) Region. Spot bids were placed at the On-Demand price. Over the course of the run, about 1.5% of the instances in the fleet were terminated and automatically replaced; the vast majority of the instances stayed running for the entire time.

And That’s That

This job ran 8 hours and cost $137,307 ($17,164 per hour). The folks I talked to estimated that this was about half the cost of making the run on an in-house cluster, if they had one of that size!
Evaluating the success of the run, Steve Phillpott (CIO of Western Digital) told us:

“Storage technology is amazingly complex and we’re constantly pushing the limits of physics and engineering to deliver next-generation capacities and technical innovation. This successful collaboration with AWS shows the extreme scale, power and agility of cloud-based HPC to help us run complex simulations for future storage architecture analysis and materials science explorations. Using AWS to easily shrink simulation time from 20 days to 8 hours allows Western Digital R&D teams to explore new designs and innovations at a pace unimaginable just a short time ago.”

The Western Digital team behind this one is hiring an R&D Engineering Technologist; they also have many other open positions!

A Run for You

If you want to do a run on the order of 100K to 1M cores (or more), our HPC team is ready to help, as are our friends at Univa. To get started, Contact HPC Sales!

— Jeff;

Learn about AWS Services & Solutions – January AWS Online Tech Talks

Happy New Year! Join us this January to learn about AWS services and solutions. The AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. These tech talks, led by AWS solutions architects and engineers, feature technical deep dives, live demonstrations, customer examples, and Q&A with AWS experts. Register Now!

Note – All sessions are free and in Pacific Time.

Tech talks this month:

Containers
January 22, 2019 | 9:00 AM – 10:00 AM PT – Deep Dive Into AWS Cloud Map: Service Discovery for All Your Cloud Resources – Learn how to increase your application availability with AWS Cloud Map, a new service that lets you discover all your cloud resources.

Data Lakes & Analytics
January 22, 2019 | 1:00 PM – 2:00 PM PT – Increase Your Data Engineering Productivity Using Amazon EMR Notebooks – Learn how to develop analytics and data processing applications faster with Amazon EMR Notebooks.

Enterprise & Hybrid
January 29, 2019 | 1:00 PM – 2:00 PM PT – Build Better Workloads with the AWS Well-Architected Framework and Tool – Learn how to apply architectural best practices to guide your cloud migration.

IoT
January 29, 2019 | 9:00 AM – 10:00 AM PT – How To Visually Develop IoT Applications with AWS IoT Things Graph – See how easy it is to build IoT applications by visually connecting devices & web services.

Mobile
January 21, 2019 | 11:00 AM – 12:00 PM PT – Build Secure, Offline, and Real Time Enabled Mobile Apps Using AWS AppSync and AWS Amplify – Learn how to easily build secure, cloud-connected data-driven mobile apps using AWS Amplify, GraphQL, and mobile-optimized AWS services.

Networking
January 30, 2019 | 9:00 AM – 10:00 AM PT – Improve Your Application’s Availability and Performance with AWS Global Accelerator – Learn how to accelerate your global latency-sensitive applications by routing traffic across AWS Regions.
Robotics
January 29, 2019 | 11:00 AM – 12:00 PM PT – Using AWS RoboMaker Simulation for Real World Applications – Learn how AWS RoboMaker simulation works and how you can get started with your own projects.

Security, Identity & Compliance
January 23, 2019 | 1:00 PM – 2:00 PM PT – Customer Showcase: How Dow Jones Uses AWS to Create a Secure Perimeter Around Its Web Properties – Learn tips and tricks from a real-life example on how to be in control of your cloud security and automate it on AWS.
January 30, 2019 | 11:00 AM – 12:00 PM PT – Introducing AWS Key Management Service Custom Key Store – Learn how you can generate, store, and use your KMS keys in hardware security modules (HSMs) that you control.

Serverless
January 31, 2019 | 9:00 AM – 10:00 AM PT – Nested Applications: Accelerate Serverless Development Using AWS SAM and the AWS Serverless Application Repository – Learn how to compose nested applications using the AWS Serverless Application Model (SAM), SAM CLI, and the AWS Serverless Application Repository.
January 31, 2019 | 11:00 AM – 12:00 PM PT – Deep Dive Into Lambda Layers and the Lambda Runtime API – Learn how to use Lambda Layers to enable re-use and sharing of code, and how you can build and test Layers locally using the AWS Serverless Application Model (SAM).

Storage
January 28, 2019 | 11:00 AM – 12:00 PM PT – The Amazon S3 Storage Classes – Learn about the Amazon S3 Storage Classes and how to use them to optimize your storage resources.
January 30, 2019 | 1:00 PM – 2:00 PM PT – Deep Dive on Amazon FSx for Windows File Server: Running Windows on AWS – Learn how to deploy Amazon FSx for Windows File Server in some of the most common use cases.

Just-in-time VPN access with an AWS IoT button

Guest post by AWS Community Hero Teri Radichel. Teri Radichel provides cyber security assessments, pen testing, and research services through her company, 2nd Sight Lab. She is also the founder of the AWS Architects Seattle Meetup.

While traveling to deliver cloud security training, I connect to Wi-Fi networks, both in my hotel room and in the classroom, using a VPN. Most companies expose a remote VPN endpoint to the entire internet. I came up with a hypothesis that I could use an AWS IoT button to allow network access only to required locations. What if a VPN user could click to get access, which would trigger opening a network rule, and double-click to disallow network traffic again? I tested out this idea, and you can see the results below.

You might be wondering why you might want to use a VPN for remote cloud administration, and why an AWS IoT button instead of a laptop or mobile application? More on that in my cloud security blog.

Initially, I wanted to use the AWS IoT Enterprise Button because it allows an organization to have control over the certificates used on the devices. It also uses Wi-Fi, and I was hoping to capture the button IP address to grant network access. To do that, I had to be able to prove that the button received the same IP address from the Wi-Fi network as my laptop. Unfortunately, due to captive portals used by some wireless networks, I had problems connecting the button at some locations.

Next, I tried the AT&T LTE-M Button. I was able to get this button to work for my use case, but with a few less than user-friendly requirements. Because this button is on a cellular network rather than the Wi-Fi I use to connect to my VPN in a hotel room, I can’t auto-magically determine the IP address. I must manually set it using the AWS IoT mobile application. The other issue I had is that some networks change the public IP address of the Wi-Fi client after the VPN connection.
The before and after IP addresses are always in the same network block but are not consistent. Instead of using a single IP address, the user has to understand how to figure out what IP range to pass to the button. This proof-of-concept implementation works well but would not be an ideal solution for non-network-savvy users.

The good news is that I don’t have to log into my AWS account with administrative privileges to change the network settings and allow access to the VPN endpoint from my location. The AWS IoT button user has limited permissions, as does the role for the AWS Lambda function that grants access. The AWS IoT button is a form of multi-factor authentication.

Configure the button with a Lambda function

Caveat: This is not a fully tested or production-ready solution, but it is a starting point to give you some ideas for the implementation of on-demand network access to administrative endpoints. The roles for the button and the Lambda function can be much more restrictive than the ones I used in my proof-of-concept implementation.

1. Set up your VPN endpoint (the instructions are beyond the scope of this post). You can use something like OpenVPN or any of the AWS Marketplace options that allow you to create VPNs for remote access.

2. Jeff Barr has already written a most excellent post on how to set up your AWS IoT button with a Lambda function. The process is straightforward.

3. To allow your button to change the network, add the ability for your Lambda role to replace network ACL entries. This role enables the assigned resource to update any rule in the account—not recommended! Limit this further to specific network ACLs. Also, make sure that users can only edit the placement attributes of their own assigned buttons.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "ec2:ReplaceNetworkAclEntry",
      "Resource": "*"
    }
  ]
}

4. Write the code.
For my test, I edited the default code that comes with the button to prove that what I was trying to do would work. I left in the lines that send a text message, but moved them to after the network change. That way, if there’s an error, the user doesn’t get the SNS message. Additionally, you always, always, always want to validate any inputs sent into any application from a client. I added log lines to show where you can add that.

Change these variables to match your environment: vpcid, nacl, rule. The rule parameter is the rule in your network ACL that is updated to use the IP address provided with the button application.

from __future__ import print_function

import boto3
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

sns = boto3.client('sns')

def lambda_handler(event, context):'Received event: ' + json.dumps(event))

    attributes = event['placementInfo']['attributes']
    phone_number = attributes['phoneNumber']
    message = attributes['message']
    ip = attributes['ip']"Need code here to validate this is a valid IP address")"Need code here to validate the message")"Need code here to validate the phone number")

    for key in attributes.keys():
        message = message.replace('{{%s}}' % (key), attributes[key])
    message = message.replace('{{*}}', json.dumps(attributes))

    dsn = event['deviceInfo']['deviceId']
    click_type = event['deviceEvent']['buttonClicked']['clickType']

    vpcid = 'vpc-xxxxxxxxxxxxxxxx'
    nacl = 'acl-xxxxxxxxxxxxxxx'
    rule = 200
    cidr = ip + '/32'
    message = message + " " + cidr

    client = boto3.client('ec2')
    response = client.replace_network_acl_entry(
        NetworkAclId=nacl,
        CidrBlock=cidr,
        DryRun=False,
        Egress=False,
        PortRange={'From': 500, 'To': 500},
        Protocol="17",
        RuleAction="allow",
        RuleNumber=rule
    )

    sns.publish(PhoneNumber=phone_number, Message=message)'SMS has been sent to ' + phone_number)

5. Write the double-click code to remove network access. I left this code as an exercise for the reader.
If you understand how to edit the network ACL in the previous step, you should be able to write the double-click function to disallow traffic by changing the line RuleAction="allow" to RuleAction="deny". You have now blocked access to the network port that allows remote users to connect to the VPN.

Test the button

1. Get the public IP address for your current network by going to a site like For this post, assume that you only need a single IP address. However, you could easily change the code above to allow for an IP range instead (or in other words, a CIDR block).
2. Log in to the phone app for the button.
3. Choose Projects and select your button project. Mine is named vpn2.
4. The project associated with the Lambda function that you assigned to the button requires the following placement attributes:
• message: Network updated!
• phoneNumber: The phone number to receive the text.
• ip: The IP address from
5. Select an existing attribute to change it, or choose + Add Placement Attribute to add a new one.
6. Press your AWS IoT button to trigger the Lambda function. If it runs without error, you get the text message with the IP address that you entered.
7. Check your VPC network ACL rule to verify the change to the correct IP address.
8. Verify that you can connect to the VPN.
9. Assuming you implemented the double-click function to disable access, double-click the button to change the network ACL rule to “deny” instead of “allow”.

Now you know how to use an AWS IoT button to change a network rule on demand. Hopefully, this post sparks some ideas for adding additional layers of security to your AWS VPN and administrative endpoints!

How to become an AWS expert

This guest post is by AWS Community Hero Michael Wittig. Michael Wittig is co-founder of widdix, a consulting company focused on cloud architecture, DevOps, and software development on AWS. In close collaboration with his brother Andreas Wittig, Michael co-authored Amazon Web Services in Action and maintains a blog where they share their knowledge about AWS with the community. If you are just starting to use AWS today, you might think it's going to be hard to catch up. How can you become an AWS expert? How can you know everything about AWS? I asked myself the same questions some time ago. Let me share my answer on how to become an AWS expert. My story I have used AWS for more than five years. Working with clients of all sizes and industries challenges my AWS knowledge every day. I also maintain several AWS open source projects that I try to keep up-to-date with the ever-improving AWS platform. Let me show you my tips for staying up-to-date with AWS and learning new things. Here are three examples of the most exciting and surprising things I learned about AWS this year: Network Load Balancers, Amazon Linux 2, and Amazon Cloud Directory. Network Load Balancers When I started using AWS, there was one option to load balance HTTP and raw TCP traffic: what is now called a Classic Load Balancer. Since then, the portfolio of load balancers has expanded. You can now also choose the Application Load Balancer to distribute HTTP(S) traffic (including HTTP/2 and WebSockets) or the Network Load Balancer, which operates on layer 4 to load balance TCP traffic. When reading the Network Load Balancer announcement, I found myself interested in this shiny new thing. And that's the first important part of learning something new: if you are interested in the topic, it's much easier to learn. Tip #1: Pick the topics that interest you. When I'm interested in a topic, I dive into the documentation and read it from top to bottom.
It can take a few hours to finish reading before you can start using the new service or feature. However, you then know about all the concepts, best practices, and pitfalls, which saves you time in the long run. Tip #2: Reading the documentation is a good investment. Can I remember everything that I read? No. For example, there is one documented limitation to keep in mind when using the Network Load Balancer: Internal load balancers do not support hairpinning or loopback. I read about it and still ran into it. Sometimes, I have to learn the hard way as well. Amazon Linux 2 Amazon Linux 2 is the successor of Amazon Linux. Both distributions come with a superb AWS integration, a secure default configuration, and regular security updates. You can open AWS Support tickets if you run into any problems. So, what’s new with Amazon Linux 2? You get long-term support for five years and you can now run a copy of Amazon Linux 2 on your local machine or on premises. The most significant changes are the replacement of SysVinit with systemd and a new way to install additional software, also known as the extras library. The systemd init system was all new to me. I decided that it was time to change that and I remembered a session from the local AWS User Group in my city about systemd that I had missed. Luckily, I knew the speaker well. I asked Thorsten a few questions to get an idea about the topics I should learn about to understand how systemd works. There is always someone who knows what you want to learn. You have to find that person. I encourage you to connect with your local AWS community. Tip #3: It’s easier to learn if you have a network of people to ask questions of and get inspired by. Amazon Cloud Directory One of my projects this year was all about hierarchical data. I was looking for a way to store this kind of data in AWS, and I discovered Amazon Cloud Directory. Cloud Directory was all new to me and seemed difficult to learn about. I read all of the documentation. 
Still, it was painful and I wanted to give up a few times. That’s normal. That’s why I reward myself from time to time (for example, read one more hour of docs and then go for a walk). Tip #4: Learning a new thing is hard at first. Keep going. Cloud Directory is a fully managed, hierarchical data store on AWS. Hierarchical data is connected using parent-child relationships. Let me give you an example. Imagine a chat system with teams and channels. The following figure shows a Cloud Directory data model for the imaginary chat system. When you have a good understanding of a topic, it’s time to master it by using it. You also learn so much while explaining a concept to someone else. That’s why I wrote a blog post, A neglected serverless data store: Cloud Directory. Tip #5: Apply your knowledge. Share your knowledge. Teach others. Summary Becoming an AWS expert is a journey without a final destination. There is always something more to learn. AWS is a huge platform with 100+ services and countless capabilities. The offerings are constantly changing. I want to encourage you to become an AWS expert. Why not start with one of the new services released at re:Invent this year? Pick the one that is most interesting for you. Read the documentation. Ask questions of others. Be inspired by others. Apply your knowledge. Share with a blog post or a talk to your local AWS user group. Isn’t this what an expert does?

Serverless and startups, the beginning of a beautiful friendship

Guest post by AWS Serverless Hero Slobodan Stojanović. Slobodan is the co-author of the book Serverless Applications with Node.js; CTO of Cloud Horizon, a software development studio; and CTO of Vacation Tracker, a Slack-based, leave management app. Slobodan is excited about serverless because it allows him to build software faster and cheaper. He often writes about serverless and talks about it at conferences. Serverless seems to be perfect for startups. The pay-per-use pricing model and infrastructure that costs you nothing if no one is using your app make it cheap for early-stage startups. On the other side, it's fully managed and scales automatically, so you don't have to be afraid of large marketing campaigns or unexpected traffic. That's why we decided to use serverless when we started working on our first product: Vacation Tracker. Vacation Tracker is a Slack-based app that helps you to track and manage your team's vacations and days off. Both our Slack app and web-based dashboard needed an API, so an AWS Lambda function with an Amazon API Gateway trigger was a logical starting point. API Gateway provides a public API. Each time the API receives a request, the Lambda function is triggered to answer that request. Our app is focused on small and medium teams and is the kind of app that you use at most a few times per day. Periodic usage makes the pay-per-use, serverless pricing model a big win for us because both API Gateway and Lambda cost us $0 initially. Start small, grow tall We decided to start small, with a simple prototype. Our prototype was a real Slack app with a few hardcoded actions and a little calendar. We used Claudia.js and Bot Builder to build it. Claudia.js is a simple tool for deploying Node.js serverless functions to Lambda and API Gateway. After we finished our prototype, we published a landing page. But we continued building our product even as users signed up for the closed beta access.
Just a few months later, we had a bunch of serverless functions in production: a chatbot, a dashboard API, Slack notifications, a few tasks for Stripe billing, and so on. Each of these functions had its own triggers and roles. While our app was working without issues, it was getting harder and harder to deploy a new stage. It was clear that we had to organize our app better. So, our Vacation Tracker koala met the AWS SAM squirrel. Herding Lambda functions We started by mapping all our services. Then, we tried to group them into flows. We decided to migrate piece by piece to the AWS Serverless Application Model (AWS SAM), an open-source framework for building serverless apps on AWS. Because AWS SAM is language-agnostic, it doesn't know how to handle Node.js dependencies, so we used Claudia's pack command as a build step. Grouping services in serverless apps made deployments easy again. With just a single command, we had a new environment ready for our tester. Soon after, we had AWS CloudFormation templates for a Slack chatbot, an API, a Stripe-based payment flow, a notifications flow, and a few other flows for our app. Like other startups, Vacation Tracker's goal is to be able to evolve fast and adapt to user needs. To do so, you often need to run experiments and change things on the fly. With that in mind, our goal was to extract some common functionality from the app flows into reusable components. For example, as we used multiple Slack slash commands in some of the experiments, we extracted slash-command handling from the Slack chatbot flow into a reusable component. Our Slack slash command component consists of a few elements: an API Gateway API with routes for the Slack slash command and message action webhooks, a Lambda function that handles slash commands, a Lambda function that handles message actions, and an Amazon SNS topic for parsed Slack data. With this component, we can run our slash command experiments faster.
Adding a new Slack slash command to our app now requires only deploying the slash command component. We also wrote a few Lambda functions that are triggered by the SNS topic or that handle business logic. While working on Vacation Tracker, we realized the potential value of reusable components. Imagine how fast you would be able to assemble your MVP if you could use standard components that someone else built. Building an app would then mean writing the glue between reused parts and focusing on the business logic that makes your app unique. This dream can become a reality with the AWS Serverless Application Repository, a repository for open source serverless components. Instead of dreaming, we decided to publish a few reusable components to the Serverless Application Repository, starting with the Slack slash command app. But to do so, we had to have a well-tested app, which led us to the next challenge: how do you architect and test a reusable serverless app? Hexagonal architecture to the rescue Our answer was simple: hexagonal architecture, or ports and adapters. It is a pattern that allows an app to be equally driven by users, programs, automated tests, or batch scripts. The app can be developed and tested in isolation from its eventual runtime devices and databases. This makes hexagonal architecture a perfect fit for microservices and serverless apps. Applying this to Vacation Tracker, we ended up with a setup similar to the following diagram. It consists of the following: lambda.js and main.js files. lambda.js has no tests, as it simply wires the dependencies, such as sns-notification-repository.js, and invokes main.js. main.js has its own unit and integration tests. Integration tests use local integrations. Each repository has its own unit and integration tests. In their integration tests, repositories connect to AWS services. For example, the sns-notification-repository.js integration tests connect to Amazon SNS. Each of our functions has at least two files: lambda.js and main.js.
The first file is small and just invokes main.js with all the dependencies (adapters). This file doesn't have automated tests, and it looks similar to the following code snippet:

const {
  httpResponse,
  SnsNotificationRepository
} = require('@serverless-slack-command/common')
const main = require('./main')

async function handler(event) {
  const notification = new SnsNotificationRepository(process.env.notificationTopic)
  await main(event.body, event.headers, event.requestContext, notification)
  return httpResponse()
}

exports.handler = handler

The second, and more critical, file of each function is main.js. This file contains the function's business logic, and it must be well tested. In our case, this file has its own unit and integration tests. But the business logic often relies on external integrations, for example, sending an SNS notification. Instead of testing against all external integrations, we test this file with other adapters, such as a local notification repository. This file looks similar to the following code snippet:

const qs = require('querystring')

async function slashCommand(slackEvent, headers, requestContext, notification) {
  const eventData = qs.parse(slackEvent);
  return await notification.send({
    type: 'SLASH_COMMAND',
    payload: eventData,
    metadata: {
      headers,
      requestContext
    }
  })
}

module.exports = slashCommand

Adapters for external integrations have their own unit and integration tests, including tests that check the integration with the AWS service. This way, we minimized the number of tests that rely on AWS services but still kept our app covered with all the necessary tests. And they lived happily ever after… Migration to AWS SAM simplified and improved our deployment process. Setting up a new environment now takes minutes, and it can be reduced further in the future by nesting AWS CloudFormation stacks. Development and testing for our components are easy using hexagonal architecture.
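The same ports-and-adapters split can be sketched in a few lines of Python (the names here are illustrative, not from the Vacation Tracker code base): the business logic depends only on a notification "port," so a test can inject a local in-memory adapter instead of the SNS-backed one:

```python
class LocalNotificationRepository:
    """Test adapter: records notifications in memory instead of Amazon SNS."""
    def __init__(self):
        self.sent = []

    def send(self, notification):
        self.sent.append(notification)
        return notification

def slash_command(event_body, notification_repository):
    """Business logic (the 'main' file): wrap the command, hand it to the port."""
    notification = {'type': 'SLASH_COMMAND', 'payload': event_body}
    return notification_repository.send(notification)

# A unit test needs no AWS access at all:
repo = LocalNotificationRepository()
slash_command({'command': '/vacation'}, repo)
```

In production, the thin entry point would wire in an SNS-backed adapter exposing the same send method, which is exactly what the lambda.js/main.js split above does.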
Reusable components and the Serverless Application Repository put the cherry on top of our serverless cake. This could be the start of a beautiful friendship between serverless and startups. With serverless, your startup infrastructure is fully managed, and you pay for it only while someone is using your app. The serverless pricing model allows you to start cheap. With the Serverless Application Repository, you can build your MVPs faster, as you can reuse existing components. These combined benefits give you superpowers and enough velocity to be able to compete with other products built by larger teams with bigger budgets. We are happy to see what startups can build (and open source) using the Serverless Application Repository. In the meantime, you can see the source of our first open source serverless component on GitHub. And if you want to try Vacation Tracker, you can double your free trial period using the AWS_IS_AWESOME promo code.

Boost your infrastructure with the AWS CDK

This guest post is by AWS Container Hero Philipp Garbe. Philipp works as Lead Platform Engineer at Scout24 in Germany. He is driven by technologies and tools that allow him to release faster and more often. He expects that every commit automatically goes into production. You can find him on Twitter at @pgarbe. Infrastructure as code (IaC) has been adopted by many teams in the last few years. It makes provisioning your infrastructure easy and helps to keep your environments consistent. But by using declarative templates, you might still miss many practices that you are used to for "normal" code. You've probably already felt the pain that each AWS CloudFormation template is just a copy and paste from your last project or from Stack Overflow. But can you trust these snippets? How can you roll out improvements or even security fixes through your code base? How can you share best practices within your company or the community? Fortunately for everyone, AWS published the beta of an important addition to AWS CloudFormation: the AWS Cloud Development Kit (AWS CDK). What's the big deal about the AWS CDK? All your best practices about how to write good AWS CloudFormation templates can now easily be shared within your company or the developer community. At the same time, you can also benefit from others doing the same thing. For example, think about Amazon DynamoDB. It should be easy to set up in AWS CloudFormation, right? Just some lines in your template. But wait. When you're already in production, you realize that you've got to set up automatic scaling, regular backups, and most importantly, alarms for all relevant metrics. This can amount to several hundred lines. Think ahead: maybe you've got to create another application that also needs a DynamoDB database. Do you copy and paste all that YAML code? What happens later, when you find some bugs in your template? Do you apply the fix to both code bases?
With the AWS CDK, you're able to write a "construct" for your best practice, production-ready DynamoDB database. Share it as an npm package with your company or anyone! What is the AWS CDK? Back up a step and see what the AWS CDK looks like. Compared to the declarative approach with YAML (or JSON), the CDK allows you to declare your infrastructure imperatively. The main language is TypeScript, but several other languages are also supported. This is what the Hello World example from Hello, AWS CDK! looks like:

import cdk = require('@aws-cdk/cdk');
import s3 = require('@aws-cdk/aws-s3');

class MyStack extends cdk.Stack {
  constructor(parent: cdk.App, id: string, props?: cdk.StackProps) {
    super(parent, id, props);

    new s3.Bucket(this, 'MyFirstBucket', {
      versioned: true
    });
  }
}

class MyApp extends cdk.App {
  constructor(argv: string[]) {
    super(argv);

    new MyStack(this, 'hello-cdk');
  }
}

new MyApp().run();

Apps are the root constructs and can be used directly by the CDK CLI to render and deploy the AWS CloudFormation template. Apps consist of one or more stacks, which are deployable units that contain information about the Region and account. It's possible to have an app that deploys different stacks to multiple Regions at the same time. Stacks include constructs that are representations of AWS resources, like a DynamoDB table or an AWS Lambda function. A lib is a construct that typically encapsulates further constructs. With that, higher-level constructs can be built and also reused. As a construct is just TypeScript (or any other supported language), a package can be built and shared with any package manager. Constructs As the CDK is all about constructs, it's important to understand them. Constructs form a hierarchical structure called a construct tree. You can think of constructs in three levels: Level 1: AWS CloudFormation resources This is a one-to-one mapping of existing resources and is automatically generated. It's the same as the resources that you use currently in YAML.
Ideally, you don't have to deal with these constructs directly. Level 2: The AWS Construct Library These constructs are on an AWS service level. They're opinionated, well-architected, and handwritten by AWS. They come with proper defaults and should make it easy to create AWS resources without worrying too much about the details. As an example, this is how to create a complete VPC with private and public subnets in all available Availability Zones:

import ec2 = require('@aws-cdk/aws-ec2');

const vpc = new ec2.VpcNetwork(this, 'VPC');

The AWS Construct Library has some nice concepts about least-privilege IAM policies, event-driven API actions, security groups, and metrics. For example, IAM policies are automatically created based on your intent. When a Lambda function subscribes to an SNS topic, a policy is created that allows the topic to invoke the function. AWS services that offer Amazon CloudWatch metrics have functions like metricXxx() that return metric objects, which can easily be used to create alarms:

new Alarm(this, 'Alarm', {
  metric: fn.metricErrors(),
  threshold: 100,
  evaluationPeriods: 2,
});

For more information, see AWS Construct Library. Level 3: Your awesome stuff Here's where it gets interesting. As mentioned earlier, constructs are hierarchical. They can be higher-level abstractions based on other constructs. For example, on this level, you can write your own Amazon ECS cluster construct that contains automatic node draining, automatic scaling, and all the right alarms. Or you can write a construct for all the necessary alarms that an Amazon RDS database should monitor. It's up to you to create and share your constructs. Conclusion It's good that AWS went public with the CDK at an early stage. The docs are already good, but not everything is covered yet. Not all AWS services have an AWS Construct Library module defined (level 2). Many have only the pure AWS CloudFormation constructs (level 1).
Personally, I think the AWS CDK is a huge step forward, as it allows you to re-use AWS CloudFormation code and share it with others. It makes it easy to apply company standards and allows people to work on awesome features and spend less time on writing “boring” code.

Pick the Right Tool for your IT Challenge

This guest post is by AWS Community Hero Markus Ostertag. As CEO of the Munich-based ad-tech company Team Internet AG, Markus is always trying to find the best ways to leverage the cloud, loves to work with cutting-edge technologies, and is a frequent speaker at AWS events and the AWS user group Munich that he co-founded in 2014. Picking the right tools or services for a job is a huge challenge in IT, every day and in every kind of business. With this post, I want to share some strategies and examples that we at Team Internet used to leverage the huge "tool box" of AWS to build better solutions and solve problems more efficiently. Use existing resources or build something new? A hard decision The usual day-to-day work of an IT engineer, architect, or developer is building a solution for a problem or transferring a business process into software. To achieve this, we usually tend to use already existing architectures or resources and build an "add-on" to them. With the rise of microservices, we all learned that modularization and decoupling are important for being scalable and extendable. This brought us to a different type of software architecture. In reality, though, we still tend to use already existing resources, like the same database or existing (maybe not fully utilized) Amazon EC2 instances, because it seems easier than building up new stuff. Stacks as "next level microservices"? We at Team Internet don't use the vocabulary of microservices but tend to speak about stacks and building blocks for the different use cases. Our approach matches the idea of microservices applied to everything, including the database and other resources that are necessary for the specific problem we need to address. It's not about "just" dividing the software and code into different modules. The whole infrastructure is separated based on different needs. Each of those parts of the full architecture is a stack, which is as independent as possible from everything else in the whole system.
It only communicates loosely with the other stacks or parts of the infrastructure. Benefits of this mindset: independence and flexibility Choosing the right parts. For every use case, we can choose the components or services that are best suited for the specific challenges and don't need to work around limitations. This is especially true for databases, as we can choose from the whole palette instead of trying to squeeze requirements into a DBMS that isn't built for that. We can differentiate the different needs of workloads, like write-heavy vs. read-heavy or structured vs. unstructured data. Rebuilding at will. We're flexible in rebuilding whole stacks, as they're only loosely coupled. Because of this, a team can build a proof of concept with new ideas or services and run it in parallel on a production workload without interfering with or harming the production system. Lowering costs. Because the operational overhead of running multiple resources is handled by AWS ("no undifferentiated heavy lifting"), we just need to look at the service pricing. Most of the price schemes at AWS support this stack approach. For databases, you either pay for throughput (Amazon DynamoDB) or per instance (Amazon RDS, etc.). At the throughput level, it's simple: you just split the throughput you had on one table across several tables without any overhead. At the instance level, the pricing is linear, so an r4.xlarge is half the price of an r4.2xlarge. So why not run two r4.xlarge instances and split the workload? Designing for resilience. This approach also helps your architecture to be more reliable and resilient by default. As the different stacks are independent from each other, scaling is much more granular. Scaling on larger systems is often provisioned with a higher "security buffer," and failures (hardware, software, fat fingers, etc.) affect only a small part of the whole system. Taking ownership.
A nice side effect we're seeing now that we use this methodology is the positive effect on ownership and responsibility in our teams. Because of those stacks, it is easier to pinpoint and fix issues, and also to be transparent and clear about who is responsible for which stack. Benefits demand efforts, even with the right tool for the job Every approach has its downsides. Here, it is obviously the additional development and architecture effort that needs to be invested to build such systems. Therefore, we always keep the goal of a perfect system, with independent stacks and reliable, loosely coupled processes between them, in mind. In reality, we sometimes break our own rules and cheat here and there. Even when we do, having this approach helps us to build better systems and at least know exactly at what point we are taking the risk of losing its benefits. I hope the explanation and insights here help you to pick the right tool for the job.
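The linear-pricing argument from the instance-splitting example above can be checked with a line of arithmetic. The prices below are placeholders for illustration, not real AWS rates:

```python
# Hypothetical hourly prices illustrating linear instance pricing
r4_xlarge = 0.266
r4_2xlarge = 2 * r4_xlarge   # linear tier: exactly double

# Two xlarge instances cost the same as one 2xlarge, but a single
# instance failure now takes out half the capacity instead of all of it.
cost_one_big = r4_2xlarge
cost_two_small = 2 * r4_xlarge
```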

Using AWS AI and Amazon Sumerian in IT Education

This guest post is by AWS Machine Learning Hero Cyrus Wong. Cyrus is a Data Scientist at the Hong Kong Institute of Vocational Education (Lee Wai Lee) Cloud Innovation Centre. He has achieved all nine AWS Certifications and enjoys sharing his AWS knowledge with others through open-source projects, blog posts, and events. Our institution (IVE) provides IT training to several thousand students every year, and one of our courses successfully applied for AWS Promotional Credits. We recently built an open-source project called "Lab Monitor," which uses AWS AI, serverless, and AR/VR services to enhance our learning experience and gather data to understand what students are doing during labs. Problem One of the common problems of lab activity is that students are often doing things that have nothing to do with the course (such as watching videos or playing games). And students can easily copy answers from their classmates because the lab answers are in softcopy. Teachers struggle to challenge students because, in general, there is only one answer. No one knows which students are working on the lab or which are copying from one another! Solution Lab Monitor changes the assessment model from just the final result to the entire development process. We can support and monitor students using AWS AI services. The system consists of the following parts: a lab monitor agent, a lab monitor collector, and an AR lab assistant. Lab monitor agent The lab monitor agent is a Python application that runs on a student's computer and captures computer activities. All information is periodically sent to AWS. To identify students and protect the API gateway, each student has a unique API key with a usage limit. Its functions include: Capturing all keyboard and pointer events. This can ensure that students are really working on the exercise, as it is impossible to complete a coding task without using the keyboard and pointer! Also, we encourage students to use shortcuts, and we need that information as an indicator.
Monitoring and controlling PC processes. Teachers can stop students from running programs that are irrelevant to the lab. For computer tests, we can kill all browsers and communication software. Process details are also important for deciding whether or not to upgrade hardware! Capturing screens. Amazon Rekognition can detect videos or inappropriate content. Extracted text content can trigger an Amazon Sumerian host to talk to a student automatically. It is impossible for a teacher to monitor all student screens! We use a presigned URL with S3 Transfer Acceleration to speed up the image upload. Uploading source code to AWS when students save their code. It is good to know when students complete tasks and to give support to those students who are slower! Lab monitor collector The lab monitor collector is an AWS Serverless Application Model (AWS SAM) application that collects data and provides an API for the AR lab assistant. Optionally, a teacher can grade students immediately every time they save code by running the unit tests inside AWS Lambda. The collector constantly saves all data into an Amazon S3 data lake, and teachers can use Amazon Athena to analyze the data. To save costs, a scheduled Lambda function checks the teacher's class calendar every 15 minutes. When there is an upcoming class, it creates a Kinesis stream and a Kinesis data analytics application automatically. Teachers can have a nearly real-time view of all student activity. AR lab assistant The AR lab assistant is an Amazon Sumerian application that reminds students to work on their lab exercise. It sends a camera image to Amazon Rekognition and gets back a student ID. A Sumerian host, Christine, uses Amazon Polly to speak to students when something happens: When students pass a unit test, she says congratulations. When students watch movies, she scolds them with the movie actor's name, such as Tom Cruise. When students watch porn, she scolds them.
When students do something wrong, such as forgetting to set up the Python interpreter, she reminds them to set it up. Students can also ask her questions, for example, checking their overall progress. The host can connect to a Lex chatbot. Students' conversations are saved in DynamoDB with the sentiment analysis results provided by Amazon Comprehend. The student screen is like a projector inside the Sumerian application. Christine: "Stop watching dirty things during the lab! Tom Cruise should not be able to help you write Python code!" Simplified architectural diagrams Demo video: AR lab assistant reaction Conclusion With the combined power of various AWS services, students can now concentrate only on their lab exercises and stop thinking about copying answers from each other! We built the project in about four months, and it is still evolving. In a future version, we plan to build a machine learning model to predict students' final grades based on their class behavior. Our students feel that the class is much more fun with Christine. Lastly, we would like to say thank you to AWS Educate, which provided us with AWS credits, and to my AWS Academy student developer team: Mike, Long, Mandy, Tung, Jacqueline, and Hin from the IVE Higher Diploma in Cloud and Data Centre Administration. They submitted this application to the AWS Artificial Intelligence (AI) Hackathon and just learned that they received a 3rd place prize!
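The cost-saving calendar check described earlier can be sketched as pure logic (function and variable names are hypothetical): a scheduled function decides whether any class starts within the next polling window, and only then creates the streaming resources:

```python
from datetime import datetime, timedelta

def has_upcoming_class(class_starts, now, window_minutes=15):
    """True if any class in `class_starts` begins within the polling window."""
    horizon = now + timedelta(minutes=window_minutes)
    return any(now <= start <= horizon for start in class_starts)

# The scheduled Lambda would create the Kinesis stream and analytics
# application only when this returns True, keeping them off otherwise.
now = datetime(2018, 12, 3, 8, 50)
calendar = [datetime(2018, 12, 3, 9, 0), datetime(2018, 12, 3, 14, 0)]
```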

Now Open – AWS Europe (Stockholm) Region

The AWS Region in Sweden that I promised you last year is now open and you can start using it today! The official name is Europe (Stockholm) and the API name is eu-north-1. This is our fifth region in Europe, joining the existing regions in Europe (Ireland), Europe (London), Europe (Frankfurt), and Europe (Paris). Together, these regions provide you with a total of 15 Availability Zones and allow you to architect applications that are resilient and fault tolerant. You now have yet another option to help you to serve your customers in the Nordics while keeping their data close to home. Instances and Services Applications running in this 3-AZ region can use C5, C5d, D2, I3, M5, M5d, R5, R5d, and T3 instances, and can make use of a long list of AWS services, including Amazon API Gateway, Application Auto Scaling, AWS Artifact, AWS Certificate Manager (ACM), Amazon CloudFront, AWS CloudFormation, AWS CloudTrail, Amazon CloudWatch, CloudWatch Events, Amazon CloudWatch Logs, AWS CodeDeploy, AWS Config, AWS Config Rules, AWS Database Migration Service, AWS Direct Connect, Amazon DynamoDB, EC2 Auto Scaling, EC2 Dedicated Hosts, Amazon Elastic Container Service for Kubernetes, AWS Elastic Beanstalk, Amazon Elastic Block Store (EBS), Amazon Elastic Compute Cloud (EC2), Elastic Container Registry, Amazon ECS, Elastic Load Balancing (Classic, Network, and Application Load Balancers), Amazon EMR, Amazon ElastiCache, Amazon Elasticsearch Service, Amazon Glacier, AWS Identity and Access Management (IAM), Amazon Kinesis Data Streams, AWS Key Management Service (KMS), AWS Lambda, AWS Marketplace, AWS Organizations, AWS Personal Health Dashboard, AWS Resource Groups, Amazon RDS for Aurora, Amazon RDS for PostgreSQL, Amazon Route 53 (including Private DNS for VPCs), AWS Server Migration Service, AWS Shield Standard, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Storage Service (S3), Amazon Simple Workflow Service (SWF), AWS Step Functions, AWS Storage
Gateway, AWS Support API, Amazon EC2 Systems Manager (SSM), AWS Trusted Advisor, Amazon Virtual Private Cloud, VM Import, and AWS X-Ray. Edge Locations and Latency CloudFront edge locations are already operational in four cities adjacent to the new region: Stockholm, Sweden (3 locations) Copenhagen, Denmark Helsinki, Finland Oslo, Norway AWS Direct Connect is also available in all of these locations. The region also offers low-latency connections to other cities and AWS regions in area. Here are the latest numbers: AWS Customers in the Nordics Tens of thousands of our customers in Denmark, Finland, Iceland, Norway, and Sweden already use AWS! Here’s a sampling: Volvo Connected Solutions Group – AWS is their preferred cloud solution provider; allowing them to connect over 800,000 Volvo trucks, buses, construction equipment, and Penta engines. They make heavy use of microservices and will use the new region to deliver services with lower latency than ever before. Fortum – Their one-megawatt Virtual Battery runs on top of AWS. The battery aggregates and controls usage of energy assets and allows Fortum to better balance energy usage across their grid. This results in lower energy costs and power bills, along with a reduced environmental impact. Den Norske Bank – This financial services customer is using AWS to provide a modern banking experience for their customers. They can innovate and scale more rapidly, and have devoted an entire floor of their headquarters to AWS projects. Finnish Rail – They are moving their website and travel applications to AWS in order to allow their developers to quickly experiment, build, test, and deliver personalized services for each of their customers. And That Makes 20 With today’s launch, the AWS Cloud spans 60 Availability Zones within 20 geographic regions around the world. We are currently working on 12 more Availability Zones and four more AWS Regions in Bahrain, Cape Town, Hong Kong SAR, and Milan. 
AWS services are GDPR ready and also include capabilities that are designed to support your own GDPR readiness efforts. To learn more, read the AWS Service Capabilities for GDPR and check out the AWS General Data Protection Regulation (GDPR) Center. The Europe (Stockholm) Region is now open and you can start creating your AWS resources in it today! — Jeff;

And Now a Word from Our AWS Heroes…

Whew! Now that AWS re:Invent 2018 has wrapped up, the AWS Blog Team is taking some time to relax, recharge, and prepare for 2019. In order to wrap up the year in style, we have asked several of the AWS Heroes to write guest blog posts on an AWS-related topic of their choice. You will get to hear from Machine Learning Hero Cyrus Wong, Community Hero Markus Ostertag, Container Hero Philipp Garbe, and several others. Each of these Heroes brings a fresh and unique perspective to the AWS Blog and I know that you will enjoy hearing from them. We’ll have the first post up in a day or two, so stay tuned! — Jeff;

Learn about New AWS re:Invent Launches – December AWS Online Tech Talks

Join us in the next couple of weeks to learn about some of the new service and feature launches from re:Invent 2018. Learn about features and benefits, watch live demos, and ask questions! We’ll have AWS experts online to answer any questions you may have. Register today! Note – All sessions are free and in Pacific Time.

Tech talks this month:

Compute
December 19, 2018 | 01:00 PM – 02:00 PM PT – Developing Deep Learning Models for Computer Vision with Amazon EC2 P3 Instances – Learn about the different steps required to build, train, and deploy a machine learning model for computer vision.

Containers
December 11, 2018 | 01:00 PM – 02:00 PM PT – Introduction to AWS App Mesh – Learn about using AWS App Mesh to monitor and control microservices on AWS.

Data Lakes & Analytics
December 10, 2018 | 11:00 AM – 12:00 PM PT – Introduction to AWS Lake Formation – Build a Secure Data Lake in Days – AWS Lake Formation (coming soon) will make it easy to set up a secure data lake in days. With AWS Lake Formation, you will be able to ingest, catalog, clean, transform, and secure your data, and make it available for analysis and machine learning.
December 12, 2018 | 11:00 AM – 12:00 PM PT – Introduction to Amazon Managed Streaming for Kafka (MSK) – Learn about features and benefits, use cases, and how to get started with Amazon MSK.

Databases
December 10, 2018 | 01:00 PM – 02:00 PM PT – Introduction to Amazon RDS on VMware – Learn how Amazon RDS on VMware can be used to automate on-premises database administration, enable hybrid cloud backups and read scaling for on-premises databases, and simplify database migration to AWS.
December 13, 2018 | 09:00 AM – 10:00 AM PT – Serverless Databases with Amazon Aurora and Amazon DynamoDB – Learn about the new serverless features and benefits in Amazon Aurora and DynamoDB, use cases, and how to get started.

Enterprise & Hybrid
December 19, 2018 | 11:00 AM – 12:00 PM PT – How to Use “Minimum Viable Refactoring” to Achieve Post-Migration Operational Excellence – Learn how to improve the security and compliance of your applications in two weeks with “minimum viable refactoring”.

IoT
December 17, 2018 | 11:00 AM – 12:00 PM PT – Introduction to New AWS IoT Services – Dive deep into the AWS IoT service announcements from re:Invent 2018, including AWS IoT Things Graph, AWS IoT Events, and AWS IoT SiteWise.

Machine Learning
December 10, 2018 | 09:00 AM – 10:00 AM PT – Introducing Amazon SageMaker Ground Truth – Learn how to build highly accurate training datasets with machine learning and reduce data labeling costs by up to 70%.
December 11, 2018 | 09:00 AM – 10:00 AM PT – Introduction to AWS DeepRacer – AWS DeepRacer is the fastest way to get rolling with machine learning, literally. Get hands-on with a fully autonomous 1/18th scale race car driven by reinforcement learning, a 3D racing simulator, and a global racing league.
December 12, 2018 | 01:00 PM – 02:00 PM PT – Introduction to Amazon Forecast and Amazon Personalize – Learn about the key features and benefits of these managed ML services, common use cases, and how you can get started.
December 13, 2018 | 01:00 PM – 02:00 PM PT – Introduction to Amazon Textract: Now in Preview – Learn how Amazon Textract, now in preview, enables companies to easily extract text and data from virtually any document.

Networking
December 17, 2018 | 01:00 PM – 02:00 PM PT – Introduction to AWS Transit Gateway – Learn how AWS Transit Gateway significantly simplifies management and reduces operational costs with a hub-and-spoke architecture.

Robotics
December 18, 2018 | 11:00 AM – 12:00 PM PT – Introduction to AWS RoboMaker, a New Cloud Robotics Service – Learn about AWS RoboMaker, a service that makes it easy to develop, test, and deploy intelligent robotics applications at scale.

Security, Identity & Compliance
December 17, 2018 | 09:00 AM – 10:00 AM PT – Introduction to AWS Security Hub – Learn about AWS Security Hub, and how it gives you a comprehensive view of high-priority security alerts and your compliance status across AWS accounts.

Serverless
December 11, 2018 | 11:00 AM – 12:00 PM PT – What’s New with Serverless at AWS – In this tech talk, we’ll catch you up on our ever-growing collection of natively supported languages, console updates, and re:Invent launches.
December 13, 2018 | 11:00 AM – 12:00 PM PT – Building Real Time Applications using WebSocket APIs Supported by Amazon API Gateway – Learn how to build, deploy, and manage APIs with API Gateway.

Storage
December 12, 2018 | 09:00 AM – 10:00 AM PT – Introduction to Amazon FSx for Windows File Server – Learn about Amazon FSx for Windows File Server, a new fully managed native Windows file system that makes it easy to move Windows-based applications that require file storage to AWS.
December 14, 2018 | 01:00 PM – 02:00 PM PT – What’s New with AWS Storage – A Recap of re:Invent 2018 Announcements – Learn about the key AWS storage announcements made prior to and at re:Invent 2018. With 15+ new service, feature, and device launches in object, file, block, and data transfer storage services, you will be able to start designing the foundation of your cloud IT environment for any application and easily migrate data to AWS.
December 18, 2018 | 09:00 AM – 10:00 AM PT – Introduction to Amazon FSx for Lustre – Learn about Amazon FSx for Lustre, a fully managed file system for compute-intensive workloads. Process files from S3 or data stores, with throughput up to hundreds of GBps and sub-millisecond latencies.
December 18, 2018 | 01:00 PM – 02:00 PM PT – Introduction to New AWS Services for Data Transfer – Learn about new AWS data transfer services, and which might best fit your requirements for data migration or ongoing hybrid workloads.

New – EC2 P3dn GPU Instances with 100 Gbps Networking & Local NVMe Storage for Faster Machine Learning + P3 Price Reduction

Late last year I told you about Amazon EC2 P3 instances and also spent some time discussing the concept of the Tensor Core, a specialized compute unit that is designed to accelerate machine learning training and inferencing for large, deep neural networks. Our customers love P3 instances and are using them to run a wide variety of machine learning and HPC workloads. For example, one team set a speed record for deep learning, training the ResNet-50 deep learning model on 1 million images for just $40.

Raise the Roof
Today we are expanding the P3 offering at the top end with the addition of p3dn.24xlarge instances, with 2x the GPU memory and 1.5x as many vCPUs as p3.16xlarge instances. The instances feature 100 Gbps network bandwidth (up to 4x the bandwidth of previous P3 instances), local NVMe storage, the latest NVIDIA V100 Tensor Core GPUs with 32 GB of GPU memory, NVIDIA NVLink for faster GPU-to-GPU communication, and AWS-custom Intel® Xeon® Scalable (Skylake) processors running at 3.1 GHz sustained all-core Turbo, all built atop the AWS Nitro System. Here are the specs for the p3dn.24xlarge:

NVIDIA V100 Tensor Core GPUs: 8
GPU Memory: 256 GB
NVIDIA NVLink: 300 GB/s
vCPUs: 96
Main Memory: 768 GiB
Local Storage: 2 x 900 GB NVMe SSD
Network Bandwidth: 100 Gbps
EBS-Optimized Bandwidth: 14 Gbps

If you are doing large-scale training runs using MXNet, TensorFlow, PyTorch, or Keras, be sure to check out the Horovod distributed training framework that is included in the Amazon Deep Learning AMIs. You should also take a look at the new NVIDIA AI Software containers in the AWS Marketplace; these containers are optimized for use on P3 instances with V100 GPUs. With a total of 256 GB of GPU memory (twice as much as the largest of the current P3 instances), the p3dn.24xlarge allows you to explore bigger and more complex deep learning algorithms.
You can rotate and scale your training images faster than ever before, while also taking advantage of the Intel AVX-512 instructions and other leading-edge Skylake features. Your GPU code can scale out across multiple GPUs and/or instances using NVLink and the NVIDIA Collective Communications Library (NCCL). Using NCCL will also allow you to fully exploit the 100 Gbps of network bandwidth that is available between instances when used within a Placement Group.

In addition to being a great fit for distributed machine learning training and image classification, these instances provide plenty of power for your HPC jobs. You can render 3D images, transcode video in real time, model financial risks, and much more.

You can use existing AMIs as long as they include the ENA, NVMe, and NVIDIA drivers. You will need to upgrade to the latest ENA driver to get 100 Gbps networking; if you are using the Deep Learning AMIs, be sure to use a recent version that is optimized for AVX-512.

Available Today
The p3dn.24xlarge instances are available now in the US East (N. Virginia) and US West (Oregon) Regions and you can start using them today in On-Demand, Spot, and Reserved Instance form.

Bonus – P3 Price Reduction
As part of today’s launch we are also reducing prices for the existing P3 instances. The following prices went into effect on December 6, 2018:

20% reduction for all prices (On-Demand and RI) and all instance sizes in the Asia Pacific (Tokyo) Region.
15% reduction for all prices (On-Demand and RI) and all instance sizes in the Asia Pacific (Sydney), Asia Pacific (Singapore), and Asia Pacific (Seoul) Regions.
15% reduction for Standard RIs with a three-year term for all instance sizes in all regions except Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), and Asia Pacific (Seoul).

The percentages apply to instances running Linux; slightly smaller percentages apply to instances that run Microsoft Windows and other operating systems.
These reductions will help to make your machine learning training and inferencing even more affordable, and are being brought to you as we pursue our goal of putting machine learning in the hands of every developer. — Jeff;    

New – AWS Well-Architected Tool – Review Workloads Against Best Practices

Back in 2015 we launched the AWS Well-Architected Framework and I asked Are You Well-Architected? The framework includes five pillars that encapsulate a set of core strategies and best practices for architecting systems in the cloud:

Operational Excellence – Running and managing systems to deliver business value.
Security – Protecting information and systems.
Reliability – Preventing and quickly recovering from failures.
Performance Efficiency – Using IT and compute resources efficiently.
Cost Optimization – Avoiding unneeded costs.

I think of it as a way to make sure that you are using the cloud right, and that you are using it well. AWS Solutions Architects (SAs) work with our customers to perform thousands of Well-Architected reviews every year! Even at that pace, the demand for reviews always seems to be a bit higher than our supply of SAs. Our customers tell us that the reviews are of great value and use the results to improve their use of AWS over time.

New AWS Well-Architected Tool
In order to make Well-Architected reviews open to every AWS customer, we are introducing the AWS Well-Architected Tool. This is a self-service tool that is designed to help architects and their managers review AWS workloads at any time, without the need for an AWS Solutions Architect. The AWS Well-Architected Tool helps you to define your workload, answer questions designed to review the workload against the best practices specified by the five pillars, and walk away with a plan that will help you to do even better over time. The review process includes educational content that focuses on the most current set of AWS best practices. Let’s take a quick tour…

AWS Well-Architected Tool in Action
I open the AWS Well-Architected Tool Console and click Define workload to get started. I begin by naming and defining my workload.
I choose an industry type and an industry, list the regions where I operate, indicate whether this is a pre-production or production workload, and optionally enter a list of AWS account IDs to define the span of the workload. Then I click Define workload to move ahead. I am ready to get started, so I click Start review.

The first pillar is Operational Excellence. There are nine questions, each with multiple-choice answers, and helpful resources are displayed alongside. I can go through the pillars and questions in order, save and exit, and so forth. After I complete my review, I can consult the improvement plan for my workload and generate a detailed PDF report that summarizes my answers. I can also review my list of workloads and see the overall status in the dashboard.

Available Now
The AWS Well-Architected Tool is available now and you can start using it today for workloads in the US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland) Regions at no charge. — Jeff;

New for AWS Lambda – Use Any Programming Language and Share Common Components

I remember the excitement when AWS Lambda was announced in 2014! Four years on, customers are using Lambda functions for many different use cases. For example, iRobot uses AWS Lambda to provide compute services for their Roomba robotic vacuum cleaners, Fannie Mae runs Monte Carlo simulations for millions of mortgages, and Bustle serves billions of requests for their digital content.

Today, we are introducing two new features that are going to make serverless development even easier:

Lambda Layers, a way to centrally manage code and data that is shared across multiple functions.
Lambda Runtime API, a simple interface to use any programming language, or a specific language version, for developing your functions.

These two features can be used together: runtimes can be shared as layers so that developers can pick them up and use their favorite programming language when authoring Lambda functions. Let’s see how they work in more detail.

Lambda Layers
When building serverless applications, it is quite common to have code that is shared across Lambda functions. It can be your custom code that is used by more than one function, or a standard library that you add to simplify the implementation of your business logic. Previously, you would have to package and deploy this shared code together with all the functions using it. Now, you can put common components in a ZIP file and upload it as a Lambda Layer. Your function code doesn’t need to be changed and can reference the libraries in the layer as it normally would.

Layers can be versioned to manage updates, and each version is immutable. When a version is deleted or permissions to use it are revoked, functions that used it previously will continue to work, but you won’t be able to create new ones. In the configuration of a function, you can reference up to five layers, one of which can optionally be a runtime. When the function is invoked, layers are installed in /opt in the order you provided.
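Since every layer is extracted into the same /opt directory, a file in a later layer replaces the same file from an earlier layer. The effect is easy to reason about with a quick local simulation (a sketch; the layer contents and file names below are invented for illustration):

```python
import io
import os
import tempfile
import zipfile

def make_layer_zip(filename, content):
    """Build an in-memory ZIP containing a single file, like a minimal layer."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr(filename, content)
    buf.seek(0)
    return buf

def install_layers(layers, opt_dir):
    """Extract each layer ZIP into the same directory, in order; later layers win."""
    for layer in layers:
        with zipfile.ZipFile(layer) as zf:
            zf.extractall(opt_dir)

opt_dir = tempfile.mkdtemp()
layers = [
    make_layer_zip("python/mylib.py", "VERSION = '1.0'"),  # first layer in the list
    make_layer_zip("python/mylib.py", "VERSION = '2.0'"),  # second layer overwrites it
]
install_layers(layers, opt_dir)

with open(os.path.join(opt_dir, "python", "mylib.py")) as f:
    print(f.read())  # the second layer's copy of the file wins
```

The same reasoning applies when a runtime layer is followed by a layer that pins specific library versions: whatever lands last in /opt is what the function sees.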
Order is important because layers are all extracted under the same path, so each layer can potentially overwrite the previous one. This approach can be used to customize the environment. For example, the first layer can be a runtime and the second layer can add specific versions of the libraries you need. The overall, uncompressed size of the function and its layers is subject to the usual unzipped deployment package size limit.

Layers can be used within an AWS account, shared between accounts, or shared publicly with the broad developer community. There are many advantages to using layers. For example, you can use Lambda Layers to:

Enforce separation of concerns between dependencies and your custom business logic.
Make your function code smaller and more focused on what you want to build.
Speed up deployments, because less code must be packaged and uploaded, and dependencies can be reused.

Based on our customer feedback, and to provide an example of how to use Lambda Layers, we are publishing a public layer which includes NumPy and SciPy, two popular scientific libraries for Python. This prebuilt and optimized layer can help you start very quickly with data processing and machine learning applications. In addition, you can find layers for application monitoring, security, and management from partners such as Datadog, Epsagon, IOpipe, NodeSource, Thundra, Protego, PureSec, Twistlock, Serverless, and Stackery.

Using Lambda Layers
In the Lambda console I can now manage my own layers. I don’t want to create a new layer now but use an existing one in a function. I create a new Python function and, in the function configuration, I can see that there are no referenced layers, so I choose to add a layer. From the list of layers compatible with the runtime of my function, I select the one with NumPy and SciPy, using the latest available version. After I add the layer, I click Save to update the function configuration.
In case you’re using more than one layer, you can adjust the order in which they are merged with the function code here. To use the layer in my function, I just have to import the features I need from NumPy and SciPy:

import numpy as np
from scipy.spatial import ConvexHull

def lambda_handler(event, context):

    print("\nUsing NumPy\n")

    print("random matrix_a =")
    matrix_a = np.random.randint(10, size=(4, 4))
    print(matrix_a)

    print("random matrix_b =")
    matrix_b = np.random.randint(10, size=(4, 4))
    print(matrix_b)

    print("matrix_a * matrix_b = ")
    print(np.matmul(matrix_a, matrix_b))

    print("\nUsing SciPy\n")

    num_points = 10
    print(num_points, "random points:")
    points = np.random.rand(num_points, 2)
    for i, point in enumerate(points):
        print(i, '->', point)

    hull = ConvexHull(points)
    print("The smallest convex set containing all", num_points,
          "points has", len(hull.simplices),
          "sides,\nconnecting points:")
    for simplex in hull.simplices:
        print(simplex[0], '<->', simplex[1])

I run the function, and looking at the logs, I can see some interesting results. First, I am using NumPy to perform matrix multiplication (matrices and vectors are often used to represent the inputs, outputs, and weights of neural networks):

random matrix_a =
[[8 4 3 8]
 [1 7 3 0]
 [2 5 9 3]
 [6 6 8 9]]
random matrix_b =
[[2 4 7 7]
 [7 0 0 6]
 [5 0 1 0]
 [4 9 8 6]]
matrix_a * matrix_b =
[[ 91 104 123 128]
 [ 66   4  10  49]
 [ 96  35  47  62]
 [130 105 122 132]]

Then, I use SciPy advanced spatial algorithms to compute something quite hard to build by myself: finding the smallest “convex set” containing a list of points on a plane.
For example, this can be used in a Lambda function receiving events from multiple geographic locations (corresponding to buildings, customer locations, or devices) to visually “group” similar events together in an efficient way:

10 random points:
0 -> [0.07854072 0.91912467]
1 -> [0.11845307 0.20851106]
2 -> [0.3774705  0.62954561]
3 -> [0.09845837 0.74598477]
4 -> [0.32892855 0.4151341 ]
5 -> [0.00170082 0.44584693]
6 -> [0.34196204 0.3541194 ]
7 -> [0.84802508 0.98776034]
8 -> [0.7234202  0.81249389]
9 -> [0.52648981 0.8835746 ]
The smallest convex set containing all 10 points has 6 sides,
connecting points:
1 <-> 5
0 <-> 5
0 <-> 7
6 <-> 1
8 <-> 7
8 <-> 6

When I was building this example, there was no need to install or package dependencies. I could quickly iterate on the code of the function. Deployments were very fast because I didn’t have to include large libraries or modules.

To visualize the output of SciPy, it was easy for me to create an additional layer to import matplotlib, a plotting library. Adding a few lines of code at the end of the previous function, I can now upload to Amazon Simple Storage Service (S3) an image that shows how the “convex set” is wrapping all the points (the snippet assumes that io, boto3, and matplotlib.pyplot as plt have been imported, and that S3_BUCKET_NAME and S3_KEY are defined):

plt.plot(points[:, 0], points[:, 1], 'o')
for simplex in hull.simplices:
    plt.plot(points[simplex, 0], points[simplex, 1], 'k-')

img_data = io.BytesIO()
plt.savefig(img_data, format='png')
img_data.seek(0)

s3 = boto3.resource('s3')
bucket = s3.Bucket(S3_BUCKET_NAME)
bucket.put_object(Body=img_data, ContentType='image/png', Key=S3_KEY)

plt.close()

Lambda Runtime API
You can now select a custom runtime when creating or updating a function. With this selection, the function must include (in its code or in a layer) an executable file called bootstrap, responsible for the communication between your code (which can use any programming language) and the Lambda environment.
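That bootstrap contract is easy to simulate locally. The sketch below stands up a fake in-process HTTP server that plays the Lambda side of the Runtime API, then runs one iteration of what a minimal bootstrap does: fetch the next event, run a handler, and post the result back. The endpoint paths and request-id header follow the Runtime API; the event, handler, and fake server are invented for illustration:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

API = "2018-06-01"
responses = {}  # request_id -> body posted back by the "runtime"

class FakeRuntimeAPI(BaseHTTPRequestHandler):
    """Stands in for the Lambda service side of the Runtime API."""
    def do_GET(self):
        if self.path == f"/{API}/runtime/invocation/next":
            body = json.dumps({"name": "World"}).encode()  # the pending event
            self.send_response(200)
            self.send_header("Lambda-Runtime-Aws-Request-Id", "req-1")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    def do_POST(self):
        # path looks like /<API>/runtime/invocation/<request_id>/response
        request_id = self.path.split("/")[-2]
        length = int(self.headers["Content-Length"])
        responses[request_id] = self.rfile.read(length).decode()
        self.send_response(202)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the output quiet

server = HTTPServer(("127.0.0.1", 0), FakeRuntimeAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()
endpoint = f"127.0.0.1:{server.server_port}"  # normally $AWS_LAMBDA_RUNTIME_API

# One iteration of the bootstrap loop: get the next event...
with urllib.request.urlopen(f"http://{endpoint}/{API}/runtime/invocation/next") as r:
    request_id = r.headers["Lambda-Runtime-Aws-Request-Id"]
    event = json.loads(r.read())

# ...run the "handler", then POST the result to the invocation-specific path.
result = json.dumps({"greeting": f"Hello {event['name']}"})
req = urllib.request.Request(
    f"http://{endpoint}/{API}/runtime/invocation/{request_id}/response",
    data=result.encode(), method="POST")
urllib.request.urlopen(req).close()
server.shutdown()

print(responses["req-1"])
```

A real bootstrap simply wraps this request/handle/respond cycle in an endless loop, which is why it can be written in any language that can speak HTTP.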
The runtime bootstrap uses a simple HTTP-based interface to get the event payload for a new invocation and return the response from the function. Information on the interface endpoint and the function handler are shared as environment variables. For the execution of your code, you can use anything that can run in the Lambda execution environment. For example, you can bring an interpreter for the programming language of your choice.

You only need to know how the Runtime API works if you want to manage or publish your own runtimes. As a developer, you can quickly use runtimes that are shared with you as layers. We are making these open source runtimes available today:

C++
Rust

We are also working with our partners to provide more open source runtimes:

Erlang (Alert Logic)
Elixir (Alert Logic)
Cobol (Blu Age)
N|Solid (NodeSource)
PHP (Stackery)

The Runtime API is the future of how we’ll support new languages in Lambda. For example, this is how we built support for the Ruby language.

Available Now
You can use runtimes and layers in all regions where Lambda is available, via the console or the AWS Command Line Interface (CLI). You can also use the AWS Serverless Application Model (SAM) and the SAM CLI to test, deploy, and manage serverless applications using these new features.

There is no additional cost for using runtimes and layers. The storage of your layers counts toward the AWS Lambda function storage per-region limit.

To learn more about using the Runtime API and Lambda Layers, don’t miss our webinar on December 11, hosted by Principal Developer Advocate Chris Munns. I am so excited by these new features, please let me know what you are going to build next!

New – Compute, Database, Messaging, Analytics, and Machine Learning Integration for AWS Step Functions

AWS Step Functions is a fully managed workflow service for application developers. You can think and work at a high level, connecting and coordinating activities in a reliable and repeatable way, while keeping your business logic separate from your workflow logic. After you design and test your workflows (which we call state machines), you can deploy them at scale, with tens or even hundreds of thousands running independently and concurrently. Step Functions tracks the status of each workflow, takes care of retrying activities on transient failures, and also simplifies monitoring and logging. To learn more, step through the Create a Serverless Workflow with AWS Step Functions and AWS Lambda tutorial.

Since our launch at AWS re:Invent 2016, our customers have made great use of Step Functions (my post, Things go Better with Step Functions, describes a real-world use case). Our customers love the fact that they can easily call AWS Lambda functions to implement their business logic, and have asked us for even more options.

More Integration, More Power
Today we are giving you the power to use eight more AWS services from your Step Functions state machines. Here are the new actions:

DynamoDB – Get an existing item from an Amazon DynamoDB table; put a new item into a DynamoDB table.
AWS Batch – Submit an AWS Batch job and wait for it to complete.
Amazon ECS – Run an Amazon ECS or AWS Fargate task using a task definition.
Amazon SNS – Publish a message to an Amazon Simple Notification Service (SNS) topic.
Amazon SQS – Send a message to an Amazon Simple Queue Service (SQS) queue.
AWS Glue – Start an AWS Glue job run.
Amazon SageMaker – Create an Amazon SageMaker training job; create a SageMaker transform job (learn more by reading New Features for Amazon SageMaker: Workflows, Algorithms, and Accreditation).

You can use these actions individually or in combination with each other.
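Each of these actions is invoked from an ordinary Task state via a service-specific ARN. As a sketch (the topic ARN below is a placeholder), a state machine that publishes its input to SNS, with no Lambda function in between, looks like this:

```json
{
  "StartAt": "Notify",
  "States": {
    "Notify": {
      "Type": "Task",
      "Resource": "arn:aws:states:::sns:publish",
      "Parameters": {
        "TopicArn": "arn:aws:sns:us-east-1:123456789012:my-topic",
        "Message.$": "$.message"
      },
      "End": true
    }
  }
}
```

The "Message.$" key pulls the message text out of the state input at execution time, so the same definition works for any payload shaped like {"message": "..."}.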
To help you get started, we’ve built some cool samples that will show you how to manage a batch job, manage a container task, copy data from DynamoDB, retrieve the status of a Batch job, and more. For example, one sample copies data from DynamoDB to SQS. The sample (available to you as an AWS CloudFormation template) creates all of the necessary moving parts, including a Lambda function that will populate (seed) the table with some test data. After I create the stack I can locate the state machine in the Step Functions Console and execute it. I can inspect each step in the console; the first one (Seed the DynamoDB Table) calls a Lambda function that creates some table entries and returns a list of keys (message ids), and the third step (Send Message to SQS) delivers output that includes the SQS MessageId. As you can see, the state machine took care of all of the heavy lifting — calling the Lambda function, iterating over the list of message IDs, and calling DynamoDB and SQS for each one. I can run many copies at the same time. I’m sure you can take this example as a starting point and build something awesome with it; be sure to check out the other samples and templates for some ideas!

If you are already building and running your own state machines, you should know about Magic ARNs and Parameters:

Magic ARNs – Each of these new operations is represented by a special “magic” (that’s the technical term Tim used) ARN. There’s one for sending to SQS, another one for running a batch job, and so forth.
Parameters – You can use the Parameters field in a Task state to control the parameters that are passed to the service APIs that implement the new functions. Your state machine definitions can include static JSON or references (in JsonPath form) to specific elements in the state input.
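The convention behind those references is simple: any Parameters key ending in ".$" takes its value from a JsonPath into the state input rather than from static JSON. A toy resolver (a sketch, handling only the simple "$.field[index]" paths used in this post, not the real Step Functions implementation) shows the idea:

```python
import re

def resolve(params, state_input):
    """Resolve Step Functions-style Parameters against the state input.

    Keys ending in '.$' take their value from a JsonPath into the input;
    all other values are passed through as static JSON.
    """
    out = {}
    for key, value in params.items():
        if key.endswith(".$"):
            node = state_input
            # walk tokens like 'List' and '[0]' in a path such as '$.List[0]'
            for name, index in re.findall(r"(\w+)|\[(\d+)\]", value.lstrip("$.")):
                node = node[name] if name else node[int(index)]
            out[key[:-2]] = node  # strip the '.$' suffix from the key
        elif isinstance(value, dict):
            out[key] = resolve(value, state_input)
        else:
            out[key] = value
    return out

state_input = {"List": ["id-123", "id-456"]}
params = {
    "TableName": "StepDemoStack-DDBTable-1DKVAVTZ1QTSH",
    "Key": {"MessageId": {"S.$": "$.List[0]"}},
}
print(resolve(params, state_input))
```

Here the "S.$" reference becomes {"S": "id-123"}, the first key returned by the seeding step, while the static TableName is passed through untouched.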
Here’s how the Magic ARNs and Parameters are used to define a state:

"Read Next Message from DynamoDB": {
  "Type": "Task",
  "Resource": "arn:aws:states:::dynamodb:getItem",
  "Parameters": {
    "TableName": "StepDemoStack-DDBTable-1DKVAVTZ1QTSH",
    "Key": {
      "MessageId": {"S.$": "$.List[0]"}
    }
  },
  "ResultPath": "$.DynamoDB",
  "Next": "Send Message to SQS"
},

Available Now
The new integrations are available now and you can start using them today in all AWS Regions where Step Functions is available. You pay the usual charge for each state transition and for the AWS services that you consume. — Jeff;

New – AWS Toolkits for PyCharm, IntelliJ (Preview), and Visual Studio Code (Preview)

Software developers have their own preferred tools. Some use powerful editors, others Integrated Development Environments (IDEs) that are tailored for specific languages and platforms. In 2014 I created my first AWS Lambda function using the editor in the Lambda console. Now, you can choose from a rich set of tools to build and deploy serverless applications. For example, the editor in the Lambda console was greatly enhanced last year when AWS Cloud9 was released. For .NET applications, you can use the AWS Toolkit for Visual Studio and AWS Tools for Visual Studio Team Services.

AWS Toolkits for PyCharm, IntelliJ, and Visual Studio Code
Today, we are announcing the general availability of the AWS Toolkit for PyCharm. We are also announcing the developer preview of the AWS Toolkits for IntelliJ and Visual Studio Code, which are under active development in GitHub. These open source toolkits will enable you to easily develop serverless applications, including a full create, step-through debug, and deploy experience in the IDE and language of your choice, be it Python, Java, Node.js, or .NET. For example, using the AWS Toolkit for PyCharm you can:

Create a new, ready-to-deploy serverless application in your preferred runtime.
Locally test your code with step-through debugging in a Lambda-like execution environment.
Deploy your applications to the AWS region of your choice.
Invoke your Lambda functions locally or remotely.
Use and customize sample payloads from different event sources such as Amazon Simple Storage Service (S3), Amazon API Gateway, and Amazon Simple Notification Service (SNS).

These toolkits are distributed under the open source Apache License, Version 2.0.

Installation
Some features use the AWS Serverless Application Model (SAM) CLI. You can find installation instructions for your system here. The AWS Toolkit for PyCharm is available via the IDEA Plugin Repository.
To install it, in the Settings/Preferences dialog, click Plugins, search for “AWS Toolkit”, use the checkbox to enable it, and click the Install button. You will need to restart your IDE for the changes to take effect. The AWS Toolkits for IntelliJ and Visual Studio Code are currently in developer preview and under active development, and you are welcome to build and install them from the GitHub repositories.

Building a Serverless Application with PyCharm
After installing the AWS SAM CLI and the AWS Toolkit, I create a new project in PyCharm and choose SAM on the left to create a serverless application using the AWS Serverless Application Model. I enter hello-world in the Location field as the name of my project. Expanding More Settings, I choose which SAM template to use as the starting point for my project. For this walkthrough, I select the “AWS SAM Hello World” template.

In PyCharm you can use credentials and profiles from your AWS Command Line Interface (CLI) configuration, and you can change the AWS region quickly if you have multiple environments. The AWS Explorer shows Lambda functions and AWS CloudFormation stacks in the selected AWS region. Starting from a CloudFormation stack, you can see which Lambda functions are part of it.

After I open the file containing the function handler, I click on the Lambda icon to the left of the function declaration to get the option to run the function locally or start a local step-by-step debugging session.

First, I run the function locally. I can configure the payload of the event that is provided as input for the local invocation, starting from the event templates provided for most services, such as Amazon API Gateway, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), and so on. You can use a file for the payload, or select the share checkbox to make it available to other team members.
The function is executed locally, but you can choose the credentials and the Region to use if the function calls other AWS services, such as Amazon Simple Storage Service (S3) or Amazon DynamoDB. A local container is used to emulate the Lambda execution environment. This function implements a basic web API, and I can check that the result is in the format expected by API Gateway.

After that, I want to get more information on what my code is doing. I set a breakpoint and start a local debugging session, using the same input event as before. Again, you can choose the credentials and Region for the AWS services used by the function. I step over the HTTP request in the code to inspect the response in the Variables tab. There you have access to all local variables, including the event and the context provided as input to the function. After that, I resume the program to reach the end of the debugging session.

Now I am confident enough to deploy the serverless application by right-clicking the project (or the SAM template file). I can create a new CloudFormation stack or update an existing one; for example, you can have one stack for production and one for testing. For now, I create a new stack called hello-world-prod. I select an S3 bucket in the Region to store the package used for the deployment. If your template has parameters, this is where you set the values used by this deployment.

After a few minutes, the stack creation is complete and I can run the function in the cloud with a right-click in the AWS Explorer. There is also an option to jump to the source code of the function. As expected, the result of the remote invocation is the same as the local execution. My serverless application is in production!
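The response-format check mentioned above is easy to reproduce outside the IDE. Here is a minimal sketch of a handler in the style of the SAM "Hello World" template (illustrative, not the exact generated code), along with the shape that API Gateway's Lambda proxy integration expects:

```python
import json

def lambda_handler(event, context):
    # API Gateway's Lambda proxy integration expects a dict with an
    # integer statusCode and a string body (JSON-encoded here).
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "hello world"}),
    }

# A quick local sanity check of the response shape.
resp = lambda_handler({}, None)
assert isinstance(resp["statusCode"], int)
assert isinstance(resp["body"], str)
print(json.loads(resp["body"])["message"])  # → hello world
```

Running a check like this locally (or stepping through it in the debugger) catches format mistakes before the function ever sits behind an API Gateway endpoint.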
Using these toolkits, developers can test locally to find problems before deployment, change the code of their application or the resources it needs in the SAM template, and update an existing stack, iterating quickly until they reach their goal. For example, they can add an S3 bucket to store images or documents, add a DynamoDB table to store their users, or change the permissions used by their functions.

I am really excited by how much faster and easier it is to build your ideas on AWS. Now you can use your preferred environment to accelerate even further. I look forward to seeing what you will do with these new tools!
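As an example of that kind of iteration, adding a DynamoDB table to an application is a small SAM template change. A sketch of such a fragment, with illustrative resource and key names:

```yaml
Resources:
  UsersTable:
    # AWS::Serverless::SimpleTable creates a DynamoDB table with a
    # single primary key; SAM expands it during deployment.
    Type: AWS::Serverless::SimpleTable
    Properties:
      PrimaryKey:
        Name: userId
        Type: String
```

After editing the template, updating the existing stack from the IDE (or with the SAM CLI) applies the change.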

