Amazon Web Services Blog

New Automation Features In AWS Systems Manager

Today we are announcing additional automation features inside of AWS Systems Manager. If you haven’t used Systems Manager yet, it’s a service that provides a unified user interface so you can view operational data from multiple AWS services, and it allows you to automate operational tasks across your AWS resources. With this new release, it just got even more powerful. We have added capabilities to AWS Systems Manager that enable you to build, run, and share automations with others on your team or inside your organisation, making managing your infrastructure more repeatable and less error-prone.

Inside the AWS Systems Manager console, on the navigation menu, there is an item called Automation. If I click this menu item, I will see the Execute automation button. When I click on this, I am asked what document I want to run. AWS provides a library of documents that I could choose from; however, today I am going to build my own, so I will click on the Create document button. This takes me to a new screen that allows me to create a document (sometimes referred to as an automation playbook) that, amongst other things, executes Python or PowerShell scripts.

The console gives me two options for editing a document: a YAML editor, or the “Builder” tool that provides a guided, step-by-step user interface with the ability to include documentation for each workflow step. So, let’s take a look by building and running a simple automation.

When I create a document using the Builder tool, the first thing required is a document name. Next, I need to provide a description. As you can see below, I’m able to use Markdown to format the description. The description is an excellent opportunity to describe what your document does; this is valuable since most users will want to share these documents with others on their team and build a library of documents to solve everyday problems. Optionally, I am asked to provide parameters for my document. 
These parameters can be used in all of the scripts that you will create later. In my example, I have created three parameters: imageId, tagValue, and instanceType. When I come to execute this document, I will have the opportunity to provide values for these parameters that will override any defaults that I set.

When someone executes my document, the scripts that are executed will interact with AWS services. A document runs with the user’s permissions for most of its actions, along with the option of providing an Assume role. However, for documents with the Run a script action, the role is required when the script is calling any AWS API. You can set the Assume role globally in the Builder tool; however, I like to add a parameter called assumeRole to my document. This gives anyone that is executing it the ability to provide a different one. You then wire this parameter up to the global Assume role by using the {{assumeRole}} syntax in the Assume role property textbox. (I have called my parameter assumeRole, but you could call it whatever you like; just make sure that the name you give the parameter is what you put in the double curly brace syntax, e.g. {{yourParamName}}.)

Once my document is set up, I then need to create the first step of my document. Your document can contain one or more steps, and you can create sophisticated workflows with branching, for example based on a parameter or on the failure of a step. In this example, though, I am going to create three steps that execute one after another. Again, you need to give the step a name and a description; this description can also include Markdown. You need to select an Action type; for this example I will choose Run a script.

With the Run a script action type, I get to run a script in Python or PowerShell without requiring any infrastructure to run it. It’s important to realise that this script will not be running on one of your EC2 instances; the scripts run in a managed compute environment. 
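The wiring described here can be sketched as a document skeleton. This is a minimal sketch, not the exact document from the walkthrough: the defaults, the runtime version, and the step layout are my own placeholder choices, while the parameter names, the aws:executeScript action, and the InputPayload mechanism are the real ones.

```yaml
# Sketch of an Automation document wiring parameters to a script step.
schemaVersion: '0.3'
description: Launch and verify an EC2 instance
assumeRole: '{{ assumeRole }}'
parameters:
  assumeRole:
    type: String
    description: ARN of the role the automation runs as
  imageId:
    type: String
  tagValue:
    type: String
    default: LaunchedBySsmAutomation
  instanceType:
    type: String
    default: t2.micro
mainSteps:
  - name: launchEc2Instance
    action: aws:executeScript
    inputs:
      Runtime: python3.8
      Handler: launch_instance
      InputPayload:
        image_id: '{{ imageId }}'
        tag_value: '{{ tagValue }}'
        instance_type: '{{ instanceType }}'
      Script: |-
        # script body goes here
    outputs:
      - Name: payload
        Selector: $.Payload
        Type: StringMap
```

Note how the global parameters are referenced with the double curly brace syntax inside InputPayload, and how the step declares an output that later steps can reference.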
You can configure an Amazon CloudWatch log group on the preferences page to send outputs to a log group of your choice. In this demo, I write some Python that creates an EC2 instance. You will notice that this script is using the AWS SDK for Python (Boto3). I create an instance based upon an image_id, tag_value, and instance_type that are passed in as parameters to the script.

To pass parameters into the script, in the Additional inputs section, I select InputPayload as the input type. I then use a particular YAML format in the Input Value text box to wire up the global parameters to the parameters that I am going to use in the script. You will notice that again I have used the double curly brace syntax to reference the global parameters, e.g. {{imageId}}. In the Outputs section, I also wire up an output parameter that can be used by subsequent steps.

Next, I will add a second step to my document. This time I will poll the instance to see if its status has switched to ok. The exciting thing about this code is that the InstanceId is passed into the script from a previous step; this is an example of how the execution steps can be chained together to use the outputs of earlier steps.

```python
def poll_instance(events, context):
    import boto3
    import time
    ec2 = boto3.client('ec2')
    instance_id = events['InstanceId']
    print('[INFO] Waiting for instance to enter Status: Ok', instance_id)
    instance_status = "null"
    while True:
        res = ec2.describe_instance_status(InstanceIds=[instance_id])
        if len(res['InstanceStatuses']) == 0:
            print("Instance Status Info is not available yet")
            time.sleep(5)
            continue
        instance_status = res['InstanceStatuses'][0]['InstanceStatus']['Status']
        print('[INFO] Polling to get status of the instance', instance_status)
        if instance_status == 'ok':
            break
        time.sleep(10)
    return {'Status': instance_status, 'InstanceId': instance_id}
```

To pass the parameters into the second step, notice that I use the double curly brace syntax to reference the output of a previous step. 
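The first step's launch script is not reproduced above; a minimal sketch of what such a handler might look like follows. The helper function, the tag key, and the return shape are my own choices, not taken from the original; only the handler signature (events, context) and the payload keys match the description in the text.

```python
def build_run_instances_args(events):
    # Pure helper: translate the InputPayload values into
    # run_instances keyword arguments. The tag key is my own choice.
    return {
        'ImageId': events['image_id'],
        'InstanceType': events['instance_type'],
        'MinCount': 1,
        'MaxCount': 1,
        'TagSpecifications': [{
            'ResourceType': 'instance',
            'Tags': [{'Key': 'LaunchedBy', 'Value': events['tag_value']}],
        }],
    }

def launch_instance(events, context):
    # The managed compute environment provides boto3; importing inside
    # the handler mirrors the style of the polling script.
    import boto3
    ec2 = boto3.client('ec2')
    res = ec2.run_instances(**build_run_instances_args(events))
    return {'InstanceId': res['Instances'][0]['InstanceId']}
```

The returned dictionary is what the step's output selector would pick up and make available to the next step.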
The value in the Input value textbox, {{launchEc2Instance.payload}}, is the name of the step, launchEc2Instance, followed by the name of the output parameter, payload.

Lastly, I will add a final step. This step will run a PowerShell script and use the AWS Tools for PowerShell. I’ve added this step purely to show that you can use PowerShell as an alternative to Python. You will note that on the first line I have to install the AWSPowerShell.NetCore module, using the -Force switch, before I can start interacting with AWS services. All this step does is take the InstanceId output from the LaunchEc2Instance step and use it to return the InstanceType of the EC2 instance. It’s important to note that I have to pass the parameters from the LaunchEc2Instance step to this step by configuring the Additional inputs in the same way I did earlier.

Now that our document is created, we can execute it. I go to the Actions & Change section of the menu and select Automation; from this screen, I click on the Execute automation button. I then get to choose the document I want to execute. Since this is a document I created, I can find it on the Owned by me tab. If I click the LaunchInstance document that I created earlier, I get a document details screen that shows me the description I added. This nicely formatted description allows me to generate documentation for my document and enables others to understand what it is trying to achieve.

When I click Next, I am asked to provide any input parameters for my document. I add the imageId and the ARN for the role that I want to use when executing this automation. It’s important to remember that this role will need to have permissions to call any of the services that are requested by the scripts; in my example, that means it needs to be able to create EC2 instances. Once the document executes, I am taken to a screen that shows the steps of the document and gives me details about how long each step took and the success or failure of each step. 
I can also drill down into each step and examine the logs. As you can see, all three steps of my document completed successfully, and if I go to the Amazon Elastic Compute Cloud (EC2) console, I will now have an EC2 instance that I created with the tag LaunchedBySsmAutomation. These new features can be found today in all Regions inside the AWS Systems Manager console, so you can start using them straight away. Happy Automating! — Martin;

Accelerate SQL Server Always On Deployments with AWS Launch Wizard

Customers sometimes tell us that while they are experts in their domain, their unfamiliarity with the cloud can make getting started more challenging and time-consuming. They want to be able to quickly and easily deploy enterprise applications on AWS without needing prior tribal knowledge of the AWS platform and best practices, so as to accelerate their journey to the cloud.

Announcing AWS Launch Wizard for SQL Server

AWS Launch Wizard for SQL Server is a simple, intuitive, and free-to-use wizard-based experience that enables quick and easy deployment of high availability SQL Server solutions on AWS. The wizard walks you through an end-to-end deployment of Always On Availability Groups using prescriptive guidance. By answering a few high-level questions about the application, such as the required performance characteristics, the wizard will take care of identifying, provisioning, and configuring matching AWS resources, such as Amazon Elastic Compute Cloud (EC2) instances, Amazon Elastic Block Store (EBS) volumes, and an Amazon Virtual Private Cloud. Based on your selections, the wizard presents you with a dynamically generated estimated cost of deployment; as you modify your resource selections, you can see an updated cost assessment to help you match your budget. Once you approve, AWS Launch Wizard for SQL Server provisions these resources and configures them to create a fully functioning, production-ready SQL Server Always On deployment in just a few hours. The created resources are tagged, making it easy to identify and work with them, and the wizard also creates AWS CloudFormation templates, providing you with a baseline for repeatable and consistent application deployments. 
Subsequent SQL Server Always On deployments become faster and easier, as AWS Launch Wizard for SQL Server takes care of the required infrastructure on your behalf, determining the resources to match your application’s requirements such as performance, memory, and bandwidth (you can modify the recommended defaults if you wish). If you want to bring your own SQL Server licenses, or have other custom requirements for the instances, you can also choose to use your own custom AMIs, provided they meet certain requirements (noted in the service documentation).

Using AWS Launch Wizard for SQL Server

To get started with my deployment, in the Launch Wizard console I click the Create deployment button to start the wizard and select SQL Server Always On. The wizard requires an AWS Identity and Access Management (IAM) role granting it permissions to deploy and access resources in my account. The wizard will check to see if a role named AmazonEC2RoleForLaunchWizard exists in my account; if so, it will be used, otherwise a new role will be created. The new role will have two AWS managed policies, AmazonSSMManagedInstanceCore and AmazonEC2RolePolicyforLaunchWizard, attached to it. Note that this one-time setup process will typically be performed by an IAM administrator for your organization. However, the IAM user does not have to be an administrator; the CreateRole, AttachRolePolicy, and GetRole permissions are sufficient to perform these operations. After the role is created, the IAM administrator can delegate the application deployment process to another IAM user who, in turn, must have the AWS Launch Wizard for SQL Server managed policy, AmazonLaunchWizardFullaccess, attached. 
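The one-time role setup described here could be sketched programmatically with boto3. This is an illustrative sketch only: the role name and the two managed policy names come from the text above, but the trust-policy principal (ec2.amazonaws.com) and the managed-policy ARN path are my assumptions, and the wizard normally performs this setup itself.

```python
import json

def launch_wizard_trust_policy(service='ec2.amazonaws.com'):
    # Pure helper: a trust policy for the role. The EC2 service
    # principal here is an assumption, not taken from the original.
    return {
        'Version': '2012-10-17',
        'Statement': [{
            'Effect': 'Allow',
            'Principal': {'Service': service},
            'Action': 'sts:AssumeRole',
        }],
    }

def ensure_launch_wizard_role(role_name='AmazonEC2RoleForLaunchWizard'):
    # Mirrors the check-then-create behaviour described above; needs
    # the CreateRole, AttachRolePolicy, and GetRole permissions.
    import boto3
    iam = boto3.client('iam')
    try:
        iam.get_role(RoleName=role_name)
    except iam.exceptions.NoSuchEntityException:
        iam.create_role(
            RoleName=role_name,
            AssumeRolePolicyDocument=json.dumps(launch_wizard_trust_policy()),
        )
        for policy in ('AmazonSSMManagedInstanceCore',
                       'AmazonEC2RolePolicyforLaunchWizard'):
            iam.attach_role_policy(
                RoleName=role_name,
                PolicyArn=f'arn:aws:iam::aws:policy/{policy}',
            )
```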
With the application type selected, I can proceed by clicking Next to start configuring my application settings, beginning with setting a deployment name and, optionally, an Amazon Simple Notification Service (SNS) topic that AWS Launch Wizard for SQL Server can use for notifications and alerts.

In the connectivity options, I can choose to use an existing Amazon Virtual Private Cloud or have a new one created. I can also specify the name of an existing key pair (or create one); the key pair will be used if I want to RDP into my instances or obtain the administrator password. For a new Virtual Private Cloud, I can also configure the IP address or range to which remote desktop access will be permitted.

Instances launched by AWS Launch Wizard for SQL Server will be domain-joined to an Active Directory. I can select an existing AWS Managed AD or an on-premises AD, or have the wizard create a new AWS Managed Directory for my deployment.

The final application settings relate to SQL Server. This is also where I can specify a custom AMI to be used if I want to bring my own SQL Server licenses or have other customization requirements. Here, I’m just going to create a new SQL Server service account and use an Amazon-provided image with the license included. Note that if I choose to use an existing service account, it should be part of the Managed AD in which you are deploying.

Clicking Next takes me to a page to define the infrastructure requirements of my application in terms of CPU, network performance, and memory. I can also select the type of storage (solid state vs. magnetic) and the required SQL Server throughput. The wizard will recommend the resource types to be launched, but I can override this and select specific instance and volume types, and I can also set custom tags to apply to the resources that will be created. The final section of this page shows me the cost estimate based on my selections. 
The data in this panel is dynamically generated based on my prior selections, and I can go back and forth in the wizard, tuning my selections to match my budget. When I am happy with my selections, clicking Next takes me to the wizard’s final Review page, where I can view a summary of my selections and acknowledge that AWS resources and AWS Identity and Access Management (IAM) permissions will be created on my behalf, along with the estimated cost shown in the estimator on the previous page. My final step is to click Deploy to start the deployment process. Status updates during deployment can be viewed on the Deployments page, with a final notification to inform me of completion.

Post-deployment Management

Once my application has been deployed, I can manage its resources easily. Firstly, I can navigate to Deployments on the AWS Launch Wizard for SQL Server dashboard and, using the Actions dropdown, jump to the Amazon Elastic Compute Cloud (EC2) console, where I can manage the EC2 instances, EBS volumes, Active Directory, etc. Or, using the same Actions dropdown, I can access SQL Server via the remote desktop gateway instance. If I want to manage future updates and patches to my application using AWS Systems Manager, another Actions option takes me to the Systems Manager dashboard for managing my application. I can also use AWS Launch Wizard for SQL Server to delete deployments performed using the wizard; it will perform a rollback of all the AWS CloudFormation stacks that the service created.

Now Available

AWS Launch Wizard for SQL Server is generally available and you can use it in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), South America (São Paulo), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Seoul), Asia Pacific (Tokyo), EU (Frankfurt), EU (Ireland), EU (London), and EU (Stockholm). 
Support for the AWS Regions in China, and for the GovCloud Region, is in the works. There is no additional charge for using AWS Launch Wizard for SQL Server; you pay only for the resources it creates. — Steve

AWS Data Exchange – Find, Subscribe To, and Use Data Products

We live in a data-intensive, data-driven world! Organizations of all types collect, store, process, and analyze data and use it to inform and improve their decision-making processes. The AWS Cloud is well-suited to all of these activities; it offers vast amounts of storage, access to any conceivable amount of compute power, and many different types of analytical tools.

In addition to generating and working with data internally, many organizations generate and then share data sets with the general public or within their industry. We made some initial steps to encourage this back in 2008 with the launch of AWS Public Data Sets (Paging Researchers, Analysts, and Developers). That effort has evolved into the Registry of Open Data on AWS (New – Registry of Open Data on AWS (RODA)), which currently contains 118 interesting datasets, with more added all the time.

New AWS Data Exchange

Today, we are taking the next step forward, and are launching AWS Data Exchange. This addition to AWS Marketplace contains over one thousand licensable data products from over 80 data providers. There’s a diverse catalog of free and paid offerings, in categories such as financial services, health care / life sciences, geospatial, weather, and mapping. If you are a data subscriber, you can quickly find, procure, and start using these products. If you are a data provider, you can easily package, license, and deliver products of your own. Let’s take a look at Data Exchange from both vantage points, and then review some important details.

Let’s define a few important terms before diving in:

Data Provider – An organization that has one or more data products to share.
Data Subscriber – An AWS customer that wants to make use of data products from Data Providers.
Data Product – A collection of data sets.
Data Set – A container for data assets that belong together, grouped by revision.
Revision – A container for one or more data assets as of a point in time. 
Data Asset – The actual data, in any desired format.

AWS Data Exchange for Data Subscribers

As a data subscriber, I click View product catalog and start out in the Discover data section of the AWS Data Exchange Console. Products are available from a long list of vendors. I can enter a search term, click Search, and then narrow down my results to show only products that have a Free pricing plan. I can also search for products from a specific vendor that match a search term and have a Free pricing plan.

The second one looks interesting and relevant, so I click on 5 Digit Zip Code Boundaries US (TRIAL) to learn more. I think I can use this in my app and want to give it a try, so I click Continue to subscribe. I review the details, read the Data Subscription Agreement, and click Subscribe. The subscription is activated within a few minutes, and I can see it in my list of Subscriptions. Then I can download the data set to my S3 bucket and take a look. I click into the data set and find the Revisions; I click into the revision, and I can see the assets (containing the actual data) that I am looking for. I select the asset(s) that I want and click Export to Amazon S3. Then I choose a bucket and click Export to proceed. This creates a job that will copy the data to my bucket (extra IAM permissions are required here; read the Access Control documentation for more info).

The jobs run asynchronously and copy data from Data Exchange to the bucket. Jobs can be created interactively, as I just showed you, or programmatically. Once the data is in the bucket, I can access and process it in any desired way. I could, for example, use an AWS Lambda function to parse the ZIP file and use the results to update an Amazon DynamoDB table. Or, I could run an AWS Glue crawler to get the data into my Glue catalog, run an Amazon Athena query, and visualize the results in an Amazon QuickSight dashboard. 
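A programmatic export could look something like the sketch below, using the boto3 dataexchange client's create_job and start_job operations. The IDs, bucket, and key are placeholders for your own values, and the helper function is my own structure, not part of the service API.

```python
def export_job_details(data_set_id, revision_id, asset_id, bucket, key):
    # Pure helper: the Details structure for an EXPORT_ASSETS_TO_S3 job.
    return {
        'ExportAssetsToS3': {
            'DataSetId': data_set_id,
            'RevisionId': revision_id,
            'AssetDestinations': [
                {'AssetId': asset_id, 'Bucket': bucket, 'Key': key},
            ],
        }
    }

def export_asset_to_s3(data_set_id, revision_id, asset_id, bucket, key):
    # Create and start the asynchronous export job; it copies the
    # asset from AWS Data Exchange into the given S3 bucket.
    import boto3
    dx = boto3.client('dataexchange')
    job = dx.create_job(
        Type='EXPORT_ASSETS_TO_S3',
        Details=export_job_details(data_set_id, revision_id,
                                   asset_id, bucket, key),
    )
    dx.start_job(JobId=job['Id'])
    return job['Id']
```

As with the console flow, the job runs asynchronously; you would poll get_job (or wait for the completion event) before reading the exported object.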
Subscriptions can last from 1 to 36 months, with an auto-renew option; subscription fees are billed to my AWS account each month.

AWS Data Exchange for Data Providers

Now I am going to put on my “data provider” hat and show you the basics of the publication process (the User Guide contains a more detailed walk-through). In order to be able to license data, I must agree to the terms and conditions, and my application must be approved by AWS.

After I apply and have been approved, I start by creating my first data set. I click Data sets in the navigation, and then Create data set. I describe my data set, with the option to tag it, and then click Create. Next, I click Create revision to create the first revision of the data set. I add a comment, with the option to tag the revision, before clicking Create. I can copy my data from an existing S3 location, or I can upload it from my desktop. I choose the second option, select my file, and it appears as an Imported asset after the import job completes. I review everything and click Finalize for the revision.

My data set is ready right away, and now I can use it to create one or more products. The console outlines the principal steps. I can set up public pricing information for my product. AWS Data Exchange lets me create private pricing plans for individual customers, and it also allows my existing customers to bring their existing (pre-AWS Data Exchange) licenses for my products along with them by creating a Bring Your Own Subscription offer. I can use the Data Subscription Agreement (DSA) provided by AWS Data Exchange, use it as the basis for my own, or upload an existing one.

I can use the AWS Data Exchange API to create, update, list, and manage data sets and their revisions. Functions include CreateDataSet, UpdateDataSet, ListDataSets, CreateRevision, UpdateAsset, and CreateJob. 
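The provider-side flow just described (create a data set, add a revision, import an asset, finalize) could be sketched with the same API. This is a simplified illustration: the function name and sequencing are mine, and real code would wait for the import job to finish before finalizing the revision.

```python
def import_job_details(data_set_id, revision_id, bucket, key):
    # Pure helper: the Details structure for an IMPORT_ASSETS_FROM_S3 job.
    return {
        'ImportAssetsFromS3': {
            'DataSetId': data_set_id,
            'RevisionId': revision_id,
            'AssetSources': [{'Bucket': bucket, 'Key': key}],
        }
    }

def publish_first_revision(name, bucket, key):
    # Create a data set, add a revision, import one asset from S3,
    # and finalize the revision. Name, bucket, and key are placeholders.
    import boto3
    dx = boto3.client('dataexchange')
    data_set = dx.create_data_set(AssetType='S3_SNAPSHOT',
                                  Name=name, Description=name)
    revision = dx.create_revision(DataSetId=data_set['Id'])
    job = dx.create_job(
        Type='IMPORT_ASSETS_FROM_S3',
        Details=import_job_details(data_set['Id'], revision['Id'],
                                   bucket, key),
    )
    dx.start_job(JobId=job['Id'])
    # (wait for the import job to complete here in real code)
    dx.update_revision(DataSetId=data_set['Id'],
                       RevisionId=revision['Id'], Finalized=True)
    return data_set['Id'], revision['Id']
```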
Things to Know

Here are a few things that you should know about Data Exchange:

Subscription Verification – The data provider can require additional information in order to verify my subscription. If that is the case, the console will ask me to supply the info, and the provider will review it and approve or decline within 45 days.

Revisions & Notifications – The data provider can revise their data sets at any time. The data subscriber receives a CloudWatch Event each time a product that they are subscribed to is updated; this can be used to launch a job to retrieve the latest revision of the assets. If you are implementing a system of this type and need some test events, find and subscribe to the Heartbeat product.

Data Categories & Types – Certain categories of data are not permitted on AWS Data Exchange. For example, your data products may not include information that can be used to identify any person, unless that information is already legally available to the public. See the Publishing Guidelines for details on which categories of data are permitted.

Data Provider Location – Data providers must be a valid legal entity domiciled in the United States or in a member state of the EU.

Available Now

AWS Data Exchange is available now and you can start using it today. If you own some interesting data and would like to publish it, start here. If you are a developer, browse the product catalog and look for data that will add value to your product. — Jeff;

New – Import Existing Resources into a CloudFormation Stack

With AWS CloudFormation, you can model your entire infrastructure with text files. In this way, you can treat your infrastructure as code and apply software development best practices, such as putting it under version control, or reviewing architectural changes with your team before deployment.

Sometimes AWS resources initially created using the console or the AWS Command Line Interface (CLI) need to be managed using CloudFormation. For example, you (or a different team) may create an IAM role, a Virtual Private Cloud, or an RDS database in the early stages of a migration, and then you have to spend time including them in the same stack as the final application. In such cases, you often end up recreating the resources from scratch using CloudFormation, and then migrating configuration and data from the original resource. To make these steps easier for our customers, you can now import existing resources into a CloudFormation stack!

It was already possible to remove resources from a stack without deleting them by setting the DeletionPolicy to Retain. This, together with the new import operation, enables a new range of possibilities. For example, you are now able to:

Create a new stack importing existing resources.
Import existing resources into an already created stack.
Migrate resources across stacks.
Remediate a detected drift.
Refactor nested stacks by deleting child stacks from one parent and then importing them into another parent stack.

To import existing resources into a CloudFormation stack, you need to provide:

A template that describes the entire stack, including both the resources to import and (for existing stacks) the resources that are already part of the stack. Each resource to import must have a DeletionPolicy attribute in the template. This enables easy reverting of the operation in a completely safe manner. 
A unique identifier for each target resource, for example the name of the Amazon DynamoDB table or of the Amazon Simple Storage Service (S3) bucket you want to import.

During the resource import operation, CloudFormation checks that:

The imported resources do not already belong to another stack in the same region (be careful with global resources such as IAM roles).
The target resources exist and you have sufficient permissions to perform the operation.
The properties and configuration values are valid against the resource type schema, which defines its required and acceptable properties and supported values.

The resource import operation does not check that the template configuration and the actual configuration are the same. Since the import operation supports the same resource types as drift detection, I recommend running drift detection after importing resources into a stack.

Importing Existing Resources into a New Stack

In my AWS account, I have an S3 bucket and a DynamoDB table, both with some data inside, and I’d like to manage them using CloudFormation. In the CloudFormation console, I have two new options: I can create a new stack importing existing resources, or I can import resources into an existing stack. In this case, I want to start from scratch, so I create a new stack. The next step is to provide a template with the resources to import. I upload the following template with two resources to import: a DynamoDB table and an S3 bucket.

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Import test
Resources:

  ImportedTable:
    Type: AWS::DynamoDB::Table
    DeletionPolicy: Retain
    Properties:
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH

  ImportedBucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
```

In this template I am setting DeletionPolicy to Retain for both resources. In this way, if I remove them from the stack, they will not be deleted. 
This is a good option for resources which contain data you don’t want to delete by mistake, or that you may want to move to a different stack in the future. It is mandatory for imported resources to have a deletion policy set, so you can safely and easily revert the operation and be protected from mistakenly deleting resources that were imported by someone else.

I now have to provide identifiers to map the logical IDs in the template to the existing resources. In this case, I use the DynamoDB table name and the S3 bucket name. For other resource types, there may be multiple ways to identify them, and you can select which property to use in the drop-down menus.

In the final recap, I review the changes before applying them. Here I check that I’m targeting the right resources to import, with the right identifiers. This is actually a CloudFormation change set that will be executed when I import the resources. When importing resources into an existing stack, no changes are allowed to the existing resources of the stack; the import operation will only allow the change set action of Import. Changes to parameters are allowed as long as they don’t cause changes to resolved values of properties in existing resources. You can change the template for existing resources to replace hard-coded values with a Ref to a resource being imported. For example, you may have a stack with an EC2 instance using an existing IAM role that was created using the console. You can now import the IAM role into the stack and replace the hard-coded value used by the EC2 instance with a Ref to the role.

Moving on, each resource has its corresponding import events in the CloudFormation console. When the import is complete, in the Resources tab, I see that the S3 bucket and the DynamoDB table are now part of the stack. To be sure the imported resources are in sync with the stack template, I use drift detection. 
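The console flow has a programmatic equivalent: a change set of type IMPORT created with the template and the resource identifiers, followed by drift detection on the resulting stack. A sketch with boto3, where the stack, table, and bucket names are placeholders and the logical IDs match the template shown in the walkthrough:

```python
def resources_to_import(table_name, bucket_name):
    # Pure helper: map the template's logical IDs to the existing
    # resources via their unique identifiers.
    return [
        {'ResourceType': 'AWS::DynamoDB::Table',
         'LogicalResourceId': 'ImportedTable',
         'ResourceIdentifier': {'TableName': table_name}},
        {'ResourceType': 'AWS::S3::Bucket',
         'LogicalResourceId': 'ImportedBucket',
         'ResourceIdentifier': {'BucketName': bucket_name}},
    ]

def import_resources(stack_name, template_body, table_name, bucket_name):
    # Create and execute an IMPORT change set, then start drift
    # detection to confirm the resources match the template.
    import boto3
    cfn = boto3.client('cloudformation')
    change_set = 'import-existing-resources'
    cfn.create_change_set(
        StackName=stack_name,
        ChangeSetName=change_set,
        ChangeSetType='IMPORT',
        TemplateBody=template_body,
        ResourcesToImport=resources_to_import(table_name, bucket_name),
    )
    cfn.get_waiter('change_set_create_complete').wait(
        StackName=stack_name, ChangeSetName=change_set)
    cfn.execute_change_set(StackName=stack_name, ChangeSetName=change_set)
    detection = cfn.detect_stack_drift(StackName=stack_name)
    return detection['StackDriftDetectionId']
```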
All stack-level tags, including automatically created tags, are propagated to resources that CloudFormation supports. For example, I can use the AWS CLI to get the tag set associated with the S3 bucket I just imported into my stack. Those tags give me the CloudFormation stack name and ID, and the logical ID of the resource in the stack template:

```
$ aws s3api get-bucket-tagging --bucket danilop-toimport
{
  "TagSet": [
    {
      "Key": "aws:cloudformation:stack-name",
      "Value": "imported-stack"
    },
    {
      "Key": "aws:cloudformation:stack-id",
      "Value": "arn:aws:cloudformation:eu-west-1:123412341234:stack/imported-stack/..."
    },
    {
      "Key": "aws:cloudformation:logical-id",
      "Value": "ImportedBucket"
    }
  ]
}
```

Available Now

You can use the new CloudFormation import operation via the console, AWS Command Line Interface (CLI), or AWS SDKs, in the following regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Canada (Central), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), EU (Frankfurt), EU (Ireland), EU (London), EU (Paris), and South America (São Paulo).

It is now simpler to manage your infrastructure as code; you can learn more about bringing existing resources into CloudFormation management in the documentation. — Danilo

15 Years of AWS Blogging!

I wrote the first post (Welcome) to this blog exactly 15 years ago today. It is safe to say that I never thought that writing those introductory sentences would take my career in such a new and ever-challenging direction. This seems like as good a time as any to document and share the story of how the blog came to be, share some of my favorite posts, and talk about the actual mechanics of writing and blogging.

Before the Beginning

Back in 1999 or so, I was part of the Visual Basic team at Microsoft. XML was brand new, and Dave Winer was just starting to talk about RSS. The intersection of VB6, XML, and RSS intrigued me, and I built a little app called Headline Viewer as a side project. I put it up for download, people liked it, and content owners started to send me their RSS feeds for inclusion. The list of feeds took on a life of its own, and people wanted it just as much as they wanted the app. I also started my third personal blog around this time, after losing the earlier incarnations in server meltdowns.

With encouragement from Aaron Swartz and others, I put Headline Viewer aside and started Syndic8 in late 2001 to collect, organize, and share the feeds. I wrote nearly 90,000 lines of PHP in my personal time, all centered around a very complex MySQL database that included over 50 tables. I learned a lot about hosting, scaling, security, and database management. The site also had an XML-RPC web service interface that supported a very wide range of query and update operations. The feed collection grew to nearly 250,000 over the first couple of years.

I did not know it at the time, but my early experience with XML, RSS, blogging, and web services would turn out to be the skills that set me apart when I applied to work at Amazon. Sometimes, as it turns out, your hobbies and personal interests can end up as career-changing assets and differentiators. 
E-Commerce Web Services

In parallel to all of this, I left Microsoft in 2000 and was consulting in the then-new field of web services. At that time, most of the web services in use were nothing more than cute demos: stock quotes, weather forecasts, and currency conversions. Technologists could marvel at a function call that crossed the Internet and back, but investors simply shrugged and moved on. In mid-2002 I became aware of Amazon’s very first web service (now known as the Product Advertising API). This was, in my eyes, the first useful web service. It did something non-trivial that could not have been done locally, and provided value to both the provider and the consumer. I downloaded the SDK (copies were later made available on the mini-CD shown at right), sent the developers some feedback, and before I knew it I was at Amazon HQ, along with 4 or 5 other early fans of the service, for a day-long special event. Several teams shared their plans with us, and asked for our unvarnished feedback. At some point during the day, one of the presenters said “We launched our first service, developers found it, and were building & sharing apps within 24 hours or so. We are going to look around the company and see if we can put web service interfaces on other parts of our business.” This was my light-bulb moment — Amazon was going to become accessible to developers! I turned to Sarah Bryar (she had extended the invite to the event) and told her that I wanted to be a part of this. She said that they could make that happen, and a few weeks later (summer of 2002), I was a development manager on the Amazon Associates team, reporting to Larry Hughes. In addition to running a team that produced daily reports for each member of the Associates program, Larry gave me the freedom to “help out” with the nascent web services effort. I wrote sample programs, helped out on the forums, and even contributed to the code base.
I went through the usual Amazon interview loop, and had to write some string-handling code on the white board.

Web Services Evangelist

A couple of months into the job, Sarah and Rob Frederick approached me and asked me to speak at a conference because no one else wanted to. I was more than happy to do this, and a few months later Sarah offered me the position of Web Services Evangelist. This was a great match for my skills and I took to it right away, booking events with any developer, company, school, or event that wanted to hear from me! Later in 2003 I was part of a brainstorming session at Jeff Bezos’ house. Jeff, Andy Jassy, Al Vermeulen, me, and a few others (I should have kept better notes) spent a day coming up with a long list of ideas that evolved into EC2, S3, RDS, and so forth. I am fairly sure that this is the session discussed in How AWS Came to Be, but I am not 100% certain. Using this list as a starting point, Andy started to write a narrative to define the AWS business. I was fortunate enough to have an office just 2 doors up the hall from him, and spent a lot of time reviewing and commenting on his narrative (read How Jeff Bezos Turned Narrative into Amazon’s Competitive Advantage to learn how we use narratives to define businesses and drive decisions). I also wrote some docs of my own that defined our plans for a developer relations team.

We Need a Blog

As I read through early drafts of Andy’s first narrative, I began to get a sense that we were going to build something complex & substantial. My developer relations plan included a blog, and I spent a ton of time discussing the specifics in meetings with Andy and Drew Herdener. I remember that it was very hard for me to define precisely what this blog would look like, and how it would work from a content-generation and approval perspective. As is the Amazon way, every answer that I supplied basically begat even more questions from Andy and Drew!
We ultimately settled on a few ground rules regarding tone and review, and I was raring to go. I was lucky enough to be asked to accompany Jeff Bezos to the second Foo Camp as his technical advisor. Among many others, I met Ben and Mena Trott of Six Apart, and they gave me a coupon for 1000 free days of access to TypePad, their blogging tool.

We Have a Blog

Armed with that coupon, I returned to Seattle, created the AWS Blog (later renamed the AWS News Blog), and wrote the first two posts (Welcome and Browse Node API) later that year. Little did I know that those first couple of posts would change the course of my career! I struggled a bit with “voice” in the early days, and could not decide if I was writing as the company, the group, the service, or simply as me. After some experimentation, I found that a personal, first-person style worked best and that’s what I settled on. In the early days, we did not have much of a process or a blog team. Interesting topics found their way into my inbox, and I simply wrote about them as I saw fit. I had an incredible amount of freedom to pick and choose topics, and words, and I did my best to be a strong, accurate communicator while steering clear of controversies that would simply cause more work for my colleagues in Amazon PR.

Launching AWS

Andy started building teams and I began to get ready for the first launches. We could have started with a dramatic flourish, proclaiming that we were about to change the world with the introduction of a broad lineup of cloud services. But we don’t work that way, and are happy to communicate in a factual, step-by-step fashion. It was definitely somewhat disconcerting to see that Business Week characterized our early efforts as Amazon’s Risky Bet, but we accept that our early efforts can sometimes be underappreciated or even misunderstood.
Here are some of the posts that I wrote for the earliest AWS services and features:

SQS – I somehow neglected to write about the first beta of Amazon Simple Queue Service (SQS), and the first mention is in a post called Queue Scratchpad. This post references AWS Zone, a site built by long-time Amazonian Elena Dykhno before she even joined the company! I did manage to write a post for Simple Queue Service Beta 2. At this point I am sure that many people wondered why their bookstore was trying to sell message queues, but we didn’t see the need to over-explain ourselves or to telegraph our plans.

S3 – I wrote my first Amazon S3 post while running to catch a plane, but I did manage to cover all of the basics: a service overview, definitions of major terms, pricing, and an invitation for developers to create cool applications!

EC2 – EC2 had been “just about to launch” for quite some time, and I knew that the launch would be a big deal. I had already teased the topic of scalable on-demand web services in Sometimes You Need Just a Little…, and I was ever so ready to actually write about EC2. Of course, our long-scheduled family vacation was set to coincide with the launch, and I wrote part of the Amazon EC2 Beta post while sitting poolside in Cabo San Lucas, Mexico! That post was just about perfect, but I probably should have made it clear that “AMI” should be pronounced, and not spelled out, as some pundits claim.

EBS – Initially, all of the storage on EC2 instances was ephemeral, and would be lost when the instance was shut down. I think it is safe to say that the launch of EBS (Amazon EBS (Elastic Block Store) – Bring Us Your Data) greatly simplified the use of EC2.

These are just a few of my early posts, but they definitely laid the foundation for what has followed. I still take great delight in reading those posts, thinking back to the early days of the cloud.
AWS Blogging Today

Over the years, the fraction of my time that is allocated to blogging has grown, and now stands at about 80%. This leaves me with time to do a little bit of public speaking, meet with customers, and to do what I can to keep up with this amazing and ever-growing field. I thoroughly enjoy the opportunities that I have to work with the AWS service teams that work so hard to listen to our customers and do their best to respond with services that meet their needs. We now have a strong team and an equally strong production process for new blog posts. Teams request a post by creating a ticket, attaching their PRFAQ (Press Release + FAQ, another type of Amazon document), and giving the bloggers early internal access to their service. We review the materials, ask hard questions, use the service, and draft our post. We share the drafts internally, read and respond to feedback, and eagerly await the go-ahead to publish.

Planning and Writing a Post

With 3100 posts under my belt (and more on the way), here is what I focus on when planning and writing a post:

Learn & Be Curious – This is an Amazon Leadership Principle. Writing is easy once I understand what I want to say. I study each PRFAQ, ask hard questions, and am never afraid to admit that I don’t grok some seemingly obvious point. Time after time I find myself at the absolute limit of what I can understand and absorb, but that never stops me from trying.

Accuracy – I never shade the truth, and I never use weasel words that could be interpreted in more than one way to give myself an out. The Internet is the ultimate fact-checking vehicle, and I don’t want to be wrong. If I am, I am more than happy to admit it, and to fix the issue.

Readability – I have plenty of words in my vocabulary, but I don’t feel the need to use all of them. I would rather use the most appropriate word than the longest and most obscure one.
I am also cautious with acronyms and enterprise jargon, and try hard to keep my terabytes and tebibytes (ugh) straight.

Frugality – This is also an Amazon Leadership Principle, and I use it in an interesting way. I know that you are busy, and that you don’t need extra words or flowery language. So I try hard (this post notwithstanding) to keep most of my posts at 700 to 800 words. I’d rather you spend the time using the service and doing something useful.

Some Personal Thoughts

Before I wrap up, I have a couple of reflections on this incredible journey…

Writing – Although I love to write, I was definitely not a natural-born writer. In fact, my high school English teacher gave me the lowest possible passing grade and told me that my future would be better if I could only write better. I stopped trying to grasp formal English, and instead started to observe how genuine writers used words & punctuation. That (and decades of practice) made all the difference.

Career Paths – Blogging and evangelism have turned out to be a great match for my skills and interests, but I did not figure this out until I was on the far side of 40. It is perfectly OK to be 20-something, 30-something, or even 40-something before you finally figure out who you are and what you like to do. Keep that in mind, and stay open and flexible to new avenues and new opportunities throughout your career.

Special Thanks – Over the years I have received tons of good advice and 100% support from many great managers while I slowly grew into a full-time blogger: Andy Jassy, Prashant Sridharan, Steve Rabuchin, and Ariel Kelman. I truly appreciate the freedom that they have given me to develop my authorial voice and my blogging skills over the years! Ana Visneski and Robin Park have done incredible work to build a blogging team that supports me and the other bloggers.

Thanks for Reading

And with that, I would like to thank you, dear reader, for your time, attention, and very kind words over the past 15 years.
It has been the privilege of a lifetime to be able to share so much interesting technology with you! — Jeff;  

Cross-Account Cross-Region Dashboards with Amazon CloudWatch

Best practices for AWS cloud deployments include the use of multiple accounts and/or multiple regions. Multiple accounts provide a security and billing boundary that isolates resources and reduces the impact of issues. Multiple regions ensure a high degree of isolation, low latency for end users, and data resiliency of applications. These best practices can come with monitoring and troubleshooting complications. Centralized operations teams, DevOps engineers, and service owners need to monitor, troubleshoot, and analyze applications running in multiple regions and in many accounts. If an alarm is received, an on-call engineer likely needs to log in to a dashboard to diagnose the issue, and might also need to log in to other accounts to view additional dashboards for multiple application components or dependencies. Service owners need visibility of application resources, shared resources, or cross-application dependencies that can impact service availability. Using multiple accounts and/or multiple regions can make it challenging to correlate between components for root cause analysis, and can increase the time to resolution. Announced today, Amazon CloudWatch cross-account cross-region dashboards enable customers to create high-level operational dashboards and utilize one-click drill-downs into more specific dashboards in different accounts, without having to log in and out of different accounts or switch regions. The ability to visualize, aggregate, and summarize performance and operational data across accounts and regions helps reduce friction and thus assists in reducing time to resolution. Cross-account cross-region functionality can also be used purely for navigation, without building dashboards, if, for example, I’m only interested in viewing alarms, resources, or metrics in other accounts and/or regions.
Account Setup

Getting started with cross-account cross-region dashboards is easy, and I also have the choice of integrating with AWS Organizations if I wish. By using Organizations to manage and govern multiple AWS accounts, I can use the CloudWatch console to navigate between Amazon CloudWatch dashboards, metrics, and alarms in any account in my organization, without logging in, as I’ll show in this post. I can also, of course, just set up cross-region dashboards for a single account. In this post I’ll be making use of the integration with Organizations. To support this blog post, I’ve already created an organization and invited, using the Organizations console, several of my other accounts to join. As noted, using Organizations makes it easy for me to select accounts later when I’m configuring my dashboards. I could also choose to not use Organizations and pre-populate a custom account selector, so that I don’t need to remember accounts, or enter the account IDs manually when I need them, as I build my dashboard. You can read more on how to set up an organization in the AWS Organizations User Guide. With my organization set up, I’m ready to start configuring the accounts. My first task is to identify and configure the account in which I will create a dashboard – this is my monitoring account (and I can have more than one). Secondly, I need to identify the accounts (known as member accounts in Organizations) that I want to monitor – these accounts will be configured to share data with my monitoring account. My monitoring account requires a Service Linked Role (SLR) to permit CloudWatch to assume a role in each member account. The console will automatically create this role when I enable the cross-account cross-region option. To set up each member account, I need to enable data sharing, from within the account, with the monitoring account(s).
Starting with my monitoring account, from the CloudWatch console home, I select Settings in the navigation panel to the left. Cross-Account Cross-Region is shown at the top of the page and I click Configure to get started. This takes me to a settings screen that I’ll also use in my member accounts to enable data sharing. For now, in my monitoring account, I click the Edit option to view my cross-account cross-region options. The final step for my monitoring account is to enable the AWS Organization account selector option. This requires an additional role to be deployed to the master account for the organization, to permit the account to access the account list in the organization. The console will guide me through this process for the master account. This concludes setup for my monitoring account, and I can now switch focus to my member accounts and enable data sharing. To do this, I log out of my monitoring account and, for each member account, log in, navigate to the CloudWatch console, and again click Settings before clicking Configure under Cross-Account Cross-Region, as shown earlier. This time I click Share data, enter the IDs of the monitoring account(s) I want to share data with, set the scope of the sharing (read-only access to my CloudWatch data or full read-only access to my account), and then launch a CloudFormation stack with a predefined template to complete the process. Note that I can also elect to share my data with all accounts in the organization. How to do this is detailed in the documentation. That completes configuration of both my monitoring account and the member accounts that my monitoring account will be able to access to obtain CloudWatch data for my resources. I can now proceed to create one or more dashboards in my monitoring account.

Configuring Cross-Account Cross-Region Dashboards

With account configuration complete, it’s time to create a dashboard!
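Under the hood, the predefined CloudFormation template deployed in each member account boils down to an IAM role that the monitoring account is allowed to assume. The following is a minimal sketch of what that trust relationship looks like; the role name matches the one the console uses, but the monitoring account ID is a placeholder and the actual template also attaches the appropriate read-only permissions:

```python
import json

# Hypothetical monitoring account ID; replace with your own.
MONITORING_ACCOUNT_ID = "111111111111"

# Trust policy allowing the monitoring account to assume the sharing role
# in this member account (a sketch of what the predefined CloudFormation
# template sets up, not the full template).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{MONITORING_ACCOUNT_ID}:root"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# Creating the role would be an IAM call along these lines (not executed here):
#   iam.create_role(RoleName="CloudWatch-CrossAccountSharingRole",
#                   AssumeRolePolicyDocument=json.dumps(trust_policy))
print(json.dumps(trust_policy, indent=2))
```

The CloudFormation stack handles this for you; the sketch is only meant to show what "sharing data" means in IAM terms.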
In my member accounts I am running several EC2 instances, in different regions. One member account has one Windows and one Linux instance running in US West (Oregon). My second member account is running three Windows instances in an AWS Auto Scaling group in US East (Ohio). I’d like to create a dashboard giving me insight into CPU and network utilization for all these instances across both accounts and both regions. To get started, I log into the AWS console with my monitoring account and navigate to the CloudWatch console home, click Dashboards, then Create dashboard. Note the new account ID and region fields at the top of the page – now that cross-account cross-region access has been configured, I can also perform ad-hoc inspection across accounts and/or regions without constructing a dashboard. I first give the dashboard a name – I chose Compute – and then select Add widget to add my first set of metrics for CPU utilization. I chose a Line widget and clicked Configure. This takes me to an Add metric graph dialog where I can select the account and region to pull metrics from into my dashboard. With the account and region selected, I can proceed to select the relevant metrics and add all of the instances from my first member account. Switching account and region, I repeat for the instances in my second member account. I then add another widget, this time a Stacked area, for inbound network traffic, again selecting the instances of interest in each of my accounts and regions. Finally, I click Save dashboard. The end result is a dashboard showing CPU utilization and network traffic for my instances and Auto Scaling group across the accounts and regions (note the xa indicator in the top right of each widget, denoting that it represents data from multiple accounts and regions).
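The same dashboard can also be defined as JSON and pushed with the PutDashboard API, which is handy for automation; cross-account widgets carry the source account in the widget properties. Here is a minimal sketch of one CPU widget, assuming the accountId metric-widget property used by cross-account dashboards (the account ID and instance ID are placeholders):

```python
import json

# Placeholder identifiers for illustration only.
dashboard_body = {
    "widgets": [
        {
            "type": "metric",
            "x": 0, "y": 0, "width": 12, "height": 6,
            "properties": {
                "title": "CPU utilization",
                "view": "timeSeries",
                "region": "us-west-2",          # region the metric lives in
                "accountId": "222222222222",    # member account to pull from
                "metrics": [
                    ["AWS/EC2", "CPUUtilization",
                     "InstanceId", "i-0123456789abcdef0"],
                ],
                "period": 300,
            },
        }
    ]
}

# Pushing it from the monitoring account would look like (not executed here):
#   boto3.client("cloudwatch").put_dashboard(
#       DashboardName="Compute",
#       DashboardBody=json.dumps(dashboard_body))
print(json.dumps(dashboard_body)[:60])
```

Each additional widget (the Stacked area network-traffic one, for instance) is just another entry in the widgets list, with its own region and accountId.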
Hovering over a particular instance triggers a fly-out with additional data, and a deep link that will open the CloudWatch homepage in the account and region of the metric.

Availability

Amazon CloudWatch cross-account cross-region dashboards are available for use today in all commercial AWS regions, and you can take advantage of the integration with AWS Organizations in those regions where Organizations is available. — Steve

New – Savings Plans for AWS Compute Services

I first wrote about EC2 Reserved Instances a decade ago! Since I wrote that post, our customers have saved billions of dollars by using Reserved Instances to commit to usage of a specific instance type and operating system within an AWS region. Over the years we have enhanced the Reserved Instance model to make it easier for you to take advantage of the RI discount. This includes:

Regional Benefit – This enhancement gave you the ability to apply RIs across all Availability Zones in a region.

Convertible RIs – This enhancement allowed you to change the operating system or instance type at any time.

Instance Size Flexibility – This enhancement allowed your Regional RIs to apply to any instance size within a particular instance family.

The model, as it stands today, gives you discounts of up to 72%, but it does require you to coordinate your RI purchases and exchanges in order to ensure that you have an optimal mix that covers usage that might change over time.

New Savings Plans

Today we are launching Savings Plans, a new and flexible discount model that provides you with the same discounts as Reserved Instances, in exchange for a commitment to use a specific amount (measured in dollars per hour) of compute power over a one or three year period. Every type of compute usage has an On Demand price and a (lower) Savings Plan price. After you commit to a specific amount of compute usage per hour, all usage up to that amount will be covered by the Savings Plan, and anything past it will be billed at the On Demand rate. If you own Reserved Instances, the Savings Plan applies to any On Demand usage that is not covered by the RIs. We will continue to sell RIs, but Savings Plans are more flexible and I think many of you will prefer them! Savings Plans are available in two flavors:

Compute Savings Plans provide the most flexibility and help to reduce your costs by up to 66% (just like Convertible RIs).
The plans automatically apply to any EC2 instance regardless of region, instance family, operating system, or tenancy, including those that are part of EMR, ECS, or EKS clusters, or launched by Fargate. For example, you can shift from C4 to C5 instances, move a workload from Dublin to London, or migrate from EC2 to Fargate, benefiting from Savings Plan prices along the way, without having to do anything.

EC2 Instance Savings Plans apply to a specific instance family within a region and provide the largest discount (up to 72%, just like Standard RIs). Just like with RIs, your savings plan covers usage of different sizes of the same instance type (such as a c5.4xlarge or c5.large) throughout a region. You can even switch from Windows to Linux while continuing to benefit, without having to make any changes to your savings plan.

Purchasing a Savings Plan

AWS Cost Explorer will help you to choose a Savings Plan, and will guide you through the purchase process. Since my own EC2 usage is fairly low, I used a test account that had more usage. I open AWS Cost Explorer, then click Recommendations within Savings Plans. I choose my Recommendation options, and review the recommendations: Cost Explorer recommends that I purchase $2.40 of hourly Savings Plan commitment, and projects that I will save 40% (nearly $1200) per month, in comparison to On-Demand. This recommendation tries to take into account variable usage or temporary usage spikes in order to recommend the steady state capacity for which we believe you should consider a Savings Plan. In my case, the variable usage averages out to $0.04 per hour that we’re recommending I keep as On-Demand. I can see the recommended Savings Plans at the bottom of the page, select those that I want to purchase, and Add them to my cart. When I am ready to proceed, I click View cart, review my purchases, and click Submit order to finalize them. My Savings Plans become active right away.
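The numbers in that recommendation are easy to sanity-check. A quick back-of-the-envelope sketch, assuming the usual 730-hour billing month (the hourly figures are the ones from the recommendation above):

```python
HOURS_PER_MONTH = 730   # common billing approximation (8760 hours / 12)
commitment = 2.40       # recommended Savings Plan commitment, $/hour
discount = 0.40         # projected savings versus On-Demand

# If the Savings Plan rate is 40% below On-Demand, then the usage covered
# by a $2.40/hour commitment would have cost this much On-Demand:
on_demand_equivalent = commitment / (1 - discount)   # $4.00/hour

# Monthly savings = (On-Demand price - Savings Plan price) x hours.
monthly_savings = (on_demand_equivalent - commitment) * HOURS_PER_MONTH

print(round(monthly_savings))  # about 1168, i.e. "nearly $1200" per month
```

The $0.04/hour of variable usage stays On-Demand by design: a Savings Plan commitment is only worthwhile for usage you sustain every hour.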
I can use Cost Explorer’s Performance & Coverage reports to review my actual savings, and to verify that I own sufficient Savings Plans to deliver the desired amount of coverage.

Available Now

As you can see, Savings Plans are easy to use! You can access compute power at discounts of up to 72%, while gaining the flexibility to change compute services, instance types, operating systems, regions, and so forth. Savings Plans are available in all AWS regions outside of China, and you can start to purchase (and benefit) from them today! — Jeff;

Meet the newest AWS Heroes, including the first Data Heroes!

The AWS Heroes program recognizes community leaders from around the world who have extensive AWS knowledge and a passion for sharing their expertise with others. As trends in the technical community shift, the program evolves to better recognize the most influential community leaders across a variety of technical disciplines.

Introducing AWS Data Heroes

Today we are introducing AWS Data Heroes: a vibrant community of developers, IT leaders, and educators with a shared passion for analytics, database, and blockchain technologies. Data Heroes are data experts who actively participate at the forefront of technology trends, leveraging their extensive technical expertise to share knowledge and build a community around a passion for AWS data services. Developers from all backgrounds and skill sets can learn about database, analytics, and blockchain technology through a variety of educational content created by Data Heroes, including videos, books, guides, blog posts, and open source projects. The first cohort of AWS Data Heroes includes:

Alex DeBrie – Omaha, USA

Data Hero Alex DeBrie is an Engineering Manager at Serverless, Inc. focused on designing, building, and promoting serverless applications. He is passionate about DynamoDB, Lambda, and many other AWS technologies. He is the creator of a guided walkthrough to DynamoDB, and the author of The DynamoDB Book, a comprehensive guide to data modeling with DynamoDB. He has spoken about data design with DynamoDB at AWS Summits in Chicago and New York, as well as other conferences, and has also assisted AWS in writing official tutorials for using DynamoDB. He blogs on tech-related topics, mostly related to AWS.

Álvaro Hernández – Madrid, Spain

Data Hero Álvaro Hernández is a passionate database and software developer. He founded and works as the CEO of OnGres, a Postgres startup set to disrupt the database market. He has been dedicated to PostgreSQL and R&D in databases for two decades.
An open source advocate and developer at heart, Álvaro is a well-known member of the PostgreSQL community, to which he has contributed by founding the non-profit Fundación PostgreSQL and the Spanish PostgreSQL User Group. You can find him frequently speaking at PostgreSQL, database, cloud, and Java conferences. Every year, Álvaro travels around the globe approximately three to four times, and in 2020 he will hit the milestone of having delivered 100 tech talks.

Goran Opacic – Belgrade, Serbia

Data Hero Goran Opacic is the CEO and owner of Esteh and community leader of AWS User Group Belgrade. He is a Solutions Architect focused on databases and security, and runs a blog related to AWS, Java, and databases. He works on promoting All Things Cloud, giving lectures and educating a new generation of developers into the AWS community. This includes a series of interviews he runs with prominent technology leaders on his YouTube channel.

Guillermo Fisher – Norfolk, USA

Data Hero Guillermo Fisher is an Engineering Manager at Handshake and founder of 757ColorCoded, an organization that exists to educate and empower local people of color to achieve careers in technology and improve their lives. In 2019, he partnered with We Power Tech to offer free, instructor-led AWS Tech Essentials training to Southeastern Virginia. As an advocate of AWS technologies, Guillermo blogs about services like Lambda, Athena, and DynamoDB on Medium. He also shares his knowledge at events such as the Atlanta AWS Summit, Hampton Roads DevFest, RevolutionConf, and re:Invent.

Helen Anderson – Wellington, New Zealand

Data Hero Helen Anderson is a Business Intelligence Consultant based in Wellington, New Zealand. She focuses on leading projects that use AWS services to empower users and improve efficiencies.
She is a passionate advocate for data analysts and is well known in the data community for writing beginner-friendly blog posts, teaching, and mentoring those who are new to the tech industry. In fact, her post “AWS from A to Z” is one of the most popular AWS posts ever published. Helen was also named one of Jefferson Frank’s “Top 7 AWS Experts You Should be Following in 2019.” As a Woman in Tech and career switcher, Helen is passionate about mentoring and inspiring those who are underrepresented in the industry.

Manrique Lopez – Madrid, Spain

Data Hero Manrique Lopez is the CEO of Bitergia, a software development analytics company. He is passionate about free, libre, open source software development communities. He is a frequent speaker on Open Distro for Elasticsearch. Currently he is active in GrimoireLab, the open source software for software development analytics, and CHAOSS (Community Health Analytics for Open Source Software).

Lynn Langit – Minneapolis, USA

Data Hero Lynn Langit is a consultant in the Minneapolis area focused on big data and cloud architecture. In addition to having designed many production AWS solutions, Lynn has also created and delivered technical content about the practicalities of working with the AWS Cloud at developer conferences worldwide, and she has created a series of technical AWS courses. She is currently collaborating virtually with a team at CSIRO Bioinformatics in Sydney, Australia. They are working to leverage modern cloud architectures (containers and serverless) to scale genomic research and tools for use world-wide.

Matt Lewis – Swansea, United Kingdom

Data Hero Matt Lewis is Chief Architect at the UK Driver and Vehicle Licensing Agency, where he is responsible for setting technology direction and guiding solutions that operate against critical data sets, including a record of all drivers in Great Britain and a record of all vehicles in the UK. He also founded and runs the AWS South Wales user group.
He has been actively exploring and presenting the benefits of moving from traditional databases to cloud native services, most recently prototyping use cases for the adoption of Quantum Ledger Database (QLDB). In his spare time, Matt writes about different aspects of public cloud on his personal blog and Twitter, and spends too much time cycling online.

Robert Koch – Denver, USA

Data Hero Robert Koch is the Lead Architect at S&P Global and a community leader. He helps drive cloud-based architecture, blogs about migrating to the cloud, and loves to talk data and event-driven systems. In a recent lightning talk, he gave an overview of how Redshift has a symbiotic relationship with PostgreSQL. He currently holds AWS certifications as a Cloud Practitioner, Big Data – Specialty, and Solutions Architect – Associate. He is actively involved in the development community in Denver, often speaking at Denver Dev Day, a bi-annual mini-conference, and at the AWS Denver Meetup.

Meet the Other New AWS Heroes

Not to be outdone, this month we are thrilled to introduce to you a variety of other new AWS Heroes:

Ankit Gupta – Kolkata, India

Community Hero Ankit Gupta is a Solutions Architect at PwC India with deep expertise in designing solution architectures on AWS. Ankit has been an AWS user since 2012 and works with most AWS services. He holds multiple AWS Certifications and has worked on various types of AWS projects. He is an AWS community leader, helping drive the AWS community in India since 2014, and is a co-organizer of AWS User Group Kolkata. Ankit has given multiple sessions on AWS services at various events. He frequently visits engineering colleges to deliver knowledge-sharing sessions on cloud technologies, and also mentors engineering students.

Brian LeRoux – Vancouver, Canada

Serverless Hero Brian LeRoux is the co-founder and CTO of a continuous delivery platform and a core maintainer of OpenJS Architect.
Brian helped create the declarative .arc manifest format, which aims to make configuration clear, simple, terse, and precise. This concision unlocks formerly complex serverless primitives with the determinism and interop of standard CloudFormation. Brian believes the future is open source and serverless, and will be written by hackers like you.

Brian Tarbox – Boston, USA

Community Hero Brian Tarbox has over thirty years of experience delivering mission-critical systems on-time, on-target, and with commercial success. He has ten patents, dozens of technical papers, and “high engagement” Alexa skills; he co-leads the Boston AWS Meetup, manages his company’s all-engineers-get-certified program, and has presented at numerous industry events including AWS Community Days. He was the inaugural speaker for the Portland AWS User Group’s first meeting. In 2010 he won the RockStar and Duke’s Choice awards for the Most Innovative Use of Java, for his system for turning log files into music so you could “listen” to your programs. He also won Atlassian’s Charlie award for the Most Innovative Use of Jira.

Calvin Hendryx-Parker – Indianapolis, USA

Community Hero Calvin Hendryx-Parker is the co-founder of Six Feet Up, a women-owned company specializing in Python and AWS consulting. As CTO, he’s an active proponent of cloud deployments and strategies in the Midwest, and has been using AWS technologies since early 2013. In 2017, Calvin founded the Indiana AWS user group (“IndyAWS”), now the fastest growing tech community in the Midwest with 750+ members. To date, Calvin has held 30+ IndyAWS monthly meetups and organized the first annual Indy Cloud Conf event focused on cloud computing and cross-cloud deployments.

Farrah Campbell – Portland, USA

Serverless Hero Farrah Campbell is the Ecosystems Director at Stackery, a serverless workflow company.
She is passionate about her work with the Serverless, DevOps, and Women in Technology communities, participating in global industry events, user events, conferences, and user groups, along with a documentary focused on how culture changes stories for women in the technology industry. She is the organizer of the Portland Serverless Days and the Portland Serverless Meetup, and speaks around the world about her serverless journey and the serverless mindset.

Gabriel Ramírez – Mexico City, Mexico – Community Hero

Gabriel Ramírez is the founder of Bootcamp Institute, a company that specializes in democratizing the usage and knowledge of AWS for Spanish speakers. He has worked as an AWS Authorized Trainer for years and holds 10 AWS certifications ranging from professional to specialty. Gabriel is the organizer of several AWS User Groups in Mexico and a strong contributor to social programs like AWS Educate, empowering students to adopt the AWS Cloud and get certified. He has helped thousands of people pass the AWS Solutions Architect exam through workshops, webinars, and study groups on different social networks and at local meetups.

Gillian Armstrong – Belfast, United Kingdom – Machine Learning Hero

Gillian Armstrong works for Liberty IT, where she is helping to bring machine learning and serverless into the enterprise. This involves hands-on architecting and building of systems, as well as helping build out strategy and education. She's excited about how applied AI, the space where machine learning and serverless meet, is allowing software engineers to build intelligence into their systems, and as such she is an enthusiastic user of and evangelist for the AWS AI services. She is also exploring how tools like Amazon SageMaker can allow software engineers and data scientists to work more closely together.

Ilya Dmitrichenko – London, United Kingdom – Container Hero

Ilya Dmitrichenko is a Software Engineer at Weaveworks, focused on making Kubernetes work for a wide range of users.
Having started contributing to Kubernetes projects in the early days in 2014, Ilya has focused his attention on cluster lifecycle matters, networking, and observability, as well as developer tools. Most recently, in 2018, Ilya created eksctl, which is now an official CLI for Amazon EKS.

Juyoung Song – Seoul, Korea – Container Hero

Juyoung Song is a DevOps Engineer at beNX, currently in charge of transforming legacy cloud systems into modern cloud architecture to bring global stars such as BTS and millions of fans together in the digital sphere. Juyoung speaks regularly at AWS-organized events such as AWS Container Day, AWS Summit, and This is My Architecture. He also organizes and speaks at meetups like the AWS Korea User Group and DevOps Korea, about topics such as ECS and Fargate and their DevOps best practices. He is interested in building hyper-scale DevOps environments for containers using AWS CodeBuild, Terraform, and various open-source tools.

Lukasz Dorosz – Warsaw, Poland – Community Hero

Lukasz Dorosz is Head of AWS Architecture and a Board Member at CloudState/Chmurowisko, an AWS Consulting Partner with a mission to help businesses leverage cloud services to make an impact in the world. As an active community member, he enjoys sharing knowledge and experience and teaching people about the cloud. He is a co-leader of AWS User Group POLAND, where he regularly contributes to events and organizes many meetups around Poland. Additionally, he popularizes the cloud through training, workshops, and talks at many events, and is the author of online courses, webinars, and blog posts.

Martijn van Dongen – Amsterdam, The Netherlands – Community Hero

Martijn van Dongen is an experienced AWS Cloud Evangelist, part of Xebia. He is a generalist across a broad set of AWS services, with a strong focus on security, containers, serverless, and data-intensive platforms. He is the founder and lead of the Dutch AWS User Group.
He organizes approximately 20 meetups per year with an average of 100 attendees, and has built a powerful network of speakers and sponsors. At the Benelux AWS re:Invent re:Cap in early 2019, Martijn organized 14 meetup-style sessions in the evening with more than 300 attendees. Martijn regularly writes technical articles on blogs and speaks at meetups, events, and technical conferences such as AWS re:Invent and local AWS events.

Or Hiltch – Tel Aviv, Israel – Machine Learning Hero

Or Hiltch is Co-Founder and CTO at Skyline AI, the artificial intelligence investment manager for commercial real estate. In parallel to his business career, Or maintains a strong community presence, regularly hosting and speaking at AI, ML, and AWS-related meetups and conferences, including SageMaker-related topics at the AWS User Group meetup, Serverless NYC Conference, MIT AI 2018, and more. Or is an open-source hacker and the creator/maintainer of a few high-profile (1000+ stars) open-source repos on GitHub, and an avid blogger on ML and software engineering topics, posting on Amazon SageMaker, word2vec, novel uses for unsupervised learning in real estate, and more.

Sebastian Müller – Hamburg, Germany – Serverless Hero

Sebastian Müller writes about all things Serverless, GraphQL, React, TypeScript, and Go on his personal website. He has a background as an Engineering Lead, Scrum Master, and Full Stack Engineer. Sebastian is a general technology enthusiast working as a Senior Cloud Consultant at superluminar, an AWS Advanced Consulting Partner and Serverless Development Partner, in Hamburg, Germany. Most articles on his website and projects on GitHub are the results of established practices from his work with various clients.

Steve Bjorg – San Diego, USA – Community Hero

Steve Bjorg is the Founder and Chief Technical Officer at MindTouch, a San Diego-based enterprise software company that specializes in customer self-service software.
He is a frequent contributor to open-source projects and is passionate about serverless software. He is the author of LambdaSharp, a tool for optimizing the developer experience when building serverless .NET applications on AWS. Steve and his team host a monthly serverless hacking challenge in San Diego to learn and master new AWS services and features.

Vlad Ionescu – Bucharest, Romania – Container Hero

Vlad Ionescu is a DevOps Consultant helping companies deliver more reliable software faster and safer. He is focused on observability and reliability, with a passion for rapid deployments and simplicity. Vlad's work is predominantly focused on Kubernetes and serverless. After starting with kops, he moved to EKS, which he enjoys pushing as far as possible. He can often be found sharing insights in #eks on the Kubernetes Slack. Before rising to the clouds, he was a software developer with a background in finance. He has a passion for Haskell and Ruby, but spends most of his time in Python or Go while grumbling about JavaScript features.

You can learn more about AWS Heroes and connect with a Hero near you by visiting the AWS Hero website.

— Ross;

Now Available: New C5d Instance Sizes and Bare Metal Instances

Amazon EC2 C5 instances are very popular for running compute-heavy workloads like batch processing, distributed analytics, high-performance computing, machine/deep learning inference, ad serving, highly scalable multiplayer gaming, and video encoding. In 2018, we added blazing-fast local NVMe storage and named these new instances C5d. They are a great fit for applications that need access to high-speed, low-latency local storage, like video encoding, image manipulation, and other forms of media processing. They also benefit applications that need temporary storage of data, such as batch and log processing, and applications that need caches and scratch files. Just a few weeks ago, we launched new instance sizes and a bare metal option for C5 instances. Today, we are happy to add the same capabilities to the C5d family: 12xlarge, 24xlarge, and a bare metal option. The new C5d instance sizes run on Intel's Second Generation Xeon Scalable processors (code-named Cascade Lake) with a sustained all-core turbo frequency of 3.6 GHz and a maximum single-core turbo frequency of 3.9 GHz. The new processors also enable a new feature called Intel Deep Learning Boost, a capability based on the AVX-512 instruction set. Thanks to the new Vector Neural Network Instructions (AVX-512 VNNI), deep learning frameworks can speed up typical machine learning operations like convolution, automatically improving inference performance over a wide range of workloads. These instances are based on the AWS Nitro System, with dedicated hardware accelerators for EBS processing (including crypto operations), the software-defined network inside of each Virtual Private Cloud (VPC), and ENA networking.
New Instance Sizes for C5d: 12xlarge and 24xlarge

Here are the specs:

| Instance Name | Logical Processors | Memory | Local Storage | EBS-Optimized Bandwidth | Network Bandwidth |
|---|---|---|---|---|---|
| c5d.12xlarge | 48 | 96 GiB | 2 x 900 GB NVMe SSD | 7 Gbps | 12 Gbps |
| c5d.24xlarge | 96 | 192 GiB | 4 x 900 GB NVMe SSD | 14 Gbps | 25 Gbps |

Previously, the largest C5d instance available was c5d.18xlarge, with 72 logical processors, 144 GiB of memory, and 1.8 TB of storage. As you can see, the new 24xlarge size increases available resources by 33%, in order to help you crunch those super-heavy workloads. Last but not least, customers also get 50% more NVMe storage per logical processor on both 12xlarge and 24xlarge, with up to 3.6 TB of local storage!

Bare Metal C5d

As is the case with the existing bare metal instances (M5, M5d, R5, R5d, z1d, and so forth), your operating system runs on the underlying hardware and has direct access to the processor and other hardware. Bare metal instances can be used to run software with specific requirements, e.g. applications that are exclusively licensed for use on physical, non-virtualized hardware. These instances can also be used to run tools and applications that require access to low-level processor features such as performance counters. Here are the specs:

| Instance Name | Logical Processors | Memory | Local Storage | EBS-Optimized Bandwidth | Network Bandwidth |
|---|---|---|---|---|---|
| c5d.metal | 96 | 192 GiB | 4 x 900 GB NVMe SSD | 14 Gbps | 25 Gbps |

Bare metal instances can also take advantage of Elastic Load Balancing, Auto Scaling, Amazon CloudWatch, and other AWS services.

Now Available!

You can start using these new instances today in the following regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), Europe (Ireland), Europe (Frankfurt), Europe (Stockholm), Europe (London), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), South America (São Paulo), and AWS GovCloud (US-West).
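The 50% storage-per-processor figure above is easy to check: divide local NVMe capacity by the number of logical processors for each size. A quick sketch in Python, using the figures from the specs above:

```python
# Local NVMe storage (GB) and logical processors per instance size,
# taken from the specs in this post.
specs = {
    "c5d.18xlarge": (72, 1800),  # (logical processors, storage in GB)
    "c5d.12xlarge": (48, 1800),
    "c5d.24xlarge": (96, 3600),
}

per_vcpu = {name: gb / cpus for name, (cpus, gb) in specs.items()}
# c5d.18xlarge: 25.0 GB per processor; 12xlarge and 24xlarge: 37.5 GB

increase = per_vcpu["c5d.24xlarge"] / per_vcpu["c5d.18xlarge"] - 1
print(f"{increase:.0%} more NVMe storage per logical processor")  # 50%
```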
Please send us feedback, either on the AWS forum for Amazon EC2, or through your usual AWS support contacts. — Julien;

In the Works – AWS Region in Spain

We opened AWS Regions in Sweden, Hong Kong, and Bahrain in the span of less than a year, and are currently working on regions in Jakarta (Indonesia), Cape Town (South Africa), and Milan (Italy).

Coming to Spain

Today I am happy to announce that the AWS Europe (Spain) Region is in the works, and will open in late 2022 or early 2023 with three Availability Zones. This will be our seventh region in Europe, joining existing regions in Dublin, Frankfurt, London, Paris, and Stockholm, and the upcoming Milan region that will open in early 2020 (check out the AWS Global Infrastructure page to learn more). AWS customers are already making use of 69 Availability Zones across 22 regions worldwide. Today's announcement brings the total number of global regions (operational and in the works) up to 26. I was in Spain just last month, and was able to meet with developers in Madrid and Barcelona. Their applications were impressive and varied: retail management, entertainment, analytics for online advertising, investment recommendations, social scoring, and more. Several of the companies were born-in-the-cloud startups; all made heavy use of the entire line of AWS database services (Amazon Redshift was mentioned frequently), along with AWS Lambda and AWS CloudFormation. Some were building for the domestic market and others for the global market, but I am confident that they will all be able to benefit from this new region. We launched AWS Activate in Spain in 2013, giving startups access to guidance and one-on-one time with AWS experts, along with web-based training, self-paced labs, customer support, offers from third parties, and up to $100,000 in AWS service credits. We also work with the VC community (Caixa Risk Capital and KFund), and several startup accelerators (Seedrocket and Wayra).

AWS in Spain

This upcoming region is the latest in a long series of investments that we have made in the Iberian Peninsula.
We opened an edge location in Madrid in 2012, and an office in the same city in 2014. We added our first Direct Connect location in 2016, and another one in 2017, all to support the rapid growth of AWS in the area. We now have two edge locations in Madrid, and an office in Barcelona as well. In addition to our support for startups through AWS Activate, we provide training via AWS Academy and AWS Educate. Both of these programs are designed to build knowledge and skills in cloud computing, and are available in Spanish. Today, hundreds of universities and business schools in Spain are making great use of these programs. The AWS office in Madrid (which I visited on my recent trip) is fully staffed with account managers, business development managers, customer service representatives, partner managers, professional services consultants, solutions architects, and technical account managers. I had the opportunity to participate in an internal fireside chat with the team, and I can tell you that (like every Amazonian) they are 100% customer-obsessed, and ready to help you succeed in any possible way.

— Jeff;

PS – If you would like to join our team in Spain, check out our open positions in Madrid and Barcelona.

200 Amazon CloudFront Points of Presence + Price Reduction

Less than two years ago I announced the 100th Point of Presence for Amazon CloudFront. The overall Point of Presence footprint is now growing at 50% per year. Since we launched the 100th PoP in 2017, we have expanded to 77 cities in 34 countries, including China, Israel, Denmark, Norway, South Africa, the UAE, Bahrain, Portugal, and Belgium. CloudFront has been used to deliver many high-visibility live-streaming events, including Super Bowl LIII, Thursday Night Football (via Prime Video), the Royal Wedding, the Winter Olympics, the Commonwealth Games, a multitude of soccer games (including the 2019 FIFA World Cup), and much more. Whether used alone or in conjunction with other AWS services, CloudFront is a great way to deliver content, with plenty of options that also help to secure the content and to protect the underlying source. For example:

DDoS Protection – Amazon CloudFront customers were automatically protected against 84,289 Distributed Denial of Service (DDoS) attacks in 2018, including a 1.4 Tbps memcached reflection attack.

Attack Mitigation – CloudFront customers used AWS Shield Advanced and AWS WAF to mitigate application-layer attacks, including a flood of over 20 million requests per second.

Certificate Management – We announced CloudFront integration with AWS Certificate Manager in 2016, and use of custom certificates has grown by 600%.

New Locations in South America

Today I am happy to announce that our global network continues to grow, and now includes 200 Points of Presence, including new locations in Argentina (198), Chile (199), and Colombia (200). AWS customer NED is based in Chile. They are using CloudFront to deliver server-side ad injection and low-latency content distribution to their clients, and are also using Lambda@Edge to implement robust anti-piracy protection.

Price Reduction

We are also reducing the pricing for on-demand data transfer from CloudFront by 56% for all Points of Presence in South America, effective November 1, 2019.
Check out the CloudFront Pricing page to learn more.

CloudFront Resources

Here are some resources to help you learn how to make great use of CloudFront in your organization:

Four Ways to Leverage CloudFront in Faster DevOps Workflows (video).
Amazon Prime Video: Delivering the Amazing Video Experience (video).
Getting Started with CloudFront (documentation).
Getting Started with Amazon S3 Transfer Acceleration (documentation).
How to Set up a CloudFront Distribution for Amazon S3 (tutorial).
Secure Content Delivery with Amazon CloudFront (white paper).
Amazon CloudFront Customer Case Studies.

— Jeff;

Improve Your App Testing With Amplify Console’s Pull Request Previews and Cypress Testing

Amplify Console allows developers to easily configure a Git-based workflow for continuous deployment and hosting of fullstack serverless web apps. Fullstack serverless apps comprise backend resources such as GraphQL APIs, data and file storage, authentication, or analytics, integrated with a frontend framework such as React, Gatsby, or Angular. You can read more about the Amplify Console in a previous article I wrote. Today, we are announcing the ability to create preview URLs and to run end-to-end tests on pull requests before releasing code to production.

Pull Request previews

You can now configure Amplify Console to deploy your application to a unique URL every time a developer submits a pull request to your Git repository. The preview URL is completely different from the one used by the production site. You can see how changes look before merging the pull request into the main branch of your code repository, which triggers a new release in the Amplify Console. For fullstack apps with backend environments provisioned via the Amplify CLI, every pull request spins up an ephemeral backend that is deleted when the pull request is closed. You can test changes in complete isolation from the production environment. Amplify Console creates backend infrastructure for pull requests on private Git repositories only, which avoids incurring extra costs from unsolicited pull requests. To learn how it works, let's start a web application with a cloud-based authentication backend and deploy it on Amplify Console. I first create a React application (check here to learn how to install React).

npx create-react-app amplify-console-demo
cd amplify-console-demo

I initialize the Amplify environment (learn how to install the Amplify CLI first) and add a cloud-based authentication backend powered by Amazon Cognito. I accept all the default answers proposed by the Amplify CLI.
npm install aws-amplify aws-amplify-react
amplify init
amplify add auth
amplify push

I then modify src/App.js to add the frontend authentication user interface. The code is available in the AWS Amplify documentation. Once ready, I start the local development server to test the application locally.

npm run start

I point my browser to http://localhost:8080 to verify the scaffolding (the screenshot below is taken from my AWS Cloud9 development environment). I click Create account to create a user, verify the sign-up flow, and authenticate to the app. After signing up, I see the application page. There are two important details to note. First, I am using a private GitHub repository. Amplify Console only creates backend infrastructure on pull requests for private repositories, to avoid creating unnecessary infrastructure for unsolicited pull requests. Second, the Amplify Console build process looks for dependencies in package-lock.json only. This is why I added the Amplify packages with npm and not with yarn. When I am happy with my app, I push the code to a GitHub repo (let's assume I already did git remote add origin ...).

git add amplify
git commit -am "initial commit"
git push origin master

The next step consists of configuring Amplify Console to build and deploy my app on every Git commit. I log in to the Amplify Console, click Connect App, choose GitHub as the repository, and click Continue (the first time I do this, I need to authenticate on GitHub, using my GitHub username and password). I select my repository and the branch I want to use as the source. Amplify Console detects the type of project and proposes a build file. I select the name of my environment (dev). The first time I use Amplify Console, I follow the instructions to create a new service role. This role authorises Amplify Console to access AWS backend services on my behalf. I click Next, review the settings, and click Save and Deploy. After a few seconds or minutes, my application is ready.
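For a React app like this one, the build file proposed by the console is an amplify.yml spec along these lines (a representative sketch; the file generated for your project may differ):

```yaml
version: 1
frontend:
  phases:
    preBuild:
      commands:
        - npm ci
    build:
      commands:
        - npm run build
  artifacts:
    # create-react-app writes the production bundle to build/
    baseDir: build
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
```

You can accept the proposed file as-is or edit it in the console before deploying.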
I can point my browser to the deployment URL and verify the app is working correctly. Now, let's enable previews for pull requests. I click Preview on the left menu, then Enable Previews. To enable previews, Amplify Console requires an app to be installed in my GitHub account. I follow the instructions provided by the console to configure my GitHub account. Once set up, I select a branch and click Manage to enable or disable pull request previews. (At any time, I can uninstall the Amplify app from my GitHub account by visiting the Applications section of my GitHub account's settings.) Now that the mechanism is in place, let's create a pull request. I edit App.js directly on GitHub. I customize the withAuthenticator component to change the color of the Sign In button from orange to green. I save the changes and create a pull request. On the pull request detail page, I click Show all checks to get the status of the Amplify Console test. I see AWS Amplify Console Web Preview in progress. Amplify Console creates a full backend environment to test the pull request, and builds and deploys the frontend. Eventually, I see All checks have passed and a green mark. I click Details to get the preview URL. In case of an error, you can see the detailed log file of the build phase in the Amplify Console. I can also check the status of the preview in the Amplify Console. I point my browser to the preview URL to test my change. I can see the green Sign In button instead of the orange one. When I try to authenticate using the username and password I created previously, I receive a User does not exist error message, because this preview URL points to a different backend than the main application. I can see two Cognito user pools in the Cognito console, one for each environment. I can control who can access the preview URL using access control settings similar to those I use for the main URL.
When I am happy with the proposed changes, I merge the pull request on GitHub to trigger a new build and deploy the change to the production environment. Amplify Console deletes the preview environment upon merging, and the ephemeral backend environment created for the pull request is deleted along with it.

Cypress testing

In addition to previewing changes before merging them to the main branch, we also added the capability to run end-to-end tests during your build process. You can use your favorite test framework to add unit or end-to-end tests to your application and automatically run the tests during the build phase. When you use the Cypress test framework, Amplify Console detects the tests in your source tree and automatically adds a testing phase to your application build process. Only builds that pass all tests are pushed down your pipeline to the deployment phase. You can learn more about this by following the step-by-step instructions we posted a few weeks ago. These two additions to Amplify Console give you higher confidence in the robustness of your pipeline and the quality of the code delivered to your production environment.

Availability

Web previews are available in all Regions where AWS Amplify Console is available today, at no additional cost on top of the regular Amplify Console pricing. With the AWS Free Usage Tier, you can get started for free. Upon sign-up, new AWS customers receive 1,000 build minutes per month for the build and deploy feature, plus 15 GB served per month and 5 GB of data storage per month for hosting.

-- seb

New – Amazon CloudWatch Anomaly Detection

Amazon CloudWatch launched in early 2009 as part of our desire to (as I said at the time) "make it even easier for you to build sophisticated, scalable, and robust web applications using AWS." We have continued to expand CloudWatch over the years, and our customers now use it to monitor their infrastructure, systems, applications, and even business metrics. They build custom dashboards, set alarms, and count on CloudWatch to alert them to issues that affect the performance or reliability of their applications. If you have used CloudWatch Alarms, you know that there's a bit of an art to setting your alarm thresholds. You want to catch trouble early, but you don't want to trigger false alarms. You need to deal with growth and with scale, and you also need to adjust and recalibrate your thresholds to deal with cyclic and seasonal behavior.

Anomaly Detection

Today we are enhancing CloudWatch with a new feature that will help you to make more effective use of CloudWatch Alarms. Powered by machine learning and building on over a decade of experience, CloudWatch Anomaly Detection has its roots in over 12,000 internal models. It will help you avoid manual configuration and experimentation, and can be used in conjunction with any standard or custom CloudWatch metric that has a discernible trend or pattern. Anomaly Detection analyzes the historical values for the chosen metric and looks for predictable patterns that repeat hourly, daily, or weekly. It then creates a best-fit model that will help you to better predict the future, and to more cleanly differentiate normal and problematic behavior. You can adjust and fine-tune the model as desired, and you can even use multiple models for the same CloudWatch metric.

Using Anomaly Detection

I can create my own models in a matter of seconds!
I have an EC2 instance that generates a spike in CPU utilization every 24 hours. I select the metric, and click the "wave" icon to enable anomaly detection for this metric and statistic. This creates a model with default settings. If I select the model and zoom in to see one of the utilization spikes, I can see that the spike is reflected in the prediction bands. I can use this model as-is to drive alarms on the metric, or I can select the model and click Edit model to customize it. I can exclude specific time ranges (past or future) from the data that is used to train the model; this is a good idea if the data reflects a one-time event that will not happen again. I can also specify the timezone of the data; this lets me handle metrics that are sensitive to changes in daylight saving time. After I have set this up, the anomaly detection model goes into effect and I can use it to create alarms as usual. I choose Anomaly detection as my Threshold type, and use the Anomaly detection threshold to control the thickness of the band. I can raise the alarm when the metric is outside of, greater than, or lower than the band. The remaining steps are identical to the ones that you already use to create other types of alarms.

Things to Know

Here are a few interesting things to keep in mind when you are getting ready to use this new CloudWatch feature:

Suitable Metrics – Anomaly Detection works best when the metrics have a discernible pattern or trend, and when there is a minimal number of missing data points.

Updates – Once the model has been created, it will be updated every five minutes with any new metric data.

One-Time Events – The model cannot predict one-time events such as Black Friday or the holiday shopping season.

API / CLI / CloudFormation – You can create and manage anomaly detection models from the Console, the CloudWatch API (PutAnomalyDetector), and the CloudWatch CLI. You can also create AWS::CloudWatch::AnomalyDetector resources in your AWS CloudFormation templates.
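CloudWatch builds and maintains the model for you, but the effect of the Anomaly detection threshold is easy to illustrate: the band around the expected value gets thicker as the threshold grows, and values outside the band are treated as anomalous. A simplified sketch in Python (not the actual CloudWatch algorithm; the expected value and standard deviation below are hypothetical):

```python
def band(expected, stddev, threshold=2):
    """Return the (lower, upper) bounds of the prediction band.

    A larger threshold produces a thicker band, so fewer data
    points fall outside of it and trigger the alarm.
    """
    return expected - threshold * stddev, expected + threshold * stddev

def is_anomalous(value, expected, stddev, threshold=2):
    lower, upper = band(expected, stddev, threshold)
    return value < lower or value > upper

# A CPU metric that normally sits near 40% with a stddev of 5:
print(band(40, 5))              # (30, 50)
print(is_anomalous(62, 40, 5))  # True: well above the band
print(is_anomalous(45, 40, 5))  # False: within the normal range
```

Raising the threshold from 2 to 3 widens the band from (30, 50) to (25, 55), which is exactly the "thickness" control described above.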
Now Available

You can start creating and using CloudWatch Anomaly Detection today in all commercial AWS regions. To learn more, read about CloudWatch Anomaly Detection in the CloudWatch Documentation.

— Jeff;

Now Available – Amazon Relational Database Service (RDS) on VMware

Last year I told you that we were working to give you Amazon RDS on VMware, with the goal of bringing many of the benefits of Amazon Relational Database Service (RDS) to your on-premises virtualized environments. These benefits include the ability to provision new on-premises databases in minutes, make backups, and restore to a point in time. You get automated management of your on-premises databases, without having to provision and manage the database engine.

Now Available

Today, I am happy to report that Amazon RDS on VMware is available for production use, and you can start using it today. We are launching with support for Microsoft SQL Server, PostgreSQL, and MySQL. Here are some important prerequisites:

Compatibility – RDS on VMware works with vSphere clusters that run version 6.5 or later.

Connectivity – Your vSphere cluster must have outbound connectivity to the Internet, and must be able to make HTTPS connections to the public AWS endpoints.

Permissions – You will need Administrative privileges (and the skills to match) on the cluster in order to set up RDS on VMware. You will also need to have (or create) a second set of credentials for use by RDS on VMware.

Hardware – The hardware that you use to host RDS on VMware must be listed in the relevant VMware Hardware Compatibility Guide.

Resources – Each cluster must have at least 24 vCPUs, 24 GiB of memory, and 180 GB of storage for the on-premises management components of RDS on VMware, along with additional resources to support the on-premises database instances that you launch.

Setting up Amazon RDS on VMware

Due to the nature of this service, the setup process is more involved than usual and I am not going to walk through it at my usual level of detail. Instead, I am going to outline the process and refer you to the Amazon RDS on VMware User Guide for more information. During the setup process, you will be asked to supply details of your vCenter/ESXi configuration.
For best results, I advise a dry run through the User Guide so that you can find and organize all of the necessary information. Here are the principal steps, assuming that you already have a running vSphere data center:

Prepare Environment – Check the vSphere version, confirm the storage device and free space, and provision a resource pool.

Configure Cluster Control Network – Create a network for control traffic and monitoring. It must be a vSphere distributed port group with 128 to 1022 ports.

Configure Application Network – This is the network that applications, users, and DBAs will use to interact with the RDS on VMware DB instances. It must be a vSphere distributed port group with 128 to 1022 ports, and it must span all of the ESXi hosts that underlie the cluster. The network must have an IPv4 subnet large enough to accommodate all of the instances that you expect to launch. In many cases your cluster will already have an Application Network.

Configure Management Network – Configure your ESXi hosts to add a route to the Edge Router (part of RDS on VMware) in the Cluster Control Network.

Configure vCenter Credentials – Create a set of credentials for use during the onboarding process.

Configure Outbound Internet Access – Confirm that outbound connections can be made from the Edge Router in your virtual data center to AWS services.

With the preparatory work out of the way, the next step is to bring the cluster onboard by creating a custom (on-premises) Availability Zone and using the installer to install the product. I open the RDS Console, choose the US East (N. Virginia) Region, and click Custom availability zones. I can see my existing custom AZs and their status. I click Create custom AZ to proceed. I enter a name for my AZ and for the VPN tunnel between the selected AWS region and my vSphere data center, and then I enter the IP address of the VPN.
Then I click Create custom AZ. My new AZ is visible, in status Unregistered. To register my vSphere cluster as a Custom AZ, I click Download Installer in the AWS Console to download the RDS on VMware installer. I deploy the installer in my cluster, follow the guided wizard to fill in the network configuration, AWS credentials, and so forth, and then start the installation. After the installation is complete, the status of my custom AZ changes to Active. Behind the scenes, the installer automatically deploys the on-premises components of RDS on VMware and connects the vSphere cluster to the AWS region. Some of the database engines require me to bring my own media and an on-premises license. I can import the installation media that I have in my data center into RDS and use it to launch the database engine. For example, here’s my media image for SQL Server Enterprise Edition. The steps above must be done on a cluster-by-cluster basis. Once a cluster has been set up, multiple database instances can be launched, based on available compute, storage, and network (IP address) resources.

Using Amazon RDS on VMware
With all of the setup work complete, I can use the same interfaces (RDS Console, RDS CLI, or the RDS APIs) to launch and manage database instances in the cloud and on my on-premises network. I’ll use the RDS Console, and click Create database to get started. I choose On-premises and pick my custom AZ, then choose a database engine. I enter a name for my instance, another name for the master user, and enter (or let RDS assign) a password. Then I pick the DB instance class (the v11 in the names refers to version 11 of the VMware virtual hardware definition) and click Create database. Here’s a more detailed look at some of the database instance sizes.
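The DB instance class names encode both an optimization profile and the virtual hardware version mentioned above, so they can be decoded mechanically. A hypothetical sketch (the db.<family><version>.<size> pattern and the example class names are illustrative; check the console for the actual list):

```python
# Decode a hypothetical RDS on VMware DB instance class name such as
# "db.rv11.2xlarge": the family letter indicates the optimization profile,
# and "v11" refers to version 11 of the VMware virtual hardware definition.
# The name pattern here is an assumption for illustration.
FAMILIES = {"c": "compute-intensive", "r": "memory-intensive", "m": "general-purpose"}

def describe_instance_class(name: str) -> str:
    # e.g. "db.rv11.2xlarge" -> ("db", "rv11", "2xlarge")
    _, family_part, size = name.split(".")
    profile = FAMILIES.get(family_part[0], "unknown")
    return f"{size} {profile} instance (virtual hardware {family_part[1:]})"

print(describe_instance_class("db.rv11.2xlarge"))  # 2xlarge memory-intensive instance (virtual hardware v11)
print(describe_instance_class("db.mv11.large"))    # large general-purpose instance (virtual hardware v11)
```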
As is the case with cloud-based instance sizes, the “c” instances are compute-intensive, the “r” instances are memory-intensive, and the “m” instances are general-purpose. The status of my new database instance starts out as Creating, and progresses through Backing-up and then to Available. Once it is ready, the endpoint is available in the console. On-premises applications can use this endpoint to connect to the database instance across the Application Network. Before I wrap up, let’s take a look at a few other powerful features of RDS on VMware: snapshot backups, point-in-time restores, and the power to change the DB instance class. Snapshot backups are a useful companion to the automated backups taken daily by RDS on VMware. I simply select Take snapshot from the Action menu. To learn more, read Creating a DB Snapshot. Point-in-time recovery allows me to create a fresh on-premises DB instance based on the state of an existing one at an earlier point in time. To learn more, read Restoring a DB Instance to a Specified Time. I can change the DB instance class in order to scale up or down in response to changing requirements. I select Modify from the Action menu, choose the new class, and click Submit. The modification will be made during the maintenance window for the DB instance. A few other features that I did not have the space to cover include renaming an existing DB instance (very handy for disaster recovery) and rebooting a DB instance.

Available Now
Amazon RDS on VMware is available now and you can start using it today in the US East (N. Virginia) Region. — Jeff;

Migration Complete – Amazon’s Consumer Business Just Turned off its Final Oracle Database

Over my 17 years at Amazon, I have seen that my colleagues on the engineering team are never content to leave good-enough alone. They routinely re-evaluate every internal system to make sure that it is as scalable, efficient, performant, and secure as possible. When they find an avenue for improvement, they will use what they have learned to thoroughly modernize our architectures and implementations, often going so far as to rip apart existing systems and rebuild them from the ground up if necessary. Today I would like to tell you about an internal database migration effort of this type that just wrapped up after several years of work. Over the years we realized that we were spending too much time managing and scaling thousands of legacy Oracle databases. Instead of focusing on high-value differentiated work, our database administrators (DBAs) spent a lot of time simply keeping the lights on while transaction rates climbed and the overall amount of stored data mounted. This included time spent dealing with complex & inefficient hardware provisioning, license management, and many other issues that are now best handled by modern, managed database services. More than 100 teams in Amazon’s Consumer business participated in the migration effort. This includes well-known customer-facing brands and sites such as Alexa, Amazon Prime, Amazon Prime Video, Amazon Fresh, Kindle, Amazon Music, Audible, Shopbop, Twitch, and Zappos, as well as internal teams such as AdTech, Amazon Fulfillment Technology, Consumer Payments, Customer Returns, Catalog Systems, Delivery Experience, Digital Devices, External Payments, Finance, InfoSec, Marketplace, Ordering, and Retail Systems.

Migration Complete
I am happy to report that this database migration effort is now complete. Amazon’s Consumer business just turned off its final Oracle database (some third-party applications are tightly bound to Oracle and were not migrated).
We migrated 75 petabytes of internal data stored in nearly 7,500 Oracle databases to multiple AWS database services including Amazon DynamoDB, Amazon Aurora, Amazon Relational Database Service (RDS), and Amazon Redshift. The migrations were accomplished with little or no downtime, and covered 100% of our proprietary systems. This includes complex purchasing, catalog management, order fulfillment, accounting, and video streaming workloads. We kept careful track of the costs and the performance, and realized the following results:

Cost Reduction – We reduced our database costs by over 60% on top of the heavily discounted rate we negotiated based on our scale. Customers regularly report cost savings of 90% by switching from Oracle to AWS.
Performance Improvements – Latency of our consumer-facing applications was reduced by 40%.
Administrative Overhead – The switch to managed services reduced database admin overhead by 70%.

The migration gave each internal team the freedom to choose the purpose-built AWS database service that best fit their needs, and also gave them better control over their budget and their cost model. Low-latency services were migrated to DynamoDB and other highly scalable non-relational databases such as Amazon ElastiCache. Transactional relational workloads with high data consistency requirements were moved to Aurora and RDS; analytics workloads were migrated to Redshift, our cloud data warehouse. We captured the shutdown of the final Oracle database, and had a quick celebration.

DBA Career Path
As I explained earlier, our DBAs once spent a lot of time managing and scaling our legacy Oracle databases. The migration freed up time that our DBAs now use to do an even better job of performance monitoring and query optimization, all with the goal of letting them deliver a better customer experience.
As part of the migration, we also worked to create a fresh career path for our Oracle DBAs, training them to become database migration specialists and advisors. This training includes education on AWS database technologies, cloud-based architecture, cloud security, and OpEx-style cost management. They now work with both internal and external customers in an advisory role, where they have an opportunity to share their first-hand experience with large-scale migration of mission-critical databases.

Migration Examples
Here are examples drawn from a few of the migrations:

Advertising – After the migration, this team was able to double their database fleet size (and their throughput) in minutes to accommodate peak traffic, courtesy of RDS. This scale-up effort would have taken months.
Buyer Fraud – This team moved 40 TB of data with just one hour of downtime, and realized the same or better performance at half the cost, powered by Amazon Aurora.
Financial Ledger – This team moved 120 TB of data, reduced latency by 40%, cut costs by 70%, and cut overhead by the same 70%, all powered by DynamoDB.
Wallet – This team migrated more than 10 billion records to DynamoDB, reducing latency by 50% and operational costs by 90% in the process. To learn more about this migration, read Amazon Wallet Scales Using Amazon DynamoDB.

My recent Prime Day 2019 post contains more examples of the extreme scale and performance that are possible with AWS.

Migration Resources
If you are ready to migrate from Oracle (or another hand-managed legacy database) to one or more AWS database services, here are some resources to get you started:

AWS Migration Partners – Our slate of AWS Migration Partners have the experience, expertise, and tools to help you to understand, plan, and execute a database migration.
Migration Case Studies – Read How Amazon is Achieving Database Freedom Using AWS to learn more about this effort; read the Prime Video, Advertising, Items & Offers, Amazon Fulfillment, and Analytics case studies to learn more about the examples that I mentioned above.
AWS Professional Services – My colleagues at AWS Professional Services are ready to work alongside you to make your migration a success.
AWS Migration Tools & Services – Check out our Cloud Migration page, read more about Migration Hub, and don’t forget about the Database Migration Service.
AWS Database Freedom – The AWS Database Freedom program is designed to help qualified customers migrate from traditional databases to cloud-native AWS databases.
AWS re:Invent Sessions – We are finalizing an extensive lineup of chalk talks and breakout sessions for AWS re:Invent that will focus on this migration effort, all led by the team members that planned and executed the migrations.

— Jeff;

Now Available: Bare Metal Arm-Based EC2 Instances

At AWS re:Invent 2018, we announced a new line of Amazon Elastic Compute Cloud (EC2) instances: the A1 family, powered by Arm-based AWS Graviton processors. This family is a great fit for scale-out workloads such as web front-ends, containerized microservices, or caching fleets. By expanding the choice of compute options, A1 instances help customers use the right instances for the right applications, and deliver up to 45% cost savings. In addition, A1 instances enable Arm developers to build and test natively on Arm-based infrastructure in the cloud: no more cross-compilation or emulation required. Today, we are happy to expand the A1 family with a bare metal option.

Bare Metal for A1

Instance Name | Logical Processors | Memory | EBS-Optimized Bandwidth | Network Bandwidth
a1.metal      | 16                 | 32 GiB | 3.5 Gbps                | Up to 10 Gbps

Just like the existing bare metal instances (M5, M5d, R5, R5d, z1d, and so forth), your operating system runs directly on the underlying hardware with direct access to the processor. As described in a previous blog post, you can leverage bare metal instances for applications that:

- need access to physical resources and low-level hardware features, such as performance counters, that are not always available or fully supported in virtualized environments,
- are intended to run directly on the hardware, or
- are licensed and supported for use in non-virtualized environments.

Bare metal instances can also take advantage of Elastic Load Balancing, Auto Scaling, Amazon CloudWatch, and other AWS services.

Working with A1 Instances
Bare metal or not, it’s never been easier to work with A1 instances. Initially launched in four AWS regions, they’re now available in four additional regions: Europe (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Mumbai), and Asia Pacific (Sydney).
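For scripting purposes, the a1.metal specs above and the “.metal” naming convention shared with the other bare metal families can be captured in a few lines. A simple sketch (plain Python, not an AWS API):

```python
# The a1.metal specs transcribed from the table above, plus a trivial check
# for the ".metal" suffix used by bare metal instance types (M5, R5, z1d,
# and so forth follow the same convention).
A1_METAL = {
    "logical_processors": 16,
    "memory_gib": 32,
    "ebs_optimized_bandwidth": "3.5 Gbps",
    "network_bandwidth": "Up to 10 Gbps",
}

def is_bare_metal(instance_type: str) -> bool:
    return instance_type.endswith(".metal")

print(is_bare_metal("a1.metal"))    # True
print(is_bare_metal("a1.4xlarge"))  # False
print(A1_METAL["memory_gib"])       # 32
```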
From a software perspective, you can run Amazon Machine Images for popular Linux distributions like Ubuntu, Red Hat Enterprise Linux, SUSE Linux Enterprise Server, Debian, and of course Amazon Linux 2 on A1 instances. Applications such as the Apache HTTP Server and NGINX Plus are available too. So are all major programming languages and runtimes, including PHP, Python, Perl, Go, Ruby, Node.js, and multiple flavors of Java, including Amazon Corretto, a supported open source OpenJDK implementation. What about containers? Good news here as well! Amazon ECS and Amazon EKS both support A1 instances. Docker has announced support for Arm-based architectures in Docker Enterprise Edition, and most official Docker images support Arm. In addition, millions of developers can now use Arm emulation to build, run, and test containers on their desktop machines before moving them to production. As you would expect, A1 instances are seamlessly integrated with many AWS services, such as Amazon EBS, Amazon CloudWatch, Amazon Inspector, AWS Systems Manager, and AWS Batch.

Now Available!
You can start using a1.metal instances today in US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Mumbai), and Asia Pacific (Sydney). As always, we appreciate your feedback, so please don’t hesitate to get in touch via the AWS Compute Forum, or through your usual AWS support contacts. — Julien;

New M5n and R5n EC2 Instances, with up to 100 Gbps Networking

AWS customers build ever-demanding applications on Amazon EC2. To support them the best we can, we listen to their requirements, go to work, and come up with new capabilities. For instance, in 2018, we upgraded the networking capabilities of Amazon EC2 C5 instances, with up to 100 Gbps networking and significant improvements in packet processing performance. These are made possible by our new virtualization technology, also known as the AWS Nitro System, and by the Elastic Fabric Adapter, which enables low latency on 100 Gbps networking platforms. In order to extend these benefits to the widest range of workloads, we’re happy to announce that these same networking capabilities are available today for both Amazon EC2 M5 and R5 instances.

Introducing Amazon EC2 M5n and M5dn instances
Since the very early days of Amazon EC2, the M family has been a popular choice for general-purpose workloads. The new M5(d)n instances uphold this tradition, and are a great fit for databases, High Performance Computing, analytics, and caching fleets that can take advantage of improved network throughput and packet rate performance. The table below lists the new instances and their specs: each M5(d) instance size now has an M5(d)n counterpart, which supports the upgraded networking capabilities discussed above. For example, whereas the regular m5(d).8xlarge instance has a respectable network bandwidth of 10 Gbps, its m5(d)n.8xlarge sibling goes to 25 Gbps. The top of the line m5(d)n.24xlarge instance even hits 100 Gbps.
Here are the specs:

Instance Name                | Logical Processors | Memory  | Local Storage (m5dn only) | EBS-Optimized Bandwidth | Network Bandwidth
m5n.large / m5dn.large       | 2                  | 8 GiB   | 1 x 75 GB NVMe SSD        | Up to 3.5 Gbps          | Up to 25 Gbps
m5n.xlarge / m5dn.xlarge     | 4                  | 16 GiB  | 1 x 150 GB NVMe SSD       | Up to 3.5 Gbps          | Up to 25 Gbps
m5n.2xlarge / m5dn.2xlarge   | 8                  | 32 GiB  | 1 x 300 GB NVMe SSD       | Up to 3.5 Gbps          | Up to 25 Gbps
m5n.4xlarge / m5dn.4xlarge   | 16                 | 64 GiB  | 2 x 300 GB NVMe SSD       | 3.5 Gbps                | Up to 25 Gbps
m5n.8xlarge / m5dn.8xlarge   | 32                 | 128 GiB | 2 x 600 GB NVMe SSD       | 5 Gbps                  | 25 Gbps
m5n.12xlarge / m5dn.12xlarge | 48                 | 192 GiB | 2 x 900 GB NVMe SSD       | 7 Gbps                  | 50 Gbps
m5n.16xlarge / m5dn.16xlarge | 64                 | 256 GiB | 4 x 600 GB NVMe SSD       | 10 Gbps                 | 75 Gbps
m5n.24xlarge / m5dn.24xlarge | 96                 | 384 GiB | 4 x 900 GB NVMe SSD       | 14 Gbps                 | 100 Gbps
m5n.metal / m5dn.metal       | 96                 | 384 GiB | 4 x 900 GB NVMe SSD       | 14 Gbps                 | 100 Gbps

Introducing Amazon EC2 R5n and R5dn instances
The R5 family is ideally suited for memory-hungry workloads, such as high performance databases, distributed web scale in-memory caches, mid-size in-memory databases, real time big data analytics, and other enterprise applications. The logic here is exactly the same: each R5(d) instance size has an R5(d)n counterpart.
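Since the network bandwidth tiers are the same for every size of the new "n" variants (the R5(d)n specs below mirror the M5(d)n ones), they can be captured in a single lookup table. A small sketch that just transcribes the specs:

```python
# Network bandwidth by size for the new "n" variants, transcribed from the
# specs in this post (identical across m5n/m5dn and r5n/r5dn).
NETWORK_BANDWIDTH = {
    "large": "Up to 25 Gbps", "xlarge": "Up to 25 Gbps",
    "2xlarge": "Up to 25 Gbps", "4xlarge": "Up to 25 Gbps",
    "8xlarge": "25 Gbps", "12xlarge": "50 Gbps",
    "16xlarge": "75 Gbps", "24xlarge": "100 Gbps", "metal": "100 Gbps",
}

def network_bandwidth(instance_type: str) -> str:
    """e.g. 'm5dn.8xlarge' -> '25 Gbps'."""
    family, size = instance_type.split(".")
    # Strip a trailing "d" (local NVMe variant) and check for the "n" suffix,
    # so m5n, m5dn, r5n, and r5dn all qualify but m5 and m5d do not.
    if not family.rstrip("d").endswith("n"):
        raise ValueError(f"{instance_type} is not one of the new n variants")
    return NETWORK_BANDWIDTH[size]

print(network_bandwidth("m5dn.8xlarge"))  # 25 Gbps
print(network_bandwidth("r5n.24xlarge"))  # 100 Gbps
```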
Here are the specs:

Instance Name                | Logical Processors | Memory  | Local Storage (r5dn only) | EBS-Optimized Bandwidth | Network Bandwidth
r5n.large / r5dn.large       | 2                  | 16 GiB  | 1 x 75 GB NVMe SSD        | Up to 3.5 Gbps          | Up to 25 Gbps
r5n.xlarge / r5dn.xlarge     | 4                  | 32 GiB  | 1 x 150 GB NVMe SSD       | Up to 3.5 Gbps          | Up to 25 Gbps
r5n.2xlarge / r5dn.2xlarge   | 8                  | 64 GiB  | 1 x 300 GB NVMe SSD       | Up to 3.5 Gbps          | Up to 25 Gbps
r5n.4xlarge / r5dn.4xlarge   | 16                 | 128 GiB | 2 x 300 GB NVMe SSD       | 3.5 Gbps                | Up to 25 Gbps
r5n.8xlarge / r5dn.8xlarge   | 32                 | 256 GiB | 2 x 600 GB NVMe SSD       | 5 Gbps                  | 25 Gbps
r5n.12xlarge / r5dn.12xlarge | 48                 | 384 GiB | 2 x 900 GB NVMe SSD       | 7 Gbps                  | 50 Gbps
r5n.16xlarge / r5dn.16xlarge | 64                 | 512 GiB | 4 x 600 GB NVMe SSD       | 10 Gbps                 | 75 Gbps
r5n.24xlarge / r5dn.24xlarge | 96                 | 768 GiB | 4 x 900 GB NVMe SSD       | 14 Gbps                 | 100 Gbps
r5n.metal / r5dn.metal       | 96                 | 768 GiB | 4 x 900 GB NVMe SSD       | 14 Gbps                 | 100 Gbps

These new M5(d)n and R5(d)n instances are powered by custom second generation Intel Xeon Scalable Processors (based on the Cascade Lake architecture) with a sustained all-core turbo frequency of 3.1 GHz and a maximum single core turbo frequency of 3.5 GHz. Cascade Lake processors enable new Intel Vector Neural Network Instructions (AVX-512 VNNI), which will help speed up typical machine learning operations like convolution, and automatically improve inference performance over a wide range of deep learning workloads.

Now Available!
You can start using the M5(d)n and R5(d)n instances today in the following regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), and Asia Pacific (Singapore). We hope that these new instances will help you tame your network-hungry workloads! Please send us feedback, either on the AWS Forum for Amazon EC2, or through your usual support contacts. — Julien;

AWS Firewall Manager Update – Support for VPC Security Groups

I introduced you to AWS Firewall Manager last year, and showed you how you can use it to centrally configure and manage your AWS WAF rules and AWS Shield Advanced protections. AWS Firewall Manager makes use of AWS Organizations, and lets you build policies and apply them across multiple AWS accounts in a consistent manner.

Security Group Support
Today we are making AWS Firewall Manager even more useful, giving you the power to define, manage, and audit organization-wide policies for the use of VPC Security Groups. You can use the policies to apply security groups to specified accounts and resources, to check and manage the rules that are used in security groups, and to find and then clean up unused and redundant security groups. You get real-time notification when misconfigured rules are detected, and can take corrective action from within the Firewall Manager Console. In order to make use of this feature, you need to have an AWS Organization, and AWS Config must be enabled for all of the accounts in it. You must also designate an AWS account as the Firewall Administrator. This account has permission to deploy AWS WAF rules, Shield Advanced protections, and security group rules across your organization.

Creating and Using Policies
After logging in to my organization’s root account, I open the Firewall Manager Console, and click Go to AWS Firewall Manager. Then I click Security Policies in the AWS FMS section to get started. The console displays my existing policies (if any); I click Create policy to move ahead. I select Security group as the Policy type and Common security groups as the Security group policy type, choose the target region, and click Next to proceed (I will examine the other policy types in a minute). I give my policy a name (OrgDefault), choose a security group (SSH_Only), and opt to protect the group’s rules from changes, then click Next. Now I define the scope of the policy.
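Under the hood, the console choices made so far correspond to a policy document that can also be managed through the Firewall Manager PutPolicy API. The sketch below shows the general shape as I understand it; treat the exact field names as assumptions to verify against the API reference, and note that the security group ID is a placeholder:

```python
import json

# Hedged sketch of a Firewall Manager "common security groups" policy,
# mirroring the console walkthrough above (name OrgDefault, a single shared
# security group, rules protected from changes, remediation off for now).
# Field names are my best understanding of the PutPolicy API; the security
# group ID is a placeholder, not a real resource.
policy = {
    "PolicyName": "OrgDefault",
    "SecurityServicePolicyData": {
        "Type": "SECURITY_GROUPS_COMMON",
        # ManagedServiceData is itself a JSON string inside the policy.
        "ManagedServiceData": json.dumps({
            "type": "SECURITY_GROUPS_COMMON",
            "revertManualSecurityGroupChanges": True,  # protect the rules
            "securityGroups": [{"id": "sg-0123456789abcdef0"}],  # e.g. SSH_Only
        }),
    },
    "ResourceType": "AWS::EC2::Instance",
    "RemediationEnabled": False,  # per the advice below: enable only later
    "ExcludeResourceTags": False,
}

print(policy["SecurityServicePolicyData"]["Type"])  # SECURITY_GROUPS_COMMON
```

You would pass a document like this to PutPolicy (for example via the AWS CLI or an SDK) instead of clicking through the console; the console flow shown here produces the equivalent policy for you.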
As you can see, I can choose the accounts, resource types, and even specifically tagged resources, before clicking Next. I can also choose to exclude resources that are tagged in a particular way; this can be used to create an organization-wide policy that provides special privileges for a limited group of resources. I review my policy, confirm that I must enable Config and pay the associated charges, and click Create policy. The policy takes effect immediately, and begins to evaluate compliance within 3-5 minutes. The Firewall Manager Policies page shows an overview, and I can click the policy to learn more. Policies also have an auto-remediation option. While this can be enabled when the policy is created, our advice is to wait until after the policy has taken effect, so that you can see what will happen before you go ahead and enable auto-remediation. Let’s take a look at the other two security group policy types:

Auditing and enforcement of security group rules – This policy type centers around an audit security group that can be used in one of two ways. You can use this policy type when you want to establish guardrails that set limits on the rules that can be created. For example, I could create a policy rule that allows inbound access from a specific set of IP addresses (perhaps a /24 used by my organization), and use it to detect any resource that is more permissive.
Auditing and cleanup of unused and redundant security groups – This policy type looks for security groups that are not being used, or that are redundant.

Available Now
You can start to use this feature today in the US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Europe (Ireland), Europe (Frankfurt), Europe (London), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Singapore), and Asia Pacific (Seoul) Regions. You will be charged $100 per policy per month. — Jeff;

Amazon EKS Windows Container Support now Generally Available

In March of this year, we announced a preview of Windows Container support on Amazon Elastic Kubernetes Service and invited customers to experiment and provide us with feedback. Today, after months of refining the product based on that feedback, I am delighted to announce that Windows Container support is now generally available. Many development teams build and support applications designed to run on Windows Servers, and with this announcement they can now deploy them on Kubernetes alongside Linux applications. This ability will provide more consistency in system logging, performance monitoring, and code deployment pipelines. Amazon Elastic Kubernetes Service simplifies the process of building, securing, operating, and maintaining Kubernetes clusters, and allows organizations to focus on building applications instead of operating Kubernetes. We are proud to be the first cloud provider to reach General Availability of Windows Containers on Kubernetes, and look forward to customers unlocking the business benefits of Kubernetes for both their Windows and Linux workloads. To show you how this feature works, I will need an Amazon Elastic Kubernetes Service cluster. I am going to create a new one, but this will work with any cluster that is using Kubernetes version 1.14 and above. Once the cluster has been configured, I will add some new Windows nodes and deploy a Windows application. Finally, I will test the application to ensure it is running as expected. The simplest way to get a cluster set up is to use eksctl, the official CLI tool for EKS. The command below creates a cluster called demo-windows-cluster and adds two Linux nodes to the cluster. Currently, at least one Linux node is required to support Windows node and pod networking; however, I have selected two for high availability, and we would recommend that you do the same.
eksctl create cluster \
  --name demo-windows-cluster \
  --version 1.14 \
  --nodegroup-name standard-workers \
  --node-type t3.medium \
  --nodes 2 \
  --nodes-min 1 \
  --nodes-max 3 \
  --node-ami auto

Starting with eksctl version 0.7, a new utility has been added called install-vpc-controllers. This utility installs the required VPC Resource Controller and VPC Admission Webhook into the cluster. These components run on Linux nodes and are responsible for enabling networking for incoming pods on Windows nodes. To use the tool, we run the following command:

eksctl utils install-vpc-controllers --name demo-windows-cluster --approve

If you don’t want to use eksctl, we also provide guides in the documentation on how you can run PowerShell or Bash scripts to achieve the same outcome. Next, I will need to add some Windows nodes to our cluster. If you used eksctl to create the cluster, then the command below will work. If you are working with an existing cluster, check out the documentation for instructions on how to create a Windows node group and connect it to your cluster.

eksctl create nodegroup \
  --region us-west-2 \
  --cluster demo-windows-cluster \
  --version 1.14 \
  --name windows-ng \
  --node-type t3.medium \
  --nodes 3 \
  --nodes-min 1 \
  --nodes-max 4 \
  --node-ami-family WindowsServer2019FullContainer \
  --node-ami ami-0f85de0441a8dcf46

The most up-to-date Windows AMI ID for your region can be found by querying the AWS SSM Parameter Store. Instructions to do this can be found in the Amazon EKS documentation. Now that I have the nodes up and running, I can deploy a sample application. I am using a YAML file from the AWS containers roadmap GitHub repository. This file configures an app that consists of a single container that runs IIS, which in turn hosts a basic HTML page.

kubectl apply -f

These are Windows containers, which are often a little larger than Linux containers and therefore take a little longer to download and start up.
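The important scheduling detail in a manifest like that one is that Windows pods must be pinned to Windows nodes with a nodeSelector. Here is the relevant shape sketched as a Python dict; the names and image are illustrative rather than the actual file from the roadmap repository, and on Kubernetes 1.14 the node label may be beta.kubernetes.io/os rather than kubernetes.io/os:

```python
import json

# Illustrative sketch of a Windows deployment spec: the nodeSelector is what
# keeps the IIS pods off the Linux nodes. Names and image are assumptions,
# not the actual sample manifest.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "windows-server-iis"},
    "spec": {
        "replicas": 1,
        "selector": {"matchLabels": {"app": "windows-server-iis"}},
        "template": {
            "metadata": {"labels": {"app": "windows-server-iis"}},
            "spec": {
                # Pin the pod to Windows nodes; Linux workloads use
                # "linux" for the same label.
                "nodeSelector": {"kubernetes.io/os": "windows"},
                "containers": [{
                    "name": "iis",
                    "image": "mcr.microsoft.com/windows/servercore/iis",
                    "ports": [{"containerPort": 80}],
                }],
            },
        },
    },
}

print(json.dumps(deployment["spec"]["template"]["spec"]["nodeSelector"]))
```

Serialized to YAML, this is the piece to look for when you adapt your own Windows workloads for EKS.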
I monitored the progress of the deployment by running the following command:

kubectl get pods -o wide --watch

I waited for around 5 minutes for the pod to transition to the Running state. I then executed the following command, which connects to the pod and initializes a PowerShell session inside the container. Here, windows-server-iis-66bf9745b-xsbsx is the name of the pod; if you are following along, your pod name will be different.

kubectl exec -it windows-server-iis-66bf9745b-xsbsx powershell

Once you are connected to the PowerShell session, you can execute PowerShell as if you were using a terminal inside the container. Therefore, if we run the command below, we should get some information back about the news blog:

Invoke-WebRequest -Uri -UseBasicParsing

To exit the PowerShell session, I type exit and it returns me to my terminal. From there I can inspect the service that was deployed by the sample application; I type the following command:

kubectl get svc windows-server-iis-service

This gives me the following output that describes the service:

NAME                         TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
windows-server-iis-service   LoadBalancer                              80:32750/TCP   54s

The External IP should be the address of a load balancer. If I type this URL into a browser and append /default.html, it will load an HTML page that was created by the sample application deployment. This is being served by our IIS server from one of the Windows containers I deployed. So there we have it, Windows containers running on Amazon Elastic Kubernetes Service. For more details, please check out the documentation. Amazon EKS Windows Container Support is available in all the same regions as Amazon EKS, and pricing details can be found over here. We have a long roadmap for Amazon Elastic Kubernetes Service, but we are eager to get your feedback and will use it to drive our prioritization process. Please take a look at this new feature and let us know what you think!

Learn about AWS Services & Solutions – October AWS Online Tech Talks

Join us this October to learn about AWS services and solutions. The AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. These tech talks, led by AWS solutions architects and engineers, feature technical deep dives, live demonstrations, customer examples, and Q&A with AWS experts. Register Now! Note – All sessions are free and in Pacific Time.

Tech talks this month:

AR/VR:
October 30, 2019 | 9:00 AM – 10:00 AM PT – Using Physics in Your 3D Applications with Amazon Sumerian – Learn how to simulate real-world environments in 3D using Amazon Sumerian’s new robust physics system.

Compute:
October 24, 2019 | 9:00 AM – 10:00 AM PT – Computational Fluid Dynamics on AWS – Learn best practices to run Computational Fluid Dynamics (CFD) workloads on AWS.
October 28, 2019 | 1:00 PM – 2:00 PM PT – Monitoring Your .NET and SQL Server Applications on Amazon EC2 – Learn how to manage your application logs through AWS services to improve performance and resolve issues for your .NET and SQL Server applications.
October 31, 2019 | 9:00 AM – 10:00 AM PT – Optimize Your Costs with AWS Compute Pricing Options – Learn which pricing models work best for your workloads and how to combine different purchase options to optimize cost, scale, and performance.

Data Lakes & Analytics:
October 23, 2019 | 9:00 AM – 10:00 AM PT – Practical Tips for Migrating Your IBM Netezza Data Warehouse to the Cloud – Learn how to migrate your IBM Netezza Data Warehouse to the cloud to save costs and improve performance.
October 31, 2019 | 11:00 AM – 12:00 PM PT – Alert on Your Log Data with Amazon Elasticsearch Service – Learn how to receive alerts on your data to monitor your application and infrastructure using Amazon Elasticsearch Service.
Databases:
October 22, 2019 | 1:00 PM – 2:00 PM PT – How to Build Highly Scalable Serverless Applications with Amazon Aurora Serverless – Get an overview of Amazon Aurora Serverless, an on-demand, auto-scaling configuration for Amazon Aurora, and learn how you can use it to build serverless applications.

DevOps:
October 21, 2019 | 11:00 AM – 12:00 PM PT – Migrate Your Ruby on Rails App to AWS Fargate in One Step Using AWS Rails Provisioner – Learn how to define and deploy containerized Ruby on Rails Applications on AWS with a few commands.

End-User Computing:
October 24, 2019 | 11:00 AM – 12:00 PM PT – Why Software Vendors Are Choosing Application Streaming Instead of Rewriting Their Desktop Apps – Walk through common customer use cases of how Amazon AppStream 2.0 lets software vendors deliver instant demos, trials, and training of desktop applications.
October 29, 2019 | 11:00 AM – 12:00 PM PT – Move Your Desktops and Apps to AWS End-User Computing – Get an overview of AWS End-User Computing services and then dive deep into best practices for implementation.

Enterprise & Hybrid:
October 29, 2019 | 1:00 PM – 2:00 PM PT – Leverage Compute Pricing Models and Rightsizing to Maximize Savings on AWS – Get tips on building a cost-management strategy, incorporating pricing models and resource rightsizing.

IoT:
October 30, 2019 | 1:00 PM – 2:00 PM PT – Connected Devices at Scale: A Deep Dive into the AWS Smart Product Solution – Learn how to jump-start the development of innovative connected products with the new AWS Smart Product Solution.

Machine Learning:
October 23, 2019 | 1:00 PM – 2:00 PM PT – Analyzing Text with Amazon Elasticsearch Service and Amazon Comprehend – Learn how to deploy a cost-effective, end-to-end solution for extracting meaningful insights from unstructured text data like customer calls, support tickets, or online customer feedback.
October 28, 2019 | 11:00 AM – 12:00 PM PT – AI-Powered Health Data Masking – Learn how to use the AI-Powered Health Data Masking solution for use cases like clinical decision support, revenue cycle management, and clinical trial management.

Migration:
October 22, 2019 | 11:00 AM – 12:00 PM PT – Deep Dive: How to Rapidly Migrate Your Data Online with AWS DataSync – Learn how AWS DataSync makes it easy to rapidly move large datasets into Amazon S3 and Amazon EFS for your applications.

Mobile:
October 21, 2019 | 1:00 PM – 2:00 PM PT – Mocking and Testing Serverless APIs with AWS Amplify – Learn how to mock and test GraphQL APIs in your local environment with AWS Amplify.

Robotics:
October 22, 2019 | 9:00 AM – 10:00 AM PT – The Future of Smart Robots Has Arrived – Learn how and why you should build smarter robots with AWS.

Security, Identity and Compliance:
October 29, 2019 | 9:00 AM – 10:00 AM PT – Using AWS Firewall Manager to Simplify Firewall Management Across Your Organization – Learn how AWS Firewall Manager simplifies rule management across your organization.

Serverless:
October 21, 2019 | 9:00 AM – 10:00 AM PT – Advanced Serverless Orchestration with AWS Step Functions – Go beyond the basics and explore the best practices of Step Functions, including development and deployment of workflows and how you can track the work being done.
October 30, 2019 | 11:00 AM – 12:00 PM PT – Managing Serverless Applications with SAM Templates – Learn how to reduce code and increase efficiency by managing your serverless apps with AWS Serverless Application Model (SAM) templates.

Storage:
October 23, 2019 | 11:00 AM – 12:00 PM PT – Reduce File Storage TCO with Amazon EFS and Amazon FSx for Windows File Server – Learn how to optimize file storage costs with AWS storage solutions.

