Cloud Hosting Blogs

AWS Heroes: Putting AWS security services to work for you

Amazon Web Services Blog -

Guest post by AWS Community Hero Mark Nunnikhoven. Mark is the Vice President of Cloud Research at long-time APN Advanced Technology Partner Trend Micro. In addition to helping educate the AWS community about modern security and privacy, he has spearheaded Trend Micro’s launch-day support of most of the AWS security services and attended every AWS re:Invent!

Security is a pillar of the AWS Well-Architected Framework. It’s critical to the success of any workload. But it’s also often misunderstood. It’s steeped in jargon and talked about in terms of threats and fear. This has led to security getting a bad reputation. It’s often thought of as a roadblock and something to put up with. Nothing could be further from the truth.

At its heart, cybersecurity is simple. It’s a set of processes and controls that work to make sure that whatever I’ve built works as intended… and only as intended. How do I make that happen in the AWS Cloud?

Shared responsibility

It all starts with the shared responsibility model. The model defines the line where responsibility for day-to-day operations shifts from AWS to me, the user. AWS provides the security of the cloud and I am responsible for security in the cloud. As I move to more managed types of services, more and more of my responsibilities shift to AWS.

My tinfoil hat would be taken away if I didn’t mention that everyone needs to verify that AWS is holding up their end of the deal (#protip: they are and at world-class levels). This is where AWS Artifact enters the picture. It is an easy way to download the evidence that AWS is fulfilling their responsibilities under the model.

But what about my responsibilities under the model? AWS offers help there in the form of various services under the Security, Identity, & Compliance category.

Security services

The trick is understanding how all of these security services fit together to help me meet my responsibilities. Based on conversations I’ve had around the world and helping teach these services at various AWS Summits, I’ve found that grouping them into five subcategories makes things clearer: authorization, protected stores, authentication, enforcement, and visibility.

A few of these categories are already well understood. Authentication services help me identify my users. Authorization services allow me to determine what they—and other services—are allowed to do and under what conditions. Protected stores allow me to encrypt sensitive data and regulate access to it.

Two subcategories aren’t as well understood: enforcement and visibility. I use the services in these categories daily in my security practice and they are vital to ensuring that my apps are working as intended.

Enforcement

Teams struggle with how to get the most out of enforcement controls and it can be difficult to understand how to piece these together into a workable security practice. Most of these controls detect issues, essentially raising their hand when something might be wrong. To protect my deployments, I need a process to handle those detections. By remembering the goal of ensuring that whatever I build works as intended and only as intended, I can better frame how each of these services helps me.

AWS CloudTrail logs nearly every API action in an account, but mining those logs for suspicious activity is difficult. Enter Amazon GuardDuty. It continuously scours CloudTrail logs—as well as Amazon VPC flow logs and DNS logs—for threats and suspicious activity at the AWS account level.
Amazon EC2 instances have the biggest potential for security challenges because they run a full operating system and applications written by various third parties. All that complexity added up to over 13,000 reported vulnerabilities last year. Amazon Inspector runs on-demand assessments of your instances and raises findings related to the operating system and installed applications, including recommended mitigations.

Despite starting from a locked-down state, teams often make mistakes and sometimes accidentally expose sensitive data in an Amazon S3 bucket. Amazon Macie continuously scans targeted buckets looking for sensitive information and misconfigurations. This augments additional protections like S3 Block Public Access and Trusted Advisor checks.

AWS WAF and AWS Shield work on AWS edge locations and actively stop attacks that they are configured to detect. AWS Shield targets DDoS activity and AWS WAF takes aim at layer seven, or web, attacks.

Each of these services supports the work teams do in hardening configurations and writing quality code. They are designed to help highlight areas of concern for taking action. The challenge is prioritizing those actions.

Visibility

Prioritization is where the visibility services step in. As previously mentioned, AWS Artifact provides visibility into AWS’ activities under the shared responsibility model. The new AWS Security Hub helps me understand the data generated by the other AWS security, identity, and compliance services, along with data generated by key APN Partner solutions. The goal of AWS Security Hub is to be the first stop for any security activity. All data sent to the hub is normalized in the Amazon Finding Format, which includes a standardized severity rating. This provides context for each finding and helps me determine which actions to take first.

This prioritized list of findings quickly translates into a set of responses to undertake. At first, these might be manual responses but, as with anything in the AWS Cloud, automation is the key to success. Using AWS Lambda to react to AWS Security Hub findings is a wildly successful and simple way of modernizing an approach to security (see the sketch at the end of this post). This automated workflow sits atop a pyramid of security controls:

• Core AWS security services and APN Partner solutions at the bottom
• The AWS Security Hub providing visibility in the middle
• Automation as the crown jewel on top

What’s next?

In this post, I described my high-level approach to security success in the AWS Cloud. This aligns directly with the AWS Well-Architected Framework and thousands of customer success stories. When you understand the shared responsibility model and the value of each service, you’re well on your way to demystifying security and building better in the AWS Cloud.
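To make the automation idea concrete, here is a minimal sketch, in Python with boto3, of a Lambda handler that reacts to Security Hub findings delivered by a CloudWatch Events rule and forwards high-severity ones to an SNS topic. The event shape follows the documented “Security Hub Findings - Imported” event pattern; the topic ARN, the severity threshold, and the routing logic are placeholder assumptions rather than anything prescribed in the post.

import json
import os

import boto3

sns = boto3.client("sns")

# Placeholder: the ARN of an SNS topic your team watches.
ALERT_TOPIC_ARN = os.environ.get("ALERT_TOPIC_ARN", "")


def handler(event, context):
    """Triggered by a CloudWatch Events rule matching
    'Security Hub Findings - Imported' events."""
    for finding in event.get("detail", {}).get("findings", []):
        severity = finding.get("Severity", {}).get("Normalized", 0)
        title = finding.get("Title", "Untitled finding")
        # Findings carry a normalized severity on a 0-100 scale.
        # Route only the serious ones to a human; log the rest.
        if severity >= 70:
            sns.publish(
                TopicArn=ALERT_TOPIC_ARN,
                Subject=("Security Hub: " + title)[:100],  # SNS subject limit
                Message=json.dumps(finding, default=str),
            )
        else:
            print("Lower-severity finding: {} ({})".format(title, severity))

From here, the same pattern extends naturally from notification to actual remediation, for example tightening a security group or disabling a compromised access key, instead of just publishing a message.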

New – Open Distro for Elasticsearch

Amazon Web Services Blog -

Elasticsearch is a distributed, document-oriented search and analytics engine. It supports structured and unstructured queries, and does not require a schema to be defined ahead of time. Elasticsearch can be used as a search engine, and is often used for web-scale log analytics, real-time application monitoring, and clickstream analytics.

Originally launched as a true open source project, some of the more recent additions to Elasticsearch are proprietary. My colleague Adrian explains our motivation to start Open Distro for Elasticsearch in his post, Keeping Open Source Open. As strong believers in, and supporters of, open source software, we believe this project will help continue to accelerate open source Elasticsearch innovation.

Open Distro for Elasticsearch

Today we are launching Open Distro for Elasticsearch. This is a value-added distribution of Elasticsearch that is 100% open source (Apache 2.0 license) and supported by AWS. Open Distro for Elasticsearch leverages the open source code for Elasticsearch and Kibana. This is not a fork; we will continue to send our contributions and patches upstream to advance these projects.

In addition to Elasticsearch and Kibana, the first release includes a set of advanced security, event monitoring & alerting, performance analysis, and SQL query features (more on those in a bit). In addition to the source code repo, Open Distro for Elasticsearch and Kibana are available as RPMs and Docker containers, with separate downloads for the SQL JDBC driver and the PerfTop CLI. You can run this code on your laptop, in your data center, or in the cloud. Contributions are welcome, as are bug reports and feature requests.

Inside Open Distro for Elasticsearch

Let’s take a quick look at the features that we are including in Open Distro for Elasticsearch. Some of these are currently available in Amazon Elasticsearch Service; others will become available in future updates.

• Security – This plugin supports node-to-node encryption, five types of authentication (basic, Active Directory, LDAP, Kerberos, and SAML), role-based access controls at multiple levels (clusters, indices, documents, and fields), audit logging, and cross-cluster search so that any node in a cluster can run search requests across other nodes in the cluster. Learn More…

• Event Monitoring & Alerting – This feature notifies you when data from one or more Elasticsearch indices meets certain conditions. You could, for example, notify a Slack channel if an application logs more than five HTTP 503 errors in an hour. Monitoring is based on jobs that run on a defined schedule, checking indices against trigger conditions, and raising alerts when a condition has been triggered. Learn More…

• Deep Performance Analysis – This is a REST API that allows you to query a long list of performance metrics for your cluster. You can access the metrics programmatically or you can visualize them using PerfTop. Learn More…

• SQL Support – This feature allows you to query your cluster using SQL statements. It is an improved version of the elasticsearch-sql plugin, and supports a rich set of statements (see the short example at the end of this post).

This is just the beginning; we have more in the works, and also look forward to your contributions and suggestions!

— Jeff;
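To give a flavor of the SQL feature, here is a small sketch that posts a query to the plugin’s _opendistro/_sql REST endpoint on a local test cluster. The index name, the admin/admin credentials, and the use of verify=False (for the demo install’s self-signed certificate) are sandbox assumptions, not production guidance.

import requests

# Placeholders: a local test cluster running the demo security configuration.
ENDPOINT = "https://localhost:9200/_opendistro/_sql"
AUTH = ("admin", "admin")

# Count HTTP 503 errors per host in a hypothetical logs index.
query = {
    "query": "SELECT host, COUNT(*) AS errors "
             "FROM logs WHERE status = 503 GROUP BY host"
}

# verify=False skips TLS verification for the sandbox's self-signed cert.
response = requests.post(ENDPOINT, json=query, auth=AUTH, verify=False)
response.raise_for_status()
print(response.json())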

Building serverless apps with components from the AWS Serverless Application Repository

Amazon Web Services Blog -

Guest post by AWS Serverless Hero Aleksandar Simovic. Aleksandar is a Senior Software Engineer at Science Exchange and co-author of “Serverless Applications with Node.js” with Slobodan Stojanovic, published by Manning Publications. He also writes on Medium on both business and technical aspects of serverless.

Many of you have built a user login or an authorization service from scratch a dozen times. And you’ve probably built another dozen services to process payments and another dozen to export PDFs. We’ve all done it, and we’ve often all done it redundantly. Using the AWS Serverless Application Repository, you can now spend more of your time and energy developing business logic to deliver the features that matter to customers, faster.

What is the AWS Serverless Application Repository?

The AWS Serverless Application Repository allows developers to deploy, publish, and share common serverless components among their teams and organizations. Its public library contains community-built, open-source, serverless components that are instantly searchable and deployable with customizable parameters and predefined licensing. They are built and published using the AWS Serverless Application Model (AWS SAM), the YAML-based infrastructure-as-code language used for templating AWS resources.

How to use AWS Serverless Application Repository in production

I wanted to build an application that enables customers to select a product and pay for it. Sounds like a substantial effort, right? Using AWS Serverless Application Repository, it didn’t actually take me much time. Broadly speaking, I built:

• A product page with a Buy button, automatically tied to the Stripe Checkout SDK. When a customer chooses Buy, the page displays the Stripe Checkout payment form.
• A Stripe payment service with an API endpoint that accepts a callback from Stripe, charges the customer, and sends a notification for successful transactions.

For this post, I created a pre-built sample static page that displays the product details and has the Stripe Checkout JavaScript on the page. Even with the pre-built page, integrating the payment service is still work. But many other developers have built a payment application at least once, so why should I spend time building identical features? This is where AWS Serverless Application Repository came in handy.

Find and deploy a component

First, I searched for an existing component in the AWS Serverless Application Repository public library. I typed “stripe” and opted in to see applications that created custom IAM roles or resource policies. I selected the application titled api-lambda-stripe-charge and chose Deploy on the component’s detail page. Before I deployed any component, I inspected it to make sure it was safe and production-ready.

Evaluate a component

The recommended approach for evaluating an AWS Serverless Application Repository component is a four-step process:

1. Check component permissions.
2. Inspect the component implementation.
3. Deploy and run the component in a restricted environment.
4. Monitor the component’s behavior and cost before using in production.

This might appear to negate the quick delivery benefits of AWS Serverless Application Repository, but in reality, you only verify each component one time. Then you can easily reuse and share the component throughout your company. Here’s how to apply this approach while adding the Stripe component.

1. Check component permissions

There are two types of components: public and private.
Public components are open source, while private components do not have to be. In this case, the Stripe component is public. I reviewed the code to make sure that it doesn’t grant unnecessary permissions that could potentially compromise security. In this case, the Stripe component is on GitHub. On the component page, I opened the template.yaml file. There was only one AWS Lambda function there, so I found the Policies attribute and reviewed the policies that it uses.

  CreateStripeCharge:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs8.10
      Timeout: 10
      Policies:
        - SNSCrudPolicy:
            TopicName: !GetAtt SNSTopic.TopicName
        - Statement:
            Effect: Allow
            Action:
              - ssm:GetParameters
            Resource: !Sub arn:${AWS::Partition}:ssm:${AWS::Region}:${AWS::AccountId}:parameter/${SSMParameterPrefix}/*

The component was using a predefined AWS SAM policy template and a custom one. These predefined policy templates are sets of AWS permissions that are verified and recommended by the AWS security team. Using these policies to specify resource permissions is one of the recommended practices for serverless components on AWS Serverless Application Repository. The other, custom IAM policy allows the function to retrieve AWS Systems Manager parameters, which is the best practice for storing secure values, such as the Stripe secret key.

2. Inspect the component implementation

I wanted to ensure that the component’s main business logic did only what it was meant to do, which was to create a Stripe charge. It’s also important to look out for unknown third-party HTTP calls to prevent leaks. Then I reviewed this project’s dependencies. For this inspection, I used PureSec, but tools like those offered by Protego are another option.

The main business logic was in the charge-customer.js file. It revealed straightforward logic to simply invoke the Stripe create charge and then publish a notification with the created charge. I saw this reflected in the following code:

  return paymentProcessor.createCharge(token, amount, currency, description)
    .then(chargeResponse => {
      createdCharge = chargeResponse;
      return pubsub.publish(createdCharge, TOPIC_ARN);
    })
    .then(() => createdCharge)
    .catch((err) => {
      console.log(err);
      throw err;
    });

The paymentProcessor and pubsub values are adapters for the communication with Stripe and Amazon SNS, respectively. I always like to look and see how they work.

3. Deploy and run the component in a restricted environment

Maintaining a separate, restricted AWS account in which to test your serverless applications is a best practice for serverless development. I always ensure that my test account has strict AWS Billing and Amazon CloudWatch alarms in place. I signed in to this separate account, opened the Stripe component page, and manually deployed it. After deployment, I needed to verify how it ran. Because this component only has one Lambda function, I looked for that function in the Lambda console and opened its details page so that I could verify the code.

4. Monitor behavior and cost before using a component in production

When everything works as expected in my test account, I usually add monitoring and performance tools to my component to help diagnose any incidents and evaluate component performance. I often use Epsagon and Lumigo for this, although adding those steps would have made this post too long. I also wanted to track the component’s cost. To do this, I added a strict Billing alarm that tracked the component cost and the cost of each AWS resource within it.
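Because step 4 leans on billing alarms, here is one possible sketch, in Python with boto3, of creating such an alarm. The alarm name, threshold, and SNS topic ARN are placeholder assumptions, and the account must have billing metric data enabled (a one-time console setting) before the AWS/Billing namespace has any data.

import boto3

# Billing metrics are published only in the us-east-1 Region.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="test-account-monthly-spend",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,  # evaluate the estimate every six hours
    EvaluationPeriods=1,
    Threshold=10.0,  # placeholder: alarm once month-to-date charges pass $10
    ComparisonOperator="GreaterThanThreshold",
    # Placeholder topic that notifies the team.
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
)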
After the component passed these four tests, I was ready to deploy it by adding it to my existing product-selection application.

Deploy the component to an existing application

To add my Stripe component into my existing application, I re-opened the component’s Review, Configure, and Deploy page and chose Copy as SAM Resource. That copied the necessary template code to my clipboard. I then added it to my existing serverless application by pasting it into my existing AWS SAM template, under Resources. It looked like the following:

  Resources:
    ShowProduct:
      Type: AWS::Serverless::Function
      Properties:
        Handler: index.handler
        Runtime: nodejs8.10
        Timeout: 10
        Events:
          Api:
            Type: Api
            Properties:
              Path: /product/:productId
              Method: GET
    apilambdastripecharge:
      Type: AWS::Serverless::Application
      Properties:
        Location:
          ApplicationId: arn:aws:serverlessrepo:us-east-1:375983427419:applications/api-lambda-stripe-charge
          SemanticVersion: 3.0.0
        Parameters:
          # (Optional) Cross-origin resource sharing (CORS) Origin. You can specify a single origin, all origins with "*", or leave it empty and no CORS is applied.
          CorsOrigin: YOUR_VALUE
          # This component assumes that the Stripe secret key needed to use the Stripe Charge API is stored as SecureStrings in Parameter Store under the prefix defined by this parameter. See the component README.
          # SSMParameterPrefix: lambda-stripe-charge # Uncomment to override the default value
  Outputs:
    ApiUrl:
      Value: !Sub https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Stage/product/123
      Description: The URL of the sample API Gateway

I copied and pasted an AWS::Serverless::Application AWS SAM resource, which points to the component by ApplicationId and its SemanticVersion. Then, I defined the component’s parameters. I set CorsOrigin to “*” for demonstration purposes. I didn’t have to set the SSMParameterPrefix value, as it picks up a default value. But I did set up my Stripe secret key in the Systems Manager Parameter Store, by running the following command (with YOUR_STRIPE_SECRET_KEY as a placeholder for the actual key):

  aws ssm put-parameter --name lambda-stripe-charge/stripe-secret-key --value YOUR_STRIPE_SECRET_KEY --type SecureString --overwrite

In addition to parameters, components also contain outputs. An output is an externalized component resource or value that you can use with other applications or components. For example, the output for the api-lambda-stripe-charge component is SNSTopic, an Amazon SNS topic. This enables me to attach another component or business logic to get a notification when a successful payment occurs. For example, a lambda-send-email-ses component that sends an email upon successful payment could be attached, too (see the sketch at the end of this post).

To finish, I ran the following two commands:

  aws cloudformation package --template-file template.yaml --output-template-file output.yaml --s3-bucket YOUR_BUCKET_NAME
  aws cloudformation deploy --template-file output.yaml --stack-name product-show-n-pay --capabilities CAPABILITY_IAM

For the second command, you could add parameter overrides as needed. My product-selection and payment application was successfully deployed!

Summary

AWS Serverless Application Repository enables me to share and reuse common components, services, and applications so that I can really focus on building core business value. In a few steps, I created an application that enables customers to select a product and pay for it. It took a matter of minutes, not hours or days! You can see that it doesn’t take long to cautiously analyze and check a component.
That component can now be shared with other teams throughout my company so that they can eliminate their redundancies, too. Now you’re ready to use AWS Serverless Application Repository to accelerate the way that your teams develop products, deliver features, and build and share production-ready applications.
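As promised above, here is a minimal sketch of what a consumer of the component’s SNSTopic output might look like: a Python Lambda function, subscribed to the topic, that emails a receipt through Amazon SES. This is an illustration, not the actual lambda-send-email-ses component; the addresses are placeholders that must be verified in SES, and it assumes the published SNS message is the Stripe charge object, as described earlier.

import json

import boto3

ses = boto3.client("ses")

# Placeholders: both addresses must be verified in SES first.
SENDER = "payments@example.com"
RECIPIENT = "finance@example.com"


def handler(event, context):
    """Invoked by the component's SNS topic after a successful charge."""
    for record in event["Records"]:
        charge = json.loads(record["Sns"]["Message"])
        amount = charge.get("amount", 0) / 100.0  # Stripe amounts are in cents
        currency = charge.get("currency", "usd").upper()
        ses.send_email(
            Source=SENDER,
            Destination={"ToAddresses": [RECIPIENT]},
            Message={
                "Subject": {"Data": "Payment received"},
                "Body": {
                    "Text": {
                        "Data": "Charge of {} {} succeeded.".format(amount, currency)
                    }
                },
            },
        )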

Learn about AWS Services & Solutions – March AWS Online Tech Talks

Amazon Web Services Blog -

Join us this March to learn about AWS services and solutions. The AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. These tech talks, led by AWS solutions architects and engineers, feature technical deep dives, live demonstrations, customer examples, and Q&A with AWS experts. Register now! Note – All sessions are free and in Pacific Time.

Tech talks this month:

Compute
March 26, 2019 | 11:00 AM – 12:00 PM PT – Technical Deep Dive: Running Amazon EC2 Workloads at Scale – Learn how you can optimize your workloads running on Amazon EC2 for cost and performance, all while handling peak demand.
March 27, 2019 | 9:00 AM – 10:00 AM PT – Introduction to AWS Outposts – Learn how you can run AWS infrastructure on-premises with AWS Outposts for a truly consistent hybrid experience.
March 28, 2019 | 1:00 PM – 2:00 PM PT – Deep Dive on OpenMPI and Elastic Fabric Adapter (EFA) – Learn how OpenMPI applications can use the Elastic Fabric Adapter (EFA) to scale HPC workloads on Amazon EC2.

Containers
March 21, 2019 | 11:00 AM – 12:00 PM PT – Running Kubernetes with Amazon EKS – Learn how to run Kubernetes on AWS with Amazon EKS.
March 22, 2019 | 9:00 AM – 10:00 AM PT – Deep Dive Into Container Networking – Dive deep into microservices networking and how you can build, secure, and manage the communications into, out of, and between the various microservices that make up your application.

Data Lakes & Analytics
March 19, 2019 | 9:00 AM – 10:00 AM PT – Fuzzy Matching and Deduplicating Data with ML Transforms for AWS Lake Formation – Learn how to use ML Transforms for AWS Glue to link and de-duplicate matching records.
March 20, 2019 | 9:00 AM – 10:00 AM PT – Customer Showcase: Perform Real-time ETL from IoT Devices into your Data Lake with Amazon Kinesis – Learn best practices for how to perform real-time extract-transform-load into your data lake with Amazon Kinesis.
March 20, 2019 | 11:00 AM – 12:00 PM PT – Machine Learning Powered Business Intelligence with Amazon QuickSight – Learn how Amazon QuickSight leverages powerful ML and natural language capabilities to generate insights that help you discover the story behind the numbers.

Databases
March 18, 2019 | 9:00 AM – 10:00 AM PT – What’s New in PostgreSQL 11 – Find out what’s new in PostgreSQL 11, the latest major version of the popular open source database, and learn about AWS services for running highly available PostgreSQL databases in the cloud.
March 19, 2019 | 1:00 PM – 2:00 PM PT – Introduction on Migrating your Oracle/SQL Server Databases over to the Cloud using AWS’s New Workload Qualification Framework – Get an introduction on how AWS’s Workload Qualification Framework can help you with your application and database migrations.
March 20, 2019 | 1:00 PM – 2:00 PM PT – What’s New in MySQL 8 – Find out what’s new in MySQL 8, the latest major version of the world’s most popular open source database, and learn about AWS services for running highly available MySQL databases in the cloud.
March 21, 2019 | 9:00 AM – 10:00 AM PT – Building Scalable & Reliable Enterprise Apps with AWS Relational Databases – Learn how AWS Relational Databases can help you build scalable & reliable enterprise apps.

DevOps
March 19, 2019 | 11:00 AM – 12:00 PM PT – Introduction to Amazon Corretto: A No-Cost Distribution of OpenJDK – Learn about Amazon Corretto, a no-cost, multiplatform, production-ready distribution of OpenJDK.

End-User Computing
March 28, 2019 | 9:00 AM – 10:00 AM PT – Fireside Chat: Enabling Today’s Workforce with Cloud Desktops – Join a fireside chat about the tools and best practices for enabling today’s workforce with cloud desktops.

Enterprise
March 26, 2019 | 1:00 PM – 2:00 PM PT – Speed Your Cloud Computing Journey With the Customer Enablement Services of AWS: ProServe, AMS, and Support – Learn how to accelerate your cloud journey with AWS’s Customer Enablement Services.

IoT
March 26, 2019 | 9:00 AM – 10:00 AM PT – How to Deploy AWS IoT Greengrass Using Docker Containers and Ubuntu-snap – Learn how to bring cloud services to the edge using containerized microservices by deploying AWS IoT Greengrass to your device using Docker containers and Ubuntu snaps.

Machine Learning
March 18, 2019 | 1:00 PM – 2:00 PM PT – Orchestrate Machine Learning Workflows with Amazon SageMaker and AWS Step Functions – Learn about how ML workflows can be orchestrated with the rich features of Amazon SageMaker and AWS Step Functions.
March 21, 2019 | 1:00 PM – 2:00 PM PT – Extract Text and Data from Any Document with No Prior ML Experience – Learn how to extract text and data from any document with no prior machine learning experience.
March 22, 2019 | 11:00 AM – 12:00 PM PT – Build Forecasts and Individualized Recommendations with AI – Learn how you can build accurate forecasts and individualized recommendation systems using our new AI services, Amazon Forecast and Amazon Personalize.

Management Tools
March 29, 2019 | 9:00 AM – 10:00 AM PT – Deep Dive on Inventory Management and Configuration Compliance in AWS – Learn how AWS helps with effective inventory management and configuration compliance management of your cloud resources.

Networking & Content Delivery
March 25, 2019 | 1:00 PM – 2:00 PM PT – Application Acceleration and Protection with Amazon CloudFront, AWS WAF, and AWS Shield – Learn how to secure and accelerate your applications using AWS’s Edge services in this demo-driven tech talk.

Robotics
March 28, 2019 | 11:00 AM – 12:00 PM PT – Build a Robot Application with AWS RoboMaker – Learn how to improve your robotics application development lifecycle with AWS RoboMaker.

Security, Identity, & Compliance
March 27, 2019 | 11:00 AM – 12:00 PM PT – Remediating Amazon GuardDuty and AWS Security Hub Findings – Learn how to build and implement remediation automations for Amazon GuardDuty and AWS Security Hub.
March 27, 2019 | 1:00 PM – 2:00 PM PT – Scaling Accounts and Permissions Management – Learn how to scale your accounts and permissions management efficiently as you continue to move your workloads to AWS Cloud.

Serverless
March 18, 2019 | 11:00 AM – 12:00 PM PT – Testing and Deployment Best Practices for AWS Lambda-Based Applications – Learn best practices for testing and deploying AWS Lambda based applications.

Storage
March 25, 2019 | 11:00 AM – 12:00 PM PT – Introducing a New Cost-Optimized Storage Class for Amazon EFS – Come learn how the new Amazon EFS storage class and Lifecycle Management automatically reduces cost by up to 85% for infrequently accessed files.

New – RISC-V Support in the FreeRTOS Kernel

Amazon Web Services Blog -

FreeRTOS is a popular operating system designed for small, simple processors often known as microcontrollers. It is available under the MIT open source license and runs on many different Instruction Set Architectures (ISAs). Amazon FreeRTOS extends FreeRTOS with a collection of IoT-oriented libraries that provide additional networking and security features including support for Bluetooth Low Energy, Over-the-Air Updates, and Wi-Fi.

RISC-V is a free and open ISA that was designed to be simple, extensible, and easy to implement. The simplicity of the RISC-V model, coupled with its permissive BSD license, makes it ideal for a wide variety of processors, including low-cost microcontrollers that can be manufactured without incurring license costs. The RISC-V model can be implemented in many different ways, as you can see from the RISC-V cores page. Development tools, including simulators, compilers, and debuggers, are also available.

Today I am happy to announce that we are now providing RISC-V support in the FreeRTOS kernel. The kernel supports the RISC-V I profile (RV32I and RV64I) and can be extended to support any RISC-V microcontroller. It includes preconfigured examples for the OpenISA VEGAboard, QEMU emulator for SiFive’s HiFive board, and Antmicro’s Renode emulator for the Microchip M2GL025 Creative Board. You now have a powerful new option for building smart devices that are more cost-effective than ever before!

— Jeff;

Get to know the newest AWS Heroes – Winter 2019

Amazon Web Services Blog -

AWS Heroes are superusers who possess advanced technical skills and are early adopters of emerging technologies. Heroes are passionate about sharing their extensive AWS knowledge with others. Some get involved in person by running meetups, workshops, and speaking at conferences, while others share with online AWS communities via social media, blog posts, and open source contributions. 2019 is off to a roaring start and we’re thrilled to introduce you to the latest AWS Heroes: Aileen Gemma Smith, Ant Stanley, Gaurav Kamboj, Jeremy Daly, Kurt Lee, Matt Weagle, and Shingo Yoshida.

Aileen Gemma Smith – Sydney, Australia (Community Hero)

Aileen Gemma Smith is the founder and CEO of Vizalytics Technology. The team at Vizalytics serves public and private sector clients worldwide in transportation, tourism, and economic development. She shared their story in the Building Complex Workloads in the Cloud session at the AWS Canberra Summit in 2017. Aileen has a keen interest in diversity and inclusion initiatives and is constantly working to elevate the work and voices of underestimated engineers and founders. At the AWS Public Sector Summit Canberra in 2018, she was a panelist for We Power Tech, Inclusive Conversations with Women in Technology. She has supported and encouraged the creation of internships and mentoring programs for high school and university students with a focus on building out STEAM initiatives.

Ant Stanley – London, United Kingdom (Serverless Hero)

Ant Stanley is a consultant and community organizer. He founded and currently runs the Serverless London user group, and he is part of the ServerlessDays London organizing team and the global ServerlessDays leadership team. Previously, Ant was a co-founder of A Cloud Guru and responsible for organizing the first Serverlessconf event in New York in May 2016. Living in London since 2009, Ant’s background before serverless is primarily as a solutions architect at various organizations, from managed service providers to Tier 1 telecommunications providers. His current focus is serverless, GraphQL, and Node.js.

Gaurav Kamboj – Mumbai, India (Community Hero)

Gaurav Kamboj is a cloud architect at Hotstar, India’s leading OTT provider, which holds a global concurrency record for live streaming to more than 11 million viewers. At Hotstar, he loves building cost-efficient infrastructure that can scale to millions in minutes. He is also passionate about chaos engineering and cloud security. Gaurav holds the original “all-five” AWS certifications, is co-founder of AWS User Group Mumbai, and speaks at local tech conferences. He also conducts guest lectures and workshops on cloud computing for students at engineering colleges affiliated with the University of Mumbai.

Jeremy Daly – Boston, USA (Serverless Hero)

Jeremy Daly is the CTO of AlertMe, a startup based in NYC that uses machine learning and natural language processing to help publishers better connect with their readers. He began building cloud-based applications with AWS in 2009. After discovering Lambda, he became a passionate advocate for FaaS and managed services. He now writes extensively about serverless on his blog, jeremydaly.com, and publishes Off-by-none, a weekly newsletter that focuses on all things serverless. As an active member of the serverless community, Jeremy contributes to a number of open-source serverless projects, and has created several others, including Lambda API, Serverless MySQL, and Lambda Warmer.

Kurt Lee – Seoul, South Korea (Serverless Hero)

Kurt Lee works at Vingle Inc. as their tech lead. As one of the original team members, he has been involved in nearly all backend applications there. Most recently, he led Vingle’s full migration to serverless, cutting 40% of the server cost. He’s known for sharing his experience of adopting serverless, along with its technical and organizational value, through Medium. He and his team maintain multiple open-source projects, which they developed during the migration. Kurt hosts TechTalk@Vingle regularly, and often presents at AWSKRUG about various aspects of serverless and pushing more things to serverless.

Matt Weagle – Seattle, USA (Serverless Hero)

Matt Weagle leverages machine learning, serverless techniques, and a servicefull mindset at Lyft to create innovative transportation experiences in an operationally sustainable and secure manner. Matt looks to serverless as a way to increase collaboration across development, operational, security, and financial concerns and to support rapid business-value creation. He has been involved in the serverless community for several years. Currently, he is the organizer of Serverless – Seattle and co-organizer of the serverlessDays Seattle event. He writes about serverless topics on Medium and Twitter.

Shingo Yoshida – Tokyo, Japan (Serverless Hero)

Shingo Yoshida is the CEO of Section-9, CTO of CYDAS, a founder of Serverless Community (JP), and a member of JAWS-UG (AWS User Group – Japan). Since 2012, Shingo has not only built systems with just AWS, but has also embraced cloud-native architecture to make his customers happy. Serverless Community (JP) was established in 2016, and meetups have been held 20 times in Tokyo, Osaka, Fukuoka, and Sapporo, including three full-day conferences. Through this community, thousands of participants have discovered the value of serverless. Shingo has contributed to the serverless scene with many blog posts and books about serverless, including Serverless Architectures on AWS.

There are now 80 AWS Heroes worldwide. Learn about all of them and connect with an AWS Hero.

Podcast #299: February 2019 Updates

Amazon Web Services Blog -

Simon guides you through lots of new features, services, and capabilities that you can take advantage of, including the new AWS Backup service, more powerful GPU capabilities, new SLAs, and much, much more!

Chapters:
Service Level Agreements 0:17
Storage 0:57
Media Services 5:08
Developer Tools 6:17
Analytics 9:54
AI/ML 12:07
Database 14:47
Networking & Content Delivery 17:32
Compute 19:02
Solutions 21:57
Business Applications 23:38
AWS Cost Management 25:07
Migration & Transfer 25:39
Application Integration 26:07
Management & Governance 26:32
End User Computing 29:22

Additional Resources

Topic || Service Level Agreements 0:17
Amazon Kinesis Data Firehose Announces 99.9% Service Level Agreement
Amazon Kinesis Data Streams Announces 99.9% Service Level Agreement
Amazon Kinesis Video Streams Announces 99.9% Service Level Agreement
Amazon EKS Announces 99.9% Service Level Agreement
Amazon ECR Announces 99.9% Service Level Agreement
Amazon Cognito Announces 99.9% Service Level Agreement
AWS Step Functions Announces 99.9% Service Level Agreement
AWS Secrets Manager Announces Service Level Agreement
Amazon MQ Announces 99.9% Service Level Agreement

Topic || Storage 0:57
Introducing AWS Backup
Introducing Amazon Elastic File System Integration with AWS Backup
AWS Storage Gateway Integrates with AWS Backup
Amazon EBS Integrates with AWS Backup to Protect Your Volumes
AWS Storage Gateway Volume Detach and Attach
AWS Storage Gateway – Tape Gateway Performance
Amazon FSx for Lustre Offers New Options and Faster Speeds for Working with S3 Data

Topic || Media Services 5:08
AWS Elemental MediaConvert Adds IMF Input and Enhances Caption Burn-In Support
AWS Elemental MediaLive Adds Support for AWS CloudTrail
AWS Elemental MediaLive Now Supports Resource Tagging
AWS Elemental MediaLive Adds I-Frame-Only HLS Manifests and JPEG Outputs

Topic || Developer Tools 6:17
Amazon Corretto is Now Generally Available
AWS CodePipeline Now Supports Deploying to Amazon S3
AWS Cloud9 Supports AWS CloudTrail Logging
AWS CodeBuild Now Supports Accessing Images from Private Docker Registry
Develop and Test AWS Step Functions Workflows Locally
AWS X-Ray SDK for .NET Core is Now Generally Available

Topic || Analytics 9:54
Amazon Elasticsearch Service doubles maximum cluster capacity with 200 node cluster support
Amazon Elasticsearch Service announces support for Elasticsearch 6.4
Amazon Elasticsearch Service now supports three Availability Zone deployments
Now bring your own KDC and enable Kerberos authentication in Amazon EMR
Source code for the AWS Glue Data Catalog client for Apache Hive Metastore is now available for download

Topic || AI/ML 12:07
Amazon Comprehend is now Integrated with AWS CloudTrail
Object Bounding Boxes and More Accurate Object and Scene Detection are now Available for Amazon Rekognition Video
Amazon Elastic Inference Now Supports TensorFlow 1.12 with a New Python API
New in AWS Deep Learning AMIs: Updated Elastic Inference for TensorFlow, TensorBoard 1.12.1, and MMS 1.0.1
Amazon SageMaker Batch Transform Now Supports TFRecord Format
Amazon Transcribe Now Supports US Spanish Speech-to-Text in Real Time

Topic || Database 14:47
Amazon Redshift now runs ANALYZE automatically
Introducing Python Shell Jobs in AWS Glue
Amazon RDS for PostgreSQL Now Supports T3 Instance Types
Amazon RDS for Oracle Now Supports T3 Instance Types
Amazon RDS for Oracle Now Supports SQLT Diagnostics Tool Version 12.2.180725
Amazon RDS for Oracle Now Supports January 2019 Oracle Patch Set Updates (PSU) and Release Updates (RU)
Amazon DynamoDB Local Adds Support for Transactional APIs, On-Demand Capacity Mode, and 20 GSIs

Topic || Networking and Content Delivery 17:32
Network Load Balancer Now Supports TLS Termination
Amazon CloudFront announces six new Edge locations across United States and France
AWS Site-to-Site VPN Now Supports IKEv2
VPC Route Tables Support up to 1,000 Static Routes

Topic || Compute 19:02
Announcing a 25% price reduction for Amazon EC2 X1 Instances in the Asia Pacific (Mumbai) AWS Region
Amazon EKS Achieves ISO and PCI Compliance
AWS Fargate Now Has Support For AWS PrivateLink
AWS Elastic Beanstalk Adds Support for Ruby 2.6
AWS Elastic Beanstalk Adds Support for .NET Core 2.2
Amazon ECS and Amazon ECR now have support for AWS PrivateLink
GPU Support for Amazon ECS now Available
AWS Batch now supports Amazon EC2 A1 Instances and EC2 G3s Instances

Topic || Solutions 21:57
Deploy Micro Focus Enterprise Server on AWS with New Quick Start
AWS Public Datasets Now Available from UK Meteorological Office, Queensland Government, University of Pennsylvania, Buildzero, and Others
Quick Start Update: Active Directory Domain Services on the AWS Cloud
Introducing the Media2Cloud solution

Topic || Business Applications 23:38
Alexa for Business now offers IT admins simplified workflow to setup shared devices

Topic || AWS Cost Management 25:07
Introducing Normalized Units Information for Amazon EC2 Reservations in AWS Cost Explorer

Topic || Migration and Transfer 25:39
AWS Migration Hub Now Supports Importing On-Premises Server and Application Data to Track Migration Progress

Topic || Application Integration 26:07
Amazon SNS Message Filtering Adds Support for Multiple String Values in Blacklist Matching

Topic || Management and Governance 26:32
AWS Trusted Advisor Expands Functionality With New Best Practice Checks
AWS Systems Manager State Manager Now Supports Management of In-Guest and Instance-Level Configuration
AWS Config Increases Default Limits for AWS Config Rules
Introducing AWS CloudFormation UpdateReplacePolicy Attribute
Automate WebSocket API Creation in Amazon API Gateway Using AWS CloudFormation
AWS OpsWorks for Chef Automate and AWS OpsWorks for Puppet Enterprise Now Support AWS CloudFormation
Amazon CloudWatch Agent Adds Support for Procstat Plugin and Multiple Configuration Files
Improve Security Of Your AWS SSO Users Signing In To The User Portal By Using Email-based Verification

Topic || End User Computing 29:22
Introducing Amazon WorkLink
AppStream 2.0 enables custom scripts before session start and after session termination

About the AWS Podcast
The AWS Podcast is a cloud platform podcast for developers, dev ops, and cloud professionals seeking the latest news and trends in storage, security, infrastructure, serverless, and more. Join Simon Elisha and Jeff Barr for regular updates, deep dives and interviews. Whether you’re building machine learning and AI models, open source projects, or hybrid cloud solutions, the AWS Podcast has something for you. Like the Podcast? Rate us on iTunes and send your suggestions, show ideas, and comments to awspodcast@amazon.com. We want to hear from you!

Podcast 298: [Public Sector Special Series #6] – Bringing the White House to the World

Amazon Web Services Blog -

Dr. Stephanie Tuszynski (Director of the Digital Library – White House Historical Association) speaks about how they used AWS to bring the experience of the White House to the world.

Additional Resources: White House History

About the AWS Podcast
The AWS Podcast is a cloud platform podcast for developers, dev ops, and cloud professionals seeking the latest news and trends in storage, security, infrastructure, serverless, and more. Join Simon Elisha and Jeff Barr for regular updates, deep dives and interviews. Whether you’re building machine learning and AI models, open source projects, or hybrid cloud solutions, the AWS Podcast has something for you. Like the Podcast? Rate us on iTunes and send your suggestions, show ideas, and comments to awspodcast@amazon.com. We want to hear from you!

Now Available – Five New Amazon EC2 Bare Metal Instances: M5, M5d, R5, R5d, and z1d

Amazon Web Services Blog -

Today we are launching the five new EC2 bare metal instances that I promised you a few months ago. Your operating system runs on the underlying hardware and has direct access to the processor and other hardware. The instances are powered by AWS-custom Intel® Xeon® Scalable (Skylake) processors that deliver sustained all-core Turbo performance. Here are the specs:

m5.metal – Up to 3.1 GHz sustained all-core Turbo, 96 logical processors, 384 GiB memory, no local storage, 14 Gbps EBS-optimized bandwidth, 25 Gbps network bandwidth
m5d.metal – Up to 3.1 GHz sustained all-core Turbo, 96 logical processors, 384 GiB memory, 4 x 900 GB NVMe SSD, 14 Gbps EBS-optimized bandwidth, 25 Gbps network bandwidth
r5.metal – Up to 3.1 GHz sustained all-core Turbo, 96 logical processors, 768 GiB memory, no local storage, 14 Gbps EBS-optimized bandwidth, 25 Gbps network bandwidth
r5d.metal – Up to 3.1 GHz sustained all-core Turbo, 96 logical processors, 768 GiB memory, 4 x 900 GB NVMe SSD, 14 Gbps EBS-optimized bandwidth, 25 Gbps network bandwidth
z1d.metal – Up to 4.0 GHz sustained all-core Turbo, 48 logical processors, 384 GiB memory, 2 x 900 GB NVMe SSD, 14 Gbps EBS-optimized bandwidth, 25 Gbps network bandwidth

The M5 instances are designed for general-purpose workloads, such as web and application servers, gaming servers, caching fleets, and app development environments. The R5 instances are designed for high performance databases, web scale in-memory caches, mid-sized in-memory databases, real-time big data analytics, and other memory-intensive enterprise applications. The M5d and R5d variants also include 3.6 TB of local NVMe SSD storage. z1d instances provide high compute performance and lots of memory, making them ideal for electronic design automation (EDA) and relational databases with high per-core licensing costs. The high CPU performance allows you to license fewer cores and significantly reduce your TCO for Oracle or SQL Server workloads.

All of the instances are powered by the AWS Nitro System, with dedicated hardware accelerators for EBS processing (including crypto operations), the software-defined network inside of each Virtual Private Cloud (VPC), ENA networking, and access to the local NVMe storage on the M5d, R5d, and z1d instances. Bare metal instances can also take advantage of Elastic Load Balancing, Auto Scaling, Amazon CloudWatch, and other AWS services.

In addition to being a great home for old-school applications and system software that are licensed specifically and exclusively for use on physical, non-virtualized hardware, bare metal instances can be used to run tools and applications that require access to low-level processor features such as performance counters. For example, Mozilla’s Record and Replay Framework (rr) records and replays program execution with low overhead, using the performance counters to measure application performance and to deliver signals and context-switch events with high fidelity. You can read their paper, Engineering Record And Replay For Deployability, to learn more.

Launch One Today

m5.metal instances are available in the US East (N. Virginia and Ohio), US West (N. California and Oregon), Europe (Frankfurt, Ireland, London, Paris, and Stockholm), and Asia Pacific (Mumbai, Seoul, Singapore, Sydney, and Tokyo) AWS Regions.

m5d.metal instances are available in the US East (N. Virginia and Ohio), US West (Oregon), Europe (Frankfurt, Ireland, Paris, and Stockholm), and Asia Pacific (Mumbai, Seoul, Singapore, and Sydney) AWS Regions.

r5.metal instances are available in the US East (N. Virginia and Ohio), US West (N. California and Oregon), Europe (Frankfurt, Ireland, Paris, and Stockholm), Asia Pacific (Mumbai, Seoul, and Singapore), and AWS GovCloud (US-West) AWS Regions.

r5d.metal instances are available in the US East (N. Virginia and Ohio), US West (N. California), Europe (Frankfurt, Paris, and Stockholm), Asia Pacific (Mumbai, Seoul, and Singapore), and AWS GovCloud (US-West) AWS Regions.

z1d.metal instances are available in the US East (N. Virginia), US West (N. California and Oregon), Europe (Ireland), and Asia Pacific (Singapore and Tokyo) AWS Regions.

The bare metal instances will become available in even more AWS Regions as soon as possible.

— Jeff;
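If you would rather script the launch than click through the console, here is a minimal sketch with Python and boto3. The AMI ID, key pair, and subnet ID are placeholders to replace with your own values.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder: an HVM, ENA-enabled AMI
    InstanceType="m5.metal",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",                # placeholder
    SubnetId="subnet-0123456789abcdef0",  # placeholder
    TagSpecifications=[
        {
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": "bare-metal-test"}],
        }
    ],
)

print(response["Instances"][0]["InstanceId"])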

New – Infrequent Access Storage Class for Amazon Elastic File System (EFS)

Amazon Web Services Blog -

Amazon Elastic File System lets you create petabyte-scale file systems that can be accessed in massively parallel fashion from hundreds or thousands of EC2 instances and on-premises servers, while scaling on demand without disrupting applications. Since the mid-2016 launch of EFS, we have added many new features, including encryption of data at rest and in transit, a provisioned throughput option for when you need high throughput access to a set of files that do not occupy a lot of space, on-premises access via AWS Direct Connect, EFS File Sync, support for AWS VPN and Inter-Region VPC Peering, and more.

Infrequent Access Storage Class

Today I would like to tell you about the new Amazon EFS Infrequent Access storage class, as pre-announced at AWS re:Invent. As part of a new Lifecycle Management option for EFS file systems, you can now indicate that you want to move files that have not been accessed in the last 30 days to a storage class that is 85% less expensive. You can enable the use of Lifecycle Management when you create a new EFS file system, and you can enable it later for file systems that were created on or after today’s launch.

The new storage class is totally transparent. You can still access your files as needed and in the usual way, with no code or operational changes necessary. You can use the Infrequent Access storage class to meet auditing and retention requirements, create nearline backups that can be recovered using normal file operations, and keep data close at hand that you need on an occasional basis.

Here are a couple of things to keep in mind:

• Eligible Files – Files that are 128 KiB or larger and that have not been accessed or modified for at least 30 days can be transitioned to the new storage class. Modifications to a file’s metadata that do not change the file will not delay a transition.
• Priority – Operations that transition files to Infrequent Access run at a lower priority than other operations on the file system.
• Throughput – If your file system is configured for Bursting mode, the amount of Standard storage determines the throughput. Otherwise, the provisioned throughput applies.

Enabling Lifecycle Management

You can enable Lifecycle Management, and benefit from the Infrequent Access storage class, with a single click in the console. As I noted earlier, you can check this option when you create the file system, or you can enable it later for file systems that you create from now on. Files that have not been read or written for 30 days will be transitioned to the Infrequent Access storage class with no further action on your part. Files in the Standard storage class can be accessed with latency measured in single-digit milliseconds; files in the Infrequent Access class have latency in the low double digits. Your next AWS bill will include information on your use of both storage classes, so that you can see your cost savings.

Available Now

This feature is available now and you can start using it today in all AWS Regions where EFS is available. Infrequent Access storage is billed at $0.045 per GB-month in US East (N. Virginia), with correspondingly low pricing in other regions. There’s also a data transfer charge of $0.01 per GB for reads and writes to Infrequent Access storage. Like every AWS service and feature, we are launching with an initial set of features and a really strong roadmap! For example, we are working on additional lifecycle management flexibility, and would be very interested in learning more about what kinds of times and rules you would like.
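The console checkbox corresponds to a single API call, PutLifecycleConfiguration. Here is a minimal sketch with Python and boto3; the file system ID is a placeholder, and AFTER_30_DAYS mirrors the 30-day rule described above.

import boto3

efs = boto3.client("efs")

# Placeholder file system ID.
FILE_SYSTEM_ID = "fs-12345678"

# Transition files untouched for 30 days to the Infrequent Access class.
efs.put_lifecycle_configuration(
    FileSystemId=FILE_SYSTEM_ID,
    LifecyclePolicies=[{"TransitionToIA": "AFTER_30_DAYS"}],
)

# Confirm the policy is in place.
print(efs.describe_lifecycle_configuration(FileSystemId=FILE_SYSTEM_ID))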
— Jeff; PS – AWS DataSync will help you to quickly and easily automate data transfer between your existing on-premises storage and EFS.

Podcast #297: Reinforcement Learning with AWS DeepRacer

Amazon Web Services Blog -

How are ML models trained? How can developers learn different approaches to solving business problems? How can we race model cars on a global scale? Todd Escalona (Solutions Architect Evangelist, AWS) joins Simon to dive into reinforcement learning and AWS DeepRacer!

Additional Resources: AWS DeepRacer | Open Source DIY Donkey Car | re:Invent 2017 Robocar Hackathon | AWS DeepRacer Highlights

About the AWS Podcast
The AWS Podcast is a cloud platform podcast for developers, dev ops, and cloud professionals seeking the latest news and trends in storage, security, infrastructure, serverless, and more. Join Simon Elisha and Jeff Barr for regular updates, deep dives and interviews. Whether you’re building machine learning and AI models, open source projects, or hybrid cloud solutions, the AWS Podcast has something for you. Like the Podcast? Rate us on iTunes and send your suggestions, show ideas, and comments to awspodcast@amazon.com. We want to hear from you!

Podcast 296: [Public Sector Special Series #5] – Creating Better Educational Outcomes Using AWS | February 6, 2019

Amazon Web Services Blog -

Cesar Wedemann (QEDU) talks to Simon about how they gather education data and provide it to teachers and public schools to improve education in Brazil. They developed a free-access portal that offers easy visualization of Brazilian education open data.

Additional Resources: QEDU

About the AWS Podcast
The AWS Podcast is a cloud platform podcast for developers, dev ops, and cloud professionals seeking the latest news and trends in storage, security, infrastructure, serverless, and more. Join Simon Elisha and Jeff Barr for regular updates, deep dives and interviews. Whether you’re building machine learning and AI models, open source projects, or hybrid cloud solutions, the AWS Podcast has something for you. Like the Podcast? Rate us on iTunes and send your suggestions, show ideas, and comments to awspodcast@amazon.com. We want to hear from you!

Learn about AWS Services & Solutions – February 2019 AWS Online Tech Talks

Amazon Web Services Blog -

Join us this February to learn about AWS services and solutions. The AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. These tech talks, led by AWS solutions architects and engineers, feature technical deep dives, live demonstrations, customer examples, and Q&A with AWS experts. Register Now! Note – All sessions are free and in Pacific Time.

Tech talks this month:

Application Integration
February 20, 2019 | 11:00 AM – 12:00 PM PT – Customer Showcase: Migration & Messaging for Mission Critical Apps with S&P Global Ratings – Learn how S&P Global Ratings meets the high availability and fault tolerance requirements of their mission critical applications using Amazon MQ.

AR/VR
February 28, 2019 | 1:00 PM – 2:00 PM PT – Build AR/VR Apps with AWS: Creating a Multiplayer Game with Amazon Sumerian – Learn how to build real-world augmented reality, virtual reality, and 3D applications with Amazon Sumerian.

Blockchain
February 18, 2019 | 11:00 AM – 12:00 PM PT – Deep Dive on Amazon Managed Blockchain – Explore the components of blockchain technology, discuss use cases, and do a deep dive into capabilities, performance, and key innovations in Amazon Managed Blockchain.

Compute
February 25, 2019 | 9:00 AM – 10:00 AM PT – What’s New in Amazon EC2 – Learn about the latest innovations in Amazon EC2, including new instance types, related technologies, and consumption options that help you optimize running your workloads for performance and cost.
February 27, 2019 | 1:00 PM – 2:00 PM PT – Deploy and Scale Your First Cloud Application with Amazon Lightsail – Learn how to quickly deploy and scale your first multi-tier cloud application using Amazon Lightsail.

Containers
February 19, 2019 | 9:00 AM – 10:00 AM PT – Securing Container Workloads on AWS Fargate – Explore the security controls and best practices for securing containers running on AWS Fargate.

Data Lakes & Analytics
February 18, 2019 | 1:00 PM – 2:00 PM PT – Amazon Redshift Tips & Tricks: Scaling Storage and Compute Resources – Learn about the tools and best practices Amazon Redshift customers can use to scale storage and compute resources on-demand and automatically to handle growing data volume and analytical demand.

Databases
February 18, 2019 | 9:00 AM – 10:00 AM PT – Building Real-Time Applications with Redis – Learn about Amazon’s fully managed Redis service and how it makes it easier, simpler, and faster to build real-time applications.
February 21, 2019 | 1:00 PM – 2:00 PM PT – Introduction to Amazon DocumentDB (with MongoDB Compatibility) – Get an introduction to Amazon DocumentDB (with MongoDB compatibility), a fast, scalable, and highly available document database that makes it easy to run, manage, and scale MongoDB workloads.

DevOps
February 20, 2019 | 1:00 PM – 2:00 PM PT – Fireside Chat: DevOps at Amazon with Ken Exner, GM of AWS Developer Tools – Join our fireside chat with Ken Exner, GM of Developer Tools, to learn about Amazon’s DevOps transformation journey and latest practices and tools that support the current DevOps model.

End-User Computing
February 28, 2019 | 9:00 AM – 10:00 AM PT – Enable Your Remote and Mobile Workforce with Amazon WorkLink – Learn about Amazon WorkLink, a new, fully-managed service that provides your employees secure, one-click access to internal corporate websites and web apps using their mobile phones.

Enterprise & Hybrid
February 26, 2019 | 1:00 PM – 2:00 PM PT – The Amazon S3 Storage Classes – For cloud ops professionals, by cloud ops professionals. Wallace and Orion will tackle your toughest AWS hybrid cloud operations questions in this live Office Hours tech talk.

IoT
February 26, 2019 | 9:00 AM – 10:00 AM PT – Bring IoT and AI Together – Learn how to bring intelligence to your devices with the intersection of IoT and AI.

Machine Learning
February 19, 2019 | 1:00 PM – 2:00 PM PT – Getting Started with AWS DeepRacer – Learn about the basics of reinforcement learning, what’s under the hood, and opportunities to get hands-on with AWS DeepRacer and how to participate in the AWS DeepRacer League.
February 20, 2019 | 9:00 AM – 10:00 AM PT – Build and Train Reinforcement Models with Amazon SageMaker RL – Learn about Amazon SageMaker RL to use reinforcement learning and build intelligent applications for your businesses.
February 21, 2019 | 11:00 AM – 12:00 PM PT – Train ML Models Once, Run Anywhere in the Cloud & at the Edge with Amazon SageMaker Neo – Learn about Amazon SageMaker Neo, where you can train ML models once and run them anywhere in the cloud and at the edge.
February 28, 2019 | 11:00 AM – 12:00 PM PT – Build your Machine Learning Datasets with Amazon SageMaker Ground Truth – Learn how customers are using Amazon SageMaker Ground Truth to build highly accurate training datasets for machine learning quickly and reduce data labeling costs by up to 70%.

Migration
February 27, 2019 | 11:00 AM – 12:00 PM PT – Maximize the Benefits of Migrating to the Cloud – Learn how to group and rationalize applications and plan migration waves in order to realize the full set of benefits that cloud migration offers.

Networking
February 27, 2019 | 9:00 AM – 10:00 AM PT – Simplifying DNS for Hybrid Cloud with Route 53 Resolver – Learn how to enable DNS resolution in hybrid cloud environments using Amazon Route 53 Resolver.

Productivity & Business Solutions
February 26, 2019 | 11:00 AM – 12:00 PM PT – Transform the Modern Contact Center Using Machine Learning and Analytics – Learn how to integrate Amazon Connect and AWS machine learning services, such as Amazon Lex, Amazon Transcribe, and Amazon Comprehend, to quickly process and analyze thousands of customer conversations and gain valuable insights.

Serverless
February 19, 2019 | 11:00 AM – 12:00 PM PT – Best Practices for Serverless Queue Processing – Learn the best practices of serverless queue processing, using Amazon SQS as an event source for AWS Lambda.

Storage
February 25, 2019 | 11:00 AM – 12:00 PM PT – Introducing AWS Backup: Automate and Centralize Data Protection in the AWS Cloud – Learn about this new, fully managed backup service that makes it easy to centralize and automate the backup of data across AWS services in the cloud as well as on-premises.

Podcast 294: [Public Sector Special Series #4] – Using AI to make Content Available for Students at Imperial College London

Amazon Web Services Blog -

How do you train the next generation of digital leaders? How do you provide them with a modern educational experience? Can you do it without technical expertise? Hear how Ruth Black (Teaching Fellow at the Digital Academy) applied Amazon Transcribe to make this real.

Additional Resources: NHS Digital Academy

About the AWS Podcast: The AWS Podcast is a cloud platform podcast for developers, DevOps practitioners, and cloud professionals seeking the latest news and trends in storage, security, infrastructure, serverless, and more. Join Simon Elisha and Jeff Barr for regular updates, deep dives, and interviews. Like the podcast? Rate us on iTunes and send your suggestions, show ideas, and comments to awspodcast@amazon.com. We want to hear from you!
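For a sense of what this looks like in practice, here is a minimal sketch (not from the episode) of starting a transcription job for a recorded lecture with the AWS SDK for Python (boto3). The job name, bucket, and file are hypothetical placeholders.

import time
import boto3

# Minimal sketch: transcribe a recorded lecture stored in S3, then poll
# until the job finishes. All names below are hypothetical placeholders.
transcribe = boto3.client("transcribe")

transcribe.start_transcription_job(
    TranscriptionJobName="lecture-week-1",                        # placeholder
    LanguageCode="en-GB",
    MediaFormat="mp4",
    Media={"MediaFileUri": "s3://my-lecture-bucket/week-1.mp4"},  # placeholder
)

while True:
    job = transcribe.get_transcription_job(TranscriptionJobName="lecture-week-1")
    status = job["TranscriptionJob"]["TranscriptionJobStatus"]
    if status in ("COMPLETED", "FAILED"):
        break
    time.sleep(10)

if status == "COMPLETED":
    # The transcript itself is a JSON document available at this URL.
    print(job["TranscriptionJob"]["Transcript"]["TranscriptFileUri"])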

Podcast 293: Diving into Data with Amazon Athena

Amazon Web Services Blog -

Do you have lots of data to analyze? Is writing SQL a skill you have? Would you like to analyze massive amounts of data at low cost, without capacity planning? In this episode, Simon shares how Amazon Athena can give you options you may not have considered before.

Additional Resources: Amazon Athena; Top 10 Performance Tips; Using CTAS for Performance
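To make the "SQL without capacity planning" point concrete, here is a minimal sketch of running an ad hoc Athena query with the AWS SDK for Python (boto3). The database, table, and results bucket are hypothetical placeholders.

import time
import boto3

# Minimal sketch: run an ad hoc SQL query over data in S3, wait for it to
# finish, then print the result rows. Names below are placeholders.
athena = boto3.client("athena")

execution = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS hits FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "weblogs"},                      # placeholder
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # placeholder
)
query_id = execution["QueryExecutionId"]

# Athena is asynchronous: poll for completion before fetching results.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    results = athena.get_query_results(QueryExecutionId=query_id)
    for row in results["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])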

New – TLS Termination for Network Load Balancers

Amazon Web Services Blog -

When you access a web site using the HTTPS protocol, a whole lot of interesting work (formally known as an SSL/TLS handshake) happens to create and maintain a secure communication channel. Your client (browser) and the web server work together to negotiate a mutually agreeable cipher, exchange keys, and set up a session key. Once established, both ends of the conversation use the session key to encrypt and decrypt all further traffic. Because the session key is unique to the conversation between the client and the server, a third party cannot decrypt the traffic or interfere with the conversation.

New TLS Termination
Today we are simplifying the process of building secure web applications by giving you the ability to make use of TLS (Transport Layer Security) connections that terminate at a Network Load Balancer (you can think of TLS as providing the “S” in HTTPS). This will free your backend servers from the compute-intensive work of encrypting and decrypting all of your traffic, while also giving you a host of other features and benefits:

Source IP Preservation – The source IP address and port are presented to your backend servers, even when TLS is terminated at the NLB. This is, as my colleague Colm says, “insane magic!”

Simplified Management – Using TLS at scale means that you need to take responsibility for distributing your server certificate to each backend server. This creates extra management work (sometimes involving a fleet of proxy servers), and also increases your attack surface due to the presence of multiple copies of the certificate. Today’s launch removes all of that complexity and gives you a central management point for your certificates. If you are using AWS Certificate Manager (ACM), your certificates will be stored securely, expired & rotated regularly, and updated automatically, all with no action on your part.

Zero-day Patching – The TLS protocol is complex and the implementations are updated from time to time in response to emerging threats. Terminating your connections at the NLB protects your backend servers and allows us to update your NLB in response to these threats. We make use of s2n, our security-focused, formally verified implementation of the TLS/SSL protocols.

Improved Compliance – You can use built-in security policies to specify the cipher suites and protocol versions that are acceptable to your application. This will help you in your PCI and FedRAMP compliance efforts, and will also allow you to achieve a perfect TLS score.

Classic Upgrade – If you are currently using a Classic Load Balancer for TLS termination, switching to a Network Load Balancer will allow you to scale more quickly in response to an increased load. You will also be able to make use of a static IP address for your NLB and to log the source IP address for requests.

Access Logs – You now have the ability to enable access logs for your Network Load Balancers and to direct them to the S3 bucket of your choice. The log entries include detailed information about the TLS protocol version, cipher suite, connection time, handshake time, and more.

Using TLS Termination
You can create a Network Load Balancer and make use of TLS termination in minutes! You can use the API (CreateLoadBalancer), the CLI (create-load-balancer), the EC2 Console, or an AWS CloudFormation template. I’ll use the Console, and click Load Balancers to get started.
Then I click Create in the Network Load Balancer area. I enter a name (MyLB2) and choose TLS (Secure TCP) as the Load Balancer Protocol. Then I choose one or more Availability Zones, and optionally choose an Elastic IP address for each one. I can also choose to tag my NLB. When I am all set, I click Next: Configure Security Settings to proceed.

On the next page, I can choose an existing certificate or upload a new one. I already have one for www.jeff-barr.com, so I’ll choose it. I also choose a security policy (more on that in a minute). There are currently seven security policies to choose from, and each policy allows for the use of certain TLS versions and ciphers. The describe-ssl-policies command can be used to learn more about the policies.

After choosing the certificate and the policy, I click Next: Configure Routing. I can choose the communication protocol (TCP or TLS) that will be used between my NLB and my targets. If I choose TLS, communication is encrypted; this allows me to make use of complete end-to-end encryption in transit. The remainder of the setup process proceeds as usual, and I can start using my Network Load Balancer right away.

Available Now
TLS Termination is available now and you can start using it today in the US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), and South America (São Paulo) Regions.

— Jeff;
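For readers who prefer the API route mentioned above to the Console, here is a minimal sketch of the equivalent setup with the AWS SDK for Python (boto3). The subnet IDs, VPC ID, and certificate ARN are placeholders for your own resources; the security policy shown is one of the built-in options.

import boto3

elbv2 = boto3.client("elbv2")

# Create the Network Load Balancer (placeholder subnet IDs).
nlb = elbv2.create_load_balancer(
    Name="MyLB2",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-0abc1234", "subnet-0def5678"],
)["LoadBalancers"][0]

# Create a target group; use Protocol="TLS" here instead for complete
# end-to-end encryption between the NLB and the targets.
targets = elbv2.create_target_group(
    Name="MyLB2-targets",
    Protocol="TCP",
    Port=443,
    VpcId="vpc-0123abcd",  # placeholder
    TargetType="instance",
)["TargetGroups"][0]

# The TLS listener terminates client connections at the NLB, using an ACM
# certificate and one of the built-in security policies.
elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancerArn"],
    Protocol="TLS",
    Port=443,
    Certificates=[{"CertificateArn":
        "arn:aws:acm:us-east-1:123456789012:certificate/example"}],  # placeholder
    SslPolicy="ELBSecurityPolicy-TLS-1-2-2017-01",
    DefaultActions=[{"Type": "forward", "TargetGroupArn": targets["TargetGroupArn"]}],
)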

Amazon WorkLink – Secure, One-Click Mobile Access to Internal Websites and Applications

Amazon Web Services Blog -

We want to make it easier for you and your colleagues to use your mobile devices to access internal corporate websites and applications. Our goal is to give your workforce controlled access to valuable intranet content while maintaining a strong security profile.

Introducing Amazon WorkLink
Today I would like to tell you about Amazon WorkLink. You get seamless access to internal websites and applications from your mobile device, with no need to modify or migrate any content. Amazon WorkLink is a fully managed, pay-as-you-go service that scales to meet the needs of any organization. It is easy to set up and run, and does not require you to migrate or modify your existing sites or content. You get full control over the domains that are accessible from mobile devices, and you can use your existing SAML-based Identity Provider (IdP) to manage your user base.

Amazon WorkLink gains access to your internal resources through a Virtual Private Cloud (VPC). The resources can exist within that VPC (for example, applications hosted on EC2 instances), in another VPC that is peered with it, or on-premises. In the on-premises case, the resources must be accessible via an IPsec tunnel, AWS Direct Connect, or the new AWS Transit Gateway. Applications running in a VPC can use AWS PrivateLink to access AWS services while keeping all traffic on the AWS network.

Your users get a secure, non-invasive browsing experience. Corporate content is rendered within the AWS Cloud and delivered to each device over a secure connection. We’re launching with support for devices that run iOS 12, with support for Android 6+ coming within weeks.

Inside Amazon WorkLink
Amazon WorkLink lets you associate domains with each WorkLink fleet that you create. For example, you could associate phones.example.com, payroll.example.com, and tickets.example.com to provide your users with access to your phone directory, payroll system, and trouble ticketing system. When you associate a domain with a fleet, you need to prove to WorkLink that you control the domain. WorkLink will issue an SSL/TLS certificate for the domain and then establish and manage an endpoint to handle requests for the domain.

With the fleet created, you can use the email template provided by WorkLink to extend invitations to users. The users accept the invitations, install the WorkLink app, and sign in using their existing corporate identity. The app installs itself as the first-tier DNS resolver and configures the device’s VPN connection so that it can access the WorkLink fleet.

When a mobile user accesses a domain that is associated with their fleet, the requested content is fetched, rendered in the cloud, delivered to the device in vector form across a TLS connection, and displayed in the user’s existing mobile browser. Your users can interact with the content as usual: zooming, scrolling, and typing all work as expected. All HTML, CSS, and JavaScript content is rendered in the cloud on a fleet of EC2 instances isolated from other AWS customers; no content is stored or cached by browsers on the local devices. Encrypted versions of cookies are stored by the WorkLink app on the user devices. They are never decrypted on the devices, but are sent back to resume sessions when a user gets a new cloud-rendering container. Traffic to and from domains that are not associated with WorkLink continues to flow as before, and does not go through WorkLink.

Setting Up Amazon WorkLink
Let’s walk through the process of setting up a WorkLink fleet.
I don’t have a genuine corporate network or intranet, so I’ll have to wave my hands a bit. I open the Amazon WorkLink Console and click Create fleet to get started. I give my fleet a programmatic name (my-fleet), a display name (MyFleet), and click Create fleet to proceed. My fleet is created in seconds, and is ready for further setup. I click my-fleet to proceed; I can see the mandatory and optional setup steps at a glance.

I click Link IdP to use my existing SAML-style identity provider, click Choose file to upload an XML metadata document that describes my identity provider, and again click Link IdP to proceed. WorkLink validates and processes the document, and generates a service provider metadata document. I download that document and pass it along to the operator of the identity provider. The provider, in turn, uses the document to finalize the SAML federation for the identity provider.

Next, I click Link network to link my users to my company content. I can create a new VPC, or I can use an existing one. Either way, I should choose subnets in two or more Availability Zones in order to maximize availability. The chosen subnets must have enough free IP addresses to support the number of users that will be accessing the fleet; WorkLink will create and manage an Elastic Network Interface (ENI) for each connected user. I’ll use my existing VPC.

With my identity provider configured and my network linked, I can click Associate domain to indicate that I want my users to be able to access some content on my network. I enter the domain name, and click Next to proceed (let’s pretend that www.jeff-barr.com is an intranet site). Now I need to prove that I have control over the domain. I can either modify the DNS configuration or respond to an email request. I’ll take the first option. The console displays the necessary change (an additional CNAME record) that I need to make to my domain. I use Amazon Route 53 to maintain my DNS entries, so it is easy to add the CNAME. Amazon WorkLink will validate the DNS entry (this can take four or five hours; email is a bit quicker). I can repeat this step for all desired domains, and I can add even more later.

After my domain has been validated, I click User invites to get an email invitation that I can send to my users. Your users simply follow the directions and can start to enjoy remote access to the permitted sites and applications within minutes. Other powerful administrative features include the ability to set up and use device policies, and to configure delivery of audit logs to a new or existing Amazon Kinesis Data Stream.

Things to Know
Here are a couple of things to keep in mind when evaluating Amazon WorkLink:

Device Support – We are launching with support for devices that run iOS 12. Support for Android 6 devices will be ready within weeks.

Compatibility – Amazon WorkLink is designed to process and render most modern forms of web content, with support for video and audio on the drawing board. It does not support content that makes use of Flash, Silverlight, WebGL, or applets.

Identity Providers – Amazon WorkLink can be used with SAML-based identity providers today, with plans to support other types of providers based on customer requests and feedback.

Regions – You can create Amazon WorkLink fleets in AWS Regions in North America and Europe today. Support for other Regions is in the works for rollout later this year.

Pricing – Pricing is based on the number of users with an active browser session in a given month.
You pay $5 per active user per month.

Available Now
Amazon WorkLink is available now and you can start using it today!

— Jeff;
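If you would rather script the fleet setup than click through the Console, here is a minimal sketch using the AWS SDK for Python (boto3), assuming the WorkLink API's CreateFleet and AssociateDomain operations; the domain and ACM certificate ARN are placeholders for resources you already control.

import boto3

worklink = boto3.client("worklink")

# Create the fleet (mirrors the my-fleet / MyFleet example above).
fleet = worklink.create_fleet(
    FleetName="my-fleet",
    DisplayName="MyFleet",
)

# Associating a domain requires proof of control; here that proof takes the
# form of an ACM certificate ARN (placeholder value below).
worklink.associate_domain(
    FleetArn=fleet["FleetArn"],
    DomainName="www.example.com",  # placeholder intranet domain
    AcmCertificateArn="arn:aws:acm:us-east-1:123456789012:certificate/example",
)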

Podcast 292: [Public Sector Special Series #3] – Moving to Microservices from an Organisational Standpoint | January 23, 2019

Amazon Web Services Blog -

Jeff Olson (VP & Chief Data Officer at College Board) talks about his experiences in fostering change from an organisational standpoint whilst moving to a microservices architecture.

Additional Resources: College Board

Podcast 291 | January 2019 Update Show

Amazon Web Services Blog -

Simon takes you through a nice mix of updates and new things to take advantage of – even a price drop!

Chapters: Service Level Agreements 00:19 | Price Reduction 1:15 | Databases 2:09 | Service Region Expansion 3:52 | Analytics 5:23 | Machine Learning 7:13 | Compute 7:55 | IoT 9:37 | Management 10:43 | Mobile 11:33 | Desktop 12:30 | Certification 13:11

Additional Resources:

Topic || Service Level Agreements 00:19
Amazon API Gateway announces service level agreement
Amazon EMR Announces 99.9% Service Level Agreement
Amazon EFS Announces 99.9% Service Level Agreement
AWS Direct Connect Service Level Agreement

Topic || Price Reduction 1:15
Announcing AWS Fargate Price Reduction by Up To 50%

Topic || Databases 2:09
Introducing Amazon DocumentDB (with MongoDB compatibility) – Generally available
AWS Database Migration Service Now Supports Amazon DocumentDB with MongoDB compatibility as a target

Topic || Service Region Expansion 3:52
Amazon EC2 High Memory Instances are Now Available in Additional AWS Regions
Amazon Neptune is Now Available in Asia Pacific (Sydney)
Amazon EKS Available in Seoul Region
AWS Glue is now available in the AWS EU (Paris) Region
Amazon EC2 X1e Instances are Now Available in the Asia Pacific (Seoul) AWS Region
Amazon Pinpoint is now available in three additional regions

Topic || Analytics 5:23
Amazon QuickSight Launches Pivot Table Enhancements, Cross-Schema Joins and More

Topic || Machine Learning 7:13
Amazon SageMaker now supports encrypting all inter-node communication for distributed training

Topic || Compute 7:55
Amazon EC2 Spot now Provides Results for the “Describe Spot Instance Request” in Multiple Pages
Announcing Windows Server 2019 AMIs for Amazon EC2
AWS Step Functions Now Supports Resource Tagging

Topic || IoT 9:37
AWS IoT Core Now Enables Customers to Store Messages for Disconnected Devices
Renesas RX65N System on Chip is Qualified for Amazon FreeRTOS

Topic || Management 10:43
AWS Config adds support for AWS Service Catalog
AWS Single Sign-On Now Enables You to Direct Users to a Specific AWS Management Console Page

Topic || Mobile 11:33
AWS Device Farm now supports Appium Node.js and Appium Ruby

Topic || Desktop 12:30
Deploy Citrix Virtual Apps and Desktops Service on AWS with New Quick Start (https://aws.amazon.com/about-aws/whats-new/2019/01/deploy-citrix-virtual-apps-and-desktops-service-on-aws-with-new-quick-start/)

Topic || Certification 13:11
Announcing the AWS Certified Alexa Skill Builder – Specialty Beta Exam

AWS Backup – Automate and Centrally Manage Your Backups

Amazon Web Services Blog -

AWS gives you the power to easily and dynamically create file systems, block storage volumes, relational databases, NoSQL databases, and other resources that store precious data. You can create them on a moment’s notice as the need arises, giving you access to as much storage as you need and opening the door to large-scale cloud migration. When you bring your sensitive data to the cloud, you need to make sure that you continue to meet business and regulatory compliance requirements, and you definitely want to make sure that you are protected against application errors. While you can build your own backup tools using the snapshot operations built into many of the services that I listed above, creating an enterprise-wide backup strategy and the tools to implement it still takes a lot of work. We are changing that.

New AWS Backup
AWS Backup is designed to help you automate and centrally manage your backups. You can create policy-driven backup plans, monitor the status of ongoing backups, verify compliance, and find and restore backups, all using a central console. Using a combination of the existing AWS snapshot operations and new, purpose-built backup operations, Backup backs up EBS volumes, EFS file systems, RDS & Aurora databases, DynamoDB tables, and Storage Gateway volumes to Amazon Simple Storage Service (S3), with the ability to tier older backups to Amazon Glacier. Because Backup includes support for Storage Gateway volumes, you can include your existing, on-premises data in the backups that you create.

Each backup plan includes one or more backup rules. The rules express the backup schedule, frequency, and backup window. Resources to be backed up can be identified explicitly or in a policy-driven fashion using tags. Lifecycle rules control storage tiering and expiration of older backups. Backup gathers the set of snapshots and the metadata that goes along with the snapshots into collections that define a recovery point. You get lots of control so that you can define your daily / weekly / monthly backup strategy, the ability to rest assured that your critical data is being backed up in accordance with your requirements, and the ability to restore that data on an as-needed basis. Backups are grouped into vaults, each encrypted by a KMS key.

Using AWS Backup
You can get started with AWS Backup in minutes. Open the AWS Backup Console and click Create backup plan. I can build a plan from scratch, start from an existing plan, or define one using JSON. I’ll build a new plan, and start by giving my plan a name. Now I create the first rule for my backup plan. I call it MainBackup, indicate that I want it to run daily, define the lifecycle (transition to cold storage after 1 month, expire after 6 months), and select the Default vault. I can tag the recovery points that are created as a result of this rule, and I can also tag the backup plan itself. I’m all set, so I click Create plan to move forward. At this point my plan exists and is ready to run, but it has just one rule and does not have any resource assignments (so there’s nothing to back up).

Now I need to indicate which of my resources are subject to this backup plan. I click Assign resources, and then create one or more resource assignments. Each assignment is named and references an IAM role that is used to create the recovery point. Resources can be denoted by tag or by resource ID, and I can use both in the same assignment.
I enter all of the values and click Assign resources to wrap up. The next step is to wait for the first backup job to run (I cheated by editing my backup window in order to get this post done as quickly as possible). I can peek at the Backup Dashboard to see the overall status.

Backups On Demand
I also have the ability to create a recovery point on demand for any of my resources. I choose the desired resource, designate a vault, and click Create an on-demand backup. I indicated that I wanted to create the backup right away, so a job is created. The job runs to completion within minutes.

Inside a Vault
I can also view my collection of vaults, each of which contains multiple recovery points. I can examine the list of recovery points in a vault, inspect a recovery point, and then click Restore to restore my table (in this case). I’ve shown you the highlights, and you can discover the rest for yourself!

Things to Know
Here are a couple of things to keep in mind when you are evaluating AWS Backup:

Services – We are launching with support for EBS volumes, RDS databases, DynamoDB tables, EFS file systems, and Storage Gateway volumes. We’ll add support for additional services over time, and welcome your suggestions. Backup uses the existing snapshot operations for all services except EFS file systems.

Programmatic Access – You can access all of the functions that I showed you above using the AWS Command Line Interface (CLI) and the AWS Backup APIs. The APIs are powerful integration points for your existing backup tools and scripts.

Regions – Backups work within the scope of a particular AWS Region, with plans in the works to enable several different types of cross-Region functionality in 2019.

Pricing – You pay the normal AWS charges for backups that are created using the built-in AWS snapshot facilities. For Amazon EFS, there’s a low, per-GB charge for warm storage and an even lower charge for cold storage.

Available Now
AWS Backup is available now and you can start using it today!

— Jeff;
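As the Programmatic Access note above says, everything shown in the Console is also available through the APIs. Here is a minimal sketch of the same plan with the AWS SDK for Python (boto3); the IAM role ARN and the Backup=true tag convention are placeholders.

import boto3

backup = boto3.client("backup")

# A daily rule that mirrors the walkthrough: move recovery points to cold
# storage after a month, expire them after six, store them in Default vault.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "MyBackupPlan",
        "Rules": [{
            "RuleName": "MainBackup",
            "TargetBackupVaultName": "Default",
            "ScheduleExpression": "cron(0 5 ? * * *)",  # daily at 05:00 UTC
            "Lifecycle": {
                "MoveToColdStorageAfterDays": 30,
                "DeleteAfterDays": 180,
            },
        }],
    }
)

# Assign resources by tag: anything tagged Backup=true falls under the plan.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "TaggedResources",
        "IamRoleArn": "arn:aws:iam::123456789012:role/BackupRole",  # placeholder
        "ListOfTags": [{
            "ConditionType": "STRINGEQUALS",
            "ConditionKey": "Backup",
            "ConditionValue": "true",
        }],
    },
)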
