#aws api gateway http endpoint
codeonedigest · 1 year ago
AWS API Gateway Tutorial for Cloud API Developer | AWS API Gateway Explained with Examples  
Full video link: https://youtube.com/shorts/A-DsF8mbF7U. A new video on AWS API Gateway and cloud topics has been published on the codeonedigest YouTube channel.
Amazon AWS API Gateway is an AWS service for creating, publishing, maintaining, monitoring, and securing REST, HTTP, and WebSocket APIs. API developers can create APIs that access AWS or other web services, as well as data stored in the AWS Cloud. As an API Gateway API developer, you can create APIs for use in your own client applications.  You can also make your APIs available to third-party…
markwatsonsbooks · 5 months ago
AWS Ultimate Guide: From Beginners to Advanced by SK Singh
This is a very comprehensive book on AWS, from beginner to advanced. The book has extensive diagrams to help you understand topics much more easily.
To make understanding the subject a smoother experience, the book is divided into the following sections:
Cloud Computing
AWS Fundamentals (What is AWS, AWS Account, AWS Free Tier, AWS Cost & Billing Management, AWS Global Cloud Infrastructure (part I), IAM, EC2)
AWS Advanced (EC2 Advanced, ELB, Advanced S3, Route 53, AWS Global Cloud Infrastructure (part II), Advanced Storage on AWS, AWS Monitoring, Audit, and Performance)
AWS RDS and Databases (AWS RDS and Cache, AWS Databases)
Serverless (Serverless Computing, AWS Integration, and Messaging)
Container & CI/CD (Container, AWS CI/CD services)
Data & Analytics (Data & Analytics)
Machine Learning (AWS ML/AI Services)
Security (AWS Security & Encryption, AWS Shared Responsibility Model, How to get Support on AWS, Advanced Identity)
Networking (AWS Networking)
Disaster Management (Backup, Recovery & Migrations)
Solutions Architecture (Cloud Architecture Key Design Principles, AWS Well-Architected Framework, Classic Solutions Architecture)
Includes AWS services/features such as IAM, S3, EC2, EC2 purchasing options, EC2 placement groups, Load Balancers, Auto Scaling, S3 Glacier, S3 Storage classes, Route 53 Routing policies, CloudFront, Global Accelerator, EFS, EBS, Instance Store, AWS Snow Family, AWS Storage Gateway, AWS Transfer Family, Amazon CloudWatch, EventBridge, CloudWatch Insights, AWS CloudTrail, AWS Config, Amazon RDS, Amazon Aurora, Amazon ElastiCache, Amazon DocumentDB, Amazon Keyspaces, Amazon Quantum Ledger Database, Amazon Timestream, Amazon Managed Blockchain, AWS Lambda, Amazon DynamoDB, Amazon API Gateway, SQS, SNS, SES, Amazon Kinesis, Amazon Kinesis Firehose, Amazon Kinesis Data Analytics, Amazon Kinesis Data Streams, Amazon ECS, Amazon ECR, Amazon EKS, AWS CloudFormation, AWS Elastic Beanstalk, AWS CodeBuild, AWS OpsWorks, AWS CodeGuru, AWS CodeCommit, Amazon Athena, Amazon Redshift, Amazon EMR, Amazon QuickSight, AWS Glue, AWS Lake Formation, Amazon MSK, Amazon Rekognition, Amazon Transcribe, Amazon Polly, Amazon Translate, Amazon Lex, Amazon Connect, Amazon Comprehend, Amazon Comprehend Medical, Amazon SageMaker, Amazon Forecast, Amazon Kendra, Amazon Personalize, Amazon Textract, Amazon Fraud Detector, Amazon Sumerian, AWS WAF, AWS Shield Standard, AWS Shield Advanced, AWS Firewall Manager, Amazon GuardDuty, Amazon Inspector, Amazon Macie, Amazon Detective, SSM Session Manager, AWS Systems Manager, S3 Replication & Encryption, AWS Organizations, AWS Control Tower, AWS SSO, Amazon Cognito, AWS VPC, NAT Gateway, VPC Endpoints, VPC Peering, AWS Transit Gateway, AWS Site-to-Site VPN, Database Migration Service (DMS), and many others.
Order YOUR Copy NOW: https://amzn.to/4bfoHQy via @amazon
livlovlun · 3 years ago
Amazon Web Services, by Choi Junseung et al.
1. AWS basics: cloud computing and AWS; characteristics of AWS; major AWS services; AWS physical infrastructure (Regions, AZs, Edge); understanding AWS billing (billing factors and billing principles). Hands-on: getting started with AWS (creating an account, signing in to the AWS Management Console); introducing the target architecture and its network layout. 2. Managing permissions to control resources (IAM): what IAM is; the AWS API and IAM; how to use the AWS API; IAM objects (the root account, IAM Users, Groups, Roles, and Policies). Hands-on: account-security settings (CloudTrail, AWS Config for security/audit, root-account MFA, a password policy); creating IAM Users, Groups, Roles, and Policies. 3. Unlimited-capacity object storage (S3): object storage and S3; Buckets and Objects; S3 access control; S3 Storage Classes; additional features (static website hosting, versioning, lifecycle). Hands-on: creating a bucket, setting properties, static web hosting, uploading and downloading objects; S3 pricing. 4. Building your own private network (VPC): network topology and VPC; VPCs and VPC subnets; objects managed in a VPC; public vs. private subnets; VPC Peering, NAT Gateway, VPC Endpoints, Security Groups and Network ACLs. Hands-on: creating a VPC and subnets, an Internet Gateway, route tables, Network ACL and Security Group policies; VPC pricing. 5. The computing unit underlying every service (EC2): hosts, hypervisors, guests, and EC2; instance types; EC2 actions; storage used by EC2 (Instance Store, EBS). Hands-on: launching an instance, assigning an Elastic IP, connecting over SSH, using the AWS CLI, reading EC2 metadata; EC2 pricing.
6. Managed databases (RDS): managed DB services and RDS; supported DB engines; Multi-AZ, Read Replicas, backups, maintenance. Hands-on: DB subnet groups, parameter groups, option groups, launching and connecting to an RDS instance; RDS pricing. Bridge: installing WordPress, connecting from a browser, the WordPress AWS plugin, posting a sample page. 7. An Elastic Load Balancer specialized for VPC (ELB): Classic vs. Application Load Balancer; external vs. internal ELB; health checks, SSL termination, sticky sessions, and other features. Hands-on: creating an ELB and reviewing its settings; ELB pricing. 8. Automatically scaling your infrastructure (Auto Scaling): scale-in and scale-out; launch configurations, Auto Scaling groups, scaling policies. Hands-on: creating a base AMI, a launch configuration, and an Auto Scaling group. 9. 70+ global edge locations in a few clicks (CloudFront): CDNs and CloudFront; how CloudFront works; origins; web (HTTP/S) and media (RTMP) delivery, dynamic-content caching, security (signed URLs/cookies, geo restriction, WAF), built-in statistics. Hands-on: creating a distribution, adding origins and behaviors, invalidating cached objects; CloudFront pricing. 10. A global DNS service with a 100% SLA (Route 53): DNS and Route 53; supported record types; public and private hosted zones; routing policies, health checks, alias records. Hands-on: buying a domain, creating a hosted zone and alias records; Route 53 pricing. 11. Automation starts with monitoring (CloudWatch): monitoring and CloudWatch; key objects and units; metrics, alarms, logs, events, dashboards. Hands-on: default and custom metrics, alarms, logs, EBS backup driven by events; CloudWatch pricing.
sandeeppotdukhe2511 · 3 years ago
Going Serverless: how to run your first AWS Lambda function in the cloud
A decade ago, cloud servers abstracted away physical servers. And now, “Serverless” is abstracting away cloud servers.
Technically, the servers are still there. You just don’t need to manage them anymore.
Another advantage of going serverless is that you no longer need to keep a server running all the time. The “server” suddenly appears when you need it, then disappears when you’re done with it. Now you can think in terms of functions instead of servers, and all your business logic can now live within these functions.
A function needs an event to invoke it; in the case of AWS Lambda Functions, this is called a trigger. Lambda Functions can be triggered in different ways: an HTTP request, a new document upload to S3, a scheduled job, an AWS Kinesis data stream, or a notification from AWS Simple Notification Service (SNS).
In this tutorial, I’ll show you how to set up your own Lambda Function and, as a bonus, show you how to set up a REST API all in the AWS Cloud, while writing minimal code.
Note that the Pros and Cons of Serverless depend on your specific use case. So in this article, I’m not going to tell you whether Serverless is right for your particular application — I’m just going to show you how to use it.
First, you’ll need an AWS account. If you don’t have one yet, start by opening a free AWS account here. AWS has a free tier that’s more than enough for what you will need for this tutorial.
We’ll be writing the function isPalindrome, which checks whether a passed string is a palindrome or not.
Above is an example implementation in JavaScript, originally shared as a gist on GitHub.
A palindrome is a word, phrase, or sequence that reads the same backward as forward. For the sake of simplicity, we will limit the function to words only.
We take the string, split it, reverse it, and then join it back together. If the string and its reverse are equal, the string is a palindrome; otherwise it is not.
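The gist embed did not survive this repost, so here is a minimal reconstruction of the split/reverse/join approach described above (the exact wording of the returned messages matches the outputs shown later in the tutorial):

```javascript
// Checks whether a single word is a palindrome and returns a
// human-readable verdict, using the split/reverse/join approach.
function isPalindrome(string) {
  const reversed = string.split('').reverse().join('');
  return string === reversed
    ? `${string} is a Palindrome`
    : `${string} is not a Palindrome`;
}

console.log(isPalindrome('racecar')); // racecar is a Palindrome
console.log(isPalindrome('abcd'));    // abcd is not a Palindrome
```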
Creating the isPalindrome Lambda Function
In this step we will be heading to the AWS Console to create the Lambda Function:
In the AWS Console go to Lambda.
And then press “Get Started Now.”
For runtime select Node.js 6.10 and then press “Blank Function.”
Skip this step and press “Next.”
For Name type in isPalindrome, for description type in a description of your new Lambda Function, or leave it blank.
As you can see in the gist above, a Lambda function is just a function we export as a module, in this case named handler. The function takes three parameters: event, context, and a callback function.
The callback runs when the Lambda function is done and returns a response or an error message. For the Blank Function blueprint, the response is hard-coded as the string ‘Hello from Lambda’. Since there is no error handling in this tutorial, you will just pass null for the error. We will look closely at the event parameter in the next few steps.
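A sketch of the blueprint's shape, since the screenshot is missing here (in Lambda this function would be assigned to `exports.handler`; the local invocation at the bottom is only for illustration):

```javascript
// The blank-function blueprint: a handler receiving an event, a context,
// and a completion callback, invoked as callback(error, response).
const handler = (event, context, callback) => {
  const response = 'Hello from Lambda'; // hard-coded in the blueprint
  callback(null, response); // first argument null means "no error"
};

// Invoking the handler locally to see the hard-coded response:
handler({}, {}, (err, res) => console.log(res)); // Hello from Lambda
```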
Scroll down. For Role choose “Create new Role from template”, and for Role name use isPalindromeRole or any name you like.
For Policy templates, choose “Simple Microservice” permissions.
For Memory, 128 megabytes is more than enough for our simple function.
As for the 3 second timeout, this means that — should the function not return within 3 seconds — AWS will shut it down and return an error. Three seconds is also more than enough.
Leave the rest of the advanced settings unchanged.
Press “Create function.”
Congratulations — you’ve created your first Lambda Function. To test it press “Test.”
As you can see, your Lambda Function returns the hard-coded response of “Hello from Lambda.”
Now add the code from isPalindrome.js to your Lambda Function, but instead of return result use callback(null, result). Then add a hard-coded string value of abcd on line 3 and press “Test.”
The Lambda Function should return “abcd is not a Palindrome.”
For the hard-coded string value of “racecar”, the Lambda Function returns “racecar is a Palindrome.”
So far, the Lambda Function we created is behaving as expected.
In the next steps, I’ll show you how to trigger it and pass it a string argument using an HTTP request.
If you’ve built REST APIs from scratch before using a tool like Express.js, the snippet above should make sense to you. You first create a server, and then define all your routes one-by-one.
In this section, I’ll show you how to do the same thing using the AWS API Gateway.
Creating the API Gateway
Go to your AWS Console and press “API Gateway.”
And then press “Get Started.”
In Create new API dashboard select “New API.”
For API name, use “palindromeAPI.” For description, type in a description of your new API or just leave it blank.
Our API will be a simple one, and will only have one GET method that will be used to communicate with the Lambda Function.
In the Actions menu, select “Create Method.” A small sub-menu will appear. Go ahead and select GET, and click on the checkmark to the right.
For Integration type, select Lambda Function.
Then press “OK.”
In the GET — Method Execution screen press “Integration Request.”
For Integration type, make sure Lambda Function is selected.
For request body passthrough, select “When there are no templates defined” and then for Content-Type enter “application/json”.
In the blank space add the JSON object shown below. This JSON object defines the parameter “string” that will allow us to pass through string values to the Lambda Function using an HTTP GET request. This is similar to using req.params in Express.js.
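The JSON object from the screenshot is a body-mapping template along these lines (the key name `string` is what the function will later read as `event.string`; `$input.params(...)` is API Gateway's accessor for request parameters):

```json
{
  "string": "$input.params('string')"
}
```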
In the next steps, we’ll look at how to pass the string value to the Lambda Function, and how to access the passed value from within the function.
The API is now ready to be deployed. In the Actions menu click “Deploy API.”
For Deployment Stage select “[New Stage]”.
And for Stage name use “prod” (which is short for “production”).
The API is now deployed, and the invoke URL will be used to communicate via HTTP request with Lambda. If you recall, in addition to a callback, Lambda takes two parameters: event and context.
To send a string value to Lambda you take your function’s invoke URL and add to it ?string=someValue and then the passed value can be accessed from within the function using event.string.
Modify code by removing the hard-coded string value and replacing it with event.string as shown below.
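A reconstruction of the final handler (not the author's exact gist): the hard-coded test string is replaced with event.string, which API Gateway fills in from the ?string= query parameter via the mapping template. In Lambda this handler would be assigned to `exports.handler`.

```javascript
// Same palindrome check as before.
function isPalindrome(string) {
  const reversed = string.split('').reverse().join('');
  return string === reversed
    ? `${string} is a Palindrome`
    : `${string} is not a Palindrome`;
}

// Final handler: the input now comes from the event, not a hard-coded value.
const handler = (event, context, callback) => {
  callback(null, isPalindrome(event.string));
};
```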
Now in the browser take your function’s invoke URL and add ?string=abcd to test your function via the browser.
As you can see Lambda replies that abcd is not a Palindrome. Now do the same for racecar.
If you prefer, you can use Postman to test your new isPalindrome Lambda Function. Postman is a great tool for testing your API endpoints; you can learn more about it online.
To verify it works, here’s a Palindrome:
And here’s a non-palindrome:
Congratulations — you have just set up and deployed your own Lambda Function!
Thanks for reading!
reportmmorg · 2 years ago
How to install Maven on Linux (AWS)
1. Download Apache Maven: select the link for the apache-maven-3.6.0-bin.zip archive and extract it under /opt (for example, /opt/apache-maven-$maven_version).
2. Add Maven to the environment. You can use nano to edit the file in the terminal itself: sudo nano /etc/environment
WARNING: Do not replace your environment file with only the content below, because you may already have other environment variables that are required by other applications to function properly. This file overrides environment variables set system-wide. Notice the end of the PATH variable and the M2_HOME variable:
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/lib/jvm/jdk-10.0.2/bin:/opt/apache-maven-$maven_version/bin"
M2_HOME="/opt/apache-maven-$maven_version"
After the modification, press Ctrl + O to save the changes and Ctrl + X to exit nano.
3. Register mvn with update-alternatives:
sudo update-alternatives --install "/usr/bin/mvn" "mvn" "/opt/apache-maven-$maven_version/bin/mvn" 0
sudo update-alternatives --set mvn /opt/apache-maven-$maven_version/bin/mvn
4. Add Bash completion to mvn so that you can complete complex Maven commands by hitting Tab multiple times (credits to Juven Xu: maven-bash-completion; the completion-script URL did not survive this repost):
sudo wget <completion-script-url> --output-document /etc/bash_completion.d/mvn
5. Log out and log back in to the computer, then check the Maven version: mvn -version
If it works, hooray! You have successfully installed the latest Apache Maven on your computer.
gslin · 3 years ago
AWS Lambda functions can now have HTTPS endpoints directly
AWS announced that an AWS Lambda function can now have an HTTPS endpoint directly: “Announcing AWS Lambda Function URLs: Built-in HTTPS Endpoints for Single-Function Microservices”. As the article mentions, you previously had to put Amazon API Gateway or an ALB in front of a Lambda function: “Each function is mapped to API endpoints, methods, and resources using services such as Amazon API Gateway and Application Load Balancer.” Now a URL like verylongid.lambda-url.us-east-1.on.aws… is provided instead.
cloudemind · 4 years ago
Amazon Elastic File System (EFS) Brain dump
The latest AWS certification study notes are available at https://cloudemind.com/efs/ - Cloudemind.com
Amazon EFS
Scalable, elastic, cloud-native NFS file system
Provides a simple, scalable, fully managed NFS file system for use with AWS services and on-premises resources.
Built to scale on demand to petabytes without interrupting applications. It eliminates the need to provision storage: capacity grows automatically as needed.
EFS has 2 types: EFS standard and EFS Infrequent Access (IA).
EFS has lifecycle management (like S3 lifecycle management) to help move files into EFS IA automatically.
EFS IA is the cheaper storage class.
Shared access for thousands of Amazon EC2 instances, enabling high levels of aggregate throughput and IOPS with consistently low latencies.
Common use cases: Big data analytics, web serving and content management, application development & testing, database backups, containers storage…
EFS is a Regional service, storing data within and across Availability Zones for high availability and durability.
Amazon EC2 instances can access EFS across Availability Zones; on-premises resources can access EFS via AWS DX and AWS VPN.
EFS can support over 10 GB/s of throughput and more than 500,000 IOPS.
Using EFS Lifecycle management can reduce cost up to 92%.
Amazon EFS is compatible with all Linux-based AMIs for Amazon EC2.
You do not need to manage storage procurement and provisioning. EFS will grow and shrink automatically as you add or remove files.
AWS DataSync provides fast and secure way to sync existing file system to Amazon EFS, even from on-premise over any network connection, including AWS Direct Connect or AWS VPN.
Moving files to EFS IA by enabling Lifecycle management and choose age-off policy.
Files smaller than 128 KB remain in EFS Standard storage and will not move to EFS IA even when lifecycle management is enabled.
Speed: EFS Standard storage has single-digit millisecond latencies; EFS IA storage has double-digit millisecond latencies.
Throughput:
50 MB/s of baseline performance per TB of storage.
Bursts to 100 MB/s for file systems up to 1 TB.
With more than 1 TB stored, storage can burst at 100 MB/s per TB.
Can have Amazon EFS Provisioned Throughput to provide higher throughput.
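The baseline and burst numbers above work out as follows; this is a sketch of the Bursting Throughput arithmetic only (Provisioned Throughput, mentioned above, is configured separately):

```javascript
// Bursting mode: 50 MB/s baseline per TB stored; bursts at 100 MB/s per TB,
// with file systems up to 1 TB able to burst to 100 MB/s.
function efsThroughputMBps(storedTB) {
  const baseline = 50 * storedTB;
  const burst = Math.max(100, 100 * storedTB);
  return { baseline, burst };
}

console.log(efsThroughputMBps(1));  // { baseline: 50, burst: 100 }
console.log(efsThroughputMBps(10)); // { baseline: 500, burst: 1000 }
```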
EFS objects are stored redundantly across Availability Zones.
You can use AWS Backup for incremental backups of your EFS.
Access to EFS:
Amazon EC2 instances inside VPC: access directly
Amazon EC2 Classic instance: via ClassicLink
Amazon EC2 instances in other VPCs: using VPC Peering Connection or VPC Transit Gateway.
EFS can store petabytes of data. With EFS, you don’t need to provision capacity in advance; EFS automatically grows and shrinks as files are added to or removed from the storage.
Mount EFS via NFS v4
Access Points
EFS Access Points simplify application access to shared datasets on EFS. Access Points can work with AWS IAM to enforce a user or group, and a directory, for every file system request made through the access point.
You can create multiple access points and provide to some specific applications.
Encryption
EFS support encryption in transit and at rest.
You can configure encryption at rest when creating an EFS file system via the console, API, or CLI.
Encrypting your data has minimal effect on I/O latency and throughput.
EFS and On-premise access
To access EFS from on-premises, you need AWS DX or AWS VPN.
Standard tools like GNU Parallel let you copy data from on-premises in parallel, which speeds up the copy. https://www.gnu.org/software/parallel/
Amazon FSx Windows workload
A Windows file server for Windows-based applications such as CRM, ERP, and .NET workloads.
Backed by Native Windows file system.
Build on SSD storage.
Can be accessed by thousands of Amazon EC2 instances at the same time; also provides connectivity to on-premises data centers via AWS VPN or AWS DX.
Supports access across VPCs, Availability Zones, and Regions using VPC Peering and AWS Transit Gateway.
High level throughput & sub-millisecond latency.
Amazon FSx for Windows File Server supports SMB, Windows NTFS, Active Directory (AD) integration, and Distributed File System (DFS).
Amazon FSx also can mount to Amazon EC2 Linux based instances.
Amazon FSx for Lustre
Fully managed file system that is optimized for HPC (high performance computing), machine learning, and media processing workloads.
Hundreds of GB per second of throughput at sub-millisecond latencies.
Can be integrated with Amazon S3, so you can join long-term datasets with a high performance system. Data can be automatically copied to and from Amazon S3 to Amazon FSx for Lustre.
Amazon FSx for Lustre is POSIX-compliant, so you can use your current Linux-based applications without having to make any changes.
Supports read-after-write consistency and file locking.
Amazon FSx for Lustre can also be mounted to an Amazon EC2 instance.
Connects to on-premises via AWS DX or AWS VPN.
Data Transfer
EFS data transfers between Regions use AWS DataSync.
EFS data transfers within a Region can use an AWS Transfer Family endpoint.
Limitations
EFS file systems per Region: 1,000
Pricing
Pay for storage
Pay for read and write to files (EFS IA)
See more: https://cloudemind.com/efs/
globalmediacampaign · 4 years ago
Use Amazon ElastiCache for Redis as a near-real-time feature store
Customers often use Amazon ElastiCache for real-time transactional and analytical use cases. It provides high throughput and low latencies, while meeting a variety of business needs. Because it uses in-memory data structures, typical use cases include database and session caching, as well as leaderboards, gaming and financial trading platforms, social media, and sharing-economy apps. Incorporating ElastiCache alongside AWS Lambda and Amazon SageMaker batch processing provides an end-to-end architecture to develop, update, and consume custom-built recommendations for each of your customers.
In this post, we walk through a use case in which we set up SageMaker to develop and generate personalized product and media recommendations, trigger machine learning (ML) inference in batch mode, store the recommendations in Amazon Simple Storage Service (Amazon S3), and use Amazon ElastiCache for Redis to quickly return recommendations to app and web users. In effect, ElastiCache stores ML features processed asynchronously via batch processing. Lambda functions are the architecture glue that connects individual users to the newest recommendations while balancing cost, performance efficiency, and reliability.
Use case
In our use case, we need to develop personalized recommendations that don’t need to be updated very frequently. We can use SageMaker to develop an ML-driven set of recommendations for each customer in batch mode (every night, or every few hours), and store the individual recommendations in an S3 bucket. For customers with specific requirements, an in-memory data store provides access to data elements with sub-millisecond latencies. For our use case, we use a Lambda function to fetch key-value data when a customer logs on to the application or website. In-memory data access provides sub-millisecond latency, which allows the application to deliver relevant ML-driven recommendations without disrupting the user experience.
Architecture overview
The following diagram illustrates our architecture for accessing ElastiCache for Redis using Lambda. The architecture contains the following steps:
1. SageMaker trains custom recommendations for customer web sessions.
2. ML batch processing generates nightly recommendations.
3. User predictions are stored in Amazon S3 as a JSON file.
4. A Lambda function populates predictions from Amazon S3 to ElastiCache for Redis.
5. A second Lambda function gets predictions based on user ID and prediction rank from ElastiCache for Redis.
6. Amazon API Gateway invokes Lambda with the user ID and prediction rank.
7. The user queries API Gateway to get more recommendations by providing the user ID and prediction rank.
Prerequisites
To deploy the solution in this post, you need the following:
The AWS Command Line Interface (AWS CLI) configured. For instructions, see Installing, updating, and uninstalling the AWS CLI version 2.
The AWS Serverless Application Model (AWS SAM) CLI configured. For instructions, see Install the AWS SAM CLI.
Python 3.7 installed.
Solution deployment
To deploy the solution, you complete the following high-level steps: prepare the data using SageMaker, then access the recommendations using ElastiCache for Redis.
Prepare the data using SageMaker
For this post, we refer to Building a customized recommender system in Amazon SageMaker for instructions to train a custom recommendation engine. After running through the setup, you get a list of model predictions. With this predictions data, upload a JSON file batchpredictions.json to an S3 bucket. Copy the ARN of this bucket to use later in this post. If you want to skip the SageMaker setup, you can also download the batchpredictions.json file.
Access recommendations using ElastiCache for Redis
In this section, you create the following resources using the AWS SAM CLI:
An AWS Identity and Access Management (IAM) role to provide the required permissions for Lambda
An API Gateway to provide access to user recommendations
An ElastiCache for Redis cluster with cluster mode on to store and retrieve movie recommendations
An Amazon S3 gateway endpoint for Amazon VPC
The PutMovieRecommendations Lambda function to fetch the movie predictions from the S3 file and insert them into the cluster
The GetMovieRecommendations Lambda function to integrate with API Gateway and return recommendations based on user ID and rank
Run the following commands to deploy the application into your AWS account:
1. Run sam init --location https://github.com/aws-samples/amazon-elasticache-samples.git --no-input to download the solution code from the aws-samples GitHub repo.
2. Run cd lambda-feature-store to navigate to the code directory.
3. Run sam build to build your package.
4. Run sam deploy --guided to deploy the packaged template to your AWS account.
The following screenshot shows an example of the output.
Test your solution
To test your solution, complete the following steps:
1. Run the PutMovieRecommendations Lambda function to put movie recommendations in the Redis cluster: aws lambda invoke --function-name PutMovieRecommendations result.json
2. Copy your API’s invoke URL, enter it in a web browser, and append ?userId=1&rank=1 to your invoke URL (for example, https://12345678.execute-api.us-west-2.amazonaws.com?userId=1&rank=1).
You should receive a result like the following: The number 1 recommended movie for user 1 is 2012
Monitor the Redis cluster
By default, Amazon CloudWatch provides metrics to monitor your Redis cluster. On the CloudWatch console, choose Metrics in the navigation pane and open the ElastiCache metrics namespace to filter by your cluster name. You should see all the metrics provided for your Redis cluster.
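The GetMovieRecommendations lookup described above can be sketched as follows. A plain JavaScript Map stands in for the ElastiCache for Redis client here, and the key layout and second sample movie are assumptions; only the user 1 / rank 1 / "2012" result appears in the post. In Lambda, the handler would be assigned to `exports.handler`.

```javascript
// Stand-in for ElastiCache for Redis: one prediction object per user,
// with the rank as the field name (in Redis this could be a hash per userId).
const store = new Map([
  ['user:1', { 1: '2012', 2: 'The Dark Knight' }], // sample data, partly hypothetical
]);

// Look up a single recommendation by user ID and prediction rank.
function getRecommendation(userId, rank) {
  const predictions = store.get(`user:${userId}`);
  const movie = predictions && predictions[rank];
  return movie
    ? `The number ${rank} recommended movie for user ${userId} is ${movie}`
    : 'No recommendation found';
}

// Lambda-style handler, mirroring the ?userId=1&rank=1 query parameters
// that API Gateway forwards to the function.
const handler = (event, context, callback) => {
  const { userId, rank } = event.queryStringParameters;
  callback(null, getRecommendation(userId, rank));
};

handler({ queryStringParameters: { userId: '1', rank: '1' } }, {}, (err, res) =>
  console.log(res)); // The number 1 recommended movie for user 1 is 2012
```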
Monitoring and creating alarms on metrics can help you detect and prevent issues. For example, a Redis node can connect to a maximum of 65,000 clients at one time, so you can avoid reaching this limit by creating an alarm on the NewConnections metric:
1. In the navigation pane on the CloudWatch console, choose Alarms.
2. Choose Create Alarm.
3. Choose Select Metric and filter the metrics by NewConnections.
4. Under ElastiCache to Cache Node Metrics, select the Redis cluster you created.
5. Choose Select metric.
6. Under Graph attributes, for Statistic, choose Maximum.
7. For Period, choose 1 minute.
8. Under Conditions, define the threshold as 1000.
9. Leave the remaining settings at their default and choose Next.
10. Enter an email list to get notifications and continue through the steps to create an alarm.
As a best practice, any applications you create should reuse existing connections to avoid the extra cost of creating a new connection. Redis provides libraries to implement connection pooling, which allows you to pull from a pool of connections instead of creating a new one. For more information about monitoring, see Monitoring best practices with Amazon ElastiCache for Redis using Amazon CloudWatch.
Clean up your resources
You can now delete the resources that you created for this post. By deleting AWS resources that you’re no longer using, you prevent unnecessary charges to your AWS account. To delete the resources, delete the stack via the AWS CloudFormation console.
Conclusion
In this post, we demonstrated how ElastiCache can serve as the focal point for a custom-trained ML model to present recommendations to app and web users. We used Lambda functions to facilitate the interactions between ElastiCache for Redis and Amazon S3, as well as between the front end and a custom-built ML recommendation engine. For use cases that require a more robust set of features from a managed ML service, you may want to consider Amazon Personalize.
For more information, see Amazon Personalize Features. For more details about configuring event sources and examples, see Using AWS Lambda with other services. To receive notifications on the performance of your ElastiCache cluster, you can configure Amazon Simple Notification Service (Amazon SNS) notifications for your CloudWatch alarms. For more information about ElastiCache features, see the Amazon ElastiCache Documentation.
About the author
Kalhan Vundela is a Software Development Engineer who is passionate about identifying and developing solutions to solve customer challenges. Kalhan enjoys hiking, skiing, and cooking.
https://aws.amazon.com/blogs/database/use-amazon-elasticache-for-redis-as-a-near-real-time-feature-store/
0 notes
leftsublimemagazine · 4 years ago
Interactivity Using Amazon API Gateway
AWS API Gateway is a fully managed service that makes it easy for developers to publish, maintain, monitor, and secure APIs at any scale.
It handles all of the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management.
It has no minimum fees or startup costs and charges only for the API calls received and the amount of data transferred out.
API Gateway also acts as a proxy to the configured backend operations.
It can scale automatically to handle the volume of traffic the API receives.
API Gateway exposes HTTPS endpoints only for all the APIs created. It does not support unencrypted (HTTP) endpoints
APIs built on Amazon API Gateway can accept any payloads sent over HTTP; typical data formats include JSON, XML, query string parameters, and request headers.
AWS API Gateway can communicate with multiple backend services:
Lambda functions
AWS Step functions state machines
HTTP endpoints through Elastic Beanstalk, ELB, or EC2 servers
Non-AWS-hosted HTTP-based operations accessible via the public internet
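When the backend is a Lambda function behind API Gateway's proxy integration, the function receives the HTTP request as an event and must return a response object with statusCode, headers, and body. The following minimal Python sketch illustrates that response shape; the route logic and parameter names beyond the documented event fields are illustrative only.

```python
import json

def handler(event, context):
    """Minimal Lambda proxy-integration handler for API Gateway.

    `event` carries the HTTP method, path, query string, and body;
    the return value must follow the proxy-integration response shape.
    """
    method = event.get("httpMethod", "GET")
    if method != "GET":
        return {"statusCode": 405,
                "body": json.dumps({"error": "method not allowed"})}
    # Query string parameters arrive as a dict (or None when absent).
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }
```

API Gateway forwards whatever JSON body the function returns to the HTTP caller, with the given status code and headers.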
javascriptw3schools · 4 years ago
Photo
Working with WebSockets on AWS: https://t.co/CiNCSO0UAd (A 30 minute screencast focusing on using API Gateway as a WebSocket endpoint to talk with a serverless Lambda function.) #video
cloudemind · 4 years ago
Text
AWS Direct Connect (DX) Highlights
The latest AWS certification study article is available at https://cloudemind.com/dx/ - Cloudemind.com
AWS Direct Connect:
A private connection to transfer data between the AWS Cloud and your on-premises data center
You need a local DX partner to use DX. You connect to the DX partner via a fiber optic link. This is a physical link. Within the physical link, you can create many virtual interfaces to connect to AWS VPCs or AWS services.
AWS DX supports dedicated connections at 1 Gbps and 10 Gbps
1 Gbps or higher: work with an AWS Partner Network member or a network provider to connect to a DX location.
Less than 1 Gbps: work with an AWS Partner Network member who can create a hosted connection for you.
AWS DX supports hosted connection capacities of 1, 2, 5, and 10 Gbps.
Supports AWS Transit Gateway, in addition to configuring site-to-site VPN connections.
Common use cases:
Transfer large datasets
Hybrid cloud to satisfy regulatory requirements of private connectivity
Develop and run real-time application data feeds
Tech Specification
A virtual interface is a logical channel carved out of the physical DX link. You must create a virtual interface to begin using your DX connection. There are two types of virtual interfaces:
Virtual private interface: connects to an AWS VPC. Use one private virtual interface for each VPC, or use an AWS DX Gateway.
Virtual public interface: connects to AWS public services such as S3 and DynamoDB.
To access public resources in a remote AWS location, you have to set up a public virtual interface and enable a BGP session.
Autonomous System Numbers (ASNs) are used to identify networks that present a clearly defined external routing policy to the internet.
MTU (Maximum Transmission Unit):
Virtual private interface: 1500 or 9001 (jumbo frames)
Transit virtual interface for VPC transit gateway: 1500 or 8500 (jumbo frames)
Virtual public interface: does not support jumbo frames
A LAG (Link Aggregation Group) groups multiple DX connections into a single, managed connection.
All connections in a LAG have the same bandwidth
Maximum connections in a LAG: 4
All connections in the LAG must terminate at the same DX endpoint
All connections in a LAG work in active/active mode
Only available for dedicated 1 Gbps or 10 Gbps connections
A DX Gateway is used to connect multiple VPCs in the same or different AWS Regions via virtual private interfaces.
A DX Gateway is a globally available resource
Enables you to connect your on-premises data center to any AWS Region (except the China Regions).
You can associate up to 10 VPCs in different accounts with a DX Gateway. The accounts must be under the same AWS payer account ID.
Monitoring
CloudTrail captures all API calls for DX as events
Set up CloudWatch alarms to monitor metrics.
You can also use tags for DX.
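The CloudWatch alarm suggestion above can be sketched in Python. This is a hedged illustration: it only builds the keyword arguments for boto3's put_metric_alarm and does not call AWS. It assumes the AWS/DX namespace and the ConnectionState metric (1 = up, 0 = down) documented for Direct Connect; the connection ID is a placeholder.

```python
def dx_connection_alarm(connection_id: str) -> dict:
    """Build put_metric_alarm kwargs that fire when a Direct Connect
    connection reports down (ConnectionState below 1) for 3 minutes."""
    return {
        "AlarmName": f"dx-down-{connection_id}",
        "Namespace": "AWS/DX",
        "MetricName": "ConnectionState",
        "Dimensions": [{"Name": "ConnectionId", "Value": connection_id}],
        "Statistic": "Minimum",
        "Period": 60,
        "EvaluationPeriods": 3,
        "Threshold": 1,
        "ComparisonOperator": "LessThanThreshold",
    }

# With boto3 and AWS credentials (not executed in this sketch):
# boto3.client("cloudwatch").put_metric_alarm(**dx_connection_alarm("dxcon-abc123"))
```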
Pricing:
You pay for the network ports you use and the data transferred over the connection
Port pricing is per hour consumed, per port type.
Data transfer out is charged per GB
Data transfer in is free in all locations
Limitations
Specification | Limit | Remarks
Virtual interfaces per dedicated connection | 50 | Hard limit
Transit virtual interfaces per dedicated connection | 1 | Hard limit
Virtual interfaces per hosted connection | 1 | Hard limit
Active DX connections per Region per account | 10 |
LAGs | 50 |
Routes per BGP session on a private VIF | 100 | Hard limit
Routes per BGP session on a public VIF | 1,000 | Hard limit
Dedicated connections per LAG | 4 |
LAGs per Region | 10 |
DX Gateways per account | 200 |
Virtual private gateways per DX Gateway | 10 | Hard limit
Virtual interfaces (private or transit) per DX Gateway | 30 |
AWS Direct Connect limitations
Reference:
https://docs.aws.amazon.com/directconnect/latest/UserGuide/limits.html
See more: https://cloudemind.com/dx/
holytheoristtastemaker · 4 years ago
Link
 A key metric for measuring how well you handle system outages is the Mean Time To Recovery or MTTR. It's basically the time it takes you to restore the system to working conditions. The shorter the MTTR, the faster problems are resolved and the less impact your users would experience and hopefully the more likely they will continue to use your product!
And the first step to resolve any problem is to know that you have a problem. The Mean Time to Discovery (MTTD) measures how quickly you detect problems and you need alerts for this - and lots of them.
Exactly what alerts you need depends on your application and what metrics you are collecting. Managed services such as Lambda, SNS, and SQS report important system metrics to CloudWatch out-of-the-box. So depending on the services you leverage in your architecture, there are some common alerts you should have. And here are some alerts that I always make sure to have.
Lambda (Regional Alerts)
You might have noticed the regional metrics (I know, the dashboard says Account-level even though its own description says it's "in the AWS Region") in the Lambda dashboard page.
Regional concurrency alert
The regional ConcurrentExecutions is an important metric to alert on. Set the alert threshold to ~80% of your current regional concurrency limit (which starts at 1000 for most regions).
This way, you will be alerted when your Lambda usage is approaching your current limit so you can ask for a limit raise before functions are throttled.
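The "~80% of your regional limit" advice above can be sketched as a small helper that computes the threshold and builds the alarm parameters. This is an illustration only: it does not call AWS, and the commented boto3 lines assume the documented Lambda get_account_settings response shape.

```python
def concurrency_alarm(limit: int, fraction: float = 0.8) -> dict:
    """Alarm kwargs for the regional ConcurrentExecutions metric,
    thresholded at ~80% of the account's regional concurrency limit.

    The regional metric has no dimensions: it covers all functions.
    """
    return {
        "AlarmName": "lambda-regional-concurrency",
        "Namespace": "AWS/Lambda",
        "MetricName": "ConcurrentExecutions",
        "Statistic": "Maximum",
        "Period": 60,
        "EvaluationPeriods": 1,
        "Threshold": int(limit * fraction),
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    }

# With boto3 (not executed here), the limit can be read at deploy time:
# limit = boto3.client("lambda").get_account_settings()["AccountLimit"]["ConcurrentExecutions"]
# boto3.client("cloudwatch").put_metric_alarm(**concurrency_alarm(limit))
```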
Regional throttles alert
You may also wish to add alerts on the regional Throttles metric. But this depends on whether or not you're using Reserved Concurrency. Reserved Concurrency limits how much concurrency a function can use, and throttling excess invocations shows that it's doing its job. But that throttling can also trigger your alert with false positives.
Lambda (Per-Function Alerts)
(Note: depending on the function's trigger, some of these alerts might not be applicable.)
Error rate alert
Use CloudWatch metric math to calculate the error rate of a function - i.e. 100 * Errors / MAX([Errors, Invocations]). Align the alert threshold with your Service Level Agreements (SLAs). For example, if your SLA states that 99% of requests should succeed then set the error rate alert to 1%.
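The metric-math alarm described above can be expressed with CloudWatch's Metrics parameter to put_metric_alarm. The sketch below only builds the request kwargs (no AWS call is made); the function name and evaluation settings are placeholders, and the expression mirrors the formula in the text.

```python
def error_rate_alarm(function_name: str, threshold_pct: float = 1.0) -> dict:
    """Build put_metric_alarm kwargs computing a Lambda error rate
    via metric math: 100 * Errors / MAX([Errors, Invocations])."""
    dims = [{"Name": "FunctionName", "Value": function_name}]

    def stat(metric_name, metric_id):
        # A raw metric entry: summed over 1-minute periods, not graphed.
        return {
            "Id": metric_id,
            "MetricStat": {
                "Metric": {"Namespace": "AWS/Lambda",
                           "MetricName": metric_name,
                           "Dimensions": dims},
                "Period": 60,
                "Stat": "Sum",
            },
            "ReturnData": False,
        }

    return {
        "AlarmName": f"{function_name}-error-rate",
        "Metrics": [
            stat("Errors", "errors"),
            stat("Invocations", "invocations"),
            {"Id": "errorRate",
             "Expression": "100 * errors / MAX([errors, invocations])",
             "Label": "Error rate (%)",
             "ReturnData": True},
        ],
        "EvaluationPeriods": 5,
        "Threshold": threshold_pct,
        "ComparisonOperator": "GreaterThanThreshold",
    }

# boto3.client("cloudwatch").put_metric_alarm(**error_rate_alarm("my-fn"))
```

Setting threshold_pct to 1.0 matches the 99%-success SLA example in the text.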
Throttles alert
Unless you're using Reserved Concurrency, you probably shouldn't expect the function's invocations to be throttled. So you should have an alert against the Throttles metric.
DeadLetterErrors alert
For async functions with a dead letter queue (DLQ), you should set up an alert against the DeadLetterErrors metric. This tells you when the Lambda service is not able to forward failed events to the configured DLQ.
DestinationDeliveryFailures alert
Similar to above, for functions with Lambda Destinations, you should set up an alert against the DestinationDeliveryFailures metric. This tells you when the Lambda service is not able to forward events to the configured destination.
IteratorAge alert
For functions triggered by Kinesis or DynamoDB streams, the IteratorAge metric tells you the age of the messages they receive. When this metric starts to creep up, it's an indicator that the function is not keeping pace with the rate of new messages and is falling behind. The worst-case scenario is that you will experience data loss since data in the streams are only kept for 24 hours by default. This is why you should set up an alert against the IteratorAge metric so that you can detect and rectify the situation before it gets worse.
How Lumigo Helps
Even if you know what alerts you should have, it still takes a lot of effort to set them up. This is where 3rd-party tools like Lumigo can also add a lot of value. For example, Lumigo enables a number of built-in alerts (using sensible, industry-recognized defaults) for auto-traced functions so you don't have to manually configure them yourself. But you still have the option to disable alerts for individual functions should you choose to.
Here are a few of the alerts that Lumigo offers:
Predictions - when Lambda functions are dangerously close to resource limits (memory/duration/concurrency, etc.)
Abnormal activity detected - invocations (increase/decrease), errors, cost, etc. See below for an example.
On-demand report of misconfigured resources (missing DLQs, wrong DynamoDB throughput mode, etc.)
Threshold exceeded: memory, errors, cold starts, etc.
Lambda runtime crashes
Furthermore, Lumigo integrates with a number of popular messaging platforms so you can be alerted promptly through your favorite channel.
Oh, and Lumigo does not charge extra for alerts. You only pay for the traces that you send to Lumigo, and it has a free tier for up to 150,000 traced invocations per month. You can sign up for a free Lumigo account here.
API Gateway
By default, API Gateway aggregates metrics for all its endpoints. For example, you will have one 5xxError metric for the entire API, so when there is a spike in 5xx errors you will have no idea which endpoint was the problem.
You need to Enable Detailed CloudWatch Metrics in the stage settings of your APIs to tell API Gateway to generate method-level metrics. This adds to your CloudWatch cost but without them, you will have a hard time debugging problems that happen in production.
Once you have per-method metrics handy, you can set up alerts for individual methods.
p90/p95/p99 Latency alert
When it comes to monitoring latency, never use Average. "Average" is just a statistical value, on its own, it's almost meaningless. Until we plot the latency distribution we won't actually understand how our users are experiencing our system. For example, all these plots produce the same average but have a very different distribution of how our users experienced the system.
Seriously, always use percentiles.
So when you set up latency alerts for individual methods, keep two things in mind:
Use the Latency metric instead of IntegrationLatency. IntegrationLatency measures the response time of the integration target (e.g. Lambda) but doesn't include any overhead that API Gateway adds. When measuring API latency, you should measure the latency as close to the caller as possible.
Use the 90th, 95th, or 99th percentile. Or maybe use all 3, but set different threshold levels for them. For example, p90 Latency at 1 second, p95 Latency at 2 seconds, and p99 Latency at 5 seconds.
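The percentile-alarm suggestion above (p90 at 1 second, p95 at 2 seconds, p99 at 5 seconds) maps to CloudWatch's ExtendedStatistic parameter. The sketch below builds one set of alarm kwargs per percentile without calling AWS; the API name, stage, and thresholds are placeholders you would adjust.

```python
def latency_alarms(api_name: str, stage: str) -> list:
    """One alarm definition per percentile, with escalating thresholds
    in milliseconds, for API Gateway's Latency metric."""
    thresholds = {"p90": 1000, "p95": 2000, "p99": 5000}
    return [
        {
            "AlarmName": f"{api_name}-{stage}-latency-{pct}",
            "Namespace": "AWS/ApiGateway",
            "MetricName": "Latency",            # not IntegrationLatency
            "Dimensions": [{"Name": "ApiName", "Value": api_name},
                           {"Name": "Stage", "Value": stage}],
            "ExtendedStatistic": pct,           # percentile statistic
            "Period": 60,
            "EvaluationPeriods": 5,
            "Threshold": ms,
            "ComparisonOperator": "GreaterThanThreshold",
        }
        for pct, ms in thresholds.items()
    ]

# for alarm in latency_alarms("my-api", "prod"):
#     boto3.client("cloudwatch").put_metric_alarm(**alarm)
```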
4xx rate/5xx rate alert
When you use the Average statistic for API Gateway's 4XXError and 5XXError metrics, you get the corresponding error rate. Set up alerts against these to alert yourself when you start to see an unexpected number of errors.
SQS
When working with SQS, you should set up alerts against the ApproximateAgeOfOldestMessage metric for an SQS queue. It tells you the age of the oldest message in the queue. When this metric trends upwards, it means your SQS function is not able to keep pace with the rate of new messages.
Step Functions
There are a number of metrics that you should alert on:
ExecutionThrottled
ExecutionsAborted
ExecutionsFailed
ExecutionsTimedOut
They represent the various ways state machine executions would fail. And since Step Functions are often used to model business-critical workflows, I would usually set the alert threshold to 1.
globalmediacampaign · 4 years ago
Text
Creating a REST API for Amazon DocumentDB (with MongoDB compatibility) with Amazon API Gateway and AWS Lambda
Representational state transfer (REST) APIs are a common architectural style for distributed systems. They benefit from being stateless and therefore enable efficient scaling as workloads increase. These convenient, yet still powerful, APIs are often paired with database systems to give programmatic access to data managed in a database. One request that customers have expressed is to have a REST API for access to their Amazon DocumentDB (with MongoDB compatibility) database, which is what this post discusses.

Amazon DocumentDB is a fast, scalable, highly available, and fully managed document database service that supports MongoDB workloads. As a document database, Amazon DocumentDB makes it easy to store, query, and index JSON data. The primary mechanism that users use to interact with Amazon DocumentDB is via the MongoDB drivers, which provide a stateful, session-based API. Providing simple HTTP-based access to Amazon DocumentDB allows for the addition of document data to webpages, other services and microservices, and other applications needing database access.

In this post, I demonstrate how to build a REST API for read and write access to Amazon DocumentDB by using Amazon API Gateway, AWS Lambda, and AWS Secrets Manager.

Solution overview

The REST API in this post can perform insert, update, delete, and read operations against Amazon DocumentDB collections. Access can be restricted to particular collections, all collections in a particular database, or all collections in all databases. To accomplish this goal, we use the following services:

Amazon DocumentDB – Stores our data
API Gateway – Exposes an HTTP REST API
Lambda – Connects the API Gateway service to the database
Secrets Manager – Stores the database credentials for use by our Lambda function

A discussion around best practices for securing the API endpoints is beyond the scope of this post, but for more information, see Controlling and managing access to a REST API in API Gateway.
For this post, a simple username-password authentication is presented as another Lambda function. The following diagram illustrates the architecture of this solution.

I use the AWS Serverless Application Model (AWS SAM) to deploy this stack because it's the preferred approach when developing serverless applications such as this one. The template and code are available in the GitHub repo. In terms of functionality for the API, insert, update, delete, and find operations against data stored in the database are exposed.

Storing our data with Amazon DocumentDB (with MongoDB compatibility)

Amazon DocumentDB is a fast, scalable, highly available, and fully managed document database service that supports MongoDB workloads. To read more about the architecture, value proposition, and attributes of Amazon DocumentDB, see 12 things you should know about Amazon DocumentDB (with MongoDB compatibility). This post begins with an already existing Amazon DocumentDB cluster that we expose via a REST API to applications. If you don't already have an Amazon DocumentDB cluster, see Getting Started with Amazon DocumentDB (with MongoDB compatibility).

Storing database credentials with Secrets Manager

As is common with applications deployed in AWS that connect to Amazon DocumentDB, including Lambda functions, we use Secrets Manager to store credentials to connect to Amazon DocumentDB. This way, you can grant permissions to access those credentials to a role, such as the role used to run a Lambda function. After the application, or Lambda function, retrieves the credentials from Secrets Manager, it can use those credentials to make a database connection to Amazon DocumentDB.

Exposing a REST API with API Gateway

For this post, we use API Gateway to expose a REST API to the world. API Gateway supports several APIs, REST being just one of the options, but the one on which I focus in this post. API Gateway can define REST endpoints and define operations on those endpoints.
For this discussion, an endpoint per collection is used and multiple operations on that endpoint are defined, specifically:

GET – Corresponds to a read or find database operation
PUT or POST – Corresponds to an insert database operation
PATCH – Corresponds to an update database operation
DELETE – Corresponds to a delete database operation

The parameters that are sent with those operations mimic the parameters in the corresponding database operation. For example, the GET operation mimics the find() operation in the MongoDB API. In the MongoDB API, you can specify five parameters to the find() operation:

A filter to identify documents of interest, specified as a JSON object
A projection to specify what fields should be returned, specified as a JSON object
A sort to specify the order of the results, specified as a JSON object
A limit to specify the number of results to return, specified as an integer
A skip amount to specify how many results to skip before returning results, specified as an integer

Similarly, the PATCH operation mimics the update() operation in the MongoDB API, which takes two parameters:

A filter to identify which documents should be updated, specified as a JSON object
An update command to indicate what update should be applied, specified as a JSON object

Updates include setting or unsetting a field value, incrementing or decrementing a field value, and so on. For more information about the MongoDB API, see the MongoDB documentation.

We use the path of the REST endpoint to specify the database and collection against which to operate. In API Gateway, you can specify the exact path, thereby specifying a particular database and collection, and the client calling the REST endpoint can't change that. For example, API Gateway can expose a URL like http:///docdb/mydb/mycollection that corresponds to accessing the mycollection collection in the mydb database.
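The path-to-collection mapping and the find() parameter translation described above can be sketched in Python. This is an illustrative sketch, not the repository's actual app.py: the path layout follows the post's convention, and the parameter names mirror the find() parameters listed above.

```python
import json

def parse_path(path: str):
    """Split a request path like '/docdb/mydb/mycollection'
    into (database, collection)."""
    parts = [p for p in path.split("/") if p]
    if len(parts) != 3 or parts[0] != "docdb":
        raise ValueError(f"unexpected path: {path}")
    return parts[1], parts[2]

def build_find_args(params: dict):
    """Translate query-string parameters into pymongo find() arguments."""
    params = params or {}
    args = {
        "filter": json.loads(params.get("filter", "{}")),
        "projection": (json.loads(params["projection"])
                       if "projection" in params else None),
    }
    if "sort" in params:
        # pymongo expects a list of (field, direction) pairs.
        args["sort"] = list(json.loads(params["sort"]).items())
    if "limit" in params:
        args["limit"] = int(params["limit"])
    if "skip" in params:
        args["skip"] = int(params["skip"])
    return args

# With pymongo (not executed in this sketch):
# db, coll = parse_path(event["path"])
# client[db][coll].find(**build_find_args(event["queryStringParameters"]))
```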
In addition, in API Gateway, you can allow for path variables to be added to a particular REST endpoint. This allows you to expose an endpoint to a particular database, but allows the user to specify the collection. This enables access to all collections in a particular database. For example, API Gateway can expose a URL like http:///docdb/mydb, which allows the caller to append a particular collection name to the URL, such as http:///docdb/mydb/another_collection. This allows access to any collection, including another_collection, inside the mydb database.

You can extend this idea further and expose a REST endpoint and allow the client to specify both the database and the collection as path variables, thereby exposing all collections in all databases. For example, API Gateway can expose a URL like http:///docdb/, which allows the caller to append both the database and the collection names to the URL, such as http:///docdb/other_db/other_collection. This allows access to any collection, including other_collection, inside any database, including the other_db database.

Enforcing API Gateway security

From a security standpoint, as a best practice, access to these endpoints should be restricted via the security mechanisms in API Gateway. For more information, see Controlling and managing access to a REST API in API Gateway. API Gateway has several available authorization schemes, including Amazon Cognito, AWS Identity and Access Management (IAM), and Lambda functions. For the purposes of this post, I use a simple Lambda authorizer that compares the supplied username and password with static values (stored as environment variables in the Lambda function). I can choose to protect some or all of the API endpoints, and even protect the different endpoints differently (for example, with different username/password pairs, or different authorizers), but that is beyond the scope of this post.
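A Lambda authorizer like the one described above can be sketched as follows. This is not the repository's auth.py: the header names are hypothetical, and the sketch assumes a REQUEST-type authorizer that returns an IAM policy document allowing or denying execute-api:Invoke.

```python
import os

def authorizer_handler(event, context):
    """Compare supplied credentials with values held in environment
    variables and return an Allow/Deny IAM policy for the method."""
    headers = event.get("headers") or {}
    supplied_user = headers.get("x-api-user")       # hypothetical header
    supplied_pwd = headers.get("x-api-password")    # hypothetical header
    allowed = (supplied_user == os.environ.get("API_USERNAME")
               and supplied_pwd == os.environ.get("API_PASSWORD"))
    return {
        "principalId": supplied_user or "anonymous",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": "Allow" if allowed else "Deny",
                "Resource": event.get("methodArn", "*"),
            }],
        },
    }
```

API Gateway caches the returned policy for a configurable TTL, so the authorizer is not invoked on every request.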
Connecting API Gateway to the database with Lambda

The component doing the heavy lifting in this solution is Lambda. We use a single, simple Lambda function to perform all the various operations exposed: GET (for find), PUT or POST (for insert), PATCH (for update), and DELETE (for delete). When API Gateway calls the Lambda function, the incoming event contains several pieces of metadata, including the REST method that was invoked. This function uses that field to determine which subfunction to call and return the results. Additionally, the API path that was invoked is also sent as part of the event. This path can be used to determine the database and collection that are to be queried.

It's worth talking about a few good practices for implementing this Lambda function.

Using Lambda layers

Lambda has a mechanism by which you can package up some commonly used libraries or packages so that you don't need to package them up with each Lambda function that you deploy. For example, I do a lot of work with Lambda functions in Python connecting to Amazon DocumentDB, so many of my functions need the MongoDB drivers to make the connection. Because I'm using the best practice configuration, which uses SSL to communicate with the cluster, my functions also need the certificate file for connecting to Amazon DocumentDB.

You can package up these dependencies into a .zip file and create a Lambda layer. Then, when you create a Lambda function, like the CRUD operations function for this REST API, you can add the layer to your function to bring in those dependencies. This greatly simplifies deployment of Lambda functions. You can now easily compose Lambda functions directly on the AWS Management Console, because the dependencies are packaged up already. Additionally, if your Lambda function is simple, you can include the source code for our function directly in an AWS CloudFormation template, simplifying automated deployments as well.
For this post, I create a single Lambda layer that includes the MongoDB Python driver and the Amazon DocumentDB certificate file. When added to a Python Lambda function, the resources are available under the /opt directory.

Connecting to Amazon DocumentDB

If you make the connection to Amazon DocumentDB inside the handler for the Lambda function, you have to go through the process of connecting to the database on every call to the Lambda function. This is a wasteful and resource-intensive approach to connections. As with other database services accessed from Lambda functions, the best practice is to make the connection outside of the handler itself. When AWS reuses the environment for another Lambda invocation, the connection is already made. For this post, I store the connection in a global variable and use that connection, unless it's uninitialized, in which case I call a database connection subfunction.

Enforcing Lambda security

As stated earlier, we store our credentials for our Amazon DocumentDB cluster in Secrets Manager. I grant the permission to retrieve this secret to the role that the Lambda function uses. It's a simple operation to add a subfunction to the Lambda function that retrieves those credentials, which you can then use to connect to the database. Using this pattern is a nice way to not expose usernames or passwords in code or in the configuration of the Lambda function.

Creating a REST API in an Amazon DocumentDB cluster

Now, let's walk through the steps to create a REST API for your Amazon DocumentDB cluster. For this example, I assume you currently have an Amazon DocumentDB cluster, and know the username and password for a user that can query collections in that cluster. If you don't currently have an Amazon DocumentDB cluster up and running, use the following CloudFormation template to launch one. To deploy this REST API, I use AWS SAM and the following repository.
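Before moving to the deployment steps, the connection-reuse and Secrets Manager patterns just described can be sketched in Python. This is a hedged illustration, not the repository's code: the secret's field names (username/password/host/port) and the certificate path under /opt are assumptions, and the boto3/pymongo calls are shown as comments because they require AWS access.

```python
import json

_client = None  # module-level cache; survives warm Lambda invocations

def build_uri(secret: dict) -> str:
    """Build a DocumentDB connection URI from a Secrets Manager secret.
    Field names here are assumptions about the secret's JSON layout."""
    return (f"mongodb://{secret['username']}:{secret['password']}"
            f"@{secret['host']}:{secret.get('port', 27017)}"
            "/?ssl=true&ssl_ca_certs=/opt/rds-combined-ca-bundle.pem"
            "&retryWrites=false")

def get_db_client(secret_id: str):
    """Connect once per execution environment and reuse thereafter."""
    global _client
    if _client is None:
        # Not executed in this sketch -- requires boto3/pymongo and AWS access:
        # raw = boto3.client("secretsmanager").get_secret_value(SecretId=secret_id)
        # secret = json.loads(raw["SecretString"])
        # _client = pymongo.MongoClient(build_uri(secret))
        pass
    return _client
```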
The important files in this repository are the template file, template.yaml, and the Lambda source code located in the docdb_rest folder, specifically app.py and auth.py.

Clone the repository with the template and code:

git clone https://github.com/aws-samples/docdb-rest.git

You need to build the .zip file for the Lambda layer that holds the database driver and certificate authority file to connect to Amazon DocumentDB. To do this, run the following command:

make

Now you're ready to build the serverless application via the sam command:

sam build

When that is complete, deploy the serverless application:

sam deploy --capabilities CAPABILITY_NAMED_IAM --guided

You need to answer several questions from the command line:

The stack name
Which Region to deploy
A prefix to be prepended to resources created by this stack (for easy identification on the console)
The identifier for the Amazon DocumentDB cluster
The username for accessing the Amazon DocumentDB cluster
The password for accessing the Amazon DocumentDB cluster
A VPC subnet with networking access to the Amazon DocumentDB cluster
A security group with access to the Amazon DocumentDB cluster
The username to use to protect the REST API
The password to use to protect the REST API

Optionally, choose to confirm changes before deploying. You need to allow the AWS SAM CLI to create an IAM role. Optionally, choose to save the arguments to a configuration file, and choose a configuration file name and a configuration environment.
Configuring SAM deploy
======================
Looking for config file [samconfig.toml] : Not found

Setting default arguments for 'sam deploy'
=========================================
Stack Name [sam-app]: docdb-rest
AWS Region [us-east-1]: us-east-2
Parameter Prefix []: docdbrest
Parameter DocDBIdentifier []: docdb-cluster
Parameter DocDBUsername []: dbuser
Parameter DocDBPassword:
Parameter DocDBVPCSubnet []: subnet-XXXXXXXXXXXXXXXXX
Parameter DocDBSecurityGroup []: sg-XXXXXXXXXXXXXXXXX
Parameter APIUsername []: apiuser
Parameter APIPassword:
#Shows you resources changes to be deployed and require a 'Y' to initiate deploy
Confirm changes before deploy [Y/n]: Y
#SAM needs permission to be able to create roles to connect to the resources in your template
Allow SAM CLI IAM role creation [Y/n]: Y
Save arguments to configuration file [Y/n]: Y
SAM configuration file [samconfig.toml]:
SAM configuration environment [default]:

When prompted, choose to deploy this change set. When the stack has finished deploying, you see a list of the resources and a notice:

Successfully created/updated stack - docdb-rest in us-east-2

Make note of the APIRoot output printed at the successful deployment. You use this to test the API in the next step.
Testing the API

Now you can test our API by calling your REST endpoints via curl. To do so, you need to get the URL, which is the APIRoot output from the deployment. You can also retrieve this information from API Gateway.

Let's set environment variables to hold the root URL for the API, as well as the username and password to access the API:

export URLBASE=
export APIUSER=
export APIPWD=

Now we can issue some HTTP commands.

Insert some data via PUT:

curl -X PUT -H "Content-Type: application/json" -d '{"name":"brian", "rating": 5}' https://$APIUSER:$APIPWD@$URLBASE/docdb/blog/test
curl -X PUT -H "Content-Type: application/json" -d '{"name":"joe", "rating": 5}' https://$APIUSER:$APIPWD@$URLBASE/docdb/blog/test

Insert some data via POST:

curl -X POST -H "Content-Type: application/json" -d '{"name":"jason", "rating": 3}' https://$APIUSER:$APIPWD@$URLBASE/docdb/blog/test

Retrieve all the data via GET:

curl -G https://$APIUSER:$APIPWD@$URLBASE/docdb/blog/test

Retrieve just the joe document via GET:

curl -G --data-urlencode 'filter={"name": "joe"}' https://$APIUSER:$APIPWD@$URLBASE/docdb/blog/test

Retrieve just the joe document but only project the name field via GET:

curl -G --data-urlencode 'filter={"name": "joe"}' --data-urlencode 'projection={"_id": 0, "name": 1}' https://$APIUSER:$APIPWD@$URLBASE/docdb/blog/test

Update the jason document via PATCH:

curl -X PATCH -H "Content-Type: application/json" -d '{"filter": {"name": "jason"},"update": {"$set": {"rating": 4}}}' https://$APIUSER:$APIPWD@$URLBASE/docdb/blog/test

Delete the jason document via DELETE:

curl -X DELETE -H "Content-Type: application/json" -d '{"filter": {"name": "jason"}}' https://$APIUSER:$APIPWD@$URLBASE/docdb/blog/test

See the README file in the repository for more information on the REST API syntax implemented in the Lambda function.

Limiting access to certain collections

These commands were issued against the generic endpoint that interprets the database and collection from the URL. The API that was implemented has two other endpoints: one that specifies a fixed database (demodb) but allows access to all collections in that database, and one that specifies a specific collection (democollection) in a specific database (demodb). If only the endpoint to a specific collection is exposed, then only that collection can be accessed via REST commands.
This allows you to grant broad or narrow access to the databases and collections as suits your needs.

Cleaning up

You can delete the resources created in this post by deleting the stack via the AWS CloudFormation console or the AWS Command Line Interface (AWS CLI). Your Amazon DocumentDB cluster is not deleted by this operation.

Conclusion

In this post, I demonstrated how to create a REST API to gain read and write access to collections in an Amazon DocumentDB database. Amazon DocumentDB access is only available within an Amazon VPC, but you can access this REST API outside of the VPC. I also showed how to create a single Lambda function that serves as the bridge between the API Gateway REST API and the Amazon DocumentDB database, and supports insert, update, delete, and read operations. Finally, I showed how to use Lambda layers to simplify Lambda function development, and how to safely store database credentials in Secrets Manager for use by our Lambda function.

For more information about recent launches and blog posts, see Amazon DocumentDB (with MongoDB compatibility) resources.

About the Author

Brian Hess is a Senior Solution Architect Specialist for Amazon DocumentDB (with MongoDB compatibility) at AWS. He has been in the data and analytics space for over 20 years and has extensive experience with relational and NoSQL databases.

https://aws.amazon.com/blogs/database/creating-a-rest-api-for-amazon-documentdb-with-mongodb-compatibility-with-amazon-api-gateway-and-aws-lambda/