#elastic beanstalk
sa-pawar98 · 1 year ago
Text
Unleash the Power of AWS Elastic Beanstalk.
Are you a developer looking for a hassle-free way to deploy and manage your web applications? Look no further than AWS Elastic Beanstalk. This fully managed platform-as-a-service (PaaS) offering from Amazon Web Services takes the complexities out of infrastructure management, allowing you to focus on what matters most - writing code and delivering exceptional user experiences.
What is AWS Elastic Beanstalk? AWS Elastic Beanstalk is a cloud service that handles the deployment, scaling, monitoring, and maintenance of your applications. It supports various programming languages and frameworks, making it versatile and adaptable to your development needs.
Streamlined Deployment Process Gone are the days of manual configuration and setup. With Elastic Beanstalk, you can deploy your application with just a few clicks. It automatically provisions resources, sets up load balancing, and monitors application health. This means you can roll out updates faster and reduce the risk of errors.
Seamless Scaling Whether you're experiencing a sudden surge in traffic or anticipating growth, Elastic Beanstalk has got your back. Its auto-scaling feature adjusts the number of instances based on demand, ensuring optimal performance without manual intervention. Say goodbye to worries about capacity planning!
Monitoring and Troubleshooting Elastic Beanstalk provides comprehensive health and performance metrics for your applications. You can easily track resource usage, diagnose issues, and fine-tune your app for peak efficiency. With integrated logging and monitoring tools, you're always in control.
Integration with AWS Services Harness the full power of the AWS ecosystem. Elastic Beanstalk seamlessly integrates with other services like Amazon RDS for databases, Amazon S3 for storage, and Amazon CloudWatch for advanced monitoring. This level of integration opens up a world of possibilities for your application architecture.
Focus on Innovation By abstracting away infrastructure concerns, Elastic Beanstalk lets you channel your energy into innovating and improving your application. You can experiment with new features, optimize user experiences, and respond swiftly to market demands.
Embrace the power of Elastic Beanstalk and elevate your development experience to new heights. Start your journey today and experience the difference for yourself.
jareenaxjr · 4 years ago
Text
Deploy Simple Spring Boot Web Application in AWS Elastic Beanstalk
This document describes how to deploy a simple Spring Boot web application on AWS Elastic Beanstalk. You will need a Spring Boot project and an AWS account.
Read More...
anjumxjr · 4 years ago
Link
This document describes how to deploy a simple Spring Boot web application on AWS Elastic Beanstalk. You will need a Spring Boot project and an AWS account.
kmharish3009 · 5 years ago
Text
HTTPS for Single-Instance Python 3.7 AWS Elastic Beanstalk Environment without Custom Domain or Load Balancer
This post describes how to set up HTTPS using a self-signed certificate for a Python 3.7 webapp deployed to a single-instance AWS Elastic Beanstalk environment, without using a custom domain or a load balancer. This is useful in dev/test scenarios where HTTPS is required.
Step 1 — Create Beanstalk App
Create a Beanstalk app as shown below:
Tumblr media
Step 2 — Get Application Code
Download &…
laravelvuejs · 5 years ago
Text
Deploying a Laravel App via Elastic Beanstalk | Amazon Web Services BASICS
Elastic Beanstalk is a great service for getting your web application onto the web. This video shows how you can easily use it to deploy a Laravel application which even uses a database!
Want to learn AWS Serverless apps? Dive into my complete introduction for only $13: https://www.udemy.com/aws-serverless-a-complete-introduction/?couponCode=YOUTUBE_1
The sample project: https://github.com/acad…
claydesk · 6 years ago
Video
Mastering Jenkins CI with Amazon AWS: Build DevOps Pipeline Course Promo
mayppong · 7 years ago
Text
My First Docker-based App on BeanStalk And The Port Mapping Battle
The past few days, I've been working on deploying a simple proxy server to AWS Elastic Beanstalk using the Docker platform. I had Terraform set up to help me create all the resources I needed, and I was able to reach a healthy state according to the Beanstalk environment dashboard. However, when I tried to visit the status page manually from the browser through its load balancer URL, it timed out.
I want to walk you through my very frustrating experience debugging this issue, as I believe I learned a lot here. My debugging process went like this:
1. Check that the environment has access to the Docker image registry, which is on AWS ECR anyway. The CloudWatch log, docker-events.log, showed that it was able to pull the image and create a container.
2. Check that the container started correctly by heading to stdouterr.log. I could see my application's output telling me it had started and was listening on port 3000 as configured.
3. At this point, I went to the EC2 instance that was created to run the application and modified its security group so I could attempt to connect to the host machine directly, to see if it worked without the load balancer. And I was able to get to the health status page of the application.
At this point, I thought for sure something was wrong with the load balancer settings: either the connection from outside to the load balancer, or from the load balancer to the application. I honestly wasn't sure how Beanstalk's health check actually works either. Is it pinging my app through the load balancer? It was telling me the instance was healthy ...
Eight hours later, after mucking around with different roles, permissions, and security groups, I couldn't figure it out and knew I needed a second pair of eyes. While walking my friend through my debugging process, he caught that I was visiting the EC2 instance's port 80 to get to the working status page. That was all I needed: the missing piece I had overlooked. I immediately realized the port mapping from container to host was wrong. Under my earlier assumption that port 3000 in the container was mapped to port 3000 on the host, I should not have been able to reach the status page through port 80 successfully.
Based on the documentation, it seems that Beanstalk takes the first port declared in your Dockerrun.aws.json file and automatically maps it to the host's port 80.
{
  ...,
  "Ports": [
    { "ContainerPort": "3000" }
  ]
}
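To make the rule concrete, here is a small Python sketch of the mapping behavior described above. It is illustrative only, modeling just the one rule from this post rather than the full Dockerrun.aws.json spec:

```python
import json

def host_port_for(dockerrun_text: str) -> dict:
    """Return the container-port -> host-port mapping Beanstalk applies.

    Per the behavior described above (an assumption drawn from this post,
    not the complete spec): only the FIRST entry in "Ports" is mapped,
    and it always lands on host port 80.
    """
    config = json.loads(dockerrun_text)
    ports = config.get("Ports", [])
    if not ports:
        return {}
    first = int(ports[0]["ContainerPort"])
    return {first: 80}

dockerrun = '{"AWSEBDockerrunVersion": "1", "Ports": [{"ContainerPort": "3000"}]}'
print(host_port_for(dockerrun))  # {3000: 80}
```

So even though the container listens on 3000, the host (and therefore the load balancer's instance port) should be addressed on 80.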
This means that in Terraform, when declaring the load balancer, the instance port should not be set to match your application's port inside the container; instead, it should be set to port 80.
# THIS WILL NOT WORK
setting {
  namespace = "aws:elb:listener"
  name      = "InstancePort"
  value     = "3000"
}
Since the default for this setting is port 80 anyway, I just removed the block from the Terraform file altogether, destroyed and reapplied everything from scratch. This time, everything worked as expected.
The takeaways from this are:
1. Dockerrun.aws.json maps the first declared port to the host's port 80, so the load balancer doesn't need any configuration beyond the default. The docs don't make this very clear in their wording.
2. Don't be lazy and skim the docs. That was one of the first pages I saw going through this exercise.
3. The health check system doesn't go through the load balancer. After thinking about it, this makes sense: you want to detect the health of each individual instance, and going through the load balancer means you can't tell which instance is actually healthy, because that depends on routing logic.
4. Get another pair of eyes sooner if possible. I was working on my own for the most part, so I couldn't. But if one had been available, this could have been wrapped up much sooner and saved me lots of heartache.
If someone from Amazon sees this, please update the documentation to be more explicit, with a code example of how Dockerrun.aws.json maps the port.
syndicode · 7 years ago
Text
How to deploy a Rails app on AWS Elastic Beanstalk
Last time we showed you how to deploy Elixir on Elastic Beanstalk. But as an agency with Rails among our favorite frameworks, Syndicode would like to share a tutorial on how to deploy a Rails app on AWS Elastic Beanstalk. By the way, we're looking for a Rails developer!  A little...
tech-and-software · 8 years ago
Text
Elastic beanstalk - nginx proxy error
I used Elastic Beanstalk from AWS to run the API of a recent project. The application was Node.js, one of the environments Beanstalk supports out of the box. The advantage is that it configures the EC2 instances, auto-scaling group, and load balancer, deploys your app, and monitors the fleet automatically, so you don't have to muck around setting all that up.
Beanstalk configures nginx to run in front of nodejs, where it can serve up static files or proxy through to nodejs for dynamic content. The nodejs app is zipped up and uploaded to S3, where Beanstalk picks it up and installs it.
However, I was getting a proxy error when I deployed, which initially stumped me, and nothing specific turned up on Stack Overflow.
I figured it out eventually: I had generated the Node.js app using express-generator, which places app.js at the top level of the app. Beanstalk was running that, as I had left the Node command blank in the Beanstalk application configuration. app.js exits immediately, and Beanstalk was repeatedly re-running it. Meanwhile, nginx couldn't proxy incoming HTTP requests through to the Node.js server, since it wasn't running, so it returned the proxy error to clients.
The app is meant to be started with the bin/www script that express-generator creates, via npm start. So the fix was simply to change the Node command to npm start in the Beanstalk config. I noticed afterwards that the Beanstalk Node.js documentation states that if you leave this blank, the default order is to try app.js, then server.js, then npm start. As app.js was there, but exits immediately, Beanstalk was getting stuck on it.
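That fallback order can be sketched as a tiny selector. This is a rough Python model of the documented behavior, not Beanstalk's actual code, and the file names are just the ones from this post:

```python
def node_start_command(app_files, configured_command=""):
    """Pick the start command the way the Beanstalk Node.js docs describe:
    an explicitly configured command wins; otherwise try app.js, then
    server.js, then fall back to `npm start`."""
    if configured_command:
        return configured_command
    for candidate in ("app.js", "server.js"):
        if candidate in app_files:
            return f"node {candidate}"
    return "npm start"

# The bug described above: app.js exists, so it gets run even though it exits.
print(node_start_command({"app.js", "bin/www", "package.json"}))  # node app.js

# The fix: configure the command explicitly.
print(node_start_command({"app.js", "bin/www"}, configured_command="npm start"))  # npm start
```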
Here’s the setting in the software configuration section of the AWS console for Beanstalk:
Tumblr media
gorkamu · 8 years ago
Text
Platform as a Service: Heroku vs Amazon Web Services
Make no mistake: Amazon Web Services (AWS from here on) and Heroku are not both platforms as a service. Only one of them is, but both will let us deploy, monitor, and scale web and mobile applications, in completely different ways. But before we go any further, let's gather a bit of background.
What the hell does Platform as a Service mean?
According to Wikipedia…
bravehartk2 · 8 years ago
Text
AWS reinitialize Elastic Beanstalk environment
To initialize an Amazon Web Services Elastic Beanstalk environment, you generally use the command:
$> eb init
But sometimes you need to reinitialize the environment, because you made a mistake or want to change the programming language or AWS region of the connected environment.
In these cases you can reinitialize the Elastic Beanstalk environment with the -i parameter:
$> eb init -i
The -i parameter…
markessnotes-blog · 8 years ago
Link
Check those links
Awesome Docker
Hello Docker Workshop
Building a microservice with Node.js and Docker
Why Docker
Docker Weekly and archives
Codeship Blog
kmharish3009 · 5 years ago
Text
HTTPS for Single-Instance Node.js AWS Elastic Beanstalk Environment without Custom Domain or Load Balancer
This post describes how to set up HTTPS using a self-signed certificate for a Node.js 12 webapp deployed to a single-instance AWS Elastic Beanstalk environment, without using a custom domain or a load balancer. This is useful in dev/test scenarios where HTTPS is required.
Step 1 — Create Beanstalk App
Create a Beanstalk app as shown below:
Tumblr media
Step 2 — Get Application Code
Download &…
jasminenoack · 8 years ago
Text
Deploying Django and Postgres to Elastic Beanstalk for People Who Don’t Know How to Read AWS Tutorials
So I deployed an app last week on Heroku with Django, mostly because I couldn't figure out how to get it working on Amazon. The solution to this was, of course, practice: lots of practice. So I deployed apps over and over again to Elastic Beanstalk until I finally got one working. Here is what I did to get a Django app running on Elastic Beanstalk, as someone who doesn't just know what to do on AWS.
Phase 1(The database):
So I wanted to use a Postgres database. While you can create one as part of the EB app, I didn't want to do that, because I wanted the database to persist independently of the app. So I went into RDS:
Tumblr media
So I chose to create a Postgres database, with dev settings.
Tumblr media
You then need to create the database settings. Make note of the username and password as you will need them later.
Tumblr media
You will then be asked to configure security. For now, select the defaults; we will edit these later.
Tumblr media
Now launch the database. The creation will take some time so let that run while you move on.
Phase 2(The Django App):
Now for the fun part: building the Django app. For now we are just going to build the default app; we can come back later and create a more exciting one. So open your terminal and navigate to wherever you want to put the app. I'm using Desktop, but you can use anywhere. Create a wrapper directory (we are going to use it to hold the venv), and create the virtual environment. I prefer to use Python 3, but you can use Python 2 if you'd like. Last, let's add Django to the environment.
Shell
$ # make the directory for the project
$ mkdir django-test
$ # cd into the directory
$ cd django-test/
$ # create the virtual environment
$ virtualenv -p python3 venv
Running virtualenv with interpreter /Library/Frameworks/Python.framework/Versions/3.5/bin/python3
Using base prefix '/Library/Frameworks/Python.framework/Versions/3.5'
New python executable in /Users/jasminenoack/Desktop/django-test/venv/bin/python3
Also creating executable in /Users/jasminenoack/Desktop/django-test/venv/bin/python
Installing setuptools, pip, wheel...done.
$ # activate the virtual environment
$ . venv/bin/activate
(venv) $ # install Django
(venv) $ pip install Django
Collecting Django
  Downloading Django-1.10.1-py2.py3-none-any.whl (6.8MB)
    100% |████████████████████████████████| 6.8MB 148kB/s
Installing collected packages: Django
Successfully installed Django-1.10.1
Great, now we can create a Django project. cd into the directory and create the git repository and the requirements file.
Shell
(venv) $ # Create a Django project
(venv) $ django-admin startproject djangoTest
(venv) $ # cd into the project
(venv) $ cd djangoTest/
(venv) $ # put the contents of the virtual environment into the requirements file
(venv) $ pip freeze > requirements.txt
(venv) $ # create a git repository
(venv) $ git init
Initialized empty Git repository in /Users/jasminenoack/Desktop/django-test/djangoTest/.git/
(venv) $ # add files that we don't want to track to the gitignore
(venv) $ echo "*.pyc" >> .gitignore
(venv) $ echo ".DS_Store" >> .gitignore
(venv) $ echo "venv" >> .gitignore
(venv) $ echo "__pycache__" >> .gitignore
(venv) $ echo "db.*" >> .gitignore
(venv) $ # add the files to git and commit
(venv) $ git add .
(venv) $ git commit -m "create django test app"
[master (root-commit) 61f7033] create django test app
 7 files changed, 185 insertions(+)
 create mode 100644 .gitignore
 create mode 100644 djangoTest/__init__.py
 create mode 100644 djangoTest/settings.py
 create mode 100644 djangoTest/urls.py
 create mode 100644 djangoTest/wsgi.py
 create mode 100755 manage.py
 create mode 100644 requirements.txt
So now we have a working Django project.
Phase 3(Elastic Beanstalk):
Now we need to set up the application on AWS. For this you will need the Elastic Beanstalk Command Line Interface; you will also need a local .aws/config. First we will run `eb init`.
Select the region (use the same one as your database).
Create a new Application
Select your Python version
Set up ssh for your instances
Shell
(venv) $ eb init
Select a default region
1) us-east-1 : US East (N. Virginia)
2) us-west-1 : US West (N. California)
3) us-west-2 : US West (Oregon)
4) eu-west-1 : EU (Ireland)
5) eu-central-1 : EU (Frankfurt)
6) ap-south-1 : Asia Pacific (Mumbai)
7) ap-southeast-1 : Asia Pacific (Singapore)
8) ap-southeast-2 : Asia Pacific (Sydney)
9) ap-northeast-1 : Asia Pacific (Tokyo)
10) ap-northeast-2 : Asia Pacific (Seoul)
11) sa-east-1 : South America (Sao Paulo)
12) cn-north-1 : China (Beijing)
(default is 3): 1

Select an application to use
1) rdsExample
2) [ Create new Application ]
(default is 2): 2

Enter Application Name
(default is "djangoTest"):
Application djangoTest has been created.

It appears you are using Python. Is this correct? (y/n): y

Select a platform version.
1) Python 3.4
2) Python
3) Python 2.7
4) Python 3.4 (Preconfigured - Docker)
(default is 1): 1
Do you want to set up SSH for your instances? (y/n): y

Select a keypair.
1) aws-eb
2) [ Create new KeyPair ]
(default is 2): 1
So far we have deployed nothing; all we have done is create the application on AWS.
Tumblr media
Now we can create a specific Elastic Beanstalk environment to run our app using `eb create`.
Shell
(venv) $ eb create
Enter Environment Name
(default is djangoTest-dev):
Enter DNS CNAME prefix
(default is djangoTest-dev):

Select a load balancer type
1) classic
2) application
(default is 1): 1

WARNING: You have uncommitted changes.
Creating application version archive "app-61f7-160915_232959".
Uploading djangoTest/app-61f7-160915_232959.zip to S3. This may take a while.
Upload Complete.
Environment details for: djangoTest-dev
  Application name: djangoTest
  Region: us-east-1
  Deployed Version: app-61f7-160915_232959
  Environment ID: e-bdjnbk3ajj
  Platform: 64bit Amazon Linux 2016.03 v2.1.6 running Python 3.4
  Tier: WebServer-Standard
  CNAME: djangoTest-dev.us-east-1.elasticbeanstalk.com
  Updated: 2016-09-16 03:30:04.383000+00:00
Printing Status:
INFO: createEnvironment is starting.
Now, the app isn't going to work immediately, as we are missing a config file. This file should live at `.ebextensions/django.config`; you will probably need to create the directory. For now, just add the WSGIPath to the file.
Yaml
# .ebextensions/django.config
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: djangoTest/wsgi.py
Now we can commit these changes and deploy to Elastic Beanstalk using `eb deploy`. Then you can open the page with `eb open` and see:
Tumblr media
Phase 4(Security Groups and the database):
So we are in a great place: we have everything working. Well, almost everything; nothing is talking to anything else yet. So let's give our machines the ability to talk to each other. Go to the VPC section in AWS and click Security Groups in the side panel. We are going to create two groups: one for the Elastic Beanstalk instances and one for the database.
Tumblr media Tumblr media
We also have to give the groups permission to communicate. What we want is for the EB group to be able to talk to the database group on the Postgres port. So select the database security group and click the Inbound Rules tab. Click Edit and add a rule: for the type, choose PostgreSQL, and make the source the EB security group.
Now go to your database and, under instance actions, click Modify Instance. Change the security group to the database security group you just created. Be sure to select Apply Immediately so your changes take effect.
Tumblr media Tumblr media
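For anyone scripting this instead of clicking through the console, the same inbound rule can be expressed with boto3. This is a rough sketch under assumptions: the group IDs are hypothetical placeholders, and the boto3 call is shown commented out since it needs live credentials:

```python
def postgres_ingress_rule(source_group_id: str) -> dict:
    """Build the inbound-rule structure for 'allow the EB security group
    to reach the database security group on the Postgres port (5432)'."""
    return {
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "UserIdGroupPairs": [{"GroupId": source_group_id}],
    }

rule = postgres_ingress_rule("sg-0eb00000example")  # hypothetical EB group id
print(rule["FromPort"])  # 5432

# With boto3 (not run here), the rule would be attached to the DB group like:
#   import boto3
#   ec2 = boto3.client("ec2")
#   ec2.authorize_security_group_ingress(
#       GroupId="sg-0db00000example",  # hypothetical database group id
#       IpPermissions=[rule],
#   )
```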
Now we need to add the security group to the Elastic Beanstalk instances. In the AWS console, click into the environment in Elastic Beanstalk, then click Configuration, then click the gear above Instances. Add the security group to the list for the instances, but do not remove the default security groups. Also make sure you add the security group for the app, not the database (we gave the app permission to access the database, so they have to be in the correct order). Press Apply. There will be a warning that the changes do not take effect immediately. Approve the changes and your instances will be replaced with new instances with the new settings.
To use the Postgres database in Django, we need to add settings for Postgres. First, find the DATABASES object in the settings and change it to use Postgres.
Python
if os.environ.get('DB_USER'):
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql_psycopg2',
            'NAME': 'django_test',
            'USER': os.environ['DB_USER'],
            'PASSWORD': os.environ['DB_PASSWORD'],
            'HOST': os.environ['DB_HOST'],
            'PORT': 5432,
        }
    }
else:
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
        }
    }
This code checks for environment variables containing the database settings. If they are present, it points Django at a Postgres database with those settings; otherwise it falls back to an sqlite3 database. I like to keep this fallback when possible because it makes it somewhat easy for someone to test the app locally if they need to. It is also important, for security reasons, not to hard-code values that should be private, like the database host.
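The same switch can be factored into a helper, which also makes it easy to unit-test. This is a sketch, not part of the original post; the names mirror the snippet above:

```python
import os

def database_settings(env, base_dir="/app"):
    """Return a Django DATABASES dict: Postgres when the DB_* variables
    are set (as on Elastic Beanstalk), sqlite3 otherwise (local dev)."""
    if env.get("DB_USER"):
        return {
            "default": {
                "ENGINE": "django.db.backends.postgresql_psycopg2",
                "NAME": "django_test",
                "USER": env["DB_USER"],
                "PASSWORD": env["DB_PASSWORD"],
                "HOST": env["DB_HOST"],
                "PORT": 5432,
            }
        }
    return {
        "default": {
            "ENGINE": "django.db.backends.sqlite3",
            "NAME": os.path.join(base_dir, "db.sqlite3"),
        }
    }

# In settings.py this would be used as:
#   DATABASES = database_settings(os.environ, BASE_DIR)
```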
For this code to work we need the psycopg2 package, so add it to requirements.txt. If you want to use Postgres locally, you can also install it with pip, then run `pip freeze > requirements.txt` to update the requirements.
We also need to update the application config. psycopg2 requires that the Postgres development libraries be installed, so we need to add them to django.config. That file now looks like:
Yaml
packages:
  yum:
    postgresql93-devel: []
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: djangoTest/wsgi.py
We also need the database variables so that the app can actually reach the database. Go back to your configuration in Elastic Beanstalk and this time click Software Configuration. We can now set the variables as Environment Properties (careful with naming: some variables, like USER, get overwritten when the app runs; I like to use DB_USER, DB_PASSWORD, DB_HOST). The DB_HOST is listed on the RDS instance as Endpoint. You can find it by clicking the instance and then clicking the pane indicated by the magnifying glass.
Tumblr media
Once the environment has updated, we can run another `eb deploy`. Remember to commit first, as the deploy is based on git. Great, so let's run `eb open` and test it. And we get an error.
Tumblr media
So what does this tell us? For starters, our database connection is working: we attached to it and it broke. This makes sense, as we haven't run any of the migrations to set up the database. We see such a detailed error because we pushed settings with DEBUG = True. So let's add running the migrations to the config script.
Yaml
packages:
  yum:
    postgresql93-devel: []
container_commands:
  01_migrate:
    command: "source /opt/python/run/venv/bin/activate && source /opt/python/current/env && python /opt/python/current/app/manage.py migrate --noinput"
    leader_only: true
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: djangoTest/wsgi.py
Okay, so this is a bit more complicated than anything we have seen before. The command activates the virtual environment, loads the environment variables, then runs migrate; leader_only means it runs on just one instance. So let's add it in git, deploy, and see what happens. We now see the admin site, and we don't get an error when we try to log in; instead, nothing happens. This is because we can't actually see any of the styling; we will handle that later. For now let's create a superuser. To do this, run `eb ssh` to get into the box, then run `source /opt/python/run/venv/bin/activate && source /opt/python/current/env && python /opt/python/current/app/manage.py createsuperuser`, which runs the Django command for creating a superuser. And now you can log in. The styling looks terrible, but you can log in. Congrats: you built a Django site that uses a Postgres database on Elastic Beanstalk.
Phase 5(What is up with the styling?):
So, in order to have styling on your website, you need to be able to run `python manage.py collectstatic`. The easiest way to do this is to run the command locally, add the directory to git, and commit it. I prefer not to do this personally, but we will do it as a first step, and another day we will look at alternative options.
To run collectstatic you need to edit the settings and add STATIC_ROOT. Find STATIC_URL and add it there.
Python
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, "static")
Now run `python manage.py collectstatic`. You should see something like "61 static files copied to '/Users/jasminenoack/Desktop/django-test/djangoTest/static'." Commit this and `eb deploy`.
And we have SUCCESS:
Tumblr media
I am interested in exploring some ideas about running things post-deploy (collectstatic needs to be run post-deploy), as well as the WhiteNoise package and AWS CloudFront, but I haven't tested any of them yet, so I don't know when I will try them or which I will try. I'd also like to look into setting up Celery on Elastic Beanstalk.
syndicode · 7 years ago
Text
Elixir on Elastic Beanstalk
We love guides. And this time let's talk about a basic setup for Elastic Beanstalk. A little reminder for those who are not (yet) using this service: Elastic Beanstalk is a cloud deployment service that automates the process of setting up applications on AWS (Amazon Web Services). So how...
triadcitydevelopers-blog · 10 years ago
Text
Migrating SmartMonsters to the Cloud. Part Four: the Web Site.
SmartMonsters' Web site was one of the first in our experience to be fully dynamic. Every page is a JSP. Written c. 2001, the goal was similar to far more elaborate turnkey systems such as Kana: content personalization, particularly ad selection, based on user profiles. That model lost interest for us as advertising became unviable on smaller Web properties. Nevertheless it still has practical uses. With minimal effort we can reconfigure displayed pages for users who self-identify as visually impaired, removing all images, simplifying navigation, and replacing sighted TriadCity clients with those designed to be blind-friendly. That's really not at all a bad thing to be able to easily do.

Architecturally, the site suffers from age-related drawbacks we'd like to fix while migrating. It's not designed for the Cloud: it doesn't scale out elastically. Sessions are old-school server-side JEE, requiring replication between instances. Single-instance content caches minimize database round trips, but ensure that instances will be out of sync with one another. The forum software in particular caches aggressively, guaranteeing both read and write inconsistency between instances. Fortunately these drawbacks are straightforward to address.

Sessions: Elastic Beanstalk will sync 'em for us, invisibly to our apps, using DynamoDB as the back-end store. Sold: we'll just turn that on and be done. But note, we had to put some thought into this option, as it would have been equally feasible to use memcached (see below). Memcached is tried and true, straightforward to configure, and well understood in practice; in theory it might be a little faster. I opted for DynamoDB because that's the solution AWS explicitly supports. They provide the necessary JAR; it's simple and correctly documented. I don't think performance will suffer to any degree users could perceive: DynamoDB's really, really fast. So I went with the supported option.
There's one detail which bugs me, though: the DynamoDB version doesn't seem to clean up after itself. There are thousands of rows of old sessions in the table. I'm not excited about having to write my own script to prune these. Perhaps it'll do it itself after enough time elapses. So far no one on the AWS fora has answered my question about this.

Caches: in-memory caches which previously used Ehcache can be migrated to memcached on ElastiCache. Elastic Beanstalk server instances can then share a central ElastiCache cloud, restoring sync. Why memcached over Redis? Because the cached forum data are frequently Maps or serialized objects. No, I didn't design this. The main Java Redis client, Jedis, doesn't store Objects; the main memcached client, Spymemcached, does. That settles that. The good news is that our architects thoughtfully exposed the caching API via the Strategy OO design pattern: there's a CacheEngine Interface with multiple concrete implementations. All I had to do was write a MemcachedCacheEngine, confining our migration effort to this one happy location. Not too bad.

There's an ancillary upside to having memcached in the mix: offloading cache memory to its own cloud allows the main app servers to downscale. We pushed the cache memory elsewhere, so the app servers don't need as much. This makes it inexpensive to generalize caching and sharing of dynamic information which is costly to compute. For example, we display a page of aggregate and weekly stats from the TriadCity universe. These data originate from weekly Elastic MapReduce jobs which write their results to the primary MongoDB; from there they're pulled when the page is displayed. We can eliminate the MongoDB round trip by caching them in memcached after the first request. Stuff like that. These are very minor performance tweaks, and they do add the complexity of additional moving parts. But since we have those parts now anyway, why not take advantage?
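The Strategy arrangement described here can be sketched roughly as follows. The class and method names are guesses, not SmartMonsters' actual API, and the sketch is in Python for brevity even though the original is Java; the memcached client is faked with a dict where a real deployment would use Spymemcached against ElastiCache:

```python
from abc import ABC, abstractmethod

class CacheEngine(ABC):
    """Strategy interface: callers depend on this, not on a concrete cache."""
    @abstractmethod
    def get(self, key): ...
    @abstractmethod
    def put(self, key, value): ...

class InMemoryCacheEngine(CacheEngine):
    """Stand-in for the original single-instance Ehcache-backed engine."""
    def __init__(self):
        self._store = {}
    def get(self, key):
        return self._store.get(key)
    def put(self, key, value):
        self._store[key] = value

class MemcachedCacheEngine(CacheEngine):
    """Sketch of the migration target: delegate to a memcached-style client."""
    def __init__(self, client):
        self._client = client
    def get(self, key):
        return self._client.get(key)
    def put(self, key, value):
        self._client.set(key, value)

class FakeMemcachedClient:
    """Dict-backed stand-in for a real memcached client."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def set(self, key, value):
        self._data[key] = value

# Swapping engines requires no change in calling code:
for engine in (InMemoryCacheEngine(), MemcachedCacheEngine(FakeMemcachedClient())):
    engine.put("forum:front-page", "<cached html>")
    print(engine.get("forum:front-page"))  # <cached html>
```

The payoff of the pattern is exactly what the post describes: the whole migration is confined to writing one new concrete engine.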
These trivial migration issues were easily solvable. If we stop there, though, we haven't fully taken advantage of the architectural best practices AWS calls for. IMO, AWS excels at decomposing and distributing application logic among multiple components communicating via message queues. Analogously to offloading cache to secondary instances, a lot of the processing which a beefy app server would formerly have done can be distributed to small, specialized environments. Especially when these jobs can be initiated asynchronously, there's a net benefit of overall downscaling: the app servers can be smaller when they don't require headroom to process blocking tasks while remaining responsive to new user requests. Elastic Beanstalk exposes this architectural pattern via the concepts of "Web tier" and "worker tier" instances, which it'll link for you with SQS queues automagically. The Website app offers a couple of opportunities for this kind of decomposition.

First, we can offload routine email notifications to a worker tier. New membership confirmations, password changes, and so on: these can be dropped fire-and-forget onto a message queue from which a minimally powered worker instance can process them asynchronously. These messages are infrequent, so the worker instance can be very tiny. I sized it as a T1 Micro and it's perfectly happy. [Update: after a mystery production hang traced to an out-of-memory error, I upped the instance to S2 Small.]

On a larger scale, we can extend this strategy to email newsletters. AWS offers a Simple Email Service handling exactly these kinds of bulk blasts. Coolness. In this case the instance should be larger, but we don't need it to be permanently online; we only need it while we're broadcasting newsletters, which is at most once a month. The design can center on another queue-based background job, passing newsletter content to a worker tier and letting that call SES.
We’ll leave the worker instance stopped most of the time, waking it up only when needed. Ideally this should all be scripted, so that the Servlet which listens for newsletters to be input can fire up the broadcaster on demand - and shut it down again when the blast is complete. Not super high priority, so we’ll do this manually for the time being. Cool idea, though.

Finally, let’s serve static content from CloudFront. Images, primarily. We’ll upload them little by little, converting the Website app into a traditional Web 1.0 three-tier kinda thing, with CloudFront doing what Akamai or another static content service would historically have done. Again, a very minor performance tweak, and again, adding minor complexity through extra moving parts.

Here’s what the complexity buys, in my view. The primary Web servers can be smaller. The work is now distributed among a cluster of scaled-down components interoperating in our VPC. Although we now use triple the number of virtual machines, overall monthly cost is about a third less than the previous, more vertical design, because the individual components are dialed way down. By distributing and scaling out we create a more nimble system at lower operating cost; and we can scale the components independently. IMO the minor additional operating complexity is worth the trade.

There were two unexpected architectural gotchas.

First, the TriadCity game server exposes its list of currently logged-in players via RMI. The Web site JSPs call into the game’s RMI server for the dope. Reasonable when all the moving parts are co-located. Not a happy first choice when those parts are separated by the Internet. Really kinda ugly when communicating through NAT. After a day of mucking I pulled the plug. Current logins and the TriadCity date can be grabbed from the database, so that’s that.
Server version, boot time and other details can be written to the DB when the game server boots, and the JSPs can pull those data from there too. An hour, ish. Done.

Second, it turned out we’d saved some interesting non-Serializables to HttpSession. Loggers, and some ugly FreeMarker utilities, courtesy yet again of those who designed the open-source Forum software we’ve heavily adapted. Loggers, NP: just grab them statically when needed; don’t save an instance in the session. FreeMarker: blecch. Half a day detangling the mess the Forum designers bequeathed us in their zeal to ensure their design couldn’t be embedded. Well - it wasn’t their thing. Fixed after struggle.

One AWS quirk caused some temporary grief. CloudWatch monitors CPU and network I/O, but not memory or disk use. Apparently these are not natively available to their hypervisor. Our Web server farm in Elastic Beanstalk went unstable for part of a day: sometimes our frequent new deployments would succeed, other times fail, with little indication in the various AWS logs aside from cryptic return codes. After much head-scratching I decided to run yum update just for a brain break, which surprisingly led to the solution: out of disk. Launch new AMIs with double the disk allocation; terminate the old instances; back to happy. Monitoring strategies will have to be figured out for disk and memory use - these are more important to our application contexts than CPU or I/O.

I bravely tested elasticity with simulated traffic against the production environment. Worked as advertised: additional instances spin up automatically, and sessions are shared. Nice! Then I terminated a production instance - manual emulation of Chaos Monkey. EB brought a replacement up within a few seconds. Very nice. This lets us easily enable another AWS best practice: keep EC2 instances young. AWS says the odds of instance failure grow with an instance’s uptime. The implication: terminate and restart instances frequently.
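The Logger fix works because java.util.logging (like Log4j) caches loggers by name: grabbing one statically always hands back the same instance, so nothing non-serializable needs to live in the HttpSession. A minimal illustration:

```java
import java.util.logging.Logger;

public class LoggerDemo {
    // Grab the logger statically; never stash it in HttpSession.
    private static final Logger LOG = Logger.getLogger(LoggerDemo.class.getName());

    public static void main(String[] args) {
        // The LogManager caches loggers by name, so a repeated lookup
        // returns the very same (non-serializable) instance.
        Logger again = Logger.getLogger(LoggerDemo.class.getName());
        System.out.println(LOG == again);  // prints true
    }
}
```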
With Elastic Beanstalk and shared sessions we can do this completely transparently to users.  I see no reason not to write a script which terminates an instance every day.  If your testing allows confidence that EB will replace the lost instance invisibly to the world, why not?  Let chaos be your friend.
This migration was happily straightforward.  Most of the work centered on simple decompositions taking advantage of AWS best practices.  There was no downtime at all.
Nice!