just4programmers · 7 years
Optimizing ASP.NET Core Docker Image sizes
There is a great post from Steve Lasker in 2016 about optimizing ASP.NET Docker image sizes. Since then Docker has added multi-stage builds, so you can do more in one Dockerfile...which feels like one step even though it's not. Containers are about easy and reliable deployment, and they're also about density. You want to use as little memory as possible, sure, but it's also nice to make them as small as possible so you're not spending time moving them around the network. The size of the image file can also affect startup time for the container. Plus, it's just tidy.
I've been building a little 6 node Raspberry Pi (ARM) Kubernetes cluster on my desk this week - like you do - and I noticed that my image sizes were a little larger than I'd like. This is a bigger issue on a relatively low-powered system, but again, why carry around unnecessary megabytes if you don't have to?
Alex Ellis has a great blog post on building .NET Core apps for Raspberry Pi, along with a YouTube video. In his video and blog he builds a "Console.WriteLine()" console app, which is great for OpenFaaS (an open source serverless platform), but I wanted to also have ASP.NET Core apps on my Raspberry Pi k8s cluster. He included this as a "challenge" in his blog, so challenge accepted! Thanks for all your help and support, Alex!
ASP.NET Core on Docker (on ARM)
First I make a basic ASP.NET Core app. I could do a Web API, but this time I'll do an MVC one with Razor Pages. To be clear, they are the same thing, just with different starting points. I can always add pages or add JSON to either, later.
I start with "dotnet new mvc" (or dotnet new razor, etc). I'm going to be running this in Docker, managed by Kubernetes, and while I could always change the WebHost in Program.cs to control how the Kestrel web server starts up, like this:
WebHost.CreateDefaultBuilder(args)
    .UseUrls("http://*:5000;http://localhost:5001;https://hostname:5002")
For Docker use cases it's easier to change the listening URL with an Environment Variable. Sure, it could be 80, but I like 5000. I'll set the ASPNETCORE_URLS environment variable to http://+:5000 when I make the Dockerfile.
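As an aside, you can also override that variable at run time rather than baking it into the image. Here's a minimal sketch (the image name is hypothetical):

docker run --rm -e ASPNETCORE_URLS=http://+:5000 -p 5000:5000 yourregistry/aspnetcoreapp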
Optimized MultiStage Dockerfile for ASP.NET
There are a number of "right" ways to do this, so you'll want to think about your scenarios. You'll see below that I'm using ARM (because Raspberry Pi), so if you see errors running your container like "qemu: Unsupported syscall: 345" then you're trying to run an ARM image on x86/x64. I'm going to be building an ARM container from Windows but I can't run it here. I have to push it to a container registry and then tell my Raspberry Pi cluster to pull it down, and THEN it'll run, over there.
Here's what I have so far. NOTE there are some things commented out, so be conscious. This is/was a learning exercise for me. Don't you copy/paste unless you know what's up! And if there's a mistake, here's a GitHub Gist of my Dockerfile for you to change and improve.
It's important to understand that .NET Core has an SDK with build tools and development kits and compilers and stuff, and then it has a runtime. The runtime doesn't have the "make an app" stuff; it only has the "run an app" stuff. There is not currently an SDK for ARM, so that's a limitation we are (somewhat elegantly) working around with the multi-stage build file. But even if there WAS an SDK for ARM, we'd still want to use a Dockerfile like this because it's more efficient with space and makes a smaller image.
Let's break this down. There are two stages. The first FROM is the SDK image that builds the code. We're doing the build inside Docker - which is lovely, and a great, reliable way to do builds.
PRO TIP: Docker is smart about making intermediate images and doing the least work, but it's useful if we (the authors) do the right thing as well to help it out.
For example, see where we COPY the .csproj over and then do a "dotnet restore"? Often you'll see folks do a "COPY . ." and then do a restore. That doesn't allow Docker to detect what's changed and you'll end up paying for the restore on EVERY BUILD.
By splitting this up - copy the project file, restore, then copy the code - your "dotnet restore" intermediate step will be cached by Docker and things will be WAY faster.
After you build, you'll do a publish. If you know the destination like I do (linux-arm) you can do a RID (runtime id) publish that is self-contained with -r linux-arm (or debian, or whatever) and you'll get a complete self-contained version of your app.
Otherwise, you can just publish your app's code and use a .NET Core runtime image to run it. Since I'm using a complete self-contained build for this image, it would be overkill to ALSO include the .NET runtime. If you look at the Docker Hub page for microsoft/dotnet, you'll see images called "deps", for "dependencies". Those are images that sit on top of debian and include the things .NET needs to run - but not .NET itself.
The stack of images looks generally like this (for example)
FROM debian:stretch
FROM microsoft/dotnet:2.0-runtime-deps
FROM microsoft/dotnet:2.0-runtime
So you have your base image, your dependencies, and your .NET runtime. The SDK image would include even more stuff since it needs to build code. Again, that's why we use that for the "as builder" image and then copy out the results of the compile and put them in another runtime image. You get the best of all worlds.
FROM microsoft/dotnet:2.0-sdk as builder

RUN mkdir -p /root/src/app/aspnetcoreapp
WORKDIR /root/src/app/aspnetcoreapp

# copy just the project file over
# this prevents additional extraneous restores
# and allows us to re-use the intermediate layer
# This only happens again if we change the csproj.
# This means WAY faster builds!
COPY aspnetcoreapp.csproj .
# Because we have a custom nuget.config, copy it in
COPY nuget.config .
RUN dotnet restore ./aspnetcoreapp.csproj

COPY . .
RUN dotnet publish -c release -o published -r linux-arm

# Smaller - Best for self-contained apps, as it doesn't include the runtime.
# It has the *dependencies* to run .NET apps. The .NET runtime image sits on this.
FROM microsoft/dotnet:2.0.0-runtime-deps-stretch-arm32v7

# Bigger - Best for apps that aren't self-contained.
#FROM microsoft/dotnet:2.0.0-runtime-stretch-arm32v7

# These are the non-ARM images.
#FROM microsoft/dotnet:2.0.0-runtime-deps
#FROM microsoft/dotnet:2.0.0-runtime

WORKDIR /root/
COPY --from=builder /root/src/app/aspnetcoreapp/published .
ENV ASPNETCORE_URLS=http://+:5000
EXPOSE 5000/tcp

# This runs your app with the dotnet exe included with the runtime or SDK
#CMD ["dotnet", "./aspnetcoreapp.dll"]

# This runs your self-contained .NET Core app. You built with -r to get this
CMD ["./aspnetcoreapp"]
Notice also that I have a custom nuget.config, so if you do also you'll need to make sure that's available at build time for dotnet restore to pick up all packages.
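If you've never seen one, a minimal nuget.config looks roughly like this - a sketch only, and the private feed URL is a made-up placeholder:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <!-- hypothetical private feed; substitute your own -->
    <add key="myprivatefeed" value="https://example.com/nuget/v3/index.json" />
  </packageSources>
</configuration>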
I've included, but commented out, a bunch of the FROMs in the second stage. I'm using just the ARM one, but I wanted you to see the others.
Once we have the code we built copied into our runtime image, we set our environment variable so our app listens on port 5000 internally (remember that from above?). Then we run our app. Notice that you can run it with "dotnet foo.dll" if you have the runtime, but if you are like me and using a self-contained build, then you'll just run "foo".
To sum up:
Build with FROM microsoft/dotnet:2.0-sdk as builder
Copy the results out to a runtime
Use the right runtime FROM for you
Right CPU architecture?
Using the .NET Runtime (typical) or using a self-contained build (less so)
Listening on the right port (if a web app)?
Running your app successfully and correctly?
Do you have a .dockerignore? Super important for .NET builds, as you don't want to copy over /obj, /bin, etc., but you do want /published.

obj/
bin/
!published/
Optimizing a little more
There are a few pre-release "tree trimming" tools that can look at your app and remove code and binaries that you are not calling. I included Microsoft.Packaging.Tools.Trimming as well to try it out and get even more unused code out of my final image, just by adding a package to my project.
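If you want to try this yourself, a rough sketch of the two steps looks like this (the publish flag matches the build output shown below; the package was pre-release at the time, so details may change):

dotnet add package Microsoft.Packaging.Tools.Trimming
dotnet publish -c release -o published -r linux-arm /p:LinkDuringPublish=true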
Step 8/14 : RUN dotnet publish -c release -o published -r linux-arm /p:LinkDuringPublish=true
 ---> Running in 39404479945f
Microsoft (R) Build Engine version 15.4.8.50001 for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.
  Trimmed 152 out of 347 files for a savings of 20.54 MB
  Final app size is 33.56 MB
  aspnetcoreapp -> /root/src/app/aspnetcoreapp/bin/release/netcoreapp2.0/linux-arm/aspnetcoreapp.dll
  Trimmed 152 out of 347 files for a savings of 20.54 MB
  Final app size is 33.56 MB
If you run docker history on your final image you can see exactly where the size comes from. If/when Microsoft switches from a Debian base image to an Alpine one, this should get even smaller.
C:\Users\scott\Desktop\k8s for pi\aspnetcoreapp>docker history c60
IMAGE         CREATED         CREATED BY                                      SIZE    COMMENT
c6094ca46c3b  3 minutes ago   /bin/sh -c #(nop) CMD ["dotnet" "./aspnet...    0B
b7dfcf137587  3 minutes ago   /bin/sh -c #(nop) EXPOSE 5000/tcp               0B
a5ba51b91d9d  3 minutes ago   /bin/sh -c #(nop) ENV ASPNETCORE_URLS=htt...    0B
8742269735bc  3 minutes ago   /bin/sh -c #(nop) COPY dir:cc64bd3b9bacaeb...   56.5MB
28c008e38973  3 minutes ago   /bin/sh -c #(nop) WORKDIR /root/                0B
4bafd6e2811a  4 hours ago     /bin/sh -c apt-get update && apt-get i...       45.4MB
<missing>     3 weeks ago     /bin/sh -c #(nop) CMD ["bash"]                  0B
<missing>     3 weeks ago     /bin/sh -c #(nop) ADD file:8b7cf813a113aa2...   85.7MB
Here is the evolution of my Dockerfile as I made changes, with the final result getting smaller and smaller. Looks like about 45 megs trimmed with a little work, or about 20% smaller.
C:\Users\scott\Desktop\k8s for pi\aspnetcoreapp>docker images | find /i "aspnetcoreapp"
shanselman/aspnetcoreapp  0.5  c6094ca46c3b  About a minute ago  188MB
shanselman/aspnetcoreapp  0.4  083bfbdc4e01  12 minutes ago      196MB
shanselman/aspnetcoreapp  0.3  fa053b4ee2b4  About an hour ago   199MB
shanselman/aspnetcoreapp  0.2  ba73f14e29aa  4 hours ago         207MB
shanselman/aspnetcoreapp  0.1  cac2f0e3826c  3 hours ago         233MB
Later I'll do a blog post where I put this standard ASP.NET Core web app into Kubernetes using this YAML description and scale it out on the Raspberry Pi. I'm learning a lot! Thanks to Alex Ellis and Glenn Condron and Jessie Frazelle for their time!
Sponsor: Create powerful Web applications to manage each step of a document’s life cycle with DocuVieware HTML5 Viewer and Document Management Kit. Check our demos to acquire, scan, edit, annotate 100+ formats, and customize your UI!
© 2017 Scott Hanselman. All rights reserved.
jmtapio · 7 years
There is much to like about Docker. Much has been written about it, and about how secure the containerization is.
This post isn’t about that. This is about keeping what’s inside each container secure. I believe we have a fundamental problem here.
Earlier this month, a study on security vulnerabilities on Docker Hub came out, and the picture isn’t pretty. One key finding:
Over 80% of the :latest versions of official images contained at least one high severity vulnerability!
And it’s not the only one raising questions.
Let’s dive in and see how we got here.
It’s hard to be secure, but Debian makes it easier
Let’s say you want to run a PHP application like WordPress under Apache. Here are the things you need to keep secure:
WordPress itself
All plugins, themes, customizations
All PHP libraries it uses (MySQL, image-processing, etc.)
MySQL
Apache
All libraries MySQL or Apache use: OpenSSL, libc, PHP itself, etc.
The kernel
All containerization tools
On Debian (and most of its best-known derivatives), we are extremely lucky to have a wonderful security support system. If you run a Debian system, the combination of unattended-upgrades, needrestart, debsecan, and debian-security-support will help you keep a Debian system secure and verify that it is. When the latest OpenSSL bug comes out, generally speaking by the time I wake up, unattended-upgrades has already patched it, needrestart has already restarted any server that uses it, and I’m protected. Debian’s security team generally backports fixes rather than just saying “here’s the new version”, making it very safe to apply patches automatically. As long as I use what’s in Debian stable, all layers mentioned above will be protected using this scheme.
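As a rough sketch, bootstrapping that tooling on a Debian box looks something like this:

# install the security tooling mentioned above
apt-get install unattended-upgrades needrestart debsecan debian-security-support
# enable automatic updates (the documented low-priority debconf prompt)
dpkg-reconfigure -plow unattended-upgrades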
This picture is much nicer than what we see in Docker.
Problems
We have a lot of problems in the Docker ecosystem:
No built-in way to know when a base needs to be updated, or to automatically update it
Diverse and complicated vendor security picture
No way to detect when intermediate libraries need to be updated
Complicated final application security picture
Let’s look at them individually.
Problem #1: No built-in way to know when a base needs to be updated, or to automatically update it
First of all, there is nothing in Docker like unattended-upgrades. Although a few people have suggested ways to run unattended-upgrades inside containers, there are many reasons that approach doesn’t work well. The standard advice is to update/rebuild containers.
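In practice, “rebuild” means something like this minimal sketch (the tag is hypothetical):

# --pull fetches the freshest base image; --no-cache forces every layer to rebuild
docker build --pull --no-cache -t myapp:rebuilt .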
So how do you know when to do that? It is not all that obvious. Theoretically, official OS base images will be updated when needed, and then other Docker hub images will detect the base update and be rebuilt. So, if a bug in a base image is found, and if the vendors work properly, and if you are somehow watching, then you could be protected. There is work in this area; tools such as watchtower help here.
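For example, here’s a sketch of running watchtower against the local Docker daemon (the image name is the one the project published at the time; check upstream for the current one):

docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  v2tec/watchtower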
But this can lead to a false sense of security, because:
Problem #2: Diverse and complicated vendor security picture
Different images can use different operating system bases. Consider just these official images, and the bases they use: (tracking latest tag on each)
nginx: debian:stretch-slim (stretch is pre-release at this date!)
mysql: debian:jessie
mongo: debian:wheezy-slim (previous release)
apache httpd: debian:jessie-backports
postgres: debian:jessie
node: buildpack-deps:jessie, eventually depends on debian:jessie
wordpress: php:5.6-apache, eventually depends on debian:jessie
And how about a few unofficial images?
oracle/openjdk: oraclelinux:latest
robotamer/citadel: debian:testing (dangerous, because testing is an alias for different distros at different times)
http://ift.tt/2qqFJ8A: ubuntu of some sort
The good news is that Debian jessie seems to be pretty popular here. The bad news is that you see everything from Oracle Linux, to Ubuntu, to Debian testing, to Debian oldstable in just this list. Go a little further, and you’ll see Alpine Linux, CentOS, and many more represented.
Here’s the question: what do you know about the security practices of each of these organizations? How well updated are their base images? Even if it’s Debian, how well updated is, for instance, the oldstable or the testing image?
The attack surface here is a lot larger than if you were just using a single OS. But wait, it gets worse:
Problem #3: No way to detect when intermediate libraries need to be updated
Let’s say your Docker image is using a base that is updated immediately when a security problem is found. Let’s further assume that your software package (WordPress, MySQL, whatever) is also being updated.
What about the intermediate dependencies? Let’s look at the build process for nginx. The Dockerfile for it begins with Debian:stretch-slim. But then it does a natural thing: it runs an apt-get install, pulling in packages from both Debian and an nginx repo.
I ran the docker build across this. Of course, the apt-get command brings in not just the specified packages, but also their dependencies. Here are the ones nginx brought in:
fontconfig-config fonts-dejavu-core gettext-base libbsd0 libexpat1 libfontconfig1 libfreetype6 libgd3 libgeoip1 libicu57 libjbig0 libjpeg62-turbo libpng16-16 libssl1.1 libtiff5 libwebp6 libx11-6 libx11-data libxau6 libxcb1 libxdmcp6 libxml2 libxpm4 libxslt1.1 nginx nginx-module-geoip nginx-module-image-filter nginx-module-njs nginx-module-xslt ucf
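You can check this yourself; here’s a rough sketch for listing what a Debian-based image actually contains:

docker run --rm nginx:latest dpkg-query -W -f='${Package} ${Version}\n' | sort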
Now, what is going to trigger a rebuild if there’s a security fix to libssl1.1 or libicu57? (Both of these have a history of security holes.) The answer, for the vast majority of Docker images, seems to be: nothing automatic.
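If you want to audit an image by hand, one rough approach (assuming a Debian stretch base, as nginx uses) is to run debsecan inside a throwaway container:

docker run --rm nginx:latest sh -c 'apt-get update -qq && DEBIAN_FRONTEND=noninteractive apt-get install -y -qq debsecan && debsecan --suite stretch'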
Problem #4: Complicated final application security picture
And that brings us to the last problem: Let’s say you want to run an application in Docker. exim, PostgreSQL, Drupal, or maybe something more obscure. Who is watching for security holes in it? If you’re using Debian packages, the Debian security team is. If you’re using a Docker image, well, maybe it’s the random person that contributed it, maybe it’s the vendor, maybe it’s Docker, maybe it’s nobody. You have to take this burden on yourself, to validate the security support picture for each image you use.
Conclusion
All this adds up to a lot of work, which is not taken care of for you by default in Docker. It is no surprise that many Docker images are insecure, given this picture. The unfortunate reality is that many Docker containers are running with known vulnerabilities that have known fixes which simply haven’t been applied, and that’s sad.
I wonder if there are any practices people are using that can mitigate this better than what the current best-practice recommendations seem to be?
via Planet Debian
techsur · 6 years
Creating a Docker image with Node, Chrome, and Git
# taking debian as the base image
FROM debian:stretch

# print OS info
RUN cat /etc/os-release

# install default JDK and curl
RUN apt-get update
RUN apt-get install -y apt-utils
RUN apt-get install -y default-jdk
RUN apt-get install -y curl

# install node and prerequisites
RUN apt-get install -y gnupg2
RUN apt-get upgrade -y
RUN apt-get install -y curl software-properties-common
RUN curl -sL https://deb.nodesource.com/setup_9.x | bash -
RUN apt-get install -y nodejs

# install build essentials
RUN apt-get install -y build-essential

# install xvfb
RUN apt-get install --fix-missing -y xvfb

# install git
RUN apt-get install -y git-core

# install chrome
RUN apt install -y wget
# Download the Google signing key and install it.
RUN wget -q -O - https://dl.google.com/linux/linux_signing_key.pub | apt-key add -
# Set up the Google Chrome repository.
RUN echo "deb http://dl.google.com/linux/chrome/deb/ stable main" | tee /etc/apt/sources.list.d/google-chrome.list
# Update the repository index and install chrome.
RUN apt-get update && apt-get -y install google-chrome-stable
RUN google-chrome --version
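A quick usage sketch to build and smoke-test the image (the tag is arbitrary):

docker build -t node-chrome-git .
docker run --rm node-chrome-git google-chrome --version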