Preface
The aim of this article is to go through the steps of deploying a full stack app (back end and front end) using Docker. I will only be deploying the app locally on a Windows 10 machine; in a future article, we will go through deploying to a cloud-based service such as Azure or AWS.
I will not be going in depth into .NET Core, React or MSSQL, as I want to concentrate on getting the app deployed using Docker. Note that you can use any JavaScript framework in place of React here, such as Angular, Vue etc.
Motivation
Why use Docker? Why not just create our database using SSMS and deploy our app to IIS? Because that requires a LOT of prerequisites; we first need to install MSSQL, the .NET Core runtime, SSMS (maybe) and probably a few other things I’ve forgotten. Instead, I’d rather just run a single command that will deploy my app within minutes and have it up and running with minimum setup required. Docker has many other benefits that you can find with a quick Google search.
This is my favourite quote on Docker:
"Imagine five or so years ago someone telling you in a job interview that they care so much about consistency that they always ship the operating system with their app. You probably wouldn’t have hired them. Yet, that’s exactly the model Docker uses!" Richard Lander
Requirements
VS Code - not strictly required, but it provides a very nice interface for Docker. If you prefer, you can use the command line instead.
VS Code Docker extension – can be found here.
Docker Desktop for Windows (download from here) – we’ll need this to run our Docker containers locally, as Docker only runs on Linux systems, Windows 10 desktop systems, and Windows Server 2016 and 2019. It supports running both Linux and Windows containers using Windows-native Hyper-V.
Part 1: React App
I’ve created a simple React app using create-react-app for a shopping list; it displays a list of items with their name and price in a table, with functionality to add new items. The React app uses a .NET Core REST API to retrieve the shopping items from an MSSQL database. The finished React app looks like this:
Part 2: .NET Core Api
This app will act as my REST API to provide the shopping list items to the React app and will store/add these items using an MSSQL database. I also used Entity Framework Core as my ORM to communicate with the database.
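The API source isn’t reproduced in this article, but to give an idea of what it involves, here is a minimal sketch of what the entity, DbContext and controller could look like. The class, property and route names are my own assumptions, not necessarily those used in the actual project.

using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;

// A shopping list item with a name and a price, matching the columns shown in the React app.
public class ShoppingItem
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

// EF Core context exposing the single table.
public class ShoppingContext : DbContext
{
    public ShoppingContext(DbContextOptions<ShoppingContext> options) : base(options) { }
    public DbSet<ShoppingItem> ShoppingItems { get; set; }
}

[ApiController]
[Route("api/[controller]")]
public class ShoppingItemsController : ControllerBase
{
    private readonly ShoppingContext _context;
    public ShoppingItemsController(ShoppingContext context) => _context = context;

    // GET api/shoppingitems – return all items for the React app to display.
    [HttpGet]
    public ActionResult<IEnumerable<ShoppingItem>> Get() => _context.ShoppingItems.ToList();

    // POST api/shoppingitems – add a new item.
    [HttpPost]
    public ActionResult<ShoppingItem> Post(ShoppingItem item)
    {
        _context.ShoppingItems.Add(item);
        _context.SaveChanges();
        return item;
    }
}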
Part 3: MS SQL Database
My MSSQL database will store the shopping list items. The database structure will be very simple with a single table:
Dockerfile
I have a Dockerfile to build the image for the .NET Core app. This Dockerfile builds a production build of the React app ready for the .NET Core app to serve, then restores, builds and publishes the .NET Core project, along with some other configuration. Here is my Dockerfile:
FROM microsoft/dotnet:2.2-aspnetcore-runtime AS base
WORKDIR /app

FROM node:12.2.0-alpine as react-build
WORKDIR /react-app
COPY react-app/ .
RUN yarn
RUN npm run build

FROM microsoft/dotnet:2.2-sdk AS build
WORKDIR /src
COPY ["DotNetCoreApi/DotNetCoreApi.csproj", "DotNetCoreApi/"]
RUN dotnet restore "DotNetCoreApi/DotNetCoreApi.csproj"
COPY . .
WORKDIR /src/DotNetCoreApi
RUN dotnet build "DotNetCoreApi.csproj" -c Release -o /app

FROM build AS publish
RUN dotnet publish "DotNetCoreApi.csproj" -c Release -o /app

FROM base AS final
WORKDIR /app
COPY --from=publish /app .
COPY --from=react-build /react-app/build ./build
ENTRYPOINT ["dotnet", "DotNetCoreApi.dll"]
Let’s briefly go through the commands used in this Dockerfile:
- FROM – initializes a new build stage. Every image must be built from an image, and this is the first command required in a Dockerfile.
- WORKDIR – sets the work directory for any subsequent commands (if the folder doesn’t exist it will be created).
- COPY – copies files or directories from a source (first argument) to a target (second argument).
- RUN – runs a command in a shell (in the currently set WORKDIR).
- ENTRYPOINT – a command that will be executed when the container starts.
The “base” image is the official Microsoft runtime image for .NET Core 2.2. This will be downloaded from Docker Hub and used to run my .NET Core API.
The next part of our Dockerfile pulls the Node image, which is used to download the node module dependencies using yarn and then create a production build of the React app.
After that, it pulls the .NET Core 2.2 SDK image from Docker Hub to restore the NuGet packages, build a release version of our .NET Core app and publish it.
Finally, it uses the output of the two builds for the React and .NET Core apps to create a final image by copying their contents into a folder. This image is based on the .NET Core 2.2 runtime image. I have set the ENTRYPOINT to tell it where to start, which is to run the “dotnet” command with the .dll for the .NET Core project.
We can now build an image from the Dockerfile by running the following from the root of the repository:
docker build -f DotNetCoreApi/Dockerfile .
But our app will not work, as we also need a database to be set up! This image only gives us the .NET Core API and the React app. We need another container to run our MSSQL database, which is where docker-compose comes in.
docker-compose
docker-compose is a tool used to define and run multi-container docker applications. Everything is defined in a docker-compose.yml file, where you define all your services (containers) that form your app. In our case, we will have 2 services:
- The .NET Core app, which will also serve our React app.
- An MSSQL database.
Now, we could have a third container to run our React app separately using Node, but there’s no need in our case.
Also, if we wanted to, we could just have two Dockerfiles and use the docker build and docker run commands directly, but we’d have to pass lots of parameters such as the port mappings, networks etc., and it gets a bit complicated, so docker-compose makes things much easier.
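To give an idea of what that would involve, here is a rough sketch of the manual commands docker-compose replaces. The image tag and exact flags are illustrative (and this omits the wait-for-the-database step handled by the compose file below):

docker network create sql
docker run -d --name db --network sql -e ACCEPT_EULA=Y -e SA_PASSWORD=Xx12Xxxxx12 -p 1444:1433 -v database:/var/opt/mssql mcr.microsoft.com/mssql/server
docker build -t dotnetcoreapi -f DotNetCoreApi/Dockerfile .
docker run -d --name dotnetcoreapi --network sql -p 5005:80 dotnetcoreapi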
Here is the docker-compose.yml file for our app:
version: '3.7'

services:
  dotnetcoreapi:
    container_name: 'dotnetcoreapi'
    ports:
      - '5005:80'
    networks:
      - sql
    build:
      context: .
      dockerfile: DotNetCoreApi/Dockerfile
    depends_on:
      - db
    command: bash -c "echo sleep wait a bit till the db is up and running before attempting to create the Allready DB && sleep 7 && /opt/mssql-tools/bin/sqlcmd -S db,1433 -U sa -P Xx12Xxxxx12 -Q 'create database AllReady'"
  db:
    container_name: 'db'
    image: 'mcr.microsoft.com/mssql/server'
    networks:
      - sql
    environment:
      SA_PASSWORD: 'Xx12Xxxxx12'
      ACCEPT_EULA: 'Y'
    ports:
      - '1444:1433'
    volumes:
      - database:/var/opt/mssql

networks:
  sql:

volumes:
  database:
Let’s briefly go through what’s happening in this docker-compose.yml file:
version - the version of the Compose file format being used. This is always the first line in a docker-compose file.
services - here you define all the services (containers) that will make up our app, so they can be run together in an isolated environment.
networks - we define a network which both our services will use to communicate with each other.
volumes - we define volumes here to persist data, even after the container is destroyed. In this instance, it will be used to store our MSSQL databases.
Our first service, “dotnetcoreapi”, will build the .NET Core app. We tell it to expose port 80 in the container (the port on which the .NET Core app runs) and map this port to 5005 on our local machine. This is the port from which we can then access our app. We provide the build context, i.e. the folder to build this image from, tell it which Dockerfile to use, and set the network to “sql” so the containers can communicate within this network. We also declare that this service depends on the “db” service, which dictates the order in which the containers are created.
I have also provided a command to be run so that the .NET Core app isn’t started until the MSSQL container is up and running and the database has been created.
Secondly, we have our “db” service. We don’t need a Dockerfile for this; we can simply use the official MSSQL image from Microsoft. We set the environment variables, such as the SA password, which our .NET Core app will use to connect. We also provide it with a volume to store the SQL database files; this volume will be created the first time we run the container. The point of using a volume is to persist the data: once we destroy the container, the volume (and therefore the data) will still be available the next time we run it.
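The article doesn’t show how the .NET Core app’s connection string is wired up, but a minimal sketch could look something like this, assuming the ShoppingContext from the earlier sketch. The server name “db” and port 1433 come from the compose file, and AllReady is the database created by the startup command:

// Startup.cs (sketch) – requires the Microsoft.EntityFrameworkCore.SqlServer package.
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // "db" is the service name from docker-compose.yml; containers on the shared
        // "sql" network reach it by that hostname on its default port 1433.
        services.AddDbContext<ShoppingContext>(options =>
            options.UseSqlServer("Server=db,1433;Database=AllReady;User Id=sa;Password=Xx12Xxxxx12;"));
        services.AddMvc();
    }
}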
Running our app
If we now wanted to run this app on our machine (and we have Docker installed, of course), we need to execute the following command in the directory containing the docker-compose.yml file:
docker-compose up
We now have our 5 images (2 for microsoft/dotnet – 1 for the SDK and the other for the runtime), 2 containers (1 running the .NET Core app and 1 running the MSSQL database), our network and our volume.
And our app is running! You will be able to access the React app using “http://localhost:5005”. As I seeded the app with data, you should see some items, and also be able to add new items.
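A few standard Docker commands are handy at this point for inspecting and tearing down the stack:

docker ps                  # list the running containers
docker-compose logs -f     # follow the logs of both services
docker-compose down        # stop and remove the containers (the named volume survives)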
You can also now connect to the database using SSMS. To connect, the server address will be “127.0.0.1,1444” – don’t forget the port 1444! That’s the host port we mapped in the docker-compose file. The credentials will be the username SA and the password you set in the docker-compose file.
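If you’d rather not use SSMS, you can also query the database from inside the container using the same sqlcmd tool the startup command uses:

docker exec -it db /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P Xx12Xxxxx12 -Q "SELECT name FROM sys.databases"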
You can find the complete source code for this article here on GitHub. If you’d like to run it, simply clone the repo locally and run the docker-compose up command inside the folder. And there you have it! A full stack app running without having to install anything other than Docker!
Happy coding!