- Mastering Docker Enterprise
- Mark Panthofer
Containerizing the Webforms application
For the Webforms application, Elton takes a more sophisticated approach: here, we are actually going to build two Docker images to create the Webforms application.
Figure 13 shows the image dependency:

The first image file is a base builder image we can use for this application and reuse for containerizing other .NET 3.5 Webform applications. So, first let's take a look at the base image Dockerfile shown in the following code block:
# escape=`
FROM microsoft/dotnet-framework:3.5-sdk
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
# Install web workload:
RUN Invoke-WebRequest -UseBasicParsing https://download.visualstudio.microsoft.com/download/pr/100196686/e64d79b40219aea618ce2fe10ebd5f0d/vs_BuildTools.exe -OutFile vs_BuildTools.exe; `
Start-Process vs_BuildTools.exe -ArgumentList '--add', 'Microsoft.VisualStudio.Workload.WebBuildTools', '--quiet', '--norestart', '--nocache' -NoNewWindow -Wait;
# Install WebDeploy
RUN Install-PackageProvider -Name chocolatey -RequiredVersion 2.8.5.130 -Force; `
Install-Package -Name webdeploy -RequiredVersion 3.6.0 -Force;
This file starts out in the usual way, by overriding the default escape character for a Windows image build, using the backtick instead of the default backslash. Next comes the base image from Microsoft, microsoft/dotnet-framework:3.5-sdk. Please note we updated the base image reference in Elton's example docker/web-builder/3.5/Dockerfile to use Microsoft's new repository scheme. In the second section of this Dockerfile, we see the Visual Studio build tools being downloaded and installed. In the final section, we see the chocolatey package provider being used to install the webdeploy module:
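To see why the escape override matters on Windows, here is a minimal, illustrative sketch (the base image tag and paths are assumptions, not from the source). With the default backslash escape, a Windows path at the end of a line would be misread as a line continuation; the backtick avoids that while still letting long RUN commands span lines:

```Dockerfile
# escape=`
FROM mcr.microsoft.com/windows/servercore:ltsc2019
# Backslashes are now literal path separators, safe at end of line:
WORKDIR C:\app
# The backtick continues a long PowerShell command across lines:
RUN Write-Output 'first step'; `
    Write-Output 'second step'
```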
For reference, here's the docker directory tree:
mta-netfx-dev
├── docker
│   ├── web
│   │   └── Dockerfile
│   └── web-builder
│       └── 3.5
│           └── Dockerfile
This is the complete docker/web-builder/3.5/Dockerfile file; we can build and tag our new base image using this Dockerfile, and it will act as the base image for our application's Dockerfile:
mta-netfx-dev$ docker image build -t mta-sdk-web-builder:3.5 --file .\docker\web-builder\3.5\Dockerfile .
...
Successfully built df1102a58630
Successfully tagged mta-sdk-web-builder:3.5
Now, we can use this new image as the base image for the Webforms application, as follows:
# escape=`
FROM mta-sdk-web-builder:3.5 AS builder
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';"]
WORKDIR C:\src\SignUp.Web
COPY .\src\SignUp\SignUp.Web\packages.config .
RUN nuget restore packages.config -PackagesDirectory ..\packages
COPY src\SignUp C:\src
RUN msbuild SignUp.Web.csproj /p:OutputPath=c:\out /p:DeployOnBuild=true
# app image
FROM microsoft/aspnet:3.5-windowsservercore-10.0.14393.1884
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';"]
ENV APP_ROOT="C:\web-app" `
DB_CONNECTION_STRING_PATH=""
WORKDIR $APP_ROOT
RUN Import-Module WebAdministration; `
Set-ItemProperty 'IIS:\AppPools\.NET v2.0' -Name processModel.identityType -Value LocalSystem; `
Remove-Website -Name 'Default Web Site'; `
New-Website -Name 'web-app' -Port 80 -PhysicalPath $env:APP_ROOT -ApplicationPool '.NET v2.0'
COPY .\docker\web\start.ps1 .
ENTRYPOINT ["powershell", ".\\start.ps1"]
COPY --from=builder C:\out\_PublishedWebsites\SignUp.Web .
The previous Dockerfile defines what is called a multi-stage build, because the file has more than one FROM statement. The first FROM statement designates the first stage of the build, which is named builder via the AS builder clause. This is where msbuild runs to actually build the application, but that stage includes a lot of build tools that we don't need at runtime. So, in the very last line of the Dockerfile, we copy the published assets from the builder stage into the second and final image stage. All of the build components are left behind, reducing the size of the final image and creating a smaller attack surface.
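As an aside, when a multi-stage build misbehaves, it can be handy to build only the first stage and inspect it; the --target flag stops the build at a named stage. A sketch of how that might look here (the debug tag name is an assumption, not from the source):

```shell
# Build only the "builder" stage and tag it for inspection
docker image build --target builder -t app-builder:debug --file .\docker\web\Dockerfile .

# Open an interactive session in the builder image to poke around
docker container run -it app-builder:debug powershell
```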
One other noteworthy item in this Dockerfile is the second to last line, where we use an ENTRYPOINT. An ENTRYPOINT designates which binary is going to be run when this container starts. This is different from what we did in the database Dockerfile, where we defined a CMD. The two look similar, but a CMD is completely replaced by anything passed on the command line after the image name, whereas an ENTRYPOINT always runs and any command-line arguments are appended to it as parameters:
# DB image used CMD, easily overridden with any other command after the image name
$ docker container run -it db-image:v1 dir
...directory listing from working directory C:\init
...container exits
# App image used ENTRYPOINT, command after the image name is passed as an argument to start.ps1
$ docker container run -it app-image:v1 dir
... "dir" is passed as a parameter to the start.ps1 script, where it is ignored
... tries to start the web application
For more information, see the following Docker documentation: https://docs.docker.com/engine/reference/builder/#cmd.
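The distinction can be boiled down to a pair of one-line Dockerfiles. This is a minimal, illustrative sketch (the base image tag is an assumption, not from the source):

```Dockerfile
# cmd-demo: the whole CMD is replaced by whatever follows the image name,
# so `docker container run cmd-demo dir` runs dir instead of the echo.
FROM mcr.microsoft.com/windows/servercore:ltsc2019
CMD ["powershell", "Write-Output 'default command'"]

# entrypoint-demo (a separate Dockerfile): the ENTRYPOINT always runs, and
# `docker container run entrypoint-demo dir` appends "dir" as an argument
# to start.ps1 rather than replacing it:
# FROM mcr.microsoft.com/windows/servercore:ltsc2019
# ENTRYPOINT ["powershell", ".\\start.ps1"]
```

When both are present in one Dockerfile, CMD supplies the default arguments to the ENTRYPOINT, and those defaults are what get replaced by command-line arguments.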
Now, let's build our application image:
docker image build -t app-image:v1 --file .\docker\web\Dockerfile .
...
Successfully built dec1102a58630
Successfully tagged app-image:v1
So, we've built three images: one base image for building web apps (mta-sdk-web-builder:3.5), and two final images to run our database and application (db-image:v1 and app-image:v1). It is time to start our application up and test it:

Figure 14 shows our local build machine's single-node Docker configuration, used for testing the application containers locally. We have our application container, named signup-app, and our database container, named signup-db. The database container's name (signup-db) is particularly important because the application container relies on that name for DNS-based service resolution (a locally routable IP address) of the database. The following commands start the database container and then the application container, attaching both to the shared nat network; containers need to be on the same network to communicate and to resolve each other's names:
# Start the database container
$ docker container run --network nat --name signup-db -d db-image:v1
# Start the application container
$ docker container run --network nat -p 8000:80 --name signup-app -d app-image:v1
The first container run command starts the database container in the background (-d) from the db-image:v1 image, names it signup-db, and attaches it to the nat network. The second container run command starts the application container in the background (-d) from the app-image:v1 image, names it signup-app, and attaches it to the nat network.
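Because Docker's embedded DNS resolves container names on the shared network, configuration inside the application container can refer to the database simply by its container name. As a hypothetical illustration (the database name and credentials below are assumptions, not taken from the source), a SQL Server connection string could look like this:

```
Server=signup-db;Database=SignUp;User Id=sa;Password=<your-sa-password>
```

You can also verify the lookup directly, for example with docker container exec signup-app powershell Resolve-DnsName signup-db.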
On my test machine, I point my browser at the machine's local IP address on port 8000 (using localhost with IIS can be made to work with some permission jostling in IIS, but the dev machine's IP works as-is) and I see the following glorious screen in Figure 15:

We can now stop and remove both containers with the docker container rm -f signup-app signup-db command.