Over the past few years, Docker has emerged as one of the most efficient ways to manage and deploy cloud-based web applications ☁️💻. While container technology has been around since at least 2008 (when LXC was released), Docker helped it go mainstream, fueling the virtual machine vs. container debate.
In short, a container is a portable, isolated environment that hosts a fully functional application together with everything it needs to run. Rather than relying on a full virtual machine per application, Docker uses OS-level virtualization: containers share the host’s Linux kernel, and developers declare their runtime dependencies via a Dockerfile. (On macOS and Windows, Docker runs containers inside a lightweight Linux VM.)
Prior to Docker, developers were forced to manually bootstrap runtime dependencies in order to leverage a virtual machine. In contrast, a container ships with its runtime specified; in fact, a container is intended to contain only its runtime dependencies and the application code. We’ll talk a bit more later about how your Docker parent image will allow you to create containers that already have dependencies like Node and TypeScript installed.
Because a container shares the host’s Linux kernel (or the kernel of Docker’s Linux VM on other platforms), it can ship without a dedicated operating system of its own. Developers can rest assured that their applications will run the same, regardless of the local machine’s operating system or specifications.
Once you go through the Docker installation process, make sure the Docker application is running by checking for the little whale in your menu bar (I’m using a Mac for this tutorial).
Pull up the command line and run `docker run hello-world`; you should get this output:
What we’re doing here is fetching the `hello-world` image from Docker Hub and running it on our local machine. This image is pretty basic: its only job is to print the info above to the console (the exact steps are outlined in the output itself).
Now let’s run a different, more complex image: `docker run -it ubuntu bash`. This will pull the latest Ubuntu image and run it as a container on your machine. The `bash` argument at the end of the command opens a shell session inside the new container, so you can issue commands to it from your existing console. If you run `uname`, the console prints “Linux”: your container is running against a Linux kernel (on a Mac, the one inside Docker’s Linux VM). Exit the Ubuntu shell by typing `exit` and pressing enter. If you run `uname` again, you’ll see the “real” kernel type your machine is running on (in my case it’s “Darwin”).
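This tutorial assumes you already have a Node/TypeScript application that `tsc` compiles into `dist/`. If you want a minimal stand-in to follow along with, something like this works (the file name, function, and output are placeholders, and they assume a tsconfig with `rootDir: "src"` and `outDir: "dist"`):

```typescript
// src/index.ts: a hypothetical minimal app so that `node dist/`
// (the Dockerfile entrypoint used later) has something to execute
// once `tsc` has compiled src/ into dist/
export function greet(name: string): string {
  return `Hello from the container, ${name}!`;
}

console.log(greet("Docker"));
```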
Navigate into your application’s directory and run `touch Dockerfile`. Then go ahead and copy the code below into your `Dockerfile`. We’ll walk through each line below.
Note: your local Dockerfile should NOT have a `.txt` extension. You should leave the file extension blank (gist doesn’t allow empty file extensions, so this one has a `.txt`).
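As a reference point, a Dockerfile for this kind of setup might look like the sketch below, assuming a Node/TypeScript app that `tsc` compiles into `dist/` (the Node version and exposed port are assumptions, and `typescript` is assumed to be listed in `package.json`):

```dockerfile
# Start from an official Node parent image (version is an assumption)
FROM node:18

# Set the working directory inside the image
WORKDIR /usr/src/app

# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm install

# Copy the rest of the source and compile TypeScript into dist/
COPY . .
RUN npx tsc

# Document the port the app listens on (80 is assumed, matching
# the -p 4000:80 run command later in this tutorial)
EXPOSE 80

# Run the compiled app; ENTRYPOINT makes this hard to override at run time
ENTRYPOINT ["node", "dist/"]
```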
Below is a brief explanation of each consequential line in this file.
You’re bound to run into Dockerfiles that use `CMD` instead of `ENTRYPOINT`. The difference is that `CMD` is much easier to override when running a Docker image, so if you want flexibility in what your container does after it’s instantiated, `CMD` is the better choice. For our purposes, we want `node dist/` to be the only option, so we’re using `ENTRYPOINT`.
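As a concrete illustration of the difference (the override commands in the comments are hypothetical):

```dockerfile
# Variant 1: CMD is a default that arguments to `docker run` replace.
#   docker run dockertsc node --version   -> runs `node --version`, not the app
CMD ["node", "dist/"]

# Variant 2: ENTRYPOINT is fixed; extra arguments are appended to it.
#   docker run dockertsc --help           -> runs `node dist/ --help`
ENTRYPOINT ["node", "dist/"]
```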
Now that we have all the components ready to go, it’s time to build our Docker image. Make sure you’re in your application’s directory and run `docker build -t dockertsc .`. This will build a Docker image from the contents of your current directory and tag it with a human-readable local repository name (in our case, `dockertsc`).
Now you can run `docker images` and you’ll see your new image (ID included) right at the top.
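To start a container from the new image, run it with a port mapping. The port numbers here are taken from the `docker run -p 4000:80` command used later in this tutorial:

```shell
# Map host port 4000 to container port 80 (ports assumed from the
# remote-run command later in this tutorial)
docker run -p 4000:80 dockertsc
```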
To stop your container, open a new command line tab. Grab the container’s ID by running `docker ps`, then run `docker stop containerID` (this command may take a few seconds).
Just like remote code repositories — images can be hosted and pulled. Next we’re going to push our image to a remote repo so other folks can access it.
The command to associate a local image with a remote repository is `docker tag image username/repository:tag`.
Note: `tag` is a handy way for you to distinguish between images stored in the same repo (very useful for versioning). It’s important to note that the default tag is `latest`.
Now upload your new image to the remote with `docker push image`, where `image` is the name (`username/repository`) of the new image you just tagged (not the original image tagged `latest`). Use `docker images` to list all of your images and make sure you’re using the correct name.
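Putting the tag-and-push flow together (the username and tag here are placeholders):

```shell
# Associate the local image with a remote repository (placeholder username)
docker tag dockertsc yourusername/dockertsc:v1

# Log in, then push the tagged image to Docker Hub
docker login
docker push yourusername/dockertsc:v1
```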
Now you can run the remote image with `docker run -p 4000:80 username/repository:tag`. This does the same thing as our `run` call above, but if Docker doesn’t find the image locally, it will pull the tagged version from the remote repository.
Now you’re up and running with Docker! 🚢 🐳 🚢