# Introduction

Pack your hardware with you when running containerized applications. This is a command-line tool and library that automatically introspects the host machine and modifies the `docker` command to use the host's GPU hardware, without modifying the container itself. It is meant to ease development of GPU-intensive code, e.g. augmented reality or deep learning projects.
# Motivation

The standard approach to getting GPU-related hardware to work inside a container is to modify the standard toolchain with a build process that does something like:
- Build a base container with the application.
- Figure out which driver the host uses.
- Download that driver from a remote URL.
- Add the driver installer to a Dockerfile build script.
- Add an X11 server to the build container.
- Build a new container based on the original application.
Then, to run the container, a bash script wraps the `docker run` command to pass in only the drivers.
See https://github.com/thewtex/docker-opengl-nvidia.git for an example of this approach.
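For illustration only, such a hard-coded run wrapper might look like the following sketch (hypothetical, not taken from the linked project; the driver version and device paths are assumptions that must be edited per host):

```python
# Hypothetical per-host run wrapper in the style described above:
# the driver version and device nodes are baked in by hand.

DRIVER_VERSION = "340.46"                    # assumed; must match the host driver
DEVICE_NODES = ["/dev/nvidia0", "/dev/nvidiactl"]  # assumed NVIDIA device nodes

def run_wrapped(image, command):
    """Build a docker run invocation that shares the X socket and devices."""
    args = ["docker", "run", "--rm",
            "-v", "/tmp/.X11-unix:/tmp/.X11-unix",  # share the host X socket
            "-e", "DISPLAY"]                         # forward the display
    for node in DEVICE_NODES:
        args += ["--device", node]
    args += [image] + list(command)
    return args  # a real wrapper would hand this to subprocess.call

print(run_wrapped("my-app:driver-" + DRIVER_VERSION, ["glxgears"]))
```

The brittleness is visible: every value above is specific to one host, which is exactly what the approach below avoids.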
This approach does not scale, is not robust, and requires a lengthy and complex build process for each user of the application. There are three main issues:
- Different host configurations need different flags.
- A mismatch between the host and client X server versions can cause graphical corruption.
- Modifying the container is very slow, inhibiting development of the underlying application.
Our approach skips modifying the original containerized application in favor of simply linking in the libraries directly from the host. This involves three capabilities:
- Host introspection to determine the correct device nodes and libraries to pass in to the container.
- Tests that verify that the host is configured correctly for the necessary libraries.
- Demos that allow a human to verify the capabilities provided by these libraries.
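As a rough sketch of the host-introspection step (an illustration under assumed paths, not the tool's actual implementation), one can enumerate GPU device nodes, locate GL driver libraries via the dynamic linker cache, and translate both into `docker run` flags:

```python
import glob
import subprocess

def gpu_device_nodes():
    """Find GPU device nodes present on this host (assumed locations)."""
    patterns = ["/dev/nvidia*", "/dev/dri/*"]
    return sorted(node for pat in patterns for node in glob.glob(pat))

def gl_libraries():
    """Locate libGL via the dynamic linker cache (ldconfig -p)."""
    out = subprocess.check_output(["ldconfig", "-p"]).decode()
    return sorted({line.split("=>")[-1].strip()
                   for line in out.splitlines() if "libGL.so" in line})

def docker_flags(devices, libraries):
    """Translate discovered paths into docker run flags."""
    flags = []
    for dev in devices:
        flags += ["--device", dev]           # expose the device node
    for lib in libraries:
        flags += ["-v", "%s:%s:ro" % (lib, lib)]  # bind-mount the library read-only
    return flags
```

Because the devices and libraries are discovered at run time, the same container image works across hosts with different drivers.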
The most similar project to this is Subuser, which also attempts to wrap applications. However, it does not automatically discover all of the correct configuration variables, specifically the vendor-specific device nodes and libraries.
Support is planned initially for CUDA, OpenCL, and audio applications.
# Installation

It should be possible to install the required Python packages through:

```
pip install -r ./requirements.txt
```
Additionally, the application requires `glxinfo` (from the mesa-utils package on Debian) and `strace`, both of which should be available from your package repository.
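A quick way to confirm those external tools are present is to probe the `PATH` (a minimal sketch; `check_missing` is a hypothetical helper, not part of the tool):

```python
import shutil

def check_missing(tools):
    """Return the subset of `tools` not found on PATH."""
    return [t for t in tools if shutil.which(t) is None]

missing = check_missing(["glxinfo", "strace"])
if missing:
    print("Please install: " + ", ".join(missing))
```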
# Usage

To test whether your host system can be configured correctly, run:

```
python ./tackle.py component-test
```

which should output:

```
GL Rendering Component Passed
Host Rendering Component Passed
```
A failure in either of these two tests indicates that the host does not have the necessary libraries configured correctly.

To run the demo:

```
python ./tackle.py component-demo
```
You should see an image similar to the one shown at the top of this post, with the gears moving.
The really cool part of this demo is that it runs in an unmodified, vanilla ubuntu:trusty container, without having to install X11 or the appropriate GPU vendor drivers!