C++ Insights has now been available for more than one and a half years, and I keep getting requests on how to run a local instance. It is what I do myself for my training classes and during conference talks, simply because I do not trust the Wi-Fi at conferences or training facilities. In this article I cover how you can run a local instance of the C++ Insights web front-end together with the same binary the website uses.
When you request a transformation, by pressing the play button or the equivalent shortcut, a REST request is sent to the web server. The Python part processes this request and, if it is valid, invokes a Docker container that contains the C++ Insights binary. There are at least two reasons for this design. First, users have no access to the web server itself, and each invocation is isolated. The second reason, however, is probably more important: when the C++ Insights binary is compiled, all the include paths of the build system get compiled in. This makes it somewhat difficult to port the binary between systems. Keeping it in more or less the same environment it was compiled in makes things easier.
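To make that flow concrete, here is a minimal sketch of the kind of command the Python layer conceptually assembles for each request. The image name cppinsights, the insights binary name, and the flags are assumptions for illustration, not the site's exact invocation; the sketch echoes the command instead of running it so it stays self-contained.

```shell
# Hedged sketch of the per-request container invocation.
# "cppinsights", "insights", and the flags are hypothetical stand-ins.
SOURCE_FILE=source.cpp   # the snippet submitted via the REST request
STD=c++17                # the standard selected in the front-end
CMD="docker run --rm cppinsights insights $SOURCE_FILE -- -std=$STD"
echo "$CMD"              # a real server would execute this instead of echoing it
```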
Setting up the local environment
In that repository, run make get. It downloads the latest pre-built Docker images from DockerHub.
The first image is the run-time environment for C++ Insights. It is the exact same image the website uses.
The second image is the Docker image for the website itself. Careful readers may notice at this point that the website itself does not run in a Docker environment. However, using one for this purpose seems to be the easiest way to distribute it.
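In effect, make get is a convenience wrapper around docker pull for these two images. The image names below are hypothetical placeholders; the repository's Makefile lists the real DockerHub tags. The sketch echoes the pulls as a dry run:

```shell
# Dry-run sketch of what "make get" amounts to.
# The image names are placeholders, not the actual DockerHub tags.
for IMAGE in cppinsights-runtime cppinsights-web; do
  echo "docker pull $IMAGE"   # make get performs the real pull
done
```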
After that, you can start a local instance with make start. You should then have a local instance of C++ Insights running at 127.0.0.1:5000. In case of trouble, run make logs to see what is going on in the container. make stop shuts the instance down.
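Put together, a typical local session looks like the sketch below. The make targets and the address come from the steps above; the commands are echoed as a dry run so the sketch is self-contained, and the curl smoke test is my own addition, not part of the Makefile.

```shell
# Dry run of a typical local session; drop the echos to run it for real.
URL=http://127.0.0.1:5000   # address of the local instance
echo "make start"           # launch the web front-end
echo "curl $URL"            # quick smoke test from another shell (optional)
echo "make logs"            # inspect the container when in trouble
echo "make stop"            # shut the instance down
```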
How it works
This all works because the second Docker container gets access to the host's Docker socket. With that, it is possible to start containers available on the host system from within another container. It is not exactly what is sometimes referred to as a docker-in-docker installation, but it is close. There may be security issues I am unaware of, so I advise you not to use this setup in a production-like environment.
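Sharing the socket boils down to bind-mounting /var/run/docker.sock into the container. The sketch below echoes the command instead of running it, and the image name is a made-up placeholder; the repository's Makefile performs the real mount.

```shell
# Bind-mounting the host's Docker socket lets a container start sibling
# containers on the host. Echoed as a dry run; the image name is made up.
DOCKER_SOCK=/var/run/docker.sock
echo "docker run -v $DOCKER_SOCK:$DOCKER_SOCK cppinsights-web"
```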