The framework supports and encourages local challenge development, but you may need to install some of the dependencies listed below (especially if you intend to use our helper scripts instead of building images manually).
Building and running challenges
You can build and run TFW exercises locally with just Docker; however, it is advisable to use our tfw.sh script for ease of development unless you prefer building everything manually.
On top of Docker, our tfw.sh script requires the following:
- GNU coreutils
- bash
- git
By default, your IDE will fail to autocomplete TFW code and will complain about missing dependencies. To fix this, you should install the tfw pip package in your dev virtualenv:
# SSH:
pip install git+ssh://git@github.com/avatao-content/baseimage-tutorial-framework.git
# HTTPS:
pip install git+https://github.com/avatao-content/baseimage-tutorial-framework.git
Please note that we use our Docker baseimage to distribute the TFW codebase (available on [Docker Hub](https://hub.docker.com/r/avatao/baseimage-tutorial-framework)), and this pip package only serves to make development more comfortable (which is why it is intentionally not published on PyPI).
Developing the baseimage itself
If you install everything mentioned above and clone the baseimage repository next to an instance of a challenge repository (preferably test-tutorial-framework), you should be good to go (the tfw.sh script supports developing TFW itself).
Make sure that your clone of the baseimage repository shares the same parent folder as your challenge (test-tfw), so tfw.sh can find it.
You might also want to install the TFW pip package in editable mode (pip install -e <baseimage_path>), so that changes propagate instantly to your development environment.
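For reference, such a workspace would look something like this (the name of the parent folder is up to you):
workspace
├── baseimage-tutorial-framework
└── test-tutorial-framework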
Developing the TFW frontend
Normally, the frontend is only present in images as a bunch of build artifacts served by nginx, but if you wish to develop the frontend itself, you may need the following dependencies locally:
- yarn
- Angular CLI
To begin working on the frontend you should:
- Clone the frontend repository
- Install its npm dependencies using yarn install
- Run it using yarn start
- Your frontend should be available on localhost:4200
Please note that you will also need to run a TFW challenge locally for the frontend to connect to (preferably test-tfw using tfw.sh). TFW based challenges expose port 8888. Our frontend repository comes with a proxy.conf.json which automatically forwards the appropriate HTTP requests from the frontend to this port when using yarn start or ng serve.
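For reference, an Angular CLI proxy configuration of this kind looks roughly like the sketch below; the proxied paths here are placeholders, since the real routes are defined by the proxy.conf.json shipped with the frontend repository:
{
  "/api": {
    "target": "http://localhost:8888",
    "secure": false,
    "ws": true
  }
}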
It is also possible to treat the frontend as a Docker image for development if you'd rather avoid installing messy JS packages, but expect slower build times in this case.
Inside test-tutorial-framework
The repository of a tutorial-framework based challenge is quite similar to that of a regular challenge. The project root should look something like this:
your_repo
├── solvable
│ └── [TFW based Docker image]
├── controller
│ └── [solution checking]
├── metadata
│ └── [challenge descriptions, writeups, etc.]
└── config.yml
The only notable difference is that the solvable Docker image is a child of our baseimage: solvable/Dockerfile begins with FROM avatao/baseimage-tutorial-framework.
From now on we are going to focus on the solvable image.
Basics of a TFW based challenge
Let us take a closer look at solvable:
solvable
├── Dockerfile
├── nginx         webserver configurations
├── supervisor    process manager (init replacement)
└── src           example source code
Note that our baseimage requires the nginx and supervisor folders to be in these exact locations and to be used as described below.
This is a contract your image must comply with.
The src directory contains a simple example of using TFW.
nginx
All TFW based challenges expose a single port defined in the TFW_PUBLIC_PORT envvar, which is set to 8888 by default.
This means that in order to run multiple HTTP services we must use a reverse proxy.
Any .conf files in solvable/nginx/ will be automatically included in the nginx configuration.
In case you want to serve a website or service, you must proxy it through TFW_PUBLIC_PORT.
This is really easy: just create a config file in solvable/nginx/ similar to this one:
location /yoururl {
    proxy_pass http://127.0.0.1:3333;
}
After this you can access the service running on port 3333 at http://localhost:8888/yoururl.
It is very important to understand that from now on your application must behave well behind a reverse proxy.
What this means is that all hrefs on your HTML pages must point to the proxied paths (e.g. links should refer to /yoururl/register instead of /register).
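For example, a link in one of your templates should be written like this:
<!-- works behind the proxy: includes the /yoururl prefix -->
<a href="/yoururl/register">Register</a>
<!-- would break behind the proxy: resolves outside /yoururl -->
<a href="/register">Register</a>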
You can learn about configuring nginx in this handy little tutorial.
supervisor
In most Docker containers there is a single process running (with PID 1 inside the PID namespace).
When working with TFW you can run as many processes as you want by using supervisord.
Any .conf files in the solvable/supervisor/ directory will be included in the supervisor configuration.
The programs you define this way are easy to manage (starting/stopping/restarting) using the supervisorctl command line tool or our built-in event handler.
You can even configure your processes to start with the container by including autostart=true in your configuration file.
To run your own webservice, for instance, you need to create a config file in solvable/supervisor/ similar to this one:
[program:yourprogram]
user=user
directory=/home/user/example/
command=python3 server.py
autostart=true
This starts the /home/user/example/server.py script using python3 after your container has entered the running state (because of autostart=true; supervisor does not start programs by default).
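Once your container is up, such a program can be managed from a shell inside it with the standard supervisorctl commands, for example:
supervisorctl status                # list managed programs and their states
supervisorctl restart yourprogram   # restart the service defined above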
You can learn more about configuring supervisor here.
src
This folder contains a template setup of our pre-written event handlers and example FSMs. Note that this is not a part of the framework by any means; these are just simple examples.
solvable/src
├── event_handler_main.py   event handlers implemented in Python
├── frontend_config.yaml    YAML configuration of the frontend (parsed by event_handler_main)
├── pipe_io_main.py         spawns POSIX named pipes capable of communicating with the TFW server
├── webservice/             an example webserver
├── test_fsm.py             example FSM in Python
└── test_fsm.yml            example FSM in YAML
event_handler_main.py contains example usage of our pre-defined event handlers written in Python3.
As you can see they run in a separate process (set up in solvable/supervisor/event_handler_main.conf).
These event handlers could be implemented in any language that has ZMQ bindings.
Note that you don't have to use all our event handlers.
Should you want to avoid using a feature, you can just delete the appropriate event handler from event_handler_main.py.
pipe_io_main.py runs proxy event handlers capable of creating and communicating over POSIX named pipes.
These allow you to send/receive messages to/from the TFW server using the open(), write() and read() system calls instead of ZMQ sockets.
For example, you could send a message with the command echo [some JSON] > /tmp/tfw_send in a terminal.
Or you could use a readline function in any programming language to receive a message.
It also monitors the /run/tfw/ directory for the creation of new pipes.
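As a rough sketch, reading messages from such a pipe in Python could look like this (the receive pipe path below is a placeholder; use whichever pipes pipe_io_main.py actually creates in your image):
# Placeholder path: substitute the receive pipe created by pipe_io_main.py
with open('/run/tfw/example_recv', 'r') as pipe:
    for message in pipe:  # each line is a single JSON message from the TFW server
        print(message.strip())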
test_fsm.yml and test_fsm.py are implementations of the same FSM in YAML and Python to provide you with examples of creating your own machine.
It is generally a good idea to separate these files from the rest of the stuff in solvable, so it is good practice to create a separate src directory.
FSM
A good state machine is the backbone of a good TFW challenge.
There are two ways to define a state machine:
- Using a YAML configuration file
- Implementing it in Python
The first option allows you to handle FSM callbacks and custom logic in any programming language (not just Python) and is generally really easy to work with (you can execute arbitrary shell commands on events).
You should choose this method unless you have good reason not to.
This involves creating your YAML file (see test_fsm.yml for an example) and parsing it using our YamlFSM class (see event_handler_main.py for an example).
The second option allows you to implement your FSM in Python, using the transitions library.
To do this just subclass our FSMBase class or use our LinearFSM class for simple machines (see test_fsm.py for an example).
In your FSM you can define callbacks for states and transitions.
State callbacks:
- on_enter
- on_exit
Transition callbacks:
- before
- after
In your YAML file you can use these in the state and transition objects as keys, then add a shell command to run as a value (again, see test_fsm.yml for examples).
It is also possible to add preconditions to transitions.
This is done by adding a predicates key with a list of shell commands to run.
If you do this, the transition will only succeed if the return code of every predicate was 0 (as per the Unix convention for success).
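To illustrate the concepts above, a state machine with callbacks and predicates might be sketched like this (treat test_fsm.yml as the source of truth for the actual schema; the transition field names below follow the transitions library and are an assumption here):
states:
  - name: start
    on_enter: 'echo "entered start" >> /tmp/fsm.log'    # state callback
  - name: done
transitions:
  - trigger: step_1
    source: start
    dest: done
    before: 'echo "leaving start" >> /tmp/fsm.log'      # transition callback
    predicates:
      - 'test -f /home/user/example/done_marker'        # every predicate must exit with 0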
Our YamlFSM implementation also supports Jinja2 templates inside the YAML config file (examples in test_fsm.yml).