Resources for “Generating Instructions at Different Levels of Abstraction”
This work was accepted as a long paper at COLING 2020; you can read a pre-print on arXiv.
You can watch an example video on YouTube showing how instructions for building a bridge are given with the three different strategies.
When generating technical instructions, it is often convenient to describe complex objects in the world at different levels of abstraction. A novice user might need an object explained piece by piece, while for an expert, talking about the complex object (e.g. a wall or railing) directly may be more succinct and efficient. We show how to generate building instructions at different levels of abstraction in Minecraft. We introduce the use of hierarchical planning to this end, a method from AI planning which can capture the structure of complex objects neatly. A crowdsourcing evaluation shows that the choice of abstraction level matters to users, and that an abstraction strategy which balances low-level and high-level object descriptions compares favorably to ones which don’t.
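The idea of instructing at different abstraction levels can be sketched as a hierarchical decomposition: a complex object expands into sub-objects, which expand further until only primitive block-placing actions remain. The following is a minimal, self-contained illustration of this principle, not the planner used in the paper; the object names, methods, and the `level` parameter are invented for the example.

```python
# Toy hierarchical decomposition: each complex object is defined in
# terms of sub-objects or primitive actions. (Illustrative only; the
# paper uses a full hierarchical planner.)
METHODS = {
    "bridge": ["railing", "walkway", "railing"],
    "railing": ["place row of blocks"],
    "walkway": ["place row of blocks", "place row of blocks"],
}

def decompose(task, known):
    """Expand `task` into instructions, stopping early for objects the
    listener is assumed to already know (the abstraction level)."""
    if task not in METHODS or task in known:
        return [task]  # emit as a single instruction
    steps = []
    for sub in METHODS[task]:
        steps.extend(decompose(sub, known))
    return steps

# A novice gets low-level, block-by-block steps; an expert who knows
# what railings and walkways are gets high-level instructions.
novice = decompose("bridge", known=set())
expert = decompose("bridge", known={"railing", "walkway"})
```

A "balanced" strategy, as evaluated in the paper, would sit between these two extremes, expanding some objects and naming others directly.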
Replicating our set-up and our results
We use MC-Saar-Instruct for our experiment; the study set-up is completely automated by our experiment script. Clone that repository and run the script to download, compile, set up, and start everything.
If you want to reproduce the setup exactly, do the following:
- set up a MariaDB database as described in the broker documentation on setting up the database;
- generate a personal access token on GitHub (everything is public, but GitHub requires authenticated access for some of the artifacts);
- add the following properties to your configuration:

```
gpr.user=[Your GitHub Username]
gpr.key=[The generated personal access token]
```
- check out the automatic setup at the correct revision:

```
git clone https://github.com/minecraft-saar/automatic-experiment-setups
cd automatic-experiment-setups
git checkout d4afa70bc15faacfe9742a38bce9ef5532ae451c
```
- now set up and start everything.
With the script running, you will have a Minecraft server which participants can connect to, as well as the three architects described in the paper. A web interface gives you an overview of the running and finished games.
The experiment evaluation is done by the experiment analysis program. It connects to the database and writes text-based analyses as well as a tab-separated data file. This file can then be processed by the RMarkdown script we also publish in that repository, which generates a report with all the numbers published in our paper.
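If you want to inspect the tab-separated file with your own tooling instead of the published RMarkdown script, parsing it is straightforward. The sketch below uses Python's standard `csv` module; the column names (`game_id`, `architect`, `success`) are invented for the example and will not match the real file's schema.

```python
import csv
import io

# Hypothetical excerpt of a tab-separated results file; the actual
# columns produced by the analysis program differ.
tsv = (
    "game_id\tarchitect\tsuccess\n"
    "1\tblock\t1\n"
    "2\thigh-level\t0\n"
    "3\tmedium\t1\n"
)

# csv.DictReader handles tab-separated data via the delimiter argument.
rows = list(csv.DictReader(io.StringIO(tsv), delimiter="\t"))

# Group outcomes per architect and compute a success rate.
by_architect = {}
for row in rows:
    by_architect.setdefault(row["architect"], []).append(int(row["success"]))

success_rate = {a: sum(v) / len(v) for a, v in by_architect.items()}
```

For the real file, replace the `io.StringIO` wrapper with `open(path, newline="")` and adjust the column names.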
We also make available the complete anonymized data gathered during our experiments. You can import that SQL dump into your database for further analysis. In addition, you can have a look at the RMarkdown analysis computing the numbers shown in our paper.
As always: If you have questions, do not hesitate to contact an author.