Overview

This archive is organized as follows. The 📁 `docs` folder contains the HTML documents with this description, while the rest of the root directory is a snapshot of our source code repository with the following structure:
| File / folder | Description |
|---|---|
| 📁 `benchmark/` (`*.py`, `AquilaMISRunnerv4.ipynb`), `precompute_embeddings.py` | The code implementing the QPU calculations: Python implementation files for the IBM and D-Wave devices (including the script for preparing the D-Wave embeddings locally) and the Jupyter notebook used to run the experiments on the QuEra device. |
| 📁 `instances/` (`orig_instances.zip`, `QUBO_formulations.zip`) | Problem instances (see the format descriptions). The two archives contain the original instances and the respective QUBO formulations (extracted to …). |
| 📁 `figures/` (`*.png`, `*.tex`) | Calculation outputs: the generated figures, including the … |
| 📁 `run_logs/` with subfolders `dwave/`, `quera/`, `ibm-qpu/`, and `ibm-sim/` (each containing `samples-csv/` and `raw_logs.zip`; `dwave/` also holds `embeddings.zip`), plus `summaries/` and `classic_solutions/` | Run logs: the results of the computations. This includes the raw QPU outputs (in the dedicated `raw_logs.zip` archives). |
| 📁 `post_processing/`, `qubo_utils.py`, `make_suppl_sample_figures.sh`, `compress_images.sh` | Auxiliary scripts for post-processing, including summarizing the log files and producing the figures. |
| `MWC_inst.py`, `MIS_inst.py`, `TSP_inst.py`, `download-and-extract-TSP.sh` | Instance generation code. Note that the TSP-related code requires the original TSPLIB instance collection; the last file contains the commands used to download and unpack these instances on our GNU/Linux-based system. |
| `classic_solve_UDMISes.py`, `classic_solve_MWCs.py`, `classic_solve_MWC_QUBO_only.py`, `classic_solve_TSPs_MTZ.py` | Implementation of the baseline computations (with Gurobi). |
| `quera_suppl_samples.ids`, `dwave_suppl_samples.ids`, `ibm-qpu_suppl_samples.ids`, `ibm-sim_suppl_samples.ids` | Instance IDs used to create the "sample output" figures in the appendix. (Corresponding files with ….) |
| `data_sel.ipynb`, `instance_figs.ipynb` | A few examples of dataset visualisation in the form of Jupyter notebooks (not critical for reproducing the results). |
| `Makefile` | A makefile with all computational recipes relevant to the post-processing (i.e., producing the figures from the log files). |
Computational pipeline

Most of the figures in the paper are generated automatically, as discussed below. The figures are created by dedicated post-processing scripts from intermediate summary data tables, which are in turn constructed from the raw log files (in JSON format) obtained from the QPUs. Assuming the necessary software is present and the instances and run logs are unpacked, one can rebuild any figure by issuing a `make` command from the root directory. GNU make considers the files' timestamps and re-runs the calculations as necessary. For example:

```
make figures/inst_summary.png   # to rebuild Figure 4, or
make figures                    # to rebuild all figures for the paper
```
Interactions with the QPUs are device-specific, and both the formats and the procedures may be subject to updates. (Notably, the qiskit API and remote-access guidelines have changed since we ran our computational experiments.) However, we believe the code in the 📁 `benchmark` directory provides a good starting point for building upon these results.
The key figures in the paper are produced as follows:
| Figure or table | Filename | Plotting code | Data files |
|---|---|---|---|
| Fig. 4 | `inst_summary.png` | `inst_summary.R` | `quera_stats.csv`, `quera_summary.csv`, `dwave_stats.csv`, `dwave_summary.csv`, `embeddings.csv`, `ibm-qpu_stats.csv`, `ibm-qpu_summary.csv`, `ibm-sim_stats.csv`, `ibm-sim_summary.csv`, `solutions_all.csv` |
| Fig. 5a, Fig. 5b, Table 5 | `qpu_vs_cpu_runtimes.png`, `hard_instances.png`, `qpu_vs_cpu.tex` | `QPU_vs_classic_figures.R` | `TSP_MTZ.csv`, `UDMIS.csv`, `MWC_QUBO.csv`, `ibm-qpu_summary.csv`, `ibm-sim_summary.csv`, `dwave_summary.csv`, `embeddings.csv`, `quera_summary.csv` |
| Fig. 6, Fig. 11, Table 7, Fig. 18 | `runtime_vs_embtime_TSP.png`, `runtime_vs_embtime_MWC.png`, `runtime_vs_embtime_UDMIS.png`, `dwave_qubit_cost_logs_regline.png`, `regression_summary.tex`, `dwave_soltimes_noemb.png`, `dwave_embtime_share.png` | `dwave.R` | `dwave_summary.csv`, `embeddings.csv` |
| Fig. 9 | `nonzeros_vs_size.png` | `igs_draw_nonzeros.R` | `instances.csv`, `all_devices_stats.csv` |
| Fig. 10 | `runs_summary.png` | `inst_summary_by_inst.R` | `quera_summary_full.csv`, `dwave_summary_full.csv`, `embeddings.csv`, `ibm-qpu_summary_full.csv`, `ibm-sim_summary_full.csv`, `solutions_all.csv` |
| Fig. 12, 13, and 15 | `*.json.hist.png` files in … | `make_suppl_sample_figures.sh` | (raw QPU logfiles) |
Figures 7, 8, and 14 (instance examples) are made with ad-hoc code illustrated in the Jupyter notebook `data_sel.ipynb`. Please refer to the `Makefile` for more details on the computational recipes.
Implementation details

Our experimental setup broadly comprises five parts: instance generation, baseline (classical) calculations, QPU interactions, post-processing of the logs, and figure production.

Instance generation code is written in Python and executed locally on a laptop. The relevant modules are `MWC_inst.py`, `MIS_inst.py`, and `TSP_inst.py`, with `download-and-extract-TSP.sh` fetching the TSPLIB collection required by the TSP-related code.
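The instance archives also include QUBO formulations of these problems. As an illustration only, the sketch below shows a standard penalty-based MIS-to-QUBO conversion; the function name, penalty value, and sign conventions are assumptions and may differ from the formulations actually shipped in `QUBO_formulations.zip`.

```python
# Illustrative sketch: a standard penalty-based MIS-to-QUBO conversion.
# Conventions (minimization, penalty = 2.0) are assumptions, not necessarily
# those used for the archive's QUBO_formulations.zip.
def mis_to_qubo(n_nodes, edges, penalty=2.0):
    """Return a QUBO dict Q such that minimizing sum_ij Q[i,j]*x_i*x_j solves MIS."""
    Q = {(i, i): -1.0 for i in range(n_nodes)}    # reward for selecting a vertex
    for i, j in edges:
        Q[(min(i, j), max(i, j))] = penalty       # penalize selecting both endpoints of an edge
    return Q

# Example: a path graph on four vertices; any optimum selects two non-adjacent vertices.
Q = mis_to_qubo(4, [(0, 1), (1, 2), (2, 3)])
```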
Baseline calculations (solving the problems with a classical solver) are done using Gurobi in the following modules: `classic_solve_UDMISes.py` solves the UDMIS instances, recording the objectives and solution times; `classic_solve_MWCs.py` does the same for the MaxCut instances; `classic_solve_MWC_QUBO_only.py` is an illustration that compares the QUBO and LBOP formulations for MaxCut; and `classic_solve_TSPs_MTZ.py` solves TSP in the MTZ formulation, saving the objectives and solution times.
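For reference, the sketch below shows the kind of Gurobi model such a baseline script builds, using an unweighted MIS instance given as an edge list; the helper name and instance format are illustrative only, as the actual scripts read the archive's own instance files.

```python
# A minimal, illustrative Gurobi baseline for MIS (unweighted); the actual
# classic_solve_*.py scripts read the archive's instance format and may use
# different parameters.
import gurobipy as gp
from gurobipy import GRB

def solve_mis(n_nodes, edges, time_limit=60):
    model = gp.Model("MIS")
    model.Params.TimeLimit = time_limit
    x = model.addVars(n_nodes, vtype=GRB.BINARY, name="x")
    for i, j in edges:                        # independence: at most one endpoint per edge
        model.addConstr(x[i] + x[j] <= 1)
    model.setObjective(x.sum(), GRB.MAXIMIZE)
    model.optimize()
    return model.ObjVal, model.Runtime        # objective and wall-clock solution time

obj, runtime = solve_mis(4, [(0, 1), (1, 2), (2, 3)])
```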
Interactions with the QPUs are programmed in Python and implemented via remote interfaces. The relevant code for D-Wave and IBM is stored in the 📁 `benchmark` folder and comprises: a module implementing abstract interfaces for instance storage and QPU solvers; the core QPU-interaction code; an example top-level script used to run the QPU experiments for IBM and D-Wave; example wrapper scripts for running the experiments on the D-Wave, IBM, and IBM-simulator machines; `precompute_embeddings.py`, which precomputes the D-Wave embeddings for the instances in the QUBO folder; and the Jupyter notebook `AquilaMISRunnerv4.ipynb`, used to run the experiments on the QuEra device.
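As a minimal illustration of the remote-interface side (not the repository's actual wrapper code), a QUBO can be submitted to a D-Wave QPU along the following lines; the toy QUBO `Q` and the `num_reads` value are placeholders:

```python
# Minimal illustration of a D-Wave QUBO submission; not the repository's
# actual wrapper code. Requires a configured API token (see the environment
# section below).
from dwave.system import DWaveSampler, EmbeddingComposite

Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 2.0}     # toy QUBO as a dict
sampler = EmbeddingComposite(DWaveSampler())       # embedding found on the fly
sampleset = sampler.sample_qubo(Q, num_reads=100)
print(sampleset.first.energy, sampleset.first.sample)
```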
Post-processing of the log files (summarizing the raw data) is done locally in Python, with two modules: one implements the JSON parsers that summarize the raw QPU output files, and the other is a utility script that selects the instance attempts ("runs") used for the main-text figures and tables.
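The sketch below conveys the general shape of this summarization step; the JSON field names, file layout, and output columns are hypothetical and do not necessarily match the archive's actual schema.

```python
# Hypothetical sketch of summarizing raw QPU JSON logs into a CSV table.
# Field names ("instance_id", "samples", "energy", "qpu_access_time") and the
# paths are assumptions; see the actual post-processing modules for the real schema.
import csv
import glob
import json

rows = []
for path in glob.glob("run_logs/dwave/raw_logs/*.json"):
    with open(path) as f:
        log = json.load(f)
    energies = (s["energy"] for s in log.get("samples", []))
    rows.append({
        "instance_id": log.get("instance_id"),
        "best_energy": min(energies, default=None),
        "qpu_access_time": log.get("qpu_access_time"),
    })

with open("summary.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["instance_id", "best_energy", "qpu_access_time"])
    writer.writeheader()
    writer.writerows(rows)
```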
Figure production from the `.csv` files, including the necessary technical data transformations, is done in R; the respective code comprises the `.R` files in the 📁 `post_processing` directory.
Preparing Python and R environments

To run the code, a minimal environment setup might be necessary for both Python and R. This section provides a basic guideline; note that other software might (theoretically) be needed, depending on your system.

Note

For Python environment management we used the recommended venv approach (https://docs.python.org/3/library/venv.html). For example, the virtual environment can be created and activated with:

```
$ python3.10 -m venv .venv
$ source .venv/bin/activate
$ # then, we can install the necessary packages there with:
$ pip install -r requirements.txt
```
The Python code relies on the packages specified in `requirements.txt` (the standard packages-list format).
Note that the code for solving the problems on the classical CPU relies on the Gurobi stand-alone solver.
Note that accessing the QPU hardware requires you to supply your own access tokens. In particular, the file `private.py` is deliberately not present in the 📁 `benchmark` folder; there one should supply an individual access token in the form:

```python
DWave_token = '<your-access-token>'
```
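For example, a script could then pick up the token along the following lines; this import pattern is an assumption for illustration, not necessarily how the benchmark code consumes `private.py`:

```python
# Hypothetical usage of the token from private.py; the actual benchmark
# scripts may wire this up differently.
from dwave.system import DWaveSampler, EmbeddingComposite
from private import DWave_token

sampler = EmbeddingComposite(DWaveSampler(token=DWave_token))
```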
To install the necessary R libraries, run the following from within an R session:

```r
install.packages(c("dplyr", "plyr", "forcats", "ggplot2", "gridExtra",
                   "Hmisc", "kableExtra", "knitr", "optparse", "stargazer",
                   "stringr", "tidyr", "viridis", "ggh4x"))
```