Call for Artifacts

Reproducibility of experiments is crucial to foster open, reusable, and trustworthy research. To improve and reward reproducibility, iFM 2023 has an optional artifact evaluation process for accepted papers.

The goals of the artifact evaluation are manifold. We want to encourage authors to provide more substantial evidence for their papers and to reward those who aim for reproducibility of their results and therefore create artifacts. We also want to give more visibility and credit to the effort of tool developers in our community. Furthermore, we want to simplify the independent replication of the results presented in a paper and to ease future comparisons with existing approaches.

Artifacts of interest include (but are not limited to):

  • Software, tools, or frameworks
  • Datasets
  • Test suites
  • Machine-checkable proofs
  • Protocols used for empirical studies
  • Any combination of them
  • Any other digital artifact described in the paper

Artifact submission is optional. Papers whose artifacts are successfully evaluated will be awarded one or more artifact badges (see Awarding), but the result of the artifact evaluation will not alter the paper’s acceptance decision. We aim to assess the artifacts themselves, not the quality of the research linked to the artifact, which has already been assessed by the iFM 2023 program committee. The goal of our review process is to be constructive and to improve the submitted artifacts. An artifact should be rejected only if it cannot be improved to sufficient quality within the given time frame or if it is inconsistent with the paper. To credit the effort of tool developers, we plan to apply for a special issue of the Original Software Publication track in Science of Computer Programming. Authors of selected artifacts will be invited to contribute to this issue.

Important Dates

  • 17 August 2023 - Artifact registration deadline
  • 24 August 2023 - Artifact submission deadline
  • 31 August 2023 - Test phase notification
  • 01 - 10 September 2023 - Communication phase and author fixes
  • 20 September 2023 - Final artifact notification

Artifact Evaluation

All artifacts are evaluated by the artifact evaluation committee. Each artifact will be reviewed by at least two committee members. Reviewers will read the accepted paper and explore the artifact to evaluate how well the artifact supports the claims and results of the paper. The evaluation is based on the following questions.

  • Is the artifact well-documented?
  • Is the artifact consistent with the paper and the claims made by the paper, e.g., does it significantly contribute to the generation of the paper’s main results?
  • Is the artifact complete, i.e., how many of the results of the paper are replicable?
  • Are the results of the paper replicable through the artifact, e.g., can the included software be used to generate the results of the paper, and the included data be accessed and manipulated?
  • Is the artifact easy to use?
  • Does the artifact provide a proper and explicitly documented license?
  • Is the artifact publicly and permanently available?

Test Phase

In the test phase, reviewers check whether the artifact is functional, i.e., they look for setup problems (e.g., corrupted or missing files, crashes on simple examples). If any problems are detected, the authors are notified of the outcome and asked for clarification. Authors will be given enough time (see Important Dates) to address the reviewers’ comments and fix any problems.

Assessment Phase

In the assessment phase, reviewers will try to reproduce any experiments or activities and evaluate the artifact w.r.t. the questions detailed above. The final review is communicated using EasyChair.

Awarding

Authors may use all granted badges on the title page of the respective paper. iFM awards the evaluation and availability badges of EAPLS. The availability badge will be awarded if the artifact is relevant, adds value beyond the text in the paper, and is made permanently and publicly available with a DOI. We recommend services like Zenodo or figshare for this.

The evaluation badge has two levels, functional and reusable. Each successfully evaluated artifact receives at least the functional badge; the reusable badge is granted to artifacts of very high quality. Detailed guidelines for both levels are provided by EAPLS.

Artifacts that are not exercisable, for example protocols used for empirical studies, will be considered only for the availability badge, as the functional and reusable badges are not applicable.

Artifact Submission

An artifact submission consists of

  • An abstract, to be written directly in EasyChair, that:
    1. summarizes the artifact and explains its relation to the paper,
    2. states which badge the authors are submitting for,
    3. mentions where the artifact documents how to perform the test phase and how to reproduce the results of the paper,
    4. includes a URL (we encourage you to provide a DOI) to a .zip file of your artifact containing
      • a license file that allows the artifact evaluation committee to evaluate the artifact,
      • clear documentation of how to perform the test phase, and
      • documentation of how to reproduce the results of the paper, and
    5. gives the SHA256 checksum of the .zip file.
  • A .pdf file of the most recent version of the accepted paper, which may differ from the submitted version to take reviewers’ comments into account.

Please also look at the Artifact Packaging Guidelines below for more detailed information about the contents of the artifact.

The abstract and the .pdf file of your paper must be submitted via EasyChair:

https://easychair.org/conferences/?conf=ifm-2023

We need the checksum to ensure the integrity of your artifact. You can generate the checksum using the following command-line tools.

  • Linux: sha256sum <file>
  • Windows: CertUtil -hashfile <file> SHA256
  • macOS: shasum -a 256 <file>
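For example, on Linux you can write the checksum to a file and later verify the archive against it (artifact.zip is a placeholder name):

sha256sum artifact.zip > artifact.zip.sha256
sha256sum -c artifact.zip.sha256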

If you cannot submit the artifact as requested or encounter any other difficulties in the submission process, please contact the artifact evaluation chairs prior to submission.

Artifact Packaging Guidelines

We expect authors to package their artifact (.zip file) and write their instructions such that the artifact evaluation committee can evaluate the artifact within the iFM 2023 virtual machine (VM). The VM is created with VirtualBox 7.0.6 and based on a minimal installation of Ubuntu 22.04 LTS with the following additional packages installed:

build-essential 
mono-complete 
clang 
cmake 
openjdk-11-jre 
openjdk-11-jdk 
python3-pip 
ruby 
rustc 
gcc-multilib 
g++-multilib
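If you want to mirror this environment on your own Ubuntu 22.04 machine while preparing the artifact, the same packages can be installed with apt (a convenience sketch, not an official setup script):

sudo apt-get update
sudo apt-get install build-essential mono-complete clang cmake \
  openjdk-11-jre openjdk-11-jdk python3-pip ruby rustc \
  gcc-multilib g++-multilib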

Moreover, the VirtualBox guest additions are installed in the VM, so a shared folder from a host computer running VirtualBox can easily be connected (see the example below). The username and password of the default/root user are ifm2023 / ifm2023.
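For instance, a shared folder can be attached from the host using VBoxManage; the VM name "ifm2023" and the host path below are placeholders, and the same can be configured through the VirtualBox GUI:

VBoxManage sharedfolder add "ifm2023" --name shared \
  --hostpath /path/on/host --automount

Inside the guest, auto-mounted folders appear under /media/sf_<name> (here /media/sf_shared); the user may first need to be added to the vboxsf group (sudo usermod -aG vboxsf ifm2023).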

The VM is intended to be used with artifacts that are self-contained, i.e., they contain the presented software and data (e.g., tools, datasets) plus all necessary dependencies (e.g., packages), so that they can be evaluated without an Internet connection. Therefore, the artifact must include all additional software or libraries that are not part of the VM and provide instructions on how to install and set them up, e.g., via an installation script along the lines of the sketch below. Do not submit a virtual machine image in the .zip file; artifact evaluation committee members will copy your .zip file into the provided virtual machine. For further information, see the recommendations on the artifact content below.
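As an illustration, a minimal installation script could install dependencies bundled inside the artifact without network access; the file names and directory layout here are purely hypothetical:

#!/bin/bash
# install.sh (hypothetical): set up the artifact inside the iFM 2023 VM, offline.
set -e
sudo dpkg -i deps/*.deb                                           # bundled Debian packages
pip3 install --no-index --find-links wheels/ -r requirements.txt  # bundled Python wheels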

In case your experiments cannot be replicated inside the provided VM, please contact the artifact evaluation committee chairs before submission. Possible reasons include the need for special hardware (FPGAs, GPUs, clusters, robots, etc.), software licensing issues, or the need to access the Internet. In any case, you are encouraged to submit a complete artifact, so that reviewers have the option to replicate the experiments if they have access to the required resources.

Recommendations

We recommend preparing your artifact in such a way that any computer science expert without dedicated expertise in your field can use it and, in particular, replicate your results. For example, keep the evaluation process simple, provide easy-to-use scripts, and include a detailed README document. Furthermore, the artifact and its documentation should be self-contained.

In addition to the main artifact, i.e., the data, software, libraries, scripts, etc. required to replicate the results of your paper, and any additional software your artifact requires (together with installation instructions), we recommend including the following two elements.

License

A LICENSE file describing the rights granted for the artifact. Your license must at least allow the artifact evaluation committee members to download and evaluate the artifact, e.g., download, use, execute, and modify it for the purpose of artifact evaluation. Please consider typical open-source licenses. Artifacts without an open-source license are also accepted, but a license that allows the committee to assess the artifact needs to be specified. For quick help choosing a license, visit https://choosealicense.com/.

README

The README file should introduce the artifact to the user, i.e., describe what the artifact does, and guide the user through installation, setup, testing, and the replication of your results. Ideally, it consists of the following sections.

  • Artifact Name: a name for your artifact.
  • Summary: a brief description of the artifact’s goal, its authors, a reference to the paper, and an indication of how to cite the artifact.
  • Set-up: describes the steps to set up your artifact within the provided iFM 2023 VM. To simplify the reviewing process, we recommend providing an installation script (if necessary).
  • Hardware Requirements: the hardware (RAM, number of cores, CPU frequency) on which you tested your artifact. Your resource requirements should be modest and allow replication of results even on laptops.
  • Test Instructions: a description of how to perform the test-phase evaluation, e.g., instructions that allow rudimentary testing (such that technical difficulties surface) in as little time as possible.
  • Replication Instructions: a clear description of how to repeat/replicate/reproduce the results presented in the paper.
    • Please document which claims or results of the paper can be replicated with the artifact and how (e.g., which experiment must be performed). Please also explain which claims and results cannot be replicated and why.
    • Describe in detail the steps that need to be performed to replicate the results in the paper. To simplify the reviewing process, we recommend providing evaluation scripts where applicable (see the sketch after this list).
    • For each replication task/step, please provide an estimate of how long it will take (or state how long it took for you) and which exact machine(s) you used.
  • Replication with Limited Resources: For tasks or experiments that require a large amount of resources (hardware or time), we additionally recommend offering the possibility to replicate a subset of the results of the paper in a reasonable amount of time (e.g., within 8 hours) on various hardware platforms, including laptops. In this case, please also include a script that replicates only this subset of the results. If this is not possible, please contact the artifact evaluation chairs early, at the latest before submission.
  • Examples of Usage: a description of how to use your artifact in general accompanied by small examples.
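By way of illustration, a top-level replication script supporting both a full and a reduced run might look as follows; the script names, flags, and time estimates are hypothetical placeholders:

#!/bin/bash
# replicate.sh (hypothetical): rerun the experiments from the paper.
set -e
if [ "$1" = "--subset" ]; then
  ./run_experiments.sh --benchmarks smoke   # reduced set, e.g. ~1 hour on a laptop
else
  ./run_experiments.sh --benchmarks all     # full set, e.g. ~8 hours, 16 GB RAM
fi
python3 scripts/make_tables.py results/ > tables.txt   # regenerate the paper's tables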