
Extend mull-reporter to support concurrent reads and writes to a shared SQLite database file #1135

@bartlettroscoe

Description


Running the tool mull-runner-<version> as:

mull-runner-<version> --coverage-info=<test-name>.profdata --workers=<num-workers> \
  --reporters=SQLite --report-name=<report-name> --report-dir=<project-dir> \
  <test-exec>

results in SQLite write errors when multiple mull-runner invocations write to the same SQLite database at the same time. The race condition produces the following error:

[error] Cannot execute 
CREATE TABLE IF NOT EXISTS mutant (
  mutant_id TEXT,
  mutator TEXT,
  filename TEXT,
  directory TEXT,
  line_number INT,
  column_number INT,
  end_line_number INT,
  end_column_number INT,
  status INT,
  duration INT,
  stdout TEXT,
  stderr TEXT
);

CREATE TABLE IF NOT EXISTS information (
  key TEXT,
  value TEXT
);

Reason: 'database is locked'

[error] Error messages are treated as fatal errors. Exiting now.

While mull-runner already runs the test executable in parallel across mutations using all cores (and most real test binaries have hundreds of mutations), there are cases where it is attractive to run mutations for different test executables at the same time, including:

On machines with many cores (e.g., 100+), you are likely to get better core utilization by running several mull-runner invocations at the same time with a smaller --workers=<num-workers>, and letting a test driver like ctest (which uses libuv) do the scheduling. For example, if you assign a single mull-runner invocation all 100+ cores and only five mutations run (because other mutations were squashed by upstream tests due to #1133), the remaining 95+ cores will be idle. If the test binary takes a long time to run, this will result in significant underutilization of the machine and a significant increase in wall-clock time.

Proposed solution

It is possible to configure SQLite to be more robust to concurrent reads and writes. With some database setup and by retrying reads and writes with short delays, this can be made reliable. See:

(which may contain errors because it was generated by an LLM).
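As a rough sketch of what that setup could look like (illustrated here with Python's stdlib sqlite3 module rather than Mull's C++ reporter; the database path and retry parameters are hypothetical): enable WAL journal mode so readers do not block the writer, set a busy timeout so a locked database causes a wait rather than an immediate "database is locked" error, and retry with a short backoff as a last resort.

```python
import sqlite3
import time

def open_report_db(path):
    # A generous busy timeout makes SQLite wait for a lock to clear instead
    # of failing immediately with "database is locked".
    conn = sqlite3.connect(path, timeout=30.0)
    # WAL journal mode lets readers proceed concurrently with a single writer.
    conn.execute("PRAGMA journal_mode=WAL;")
    return conn

def execute_with_retry(conn, sql, params=(), retries=5, delay=0.1):
    # Retry transient lock errors with a short, growing delay; re-raise
    # anything else (or the final failure) unchanged.
    for attempt in range(retries):
        try:
            conn.execute(sql, params)
            conn.commit()
            return
        except sqlite3.OperationalError as exc:
            if "locked" in str(exc) and attempt < retries - 1:
                time.sleep(delay * (attempt + 1))
            else:
                raise
```

Even with WAL mode, only one writer can hold the write lock at a time, so the timeout and retry loop are what actually absorb contention between concurrent mull-runner invocations.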

Workarounds

Possible workarounds include:

  • Do not run multiple mull-runner programs at the same time; run them serially (e.g., RUN_SERIAL TRUE with ctest)
  • Have every invocation of mull-runner write to a unique SQLite database file, then aggregate them into a single SQLite database file before calling mull-reporter.

The former is what I am currently doing. The latter is feasible, but it would eliminate the ability to skip mutations that have already been squashed by prior tests due to #1133.
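The aggregation workaround could be scripted along these lines (a sketch using Python's stdlib sqlite3 module; the report filename pattern is hypothetical, and only the mutant table from the schema shown in the error message above is merged, for brevity):

```python
import glob
import sqlite3

MUTANT_SCHEMA = """CREATE TABLE IF NOT EXISTS mutant (
  mutant_id TEXT, mutator TEXT, filename TEXT, directory TEXT,
  line_number INT, column_number INT, end_line_number INT,
  end_column_number INT, status INT, duration INT,
  stdout TEXT, stderr TEXT
)"""

def merge_reports(out_path, pattern="report-*.sqlite"):
    # Collect the mutant rows from each per-invocation database into a
    # single database that mull-reporter can then read.
    out = sqlite3.connect(out_path)
    out.execute(MUTANT_SCHEMA)
    for path in sorted(glob.glob(pattern)):
        out.execute("ATTACH DATABASE ? AS src", (path,))
        out.execute("INSERT INTO mutant SELECT * FROM src.mutant")
        out.commit()
        out.execute("DETACH DATABASE src")
    out.close()
```

Since each mull-runner invocation would own its database file exclusively, no locking is needed during the runs; only this final single-threaded merge touches the combined file.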

Mull version

mull-reporter-<version> --version:

Mull: Practical mutation testing and fault injection for C and C++
Home: https://github.com/mull-project/mull
Docs: https://mull.readthedocs.io
Support: https://mull.readthedocs.io/en/latest/Support.html
Version: 0.27.0
LLVM: 19.1.7
