stress

Stress-test a backend with varying concurrency levels. Tests whether concurrent requests cause output divergence, KV cache corruption, or errors.

CLI Reference

infer-check

infer-check: correctness and reliability testing for LLM inference engines.

Usage:

infer-check [OPTIONS] COMMAND [ARGS]...

Options:

Name Type Description Default
--version boolean Show the version and exit. False
--max-tokens integer Default max tokens for generation (applies to all prompts unless they specify their own). 1024
--num-prompts integer range (1 and above) Limit the number of prompts to use from a suite; if omitted, all prompts are used. None
--help boolean Show this message and exit. False

compare

Compare two quantizations of the same model.

MODEL_A and MODEL_B are model specs — HuggingFace repos, Ollama tags, or local GGUF paths. The backend is auto-detected from the identifier, or you can use an explicit prefix (ollama:, mlx:, gguf:, vllm-mlx:).

Examples:

# Two MLX quants
infer-check compare \
  mlx-community/Llama-3.1-8B-Instruct-4bit \
  mlx-community/Llama-3.1-8B-Instruct-8bit

# MLX native vs Ollama GGUF
infer-check compare \
  mlx-community/Llama-3.1-8B-Instruct-4bit \
  ollama:llama3.1:8b-instruct-q4_K_M

# Bartowski GGUF vs Unsloth GGUF (both via Ollama)
infer-check compare \
  ollama:bartowski/Llama-3.1-8B-Instruct-GGUF \
  ollama:unsloth/Llama-3.1-8B-Instruct-GGUF

Usage:

infer-check compare [OPTIONS] MODEL_A MODEL_B

Options:

Name Type Description Default
--prompts text Bundled suite name (e.g. 'reasoning') or path to a .jsonl file. adversarial-numerics
--output path Output directory. ./results/compare/
--base-url text Base URL override for HTTP backends. Applied to both models unless they resolve to mlx-lm. None
--label-a text Custom label for model A (defaults to auto-derived short name). None
--label-b text Custom label for model B (defaults to auto-derived short name). None
--report / --no-report boolean Generate an HTML comparison report after the run. True
--max-tokens integer range (1 and above) Override default max tokens for generation. None
--num-prompts integer range (1 and above) Limit number of prompts to use. None
--disable-thinking / --enable-thinking boolean Suppress reasoning/thinking mode on models that support it (Qwen3, DeepSeek-R1, Ollama think, vLLM chat_template_kwargs, OpenAI/OpenRouter reasoning). Models without a thinking mode are unaffected. Defaults to disabled so outputs are directly comparable across runs; pass --enable-thinking to restore it. True
--chat / --no-chat boolean Use /v1/chat/completions for HTTP backends (applies chat template server-side). Pass --no-chat to use raw /v1/completions instead. Ignored for mlx-lm. True
--help boolean Show this message and exit. False

determinism

Test whether a backend produces identical outputs across repeated runs at temperature=0.

Usage:

infer-check determinism [OPTIONS]

Options:

Name Type Description Default
--model text Model ID or HuggingFace path. required
--backend text Backend type (auto-detected if omitted). None
--prompts text Bundled suite name (e.g. 'reasoning') or path to a .jsonl file. required
--output path Output directory. ./results/determinism/
--runs integer Number of runs per prompt. 100
--base-url text Base URL for HTTP backends. None
--max-tokens integer range (1 and above) Override default max tokens for generation. None
--num-prompts integer range (1 and above) Limit number of prompts to use. None
--disable-thinking / --enable-thinking boolean Suppress reasoning/thinking mode on models that support it (Qwen3, DeepSeek-R1, Ollama think, vLLM chat_template_kwargs, OpenAI/OpenRouter reasoning). Models without a thinking mode are unaffected. Defaults to disabled so outputs are directly comparable across runs; pass --enable-thinking to restore it. True
--chat / --no-chat boolean Use /v1/chat/completions for HTTP backends (applies chat template server-side). Pass --no-chat to use raw /v1/completions instead. Ignored for mlx-lm. True
--help boolean Show this message and exit. False
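
For example, a typical invocation against a local MLX model (the model path, run count, and output directory below are illustrative) might look like:

infer-check determinism \
  --model mlx-community/Meta-Llama-3.1-8B-Instruct-4bit \
  --prompts reasoning \
  --runs 20 \
  --output ./results/determinism/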

diff

Compare outputs across different backends for the same model and prompts.

Usage:

infer-check diff [OPTIONS]

Options:

Name Type Description Default
--model text Model ID or HuggingFace path. required
--backends text Comma-separated backend names, e.g. 'mlx-lm,llama-cpp'. First is baseline. required
--prompts text Bundled suite name (e.g. 'reasoning') or path to a .jsonl file. required
--output path Output directory. ./results/diff/
--quant text Quantization level applied to all backends. None
--base-urls text Comma-separated base URLs for HTTP backends (positionally matched to --backends). None
--max-tokens integer range (1 and above) Override default max tokens for generation. None
--num-prompts integer range (1 and above) Limit number of prompts to use. None
--disable-thinking / --enable-thinking boolean Suppress reasoning/thinking mode on models that support it (Qwen3, DeepSeek-R1, Ollama think, vLLM chat_template_kwargs, OpenAI/OpenRouter reasoning). Models without a thinking mode are unaffected. Defaults to disabled so outputs are directly comparable across runs; pass --enable-thinking to restore it. True
--chat / --no-chat boolean Use /v1/chat/completions for HTTP backends (applies chat template server-side). Pass --no-chat to use raw /v1/completions instead. Ignored for mlx-lm. True
--help boolean Show this message and exit. False
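
For example, comparing an mlx-lm run against a llama.cpp run of the same model (a sketch; the model identifier is illustrative, and how each backend resolves it may vary) might look like:

infer-check diff \
  --model meta-llama/Llama-3.1-8B-Instruct \
  --backends mlx-lm,llama-cpp \
  --prompts reasoning \
  --output ./results/diff/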

report

Generate a report from previously saved result JSON files.

Usage:

infer-check report [OPTIONS] RESULTS_DIR

Options:

Name Type Description Default
--format choice (html | json) Output format. html
--output path Output file path (defaults to RESULTS_DIR/report.html or report.json). None
--help boolean Show this message and exit. False
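
For example, to rebuild an HTML report from a previously completed stress run saved to the default output directory:

infer-check report ./results/stress/ --format html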

stress

Stress-test a backend with varying concurrency levels.

Usage:

infer-check stress [OPTIONS]

Options:

Name Type Description Default
--model text Model ID or HuggingFace path. required
--backend text Backend type (auto-detected if omitted). None
--prompts text Bundled suite name (e.g. 'reasoning') or path to a .jsonl file. required
--output path Output directory. ./results/stress/
--concurrency text Comma-separated concurrency levels. 1,2,4,8,16
--base-url text Base URL for HTTP backends. None
--max-tokens integer range (1 and above) Override default max tokens for generation. None
--num-prompts integer range (1 and above) Limit number of prompts to use. None
--disable-thinking / --enable-thinking boolean Suppress reasoning/thinking mode on models that support it (Qwen3, DeepSeek-R1, Ollama think, vLLM chat_template_kwargs, OpenAI/OpenRouter reasoning). Models without a thinking mode are unaffected. Defaults to disabled so outputs are directly comparable across runs; pass --enable-thinking to restore it. True
--chat / --no-chat boolean Use /v1/chat/completions for HTTP backends (applies chat template server-side). Pass --no-chat to use raw /v1/completions instead. Ignored for mlx-lm. True
--help boolean Show this message and exit. False

sweep

Run a quantization sweep: compare pre-quantized models against a baseline.

Each model is a separate HuggingFace repo or local path. The first model (or --baseline) is the reference; all others are compared against it.

Example:

infer-check sweep \
  --models "bf16=mlx-community/Llama-3.1-8B-Instruct-bf16,
            4bit=mlx-community/Llama-3.1-8B-Instruct-4bit,
            3bit=mlx-community/Llama-3.1-8B-Instruct-3bit" \
  --prompts reasoning

Usage:

infer-check sweep [OPTIONS]

Options:

Name Type Description Default
--models text Comma-separated label=model_path pairs. Example: 'bf16=mlx-community/Llama-3.1-8B-Instruct-bf16,4bit=mlx-community/Llama-3.1-8B-Instruct-4bit' required
--backend text Backend type (auto-detected if omitted). None
--prompts text Bundled suite name (e.g. 'reasoning') or path to a .jsonl file. required
--output path Output directory. ./results/sweep/
--baseline text Baseline label (defaults to first in --models). None
--base-url text Base URL for HTTP backends. None
--max-tokens integer range (1 and above) Override default max tokens for generation. None
--num-prompts integer range (1 and above) Limit number of prompts to use. None
--disable-thinking / --enable-thinking boolean Suppress reasoning/thinking mode on models that support it (Qwen3, DeepSeek-R1, Ollama think, vLLM chat_template_kwargs, OpenAI/OpenRouter reasoning). Models without a thinking mode are unaffected. Defaults to disabled so outputs are directly comparable across runs; pass --enable-thinking to restore it. True
--chat / --no-chat boolean Use /v1/chat/completions for HTTP backends (applies chat template server-side). Pass --no-chat to use raw /v1/completions instead. Ignored for mlx-lm. True
--help boolean Show this message and exit. False

How it works

  1. Baseline pass -- runs all prompts at concurrency=1 (the first level). These outputs become the reference.
  2. Concurrent passes -- for each concurrency level, runs all prompts with that many concurrent requests using asyncio.Semaphore.
  3. Consistency check -- compares each concurrent output against the baseline (concurrency=1) output for the same prompt.
  4. Error tracking -- counts failed requests at each concurrency level.
  5. Summary -- displays output consistency and error count per concurrency level.

Example

infer-check stress \
  --model mlx-community/Meta-Llama-3.1-8B-Instruct-4bit \
  --backend openai-compat \
  --base-url http://127.0.0.1:8000 \
  --prompts reasoning \
  --concurrency 1,2,4,8 \
  --output ./results/stress/

Output:

                    Stress Test Summary
┏━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━┓
┃ concurrency ┃ errors ┃ output_consistency  ┃
┡━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━┩
│           1 │      0 │             100.00% │
│           2 │      0 │             100.00% │
│           4 │      0 │             100.00% │
│           8 │      0 │             100.00% │
└─────────────┴────────┴─────────────────────┘

What to look for

  • Errors at high concurrency -- the backend is failing under load. Check server logs for OOM, timeout, or connection errors.
  • Dropping output consistency -- concurrent requests are interfering with each other. This is a strong signal of KV cache corruption or batch-dependent computation bugs.
  • Consistency drop at a specific threshold -- if consistency drops sharply at concurrency N, the backend likely has a fixed-size buffer or cache that overflows at that level.

Tip

For HTTP backends (openai-compat, vllm-mlx, llama-cpp), make sure the server is running before starting the stress test. The --base-url option lets you point to any running server.
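
For example, with llama.cpp you might start the bundled server first and then point the stress test at it (a sketch; the GGUF path, port, and model identifier are illustrative and depend on your setup):

# start a llama.cpp server
llama-server -m ./models/Llama-3.1-8B-Instruct-Q4_K_M.gguf --port 8080

# run the stress test against it
infer-check stress \
  --model Llama-3.1-8B-Instruct \
  --backend llama-cpp \
  --base-url http://127.0.0.1:8080 \
  --prompts reasoning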

Output format

Results are saved as a JSON array of StressResult objects, each containing:

  • concurrency_level -- the concurrency level tested
  • results -- all InferenceResult objects from that level
  • error_count -- number of failed requests
  • output_consistency -- fraction of outputs matching the baseline (concurrency=1)
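
For a quick look at the per-level summary fields in the saved JSON, something like the following works (a sketch assuming jq is available; the result filename inside the output directory is a placeholder):

jq '.[] | {concurrency_level, error_count, output_consistency}' ./results/stress/<results-file>.json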