Run Performance Benchmarking with IPEX-LLM#

You can benchmark IPEX-LLM performance on Intel CPUs and GPUs using the benchmark scripts we provide.

Prepare The Environment#

You can refer to here for instructions on installing IPEX-LLM in your environment. The following dependencies are also needed to run the benchmark scripts:

pip install pandas
pip install omegaconf

Prepare The Scripts#

Navigate to your local workspace and clone IPEX-LLM from GitHub. Then modify config.yaml under the all-in-one folder to set your benchmark configuration.

cd your/local/workspace
git clone https://github.com/intel-analytics/ipex-llm.git
cd ipex-llm/python/llm/dev/benchmark/all-in-one/

config.yaml#

repo_id:
  - 'meta-llama/Llama-2-7b-chat-hf'
local_model_hub: '/mnt/disk1/models'
warm_up: 1
num_trials: 3
low_bit: 'sym_int4' # default to use 'sym_int4' (i.e. symmetric int4)
batch_size: 1 # default to 1
in_out_pairs:
  - '32-32'
  - '1024-128'
  - '2048-256'
test_api:
  - "transformer_int4_gpu"
cpu_embedding: False

Some parameters in the YAML file that you can configure:

  • repo_id: The name of the model together with its organization (e.g. meta-llama/Llama-2-7b-chat-hf).

  • local_model_hub: The folder path where the models are stored on your machine.

  • warm_up: The number of warm-up runs executed before the performance benchmarking trials.

  • num_trials: The number of runs for performance benchmarking. The final benchmark result is the average over all trials.

  • low_bit: The low-bit precision to convert the model to for benchmarking (e.g. 'sym_int4').

  • batch_size: The number of samples on which the models make predictions in one forward pass.

  • in_out_pairs: Input sequence length and output sequence length, joined by '-' (e.g. '1024-128' means 1024 input tokens and 128 output tokens).

  • test_api: The test function to use; choose it according to your device and operating system.

    • transformer_int4_gpu on Intel GPU for Linux

    • transformer_int4_gpu_win on Intel GPU for Windows

    • transformer_int4 on Intel CPU

  • cpu_embedding: Whether to put the embedding layer on CPU (currently only available for the Windows GPU-related test_api).

Remark: If you want to benchmark the performance without warm-up, you can set warm_up: 0 and num_trials: 1 in config.yaml, and run each model and in_out_pair separately.
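Since omegaconf is installed as a benchmark dependency, you can also edit config.yaml programmatically instead of by hand. The snippet below is a minimal sketch and not part of the official scripts: it assumes the config.yaml format shown above and applies the no-warm-up setup from the remark, keeping a single model and a single in_out_pair per run.

# Minimal sketch (assumption: config.yaml follows the format shown above).
# OmegaConf is used here only as a convenience for editing the file; the
# no-warm-up override mirrors the remark in this section.
from omegaconf import OmegaConf

conf = OmegaConf.load("config.yaml")

# Benchmark without warm-up: zero warm-up runs and a single trial.
conf.warm_up = 0
conf.num_trials = 1

# Keep only one model and one in/out pair per run, as suggested above.
conf.repo_id = ["meta-llama/Llama-2-7b-chat-hf"]
conf.in_out_pairs = ["1024-128"]

OmegaConf.save(conf, "config.yaml")
print(OmegaConf.to_yaml(conf))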

Run on Windows#

Please refer to here to configure the oneAPI environment variables. Then set the following environment variables and run the benchmark script:

set SYCL_CACHE_PERSISTENT=1
set BIGDL_LLM_XMX_DISABLED=1

python run.py

Run on Linux#

For Intel Arc™ A-Series Graphics and Intel Data Center GPU Flex Series, we recommend:

./run-arc.sh

Result#

After the benchmarking completes, a CSV result file is generated under the current folder. Look mainly at the 1st token avg latency (ms) and 2+ avg latency (ms/token) columns for the benchmark results. You can also check whether the actual input/output tokens column is consistent with the input/output tokens column, and whether the parameters you specified in config.yaml were successfully applied during benchmarking.
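Since pandas is already installed as a dependency, you can also load the result file for a quick summary. The snippet below is a minimal sketch: it uses the column names mentioned above and simply picks up every CSV file in the current folder, because the exact file name depends on your run; the model column is an assumed name and may differ in your CSV.

# Minimal sketch: load the benchmark result CSV(s) with pandas and print
# the key latency columns. Column names follow the text above; the result
# file name varies per run, so we glob for CSV files in the current folder.
import glob
import pandas as pd

for csv_file in glob.glob("*.csv"):
    df = pd.read_csv(csv_file)
    print(f"== {csv_file} ==")
    cols = [
        "model",                      # assumed column name; adjust to your CSV header
        "1st token avg latency (ms)",
        "2+ avg latency (ms/token)",
        "input/output tokens",
        "actual input/output tokens",
    ]
    # Only keep the columns that actually exist in this CSV.
    print(df[[c for c in cols if c in df.columns]].to_string(index=False))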