Analysis Benchmarks

In the context of metrics for analysis facilities, was there any consideration of one or more analysis benchmarks? At the node level, the quality of the local disk/SSD will be one factor, but I think more important will be the site storage or cache. This would mean the benchmark needs to run on the whole resource, or on the part of it meant for analysis.
An ntuple analysis could probably be common to ATLAS and CMS. Maybe an ML workload too, allowed to run on a GPU.
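As a rough illustration of what a storage-sensitive ntuple benchmark could measure, here is a minimal sketch: it writes a synthetic flat "ntuple" (one float per event), then scans it sequentially with a toy selection cut, reporting read throughput. Everything here (the file layout, the cut value, the function names) is a hypothetical stand-in, not an existing benchmark; a real version would read actual experiment ntuples from the site storage or cache.

```python
import os
import struct
import tempfile
import time

def make_ntuple_file(path, n_events=200_000):
    """Write a flat synthetic 'ntuple': one little-endian float per event."""
    with open(path, "wb") as f:
        for i in range(n_events):
            f.write(struct.pack("<f", (i % 500) / 10.0))

def scan_ntuple(path, chunk_events=4096):
    """Read the file sequentially in chunks and apply a trivial selection,
    mimicking the I/O pattern of a columnar ntuple analysis."""
    selected = 0
    nbytes = 0
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            buf = f.read(4 * chunk_events)
            if not buf:
                break
            nbytes += len(buf)
            vals = struct.unpack(f"<{len(buf) // 4}f", buf)
            selected += sum(1 for v in vals if v > 25.0)  # hypothetical pt-like cut
    dt = time.perf_counter() - t0
    return selected, nbytes / dt / 1e6  # (events passing cut, MB/s)

path = os.path.join(tempfile.gettempdir(), "toy_ntuple.bin")
make_ntuple_file(path)
selected, mbps = scan_ntuple(path)
print(f"selected={selected} throughput={mbps:.1f} MB/s")
```

Pointing `path` at local disk versus site storage is exactly where the difference discussed above would show up.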

I think this also exposes a problem with running the various benchmarks sequentially. We normally benefit from the mix of workloads on a node and on the site; the obvious resources affected are I/O and memory. It seems we need to run some percentage of each of the various workloads on a node and on a site.

For GPUs we’ve been using some beams workloads:

and some additional generic ML workloads. It would be great to have HEP-specific ones here as well.

I was pointed yesterday to this talk:

and to benchmarks from the Analysis Grand Challenge.

In the HEP-Benchmarks project we follow the activities of the experiments, and we collect CPU/GPU workloads that have a chance of going into production.
For GPUs we currently have containers that include the work-in-progress MC generation on GPU (Madgraph @NLO) [1] and the work-in-progress Particle Flow ML workload [2], in addition to the CMS HLT reconstruction.
Hopefully we can include other GPU workloads from the experiments' simulation, reconstruction and analysis activities in the near future.

As for the running mode of the different workloads: the workload containers used for HEPScore have enough configuration parameters to allow setting up a mix of workloads on a node in the way you describe. This is not the procedure followed for the official benchmark, but it is doable for performance studies.
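To make the idea concrete, a configuration for such a performance study might look something like the fragment below. The keys and workload names are purely illustrative, not the actual HEPScore configuration schema; the point is only that each workload gets a fraction of the node and they run concurrently rather than sequentially.

```yaml
# Illustrative only: NOT the real HEPScore configuration schema.
benchmarks:
  - name: ntuple-analysis      # hypothetical workload name
    copies_fraction: 0.50      # ~50% of the available cores
  - name: ml-training-gpu
    copies_fraction: 0.25
  - name: reco-cpu
    copies_fraction: 0.25
mode: concurrent               # vs. the sequential official-run mode
```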