Database Location?

Hello all,

Is it possible to view the uploaded benchmarks?

Thanks
Andrew

Hi Andrew

Results are made available, after analysis and aggregation, in this table:
https://w3.hepix.org/benchmarking/scores_HS23.html

Domenico

Hi Domenico,

Thanks for the response. Is there a way to see whether “bmksend” was successful? I know it prints an output line saying the JSON was submitted, but I’m curious to know if/when my JSON files were successfully approved/added to the database.

Thanks!
Andrew

Hi @giordano – the last update of the table is from late September. Is there a way to get feedback on whether the uploads are successful? We directed all USCMS sites to begin benchmarking and uploading to the central DB, and I’ve yet to see any results show up centrally.

Cheers
Andrew

Hi @meloam

I have created a Grafana dashboard so you can check if your data has been properly submitted, so long as the site tag was defined.

Let me know if you can find your documents there and if you need help with anything else.

Best,
Gonzalo

Hi @gmenende,

You mentioned: “so long as the site tag was defined.”

Following HEP-Benchmarks / hep-benchmark-suite · GitLab, I found no information on the site tag. Could the instructions be fixed to list the arguments that are expected to be provided, and could there be instructions for adding the tags you’re expecting to existing run JSON files (of which there are many)?

Thanks!
Andrew

Hi,

Don’t worry, I have updated the dashboard so it also takes into account documents without the site tag. As you mentioned, the README does not suggest setting this tag, since it isn’t mandatory. However, if you plan on sending data to the central DB, it is quite helpful for identifying your data.

Our example scripts show how the site tag can be added to the configuration of the suite. Of course, you could add any other tags as well in a similar fashion.
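For concreteness, a minimal configuration along those lines might look like this (just a sketch: the tags section follows what the example scripts do, but the exact schema may differ between suite versions, so compare against the shipped configuration files):

global:
  benchmarks:
    - hepscore
  mode: singularity
  publish: true
  tags:
    site: MY-SITE        # replace with your WLCG site name; other tags can go alongside

You would then run the suite with bmkrun -c my_site_config.yml (or whatever you name the file) instead of the default configuration.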

As for adding it to old documents, it is a matter of adding a JSON field to them, in particular setting message.host.tags.site to your site. This could be achieved in several ways, e.g. with a small Python script, but as I said it isn’t compulsory.
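For instance, something along these lines could do it (only a sketch: it assumes the tag sits under host.tags.site at the top level of each report file, so adjust the path if your documents nest it under a message envelope):

#!/usr/bin/env python3
# Hypothetical helper: add a site tag to existing bmkrun_report.json files.
import json
import sys
from pathlib import Path

SITE = "MY-SITE"  # replace with your site name

for name in sys.argv[1:]:
    path = Path(name)
    doc = json.loads(path.read_text())
    # Create host.tags if missing, then set the site tag; the path assumed
    # here is host.tags.site, so adjust it to match your report layout.
    doc.setdefault("host", {}).setdefault("tags", {})["site"] = SITE
    path.write_text(json.dumps(doc, indent=2))
    print(f"tagged {name}")

You could run it as, e.g., python3 add_site_tag.py *.json from the directory holding the reports.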

Kind regards,
Gonzalo

Hi Gonzalo,

I think the end goal is to have something a non-expert can run on a fresh node to benchmark it in a way that sends the results to the central DB, so if the documentation can be updated, that would be very useful. You point to the “example scripts”, but there are several of them, and they don’t match what is in the README (or even each other, necessarily), so I think it would be very helpful to refine and/or coalesce the instructions into a single “default” version that we can point site admins to. The benchmark takes a long time to run, so having 100 nodes spend 8 hours benchmarking without the results being made public in the central DB is disappointing.

My 2c
Andrew

Hi Andrew,

Thank you for the feedback, we will take it into account. The Suite itself is a complex and highly customizable tool that, as you mention, could be too complicated for non-experts to use. That’s why the example scripts came to be; each one is designed to run a particular configuration and has additional checks to ensure everything is fine. E.g. the site is required there; if you don’t set it, the script will tell you and exit, as sketched below.
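For illustration, the check is in this spirit (a hypothetical sketch, not the actual script code):

# Hypothetical guard, illustrating the kind of check the scripts perform.
if [ -z "${SITE}" ]; then
    echo "ERROR: please set your site name before running" >&2
    exit 1
fi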

The script you describe sounds like run_HEPscore.sh to me. Nevertheless, please don’t get discouraged: all computed results are valuable, whether or not the site field is present. Even if the documents weren’t sent to us during the run, bmksend can send them later, so nothing gets lost.
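For example, re-sending a saved report could look like this (an assumption on my part that bmksend takes the same -c configuration flag as bmkrun; please double-check with bmksend --help on your installation):

# Hypothetical invocation; verify the exact flags with `bmksend --help`.
bmksend -c my_site_config.yml somehost.json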

We are currently thinking about the direction these example scripts should take. In principle, we expected them to be just helpers: examples of how to run the suite (and hepscore) that users could inspect to see what’s going on and adapt with their own changes if needed. However, the trend has been more towards them becoming “official” scripts that users don’t usually modify at all, so we are considering our options here. As soon as a decision is taken, we will make sure to reflect it in the README as well.

Let me know if you still need help finding your results or with any suite-related topics.

Best,
Gonzalo

Dear Andrew

the official documentation describing how to run the HEPScore23 benchmark
(as discussed in and approved by the WLCG HEPScore deployment task force)
is available here: How to Run HEPScore23 Benchmark

This is the procedure every WLCG site is invited to follow in order to produce the results
that will be published in the benchmarking DB.
(More information in the GDB report of last September: https://indico.cern.ch/event/1225116/contributions/5519006/attachments/2713539/4712490/GDB-13-09-2023-giordano.pdf)

Gonzalo was being a bit modest in referring to the script as an “example script”.
This is the script to be used to publish data in the Benchmarking DB.

Your 100 measurements are extremely interesting for benchmarking studies.
We can help in uploading them. For additional support on your specific issue, I invite you to open a GGUS ticket (as documented in the above-mentioned “How To”) to help us track it properly.

Cheers
Domenico

Hello,

Thanks for your responses. I will open a GGUS ticket and inform my USCMS colleagues of the updated documentation (I had previously passed around the draft Google Doc).

For completeness here, I ran the following across my cluster:

#!/bin/bash
# Start from a clean working directory for the suite
rm -rf /tmp/hep-benchmark-suite
# Cache Singularity images on shared storage
export SINGULARITY_CACHEDIR=/mnt/ceph/home/meloam/00-projects/hep-benchmark-suite/singularity-cache
# Activate the suite's virtualenv and run with the default configuration
source /mnt/ceph/home/meloam/00-projects/hep-benchmark-suite/venv/bin/activate
cd /mnt/ceph/home/meloam/00-projects/hep-benchmark-suite/
bmkrun -c default
# Keep a per-host copy of the report
cp /tmp/hep-benchmark-suite/*/bmkrun_report.json /mnt/ceph/home/meloam/00-projects/hep-benchmark-suite/$HOSTNAME.json

Which, after this thread, I know is incorrect.

Thanks again and I’ll follow up on GGUS
Andrew