REANA mini-workshop on 24 February 2021

Hello:

We are organising a virtual mini-workshop amongst REANA users and developers. The workshop will take place on Wednesday 24 February over Zoom.

The idea is to get together to exchange experiences, tips and tricks amongst REANA users and developers on the one hand, and to help identify areas where the REANA platform needs the most improvement on the other.

If you would like to take part, please reply below with some more details about what exactly you are looking for. For example:

  • I can present my use case and workflow practices to other participants to exchange tips and get some feedback

  • I am using Yadage but I am not sure how to best express this long sub-workflow declaratively

  • I want to bridge my GitLab repository with REANA to run workflows for some tagged commits

  • I would like to use code from CVMFS and publish my results in my personal EOS folder

  • I would like some hands-on help installing and running REANA on my laptop to debug my containers and workflows

  • … and anything else that might interest you with respect to running containerised workflows on REANA!

Please let us know whether you would like to take part and which topics interest you the most.

This will allow us to prepare a more targeted program for the mini-workshop.

Best regards,

Tibor

Hey, looking forward to this workshop! I’d say the main thing I’m interested in making progress on at the moment is strategies for smoothing the transition from running a RECAST workflow on traditional infrastructure (i.e. a PC or VM) to REANA. Specifically:

  • How to interface with REANA via the Python API
  • Generate reana.yaml automatically based on the recast.yml used by the recast-atlas tool (a rough sketch follows after this list)
  • Sort out the permission issues sometimes encountered with container UID/GID when running ATLAS containers on REANA
  • Unify the resource definitions and initial setup for Kerberos authentication
    • REANA: keytab uploaded to the cluster as a k8s secret (documented here); the kerberos: true resource is used in steps.yml
    • recast-atlas: environment variables are supplied to the recast-atlas client by the user; kinit is run automatically in the container on start-up during the workflow
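As a rough illustration of the reana.yaml generation point, here is a minimal sketch, assuming a hypothetical recast.yml layout: the stages/environment/process/parameters keys below are placeholders, not the actual recast-atlas schema, and would need to be adapted.

```python
# Sketch: derive a minimal serial reana.yaml from a recast.yml.
# The recast.yml keys used below ("stages", "environment", "process",
# "parameters") are hypothetical placeholders, not the real recast-atlas schema.
import yaml

def recast_to_reana(recast_path="recast.yml", reana_path="reana.yaml"):
    with open(recast_path) as f:
        recast = yaml.safe_load(f)

    # Build one REANA serial step per RECAST stage.
    steps = []
    for stage in recast.get("stages", []):
        steps.append({
            "name": stage["name"],
            "environment": stage["environment"]["image"],  # container image
            "commands": [stage["process"]["cmd"]],          # shell command
        })

    reana_spec = {
        "version": "0.6.0",  # REANA specification version, just as an example
        "inputs": {"parameters": recast.get("parameters", {})},
        "workflow": {
            "type": "serial",
            "specification": {"steps": steps},
        },
    }

    with open(reana_path, "w") as f:
        yaml.safe_dump(reana_spec, f, sort_keys=False)

if __name__ == "__main__":
    recast_to_reana()
```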

Hey! I am happy to present my use case (which includes using CVMFS and publishing to EOS) as well as my current CI setup with the GitLab-REANA integration.

A couple of suggestions from my side for topics that could be discussed. I might add more if anything else comes to mind.

Default workflow language
While Serial is nice and easy to understand, it doesn’t have the features required for standard HEP workflows, so starting with Serial is a step that isn’t really necessary. On the other hand, Yadage lacks documentation, so going beyond the basics is difficult there as well. The use of CWL is somewhat unclear and painful. Each language has its advantages, but none can do everything one would want it to do (see the supported systems docs). Making one of them the default would clear things up, and that language should then aim for very extensive documentation.

Sharing of workflows
Add the ability to give other users access to workflows, e.g. via e-groups. This would help with automated calibration workflows, reducing the dependency on individual analysts.

Container sanity checks
The REANA Docker docs have a section on user IDs, a topic that often leads to confusion. There could be a simple build service that takes a container image as input and then fixes the user IDs if necessary, or that at least checks the IDs and provides suggestions on what to fix.
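A minimal sketch of what such a check could look like, assuming a local Docker installation; it only reports the numeric UID the image runs as and prints a suggestion, rather than rebuilding the image.

```python
# Sketch: check which numeric UID a container image runs as and suggest
# a fix if it is root. Assumes Docker is available locally.
import subprocess
import sys

def check_image_uid(image):
    # Run `id -u` inside the image to find out which UID it defaults to.
    result = subprocess.run(
        ["docker", "run", "--rm", "--entrypoint", "id", image, "-u"],
        capture_output=True, text=True, check=True,
    )
    uid = int(result.stdout.strip())
    if uid == 0:
        print(f"{image}: runs as root (UID 0); consider creating a "
              "non-root user in the Dockerfile and switching to it with USER.")
    else:
        print(f"{image}: runs as UID {uid}, looks fine.")
    return uid

if __name__ == "__main__":
    check_image_uid(sys.argv[1])
```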

Data set jobs
This would depend on the experiment, but being able to provide a data set name and have REANA resolve the files belonging to the data set and split them into i jobs of j files each would be nice to have. I think this is on the roadmap already, and for directories or S3 buckets it could be implemented independently of the experiment.
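To make the splitting concrete, a small sketch that chunks a resolved file list into jobs of j files each; how a data set name gets resolved to files is experiment-specific and only stubbed out here with a placeholder.

```python
# Sketch: split a resolved list of data set files into jobs of
# `files_per_job` files each. Resolving a data set name to a file list
# is experiment-specific and only stubbed out here.
def resolve_dataset(dataset_name):
    # Placeholder: in practice this would query a file catalogue,
    # a directory listing, an S3 bucket, etc.
    return [f"{dataset_name}/file_{i:04d}.root" for i in range(10)]

def split_into_jobs(files, files_per_job):
    return [files[i:i + files_per_job]
            for i in range(0, len(files), files_per_job)]

files = resolve_dataset("some.example.dataset")
for job_id, job_files in enumerate(split_into_jobs(files, files_per_job=3)):
    print(f"job {job_id}: {job_files}")
```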

WLCG support
While not a critical feature, it would be nice to have support for running grid jobs, since it would allow using even more of the available computing resources. This will again have to be implemented per experiment.

Python instead of YAML
Writing YAML is a pain. Being able to write workflows in Python would make this much more enjoyable. Argo exposes the complete API so that one can create workflows using the Go client or cURL, and there is also Couler, which allows doing this in Python. Something similar for REANA would be awesome. In principle, this can already be done in Yadage? From a recent Argo survey, I know that this is the most requested feature for them as well.
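To illustrate the idea, a rough sketch of composing a workflow in Python and only serialising it to reana.yaml at the end; the WorkflowBuilder class here is purely hypothetical, not an existing REANA or Yadage API.

```python
# Sketch: compose a workflow in Python and serialise it to reana.yaml.
# WorkflowBuilder is a hypothetical helper, not an existing REANA API.
import yaml

class WorkflowBuilder:
    def __init__(self):
        self.steps = []

    def step(self, name, image, commands):
        # Append one serial step with its container image and commands.
        self.steps.append(
            {"name": name, "environment": image, "commands": commands}
        )
        return self

    def to_reana_yaml(self, path="reana.yaml"):
        spec = {
            "version": "0.6.0",  # REANA specification version, as an example
            "workflow": {
                "type": "serial",
                "specification": {"steps": self.steps},
            },
        }
        with open(path, "w") as f:
            yaml.safe_dump(spec, f, sort_keys=False)

(
    WorkflowBuilder()
    .step("fit", "python:3.8", ["python fit.py"])
    .step("plot", "python:3.8", ["python plot.py"])
    .to_reana_yaml()
)
```

Serialising to the existing reana.yaml format at the end would keep such a Python layer compatible with the current reana-client tooling.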

I would like to discuss and hack away at two topics related to flocking jobs to HTCondor via Yadage.

[1] The jobs flock slowly, making any sort of iteration or development very tedious. Could there be an HTCondor dev cluster?

[2] Use of the ATLAS analysis framework within HTCondor-flocked jobs submitted with Yadage. This was discussed a bit on Mattermost, but no resolution was reached. This is currently a blocker for scaling up usage with ATLAS setups.

Thanks for the topic suggestions! We have identified 4-5 topic blocks to start with, and we can take up the rest in a future event.

Here is the tentative agenda for the 2021-02-24 mini-workshop: REANA mini-workshop 2021-02-24 (24 February 2021) · Indico