At the Summit, I talked about how we had initially tried classifiers, and later separate repositories combined with classifiers, to keep platform-specific artifacts separate. Neither strategy worked out well.
If you’re talking about just uploading the different quality records along with the final artifact (and not producing different variants of the same artifact), I think you can accomplish that with a single pipeline and a single publishing step. For example, with Jenkins you can either share a workspace or copy artifacts from one workspace to another in your pipeline; the final build step can then publish everything at once. You could use classifiers for that.
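To make the single-publish-step idea concrete, here is a minimal sketch assuming Gradle’s `maven-publish` plugin; the task names (`linuxJar`, `windowsJar`) and coordinates are hypothetical stand-ins for whatever your earlier pipeline stages produced:

```kotlin
// build.gradle.kts -- a sketch, not a drop-in config.
plugins {
    `java-library`
    `maven-publish`
}

publishing {
    publications {
        create<MavenPublication>("mavenJava") {
            from(components["java"])
            // Attach each platform build as a classified artifact so one
            // publish step uploads all of them, plus the metadata, at once.
            artifact(tasks["linuxJar"]) { classifier = "linux" }
            artifact(tasks["windowsJar"]) { classifier = "windows" }
        }
    }
}
```

Because everything goes out in one publication, the metadata is written exactly once and already lists every classifier.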
One of the issues with using classifiers to publish different artifacts is timing. If you publish each artifact as it becomes available, you either republish the metadata file every time you add an artifact (one kind of headache), or you publish a metadata file that lists artifacts that do not exist yet. The first way can cause problems on the artifact repository side (concurrent publishes overwriting one another’s metadata), and it forces you to serialize the jobs since you can’t parallelize them. The second way causes problems in other projects if they try to pull in artifacts that appear in the metadata but have not been published yet.
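For context on why incremental publishing races on the metadata: in a Maven-style repository, a snapshot version’s `maven-metadata.xml` enumerates every classifier that has been deployed, so adding a classifier later means rewriting that shared file. A rough sketch (hypothetical coordinates):

```xml
<!-- maven-metadata.xml for a snapshot version (illustrative only) -->
<metadata>
  <groupId>com.example</groupId>
  <artifactId>mylib</artifactId>
  <version>1.0-SNAPSHOT</version>
  <versioning>
    <snapshotVersions>
      <snapshotVersion>
        <classifier>linux</classifier>
        <extension>jar</extension>
        <value>1.0-20240101.120000-1</value>
      </snapshotVersion>
      <!-- Publishing a "windows" classifier later means rewriting this
           same file, racing any other publish job for the same version. -->
    </snapshotVersions>
  </versioning>
</metadata>
```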
My recommendation at the Summit for publishing the same artifact built for multiple platforms was to encode the platform into the artifact id or the version number; that’s what my project ended up doing (using the artifact id). It sounds like your problem is different, though, and if I understand you correctly, a single publish step should let you avoid most of these issues.
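For completeness, encoding the platform into the artifact id might look like the following sketch, again assuming Gradle’s `maven-publish` plugin; the `platform` property and the `mylib` name are hypothetical:

```kotlin
// build.gradle.kts -- a sketch of per-platform coordinates.
plugins {
    `java-library`
    `maven-publish`
}

// Each CI job passes its own value, e.g. -Pplatform=linux-x86_64.
val platform: String = (findProperty("platform") as String?) ?: "linux-x86_64"

publishing {
    publications {
        create<MavenPublication>("mavenJava") {
            from(components["java"])
            // Distinct coordinates per platform, so each job writes its own
            // metadata file and never collides with another platform's publish.
            artifactId = "mylib-$platform"
        }
    }
}
```

The trade-off is that consumers must know which platform-specific artifact id to depend on, but the publish jobs become fully independent and parallelizable.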