
Managing Nexus API Using Jenkins X

In my last post, Jenkins X — Managing Jenkins, I talked about how we manage our Jenkins server. This time around, I’ll be looking at the Nexus server and how it can be managed in a similar way.

Current Status

Jenkins X comes with an optional Nexus server packaged into the platform, which is great to get you started. However, as your project becomes more complex, you might find yourself needing other repositories or changes to the configuration. It is these changes that have proved a little difficult to manage. Here is the setup as of today:

  • the current Jenkins X Nexus chart hard-codes the pre-packaged repositories
  • the script takes the list of repository script files and executes them
  • there is an open issue on GitHub to move the files into the values.yaml, but I’m not sure when it will be fixed.

So, that being the case, we need a way to add our custom repositories. Now before I get into the implementation, I’d like to give you a little background as to what is actually happening.

The Script

The script runs directly after the Nexus server becomes available. Its job, among other things, is to loop over a list of files containing repository creation scripts and to execute them in Nexus. The scripts themselves are idempotent, meaning a repository will only be created if it does not already exist. This ensures that your Nexus server will have all the necessary repositories on startup.
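I don’t have the chart’s exact script to hand, but a minimal sketch of such a loop, assuming the Nexus 3 Script API, the default admin credentials, and a script directory of /opt/sonatype/nexus/conf (all assumptions), might look like this:

```shell
#!/bin/sh
# Sketch of a provisioning loop over repository script files.
# NEXUS_URL, SCRIPT_DIR and the admin credentials are assumptions.
NEXUS_URL="${NEXUS_URL:-http://localhost:8081}"
SCRIPT_DIR="${SCRIPT_DIR:-/opt/sonatype/nexus/conf}"

run_script() {
  name="$1"
  # Upload the script (a JSON payload with name/type/content) ...
  curl -sf -u admin:admin123 -X POST \
    -H "Content-Type: application/json" \
    -d "@${SCRIPT_DIR}/${name}.json" \
    "${NEXUS_URL}/service/rest/v1/script" || true
  # ... then execute it; the scripts are idempotent, so re-runs are safe
  curl -sf -u admin:admin123 -X POST \
    -H "Content-Type: text/plain" \
    "${NEXUS_URL}/service/rest/v1/script/${name}/run" || true
}

# Loop over every script file, if the directory exists
if [ -d "${SCRIPT_DIR}" ]; then
  for f in "${SCRIPT_DIR}"/*.json; do
    run_script "$(basename "$f" .json)"
  done
fi
```

Note that the Script API path has moved between Nexus versions, so check the endpoint against your installed version.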

So, how do the scripts work?

The Nexus API

Nexus Repository uses Groovy scripts to configure the instance. A full walkthrough is out of scope for this post, but there are some great examples of configuring Nexus through scripts.

More information can be found on the official Nexus site.

And? Why are you telling me this?

Because, although the scripts’ primary function is to set up the Nexus repositories, there is actually nothing to stop you adding other scripts that, say:

  • create repository groups
  • add users
  • add roles
  • schedule tasks, etc.

Creating the Script Files

Let’s first start by creating a ConfigMap resource containing our custom scripts. Here is a link to the gist in question:

The above example showcases the four most common types of task:

  • redshift-maven-repository.json
    a public maven release repository
  • apache-org-snapshots.json
    a public maven snapshot repository
  • my-protected-repo.json
    a private maven repository with username/password authentication
  • maven-group.json
    script to create various maven repository groups
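The gist itself isn’t reproduced here, but the general shape of such a ConfigMap is roughly the following (the resource name and the Groovy payloads are illustrative sketches, not the original gist):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nexus-custom-scripts   # name is an assumption
data:
  redshift-maven-repository.json: |
    {
      "name": "redshift-maven-repository",
      "type": "groovy",
      "content": "repository.createMavenProxy('redshift-maven-repository', 'https://s3.amazonaws.com/redshift-maven-repository/release')\n"
    }
  maven-group.json: |
    {
      "name": "maven-group",
      "type": "groovy",
      "content": "..."
    }
```

Each data key becomes a file name, and each value is the JSON payload the Script API expects, with the Groovy itself escaped inside the content field.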

Looking at the content and replacing \n with real line breaks, you can see the actual script that will be executed. For example:
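The original example didn’t survive here, but an unescaped script of this kind might look like the following Groovy. This is a sketch based on the Nexus provisioning API; the repository name and URL are taken from the file list above, and the exact guard call may differ on your Nexus version:

```groovy
// Sketch: create a proxy repository only if it does not already exist
def repoName = 'redshift-maven-repository'
if (!repository.repositoryManager.exists(repoName)) {
    repository.createMavenProxy(repoName,
        'https://s3.amazonaws.com/redshift-maven-repository/release')
}
```

The existence check is what makes the script idempotent and therefore safe to run on every startup.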

NOTE: There is some excellent information on Nexus scripts including an example project to be found here:

Getting the Necessary Information

So, how do we get our files in the right place? We will need to place the files in the appropriate directory on the Nexus pod.

But we have a problem with that. We can’t mount the entire directory because that would clobber all the existing files. Luckily, we can use subPaths as mentioned in this post to mount individual files.
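As a reminder of what a subPath mount looks like, here is an illustrative fragment (the volume name, ConfigMap name, and target path are all assumptions):

```yaml
volumeMounts:
  - name: nexus-custom-scripts            # volume backed by our ConfigMap
    mountPath: /opt/sonatype/nexus/conf/redshift-maven-repository.json
    subPath: redshift-maven-repository.json   # mounts just this one key
volumes:
  - name: nexus-custom-scripts
    configMap:
      name: nexus-custom-scripts
```

Because only the named key is mounted at the target path, the rest of the directory’s contents are left untouched.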

Since I want to do this automatically, we will want to compare the current mounts against the required list. After that, we need to patch the Nexus deployment, adding any mounts not yet present.

So, we need to:

  • list the current Nexus deployment’s mount points
  • list the current Nexus deployment’s subPaths
  • list the custom repos from our file
  • add the config map to the cluster
  • create a patch string containing any new mount points and subPaths
  • apply the patch if necessary

For this, we’ll need the help of two of my favourite commands, jq and yq, and a little command-line fu.

Let’s go…

Current mount points from deployment

A command along these lines (the deployment name may differ in your cluster):

kubectl get deployment nexus -o jsonpath='{.spec.template.spec.containers[*].volumeMounts[*].name}'

will give you something like:

nexus nexus-data-volume

Current subPaths (Repo Files) from Deployment


will give you something like:


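The command itself didn’t survive the copy here, but a sketch of that step, again assuming the deployment is called nexus, would be:

```shell
# List the subPaths currently mounted on the Nexus deployment (sketch).
# Requires kubectl and jq; the deployment name defaults to "nexus".
list_subpaths() {
  kubectl get deployment "${1:-nexus}" -o json \
    | jq -r '.spec.template.spec.containers[0].volumeMounts[].subPath // empty'
}
# Hypothetical output:
#   redshift-maven-repository.json
#   apache-org-snapshots.json
```

The jq expression filters out any mounts that have no subPath set, leaving one mounted file name per line.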

Current script files from our custom yaml

This was actually more complicated than first thought, mainly because yq doesn’t allow outputting the keys of a map object. To get around this, I needed to use yq to output the data section, followed by jq to output the keys. The end result is:


NOTE: the Docker commands can be replaced with the binary commands if you have yq and jq installed locally
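A sketch of the resulting pipeline, assuming mikefarah’s yq v4 syntax (older yq versions use different flags) and a hypothetical file name:

```shell
# Emit the keys of the ConfigMap's data section, one file name per line.
# Assumes mikefarah yq v4 and jq are on the PATH.
list_script_files() {
  yq eval -o=json '.data' "${1:?usage: list_script_files <configmap.yaml>}" \
    | jq -r 'keys[]'
}
# Usage (file name hypothetical):
#   list_script_files nexus-custom-scripts.yaml
```

yq converts the data map to JSON, and jq’s keys[] then prints each script file name on its own line.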

Creating the Deployment Patch

Now that we have the main input variables, we can process them and construct our patch string. We will be using the JSON patch type, since this allows us to add to existing lists. More information about JSON patches can be found here.
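To make the patch construction concrete, here is a minimal sketch of building a single JSON-patch "add" operation that appends to a container’s volumeMounts list (the volume name and paths are assumptions):

```shell
# Build a JSON-patch "add" op appending a volumeMount to container 0.
# The trailing "/-" in the path means "append to the end of the list".
make_mount_patch() {
  name="$1"; mountPath="$2"; subPath="$3"
  printf '{"op":"add","path":"/spec/template/spec/containers/0/volumeMounts/-","value":{"name":"%s","mountPath":"%s","subPath":"%s"}}' \
    "$name" "$mountPath" "$subPath"
}

# Usage example with hypothetical names:
make_mount_patch nexus-custom-scripts \
  /opt/sonatype/nexus/conf/redshift-maven-repository.json \
  redshift-maven-repository.json
```

Concatenating one such operation per missing mount, comma-separated inside a JSON array, gives the final patch string.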

Volume Mount

First up, add the custom repo config map as a volume if it doesn’t already exist:
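A sketch of that check, with the deployment, volume, and ConfigMap names all assumed:

```shell
# Emit a JSON-patch op adding the ConfigMap-backed volume, but only
# if the deployment does not already have a volume with that name.
volume_patch() {
  deployment="${1:-nexus}"; volName="${2:-nexus-custom-scripts}"
  current=$(kubectl get deployment "${deployment}" \
    -o jsonpath='{.spec.template.spec.volumes[*].name}')
  case " ${current} " in
    *" ${volName} "*) ;;  # volume already present - nothing to add
    *) printf '{"op":"add","path":"/spec/template/spec/volumes/-","value":{"name":"%s","configMap":{"name":"%s"}}}' \
         "${volName}" "${volName}" ;;
  esac
}
```

Emitting nothing when the volume already exists keeps the whole process idempotent, mirroring the Nexus scripts themselves.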


Sub Paths

Secondly, loop through our repository script files, adding the subPath where necessary (notice maven-group.json being placed in the parent directory, as this is where the script expects it):

for repoFile in ${repoFiles}; do
  …
done
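Since the body of that loop was lost along the way, here is a hedged sketch of what it does, with the Nexus config path and volume name assumed:

```shell
# For each script file, emit a subPath volumeMount patch op unless the
# file is already mounted. maven-group.json goes one directory up.
CONF_DIR="/opt/sonatype/nexus/conf"   # path is an assumption

subpath_patches() {
  repoFiles="$1"       # whitespace-separated required file names
  currentSubPaths="$2" # whitespace-separated subPaths already mounted
  for repoFile in ${repoFiles}; do
    case " ${currentSubPaths} " in
      *" ${repoFile} "*) continue ;;  # already mounted - skip
    esac
    mountDir="${CONF_DIR}"
    # the group script is expected in the parent directory
    [ "${repoFile}" = "maven-group.json" ] && mountDir="${CONF_DIR%/*}"
    printf '{"op":"add","path":"/spec/template/spec/containers/0/volumeMounts/-","value":{"name":"nexus-custom-scripts","mountPath":"%s/%s","subPath":"%s"}}\n' \
      "${mountDir}" "${repoFile}" "${repoFile}"
  done
}
```

Each emitted operation appends one subPath mount; files already present in the deployment are skipped entirely.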

Update ConfigMap

Thirdly, apply the config map in preparation for the mounts (the file name here is illustrative):

sops -d nexus-custom-scripts.yaml | kubectl apply -f -

NOTE: you will notice that I am using sops to decrypt the file on the fly. I encrypt the repository scripts config map file as it contains the credentials for my-protected-repo.

Patch the Deployment

And finally, patch the deployment if necessary (the variable and deployment names are illustrative):

[ -n "${patch}" ] && kubectl patch deployment nexus --type=json -p "[${patch}]"

Putting it All Together

Finally, putting it all together in a single bash function gives you this:
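The function itself didn’t make it into this copy of the post, but a hedged reconstruction of the overall flow, with every name, path, and file treated as an assumption, looks something like:

```shell
#!/bin/bash
# Sketch: ensure the custom repository scripts are mounted into Nexus.
# Deployment, ConfigMap, file and path names are all assumptions.
configure_nexus_scripts() {
  local deployment="nexus"
  local configMap="nexus-custom-scripts"
  local configFile="nexus-custom-scripts.yaml"
  local confDir="/opt/sonatype/nexus/conf"
  local mounts subPaths repoFiles patch="" f dir

  # 1. current volumes and subPaths on the deployment
  mounts=$(kubectl get deployment "${deployment}" \
    -o jsonpath='{.spec.template.spec.volumes[*].name}')
  subPaths=$(kubectl get deployment "${deployment}" \
    -o jsonpath='{.spec.template.spec.containers[0].volumeMounts[*].subPath}')

  # 2. required script files from our (sops-encrypted) ConfigMap
  repoFiles=$(sops -d "${configFile}" | yq eval -o=json '.data' - | jq -r 'keys[]')

  # 3. apply the ConfigMap itself
  sops -d "${configFile}" | kubectl apply -f -

  # 4. build the patch: the volume first, then one subPath mount per file
  [[ " ${mounts} " != *" ${configMap} "* ]] && \
    patch+='{"op":"add","path":"/spec/template/spec/volumes/-","value":{"name":"'"${configMap}"'","configMap":{"name":"'"${configMap}"'"}}},'
  for f in ${repoFiles}; do
    [[ " ${subPaths} " == *" ${f} "* ]] && continue
    dir="${confDir}"
    [[ "${f}" == "maven-group.json" ]] && dir="${confDir%/*}"
    patch+='{"op":"add","path":"/spec/template/spec/containers/0/volumeMounts/-","value":{"name":"'"${configMap}"'","mountPath":"'"${dir}/${f}"'","subPath":"'"${f}"'"}},'
  done

  # 5. apply the patch only if there is something to add
  [[ -n "${patch}" ]] && kubectl patch deployment "${deployment}" \
    --type=json -p "[${patch%,}]"
}
```

Because every step skips work that has already been done, the function is safe to run on every pipeline execution.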

which can run as part of your Jenkins X config process.

And there you have it, custom repositories in your Jenkins X managed Nexus server.

I hope you have found this post useful and that you now have a little insight into what is possible with the Nexus scripting API.

In the next post, I will have a look at using GitVersion as a dynamic version driver for your project.


Written by Steve Boardwell

Steve Boardwell is a DevOps enthusiast who loves all things automation, often finding himself trying to automate the family. He has worked in and around test, build, and configuration management since the days when servers were real and maven1 was 'the new kid on the block'. Nowadays, he prefers to set things up in a more containerised fashion, leveraging technologies such as kubernetes, terraform, and helm. Read more of Steve's writing at Medium: