We are currently looking to optimize our SCM-Manager instance specifically for Subversion (SVN) workflows, with a primary focus on reducing latency during metadata queries and maximizing data transfer speeds.
We are running SCM-Manager as a Docker container on a Linux host and would appreciate any advice on fine-tuning this setup.
just to be sure: You’re talking about SVN metadata, right? Not metadata you get from SCM-Manager using the REST API?
To be honest, our knowledge about SVN is limited and we highly rely on SVNKit for the SVN internals.
The basics about SCM-Manager itself would be this:
only install plugins you really need
keep your permissions simple (not too many groups)
if you’re running behind a reverse proxy, choose a fast one (especially for SVN, which, as far as I know, is rather chatty and issues many requests)
About Docker: Take a look at `docker stats` to check whether memory is okay. You can also check standard Java stats from inside the container (`docker exec -ti scm bash`); for example, you can use `jstat -gcutil 1 1000` to monitor the garbage collector (process 1 inside the container, sampled every 1000 ms).
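A quick sketch of how you could turn those `jstat` samples into a number. The log lines below are made-up example output (real values will differ); the awk one-liner just computes how many full GCs happened per sampled second from the cumulative FGC column:

```shell
# Capture GC stats from the SCM-Manager container, e.g.:
#   docker exec scm jstat -gcutil 1 1000 60 > gcutil.log
# (PID 1 inside the container, one sample per second, 60 samples)
#
# Made-up sample of such output for illustration; FGC (column 9) is the
# cumulative full-GC count:
cat > gcutil.log <<'EOF'
  S0     S1     E      O      M     CCS    YGC     YGCT    FGC    FGCT     GCT
  0.00  99.98  62.65  99.99  95.12  90.33   412    8.210   118   44.120  52.330
  0.00  99.98  71.02  99.99  95.12  90.33   413    8.231   125   46.530  54.761
  0.00  99.98  12.44  99.99  95.12  90.33   415    8.270   132   48.910  57.180
EOF

# full GCs per second = (last FGC - first FGC) / elapsed sample seconds
awk 'NR==2 {first=$9} NR>2 {last=$9; n++} \
     END {printf "%.1f full GCs/sec\n", (last-first)/n}' gcutil.log
```

Anything above zero sustained full GCs per second is a sign the heap is too small.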
Do you have concrete issues?
And by the way: Thanks a lot for your nice feedback.
Yes, I’m talking about SVN metadata. We are currently using SVN version control inside Unreal Engine. Whenever a file is moved or renamed, Unreal sends queries to the SVN server.
Usually, this operation does not take a lot of time for us. Unfortunately, the same operation now seems to take much longer (1 to 5 minutes) for each file operation.
2026-03-02 00:49:22.841 [CentralWorkQueue-11] [ ] ERROR sonia.scm.work.UnitOfWork - task sonia.scm.search.LuceneSimpleIndexTask@174aabb2 failed after 3.591 s
java.lang.IllegalArgumentException: path is required
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:141)
at sonia.scm.repository.api.CatCommandBuilder.getStream(CatCommandBuilder.java:97)
at com.cloudogu.scm.search.FileContentFactory.create(FileContentFactory.java:58)
at com.cloudogu.scm.search.FileContentFactory.create(FileContentFactory.java:48)
at com.cloudogu.scm.search.Indexer.store(Indexer.java:56)
at com.cloudogu.scm.search.IndexSyncWorker.updateIndex(IndexSyncWorker.java:91)
at com.cloudogu.scm.search.IndexSyncWorker.ensureIndexIsUpToDate(IndexSyncWorker.java:86)
at com.cloudogu.scm.search.IndexSyncWorker.ensureIndexIsUpToDate(IndexSyncWorker.java:71)
at com.cloudogu.scm.search.IndexSyncWorker.ensureIndexIsUpToDate(IndexSyncWorker.java:60)
at com.cloudogu.scm.search.IndexSyncer.ensureIndexIsUpToDate(IndexSyncer.java:90)
at com.cloudogu.scm.search.IndexSyncer.ensureIndexIsUpToDate(IndexSyncer.java:51)
at com.cloudogu.scm.search.IndexerTask.update(IndexerTask.java:49)
at sonia.scm.search.LuceneIndexTask.run(LuceneIndexTask.java:60)
at sonia.scm.work.UnitOfWork.run(UnitOfWork.java:112)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
I no longer see the ERROR sonia.scm.work.UnitOfWork after removing the “search” plugin.
I’ve also updated scm-manager to the latest version 3.11.5.
Unfortunately, we are still experiencing erratic speed drops when performing SVN operations.
Is there a way to deploy an SVN server and connect directly to the repositories hosted on SCM-Manager using the svn:// protocol instead of going through SVNKit?
I’m afraid a direct connection bypassing SCM-Manager is nothing I would encourage you to do. Depending on what you use SCM-Manager for, at least caches will not be invalidated.
You wrote “Usually, this operation does not take a lot of time for us”. Do you mean that this was faster with former versions of SCM-Manager, or has it always been the case that performance dropped over time?
However, SCM-Manager is lacking memory. Probably more than half of the time it’s frantically trying to free up memory. Seven “full garbage collections” per second is definitely nothing you should observe, and the different spaces are at their limits. Have you tried to increase the memory for the process? Do you have a limit somewhere? Is SCM-Manager running on its own machine?
It would be of great interest whether, after increasing memory, SCM-Manager just hits the limit later (then we would have to look for a memory leak somewhere) or whether it runs fine.
If you want to experiment with Java VM options, you can set them using the environment variable `JAVA_OPTS`.
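As a sketch of what that could look like for a Docker deployment (container name, port, volume path, and heap sizes here are assumptions to adapt to your setup):

```shell
# Hypothetical example: start SCM-Manager with an explicit heap range via
# JAVA_OPTS -- values are placeholders, tune them to your VM's 40 GB of RAM.
docker run -d --name scm \
  -e JAVA_OPTS="-Xms2g -Xmx8g" \
  -p 8080:8080 \
  -v /srv/scm-home:/var/lib/scm \
  cloudogu/scm-manager
```

Afterwards, re-run the `jstat -gcutil` check to see whether the full-GC rate actually goes down or the process simply hits the new limit later.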
The SVN performance dropped over time for us. I think it might have been due to the “search” plugin causing worker crashes. After removing it, the output of `jstat -gcutil 1 1000` looks a lot better.
We are running SCM-Manager in a VM with 40 GB of RAM. CPU utilization from `docker stats` is around 0.8% while idling and can go up to 400% when 4 to 5 people are interacting with SVN.
We are still experiencing slow SVN speeds for our artists.
Our primary usage of SCM-Manager is to manage SVN projects, users, and permissions.
It would be great if I could test and compare the performance of the direct svn protocol vs. SVNKit.
SVNKit is solid, but it would also be good to have an alternative option for pulling a repository, similar to how you can use different protocols for git clone.
I ran some initial tests comparing two setups: SCM-Manager using SVNKit and a native Subversion server. Both setups have identical hardware and network configurations.
In our tests, the native SVN server was roughly 2x faster for repository operations. Our repositories are quite large and contain many heavy binary files, so performance is a major concern for us.
I really like SCM-Manager’s repository and user management features, but since SVN users are relatively niche, relying on SVNKit (a Java re-implementation of Subversion) makes me a bit concerned about long-term performance and compatibility.
If we wanted to develop an SCM-Manager plugin that uses the native Subversion implementation instead of SVNKit, what core components or extension points would we need to implement to support that?
Replacing SVNKit with a native implementation could be … well, interesting? The integration is more than just redirecting streams.
Before you really think about this, there is one thing that came to my mind: If I understand you correctly, you are mostly handling big non-text files like images, right? In this case, the GZip compression could be an issue. The option can be found in the global administration under “Settings” - “Subversion”. If you have compression enabled, you should run another test with it disabled.
Hullo @pfeuffer, we already have the GZip compression option disabled.
The overhead of replacing SVNKit with a native implementation is clearly significant. While the performance gap is frustrating, investing heavily in SVN is a hard sell.
Alternatively, I would be keen to hear any suggestions you have for optimizing SVN performance. Our repositories are usually quite large (1 to 2 TB), and any improvement would greatly help us.
Hey @tanh, after talking to my colleague about this, I’m afraid we’re running out of options here. To dig deeper into this, it would be of interest whether this simply has to do with throughput or whether we’re hitting a DAV protocol wall (which is the only protocol SVNKit implements, if we’re not mistaken).
You could set up another test server (or modify your running instance) to count the number of requests the server has to handle during one client operation. To do so, you need to set two environment variables for your Docker process:
With this, you should see a log line for each request. Using grep and wc you can then count how many requests one svn update, svn checkout, or whatever you need, produces.
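A minimal sketch of the counting step. The log file name, line format, and repository path below are assumptions for illustration; check what your instance actually writes and adjust the grep pattern accordingly:

```shell
# Made-up sample of a request log (format and paths are assumptions):
cat > scm-access.log <<'EOF'
2026-03-02 01:10:01 PROPFIND /repo/svn/project/trunk
2026-03-02 01:10:01 REPORT /repo/svn/project/!svn/me
2026-03-02 01:10:02 PROPFIND /repo/svn/project/trunk/Content
2026-03-02 01:10:02 GET /api/v2/me
EOF

# Count only the requests hitting this repository during one svn operation
# (grep -c counts matching lines; `grep ... | wc -l` would do the same):
grep -c '/repo/svn/project' scm-access.log
```

Running this against a log captured during a single `svn update` gives you the requests-per-operation number; a very high count would point at DAV chattiness rather than raw throughput.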
But to be honest, even with this number we have few options. Replacing SVNKit with access to the native implementation should be doable, but that’s nothing we could afford to do at the moment.