Svn committing transaction... timeout

SCM-Manager 2.48.3

If I commit a large number of files to an SVN repo, all the files upload, but at the “committing transaction…” part of the push, after 10 minutes or so, I get a timeout. Is this expected behavior, or is there a config option that can help? I have the following:

<Set name="idleTimeout">
          <Property name="jetty.http.idleTimeout" default="8000000" />
</Set>
In the config.
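For context, that `idleTimeout` setter normally sits inside the HTTP connector definition in Jetty's XML server configuration. A rough sketch of the surrounding structure (element names are standard Jetty XML config; the exact file name and layout vary by SCM-Manager version) might look like this:

```xml
<!-- Sketch of a Jetty HTTP connector with an idle timeout.
     8000000 is in milliseconds, i.e. roughly 133 minutes. -->
<New class="org.eclipse.jetty.server.ServerConnector">
  <!-- ... connector arguments (server, connection factories) ... -->
  <Set name="idleTimeout">
    <Property name="jetty.http.idleTimeout" default="8000000" />
  </Set>
</New>
```

Note that this only governs how long Jetty keeps an idle connection open; a timeout enforced elsewhere (SVN client, reverse proxy) can still cut the request off earlier.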

If I commit fewer files, the commit works fine. It seems like the number of files matters more than the total file size.

It looks like the issue is caused by SVN Server file timeout settings.

Editing the “servers” file in:


to have the following under “[global]” at the bottom of the file seems to fix the issue.

http-timeout = 7200
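For reference, a sketch of the relevant part of the Subversion `servers` file (on many systems the per-user copy lives at `~/.subversion/servers`, though the exact path depends on the setup); the value is in seconds:

```ini
[global]
# HTTP(S) request timeout in seconds; 7200 = 2 hours.
# Raising this keeps the client from giving up during the long
# server-side "committing transaction..." phase of large commits.
http-timeout = 7200
```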

Though now I get a 502 error. If I restart the Docker container running SCM-Manager, the pushes work again for a while, but big pushes seem to bring back the 502 error after some time. Smaller pushes always seem to work.

Hi @bluestek ,

just to be sure: Do you use any sort of reverse proxy or firewall to access your SCM-Manager, or do you “talk” to the Docker container directly? If the latter is the case, can you give us a rough estimate of how many files you are committing and how big they are?


It’s behind nginx-proxy-manager with the following custom settings:

keepalive_timeout 1d;
send_timeout 1d;
client_body_timeout 1d;
client_header_timeout 1d;
proxy_connect_timeout 1d;
proxy_read_timeout 1d;
proxy_send_timeout 1d;
fastcgi_connect_timeout 1d;
fastcgi_read_timeout 1d;
fastcgi_send_timeout 1d;
memcached_connect_timeout 1d;
memcached_read_timeout 1d;
memcached_send_timeout 1d;

client_max_body_size 40000M;
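For context, a minimal nginx server block forwarding to the SCM-Manager container with the directives that actually matter for this setup might look like this (the server name and upstream address are placeholders, not taken from the thread):

```nginx
server {
    listen 443 ssl;
    server_name scm.example.com;          # placeholder

    # Allow very large request bodies (big SVN commits).
    client_max_body_size 40000m;

    location / {
        proxy_pass http://scm-manager:8080;   # placeholder upstream
        # Keep long-running commit requests alive.
        proxy_connect_timeout 1d;
        proxy_read_timeout 1d;
        proxy_send_timeout 1d;
    }
}
```

The `fastcgi_*` and `memcached_*` timeouts in the list above have no effect on a plain `proxy_pass` setup; only the `proxy_*` timeouts, `client_*` timeouts, and `client_max_body_size` are relevant here.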

The 502 error seems to have been resolved with some environment changes. I’m running SCM-Manager on a low-power Proxmox server hosting an Ubuntu VM with Docker, backed by a Ceph cluster with the SCM data on a CephFS mount. I found out that the Docker VM was running out of memory due to a cloud backup program. I moved the backup software to another server in the cluster and changed the CephFS mounts to use different monitors, and now things are running great! Thanks for the reply!

Looking forward to v3!!