How do I change the default IP ranges of CES before the setup?

Dear Community,

I’m trying to install and configure the CES on VMware ESXi 6.5.

I already got the VirtualBox image running after some adjustments in the .OVF file. Now I’ve encountered the following problem: the IP ranges used by Docker and the bridge interface interfere with our network. I would like to change their ranges to, for example: docker: and bridge: . Currently docker uses and on the bridge there is a route with.

There is little to no information about the configuration of Docker and the bridge interface on the site and the community forum, as far as I could find. I would be very pleased to get some advice and/or guidance on how to achieve this.

Thanks in advance.

Kind regards,
Marijn Nijhof


Hi Marijn,

Thanks for your question. You are right, we are still upgrading our documentation. Your issue is not covered yet, but we will try to provide a solution for you. Most of the steps require root permissions:

  1. Boot up a fresh Cloudogu EcoSystem, but do not start the setup yet.
  2. Change the IP range of the Docker default network interface (docker network inspect bridge for details):
    • Create the file /etc/ces/daemon.json with the following content:
      { "bip": "" }
      Note: The subnet you wanted to use () did not work out in our tests. Maybe this was due to our network environment, which probably differs from yours. “” however did work.
    • Restart the Docker daemon via systemctl restart docker
    • Check the Docker daemon status via systemctl status docker. There should be no “failed” or “error” messages.
  3. Change the IP range of the “cesnet1” docker network (docker network inspect cesnet1 for details):
    • Remove the network via docker network rm cesnet1
    • Recreate the network with your IP range:
      docker network create --subnet= --ip-range= cesnet1
    • Add a new firewall rule for the new IP range:
      ufw allow from "$(docker network inspect cesnet1 -f '{{(index .IPAM.Config 0).Subnet}}')" to any port 4001
    • Remove the old firewall rule via ufw delete 5
      • This will delete the rule “allow from to any port 4001”
      • Confirm with “y”

Now you should be good to go and can continue with the regular setup in your browser.

If you have any more questions, please feel free to ask!

For your interest: We plan to offer a dedicated VMware image in the near future to ease the use of the CES.

Kind regards


Hi Michael,

Thank you very much for the fast reply. Sorry for the late response; I got sick, and as of today I was finally able to pick up this project again.

I followed all the steps you gave me. I would like to note (for other people who may read this) that I think the daemon.json file should be in /etc/docker/ instead of /etc/ces/; at least, it didn’t work for me before I put the file there.

The good news is that I went through the setup process and it completed successfully this time. After this I received the Unix user credentials. I’ve been waiting for a while now, but I’m still getting an HTTP 404 error on every subpage; see the attached screenshot.

I have the feeling it still has something to do with networking, so I’ll put some network config output at the end of this reply. (I have masked some IP addresses with crosses, because I don’t want to share them publicly.) I hope they are useful.

Thank you in advance.

Kind regards,
Marijn Nijhof

root@ces:/etc/ces# ip a
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:50:56:a2:2d:18 brd ff:ff:ff:ff:ff:ff
inet 172.XXX.XXX.12/24 brd 172.XXX.XXX.255 scope global ens32
valid_lft forever preferred_lft forever
inet6 fe80::250:56ff:fea2:2d18/64 scope link
valid_lft forever preferred_lft forever
3: br-09c95164d582: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:e3:c4:13:69 brd ff:ff:ff:ff:ff:ff
inet brd scope global br-09c95164d582
valid_lft forever preferred_lft forever
inet6 fe80::42:e3ff:fec4:1369/64 scope link
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:8d:6d:1b:fc brd ff:ff:ff:ff:ff:ff
inet brd scope global docker0
valid_lft forever preferred_lft forever

root@ces:/etc/ces# ip route
default via 172.XXX.XXX.254 dev ens32 proto static
dev br-09c95164d582 scope link
172.XXX.XXX.0/24 dev ens32 proto kernel scope link src 172.XXX.XXX.12
dev docker0 proto kernel scope link src linkdown

Hi Marijn,

First of all, you are totally right about the daemon.json; /etc/docker/ is the correct folder for it.

I’m sorry to hear that your dogus won’t work correctly; let’s try to debug that.
First, we should check whether all containers started up correctly. Please use docker ps -a to make sure that all containers are in a “healthy” state (indicated in the STATUS column). If there are any “unhealthy” or restarting containers, please have a look at the respective log file inside the /var/log/docker folder. To help us find the problem, you can post the complete log file (or sections of it) here.
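A quick way to skim those logs for failures is a recursive grep. This sketch seeds a scratch directory with a sample line so it runs anywhere; on the CES machine, point LOG_DIR at /var/log/docker instead:

```shell
#!/bin/sh
# Sketch: list log files that contain error indicators. LOG_DIR defaults to
# a scratch directory with one sample line so the snippet is self-contained;
# on the CES machine use LOG_DIR=/var/log/docker.
LOG_DIR="${LOG_DIR:-$(mktemp -d)}"
[ -f "$LOG_DIR/sample.log" ] || \
  printf 'Sep 30 13:34:04 ces docker/sample[1]: failed to start\n' > "$LOG_DIR/sample.log"

# -r: recurse, -i: case-insensitive, -l: print matching file names only
grep -ril 'failed\|error' "$LOG_DIR"
```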


Hi Robert,

All containers except for ldap-mapper report a healthy status. I’ve checked the log and the following is repeated over and over:
Sep 30 13:34:04 ces docker/ldap-mapper[5096]: failed to execute template of source /home/ldap-mapper/ldap-mapper.conf.tpl: template: ldap-mapper.conf.tpl:67:28: executing “ldap-mapper.conf.tpl” at <.Config.GetAndDecrypt>: error calling GetAndDecrypt: key sa-ldap/username does not exists

Kind regards,

Hi Marijn,

that’s an interesting output, but most likely not the general problem on your machine. The ldap-mapper is only used in combination with premium dogus, so you can deactivate it for now via sudo docker stop ldap-mapper.
We will try to establish the same environment on one of our testing machines and see if we can get closer to a solution. Just like you, we assume it has something to do with this special network configuration, but we don’t know yet what exactly is causing the trouble.


Hi Marijn,

We were able to reproduce your problem and we now know that it is caused by an unreachable user backend.

What kind of user backend did you choose during the setup process? Was it the embedded or the external one? If you chose the external one, did you try the “test connection” option after you filled in all user backend credentials?
In case you chose the embedded option, did you install the ldap dogu too? To check if it is present on your machine, just type sudo docker ps -a.

Further error messages confirming a problem with the user backend may be found in /var/log/docker/cas.log. Please have a look and see if there is anything in it that could get us closer to the cause.


Hi Robert,

We used the external one. I’ve tested the connection with my own user credentials, and it showed a screen with the info it received about my account. I’ve checked the cas log, and it indeed looks like the problem has something to do with our AD connection:

Oct 6 08:25:47 ces docker/cas[5096]: #033[1;31m2021-10-06 08:25:47,845 ERROR [org.ldaptive.transport.netty.NettyConnection] - <Connection open failed for org.ldaptive.transport.netty.NettyConnection@1923521468::ldapUrl=[org.ldaptive.LdapURL@-388806472::scheme=ldaps, hostname=172.XXX.XXX.1, port=636, baseDn=null, attributes=null, scope=null, filter=null, inetAddress=null], isOpen=false, connectTime=null, connectionConfig=[org.ldaptive.ConnectionConfig@968181632::ldapUrl=ldaps://172.XXX.XXX.1:636, connectTimeout=PT50M, responseTimeout=PT5S, reconnectTimeout=PT2M, autoReconnect=true, autoReconnectCondition=org.ldaptive.ConnectionConfig$$Lambda$951/0x0000000100ba1440@6bd750a0, autoReplay=true, sslConfig=[org.ldaptive.ssl.SslConfig@1838611486::credentialConfig=null, trustManagers=[org.ldaptive.ssl.AllowAnyTrustManager@29124a5a], hostnameVerifier=org.ldaptive.ssl.DefaultHostnameVerifier@40544c14, enabledCipherSuites=null, enabledProtocols=null, handshakeCompletedListeners=null, handshakeTimeout=PT1M], useStartTLS=false, connectionInitializers=[org.ldaptive.BindConnectionInitializer@732279030::bindDn=scm manager, bindSaslConfig=null, bindControls=null], connectionStrategy=org.ldaptive.ActivePassiveConnectionStrategy@5c5832d8, connectionValidator=null, transportOptions={}], channel=null>#033[m

I’m not entirely sure what I’m looking at here, I hope you can conclude something out of this. I see a lot of nulls, but I’m sure I’ve filled in every box.

Thank you for the support.

Kind regards,

Hi Marijn,

I’m glad you haven’t given up yet :slight_smile:

If you just want to test the CES, the embedded LDAP might be the easier option.
But we should get it running with your AD too. This error message just says that the connection was unsuccessful. There should be a reason or a more specific error message in the logs.

If this is not the case, you can try to increase the log-level with cesapp edit-config cas logging/root DEBUG && docker restart cas.

The nulls shouldn’t be an issue here. It might be a problem that you used an IP address + SSL for your AD configuration. CAS expects that the host name can be verified by the SSL certificate (even if you have chosen SSLAny): DefaultHostnameVerifier (Ldaptive 2.1.0 API)

I think the test connection in the setup is less restrictive here.

If this is the problem, there should be a in the CAS log and you should provide the correct host name. You can reconfigure CAS by entering cesapp edit-config cas and then restarting it.
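To illustrate the hostname point with a throwaway example (dc.example.local is a made-up name; this only shows that a certificate's subjectAltName carries DNS names, which a URL built from a bare IP cannot match):

```shell
#!/bin/sh
# Sketch: generate a throwaway self-signed certificate for a made-up FQDN
# and inspect its subjectAltName. Requires OpenSSL 1.1.1+ (for -addext/-ext).
DIR="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$DIR/key.pem" -out "$DIR/cert.pem" \
  -subj "/CN=dc.example.local" \
  -addext "subjectAltName=DNS:dc.example.local" 2>/dev/null

# The SAN lists only the FQDN -- a client connecting via ldaps://<ip>:636
# has nothing to match it against, so hostname verification fails.
openssl x509 -in "$DIR/cert.pem" -noout -ext subjectAltName
```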

Kind regards

Hi Michael,

Haha yeah, I’m not giving up, because we have an older version running in a nested VM; a co-worker once started using your ecosystem. I don’t want to have it running in a nested situation, so I’m trying to get it right once and for all.

Thank you so much for the help. I indeed used the IP of the DC instead of the FQDN, because it could not reach the DC otherwise. You were right on with your assumption. It took me some time, but I found out that the installation was, for some reason, looking at the wrong nameserver. I expected it to be our DC, because that is what I entered in the netplan config. I changed resolv.conf to the right nameserver. After that I ran the setup again, this time with the FQDN of the DC. Now I have a working installation! :smiley:

Marijn Nijhof