Configuring CrashPlan 4.8.0 Pro -or- Home on FreeNAS 9.10

The bulk of this post has been pieced together from other bits of documentation on the web (see Additional Sources at the bottom of the post), but most of the available information applies specifically to CrashPlan “home” — not “pro”. Also, the CrashPlan plugin included with FreeNAS 9.10 is quite out of date, so this guide will help you install and configure the most recent version of either the Home or the Pro edition of CrashPlan. This how-to assumes that you’ve already installed FreeNAS.

Step #1: Install the included CrashPlan plugin

This will create the CrashPlan jail for you. Login to the admin interface of your FreeNAS box and navigate to Plugins:

Install the included CrashPlan plugin

Click OK to begin the installation

Wait for the install process to complete

Once the installation process has completed, verify that the plugin shows up under the Installed tab of the Plugins menu, but do not attempt to start the CrashPlan service just yet.

The CrashPlan plugin has been installed but is not yet running

Step #2: Configure a gateway IP address for the CrashPlan jail

Navigate to Jails -> crashplan_1 -> Edit

Edit the crashplan_1 jail configuration

Configure the jail’s network settings

You’ll notice that FreeNAS has already assigned an IP address to this jail. You can change it now to suit your needs, or leave it alone. Click on Advanced Mode so that we can configure a gateway.

Configure your gateway and/or any other desired network settings

Configure any other settings you need and then scroll down to the bottom of the dialog box and click Save.

Step #3: Enable SSH

We’re actually going to enable SSH on both the NAS and the crashplan_1 jail. Navigate to Services and click the wrench icon next to SSH.

Navigate to "Services"

Navigate to “Services”

Click the wrench icon to configure SSH Settings

Check the option to Login as Root with password and click OK. Later on, we’ll change this to only allow public key authentication.

Allow root logins with password

Then start the SSH service:

Click the toggle switch to start the SSH service

If everything went well, you should be able to use secure shell to connect to your NAS.
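
For example (10.0.2.218 is the example NAS address used later in this guide; substitute your own):

ssh root@10.0.2.218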

Step #4: Enable SSH on the crashplan_1 jail

Connect to your NAS as root via ssh

Login to the NAS as root via ssh, then drop into the crashplan_1 jail via jexec

Once logged in, type the following command and hit enter:

jls

This will output a listing of available jails:

[root@freenas] ~# jls
   JID  IP Address      Hostname                      Path
     1  -               crashplan_1                   /mnt/tank/jails/crashplan_1

To drop into the crashplan_1 jail, you need to run jexec followed by the number of the desired jail from the output of the previous command:

jexec 1

After you’ve executed that last command, your shell should look something like that last screenshot. Now we can continue with configuring ssh on the jail:

vi /etc/ssh/sshd_config

We need to change the value of the PermitRootLogin setting to without-password and verify that the value of PasswordAuthentication is set to no. Additionally, we need to ensure that the values of AllowTcpForwarding and PubkeyAuthentication are yes. The only option that you should have to change is the first one; the latter three should have the correct setting by default, but check them just in case. Make sure they match the following:

PermitRootLogin without-password
PasswordAuthentication no
AllowTcpForwarding yes
PubkeyAuthentication yes
Set "PermitRootLogin" to "without-password" to enable public key authentication via ssh

Set “PermitRootLogin” to “without-password” to enable public key authentication via ssh

Verify that TCP forwarding is enabled. If this is not enabled, then we won’t be able to connect to the CrashPlan engine from the CrashPlan app later on.

Save the file and quit, then execute the following commands:

sysrc sshd_enable=YES
service sshd keygen
service sshd start
Enabling and starting ssh in the crashplan_1 jail
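
If you want to double-check all four sshd_config options at once, a quick grep works (a minimal sanity check; same file we just edited):

grep -E '^(PermitRootLogin|PasswordAuthentication|AllowTcpForwarding|PubkeyAuthentication)' /etc/ssh/sshd_config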

Next, make sure to insert your public key into the .ssh/authorized_keys file:

cd
mkdir .ssh
vi .ssh/authorized_keys
Create an authorized_keys file and paste your public key into it.
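
If you don’t have your public key handy, it typically lives at ~/.ssh/id_rsa.pub (or id_ed25519.pub) on your local machine; print it there and copy it:

cat ~/.ssh/id_rsa.pub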

Paste your key into .ssh/authorized_keys, then save and quit and execute the following commands (the .ssh directory needs its execute bit, so don’t recursively chmod everything to 600):

chmod 700 .ssh
chmod 600 .ssh/authorized_keys

Optionally, update some stuff:

pkg clean; pkg update; pkg upgrade

Let’s also restrict root logins to public key authentication on the NAS (i.e., outside of the jail). Assuming you’re still logged into the jail, execute the following commands:

exit
vi /etc/ssh/sshd_config

Then change the PasswordAuthentication, PubkeyAuthentication and PermitRootLogin options to match the following (just like before):

PasswordAuthentication no
PermitRootLogin without-password
PubkeyAuthentication yes

Save and quit, but whatever you do, do not reboot or restart the ssh service — not yet. Next, you’ll need to create an authorized_keys file just like you did for the crashplan_1 jail:

cd
mkdir .ssh
vi .ssh/authorized_keys

Once again, paste your public key into this file, then save and quit and execute the following commands:

chmod 700 .ssh
chmod 600 .ssh/authorized_keys

… And then either restart the ssh daemon:

service sshd restart

Or, reboot the NAS via the web interface or by executing the reboot command.
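
Either way, before closing your current session, it’s wise to verify key-based login from a second terminal so you don’t lock yourself out (substitute your NAS IP):

ssh -o PasswordAuthentication=no root@10.0.2.218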

Reboot. This step isn’t strictly necessary, but you may receive the following error: “CrashPlan data did not validate, configure it first” when attempting to start the service (even after accepting the Java EULA). If this happens to you, reboot before proceeding.

Step #5: Upgrade CrashPlan in the jail

Login to the web interface and navigate to Plugins -> CrashPlan.

Navigate to "Plugins" -> "CrashPlan"

Navigate to “Plugins” -> “CrashPlan”

As soon as you click on the CrashPlan icon, an EULA for Java will appear. Scroll all the way down to the bottom and click “Yes, I accept”:

Accept the EULA

And then click the X icon (cancel) in the upper-right corner of the next window that appears.

Click ‘cancel’ in this dialog box

If you’re curious, here’s where the link mentioned in the dialog box takes you, currently: http://support.code42.com/CrashPlan/4/Configuring/Using_CrashPlan_On_A_Headless_Computer. We’re going to cover all of this stuff in the next few steps.

Now we need to download the most recent version of CrashPlan (4.8.0 at the time of this writing). There are two versions we’re concerned with: Home and Pro. I’m using the Pro version, but the steps to install and configure either version are mostly the same. Download whichever version you require:

*Note: CrashPlan Pro needs to be downloaded from your account’s Administration Console.

In the following steps, I will refer to the tarball as CrashPlan-<version>.tgz.

Make sure to download it to your local machine as it will need to be installed locally as well. Once it’s downloaded (assuming you saved the file in ~/Downloads), use scp to copy it to the crashplan_1 jail. In the following example, my pool is named tank, my crashplan jail index is 1, and the IP of my FreeNAS box (not the jail) is 10.0.2.218, so be sure to modify the following steps to suit your environment.

Execute the following commands from a terminal on your local machine; make sure you type the correct filename:

cd ~/Downloads
scp CrashPlan-<version>.tgz root@10.0.2.218:/mnt/tank/jails/crashplan_1/usr/pbi/crashplan-amd64/share/crashplan/

Then ssh to your NAS. Remember to change the IP in the following command to the IP of your NAS:

ssh root@10.0.2.218

Get the jail index of your CrashPlan jail:

jls

Drop into the jail using the jexec command, followed by the index of your CrashPlan jail. In the following example, my CrashPlan jail index is 1:

jexec 1

Once you’re in the correct jail, execute the following commands:

cd /usr/pbi/crashplan-amd64/share/crashplan/
tar -xf CrashPlan-<version>.tgz
cd crashplan-install
cpio -idv < CrashPlan-<version>.cpi
cd ..
rm -r lib*
cp -r crashplan-install/lib* .
sysrc crashplan_enable=YES

If you are using the Pro version, the following step is absolutely necessary; continuing from above:

sed -i .backup 's/<orgType>CONSUMER<\/orgType>/<orgType>BUSINESS<\/orgType>/g; s/central.crashplan.com/central.crashplanpro.com/g' conf/default.service.xml conf/my.service.xml
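
To confirm the substitutions took, you can grep the two config files (a quick sanity check, run from the same directory); you should see BUSINESS and central.crashplanpro.com in the output:

grep -E 'orgType|central\.crashplan' conf/default.service.xml conf/my.service.xml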

Step #6: Update Java

The latest version of CrashPlan requires a Java update. Since CrashPlan runs inside the jail under Linux binary emulation, you will need the 32-bit Linux version of the JRE. Run the following commands (continuing from step #5):

cd /usr/pbi/crashplan-amd64
wget `cat /usr/pbi/crashplan-amd64/share/crashplan/crashplan-install/install.defaults | grep I586 | cut -d'=' -f2`

Create a directory for the new version of Java. For example, if the tarball downloaded in the previous step was named: jre-linux-i586-1.8.0_72.tgz, then create a directory named: jre-linux-i586-1.8.0_72 and move the file into that directory.

mkdir jre-linux-i586-1.8.0_72
mv jre-linux-i586-1.8.0_72.tgz jre-linux-i586-1.8.0_72/
cd jre-linux-i586-1.8.0_72
tar -xf jre-linux-i586-1.8.0_72.tgz
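
Assuming the jail’s Linux binary compatibility layer is working (it has to be, for CrashPlan itself to run), you can sanity-check the extracted JRE before wiring it up:

/usr/pbi/crashplan-amd64/jre-linux-i586-1.8.0_72/jre/bin/java -version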

Next, you’ll need to reconfigure CrashPlan to use this version of Java. Start by editing /usr/pbi/crashplan-amd64/share/crashplan/bin/CrashPlanEngine and inserting the following text, near the very top of the file:

## Point to new version of Java
## See also: "JAVACOMMON" in: /usr/pbi/crashplan-amd64/share/crashplan/install.vars
export LD_LIBRARY_PATH="/usr/pbi/crashplan-amd64/jre-linux-i586-1.8.0_72/jre/lib/i386/jli"

For example, here’s what mine looks like:

CrashPlanEngine

Next, we need to edit /usr/pbi/crashplan-amd64/share/crashplan/install.vars and change the path of the JAVACOMMON variable so that it points to our new version of Java:

JAVACOMMON=/usr/pbi/crashplan-amd64/jre-linux-i586-1.8.0_72/jre/bin/java

install.vars

Now before we start the service, there is an issue you should be aware of. The CrashPlan service may not connect to CrashPlan’s servers. If you tail the log, and then start the service you might see an error about Java not being able to resolve the FQDN of any of CrashPlan’s servers:

service crashplan start
tail -f /var/log/crashplan/engine_output.log | grep UnresolvedAddressException

Here’s a snippet:

java.io.IOException: Unexpected Exception in connect() for remoteAddress=esa-sea.crashplanpro.com:443, java.nio.channels.UnresolvedAddressException

I have yet to find an adequate explanation as to why this happens, but here’s how you fix it (from this post in the FreeNAS forums). Basically, you need to add the IP addresses of CrashPlan’s servers to /etc/hosts. Copy and paste the following block into your terminal to do this in a single step:

echo "" >> /etc/hosts
host central.crashplanpro.com | awk '{print $4 "\t" $1}' >> /etc/hosts
host central.crashplan.com | awk '{print $4 "\t" $1}' >> /etc/hosts
host esa-sea.crashplan.com | awk '{print $4 "\t" $1 " esa-sea.crashplanpro.com"}' >> /etc/hosts
host deg-sea.crashplan.com | awk '{print $4 "\t" $1}' >> /etc/hosts
echo "" >> /etc/hosts

And then restart the service:

service crashplan restart

Verify that the service is running and connecting to CrashPlan’s servers with the following command:

sockstat -4 | grep java

You should see output similar to the following (10.0.2.2 is the IP of my CrashPlan jail):

root@crashplan_1:/usr/pbi/crashplan-amd64/jre-linux-i586-1.8.0_72 # sockstat -4 | grep java
root     java       860   142 tcp4  10.0.2.2:38223        216.17.8.49:443
root     java       860   149 tcp4  127.0.0.1:4247        *:*
root     java       860   152 tcp4  10.0.2.2:38220        162.222.43.24:443
root     java       860   153 tcp4  10.0.2.2:38222        162.222.40.90:443

Step #7: Install CrashPlan on your local machine

Again, I’m assuming the CrashPlan tarball was saved to ~/Downloads on your local machine:

cd ~/Downloads
tar -xf CrashPlan-<version>.tgz
cd crashplan-install/
sudo ./install.sh

Accept all of the default settings when prompted. Since we don’t want the CrashPlan service running locally, we need to disable it on our local machine. This next step is not universal across all distros. For example, I’m using Ubuntu 15.10; to stop and disable (i.e., prevent from starting on boot) CrashPlan, I need to do the following:

sudo service crashplan stop
sudo sh -c 'echo "manual" > /etc/init/crashplan.override'

As you can see from the method used to disable the service, the CrashPlan init script on Ubuntu 15.10 is an upstart job (in 15.04 and later, both systemd and upstart are installed by default). A notice to future readers on Ubuntu: this will most likely change, given Ubuntu’s adoption of systemd as the default init system in 15.04.
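
For reference, on a release where CrashPlan is managed by systemd rather than upstart, the equivalent would presumably be the following (assuming the unit is named crashplan; check with systemctl list-units first):

sudo systemctl stop crashplan
sudo systemctl disable crashplan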

Now we need to grab the auth token from the jail. The IP address of my jail is 10.0.2.2; yours will probably be different. Modify as necessary.

ssh root@10.0.2.2 "cat /var/lib/crashplan/.ui_info"

The output will look similar to this:

Retrieving the auth token from /var/lib/crashplan/.ui_info

The format of the .ui_info file is:

4243,eeeb3f1a-b426-4a78-b70c-460a36da9381,127.0.0.1
port|-------------auth token-------------|loopback

You need to modify the port and the auth token in your local /var/lib/crashplan/.ui_info. The port needs to be 4200, and the auth token needs to be replaced with the one from the previous command. You can simply overwrite the local .ui_info:

sudo sh -c 'echo "4200,eeeb3f1a-b426-4a78-b70c-460a36da9381,127.0.0.1" > /var/lib/crashplan/.ui_info

Any time the CrashPlan service is restarted on the NAS, this auth token could potentially change. There’s nothing you can really do about it except be aware that it happens. If the desktop app has trouble connecting to the backup engine, be sure to check this.
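
A quick way to check is to compare the token field (the second comma-separated column) on both ends; if they differ, repeat the overwrite step above. This sketch assumes the jail IP used earlier:

ssh root@10.0.2.2 "cut -d, -f2 /var/lib/crashplan/.ui_info"
cut -d, -f2 /var/lib/crashplan/.ui_info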

Step #8: Starting the CrashPlanDesktop app on your local machine

First, setup an ssh tunnel. You will need to use the IP address of the crashplan_1 jail in this command. Mine is 10.0.2.2; modify yours as necessary:

ssh -L 4200:localhost:4243 root@10.0.2.2 -Nv

Then, on your local machine, start the CrashPlanDesktop app either by clicking the icon on your desktop (if there is one) or by starting the app from the terminal:

Two ways to start the CrashPlanDesktop app

If everything went as planned, the desktop app should connect to the backup engine via the tunnel, and you should be able to configure your CrashPlan backup.

Step #9: Optionally, create a custom CrashPlan startup command

Remember how I said that the token in the .ui_info file changes occasionally? Do you manage multiple headless CrashPlan instances? The following might be incredibly convenient for you. Simply append the following to your .bashrc. Make sure the value of LOCALPORT is correct for your environment. Optionally, change DEFAULTHOST to the hostname or IP of a CrashPlan jail you connect to most frequently:

### Connect to a headless CrashPlan instance
crashplan() {
    LOCALPORT=4200
    USER=root
    # Fall back to a default host if none was given on the command line
    if [ -z "${1}" ]; then
        HOST="DEFAULTHOST"
    else
        HOST="${1}"
    fi
    # Fetch the remote .ui_info and split it into (port, token, loopback)
    ui_info=(`ssh $USER@$HOST "cat /var/lib/crashplan/.ui_info" | sed -e 's/,/ /g'`);
    # Write the remote auth token into the local .ui_info, keeping the local port
    echo "$LOCALPORT,${ui_info[1]},${ui_info[2]}" > /var/lib/crashplan/.ui_info
    # Start the desktop app after a short delay, then hold the tunnel open
    ((sleep 3; CrashPlanDesktop)&); ssh -L $LOCALPORT:localhost:${ui_info[0]} $USER@$HOST -Nv
}

Save and close .bashrc, then open another terminal and run the following command, replacing YOURCRASHPLANHOST with the host name or IP address of the jail that the CrashPlan instance is running on:

crashplan YOURCRASHPLANHOST

Or, if you set a “default” host:

crashplan

This will handle updating the auth token in your local .ui_info file to match whatever exists on the server, while also setting up the ssh tunnel and starting CrashPlan.

Additional Sources:

tail: “inotify resources exhausted” and/or “inotify cannot be used, reverting to polling: Too many open files”

If you happen across either of these messages while tailing a logfile:

  • tail: inotify resources exhausted
  • tail: inotify cannot be used, reverting to polling: Too many open files

… And you have CrashPlan installed[*], then your fs.inotify.max_user_watches limit is probably too low. I only mention CrashPlan because this seems to be fairly common with CrashPlan on Linux. This can actually happen for a variety of reasons, so to find out what is causing it, do the following:

echo 1 > /sys/kernel/debug/tracing/events/syscalls/sys_exit_inotify_add_watch/enable
echo 1 > /sys/kernel/debug/tracing/tracing_enabled

Those two commands will enable you to “watch” inotify_add_watch events. To actually watch them, wait a few minutes after enabling, and then:

cat /sys/kernel/debug/tracing/trace

You should see some output similar to this:

root@localhost:~# cat /sys/kernel/debug/tracing/trace | more
# tracer: nop
#
#           TASK-PID    CPU#    TIMESTAMP  FUNCTION
#              | |       |          |         |
            java-13752 [010] 180569.026114: sys_inotify_add_watch -> 0x1
            java-13752 [010] 180569.038573: sys_inotify_add_watch -> 0x2
            java-13752 [010] 180569.039368: sys_inotify_add_watch -> 0x3
            java-13752 [010] 180569.044214: sys_inotify_add_watch -> 0x4
            java-13752 [010] 180569.051454: sys_inotify_add_watch -> 0x5
            java-13752 [010] 180569.052107: sys_inotify_add_watch -> 0x6
            java-13752 [010] 180569.059542: sys_inotify_add_watch -> 0x7
            java-13752 [010] 180569.060265: sys_inotify_add_watch -> 0x8
            java-13752 [010] 180569.060760: sys_inotify_add_watch -> 0x9
            java-13752 [010] 180569.068002: sys_inotify_add_watch -> 0xa
            [... many more sys_inotify_add_watch lines ...]
--More--

Note the task and PID columns:

root@localhost:~# ps waux | grep java
root     13679 50.3  4.1 6393844 510320 pts/1  SNl  11:58   1:18 /usr/local/crashplan/jre/bin/java -Dfile.encoding=UTF-8 -Dapp=CrashPlanService -DappBaseName=CrashPlan -Xms20m -Xmx1024m -Djava.net.preferIPv4Stack=true -Dsun.net.inetaddr.ttl=300 -Dnetworkaddress.cache.ttl=300 -Dsun.net.inetaddr.negative.ttl=0 -Dnetworkaddress.cache.negative.ttl=0 -Dc42.native.md5.enabled=false -classpath /usr/local/crashplan/lib/com.backup42.desktop.jar:/usr/local/crashplan/lang com.backup42.service.CPService

The PID doesn’t always match up with the process that added the watch; in the example above, CrashPlan likely spawned a child process (PID 13752, according to our trace) to add the inotify watches.
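
If you want to confirm this for yourself, list threads as well as processes; on Linux, the LWP column of ps -eLf shows thread IDs (a sketch using the PID from the trace above):

ps -eLf | awk '$2 == 13752 || $4 == 13752'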

So now you know why this is happening; here is what you should do about it. First, to see what the currently configured limit is:

cat /proc/sys/fs/inotify/max_user_watches

It seems that the default limit for Ubuntu servers is 8192. To raise the limit, run the following as root:

sysctl -w fs.inotify.max_user_watches=32768

Or, to make the limit permanent, edit /etc/sysctl.conf and append the following line:

fs.inotify.max_user_watches=32768

Then be sure to re-load the config file using the following command:

sysctl -p

The limit of 32768 might be a bit high[**], so you may want a lower one depending on the available resources (RAM, CPU, etc.) of your machine. For reference, I use this configuration on production servers with 12GB of RAM or more. YMMV.
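
Since inotify adds roughly one watch per directory, counting the directories under the tree being watched gives a ballpark figure for the limit you actually need (/mnt/backup is a placeholder for your backup selection):

find /mnt/backup -type d | wc -l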

To put things back to their default settings (defaults for Ubuntu anyway):

echo 0 > /sys/kernel/debug/tracing/events/syscalls/sys_exit_inotify_add_watch/enable
echo 1 > /sys/kernel/debug/tracing/tracing_enabled

(On Ubuntu, the default setting for /sys/kernel/debug/tracing/tracing_enabled is “1”)

Notes / Further reading: