JVM Tuning

Introduction
The LogicalDOC DMS and web-client hardware requirements are variable and depend significantly on the number of concurrent users that will be accessing the system. You should tune the memory and garbage collection parameters for the JVM as appropriate for your use-case. The metrics and estimates below are only a suggestion and your system may vary.

Disk Space Usage
The size of your LogicalDOC repository determines how much disk space you will need, and the calculation is simple. Documents in LogicalDOC are by default stored directly on disk, so holding 1000 documents of 1MB each requires 1000MB of disk space. You should also allow sufficient overhead for temporary files and versions: each version of a file is stored on disk as a separate copy of that file, so account for versions in your disk size calculations (and use versioning judiciously).
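The sizing rule above can be sketched as a back-of-envelope calculation. The document count, average size, version count and overhead factor below are illustrative assumptions, not LogicalDOC defaults:

```shell
# Illustrative repository sizing: 1000 documents of 1MB each,
# an assumed average of 3 stored versions per document (each a full copy),
# plus an assumed 20% allowance for temporary files.
DOCS=1000
AVG_MB=1
VERSIONS=3
BASE_MB=$((DOCS * AVG_MB * VERSIONS))
TOTAL_MB=$((BASE_MB + BASE_MB * 20 / 100))
echo "Estimated repository size: ${TOTAL_MB}MB"
# Prints: Estimated repository size: 3600MB
```

Adjust the assumed version count to match your retention policy; with versioning disabled, VERSIONS drops to 1.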

It is highly recommended that a server-class machine with a SCSI RAID disk array is used. The performance of reading and writing documents depends almost solely on the speed of your network and the speed of your disk array. The overhead of the LogicalDOC server itself for reading documents is very low, as content is streamed directly from the disks to the output stream. The overhead of writing documents is also low, but depending on the indexing options (e.g. atomic or background indexing) there may be some additional overhead as documents are indexed or metadata is extracted from each file.

JVM Memory and CPU Hardware for multiple users
The Repository L2 cache, initial VM overhead, and basic LogicalDOC system memory are set up in a default installation to require a maximum of approximately 1GB. This means you can run the LogicalDOC repository and web-client, with many users accessing the system, on a basic single-CPU server with only 1GB of memory assigned to the LogicalDOC JVM. However, you will need to add memory as your user base grows, and add CPUs depending on the complexity of the tasks you expect your users to perform and how many concurrent users are accessing the client.

Note that for these metrics, a server supporting N concurrent users is considered equivalent to one supporting 10xN casual users.

Suggested memory+CPU settings per server:


 * For 20 concurrent users or up to 200 casual users:
   * 1GB JVM RAM
   * 2x server CPU (or 1x dual-core)

 * For 50 concurrent users or up to 500 casual users:
   * 2GB JVM RAM
   * 4x server CPU (or 2x dual-core)

 * For 100 concurrent users or up to 1000 casual users:
   * 3GB JVM RAM
   * 8x server CPU (or 4x dual-core)

For the tests providing these metrics, a Dell PowerEdge 2600 server (dual Xeon, 32-bit) was used:
 * 2x Intel Xeon 2.8GHz (533Mhz FSB, single-core)
 * 4GB RAM
 * 3x 36GB Ultra320 SCSI Raid 0

LogicalDOC Enterprise 5.0 and MySQL 5 were deployed on the server under Windows Server 2003 with Tomcat. Approximately 100,000 documents and 1000 user instances were imported into the system (note that over 2 million documents have been successfully loaded into a similarly configured LogicalDOC repository).

Similar tests were performed on an equivalently configured Linux box running SUSE 11.2 (Ubuntu and Fedora have also been tested).

More recently, tests have been run using LogicalDOC Enterprise 5.0 and MySQL 5 on SUSE Linux 11.2 with Tomcat on a dual-CPU Opteron server:
 * 2x AMD Opteron 285 2.6GHz (dual-core)
 * 4GB RAM
 * 3x 36GB Ultra320 SCSI Raid 0

Concurrent users are considered to be users constantly accessing the system through the LogicalDOC web-client, with only a small pause between requests (3-10 seconds at most) and continuous access 24/7. Casual users are considered to be users occasionally accessing the system through the LogicalDOC web-client or webdav/webservice interfaces, with a large gap between requests (e.g. occasional document access during the working day).

Permanent Generation (PermGen) Size
The default PermGen size in Sun JVMs is 64M, which is very close to the total size of the permanent objects (Spring beans, caches, etc.) that LogicalDOC creates. For this reason it is quite easy to overflow the PermGen through configuration changes or the addition of custom extensions, so it is recommended to increase the PermGen to avoid OutOfMemory errors; e.g. -XX:MaxPermSize=128M is a good starting point.
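On Tomcat, one common place to add this flag is a setenv script that the startup scripts pick up. The fragment below is a minimal sketch; the bin/setenv.sh location is the Tomcat convention, and any existing contents of JAVA_OPTS are preserved:

```shell
# bin/setenv.sh fragment: raise PermGen beyond the 64M Sun JVM default.
export JAVA_OPTS="$JAVA_OPTS -XX:MaxPermSize=128M"
```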

Typical Settings
LogicalDOC generates a high proportion of temporary objects, both in client threads as well as in the background processes. In order to reduce the spillover of temporary objects into the OldGen portion of the heap, the NewSize should be as large as possible.

The following settings tamed the garbage collections and revealed (with GC printing and JMX tracing) that the OldGen was not growing noticeably beyond the permanent space allocated for caches. Cache sizes are still estimated to top out around 520M. So, for a typical 32-bit installation with at least 2GB available for the VM:

JAVA_OPTS=-server -Xss1024K -Xms1G -Xmx2G -XX:MaxPermSize=128M -XX:NewSize=512m

The following options can also be adjusted to control garbage collection behaviour:

-XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:CMSInitiatingOccupancyFraction=80
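Putting these settings together, a Tomcat setenv.sh might look like the following sketch. The GC-printing flags at the end are an assumption added so you can verify the OldGen behaviour described above (they apply to Sun/HotSpot JVMs of this era, pre-Java 9); the file location follows the Tomcat convention:

```shell
# Hypothetical bin/setenv.sh combining the suggested 32-bit settings.
JAVA_OPTS="-server -Xss1024K -Xms1G -Xmx2G -XX:MaxPermSize=128M -XX:NewSize=512m"
# Optional CMS tuning, as discussed above:
JAVA_OPTS="$JAVA_OPTS -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:CMSInitiatingOccupancyFraction=80"
# GC printing, useful for confirming that the OldGen is not growing:
JAVA_OPTS="$JAVA_OPTS -verbose:gc -XX:+PrintGCDetails"
export JAVA_OPTS
```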

Notes for Low-End Machines
This section applies if you have less than 2GB available.

The stack size of 1024K (-Xss1024K) is generous. Some installations may occasionally require a little over 512K, while many use only 256K. If the per-thread memory consumption is too high for your installation, reduce the stack size first to 512K and then to 256K, and keep an eye out for memory-related errors in the logs.
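The per-thread impact can be estimated with simple arithmetic. The thread count below is an illustrative assumption:

```shell
# Total stack memory reserved for an assumed 500 threads at each candidate -Xss value.
THREADS=500
for KB in 1024 512 256; do
  echo "-Xss${KB}K x ${THREADS} threads = $((THREADS * KB / 1024))MB of stack"
done
# Prints:
# -Xss1024K x 500 threads = 500MB of stack
# -Xss512K x 500 threads = 250MB of stack
# -Xss256K x 500 threads = 125MB of stack
```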

The NewSize should be kept as large as possible. It can be reduced, but memory consumption should then be watched in JConsole or a similar tool to ensure that the rate of spillover of temporary objects stays low. If the machine is supporting 500 simultaneous operations, for instance, then spillover of temporary objects (from the NewSize being too small) will cause hold-ups on memory allocation as the garbage collector sweeps.

The Effects of NewSize
Given that the OldGen is composed primarily of cache data of up to about 520M, at least 1GB should be reserved for the OldGen. As -Xmx increases, the OldGen can be increased to 2G. 512M should be left as a buffer to account for miscellaneous space (PermGen, etc.). So the following variations might be applied:

 * -Xmx2G -Xms1G -XX:NewSize=512M (OldGen at least 1G)
 * -Xmx3G -Xms1G -XX:NewSize=512M (OldGen at least 2G)
 * -Xmx4G -Xms2G -XX:NewSize=1G (OldGen at least 2.5G)
 * -Xmx6G -Xms3G -XX:NewSize=2G (OldGen at least 3.5G)
 * -Xmx8G -Xms4G -XX:NewSize=3G (OldGen at least 4.5G)
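These variations follow a simple pattern: roughly, OldGen headroom is -Xmx minus NewSize minus the ~512M buffer. The sketch below just checks that arithmetic (values in MB); it is a sanity check on the figures above, not a tuning tool:

```shell
# OldGen headroom = Xmx - NewSize - 512M buffer, for each variation above (MB).
BUFFER=512
for pair in "2048 512" "3072 512" "4096 1024" "6144 2048" "8192 3072"; do
  set -- $pair
  XMX=$1
  NEW=$2
  echo "Xmx=${XMX}M NewSize=${NEW}M -> OldGen about $((XMX - NEW - BUFFER))M"
done
# Prints 1024M, 2048M, 2560M, 3584M and 4608M, matching the
# "at least 1G / 2G / 2.5G / 3.5G / 4.5G" figures above.
```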

If you need heap sizes in this range, you will need to run JConsole (with Java 6) to observe the rate of spillover from Eden to Survivor to OldGen. If, after the system has been running for a while, the OldGen size stabilizes, then the NewSize can be increased accordingly.

Real-World Example
The following settings are used in a high-volume clustered environment: 64-bit, dual 2.6GHz Xeon (dual-core per CPU), 8GB RAM. Note the memory ratios and try to preserve them in other environments. A MaxPermSize of at least 128M is recommended, but 256M may sometimes be required.

-server -Xss1M -Xms2G -Xmx3G -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:NewSize=1G -XX:MaxPermSize=128M -XX:CMSInitiatingOccupancyFraction=80