Zarafa Archiver & Dynamic Configurations with Split Multi-Server

With exploding data sets in groupware systems, most environments tend to hard-limit the size of user mailboxes. Users tend to keep their groupware data for as long as they are associated with a company, which often means years or even decades of data – for a single user. IT departments want to provide users with the best level of data availability, even though this data is often no longer really business-related.

Should we limit mailbox sizes to cope with these issues?

There are important IT-related reasons for limiting mailbox sizes. One is to keep databases from growing beyond a reasonable limit, as database size has a direct impact on how efficiently you can back up or restore your valuable data. Additionally, forcing users to stay within narrow mailbox quotas costs valuable work time and therefore generates business costs. Since IT infrastructure generates costs as well – especially in the groupware space, where traditionally “better” hardware with more expensive (mostly SAS 15k) drives is in use – archiving solutions came around to reduce the dependency on expensive hardware while keeping users’ data accessible and keeping them productive. No user needs the same performance for years-old data, so traditional SATA drives (available at 5-10 times the size of high-performance SAS drives, at a fraction of the price) are the best way to store long-term data. This method is called storage tiering and delivers the right performance/value ratio without limiting users at levels which cost productivity.

What is our answer?

Zarafa Archiver is the corresponding product for transparent archiving in the Zarafa ecosystem. It enables IT departments to reduce infrastructure costs while meeting the real performance needs of your groupware system.

Zarafa Archiver at a glance

Zarafa Archiver runs on separate zarafa-server instances with their own database and attachment storage. For some customers, Zarafa Archiver is the first step into Multi-Server environments, as this is how Zarafa Archiver is set up. Zarafa Archiver is very flexible in how its archiving policies are configured, namely copy, stubbing, deleting and purging. Copy can archive data already on delivery (via dagent), making it impossible for users to delete data by accident, as all data remains available in the archive. Stubbing, the most commonly used practice in archiving, leaves only a very small mail envelope in the primary store, keeping valuable metadata (like senders, recipients, subject, dates and flags) in a very small amount of storage while moving the larger body and attachments to the slower storage backend. Deleting frees the data on the primary store completely, whereas purging deletes data permanently from the archive as well. All steps are deeply configurable, allowing customers to set up the right policy for their requirements. Transparent archiving is truly transparent at Zarafa: users can access archived content from any client, using every protocol, at any time.
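
As a sketch of how these four policy stages might be tuned, assuming the directive names found in a standard archiver.cfg (the day values below are purely illustrative assumptions, not recommendations):

```
# archiver.cfg -- illustrative policy sketch; all ages are example values
archive_enable = yes   # copy: place messages into the archive store
archive_after  = 30    # days after which a message is copied to the archive
stub_enable    = yes   # stubbing: replace archived bodies with a small envelope
stub_unread    = no    # leave unread mail untouched in the primary store
stub_after     = 60    # days after which archived messages are stubbed
delete_enable  = no    # deleting: optionally remove items from the primary store
purge_enable   = no    # purging: never remove data from the archive itself here
```

With a policy like this, users keep full-speed access to recent mail on the primary (fast) storage, while older bodies and attachments live only on the cheaper archive tier.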

Dynamic Configuration

In very large environments with hundreds of locations there are also separate archiving requirements per location, which calls for dynamic configurations of the archive system. Zarafa Multi-Server environments share the same LDAPv3-enabled backend, so potentially all users are visible to each and every Multi-Server node. As Zarafa Archiver is licensed separately (not every customer needs to archive as many users as exist in the corporate infrastructure), each run of Zarafa Archiver checks how many users have archives attached. This license count is necessary for the overview and therefore requires a license check on every node. If one of the locations is unavailable, for example for maintenance, central archiving is not possible until the connection is restored.

Split Multi-Server is a technique to expose only the nodes required for a specific function at a specific location. This reduces the memory footprint (as not all data has to be cached) and therefore also reduces the infrastructure costs of an archiving solution (less RAM and fewer CPU requirements). However, Split Multi-Server is a technique that should never be used on default Multi-Server systems (hosting user mailboxes), as it eliminates the ability to use standard groupware functionality such as free/busy information and public folders.

Split Multi-Server is set up using the ldap_user_search_filter directive in ldap.cfg. When the attribute zarafaUserArchiveServers is correctly configured for each user in the backend, the archive nodes only need to know about these users. A common value such as ldap_user_search_filter = (zarafaAccount=1) is then narrowed to ldap_user_search_filter = (&(zarafaAccount=1)(zarafaUserArchiveServers=*)). This reduces memory overhead and allows IT administration to see exactly which users are archived from the corresponding nodes. Furthermore, we do not want the server to check licensing for zarafa-archiver on every node. In practice it makes sense to name archive servers differently from the primary nodes. With this in place, the visible servers can be limited using ldap_server_search_filter with a value such as (|(cn=zarch*)(cn=zarafa1)(cn=zarafa2)(cn=zarafa3)). The archiving servers then only need to see themselves (as archiving Multi-Server nodes) and the nodes where users have their archives attached (in this example zarafa1-3).
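
Putting both filters together, a minimal ldap.cfg fragment for an archive node might look like the following sketch (the host names zarch* and zarafa1-3 are the example names used above, not fixed conventions):

```
# ldap.cfg on an archive node (sketch; names follow the example above)

# Only expose users that actually have an archive attached:
ldap_user_search_filter   = (&(zarafaAccount=1)(zarafaUserArchiveServers=*))

# Only expose the archive servers themselves (cn=zarch*) plus the
# primary nodes whose users have archives attached here:
ldap_server_search_filter = (|(cn=zarch*)(cn=zarafa1)(cn=zarafa2)(cn=zarafa3))
```

The primary mailbox nodes keep their default filters and remain a full Multi-Server cluster; only the archive nodes run with this narrowed view.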


Archiving is more important than ever for controlling data growth and reducing costs in the medium to long term without wasting users’ valuable time. Making data available from any client and any device while being a completely backend-oriented solution, Zarafa Archiver is easily deployed in small to very large setups. Its dynamic configuration options enable IT departments to set up exactly the groupware infrastructure they intend.

Michael Kromer
