Page updated: 21/03/2007

gLite 3.0

glite-SE_dcache_admin_postgres - Update to version 3.0.0-0

Date 13.03.07
Priority Normal


dCache is no longer configured as a single system, as it is a highly configurable service. For this reason, multiple metapackages for dCache have been created.

The new set of metapackages is:
  • glite-SE_dcache_pool contains the minimum dependencies for dCache pool nodes
  • glite-SE_dcache_admin_postgres contains the dependencies for an administrative node running on the PostgreSQL database backend
  • glite-SE_dcache_admin_gdbm contains the dependencies for an administrative node running on the gdbm database backend.

They replace the original two metapackages:

  • glite-SE_dcache replaced by glite-SE_dcache_admin_gdbm
  • glite-SE_dcache_gdbm replaced by glite-SE_dcache_admin_gdbm

It is recommended that all YAIM-installed admin nodes be upgraded to pnfs-postgresql; this is provided as a dependency of the metapackage glite-SE_dcache_admin_postgres.

Sites that have not upgraded pnfs to pnfs-postgresql should do so either before or after the upgrade. glite-SE_dcache_admin_gdbm is provided with a dependency on pnfs-gdbm and pnfs. This backend is no longer supported, so it is recommended to upgrade to pnfs-postgresql as soon as possible.

The new PostgreSQL version is not necessarily installed by the update; this depends on the repository choice of the sites. Some (external) repositories include PostgreSQL, others do not.

Sites should check which PostgreSQL version will be installed before they perform any update.
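One way to record what is currently installed before updating is a plain RPM query. This is a generic sketch, not a gLite-specific tool; the package-name patterns are assumptions and may need adjusting for a site's repositories.

```shell
# List the PostgreSQL and pnfs RPMs installed on this node before
# running the update; fall back to a note if rpm is not available.
if command -v rpm >/dev/null 2>&1; then
    INSTALLED=$(rpm -qa | grep -i -E '^(postgresql|pnfs)' || true)
else
    INSTALLED="(rpm not available on this host)"
fi
echo "$INSTALLED"
```

Comparing this list against the RPM table at the bottom of this page shows whether the update will pull in a new PostgreSQL major version.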

The dCache repository now contains PostgreSQL RPMs of version 8.1.X, so PostgreSQL might be upgraded. Note, however, that version 8.1 can no longer read database files written by version 8.0.X or earlier. You need to dump your database beforehand, then upgrade and re-import the database.

Short "step by step":

1.) Stop the dCache service and PNFS

2.) pg_dumpall > /Some/Big/Disk/backup
[note: this dump might get large on a production system]

3.) Upgrade the PostgreSQL RPMs (and dCache)

4.) (Re-)Move the old database

5.) Create a new database:
su - postgres
initdb -D /var/lib/pgsql
start the PostgreSQL DB

6.) Import the DB: psql -f /Some/Big/Disk/backup postgres

7.) Note: I had to change some PostgreSQL config files
before PNFS could be started again. (See the old config files
or the dCache book.)

8.) Start the PNFS and dCache services.
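Collected into one script, the eight steps above look roughly like the following. This is a minimal sketch: the init-script names and database paths are assumptions (typical for dCache 1.7 on Red Hat style systems) and must be adapted to your site. With DRYRUN=echo the script only prints the commands; remove the prefix to actually run them as root.

```shell
#!/bin/sh
# Sketch of the PNFS/PostgreSQL upgrade procedure (steps 1-8 above).
DRYRUN=echo                       # remove to execute for real
BACKUP=/Some/Big/Disk/backup      # dump target from step 2; needs space

# 1) Stop the dCache service and PNFS
$DRYRUN /etc/init.d/dcache-core stop
$DRYRUN /etc/init.d/pnfs stop

# 2) Dump all databases while the old server binaries are still installed
$DRYRUN su - postgres -c "pg_dumpall > $BACKUP"

# 3) Upgrade the PostgreSQL RPMs (and dCache) with your package manager,
# 4) then move the old database directory out of the way
$DRYRUN mv /var/lib/pgsql/data /var/lib/pgsql/data.old

# 5) Create a new database cluster and start PostgreSQL
$DRYRUN su - postgres -c "initdb -D /var/lib/pgsql"
$DRYRUN /etc/init.d/postgresql start

# 6) Re-import the dump
$DRYRUN su - postgres -c "psql -f $BACKUP postgres"

# 7) Adjust the PostgreSQL config files if needed (see the dCache book),
# 8) then start PNFS and the dCache services again
$DRYRUN /etc/init.d/pnfs start
$DRYRUN /etc/init.d/dcache-core start
```

Running the sketch once in dry-run mode is a cheap way to review the whole procedure before touching a production node.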

Official changes from 1.6.6 R 5 to the 1.7.0 Branch

New Features :

  • gPlazma authorization added. Supports multiple authorization schemes, including VOMS.
  • Pools prepared to run on Windows XP.
  • The maximum (default) time a dcap/dccp request should wait for a file 'retrieve from HSM' can now be set in the door as well as on the dccp command line or with the dcap API.
  • GsiFtp and SRM understand extended proxies. gsiDcap will follow.
  • PUT and GET supported by newly defined SRM-2.2 definition.
  • dCap (client and server) now supports passive connections by default. (Client callback to server for data connection).
    The server listen ports can be restricted to one port per JVM. This allows for CEs behind a NAT or firewall while dCache-SEs are outside.
  • xRootd protocol added. This allows transparent access to dCache from within ROOT. Read-only, read-write, secure and insecure modes are available.
  • Info Provider : works with LCG 2.6.X (GIP)
  • Error type Fatal added. This allows for advanced actions (e-mail, SMS, fire alarm) in conjunction with log4j.
  • The key-value pair h=yes or h=no is added to pnfs level2, depending on whether the file is supposed to go to an HSM.
  • dCap Mover : negative size value sent to the dCap mover by the dCap library no longer causes the mover to loop.
  • dCap Door : improved permission handling : when checking permission, we now honour the actual file permissions and the permissions of the immediate parent.
  • FTP Door : Commands chmod and rmdir added.
  • Increased performance of the billing cell (mainly database interface).
  • Cleaner : cleaner can now be chained. In case of two different dCache instances with just one Pnfs file system, the cleaner directories can be chained so that a rm in pnfs
    removes deleted files from pools of both dCache instances.
  • Cleaner : a pool can (optionally) delete precious files if those files have been removed from pnfs even if there is an HSM attached.
    For security reasons, the default is to never remove precious files if there is an HSM attached to a pool.
  • PoolManager and read only pools : Pools can be declared rdOnly by a command in the PoolManager command set.
    This disallows all actions which would write data into this particular pool. (write from client, restore from HSM and p2p destination).
    This pool state is not persistent. (is not saved in the PoolManger.conf file).
  • Cost calculation for multi I/O queues: fast cost prediction was added for multi I/O queues. (Better tuning.)
  • Files can be automatically replicated on arrival in the dCache. See chapter "File Hopping" for more information.
  • Pools can be selected by protocol.
  • Pool-to-pool transfer destinations are treated separately from 'read' pool selection.
  • 'Per dCache partition' parameters now exist. dCache partitions are assigned to PoolManager links. (See the chapter 'dCache Partitioning' in the configuration section.)
  • Introduction of a central HSM flush system.
  • SRM now based on Tomcat 5 software stack
  • Client now supports $X509_USER_PROXY and $X509_CERT_DIR

Bug Fixes :

  • Pathfinder reported .(access)(pnfsid) (using PnfsManagerV3).
  • 'Precious' files were removed from write pools while the pool was shut down; this has been fixed.
  • After a 'flush storage class', the corresponding pool no longer appears dead for some time before the correct status is displayed.
  • PoolManager restore handler : a retry in the restore handler, either automatically or manually will now always refetch the StorageInfo from the PnfsManager.
  • Files are no longer requested from the HSM if they have not yet been flushed to the HSM.
  • Pnfs proxy : we now support multiple pnfs instances connected to a single dCache instance. (performance improvement).
  • dCap library : fstat in preload dCap lib fixed.
  • dCap library : SEGV on more than 65 CA fixed.
  • cells : In case of ssh kerberos clients, the id string exceeded the maximum defined value in cells. This caused a protocol exception.
  • dCap : setting/sending file permissions with url-like syntax (on write).
  • dCap : large file problem fixed. dCap lib always opens local files with O_LARGEFILE.
  • FTP door removes file entry if transfer fails.
  • dCap Mover : dCap mover disables pool for all I/O errors.
  • Security enhancement: PNFS cannot be mounted/accessed from client ports above 1024.
  • The FTP door no longer lists the content of directories users should not have access to, e.g. /root.
  • The client no longer returns a non-zero exit code on success.

Administrator side:

  • Dump utility for the config/SI-* files in the pools.
  • The PnfsManager queue can be dumped to disk.
  • A specific exception was introduced for PnfsManager FileNotFound. This makes the pool code cleaner.
  • isLink() added to PnfsFile
  • Pool Manager query message (file in pool ?) can be suppressed by flag in ProtocolInfo.
  • new method in repository : contains( PnfsId pnfsid);

Please also have a look at the list of known issues.

Updated rpms

Name                            Version   Full RPM name                                       Description
dcache-client                   1.7.0-16  dcache-client-1.7.0-16.noarch.rpm                   dCache Client
dcache-server                   1.7.0-24  dcache-server-1.7.0-24.noarch.rpm                   dCache Server
edg-mkgridmap                   2.8.1-1   edg-mkgridmap-2.8.1-1.noarch.rpm                    A tool to build the grid-mapfile
edg-mkgridmap-conf              2.8.1-1   edg-mkgridmap-conf-2.8.1-1.noarch.rpm               edg-mkgridmap configuration files
glite-SE_dcache_admin_postgres  3.0.0-0   glite-SE_dcache_admin_postgres-3.0.0-0.noarch.rpm   gLite SE dCache admin postgres node
glite-yaim                      3.0.0-38  glite-yaim-3.0.0-38.noarch.rpm                      glite-yaim
pnfs-postgresql                 3.1.10-7  pnfs-postgresql-3.1.10-7.i386.rpm                   pnfs server with PostgreSQL DB backend

The RPMs can be updated using apt.
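A typical apt invocation for this update is sketched below. It assumes the site's apt sources already include the gLite R3.0 repository (and a compatible PostgreSQL source, per the note above); with DRYRUN=echo the commands are only printed, remove the prefix to run them as root.

```shell
# Update the package lists and install the new metapackage.
# The metapackage name is taken from the RPM table above.
DRYRUN=echo        # remove to execute for real
$DRYRUN apt-get update
$DRYRUN apt-get -y install glite-SE_dcache_admin_postgres
```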

Service reconfiguration after update

The service must be reconfigured.

Service restart after update

The service must be restarted.

How to apply the fix

  1. Update the RPMs (see above)
  2. Update configuration (see above)
  3. Restart the service if necessary (see above)