
Storage

User Guide for the BeeGFS-Filesystem

The BeeGFS-Filesystem is mounted at /work. It provides so-called workspaces, which are directories with space to store data. Each workspace belongs to one user and has an expiration date. Once the expiration date is reached, the workspace and all its data are deleted (after a certain grace period). This ensures that unused data does not unnecessarily occupy storage space. However, the expiration date can be extended several times.

The official documentation of the workspace tool can be found at https://github.com/holgerBerger/hpc-workspace; it is summarized below.

Creating a new workspace

A new workspace can be created with

[UID@ui ~]$ ws_allocate <ws-name> <days>

where <ws-name> is the name of the new workspace and <days> the number of days until the expiration date. After the workspace has been created, some information about it is printed:

Info: creating workspace.
/work/ws/atlas/<uid>-<ws-name>
remaining extensions  : 99
remaining time in days: 100

You can now access the workspace at /work/ws/atlas/<uid>-<ws-name>.
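
For example, to create a workspace named mydata (a hypothetical name) that expires in 30 days:

[UID@ui ~]$ ws_allocate mydata 30
Info: creating workspace.
/work/ws/atlas/<uid>-mydata
remaining extensions  : 99
remaining time in days: 30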

Listing all workspaces

You can list all your workspaces with

[UID@ui ~]$ ws_list


Extending the expiration date

The expiration date can be extended 99 times by a maximum of 100 days per extension. To extend the workspace, use the following command:

[UID@ui ~]$ ws_extend <ws-name> <days>

Since the expiration date of a workspace can never be more than 100 days in the future, it is not possible to extend it several times in a row; the extension has to be repeated at most every 100 days. In the near future, there will be the possibility to sign up for e-mail notifications, which will notify you when a workspace is about to expire.
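
For example, assuming the hypothetical workspace mydata from above, the following sets its expiration date to the maximum of 100 days from now and consumes one of the remaining extensions:

[UID@ui ~]$ ws_extend mydata 100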

Deleting a workspace

A workspace can be deleted with

[UID@ui ~]$ ws_release <ws-name>
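
For example, to release the hypothetical workspace mydata from above:

[UID@ui ~]$ ws_release mydata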


Sharing data with other users

If the user is in your own group (i.e. atlher, atljak, or atlsch), sharing is done the usual way with chmod.

Sharing data with a specific user of another group does not work right now. As a workaround, you can share your workspace with a secondary group (e.g. atl or even unifr).

[UID@ui ~]$ chgrp atl /work/ws/atlas/<uid>-<ws-name>
[UID@ui ~]$ chmod g+rX /work/ws/atlas/<uid>-<ws-name>
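
Note that the two commands above only change the workspace directory itself. If the workspace already contains files and subdirectories, a recursive variant can be used (a sketch, assuming everything in the workspace should be shared); the capital X grants execute (traversal) permission only on directories and on files that are already executable:

[UID@ui ~]$ chgrp -R atl /work/ws/atlas/<uid>-<ws-name>
[UID@ui ~]$ chmod -R g+rX /work/ws/atlas/<uid>-<ws-name>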


Retrieving your current usage

You can obtain your current usage with

[UID@ui ~]$ beegfs-ctl --getquota --uid <uid>
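
Group usage can be queried analogously (a sketch, assuming group quota tracking is enabled on this system):

[UID@ui ~]$ beegfs-ctl --getquota --gid <gid>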

LOCALGROUPDISK

For long-term archiving of valuable data, ATLAS users should use the UNI-FREIBURG_LOCALGROUPDISK.

[UID@ui ~]$ rucio upload --rse UNI-FREIBURG_LOCALGROUPDISK user.<username>:<DatasetName> <file1> <file2> <file3>
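
The uploaded data can later be retrieved on any machine with a grid environment, e.g. with rucio download (a sketch, using the same dataset identifier as above):

[UID@ui ~]$ rucio download user.<username>:<DatasetName>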


User Guide for the ATLAS-BFG-Lustre-File-System

!!! The Lustre-System was decommissioned in February 2018 !!!
Please use the BeeGFS-Filesystem or the LOCALGROUPDISK for storage.

Lustre is a high-performance cluster file system (parallel file system) which offers high I/O bandwidth for large files. Nevertheless, you should keep the following things in mind concerning the ATLAS-BFG Lustre storage solution.

  • The Lustre-File-System is composed of a Meta-Data-Server and a couple of Storage-Servers. When handling small files, the I/O bandwidth of Lustre can drop significantly. Please keep small files in your home directory tree.
  • The ATLAS-BFG-Lustre-File-System has a storage capacity of about 145TB. Due to the huge storage capacity, a backup of files stored on the Lustre-File-System is not foreseen.

Access within the ATLAS-BFG-Cluster

In the Lustre-System we provide two directories to the user: a user directory and a group directory. For convenience, each user can access these directories via the two environment variables $STORAGE_GROUP and $STORAGE_HOME. The available storage space is restricted by quotas set for each group. Data access is controlled by standard Unix file-system permissions, which can be changed with the chmod command. You may also use more detailed access permissions.

Use Cases

A user UID in the group GID can change into their Lustre user directory using the environment variable:

 [UID@ui ~]$ cd $STORAGE_HOME
 [UID@ui  ]$ pwd
 /storage/users/UID

Change into your current group directory:

 [UID@ui ~]$ cd $STORAGE_GROUP
 [UID@ui  ]$ pwd
 /storage/groups/GID

You can identify your group with the groups command.

 [UID@ui ~]$ groups $USER
   UID : GID

Every user can check the quota of the group they belong to with the command lfs quota -g <groupID> <filesystem>:

  [UID@ui  ]$ lfs quota -g `groups $USER | cut -d" " -f3` $STORAGE_GROUP
  
   Disk quotas for group GID (gid 12345678):
        Filesystem  kbytes     quota      limit    grace    files    quota    limit    grace
   /storage/groups/GID
                        44   26214400  31457280        -        9        0        0        -

In the example above, the user UID belongs to the group GID. The group members have used 44 kByte out of the 26214400 kByte quota. For a short period of time (grace) the group members may use up to a maximum of 31457280 kByte (limit). The star behind the usage value (see the next example) is a hint that the group has exceeded its quota.

  [UID@ui  ]$ lfs quota -g `groups $USER | cut -d" " -f3` $STORAGE_GROUP
  
   Disk quotas for group GID (gid 12345678):
        Filesystem  kbytes     quota      limit          grace    files    quota    limit    grace
   /storage/groups/GID
                 28311912*  26214400  31457280   1w6d23h57m1s       34        0        0        -

If a user exceeds the limit, Lustre reports the message Disk quota exceeded:

 [UID@ui ~]$ cp file.txt $STORAGE_HOME
  cp: writing `/storage/users/UID/file.txt': Disk quota exceeded

 

Links

To better understand the Lustre cluster filesystem, it is strongly recommended to follow these two links by the NASA High-End Computing Capability:
Lustre Basics
Lustre Best Practices