Hardware: Computing Cluster
The GM/CA @ APS computing environment has a similar structure
at all three beamlines: 23-ID-D, 23-ID-B, and 23-BM. The two ID beamlines,
which are equipped with fast Pilatus3-6M and Eiger-16M detectors, share an ultrafast
1 PB storage array running the BeeGFS distributed file system. The BM beamline,
equipped with a Rayonix-300 CCD detector, has its own 576 TB shared storage array,
also running BeeGFS. The computing infrastructure of the ID beamline clusters is
shown in the picture below.

Users who log on to the beamline workstations have
their home directories on the storage array, so all workstations access
the same home directory for a given user account. The storage capacity
allows us to keep users' data for three months after the experiment.
Computers at the ID beamlines are connected to
an internal 56 Gbps fiber network, and the workstations accessible from outside
the lab (ws2, ws5, and ws6) have a 10 Gbps uplink. The BM subnet is
on a 10 Gbps fiber network.
Users are provided with two groups of
workstations. One group is allocated for collecting and processing data on
the day of the experiment (day-1 workstations). These are blXws1, blXws2, blXws3,
blXws6, and blXkeithley, where "X" stands for the beamline number ('1' for IDD,
'2' for IDB, and '3' for BM). The other group (day-2 workstations), consisting
of ws4, ws5, and ws7, is offered to users who wish to continue processing
or backing up their data after the experiment is over. At the ID beamlines
there is one more group of workstations (ws8 through ws12 of IDD and ws8 through
ws12 of IDB) which have no monitors or keyboards for user access. They are
used for automatic data processing from JBluIce.
All computers run CentOS-7, a free
clone of the Red Hat Enterprise Linux operating system, with the MATE graphical
desktop environment. All of them have the most common crystallographic data
processing software packages installed, including HKL3000/HKL2000, Phenix,
and PyMOL.
The following computing policies are implemented:
- Account management is centralized, and all workstations access the same
home directories, which reside on the storage array.
- No disk quotas are enforced on user accounts.
- Most workstations provide USB3 connectivity (USB2 devices are accepted
as well, but they are not recommended because of their low speed).
Users are encouraged to bring their own external drives for making data
backups; an example backup command is sketched after this list.
More information about backups to external drives is provided on the
data management webpage.
- Users can remotely download collected data to their institutions using the
GM/CA Globus servers.
- Users can also transfer their data out via SFTP (see the example after
this list). The transfer rate may vary depending on the route to the user's
institution; the best expected rate is about 7-8 MB/s.
Due to tight ANL security restrictions, the option to SFTP in, i.e. to access
data from the user's home institution, is available only on ws5 for the
day of the experiment plus one day. Please ask your host if you need
such access.
- User laptops on Wi-Fi can connect to ws2, ws5, and ws6 via the
SSH, SFTP, or NoMachine protocols (see the example after this list). The
Wi-Fi connection also provides access to outside Internet resources such as
web pages and e-mail.
- GM/CA @ APS stores users' data for three months from the experiment
start date. During this period users are expected to verify that their backup
was successful and the data was safely delivered to their home institution. After
this period an e-mail is sent reminding users of the scheduled deletion of their
data, and two days later the data is automatically deleted from the GM/CA
storage array.
- User accounts are automatically disabled one day after the experiment
start date. If you need to extend your access to the day-2 workstations or to
the Globus servers, please send a request to your host, who will arrange a
temporary exception. Permanent exceptions are not possible.
- Remote access using NoMachine technology is possible. For additional
details, click here.
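
As an illustration of the external-drive backup mentioned above, here is a
minimal sketch using rsync on one of the workstations. The dataset path and
the drive mount point are placeholders and will differ for your account and
drive:

    # copy a dataset to a USB3 drive, preserving timestamps and showing progress
    rsync -av --progress /home/<your-account>/<your-dataset>/ /media/<your-account>/<your-drive>/<your-dataset>/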
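The outbound SFTP transfer mentioned above can be started from any day-1 or
day-2 workstation with the standard command-line client. The destination host,
account, and directory names below are placeholders for your home
institution's server; with recent OpenSSH clients "put -r" uploads a directory
recursively, while older clients may require creating the remote directory
first, as shown here:

    sftp <your-account>@<your-institution-server>
    sftp> mkdir <your-dataset>
    sftp> put -r /home/<your-account>/<your-dataset>/* <your-dataset>/
    sftp> quit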
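Connecting from a Wi-Fi laptop to one of the externally accessible
workstations uses the standard SSH or SFTP clients. The full workstation
hostnames are not listed here; your host can provide them:

    ssh <your-account>@<ws5-full-hostname>
    sftp <your-account>@<ws5-full-hostname>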