Access to Computational Infrastructure

Description of the HPC infrastructure

The RICAP HPC infrastructure is composed of the following supercomputers:

  • BSC: A general-purpose cluster with 165,488 Intel Xeon Platinum cores in 3,456 nodes, more than 394 TB of main memory, and 25 PB of storage.
  • CIEMAT: One cluster with 680 Intel Gold cores and 456 Xeon Phi cores, one cluster with ~100,000 Nvidia cores, and two cloud nodes with ~950 CPU cores and more than 1 PB of storage.
  • CEDIA: 12 computing nodes with 322 Intel Xeon cores, 1 TB of RAM, and 6 TB of storage, plus 5,760 CUDA cores.
    • CEDIA will also promote the use of the Ecuadorian supercomputer Quinde I (a cluster with 1,760 CPU cores and Nvidia Tesla K80 GPUs under a MIMD NUMA architecture, 11 TB of RAM, and 350 TB of parallel storage). Quinde I provides an Rmax of 232 TFLOPS and an Rpeak of 488.9 TFLOPS.
  • CeNAT-UCR: Several clusters with 72 Xeon cores, ~25,000 Nvidia cores, and ~1,450 Xeon Phi cores.
  • CINVESTAV: SGI ICE-XA (CPU) and SGI ICE-X (GPU) systems with 8,900 cores (Rmax of 429 TFLOPS) and 1 PB of Lustre Seagate ClusterStor 9000 storage.
  • CSC-CONICET: One cluster with 4,096 AMD Opteron cores and 16,384 Nvidia cores, 8,192 GB of RAM, and 72 TB of storage.
  • HPC-Cuba: One HPC cluster of 50 nodes with 800 CPU cores and ~20,000 GPU cores, and one Big Data cluster of 30 nodes with 480 CPU cores.
  • UDELAR: A cluster of 29 nodes with 576 cores and 1.28 TB of RAM, plus 128 Xeon Phi cores with 16 GB of RAM.
  • UIS: One cluster of 24 nodes (2.4 GHz, 16 GB of RAM each) and one cluster with 128 Nvidia Tesla (Fermi) GPUs (104 GB of RAM and 4 Intel Haswell processors).
  • UFRGS: One cluster with 256 nodes and 19,968 Nvidia cores.
  • UNIANDES: One cluster with 1,808 cores (with Hyper-Threading), 8 TB of RAM, and 160 TB of storage.

 

Access Rules: HPC

Access to the RICAP supercomputing infrastructure is granted through open, competitive calls with a fixed deadline for the submission of proposals. RICAP publicizes these calls through all available channels.

The rules for requesting computational resources are set out in each of the calls published on this website.

 

Third call

  • Applications must be sent by e-mail to ricap@ciemat.es with the subject line "Request for the third call of RICAP HPC resources".
  • Rules.
  • Deadline for submission of proposals: February 28th, 2019.
  • Date of resolution of the call: March 15th, 2019.
  • Period foreseen for the use of the resources: from April 1st, 2019 to May 31st, 2019.
  • Final report of the results of the call (in Spanish).

 

Second call

  • Applications must be sent by e-mail to ricap@ciemat.es with the subject line "Request for the second call of RICAP HPC resources".
  • Rules.
  • Deadline for submission of proposals: April 30th, 2018.
  • Date of resolution of the call: May 28th, 2018.
  • Period foreseen for the use of the resources: from June 1st, 2018 to September 30th, 2018.
  • Final report of the results of the call (in Spanish).

 

First call

  • Applications must be sent by e-mail to ricap@ciemat.es with the subject line "Request for the first call of RICAP HPC resources".
  • Rules.
  • Deadline for submission of proposals: October 15th, 2017.
  • Date of resolution of the call: October 31st, 2017.
  • Period foreseen for the use of the resources: from November 1st, 2017 to November 30th, 2017.
  • Final report of the results of the call (in Spanish).

Description of and access to the cloud HTC infrastructure

The RICAP cloud infrastructure is composed of federated nodes that offer computing resources on a 24/7 basis. Its main aim is to provide additional, continuously available resources that complement the supercomputing infrastructure for R&D purposes. The nodes are:

  • CIEMAT: Central node and working nodes with 160 heterogeneous cores, managed with OpenNebula and KVM.
  • CINVESTAV: One working node with 20 cores, managed with OpenNebula and KVM.
  • CSC-CONICET: Two computing nodes with 256 cores and 256 GB of RAM, managed with OpenNebula and KVM.
  • UIS: 16 computing nodes, each with 2 Intel Xeon E5645 processors and 104 GB of RAM, plus 8 Tesla 2075 GPUs, managed with OpenNebula and KVM.
  • UFRGS: 5 computing nodes, each with 2 Intel Xeon E5530 processors @ 2.40 GHz and 24 GB of RAM, managed with OpenNebula and KVM.

 

Access to the cloud infrastructure.

Application for membership of the cloud infrastructure.

Acceptable Use Policy 

An OpenNebula tutorial can be found here. In addition, a summarized guide can be found here.
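
As a quick illustration (a minimal sketch, not official RICAP documentation), the snippet below shows how a user might query one of the federated nodes through OpenNebula's standard XML-RPC interface using only Python's standard library. The endpoint URL and the "username:password" session string are placeholder assumptions; the real values are provided when a membership application is accepted, and the numeric filter arguments follow the conventions of the OpenNebula XML-RPC reference.

    #!/usr/bin/env python3
    # Minimal sketch: query an OpenNebula front end over its XML-RPC interface.
    # ENDPOINT and SESSION are hypothetical placeholders, not actual RICAP values.
    import xmlrpc.client

    ENDPOINT = "http://opennebula-frontend.example.org:2633/RPC2"  # assumed URL
    SESSION = "myuser:mypassword"  # OpenNebula expects "user:password" on every call

    server = xmlrpc.client.ServerProxy(ENDPOINT)

    # List the VMs owned by the authenticated user:
    # -3 = resources owned by me, -1/-1 = whole ID range, -1 = any state except DONE.
    response = server.one.vmpool.info(SESSION, -3, -1, -1, -1)
    if response[0]:
        print("VM pool (XML):")
        print(response[1])
    else:
        print("Request failed:", response[1])

    # The VM templates visible to the user can be listed in the same way.
    response = server.one.templatepool.info(SESSION, -3, -1, -1)
    if response[0]:
        print("Template pool (XML):")
        print(response[1])

The same information is available interactively from the command-line tools covered in the tutorial above (for example, onevm list and onetemplate list).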

Screenshots of access to and utilization of OpenNebula