High Performance Computing (HPC)
The Office of Research and the Office of Information Technology are pleased to announce a new program of support for compute-intensive research projects. IT has built a high-performance computing (HPC) cluster to support research and sponsored research projects by tenured and tenure-track faculty and other researchers. The goals of this HPC initiative are to:
- Provide computational resources directly to researchers;
- Use project outcomes, feedback, and usage patterns to design simple and effective ways of connecting faculty with additional resources, to improve workflows, and to identify future needs.
We encourage all faculty whose research projects might benefit from HPC resources to apply and to provide a description, however preliminary, of their project so that we can better assess project priorities and sizes.
Listed below are the specific procedures for HPC access requests.
I. HPC Access
1. Applicants must apply online.
2. Applications are accepted on an ongoing basis; once prioritized, all projects remain subject to re-prioritization as needed.
II. Qualifications
1. Priority is given to applicants and PIs who are Fordham tenured, tenure-track, or emeriti faculty members. Students (graduate and undergraduate) and external collaborators are provisioned under their primary or sponsoring PI.
2. Applicants should submit a separate application for each project and may be allocated additional resources for each project if necessary, subject to the prioritization and scheduling constraints applied to new projects (see Section VI, HPC Operation).
III. Requirements
Applicants are required to submit the following information online:
1. Application form (https://form.jotform.com/232344164466153);
2. HPC Request Narrative (project title, request justification, project timeline, and any associated grant(s) or funding tied to the effort);
3. Specifications of support being requested:
a) An estimate of the resources required in support of your research (overall compute time, cores, RAM, scratch storage); this is often referred to as an HPC budget. A worked example follows this list.
b) The expected and required technical stack, including any software licenses already held or that need to be acquired for the effort. Because the environment is new, this information helps determine what is needed and can affect timelines and other aspects when project needs are assessed against the current environment and its fitness for the effort.
c) Computational testing: Some researchers will already have a clear understanding of their specific timing and technical requirements; others may not. If a portion of the allocated budget needs to be earmarked for evaluating technical requirements, please communicate this to IT. This may involve discussions with research personnel and trial projects as integral parts of the overall effort and application process.
d) Collaborators share in the overall “wall time” budget requested.
4. Request for Proposal (RFP) from external funding agencies, if applicable.
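As a rough illustration of the estimate requested in item 3(a), the sketch below converts hypothetical project figures into a core-hour budget and shows how collaborators draw from the same wall-time budget (item 3(d)). All numbers are placeholders for illustration, not guidance from IT; substitute your own project's figures.

```python
# Rough HPC budget estimate. All figures are hypothetical placeholders;
# replace them with your own project's numbers.
cores_per_job = 32        # cores requested per job
wall_hours_per_job = 12   # expected wall time per job, in hours
jobs_planned = 40         # total runs over the project timeline
collaborators = 3         # everyone draws from the same wall-time budget

core_hours = cores_per_job * wall_hours_per_job * jobs_planned
print(f"Overall compute time to request: {core_hours:,} core-hours")
print(f"Shared across {collaborators} collaborators "
      f"(~{core_hours / collaborators:,.0f} core-hours each if split evenly)")
```

An estimate like this, together with your peak RAM and scratch-storage needs, is what the HPC budget in item 3(a) should capture.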
IV. Post Award Procedures
1. HPC resources must be used as requested, according to the approved specifications.
2. Awardees who exhaust their awarded allocation must submit a request for additional resources.
V. Non-Allowable Usage
1. Allotted resources may not be used to run projects other than your own, i.e., projects belonging to any other member of the Fordham faculty or staff, or to a non-Fordham entity;
2. Usage must be justified in the HPC Request Narrative as directly necessary to the specific project, and actual use should not differ substantially from what was requested.
VI. HPC Operation
1. Requests will be reviewed by the Office of Research in conjunction with HPC administration. The Office of Research will organize a three-member review committee to evaluate applications; the committee may include a faculty member, Office of Research staff, and an Office of IT staff member.
2. Awardees will be contacted as soon as possible regarding their submission.
3. All active HPC research efforts are administered and managed by Educational Technologies and Research Computing, Office of IT. The HPC administrators can provide system administration to aid in software setup/deployment.
4. The amount of resources requested can also affect project priority and job placement, as determined by the job manager and overall system use (see the resource-fit sketch following the list in item 9 below).
5. The number of access requests granted and the eligibility requirements may be adjusted periodically based on the availability of resources and funding.
6. Because project needs and outcomes are intended to drive change, this policy, its usage requirements, and other aspects of the program are subject to update as needed to reflect research needs and to improve HPC workflows.
7. All research on the HPC cluster is governed by applicable use policies, by your contract with Fordham, and by Fordham's intellectual property practices. Please review the following information and links.
- EdTech and Research Technologies are finalizing an acceptable computational use policy, which will be posted at https://www.fordham.edu/advanced-research-computing when available.
- https://www.fordham.edu/academics/research/office-of-sponsored-programs/financial-conflict-of-interest-in-research/university-policy/
- https://www.fordham.edu/about/leadership-and-administration/administrative-offices/office-of-finance/financial-policies-and-guidelines/conflict-of-interest-policy-for-employees/
- https://www.fordham.edu/academics/research/office-of-research/compliance/
- https://www.fordham.edu/academics/research/office-of-research/compliance/ip-policy/
- https://www.fordham.edu/resources/policies/intellectual-property-policy/
- https://www.fordham.edu/information-technology/it-security--assurance/it-policies-procedures-and-guidelines/acceptable-uses-of-it-infrastructure-and-resources-policy-statement/intellectual-property/
8. For questions, please contact Andrew Angelopoulos at [email protected].
9. Current Shareable Resources Available:
- Compute: 256 cores (AMD 7543) across 4 nodes (64 cores, 256 GB RAM each)
- RAM: 1 TB total across the compute nodes
- Scratch storage: ½ petabyte (Spectrum Scale)
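To illustrate how a request maps onto the shared hardware above, here is a minimal sketch that checks a hypothetical resource request against the per-node limits listed (4 nodes; 64 cores and 256 GB RAM each). The limits come from the list above, but the request values and the helper function are illustrative assumptions, not an IT-provided tool; actual placement is determined by the job manager.

```python
# Sanity-check a hypothetical request against the published node specs.
# Illustrative only; the job manager makes the actual placement decisions.
NODES = 4
CORES_PER_NODE = 64
RAM_GB_PER_NODE = 256

def fits(cores: int, ram_gb: int) -> bool:
    """Return True if the request fits on the cluster, assuming the job
    can be spread across whole nodes (true for distributed jobs; a
    single-node job is limited to 64 cores and 256 GB)."""
    nodes_for_cores = -(-cores // CORES_PER_NODE)   # ceiling division
    nodes_for_ram = -(-ram_gb // RAM_GB_PER_NODE)
    return max(nodes_for_cores, nodes_for_ram) <= NODES

print(fits(128, 512))    # True: fits on 2 of the 4 nodes
print(fits(64, 1100))    # False: 1100 GB exceeds the 1 TB cluster total
```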