Procurement, Installation and Commissioning of the High Performance Computing Cluster

High Performance Computing Cluster

Master Node
• Dual 12-core CPUs, each of 2.40 GHz with min. 2.5 MB cache per physical core.
• Minimum 128 GB DDR5 RAM of the highest frequency supported by the above CPUs. RAM shall be scalable to at least 512 GB in a 100% balanced configuration without discarding the currently quoted 128 GB.
• 2x 1.92 TB SATA 2.5” SSD (in H/W RAID 1). RAID card shall support H/W RAID levels 0, 1, 5, 6, 10 with at least 1 GB of cache.
• System shall have minimum 6x PCI Express Gen 5.0 slots (min. 5x PCIe 5.0 x16 slots).
• Min. 2x USB, 1x VGA, 1x dedicated RJ45 remote management port.
• Chassis shall have minimum 800 W (1+1) redundant Platinum-level certified power supplies and 8x 3.5”/2.5” hot-swap drive bays.
• Max. 2U rack server.

Compute Node
• Dual 32-core CPUs, each with min. 2.50 GHz and 60 MB cache.
• Min. 512 GB DDR5 RAM of the highest frequency supported by the CPU. RAM shall be scalable to at least 2 TB in a 100% balanced configuration without discarding the currently quoted 512 GB.
• Single 1.92 TB SATA 2.5” SSD.
• Min. 2x USB 3.2 Gen1, 1x VGA, 1x dedicated RJ45 remote management port.
• Chassis shall have minimum 800 W (1+1) redundant Platinum-level certified power supplies and 8x 3.5”/2.5” hot-swap drive bays.
• Max. 2U rack server with 800 W redundant Platinum or above level certified power supplies.

GPU Accelerators
Specifications: The GPU server must be pre-configured with a minimum of 4 GPU cards, and must support scalability to at least 8 GPUs within the same chassis without architectural modification. Technical specifications of each GPU are as follows.
• Memory = 32 GB GDDR6
• Memory interface = 256-bit
• Memory bandwidth = 576 GB/s
• No. of cores = min. 12,000
• Single-precision performance = 65.0 TFLOPS
• System interface = PCIe 4.0 x16
• Total board power = 250 W
• Form factor = dual width, active cooling

GPU Server
Specifications:
• Dual 32-core CPUs, each with min. 2.50 GHz and 60 MB cache.
• Min. 512 GB DDR5 RAM of the highest frequency supported by the CPU. RAM shall be scalable to at least 2 TB in a 100% balanced configuration without discarding the currently quoted 512 GB.
• Single 1.92 TB SATA 2.5” SSD.
• Min. 2x USB 3.2 Gen1, 1x VGA, 1x dedicated RJ45 remote management port.

NAS Storage
Single-controller NAS storage with the minimum specifications as below.
• 500 TB raw storage.
• Storage shall support both block (iSCSI, FCP, SRP) and file (SMB, NFS, FTP, AFP) protocols out of the box.
• Host interface: 2x 1G RJ45, 2x 10G RJ45.
• Controller shall support RAID levels 0, 1, 0+1, 5, 6, 50 and 60 with max. 24x SAS/SATA LFF/SFF bays in a 4U form factor.
• Box should support SSD caching (optional upgrade).
• Box shall have built-in snapshot with rollback feature.
• Box shall have a web UI for setup and configuration.
• Controller shall have minimum 24 CPU cores with 512 GB of memory.

Primary Network Switch
Switch shall have the features as below.
• Minimum 12x 100M/1G/2.5G/5G/10G Base-T ports, 2x 10G SFP+ ports, 2x 1G/10G Base-T/SFP+ combo ports.
• Switching capacity = 320 Gbps
• Packet forwarding rate = 238.08 Mpps
• MAC address table = up to 32K entries per device
• All required cables to be bundled.

Secondary Network Switch
• Minimum 24x 10/100/1000 Base-T RJ45 ports, unmanaged Ethernet switch, with all required cables.

Advanced Features (NAS Storage)
• Default software features include: snapshots (64 per source volume, 128 per system) and remote replication (file-level).
• Optional features: SSD cache.
• Clients supported: Windows, macOS, Linux, FreeBSD, Solaris.
• Protocol support: file-level protocols SMB, NFS, FTP, AFP; block-level protocols iSCSI, FCP, SRP.
• Ports: storage should be configured with a minimum of 2 USB ports.

Server Rack
Server rack with the below features.
• U size = 42U
• Dimensions = 800 mm (width) x 1200 mm (depth)
• Doors = front perforated door, rear perforated split door
• Extra features = side panels, castors, levelling feet
• PDUs = 2 qty. of vertical PDUs, each of 22 kW, 32 A, 230 V/400 V, with (18) IEC C13 and (6) IEC C19 power sockets
• Server rack shall also include a top-mounted self-cooling exhaust fan.

High-Performance Computing OS (Optimized Linux Distribution)
Specifications:
• Enterprise-grade Linux OS optimized for HPC.
• Long-term support (LTS) version, compatible with the resource / cluster management software; Ubuntu HPC OS preferred.

Cluster Resource Management Software (Job Scheduler such as Slurm)
Specifications:
• Must support dynamic resource allocation and integrate with MPI and GPU resources.
• Multi-cluster scheduling support.

Cluster Management Software
The HPC cluster being quoted should have bundled cluster management software with the below set of features:
• HPC cluster suite.
• Scalable cluster deployment and management.
• Workload manager.
• Dynamic workload manager change.
• Shared home.
• General FOSS compilers, libraries, parallel libraries and scientific libraries.
• Graphical monitoring.
• Simple and easy-to-use web-based GUI for targeted workload management.
• High availability.
• Support and maintenance.
• Updates and bug fixes.
• InfiniBand support.
• Multiple installation options: install to hard disk, run stateless (diskless), and run diskless with a bit of state (statelite).
• Built-in automatic discovery: no need to power on one machine at a time to discover nodes.
• Nodes that fail can be replaced and brought back into action by simply powering them on.
• Dynamic provisioning of nodes.

CUDA & Open MPI (Message Passing Interface) Libraries (For Parallel Computing & GPU Optimization)
Specifications:
• CUDA as the parallel computing platform and API model for using GPUs.
• MPI for distributed computing; a high-performance implementation of the MPI standard.

Domain-Specific Software (Simulations, Data Analytics & Modeling Tools)
Specifications:
• HPC-enabled scientific computing software for the application frameworks TensorFlow, PyTorch, OpenFOAM and QGIS.
• Must support:
  - Parallel computing and distributed processing frameworks.
  - Geospatial data analysis and remote sensing workflows.
  - Machine learning and deep learning model training on HPC hardware.
  - High-performance data handling and storage optimization.
• Required support for:
  - Cluster-based execution.
  - GPU acceleration for AI/ML.
  - Scalability for large datasets.
• All software must be open-source and configured to run seamlessly on the cluster hardware.
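As an illustration of how the job scheduler, MPI and GPU requirements above are expected to work together during commissioning, the sketch below is a minimal example only and not a deliverable of this tender. It assumes Python 3 with mpi4py built against the cluster's MPI implementation (for example Open MPI); the optional PyTorch import is used solely to count visible CUDA devices. The file name and launch commands are illustrative assumptions, not requirements.

# mpi_gpu_check.py - illustrative commissioning sketch (hypothetical file name).
# Each MPI rank reports its host and visible GPU count, then a trivial
# allreduce confirms that the MPI stack works across the allocated nodes.
import socket

from mpi4py import MPI

try:
    import torch  # optional: used only to count visible CUDA devices
    gpu_count = torch.cuda.device_count() if torch.cuda.is_available() else 0
except ImportError:
    gpu_count = -1  # PyTorch not installed on this node

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

print(f"rank {rank}/{size} on {socket.gethostname()}: visible GPUs = {gpu_count}")

# Sum of all rank numbers; rank 0 checks it against the closed-form value.
total = comm.allreduce(rank, op=MPI.SUM)
if rank == 0:
    expected = size * (size - 1) // 2
    print(f"allreduce sum = {total} (expected {expected})")

Under the quoted scheduler such a script could, for example, be launched with "srun -N 2 --ntasks-per-node=2 --gpus-per-task=1 python3 mpi_gpu_check.py", or outside the scheduler with "mpirun -np 4 python3 mpi_gpu_check.py"; the exact flags and partitions depend on the site configuration and are given here only as assumptions.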
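In the same spirit, a short sketch of the GPU-accelerated deep learning capability expected from the domain-specific software stack is given below. It assumes PyTorch built with CUDA support and is intended only as an acceptance-style smoke test on synthetic data, not as a benchmark or a required deliverable.

# gpu_training_smoke_test.py - illustrative sketch (hypothetical file name).
# Trains a tiny model on random data to confirm that model, data and
# optimizer all run on the GPU when one is available.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"training device: {device}, CUDA devices visible: {torch.cuda.device_count()}")

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

x = torch.randn(256, 64, device=device)  # synthetic inputs
y = torch.randn(256, 1, device=device)   # synthetic targets

for step in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss = {loss.item():.4f}")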