Resources

RENCI operates a 24,000+ square foot facility, including a 2,000 square foot data center, at 100 Europa Drive in Chapel Hill, as well as campus engagement sites at Duke University in Durham, North Carolina State University (NCSU) in Raleigh, and UNC-Chapel Hill. The engagement sites involve faculty, students, and researchers from across the state. The sites are interconnected by high-performance networking (10 Gb/s links in the Research Triangle Park), enabling the creation of large virtual organizations that meet the needs of research and education and support economic development.

  • 2,000 square feet of floor space on an 18-inch raised floor
  • 600 kVA commercial power
  • 375 kVA UPS power
  • 20 kVA generator power
  • 134 tons of dedicated cooling
  • Room for 40 racks of high-performance computing, storage, and networking equipment
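
For rough context, the cooling figure can be compared with the electrical figures using the standard conversion of 1 ton of cooling to about 3.517 kW of heat removal. The sketch below is illustrative only; the power factor used to convert kVA to kW is an assumption, since it is not specified above.

  # Rough sanity check relating the data center's cooling capacity to its
  # electrical capacity. The ton-to-kW constant is standard; the power
  # factor is an assumed value for illustration only.
  TONS_TO_KW = 3.517           # 1 ton of refrigeration ~= 3.517 kW of heat removal

  cooling_tons = 134
  commercial_kva = 600
  ups_kva = 375
  assumed_power_factor = 0.9   # assumption; not specified in the facility description

  cooling_kw = cooling_tons * TONS_TO_KW
  commercial_kw = commercial_kva * assumed_power_factor
  ups_kw = ups_kva * assumed_power_factor

  print(f"Dedicated cooling:   ~{cooling_kw:.0f} kW of heat removal")
  print(f"Commercial power:    ~{commercial_kw:.0f} kW (at PF {assumed_power_factor})")
  print(f"UPS-protected power: ~{ups_kw:.0f} kW (at PF {assumed_power_factor})")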

RENCI began operations in 2004. Since then, the organization has acquired a variety of computational systems to support its projects and activities. The following is a list of the major computational infrastructure currently active at RENCI.

Hatteras

Deployed in summer 2013 and expanded in early 2014, Hatteras is a 5,168-core cluster running CentOS Linux. Rather than being fully MPI-interconnected, Hatteras is segmented into several independent sub-clusters with varying architectures, and it can run nine 512-way ensemble members concurrently. Hatteras uses Dell’s densest blade enclosure to maximize the core count within each chassis. Hatteras’ sub-clusters have the following configurations (a brief MPI example follows the list):

  • Chassis 0-3 (512 interconnected cores per chassis)
    • 32 x Dell M420 quarter-height blade server
      • Two Intel Xeon E5-2450 CPUs (2.1GHz, 8-core)
      • 96GB 1600MHz RAM
      • 50GB SSD for local I/O
    • 40Gb/s Mellanox FDR-10 Interconnect
  • Chassis 4-7 (640 interconnected cores per chassis)
    • 32 x Dell M420 quarter-height blade server
      • Two Intel Xeon E5-2470v2 CPUs (2.4GHz, 10-core)
      • 96GB 1600MHz RAM
      • 50GB SSD for local I/O
    • 40Gb/s Mellanox FDR-10 Interconnect
  • Hadoop (560 interconnected cores)
    • 30 x Dell R720xd 2U Rack Server
      • Two Intel Xeon E5-2670 processors (16 cores total @ 2.6GHz)
      • 256GB RDIMM RAM @ 1600MHz
      • 36 Terabytes (12 x 3TB) of raw local disk dedicated to the node
      • 146GB RAID-1 volume dedicated for OS
      • 10Gb/s Dedicated Ethernet NAS Connectivity
    • 2 x Dell R820 2U Rack Server (LargeMem)
      • Four Intel Xeon E5-4640v2 processors (40 cores total @ 2.2GHz)
      • 1.5TB LRDIMM RAM @ 1600MHz
      • 9.6 Terabytes (8 x 1.2TB) of raw local disk dedicated to the node
      • 10Gb/s Dedicated Ethernet NAS Connectivity
    • 56Gb/s Mellanox FDR Infiniband Interconnect
    • 40Gb/s Mellanox Ethernet Interconnect
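
The following is a minimal sketch of the kind of MPI program a single ensemble member might run on one of the sub-clusters above, assuming the mpi4py package and an MPI launcher are available on Hatteras; the actual scheduler, models, and launch commands used in production are site-specific and are not described here.

  # ensemble_member.py -- minimal MPI sketch (assumes the mpi4py package).
  # A single 512-way ensemble member would be launched as its own MPI job
  # on one sub-cluster, e.g. "mpirun -n 512 python ensemble_member.py".
  from mpi4py import MPI

  comm = MPI.COMM_WORLD
  rank = comm.Get_rank()   # this process's rank within the ensemble member
  size = comm.Get_size()   # total ranks in the member (512 in this example)

  # Trivial stand-in for per-rank work: each rank contributes one value and
  # rank 0 gathers the sum for the whole member.
  local_value = float(rank)
  total = comm.reduce(local_value, op=MPI.SUM, root=0)

  if rank == 0:
      print(f"ensemble member ran with {size} ranks; sum of ranks = {total}")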

Blue Ridge

Blue Ridge is a 160-node cluster running CentOS Linux and Windows HPC Server 2008. The cluster has a 40 Gb/s InfiniBand MPI interconnect, a 1 Gb/s file system interconnect, and a 20 TB shared file system (NFS over a 20 Gb/s network). Each node has the following specifications:

  • Dell PowerEdge M610 (Blade)
  • 2 x 2.8 GHz Intel Nehalem-EP 5560, quad-core
  • 24 GB 1333 MHz memory
  • 74 GB 15K RPM SAS drive

Additionally, Blue Ridge includes the following:

2 nodes with NVIDIA Tesla S1070 GPGPUs (general-purpose graphics processing units), with the following specifications:

  • Dell PowerEdge R710
  • 2 x 2.8 GHz Intel Nehalem-EP 5560, quad-core
  • 96 GB 1066 MHz memory
  • 4 x NVIDIA Tesla S1070s

2 nodes for large-memory work, with the following specifications:

  • Dell PowerEdge R910
  • 4 x 2.00 GHz Intel Nehalem-EX, 8-core
  • 1 TB 1066 MHz memory

In addition to these systems, a Microsoft-sponsored research project has enabled the deployment of a comprehensive environment supporting both research and general RENCI operations. The project encourages the use of leading-edge technologies and the services of the full Microsoft enterprise IT platform. The systems currently deployed include:

  • 40 core Windows Compute Cluster available to the research community
  • 5 TB SQL Server available to the research community
  • 36 TB SQL Server dedicated to a specific genetics research project
  • Team collaboration services using SharePoint 2010

In addition to these capabilities, the deployed Microsoft-based enterprise IT services provide a complete platform for research and operations, including: directory services (Active Directory); federated identity (ADFS); enterprise messaging (Exchange 2010); inventory, monitoring, and configuration management (System Center); project management and collaboration (SharePoint); source control and SDLC (Visual Studio Team Foundation Server); and a generally available Windows login node (Remote Desktop Services).

Virtualization Infrastructure

RENCI has a 12-node VMware vSphere Enterprise Plus cluster that serves the needs of most projects.

  • 12 x Dell PowerEdge R720
    • 2 x 2.9GHz Intel Xeon E5-2690 Processors (16 cores total)
    • 256GB System memory
    • 4 x 10GbE Network connections
    • 2 x FC8 Fiber connections
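
As an illustration of how a project might interact with the cluster programmatically, the sketch below lists virtual machines through the vSphere API using the open-source pyVmomi library; the hostname and credentials are placeholders, and the cluster's actual access policies and tooling are not described here.

  # List VMs visible to a vSphere endpoint (assumes the pyVmomi package).
  # "vcenter.example.org" and the credentials are placeholders, not real values.
  import ssl
  from pyVim.connect import SmartConnect, Disconnect
  from pyVmomi import vim

  context = ssl._create_unverified_context()  # acceptable for a sketch; verify certs in practice
  si = SmartConnect(host="vcenter.example.org", user="user", pwd="password",
                    sslContext=context)
  try:
      content = si.RetrieveContent()
      view = content.viewManager.CreateContainerView(
          content.rootFolder, [vim.VirtualMachine], True)
      for vm in view.view:
          print(vm.name, vm.runtime.powerState)
  finally:
      Disconnect(si)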

The RENCI Storage Infrastructure includes:

  • NetApp Clustered Data ONTAP
    • FAS3250 Filer HA Pair
      • 1.536PB (4TB x 384) Raw 7.2kRPM Disks
      • 2TB FlashCache
    • FAS6220 Filer HA Pair
      • 57TB (400GB x 144) Raw 10kRPM Disks
      • 4TB FlashCache
    • FAS8060 Filer HA Pair
      • 1.15PB (3TB x 384) Raw 7.2kRPM Disks
      • 8TB FlashCache
  • Quantum StorNext High-Performance HSM
    • Disk Tier
      • 528TB (440 x 1.2TB) Raw 10kRPM Disks
      • 16TB (40 x 400GB) SSD Read Cache
    • Tape Tier
      • Quantum Scalar i6000 Library
        • 1.45PB uncompressed – 2.9PB compressed Capacity (966 LTO-5 Tapes)
        • 4 LTO-5 Tape Drives
  • Kaminario K2 SSD Array
    • 6TB Usable SSD
    • 4 x FC8 Connectivity
    • ~200,000 4k Random-Read IOPS
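
The headline storage figures above follow directly from the per-device numbers; the short sketch below reproduces that arithmetic. The only assumptions beyond the list itself are decimal (vendor-style) terabytes, the standard 1.5 TB native capacity of an LTO-5 tape, and a 4 KiB block size for the "4k" IOPS figure.

  # Reproduce the headline storage figures from the per-device numbers above.
  TB = 10**12          # vendors quote decimal terabytes/petabytes
  KIB = 1024           # the "4k" in "4k random-read IOPS" is a 4 KiB block

  fas3250_raw = 384 * 4 * TB              # 1.536 PB raw
  fas8060_raw = 384 * 3 * TB              # ~1.15 PB raw
  stornext_disk = 440 * 1.2 * TB          # 528 TB raw
  lto5_native = 966 * 1.5 * TB            # ~1.45 PB uncompressed (1.5 TB per LTO-5 tape)

  k2_iops = 200_000
  k2_throughput = k2_iops * 4 * KIB       # bytes/second at 4 KiB per read

  print(f"FAS3250 raw:      {fas3250_raw / 10**15:.3f} PB")
  print(f"FAS8060 raw:      {fas8060_raw / 10**15:.2f} PB")
  print(f"StorNext disk:    {stornext_disk / TB:.0f} TB")
  print(f"LTO-5 native:     {lto5_native / 10**15:.2f} PB")
  print(f"K2 random read:   ~{k2_throughput / 10**9:.1f} GB/s at 4 KiB blocks")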

The RENCI production network connects to the North Carolina Research and Education Network (NCREN) and the University of North Carolina’s campus network. NCREN provides connectivity to the Internet2 Layer 3 service at 10 Gbps, anticipated to increase to 100 Gbps by fall 2014.

RENCI’s production connectivity to the outside world is provided through a Cisco 7609 router managed by RENCI staff. Connectivity into the data center passes through a mix of switches managed by UNC Information Technology Services and by RENCI (Enterasys N7). RENCI’s internal data center networking is built on Dell/Force10 S4810 switches capable of supporting 288 10 Gb/s connections. This allows RENCI to cleanly separate the production, research, and experimental networking infrastructures so that they coexist without interfering with one another.

The Layer 2 Breakable Experimental Network (BEN) is the primary platform for RENCI’s experimental network research. It consists of several segments of NCNI dark fiber, a time-shared resource available to the Triangle research community. BEN is a research facility created to promote scientific discovery by providing the universities with world-class infrastructure and resources for experimentation with disruptive technologies. The engagement sites at Duke, NCSU, and UNC, as well as RENCI at Europa, use BEN for experimental, non-production network connectivity among themselves. The engagement sites act as points of presence (PoPs) distributed across the Triangle metro region, forming a research test bed. RENCI acts both as caretaker of the facility and as a participant in the experimentation activities on BEN.

On BEN, RENCI has deployed Polatis fiber switches, Infinera DTN bandwidth-provisioning platforms, and a mix of Cisco 6509 and Juniper routers. This equipment is intended for experimental and research use in GENI and also supports RENCI’s research agenda. BEN connects to the outside world using a bandwidth-on-demand Layer 2 connection to the 10 Gbps Internet2 Advanced Layer 2 Service.

GENI Infrastructure

Through a project named ExoGENI (www.exogeni.net), funded by the NSF through the GENI Project Office, RENCI has deployed 13 ‘GENI racks’ at university campuses across the US. Each rack consists of 10 IBM x3650 M4 worker nodes controlled through a head node, a 6TB iSCSI storage array, and a 10Gbps/40Gbps BNT 8264 OpenFlow switch. Several racks from other vendors have been ‘opted into’ the system at the request of the owner campuses (NICTA in Australia, UvA in the Netherlands, GWU, and WVnet).

These ‘ExoGENI’ racks constitute a prototype ‘networked cloud’ infrastructure running OpenStack and xCAT software; it is intended for GENI experimenters but is also suitable for distributed computation experiments in various domain sciences. The GENI programming interface for ExoGENI is provided by the ORCA (Open Resource Control Architecture) software, designed jointly by RENCI and Duke University.

Racks are connected to the public Internet for management access. In addition, they are connected to the Internet2 ION and AL2S Layer 2 services, as well as the ESnet OSCARS Layer 2 service, through a number of regional providers (LEARN, MCNC, CENIC, MERIT, OARnet). These connections link the racks to each other and to other elements of GENI infrastructure, including the OpenFlow overlays on Internet2.

More information at http://wiki.exogeni.net
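
As a rough illustration of how experimenters describe the resources they want from this infrastructure, the sketch below assembles a minimal GENI RSpec v3-style request for two linked nodes using only the Python standard library. The namespace, element names, and sliver type value follow general GENI conventions rather than anything stated above and should be checked against the ExoGENI documentation; in practice, experimenters typically use ORCA-aware tooling rather than hand-built XML.

  # Build a minimal two-node request document in the GENI RSpec v3 style.
  # Illustrative only: element names and the sliver type should be verified
  # against the ExoGENI documentation before use.
  import xml.etree.ElementTree as ET

  RSPEC_NS = "http://www.geni.net/resources/rspec/3"   # GENI RSpec v3 namespace
  ET.register_namespace("", RSPEC_NS)

  def q(tag):
      """Qualify a tag name with the RSpec namespace."""
      return f"{{{RSPEC_NS}}}{tag}"

  rspec = ET.Element(q("rspec"), {"type": "request"})

  for name in ("node0", "node1"):
      node = ET.SubElement(rspec, q("node"), {"client_id": name})
      # The sliver type name is a placeholder, not a confirmed ExoGENI instance size.
      ET.SubElement(node, q("sliver_type"), {"name": "placeholder-vm"})
      ET.SubElement(node, q("interface"), {"client_id": f"{name}:if0"})

  link = ET.SubElement(rspec, q("link"), {"client_id": "link0"})
  for name in ("node0", "node1"):
      ET.SubElement(link, q("interface_ref"), {"client_id": f"{name}:if0"})

  print(ET.tostring(rspec, encoding="unicode"))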

Visualization Infrastructure

UNC Chapel Hill

  • Social Computing Room at ITS Manning: The Social Computing Room is a 24-foot x 24-foot room that uses three projectors per wall, or 12 projectors total, for a 360-degree experience. It is capable of visualizing large scientific datasets at 9.5 million pixels. The room also supports high-profile arts and humanities projects and includes a tracking system capable of tracking up to 15 people or resources in real time.
  • Odum Social Computing Room at Davis Library: The Odum RENCI Social Computing Room is a 20-foot x 20-foot room that provides a 360-degree experience similar to that of the Social Computing Room at ITS Manning. Available fall 2013.
  • Tele-immersion Room: The Tele-immersion Room combines two 4K resolution projectors arranged to project true 3D stereoscopic imagery at over four times HD resolution. The room supports a variety of visualization, arts and humanities projects. Capabilities include both interactive and pre-computed streaming movie visualization animations.

Videoconferencing: There is a two-projector teleconferencing node at ITS Manning. The Tele-immersion Room listed above also supports dual-channel HD videoconferencing.

Europa Center

  • HD Edit Suite: This edit suite enables video editing and post-production in native HD resolution.

Videoconferencing: There are three teleconferencing nodes at Europa: two with three projectors (one of which supports dual-channel HD videoconferencing) and a third with one projector and a Cisco CTS 500 TelePresence unit.

Duke

  • zSpace Station – a stereoscopic tracked 3D virtual tabletop visualization platform.

Videoconferencing: A two-projector teleconferencing node that supports traditional and dual-channel HD videoconferencing.

NC State

  • Social Computing Room at D.H. Hill Library: Available in fall 2013, the Social Computing Room at D.H. Hill Library is a 25-foot x 25-foot room that uses three projectors per wall, or 12 projectors total, for a 360-degree experience. It is capable of visualizing data at 9.5 million pixels. The room also supports high-profile arts and humanities projects.

Manteo (Coastal Studies Institute)

Videoconferencing: A teleconferencing node using a 46-inch flat panel display, camera and echo-cancelling microphone on a portable cart.