UNC-Chapel Hill and RTI International selected to provide data management, stewardship to NIH-funded researchers focused on the opioid and pain management public health crises

NIH HEAL Initiative Data Stewardship Group will support researchers in making their data FAIR (findable, accessible, interoperable and reusable)  

Healthcare data has become increasingly easy to create, collect and store over the last decade. However, the industry is still working toward next steps in unlocking the potential of that collected data: preparing the data in such a way that it can be found and accessed; breaking down storage silos while also maintaining patient privacy; and teaching researchers, policymakers, physicians and patients how to effectively analyze and make use of the wealth of data that can inform decisions and policy.  

The NIH Helping to End Addiction Long-term Initiative℠, or NIH HEAL Initiative℠, is an aggressive, trans-agency effort to speed scientific solutions to stem the national opioid public health crisis. Recognizing the need to capitalize on the data their researchers are gathering in support of this mission, the NIH HEAL Initiative is providing up to $21.4 million over five years to the Renaissance Computing Institute (RENCI) at the University of North Carolina at Chapel Hill and RTI International (RTI) to help researchers successfully and securely prepare and sustain data from more than 500 studies. 

RENCI and RTI will work in partnership with the HEAL-funded team at the University of Chicago that is building a cloud-based platform to allow HEAL researchers, other investigators and advocates, health care providers, and policymakers to easily find NIH HEAL Initiative research results and data and use them to inform their own research, practice, policies and programs. 

According to Rebecca G. Baker, Ph.D., director of the NIH HEAL Initiative, data is the currency of lifesaving and evidence-based practice. 

“To be maximally useful, data must be findable to support new research and secondary analyses, as well as to guide education and policy about pain and addiction,” said Baker. “Preparing data to be easily discoverable can be a challenging and resource-intensive task. While most recognize the need to make data FAIR, not all research teams have the resources or expertise to do this. The RENCI/RTI group, in partnership with the Chicago team, will be available to HEAL-funded investigators to augment efforts where needed.”

Sharing HEAL-generated results and associated data as rapidly as possible will allow the broader community to ask and answer new research questions; conduct secondary analyses; and address fast-evolving challenges that surround pain management, opioid use and misuse, and overdose. NIH HEAL Initiative data are highly diverse and include imaging/microscopy, behavior, genomics, pharmacokinetics, and more. 

“Providing efficient and secure access for investigators to combine data from different studies should give us a much more accurate overall picture of how challenges around pain management and addiction can be addressed,” said Stan Ahalt, director of RENCI. “Given the urgency of HEAL’s mission, we are thankful to be able to provide expertise that can facilitate discovery of important elements hidden within the data.”  

To bring these hidden elements to light, the RENCI/RTI team will study existing NIH HEAL Initiative data efforts and collaborations and, through engagement with HEAL investigators, will produce use cases and requirements for working across diverse data types. 

“We will ensure that the ecosystem architecture is purpose-built and that the ecosystem team provides the on-hand expertise to address HEAL’s needs as the research evolves,” said Rebecca Boyles, director and senior scientist in the Research Computing Division at RTI International. 


About RENCI

The Renaissance Computing Institute (RENCI) develops and deploys advanced technologies to enable research discoveries and practical innovations. RENCI partners with researchers, government, and industry to engage and solve the problems that affect North Carolina, our nation, and the world. An institute of the University of North Carolina at Chapel Hill, RENCI was launched in 2004 as a collaboration involving UNC-Chapel Hill, Duke University, and North Carolina State University.

About RTI International

RTI International is an independent, nonprofit research institute dedicated to improving the human condition. Clients rely on us to answer questions that demand an objective and multidisciplinary approach — one that integrates expertise across the social and laboratory sciences, engineering and international development. We believe in the promise of science, and we are inspired every day to deliver on that promise for the good of people, communities and businesses around the world. For more information, visit www.rti.org.

SoftIron® Joins the iRODS Consortium; Certifies HyperDrive® Compatibility with iRODS Architecture

SoftIron Ltd., the leader in task-specific data center solutions, today announced that it has joined the Integrated Rule-Oriented Data System (iRODS) Consortium, which supports the development of free open source software for data discovery, workflow automation, secure collaboration, and data virtualization. In joining the consortium, whose data management platform is used globally by research, commercial and governmental organizations, SoftIron has certified that its open source, Ceph-based HyperDrive® Storage Appliances are compatible with the iRODS architecture.

“With the open-source nature of Ceph and its ‘Swiss Army Knife’ capabilities that combine file, block, and object storage within the same infrastructure, we think that SoftIron’s HyperDrive storage appliances are a perfect complement for organizations using iRODS who want to scale their storage in a supported, simplified, flexible way,” said Phil Straw, CEO of SoftIron. “And we’re especially pleased to formalize our membership this week, to coincide with BioData World Congress.” The event hosts some of the world’s leading life science organizations – many of whom use iRODS as a key data management platform in pharmaceutical research – enabling collaboration in their pursuit to solve some of the world’s great challenges. Straw continued: “These organizations are already using open source iRODS to advance their mission-critical research, so we’re excited to showcase what SoftIron and Ceph can do to provide them with performance, flexibility and scalability gains, as well as reducing their total cost of ownership.”

“SoftIron and its strong orientation to open source is a great addition to the iRODS ecosystem,” said Jason Coposky, Executive Director of the iRODS Consortium. “Ceph has been gaining traction with both vendors and end-user organizations engaged with iRODS. Welcoming SoftIron, which purpose-builds hardware to optimize every aspect of Ceph, as a member brings immense value to that ecosystem. We look forward to collaborating with SoftIron as we work together to bring added capability, and flexibility to the iRODS community.”

To give iRODS users and others in the life, bio, and pharmaceutical sciences a perspective on using open source Ceph as part of their operational foundation, SoftIron’s Andrew Moloney, VP of Strategy, will be presenting this week during the BioData World Congress. His presentation, titled “Redefining Software-Defined Storage – All the Performance, Without the Complexity,” will discuss some of the most important drivers of HPC storage growth, the operational challenges in storage infrastructure, and various infrastructural approaches for building software-defined storage architectures. Andrew’s talk will be available at 12:30 p.m. GMT on November 9th, 2020. For more information, or a copy of the talk, please email info@softiron.com.

SoftIron® is the world leader in task-specific appliances for scale-out data center solutions. Their superior, purpose-built hardware is designed, developed and assembled in California, and they are the only manufacturer to offer auditable provenance. SoftIron’s HyperDrive® software-defined, enterprise storage portfolio runs at wire-speed and is custom-designed to optimize Ceph. HyperSwitch™ is their line of next-generation, top-of-rack switches built to maximize the performance and flexibility of SONiC. HyperCast™ is their high-density, concurrent 4K transcoding solution for multi-screen, multi-format delivery. SoftIron unlocks greater business value for enterprises by delivering best-in-class products, free from software and hardware lock-in. For more information visit www.SoftIron.com.

The iRODS Consortium is administered by founding member RENCI, a research institute for applications of cyberinfrastructure located at the University of North Carolina at Chapel Hill. Current members of the iRODS Consortium, in addition to SoftIron, include RENCI, Bayer, the U.S. National Institute of Environmental Health Sciences, DataDirect Networks, Western Digital, the Wellcome Sanger Institute, Utrecht University, MSC, University College London, the Swedish National Infrastructure for Computing, University of Groningen, SURF, NetApp, Texas Advanced Computing Center (TACC), Cloudian, Maastricht University, University of Colorado, Boulder, SUSE, Agriculture Victoria, OpenIO, KU Leuven, the Bibliothèque et Archives nationales du Québec, CINES, and four additional anonymous members.

NSF announces $3 million award to expand FABRIC cyberinfrastructure globally

Advanced network offers platform to reimagine the Internet and speed scientific discovery

A new $3 million grant from the National Science Foundation (NSF) will expand FABRIC, a project to build the nation’s largest cyberinfrastructure testbed, to four preeminent scientific institutions in Asia and Europe. The expansion represents an ambitious effort to accelerate scientific discovery by creating the networks needed to move vast amounts of data across oceans and time zones seamlessly and securely.

Science is fast outgrowing the capabilities of today’s Internet infrastructure. Fully capitalizing on big data, artificial intelligence, advanced computation and the Internet of Things requires robust, interconnected computers, storage, networks and software. Uneven progress in science cyberinfrastructure has led to bottlenecks that stymie collaboration and slow the process of discovery.

FABRIC, launched in 2019 with a $20 million grant from NSF, is building a cyberinfrastructure platform where computer scientists can reimagine the Internet and test new ways to store, compute, and move data. With the new NSF award, a sister project called FABRIC Across Borders (FAB) will link FABRIC’s nationwide infrastructure to nodes in Japan, Switzerland, the U.K. and the Netherlands.

“FAB allows collaborative international science projects to experiment with ways to do their science more efficiently,” said FAB Principal Investigator Anita Nikolich, Director of Technology Innovation at the University of Illinois School of Information Sciences and Cyber Policy Fellow at the Harris School of Public Policy at the University of Chicago. “Sending large quantities of data long distances—across borders and oceans—is complicated when your science depends on real-time processing so you don’t miss once-in-a-lifetime events. Being able to put FABRIC nodes in physically distant places allows us to experiment with the infrastructure to support new capabilities and also bring disparate communities together.”

FAB will be led by the University of Illinois along with core team members from RENCI at the University of North Carolina at Chapel Hill; the University of Kentucky; the Department of Energy’s Energy Sciences Network (ESnet); Clemson University; and the University of Chicago. Over three years, the team will work with international partners to place FABRIC nodes at the University of Tokyo; CERN, the European Organization for Nuclear Research in Geneva, Switzerland; the University of Bristol in the U.K.; and the University of Amsterdam.

The project is driven by science needs in fields that are pushing the limits of what today’s Internet can support. With new scientific instruments due to come online in the next few years—generating ever larger data sets and demanding ever more powerful computation—FAB gives researchers a testbed to explore and anticipate how all that data will be handled and shared among collaborators spanning continents.

“FAB will offer a rich set of network-resident capabilities to develop new models for data delivery from the Large Hadron Collider (LHC) at CERN to physicists worldwide,” said Rob Gardner, Deputy Dean for Computing and research professor in the Physical Sciences Division at the University of Chicago and member of FAB’s core team. “As we prepare for the high luminosity LHC, the FAB international testbed will provide a network R&D infrastructure we’ve never had before, allowing us to consider novel analysis systems that will propel discoveries at the high energy frontier of particle physics.”

“FABRIC will tremendously help the ATLAS experiment in prototyping and testing at scale some of the innovative ideas we have to meet the high throughput and big data challenges ATLAS will face during the high luminosity LHC era,” said ATLAS computing coordinators Alessandro Di Girolamo, a staff scientist in CERN’s IT department, and Zach Marshall, an ATLAS physicist from Lawrence Berkeley National Laboratory. “The ATLAS physics community will be excited to test new ways of doing analysis, better exploiting the distributed computing infrastructure we run all around the world.”

To ensure the project meets the needs of the scientists it aims to serve, FAB will be built around use cases led by scientific partners in five areas:

  • Physics (high energy physics use cases at CERN’s Large Hadron Collider)

  • Space (astronomy and cosmology use cases in the Legacy Survey of Space and Time and the Cosmic Microwave Background-Stage 4 project)

  • Smart cities (sensing and computing use cases to advance smart, connected communities for the NSF SAGE project and work at the University of Antwerp and the University of Bristol)

  • Weather (use cases to improve weather and climate prediction at the University of Miami and Brazil’s Center for Weather Forecast and Climatic Studies)

  • Computer science (use cases in private 5G networks at the University of Tokyo; censorship evasion at Clemson University; network competition and sharing at the University of Kentucky; and software-defined networking and P4 programming at South Korea’s national research and engineering network, KREONET)

FAB will connect with existing U.S. and international cyberinfrastructure testbeds and bring programmable networking hardware, storage, computers, and software into one interconnected system. All software associated with FAB will be open source and posted in a publicly available repository: https://github.com/fabric-testbed/.


Cloud Computing Testbed Chameleon Launches Third Phase with Focus on IoT and Reproducibility

$10 million NSF grant funds next four years of multi-institutional project

Since it launched in 2015, Chameleon has enabled systems and networking innovations by providing thousands of computer scientists with the bare metal access they need to conceptualize, assemble, and test new cloud computing approaches. 

Under a new four-year, $10 million grant from the National Science Foundation (NSF), the cloud computing testbed will further broaden its scope, adding new features for reproducibility, IoT and networking experimentation, and GPU computation to its core mission. This multi-institutional initiative is led by the University of Chicago (UChicago) in collaboration with the Renaissance Computing Institute (RENCI), Texas Advanced Computing Center (TACC), and Northwestern University.

“Chameleon is a scientific instrument for computer science systems research,” said Kate Keahey, senior computer scientist at Argonne National Laboratory and the Consortium for Advanced Science and Engineering (CASE) of the University of Chicago, and principal investigator of the Chameleon project. “Astronomers have telescopes, biologists have microscopes, and computer scientists have Chameleon.”

In its first five years, Chameleon has attracted more than 4,000 users from over 100 institutions, working on more than 500 different research and education projects. Scientists have used the testbed to study power management, operating systems, virtualization, high performance computing, distributed computing, networking, security, machine learning, and more. Educators have used Chameleon for cloud computing courses, allowing college and high school students to build their own cloud and learn the inner workings of the technology. 

The upcoming phase of Chameleon will build on work already begun, such as the popular CHameleon Infrastructure (CHI), which provides enhanced capabilities built on the open source OpenStack project.

The team will also broaden connections to other mission-specific testbeds, which will allow experimenters to implement core contributions of testbeds beyond Chameleon into their work. For example, Chameleon will expand capabilities for connecting IoT technologies by integrating with testbeds such as SAGE.

RENCI’s contributions to Chameleon in the third phase of funding will support this cross-testbed capability by further enabling experimentation with advanced programmable networking devices and accelerators. The RENCI team will also develop new options for software-defined networking that will allow compatibility with FABRIC, an “everywhere programmable” nationwide instrument now under development, with large amounts of compute and storage interconnected by high-speed, dedicated optical links. 

“The planned additions to Chameleon will allow academic researchers to experiment with advanced programmable networks in a large-scale cloud environment,” said Paul Ruth, assistant director of network research and infrastructure at RENCI and co-PI on the Chameleon project. “We are excited to extend Chameleon’s cloud experiments into RENCI’s FABRIC testbed, which will facilitate larger, more diverse networking experiments.” 

Finally, the Chameleon team will also add expanded tools for reproducible research, and they will add new hardware and storage resources at the project’s two primary sites, UChicago and TACC, as well as at a supplemental Northwestern University site.

“Chameleon is a great example of how shared infrastructure with over 4,000 users can save the academic community time and money while catalyzing new research results,” said Deepankar Medhi, program director in the Computer & Information Sciences & Engineering Directorate (CISE) at the National Science Foundation. “NSF is pleased to fund Chameleon for four more years in order to extend the platform with new capabilities, thus allowing researchers to conduct new lines of research and students to learn newer technologies.”

To learn more about the testbed or begin experimenting on it today, visit chameleoncloud.org.

What to expect at the 2020 iRODS User Group Meeting

iRODS users and consortium members will gather virtually from June 9-12

The worldwide iRODS user community will connect online this week for the 12th Annual iRODS User Group Meeting – three days of learning, sharing of use cases, and discussions of new capabilities that have been added to iRODS in the last year.

The virtual event, sponsored by the University of Arizona, Cloudian, and RENCI, will be a collection of live talks with Q&A. An audience of nearly 300 participants representing dozens of academic, government, and commercial institutions is expected to join.

“The annual iRODS User Group Meeting has always opened our eyes to the impact of iRODS worldwide, and this year’s meeting will be no different,” says Jason Coposky, Executive Director of the iRODS Consortium. “Although we are moving to a virtual platform, we intend to provide a similar experience to years past by ensuring there are plenty of opportunities for networking, discussion, and collaboration.”

Meeting attendees will learn about new updates such as hard links, direct streaming, and policy composition, according to Coposky. On June 12, the last day of the meeting, the Consortium team will run an iRODS Troubleshooting session, where participants can receive one-on-one help with an existing or planned iRODS installation or integration.

Last month, the iRODS Consortium and RENCI announced the release of iRODS 4.2.8. A notable addition in the release is a new C++ rule engine plugin that gives an iRODS system the ability to convey hard links to its users. An iRODS system stores a hard link when replicas of two different iRODS data objects with different logical paths share a common physical path on the same host. When this occurs, metadata is added to both logical data objects for bookkeeping.
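The bookkeeping described above can be pictured as a many-to-one mapping from logical paths to a shared physical replica, with matching metadata stamped on each linked data object. The sketch below is a conceptual illustration only, in plain Python with hypothetical names; it is not the actual C++ plugin code.

```python
# Conceptual sketch of hard-link bookkeeping (hypothetical names,
# not the actual iRODS C++ rule engine plugin).
class Catalog:
    def __init__(self):
        self.replicas = {}   # logical path -> physical path
        self.metadata = {}   # logical path -> list of (attribute, value)

    def register(self, logical, physical):
        self.replicas[logical] = physical

    def hard_link(self, existing, new_logical):
        """Create a second logical path for the same physical replica,
        recording bookkeeping metadata on both data objects."""
        physical = self.replicas[existing]
        self.replicas[new_logical] = physical
        link_id = f"link-{abs(hash(physical)) % 10000}"
        for path in (existing, new_logical):
            self.metadata.setdefault(path, []).append(
                ("irods::hard_link", link_id))

cat = Catalog()
cat.register("/zone/home/alice/data.bin", "/vault/alice/data.bin")
cat.hard_link("/zone/home/alice/data.bin", "/zone/home/alice/alias.bin")
# Both logical paths now resolve to one physical path, and both
# data objects carry the same hard-link bookkeeping metadata.
```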

This year’s update on the iRODS S3 plugin covers the design and engineering underway to provide direct streaming into and out of S3-compatible storage. The rewrite uses the new iRODS IOStreams library and in-memory buffering to perform efficient multipart transfers.
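The in-memory buffering idea is simple: read the stream into fixed-size buffers and hand each buffer off as one part of a multipart transfer. The sketch below illustrates only that buffering step, using Python's standard library; the names and the tiny part size are assumptions for the example, not the plugin's actual implementation.

```python
import io

PART_SIZE = 8  # bytes, for illustration; real multipart uploads use far larger parts (e.g. MiB-scale)

def stream_in_parts(stream, part_size=PART_SIZE):
    """Yield fixed-size in-memory buffers from a stream, as a multipart
    uploader would before sending each part to S3-compatible storage."""
    while True:
        buf = stream.read(part_size)
        if not buf:
            break
        yield buf

data = b"0123456789abcdefghij"
parts = list(stream_in_parts(io.BytesIO(data)))
# 20 bytes split into parts of 8, 8, and 4 bytes; reassembling
# the parts in order reproduces the original stream.
```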

With the addition of a continuation code to the rule engine plugin framework, iRODS users can now configure multiple policies to be invoked at any given policy enforcement point. Policy developers now have the ability to separate the policy enforcement points from the policies themselves. With this approach, multiple policies can be configured together, or composed, without the need to touch the code.
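The continuation mechanism can be pictured as each configured policy returning a code that tells the framework whether to keep invoking the next policy for the same enforcement point. The sketch below is a conceptual Python illustration with hypothetical names; the real framework is iRODS's C++ rule engine plugin interface.

```python
# Conceptual sketch of policy composition via a continuation code
# (hypothetical names, not the actual iRODS plugin interface).
SUCCESS, CONTINUE = 0, 5_000_000  # CONTINUE stands in for a "keep going" code

def audit_policy(pep, log):
    log.append(f"audit:{pep}")
    return CONTINUE          # allow the next configured policy to run too

def replication_policy(pep, log):
    log.append(f"replicate:{pep}")
    return SUCCESS           # stop here; later policies are skipped

def invoke(pep, policies, log):
    """Invoke each policy configured for a policy enforcement point,
    stopping when one returns something other than CONTINUE."""
    for policy in policies:
        if policy(pep, log) != CONTINUE:
            break

log = []
invoke("pep_api_data_obj_put_post", [audit_policy, replication_policy], log)
# Both policies fire for the same enforcement point, in the
# configured order, without either needing to know about the other.
```

The point of the design is that composing an auditing policy with a replication policy is a configuration change, not a code change.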

As always with the annual UGM, in addition to general software updates, users will offer presentations about their organizations’ deployments of iRODS. This year’s meeting will feature 23 talks from users around the world. Among the use cases and deployments to be featured are:

  • SmartFarm data management, Agriculture Victoria. Data management challenges grow with the large datasets generated by new sensing technologies, which call for standardised, automated, online, authenticated, and verifiable processes for uploading data for storage and analytics on computing facilities. Agriculture Victoria undertakes research and development in animal and plant production, chemistry, spatial information, and soil and water science. Working with iRODS, Agriculture Victoria is piloting new data management workflows for ‘SmartFarm’ data, and this talk will discuss lessons from small, medium, and large-data agriculture SmartFarm use cases using edge computing and collaborative data infrastructure, as well as the flow-on development of capability for AVR researchers.
  • Data management in autonomous driving projects, Aptiv. Aptiv is a global technology company that develops safer, greener, and more connected solutions that enable the future of mobility. The company deployed iRODS in production about a year and a half ago, at the start of the development phase of a major autonomous driving project. The researchers will share how iRODS has helped track and migrate data between partners and within the engineering groups responsible for data collection and for manual and automated analysis.
  • Building a national Research Data Management (RDM) infrastructure with iRODS in the Netherlands, SURF. In the Netherlands, many universities are looking at iRODS to support their researchers, as they recognize the powerful potential of the tool in two areas: support for secure cooperation, and support over the entire research data life cycle. SURF, a national organization providing IT support and infrastructure for universities, is now working closely with six universities to build a national RDM infrastructure based on iRODS. Researchers from SURF will share a case study for the use of iRODS, not for a specific research group, but for an entire nation, enhancing support for researchers by working together on this iRODS-based infrastructure.
  • Keeping pace with science: The CyVerse Data Store in 2020 and the Future, CyVerse / University of Arizona. CyVerse, hosted at the University of Arizona, provides a national cyberinfrastructure for life science research as well as training scientists in using such high performance computing resources. This talk will describe the current features of the CyVerse Data Store and plans for its evolution. Since its inception in 2010, the Data Store has leveraged the power and versatility of iRODS by continually extending the functionality of CyVerse’s cyberinfrastructure. These features include project-specific storage, offsite replication, third-party service and application integrations, several data access methods, event stream publishing for indexing, and optimizations for accessing large sets of small files.

Registration for the Virtual iRODS UGM will remain open throughout the week. See the registration page for details.

About the iRODS Consortium

The iRODS Consortium is a membership organization that supports the development of the integrated Rule-Oriented Data System (iRODS), free open source software for data virtualization, data discovery, workflow automation, and secure collaboration. The iRODS Consortium provides a production-ready iRODS distribution and iRODS training, professional integration services, and support. The world’s top researchers in life sciences, geosciences, and information management use iRODS to control their data. Learn more at irods.org.

The iRODS Consortium is administered by founding member RENCI, a research institute for applications of cyberinfrastructure at the University of North Carolina at Chapel Hill. For more information about RENCI, visit renci.org.

OpenIO joins iRODS Consortium

The iRODS Consortium, the foundation that leads development and support of the integrated Rule-Oriented Data System (iRODS) data management software, welcomes OpenIO as its newest Consortium member.

Read more

RENCI to help lead effort to make cancer research data more useful and accessible

The Renaissance Computing Institute (RENCI) at the University of North Carolina at Chapel Hill will collaborate on an $8.8 million, 3.5-year effort to make the volumes of data arising from cancer research more accessible, organized, and powerful. This contract was awarded by the Frederick National Laboratory for Cancer Research on behalf of the National Cancer Institute.

Read more

RENCI researchers spearhead $20 million project to test a reimagined Internet

Collaboration will establish a nationwide network infrastructure

The University of North Carolina at Chapel Hill will lead a $20 million project to create a platform for testing novel internet architectures that could enable a faster, more secure Internet.

With leadership from researchers at the Renaissance Computing Institute (RENCI), UNC-Chapel Hill and its partners will build a platform, called FABRIC, to provide a nationwide testbed for reimagining how data can be stored, computed and moved through shared infrastructure. FABRIC, funded by the National Science Foundation, will allow scientists to explore what a new Internet could look like at scale, and help determine the internet architecture of the future.

Read more

SUSE joins the iRODS Consortium

The iRODS Consortium, the foundation that leads development and support of the integrated Rule-Oriented Data System (iRODS) data management software, welcomes SUSE as its newest Consortium member.  

Read more

South Big Data Hub receives second round of NSF funding

$4 million will support continued innovation and problem-solving in the Southern data science community

The National Science Foundation (NSF) recently announced the second phase of funding for the regional Big Data Innovation Hubs (Hubs). Each of the Hubs will receive $4 million over four years for a total investment of $16 million.

Each Hub is located in one of the four U.S. Census regions (South, Northeast, Midwest, and West) and serves as a thought leader and convening force on social and economic challenges that are unique to the region by playing four key roles: (1) Accelerating public-private partnerships that break down barriers between industry, academia, and government, (2) Growing R&D communities that connect data scientists with domain scientists and practitioners, (3) Facilitating data sharing and shared cyberinfrastructure and services, and (4) Building data science capacity for education and workforce development.

Read more