XSEDE Science Successes
Chameleon connects to Bridges
CHAMELEON: A BRIDGE TO XSEDE'S 'BRIDGES'
UC/TACC cloud computing testbed helps Pittsburgh Supercomputing Center get ahead of the game
In the world of advanced computing, computer scientists routinely rely on supercomputers to explore new technologies and to develop better, more efficient algorithms, methods, and tools. However, production supercomputing environments often run locked-down systems and software that cannot be modified to build new, customized environments.
With the rapid emergence of cloud computing as a flexible computing paradigm, the academic research community needed a system on which it could develop and experiment with novel cloud architectures and pursue new applications of cloud computing in customized environments. Chameleon, launched in 2015, was designed to do just that.
Working with the University of Chicago and the Texas Advanced Computing Center (TACC), the National Science Foundation (NSF) funded Chameleon, TACC's first system focused on cloud computing for computer science research. The $10 million system is an experimental testbed for cloud architecture and applications, built specifically for the computer science domain.
"Cloud computing infrastructure provides great flexibility in being able to dynamically reconfigure all or parts of a computing system so that it can best suit the needs of the applications and users," said Derek Simmel, a 15-year veteran of the Advanced Systems Group at the Pittsburgh Supercomputing Center (PSC). "With this flexibility, however, comes considerable complexity in monitoring and managing the resources, and in determining how best to provision them. This is where having an experimental facility like Chameleon really helps."
Simmel is also an XSEDE (Extreme Science and Engineering Discovery Environment) expert who works on PSC's Bridges, an NSF-funded XSEDE resource for empowering new research communities and bringing together high performance computing (HPC) and Big Data. Bridges operates in part as a cluster but also has the ability to provide cloud resources including virtual machines (VMs) and other dynamically configurable computational resources.
According to Simmel, the new Bridges system presented new challenges because it is a non-traditional system deployed using OpenStack.
"The cloud infrastructure software itself (OpenStack) is also evolving rapidly, as computer scientists work to improve and expand its capabilities," Simmel said. "Keeping up with new developments and changes in the way one operates all the component cloud services is a considerable burden to cloud system operators — the learning curve remains fairly steep, and all the expertise required for a traditional computing facility needs to be available for cloud-provisioned systems as well."
Managing cloud computing infrastructure is as complex as managing an entire supercomputing machine room — all the software and services required for computing, networking, scheduling, monitoring, security and software management are represented in a layer of cloud services that operates between the physical hardware and the virtual systems accessed by users.
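As a rough illustration of that service layer, the Python sketch below uses the openstacksdk client to touch several of those services one at a time; the cloud name is only a placeholder for an entry in a local clouds.yaml file, not a real XSEDE endpoint.

import openstack

# The cloud name is a placeholder for a clouds.yaml entry holding credentials.
conn = openstack.connect(cloud='example-cloud')

# Each loop below talks to a different service in the cloud layer that sits
# between the physical hardware and the user-visible virtual machines.
for server in conn.compute.servers():        # Nova: compute / virtual machines
    print("VM:", server.name, server.status)
for net in conn.network.networks():          # Neutron: software-defined networking
    print("network:", net.name)
for image in conn.image.images():            # Glance: virtual machine images
    print("image:", image.name)
for volume in conn.block_storage.volumes():  # Cinder: block storage volumes
    print("volume:", volume.name, volume.size, "GB")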
Enter Chameleon.
It's often a challenge to test the scalability of system software components before a large deployment, particularly if you need low-level hardware access. Chameleon was designed for just these sorts of cases – when your local test hardware is inadequate, and you are testing something that would be difficult to test in the commercial cloud – like replacing the available file system. Projects like SLASH2 can use Chameleon to make tomorrow's cloud systems better than today's.
-Dan Stanzione, Executive Director at TACC and a Co-PI on the Chameleon project
PSC started using Chameleon in August 2015 and tested OpenStack there for five to six months before the first Bridges hardware arrived in early 2016 (Bridges entered full production in July 2016). In addition to Chameleon's bare-metal reconfiguration capabilities, a modest partition of the system has been configured with OpenStack KVM to provide a ready-made cloud for researchers interested in experimenting with cloud computing. This allowed Simmel and others to experiment with and optimize a piece of software called SLASH2, a PSC-developed distributed file system.
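To give a sense of what that ready-made cloud looks like in practice, the sketch below boots a single test VM through the openstacksdk Python client. The cloud, image, flavor, network, and server names are illustrative assumptions, not Chameleon's actual catalog entries.

import openstack

# 'chameleon' stands in for a clouds.yaml entry holding the project's credentials.
conn = openstack.connect(cloud='chameleon')

# Look up an image, flavor, and network by name; the names here are assumptions.
image = conn.compute.find_image('CC-CentOS7')
flavor = conn.compute.find_flavor('m1.medium')
network = conn.network.find_network('sharednet1')

# Boot the instance, wait for it to reach ACTIVE, then print its addresses.
server = conn.compute.create_server(
    name='slash2-test-client',
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.name, server.status, server.addresses)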
They tested deployment of the SLASH2 distributed filesystem on CentOS 7.x virtual machines provisioned using the Chameleon OpenStack environment. "A primary challenge was to identify and understand what happens to SLASH2 filesystems as the number of client systems mounting the filesystem scales up into the hundreds of nodes, and as data-intensive applications and access patterns on some nodes affect the availability and performance for others," Simmel said.
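A scale-up experiment of that kind can be driven by a small harness. The sketch below is purely illustrative: it assumes passwordless ssh to hypothetical client VMs that all mount the shared filesystem at an assumed path, ramps up the number of simultaneous writers, and reports how per-client write times degrade; it is not PSC's actual test plan.

import subprocess
import time
from concurrent.futures import ThreadPoolExecutor

CLIENTS = ["client{:03d}".format(i) for i in range(1, 257)]  # hypothetical VM host names
MOUNT = "/mnt/slash2-test"                                   # hypothetical shared mount point

def timed_write(host, size_mb=256):
    """Write one file on the shared filesystem from a client over ssh; return seconds."""
    cmd = ["ssh", host,
           "dd if=/dev/zero of={}/{}.dat bs=1M count={} conv=fsync".format(MOUNT, host, size_mb)]
    start = time.time()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.time() - start

# Ramp up the number of concurrent writers and watch how per-client times degrade.
for n in (16, 64, 128, 256):
    with ThreadPoolExecutor(max_workers=n) as pool:
        times = list(pool.map(timed_write, CLIENTS[:n]))
    print("{:4d} clients: slowest {:6.1f}s  mean {:6.1f}s".format(n, max(times), sum(times) / n))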
According to Simmel, these observations showed the SLASH2 developers which areas of the system are sensitive to scaling, to data access patterns at scale, and to load management. The developers implemented improvements to SLASH2 that reduce contention, and added configuration controls to sustain availability as the number of clients climbs past 1,000, as is the case on the new Bridges system. These improvements are now in production on the Bridges /pylon2 filesystem.
"In preparation for the Bridges supercomputer being fully deployed in July, it was very convenient for Chameleon to be ahead of us so we could use their resources to test configurations, deployment scenarios, and the scalability of Slash2 on VMs that were provided by Chameleon," Simmel said. "It was the right system available at the right time."
PSC employs OpenStack to provision the Bridges system itself, and also to provide VMs and VM-based services for users. Now that Bridges is in full production mode, the lessons learned on Chameleon for OpenStack and SLASH2 deployment are being put to work for the domain scientists using Bridges and Bridges-provisioned VMs to run their HPC simulations and data analyses. "OpenStack is one of the leading software collections in the cloud arena right now – it's developing very quickly. Chameleon gave us a working OpenStack environment at a time when we needed one. We needed to focus on developing solutions for SLASH2 and Bridges rather than tackling the difficulties of getting a large OpenStack system stabilized," Simmel said.
"It's often a challenge to test the scalability of system software components before a large deployment, particularly if you need low level hardware access", said Dan Stanzione, Executive Director at TACC and a Co-PI on the Chameleon project. "Chameleon was designed for just these sort of cases – when your local test hardware is inadequate, and you are testing something that would be difficult to test in the commercial cloud – like replacing the available file system. Projects like Slash2 can use Chameleon to make tomorrow's cloud systems better than today's."
Users don't notice the preparation behind the curtain, Simmel said, but it was helpful to know in advance what challenges PSC would face in scaling SLASH2 and where to prioritize problem-solving efforts. SLASH2 now runs in production on Bridges as one of its two primary file systems. "And access to that from VMs was greatly facilitated by our up-front testing on Chameleon before the machine was here," Simmel said.
"Its novel that there was another NSF resource elsewhere for us to use for HPC infrastructure development and testing," Simmel concluded. "In the past, we've acquired equipment to experiment with before deploying a production HPC system, but it has been very limited — those machines didn't allow us to try the higher level testing that we needed to do with OpenStack and Slash2. It was really convenient to have Chameleon available rather than having to home grow our own system. We greatly appreciate the resources and service provided to us by the Chameleon project."

Chameleon: Cloud Computing Testbed for Cloud Architecture and Applications

Derek Simmel, a 15-year veteran of the Advanced Systems Group at the Pittsburgh Supercomputing Center (PSC)

Bridges is a uniquely capable resource for empowering new research communities and bringing together HPC and Big Data.