Korvo Research Project List

Korvo Projects

The Korvo group is active in many areas of systems research, from multicore scheduling to virtualized I/O, from High Performance Computing to Software Defined Networking, and from Internet-scale monitoring to the Internet of Things. Below is a list of some of our named projects. Please check out the individual project pages and the research papers linked from the People tab.

Cloud and Datacenter Systems

This project is a collaborative effort among Georgia Tech, Cisco, Intel ISTC, Carnegie Mellon University, HP Labs, IBM Research, and VMware. Led by Dr. Karsten Schwan, the project aims to provide scalable monitoring solutions for virtualized datacenter systems, fine-grained resource management and orchestration solutions for enterprise computing systems, and high-throughput, low-latency solutions for stream processing systems.

Edge Cloud Systems

The context-aware, on-demand, and distributed nature of next-generation applications designed for Mobile-Cloud and Internet of Things scenarios puts high pressure on devices and infrastructure due to the sheer amount of generated traffic, while the interactive nature of these applications simultaneously demands even lower response latencies. The resource constraints in terms of computation, storage, I/O extension, etc., inherent to mobile and end-user devices, pose additional challenges to meeting these demands. We are exploring the provisioning and use of an emerging 'Edge Cloud' tier situated only one physical hop away from end-user devices (e.g., traditional PCs, home servers, Wi-Fi routers, small cells, etc.), which can be provisioned seamlessly on demand by the backend cloud, or by end-user devices in an ad hoc manner via device discovery.
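The tradeoff motivating the edge tier can be illustrated with a small sketch. The tier names, latency numbers, and the `pick_offload_target` helper below are purely hypothetical and are not part of the project's software; the sketch only shows why an edge tier one hop away can beat both on-device execution (slow compute) and the backend cloud (long round trips):

```python
# Hypothetical offload-placement sketch: choose where to run a task by
# comparing estimated total (network + compute) latency per tier.
# All names and numbers are illustrative, not measurements from the project.

from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    rtt_ms: float        # round-trip network time to reach the tier
    compute_ms: float    # estimated processing time on the tier

def pick_offload_target(tiers):
    """Pick the tier with the lowest end-to-end response latency."""
    return min(tiers, key=lambda t: t.rtt_ms + t.compute_ms)

tiers = [
    Tier("local-device",  rtt_ms=0.0,  compute_ms=120.0),  # constrained CPU
    Tier("edge-cloud",    rtt_ms=5.0,  compute_ms=20.0),   # one hop away
    Tier("backend-cloud", rtt_ms=60.0, compute_ms=10.0),   # fast but far
]

best = pick_offload_target(tiers)
print(best.name)  # edge-cloud: 5 + 20 = 25 ms beats 70 ms and 120 ms
```

In this toy setting the edge tier wins even though the backend cloud has the fastest compute, because network round trips dominate the interactive latency budget.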


GlassBox

GlassBox is a National Science Foundation-funded project with the goal of enabling application developers to improve the performance of their scientific and engineering codes on future high-end computing (HEC) machines. The basic approach of the project is to offer an open, transparent software infrastructure - a Glass Box system - for creating and tuning large-scale, parallel applications. “Opening up” the tools and services used to create and evaluate peta- and exa-scale codes involves developing interfaces and methods that expose tool-internal information, making the tools accessible to new performance management services that improve developer productivity and code efficiency.

Heterogeneous Platforms and Systems

Heterogeneous platforms and systems are becoming the norm for mobile devices, server systems, and large-scale machines. This project investigates system support to exploit and manage heterogeneity, at levels of abstraction ranging from middleware to toolchains, instrumentation, and operating systems. Concerning individual platforms, our research targets heterogeneous platforms with multicore CPUs and/or integrated and discrete GPUs, and the complex memory systems of future large-memory servers with on-chip fast RAM, die-stacked RAM, NVRAM, and SSD memories. Concerning larger-scale systems, we are considering cloud-hosting datacenters and high-end machines like those planned for the exascale era.


A Metadata Based Programmable File System

File system interfaces have remained remarkably unchanged throughout the computing era. The stability and simplicity of the commonly used interfaces of modern file systems have created consistent, performance-oriented environments on which other applications may be layered. Using hierarchical representations of stored content, users can easily navigate and manage large numbers of files through current command-line and graphical interfaces. However, users and applications are more interested in the information that can be retrieved from the data, making data quality checks an important requirement in contemporary file systems, especially for scientific data sets. Additionally, in the age of distributed computing, better metadata management is required to organize files across several devices while maintaining the information relevant for interfacing.

The proposed Khan file system aims to solve these issues surrounding data quality and the organization of rich metadata in a distributed world. Using unique directory structures, intelligent metadata management, and a semantic approach, Khan builds upon other file systems to solve problems of the ubiquitous computing age. Goals addressed by Khan include determining data quality using user-defined parameters, interfacing with and organizing files from interactions with new stores such as the cloud and mobile devices, and remaining extensible for future needs while retaining and improving upon the performance characteristics of contemporary file systems.
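The idea of user-defined quality parameters over file metadata can be sketched as follows. This is an illustration only, not Khan's actual interface: the `tag` and `check_quality` helpers, the in-memory metadata store, and the example attributes are all invented for the sketch.

```python
# Illustrative sketch (not Khan's API): attach rich metadata to files
# and evaluate user-defined quality rules over that metadata, in the
# spirit of a metadata-based programmable file system.

metadata_store = {}   # path -> metadata dict, standing in for the FS layer

def tag(path, **attrs):
    """Attach (or update) metadata attributes for a file."""
    metadata_store.setdefault(path, {}).update(attrs)

def check_quality(path, rules):
    """Return the names of the user-defined rules the file fails."""
    meta = metadata_store.get(path, {})
    return [name for name, rule in rules.items() if not rule(meta)]

tag("/data/run42.h5", instrument="LTX", samples=10_000, calibrated=True)

rules = {
    "calibrated":     lambda m: m.get("calibrated", False),
    "enough-samples": lambda m: m.get("samples", 0) >= 1_000,
}

print(check_quality("/data/run42.h5", rules))  # [] -> passes all checks
```

Because the rules are ordinary predicates over metadata, a scientific user can encode domain-specific notions of "good data" (calibration state, sample counts, provenance) without changing the file system itself.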


Monitoring Analytics for In Situ Workflows at the Exascale

The Mona project is a collaboration between Georgia Tech, University of Oregon, Oak Ridge National Laboratory, and Princeton Plasma Physics Laboratory. Led by Dr. Greg Eisenhauer after Dr. Schwan's passing, the project is aimed at providing scalable platform and application monitoring for in situ workflows at the exascale.

The MONA(lytics) project seeks to understand, evaluate, and ultimately control the online data flows generated by future exascale applications and the analytics processing applied to those flows: their volumes, speeds, and processing needs; the energy saved by online vs. offline data processing; the effects of next-generation computer hardware and of new ways of performing data management; and the tradeoffs between how well data is analyzed and the costs of doing so, when approximate methods are sufficient for the immediate scientific insights being sought.
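The cost/accuracy tradeoff mentioned above can be made concrete with a toy example. This sketch is illustrative only and is not MONA's method: it simply contrasts an exact aggregate over a monitoring stream with a sampled approximation that touches about 1% of the data.

```python
# Illustrative only: approximate analytics over a monitoring stream.
# Sampling trades a small accuracy loss for a large reduction in the
# amount of data that must be processed online.

import random

random.seed(0)                                  # deterministic demo
stream = [float(i % 100) for i in range(100_000)]

exact = sum(stream) / len(stream)               # full pass: 100,000 items
sample = random.sample(stream, 1_000)           # ~1% of the data
approx = sum(sample) / len(sample)

print(round(exact, 2), round(approx, 2))        # approx lands close to exact
```

When the scientific question only needs an aggregate to within a few percent, this kind of approximation lets the analytics keep pace with the simulation at a fraction of the processing cost.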

Non-volatile Memory Project

Non-volatile memory (NVM) is emerging as a promising way forward to substantially enhance future systems. This project investigates system support to exploit the characteristics of NVM.
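One characteristic such system support must handle is that stores to NVM may sit in volatile CPU caches until explicitly flushed, so durable updates must be ordered. The sketch below illustrates the idea using a file-backed memory mapping as a stand-in for NVM, with `mmap.flush()` playing the role of a cache-line flush and fence; the file layout and commit-flag scheme are invented for the example.

```python
# Minimal sketch, assuming a file-backed mapping stands in for NVM.
# A crash-consistent update writes the payload, flushes it to the
# persistence domain, and only then sets a commit flag - so recovery
# never observes a flag without its payload.

import mmap
import os
import struct
import tempfile

path = os.path.join(tempfile.mkdtemp(), "nvm.bin")
with open(path, "wb") as f:
    f.truncate(16)                       # bytes 0-7: flag, bytes 8-15: value

with open(path, "r+b") as f:
    pm = mmap.mmap(f.fileno(), 16)
    pm[8:16] = struct.pack("<Q", 42)     # 1. write the payload first...
    pm.flush()                           #    ...and persist it
    pm[0:8] = struct.pack("<Q", 1)       # 2. then publish the commit flag
    pm.flush()
    pm.close()

# "Recovery": trust the payload only if the commit flag made it out.
with open(path, "rb") as f:
    flag, value = struct.unpack("<QQ", f.read(16))
print(flag, value)  # 1 42
```

Real NVM software uses instructions like cache-line write-back plus fences instead of `msync`-style flushes, but the ordering discipline shown here is the same.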

RSVP: I/O staging for extreme scale data

RSVP: Runtime System for I/O staging in support of Voluminous in situ Processing of extreme scale data

At these extreme scales, online data processing pipelines will need to be easily and dynamically composed, executed efficiently alongside the scientific simulations producing the data, and able to support reuse of computation and data. Furthermore, the need to seamlessly integrate experimental data imposes additional demands on extreme-scale data management solutions. The overarching goal of the RSVP project is to fundamentally address these challenges by developing a model in which computational, data transformation, and data analytic services can be easily and efficiently associated with and applied to science data as part of an end-to-end, in situ “process flow.”
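The "process flow" idea of composing reusable transformation and analysis services can be sketched as follows. The `compose` helper and the stage names are illustrative assumptions, not RSVP's API; the point is only that stages are composed once and then applied in situ to each batch of data as the simulation produces it.

```python
# Hedged sketch of a composable in situ "process flow": independent
# data-transformation stages are chained into one reusable pipeline
# and applied to each timestep's output as it is produced.

def compose(*stages):
    """Chain data-transformation stages into a single callable pipeline."""
    def pipeline(data):
        for stage in stages:
            data = stage(data)
        return data
    return pipeline

def decimate(samples):
    return samples[::2]                  # data-reduction stage: keep every 2nd

def smooth(samples):
    # analysis stage: adjacent-pair averaging (identity for short inputs)
    return [(a + b) / 2 for a, b in zip(samples, samples[1:])] or samples

flow = compose(decimate, smooth)         # composed once, reused per timestep

# Simulated "timesteps" streaming through the same flow:
for step, samples in enumerate([[1, 2, 3, 4, 5, 6], [4, 4, 4, 4]]):
    print(step, flow(samples))
```

Keeping stages as plain callables makes pipelines easy to recompose dynamically, which mirrors the project's goal of associating services with data on demand rather than hard-wiring them into the simulation.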
