By Siddharth Vodnala, intern, Voiland College of Engineering and Architecture

Pande, Doppa (l-r)

PULLMAN, Wash. – A Washington State University and Carnegie Mellon University team has received a grant from the U.S. Army Research Office to develop a novel computing platform for emerging big data applications.

The researchers, including Partha Pande and Jana Doppa, professors in the School of Electrical Engineering and Computer Science, and professors Radu Marculescu and Diana Marculescu from Carnegie Mellon, are designing datacenter-on-chip (DoC) technology for faster and more energy-efficient data processing and better performance for big data applications.

Datacenter on a chip

Datacenter-on-chip technology puts the equivalent of a huge data center, the kind of large facility that consumes enormous amounts of energy crunching data for companies like Google and Amazon, onto a single computer chip. The chip includes thousands of processors, or cores.

The U.S. Army is interested in such manycore platforms for large real-time battle simulations and for battle information management software. Many military applications, such as electronics for warfighters and unmanned aerial vehicles, also require both high-performance computing and low power consumption.

Big data and deep learning

Partha Pande and Jana Doppa (l-r), professors in the School of Electrical Engineering and Computer Science, are working with professors from Carnegie Mellon to design datacenter-on-chip technology.

The researchers are working on datacenter-on-chip technology that is specifically optimized for the big data application known as deep learning. Deep learning techniques have been used increasingly in recent years in areas such as natural language processing, computer vision and self-driving cars.

“Our proposed computing designs are aimed at improving the speed of processing big data while reducing the overall power consumption,” said Pande.

Since deep learning techniques are very data-intensive, the researchers will place graphics processing units (GPUs) alongside central processing units (CPUs) on the chip to speed up processing, creating what is known as a heterogeneous platform.

CPU+GPU on single chip

“The main challenge is to integrate CPUs and GPUs into a single chip with an efficient communication network between them, since their characteristics vary a lot,” Pande said. Pande and his team are considering hybrid communication networks consisting of both wired and wireless connections to address this important challenge.
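The appeal of pairing the two kinds of cores can be illustrated with a toy placement heuristic. This is a minimal sketch, not the team's design: the `Task` type, the `place` function and the parallelism threshold are all invented for illustration. The idea it captures is simply that highly data-parallel deep learning kernels benefit from GPU-style cores, while small, control-heavy tasks run better on CPUs.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    parallelism: int  # number of independent work items in the task

def place(task, threshold=1000):
    """Rough placement heuristic: highly parallel tasks amortize the
    cost of launching work on a GPU; small ones are faster on a CPU."""
    return "GPU" if task.parallelism >= threshold else "CPU"

workload = [
    Task("matrix-multiply", 1_000_000),  # typical deep learning kernel
    Task("parse-config", 1),             # branchy, sequential work
    Task("convolution", 250_000),
]
for t in workload:
    print(t.name, "->", place(t))
```

In a real heterogeneous chip, of course, the decision also depends on data movement costs over the on-chip network, which is exactly why the wired/wireless interconnect the team describes matters.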

The team is also working to optimize the overall design to balance the trade-offs among performance, power consumption and thermal properties for big data applications.

“Traditional optimization algorithms scale very poorly on these problems. We are developing novel machine-learning-based search algorithms to significantly improve the overall design time,” said Doppa.
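The kind of design-space search Doppa describes can be sketched with a toy example. Everything below is an assumption for illustration: the design space (a voltage/frequency level per core), the cost model, and the greedy local search, which here stands in for the learning-guided search algorithms the team is actually developing.

```python
import random

# Hypothetical design space: assign each of N_CORES a voltage/frequency
# level 0..LEVELS-1. The goal is a good performance/power trade-off.
N_CORES = 8
LEVELS = 4

def cost(config):
    # Toy objective: higher levels mean more performance (good) but
    # power that grows superlinearly with level (bad). Lower is better.
    perf = sum(config)
    power = sum(level * level for level in config)
    return 0.6 * power - 1.0 * perf

def neighbors(config):
    # All configurations that change one core's level by +/- 1.
    for i in range(N_CORES):
        for delta in (-1, 1):
            level = config[i] + delta
            if 0 <= level < LEVELS:
                yield config[:i] + (level,) + config[i + 1:]

def local_search(start, max_steps=100):
    """Greedy local search over the design space: repeatedly move to the
    best neighboring configuration until no neighbor improves the cost.
    A learning-based search would instead learn which moves to try."""
    best = start
    for _ in range(max_steps):
        candidate = min(neighbors(best), key=cost)
        if cost(candidate) >= cost(best):
            break
        best = candidate
    return best

random.seed(0)
start = tuple(random.randrange(LEVELS) for _ in range(N_CORES))
best = local_search(start)
print(best, round(cost(best), 2))
```

Even this toy search must evaluate many neighbors per step; real chip design spaces are astronomically larger, which is why naive search scales poorly and learned guidance pays off.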

“There are many technical challenges in front of us in creating a better power management strategy,” Pande said. “We are confident that our proposed chip designs will prove to be useful to advance the research and development of deep learning and other emerging big data applications around the world.”


Contact:

  • Brett Stav, public relations/marketing director, Voiland College of Engineering and Architecture, 509-335-8189, brett.stav@wsu.edu