WSU and Duke University researchers have received a three-year National Science Foundation grant to develop a novel computing framework for big data applications.
Bringing machine learning and a holistic perspective to their design work, the researchers will create a unique heterogeneous manycore system aimed at enhancing performance, reducing energy consumption, and ensuring reliability.
“We are pushing the boundaries of existing computing systems in a very novel way,” said Jana Doppa, George and Joan Berry Chair assistant professor in the School of Electrical Engineering and Computer Science (EECS). “It is very exciting.”
Advanced computing systems have enabled many breakthroughs in science, engineering, and technology, and they continue to play an important role in the era of big data. Since the 1960s, ever smaller transistors have allowed the number of transistors on a microchip to double roughly every two years, an observation known as Moore’s Law, with corresponding gains in computing speed and capability. But, in recent years, semiconductor advancement has slowed while demand for much faster computing has increased for big data applications, such as self-driving cars, graph analytics, and personalized medicine.
The WSU and Duke research team is taking a holistic approach to optimizing computer architecture, rather than the more typical piecemeal approach that separately tackles a computer’s memory, computing ability, and communication system. The WSU team includes Doppa and Partha Pande, Boeing Centennial Chair in Computer Engineering and director of the School of EECS.
“People tend to look at the smaller pieces for optimization because that’s easier to deal with,” Doppa said. “We want to look at the computing, memory, and communication layers altogether.”
As part of their mission to improve the computing architecture, the researchers will uniquely use machine learning to help them in their system design, evaluation, and decision making.
“Machine learning allows for getting feedback and then using the updated knowledge to choose the most promising option, known as adaptive iteration,” he said. “It is like how humans go about their problem solving.”
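The adaptive iteration Doppa describes can be sketched as a simple design-space search loop: a surrogate model predicts how promising each untried design is, the most promising one is actually evaluated, and the result is fed back to sharpen the next prediction. The sketch below is illustrative only; the design space, the scoring function, and the nearest-neighbor surrogate are all hypothetical stand-ins, not the team’s actual algorithm.

```python
import random

# Hypothetical design space: (cpu_cores, gpu_cores) configurations.
DESIGN_SPACE = [(c, g) for c in range(1, 9) for g in range(1, 9)]

def evaluate(design):
    """Stand-in for a slow architecture simulation.
    Returns a made-up performance-per-watt score (higher is better)."""
    c, g = design
    return -(c - 5) ** 2 - (g - 3) ** 2  # fictitious peak at (5, 3)

def predict(design, observed):
    """Crude surrogate model: reuse the score of the nearest
    already-evaluated design as the prediction."""
    c, g = design
    nearest = min(observed, key=lambda d: (d[0] - c) ** 2 + (d[1] - g) ** 2)
    return observed[nearest]

random.seed(0)
# Start with a few randomly evaluated designs.
observed = {d: evaluate(d) for d in random.sample(DESIGN_SPACE, 3)}

for _ in range(10):  # adaptive iterations
    # Use the surrogate to pick the most promising untried design...
    candidates = [d for d in DESIGN_SPACE if d not in observed]
    best_guess = max(candidates, key=lambda d: predict(d, observed))
    # ...evaluate it for real, and feed the result back into the model.
    observed[best_guess] = evaluate(best_guess)

best = max(observed, key=observed.get)
print("best design found:", best)
```

Each pass through the loop spends one expensive evaluation where the updated model expects the most payoff, which is the "feedback, then choose the most promising option" cycle described above.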
“The overarching theme of the work is to exploit the synergy between machine learning and manycore systems research,” Pande said. “Advancement in novel machine learning algorithms and computer system design are tightly coupled, and advancement in one cannot be achieved without the other.”
The researchers will integrate machine learning into the chip itself, developing a chip that, like a self-driving car, can optimize its performance, energy savings, and even reliability.
“As is the case with a self-driving car, we don’t want to have a human telling the chip what to do,” Doppa said. “We want to maximize its performance and reduce energy, and the chip has to figure out the best way to achieve that.”
Unlike current homogeneous manycore systems, the team plans to include different kinds of processors, such as graphics processing units (GPUs) and central processing units (CPUs), within the same system. They are also planning three-dimensional chip designs instead of the flat, two-dimensional chips currently used in computing.
Furthermore, in current systems, storage and computing sit in separate places on a chip. That is, a computer has to fetch data from storage and then do its computing – a process that is repeated over and over, consuming large amounts of energy and time. The WSU team is working to develop a manycore system that can do both storage and computing together in the same place, which would save energy and time.
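A back-of-the-envelope energy model shows why moving compute to the data matters: fetching a word from memory typically costs far more energy than the arithmetic performed on it. The numbers below are hypothetical placeholders chosen only to illustrate the argument, not measurements of any real chip or of the team’s design.

```python
# Illustrative-only energy model; both constants are assumptions.
ENERGY_MOVE_PJ = 640.0  # assumed cost to fetch one word from memory (picojoules)
ENERGY_ADD_PJ = 1.0     # assumed cost of one addition on that word

def conventional(n_words):
    """Fetch every word from separate storage, then compute on it:
    pay the movement cost plus the arithmetic cost per word."""
    return n_words * (ENERGY_MOVE_PJ + ENERGY_ADD_PJ)

def in_memory(n_words):
    """Compute where the data already lives: only the arithmetic remains."""
    return n_words * ENERGY_ADD_PJ

n = 1_000_000
ratio = conventional(n) / in_memory(n)
print(ratio)  # → 641.0, i.e. the movement cost dominates
```

Under these assumed costs, the repeated fetch-then-compute cycle spends hundreds of times more energy on data movement than on the computation itself, which is the saving a combined storage-and-compute core targets.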