With advancements in the digital world, such as artificial intelligence, machine learning, robotics and the Internet of Things, computers face serious challenges. Current digital processing technologies are limited by speed and high power consumption, while tomorrow’s applications demand ever more computing power, leaving computing platforms lagging behind the needs of the scientific community.
Arjuna Madanayake, associate professor in the Department of Electrical and Computer Engineering within the College of Engineering & Computing, is exploring elements of analog computing, which predate conventional computing by several decades, to improve modern digital computing systems.
FIU has partnered with Ocius Technologies—a company based in Ohio that supports the commercialization of university research projects—where Madanayake serves as the principal investigator for the project. The company received a $500,000 grant from the Defense Advanced Research Projects Agency (DARPA) to continue its work on a project titled “Analog Co-Processors for Complex System Simulation and Design.”
Madanayake, in collaboration with the University of Akron Research Foundation (UARF), SI Hariharan and Dale Mugler from Ocius Technologies, and Soumyajit Mandal at Case Western Reserve University, has designed analog chips, which have several advantages, including high processing speeds with low power consumption.
The goal of this research is to incorporate the analog chips into existing digital computers, providing transformational increases in speed for specialized applications.
The technology potentially benefits electromagnetic applications such as the design of antennas for defense and wireless communications, 5G communication, Internet of Things, mobile robotics, wireless networks and imaging techniques that use radio frequency waves.
Another industry that could benefit from the technology is the entertainment industry based on augmented reality, virtual reality and computer games, where real-time physics modeling is useful.
“What makes our technology unique is that we are utilizing advanced developments in high-speed analog to dramatically speed up difficult computing problems for scientific applications,” said Madanayake.
The earliest analog computers date back to the late 1800s and served as special-purpose machines, such as a tide-predicting machine developed in 1873 by William Thomson. Analog computing was common from the 1930s through the 1950s, but with the rise of digital computing speeds, analog computing’s popularity declined.
“The exponential scaling of digital computers following Moore’s Law led the computer science community to overlook analog computers,” said Madanayake. “Due to power limitations, Moore’s law is somewhat slowing down.”
Moore’s law refers to the observation of Gordon Moore, co-founder and chairman emeritus of Intel Corporation, that the number of transistors on a chip doubles about every two years. In other words, digital computers would rapidly increase in memory capacity and computational throughput, or rate of production, while decreasing in cost.
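The doubling described above compounds quickly. As a minimal illustration (the function name and starting figures are hypothetical, not from the project), the projection can be written as:

```python
def projected_transistors(initial_count, years, doubling_period=2):
    """Project a chip's transistor count assuming it doubles
    every `doubling_period` years, per Moore's law."""
    return initial_count * 2 ** (years / doubling_period)

# Starting from 1 million transistors, ten years of doubling
# every two years gives 2**5 = 32 million.
print(projected_transistors(1_000_000, 10))  # 32000000.0
```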
Programming an analog computer consists of configuring electronic devices into a series of circuits that mimic an algorithm, performing mathematical operations at radio-frequency speeds. The user sets up the conditions for the particular differential equation or physics problem they want solved, allowing variables in the analog computer to evolve over space and time.
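By way of contrast, a digital computer must approximate that continuous evolution in discrete steps. The sketch below (a hypothetical example, not the project’s code) steps through a simple damped-oscillator equation with Euler integration—the kind of differential equation an analog circuit would instead evolve continuously in hardware:

```python
def simulate_damped_oscillator(steps=10_000, dt=0.001,
                               damping=0.5, stiffness=4.0):
    """Digitally approximate x'' + damping*x' + stiffness*x = 0,
    a simple physics equation of the sort an analog circuit
    solves continuously."""
    x, v = 1.0, 0.0  # initial displacement and velocity
    for _ in range(steps):
        a = -damping * v - stiffness * x  # acceleration from the ODE
        v += a * dt                       # Euler update of velocity
        x += v * dt                       # Euler update of position
    return x

# After 10 simulated seconds the damping has shrunk the
# oscillation toward zero.
final_x = simulate_damped_oscillator()
```

An analog co-processor sidesteps the step-by-step loop entirely: the circuit’s voltages play the role of `x` and `v` and settle toward the answer at circuit speed.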
Analog computers vs. digital computers
An advantage of analog computers is their ability to represent data across a continuous range of values, making them effective in scenarios where data must be reported in real time at very fast update rates. They also don’t require the complex programming a digital computer needs to capture real-time data. The advantages of digital computers are their versatility and their ability to be reprogrammed.
The main difference between the two is the type of data processed. Digital computers operate on data, letters and symbols using binary code, meaning only the digits 0 and 1. They produce numbers as output, whereas analog computers output voltage signals that can take a full range of values.
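A minimal sketch of that difference (the function and voltage values are hypothetical, for illustration only): a digital converter must round a continuous voltage to one of a fixed number of binary codes, while an analog signal keeps its full range of values:

```python
def quantize(voltage, v_max=5.0, bits=8):
    """Map a continuous voltage in [0, v_max] to the nearest of
    2**bits binary levels, as an analog-to-digital converter would."""
    levels = 2 ** bits - 1                  # 255 steps for 8 bits
    code = round(voltage / v_max * levels)  # integer code, 0..255
    return code, format(code, f'0{bits}b')  # code and its binary string

analog_value = 3.3                     # analog: any value in the range
code, binary = quantize(analog_value)  # digital: one of only 256 codes
print(code, binary)                    # 168 10101000
```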
“This technology has the potential to change the way that complex simulations are currently calculated,” said Mugler, chief technology officer at Ocius Technologies and emeritus dean of the University of Akron’s Honors College.
The project offers several valuable opportunities for FIU students through multiple collaborations with doctoral students from Case Western Reserve University.
“We’ve had several doctoral students fully immersed in the project since fall 2018,” said Madanayake. “Nilan Udayanga, who defended his Ph.D. recently, was the student lead. Upon graduation, he will be starting a post-doctoral fellowship at the University of Southern California (USC).”
DARPA has invested more than $1.7 million in the project, breaking it down into several phases. The objective of Phase I and Phase II was to successfully research analog co-processors with a focus on simulation and design.
Madanayake completed Phase I at the University of Akron in Ohio before moving his research lab to FIU in 2018. The latest $500,000 in funding is for Phase II. In a planned Phase III, FIU and Ocius Technologies are looking for commercial partners that need increased speed in their product development.
The collaborating team members on the grant include Hariharan, professor emeritus at The University of Akron and scientist at Ocius Technologies; Mugler of Ocius Technologies; co-principal investigator Soumyajit Mandal, professor at Case Western Reserve University; Udayanga, FIU alumnus; FIU doctoral student Hasantha Malavipathirana; and Jifu Liang from Case Western Reserve University. The team is assisted by Leo Belostotski from the University of Calgary and Justin Ball, a consultant at Ocius Technologies.