Student News Brief

Senior project spotlight: Coleman Hooper

Improving the energy efficiency of speech recognition systems on mobile and edge devices


Coleman Hooper, S.B. '22, built a hardware accelerator to make speech recognition software more energy efficient for his senior capstone project. (Eliza Grinnell/SEAS)

Engineering Design Projects (ES 100), the capstone course at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), challenges seniors to engineer a creative solution to a real-world problem.

Hardware-Software Co-Design for Energy-Efficient Deployment of Automatic Speech Recognition Models on Edge Devices

Coleman Hooper, S.B. ’22, electrical engineering

What did you do for your project?

I designed a hardware accelerator that can be integrated into smart home devices and mobile phones, reducing the amount of power required to run speech recognition applications. The accelerator is designed to support highly accurate models with minimal power consumption and without excessively increasing the time needed for speech recognition or taking up too much space in the device itself.

Where’d your project idea come from?

The starting point for my design was a previous project I’d worked on, led mainly by electrical engineering Ph.D. candidate Thierry Tambe. I’d been looking at different speech recognition models along with my advisors, and we discussed how recent advances in natural language processing have brought corresponding advances in speech recognition, but these speech recognition models hadn’t yet been supported on low-power platforms.

Does this project address a challenge in the industry?

There’s definitely a lot of industry interest in running these types of models for voice-controlled appliances and smart homes. Voice control can be very useful for accessibility-related reasons, and one of the benefits of running it on low-power devices is that you don’t need WiFi connectivity to interface with it. Currently, many devices use cloud-based speech recognition systems, which poses privacy risks.

How did your project come together over the course of the year?

The first one to two months involved mapping out the algorithm and planning, making sure I understood in detail the hardware platform I was going to adapt and what support was actually required. The next two months were very design-heavy, focused on actually building the accelerator. Some time went into configuring synthesis tools, and then from mid-February there was a month and a half split between two things: measurements and analysis to assess the benefits of the accelerator, and debugging the system-level implementation.

What part of the project proved the most challenging?

It was definitely debugging the system-level implementation at the end. At a certain point I’d ruled out the simple issues it might be, and figuring out how to approach the problem from different angles was very challenging. I’d sat down with both of my advisors, Thierry Tambe and Dr. Gu-Yeon Wei, looked over the design, gone through every component, and checked whether the configuration looked right, so it was hard to go through all of that and then try to think of new ways something else might be going wrong. It was definitely a very useful process because I learned a lot of debugging skills, but it was also very frustrating.

What skills did you gain through this project?

I’d previously worked a lot on the machine learning side of interacting with hardware, and I wanted to get much more experience producing a complete hardware design rather than just a small functional unit, managing the interactions between different units, and analyzing things at the system level. One of the reasons the project was more challenging for me was that it was in an area I was experienced in but pushed beyond the depth I had worked at before. That definitely helped me learn a lot more.


Press Contact

Matt Goisman | mgoisman@g.harvard.edu