We are pleased to announce the second annual MLCommons Rising Stars cohort of 41 junior researchers from 33 institutions globally! These promising researchers, drawn from over 170 applicants, have demonstrated excellence in Machine Learning (ML) and Systems research and stand out for their current and future contributions and potential. This year’s program has been expanded to include data systems research as an area of interest.
The MLCommons Rising Stars program provides a platform for talented young researchers working at the intersection of ML and systems to build connections with a vibrant research community, engage with industry and academic experts, and develop their skills. The program continues to promote diversity in the research community by seeking researchers from historically underrepresented backgrounds. We are pleased to welcome nine international Rising Stars to this year’s cohort.
As part of our commitment to fostering the growth of our Rising Stars, we are organizing a Rising Stars workshop at the NVIDIA Headquarters in Santa Clara, CA, in July, where the cohort will showcase their work, explore research opportunities, gain new skill sets via career building sessions, and have the opportunity to network with researchers across academia and industry.
“ML is a fast-growing field with rapid adoption across all industries, and we believe that the biggest breakthroughs are yet to come. By nurturing and supporting the next generation of researchers, both domestically and globally, we aim to foster an inclusive environment where these individuals can make groundbreaking contributions that will shape the future of ML and systems research. The Rising Stars program is our investment in the future, and we are excited to see the innovative ideas and solutions that these talented researchers will bring to the table,” said Vijay Janapa Reddi, MLCommons VP and Research Chair and steering committee member of the Rising Stars program.
We extend our warmest congratulations to this year’s Rising Stars and express our gratitude to everyone who applied.
We would also like to thank Rising Stars organizers Udit Gupta (Cornell Tech), Abdulrahman Mahmoud (Harvard), Lillian Pentecost (Amherst College), Akanksha Atrey (Nokia Bell Labs), and the rest of the organizing and program committee for all their efforts in putting together the program and selecting an impressive cohort of recipients. Two organizing committee members, Sercan Aygun (University of Louisiana at Lafayette) and Husnain Mubarik (AMD), will also steer the outreach and engagement activities for the cohort beyond the workshop.
Finally, we also want to extend our appreciation to Kelly Berschauer (MLCommons), Ritika Borkar (NVIDIA), and Azalia Mirhoseini (Stanford) for their support in putting together the program and workshop.
MLCommons Rising Stars 2024
Baolin Li
Northeastern University
Baolin recently received his Ph.D. in Computer Engineering at Northeastern University, advised by Professor Devesh Tiwari. His research focuses on optimizing High Performance Computing (HPC) and Cloud Computing systems for Machine Learning (ML) applications, tackling various challenges in terms of cost-effectiveness, resource sharing, and environmental sustainability. He is joining Netflix to continue his research in the MLSys field.
Cansu Demirkiran
Boston University
Cansu is a final-year doctoral candidate, advised by Prof. Ajay Joshi, at Boston University in Boston, MA. She holds a B.Sc. degree in Electrical and Electronics Engineering from Middle East Technical University in Ankara, Turkey. Her research focuses on exploring photonics technology as a compute platform for next-generation AI hardware and addressing the associated challenges. She is passionate about emerging technologies and building scalable and sustainable AI systems. Outside of her academic pursuits, Cansu enjoys art, writing, and playing tennis.
Daniel Mendoza
Stanford University
Daniel is a third-year PhD candidate in Electrical Engineering at Stanford University. His research focuses on integrating formal methods with inference serving systems to minimize inference inaccuracy. He is currently working on combining LLMs and formal model checking for hardware verification.
Foteini Strati
ETH Zürich
Foteini is a 3rd-year PhD student in the Systems Group at ETH Zürich, supervised by Prof. Ana Klimovic. She is interested in Systems for Machine Learning. Currently, she is working on increasing the GPU utilization and fault tolerance of ML workloads. During her PhD, she has completed internships at Microsoft Research and NVIDIA. She obtained an MSc degree in Computer Science from ETH Zürich and a Diploma in Electrical and Computer Engineering from the National Technical University of Athens.
Ismet Dagli
Colorado School of Mines
Ismet is a PhD candidate at the Colorado School of Mines, advised by Prof. Mehmet Belviranli. His research focuses on creating ecosystems that increase the performance and utilization of heterogeneous systems. Ismet has worked extensively with edge devices such as the NVIDIA Jetson family on analytical performance and resource modeling.
João Dantas
Aeronautics Institute of Technology
João is a Military Officer with the Brazilian Air Force (FAB) and works as a Research Engineer in the Decision Support Systems Subdivision at the Institute for Advanced Studies. With a particular focus on aerospace and defense applications, Captain Dantas has significantly contributed to developing decision-support systems, ensuring the FAB’s operational success in diverse initiatives. In pursuit of cutting-edge national technologies, his long-term goal is to reinforce Brazil’s technological sovereignty and nurture its defense ecosystem. His work has been deployed in multiple defense applications, including air combat simulation, missile modeling, and the creation and deployment of autonomous agents in simulated operational scenarios. Currently, Dantas is diligently working towards his Ph.D. degree, continuing in ITA’s Electronic and Computer Engineering program.
Mikel K. Ngueajio
Howard University
Mikel is a third-year Ph.D. student in Computer Science at Howard University in Washington, DC, with research interests in Ethical AI, Human-Computer Interaction, and Machine and Deep Learning for speech and language processing. She is passionate about applying her knowledge to help drive social innovations through interdisciplinary research. Her collaborative spirit is evidenced by diverse research experiences with institutions such as Amazon, Google, the Educational Testing Service, and the US National Geospatial-Intelligence Agency. Recognized as a 2023-2024 TMCF HBCU Apple Scholar and an AWS Research Fellow, she brings both academic excellence and industry experience to her work. She has previously interned in Data Science at Amazon and in Machine Learning Engineering at Apple.
Prashanthi S K
Indian Institute of Science
Prashanthi is a Prime Minister’s Research Fellow and PhD candidate at the Indian Institute of Science. Her research focuses on the systems aspects of Edge AI, particularly power and performance optimization, as well as modeling and scheduling of DNN workloads on edge devices. Her work has resulted in several publications at top international ACM and IEEE conferences. She has previously worked on developing GPU Device Drivers at Intel. Prashanthi holds a Master’s degree from IIIT Bangalore and a Bachelor’s Degree from M S Ramaiah Institute of Technology, both with gold medals. She has received several awards, including the Prime Minister’s Research Fellowship, Department Recognition Award from Intel, and travel grants from Microsoft Research, ACM-W, TCPP, EuroSys, and SIGMETRICS.
Saurabh Agarwal
University of Wisconsin-Madison
Saurabh is a final-year PhD student at the University of Wisconsin-Madison. He works in the area of building systems for machine learning, developing new systems for emerging machine learning workloads that make training and inference faster, more scalable, and more efficient.
Sofia Bourhim
ENSIAS
Sofia is a Research Scientist working on Graph Deep Learning and its interdisciplinary applications to recommender systems and drug discovery. She obtained her M.Sc. in Computer Science and Engineering/Business Intelligence and has worked in diverse industries. She is also interested in using AI to address societal challenges, particularly in the MENA region. She previously interned at the Microsoft research lab (MARI), and she is a recipient of the Microsoft PhD Fellowship. Sofia’s contributions extend to organizing numerous conferences and workshops, including NAML@NeurIPS’23, IndabaX Morocco, and the LOG conference, among others.
Xiaofan Yu
University of California San Diego
Xiaofan received her B.S. degree from Peking University, China, in 2018 and her M.S. degree from the University of California San Diego in 2020. She is currently pursuing a Ph.D. in the Department of CSE at the University of California San Diego under the supervision of Prof. Tajana Šimunić Rosing, and she has a history of successful collaborations with multiple faculty members and industry researchers. Her research interests lie in optimizing ML on embedded systems with limited resources, with the goal of large-scale deployment in the real world. She is expected to graduate in Spring 2025.
Yihua Zhang
Michigan State University
Yihua is a Ph.D. student in the Department of Computer Science and Engineering at Michigan State University. His research focuses on the optimization theory and optimization foundations of various machine learning applications. In general, his research spans trustworthy machine learning, including robustness, fairness, privacy, and copyright protection, and efficient machine learning, including training efficiency, model efficiency, and data efficiency. He has published papers at major ML/AI conferences such as ICML, ICLR, NeurIPS, CVPR, and ICCV. He also received the Best Paper Runner-Up Award at the Conference on Uncertainty in Artificial Intelligence (UAI) in 2022.
Zahidur Talukder
The University of Texas at Arlington
Zahidur is a Ph.D. candidate in Computer Science and Engineering at The University of Texas at Arlington, specializing in privacy-preserving machine learning with a focus on data privacy, security, fairness, and sustainability in AI. Under the mentorship of Dr. Mohammad Atiqul Islam in the Rigorous Design Lab, Zahidur has developed secure and efficient methods for handling data and clients in federated learning, including innovative algorithms that ensure fairness for heterogeneous devices and self-regulating clients. His work also addresses the environmental impact of AI, proposing solutions to reduce the water footprint of large language model training.
Zhiyuan Yu
Washington University in St. Louis
Zhiyuan is a final-year Ph.D. candidate at Washington University in St. Louis, working on the security and privacy of safety-critical machine learning systems. His research leverages the synergy between cyber and physical components to develop resilient protective measures, which have been applied to real-world challenges in broad areas of medical imaging, IoT devices, autonomous systems, and generative AI (GenAI) applications. He will be on the job market this year.
Bhawana Chhaglani
University of Massachusetts Amherst
Bhawana is a fourth-year Computer Science PhD student in the College of Information and Computer Sciences at the University of Massachusetts Amherst. She works in the LASS lab with Prof. Prashant Shenoy and Prof. Jeremy Gummeson. Her research interests lie broadly in the area of Mobile and Wearable Sensing and Systems, IoT, and Ubiquitous Computing. Currently, she is working on advancing audio sensing to promote healthier indoor environments.
Chaojian Li
Georgia Institute of Technology
Chaojian is a 5th-year Ph.D. student at Georgia Tech, advised by Prof. Yingyan (Celine) Lin. His research interests are in deep learning and computer architecture, with a focus on 3D reconstruction and rendering in an algorithm-hardware co-design approach and deep learning on edge devices.
Deval Shah
AMD
Deval currently works at AMD on performance and optimization for AI models and applications. She is a recent Ph.D. graduate of the Department of Electrical and Computer Engineering at the University of British Columbia, where she was advised by Prof. Tor Aamodt. Her Ph.D. research explored the computational aspects of machine learning and robotics, particularly hardware-algorithm co-design for their energy-efficient acceleration.
Hanxian Huang
University of California San Diego
Hanxian is a fifth-year PhD candidate in CSE at the University of California San Diego, advised by Prof. Jishen Zhao. Before joining UCSD, she received a B.S. in EECS from Peking University. She has interned at Microsoft Research Asia, the Gray Systems Lab at Microsoft Research, and the Y-tech Lab at Kwai Inc., and visited UCLA as a research intern. She will be interning with the PyTorch Optimization team at Meta. Her research interests span the intersection of machine learning (ML) with programming languages, compilers, and computer systems. She actively explores advanced and innovative ML techniques to enhance system designs and programming tasks, as well as the co-design of efficient ML algorithms and systems. Hanxian will be entering the job market in the fall of 2024.
Jianming Tong
Georgia Institute of Technology
Jianming is a PhD candidate at Georgia Tech under the guidance of Dr. Tushar Krishna. His primary research area is computer architecture, with a major interest in full-stack optimizations across software (MLSys’24), systems (MLSys’23, IEEE Micro’23), and hardware (ISCA’24) for privacy-preserving and performance-oriented AI workloads, i.e., making both AI and privacy-preserving AI faster and more efficient. He has extensive prototyping experience with both ASICs (TOC, TVLSI, GLSVLSI) and FPGAs (FPT, SC), and he has interned at Alibaba DAMO Academy, Pacific Northwest National Laboratory, and Rivos. His research has been recognized with a Qualcomm Innovation Fellowship.
Junyuan Hong
The University of Texas at Austin
Junyuan is a postdoctoral fellow hosted by Dr. Zhangyang Wang in the Institute for Foundations of Machine Learning (IFML) at the University of Texas, Austin. His research interests lie at the intersection of responsible artificial intelligence (AI) and real-world applications, particularly in high-stakes domains such as healthcare. He is deeply motivated by the challenge of imbuing responsible AI systems with privacy, robustness, security, and ethics, to ensure their functionality is reliable and their operations respect individual rights and societal norms.
Omobayode Fagbohungbe
IBM Research
Omobayode is a research scientist at the Thomas J. Watson Research Center, Yorktown Heights, NY. His research focuses on designing and implementing deep learning training algorithms for analog in-memory computing. He is also involved in INT8 and FP8 post-training quantization (PTQ) of large language models (LLMs). He received his PhD in Electrical Engineering from Prairie View A&M University, Prairie View, TX, USA, where his dissertation was on noise-resistant neural networks.
Qinghao H
Nanyang Technological University
Qinghao is currently a Research Assistant Professor at Nanyang Technological University, Singapore. His research interests include systems for large models, datacenter management and scheduling, and machine learning for systems.
Sehoon Kim
University of California Berkeley
Sehoon is a 4th year Ph.D. student at UC Berkeley’s Berkeley AI Research (BAIR) group. Sehoon’s research is focused on efficient AI solutions and full-stack ML optimization with a focus on Transformers and Large Language Models. In particular, his research covers efficient model and inference system design, model optimization (e.g. quantization and pruning), and hardware-software co-design. He was a finalist for the NVIDIA Graduate Fellowship in 2024. Before joining UC Berkeley, he received his BS degree in ECE from Seoul National University, where he ranked first in the entire class of 2020.
Tzu-Sheng Kuo
Carnegie Mellon University
Tzu-Sheng is a PhD student in the Human-Computer Interaction Institute at Carnegie Mellon University, co-advised by Prof. Ken Holstein and Prof. Haiyi Zhu. Tzu-Sheng creates interactive systems and methods that support community-driven approaches to AI design and evaluation. Working closely with both online and offline communities, Tzu-Sheng’s research explores how communities can actively shape AI design toward their goals and values, and assess appropriateness for their contexts. Tzu-Sheng’s research has received Best Paper and Honorable Mention Awards at top Human-Computer Interaction conferences, including ACM CHI and UIST.
Yao Fu
University of Edinburgh
Yao is a third-year PhD student in Computer Science at The University of Edinburgh, under the supervision of Dr. Luo Mai. His research lies at the intersection of machine learning and systems, with a focus on performance and affordability. Recently, his work has centered on efficient serving systems for large language models, leading to two main projects: ServerlessLLM and the MoESys Leaderboard. He received his B.E. in Computer Science and Technology from Sun Yat-sen University in 2021.
Yingjie Li
University of Maryland, College Park
Yingjie is currently a PhD student in Computer Engineering at the University of Maryland, College Park, under the supervision of Prof. Cunxi Yu. Her research focuses on physics-aware infrastructure for optical computing platforms, hardware-software co-design, and efficient AI/ML algorithms. She also works in electronic design automation (EDA), focusing on machine learning for synthesis and verification. Her work received the Best Paper Award at DAC (2023), the American Physical Society DLS Poster Award (2022), and the Best Poster Presentation Award at DAC Young Fellow (2020). Yingjie won second place at the ACM/SIGDA Student Research Competition (2023) and was selected as an EECS Rising Star (2023).
Zhenglun Kong
Northeastern University
Zhenglun is currently pursuing his Ph.D. in the Department of Electrical and Computer Engineering at Northeastern University, Boston, U.S., supervised by Professor Yanzhi Wang.
He is an incoming Postdoctoral Research Fellow at Harvard. He received his B.E. degree in Optoelectronic Information Science and Engineering from Huazhong University of Science and Technology, Wuhan, China. He was a research intern at Microsoft Research, ARM, and Samsung Research. His research primarily focuses on developing efficient deep learning methodologies tailored for real-world scenarios, including efficient pre-training/fine-tuning and inference, model/data compression, and efficient DNN design for language and vision models. He has published multiple papers in the fields of AI and machine learning (NeurIPS, ICML, ECCV, AAAI, CVPR, EMNLP, IJCAI, etc.) and beyond.
Ziyi Huang
Columbia University
Ziyi received her Ph.D. in Electrical Engineering from Columbia University. Prior to that, she obtained her master’s degree from the University of Michigan and her bachelor’s degree from the University of Science and Technology of China (USTC). Her research mainly focuses on developing efficient algorithms and theoretical frameworks for better image intervention, and on addressing data-quality challenges in machine learning algorithms to provide critical guidance for real-world applications.
Biswadeep Chakraborty
Georgia Institute of Technology
Biswadeep is a fifth-year PhD candidate at the Georgia Institute of Technology, working on neuromorphic computing models for low-power edge ML applications. His research leverages the efficiency of spiking neural networks to develop algorithms suited for real-time processing on power-constrained devices. By optimizing these models for edge computing, he aims to enable advanced AI capabilities locally, minimizing latency and lowering power consumption. This work contributes to more sustainable and efficient AI implementations in smart devices.
Christina Giannoula
University of Toronto
Christina is a Postdoctoral Researcher at the University of Toronto working with Prof. Gennady Pekhimenko, Prof. Andreas Moshovos, and Prof. Nandita Vijaykumar. She is also an affiliated senior researcher in the SAFARI research group at ETH Zürich, advised by Prof. Onur Mutlu. Her current research interests lie at the intersection of computer architecture, computer systems, and high-performance computing. Specifically, her research specializes in improving the performance and efficiency of emerging applications, with a focus on machine learning and sparse workloads, in modern computing paradigms such as AI-specific GPU and processing-in-memory architectures, via software, system, and hardware co-design.
Feng Liang
The University of Texas at Austin
Feng (Jeff) is a PhD student at UT Austin. His current research interests lie in vision-language models and generative AI with a special interest in efficiency.
Hongzheng Chen
Cornell University
Hongzheng is a third-year Ph.D. student at Cornell University supervised by Prof. Zhiru Zhang. His research interests broadly lie in domain-specific languages and compilers, efficient runtime systems, and accelerator architecture. He is currently working on compiler optimizations for large-scale heterogeneous computing systems with a special focus on accelerating deep learning applications. He has published several papers on top-tier computer systems & hardware conferences including ASPLOS, PLDI, SC, FPGA, and ICCAD.
Jiawei Liu
University of Illinois Urbana-Champaign
Jiawei is a third-year Ph.D. candidate at the University of Illinois Urbana-Champaign. His research goal is to simplify the making of great software, with and for machine learning (ML) and its systems. His work on automated ML system testing led to discovering and fixing hundreds of critical new bugs in major ML frameworks and compilers, winning an ACM SIGSOFT Distinguished Paper Award and a Distinguished Artifact Award. His work on large language models (LLMs) for code builds rigorous evaluators used by hundreds of papers and instruction tuners adopted by major organizations. His work on development methodologies for emerging ML systems accelerates the deployment of dozens of new LLMs across several emerging platforms. He embraces the open-source community by fully open-sourcing and actively maintaining his research projects and packages.
Karthik Garimella
New York University
Karthik is a PhD student at New York University, and he is broadly interested in machine learning, systems, and privacy. Currently, his research focuses on privacy-enhanced computation (homomorphic encryption and multi-party computation) and machine learning security & privacy. Before NYU, he received an MS in Computer Engineering from Washington University in St. Louis and a BA in Physics from Hendrix College. Outside of research, he enjoys playing tennis, cooking, reading, and exploring NYC by bike.
Pooria Namyar
University of Southern California
Pooria is a PhD candidate at the University of Southern California, advised by Prof. Ramesh Govindan. His research is at the intersection of theory, systems, and machine learning. Pooria’s recent work focuses on designing algorithms and optimizations to enhance the performance and availability of large-scale cloud networks. Prior to USC, he received his BSc in Electrical Engineering from Sharif University of Technology.
Sachini Piyoni Ekanayake
University at Albany, State University of New York
Sachini is a PhD Candidate in the Electrical and Computer Engineering Department at the University at Albany, State University of New York, NY, USA, working with Dr. Daphney-Stavroula Zois. She received her B.Sc. degree in Electrical and Electronic Engineering from the University of Peradeniya, Sri Lanka, in 2017. She was a machine learning fellow intern at GE Research, Niskayuna, NY, USA, in 2022. She was recognized as a Rising Star in Cyber-Physical Systems in 2023. Her research interests include efficient machine learning, with a focus on developing interpretable algorithms and frameworks that facilitate instance-wise learning and inference, particularly in cost-constrained, complex environments.
Shaoyi Huang
University of Connecticut
Shaoyi is an incoming tenure-track assistant professor in the CS department at Stevens Institute of Technology, starting Fall 2024. During her PhD, she worked with Prof. Caiwen Ding and Prof. Omer Khan in the School of Computing at the University of Connecticut. Her research agenda is grounded in advancing AI systems, including algorithm-system co-design for AI acceleration, inference acceleration for emerging deep learning models (e.g., Transformers and LLMs), privacy-preserving machine learning, and machine learning for EDA. Shaoyi’s work has been published in high-impact conferences such as HPCA, ASPLOS, SC, DAC, ICCAD, ACL, ICCV, NeurIPS, and IJCAI.
Wenqi Jiang
ETH Zürich
Wenqi is a fourth-year PhD student at ETH Zürich, where he is advised by Prof. Gustavo Alonso and Prof. Torsten Hoefler. His research interests lie at the intersection of data management, computer architecture, and machine learning systems. Specifically, he is interested in designing efficient data and ML systems in an era when Moore’s Law no longer holds. This involves developing cross-stack solutions that integrate algorithms, software systems, and the underlying hardware to enhance overall system performance and efficiency. Recently, he has built post-Moore data systems for large language models, vector retrieval via approximate nearest neighbor search, recommender systems, and spatial data processing.
Yifan Gong
Northeastern University
Yifan (Evelyn) is a PhD candidate in the Department of Electrical and Computer Engineering at Northeastern University, supervised by Prof. Yanzhi Wang. She received her B.S. degree (with the highest honor) in Telecommunications Engineering from Xidian University in 2017 and her M.A.Sc. degree (with fellowship) in Computer Engineering from the University of Toronto in 2019. Her research vision is to build general artificial intelligence systems that facilitate deep learning implementation on various edge devices and bridge the gap between algorithm innovations and hardware performance optimizations through a hardware-software co-design approach. This includes energy-efficient deep learning and artificial intelligence systems and the acceleration of deep neural networks, such as large-scale models for AI-generated content. Yifan’s first-authored works have been published in top-tier conferences and journals including ICML, ICLR, ICCV, ECCV, DAC, ICCAD, and TODAES. Yifan has been awarded the College of Engineering Outstanding TA Award and the Dean’s Fellowship Award from Northeastern University. She also received the ICCAD Student Scholar Award, DAC Young Fellow, and a NeurIPS Spotlight Paper Award.
Yuka Ikarashi
Massachusetts Institute of Technology
Yuka is a PhD candidate at MIT CSAIL. She received a Master of Science in Computer Science in 2022 from MIT, and a Bachelor of Science in Information Science in 2020 from the University of Tokyo. Ikarashi is passionate about creating compiler systems and programming languages for real-world applications. She is a co-creator of the Exo programming language and has previously worked at Apple, Amazon, and CERN, applying her research to real-world challenges. Her PhD is generously supported by the Masason Foundation Fellowship and Funai Foundation Fellowship awards.
Zhengqi Gao
Massachusetts Institute of Technology
Zhengqi is a third-year PhD student at MIT EECS, advised by Prof. Duane S. Boning. Previously, he obtained his M.S. and B.E. from the School of Microelectronics, Fudan University, China. His current research interests lie mainly in design automation for photonic integrated circuits and machine learning. He has interned at Baidu, the Shanghai Qi Zhi Institute, and NVIDIA. His research has been recognized with several honors, including being named an ML and Systems Rising Star in 2024, an editor’s highlight in Photonics Research in 2023, an ICLR oral paper (top 5%), and the Biren scholarship (1 of 3 nationwide) in 2020, among others.