Robotics and Autonomous Systems (REU)
The Department of Computer Science at the University of Southern California offers a 10-week summer research program for undergraduates in Robotics and Autonomous Systems. USC has a large and well-established robotics research program that ranges from theoretical to experimental and systems-oriented work. USC is a leader in several societally relevant areas: robotics for healthcare and at-risk populations (children, the elderly, veterans, etc.); networked robotics for scientific discovery, including environmental monitoring, target tracking, and formation control using underwater, ground, and aerial robots; and control, machine learning, and perceptual algorithms for grasping, manipulation, and locomotion in humanoid robots. For a comprehensive resource on USC robotics, see http://rasc.usc.edu.
Undergraduates in the program will gain research experience spanning a spectrum of cutting-edge topics in robotics. They will also gain exposure to robotics research beyond the scope of the REU site, through seminars by other USC faculty and external site visits, to aid in planning their next career steps. External visits may include trips to the USC Information Sciences Institute (ISI) in Marina del Rey, one of the world’s leading research centers in computer science and information technology; the USC Institute for Creative Technologies (ICT) in Playa Vista, whose virtual reality and computer simulation technologies have produced engaging new immersive environments for learning, training, and operations; and the NASA Jet Propulsion Laboratory (JPL) in Pasadena, which has led the world in exploring the solar system’s planets with robotic spacecraft.
Robotics is an interdisciplinary field, drawing on expertise in computer science, mechanical engineering, and electrical engineering, as well as fields outside engineering; this gives REU students an opportunity to learn about different fields and the broad nature of research. We therefore welcome applications from students in computer science and all fields of engineering, as well as other disciplines such as neuroscience, psychology, and kinesiology. In addition to participating in seminars and social events, students will prepare a final written report and present their projects to the rest of the institute at the end of the summer.
This Research Experiences for Undergraduates (REU) site is supported by a grant from the National Science Foundation (CNS-2051117).
For general questions or additional information, please contact us using the form below.
For students interested in the Robotics and Autonomous Systems (REU) for Summer 2022, please join our interest list via the button below.
The REU application opens on January 25, 2022.
May 30, 2022 – August 5, 2022
When you apply, we will ask you to rank your top three interests from the research projects listed below. We encourage applicants to explore each mentor’s website to learn more about the individual research activities of each lab.
This project focuses on coordinating teams of robots to autonomously create desired shapes and patterns with minimal user input and minimal communication. Inspired by human abilities to self-localize and self-organize, the research focuses on underlying algorithms for self-localization using information collected by a robot’s onboard sensors. We have run several human studies using an online multi-player interface developed for our NSF-funded project. Using the interface, participants interact to form shapes in a playing field, communicating only through the implicit means the interface provides. The research combines designing and testing algorithms for shape formation, coordination, and control with implementation on a testbed of 20 robots designed specifically for this task.
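To give a flavor of the kind of algorithm involved, here is a minimal, purely illustrative sketch of decentralized shape formation (the function names, greedy target assignment, and step size are hypothetical assumptions, not the project’s actual method): each robot claims the nearest unclaimed target point in the desired shape and moves a small step toward it.

```python
import math

def formation_step(positions, targets, step=0.1):
    """One synchronous update: each robot greedily claims the nearest
    unclaimed target and moves a small step toward it.

    Purely illustrative; a real system would need collision avoidance
    and a decentralized way to resolve conflicting claims."""
    claimed = set()
    new_positions = []
    for (x, y) in positions:
        # Pick the nearest target not yet claimed this round.
        best = min(
            (t for t in targets if t not in claimed),
            key=lambda t: math.hypot(t[0] - x, t[1] - y),
        )
        claimed.add(best)
        dx, dy = best[0] - x, best[1] - y
        d = math.hypot(dx, dy)
        if d > step:
            x, y = x + step * dx / d, y + step * dy / d
        else:
            x, y = best  # close enough: snap onto the target
        new_positions.append((x, y))
    return new_positions

# Two robots converging onto the two endpoints of a line segment.
robots = [(0.0, 0.0), (1.0, 1.0)]
goals = [(0.0, 1.0), (1.0, 0.0)]
for _ in range(50):
    robots = formation_step(robots, goals)
```

After enough steps, each robot sits on one target point; the interesting research questions begin where this toy model stops (noisy self-localization, limited sensing, and implicit rather than global communication).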
Haptics for Virtual Reality
This project focuses on the design, building, and control of haptic devices for virtual reality. Current VR systems lack any touch feedback, providing only visual and auditory information to the user. However, touch is a critical component for our interactions with the physical world and with other people. This research will investigate how we use our sense of touch to communicate with the physical world and use this knowledge to design haptic devices and rendering systems that allow users to interact with and communicate through the virtual world. To accomplish this, the project will integrate electronics, mechanical design, programming, and human perception to build and program a device to display artificial touch sensations to a user with the goal of creating a natural and realistic interaction.
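As a concrete (and heavily simplified) illustration of haptic rendering, the sketch below implements the classic penalty-based "virtual wall": when the haptic probe penetrates a virtual surface, the device commands a spring-like restoring force proportional to penetration depth. The function name, gains, and units are hypothetical, not a description of the project’s actual devices.

```python
def render_contact_force(probe_pos, wall_pos, stiffness=300.0):
    """Penalty-based haptic rendering of a 1-D virtual wall.

    If the probe penetrates the wall (probe_pos < wall_pos), command a
    restoring force proportional to penetration depth, like a virtual
    spring (Hooke's law). Otherwise the probe is in free space and no
    force is rendered. Gains here are illustrative, not tuned for
    real hardware.
    """
    penetration = wall_pos - probe_pos      # meters past the surface
    if penetration <= 0.0:
        return 0.0                          # free space: zero force
    return stiffness * penetration          # newtons, pushing the probe out

# 5 mm of penetration at 300 N/m -> 1.5 N restoring force
force = render_contact_force(probe_pos=-0.005, wall_pos=0.0)
```

In a real device this update would run inside a kilohertz-rate control loop, and choosing the stiffness involves exactly the trade-off between stability and realism that the project studies.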
Design and Manufacturing of Biologically Inspired Robots
Taking inspiration from nature offers new possibilities for realizing novel robots. Biologically inspired robotics has emerged as an important specialization within the field of robotics. Explorations in this area have included designing and building walking, crawling, and flying robots that mimic the kinematics and dynamics of their biological counterparts, understanding and replicating control mechanisms found in biological creatures, and mimicking biological sensing and actuation mechanisms. This project focuses on designing and fabricating biologically inspired robots. The main emphasis is on identifying the general principles behind taking inspiration from a biological source and converting that inspiration into implementable engineering concepts that can be incorporated into a robot. Realizing a biologically inspired robot often requires advanced manufacturing processes to reproduce the complexity observed in biological creatures; this project therefore leverages recent advances in manufacturing.
Robotic Wireless Sensing and Communication Networks
Research projects at the Autonomous Networks Research Group will focus on the design and evaluation of networks of robotic nodes for sensing and communication applications. The research spans the design and analysis of algorithms, mathematical modeling, software implementation, and evaluation via simulations and testbeds. The mathematical modeling and algorithm design draw on a broad range of tools, including stochastic optimization and control, game theory, machine learning (including reinforcement learning and classification), and estimation theory.
Socially Assistive Robotics
This project focuses on socially assistive robotics: developing systems capable of aiding people through social interactions that combine monitoring, coaching, motivation, and companionship. The research focuses on human-robot interaction algorithms (involving control and learning in complex, dynamic, and uncertain environments by integrating on-line perception, representation, and interaction with people) and software for providing personalized assistance in convalescence, rehabilitation, training, and education. The work combines algorithms and software, system integration, and the design, execution, and analysis of human-subjects evaluation studies. To address the inherently multidisciplinary challenges of this research, the work draws on theories, models, and collaborations from neuroscience, cognitive science, social science, health sciences, and education.
Obstacle-aided Robot Locomotion and Navigation
Physical environments can provide a variety of interaction opportunities for robots to exploit toward their locomotion goals. However, it is unclear how to extract information about these opportunities (much less exploit them) from physical properties of the environment such as shape, size, and distribution. This project integrates engineering, physics, and biomechanics to discover the general principles governing the interactions between bio-inspired robots and their locomotion environments, and uses these principles to create novel control, sensing, and navigation strategies for robots to move effectively through non-flat, non-rigid, complex terrains. For example, with a simple model of robot-obstacle interactions, a bio-inspired multi-legged robot can intelligently exploit obstacle disturbances to generate desired locomotion dynamics. With a better understanding of how sand responds to robot leg interactions, we are developing robots with direct-drive legs that can sensitively “feel” the stability and erodibility of desert soil, helping geoscientists collect invaluable measurements on desertification with every step.
Enabling Physical Assistant Robot Autonomy
This project focuses on providing robotic physical assistance to people with disabilities, with an emphasis on the core computational challenges that arise in this domain: what constitutes a good model of human behavior, how such models can be learned from noisy samples, and how actions for robotic assistants can be generated robustly. The research will focus on machine learning algorithms for human behavior modeling, approximation algorithms for robot planning under uncertainty, and the design of user studies to test the algorithms in deployed robotic systems with actual patients.
Learning Perception and Manipulation for Robots
This project focuses on developing learning-based systems for robot perception and manipulation. Our emphasis is on the underlying algorithms, with a focus on building experimental systems. The research involves algorithms, software development, system integration, experimentation, and data analysis. In perception, we are particularly interested in segmenting, detecting, and identifying objects. For manipulation, we look at how planning techniques, in combination with machine learning, can improve the robot’s manipulation capabilities.
Predicting Probability of Error for Time Series Forecasting
Our group’s research focuses mainly on integrating formal methods (such as model-based development, verification, testing, and formal languages) into AI-enabled cyber-physical systems to provide guarantees of safety and efficiency even under uncertainty. Other research areas include AI-enabled trust, multi-agent coordination, and verification of neural networks. Please visit our lab webpage at https://cps-vida.github.io/ to learn more about our research and publications. We are currently looking for undergraduate students to work on projects with the Toyota Human Support Robot (HSR), a compact mobile cobot (collaborative robot) platform primarily used for household assistance, which can move around and fetch objects with a multi-DOF arm while avoiding obstacles. It can be operated through voice commands, a game controller, or a tablet PC. The HSR is very versatile, as it is equipped with a wide array of sensors and components and a robotic arm. Several projects welcome participation, including (but not limited to): (i) reinforcement learning, (ii) explainable deep reinforcement learning, (iii) imitation learning / learning from demonstrations, and (iv) natural-language translation from voice commands to temporal-logic task specifications.
Control and Learning for Dynamic Legged Robots
This project will focus on developing control and learning algorithms for dynamic legged robots. The project aims to achieve aggressive mobility skills such as fast running, high jumping, and other parkour maneuvers on extremely rough terrain that have never before been realized on legged robots. Our approach incorporates trajectory optimization, feedback control, and deep reinforcement learning to achieve robust and effective learning on robot hardware. This will enable legged robots to learn complex mobility skills more effectively in real-world scenarios, expanding their role in many practical applications, including firefighting, disaster rescue, inspection jobs on construction sites and offshore drilling rigs, and assisting humans in dangerous situations.
Safety Assurances for Learning and Vision-Driven Robotic Systems
Machine learning-driven vision and perception components form a core part of the navigation and autonomy stacks of modern robotic systems. On the one hand, they enable robots to make intelligent decisions in cluttered and a priori unknown environments based on what they see. On the other hand, the lack of reliable tools to analyze the failures of learning-based vision models makes it challenging to integrate them into safety-critical robotic systems, such as autonomous cars and aerial vehicles. In this project, we will explore designing a robust-control-based safety monitor for visual navigation and mobility in unknown environments. Our hypothesis is that rather than directly reasoning about the accuracy of individual vision components and their effect on robot safety, we can design a safety monitor for the overall system. This monitor detects safety-critical failures in the overall navigation stack (e.g., due to a vision component itself or its interaction with downstream components) and provides a safe corrective action if necessary. This is more tractable because the safety analysis of the overall system can be performed in the state space of the system, which is generally much lower-dimensional than the raw sensory observations. Preliminary results on simulated and real robots demonstrate that our framework can ensure robot safety in various environments despite vision component errors (videos of some of our preliminary experiments can be found at https://smlbansal.github.io/website-safe-navigation/). In this project, we will extend the proposed framework to more complex and high-dimensional robotic systems, such as drones and legged robots. Beyond ensuring robot safety, we will also explore using the proposed framework to mine critical failures of the system at scale and using this failure dataset to improve the robot’s perception over time.
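To make the monitoring idea concrete, here is a minimal, purely illustrative sketch of a state-space safety monitor (the function names, kinematic stopping-distance check, and all parameter values are hypothetical assumptions, not the project’s actual framework): a nominal command is overridden with full braking whenever the robot could no longer stop before the nearest detected obstacle.

```python
import math

def safety_monitor(state, nominal_cmd, obstacles,
                   max_decel=4.0, margin=0.2):
    """Minimal state-space safety monitor (illustrative only).

    Rather than certifying the vision stack itself, check the overall
    system state: if the robot cannot stop before the nearest detected
    obstacle under maximum braking, override the nominal command with
    a full-brake action. All parameters are hypothetical.
    """
    x, y, v = state                          # planar position, forward speed
    stop_dist = v * v / (2.0 * max_decel)    # kinematic stopping distance
    nearest = min(math.hypot(ox - x, oy - y) for ox, oy in obstacles)
    if nearest <= stop_dist + margin:
        return {"accel": -max_decel, "reason": "safety override"}
    return nominal_cmd

# Moving at 3 m/s toward an obstacle 1 m away: stopping distance
# (~1.1 m) exceeds the gap, so the monitor overrides the nominal command.
cmd = safety_monitor(
    state=(0.0, 0.0, 3.0),
    nominal_cmd={"accel": 1.0, "reason": "nominal"},
    obstacles=[(1.0, 0.0)],
)
```

The point of the sketch is the separation of concerns: the monitor reasons in the low-dimensional state space, so it can catch failures regardless of whether the obstacle estimate came from a correct or an erroneous vision component.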
Grounded Natural Language Processing for Human-Robot Collaboration
Natural language can be used as a communication medium between people and robots. However, most existing natural language processing technology is based solely on written text. A language system that has read and memorized thousands of sentences using the word “hot” will still be unable to warn a robot about the physical danger of touching a live stove burner when a person says “Watch out, that’s hot!” This project will focus on developing grounded natural language processing models for human-robot collaboration that consider aspects of embodiment such as gestures and gaze, physical object properties, and the interplay between language and vision. Approaches will utilize tools and techniques from a broad range of disciplines, including natural language processing, computer vision, robotics, and linguistics.
Published on October 31st, 2017
Last updated on August 21st, 2022