Robotics and Autonomous Systems (REU)

The Department of Computer Science at the University of Southern California offers a 10-week summer research program for undergraduates in Robotics and Autonomous Systems. USC has a large and well-established robotics research program that ranges from the theoretical to the experimental and systems-oriented. USC is a leader in societally relevant areas of robotics: robotics for healthcare and at-risk populations (children, the elderly, veterans, etc.); networked robotics for scientific discovery, covering, for example, environmental monitoring, target tracking, and formation control using underwater, ground, and aerial robots; and control, machine learning, and perceptual algorithms for grasping, manipulation, and locomotion of humanoid robots. For a comprehensive resource on USC robotics, see http://rasc.usc.edu.

Undergraduates in the program will gain research experience spanning the spectrum of cutting-edge research topics in robotics. They will also gain exposure to robotics research beyond the scope of the REU site, through seminars by other USC faculty and external site visits, to aid in planning their next career steps. External visits may include trips to the USC Information Sciences Institute (ISI) in Marina del Rey, one of the world's leading research centers in computer science and information technology; the USC Institute for Creative Technologies (ICT) in Playa Vista, whose virtual reality and computer simulation technologies have produced engaging new immersive environments for learning, training, and operations; and the NASA Jet Propulsion Laboratory (JPL) in Pasadena, which has led the world in exploring the solar system's known planets with robotic spacecraft.

Robotics is an interdisciplinary field, drawing on expertise in computer science, mechanical engineering, and electrical engineering, as well as fields outside engineering; this gives REU students an opportunity to learn about different fields and the broad nature of research. We therefore welcome applications from students in computer science and all fields of engineering, as well as other disciplines including neuroscience, psychology, and kinesiology. In addition to participating in seminars and social events, students will prepare a final written report and present their projects to the rest of the institute at the end of the summer.

This Research Experiences for Undergraduates (REU) site is supported by a grant from the National Science Foundation (CNS-2051117). The site focuses on recruiting a diverse set of participants, including students from underrepresented groups.

For general questions or additional information, please contact us using the form below.


For students interested in the Robotics and Autonomous Systems REU for Summer 2023, please join our interest list via the button below.
The REU application opens on February 6, 2023.

Research Projects

May 30, 2023 - August 4, 2023

When you apply, we will ask you to rank your top three interests from the research projects listed below. We encourage applicants to explore each mentor’s website to learn more about the individual research activities of each lab.

Cooperative Robotics

This project focuses on coordinating teams of robots to autonomously create desired shapes and patterns with minimal user input and minimal communication. Inspired by human abilities to self-localize and self-organize, the research focuses on underlying algorithms for self-localization using information collected by a robot's onboard sensors. We have run several human studies using an online multi-player interface we developed for our NSF-funded project. Using the interface, participants interact to form shapes in a playing field, communicating only through implicit means provided by the interface. The research involves designing and testing algorithms for shape formation, coordination, and control, and implementing them on a testbed of 20 robots specifically designed for this task.
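
To make the flavor of such algorithms concrete, here is a minimal sketch of decentralized, displacement-based formation control, in which each robot updates its position using only the relative positions of its neighbors. This is an illustrative textbook-style example, not the lab's actual algorithm; the gain, ring topology, and square target shape are assumptions.

    # A minimal sketch (not the lab's algorithm) of decentralized formation
    # control: each robot uses only relative neighbor positions, no global frame.
    import numpy as np

    def formation_step(positions, offsets, neighbors, gain=0.1):
        """One synchronous update toward the desired shape (up to translation)."""
        new_positions = positions.copy()
        for i, nbrs in neighbors.items():
            correction = np.zeros(2)
            for j in nbrs:
                # Drive each measured displacement toward the desired displacement.
                correction += (positions[j] - positions[i]) - (offsets[j] - offsets[i])
            new_positions[i] = positions[i] + gain * correction
        return new_positions

    # Four robots converging to a unit square (assumed target shape).
    rng = np.random.default_rng(0)
    positions = rng.uniform(-1, 1, size=(4, 2))
    offsets = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
    neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # ring topology
    for _ in range(200):
        positions = formation_step(positions, offsets, neighbors)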

Haptics for Virtual Reality

This project focuses on the design, building, and control of haptic devices for virtual reality. Current VR systems lack touch feedback, providing only visual cues, yet touch is a critical component of our interactions with the physical world and with other people. This research will investigate how we use our sense of touch to communicate with the physical world, and will use this knowledge to design haptic devices and rendering systems that allow users to interact with and communicate through the virtual world. To accomplish this, the project will integrate electronics, mechanical design, programming, and human perception to build and program a device that displays artificial touch sensations to a user, with the goal of creating a natural and realistic interaction.
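
For a taste of what haptic rendering involves, the sketch below implements the classic “virtual wall” primitive, in which penetration into a virtual surface is rendered as a stiff spring force. This is a standard textbook example rather than the project's device code; the stiffness value and one-dimensional setup are assumptions, and a real device would run this loop at roughly 1 kHz through its driver API.

    # Classic "virtual wall" haptic rendering primitive (illustrative only):
    # when the user's position penetrates the wall, command a spring force.
    def wall_force(position_m, wall_pos_m=0.0, stiffness_n_per_m=800.0):
        """Return a 1-D force (N) pushing the user back out of the wall."""
        penetration = wall_pos_m - position_m   # positive when inside the wall
        return stiffness_n_per_m * penetration if penetration > 0 else 0.0

    wall_force(-0.002)  # 2 mm inside the wall -> 1.6 N opposing force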

Robotic Wireless Sensing and Communication Networks

The research projects at the Autonomous Networks Research Group will focus on the design and evaluation of networks of robotic nodes for sensing and communication applications. The research spans the design and analysis of algorithms, mathematical modeling, software implementation, and evaluation via simulations and testbeds. The mathematical modeling and algorithm design approaches draw from a broad range of tools, including stochastic optimization and control, game theory, machine learning (including reinforcement learning and classification), and estimation theory.

Socially Assistive Robotics

Our new interdisciplinary NSF-supported project is developing novel interfaces to enable people with physical disabilities to program, so that they can have better access to employment in computing. We will be developing, testing, and evaluating a broad range of interfaces (voice, gesture, speech, micro-expressions, facial expressions, body movements, pedals, etc.) and developing machine learning methods to model human physical abilities and map them to customized interfaces. This is part of our lab’s work on socially assistive devices and robotics: systems capable of aiding people through interactions that combine monitoring, coaching, motivation, and companionship. We develop human-machine interaction algorithms (involving control and learning in complex, dynamic, and uncertain environments by integrating online perception, representation, and interaction with people) and software for providing personalized assistance for accessibility, as well as in convalescence, rehabilitation, training, and education. Our research involves a combination of algorithms and software, system integration, and the design, execution, and data analysis of human-subjects evaluation studies. To address the inherently multidisciplinary challenges of this research, the work draws on theories, models, and collaborations from neuroscience, cognitive science, social science, health sciences, and education.

Obstacle-aided Robot Locomotion and Navigation

Physical environments can provide a variety of interaction opportunities for robots to exploit toward their locomotion goals. However, it is unclear how to even extract information about these opportunities, much less exploit them, from physical properties (e.g., shape, size, distribution) of the environment. This project integrates engineering, physics, and biomechanics to discover the general principles governing the interactions between bio-inspired robots and their locomotion environments, and uses these principles to create novel control, sensing, and navigation strategies for robots to move effectively through non-flat, non-rigid, complex terrains. For example, with a simple model of robot-obstacle interactions, a bio-inspired multi-legged robot can intelligently exploit obstacle disturbances to generate desired locomotion dynamics. With a better understanding of how sand responds to robot leg interactions, we are developing robots with direct-drive legs that can sensitively “feel” the stability and erodibility of desert soil, helping geoscientists collect invaluable measurements on desertification with every step.

Robot Assistance in Human Environments

In order for robots to be effective assistants in human environments, they need to be able to adapt to the human user based on the user’s physical characteristics, individualized preferences, and priorities on how to execute the task. The student will work with a PhD student and the PI to enable general-purpose robotic arms to assist users in collaborative tasks, such as assembling an IKEA bookcase. The student will work on algorithms that recognize user actions, infer the user’s intent, and produce control commands that allow robots to support users in complex tasks in a variety of environments. We will additionally design and conduct user studies in the lab to evaluate the developed algorithms. An example video of a previously developed system can be found here: https://youtu.be/MxIKG6h1H4o
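
As one concrete, hypothetical illustration of intent inference, the sketch below performs a Bayesian belief update over candidate user goals given an observed action; the goal names, actions, and likelihood values are invented for illustration and are not the lab's actual model.

    # A minimal sketch of Bayesian intent inference over candidate user goals.
    # All goal names, actions, and probabilities below are hypothetical.
    def update_belief(belief, observed_action, likelihood):
        """One Bayes update: P(goal | action) proportional to P(action | goal) * P(goal)."""
        posterior = {g: p * likelihood[g].get(observed_action, 1e-6)
                     for g, p in belief.items()}
        total = sum(posterior.values())
        return {g: p / total for g, p in posterior.items()}

    belief = {"attach_shelf": 0.5, "attach_side_panel": 0.5}
    likelihood = {
        "attach_shelf":      {"pick_shelf": 0.8, "pick_screw": 0.5},
        "attach_side_panel": {"pick_panel": 0.8, "pick_screw": 0.5},
    }
    belief = update_belief(belief, "pick_shelf", likelihood)  # shelf goal now more likely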

Learning Perception and Manipulation for Robots

This project focuses on developing learning-based systems for perception and manipulation for robots. Our emphasis is on the underlying algorithms, with a focus on building experimental systems. The research involves algorithms, software development, system integration, experimentation, and data analysis. In perception, we are particularly interested in segmenting, detecting, and identifying objects. For manipulation, we look at how planning techniques, in combination with machine learning, can improve the robot's manipulation capabilities.

Reinforcement Learning from High-level Task Objectives for the Toyota HSR

The student will investigate the application of single-agent and multi-agent reinforcement learning (RL) techniques, in combination with temporal logic-based specifications, on the Toyota Human Support Robot (HSR) platform. The goal will be for the student to demonstrate autonomous operation of the HSR on simple temporal logic goals. Day-to-day work will consist of interfacing with the HSR and building the Python environments required to train the robot's control software using RL.
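
To give a sense of what such a Python environment might look like, here is a minimal sketch using the gymnasium API, with a toy sequential goal (“first reach A, then reach B”) standing in for a full temporal logic specification. Everything here, from the 1-D state space to the reward values, is an assumption for illustration; the real HSR interface and specifications would differ.

    # A toy gymnasium environment sketching RL from a sequential goal
    # ("first visit A, then visit B"), a stand-in for a temporal logic spec.
    import numpy as np
    import gymnasium as gym
    from gymnasium import spaces

    class SequentialGoalEnv(gym.Env):
        """Agent on a 1-D line that must reach cell A, then cell B."""
        def __init__(self, size=10, goal_a=2, goal_b=8):
            self.size, self.goal_a, self.goal_b = size, goal_a, goal_b
            self.action_space = spaces.Discrete(2)                    # 0: left, 1: right
            self.observation_space = spaces.MultiDiscrete([size, 2])  # (cell, stage)

        def reset(self, seed=None, options=None):
            super().reset(seed=seed)
            self.pos, self.stage = 0, 0   # stage 0: heading to A; stage 1: heading to B
            return np.array([self.pos, self.stage]), {}

        def step(self, action):
            self.pos = int(np.clip(self.pos + (1 if action == 1 else -1), 0, self.size - 1))
            reward, terminated = 0.0, False
            if self.stage == 0 and self.pos == self.goal_a:
                self.stage, reward = 1, 1.0     # subgoal A satisfied
            elif self.stage == 1 and self.pos == self.goal_b:
                reward, terminated = 1.0, True  # full specification satisfied
            return np.array([self.pos, self.stage]), reward, terminated, False, {}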

Control and Learning for Dynamic Legged Robots

This project will focus on developing control and learning algorithms for dynamic legged robots. The project aims to achieve aggressive mobility skills, such as fast running, high jumping, and other parkour skills on extremely rough terrains, that have never before been realized in legged robots. In our approach, we will incorporate trajectory optimization, feedback control, and deep reinforcement learning to achieve robust and effective learning on robot hardware. This will enable legged robots to learn complex mobility skills more effectively in real-world scenarios, expanding their role in many practical applications, including firefighting, disaster rescue, inspection jobs on construction sites and offshore drilling rigs, and assisting humans in dangerous situations.

Vision-based Autonomous Navigation in New Environments

Autonomous robot navigation is a fundamental and well-studied problem in robotics. However, developing a fully autonomous robot that can navigate in a priori unknown environments is difficult due to challenges that span dynamics modeling, onboard perception, localization and mapping, trajectory generation, and optimal control. Classical approaches, such as generating a real-time, globally consistent geometric map of the environment, are computationally expensive and confounded by texture-less, transparent, or shiny objects and strong ambient lighting. End-to-end learning can avoid map building but is sample-inefficient, and end-to-end models tend to be system-specific. In this project, we will explore modular architectures for operating autonomous systems in completely novel environments using onboard perception sensors. These architectures use machine learning for high-level planning based on perceptual information; this high-level plan is then used for low-level planning and control by leveraging classical control-theoretic approaches. This modular approach combines the best of both worlds: autonomous systems learn navigation cues without extensive geometric information, making the model relatively lightweight, and the inclusion of the physical system structure in learning reduces sample complexity relative to pure learning approaches. Our preliminary results indicate a 10x improvement in sample complexity for wheeled ground robots. We hypothesize that this gap will only widen as the system dynamics become more complex, such as for aerial or legged robots, opening up new avenues for learning navigation policies in robotics. Preliminary experiment videos can be found at: https://smlbansal.github.io/LB-WayPtNav/ and https://smlbansal.github.io/LB-WayPtNav-DH/
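
As a rough sketch of this modular split, the code below has a placeholder "learned" waypoint predictor feeding a classical proportional controller that tracks the waypoint. The stub predictor, gains, and unicycle-style state are assumptions standing in for the project's trained models and robot dynamics.

    # Sketch of the modular architecture: a learned model proposes a waypoint
    # from an image; a classical controller tracks it. The predictor below is
    # a placeholder stub, not the project's trained network.
    import numpy as np

    def predict_waypoint(image, goal):
        """Placeholder for a learned high-level planner (e.g., a CNN)."""
        return goal * 0.1  # stub: step 10% of the way toward the goal

    def track_waypoint(state, waypoint, k_lin=1.0, k_ang=2.0):
        """Classical low-level control: proportional steering toward the waypoint."""
        x, y, theta = state
        dx, dy = waypoint[0] - x, waypoint[1] - y
        heading_error = np.arctan2(dy, dx) - theta
        v = k_lin * np.hypot(dx, dy)  # forward velocity command
        w = k_ang * np.arctan2(np.sin(heading_error), np.cos(heading_error))  # turn rate
        return v, w

    state = np.array([0.0, 0.0, 0.0])         # x, y, heading of a wheeled robot
    goal = np.array([5.0, 3.0])
    image = np.zeros((224, 224, 3))           # stand-in onboard camera frame
    waypoint = predict_waypoint(image, goal)  # learned high-level plan
    v, w = track_waypoint(state, waypoint)    # control-theoretic tracking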

Language-guided Robots

Natural language can be used as a communication medium between people and robots. However, most existing natural language processing technology is based solely on written text. A language system that has read and memorized thousands of sentences using the word “hot” will be unable to warn a robot about the physical danger of touching a live stove burner when a person warns “Watch out, that’s hot!” This project will focus on developing grounded natural language processing models for human-robot collaboration that consider aspects of embodiment such as gestures and gaze, physical object properties, and the interplay between language and vision. Approaches will utilize tools and techniques from a broad range of disciplines, including natural language processing, computer vision, robotics, and linguistics.
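
As a purely hypothetical sketch of what grounding can mean in code, the snippet below fuses a spoken warning with a thermal sensor reading before deciding whether contact is safe, rather than relying on text statistics alone. The word list, threshold, and function are invented for illustration and are not the project's models.

    # Hypothetical sketch of grounded language: fuse an utterance with a
    # physical sensor reading instead of relying on text alone.
    HAZARD_WORDS = {"hot", "sharp", "heavy"}

    def should_avoid_contact(utterance, surface_temp_c):
        """Either signal alone may be ambiguous; combine language and sensing."""
        warned = any(w in utterance.lower() for w in HAZARD_WORDS)
        too_hot = surface_temp_c > 60.0   # illustrative safety threshold
        return warned or too_hot

    should_avoid_contact("Watch out, that's hot!", surface_temp_c=25.0)  # True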

Location and Housing


USC is located near downtown Los Angeles. There are numerous restaurants and stores on or adjacent to campus, including Target and Trader Joe’s. USC’s campus is directly linked to downtown LA and downtown Santa Monica by the Expo Line light rail, so you have easy access to cultural attractions as well as shopping and the beach! A stipend will be provided to students, along with housing and compensation for meals.

Benefits

  • Work with some of the leading researchers in robotics and autonomy.
  • Receive a stipend of $6000 for the entire duration of the program. We will additionally provide housing and compensation for meals.

Eligibility

  • U.S. citizenship or permanent residency is required.
  • Students must be currently enrolled in an undergraduate program.
  • Students must not have completed an undergraduate degree prior to the summer program.

Important Dates

  • Application opens: February 6, 2023
  • Application deadline: February 27, 2023
  • Notification of acceptance begins: March 13, 2023
  • Notification of declined applicants: March 27, 2023
  • Start Date: May 30, 2023
  • End Date: August 4, 2023

How to Apply

Step 1
Application Form

Fill out the online REU application (when the application is open, the button below will contain a link).

You must first complete the application form before the rest of your application materials will be reviewed!

Application opens: February 6, 2023
Application deadline: February 27, 2023

2023 REU Application Form

Step 2
Supplemental Materials

Upload the following materials using password USC_CS_REU.

  1. The most recent unofficial transcripts from all undergraduate institutions you have attended.
  2. A one-page personal statement. This may include your research interests and how you came to be interested in them, your previous research experiences, your reasons for wanting to participate in research at USC, and how participating in this experience might better prepare you to meet your future goals.
Upload Supplemental Materials

Step 3
Recommendation Letter

Request a faculty member to send a letter of recommendation directly to the program administrator.

Email: nikolaid_at_usc_dot_edu
Subject: “REU Site Recommendation: Your full name here” (e.g. “REU Site Recommendation: Leslie Smith” )
Filename: Lastname_Firstname_Recommendation.pdf

Please note that due to the large number of responses, it is not possible to confirm receipt of individual letters. Applicants are encouraged to check with their recommender directly to confirm that the letter has been submitted.

Questions? Contact Us!
