Welcome to SPUR Research Showcase 2023!

Students are presenting their research in a variety of disciplines, and we are excited for you to see their work. Please note that, as a research-centered university, we support research opportunities in a wide array of areas; some content may not be appropriate for all ages or may be upsetting. Please understand that the views and opinions expressed in the presentations are those of the participants and do not necessarily reflect those of UCLA or any policy or position of UCLA. By clicking on the "Agree" button, you understand and agree to the items above.

Week 10 Summer Undergraduate Research Showcase SURP 4 - 3:30

Wednesday, August 30 3:30PM – 5:00PM

Location: Online - Live


Presentation 1
ARUNAN ELAMARAN, ETHAN LAI, Yang Xing, Pragya Sharma, Gaofeng Dong, Mani Srivastava
Evaluating the Efficacy of Large Language Models as Frameworks for Personal Robotics
Large language models (LLMs), machine learning models that process and generate human-language text, have shown the ability to reason about and parse complex tasks. Recognizing this potential, recent research has sought to integrate these models with robotics to provide a means for users to interact more naturally with robots. In this project, we explore one implementation of an LLM-enabled system and the extent to which smaller, local LLMs can be used in the edge-computing environments characteristic of personal robotic systems. Our system is distributed over a robot and a nearby edge server. The robot converts human-spoken instructions into a text prompt and sends it to a lightweight LLM on the edge server. In response, the LLM generates code that calls custom function libraries; this script is sent back to the robot, which then uses its sensors and actuators to execute the user-specified task. We evaluate this system by prompting the LLM with tasks of varying complexity and ambiguity, measuring the latency and accuracy of each subprocess of the system. Our findings indicate that the LLM consistently generates accurate code for simple, atomic tasks but occasionally generates erroneous code for complex tasks with higher levels of ambiguity. Despite these errors, we conclude that smaller versions of LLMs can be deployed effectively on edge-computing machines, but additional guardrails must be implemented to ensure their trustworthiness. Our framework serves as an initial platform for further testing of LLMs in robotics and other cyber-physical systems.
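The robot-to-edge-server loop described in the abstract can be sketched as follows. This is an illustrative Python outline only: the function names (transcribe_speech, query_edge_llm) and the Robot class are hypothetical stand-ins, not the authors' actual implementation, and the speech-to-text and LLM steps are replaced with fixed stubs.

```python
# Hypothetical sketch of the speech -> prompt -> LLM -> code -> actuation loop.
# All names here are illustrative placeholders, not the authors' real API.

def transcribe_speech(audio: bytes) -> str:
    """Placeholder speech-to-text step; a real robot would call an ASR model."""
    return "move forward one meter"

def query_edge_llm(prompt: str) -> str:
    """Placeholder for the request to the lightweight LLM on the edge server.
    The LLM returns a script that calls functions from a custom robot library."""
    return "robot.move_forward(distance_m=1.0)"

class Robot:
    """Minimal stand-in for the robot's actuator library."""
    def __init__(self):
        self.log = []

    def move_forward(self, distance_m: float):
        # A real robot would drive its motors here; we just record the call.
        self.log.append(("move_forward", distance_m))

def handle_instruction(audio: bytes, robot: Robot) -> None:
    prompt = transcribe_speech(audio)   # 1. spoken instruction -> text prompt
    script = query_edge_llm(prompt)     # 2. edge LLM -> generated code
    exec(script, {"robot": robot})      # 3. robot executes the returned script

robot = Robot()
handle_instruction(b"...", robot)
print(robot.log)  # [('move_forward', 1.0)]
```

Executing LLM-generated code directly, as `exec` does here, is exactly where the abstract's call for additional guardrails applies: a deployed system would validate the script against an allowed function list before running it.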
Presentation 3
Richard Zhou, Chen Wei, Lihua Jin
Investigating the Fracture Mechanics of Liquid Crystal Elastomers
Liquid Crystal Elastomers (LCEs) are a unique type of soft material that incorporates rod-like liquid crystals (LCs) into a flexible polymer network. The reorientation of LCs gives rise to large spontaneous deformation. As such, LCEs have recently been studied for numerous applications, including soft robotics, biomedical devices, and artificial muscles and tissue. As these applications are further developed, it is increasingly important to understand the fracture behavior of LCEs in order to predict and optimize the lifetime of LCE components. The analysis of the energy release rate in LCEs can be complicated, considering the director rotation and the dissipation energy associated with the viscoelasticity of the network and the director. The intricate interplay of the network and director gives rise to a complex fracture phenomenon, where the crack may propagate at an inclined angle, resulting in mixed-mode fracture. To characterize LCE fracture behavior, we first focused on monodomain LCEs with a director parallel to the loading direction under Mode-I loading. We then analyzed the fracture energy under different loading rates using the pure shear test; delayed fracture was explored via relaxation and creep tests. Our results show that the loading rate has a direct correlation with fracture energy. Furthermore, for delayed fracture, the threshold displacement was approximated and the rate of crack propagation was compared across different holding values. In the future, we will explore behavior under fatigue testing and different loading modes.
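For background on the pure shear test mentioned above: in the classical Rivlin–Thomas pure shear geometry for elastomers (stated here as standard context, not taken from the abstract), the energy release rate has the simple form

$$
G = W(\lambda)\, H ,
$$

where $W(\lambda)$ is the strain energy density in the bulk of the specimen far from the crack at applied stretch $\lambda$, and $H$ is the undeformed specimen height. For LCEs, as the abstract notes, this elastic result is complicated by director rotation and by viscoelastic dissipation in both the network and the director, which is why rate-dependent and delayed-fracture measurements are needed.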
Presentation 4
Connor Steugerwald, Enes Krijestorac, Danijela Cabric
6 GHz Co-Secondary Coexistence with Spatial Prediction
In wireless communications, the 6 GHz frequency band has recently opened to unlicensed secondary users, facilitated by WiFi 6 and cellular 5G technologies. Previously, the 5 GHz band employed listen-before-talk (LBT) to ensure fair channel sharing: devices check for ongoing transmissions before initiating their own data transfer. This study aimed to improve upon LBT using deep-learning-based radio localization and channel gain spatial prediction. These methods are based on received signal strength measurements from WiFi devices and a 3D map of the environment. The focus was on urban environments, featuring randomly positioned WiFi devices, cellular base stations, and cellular devices. Primary objectives included determining the optimal utilization of the 6 GHz band by cellular base stations amidst WiFi interference and curtailing base station interference on the WiFi through power adaptation. In our approach, the signal-to-interference-plus-noise ratio (SINR) is calculated at the WiFi and cellular receivers using the aforementioned predictions. The power of the base stations is adjusted proportionally to minimize interference on the ongoing WiFi transmission. The base stations also decide whether to operate in the 6 GHz band or opt for a different frequency, based on the interference from the WiFi. Simulations were run, and the results were compared with a baseline LBT method, revealing a decrease in interference to WiFi devices and an increase in throughput for cellular devices. The presented work introduces an alternative to traditional LBT methods in the 6 GHz band, with the potential to enhance wireless communication fairness.
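The SINR-driven power-adaptation step can be sketched as below. This is a minimal illustration, not the authors' method: the channel gains would come from the deep-learning spatial predictor, and the noise power and protection threshold here are arbitrary assumed values.

```python
import math

NOISE_W = 1e-12          # assumed receiver noise power (watts)
WIFI_SINR_MIN_DB = 20.0  # assumed protection threshold for the WiFi link

def sinr_db(signal_w, interference_w, noise_w=NOISE_W):
    """Signal-to-interference-plus-noise ratio in dB."""
    return 10 * math.log10(signal_w / (interference_w + noise_w))

def adapt_bs_power(p_bs_max_w, g_wifi_link, p_wifi_tx_w, g_bs_to_wifi):
    """Scale the base station transmit power so the predicted WiFi SINR
    stays above WIFI_SINR_MIN_DB; return 0.0 (i.e., leave the 6 GHz band)
    if no positive power can protect the WiFi link."""
    wifi_signal = p_wifi_tx_w * g_wifi_link
    # Largest interference the WiFi receiver can tolerate at the threshold:
    max_interf = wifi_signal / 10 ** (WIFI_SINR_MIN_DB / 10) - NOISE_W
    if max_interf <= 0:
        return 0.0  # opt for a different frequency band
    return min(p_bs_max_w, max_interf / g_bs_to_wifi)

# Example with made-up predicted gains: the base station backs off to
# roughly 1 W * 0.999 so the WiFi link holds 20 dB SINR.
p = adapt_bs_power(p_bs_max_w=1.0, g_wifi_link=1e-6,
                   p_wifi_tx_w=0.1, g_bs_to_wifi=1e-9)
```

The design choice mirrored here is that the base station, rather than the WiFi device, absorbs the adaptation burden: it caps its own power using the predicted cross-link gain instead of sensing the channel directly as LBT would.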
Presentation 5
JESS XU, Yang Zhang
Mechanical Energy Harvesting Software Toolkit
Mechanical energy harvesting is a promising way to power devices with low power requirements, such as sensors and microcontrollers. For example, previous work such as MiniKers has found success in harvesting mechanical energy from manual uses of household objects to provide energy for automatic actuations. However, selecting and characterizing motors for energy-harvesting circuits often involves a long process of trial and error, because the force and speed used to actuate a motor can vary greatly between different people and situations, and it can be difficult to estimate the amount of harvestable energy from motor datasheets alone. The objective of this project is to create a novice-friendly desktop application that allows users to build energy-harvesting circuits, select parameters for manual actuation, and simulate the result. For the circuit-building step, our tool builds on Fritzing, an open-source circuit schematic editor whose breadboard view lets users visualize circuits much as they would build them in real life. We extend Fritzing's simulation feature using ngspice to provide current and voltage waveforms and to let users estimate the net amount of energy a circuit can harvest based on motor model, gear ratio, RPM, and circuit components.
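The energy estimate derived from the simulated waveforms amounts to integrating instantaneous power p(t) = v(t) * i(t) over the actuation. The sketch below shows this with trapezoidal integration; the sample waveform is made up for illustration, whereas in the tool the voltage and current traces would come from the ngspice simulation.

```python
# Estimate harvested energy from sampled voltage/current waveforms by
# integrating instantaneous power p(t) = v(t) * i(t) over time.

def harvested_energy_j(times_s, volts, amps):
    """Trapezoidal integration of v(t) * i(t); returns energy in joules."""
    energy = 0.0
    for k in range(1, len(times_s)):
        dt = times_s[k] - times_s[k - 1]
        p0 = volts[k - 1] * amps[k - 1]  # power at start of interval
        p1 = volts[k] * amps[k]          # power at end of interval
        energy += 0.5 * (p0 + p1) * dt
    return energy

# Made-up waveform: a constant 2 V at 10 mA held for 0.5 s -> about 10 mJ
t = [0.0, 0.25, 0.5]
v = [2.0, 2.0, 2.0]
i = [0.01, 0.01, 0.01]
energy = harvested_energy_j(t, v, i)
```

With real simulated waveforms the power varies over the actuation, which is exactly why a fixed datasheet figure is a poor substitute for integrating the simulated traces.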