Welcome to SPUR Research Showcase 2022!

Students are presenting their research in a variety of disciplines, and we are excited for you to see their work. Please note that, as a research-centered university, we support research opportunities in a wide array of areas; some content may not be appropriate for all ages or may be upsetting. Please understand that the views and opinions expressed in the presentations are those of the participants and do not necessarily reflect any policy or position of UCLA. By clicking the "Agree" button, you understand and agree to the items above.

Week 8 Summer Undergraduate Research Showcase, 2:00–3:15 PM

Thursday, August 11 2:00PM – 3:15PM

Location: Online - Live


Presentation 1
Melissa Cruz, Katsushi Arisaka
Improving Depth Perception in Current Tech
Getting a computer to recognize how far away something is remains an important problem that people are actively working to improve today. Depth perception is integrated into a range of advanced technologies, including automated driving, VR, facial recognition, and robotics. Those who have attempted to implement or improve this capability have used methods like LiDAR and stereo image processing, with LiDAR currently the more popular of the two. However, LiDAR is expensive and would be difficult to implement on a large scale. We decided to re-investigate depth perception from a human's perspective in the hope of simplifying this process further and finding more affordable alternatives. Our aim in this experiment was to write a program that analyzes images using motion parallax, binocular disparity, and triangulation to determine the depths of objects. The program did just that, although its results are not as accurate as we had hoped. In the future, we plan to improve the program's accuracy and to incorporate live video feed as well.
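The binocular-disparity step described above can be sketched in a few lines. This is a minimal illustration, not the presenters' actual program: it assumes a rectified stereo pair under a pinhole camera model, where depth follows from Z = f·b/d (focal length f in pixels, baseline b in meters, horizontal pixel disparity d). The function name and example numbers are hypothetical.

```python
def depth_from_disparity(x_left, x_right, focal_length_px, baseline_m):
    """Triangulate depth from horizontal pixel disparity.

    Assumes a rectified stereo pair (pinhole model): the same scene
    point appears at column x_left in the left image and x_right in
    the right image, and depth Z = f * b / (x_left - x_right).
    """
    disparity = x_left - x_right  # in pixels; positive for points in front
    if disparity <= 0:
        raise ValueError("non-positive disparity: point is not in front of both cameras")
    return focal_length_px * baseline_m / disparity

# Example: 700 px focal length, 6 cm baseline, 35 px disparity
z = depth_from_disparity(400, 365, focal_length_px=700, baseline_m=0.06)
# z = 700 * 0.06 / 35 = 1.2 m
```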
Presentation 2
Angela Duran, Melissa Cruz, Kunal Kulkarni, Alex Deal, and Katsushi Arisaka
Depth Perception With Intelligent Machines
This research explores navigation methods and aims to improve on them. Using human vision as a foundation, stereovision has the potential to replicate intelligent depth sensation. Modern methods for estimating depth in navigation systems are expensive or too slow. By integrating object-detection software with stereovision, this study demonstrates that it is possible to recreate human vision processing in machines. The experiment takes multiple binocular camera setups and runs the captured photos through a software program, which returns the depths of various objects. The findings suggest that this method is workable but still has some accuracy and speed issues; the accuracy appears to decrease as objects become farther away. Since humans are not fully capable of predicting exact depth measurements either, the experiment's depth predictions can tolerate a range of error, which is sufficient for navigation systems such as self-driving cars.
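The observation that accuracy decreases with distance is a known property of stereo triangulation, and a short sketch makes the scaling explicit. Differentiating Z = f·b/d gives a depth uncertainty of roughly Z²/(f·b) per pixel of disparity error, i.e. error grows quadratically with range. The function below is an illustrative calculation under that standard model, not the presenters' code; all names and numbers are hypothetical.

```python
def depth_error(depth_m, focal_length_px, baseline_m, disparity_err_px=1.0):
    """Approximate stereo depth uncertainty for a given disparity error.

    From Z = f * b / d, a small disparity error dd maps to a depth
    error of about Z**2 / (f * b) * dd, so uncertainty grows
    quadratically with range.
    """
    return depth_m ** 2 / (focal_length_px * baseline_m) * disparity_err_px

# For a 700 px focal length and 6 cm baseline, a 1 px disparity error
# costs ~2.4 cm at 1 m but ~2.4 m at 10 m.
for z in (1, 5, 10, 20):
    print(z, "m:", round(depth_error(z, 700, 0.06), 3), "m error")
```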
Presentation 3
Megan C. Chen, Lara Dolecek
Methods on Optimizing Sparse Matrix Multiplication
Big data and its applications require massive amounts of data to be computed within milliseconds, and they rely on distributed computation. One such element of distributed computation is large matrix multiplication, which is handled by distributing tasks across worker nodes, as the task is too massive to handle on one machine. Stragglers, nodes that do not finish computations in a timely manner, are bottlenecks for distributed computation. Current solutions mitigate the adverse effect of stragglers by injecting redundancy into the distributed tasks sent to worker nodes, which lowers the recovery threshold, defined as the minimum number of workers needed to recover the result. Here, we examine sparsity, the quantity of zero entries in data matrices; inspiration from previous solutions is applied to lower the recovery threshold relative to that of non-sparse matrix multiplication. We take advantage of sparsity to densely pack information from matrices into the shortest possible representation, directly lowering the recovery threshold. The results show that the improvement in recovery threshold increases with sparsity, exceeding 70% at high levels of sparsity. While this result demonstrates improved tradeoffs between recovery threshold and computation cost, it does not yet account for the numerical stability of the algorithms, as decimals and errors stemming from finite numerical precision were not explored here. Future work can combine other developed distributed computation methods with sparse matrix multiplication, look into sparsity in multiple matrices, or study the effects of numerical stability.
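The idea of injecting redundancy to lower the recovery threshold can be shown with the simplest possible coded scheme for a matrix-vector product. The sketch below is not the presenter's method (it does not exploit sparsity at all); it only illustrates the recovery-threshold concept: A is split into two row blocks, a parity block A1 + A2 is added, and any 2 of the 3 worker results suffice to recover A @ x, so one straggler can be ignored. All names are hypothetical.

```python
import numpy as np

def encode_blocks(A):
    """Split A into two row blocks and append a parity block A1 + A2.

    Worker i computes blocks[i] @ x. Any 2 of the 3 results recover
    the full product, so the recovery threshold is 2 (one straggler
    is tolerated).
    """
    A1, A2 = np.array_split(A, 2, axis=0)
    return [A1, A2, A1 + A2]

def recover(results):
    """Recover A @ x from any 2 worker results.

    results maps worker index (0, 1, or 2) to that worker's block @ x.
    """
    if 0 in results and 1 in results:       # both systematic workers finished
        y1, y2 = results[0], results[1]
    elif 0 in results:                      # worker 1 straggled: A2@x = parity - A1@x
        y1, y2 = results[0], results[2] - results[0]
    else:                                   # worker 0 straggled: A1@x = parity - A2@x
        y1, y2 = results[2] - results[1], results[1]
    return np.concatenate([y1, y2])
```

For example, if worker 0 straggles, `recover({1: ..., 2: ...})` still reconstructs the exact product from the surviving systematic block and the parity block.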
Presentation 4
Delsin R. Carbonell, Ankur Mehta
Deformable Planar to Spatial Deployable Designs
Deployable designs are expected to be easy, efficient, and practical to use. Auxetics, specifically elastic geodesic grids in this research, can serve as a 2D-to-3D deployable design that forms a target curved surface. These elastic geodesic grids are built from flat, flexible rectangular beams that allow deformation out of plane, shaping the 3D surface. The structures are relatively simple, cost-efficient, and easy to manufacture. To ensure precise fabrication of the beams and pivot points, we use a laser cutter to create an accurate grid approximation. Since elastic geodesic grids require flexible materials to deploy to their 3D state, our objective is to find out which materials can be used to fabricate them and how those materials affect the overall deployment efficiency of the structure. Knowing this, we can determine which surfaces we can and cannot approximate with a given set of materials. For future work, we would like to investigate attaching canvas or membranes over the elastic geodesic grids, similar to a deployable tent.
Presentation 5
Alexander T. Deal, Katsushi Arisaka
Stereo Depth Mapping using the YOLO Algorithm
Throughout the 21st century there has been a race to develop technologies that make cars drive autonomously. One of these technologies is LiDAR, which is used to measure the depths of surrounding objects. However, these devices come at a high cost, leading some companies, such as Tesla, to abandon them. Fortunately, there is a low-cost alternative: by using two cameras separated by a known distance, along with a reference point a known distance from the cameras, we are able to automatically calculate the depths of nearby objects. The algorithm requires the integration of the You Only Look Once (YOLO) algorithm. YOLO identifies the objects in the cameras' frames and provides their pixel locations, which can be sent to a Python program to calculate each object's depth. While not as accurate as LiDAR, this approach is extremely cheap to deploy, which gives it applications beyond expensive cars; it can also be applied to drones, robot cars, or ordinary cameras. Although the algorithm is cheap, it has strict requirements for its use, which limits its range of applications.
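The step from detector output to depth can be sketched as follows. This is an illustrative reconstruction, not the presenter's program: it assumes a rectified stereo pair and that the same object has already been matched between the two frames (e.g., by YOLO class label). The disparity is taken between the horizontal centers of the two bounding boxes, and depth follows from Z = f·b/d. The function name, box format, and numbers are hypothetical.

```python
def stereo_depth_from_boxes(box_left, box_right, focal_length_px, baseline_m):
    """Estimate an object's depth from matched detector bounding boxes.

    Boxes are (x_min, y_min, x_max, y_max) in pixels from a rectified
    stereo pair. The horizontal box centers give the disparity, and
    depth is triangulated as Z = f * b / disparity.
    """
    cx_left = (box_left[0] + box_left[2]) / 2
    cx_right = (box_right[0] + box_right[2]) / 2
    disparity = cx_left - cx_right  # pixels
    if disparity <= 0:
        raise ValueError("non-positive disparity: boxes are not a valid stereo match")
    return focal_length_px * baseline_m / disparity

# Example: a 20 px-wide detection centered at x=400 (left) and x=365
# (right), with a 700 px focal length and 12 cm baseline.
z = stereo_depth_from_boxes((390, 100, 410, 140), (355, 100, 375, 140),
                            focal_length_px=700, baseline_m=0.12)
# z = 700 * 0.12 / 35 = 2.4 m
```

Using box centers rather than a dense pixel-by-pixel match is what keeps the method cheap, at the cost of one depth value per detected object rather than a full depth map.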