Week 8 Summer Undergraduate Research Showcase
Thursday, August 11 3:30PM – 5:00PM
Location: Online - Live
Presentation 1
Trevor Cai and Yang Zhang
Texture Chromeleon – A Toolkit for Quick and Rich Electrovibration Texture Rendering
Electrovibration is a principle that changes how touchscreens are used and interacted with. By applying an oscillating voltage to a screen, an electrostatic force is induced between the user's fingers and the touchscreen, and a wide range of tactile feedback can be achieved without any moving parts, giving superior durability. Additionally, electrovibration can render a richer set of textures than conventional mechanical vibration approaches can offer. Because of this operating principle, computing devices using electrovibration hold a number of advantages over physical vibration that make it an appealing alternative or addition to its current counterpart. Beyond the lack of wear and tear and the increased magnitude and spatial uniformity of tactile feedback, electrovibration induces a perceived sense of friction on sliding fingers that, when used in conjunction with physical vibration, allows for a much more immersive experience. In this paper, we propose a toolkit to create realistic haptics on conductive materials using electrovibration. Specifically, to use our toolkit, a user only needs to slide their smartphone, fitted with our 3D-printed attachment, across a real-life surface and process the recorded audio with the code provided. By analyzing the audio waveforms generated by different surfaces and applying them through the Processing software platform, we are able to recreate realistic haptics that replicate the roughness of different surfaces on a conductive surface connected to the electrovibration circuit.
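The audio-to-texture pipeline the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the toolkit's actual code: the function names, the frame size, the RMS-envelope descriptor, and the linear mapping to drive-voltage amplitude are all illustrative assumptions.

```python
import math

def texture_envelope(samples, frame_size=256):
    """Compute a per-frame RMS amplitude envelope of recorded sliding audio.

    The envelope serves as a rough proxy for surface roughness: rougher
    surfaces produce louder sliding noise, which we later map to a
    stronger electrovibration drive signal.
    """
    env = []
    for start in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[start:start + frame_size]
        env.append(math.sqrt(sum(s * s for s in frame) / frame_size))
    return env

def to_drive_voltage(envelope, v_max=120.0):
    """Scale the envelope so its peak maps to the maximum oscillating-
    voltage amplitude the electrovibration circuit can apply."""
    peak = max(envelope) or 1.0
    return [v_max * e / peak for e in envelope]
```

For example, feeding in a recorded waveform (here a synthetic tone) yields one voltage amplitude per audio frame, which a driver loop could then replay as the finger slides.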
Presentation 2
STEVE S. LEE, Shurui Li, Puneet Gupta
Compression of Convolutional Neural Networks on One-Dimensional Datasets via Weight Pool Networks
Convolutional neural networks are used in a variety of classification and prediction models, most commonly in image processing. However, as these networks become increasingly deep and complex, their computational and storage requirements can become impractical for resource-constrained devices. Compression techniques such as weight pooling cluster neural network weights together, reducing the number of distinct weights that must be stored. We specifically used channel-wise weight pooling, which allows groupings over arbitrary 2D filter sizes while minimizing the drop in accuracy. Since this approach exhibited an adequately low accuracy drop on an image dataset, we applied the same methods to one-dimensional datasets such as DeepSig's RadioML dataset, which showed similar levels of accuracy after compression.
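The core idea of weight pooling can be sketched with a naive k-means pass: cluster the network's filters into a small shared pool and store each filter as an index into that pool. This is only an illustration of the idea, not the authors' implementation; the clustering granularity, iteration count, and all names here are assumptions.

```python
import random

def weight_pool(filters, pool_size, iters=10, seed=0):
    """Cluster flattened filters into a small shared pool via naive
    k-means. Storage shrinks from n_filters full filters to
    pool_size pool entries plus one small index per filter."""
    rng = random.Random(seed)
    centroids = [list(f) for f in rng.sample(filters, pool_size)]
    for _ in range(iters):
        # Assign each filter to its nearest pool entry (squared L2).
        assign = [min(range(pool_size),
                      key=lambda c: sum((a - b) ** 2
                                        for a, b in zip(f, centroids[c])))
                  for f in filters]
        # Recompute each pool entry as the mean of its assigned filters.
        for c in range(pool_size):
            members = [f for f, a in zip(filters, assign) if a == c]
            if members:
                centroids[c] = [sum(col) / len(members)
                                for col in zip(*members)]
    return centroids, assign
```

At inference time, each filter would be reconstructed by looking up its pool entry, trading a small accuracy loss for the reduced weight storage the abstract describes.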
Presentation 3
WILLIAM SHIH, Ankur Mehta
End-to-End Design Process for Cut-and-Fold Modular Robots
Robot development is a challenging and resource-consuming process: it requires integrating mechanical, electronic, and computational subsystems and involves many iterations of design and testing to properly customize robotic technology to individual needs. Our goal is to lower this barrier to entry and streamline the robot design process by creating a more efficient framework, ultimately increasing accessibility. To this end, we use LEMUR's Robot Compiler (RoCo), a framework for visualizing and generating cut-and-fold robots, whose mechanical parts are fabricated as flat sheets of material and can be easily folded into their final 3D form. Through RoCo's hierarchical design process, we first created simple interactive shapes with parameterized properties as subcomponents to build upon in making more complex geometries. From this, we compiled a library of modular robotic components, including wheels, bodies, legs, and peripherals, that can be used as building blocks to create a wide array of structures. When paired with the proper electronics and software, these structures can realize robots with various capabilities, such as multiple types of movement and locomotion in different environments. Our research demonstrates the potential of cut-and-fold robots to make robot creation more efficient and ubiquitous while offering a wide range of useful capabilities.
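The hierarchical, parameterized composition the abstract describes can be modeled in miniature: small parameterized parts combine into a larger crease pattern whose dimensions follow from the parameters. This is a hypothetical toy model, not RoCo's actual API; every class and function name here is an assumption made for illustration.

```python
class Face:
    """A flat rectangular panel parameterized by width and height (mm)."""
    def __init__(self, w, h):
        self.w, self.h = w, h

def box_pattern(w, d, h):
    """Compose parameterized faces into the cross-shaped crease pattern
    of an open-top box: one base panel plus four side walls that fold
    up along the base's edges."""
    base = Face(w, d)
    walls = [Face(w, h), Face(w, h), Face(d, h), Face(d, h)]
    # Total flat-sheet area needed before folding.
    area = base.w * base.h + sum(f.w * f.h for f in walls)
    return base, walls, area
```

Changing a single parameter (say, the height) propagates to every wall panel, which is the appeal of parameterized subcomponents in a hierarchical design process.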
Presentation 4
ANDERSON L. TRUONG, Khushbu Pahwa, Yang Zhang
GroundSight: Floor-Sensing Shoe Wearable for Inferring User Context
Real-time location systems (RTLS) are increasingly used in healthcare and warehouse facilities to monitor the activity of people and equipment. Unlike global positioning, RTLS are local positioning systems used for localization within a closed area. Most RTLS rely on large networks of transmitters and receivers, which can be very expensive to implement; the large overhead and cost make these systems inaccessible to users and smaller facilities with low budgets. Current RTLS solutions also raise privacy concerns through their constant surveillance and monitoring of user location data. To make user localization more affordable and secure, position tracking and identification should be performed entirely by the user, without any external signals, and users should control access to their location data. Here, we present a new localization method, GroundSight: a smart shoe accessory that coarsely tracks a user's position by sensing the micro-environments they are in through discernible floor patterns. Attached to the heel, the low-profile wearable captures images of the ground in sync with the user's steps. The device then classifies and matches these images to a set of user-defined locations in real time and logs the location data on an SD card or the user's smartphone. The user is given full control over their location logs and chooses whether to share their data. With affordable components and all processing done on-device, GroundSight offers a low-cost alternative to more expensive RTLS systems and an assurance of privacy through user autonomy.
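The classify-and-match step the abstract describes can be sketched as nearest-neighbor matching of a compact floor-image descriptor against user-labeled references. GroundSight's actual on-device classifier may well differ; the histogram descriptor, the L1 distance, and all names here are illustrative assumptions.

```python
def histogram(pixels, bins=8):
    """Compact descriptor: normalized intensity histogram of a
    grayscale floor image (pixel values 0-255)."""
    counts = [0] * bins
    for p in pixels:
        counts[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [c / total for c in counts]

def classify(pixels, reference):
    """Match a captured step image against user-labeled reference
    descriptors by L1 distance; return the best location label."""
    desc = histogram(pixels)
    return min(reference,
               key=lambda label: sum(abs(a - b) for a, b in
                                     zip(desc, reference[label])))
```

A user would enroll each location once (e.g., dark lab tile vs. bright hallway carpet), after which each step image is matched to the closest enrolled floor pattern entirely on-device.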
Presentation 5
BILAL K. MALIK, Ankur Mehta
Environmental Design for Robotic Simulation in Virtual Reality
To help those looking to enter the field of robotics without prior experience, there need to be tools that simplify the process of creating and designing. One often overlooked aspect of robotics, however, is the design of the environment a robot traverses. We therefore created a virtual reality tool that allows users to visualize and intuitively edit an environment. The tool was built with the Unreal Engine 5 game engine using the C++ and Blueprint languages, and all implementations were tested on the Oculus Quest 2 virtual reality headset. We created a user interface for grabbing, releasing, and selecting objects from a distance; these objects may be placed at any location and move with respect to the controller. The interface also allows building blocks to be spawned in, giving the user more freedom. Upon selecting an individual object, a slider widget appears that lets the user increase or decrease its width, length, and height. Future directions include adding functionality for designing a robot entirely in virtual reality and adding more editable objects. Ultimately, our goal is to allow the visualization of physical interaction between a robot and the edited environment in virtual reality. As a whole, the project simplifies environmental design and helps those who may not be well versed in robotics.