Three computer science students presented at the Summer Research Showcase held in Leyburn Library on Monday, October 5.
Will Richardson ’11 presented his work “NuclearEd: A Powerful Resource for Educators.” Will collaborated on the project with Chemistry professor Frank Settle and Computer Science professor Tom Whaley.
Camille Cobb ’12 and Carrie Hopkins ’12 presented their work on “Exploring Data Models for Automatically Generating Tests for Web Applications.” Camille and Carrie collaborated with Computer Science professor Sara Sprenkle as well as Katie Baldwin ’10 and Professor Lori Pollock from the University of Delaware.
A Parallel Algorithm for Derivation of Regression Coefficients on the Graphics Processing Unit
Pasko Shterev Paskov
Regression analysis is one of the most common methods of statistical inference, with roots in scientific research across all fields stretching back more than two centuries. It is widely used because it provides an intuitive way to establish a relationship between observations of different variables, and thereby supply empirical evidence for a hypothesized connection, or dependence, between them. Regression is an invaluable tool for research and commerce alike, and has understandably received much attention from software companies over the past two decades as they realized the immense potential of computers to improve and facilitate the method's use. Although the contribution of such software should not be understated, the massive amounts of information that have become available with the rise of the digital age have made it increasingly time consuming, and at times nearly impossible, for machines to derive the estimated regression coefficients. This is a computationally intensive problem, and improving the algorithm's efficiency is crucial for time-sensitive applications of regression. The graphics cards introduced in the past two years have found wide recognition as an accessible alternative to parallel computer clusters for many applications, and the architecture and parallel capabilities of the GPU hold great potential for speeding up regression calculations. This thesis introduces a new parallel regression algorithm in CUDA for use on the GPU, and demonstrates that it is between four times faster on smaller datasets and six hundred times faster on larger ones, depending also on the GPU architecture.
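The thesis's CUDA kernels are not reproduced in the abstract, but for readers unfamiliar with the underlying computation, the closed-form least-squares estimates for simple linear regression can be sketched sequentially in plain Python (a minimal illustration only; the function name `regression_coefficients` is ours, and the parallel version distributes these sums across GPU threads):

```python
def regression_coefficients(xs, ys):
    """Ordinary least-squares estimates b0, b1 for the model y = b0 + b1*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Sums of squared deviations and cross-deviations; in a parallel
    # version these reductions are the natural unit of GPU work.
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    b1 = sxy / sxx           # slope
    b0 = mean_y - b1 * mean_x  # intercept
    return b0, b1
```

For example, `regression_coefficients([1, 2, 3, 4], [3, 5, 7, 9])` recovers the exact line y = 1 + 2x.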
A Parallel Algorithm for Fast Edge Detection on the Graphics Processing Unit
Often, it is a race against time to make a proper diagnosis of a disease. In areas of the world where qualified medical personnel are scarce, work is being done on the automated diagnosis of illnesses. Automated diagnosis involves several stages of image processing on lab samples in search of abnormalities that may indicate the presence of diseases such as tuberculosis. These image-processing tasks are good candidates for parallelization, which would significantly speed up the process. However, a traditional parallel computer is not a very accessible piece of hardware for many. The graphics processing unit (GPU) has evolved into a highly parallel component that recently has gained the ability to be used by developers for non-graphical computations.
This paper demonstrates the parallel computing power of the GPU in the area of medical image processing. We present a new algorithm for performing edge detection on images using NVIDIA's CUDA programming model to program the GPU in C. We evaluated our algorithm on a number of sample images and compared it to two other implementations: one sequential and one parallel. This new algorithm produces impressive speedup in the edge detection process.
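The paper's CUDA code isn't included here, but the core of most edge-detection pipelines is a per-pixel gradient computation, a natural fit for the GPU since every pixel can be processed independently. A common choice is the Sobel operator; the following is a minimal sequential Python sketch (our own illustration, not the authors' algorithm), assuming a grayscale image given as a list of lists:

```python
import math

# 3x3 Sobel kernels for horizontal (GX) and vertical (GY) gradients.
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel(image):
    """Return gradient magnitudes for a 2D grayscale image.
    Border pixels are left at zero for simplicity."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):        # on a GPU, this double loop becomes
        for x in range(1, w - 1):    # one thread per pixel
            gx = gy = 0
            for dy in range(-1, 2):
                for dx in range(-1, 2):
                    pixel = image[y + dy][x + dx]
                    gx += GX[dy + 1][dx + 1] * pixel
                    gy += GY[dy + 1][dx + 1] * pixel
            out[y][x] = math.sqrt(gx * gx + gy * gy)
    return out
```

A flat image yields zero everywhere, while an image with a sharp vertical boundary produces large magnitudes along that boundary.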
Even though classes aren’t in session, W&L CS students and faculty will be collaborating on several projects.
Daniel Thornton ’10 will be working with Dr. Simon Levy on a custom-built robot platform to implement the visual map-seeking circuit (MSC) algorithm for real-time robot navigation. This is the first time that anyone has attempted to apply the MSC algorithm to this task, so it looks like Daniel has a challenging summer ahead!
Will Richardson ’11 will be working under the direction of Professors Tom Whaley and Frank Settle to develop a searchable website that indexes online resources on nuclear energy. This website will be an important component of the National Energy Education Development project headed by Dr. Frank Settle of Washington and Lee and Dr. Charles Ferguson of the Council on Foreign Relations and funded by Mr. Gerry Lenfest. The website will be used by middle school, high school, and college educators as well as the general public. Will’s work will include the design and implementation of a database for the backend of the system as well as the user interface and search engine. This work will be done with input from educators in the target audience. Last summer Will developed a prototype that was well received and led to the current project.
Nine students presented their computer science projects at SSA, the W&L student research conference.
Two groups of students gave presentations. Senior Alex Jackson presented his research on “Parallel Computing in the Python Programming Language,” while Junior Bena Tshishiku, Sophomores Jack Ivy and Will Richardson, and first-year Eric Gehman presented their SLogo project from the CS209: Software Development course.
These robots were created by students in Prof. Levy’s spring 2008 course CSCI 250: Robot and Mind. Students built the robots using the popular Lego Mindstorms NXT platform and controlled the robots using the Python programming language. The control computer communicated with the robots using a wireless Bluetooth connection. The setup allows students to apply skills learned in their other computer science courses to robotics and frees them from the constraints imposed by the processor and memory limitations of the robot hardware.
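The course's actual control code isn't shown here, but the pattern it describes, encoding commands on the control computer and sending them to the robot over a byte link, can be sketched as follows. This is a hypothetical illustration: the class name `RobotLink`, the method `set_motor`, and the packet format are all our own inventions, and the transport below is an in-memory buffer standing in for the real Bluetooth socket.

```python
import io
import struct

class RobotLink:
    """Encodes simple motor commands and writes them to a byte transport.

    In a setup like the one described, the transport would be a Bluetooth
    connection to the NXT brick; here, any file-like object with a
    write() method works, which also makes the protocol easy to test.
    """

    CMD_SET_MOTOR = 0x01  # hypothetical opcode

    def __init__(self, transport):
        self.transport = transport

    def set_motor(self, port, power):
        # One packet = opcode, motor port, signed power level (-100..100).
        if not -100 <= power <= 100:
            raise ValueError("power must be in -100..100")
        packet = struct.pack("!BBb", self.CMD_SET_MOTOR, port, power)
        self.transport.write(packet)
        return packet
```

Keeping the heavy computation on the control computer, as the course did, means the robot only ever sees small packets like these, sidestepping the NXT's processor and memory limits.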