Research Experience for Undergraduates Summer Poster Session
July 27, 2016
Jordan Hall of Science



Poster # | Poster Title | Presenter | Advisor

Aerospace and Mechanical Engineering
1 | Behavior of Plasma in Contact with a Conductive Liquid Anode | Jean Pierre Clark | Paul Rumbach
2 | Fabrication of Polymer Nanofibers with Anomalous Thermal Conductivity | Peter Hanly | Dr. Tengfei Luo
3 | Green Plasma Electrochemistry: Catalyst-free CO2 Processing into Methanol | Amanda Peterson | Dr. David Go
4 | Detection of tumors using immunotargeted nanoparticles for contrast-enhanced CT | Joseph Sawyer | Dr. Ryan Roeder
5 | Noninvasive quantification of biomaterial degradation using nanoparticle imaging probes | James Tedesco | Dr. Ryan Roeder

Biological Sciences
6 | Coupled Hydrologic and Biochemical Modeling of Lake Regions | Kathryn Levitan | Dr. Stuart Jones

Center for Research Computing
7 | Global Scale Human Agent-Based Modeling | Nicole Blandin | Dr. Paul Brenner
8 | 100Gb Science DMZ Network Analysis: For High Energy Physics | Pamela Perkins | Dr. Paul Brenner
9 | Optimal Portfolio Using Genetic Algorithm | Hector Reyes | Alexander Vyushkov

Chemical and Biomolecular Engineering
10 | Plug and Play Membranes: Incorporating Biofouling Resistance into Charged Nanofiltration Membranes | Theodore Dilenschneider | Dr. William Phillip
11 | Tailoring the Nanostructure of Copolymer Membranes Through the Use of Selective Swelling Agents | Jin Kim | Dr. William Phillip
12 | Novel High-Throughput Screening Techniques for Membrane Evaluation | Ryan J. LaRue | Dr. William Phillip
13 | Studies of Sulfonated Triptycene-containing Random Polysulfone Copolymers for Polyelectrolyte Membrane Fuel Cells (PEMFCs) and Water Filtration | Herve Twahirwa | Dr. Ruilan Guo

Chemistry and Biochemistry
14 | The Role of Methylammonium Cations in Perovskite Solar Cells | Charles Marchant | Dr. Prashant Kamat

Civil Engineering and Geological Sciences
15 | Graphene cathodes for membrane electrode assemblies | Juan Velazquez | Dr. Kyle Doudrick

Computer Science and Engineering
16 | Identifying Copy Number Variations in Plasmodium using K-mers and Hidden Markov Models | Brian Bishop | Dr. Scott Emrich
17 | Face Landmarking: Comparative Analysis | Darien Calicott | Dr. Patrick Flynn
18 | Dynamic Network Alignment | Dominic Critchlow | Dr. Tijana Milenkovic
19 | Visual Analysis of IEEE VAST Challenge 2016 Data | Ian Turk | Dr. Chaoli Wang
20 | Scalability Testing for the Molecular Dynamics Application AWE-WQ up to 10,000 Cores via HTCondor and the Google Compute Engine | Lydia Brothers | Dr. Douglas Thain

Gender Studies
21 | Season of Birth and Later Outcomes: Old Questions, New Answers, Bigger Data | Stephen Salisbury | Dr. Kasey Buckles

ND Energy
22 | Iron (III) Oxide/Graphene Nanostructures for Solar Water Splitting | Clare Murphy | Dr. Ian Lightcap

NDnano
23 | Production of Magneto-Electric Nanoparticles and Functionalization for Increased Biocompatibility | Eoin O'Sullivan | Tiffanie Stewart
24 | Magneto-electric nanoparticles to specifically target cancer cells in vitro | Andrea Oviedo | Tiffanie Stewart
25 | Dynamic Monitoring for Lobster Using the ELK Stack | Anna Yannakopoulos | Dr. Kevin Lannon

Physics
26 | Deep Learning for Particle Physics | Colin Dablain | Dr. Kevin Lannon
27 | Using Deep Neural Networks to Analyze Collisions in High Energy Physics | Matthew Drnevich | Dr. Kevin Lannon
28 | Analysis and Stellar Label Approximations of Carbon-Enhanced Metal-Poor Stars | Travis Hodge | Dr. Vinicius Placco
29 | Using HP Vertica for Particle Physics Data Analysis | Matthew Link | Dr. Kevin Lannon
30 | Modeling the Light Curve of a Type II-P Supernova | Amber Ward | Dr. Peter Garnavich

Psychology
31 | Stock Market Simulation Via Geometric Brownian Motion | Brett Baumgartner | Dr. Zhiyong Zhang
32 | Fitting a Model to Predict the Number of Fatalities Resulting from Terrorist Attacks | Tegan Chesney | Dr. Gitta Lubke
33 | The Consequences of Missing a Correlated Predictor in Multiple Regression | Anthony Hall | Dr. Gitta Lubke
34 | Therapist-Patient Interactions with the WeHab System | Thomas Hughes | Dr. Mike Villano and Julaine Zenk
35 | Integrating Cognitive Diagnostic Modeling into the AP-CAT | Katlynn Kennedy | Dr. Ying Cheng


Psychology (continued)
36 | Improving Student Engagement | Kristie LeBeau | Dr. Ying Cheng

Sociology
37 | Analyzing Personality and Other Psychological Measures in the Context of Social Networking | Katie Dwyer | Dr. David Hachen


Poster Schedule

9:30 am-10:30 am Posters

Poster Title | Presenter
Fabrication of Polymer Nanofibers with Anomalous Thermal Conductivity | Peter Hanly
Detection of tumors using immunotargeted nanoparticles for contrast-enhanced CT | Joseph Sawyer
Coupled Hydrologic and Biochemical Modeling of Lake Regions | Kathryn Levitan
100Gb Science DMZ Network Analysis: For High Energy Physics | Pamela Perkins
Plug and Play Membranes: Incorporating Biofouling Resistance into Charged Nanofiltration Membranes | Theodore Dilenschneider
Novel High-Throughput Screening Techniques for Membrane Evaluation | Ryan J. LaRue
The Role of Methylammonium Cations in Perovskite Solar Cells | Charles Marchant
Identifying Copy Number Variations in Plasmodium using K-mers and Hidden Markov Models | Brian Bishop
Dynamic Network Alignment | Dominic Critchlow
Scalability Testing for the Molecular Dynamics Application AWE-WQ up to 10,000 Cores via HTCondor and the Google Compute Engine | Lydia Brothers
Iron (III) Oxide/Graphene Nanostructures for Solar Water Splitting | Clare Murphy
Magneto-electric nanoparticles to specifically target cancer cells in vitro | Andrea Oviedo
Deep Learning for Particle Physics | Colin Dablain
Analysis and Stellar Label Approximations of Carbon-Enhanced Metal-Poor Stars | Travis Hodge
Modeling the Light Curve of a Type II-P Supernova | Amber Ward
Fitting a Model to Predict the Number of Fatalities Resulting from Terrorist Attacks | Tegan Chesney
Therapist-Patient Interactions with the WeHab System | Thomas Hughes
Improving Student Engagement | Kristie LeBeau

10:45 am-11:45 am Posters

Poster Title | Presenter
Behavior of Plasma in Contact with a Conductive Liquid Anode | Jean Pierre Clark
Green Plasma Electrochemistry: Catalyst-free CO2 Processing into Methanol | Amanda Peterson
Noninvasive quantification of biomaterial degradation using nanoparticle imaging probes | James Tedesco
Global Scale Human Agent-Based Modeling | Nicole Blandin
Optimal Portfolio Using Genetic Algorithm | Hector Reyes
Tailoring the Nanostructure of Copolymer Membranes Through the Use of Selective Swelling Agents | Jin Kim
Studies of Sulfonated Triptycene-containing Random Polysulfone Copolymers for Polyelectrolyte Membrane Fuel Cells (PEMFCs) and Water Filtration | Herve Twahirwa
Graphene cathodes for membrane electrode assemblies | Juan Velazquez
Face Landmarking: Comparative Analysis | Darien Calicott
Visual Analysis of IEEE VAST Challenge 2016 Data | Ian Turk
Season of Birth and Later Outcomes: Old Questions, New Answers, Bigger Data | Stephen Salisbury
Production of Magneto-Electric Nanoparticles and Functionalization for Increased Biocompatibility | Eoin O'Sullivan
Dynamic Monitoring for Lobster Using the ELK Stack | Anna Yannakopoulos
Using Deep Neural Networks to Analyze Collisions in High Energy Physics | Matthew Drnevich
Using HP Vertica for Particle Physics Data Analysis | Matthew Link
Stock Market Simulation Via Geometric Brownian Motion | Brett Baumgartner
The Consequences of Missing a Correlated Predictor in Multiple Regression | Anthony Hall
Integrating Cognitive Diagnostic Modeling into the AP-CAT | Katlynn Kennedy
Analyzing Personality and Other Psychological Measures in the Context of Social Networking | Katie Dwyer


Poster Presentation

Behavior of Plasma in Contact with a Conductive Liquid Anode

Jean Pierre Clarke
College of Engineering, University of Notre Dame

Advisors: Paul Rumbach, University of Notre Dame, College of Engineering, and David Go, University of Notre Dame, College of Engineering, Dept. of Aerospace and Mechanical Engineering

The applications of plasma range from avionics to cancer treatment, yet many of its properties have yet to be fully understood. The more that is known about plasma, the more useful it can become for solving ongoing engineering and medical problems. In this work, we study the behavior of an argon DC plasma placed in contact with a solution of either magnesium sulfate (MgSO4) or sodium perchlorate (NaClO4). Theory developed from fundamental electrostatics and conservation laws predicts that the radius of the plasma will decrease as the conductivity and concentration of the salt solution increase, yielding a higher current density. To test this, we photographed the plasma on the surface of the charged liquid and extracted its radius using MATLAB's image processing tools. Using this extracted radius, we then calculated the current density and its uncertainty. The experimental results and the developed theory are in good agreement across the various NaClO4 and MgSO4 solutions.
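The current-density calculation described above can be sketched in a few lines. The formula J = I / (π r²) assumes a circular plasma attachment spot, and the example current and radius values are illustrative assumptions, not numbers from the poster (the original analysis was done in MATLAB):

```python
import math

def current_density(current_a, radius_m, radius_err_m):
    """Current density J = I / (pi * r^2) for a circular plasma spot,
    with the uncertainty propagated from the radius measurement."""
    j = current_a / (math.pi * radius_m ** 2)
    # |dJ/dr| = 2 I / (pi r^3), so sigma_J = |dJ/dr| * sigma_r
    j_err = 2 * current_a / (math.pi * radius_m ** 3) * radius_err_m
    return j, j_err

# Hypothetical example: a 1 mA discharge over a 0.5 mm radius spot,
# with a 0.05 mm uncertainty on the extracted radius
J, J_err = current_density(1e-3, 0.5e-3, 0.05e-3)
```

Note that the relative uncertainty in J is twice the relative uncertainty in the radius, which is why the image-based radius extraction matters so much.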


Poster Presentation

Fabrication of Polymer Nanofibers with Anomalous Thermal Conductivity

Peter Hanly University College Dublin

Manikandan Suresh

Birla Institute of Technology, Mesra

Advisor: Tengfei Luo, University of Notre Dame, Dept. of Aerospace and Mechanical Engineering

The ever-expanding and evolving manufacturing industry requires new lightweight and energy-efficient materials that have high mechanical strength, stiffness, and thermal conductivity. The aim of this project is to explore the possibility of altering the structure of polymers so as to produce polymer nanofibers that exhibit these attractive properties, with lightweight heat-transfer applications. Polymers are usually amorphous materials, meaning they lack a repeating periodic crystal structure, and they generally have poor thermal conductivity, stiffness, and mechanical strength. These properties can be altered by creating well-packed and aligned polymers, whose structure consists of aligned rather than entangled molecular chains. Polymers with such a crystal structure have high thermal conductivity and strength. When a polymer gel is mechanically drawn into thin nanofibers, the entangled molecular chains are aligned, changing the crystal structure of the polymer and thus altering its properties. The goal of this research is therefore to develop a well-documented and simple method to produce polymer nanofibers. This is done by producing a polyethylene gel and mechanically drawing fibers using a tungsten tip. Many variables affect the fibers drawn from this process; the variables studied in this research, the draw rate and the temperature at which the fiber is drawn, have the greatest effect on the quality and size of the fibers. The experiment shows that a high draw rate of 0.05 mm/s and a high draw temperature of approximately 135 °C give rise to fibers with very thin diameters. Once a successful method is established and consistent nanofibers are produced, their crystallinity can be studied using an SEM.


Poster Presentation

Green Plasma Electrochemistry: Catalyst-free CO2 Processing into Methanol

Amanda Peterson College of Engineering

University of Notre Dame

Advisors: David Go, University of Notre Dame, College of Engineering, Dept. of Aerospace and Mechanical Engineering, and Paul Rumbach, University of Notre Dame, College of Engineering

In this work, we explore the use of plasma electrochemistry for CO2 reforming to produce carbon by-products that could be useful for alternative fuels, namely methanol. Our plasma electrochemistry setup consists of an H-tube that splits the reaction into a cathode side with formic acid and an anode side with sodium perchlorate. The cathode side is purged with argon gas, while a separate tube pumps CO2 gas into the solution. A voltage is applied to a stainless steel needle suspended above the solution to generate an atmospheric-pressure plasma in contact with the CO2-saturated solution. After processing the solution for a fixed time, gas chromatography-mass spectrometry (GC-MS) was used to analyze the solution, with a focus on identifying formaldehyde, ethane, oxalate, and methanol. Our results were quite promising for the presence of methanol: GC-MS showed a methanol presence in our samples three times that of the formic acid blanks. This is enough evidence to support the theory that we can use plasma electrochemistry to process CO2 into products that could be used for alternative fuels, and to drive our research forward.


Poster Presentation

Detection of tumors using immunotargeted nanoparticles for contrast-enhanced CT

Joseph Sawyer Purdue University

Lisa Irimata

College of Engineering Aerospace and Mechanical Engineering

University of Notre Dame

Advisor: Ryan Roeder, University of Notre Dame, College of Engineering, Dept. of Aerospace and Mechanical Engineering

Immunotargeted gold nanoparticles (Au-NPs) have been prepared to enable contrast-enhanced detection of tumors by computed tomography (CT). However, in order to determine efficacy and dosing, the Au-NP binding kinetics and isotherm for targeting cancer cells must be characterized. Therefore, the objective of this study was to verify the specificity and measure the binding kinetics of immunoconjugated Au-NPs for HER2+ breast cancer cells. Au-NPs were encapsulated with a silica shell, which was volume-loaded with fluorescein isothiocyanate (FITC) for fluorescence imaging, and immunoconjugated with either trastuzumab, for targeting HER2+ cells, or immunoglobulin G, for a negative control. SKBR-3 cells were incubated with Au-NPs for up to 48 hours, and the rate at which immunotargeted Au-NPs bound to HER2+ cells over time was compared to that of non-targeted Au-NPs (negative control). Au-NP binding to cells was analyzed at each time point using both fluorescence microscopy and inductively coupled plasma optical emission spectroscopy (ICP-OES), the latter measuring the concentrations of gold in solution before and after the cells were incubated with the Au-NPs. Fluorescence microscopy showed greater binding of targeted Au-NPs compared with untargeted Au-NPs. ICP-OES showed an increasing concentration of gold bound to cells over time for the targeted Au-NPs, whereas the untargeted Au-NPs remained at a constant concentration over the 48 hours. Therefore, Au-NPs immunoconjugated with trastuzumab were verified to target HER2+ breast cancer cells.


Poster Presentation

Noninvasive quantification of biomaterial degradation using nanoparticle imaging probes

James Tedesco McMaster University

Tyler Curtis

College of Engineering University of Notre Dame

Advisor: Ryan Roeder, University of Notre Dame, College of Engineering, Dept. of Aerospace and Mechanical Engineering

Understanding the degradation behavior of biomaterials in vivo is important for the successful design and application of tissue engineering scaffolds. Past approaches to gaining such an understanding have involved histological studies in which animal sacrifice at numerous time points is required. The common non-invasive approach of fluorescence imaging is limited to subcutaneous implants and two-dimensional images, and is confounded by endogenous tissue autofluorescence. The goal of this project is to establish a non-invasive, three-dimensional method to image and quantify scaffold degradation in vivo using contrast-enhanced computed tomography (CT) to detect nanoparticle imaging probes conjugated to the scaffold matrix. In the current study, nanoparticle conjugation and the scaffold degradation rate were verified prior to in vivo experiments. Synthesized gold nanoparticles were conjugated within collagen scaffolds by using mercaptosuccinic acid-modified gold nanoparticles. Multiple carboxyl groups are present on the surface of these gold nanoparticles; thus the nanoparticles are capable of forming multiple crosslinks with the collagen. The formation of peptide bonds with the amino groups of collagen is achieved via N-(3-Dimethylaminopropyl)-N′-ethylcarbodiimide (EDC) coupling. In vivo scaffold degradation was simulated in vitro using a collagenase digestion. Over the course of the degradation, the scaffolds were imaged using computed tomography and the signal intensity was related to the extent of degradation. The concentration of gold eluted into the media was also determined using inductively coupled plasma optical emission spectroscopy (ICP-OES) to confirm the CT measurements. Finally, to ensure the observed gold signal accurately describes scaffold degradation, a hydroxyproline assay was used to measure the actual collagen content in the media. Gold nanoparticles were successfully synthesized with a 10-20 nm diameter and functionalized with mercaptosuccinic acid. Type 1 collagen scaffolds were fabricated with 85% porosity and 300-425 micron pore size, and successfully conjugated with the functionalized gold nanoparticles. Collagen scaffolds with at least 10 mM of gold nanoparticles are detectable and quantifiable with spectral CT. An appropriate collagenase concentration to digest the scaffolds was determined to be greater than 0.5 mg/mL, with a dependence on the extent of crosslinking.


Poster Presentation

Coupled Hydrologic and Biochemical Modeling of Lake Regions

Kathryn Levitan Animal Science

University of Kentucky

Jacob Zwart Biological Sciences

University of Notre Dame

Advisor: Stuart Jones, University of Notre Dame, College of Science, Dept. of Biological Sciences

A test bed of 163 lakes in Wisconsin was examined using publicly sourced satellite temperature data from 2002 to 2016. Our objective was to examine the accuracy of the satellite data and fit it to a more accurate model in order to predict lake volumes. Past lake models, specifically regional biochemical models, are both time-consuming and difficult to build: they require sonar or anchor-and-rope measurements to determine lake depth. Land surface temperature (LST) values were extracted from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Earth Observing System Terra and Aqua satellites. MODIS produces both day and night temperature values; we used only the data collected at night. Using a masking geospatial tool in ArcGIS, the LST pixel values were joined to each lake polygon. The programming language R was used to perform several statistical analyses on the large data set. Our analyses yielded quadratic fit lines whose coefficients provided the variables needed to estimate lake volumes. Multiple models were created using known values for volume and area. A training set was created by randomly selecting 40 of the 83 lakes with known values; this set was used to test the models. Using the AICc package in R, the most accurate model was determined. Five parameters were considered to create several different model possibilities; these included area, the coefficients a and b from the quadratic fit line, and maximum temperature. Lake area was the most significant parameter. To validate the generated models, the volumes predicted by the AICc-selected model were plotted against actual volumes. The model we generated provides a more accurate and efficient way to predict lake volumes from land surface temperatures.
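The AICc-based model comparison described above can be sketched as follows. The formula is the standard corrected AIC for least-squares fits; the residual sums of squares and parameter counts are invented for illustration (the study's actual selection was done in R):

```python
import math

def aicc(rss, n, k):
    """Corrected Akaike Information Criterion for a least-squares fit:
    rss = residual sum of squares, n = observations, k = parameters
    (including the error variance). Lower scores are better."""
    aic = n * math.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

# Hypothetical fits on a 40-lake training set: a model using lake area
# alone versus one adding the quadratic-fit coefficients and max temperature
n = 40
scores = {"area only": aicc(12.0, n, 3), "full model": aicc(11.5, n, 6)}
best = min(scores, key=scores.get)
```

In this made-up example the small improvement in fit does not justify the extra parameters, so AICc prefers the area-only model, consistent with the finding that lake area was the most significant parameter.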


Poster Presentation

Global Scale Human Agent-Based Modeling

Nicole Blandin Computing and Applied Mathematics

Saint Mary’s College

Advisor: Paul Brenner, University of Notre Dame, Center for Research Computing

Modeling human populations on a global scale is beneficial not only to leaders in government making policy decisions, but to civilians as well. However, such models of a worldwide population are extremely rare due to the complications of social interaction and the complexity of simulating a model with roughly eight billion agents. Starting from an agent-based model of lifestyle well-being by Dr. Christopher Thron (Texas A&M) that simulated populations of 10,000, we modified the model in MATLAB to simulate a population of eight billion. The model allows agents to make lifestyle choices that affect their conspicuous and inconspicuous well-being. We parallelized the code so that each population of agents makes its decisions on a separate CPU core. A simulation of eight billion agents took approximately 640 GB of memory (about 160 consumer laptops) and approximately eleven hours of processing time. Each agent makes its decision based on the mean well-being of the population, which represents the choices available on the market. To create interaction between populations, we let the mean well-being of each population influence the others, with a population's influence weighted by its number of agents. To further model how individuals interact, future work could create a network of social interactions within each group and, instead of having agents respond to the overall group mean, have them respond only to the mean of the other agents in their network.
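The coupling scheme described above can be sketched as follows. The update rule, the rate constant, and the tiny example populations are illustrative assumptions; the actual simulation ran in parallel MATLAB on separate CPU cores, for which the thread pool below is only a stand-in:

```python
from concurrent.futures import ThreadPoolExecutor

def update_population(pop, coupled_mean, rate=0.1):
    """One step: each agent nudges its well-being toward the coupled mean,
    which represents the choices available on the market."""
    return [w + rate * (coupled_mean - w) for w in pop]

def step(populations):
    # Size-weighted mean across all populations: larger populations
    # exert proportionally more influence on the others
    total_agents = sum(len(p) for p in populations)
    coupled_mean = sum(sum(p) for p in populations) / total_agents
    # Each population updates independently of the others within a step,
    # so the per-population work parallelizes cleanly
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda p: update_population(p, coupled_mean),
                             populations))

pops = step([[0.2, 0.4], [0.8, 0.8, 0.8]])
```

Because only the scalar coupled mean is shared between populations, the communication cost per step stays small even as the number of agents grows.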


Poster Presentation

100Gb Science DMZ Network Analysis: For High Energy Physics

Pamela Perkins Computer Information Systems

Huston-Tillotson University

Advisor: Paul Brenner, University of Notre Dame, Center for Research Computing

There is an increasing need to share big data for science and research purposes on college campuses. Despite the availability of high-capacity networks, Notre Dame researchers are experiencing bottlenecks in data transfers across their 100Gb network. Our goal is to tune the cyber-infrastructure components to optimize performance of the 100Gb network while keeping information highly secure within the Science DMZ. With the help of perfSONAR, we tested the network's performance by measuring the following metrics: achievable bandwidth (throughput), round-trip time (RTT), and path utilization (via the traceroute tool). When tested, the highest single-stream point-to-point throughput reached was 5380 b/s. When several locations were tested, within the United States and overseas, I found a significant correlation between growing latency and sharply dropping bandwidth; no significant correlation was found with the number of hops. In addition, I found that the Center for Research Computing (CRC) at Notre Dame could add four additional Xrootd servers to ease congestion and use the 100Gb line more effectively.
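The latency-versus-bandwidth finding reduces to a correlation over paired measurements. The sketch below uses a hand-rolled Pearson coefficient and made-up RTT/throughput samples, not the study's actual perfSONAR data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between paired measurements."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical point-to-point tests: round-trip time in ms versus
# measured throughput in Mb/s for increasingly distant endpoints
rtt_ms = [5, 20, 80, 150, 220]
throughput = [9400, 7100, 2800, 1200, 600]
r = pearson_r(rtt_ms, throughput)  # strongly negative for this sample
```

A strongly negative r over such samples is what one would expect from TCP's throughput dependence on round-trip time, consistent with the observed bottleneck pattern.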


Poster Presentation

Optimal Portfolio Using Genetic Algorithm

Hector Reyes Mathematics and Economics

Valparaiso University

Advisor: Alexander Vyushkov, University of Notre Dame, Center for Research Computing

Investing in the stock market is one classic way of generating money. The main goal of a stock market investor is to maximize profit and minimize risk, and selecting an optimal portfolio is the most important step in that quest. A portfolio is a grouping of stocks. Optimizing a portfolio is a hard task in financial investment decisions; the toughest part is distributing the amount of money to invest in each stock of a portfolio while maximizing profit and minimizing risk. This project applied a genetic algorithm to select an optimal portfolio. A genetic algorithm generates solutions to optimization problems using techniques inspired by natural evolution. A portfolio of five stocks over five years was used to demonstrate the efficiency of the genetic algorithm. The most important steps of this method were the fitness function and the crossover. The fitness function is a mathematical formula that determines the effectiveness of a portfolio distribution; it returns a value for each distribution, and the higher the value, the better the distribution. The fitness function allowed us to rank and sort the generated distributions. Then, crossover was performed to see how the genetic algorithm converges toward the optimal solution; crossover can be thought of as reproduction. The best portfolio distributions according to the fitness function were used in crossover to generate even better distributions. Crossover was executed repeatedly, producing new generations of distributions, until the best distribution was found. The best distribution produced a twenty-five percent average return, with a computing time of eleven minutes. The genetic algorithm was written in Python and confirmed its efficiency.
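A minimal sketch of such a genetic algorithm is shown below. The mean returns and risks, the return-minus-risk fitness function, and the elitist selection scheme are all illustrative assumptions rather than the author's actual formulation:

```python
import random

random.seed(1)

# Hypothetical annualized mean returns and risks for five stocks
MEAN = [0.12, 0.09, 0.15, 0.07, 0.11]
RISK = [0.25, 0.15, 0.35, 0.10, 0.20]

def normalize(w):
    """Scale weights so the whole budget is allocated (sum to 1)."""
    s = sum(w)
    return [x / s for x in w]

def fitness(w):
    """Score a weight vector: reward expected return, penalize risk."""
    ret = sum(wi * m for wi, m in zip(w, MEAN))
    risk = sum(wi * r for wi, r in zip(w, RISK))  # simplified: no covariances
    return ret - 0.5 * risk

def crossover(a, b):
    """Child takes each weight from one parent at random, then renormalizes."""
    return normalize([random.choice(pair) for pair in zip(a, b)])

def evolve(pop, generations=50):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]           # keep the fittest half
        children = [crossover(random.choice(parents), random.choice(parents))
                    for _ in range(len(pop) - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

population = [normalize([random.random() for _ in range(5)]) for _ in range(30)]
best = evolve(population)
```

Because the fittest half survives each generation unchanged, the best fitness in the population never decreases, which is the convergence behavior the abstract describes.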


Poster Presentation

Plug and Play Membranes: Incorporating Biofouling Resistance into Charged Nanofiltration Membranes

Theodore Dilenschneider College of Engineering

Chemical and Biomolecular Engineering University of Notre Dame

Siyi Qu

College of Engineering Chemical and Biomolecular Engineering

University of Notre Dame

Advisor: William Phillip, University of Notre Dame, College of Engineering, Dept. of Chemical and Biomolecular Engineering

Biofouling, the process by which a surface becomes encrusted in microorganisms and their by-products, occurs frequently in membrane processes and reduces membrane performance and longevity. As a result, the energy demand and costs of membrane processes used in applications such as seawater desalination and the recovery of algal biofuels increase correspondingly. If the increasing global demand for clean water at low energy cost is to be met, membranes that resist fouling under myriad conditions need to be developed. Membranes derived from a poly(acrylonitrile-r-oligo(ethylene glycol) methyl ether methacrylate-r-glycidyl methacrylate) (P(AN-r-OEGMA-r-GMA)) copolymer can be easily manipulated through high-throughput ink-jet printing techniques to incorporate a patterned surface and charged functionality, but their performance suffers in conditions that promote biofouling. This work discusses the ability to incorporate fouling resistance into the pore structure through chemically patterned superhydrophobic surface chemistries without hindering the size selectivity, structure, and charged functionality of the parent membrane. The chemical environment within the pores can be easily manipulated owing to the glycidyl methacrylate (GMA) moieties. Using facile methods, the GMA moieties can be functionalized to incorporate a single functionality, or converted to ink-jet-printable azide groups that facilitate the creation of a mosaic surface. Charged membranes of a single functionality offer rejections of 81% of MgCl2 and 89% of Na2SO4 for positive and negative functionalizations, respectively, but suffer a 60% loss of permeability when exposed to a model foulant such as bovine serum albumin (BSA). Membranes functionalized with a fouling-resistant fluorinated surface exhibit no decrease in permeability when exposed to BSA. Azide groups enable high-throughput ink-jet printing for the fabrication of a patterned surface incorporating both charged and fluorinated regions. This dual functionality incorporates fouling resistance while maintaining the performance of the parent charged membrane.


Poster Presentation

Tailoring the Nanostructure of Copolymer Membranes Through the Use of Selective Swelling Agents

Jin Kim

College of Engineering University of Notre Dame

Advisor: William Phillip, University of Notre Dame, College of Engineering, Dept. of Chemical and Biomolecular Engineering

Over the last few years, nanoporous copolymer membranes have become an area of significant interest within the scientific community due to their high performance and their capacity to be rationally modified during fabrication. Such straightforward modification of the membrane structure is important because it allows researchers to readily change the nanostructure and, in turn, the performance profiles of these membranes. This work, in particular, discusses the impact of swelling agents that are incorporated into the casting solution of poly(acrylonitrile-r-oligo(ethylene glycol) methyl ether methacrylate) (P(AN-r-OEGMA)) copolymer membranes and partition selectively into the pore-forming OEGMA segments of the copolymer material. This, in turn, modulates the membrane pore size and increases the permeability of the membranes. The three swelling agents used were polyacrylic acid (PAA), polyethylene glycol (PEO), and propylene carbonate, with the amount of each swelling agent ranging from 0% to 30% of the mass of copolymer in the membrane. By varying the percentage of swelling agent added and quantifying the hydraulic permeability of the membrane fabricated from the resulting polymer solution, it was found that the flux increased most noticeably when the swelling agent made up around 15% of the overall polymer mass. Ultimately, the results of this work describe the potential of swelling agents and provide a foundation for systematically understanding their effect on the structure and performance of these membranes.

Page 18

Poster Presentation

Novel High-Throughput Screening Techniques for Membrane Evaluation

Ryan J. LaRue College of Engineering McMaster University

Advisors: William Phillip, University of Notre Dame, Dept. of Chemical and Biomolecular

Engineering and David Latulippe, University of Notre Dame

Nanoporous membranes incorporating self-assembled copolymer precursors show promise as novel separation and purification devices. Of particular interest, membranes fabricated via the self-assembly and nonsolvent-induced phase separation (SNIPS) procedure possess highly tunable, well-defined nanostructures, and further post-fabrication functionalization can precisely tailor the material surface chemistries. Synthesized using these techniques, copolymer-based membrane adsorbers selectively remove metal contaminants from aqueous solutions. However, the traditional approach to improving the performance of these membrane adsorbers has relied on evaluating membrane structures and surface chemistries in a slow, sequential fashion using a stirred-cell apparatus, which significantly limits the pace at which new membranes can be developed. To this end, we designed a microscale, high-throughput test apparatus that allows six experiments to be conducted simultaneously. Individual experiments employ a flat-sheet membrane (active area ~1.5 cm²), with mixing above the surface achieved by a tumble stirrer. This apparatus was used to evaluate copolymer (…-glycidyl methacrylate) membranes fabricated via a SNIPS procedure as effective heavy-metal adsorbers. The capture of metal ions was facilitated by amino acid moieties grafted onto the membrane, which effectively complex with the ions. A design-of-experiments approach was used to develop a fundamental understanding of the effects of amino acid type and grafting density on membrane permeability and metal removal.

Page 19

Poster Presentation

Studies of Sulfonated Triptycene-containing Random Polysulfone Copolymers for Polyelectrolyte Membrane Fuel Cells (PEMFCs) and water filtration.

Herve Twahirwa

College of Engineering Chemical and Biomolecular Engineering

Advisor: Ruilan Guo, University of Notre Dame, Dept. of Chemical and Biomolecular

Engineering

Sulfonated poly(arylene ether sulfone) copolymers with controlled nanophase-separated morphology hold great potential as alternatives to benchmark Nafion® for polyelectrolyte membrane fuel cells (PEMFCs) due to their improved proton conductivity at high relative humidity (RH) levels, as well as excellent thermal and mechanical stability. It has also been demonstrated by McGrath et al. that these copolymers exhibit high chlorine tolerance, good NaCl rejection, and good anti-protein and anti-oil water-fouling characteristics, which make them attractive candidates for water purification applications. Previous research has shown that long hydrophilic (ionic) sequences and high degrees of sulfonation in these copolymers are required to sustain the high water uptake and high water permeability needed for fuel cell and water purification applications, respectively. However, this invariably comes at the expense of excessive water-uptake swelling, resulting in deterioration of the dimensional stability and mechanical robustness of PEM membranes. Recent research in the Guo group on triptycene-containing multiblock copolymers has shown that the triptycene structural units in the hydrophobic sequences of the copolymers are very effective at restricting the swelling of polyelectrolyte membranes due to the supramolecular chain-threading and interlocking interactions induced by the triptycene units. This research builds on that work by developing a series of triptycene-containing random copolymer membranes with different degrees of sulfonation. The presence of triptycene units in the copolymers is expected to decrease water swelling and improve the dimensional stability of the membranes even when high degrees of sulfonation are employed. 1H NMR was used to characterize the copolymer structure as well as to measure the degree of sulfonation. Thin-film membranes were fabricated and acidified from all copolymers synthesized, and initial characterizations such as water uptake and swelling measurements were carried out in order to evaluate the potential of the membranes for PEMFC and water purification applications.

Page 20

Poster Presentation

The Role of Methylammonium Cations in Perovskite Solar Cells

Charles Marchant University College Dublin

Advisors: Prashant Kamat, University of Notre Dame, College of Science, Dept. of Chemistry

and Biochemistry and Anselme Mucunguzi, University of Notre Dame, College of Science, Dept. of Chemistry and Biochemistry

Organic-inorganic hybrid perovskites emerged as a promising photoconverting material for use in solar cells in 2009, when Kojima et al. fabricated a dye-sensitized perovskite solar cell with 3.8% power conversion efficiency. Since then, peak power conversion efficiency has climbed above 20% in a short period of time, a remarkable rise which is hugely promising in the search for a clean, efficient energy source. Organic-inorganic hybrids have the general formula ABX3, where A is an organic cation, B is an inorganic cation, and X is a halide. Methylammonium lead iodide, CH3NH3PbI3, is the most commonly used compound in perovskite solar cells. However, little is known so far about the role played by the organic cation CH3NH3+. This work investigates the role of the methylammonium cation by studying the photophysical properties of perovskite thin films with varying stoichiometric amounts of the cation, and the effect on the performance of perovskite solar cells.

Page 21

Poster Presentation

Graphene cathodes for membrane electrode assemblies

Juan Velazquez College of Engineering Aerospace Engineering

University of Notre Dame

Andrew Schranck College of Engineering

Civil and Environmental Engineering and Earth Sciences University of Notre Dame

Advisor: Kyle Doudrick, University of Notre Dame, College of Engineering, Dept. of Civil and

Environmental Engineering and Earth Sciences

Membrane electrode assemblies (MEAs) are used in photocatalytic fuel cells (PFCs) to decrease the distance between the photoelectrode and the counter electrode. Previously, carbon paper or cloth has been used as a conductive electrode substrate, but each of these has limitations with regard to PFC design (e.g., paper is not flexible). Additionally, carbon paper/cloth alone is not a suitable catalyst, resulting in the need for platinum. The aim of this study was to use graphene “inks” as the counter electrode on an MEA in order to reduce or eliminate platinum. Four sources of graphene were tested: graphene oxide (GO), reduced GO (r-GO), polarized graphene nanoplatelets (P-GNP), and few-layer graphene nanoplatelets (FL-GNP). Each sample was tested for its processability, conductivity, and robustness. To examine particle stability and homogeneity in solution, samples were prepared in water or organic solvents. Graphene films were investigated on glass and ion-exchange membrane (Nafion 117) substrates in order to compare the films to other surfaces such as conductive glass and carbon paper. The films were loaded using several standard methods, including drop-casting, doctor blading, and airbrushing. Linear sweep voltammetry (LSV) tests were conducted on each of the graphene films to measure conductivity. On glass, FL-GNP and r-GO attained current densities up to 34.543 µA/cm² and 0.041344 µA/cm², respectively, at an applied bias of 1.0 V, while neither P-GNP nor GO attained current densities larger than 0.0002 µA/cm² at the same bias.

Page 22

Poster Presentation

Identifying Copy Number Variations in Plasmodium using K-mers and Hidden Markov Models

Brian Bishop

PharmD and Computer Science University of Rhode Island

Advisor: Scott Emrich, University of Notre Dame, College of Science, Dept. of Biological

Sciences

The purpose of this effort is to investigate and develop new methods for identifying copy number variations (CNVs) in the malaria parasite. Malaria is a deadly disease that is extremely difficult to analyze due to a high rate of mutation combined with a high AT content. Because current drugs are becoming less effective at treating P. falciparum infections, specifically in areas of Southeast Asia, new methods need to be established to better understand emerging resistance in these parasites. Here we propose two new methods to find CNVs, which have previously been associated with drug resistance. We use both alignment-free (k-mer abundance) and traditional read-depth methods (post-alignment) to analyze two drug-selected Plasmodium falciparum genomes relative to their parent strain. These results are then compared to well-known CNV-finding algorithms such as CNVnator. These new approaches analyze genomic data in a novel way and show promise both for furthering our knowledge of malaria and for equipping researchers with new methods for identifying potential causes of drug resistance in P. falciparum.
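
As a rough illustration of the alignment-free idea, the sketch below flags k-mers whose abundance in a drug-selected read set is a multiple of their abundance in the parent strain. The reads, k-mer length, and threshold are all invented for illustration; this is not the authors' pipeline.

```python
from collections import Counter

def kmer_counts(reads, k):
    """Count every length-k substring across a collection of reads."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

def cnv_candidates(parent_reads, selected_reads, k=4, min_ratio=3.0):
    """Flag k-mers whose abundance in the drug-selected strain is at
    least min_ratio times that in the parent strain."""
    parent = kmer_counts(parent_reads, k)
    selected = kmer_counts(selected_reads, k)
    return {kmer: n / max(parent.get(kmer, 0), 1)
            for kmer, n in selected.items()
            if n / max(parent.get(kmer, 0), 1) >= min_ratio}

# Toy reads: the selected strain carries extra copies of the CCCC
# k-mer, mimicking a duplicated region.
parent = ["AAAACCCC"]
selected = ["AAAACCCCCCCC"]
hits = cnv_candidates(parent, selected)
```

In a real analysis the abundance ratios would be computed genome-wide and smoothed over windows before calling a duplication.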

Page 23

Poster Presentation

Face Landmarking: Comparative Analysis

Darien Calicott Computer Science

Northeastern State University

Patrick Flynn Computer Science and Engineering

Advisor: Patrick Flynn, University of Notre Dame, College of Engineering, Dept. of Computer

Science and Engineering

Facial landmarking is defined as the detection and localization of certain key points on the face. Much like a person’s ability to recognize other humans by the dimensions of their face, face landmarking software uses algorithms to map the face of the individual(s). Face landmarking software is utilized in various domains, ranging from identifying and profiling criminals, to recognizing faces on social media networks such as Facebook, to creating entertaining visual effects such as face swapping. My research team is comparing the accuracy of three face landmarking packages: FaceX (explicit shape regression), CMR (cascaded mixture of regressors), and dlib (ensemble of regression trees). We used SGE (Sun Grid Engine) to run these three landmarking programs on a data set of 1,781 frontal face images and compared the consistency of landmark locations. Preliminary results indicate that facial landmark localizations are inconsistent among the techniques and tend to be unstable along the jawline.
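
The consistency comparison can be sketched as a per-landmark spread across methods: for each landmark, average the pairwise distances between the different packages' predicted locations. The coordinates below are invented, not output from FaceX, CMR, or dlib.

```python
import math

def landmark_spread(predictions):
    """predictions: one landmark list per method, each a list of (x, y)
    points in a shared ordering. Returns the mean pairwise distance
    between the methods' localizations for each landmark -- a simple
    consistency score (larger = less stable)."""
    n_methods = len(predictions)
    spreads = []
    for j in range(len(predictions[0])):
        total, pairs = 0.0, 0
        for a in range(n_methods):
            for b in range(a + 1, n_methods):
                (xa, ya), (xb, yb) = predictions[a][j], predictions[b][j]
                total += math.hypot(xa - xb, ya - yb)
                pairs += 1
        spreads.append(total / pairs)
    return spreads

# Three hypothetical methods: all agree on landmark 0 (say, an eye
# corner) but scatter horizontally on landmark 1 (say, a jawline point).
methods = [
    [(10.0, 10.0), (50.0, 90.0)],
    [(10.0, 10.0), (54.0, 90.0)],
    [(10.0, 10.0), (58.0, 90.0)],
]
scores = landmark_spread(methods)
```

A high spread along the jawline, as in the preliminary results above, would show up as large values for the jawline indices.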

Page 24

Poster Presentation

Dynamic Network Alignment

Dominic Critchlow Physics and Computer Science Austin Peay State University

Vipin Vijayan

College of Engineering Computer Science and Engineering

University of Notre Dame

Tijana Milenkovic Computer Science and Engineering

Advisor: Tijana Milenkovic, University of Notre Dame, College of Engineering, Dept. of

Computer Science and Engineering

The internet, navigation systems, power grids, social interactions, cells, etc., are complex real-world systems that can be modeled with networks. A network consists of nodes (e.g., computers on the internet, people in a society, or genes in a cell) and interactions (or relationships) between the nodes. Network science, which deals with analyses of complex real-world networks, is an incredibly exciting field. One type of network analysis is network alignment, which aims to compare two or more networks in order to find regions of topological or functional similarity. In our study, network alignment is defined as an injective (one-to-one) mapping between the nodes of the compared networks. Network alignment can only be solved approximately, as it falls into the category of NP-hard problems. Existing (approximate) network alignment methods allow for studying only static networks. However, most real-world complex systems are dynamic. With the recent increase in available temporal network data about such dynamic systems, it seems only natural to use the time information in the process of network alignment. Hence, we aim to generalize an existing state-of-the-art method for alignment of static networks to its dynamic counterpart, in order to allow for aligning dynamic networks.
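
The injective-mapping formulation can be made concrete with a toy "dynamic edge conservation" score: given a one-to-one node mapping, count how many timestamped interactions in one network are matched by the mapped interaction at the same time step in the other. The function and event format below are illustrative sketches, not the method being generalized.

```python
def conserved_events(mapping, events1, events2):
    """mapping: an injective dict from nodes of network 1 to nodes of
    network 2. events1, events2: sets of (u, v, t) temporal edges
    (undirected, discretized time). Returns the fraction of events in
    network 1 whose mapped counterpart occurs in network 2 at the same
    time step -- a toy dynamic edge-conservation score."""
    norm2 = {(min(u, v), max(u, v), t) for (u, v, t) in events2}
    hits = 0
    for (u, v, t) in events1:
        mu, mv = mapping.get(u), mapping.get(v)
        if mu is not None and mv is not None and \
           (min(mu, mv), max(mu, mv), t) in norm2:
            hits += 1
    return hits / len(events1)

# Invented example: the ('a','b') interaction at time 0 is conserved,
# but the ('b','c') interaction happens at a different time in net 2.
mapping = {"a": "x", "b": "y", "c": "z"}
events1 = {("a", "b", 0), ("b", "c", 1)}
events2 = {("x", "y", 0), ("y", "z", 2)}
score = conserved_events(mapping, events1, events2)
```

A static aligner would count both edges as conserved; the time stamps are exactly what the dynamic generalization adds to the objective.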

Page 25

Poster Presentation

Visual Analysis of IEEE VAST Challenge 2016 Data

Ian Turk Computer Science

University of Wisconsin Eau Claire

Xinan Zhou

Lei Shi University of Notre Dame

Jun Tao

Computer Science and Engineering University of Notre Dame

Matthew Sinda

Central Michigan University

Advisors: Chaoli Wang, University of Notre Dame, College of Engineering, Dept. of Computer Science and Engineering and Qi Liao

The IEEE VAST Challenge 2016 is a competition intended to push researchers to break new ground in visual analytics. The challenge provides competitors with fourteen days of simulated data from a hypothetical laboratory building, including variables such as temperature, gas concentration, electrical power usage, and employee positions in various zones and floors. The researchers must use visual analytics to identify patterns in the data, connect related events, and predict possibly harmful occurrences. Our team decided to achieve this by using a dynamic, interactive visualization with two parts. The first part is a view centered on the scalar data, showing a snapshot of each variable in each zone at a particular time as a dot, with color representing value and radius representing distance from the average, as well as a traditional line graph for viewing trends around the snapshot. The second part is a view focused on the location-based data, and can be configured to show a heatmap of a particular variable, the relative time spent by specific employees in a particular zone, or both. Both views feature automatic anomaly detection using cross-correlation, integrals, and Kullback-Leibler divergence. These tools allowed us to find the patterns and anomalies required by the VAST Challenge.
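
One of the anomaly measures mentioned, Kullback-Leibler divergence, can be sketched in a few lines: compare a zone's histogram of sensor readings on one day against a baseline day, and flag large divergences. The histograms below are invented examples, not challenge data.

```python
import math

def kl_divergence(p, q, eps=1e-9):
    """Kullback-Leibler divergence D(p || q) between two histograms.
    Counts are normalized inside; eps guards against empty bins."""
    sp, sq = float(sum(p)), float(sum(q))
    return sum((x / sp) * math.log((x / sp + eps) / (y / sq + eps))
               for x, y in zip(p, q) if x > 0)

# Invented histograms of a zone's temperature readings: a baseline
# day, a similar day, and a day whose distribution shifted noticeably.
baseline = [10, 20, 40, 20, 10]
normal = [12, 18, 42, 18, 10]
shifted = [40, 30, 20, 5, 5]
d_normal = kl_divergence(baseline, normal)
d_shifted = kl_divergence(baseline, shifted)
```

Ranking zone-days by this divergence gives a simple shortlist of candidates for visual inspection in the interactive views.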

Page 26

Poster Presentation

Scalability Testing for the Molecular Dynamics Application AWE-WQ up to 10,000 Cores via HTCondor and the Google Compute Engine

Lydia Brothers

Physics and Computer Science University of Kentucky

Advisor: Douglas Thain, University of Notre Dame, College of Engineering, Dept. of Computer

Science and Engineering

Simulating the conformational dynamics of protein molecules is essential for developing probabilistic links between a protein’s shape and its function. Computationally, molecular dynamics is a challenging field due to the large time scales needed to make direct observations of kinetic transitions. AWE-WQ is a scalable molecular dynamics application using the distributed-system framework Work Queue (WQ) and the parallelizable Accelerated Weighted Ensemble (AWE) algorithm. To simulate complex molecules more efficiently, AWE-WQ was extended to accommodate approximately 10,000 cores using HTCondor and the Google Compute Engine. To visualize the bottlenecks of the system, a suite of graphical tools was developed for AWE-WQ, and the functionality to specify and monitor resources was added. While incrementally scaling up the application, performance metrics such as speed, efficiency, and utilization were evaluated and maintained by monitoring the distribution of task (simulation) execution times, master/worker availability, and network, CPU, disk, and memory usage.
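
As a hedged sketch of one such metric, utilization can be estimated from task execution intervals. The simple accounting below (at most one task per core at a time, a fixed worker pool) is an assumption for illustration, not AWE-WQ's actual instrumentation.

```python
def utilization(task_intervals, run_start, run_end, n_workers):
    """Fraction of the pool's available core time spent executing
    tasks. task_intervals: (start, end) pairs, one per task; assumes
    at most one task per core at a time (a simplification)."""
    busy = sum(end - start for start, end in task_intervals)
    available = (run_end - run_start) * n_workers
    return busy / available

# Invented run: two workers over a 10-second window, with three tasks
# accounting for 12 seconds of execution time in total.
u = utilization([(0, 5), (1, 4), (6, 10)], 0, 10, 2)
```

Plotting this quantity over a scaling run is one way the bottlenecks mentioned above become visible: utilization drops when workers sit idle waiting for the master or the network.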

Page 27

Poster Presentation

Season of Birth and Later Outcomes: Old Questions, New Answers, Bigger Data

Stephen Salisbury Economics and Sociology

Indiana University South Bend

Advisor: Kasey Buckles, University of Notre Dame, College of Arts & Letters, Dept. of Economics

In July 2013, Dr. Kasey Buckles and Dr. Daniel Hungerman published a study in The Review of Economics and Statistics using data from live birth certificates and the U.S. Census. While there was a significant amount of previous research showing a link between season of birth and later outcomes, this study showed that the characteristics of the women giving birth to these children vary with season as well. They discovered that “women giving birth in the winter look different from other women: they are younger, less educated, and less likely to be married” (Buckles & Hungerman, 2013, p. 711). The challenge now is to see whether similar results are found when the same variables are considered using a data set that includes many different countries from around the globe. Their new working hypothesis is that the relationship between season of birth and family background varies systematically across countries. In order to test this hypothesis, they need a new data set that provides information from multiple countries and regions but has variables similar to the ones used in their domestic research. My job this summer has been to evaluate one possible data source to determine whether it had the minimum required variables of parental SES, month of birth, and child outcomes. Once this was determined, the next step was to merge the many hundreds of files containing each country's specific sample set into one large sample set that could be used to analyze this new hypothesis. In addition, outside variables were compiled and added from several other data sets to offer more insight into possible effects on outcomes. This poster illustrates the multi-layered data-analytics process required to prepare a refined new data source to support this research.
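
The merging step can be sketched with the standard library: read each country's file, keep only the variables common to every file, and tag each row with its source. The column names and the keep-only-common-variables rule are illustrative assumptions, not the project's actual schema.

```python
import csv
import io

def merge_country_files(files):
    """Concatenate per-country samples into one list of records,
    keeping only the variables common to every file and tagging each
    row with its source country."""
    tables = {c: list(csv.DictReader(f)) for c, f in files.items()}
    common = set.intersection(*(set(rows[0]) for rows in tables.values()))
    merged = []
    for country, rows in tables.items():
        for row in rows:
            record = {k: row[k] for k in common}
            record["country"] = country
            merged.append(record)
    return merged

# Two tiny in-memory "files" with hypothetical column names; file A
# carries an extra variable that file B lacks, so it is dropped.
files = {
    "A": io.StringIO("birth_month,mother_educ,extra\n1,12,x\n"),
    "B": io.StringIO("birth_month,mother_educ\n7,16\n"),
}
records = merge_country_files(files)
```

With hundreds of real files, the same pattern applies; the hard part, as the abstract notes, is harmonizing the variables so that "common" columns actually mean the same thing across countries.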

Page 28

Poster Presentation

Iron (III) Oxide/Graphene Nanostructures for Solar Water Splitting

Clare Murphy College of Arts & Letters

Environmental Sciences and Greek University of Notre Dame

Advisor: Ian Lightcap, University of Notre Dame, College of Science

Iron(III) oxide has promising potential as a photocatalyst for solar-driven water splitting. Fe2O3 is a viable photocatalyst due to its chemical and morphological stability, band gap features, and natural abundance. Despite these promising features, Fe2O3 is rarely used as a photocatalytic material due to short charge-carrier diffusion lengths, which limit the catalytic efficiency. By depositing iron oxide nanoparticles on a graphene substrate, this project aims to increase the efficiency of solar water splitting using Fe2O3. The project attempts to grow nanoparticles smaller than 5 nm such that the charge-transport distance is short enough to limit photoelectrode resistivity and recombination. Additionally, growth of the nanoparticles on a graphene substrate should improve photocatalytic performance because graphene is a high-efficiency charge collector with high surface area and conductivity. Through this method of nanoparticle deposition, future studies may prove iron oxide an effective material for converting water into solar fuel.

Page 29

Poster Presentation

Magneto-electric nanoparticles to specifically target cancer cells in vitro

Andrea Oviedo College of Engineering

University of Notre Dame

Advisor: Tiffanie Stewart, University of Notre Dame, NDnano

To this day, targeted drug delivery in cancer treatment remains a formidable challenge. However, nanomedicine is paving the way for externally controlled, on-demand release of drugs using magneto-electric nanoparticles (MENs). These unique nanoparticles exhibit a nonzero magnetoelectric effect, which can not only amplify a low-intensity external magnetic field but also generate an additional local electric field. MENs take advantage of the different membrane electric properties of cancer cells and healthy cells: tumor cells have a substantially lower electric potential and therefore display a lower threshold for nanoelectroporation. When a weak magnetic field is applied remotely, MENs generate a local electric field that changes the nanoporosity of the cell membrane, allowing them to penetrate into cancer cells (a process known as nanoelectroporation) while leaving healthy cells undisturbed. MENs are thus capable of specifically “targeting” cancer cells as drug delivery carriers while sparing healthy cells in the process. This study will focus on the specificity of CoFe2O4@BaTiO3 MEN uptake in four different cell lines, two healthy and two cancerous, under externally applied DC magnetic fields of varying intensities. All cell lines will be treated with fluorescently functionalized MENs under varying magnetic field gradients. One experiment will compare DOV-13 ovarian cancer cells versus LP9 cells, which mimic the healthy lining of the intraperitoneal cavity. Another will test specificity between glioblastoma brain tumor cells and HUVEC endothelial cells, which mimic cells lining the blood-brain barrier. The final experiment will co-culture ovarian cancer spheroids in a collagen matrix with healthy LP9 cells. We will measure the penetration of MENs into cells by optical fluorescence imaging. We anticipate that MENs will penetrate cancer cells more effectively than healthy cells at relatively low magnetic field gradients.

Page 30

Poster Presentation

Dynamic Monitoring for Lobster Using the ELK Stack

Anna Yannakopoulos Physics and Computational Science

Florida State University

Advisor: Kevin Lannon, University of Notre Dame, College of Science, Dept. of Physics

Lobster is a program designed to opportunistically make use of a dynamically changing pool of heterogeneous resources to compute nontrivial tasks. The complexity of managing these resources necessitates a robust monitoring system to allow the user to identify and act upon a wide variety of failure modes. Currently, Lobster’s built-in monitoring produces a selection of static plot images that cannot be modified while Lobster is running. The ELK software stack is an open-source monitoring system comprising a data collection pipeline (Logstash), a distributed search and analytics engine (Elasticsearch), and an interactive visualization web application (Kibana). The implementation of an ELK monitoring interface in Lobster allows for the dynamic creation and modification of plots and other visualizations for an ongoing Lobster run as new failure modes are encountered. Dynamically defined time and content filters can be applied to all log data and visualizations simultaneously, allowing specialized monitoring and isolation of interesting events. Unlike the current monitoring solution, visualizations are not limited to a specific recorded subset of the log data; ELK monitoring permits the execution of visualization queries and aggregations on any part of the complete log file, and data from separate log files can be compared in the same visualization. Adding new sources of log data to the ELK monitoring system, including sources such as network information logs that do not originate from Lobster but are highly relevant to Lobster’s performance, will in most cases seamlessly augment previously monitored log data. The dynamic nature of ELK monitoring, in comparison to the static nature of the current Lobster-controlled monitoring, offers new and powerful tools for quickly discovering and solving problems as they occur. Improving the efficiency and speed of Lobster debugging leaves more time and resources for useful scientific computations.
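
For orientation, documents headed into Elasticsearch travel as newline-delimited JSON; the sketch below renders parsed log events in the bulk-API shape of alternating action and document lines. The index name and event fields are hypothetical, and in the actual setup Logstash performs this step.

```python
import json

def to_bulk(events, index="lobster-logs"):
    """Render parsed log events as an Elasticsearch bulk-API payload:
    alternating action and document lines of newline-delimited JSON."""
    lines = []
    for event in events:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(event))
    return "\n".join(lines) + "\n"

# One invented Lobster-style event parsed out of a log line.
events = [{"ts": "2016-07-27T10:00:00", "task": 42,
           "status": "failed", "category": "stage-out"}]
payload = to_bulk(events)
```

Once events are structured this way, Kibana's time and content filters can slice them arbitrarily, which is exactly the flexibility the static plots lack.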

Page 31

Poster Presentation

Deep Learning for Particle Physics

Colin Dablain College of Science

Physics

Advisor: Kevin Lannon, University of Notre Dame, College of Science, Dept. of Physics

The particle colliders employed by high energy physicists to probe the properties of the fundamental constituents of matter produce an astounding volume of collisions and, consequently, data. Identifying the particles in a particular collision involves solving signal-background classification problems that are intractable for humans. To solve these classification problems, various machine learning methods are typically employed, though no particular method has yet proved markedly superior to the others. Recent work by machine learning researchers in the field of deep learning, particularly on deep neural networks, has produced state-of-the-art results on image recognition and natural language processing tasks. The scope of my work has been to use simulated collision data from the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) to train deep neural networks, with the goal of producing statistically significant results on a dataset of collisions in which proton-proton collisions produce two top quarks and a Higgs boson.
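
To make the signal-background framing concrete, here is a minimal one-hidden-layer network in plain Python trained on an invented two-feature toy problem. It is a pedagogical sketch of the technique, not the CMS analysis or its architecture; all data and hyperparameters are made up.

```python
import math
import random

random.seed(1)

def train_mlp(data, hidden=4, lr=0.5, steps=5000):
    """One-hidden-layer network (tanh hidden units, sigmoid output)
    trained with plain SGD on a binary signal/background task."""
    n_in = len(data[0][0])
    w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [random.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0

    def forward(x):
        h = [math.tanh(sum(w * xi for w, xi in zip(w1[j], x)) + b1[j])
             for j in range(hidden)]
        z = sum(w * hj for w, hj in zip(w2, h)) + b2
        return h, 1.0 / (1.0 + math.exp(-z))

    for _ in range(steps):
        x, y = random.choice(data)
        h, p = forward(x)
        d_out = p - y                      # dLoss/dz for cross-entropy
        for j in range(hidden):
            d_h = d_out * w2[j] * (1.0 - h[j] ** 2)
            w2[j] -= lr * d_out * h[j]
            b1[j] -= lr * d_h
            for i in range(n_in):
                w1[j][i] -= lr * d_h * x[i]
        b2 -= lr * d_out
    return lambda x: forward(x)[1]

# Invented "events": two features, labeled signal iff their sum > 1.
points = [(random.random(), random.random()) for _ in range(200)]
data = [(p, 1.0 if p[0] + p[1] > 1.0 else 0.0) for p in points]
predict = train_mlp(data)
accuracy = sum((predict(x) > 0.5) == (y > 0.5) for x, y in data) / len(data)
```

The real analysis differs mainly in scale: many kinematic input features per simulated collision, deeper networks, and significance measured statistically rather than as raw accuracy.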

Page 32

Poster Presentation

Using Deep Neural Networks to Analyze Collisions in High Energy Physics

Matthew Drnevich College of Science

Mathematics and Physics University of Notre Dame

Advisor: Kevin Lannon, University of Notre Dame, College of Science, Dept. of Physics

At the forefront of experimental particle physics is the Large Hadron Collider in Geneva, Switzerland. There, protons collide at nearly the speed of light at a rate of over 40 million collisions per second. Due to computing and resource limitations, the raw data that these collisions produce cannot be recorded at such a rate. Furthermore, within this data is an incredibly small subset of collisions that are particularly useful for helping us prove or discover theories in physics. To use an analogy, imagine that each grain of sand on a beach is one collision and the goal is to find the rare set of grains that have a specific orange-red tint. The detector uses analysis methods to first remove all of the sand that is obviously not red or orange. Then, the remaining sand is analyzed by more complex analysis tools on a larger computing grid to isolate the sand with both red and orange characteristics. Finally, an analysis method is needed to discern one reddish-orange grain from another. Due to recent advances in machine learning, there is reason to believe that deep learning techniques, such as neural networks, could improve the accuracy of isolating such events. One such exotic collision, the production of a Higgs boson paired with a top quark, is of particular interest to us. This research into a better algorithm should enable us to analyze these collisions more efficiently and accurately than current methods allow.

Page 33

Poster Presentation

Analysis And Stellar Label Approximations of Carbon-Enhanced Metal-Poor Stars

Travis Hodge Physics and Computer Science Austin Peay State University

Timothy Beers

Physics

Vinicius Placco Physics

Advisors: Vinicius Placco, University of Notre Dame, College of Science, Dept. of Physics and

Timothy Beers, University of Notre Dame, College of Science, Dept. of Physics

A subset of the Carbon-Enhanced Metal-Poor (CEMP) stars, the CEMP-no stars, are believed to be direct descendants of the very first generation of stars born after the Big Bang. Determining reliable stellar atmospheric parameters (metallicity, surface gravity, and effective temperature) for low-metallicity stars in the Milky Way galaxy is an integral part of identifying candidate CEMP stars. Using a grid of synthetic stellar spectra, it has been shown that The Cannon, a new machine learning technique, can quickly and accurately determine the stellar parameters with relatively low scatter (~100 K in effective temperature, 0.264 in log g, and 0.268 in [Fe/H]), similar to what has been determined using the SEGUE Stellar Parameter Pipeline (SSPP). This technique has a number of direct applications, including determination of parameters in large databases (~4 million spectra) from LAMOST, as well as the ~500,000 stellar spectra observed by SDSS/SEGUE. These two approaches can be compared, and possibly averaged, which will help confidently identify additional CEMP stars for high-resolution spectroscopic follow-up.

Page 34

Poster Presentation

Using HP Vertica for Particle Physics Data Analysis

Matthew Link Mathematics and Physics

Calvin College

Jason Slaunwhite Physics

University of Notre Dame

Advisor: Kevin Lannon, University of Notre Dame, College of Science, Dept. of Physics

In 1994, development began on ROOT, a data analysis framework that would become the standard in particle physics. Today, ROOT retains its position as the particle physicist’s tool for analyzing data. It allows researchers to make progressively more complex selections of the data. At each stage, a criterion is applied to select interesting subsets of the data while the rest is discarded. The early criteria are simple and known to work; they do not have to be repeated. The late criteria are where the physics research happens, where researchers apply their knowledge and creativity to push the bounds of physics. However, no one knows in advance whether the late criteria will work; they have to be repeatedly tested and revised. Each test takes significant time as ROOT spins through the data. The researchers wait. Time spent spinning through data is time not spent thinking about physics. For physics researchers, this is not ideal. The private sector has faced similar data problems and has developed tools to handle them. Specifically, Hewlett-Packard built the Vertica Analytics Platform to quickly handle SQL queries against a database. My research focused on employing Vertica to reproduce particle physics analyses done in ROOT. We reproduced large skims of the data to produce smaller data sets. Vertica performed over six times faster than ROOT on all parts of the analysis we studied.
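
The shift from sequential ROOT-style cuts to a declarative query can be illustrated with SQLite standing in for Vertica; the table, column names, and cut values below are invented for the example.

```python
import sqlite3

# A toy "events" table: ROOT-style sequential cuts become one
# declarative SQL query (all column names and cut values invented).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER, n_jets INTEGER, met REAL)")
con.executemany("INSERT INTO events VALUES (?, ?, ?)",
                [(1, 2, 35.0), (2, 5, 120.0), (3, 6, 80.0), (4, 1, 200.0)])

# An early, cheap criterion (n_jets) and a later physics cut (met)
# expressed together; the database decides how to execute the skim.
skim = con.execute(
    "SELECT id FROM events WHERE n_jets >= 4 AND met > 50.0").fetchall()
```

Because the criteria are declarative, revising a late cut means editing one WHERE clause and re-querying, rather than re-spinning through the full data set by hand.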


Poster Presentation

Modeling the Light Curve of a Type II-P Supernova

Amber Ward Physics

Cameron University

Advisor: Peter Garnavich, University of Notre Dame, College of Science, Dept. of Physics

When massive supergiant stars explode, the result is called a Type II-P supernova (SN). These explosions show a wide range of properties and brightness, making them poor “standard candles” for measuring cosmic distances. In contrast, the explosions of white dwarf stars, called Type Ia SN, are nearly standard candles, and they have been used to show that the expansion of the universe is accelerating. Type II-P SN can be confused with the more useful Type Ia in current and future large sky surveys, so it is important to model the light curves of II-P events so their contamination in the supernova sample can be reduced. Here, we develop a modeling function to fit the light curves of SN II-P, starting with the very well-sampled Kepler Space Telescope observations of KSN2011d. The complete light-curve model is made of several individual functions that are applied during the early, middle, and late phases of the light curve. The initial rise is best fit using a physically motivated model of Rabinak & Waxman. After maximum, a half-Gaussian function matches the decline down to the “plateau phase.” A Fermi-Dirac function is used to fit the plateau and the fast decline after the plateau, and finally a linear decay is used to model the radioactive decay at late times. A chi-squared minimization is used to determine the best parameters for the functional pieces and where the transitions occur. For the initial rise, a reduced chi-squared of 1.8 was reached, which indicates a good fit. The fit could be affected by the shock breakout, which is not included in the model. Fitting the declining light curve is ongoing.
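As a rough illustration of one piece of this approach, the sketch below fits the Fermi-Dirac drop at the end of a synthetic plateau by brute-force chi-squared minimization over a parameter grid. The parameter values, noise level, and grid are invented for the example and are not the KSN2011d fit.

```python
import numpy as np

rng = np.random.default_rng(1)

def fermi_dirac(t, t_p, w):
    """Fermi-Dirac step used to model the plateau and its fast drop."""
    return 1.0 / (1.0 + np.exp((t - t_p) / w))

# Synthetic "plateau phase" data with a known drop time and width.
t = np.linspace(0.0, 150.0, 300)
flux = fermi_dirac(t, 100.0, 5.0) + 0.02 * rng.normal(size=t.size)

# Brute-force chi-squared minimization over a (t_p, w) parameter grid.
grid_tp = np.linspace(80.0, 120.0, 81)
grid_w = np.linspace(2.0, 10.0, 33)
chi2 = [[np.sum((flux - fermi_dirac(t, tp, w)) ** 2) for w in grid_w]
        for tp in grid_tp]
i, j = np.unravel_index(int(np.argmin(chi2)), (grid_tp.size, grid_w.size))
best_tp, best_w = grid_tp[i], grid_w[j]
print(best_tp, best_w)
```

In practice a gradient-based minimizer would replace the grid search, and the transition times between the functional pieces would be fit jointly with the shape parameters.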


Poster Presentation

Stock Market Simulation Via Geometric Brownian Motion

Brett Baumgartner Economics & Finance and Mathematics

Bethel College

Haiyan Liu Psychology, College of Arts & Letters

University of Notre Dame

Advisor: Zhiyong Zhang, University of Notre Dame, College of Arts & Letters, Dept. of Psychology

Predicting the stock market and the state of financial markets is a long-sought-after but unaccomplished endeavor in both finance and applied mathematics. The chaotic nature of the volatility, along with giant leaps and drops in stock prices due to global news events, poses problems for any accurate simulation of stock behavior. There is no perfect solution to this problem; however, there have been notable attempts at producing quantitative ways to simulate the stock market. Currently, a stochastic differential equation built on Brownian motion is perhaps the most widely agreed-upon option for experimenting with the stock market. Within this stochastic differential equation, there are two main parameters that we must take into account: the drift and the volatility. The drift represents the overall trend of the stock, while the volatility represents the variation about that trend. In this project, I explore stock market simulation using geometric Brownian motion together with daily stock-price data provided by Google Finance. The goal is to find an accurate way to estimate the parameters of various stocks using an optimization function, and to simulate our own stock behavior that closely mimics the actual stock.
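A minimal sketch of the workflow, using synthetic prices in place of the Google Finance data: simulate a geometric Brownian motion path, then recover the drift and volatility from the log returns using the closed-form estimates (a simple stand-in for the abstract's optimization function).

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "true" parameters; real inputs would be Google Finance closes.
dt = 1.0 / 252.0                 # one trading day in years
true_mu, true_sigma, s0 = 0.08, 0.20, 100.0
n = 5000

# GBM path: S_t = S_0 * exp((mu - sigma^2/2) t + sigma W_t)
z = rng.normal(size=n)
log_ret = (true_mu - 0.5 * true_sigma**2) * dt + true_sigma * np.sqrt(dt) * z
prices = s0 * np.exp(np.cumsum(log_ret))

# Estimate drift and volatility from observed log returns (closed-form MLE).
r = np.diff(np.log(prices))
sigma_hat = r.std(ddof=1) / np.sqrt(dt)
mu_hat = r.mean() / dt + 0.5 * sigma_hat**2

# Simulate a new path with the estimated parameters.
sim = s0 * np.exp(np.cumsum((mu_hat - 0.5 * sigma_hat**2) * dt
                            + sigma_hat * np.sqrt(dt) * rng.normal(size=n)))
print(round(float(mu_hat), 3), round(float(sigma_hat), 3))
```

Note the asymmetry the abstract alludes to: volatility is estimated precisely from daily returns, while the drift estimate carries much larger uncertainty for the same sample.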


Poster Presentation

Fitting a Model to Predict the Number of Fatalities Resulting from Terrorist Attacks

Tegan Chesney Applied and Computational Mathematics and Statistics and Psychology

University of Notre Dame

Advisors: Daniel McArtor, University of Notre Dame, College of Arts & Letters, Dept. of Psychology; Patrick Miller, University of Notre Dame, College of Arts & Letters, Dept. of Psychology; Gitta Lubke, University of Notre Dame, College of Arts & Letters, Dept. of Psychology; and Ian Campbell, University of Notre Dame, Dept. of Psychology

Based on the Global Terrorism Database collected by the National Consortium for the Study of Terrorism and Responses to Terrorism, the number of fatalities resulting from terrorist attacks appears to be increasing. The goal of this project is to assess this apparent trend in a systematic way. In our initial analysis, we fit a multiple regression model to the data that used year (1970–2015) and global region to predict fatalities in a particular year. Results from this model suggest significant differences in the rates of fatalities from terrorist attacks across regions of the world. Specifically, South Asia, the Middle East and North Africa, and Sub-Saharan Africa were found to have a significantly steeper increase per year and significantly higher rates than North America. Upon examining the residuals of our model, it became clear that the rates of fatalities from attacks across all regions follow a non-normal distribution. Because linear models assume that residuals follow a normal distribution, we used a Monte Carlo simulation study to assess whether we could still rely on the test statistics from the linear model despite this non-normality. The settings of the simulation study were based on the results of our empirical analysis. We ran the simulation 10,000 times, aggregating the estimates and p-values of the multiple regression analysis. Then, we evaluated the power for each significant predictor, the type I error rate, and the relative bias of the model. Results from the simulation study indicate that the model still performed relatively well despite the violated assumptions. We conclude that if the residual error is indeed similar to the generated error, then we can justify the use of the linear model despite the violation of the normality assumption, thereby supporting the conclusions drawn in our initial analysis.
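The logic of such a robustness check can be sketched in miniature: repeatedly generate data with skewed (non-normal) errors under a true null effect, fit a linear model, and record how often the null predictor is falsely declared significant. The sample size, error distribution, and replication count below are illustrative, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_type1(n=200, reps=2000):
    """Fraction of Monte Carlo replications in which a truly null predictor
    is declared significant despite heavily skewed residuals."""
    rejections = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        e = rng.exponential(1.0, size=n) - 1.0   # skewed, mean-zero errors
        y = 1.0 + 0.0 * x + e                    # true slope is zero
        X = np.column_stack([np.ones(n), x])
        beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
        s2 = res[0] / (n - 2)                    # residual variance estimate
        se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
        rejections += abs(beta[1] / se) > 1.96   # large-sample critical value
    return rejections / reps

rate = simulate_type1()
print(rate)
```

If the observed rejection rate stays near the nominal 5% level, the t-tests are behaving despite the violated normality assumption, which is the pattern the study reports.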


Poster Presentation

The Consequences of Missing a Correlated Predictor in Multiple Regression

Anthony Hall Biological Sciences and Chemistry

Harper College

Tegan Chesney Applied and Computational Mathematics and Statistics and Psychology

University of Notre Dame

Patrick Miller Psychology

University of Notre Dame

Daniel McArtor Psychology

University of Notre Dame

Ian Campbell Psychology

University of Notre Dame

Advisor: Gitta Lubke, University of Notre Dame, College of Arts & Letters, Dept. of Psychology

When attempting to predict patterns or future events using statistics, finding the correct multiple regression model is crucial. Omitting a predictor that is correlated with a variable already in the model can lead to errors in the estimation of the remaining effects. The goal of this project is to provide insight into, and conceptualize, the ramifications of omitting a correlated predictor from a multiple regression model by using Monte Carlo simulations in a controlled setting. In the simulations, the data-generating model contained two correlated predictor variables and a random error term. The predictors were correlated at two levels: moderately (r ≈ 0.44) and strongly (r ≈ 0.71). To provide a baseline for comparison, a condition in which the population model did not contain the second predictor variable was also simulated. In all conditions, the model fit to the data did not include the second, correlated predictor. Comparisons across the conditions allow one to see the possible effects of not including a correlated predictor in data analysis. After performing the Monte Carlo simulation with 100 iterations and aggregating the p-values, the results showed a rapid increase in both the power and the type I error of the estimated effect of the first predictor when the second, correlated predictor was omitted, with a more drastic increase at the higher level of correlation between the two predictors. Through this analysis, we show how Monte Carlo simulations can inform applied researchers about some of the consequences of failing to include a relevant, correlated predictor in a linear model, giving researchers an idea of the errors that may exist in their own fitted linear models. For example, dental research that showed a strong correlation between root canals and cancer could be the result of excluding a correlated predictor: such results could have a high type I error rate and yield a false conclusion if a relevant, correlated predictor has been left out of the model.
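The core phenomenon can be reproduced in a few lines: when a predictor correlated with x1 is omitted from the fitted model, the estimated effect of x1 absorbs part of the omitted variable's effect (asymptotically, roughly the correlation times the omitted coefficient). The sketch below uses the abstract's strong-correlation level, with invented sample sizes and effect sizes.

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_slope(x, y):
    """Slope from a simple regression of y on x (intercept included)."""
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

n, reps, r = 500, 500, 0.71        # strong correlation, as in the abstract
biases = []
for _ in range(reps):
    x1 = rng.normal(size=n)
    x2 = r * x1 + np.sqrt(1 - r**2) * rng.normal(size=n)  # corr(x1, x2) ≈ r
    y = 1.0 * x1 + 1.0 * x2 + rng.normal(size=n)
    biases.append(fit_slope(x1, y) - 1.0)   # fitted model omits x2; true beta1 = 1

mean_bias = float(np.mean(biases))
print(round(mean_bias, 2))   # expected bias ≈ r * beta2 = 0.71
```

Because the bias shifts the estimated coefficient away from its true value, significance tests on that coefficient reject far too often, which is the inflated type I error the simulations reveal.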


Poster Presentation

Therapist-Patient Interactions with the WeHab System

Thomas Hughes Psychology and Economics University of Notre Dame

Advisors: Julaine Zenk, University of Notre Dame, College of Arts & Letters, Dept. of Psychology and Michael Villano, University of Notre Dame, College of Arts & Letters, Dept. of Psychology

For several years, an interdisciplinary team from Computer Science, Mechanical Engineering, and Psychology at the University of Notre Dame has been developing a balance rehabilitation system. WeHab is currently under study for physical therapy and uses the Wii Fit balance board as an accurate, low-cost alternative to traditional medical balance systems. This study examines patient-therapist conversations in conditions with and without the WeHab system and determines whether there are any notable differences in two measures of the utterances in those conversations. The conversations were each coded twice by a pool of three coders, and reliability was determined using the G(q,k) estimator. Preliminary analysis has shown little difference in the structure of conversation between groups: there is no significant difference in how often the therapist asks the patient to do something as opposed to telling them to, and no significant difference in the proportion of speech attributable to either speaker when comparing the WeHab versus no-WeHab conditions. These results were established using an independent t-test. These early results suggest that the WeHab system does not affect the conversation between therapist and patient on these measures. Further analysis will review additional measures of the conversation to determine whether use of the WeHab system affects the nature of the conversational interaction between the patient and the therapist.
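For reference, the independent t-test used here is straightforward to compute by hand; the sketch below implements Welch's version on invented per-session proportions of therapist speech (the study's real measures and data are not shown in the abstract).

```python
import numpy as np

def t_independent(x, y):
    """Welch's independent-samples t statistic."""
    nx, ny = len(x), len(y)
    vx, vy = np.var(x, ddof=1), np.var(y, ddof=1)
    return (np.mean(x) - np.mean(y)) / np.sqrt(vx / nx + vy / ny)

# Hypothetical proportions of therapist speech per session, by condition.
wehab = np.array([0.62, 0.58, 0.65, 0.60, 0.59])
no_wehab = np.array([0.61, 0.63, 0.57, 0.64, 0.60])

t_val = t_independent(wehab, no_wehab)
print(round(float(t_val), 2))
```

A t statistic this close to zero (relative to the critical value for the appropriate degrees of freedom) corresponds to the "no significant difference" conclusion reported above.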


Poster Presentation

Integrating Cognitive Diagnostic Modeling into the AP-CAT

Katlynn Kennedy Psychology

Saint Mary’s College

Alex Brodersen Psychology

University of Notre Dame

Brendan Whitney Psychology

University of Notre Dame

Advisors: Ying Cheng, University of Notre Dame, College of Arts & Letters, Dept. of Psychology and Cheng Liu, University of Notre Dame, College of Arts & Letters, Dept. of Psychology

While technology is ever evolving, students’ interest in science, technology, engineering, and mathematics (STEM) fields is decreasing. The AP-CAT is a computerized adaptive testing (CAT) program, funded by the NSF, developed specifically for students taking AP Statistics, a subject considered critically important both in itself and to other STEM fields. The goal of the AP-CAT system is to promote student engagement with statistics by providing personalized testing through CAT and diagnostic feedback. The Summer 2016 research consisted primarily of further development of the program and interface and analysis of data from the initial pilot in Spring 2016. This research focuses on how models such as item response theory (IRT) and cognitive diagnostic models (CDMs) are used along with CAT algorithms to provide students and teachers with feedback on performance when using a CAT system. Using adaptive testing, we are able to reduce the number of items given to each student by assessing students at their own proficiency level. We are then able to use the CDM to create a diagnostic profile for each student. This may be a way to improve efficiency in understanding students’ proficiency in a subject. Further research will continue to investigate how these programs influence student engagement with statistics.
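One standard CAT ingredient, maximum-information item selection under a two-parameter logistic (2PL) IRT model, can be sketched as follows; the item bank and ability value are hypothetical, and the AP-CAT system's actual algorithms may differ.

```python
import math

def p_correct(theta, a, b):
    """2PL IRT probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item: a^2 * p * (1 - p)."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, item_bank, administered):
    """Pick the unadministered item with maximum information: one CAT step."""
    candidates = [i for i in range(len(item_bank)) if i not in administered]
    return max(candidates, key=lambda i: fisher_info(theta, *item_bank[i]))

# Hypothetical item bank: (discrimination a, difficulty b) pairs.
bank = [(1.0, -2.0), (1.5, 0.0), (0.8, 0.5), (2.0, 1.5)]
print(next_item(0.1, bank, administered={1}))
```

After each response, the ability estimate is updated and the selection repeats, which is how a CAT reaches a given measurement precision with fewer items than a fixed-form test.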


Poster Presentation

Improving Student Engagement

Kristie LeBeau Sociology

Saint Mary’s College

Alex Brodersen Psychology

University of Notre Dame

Brendan Whitney Psychology

University of Notre Dame

Advisor: Ying Cheng, University of Notre Dame, College of Arts & Letters, Dept. of Psychology

Research supports the idea that student engagement promotes academic achievement. Given such a finding, one might assume that much research has gone into studying student engagement in order to increase students’ academic achievement; however, student engagement remains a broad and vaguely defined construct. Its meta-construct dimensionality makes it difficult to measure, creating a myriad of definitions. Despite this complexity, a three-factor model often emerges among definitions, separating the concept into affective, behavioral, and cognitive engagement. The AP-CAT project, led by the Psychological and Educational Assessment Lab at the University of Notre Dame, created an online assessment system for AP Statistics that provides adaptive testing and diagnostic feedback to students. The goal of the AP-CAT project is to promote student engagement with statistics. A student engagement survey taken by the students yielded a similar three-factor model and included covariates that point to predictors that influence engagement. Multiple regression, correlation, and mean-difference tests were run in order to search for the best predictors of engagement, including factors such as teaching style, use of the AP-CAT system, and external factors such as student employment or participation in athletics. Determining what best predicts and influences engagement allows for future research into how these factors can be improved in order to increase engagement and, in turn, academic achievement.


Poster Presentation

Analyzing Personality and Other Psychological Measures in the Context of Social Networking

Katie Dwyer Psychology

Saint Mary’s College

Advisor: David Hachen, University of Notre Dame, College of Arts & Letters, Dept. of Sociology

The NetHealth study is designed to assess healthy behaviors and their relationship to social networks. These healthy behaviors, including physical activity and regular sleeping patterns, may be promoted through social ties in a network. Examining the extent to which social networks relate to healthy behaviors is the primary objective of this study. It is also of high interest to learn more about the direction of influence between networks and healthy behaviors: namely, whether social networks tend to shape behaviors over time or behaviors tend to shape social networks over time. The NetHealth research team collected data from over 700 first-year students at the University of Notre Dame to examine their social networks and healthy behaviors. To gather information about physical activity and sleeping habits, the team used data that each participant provided from an individual Fitbit account. To collect data about social interactions and networking, they used an app that monitors whom each participant texts and how often. In addition, they designed and distributed surveys that give participants the opportunity to self-report data about their social networks (through a network survey) and psychological well-being (through a basic survey). I was given the task of using my background in psychology to select psychosocial variables from the basic survey that could be considered formative elements of social networks. In considering which of these variables most heavily impact an individual psychologically and socially, I chose the Big Five personality traits along with measures of depression, self-esteem, and loneliness. I analyzed participant scores on each of the five personality factors to see how strongly they correlated with the other measures. The analysis yielded significance on 11 of the 15 correlations, all but one of which were significant at the .01 level. This presents an opportunity to draw further associations between psychosocial variables, social network dynamics, and the prevalence of health-related behaviors. The results of my analysis could potentially build on knowledge about the more in-depth workings of social network formation, which is essential to the main goals of Project NetHealth.
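The significance test behind each of those correlations can be sketched directly: Pearson's r together with its t statistic, t = r * sqrt((n - 2) / (1 - r^2)). The data below are synthetic stand-ins, not NetHealth data.

```python
import numpy as np

rng = np.random.default_rng(5)

def pearson_with_t(x, y):
    """Pearson r and the t statistic used to test its significance."""
    r = np.corrcoef(x, y)[0, 1]
    n = len(x)
    t = r * np.sqrt((n - 2) / (1 - r * r))
    return r, t

# Synthetic stand-ins for a personality factor and a well-being score.
n = 700                              # roughly the study's sample size
trait = rng.normal(size=n)
wellbeing = 0.3 * trait + rng.normal(size=n)

r, t = pearson_with_t(trait, wellbeing)
print(round(float(r), 2), round(float(t), 1))
```

With samples this large, even modest correlations produce t statistics far beyond the .01 critical value, which is consistent with most of the 15 tested correlations reaching significance.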
