Fermilab / University of Chicago / College of DuPage Center
Submitted by Anonymous
on Thursday, September 12, 2013 - 14:15
Welcome to the Fermilab/U Chicago QuarkNet Center.
Description
Center at Fermilab that includes Chicago slots
FNAL-UC Abstract 2014 - Calculating Scaling Relations of Dark Energy Survey Galaxy Clusters using Multiwavelength Data
B. Coy
G. Dzuricsko and I. McNair, De LaSalle Institute
M. Soares-Santos, Fermilab
Galaxy clusters are the most massive gravitationally bound structures known, a heated topic of research in astrophysics, and one of the primary research focuses of Fermilab's Dark Energy Survey (DES). DES is a sky survey that began collecting optical data last year and plans to cover about 5000 square degrees of the sky, making it one of the largest surveys ever undertaken. This summer, I worked under Marcelle Soares-Santos and Huan Lin analyzing galaxy cluster data taken by DES and the Sloan Digital Sky Survey (SDSS) using an unconventional cluster-finding method: Voronoi Tessellation (VT). I compared this data against several established catalogs (SZ, MCXC, XMM, and SDSS) using equations that relate their reported parameters to mass. I found that the DES masses differed from those of the other catalogs by a roughly consistent factor of two to three, indicating a systematic flaw in the calibration of the VT weak-lensing mass measurement. Furthermore, most of the VT clusters found no matches in the other catalogs, pointing to issues in the VT catalog that should be investigated further.
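The catalog comparison described above relies on power-law scaling relations that convert a reported observable (e.g. an X-ray luminosity or SZ signal) into a mass. A minimal sketch of that idea is below; the normalization, pivot, and slope values are illustrative placeholders, not the calibrations used in the study.

```python
def mass_from_observable(obs, m0=3e14, pivot=1.0, alpha=1.3):
    """Power-law scaling relation M = m0 * (obs / pivot)**alpha.
    m0 (solar masses), pivot, and alpha are placeholder values,
    not the actual fitted parameters from any catalog."""
    return m0 * (obs / pivot) ** alpha

def catalog_offset(mass_a, mass_b):
    """Ratio between two catalogs' mass estimates for the same cluster,
    used to spot a systematic calibration offset."""
    return mass_a / mass_b

# A cluster whose VT mass is 2.5x an X-ray catalog's estimate:
ratio = catalog_offset(7.5e14, 3.0e14)  # 2.5
```

Comparing many clusters this way and seeing the same ratio of two to three across the sample is what distinguishes a calibration problem from random scatter.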
FNAL-UC Abstract 2014 - The Effects of High Pressure on PT-1000 RTDs
A. Lindsay
G. Dzuricsko and I. McNair, De LaSalle Institute
H. Back, Fermilab
This summer I ran many tests on a PT-1000, a type of resistance temperature detector (RTD) whose resistance changes with temperature. The PT-1000 experiences a significant amount of self-heating at higher voltages, so its temperature reading is not always accurate. In gas, a "bubble" of heat forms around the PT-1000, whereas in liquid the heat is carried away more efficiently; this difference makes the PT-1000 an effective liquid-level monitor. Four PT-1000s are placed in a piece of equipment called the Condenser Booster to measure liquid levels, but their readings are inaccurate. I discovered that the inaccuracies can be attributed to the high pressure within the Condenser Booster (2600 psi). At high pressure, the gas pulls heat away from the PT-1000 much as a liquid would, so the PT-1000 no longer works properly as a liquid-level monitor under those conditions.
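The resistance-temperature relationship of platinum RTDs like the PT-1000 is standardized; a short sketch using the IEC 60751 Callendar-Van Dusen coefficients for temperatures at or above 0 °C (the actual sensor's calibration may differ slightly):

```python
def pt1000_resistance(t_celsius, r0=1000.0):
    """Callendar-Van Dusen equation for a platinum RTD, valid for
    0 to 850 C, using the standard IEC 60751 coefficients.
    r0 is the nominal resistance at 0 C (1000 ohms for a PT-1000)."""
    A = 3.9083e-3   # 1/C
    B = -5.775e-7   # 1/C^2
    return r0 * (1 + A * t_celsius + B * t_celsius ** 2)

# At 0 C the sensor reads its nominal 1000 ohms;
# at 100 C it rises to roughly 1385 ohms.
r_ice = pt1000_resistance(0.0)
r_boil = pt1000_resistance(100.0)
```

Self-heating enters because the excitation current dissipates power P = I²R in the element; how fast the surrounding medium removes that heat is exactly what the liquid-level trick (and its failure at high pressure) depends on.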
FNAL-UC Abstract 2014 - Measuring the Speed of Sound as a Technique to Infer the Temperature of Bubble Chambers
M. Bernstein
G. Dzuricsko and I. McNair, De LaSalle Institute
M. Crisler, Fermilab
Bubble chambers are detectors used for direct dark matter detection. They contain a superheated fluid that should form a small bubble when a dark matter particle interacts with a nucleus of an atom in the fluid. It is essential to know the internal temperature of a bubble chamber in order to know its energy threshold, the energy needed to form one of these bubbles. However, temperature probes cannot be inserted into the chamber because they provide nucleation sites, causing extraneous bubbles to form. This summer, I worked on developing a technique to infer the temperature of the fluid by measuring the peak resonance frequencies of a three-dimensional standing wave and how specific peak frequencies change with temperature.
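The link between resonance peaks and temperature runs through the speed of sound: the mode frequencies of a cavity scale linearly with the sound speed in the fluid, which in turn depends on temperature. A sketch for an idealized rectangular cavity (the real chamber geometry differs; this only illustrates the scaling):

```python
import math

def mode_frequency(c, lx, ly, lz, nx, ny, nz):
    """Resonance frequency (Hz) of acoustic mode (nx, ny, nz) in an
    idealized rigid rectangular cavity with sides lx, ly, lz (m),
    filled with a fluid whose sound speed is c (m/s)."""
    return (c / 2.0) * math.sqrt((nx / lx) ** 2 +
                                 (ny / ly) ** 2 +
                                 (nz / lz) ** 2)

def speed_from_fundamental(f, lx):
    """Invert the (1, 0, 0) mode: c = 2 * lx * f."""
    return 2.0 * lx * f
```

Because every mode frequency is proportional to c, tracking how a specific, identifiable peak drifts gives the sound speed, and a known c(T) relation for the fluid then yields the temperature without any probe inside the chamber.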
FNAL-UC Abstract 2014 - PIP-II Beamline MC Simulation for Low Energy Neutrinos
K. Donlin
G. Dzuricsko and I. McNair, De LaSalle Institute
Jong-Hee Yoo, Fermilab
This summer I simulated several different neutrino beams to help design a future neutrino beam at Fermilab. I spent much of my time researching neutrinos: their creation, oscillations, history, and physical characteristics. One of the most important things I learned is that there is still much we do not know about neutrinos. I used a variety of software, including G4Beamline, to model these simulations, test them, and collect and analyze data, and I learned how every detail in a simulation can alter the results. The hardest part for me was analyzing the data, because I had to fully understand the entire neutrino beam process before I could draw any conclusions from my results. In the next 5-10 years, the neutrino beam I simulated may be built at Fermilab. I am very fortunate to have worked on this project in its infancy, and I hope to continue working on it in the future.
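One piece of the neutrino-creation physics behind such beam simulations is simple enough to check by hand: when a positive pion decays at rest (π⁺ → μ⁺ νμ), two-body kinematics fixes the neutrino energy. A worked sketch using the standard particle masses:

```python
def numu_energy_pion_at_rest(m_pi=139.57, m_mu=105.66):
    """Muon-neutrino energy (MeV) from pi+ -> mu+ nu_mu decay at rest.
    Two-body kinematics with a massless neutrino gives
    E_nu = (m_pi^2 - m_mu^2) / (2 * m_pi).
    Masses are the PDG values in MeV/c^2."""
    return (m_pi ** 2 - m_mu ** 2) / (2.0 * m_pi)

# The well-known ~29.8 MeV monoenergetic line:
e_nu = numu_energy_pion_at_rest()
```

In a real beamline the pions decay in flight, so the lab-frame neutrino energy is boosted and angle-dependent, which is exactly the kind of detail a tool like G4Beamline tracks for every particle.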
FNAL-UC Abstract 2014 - Automating Testing of the Optical Links for the CMS
B. Hawks
G. Dzuricsko and I. McNair, De LaSalle Institute
J. Chramowicz and A. Prosser, Fermilab
With the Phase 1 CMS upgrades on such a tight schedule, the upgraded CMS components must be tested in a timely manner. However, when measuring the bit error rate (BER) of individual channels across a large quantity of seven-channel optical transmitters, it is nearly impossible to test everything and record the results by hand. Automation is therefore the only way to test these transmitters and record the data quickly and accurately. My task this summer was to automate the testing and data-recording process using LabVIEW, a graphical programming language designed for laboratory instrument control. By the end of the summer, I had automated the testing process along with other related tasks, and my programs will continue to be used by my mentors, John Chramowicz and Alan Prosser, to test the BER of optical systems.
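The control flow of such a test harness is straightforward even though the actual work was done in LabVIEW; the Python sketch below shows the loop shape only, and the channel count comes from the abstract while the error-counter interface is entirely hypothetical.

```python
NUM_CHANNELS = 7  # seven-channel optical transmitter, per the abstract

def read_error_count(channel):
    """Hypothetical stand-in for the instrument query that returns the
    number of bit errors counted on one channel; a real harness would
    talk to the BER tester hardware here."""
    return 0  # simulate a clean link

def bit_error_rate(errors, bits_sent):
    """BER = errors observed / total bits transmitted."""
    return errors / bits_sent

def test_transmitter(bits_per_channel=10 ** 9):
    """Sweep every channel of one transmitter and record its BER,
    replacing the by-hand test-and-write-down procedure."""
    results = {}
    for ch in range(NUM_CHANNELS):
        errors = read_error_count(ch)
        results[ch] = bit_error_rate(errors, bits_per_channel)
    return results
```

The payoff of automating this loop is that an entire batch of transmitters can be swept unattended, with every per-channel result logged identically.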
FNAL-UC Abstract 2014 - Performance Studies for High Speed Data Communication for CMS Tracking Trigger
S. Wang
G. Dzuricsko and I. McNair, De LaSalle Institute
T. Liu, Fermilab
In the Large Hadron Collider (LHC), bunches of protons are accelerated to extremely high speeds and collided inside a detector. In order to observe a rare event, the number of collisions needs to be very high. Luminosity is a measure of an accelerator's ability to produce collisions; the higher the luminosity, the more collisions per second. In the Compact Muon Solenoid (CMS) detector, a Level 1 Tracking Trigger is used to reconstruct the paths of the particles from a collision. It compares known paths to the incoming data stream, which requires extremely fast data communication and massive pattern-recognition power. Following a divide-and-conquer strategy, the collision data is split among 48 sections of the detector and sent to Advanced Telecommunications Computing Architecture (ATCA) crates, whose full-mesh backplane directly connects each of the ten to fourteen Pulsar II boards within a crate. Each Pulsar II board has high-speed links to the Rear Transition Module, the backplane, and a mezzanine card. This summer, I worked on testing the Pulsar II boards to study the performance of these high-speed links. Making sure all of the connections are high quality allows further steps toward the advancement of the tracking trigger.
FNAL-UC Abstract 2014 - Performance Study of a 2D Prototype of Vertically Integrated Pattern Recognition Associative Memory (VIPRAM)
S. Subramanian
G. Dzuricsko and I. McNair, De LaSalle Institute
T. Liu, Fermilab
The Compact Muon Solenoid (CMS) is one of two multi-purpose detectors at CERN used to search for evidence of new physics, including the Higgs boson and dark matter particles. Inside this detector, bunches of protons collide with each other, producing many 'events.' In order to isolate the interesting events from unwanted 'pileup' events, scientists require technology capable of processing information at very high speeds. At Fermilab, physicists and engineers interested in solving this problem have been collaborating on a hardware-based pattern recognition technology called Vertically Integrated Pattern Recognition Associative Memory (VIPRAM). In this study, we present results from rigorous testing of a 2D prototype chip, highlighting and exploring possible causes of error in the chip's ability to match and reject patterns; the primary cause of errors is excessive power consumption. We also test the chip on real data patterns and optimize its performance on these patterns based on our understanding of how power is consumed in the chip. In the future, we will extend testing to include performance dependence on voltage and specific power consumption measurements. VIPRAM technology will continue to be improved, with the production of a 3D prototype being a major next step in the chip's development.
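The core operation an associative memory chip performs is comparing an incoming event against every stored track pattern in parallel. A toy software model of that matching logic (the four-layer pattern format here is a simplification for illustration; the real chip does this comparison in dedicated hardware, which is what makes it fast):

```python
def load_patterns(patterns):
    """Store known track patterns in the pattern bank. Each pattern is
    a tuple of hit addresses, one per detector layer (toy 4-layer model)."""
    return set(patterns)

def match(bank, event_hits):
    """Return every stored pattern fully contained in the event's hits.
    event_hits is a list of per-layer hit-address lists. Hardware AM
    checks all patterns simultaneously; this loop mimics that serially."""
    per_layer = [set(layer) for layer in event_hits]
    return [p for p in bank
            if all(addr in per_layer[i] for i, addr in enumerate(p))]
```

Errors of the kind the study probes would show up in this model as a pattern failing to match when it should (a missed match) or matching when it should not (a failed rejection).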