Week 5 update: shifts and production

Last week, I learned a very important truth about the world: there’s an entire untapped market for a subgenre of horror movies that take place at government labs at 4am. There’s something very unsettling about a small group of scientists working in a dark building controlling the intensely high-powered beam that’s running below us, while outside everything is dark except the periodic flashing of red warning beacons. How exactly do I know this? Well, *drumroll*, I finally took my first official shift! As I believe I’ve mentioned, the purpose of the shifts is essentially to babysit the beam to make sure all is going well and data is being collected. Because the beam is (theoretically) running 24/7, a third of the available shifts are overnight (midnight–8am), hence my middle-of-the-night realization that this place is definitely haunted. All of the shifts were originally full for the rest of the summer, but my grad student and I managed to convince the other grad student to give up a few of his so I could get some hands-on experience. My first one ended up being a bit anticlimactic because a group was working on commissioning their polarimeter until 6am, but I still had fun. I actually had another shift the next day, which ended up being more eventful. Part of it was spent trying to calibrate the target, but a good chunk of it was spent actually gathering usable data, which bodes well for the future of the experiment.

When I wasn’t on shift, I was analyzing some of the production (which is just a fancy way of saying “usable”) data that we had collected. The experimental setup has many beam current monitors (BCMs) and beam position monitors (BPMs) that measure the current and position of the beam at different points along the beamline. To get good data, we only need about two of each, but we have to figure out which two. The idea is to look at the data from all of them and choose two that 1) give readings close to 0 (which is their calibration set point), 2) give more or less the same reading for all events with the same beam settings, and 3) agree with each other. Once I’d recovered from my science-motivated, espresso-fueled all-nighter, I took a look at some data.
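To make those three criteria concrete, here’s a rough Python/NumPy sketch of how a selection like this could work. The monitor names are real, but the readings, tolerances, and the `pick_monitors` helper are all made up for illustration; the actual analysis is done on real run data, not toy arrays like these:

```python
import numpy as np

def pick_monitors(readings, tol_mean=0.05, tol_sd=0.05, tol_agree=0.05):
    """Return pairs of monitors passing the three criteria (toy version)."""
    # 1) mean reading close to 0 (the calibration set point), and
    # 2) small spread across events (consistent readings within a run)
    good = [m for m, r in readings.items()
            if abs(np.mean(r)) < tol_mean and np.std(r) < tol_sd]
    # 3) keep pairs whose event-by-event readings agree with each other
    return [(a, b) for i, a in enumerate(good) for b in good[i + 1:]
            if np.mean(np.abs(readings[a] - readings[b])) < tol_agree]

# Invented readings: two well-behaved monitors and one sitting far from 0
readings = {
    "an_us": np.array([0.010, -0.010, 0.000]),
    "an_ds": np.array([0.012, -0.008, 0.001]),
    "0l02":  np.array([0.200, 0.300, 0.250]),
}
pairs = pick_monitors(readings)  # only the an_us/an_ds pair survives
```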

dd_mean: this plot shows the mean values (that is, the average over all events) for the differences between pairs of BCMs. If two BCMs are measuring the same thing, this difference should be 0. As you can see, an_us (analog upstream) and an_ds (analog downstream) agree very well, as their difference (represented by the blue circles) is nearly 0. In contrast, 0l02 does not agree well with either of them, as is clear from the yellow x’s and red asterisks. The digital BCMs, dg_us and dg_ds, are regarded as less reliable, so they will not be used for production data. However, the purple crosses and green squares show that the digital BCMs agree well with their analog counterparts.
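The dd_mean quantity itself is simple: average the event-by-event difference between two monitors and check that it sits near 0. A tiny sketch with invented numbers (not actual beam data):

```python
import numpy as np

# Invented event-by-event readings for two analog BCMs (arbitrary units)
an_us = np.array([0.012, -0.008, 0.004, 0.001])
an_ds = np.array([0.011, -0.007, 0.005, 0.000])

# Mean of the event-by-event differences; close to 0 -> the monitors agree
dd_mean = np.mean(an_us - an_ds)
```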

sd_rms: this plot shows the standard deviations for all of the BCMs. Standard deviation is a measure of how spread out the measurements are; a small standard deviation means that all of the measurements were more or less the same, while a large one means that the measurements differed greatly between events in the same run. We want to choose BCMs with small standard deviations. This plot is a little less conclusive than the last; for the first two runs, it looks like an_us and an_ds (blue circle and red asterisk) are performing the best. However, for the later runs, their digital counterparts are performing better and 0l02 is somewhere in the middle. I’m still not sure if other factors affected some of these runs, or if the jump from 1E-4 to 5E-4 is small enough to be insignificant, but hopefully I’ll find out this week.
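For reference, the per-run spread is just the standard deviation of one monitor’s readings over the events in that run. Another toy example with invented numbers:

```python
import numpy as np

# Invented readings for one monitor across the events of two runs
run_a = np.array([1.00, 1.01, 0.99, 1.00])  # tight spread
run_b = np.array([0.90, 1.10, 1.05, 0.95])  # wider spread

# A smaller standard deviation means the monitor behaved more stably
sd_a, sd_b = np.std(run_a), np.std(run_b)
```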