The Beginning Half

To use an EEG to determine whether educational videos are any good, we first need a setup that collects data from the headset in a meaningful way. The first week working with the Emotiv EEG was spent mostly familiarizing myself with it and making sure I knew how it worked. The headset has 16 different sensors, and the software that comes with it not only gives you the raw output from each sensor, but also provides some processed data, like levels of engagement, frustration, excitement, and meditation. The first thing I did was write a program that collects all of this data as it is generated and saves it to a file for later analysis.
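To give a rough idea of what that logging program does, here is an illustrative sketch in R (not the actual code): read_sample() is a made-up stand-in for whatever call the Emotiv SDK actually exposes, and the column set just mirrors the processed values mentioned above.

    # Illustrative logging loop. read_sample() is a hypothetical placeholder
    # for the Emotiv SDK call that returns the latest processed values.
    log_session <- function(out_file, duration_sec = 600, hz = 2) {
      con <- file(out_file, "w")
      writeLines("time,engagement,frustration,excitement,meditation", con)
      start <- Sys.time()
      while (as.numeric(difftime(Sys.time(), start, units = "secs")) < duration_sec) {
        s <- read_sample()                       # hypothetical headset read
        t <- as.numeric(difftime(Sys.time(), start, units = "secs"))
        writeLines(sprintf("%.2f,%f,%f,%f,%f", t,
                           s$engagement, s$frustration,
                           s$excitement, s$meditation), con)
        Sys.sleep(1 / hz)                        # poll a couple of times a second
      }
      close(con)
    }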

After creating this basic foundation, it was time to collect some data. While wearing the headset, I watched several online videos, some interesting, some incredibly boring, and some average, so that I would have a baseline for measurements. The first step in analyzing the data was to visualize it, so I loaded it into R and plotted the graphs. It is really hard to tell which portions are good or bad just from looking at the raw data, though, so I wrote a program to smooth it out and make it more legible. Ideally, I will be able to come up with a computational method that finds segments of the video that are potential weak points to be worked on, e.g. areas where engagement is low or frustration is high.
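As a sketch of the smoothing step, assuming the log file and column names used above (both placeholders) and a simple moving average; the real smoothing method and window size may differ:

    # Centered moving average over `width` samples; NA at the edges.
    smooth_series <- function(x, width = 30) {
      as.numeric(stats::filter(x, rep(1 / width, width), sides = 2))
    }

    d <- read.csv("session.csv")                 # hypothetical log file
    d$engagement_smooth <- smooth_series(d$engagement)
    plot(d$time, d$engagement, type = "l", col = "grey",
         xlab = "time (s)", ylab = "engagement")
    lines(d$time, d$engagement_smooth, lwd = 2)  # smoothed curve on top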

The first approach I tried simply marked any time the engagement went below a certain threshold or the frustration went above a certain threshold. The problem with this is that every person will have different resting values, so it would be very hard to turn it into a generalized solution. Also, if the value only dips below the threshold for a second or two, that is not very useful, since the values are always fluctuating.
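In code, that first approach boils down to something like this sketch (the cutoff value is an arbitrary placeholder, which is exactly the problem):

    # Flag every sample where smoothed engagement sits below a fixed cutoff;
    # frustration would be handled the same way with an upper cutoff.
    flag_low_engagement <- function(d, cutoff = 0.4) {
      d$time[!is.na(d$engagement_smooth) & d$engagement_smooth < cutoff]
    }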

My second approach was an improved version of the first one. Instead of using a fixed threshold value, I based the threshold on the mean of the entire dataset, so it works without a human needing to tweak the values. Then, instead of just marking when the signal dipped below the threshold, I computed the area between the data line and the threshold line for each dip and sorted the resulting regions by size. This way, a dip that lasts only a second or two lands at the bottom of the list and is not worth checking out, while at the top of the list we find the areas of the video where the user had low engagement for a long time. This method is effective, but it still does not find all of the problematic parts of the video.
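Here is a rough sketch of that second approach, reusing the column names from before (the real implementation may differ in the details):

    # Rank contiguous dips below a mean-based threshold by the area between
    # the threshold line and the data, biggest dips first.
    rank_low_regions <- function(time, x, thresh = mean(x, na.rm = TRUE)) {
      below  <- !is.na(x) & x < thresh
      runs   <- rle(below)
      ends   <- cumsum(runs$lengths)
      starts <- ends - runs$lengths + 1
      keep   <- which(runs$values)               # only the below-threshold runs
      out <- data.frame(
        start = time[starts[keep]],
        end   = time[ends[keep]],
        area  = sapply(keep, function(i)
          sum(thresh - x[starts[i]:ends[i]], na.rm = TRUE))
      )
      out[order(-out$area), ]                    # long, deep dips come first
    }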

After seeing the portions of the graph that the second approach did not capture, I had to come up with something else to supplement it. The parts it missed were where the engagement dropped from being consistently high to being consistently lower, but not low enough to be picked up by the area method. To detect these areas programmatically, I wrote a program that uses a sliding window to find the slope over 60-second intervals and, once again, sorts the list by the size of the slope. The resulting list has the portions where the graph drops rapidly at the top and the portions where it rises rapidly at the bottom. This list matches up nicely with the other list, while also capturing a few more important parts of the graph.
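A sketch of the sliding-window slope idea; the 60-second window comes from above, while the step size is a guess:

    # Fit a line to each 60-second window and rank windows by slope;
    # the steepest drops in engagement end up at the top.
    rank_slopes <- function(time, x, window_sec = 60, step_sec = 5) {
      starts <- seq(min(time), max(time) - window_sec, by = step_sec)
      slopes <- sapply(starts, function(s) {
        idx <- which(time >= s & time < s + window_sec & !is.na(x))
        if (length(idx) < 2) return(NA_real_)
        unname(coef(lm(x[idx] ~ time[idx]))[2])  # slope of a linear fit
      })
      out <- data.frame(start = starts, end = starts + window_sec, slope = slopes)
      out[order(out$slope), ]                    # most negative slope first
    }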

Next, I will be working with changepoint analysis in order to better analyze the data.
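I have not started on this yet, but as a rough idea of what it might look like, the changepoint package in R can flag shifts in the mean level of a signal:

    library(changepoint)
    eng <- na.omit(d$engagement_smooth)          # smoothed series from earlier
    cp  <- cpt.mean(eng, method = "PELT")        # detect shifts in the mean level
    cpts(cp)                                     # sample indices of the shifts
    plot(cp)                                     # segments overlaid on the data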

Comments

  1. Hey!

    This sounds like a great project, and definitely something that could be useful! I know I could certainly benefit from a device that recognizes my level of engagement (haha I’ve been studying for the MCAT recently so there are definitely times when I zone out and am not as engaged in the material as I would like to be).

    It’ll be interesting to see what your results are, as there could be positive implications for educational videos but also many other areas in which it is important to maintain attention but people have difficulty doing so. Can’t wait to hear more about your research!

    Thanks,
    Sarah