• cura@beehaw.org (OP) · 1 year ago

      Surveys: After each song, participants were asked to rank how much they liked the song (1 to 10), if they would replay the song (0, 1), recommend the song to their friends (0, 1), if they had heard it previously to assess familiarity (0, 1), and if they found the song offensive (0, 1). We also showed participants lyrics from the song and lyrics created by the researchers and asked them to identify the song lyrics to measure their memory of the song (0, 1).
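      In case it helps to picture the resulting data, here is a minimal sketch of how one participant's per-song responses might be encoded; the field names are mine, not the authors':

      ```python
      # Hypothetical encoding of one participant's survey responses for a single
      # song; field names and structure are illustrative, not from the paper.
      from dataclasses import dataclass

      @dataclass
      class SongSurvey:
          liking: int           # 1-10: how much they liked the song
          would_replay: int     # 0/1
          would_recommend: int  # 0/1
          familiar: int         # 0/1: heard it previously
          offensive: int        # 0/1
          lyric_recall: int     # 0/1: picked the real lyrics over the decoys

      example = SongSurvey(liking=7, would_replay=1, would_recommend=0,
                           familiar=0, offensive=0, lyric_recall=1)
      ```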

      I still think your concern is legitimate.

      • ABoxOfNeurons@lemmy.one · 1 year ago

        Memory is funny. Stuff can play in the background and become familiar without you being consciously aware of it.

        It would be possible to do this study without contamination by using completely unknown, newly released songs as the dataset and checking against future chart data for popularity, or by examining the reactions of an isolated group of people not exposed to constant musical bombardment.

  • Nari@beehaw.org · 1 year ago

    I’m increasingly convinced that the pop music of the future will consist entirely of mediocre or terrible songs written by real people – that the flaws and fuck-ups of lousy artists will suddenly seem like magic when compared to an endless stream of algorithmically generated, pristine computer bullshit.

  • cura@beehaw.org (OP) · 1 year ago

    Abstract

    Identifying hit songs is notoriously difficult. Traditionally, song elements have been measured from large databases to identify the lyrical aspects of hits. We took a different methodological approach, measuring neurophysiologic responses to a set of songs provided by a streaming music service that identified hits and flops. We compared several statistical approaches to examine the predictive accuracy of each technique. A linear statistical model using two neural measures identified hits with 69% accuracy. Then, we created a synthetic set of data and applied ensemble machine learning to capture inherent non-linearities in neural data. This model classified hit songs with 97% accuracy. Applying machine learning to the neural response to 1st min of songs accurately classified hits 82% of the time, showing that the brain rapidly identifies hit music. Our results demonstrate that applying machine learning to neural data can substantially increase classification accuracy for difficult to predict market outcomes.

    So they use synthetic data to both train and test their model; this is because the original dataset contains only 24 songs.
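    To make that concrete, here is a toy sketch of why evaluating on a set that mixes synthetic copies with their source songs can inflate accuracy. The numbers are random stand-ins and the Gaussian-jitter augmentation is my assumption for illustration; the excerpt doesn't say how the authors generated their synthetic data:

    ```python
    # Toy illustration of the leakage risk: 24 "songs" with two stand-in neural
    # measures and purely random hit/flop labels.
    import numpy as np
    from sklearn.ensemble import BaggingClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import cross_val_score, LeaveOneOut

    rng = np.random.default_rng(0)
    X = rng.normal(size=(24, 2))        # stand-in for two neural measures per song
    y = rng.integers(0, 2, size=24)     # stand-in hit/flop labels (pure noise)

    # Assumed augmentation: 20 jittered copies of every song, each copy
    # inheriting its source song's label.
    X_syn = np.vstack([X + rng.normal(scale=0.05, size=X.shape) for _ in range(20)])
    y_syn = np.tile(y, 20)

    bagged = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0)

    # Cross-validation on the synthetic set looks stellar, because every test
    # fold contains near-copies of songs the model already saw in training...
    print(cross_val_score(bagged, X_syn, y_syn, cv=5).mean())      # close to 1.0

    # ...while leave-one-out on the 24 real songs sits at chance for random labels.
    print(cross_val_score(bagged, X, y, cv=LeaveOneOut()).mean())  # roughly 0.5
    ```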

    Next, we assessed the bagged ML model’s ability to predict hits from the original 24 song data set. The bagged ML model accurately classified songs with 95.8% which is significantly better than the baseline 54% frequency (Success = 23, N = 24, p < 0.001).
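    For what it's worth, the quoted significance figure is easy to reproduce if it's an exact binomial test (the excerpt doesn't say which test was used):

    ```python
    # Exact binomial check of the quoted numbers: 23 of 24 songs classified
    # correctly against a 54% baseline hit frequency.
    from scipy.stats import binomtest

    print(23 / 24)  # 0.9583..., the reported 95.8%
    print(binomtest(k=23, n=24, p=0.54, alternative="greater").pvalue)  # well below 0.001
    ```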

    So the 97.2% accuracy is reported on the synthetic data; on the original data set, it is 95.8%. But the authors do acknowledge the limitations.

    While the accuracy of the present study was quite high, there are several limitations that should be addressed in future research. First, our sample was relatively small so we are unable to assess if our findings generalize to larger song databases.