Meta conducted an experiment where thousands of users were shown chronological feeds on Facebook and Instagram for three months. Users of the chronological feeds engaged less with the platforms and were more likely to use competitors like YouTube and TikTok. This suggests that users prefer algorithmically ranked feeds that show them more relevant content, even though some argue chronological feeds provide more transparency. While the experiment found that chronological feeds exposed users to more political and untrustworthy content, it did not significantly impact their political views or behaviors. The researchers note that a permanent switch to chronological feeds could produce different results, but this study provides only a glimpse into the issue.


I think this is bullshit. I exclusively scroll Lemmy in new mode. I scroll until I see a post I’ve already seen, then I leave. That doesn’t mean I hate it, I’m just done!

  • sculd@beehaw.org · 144 points · 1 year ago

    They don’t “hate” chronological feeds. The study says they are more likely to disengage, and that’s probably because people got what they needed from the chronological feed and logged off to do other things…

    Proving that a chronological feed is healthier.

  • haganbmj@lemmy.ml · 54 points · 1 year ago

    Less engagement is exactly what I would want. Show me my new chronological content and then I’ll get the hell out of there.

  • AngularAloe@beehaw.org · 43 points · 1 year ago

    “Spend less time once on” is different from “hate”. I hated FB’s feed so much that I was reluctant to open it in the first place, which is a completely different metric from how long I would spend once I DID open it.

  • notenoughbutter@lemmy.ml · 36 points · edited · 1 year ago

    I’d like to interject for a moment and say,

    this isn’t a test of what users like, it’s a test of how addicted users are to the platform

    the algorithm provides content in a way that turns you into a consoomer, and more often than not we actually feel guilty and sad after an hour of scrolling, realising we wasted so much time (like post-masturbation sadness)

  • CapedStanker@beehaw.org · 29 points · 1 year ago

    I mean, this isn’t that surprising as the algorithm is intended for full dopamine distribution. It’s like a fucking dopamine faucet and we are all just a bunch of apes.

  • lloram239@feddit.de · 16 points · 1 year ago

    Classic false dilemma. It was never about “algorithm vs chronological”. The problem is the lack of options: having algorithmic magic be the only way to browse content is the issue. That doesn’t mean it shouldn’t exist, or even that it shouldn’t be the default. There should just be other ways the user can switch to.

    I have that issue with YouTube, which can be really good at recommending obscure videos with a couple of hundred views that are exactly about the topic you are looking for. But there is no way for me to actively select the topic that the recommendation machine recommends; it’s all based on watch history, which can very easily get screwed up when you watch the wrong videos. Worse yet, it can’t handle multiple topics at once, so one topic will naturally end up suppressing the other. The workaround is to run multiple browser profiles, train each of them on a topic, and then be very careful about which video you watch with which profile. But that’s frankly stupid; such functionality should be in the UI. YouTube has a topic bar at the top which looks like it might help, but it’s far too unspecific to be useful. Something like “Gaming” isn’t one topic, it’s thousands of topics bundled into one; the recommendation algorithm understands each of those thousand topics individually, but the UI does not.

    Give users choice.

    • nous@programming.dev · 1 point · 1 year ago

      The algorithm is designed to keep you on the platform with endless feeds of content you might click on. And the site is designed to force you towards the algorithm as much as possible. They don’t want to give you choice about how you might want to view content, they just want you to stay on the platform.

      Personally I like putting all the new content from my subscriptions that I’m interested in into a watch list, then playing through that list and leaving when I’m done. But YouTube is making that workflow harder and harder. Just recently they moved the add-to-Watch-Later button from the video hover to a submenu, resulting in a lot more clicks to do what I used to. And it is now very hard to manage your subscriptions in bulk.

  • StarServal@kbin.social · 15 points · 1 year ago

    This is a non-issue. Provide the chronological feed and let people choose how they want to consume their content.

  • Lvxferre@lemmy.ml · 13 points · 1 year ago

    So basically the algorithm feeds an unhealthy addiction. And at no point does the study even try to address the main concerns about algorithm-based sorting: lack of transparency, unhealthiness, filter bubbles, and feeding into dichotomies like “you like apples, so YOU’RE A BANANA HATER!”.

    Better approaches put power in the hands of the users: for example, tagging content, or sorting it into communities. Perhaps not surprisingly, that’s how Mastodon and Lemmy do it, respectively.

    There’s also the matter of quality, not just personal preferences; this sort of thing does require an algorithm, but there’s nothing preventing it from being simple, customisable, and open, so users know exactly why they’re being shown something instead of something else.
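    A simple, open quality-ranking algorithm of the kind described here really can be tiny. As a purely hypothetical sketch (the formula, the `gravity` constant, and the `Post` fields are invented for illustration, not anything Lemmy or Mastodon actually uses):

```python
from dataclasses import dataclass

@dataclass
class Post:
    """Minimal stand-in for a post; these fields are illustrative."""
    upvotes: int
    downvotes: int
    age_hours: float

def score(post: Post, gravity: float = 1.8) -> float:
    """Transparent time-decay ranking: net votes over a power of age.

    Every input is user-visible, so anyone can recompute why one post
    outranks another. `gravity` (how fast old posts sink) is a made-up
    default that a platform could expose as a user setting.
    """
    net = post.upvotes - post.downvotes
    return net / (post.age_hours + 2) ** gravity

# A fresh post with modest votes outranks an older post with more votes.
fresh = Post(upvotes=10, downvotes=1, age_hours=1)
old = Post(upvotes=50, downvotes=5, age_hours=24)
assert score(fresh) > score(old)
```

    Because the whole formula fits in one line, users can see exactly why one post is shown above another, and a platform could even let them swap in their own scoring function.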

  • lichtmetzger@feddit.de · 12 points · 1 year ago

    Users of the chronological feeds engaged less with the platforms

    Because there is no endless content. You will eventually reach the end of your feed, close your browser and go to bed, sleeping well and staying healthy.

    But of course Meta prefers you doomscrolling through the entire night and feeling like shit afterwards. Just one more ad bro…

  • brilokuloj@kbin.social · 11 points · 1 year ago

    Worth keeping in mind that Facebook has manipulated data before:

    https://en.wikipedia.org/wiki/Pivot_to_video#Facebook_metrics_controversy

    In September 2016, Facebook admitted that it had reported artificially inflated numbers to its advertisers about how long viewers watched ads, leading to an overestimation of 60–80%.[44] Facebook apologized in an official statement and in multiple staff appearances at New York Advertising Week.[45][46] Two months later, Facebook disclosed additional discrepancies in audience metrics.[47][48] In October 2018, a California federal court unsealed the text of a class action lawsuit filed by advertisers against Facebook, alleging that Facebook had known since 2015 that its viewership numbers were inflated “by some 150 to 900 percent” and waited over a year before taking action to disclose or fix the problem, citing internal Facebook communications that “somehow there was no progress on the task for the year” and decisions to “obfuscate the fact that we screwed up the math.”[49][50]

  • 🦊 OneRedFox 🦊@beehaw.org · 10 points · 1 year ago

    Is it possible to design a content recommendation algorithm that isn’t game-able? As it stands right now I don’t think that algorithms are fundamentally bad, just that capitalism ruins everything.

    • Malgas@beehaw.org · 7 points · 1 year ago

      Goodhart’s Law: Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.

      Or, to paraphrase, any metric that becomes a target ceases to be a good metric. Ranking algorithms, by their nature, use some sort of quantifiable metric as a heuristic for quality.

    • SkepticElliptic@beehaw.org · 6 points · 1 year ago

      If you weighted things by clicks vs. time spent viewing, maybe? The real issue, though, is lack of moderation.

      Non-genuine accounts boost a post for whatever reason. This creates engagement, which is good for both the marketer and the platform, because they make their money through advertising. They don’t care if marketing firms are using thousands of zombie accounts to boost posts.
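      A crude version of the click-versus-viewing-time weighting suggested above might look like the following sketch; the 30-second threshold and the shape of the penalty are invented for illustration:

```python
def clickbait_penalty(clicks: int, total_view_seconds: float,
                      expected_seconds: float = 30.0) -> float:
    """Down-weight items that attract many clicks but little viewing time.

    Returns a multiplier in (0, 1]: 1.0 when average dwell time meets
    `expected_seconds` (an invented threshold), smaller when viewers
    bail out quickly, which is the classic clickbait signature.
    """
    if clicks == 0:
        return 1.0  # no data, no penalty
    avg_dwell = total_view_seconds / clicks
    return min(1.0, avg_dwell / expected_seconds)

# 1000 clicks but only 5 s average dwell time: heavily penalised.
assert clickbait_penalty(1000, 5000) < 0.2
# 100 clicks with 60 s average dwell time: no penalty.
assert clickbait_penalty(100, 6000) == 1.0
```

      Multiplying an item’s raw engagement score by this penalty would push pure clickbait down without any extra moderation signal, though bot accounts could of course fake dwell time too.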

    • conciselyverbose@kbin.social · 5 points · 1 year ago

      The question is: what do you use to measure quality?

      Engagement is useful, but it obviously leads to this. And unless people are constantly rating the content they like and don’t like (Reddit was the closest to a robust way to do that), it’s hard to learn what content they actually want.

      • Barry Zuckerkorn@beehaw.org · 6 points · 1 year ago

        In the ’80s, Pepsi was gaining quickly on Coca-Cola with the Pepsi Challenge: having tasters blindly taste Pepsi versus Coke and choose which one they liked better. Pepsi won a majority of these. But over the decades, it turned out that consumer preference for a sip of each didn’t necessarily translate to an entire can, or an entire case of cans. When asked to drink 12–20 ounces (350 to 600 ml) of the soft drink regularly, people behaved differently than they did for a 2-ounce (60 ml) taste.

        Asking consumers to rate things in the moment still suffers from the unreliability of momentary ratings of things they experience all day, day after day, especially things that tend to be associated with unhealthy addictions.

        • conciselyverbose@kbin.social · 2 points · 1 year ago

          Yeah, you’re right that even having users rate content is still limited.

          I’d argue it almost certainly has to be better than engagement, though. It also has the potential to be less punitive to people who actually are thoughtful about what they like, by treating likes as more of a classification problem and less as shoving the same trash in everyone’s face.

          It’s a hard problem, but sites aren’t even attempting to do anything but tie you to a shitty dopamine loop.

          • Barry Zuckerkorn@beehaw.org · 1 point · 1 year ago

            I’d argue it almost definitely has to be better than engagement, though.

            Totally agree. I think those who design the algorithms and measure engagement need to remember that there is a difference between immediate dopamine rush versus long term user satisfaction. User votes can sometimes be poor predictors of long term satisfaction, but I imagine engagement metrics are even less reliable.

              • Barry Zuckerkorn@beehaw.org · 1 point · 1 year ago

                That’s not a sustainable model, either. Zynga had a decent run but ended up flaming out, eventually purchased by a large gaming company.

                That’s to say nothing of the business models around gambling, alcohol, tobacco, and addictive pharmaceuticals. Low level background addiction is the most profitable, while intense and debilitating addictions tend to lead to unstable revenue (and heavy regulation).

    • CapedStanker@beehaw.org · 4 points · 1 year ago

      I don’t think the goal should be to make algorithms ungameable, because I feel like that is literally impossible with humans. The first rule of web dev or game dev is that users are going to find ways to use your site, app, software, or API in ways you never intended, regardless of how long you, or even a team of people, think about it.

      I’d rather see something where the algorithm is open and pieces of it are voted on by the users and other interested parties. Perhaps let people create and curate their own algorithms, something like playlist curation on Spotify or YouTube, but make it as transparent as possible and let people share them, kind of like how playlists are shared.

      • SafetyGoggles@feddit.de · 2 points · 1 year ago

        I’d rather see something where the algorithm is open and pieces of it are voted on by the users and other interested parties. Perhaps let people create and curate their own algorithms, something like playlist curation on Spotify or YouTube, but make it as transparent as possible and let people share them, kind of like how playlists are shared.

        Isn’t that already how it works, sans the transparency part?

        You press “like” on something you like, and the algorithm shows you more things related to what you just liked. Indirectly, you’re curating your own feed/algorithm. Or you can look at it from another angle: maybe the “like” button isn’t just for things you like, but also for things you don’t particularly like but would like to see more of.

        Then there are the people around you, your Facebook friends, whose likes also affect your feed; as you can see, the algorithm suggests things that “people who are interested in things you’re interested in are also interested in”.

  • GadgeteerZA@beehaw.org · 10 points · 1 year ago

    I’m sorry, but those “suggestions” sound wrong: a chronological feed exposes users to untrustworthy content? The point is that an algorithmic feed is unknown manipulation UNLESS the algorithm is known and published. Engaging less is also NOT a bad thing at all, unless you are the platform itself. And the inference is that an algorithm will expose users to less political and untrustworthy content? Certainly not if the platform wants to generate continuous engagement through provocation and the creation of outrage.

    But OK, it is an experiment by Meta, so let’s just leave it at that.