

I mean why was there a big surge around that time? Not referring to the later drop.


I wonder what happened around 2018ish


Yeah, I’ve always appreciated how these loyalty cards and memberships etcetera don’t insist on verifying anything. Most just want an email, so I just use a generator or make one up on the spot. It’s good too because they often have cheap deals for signing up on the day.


I’m surprised they don’t wait for you to hit the submit button before dropping this on you, so that people feel invested and motivated to do as they’re told because of the sunk cost of the time and effort spent writing the review.


I think it’s getting about the level of attention the person who started doing it hoped it would, which is about as much as possible. That attention is definitely going to run the gamut, but it’s the internet, so plenty of it is going to be hate. Every time I see it I’m split between a knee-jerk “that’s stupid” and a begrudging sense of affection for someone’s commitment to pointless contrarianism and quirkiness. Depending on the mental framing, it’s annoying at some moments and endearing at others.


I don’t know, somehow GiGa just works but GiGantic wouldn’t have. I think you instinctively made the right choice, even if you didn’t mean to.


Because of who gets to do the considering.


What pisses me off the most is that Spotlight USED to work very well.


Why 1980 specifically?
Please tell us your birthday.


At least at one point that worked, because there was a news article about a sudden surge of popularity for a YouTube video from around 2011 of a middle-aged woman looking left then right, which had successfully fooled the system. Guessing they’ve fixed it since.


How long have you been waiting? You’ll never know mwahahaha


I really like that as a term, never come across it before.


I don’t know why, but I just assumed UK for this. I have no evidence at all; it’s just a specific kind of grossness, and a manner of speech (especially calling the people who complain about the smoke idiots), that I can only hear as Chav.


I like the idea that on that list, amongst Clinton and Trump and Prince Andrew, there’s “that guy’s rats”.


This is eminently slurpable. The clue is in the name: it makes that sound because you’re not pressing your lips against the rim of the cup the way you would for a sip. You’re more sucking air in towards you over the cup, and it happens to lap up a few little waves of coffee that are thoroughly cooled on their way to your mouth.


Ok, I’m seeing this a lot and I get it, and despite my lack of expertise in the field I can sympathize with the sentiment. Perhaps those replies are answering more in the spirit of the post than the letter.
It’s just that the title asked if no one knew what this ‘graceful degradation’ concept was anymore, and the text specifically used the example that the page should be exactly the same with or without JavaScript switched on, which, without trying to be facetious, sounded kind of logically impossible.


I don’t know anything about web development, but is it really fair to say it should work exactly the same with JavaScript turned off? If that were achievable, why would it be there in the first place? I assume the graceful degradation concept is supposed to be that as you strip away more and more layers of additional functionality, the core functions remain, or at least the user gets some kind of explanation of why things don’t work.
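As a sketch of what I’m picturing, and bearing in mind I’m guessing at how this is normally done (the form, the hypothetical /search endpoint, and the element ids here are all made up, not taken from the post):
```typescript
// Progressive-enhancement sketch. The page is assumed to contain an ordinary
// HTML form, e.g. <form id="search" action="/search" method="get">, plus a
// <div id="results">. With JavaScript off, the form does a normal full-page
// submission and still works. With JavaScript on, the same form is upgraded
// to fetch and show results in place. All names here are hypothetical.

const form = document.querySelector<HTMLFormElement>("#search");
const results = document.querySelector<HTMLElement>("#results");

if (form && results) {
  form.addEventListener("submit", async (event) => {
    event.preventDefault(); // only take over once the script is actually running

    // Serialise the form fields into a query string.
    const params = new URLSearchParams();
    new FormData(form).forEach((value, key) => {
      if (typeof value === "string") params.append(key, value);
    });

    try {
      const response = await fetch(`${form.action}?${params}`, {
        headers: { Accept: "text/html" },
      });
      if (!response.ok) throw new Error(`HTTP ${response.status}`);
      results.innerHTML = await response.text();
    } catch {
      // Anything goes wrong: fall back to the plain full-page submission.
      form.submit();
    }
  });
}
```
The point being that the core function survives with the script stripped away entirely; the in-page fetching is just an extra layer on top, not something the page could be ‘exactly the same’ without.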


I don’t feel like LLMs are conscious and I act accordingly as though they aren’t, but I do wonder about the confidence with which you can totally dismiss the notion. Assuming that they are seems like a leap, but since we don’t really know exactly what consciousness is, it seems difficult to rigorously decide what does and doesn’t get to be in the category.
The usual means by which LLMs are explained not to be conscious, and indeed what I usually say myself, is something like your “they just output probability based on current context” or some variation of “they’re just guessing the next word”, but… is that definitely nothing like what we ourselves do and then call consciousness? Or, if that is definitively quite unlike anything we do, does that dissimilarity alone suffice to declare LLMs not conscious? Is ours the only possible example of consciousness, or is the process that drives the behaviour of LLMs possibly just another form, another way of arriving at consciousness? There’s evidently something that triggers an instinctual categorising: most wouldn’t classify a rock as conscious, and would find my suggestion that ‘maybe it’s just consciousness in another form than ours’ a pretty weak way to assert that it is. But then again, there’s quite a long way between a literal rock and these models running on specific rocks arranged in a particular way, producing text in a way that’s really similar to the human beings we all collectively tend to agree are conscious.
Is being able to summarise the mechanisms that underpin the behaviour whose output or manifestation looks like consciousness enough, on its own, to explain why it definitely isn’t consciousness? Because what if our endeavours to understand consciousness, and to find a biological basis for it in ourselves, bear fruit, and we can explain deterministically how brains and human consciousness work? In that case we could, if not totally predict human behaviour deterministically, then at least give a pretty good and similar summarisation of how we produce the behaviours that look like consciousness. Would we at that point declare that human beings are not conscious either, or would we need a new basis upon which to exclude these current machine approximations of it?
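Just to make concrete the mechanism I’m gesturing at, here’s a toy sketch of “output probability based on current context” (the vocabulary, the distributions, and the hard-coded rules are all made up for illustration; a real model learns its distributions over an enormous vocabulary, so nothing here resembles one):
```typescript
// A toy stand-in for "predict a probability distribution over the next token,
// given the context, then sample from it and repeat". The distributions here
// are hand-written and hypothetical; in a real model a trained network
// produces them.

type Distribution = Record<string, number>; // token -> probability (sums to 1)

function nextTokenProbabilities(context: string): Distribution {
  // Made-up rules standing in for a learned model.
  if (context.endsWith("the cat sat on the")) {
    return { " mat": 0.7, " sofa": 0.2, " roof": 0.1 };
  }
  return { " the": 0.5, " a": 0.3, ".": 0.2 };
}

// Pick one token at random, weighted by its probability.
function sampleToken(dist: Distribution): string {
  let r = Math.random();
  for (const [token, p] of Object.entries(dist)) {
    r -= p;
    if (r <= 0) return token;
  }
  return Object.keys(dist)[0]; // guard against floating-point rounding
}

// "Generation" is just: look at the context, append a sampled token, repeat.
let context = "the cat sat on the";
for (let i = 0; i < 3; i++) {
  context += sampleToken(nextTokenProbabilities(context));
}
console.log(context); // e.g. "the cat sat on the mat. the"
```
Whether executing that kind of loop, at a vastly larger scale, could ever amount to consciousness is exactly the part I don’t feel able to settle.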
I always felt that things such as the Chinese Room thought experiment didn’t adequately deal with what I was driving at above, and it seems to me that dismissals of machine consciousness on the grounds that LLMs are just statistical models that don’t know what they are doing miss a similar point. Are we sure that we ourselves are not mechanistically following complicated rules, just as neural networks and LLMs are, and that this is simply what the experience of consciousness actually is: an unconscious execution of rulesets? Before the current crop of technology renewed interest in these questions, when it all seemed a lot more theoretical and perennially decades off, I was comfortable with this uncomfortable thought. Now that we actually have these impressive models that have people wondering about the topic, I seem to be skewing more sceptical and less generous about ascribing consciousness. Suddenly the Chinese Room thought experiment, as a counter to whether these conscious-looking LLMs are really conscious, looks more convincing, but that’s not because of any new or better understanding on my part. I seem to just be shifting the goalposts when faced with something that does a better job of looking conscious than any technology I’d seen before.