Off-and-on trying out an account over at @tal@oleo.cafe due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • 3 Posts
  • 985 Comments
Joined 2 years ago
Cake day: October 4th, 2023

  • Game streaming services are never going to catch on because the capital needed to build out the infrastructure is ridiculous.

    I don’t know about “never”, but I’ve made similar arguments on here predicated on the cost of building out the bandwidth. I don’t think that we’re likely to get to the point any time soon where computers living in datacenters are a general-purpose replacement for non-mobile gaming, just because of the cost of building out the bandwidth from datacenter to monitor. Any benefit from having a remote GPU doesn’t compare terribly well with the cost of effectively needing a monitor-to-computer cable running from every concurrently-used computer to the nearest datacenter.

    But…I can think of specific cases where they’re competitive.

    First, where power is your relevant constraint. If you’re using something like a cell phone or other battery-powered device, it’s a way to deal with power limitations. I mean, if you’re using even something like a laptop without wall power, you probably don’t have more than 100 Wh of battery power, absent USB-C and an external power station or something, due to airline restrictions on laptop battery size. If you want to be able to play a game for, say, 3 hours, then your power budget (not just for the GPU, but for everything) is something like 30 W. You’re not going to beat that limit unless the restrictions on battery size go away (which…maybe they will, as I understand that there are some more-fire-safe battery chemistries out there).

    And cell phone battery constraints are typically even tighter, like 20 Wh. That means that for three hours of gaming, your power budget, because of size constraints on the phone, is maybe about 6 or 7 watts.

    If you want power-intensive rendering on those platforms, remote rendering is your only real option.
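    The arithmetic above is just battery capacity divided by play time. A quick sketch, using the ballpark battery sizes from above:

```python
def power_budget_watts(battery_wh: float, hours: float) -> float:
    """Average power draw that drains a battery of `battery_wh` watt-hours
    in exactly `hours` hours."""
    return battery_wh / hours

# ~100 Wh laptop battery (near the airline carry-on limit), 3-hour session:
print(round(power_budget_watts(100, 3), 1))  # ~33.3 W for the whole system
# ~20 Wh phone battery, same session:
print(round(power_budget_watts(20, 3), 1))   # ~6.7 W
```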

    Second, there are (and could be more) video game genres where you need dynamically-generated images, but where latency isn’t really a constraint. Like, a first-person shooter has some real latency constraints. You need to get a frame back in a tightly bounded amount of time, and you have constraints on how many frames per second you need. But if you were dynamically-rendering images for, I don’t know, an otherwise-text-based adventure game, then the acceptable time required to get a new frame illustrating a given scene might expand to seconds. That drastically slashes the bandwidth required.
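    To put rough numbers on how much relaxing the frame-time constraint helps, here’s a back-of-the-envelope comparison. Every number in it is an illustrative assumption, not a measurement; real streaming codecs send deltas between frames and do much better than per-frame compression.

```python
def stream_mbps(frame_kb: float, frames_per_second: float) -> float:
    """Average bandwidth in megabits/s for a given compressed frame size
    and frame rate (1 KB = 8 kilobits; 1000 kilobits = 1 megabit)."""
    return frame_kb * 8 * frames_per_second / 1000

# Assume ~30 KB per compressed 1080p frame (a ballpark assumption):
print(stream_mbps(30, 60))    # an FPS at 60 fps: 14.4 Mbit/s sustained
print(stream_mbps(30, 0.2))   # one illustration every 5 s: 0.048 Mbit/s
```

    Three orders of magnitude less bandwidth for the slow-paced genre, under those assumptions.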

    What I don’t think is going to happen in the near future is “gaming PC/non-portable video game consoles get moved to the datacenter”.




  • What makes this worse is that git servers are the most pathologically vulnerable to the onslaught of doom from modern internet scrapers because remember, they click on every link on every page.

    The especially disappointing thing is that, for the specific case that Xe was running into, a better-written scraper could just recognize that this is a public git repository and just git clone the thing and get all the useful code without the overhead. Like, it’s not even “this scraper is scraping data that I don’t want it to have”, but “this scraper is too dumb to just scrape the thing efficiently and is blowing both the scraper’s resources and the server’s resources downloading innumerable redundant copies of the data”.

    It’s probably just as well, since the protection is relevant for other websites, and he probably wouldn’t have done it if he hadn’t been getting his git repo hammered, but…

    EDIT: Plus, I bet that the scraper was requesting a ton of files at once from the server, since he said that it was unusable. Like, you have a zillion servers to parallelize requests over. You could write a scraper that requests one file at a time per server, which is common courtesy, and you’re still going to be bandwidth-constrained if you’re schlorping up the whole Internet. Xe probably wouldn’t have even noticed.
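    The “one request in flight per server” courtesy is cheap to implement, too. A sketch of one way to do it in a threaded crawler; the class and names here are hypothetical, not from any real scraper:

```python
import threading
from collections import defaultdict
from urllib.parse import urlparse

class PerHostSerializer:
    """Hands out one lock per hostname, so a crawler never has more than
    one request in flight against any single server, no matter how many
    worker threads it runs against the rest of the Internet."""

    def __init__(self):
        self._guard = threading.Lock()
        self._locks = defaultdict(threading.Lock)

    def lock_for(self, url: str) -> threading.Lock:
        host = urlparse(url).netloc
        with self._guard:  # defaultdict mutation isn't thread-safe on its own
            return self._locks[host]

# Usage in a worker thread (fetch() is a hypothetical download function):
#   with serializer.lock_for(url):
#       fetch(url)
```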


  • https://en.wikipedia.org/wiki/National_Helium_Reserve

    The National Helium Reserve, also known as the Federal Helium Reserve, was a strategic reserve of the United States, which once held over 1 billion cubic meters (about 170,000,000 kg)[a] of helium gas.

    The Bureau of Land Management (BLM) transferred the reserve to the General Services Administration (GSA) as surplus property, but a 2022 auction[10] failed to finalize a sale.[11] On June 22, 2023, the GSA announced a new auction of the facilities and remaining helium.[12] The auction of the last helium assets was due to take place in November, 2023.[13] Though the last of the Cliffside reserve was to be sold by November 2023, more natural gas was discovered at the site than was previously known, and the Bureau of Land Management extended the auction to January 25, 2024 to allow for increased bids.[14] In 2024 the remaining reserve was sold to the highest bidder, Messer Group.[15]

    Arguably not the best timing on that.


  • Sure. What that guy is using is actually not the most-interesting diagram style, IMHO, for automatic layout of network maps, if you want large-scale stuff, which is where the automatic layout gets more interesting. I have some scripts floating around somewhere that will generate very large network maps — run a bunch of traceroutes, geolocate IPs, dump the results into an sqlite database, and then generate an automatically laid-out Internet network map. I don’t want to go to the trouble of anonymizing the addresses and locations right now, but if you have a graphviz graph and want to try playing with it, I used:

    goes looking

    Ugh, it’s Python 2, a decade-and-a-half old, and never got ported to Python 3. Lemme gin up an example for the non-hierarchical graphviz stuff:

    graph.dot:

    graph foo {
        a--b
        a--d
        b--c
        d--e
        c--e
        e--f
        b--d
    }
    

    Processed with:

    $ sfdp -Goverlap=prism -Gsep=+5 -Gesep=+4 -Gremincross -Gpack -Gsplines=true -Tpdf -o graph.pdf graph.dot
    

    Generates something like this (example image attached in the original post):

    That’ll take a ton of graphviz edges and nicely lay them out in a non-hierarchical map while trying to avoid crossing edges and stuff. For more-complicated maps where it can’t use direct lines, it’ll use splines to curve edges around nodes. You can create massive network maps like this. Note that I was last looking at graphviz’s automated layout stuff about 15 years ago, so it’s possible that they have better layout algorithms now, but this can deal with enormous numbers of nodes and will do reasonable things with them.
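    If you want to see how sfdp scales without a pile of traceroute data, you can gin up a big random graph to feed it. A throwaway sketch (not the old script; the filename and sizes are arbitrary):

```python
import random

def random_dot(n_nodes: int, n_edges: int, seed: int = 42) -> str:
    """Emit an undirected DOT graph with random edges, suitable for sfdp."""
    rng = random.Random(seed)  # seeded so output is reproducible
    lines = ["graph big {"]
    for _ in range(n_edges):
        a, b = rng.sample(range(n_nodes), 2)  # two distinct node indices
        lines.append(f"    n{a}--n{b}")
    lines.append("}")
    return "\n".join(lines)

with open("big.dot", "w") as f:
    f.write(random_dot(500, 800))
# Then: sfdp -Goverlap=prism -Gsplines=true -Tpdf -o big.pdf big.dot
```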

    I just grabbed his example because it was the first graphviz network map example that came up on a Web search.






  • You have all your devices attached to a console server, with a serial console set up on each serial port, and, if they support accessing the BIOS via a serial console, that enabled so that you can access it remotely, right? Either a dedicated hardware console server, or some server on your network with a multiport serial card or a USB-to-multiport-serial adapter or something like that, right? So that if networking fails on one of those other devices, you can fire up minicom or similar on the console server and get into the device and fix whatever’s broken?

    Oh, you don’t. Well, that’s probably okay. I mean, you probably won’t lose networking on those devices.
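    For the “some server with a multiport serial adapter” route, ser2net is one common way to expose those ports over the network. A minimal sketch in the classic /etc/ser2net.conf format (the TCP port numbers and device paths here are made up; check your adapter’s actual device names):

```
# telnet to <host>:2001 to reach the console on the first adapter port
2001:telnet:600:/dev/ttyUSB0:115200 8DATABITS NONE 1STOPBIT
2002:telnet:600:/dev/ttyUSB1:115200 8DATABITS NONE 1STOPBIT
```

    Newer ser2net releases use a YAML configuration instead, but the idea is the same: one listening TCP port per serial device.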


  • You have remote power management set up for the systems in your homelab, right? A server set up that you can reach to power-cycle other servers, so that if they wedge in some unusable state and you can’t be physically there, you can still reboot them? A managed/smart PDU or something like that? Something like one of these guys?

    Oh. You don’t. Well, that’s probably okay. I mean, nothing will probably go wrong and render a device in need of being forcibly rebooted when you’re physically away from home.


  • You have squid or some other forward HTTP proxy set up to share a cache among all the Web-accessing devices on your network, to minimize duplicate traffic?

    And you have a shared caching DNS server set up locally, something like BIND?

    Oh. You don’t. Well, that’s probably okay. I mean, it probably doesn’t matter that your devices are pulling duplicate copies of data down. Not everyone can have a network that minimizes latency and avoids inefficiency across devices.
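    For the squid case, a minimal forward-proxy-with-cache sketch for /etc/squid/squid.conf; the subnet and cache size are placeholders for whatever your network actually uses:

```
acl localnet src 192.168.1.0/24           # your LAN
http_access allow localnet
http_access deny all
http_port 3128
cache_dir ufs /var/spool/squid 1024 16 256  # 1 GB on-disk cache
```

    One caveat: with most traffic now being HTTPS, a shared cache like this only helps for plain-HTTP traffic unless you also set up TLS interception, which is its own can of worms.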






  • I care less about speakerphone than I do Bluetooth headsets or regular phone speaker use near me.

    The speakerphone makes more noise!

    Yes, but people already have conversations between each other in public where we can hear both sides. We train ourselves to tune those out. A speakerphone is analogous to that case of another human talking.

    What I find most disruptive about phone conversations near me versus listening to two other people talking (which I can tune out) is that the speech pattern of a phone user is to say something and then pause. The problem is that that is exactly the signal that someone has said something to you, and that your attention is required. I have a harder time ignoring those one-sided conversations than tuning out a conversation where I can hear both sides, because it’s basically constantly giving my head the “you just missed something and need to respond” signal. It’s like when someone says something to you, waits for a few seconds, and then your attention gets triggered and you look up and say “what?”

    Now, the article does also reference someone turning a speakerphone way up, and that I can get, if you’re playing it louder than a human would speak. But that’s also kinda a special case.

    I think that in general, the best practice is to text, and I think that most would agree that that’s uncontroversially the best approach in public. But after that, I’d personally prefer to have speakerphone use, above headset or regular phone use.

    EDIT: One interesting approach — I mean, smartphone vendors would always like to have new reasons to sell more hardware, so if they can figure out how to make it work, they might jump on it — might be phones capable of picking up subvocalization.

    https://en.wikipedia.org/wiki/Subvocalization

    Subvocalization, or silent speech, is the internal speech typically made when reading; it provides the sound of the word as it is read.[1][2] This is a natural process when reading, and it helps the mind to access meanings to comprehend and remember what is read, potentially reducing cognitive load.[3]

    This inner speech is characterized by minuscule movements in the larynx and other muscles involved in the articulation of speech. Most of these movements are undetectable (without the aid of machines) by the person who is reading.[3]

    You’d probably also need some sort of speech synthesizer rig capable of converting that into speech.

    A conversation where someone’s using headphones/earbuds and a subvocalization-pickup phone would avoid some of the limitations of texting (not limited to text input speed on an on-screen keyboard or having to look at the display), provide for more privacy for phone users, and not add to sound pollution affecting other people in the environment.

    EDIT2: Other possibilities for the speaker side:

    Bone conduction

    This has actually been done, but has some limitations on the sound it can produce, and you need to have a device in contact with your head.

    https://en.wikipedia.org/wiki/Bone_conduction

    Bone conduction is the conduction of sound to the inner ear primarily through the bones of the skull, allowing the hearer to perceive audio content even if the ear canal is blocked. Bone conduction transmission occurs constantly as sound waves vibrate bone, specifically the bones in the skull, although it is hard for the average individual to distinguish sound being conveyed through the bone as opposed to the sound being conveyed through the air via the ear canal. Intentional transmission of sound through bone can be used with individuals with normal hearing—as with bone-conduction headphones—or as a treatment option for certain types of hearing impairment. Bones are generally more effective at transmitting lower-frequency sounds compared to higher-frequency sounds.

    The Google Glass device employs bone conduction technology for the relay of information to the user through a transducer that sits beside the user’s ear. The use of bone conduction means that any vocal content that is received by the Glass user is nearly inaudible to outsiders.[47]

    Phased-array speakers to produce directional sound

    Here, you need to have the device track its position and orientation relative to a given user’s ears, then have a phased array of speakers that each play the sound at just the right phase offset to produce constructive interference in the direction of the user’s ears — it’s beamforming with sound. Other users will have a hard time hearing the sound, which will be garbled and quieter, because of destructive interference in their direction.

    https://en.wikipedia.org/wiki/Beamforming

    Beamforming or spatial filtering is a signal processing technique used in sensor arrays for directional signal transmission or reception.[1] This is achieved by combining elements in an antenna array in such a way that signals at particular angles experience constructive interference while others experience destructive interference. Beamforming can be used at both the transmitting and receiving ends in order to achieve spatial selectivity. The improvement compared with omnidirectional reception/transmission is known as the directivity of the array.

    We more-frequently use this for reception than for transmission, with microphone arrays, but you can make use of it for transmission. You’ll need a minimum number of speakers in the array to be able to play beams of sound with constructive interference in the direction of a given number of listeners.
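    The “right phase offset” per speaker comes down to a distance-difference calculation. A delay-and-sum sketch (positions in metres; the array geometry and listener position are just illustrative):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def steering_delays(speakers, target):
    """Delay-and-sum beamforming: per-speaker delays (in seconds) so that
    all emitted wavefronts arrive at `target` simultaneously, i.e. in
    phase, producing constructive interference there."""
    dists = [math.dist(s, target) for s in speakers]
    farthest = max(dists)
    # The farthest speaker fires immediately; nearer speakers wait.
    return [(farthest - d) / SPEED_OF_SOUND for d in dists]

# A 3-element linear array aimed at a listener ~2 m away, slightly off-axis:
delays = steering_delays([(0.0, 0), (0.1, 0), (0.2, 0)], (2.0, 0.5))
```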


  • I don’t presently need to use any service that requires use of a smartphone. I’ve never had a smartphone tied to a Google/Apple account. I don’t even think that I currently have any apps from the Google Play Store on my phone — just open-source F-Droid stuff.

    It’s true that, hypothetically, you could depend on a service that requires you to use an Android or iOS app to make use of it. There are services out there that do. Lyft, for example, looks like it requires use of an app, though Uber doesn’t appear to. And I can’t speak to your specific situation, but at least where I am, in the US, I’ve never needed to use an Android or iOS app to make use of some class of service.

    But I will say that services track what people use, and if people keep using interfaces other than smartphone apps, that makes it more likely that services will keep providing those interfaces.

    I can’t promise that no one, somewhere in the world, or in some country or city or specific place, will ever be required to use an Android or iOS app, now or down the line, with no alternative. They can, at least, limit their use to that app, rather than using the platform more broadly. I don’t make zero use of my smartphone software now. Like, when I’m driving, I’ll use the open-source OSMAnd to navigate. I sometimes check for Lemmy updates when waiting in line or similar. I don’t normally listen to music while just walking around, but if I did, I’d use a music player on the phone rather than a laptop for it. But I try to shift my usage to the laptop as much as is practical.


  • I don’t intend to get rid of my smartphone, but I do carry a larger device with me, and try to use the phone increasingly as just a dumbphone and cell modem for that device to tether to.

    That may not be viable for everyone. It’s not a great solution to “I’m standing in line and want to use a small device one-handed”. iOS/Android smartphones are heavily optimized to use very little power, and any additional device means more power draw. It probably means carrying a larger case/bag/backpack of some sort with you. And most phone software is designed to be aware of cell network constraints, like acting differently depending on whether you’re connected to a cell network for data or a WiFi network for data.

    However, it doesn’t require shifting to a new phone ecosystem. It also makes any such future transition easier — if I have a lot of experience tied up in Android/iOS smartphone software, then there’s a fair bit of lock-in, since shifting to another platform means throwing out a lot of experience in that phone software. If my phone is just a dumbphone and a cell modem, then it’s pretty easy to switch.

    And it’s got some other pleasant perks. Phone OSes tend to be relatively-limited environments. They’re fine for content consumption, like watching YouTube or something, but they’re considerably less-capable in a wide range of software areas than desktop OSes. A smartphone also has limited cooling; laptops are significantly more-able to deal with heat.

    Due to very limited physical space, smartphones usually have very few external connectors — you probably get only a single USB-C connector, and no on-phone headphones jack. You’re probably looking at a USB hub or adapters and rigging up pass-through power if you want anything else. Laptops normally have a variety of USB connectors, a headphones jack, maybe a wired Ethernet connector, maybe an external display jack. Laptops also tend to have a larger battery, so it’s reasonable to use the laptop to power external devices like trackballs/larger trackpads, keyboards, etc.

    You get a larger display, so you don’t have to deal with the workarounds that smartphones use to make their small screens as usable as possible. You also don’t have the space constraints that make a touchscreen necessary, with your fingers in front of whatever you’re looking at (though you can get larger devices that do have touchscreens, if you want).

    You have far more choices on hardware, and that hardware is more-customizable (in part because it likely isn’t an SoC, though you can get an SoC-based laptop if you want). And software support isn’t a smartphone-style “N years, tied to the phone hardware vendor, at which point you either use insecure software or throw the phone out and buy a new one”.