Yes, pacman -S package
will install the version of the package that is listed in the current package database, and will not do anything to update that database.
I mean if the current system is truly ancient and doesn’t support UEFI I could imagine it ceasing to work, or something like that. But that should be easy enough to fix.
At least on Lemmy we can just block instances that do idiotic things like that and still use the rest of the service.
As a long-time Linux user I can’t say I’ve noticed big changes in the last 10 years… Maybe I’m forgetting, but when I first used Linux on a desktop I had to compile drivers from source to have working graphics acceleration and WiFi. Things have come a long way since then, but by 2015 I feel like those big things were all sorted. There are still many small things but I think most of those are unchanged, too.
Widespread adoption means you can get things like contributors who will then work on optimising battery life and other fundamentals.
I believe Red Hat uses Fedora as a kind of guinea pig to test out new stuff they want to put in RHEL. That results in stuff being put in before it’s actually ready even for a fairly bleeding-edge distro.
A specific tweet in which someone called for hotels housing refugees (due to a backlog of asylum applications causing an overflow from purpose-built facilities) to be set on fire.
when you get your cataract surgery
Not everyone gets cataracts
they can make that lens a prescription lens
What would be the point of implanting a “lens” with no optical power?
I think what you’re trying to say is that the implanted lens can be varifocal rather than monofocal.
You’re missing the point, which is that estate agents already get paid by the landlord for this. Charging renters is just extra money for doing what they already did.
And in sane places it doesn’t happen, and is often illegal.
The right tool here is tests at a level higher than machine code instructions that have been in CPUs since the 70s. Maybe TDD practice is not to test at this level, but every example of TDD sure tends to be something similar!
Either way gets me to a passing test, but I prefer the latter because it enables me to write another failing test.
But you could just write that failing test up front. TDD encourages you to pretend to know less than you do (you know that testing evenness requires more than one test, and you know the implementation requires more than some if-statements), but no-one has ever made a convincing argument to me that you get anything out of this pretence.
Tests should make changing your system easier and safer; if they don’t, it’s typically a sign things are being tested at the wrong level.
TDD is about writing (a lot of) unit tests, which are at a low-level. Because they are a low-level design-tool, they test the low-level design. Any non-trivial change affects the low-level design of a component, because changes tend to affect code at a certain level and most of those below it to some degree.
When faced with a failing test, you make it pass as simply as possible, and then you summon all your computer science / programming experience to refactor the code into something more elegant and maintainable.
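A minimal sketch of that cycle in Python, using the evenness example from this thread (the function name `is_even` and the step-by-step redefinition are mine, for illustration):

```python
# Step 1 (red): write a failing test first.
def test_two_is_even():
    assert is_even(2)

# Step 2 (green): make it pass as simply as possible.
def is_even(n):
    return True  # deliberately naive; only satisfies the one test above

# Step 3 (red again): a second test exposes the naive implementation.
def test_three_is_odd():
    assert not is_even(3)

# Step 4 (refactor/generalise): replace the fake with the real logic.
def is_even(n):
    return n % 2 == 0
```

The modulo version is, of course, what the "monkey at the keyboard" knew was the answer all along.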
Why bother making it pass “as simply as possible” instead of summoning all that experience to write something you know isn’t stupid?
TDD doesn’t promise to drive the final implementation at the unit level
What exactly does it drive, then? Apart from writing more test code than application code, with attendant burdens when refactoring or making other changes.
I don’t want unseeded randomness in my tests, ever.
Seed the tests, and making these pass becomes trivial.
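What that looks like in Python (a sketch of my own, not from the thread; `shuffle_deck` is a hypothetical function under test):

```python
import random

def shuffle_deck(seed=None):
    """Shuffle a toy 'deck' of 10 cards; deterministic when a seed is given."""
    rng = random.Random(seed)  # seeded RNG local to this call, not global state
    deck = list(range(10))
    rng.shuffle(deck)
    return deck

# With a fixed seed the output is fully deterministic, so the test
# never flakes and any failure is trivially reproducible:
def test_shuffle_is_reproducible():
    assert shuffle_deck(seed=42) == shuffle_deck(seed=42)
```

Passing the seed into a local `random.Random` instance (rather than calling `random.seed()` globally) keeps tests isolated from each other.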
As the existing reply stated, there are only ever finitely many tests.
My issue with TDD is that it pretends to drive the final implementation with tests, but what is really driving the implementation is the monkey at the keyboard thinking, “testing for evenness should be done with the modulo operation,” not exhaustive tests.
This is what Test Driven Development looks like
I don’t understand how regex comes into it? Sounds tricky though!
I asked an LLM to write a jq
scriptlet for me today. It wasn’t even complicated, it just beat working it out / trying to craft the right string to search Stack Overflow for.
It’s a shame that you’re so quick to express skepticism but so reluctant to do any research of your own, because the facts are a bit embarrassing with the exact same trend in the USA as in the UK.
Driver safety peaks in the 60s, and only moderately worsens after that. The large increase in fatal accidents, by the way, is clearly a result of older drivers being more vulnerable in a crash, because the chart at the bottom doesn’t show any such large increase for passengers and others.
I’m interested to know if this changes your mind.
That doesn’t affect the ability of older drivers, only the number of them.
In fact, one reason very old drivers might become more accident-prone is that they stop driving as much and lose some of their skills. So if older Americans really do keep driving more as they age (you haven’t provided any evidence that they do), you would expect them to retain those skills and be less accident-prone, not more. That would make them safer, and less in need of re-tests, than their UK counterparts.
Focusing on the driving safety of the elderly is a classic example of saliency bias. A 20-year-old kid wrecking his car is nothing unusual, so you don’t remember it when thinking about safety. An 80-year-old who can’t even remember which way to turn the wheel getting in a wreck is unusual and extreme, so it’s more salient. Getting stuck behind an elderly driver gives you the impression that they’re a bad and hence unsafe driver, which contributes to this.
I’m not sure he was thrilled about the peanut butter choice; he basically had a breakdown.