• 0 Posts
  • 63 Comments
Joined 1 year ago
Cake day: June 17th, 2023

  • Zeth0s@lemmy.world to Mildly Infuriating@lemmy.world: AAAA

    I used to dual-boot Linux with Windows Vista on an old laptop. I had only installed the first Assassin’s Creed and Rome: Total War on it, nothing else, and it was almost never connected to the internet. After a year of barely using it, apart from a few Total War sessions, Vista had become so slow it was unusable. It just slowed down spontaneously, for no reason. I completely removed it, leaving only Linux, and that laptop survived seven years of intensive use and was still working ten years later (just too old).

    Vista was a scam


  • What I mean is different. A dog thinks like a dog, a human thinks like a human, and an AI will think like an AI. It will likely be able to pretend to think like a human, but it won’t actually think like one.

    It won’t have a Proustian madeleine (sensory experiences that trigger epiphanies), or the need to travel to some “sacred” location looking for spirituality, or miss the hometown where it grew up. Its thinking won’t be driven by a fear of spiders, a need for social recognition, or the pleasure of seeing naked women. Its thoughts won’t depend on its daily diet, on its intake of sugar, fat, vitamins, or stimulants.

    These are simple examples, but in general it will think in a different way. Humans will tune it to pretend to be “as human as possible,” but humans will remain unique.


  • Since you are in nanotechnology: when I say “average out,” I mean it in the statistical mechanics sense, i.e. a macroscopic phenomenon arising from averaging over the many accessible microscopic configurations. Thoughts do not arise like this; they are the result of many complex, nonlinear, stochastic signals. They depend on a huge number of individual microscopic events that are not replicable in a computer as-is, and likely not reproducible by a parametrized function. Nothing wrong with that: we might be able to approximate human thoughts, but most likely not reproduce them.

    What area of nanotechnology are you in? The main problem with nanotechnologies is that they cannot reproduce the complexity of their biological counterparts. Take carbon nanotubes: we cannot reproduce the features of even the simpler ion channels with them, let alone the more complex human ones.

    We can build nice models with interesting functionality, as we are doing with current AI: machines that can do logic, make decisions, and so on. Even a machine that can predict human thoughts. But they’ll do it in their own way, while real human thoughts will most likely stay human, because the processes from which they arise are deeply human.