I was in the doctor’s office today with a depressed guy going on and on about how his insurance had changed and was trying to kill him. I’m in the USA, so he is entirely correct.

I recognize AI’s value as an outlet for emotional connection. It is helpful to talk about things like disability and feelings of isolation with someone who can be tolerant and understanding. Is there any competent AI character/companion service online worth recommending to someone who is not technically capable and who has no revenue to extract or value to exploit? Just looking for a way to maybe save a guy from himself.

  • j4k3@lemmy.worldOP · 1 year ago

    Do you have any new better-than-Llama2-70B models you’ve tried recently?

    I haven’t tried anything new in a while because of code I changed in Oobabooga and mainline Linux kernel issues with Nvidia. I basically have to learn git to a much better level and manage my own branch for my mods (rough sketch of what I think that workflow looks like below). I tried KoboldCpp, but I didn’t care to install the actual Nvidia CUDA toolkit because Nvidia breaks everything they touch.
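    From what I understand, the branch part is mostly a fetch-and-rebase loop once it is set up. Something like this is what I’m aiming for (branch names are just examples, run from inside your own clone):

```python
# Sketch of maintaining a personal "mods" branch rebased on top of an upstream
# project such as text-generation-webui (Oobabooga). Branch names are examples.
import subprocess

def git(*args):
    """Run a git command, echo it, and stop on the first failure."""
    print("+ git", " ".join(args))
    subprocess.run(["git", *args], check=True)

# One-time setup: track upstream and start a branch for the local mods.
git("remote", "add", "upstream", "https://github.com/oobabooga/text-generation-webui.git")
git("checkout", "-b", "my-mods")

# Periodic update: fetch upstream and replay the local mods on top of it.
git("fetch", "upstream")
git("rebase", "upstream/main")
```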

    • rufus@discuss.tchncs.de · edited · 1 year ago

      Hehe. I’ve recently spent $5 on OpenRouter and tried a few models from 7B to 70B, and even one with a hundred-and-something billion parameters. They definitely get more intelligent, but I’ve determined that I’m okay within the 7B to 33B range, at least for my use case. I tested creative storywriting and dialogue in a near-future setting where AI and androids permeate human society, and I wasn’t that impressed. The larger models still made some of the same mistakes, struggled with the spatial positions of the characters, and the random pacing of the plot points didn’t really get better.
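      In case anyone wants to script the same comparison: OpenRouter speaks the OpenAI-style chat completions API, so it’s easy to fire the same prompt at several models. A minimal sketch (the model ID and sampler values are only placeholders, not what I actually ran):

```python
# Send one storywriting prompt to an OpenRouter-hosted model.
# Model ID and settings are placeholders; swap in whatever you want to compare.
import requests

OPENROUTER_KEY = "sk-or-..."  # your API key from openrouter.ai

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {OPENROUTER_KEY}"},
    json={
        "model": "meta-llama/llama-2-70b-chat",
        "messages": [
            {"role": "system", "content": "You are a creative co-writer for near-future science fiction."},
            {"role": "user", "content": "Continue the scene: the android pauses at the clinic door..."},
        ],
        "max_tokens": 300,
        "temperature": 0.8,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```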

      This wasn’t a scientific test whatsoever; I just took random available models, some fine-tuned for similar purposes, some not, and clicked my way through the list. So your mileage may vary here. Perhaps they’re much better with factual knowledge or reasoning. I’ve read a few comments from people who like, for example, chatting with the Llama(2) base model at 65B/70B parameters and say it’s way better than the 13B fine-tunes I usually use.

      And I also wasn’t that impressed with OpenRouter. It makes things easy and has some ‘magic’ to add the correct prompt formatting for all the different instruct formats. But I still had it entangle itself in repetition loops or play stupid until I went ahead, disabled the automatic settings, and once again tried to find the optimal prompt format and settings myself.
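      For reference, that ‘magic’ is just wrapping the same text in whatever template the model was fine-tuned on. Two common formats as an illustration (written from memory, the model card is the authority):

```python
# The same user message wrapped in two common instruct templates.
# Templates are from memory; check each model card for the exact format.

def alpaca_prompt(instruction: str) -> str:
    # Alpaca-style instruct format
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

def llama2_chat_prompt(system: str, user: str) -> str:
    # Llama-2 chat format with a system block
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

msg = "Write the next paragraph of the scene in the android bar."
print(alpaca_prompt(msg))
print(llama2_chat_prompt("You are a creative co-writer.", msg))
```

      A mismatched template is one common cause of exactly those repetition loops and ‘playing stupid’.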

      So I’m back to KoboldCpp. I’m familiar with its UI and all the settings. I think the CUDA toolkit in the Debian Linux repository is somewhat alright; I’ve deleted it because it takes up too much space and my old GPU with 2GB of VRAM is useless anyway. We certainly all had our ‘fun’ with the proprietary Nvidia stuff.
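      One nice thing about KoboldCpp is that besides the UI it exposes a local KoboldAI-style HTTP API (port 5001 by default), so the same settings can be scripted. Rough sketch from memory, so double-check the endpoint and field names against a running instance:

```python
# Query a locally running KoboldCpp instance over its KoboldAI-compatible API.
# Endpoint and field names are from memory; verify against your own instance.
import requests

payload = {
    "prompt": "The clinic waiting room was quiet until",
    "max_length": 200,   # number of tokens to generate
    "temperature": 0.7,
    "rep_pen": 1.1,      # repetition penalty
}
resp = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=300)
print(resp.json()["results"][0]["text"])
```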