yellow [she/her]

  • 1 Post
  • 42 Comments
Joined 1 year ago
Cake day: November 12th, 2024

  • Can’t speak to OWUI (llama.cpp’s built-in web UI has been sufficient for me), but for image generation you’ll need to grab a different piece of software, as Ollama only does LLMs, not diffusion models. The two main options for that are ComfyUI and Automatic1111, though I will warn that both require far more manual setup than Ollama does.

    - Opinion warning -

    I would highly recommend you move away from Ollama and switch to llama.cpp or ik_llama.cpp if you can. While Ollama is by far the simplest, most plug-and-play solution, it has a handful of odd or misleading design choices that make doing anything worthwhile kind of annoying, like setting the context size to 4096 by default (are we in 2023 or something?) and using a weird, nonstandard model naming scheme. Ollama’s configuration is also painfully limited, while llama.cpp exposes a lot more knobs and dials you can tune to get the best performance you can.
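    For a concrete sense of those knobs, a llama.cpp server launch might look something like this. (The model path is made up for the example; the flags are llama-server’s, but check llama-server --help for your build, as options shift between releases.)

```
# Sketch of launching llama.cpp's llama-server with an explicit,
# larger context size rather than relying on a small default.
# -m    GGUF model file to load (hypothetical path)
# -c    context size in tokens
# -ngl  number of layers to offload to the GPU
# --port  where the built-in web UI is served
llama-server -m ./models/some-model.gguf -c 32768 -ngl 99 --port 8080
```

    Once it’s up, the built-in web UI is available at http://localhost:8080.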

    Additionally, you may want to swap out some of the models you’re using for newer ones. As it is unlikely you are running the full 685B parameter Deepseek-R1 on your home rig since it requires >500 GB of RAM to run at any appreciable speed, you’re probably running one of the “distills,” which are smaller models that were fine-tuned to behave like the full-sized model. The Deepseek distills, along with every Llama model, are practically ancient by LLM standards and have since been far outclassed by newer models that are around the same size or even smaller. If you want somewhere to start, check out the new Qwen3.5 models.




  • yellow [she/her] to Linux@lemmy.world · Linux dual boot
    2 points · 9 days ago

    Use dd! It’s a tool that lets you copy the contents of anything, bit-for-bit, to anywhere else. First, you’ll need to boot into a live USB of any distro. Then, with both drives plugged in, run something like dd if=/path/to/source/drive of=/path/to/output/drive bs=4M. You can get the paths of each drive by running lsblk, and they’ll look something like /dev/sda1 or /dev/nvme0n1. (Be very careful with dd: whatever you put as the output drive will be irreversibly overwritten with the contents of the input drive.)
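    Here’s a harmless sketch of the same idea using ordinary files instead of real block devices, so nothing gets destroyed (the file names are made up for the example; the flags are the same ones you’d use with /dev/sdX paths):

```shell
# Demonstrate dd's copy semantics on regular files instead of drives.
printf 'hello from the source drive' > source.img

# if= is the input, of= is the output, bs= is the block size per read.
# When cloning real disks, adding status=progress gives a live byte count.
dd if=source.img of=clone.img bs=4M

# cmp exits 0 only if the two files are bit-for-bit identical.
cmp source.img clone.img && echo "identical copy"
```

    The same pattern scales from a 27-byte file to a multi-terabyte disk; dd doesn’t care what the bytes mean.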





  • Just connect USB-C headphones

    Not everyone happens to have a pair lying around, and this doesn’t really work for IEMs and the like.

    or use a USB-C to 3.5mm audio dongle.

    I’ve yet to find a dongle that lasts more than around half a year of frequent use before starting to break. If you have any recommendations, that would be greatly appreciated.






  • I think that, while yes, LLMs are an option for data storage, I don’t think they’re worth the effort. Sure, they might have a very wide breadth of information that would be hard to gather manually, but how can you be sure that the information you’re getting is a faithful replica of the source, or that the source it was trained on was good in the first place? A piece of information could come from either 4chan or Wikipedia, and unless you had the sources yourself to confirm (in which case, why use the LLM at all?), you’d have no way of telling which it came from.

    Aside from that, just getting the information out of it would be a challenge, at least on the hardware of today and the near future. Running a model large enough to hold a useful amount of world knowledge requires some pretty substantial hardware if you want any useful amount of speed, and with rising hardware costs, that might not be possible for most people even years from now. Software is a hurdle too: it can be difficult to get inference engines working on newer, unsupported hardware and drivers.

    So sure, maybe as an afterthought if you happen to have some extra space on your drives and oodles of spare RAM, but I doubt that it’d be worth thinking that much about.



  • yellow [she/her] to Dogs@lemmy.world · Classic move
    11 points · 29 days ago

    I’m in the latter camp mostly because, on top of the inconsistencies in the images (why are there two dishwashers? why is the dog climbing into the dishwasher instead of reversing into it as described?), they don’t really have a plausible reason to exist at all, at least in the context of the text. Why is the dog holding the box and looking directly at the camera in the first photo? If the dog really was flailing around as described, how would the person have had time to take not one but two perfectly framed and posed shots?