Unrelated, but what is this scene from?
yellow [she/her]
- 1 Post
- 42 Comments
Not all, but most of it is from infrared, with a little under half of all light received from the sun being infrared.* But yes, we would lose our only source of renewable energy and eventually either freeze or starve to death.
*Based on a quick skim of a Wikipedia article
Use dd! It’s a tool that allows you to copy the contents of anything bit-for-bit to anywhere else. First, you’ll need to boot into a live USB of any distro. Then, after plugging in both drives, you’ll want to run something like `dd if=/path/to/source/drive of=/path/to/output/drive bs=4M`. You can get the paths of each drive by running `lsblk`, and they’ll look something like `/dev/sda1` or `/dev/nvme0n1`. (Be very careful with `dd`, as whatever you put as the output drive will be irreversibly overwritten with the contents of whatever you put as the input drive.)
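If you want to get a feel for the syntax before touching real drives, the same bit-for-bit copy works on ordinary files (file names here are made up for the demo; the real thing would use the `/dev/...` paths from `lsblk`):

```shell
# Create a small stand-in "drive" and clone it with dd.
printf 'hello, drive' > source.img
dd if=source.img of=copy.img bs=4M    # same syntax as a real drive-to-drive copy
cmp source.img copy.img && echo "bit-for-bit identical"
```

`cmp` exits successfully only if the two files match byte-for-byte, so the last line confirms the copy.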
yellow [she/her] to
Linux@lemmy.ml•Will I survive the Linux CLI if I only switch because I'm a student and Arch distro speed?English
5·10 days ago Honestly, for any semi-modern hardware, the difference in “bloat” between any two distros is small enough to be irrelevant for almost everything you would do on a computer, up to and including gaming, especially compared against Windows. Yes, Arch may be less bloated than, say, Ubuntu, but are you really going to notice or care that your system is idling at 1.2 GB of RAM usage instead of 800 MB?
yellow [she/her] to
uBlockOrigin@lemmy.ml•You're paying AI companies a monthly subscription fee to be fingerprinted like a parolee.English
2·11 days ago The safest would be to run it yourself. Unless you have some pretty beefy hardware and some time to set things up, though, you won’t get very close to the performance of any of the big-name hosted AIs on more complex things, but it might be enough for simpler stuff.
Grab LM Studio (or llama.cpp if you’re comfy with a CLI) and some models off of Hugging Face if you wanna give local AI a spin.
Just connect USB-C headphones
Not everyone happens to have a pair lying around, and this doesn’t really work for IEMs and the like.
or use a USB-C to 3.5mm audio dongle.
I’ve yet to find a dongle that lasts more than around half a year of frequent use before starting to break. If you have any recommendations, that would be greatly appreciated.
yellow [she/her] to
Fuck AI@lemmy.world•AIs can generate near-verbatim copies of novels from training dataEnglish
5·19 days ago RNG is not an inherent property of a transformer model. You can make it deterministic if you really want to.
You can’t convert it back into anything remotely resembling human-readable text without inference and a whole lot of matrix multiplication.
Could you not make a similar argument about a zip file or any other compression format?
At least the cheaper part, surely?
`:x` does the same thing as `:wq` but in one less keystroke :3
yellow [she/her] to
No Stupid Questions@lemmy.world•What books have a lot of useful information should I get? (I mean like a Wikipedia thing with vast knowledge, but non-electronic.)English
1·22 days ago While yes, LLMs are an option for data storage, I don’t think that they’re worth the effort. Sure, they might have a very wide breadth of information that would be hard to gather manually, but how can you be sure that the information you’re getting is a good replica of the source, or that the source it was trained on was good in the first place? A piece of information could come from either 4chan or Wikipedia, and unless you had the sources yourself to confirm (in which case, why use the LLM at all?), you’d have no way of telling which it came from.
Aside from that, just getting the information out of it would be a challenge, at least with the hardware of today and the near future. Running a model large enough to have a useful amount of world knowledge requires some pretty substantial hardware if you want any useful amount of speed, and with rising hardware costs, that might not be possible for most people even years from now. Software is a problem too: it might be difficult to get inference engines working on newer, unsupported hardware and drivers.
So sure, maybe as an afterthought if you happen to have some extra space on your drives and oodles of spare RAM, but I doubt that it’d be worth thinking that much about.
yellow [she/her] to
Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ@lemmy.dbzer0.com•Cannot find my torrent site bookmarkEnglish
3·26 days ago I doubt they’re blaming the site; they just lost it and are trying to find it again.
I’m in the latter camp mostly because, on top of the inconsistencies in the images (Why are there two dishwashers? Why is the dog climbing into the dishwasher instead of reversing into it as described?), they don’t really have a plausible reason to exist at all, at least in the context of the text. Why is the dog holding the box and looking directly at the camera in the first photo? If the dog really was flailing around like was described, how would the person have time to take not one but two perfectly framed and posed shots?
oalouub is my favorite cheez-it flavour!
yellow [she/her] to
Enough Musk Spam@lemmy.world•Get the fuck out of America then MuskEnglish
8·2 months ago The point the tweet OP is making isn’t that they want Musk to be deported to South Africa; they’re just calling out his hypocrisy.
Your websites have updates??
OOTL, what’s up with NZXT?
Oh, my bad! The wording didn’t parse as humor to me.
Can’t speak to OWUI (llama.cpp’s built-in web UI has been sufficient for me), but for image generation, you’ll need to grab a different piece of software to handle that, as Ollama only does LLMs, not diffusion models. The two main options for that are ComfyUI and Automatic1111, though I will warn that both require far more manual setup than Ollama does.
- Opinion warning -
I would highly recommend you move away from Ollama and switch to llama.cpp or ik_llama.cpp if you can. While Ollama is by far the simplest, most plug-and-play solution, it has a handful of odd or misleading design choices that make doing anything worthwhile kind of annoying, like setting the context size to 4096 by default (are we in 2023 or something?) and a weird, nonstandard model naming scheme. Ollama’s configuration is also painfully limited, while llama.cpp exposes a lot more knobs and dials you can tune to get the best performance you can.
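For a sense of what that tuning looks like, a llama.cpp server launch might be something like the sketch below (the model path and numbers are placeholders, not recommendations; check `llama-server --help` for the flags your build actually supports):

```shell
# Illustrative launch only — paths and values are made up for the example:
#   -m    path to a GGUF model file downloaded from Hugging Face
#   -c    context window size, instead of a small fixed default
#   -ngl  number of model layers to offload to the GPU
llama-server -m ./models/some-model.gguf -c 16384 -ngl 99 --port 8080
```

Ollama hides most of these behind its Modelfile layer, which is exactly the limitation being complained about.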
Additionally, you may want to swap out some of the models you’re using for newer ones. As it is unlikely you are running the full 685B parameter Deepseek-R1 on your home rig since it requires >500 GB of RAM to run at any appreciable speed, you’re probably running one of the “distills,” which are smaller models that were fine-tuned to behave like the full-sized model. The Deepseek distills, along with every Llama model, are practically ancient by LLM standards and have since been far outclassed by newer models that are around the same size or even smaller. If you want somewhere to start, check out the new Qwen3.5 models.