

I’m pretty sure they’re referring to the incompetence, not the unfair trial


You know, I hadn’t even considered that. So many people use that ridiculous faux-censorship in earnest that I just assumed that was what I was reading.
If it was a joke, then I apologise
There’s no rape joke here; nobody aside from you even implied anything non-consensual


In some contexts, perhaps. I assure you that skill will remain relevant when programming aircraft or nuclear reactors


I thought you were implying that the survey was so unreliable that we couldn’t reach any conclusions about support for Israel, or the lack thereof. I was trying to point out that we could (tenuously) reach at least one conclusion.


While you’re not wrong, I think framing it as “Israel-Hamas” rather than “Israel-Palestine” is the least favourable framing available (for a supporter of Palestine) without resorting to really obvious leading questions.
As such, seeing a majority opinion against Israel is encouraging. I’d expect a more nuanced survey to swing more heavily against Israel


Oh, absolutely. It’s not something which should be encouraged, and against a well-designed modern system it probably isn’t possible (there must be some challenge-response type NFC systems on the market).
I’m just saying it isn’t unambiguously “illegitimate”
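To illustrate what I mean by challenge-response, here’s a toy sketch in Python. Everything in it (the HMAC scheme, the key handling, the function names) is my own illustration, not a real NFC stack or standard; the point is only that a recorded exchange can’t be replayed, because the reader issues a fresh random challenge every time.

```python
# Toy challenge-response sketch (illustrative only, not a real NFC protocol).
import hmac, hashlib, os, secrets

SHARED_KEY = os.urandom(32)  # provisioned into the card/token when it's issued

def respond(challenge: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Card side: answer the reader's fresh challenge with an HMAC over it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, key: bytes = SHARED_KEY) -> bool:
    """Reader side: recompute the expected answer and compare in constant time."""
    return hmac.compare_digest(respond(challenge, key), response)

# A captured response is useless later, because the next transaction
# uses a different random challenge.
challenge = secrets.token_bytes(16)
assert verify(challenge, respond(challenge))
assert not verify(secrets.token_bytes(16), respond(challenge))  # replayed answer fails
```

A simple static-ID tag has nothing like this, which is why cloning or replaying it is so much easier.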


That’s probably debatable, if they have permission. They probably shouldn’t have been given permission, but that’s a separate issue
Sure, there are countries that the US government says US businesses can’t do business with. What the governments of those countries think is irrelevant, in principle, unless they have some leverage they can apply.
If a business has no presence in a country, and the government of wherever they’re hosted has no interest in enforcing the other country’s law for them, then threatening to fine the business is largely irrelevant. Note that this letter we’re commenting on doesn’t say “this order is invalid, and we’re going to challenge it in court”. It says “it’s irrelevant, and we intend to ignore it”. They go on to say they’re going to ask a US court to back them up, but that’s actually incidental to the legal statement they’re making.
The UK courts only really control what happens in the UK, at the end of the day. That’s what sovereignty is. If they decide 4chan is a sufficiently significant problem then there are a bunch of things they could tell people in the UK to do about it, like block the site, but 4chan seems to think that they can’t tell 4chan to do anything at all.
Besides which, what I was actually saying is that not being in the UK has nothing whatsoever to do with accepting payment from people who are


What on earth does that have to do with anything?
If someone offers to make and post custom Christmas cards to people, are you saying they should care which country the person paying them is in? Why would that matter at all?


That’s only tenuously true. They’re mainly driven by Earth’s orientation, sure, but if that were all then you wouldn’t get regional variations in seasons (like the wet & dry seasons some places get, rather than the four we get in most places).
It would also be impossible for winter to come early, and if you ask any farmer they’ll tell you that’s a thing which happens


They definitely weren’t working on Starship back then. Their first successful launch was in 2008, so that was when they were working on the Falcon 1.
You can’t claim all the work they’ve ever done has just been early versions of Starship, because the Falcon rockets are the most successful rockets in history. They’re a perfectly good product, and the fact that they’ve gone on to try and create something even better isn’t remotely the same as changing direction so often that they never actually get anywhere, like the Orion program has


That depends on whether you consider an LLM to be reading the text, or reproducing it.
Outside of the kind of malfunctions caused by overfitting, like when the same text appears again and again in the training data, it’s not difficult to construct an argument that an LLM does the former, not the latter.


It’s only true of badly designed bridges these days. Modern engineering tools can calculate the resonant frequencies, and engineers make certain that those are far away from the frequencies which humans or wind can create
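As a rough illustration (all the numbers and names below are made up for the example, not taken from any real design code): treat one vibration mode of a deck as a simple mass-spring system and check its natural frequency against the band that walking pedestrians excite, roughly 1.5-2.5 Hz.

```python
# Toy resonance check for a single vibration mode, modelled as a mass-spring system.
import math

def natural_frequency_hz(stiffness_n_per_m: float, modal_mass_kg: float) -> float:
    """f = sqrt(k/m) / (2*pi) for a single-degree-of-freedom mode."""
    return math.sqrt(stiffness_n_per_m / modal_mass_kg) / (2 * math.pi)

FOOTFALL_BAND_HZ = (1.5, 2.5)  # typical walking-pace excitation range

f = natural_frequency_hz(stiffness_n_per_m=8.0e6, modal_mass_kg=2.0e5)
in_band = FOOTFALL_BAND_HZ[0] <= f <= FOOTFALL_BAND_HZ[1]
print(f"mode at {f:.2f} Hz, inside footfall band: {in_band}")
```

Real structural analysis uses finite-element models with many modes, but the check is the same idea: if a mode lands in or near the excitation band, you stiffen the deck, add mass, or fit dampers until it doesn’t.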


There’s nothing handwavy about renormalization; it’s just a way of describing the mathematics which is easier for a human brain to deal with, so we’ve standardised on it.
An unnormalised wave function can show you the relative probability of any given thing, but it makes life easier if you set the scale so that you can read an actual probability straight off it, rather than having to ask “relative to what?”
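To make that concrete (notation mine, nothing beyond the standard definition): dividing by the norm fixes the scale so the squared modulus integrates to one and can be read directly as a probability.

```latex
\tilde{\psi}(x) = \frac{\psi(x)}{\sqrt{\int_{-\infty}^{\infty} |\psi(x)|^2 \, dx}},
\qquad
\int_{-\infty}^{\infty} |\tilde{\psi}(x)|^2 \, dx = 1,
\qquad
P(a \le x \le b) = \int_a^b |\tilde{\psi}(x)|^2 \, dx
```

With the unnormalised ψ you’d have to divide by the total integral every time; normalising just does that division once up front.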

Attacking a massive corporation is a very different proposition from attacking individuals; I don’t think that parallel is terribly concerning


Light is a subset of the electromagnetic spectrum
No, it’s not. In physics, we call the entire spectrum “light”, because it’s all fundamentally the same thing.
We can talk about “visible light”, but that’s a subset of light in general. Microwaves, radio waves, x-rays, gamma radiation, and any other section of the spectrum you can think of are all light
It’s certainly not as bad as the problems generative AI tends to have, but it’s still difficult to avoid strange and/or subtle biases.
Very promising technology, but likely to be good at diagnosing problems in Californian students and very hit-and-miss with demographics which don’t tend to sign up for studies in Silicon Valley
Sure, but there are far more things which will kill the entire person at the same dose that kills the cancer than things which can be carefully controlled by choosing the right dose.
These studies which claim to kill cancer in a petri dish usually turn out to be the former, because not killing the host is the difficult part
Be cautious about trusting AI-detection tools; they’re not much better than the AI they’re trying to detect, because they’re just as prone to false positives and false negatives.
It’s also inherently an arms race: if a tool existed which could easily and reliably detect AI-generated content, then the AI developers would just be using that tool during training instead of whatever they already use, and the AI would quickly learn to defeat it. They also wouldn’t be worrying about their training data being contaminated by the output of existing AI, which is becoming a genuine problem right now