

I don’t have any answers, unfortunately, but here is what it looks like on an iPhone on iOS 26:
In the browser:
[screenshot]

As a PWA:
[screenshot]

It seems that the safe area inset is correct here.


Ah, I see; thanks for the detailed reply! I thought it would also just be an easily toggleable option on the stock tab bar design, but if you’d have to do a lot of work around it, I totally get that it’s not very feasible right now.


Is there a chance we could get an option to collapse the bottom bar to a single icon in the left corner on scroll, similar to what, e.g., Music and Podcasts do? :)


It literally says 5 in the screenshot, but ok.


I thought this was going to be an article about car-friendly urban planning and the displacement of pedestrians; I would have agreed immediately. Too bad it apparently was just a kind of overly elaborate shower thought.


I’m an empirical researcher in software engineering, and all of the points you’re making are supported by recent papers in SE and/or education. We are also seeing a strong shift in our students’ behavior and a lack of ability to explain or justify their “own” work.


more like MCETAPB (most cops enable, tolerate and protect bastards)


At least on a Mac keyboard, the en dash is also alt+hyphen and the em dash is shift+alt+hyphen.


let’s see if we can find supporting information on this answer elsewhere, or maybe ask the same question a different way to see if the new answer(s) seem to line up
Yeah, that’s probably the best way to go about it, but it still requires some foundational knowledge on your part. For example, in a recent study I worked on, we found that programming students struggle hard when the LLM output is wrong and they don’t know enough to understand why. They then tend to trust the LLM anyway and end up prompting variations of the same thing over and over again, to no avail. Other studies have similarly found that while good students can work faster with AI, many others are actually worse off due to being misled.
I still see them largely as black boxes
The crazy part is that they are, even for the researchers who came up with them. Sure, we can understand how the data flows from input to output, but realistically not a single person in the world could look at all of the weights in an LLM and tell you what it has learned. Basically everything we know about their capabilities on tasks is based on just trying them out and seeing how well they work. Hell, even “prompt engineers” are making a lot of their decisions based on vibes alone.
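To make that concrete, here’s a toy sketch (a made-up two-layer network, nothing like a real LLM in scale): the mechanics of the data flow are fully transparent, yet the learned numbers themselves are uninterpretable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two layers of "learned" weights; a real LLM has billions of these.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 4))

def forward(x):
    # The data flow is easy to follow: multiply, squash, multiply.
    hidden = np.tanh(x @ W1)
    return hidden @ W2

print(forward(rng.normal(size=8)))  # the computation is perfectly mechanical
print(W1[:2, :4])  # ...but staring at the weights tells you nothing about
                   # what was learned, which is why capability claims come
                   # from empirically trying things out, not from inspection
```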


I don’t know if it’s just my age/experience or some kind of innate “horse sense”, but I tend to do alright with detecting shit responses, whether they be human trolls or an LLM that is lying through its virtual teeth.
I’m not sure how you would do that when asking about something you don’t have expertise in yet, as the LLM takes the exact same authoritative tone whether or not the information is real.
Perhaps with a reasonable prompt an LLM can be more honest about when it’s hallucinating?
So far, research suggests this is not possible (unsurprisingly, given the nature of LLMs). Introspective outputs, such as stated confidence or justifications for decisions, do not map closely to the LLM’s actual internal state.


Oh shoot my bad haha


… why did you have ChatGPT write this? Clearly you have your own thoughts on this, no need to ask a machine lol


(structuring inheritance) before the Jesus Club took over
And then it took humanity another 2000 years to move away from inheritance in favor of composition. You’d think someone would’ve realized sooner that it’s not always the right abstraction… (see the sketch below)
Language designer for a widely used programming language. Basically, I want to be Brian Goetz.
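Since the composition-over-inheritance point came up: here’s a minimal sketch (a hypothetical Stack example, not from the thread) of why inheritance is sometimes the wrong abstraction.

```python
# Subclassing list gives Stack the entire list API for free, including
# operations that silently violate the LIFO contract.
class InheritedStack(list):
    def push(self, item):
        self.append(item)

s = InheritedStack()
s.push(1)
s.push(2)
s.insert(0, 99)  # legal, but 99 was never pushed
print(s.pop())   # 2 -- the broken state goes unnoticed

# Composition keeps the list private and exposes only stack operations.
class ComposedStack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

t = ComposedStack()
t.push(1)
t.push(2)
print(t.pop())  # 2 -- and there is no insert() to abuse
```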


I have a compsci background and I’ve been following language models since the days of the original GPT and BERT. Still, the weird and distinct behavior of LLMs didn’t really click for me until recently, when I really thought about what “model” means, as you described. It’s simulating what a conversation with another person might look like structurally, and it can do so with impressive detail. But there is no depth to it, so logic and fact-checking are completely foreign concepts in this realm.
When looking at it this way, it also suddenly becomes very clear why people frustratedly telling LLMs things such as “that didn’t work, fix it” is so unproductive and meaningless: what would follow that kind of prompt in a human-to-human conversation? Structurally, an answer that looks very similar! Therefore the LLM will once more produce a structurally similar answer, but there is literally no reason why it would be any more “correct” than the prior output.
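To illustrate with a deliberately dumb toy (canned replies I made up, not an actual model): the only thing being reproduced is the shape of a helpful answer.

```python
import random

# Structurally plausible continuations of "that didn't work, fix it".
CANNED_FIXES = [
    "You're right, apologies! Here's the corrected version: ...",
    "Good catch, the bug was in X. This should work now: ...",
]

def reply_to(prompt: str) -> str:
    # Nothing here verifies a fix; it only produces something that
    # *looks like* what a helpful human would say next.
    return random.choice(CANNED_FIXES)

print(reply_to("that didn't work, fix it"))
print(reply_to("that didn't work, fix it"))  # another fluent "fix", equally ungrounded
```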


“Then we’ll have a good two months until the summer break to pass a few things very quickly, so that people feel that something is really changing.” As examples, Merz cited better border protection and more deportations.
You couldn’t make this up. What exactly are people supposed to quickly “feel” there, other than that people are suddenly being disappeared?
The only thing even sadder is that the majority of people in this country actually celebrate this stuff without rhyme or reason.


Ah, thanks so much for reaching out again! Downloading now :D


What do you think the point of this post is, then? Comedic hyperbole only works if there is still some truth to it.
Why has even the White House recommended against the use of the C programming language?