Sir. Haxalot
- 12 Posts
- 39 Comments
Sir. Haxalot@nord.pubto
Cybersecurity@sh.itjust.works•An AI coding bot took down Amazon Web Services - Ars TechnicaEnglish
3·6 days agoWorth noting that despite the headline, this does not have anything to do with the huge outage at the end of 2025.
The company said the incident in December was an “extremely limited event” affecting only a single service in parts of mainland China. Amazon added that the second incident did not have an impact on a “customer facing AWS service.”
Neither disruption was anywhere near as severe as a 15-hour AWS outage in October 2025 that forced multiple customers’ apps and websites offline—including OpenAI’s ChatGPT.
I would also have felt some level of schadenfreude if it had turned out that any of the really big incidents at the end of 2025 was a result of management's aggressive pushes for AI coding. Perhaps it would cool off the heads of executives a bit if there were very real examples of shit properly hitting the fan…
Sir. Haxalot@nord.pubto
Selfhosted@lemmy.world•How do I access my services from outside?English
2·6 days agoThe free version is mainly just limited in the number of users and devices. The relaying service might be limited as well, but that should only matter if both of your clients are behind strict NAT; otherwise the WireGuard tunnels get connected directly and no traffic goes through NetBird's managed servers.
You can also self-host the control plane with pretty much no limitations, and I believe you no longer need SSO (which used to increase the complexity a lot for homelab setups).
Sir. Haxalot@nord.pubto
Technology@lemmy.world•Microsoft 365's buggy Copilot 'Chat' has been summarizing confidential emails for a month — yet another AI privacy nightmareEnglish
2·9 days agoThat seems to be the terms for the personal edition of Microsoft 365, though? I'm pretty sure the enterprise edition, which has features like DLP and tagging content as confidential, would have a separate agreement where they are not passing on the data.
That is like the main selling point of paying extra for enterprise AI services over the free publicly available ones.
Unless this boundary has actually been crossed, in which case: yes, it's very serious.
Sir. Haxalot@nord.pubto
iiiiiiitttttttttttt@programming.dev•Still got ours rockin' years later!English
59·9 days agoDon’t touch that, it’s a load bearing 100Mbit switch.
Sir. Haxalot@nord.pubto
Technology@lemmy.world•Microsoft 365's buggy Copilot 'Chat' has been summarizing confidential emails for a month — yet another AI privacy nightmareEnglish
1·9 days agoThat is kind of assuming the worst-case scenario, though. You wouldn't assume that QA can read every email you send through their mail servers "just because".
This article sounds a bit like engagement bait based on the idea that any use of LLMs is inherently a privacy violation. I don’t see how pushing the text through a specific class of software is worse than storing confidential data in the mailbox though.
That is assuming that they don’t leak data for training but the article doesn’t mention that.
Sir. Haxalot@nord.pubtoGeneral Data Protection Regulation (“GDPR”) ⚖@sopuli.xyz•Google criticizes Europe's plan to adopt free software -- this abuses a GDPR hole that FOSS compensates forEnglish
1·11 days agoWait, are they saying that when hosting services based on open source you can just refer to the source to explain how data is processed? Or am I missing something?
Because realistically that is still a quite high bar for anyone who wants to understand how data is processed compared to requiring a privacy policy.
Sir. Haxalot@nord.pubto
Technology@piefed.social•Discord advises UK users that they "may be part of an experiment" where instead of their age verification data never leaving their phone, it will now actually leave their phoneEnglish
1·12 days agoThe question can go the other way as well: what proof do people have that Discord is outright lying in their communication? All the communication indicates that they have actually taken steps to minimize the privacy impact, importantly by using local processing and only storing whether verification succeeded or not, even if that means it can likely be bypassed (important web dev rule: never trust the client side).
Now, introducing the Persona system is very concerning, and also the reason I don't think it's an overreaction anymore. Even if they claim they don't save the data for longer than 7 days, the connection to Palantir and Peter Thiel is extremely troubling and erodes the trust. It comes down to me not trusting them as much as I trust Discord.
To expand on your question of why they wouldn't be as evil as possible: it comes down to whether or not you believe that all developers and product managers are evil. I have worked for a decade at a few IT-heavy companies and yeah, there is shit going on, but it's mostly due to laziness, or product managers wanting numbers and pretty graphs of user behavior (when it comes to privacy and data sharing).
The leak of the 70k UK identities is an interesting case. It's often framed as if the processor was hacked, but it was actually the normal support system where they handled appeals. The real mistake was that Discord didn't properly think through appeal handling, and it is probably attributable to a mistake or laziness rather than intentional malice.
Of course it's a bit different for the macro social networks, whose primary income stream is selling ads: they want to build behavior profiles because that allows them to argue that advertisers get more value out of their platform. The point I want to make is that your real name and photo don't actually have any value for those companies, because they already have everything they need from your activity. They do carry risks and liabilities, though, if nothing else due to GDPR.
Sir. Haxalot@nord.pubto
A Boring Dystopia@lemmy.world•What happens to a car when the company behind its software goes under?English
6·12 days agoThis article feels a bit like ragebait.
Yes, this happened once with a company that went bankrupt two years after launching their product. They seem to have designed an exceptionally poor product. How does that mean the enormous engineering failures of this small startup apply to all other car brands?
Most cars have a very clear separation between the core driving software and the infotainment, and the vast majority will never get any software updates, so what works will continue to work (or the other way around). At worst you'll lose stuff like remote commands, weather info, and lists of charging points/map updates… things that are dynamic by nature and need to be regularly updated.
Sir. Haxalot@nord.pubto
People Twitter@sh.itjust.works•I totally agree, too much is too much.English
7·12 days agoThe rules only matter if the admins adhere to them and enforce them consistently.
Sir. Haxalot@nord.pubto
Asklemmy@lemmy.ml•Will Lemmy/ the fediverse become age verified platforms?English
2·12 days agoIt sounds like you are assuming that the wallet needs to re-validate each session, and I don't see why that would be needed. Each user account would just need to validate its age once, and the website operator could store the result in their database. If you've validated once, the operator can be sure the user stays old enough.
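As a minimal sketch of that flow (all names here are hypothetical, nothing is taken from the EU Wallet spec), the operator only needs to keep a boolean per account once an attestation has succeeded:

```python
# Hypothetical sketch: check the age attestation once per account and
# keep only a boolean afterwards, so the wallet is never consulted again.
age_verified: dict[str, bool] = {}  # account_id -> True after success

def record_attestation(account_id: str, attestation_valid: bool) -> None:
    # Called once, when the user completes the wallet flow.
    if attestation_valid:
        age_verified[account_id] = True

def needs_verification(account_id: str) -> bool:
    # Every later session just reads the stored flag.
    return not age_verified.get(account_id, False)
```

Since age only increases, a successful one-time check never needs to be repeated for that account.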
Sir. Haxalot@nord.pubto
Technology@lemmy.world•Meta patents AI that takes over a dead person’s account to keep posting and chatting - DexertoEnglish
4·12 days agoThey’re probably not going to use it…
… but if they do it’s going to be a hell of a good starting point in motivating people to leave Facebook
Sir. Haxalot@nord.pubto
Asklemmy@lemmy.ml•Will Lemmy/ the fediverse become age verified platforms?English
4·12 days agoI believe something like this is supposed to be a use case of the digital EU Wallet. A website is supposed to be able to receive an attestation of a user's age without necessarily getting any other information about the person.
https://en.wikipedia.org/wiki/EU_Digital_Identity_Wallet
Apparently the relevant feature is Electronic attestations of attributes (EAAs). I'm not really familiar with how it will be implemented, though, and I'm a bit afraid that bureaucratic design is going to fuck this up…
Imo something like this would be orders of magnitude better than the current reliance on video identification. Not only is it much more reliable, it also won't feel nearly as invasive as having to scan your face and hope the provider doesn't save it somewhere.
Sir. Haxalot@nord.pubtoSpel@feddit.nu•Istället för Discord och Twitch - Svenssons NyheterSvenska
4·13 days agoFor Discord I think it depends a lot on how active the chat is. For larger servers I absolutely agree with you that it becomes too much and things just disappear. But for smaller instances, like ones with just your closest friends, the setup works very well. In instances with less activity, I think something that forces you to create threads would mostly make discussions feel fragmented.
At the same time, many communities probably use Discord just because it's big, even if it isn't necessarily the best alternative.
Sir. Haxalot@nord.pubto
Fedibridge@lemmy.dbzer0.com•Post on r/Privacy discussing reddit alternatives such as Lemmy & PiefedEnglish
5·13 days agoA big thing is that all your voting activity, while it appears private, is actually "broadcast" to all the servers in the fediverse without any actual verification of who runs them. I learned this after setting up an instance and finding out that it's possible to list all votes on any post, not just activity on my instance. So I'm not sure privacy is actually a good selling point.
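To illustrate roughly what that broadcast looks like (the values below are made up, and the exact shape varies between Lemmy, PieFed, etc.), an upvote federates as an ActivityPub "Like" activity that names the voting actor:

```python
# Illustrative sketch of a federated upvote: an ActivityPub "Like"
# activity. Every server it is delivered to can read the "actor" field,
# so votes are pseudonymous at best, not private. Values are made up.
vote_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Like",
    "actor": "https://example-instance.example/u/alice",  # the voter
    "object": "https://other-instance.example/post/123",  # the upvoted post
}
```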
Are there really a lot of AI-generated doorbell camera videos out there? I can't remember seeing any posted, but then again maybe that just proves the point.
Then again, the low resolution does make it much easier to hide typical artefacts and issues, so I don't think it proves anything.
More relatable than I would like it to be…
Sir. Haxalot@nord.pubto
Technology@piefed.social•Discord advises UK users that they "may be part of an experiment" where instead of their age verification data never leaving their phone, it will now actually leave their phoneEnglish
29·15 days agoThat’s not all, though; some users are also unhappy not just with the age verification process itself and the security of their data, but also the people bankrolling Persona, which includes the investment fund of Palantir founder, Peter Thiel. Palantir is the data and surveillance company currently used by US federal agencies, including ICE, and Thiel’s name appears 2000+ times in the Epstein files.
I used to think that people were massively overreacting about all this, but these are some pretty fucking suspicious connections.
Sir. Haxalot@nord.pubto
Technology@piefed.social•Discord advises UK users that they "may be part of an experiment" where instead of their age verification data never leaving their phone, it will now actually leave their phoneEnglish
4·15 days agoThey can absolutely run the verification code client side, but they can't really fully trust data provided from the client side, since the client might be manipulated, or a third-party client may have reverse-engineered the API to bypass the verification.
They probably made the decision that protecting privacy (you know, the thing people have been complaining about) is worth it, weighed against the fact that most teens probably won't figure out how to bypass the system… which makes this sudden change (trial?), where the data is sent to a 3rd party anyway, kind of odd.
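A minimal sketch of that trust boundary (hypothetical names, not Discord's actual code): whatever flag a client sends along, the server can only rely on state it recorded itself.

```python
# Sketch of "never trust the client side": the server keeps its own record
# of who passed verification and ignores any flag the client supplies,
# since a modified or third-party client can claim anything.
server_verified = {"alice"}  # usernames the server itself has verified

def may_view_restricted(username: str, client_claims_verified: bool) -> bool:
    # The client's claim is deliberately ignored; only the server record counts.
    return username in server_verified
```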
Sir. Haxalot@nord.pubto
Sweden@lemmy.world•Discords krav på identifiering samt alternativa tjänsterEnglish
1·16 days agoFrom what I understand, it's basically just that you can't see channels marked as NSFW, or the whole server if it's marked that way (maybe individual posts can be marked as NSFW too?), plus some restrictions on DMs.
Their information says nothing about server owners having to be adults, but I can imagine that you might not be allowed to have "adult channels" on your server unless all the mods have verified themselves.
I’m like 90% sure that this post is AI Slop, and I just love the irony.
First of all, the writing style reads a lot like AI… but that is not the biggest problem. None of the mitigations mentioned have anything to do with the Huntarr problem. Sure, they have their uses, but the problem with Huntarr was that it was a vibe-coded piece of shit. Using immutable references, image signing, or checking the Dockerfile would do fuck-all about the problem that the code itself was missing authentication on some important sensitive API endpoints.
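To be concrete about that class of bug (this is a generic illustration with made-up names, not Huntarr's real code), no amount of supply-chain checking helps when an endpoint simply never asks who is calling:

```python
# Generic illustration of the bug class: a sensitive endpoint with no
# authentication at all, next to a version that checks an API key first.
# Endpoint names and keys are made up; this is not Huntarr's actual code.
VALID_API_KEYS = {"s3cret-key"}

def reset_settings(request: dict) -> str:
    # Vulnerable: anyone who can reach the endpoint can trigger the action.
    return "200 settings reset"

def reset_settings_fixed(request: dict) -> str:
    # Fixed: reject callers without a valid API key before doing anything.
    if request.get("api_key") not in VALID_API_KEYS:
        return "401 unauthorized"
    return "200 settings reset"
```

Image signing or pinned digests would have shipped this vulnerable code just as faithfully; the fix has to be in the code itself.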
Also, Huntarr does not appear to be a Verified Publisher at all. Did their status get revoked, or was that a hallucination to begin with?
To be fair, though, the last paragraph does have a point, but for a homelab I don't think it's feasible to fully review the source code of everything you install. It rather comes down to being careful with things that are new and don't have an established reputation, which is especially a problem in the era of AI coding. The rest of the *arr stack is probably much safer because those are open-source projects that have been around for a long time and have had a lot of eyes on them.