• 0 Posts
  • 46 Comments
Joined 3 years ago
Cake day: June 15th, 2023





  • This is how bubbles always go; see the Gartner hype cycle. People overextend, trying to apply new tools and tech in places they don’t belong, and only then do they realise the technology’s limitations. This is common in business: C-suites explicitly exploit the hype cycle to secure naive investor funding, and it becomes a game of how much money can be extracted before investors wise up to the technology’s limitations. There will be niches where the tech actually settles, but they’re always much smaller than what was promised. I’m a programmer, and I’ve been listening to people say that LLMs are going to take my job for the past five years, yet every time I’ve actually tried to apply an LLM directly to my work it has failed in a pretty drastic manner. I find existing systems useful as a tool, but that’s about it.


  • Don’t forget this is all under the umbrella of the initial hypothetical where AI stalled at its current level. I don’t believe that existing LLM systems will destroy the economy. They’re a tool that people are trying to fit into every hole, much like blockchain during the crypto bubble. We’ve already seen companies fire their customer service departments, try to replace them with LLMs, then have to go crawling back when that failed catastrophically.

    If AI systems continue to improve, however? As I said previously, all bets are off.


  • Fine, you want me to be pedantic? When prompted with tokens that appear in an order that humans understand as a question about some aspect of the universe as we understand it, the tokens predicted by the LLM correspond to an answer that humans agree is more representative of reality than the tokens the average human would provide.

    Tell me where in my initial comment I said they weren’t an economic threat. I never said they weren’t. I said they aren’t an existential economic threat. Please read my comment.


  • I don’t want to get into an argument over semantics: whatever your definition of ‘knowledge’ is, LLMs can recall a greater number of factoids than any individual human. That’s all I meant. Are they perfect? No, I never said that. They’re still far beyond the average human in that respect, however - hence superhuman.

    I said that LLMs are not an existential threat to humanity, even economically. I never said that they wouldn’t threaten individual jobs, or cause a bubble. Please don’t strawman me. You and I are looking at completely different levels of effects; I’m looking at the big picture - is humanity or society as we know it going to continue to exist in 100 years (in this hypothetical where AI and/or LLMs stagnated)? If yes, then LLMs are not an existential threat. That’s what an existential threat means, after all.

    Is AI causing an economic bubble? Sure, but like all bubbles it will burst when people realise the tools have limited use due to their drawbacks. The world will then return to some semblance of normalcy. That’s a non-existential threat.

    Now, if we’re talking about a world in which AI systems continue to evolve? All bets are off, which is why AI somehow stagnating where it is now is the best-case scenario.


  • Honestly? If AI systems stopped improving forever? That’s probably the best-case scenario. LLMs are already superhuman on a knowledge level, human-level in terms of speed (tokens per second, etc.), but subhuman in many other areas. This makes them useful for some tasks, but not so useful that they could pose any sort of existential threat to humanity (either in an economic sense or in a misalignment sense). If LLMs stagnate here then we have at least one tool in our AI toolbox that we’re pretty sure isn’t conscious/sentient/etc., which is useful since that makes them predictable on some level. Humans can deal with that.

    Unfortunately, I see no reason why AI systems in general wouldn’t continue to improve. Even if LLMs do stagnate they’re only one tiny branch of a much larger tree, and we already have at least one example of a generally intelligent system that is conscious and sentient - a human. This means that even if somehow the human brain were the only architecture ever capable of sentience (incredibly unlikely), we could always simulate/emulate a human brain to get human-level AGI. Simulate/emulate it faster? Superhuman AGI.


  • New Zealand.

    Our laws make carrying anything with the intent to use it as a weapon (in self-defence or not) a crime - whether it’s a gun, sword, pepper spray, cricket bat, screwdriver, or lollipop stick. This means that when someone robs a corner store, the owner gets jailed for having a baseball bat behind the counter. It’s absurd.

    The law not only fails to equalise your chances, it actively forces you to be at a disadvantage when defending yourself, and by the time any police arrive the assailant is long gone. Most criminals don’t have guns (except for the multiple armed gangs, of course), but plenty of them carry bladed weapons - there have been multiple machete attacks.

    I’m all for gun ownership for the purpose of property defence, including strong legal defences for home and store owners repelling assailants.

    I don’t think just anyone should be able to go and purchase a gun no questions asked; it should probably be tied to some kind of mandatory formal training, e.g. participation in the army reserves. It should definitely be more difficult than getting a driver’s licence (though I also think a driver’s licence should be harder to get than it is now - the idea that you can sit a written test and then legally pilot a two-ton steel box in areas constantly surrounded by very squishy people is kind of absurd to me).



  • As with everything, trust is required eventually. It’s more about reducing the amount of trust required than removing it entirely. It’s the same with HTTPS - website certificates only work if you trust the root certificate authorities, for example. A manufacturer’s keys would only be certified once the manufacturer has met some bar of trust with the root authority or authorities, and proving that trust is well-founded is more a physical issue than an algorithmic one. As with root CAs, it may involve physical cybersecurity audits, etc.
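
    To make the chain-of-trust part concrete, here’s a minimal sketch in Rust. It assumes the ed25519-dalek crate (v2, with the rand_core feature) and rand’s OsRng; the ‘root’ and ‘device’ keys are purely illustrative stand-ins for a root authority and a camera manufacturer’s key, not anyone’s actual scheme.

    ```rust
    use ed25519_dalek::{Signature, Signer, SigningKey, Verifier};
    use rand::rngs::OsRng;

    fn main() {
        let mut rng = OsRng;

        // Root authority key: trusted out-of-band, like a root CA certificate.
        let root = SigningKey::generate(&mut rng);

        // A device/manufacturer key, "certified" here by the root signing the
        // device's public key bytes (a real scheme would sign a full certificate).
        let device = SigningKey::generate(&mut rng);
        let device_pub = device.verifying_key();
        let device_cert: Signature = root.sign(device_pub.as_bytes());

        // A verifier who only trusts the root can validate the whole chain:
        // 1. the device key is certified by the root...
        let root_pub = root.verifying_key();
        assert!(root_pub.verify(device_pub.as_bytes(), &device_cert).is_ok());

        // 2. ...and the footage is signed by that device key.
        let frame = b"raw video frame bytes";
        let frame_sig = device.sign(frame);
        assert!(device_pub.verify(frame, &frame_sig).is_ok());
    }
    ```

    The same pattern extends to intermediate authorities, exactly as with the web PKI: the verifier only ever has to hold the root’s public key.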




  • Video evidence is relatively easy to fix: you just need camera ICs to cryptographically sign their outputs. If the image/video is tampered with (or even re-encoded) the signature won’t match. As the private key is (hopefully!) stored securely in the hardware IC taking the photo/video, generated images or videos can’t be signed by such a private key.
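
    As a minimal sketch of the sign-then-verify step (not any particular camera vendor’s scheme), assuming the same ed25519-dalek crate as above:

    ```rust
    use ed25519_dalek::{Signer, SigningKey, Verifier};
    use rand::rngs::OsRng;

    fn main() {
        // Stand-in for a key burned into the camera IC's secure hardware.
        let camera_key = SigningKey::generate(&mut OsRng);

        // The sensor signs the raw frame as it is captured.
        let frame = b"raw image bytes straight off the sensor";
        let signature = camera_key.sign(frame);

        // Anyone with the camera's public key can confirm the frame is untouched...
        let public_key = camera_key.verifying_key();
        assert!(public_key.verify(frame, &signature).is_ok());

        // ...and any edit or re-encode changes the bytes, so verification fails.
        let edited = b"raw image bytes straight off the sensor, edited";
        assert!(public_key.verify(edited, &signature).is_err());
    }
    ```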


  • Under the Dewey Decimal System, books on wood carving and river systems would not be placed together, nor would books on conflict resolution and gardening.

    It’s almost like they’d be placed with books on related topics instead. This traditional Maori system is… not good. Imagine a system where books are sorted by which Catholic patron saint they fall under, or which Greek god they best represent. The librarians even admit in the article that it’s only practical if you’re already well versed in Maori mythos; everyone else gets ‘an opportunity to learn’ (i.e. be completely lost).



  • Not really. While working at the OS level can typically require ‘unsafe’ operations, a core tenet of writing Rust is building safe abstractions around unsafe operations. Rust’s ‘unsafe’ mode doesn’t disable all safety checks either - there are still many invariants that the Rust compiler enforces, even in an ‘unsafe’ block, that a C compiler won’t.

    And even ignoring all of that, if 10% of the code needs to be written in Rust’s ‘unsafe’ mode that means the other 90% is automatically error-checked for you, compared with 0% if you’re writing C.
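
    For a concrete picture of the ‘safe abstraction around unsafe’ idea, here’s the classic split_at_mut example from the Rust book (illustrative only, not something from this thread): the unsafe block is hidden behind a function whose API callers can’t misuse.

    ```rust
    /// Splits one mutable slice into two non-overlapping mutable halves.
    /// The borrow checker can't prove that the two raw-pointer-derived slices
    /// don't alias, so `unsafe` is needed - but the assert before the unsafe
    /// block upholds the invariant that makes it sound, giving callers a fully
    /// safe API.
    fn split_at_mut(values: &mut [i32], mid: usize) -> (&mut [i32], &mut [i32]) {
        let len = values.len();
        let ptr = values.as_mut_ptr();
        assert!(mid <= len); // invariant: the halves stay in bounds and disjoint

        unsafe {
            (
                std::slice::from_raw_parts_mut(ptr, mid),
                std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
            )
        }
    }

    fn main() {
        let mut v = [1, 2, 3, 4, 5];
        let (left, right) = split_at_mut(&mut v, 2);
        left[0] = 10;
        right[0] = 30;
        assert_eq!(v, [10, 2, 30, 4, 5]);
    }
    ```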


  • Here are the generation statistics of the BN-800 reactor I mentioned before: https://pris.iaea.org/PRIS/CountryStatistics/ReactorDetails.aspx?current=451 It’s been operating at about 70% of its rated capacity basically since it was first turned on - that’s large-scale power generation. Breeder reactors have been in commercial use for decades (see also: Phenix and Superphenix).

    The simple reason why breeder reactors aren’t the default is because most reactors don’t need to be breeders. The two main upsides of a breeder reactor is a) breeding of nuclear material, which as I said before was only ever a concern in the very early days of nuclear power. We have thousands of years’ worth of fuel available now. b) The reuse of nuclear waste for additional power generation. Of course you have to have nuclear waste to reuse first, which necessitates many other, non-breeder reactors already being in use, so breeder reactors are usually restricted to countries that already have significant investment into nuclear power, like France, Russia, China, etc… If you don’t need to breed more nuclear fuel, and you don’t have waste to reprocess you might as well keep it simple and build a regular LWR reactor.