maria [she/her]

  • 219 Posts
  • 6.19K Comments
Joined 3 years ago
Cake day: July 5th, 2023

  • maria [she/her] to 196 · oppenrulemer
    2 points · 10 hours ago

    ohgosh…

    i too have had some like - tests done where they had to insert something and i was just like - so scared it was crazy scary… and painful and stuff… even tho they put some anti-pain stuff in there before actually going in… it was bad. wowie.

    i genuinely still have pain from the parts where they had problems “pushing through”… awful. so i guess thats why i cant see it being fun.

    but hey, thank u for sharing ur experience! <3 thats very nice of u.


  • maria [she/her] to 196 · Wait, AI rule? 😶
    2 points · 16 hours ago

    are u addressing the mythos thing in particular or LM capability in general or umm… “safety” stuff?

    the mythos stuff is entirely overblown, and that has been shown in many actual use cases by now. its nothing new.

    on general capability increase, it… really only matters if u care about it or wanna use those models. if not, its marketing hype. if u do, even in LM spaces the best saying is still “try it urself”

    im just yapping now, sorry >~<



  • maria [she/her] to THE_PACK@lemmy.world · yolo
    2 points · 20 hours ago

    HMM I BELIEVE THE IMAGE IS GENERATED AND SOMEONE PUT TEXT OVER IT THEMSELVES! SEEN PLENTY OF POSTS LIKE THIS WHERE IT WASNT POINTED OUT BY ANYONE BECAUSE THE TEXT SEEMED HUMAN—

    ANYWAY, HOW YA DOING BROTHERRR?


  • maria [she/her] to 196 · Wait, AI rule? 😶
    5 points · 1 day ago

    yea, its disappointing.

    now, i would like to put the blame largely on article sites which just really want some bombastic headlines, but calling a text-predictor’s output sensational is… quite something.


  • maria [she/her] to 196 · Wait, AI rule? 😶
    4 points · 1 day ago

    fair, they really should focus on what’s goin on-

    i do wonder tho how one would conduct studies on that?

    • theres not really a way to measure mental health… in a meaningful way i think
    • companies claiming “we lay off people because of ai” is more marketing than the actual reason (which is - surprise - profit)

    the studies conducted so far have largely looked into very short usage patterns, because… its difficult to track, if you are not literally openai.

    maybe im missing something here. cuz having some actual studies (besides this popular paper which tried to sell “u dont learn when generating with LMs” as “LMs cause cognitive debt”) would be really interesting.



  • maria [she/her] to 196 · Wait, AI rule? 😶
    12 points · 1 day ago

    yea- it really is just hype marketing.

    i dont believe they talked about society, but about companies being hacked, and how this particular model is supposedly just a bit “too good” at finding vulnerabilities in foss, so they deemed it too dangerous to be sold to everyone.

    … ignoring entirely that this “finding exploits with LMs” is nothing new.


  • maria [she/her] to 196 · Wait, AI rule? 😶
    9 points · 1 day ago

    in case anyone is wondering, this comes from this METR graph, showing how, according to their independent testing, model capabilities “double” every half year, as long as you read “capability” as “amount of time it would have taken an expert human to do the same task”.

    METR is not some hypey ai company, they conduct rather transparent research on language models. they try to measure actual capability progress in the field.

    but, since ai bad: none of this research matters, its all a fluke, the models are benchmaxxed anyway, and METR is definitely just a shill corporation, and so on… sigh
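    for the curious: the “doubles every half year” claim is just exponential growth in the task-length horizon. here’s a minimal sketch of the arithmetic - the starting horizon and doubling period below are made-up illustration numbers, only the doubling rule itself comes from the graph:

    ```python
    def task_horizon(initial_minutes: float, months: float, doubling_months: float = 6.0) -> float:
        """Projected task length (in human-expert minutes) a model can complete,
        assuming the horizon doubles every `doubling_months` months."""
        return initial_minutes * 2 ** (months / doubling_months)

    # starting from a hypothetical 30-minute task horizon, after two years of
    # doubling every 6 months: 30 * 2**4 = 480 minutes, i.e. an 8-hour task
    print(task_horizon(30, 24))  # 480.0
    ```

    the takeaway is just that a constant doubling period compounds fast: four doublings turn a half-hour task into a full workday, regardless of what the starting point is.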

