StarDreamer

  • 0 Posts
  • 222 Comments
Joined 2 years ago
Cake day: August 2nd, 2023



  • Ethical concerns aside, there is a difference between using AI to avoid having to hire artists/developers and using AI because someone can’t realize their vision on their own, lacking the prerequisite skills.

    On one hand, you have companies using AI when they can absolutely hire a human to do something; on the other, there is someone who couldn’t have published anything without the assistance of such a tool.

    People have different passions, and not everyone can be good at art, programming, etc. to create something amazing. The problem is when someone uses the tool as a crutch, or uses it to replace human expression of intent. Then it truly becomes a soulless, worthless piece of crap.

    The best example is the scanlation scene, where people translate manga. It’s fine to use AI to remove the original text, while NOBODY is fine with an AI translation. Why? Because redrawing line art is an activity that doesn’t require human expression (it’s about preserving the artist’s original expression, not changing anything), while localizing text requires a human to interpret intent and re-express it in a different cultural setting.



  • As someone who is in a relevant field (higher ed), the teachers are doing what they can.

    This past year I’ve had college students ask about the time during an exam because they can’t read the analog clock projected on the wall. If you can make it to 20 years old without realizing you’re missing a critical skill and learning it yourself, that’s also on you.

    We’re also seeing a lack of critical thinking skills and an inability to retain information. People don’t remember things that were taught one or two semesters ago. It’s not that they need a refresher; they’ve completely forgotten core concepts (such as what CPU caches are, in an advanced architecture course). Then there are tons of people who can recite every definition on an exam, but can’t take a step further to reach a conclusion about a problem (e.g. knowing that git revert reverts committed files, but not realizing that running it right after committing a test file means the file is gone and no test gets executed).

    There is something wrong with students today. And I’m saying that as someone who just finished my undergrad during COVID. But the institutions are adapting by teaching things with less depth, which then dumbs down further education, since everything now has to be re-covered from scratch…







  • I may be biased (PhD student here), but I don’t fault them for being this way. Ethics is something that 1) requires formal training, 2) requires oversight, and 3) means something different to every person. Quite frankly, it’s not part of their training, it’s never been emphasized as part of their training, and it’s subjective, shaped by cultural experience.

    What counts as an unreasonable risk of harm is going to be different for everybody. To me, if the entire design runs locally and does not collect data for Google’s use, then it’s perfectly ethical. That said, this does not prevent someone else from adding data collection features later. I think the original design of such a system should put a reasonable amount of effort into preventing that; but if that is done, there’s nothing left to blame the designers for. The moral responsibility lies with whoever pulled the trigger.

    Should the original designer have anticipated this issue and therefore never taken the first step? Maybe. But that depends on a lot of circumstances we don’t know, so it’s hard to say anything meaningful.

    As for the “more harm than good” analysis, I absolutely detest that sort of reasoning, since it attempts to quantify social utility in a purely mathematical sense. If that reasoning holds, an extreme example would justify harming any minority group as long as it maximizes benefit for society as a whole. Basically Omelas. I believe a better quantitative test is to check whether harm is introduced to ANY group of people; as long as that’s the case, the whole is unethical.
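    To make the contrast concrete (my own framing, nothing rigorous), with u_g the net utility of group g and w_g its weight:

```latex
% Aggregate-utility test vs. the no-harm-to-any-group test
\text{utilitarian: accept if } \sum_{g} w_g\, u_g > 0
\qquad \text{vs.} \qquad
\text{no-harm: accept only if } \min_{g} u_g \ge 0
```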


  • This is common for companies that like to hire PhDs.

    PhDs like to work on interesting and challenging projects.

    With nobody to rein them in, they do all kinds of cool stuff that makes no money (e.g. Intel Optane and transactional memory).

    Designing a realtime scam analysis tool with resource constraints is interesting enough to be greenlit but makes no money.

    Once released, they’ll move on to the next big challenge, and when nobody is there to maintain their work, it will be silently dropped by Google.

    I’m willing to bet more than 70% of the Google graveyard comes from projects like these.




  • I keep hearing good things, but I have yet to see any meaningful results for the stuff I would use such a tool for.

    I’ve been working on network-function optimization at hundreds of gigabits per second for the past couple of years. Even with MTU-sized packets, you only get approximately 200 ns to process each packet (assuming no batching). Optimizations generally involve manual prefetching and using/abusing NIC offload features to minimize atomic instructions (this also runs on arm, where an atomic fetch-and-add in gcc compiles into a function built around an LL/SC sequence and takes approximately 8 times as long as a regular memory write). Current AI-assisted agents cannot generate efficient code that runs at line rate. There are no textbooks or blogs that give a detailed explanation of how these things work, so there are no resources for them to be trained on.
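    To give a flavor of the kind of optimization involved (a minimal sketch of mine; the names and core count are illustrative, not from any real codebase): replacing a shared atomic counter with per-core counters removes the LL/SC round trip from the fast path entirely.

```c
/* Sketch: avoiding atomics on a packet fast path. NUM_CORES and all
 * names here are illustrative assumptions. */
#include <stdint.h>

#define NUM_CORES 4
#define CACHELINE 64

/* Naive version: every packet pays for an atomic RMW, which on many
 * arm cores compiles to an LL/SC loop that is several times slower
 * than a plain store. */
static uint64_t shared_ctr;

static inline void count_shared(void)
{
    __atomic_fetch_add(&shared_ctr, 1, __ATOMIC_RELAXED);
}

/* Per-core version: each core increments its own cacheline-aligned
 * slot with a plain store; a reader sums the slots off the fast path.
 * No atomic RMW on the hot path, and no false sharing. */
static struct { uint64_t v; } __attribute__((aligned(CACHELINE)))
    percore_ctr[NUM_CORES];

static inline void count_percore(unsigned core_id)
{
    percore_ctr[core_id].v++;
}

static uint64_t read_total(void)
{
    uint64_t sum = 0;
    for (unsigned i = 0; i < NUM_CORES; i++)
        sum += __atomic_load_n(&percore_ctr[i].v, __ATOMIC_RELAXED);
    return sum;
}
```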

    You’ll find a similar problem if you prompt them to generate good RDMA code. At best you’ll get something that barely works, and almost always the code cannot efficiently exploit the latency reduction RDMA provides over traditional transport protocols. The generated code looks like how a graduate CS student might imagine RDMA works, but it is usually completely unusable, either requiring additional PCIe round-trips or thrashing main memory.
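    For what it’s worth, the kind of trick the generated code never discovers is doorbell batching: chaining work requests so a single ibv_post_send() (one MMIO write across PCIe) posts a whole batch. A rough sketch, assuming the queue pair, memory region, and remote rkey were set up elsewhere:

```c
#include <stdint.h>
#include <string.h>
#include <infiniband/verbs.h>

/* Post n RDMA writes with one doorbell. Assumes qp/mr/rkey setup
 * is done elsewhere and buf holds n chunks of `chunk` bytes. */
int post_batched_writes(struct ibv_qp *qp, struct ibv_mr *mr,
                        uint64_t remote_addr, uint32_t rkey,
                        char *buf, uint32_t chunk, int n)
{
    struct ibv_sge sge[16];
    struct ibv_send_wr wr[16], *bad;

    if (n > 16)
        n = 16;
    for (int i = 0; i < n; i++) {
        sge[i].addr   = (uintptr_t)(buf + (size_t)i * chunk);
        sge[i].length = chunk;
        sge[i].lkey   = mr->lkey;

        memset(&wr[i], 0, sizeof(wr[i]));
        wr[i].opcode              = IBV_WR_RDMA_WRITE;
        wr[i].sg_list             = &sge[i];
        wr[i].num_sge             = 1;
        wr[i].wr.rdma.remote_addr = remote_addr + (uint64_t)i * chunk;
        wr[i].wr.rdma.rkey        = rkey;
        /* Signal only the last WR (assumes sq_sig_all == 0), so
         * completion processing is also amortized over the batch. */
        wr[i].send_flags = (i == n - 1) ? IBV_SEND_SIGNALED : 0;
        wr[i].next       = (i == n - 1) ? NULL : &wr[i + 1];
    }
    /* One ibv_post_send() -> one doorbell -> n RDMA writes. */
    return ibv_post_send(qp, wr, &bad);
}
```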

    My guess is that these tools are ridiculously good at stuff they can find examples of online. For stuff with no examples, however, they are woefully underprepared, and you still need a programmer to do the work manually, line by line.


  • As much as I hate the concept, it works. However:

    1. It only works for generalized programming (e.g. write a Python script that parses CSV files). For any specialized field it does NOT work (e.g. write a DPDK program that identifies RoCEv2 packets and rewrites the IP address; see the sketch at the end of this comment).

    2. It requires the human supervising the AI agent to know how to write the expected code themselves, so they can prompt the agent to use specific techniques (e.g. use Python’s csv library instead of string.split). This is not a problem now, since even programmers fresh out of college generally know what they are doing.

    If companies try to use this to avoid hiring/training skilled programmers, they will have a very bad time in the future, when the skilled talent pool runs dry and nobody left knows how to tell correctly written code from incorrect code.
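    To illustrate what I mean by “specialized” in point 1, here is a rough sketch of the RoCEv2 example (RoCEv2 rides on UDP port 4791; this assumes untagged IPv4 frames, software checksumming, and omits all EAL/port setup):

```c
#include <netinet/in.h>
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_mbuf.h>
#include <rte_udp.h>

#define ROCEV2_UDP_PORT 4791  /* IANA-assigned UDP port for RoCEv2 */

/* Rewrite the destination IP of RoCEv2 packets in-place. */
static void rewrite_rocev2_dst(struct rte_mbuf *m, uint32_t new_dst_be)
{
    struct rte_ether_hdr *eth =
        rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
    if (eth->ether_type != rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4))
        return;

    struct rte_ipv4_hdr *ip = (struct rte_ipv4_hdr *)(eth + 1);
    if (ip->next_proto_id != IPPROTO_UDP)
        return;

    size_t ihl = (ip->version_ihl & RTE_IPV4_HDR_IHL_MASK) *
                 RTE_IPV4_IHL_MULTIPLIER;
    struct rte_udp_hdr *udp =
        (struct rte_udp_hdr *)((uint8_t *)ip + ihl);
    if (udp->dst_port != rte_cpu_to_be_16(ROCEV2_UDP_PORT))
        return;

    ip->dst_addr = new_dst_be;
    ip->hdr_checksum = 0;
    ip->hdr_checksum = rte_ipv4_cksum(ip);  /* recompute IPv4 csum */
}
```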


  • There’s also the change from circuit switching to packet switching, which drastically alters how the handover process works.

    tl;dr: handover in 5G is buggy and barely works. The whole business of switching from one service area to another in the middle of a call is held together by hopes and dreams.


  • Somehow I disagree with both the premise and the conclusion here.

    I dislike giving a direct answer to things, as it discourages understanding. What is the default memory allocation mechanism in glibc malloc? I could give the answer “sbrk() and mmap()” and call it a day, but I find understanding when it uses mmap() instead of sbrk() (since sbrk isn’t NUMA-aware but mmap is) way more useful for future questions.
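    A quick way to see the behavior for yourself (a sketch assuming glibc defaults, where M_MMAP_THRESHOLD starts at 128 KiB and adapts at runtime):

```c
/* Sketch: watching glibc pick the sbrk()-backed heap vs. mmap(). */
#include <malloc.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    void *small = malloc(64);          /* served from the sbrk() heap */
    void *large = malloc(512 * 1024);  /* above the ~128 KiB threshold:
                                          glibc uses a private mmap() */
    printf("small=%p large=%p\n", small, large);

    /* The cutoff is tunable: forcing the threshold to 0 makes even
     * tiny allocations go through mmap(). */
    mallopt(M_MMAP_THRESHOLD, 0);
    void *tiny = malloc(64);
    printf("tiny=%p (now mmap-backed)\n", tiny);

    free(small);
    free(large);
    free(tiny);
    return 0;
}
```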

    Meanwhile, Google adding a tab for AI search is helpful for people who want to use just AI search, and it doesn’t take much away from people doing traditional web searches. Why be mad about this instead of the other, truly questionable decisions Google is making?



  • Nope. Plenty of people want this.

    In the last few years I’ve seen plenty of cases where CS undergrads get stumped when ChatGPT is unable to debug or explain a problem for them. I’ve literally heard “idk, because ChatGPT can’t explain this Lisp code” as an excuse during office hours.

    Before LLMs, there was also a significant number of people who used GitHub issues/Discord to ask simple application-usage questions instead of Googling. There seems to be a significant decline in people’s willingness to search for an answer, regardless of whether AI tools exist.

    I wonder if it has to do with weaker reading comprehension skills?