As of Wednesday, all youth under 16 in Australia will be banned from major social media platforms like TikTok, Instagram, Snapchat, YouTube, Reddit, Twitch, and X. For over a decade, whistleblowers, politicians, academics, and experts around the world have sounded the alarm about the online harms people of all ages are exposed to.

The ban does nothing to prepare teens to respond to digital harms. It makes no investments in education, community training, or parental support. Youth will not be magically prepared to address problematic online behaviours or content when they turn 16.

The time and resources devoted to the ban could be better spent on things like education and support for digital citizenship, media literacy, and privacy rights, or on resource centres.

If social media is problematic for a 13-, 14- or 15-year-old, it’s likely still problematic for a 16-, 25- or 80-year-old. There is no body of research establishing 16 as a “safe threshold” for social media use, and the age at which use becomes healthy can vary across genders.

Under the current model, companies will not be inclined to improve their reporting systems for harmful content. In fact, in response to the ban, YouTube is actually removing a feature that would allow teens to report content they find inappropriate.

Youth under 16 who find ways to use these platforms, despite the bans, will be unlikely to come forward and ask for help if things go wrong. After all, they weren’t supposed to be online in the first place.

The answer to mitigating online harms is not kicking teens offline.

Social media companies also need to be held accountable for the ways their platforms are designed and run. These platforms are built to push certain content and elicit particular kinds of engagement.

  • AGM@lemmy.ca · 7 points · 14 hours ago

    In China’s system, when a parent buys a phone, they can lock it into child settings at the device level. This not only forces all apps to operate in child mode, but also turns off internet access from 10pm to 6am, caps screen time depending on the child’s age, and imposes break reminders after every 30 minutes of continuous use.

    All social media platforms are legally required to adjust their algorithms and content for phones in child mode: only age-appropriate content is shown, educational content is boosted, and DMs with strangers, tipping, in-game purchases, and things like live-broadcasting yourself are blocked.

    All online gaming platforms likewise require proof of age, or impose restrictions without it.

    So, there are much more sophisticated ways of making things work.
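    A rough sketch of how those device-level rules might compose (the age brackets and exact caps below are my own illustrative assumptions, not the actual Chinese regulations; only the 10pm–6am curfew and the 30-minute break rule come from the description above):

    ```python
    # Illustrative sketch of device-level "child mode" rules: a nightly curfew,
    # an age-based daily screen-time cap, and periodic break reminders.
    # The specific age brackets and caps are assumptions for illustration only.
    from datetime import datetime, time

    DAILY_CAP_MINUTES = {8: 40, 12: 60, 15: 90}   # hypothetical caps by age bracket

    def daily_cap(age: int) -> int:
        for max_age in sorted(DAILY_CAP_MINUTES):
            if age <= max_age:
                return DAILY_CAP_MINUTES[max_age]
        return 120  # hypothetical cap for older teens

    def usage_allowed(age: int, now: datetime, minutes_used_today: int) -> bool:
        """Block use during the 22:00-06:00 curfew or once the daily cap is hit."""
        in_curfew = now.time() >= time(22, 0) or now.time() < time(6, 0)
        return not in_curfew and minutes_used_today < daily_cap(age)

    def needs_break(minutes_continuous: int) -> bool:
        """Prompt a break after every 30 minutes of continuous use."""
        return minutes_continuous >= 30
    ```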

  • jaselle@lemmy.ca · 8 points · 18 hours ago

    I think we should just ban social media entirely. If people want to stay in touch with each other, it should be through direct messaging (including email); and if people want to publish their opinions to the world let them use blogs; and if people want to discuss topics, let them use forums.

    Conveniently, if social media is banned, then we don’t need ID verification.

    • brax@sh.itjust.works · 3 points · 15 hours ago

      The bigger problem is how you define “social media”.

      Do we ban all message forums and Discord? Wikipedia has “Talk” pages; doesn’t that make it social media?

      What about the troves of worthy educational content on YouTube?

      What about the flood of misinformation sites likely to pop up, which will be much harder to filter out?

        • brax@sh.itjust.works · 1 point · 8 hours ago

          Right, and how does one define it sensibly? Especially when the law affects a group as diverse as a country’s entire population, and when the thing being regulated is dynamic by nature.

          • jaselle@lemmy.ca · 1 point · 8 hours ago

            Start conservatively, and ban only things which are bright-line social media. Then expand if this isn’t enough.

  • NarrativeBear@lemmy.world · 1 point · 21 hours ago

    All of these solutions are like someone coming into my home and telling me how to raise my own kids.

    Instead, provide parents with routers that support parental controls, and a country’s government can maintain a curated list of websites accessible to certain age groups.

    This would be the most practical solution and would still meet the “protect the children” narrative.

    Anything more than this is an invasion of privacy and a way to monitor the public.

    • Hanrahan@slrpnk.net · 1 point · 7 hours ago

      All of these solutions are like someone coming into my home and telling me how to raise my own kids.

      I’m not aware of any country in the world that doesn’t do that?

      Australia has compulsory education for children, doesn’t allow smoking, doesn’t allow alcohol consumption, doesn’t allow children to drive, doesn’t allow them to participate in porn, doesn’t allow them to have sex, enforces vaccination, and imposes a litany of other directives that override parental choice.

      Many of the above are considered harmful for children, just as a swathe of experts say about children’s exposure to social media.

      In some places in the US you can be arrested for child endangerment for letting your child walk to school, and the US continues to condone the regular shooting of its children in the thousands…

      What I, some random on Lemmy, think should be irrelevant; this should not be a “do your own research and go with your gut” sort of nonsense. That’s what gives us RFK Jr.

      What a majority of clinical experts think, however, is important. I was just pointing out the blatant flaw in your argument.

    • jaselle@lemmy.ca · 1 point · 18 hours ago

      Routers are not the way; it should be device-side. Children’s phones and computers should blacklist social media, or even whitelist allowed sites IMO. Otherwise kids can get around it with mobile data or public wi-fi.

      • NarrativeBear@lemmy.world · 1 point · 14 hours ago

        This can already be done TBH. Phones have a private DNS setting, so all a parent needs to do is point the phone’s DNS at a resolver that blocks or allows specific websites.

        That resolver could even be curated by a local government, which would let parents configure their child’s phone at their own discretion.

        This would be far less privacy-invasive and would remove the need for a “digital ID”, while still ticking the box of protecting one’s children at the parent’s discretion.
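        As a rough sketch of what such a filtering resolver does under the hood (the blocklist, port, and plain-UDP forwarding here are illustrative assumptions; a real service would use DNS-over-TLS and a maintained list):

        ```python
        # Minimal sketch of a filtering DNS forwarder: blocked domains get NXDOMAIN,
        # everything else is relayed to an upstream resolver. Illustration only.
        import socket

        UPSTREAM = ("1.1.1.1", 53)            # any public resolver
        BLOCKLIST = {"example-social.com"}    # hypothetical curated list

        def qname(packet: bytes) -> str:
            """Extract the queried domain (question section follows the 12-byte header)."""
            labels, i = [], 12
            while packet[i] != 0:
                length = packet[i]
                labels.append(packet[i + 1:i + 1 + length].decode("ascii", "replace"))
                i += 1 + length
            return ".".join(labels).lower()

        def nxdomain(packet: bytes) -> bytes:
            """Echo the query back as a response with RCODE=3 (name does not exist)."""
            flags = bytes([packet[2] | 0x80, (packet[3] & 0xF0) | 0x03])
            return packet[:2] + flags + packet[4:]

        def serve(host: str = "0.0.0.0", port: int = 5353) -> None:
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as srv:
                srv.bind((host, port))
                while True:
                    query, client = srv.recvfrom(512)
                    name = qname(query)
                    if any(name == d or name.endswith("." + d) for d in BLOCKLIST):
                        srv.sendto(nxdomain(query), client)   # blocked: pretend it doesn't exist
                    else:
                        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as up:
                            up.settimeout(2)
                            up.sendto(query, UPSTREAM)        # otherwise forward upstream
                            srv.sendto(up.recvfrom(512)[0], client)

        if __name__ == "__main__":
            serve()
        ```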

        • jaselle@lemmy.ca · 1 point · 9 hours ago

          Yes, but apparently it isn’t easy for parents. And there’s a coordination problem: while the norm is for kids to have social media, removing it for one child disconnects them from their peers. So a standardised ban would be needed.

  • fourish@lemmy.world · 3 points · 1 day ago

    I’d love a ban in Canada. There’s nothing compelling in these arguments; protecting kids from online garbage is more important than any of them.

    I’d also like to see it banned for those over 60.

    • mister_newbie@sh.itjust.works · 2 points · 21 hours ago

      Hell to the no. It’s an incredibly slippery slope from banning kids to requiring real ID verification to use portions of the Internet. Parenting is hard, but needs doing.

    • NarrativeBear@lemmy.world · 2 points · 21 hours ago

      Parental controls have existed on home routers for years.

      This leaves the banning of certain websites at your own discretion and lets you raise your children the way you see fit.

      Hell, parental controls can be tailored so that devices on your home network designated as “children’s devices” can only access an approved list of websites, with everything else blocked. This would be the easiest option for any parent to implement as they see fit.

      Handing over personal info so easily to corporately owned websites for the sake of convenience is a huge privacy issue.

      • ILikeBoobies@lemmy.ca · 1 point · 19 hours ago

        If a parent is letting their kid on social media then they aren’t qualified to be a parent.

        However, the use of modem parental controls is much better than spying on people.

    • setVeryLoud(true);@lemmy.ca · 3 points · 24 hours ago

      This would imply having to give your ID to access social media.

      Are you willing to trust tech companies with your ID? Discord already suffered an ID leak.

  • jaselle@lemmy.ca · 25 points · 2 days ago

    None of these are good arguments against introducing a ban. Worst argument of all is that “we shouldn’t ban it for 15 year olds because that wouldn’t protect 16 year olds.” Seriously? Is that intentional rage bait?

    I think it’s more than clear by now that algorithmic feeds are hazardous, at least without significant effort in research and safeguards which nobody seems to be doing. So yeah, I’d say: definitely ban algorithmic feeds for teenagers. Hell, ban them for everyone if you must.

    Gating should be done either by ZKP (zero-knowledge proofs, which don’t expose any information to any party other than “I’m at least x years old” – look this up if this is a new concept to you) or device-side by standardizing and streamlining child safety locks.
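    For anyone to whom ZKPs are new, here’s a toy non-interactive Schnorr proof, just to show the core idea of convincing a verifier you know a secret without revealing it. The parameters are far too small for real use, and an actual age check would need a range proof over a government-signed credential rather than this bare proof of knowledge:

    ```python
    # Toy Schnorr proof of knowledge (Fiat-Shamir): prove you know x with
    # y = g^x mod p without revealing x. Tiny parameters, illustration only.
    import hashlib, secrets

    p, q, g = 467, 233, 4          # g generates the subgroup of prime order q mod p

    def challenge(y: int, t: int) -> int:
        return int(hashlib.sha256(f"{g},{y},{t}".encode()).hexdigest(), 16) % q

    def prove(x: int) -> tuple[int, int, int]:
        y = pow(g, x, p)           # public value; x stays secret
        r = secrets.randbelow(q)
        t = pow(g, r, p)           # commitment
        s = (r + challenge(y, t) * x) % q
        return y, t, s

    def verify(y: int, t: int, s: int) -> bool:
        return pow(g, s, p) == (t * pow(y, challenge(y, t), p)) % p

    if __name__ == "__main__":
        x = secrets.randbelow(q)   # the prover's secret
        print(verify(*prove(x)))   # True, and the verifier never learns x
    ```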

    • kahnclusions@lemmy.ca · 1 point · 15 hours ago

      Gating should be done either by ZKP (zero-knowledge proofs, which don’t expose any information to any party other than “I’m at least x years old” – look this up if this is a new concept to you) or device-side by standardizing and streamlining child safety locks.

      100%. If a government is truly serious about the issue, then verification can be solved quite easily with ZKPs in a way that preserves privacy.

      • Avid Amoeba@lemmy.ca · 10 points · 2 days ago

        Ban American corporate social media altogether? 🍁 Fedecan 🍁 take over with Lemmy, Mastodon, Pixelfed and Friendica?

        Nevermind, we’ll get invaded for that.

      • jaselle@lemmy.ca · 6 points · 2 days ago

        You say “no,” as though you are disagreeing with me. But did you notice I said this?

        Hell, ban them for everyone if you must.

    • kbal@fedia.io · 5 points · 2 days ago

      zero-knowledge proofs, which don’t expose any information to any party other than “I’m at least x years old”

      Not quite. The well-known ZKP for age verification, used in the obvious way, reveals only: 1. “I’m at least x years old” and 2. “my name is y.” The name can be some other unique assigned identifier, but the point is that whatever is used needs to uniquely identify you.

      There is no way to tell how old people are across the Internet without relying on unprecedented and shocking intrusions into our privacy.

      • Avid Amoeba@lemmy.ca · 2 points · 2 days ago

        I read about a cross-signing scheme where different gov’t agencies can cryptographically sign an ID in a way that allows only partial information to be shared with any one service provider. It was done by some institutions in the Nordics.

        That said, our government is already trusted with our personal ID information. Nothing stops us from creating a public service that can be queried for age, which would only provide an answer after explicit approval by the person through another channel (e.g. an email prompting you to sign into the gov’t portal and approve the age query request). Then require service providers to use it. In fact, Equifax already offers such a service, without our consent, but it costs money to query.
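        A sketch of the shape such a service could take (the names and structure here are made up), with no answer released until the person approves the specific request through the other channel:

        ```python
        # Hypothetical approval-gated age-check service: a provider asks whether a
        # person meets a minimum age, but gets no answer until the person consents
        # to that specific request via a separate channel (e.g. the gov't portal).
        from dataclasses import dataclass, field
        from uuid import uuid4

        @dataclass
        class AgeCheckService:
            birth_years: dict[str, int]    # records the gov't already holds
            pending: dict[str, tuple[str, int]] = field(default_factory=dict)
            answers: dict[str, bool] = field(default_factory=dict)

            def request_check(self, person_id: str, min_age: int) -> str:
                """Called by the service provider; returns a request ID but no answer yet."""
                request_id = str(uuid4())
                self.pending[request_id] = (person_id, min_age)
                return request_id

            def approve(self, request_id: str, current_year: int) -> None:
                """Called only after the person consents via the separate channel."""
                person_id, min_age = self.pending.pop(request_id)
                self.answers[request_id] = current_year - self.birth_years[person_id] >= min_age

            def result(self, request_id: str) -> bool | None:
                """The provider polls for a yes/no; None until the person has approved."""
                return self.answers.get(request_id)
        ```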

        • MalReynolds@piefed.social · 2 points · 2 days ago

          Precisely where Australia and England failed: it was cheaper to dump the responsibility on the socials. Fox guards the henhouse.

      • jaselle@lemmy.ca · 2 points · 2 days ago

        I’m not convinced that a ZKP requires an identification number or any such deanonymizing data. If there is a ZKP protocol that works that way, that is just one possible implementation.

        • kbal@fedia.io · 3 points · 2 days ago

          How would you get by without one? If I produce a proof right now that I’m at least 32 years old, how else do you know it’s a proof for anyone in particular and I didn’t get it from my older brother or some random website that sells them?

          • MalReynolds@piefed.social · 1 point · 2 days ago

            The authorizer, who provides the ZKP to the client, knows; not the client. This should probably be the licensing / ID provider in your country (if they’re hacked everyone is screwed anyway, so no additional risk), and they already have your details; if not, you’re likely young or a fairly extreme edge case. Facebook et al. get bupkis except “older than X”. Note that in this model the ZKP is a nice-to-have.

            • kbal@fedia.io · 2 points · 1 day ago

              What do you mean, “no additional risk”? It’s a pretty big additional risk, creating a huge central database of everyone’s ID that will be frequently interacted with through a new interface that’s available to every sketchy website in the world. Even if it isn’t compromised it can collect data about how often your name gets looked up, and it isn’t easy to make a system where there isn’t the additional risk of more personal data being collected if the central authority colludes with Facebook. You’d really need to look carefully at the details to evaluate the risks of such a system, which they have not done at all in Australia.

              • kahnclusions@lemmy.ca · 1 point · 15 hours ago

                Such databases already exist in government, in order to provide everyone with services like healthcare, pensions, elections, etc.

          • jaselle@lemmy.ca · 1 point · 2 days ago

            Well, that same problem exists with many of the proposed verification models, like credit cards (how can you verify this is my credit card?), photo ID, etc.

            Here’s my proposal: your browser can send a request to a verification body (could be the government directly, let’s say) to respond to the challenge from the website you’re accessing, without sending information about which website is asking for the challenge. The verifier sends a cryptographically-signed approval back. The browser forwards this to the website. To prevent comparisons of timing as a deanonymization method, the browser can wait a random period of time before forwarding the request both ways.
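            Roughly, in code (using Ed25519 signatures as a stand-in for whatever the verifier would actually deploy; the message format and delay are my own assumptions, and the credential-binding details are left out):

            ```python
            # Sketch of the flow above: the site issues a random challenge, the browser
            # relays it to the verifier after a random delay, the verifier signs
            # "over-16" plus the challenge, and the site checks the signature.
            import random, secrets, time
            from cryptography.exceptions import InvalidSignature
            from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

            verifier_key = Ed25519PrivateKey.generate()   # held by the verification body
            verifier_pub = verifier_key.public_key()      # published for websites to use

            def site_issue_challenge() -> bytes:
                return secrets.token_bytes(32)            # fresh nonce, prevents replay

            def verifier_attest(challenge: bytes) -> bytes:
                # The verifier never learns which site asked; it only sees an opaque nonce.
                return verifier_key.sign(b"over16=1|" + challenge)

            def browser_relay(challenge: bytes) -> bytes:
                time.sleep(random.uniform(0.0, 2.0))      # blur timing correlation
                return verifier_attest(challenge)         # in reality, a network round trip

            def site_check(challenge: bytes, signature: bytes) -> bool:
                try:
                    verifier_pub.verify(signature, b"over16=1|" + challenge)
                    return True
                except InvalidSignature:
                    return False

            challenge = site_issue_challenge()
            print(site_check(challenge, browser_relay(challenge)))   # True
            ```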

            • kbal@fedia.io · 1 point · 2 days ago

              Every time I’ve looked at the details of elaborate schemes resembling the one you imagine, I’m always left with a lot of doubts that they’re secure or practical. Every time I’ve looked at the systems that have actually been implemented in reality, I have no doubt that they suck.

              • kahnclusions@lemmy.ca · 1 point · 15 hours ago

                I don’t feel it’s elaborate at all. I like these solutions because they are actually quite simple. It’s just signing and verifying requests using asymmetric key cryptography, techniques which are known to be robust and secure. The government never knows which web services you are verifying for, and the web services never know your identity or any more information than they need to. They don’t even learn your precise age, just that you’re over 16/18/21 whatever.

                • kbal@fedia.io · 1 point · 4 hours ago

                  You are suggesting that a system which does not yet exist will be perfectly safe and secure. None of the ones for which I have seen actual design documents are anything like as safe as you imagine.

              • jaselle@lemmy.ca · 1 point · 2 days ago

                That’s valid. My preference is for device-side child locks: for instance, a header that says, “I am a child.” There is still much to improve there. But failing that, if the winds of politics dictate we must have verification, why not ZKP?
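                Something like a hypothetical request header the platform would be obliged to respect, e.g. (the header name is made up; nothing like it is standardised today):

                ```python
                # Sketch of a device-side child lock signal: the OS or browser attaches a
                # hypothetical "X-Age-Under-16: 1" header and the platform must honour it.
                from http.server import BaseHTTPRequestHandler, HTTPServer

                class FeedHandler(BaseHTTPRequestHandler):
                    def do_GET(self):
                        if self.headers.get("X-Age-Under-16") == "1":
                            self.send_response(451)           # refuse the algorithmic feed
                            self.end_headers()
                            self.wfile.write(b"Feed unavailable in child mode\n")
                        else:
                            self.send_response(200)
                            self.end_headers()
                            self.wfile.write(b"...feed content...\n")

                if __name__ == "__main__":
                    HTTPServer(("127.0.0.1", 8080), FeedHandler).serve_forever()
                ```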

  • Avid Amoeba@lemmy.ca · 3 points · edited · 2 days ago

    Age-restricting corporate social media isn’t “kicking teens offline.” That’s a funny straw man.

    We need age restrictions and regulations on moderation and algorithms. The latter alone won’t solve the problems social media poses for developing brains. Age restrictions aren’t bulletproof and that’s alright. It’s much easier to stop my child from smoking at the age of 10 when there’s a smoking ban in place than when there isn’t. I want it to be easier, not harder, to raise them without prepubescent brain rot. And I think my neighbours would appreciate bringing up another Canadian who has their marbles intact.

    E: Plenty of parents outside terminally-online circles don’t even realize they should restrict social media use at an early age.

    E2: The fact that the Australian ban doesn’t deal with the ID problem is something I definitely would not want us to emulate: it does not forbid ID collection by private corporations, and it does not provide a privacy-preserving public service for proving age. Besides, Meta already knows the age of most of its users. A reasonable compliance criterion short of 100% could be established that would still be good enough, subject to regulatory audits.