Emotion artificial intelligence detects and predicts how someone is feeling from biological signals such as vocal tone, facial expressions, and data from wearable devices, as well as from text and patterns of computer use. It is already being deployed in workplaces, including in hiring. Loss of privacy is just the beginning: workers worry about biased AI and about having to perform the ‘right’ expressions and body language for the algorithms.

  • farsinuce@feddit.dk · 22 points · edited · 8 months ago

    Interesting timing. The EU has just passed the Artificial Intelligence Act, setting a global precedent for the regulation of AI technologies.

    A quick rundown of what it entails and why it might matter in the US:

    What is it?

    • The EU AI Act is a comprehensive set of rules aimed at ensuring AI systems are developed and used ethically, with respect for human rights and safety.
    • The Act targets high-risk AI applications, including those in employment, healthcare, and policing, requiring strict compliance with transparency, data governance, and non-discrimination.

    Key Takeaways:

    • Prohibited Practices: Certain uses of AI, such as manipulative behavioral techniques or unfair surveillance, are outright banned.
    • High-Risk Regulation: AI systems with significant implications for people’s rights must undergo rigorous assessments.
    • Transparency and Accountability: AI providers must be transparent about how their systems work, particularly when processing personal data.

    Why Does This Matter in the US?

    • Brussels Effect: Similar to how GDPR set a new global standard for data protection, the EU AI Act could influence international norms and practices around AI, pushing companies worldwide to adopt higher standards.
    • Cross-Border Impact: Many US companies operate in the EU and will need to comply with these regulations, which might lead them to apply the same standards globally.
    • Potential for US Legislation: The EU’s move could catalyze similar regulatory efforts in the US, promoting a broader discussion on the ethical use of AI technologies.

    Emotion-tracking AI is covered:

    Banned applications: The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.



    • DarkThoughts@fedia.io · 8 points · 8 months ago

      Definitely a good start. Surveillance (or “tracking”) is one of those areas where “AI” is actually dangerous, unlike some of the more overblown topics in the media.

      • farsinuce@feddit.dk · 9 points · 8 months ago

        I spent the better part of 45 minutes writing and revising my comment, so thank you sincerely for the praise; English is not my first language.

        • Melmi@lemmy.blahaj.zone · 2 points · 8 months ago

          If you wrote this yourself, that’s even more ironic, because you used the same format that ChatGPT likes to spit out. Humans influence ChatGPT -> ChatGPT influences humans. Everything’s come full circle.

          I ask, though, because on your profile you’ve used ChatGPT to write comments before.