AI algorithms can tell your race from medical scans

Neural networks can correctly guess a person’s race just by looking at their medical x-rays, and researchers have no idea how they do it.

There are physical features that can give clues to a person’s ethnicity, such as the colour of their eyes or skin. Strip those away, as an x-ray does, and it’s very difficult for humans to tell. That’s not the case for AI algorithms, according to a study that has not yet been peer reviewed.

A team of researchers took x-rays of different parts of the body, including the chest and hands, labelled each image according to the patient’s race, and trained five different models on them. The machine-learning systems were then tested on how well they could predict someone’s race given just their medical scans.

They were surprisingly accurate. The worst-performing model predicted the right answer 80 per cent of the time, and the best managed 99 per cent, according to the paper.
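For readers curious about the mechanics, the sketch below shows how an experiment of this kind is typically set up: an off-the-shelf image classifier is fine-tuned on x-rays that have been labelled by race, then judged on how often it predicts the label for unseen scans. This is a minimal illustration, not the researchers’ code; the folder layout (./xrays/<race_label>/*.png), the ResNet-34 backbone, and the training settings are all assumptions.

```python
# Minimal sketch (not the study's actual code): fine-tune a standard CNN to
# predict a race label from x-ray images stored as ./xrays/<race_label>/*.png
# (a hypothetical layout assumed here for illustration).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # x-rays are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder treats each sub-directory name as a class label.
dataset = datasets.ImageFolder("./xrays", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Replace the final layer so the network outputs one score per race label.
model = models.resnet34(weights=models.ResNet34_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Held-out scans would then be scored on how often the model’s top prediction matches the recorded label, which is where the accuracy figures above come from.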

“We demonstrate that medical AI systems can easily learn to recognise racial identity in medical images, and that this capability is extremely difficult to isolate or mitigate,” the team warns [PDF].

“We strongly recommend that all developers, regulators, and users who are involved with medical image analysis consider the use of deep learning models with extreme caution. In the setting of x-ray and CT imaging data, patient racial identity is readily learnable from the image data alone, generalises to new settings, and may provide a direct mechanism to perpetuate or even worsen the racial disparities that exist in current medical practice.”

Ban AI nudity tools, says British MP

Maria Miller, the Conservative Member of Parliament for Basingstoke, reckons machine-learning algorithms that generate fake nude images should be banned.

These so-called “deepfakes”, images doctored using AI software, have been circulating for years. Several tools on the internet allow perverts to feed the algorithms a picture of someone and get a naked image of them in return. The face is kept the same, but the body is made up.

Miller is known for being vocal about revenge porn. She believes even if the computer-generated images are fake, the harm inflicted on victims is real.

“At the moment, making, taking or distributing without consent intimate sexual images online or through digital technology falls mostly outside of the law,” she told the Beeb.

“It should be a sexual offence to distribute sexual images online without consent, reflecting the severity of the impact on people’s lives.” Miller wants to raise the issue in a parliamentary debate and introduce new legislation to ban deepfake-making software through the UK’s upcoming Online Safety Bill.

AI researchers at Facebook want to pry into your secret encrypted chats without decrypting them

Facebook has employed a team of AI engineers to figure out ways to analyze encrypted messages without decrypting them first.

Homomorphic encryption could help the social media biz sniff through WhatsApp chats to collect data that can be used to better target users with adverts, according to The Information [paywalled].

Facebook could, in theory, use homomorphic encryption techniques to figure out what products and services people are interested in. Adverts for those goods could then pop up whenever users logged into their social media accounts.
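The core idea is that some encryption schemes let you compute on ciphertexts and only ever decrypt the result. The toy example below uses the classic Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of their plaintexts. It is purely illustrative, has nothing to do with Facebook’s actual research, and its tiny hard-coded primes make it wildly insecure.

```python
# Toy Paillier cryptosystem: a classic additively homomorphic scheme.
# Illustration only -- tiny hard-coded primes, no padding, not secure.
from math import gcd
import random

p, q = 1789, 1867                # toy primes; real keys use ~1024-bit primes
n = p * q
n_sq = n * n
g = n + 1                        # standard choice that simplifies decryption
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n_sq)), -1, n)   # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    return (L(pow(c, lam, n_sq)) * mu) % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
c1, c2 = encrypt(42), encrypt(58)
combined = (c1 * c2) % n_sq
print(decrypt(combined))         # -> 100, computed without decrypting c1 or c2
```

Fully homomorphic schemes go further and support both addition and multiplication on encrypted data, which is closer to what anything like ad targeting over message contents would require.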

The effort appears to be led by Kristin Lauter, a cryptography expert who recently left Microsoft after two decades to join Facebook as head of its West Coast AI research group. A Facebook spokesperson, however, told the publication that it is “too early for us to consider homomorphic encryption for WhatsApp at this time.”

See more here: theregister.com

Header image: Stanford University

