Timnit Gebru, one of Google’s top artificial intelligence researchers, says the company abruptly fired her. The technical co-lead of Google’s Ethical Artificial Intelligence Team claims managers were upset about an email she’d sent to colleagues. The email, which was sent to the Brain Women and Allies listserv, voiced frustration that managers were trying to get Gebru to retract a research paper.
Timnit Gebru, a co-leader of the Ethical Artificial Intelligence team at Google, said she was fired for sending an email that management deemed “inconsistent with the expectations of a Google manager.”
AI ethics pioneer’s exit from Google involved research into risks and inequality in large language models
Following a dispute on Wednesday over several emails and a research paper, AI ethics pioneer and research scientist Timnit Gebru no longer works at Google. The research paper at the center of her exit questions the wisdom of building large language models and examines who benefits from them, who bears the negative consequences of their deployment, and whether there is such a thing as a language model that’s too big.
Timnit Gebru says a manager asked her to either retract or remove her name from a research paper she had coauthored on bias in AI, because an internal review had found its contents objectionable.
The workers were involved in labor organizing at the company and participated in walkouts last year.
Bioterrorists can trick scientists into making dangerous toxins or viruses by infecting lab computers with malware that alters the synthetic DNA sequences they order for experiments
Cybersecurity researchers uncovered an online attack that tricks scientists into creating toxic chemicals or deadly viruses in their own labs by replacing ordered DNA sequences with malicious ones.
The attack disrupted the district’s websites and remote learning programs, as well as its grading and email systems, officials said.
Researchers show that computer vision algorithms pretrained on ImageNet exhibit multiple, distressing biases
State-of-the-art image-classifying AI models trained on ImageNet, a popular (but problematic) dataset containing photos scraped from the internet, automatically learn humanlike biases about race, gender, weight, and more, according to new research from scientists at Carnegie Mellon University and George Washington University.
A fight over replacing bail with "risk assessment tools" has split reform advocates. Some fear the change will worsen anti-Black discrimination.
Woman becomes first healthcare cyberattack death after German hospital was forced to turn her away when hackers deactivated its computers
The female patient, suffering from a life-threatening illness, had to be turned away and died after the ambulance carrying her was diverted. If the investigation leads to a prosecution, it would be the first confirmed case in which a person has died as the direct consequence of a cyberattack.