Timnit Gebru, a co-leader of the Ethical Artificial Intelligence team at Google, said she was fired for sending an email that management deemed “inconsistent with the expectations of a Google manager.”
Please note that some content may be behind a paywall or allow only a limited number of free articles.
Google’s Co-Head of Ethical AI Says She Was Fired for Email
AI ethics pioneer’s exit from Google involved research into risks and inequality in large language models
Following a dispute over several emails and a research paper, AI ethics pioneer and research scientist Timnit Gebru no longer works at Google as of Wednesday. The paper at the center of her exit questions the wisdom of building large language models and examines who benefits from them, who bears the negative consequences of their deployment, and whether a language model can be too big.
Prominent AI Ethics Researcher Says Google Fired Her
Timnit Gebru says a manager asked her to either retract or remove her name from a research paper she had coauthored on bias in AI, because an internal review had found its contents objectionable.
Google Researcher Says She Was Fired Over Paper Highlighting Bias in A.I.
Timnit Gebru, one of the few Black women in her field, has voiced exasperation over the company's response to efforts to increase minority hiring.
Federal Labor Agency Says Google Wrongly Fired 2 Employees
The workers were involved in labor organizing at the company and participated in walkouts last year.
Bioterrorists can trick scientists into making dangerous toxins or viruses by infecting lab computers with malware that alters synthetic DNA they produce for experiments
Cybersecurity researchers uncovered an online attack that tricks scientists into creating toxic chemicals or deadly viruses in their own labs by replacing ordered DNA sequences with malicious ones.
Ransomware Attack Closes Baltimore County Public Schools
The attack disrupted the district’s websites and remote learning programs, as well as its grading and email systems, officials said.
Researchers show that computer vision algorithms pretrained on ImageNet exhibit multiple, distressing biases
State-of-the-art image-classifying AI models trained on ImageNet, a popular (but problematic) dataset containing photos scraped from the internet, automatically learn humanlike biases about race, gender, weight, and more, according to new research from scientists at Carnegie Mellon University and George Washington University.
California may replace cash bail with algorithms — but some worry that will be less fair
A fight over replacing bail with "risk assessment tools" has split reform advocates. Some fear the change will worsen anti-Black discrimination.
Majority of Europeans would consider human augmentation, study finds
A study finds that two-thirds of Europeans surveyed would consider human augmentation. The article breaks down the study to see how location, age, and gender affected responses, and what concerns people had.