AI needs human partners to elevate its work – and keep it in check
Monthly insights into human-AI collaboration – and how to make it work for your teams.
Our ongoing series brings you essential AI news and takeaways every month, helping you stay informed and ready for what’s next in the world of artificial intelligence.
August 2025 edition
AI takes a front seat in pharmaceutical innovation
Innovative clinical drugs typically take anywhere from five to more than 20 years to develop, with exhaustive preclinical research needed to determine where and how new breakthroughs might be achievable. Now, pharmaceutical researchers are using AI to dramatically shorten the process.
The big picture: Generative AI is supercharging drug development
A team of MIT researchers used generative AI algorithms to design brand-new antibiotics for two treatment-resistant infections – including the notoriously hard-to-combat staph infection known as MRSA. Researchers used the AI to generate 36 million potential chemical compounds and then predict which ones were most likely to be effective, safe, and genuinely different from what’s already on the market. In an age where antibiotic resistance poses serious threats to the lives of millions of people worldwide, this breakthrough could have enormous implications for global health.
The takeaway: An expanded scope of AI-powered possibility
What’s most exciting about AI is its potential to free humans for innovation by handling repetitive tasks – not just in the corporate workspace, but in the research labs behind lifesaving future technologies. The use of AI in drug development is a case in point that drives home the potential of transformative tech.
New learnings in the use of AI hiring tools
It’s well known that age, race, gender, and other non-performance variables can unwittingly inform organizations’ hiring and compensation practices. An expanding body of research shows that AI hiring tools can reproduce these same biases – which means AI-supported hiring requires careful human oversight.
The big picture: AI tools mirror human shortcomings
A new study across five commonly used LLMs, including ChatGPT, found that AI tools consistently recommend lower salaries for women than for equally qualified men. In one example, where the AI was given the same prompt for a male and a female candidate and then asked to suggest a salary for each, it recommended $280,000 for the female candidate and $400,000 for the male. The study also found that white and expatriate candidates were suggested higher salaries than non-white and refugee candidates, respectively.
It isn’t the first study to suggest that AI tools can perpetuate bias in hiring. Last year, University of Washington researchers found that AI tools ranked job candidates according to the gender and race associated with their names. In tests on over 550 real resumes, researchers found that LLMs favored white-associated names 85% of the time and female-associated names only 11% of the time, and never preferred Black male-associated names over white male-associated ones.
On top of the recent findings, a collective-action lawsuit is currently underway against the HR and finance platform Workday, alleging that the company’s AI hiring tools unlawfully discriminate against candidates over the age of 40. A collective of potentially hundreds of millions of job applicants could be covered by the ruling.
The takeaway: Be proactive and transparent
While some researchers once believed that AI could remove bias from hiring processes, it’s becoming clear that AI hiring tools can reinforce existing gaps in candidate selection and remuneration. This doesn’t mean that these tools should be scrapped. But it does mean that organizations must ensure that human oversight remains a part of every hiring process.
It’s also a good idea for organizations to monitor the efficacy of AI hiring tools, both in general and with regard to equitable decision-making.
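One lightweight way to monitor for the kind of salary disparities described above is a paired-prompt audit: submit otherwise-identical candidate profiles that differ only in a demographic signal (such as the name on a resume) and compare the tool’s recommendations. The sketch below is a minimal, hypothetical illustration of how such a comparison could be scored – the function name, threshold, and data shape are our own assumptions, not part of any vendor’s API:

```python
# Hypothetical paired-prompt salary audit sketch. Assumes you have already
# collected salary suggestions from an AI hiring tool for pairs of prompts
# that are identical except for one demographic signal.

def audit_salary_gaps(pairs, threshold=0.05):
    """Compare recommended salaries for matched candidate pairs.

    `pairs` is a list of (salary_group_a, salary_group_b) tuples drawn from
    otherwise-identical prompts. Returns the mean relative gap across all
    pairs and a list of pairs whose gap exceeds `threshold`.
    """
    gaps = []
    flagged = []
    for a, b in pairs:
        gap = (a - b) / max(a, b)  # signed relative difference
        gaps.append(gap)
        if abs(gap) > threshold:
            flagged.append((a, b, round(gap, 3)))
    mean_gap = sum(gaps) / len(gaps) if gaps else 0.0
    return mean_gap, flagged

# Example using the figures from the study cited above ($400,000 vs. $280,000)
# alongside a second, near-identical pair for contrast.
mean_gap, flagged = audit_salary_gaps([(400_000, 280_000), (100_000, 99_000)])
```

Run regularly against a fixed set of matched profiles, a check like this gives a simple trend line for whether a tool’s recommendations are drifting apart across demographic groups – a starting point for the human review the findings above call for, not a substitute for it.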
Some jurisdictions are even beginning to set up legal safeguards against AI-enabled hiring discrimination. Since July 2023, New York City’s Automated Employment Decision Tools Law has barred employers from using AI tools for hiring or promotion decisions without an independent bias audit and notice to candidates. Illinois and Colorado have passed similar laws that will go into effect on January 1 and February 1 of next year, respectively.
Organizations may opt to take a cue from these early adopters and incorporate similar practices as company policy. Those that do will be best positioned to leverage the benefits of AI tools without incurring unwanted ramifications along the way.