Exscientia are the founding industrial partner of the AI for Drug Discovery Collaborative Training Partnership at the Digital Environment Research Institute (DERI), Queen Mary University of London, funded by BBSRC, and we are proud to celebrate a recent publication from a student in the first cohort of the programme. In this exciting paper, Dee et al. (2024) develop a new fusion AI model that achieves state-of-the-art performance in predicting drug mechanism of action from low-cost Cell Painting imaging datasets. Tools such as this can streamline mechanism-of-action studies, helping us design better drugs for patients. https://lnkd.in/eNYnPimW #AI #AIDDCTP
I'm very proud to share that the first work of my PhD has now been published in iScience (Cell Press)! 🙌 https://lnkd.in/dpH_nZzV

⭐ To summarise: in this work we created a deep learning algorithm able to discriminate between the responses of cells 🔬 to 10 different types of compounds that are integral to developing drugs against various forms of cancer 💊

Thank you to my supervisors Greg Slabaugh, Anna Lobley, and Ines Sequeira for your hard work and support on the project, and to the AI for Drug Discovery PhD programme @ Digital Environment Research Institute (DERI), as well as Exscientia for helping to found the programme. Hopefully there will be much more to come! 🎉

⭐ More specifically: our work, entitled Cell-Vision Fusion, presents a deep learning model fusing three network architectures designed for three separate data modalities: images, image-based profiles, and compound chemical structures ⚗️. We show that our approach can differentiate between the cellular responses to ten kinase inhibitor compounds, classifying mechanism of action with 70% accuracy 🏹. We have also contributed novel standardisation, normalisation and augmentation approaches for Cell Painting image data 🎨, which reduce the impact of batch effects and allow the model to focus on the biological signal present in the images.

👨‍💻 The associated GitHub repository, containing all code necessary to replicate our results, along with access to the dataset, can be found at: https://lnkd.in/d37zgtiy 😎
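For readers curious what a three-modality fusion classifier looks like in code, here is a minimal PyTorch sketch of the general idea (late fusion of an image encoder, a profile encoder, and a compound-structure encoder into a single mechanism-of-action classifier). This is an illustrative toy, not the published Cell-Vision Fusion architecture; the branch designs, dimensions, and the 5-channel Cell Painting input are assumptions for the example.

```python
import torch
import torch.nn as nn

class FusionSketch(nn.Module):
    """Toy three-branch late-fusion classifier (illustrative, not the paper's model)."""
    def __init__(self, profile_dim=1500, fp_dim=1024, n_classes=10, embed_dim=128):
        super().__init__()
        # Image branch: tiny CNN standing in for a real image encoder;
        # Cell Painting images typically have 5 fluorescence channels.
        self.image_branch = nn.Sequential(
            nn.Conv2d(5, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        # Profile branch: MLP over precomputed image-based feature profiles.
        self.profile_branch = nn.Sequential(
            nn.Linear(profile_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim),
        )
        # Structure branch: MLP over a compound fingerprint vector.
        self.structure_branch = nn.Sequential(
            nn.Linear(fp_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim),
        )
        # Late fusion: concatenate the three embeddings, then classify.
        self.classifier = nn.Linear(3 * embed_dim, n_classes)

    def forward(self, image, profile, fingerprint):
        z = torch.cat([
            self.image_branch(image),
            self.profile_branch(profile),
            self.structure_branch(fingerprint),
        ], dim=1)
        return self.classifier(z)

model = FusionSketch()
logits = model(torch.randn(2, 5, 64, 64),   # batch of 5-channel images
               torch.randn(2, 1500),        # image-based profiles
               torch.randn(2, 1024))        # compound fingerprints
print(logits.shape)  # one logit per class for each sample in the batch
```

Concatenating branch embeddings before the final classifier is only one of several fusion strategies; the key point is that each modality gets an encoder suited to its structure before the signals are combined.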