Vanshikha Singh
Bayes Labs
Aug 23, 2021

ROLE OF ARTIFICIAL INTELLIGENCE IN NANOTOXICOLOGY

Undoubtedly, nanomedicine, a unique and fast-emerging technology used mostly for drug delivery, has gained immense success in research, yet the question remains: to what extent are nanomedicines being approved, licensed and brought to market? Ever wondered why, despite their advantages and targeted scope, they have not been used commercially at the ground level? The data show that nano-drug revenue makes up a very small, almost negligible, share of pharmaceutical companies' total revenue. Therefore, the ethical and regulatory challenges rooted in toxicology need to be assessed, verified and worked upon. This is where artificial intelligence and machine learning come into the picture.

The evolution of artificial intelligence (AI) and machine learning (ML) has led to the development of novel approaches for toxicity testing. Computational toxicology is a new field that uses sophisticated models to decode the factors responsible for toxic interactions. To produce meaningful results, a model has to take all potential interactions of the substance into account. Computational modeling in the field of nanomedicines and nanomaterials aims to establish a correlation between their biokinetics, the dynamics of the biological response, and their fate once the compound has reached the target organ of interest. In particular, ML is used here to simulate biokinetics and to study the interaction of nanomaterials in different environments. Similarly, biology-based mathematical models are well suited for modeling the fate of nanotherapeutics and engineered nanomaterials within experimental systems.
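
As a rough illustration of the kind of biology-based model referred to above, the sketch below simulates a hypothetical two-compartment biokinetic model (blood and target organ) for a nanomaterial dose using SciPy; the compartment structure and all rate constants are assumptions made purely for illustration.

```python
# Minimal two-compartment biokinetic sketch (blood <-> target organ),
# with first-order elimination from blood. All rate constants are
# hypothetical and chosen only to illustrate the modeling approach.
import numpy as np
from scipy.integrate import solve_ivp

k_bt = 0.8   # transfer rate, blood -> target organ (1/h), assumed
k_tb = 0.2   # transfer rate, target organ -> blood (1/h), assumed
k_el = 0.1   # elimination rate from blood (1/h), assumed

def biokinetics(t, y):
    blood, organ = y
    d_blood = -k_bt * blood + k_tb * organ - k_el * blood
    d_organ = k_bt * blood - k_tb * organ
    return [d_blood, d_organ]

# Initial condition: full dose in blood, nothing yet in the target organ.
sol = solve_ivp(biokinetics, t_span=(0, 48), y0=[1.0, 0.0],
                t_eval=np.linspace(0, 48, 97))

print("Fraction of dose in target organ after 24 h:", sol.y[1][48])
```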

The most studied approaches for assessing nanomaterial-induced toxicity are structure-based mathematical models (e.g., Bayesian methods and Markov chain Monte Carlo simulation) and, in particular, quantitative structure-activity relationships (QSAR) at the nanoscale (nano-QSAR). The QSAR computational approach predicts the biological activity of a compound from its physicochemical properties and theoretical molecular descriptors; hence, physicochemical properties serve as a prerequisite for these models.
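
As a toy illustration of the Bayesian/MCMC side of this toolbox, the sketch below uses a hand-rolled Metropolis-Hastings sampler to estimate the slope of a simple dose-response relation; the data, the log-linear model form, the noise level and the flat prior are all assumptions made only for illustration.

```python
# Toy Metropolis-Hastings sampler for the slope of a simple log-linear
# dose-response model: viability ~ Normal(1 - b*log10(dose), sigma).
# Data, model form, and prior are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
doses = np.array([1.0, 3.0, 10.0, 30.0, 100.0])        # ug/mL, assumed
viability = np.array([0.95, 0.85, 0.70, 0.55, 0.40])   # fraction, assumed
sigma = 0.05                                            # assumed noise level

def log_posterior(b):
    if b <= 0:                          # flat prior on b > 0
        return -np.inf
    pred = 1.0 - b * np.log10(doses)
    return -0.5 * np.sum((viability - pred) ** 2) / sigma**2

samples, b = [], 0.1
for _ in range(20000):
    proposal = b + rng.normal(0, 0.02)  # random-walk proposal
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(b):
        b = proposal
    samples.append(b)

burned = np.array(samples[5000:])       # discard burn-in samples
print(f"posterior mean slope: {burned.mean():.3f} +/- {burned.std():.3f}")
```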

APPROACHES TO QSAR MODELING

There are three approaches that can be employed for QSAR modeling:

1. Theoretical models: These are based on chemical reactivity; the energies of the molecular orbitals constituting the substance of interest are used to derive its QSAR.

2. Statistical models: These use pattern recognition to correlate potentially useful descriptors with the effect one is trying to predict, and are effective for calibrating the parameters surrounding the molecular descriptors. Useful physicochemical properties such as surface charge, aggregation and solubility are used to predict the biological activities of nanomaterials (a minimal sketch of this approach follows the list below).

3. Hybrid QSAR models: These combine mechanistic reasoning with statistical fitting, using theoretical considerations to identify the descriptors that are likely to be predictive.
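
To make the statistical approach above concrete, here is a minimal sketch of a descriptor-based nano-QSAR fit with scikit-learn; the descriptor set, the synthetic values and the random-forest choice are illustrative assumptions, not a validated nano-QSAR model.

```python
# Minimal statistical nano-QSAR sketch: fit a random-forest model that maps
# physicochemical descriptors to a measured cytotoxicity endpoint.
# All values below are hypothetical and serve only to show the workflow.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Columns: particle size (nm), zeta potential (mV), solubility (mg/L),
# specific surface area (m^2/g) -- hypothetical values for 8 nanomaterials.
X = np.array([
    [20,  -35, 0.10, 180],
    [50,  -20, 0.30, 120],
    [80,   -5, 1.20,  60],
    [15,  -40, 0.05, 210],
    [100,  10, 2.50,  40],
    [35,  -25, 0.20, 150],
    [60,  -10, 0.80,  90],
    [25,  -30, 0.15, 170],
])
y = np.array([0.35, 0.50, 0.75, 0.30, 0.90, 0.45, 0.65, 0.40])  # toxicity index, assumed

model = RandomForestRegressor(n_estimators=200, random_state=0)
print("cross-validated R^2:", cross_val_score(model, X, y, cv=4, scoring="r2").mean())

model.fit(X, y)
print("predicted toxicity for a new 40 nm, -15 mV particle:",
      model.predict([[40, -15, 0.4, 110]])[0])
```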

All of these QSAR models aim to predict the toxicity of a nanomaterial and rely on extrinsic material properties, which determine the bio-nano interactions used to evaluate reactivity and nanotoxicity. These physicochemical descriptors, which describe the fate of nanomaterials within the body or in the environment, are incorporated into QSAR models. Some of these descriptors, and the models built around them, include:

1. Nano-QSAR toxicity models: The computational hybrid nano-QSAR model for nanocytotoxicity adopts a dual approach: the enthalpy of formation, which is related to the band-gap energy, and the electronegativity associated with stability.

2. Point of zero zeta potential (PZZP): This is used in computational models to predict the agglomeration/aggregation behaviour of nanomaterials.

3. Molecular-modelling-based SAR models: These are used for 1D engineered nanomaterials such as nanoneedles or carbon nanotubes (CNTs) and predict their reactivity from geometric descriptors such as the length-to-diameter ratio (L/D) of a nanoneedle.

4. 3D nano-QSAR models: These have been developed by matching low-energy conformations, which are docked into ADME models to design novel nanotherapeutics. Quasi-SMILES can be used to represent the physicochemical properties and experimental conditions: diameter, length, surface area, in vitro toxicity assay, cell line, exposure time, and dose (see the encoding sketch after this list).
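
As a small illustration of how such mixed numeric and categorical conditions can be turned into model inputs, the sketch below one-hot encodes hypothetical quasi-SMILES-style records with pandas; the column names and values are assumptions for illustration, not a standard quasi-SMILES implementation.

```python
# Sketch of turning quasi-SMILES-style records (physicochemical properties
# plus experimental conditions) into a numeric feature matrix that a
# nano-QSAR model can consume. The example records are hypothetical.
import pandas as pd

records = pd.DataFrame([
    {"diameter_nm": 20, "length_nm": 800,  "surface_area_m2g": 150,
     "assay": "MTT", "cell_line": "A549",  "exposure_h": 24, "dose_ugml": 10},
    {"diameter_nm": 40, "length_nm": 1200, "surface_area_m2g": 90,
     "assay": "LDH", "cell_line": "HepG2", "exposure_h": 48, "dose_ugml": 50},
    {"diameter_nm": 15, "length_nm": 500,  "surface_area_m2g": 200,
     "assay": "MTT", "cell_line": "HepG2", "exposure_h": 24, "dose_ugml": 25},
])

# One-hot encode the categorical experimental conditions so that every
# record becomes a fixed-length numeric vector.
features = pd.get_dummies(records, columns=["assay", "cell_line"])
print(features.head())
```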

USE OF ADMET STUDIES TO DEVELOP SAFE NANOBIOMATERIALS

The development and testing of conventional nanotherapeutics is a multi-step, time-consuming process. Target compounds are investigated for their pharmacokinetic properties, metabolism, potential toxicity and adverse effects; the whole process is extremely expensive, tedious and labour-intensive, and most often ends with the pipeline being restarted all over again. With the growth of chemical synthesis and compound screening, early information on absorption, distribution, metabolism, excretion, and toxicity (together called ADMET data) has become a necessity. This has led to the development and use of various high-throughput in vitro ADMET screens. Current software is of limited use owing to the scarce nanotoxicology data available and therefore only includes models for irritation, sensitization, immunotoxicology, and neurotoxicity. Additionally, there are certain important endpoints for which in silico models still remain to be built, but the current scope and use of AI in drug discovery points the way toward solving the problems associated with nanomaterials and their toxicity as well.
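
For a flavour of what an early in silico pre-screen can look like, here is a small sketch that computes a few RDKit descriptors and applies Lipinski-style cutoffs; this is a conventional small-molecule illustration of the idea rather than a nanomaterial-specific ADMET model, and the candidate SMILES strings are arbitrary examples.

```python
# Crude in silico pre-screen in the spirit of early ADMET filtering:
# compute a few RDKit descriptors and apply Lipinski-style cutoffs.
# Small-molecule illustration only; the candidates are arbitrary examples.
from rdkit import Chem
from rdkit.Chem import Descriptors

candidates = {
    "aspirin": "CC(=O)Oc1ccccc1C(=O)O",
    "caffeine": "Cn1cnc2c1c(=O)n(C)c(=O)n2C",
}

for name, smiles in candidates.items():
    mol = Chem.MolFromSmiles(smiles)
    mw, logp = Descriptors.MolWt(mol), Descriptors.MolLogP(mol)
    hbd, hba = Descriptors.NumHDonors(mol), Descriptors.NumHAcceptors(mol)
    passes = mw <= 500 and logp <= 5 and hbd <= 5 and hba <= 10
    print(f"{name}: MW={mw:.1f} logP={logp:.2f} HBD={hbd} HBA={hba} "
          f"-> {'pass' if passes else 'flag'}")
```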
