Industry-wide effort for Pharmacometrics Data Standards

Case Study



The Challenge

Analysis datasets are fundamental components of pharmacometric analyses, and their quality and readiness correlate strongly with the efficiency and impact of pharmacometric deliverables overall. Despite this, dataset structures vary widely within and across organizations.

The Solution

A group of more than 30 scientists, programmers and regulators came together in late 2015, prompted by a poster authored by our team members Andrijana Radivojevic and Henning Schmidt. Since then, this effort within ISoP (the International Society of Pharmacometrics) has been led by Andrijana Radivojevic, working towards establishing standards for pharmacometric datasets. The group's ultimate goal is to reduce the time required to specify, implement and review datasets, and to facilitate portability within and across organizations.

The Benefit

The anticipated benefits of a common, tool-agnostic standard for pharmacometric data are many. Perhaps the most important is a considerable increase in consistency across the community as a whole, allowing data to be prepared, reviewed, used, shared, and reused more efficiently within and across organizations. Standards would also support regulatory compliance and audit-readiness, and they would improve quality: a standardized lexicon and data dictionary would result in fewer errors in the datasets themselves, leading to higher-quality analyses and recommendations. Finally, standards would facilitate the automation or semi-automation of data preparation. Many of these activities could be supported by freely available, standardized software tools, which would further increase efficiency.

Standards would also make the training of programmers and pharmacometricians both faster and easier. They would provide a platform for knowledge sharing, help close communication gaps among functional groups both within and between institutions, and support stable, streamlined processes, which would in turn increase quality and reduce cycle time. All of this is anticipated to give scientists and programmers more time to focus on their mission: understanding and making decisions based on data.
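To illustrate the kind of automation a common standard enables, the sketch below checks a dataset header against a fixed column specification. This is a minimal, hypothetical example: the column names shown follow widely used NONMEM-style conventions (ID, TIME, DV, AMT, EVID, MDV) and are not the ISoP standard itself, and the function name is ours.

```python
# Hypothetical check of an analysis dataset against a fixed column
# standard. The required columns below follow common NONMEM-style
# conventions and are illustrative only, not the ISoP specification.

REQUIRED_COLUMNS = ["ID", "TIME", "DV", "AMT", "EVID", "MDV"]

def missing_columns(header, required=REQUIRED_COLUMNS):
    """Return the required columns absent from a dataset header."""
    present = {c.strip().upper() for c in header}
    return [c for c in required if c not in present]

# Example: a dataset missing the EVID and MDV record flags
header = ["ID", "TIME", "DV", "AMT", "DOSE_UNIT"]
print(missing_columns(header))  # ['EVID', 'MDV']
```

With an agreed standard, checks like this could run automatically at dataset hand-off, turning part of the review step into a mechanical validation rather than a manual inspection.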
