MSc in Pharmaceutical Business and Technology

Thesis - Investigating the determinants of success in the design of systems for Adverse Event Reporting in Post Marketing Surveillance Studies.

Section 1: Methodology

1 General approach - introduction to methodology

The manner in which a research study intends to add knowledge to a given field, and hence its utility, is largely determined by the research paradigm it employs. As defined by Saunders et al., (2009, p. 118), a paradigm is "a way of examining social phenomena from which particular understandings of these phenomena can be gained and explanations attempted". The choice of paradigm has important consequences for how we interpret the observations we make and, as such, how we develop knowledge.

To illustrate the research paradigm pursued in the current study, Saunders' research onion, as displayed in figure 3, is used to show the choices made in relation to the various design aspects and how they relate. The research onion represents an overall research strategy as being composed of interrelated layers that build upon one another, beginning with philosophical roots and building toward the decision on how to collect and analyse data. Each layer will be discussed in turn. This section concludes with a discussion of the ethical considerations that underpin the research design process.

2 Approach in this study

Layer 1 Philosophy: Philosophical underpinnings are mainly concerned with two questions: what is the nature of reality (ontology), and how can knowledge about reality be created (epistemology)? The philosophical underpinnings of this research are realist. Ontologically this implies a belief that there is an objective reality; in this case, there are natural phenomena that affect the unit of analysis, i.e. the conduct of PASS studies. In this way we are ontologically closer to a positivist approach than to an interpretivist approach (which holds that reality is socially constructed). In terms of epistemology, however, a realist stance implies that while there is an objective truth, the manner in which we can understand that truth is subject to the value biases of the researcher and, in many cases, the subjects. This implies that the knowledge we discover is contextual, i.e. affected by the context in which it was discovered. This is in contrast to the purely positivist approach, which is concerned only with pure causality.

Layer 2 Approaches: The next layer relates to whether we intend to use observations to test an established theory (a deductive approach) or to build from observation to generate theory (an inductive approach). The approach proposed is to build from pre-existing literature and experience to examine in practice which factors are critical to the design of systems for reporting adverse events. In this way the study is most akin to a deductive approach, as it uses empirical data to further test and examine existing knowledge and theory.

Layer 3 Research strategy: In choosing a research strategy, the major strategies outlined in Saunders et al., (2009) were explored. Ethnography and grounded theory were not selected as they were inconsistent with the overall aim and philosophy underpinning the approach. Archival research was strongly considered; however, access to sufficient archival data was a barrier. An experimental approach was deemed infeasible due to the complexity of recreating real-life conditions in a controlled manner. The survey, case study and action research approaches were seen as good options consistent with the approach. Difficulty in securing permissible access to document a specific case in sufficient detail, however, led to the decision to choose the survey approach. A survey method typically involves asking questions of relevant subjects to generate data about the subject phenomenon. A survey approach allows for the collection of data from a sizeable population in an economical manner (Saunders et al., 2009). In addition, the survey approach allows for data that can be analysed in a consistent manner and presented to the reader in a way that is easily interpreted and has a high degree of reliability.

Layers 4 and 5 - Methodological Approach and Timeframe
In deciding on the methodological approach, a key choice as outlined in Saunders et al., (2009) is between a mono-method and a mixed or multi-method approach. A mono-method approach involves using a single data collection technique and analysis procedure, whereas a mixed approach combines multiple. In the current case a mono-method approach was chosen; as above, the survey will be the main source of information.

In terms of timeframe, the data collected are cross-sectional; longitudinal studies are mainly concerned with the evolution of a topic over time, which is not relevant in the current case.

Layer 6 Data Collection and Analysis
Consistent with the research objectives formulated in the introductory section, the major objective of the study is to rank the critical success factors associated with the successful design of systems for adverse event reporting in post marketing surveillance studies. The Analytic Hierarchy Process (AHP) was chosen as the approach to achieve this objective. Below we outline the major features of the approach and justify the decision to choose the AHP as the primary analytic approach. As the AHP requires that data be collected in a structured manner, the approach to data collection is discussed after the method is introduced.


3 The Analytic Hierarchy Approach

Overview

AHP was developed by Thomas Saaty (1979) as an applied tool for use in multi-criteria decision analysis. The process is designed to analyse a given choice in terms of the key factors that underpin the decision. These factors are arranged in a structured format or hierarchy, beginning with the overall goal, followed by the decision criteria (and sub-criteria) and the alternatives (Saaty, 2008). In this manner, AHP provides a means to break down a complex decision into its constituent parts for analysis. AHP is typically used to rank alternatives (for example, makes of cars).

Application of AHP in this study
In this study, however, the focus is not on ranking alternatives against the relevant criteria, but on creating a tool that can be used to rank alternatives. The focus of analysis will thus be the relative importance of different criteria or success factors. In this the study follows Sambasivan and Fei, (2008) and Salmeron and Herrero, (2005), who apply the AHP method for similar purposes. Sambasivan and Fei, (2008) employ the method to identify the importance of factors for the implementation of environmental management systems, while Salmeron and Herrero, (2005) use the AHP for the ranking of critical success factors related to the successful use of information systems. The output of the current application of AHP will therefore be a ranking of the critical factors that affect the design of adverse event reporting systems, which can in turn be used to rank alternative designs.

To rank factors that influence a decision, the AHP method relies on a form of measurement called relative measurement (Brunelli, 2014). Relative measurement focuses on the proportional difference between objects, in contrast to classical measurement, which focuses exclusively on quantities. This form of measurement is particularly well suited to the measurement of intangible factors. In the current context, we are interested in measuring criteria in relation to their importance for achieving a particular goal, i.e. designing a system for adverse event reporting. Taking a relative measurement approach, we are not interested in an absolute measure of importance but rather in the importance of a factor relative to equivalent factors. We thus avoid the need to create a tangible scale of measurement for something that is not necessarily tangible, i.e. the importance of a given factor. To illustrate how the AHP was employed in this study we examine the two critical processes that underpin it: the structuring of the decision hierarchy, and the creation of pairwise comparison matrices to derive priority vectors.

AHP Process 1: Creating the Decision Hierarchy
The first critical design process in the AHP approach is structuring the decision problem. In the current context the subject matter represents a multi-criteria design problem. The system for adverse event reporting consists of a number of processes that result in the desired output. When considering the design of the system, the designer must therefore consider how the system will perform in the execution of these processes. The performance of a given system design in executing a given process creates a criterion by which the designer judges that design in terms of its effectiveness in achieving the desired output or goal. Complexity enters the decision-making process when there are multiple competing criteria. Under the AHP approach the solution to dealing with this complexity is to structure the decision into a decision hierarchy. The construction of the decision hierarchy to choose between alternative options consists of four steps, represented graphically in its general form in figure 1 (the completed form of the hierarchy created in this study is reported in the results and displayed in figure 4):

Step 1 - Define the goal
The goal represents the overall objective of the exercise. In the current context this is the effective design of a system for adverse event reporting.

Step 2 - Identify the criteria
Criteria represent the characteristics that make one alternative more preferable for achieving a goal than another. In creating a decision hierarchy there may be multiple levels of criteria; specifically, there may be a number of sub-criteria that determine the preferability of a given option relative to a criterion, as described below.

Step 3 - Create a ranking of factors through pairwise comparison
This process involves creating a series of rankings for each level of criteria. Using figure 4 as an example, this would involve performing pairwise comparisons within each group of sub-criteria in relation to their importance for performance against the primary criterion. The primary criteria are then compared with each other in relation to the achievement of the overall goal. The result is a series of weightings of criteria that are used to create a linear weighting system to compare alternatives in step 4. The weightings for a group of criteria form a priority vector, and how priority vectors are derived is described in more detail below.

Step 4 - Compare alternatives
The last step involves comparing alternatives in relation to their preferability based on the criteria identified in step 2. The criteria used to compare alternatives are those at the lowest level of the hierarchy; as represented in figure 4 this refers to the sub-criteria. Alternatives are compared relative to criteria in the same manner in which criteria are compared relative to each other, that is, through the use of pairwise comparisons and the creation of priority vectors. An overall score for each alternative is created by summing the alternative's relative ranking for a given criterion, s_ij, multiplied by that criterion's weighting of importance, w_j, represented as follows:
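S_i = Σ_j w_j · s_ij

(Here S_i, the overall score of alternative i, is notation introduced for illustration; s_ij and w_j are as defined above.)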

In this study we do not rank alternative designs but rather focus on deriving rankings of the importance of each criterion. Considering how we would rank alternative designs is nonetheless a useful illustration of the utility of the created tool for the operational design of systems for reporting adverse reactions. Specifically, alternative system designs can be measured relative to each key component, with each measure weighted by the importance of that component, thus providing valuable insight into the relative strengths and weaknesses of competing designs.


AHP Process 2: Using the Pairwise Comparison Matrix to derive the Priority Vector

The pairwise comparison matrix is the output of a relative measurement process. Here decision makers are asked to rank two criteria based on their importance for achieving the given goal. The scale used is typically the standard Saaty scale, reproduced in Table 1.

Table 1: The Saaty Scale. Source: Saaty (1979)

Judgement                     | Scale value
Most Important                | 9
Considerably More Important   | 7
More Important                | 5
Slightly More Important       | 3
Both Are Equally Important    | 1

The output of this process is a matrix of pairwise comparisons, represented in its general form in equation 1. Each component of the matrix represents the comparison between one factor and another; for example a_12 represents the comparison of factor 1 with factor 2.
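In its general reciprocal form (a plain-text rendering of what equation 1 describes, with diagonal entries equal to 1 and a_ji = 1/a_ij):

    A = [ 1        a_12     ...   a_1n ]
        [ 1/a_12   1        ...   a_2n ]
        [ ...      ...      ...   ...  ]
        [ 1/a_1n   1/a_2n   ...   1    ]     (Equation 1)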

For example, consider the ranking of three factors x1, x2 and x3. Suppose that through the pairwise comparisons it was found that x1 was slightly more important than x2, x1 was most important relative to x3, and x2 was more important than x3. This ranking can be represented in matrix form as follows.
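Using the Saaty scale values for these judgements (3 for slightly more important, 9 for most important, 5 for more important) and filling in the reciprocals gives:

             x1     x2     x3
        x1   1      3      9
        x2   1/3    1      5
        x3   1/9    1/5    1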

Once the pairwise comparison matrix is formed, the next step is to derive the priority vector w, which gives the relative measure of importance of each factor. The two most prominent methods to derive the priority vector are the eigenvector method and the geometric mean method. The eigenvector method, proposed by Saaty, exploits a result from linear algebra, the Perron-Frobenius theorem, to derive w (for more information see Saaty, 2008). The geometric mean method obtains the components of w by taking the geometric mean of each row of the matrix and dividing by a normalisation term so that the components of w sum to 1. In the current case both methods were employed; only the results from the eigenvector approach are reported, however, as the difference between the two was negligible.
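A minimal sketch of how the two derivation methods could be implemented is given below (illustrative only, using numpy and the example matrix above; the function names are assumptions and not part of the survey tooling used in this study).

```python
import numpy as np

def priority_vector_eigen(A):
    """Priority vector via Saaty's principal eigenvector method."""
    eigenvalues, eigenvectors = np.linalg.eig(A)
    idx = np.argmax(eigenvalues.real)          # Perron-Frobenius (largest) eigenvalue
    w = np.abs(eigenvectors[:, idx].real)
    return w / w.sum()                          # normalise so components sum to 1

def priority_vector_geometric(A):
    """Priority vector via the geometric mean of each row."""
    gm = np.prod(A, axis=1) ** (1.0 / A.shape[0])
    return gm / gm.sum()                        # normalise so components sum to 1

# Pairwise comparison matrix from the worked example above
A = np.array([[1.0, 3.0, 9.0],
              [1/3, 1.0, 5.0],
              [1/9, 1/5, 1.0]])

print(priority_vector_eigen(A))      # roughly [0.67, 0.27, 0.06]
print(priority_vector_geometric(A))  # very close to the eigenvector result
```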

Checking for consistency
A key assumption underlying the method is that decision makers are capable of stating their preferences accurately. A property that underlies the accuracy of a decision maker's rankings is that they are transitive in their judgements (Brunelli, 2014). Transitivity implies that a decision maker is consistent in judging their preferences over a set of factors. Brunelli (2014) illustrates this through an example. Consider three stones A, B and C. If the decision maker says that A is two times heavier than B, and that B is three times heavier than C, then, if the decision maker is transitive in her judgements, she will say that A is 6 times heavier than C. Should the decision maker say that A is, for example, 4 times heavier than C, then her judgements are intransitive and by implication inconsistent. The property of transitivity is an important one for ensuring the validity of responses. Unfortunately, when surveyed, decision makers are rarely ever fully transitive (Brunelli, 2014). This can be attributed to limits in human cognition, as in most surveys decision makers are asked to make a large number of pairwise comparisons.

While full consistency across choices in an AHP survey is almost never to be expected, consistency is still an important property of the AHP process. If there are a large number of inconsistent judgements the significance of the results is questionable. Saaty (2008) provides a measure to determine the degree of consistency of a series of rankings. This measure is termed the consistency ratio, with details of its calculation outlined in Saaty (2008). For interpretation, the consistency ratio measures the consistency of a set of rankings given by a decision maker against a set of rankings derived randomly. A consistency ratio of 0 indicates a fully consistent decision maker. Saaty (2008) determines that a consistency ratio of 0.10 is the threshold below which decision makers can be determined to be consistent. A ratio of 10% indicates that the judgements of a decision maker are 10% as inconsistent as if the judgements had been derived randomly (Brunelli, 2014). In the current study consistency ratios are calculated using the approach of Saaty (2008).
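A corresponding sketch of the consistency check is shown below (illustrative only; it uses the relation CI = (λ_max − n)/(n − 1) and CR = CI/RI, with Saaty's commonly cited random index values assumed for RI).

```python
import numpy as np

# Commonly cited random consistency indices (RI) for matrix sizes n = 1..10
RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
                6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(A):
    """Consistency ratio CR = CI / RI, where CI = (lambda_max - n) / (n - 1)."""
    n = A.shape[0]
    lambda_max = np.max(np.linalg.eigvals(A).real)
    ci = (lambda_max - n) / (n - 1)
    return ci / RANDOM_INDEX[n]

A = np.array([[1.0, 3.0, 9.0],
              [1/3, 1.0, 5.0],
              [1/9, 1/5, 1.0]])
print(consistency_ratio(A))  # well below 0.10, so these judgements are acceptably consistent
```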

4 Survey Design

As previously mentioned, the structure of the survey used in this study was determined by the AHP method. The survey, as an instrument in the process, requires the elicitation of pairwise comparisons of criteria by decision makers using the Saaty scale. Pairwise rankings were made between all criteria at each level of the hierarchy, as discussed above. The key process in the design of the survey instrument is the creation of the hierarchy structure.
To create the hierarchy we first had to define the goal. Once the goal was defined, the critical criteria that determine the achievement of the goal were identified. In this study the goal was to create an effective design of systems for adverse event reporting. The EMA guideline on good pharmacovigilance practice in relation to the reporting of adverse events was relied on to identify the critical criteria needed to achieve the goal. The key processes identified were the collection of accurate safety information, the processing and reporting of information, and safety information management.

To complete the survey, the next step was to identify the sub-criteria that determine the success of each of the three critical processes. To achieve this, the literature reviewed in section 2 was used to identify critical design criteria for each process. A list of sub-criteria was then generated (see survey in appendix 1). To validate the comprehensiveness of the list of criteria, several interviews were carried out with practitioners in clinical research organisations. Based on the outcome of these interviews, the hierarchy was constructed. The resulting hierarchy is presented and discussed in section 4.1. The survey process in this study thus involved asking respondents to rank the importance of criteria at each level of the hierarchy. A copy of the survey used is presented in Appendix 1.

Sampling for AHP
Sampling, or choosing whom to survey, is a critical component of the implementation of a survey. As per Saunders et al., (2009) there are two major types of sampling frames: probability and non-probability. Probability-based sampling is best suited when the population is known and it is possible to choose a representative sample from the population.


Sampling method for this study

In this study it is not possible to identify the total population of practitioners involved in the design of systems for adverse event reporting. For that reason a non-probability sampling frame is proposed; specifically, a purposive sampling frame was chosen. Purposive sampling involves selecting cases based on their utility for achieving the objective of the research (Lavrakas, 2008). In the current case the focus is on practitioners in Clinical Research Organisations (CROs) and medical practitioners. CROs are companies that are paid to carry out research on behalf of principals (Popescu et al., 2012). The advantage of surveying practitioners in CROs for the current case is twofold.

1. CROs' primary activity is the design and conduct of clinical studies. As such, practitioners within CROs are uniquely positioned to provide insight into the challenges associated with the design and success of systems for adverse event reporting, in contrast to practitioners within pharmaceutical and medical product manufacturers, for whom the design of such systems is a secondary activity that is often outsourced to CROs.

2. CROs are involved in the completion of PASS studies across a broad variety of medicinal products, reducing the risk of sampling bias that would arise from sampling practitioners involved in studies for too narrow a scope of medicinal products.
To examine whether opinions differ based on the respondent's level within the organisation, CRO practitioners were subdivided into operational and management level practitioners. In addition, as outlined in section 2, medical practitioners who act as the reporters of adverse events within the system are key actors in the success of a system for adverse event reporting. For that reason a sample of medical practitioners involved in the design of such systems was also surveyed.

In total 100 practitioners were sent surveys. 80 practitioners from three CRO's were surveyed, with 45 operational level and 35 management level practitioners surveyed. Of the 80 surveys sent 34 valid responses were returned, with 13 management and 21 operational level responses. 20 medics were surveyed, with 8 valid responses. There were thus 42 valid survey responses.

5 Research ethics

Ethical issues in general are defined by Blumberg et al., (2008) as the "norms or standards of behaviour that guide moral choices about our behaviour and our relationships with others". The ethical issues in the conduct of research identified by Saunders et al., (2009) are as follows:

i. privacy of possible and actual participants;

ii. voluntary nature of participation and the right to withdraw partially or completely from the process;

iii. consent and possible deception of participants;

iv. maintenance of the confidentiality of data provided by individuals or identifiable participants and their anonymity;

v. reactions of participants to the way in which you seek to collect data, including
embarrassment, stress, discomfort, pain and harm;

Research ethics for this study

In relation to issues i.-iv. (section 3.6), all possible efforts were made to ensure that privacy, consent, voluntary participation and confidentiality were maintained.

Due to GDPR restrictions and company policy, it was a prerequisite for this research that all data, including the names of the companies involved, was anonymous. No personal names, email addresses or company names were recorded as part of the surveys carried out for this research. The study was granted permission by the pharmaceutical companies involved to go ahead under these conditions.


Section 2. Analysis and Findings

The results are presented in two sections, consistent with the research objectives outlined in the introduction. The first research objective was to identify the design components of systems for adverse event reporting. Section 5.1 achieves this by outlining the hierarchy structure created to decompose the complex problem of designing a system for adverse event reporting into its constituent parts (termed criteria) for analysis. The second objective is to identify the criticality of the various design components. Consistent with research objective 2, section 5.2 presents the outcome of the AHP process and the resulting ranking of critical criteria in terms of their importance.

Finally, the third research objective is to examine the extent to which opinions regarding the criticality of various design components vary across types of practitioners. Accordingly, in presenting the ranking of criteria in section 5.2, rankings are presented at the aggregate level and at the level of each subgroup. In addition, differences in the structure of rankings are analysed by examining the relative variation in responses (the degree to which individuals differ in opinion) and the inconsistency (quality) of rankings across the three groups.

1 Hierarchy structure.

Presented in figure 5 is the hierarchy structure created based on the literature review and interviews with practitioners. The overall goal is the effective reporting of adverse events in post marketing surveillance studies. The overall decision is broken into three main sections based on the critical processes involved in systems for reporting adverse events.

The first process is the efficient collection of accurate safety information. Factors that determine success in collecting accurate safety information make up the sub-criteria. First, the effective collection of safety information depends to a great extent on the management and sufficient training of all participants involved. Clear and easily completed reporting structures improve both the quality of reports and the ability of participants to submit them. Satisfactory and adequate training of reporters in the need to recognise and report potential safety problems is crucial. Moreover, effective training of the recipients of these reports ensures that quality is maintained at all phases of the process. The use of the standard Medical Dictionary for Regulatory Activities (MedDRA) and a standard International Conference on Harmonisation (ICH) compliant information design further improves the quality of the data collection process.

The second process is the effective processing and reporting of information. The first sub-criterion is an effective safety database that allows for the accurate and consistent inputting of data and, importantly, offers query functionality. In addition to the technological platform there must be strong processes in place to ensure accurate reporting. The critical sub-processes identified here are the clear definition of workflow and responsibilities before implementing the database system. In addition a complete Standard Operating Procedure (SOP) must be in place, and the SOP must be monitored and updated to maintain compliance with regulatory requirements.
Safety information management is the last part of an effective pharmacovigilance system. The detection of obscure and unexpected relationships between drug use and side effects is a fundamental element of pharmacovigilance. The execution of a standardised query process with which to follow up reports is the first sub-criterion. In addition, a system for signal detection and evaluation is critical. Finally, the assessment of large volumes of reports requires an efficient and precise database, as well as automated statistical methodology.
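For reference, the goal, primary criteria and sub-criteria described above can be summarised compactly. The sketch below is an illustrative Python representation of the hierarchy only, not part of the survey instrument itself.

```python
# Illustrative representation of the decision hierarchy (figure 5)
hierarchy = {
    "goal": "Effective reporting of adverse events in post marketing surveillance studies",
    "criteria": {
        "Efficient collection of accurate safety information": [
            "Clear and easily completed reporting forms",
            "Adequate training of the reporters",
            "Adequate training of recipients of reports",
            "Management of information",
            "Following the Medical dictionary MedDRA",
        ],
        "Effective processing and reporting of information": [
            "Effective safety database",
            "Defined workflow and responsibilities",
            "Comprehensive Standard Operating Procedure (SOP)",
            "Monitoring and updating SOPs",
        ],
        "Safety information management": [
            "Standardised Query Process",
            "System for signal description and evaluation",
            "Data storage capabilities",
            "Statistical capabilities",
        ],
    },
}
```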


2 Ranking of factors

In this section the results of the ranking of factors are reported. The average ranking of factors, the standard deviation of rankings and the average consistency scores are reported at the level of the total group and for each subgroup. The ranking of a factor represents the relative importance of that factor and is normalised so that all rankings sum to one. The standard deviation is a measure of the variance in responses amongst the surveyed respondents: a lower standard deviation indicates a higher degree of uniformity in the opinions of the respondents, while a higher standard deviation indicates a higher degree of disagreement amongst respondents regarding the importance of a factor.

As outlined in the previous section, consistency scores are a measure of the quality of response. Consistency scores report how consistent the respondents are in reporting their preferences. Lower scores indicate higher consistency while higher scores indicate higher inconsistency. Higher inconsistency indicates a higher degree of uncertainty regarding the accuracy of the result. Consistency scores over the 0.1 threshold are deemed inconsistent and the corresponding results insignificant. To examine the rankings, each group of subfactors will be examined in turn, then the ranking of the main factors, followed by an examination of the global ranking of subfactors. The section concludes with a closer look at consistency across the major subgroups.

Examining first the ranking of factors that affect the efficient collection of accurate safety information, the rankings, standard deviations and consistency scores are presented in table 2. Examining the total rankings first, clear and easily completed reporting forms is the highest ranked factor with a score approaching 0.4. Adequate training of the reporters is ranked second at 0.2, and all remaining factors have similar scores ranging from .154 to .12.

Table 2. Ranking of factors that affect the efficient collection of accurate safety information. Values are mean (standard deviation).

Factor | Management | Medic | Operations | All Groups
Adequate training of recipients of reports | 0.181 (0.066) | 0.079 (0.024) | 0.165 (0.091) | 0.154 (0.083)
Clear and easily completed reporting forms | 0.53 (0.043) | 0.408 (0.131) | 0.29 (0.178) | 0.387 (0.174)
Adequate training of the reporters | 0.087 (0.096) | 0.412 (0.148) | 0.198 (0.145) | 0.204 (0.172)
Management of information | 0.103 (0.055) | 0.058 (0.024) | 0.184 (0.118) | 0.135 (0.103)
Following the Medical dictionary MedDRA | 0.099 (0.048) | 0.043 (0.028) | 0.163 (0.08) | 0.12 (0.079)
Consistency | 0.019 (0.011) | 0.016 (0.006) | 0.038 (0.028) | 0.028 (0.023)
Average standard deviation | 0.062 | 0.071 | 0.122 | 0.122

Examining across the sub-groups, a similar pattern to the total group is found for operations; however, there is less of a spread in ranking between the highest (.29) and lowest (.163) factors. For the medics there is a reversal, as adequate training of reporters is ranked slightly higher than clear and easily completed reporting forms, at .412 and .408 respectively. This is interesting as medics are frequently the reporters of information. The remaining three factors are again similar in score, albeit lower than the average scores of the total group, ranging from .079 to .043. Finally, for management, clear and easily completed reporting forms is again the highest ranked factor, this time by a considerable distance. Interestingly, adequate training of recipients of reports is the second highest factor at .181; management in this instance would typically have responsibility for the training of the recipients of reports. Further, in direct contrast to the medics, adequate training of the reporters is the lowest ranked factor at .087.

Looking at the average standard deviation measures across groups, it can be seen that deviations are highest for operations and lowest for management. This indicates less agreement amongst operations respondents and higher levels of agreement amongst management. Consistency scores across the groups are again highest for operations, indicating higher inconsistency for this sub-group. Consistency scores are, however, lower for the medics than for management. As all consistency scores are below the 0.1 threshold, the results can be regarded as significant.

The ranking of factors that determine the effective processing and reporting of information is presented in Table 3. Defined workflow and responsibilities is ranked highest at the aggregate level at .304. The remaining factors have similar rankings ranging between .235 and .225. In the operations group the effective safety database is ranked highest at .371, followed by defined workflow and responsibilities at .239, with the remaining two factors roughly equal. For the medics there is a clear preference for a comprehensive Standard Operating Procedure (SOP) at .506, with monitoring and updating SOPs second at .365; amongst medics there is thus a clear importance attributed to SOPs in general. For management, defined workflow and responsibilities is ranked highest. Again, management have direct responsibility for this task and this may be affecting the ranking. Monitoring and updating SOPs is ranked second and the remaining two factors are of roughly equal ranking.

Table 3. Ranking of factors that affect the effective processing and reporting of information. Values are mean (standard deviation).

Factor | Management | Medic | Operations | Total
Effective safety data base | 0.111 (0.079) | 0.078 (0.014) | 0.371 (0.162) | 0.235 (0.184)
Defined workflow and responsibilities | 0.565 (0.167) | 0.052 (0.012) | 0.239 (0.17) | 0.304 (0.242)
Comprehensive Standard Operating Procedure (SOP) | 0.1 (0.108) | 0.506 (0.175) | 0.197 (0.15) | 0.225 (0.201)
Monitoring and updating SOPs | 0.224 (0.068) | 0.365 (0.181) | 0.193 (0.127) | 0.235 (0.138)
Consistency | 0.029 (0.023) | 0.028 (0.014) | 0.059 (0.069) | 0.044 (0.052)
Average standard deviation | 0.106 | 0.096 | 0.152 | 0.191

Measures of standard deviation are highest for the operations sub-group, with lower and roughly equivalent standard deviations for management and medics. Notably, there is a considerable rise in standard deviation at the aggregate level for this main factor, rising to .191 from .122 for the previous main factor. This indicates a higher level of disagreement across the groups. This can readily be seen in the respective rankings of factors, with no relative equivalence in the importance of any sub-factor across groups. Consistency scores are again below the 0.1 threshold. Across the groups, operations is again the most inconsistent. Notably, there is also a rise in inconsistency for this main factor relative to the previous one.

The ranking of factors by importance for safety information management is presented in table 4. At the aggregate level, having a system for signal description and evaluation is ranked highest at .302, followed by a standardised query process at .285, statistical capabilities at .257 and finally data storage capabilities at .156. The operations subgroup largely retains that preference ordering, however with a higher weight on data storage capabilities than on statistical capabilities. For medics, a standardised query process and a system for signal description are ranked highest; notably, both data storage and statistical capabilities are ranked considerably lower in the medics subgroup. For management, data storage capabilities are ranked lowest, while statistical capabilities are determined to be the most important, followed by a system for signal description and evaluation and a standardised query process.

Table 4. Ranking of factors that affect safety information management. Values are mean (standard deviation).

Factor | Management | Medic | Operations | Total
Standardised Query Process | 0.173 (0.142) | 0.48 (0.103) | 0.28 (0.167) | 0.285 (0.181)
System for signal description and evaluation | 0.29 (0.11) | 0.317 (0.115) | 0.304 (0.124) | 0.302 (0.115)
Data storage capabilities | 0.07 (0.028) | 0.083 (0.04) | 0.237 (0.094) | 0.156 (0.108)
Statistical capabilities | 0.467 (0.174) | 0.12 (0.126) | 0.179 (0.092) | 0.257 (0.191)
Consistency | 0.024 (0.013) | 0.034 (0.028) | 0.042 (0.039) | 0.035 (0.031)
Average standard deviation | 0.114 | 0.096 | 0.119 | 0.149

As with the ranking of factors of importance for the effective processing and reporting of information, there is no clear uniformity of ranking across the groups, with a clear difference of preferences. This leads to a relatively high standard deviation of .149 when examined at the aggregate level. At the group level, however, there is more uniformity of opinion. Operations still has the highest standard deviation, though it is lower than for the previous two main factors. There is more deviation in the management group at .114, while the deviation amongst the medics is equivalent to the previous main factor at .096. Consistency scores are again globally below the threshold of 0.1. Overall inconsistency for safety information management is lower than for the effective processing and reporting of information but higher than for the efficient collection of accurate safety information. Inconsistency is again highest in the operations sub-group. For the first time, inconsistency is higher for the medics sub-group than for the management sub-group.

Table 5 presents the results of the ranking of the principal factors in the effective reporting of adverse events in post marketing surveillance. At the aggregate level it can be seen that the efficient collection of accurate safety information is the most important factor at .44. This is followed by the effective processing and reporting of information at .328 and safety information management at .232. Across the sub-groups there is near uniformity in the order of the ranking, consistent with the aggregate rankings. The only deviation is in the medic subgroup, where safety information management and the effective processing and reporting of information are ranked equivalently.

Table 5. Ranking of factors that affect the effective reporting of adverse events in post marketing surveillance. Values are mean (standard deviation).

Factor | Management | Medic | Operations | Aggregate
Efficient collection of accurate safety information | 0.444 (0.134) | 0.398 (0.1) | 0.454 (0.163) | 0.44 (0.143)
Effective processing and reporting of information | 0.321 (0.076) | 0.301 (0.056) | 0.342 (0.136) | 0.328 (0.107)
Safety information management | 0.235 (0.104) | 0.301 (0.056) | 0.204 (0.111) | 0.232 (0.105)
Consistency | 0.012 (0.017) | 0.003 (0.006) | 0.015 (0.021) | 0.012 (0.018)
Average standard deviation | 0.105 | 0.071 | 0.137 | 0.118

As expected with the near uniformity in rank order across groups, the average aggregate standard deviation of the rankings, at .118, is lower than for the rankings of the sub-factors. Across the groups, standard deviation is highest for operations, followed by management and medics. In terms of consistency, the overall score is the lowest across all rankings at .012. Across the groups, rankings were most consistent for medics, followed by management and operations.

Table 6 reports the global ranking of sub-criteria in terms of their importance. Global rankings are obtained by multiplying the weighting of each sub-criterion by the weighting of its respective primary criterion. Rankings are reported at the aggregate level and at the group level, as above. Examining the aggregate scores, clear and easily completed reporting forms is the highest ranked factor by a wide margin of importance relative to the second ranked factor. It is further ranked first by management and operations and second by medics. This result strongly indicates that this factor is critically important. After clear and easily completed reporting forms there is a spread of scores across the remaining 12 criteria. To illustrate this, consider the difference of .07 in score between the 1st and 2nd ranked criteria at the aggregate level; for the remaining factors there is a similar spread of .07 between the 2nd and 13th ranked criteria. As above, this is strongly driven by differences across the different sub-groups, providing further evidence that there are clear differences between the sub-groups in terms of preferences.
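To make this calculation concrete: at the aggregate level, clear and easily completed reporting forms has a local weight of 0.387 (Table 2) and its parent criterion, efficient collection of accurate safety information, a weight of 0.44 (Table 5), giving a global score of roughly 0.387 × 0.44 ≈ 0.17, as reported in Table 6.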

Table 6. Global ranking of sub factors. Each cell gives the rank and (global score).

Sub Factor | Management | Medics | Operations | Aggregate
Clear and easily completed reporting forms | 1st (0.23) | 2nd (0.16) | 1st (0.14) | 1st (0.17)
Defined workflow and responsibilities | 2nd (0.18) | 13th (0.02) | 4th (0.09) | 2nd (0.1)
Adequate training of the reporters | 7th (0.05) | 1st (0.17) | 3rd (0.09) | 3rd (0.09)
Effective safety data base | 10th (0.04) | 10th (0.02) | 2nd (0.13) | 4th (0.08)
Monitoring and updating SOPs | 5th (0.07) | 5th (0.11) | 10th (0.06) | 5th (0.07)
Comprehensive Standard Operating Procedure (SOP) | 11th (0.03) | 3rd (0.15) | 8th (0.06) | 6th (0.07)
System for signal description and evaluation | 6th (0.07) | 6th (0.1) | 9th (0.06) | 7th (0.07)
Standardised Query Process | 12th (0.03) | 4th (0.14) | 11th (0.06) | 8th (0.07)
Adequate training of recipients of reports | 4th (0.07) | 8th (0.03) | 7th (0.07) | 9th (0.06)
Statistical capabilities | 3rd (0.12) | 7th (0.04) | 13th (0.04) | 10th (0.06)
Management of information | 9th (0.04) | 11th (0.02) | 5th (0.08) | 11th (0.06)
Following the Medical dictionary MedDRA | 8th (0.04) | 12th (0.02) | 6th (0.07) | 12th (0.05)
Data storage capabilities | 13th (0.02) | 9th (0.02) | 12th (0.05) | 13th (0.03)

To formally examine the extent to which preferences differ across groups, the correlation in weightings between groups is examined. To achieve this, Pearson's correlation coefficient is calculated for the global scores of sub-criteria across the groups. The results are presented in table 7. A score of 1 indicates perfect positive correlation, a score of -1 indicates that scores are perfectly negatively correlated, and a score of 0 indicates that scores are uncorrelated. Here it can be seen that all scores are positively correlated. The correlation is, however, much stronger between management and operations (0.4722) than between medics and management (0.1234) or medics and operations (0.1307). Medics have a different role in the system for adverse reporting, as the reporters of events, which may explain this difference.

Table 7. Correlation analysis of scores across groups

           | Management | Medics | Operations
Management | 1          |        |
Medics     | 0.1234     | 1      |
Operations | 0.4722     | 0.1307 | 1
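The correlations in Table 7 can be reproduced along the following lines (a minimal sketch; the vectors below are the rounded group-level global scores from Table 6, so coefficients computed from them will differ slightly from those reported, which are based on unrounded scores).

```python
import numpy as np

# Global scores per sub-factor (rounded values from Table 6), in the same row order
management = np.array([0.23, 0.18, 0.05, 0.04, 0.07, 0.03, 0.07, 0.03, 0.07, 0.12, 0.04, 0.04, 0.02])
medics     = np.array([0.16, 0.02, 0.17, 0.02, 0.11, 0.15, 0.10, 0.14, 0.03, 0.04, 0.02, 0.02, 0.02])
operations = np.array([0.14, 0.09, 0.09, 0.13, 0.06, 0.06, 0.06, 0.06, 0.07, 0.04, 0.08, 0.07, 0.05])

# Pairwise Pearson correlation coefficients between the three groups
corr = np.corrcoef([management, medics, operations])
print(np.round(corr, 4))
```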

The final part of this section further examines consistency in ranking across the three groups. The consistency scores for each round of rankings are presented in figure 6. Here it can be seen that across all four rounds inconsistency is highest for the operations group, while for both medics and management inconsistency is largely the same. The relatively higher inconsistency of the operations group may reflect the relative lack of experience of these practitioners.


Section 3. Conclusion

This final section discusses the implications of the results for practitioners and for future research in the area. Considering first the hierarchy structure presented in section 4.1, the framework provides a tool to assist practitioners in the design of adverse reporting systems through the identification of the critical components of such a system. It is important to note, however, that while the framework is comprehensive at this moment in time, it is likely that as technology and the regulatory environment evolve, so too will the critical components of design. Future research should therefore look to expand upon the framework where necessary.

The ranking in section 4.2 provides direction for practitioners as to the relative criticality of the various design factors for the key processes of adverse reporting. The global ranking of subfactors in table 6 provides strong evidence that clear and easily completed reporting forms is the most critical factor in the design of such systems. Examined in the context of the literature reviewed in section 2, this result lends significant weight to the importance of the research of Lu (2010) and Lee et al. (2019), both of which investigate means to improve the quality of reporting forms. The findings of this study reinforce the importance of the findings of those studies.

Examining the rankings for each sub-group, there were clear differences across groups in terms of preferences and quality of response. Examining preference heterogeneity first, there was a particularly strong difference between the preferences of medics and practitioners (management and operations), as evidenced by the correlation analysis in table 7. The differences in prioritised rankings point to this being attributable to differences in roles within the system. For example, adequate training of the reporters was ranked 7th by management but 1st by the medics, who are the reporters of information. Further, as evidenced by the reported standard deviations, there was much less within-group preference heterogeneity, with standard deviations consistently lower within groups than across them. It is argued that this reflects that practitioners with different roles within the process have different perspectives on what determines success in the design of adverse reporting systems. For the designers of adverse reporting systems this suggests that it is important to consider different perspectives in the design process.

Examining the quality of response across groups, both the medic and management respondents show higher consistency in their rankings than operational staff, indicating higher quality and significance of responses from these sub-groups. In addition, there is a higher standard deviation in the operations sub-group relative to the other two, indicating a higher level of disagreement within that sub-group. It is argued that this result is most likely indicative of the relative gap in experience between operations staff, who tend to be junior, and medics and management staff, who tend to have relatively more experience. This poses an interesting question from a methodological perspective, namely whether more purposeful sampling from groups with more experience would improve the quality of results. An alternative approach would be to weight the results of more experienced respondents higher in the construction of global rankings, to reflect higher levels of expertise.

In conclusion, the reporting of adverse events in post market surveillance of pharmaceutical products is a critical component of the pharmacovigilance process. To date there have been multiple papers that deal with designing reporting systems and with implementing critical elements of reporting systems. The former studies are mainly descriptive and do not deal with the relative importance of the various aspects of designing systems for adverse reporting; the latter are narrow in focus in that they deal with only one aspect of adverse reporting systems. This study contributes to the literature by creating a ranking of the critical criteria and components in the design of systems for adverse reporting. The resulting framework and criteria ranking provide a tool that will assist practitioners in the future design of systems for adverse reporting. In addition, the results of the study point to a number of areas in which future research should prioritise efforts to improve the design of adverse reporting systems and thereby improve pharmacovigilance.
