The Effect of Longer Development Times on Product Pipeline Management Performance

In the pharmaceutical industry, value is being destroyed through longer product development times. Given that patent lives are (normally) fixed at 20 years, the double hit of increasing time to market is evident: higher R&D costs and less time at market before generic competitors enter. The policy implications are massive: a huge and permanent shift away from internal R&D towards partnerships, licensing deals and acquisitions of more innovative biotechnology companies. In this study, we build a system dynamics model of the product development pipeline for a single company operating in the pharmaceutical market. The study shows that in the presence of loss of value due to longer lead times, it is more advantageous to: (a) work faster to reduce the backlog of projects; and (b) increase the number of projects started whenever it is possible to reduce complexity in the pipeline; it also shows that (c) the optimal decision on resource allocation is independent of the loss of value due to longer lead times.


Introduction
In the pharmaceutical industry, value is being destroyed through longer product development times. It now takes, on average, 12-13 years to bring a new product to market, as opposed to around 8 years a decade or so ago (Cook, 2006; Paich, Peck, & Valant, 2004). Given that patent lives are (normally) fixed at 20 years, the double hit of increasing time to market is evident: higher R&D costs and less time at market before generic competitors are able to enter the market (A. M. Clark & Berven, 2004).
Age in a stock of projects, or more specifically time to market, is a critical policy determinant in pharmaceuticals. In practice, drug development can be thought of as a series of stages, or stocks (Figueiredo & Joglekar, 2007; Figueiredo & Loiola, 2012; Paich et al., 2004). The time in any one stage is less important than the total time to market, which can be thought of as the cycle time of a project.
Large pharmaceutical companies (e.g. Novartis, Pfizer, AstraZeneca, GSK) have to make massive policy decisions based on this phenomenon. The longer a project stays in development, the more it costs and the less market value it can ultimately create. The policy conundrum is whether to fail fast, i.e. to eliminate risky projects early in development (before they consume scarce resources), or to bet that projects can become blockbusters (i.e. massive monopoly products that generate billions in revenue each year before they come off patent) (A. M. Clark & Berven, 2004). This is a huge and widely discussed phenomenon in the pharmaceutical industry that has wiped billions off market values. The problem is well illustrated by Pfizer (Wilson, 2010), which has managed double-digit growth for years. But in the near future, some of its most lucrative drugs will come off patent while its pipeline of new drugs is running dry.
The policy implications are massive: a huge and permanent shift away from internal R&D towards partnerships, licensing deals and acquisitions of more innovative biotechnology companies. Major deals are being made between old pharmaceutical companies that have empty R&D pipelines but possess the infrastructure to market new drugs, and new biotech companies that have technology but no infrastructure (Wilson, 2010). Most of these deals are being made on faith rather than solid valuations, because most new biotech companies still have no products on the market.
There are many examples of this trend. In 2006, AstraZeneca paid nearly a billion pounds for Cambridge Antibody Group, a biotech company with no products but possessing advanced monoclonal antibody technology (Lavelle, 2006). But monoclonals are still highly risky: one such product caused the disastrous and highly publicized clinical trial failure that nearly killed six volunteers in London, and forced the German drug company that developed it into bankruptcy. The question is how to determine the value of unproven and speculative technology. Dynamic modeling has much to offer here. The time to market identified by system dynamics models can be instrumental in a significant shift in R&D policy, away from internal development towards external licensing and partnerships. It can also mean that a company should be more daring in its policies, starting the development of more projects and/or reducing the number and size of tasks on each project (reducing their average complexity). Even though there are quite a few studies determining the drivers of cycle time in NPD at the project level (e.g. Cooper & Kleinschmidt, 1994; Griffin, 1997; Zirger & Hartley, 1996), there is a lack of studies at the portfolio level generating policies to reduce the negative impact of longer lead times on performance. This study is an attempt to fill this void.
In this study, we build a system dynamics model of the product development pipeline for a single company operating in the pharmaceutical market. In such a configuration, the way of computing age in a stock of projects will alter the direction of policy recommendations to improve the performance of the system. The more impact time to market has on profit, measured as Net Present Value (NPV), the more radical the change in policies should be. Therefore, the central question of the study is: what are the policies that maximize value creation in the presence of loss of value due to longer lead times? The study shows that in the presence of such loss, it is more advantageous to: (a) work faster to reduce the backlogs of projects; and (b) increase the number of projects started whenever it is possible to reduce complexity in the pipeline; it also shows that (c) the optimal decision on resource allocation is independent of the loss of value due to longer lead times.

The Model
The basic structure and logic of the model, developed by Figueiredo and Loiola (2012, 2014), are simple: every month, a certain number of projects are started and enter the pipeline. These projects are developed and screened in sequence, before being released into the marketplace. The NPVs of the population of projects are tracked, enabling managers to decide how many projects will be terminated and how much value will be lost due to termination. NPV is a measure of the expected value of a project. Managers make decisions, under uncertainty, based on these estimates. Value creation happens while projects are developed at each stage, and this value creation depends on how intensively the teams are working. It is important to point out that the average NPV of the population of projects is also increased by the screening process, since only the projects with higher value will be approved to the next stage. Besides deciding which projects will be terminated (i.e., defining a screening threshold, or minimum allowable NPV for a project to be approved), managers also decide on four variables: the capacity adjustment bias, the resource allocation across stages, the average complexity of the projects, and the number of project introductions (here called starts). Each of these variables affects capacity utilization (how intensively the teams are working) and therefore the value creation rate. The variable starts is the number of projects introduced into the pipeline every year. It is defined as a random normal variable with a certain mean and standard deviation, as defined by the Novartis dataset (Reyck, Degraeve, & Crama, 2004). The model, therefore, is not deterministic.
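The flow logic described above can be sketched in a few lines of code. This is only an illustrative toy version of the pipeline, not the calibrated model: the development rates, gate pass fractions and monthly starts below are made-up values, and the screening step is reduced to a fixed approval fraction per gate.

```python
import random

def simulate_pipeline(months=120, mean_starts=10.0, sd_starts=2.0, seed=1):
    """Toy sketch of the three-stage pipeline: projects enter the stage 1
    backlog, are developed and screened in sequence, and survivors are
    launched to the market. All parameters here are illustrative."""
    rng = random.Random(seed)
    backlog = [0.0, 0.0, 0.0]     # projects waiting at each stage
    dev_rate = [8.0, 8.0, 8.0]    # projects a stage can develop per month
    pass_frac = [0.6, 0.7, 0.8]   # fraction approved at each gate
    launched = 0.0
    for _ in range(months):
        inflow = max(0.0, rng.gauss(mean_starts, sd_starts))  # stochastic starts
        backlog[0] += inflow
        for i in range(3):
            done = min(backlog[i], dev_rate[i])   # throughput limited by capacity
            backlog[i] -= done
            approved = done * pass_frac[i]        # gate screening
            if i < 2:
                backlog[i + 1] += approved        # flow to next stage's backlog
            else:
                launched += approved              # release to the market
    return backlog, launched
```

Because starts are drawn from a normal distribution, repeated runs with different seeds produce different trajectories, mirroring the non-deterministic character of the model.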
The proxy used for complexity is man-hours per project at each stage. Complexity represents the size, number and relations between tasks in a project. Wheelwright and Clark (1992) describe complexity as a typical decision lever in a Product Pipeline Management (PPM) setting. Complexity selection defines the nature of the tasks and the amount of resources it takes to complete them. Even though the level of complexity at each stage is predetermined to a certain degree by the existence of a minimum number of tasks to be performed, and their sequence, it is fair to assume that managers have considerable freedom when deciding project activities. For example, Thomke and Fujimoto (2000) and Khurana and Rosenthal (1997) recommend the front loading of activities in a project, i.e. increasing complexity and activities early in the development process, as a way of reducing uncertainty and the amount of rework or new work to be done later.
The capacity adjustment bias (variable α) reflects managers' tendencies either to work faster or slower in order to reduce the existing backlogs, or to work, if possible, at a constant rate (the best or nominal rate) to increase value creation (see Figure 1). This variable is defined as a value between zero and one. A value of zero represents an extreme tendency towards working at the best work intensity. A value of one represents a tendency toward adjusting work intensity as needed to reduce the size of backlogs. Each stage has a local capacity adjustment bias. The resource allocation bias (variable β) reflects managers' tendencies to allocate more people to work on the initial, mid and final stages of the pipeline. Each stage receives a fraction of total resources, so that β1+β2+β3=1.
Managers also have a bias towards the allocation of complexity, i.e. they can increase or decrease the average complexity of the projects at any stage of the process. As mentioned previously, the complexity of projects (variable γ) can be measured in many ways depending on the kind of product being developed (lines of code for software, number of parts for a car, etc.), but in this study the average size of the projects is adopted as a measure and proxy of complexity, meaning that a more complex project requires more design and development activities to be performed. Complexity is therefore measured by man-hours per project at each stage.
The performance variables in the model are the total value created (NPV) at the end of the pipeline, the value creation rates at each stage and the respective flows of projects. The adoption of NPV as the only performance criterion for project screening is a necessary simplification; in most companies, more than one factor is used to inform the decision to terminate a project, and different factors may be used depending on the stage of development. For example, a pharmaceutical company might be more concerned with the safety of a substance at the early stages and with manufacturability at later stages.
The Product Pipeline Management problem is structured as a dynamic process in the shape of a chain, as shown in Figure 2. It is therefore reasonable to assume that accumulation and/or starvation might happen in such a chain. Depending on the decisions made by managers, projects may accumulate in early stages, or the later stages may starve if too many projects are terminated early. The dynamic aspect of the pipeline adds complexity to the problem and to the optimization effort. The model structure comprises three processes at any stage of the pipeline: capacity management, value creation and screening. See Figure 3 for the stock and flow structure of a typical stage.

Model operationalization
This section presents the three basic processes in the PPM model. A more detailed description can be found in Figueiredo and Loiola (2012). A list of all the equations in the model can be found in Figueiredo and Loiola (2014).

Capacity management process
A central concept of the model is the utilization of capacity. Figure 3 displays the structure that captures the decision process for adjusting capacity. As was pointed out in the introductory section, research shows that employee productivity (the fraction of time spent on value-adding tasks) initially increases and then decreases as the number of development activities assigned concurrently to each engineer increases (Wheelwright & Clark, 1992). This effect is captured in a function that relates utilization and value created (see Figure 1).
Managers have a fixed amount of resources (employees) at each stage. An increase in capacity, measured in man-hours per month, is only possible by using the existing resources more intensively, thereby increasing their utilization. In case of overcapacity, the utilization equals the demanded capacity based on the backlog. Capacity is adjusted continuously, depending on the value of the target capacity and on the time to adjust capacity. Target capacity is defined as the demanded rate of development at each gate based on the backlog. If the backlog is filled with projects, the target capacity will be higher, resulting in more work intensity, or capacity utilization, by the teams (Figueiredo & Loiola, 2012).
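A minimal sketch of this adjustment rule follows, under the assumption (ours, not necessarily the model's exact formulation) that the realized work intensity is a weighted average of the nominal rate and the backlog-clearing rate, with the weight given by α; the adjustment time and utilization cap are illustrative parameters.

```python
def work_intensity(backlog, nominal_rate, alpha, adjust_time=3.0, max_utilization=1.5):
    """Sketch of the capacity adjustment bias: alpha = 0 keeps teams at the
    nominal (most productive) rate; alpha = 1 adjusts intensity to clear the
    backlog over `adjust_time` months. Parameter names are illustrative."""
    target = backlog / adjust_time                        # demanded rate from backlog
    desired = (1 - alpha) * nominal_rate + alpha * target # bias-weighted blend
    return min(desired, nominal_rate * max_utilization)   # intensity is bounded
```

For example, with a backlog of 30 projects and a nominal rate of 8 per month, a team with α = 0 keeps working at 8, while a team with α = 1 speeds up towards the backlog-clearing rate, up to the utilization cap.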

Value creation process
The available capacity is used within each stage, as shown in Figure 3, during the process of value creation. A certain number of projects enter the stage 1 backlog. The value of the projects is tracked by the model, along with their number. The NPV of the projects is multiplied by a factor that depends on capacity utilization as the projects in the backlog are developed and move to the next phase to be reviewed. The rate 'move to review' is equal to the available capacity, unless there is overcapacity. The projects then reach gate 1, or stage 1 in review. In this phase projects are reviewed and, depending on the average NPV, some fraction will be terminated while the rest follow the flow to the next stage, the backlog of stage 2. Projects that are approved in the third phase are launched to the market. The total NPV created, the number of projects and the average NPV of finished projects are tracked and used as performance measures. These calculations have been simplified by assuming that the time discounting effect is built into the Average NPV at Start parameter (Figueiredo & Loiola, 2012).
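The co-flow of projects and value can be illustrated as follows; the inverted-U multiplier below is a hypothetical stand-in for the model's utilization/value-creation table function (Figure 1), chosen only to show the mechanism of value creation peaking at an intermediate utilization.

```python
def develop(projects, avg_npv, utilization):
    """Sketch of the value-creation co-flow: the stock of projects and their
    average NPV move together, and development multiplies NPV by a factor
    that peaks near utilization = 1.0 (illustrative inverted-U shape)."""
    # Quadratic inverted-U: maximal value creation at utilization = 1.0,
    # falling off when teams are under- or over-loaded.
    gain = 1.0 + 0.2 * (1.0 - (utilization - 1.0) ** 2)
    return projects, avg_npv * gain
```

At the illustrative peak utilization of 1.0 the NPV multiplier is 1.2; pushing teams above or below that point creates less value per project developed.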

Project screening process
The average NPV of the projects feeds into the screening process: the decision to proceed with or terminate a fraction of projects is made depending on the average NPV and a predetermined threshold. The population of NPVs of projects after a review is assumed to follow a Gumbel distribution, because project screening is a search process that selects NPV extreme values (Dahan & Mendelson, 2001; Galambos, 1978; Gumbel, 1958). The Gumbel distribution is the probability distribution for the maximum of multiple draws from exponential-tailed distributions. It applies to NPD problems especially well when there are no specific limits on the potential NPV of a project (Dahan & Mendelson, 2001).
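The screening step can be sketched as a Monte Carlo draw from a Gumbel distribution; the location and scale parameters below are illustrative, not the calibrated Novartis values, and the inverse-CDF sampling is just one convenient way to generate Gumbel variates.

```python
import math
import random

def screened_fraction(threshold, mu=100.0, beta=20.0, n=100_000, seed=7):
    """Monte Carlo sketch of gate screening: project NPVs are drawn from a
    Gumbel(mu, beta) distribution and the fraction at or above the screening
    threshold survives the gate. mu and beta are illustrative values."""
    rng = random.Random(seed)
    survive = 0
    for _ in range(n):
        u = rng.random()
        # Inverse CDF of the Gumbel distribution: F(x) = exp(-exp(-(x-mu)/beta))
        npv = mu - beta * math.log(-math.log(u))
        if npv >= threshold:
            survive += 1
    return survive / n
```

Raising the threshold lowers the surviving fraction but raises the average NPV of the approved population, which is the mechanism by which screening increases average project value from stage to stage.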

Loss of value due to longer development times
This section of the model constitutes an innovation: it incorporates a revenue-versus-time curve, allowing for the loss of value depending on the average time to complete the projects. A sketch of the dynamic structure that captures average lead time per project is shown in Figure 4. The key assumptions of this structure are: (a) in the model, project selection depends solely on the NPV of the projects; managers will not decide whether to terminate a project based on the average time spent developing it (so the discounting of value does not occur until the projects are completed); and (b) the lead times of projects are evenly distributed among the population of projects in the backlog. For each month that passes, one month of time is added to the stock of lead time for each project that is in the stocks on the main flow. When projects go from one stock to the next, their elapsed time is transferred. When lead times are long, value is lost at the end of the pipeline (when projects are released into the market).
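Assumption (b), the even distribution of lead times, makes the co-flow bookkeeping simple: projects that leave a stock carry out a pro-rata share of the accumulated time. A one-step sketch of this aging-and-transfer logic (with hypothetical argument names):

```python
def step_leadtime(projects, total_time, outflow, dt=1.0):
    """One time step of the lead-time co-flow: each project in the stock
    accumulates `dt` months of elapsed time, and projects leaving carry out
    their average elapsed time (even-distribution assumption).
    Returns (remaining projects, remaining total time, average lead time)."""
    total_time += projects * dt                 # aging: dt months per project
    avg = total_time / projects if projects else 0.0
    time_out = avg * outflow                    # elapsed time travels with projects
    return projects - outflow, total_time - time_out, avg
```

Chaining this step across the three stages is what lets the model read off the average lead time of projects just before release, where the loss of value is applied.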
In order to determine the revenue curve (NPV versus lead time), it is necessary to consider the adoption curve of pharmaceutical products. As an increasing number of individuals use the product, its penetration of the market grows. This growth continues until the product has been tried by all the potential users, at which point the product is referred to as 100 percent adopted in the market (Cook, 2006). Using the example of Rogers (1962), it is possible to translate the number of users of the product into the revenue generated by those users. This is illustrated in Cook (2006), who presents a mathematical representation of an adoption curve: "It is this translation of new users over time into revenue that gives rise to the S-shaped adoption curve. This curve is also called a Bass Diffusion Model, a Gompertz Curve, or a type 1 curve" (p. 60).
With the value of rate 7, which tracks the flow of lead time just before projects are released into the market, it is possible to calculate the exit flow of value, or Loss due to Time, as seen in equation 1:

Loss due to Time = Function Loss(Rate 7 / Stage 3 Completions) × Value Approval Rate 3    (1)

The loss due to lead time is a percentage taken from Function Loss. For every value of the rate (Rate 7 / Stage 3 Completions), measured in years per project, a percentage of loss is captured and multiplied by the total Value Approval Rate 3.
Since the average lead time (development time) per project for the calibrated model (Novartis) is 11 years (at the steady state condition and base case values for the variables), we assume that NPV values are forecast taking into account that it will take 11 years to develop each project. If less time is taken, then there is additional NPV creation proportional to the time saved. A curve was built (Function Loss) and calibrated using data from a publicly available pharmaceutical database that lists sales revenue for the top 200 prescription drugs (Drugs.com, n.d.) over a period of 8 years (2002 to 2009). From this database, it can be determined that revenues go up very quickly as drugs are released into the market, and go down to approximately 5% of peak revenues once patents expire (at time = 20 years), on average. In the curve, time spans from 0 to 25 years. If time T > 20 years, revenues per year are fixed at only 5%, so there is a higher loss of NPV due to the longer lead times. If time T < 20 years, there will be one or more years in which revenue is earned during the period of patent life, so the loss of NPV is not as great. Each point on the time axis represents the time it took to develop and release a project after the drug was patented. Using the available data, we forecast the revenue for every period of commercialization, from the date of release into the market to an arbitrary time of 25 years. The total revenues are then summed up, and the values are normalized, with unity at time t = 11 years. It is then easy to calculate the loss or gain relative to the normalized value. For instance, if time t = 11, there is no gain or loss: the company will receive the expected NPV from the project. If time t > 11, there is a progressively larger loss, until time t = 20, where the loss is very high, since the patent will already have expired on the date of release. Figure 5 shows the curve.
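The logic of the Function Loss curve can be approximated in code. The sketch below uses a flat revenue profile (peak rate while the patent is alive, 5% of peak afterwards) instead of the S-shaped adoption curve, so it is a simplification intended only to reproduce the normalization at t = 11 years described above.

```python
def loss_fraction(lead_time_years, patent_life=20, horizon=25, peak_rev=1.0):
    """Rough sketch of Function Loss: revenues accrue at the peak rate while
    the patent is alive and at 5% of peak afterwards, summed to a 25-year
    horizon; values are normalized so an 11-year lead time gives 1.0.
    The flat revenue profile is a simplification of the adoption curve."""
    def total_rev(t):
        on_patent = max(0, min(horizon, patent_life) - t)   # years sold on patent
        off_patent = horizon - t - on_patent                # years sold off patent
        return on_patent * peak_rev + off_patent * 0.05 * peak_rev
    return total_rev(lead_time_years) / total_rev(11)       # normalized at t = 11
```

Under this simplified profile, a lead time shorter than 11 years gives a multiplier above 1 (extra NPV), while lead times approaching or exceeding the 20-year patent life drive the multiplier towards the 5% off-patent floor.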

Calibration
The model developed by Figueiredo and Loiola (2012) was calibrated to the Novartis innovation pipeline (Reyck et al., 2004). This case study has all the data necessary for the calibration, including NPV values at each stage, flows, complexity and resources. The Novartis pipeline has four stages, but the first stage (basic research) was excluded and only three stages were considered in this study. That is why it can be safely assumed that the patent has already been filed in year 1. The pipeline was calibrated for a steady state condition, in which value creation is at its maximum and there is a bias towards reducing the backlogs (α=1). In the calibration procedure, the following parameters were kept at exactly the same values as in the dataset: starts, resource allocation fractions, average project complexity and termination rates. The following outputs were matched by performing an iterative adjustment of the Gumbel function look-up tables (table functions were used to calculate the fractions of projects that were terminated or approved), changing the mean gain and variance while keeping the nominal development times within a reasonable range: the average backlog in each stage and the average NPV gain or loss in each stage. The calibration achieved a goodness of fit of ±1% for all parameters, except the nominal development times. The calibrated parameters are listed in Table 2 and Table 3.

Hypotheses
When managers have a consistent bias towards reducing backlogs (variable α is set to unity at all stages), the development teams work faster whenever necessary to reduce the backlog of projects, avoiding accumulation of projects and increases in lead time (Loch & Terwiesch, 1999; Moorthy & Png, 1992; Repenning, 2001). In such a condition, the loss due to development time should be reduced, since the queuing physics of the pipeline is taken into account in order to reduce lead times.
Hypothesis 1: In the presence of loss of value due to longer development times, managers should have a bias towards reducing backlogs of projects. In such a case, projects are developed quickly and accumulation or blocking is avoided in the pipeline.
It is not always more advantageous to increase starts (through acquisitions or more work on the front end of the pipeline) in an NPD pipeline, since this dynamic process is a chain in which blockages can occur. A policy that increases starts can become advantageous if the complexity of projects is reduced to some extent (whenever possible), since this will enable the extra projects to go through the pipeline quickly, without accumulations. There is a need to follow a complete and balanced sequence of activities in the development process (Ford & Sterman, 1997). Therefore, we posit that the optimal number of starts is increased when the complexity of projects at all stages is reduced. The pipeline will release more projects during any fixed period of time, decreasing accumulations (Loch & Terwiesch, 1999; Moorthy & Png, 1992; Repenning, 2001) and the NPV loss. The adjustment of complexity should be global, for all stages; otherwise bottlenecks can be created in the pipeline, since only certain stages will be able to work faster and subsequent stages can block the pipeline.
Hypothesis 2: A decrease in the complexity of projects at all stages, reducing the average amount invested per project, increases the optimal level of starts. It is more advantageous to simultaneously increase the number of projects started and reduce the complexity of tasks performed at all stages.
Managers can share limited resources (people) across stages as needed, in order to balance the pipeline (Loch & Terwiesch, 1999; Repenning, 2001). Since this balance has to be achieved in either situation (with or without loss of value due to time), we posit that the optimal choice for resource allocation will not be changed by the loss of value due to longer development times.
Hypothesis 3: Managers can adjust resources independently of the patent lives of the projects.

Model Behavior
This section discusses the key drivers impacting project lead times and presents a series of tests to build confidence in the model's behavior.

Levers impacting project lead times
The average lead time of projects increases whenever there is a bottleneck in the pipeline: projects accumulate and take longer to be developed. There are four key levers impacting lead time in the model: (a) the average complexity of projects (variable γ); (b) the allocation of resources across stages (variable β); (c) the project introduction rate (starts); and (d) the way capacity is adjusted by the development teams (variable α).
A higher average complexity can reduce the flow of projects, potentially creating bottlenecks, for a given work intensity. An unbalanced distribution of resources creates bottlenecks, which can increase lead time, for a given work intensity. The project introduction rate also affects the flow of projects and has an impact on lead time, since starting more projects can lead to bottlenecks at the first stage, for a given work intensity. Therefore, if managers to some degree have a bias towards developing projects at the optimal work intensity (α<1), the three aforementioned variables can create bottlenecks and affect lead time. On the other hand, if managers continuously adjust work intensity to reduce backlogs (α=1), the impact of those variables can be eliminated, because such a policy eliminates bottlenecks. The effect of these variables on lead time is illustrated in Figures 6, 7 and 8 below. In all conditions, the level of variable α was kept at zero.

Model testing
We conducted a series of tests to build confidence in the model structure and behavior (Forrester & Senge 1980).
All figures show results for conditions in which the structure for loss of value due to lead times is present in the model, except Figure 13. Initially, we study how different configurations of the capacity adjustment bias (subject to a high input of projects, i.e. starts) affect the loss of value due to longer lead times. As can be seen in Figure 9, a bias towards reducing backlogs generates, in this condition, much less loss of value for the company than a bias towards increasing value creation. This is in accordance with hypothesis 1: in the presence of loss of value, the company should work faster to reduce backlogs. In order to check the effect of the capacity adjustment bias on total value created, we set starts to a high condition, with low complexity and high and low values for the bias. As shown in Figure 10, a bias towards reducing the backlogs produces much more value when the pipeline is overloaded with projects and complexity is low. In this situation, the capacity of the pipeline is enhanced, and a bias towards reducing the backlogs means that work intensity will go up to adapt to this high intake of projects. In the case of Novartis, it is a better policy to work faster. In order to check the effect of the choice of starts under a condition of low complexity, we plot Figure 12, where the managerial bias is towards reducing backlogs. Again, in accordance with hypothesis 2, a condition of high starts proves to be more beneficial together with lower complexity for the projects.

Methodology
We base our analysis on Figueiredo and Joglekar (2007) and Figueiredo and Loiola (2012, 2014). The dynamic model is briefly described in the section named The Model. We study the effect of different configurations or decisions on the total expected value (NPV) created in the process. Such decisions include: (a) the allocation of scarce resources; (b) the number of projects initiated; (c) the average complexity of projects; and (d) the bias towards working faster to reduce the number of projects accumulating in the backlogs. The objective is to test the hypotheses and determine how the best policies change depending on the presence of loss of value due to longer lead times.
The model was calibrated with data from Novartis, a large pharmaceutical company; these data are publicly available (Reyck et al., 2004). A simulation study (Davis, Eisenhardt, & Bingham, 2007) is a particularly effective approach for tackling this problem, because a model of portfolio value creation and throughput (NPV) can be formulated to include multiple decision parameters and associated longitudinal interactions within a process involving multiple projects.
The simulations are run by assigning High (H), Medium (M) and Low (L) values to the decision variables and running simulations with the possible combinations. Medium values are taken from the base case condition, in which the pipeline is balanced and calibrated to the original setting of the Novartis chain (with exactly the same values for the variables, and a variation of less than 1% for the parameters that were iteratively adjusted); high and low values are then created around them. See details of the procedure in Figueiredo and Loiola (2012), and see Tables 4 and 5. Variable β has 7 possible configurations, just as in Figueiredo and Loiola (2012). Variables α and γ have 3 levels only, since the hypotheses are related to a global, consistent policy for all stages. Therefore, a normalized γ variable is used: the medium value corresponds to 100%, the high value to 120%, and so on. Variable starts has three values only: high, medium and low. Variable Loss (of value due to longer lead times) is a dummy variable with two possible values, one and zero. The total number of simulations for each dataset is then 3x3x7x3x2 = 378 runs.
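The full-factorial design can be enumerated directly; the level labels below are placeholders, but the combinatorics match the 3x3x7x3x2 = 378 runs described above.

```python
from itertools import product

def experimental_design():
    """Enumerate the full-factorial design: 3 alpha levels x 3 complexity
    levels x 7 resource-allocation configurations x 3 levels of starts x
    2 values of the loss dummy. Level labels are illustrative placeholders."""
    alphas = ["L", "M", "H"]            # capacity adjustment bias levels
    gammas = ["80%", "100%", "120%"]    # normalized complexity levels
    betas = list(range(7))              # 7 resource-allocation configurations
    starts = ["L", "M", "H"]            # project introduction levels
    loss = [0, 1]                       # loss-of-value dummy
    return list(product(alphas, gammas, betas, starts, loss))
```

Each tuple in the returned list identifies one simulation run, which is how the dataset feeding the regression analysis is generated.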

Table 4: Experimental Design (Resources and Thresholds)

With the data from the simulations, a regression model is created to analyze the relations between the variables and the total NPV created, following the methodology used by Kleijnen (1995), Anderson, Morrice and Lundeen (2005), and Santos and Santos (2009).
Linear regressions are then run in order to determine how each variable affects total performance at the end of the pipeline (total NPV created). The regression model can be found in the next section.

Experimental design
In this section, the specification of a regression equation and the design of the numerical experiment are presented.
Recall that the model has been set up with the NPV of the projects at the end of the third stage of the pipeline as the outcome variable, shaped by resource allocation, complexity selection, resource utilization and screening thresholds across the three stages. Also, variable γ (complexity) is defined as a single normalized variable, since the complexities at all stages are adjusted simultaneously and are correlated. We eliminate the initial 25 years from the dataset, to avoid initialization effects, and run simulations for 55 periods (years). We use the setup described above to specify a regression equation with the following elements:

i = 1, 2, 3: stages of the pipeline
Cj: regression coefficients, j = 1, ..., 12
V: total NPV at the end of the third stage of the pipeline at the end of the planning horizon
ε: noise term

Decisions:
αi: capacity adjustment bias at stage i (0 < αi < 1); increasing this parameter reduces backlog instead of optimizing capacity utilization
βi: increasing this parameter enhances the resource allocation fraction for stage i, such that βi ≥ 0 and β1 + β2 + β3 = 1
γ: normalized complexity (a single index for the three stages)
S: number of projects started at the beginning of the pipeline
The correlation matrix for the regression variables in the base case, for condition 2, is shown in Table 6. Based on our specification, squared terms of the starts, β and γ variables are also included. This is standard practice in multivariate analysis when non-linearity exists or is expected (Myers, Montgomery, & Vining, 2002). All the linear terms of the relations between starts, complexity, resources and NPV created should be positive, and the squared terms should be negative, since this is part of what defines a concave equation.
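One way to picture the specification is as the construction of a single design-matrix row with linear, squared and interaction terms. The exact set of 12 coefficients in the paper may differ, so the row below is a hedged illustration of the structure: linear terms, the concavity (squared) terms, and the three hypothesis-testing interactions.

```python
def design_row(alpha, beta1, beta2, gamma, starts, loss):
    """Illustrative row of the regression design matrix. beta3 is omitted
    because it is determined by beta1 + beta2 + beta3 = 1; the term list is
    a sketch of the specification, not the paper's exact 12-coefficient form."""
    return [
        1.0, alpha, beta1, beta2, gamma, starts, loss,  # intercept + linear terms
        beta1 ** 2, beta2 ** 2, gamma ** 2, starts ** 2,  # concavity (squared) terms
        loss * alpha,    # H1 interaction: work faster under loss of value
        starts * gamma,  # H2 interaction: starts x complexity
        loss * beta1,    # H3 interaction: expected to be non-significant
    ]
```

Stacking one such row per simulation run yields the matrix on which the linear regressions for total NPV are estimated.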
We recognize that there is correlation between βi and βi², and between γ and γ², as shown in Table 6. However, we argue that a potential collinearity problem can be neglected, since the usual consequence of multicollinearity, i.e. an overall significant regression without significant individual coefficients, is not present (Deeds & Rothaermel, 2003). Rather, the individual coefficients are significant. Therefore, any existing multicollinearity did not cause a type II error, as it potentially can. Moreover, any existing multicollinearity does not bias the estimates (Greene, 1997).
A confirmation of hypothesis 1, through the presence of a positive interaction between variable α and Loss, would indicate that in the presence of loss of value due to longer lead times, it is better to work faster at all stages to reduce backlogs and development times.
A confirmation of hypothesis 2, through the presence of a negative interaction between starts and complexity, would indicate that instead of just increasing the number of starts in the pipeline (either by creating them in-house or by acquiring other companies), managers should be aware that blockages and over-commitment can occur and eliminate the strategic advantage of having more projects entering the pipeline. Therefore, a lower value of complexity should be implemented globally, whenever possible, to ensure that the new projects can be completed in time. Such is the case of the Novartis chain.
A confirmation of hypothesis 3, through the presence of non-significant interactions between the variables Loss and Resources (β), indicates that patent life is independent of managers' resource allocation decisions in terms of the effect on total value created.

Results
For the base case of condition 1, the regressions that examine hypotheses 1 through 3 are reported in Table 7. Model 1 presents the base case, without interactions. Models 2 through 4 (corresponding to columns 2 to 4, respectively) examine perturbations to the base case. We present the results by adding one interaction at a time, and we report only the interactions tested in the study, although others are also significant. Based on our specification, squared terms of the Starts, βi, and γi variables are also included. The results for the base case are as expected; however, only resources for stage 1 were significant, indicating a bottleneck at that stage. The variable Complexity was also not significant, although interactions with this variable were. The dummy variable accounting for loss of value due to longer lead times had a negative coefficient, as expected.
We now review the results shown in models 2-4. Hypothesis 1 states that in the presence of loss due to lead time, the development teams should work faster to reduce backlogs. This hypothesis is confirmed by the positive coefficient of the interaction Loss × α; thus, hypothesis 1 is supported. Hypothesis 2 deals with an increase in optimal starts in the presence of low complexity. The negative sign of the interaction between Starts and Complexity confirms this hypothesis. Figures 15 to 18 show the effect of the choice of Complexity on the optimal level of Starts: if complexity is set at a low value, the optimal level of starts goes up. Hypothesis 3 deals with the effect of loss of value due to longer lead times on the allocation of resources (variable β). This hypothesis is supported, since the interactions between Loss of Value and β1 and β2 are not significant.
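The mechanism behind hypothesis 2 can be made explicit with a toy concave NPV response that includes a negative Starts × Complexity interaction. The coefficients below are purely illustrative assumptions, not estimates from Table 7; they show how a negative interaction term shifts the optimal number of starts downward as complexity rises.

```python
# Toy concave response: NPV(S) = a*S + b*S^2 + c*S*gamma,
# with b < 0 (concavity) and c < 0 (negative Starts x Complexity interaction).
# All coefficient values are hypothetical.
a, b, c = 4.0, -0.10, -1.0

def optimal_starts(gamma):
    """Vertex of the concave quadratic in S: solve dNPV/dS = a + 2*b*S + c*gamma = 0."""
    return -(a + c * gamma) / (2 * b)

low_complexity = optimal_starts(0.5)   # -> 17.5 starts
high_complexity = optimal_starts(1.5)  # -> 12.5 starts
print(low_complexity, high_complexity)
```

Lowering γ raises the optimal number of starts, which is exactly the pattern Figures 15 to 18 display: a negative interaction makes a larger project intake worthwhile only when complexity is kept low.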
In sum, the study shows that in the presence of loss of value due to longer lead times, it is more advantageous to: (a) work faster to reduce the backlog of projects; and (b) increase the number of projects started whenever it is possible to reduce complexity in the pipeline; also, (c) the optimal decision on resource allocation is independent of the loss of value due to longer lead times. It is important to point out that the relatively low R² of the regression models is not necessarily a negative result. The R² statistic can be small, yet many of the regression coefficient p-values can be statistically significant. Such a relationship between predictors and the response may be very important, even though it does not explain a large amount of variation in the response. It is the responsibility of the analyst to recognize when a high R² value reflects over-fitting of the model, and when a low R² value is due to large error in the sampled data. These two situations illustrate why establishing a threshold or cut-off point for an acceptable value of R² across all applications is inappropriate (Colton & Bower, 2002; Myers et al., 2002).

Conclusion
This research builds upon Figueiredo and Loiola (2012); however, it creates a new, original model that incorporates the effect of longer development times on product pipeline management (PPM) performance and its impact on policies for PPM. Such a study could not be found in the literature. A few key original hypotheses were proposed, which should generate insights on how to engage in effective PPM in the pharmaceutical sector. These insights were not explored in previous studies that used the aforementioned model.
All these hypotheses can be used to generate better policies for pharmaceutical companies. It was shown that in the presence of loss of value due to longer lead times, it is more advantageous (in the case of Novartis) to work faster to reduce the backlogs. Other insights are generated; for instance, it is shown that whenever it is possible to reduce complexity (the number and size of tasks) globally at all stages simultaneously, the number of projects started should be larger, in order to increase the value created in the pipeline. An increase in the number of starts can be achieved through more intensive in-house innovation efforts or through licensing deals and acquisitions of more innovative biotechnology companies. A decrease in complexity is certainly limited by the number of tasks inherent to the development process, but in theory it can be achieved through a more qualified workforce, more efficient work and testing, and the use of new technologies. For instance, K. B. Clark and Fujimoto (1991) studied the world auto industry and showed how the number of activities in a project depends, among other factors, on the organization's competence and effectiveness in the development process.
The study also shows that the decision on resource allocation, which is the most important in terms of explaining the variation of NPV in the dataset (highest R²), is independent of patent life and of the loss of value due to longer lead times. Thus, if the pipeline is balanced in terms of resources, there is no need to adapt those resources to account for short patent lives or longer lead times. To our knowledge, such implications have never been explored in the literature.
Ours is a highly stylized model that comes with several limitations (Figueiredo & Loiola, 2012). One limitation of this study concerns the utility of the model to Brazilian pharmaceutical companies. Even though large companies are present in the country, most research and development is done abroad, not in Brazilian territory. Policies for pharmaceutical R&D therefore have limited applicability to Brazilian companies. We argue, however, that the model can be adapted to Brazilian companies in the petrochemical sector. Such companies generate new patents regularly and have structured product development processes (Loiola & Mascarenhas, 2013). Another limitation is that the study focuses on optimizing Net Present Value (NPV) alone, and not on creating methods to enhance the value to the end user, the patient. It can be argued, however, that an increase in the NPV of a project is related, perhaps indirectly, to the potential benefit that the pharmaceutical product will bring: a product with higher market potential (i.e., more popular and therefore more beneficial) will probably have a higher NPV than a related product with lower market potential. A third limitation is that possible significant interactions between the decision variables were not studied. We leave these limitations to be explored in future studies.

Figure 5. Fraction of Loss (+) or Gain (-) in NPV for Each Year of Release since a Patent is Filed

Figure 9. NPV Loss for Different Values of Alpha, with High Value for Starts

Figure 12. High/Low Starts versus NPV

Figures 13 and 14 are in accordance with hypothesis 3. In both conditions, with and without loss of value due to longer lead times, the medium value for resource allocation continues to be the best policy. This indicates that the ideal value of resources is not affected by the loss of value due to longer lead times.

Figure 15. Effect of the Choice of Complexity on the Optimal Level of Starts

Figures 16 to 18. Effect of the Choice of Complexity on the Optimal Level of Starts

Table 1. Example of Loss of Value Due to Longer Development Lead Times (the patent for Zoloft expired in June 2006)

Table 6. Correlation Matrix