1 Introduction

Product lifecycle management (PLM) is often cited as one of the most important strategic capital investments product-based companies can make, yielding a boosted and sustained competitive advantage (e.g. decreased product time-to-market, increased reuse during new product development, and increased capacity for change control) when implemented holistically [1]. Yet, due to the size of the investment and the interdisciplinary impact of PLM, executives struggle to make the case for financing a PLM project because there is a shortage of known Return On Investment (ROI) or similar value metric information available. Unique industry and organizational variables make it unfeasible for companies to easily benchmark competitors and predetermine the value gained [2]. Issues related to how firms measure performance are not isolated to technologically driven firms. Rather, nearly every organization faces increasing pressure to justify Information Technology (IT) decisions and to quantify how a new system will change human-computer interactions and, ultimately, affect the bottom line. An organization’s ability to innovate depends not only on the overarching system and IT infrastructure, but also on how humans employ the system to maximize innovation. Advancements in PLM and their implications for an ever-evolving workforce require that organizations embrace and employ effective decision-making methodologies.

This article explains the challenges executive management faces in understanding the value of PLM. A literature review reveals the lack of value metrics for companies attempting to determine the value created by their PLM investments. Reviewing previously published value metrics for IT systems creates a baseline for our research on the value of PLM. Future exhaustive research using grounded theory will allow for the creation of effective models and organic theories from data gathered on proposed value metrics of PLM. The specific value metrics captured and components analyzed are set forth in this article.

2 Review of Literature

An extensive search for publications directly relating to the value metrics of PLM produced no results, exposing a clear gap in the field of knowledge. Metrics for understanding the value created by PLM have not been documented, and executives are struggling to gather information that supports their PLM investments and mitigates the risks of future investments [3]. This research proposes to fill that void. Publications concerning value metrics and models of IT systems serve as a starting point for value metrics for PLM. These publications were chosen based on their relevance to the scope of this research and on citation counts reflecting their applicability in industry.

2.1 Value Metrics of IT Systems

In the Harvard Business Review article “Six IT Decisions Your IT People Shouldn’t Make,” Ross and Weill [4] wrote that the most frequent complaints they heard from top executives were about struggling to calculate payback on IT, realizing the business value from high-priced technology, and justifying ongoing increases in IT spending. Their research reveals that companies that successfully manage IT investments realize returns as much as 40% higher than those of their competitors [4]. Given continued advancements in IT systems and PLM solutions, this delta created by successful management will determine companies’ ability to survive in the future.

Weill and Olson [5] analyzed a diverse group of case studies relating to IT investments such as Supply Chain Management (SCM), Customer Relationship Management (CRM), and Enterprise Resource Planning (ERP) systems. They established connections between IT investment objectives and the performance measures that should be tracked for each investment. Three main connections were established: revenue growth rates should be a performance measure for strategic IT investments, return on assets [6] should be a performance measure for informational IT investments, and indirect labor should be a performance measure for transactional IT investments.

Nine years later, Weill and Broadbent [6] expanded on this work to include a fourth IT investment area: infrastructure. They subsequently created a new model containing the four types of IT investments and their corresponding value-added areas, as shown in Fig. 1.

Fig. 1. IT investment area model

During the IT system investment justification process, ROI is often a discussion point since it is a widely used metric. Parker [7] details a method to calculate ROI for IT projects in which he places tangible benefits into five major categories as shown in Fig. 2.

Fig. 2. Tangible benefits of IT investment

Fig. 3 outlines the non-tangible, yet highly important, benefits of IT investment. Parker [7] suggests that these should not be placed in ROI calculations because they are difficult to quantify financially.

Fig. 3. Non-tangible benefits of IT investment

Beyond the benefits factored into the ROI calculation, three other considerations need to be made: timeframe, consistency, and precision. Timeframe is the period over which the benefits are calculated; Parker recommends approximately five years for IT systems. Consistency refers to keeping assumptions such as inflation and taxation uniform across all IT system project calculations to maintain equal evaluations. Precision means stating all dollar values with a balance of certainty and accuracy, applied consistently to all IT investment decisions.

Wen and Yen [8] reviewed the most commonly accepted methodologies for measuring IT investment payoffs. The evaluation methods include ROI, Cost-Benefit Analysis (CBA), Return On Management (ROM), and Information Economics (IE). The tangible benefits cited above by Broadbent, Olson, Parker, and Weill can be defined as the profit or return, which is then divided by the investment required for the ROI calculation. Time-value functions can then be applied to provide a deeper analytical framework: Net Present Value (NPV) and Discounted Cash Flow (DCF) are additional ROI methods that depend on an interest rate to perform the calculation. These methods are predominantly used for tangible, quantitative benefits; intangible benefits are better addressed by CBA, ROM, and IE.
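As an illustration only, the following minimal sketch shows the ROI and NPV arithmetic described above; every figure in it (investment size, annual benefit, and discount rate) is an invented assumption, not data from the cited studies:

# Minimal sketch of the ROI and NPV arithmetic described above.
# All figures are illustrative assumptions, not data from the cited studies.

def roi(total_benefit, investment):
    """Simple ROI: net return divided by the investment required."""
    return (total_benefit - investment) / investment

def npv(cash_flows, rate):
    """Net Present Value: discount each period's cash flow at the given rate.
    cash_flows[0] is year 0 (typically the negative initial investment)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Five-year horizon, per Parker's recommended timeframe for IT systems.
investment = 2_000_000               # assumed implementation cost
benefits = [600_000] * 5             # assumed tangible benefit per year

print(roi(sum(benefits), investment))          # 0.5, i.e. 50% over the horizon
print(npv([-investment] + benefits, 0.08))     # ~395,626 at an assumed 8% rate

Note how the same assumed cash flows yield a positive simple ROI and a much smaller NPV once the time value of money is applied, which is precisely why Wen and Yen treat the time-value functions as a deeper layer of analysis.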

The CBA approach addresses two main complications: quantifying the value of benefits that do not flow back to the investor, and identifying the market value of costs and benefits arising from intangible factors. CBA requires an agreement on the measures of value for intangible benefits. If there is disagreement on the appropriate values, then one of the following methods should be used. ROM uses a simple ratio of productivity, “output/input.” Strassmann [9] defines the output as the delta between the direct operating costs and the value added due to direct labor. Simons and Dávila [10] express it as productive organizational energy released divided by management time and attention invested. The advantage of the ROM method is that it can focus on the contributions of IT to the management process [8]. The IE method is analogous to the CBA method with the addition of a ranking and scoring technique for the intangibles and risk factors associated with the IT investment. Such justifications have been made for systems such as ERP, CRM, and SCM; however, the cross-functional impact of PLM remains problematic for executives making the case for financing a PLM project [2].
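Under the same hedged convention, a minimal sketch of the ROM ratio as characterized above (all inputs are assumed figures for illustration, not published data):

# Minimal sketch of Strassmann's ROM ratio as characterized above.
# All figures are illustrative assumptions.

def return_on_management(value_added, direct_operating_costs, management_costs):
    """ROM = output / input, where output is taken here as the delta between
    value added and direct operating costs, and input is the cost of management."""
    output = value_added - direct_operating_costs
    return output / management_costs

# Assumed annual figures for a single business unit:
print(return_on_management(value_added=10_000_000,
                           direct_operating_costs=7_500_000,
                           management_costs=1_000_000))   # 2.5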

3 Why Is PLM Value Hard to Calculate?

Research by Walton and Tomovic [2] explains that “the benefits of PLM are difficult to assess because the same benefit can be expressed as a function of time, cost, quality, or a combination thereof.” PLM projects, like many other cross-functional implementations with interwoven components and processes, require combined and synchronous attention to all phases. Implementations that lack these combined foci stem from misguided perceptions about the intricacies and interdependencies of foundational PLM modules, which creates negative business outcomes in myriad areas involving people, practices, processes, and technology [11]. Subsequently, having expended millions of dollars on technological infrastructure implementation, education, and service, senior-level executives are eager to see trustworthy and unbiased data supporting their investments in PLM. Furthermore, without knowing the bottom-line impact of PLM on cost savings and revenue generation, executives are unable to precisely estimate the level of risk associated with future PLM investments [12].

These imprecise estimates stem from the lack of baseline data, which creates the struggle of determining ROI during and after the project rollout. Additionally, the absence of baseline data and the inability to calculate post-implementation ROI contribute to the overall deficiency of market data attempting to place a value on PLM, which only serves to exacerbate the challenge of obtaining buy-in for a PLM implementation project from the corporation’s most senior executives [13]. As a result, numerous enterprises scramble to collect the data during the implementation and after rollout because, over the lifecycle of the project, the priority is generally on issue and risk mitigation rather than on gathering data to justify the platform decision that has already been made. After the rollout, teams focus on exploiting the full measure of the new platform, and the temporary resources assigned during the implementation move on to other projects [14]; thus, data collection remains an area lacking both within corporations and, even more so, in the industry at large. Although there are no widely cited failures of PLM implementations, there remains a dearth of data demonstrating PLM’s value. This exacerbates the struggle companies face in quantifying PLM’s benefits, and thus the adage ‘you cannot manage what you do not measure’ [15] encapsulates an executive board’s difficulty in justifying its decision to implement PLM. Related to this issue, since there are no widely cited failures, a lack of focus on optimization also exists: implementations are repeatedly seen as successful, so a strong enough urgency for implementation optimization has not been created.

To resolve these challenges, companies, vendors, and academics alike have expended enormous resources with the aim of creating maturity models, benchmarking systems, and implementation indexes [2, 16, 17]. Nonetheless, there are still myriad reports and complaints from industry executives who express distress about the gap between published models for ROI and those models’ inapplicability to their companies [18,19,20]. Worse yet, benefits reported by vendors or consultants are often viewed with skepticism and as biased [21,22,23], and executives argue that, given the enormous monetary costs of a PLM implementation, a company’s key decisions cannot be left to what could be viewed as biased information. Despite the published benefits, the true value or ROI for any particular organization cannot be predetermined, which hinders the decision-making and selection process. Moreover, since the benefits of PLM are often organizationally specific, they do not easily correlate from one enterprise to another due to copious organizational variables (e.g. industry, company size, market segment, and business process maturity). This further hinders executives attempting to triangulate the value. This research aims to bridge the gap between the benefits of PLM and the creation of value and ROI for organizations implementing PLM.

4 Research Plan

As highlighted earlier, many quantitative research studies have been performed on the valuation of IT systems [6,7,8]. In contrast, our PLM value metric research will follow a qualitative research methodology. Silverman explains that “the papers in qualitative journals do not routinely begin with a hypothesis, the ‘cases’ studied are usually far fewer in number and the authors’ interpretation is carried on throughout the writing” [24]. Sociologists Glaser and Strauss created grounded theory when they explicated the qualitative research strategies they had used in their studies of how staff organized the care of dying patients in hospitals [25]. Since 1967, the method has proved valuable across different industries; it is an inductive, iterative, interactive, and comparative method geared towards theory construction [26]. This approach produces a theory representative of the data gathered and systematically analyzed during the research process.

Our data collection will consist of released information concerning product-producing companies within Fortune’s list of the 1000 largest companies in the U.S. based on revenues for 2014. This information is generally published by four entities: the company, the company’s PLM vendor, a PLM consulting firm that has experience with the company, or conference presentations and proceedings documenting the company’s PLM information. Unavoidably, there are biases in the information released from these sources. This bias will remain within the research data collection pool; however, throughout the initial and focused coding periods we will sieve out as much of it as possible.

The analysis for determining the value metrics of PLM will consist of coding data in two phases: initial coding and focused coding. Glaser states that initial coding, sometimes known as open coding, asks these questions of the data: what is actually happening in the data, what are these data a study of, and what category does this case, segment, or statement of data indicate [25]? These questions will be answered by reviewing the data word-by-word, line-by-line, paragraph-by-paragraph, or incident-by-incident. Focused coding will be more selective and conceptual than initial coding; performing it amalgamates and explains larger segments of data, normally creating the categories upon which the grounded theory is constructed. Charmaz suggests making the following comparisons during focused coding: comparing different people, comparing data from the same individuals at different points in time, comparing specific data with the criteria for the category, and comparing categories in the analysis with other categories [27]. Coding will surface ideas, thoughts, and connections, which then lead to theoretical sampling.
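Purely as a hypothetical sketch of how this two-phase coding workflow might be organized in software (the excerpts, codes, and category names below are invented for illustration and are not from the actual data pool):

# Hypothetical sketch of the two-phase coding workflow described above.
# Excerpts, codes, and category names are invented for illustration.
from collections import defaultdict

# Initial (open) coding: tag each data segment with what is happening in it.
initial_codes = [
    ("Vendor case study, para 3", "claims faster change approval"),
    ("Conference talk, slide 12", "cites reduced engineering rework"),
    ("Company press release",     "claims faster change approval"),
]

# Focused coding: amalgamate initial codes into broader candidate categories.
code_to_category = {
    "claims faster change approval":   "time-to-market value",
    "cites reduced engineering rework": "cost-avoidance value",
}

categories = defaultdict(list)
for source, code in initial_codes:
    categories[code_to_category[code]].append(source)

# Comparing sources within each category, per Charmaz's suggested comparisons:
for category, sources in categories.items():
    print(category, "->", sources)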

Glaser and Strauss define theoretical sampling as “the process of data collection for generating theory whereby the analyst jointly collects, codes, and analyzes his data and decides what data to collect next and where to find them” [25]. This iterative process of gathering data, coding, and theoretical sampling will be performed until the data are saturated and new information stops yielding new theoretical insights. Since grounded theories are derived from data, they can serve as reliable guides by establishing deeper understanding and insight [28]. The research methodology for the discovery of value metrics for PLM will be the grounded theory approach as described by Silverman [24], Glaser and Strauss [25], Takhar-Lail [28], and Charmaz [26].

The goal of this research is to bridge the gap between the various value metric frameworks proposed in industry- and academically-generated literature and the current struggles that industry executives face when making strategic investments in PLM implementations. It provides a novel process for viewing the value of PLM and takes the research to a degree of understanding in which meaning can be given to the collected raw data. The research will gather benefits and costs of PLM implementations from numerous industries and markets. Because the benefits and costs reported by different companies are disparate, the collected data will undergo coding and analysis based on specific organizational variables to determine the value of each instance. These values will be put into a framework to help companies not only in their investment justification processes, but also in their post-project value realization processes. The dimensionality of the analysis may extend beyond typical geometric representation (e.g. Fig. 4).

Fig. 4. Example of PLM value/impact model

The analysis will generate multidimensional models (i.e. possibly utilizing more than three dimensions) [29] analyzing factors such as industry, company size, PLM use case, solution provider, business case, and the value of each instance. This research stands to provide companies a tool and a valuable road map for determining the value of their PLM strategies. Furthermore, by having the ability to extrapolate data within the model, companies will be able to better pinpoint PLM use cases that are optimized for their industry and company size, allowing them to make well-informed decisions on their investment directions.
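As a hypothetical illustration of the kind of record each coded PLM instance might contribute to such a multidimensional model (all field names and values here are assumptions for illustration, not research findings):

# Hypothetical record structure for one PLM instance in the proposed model.
# Field names and values are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class PLMInstance:
    industry: str           # e.g. automotive, aerospace, consumer goods
    company_size: int       # e.g. employee count
    use_case: str           # e.g. change management, BOM reuse
    solution_provider: str
    business_case: str
    value: float            # coded value of this instance

example = PLMInstance(
    industry="automotive",
    company_size=45_000,
    use_case="engineering change management",
    solution_provider="(vendor withheld)",
    business_case="reduce change cycle time",
    value=1.8,              # assumed normalized value score
)
print(example)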

5 Limitations

The data collected for the ongoing and future research outlined in this paper are subject to natural biases due to the sources from which the information is gathered. It is in the publishing bodies’ best interest to represent benefits and accomplishments as triumphantly as possible without misrepresenting or being deceitful. The rhetoric and the reality of the information gathered will be coded carefully in an effort to eliminate bias; however, the full extent of the biases represented cannot be known.

6 Conclusion

The investment size and interdisciplinary impact of PLM create a need for a higher probability of success for executives making the case for financing a PLM project. There is, on the whole, a shortage of known ROI information for PLM implementations, and the academic research to date has not addressed the challenges that industry executives are facing. Thus, the vast collection of PLM instances gathered by the proposed future research will allow for extrapolation across unique industry and organizational variables, making it feasible for companies to easily benchmark competitors and predetermine the value gained.