
Performance indicator

KPI information boards

A performance indicator or key performance indicator (KPI) is a type of performance measurement.[1] KPIs evaluate the success of an organization or of a particular activity (such as projects, programs, products and other initiatives) in which it engages.[2] KPIs provide a focus for strategic and operational improvement, create an analytical basis for decision making and help focus attention on what matters most.[3]

Often success is simply the repeated, periodic achievement of some levels of operational goal (e.g. zero defects, 10/10 customer satisfaction), and sometimes success is defined in terms of making progress toward strategic goals.[4] Accordingly, choosing the right KPIs relies upon a good understanding of what is important to the organization.[5] What is deemed important often depends on the department measuring the performance – e.g. the KPIs useful to finance will differ from the KPIs assigned to sales.

Since there is a need to understand well what is important, various techniques to assess the present state of the business, and its key activities, are associated with the selection of performance indicators. These assessments often lead to the identification of potential improvements, so performance indicators are routinely associated with 'performance improvement' initiatives. A very common way to choose KPIs is to apply a management framework such as the balanced scorecard.

The importance of such performance indicators is evident in the typical decision-making process (e.g. in management of organisations). When a decision-maker considers several options, they must be equipped to properly analyse the status quo to predict the consequences of future actions. Should they make their analysis on the basis of faulty or incomplete information, the predictions will not be reliable and consequently the decision made might yield an unexpected result. Therefore, the proper usage of performance indicators is vital to avoid such mistakes and minimise the risk.[6][7]

KPIs are used not only by business organizations but also for technical aspects such as machine performance. For example, a production machine in a factory outputs various signals indicating its current status (e.g., machine sensor signals). Some of these signals, or signals derived by processing them, may represent high-level machine performance; such representative signals can serve as KPIs for the machine.
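As a minimal sketch of this idea, the snippet below combines two raw sensor readings into a single machine-health KPI. The sensor names, alarm limits, and the scoring formula are all hypothetical, chosen only to illustrate how low-level signals can be reduced to one representative number.

```python
# Illustrative sketch: deriving a machine-level KPI from raw sensor signals.
# Sensor names, limits, and the scoring formula are hypothetical.

def machine_health_kpi(vibration_mm_s, temperature_c,
                       vib_limit=10.0, temp_limit=80.0):
    """Combine raw sensor readings into a single 0-100 health score.

    Each reading is scored by how far it sits below its alarm limit;
    the KPI is the mean of the per-sensor scores.
    """
    def score(value, limit):
        # 100 when the reading is 0, falling to 0 at the alarm limit.
        return max(0.0, min(100.0, 100.0 * (1 - value / limit)))

    vib_score = score(vibration_mm_s, vib_limit)
    temp_score = score(temperature_c, temp_limit)
    return (vib_score + temp_score) / 2

print(machine_health_kpi(2.0, 40.0))   # healthy machine, high score
print(machine_health_kpi(9.5, 78.0))   # near alarm limits, low score
```

In practice such a KPI would be computed continuously from streaming sensor data and trended over time, but the principle is the same: many raw signals are processed into one high-level performance measure.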

Categorisation of performance indicators

The effective use of performance indicators requires a clear understanding of their different types and purposes. Indicators can be categorised along several key dimensions to ensure a balanced and comprehensive measurement system that supports strategic objectives. A well-designed set of indicators will draw from multiple categories to avoid unintended consequences and provide a holistic view of organisational performance.[5]

A primary method of categorisation is based on the dimension of performance being measured. The Balanced Scorecard framework, for instance, groups indicators into four perspectives: financial (e.g., profitability), customer (e.g., satisfaction), internal business processes (e.g., efficiency), and learning and growth (e.g., innovation).[8] This approach prevents over-reliance on financial metrics alone.

Indicators are also commonly distinguished by their time orientation and function. In this typology:

  • Lagging indicators are outcome-oriented, measuring the final results of past activities (e.g., annual revenue, year-end safety incident count). They are easy to measure but hard to directly influence.
  • Leading indicators are performance drivers, predictive measures that influence future outcomes (e.g., number of client proposals, hours of safety training completed). They are more actionable but can be harder to correlate directly with results.
  • Input indicators measure resources consumed (e.g., budget spent, staff hours), providing context for interpreting outputs and outcomes.[5]

Another critical distinction is based on the nature of the data. Quantitative indicators provide objective, numerical measurement (e.g., unit output, error rates), while qualitative indicators capture subjective, often perceptual data (e.g., stakeholder satisfaction, brand reputation) typically gathered through surveys and interviews.

Furthermore, indicators can be designed for different levels of the organisation. Strategic indicators monitor progress toward top-level goals, operational indicators track departmental or process efficiency, and individual indicators align personal objectives with organisational priorities.

Selecting the right mix of categories is a strategic exercise. An overemphasis on lagging quantitative indicators can lead to short-termism and "gaming" of metrics, while focusing solely on leading or qualitative indicators may lack a connection to ultimate outcomes.[9] A balanced portfolio of indicators across these categories is therefore essential for effective performance management.

Points of measurement

The first step in performance measurement is determining what to measure.

Performance indicators may be applied at various stages within a programme, service, or organisational process. These points capture distinct dimensions of performance, ranging from the earliest stages of resource allocation to final outcomes achieved. It is common to distinguish between:[10]

  • Inputs – the resources (financial, human, or material) dedicated to an activity.
  • Processes – how efficiently or effectively these resources are transformed into outputs.
  • Outputs – the quantity, quality and timeliness of goods or services delivered.
  • Impacts – the short- to medium-term effects on service users or stakeholders.
  • Outcomes – the broader, long-term societal changes that result from an activity.

Mapping indicators across this continuum helps ensure measurement provides a clear picture of performance. The points of measurement may also relate to the relationship between inputs and outputs (productivity), and between outputs and outcomes (effectiveness).[11] Control (the extent to which employees can influence a result) and mechanism (the causal link between employees’ effort and a performance dimension) further shape measurement decisions.

Selecting the appropriate point of measurement is not simply a technical choice but also a strategic one. For example, focusing narrowly on inputs or outputs can incentivise ‘box-ticking’ behaviours and obscure whether real value is being created.[12] Conversely, outcome and impact indicators may be harder to attribute to organisational effort, especially in complex public sector environments.[13]

A balanced approach can involve linking indicators across multiple points of measurement, to trace the relationships between resources, activities and ultimate value.[14] However, this requires careful design to avoid measurement burdens and to ensure alignment with an organisation’s overall strategic objectives.

Quality assurance across the points of measurement helps ensure indicators not only track activity levels but also produce robust, consistent and credible performance data.[15]

Identifying indicators

Once appropriate points of measurement have been determined, the next task is to identify specific indicators that meaningfully capture performance at that stage. An indicator is a measurable variable used to show whether progress is being made towards a goal, rather than the goal itself.[16]

Choice of indicators reflects managerial decisions about what counts as successful performance.[17] Indicators may be financial, such as revenue growth, or non-financial, such as customer satisfaction rates.[18] A good indicator should be simple to understand while aligning closely with business or organisational goals.[19]

The process of identifying indicators is often guided by frameworks such as SMART (specific, measurable, achievable, relevant, time-bound).[20] Alternatives like the FABRIC principles emphasise that ideal performance information should be focused, appropriate, balanced, robust, integrated, and cost-effective.[21]

In the public sector, where outcomes may depend on many different organisations and external influences, careful selection is needed to avoid misleading or overly broad results.[22]

Key stages of identifying a performance indicator include:[23][24]

  • Clarifying objectives: defining the goals, benchmarks, and standard to be measured.
  • Choosing the point of measurement: deciding whether inputs, outputs, impacts or outcomes best capture performance.
  • Generating potential indicators: identifying options from existing data or stakeholder input.
  • Assessing validity and feasibility: testing whether indicators are conceptually sound, measurable, and proportionate.
  • Piloting and refining: trialling indicators to detect unintended incentives or data issues.
  • Final selection and integration: embedding chosen indicators into reporting and decision-making.

Beyond the technical steps, several broader considerations shape indicator usefulness. Indicators must be understandable to managers and other stakeholders, including shareholders or the public, yet precise enough to capture what matters. Over-simplified indicators may distort or fail to drive performance, while overly complex or too numerous indicators may fail to gain traction.[25]

Attribution is another challenge. In the public sector, multiple organisations may influence outcome indicators; in such cases they may develop a shared outcomes framework, with reporting that shows the particular contribution of each individual organisation.[26] Where a clear link exists between employee effort and performance, indicators may be connected to motivation, reward and appraisal systems.

Data quality is another important consideration.[21] Indicators depend on reliable and consistent information, and weak data systems can undermine credibility, shaping which indicators are ultimately preferred.[27] Finally, given the risks of gaming, data fabrication, and selective reporting on indicators,[28] organisations should consider the verifiability of underlying data when selecting indicators and choose indicators that are not susceptible to manipulation.[29]

Examples

Accounts

Some examples include:

  • Percentage of overdue invoices
  • Percentage of purchase orders raised in advance
  • Number of retrospectively raised purchase orders
  • Finance report error rate (measures the quality of the report)
  • Average cycle time of workflow
  • Number of duplicate payments

Marketing and sales

  • New customer acquisition
  • Customer acquisition cost (CAC)
  • Average deal size
  • Demographic analysis of individuals (potential customers) applying to become customers, and the levels of approval, rejections, and pending numbers
  • Status of existing customers
  • Customer density (the proportion of revenue attributable to a specified percentage of accounts, which ideally should match: for example, the top 10% of accounts should broadly contribute about 10% of revenue)[5]: page 8 
  • Customer attrition (the loss of clients or customers)
  • Turnover (i.e., revenue) generated by segments of the customer population
  • Outstanding balances held by segments of customers and terms of payment
  • Collection of bad debts within customer relationships
  • Profitability of customers by demographic segments and segmentation of customers by profitability
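The customer density indicator in the list above reduces to a simple concentration ratio. The sketch below computes the share of revenue held by the top 10% of accounts; the account figures are invented for illustration.

```python
# Sketch of the "customer density" check: what share of total revenue
# comes from the top 10% of accounts? Account revenues are made up.

def revenue_share_of_top_accounts(revenues, top_fraction=0.10):
    """Return the fraction of total revenue held by the top accounts."""
    ranked = sorted(revenues, reverse=True)
    n_top = max(1, round(len(ranked) * top_fraction))
    return sum(ranked[:n_top]) / sum(ranked)

revenues = [120, 95, 80, 60, 50, 40, 30, 25, 20, 10]  # ten accounts
share = revenue_share_of_top_accounts(revenues)
print(f"Top 10% of accounts contribute {share:.0%} of revenue")
```

Here the single largest account contributes well over 10% of revenue, the kind of concentration the indicator is designed to flag as a dependency risk.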

Many of these customer KPIs are developed and managed with customer relationship management software.

Faster availability of data is a competitive issue for most organizations. For example, businesses that have higher operational/credit risk (involving for example credit cards or wealth management) may want weekly or even daily availability of KPI analysis, facilitated by appropriate IT systems and tools.

Manufacturing

Overall equipment effectiveness (OEE) is a set of broadly accepted nonfinancial metrics that reflect manufacturing success.

  • OEE = availability x performance x quality
  • Availability = run time / total time; the percentage of the available production time during which the machine is actually running.
  • Down time = time the building/location/service/machine is out of operation for any reason (including planned down time such as maintenance or 'out of season').
  • Performance = total count / target count; the percentage of the target production rate actually achieved on the machine.
  • Quality = good count / total count; the percentage of good parts out of the total parts produced on the machine.
  • Cycle time ratio (CTR) = standard cycle time / real cycle time
  • Capacity utilization
  • Rejection rate
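The OEE arithmetic above can be sketched directly; the shift figures below are illustrative only.

```python
# OEE = availability x performance x quality, per the definitions above.
# All shift figures are invented for illustration.

def oee(run_time, planned_time, total_count, target_count, good_count):
    availability = run_time / planned_time    # fraction of planned time running
    performance = total_count / target_count  # fraction of target output achieved
    quality = good_count / total_count        # fraction of parts that are good
    return availability * performance * quality

# Example shift: 420 of 480 planned minutes running,
# 950 parts made against a target of 1000, of which 930 are good.
value = oee(run_time=420, planned_time=480,
            total_count=950, target_count=1000, good_count=930)
print(f"OEE = {value:.1%}")
```

Because each factor is a fraction below one, small losses in availability, speed, and quality compound: three seemingly modest losses here pull OEE down to roughly 81%.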

Professional services

Most professional services firms (for example, management consultancies, systems integration firms, or digital marketing agencies) use three key performance indicators to track the health of their businesses. They typically use professional services automation (PSA) software to keep track of and manage these metrics.

  • Utilization rate = the percentage of time employees spend generating revenue
  • Project profitability = the difference between the revenue generated by a project and the cost of delivering the work
  • Project success rate = the percentage of projects delivered on time and under budget
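The three professional-services indicators above are simple ratios and differences; the sketch below computes each one with invented figures.

```python
# The three professional-services KPIs, with invented example figures.

def utilization_rate(billable_hours, total_hours):
    """Fraction of available time spent generating revenue."""
    return billable_hours / total_hours

def project_profitability(revenue, cost):
    """Revenue generated by a project minus the cost of delivery."""
    return revenue - cost

def project_success_rate(on_time_on_budget, total_projects):
    """Fraction of projects delivered on time and on budget."""
    return on_time_on_budget / total_projects

print(f"Utilization: {utilization_rate(1300, 1800):.0%}")
print(f"Project profit: {project_profitability(250_000, 190_000)}")
print(f"Success rate: {project_success_rate(14, 20):.0%}")
```

PSA software typically computes these continuously from timesheet and billing data rather than from one-off figures like these.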

System operations

Project execution

  • Earned value
  • Cost variance or cost performance index
  • Schedule variance or schedule performance index
  • Estimate to complete
  • Manpower spent / month
  • Money spent / month
  • Planned spend / month
  • Planned manpower / month
  • Average time to delivery
  • Tasks / staff
  • Project overhead / ROI
  • Planned delivery date vs actual delivery date
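Several of the project-execution indicators above come from standard earned-value management formulas: cost variance = EV − AC, schedule variance = EV − PV, CPI = EV/AC, SPI = EV/PV, and estimate to complete = (BAC − EV)/CPI. The sketch below applies them to invented figures.

```python
# Standard earned-value formulas behind the project indicators listed above.
# EV = earned value, AC = actual cost, PV = planned value,
# BAC = budget at completion. Figures are illustrative.

def earned_value_metrics(ev, ac, pv, bac):
    cpi = ev / ac  # cost performance index: value earned per unit spent
    spi = ev / pv  # schedule performance index: value earned vs. planned
    return {
        "cost_variance": ev - ac,
        "schedule_variance": ev - pv,
        "cpi": cpi,
        "spi": spi,
        "estimate_to_complete": (bac - ev) / cpi,
    }

m = earned_value_metrics(ev=80_000, ac=100_000, pv=90_000, bac=200_000)
print(m)
```

With these figures the project is both over budget (CPI below 1) and behind schedule (SPI below 1), and the remaining work is forecast to cost more than its budgeted value.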

Supply chain management

Businesses can use supply chain KPIs to establish and monitor progress toward a variety of goals, including lean manufacturing objectives, minority business enterprise and diversity spending, environmental ("green") initiatives, cost avoidance programs and low-cost country sourcing targets. Suppliers can also implement KPIs to gain a competitive advantage: for example, a supplier portal may let suppliers submit standardized cost savings templates, while suppliers and their customers exchange supply chain performance data and gain visibility into the status of cost improvement projects and cost savings documentation.

Any business, regardless of size, can better manage supplier performance and overall supply chain performance[30] with the help of KPI software capabilities, which may include:

  • Automated entry and approval functions
  • On-demand, real-time scorecard measures
  • Rework on procured inventory
  • Single data repository to eliminate inefficiencies and maintain consistency
  • Advanced workflow approval process to ensure consistent procedures
  • Flexible data-input modes and real-time graphical performance displays
  • Customized cost savings documentation
  • Simplified setup procedures to eliminate dependence upon IT resources

Main KPIs for supply chain management will detail the following processes:

  • Sales forecasts
  • Inventory
  • Procurement and suppliers
  • Warehousing
  • Transportation
  • Reverse logistics

In a warehouse, the manager will use KPIs that target the best use of the facility, such as receiving and put-away KPIs to measure receiving efficiency and put-away cost per line. Storage KPIs can also be used to determine the efficiency of the storage space and the carrying cost of the inventory.[31]

Government

Governments around the world have adopted performance indicators as part of broader performance management reforms. These initiatives emerged from general concerns about performance deficits in the public sector, and the belief that systematic measurement could improve accountability and outcomes.[32] While governments have established extensive systems for collecting performance data, research suggests that the value of these indicators depends on whether managers effectively use the information in their decision-making processes.

Factors influencing the use of performance data include individual values, leadership roles, organisational culture, and external pressures. Managers with strong public service motivation are more likely to engage with performance information because they see it as a means of achieving public goals.[33] Leadership roles also matter. Task-specific leaders often use indicators more actively than generalist leaders who face broader political responsibilities.[34] Organisational cultures that emphasise learning, flexibility, and innovation are more likely to foster the use of performance information, whereas rigid or highly centralised environments may discourage it. Citizen participation can also create demand for greater accountability, encouraging managers to apply performance data to justify decisions and demonstrate transparency.[35]

International examples show the diversity of approaches. The provincial government of Ontario, Canada has used performance indicators since the late 1990s to assess higher education institutions, reporting on measures such as graduate satisfaction, employment rates, and student outcomes.[36] In England, Public Health England applies indicators to monitor national health screening programmes,[37] while UK government departments publish key contract-related indicators to improve service transparency.[38] The United States requires federal agencies to set strategic goals and report on progress under the Government Performance and Results Act.[39] The New Zealand Treasury’s Living Standards Framework and associated wellbeing indicators provide a broader set of measures that move beyond economic performance to social and environmental outcomes.[40]

Although performance indicators are now widespread, their effectiveness remains debated. It can be argued that indicators oversimplify complex goals, encourage symbolic compliance, and shift attention to what is easily measurable rather than what is substantively important. On the other hand, when well-designed and used within supportive cultures, indicators can strengthen accountability, guide learning, and improve service delivery.[41]

Human resource management

Performance indicators are widely used in human resource management (HRM) to assess recruitment, retention, performance, and employee well-being. In the public sector, these measures are shaped by distinctive institutional constraints and workforce motivations. Common HRM indicators include employee turnover rates, time to fill vacancies, absenteeism, staff satisfaction, and survey results.[42]

The effectiveness of HRM practices has been examined across public, semi-public, and private organisations. A large meta-analysis using the ability-motivation-opportunity (AMO) framework found that HRM practices positively influence individual performance in all sectors, but with sector-specific variations.[43] Ability-enhancing practices such as training and selective recruitment are consistently associated with higher job satisfaction and performance.[44] Motivation-enhancing practices, such as performance-based pay, show weaker impacts in public organisations, where employees are often driven more by intrinsic and altruistic motivations than extrinsic rewards.[45] Opportunity-enhancing practices, such as participatory decision-making and job autonomy, appear particularly important in encouraging extra-role behaviours like collaboration and knowledge-sharing.[46]

Employee turnover is a critical indicator for HRM. While traditionally seen as negative, research suggests that turnover may have more complex effects. A study of several hundred public school districts in Texas over nine years found that turnover was linearly negative for basic educational outcomes, such as standardised test scores, but showed a non-linear “inverted U-shaped” relationship with more complex outcomes like college readiness.[47] This indicates that low to moderate turnover may introduce new skills and perspectives, benefiting organisational performance, while very high turnover imposes significant costs and reduces effectiveness.

Other HRM indicators reflect absenteeism, which is often monitored as a proxy for workforce wellbeing and organisational health.[48] Staff satisfaction surveys are also commonly used to measure morale, commitment, and engagement, though their interpretation may be shaped by broader organisational culture and leadership practices.[49]

Other performance indicators

  • Customer order waiting time


Problems

Performance indicators (PIs) are widely used to measure, manage and provide public accountability across sectors such as healthcare, business, education and government.[50] However, they have challenges and limitations that may affect data accuracy, relevance, and effectiveness if not carefully considered. In practice, overseeing key performance indicators can prove expensive or difficult for organizations, and some indicators, such as staff morale, may be impossible to quantify. As a result, organisations may adopt imperfect KPIs and accept that they serve as a rough guide rather than a precise benchmark.[51]

Key performance indicators can also lead to perverse incentives and unintended consequences as a result of employees working to the specific measurements at the expense of the actual quality or value of their work.[52][53]

Sometimes, collecting statistics can become a substitute for a better understanding of the problems, so the use of dubious KPIs can cause measured effectiveness to diverge from actual progress toward aims. For example, during the Vietnam War, US forces appeared effective by kill ratios and high body counts, but these measures were misleading as indicators of progress: they did not show the lack of progress towards the US goal of increasing South Vietnamese government control of its territory.[51] Another example is measuring the productivity of a software development team in terms of lines of source code written: this approach can encourage large amounts of dubious code, inflating the line count while adding little value in terms of systemic improvement. A similar problem arises when a footballer kicks a ball uselessly to build up their statistics.

Some potential problems with performance indicators, with examples and impacts, include:

Overemphasis on Quantitative Metrics [54]

  • Problem: PIs may focus on easily measured metrics (e.g., numbers, rates, targets). This can lead to neglect of qualitative aspects, so important nuance is missed.
  • Example: measuring midwifery success by number of births attended ignores safety and patient experience.
  • Impact: this can lead to a narrow view of success and overlook important but less tangible outcomes.

Misalignment with Strategic Goals [55]

  • Problem: indicators may not reflect priorities or what truly matters to the organisation or stakeholders.
  • Example: focusing on speed of service rather than safety checks to meet production targets.
  • Impact: if PIs are chosen poorly, employees may focus on the wrong priorities or feel disconnected from the metric leading to disengagement or misdirected efforts.

Gaming the System [56]

  • Problem: when performance is reward or penalty based, employees may manipulate data or focus only on what is measured to meet targets, rather than improve actual performance.
  • Example: rushing patient discharges to meet bed turnover targets when it impacts care.
  • Impact: undermines the data integrity and leads to unintended consequences, such as neglecting unmeasured areas in quality of patient care.

Data Quality and Reliability [57]

  • Problem: inaccurate, incomplete, or untimely data can distort PIs.
  • Example: manual data entry or inconsistent, unclear definitions can interfere with results. Alternatively, annual staff satisfaction surveys may be too late to address emerging issues.
  • Impact: decisions based on flawed data can lead to poor outcomes, misinformed strategies or misleading conclusions.

Demotivation, Stress and Burden of Measurement [58]

  • Problem: constant monitoring pressure or unrealistic punitive PIs can be time consuming and resource intensive leading to stress, burnout, or reduced job satisfaction.
  • Example: employees may feel pressured to meet numbers rather than focus on meaningful or relevant work.
  • Impact: this can feel intrusive and reduce autonomy affecting employee well-being and retention, particularly in high-pressure environments.

Equity Blindness [59]

  • Problem: PIs may not account for differences in ethnicity, culture, demographics or external factors (e.g., socioeconomic conditions, staffing shortages, policy changes).
  • Example: using standardised benchmarks without adjusting for context (e.g., rural vs. urban) can penalise communities or individuals unfairly.
  • Impact: comparisons across businesses or organisations may be unfair or misleading, leading to misguided policy decisions, reinforcing systemic inequities, and missing opportunities for improvement.

Tunnel Vision [60]

  • Problem: focusing narrowly on measurable PIs can neglect unmeasured broader metrics.
  • Example: emphasising efficiency and patient flow might reduce time spent on compassionate, holistic care.
  • Impact: this can lead to distorted decision-making and unintended consequences.

Addressing and mitigating problems with PIs requires a thoughtful, systemic approach. Examples to guide this process include:

Design Indicators for Relevance and Fairness

  • Ensure PIs reflect meaningful outcomes, not only measurable outputs.
  • Avoid one-size-fits-all metrics: customise indicators to fit context.
  • Review regularly: ensure indicators evolve with organisational goals and realities.
  • Include qualitative indicators: balance numbers with narratives (e.g., consumer feedback).

Transparency and Participation

  • Co-design PIs with employees: involve frontline workers in developing indicators to ensure relevance and reflection of real work and values.
  • Clarify expectations: make sure staff understand what is being measured, why, and how it will be used.
  • Provide feedback loops: share results regularly and use them for learning.

Monitor for Gaming and Goal Shift

  • Audit for unintended consequences: check if indicators are encouraging shortcuts or superficial compliance.
  • Reward integrity and innovation, not just target-hitting.
  • Use a balance of mixed indicators: triangulate data by combining quantitative and qualitative data to get a fuller picture.

Reduce Monitoring Burden

  • Streamline data collection: use integrated systems to avoid duplication and manual reporting.
  • Limit frequency: avoid constant tracking; use periodic reviews that allow time for reflection and improvement.
  • Automate where possible: leverage technology to reduce administrative burden.

Equity-Minded Alternatives

  • Use disaggregated data to reveal disparity.
  • Include contextual factors like social determinants.
  • Promote inclusive practices in evaluation and improvement efforts.

Further examples include: in nursing, replacing rigid throughput metrics with indicators that reflect quality of care and patient experience; in education, balancing test scores with indicators of student engagement and learning environment quality; and in public services, including metrics for community impact and equity alongside traditional efficiency indicators.

See also

References

  1. ^ Carol Fitz-Gibbon (1990), "Performance indicators", BERA Dialogues (2), ISBN 978-1-85359-092-4
  2. ^ Weilkiens, Tim; Weiss, Christian; Grass, Andrea; Duggen, Kim Nena (2016). "Frameworks". OCEB 2 Certification Guide. Elsevier. pp. 149–169. doi:10.1016/b978-0-12-805352-2.00007-8. ISBN 9780128053522. KPI is a business metric that measures the degree of fulfillment of a goal or a Critical Success Factor (CSF). The CSF is an organization-internal or organization-external property that is necessary to achieve a specific goal. A CSF can involve multiple KPIs.
  3. ^ "What is a Key Performance Indicator (KPI)". KPI.org. Retrieved 1 January 2022.
  4. ^ "Key Performance Indicators – What Are Key Performance Indicators or KPI". Archived from the original on 2016-03-19. Retrieved 2008-02-15.
  5. ^ a b c d Key Performance Indicators: Establishing the Metrics that Guide Success (PDF), archived from the original (PDF) on 2017-12-01, retrieved 2016-04-23
  6. ^ "KPIs and the Logic of Decision Making". www.linkedin.com. Retrieved 2021-04-04.
  7. ^ Dolence, Michael G. (1994). "Using Key Performance Indicators to Drive Strategic Decision Making". New Directions for Institutional Research. 1994 (82): 63–80. doi:10.1002/ir.37019948207 – via academia.edu.
  8. ^ Kaplan, R. S., & Norton, D. P. (1992). The Balanced Scorecard: Measures That Drive Performance. Harvard Business Review, 70(1), 71–79.
  9. ^ Marr, Bernard (2012). Key Performance Indicators: The 75+ Measures Every Manager Needs to Know. Pearson UK.
  10. ^ Van Dooren, Wouter; Bouckaert, Geert; Halligan, John (2015-02-19). Performance Management in the Public Sector (0 ed.). Routledge. p. 69. doi:10.4324/9781315817590. ISBN 978-1-317-81416-0.
  11. ^ Joyce, Paul (2015-02-11). Strategic Management in the Public Sector (1 ed.). Routledge. p. 118. doi:10.4324/9781315740355. ISBN 978-1-315-74035-5.
  12. ^ Benington, John; Moore, Mark Harrison, eds. (2011). Public value: theory and practice. Basingstoke New York: Palgrave Macmillan. pp. 205–211. ISBN 978-0-230-24903-5.
  13. ^ Benington, John; Moore, Mark Harrison, eds. (2011). Public value: theory and practice. Basingstoke New York: Palgrave Macmillan. pp. 205–211. ISBN 978-0-230-24903-5.
  14. ^ Van Dooren, Wouter; Bouckaert, Geert; Halligan, John (2015-02-19). Performance Management in the Public Sector (0 ed.). Routledge. p. 20. doi:10.4324/9781315817590. ISBN 978-1-317-81416-0.
  15. ^ Van Dooren, Wouter; Bouckaert, Geert; Halligan, John (2015-02-19). Performance Management in the Public Sector (0 ed.). Routledge. p. 81. doi:10.4324/9781315817590. ISBN 978-1-317-81416-0.
  16. ^ Parmenter, David (2019). Key performance indicators: developing, implementing, and using winning KPIs (Fourth ed.). Hoboken, New Jersey: Wiley. pp. 48–50. ISBN 978-1-119-62082-2.
  17. ^ Poister, Theodore H.; Hall, Jeremy L.; Aristigueta, Maria P. (2015). Managing and measuring performance in public and nonprofit organizations: An integrated approach. San Francisco, CA: Jossey-Bass & Pffeifer Imprints, Wiley. p. 137. ISBN 978-1-118-43905-0.
  18. ^ Camilleri, Emanuel (2024). Key performance indicators: the complete guide to KPIs for business success. Abingdon, Oxon New York, NY: Routledge. p. 40. ISBN 978-1-032-68545-8.
  19. ^ Carter, Neil (1991). "Learning to Measure Performance: The Use of Indicators in Organizations". Public Administration. 69 (1): 85–101, 93. doi:10.1111/j.1467-9299.1991.tb00783.x. ISSN 1467-9299.
  20. ^ Camilleri, Emanuel (2024). Key performance indicators: the complete guide to KPIs for business success. Abingdon, Oxon New York, NY: Routledge. p. 32. ISBN 978-1-032-68545-8.
  21. ^ a b Public Service Commission, New Zealand (2008). "Performance Measurement Advice and examples on how to develop effective frameworks" (PDF). p. 35.
  22. ^ Benington, John; Moore, Mark Harrison, eds. (2011). Public value: theory and practice. Basingstoke New York: Palgrave Macmillan. pp. 205–211. ISBN 978-0-230-24903-5.
  23. ^ Van Dooren, Wouter; Bouckaert, Geert; Halligan, John (2015-02-19). Performance Management in the Public Sector (0 ed.). Routledge. pp. 69–73. doi:10.4324/9781315817590. ISBN 978-1-317-81416-0.
  24. ^ Poister, Theodore H.; Hall, Jeremy L.; Aristigueta, Maria P. (2015). Managing and measuring performance in public and nonprofit organizations: An integrated approach. San Francisco, CA: Jossey-Bass & Pfeiffer Imprints, Wiley. p. 143. ISBN 978-1-118-43905-0.
  25. ^ Parmenter, David (2020). Key performance indicators: developing, implementing, and using winning KPIs (Fourth ed.). Hoboken, New Jersey: Wiley. pp. 27–33. ISBN 978-1-119-62082-2.
  26. ^ Office of the Auditor General, New Zealand (2022). "Good practice in reporting about performance" (PDF). p. 21.
  27. ^ Poister, Theodore H.; Hall, Jeremy L.; Aristigueta, Maria P. (2015). Managing and measuring performance in public and nonprofit organizations: An integrated approach. San Francisco, CA: Jossey-Bass & Pfeiffer Imprints, Wiley. p. 121. ISBN 978-1-118-43905-0.
  28. ^ Hood, Christopher (2006). "Gaming in Targetworld: The Targets Approach to Managing British Public Services". Public Administration Review. 66 (4): 515–521: 519. doi:10.1111/j.1540-6210.2006.00612.x. ISSN 1540-6210.
  29. ^ Carter, Neil (1991). "Learning to Measure Performance: The Use of Indicators in Organizations". Public Administration. 69 (1): 85–101: 93. doi:10.1111/j.1467-9299.1991.tb00783.x. ISSN 1467-9299.
  30. ^ El Sayed, H., Supply Chain Key Performance Indicators Analysis, International Journal of Application or Innovation in Engineering & Management (IJAIEM), Volume 2, Issue 1, January 2013, accessed 6 January 2022
  31. ^ "Key Performance Indicators for Warehousing Performance | SIPMM Publications". publication.sipmm.edu.sg. 6 October 2020. Retrieved 2022-07-27.
  32. ^ Kamensky, John (1996). "Role of the "Reinventing Government" Movement in Federal Management Reform". Public Administration Review. 56 (3). Wiley: 247–255. doi:10.2307/976448. JSTOR 976448.
  33. ^ Moynihan, Donald; Pandey, Sanjay (1 March 2010). "The Big Question for Performance Management: Why Do Managers Use Performance Information?". Journal of Public Administration Research and Theory. 20 (4). Oxford University Press: 849–866. doi:10.1093/jopart/muq004. ISSN 1053-1858.
  34. ^ Moynihan, Donald; Pandey, Sanjay (1 March 2010). "The Big Question for Performance Management: Why Do Managers Use Performance Information?". Journal of Public Administration Research and Theory. 20 (4). Oxford University Press: 849–866. doi:10.1093/jopart/muq004. ISSN 1053-1858.
  35. ^ Heikkila, Tanya; Isett, Kimberley (March 2007). "Citizen Involvement and Performance Management in Special-Purpose Governments". Public Administration Review. 67 (2): 238–248. doi:10.1111/j.1540-6210.2007.00710.x.
  36. ^ "Key Performance Indicators". Colleges Ontario. Retrieved 2019-09-25.
  37. ^ Public Health England, Population screening KPIs: purpose and data submission guidance, updated 17 August 2020, accessed 24 November 2022
  38. ^ Key Performance Indicators (KPIs) for government's most important contracts, last updated 25 November 2022, accessed 26 November 2022
  39. ^ Resh, William; Moldogaziev, Tima; Fernandez, Sergio; Leslie, Colin Angus (March 2021). "Reversing the Lens: Assessing the Use of Federal Employee Viewpoint Survey in Public Administration Research". Review of Public Personnel Administration. 41 (1). Sage Publications: 132–162. doi:10.1177/0734371X19865012. ISSN 0734-371X.
  40. ^ The Treasury - Te Tai Ōhanga New Zealand (12 April 2022). "Our Living Standards Framework". The Treasury. The Treasury - Te Tai Ōhanga New Zealand. Retrieved 18 September 2025.
  41. ^ Moynihan, Donald; Pandey, Sanjay (1 March 2010). "The Big Question for Performance Management: Why Do Managers Use Performance Information?". Journal of Public Administration Research and Theory. 20 (4). Oxford University Press: 849–866. doi:10.1093/jopart/muq004. ISSN 1053-1858.
  42. ^ Mariani, Laura; Gigli, Sabrina; Bandini, Federica (March 2019). "Pay-for-Performance and Other Practices: Alternative Paths for Human Resource Management Effectiveness in Public Social Care Organizations". Review of Public Personnel Administration. 41 (1). Sage Publications: 78–104. doi:10.1177/0734371X19863841. ISSN 0734-371X.
  43. ^ Blom, R.; Kruyen, P. M.; Van der Heijden, B. I. J. M.; Van Thiel, S. (March 2020). "One HRM Fits All? A Meta- Analysis of the Effects of HRM Practices in the Public, Semipublic, and Private Sector". Review of Public Personnel Administration. 40 (1). Sage Publications: 3–35. doi:10.1177/0734371X18773492. ISSN 0734-371X.
  44. ^ Wesemann, Andrew (July 2022). "The Performance Rewards of Human Capital Development in the Federal Government". Public Personnel Management. 51 (1). Sage Publications: 151–169. doi:10.1177/00910260211039876. ISSN 0091-0260.
  45. ^ Mariani, Laura; Gigli, Sabrina; Bandini, Federica (March 2019). "Pay-for-Performance and Other Practices: Alternative Paths for Human Resource Management Effectiveness in Public Social Care Organizations". Review of Public Personnel Administration. 41 (1). Sage Publications: 78–104. doi:10.1177/0734371X19863841. ISSN 0734-371X.
  46. ^ Berkel, Rik van; de Vries, Julia Penning; Knies, Eva (July 2022). "Managing Street-Level Bureaucrats' Performance by Promoting Professional Behavior Through HRM". Public Personnel Management. 51 (2). Sage Publications: 189–212. doi:10.1177/00910260211046554. ISSN 0091-0260.
  47. ^ Meier, Kenneth J.; Hicklin, Alisa (17 October 2007). "Employee Turnover and Organizational Performance: Testing a Hypothesis from Classical Public Administration". Journal of Public Administration Research and Theory. 18 (4): 573–590. doi:10.1093/jopart/mum028. ISSN 1053-1858.
  48. ^ Mariani, Laura; Gigli, Sabrina; Bandini, Federica (March 2019). "Pay-for-Performance and Other Practices: Alternative Paths for Human Resource Management Effectiveness in Public Social Care Organizations". Review of Public Personnel Administration. 41 (1). Sage Publications: 78–104. doi:10.1177/0734371X19863841. ISSN 0734-371X.
  49. ^ Resh, William; Moldogaziev, Tima; Fernandez, Sergio; Leslie, Colin Angus (March 2021). "Reversing the Lens: Assessing the Use of Federal Employee Viewpoint Survey in Public Administration Research". Review of Public Personnel Administration. 41 (1). Sage Publications: 132–162. doi:10.1177/0734371X19865012. ISSN 0734-371X.
  50. ^ Office of the Auditor-General, New Zealand (2021). The problems, progress, and potential of performance reporting. Controller and Auditor-General = Tumuaki o te Mana Arotake. Retrieved from https://oag.parliament.nz/2021/performance-reporting
  51. ^ a b Daddis, Gregory (June 1, 2011). No Sure Victory: Measuring U.S. Army Effectiveness and Progress in the Vietnam War. Oxford University Press, USA. ISBN 978-0-19974-687-3.
  52. ^ Austin, Robert D. (1996). Measuring and managing performance in organizations. DeMarco, Tom; Lister, Timothy R. New York: Dorset House Publishing. ISBN 0932633366. OCLC 34798037.
  53. ^ Martin Fowler (2003-08-29). "CannotMeasureProductivity". Martinfowler.com. Retrieved 2013-05-25.
  54. ^ Pidun, T., & Felden, C. (2011) Limitations of Performance Measurement Systems based on Key Performance Indicators. AMCIS 2011 Proceedings - All Submissions. 14. https://aisel.aisnet.org/amcis2011_submissions/14
  55. ^ Toor, S. R., & Ogunlana, S. O. (2010). Beyond the ‘iron triangle’: Stakeholder perception of key performance indicators (KPIs) for large-scale public sector development projects. International Journal of Project Management, 28(3), 228–236. https://doi.org/10.1016/j.ijproman.2009.05.005
  56. ^ Aboubichr, B., & Conway, N. (2023). The gaming of performance management systems in British universities. Human Relations (New York), 76(4), 602–628. https://doi.org/10.1177/00187267211052827
  57. ^ Riskin, D. J., Monda, K. L., Gagne, J. J., Reynolds, R., Garan, A. R., Dreyer, N., Muntner, P., & Bradbury, B. D. (2025). Implementing Accuracy, Completeness, and Traceability for Data Reliability. JAMA Network Open, 8(3), Article e250128. https://doi.org/10.1001/jamanetworkopen.2025.0128
  58. ^ Siegel, R., König, C. J., & Lazar, V. (2022). The impact of electronic monitoring on employees’ job satisfaction, stress, performance, and counterproductive work behavior: A meta-analysis. Computers in Human Behavior Reports, 8, Article 100227. https://doi.org/10.1016/j.chbr.2022.100227
  59. ^ Campbell, L., Wei, E. K., Morse, M., & Bajaj, K. (2022). Weaving Equity into Every Step of Performance Improvement. The Institute for Healthcare Improvement. Retrieved from: https://www.ihi.org/library/blog/weaving-equity-every-step-performance-improvement
  60. ^ Koch, S. (2024). 7 Pitfalls of KPI Tunnel Vision. (un)common logic – Digital Marketing Strategy. Retrieved from: https://blog.uncommonlogic.com/insights/7-pitfalls-of-kpi-tunnel-vision

Further reading

  • Abeysiriwardana, P.C.; Jayasinghe-Mudalige, U., "Role of Peripheral Analysis Methods in Adoption of Successful KPIs for a Research Institute Working Towards Commercial Agriculture". JGBC. doi:10.1007/s42943-021-00021-z. ISSN 0976-1888.
  • Bernard Marr, Key Performance Indicators: The 75+ Measures Every Manager Needs to Know. Financial Times/Prentice Hall, 2012, ISBN 978-0273750116.
  • David Parmenter, Key Performance Indicators: Developing, Implementing, and Using Winning KPIs (4th Edition). John Wiley & Sons, 2020, ISBN 9781119620778.