What Works Cities Certification
Nov 14, 2018 · 7 min read

We updated our program in 2022. Please find the updated glossary here.

What Works Cities Assessment Glossary

What Works Cities Certification is the standard of excellence for data-driven local government. The terms below are applicable to the Assessment criteria, which you can find in our What Works Cities Assessment Guide. To suggest additional terms and definitions, please email certification@whatworkscities.org.

A–D | E–K | L–Q | R–Z

Administrative Data: Data collected for record-keeping, not research purposes.

Active Contract Management (ACM): A set of strategies that combine high-frequency use of data and collaborative meetings between government agencies and service providers to improve outcomes from contracted services.

Benchmark: A standard or point of reference against which data may be compared or assessed. May be used in data analytics or in the absence of internal data.

Benchmarking: The process of continuously comparing and measuring one practice against another to gain information that will help improve performance.
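
To make this concrete, here is a minimal sketch in Python; the metric (pothole-repair time) and all values are hypothetical, not drawn from the Assessment criteria:

```
# Hypothetical benchmarking sketch: compare one city's average
# pothole-repair time (days) against peer-city benchmarks.
peer_benchmarks = {"City A": 6.2, "City B": 4.8, "City C": 7.5}

our_city = 5.9
peer_average = sum(peer_benchmarks.values()) / len(peer_benchmarks)

print(f"Our city: {our_city} days; peer average: {peer_average:.1f} days")
if our_city <= peer_average:
    print("At or better than the peer benchmark.")
else:
    print("Lagging the peer benchmark; investigate the drivers.")
```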

Centralized Model of Government: The active presence of a person or team that is available to support work across the local government.

Citywide: Occurring in at least three city departments and/or agencies.

Civic Data Standards: A set of specifications (or requirements) for how some sets of data should be made publicly available.

Codify: To systematize policy.

Contract Type/Payment Mechanism: The price structure of a contract (e.g., fixed-price, cost-type, performance-based, etc.).

Data: Electronic records stored in computer systems. In the simplest terms, data are lists of things, such as requests for service, inventories, or incidents, which include helpful details, such as dates, locations, images, video, and more.

Data Breach: The release of private information or data to an untrusted environment.

Data Governance: The active presence of an authoritative body to lead and oversee data inventory in alignment with citywide technical, privacy, and strategic objectives.

Data Inventory: A comprehensive listing of all available data sources and in what format those data exist.
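
A data inventory can start as a simple structured list. A minimal sketch (the dataset names, departments, and systems below are hypothetical):

```
# Hypothetical data inventory: each entry records the owning department,
# the source system, and the format in which the data exist.
inventory = [
    {"dataset": "311 Service Requests", "department": "Public Works",
     "system": "CRM", "format": "CSV"},
    {"dataset": "Building Permits", "department": "Planning",
     "system": "Permitting database", "format": "SQL"},
]

for entry in inventory:
    print(f"{entry['dataset']}: {entry['format']} ({entry['department']})")
```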

Data Quality: The degree to which a city’s open data are complete, primary, timely, accessible, machine readable, available without registration, non-proprietary, free from licensing restrictions, permanent, and obtainable.

Data Quality Audit: A process intended to identify how prevalent problems are in datasets across a designated data area (a data collection, system, department, or organization) by sampling data from that area and assessing it for accuracy, timeliness, or consistency issues. It should ideally be done on a schedule, with the datasets to be assessed selected at random, rather than prompted by an immediate need to ensure the quality of a particular dataset.
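
A minimal sketch of the sampling step, assuming the datasets are CSV files and that the share of empty cells is the quality check of interest (both are illustrative assumptions):

```
import csv
import random

def missing_rate(path):
    """Share of empty cells in a CSV file; a simple proxy for completeness."""
    total = empty = 0
    with open(path, newline="") as f:
        for row in csv.reader(f):
            total += len(row)
            empty += sum(1 for cell in row if not cell.strip())
    return empty / total if total else 0.0

# Select datasets at random rather than auditing on demand.
all_datasets = ["permits.csv", "requests.csv", "inspections.csv"]  # hypothetical
for path in random.sample(all_datasets, k=2):
    print(path, f"{missing_rate(path):.1%} of cells empty")
```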

Data Transparency: The ability to access accurate data from an official source.

Data Visualization: The creation and study of the visual representation of data.

Demographic Bias: Analytic prejudice in favor of or against a sector of a population.

Descriptive Statistics: The practice of quantitatively describing a data set by using measures of central tendency, variability, and dispersion.
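
For example, Python’s standard library covers the common measures (the response times below are hypothetical):

```
import statistics

# Hypothetical daily 311 response times, in hours.
response_times = [4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4]

print("mean:  ", statistics.mean(response_times))    # central tendency
print("median:", statistics.median(response_times))  # central tendency
print("stdev: ", statistics.stdev(response_times))   # dispersion
```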

Desired Outcomes: Results that local governments are seeking to achieve through key procurements, contracts, or grants.

Diagnostic Analysis: A form of advanced analytics that examines data or content to explain why something happened.

Effectiveness: The degree to which a process yields the desired outcome/result, regardless of cost.

Efficiency: The degree to which a process yields the desired output at minimum cost.

Evaluation: A systematic assessment using standard research methods to help gain insights into the design, implementation, or effects of a policy, program, or practice.

Experimental Evaluations: An evaluation that attempts to determine causal effects through the use of a counterfactual (what would have happened in the absence of the policy, program, or practice) that is created through random assignment. Because randomness guarantees that there is no systematic relationship between being assigned to a particular comparison group and the other factors related to the outcome of interest, this approach is considered the “gold standard” for causal evaluation. Randomized controlled trials, which compare outcomes between randomly assigned groups that experience the change to the policy, program, or practice (a “treatment group”) or continue to experience the status quo (a “control group”), are the most common form of experimental evaluation in government.
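
A minimal sketch of the random-assignment step, which is what makes an evaluation experimental; the participants and outcomes below are simulated for illustration only:

```
import random
import statistics

# Hypothetical participants in a program evaluation.
participants = [f"resident_{i}" for i in range(100)]

# Random assignment: no systematic relationship between group
# membership and other factors related to the outcome of interest.
random.shuffle(participants)
treatment = participants[:50]  # experiences the change
control = participants[50:]    # continues with the status quo

# Simulated post-intervention outcomes (for illustration only).
outcomes = {p: random.gauss(1.0 if p in treatment else 0.0, 1.0)
            for p in participants}
effect = (statistics.mean(outcomes[p] for p in treatment)
          - statistics.mean(outcomes[p] for p in control))
print(f"Estimated treatment effect: {effect:.2f}")
```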

Geographic Bias: Analytic prejudice in favor of or against spatially distinct populations.

Geospatial Analysis: The gathering, display, and manipulation of imagery, GPS data, satellite photography, and historical data, described explicitly in terms of geographic coordinates or implicitly in terms of a street address, postal code, or forest stand identifier, as applied to geographic models.

Intervention: A change in a practice or program.

Key Metrics: Prevailing metrics that a government identifies as the primary way to measure progress toward its strategic priorities.

Key Procurements, Contracts, and Grants: Procurements, contracts, and grants that are either tied to high-priority goals or represent large dollar amounts.

Low-Cost Evaluations: An evaluation that does not have significant budget requirements, generally because it leverages existing systems, data, or processes.

Machine Readable: Data in a form processable by a computer.

Measures/Metrics: A quantifiable unit that provides information about the success of a program, department, service, or outcome people care about achieving or maintaining. A government may identify a measure/metric by inventorying data that it already collects, collecting new data, or using validated external data. Measures/metrics can focus on inputs, outputs, service quality, efficiency (e.g., cost per application processed); productivity (e.g., throughput); and effectiveness/outcomes (e.g., unemployment rate).
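
An efficiency measure such as cost per application processed is a simple ratio; the figures below are hypothetical:

```
# Hypothetical efficiency metric: cost per application processed.
program_cost = 125_000.00       # annual cost of the permitting office, USD
applications_processed = 4_800  # applications handled over the same year

cost_per_application = program_cost / applications_processed
print(f"Cost per application: ${cost_per_application:.2f}")
```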

Metadata: A set of data that describes or gives information about other data.
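
In practice, metadata often travels with a dataset as a small structured record. A hypothetical example:

```
import json

# Hypothetical metadata record describing an open dataset.
metadata = {
    "title": "311 Service Requests",
    "description": "Resident requests for non-emergency city services.",
    "publisher": "Department of Public Works",
    "last_updated": "2018-11-01",
    "update_frequency": "daily",
    "format": "CSV",
    "license": "Public Domain",
}
print(json.dumps(metadata, indent=2))
```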

Observational Evaluations: A way to understand and monitor ongoing processes through indirect or direct observations.

Open Contracting Data Standard (OCDS): An open data standard for publication of structured information on all stages of the contracting process: from planning to implementation.
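
As a rough illustration, an OCDS release is a structured record covering a stage of the contracting process. The sketch below loosely follows the standard’s field names but is simplified and unvalidated; consult the OCDS schema for the authoritative format:

```
import json

# Simplified, illustrative OCDS-style release (not a complete or
# validated record; see the OCDS documentation for the real schema).
release = {
    "ocid": "ocds-example-000001",
    "date": "2018-11-14T00:00:00Z",
    "tag": ["tender"],
    "tender": {
        "title": "Road resurfacing, District 3",
        "value": {"amount": 250000, "currency": "USD"},
    },
}
print(json.dumps(release, indent=2))
```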

Open Data: Electronic data records that are accessible in whole or in part to the public and are legally open without restriction on use or re-use. This practice is a form of proactive disclosure — making information available without it being requested.

Performance Analytics: The practice of studying how to perform better and inserting those insights into the operational decision-making process.

Performance Data: Audit reports or other data sets that evaluate a vendor’s performance of past contracts.

Performance Management: The process by which leaders, managers, employees, and stakeholders work collaboratively to identify what they want to achieve, decide how to measure progress, take informed action based on evidence, and take stock of the results to inform future decisions. It includes performance measurement, performance measures, performance targets, data science practices, and transparency.

Performance Measurement: The building block of performance management, it is the process of establishing performance measures/metrics, collecting the relevant data for those measures, and reporting out on the results.

Prescriptive Analysis: The anticipation of what will happen, when it will happen, and why. The analysis culminates in decision options on how to take advantage of a future opportunity or mitigate a future risk.

Procurement Vehicle: The solicitation method (e.g., request for bid, request for proposal, request for qualifications, design-build, agile, problem-based) used to procure a product or service.

Protected Data: Data that cannot be legally shared in the open and are only made available through highly restrictive procedures. Under typical privacy law, data users must meet high standards and establish relationships with the data providers to create credibility and trust before being granted access.

Quality Data: Data characterized by trustworthiness, accuracy, and reliability.

Quasi-Experimental Evaluations: An evaluation that attempts to determine causal effects through the use of a counterfactual (what would have happened in the absence of the policy, program, or practice) that is not created through random assignment. Examples include natural or “as-if” experiments, before-and-after designs, or matching.
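
A minimal sketch of a before-and-after design, one of the simpler quasi-experimental approaches; the complaint counts are hypothetical:

```
import statistics

# Hypothetical monthly complaint counts before and after a policy change.
before = [52, 48, 55, 50, 49, 53]
after = [44, 41, 46, 43, 40, 45]

change = statistics.mean(after) - statistics.mean(before)
print(f"Average change after the intervention: {change:+.1f} complaints/month")
# Caution: without random assignment, seasonality or other trends could
# explain this change just as well as the policy itself.
```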

Randomized Controlled Trials: An evaluation that compares outcomes between randomly assigned groups that experience the change to the policy, program, or practice (a “treatment group”) or continue to experience the status quo (a “control group”). Randomized controlled trials are the most common form of experimental evaluation in government.

Repurposing: A strategic process cities can use to shift funding from ineffective programs and services to those that are evidence-based, resident-centered, and achieve results.

Results-Driven Contracting: A set of strategies to structure, evaluate, and actively manage contracts using data, helping cities leverage procurement as a tool to make progress on their highest-priority goals.

Rigorous Evaluation: An evaluation that uses best practice research methods to generate evidence in which we can have a high degree of confidence.

Schematic Standards: A common structure for how information is shared between publishers.

Scope of Work: The section of a procurement or contract that describes in detail the product or service being contracted.

Semantic Standards: Unambiguous, shared meanings for the data being shared between publishers.

Service-Delivery Models: A set of standards that guide the design, development, deployment, operation, and retirement of services offered by a service provider.

Strategic Framework: An overarching set of performance priorities for which the local government wishes to achieve results.

Strategic Goals: Goals of a contract, which are ideally aligned with the local government’s overall goals.

Strategic Priorities: The long-term goals of a municipality.

Tactical Data Engagement: A four-step method (Find, Refine, Design, Implement) that goes beyond open data policy and data portals to facilitate opportunities for the community use of open data to improve residents’ lives.

Target: A desired change in the measure/metric that will advance progress toward strategic priorities within a specific timeline.
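
Progress toward a target can be tracked as the share of the desired change achieved so far; the figures below are hypothetical:

```
# Hypothetical target: cut average permit wait time from 30 days
# (baseline) to 20 days (target); the current value is 26 days.
baseline, target, current = 30.0, 20.0, 26.0

progress = (baseline - current) / (baseline - target)
print(f"Progress toward target: {progress:.0%}")  # 40%
```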

User-friendly: Designed so that accessing, updating, downloading, sharing, and reading a city’s data is straightforward.

Vendor Selection Criteria: The criteria outlined in a procurement document to select a vendor for a contract.
