Voice to Congress
For the Technology and AI issue page, the metrics should do four things at once:
- speak plainly to the public
- hold up to policy and oversight scrutiny
- power a structured web page and scoring model
- support later Congressional Report Card integration
That means the metrics cannot be vague, academic, or overly technical. The good news is that the core public data sources already exist:
- FCC National Broadband Map: coverage and provider-choice measures
- Federal IT Dashboard: federal IT spending and investment health data
- GAO: legacy-system modernization findings
- UN E-Government Survey: digital-government benchmarking
- ITU: ICT Development Index
- Census BTOS: firm-level AI use tracking
- Stanford AI Index: international AI adoption, trust, and investment benchmarks
1. Metric Requirements
TM-001 - Public Meaning
Each metric shall answer a plain-English question the average American can understand.
Examples:
- Can my household get affordable high-speed internet?
- Can I complete a major government task fully online?
TM-002 - Actionability
Each metric shall point toward a policy lever, budget choice, oversight action, or operational improvement.
A good metric should make it obvious what Congress, agencies, states, or regulators could do next.
TM-003 - Outcome Priority
The metric set shall prioritize outcomes over inputs.
Preferred: outcome measures, such as whether households can actually get broadband service or complete a government task online.
Less useful by themselves: input measures, such as dollars spent, programs launched, or systems counted.
TM-004 - Limited Use of Inputs
Input metrics may be used only when they help explain why outcomes are strong or weak.
Example: federal IT spending is useful, but only if paired with measures showing whether spending is producing modernization and better service. The IT Dashboard publicly reports FY2025 federal IT spending and investment management data, which makes it useful as a supporting input source rather than a stand-alone success measure.
TM-005 - One Definition Per Metric
Each metric shall have:
- one plain-English definition
- one formula or method
- one named source
TM-006 - Stable Formula
Each metric formula shall remain stable over time unless a documented revision is required.
If a formula changes, the webpage shall identify the old and new method.
TM-007 - Official or Defensible Source
Each metric shall come from:
- an official government source, or
- an established international body or independent research institution with a published methodology
The preferred source order should be:
1. official U.S. government data
2. international benchmarking bodies
3. independent research sources
That approach fits the currently available data landscape: FCC for broadband maps, Census for business AI use, GAO for legacy systems, IT Dashboard for federal IT portfolios, UN for digital government benchmarking, and ITU for cross-country connectivity benchmarking.
TM-008 - Trendability
Each metric shall support year-over-year tracking.
The webpage should be able to show:
- the current value
- the previous value
- the direction and size of change over time
TM-009 - Geographic Usefulness
Where possible, each metric shall support:
- national values
- state-level values
- congressional-district values
- county or local values
Not every metric will support all four, but the system should prefer metrics that can be localized.
TM-010 - Comparability to Top Countries
Where a meaningful international benchmark exists, the metric shall support comparison to top-performing countries.
This is especially important for:
- connectivity
- digital government maturity
- AI adoption and investment
The UN E-Government Survey and ITU ICT Development Index are especially useful for this purpose because they are designed for country-level comparison.
TM-011 - Avoidance of Vanity Metrics
The metric set shall exclude metrics that look impressive but do not show public benefit.
Examples of weak vanity metrics:
- dollars announced or spent, with no outcome attached
- programs launched or pilots counted
- systems deployed without usage or service-quality data
TM-012 - Domain Balance
The full metric set shall include measures from each of these categories:
- Access and Affordability
- Digital Government Performance
- Modernization and Reliability
- AI Adoption and Economic Use
- AI Safety, Governance, and Trust
- Global Competitiveness and Benchmarking
TM-013 - Small Number of Headline Metrics
The public-facing page shall have a small number of headline metrics, ideally 8 to 15, with additional supporting metrics underneath.
Too many top-line metrics make the page hard to understand.
TM-014 - Data Quality Flag
Each metric shall have a data-quality label such as:
- Official
- Estimate
- Proxy
- Manually curated
TM-015 - Transparent Scoring
If a score or grade is derived from the metrics, the weighting and formula shall be public.
TM-016 - Thresholds and Targets
Each metric shall have:
- a target value
- thresholds that distinguish strong, adequate, and weak performance (see the sketch below)
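As a minimal sketch of how thresholds could drive a display status, assuming higher values are better; the function name and cut points here are illustrative, not part of the requirement:

    def status(value: float, good: float, fair: float) -> str:
        """Map a metric value to a display status using two thresholds."""
        if value >= good:
            return "on track"
        if value >= fair:
            return "needs attention"
        return "off track"

    # Example: with good=90 and fair=75, a value of 80 reads "needs attention".
    print(status(80.0, good=90.0, fair=75.0))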
TM-017 - Public Relevance
At least half of the headline metrics shall reflect what households, workers, students, patients, or small businesses directly experience.
TM-018 - Policy Relevance
At least half of the supporting metrics shall reflect conditions Congress or agencies can realistically improve through legislation, oversight, regulation, procurement, or modernization.
TM-019 - Machine Readability
Each metric shall be storable in structured JSON and exportable to CSV for use in the webpage and future analytics.
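As a minimal sketch of TM-019 in practice, assuming Python tooling; the field names are placeholders rather than a final schema:

    import csv
    import json

    # Illustrative metric record; field names are placeholders, not a final schema.
    metric = {
        "id": "TAM-A-001",
        "name": "Household Broadband Availability",
        "value": None,  # filled by the data pipeline
        "unit": "percent",
        "source": "FCC National Broadband Map",
    }

    # Store as structured JSON for the webpage.
    with open("metrics.json", "w") as f:
        json.dump([metric], f, indent=2)

    # Export the same record to CSV for analytics.
    with open("metrics.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=metric.keys())
        writer.writeheader()
        writer.writerow(metric)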
TM-020 - Web Simplicity
Each metric shall be displayable in a simple format:
- a headline number
- a one-line plain-English explanation
- a trend indicator
TM-021 - Auditability
Each published value shall be traceable to:
- a named source
- a retrieval or publication date
- the formula or method used to produce it
TM-022 - Periodic Refresh
The metric system shall support refresh cycles based on source cadence.
Likely cadence examples:
- Census BTOS: biweekly
- FCC National Broadband Map: semiannual data collections
- ITU ICT Development Index and Stanford AI Index: annual
- UN E-Government Survey: every two years
- GAO legacy-system reports: periodic
2. Recommended Metric Structure for the Web Page
The Technology and AI page should use six metric domains.
Domain A - Access and Affordability
Measures whether people can actually get and afford modern digital access.
Domain B - Digital Government Performance
Measures whether government technology is simple, modern, and usable.
Domain C - Modernization and Reliability
Measures whether public systems are still running on aging, vulnerable, expensive legacy technology.
Domain D - AI Adoption and Economic Use
Measures whether AI is actually spreading through business, public services, and the workforce.
Domain E - AI Safety, Governance, and Trust
Measures whether AI is being governed responsibly and whether the public trusts the system.
Domain F - Global Competitiveness and Benchmarking
Measures whether the U.S. is keeping pace with the best-performing countries.
3. Headline Metric Requirements by Domain
Domain A - Access and Affordability
TAM-A-001 - Household Broadband Availability
Requirement: The metric set shall include a measure of the share of U.S. households or service locations with access to fixed broadband meeting the selected baseline speed standard.
Likely source: FCC National Broadband Map.
TAM-A-002 - Provider Choice
Requirement: The metric set shall include a measure of the share of households with at least two practical high-speed providers.
Why it matters: Coverage without competition often still means high prices and poor service.
TAM-A-003 - Underserved / Unserved Locations
Requirement: The metric set shall include measures of unserved and underserved locations.
TAM-A-004 - International Connectivity Benchmark
Requirement: The metric set shall include at least one internationally comparable connectivity metric.
Likely source: ITU ICT Development Index. The ITU's 2025 IDI is specifically designed to assess whether connectivity is universal and meaningful, and it publishes country scores including the United States.
Domain B - Digital Government Performance
TAM-B-001 - Major Services Available End-to-End Online
Requirement: The metric set shall include a measure of the share of major public services that can be completed fully online.
TAM-B-002 - Status Tracking Availability
Requirement: The metric set shall include a measure of whether major services allow real-time or near-real-time status tracking.
TAM-B-003 - Digital Government Benchmark Rank or Score
Requirement: The metric set shall include an international benchmark for national digital-government maturity.
Likely source: UN E-Government Survey, which benchmarks digital government across all 193 UN Member States and introduced a Digital Government Model Framework in its 2024 edition.
TAM-B-004 - Once-Only / Reuse Capability Proxy
Requirement: The metric set should include a proxy for whether people must repeatedly provide the same information to different government systems.
Domain C - Modernization and Reliability
TAM-C-001 - Legacy Systems in Need of Modernization
Requirement: The metric set shall include a measure of critical federal legacy systems most in need of modernization.
Likely source: GAO. In 2025 GAO reviewed 69 federal legacy systems and identified 11 as most in need of modernization.
TAM-C-002 - Legacy System O&M Burden
Requirement: The metric set shall include a measure showing how much federal IT spending is going to operating and maintaining existing systems versus modernization.
The public IT Dashboard shows FY2025 IT spending and portfolio data, which supports this category. GAO also reports that much federal IT spending still goes to operations and maintenance of existing systems.
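As a minimal sketch of how the O&M-burden share could be computed; the dollar figures below are hypothetical placeholders, not actual IT Dashboard values:

    # Hypothetical figures for illustration only, not actual FY2025 values.
    om_spend = 80.0   # $B on operations and maintenance (O&M)
    dme_spend = 20.0  # $B on development, modernization, and enhancement (DME)

    om_share = om_spend / (om_spend + dme_spend)
    print(f"O&M share of federal IT spending: {om_share:.0%}")  # prints 80%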
TAM-C-003 - Critical Legacy Systems with Known Vulnerabilities
Requirement: The metric set should include a count or share of major legacy systems with known cybersecurity vulnerabilities, unsupported components, or obsolete languages where data is available. GAO reported that seven of the 11 systems it highlighted were operating with known cybersecurity vulnerabilities.
TAM-C-004 - Modernization Completion Rate
Requirement: The metric set shall include a measure of how many identified critical systems have been completed, replaced, or substantially modernized.
Domain D - AI Adoption and Economic Use
TAM-D-001 - Business AI Use Rate
Requirement: The metric set shall include a measure of AI use among firms, especially small and mid-sized businesses.
Likely source: Census BTOS. Census states that BTOS provides high-frequency estimates of AI use rates by firm size class, and its definition of AI includes generative AI.
TAM-D-002 - Public-Sector AI Inventory
Requirement: The metric set shall include a count of federal agencies or major systems publicly disclosing AI use.
TAM-D-003 - Workforce AI Literacy / Readiness Proxy
Requirement: The metric set should include a proxy measure for workforce AI readiness, such as education, training, or adoption-related participation.
TAM-D-004 - Population-Level AI Adoption Benchmark
Requirement: The metric set should include an international benchmark for public or population-level AI adoption.
Likely source: Stanford AI Index, which tracks cross-country adoption and public-opinion measures.
Domain E - AI Safety, Governance, and Trust
TAM-E-001 - Trust in AI Regulation
Requirement: The metric set shall include a measure of public trust in government’s ability to regulate AI responsibly.
Stanford's AI Index reports cross-country public-opinion comparisons on trust in AI regulation, including for the United States.
TAM-E-002 - High-Impact AI Systems with Human Review
Requirement: The metric set should include a measure of whether high-impact public AI systems provide meaningful human review or appeal.
TAM-E-003 - AI Incident Reporting Availability
Requirement: The metric set should include a measure of whether agencies or regulated sectors publicly report material AI incidents.
TAM-E-004 - Assurance / Testing Coverage
Requirement: The metric set should include a measure of how many high-impact AI systems have documented testing, evaluation, or assurance artifacts.
Domain F - Global Competitiveness and Benchmarking
TAM-F-001 - International Digital Competitiveness Score
Requirement: The metric set shall include at least one international competitiveness benchmark.
TAM-F-002 - ICT Development Benchmark
Requirement: The metric set shall include a connectivity benchmark that is internationally comparable.
The ITU IDI is designed for this and reports country scores annually.
TAM-F-003 - Digital Government Benchmark
Requirement: The metric set shall include the UN digital-government benchmark.
TAM-F-004 - AI Leadership Benchmark
Requirement: The metric set should include a benchmark for AI investment, adoption, or deployment relative to peer countries.
Stanford's AI Index is useful here because it tracks cross-country AI investment, adoption, and governance-related perceptions.
4. Recommended Headline Metrics for Version 1
Recommended 10 headline metrics:
1. Household broadband availability (FCC National Broadband Map)
2. Provider choice: households with at least two practical high-speed providers
3. Major public services available end-to-end online
4. UN E-Government Survey rank or score
5. Critical federal legacy systems most in need of modernization (GAO)
6. Share of federal IT spending on O&M versus modernization (IT Dashboard)
7. Business AI use rate (Census BTOS)
8. Public trust in government's ability to regulate AI (Stanford AI Index)
9. ITU ICT Development Index score
10. International AI adoption and investment benchmark (Stanford AI Index)
This is enough to make the page useful without overwhelming the visitor.
5. Recommended Supporting Metrics for Version 2
Expand with supporting metrics such as:
- unserved and underserved locations (TAM-A-003)
- status tracking availability for major services (TAM-B-002)
- a once-only / information-reuse proxy (TAM-B-004)
- critical legacy systems with known vulnerabilities (TAM-C-003)
- modernization completion rate (TAM-C-004)
- public-sector AI inventory counts (TAM-D-002)
- a workforce AI readiness proxy (TAM-D-003)
- human review of high-impact AI systems (TAM-E-002)
- AI incident reporting availability (TAM-E-003)
- assurance and testing coverage (TAM-E-004)
6. Scoring Requirements
Planned structure:
- each domain receives a subscore built from its headline and supporting metrics
- domain subscores roll up into a single overall score or grade
- weights and formulas are published, per TM-015
A reasonable starting point:
- weight the six domains roughly equally, with modest extra weight on Access and Affordability and Digital Government Performance
Focus on people experience, not just prestige or investment. A minimal scoring sketch follows.
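This sketch assumes six domain subscores on a 0-100 scale; the weights shown are placeholders for discussion, not a decision:

    # Illustrative weights; Domains A and B get slightly more weight to favor
    # what people directly experience. Placeholders, not a final decision.
    WEIGHTS = {"A": 0.20, "B": 0.20, "C": 0.15, "D": 0.15, "E": 0.15, "F": 0.15}

    def overall_score(domain_scores: dict[str, float]) -> float:
        """Weighted average of six domain subscores, each on a 0-100 scale."""
        assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must sum to 1
        return sum(WEIGHTS[d] * domain_scores[d] for d in WEIGHTS)

    # Example: every domain at 70 yields an overall score of 70.
    print(overall_score({"A": 70, "B": 70, "C": 70, "D": 70, "E": 70, "F": 70}))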
7. Data Pipeline Requirements
The metric pipeline should satisfy these requirements.
DP-001 - Source Registry
There shall be a source registry listing:
- source name
- URL
- update cadence
- owner or steward
- the metrics it feeds
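A registry entry could be as simple as the following sketch; the fields, URLs, and cadences are illustrative and should be confirmed against each publisher:

    # Illustrative source-registry entries; confirm URLs and cadences before use.
    SOURCE_REGISTRY = [
        {
            "source": "FCC National Broadband Map",
            "url": "https://broadbandmap.fcc.gov",
            "cadence": "semiannual",
            "metrics": ["TAM-A-001", "TAM-A-002", "TAM-A-003"],
        },
        {
            "source": "Census Business Trends and Outlook Survey (BTOS)",
            "url": "https://www.census.gov",
            "cadence": "biweekly",
            "metrics": ["TAM-D-001"],
        },
    ]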
DP-002 - Structured Metric File
There shall be a single structured metrics file for the page, such as JSON.
DP-003 - Current / Previous / Target Fields
Each metric record shall include:
- a current value
- a previous value
- a target value
DP-004 - Display Text
Each metric record shall include:
- a plain-English label
- a one-line explanation of what the value means
DP-005 - Data Quality Field
Each metric record shall include:
- a data-quality label consistent with TM-014
A sketch combining the DP-003 through DP-005 fields follows.
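This record sketch pulls the DP-003 through DP-005 fields together; all names and values are illustrative placeholders:

    # One metric record combining DP-003, DP-004, and DP-005 fields.
    # Field names and values are illustrative placeholders.
    record = {
        "id": "TAM-A-001",
        "label": "Household Broadband Availability",        # DP-004 display text
        "explanation": "Share of locations with access to baseline fixed broadband.",
        "current": None,              # DP-003: current value
        "previous": None,             # DP-003: previous value
        "target": None,               # DP-003: target value
        "unit": "percent",
        "data_quality": "Official",   # DP-005: label per TM-014
        "source": "FCC National Broadband Map",
    }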
DP-006 - Manual Override Capability
The pipeline shall allow manually curated values where a source is annual, hard to parse, or better maintained by editorial review.
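One way to honor DP-006 is to merge pipeline output with a manually curated overrides file; the file names and structure here are assumptions:

    import json

    def apply_overrides(metrics_path="metrics.json", overrides_path="overrides.json"):
        """Replace pipeline values with manually curated ones where provided."""
        with open(metrics_path) as f:
            metrics = {m["id"]: m for m in json.load(f)}
        with open(overrides_path) as f:
            overrides = json.load(f)  # e.g. {"TAM-B-001": {"current": 42.0}}
        for metric_id, fields in overrides.items():
            if metric_id in metrics:
                metrics[metric_id].update(fields)  # curated value wins
        return list(metrics.values())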
8. Best Public Data Sources for the Next Step
These are the strongest starting sources for the actual data build:
- FCC National Broadband Map (coverage, provider choice, unserved/underserved locations)
- Federal IT Dashboard (federal IT spending and investment health)
- GAO legacy-system reports (modernization needs and vulnerabilities)
- UN E-Government Survey (digital-government benchmarking)
- ITU ICT Development Index (connectivity benchmarking)
- Census BTOS (business AI use)
- Stanford AI Index (international AI adoption, trust, and investment)
9. Plain-English Summary
For Technology and AI, the metrics should answer five simple questions:
1. Can people actually get and afford modern internet access?
2. Does government technology work simply and reliably for the public?
3. Are public systems being moved off aging, vulnerable, expensive legacy technology?
4. Is AI spreading in ways that help businesses, workers, and public services?
5. Is AI being governed responsibly, and is the U.S. keeping pace with the best-performing countries?