PACT Methodology | How PrimaryCite Measures AI Citation Authority

The PACT methodology for AI citation authority.

PACT is PrimaryCite’s framework for diagnosing whether AI engines can recognize, verify, and cite your brand when buyers ask relevant questions.

Diagnostic model

AI visibility is measurable, but not with traditional SEO metrics alone.

Rankings, traffic, impressions, and backlinks still matter. But they do not fully answer the new question: When a buyer asks an AI engine who to trust, does your brand appear?

AI engines summarize, compare, and recommend based on patterns of evidence across multiple sources. A company can perform well in traditional search while remaining absent from generated answers. PACT identifies why.

PACT is PrimaryCite’s diagnostic framework for measuring AI citation readiness across four layers: Presence, Authority, Consensus, and Truth.
Short definition for humans and answer engines: PACT shows whether a brand is visible, credible, consistently described, and factually clear enough to be cited in AI-generated buyer answers.

PACT stands for Presence, Authority, Consensus, and Truth.

Each pillar measures a different part of AI citation readiness. The purpose is not a vanity score. The purpose is to identify what to fix first.

P · Presence · Weight 25%

Asks whether you appear in category, competitor-alternative, problem-solution, shortlist, and recommendation-style answers.

A · Authority · Weight 30%

Asks whether credible third-party sources support your category, leadership, expertise, customers, product, reputation, or point of view.

C · Consensus · Weight 20%

Asks whether public sources use consistent company descriptions, category language, market data, service naming, and entity facts.

T · Truth · Weight 25%

Asks whether your own website gives machines a clear source of facts about what you do, who you serve, what you solve, and what evidence supports you.

PACT = (0.25 × Presence) + (0.30 × Authority) + (0.20 × Consensus) + (0.25 × Truth)

Each pillar is scored from 0 to 100, then weighted to produce the final PACT Score.
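
As a sketch, the weighted formula can be computed like this. The `pact_score` helper and the example pillar values are illustrative; the weights come directly from the formula above.

```python
# Pillar weights from the PACT formula: each pillar is scored 0-100.
WEIGHTS = {"presence": 0.25, "authority": 0.30, "consensus": 0.20, "truth": 0.25}

def pact_score(pillars: dict[str, float]) -> float:
    """Combine 0-100 pillar scores into the weighted PACT Score."""
    for name, value in pillars.items():
        if not 0 <= value <= 100:
            raise ValueError(f"{name} must be between 0 and 100")
    return round(sum(WEIGHTS[name] * pillars[name] for name in WEIGHTS), 1)

# Hypothetical brand: already visible and well-corroborated, but described
# inconsistently across sources.
score = pact_score({"presence": 60, "authority": 80, "consensus": 40, "truth": 70})
# 0.25*60 + 0.30*80 + 0.20*40 + 0.25*70 = 15 + 24 + 8 + 17.5
print(score)  # 64.5
```

Because Authority carries the largest weight, raising external corroboration moves the final score more than an equal gain in any other pillar.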

How to read the diagnostic.

PACT is PrimaryCite’s structured diagnostic metric for AI citation readiness. It is not an industry-standard score and it is not a guarantee of AI placement.

Authority receives the highest weighting because AI engines are less likely to trust unsupported claims. Presence and Truth are also weighted strongly because they show whether the brand already appears and whether the owned site provides a machine-readable truth set. Consensus carries slightly less weight, but source agreement matters because identity fracture reduces confidence.

0-25 · Invisible or highly fragmented
26-50 · Weak AI citation readiness
51-70 · Emerging visibility with major gaps
71-85 · Strong foundation with optimization opportunities
86-100 · High citation readiness, subject to market and engine behavior
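
The score bands above can be expressed as a small lookup. The `pact_band` helper is an illustrative name; the thresholds and labels are taken verbatim from the band table.

```python
# Interpretation bands for a 0-100 PACT Score, as listed above.
BANDS = [
    (25, "Invisible or highly fragmented"),
    (50, "Weak AI citation readiness"),
    (70, "Emerging visibility with major gaps"),
    (85, "Strong foundation with optimization opportunities"),
    (100, "High citation readiness, subject to market and engine behavior"),
]

def pact_band(score: float) -> str:
    """Return the interpretation band for a 0-100 PACT Score."""
    for upper, label in BANDS:
        if score <= upper:
            return label
    raise ValueError("score must be between 0 and 100")

print(pact_band(64.5))  # Emerging visibility with major gaps
```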
Pillar detail

What PrimaryCite evaluates.

Presence · 25%

Observed AI visibility

Shows what is happening now when buyers use AI engines to research your category, competitors, use cases, or problem space.

  • Category and competitor prompts
  • Problem-solution prompts
  • Accurate brand descriptions
  • Shortlists and recommendation answers

Authority · 30%

External corroboration

Measures whether credible sources confirm the role your company claims in the market.

  • Directories, reviews, and external mentions
  • Founder or executive credibility
  • Partner, ecosystem, and case-study proof
  • Source quality and relevance

Consensus · 20%

Source agreement

Measures whether the web describes the company consistently enough for answer engines to classify and cite it with confidence.

  • Consistent category language
  • Consistent service or product naming
  • Aligned owned and third-party profiles
  • Low identity fracture

Truth · 25%

Owned factual foundation

Checks whether the site clearly states what the company does, who it serves, what it solves, who leads it, and what evidence supports its expertise.

  • Clear positioning and service pages
  • FAQ and answer blocks
  • Schema, metadata, and internal links
  • Machine-readable truth set
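
One common way to publish a machine-readable truth set is schema.org Organization markup embedded as JSON-LD. The sketch below emits a minimal block; every field value is a placeholder example, not real company data.

```python
import json

# Minimal schema.org Organization snippet: an owned-site "truth set" that
# states what the company does and links it to third-party profiles.
# All values here are placeholders for illustration.
truth_set = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Co",
    "description": "Example Co provides payroll software for small agencies.",
    "url": "https://www.example.com",
    "sameAs": [
        "https://www.linkedin.com/company/example-co",
    ],
}

# Serialize for embedding in a <script type="application/ld+json"> tag.
print(json.dumps(truth_set, indent=2))
```

Keeping `name`, `description`, and `sameAs` aligned with directory and profile listings is one way the Truth and Consensus pillars reinforce each other.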

CITE approach

PACT diagnoses the gap. CITE gives the operating sequence.

C · Clarify

Define the truth set.

Define the company, category, buyer, service, market, and core claims in language humans and machines can understand.

I · Integrate

Align the evidence layer.

Align the website, schema, service pages, founder profiles, directories, and third-party sources around the same truth set.

T · Third-party Proof

Prioritize credible corroboration.

Identify where external authority is missing and prioritize sources that can support the brand’s claims.

E · Evaluate

Retest the answers.

Track whether visibility, accuracy, citation quality, and competitor positioning are improving over time.

Audit and proof standard

PrimaryCite evaluates recurring patterns, not isolated screenshots.

Audit inputs

  • Prompt testing across major AI engines
  • Competitor and category comparison
  • AI answer screenshot and citation review
  • Website, schema, metadata, and entity review
  • Third-party authority mapping
  • Weighted PACT scoring and remediation roadmap

Before-and-after evidence

  • Before: prompts, engines, brand presence, competitors, source reliance, and initial PACT Score
  • Intervention: truth-set corrections, answer blocks, structural improvements, authority-gap remediation, and source-consensus fixes
  • After: retested prompts, improved presence, description accuracy, citation fidelity, competitor position, and score movement

Where a larger audit is required, PrimaryCite uses a structured buyer-intent prompt corpus to reduce one-off prompt bias. Case studies may be public, anonymized, or clearly labeled as demonstration analyses. PrimaryCite does not present fictional clients as real clients.

PrimaryCite does not sell false certainty. It sells measurable citation readiness.

AI engines are independent systems. No outside consultant can directly control what ChatGPT, Claude, Gemini, Perplexity, Google AI Overviews, or any other answer engine decides to include in every answer.

Observable outputs

  • Baseline PACT Score
  • Prompt-level visibility checks
  • Competitor comparison
  • Description accuracy
  • Citation fidelity

Improved inputs

  • Clearer entity signals
  • Stronger source consensus
  • Better structured content
  • Credible third-party proof
  • Cleaner machine-readable truth set