Rising Transfers · Methodology
How AI Generates Football Verdict Cards from Per-90 Data and Match Events
Rising Transfers Make is an AI tool that generates shareable football verdict cards from any player photograph in 8 to 180 seconds. The pipeline runs five sequential AI stages: vision recognition identifies the player and team, entity resolution maps them to the Rising Transfers database of 56,883 players, topical-event detection finds the most newsworthy recent match events, an angle template library selects the strongest narrative angle (deadlock-broken, drought-ended, season-leading contribution, etc.), and finally GPT-Image-2 renders the card across 21 visual style presets. The result is a verdict card with the take headline, fixture banner ("UCL · ARS 1 — 0 ATM · MAY 5"), per-90 statistical ribbon, and scoreboard context — all derived automatically from publicly available match data, with no manual data entry required from the user.
Why Football Takes Need Receipts
Football fans post takes every day on Twitter, Reddit, and group chats. "Saka cooked Atlético". "Mbappé carries Real Madrid". "Bellingham overrated for the price". Each take is a strong opinion delivered without supporting evidence — and within hours it gets buried by the next take, regardless of whether it was right or wrong. The take itself is a piece of social currency that has zero recoverable value: not searchable, not citable, not visually attached to the data that justified it.
The information layer of football discourse is enormously rich (per-90 statistics, expected goals models, scoreboard data, fixture context, DNA-style player similarity scores), but almost none of it makes it into the takes that fans actually share. The gap between "what the data says" and "what fans post" is exactly the gap that kills the credibility of the take economy. A take with receipts looks fundamentally different from a take without — but most fans have no easy way to attach receipts to their opinion in the moment.
Rising Transfers Make closes this gap. Instead of asking the user to gather statistics, find the right scoreboard, choose a visual style, and write a structured caption — five separate manual steps that almost no one will do — the system does each of those steps automatically from a single photograph upload. The user keeps the take itself ("Banger." / "Cooked." / "Class apart."); everything else is generated.
The Five Stages of AI Verdict Card Generation
A verdict card looks simple in the final output: one image, one headline, a row of statistics, a small fixture banner. The pipeline that generates it runs five distinct AI stages, each handling a problem that would be slow or unreliable for a human to solve manually. The whole sequence completes in 8 seconds (data card path) to 180 seconds (full image generation path with safety retries).
Stage 1 — Vision Recognition
A user uploads any photograph of a footballer. The first AI stage runs a vision model on the image to identify who is in it: jersey colours, badge logos, body language, stadium signals, and (when available) facial recognition all contribute to a probability-weighted detection. The output is a structured candidate list — typically 1 to 3 players the photo might be of, ranked by confidence. Most uploads return a top candidate with 95%+ confidence; ambiguous photos surface alternatives that the user can resolve manually.
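The candidate-list handoff from this stage can be sketched as a small data structure plus an auto-accept threshold. This is an illustrative Python sketch, not the production code; the class name, field names, and the 0.90 threshold are assumptions.

```python
from dataclasses import dataclass

@dataclass
class VisionCandidate:
    name: str          # display name returned by the vision model
    confidence: float  # probability weight in [0, 1]

AUTO_ACCEPT = 0.90  # assumed auto-accept threshold

def resolve_candidates(candidates: list) -> "VisionCandidate | None":
    """Return the top candidate if it clears the threshold, else None,
    signalling that the UI should ask the user to disambiguate."""
    ranked = sorted(candidates, key=lambda c: c.confidence, reverse=True)
    if ranked and ranked[0].confidence >= AUTO_ACCEPT:
        return ranked[0]
    return None

# A confident detection auto-accepts...
print(resolve_candidates([VisionCandidate("Bukayo Saka", 0.99)]).name)
# ...an ambiguous one falls back to manual selection (returns None).
print(resolve_candidates([VisionCandidate("B. Saka", 0.55),
                          VisionCandidate("G. Martinelli", 0.41)]))
```

The key design point is that low-confidence results are never silently accepted; they become a one-click disambiguation prompt instead.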
Stage 2 — Entity Resolution
The vision output ("Bukayo Saka") is a string. To do anything useful with it, the system has to map that string to the actual database record (player_id 16827155, team_id 19, slug "b-saka"). Entity resolution handles the messy translation: dealing with diacritics, common-name vs full-name variants, multi-token surnames, and players who share names. This stage runs against the dim_players table (56,883 rows) and uses a fuzzy match with a confidence threshold below which the player is treated as not-found. A failed entity match is the single most common reason a generation cannot proceed automatically — the user is then offered manual subject selection.
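The normalization problems this stage handles (diacritics, casing, fuzzy similarity with a cutoff) can be sketched with the Python standard library. The 0.85 threshold, the roster shape, and Martinelli's id are illustrative assumptions; only Saka's player_id 16827155 comes from the article.

```python
import unicodedata
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Strip diacritics and case so 'Atlético' matches 'atletico'."""
    decomposed = unicodedata.normalize("NFKD", name)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch)).lower()

def match_player(detected: str, roster: dict, threshold: float = 0.85):
    """Return (player_id, score) for the best fuzzy match, or None if
    nothing clears the threshold (the not-found path described above)."""
    best_id, best_score = None, 0.0
    for player_id, full_name in roster.items():
        score = SequenceMatcher(None, normalize(detected), normalize(full_name)).ratio()
        if score > best_score:
            best_id, best_score = player_id, score
    return (best_id, best_score) if best_score >= threshold else None

roster = {16827155: "Bukayo Saka", 19100000: "Gabriel Martinelli"}  # second id is illustrative
print(match_player("bukayo saka", roster))  # exact match after normalization
print(match_player("Zlatan", roster))       # below threshold -> None, manual selection
```

A real implementation would also handle shared names and multi-token surnames, which a plain similarity ratio does not capture.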
Stage 3 — Topical Event Detection
Once the player is resolved, the system asks "what is newsworthy about this player right now?". A library of detectors processes the player's recent match data — goals, red cards, milestones, drought-breaking, late winners, partnership outputs — and returns a ranked list of topical events from the last 30-90 days. Each event is scored on drama (how exciting), recency (how fresh), competition weight (UCL knockout vs lower-league), and rarity (a milestone is rarer than a routine goal). The user is shown the top three events as cards and picks the one their take is about. If no recent goals were found (for example, a player who is injured or suspended), the panel surfaces a hint — "no goals in 60 days" — explaining why the list looks sparse.
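A minimal sketch of this kind of multi-dimensional event scoring, assuming illustrative weights and a linear 90-day recency decay (the production weights are not public):

```python
from datetime import date

# Illustrative competition weights; the real table is not public.
COMPETITION_WEIGHT = {"UCL knockout": 1.0, "Premier League": 0.7, "lower league": 0.4}

def score_event(drama: float, event_date: date, competition: str,
                rarity: float, today: date) -> float:
    """Combine the four dimensions (drama, recency, competition weight,
    rarity) into one rank score. Recency decays to zero over 90 days."""
    age_days = (today - event_date).days
    recency = max(0.0, 1.0 - age_days / 90)
    comp = COMPETITION_WEIGHT.get(competition, 0.5)
    return 0.35 * drama + 0.25 * recency + 0.25 * comp + 0.15 * rarity

today = date(2026, 5, 6)
ucl = score_event(0.9, date(2026, 5, 5), "UCL knockout", 0.6, today)
pl = score_event(0.5, date(2026, 4, 15), "Premier League", 0.2, today)
print(ucl > pl)  # the fresh UCL deadlock-breaker outranks the routine league goal
```

Whatever the exact weights, the shape is the same: a fresh, dramatic goal in a heavily weighted competition dominates the ranking, which is why the worked example below surfaces the UCL strike first.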
Stage 4 — Angle Template Selection
A goal is just a goal. An "angle" is the specific story the data tells: "broke a 0-0 deadlock at minute 29", "ended a 7-game drought against the opposition", "8G + 19A in Premier League this season — 27 contributions". Rising Transfers maintains 14 angle templates organised across three tiers (Tier-A event-anchored, Tier-B season-anchored, Tier-C historical compare). For each topical event, all 14 templates evaluate their trigger conditions against the player's data; templates that pass output a structured angle with a hero number ("29'", "8G + 19A"), a caption, an anchor sentence, and a multi-dimensional score. The system picks the highest-scoring angle and uses its anchor as the verdict take, while routing the angle's data fields into the card's ribbon overlay.
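One such trigger, the deadlock-broken template, reduces to a simple condition on the pre- and post-goal score. The following is a hedged Python sketch: the function signature and field names are assumptions, while the hero value, caption, and anchor strings mirror the article's Saka example.

```python
def clutch_tie_break(pre_score: tuple, post_score: tuple, minute: int):
    """Trigger sketch for the deadlock-broken angle: the score was level
    before the goal and the goal put one side ahead."""
    if pre_score[0] == pre_score[1] and post_score[0] != post_score[1]:
        return {
            "hero_value": f"{minute}'",
            "hero_caption": "DEADLOCK BROKEN",
            "anchor": f"Broke {pre_score[0]}-{pre_score[1]} deadlock at {minute}'",
        }
    return None  # trigger conditions not met; another template may still fire

angle = clutch_tie_break(pre_score=(0, 0), post_score=(1, 0), minute=44)
print(angle["anchor"])  # Broke 0-0 deadlock at 44'
```

Because every template is a deterministic condition over database fields, an angle either fires or it does not; there is no free-form text generation at this stage.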
Stage 5 — Visual Generation
The final stage takes the structured inputs (player image, take headline, fixture banner, ribbon stats) and renders them into a final shareable card. Two pipelines exist: a fast data-card path (~3 seconds, SVG composition) and an image-generation path (~60-180 seconds, GPT-Image-2 img2img across 21 visual style presets — editorial cover, cyberpunk, FIFA viral, polaroid, studio Ghibli, oil painting, and more). The image-generation path applies six axes of variation (layout, typography, color tone, decoration, background treatment, data form) sampled deterministically from a user+take seed, so the same user generating the same take twice gets a stable look but different users get visually distinct outputs even on identical takes. The completed card is uploaded to public Storage, registered in the verdicts database with a SHA-256 fingerprint of the source image (used for Wall deduplication), and surfaced at /verdict/{slug} as a permanent URL.
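The deterministic six-axis sampling can be sketched by hashing the user+take pair into a seed. The axis names come from the text above; the option pools and the hashing details are assumptions.

```python
import hashlib
import random

AXES = {  # simplified option pools per axis; the real pools are larger
    "layout": ["landscape", "portrait", "square"],
    "typography": ["condensed", "serif", "display"],
    "color_tone": ["warm", "cool", "neon"],
    "decoration": ["none", "grain", "halftone"],
    "background": ["stadium blur", "flat", "gradient"],
    "data_form": ["ribbon", "stacked", "radial"],
}

def sample_style(user_id: str, take: str) -> dict:
    """Derive a stable seed from the user+take pair, then sample one
    option per axis. The same (user, take) always yields the same look."""
    digest = hashlib.sha256(f"{user_id}|{take}".encode()).digest()
    rng = random.Random(int.from_bytes(digest[:8], "big"))
    return {axis: rng.choice(options) for axis, options in AXES.items()}

a = sample_style("user-42", "Banger.")
b = sample_style("user-42", "Banger.")
c = sample_style("user-99", "Banger.")
print(a == b)  # stable for the same user and take
print(a == c)  # different users usually diverge, even on identical takes
```

Seeding from a hash rather than from wall-clock randomness is what makes regeneration reproducible per user without making all users' cards look alike.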
Example: Saka vs Atlético Madrid, UCL Semi-Final
Take a real example: on May 5, 2026, the night Arsenal beat Atlético Madrid 1-0 in the UCL semi-final, a fan uploads a photo of Bukayo Saka celebrating. The user types "Banger." into the take field and clicks Generate.
Stage 1 (vision) identifies Saka with 99% confidence — Arsenal jersey, badge, body language, and stadium signals all line up. Stage 2 (entity resolution) maps "Bukayo Saka" to player_id 16827155 in the database. Stage 3 (event detection) processes Saka's last 30 days of match events and surfaces three goals: the 5/5 ARS 1-0 ATM strike at minute 44, a 5/2 goal vs Fulham, and a 4/15 goal at Brighton. The 5/5 strike scores highest because UCL knockout has higher competition weight and the goal was the deadlock-breaker.
Stage 4 (angle templates) evaluates 14 templates. Most do not trigger: Saka's 7G/5A is solid but not "top of cohort", at 24 he falls outside the rarity age bands, and career goal #51 is not a strict milestone. But clutch_tie_break triggers cleanly: the pre-goal score from the same fixture's prior events was 0-0, this goal made it 1-0, and the home team scored. The angle output is hero_value="44'", hero_caption="DEADLOCK BROKEN", anchor="Broke 0-0 deadlock at 44'". Because the user kept their original take "Banger.", the system uses Saka's name plus "Banger." as the headline, while the fixture banner is autogenerated as "UCL · ARS 1 — 0 ATM · MAY 5" and the ribbon stats include 1G THIS MATCH, 44' MINUTE, 7G 5A SEASON, 29 APPS, 7.24 RATING.
Stage 5 (image generation) is auto-styled to fifa_viral landscape (the auto-style picker chose this from the goal event_type and a stable user seed). GPT-Image-2 renders the card with the saved photo as base, the fixture banner across the top, "BANGER." as the dominant headline, the player photo preserved unchanged, and the ribbon overlaid as a horizontal data strip across the bottom. ~120 seconds later, the card is in the user's hand at risingtransfers.com/verdict/b-saka-banger-jf2t — a permanent URL that can be shared anywhere, with full per-90 receipts attached.
This is the entire flow: from one photograph to a take card with structured data, fixture context, and visual identity, in under three minutes. No manual scoreboard lookup. No statistics input. No design work. The AI pipeline does five jobs the user never has to think about, and the user keeps full ownership of the take itself.
Frequently Asked Questions
How does AI generate a football verdict card from just a photo?
A five-stage AI pipeline does the work: vision recognition identifies the player from the photo, entity resolution maps them to a player database with per-90 statistics, topical event detection finds their recent newsworthy match events (goals, milestones, drought-breaking), an angle template selects the strongest narrative ("broke 0-0 deadlock at 44'", "8G + 19A this season"), and GPT-Image-2 renders the final visual card across 21 style presets. The user only provides the photo and an optional one-word take ("Banger." / "Cooked."); everything else is generated automatically from public match data.
What data does the AI use to build the card?
The AI draws on multiple data sources: per-90 player statistics (goals, assists, expected goals, key passes, defensive actions), match event data (goal timing, scoreboard at moment of goal, opposition team), fixture metadata (league, kickoff time, FT/HT state), historical career data (drought lengths, milestone proximity, first-time-vs-opposition), and DNA-style profile vectors (for similar-player comparison cards). All data is derived from public match-event sources; no proprietary information is used.
Why are some cards generated in 8 seconds while others take 3 minutes?
Two pipelines exist for different output styles. The data-card path (SVG-composited templates with charts and stat tables) renders in ~3-8 seconds because it is direct visual composition without an image model. The image-generation path uses GPT-Image-2 img2img to reimagine the photo across 21 visual style presets — this takes 60-180 seconds and includes a safety-refusal retry pass (about 17% of img2img calls are initially refused; because refusals are not deterministic, a retry with the same seed often succeeds). The user can choose either pipeline; the system also auto-picks based on the chosen style preset.
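The retry pass can be sketched as a small loop: the seed, and therefore the style, stays fixed across attempts, while the model's safety refusal varies, so re-submitting the same request often succeeds. A Python sketch with a mock renderer standing in for the image-model call:

```python
def generate_with_retry(render, max_attempts: int = 3):
    """Re-invoke the renderer until it returns a result or attempts run
    out. None from render() stands in for a safety refusal."""
    for attempt in range(1, max_attempts + 1):
        result = render()
        if result is not None:
            return result, attempt
    return None, max_attempts

# Mock: the first call is refused, the retry succeeds.
responses = iter([None, "card.png"])
result, attempts = generate_with_retry(lambda: next(responses))
print(result, attempts)  # card.png 2
```

The function names and the three-attempt cap are illustrative; the point is only that a bounded retry converts most transient refusals into successful renders without changing the card's look.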
How accurate is the AI player recognition from photos?
The vision recognition stage typically returns the correct top candidate with 90%+ confidence on photos with visible jerseys and badges. Confidence drops on tightly-cropped close-up photos, photos in unusual kit (third kit, retro shirts), or photos where the player's face is partially obscured. When confidence is below the auto-accept threshold, the system surfaces 2-3 candidates ranked by score and asks the user to pick — a manual disambiguation that takes one click. False positives (the system confidently identifying the wrong player) are rare but possible; the user can override the detected player at any point in the flow.
What happens if the player has no recent goals or events?
If the topical event detector finds no goals in the last 30-90 days, the panel surfaces a hint: "Latest goal was X days ago. Make currently surfaces goal-type events only." The user can still proceed by choosing a season-summary card (showing aggregate stats) or by switching the subject from the individual player to their team — team-event cards (cup progression, derby win, season streak) work even when an individual player has been quiet. This is by design: the system errs toward "show what is real" rather than fabricating a take that is not supported by recent data.
Can I generate a card for any player, or only top players?
Cards can be generated for any player in the Rising Transfers database (56,883 active players across European competitions). For top players (high market value, frequent matches), the topical event detector returns rich options across multiple recent fixtures. For depth-squad players or players in lower-traffic leagues, the event pool may be sparse — but the player can still be picked as the subject and a season-summary or evergreen card can be generated. The image-generation pipeline does not differentiate by player popularity; the same 21 visual styles are available regardless of subject.
Does the AI invent statistics or make up the take content?
No. Every statistic on a Rising Transfers verdict card is derived from a real match-event database. The fixture banner ("UCL · ARS 1 — 0 ATM · MAY 5") is sourced from dim_fixtures, ribbon stats from fact_match_events and fact_player_stats_seasonal, angle anchors from algorithmic templates with deterministic trigger conditions. The take headline is the user's own input, optionally seeded by an angle template anchor that is itself a structured composition of database fields ("Broke 0-0 deadlock at 44'") rather than a free-form generated sentence. The image is rendered by GPT-Image-2, but the textual content overlaid on the image is structured data — not generated prose.
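The banner itself is simple string assembly over those fixture fields. A sketch with hypothetical parameter names; the output format matches the banner shown throughout the article, and the `%b` month abbreviation assumes an English locale.

```python
from datetime import date

def fixture_banner(competition: str, home: str, home_goals: int,
                   away: str, away_goals: int, kickoff: date) -> str:
    """Compose the fixture banner from structured database fields,
    e.g. 'UCL · ARS 1 — 0 ATM · MAY 5'."""
    month_day = f"{kickoff.strftime('%b').upper()} {kickoff.day}"  # English locale assumed
    return f"{competition} · {home} {home_goals} — {away_goals} {away} · {month_day}"

print(fixture_banner("UCL", "ARS", 1, "ATM", 0, date(2026, 5, 5)))
# UCL · ARS 1 — 0 ATM · MAY 5
```

Because every token in the banner is a database field, the banner can never claim a scoreline or date that the fixture record does not contain.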
Try generating a verdict card on any player.
See it in action
Try the tool, not just the theory
Every metric explained above is live on Rising Transfers. Run a real transfer rumour through the Lie Detector, or find players with a matching DNA profile — free, no account needed.