Methodology: Value Creation Score
What this score is
The Value Creation Score is a 0–100 composite that measures how much research you are doing on the platform — not how much money you have, not how well your portfolio performs, and not whether your picks beat the market. It is a measure of research practice: the breadth of companies you study, the depth of analysis on each, your learning activity, the cadence of your visits, and the community activity you generate through AI conversations and annotations.
We score the practice because it is the thing you control. The price of a stock and the return of a portfolio are largely outside your hands; the discipline of doing the work is not.
1. The five components and their weights
| Component | Weight | What it measures |
|---|---|---|
| Research Breadth | 25% | Distinct tickers you have looked at in the last 90 days |
| Research Depth | 25% | Average distinct feature endpoints (DCF, fundamentals, insiders, sensitivity, etc.) used per ticker |
| Education | 20% | LEARN modules + glossary + practice + quiz activity |
| Consistency | 15% | Active days in the last 90 (60% weight) combined with current streak (40% weight) |
| Community | 15% | AI conversations, annotations, and share-card activity |
Weights sum to 1.00. The split mirrors the platform’s pedagogy: Breadth and Depth are weighted equally because a great analyst needs both range and focus; Education sits one rung below because consuming material is necessary but not the same as practicing on it; Consistency and Community are the smaller two because they multiply the others rather than replace them.
The constants live in services/value_score.py as _WEIGHT_BREADTH, _WEIGHT_DEPTH, _WEIGHT_EDUCATION, _WEIGHT_CONSISTENCY, and _WEIGHT_COMMUNITY. Changing any of them requires a changelog entry below.
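As a minimal sketch of how the weights combine, assuming the composite is the plain weighted sum the table implies (the function and its signature are illustrative, not the shipped code; the Consistency input uses the 60/40 blend noted in the table):

```python
# A sketch, not the shipped implementation: the composite as a weighted sum
# of 0-100 sub-scores. Constant names mirror services/value_score.py.
_WEIGHT_BREADTH = 0.25
_WEIGHT_DEPTH = 0.25
_WEIGHT_EDUCATION = 0.20
_WEIGHT_CONSISTENCY = 0.15
_WEIGHT_COMMUNITY = 0.15


def composite_score(breadth: float, depth: float, education: float,
                    active_days: float, streak: float, community: float) -> float:
    """Weighted sum of 0-100 sub-scores; the result is also 0-100."""
    # Consistency blends its two inputs 60/40 before the component weight applies.
    consistency = 0.60 * active_days + 0.40 * streak
    return (_WEIGHT_BREADTH * breadth
            + _WEIGHT_DEPTH * depth
            + _WEIGHT_EDUCATION * education
            + _WEIGHT_CONSISTENCY * consistency
            + _WEIGHT_COMMUNITY * community)
```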
2. Normalization curve (the same one for every component)
Each component starts as a raw count — e.g., 27 unique tickers. To produce a 0–100 sub-score, the raw count is normalized against a reference ceiling using a soft-log curve. Early progress feels rewarding (the curve rises quickly at low counts) and a perfect sub-score remains attainable (the curve reaches exactly 100 at the ceiling rather than approaching it asymptotically), but the marginal sub-score per additional unit decreases as you approach the ceiling.
The formula:
sub_score = min(100, 100 * log(1 + (count / ceiling) * (e - 1)))
where e is Euler’s number. At count = ceiling, the formula yields exactly 100. At count = 0, it yields 0. The curve is concave, so the first ticker you study moves the score more than the eightieth.
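As a runnable sketch (the function name normalize is illustrative, not necessarily what services/value_score.py calls it):

```python
import math


def normalize(count: float, ceiling: float) -> float:
    """Soft-log curve: 0 at count=0, exactly 100 at count=ceiling, capped at 100."""
    return min(100.0, 100.0 * math.log(1 + (count / ceiling) * (math.e - 1)))


print(round(normalize(27, 100), 1))  # 38.1: 27% of the ceiling already earns ~38 points
print(round(normalize(0, 100), 1), round(normalize(100, 100), 1))  # 0.0 100.0
```

The worked example makes the concavity concrete: reaching 27% of the ceiling earns roughly 38 points, well ahead of the 27 a linear curve would give.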
3. Reference ceilings
| Component | Ceiling (= 100% sub-score) |
|---|---|
| Unique tickers (Breadth) | 100 in 90 days |
| Average features per ticker (Depth) | 8 distinct feature endpoints, out of ~12 available |
| Education actions (Education) | 50 actions in 90 days |
| Active days (Consistency, 60% weight) | 30 active days in 90 |
| Current streak (Consistency, 40% weight) | 14 consecutive days |
| Community actions (Community) | 30 actions in 90 days |
The ceilings live in services/value_score.py as _CEIL_TICKERS, _CEIL_FEATURES, _CEIL_EDUCATION, _CEIL_ACTIVE_DAYS, _CEIL_STREAK, _CEIL_COMMUNITY. They are deliberately ambitious — few users will hit the ceiling on every component, so there is always room to improve.
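Putting the ceilings together with the normalize and composite_score sketches above, one invented quarter of activity works out like this (every count below is hypothetical):

```python
# Ceiling constants from the table above (names mirror services/value_score.py);
# all activity counts are invented for illustration.
_CEIL_TICKERS, _CEIL_FEATURES, _CEIL_EDUCATION = 100, 8, 50
_CEIL_ACTIVE_DAYS, _CEIL_STREAK, _CEIL_COMMUNITY = 30, 14, 30

score = composite_score(
    breadth=normalize(27, _CEIL_TICKERS),           # ~38: 27 unique tickers
    depth=normalize(4.0, _CEIL_FEATURES),           # ~62: 4 features per ticker on average
    education=normalize(12, _CEIL_EDUCATION),       # ~35: 12 education actions
    active_days=normalize(21, _CEIL_ACTIVE_DAYS),   # ~79: 21 active days of 90
    streak=normalize(5, _CEIL_STREAK),              # ~48: 5-day current streak
    community=normalize(9, _CEIL_COMMUNITY),        # ~42: 9 community actions
)
print(round(score))  # ~48, which lands in the Growing Investor band defined below
```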
4. Levels
| Score range | Level name |
|---|---|
| 80–100 | Expert Researcher |
| 60–79 | Active Analyst |
| 40–59 | Growing Investor |
| 20–39 | Building Foundation |
| 0–19 | Getting Started |
Level boundaries are integer multiples of 20 by design — round, memorable, and aligned with the brand’s preference for explicit thresholds over algorithmic mystery. The color progression deliberately reads as “gold deepens with mastery” (Building Foundation gray → Growing Investor amber-gold → Expert Researcher brand-gold) rather than as a sentiment-coded scale. The vivid-green color the lower tiers carried before 2026-04-26 was retired in Brand v2 Phase 8 A2 because a green-yellow-red sentiment scale signals “the platform is judging you” rather than “you are progressing.”
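As a sketch, the mapping is a cascade of threshold checks (the function name is illustrative):

```python
def level_for(score: float) -> str:
    """Map a 0-100 composite to its level name; boundaries per the table above."""
    if score >= 80:
        return "Expert Researcher"
    if score >= 60:
        return "Active Analyst"
    if score >= 40:
        return "Growing Investor"
    if score >= 20:
        return "Building Foundation"
    return "Getting Started"
```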
5. Window and trend
The score uses a rolling 90-day window; activity older than 90 days does not contribute. This is the same window the rest of the platform uses for “recent” signals (the insider chip’s default lookback, the data-freshness chips, the screener defaults), and it means, by design, that a user who hasn’t logged in for a quarter starts the next one without a head start.
Alongside the score, we report a trend based on the ratio of the last 30 days’ activity to the prior 30 days’ activity:
- Improving: ratio above 1.15 (recent activity is more than 15% higher than the prior month).
- Declining: ratio below 0.85 (recent activity is more than 15% lower).
- Stable: anything in between.
The 15% band exists because day-to-day variance on small activity counts would otherwise flap the label between Improving and Declining. The trend is recalculated on every score read.
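A sketch of the labeling; note that the page does not specify the zero-prior-month case, so the handling below is this sketch’s assumption:

```python
def trend_label(last_30: int, prior_30: int) -> str:
    """Label month-over-month activity with a 15% dead band around a ratio of 1.0."""
    if prior_30 == 0:
        # Assumption of this sketch: the page does not define the zero-prior case.
        return "Improving" if last_30 > 0 else "Stable"
    ratio = last_30 / prior_30
    if ratio > 1.15:
        return "Improving"
    if ratio < 0.85:
        return "Declining"
    return "Stable"
```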
6. Percentile
If the platform has at least five users with activity in the last 90 days, the percentile compares your unique-ticker count against that cohort. With fewer than five active users (the typical state as of this writing), the percentile falls back to a fixed heuristic table that derives an approximate rank from the score itself:
- Score ≥ 80 → 95th percentile
- Score ≥ 60 → 80th percentile
- Score ≥ 40 → 60th percentile
- Score ≥ 20 → 40th percentile
- Below 20 → 20th percentile
The heuristic is honest about being an approximation: with a cohort this small, a strict percentile would be misleading or would compromise user privacy.
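The fallback table translates directly into a threshold cascade (a sketch; the function name is illustrative):

```python
def fallback_percentile(score: float) -> int:
    """Small-cohort heuristic: derive an approximate percentile from the score."""
    if score >= 80:
        return 95
    if score >= 60:
        return 80
    if score >= 40:
        return 60
    if score >= 20:
        return 40
    return 20
```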
7. Edge cases and what we deliberately do not do
- No portfolio P&L in the score. Returns are noisy in the short run, lucky in the medium run, and don’t correlate cleanly with research practice. We don’t reward beating the market and we don’t punish trailing it.
- No leaderboard. Per the 2026-05-13 F.2 quorum DEFER decision, the score is a private signal to the user, never a public ranking.
- No surprise demotions. The score moves up and down with your activity in the window; we don’t apply punitive deductions for inactivity beyond the natural decay of the 90-day window.
- Anonymous users get an approximate score from client-side localStorage activity counts (unique_tickers, active_days, etc., tracked by static/js/value-score.ts). Authenticated users get a server-side score from api_usage_log, which is the more accurate of the two.
- Lookup credits do not appear in the score. Per SF-MONETIZATION-V3 (2026-05-10), lookup-style endpoints are not metered as “credit” activity; only AI-credit consumption maps to a billing meter. The Value Score is also not tied to billing in any direction.
8. Changelog
| Date | Change |
|---|---|
| 2026-05-13 | Initial publication of this methodology page. Weights, ceilings, and levels mirror services/value_score.py as of this date. |
| 2026-05-06 | Initiative 1 inline-SQL promotion: read queries moved to typed helpers in pg_db/queries/value_score.py. Same SQL, same numerical output. |
| 2026-04-29 | Re-homed implementation from routes/routes_valuescore.py to services/value_score.py (S56 Initiative 5 deprecation-policy stage 3 → 4 progression). No equation change; structural move only. |
| 2026-04-26 | Brand v2 Phase 8 A2: retired the vivid-green color (#2dd4a0) on Growing Investor; level colors now read as gold-deepens-with-mastery rather than green-red sentiment. No score-equation change. |
Source code and references
Authoritative source for the equation: services/value_score.py (composite + weights + ceilings + levels), static/js/value-score.ts (client-side activity tracking), pg_db/queries/value_score.py (database reads), routes/routes_valuescore_fastapi.py (the GET /api/value-score endpoint).
This page is subject to and should be read alongside the Editorial Standards. Corrections: editorial@oxfordledge.com.