New Metric · March 2026

zTS: Scoring Efficiency and Playtype Difficulty

Why playtype-adjusted scoring efficiency better tracks team shooting impact.

Scoring efficiency statistics have evolved through a series of refinements.

Early metrics like FG% were simple but incomplete. They told us how often a player made a shot, but not what those shots were worth.

eFG% corrected that by accounting for the extra value of three-point shots.

But eFG% still ignored free throws, which are a central part of scoring. TS% addressed that by capturing the full arithmetic of scoring possessions — twos, threes, and free throws.

The next step was relative efficiency. Because scoring environments change from season to season, rTS measures efficiency relative to the league average.

Each refinement solved a real limitation of the previous metric.

But one limitation remains.

All of these statistics treat a player's scoring possessions as if they were drawn from the same environment.

They are not.

True shot attempts (TSA) are defined as field goal attempts plus free throw possessions — the same scoring-attempt denominator used in TS%. True shot attempts per 100 possessions measure the number of shot-ending possessions a player uses; throughout this piece, I will refer to that as scoring usage. It closely tracks traditional usage rate, but isolates shot-ending possessions only.
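
For readers who prefer the arithmetic spelled out, here is a minimal sketch of those definitions in Python. It assumes the common 0.44 approximation for converting free throw attempts into shot-ending free-throw possessions; the box-score inputs are hypothetical.

```python
# Sketch of the scoring-usage and efficiency definitions above.
# Assumes the common 0.44 coefficient for estimating free-throw
# possessions from free throw attempts; inputs are hypothetical.

def true_shot_attempts(fga: float, fta: float) -> float:
    """Field goal attempts plus estimated free-throw possessions (TSA)."""
    return fga + 0.44 * fta

def ts_pct(pts: float, fga: float, fta: float) -> float:
    """True shooting percentage: points per two true shot attempts."""
    return pts / (2 * true_shot_attempts(fga, fta))

def scoring_usage(fga: float, fta: float, possessions: float) -> float:
    """True shot attempts per 100 offensive possessions while on the floor."""
    return 100 * true_shot_attempts(fga, fta) / possessions

def rel_ts(player_ts: float, league_ts: float) -> float:
    """Relative TS (rTS): player TS% minus league TS%, in TS points."""
    return 100 * (player_ts - league_ts)

# Hypothetical season line: 1,800 points on 1,300 FGA and 600 FTA
# across 5,500 offensive possessions, in a 57.7 TS% league.
ts = ts_pct(1800, 1300, 600)
print(round(scoring_usage(1300, 600, 5500), 1))  # scoring usage per 100
print(round(rel_ts(ts, 0.577), 1))               # rTS in TS points
```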

Observed efficiency is how efficiently a player turned his scoring usage into points.

When we say a player was +4 relative to league average, we mean he scored more efficiently per true shot attempt than the league average. That is what relative efficiency, or rTS, measures.

A good scoring-efficiency stat should provide a reliable signal for how individual scoring efficiency translates into team-level scoring efficiency.

But observed efficiency combines two different signals.

A player's observed efficiency reflects both how well he scored and the kinds of possessions he was asked to score on. A player who shifts into easier finishing opportunities can see his TS% rise without becoming a better scorer. Another player can take on more creation responsibility and see his observed efficiency fall even if the underlying quality of his scoring barely changes.

Traditional efficiency statistics preserve the results of those possessions. They do not preserve the role context that produced them.

This matters because scoring roles are not equal.

In 2026, creation playtypes sat in the low-50s TS range, while finishing and transition playtypes were far more efficient, above 60 TS.

So when two players post identical efficiency while drawing from very different playtype mixes, they have not solved the same scoring problem.

Raw TS% cannot see that difference.

This is the problem zTS addresses.

zTS makes role explicit by looking at a player's playtype mix and asking a simple question: what would an average scorer produce on this exact mix?

That expectation defines the role itself.

If the expected TS is low, the role was more difficult. If it is high, the role was easier.

zTS adjusts relative efficiency for role.

Here are the 2026 league-average TS% marks for the underlying Synergy playtypes.

2026 league-average TS% by playtype (relative TS shown against the 57.7 league average):

Playtype     TS%    rTS
Isolation    51.3   -6.4
Handler      52.3   -5.4
OffScreen    52.6   -5.1
Handoff      53.4   -4.3
Spotup       55.1   -2.6
Postup       56.9   -0.8
RollMan      59.3   +1.6
Putback      60.2   +2.5
Misc         63.0   +5.3
Transition   64.4   +6.7
Cut          69.2   +11.5

Creation playtypes sit in the low 50s, spot-up spacing in the mid 50s, and finishing and transition playtypes well above 60.

The calculation is simple:

Expected TS

Expected TS = Σ(playtype share × league TS for that playtype)

Playtype Difficulty (Role)

Playtype Difficulty = League TS − Expected TS

Harder role = positive difficulty, so it adds to rTS.

zTS = rTS + Playtype Difficulty
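
A minimal sketch of that arithmetic, using the 2026 league baselines from the table above. The playtype shares and the rTS value in the example are hypothetical.

```python
# Minimal sketch of the zTS arithmetic. League playtype baselines are the
# 2026 values from the table above; the player's playtype shares (which
# should sum to 1) and rTS are hypothetical.

LEAGUE_TS_BY_PLAYTYPE = {
    "Isolation": 51.3, "Handler": 52.3, "OffScreen": 52.6, "Handoff": 53.4,
    "Spotup": 55.1, "Postup": 56.9, "RollMan": 59.3, "Putback": 60.2,
    "Misc": 63.0, "Transition": 64.4, "Cut": 69.2,
}
LEAGUE_TS = 57.7

def expected_ts(playtype_shares: dict[str, float]) -> float:
    """Share-weighted league TS on the player's own playtype mix."""
    return sum(share * LEAGUE_TS_BY_PLAYTYPE[pt]
               for pt, share in playtype_shares.items())

def playtype_difficulty(playtype_shares: dict[str, float]) -> float:
    """League TS minus expected TS: positive when the mix is harder than average."""
    return LEAGUE_TS - expected_ts(playtype_shares)

def zts(rts: float, playtype_shares: dict[str, float]) -> float:
    """zTS = rTS + playtype difficulty."""
    return rts + playtype_difficulty(playtype_shares)

# Hypothetical creation-heavy mix:
mix = {"Isolation": 0.20, "Handler": 0.35, "Spotup": 0.15,
       "Transition": 0.15, "Cut": 0.05, "Postup": 0.10}
print(round(playtype_difficulty(mix), 1))  # role difficulty in TS points
print(round(zts(+3.0, mix), 1))            # an rTS of +3.0, adjusted for role
```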

The goal is not to invent a new language for scoring. It is to refine the one that already exists.

Relative TS already gives us a familiar scale for thinking about efficiency relative to the league. zTS keeps that scale intact while adding the missing information about the difficulty of the scoring role.

This is not a shot-quality adjustment. Shot-quality models estimate how makeable the final shots were. The playtype role adjustment is answering a different question: what kind of scoring job the player was handling. It does not grade the quality of each look. It evaluates the scoring environment across the player's playtype mix, which is why it can credit high-burden creators and pull down easy-finishing roles even when the final shots themselves are not especially difficult.

Playtype difficulty is related to scoring usage, but it is not just a dressed-up scoring-usage stat. Across 2022-2026, its correlation (Pearson r) with scoring usage lands around 0.45, and among real scorers it drops closer to 0.35. That is exactly what you would hope to see: connected to scoring usage, but clearly measuring something scoring usage alone does not.

The Calculation in Practice

Edwards and Wembanyama both posted roughly 32 in scoring usage in 2025-26, with nearly identical rTS. By traditional efficiency, their scoring seasons look interchangeable.

But their playtype mixes tell a very different story.

[Chart: playtype mix, the share of each player's scoring usage by playtype, with the hardest scoring environments at the top.]

With the 2026 league average at 57.7 TS, Edwards's playtype-weighted Expected TS comes out to 55.2, while Wembanyama's comes out to 58.5. Playtype Difficulty is league TS minus Expected TS, so Edwards graded at +2.5 (a harder-than-average role) and Wembanyama at -0.8 (an easier one).

The gap between their playtype difficulties is roughly 3.2 points — invisible in raw TS%, but captured by zTS.

Compare Players Side by Side

            Edwards   Wembanyama
Usage       31.8      32.7
rTS         +3.7      +4.0
Role        +2.5      -0.9
zTS         +6.2      +3.1

How Big The Adjustments Are

The real question is whether these adjustments are tiny or meaningful across the player pool.

Positive values mean harder-than-average scoring roles. Negative values mean easier ones.

[Histogram: playtype role-difficulty adjustments to efficiency across the 370-player sample, with position and scoring-usage filters. 42 players sit in hard roles (+2 or more), 73 in easy roles (-2 or less), and 141 in the middle band. The mean adjustment is -0.4 and the median is -0.1.]

Most players still cluster near the middle. The important part is that the tails are real. Plenty of rotation players are living in roles that are meaningfully harder or easier than league average.

Splitting the pool by position shows wide variation within every position group. The same is true for scoring usage: players carrying similar volume can land on opposite ends of the difficulty spectrum depending on how they get their shots.

The Finishing-Big Effect

The adjustment is largest for finishing bigs — players who score almost exclusively on rolls, putbacks, and cuts.

2025-26

Player           USG    rTS     Role   zTS
Rudy Gobert      12.8   +8.2    -5.6   +2.6
Jarrett Allen    20.5   +8.4    -5.4   +3.0
Daniel Gafford   15.5   +10.1   -4.6   +5.5
Dwight Powell    7.9    +12.4   -6.1   +6.3

These players are genuinely efficient scorers.

But they are efficient in the most favorable scoring environments in basketball, which is why their Role numbers are so negative.

Rolls, putbacks, and cuts naturally produce much higher TS%.

zTS adjusts for that.

High-Burden Perimeter Scorers

The adjustment runs in the other direction for perimeter creators.

Player                    USG    rTS    Role   zTS
Shai Gilgeous-Alexander   33.5   +8.5   +2.6   +11.1
Luka Doncic               36.5   +3.8   +2.6   +6.4
Jalen Brunson             31.3   +0.4   +2.5   +2.9
Stephen Curry             31.7   +6.2   +1.5   +7.7
James Harden              26.9   +4.0   +3.4   +7.4

These players are solving harder scoring problems, which shows up directly in the positive Role column.

zTS gives that burden credit.

Turning Efficiency Into Scoring Value

Efficiency alone is not enough to measure scoring value.

Once we know how efficient a player was relative to his role, we still need to know how much scoring usage he actually carried.

That leads to add stats.

An add stat converts a rate stat into a value stat by incorporating scoring usage.

Basketball already has several versions of this idea: FG add, TS add, and similar metrics.

TS add is not a complete model of offensive value. It does not capture playmaking, spacing, or the broader structure of an offense. But that is not its purpose.

Its role is narrower and more precise.

TS add is the arithmetic implied by relative efficiency. It answers a specific question: given a player's efficiency relative to a baseline, and his scoring usage, how many points did he add or subtract?

Because of that, TS add functions as a diagnostic tool. If an efficiency metric is well-specified, scaling it by scoring usage should produce a signal that tracks real offensive impact. If it does not, the issue is usually in the efficiency term itself.

value = scoring usage × relative efficiency

TS add is the scoring-specific version of this idea, expressed on a points scale:

TS add = 2 × scoring usage × relative TS% (per 100 possessions)

The factor of 2 appears because TS% is defined as points per two true shot attempts, so doubling converts efficiency back to a points scale; the per-100 scale comes from scoring usage already being expressed per 100 possessions.
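
As a sketch, here is the same arithmetic in code with a hypothetical player line. The only difference between TS add and zTS add is which efficiency term gets multiplied by scoring usage.

```python
# Sketch of the add-stat arithmetic on a points-per-100 scale.
# scoring_usage is true shot attempts per 100 possessions; efficiency
# terms are in TS points relative to the chosen baseline.

def add_stat(scoring_usage: float, rel_eff: float, baseline: float = 0.0) -> float:
    """Points added per 100 possessions above the baseline efficiency."""
    return 2 * scoring_usage * (rel_eff - baseline) / 100

# Hypothetical player: 30 scoring usage, +4.0 rTS, +2.0 playtype difficulty.
ts_add = add_stat(30, 4.0)          # uses rTS
zts_add = add_stat(30, 4.0 + 2.0)   # uses zTS = rTS + playtype difficulty
print(ts_add, zts_add)              # 2.4 and 3.6 points per 100
```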

The next sections test how well these constructions track lineup-based shooting impact, and which efficiency baseline holds up once the model meets the data.

What We're Testing Against

To know whether any of this actually matters, we need a clean target: a way to measure how much a player moves team shooting when he is on the floor.

That ground truth is oTS RAPM, the offensive shooting slice of six-factor RAPM. RAPM is a lineup-based plus-minus model. In plain English, oTS RAPM estimates how much a player's presence changes his team's points per shot-ending possession, after accounting for teammates and opponents. It is not full ORAPM. It is the shooting piece only.

It is the cleanest lineup-level target for a scoring-efficiency stat. If a scoring-efficiency metric is any good, it should correlate strongly with oTS RAPM.

The Predictive Gap

That gives us a simple test. Take each player's relative efficiency × scoring usage — his add stat — and ask: how well does it predict his oTS RAPM?

Using a 0 baseline across a 2022–2026 regular-season sample with a 5000+ offensive possession cutoff:

Metric     R² vs oTS RAPM
FG add     .050
eFG add    .098
TS add     .257
zTS add    .531

Higher = better predictor of team shooting impact.
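
A sketch of how this comparison can be run. The merged sample and its column names (fg_add, efg_add, ts_add, zts_add, ots_rapm) are hypothetical stand-ins for the data described above.

```python
# Sketch of the predictive-gap test at a 0 baseline. The file and column
# names are hypothetical stand-ins for the merged player-season sample
# (2022-2026 regular seasons, 5000+ offensive possession cutoff).

import numpy as np
import pandas as pd

def r2_vs_target(df: pd.DataFrame, col: str, target: str = "ots_rapm") -> float:
    """Squared Pearson correlation, i.e. R^2 of a one-variable linear fit."""
    return np.corrcoef(df[col], df[target])[0, 1] ** 2

# df = pd.read_csv("player_seasons.csv")  # hypothetical sample
# for col in ["fg_add", "efg_add", "ts_add", "zts_add"]:
#     print(col, round(r2_vs_target(df, col), 3))
```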

FG add and eFG add barely predict anything. At a zero baseline, relative field-goal efficiency times scoring usage tells you almost nothing about a player's real team-shooting impact.

TS add does better by including free throws, but it still explains less than a third of the variance.

Finishing-heavy players are systematically overvalued because their efficiency is inflated by easier roles. High-burden creators are undervalued because the difficulty of their possessions is invisible. Players with higher scoring usage are also pushed down because the baseline is too strict.

zTS add more than doubles the explanatory power — from .257 to .531.

This is the key result.

When you multiply scoring usage by rTS, you are getting a weak and biased signal of scoring value. When you multiply scoring usage by zTS, you are getting a much cleaner estimate of how much a player actually moves his team's shooting efficiency.

In other words:

scoring usage × zTS is dramatically better at predicting a player's impact on team true shooting than scoring usage × rTS.

Not marginally better. Structurally better.

Testing the Model

Before reading the chart, we need one more concept: the baseline.

A 0 baseline assumes scoring value only begins once a player becomes more efficient than league average.

That turns out to be too strict.

Players with high scoring usage are still doing meaningful work even when they sit slightly below league efficiency, because carrying those possessions is itself part of the burden.

So we allow the baseline to move. Instead of forcing the threshold at zero, we ask: where does the model perform best?

Here are the results across baselines from 0 to -10.

Base   FG add   eFG add   TS add   zTS add
0      .038     .086      .257     .531
-1     .049     .112      .295     .559
-2     .061     .141      .332     .582
-3     .074     .172      .366     .598
-4     .088     .204      .396     .609
-5     .102     .237      .423     .616
-6     .117     .269      .446     .619
-7     .133     .299      .466     .620
-8     .148     .327      .481     .618
-9     .163     .352      .494     .615
-10    .177     .374      .503     .611
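
A sketch of the baseline sweep itself, again with hypothetical column names (scoring_usage, rts, zts, ots_rapm). For a one-variable fit, R² is just the squared correlation between the add stat and oTS RAPM.

```python
# Sweeping the baseline from 0 to -10. Column names are hypothetical
# stand-ins for the same player-season sample used above.

import numpy as np
import pandas as pd

def sweep_baselines(df: pd.DataFrame, eff_col: str, lo: int = 0, hi: int = -10) -> dict:
    """R^2 vs oTS RAPM for the add stat built from eff_col at each baseline."""
    out = {}
    for baseline in range(lo, hi - 1, -1):
        add = 2 * df["scoring_usage"] * (df[eff_col] - baseline) / 100
        r = np.corrcoef(add, df["ots_rapm"])[0, 1]
        out[baseline] = round(r ** 2, 3)
    return out

# df = pd.read_csv("player_seasons.csv")  # hypothetical merged sample
# print(sweep_baselines(df, "rts"))       # the TS add column of the table
# print(sweep_baselines(df, "zts"))       # the zTS add column
```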

The takeaway is simple.

As we relax the baseline, every metric improves. That tells us the same thing: scoring usage matters, and a strict league-average cutoff undervalues it.

But the key result is not where each line peaks. It is how much help each metric needs.

TS add needs a large negative baseline to perform well. It has to artificially boost scoring usage to compensate for the fact that its efficiency term is polluted by role.

zTS does not.

At a 0 baseline, with no tuning at all, zTS add already outperforms TS add at its optimal baseline.

In other words:

scoring usage × zTS, without any adjustment, is a better predictor of team shooting impact than scoring usage × rTS even when rTS is given its best possible baseline.

That is the point.

The other models need to be corrected after the fact to recover information they are missing. zTS builds that information into the efficiency term itself.

Two example scoring-value calculations with different efficiency inputs, both at a 0 baseline:

Player A: scoring value = 2 × scoring usage × (+3 − 0) / 100 = +1.5 points per 100
Player B: scoring value = 2 × scoring usage × (+2 − 0) / 100 = +1.2 points per 100

Role Captures What Position Used To

Another interesting result appears once playtype adjustment is included.

Position stops mattering.

A positionally adjusted version of the model never outperforms the plain zTS version.

The playtype mix is already capturing the structural information that position was standing in for:

  • who creates offense
  • who spaces the floor
  • who finishes plays

Once that context is explicit, positional adjustments become unnecessary.

Playtype Difficulty Moves

Playtype difficulty is not static.

It changes over time, often significantly.

A. Reaves, playtype difficulty over five seasons:

Season     rTS     Role    zTS
2021-22    +3.6    -0.1    +3.5
2022-23    +11.3   +0.9    +12.3
2023-24    +3.2    +1.3    +4.4
2024-25    +4.0    +1.8    +5.9
2025-26    +7.9    +2.6    +10.4

That tells us something important. Year-to-year efficiency changes are often not pure talent changes.

They are frequently role changes. The player may have improved. The job may have changed. Very often, both changed at once.

Role Change Shows Up In Efficiency

The next question is whether those role changes actually show up in the efficiency line. They do.

The year-to-year role changes are usually modest, but they are not trivial. Across adjacent seasons, the middle half of rotation players move by about half a point of playtype_diff in either direction, and about one in five shifts by at least one full point. That is large enough to materially move a raw efficiency line.

On adjacent-season samples from 2022→2023, 2023→2024, and 2024→2025, a one-point increase in playtype_diff was associated with roughly a 1.4 to 1.6 point drop in raw rTS. The explanatory power moves around from year to year, but the slope stays in basically the same band.

That matters because year-to-year shooting efficiency is noisy. Random conversion swings, health, and real player improvement all pollute the signal. So for role change alone to explain around 10% of adjacent-season rTS movement, and up to about 17% once the sample is smoothed into rolling two-year windows, is a meaningful result.

In practical terms, climbing into a harder scoring role tends to beat raw efficiency back down. Until the player improves enough to overcome that new burden, the box score often reads the role change as an efficiency decline.
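
For anyone who wants to reproduce the adjacent-season check, here is one way it could be run; the paired-season file and its column names are hypothetical.

```python
# Sketch of the adjacent-season check: regress the change in raw rTS on the
# change in playtype_diff across paired seasons. Column names are hypothetical.

import numpy as np
import pandas as pd

def role_change_fit(pairs: pd.DataFrame) -> tuple[float, float]:
    """Slope and R^2 of (rTS change) ~ (playtype_diff change) across season pairs."""
    x = pairs["playtype_diff_t1"] - pairs["playtype_diff_t0"]
    y = pairs["rts_t1"] - pairs["rts_t0"]
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return slope, 1 - resid.var() / y.var()

# pairs = pd.read_csv("adjacent_season_pairs.csv")  # e.g. 2024 -> 2025 rows
# print(role_change_fit(pairs))  # the text reports a slope near -1.4 to -1.6
```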

What About Team Scheme?

Team playtype mix is shaped by scheme, pace, roster construction, and playmaking. These are the conditions an offense operates in. zTS is about evaluating what individual scorers do within those conditions.

Some teams consistently generate easier mixes. The 2017 Warriors are a clear example, heavy on cuts, transition, and open threes. That profile reflects both system and personnel. It is not an accident.

That raises a natural question:

How should we evaluate the scoring efficiency of a role player in that environment?

A player finishing possessions in an offense that produces a higher share of easier playtypes will tend to operate in more favorable scoring conditions on those possessions. That is exactly what playtype_diff is designed to capture. It adjusts efficiency for the mix of possessions a player actually finished.

But that adjustment operates on the scoring side of the equation. It describes the difficulty of the possessions a player used, not the process that generated them.

At the same time, those possessions aggregate into a team outcome. Team TS% reflects both the mix of opportunities an offense creates and how well those opportunities are converted. Any player-level measure of scoring efficiency has to be interpreted alongside that shared result.

The key is to separate two layers:

  • Generation: how the offense creates its opportunities
  • Conversion: how efficiently those opportunities are turned into points

Team playtype mix lives primarily on the first layer. zTS is designed to operate on the second.

Teams do not choose their mix arbitrarily. They allocate possessions across playtypes based on what their personnel can execute effectively. This creates a natural equilibrium. Constraints on generation and differences in personnel limit how far any team can move its mix.

The scale of variation reflects this.

Team playtype difficulty clusters tightly near zero. Across recent seasons, most teams fall within a narrow band, roughly ±0.7 TS points. Player playtype difficulty spans much wider ranges, from roughly -8 to +3.5.

[Chart: distribution of player vs team playtype_diff, in half-point bins, showing the share of players and teams at each value. Values outside −6…+4 roll into the edge bins.]

Positive playtype_diff means a harder playtype mix, with tougher shot-ending roles on average; negative values mean an easier mix. Sorted by playtype_diff, the hardest team mixes rise to the top: for this season, Boston ranks first.

The variation in role within a team is much larger than the variation between teams. Most of the meaningful spread in playtype difficulty lives at the player level.

At the team level, playtype mix does have a measurable relationship with efficiency, but it is modest. Across team-seasons, a one point shift toward an easier playtype environment corresponds to roughly +0.7 team rTS. At the same time, playtype mix explains only about 2% of the variance in team rTS.

This is the key limitation. Playtype mix describes the types of possessions an offense generates, but it does not strongly determine how well those possessions are converted.

There is an additional complication.

Team playtype mix is not purely external context. In many cases, it is produced by the offense's primary initiators. A player who drives transition, generates rim pressure, or creates assisted looks can shift the entire team toward an easier mix. In those cases, part of what appears to be team environment is actually the result of individual offensive value.

This creates an attribution problem. If a player helps produce an easier environment, how should that be reflected in his evaluation as a scorer?

A natural extension is to center playtype difficulty at the team level. That version measures scoring burden relative to teammates, rather than the league.

But this introduces a tradeoff.

Subtracting team playtype difficulty removes shared context, but it can also remove part of the environment created by the offense's best players. At the same time, difficult team environments can reflect structural limitations rather than individual burden. A player operating in a +1 difficulty offense may be carrying more than the stat suggests, or simply operating in a system that fails to create efficient opportunities.

There is no single correction that resolves both cases cleanly.

The league-centered version of zTS keeps the question narrower. It measures scoring efficiency relative to the types of possessions a player finished, without attempting to fully reassign how those possessions were created.

Team TS% reflects the combined result of generation and conversion. zTS isolates the conversion side, conditioned on role.

Both are necessary. They answer different questions.

The Point

The elegance of zTS comes from the fact that it improves the existing efficiency framework without changing it.

Relative TS already gives us a language for scoring efficiency: how far above or below league average a player scored. But raw efficiency mixes two things together — how well the player scored, and how difficult the possessions he was asked to score on were.

zTS separates those signals.

It estimates what league-average efficiency would look like on the player's actual playtype mix, treats the gap as role difficulty, and adds that adjustment back to rTS. The result stays in the same familiar TS units, but now reflects both scoring performance and playtype difficulty.

In simple terms:

observed efficiency = scoring performance − playtype difficulty

zTS accounts for the second term so the first becomes easier to interpret.

Seen in the broader arc of scoring metrics, the progression is straightforward.

FG% asked whether the shot went in.

eFG% admitted that some shots are worth more than others.

TS% incorporated free throws.

rTS added league context.

zTS adds the difficulty of the scoring role itself.

Not a new language for scoring. Just a more complete version of the one we already use.

What About Shot Quality?

Shot-quality models and the playtype difficulty adjustment are not doing the same job. Shot quality is trying to estimate how makeable the final shots were. The playtype difficulty adjustment is trying to estimate how difficult the scoring role was.

To compare them fairly, we moved to a shot-only target that strips out free throws: oEFG RAPM, a lineup-based estimate of how much a player moves team effective field-goal percentage.

On the Genius Sports side, qSQ is the shot-quality estimate for an average shooter on that exact shot diet. qSI is what the player actually made relative to that expectation: shotmaking over the average shooter's baseline on the same looks.

qSI does add signal. It improves a simple shot-only model in both five-year windows. But the playtype shot-adjustment improves that model more. In 2017-2021, the base model reached a cross-validated R² of about .38, adding qSI lifted it to about .41, and adding the playtype shot-adjustment pushed it to about .51. In 2022-2025, the same ladder was about .41, .47, and .52.

That is the key result. Shotmaking over expected is not empty. But it is narrower. It captures how much the player beat the quality of his shot diet. The playtype difficulty adjustment is capturing more of the scoring job that produced those shots in the first place. And once the playtype term is already in the model, adding qSI does not improve out-of-sample fit in either window.

Why Not PPP?

It is a fair question. Turnovers are a central part of offensive efficiency, so why not build the adjustment directly on points per possession?

A PPP-based version is straightforward to define. For each playtype, compare a player's PPP to the league average for that same playtype, then weight those differences by the player's own possession mix. This produces a playtype-relative PPP — often expressed as Points Over Expected (POE), or POE per 100 possessions.
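
A sketch of that construction. The playtype PPP values below are made up for illustration; the real version would use Synergy possession counts and league baselines.

```python
# Sketch of the PPP-based alternative (POE per 100). For each playtype, the
# player's PPP is compared to the league PPP on that playtype, weighted by
# the player's own possession mix. All inputs here are hypothetical.

def poe_per_100(player: dict[str, tuple[float, float]],
                league_ppp: dict[str, float]) -> float:
    """player maps playtype -> (possessions used, player PPP on that playtype)."""
    total_poss = sum(poss for poss, _ in player.values())
    poe = sum(poss * (ppp - league_ppp[pt]) for pt, (poss, ppp) in player.items())
    return 100 * poe / total_poss

# Hypothetical league and player playtype PPP values:
league = {"Isolation": 0.95, "Handler": 0.92, "Spotup": 1.04, "Transition": 1.18}
player = {"Isolation": (150, 1.02), "Handler": (300, 0.98),
          "Spotup": (120, 1.10), "Transition": (130, 1.25)}
print(round(poe_per_100(player, league), 2))  # points over expected per 100
```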

We tested this approach.

In ORAPM regressions, the PPP-based version held up well. In some specifications, it was essentially neck and neck with zTS. There is no clear empirical failure here.

The distinction is conceptual.

PPP is not a pure scoring-efficiency measure. Its denominator is possessions, and those possessions include turnovers. Once the adjustment is built on PPP, the statistic is no longer isolating scoring burden. It becomes a hybrid of shot-making and turnover outcomes.

That would be acceptable if the turnover signal were clean. It is not.

The playtype data does not cleanly separate non-passing turnovers from passing turnovers, and the totals sit in between. As a result, PPP at the playtype level carries a partial and ambiguous turnover signal. Some of the value being measured is not the cost of failed scoring attempts, but the cost of failed creation.

This creates an interpretation problem. A PPP-based adjustment answers two questions at once:

  • How efficient was the player as a scorer?
  • How often did his possessions end without a shot?

Those are both important, but they are not the same question.

zTS takes the narrower route. It is built from points and true shot attempts, so the object being adjusted is clearly defined: scoring efficiency on shot-ending possessions. Turnover value is modeled separately.

This separation is not just aesthetic. In our testing, once turnovers were explicitly modeled alongside scoring, zTS provided the cleaner signal.

A PPP-based version — or POE per 100 — remains a legitimate alternative, and further refinement of the turnover component is an open area of exploration. But for the purpose of isolating scoring burden, the simpler decomposition is the more reliable one.

A Few More Investigations

The simplest decomposed scoring model we tested was pts100 + ptdiff_pts100 + TSA100, where TSA100 is scoring usage and ptdiff_pts100 turns playtype difficulty into points. On the clean 5000+ possession samples, that model explained about .65 of the variance in oTS RAPM in 2017-2021 and about .63 in 2022-2026. In the earlier window, the playtype-adjusted points term was priced at about 1.40x the coefficient on raw points. In the later window, it was still priced higher, but more modestly, at about 1.17x. The playtype-adjusted burden term carries a clear premium in both windows.
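
A sketch of how that decomposed model could be fit. The file and column names are hypothetical, and reading the premium as the ratio of the ptdiff_pts100 coefficient to the pts100 coefficient is my interpretation of the pricing comparison above.

```python
# Sketch of the decomposed scoring model: oTS RAPM ~ pts100 + ptdiff_pts100 + tsa100.
# Column names are hypothetical stand-ins; the quantity of interest is the
# ratio of the ptdiff_pts100 and pts100 coefficients (the burden "premium").

import pandas as pd
import statsmodels.api as sm

def burden_premium(df: pd.DataFrame) -> float:
    X = sm.add_constant(df[["pts100", "ptdiff_pts100", "tsa100"]])
    fit = sm.OLS(df["ots_rapm"], X).fit()
    return fit.params["ptdiff_pts100"] / fit.params["pts100"]

# df = pd.read_csv("player_seasons_2017_2021.csv")  # hypothetical sample
# print(round(burden_premium(df), 2))  # the text reports roughly 1.40 here
```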

We then tried a more direct model using the estimated lower-level playtype rows themselves. Even there, the compressed playtype_diff version held up better. On the clean 2022-2026 sample, the compact model posted a cross-validated R² of .612; the direct lower-level playtype burden model came in at .536. That is a good sign for the public stat. The compression is not just elegant. It is doing real work.

The current read is that playtype_diff may still be a little conservative in broad all-player windows, but it looks much closer to right once we isolate real scorers. That is where the present version of zTS feels strongest: simple enough to explain, strong enough to hold up, and still open to refinement.

Full Leaderboard

370 players; the first 50 rows are shown here.

Player                     Team   USG    rTS     Role    zTS     Val    z20
Shai Gilgeous-Alexander    OKC    33.5   +8.5    +2.6    +11.1   +8.4   +20.7
Luka Dončić                LAL    36.5   +3.8    +2.6    +6.4    +5.8   +14.2
Kawhi Leonard              LAC    34.5   +5.4    +1.4    +6.9    +5.7   +14.0
Stephen Curry              GSW    31.7   +6.2    +1.5    +7.7    +5.7   +14.0
Nikola Jokić               DEN    27.8   +9.4    -0.6    +8.8    +5.4   +13.4
Giannis Antetokounmpo      MIL    35.6   +7.9    -1.7    +6.2    +5.4   +13.3
Austin Reaves              LAL    25.2   +6.3    +2.5    +8.8    +4.8   +11.9
Kevin Durant               HOU    27.3   +5.9    +1.8    +7.7    +4.7   +11.6
Anthony Edwards            MIN    31.8   +3.7    +2.5    +6.2    +4.7   +11.6
James Harden               CLE    26.9   +4.0    +3.4    +7.4    +4.5   +10.9
Jamal Murray               DEN    28.1   +4.3    +2.4    +6.7    +4.3   +10.6
Donovan Mitchell           CLE    32.2   +3.3    +1.7    +5.0    +4.0   +9.8
Zion Williamson            NOP    26.0   +6.8    -0.2    +6.7    +3.9   +9.6
Jalen Duren                DET    24.5   +10.6   -3.3    +7.3    +3.9   +9.6
Kon Knueppel               CHA    22.6   +6.7    +0.7    +7.4    +3.6   +8.8
Collin Sexton              CHI    25.0   +4.7    +1.2    +5.9    +3.4   +8.2
Keyonte George             UTA    27.4   +3.2    +1.9    +5.1    +3.3   +8.1
Jimmy Butler III           GSW    24.0   +6.7    -0.8    +5.9    +3.2   +7.7
Norman Powell              MIA    27.3   +4.0    +0.8    +4.8    +3.1   +7.6
Cam Spencer                MEM    17.0   +6.9    +2.5    +9.4    +3.1   +7.5
Luke Kennard               LAL    12.9   +12.6   0.0     +12.6   +2.9   +7.1
Victor Wembanyama          SAS    32.7   +4.0    -0.9    +3.1    +2.9   +6.9
Isaiah Joe                 OKC    18.9   +6.8    +0.3    +7.1    +2.7   +6.6
Luka Garza                 BOS    17.9   +11.2   -3.5    +7.7    +2.7   +6.6
Marvin Bagley III          DAL    18.8   +7.4    -0.2    +7.2    +2.7   +6.5
Micah Potter               IND    16.9   +9.8    -1.8    +8.0    +2.6   +6.3
Jalen Brunson              NYK    31.3   +0.4    +2.5    +2.9    +2.6   +6.2
Jaxson Hayes               LAL    13.1   +17.4   -6.4    +11.1   +2.6   +6.2
Devin Booker               PHX    31.6   +0.5    +2.2    +2.7    +2.5   +6.0
Zach LaVine                SAC    24.2   +3.4    +0.9    +4.3    +2.4   +5.8
Deni Avdija                POR    28.6   +1.7    +1.4    +3.2    +2.4   +5.8
Tyler Herro                MIA    26.2   +2.7    +1.1    +3.7    +2.4   +5.8
Michael Porter Jr.         BKN    30.6   +2.3    +0.4    +2.8    +2.4   +5.8
Sam Merrill                CLE    17.9   +6.8    -0.0    +6.8    +2.4   +5.7
Tim Hardaway Jr.           DEN    19.6   +5.3    +0.6    +5.9    +2.4   +5.7
John Collins               LAC    18.7   +7.9    -1.8    +6.1    +2.3   +5.5
Chet Holmgren              OKC    21.9   +6.3    -1.6    +4.7    +2.3   +5.5
Tre Jones                  CHI    18.2   +5.3    +0.9    +6.2    +2.2   +5.4
Jerami Grant               POR    24.0   +3.1    +0.8    +4.0    +2.2   +5.4
Tyrese Maxey               PHI    30.5   +1.0    +1.5    +2.5    +2.2   +5.4
Desmond Bane               ORL    23.3   +2.8    +1.2    +4.0    +2.2   +5.2
Joel Embiid                PHI    34.1   +2.2    -0.5    +1.8    +2.1   +5.1
Robert Williams III        POR    13.1   +14.7   -5.3    +9.3    +2.1   +5.0
DeMar DeRozan              SAC    23.7   +1.4    +2.4    +3.7    +2.1   +5.0
Darius Garland             LAC    25.7   +1.0    +2.2    +3.2    +2.1   +5.0
Lauri Markkanen            UTA    29.7   +3.1    -0.8    +2.3    +2.0   +4.9
Ryan Kalkbrenner           CHA    11.5   +17.4   -6.7    +10.7   +2.0   +4.9
Ayo Dosunmu                MIN    20.2   +5.3    -0.6    +4.8    +2.0   +4.9
Olivier-Maxence Prosper    MEM    18.7   +7.8    -2.3    +5.4    +2.0   +4.9
Trey Murphy III            NOP    23.5   +3.4    +0.3    +3.7    +2.0   +4.9

Written with care about basketball analytics

March 2026

Data sources: Synergy Sports, NBA.com/stats