Common Questions

FAQ

Questions that come up regularly — about who GPCS is for, how the verification model works, and what the ratings actually mean.

Is GPCS for consumers or for the industry?

GPCS is a B2B tool. It is designed for awards bodies, grant programmes, platforms, and publishers that need to make structural decisions about game projects — not for consumers deciding what to buy. A player seeing a GPC rating on a Steam page would be reading it out of context. The intended audience is institutions that currently rely on vague labels to draw eligibility lines.

Is participation voluntary? What stops a studio from lying?

Participation is completely voluntary. No one is required to get a GPC rating. For studios that do participate, the system has three verification levels: Unverified (self-reported, low credibility), Verified (public evidence checked: LinkedIn, company registry, press releases), and Audited (third-party CPA review of confidential materials). Nothing stops a studio from lying on an Unverified self-assessment, but an Unverified rating carries no independent credibility, so no institution should rely on one for anything consequential. The verification layer exists precisely so that anyone acting on a rating can choose a level of scrutiny appropriate to what's at stake.
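
As a rough illustration of how an institution might act on these levels, here is a small sketch in TypeScript. The type names, the tier field, and the helper function are hypothetical, not part of the GPCS specification; they simply show a "require at least this much scrutiny" rule in code.

```typescript
// Hypothetical sketch: the three verification levels as a type, plus a
// rule an institution could apply before acting on a rating. Names are
// illustrative and not defined by the GPCS specification.

type VerificationLevel = "unverified" | "verified" | "audited";

interface GpcRating {
  tier: string;                  // capacity tier, e.g. "C" or "BB"
  verification: VerificationLevel;
}

// Rank the levels so "at least Verified" style policies are easy to state.
const rank: Record<VerificationLevel, number> = {
  unverified: 0,
  verified: 1,
  audited: 2,
};

// Example policy: only act on a rating if its verification meets the
// scrutiny the decision warrants (e.g. "audited" for a large grant).
function meetsScrutiny(rating: GpcRating, minimum: VerificationLevel): boolean {
  return rank[rating.verification] >= rank[minimum];
}
```

Under a rule like this, an awards body might accept Verified submissions while a grant programme disbursing significant funds insists on Audited ones.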

Doesn't a C rating make a small studio look bad?

No. C is not a grade — it is a description. A C-rated project is a solo or micro-team game: exactly the kind of project that wins Independent Games Festival awards, gets critical acclaim, and builds careers. The problem GPCS is solving is that right now a solo dev and a 40-person publisher-backed studio both get called “indie” — which means the solo dev competes directly against the larger team in the same category. A C rating doesn't diminish the work. It puts it in the right company.

Why keep the AAA and indie labels if GPCS is replacing them?

GPCS doesn't ban any label. “AAA” and “indie” are cultural shorthand and they'll keep being used in press, marketing, and conversation. What GPCS adds is a parallel layer of precision for contexts where vague labels cause real problems — grant eligibility, award categories, platform support tiers. The goal isn't to replace everyday language; it's to give institutions something more rigorous to work with when the stakes require it.

Do AI tools count toward team size?

No. Team size counts people, not tools. A solo developer using AI generation software to produce art assets is still a team of one. GPCS measures production capacity as a function of human labour, infrastructure, and financial resources — because those are the inputs that determine what scale of project is actually achievable. A studio that replaces ten artists with one artist and a text-to-image pipeline has meaningfully different capacity than a ten-person art department, regardless of output volume.

Who fills out the rating form?

Typically the developer or someone on their team — a producer, studio director, or operations lead. The form asks about the project specifically, not the studio's entire business. For Unverified ratings, there's no external check. For Verified and Audited ratings, a third party reviews the answers against evidence, so accuracy matters. The form currently runs as a demo — results stay in your browser and aren't submitted anywhere.
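
For what "stays in your browser" means in practice, the sketch below shows one plausible shape: the demo writes the answers to the browser's own storage rather than sending them to a server. The field names and storage key are made up for illustration and are not the actual form's schema.

```typescript
// Minimal sketch of the "stays in your browser" behaviour described above:
// a self-assessment object persisted to localStorage instead of being sent
// over the network. Field names and the storage key are hypothetical.

interface SelfAssessment {
  projectName: string;
  teamSize: number;        // people, not tools
  budgetBand: string;
  resultingTier: string;   // e.g. "C"
  completedAt: string;     // ISO timestamp
}

function saveLocally(assessment: SelfAssessment): void {
  // No network request: the demo uses the browser's own storage.
  localStorage.setItem("gpcs-demo-assessment", JSON.stringify(assessment));
}

function loadLocally(): SelfAssessment | null {
  const raw = localStorage.getItem("gpcs-demo-assessment");
  return raw ? (JSON.parse(raw) as SelfAssessment) : null;
}
```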

Who is GPCS aimed at, exactly?

Three main groups. Awards bodies that need structured categories — replacing vague “indie” divisions with capacity-tiered brackets. Grant programmes that need auditable eligibility definitions — so they can say “this programme is for B/BB projects” and enforce it. Platforms and publishers that want to route projects to the right support tier rather than treating every incoming submission the same way. Developers benefit indirectly: a credible, externally readable rating is a better signal than a press release.
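
To make the grant example concrete, here is a minimal sketch of what "this programme is for B/BB projects" could look like as an enforceable rule. The tier values come from the examples on this page; the helper is hypothetical and would, in practice, be combined with a minimum verification level as in the earlier sketch.

```typescript
// Illustrative only: a grant programme's eligibility line, "this programme
// is for B/BB projects", expressed as a rule it can enforce. Not part of
// the GPCS specification.

const programmeTiers = new Set(["B", "BB"]);

function isEligible(projectTier: string): boolean {
  return programmeTiers.has(projectTier);
}

// Usage: isEligible("C") === false, so a micro-team project is routed to a
// different programme instead of competing against larger teams here.
```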

What happens when the standard reaches critical mass?

The current governance model has Devon Stanton maintaining GPCS with community proposals via GitHub Issues. Once GPCS reaches critical mass — the target is 200+ classified projects and at least two implementation cycles — governance moves to an advisory board with representation from studios, publishers, platforms, awards bodies, and academic researchers. Major changes would require a supermajority vote (7 of 9). Old ratings are grandfathered with a version label, so nothing breaks when the standard evolves.

Still have questions?

The full methodology is in the specification. If something is unclear or you have feedback on the framework, the author welcomes it.