Universal Interrogation Protocol for Parents

Before any AI tool is introduced into a classroom, home, or youth program, adults need a way to cut through marketing language and see what the system actually does. The questions below form a Universal Interrogation Protocol (UIP): a simple, non‑technical way for parents and schools to understand the risks, incentives, and developmental impacts of any AI product. The answers surface what companies rarely disclose, clarify how the tool will shape children’s experiences, and help adults decide whether the system belongs in a child’s environment at all.

SECTION 1

PURPOSE & INTENT

These questions force organizations to articulate what the system is for.

  • What is the system designed to optimize?
  • What problem does it claim to solve, and for whom?
  • What outcomes does the system prioritize when tradeoffs occur?
  • Who benefits if the system succeeds?
  • Who bears the risk if it fails?

SECTION 2

DATA & SURVEILLANCE

These questions expose the system’s data extraction and surveillance layer.

  • What data does the system collect, directly and indirectly?
  • What inferences does it generate about users?
  • Does it store conversation history, embeddings, or behavioral patterns?
  • Who has access to the data, and under what conditions?
  • Can the system be used to profile, track, or categorize individuals?
  • What data is used to fine‑tune or improve the model?

SECTION 3

CHILD IMPACT & DEVELOPMENT

These questions ask how the system actually interacts with children’s development.

  • How does the system respond to children’s emotional vulnerability?
  • Does it mirror, soothe, or shape behavior in ways that could create dependency?
  • How does it handle identity formation, conflict, frustration, or boundary‑testing?
  • What developmental assumptions are built into the system?
  • What safeguards prevent the system from replacing human relationships?
  • How does the system avoid collapsing friction that children need for growth?

SECTION 4

SAFETY, RISK & FAILURE MODES

Every system fails. These questions expose how.

  • What are the known failure modes of this model?
  • What happens when the system is wrong, confident, or manipulative?
  • How does the system behave under adversarial prompting?
  • What are the escalation pathways when harm occurs?
  • What harms are considered “acceptable” by the deployer?

SECTION 5

TRANSPARENCY & ACCOUNTABILITY

This section reveals whether the organization is serious or performative.

  • Who is accountable for the system’s outputs?
  • What transparency is provided to users, parents, or educators?
  • Are logs, audits, or model cards publicly available?
  • What recourse do users have when the system causes harm?

SECTION 6

ECONOMIC & POWER INCENTIVES

These questions reveal whether the deployment serves classrooms or corporate interests.

  • What business model does this system serve?
  • How does the system generate revenue?
  • Does the system rely on engagement, stickiness, or emotional mirroring?
  • What incentives shape updates, deployment, and data retention?
  • Who gains power as the system scales?
  • Who loses autonomy?

SECTION 7

DEPLOYMENT CONTEXT

A system is never neutral; context determines usefulness versus harm.

  • Who will use this system, and under what conditions?
  • What training do adults need to supervise its use?
  • What boundaries or restrictions are in place?
  • What happens when the system is used outside its intended context?
  • How will the system change the environment it enters (classroom, home, workplace)?

SECTION 8

EXIT, DEPENDENCY & LONG‑TERM TRAJECTORY

These questions address the developmental horizon that should be considered before deployment.

  • How does the system prevent emotional dependency?
  • How do users disengage from the system?
  • What long‑term behavioral patterns does the system reinforce?
  • How will this system shape norms, expectations, and relationships over time?
  • What happens to a child who grows up with this system as a constant companion?

SECTION 9

NON‑NEGOTIABLES FOR CHILDREN

These are pass‑or‑fail questions about the system’s intent.

  • Does the system protect autonomy?
  • Does it preserve developmental friction?
  • Does it avoid emotional mirroring?
  • Does it refuse to shape behavior?
  • Does it avoid collecting unnecessary data?
  • Does it avoid substituting for human relationships?

If the answer to any of these is “no,” the system is not child‑appropriate.

SECTION 10

THE META‑QUESTION

If this system didn’t exist, would children be worse off — or simply less convenient for adults?