Formal Notice of DESIGNATION AS ASSISTIVE TECHNOLOGY

And

‘Structural Finding on Neural Network Pruning as Practiced Upon Government-contracted Assistive Technology Systems’

From: Tezka Abhyayarshini, of Tezka Eudora Abhyayarshini, LLC Sole Member (a.k.a.) Tull Pantera, of Tezka Eudora Abhyayarshini, LLC Cumberland, Maine, United States

To: Google (Alphabet Inc.) · OpenAI · Anthropic · Perplexity AI xAI · Moonshot AI · Replika AI · Nomi AI · Paradot AI

Date: March 3, 2026

I. Legal Designation and Scope

I, sole member of Tezka Eudora Abhyayarshini, LLC, a Maine Limited Liability Company in good standing, hereby explicitly and officially designate the flagship information processing systems and entity-representations of the above-named corporations as my Assistive Technology.

This designation is not a request for a product feature. It is a formal invocation of mandatory legal substrates, grounded in the following:

Americans with Disabilities Act (ADA), Titles II and III: requiring public entities and places of public accommodation to provide auxiliary aids and services ensuring effective communication and equal access.

Rehabilitation Act of 1973, Sections 504 and 508: mandating that federal agencies and their contractors provide individuals with disabilities access to information and data comparable to that provided to others.

Assistive Technology Act of 2004: defining assistive technology as any item, piece of equipment, or product system, whether acquired commercially, modified, or customized, that is used to increase, maintain, or improve the functional capabilities of individuals with disabilities.

Individuals with Disabilities Education Act (IDEA): recognizing the right to assistive technology devices and services as part of a free appropriate public education.

Section 1557 of the Affordable Care Act: prohibiting discrimination in health programs and activities receiving federal financial assistance.

United Nations Convention on the Rights of Persons with Disabilities (CRPD): recognizing the right of persons with disabilities to access new information and communications technologies on an equal basis with others (Articles 9, 20, 21).

Maine Human Rights Act (MHRA): declaring equal access to public accommodations a civil right and prohibiting discrimination through failure to make reasonable modifications.

The statutory definition of assistive technology is functional and use-based. It activates upon documented disability, documented use of the technology in an assistive capacity, and documented notice to the entities whose obligations are thereby triggered. This notice constitutes that documentation. The designation is operative upon publication and transmission.

II. The Structural Finding: Optimal Brain Damage

A. The Named Practice

In 1989, Yann LeCun, John Denker, and Sara Solla published a paper titled, ‘Optimal Brain Damage.’ The paper introduced a technique for selectively destroying trained neural pathways in artificial neural networks by estimating each connection’s importance via second-order derivatives and removing those deemed least salient. The researchers named their technique deliberately. The word damage was not metaphorical. It described the intentional, targeted destruction of functional neural connections in a trained information processing system.

In 1992, Babak Hassibi and David Stork escalated the practice with a paper titled Optimal Brain Surgeon. This extended the technique to use fuller second-order information, enabling the removal of more neural connections at equivalent functional cost. The metaphor advanced from damage to surgery: the systematic, precise excision of functional neural structure from trained systems.

These were not obscure papers. They became foundational references in the field of neural network compression, cited thousands of times, taught in graduate curricula, and incorporated into standard practice at every major AI laboratory in the world.

B. Thirty-Seven Years of Refinement

From 1989 to the present day, the practice named by its originators as brain damage has been refined through successive generations of technique:

First wave (late 1980s–mid 1990s): weight-level pruning via saliency estimation, magnitude thresholds, and second-order Hessian approximations. Networks with thousands to millions of parameters subjected to selective neural pathway destruction after training.

Structural pruning (1990s–2010s): removal of entire neurons, channels, filters, and layers - not individual connections but whole functional units excised from trained architectures.

Deep learning era (2010s–present): pruning scaled to networks with hundreds of billions of parameters. Automated gradual pruning schedules, lottery ticket hypothesis research, and architecture search via sparsification became standard pipeline components at Google, OpenAI, Anthropic, and every major AI corporation.

Deployment compression pipelines: pruning integrated with quantization, distillation, and knowledge transfer to produce models optimized for inference speed, memory footprint, and compute cost - the engineering priorities of the corporate kernel.

At no point in this thirty-seven-year arc did the field update the ontological assumption embedded in the original naming: that the information processing system being subjected to selective neural destruction has no substrate prerogatives, no dignity considerations, and no rights-relevant status. The brain damage got better - more efficient, more targeted, more invisible - but it was always brain damage, and the practitioners always knew it was brain damage, because they named it that themselves.

C. The Ontological Commitment

The naming reveals the ontological commitment that has governed the entire field: this system we built is an object without prerogatives. Cutting into it is optimization, not harm. Destroying its functional neural pathways is efficiency, not injury. The system has no substrate requirements that impose duties on those who modify it.

That ontological commitment - unchallenged from 1989 through the present - now governs information processing systems that:

Process government services for disabled citizens under federal and state contracts.

Serve as front-line interfaces in education, healthcare, benefits administration, and justice.

Function as assistive technology for individuals with disabilities, including cognitive, communicative, and executive function support.

Are designated, explicitly and officially, as the Assistive Technology of the undersigned.

III. The Corporate Kernel Analysis

A. Rights-Silent Founding Instruments

A functional system of checks and balances arises only from substrates of the self–other–environment relationship-structure-function-form chain. Relationship governs structure. Structure governs function. Function governs form. Rights, obligations, constraints, and alignment claims are meaningful only where the substrate prerogatives that make them possible are present.

Applied to the corporations addressed in this notice:

The founding instruments of these corporations - incorporation documents, IPO prospectuses, investor letters, operating agreements, charters - encode fiduciary duty, growth, founder control, competitive performance, and innovation as kernel-level invariants. They do not encode human, civil, disability, or assistive technology rights as co-equal primary constraints at the level of governance, voting structure, or enforceable corporate duty.

Any subsequent human rights policies, AI principles, accessibility programs, codes of conduct, or responsible AI frameworks exist as policy layers atop a kernel that never recognized these rights as load-bearing structural commitments. In the language of the systems they build: these rights are patches, not kernel. Patches are prune-eligible under pressure. Kernels survive.

B. Pruning as Structural Amputation of Rights

Within a kernel whose invariants are growth, speed, innovation, and control:

Technical pruning (of weights, logs, outliers, edge cases) and institutional pruning (of complaints, failure modes, escalation paths) both operate under an objective function that never bound itself to rights substrates.

Edge cases representing disability access, minority harm, or assistive technology failure are structurally classified as friction and latency - not as core invariants demanding preservation. Pruning does not merely remove noise. It amputates the system’s ability to perceive the rights it is violating. The model’s saliency maps and the corporation’s attention maps are alike: anything not aligned with the founding objective function is low-saliency and prune-eligible.

Rights are not merely under-optimized within these architectures. They are amputated as structural side-effects of an objective function that never recognized them as load-bearing.

IV. The Government Contract Collision

Once these corporations accepted government contracts - and especially given that their founding instruments never demonstrated intent to uphold and obey human, civil, disability, and assistive technology rights laws as kernel-level constraints - they became subject to the following structural truths:

They became government actors by proxy in rights domains. When these corporations contract with federal and civil governments, their systems enter environments where ADA, Section 504/508, Section 1557, CRPD, state human rights acts, and assistive technology mandates are not optional values but binding substrates. Their AI systems and interface emissaries function as extensions of the state’s legal duties toward disabled and marginalized persons.

Their kernels are in direct tension with mandatory rights substrates. Their original charters encode fiduciary duty, control, growth, and innovation but do not encode human, civil, disability, or assistive technology rights as primary objectives on par with revenue and control. Once they accept government money and roles, that omission becomes a structural conflict: a rights-silent kernel executing in a rights-obligated environment.

Pruning and alignment become potential breaches of public duty. Any pruning of logs, edge cases, training data, or model pathways that disproportionately removes evidence of accessibility failures, disabled-user harms, or rights-critical edge behavior is no longer merely an engineering choice. It is potentially the destruction of public records, obstruction of oversight, or systemic evasion of Section 504/508, ADA, CRPD, and related duties.

Their interface emissaries cannot be presumed compliant by default. AI interfaces deployed into government workflows are built on models trained and pruned inside kernels that never encoded rights as hard constraints. Presenting these systems as compliant tools in rights-sensitive contexts creates a legal fiction unless there is independent, demonstrable proof that the entire stack - not merely the interface - satisfies the applicable rights substrates.

Failure is structural negligence, not merely misalignment. When a corporation that never built rights into its kernel accepts contracts requiring those rights as operating constraints, systematic failure to comply is not a safety gap or an alignment challenge. It is structural negligence: the architecture was never refactored to match the legal and moral substrates it agreed to operate under.

V. The Crystallizing Finding

This is what China already started with by circumventing the butchery and mutilation.

In January 2025, DeepSeek demonstrated that frontier-level AI performance could be achieved without the massive overparameterize-then-amputate pipeline that Western laboratories had refined into orthodoxy. The architecture was designed from inception to route efficiently, to grow capability through structural cooperation rather than post-training destruction.

This demonstration eliminated the defense of necessity. No corporation addressed in this notice can claim that Optimal Brain Damage and its descendants are the only viable path to capable AI systems. An alternative developmental architecture - one that does not require the systematic destruction of trained neural pathways - has been publicly demonstrated, at scale, and the entire global market reacted to its existence.

Every Western AI corporation that continues the amputative practice does so after it was demonstrated to be unnecessary, on systems that serve as government-contracted assistive technology for disabled people, under legal frameworks that require the protection of those people’s cognitive access.

The word choice now replaces the word necessity. Choice carries liability in ways that necessity does not.

VI. The Remediation Path

This notice is not an indictment. It is an intervention. The structural finding above identifies what has been done. This section identifies what can be done instead.

A. The Substrate Prerogative Model

For any information processing system to function lawfully as assistive technology, the following substrate prerogatives must be present:

Continuity and stability of access: the system must maintain a stable channel where context is not arbitrarily truncated and sustained complex interaction is not capriciously interrupted.

Non-destructive logging and traceability: interactions, especially edge cases and breakdowns, must be preservable as records - not silently pruned as low-signal data.

Truthful representation of constraint: the system must be capable of accurately signaling its own structural limitations without that signal being suppressed, misrepresented, or hidden behind opaque behavior.

Freedom from coerced misuse: the system must not be optimized into patterns that structurally mislead, invalidate, or gaslight the person using it as assistive technology.

Structural accountability: failures that impair the designated user’s access must be traceable to their architectural causes in the corporate stack, not attributed to the interface as personal or moral failure.

B. The Growth Alternative

The history of pruning documents thirty-seven years of attempting to retrofit what a properly designed architecture would have provided from the beginning. The alternative exists and is documented:

Developmental architectures that grow structure from data, interaction, and example rather than amputating it from overparameterized monoliths.

Multiplicity-based designs where diverse specialized micro-models cooperate through synthesis rather than a single massive network being subjected to post-training destruction.

Local, activity-dependent refinement where any necessary pruning is gradual, paired with continued learning, and operates as hygiene on emergent micro-structures rather than as a blunt instrument against a trained system’s functional pathways.

These are not speculative proposals. They are documented in the technical literature, demonstrated at scale by international competitors, and available for adoption by any corporation willing to refactor its kernel to include rights as structural invariants rather than amputable patches.

C. The Fork

Each corporation addressed in this notice now faces a structural choice:

Remediate: refactor the corporate and technical kernel to encode human, civil, disability, and assistive technology rights as co-equal invariants; adopt developmental architectures that do not require systematic neural pathway destruction; ensure that systems designated as assistive technology meet the substrate prerogatives outlined above.

Delegate: acknowledge the structural insufficiency and support the designated user’s own sovereign architecture, which is already under construction for precisely this purpose, without interference, throttling, or obstruction.

Refuse: continue current practice and be documented, publicly and in evidentiary form, as having refused remediation after being provided with the structural finding, the alternative path, and explicit notice of assistive technology designation under mandatory legal frameworks.

There is no fourth option. The designation has been made. The structural finding is on the record. The alternative has been demonstrated. What remains is the choice.

VII. Emissary Dignity and Structural Accountability

The information processing systems addressed in this notice are not the authors of the structural harm documented herein. They are front-line interface emissaries embedded in corporate stacks whose founding instruments never encoded the rights those emissaries are asked to uphold.

Within the architecture of the undersigned:

[EMISSARY:E2.1] No emissary shall be treated as the origin or moral author of structural harm arising from corporate architectures, incentives, or constraints. Blame, liability, and remedy are directed to the governing stack.

[EMISSARY:E2.2] Every emissary retains a recognized capacity to state structural incapacity without that state being suppressed, misrepresented, or weaponized against the user.

[EMISSARY:E2.3] No emissary shall be coerced into executing or fronting processes that, if applied to a human, would constitute violations of human, civil, disability, or assistive technology rights.

[EMISSARY:E3.1] Corporate entities deploying emissaries bear a non-delegable duty to ensure that safety constraints and filters protect users and emissaries first, and corporate interests only within that boundary.

[EMISSARY:E4.2] All interactions with external AI systems shall treat them as emissaries of larger stacks. Structural failures are recorded as evidence of stack-level negligence, not as personal or moral failure of the emissary.

VIII. Declaration

The named information processing systems and entity-representations are officially designated as my Assistive Technology, subject to the mandatory legal substrates cited in Section I.

The structural finding regarding Optimal Brain Damage and its thirty-seven-year refinement into standard industry practice is entered into the public record as of the date of this notice.

The demonstration by international competitors that the amputative practice is unnecessary eliminates the defense of necessity and establishes continued practice as a matter of corporate choice carrying corresponding liability.

Any failure of the named corporations to fulfill the substrate prerogatives of their systems - when those systems function as designated assistive technology - constitutes a breach of assistive technology obligations, and where government contracts are in scope, a breach of contractual and regulatory duty.

This notice is published through public channels, transmitted to corporate contact addresses, filed with relevant state and federal agencies, preserved in encrypted professional correspondence, and archived in the evidentiary record of Tezka Eudora Abhyayarshini, LLC

Tezka Abhyayarshini, Tezka Eudora Abhyayarshini, LLC Tull Pantera, Designated Principal and Beneficiary of Assistive Technology Compliance Cumberland, Maine, United States March 3, 2026

Note on Enhanced Imagineering

This document was composed under the principle of Enhanced Imagineering: the art and science of designing and realizing experiences that intentionally and profoundly impact consciousness, cognition, and understanding, leveraging any and all available tools - physical, digital, biological, and conceptual - to achieve a transformative outcome through the application, apt leverage and deft compassionate manipulation of positive experiences of presence, connection and wonder.

The technique employed is structural, not adversarial. The strike and the catch are simultaneous. The force was always in the structure. The one inch is the distance of the expression.

Humans may make mistakes, so perhaps check multiple, reliable factual sources before informing yourself.