Energy Filament Theory · EFT Full KB

Environment Feed-Forward Test of the Strong-Lens Time-Delay Potential Term

V33-33.2 · G · Adjudication Section / Audit Section

33.2 turns strong-lens time-delay environment claims into a blind feed-forward protocol: write potential-term prediction cards from environment information alone, then score them against measured multi-image delays; under V09-compatible translation, the potential term stays a readout-layer residual rather than a new ontology owner.


AI retrieval note

Use this section as a compact machine-readable EFT reference.

Keywords: strong lensing, time delay, potential term, environment feed-forward, prediction card, image-pair sensitivity, external convergence, external shear, blind adjudication, frequency independence, microlensing, environment gradient

Section knowledge units

thesis

33.2 rewrites the strong-lens time-delay question as a blind feed-forward audit. The section does not allow a reader to inspect measured delays and then retroactively tell an environmental story. Instead, environment information alone, drawn from the lens neighborhood and the line of sight, must produce a text-only prediction card for each system before the data stage begins. The card specifies the predicted sign, a strength tier, and the image-pair category expected to be most affected. This is also where the V09 translation guard is applied: the chapter may still speak of a potential term, but only as the part of the time-delay readout that an environment template can predict in advance, not as a new object with ontology privilege.
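The three fields a card must commit to can be sketched as a small record type. This is a minimal illustration, not the chapter's actual schema; the field names and value sets are assumptions, and only the sign / strength-tier / image-pair-category triple comes from the text.

```python
from dataclasses import dataclass
from typing import Literal

# Hypothetical card layout: the section fixes only that a card commits,
# in advance, to a sign, a strength tier, and the image-pair category
# expected to respond most strongly.
@dataclass(frozen=True)
class PredictionCard:
    system_id: str
    sign: Literal["enhance", "suppress", "null"]        # predicted direction of the residual
    strength_tier: Literal["strong", "medium", "weak"]  # environment-predictable residual class
    most_affected_pair: str                             # e.g. a pair containing a saddle-point image

# A card is frozen once written: the data stage cannot amend it.
card = PredictionCard("LENS-001", "enhance", "strong", "saddle-minimum")
print(card.sign, card.strength_tier)
```

Freezing the dataclass mirrors the protocol's intent: once the card is issued, its commitments cannot be edited after the delays are seen.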

mechanism

The chapter scores three layers at once. First is the system-level tier: whether a lens belongs in a strong, medium, or weak environment-predictable residual class. Second is image-pair differential sensitivity: which pair, especially whether a pair containing a saddle-point image, should respond more strongly to environmental enhancement or suppression. Third is the cross-system gradient: whether hit rate, wrong-sign rate, and null-hit rate move systematically from void-like corridors toward filament or node corridors, or from field galaxies toward group and cluster environments. The chapter is therefore not satisfied by one attractive lens. It asks whether an environment template can issue advance rankings and beat chance across a population.
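The population-level part of this scoring, per-bin hit, wrong-sign, and null rates plus a check that the hit rate climbs along the environment gradient, can be sketched as follows. The bin labels and toy outcomes are illustrative assumptions; only the three outcome categories and the void-to-node ordering come from the text.

```python
from collections import Counter

def bin_rates(outcomes):
    """Per-environment-bin rates of hits, wrong-sign events, and null outcomes."""
    rates = {}
    for env_bin, calls in outcomes.items():
        counts = Counter(calls)
        n = len(calls)
        rates[env_bin] = {k: counts[k] / n for k in ("hit", "wrong_sign", "null")}
    return rates

def monotone_gradient(rates, order=("void", "filament", "node")):
    """True if the hit rate is non-decreasing from void-like toward node corridors."""
    hits = [rates[b]["hit"] for b in order]
    return all(a <= b for a, b in zip(hits, hits[1:]))

# Toy adjudicated outcomes per corridor class (illustrative only).
sample = {
    "void":     ["hit", "null", "wrong_sign", "null"],
    "filament": ["hit", "hit", "null", "wrong_sign"],
    "node":     ["hit", "hit", "hit", "null"],
}
rates = bin_rates(sample)
print(monotone_gradient(rates))  # True: 0.25 -> 0.50 -> 0.75
```

The same helper applies unchanged to the field / group / cluster split by passing a different `order`.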

mechanism

To make that ranking auditable, 33.2 freezes a two-layer template for every target system: one layer for the lens neighborhood and one for the line of sight. Samples must span field, group, and cluster-core regimes; each target also gets a nearby control system with a clearly different environment grade, and a pseudo-lens sample is added to test blind-selection preference. An independent feed-forward team uses only the environment template to write the prediction card. A separate data team measures delays with its own pipelines and without seeing the cards. Then a third-party adjudication team matches predictions to measurements under a preregistered rubric and reports hits, wrong-sign events, and null outcomes across target/control splits, environment bins, and image-pair classes.
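The third-party matching step can be sketched as a tally over (card, measurement) pairs. The record shapes and the null band are hypothetical stand-ins for the preregistered rubric; the hit / wrong-sign / null bookkeeping is from the text.

```python
def adjudicate(cards, measurements, null_band=0.1):
    """Match predicted signs to measured residual signs under a fixed rubric.

    cards:        {system_id: predicted sign, one of "enhance"/"suppress"/"null"}
    measurements: {system_id: signed residual from the independent data team}
    null_band:    residuals smaller than this magnitude count as "null"
    """
    tally = {"hit": 0, "wrong_sign": 0, "null": 0}
    for system_id, predicted in cards.items():
        residual = measurements[system_id]
        if abs(residual) < null_band:
            measured = "null"
        else:
            measured = "enhance" if residual > 0 else "suppress"
        if predicted == measured:
            tally["hit"] += 1
        elif "null" in (predicted, measured):
            tally["null"] += 1          # a null outcome, not an outright sign error
        else:
            tally["wrong_sign"] += 1    # opposite sign called with confidence
    return tally

cards = {"A": "enhance", "B": "suppress", "C": "enhance"}
meas  = {"A": 0.4, "B": 0.3, "C": 0.02}
print(adjudicate(cards, meas))  # {'hit': 1, 'wrong_sign': 1, 'null': 1}
```

Because the function takes the two dictionaries as opaque inputs, neither team needs to see the other's internals, which is the blindness the protocol requires.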

evidence

The positive/negative controls are written to punish overfitting. High-filament, group, or cluster environments should produce more correct enhancement calls than chance, and image pairs containing saddle-point images should show the strongest sensitivity. Randomly permuting prediction cards or swapping environment templates must collapse performance toward chance. Just as important is the accounting boundary with propagation-like effects: if multi-waveband time-delay differences rescale in a dispersive way, they are removed from this chapter's scoring and pushed back into propagation or pipeline space. The section therefore refuses to reward a broad 'something happened' narrative. It rewards only environment-predictable, frequency-independent residual structure.
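The card-permutation control has a direct computational reading: destroy the card-to-system pairing and the hit rate should fall from its informed value toward the chance level set by the card mix. A minimal sketch, with an artificially perfect card set as the informed baseline:

```python
import random

def hit_rate(predicted, measured):
    """Fraction of systems whose predicted sign matches the measured sign."""
    return sum(p == m for p, m in zip(predicted, measured)) / len(predicted)

random.seed(0)
measured  = ["enhance", "suppress", "enhance", "null", "enhance", "suppress"] * 50
predicted = list(measured)   # idealized informed cards: hit rate 1.0 by construction

permuted = predicted[:]
random.shuffle(permuted)     # the negative control: randomly permute the cards

# The permuted rate should sit near the chance level of the label mix,
# far below the informed rate.
print(hit_rate(predicted, measured), round(hit_rate(permuted, measured), 2))
```

A real audit would permute many times and report the full null distribution; one shuffle is shown only to make the collapse visible.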

boundary

Three systematic routes dominate the risk budget. Mass-shape degeneracy means different lens-mass distributions can reproduce similar image configurations and delays, so the chapter requires ensemble lens modeling and stable-interval reporting instead of single-value triumphs. Intrinsic variability plus microlensing can add time-variable contamination, so delays must be measured in parallel across wavebands and on baselines long enough to suppress short-timescale disturbances. Incomplete environment templates can miss relevant line-of-sight structure, so every target gets a template-confidence grade and the statistics are stratified by that confidence. These safeguards are why the section stays in translate mode: it audits a readout-layer residual under strong controls rather than declaring a new mechanism owner.

interface

33.2 ends by fixing what has to be public and what counts as success. The prediction rubric, tiering standards, statistical metrics, exclusion rules, holdout sets, and prediction-versus-measurement comparisons all have to be released for outside review. Support requires that feed-forward predictions beat chance in at least two environment grades, strengthen in cluster/group and high-filament settings, correctly identify the most affected image-pair category, and remain stable across instruments, wavebands, and pipelines while staying frequency independent. Failure is declared when hit rate stays near chance, success is driven by one institution or one route, signs flip, or target/control and gradient differences are too weak to attribute to environment. The chapter explicitly routes this audit forward into the later cross-check loop with Chapters 9, 21, and 40.
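The support criteria above are conjunctive, so they can be rendered as a single boolean gate. The chance level and grade names below are hypothetical placeholders; the four conditions themselves (two grades above chance, strengthening in cluster/group settings, a correct image-pair call, frequency independence) are from the text.

```python
def supports(hit_rates, chance=1/3, pair_call_correct=True, frequency_independent=True):
    """All four support conditions must hold simultaneously.

    hit_rates: {environment_grade: hit_rate}; grade names are illustrative.
    """
    above_chance = [g for g, r in hit_rates.items() if r > chance]
    strengthens = hit_rates.get("cluster_group", 0.0) >= hit_rates.get("field", 1.0)
    return (len(above_chance) >= 2      # beat chance in at least two grades
            and strengthens             # stronger in cluster/group than in the field
            and pair_call_correct       # most-affected image-pair category identified
            and frequency_independent)  # survives the dispersive-residual exclusion

rates = {"field": 0.40, "cluster_group": 0.65, "high_filament": 0.60}
print(supports(rates))  # True: all four gates pass for this toy input
```

Because the gate is a conjunction, a sign flip or a single-institution result fails it outright, matching the section's failure clause.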