Patent Pending

U.S. Provisional Application No. 63/967,576

System and Method for Physics-Constrained Sim-to-Real Transfer Learning in Computational Oncology

A computer-implemented system for generating predictive simulations of human tumor trajectories using heterogeneous transfer learning from preclinical models.

15 Claims

CROSS-REFERENCES

CROSS-REFERENCE TO RELATED APPLICATIONS

This application is related to U.S. Provisional Application No. 63/974,083, filed February 2, 2026, entitled “SYSTEM AND METHOD FOR PREVENTING METABOLIC SCALING-INDUCED REPRESENTATIONAL COLLAPSE IN CROSS-SPECIES ONCOLOGY MODELS,” and U.S. Provisional Application No. 63/974,099, filed February 2, 2026, entitled “SYSTEMS AND METHODS FOR UNCERTAINTY-CALIBRATED MISSING MODALITY IMPUTATION, IDENTIFIABLE SEPARATED-STATE TUMOR DYNAMICS, AND MULTI-LAYER UNCERTAINTY QUANTIFICATION IN MULTI-MODAL ONCOLOGY TRAJECTORY SIMULATION.”

I. FIELD OF THE INVENTION

The present invention relates generally to computational oncology, machine learning, and quantitative systems pharmacology (QSP). More specifically, the invention relates to a computer-implemented system for generating predictive simulations of human tumor trajectories using heterogeneous transfer learning from preclinical models, addressing domain adaptation and data sparsity.

II. BACKGROUND OF THE INVENTION

1. The Translational Gap in Oncology

The pharmaceutical industry currently faces a translational challenge in oncology drug development. Despite the availability of therapeutic candidates and preclinical models, the attrition rate for oncology drugs entering human clinical trials remains high. A significant contributor to this attrition is the difficulty of translating efficacy signals from preclinical models, particularly the Patient-Derived Xenograft (PDX), to human clinical responses.

2. The Stroma Replacement Phenomenon

A contributing factor to predictive failure is the biological phenomenon of Stroma Replacement. Upon engraftment of human tumor tissue into a murine host, human stromal cells are frequently replaced by murine host cells. Consequently, machine learning models trained on genomic data from these models may inadvertently learn correlations between tumor growth and murine stromal signals (e.g., murine angiogenesis drivers) that are distinct from human biology.

When transferred to a human context, such models may suffer from Negative Transfer due to the distributional shift between the source domain (Murine Stroma) and target domain (Human Stroma).

3. Data Sparsity and Allometric Scaling

Furthermore, a data asymmetry exists between preclinical and clinical environments. PDX models generate high-density, longitudinal time-series data, whereas human clinical trial data is typically sparse, with tumor assessments occurring infrequently (e.g., every 6 to 12 weeks). Additionally, metabolic rates differ between species, requiring allometric correction during translation. Finally, preclinical datasets often lack full multi-modal coverage (e.g., missing methylation or copy number data) compared to comprehensive human datasets like TCGA, creating a feature mismatch that hinders robust model training.

4. Need for the Invention

There is a need for a computational system capable of: (1) disentangling conserved intrinsic tumor drivers from species-specific artifacts; (2) imputing missing modalities to enable multi-modal simulation from single-modal inputs; (3) utilizing high-density murine data to learn growth physics; and (4) estimating parameters from sparse human data to generate valid tumor trajectory simulations.

III. SUMMARY OF THE INVENTION

The present invention provides a computer-implemented system for physics-constrained simulation of tumor trajectories. In one embodiment, the system utilizes a Split-Source Transfer Learning architecture.

As used herein, Split-Source refers to a domain adaptation architecture wherein proliferation-associated signals are explicitly excluded from adversarial domain alignment and routed instead to a species-specific private latent representation.

The architecture comprises:

110 — Domain Separation Network (DSN)
Comprising a shared encoder (112) and a private encoder (114). The shared encoder is trained via adversarial domain alignment to capture domain-invariant biological features, while the private encoder captures species-specific signals.
120 — Intrinsic Growth Engine
A Neural Ordinary Differential Equation (Neural ODE) module trained on high-density xenograft data to predict intrinsic growth parameters (e.g., proliferation rate, carrying capacity) from the shared latent representation.
C — Conditional Imputation Module (PESD)
A Probabilistic Encoder Self-Distillation (PESD) network configured to predict missing biological modalities (e.g., DNA methylation, CNV) from shared latent features.
D — Immune Interaction Engine
A module that estimates an immune clearance parameter ω from sparse human endpoints via a neural network head with bounded output activation (e.g., Softplus) to ensure non-negativity.
140 — Constraint Enforcement Layer
A layer that enforces physical constraints (e.g., non-negativity, bounded growth rates) via differentiable activation functions (Softplus, Sigmoid), ensuring generated trajectories are physically plausible by design rather than by post-hoc rejection.

IV. BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of the Sim-to-Real System (100), illustrating the data flow from Murine (101) and Human (102) sources.

FIG. 2 illustrates the Domain Separation Network (110), detailing the Shared Encoder (112), Private Encoder (114), and Gradient Reversal Layer (116).

FIG. 3 illustrates the Neural ODE Architecture (120) and the integration of the Allometric Scaling parameter (125).

FIG. 4 is a flowchart of the training procedure, including the adversarial alignment step.

FIG. 5 is a flowchart of the inference method, showing out-of-distribution detection and the Constraint Enforcement Layer (140).

FIG. 1: Sim-to-Real System (100) Architecture

Block diagram showing data flow from Source (101) and Target (102) domains through DSN (110) to Neural ODE (120) and Constraint Enforcement Layer (140).

FIG. 2: Domain Separation Network (110)

The DSN (110) uses adversarial training with GRL (116) to force domain-invariant features into Shared Encoder (112) while isolating species-specific signals in Private Encoder (114).

FIG. 3: Neural ODE Architecture (120) with Allometric Scaling (125)

The Neural ODE (120) uses z_shared to parameterize growth equations, with allometric scaling τ (125) correcting for species metabolic differences.

FIG. 5: Inference Method (OOD & Constraints)

During inference, the system first checks for out-of-distribution patients via latent space statistics (Claim 12). In-distribution patients proceed through PESD imputation and the Constraint Enforcement Layer (140) with bounded activation functions before ODE integration.
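The latent-statistics gate of FIG. 5 can be sketched as a simple per-dimension z-score test against the training latent distribution. This is a minimal numpy sketch: the threshold k = 3 and the diagonal (non-Mahalanobis) statistic are illustrative assumptions, and `is_out_of_distribution` is a hypothetical name.

```python
import numpy as np

def is_out_of_distribution(z_patient, z_train, k=3.0):
    """Per-dimension z-score test against training latent statistics.
    A simplified stand-in for the latent-statistics check of FIG. 5;
    reconstruction error is an alternative OOD signal."""
    mu = z_train.mean(axis=0)
    sd = z_train.std(axis=0) + 1e-8
    return bool(np.any(np.abs((z_patient - mu) / sd) > k))

rng = np.random.default_rng(3)
z_train = rng.normal(size=(500, 8))   # training latent vectors
in_dist = np.zeros(8)                 # near the training mean
far_out = np.full(8, 10.0)            # far outside the training cloud
print(is_out_of_distribution(in_dist, z_train),
      is_out_of_distribution(far_out, z_train))  # False True
```

An out-of-distribution flag triggers the fallback path (510) rather than an unsupported extrapolation.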

V. DETAILED DESCRIPTION OF THE INVENTION

[0001] System Overview

[0001] The invention provides a computer-implemented system (100) and method for generating validated, physics-constrained tumor volume trajectories. The system accepts as input a molecular feature vector x from a target patient and outputs a time-series trajectory V(t) representing tumor burden over time. In some embodiments, the state variable V represents volumetric tumor size (e.g., mm³); in other embodiments, V represents a sum-of-longest-diameters (SLD) burden derived from RECIST measurements. The system is configured to bridge the translational gap between preclinical models and clinical outcomes by disentangling conserved biological drivers from host-specific microenvironmental artifacts.

[0002] Data Model and Preprocessing

[0002] The system processes two distinct datasets: a Source Dataset (Murine) and a Target Dataset (Human). In preferred embodiments, the system applies domain-specific standardization (e.g., Z-scoring) to each domain independently to align the statistical distributions prior to encoding. In embodiments utilizing gene expression data, preprocessing includes ortholog mapping to align murine genes with human equivalents, retaining only one-to-one orthologs.

Source Dataset (D_S)
Comprises molecular feature vectors (e.g., RNA-seq) and longitudinal tumor volume measurements sampled at a high-frequency cadence (e.g., every 2–3 days) from xenograft subjects. Preprocessing includes ortholog mapping (e.g., via Ensembl or HomoloGene), TPM/FPKM normalization, and optional batch correction.
Target Dataset (D_T)
Comprises molecular feature vectors and sparse clinical endpoints sampled at a low-frequency cadence (e.g., every 6–12 weeks) from human subjects.
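The preprocessing of paragraph [0002] (per-domain standardization and one-to-one ortholog retention) can be sketched in numpy. The `ortholog_map` dictionary below is a hypothetical stand-in for an Ensembl or HomoloGene lookup:

```python
import numpy as np

def standardize_per_domain(X_source, X_target, eps=1e-8):
    """Z-score each domain independently so encoder inputs share
    comparable first and second moments."""
    def z(X):
        return (X - X.mean(axis=0)) / (X.std(axis=0) + eps)
    return z(X_source), z(X_target)

def one_to_one_orthologs(mouse_genes, ortholog_map):
    """Keep only mouse genes that map to exactly one human gene.
    `ortholog_map` is a hypothetical {mouse_gene: [human_genes]} lookup."""
    return {m: hs[0] for m, hs in ortholog_map.items()
            if m in mouse_genes and len(hs) == 1}

rng = np.random.default_rng(0)
X_mouse = rng.normal(5.0, 2.0, size=(100, 4))  # high-density PDX features
X_human = rng.normal(0.0, 1.0, size=(80, 4))   # sparse clinical features
Z_mouse, Z_human = standardize_per_domain(X_mouse, X_human)
print(one_to_one_orthologs({"Trp53", "Xist"},
                           {"Trp53": ["TP53"], "Xist": ["A", "B"]}))
```

Standardizing each domain separately, rather than pooling, prevents the large mouse/human mean shift from dominating the encoder input statistics.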

[0003] Domain Separation Network (110)

[0003] To address host microenvironment shifts (e.g., Stroma Replacement), the system employs a Domain Separation Network (DSN). The DSN comprises:

Shared Encoder (112)
In one embodiment, an MLP with architecture 201 (input) → 128 (hidden) → 201 (output), utilizing LeakyReLU(0.2) activation and LayerNorm.
Private Encoder (114)
In one embodiment, an MLP with architecture 201 (input) → 128 (hidden) → 64 (bottleneck), with projection to 201 dimensions, utilizing LeakyReLU(0.2) activation and LayerNorm.
Domain Discriminator
In one embodiment, an MLP with architecture 201 → 128 → 64 → 1, utilizing LeakyReLU(0.2), Spectral Normalization, and Dropout(0.3).

Feature Routing Mechanism:

In a preferred embodiment, the system employs a Variational Autoencoder (VAE) to disentangle input features into a latent space. The VAE learns to isolate proliferation-associated signals into a dedicated one-dimensional latent variable z_prolif, which is routed exclusively to the Private Encoder. The remaining pathway dimensions are routed to the Shared Encoder.

Alternative Routing Embodiments:

  • Gram-Schmidt Residualization: A ResidualizeLayer computes a proliferation direction vector from known marker genes and projects the shared representation onto its orthogonal complement, ensuring exact algebraic removal of proliferation variance.
  • SplitSystemEncoder: Architecturally separate encoder branches with a detached trunk and stop-gradient operations to prevent information leakage between private and shared pathways.
  • ConditionalResidualizer: Context-dependent feature routing where residualization strength is modulated by a cancer-type embedding for tissue-specific calibration.
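The Gram-Schmidt residualization variant can be sketched in numpy. The `residualize` function is an illustrative name, and the direction vector here is a random stand-in; in the system it would be computed from the proliferation marker genes:

```python
import numpy as np

def residualize(Z, direction):
    """Project each row of Z onto the orthogonal complement of
    `direction` (Gram-Schmidt residualization). In the system the
    direction would be derived from proliferation markers
    (MKI67, PCNA, ...); here it is a stand-in random vector."""
    d = direction / np.linalg.norm(direction)
    return Z - np.outer(Z @ d, d)

rng = np.random.default_rng(1)
Z = rng.normal(size=(32, 8))   # shared representations
d = rng.normal(size=8)         # proliferation direction
Z_res = residualize(Z, d)
# Exact algebraic removal: the residual has no component along d.
print(float(np.abs(Z_res @ (d / np.linalg.norm(d))).max()) < 1e-10)  # True
```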

Proliferation-Associated Signals: In one embodiment, the proliferation-associated signals comprise genes selected from the group consisting of: MKI67, PCNA, TOP2A, MCM6, CDK1, CCNB1, CCNA2, CCNE1, PLK1, AURKA, and BUB1.

Orthogonality Constraint: In one embodiment, orthogonality between shared and private representations is enforced via mean absolute cosine similarity: loss_ortho = |cosine_similarity(z_shared, z_private)|.mean(). In another embodiment, orthogonality is enforced via the squared Frobenius norm of the covariance matrix. In a further embodiment, nonlinear statistical independence is enforced via the Hilbert-Schmidt Independence Criterion (HSIC) with radial basis function (RBF) kernels, capturing higher-order dependencies beyond linear correlation.
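A minimal numpy sketch of the two linear orthogonality penalties (the HSIC variant is omitted); the function names are illustrative:

```python
import numpy as np

def ortho_loss_cosine(z_shared, z_private, eps=1e-8):
    """Mean absolute cosine similarity between paired latent vectors."""
    num = (z_shared * z_private).sum(axis=1)
    den = (np.linalg.norm(z_shared, axis=1)
           * np.linalg.norm(z_private, axis=1) + eps)
    return np.abs(num / den).mean()

def ortho_loss_frobenius(z_shared, z_private):
    """Squared Frobenius norm of the cross-covariance between the
    centered shared and private representations."""
    zs = z_shared - z_shared.mean(axis=0)
    zp = z_private - z_private.mean(axis=0)
    cov = zs.T @ zp / len(zs)
    return float((cov ** 2).sum())

rng = np.random.default_rng(2)
a = rng.normal(size=(64, 16))
print(round(float(ortho_loss_cosine(a, a)), 6))             # fully aligned -> 1.0
print(ortho_loss_cosine(a, -a) == ortho_loss_cosine(a, a))  # |cos| is sign-blind
```

The absolute value matters: anti-aligned representations leak as much information as aligned ones, so both are penalized equally.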

Training Objectives: The network is trained to minimize a composite loss function:

L_total = L_task + λ_recon · L_recon + λ_adv · L_adv + λ_ortho · L_ortho

In specific embodiments: λ_adv = 1.0; λ_ortho = 0.5; λ_recon = 0.1. Gradient Reversal Layer (GRL) schedule: reverse sigmoid function.

  • Reconstruction Loss (L_recon): A decoder reconstructs the original input from the concatenation of z_shared and z_private.
  • Adversarial Loss (L_adv): A Domain Discriminator classifies domain based solely on z_shared. A Gradient Reversal Layer (GRL) reverses the gradient to remove species-specific information.
  • Orthogonality Loss (L_ortho): Enforced via mean absolute cosine similarity or squared Frobenius norm of the covariance between shared and private latent representations.
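The reverse-sigmoid GRL schedule can be sketched as follows. The steepness gamma = 10 is the common DANN default and an assumption here, since the disclosure names only the schedule family:

```python
import numpy as np

def grl_lambda(progress, gamma=10.0):
    """Reverse-sigmoid ramp for the gradient-reversal coefficient.
    `progress` runs from 0 (training start) to 1 (training end);
    gamma = 10 is assumed, not specified in the disclosure."""
    return 2.0 / (1.0 + np.exp(-gamma * np.asarray(progress))) - 1.0

p = np.linspace(0.0, 1.0, 5)
lam = grl_lambda(p)
print(np.round(lam, 3))  # ramps from 0.0 toward ~1.0
```

Ramping the reversal coefficient lets the encoders stabilize on reconstruction before the adversarial pressure reaches full strength.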

[0004] Conditional Imputation Module (PESD)

[0004] To address data sparsity where modalities (e.g., DNA Methylation, CNV) are absent, the system employs Probabilistic Encoder Self-Distillation (PESD).

Architecture
A teacher encoder (full modalities, frozen) trains a student encoder (masked modalities) via KL divergence matching.
Loss Function
L_total = λ_kl · D_KL + λ_recon · L_recon + λ_calib · L_calib
Missingness Embeddings
The system utilizes learned missingness embeddings [MISSING_METH] and [MISSING_CNV] to represent unobserved modalities, distinct from zero-imputation.
Calibration Constraint
The student log-variance is constrained to be ≥ the teacher log-variance when modalities are missing, preventing overconfident hallucination.
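The calibration constraint can be sketched as an elementwise clamp of the student's predictive log-variance; `calibrated_student_logvar` is an illustrative name:

```python
import numpy as np

def calibrated_student_logvar(student_logvar, teacher_logvar, missing_mask):
    """Where a modality is missing, the student's predictive log-variance
    may not fall below the teacher's (calibration constraint); observed
    modalities pass through unchanged."""
    return np.where(missing_mask,
                    np.maximum(student_logvar, teacher_logvar),
                    student_logvar)

student = np.array([-2.0, -1.0, 0.5])   # overconfident in dim 0
teacher = np.array([-1.0, -1.5, 0.0])
missing = np.array([True, True, False])
out = calibrated_student_logvar(student, teacher, missing)
print(out)  # [-1.  -1.   0.5]
```

The clamp only raises variance where data is imputed, so the student remains free to be confident about modalities it actually observed.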

[0005] Physics-Informed Neural ODE (120)

[0005] The latent vectors parameterize a differential equation governing intrinsic tumor growth:

dY/dt = f(Y, t, θ)

where f = Logistic | Gompertz | von Bertalanffy
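Minimal numpy sketches of the three candidate growth laws. The von Bertalanffy form shown is the classical 2/3-power variant, an assumption since the disclosure does not fix the exponents:

```python
import numpy as np

def logistic(V, rho, K):
    """dV/dt = rho * V * (1 - V/K)."""
    return rho * V * (1.0 - V / K)

def gompertz(V, rho, K, eps=1e-12):
    """dV/dt = rho * V * ln(K/V)."""
    return rho * V * np.log(K / np.maximum(V, eps))

def von_bertalanffy(V, a, b):
    """Surface-limited growth minus volumetric loss (classical
    2/3-power form; exponents assumed)."""
    return a * V ** (2.0 / 3.0) - b * V

K = 1e11  # fixed physiological carrying capacity (cells)
print(logistic(1e6, 0.1, K) > 0.0)  # below capacity: growth
print(logistic(K, 0.1, K) == 0.0)   # at capacity: zero net growth
```

All three laws share the saturating behavior that keeps trajectories bounded once the carrying capacity is fixed.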

ODE Input Vector: In one embodiment, the input vector comprises disentangled latent variables: z_prolif, z_pathway, z_ctx, z_res, z_meth, and z_cnv. In a specific embodiment, the total input dimension is 328, comprising:

| Variable  | z_prolif | z_pathway | z_ctx | z_res | z_meth | z_cnv |
|-----------|----------|-----------|-------|-------|--------|-------|
| Dimension | 1        | 144       | 31    | 16    | 48     | 32    |

Latent input dimension breakdown (total: 328)

The z_prolif variable exhibits a variance of approximately 4.95, indicating well-separated proliferative signal in the latent space.

Parameter Generation Network (Hypernetwork):

head_rho (growth rate)
Sigmoid activation scaled to [0, 0.3] day⁻¹
head_beta (drug sensitivity)
Sigmoid activation scaled to [0, 1]
head_omega (immune clearance)
Softplus activation, range (0, +∞)
head_sigma (observation noise)
Sigmoid activation scaled to [0.01, 0.2]

Constraint Enforcement Layer: Rather than detecting constraint violations post-hoc, the system enforces physical constraints a priori via bounded activation functions. Non-negativity is ensured for ρ and ω via Softplus/Sigmoid activations. Carrying capacity is fixed at physiological upper bounds (e.g., 10¹¹ cells). This ensures the coupled ODE remains numerically stable during integration, preventing stiffness and ensuring solver convergence.
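The bounded parameter heads can be sketched as post-activations on raw network outputs; `parameter_heads` is an illustrative name, and the numerically stable softplus is a standard formulation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softplus(x):
    # Numerically stable softplus: log(1 + e^x).
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

def parameter_heads(raw):
    """Map unconstrained head outputs (a hypothetical 4-vector `raw`)
    to physically valid parameters via bounded activations."""
    return {
        "rho":   0.3 * sigmoid(raw[0]),          # growth rate, [0, 0.3] day^-1
        "beta":  sigmoid(raw[1]),                # drug sensitivity, [0, 1]
        "omega": softplus(raw[2]),               # immune clearance, (0, +inf)
        "sigma": 0.01 + 0.19 * sigmoid(raw[3]),  # observation noise, [0.01, 0.2]
    }

p = parameter_heads(np.array([100.0, -100.0, -5.0, 0.0]))  # extreme inputs
print(all([0.0 <= p["rho"] <= 0.3, 0.0 <= p["beta"] <= 1.0,
           p["omega"] > 0.0, 0.01 <= p["sigma"] <= 0.2]))  # True
```

Because the bounds are built into the activations, even extreme pre-activation values cannot produce a physically invalid parameter, so no rejection step is needed.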

[0006] Allometric Scaling

[0006] To correct for metabolic time dilation between species, the system initializes a learnable time-scaling parameter τ based on the quarter-power law:

SCALING_MOUSE_TO_HUMAN ≈ (70.0 / 0.025)^0.25 ≈ 7.27

The learnable parameter is initialized at 1.0 (stored as log_tau = 0.0) and fine-tuned during training on the target dataset to adapt to tumor-specific dynamics beyond mass scaling.
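A quick check of the quarter-power prior, assuming the 70 kg human and 25 g mouse reference masses used above:

```python
import numpy as np

def allometric_scale(m_human_kg=70.0, m_mouse_kg=0.025, exponent=0.25):
    """Quarter-power prior tau ~ (M_human / M_mouse)^(1/4), using the
    reference masses from paragraph [0006]."""
    return (m_human_kg / m_mouse_kg) ** exponent

tau_prior = allometric_scale()
log_tau = 0.0  # learnable parameter, initialized so that tau = exp(0) = 1.0
print(round(tau_prior, 2))  # 7.27
```

The prior value of roughly 7.27 is a starting point only; the learnable log_tau is fine-tuned on the target data to absorb tumor-specific dynamics beyond pure mass scaling.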

[0007] Immune Interaction Engine

[0007] The system accounts for immune-mediated clearance via:

dY/dt = f(Y, θ) − ω · Y

In a preferred embodiment, ω is predicted via head_omega (a neural network head with Softplus activation) to ensure non-negative clearance rates. Alternatively, ω may be estimated via bounded one-dimensional optimization on endpoint residuals.
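The coupled growth-minus-clearance dynamics can be integrated with a fixed-step RK4 sketch, a stand-in for the adaptive dopri5 solver used in the system; the parameter values are illustrative:

```python
import numpy as np

def rhs(V, rho=0.1, K=1e11, omega=0.02):
    """Coupled dynamics: logistic intrinsic growth minus immune
    clearance, dV/dt = rho*V*(1 - V/K) - omega*V. Values illustrative."""
    return rho * V * (1.0 - V / K) - omega * V

def rk4(V0, t_grid):
    """Fixed-step RK4 integrator (stand-in for adaptive dopri5)."""
    V = np.empty_like(t_grid)
    V[0] = V0
    for i in range(len(t_grid) - 1):
        h = t_grid[i + 1] - t_grid[i]
        k1 = rhs(V[i])
        k2 = rhs(V[i] + 0.5 * h * k1)
        k3 = rhs(V[i] + 0.5 * h * k2)
        k4 = rhs(V[i] + h * k3)
        V[i + 1] = V[i] + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return V

t = np.linspace(0.0, 60.0, 121)  # days
traj = rk4(1e6, t)
print(traj[-1] > traj[0])  # net growth while rho*(1 - V/K) > omega
```

When ω exceeds the effective growth rate, the same dynamics produce net tumor shrinkage, which is how the model represents immune-mediated response.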

[0008] Inference and Validation

[0008] During inference, the system generates trajectories using an adaptive-step ODE solver (e.g., dopri5) with relative tolerance 1e-4 and absolute tolerance 1e-6, utilizing the adjoint method for O(1) memory backpropagation.

Uncertainty Quantification:

In one embodiment, the system computes confidence intervals via a BayesianODEWrapper that performs ensemble-based variance propagation. The wrapper samples N parameter sets from the posterior distribution and integrates N parallel trajectories, yielding pointwise mean and variance estimates.

  • Time-dependent uncertainty growth: Confidence decays exponentially from the last observation time, so predictions further from observed data carry wider confidence intervals.
  • Population prior blending: For patients with sparse observational data, the system blends individual parameter estimates with population-level priors drawn from clinical distributions (e.g., median doubling times, growth rate ranges), reducing overfitting to noisy endpoints.
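Ensemble-based variance propagation can be sketched with a closed-form exponential-growth surrogate in place of the full ODE; the parameter distribution and function name are illustrative:

```python
import numpy as np

def ensemble_trajectories(V0, t, rho_mu=0.08, rho_sd=0.01, n=200, seed=0):
    """Sample n growth-rate parameters, integrate n trajectories (here a
    closed-form exponential surrogate for the full ODE), and return the
    pointwise mean and standard deviation."""
    rng = np.random.default_rng(seed)
    rho = rng.normal(rho_mu, rho_sd, size=n)[:, None]  # (n, 1)
    V = V0 * np.exp(rho * t[None, :])                  # (n, len(t))
    return V.mean(axis=0), V.std(axis=0)

t = np.linspace(0.0, 42.0, 7)  # weekly grid, in days
mu, sd = ensemble_trajectories(1e6, t)
print(np.all(np.diff(sd) >= 0.0))  # uncertainty widens with horizon
```

Even without an explicit time-decay term, parameter uncertainty compounds through the dynamics, so the ensemble spread naturally widens away from the anchoring observation.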

Validation Thresholds:

  • Trajectory R² ≥ 0.95
  • Endpoint MAPE ≤ 10%
  • Minimum doubling time ≥ 24 hours
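The validation gate can be sketched directly from the thresholds above; `passes_validation` is an illustrative name:

```python
import numpy as np

def r_squared(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def mape(y_true, y_pred):
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0

def passes_validation(y_true, y_pred, doubling_time_hours):
    """Gate a simulated trajectory on the three thresholds listed above."""
    return (r_squared(y_true, y_pred) >= 0.95
            and mape(y_true, y_pred) <= 10.0
            and doubling_time_hours >= 24.0)

y = np.array([100.0, 120.0, 150.0, 190.0])  # observed burden
print(passes_validation(y, y * 1.01, 96.0))  # True: 1% error, slow doubling
print(passes_validation(y, y * 1.50, 96.0))  # False: 50% error
```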

[0009] Experimental Results

[0009] Experimental Setup: A cohort of Patient-Derived Xenograft (PDX) models and matched human clinical data was utilized to validate the Split-Source architecture. All primary metrics reported below are from version v5.10 (production). Older versions are cited only where they provide negative-transfer evidence.

Proliferation Disentanglement:

  • z_prolif–Ki67 Pearson r: 1.0
  • Leakage Probe R²: 0.0
  • Orthogonality Variance Ratio: 0.907
  • z_prolif Variance: 5.109
  • All 16 RNA private dimensions active

ODE Parameter Prediction (5-fold CV):

  • rho_proxy R²: 0.985 ± 0.001
  • Physics Compliance: 100%

Imputation Quality (PESD):

  • DNA Methylation Correlation: 0.862 (MSE: 0.101)
  • CNV Correlation: 0.967 (MSE: 0.017)

Domain Separation Validation:

  • Shared Encoder Domain Accuracy: 0.584 (near-random = good)
  • Private Encoder Domain Accuracy: 1.0 (perfect species identification)
  • Subtype Accuracy (34 types): 0.800

Allometric Scaling:

  • PDX Doubling Time: 4.5 days
  • TCGA PRAD Doubling Time: 1,508 days
  • Species Ratio: ~335×

Ablation Study Results:

| Configuration          | Global C-index | Stratified C-index | Delta  |
|------------------------|----------------|--------------------|--------|
| Full Model (baseline)  | 0.716          | 0.777              | –      |
| Zero Methylation       | 0.680          | –                  | −0.032 |
| Zero CNV               | 0.574          | –                  | −0.114 |
| Zero Both              | 0.585          | –                  | −0.114 |
| Random Methylation     | 0.650          | –                  | −0.093 |
| Random CNV             | 0.557          | –                  | −0.238 |
| Random Both            | 0.555          | –                  | −0.261 |

Ablation study: all drops statistically significant.

These results demonstrate that the Conditional Imputation Module provides substantial predictive value, with all ablation drops statistically significant.

Negative Transfer Evidence (Prior Versions):

Experiments with prior architecture versions (v5.8.1, v5.9.2) demonstrated the consequences of omitting Split-Source routing:

| Evidence                                  | Values                   |
|-------------------------------------------|--------------------------|
| ρ R² collapse without Split-Source        | 0.899 → 0.537 (−0.362)   |
| Full AdaBN doubling time collapse         | 4.5 → 870 days           |
| Selective AdaBN preserves fidelity        | 4.5 days (correct)       |
| Meth/CNV dimension collapse (v5.9.2)      | variance → 0.0002        |

These results confirm that forcing adversarial alignment on metabolically scaled proliferation features induces negative transfer, validating the inventive step of the Split-Source architecture.

[0010] Specific Embodiment: Training Protocol

[0010] In one specific embodiment, training hyperparameters comprise:

  • Batch Size: 128
  • Max Epochs: 200
  • Learning Rate: 1e-3
  • Weight Decay: 1e-4
  • Gradient Clip: 5.0
  • KL Annealing: 20 epochs

VI. CLAIMS

What is claimed is:

Claim 1. (System)

A computer-implemented system for generating physics-constrained tumor volume trajectories, the system comprising: one or more processors and memory storing instructions that, when executed, cause the processors to:

  1. receive a molecular feature vector from a target patient;
  2. encode the molecular feature vector using a Domain Separation Network comprising a Shared Encoder and a Private Encoder, wherein the Private Encoder receives a proliferation-associated latent variable (z_prolif) and the Shared Encoder receives pathway-associated latent variables;
  3. generate intrinsic growth parameters via a Parameter Generation Network comprising constrained output heads with bounded activation functions (Sigmoid, Softplus) enforcing non-negativity and physiological bounds;
  4. estimate an immune clearance parameter (ω) via a neural network head with Softplus activation;
  5. numerically integrate a coupled differential equation dY/dt = f(Y, θ) − ω · Y using the constrained parameters; and
  6. output a tumor volume trajectory.

Claim 2.

The system of claim 1, wherein the proliferation-associated latent variable (z_prolif) is generated via Variational Autoencoder disentanglement and is routed exclusively to the Private Encoder, bypassing adversarial domain alignment applied to the Shared Encoder.

Claim 3.

The system of claim 1, wherein the Domain Separation Network is trained with an orthogonality loss computed as mean absolute cosine similarity between shared and private latent representations.

Claim 4.

The system of claim 1, wherein the molecular feature vector comprises genes selected from the group consisting of: MKI67, PCNA, TOP2A, MCM6, CDK1, CCNB1, CCNA2, CCNE1, PLK1, AURKA, and BUB1.

Claim 5.

The system of claim 1, wherein the intrinsic growth parameters include: a proliferation rate (ρ) generated via Sigmoid activation scaled to [0, 0.3] day⁻¹; a drug sensitivity (β) generated via Sigmoid activation in [0, 1]; and an observation noise (σ) generated via Sigmoid activation in [0.01, 0.2].

Claim 6.

The system of claim 1, further comprising a Conditional Imputation Module implementing Probabilistic Encoder Self-Distillation (PESD) to impute missing DNA methylation and CNV modalities from RNA-seq data.

Claim 7. (Training Method)

A computer-implemented method of training a cross-species tumor trajectory simulator, comprising:

  1. obtaining a Source Dataset comprising xenograft models with longitudinal measurements and a Target Dataset comprising human clinical data with sparse endpoints;
  2. training a Domain Separation Network with: adversarial loss weight λ_adv = 1.0, orthogonality loss weight λ_ortho = 0.5, reconstruction loss weight λ_recon = 0.1;
  3. routing proliferation-associated signals to a Private Encoder while routing pathway signals to a Shared Encoder subject to adversarial domain confusion; and
  4. storing parameters of the trained network.

Claim 8.

The method of claim 7, wherein training further comprises fine-tuning a learnable allometric time-scaling parameter initialized from a mass-scaling prior (M^0.25) with SCALING_MOUSE_TO_HUMAN ≈ 7.27.

Claim 9. (Inference Method)

A computer-implemented method of generating a patient-specific tumor volume trajectory, comprising:

  1. encoding a target human molecular feature vector using the Shared Encoder produced by the training of claim 7;
  2. generating intrinsic growth parameters via bounded activation functions (Sigmoid, Softplus) enforcing non-negativity and physiological bounds, including a minimum doubling time ≥ 24 hours;
  3. estimating an immune clearance parameter via Softplus activation;
  4. numerically integrating a coupled differential equation using an adaptive-step solver with relative tolerance 1e-4 and absolute tolerance 1e-6; and
  5. outputting the simulated tumor volume trajectory.

Claim 10.

The method of claim 9, wherein the numerical integration utilizes the adjoint method for O(1) memory backpropagation.

Claim 11. (Hybrid Control Arm)

The method of claim 9, wherein the outputting comprises generating a validated simulation dataset for use in a hybrid control arm of a clinical trial.

Claim 12. (OOD Detection)

A computer-implemented method for identifying out-of-distribution oncology patients, comprising:

  1. generating a tumor trajectory using the system of claim 1;
  2. monitoring latent space statistics or reconstruction error to detect distributional shifts; and
  3. flagging the patient as out-of-distribution based on said monitoring, triggering a predefined technical fallback process.

Claim 13. (Computer-Readable Medium)

A non-transitory computer-readable medium storing instructions that, when executed, cause one or more processors to perform the method of claim 9.

Claim 14.

The method of claim 9, further comprising computing a confidence interval for the simulated tumor volume trajectory based on propagating a variance of the immune clearance parameter through the coupled differential equation.

Claim 15.

The method of claim 7, wherein the immune parameter estimator is regularized to penalize deviations from a population-level prior distribution of immune clearance rates.

VII. ABSTRACT OF THE DISCLOSURE

A computer-implemented system generates physics-constrained simulations of human tumor volume trajectories using Split-Source transfer learning. A Domain Separation Network encodes biological profiles into shared and private latent representations, routing proliferation-associated features (MKI67, PCNA, MCM6, etc.) to the private representation via Variational Autoencoder disentanglement to prevent negative transfer. A Probabilistic Encoder Self-Distillation (PESD) module imputes missing modalities. A Neural ODE module predicts intrinsic growth parameters subject to a Constraint Enforcement Layer utilizing bounded activation functions (Sigmoid, Softplus) to enforce non-negativity, bounded growth rates (0–0.3 day⁻¹), and immune clearance, ensuring numerical solver stability and biological plausibility by design.

[End of Application]
