# Dolphin Pipeline — Validation & Extension Plan for AllStrata

**Purpose:** This document is the working plan for building a Dolphin-based InSAR processing pipeline that (a) validates against EGMS at Aylesbury for 2020-2023, then (b) extends our record from 2024-01 to present, producing building-level PS+DS displacement data that currently does not exist for us.

**Status:** planning — not started.

**Target host:** NEW dedicated Linux machine (not WSL2). This simplifies the ISCE2 + Dolphin install stack considerably.

---

## 0. Confirmed Data Parameters (quick reference)

| Field | Value | Source |
|---|---|---|
| Aylesbury centre | 51.8168° N, 0.8084° W | user-confirmed via ASF |
| Descending track | 154 (Sentinel-1 relative orbit) | ASF search |
| Descending bursts | `154_329398_IW1`, `154_329399_IW1` | ASF search |
| Descending polarisation | VV | ASF search |
| Descending date range | 2020-01-01 → 2026-04-18 | ASF search |
| Descending file count | **529** burst SLCs | ASF search |
| Ascending track | 30 (expected) | inference, confirm in Step 1.1 |
| Ascending bursts | **TBD** | run asf_search before Step 1.3 |
| Ascending file count | ~300-500 (estimated) | inference |
| Target AOI bbox | `[51.77, 51.87, -0.86, -0.75]` | ~10 km window |
| Validation window | 2020-01 → 2023-12 | overlap with EGMS Calibrated |
| Extension window | 2024-01 → 2026-04-18 | the gap we need to fill |
| EGMS reference DB (Windows) | `E:\AllStrata\egms\egms_3counties_index.db` | existing |

---

## 1. Context & Goals

### Where we stand today

- **EGMS Calibrated 2019-2023** gives us building-level PS points and covers 100% of our 3-county bbox. Ends 2023-12-31.
- **COMET LiCSAR** (154D + 030A) gives us ~100 m grid time series only in a narrow NW-SE strip (~10% of the bbox); ends 2024-02.
- **Nothing** gives us building-level data after 2023-12.
- For a commercial insurance product, 2.5-year-old data is not viable.

### What Dolphin gets us

Dolphin is NASA JPL's operational phase-linking InSAR library, used for the OPERA-DISP North America product (10 million km², 30 m resolution, 72-hour latency). It processes coregistered Sentinel-1 SLC stacks into PS+DS displacement time series — the same algorithmic family as EGMS's SqueeSAR. With Dolphin we can:

1. Process **any time period** we have SLCs for (2014-present via Copernicus).
2. Process **any area** we define, at 20 m pixel spacing, full PS+DS.
3. Update **monthly** with no external dependency.
4. Produce output that's **algorithmically equivalent to EGMS** (same family, not the identical proprietary SqueeSAR implementation).

### Validation approach

Before committing to the post-2023 extension, we run Dolphin on the **same 2020-2023 window as EGMS at Aylesbury** and compare outputs. If Dolphin agrees with EGMS to within ±1-2 mm/yr on stable PS, we trust the pipeline for 2024-present processing. If not, we tune parameters and iterate. A minimal sketch of the agreement check follows below; the full comparison is Step 1.10.
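To make the agreement criterion concrete, here is a minimal sketch of the summary metrics we will compute in Step 1.10, assuming two NumPy arrays of LOS velocities already paired at matched PS points (the nearest-neighbour matching itself is part of the comparison script, not shown here):

```python
import numpy as np

def validation_metrics(v_dolphin: np.ndarray, v_egms: np.ndarray) -> dict:
    """Summarise agreement between matched Dolphin and EGMS velocities (mm/yr).

    Assumes both arrays are already paired point-for-point. The per-point
    time-series correlation criterion is computed separately in Step 1.10;
    the velocity scatter correlation here is a quick proxy.
    """
    diff = v_dolphin - v_egms
    return {
        "median_diff_mm_yr": float(np.median(diff)),
        "p90_abs_diff_mm_yr": float(np.percentile(np.abs(diff), 90)),
        "velocity_correlation": float(np.corrcoef(v_dolphin, v_egms)[0, 1]),
        "n_points": int(diff.size),
    }

# Acceptance thresholds from Step 1.11:
#   median difference < 1 mm/yr
#   90th percentile |difference| < 3 mm/yr
#   time-series correlation > 0.75 at matched points
```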
### Goals (in order)

1. **Set up** Dolphin + prerequisites on the new Linux box.
2. **Process** 2020-2023 Sentinel-1 SLCs over a ~10 km AOI around Aylesbury.
3. **Validate** Dolphin output against EGMS Calibrated for the same period.
4. **Tune** parameters if needed until validation passes.
5. **Extend** the processing to 2024-01 → present.
6. **Integrate** into `unified_timeseries` so AllStrata reports include recent data.
7. **Scale** later to the full 3-county bbox once Aylesbury works end-to-end.

### Non-goals

- Matching SqueeSAR exactly (Dolphin uses EMI/CPL, not SqueeSAR — expect some difference on edge cases).
- Building our own alternative to EGMS for 2019-2023 (we already have EGMS for that window; Dolphin there is only for validation).
- Processing all of UK (out of scope for MVP).

---

## 2. Architecture Overview

```
Sentinel-1 SLCs        Precise orbits       Copernicus DEM 30 m
(Copernicus/ASF)       (ASF Vertex)         (JASMIN CEDA)
      │                      │                     │
      └──── download via asf_search / sentineleof ─┘
                             │
                             ▼
          ISCE2 topsStack ──► coregistered SLC stack (VRT)
                             │
                             ▼
                   Dolphin config + run
                     │              │
                     ▼              ▼
          PS+DS wrapped        coherence
          phase stack          + temporal coh
                     │
                     ▼
          SNAPHU unwrapping (via Dolphin)
                     │
                     ▼
          Network inversion (Dolphin timeseries)
                     │
                     ▼
          MintPy corrections:
            - GACOS atmosphere
            - DEM residual
            - Reference pixel selection
            - Temporal filtering
                     │
                     ▼
          Displacement time series HDF5
          (per-PS time series, lat/lon, velocity)
                     │
                     ▼
          Calibrate to EGMS at 2023 overlap
          (linear offset + slope fit)
                     │
          ┌──────────┴──────────┐
          ▼                     ▼
  Phase 1 (VALIDATION):   Phase 2 (EXTENSION):
  Compare 2020-2023       2024-01 → present,
  vs EGMS                 calibrated to EGMS final values
                     │
                     ▼
       AllStrata unified_timeseries:
       EGMS 2019-2023 + Dolphin 2024+
```

---

## 3. Prerequisites

### Hardware — new dedicated Linux box

- **OS:** Ubuntu 22.04 LTS (or 24.04) native — **no WSL2 involved**. A clean Linux environment avoids all the WSL2/Windows filesystem-bridge friction that would otherwise add 3-5 days of debugging.
- **CPU:** 8+ cores recommended (any modern workstation CPU)
- **RAM:** 32 GB recommended (16 GB workable if the AOI stays small)
- **GPU (optional but recommended):** CUDA-capable GPU with 8+ GB VRAM gives Dolphin a 5-20× speedup via JAX. NVIDIA RTX 3060 or better is ideal. If no GPU, everything still works on CPU, just slower.
- **Disk:**
  - ~400-500 GB peak for raw SLC bursts (both tracks, full 2020-01 → 2026-04 window, just 2 IW1 bursts per pass; see Step 1.3 for the arithmetic), reclaimable after coregistration
  - ~60 GB for ISCE2 stack + intermediates
  - ~15 GB for Dolphin outputs
  - **Budget ≥500 GB free on the Linux box** (drops to ~100 GB resident after raw-SLC cleanup). SSD preferred for ISCE2's heavy random I/O during coregistration.

### Software (install order, native Linux)

1. **System packages:**

   ```bash
   sudo apt update && sudo apt install -y build-essential git wget curl \
       libhdf5-dev libgdal-dev libfftw3-dev python3-dev
   ```

2. **Miniforge / Mamba** (fast conda package manager):

   ```bash
   curl -L -O "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh"
   bash Miniforge3-*.sh
   source ~/.bashrc
   ```

3. **Create the Dolphin environment:**

   ```bash
   mamba create -n dolphin python=3.11 -y
   mamba activate dolphin
   mamba install -c conda-forge \
       dolphin isce2 mintpy asf_search sentineleof snaphu \
       pyaps3 h5py rasterio geopandas matplotlib jupyter -y
   ```

4. **ISCE2 topsStack scripts** (clone from contrib repo):

   ```bash
   git clone https://github.com/isce-framework/isce2 ~/isce2-src
   export PATH=$PATH:~/isce2-src/contrib/stack/topsStack   # add to ~/.bashrc
   ```

5. **Optional — CUDA support for Dolphin** (only if an NVIDIA GPU is present):

   ```bash
   mamba install -c conda-forge "jaxlib=*=*cuda*" jax -y
   ```

### Accounts (create once, log in from new machine)

- **ASF Earthdata account** — https://urs.earthdata.nasa.gov/ (free, 1 minute)
- **GACOS account** — http://www.gacos.net/ (free, for atmospheric correction; the alternative is PyAPS + ERA5, which auto-downloads)

### Credentials to set up on new machine

- ASF Earthdata login in `~/.netrc`:

  ```
  machine urs.earthdata.nasa.gov
      login <username>
      password <password>
  ```

  `chmod 600 ~/.netrc` — or `asf_search` will refuse it.
- Copernicus DEM access is granted via the same Earthdata login.
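The `.netrc` setup can be sanity-checked with the Python standard library alone before Phase 0 begins. A minimal sketch (stdlib only; the hostname is the Earthdata URS endpoint above):

```python
import os
import stat
from netrc import netrc

NETRC_PATH = os.path.expanduser("~/.netrc")
HOST = "urs.earthdata.nasa.gov"

# Permissions must be 600 or asf_search (and curl/wget) will refuse the file.
mode = stat.S_IMODE(os.stat(NETRC_PATH).st_mode)
assert mode == 0o600, f"~/.netrc permissions are {oct(mode)}, expected 0o600"

# netrc() parses the file; authenticators() returns (login, account, password).
auth = netrc(NETRC_PATH).authenticators(HOST)
assert auth is not None, f"no entry for {HOST} in ~/.netrc"
login, _, password = auth
assert login and password, "login/password missing for the Earthdata entry"
print(f"Earthdata credentials found for user: {login}")
```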
---

## 4. Phase 0 — Setup & Smoke Test (1-2 days on clean Linux)

### Goal

Confirm every tool works end-to-end on a trivially small test case before committing to Aylesbury processing.

### Tasks

1. **Install system dependencies** (see Prerequisites, section 3).
2. **Install Miniforge + create the `dolphin` conda env** (see Prerequisites).
3. **Verify installs:**

   ```bash
   mamba activate dolphin
   dolphin --version                                  # expect: 0.40+ or similar
   python -c "import isce; print(isce.__version__)"
   smallbaselineApp.py -h                             # MintPy
   asf_search --version
   which snaphu                                       # SNAPHU binary on PATH
   ```

4. **Set up Earthdata credentials** in `~/.netrc` (see Prerequisites).
5. **Run Dolphin smoke test** on a trivial dataset:
   - Download 3-5 Sentinel-1 bursts covering any single small area
   - `dolphin config --slc-files data/*.slc`
   - `dolphin run dolphin_config.yaml`
   - Expect: completes without error in <15 min; produces a displacement HDF5
6. **Test ASF download end-to-end:**
   - `asf_search` query for one Sentinel-1 burst over Aylesbury
   - Download and confirm the file is readable with GDAL/h5py

### Exit criteria

- [ ] All tools installed and invocable
- [ ] Smoke test completes on Dolphin's demo data
- [ ] ASF single-burst download works
- [ ] At least 500 GB free on the data disk

### Expected time: 1-2 days on clean Linux

(vs 3-5 days with WSL2 — native Linux avoids most install pain.)

### Failure modes

- ISCE2 install fails from conda-forge → try Dolphin's Docker image (`docker pull ghcr.io/isce-framework/dolphin:latest`)
- Conda solver stuck → use `mamba` (the default here), which is faster
- Earthdata download unauthorized → check `.netrc` has `chmod 600` permissions

---

## 5. Phase 1 — Aylesbury 2020-01 → 2026-04 Validation Run (7-10 days on new Linux box)

### Goal

Produce Dolphin displacement time series for a ~10 km × 10 km AOI around Aylesbury covering **2020-01-01 to 2026-04-18**, and compare the 2020-2023 overlap against EGMS Calibrated. We process the full window in one pass so Phase 2 (extension to present) falls out of the same stack for free.

### AOI definition (confirmed from ASF)

- **Centre:** 51.8168° N, 0.8084° W (Aylesbury town centre)
- **Bbox for analysis:** approximately `[51.77, 51.87, -0.86, -0.75]` (~10-km window around the centre)

### Tracks & bursts to process

**Descending track 154 — confirmed from ASF search:**

- Burst IDs: **`154_329398_IW1`** and **`154_329399_IW1`**
- Polarization: **VV**
- Date range available: 2020-01-01 → 2026-04-18
- Total files available: **529** burst SLCs across both bursts
- Sub-swath: IW1 (covers Aylesbury)

**Ascending track — TODO, to confirm from ASF:**

- Expected relative orbit: **30**
- Need to look up the equivalent IW1 burst IDs covering (51.8168, -0.8084)
- Expected volume: similar order (~300-500 burst files)
- **Action before starting Phase 1:** run the same ASF search on ascending orbit 30 to get the burst IDs.

### Step 1.1 — Discover ascending burst IDs (0.5 day)

Run asf_search with a point query over Aylesbury, constrained to ascending:

```python
import asf_search as asf

AYLESBURY = "POINT(-0.8084 51.8168)"

asc = asf.search(
    platform=[asf.PLATFORM.SENTINEL1],
    dataset=[asf.DATASET.SLC_BURST],
    polarization=['VV'],
    intersectsWith=AYLESBURY,
    flightDirection='ASCENDING',
    relativeOrbit=[30],
    start='2020-01-01',
    end='2026-04-19',
)

# Print unique burst IDs:
burst_ids = sorted({s.properties['burst']['fullBurstID'] for s in asc})
print(burst_ids)
```

Expect 1-2 IW1 burst IDs. Add them to this plan once confirmed.

**Deliverable:** confirmed list of ascending burst IDs + total file count.
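As a head start on Step 1.2, the same search results can be dumped straight to the manifest CSV. A sketch, assuming the usual `asf_search` result properties (`fileName`, `url`, `flightDirection`, `startTime`); verify the exact keys on one result before relying on them:

```python
import csv

def write_manifest(results, path="aylesbury_s1_manifest.csv"):
    """Dump an asf_search result set to the Step 1.2 manifest CSV.

    Property keys below are the usual asf_search names; check them against
    one result (print(r.properties)) before trusting the output.
    """
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["burst_id", "date", "direction", "filename", "url"]
        )
        writer.writeheader()
        for r in results:
            p = r.properties
            writer.writerow({
                "burst_id": p["burst"]["fullBurstID"],
                "date": p["startTime"][:10],        # YYYY-MM-DD prefix of ISO time
                "direction": p["flightDirection"],
                "filename": p["fileName"],
                "url": p["url"],
            })

# e.g. write_manifest(asc) for the ascending results from Step 1.1,
# then append rows from the descending query the same way.
```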
### Step 1.2 — Build manifest of all files to download (0.5 day)

Combine descending (already known) + ascending (just discovered) into one manifest CSV with columns: `burst_id`, `date`, `direction`, `filename`, `url`. The sketch above gives a starting point.

Expected totals:

- Descending 154 (known): 529 files across 2 bursts
- Ascending 030 (to confirm): estimated ~300-500 files across 1-2 bursts
- **Combined: ~800-1000 burst SLC downloads**

**Deliverable:** `aylesbury_s1_manifest.csv`.

### Step 1.3 — Download SLC bursts (2-4 days elapsed, unattended)

ASF distributes **burst-level SLCs** separately — no need to download the full ~5 GB scene. Each IW1 burst is ~500 MB. This is a massive disk saving.

```python
# Python example using asf_search burst-SLC support
import os
import asf_search as asf

DESC_BURSTS = ['154_329398_IW1', '154_329399_IW1']
ASC_BURSTS = [...]  # fill in from Step 1.1

results = asf.search(
    dataset=[asf.DATASET.SLC_BURST],
    burstID=DESC_BURSTS + ASC_BURSTS,
    polarization=['VV'],
    start='2020-01-01',
    end='2026-04-19',
)

# Download with parallel sessions (15 threads recommended).
# Note: '~' is not expanded automatically in Python paths, hence expanduser.
session = asf.ASFSession()
results.download(path=os.path.expanduser('~/dolphin/slc/'),
                 session=session, processes=15)
```

Organize the on-disk layout:

```
~/dolphin/slc/
├── 154D_329398/         # one burst per directory
├── 154D_329399/
├── 030A_<burstid>/      # fill after Step 1.1
└── 030A_<burstid>/
```

Download precise orbit files for every acquisition:

```bash
cd ~/dolphin/
eof --search-path slc/ --save-dir orbits/
```

Download Copernicus DEM 30 m tiles covering the AOI (small, <100 MB):

```bash
# Using sardem or direct from OpenTopography
sardem --bbox -0.87 51.77 -0.75 51.87 --data-source COP --output dem.tif
```

**Expected disk usage:**

- 529 descending burst files × ~500 MB = **~265 GB**
- Ascending burst files (estimated) × ~500 MB = **~150-250 GB**
- Orbit files: negligible (~50 MB total)
- DEM: <100 MB
- **Combined download budget: ~400-500 GB for the full 2020-01 → 2026-04 window**

**This is a lot.** Two mitigations if disk is tight:

1. Only download 2020-2023 for Phase 1 validation (~half the files)
2. Delete old SLCs after ISCE2 coregistration completes (we only need the coregistered stack thereafter; raw SLCs can be re-downloaded if needed).

**Deliverable:** full SLC burst stack on disk, organized by burst, with precise orbits alongside.

### Step 1.4 — Configure ISCE2 topsStack (1 day)

Burst-level processing with `stackSentinel.py` — note the `-W slc` flag, which produces a coregistered SLC stack (what Dolphin wants), not interferograms.

```bash
cd ~/dolphin/stack/154D/
stackSentinel.py \
    -s ~/dolphin/slc/154D_329398/ \
    -d ~/dolphin/dem.tif \
    -a ~/dolphin/aux/ \
    -o ~/dolphin/orbits/ \
    -b '51.77 51.87 -0.87 -0.75' \
    -c 3 \
    -n '1' \
    --azimuth_looks 4 --range_looks 20 \
    -W slc \
    --bursts 154_329398_IW1 154_329399_IW1
```

Repeat for the ascending track with its own burst IDs.

**Deliverable:** `run_files/` scripts ready to execute (per track).

### Step 1.5 — Run topsStack coregistration (1-2 days compute, unattended)

Execute the generated run files sequentially. The last one produces the coregistered stack:

```bash
for script in run_files/run_*; do
    echo "Running $script"
    bash "$script" || { echo "FAILED: $script"; break; }
done
```

Final output: `merged/SLC/<date>/<date>.slc` — one coregistered SLC per acquisition date, all aligned to a common reference date.

**Deliverable:** coregistered SLC stack for each track; spot-check one pair visually to confirm alignment (a numerical spot-check sketch follows below).

**Storage optimization:** after this step succeeds, you can delete the raw burst downloads (`~/dolphin/slc/`) to reclaim ~300-400 GB. Keep only the `merged/SLC/` stacks.
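Beyond the visual check, alignment can be spot-checked numerically by correlating amplitude between the reference and one secondary date over a small window. A sketch using GDAL; the VRT paths, dates, and window offsets are illustrative:

```python
import numpy as np
from osgeo import gdal

# Illustrative date directories; substitute real acquisition dates.
REF = "merged/SLC/20200103/20200103.slc.vrt"
SEC = "merged/SLC/20230105/20230105.slc.vrt"

def amplitude_window(path, xoff=1000, yoff=1000, xsize=512, ysize=512):
    """Read a window of complex SLC data and return its amplitude."""
    band = gdal.Open(path).GetRasterBand(1)
    data = band.ReadAsArray(xoff, yoff, xsize, ysize)
    return np.abs(data)

a, b = amplitude_window(REF), amplitude_window(SEC)
corr = np.corrcoef(a.ravel(), b.ravel())[0, 1]
print(f"amplitude correlation over test window: {corr:.3f}")
# Misaligned stacks show near-zero correlation; a well-coregistered urban
# pair correlates strongly on bright, stable scatterers.
```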
### Step 1.6 — Run Dolphin (3-8 hours compute per track)

```bash
cd ~/dolphin/dolphin_out/154D/
dolphin config --slc-files ../stack/154D/merged/SLC/*/*.slc \
    --amplitude-dispersion-files \
    --output-directory ./output
# Edit dolphin_config.yaml to set sensible Aylesbury defaults — see Appendix C
dolphin run dolphin_config.yaml
```

Typical compute time for a ~500-date stack, 10 km AOI:

- CPU only (16 cores): ~6-8 hours
- GPU (NVIDIA RTX 3060): ~1-2 hours

**Deliverable:** Dolphin output directory containing:

- `linked_phase/` — wrapped phase per epoch
- `unwrapped/` — unwrapped phase (after SNAPHU)
- `timeseries/` — displacement time series NetCDF
- `velocity.tif` — mean LOS velocity raster
- `temporal_coherence.tif` — quality layer

### Step 1.7 — Repeat for 030A track

Same steps, ascending stack.

### Step 1.8 — LOS → vertical decomposition (0.5 day)

Use our existing `decompose_vertical.py` on the two tracks' LOS outputs. Output: vertical displacement time series per pixel.

### Step 1.9 — MintPy corrections (1 day)

Load Dolphin outputs into MintPy using `prep_hyp3.py` or Dolphin's MintPy export (tool exists, check docs):

- Atmospheric correction via GACOS (download GACOS ZTD products for each epoch)
- DEM error correction
- Reference pixel selection (pick a stable area outside Aylesbury town centre)
- Temporal filtering

**Deliverable:** clean time series HDF5 compatible with the existing pipeline.

### Step 1.10 — Extract & compare vs EGMS (1-2 days)

Write a comparison script that:

1. For each Dolphin PS point in the Aylesbury AOI, finds the nearest EGMS PS point
2. Computes the velocity difference (Dolphin velocity − EGMS velocity)
3. Computes the time-series correlation at matched points
4. Plots histograms of velocity differences
5. Plots a scatter of Dolphin velocity vs EGMS velocity
6. Produces summary statistics

**Deliverable:** `aylesbury_validation_report.md` with plots and metrics.

### Step 1.11 — Accept or tune (decision point)

**Acceptance criteria for Phase 1:**

- Median velocity difference (Dolphin − EGMS) < 1 mm/yr
- 90th percentile absolute difference < 3 mm/yr
- Time-series correlation coefficient > 0.75 at matched points
- Dolphin point density on urban Aylesbury ≥ 50% of EGMS density
- No systematic spatial bias (no east-west gradient in differences)

**If all pass:** advance to Phase 2 (extension to 2024+).
**If any fail:** enter parameter tuning (Phase 1b).

### Expected time: 7-10 days active work

(Plus 2-3 days of unattended compute for downloads/coregistration overnight.)

---

## 6. Phase 1b — Parameter Tuning (if Phase 1 fails, 3-7 days)

### Tuning order (address highest-impact first; a config-override sketch follows the list)

1. **Reference pixel** — if all velocities are shifted by a constant, re-select a more stable reference pixel.
2. **Atmospheric correction** — try GACOS vs PyAPS-ERA5 vs no correction; atmosphere is the biggest cause of ±3 mm/yr noise in UK wet-atmosphere conditions.
3. **Coherence threshold** — Dolphin `temporal_coherence_threshold` (default 0.5). Try 0.4 (more points, noisier) or 0.6 (fewer, cleaner).
4. **Phase linking algorithm** — switch from EMI to CPL or CAESAR. These differ in DS estimation; EMI is usually best for urban, but test.
5. **Neighborhood size** — the Appendix C starting point is a 23×11-pixel window (half-window x=11, y=5). Try 15×15 for denser urban (more localized scatterers) or 31×31 for suburban.
6. **Unwrapping** — if the time series has 2π jumps, try different SNAPHU cost functions or the spurt algorithm (Dolphin supports several).
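To keep the one-parameter-at-a-time discipline honest, variants can be generated programmatically from the baseline config. A sketch, assuming PyYAML is available in the env and using the Appendix C key names:

```python
import copy
import yaml  # assumes PyYAML is installed in the dolphin env

def make_variant(overrides, base_config="dolphin_config.yaml"):
    """Write a one-parameter variant of the baseline Dolphin config.

    `overrides` maps dotted key paths to new values,
    e.g. {"phase_linking.algorithm": "CPL"}.
    """
    with open(base_config) as f:
        cfg = yaml.safe_load(f)
    variant = copy.deepcopy(cfg)
    for dotted, value in overrides.items():
        node = variant
        *parents, leaf = dotted.split(".")
        for key in parents:        # walk down to the parent mapping
            node = node[key]
        node[leaf] = value
    name = "_".join(f"{k.split('.')[-1]}-{v}" for k, v in overrides.items())
    out_path = f"dolphin_config__{name}.yaml"
    with open(out_path, "w") as f:
        yaml.safe_dump(variant, f)
    return out_path

# One change per iteration, per the protocol below:
# make_variant({"phase_linking.algorithm": "CPL"})
```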
### Tuning protocol

- Change ONE parameter at a time
- Re-run Dolphin on the same stack (most steps can reuse intermediate outputs)
- Re-run the validation comparison
- Log results in `tuning_log.md` with parameter, output metrics, and a subjective quality assessment

### Exit criteria

Same as Phase 1 acceptance. If tuning doesn't close the gap after 5-7 iterations, escalate (consider Option B paths or commercial data).

---

## 7. Phase 2 — Post-2023 Extension (5-7 days)

### Goal

Process 2024-01-01 to present for the same Aylesbury AOI, using the tuned parameters from Phase 1. (If Phase 1 processed the full 2020-01 → 2026-04 window in one pass, as planned, Steps 2.1-2.5 reduce to subsetting the existing outputs by date; they apply in full only if Phase 1 stopped at 2023.)

### Step 2.1 — Discover + download 2024-present SLCs (2 days)

Same `asf_search` query, dates `2024-01-01` to today. Expect ~100 acquisitions per track.

**Expected disk:** ~50 GB additional.

### Step 2.2 — Decide: extend existing stack or process new stack?

**Option A — Extend existing stack (recommended):** reprocess topsStack with the new dates appended. Dolphin handles incremental stacks well (it is designed for OPERA's near-real-time use case).

**Option B — Process new stack separately:** simpler, but loses the long-term baseline and requires careful stitching.

Go with Option A.

### Step 2.3 — Re-run topsStack with extended date list (1-2 days compute)

Incremental coregistration against the existing reference scene.

### Step 2.4 — Re-run Dolphin on extended stack (6-12 hours compute)

Dolphin has an incremental mode designed for this; check docs.

### Step 2.5 — MintPy corrections on extended stack (0.5 day)

### Step 2.6 — Calibrate to EGMS endpoint (1 day)

Anchor the Dolphin 2024+ record to EGMS's final 2023 values:

- For each property with both EGMS and Dolphin PS, fit a linear offset+slope that aligns Dolphin's 2023 values to EGMS's 2023 final values
- Apply the correction to extend the time series seamlessly from EGMS's 2023 end into Dolphin's 2024+ record

**Deliverable:** continuous vertical displacement time series 2019-2023 (EGMS) + 2024-present (Dolphin, calibrated) per PS point, for the Aylesbury AOI.

### Exit criteria

- [ ] Dolphin 2024-present output exists with PS+DS points
- [ ] Calibrated to EGMS final values, no discontinuity at the 2023/2024 boundary
- [ ] Verified by inspecting the stitched time series for 10 known properties

### Expected time: 5-7 days

---

## 8. Phase 3 — AllStrata Integration (3-5 days)

### Step 3.1 — Write Dolphin output adapter (2 days)

Create `src/dolphin_loader.py` (an interface sketch follows Step 3.4 below):

- Function `load_dolphin_timeseries(lat, lon, radius_m) -> list[dict]` returning PS points within radius, each with time series, velocity, coherence
- Same interface as `egms_loader.query_radius` for drop-in compatibility
- Reads from Dolphin HDF5 outputs

### Step 3.2 — Extend `unified_timeseries.py` (1 day)

- Add a new source tier: EGMS (anchor) + Dolphin (extension)
- Replace the LiCSAR stitching logic with Dolphin stitching
- Calibrate Dolphin to EGMS at the 2023 overlap (reuse the existing linear-offset calibration logic)

### Step 3.3 — Update report template (1 day)

- §06 Data Sources & Cross-Validation ledger: add an "AllStrata Dolphin 2024+" row showing the epochs prepended
- Update methodology §08: describe our Dolphin processing
- Data Quality section: show Dolphin temporal coherence alongside EGMS coherence

### Step 3.4 — Regenerate Aylesbury report (0.5 day)

Validate end-to-end that a loss-adjuster report now shows continuous data from 2019 through the last month of Dolphin output.
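Interface sketch for the Step 3.1 adapter. The HDF5 dataset names (`lat`, `lon`, `velocity`, `coherence`, `dates`, `timeseries`) are assumptions to be checked against the real Dolphin/MintPy output with `h5ls`; the API shape mirrors `egms_loader.query_radius`:

```python
import h5py
import numpy as np

EARTH_RADIUS_M = 6_371_000.0

def load_dolphin_timeseries(lat, lon, radius_m, h5_path="aylesbury_vertical.h5"):
    """Return PS points within radius_m of (lat, lon) as a list of dicts.

    Dataset names below are assumed, not confirmed; inspect the real file
    and adjust. Interface matches egms_loader.query_radius.
    """
    with h5py.File(h5_path, "r") as f:
        lats, lons = f["lat"][:], f["lon"][:]
        # Equirectangular distance is accurate enough at ~10 km AOI scale.
        dlat = np.radians(lats - lat)
        dlon = np.radians(lons - lon) * np.cos(np.radians(lat))
        dist = EARTH_RADIUS_M * np.hypot(dlat, dlon)
        idx = np.flatnonzero(dist <= radius_m)
        dates = [d.decode() for d in f["dates"][:]]   # assumes byte strings
        return [
            {
                "lat": float(lats[i]),
                "lon": float(lons[i]),
                "velocity": float(f["velocity"][i]),
                "coherence": float(f["coherence"][i]),
                "dates": dates,
                "timeseries": f["timeseries"][i, :].tolist(),
            }
            for i in idx
        ]
```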
### Exit criteria

- [ ] Aylesbury report shows building-level data 2019 → last month
- [ ] Time series plot has no artifacts at the 2023/2024 boundary
- [ ] Cross-validation section cites both EGMS and our Dolphin source

---

## 9. Phase 4 — Automation & Monthly Refresh (2-3 days)

### Goal

Once Phase 3 works, automate the monthly refresh so the pipeline keeps itself current without manual intervention.

### Steps

1. **Cron job** on the Linux box (monthly, e.g. 1st of each month):
   - `asf_search` for new SLCs since the last run
   - Download new SLCs + orbits
   - Incremental topsStack coregistration
   - Incremental Dolphin run
   - Incremental MintPy correction
   - Update the Dolphin output HDF5
2. **Monitoring:**
   - Email alert on pipeline failure
   - Output log to `~/dolphin/refresh_log/` (synced back to the Windows host)
   - Health-check endpoint (is the latest data <45 days old? See the sketch after section 11.)
3. **Resilience:**
   - Retry on download failures
   - Graceful handling of missing orbit files (fall back to restituted orbits)

### Exit criteria

- [ ] Monthly cron runs successfully for 3 consecutive months
- [ ] Pipeline self-recovers from at least one common failure mode
- [ ] Data freshness monitoring is in place

---

## 10. Phase 5 — Scale to Full 3 Counties (future, after MVP validation)

Once Aylesbury works and customer validation is in progress, expand the AOI to the full 3-county bbox (`-1.27/0.39/51.37/52.38`).

### What changes

- 4 Sentinel-1 tracks instead of 2 (add 081D descending + 132A ascending for eastern Herts coverage)
- ~6× more SLCs to download (~500 GB)
- ~6× more compute
- Storage requirement goes to ~1-1.5 TB total
- May need a GPU to keep the per-refresh time reasonable

### Prerequisites before starting Phase 5

- Customer validation confirms product-market fit
- At least 3 months of Aylesbury Dolphin output in production
- Budget for disk expansion if needed

---

## 11. Disk & Time Budget Summary (native Linux)

Revised for native Linux on a dedicated box (no WSL2 overhead), and for processing the full 2020-01 → 2026-04 window in one pass (Phase 1 + Phase 2 combined — the extension falls out of the same stack).

| Phase | Wall clock | Compute time (unattended) | Engineering time | Disk added |
|---|---|---|---|---|
| 0 Setup | **1-2 days** | ~2 hours | ~1 day | ~5 GB |
| 1 Download + stack (full 2020-01 → 2026-04) | 4-5 days | ~3 days | ~2 days | **~400-500 GB** peak |
| 1 post-delete of raw SLCs | — | — | — | **down to ~80 GB** |
| 1 Dolphin phase linking (both tracks) | 1-2 days | ~0.5 day (GPU) / 1.5 days (CPU) | 0.5 day | ~15 GB |
| 1 MintPy corrections + validation | 2-3 days | ~4 hours | ~2 days | ~5 GB |
| 1b Tuning (if validation fails) | 3-7 days | ~2 days | ~3 days | ~5 GB |
| 2 Extension (already included above — see note) | 0 | 0 | 0 | 0 |
| 3 AllStrata integration | 3-5 days | 0 | 3-5 days | negligible |
| 4 Automation | 2-3 days | 0 | 2-3 days | negligible |
| **Total (best case, no tuning)** | **~15-20 days** | **~4 days** | **~10 days** | **~100 GB resident** |
| **Total (with tuning)** | **~20-27 days** | **~6 days** | **~13 days** | **~105 GB resident** |

Note on Phase 2: by processing 2020-01 → 2026-04-18 in one topsStack build, the "2024-present extension" is free. We just subset outputs by date when validating against EGMS vs extending beyond it.

**Financial cost: £0** — all open-source tools, our own hardware.

**Hidden cost: engineering attention** — ~3-4 weeks of focused work on the Linux box, plus ~1 week to integrate back into AllStrata on the Windows host.

**Peak disk during download: ~400-500 GB.** After coregistration we delete the raw SLC downloads and drop to ~80-100 GB resident.
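The Phase 4 freshness check referenced above needs nothing beyond the standard library. A sketch, assuming the refresh job writes a `latest_date.txt` sidecar (a hypothetical file holding one ISO date) at the end of each run:

```python
import datetime as dt
import pathlib
import sys

# Hypothetical sidecar written at the end of each monthly refresh run,
# containing the newest acquisition date as YYYY-MM-DD.
SIDECAR = pathlib.Path.home() / "dolphin" / "refresh_log" / "latest_date.txt"
MAX_AGE_DAYS = 45

latest = dt.date.fromisoformat(SIDECAR.read_text().strip())
age = (dt.date.today() - latest).days
print(f"latest acquisition {latest}, {age} days old")
sys.exit(0 if age <= MAX_AGE_DAYS else 1)   # nonzero exit → alert from cron
```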
---

## 12. Success Criteria (Overall)

The project succeeds if at the end we can regenerate an Aylesbury report that:

- Shows building-level PS measurements continuously from 2019 to last month
- Has EGMS and Dolphin records agreeing to <2 mm/yr at the 2023 overlap
- Updates automatically each month
- Passes a loss-adjuster review: "I can see this specific property has been moving since [date], measured monthly"

If that works for Aylesbury, we scale to 3 counties (Phase 5). If it doesn't, we reconsider commercial options (SatSense wholesale).

---

## 13. Failure Modes & Fallbacks

| Failure | Fallback |
|---|---|
| ISCE2 install is too painful | Use the Dolphin Docker image (ghcr.io/isce-framework/dolphin) |
| ASF SLC downloads are slow | Use the NASA ASF S3 bucket directly if accessible |
| Coregistration fails on some scenes | Drop those epochs; Dolphin tolerates missing dates |
| Phase linking crashes on memory | Reduce block size (`block_shape` parameter); try a smaller AOI |
| Validation fails even after tuning | Re-evaluate: do we need SqueeSAR-exact quality, or is 2-3 mm/yr close enough? |
| Monthly refresh breaks on a Sentinel-1 mission change | Pause automation, handle manually, patch the pipeline |

---

## 14. Open Questions (to resolve before starting)

1. **Linux box specs:** how many CPU cores? How much RAM? SSD or HDD? GPU present? (These determine the Phase 1 compute estimates.)
2. **Linux distro:** Ubuntu 22.04 LTS is the default recommendation. If the user prefers 24.04 or another distro, confirm the Dolphin install path works.
3. **Ascending burst IDs:** need to run `asf_search` on ascending orbit 30 over Aylesbury to confirm the specific burst IDs (Step 1.1). Do this before the Step 1.3 download.
4. **Atmospheric correction:** GACOS (free, account required, more accurate) vs PyAPS+ERA5 (automatic, no account, slightly noisier). Pick one and stick with it for the 2020-2026 window.
5. **Reference pixel selection:** where in the Aylesbury AOI is truly stable? Use EGMS to identify a high-coherence, low-velocity location (e.g., church spire, solid Victorian masonry) and lock Dolphin to that. (A selection sketch follows this list.)
6. **Incremental vs full reprocessing:** for the monthly refresh after production launch, Dolphin's incremental mode (ministack chaining) is designed for exactly this. Build the initial full 2020-01 → 2026-04 stack once, then append monthly.
7. **Unwrapping algorithm:** Dolphin supports SNAPHU, spurt, and tophu. SNAPHU is the default; spurt is the newer 3D method that handles phase jumps better in long time series. Try SNAPHU first; fall back to spurt if we see unwrapping errors.
8. **Sync strategy:** how does Dolphin output on the Linux box get back to the AllStrata Windows host? rsync over SSH, SMB share, S3, or NFS — the user knows the new box's network context best.
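For open question 5, EGMS itself can nominate the reference pixel: query the existing index DB for the most coherent, slowest-moving PS inside the AOI. A sketch; the table and column names (`ps_points`, `mean_velocity`, `coherence`) are hypothetical and must be checked against the actual schema of `egms_3counties_index.db` first:

```python
import sqlite3

DB = r"E:\AllStrata\egms\egms_3counties_index.db"   # existing EGMS index (Windows host)
BBOX = (51.77, 51.87, -0.86, -0.75)                  # Aylesbury AOI (S, N, W, E)

# Table/column names below are hypothetical; inspect the schema first:
#   sqlite3 egms_3counties_index.db ".schema"
query = """
SELECT lat, lon, mean_velocity, coherence
FROM ps_points
WHERE lat BETWEEN ? AND ? AND lon BETWEEN ? AND ?
  AND coherence > 0.9
ORDER BY ABS(mean_velocity) ASC
LIMIT 5
"""
with sqlite3.connect(DB) as conn:
    for row in conn.execute(query, (BBOX[0], BBOX[1], BBOX[2], BBOX[3])):
        print(row)   # candidate stable reference points to lock Dolphin to
```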
---

## 15. Pre-Flight Checklist

Before kicking off Phase 0:

- [ ] New Linux box provisioned with Ubuntu 22.04 LTS
- [ ] SSH access configured from the AllStrata Windows host
- [ ] Data disk mounted with ≥500 GB free (peak download)
- [ ] GPU (optional) — confirm NVIDIA driver + CUDA version
- [ ] Created ASF Earthdata account; tested login on the new box
- [ ] Decided on atmospheric correction source (GACOS vs ERA5)
- [ ] Registered for GACOS if chosen
- [ ] Confirmed ascending burst IDs via asf_search (Step 1.1)
- [ ] Allocated the 3-4 weeks of focused engineering time
- [ ] Reviewed this plan with user — agreed on exit criteria

---

## Appendix A — Key File Paths (Linux host)

All paths are on the new Linux box. Final results sync back to the Windows AllStrata host via scp/rsync/S3 — or we mount an NFS/SMB share.

```
~/dolphin/                       # root of all Dolphin work
├── slc/                         # raw burst SLC downloads
│   ├── 154D_329398/             # one dir per burst ID
│   ├── 154D_329399/
│   ├── 030A_<burstid>/          # fill after Step 1.1
│   └── 030A_<burstid>/
├── orbits/                      # precise orbit files (EOF)
├── dem.tif                      # Copernicus DEM 30 m for AOI
├── aux/                         # Sentinel-1 aux cal files
├── gacos/                       # GACOS ZTD products (if used)
├── stack/                       # ISCE2 topsStack outputs
│   ├── 154D/
│   │   ├── run_files/
│   │   └── merged/SLC/<date>/
│   └── 030A/
├── dolphin_out/                 # Dolphin phase-linked outputs
│   ├── 154D/output/
│   └── 030A/output/
├── mintpy_out/                  # MintPy-corrected time series
│   ├── 154D.h5
│   └── 030A.h5
├── vertical/                    # asc+desc decomposed vertical
│   └── aylesbury_vertical.h5
├── calibrated/                  # post-EGMS-calibration (prod)
└── refresh_log/                 # monthly refresh logs

# On the Windows AllStrata host (synced from Linux):
C:\Users\Administrator\Documents\AllStrata\src\
├── dolphin_loader.py            # new: adapter to read Dolphin HDF5
├── unified_timeseries.py        # updated: EGMS → Dolphin stitching
└── ...existing modules unchanged

# Data landing on Windows:
E:\AllStrata\dolphin_sync\       # rsync target from Linux box
└── aylesbury_vertical.h5
```

### Sync strategy options

1. **rsync over SSH:** simplest; run nightly or on-demand
2. **SMB share:** mount a Windows share on Linux, write direct
3. **S3/minio:** if the Linux box is cloud-hosted, write to S3; AllStrata reads
4. **NFS:** if both hosts are on the same LAN

Pick whatever's easiest for the new Linux box's networking.

---

## Appendix B — Key Python Packages (conda-forge)

```yaml
name: dolphin
channels: [conda-forge]
dependencies:
  - python=3.11
  - dolphin
  - isce2
  - mintpy
  - asf_search
  - sentineleof
  - snaphu
  - pyaps3
  - h5py
  - numpy
  - scipy
  - rasterio
  - geopandas
  - matplotlib
  - jupyter
```

---

## Appendix C — First-Iteration Parameters

Starting Dolphin config for the Aylesbury urban AOI:

```yaml
input_options:
  subdataset: /science/SENTINEL1/CSLC/grids/VV
phase_linking:
  ministack_size: 15
  max_num_compressed: 5
  half_window: {x: 11, y: 5}   # 23×11 pixel window (2x+1 × 2y+1)
  algorithm: EMI
  use_evd: false
unwrap_options:
  unwrap_method: snaphu
  n_parallel_tiles: 4
timeseries_options:
  reference_point: auto   # will pick a stable PS automatically
output_options:
  output_pixel_size: 20   # 20 m output resolution
```

Adjust during Phase 1b tuning if validation fails.

---

*End of plan. Last updated: 2026-04-18.*