| New file |
| | |
| | | # WMS ASN History Log Partitioning |
| | | |
| | | ## What This Is |
| | | |
| | | This project context tracks a brownfield enhancement for the existing WMS monorepo. The immediate goal is to partition ASN history log data so `man_asn_order_log` and `man_asn_order_item_log` are split into physical tables every half year without breaking current archive, query, export, or continue-receipt behavior. |
| | | |
| | | ## Core Value |
| | | |
| | | ASN historical logs must keep working end-to-end while storage is partitioned by half year for maintainability and query safety. |
| | | |
| | | ## Requirements |
| | | |
| | | ### Validated |
| | | |
| | | - ✓ Existing WMS backend and frontend are already in production use for ASN order creation, archive, query, export, and recovery flows. |
| | | |
| | | ### Active |
| | | |
| | | - [ ] Split ASN history main and item logs into semi-annual physical tables. |
| | | - [ ] Preserve current APIs and business behavior for log archive, query, export, and continue receipt. |
| | | - [ ] Deliver migration-ready SQL and code changes that fit the current Spring Boot + MyBatis-Plus brownfield architecture. |
| | | |
| | | ### Out of Scope |
| | | |
| | | - Partitioning active ASN order tables such as `wk_order` and `wk_order_item` — this request only targets history logs. |
| | | - Introducing heavy sharding middleware across the whole system — the current requirement favors a lightweight, local change. |
| | | - Reworking frontend interaction patterns — existing pages and permissions should stay compatible. |
| | | |
| | | ## Context |
| | | |
| | | - Repository: `wms-master` |
| | | - Backend module: `rsf-server` |
| | | - Frontend module: `rsf-design` |
| | | - Data stack: MySQL 5.7, Redis, MyBatis-Plus |
| | | - Current archive flow copies ASN orders into `man_asn_order_log` and `man_asn_order_item_log` when orders are completed or closed. |
| | | - Current history flows include pagination, list, get, export, and continue receipt from archived logs. |
| | | |
| | | ## Constraints |
| | | |
| | | - **Compatibility**: Existing endpoints and business flows must continue to work — this is a brownfield production enhancement. |
| | | - **Database**: Target database is MySQL 5.7 — design must avoid unsupported higher-version-only features. |
| | | - **Architecture**: Prefer lightweight routing inside current MyBatis-Plus infrastructure — avoid system-wide framework churn. |
| | | - **Safety**: Existing historical data must remain queryable after migration — no silent data loss or partition drift. |
| | | |
| | | ## Key Decisions |
| | | |
| | | | Decision | Rationale | Outcome | |
| | | |----------|-----------|---------| |
| | | | Use semi-annual partitioning for ASN history logs | Matches the requirement and keeps table growth bounded | Pending | |
| | | | Prefer `create_time` as the partition dimension | Existing generic pagination/filtering already understands `timeStart/timeEnd` against `create_time` | Pending | |
| | | | Keep this change scoped to ASN history logs | Reduces blast radius in a brownfield system | ✓ Good | |
| | | |
| | | --- |
| | | *Last updated: 2026-04-22 after initializing GSD context for ASN history log partitioning* |
| New file |
| | |
| | | # Requirements: WMS ASN History Log Partitioning |
| | | |
| | | **Defined:** 2026-04-22 |
| | | **Core Value:** ASN historical logs must keep working end-to-end while storage is partitioned by half year for maintainability and query safety. |
| | | |
| | | ## v1 Requirements |
| | | |
| | | ### Log Partitioning |
| | | |
| | | - [ ] **LOGP-01**: New ASN history main log writes land in the correct half-year physical table based on `create_time`. |
| | | - [ ] **LOGP-02**: New ASN history item log writes land in the same half-year partition as the parent main log. |
| | | - [ ] **LOGP-03**: New history log records use globally unique IDs so cross-partition lookup does not collide. |
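The routing rule behind LOGP-01 and LOGP-02 can be sketched as a small helper. This is a sketch under assumptions: the `YYYYh1`/`YYYYh2` suffix format and the class name are illustrative, not the repo's settled convention.

```java
import java.time.LocalDateTime;

// Illustrative sketch: derive a half-year table suffix from create_time.
// The suffix format ("2026h1", "2026h2") is an assumption, not final.
class HalfYearSuffix {
    static String suffixFor(LocalDateTime createTime) {
        // January-June maps to h1, July-December to h2
        int half = createTime.getMonthValue() <= 6 ? 1 : 2;
        return createTime.getYear() + "h" + half;
    }

    // Both man_asn_order_log and man_asn_order_item_log share this rule, so an
    // item log routed with its parent's create_time lands in the same half year.
    static String physicalTable(String logicalTable, LocalDateTime createTime) {
        return logicalTable + "_" + suffixFor(createTime);
    }
}
```

For example, archiving an order on 2026-04-22 would write to `man_asn_order_log_2026h1` under this convention.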
| | | |
| | | ### Query Compatibility |
| | | |
| | | - [ ] **LOGP-04**: Existing history main log page/list/get/query/export behavior remains available after partitioning. |
| | | - [ ] **LOGP-05**: Existing history item log page/list/get/query/export behavior remains available after partitioning. |
| | | - [ ] **LOGP-06**: Business logic that checks archived ASN logs by code or ID can still find records across partitions. |
| | | - [ ] **LOGP-07**: Continue-receipt logic can restore orders and items from partitioned history tables without behavioral regression. |
| | | |
| | | ### Migration And Operations |
| | | |
| | | - [ ] **LOGP-08**: SQL scripts create required semi-annual physical tables for both log entities. |
| | | - [ ] **LOGP-09**: SQL scripts provide a safe migration path for existing `man_asn_order_log` and `man_asn_order_item_log` data. |
| | | - [ ] **LOGP-10**: Implementation documents clear table naming and partition-routing rules for future half-year rollover. |
| | | |
| | | ## v2 Requirements |
| | | |
| | | ### Automation |
| | | |
| | | - **AUTO-01**: Automatically provision future half-year partitions ahead of time. |
| | | - **AUTO-02**: Provide a generic reusable partition framework for other large history tables. |
| | | |
| | | ## Out of Scope |
| | | |
| | | | Feature | Reason | |
| | | |---------|--------| |
| | | | Partition active ASN order tables | Not requested; would expand risk significantly | |
| | | | Full-system sharding middleware rollout | Too heavy for the current targeted enhancement | |
| | | | Frontend redesign for log pages | Existing UI can stay compatible with backend changes | |
| | | |
| | | ## Traceability |
| | | |
| | | | Requirement | Phase | Status | |
| | | |-------------|-------|--------| |
| | | | LOGP-01 | Phase 1 | Pending | |
| | | | LOGP-02 | Phase 1 | Pending | |
| | | | LOGP-03 | Phase 1 | Pending | |
| | | | LOGP-04 | Phase 1 | Pending | |
| | | | LOGP-05 | Phase 1 | Pending | |
| | | | LOGP-06 | Phase 1 | Pending | |
| | | | LOGP-07 | Phase 1 | Pending | |
| | | | LOGP-08 | Phase 1 | Pending | |
| | | | LOGP-09 | Phase 1 | Pending | |
| | | | LOGP-10 | Phase 1 | Pending | |
| | | |
| | | **Coverage:** |
| | | - v1 requirements: 10 total |
| | | - Mapped to phases: 10 |
| | | - Unmapped: 0 |
| | | |
| | | --- |
| | | *Requirements defined: 2026-04-22* |
| | | *Last updated: 2026-04-22 after initial phase setup* |
| New file |
| | |
| | | # Roadmap |
| | | |
| | | ## Current Milestone: v1.0 ASN History Log Semi-Annual Partitioning |
| | | |
| | | ### Milestone Goal |
| | | |
| | | Implement semi-annual partitioning for ASN history logs in the existing WMS backend without breaking current business flows. |
| | | |
| | | ### Phase 1: ASN History Log Semi-Annual Partitioning |
| | | |
| | | **Goal:** Implement semi-annual partitioning for `man_asn_order_log` and `man_asn_order_item_log`, including write routing, cross-partition reads, continue-receipt compatibility, and migration SQL. |
| | | **Requirements**: LOGP-01, LOGP-02, LOGP-03, LOGP-04, LOGP-05, LOGP-06, LOGP-07, LOGP-08, LOGP-09, LOGP-10 |
| | | **Depends on:** None |
| | | **Plans:** 1 plan |
| | | |
| | | Plans: |
| | | - [x] 01-01 (ASN history log semi-annual partitioning implementation) |
| | | |
| | | --- |
| | | *Last updated: 2026-04-22* |
| New file |
| | |
| | | --- |
| | | gsd_state_version: 1.0 |
| | | milestone: v1.0 |
| | | milestone_name: ASN History Log Semi-Annual Partitioning |
| | | status: planning |
| | | --- |
| | | |
| | | # Project State |
| | | |
| | | ## Project Reference |
| | | |
| | | See: .planning/PROJECT.md (updated 2026-04-22) |
| | | |
| | | **Core value:** ASN historical logs must keep working end-to-end while storage is partitioned by half year for maintainability and query safety. |
| | | **Current focus:** Phase 1 implementation completed; pending database rollout and migration execution |
| | | |
| | | ## Current Position |
| | | |
| | | Phase: 1 (ASN History Log Semi-Annual Partitioning) |
| | | Plan: 1 of 1 |
| | | Status: Phase 1 code complete, awaiting rollout verification |
| | | |
| | | ## Session Continuity |
| | | |
| | | Last session: 2026-04-22T00:00:00.000Z |
| | | Stopped at: Phase 1 implementation compiled successfully |
| | | |
| | | ## Accumulated Context |
| | | |
| | | ### Roadmap Evolution |
| | | |
| | | - 2026-04-22: Initialized `.planning` for ASN history log semi-annual partitioning. |
| | | - 2026-04-22: Added Phase 1 for ASN history log semi-annual partitioning. |
| | | - 2026-04-22: Completed Phase 1 code changes and added migration SQL for semi-annual ASN history log partitioning. |
| New file |
| | |
| | | { |
| | | "mode": "yolo", |
| | | "parallelization": true, |
| | | "workflow": { |
| | | "research": true, |
| | | "plan_checker": true, |
| | | "verifier": true, |
| | | "auto_advance": true, |
| | | "skip_discuss": false, |
| | | "nyquist_validation": true |
| | | }, |
| | | "granularity": "coarse" |
| | | } |
| New file |
| | |
| | | --- |
| | | phase: 01-asn |
| | | plan: 01 |
| | | wave: 1 |
| | | autonomous: true |
| | | files_modified: |
| | | - rsf-server/src/main/java/com/vincent/rsf/server/common/config/MybatisPlusConfig.java |
| | | - rsf-server/src/main/java/com/vincent/rsf/server/manager/entity/AsnOrderLog.java |
| | | - rsf-server/src/main/java/com/vincent/rsf/server/manager/entity/AsnOrderItemLog.java |
| | | - rsf-server/src/main/java/com/vincent/rsf/server/manager/service/AsnOrderLogService.java |
| | | - rsf-server/src/main/java/com/vincent/rsf/server/manager/service/AsnOrderItemLogService.java |
| | | - rsf-server/src/main/java/com/vincent/rsf/server/manager/service/impl/AsnOrderLogServiceImpl.java |
| | | - rsf-server/src/main/java/com/vincent/rsf/server/manager/service/impl/AsnOrderItemLogServiceImpl.java |
| | | - rsf-server/src/main/java/com/vincent/rsf/server/manager/service/impl/AsnOrderServiceImpl.java |
| | | - rsf-server/src/main/java/com/vincent/rsf/server/manager/controller/AsnOrderLogController.java |
| | | - rsf-server/src/main/java/com/vincent/rsf/server/manager/controller/AsnOrderItemLogController.java |
| | | - rsf-server/src/main/java/com/vincent/rsf/server/api/service/impl/ReceiveMsgServiceImpl.java |
| | | - rsf-server/src/main/resources/sql/20260422_asn_order_log_semi_annual_partition.sql |
| | | --- |
| | | |
| | | <objective> |
| | | Implement semi-annual partitioning for ASN history log tables in the current WMS codebase with brownfield-compatible routing, migration SQL, and no functional regression in archive, query, export, or continue-receipt flows. |
| | | </objective> |
| | | |
| | | <tasks> |
| | | <task> |
| | | ## Task 1: Add partition routing infrastructure |
| | | |
| | | - Add a semi-annual partition helper that derives table suffixes from `create_time`. |
| | | - Add a scoped routing context for single-table operations on ASN history logs. |
| | | - Extend MyBatis-Plus interceptor configuration with dynamic table-name routing for the two logical log tables. |
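A minimal sketch of the scoped routing context described above, assuming the MyBatis-Plus `DynamicTableNameInnerInterceptor` delegates table-name resolution to a handler like `resolve`. Class and method names here are illustrative, not the repo's.

```java
// Illustrative sketch of a thread-scoped routing context for single-table
// operations on the ASN history log tables.
class LogTableRouting {
    private static final ThreadLocal<String> SUFFIX = new ThreadLocal<>();

    // Run a single-table operation with a fixed half-year suffix.
    static void runWithSuffix(String suffix, Runnable op) {
        SUFFIX.set(suffix);
        try {
            op.run();
        } finally {
            SUFFIX.remove(); // always clear so pooled threads do not leak routing state
        }
    }

    // Called by the table-name handler registered on the interceptor:
    // falls back to the logical table when no suffix is in scope.
    static String resolve(String logicalTable) {
        String suffix = SUFFIX.get();
        return suffix == null ? logicalTable : logicalTable + "_" + suffix;
    }
}
```

The `finally` cleanup matters in a Spring Boot thread pool: a leaked suffix would silently misroute the next request on the same thread.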
| | | </task> |
| | | |
| | | <task> |
| | | ## Task 2: Adapt write paths and point lookups |
| | | |
| | | - Update ASN history log entities and service logic so new writes use globally unique IDs. |
| | | - Route archive writes for main log and item log to the correct half-year physical tables. |
| | | - Update continue-receipt and archived-code lookup flows to search across partitions safely. |
| | | </task> |
| | | |
| | | <task> |
| | | ## Task 3: Adapt controller-facing read paths |
| | | |
| | | - Replace direct generic controller usage where needed with service methods that can query across candidate partitions. |
| | | - Preserve pagination, get, many, query, list, and export behavior for both history controllers. |
| | | - Use existing `timeStart` / `timeEnd` filters to reduce partition fan-out when available. |
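The fan-out reduction in the last bullet can be sketched as enumerating only the half-year suffixes a `timeStart`/`timeEnd` range touches. The `YYYYh1`/`YYYYh2` suffix format is an assumption carried over from the partition naming rule, not a settled convention.

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: enumerate half-year suffixes covered by a time range so
// a query only touches the partitions that can contain matching rows.
class PartitionRange {
    static List<String> candidateSuffixes(LocalDate timeStart, LocalDate timeEnd) {
        List<String> suffixes = new ArrayList<>();
        int year = timeStart.getYear();
        int half = timeStart.getMonthValue() <= 6 ? 1 : 2;
        int endYear = timeEnd.getYear();
        int endHalf = timeEnd.getMonthValue() <= 6 ? 1 : 2;
        // Walk half-year by half-year from the start of the range to its end.
        while (year < endYear || (year == endYear && half <= endHalf)) {
            suffixes.add(year + "h" + half);
            if (half == 1) { half = 2; } else { half = 1; year++; }
        }
        return suffixes;
    }
}
```

A request filtered to 2025-10-01 through 2026-04-22 would then query only the `_2025h2` and `_2026h1` physical tables instead of every known partition.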
| | | </task> |
| | | |
| | | <task> |
| | | ## Task 4: Deliver SQL and verify |
| | | |
| | | - Add SQL script for physical table creation and existing data migration. |
| | | - Run targeted compile or test verification for touched backend modules. |
| | | - Sanity-check GSD files and summarize residual migration risks. |
| | | </task> |
| | | </tasks> |
| | | |
| | | <verification> |
| | | ## Verification Targets |
| | | |
| | | - Archive flow writes both log tables to matching half-year physical tables. |
| | | - Continue-receipt can restore a partitioned log by ID. |
| | | - Archived order lookup by code still returns completed status. |
| | | - Main/item log endpoints still return expected data under partitioned storage. |
| | | - SQL script documents creation and migration steps clearly. |
| | | </verification> |
| New file |
| | |
| | | # Phase 1 Plan 01 Summary |
| | | |
| | | **Completed:** 2026-04-22 |
| | | **Plan:** `01-01` |
| | | **Phase:** `01-asn` |
| | | |
| | | ## Outcome |
| | | |
| | | Implemented semi-annual partitioning support for ASN history logs in the backend codebase and delivered the accompanying SQL migration script. |
| | | |
| | | ## Delivered |
| | | |
| | | - Added dynamic table routing support for `man_asn_order_log` and `man_asn_order_item_log`. |
| | | - Added partition-aware read/write behavior in ASN history log services. |
| | | - Switched new ASN history log IDs to application-generated global IDs. |
| | | - Updated archive flow so item-log batch writes can be forced into the same half-year partition as the parent log. |
| | | - Kept archived-order status lookup compatible across partitions. |
| | | - Added `20260422_asn_order_log_semi_annual_partition.sql` for table creation and legacy data migration. |
| | | |
| | | ## Verification |
| | | |
| | | - `mvn -pl rsf-server -am -DskipTests compile` passed on 2026-04-22. |
| | | - `gsd-sdk query validate.health` returned healthy on 2026-04-22. |
| | | |
| | | ## Residual Rollout Notes |
| | | |
| | | - Target physical half-year tables must exist in MySQL before new writes occur. |
| | | - Database migration must be executed in a maintenance window to avoid duplicate legacy reads during cutover. |
| | | - Runtime verification should cover archive, history query, export, and continue-receipt flows against real partition tables. |
| New file |
| | |
| | | # Phase 1: ASN History Log Semi-Annual Partitioning - Context |
| | | |
| | | **Gathered:** 2026-04-22 |
| | | **Status:** Ready for implementation |
| | | **Source:** Manual GSD fallback after `gsd-sdk init` provider login failure |
| | | |
| | | <domain> |
| | | ## Phase Boundary |
| | | |
| | | This phase only covers the ASN history log tables: |
| | | |
| | | - `man_asn_order_log` |
| | | - `man_asn_order_item_log` |
| | | |
| | | It includes write routing, read compatibility, continue-receipt compatibility, and SQL migration support for these two tables only. |
| | | |
| | | </domain> |
| | | |
| | | <decisions> |
| | | ## Implementation Decisions |
| | | |
| | | ### Partitioning Rule |
| | | |
| | | - Use semi-annual physical tables for both history entities. |
| | | - Partition key is `create_time`. |
| | | - Main log and item log must use the same half-year suffix for a given archive operation. |
| | | |
| | | ### Brownfield Compatibility |
| | | |
| | | - Preserve existing controller endpoints and business semantics. |
| | | - Prefer lightweight MyBatis-Plus-based routing over introducing heavy sharding middleware. |
| | | - Continue-receipt must be able to find and restore archived data from partitioned tables. |
| | | |
| | | ### Query Compatibility |
| | | |
| | | - Time-aware requests should narrow candidate partitions by `timeStart` / `timeEnd` mapped to `create_time`. |
| | | - Existing no-time requests must still work by searching across known partitions instead of silently only reading the current half-year table. |
| | | - Business lookup by `code` must still work across partitions. |
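The no-time lookup rule above can be sketched as probing each known partition in turn instead of silently reading only the current table. `TableQuery` stands in for the real per-table MyBatis-Plus mapper call; it and the class name are illustrative assumptions.

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Illustrative sketch: find an archived log row by business code when no time
// filter is available, by probing every known physical partition.
class CrossPartitionLookup {
    interface TableQuery {
        // Placeholder for a real per-table query; returns null when no row matches.
        Map<String, Object> queryByCode(String physicalTable, String code);
    }

    static Optional<Map<String, Object>> findByCode(
            String logicalTable, List<String> knownSuffixes, String code, TableQuery query) {
        for (String suffix : knownSuffixes) {
            Map<String, Object> row = query.queryByCode(logicalTable + "_" + suffix, code);
            if (row != null) {
                return Optional.of(row); // first hit wins; codes are unique per archive
            }
        }
        return Optional.empty();
    }
}
```

Probing newest partitions first would usually shorten the search, since continue-receipt tends to target recently archived orders.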
| | | |
| | | ### ID Strategy |
| | | |
| | | - New log rows should use globally unique IDs to avoid cross-partition collisions for future data. |
| | | - Existing migrated records may retain legacy IDs; cross-partition reads must not assume a single physical table. |
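One way to satisfy the global-ID requirement, assuming MyBatis-Plus ID assignment is acceptable here, is to switch the entity from database auto-increment to application-assigned snowflake IDs. This is a sketch, not the repo's confirmed entity definition; only the `@TableId` strategy changes.

```java
import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import java.time.LocalDateTime;

// Sketch (assumption): application-assigned snowflake IDs keep history log IDs
// unique across all half-year physical tables.
@TableName("man_asn_order_log")
public class AsnOrderLog {
    @TableId(type = IdType.ASSIGN_ID) // generated in the app, not by MySQL auto-increment
    private Long id;
    private LocalDateTime createTime;
    // ... other columns unchanged
}
```

Legacy auto-increment IDs migrated into the new tables stay valid; only newly archived rows need collision-free IDs.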
| | | |
| | | ### SQL Delivery |
| | | |
| | | - Add SQL scripts under `rsf-server/src/main/resources/sql/`. |
| | | - Include table creation rules for semi-annual log tables. |
| | | - Include a migration script path for existing base-table data. |
| | | |
| | | ### Agent's Discretion |
| | | |
| | | - Exact helper class names and package placement. |
| | | - Exact partition suffix formatting as long as it is deterministic and documented. |
| | | - Whether to store known partition metadata dynamically or derive it from date ranges plus a configured floor. |
| | | |
| | | </decisions> |
| | | |
| | | <canonical_refs> |
| | | ## Canonical References |
| | | |
| | | **Downstream implementation must read these before changing code.** |
| | | |
| | | ### Current Entity Bindings |
| | | |
| | | - `rsf-server/src/main/java/com/vincent/rsf/server/manager/entity/AsnOrderLog.java` — current logical table binding for ASN history main log |
| | | - `rsf-server/src/main/java/com/vincent/rsf/server/manager/entity/AsnOrderItemLog.java` — current logical table binding for ASN history item log |
| | | |
| | | ### Archive / Recovery Logic |
| | | |
| | | - `rsf-server/src/main/java/com/vincent/rsf/server/manager/service/impl/AsnOrderServiceImpl.java` — archive flow that writes logs when ASN orders are completed or closed |
| | | - `rsf-server/src/main/java/com/vincent/rsf/server/manager/service/impl/AsnOrderLogServiceImpl.java` — continue-receipt flow that restores archived orders and deletes logs |
| | | |
| | | ### Query Entrypoints |
| | | |
| | | - `rsf-server/src/main/java/com/vincent/rsf/server/manager/controller/AsnOrderLogController.java` — main log page/list/get/export/query/continue endpoints |
| | | - `rsf-server/src/main/java/com/vincent/rsf/server/manager/controller/AsnOrderItemLogController.java` — item log page/list/get/export/query endpoints |
| | | - `rsf-server/src/main/java/com/vincent/rsf/server/api/service/impl/ReceiveMsgServiceImpl.java` — business lookup by archived log code |
| | | |
| | | ### Infrastructure |
| | | |
| | | - `rsf-server/src/main/java/com/vincent/rsf/server/common/config/MybatisPlusConfig.java` — available MyBatis-Plus interceptor extension point |
| | | - `rsf-server/src/main/java/com/vincent/rsf/server/common/domain/BaseParam.java` — generic `timeStart` / `timeEnd` request parsing |
| | | - `rsf-server/src/main/java/com/vincent/rsf/server/common/domain/PageParam.java` — generic `create_time` range filtering behavior |
| | | |
| | | </canonical_refs> |
| | | |
| | | <specifics> |
| | | ## Specific Ideas |
| | | |
| | | - Prefer a routing helper plus MyBatis-Plus dynamic table name interceptor for single-partition operations. |
| | | - For cross-partition reads, aggregate results across candidate physical tables in service code where necessary. |
| | | - Keep item-log queries aligned with main-log partition selection by shared suffix rules. |
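The cross-partition aggregation idea above can be sketched as a generic merge loop in service code; `perTableQuery` is a placeholder for the real mapper call, and the class name is illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Illustrative sketch: aggregate list results across candidate physical tables.
class CrossPartitionList {
    static <T> List<T> listAcross(
            List<String> physicalTables, Function<String, List<T>> perTableQuery) {
        List<T> merged = new ArrayList<>();
        for (String table : physicalTables) {
            merged.addAll(perTableQuery.apply(table)); // per-table query, results concatenated
        }
        return merged;
    }
}
```

For paginated endpoints the merged list would still need a stable sort (for example by `create_time` descending) before slicing the requested page.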
| | | |
| | | </specifics> |
| | | |
| | | <deferred> |
| | | ## Deferred Ideas |
| | | |
| | | - Generic reusable partition framework for all history tables |
| | | - Automatic future partition provisioning job |
| | | - Frontend-level partition filter UX changes |
| | | |
| | | </deferred> |
| | | |
| | | --- |
| | | |
| | | *Phase: 01-asn* |
| | | *Context gathered: 2026-04-22 via manual GSD fallback* |
| | |
| | | gap: 10px; |
| | | } |
| | | .order-print-sheet__title-wrap { |
| | | width: 100%; |
| | | display: flex; |
| | | align-items: center; |
| | | justify-content: center; |
| | | gap: 16px; |
| | | } |
| | | .order-print-sheet__brand-row { |
| | | width: 100%; |
| | | display: flex; |
| | | min-height: 48px; |
| | | } |
| | | .order-print-sheet__brand-row.is-logo-left { |
| | | justify-content: flex-start; |
| | | } |
| | | .order-print-sheet__brand-row.is-logo-right { |
| | | justify-content: flex-end; |
| | | } |
| | | .order-print-sheet__title-wrap.has-barcode.is-barcode-left, |
| | | .order-print-sheet__title-wrap.has-barcode.is-barcode-right { |
| | | flex-direction: row; |
| | | position: relative; |
| | | min-height: 64px; |
| | | } |
| | | .order-print-sheet__title-wrap.has-barcode.is-barcode-top, |
| | | .order-print-sheet__title-wrap.has-barcode.is-barcode-bottom { |
| | | flex-direction: column; |
| | | align-items: center; |
| | | justify-content: flex-start; |
| | | } |
| | | .order-print-sheet__title-wrap.has-barcode.is-barcode-left .order-print-sheet__barcode { |
| | | position: absolute; |
| | | left: 0; |
| | | top: 50%; |
| | | transform: translateY(-50%); |
| | | } |
| | | .order-print-sheet__title-wrap.has-barcode.is-barcode-left .order-print-sheet__title, |
| | | .order-print-sheet__title-wrap.has-barcode.is-barcode-right .order-print-sheet__title { |
| | | width: 100%; |
| | | } |
| | | .order-print-sheet__title-wrap.has-barcode.is-barcode-right .order-print-sheet__barcode { |
| | | position: absolute; |
| | | right: 0; |
| | | top: 50%; |
| | | transform: translateY(-50%); |
| | | } |
| | | .order-print-sheet__title-wrap.has-barcode.is-barcode-top .order-print-sheet__barcode { |
| | | order: 0; |
| | | } |
| | | .order-print-sheet__title-wrap.has-barcode.is-barcode-bottom .order-print-sheet__barcode { |
| | | order: 2; |
| | | } |
| | | .order-print-sheet__title { |
| | | flex: 1; |
| | | text-align: center; |
| | | font-size: 24px; |
| | | font-weight: 700; |
| | | letter-spacing: 1px; |
| | | } |
| | | .order-print-sheet__title-wrap.has-barcode.is-barcode-top .order-print-sheet__title, |
| | | .order-print-sheet__title-wrap.has-barcode.is-barcode-bottom .order-print-sheet__title { |
| | | width: 100%; |
| | | } |
| | | .order-print-sheet__barcode { |
| | |
| | | flex-direction: column; |
| | | gap: 4px; |
| | | } |
| | | .order-print-sheet__logo { |
| | | display: flex; |
| | | align-items: center; |
| | | justify-content: center; |
| | | min-height: 48px; |
| | | } |
| | | .order-print-sheet__logo-image { |
| | | display: block; |
| | | max-width: 100%; |
| | | max-height: 72px; |
| | | object-fit: contain; |
| | | } |
| | | .order-print-sheet__barcode.is-barcode-top, |
| | | .order-print-sheet__barcode.is-barcode-bottom { |
| | | width: 240px; |
| | |
| | | <div class="order-print-sheet" :class="{ 'is-preview': preview }" :style="sheetStyle"> |
| | | <div class="order-print-sheet__header"> |
| | | <div |
| | | v-if="logoVisible" |
| | | class="order-print-sheet__brand-row" |
| | | :class="`is-logo-${logoPosition}`" |
| | | > |
| | | <div class="order-print-sheet__logo" :class="`is-logo-${logoPosition}`"> |
| | | <img |
| | | :src="schema.logoSrc" |
| | | alt="template logo" |
| | | class="order-print-sheet__logo-image" |
| | | :style="{ width: `${logoWidth}px` }" |
| | | /> |
| | | </div> |
| | | </div> |
| | | <div |
| | | class="order-print-sheet__title-wrap" |
| | | :class="[`is-barcode-${barcodePosition}`, { 'has-barcode': barcodeVisible }]" |
| | | > |
| | | <div |
| | | v-if="barcodeVisible" |
| | |
| | | '' |
| | | ) |
| | | ) |
| | | const logoVisible = computed(() => schema.value.showLogo && Boolean(schema.value.logoSrc)) |
| | | const logoPosition = computed(() => schema.value.logoPosition || 'left') |
| | | const logoWidth = computed(() => Number(schema.value.logoWidth) || 72) |
| | | const barcodeVisible = computed(() => schema.value.showBarcode && Boolean(barcodeValue.value)) |
| | | const barcodePosition = computed(() => schema.value.barcodePosition || 'right') |
| | | const showTotalRow = computed(() => |
| | |
| | | } |
| | | |
| | | .order-print-sheet__title-wrap { |
| | | width: 100%; |
| | | display: flex; |
| | | align-items: center; |
| | | justify-content: center; |
| | | gap: 16px; |
| | | } |
| | | |
| | | .order-print-sheet__brand-row { |
| | | width: 100%; |
| | | display: flex; |
| | | min-height: 48px; |
| | | } |
| | | |
| | | .order-print-sheet__brand-row.is-logo-left { |
| | | justify-content: flex-start; |
| | | } |
| | | |
| | | .order-print-sheet__brand-row.is-logo-right { |
| | | justify-content: flex-end; |
| | | } |
| | | |
| | | .order-print-sheet__title-wrap.has-barcode.is-barcode-left, |
| | | .order-print-sheet__title-wrap.has-barcode.is-barcode-right { |
| | | flex-direction: row; |
| | | position: relative; |
| | | min-height: 64px; |
| | | } |
| | | |
| | | .order-print-sheet__title-wrap.has-barcode.is-barcode-top, |
| | | .order-print-sheet__title-wrap.has-barcode.is-barcode-bottom { |
| | | flex-direction: column; |
| | | align-items: center; |
| | | justify-content: flex-start; |
| | | } |
| | | |
| | | .order-print-sheet__title-wrap.has-barcode.is-barcode-left .order-print-sheet__barcode { |
| | | position: absolute; |
| | | left: 0; |
| | | top: 50%; |
| | | transform: translateY(-50%); |
| | | } |
| | | |
| | | .order-print-sheet__title-wrap.has-barcode.is-barcode-left .order-print-sheet__title, |
| | | .order-print-sheet__title-wrap.has-barcode.is-barcode-right .order-print-sheet__title { |
| | | width: 100%; |
| | | } |
| | | |
| | | .order-print-sheet__title-wrap.has-barcode.is-barcode-right .order-print-sheet__barcode { |
| | | position: absolute; |
| | | right: 0; |
| | | top: 50%; |
| | | transform: translateY(-50%); |
| | | } |
| | | |
| | | .order-print-sheet__title-wrap.has-barcode.is-barcode-top .order-print-sheet__barcode { |
| | | order: 0; |
| | | } |
| | | |
| | | .order-print-sheet__title-wrap.has-barcode.is-barcode-bottom .order-print-sheet__barcode { |
| | |
| | | } |
| | | |
| | | .order-print-sheet__title { |
| | | flex: 1; |
| | | text-align: center; |
| | | font-size: 24px; |
| | | font-weight: 700; |
| | | letter-spacing: 1px; |
| | | } |
| | | |
| | | .order-print-sheet__title-wrap.has-barcode.is-barcode-top .order-print-sheet__title, |
| | | .order-print-sheet__title-wrap.has-barcode.is-barcode-bottom .order-print-sheet__title { |
| | | width: 100%; |
| | | } |
| | | |
| | |
| | | gap: 4px; |
| | | } |
| | | |
| | | .order-print-sheet__logo { |
| | | display: flex; |
| | | align-items: center; |
| | | justify-content: center; |
| | | min-height: 48px; |
| | | } |
| | | |
| | | .order-print-sheet__logo-image { |
| | | display: block; |
| | | max-width: 100%; |
| | | max-height: 72px; |
| | | object-fit: contain; |
| | | } |
| | | |
| | | .order-print-sheet__barcode.is-barcode-top, |
| | | .order-print-sheet__barcode.is-barcode-bottom { |
| | | width: 240px; |
| | |
| | | </div> |
| | | |
| | | <div class="order-print-template-manager__form-grid"> |
| | | <ElFormItem label="显示Logo"> |
| | | <ElSwitch v-model="activeTemplate.schema.showLogo" /> |
| | | </ElFormItem> |
| | | <ElFormItem label="显示条码"> |
| | | <ElSwitch v-model="activeTemplate.schema.showBarcode" /> |
| | | </ElFormItem> |
| | |
| | | /> |
| | | </ElFormItem> |
| | | </div> |
| | | |
| | | <div |
| | | v-if="activeTemplate.schema.showLogo" |
| | | class="order-print-template-manager__form-grid" |
| | | > |
| | | <ElFormItem label="Logo位置"> |
| | | <ElSelect v-model="activeTemplate.schema.logoPosition"> |
| | | <ElOption |
| | | v-for="item in logoPositionOptions" |
| | | :key="item.value" |
| | | :label="item.label" |
| | | :value="item.value" |
| | | /> |
| | | </ElSelect> |
| | | </ElFormItem> |
| | | <ElFormItem label="Logo宽度(px)"> |
| | | <ElInputNumber |
| | | v-model="activeTemplate.schema.logoWidth" |
| | | :min="24" |
| | | :max="240" |
| | | :step="4" |
| | | /> |
| | | </ElFormItem> |
| | | <ElFormItem label="上传Logo"> |
| | | <ElUpload |
| | | :auto-upload="false" |
| | | :show-file-list="false" |
| | | accept=".png,.jpg,.jpeg,.gif,.bmp,.webp,.svg" |
| | | @change="handleLogoFileChange" |
| | | > |
| | | <ElButton>上传图片</ElButton> |
| | | </ElUpload> |
| | | </ElFormItem> |
| | | </div> |
| | | |
| | | <ElFormItem v-if="activeTemplate.schema.showLogo" label="Logo预览"> |
| | | <div v-if="activeTemplate.schema.logoSrc" class="order-print-template-manager__logo"> |
| | | <img |
| | | :src="activeTemplate.schema.logoSrc" |
| | | alt="logo preview" |
| | | class="order-print-template-manager__logo-image" |
| | | /> |
| | | <ElButton link type="danger" @click="clearLogo">移除Logo</ElButton> |
| | | </div> |
| | | <ElText v-else type="info">暂未上传Logo</ElText> |
| | | </ElFormItem> |
| | | |
| | | <div |
| | | class="order-print-template-manager__form-grid" |
| | |
| | | getOrderPrintAlignOptions, |
| | | getOrderPrintBarcodePositionOptions, |
| | | getOrderPrintFieldCatalog, |
| | | getOrderPrintLogoPositionOptions, |
| | | getOrderPrintOrientationOptions, |
| | | getOrderPrintPageDimensions, |
| | | getOrderPrintPaperSizeOptions, |
| | |
| | | const orientationOptions = computed(() => getOrderPrintOrientationOptions()) |
| | | const alignOptions = computed(() => getOrderPrintAlignOptions()) |
| | | const barcodePositionOptions = computed(() => getOrderPrintBarcodePositionOptions()) |
| | | const logoPositionOptions = computed(() => getOrderPrintLogoPositionOptions()) |
| | | const spanOptions = computed(() => getOrderPrintSpanOptions()) |
| | | const fieldCatalog = computed(() => getOrderPrintFieldCatalog(props.type, props.enabledFields)) |
| | | const headerFieldOptions = computed(() => fieldCatalog.value.header) |
| | |
| | | |
| | | function emitTypeChange(value) { |
| | | emit('update:type', value) |
| | | } |
| | | |
| | | function validateImageFile(rawFile) { |
| | | const isImageFile = |
| | | String(rawFile?.type || '').startsWith('image/') || |
| | | /\.(png|jpe?g|gif|bmp|webp|svg)$/i.test(rawFile?.name || '') |
| | | if (!isImageFile) { |
| | | ElMessage.error('只能上传图片文件') |
| | | return false |
| | | } |
| | | |
| | | const isLt5MB = Number(rawFile?.size || 0) / 1024 / 1024 < 5 |
| | | if (!isLt5MB) { |
| | | ElMessage.error('Logo 图片大小不能超过 5MB') |
| | | return false |
| | | } |
| | | |
| | | return true |
| | | } |
| | | |
| | | function readFileAsDataUrl(file) { |
| | | return new Promise((resolve, reject) => { |
| | | const reader = new FileReader() |
| | | reader.onload = () => resolve(String(reader.result || '')) |
| | | reader.onerror = reject |
| | | reader.readAsDataURL(file) |
| | | }) |
| | | } |
| | | |
| | | async function handleLogoFileChange(uploadFile) { |
| | | const rawFile = uploadFile?.raw |
| | | if (!rawFile || !activeTemplate.value) { |
| | | return |
| | | } |
| | | if (!validateImageFile(rawFile)) { |
| | | return |
| | | } |
| | | try { |
| | | activeTemplate.value.schema.logoSrc = await readFileAsDataUrl(rawFile) |
| | | } catch (error) { |
| | | ElMessage.error(error?.message || 'Logo 读取失败') |
| | | } |
| | | } |
| | | |
| | | function clearLogo() { |
| | | if (!activeTemplate.value) { |
| | | return |
| | | } |
| | | activeTemplate.value.schema.logoSrc = '' |
| | | } |
| | | |
| | | function getOrientationLabel(value) { |
| | |
| | | width: 100%; |
| | | } |
| | | |
| | | .order-print-template-manager__logo { |
| | | display: flex; |
| | | align-items: center; |
| | | gap: 12px; |
| | | flex-wrap: wrap; |
| | | } |
| | | |
| | | .order-print-template-manager__logo-image { |
| | | display: block; |
| | | max-width: 180px; |
| | | max-height: 72px; |
| | | object-fit: contain; |
| | | border: 1px solid rgba(148, 163, 184, 0.18); |
| | | border-radius: 8px; |
| | | padding: 8px; |
| | | background: #ffffff; |
| | | } |
| | | |
| | | .order-print-template-manager__field-item { |
| | | padding: 12px; |
| | | border-radius: 14px; |
| | |
| | | } |
| | | const DEFAULT_VERSION = 2 |
| | | const DEFAULT_BARCODE_POSITION = 'right' |
| | | const DEFAULT_LOGO_POSITION = 'left' |
| | | const DEFAULT_LOGO_WIDTH = 72 |
| | | const TOTAL_FIELD_MAP = { |
| | | anfme: 'totalAnfme', |
| | | qty: 'totalQty', |
| | |
| | | |
| | | function normalizeBarcodePosition(value) { |
| | | return ['left', 'right', 'top', 'bottom'].includes(value) ? value : DEFAULT_BARCODE_POSITION |
| | | } |
| | | |
| | | function normalizeLogoPosition(value) { |
| | | return ['left', 'right'].includes(value) ? value : DEFAULT_LOGO_POSITION |
| | | } |
| | | |
| | | function cloneData(value) { |
| | |
| | | ...DEFAULT_PAGE, |
| | | ...(config.page || {}) |
| | | }, |
| | | showLogo: config.showLogo === true, |
| | | logoSrc: normalizeText(config.logoSrc), |
| | | logoPosition: normalizeLogoPosition(config.logoPosition), |
| | | logoWidth: Math.max(normalizeNumber(config.logoWidth, DEFAULT_LOGO_WIDTH), 24), |
| | | showBarcode: config.showBarcode !== false, |
| | | barcodeField: normalizeText(config.barcodeField) || 'orderCode', |
| | | barcodeTextField: normalizeText(config.barcodeTextField) || 'orderCode', |
| | |
| | | mode: 'document', |
| | | title: normalizeText(rawSchema.title) || defaultSchema.title, |
| | | page: normalizePage(rawSchema.page), |
| | | showLogo: rawSchema.showLogo === true, |
| | | logoSrc: normalizeText(rawSchema.logoSrc), |
| | | logoPosition: normalizeLogoPosition(rawSchema.logoPosition), |
| | | logoWidth: Math.max( |
| | | normalizeNumber(rawSchema.logoWidth, defaultSchema.logoWidth || DEFAULT_LOGO_WIDTH), |
| | | 24 |
| | | ), |
| | | showBarcode: rawSchema.showBarcode !== false, |
| | | barcodeField: normalizeText(rawSchema.barcodeField) || defaultSchema.barcodeField, |
| | | barcodeTextField: normalizeText(rawSchema.barcodeTextField) || defaultSchema.barcodeTextField, |
| | |
| | | ] |
| | | } |
| | | |
| | | export function getOrderPrintLogoPositionOptions() { |
| | | return [ |
| | | { label: '居左', value: 'left' }, |
| | | { label: '居右', value: 'right' } |
| | | ] |
| | | } |
| | | |
| | | export function getOrderPrintSpanOptions() { |
| | | return [6, 8, 12, 24].map((value) => ({ |
| | | label: `${value}/24`, |
| | |
| | | return R.ok("单据不存在 !!").add(map); |
| | | } |
| | | |
| | | AsnOrderLog orderLog = asnOrderLogService.getOne(
| | | new LambdaQueryWrapper<AsnOrderLog>()
| | | .eq(AsnOrderLog::getCode, queryParams.getOrderNo())
| | | .orderByDesc(AsnOrderLog::getId)
| | | .last("limit 1")
| | | );
| | | if (!Objects.isNull(orderLog)) { |
| | | Map<String, Object> map = new HashMap<>(); |
| | | map.put("exceStatus", "4"); |
| | |
| | | import com.baomidou.mybatisplus.extension.parser.JsqlParserGlobal; |
| | | import com.baomidou.mybatisplus.extension.parser.cache.JdkSerialCaffeineJsqlParseCache; |
| | | import com.baomidou.mybatisplus.extension.plugins.MybatisPlusInterceptor; |
| | | import com.baomidou.mybatisplus.extension.plugins.inner.DynamicTableNameInnerInterceptor; |
| | | import com.baomidou.mybatisplus.extension.plugins.handler.TenantLineHandler; |
| | | import com.baomidou.mybatisplus.extension.plugins.inner.OptimisticLockerInnerInterceptor; |
| | | import com.baomidou.mybatisplus.extension.plugins.inner.PaginationInnerInterceptor; |
| | | import com.baomidou.mybatisplus.extension.plugins.inner.TenantLineInnerInterceptor; |
| | | import com.vincent.rsf.server.manager.partition.AsnLogPartitionSupport; |
| | | import com.vincent.rsf.server.system.entity.User; |
| | | import net.sf.jsqlparser.expression.Expression; |
| | | import net.sf.jsqlparser.expression.LongValue; |
| | |
| | | private static volatile boolean jsqlParserConfigured = false; |
| | | |
| | | @Bean |
| | | public MybatisPlusInterceptor mybatisPlusInterceptor(AsnLogPartitionSupport asnLogPartitionSupport) {
| | | configureJsqlParser(); |
| | | MybatisPlusInterceptor interceptor = new MybatisPlusInterceptor(); |
| | | |
| | | // Optimistic locking plugin
| | | interceptor.addInnerInterceptor(new OptimisticLockerInnerInterceptor()); |
| | | |
| | | // Dynamic table name routing for ASN history log tables
| | | interceptor.addInnerInterceptor(new DynamicTableNameInnerInterceptor( |
| | | (sql, tableName) -> asnLogPartitionSupport.resolveRoutedTable(tableName) |
| | | )); |
| | | |
| | | // Multi-tenant plugin configuration
| | | TenantLineHandler tenantLineHandler = new TenantLineHandler() { |
| | | @Override |
| | |
| | | * ID |
| | | */ |
| | | @ApiModelProperty(value= "ID") |
| | | @TableId(value = "id", type = IdType.ASSIGN_ID)
| | | private Long id; |
| | | |
| | | @ApiModelProperty("主单ID") |
| | |
| | | * ID |
| | | */ |
| | | @ApiModelProperty(value= "ID") |
| | | @TableId(value = "id", type = IdType.ASSIGN_ID)
| | | private Long id; |
| | | |
| | | |
| New file |
| | |
| | | package com.vincent.rsf.server.manager.partition; |
| | | |
| | | import com.vincent.rsf.framework.exception.CoolException; |
| | | import org.springframework.jdbc.core.JdbcTemplate; |
| | | import org.springframework.stereotype.Component; |
| | | |
| | | import java.time.LocalDateTime; |
| | | import java.time.ZoneId; |
| | | import java.util.ArrayList; |
| | | import java.util.Collections; |
| | | import java.util.Comparator; |
| | | import java.util.Date; |
| | | import java.util.List; |
| | | import java.util.Map; |
| | | import java.util.concurrent.ConcurrentHashMap; |
| | | import java.util.concurrent.TimeUnit; |
| | | import java.util.function.Supplier; |
| | | |
| | | /** |
| | | * Partition support for ASN history log tables. |
| | | */ |
| | | @Component |
| | | public class AsnLogPartitionSupport { |
| | | |
| | | public static final String ORDER_LOG_TABLE = "man_asn_order_log"; |
| | | public static final String ORDER_ITEM_LOG_TABLE = "man_asn_order_item_log"; |
| | | |
| | | private static final long CACHE_TTL_MILLIS = TimeUnit.MINUTES.toMillis(5); |
| | | private static final String LIST_TABLE_SQL = |
| | | "select table_name from information_schema.tables " + |
| | | "where table_schema = database() and (table_name = ? or table_name like ? escape '\\\\')"; |
| | | |
| | | private final JdbcTemplate jdbcTemplate; |
| | | private final Map<String, TableCacheEntry> tableCache = new ConcurrentHashMap<>(); |
| | | |
| | | public AsnLogPartitionSupport(JdbcTemplate jdbcTemplate) { |
| | | this.jdbcTemplate = jdbcTemplate; |
| | | } |
| | | |
| | | public String resolveRoutedTable(String logicalTable) { |
| | | String routed = AsnLogTableRoutingContext.getTable(logicalTable); |
| | | return routed == null ? logicalTable : routed; |
| | | } |
| | | |
| | | public String resolveOrderLogTable(Date createTime) { |
| | | return resolvePhysicalTable(ORDER_LOG_TABLE, createTime); |
| | | } |
| | | |
| | | public String resolveOrderItemLogTable(Date createTime) { |
| | | return resolvePhysicalTable(ORDER_ITEM_LOG_TABLE, createTime); |
| | | } |
| | | |
| | | public String resolvePhysicalTable(String logicalTable, Date createTime) { |
| | | Date effectiveDate = createTime == null ? new Date() : createTime; |
| | | LocalDateTime localDateTime = LocalDateTime.ofInstant(effectiveDate.toInstant(), ZoneId.systemDefault()); |
| | | int half = localDateTime.getMonthValue() <= 6 ? 1 : 2; |
| | | return logicalTable + "_" + localDateTime.getYear() + "_h" + half; |
| | | } |
| | | |
| | | public List<String> listOrderLogTables() { |
| | | return listReadableTables(ORDER_LOG_TABLE); |
| | | } |
| | | |
| | | public List<String> listOrderItemLogTables() { |
| | | return listReadableTables(ORDER_ITEM_LOG_TABLE); |
| | | } |
| | | |
| | | public List<String> listReadableTables(String logicalTable) { |
| | | TableCacheEntry cacheEntry = tableCache.get(logicalTable); |
| | | long now = System.currentTimeMillis(); |
| | | if (cacheEntry != null && now - cacheEntry.loadedAt < CACHE_TTL_MILLIS) { |
| | | return cacheEntry.tables; |
| | | } |
| | | return refreshReadableTables(logicalTable); |
| | | } |
| | | |
| | | public void ensureTableExists(String logicalTable, String actualTable) { |
| | | List<String> tables = refreshReadableTables(logicalTable); |
| | | if (!tables.contains(actualTable)) { |
| | | throw new CoolException("历史日志分表不存在,请先创建表:" + actualTable); |
| | | } |
| | | } |
| | | |
| | | public <T> T executeOnTable(String logicalTable, String actualTable, Supplier<T> supplier) { |
| | | return AsnLogTableRoutingContext.withTable(logicalTable, actualTable, supplier); |
| | | } |
| | | |
| | | public void runOnTable(String logicalTable, String actualTable, Runnable runnable) { |
| | | AsnLogTableRoutingContext.withTable(logicalTable, actualTable, runnable); |
| | | } |
| | | |
| | | private List<String> refreshReadableTables(String logicalTable) { |
| | | try { |
| | | List<String> tables = jdbcTemplate.queryForList( |
| | | LIST_TABLE_SQL, |
| | | String.class, |
| | | logicalTable, |
| | | logicalTable + "\\_%" |
| | | ); |
| | | if (tables == null || tables.isEmpty()) { |
| | | tables = new ArrayList<>(Collections.singletonList(logicalTable)); |
| | | } |
| | | tables.sort(tableComparator(logicalTable)); |
| | | List<String> snapshot = Collections.unmodifiableList(new ArrayList<>(tables));
| | | tableCache.put(logicalTable, new TableCacheEntry(snapshot, System.currentTimeMillis()));
| | | return snapshot;
| | | } catch (Exception ex) { |
| | | List<String> fallback = Collections.singletonList(logicalTable); |
| | | tableCache.put(logicalTable, new TableCacheEntry(fallback, System.currentTimeMillis())); |
| | | return fallback; |
| | | } |
| | | } |
| | | |
| | | private Comparator<String> tableComparator(String logicalTable) { |
| | | return (left, right) -> { |
| | | boolean leftBase = logicalTable.equals(left); |
| | | boolean rightBase = logicalTable.equals(right); |
| | | if (leftBase && rightBase) { |
| | | return 0; |
| | | } |
| | | if (leftBase) { |
| | | return 1; |
| | | } |
| | | if (rightBase) { |
| | | return -1; |
| | | } |
| | | return right.compareTo(left); |
| | | }; |
| | | } |
| | | |
| | | private static class TableCacheEntry { |
| | | private final List<String> tables; |
| | | private final long loadedAt; |
| | | |
| | | private TableCacheEntry(List<String> tables, long loadedAt) { |
| | | this.tables = tables; |
| | | this.loadedAt = loadedAt; |
| | | } |
| | | } |
| | | } |
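The half-year naming rule in `resolvePhysicalTable` above can be checked in isolation. This is a minimal standalone sketch, not part of the codebase — the class and method names are illustrative only:

```java
import java.time.LocalDate;

// Standalone sketch of the half-year naming used by
// AsnLogPartitionSupport#resolvePhysicalTable: months 1-6 route to
// <table>_<year>_h1, months 7-12 route to <table>_<year>_h2.
public class HalfYearNamingDemo {

    static String resolve(String logicalTable, LocalDate date) {
        int half = date.getMonthValue() <= 6 ? 1 : 2;
        return logicalTable + "_" + date.getYear() + "_h" + half;
    }

    public static void main(String[] args) {
        // March 2024 lands in the first half-year partition.
        System.out.println(resolve("man_asn_order_log", LocalDate.of(2024, 3, 15)));
        // November 2024 lands in the second half-year partition.
        System.out.println(resolve("man_asn_order_item_log", LocalDate.of(2024, 11, 2)));
    }
}
```

Note that the real implementation converts the legacy `java.util.Date` via the system default zone first; the archive date, not the query date, decides the target table.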
| New file |
| | |
| | | package com.vincent.rsf.server.manager.partition; |
| | | |
| | | import java.util.HashMap; |
| | | import java.util.Map; |
| | | import java.util.function.Supplier; |
| | | |
| | | /** |
| | | * Thread-local table routing context for ASN history log tables. |
| | | */ |
| | | public final class AsnLogTableRoutingContext { |
| | | |
| | | private static final ThreadLocal<Map<String, String>> ROUTES = ThreadLocal.withInitial(HashMap::new); |
| | | |
| | | private AsnLogTableRoutingContext() { |
| | | } |
| | | |
| | | public static String getTable(String logicalTable) { |
| | | return ROUTES.get().get(logicalTable); |
| | | } |
| | | |
| | | public static <T> T withTable(String logicalTable, String actualTable, Supplier<T> supplier) { |
| | | Map<String, String> routes = ROUTES.get(); |
| | | String previous = routes.put(logicalTable, actualTable); |
| | | try { |
| | | return supplier.get(); |
| | | } finally { |
| | | if (previous == null) { |
| | | routes.remove(logicalTable); |
| | | if (routes.isEmpty()) { |
| | | ROUTES.remove(); |
| | | } |
| | | } else { |
| | | routes.put(logicalTable, previous); |
| | | } |
| | | } |
| | | } |
| | | |
| | | public static void withTable(String logicalTable, String actualTable, Runnable runnable) { |
| | | withTable(logicalTable, actualTable, () -> { |
| | | runnable.run(); |
| | | return null; |
| | | }); |
| | | } |
| | | } |
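The restore-previous-value dance in the `finally` block above is what makes nested routing scopes safe. A minimal re-creation (illustrative names, trimmed to the essentials) shows the unwind behavior:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Minimal re-creation of the thread-local routing context: the previous
// mapping is restored on exit, so nested scopes for the same logical
// table unwind correctly and nothing leaks to the thread afterwards.
public class RoutingContextDemo {

    private static final ThreadLocal<Map<String, String>> ROUTES = ThreadLocal.withInitial(HashMap::new);

    static String getTable(String logical) {
        return ROUTES.get().get(logical);
    }

    static <T> T withTable(String logical, String actual, Supplier<T> body) {
        Map<String, String> routes = ROUTES.get();
        String previous = routes.put(logical, actual);
        try {
            return body.get();
        } finally {
            // Restore the outer scope's mapping (or clear if there was none).
            if (previous == null) {
                routes.remove(logical);
            } else {
                routes.put(logical, previous);
            }
        }
    }

    public static void main(String[] args) {
        String seen = withTable("man_asn_order_log", "man_asn_order_log_2024_h1", () ->
                withTable("man_asn_order_log", "man_asn_order_log_2023_h2",
                        () -> getTable("man_asn_order_log"))
                        + "|" + getTable("man_asn_order_log"));
        System.out.println(seen); // man_asn_order_log_2023_h2|man_asn_order_log_2024_h1
        System.out.println(getTable("man_asn_order_log")); // null after both scopes exit
    }
}
```

The production class additionally calls `ThreadLocal.remove()` when the route map empties, which matters in thread pools where worker threads are long-lived.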
| | |
| | | import com.baomidou.mybatisplus.extension.service.IService; |
| | | import com.vincent.rsf.server.manager.entity.AsnOrderItemLog; |
| | | |
| | | import java.util.Collection; |
| | | import java.util.Date; |
| | | |
| | | public interface AsnOrderItemLogService extends IService<AsnOrderItemLog> { |
| | | |
| | | boolean saveBatchToDate(Collection<AsnOrderItemLog> entityList, Date partitionDate); |
| | | |
| | | } |
| | |
| | | package com.vincent.rsf.server.manager.service.impl; |
| | | |
| | | import com.baomidou.mybatisplus.core.conditions.Wrapper; |
| | | import com.baomidou.mybatisplus.core.metadata.IPage; |
| | | import com.baomidou.mybatisplus.extension.toolkit.SqlHelper; |
| | | import com.vincent.rsf.server.manager.partition.AsnLogPartitionSupport; |
| | | import com.vincent.rsf.server.manager.mapper.AsnOrderItemLogMapper; |
| | | import com.vincent.rsf.server.manager.entity.AsnOrderItemLog; |
| | | import com.vincent.rsf.server.manager.service.AsnOrderItemLogService; |
| | | import com.baomidou.mybatisplus.extension.service.impl.ServiceImpl; |
| | | import org.springframework.beans.factory.annotation.Autowired; |
| | | import org.springframework.stereotype.Service; |
| | | |
| | | import java.io.Serializable; |
| | | import java.util.ArrayList; |
| | | import java.util.Collection; |
| | | import java.util.Collections; |
| | | import java.util.Comparator; |
| | | import java.util.Date; |
| | | import java.util.LinkedHashMap; |
| | | import java.util.List; |
| | | import java.util.Map; |
| | | |
| | | @Service("asnOrderItemLogService") |
| | | public class AsnOrderItemLogServiceImpl extends ServiceImpl<AsnOrderItemLogMapper, AsnOrderItemLog> implements AsnOrderItemLogService { |
| | | |
| | | @Autowired |
| | | private AsnLogPartitionSupport partitionSupport; |
| | | |
| | | @Override |
| | | public boolean save(AsnOrderItemLog entity) { |
| | | if (entity == null) { |
| | | return false; |
| | | } |
| | | String tableName = partitionSupport.resolveOrderItemLogTable(entity.getCreateTime()); |
| | | partitionSupport.ensureTableExists(AsnLogPartitionSupport.ORDER_ITEM_LOG_TABLE, tableName); |
| | | return partitionSupport.executeOnTable(AsnLogPartitionSupport.ORDER_ITEM_LOG_TABLE, tableName, () -> super.save(entity)); |
| | | } |
| | | |
| | | @Override |
| | | public boolean saveBatch(Collection<AsnOrderItemLog> entityList) { |
| | | return saveBatch(entityList, DEFAULT_BATCH_SIZE); |
| | | } |
| | | |
| | | @Override |
| | | public boolean saveBatch(Collection<AsnOrderItemLog> entityList, int batchSize) { |
| | | if (entityList == null || entityList.isEmpty()) { |
| | | return false; |
| | | } |
| | | Map<String, List<AsnOrderItemLog>> grouped = new LinkedHashMap<>(); |
| | | for (AsnOrderItemLog entity : entityList) { |
| | | String tableName = partitionSupport.resolveOrderItemLogTable(entity == null ? null : entity.getCreateTime()); |
| | | grouped.computeIfAbsent(tableName, key -> new ArrayList<>()).add(entity); |
| | | } |
| | | boolean success = true; |
| | | for (Map.Entry<String, List<AsnOrderItemLog>> entry : grouped.entrySet()) { |
| | | partitionSupport.ensureTableExists(AsnLogPartitionSupport.ORDER_ITEM_LOG_TABLE, entry.getKey()); |
| | | Boolean saved = partitionSupport.executeOnTable( |
| | | AsnLogPartitionSupport.ORDER_ITEM_LOG_TABLE, |
| | | entry.getKey(), |
| | | () -> super.saveBatch(entry.getValue(), batchSize) |
| | | ); |
| | | success = success && Boolean.TRUE.equals(saved); |
| | | } |
| | | return success; |
| | | } |
| | | |
| | | @Override |
| | | public boolean saveBatchToDate(Collection<AsnOrderItemLog> entityList, Date partitionDate) { |
| | | if (entityList == null || entityList.isEmpty()) { |
| | | return false; |
| | | } |
| | | String tableName = partitionSupport.resolveOrderItemLogTable(partitionDate); |
| | | partitionSupport.ensureTableExists(AsnLogPartitionSupport.ORDER_ITEM_LOG_TABLE, tableName); |
| | | return partitionSupport.executeOnTable( |
| | | AsnLogPartitionSupport.ORDER_ITEM_LOG_TABLE, |
| | | tableName, |
| | | () -> super.saveBatch(entityList, DEFAULT_BATCH_SIZE) |
| | | ); |
| | | } |
| | | |
| | | @Override |
| | | public AsnOrderItemLog getById(Serializable id) { |
| | | if (id == null) { |
| | | return null; |
| | | } |
| | | for (String tableName : partitionSupport.listOrderItemLogTables()) { |
| | | AsnOrderItemLog record = partitionSupport.executeOnTable( |
| | | AsnLogPartitionSupport.ORDER_ITEM_LOG_TABLE, |
| | | tableName, |
| | | () -> baseMapper.selectById(id) |
| | | ); |
| | | if (record != null) { |
| | | return record; |
| | | } |
| | | } |
| | | return null; |
| | | } |
| | | |
| | | @Override |
| | | public List<AsnOrderItemLog> listByIds(Collection<? extends Serializable> idList) { |
| | | if (idList == null || idList.isEmpty()) { |
| | | return Collections.emptyList(); |
| | | } |
| | | Map<Long, AsnOrderItemLog> merged = new LinkedHashMap<>(); |
| | | for (String tableName : partitionSupport.listOrderItemLogTables()) { |
| | | List<AsnOrderItemLog> part = partitionSupport.executeOnTable( |
| | | AsnLogPartitionSupport.ORDER_ITEM_LOG_TABLE, |
| | | tableName, |
| | | () -> baseMapper.selectBatchIds(idList) |
| | | ); |
| | | mergeRecords(merged, part); |
| | | } |
| | | return sortRecords(new ArrayList<>(merged.values())); |
| | | } |
| | | |
| | | @Override |
| | | public List<AsnOrderItemLog> list() { |
| | | return list((Wrapper<AsnOrderItemLog>) null); |
| | | } |
| | | |
| | | @Override |
| | | public List<AsnOrderItemLog> list(Wrapper<AsnOrderItemLog> queryWrapper) { |
| | | Map<Long, AsnOrderItemLog> merged = new LinkedHashMap<>(); |
| | | for (String tableName : partitionSupport.listOrderItemLogTables()) { |
| | | List<AsnOrderItemLog> part = partitionSupport.executeOnTable( |
| | | AsnLogPartitionSupport.ORDER_ITEM_LOG_TABLE, |
| | | tableName, |
| | | () -> baseMapper.selectList(queryWrapper) |
| | | ); |
| | | mergeRecords(merged, part); |
| | | } |
| | | return sortRecords(new ArrayList<>(merged.values())); |
| | | } |
| | | |
| | | @Override |
| | | public AsnOrderItemLog getOne(Wrapper<AsnOrderItemLog> queryWrapper) { |
| | | List<AsnOrderItemLog> records = list(queryWrapper); |
| | | return records.isEmpty() ? null : records.get(0); |
| | | } |
| | | |
| | | @Override |
| | | public <E extends IPage<AsnOrderItemLog>> E page(E page, Wrapper<AsnOrderItemLog> queryWrapper) { |
| | | List<AsnOrderItemLog> records = list(queryWrapper); |
| | | long current = page.getCurrent() <= 0 ? 1L : page.getCurrent(); |
| | | long size = page.getSize() <= 0 ? records.size() : page.getSize(); |
| | | int fromIndex = (int) Math.min((current - 1) * size, records.size()); |
| | | int toIndex = (int) Math.min(fromIndex + size, records.size()); |
| | | page.setTotal(records.size()); |
| | | page.setRecords(fromIndex >= records.size() ? Collections.emptyList() : new ArrayList<>(records.subList(fromIndex, toIndex))); |
| | | return page; |
| | | } |
| | | |
| | | @Override |
| | | public boolean updateById(AsnOrderItemLog entity) { |
| | | if (entity == null || entity.getId() == null) { |
| | | return false; |
| | | } |
| | | String tableName = locateTableById(entity.getId()); |
| | | if (tableName == null) { |
| | | return false; |
| | | } |
| | | return partitionSupport.executeOnTable( |
| | | AsnLogPartitionSupport.ORDER_ITEM_LOG_TABLE, |
| | | tableName, |
| | | () -> super.updateById(entity) |
| | | ); |
| | | } |
| | | |
| | | @Override |
| | | public boolean removeById(Serializable id) { |
| | | if (id == null) { |
| | | return false; |
| | | } |
| | | String tableName = locateTableById(id); |
| | | if (tableName == null) { |
| | | return false; |
| | | } |
| | | return partitionSupport.executeOnTable( |
| | | AsnLogPartitionSupport.ORDER_ITEM_LOG_TABLE, |
| | | tableName, |
| | | () -> super.removeById(id) |
| | | ); |
| | | } |
| | | |
| | | @Override |
| | | public boolean removeByIds(Collection<?> list) { |
| | | if (list == null || list.isEmpty()) { |
| | | return false; |
| | | } |
| | | Map<String, List<Serializable>> groupedIds = new LinkedHashMap<>(); |
| | | for (Object idObj : list) { |
| | | if (!(idObj instanceof Serializable)) { |
| | | continue; |
| | | } |
| | | Serializable id = (Serializable) idObj; |
| | | String tableName = locateTableById(id); |
| | | if (tableName != null) { |
| | | groupedIds.computeIfAbsent(tableName, key -> new ArrayList<>()).add(id); |
| | | } |
| | | } |
| | | boolean removed = false; |
| | | for (Map.Entry<String, List<Serializable>> entry : groupedIds.entrySet()) { |
| | | Boolean partRemoved = partitionSupport.executeOnTable( |
| | | AsnLogPartitionSupport.ORDER_ITEM_LOG_TABLE, |
| | | entry.getKey(), |
| | | () -> super.removeByIds(entry.getValue()) |
| | | ); |
| | | removed = removed || Boolean.TRUE.equals(partRemoved); |
| | | } |
| | | return removed; |
| | | } |
| | | |
| | | @Override |
| | | public boolean remove(Wrapper<AsnOrderItemLog> queryWrapper) { |
| | | int affected = 0; |
| | | for (String tableName : partitionSupport.listOrderItemLogTables()) { |
| | | Integer count = partitionSupport.executeOnTable( |
| | | AsnLogPartitionSupport.ORDER_ITEM_LOG_TABLE, |
| | | tableName, |
| | | () -> baseMapper.delete(queryWrapper) |
| | | ); |
| | | affected += count == null ? 0 : count; |
| | | } |
| | | return SqlHelper.retBool(affected); |
| | | } |
| | | |
| | | private String locateTableById(Serializable id) { |
| | | for (String tableName : partitionSupport.listOrderItemLogTables()) { |
| | | AsnOrderItemLog record = partitionSupport.executeOnTable( |
| | | AsnLogPartitionSupport.ORDER_ITEM_LOG_TABLE, |
| | | tableName, |
| | | () -> baseMapper.selectById(id) |
| | | ); |
| | | if (record != null) { |
| | | return tableName; |
| | | } |
| | | } |
| | | return null; |
| | | } |
| | | |
| | | private void mergeRecords(Map<Long, AsnOrderItemLog> merged, List<AsnOrderItemLog> records) { |
| | | if (records == null || records.isEmpty()) { |
| | | return; |
| | | } |
| | | for (AsnOrderItemLog record : records) { |
| | | if (record == null || record.getId() == null) { |
| | | continue; |
| | | } |
| | | merged.putIfAbsent(record.getId(), record); |
| | | } |
| | | } |
| | | |
| | | private List<AsnOrderItemLog> sortRecords(List<AsnOrderItemLog> records) { |
| | | records.sort(Comparator.comparing(AsnOrderItemLog::getId, Comparator.nullsLast(Long::compareTo)).reversed()); |
| | | return records; |
| | | } |
| | | } |
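The `page(...)` override above merges rows from every partition in memory and then cuts a sub-list. The clamping logic can be sketched on its own — a simplified illustration with hypothetical names, not the service code itself:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch of the cross-partition paging strategy: after all matching
// rows are merged, a page is sliced out with the same fromIndex/toIndex
// clamping as the page(...) override, so an out-of-range page yields an
// empty result instead of an IndexOutOfBoundsException.
public class MergePagingDemo {

    static <T> List<T> slice(List<T> records, long current, long size) {
        long page = current <= 0 ? 1L : current;
        long pageSize = size <= 0 ? records.size() : size;
        int from = (int) Math.min((page - 1) * pageSize, records.size());
        int to = (int) Math.min(from + pageSize, records.size());
        return from >= records.size()
                ? Collections.emptyList()
                : new ArrayList<>(records.subList(from, to));
    }

    public static void main(String[] args) {
        List<Integer> ids = List.of(1, 2, 3, 4, 5, 6, 7);
        System.out.println(slice(ids, 2, 3)); // [4, 5, 6]
        System.out.println(slice(ids, 4, 3)); // [] — page beyond the data
    }
}
```

This trades memory for simplicity: it preserves existing pagination semantics without pushing `LIMIT/OFFSET` into each physical table, which is acceptable for history logs but would not scale to hot transactional tables.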
| | |
| | | package com.vincent.rsf.server.manager.service.impl; |
| | | |
| | | import com.baomidou.mybatisplus.core.conditions.Wrapper; |
| | | import com.baomidou.mybatisplus.core.conditions.query.LambdaQueryWrapper; |
| | | import com.baomidou.mybatisplus.core.metadata.IPage; |
| | | import com.vincent.rsf.framework.common.R; |
| | | import com.vincent.rsf.framework.exception.CoolException; |
| | | import com.vincent.rsf.server.manager.entity.WkOrder; |
| | |
| | | import com.vincent.rsf.server.manager.mapper.AsnOrderMapper; |
| | | import com.vincent.rsf.server.manager.mapper.AsnOrderLogMapper; |
| | | import com.vincent.rsf.server.manager.entity.AsnOrderLog; |
| | | import com.vincent.rsf.server.manager.partition.AsnLogPartitionSupport; |
| | | import com.vincent.rsf.server.manager.service.AsnOrderItemLogService; |
| | | import com.vincent.rsf.server.manager.service.AsnOrderItemService; |
| | | import com.vincent.rsf.server.manager.service.AsnOrderLogService; |
| | |
| | | import org.springframework.stereotype.Service; |
| | | import org.springframework.transaction.annotation.Transactional; |
| | | |
| | | import java.io.Serializable; |
| | | import java.util.ArrayList; |
| | | import java.util.Collection; |
| | | import java.util.Collections; |
| | | import java.util.Comparator; |
| | | import java.util.LinkedHashMap; |
| | | import java.util.List; |
| | | import java.util.Map; |
| | | import java.util.Objects; |
| | | |
| | | @Service("asnOrderLogService") |
| | | public class AsnOrderLogServiceImpl extends ServiceImpl<AsnOrderLogMapper, AsnOrderLog> implements AsnOrderLogService { |
| | | |
| | | @Autowired |
| | | private AsnLogPartitionSupport partitionSupport; |
| | | @Autowired |
| | | private AsnOrderItemLogService asnOrderItemLogService; |
| | | @Autowired |
| | |
| | | @Override |
| | | @Transactional(rollbackFor = Exception.class) |
| | | public R continueRecipt(Long id) { |
| | | AsnOrderLog orderLog = this.getById(id);
| | | if (Objects.isNull(orderLog)) { |
| | | throw new CoolException("单据不存在!!"); |
| | | } |
| | |
| | | .list(new LambdaQueryWrapper<AsnOrderItemLog>() |
| | | .eq(AsnOrderItemLog::getLogId, id)); |
| | | List<WkOrderItem> orderItems = new ArrayList<>(); |
| | | if (!Objects.isNull(itemLogs) && !itemLogs.isEmpty()) {
| | | for (AsnOrderItemLog itemLog : itemLogs) { |
| | | WkOrderItem item = new WkOrderItem(); |
| | | BeanUtils.copyProperties(itemLog, item); |
| | |
| | | |
| | | return R.ok(); |
| | | } |
| | | |
| | | @Override |
| | | public boolean save(AsnOrderLog entity) { |
| | | if (entity == null) { |
| | | return false; |
| | | } |
| | | String tableName = partitionSupport.resolveOrderLogTable(entity.getCreateTime()); |
| | | partitionSupport.ensureTableExists(AsnLogPartitionSupport.ORDER_LOG_TABLE, tableName); |
| | | return partitionSupport.executeOnTable(AsnLogPartitionSupport.ORDER_LOG_TABLE, tableName, () -> super.save(entity)); |
| | | } |
| | | |
| | | @Override |
| | | public boolean saveBatch(Collection<AsnOrderLog> entityList) { |
| | | return saveBatch(entityList, DEFAULT_BATCH_SIZE); |
| | | } |
| | | |
| | | @Override |
| | | public boolean saveBatch(Collection<AsnOrderLog> entityList, int batchSize) { |
| | | if (entityList == null || entityList.isEmpty()) { |
| | | return false; |
| | | } |
| | | Map<String, List<AsnOrderLog>> grouped = new LinkedHashMap<>(); |
| | | for (AsnOrderLog entity : entityList) { |
| | | String tableName = partitionSupport.resolveOrderLogTable(entity == null ? null : entity.getCreateTime()); |
| | | grouped.computeIfAbsent(tableName, key -> new ArrayList<>()).add(entity); |
| | | } |
| | | boolean success = true; |
| | | for (Map.Entry<String, List<AsnOrderLog>> entry : grouped.entrySet()) { |
| | | partitionSupport.ensureTableExists(AsnLogPartitionSupport.ORDER_LOG_TABLE, entry.getKey()); |
| | | Boolean saved = partitionSupport.executeOnTable( |
| | | AsnLogPartitionSupport.ORDER_LOG_TABLE, |
| | | entry.getKey(), |
| | | () -> super.saveBatch(entry.getValue(), batchSize) |
| | | ); |
| | | success = success && Boolean.TRUE.equals(saved); |
| | | } |
| | | return success; |
| | | } |
| | | |
| | | @Override |
| | | public AsnOrderLog getById(Serializable id) { |
| | | if (id == null) { |
| | | return null; |
| | | } |
| | | for (String tableName : partitionSupport.listOrderLogTables()) { |
| | | AsnOrderLog record = partitionSupport.executeOnTable( |
| | | AsnLogPartitionSupport.ORDER_LOG_TABLE, |
| | | tableName, |
| | | () -> baseMapper.selectById(id) |
| | | ); |
| | | if (record != null) { |
| | | return record; |
| | | } |
| | | } |
| | | return null; |
| | | } |
| | | |
| | | @Override |
| | | public List<AsnOrderLog> listByIds(Collection<? extends Serializable> idList) { |
| | | if (idList == null || idList.isEmpty()) { |
| | | return Collections.emptyList(); |
| | | } |
| | | Map<Long, AsnOrderLog> merged = new LinkedHashMap<>(); |
| | | for (String tableName : partitionSupport.listOrderLogTables()) { |
| | | List<AsnOrderLog> part = partitionSupport.executeOnTable( |
| | | AsnLogPartitionSupport.ORDER_LOG_TABLE, |
| | | tableName, |
| | | () -> baseMapper.selectBatchIds(idList) |
| | | ); |
| | | mergeRecords(merged, part); |
| | | } |
| | | return sortRecords(new ArrayList<>(merged.values())); |
| | | } |
| | | |
| | | @Override |
| | | public List<AsnOrderLog> list() { |
| | | return list((Wrapper<AsnOrderLog>) null); |
| | | } |
| | | |
| | | @Override |
| | | public List<AsnOrderLog> list(Wrapper<AsnOrderLog> queryWrapper) { |
| | | Map<Long, AsnOrderLog> merged = new LinkedHashMap<>(); |
| | | for (String tableName : partitionSupport.listOrderLogTables()) { |
| | | List<AsnOrderLog> part = partitionSupport.executeOnTable( |
| | | AsnLogPartitionSupport.ORDER_LOG_TABLE, |
| | | tableName, |
| | | () -> baseMapper.selectList(queryWrapper) |
| | | ); |
| | | mergeRecords(merged, part); |
| | | } |
| | | return sortRecords(new ArrayList<>(merged.values())); |
| | | } |
| | | |
| | | @Override |
| | | public AsnOrderLog getOne(Wrapper<AsnOrderLog> queryWrapper) { |
| | | List<AsnOrderLog> records = list(queryWrapper); |
| | | return records.isEmpty() ? null : records.get(0); |
| | | } |
| | | |
| | | @Override |
| | | public <E extends IPage<AsnOrderLog>> E page(E page, Wrapper<AsnOrderLog> queryWrapper) { |
| | | List<AsnOrderLog> records = list(queryWrapper); |
| | | long current = page.getCurrent() <= 0 ? 1L : page.getCurrent(); |
| | | long size = page.getSize() <= 0 ? records.size() : page.getSize(); |
| | | int fromIndex = (int) Math.min((current - 1) * size, records.size()); |
| | | int toIndex = (int) Math.min(fromIndex + size, records.size()); |
| | | page.setTotal(records.size()); |
| | | page.setRecords(fromIndex >= records.size() ? Collections.emptyList() : new ArrayList<>(records.subList(fromIndex, toIndex))); |
| | | return page; |
| | | } |
| | | |
| | | @Override |
| | | public boolean updateById(AsnOrderLog entity) { |
| | | if (entity == null || entity.getId() == null) { |
| | | return false; |
| | | } |
| | | String tableName = locateTableById(entity.getId()); |
| | | if (tableName == null) { |
| | | return false; |
| | | } |
| | | return partitionSupport.executeOnTable( |
| | | AsnLogPartitionSupport.ORDER_LOG_TABLE, |
| | | tableName, |
| | | () -> super.updateById(entity) |
| | | ); |
| | | } |
| | | |
| | | @Override |
| | | public boolean removeById(Serializable id) { |
| | | if (id == null) { |
| | | return false; |
| | | } |
| | | String tableName = locateTableById(id); |
| | | if (tableName == null) { |
| | | return false; |
| | | } |
| | | return partitionSupport.executeOnTable( |
| | | AsnLogPartitionSupport.ORDER_LOG_TABLE, |
| | | tableName, |
| | | () -> super.removeById(id) |
| | | ); |
| | | } |
| | | |
| | | @Override |
| | | public boolean removeByIds(Collection<?> list) { |
| | | if (list == null || list.isEmpty()) { |
| | | return false; |
| | | } |
| | | Map<String, List<Serializable>> groupedIds = new LinkedHashMap<>(); |
| | | for (Object idObj : list) { |
| | | if (!(idObj instanceof Serializable)) { |
| | | continue; |
| | | } |
| | | Serializable id = (Serializable) idObj; |
| | | String tableName = locateTableById(id); |
| | | if (tableName != null) { |
| | | groupedIds.computeIfAbsent(tableName, key -> new ArrayList<>()).add(id); |
| | | } |
| | | } |
| | | boolean removed = false; |
| | | for (Map.Entry<String, List<Serializable>> entry : groupedIds.entrySet()) { |
| | | Boolean partRemoved = partitionSupport.executeOnTable( |
| | | AsnLogPartitionSupport.ORDER_LOG_TABLE, |
| | | entry.getKey(), |
| | | () -> super.removeByIds(entry.getValue()) |
| | | ); |
| | | removed = removed || Boolean.TRUE.equals(partRemoved); |
| | | } |
| | | return removed; |
| | | } |
| | | |
| | | private String locateTableById(Serializable id) { |
| | | for (String tableName : partitionSupport.listOrderLogTables()) { |
| | | AsnOrderLog record = partitionSupport.executeOnTable( |
| | | AsnLogPartitionSupport.ORDER_LOG_TABLE, |
| | | tableName, |
| | | () -> baseMapper.selectById(id) |
| | | ); |
| | | if (record != null) { |
| | | return tableName; |
| | | } |
| | | } |
| | | return null; |
| | | } |
| | | |
| | | private void mergeRecords(Map<Long, AsnOrderLog> merged, List<AsnOrderLog> records) { |
| | | if (records == null || records.isEmpty()) { |
| | | return; |
| | | } |
| | | for (AsnOrderLog record : records) { |
| | | if (record == null || record.getId() == null) { |
| | | continue; |
| | | } |
| | | merged.putIfAbsent(record.getId(), record); |
| | | } |
| | | } |
| | | |
| | | private List<AsnOrderLog> sortRecords(List<AsnOrderLog> records) { |
| | | records.sort(Comparator.comparing(AsnOrderLog::getId, Comparator.nullsLast(Long::compareTo)).reversed()); |
| | | return records; |
| | | } |
| | | } |
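The partition-aware service above resolves a record's physical table by probing every half-year table in turn. For new writes, the target table can instead be derived directly from the record's create time using the naming convention stated in the migration script (`man_asn_order_log_YYYY_h1` / `_h2`). A minimal sketch, with class and method names that are illustrative rather than taken from the existing `rsf-server` code:

```java
import java.time.LocalDate;

// Sketch only: derives the half-year physical table name for a create date,
// following the man_asn_order_log_YYYY_h1 / _h2 convention. The class and
// method names are hypothetical, not part of the existing codebase.
final class AsnLogTableNames {

    static final String ORDER_LOG_BASE = "man_asn_order_log";

    private AsnLogTableNames() {
    }

    // January-June -> h1, July-December -> h2.
    static String orderLogTableFor(LocalDate createDate) {
        String half = createDate.getMonthValue() <= 6 ? "h1" : "h2";
        return ORDER_LOG_BASE + "_" + createDate.getYear() + "_" + half;
    }
}
```

With the target table known up front, a write path can call `partitionSupport.executeOnTable` once instead of scanning each partition the way `locateTableById` does for deletes.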
| | |
| | | logs.add(itemLog); |
| | | }); |
| | | |
| | | if (!asnOrderItemLogService.saveBatchToDate(logs, orderLog.getCreateTime())) {
| | | throw new CoolException("通知单明细历史档保存失败!!"); |
| | | } |
| | | if (!asnOrderItemService.remove(new LambdaQueryWrapper<WkOrderItem>().eq(WkOrderItem::getOrderId, order.getId()))) { |
| | |
| | | new LinkedHashSet<>(Arrays.asList("left", "center", "right")) |
| | | ); |
| | | |
| | | private static final Set<String> SUPPORTED_DOCUMENT_LOGO_POSITIONS = Collections.unmodifiableSet( |
| | | new LinkedHashSet<>(Arrays.asList("left", "right")) |
| | | ); |
| | | |
| | | @Override |
| | | public List<OrderPrintTemplate> listCurrentTenantTemplates(String type) { |
| | | String normalizedType = normalizeTemplateType(type); |
| | |
| | | ensureNumber(page, "marginBottom", "下边距"); |
| | | ensureNumber(page, "marginLeft", "左边距"); |
| | | |
| | | if (root.getBooleanValue("showLogo")) { |
| | | String logoSrc = normalizeText(root.getString("logoSrc")); |
| | | if (logoSrc.isEmpty()) { |
| | | throw new CoolException("启用Logo时必须上传Logo图片"); |
| | | } |
| | | String logoPosition = normalizeText(root.getString("logoPosition")); |
| | | if (!logoPosition.isEmpty() && !SUPPORTED_DOCUMENT_LOGO_POSITIONS.contains(logoPosition)) { |
| | | throw new CoolException("Logo位置仅支持 left 或 right"); |
| | | } |
| | | getPositiveNumber(root, "logoWidth", "Logo宽度"); |
| | | } |
| | | |
| | | validateDocumentFields(root.getJSONArray("headerFields"), "页头字段", false); |
| | | validateDocumentFields(root.getJSONArray("tableColumns"), "明细列", true); |
| | | validateDocumentFields(root.getJSONArray("footerFields"), "页尾字段", false); |
| New file |
| | |
| | | -- ASN history log semi-annual partitioning |
| | | -- Generated: 2026-04-22 |
| | | -- Scope: |
| | | -- - man_asn_order_log |
| | | -- - man_asn_order_item_log |
| | | -- |
| | | -- Notes: |
| | | -- 1. New application code writes to physical half-year tables: |
| | | -- man_asn_order_log_YYYY_h1 / h2 |
| | | -- man_asn_order_item_log_YYYY_h1 / h2 |
| | | -- 2. New writes use application-generated global IDs. |
| | | -- 3. This script keeps the original logical tables available as rollback backups. |
| | | |
| | | -- NOTE: On MySQL 5.7, the CREATE/ALTER statements below commit implicitly and
| | | -- cannot be rolled back; only the data-migration INSERTs in section 3 are
| | | -- wrapped in an explicit transaction.
| | | |
| | | -- --------------------------------------------------------------------------- |
| | | -- 1) Create current / near-term physical tables from existing logical schema |
| | | -- --------------------------------------------------------------------------- |
| | | |
| | | CREATE TABLE IF NOT EXISTS `man_asn_order_log_2025_h1` LIKE `man_asn_order_log`; |
| | | CREATE TABLE IF NOT EXISTS `man_asn_order_log_2025_h2` LIKE `man_asn_order_log`; |
| | | CREATE TABLE IF NOT EXISTS `man_asn_order_log_2026_h1` LIKE `man_asn_order_log`; |
| | | CREATE TABLE IF NOT EXISTS `man_asn_order_log_2026_h2` LIKE `man_asn_order_log`; |
| | | |
| | | CREATE TABLE IF NOT EXISTS `man_asn_order_item_log_2025_h1` LIKE `man_asn_order_item_log`; |
| | | CREATE TABLE IF NOT EXISTS `man_asn_order_item_log_2025_h2` LIKE `man_asn_order_item_log`; |
| | | CREATE TABLE IF NOT EXISTS `man_asn_order_item_log_2026_h1` LIKE `man_asn_order_item_log`; |
| | | CREATE TABLE IF NOT EXISTS `man_asn_order_item_log_2026_h2` LIKE `man_asn_order_item_log`; |
| | | |
| | | -- Optional: pre-create next half year to avoid rollover gaps |
| | | CREATE TABLE IF NOT EXISTS `man_asn_order_log_2027_h1` LIKE `man_asn_order_log`; |
| | | CREATE TABLE IF NOT EXISTS `man_asn_order_item_log_2027_h1` LIKE `man_asn_order_item_log`; |
| | | |
| | | -- --------------------------------------------------------------------------- |
| | | -- 2) Align ID column type for application-generated global IDs |
| | | -- --------------------------------------------------------------------------- |
| | | -- Keep BIGINT and drop AUTO_INCREMENT semantics on physical tables. |
| | | |
| | | ALTER TABLE `man_asn_order_log_2025_h1` MODIFY COLUMN `id` BIGINT NOT NULL COMMENT 'ID'; |
| | | ALTER TABLE `man_asn_order_log_2025_h2` MODIFY COLUMN `id` BIGINT NOT NULL COMMENT 'ID'; |
| | | ALTER TABLE `man_asn_order_log_2026_h1` MODIFY COLUMN `id` BIGINT NOT NULL COMMENT 'ID'; |
| | | ALTER TABLE `man_asn_order_log_2026_h2` MODIFY COLUMN `id` BIGINT NOT NULL COMMENT 'ID'; |
| | | ALTER TABLE `man_asn_order_log_2027_h1` MODIFY COLUMN `id` BIGINT NOT NULL COMMENT 'ID'; |
| | | |
| | | ALTER TABLE `man_asn_order_item_log_2025_h1` MODIFY COLUMN `id` BIGINT NOT NULL COMMENT 'ID'; |
| | | ALTER TABLE `man_asn_order_item_log_2025_h2` MODIFY COLUMN `id` BIGINT NOT NULL COMMENT 'ID'; |
| | | ALTER TABLE `man_asn_order_item_log_2026_h1` MODIFY COLUMN `id` BIGINT NOT NULL COMMENT 'ID'; |
| | | ALTER TABLE `man_asn_order_item_log_2026_h2` MODIFY COLUMN `id` BIGINT NOT NULL COMMENT 'ID'; |
| | | ALTER TABLE `man_asn_order_item_log_2027_h1` MODIFY COLUMN `id` BIGINT NOT NULL COMMENT 'ID'; |
| | | |
| | | -- --------------------------------------------------------------------------- |
| | | -- 3) Migrate existing logical-table data by create_time |
| | | -- --------------------------------------------------------------------------- |
| | | -- Run only once in a maintenance window. |
| | | -- If the source tables contain data outside the ranges below, add more inserts |
| | | -- using the same half-year boundaries before cleanup.
| | |
| | | START TRANSACTION;
| | | |
| | | INSERT INTO `man_asn_order_log_2025_h1` |
| | | SELECT * FROM `man_asn_order_log` |
| | | WHERE `create_time` >= '2025-01-01 00:00:00' AND `create_time` < '2025-07-01 00:00:00'; |
| | | |
| | | INSERT INTO `man_asn_order_log_2025_h2` |
| | | SELECT * FROM `man_asn_order_log` |
| | | WHERE `create_time` >= '2025-07-01 00:00:00' AND `create_time` < '2026-01-01 00:00:00'; |
| | | |
| | | INSERT INTO `man_asn_order_log_2026_h1` |
| | | SELECT * FROM `man_asn_order_log` |
| | | WHERE `create_time` >= '2026-01-01 00:00:00' AND `create_time` < '2026-07-01 00:00:00'; |
| | | |
| | | INSERT INTO `man_asn_order_log_2026_h2` |
| | | SELECT * FROM `man_asn_order_log` |
| | | WHERE `create_time` >= '2026-07-01 00:00:00' AND `create_time` < '2027-01-01 00:00:00'; |
| | | |
| | | INSERT INTO `man_asn_order_item_log_2025_h1` |
| | | SELECT * FROM `man_asn_order_item_log` |
| | | WHERE `create_time` >= '2025-01-01 00:00:00' AND `create_time` < '2025-07-01 00:00:00'; |
| | | |
| | | INSERT INTO `man_asn_order_item_log_2025_h2` |
| | | SELECT * FROM `man_asn_order_item_log` |
| | | WHERE `create_time` >= '2025-07-01 00:00:00' AND `create_time` < '2026-01-01 00:00:00'; |
| | | |
| | | INSERT INTO `man_asn_order_item_log_2026_h1` |
| | | SELECT * FROM `man_asn_order_item_log` |
| | | WHERE `create_time` >= '2026-01-01 00:00:00' AND `create_time` < '2026-07-01 00:00:00'; |
| | | |
| | | INSERT INTO `man_asn_order_item_log_2026_h2` |
| | | SELECT * FROM `man_asn_order_item_log` |
| | | WHERE `create_time` >= '2026-07-01 00:00:00' AND `create_time` < '2027-01-01 00:00:00'; |
| | | |
| | | -- --------------------------------------------------------------------------- |
| | | -- 4) Validation examples |
| | | -- --------------------------------------------------------------------------- |
| | | |
| | | -- Compare source and target counts before cleanup |
| | | -- SELECT COUNT(*) FROM man_asn_order_log; |
| | | -- SELECT |
| | | -- (SELECT COUNT(*) FROM man_asn_order_log_2025_h1) + |
| | | -- (SELECT COUNT(*) FROM man_asn_order_log_2025_h2) + |
| | | -- (SELECT COUNT(*) FROM man_asn_order_log_2026_h1) + |
| | | -- (SELECT COUNT(*) FROM man_asn_order_log_2026_h2) AS migrated_total; |
| | | |
| | | -- SELECT COUNT(*) FROM man_asn_order_item_log; |
| | | -- SELECT |
| | | -- (SELECT COUNT(*) FROM man_asn_order_item_log_2025_h1) + |
| | | -- (SELECT COUNT(*) FROM man_asn_order_item_log_2025_h2) + |
| | | -- (SELECT COUNT(*) FROM man_asn_order_item_log_2026_h1) + |
| | | -- (SELECT COUNT(*) FROM man_asn_order_item_log_2026_h2) AS migrated_total; |
| | | |
| | | -- --------------------------------------------------------------------------- |
| | | -- 5) Optional cleanup / rollback guidance |
| | | -- --------------------------------------------------------------------------- |
| | | -- After validation, either:
| | | --   A. leave the original logical tables untouched as rollback copies (the
| | | --      INSERTs above copy data rather than move it), or
| | | --   B. rename them to *_bak_20260422 and recreate empty logical tables if any
| | | --      legacy code still references the original names.
| | | -- |
| | | -- Example backup rename: |
| | | -- RENAME TABLE `man_asn_order_log` TO `man_asn_order_log_bak_20260422`; |
| | | -- RENAME TABLE `man_asn_order_item_log` TO `man_asn_order_item_log_bak_20260422`; |
| | | |
| | | COMMIT; |
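Every migration INSERT above uses the same half-open `[start, end)` window, so if the source tables turn out to hold data outside 2025-2026, the extra boundaries can be computed mechanically rather than typed by hand. A sketch under that assumption, with hypothetical names:

```java
import java.time.LocalDateTime;

// Sketch only: produces the half-open [start, end) window matching the
// WHERE clauses in the migration INSERTs above. Names are illustrative.
final class HalfYearWindow {

    final LocalDateTime startInclusive;
    final LocalDateTime endExclusive;

    private HalfYearWindow(LocalDateTime startInclusive, LocalDateTime endExclusive) {
        this.startInclusive = startInclusive;
        this.endExclusive = endExclusive;
    }

    // half must be 1 (Jan 1 - Jul 1) or 2 (Jul 1 - next Jan 1).
    static HalfYearWindow of(int year, int half) {
        if (half != 1 && half != 2) {
            throw new IllegalArgumentException("half must be 1 or 2, got " + half);
        }
        LocalDateTime boundary = LocalDateTime.of(year, 7, 1, 0, 0);
        return half == 1
                ? new HalfYearWindow(LocalDateTime.of(year, 1, 1, 0, 0), boundary)
                : new HalfYearWindow(boundary, LocalDateTime.of(year + 1, 1, 1, 0, 0));
    }
}
```

Using the same boundary generator for both the SQL windows and any application-side routing keeps the two from drifting apart at the half-year rollover.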