# Manual Revision Backend Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Add backend support for the mid-end manual revision flow: export the current asset, register offline-edited revision assets as a single version chain, and require an explicit confirm action before the waiting workflow continues.

**Architecture:** Reuse the existing `waiting_review` Temporal pause in `MidEndPipelineWorkflow` instead of inventing a second workflow state machine. Model manual revisions as first-class asset versions plus a richer review-task status, then implement dedicated HTTP endpoints for revision registration, revision-chain queries, and confirm-continue. The final confirm endpoint will bridge back into the existing review signal by approving the latest revision asset explicitly.

**Tech Stack:** FastAPI, Pydantic, SQLAlchemy async ORM, SQLite, Temporal Python SDK, pytest

---
### Task 1: Lock the desired backend behavior with failing integration tests
**Files:**
- Modify: `tests/test_api.py`
- [ ] **Step 1: Add a failing test for registering a manual revision asset while the order is waiting for review**
Add a new integration test next to the existing mid-end review tests that:
- creates a `semi_pro` order
- waits for `waiting_review`
- calls the new revision registration endpoint
- asserts the response returns a new asset id, version number, parent asset id, and review task state `revision_uploaded`
- [ ] **Step 2: Add a failing test for confirming the revision and letting the existing workflow finish**
Add a second integration test that:
- creates a `semi_pro` order
- waits for `waiting_review`
- registers a revision asset
- calls the new confirm endpoint
- waits for the workflow result
- asserts the workflow succeeds and the order final asset is derived from the revision asset rather than the original QC candidate
- [ ] **Step 3: Add a failing test for listing the single-line revision chain**
Add an integration test that:
- creates a `semi_pro` order
- waits for `waiting_review`
- registers two manual revisions in sequence
- calls the revision chain query endpoint
- asserts the chain order is `v1 -> v2 -> v3` with correct parent-child relationships
- [ ] **Step 4: Run the three new tests and verify they fail for missing routes and schema**
Run:
```bash
pytest tests/test_api.py -k "manual_revision or revision_chain or confirm_revision" -v
```
Expected:
- FastAPI returns `404` or `422`
- no revision assets are created
- the current codebase does not satisfy the new flow yet
- [ ] **Step 5: Commit the failing tests**
```bash
git add tests/test_api.py
git commit -m "test: cover manual revision review flow"
```
### Task 2: Add persistence for revision assets and review-task substate
**Files:**
- Modify: `app/domain/enums.py`
- Modify: `app/infra/db/models/asset.py`
- Modify: `app/infra/db/models/review_task.py`
- Modify: `app/infra/db/models/order.py`
- Modify: `app/infra/db/session.py`
- [ ] **Step 1: Extend enums for manual revision support**
Update `app/domain/enums.py` with:
- a new asset type such as `MANUAL_REVISION`
- a new review-task status such as `REVISION_UPLOADED`
Keep `OrderStatus.WAITING_REVIEW` unchanged so the workflow can stay paused in the same state.
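The extension might look like the sketch below. The existing member names here are assumptions (check `app/domain/enums.py` for the real ones); only the two new members are what this step adds.

```python
from enum import Enum

class AssetType(str, Enum):
    QC_CANDIDATE = "qc_candidate"        # assumed existing member
    MANUAL_REVISION = "manual_revision"  # new: offline-edited revision asset

class ReviewTaskStatus(str, Enum):
    PENDING = "pending"                      # assumed existing member
    REVISION_UPLOADED = "revision_uploaded"  # new: revision registered, not yet confirmed
    SUBMITTED = "submitted"                  # assumed existing member
```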
- [ ] **Step 2: Add version-chain columns to `AssetORM`**
Update `app/infra/db/models/asset.py` to add:
- `parent_asset_id: int | None`
- `root_asset_id: int | None`
- `version_no: int`
- `is_current_version: bool`
Add a self-referential relationship only if it stays simple; otherwise keep reads explicit in services to avoid ORM complexity.
- [ ] **Step 3: Add review-task fields for the current revision under review**
Update `app/infra/db/models/review_task.py` to add:
- `latest_revision_asset_id: int | None`
- `resume_asset_id: int | None`
The open review task should remain the single source of truth for:
- whether the order is still in `waiting_review`
- whether a revision was uploaded but not yet confirmed
- which asset should be used on confirm
- [ ] **Step 4: Keep the model changes compatible with the current bootstrapping approach**
Update `app/infra/db/session.py` only if imports need to change. Do not introduce Alembic in this MVP plan; this repo currently uses `Base.metadata.create_all`, so keep the first implementation aligned with the existing bootstrap model.
- [ ] **Step 5: Run the focused tests again and verify failure has moved from schema absence to route/service absence**
Run:
```bash
pytest tests/test_api.py -k "manual_revision or revision_chain or confirm_revision" -v
```
Expected:
- tables boot successfully in test setup
- failures now point at missing service logic or missing endpoints
- [ ] **Step 6: Commit the persistence changes**
```bash
git add app/domain/enums.py app/infra/db/models/asset.py app/infra/db/models/review_task.py app/infra/db/models/order.py app/infra/db/session.py
git commit -m "feat: add persistence for manual revision state"
```
### Task 3: Add revision registration and revision-chain query APIs
**Files:**
- Create: `app/api/schemas/revision.py`
- Create: `app/application/services/revision_service.py`
- Create: `app/api/routers/revisions.py`
- Modify: `app/main.py`
- Modify: `app/api/schemas/asset.py`
- Modify: `app/application/services/asset_service.py`
- [ ] **Step 1: Define revision request and response schemas**
Create `app/api/schemas/revision.py` with:
- `RegisterRevisionRequest`
- `RegisterRevisionResponse`
- `RevisionChainItem`
- `RevisionChainResponse`
- `ConfirmRevisionResponse`
For the MVP, use JSON fields instead of multipart upload:
- `parent_asset_id`
- `uploaded_uri`
- `reviewer_id`
- `comment`
This keeps the current mock-backed architecture coherent. Real object storage upload can be a later phase.
- [ ] **Step 2: Implement `RevisionService.register_revision`**
Create `app/application/services/revision_service.py` with logic that:
- loads the order and verifies it is `waiting_review`
- loads the active pending review task
- validates `parent_asset_id` belongs to the order
- creates a new `AssetORM` row with `asset_type=MANUAL_REVISION`
- computes `root_asset_id` and `version_no`
- marks previous revision asset as `is_current_version=False`
- updates the active review task to `REVISION_UPLOADED`
- sets `latest_revision_asset_id` and `resume_asset_id` to the new asset
- [ ] **Step 3: Implement `RevisionService.list_revision_chain`**
Query all order assets that belong to the same root chain, ordered by `version_no ASC`, and serialize them for the UI.
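Listing then reduces to filtering on the chain root and ordering by version, equivalent to `ORDER BY version_no ASC` in the real SQLAlchemy query. A sketch over plain dicts (field names assumed from Task 2):

```python
def list_revision_chain(order_assets: list[dict], root_asset_id: int) -> list[dict]:
    """Return the assets of one chain, oldest version first.

    A v1 asset stores root_asset_id=None, so its own id counts as the root.
    """
    chain = [
        a for a in order_assets
        if (a["root_asset_id"] or a["id"]) == root_asset_id
    ]
    return sorted(chain, key=lambda a: a["version_no"])
```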
- [ ] **Step 4: Expose the revision routes**
Create `app/api/routers/revisions.py` with:
- `POST /api/v1/orders/{order_id}/revisions`
- `GET /api/v1/orders/{order_id}/revisions`
Wire the router in `app/main.py`.
- [ ] **Step 5: Extend asset serialization for the UI**
Update `app/api/schemas/asset.py` to expose:
- `parent_asset_id`
- `root_asset_id`
- `version_no`
- `is_current_version`
Update `app/application/services/asset_service.py` if ordering needs to prefer `version_no` over raw `created_at`.
- [ ] **Step 6: Run the revision registration and chain tests**
Run:
```bash
pytest tests/test_api.py -k "manual_revision or revision_chain" -v
```
Expected:
- registration test passes
- chain test passes
- confirm test still fails because continue logic is not implemented yet
- [ ] **Step 7: Commit the revision API work**
```bash
git add app/api/schemas/revision.py app/application/services/revision_service.py app/api/routers/revisions.py app/main.py app/api/schemas/asset.py app/application/services/asset_service.py tests/test_api.py
git commit -m "feat: add manual revision registration and chain queries"
```
### Task 4: Add explicit confirm-continue that reuses the existing review signal
**Files:**
- Modify: `app/api/schemas/review.py`
- Modify: `app/api/routers/reviews.py`
- Modify: `app/application/services/review_service.py`
- Modify: `app/workers/workflows/types.py`
- [ ] **Step 1: Add confirm request and response models**
Extend `app/api/schemas/review.py` with:
- `ConfirmRevisionRequest`
- `ConfirmRevisionResponse`
Fields should include:
- `reviewer_id`
- `comment`
Do not ask the caller for `selected_asset_id`; the backend should derive it from the active review task's `resume_asset_id`.
- [ ] **Step 2: Implement `confirm_revision_continue` in `ReviewService`**
Add a new service method that:
- verifies the order is still `waiting_review`
- loads the active review task
- rejects the call unless task status is `REVISION_UPLOADED`
- verifies `resume_asset_id` is present
- marks the task as `SUBMITTED`
- reuses `WorkflowService.signal_review(...)` with:
- `decision=APPROVE`
- `selected_asset_id=resume_asset_id`
- `comment` prefixed or structured to indicate manual revision confirmation
This is the key MVP simplification: the workflow does not need a new signal type because it already knows how to export an explicitly selected asset.
- [ ] **Step 3: Expose a dedicated confirm route**
Add a route such as:
```text
POST /api/v1/reviews/{order_id}/confirm-revision
```
Keep it separate from `/submit` so the API remains clear and the front-end does not need to fake a normal approve call.
- [ ] **Step 4: Normalize the Temporal payload type if needed**
Update `app/workers/workflows/types.py` only if the review payload needs an optional metadata field such as `source="manual_revision_confirm"`. Skip this if the current payload is already sufficient.
- [ ] **Step 5: Run the confirm-flow test**
Run:
```bash
pytest tests/test_api.py -k "confirm_revision" -v
```
Expected:
- workflow resumes from the existing `waiting_review`
- export uses the revision asset id
- order finishes as `succeeded`
- [ ] **Step 6: Commit the confirm flow**
```bash
git add app/api/schemas/review.py app/api/routers/reviews.py app/application/services/review_service.py app/workers/workflows/types.py tests/test_api.py
git commit -m "feat: confirm manual revision and resume workflow"
```
### Task 5: Surface revision state in order, queue, and workflow responses
**Files:**
- Modify: `app/api/schemas/order.py`
- Modify: `app/api/schemas/review.py`
- Modify: `app/api/schemas/workflow.py`
- Modify: `app/application/services/order_service.py`
- Modify: `app/application/services/review_service.py`
- Modify: `app/application/services/workflow_service.py`
- [ ] **Step 1: Extend order detail response**
Update `OrderDetailResponse` to include:
- `current_revision_asset_id`
- `current_revision_version`
- `revision_count`
- `review_task_status`
- [ ] **Step 2: Extend pending review response**
Update `PendingReviewResponse` to include:
- `review_status`
- `latest_revision_asset_id`
- `revision_count`
This is what the new queue UI needs for labels like `可人工介入` ("manual intervention available") and `待确认回流` ("awaiting confirmation to resume").
- [ ] **Step 3: Extend workflow/status response only with revision summary, not duplicated chain detail**
Update `WorkflowStatusResponse` or add a nested summary object with:
- `latest_revision_asset_id`
- `latest_revision_version`
- `pending_manual_confirm: bool`
Do not duplicate the full revision chain here; the dedicated revision-chain endpoint already covers that.
- [ ] **Step 4: Implement the response assembly in services**
Update:
- `OrderService.get_order`
- `ReviewService.list_pending_reviews`
- `WorkflowService.get_workflow_status`
Use the current open review task plus current-version asset rows to compute the response fields.
- [ ] **Step 5: Add or update tests for enriched responses**
Extend `tests/test_api.py` assertions so the new endpoints and existing endpoints expose the fields required by the designed UI.
- [ ] **Step 6: Run the full API test module**
Run:
```bash
pytest tests/test_api.py -v
```
Expected:
- existing approve/rerun tests still pass
- new manual revision tests pass
- no regression in low-end flow
- [ ] **Step 7: Commit the response-shape changes**
```bash
git add app/api/schemas/order.py app/api/schemas/review.py app/api/schemas/workflow.py app/application/services/order_service.py app/application/services/review_service.py app/application/services/workflow_service.py tests/test_api.py
git commit -m "feat: expose manual revision state in API responses"
```
### Task 6: Documentation and final verification
**Files:**
- Modify: `README.md`
- Modify: `docs/superpowers/specs/2026-03-27-review-workbench-design.md`
- [ ] **Step 1: Document the manual revision API flow**
Update `README.md` with:
- revision registration endpoint
- revision chain endpoint
- confirm-revision endpoint
- the fact that current MVP uses URI registration instead of real binary upload
- [ ] **Step 2: Sync the spec wording with the implemented API names**
Update the design spec only where route names or payload names need to match the code.
- [ ] **Step 3: Run the complete test suite**
Run:
```bash
pytest -q
```
Expected:
- all existing tests pass
- new manual revision flow is covered
- [ ] **Step 4: Do a quick endpoint smoke pass**
Run:
```bash
pytest tests/test_api.py::test_mid_end_order_waits_review_then_approves -v
pytest tests/test_api.py -k "manual_revision or confirm_revision" -v
```
Expected:
- baseline approve flow still works
- manual revision register/confirm flow works
- [ ] **Step 5: Commit docs and verification updates**
```bash
git add README.md docs/superpowers/specs/2026-03-27-review-workbench-design.md
git commit -m "docs: describe manual revision backend flow"
```
## Notes for the Implementer
- Keep the first implementation scoped to `semi_pro` and the existing single review pause.
- Do not add a second Temporal workflow or a second review wait state in this MVP.
- Do not implement real file storage upload in this pass; register an uploaded URI or mock URI first.
- Keep the version chain single-line. Reject requests that try to branch from a non-current version.
- If a persistent SQLite database already exists locally, schema changes may require deleting the dev DB before rerunning because this repo currently has no migration system.