# Deploying GestureLSM to Hugging Face Spaces

This directory contains a minimal scaffold for running the `demo.py` Gradio UI inside a Hugging Face Space. Copy the files into a new Space repository (or push this folder as-is) and provide the model checkpoints via the Hugging Face Hub so the app can download them at startup.

## 1. Create the Space
1. In your Hugging Face account, click **New Space**.
2. Choose a Space name, set **SDK** to **Gradio**, and select **CPU Basic** hardware.
3. Leave the default visibility or mark it **Private** while testing.

## 2. Populate the Space repository
Upload the following from this folder to the Space:

- `app.py` – boots the Gradio interface, downloads weights if available, and ensures output folders exist (see the boot-sequence sketch below).
- `requirements.txt` – Python dependencies.
- `packages.txt` – system packages (ffmpeg + openfst).
- `prepare_assets.py` (optional helper described below).
- Any configs, sample audio, and auxiliary data your demo needs (e.g. `configs/`, `demo/examples/`, `mean_std/`).

> **Tip**: keep the repository lightweight. Large checkpoints should live in a separate dataset repo and be fetched at runtime.
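
For orientation, here is a minimal sketch of the boot sequence the `app.py` bullet above describes. The folder names, the `demo` import, and the attribute it launches are assumptions for illustration; the scaffold's actual file may differ.

```python
# Hypothetical outline of app.py's startup flow (illustrative, not the shipped file).
import os

def ensure_output_dirs() -> None:
    # demo.py writes rendered videos and NPZ files; create the folders up front.
    for folder in ("outputs", "ckpt", "mean_std"):  # assumed names
        os.makedirs(folder, exist_ok=True)

def main() -> None:
    ensure_output_dirs()
    # Weight download would run here; see the snapshot_download sketch in section 3.
    import demo                       # assumed: demo.py builds the Gradio UI at import time
    demo.interface.launch(server_name="0.0.0.0", server_port=7860)  # assumed attribute name

if __name__ == "__main__":
    main()
```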

## 3. Host the checkpoints
1. Create a private **dataset** repo on Hugging Face (e.g. `username/gesturelsm-assets`).
2. Upload the required files:
   - `ckpt/net_300000_upper.pth`
   - `ckpt/net_300000_lower.pth`
   - `ckpt/net_300000_hands.pth`
   - `ckpt/net_300000_lower_trans.pth`
   - `ckpt/new_540_shortcut.bin`
   - `mean_std/*.npy`
3. In your Space's **Settings → Variables and secrets**, add a variable named `HF_GESTURELSM_WEIGHTS_REPO` whose value is the dataset repo ID (for example `username/gesturelsm-assets`). Because the dataset repo is private, also add a read-access token as a secret (for example `HF_TOKEN`) so the download can authenticate.

When the Space boots, `app.py` calls `snapshot_download` to pull the checkpoints and statistics into place, preserving the `ckpt/` and `mean_std/` layout that `demo.py` expects.
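
A minimal sketch of what that download step might look like is below; the function name, the `allow_patterns` filter, and the `HF_TOKEN` secret are assumptions rather than part of the scaffold.

```python
# Hypothetical download helper for app.py (adjust paths to match the real script).
import os
from huggingface_hub import snapshot_download

def fetch_weights(target_dir: str = ".") -> None:
    """Pull checkpoints and statistics from the dataset repo named in the Space settings."""
    repo_id = os.environ.get("HF_GESTURELSM_WEIGHTS_REPO")
    if not repo_id:
        print("HF_GESTURELSM_WEIGHTS_REPO not set; expecting local checkpoints.")
        return
    snapshot_download(
        repo_id=repo_id,
        repo_type="dataset",
        local_dir=target_dir,                     # keeps the ckpt/ and mean_std/ layout
        allow_patterns=["ckpt/*", "mean_std/*"],  # skip anything else stored in the repo
        token=os.environ.get("HF_TOKEN"),         # assumed secret name for the private repo
    )
```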

## 4. Optional asset preparation script
If you need to perform additional setup (e.g. copying assets after download), you can push the provided `prepare_assets.py` and call it from `app.py` or `__init__.py` before launching the interface. Modify it to match your workflow.
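
For reference, a `prepare_assets.py` hook could be as small as the sketch below; the source and destination paths here are placeholders that show the pattern, not the shipped script.

```python
# Hypothetical prepare_assets.py: copy downloaded files to wherever demo.py expects them.
import shutil
from pathlib import Path

def prepare_assets(download_root: str = ".") -> None:
    root = Path(download_root)
    # Placeholder example: mirror the mean/std statistics into a second location.
    src = root / "mean_std"
    dst = root / "demo" / "mean_std"   # assumed destination, adjust to your layout
    if src.is_dir() and not dst.exists():
        shutil.copytree(src, dst)

if __name__ == "__main__":
    prepare_assets()
```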

## 5. Verify locally
Before pushing, test with the same layout on your machine:

```bash
conda activate gesturelsm
pip install -r hf_space/requirements.txt
python hf_space/app.py
```

Ensure the UI launches and generates outputs using locally stored checkpoints.
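
If you want a quick pre-flight check that the weights landed in the right place, a short script like the one below works (the file list is copied from section 3; the script itself is not part of the scaffold).

```python
# Sanity check: confirm the checkpoints from section 3 exist before launching the UI.
from pathlib import Path

REQUIRED = [
    "ckpt/net_300000_upper.pth",
    "ckpt/net_300000_lower.pth",
    "ckpt/net_300000_hands.pth",
    "ckpt/net_300000_lower_trans.pth",
    "ckpt/new_540_shortcut.bin",
]

missing = [p for p in REQUIRED if not Path(p).is_file()]
if missing:
    raise SystemExit(f"Missing checkpoint files: {missing}")
print("All checkpoints found.")
```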

## 6. Push & run
Commit and push the Space repository. After the build completes, the public URL will auto-refresh and display the Gradio interface. Upload audio, wait for inference to finish, then download the generated video/NPZ results just like the local demo.

---
Add or adjust dependencies as new features require. Heavy rendering tasks can be slow on free CPU hardware; consider upgrading the Space or trimming the pipeline (e.g. precomputing alignments) if latency becomes an issue.