David Krajewski committed
Commit 0528815 • 1 Parent(s): 41a9b74

Added code to DL models

This view is limited to 50 files because it contains too many changes. See raw diff
Files changed (50)
  1. MOFA-Video-Traj/README.md +0 -42
  2. README.md +31 -74
  3. MOFA-Video-Traj/run_gradio.py → app.py +16 -2
  4. assets/images/README.md +0 -1
  5. assets/images/project-mofa.png +0 -0
  6. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc+youtube_voc_16gpu_140k/config.yaml +0 -0
  7. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc+youtube_voc_16gpu_140k/resume.sh +0 -0
  8. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc+youtube_voc_16gpu_140k/resume_slurm.sh +0 -0
  9. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc+youtube_voc_16gpu_140k/train.sh +0 -0
  10. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc+youtube_voc_16gpu_140k/train_slurm.sh +0 -0
  11. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc+youtube_voc_16gpu_140k/validate.sh +0 -0
  12. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc+youtube_voc_16gpu_140k/validate_slurm.sh +0 -0
  13. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_16gpu_70k/config.yaml +0 -0
  14. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_16gpu_70k/resume.sh +0 -0
  15. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_16gpu_70k/resume_slurm.sh +0 -0
  16. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_16gpu_70k/train.sh +0 -0
  17. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_16gpu_70k/train_slurm.sh +0 -0
  18. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_16gpu_70k/validate.sh +0 -0
  19. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_16gpu_70k/validate_slurm.sh +0 -0
  20. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_8gpu_140k/config.yaml +0 -0
  21. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_8gpu_140k/resume.sh +0 -0
  22. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_8gpu_140k/resume_slurm.sh +0 -0
  23. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_8gpu_140k/train.sh +0 -0
  24. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_8gpu_140k/train_slurm.sh +0 -0
  25. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_8gpu_140k/validate.sh +0 -0
  26. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_8gpu_140k/validate_slurm.sh +0 -0
  27. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc+youtube+vip+mpii_lip_16gpu_70k/config.yaml +0 -0
  28. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc+youtube+vip+mpii_lip_16gpu_70k/resume.sh +0 -0
  29. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc+youtube+vip+mpii_lip_16gpu_70k/resume_slurm.sh +0 -0
  30. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc+youtube+vip+mpii_lip_16gpu_70k/train.sh +0 -0
  31. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc+youtube+vip+mpii_lip_16gpu_70k/train_slurm.sh +0 -0
  32. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc+youtube+vip+mpii_lip_16gpu_70k/validate.sh +0 -0
  33. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc+youtube+vip+mpii_lip_16gpu_70k/validate_slurm.sh +0 -0
  34. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_coco_16gpu_42k/config.yaml +0 -0
  35. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_coco_16gpu_42k/resume.sh +0 -0
  36. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_coco_16gpu_42k/resume_slurm.sh +0 -0
  37. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_coco_16gpu_42k/train.sh +0 -0
  38. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_coco_16gpu_42k/train_slurm.sh +0 -0
  39. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_coco_16gpu_42k/validate.sh +0 -0
  40. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_coco_16gpu_42k/validate_slurm.sh +0 -0
  41. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_voc_16gpu_42k/config.yaml +0 -0
  42. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_voc_16gpu_42k/resume.sh +0 -0
  43. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_voc_16gpu_42k/resume_slurm.sh +0 -0
  44. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_voc_16gpu_42k/train.sh +0 -0
  45. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_voc_16gpu_42k/train_slurm.sh +0 -0
  46. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_voc_16gpu_42k/validate.sh +0 -0
  47. {MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_voc_16gpu_42k/validate_slurm.sh +0 -0
  48. {MOFA-Video-Traj/models → models}/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/config.yaml +0 -0
  49. {MOFA-Video-Traj/models → models}/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/resume.sh +0 -0
  50. {MOFA-Video-Traj/models → models}/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/resume_slurm.sh +0 -0
MOFA-Video-Traj/README.md DELETED
@@ -1,42 +0,0 @@
- ## Environment Setup
-
- `pip install -r requirements.txt`
-
- ## Download checkpoints
-
- 1. Download the pretrained checkpoints of [SVD_xt](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt-1-1) from huggingface to `./ckpts`.
-
- 2. Download the checkpoint of [MOFA-Adapter](https://huggingface.co/MyNiuuu/MOFA-Video-Traj) from huggingface to `./ckpts`.
-
- 3. Download the checkpoint of CMP from [here](https://huggingface.co/MyNiuuu/MOFA-Video-Traj/blob/main/models/cmp/experiments/semiauto_annot/resnet50_vip%2Bmpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar) and put it into `./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints`.
-
- The final structure of checkpoints should be:
-
-
- ```text
- ./ckpts/
- |-- controlnet
- |   |-- config.json
- |   `-- diffusion_pytorch_model.safetensors
- |-- stable-video-diffusion-img2vid-xt-1-1
- |   |-- feature_extractor
- |   |   |-- ...
- |   |-- image_encoder
- |   |   |-- ...
- |   |-- scheduler
- |   |   |-- ...
- |   |-- unet
- |   |   |-- ...
- |   |-- unet_ch9
- |   |   |-- ...
- |   |-- vae
- |   |   |-- ...
- |   |-- svd_xt_1_1.safetensors
- |   `-- model_index.json
- ```
-
- ## Run Gradio Demo
-
- `python run_gradio.py`
-
- Please refer to the instructions on the gradio interface during the inference process.
README.md CHANGED
@@ -1,85 +1,42 @@
-
-
-
-
- <div align="center">
- <h1>
- 🦄️ MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model
- </h1>
- <a href='https://arxiv.org/abs/2405.20222'><img src='https://img.shields.io/badge/ArXiv-PDF-red'></a> &nbsp; <a href='https://myniuuu.github.io/MOFA_Video'><img src='https://img.shields.io/badge/Project-Page-Green'></a> &nbsp; <a href='https://huggingface.co/MyNiuuu/MOFA-Video-Traj'><img src='https://img.shields.io/badge/🤗 huggingface-MOFA_Traj-blue'></a>
- <div>
- <a href='https://myniuuu.github.io/' target='_blank'>Muyao Niu</a> <sup>1,2</sup> &nbsp;
- <a href='https://vinthony.github.io/academic/' target='_blank'>Xiaodong Cun</a><sup>2,*</sup> &nbsp;
- <a href='https://xinntao.github.io/' target='_blank'>Xintao Wang</a><sup>2</sup> &nbsp;
- <a href='https://yzhang2016.github.io/' target='_blank'>Yong Zhang</a><sup>2</sup> &nbsp; <br>
- <a href='https://scholar.google.com/citations?user=4oXBp9UAAAAJ&hl=en' target='_blank'>Ying Shan</a><sup>2</sup> &nbsp;
- <a href='https://scholar.google.com/citations?user=JD-5DKcAAAAJ&hl=en' target='_blank'>Yinqiang Zheng</a><sup>1,*</sup> &nbsp;
- </div>
- <div>
- <sup>1</sup> The University of Tokyo &nbsp; <sup>2</sup> Tencent AI Lab &nbsp; <sup>*</sup> Corresponding Author &nbsp;
- </div>
- </div>
-
- ---
-
- <div align="center">
- Check the gallery of our <a href='https://myniuuu.github.io/MOFA_Video' target='_blank'>project page</a> for many visual results!
- </div>
-
-
-
-
- ## New Features/Updates 🔥🔥🔥
-
- We have released the Gradio inference code and the checkpoints for trajectory-based image animation! Please refer to `./MOFA-Video-Traj/README.md` for instructions.
-
-
- ## 📰 CODE RELEASE
- - [x] (2024.05.31) Gradio demo and checkpoints for trajectory-based image animation
- - [ ] Training scripts for trajectory-based image animation
- - [ ] Inference scripts and checkpoints for keypoint-based facial image animation
- - [ ] Training scripts for keypoint-based facial image animation
- - [ ] Inference Gradio demo for hybrid image animation
-
-
- ## Introduction
-
- <div align="center">
- <h3>
- TL;DR: Image 🏞️ + Hybrid Controls 🕹️ = Videos 🎬🍿
- </h3>
- </div>
-
- <div align="center">
- <img src="assets/images/project-mofa.png">
- </div>
-
- We introduce MOFA-Video, a method designed to adapt motions from different domains to the frozen Video Diffusion Model. By employing <u>sparse-to-dense (S2D) motion generation</u> and <u>flow-based motion adaptation</u>, MOFA-Video can effectively animate a single image using various types of control signals, including trajectories, keypoint sequences, AND their combinations.
-
- <p align="center">
- <img src="assets/images/pipeline.png">
- </p>
-
- During the training stage, we generate sparse control signals through sparse motion sampling and then train different MOFA-Adapters to generate video via pre-trained SVD. During the inference stage, different MOFA-Adapters can be combined to jointly control the frozen SVD.
-
-
- ## 💫 Trajectory-based Image Animation
-
- ### Inference
-
- Our inference demo is based on Gradio. Please refer to `./MOFA-Video-Traj/README.md` for instructions.
-
-
- ## Citation
- ```
- @article{niu2024mofa,
-   title={MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model},
-   author={Niu, Muyao and Cun, Xiaodong and Wang, Xintao and Zhang, Yong and Shan, Ying and Zheng, Yinqiang},
-   journal={arXiv preprint arXiv:2405.20222},
-   year={2024}
- }
+ ## Environment Setup
+
+ `pip install -r requirements.txt`
+
+ ## Download checkpoints
+
+ 1. Download the pretrained checkpoints of [SVD_xt](https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt-1-1) from huggingface to `./ckpts`.
+
+ 2. Download the checkpoint of [MOFA-Adapter](https://huggingface.co/MyNiuuu/MOFA-Video-Traj) from huggingface to `./ckpts`.
+
+ 3. Download the checkpoint of CMP from [here](https://huggingface.co/MyNiuuu/MOFA-Video-Traj/blob/main/models/cmp/experiments/semiauto_annot/resnet50_vip%2Bmpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar) and put it into `./models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints`.
+
+ The final structure of checkpoints should be:
+
+
+ ```text
+ ./ckpts/
+ |-- controlnet
+ |   |-- config.json
+ |   `-- diffusion_pytorch_model.safetensors
+ |-- stable-video-diffusion-img2vid-xt-1-1
+ |   |-- feature_extractor
+ |   |   |-- ...
+ |   |-- image_encoder
+ |   |   |-- ...
+ |   |-- scheduler
+ |   |   |-- ...
+ |   |-- unet
+ |   |   |-- ...
+ |   |-- unet_ch9
+ |   |   |-- ...
+ |   |-- vae
+ |   |   |-- ...
+ |   |-- svd_xt_1_1.safetensors
+ |   `-- model_index.json
  ```

- ## Acknowledgements
- We sincerely appreciate the code release of the following projects: [DragNUWA](https://arxiv.org/abs/2308.08089), [SadTalker](https://github.com/OpenTalker/SadTalker), [AniPortrait](https://github.com/Zejun-Yang/AniPortrait), [Diffusers](https://github.com/huggingface/diffusers), [SVD_Xtend](https://github.com/pixeli99/SVD_Xtend), [Conditional-Motion-Propagation](https://github.com/XiaohangZhan/conditional-motion-propagation), and [Unimatch](https://github.com/autonomousvision/unimatch).
-
+ ## Run Gradio Demo
+
+ `python run_gradio.py`
+
+ Please refer to the instructions on the gradio interface during the inference process.
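
The setup steps moved into the root README above involve three manual downloads, and step 3 must land in a deeply nested directory. As a minimal sketch (not part of this commit), step 3 could be automated with `huggingface_hub`; the repo id and file path below are taken from the README's download link, while the script itself is an illustrative assumption:

```python
# Hypothetical automation of README step 3 (not in this commit):
# fetch the CMP checkpoint into the nested directory the code expects.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="MyNiuuu/MOFA-Video-Traj",
    # Repo-relative path from the README's download link; with local_dir="."
    # the file is materialized under ./models/cmp/.../checkpoints/.
    filename="models/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/checkpoints/ckpt_iter_42000.pth.tar",
    local_dir=".",
)
print(f"CMP checkpoint saved to: {ckpt_path}")
```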
MOFA-Video-Traj/run_gradio.py → app.py RENAMED
@@ -28,6 +28,7 @@ from diffusers.utils.import_utils import is_xformers_available
 
 from utils.flow_viz import flow_to_image
 from utils.utils import split_filename, image2arr, image2pil, ensure_dirname
+from huggingface_hub import login, hf_hub_download, snapshot_download
 
 
 output_dir_video = "./outputs/videos"
@@ -85,7 +86,12 @@ def get_sparseflow_and_mask_forward(
 
     return s_flow, mask
 
-
+def download_models(ckpts_path):
+    try:
+        snapshot_download(repo_id="vdo/stable-video-diffusion-img2vid-xt-1-1", local_dir=ckpts_path, cache_dir=ckpts_path)
+        snapshot_download(repo_id="MyNiuuu/MOFA-Video-Traj", local_dir=ckpts_path, cache_dir=ckpts_path, allow_patterns=["ckpts/controlnet/*"])
+    except (Exception, BaseException) as error:
+        print(error)
 
 
 def init_models(pretrained_model_name_or_path, resume_from_checkpoint, weight_dtype, device='cuda', enable_xformers_memory_efficient_attention=False, allow_tf32=False):
@@ -216,11 +222,14 @@ class Drag:
     def __init__(self, device, height, width, model_length):
         self.device = device
 
+        ckpts_dir = "ckpts/"
         svd_ckpt = "ckpts/stable-video-diffusion-img2vid-xt-1-1"
         mofa_ckpt = "ckpts/controlnet"
 
         self.device = 'cuda'
         self.weight_dtype = torch.float16
+
+        download_models(ckpts_dir)
 
         self.pipeline, self.cmp = init_models(
             svd_ckpt,
@@ -631,6 +640,10 @@ class Drag:
         return hint_path, outputs_path, flows_path, outputs_mp4_path, flows_mp4_path
 
 
+# Download checkpoints to the right place
+
+
+
 with gr.Blocks() as demo:
     gr.Markdown("""<h1 align="center">MOFA-Video</h1><br>""")
 
@@ -828,4 +841,5 @@ with gr.Blocks() as demo:
 
     run_button.click(DragNUWA_net.run, [first_frame_path, tracking_points, inference_batch_size, motion_brush_mask, motion_brush_viz, ctrl_scale], [hint_image, output_video, output_flow, output_video_mp4, output_flow_mp4])
 
-demo.launch(server_name="0.0.0.0", debug=True, server_port=80)
+demo.launch()
+# demo.launch(server_name="0.0.0.0", debug=True, server_port=80)
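
The net effect of the `app.py` changes above: checkpoint fetching moves from a manual README step into the app's startup path, with `download_models()` pulling the SVD weights and the MOFA ControlNet via `snapshot_download` before `init_models` runs, and `demo.launch()` dropping the hard-coded host and port so the hosting platform can choose them. Below is a minimal standalone sketch of that startup-download pattern; the repo ids and `allow_patterns` are copied from the diff, while the surrounding script is an illustrative assumption:

```python
# Minimal standalone sketch of the startup-download pattern this commit
# introduces; the two snapshot_download calls mirror download_models() in
# app.py, while the script scaffolding around them is illustrative only.
from huggingface_hub import snapshot_download

CKPTS_DIR = "ckpts/"

def fetch_checkpoints(ckpts_path: str) -> None:
    try:
        # Full SVD xt 1.1 snapshot, plus only the ControlNet weights from
        # the MOFA-Video-Traj repo (filtered by allow_patterns).
        snapshot_download(
            repo_id="vdo/stable-video-diffusion-img2vid-xt-1-1",
            local_dir=ckpts_path,
            cache_dir=ckpts_path,
        )
        snapshot_download(
            repo_id="MyNiuuu/MOFA-Video-Traj",
            local_dir=ckpts_path,
            cache_dir=ckpts_path,
            allow_patterns=["ckpts/controlnet/*"],
        )
    except Exception as error:
        # The committed code logs and continues rather than failing hard.
        print(error)

if __name__ == "__main__":
    fetch_checkpoints(CKPTS_DIR)
```

Running the download inside `Drag.__init__`, as the commit does, makes the app self-provisioning at the cost of a slower first start; because errors are printed rather than raised, missing weights would only surface later, when `init_models` tries to load them.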
assets/images/README.md DELETED
@@ -1 +0,0 @@
- README
assets/images/project-mofa.png DELETED
Binary file (652 kB)
 
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc+youtube_voc_16gpu_140k/config.yaml RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc+youtube_voc_16gpu_140k/resume.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc+youtube_voc_16gpu_140k/resume_slurm.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc+youtube_voc_16gpu_140k/train.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc+youtube_voc_16gpu_140k/train_slurm.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc+youtube_voc_16gpu_140k/validate.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc+youtube_voc_16gpu_140k/validate_slurm.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_16gpu_70k/config.yaml RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_16gpu_70k/resume.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_16gpu_70k/resume_slurm.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_16gpu_70k/train.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_16gpu_70k/train_slurm.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_16gpu_70k/validate.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_16gpu_70k/validate_slurm.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_8gpu_140k/config.yaml RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_8gpu_140k/resume.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_8gpu_140k/resume_slurm.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_8gpu_140k/train.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_8gpu_140k/train_slurm.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_8gpu_140k/validate.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/alexnet_yfcc_voc_8gpu_140k/validate_slurm.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc+youtube+vip+mpii_lip_16gpu_70k/config.yaml RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc+youtube+vip+mpii_lip_16gpu_70k/resume.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc+youtube+vip+mpii_lip_16gpu_70k/resume_slurm.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc+youtube+vip+mpii_lip_16gpu_70k/train.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc+youtube+vip+mpii_lip_16gpu_70k/train_slurm.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc+youtube+vip+mpii_lip_16gpu_70k/validate.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc+youtube+vip+mpii_lip_16gpu_70k/validate_slurm.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_coco_16gpu_42k/config.yaml RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_coco_16gpu_42k/resume.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_coco_16gpu_42k/resume_slurm.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_coco_16gpu_42k/train.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_coco_16gpu_42k/train_slurm.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_coco_16gpu_42k/validate.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_coco_16gpu_42k/validate_slurm.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_voc_16gpu_42k/config.yaml RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_voc_16gpu_42k/resume.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_voc_16gpu_42k/resume_slurm.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_voc_16gpu_42k/train.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_voc_16gpu_42k/train_slurm.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_voc_16gpu_42k/validate.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/rep_learning/resnet50_yfcc_voc_16gpu_42k/validate_slurm.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/config.yaml RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/resume.sh RENAMED
File without changes
{MOFA-Video-Traj/models → models}/cmp/experiments/semiauto_annot/resnet50_vip+mpii_liteflow/resume_slurm.sh RENAMED
File without changes