magistrkoljan committed
Commit f94c580 · verified · 1 Parent(s): e4f8cb2

Update app.py

Files changed (1):
  1. app.py +5 -7
app.py CHANGED

@@ -14,6 +14,8 @@ from hydra import initialize, compose
 import hydra
 from omegaconf import OmegaConf
 import time
+import contextlib
+import base64
 
 def install_submodules():
     subprocess.check_call(['pip', 'install', './submodules/RoMa'])
@@ -300,7 +302,7 @@ with gr.Blocks() as demo:
 1. Upload a **front-facing video** or **a folder of images** of a **static** scene.
 2. Use the sliders to configure the number of reference views, correspondences, and optimization steps.
 3. First press on preprocess Input to extract frames from video(for videos) and COLMAP frames.
-4.Then click **🚀 Start Reconstruction** to actually launch the reconstruction pipeline.
+4. Then click **🚀 Start Reconstruction** to actually launch the reconstruction pipeline.
 5. Watch the training visualization and explore the 3D model.
 ‼️ **If you see nothing in the 3D model viewer**, try rotating or zooming — sometimes the initial camera orientation is off.
@@ -326,7 +328,7 @@ with gr.Blocks() as demo:
 [["assets/examples/video_tulips.mp4"]]
 ],
 inputs=[input_file],
-label="🎞️ ALternatively, try an Example Video",
+label="🎞️ Alternatively, try an Example Video",
 examples_per_page=4
 )
 ref_slider = gr.Slider(4, 32, value=16, step=1, label="Number of Reference Views")
@@ -400,18 +402,14 @@ with gr.Blocks() as demo:
 ### 🎥 Training Visualization
 You will see a visualization of the entire training process in the "Training Video" pane.
 
-### 🌀 Rendering & 3D Model
-- Render the scene from a circular path of novel views.
-- Or from camera views close to the original input.
+### 🌀 3D Model
 
 The 3D model is shown in the right viewer. You can explore it interactively:
 - On PC: WASD keys, arrow keys, and mouse clicks
 - On mobile: pan and pinch to zoom
 
 🕒 Note: the 3D viewer takes a few extra seconds (~5s) to display after training ends.
-
 ---
-Preloaded models coming soon. (TODO)
 """, elem_id="details")
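
The first hunk's `install_submodules()` relies on `subprocess.check_call` raising on a non-zero exit, so a failed submodule install aborts app startup instead of surfacing later as an import error. A minimal sketch of that pattern — the `paths` and `pip` parameters are illustrative additions, not part of the app's actual signature:

```python
import subprocess
import sys

def install_submodules(paths=('./submodules/RoMa',),
                       pip=(sys.executable, '-m', 'pip')):
    """Install vendored submodules with pip at startup.

    check_call raises CalledProcessError on any non-zero exit code,
    so a broken install fails loudly here rather than at import time.
    """
    for path in paths:
        subprocess.check_call([*pip, 'install', path])
```

Invoking pip via `sys.executable -m pip` (rather than a bare `pip` string, as in the diff) pins the install to the interpreter actually running the app.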
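
The commit also adds `contextlib` and `base64` imports; the diff does not show their call sites, but a common pairing in Gradio apps (an assumption, not confirmed by this commit) is capturing a pipeline's printed output and inlining media bytes as data URIs. A hedged sketch of both helpers:

```python
import base64
import contextlib
import io

def capture_stdout(fn, *args, **kwargs):
    # Run fn while redirecting anything it prints into a buffer;
    # returns (result, captured_text) so logs can be shown in the UI.
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        result = fn(*args, **kwargs)
    return result, buf.getvalue()

def to_data_uri(data: bytes, mime: str = 'video/mp4') -> str:
    # Base64-encode raw bytes into a data: URI that an HTML
    # <video> or <img> tag can reference directly.
    return f"data:{mime};base64,{base64.b64encode(data).decode('ascii')}"
```

Both helper names are hypothetical; they only illustrate what the two new imports typically enable.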