Paper Presentation Schedule

Time: September 12, 2020 (Saturday), 9:00-12:00 and 14:00-17:00

Location: Tencent Meeting (Meeting ID: 858 963 5943)

  1. The presentation session counts as a regular class; no student may be absent without a valid reason.
  2. Each paper presentation should last about 12 minutes (including any video; the slide deck should ideally not exceed 15 pages), followed by about 3 minutes of questions.
  3. Since the time for each paper is limited, focus the presentation on: a brief introduction of the authors and their affiliations; what the paper does (What); why this problem is worth solving (motivation: Why); what prior work exists on the problem; where the difficulties lie; which difficulties the paper resolves or what new methods it proposes (selling points); and its limitations and future work. If there are too many details, there is no need to cover them all; the key is to convey the paper's main ideas and value.
  4. Try to find related material online for the paper you are presenting, such as the authors' homepages and project pages. Many authors post the paper, the video, and even the slides on their homepages. When presenting, show the audience the figures and videos from the paper: "a picture is worth a thousand words," and a video is worth even more than a picture.
  5. The written report must be prepared in research-paper format (title, abstract, keywords, body, references, etc.). Its content may be an overview of the presented paper, a survey of several papers on the same topic, and so on.
  6. Grading: slide presentation (per group) + written report (per individual)
No.  Student ID(s)           Paper
1    SF19011001              XNect: Real-time Multi-person 3D Motion Capture With a Single RGB Camera
2    SF19011003, SF19011014  Portrait Shadow Manipulation
3    SF19011020, SF19011035  A System for Efficient 3D Printed Stop-motion Face Animation
4    SF19011021, SF19011029  Fast and Deep Facial Deformations
5    SF19011033              DeepFaceDrawing: Deep Generation of Face Images From Sketches
6    SF19011002              Neural Subdivision
7    SF19011004              Dynamic Graph CNN for Learning on Point Clouds
8    SF19011005, SF19011023  TilinGNN: Learning to Tile With Self-supervised Graph Neural Network
9    SF19011007              Graph2Plan: Learning Floorplan Generation From Layout Graphs
10   SF19011012, SF19011027  Lagrangian Neural Style Transfer for Fluids
11   SF19011022, SF19011032  CNNs on Surfaces Using Rotation-equivariant Features
12   SF19011028              Inverse Procedural Modeling of Branching Structures by Inferring L-Systems
13   SF19011030              Unsupervised K-modal Styled Content Generation
14   SF19011031, SF19011041  Attribute2Font: Creating Fonts You Want From Attributes
15   SF19011006, SF19011019  Character Controllers Using Motion VAEs
16   SF19011009              Fast and Flexible Multilegged Locomotion Using Learned Centroidal Dynamics
17   SF19011010              ARAnimator: In-situ Character Animation in Mobile AR With User-defined Motion Gestures
18   SF19011013              Local Motion Phases for Learning Multi-Contact Character Movements
19   SF19011016              CARL: Controllable Agent With Reinforcement Learning for Quadruped Locomotion
20   SF19011017              Catch & Carry: Reusable Neural Controllers for Vision-guided Whole-body Tasks
21   SF19011026              Example-driven Virtual Cinematography by Learning Camera Behaviors
22   SF19011036              Consistent Video Depth Estimation
23   SF19011037, SF19011042  Generating Digital Painting Lighting Effects via RGB-space Geometry
24   SF19011040              Real-time Image Smoothing via Iterative Least Squares

Candidate SIGGRAPH 2020 papers (45):

https://s2020.siggraph.org/conference/program-events/technical-papers/

http://kesen.realtimerendering.com/sig2020.html

  1. Consistent Video Depth Estimation
  2. Neural Subdivision
  3. Graph2Plan: Learning Floorplan Generation From Layout Graphs
  4. Deep Generative Modeling for Scene Synthesis via Hybrid Representations
  5. Dynamic Graph CNN for Learning on Point Clouds
  6. Point2Mesh: A Self-prior for Deformable Meshes
  7. MGCN: Descriptor Learning Using Multiscale GCNs
  8. CNNs on Surfaces Using Rotation-equivariant Features
  9. DeepFaceDrawing: Deep Generation of Face Images From Sketches
  10. MichiGAN: Multi-Input-Conditioned Hair Image Generation for Portrait Editing
  11. Portrait Shadow Manipulation
  12. Lagrangian Neural Style Transfer for Fluids
  13. Fast and Flexible Multilegged Locomotion Using Learned Centroidal Dynamics
  14. CARL: Controllable Agent With Reinforcement Learning for Quadruped Locomotion
  15. Catch & Carry: Reusable Neural Controllers for Vision-guided Whole-body Tasks
  16. Model Predictive Control With a Visuomotor System for Physics-based Character Animation
  17. Human-in-the-loop Differential Subspace Search in High-dimensional Latent Space
  18. Inverse Procedural Modeling of Branching Structures by Inferring L-Systems
  19. XNect: Real-time Multi-person 3D Motion Capture With a Single RGB Camera
  20. ARAnimator: In-situ Character Animation in Mobile AR With User-defined Motion Gestures
  21. Local Motion Phases for Learning Multi-Contact Character Movements
  22. Character Controllers Using Motion VAEs
  23. Fast and Deep Facial Deformations
  24. Accurate Face Rig Approximation With Deep Differential Subspace Reconstruction
  25. RigNet: Neural Rigging for Articulated Characters
  26. Compositional Neural Scene Representations for Shading Inference
  27. Adaptive Incident Radiance Field Sampling and Reconstruction Using Deep Reinforcement Learning
  28. Example-driven Virtual Cinematography by Learning Camera Behaviors
  29. TilinGNN: Learning to Tile With Self-supervised Graph Neural Network
  30. Attribute2Font: Creating Fonts You Want From Attributes
  31. Real-time Image Smoothing via Iterative Least Squares
  32. Single Image HDR Reconstruction Using a CNN With Masked Features and Perceptual Loss
  33. Learning Temporal Coherence via Self-supervision for GAN-based Video Generation
  34. DeepMag: Source Specific Motion Magnification Using Gradient Ascent
  35. Robust Motion In-betweening
  36. Unpaired Motion Style Transfer From Video to Animation
  37. Skeleton-Aware Networks for Deep Motion Retargeting
  38. Learned Motion Matching
  39. Computational Parquetry: Fabricated Style Transfer With Wood Pixels
  40. A System for Efficient 3D Printed Stop-motion Face Animation
  41. Interactive Video Stylization Using Few-shot Patch-based Training
  42. Generating Digital Painting Lighting Effects via RGB-space Geometry
  43. Unsupervised K-modal Styled Content Generation
  44. Manipulating Attributes of Natural Scenes via Hallucination
  45. End-to-end Learned, Optically Coded Super-resolution SPAD Camera