Machine Learning Projects
VToonify: Controllable High-Resolution Portrait Video Style Transfer

📝Although a series of successful portrait image toonification models built upon the powerful StyleGAN have been proposed, these image-oriented methods have obvious limitations when applied to videos, such as the fixed frame size, the requirement of face alignment, missing non-facial details and temporal inconsistency.
https://github.com/williamyang1991/vtoonify
DigiFace-1M: 1 Million Digital Face Images for Face Recognition

📝Face recognition models are typically trained on large-scale datasets that contain millions of real human face images collected from the internet.
https://github.com/microsoft/digiface1m
Ask Me Anything: A simple strategy for prompting language models

📝Prompting is a brittle process wherein small modifications to the prompt can cause large variations in the model predictions, and therefore significant effort is dedicated towards designing a painstakingly "perfect prompt" for a task.
https://github.com/hazyresearch/ama_prompting
Vox-Fusion: Dense Tracking and Mapping with Voxel-based Neural Implicit Representation

📝In this work, we present a dense tracking and mapping system named Vox-Fusion, which seamlessly fuses neural implicit representations with traditional volumetric fusion methods.
https://github.com/zju3dv/vox-fusion
Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training

📝The success of Transformer models has pushed the deep learning model scale to billions of parameters.
https://github.com/hpcaitech/colossalai
Towards High-Quality Neural TTS for Low-Resource Languages by Learning Compact Speech Representations

📝We optimize the training strategy by leveraging more audio to better learn multi-stage multi-codebook representations (MSMCRs) for low-resource languages.
https://github.com/hhguo/msmc-tts
What Makes Convolutional Models Great on Long Sequence Modeling?

📝We focus on the structure of the convolution kernel and identify two critical but intuitive principles enjoyed by S4 that are sufficient to make up an effective global convolutional model: 1) The parameterization of the convolutional kernel needs to be efficient in the sense that the number of parameters should scale sub-linearly with sequence length.
https://github.com/ctlllll/sgconv
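The parameterization principle above can be sketched in numpy: build a global kernel from a few short sub-kernels that are upsampled over exponentially growing spans and damped with distance, so the parameter count grows only logarithmically with sequence length, then apply it via FFT. This is a minimal illustration of the idea, not the repo's implementation; the sub-kernel sizes and decay factor are assumptions.

```python
import numpy as np

def sgconv_kernel(subs, seq_len):
    """Assemble a global convolution kernel from short sub-kernels.

    Sub-kernel i is nearest-neighbour upsampled to cover a span 2**i times
    larger and damped by 2**-i, so parameters scale ~log(seq_len)
    (principle 1) while nearby positions keep larger weights.
    """
    parts = []
    for i, sub in enumerate(subs):
        span = len(sub) * 2**i                  # exponentially growing span
        up = np.repeat(sub, span // len(sub))   # upsample to that span
        parts.append(up * 2.0**(-i))            # decay with distance
    kernel = np.concatenate(parts)[:seq_len]
    return np.pad(kernel, (0, max(0, seq_len - len(kernel))))

def global_conv(x, kernel):
    """Causal global convolution in O(L log L) via zero-padded FFT."""
    n = len(x)
    fft_len = 2 * n  # long enough to avoid circular wrap-around
    y = np.fft.irfft(np.fft.rfft(x, fft_len) * np.fft.rfft(kernel, fft_len),
                     fft_len)
    return y[:n]

rng = np.random.default_rng(0)
L = 64
subs = [rng.standard_normal(8) for _ in range(3)]  # 24 params for L=64
k = sgconv_kernel(subs, L)
x = rng.standard_normal(L)
out = global_conv(x, k)
```

Doubling the sequence length here only requires one more fixed-size sub-kernel, which is what "sub-linear in sequence length" buys.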
MetaFormer Baselines for Vision

📝By simply applying depthwise separable convolutions as token mixer in the bottom stages and vanilla self-attention in the top stages, the resulting model CAFormer sets a new record on ImageNet-1K: it achieves an accuracy of 85.5% at 224x224 resolution, under normal supervised training without external data or distillation.
https://github.com/sail-sg/metaformer
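The two token mixers named in the abstract can be contrasted in a minimal numpy sketch, assuming 1-D token sequences and a single attention head for brevity (the actual model operates on 2-D feature maps with full transformer blocks):

```python
import numpy as np

def depthwise_separable_mix(x, dw, pw):
    """Depthwise separable conv token mixer (cheap, local): a per-channel
    depthwise convolution followed by a pointwise (1x1) channel mix.
    x: (tokens, channels)."""
    T, C = x.shape
    k = len(dw)
    pad = np.pad(x, ((k // 2, k // 2), (0, 0)))
    out = np.empty_like(x)
    for c in range(C):
        out[:, c] = np.convolve(pad[:, c], dw, mode="valid")[:T]
    return out @ pw                       # pointwise mix across channels

def self_attention_mix(x, wq, wk, wv):
    """Vanilla single-head self-attention token mixer (global)."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[1])
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ v

rng = np.random.default_rng(0)
T, C = 16, 8
x = rng.standard_normal((T, C))
# bottom stages: local convolutional mixing on high-resolution tokens
h = depthwise_separable_mix(x, dw=rng.standard_normal(3),
                            pw=rng.standard_normal((C, C)))
# top stages: global attention once the token count is small
y = self_attention_mix(h, *(rng.standard_normal((C, C)) for _ in range(3)))
```

The design choice mirrors the cost profile: depthwise conv is linear in token count, so it goes where resolution is high; attention is quadratic, so it goes where tokens are few.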
Poisson Flow Generative Models

📝We interpret the data points as electrical charges on the $z=0$ hyperplane in a space augmented with an additional dimension $z$, generating a high-dimensional electric field (the gradient of the solution to Poisson equation).
https://github.com/newbeeer/poisson_flow
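The charge interpretation can be made concrete with a toy numpy sketch: place unit charges at the data points on the z=0 hyperplane of the augmented space, evaluate the empirical Coulomb field, and step a far-away sample back toward the hyperplane. This is a plain Euler descent for illustration, not the paper's ODE sampler or learned field.

```python
import numpy as np

def poisson_field(x, data):
    """Empirical electric field at x from unit charges at the data points.

    In an n-dimensional space the Coulomb field of a point charge falls
    off as 1/r^(n-1), giving the per-charge term (x - y) / ||x - y||^n.
    """
    n = x.size                          # dimension of the augmented space
    diffs = x - data                    # (N, n)
    r = np.linalg.norm(diffs, axis=1, keepdims=True)
    return (diffs / r**n).mean(axis=0)

rng = np.random.default_rng(0)
# 1-D data augmented with z: charges at (y_i, 0) on the z=0 hyperplane
data = np.stack([rng.standard_normal(200), np.zeros(200)], axis=1)

# start far above the hyperplane and walk against the field
x = np.array([0.0, 8.0])
for _ in range(400):
    e = poisson_field(x, data)
    x -= 0.05 * e / np.linalg.norm(e)   # fixed-size step toward the charges
```

Since the field lines emanate from the charges, following them in reverse carries the sample from a distant shell back onto the data distribution at z=0.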
TAP-Vid: A Benchmark for Tracking Any Point in a Video

📝Generic motion understanding from video involves not only tracking objects, but also perceiving how their surfaces deform and move.
https://github.com/deepmind/tapnet
OneFlow: Redesign the Distributed Deep Learning Framework from Scratch

📝Aiming at a simple, neat redesign of distributed deep learning frameworks for various parallelism paradigms, we present OneFlow, a novel distributed training framework based on an SBP (split, broadcast and partial-value) abstraction and the actor model.
https://github.com/Oneflow-Inc/oneflow
PhaseAug: A Differentiable Augmentation for Speech Synthesis to Simulate One-to-Many Mapping

📝Previous generative adversarial network (GAN)-based neural vocoders are trained to reconstruct the exact ground truth waveform from the paired mel-spectrogram and do not consider the one-to-many relationship of speech synthesis.
https://github.com/mindslab-ai/phaseaug
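The one-to-many idea behind this entry can be demonstrated with a minimal numpy sketch: rotating the phase of each frequency bin while keeping magnitudes fixed yields a different waveform with the same magnitude spectrum. The actual PhaseAug operates framewise and differentiably inside GAN vocoder training; this single-FFT version only illustrates why one spectrogram admits many valid waveforms.

```python
import numpy as np

def phase_rotate(wav, rng):
    """Randomly rotate the phase of every frequency bin, keeping magnitudes
    intact, so the result shares the input's magnitude spectrum but is a
    different waveform (one-to-many)."""
    spec = np.fft.rfft(wav)
    phi = rng.uniform(-np.pi, np.pi, size=spec.shape)
    phi[0] = 0.0                        # keep the DC bin real
    phi[-1] = 0.0                       # keep the Nyquist bin real
    return np.fft.irfft(spec * np.exp(1j * phi), n=len(wav))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024, endpoint=False)
wav = np.sin(2 * np.pi * 50 * t)        # 50 Hz test tone
aug = phase_rotate(wav, rng)
# aug has the same magnitude spectrum as wav but different samples
```

A vocoder trained only on exact waveform reconstruction would penalize `aug` despite it being an equally valid realisation of the spectrogram, which is the mismatch the augmentation targets.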