r/vfx Subreddit Backup by AppleEditing (AE) on Telegram
21 subscribers
2.04K photos
2.08K videos
17.9K links
r/vfx subreddit backup on Telegram. A backup project by @RoadToPetabyte and @Reddit2telegram. Join our subreddit backup on Discord and Pinterest: https://discord.gg/abCudZwgBr. Other Telegram backups: https://discord.gg/jsAxt4rUCB
Best software for removing tiny tattoos/moles from video (local processing, good tracking)?

Hi everyone,

My apologies if this is not the right sub to post this to. I’m looking for software recommendations for a very specific video editing task.

I need to remove very small skin marks (tiny tattoos / moles / small scars) from video. They’re really small - about mole-sized, not large tattoos.

The clips can be anywhere from a few seconds up to ~10 minutes, and the skin surface moves naturally with the body, so the fix needs to track the motion of the skin across the clip.

What I’m trying to achieve:

- Remove a tiny spot on skin so it looks natural
- Have the fix follow the movement automatically (tracking / match move)
- Avoid frame-by-frame manual painting
- Work on short clips or videos up to ~10 minutes
- 100% local software (no cloud processing)

Things I’ve already tried or looked into:

- DaVinci Resolve (free) using Fusion + Planar Tracker + Paint
- Clone painting / skin cleanup tools
- Clean plate techniques
- I’ve heard tools like Mocha Pro, After Effects, and PowerMesh might be used for this kind of task

The problem I’m running into is that the marks are extremely small, and sometimes the trackers struggle because there’s not much texture in the skin.

So I’m wondering: what software is best for removing tiny skin marks in video? Is something like Mocha Pro actually worth it for this, or overkill? Are there easier tools specifically designed for skin cleanup / beauty retouching in video?
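Purely as illustration of what these trackers are doing under the hood, here is a toy, self-contained Python sketch on synthetic frames. All names and numbers are made up for the example; this is not how Mocha, Fusion, or any mentioned tool is implemented (real trackers work subpixel with planar or mesh models, not brute-force integer search). It shows the two steps in miniature: track a small patch between frames, then clone a clean patch over the mark.

```python
# Toy sketch of the two steps behind mark removal:
# (1) track a small patch between frames by minimising the
#     sum of squared differences (SSD) over a search window,
# (2) "remove" the mark by cloning a nearby clean patch over it.
# Illustrative only; values, sizes, and names are invented.

def ssd(frame, template, top, left):
    """Sum of squared differences between template and a frame window."""
    return sum(
        (frame[top + i][left + j] - template[i][j]) ** 2
        for i in range(len(template))
        for j in range(len(template[0]))
    )

def track(frame, template, search=5, guess=(0, 0)):
    """Brute-force search for the template around a guessed position."""
    gy, gx = guess
    best = None
    for top in range(max(0, gy - search), gy + search + 1):
        for left in range(max(0, gx - search), gx + search + 1):
            if top + len(template) > len(frame):
                continue
            if left + len(template[0]) > len(frame[0]):
                continue
            err = ssd(frame, template, top, left)
            if best is None or err < best[0]:
                best = (err, top, left)
    return best[1], best[2]

def clone_patch(frame, src, dst, size):
    """Copy a size x size clean patch from src over dst (the mark)."""
    (sy, sx), (dy, dx) = src, dst
    for i in range(size):
        for j in range(size):
            frame[dy + i][dx + j] = frame[sy + i][sx + j]

# Synthetic 10x10 "skin" frames: a dark mole (0.2) on bright skin (0.9),
# shifted by (+1, +2) pixels in the second frame.
frame1 = [[0.9] * 10 for _ in range(10)]
frame2 = [[0.9] * 10 for _ in range(10)]
frame1[3][3] = 0.2
frame2[4][5] = 0.2

template = [[frame1[3 + i][3 + j] for j in range(2)] for i in range(2)]
pos = track(frame2, template, guess=(3, 3))        # recovers the shift
clone_patch(frame2, src=(0, 0), dst=pos, size=2)   # paint the mark out
```

The "not much texture" problem is visible even in this toy: on perfectly uniform skin every clean window produces an identical error, so the tracker has nothing to lock onto. That is one reason planar trackers are often pointed at a larger textured region nearby, carrying the small mark along with it.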


Any recommendations or workflows from people who do VFX/retouching would be hugely appreciated.

Thanks!

https://redd.it/1rrpwmq
@vfxbackup
Need help with face regional masking and motion tracking / transferring
https://redd.it/1rrqrhk
@vfxbackup
Color or VFX First

Hi everyone! I'm currently working on a film project, and I'm doing post by myself. The edit is picture locked, but now I'm debating whether I should move to color or to VFX compositing first. The film was shot in BRAW, and much of it features a full CG character. Some have said that I should do the compositing first, but wouldn't the footage then lose the flexibility of BRAW? Or should I do correction first, then compositing, then creative color?

Let me know!

https://redd.it/1rs0ncn
@vfxbackup
Sony is shutting down Pixo

Rumours are circulating online that Sony wants to shut the studio down globally. Corporate greed has struck again. Fuck Sony.

https://redd.it/1ry8e5q
@vfxbackup
Seedance 2.0 is nowhere near taking your jobs

I've had the pleasure of playing around with Seedance 2.0. It's not officially available in the West, only in China, because ByteDance delayed or halted the Western release over legal threats from Hollywood studios.

After testing it, I can confirm it's nowhere near taking the jobs of VFX studios and artists.

What we've seen online is heavily cherry-picked, and on social media the resolution and quality don't need to be as high.

There's still a lot of slop going on: compression artifacts and other weird glitches, which are perhaps less apparent in these small clips on social media.

While it's impressive in some ways, and it can of course replicate cinematic scenes, I still see it more as a party trick that can be used for fun, kind of like the Sora 2.0 app, or something small brands could use when they can't afford a bigger production budget.

This whole "Hollywood and VFX artists are cooked" talk is pure nonsense, either made to generate clicks or coming from people who have no clue what it takes to make high-end movies and VFX.

I could go on and on about why it's so far from being a real threat (resolution, compression, consistency, length, and so on), but I'm sure you get the idea.

https://redd.it/1ry95lo
@vfxbackup
Best shape for printed tracker points?

Hello.

I would like to print some tracker markers to put on the ground so I can rebuild my scene more easily. Which shape seems the most convenient?

I was thinking of a cone with a 10 cm base and 8 cm height, with a notch to tie a 1-meter string so I can space them out quickly. It also has to be stackable and not take up much space.

For the colors, I thought of making multiple colors such as black, white, orange, and green.

Is there a better way, or any tips I didn't think of?

Have a nice day.

https://redd.it/1rya50w
@vfxbackup
Getting back into VFX after 4 years of non-VFX Work

Hi guys!

I am a trained on-set VFX supervisor and AE & Nuke artist (I stumbled into it almost a decade ago after "just trying a music video" and got hooked). After COVID broke almost everything, I went into IT (system administration and AI work) and through that came full circle back to VFX. I am just stunned by how much has changed in 4 years; it's really difficult to catch up on what happened.

I started again a month ago, doing small projects that are part AI (locally generated, of course, and meticulously edited so it doesn't look like fucking slop; it's a tool, not a one-stop solution) and part real footage (with all the usual work put in, at a much higher price; thanks, subscriptions, I miss my Adobe CS6), and I'm a bit stunned by how AI slop makes customers' demands... insane.

I got a project from a music studio for an album presentation. I did my work, sent him the final comp for review, and he was happy with it.

Days later, he messaged me saying he had shown it to others and now wants something more "AI", like riding a bicycle to the moon, standing on the moon, and catching the soon-to-be-released CD there.

Aight, can do that, no problem. Kicking myself that I agreed to a flat fee, though.

Guess who is sitting here working in Maya, Nuke and Nucoda (yeah, don't laugh, I grew up my whole working life on a film-master setup) to make that happen, because AI can, by god and a 32 GB VRAM card, not manage to grasp how a bike should work.

Maybe AI isn't really a threat. Not yet at least, because for good shots you still need a halfway good artist. It's good at taking tedious parts away (looking at you, set-extension and badly lit green screen removal). And people who fully use AI and call it a day weren't our customer base to begin with.

I am definitely trying to go back doing full-time vfx and editing again.

https://redd.it/1ryqtie
@vfxbackup
VFX artists who want to return to their hometown but have no local opportunities — what do you do?

Hi everyone,
I’ve been thinking about a situation that many VFX artists might relate to.
A lot of people in VFX spend their first 4–6 years in metro cities like Mumbai, Bangalore, or Hyderabad, away from their families, just to build their career. But after some years, priorities start to change:

- Wanting to be closer to family
- Not wanting to stay in expensive metro cities forever
- Looking for a more stable and balanced life

The problem is:

👉 Most VFX jobs are concentrated in big cities
👉 Smaller cities / hometowns usually have very limited or no opportunities

So it creates a tough choice: stay in metro cities for your career, or move back home for your personal life.

I wanted to ask people who have been through this or seen others go through it:

- What do artists usually do in this situation?
- Is remote work a realistic option in VFX today?
- Do people switch to other fields (like real-time, teaching, freelancing)?
- Or do most continue staying in metro cities long-term?

I’m not asking from a negative point of view, just trying to understand how people balance career and personal life in this industry.
Would really appreciate hearing real experiences.

https://redd.it/1rytccv
@vfxbackup
VFX artists in India — let’s share REAL salaries (Fresher to Supervisor)

Hi everyone,
After reading so many mixed opinions about VFX salaries in India, I feel there’s a lot of confusion, especially for beginners trying to understand the real market. Instead of guessing, why not make this thread useful for everyone?

👉 If you’re comfortable, please share your current salary + experience level in this format:

- Experience: (e.g., 0–1 / 2–4 / 5–8 / 10+ years)
- Role: (FX / Lighting / Comp / Crowd / etc.)
- Salary (monthly or LPA):
- City (optional):

No need to mention the company name. I think this can really help:

- Beginners understand the real starting point
- Mid-level artists see the growth path
- Seniors share what’s actually achievable

There’s a lot of talk about “low pay” or “good pay”, but real numbers are rarely shared openly. Let’s make this thread transparent and helpful for the community.

https://redd.it/1rysvoh
@vfxbackup
Physically-based lens flare tool for Nuke (open source) – feedback welcome

Hi all,

I’ve just open-sourced a lens flare tool I’ve been developing for Nuke, and I’d love to get some feedback from the community:

Repo: [https://github.com/LocalStarlight/flaresim_nuke](https://github.com/LocalStarlight/flaresim_nuke)
Release: [https://github.com/LocalStarlight/flaresim_nuke/releases/tag/v1.0.0](https://github.com/LocalStarlight/flaresim_nuke/releases/tag/v1.0.0)
Tutorial: [https://www.youtube.com/watch?v=yEsBOQNG16Y](https://www.youtube.com/watch?v=yEsBOQNG16Y)

# Credits

This project builds directly on the foundational work of **Eamonn Nugent** ([u/space55](https://www.reddit.com/user/space55/)).

His original CPU-based renderer established:

* The core ray-tracing approach
* The optical modelling
* The lens file format

This repo is essentially an adaptation/extension of that work into a Nuke context.

Original project: [https://github.com/space55/blackhole-rt](https://github.com/space55/blackhole-rt)

# What it does

This tool takes a more physically-based approach to lens flares:

* Treats bright pixels in the image as individual light sources
* Generates ghosting and flare elements based on optical behaviour rather than presets
* Produces flares that inherit structure from the source image

One of the more interesting side effects is that flares can actually contain **distorted versions of the source**—which feels much closer to real-world lens behaviour than typical sprite-based approaches.
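To make the "bright pixels as light sources" idea concrete, here is a hypothetical toy sketch. It is not code from the repo, and the threshold and reflection scales are invented; it only illustrates the geometric core: each pixel above a luminance threshold acts as a point source, and each ghost is that source's position mirrored through the optical centre and scaled, which is why the flare inherits structure from the image and ghosts line up along a line through centre.

```python
# Minimal illustrative sketch of ghost placement for a
# physically-inspired lens flare: bright pixels act as light sources,
# and each ghost is the source position reflected through the optical
# centre and scaled. Threshold and scales are made-up example values.

def find_sources(image, threshold=0.8):
    """Return (x, y, value) for every pixel brighter than threshold."""
    sources = []
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if v > threshold:
                sources.append((x, y, v))
    return sources

def ghost_positions(sources, center, scales=(-0.5, -1.2, 0.7)):
    """Place one ghost per source per scale, mirrored through centre.

    A negative scale puts the ghost on the opposite side of the
    optical axis, so all ghosts of one source fall on a single line
    through the optical centre, as they do with real lenses.
    """
    cx, cy = center
    ghosts = []
    for x, y, v in sources:
        dx, dy = x - cx, y - cy
        for s in scales:
            ghosts.append((cx + s * dx, cy + s * dy, v * abs(s)))
    return ghosts

# Tiny 5x5 test image with one hot pixel at (x=4, y=0).
image = [[0.0] * 5 for _ in range(5)]
image[0][4] = 1.0

sources = find_sources(image)
ghosts = ghost_positions(sources, center=(2, 2))
```

Because every bright source pixel spawns its own set of ghosts, a structured highlight (a window, a sign) produces ghosts that are scaled, mirrored copies of that structure, matching the "distorted versions of the source" behaviour described above.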

# Current state

* Early / WIP
* Not especially optimised yet
* But already producing some interesting (and occasionally surprising) results

# Would love feedback on

* Whether this approach feels useful in production
* Performance vs quality tradeoffs
* Features needed for real pipeline use
* Any issues with the underlying approach

If people find it useful, I’d be happy to keep developing it further.

Cheers!

https://redd.it/1ryvhh1
@vfxbackup
Should I put VFX or compositing in my showreel?

I am making a showreel as a beginner filmmaker (I have a BA and an MA but not a ton of experience; I've been editing for about 10 years and making my own stuff for about 6), and I am putting my roles in the corner, focusing on my favourites: editor, director, and VFX.

When I say VFX, I mean basic stuff, compositing, keying, rotoscoping, motion tracking, basic grading, keyframing, and other basic effects you can do in Premiere and After Effects.

Should I put my roles as director, editor, VFX, or something else, since I can't do very advanced VFX (I don't have the kit for it)?

Also, my showreel just starts and has music, clips and sometimes audio. I have the project title and my roles top right, and the name and showreel bottom left. At the end, I have my contact details. Should I put my name and roles in big letters at the start/end as well?

https://redd.it/1ryw03f
@vfxbackup