Dot the AI's
365 subscribers
335 photos
442 videos
2 files
524 links
Accessible artificial intelligence and singularity chronicles.

We research and showcase AI services for all professions.
We write AI-related news. Concise and understandable.


Collaboration: @hello_voic
Finally!

Gen-4 from Runway. When did Gen-3 come out? A year ago? I think something like that.

All the details here.

In short:

– much more control and consistency, both for characters and objects in the scene
– super control of angles and camera
– everything becomes very cinematic
– better physics and understanding of the world

On the Gen-4 page, there are examples of short films and clips.

#VideoGenerative@TochkiNadAI #videoEditing@TochkiNadAI
What's up with gen-video? Micro-digest.

1. Pika has added Pikaframes. Now you can add up to five images and generate a video up to 25 seconds long based on them. By the way, this also works in the iOS app.

2. In Adobe Premiere Pro, you can now erase objects, change backgrounds, sync audio with video, and search for the scenes you need by describing them in a prompt. 4K video generation is now available.

3. Runway introduced Gen-4 Turbo. It now takes 30 seconds to create a 10-second video.

#VideoGenerative@TochkiNadAI
#videoEditing@TochkiNadAI
Veo-2 can be used via Google Cloud

I had to struggle to generate yesterday's scene from the Oval Office, where Trump boasts to journalists that his friends made billions on the market swings caused by the tariffs.

In the end, Google refused to generate Trump, so I had to move from realism to animation, but we'll leave it as is)

You can test Veo-2 here. It will ask you to link an account and a card. This is text-to-video.

#VideoGenerative@TochkiNadAI
An example of how Multi-Elements works in Kling

A brief how-to; as some subscribers noted, even simple instructions are worth publishing. Overall, it's easy and fun:

01. Select “Multi-Elements”
02. Upload the reference video
03. Highlight the necessary edits
04. Upload the reference image
05. Choose “Replace/Add/Delete”
06. Add a prompt + generate.

Notice the cool artifact on the output video with the guy's tongue😛

#VideoGenerative@TochkiNadAI
Luma launched Camera Angle

This means that now you can choose your camera angle more precisely.

For example: top view, selfie, bottom view, and more.

#VideoGenerative@TochkiNadAI
Google today showed the new Veo 3.

Now the model itself adds sound, ambient noise, and dialogue to the video. All from a single prompt. Soon you'll be able to generate an entire scene: image + atmosphere + voice.

Google is launching a new trend. We'll wait to see what others will show, but for now, this is 10/10.

Test it here; US only for now.

#videoGenerative@TochkiNadAI
Kling doesn't yet have convenient sound generation like Veo 3; everything is added manually through separate sections. This slows down the workflow, especially if you're used to a full AI pipeline. But! Visually, Kling 2.1 is getting closer to production level. With free credits, an affordable price, and API prospects, it could be a great choice for creators, studios, and agencies looking for a balance between quality and cost.

Share your experiences, let's discuss new pipelines and AI solutions for brands. Testing here.

#videoGenerative@TochkiNadAI
If you missed it – Dream Machine has introduced Reframe. The good old outpainting. It extends images or videos beyond the frame.

– adapts to any format (stories, posts, cinematic widescreen)
– expands the scene without losing the main focus

It's simple: upload your media, click Reframe, and choose the desired format. Especially useful for content that lives across multiple platforms.

Great find for everyone working with visual content.

#videoGenerative@TochkiNadAI #videoEditing@TochkiNadAI
Manus decided that dozens of video generators aren't enough for us and has just announced its own.

Well, we're not against it.

Early access will soon be available for Basic, Plus, and Pro accounts.

#videoGenerative@TochkiNadAI
Let's say you want to generate videos in Sora but can't or don't want to pay for ChatGPT.

Not long ago, Bing added Sora video generation to its app.

I decided to check this myself, and I confirmed it. Proof is in the picture.

And I also made an unusual personal discovery: the Bing app was already installed on my phone. Circumstances are still being clarified!

#videoGenerative@TochkiNadAI
Still, I'm a fan of cool presentations of updates and features in AI services.

I regularly praise the marketers of Pika, but Luma is good too. The first video is official, presenting the fresh functionality of video-to-video stylization in Dream Machine. Isn’t it cool?

The second is a test of the feature by Mr. Dobrokotov (@strangedalle), to clearly show that we're not just being fed cherry-picks here.

#videoGenerative@TochkiNadAI
Did someone order new video generators?

1. This is a new bid for success from ByteDance, called Seedance 1.0. It's positioned as a fast, functional model: 1080p resolution, 24 frames per second, multi-frame video support, and near real-time rendering. The cost is reportedly around 50 cents (fifty cent 🤭) per 5 seconds.

2. And this is a teaser compilation of the upcoming video model from Midjourney. Creators' opinions are mixed so far. It feels like we should prepare for something that will be decent in quality and expensive. Let's see what features Midjourney will use to compete with others.

#videoGenerative@TochkiNadAI
Chinese Hailuo 02 (Minimax)

The internet is going crazy over videos where cats jump into a pool with excellent splash physics. A logical step, as the sporty cats have replaced the grandmas from Veo 3.

What it can do:
• Text → video and photo → video.
• Maintains one character in all scenes (reference images).
• Clean 5–10 second output within a couple of minutes.
• Built-in templates for titles, transitions, voiceover.

Hailuo works in the browser and on mobile, doesn't require a 4090. Quality ≈ Veo 3.

Free tokens are given upon registration.

#videoGenerative@TochkiNadAI
Has anyone tested Showrunner yet?

Brief overview – this is an AI platform where you can prompt, choose the genre and details, and then generate fully animated scenes, including characters, dialogues, and all visuals. It's all aimed at creating animated series.

They opened access to the platform a year ago, but back then you couldn't generate anything yourself. Instead, a range of cartoons was available, and it was positioned as "AI cartoon streaming, where you can influence the plot" (attaching a screenshot from those times).

Now things are more serious – there's a platform for generation, but access is still through a waitlist.

#videoGenerative@TochkiNadAI
#fun@TochkiNadAI
If you happened to be looking for it – here's Veo 3 with improved realism.

By the way, OpenArt recently went viral on X and Product Hunt as a platform for creating comprehensive stories. I haven't tested it; if you have, share your thoughts.

#videoGenerative@TochkiNadAI
A short, cool example of the recently announced Runway Aleph.

A lot of things can be done: adding, deleting, and transforming objects, creating any scene angle, changing style and lighting.

Control over video generation keeps increasing.

#videoGenerative@TochkiNadAI
#videoEditing@TochkiNadAI