If you want Nim to become popular in audiovisual computing - Computer Vision for self-driving cars or Video Processing for Film & Television, for example - I strongly suggest that the language come with basic image/audio/video read/write capabilities by default.

Yes, we do. So why don't you go ahead and write those capabilities? We have our hands full with other things... A package manager, forum, compiler, and standard library don't write themselves!

2017-01-24 17:17:28

We have lots of people pitching ideas they can't seem to implement themselves. Common problem. Other people could implement those ideas, but time and energy are finite. Also a common problem.

What we need is some sort of NimLancer broker system that would make it easy for people to hire a freelancer to implement new open source Nim libraries, compiler / stdlib pull requests, etc. The programmer only gets paid when the requirements are met, with reputable people available to judge any disputes. This would be good for the Nim community / ecosystem as a whole.

So: what are the specific requirements of the high-level video library you wanna see, and how much Bitcoin would you be willing to chip in?

I for one would be willing to code very cheaply if it's with Nim and copyfree, but this particular project isn't for me. I'm very rusty, but if I could find some relatively easy paid Nim projects, I'd be able to devote more time and eventually take on more complex projects - and of course my own ideas...

2017-01-24 19:42:12

I did a lot of video processing in the past.

What you want is a frameserver like Avisynth (Windows-only, multithreading as an afterthought, custom scripting language) or VapourSynth (Windows, Mac, Linux, scripted through Python).

Example VapourSynth script to flip (transpose) a video:

from vapoursynth import core
video = core.ffms2.Source(source='Rule6.mkv')
video = core.std.Transpose(video)
video.set_output()

This output can be forwarded to mencoder, ffmpeg, or an x264/x265 encoder. There are facilities to address a particular frame and to use the GPU for many things (see the example page listing all the denoising filters, including the GPU-accelerated ones).

This is a huge undertaking: SIMD, platform-specific video APIs, multiprocessing, the Avisynth/VapourSynth API, writing fast filters (which usually requires assembly), and mathematical/signal-processing knowledge.

2017-09-04 07:44:49

This is an odd request.

Yes, there are languages that have support for audio read/write - but the point is that those languages have that support by means of packages/libraries - that kind of functionality is never in the core language.

So, if Nim doesn't have it, I guess it's because Nim users didn't need that functionality enough!

So, unless someone has that itch and scratches it, Nim will never get support for video read/write.

2017-09-04 10:35:50

Doesn't ffmpeg already support most of this?

Granted, ffmpeg is implemented in C (with some ASM), but it wouldn't be all that difficult to wrap. Someone crazy enough, with enough time on their hands, might even be able to fully port it to Nim.
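
To give a feel for how thin such a wrapper can be, here is a minimal sketch (my own example, not an existing package) that calls a single function from ffmpeg's libavformat through Nim's FFI. It assumes the libavformat development headers and shared library are installed and visible to the C compiler and linker.

# Minimal FFI sketch: query the libavformat version from Nim.
{.passL: "-lavformat".}

proc avformat_version(): cuint {.importc, header: "libavformat/avformat.h".}

when isMainModule:
  let v = avformat_version()
  # ffmpeg packs versions as (major shl 16) or (minor shl 8) or micro.
  echo "libavformat ", v shr 16, ".", (v shr 8) and 0xff, ".", v and 0xff

A real wrapper would of course cover the demuxing/decoding entry points rather than just the version call, but the mechanics are the same: importc declarations plus the right linker flags.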

I agree with most everyone else that Video/Audio manipulation should not be a core feature. I do think it is reasonable that the core language makes it simple for someone to write libraries for such a task. I believe this is already the case judging by the fact that multiple audio/visual libraries already exist for Nim.

The codec issue that was brought up by @krux02 is a very real one. Many video compression formats are proprietary. Even the non-proprietary ones are numerous. Trying to implement core language support for this kind of thing would involve a lot of work including testing and constant feature requests. This would detract from more important concerns like bug fixes, compiler optimizations, and documentation improvements.

Nim is already taking a while to get to 1.0. Trying to tack on additional stdlib features is not what the Nim community needs to focus on at the moment.

2017-09-12 15:12:42

You can probably run c2nim on ffms2. This is what Avisynth/VapourSynth use to store a video in a variable, as in my script:

from vapoursynth import core
video = core.ffms2.Source(source='Rule6.mkv')

And then it provides facilities to get specific frames:

const FFMS_Frame *FFMS_GetFrame(FFMS_VideoSource *V, int n, FFMS_ErrorInfo *ErrorInfo);

const FFMS_Frame *FFMS_GetFrameByTime(FFMS_VideoSource *V, double Time, FFMS_ErrorInfo *ErrorInfo);
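
For reference, a hand-written Nim binding for those two declarations might look roughly like the sketch below (c2nim on the full header would produce something similar, but more complete). It assumes libffms2 and its ffms.h header are installed; the struct types are treated as opaque here, although the real FFMS_Frame and FFMS_ErrorInfo expose fields.

# Hand-written sketch of Nim bindings for the two FFMS2 calls above.
{.passL: "-lffms2".}

type
  FFMS_VideoSource {.importc, header: "ffms.h", incompleteStruct.} = object
  FFMS_Frame {.importc, header: "ffms.h", incompleteStruct.} = object
  FFMS_ErrorInfo {.importc, header: "ffms.h", incompleteStruct.} = object

proc FFMS_GetFrame(V: ptr FFMS_VideoSource, n: cint,
                   ErrorInfo: ptr FFMS_ErrorInfo): ptr FFMS_Frame {.importc,
                   header: "ffms.h".}

proc FFMS_GetFrameByTime(V: ptr FFMS_VideoSource, Time: cdouble,
                         ErrorInfo: ptr FFMS_ErrorInfo): ptr FFMS_Frame {.importc,
                         header: "ffms.h".}

These are declarations only: actually obtaining a frame also requires the indexing and source-creation parts of the FFMS2 API.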

2017-09-13 07:27:29