  1.

    Is there some release document somewhere which describes more than just a list of new codecs, protocols and muxers? Specifically, why did the project decide that 5.0 is worthy of a new major version? Are there incompatibilities? Project direction changes? Branding changes? Etc.

    1.

      Doesn’t seem to be published yet, but there’s the release patch: http://ffmpeg.org/pipermail/ffmpeg-devel/2022-January/290811.html

      A new major release is now available! For this long-overdue release, a major effort went into removing the old encode/decode APIs and replacing them with an N:M-based API. The entire libavresample library was removed, libswscale got a new, easier-to-use AVFrame-based API, the Vulkan code was much improved, many new filters were added (including libplacebo integration), and finally, DoVi support was added, including tonemapping and remuxing.

    2.  

      Understanding this code is over my head right now.

      Can someone recommend the standard method for writing a simple media player?

      Is it generally like this:

      1. Read a frame and display it on a canvas, basically pixel by pixel.
      2. Sleep a very small amount of time.
      3. Read the next frame and display it.

      Of course, a lot of things can be optimized, but is that a simplified version of how a media player works?

      1.  

        https://github.com/leandromoreira/ffmpeg-libav-tutorial seems to be a useful tutorial about this. Though I have to admit, it’s been more than 10 years since I last touched libav*.
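
        The heart of what that tutorial builds up to is roughly the loop below. This is just a sketch from memory against the FFmpeg 5.x send/receive API, not the tutorial’s exact code: no audio, no seeking, no error handling, and display_frame() is a hypothetical stand-in for whatever renderer you use.

        ```c
        /* Minimal video decode loop (FFmpeg 5.x sketch).
           Build: gcc player.c -o player -lavformat -lavcodec -lavutil */
        #include <libavformat/avformat.h>
        #include <libavcodec/avcodec.h>

        static void display_frame(const AVFrame *f) { /* hand pixels to your canvas/GPU */ }

        int main(int argc, char **argv) {
            AVFormatContext *fmt = NULL;
            if (argc < 2 || avformat_open_input(&fmt, argv[1], NULL, NULL) < 0)
                return 1;
            avformat_find_stream_info(fmt, NULL);

            /* find the video stream and open a decoder for it */
            int vstream = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
            if (vstream < 0)
                return 1;
            const AVCodec *dec = avcodec_find_decoder(fmt->streams[vstream]->codecpar->codec_id);
            AVCodecContext *ctx = avcodec_alloc_context3(dec);
            avcodec_parameters_to_context(ctx, fmt->streams[vstream]->codecpar);
            avcodec_open2(ctx, dec, NULL);

            AVPacket *pkt = av_packet_alloc();
            AVFrame *frame = av_frame_alloc();

            /* 1. read a compressed packet, 2. feed it to the decoder,
               3. drain any finished frames and display them */
            while (av_read_frame(fmt, pkt) >= 0) {
                if (pkt->stream_index == vstream) {
                    avcodec_send_packet(ctx, pkt);
                    while (avcodec_receive_frame(ctx, frame) == 0)
                        display_frame(frame);   /* pace by frame->pts, not a fixed sleep */
                }
                av_packet_unref(pkt);
            }

            av_frame_free(&frame);
            av_packet_free(&pkt);
            avcodec_free_context(&ctx);
            avformat_close_input(&fmt);
            return 0;
        }
        ```

        A real player would also flush the decoder at EOF (avcodec_send_packet(ctx, NULL), then drain remaining frames) so buffered pictures aren’t lost.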

        1.  

          Basically, yeah. Though the way codecs work, they don’t store all the data for every frame; most frames are stored as a difference from other frames. There are different frame types for different purposes, and every so often you get a “check” frame, kind of a reset frame that stores the full picture instead of a differential one; the differential frames in between are decoded relative to those check frames (I-frames vs. P-frames vs. B-frames): https://ottverse.com/i-p-b-frames-idr-keyframes-differences-usecases/
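
          You can actually watch that structure come out of the decoder: every decoded AVFrame is tagged with its picture type. A tiny helper (hypothetical name, meant to be called on each frame from a loop like the sketch above):

          ```c
          #include <stdio.h>
          #include <inttypes.h>
          #include <libavutil/frame.h>   /* AVFrame */
          #include <libavutil/avutil.h>  /* av_get_picture_type_char() */

          /* Print each decoded frame's type: 'I' is a full-picture "check" frame,
             'P' and 'B' are differential frames predicted from other frames. */
          static void log_frame_type(const AVFrame *frame) {
              printf("pts %" PRId64 ": %c frame%s\n",
                     frame->pts,
                     av_get_picture_type_char(frame->pict_type),
                     frame->key_frame ? " (keyframe)" : "");
          }
          ```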

          And typically you would not draw the frame directly, but use some kind of buffering mechanism, so that a fully prepared frame can be shown on the screen instantaneously (rather than being painted pixel by pixel).
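
          That buffering usually amounts to a small queue of decoded frames sitting between the decoder and the renderer, so the renderer never stalls waiting on a slow decode. A minimal sketch of such a ring (locking omitted for brevity; a real player would run the decoder on its own thread and guard head/tail with a mutex or condition variable):

          ```c
          #include <libavutil/frame.h>

          #define QUEUE_SIZE 4

          static AVFrame *queue[QUEUE_SIZE];
          static int head = 0, tail = 0;   /* renderer reads at head, decoder writes at tail */

          /* Decoder side: park a finished frame in the ring; back off if full. */
          static int queue_push(const AVFrame *decoded) {
              int next = (tail + 1) % QUEUE_SIZE;
              if (next == head)
                  return -1;                      /* full: decoder is far enough ahead */
              queue[tail] = av_frame_clone(decoded);
              tail = next;
              return 0;
          }

          /* Renderer side: take the oldest frame, or NULL if the decoder fell behind.
             The caller displays it and then releases it with av_frame_free(). */
          static AVFrame *queue_pop(void) {
              if (head == tail)
                  return NULL;                    /* empty */
              AVFrame *f = queue[head];
              head = (head + 1) % QUEUE_SIZE;
              return f;
          }
          ```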

          1.  

            Caveat: my knowledge is purely theoretical. But here’s my first reaction:

            1. Due to the encoding, you won’t ever read finished frames, only instructions, e.g., about differences from the previous frame
            2. There might not be enough time to sleep :-) (you pace by timestamps instead; see the sketch below)
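
            On point 2: instead of a fixed sleep, players compute how long to wait from each frame’s presentation timestamp, and skip the wait entirely (or drop the frame) when decoding ran long. A sketch, assuming libavutil’s av_gettime()/av_usleep() helpers and a valid frame->pts:

            ```c
            #include <stdint.h>
            #include <libavutil/rational.h>  /* AVRational, av_q2d() */
            #include <libavutil/time.h>      /* av_gettime(), av_usleep() */

            /* Wait until 'pts' (in stream time_base 'tb' units) is due, measured
               against 'start_us', the wall clock in microseconds when playback began.
               If we're already late, don't sleep at all: show the frame immediately. */
            static void wait_for_pts(int64_t start_us, int64_t pts, AVRational tb) {
                int64_t due_us = start_us + (int64_t)(pts * av_q2d(tb) * 1e6);
                int64_t delay  = due_us - av_gettime();
                if (delay > 0)
                    av_usleep((unsigned)delay);
            }
            ```

            This would be called right before displaying each frame, with start_us captured once via av_gettime() at playback start and tb taken from the demuxer’s stream (fmt->streams[vstream]->time_base in the sketch further up).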