FFmpeg audio/video remuxing (MP4 and FLV interconversion, and streaming data to FLV or MP4)
2022-07-18 06:41:00 【Mr.codeee】
1. Introduction
Remuxing converts audio/video from one container format into another without re-encoding, for example from FLV to MP4, or from a stream URL to an MP4 file.
This article mainly explains remuxing stream-address data into an FLV file.
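Before diving into the API, it is worth noting that the same container-to-container copy can be done from the command line with ffmpeg's stream-copy mode. The commands below are a sketch (they assume an ffmpeg binary is installed; the file names are placeholders):

```shell
# Remux FLV to MP4 without re-encoding (stream copy)
ffmpeg -i input.flv -c copy output.mp4

# Remux MP4 back to FLV
ffmpeg -i input.mp4 -c copy output.flv
```

The `-c copy` option copies the compressed packets directly, which is exactly what the API walkthrough below does programmatically.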
2. Workflow

2.1 Register the library. Older versions of FFmpeg require registering the API before using it; in newer versions (FFmpeg 4.0 and later) this call is deprecated and no longer needed.
av_register_all();
2.2 Build the input AVFormatContext
Declare the input format context and open it using a file path or stream URL as the input handle. The example uses a Korean TV station's RTMP stream: rtmp://mobliestream.c3tv.com:554/live/goodtv.sdp.
AVFormatContext* inputFmtCtx = nullptr;
const char* inputUrl = "rtmp://mobliestream.c3tv.com:554/live/goodtv.sdp";
// Open the input stream and start receiving data
int ret = avformat_open_input(&inputFmtCtx, inputUrl, NULL, NULL);
2.3 Find stream information. The following call probes the input and fills in the stream information on the AVFormatContext.
// Probe the input to populate stream info
if (avformat_find_stream_info(inputFmtCtx, NULL) < 0)
{
printf("Couldn't find stream information.\n");
return -1;
}
2.4 Build the output AVFormatContext
// Output file
AVOutputFormat *ofmt = NULL;
AVFormatContext *ofmt_ctx = NULL;
const char* out_filename = "out.flv";
avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
if (!ofmt_ctx)
{
return -1;
}
2.5 Allocate the output stream.
AVStream* out_stream = NULL;
// Create a new stream
out_stream = avformat_new_stream(ofmt_ctx, NULL);
if (!out_stream)
{
return -1;
}
2.6 Copy stream parameters. Once the output stream exists, copy the codec parameters from the input stream to it.
// Input stream: video, audio, subtitles, etc.
AVStream* in_stream = ifmt_ctx->streams[i];
AVCodecParameters* in_codecpar = in_stream->codecpar;
// Copy the input stream information to the output stream
ret = avcodec_parameters_copy(out_stream->codecpar, in_codecpar);
if (ret < 0)
{
return -1;
}
// Clear the codec tag so the muxer can choose one valid for the output container
out_stream->codecpar->codec_tag = 0;
2.7 Open the output file
// Open the output file
ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
if (ret < 0)
{
return -1;
}
2.8 Write the file header
// Write header
ret = avformat_write_header(ofmt_ctx, NULL);
if (ret < 0)
{
return -1;
}
2.9 Read and write packets
With both input and output open, read packets from the input format context and write them to the output file. Because the input container's time base generally differs from the output's, each packet's timestamps must be rescaled.
AVPacket pkt;
while (1)
{
AVStream* in_stream = NULL;
AVStream* out_stream = NULL;
// Read one packet from the input into pkt
ret = av_read_frame(ifmt_ctx, &pkt);
if (ret < 0)
break;
in_stream = ifmt_ctx->streams[pkt.stream_index];
if (pkt.stream_index >= stream_mapping_size || stream_mapping[pkt.stream_index] < 0)
{
av_packet_unref(&pkt);
continue;
}
pkt.stream_index = stream_mapping[pkt.stream_index];
out_stream = ofmt_ctx->streams[pkt.stream_index];
/* copy packet */
pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
pkt.pos = -1;
ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
if (ret < 0)
{
fprintf(stderr, "Error muxing packet\n");
break;
}
av_packet_unref(&pkt);
}
2.10 Write the file trailer
// Write the file trailer
av_write_trailer(ofmt_ctx);
2.11 Clean up: close the input and output, and release resources.
// close
avformat_close_input(&ifmt_ctx);
if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
avio_closep(&ofmt_ctx->pb);
avformat_free_context(ofmt_ctx);
av_freep(&stream_mapping);
if (ret < 0 && ret != AVERROR_EOF)
{
return -1;
}
3. Source code
#include "pch.h"
#include <iostream>
extern "C"
{
#include "libavformat/avformat.h"
#include "libavutil/dict.h"
#include "libavutil/opt.h"
#include "libavutil/timestamp.h"
#include "libswscale/swscale.h"
#include "libswresample/swresample.h"
};
int main()
{
    //av_register_all();
    avformat_network_init();

    AVFormatContext* ifmt_ctx = NULL;
    const char* inputUrl = "rtmp://mobliestream.c3tv.com:554/live/goodtv.sdp";

    /// Open the input stream
    int ret = avformat_open_input(&ifmt_ctx, inputUrl, NULL, NULL);
    if (ret != 0)
    {
        printf("Couldn't open input stream.\n");
        return -1;
    }

    // Find stream information
    if (avformat_find_stream_info(ifmt_ctx, NULL) < 0)
    {
        printf("Couldn't find stream information.\n");
        return -1;
    }

    // Output file
    AVOutputFormat* ofmt = NULL;
    AVFormatContext* ofmt_ctx = NULL;
    const char* out_filename = "out.flv";
    avformat_alloc_output_context2(&ofmt_ctx, NULL, NULL, out_filename);
    if (!ofmt_ctx)
    {
        return -1;
    }

    int stream_mapping_size = ifmt_ctx->nb_streams;
    // Allocate the stream-index mapping array
    int* stream_mapping = (int*)av_mallocz_array(stream_mapping_size, sizeof(*stream_mapping));
    if (!stream_mapping)
    {
        return -1;
    }

    int stream_index = 0;
    ofmt = ofmt_ctx->oformat;
    for (int i = 0; i < ifmt_ctx->nb_streams; i++)
    {
        // Output stream
        AVStream* out_stream = NULL;
        // Input stream: video, audio, subtitles, etc.
        AVStream* in_stream = ifmt_ctx->streams[i];
        AVCodecParameters* in_codecpar = in_stream->codecpar;
        if (in_codecpar->codec_type != AVMEDIA_TYPE_AUDIO &&
            in_codecpar->codec_type != AVMEDIA_TYPE_VIDEO &&
            in_codecpar->codec_type != AVMEDIA_TYPE_SUBTITLE)
        {
            stream_mapping[i] = -1;
            continue;
        }
        stream_mapping[i] = stream_index++;

        // Create a new output stream
        out_stream = avformat_new_stream(ofmt_ctx, NULL);
        if (!out_stream)
        {
            return -1;
        }
        // Copy the input stream parameters to the output stream
        ret = avcodec_parameters_copy(out_stream->codecpar, in_codecpar);
        if (ret < 0)
        {
            return -1;
        }
        out_stream->codecpar->codec_tag = 0;
    }

    if (!(ofmt->flags & AVFMT_NOFILE))
    {
        // Open the output file
        ret = avio_open(&ofmt_ctx->pb, out_filename, AVIO_FLAG_WRITE);
        if (ret < 0)
        {
            return -1;
        }
    }

    // Write the file header
    ret = avformat_write_header(ofmt_ctx, NULL);
    if (ret < 0)
    {
        return -1;
    }

    AVPacket pkt;
    while (1)
    {
        AVStream* in_stream = NULL;
        AVStream* out_stream = NULL;
        // Read one packet from the input into pkt
        ret = av_read_frame(ifmt_ctx, &pkt);
        if (ret < 0)
            break;
        in_stream = ifmt_ctx->streams[pkt.stream_index];
        if (pkt.stream_index >= stream_mapping_size || stream_mapping[pkt.stream_index] < 0)
        {
            av_packet_unref(&pkt);
            continue;
        }
        pkt.stream_index = stream_mapping[pkt.stream_index];
        out_stream = ofmt_ctx->streams[pkt.stream_index];

        /* copy packet: rescale timestamps from the input to the output time base */
        pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
        pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, (AVRounding)(AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
        pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
        pkt.pos = -1;

        ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
        if (ret < 0)
        {
            fprintf(stderr, "Error muxing packet\n");
            break;
        }
        av_packet_unref(&pkt);
    }

    // Write the file trailer
    av_write_trailer(ofmt_ctx);

    // Close input and output, release resources
    avformat_close_input(&ifmt_ctx);
    if (ofmt_ctx && !(ofmt->flags & AVFMT_NOFILE))
        avio_closep(&ofmt_ctx->pb);
    avformat_free_context(ofmt_ctx);
    av_freep(&stream_mapping);

    if (ret < 0 && ret != AVERROR_EOF)
    {
        return -1;
    }
    return 0;
}
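To build the source, something along these lines should work on Linux (a sketch, assuming pkg-config can find your FFmpeg installation; `remux.cpp` is a placeholder file name):

```shell
# Compile and link against the FFmpeg libraries the source uses
g++ remux.cpp -o remux $(pkg-config --cflags --libs libavformat libavcodec libavutil libswscale libswresample)
```

Note that the `#include "pch.h"` line is a Visual Studio precompiled-header stub; outside MSVC projects, remove it.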