Merge branch 'master' of https://github.com/ZLMediaKit/ZLMediaKit into feature/transcode2
Some checks failed: the Android, CodeQL (cpp, javascript), Docker, Linux, Linux_Python, macOS, macOS_Python, Windows, and Windows_Python builds were cancelled.

# Conflicts:
#	conf/config.ini
#	src/Codec/Transcode.cpp
#	src/Common/MediaSource.h
#	src/Common/MultiMediaSourceMuxer.cpp
#	src/Common/MultiMediaSourceMuxer.h
#	src/Common/macros.h
#	webrtc/WebRtcPusher.cpp
#	webrtc/WebRtcTransport.cpp
#	webrtc/WebRtcTransport.h
cqm 2026-04-03 09:35:50 +08:00
commit 03526f141e
283 changed files with 42056 additions and 13083 deletions


@ -0,0 +1,137 @@
---
name: Project General Translation & Terminology Guidelines
description: Definitive guidelines, contextual awareness strategies, standard terminology, and comment formatting rules for translating code, configurations, and documentation from Chinese to English in this repository.
---
# 🤖 Systemic Translation & Terminology Instructions for AI Agents
This document is the absolute source of truth and **Standard Operating Procedure (SOP)** for translating Chinese comments, configurations, and documentation into English within this repository.
**ATTENTION AI AGENTS:** You are NOT merely translating words; you are executing a systematic algorithm to localize complex streaming media and networking concepts. Do not rely solely on "passive reading" or "translation memory." You MUST follow the rigid workflow outlined below.
---
## Phase 1: Contextual Anchoring (MANDATORY BEFORE TRANSLATION)
Before translating any block of text, you must explicitly anchor yourself to the specific technical domain. **Literal translation of Chinese industry slang (黑话) is strictly prohibited.**
1. **Identify the Domain:** Look at the module or configuration section (e.g., `[rtp_proxy]`, `[http]`, `[general]`, `[hls]`).
2. **Setup the Mental Lexicon:**
- If `[api]/[http]`: Anchor to standard REST API and Web server concepts (e.g., `Requests/Responses`, `CORS`, `Forwarded IPs`).
- If `Network I/O / [general]`: Anchor to socket programming and OS-level terms (e.g., `Write coalescing`, `Buffers`, `File handles`).
- If `Media Streaming (RTSP/RTMP/RTC)`: Anchor to multimedia transport concepts (e.g., `GOP`, `Payload`, `B-frames`, `Jitter`, `Visual artifacts`).
3. **Verification-Driven Translation:** If you encounter a Chinese term that sounds colloquial or metaphoric (e.g., “花屏” - flowered screen, “秒开” - open in seconds, “溯源” - trace back to origin), **DO NOT guess or translate literally**. Ask yourself: _"How do top-tier English open-source projects (FFmpeg, WebRTC, Nginx) refer to this specific technical phenomenon?"_
---
## Phase 2: Structural Translation & Anti-Pattern Detection
LLMs naturally tend to follow the grammatical structure of the source text. Chinese technical writing often uses sprawling sentences and explanatory fillers. You must actively break these patterns.
### 🚫 Rule 1: The "Action-Result" Paradigm
- **Trigger:** When the Chinese text says "设置为0关闭此特性" (Setting this to 0 disables this feature) or "打开此选项会导致..." (Turning this on causes...).
- **Execution:** Force your output to use the exact structure: `Setting this to [Value] disables [Feature] and allows [Consequence].` Do NOT translate explanatory filler like "This mechanism's logic dictates that...".
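A minimal sketch of Rule 1 applied to a hypothetical `.ini` entry (the key name and values are illustrative, not taken from this repository):

```ini
[general]
# 合并写最大延迟, 设置为0关闭合并写特性
# Maximum write-coalescing latency in milliseconds.
# Setting this to 0 disables write coalescing and flushes data immediately.
mergeWriteMS=300
```

Note how the translation keeps the `Setting this to [Value] disables [Feature]` structure and uses the dictionary term `Write coalescing`.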
### 🚫 Rule 2: Sub-clause Elimination (No "Chinglish")
- **Trigger:** Long noun clusters or overly personified system descriptions (e.g., "服务器会认为这个流是断开的" - The server will think this stream is disconnected).
- **Execution:** Use direct, objective voice: `The stream is considered disconnected.` or `The system drops the stream.`
### 🚫 Rule 3: Clarifying Ambiguous Actions
- **Trigger:** The word `忽略` (Ignore/Skip) vs. `丢弃/放弃` (Abandon/Drop).
- **Execution:** Use `Ignore` or `Skip` for non-critical timeouts (e.g., waiting for a track to be ready). Reserve `Abandon`, `Drop` or `Disconnect` only for fatal errors or closed sockets.
### 🚫 Rule 4: Zero Information Loss & Causal Reconstruction
- **Trigger:** When condensing text for native flow, or translating complex caveats (e.g., parenthetical conditions, "而不是" / instead of, side-effects).
- **Execution:** You may reorganize syntax to sound professional, but you MUST NOT drop crucial qualifiers, modifiers, or side effects. If a Chinese config says "instead of returning X via hook", the English translation must explicitly mention "returning X". Information completeness supersedes structural brevity.
### 🚫 Rule 5: The Golden Balance (Zero Info Loss vs. Native Phrasing)
- **The Core Conflict:** You must achieve **Zero Information Loss** WITHOUT resorting to **Chinglish** or literal word-for-word translation.
- **What "Information" Means:** "Retaining information" means capturing 100% of the **technical causality**, **side-effects**, **prerequisites**, and **system boundaries** present in the Chinese text.
- **What "Information" DOES NOT Mean:** It does NOT mean preserving the Chinese grammatical structure, literal phrasing, or colloquialisms (啰嗦句子和字面用词).
- **Execution (The Top-Down Conceptual Approach):**
1. **Contextual Override:** Never translate a noun literally if the surrounding constraints (e.g., units like "seconds", prefixes, or the specific protocol) dictate a domain term. For example, if a setting is measured in "seconds", the Chinese word "大小" (size) MUST logically translate to `Duration` or `Interval`, **NEVER** `Size`.
2. **Conceptual Compression:** When faced with a sprawling, explanatory Chinese sentence, distill the _technical payload_ and express it using concise, standard industry jargon.
- _Anti-pattern (Literal/Chinglish):_ `After disabling the traditional authentication mode, you must first call the API to log in. Upon success, a cookie will be set, and all APIs can be accessed without restriction as long as the cookie is valid.`
- _Pro-pattern (Native/Jargon):_ `When disabled, users must first call /index/api/login. Upon success, a cookie auth token is set for subsequent requests.` (Using "subsequent requests" efficiently compresses the lengthy Chinese explanation).
3. **Technical Abstraction:** Recognize standard operations (e.g., "拉流再推流"). Do not translate the physical actions (`pulling and then pushing`); translate the abstract technical process (`re-publishing` or `re-encoding`).
### 🚫 Rule 6: Anti-Summarization (Strict Boolean & Causality Preservation)
- **Trigger:** When applying Conceptual Compression (Rule 5) to a text block containing conditionals or explanations.
- **The Core Conflict:** _Compression_ reduces word count by using jargon. _Summarization_ drops critical logic. **Summarization is strictly forbidden.**
- **Execution (The Boolean Mapping Rule):**
1. Treat Chinese comments like code blocks. Extract all `IF/THEN/ELSE` branches, prerequisites, and root causes before translating.
2. If the original text states a "success" path and a "failure" path, the English translation MUST explicitly state both paths. You cannot compress them into a single vague outcome.
3. If the original text states _why_ a feature exists (the exact cause or defect being prevented), the English translation MUST explicitly state that exact cause. You cannot compress it into generic "to improve performance" or "to prevent errors."
4. Perform a **Reverse Mapping Check**: After writing the English sentence, ask yourself—"If I reverse-compile this English back to Chinese, would any `IF` conditions or edge-case explanations be missing?" If yes, rewrite it completely to restore the dropped logic.
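A hedged illustration of the Boolean Mapping Rule, using a hypothetical config key; both branches of the Chinese comment survive in the English block:

```ini
# 开启后, 推流鉴权失败会立即断开连接; 关闭后仅打印警告并继续接收推流
# When enabled, a publisher that fails authentication is disconnected immediately.
# When disabled, only a warning is logged and the stream is still accepted.
strictPushAuth=1
```

Compressing this into "controls authentication strictness" would be forbidden summarization: the `ELSE` branch (warn and accept) would be lost.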
---
## Phase 3: The Hardcoded Terminology Dictionary
**CRITICAL:** When translating, if you encounter these Chinese concepts, you MUST use the first English term provided, exactly as written. **Do not mix or alternate synonyms.**
### Network & Architecture
- 源站 -> `Origin server`
- 溯源 (拉流) -> `Origin pull`
- 推流代理 / 拉流代理 -> `Publishing proxies` / `Pulling proxies`
- 按需拉流 -> `On-demand stream pulling`
- 集群 -> `Cluster`
- 推流断开后的超时等待 -> `Grace period for publisher reconnection`
### Video & Playback Experience
- 秒开 / 极速秒开 -> `Instant playback (zero-delay startup)` (e.g., 级联秒开 -> `Instant playback for cascaded streams`)
- 花屏 -> `Visual artifacts (glitches)` _(NEVER use "Screen tearing", which is a hardware V-sync issue)_
- 卡顿 -> `Playback stuttering`
### System I/O & HTTP
- 合并写 -> `Write coalescing` _(NEVER use "Merged write")_
- 请求和回复 -> `Requests and Responses` _(Avoid "Replies")_
- 在代理后方获取真实IP -> `Extract the real client IP when behind a proxy (e.g., via X-Forwarded-For)`
### General Technical Terms
- 切片 -> `Segment` (e.g., HLS segment)
- 封装 / 打包 -> `Packaging`
- 负载 -> `Payload`
- 鉴权 -> `Authentication`
- 处理 / 应对 (故障) -> `Handle` or `Address`
---
## Phase 4: Strict Formatting Rules (CRITICAL)
When translating comments inside code files (`.cpp`, `.h`) or configs (`.ini`), apply these hard constraints:
1. **Bilingual Retention:** Unless explicitly instructed to delete Chinese, **ALWAYS retain the original Chinese comments**.
2. **Bottom Placement:** Place the English translation immediately **below** the Chinese line or block.
3. **Block Uniformity:** Do NOT translate line-by-line (`ZH-EN-ZH-EN`). If a Chinese comment is a 3-line block, output it as a 3-line Chinese block followed by a 3-line English block.
```cpp
/*
* 这里是第一行中文描述。
* 这里是第二行中文补充。
*/
/*
* This is the English translation of the first line.
* This is the English translation of the second line.
*/
```
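The same block rule applies to `.ini` configs; a sketch with a hypothetical key:

```ini
# 此开关决定是否开启HLS协议
# 关闭后将不再生成切片文件
# This switch controls whether the HLS protocol is enabled.
# When disabled, segment files are no longer generated.
enableHls=1
```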
---
## Phase 5: The Post-Translation Verification Workflow (DO NOT SKIP)
If you are asked to review or update translations in a long file, **you cannot rely solely on passive reading**. You MUST execute this workflow:
1. **Active Scan (Regex/Search):** Before reading the document, use file search tools to actively scan for known anti-patterns in the current English text (e.g., search for `Screen tearing`, `Merged write`, `Replies`, `Source station`). Fix them immediately.
2. **Format Review:** Scan for `ZH-EN-ZH-EN` interleaving and fix it to block format.
3. **Blind English Review:** After translating, hide the Chinese text from your mental context. Read _only_ your English output and ask: _Does this sound like a snippet from the official Nginx or WebRTC manuals? Is it concise (CBD: Clarity, Brevity, Directness)?_ If it sounds like a literal word-for-word translation, rewrite it natively.
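Step 1 of this workflow can be mechanized; a minimal `grep` sketch (the sample file, path, and term list are illustrative):

```shell
# Write a throwaway sample file containing one known anti-pattern term.
printf '%s\n' '# Merged write threshold in milliseconds' > /tmp/sample_en.ini
# Scan it for forbidden terms from the terminology dictionary.
grep -nE 'Screen tearing|Merged write|Replies|Source station' /tmp/sample_en.ini
# → 1:# Merged write threshold in milliseconds
```

Any hit should be rewritten with the mandated term (here, `Write coalescing`) before the manual review pass.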

.claude/skills Symbolic link

@ -0,0 +1 @@
../.agent/skills


@ -2,7 +2,7 @@ name: Android
on: [push, pull_request]
jobs:
build:
- runs-on: ubuntu-20.04
+ runs-on: ubuntu-24.04
steps:
- name: 下载源码


@ -5,7 +5,7 @@ on: [push, pull_request]
jobs:
analyze:
name: Analyze
- runs-on: ubuntu-20.04
+ runs-on: ubuntu-24.04
permissions:
actions: read
contents: read
@ -43,7 +43,7 @@ jobs:
with:
repository: cisco/libsrtp
fetch-depth: 1
- ref: v2.3.0
+ ref: v2.7.0
path: 3rdpart/libsrtp
- name: 编译 SRTP


@ -15,7 +15,7 @@ env:
jobs:
build:
- runs-on: ubuntu-20.04
+ runs-on: ubuntu-24.04
permissions:
contents: read
packages: write


@ -6,7 +6,7 @@ on:
jobs:
issue_lint:
- runs-on: ubuntu-22.04
+ runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v3


@ -5,7 +5,7 @@ on: [push, pull_request]
jobs:
build:
- runs-on: ubuntu-20.04
+ runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v1

.github/workflows/linux_py.yml (new file)

@ -0,0 +1,172 @@
name: Linux_Python
on: [push, pull_request]
jobs:
build:
runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v1
- name: 下载submodule源码
run: mv -f .gitmodules_github .gitmodules && git submodule sync && git submodule update --init
- name: 下载 SRTP
uses: actions/checkout@v2
with:
repository: cisco/libsrtp
fetch-depth: 1
ref: v2.3.0
path: 3rdpart/libsrtp
- name: 下载 openssl
uses: actions/checkout@v2
with:
repository: openssl/openssl
fetch-depth: 1
ref: OpenSSL_1_1_1
path: 3rdpart/openssl
- name: 下载 usrsctp
uses: actions/checkout@v2
with:
repository: sctplab/usrsctp
fetch-depth: 1
ref: 0.9.5.0
path: 3rdpart/usrsctp
- name: 启动 Docker 容器, 在Docker 容器中执行脚本
run: |
docker pull centos:7
docker run -v $(pwd):/root -w /root --rm centos:7 sh -c "
#!/bin/bash
set -x
# Backup original CentOS-Base.repo file
cp /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
# Define new repository configuration
cat <<EOF > /etc/yum.repos.d/CentOS-Base.repo
[base]
name=CentOS-7 - Base - mirrors.aliyun.com
baseurl=http://mirrors.aliyun.com/centos/7/os/x86_64/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
[updates]
name=CentOS-7 - Updates - mirrors.aliyun.com
baseurl=http://mirrors.aliyun.com/centos/7/updates/x86_64/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7
EOF
cat > /etc/yum.repos.d/epel-aliyun.repo <<EOF
[epel]
name=Extra Packages for Enterprise Linux 7 - x86_64
baseurl=http://mirrors.aliyun.com/epel/7/x86_64/
enabled=1
gpgcheck=0
EOF
cat > /etc/yum.repos.d/CentOS-SCLo-aliyun.repo <<EOF
[C7-SCLo-rh]
name=CentOS-7 SCLo RH - x86_64
baseurl=http://mirrors.aliyun.com/centos/7/sclo/x86_64/rh/
enabled=1
gpgcheck=0
EOF
# Clean yum cache and recreate it
yum clean all
yum makecache
echo \"CentOS 7 软件源已成功切换\"
yum install -y git wget gcc gcc-c++ make which devtoolset-11
# === 1. 下载并静默安装 Miniconda ===
wget -q https://repo.anaconda.com/miniconda/Miniconda3-py39_23.3.1-0-Linux-x86_64.sh -O miniconda.sh
bash miniconda.sh -b -p "$HOME/miniconda" # -b 表示 batch静默安装
export PATH="$HOME/miniconda/bin:$PATH"
# === 2. 初始化 conda非交互模式 ===
source "$HOME/miniconda/etc/profile.d/conda.sh"
# === 3. 创建 Python 3.11 环境 ===
conda create -n py11 python=3.11 -y
# === 4. 激活环境 ===
conda activate py11
# === 5. 验证环境 ===
python --version
# === 6. 安装必要模块 ===
conda install -y pip setuptools jinja2 wheel
mkdir -p /root/install
cd 3rdpart/openssl
./config no-shared --prefix=/root/install
make -j $(nproc)
make install
cd ../../
wget https://github.com/Kitware/CMake/releases/download/v3.29.5/cmake-3.29.5.tar.gz
tar -xf cmake-3.29.5.tar.gz
cd cmake-3.29.5
OPENSSL_ROOT_DIR=/root/install ./configure
make -j $(nproc)
make install
cd ..
cd 3rdpart/usrsctp
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_POSITION_INDEPENDENT_CODE=ON ..
make -j $(nproc)
make install
cd ../../../
cd 3rdpart/libsrtp && ./configure --enable-openssl --with-openssl-dir=/root/install && make -j $(nproc) && make install
cd ../../
source /opt/rh/devtoolset-11/enable
gcc --version
mkdir -p linux_build && cd linux_build && cmake .. -DENABLE_PYTHON=ON -DPYTHON_EXECUTABLE=$(which python) -DOPENSSL_ROOT_DIR=/root/install -DCMAKE_BUILD_TYPE=Release && make -j $(nproc)
"
- name: 设置环境变量
run: |
echo "BRANCH=$(echo ${GITHUB_REF#refs/heads/} | tr -s "/\?%*:|\"<>" "_")" >> $GITHUB_ENV
echo "BRANCH2=$(echo ${GITHUB_REF#refs/heads/} )" >> $GITHUB_ENV
echo "DATE=$(date +%Y-%m-%d)" >> $GITHUB_ENV
- name: 打包二进制
id: upload
uses: actions/upload-artifact@v4
with:
name: ${{ github.workflow }}_${{ env.BRANCH }}_${{ env.DATE }}
path: release/*
if-no-files-found: error
retention-days: 90
- name: issue评论
if: github.event_name != 'pull_request' && github.ref != 'refs/heads/feature/test'
uses: actions/github-script@v7
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
github.rest.issues.createComment({
issue_number: ${{vars.VERSION_ISSUE_NO}},
owner: context.repo.owner,
repo: context.repo.repo,
body: '- 下载地址: [${{ github.workflow }}_${{ env.BRANCH }}_${{ env.DATE }}](${{ steps.upload.outputs.artifact-url }})\n'
+ '- 分支: ${{ env.BRANCH2 }}\n'
+ '- git hash: ${{ github.sha }} \n'
+ '- 编译日期: ${{ env.DATE }}\n'
+ '- 编译记录: [${{ github.run_id }}](https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }})\n'
+ '- 打包ci名: ${{ github.workflow }}\n'
+ '- 开启特性: openssl/webrtc/datachannel\n'
+ '- 说明: 本二进制在centos7(x64)上编译, 请确保您的机器系统不低于此版本; 本程序依赖python3.11, 运行前请用miniconda安装python3.11\n'
})


@ -18,10 +18,15 @@ jobs:
with:
vcpkgDirectory: '${{github.workspace}}/vcpkg'
vcpkgTriplet: arm64-osx
- # 2024.06.01
- vcpkgGitCommitId: '47364fbc300756f64f7876b549d9422d5f3ec0d3'
+ # 2025.07.11
+ vcpkgGitCommitId: 'efcfaaf60d7ec57a159fc3110403d939bfb69729'
vcpkgArguments: 'openssl libsrtp[openssl] usrsctp'
- name: 安装指定 CMake
uses: jwlawson/actions-setup-cmake@v2
with:
cmake-version: '3.30.5'
- name: 编译
uses: lukka/run-cmake@v3
with:

.github/workflows/macos_py.yml (new file)

@ -0,0 +1,80 @@
name: macOS_Python
on: [push, pull_request]
jobs:
build:
runs-on: macOS-latest
steps:
- uses: actions/checkout@v1
- name: 下载submodule源码
run: mv -f .gitmodules_github .gitmodules && git submodule sync && git submodule update --init
- name: 配置 vcpkg
uses: lukka/run-vcpkg@v7
with:
vcpkgDirectory: '${{github.workspace}}/vcpkg'
vcpkgTriplet: arm64-osx
# 2025.07.11
vcpkgGitCommitId: 'efcfaaf60d7ec57a159fc3110403d939bfb69729'
vcpkgArguments: 'openssl libsrtp[openssl] usrsctp'
- name: 安装指定 CMake
uses: jwlawson/actions-setup-cmake@v2
with:
cmake-version: '3.30.5'
- name: 检查并设置 Python 3
run: |
PYTHON_ROOT=$(python3 -c "import sys; print(sys.prefix)")
echo "PYTHON_ROOT=$PYTHON_ROOT" >> $GITHUB_ENV
PYTHON_EXECUTABLE=$(which python3)
echo "PYTHON_EXECUTABLE=$PYTHON_EXECUTABLE" >> $GITHUB_ENV
- name: 编译
uses: lukka/run-cmake@v3
with:
useVcpkgToolchainFile: true
cmakeBuildType: Release
cmakeListsOrSettingsJson: CMakeListsTxtAdvanced
buildDirectory: '${{github.workspace}}/build'
buildWithCMakeArgs: '--config Release'
cmakeAppendedArgs: '-DPYTHON_EXECUTABLE=${{ env.PYTHON_EXECUTABLE }} -DENABLE_PYTHON=ON -DENABLE_API=OFF -DENABLE_TESTS=OFF -DCMAKE_BUILD_TYPE=Release'
- name: 设置环境变量
run: |
echo "BRANCH=$(echo ${GITHUB_REF#refs/heads/} | tr -s "/\?%*:|\"<>" "_")" >> $GITHUB_ENV
echo "BRANCH2=$(echo ${GITHUB_REF#refs/heads/} )" >> $GITHUB_ENV
echo "DATE=$(date +%Y-%m-%d)" >> $GITHUB_ENV
- name: 打包二进制
id: upload
uses: actions/upload-artifact@v4
with:
name: ${{ github.workflow }}_${{ env.BRANCH }}_${{ env.DATE }}
path: release/*
if-no-files-found: error
retention-days: 90
- name: issue评论
if: github.event_name != 'pull_request' && github.ref != 'refs/heads/feature/test'
uses: actions/github-script@v7
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
github.rest.issues.createComment({
issue_number: ${{vars.VERSION_ISSUE_NO}},
owner: context.repo.owner,
repo: context.repo.repo,
body: '- 下载地址: [${{ github.workflow }}_${{ env.BRANCH }}_${{ env.DATE }}](${{ steps.upload.outputs.artifact-url }})\n'
+ '- 分支: ${{ env.BRANCH2 }}\n'
+ '- git hash: ${{ github.sha }} \n'
+ '- 编译日期: ${{ env.DATE }}\n'
+ '- 编译记录: [${{ github.run_id }}](https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }})\n'
+ '- 打包ci名: ${{ github.workflow }}\n'
+ '- 开启特性: openssl/webrtc/datachannel\n'
+ '- 说明: 此二进制为arm64版本; 本程序依赖python3.14, 运行前请brew install python@3.14安装\n'
})


@ -4,7 +4,7 @@ on: [pull_request]
jobs:
check:
- runs-on: ubuntu-20.04
+ runs-on: ubuntu-24.04
steps:
- uses: actions/checkout@v2
with:


@ -4,7 +4,7 @@ on: [push, pull_request]
jobs:
build:
- runs-on: windows-2019
+ runs-on: windows-2022
steps:
- uses: actions/checkout@v1
@ -17,8 +17,8 @@ jobs:
with:
vcpkgDirectory: '${{github.workspace}}/vcpkg'
vcpkgTriplet: x64-windows-static
- # 2024.06.01
- vcpkgGitCommitId: '47364fbc300756f64f7876b549d9422d5f3ec0d3'
+ # 2025.07.11
+ vcpkgGitCommitId: 'efcfaaf60d7ec57a159fc3110403d939bfb69729'
vcpkgArguments: 'openssl libsrtp[openssl] usrsctp'
- name: 编译

.github/workflows/windows_py.yml (new file)

@ -0,0 +1,86 @@
name: Windows_Python
on: [push, pull_request]
jobs:
build:
runs-on: windows-2022
steps:
- uses: actions/checkout@v1
- name: 下载submodule源码
run: mv -Force .gitmodules_github .gitmodules && git submodule sync && git submodule update --init
- name: 配置 vcpkg
uses: lukka/run-vcpkg@v7
with:
vcpkgDirectory: '${{github.workspace}}/vcpkg'
vcpkgTriplet: x64-windows-static
# 2025.07.11
vcpkgGitCommitId: 'efcfaaf60d7ec57a159fc3110403d939bfb69729'
vcpkgArguments: 'openssl libsrtp[openssl] usrsctp'
- name: Setup Python 3.14
uses: actions/setup-python@v4
with:
python-version: 3.14
architecture: x64
- name: Set PYTHON_EXECUTABLE
shell: pwsh
run: |
$pythonExe = python -c "import sys; print(sys.executable)"
Add-Content -Path $Env:GITHUB_ENV -Value "PYTHON_EXECUTABLE=$pythonExe"
- name: Check PYTHON_EXECUTABLE
run: echo $Env:PYTHON_EXECUTABLE
shell: pwsh
- name: 编译
uses: lukka/run-cmake@v3
with:
useVcpkgToolchainFile: true
cmakeBuildType: Release
cmakeListsOrSettingsJson: CMakeListsTxtAdvanced
buildDirectory: '${{github.workspace}}/build'
buildWithCMakeArgs: '--config Release'
cmakeAppendedArgs: '-DPYTHON_EXECUTABLE=${{ env.PYTHON_EXECUTABLE }} -DENABLE_PYTHON=ON -DENABLE_API=OFF -DENABLE_TESTS=OFF -DCMAKE_BUILD_TYPE=Release'
- name: 设置环境变量
run: |
$dateString = Get-Date -Format "yyyy-MM-dd"
$branch = $env:GITHUB_REF -replace "refs/heads/", "" -replace "[\\/\\\?\%\*:\|\x22<>]", "_"
$branch2 = $env:GITHUB_REF -replace "refs/heads/", ""
echo "BRANCH=$branch" >> $env:GITHUB_ENV
echo "BRANCH2=$branch2" >> $env:GITHUB_ENV
echo "DATE=$dateString" >> $env:GITHUB_ENV
- name: 打包二进制
id: upload
uses: actions/upload-artifact@v4
with:
name: ${{ github.workflow }}_${{ env.BRANCH }}_${{ env.DATE }}
path: release/*
if-no-files-found: error
retention-days: 90
- name: issue评论
if: github.event_name != 'pull_request' && github.ref != 'refs/heads/feature/test'
uses: actions/github-script@v7
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
github.rest.issues.createComment({
issue_number: ${{vars.VERSION_ISSUE_NO}},
owner: context.repo.owner,
repo: context.repo.repo,
body: '- 下载地址: [${{ github.workflow }}_${{ env.BRANCH }}_${{ env.DATE }}](${{ steps.upload.outputs.artifact-url }})\n'
+ '- 分支: ${{ env.BRANCH2 }}\n'
+ '- git hash: ${{ github.sha }} \n'
+ '- 编译日期: ${{ env.DATE }}\n'
+ '- 编译记录: [${{ github.run_id }}](https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }})\n'
+ '- 打包ci名: ${{ github.workflow }}\n'
+ '- 开启特性: openssl/webrtc/datachannel\n'
+ '- 说明: 此二进制为x64版本;本程序依赖python3.14, 运行前请先安装python3.14\n'
})

.gitmodules

@ -10,3 +10,6 @@
[submodule "www/webassist"]
path = www/webassist
url = https://gitee.com/victor1002/zlm_webassist
[submodule "3rdpart/pybind11"]
path = 3rdpart/pybind11
url = https://gitee.com/mirrors/pybind11.git


@ -9,4 +9,7 @@
url = https://github.com/open-source-parsers/jsoncpp.git
[submodule "www/webassist"]
path = www/webassist
- url = https://github.com/1002victor/zlm_webassist
+ url = https://github.com/1002victor/zlm_webassist
[submodule "3rdpart/pybind11"]
path = 3rdpart/pybind11
url = https://github.com/pybind/pybind11.git


@ -116,113 +116,18 @@ endif()
##############################################################################
# toolkit
# TODO: building the toolkit inline here; using its own CMakeLists would be more convenient
include(CheckStructHasMember)
include(CheckSymbolExists)
# sendmmsg is a GNU extension; _GNU_SOURCE must be defined for the check to find it
list(APPEND CMAKE_REQUIRED_DEFINITIONS -D_GNU_SOURCE)
check_struct_has_member("struct mmsghdr" msg_hdr sys/socket.h HAVE_MMSG_HDR)
check_symbol_exists(sendmmsg sys/socket.h HAVE_SENDMMSG_API)
check_symbol_exists(recvmmsg sys/socket.h HAVE_RECVMMSG_API)
set(COMPILE_DEFINITIONS)
# Forward the ENABLE_OPENSSL / ENABLE_MYSQL definitions to ToolKit
list(FIND MK_COMPILE_DEFINITIONS ENABLE_OPENSSL ENABLE_OPENSSL_INDEX)
if(NOT ENABLE_OPENSSL_INDEX EQUAL -1)
list(APPEND COMPILE_DEFINITIONS ENABLE_OPENSSL)
endif()
list(FIND MK_COMPILE_DEFINITIONS ENABLE_MYSQL ENABLE_MYSQL_INDEX)
if(NOT ENABLE_MYSQL_INDEX EQUAL -1)
list(APPEND COMPILE_DEFINITIONS ENABLE_MYSQL)
endif()
if(HAVE_MMSG_HDR)
list(APPEND COMPILE_DEFINITIONS HAVE_MMSG_HDR)
endif()
if(HAVE_SENDMMSG_API)
list(APPEND COMPILE_DEFINITIONS HAVE_SENDMMSG_API)
endif()
if(HAVE_RECVMMSG_API)
list(APPEND COMPILE_DEFINITIONS HAVE_RECVMMSG_API)
endif()
# Check whether the parent CMake project set a socket buffer size; if set, honor it, otherwise default to 256K.
# A value of 0 means the buffer size is left unset and the kernel default is used (Linux only).
if(DEFINED SOCKET_DEFAULT_BUF_SIZE)
if (SOCKET_DEFAULT_BUF_SIZE EQUAL 0)
message(STATUS "Socket default buffer size is not set, use the kernel default value")
else()
message(STATUS "Socket default buffer size is set to ${SOCKET_DEFAULT_BUF_SIZE}")
endif ()
add_definitions(-DSOCKET_DEFAULT_BUF_SIZE=${SOCKET_DEFAULT_BUF_SIZE})
endif()
set(ToolKit_ROOT ${CMAKE_CURRENT_SOURCE_DIR}/ZLToolKit)
# Collect toolkit source files
file(GLOB ToolKit_SRC_LIST
${ToolKit_ROOT}/src/*/*.cpp
${ToolKit_ROOT}/src/*/*.h
${ToolKit_ROOT}/src/*/*.c)
if(IOS)
list(APPEND ToolKit_SRC_LIST
${ToolKit_ROOT}/src/Network/Socket_ios.mm)
endif()
###################################################################
# Use wepoll to emulate epoll on Windows via IOCP
if(ENABLE_WEPOLL)
if(WIN32)
message(STATUS "Enable wepoll")
# Add the wepoll API sources
list(APPEND ToolKit_SRC_LIST
${CMAKE_CURRENT_SOURCE_DIR}/wepoll/wepoll.c
${CMAKE_CURRENT_SOURCE_DIR}/wepoll/sys/epoll.cpp)
# Add the wepoll include directory
include_directories(${CMAKE_CURRENT_SOURCE_DIR}/wepoll)
# Enable the epoll code path
add_definitions(-DHAS_EPOLL)
endif()
endif()
###################################################################
# Exclude the win32-only sources on non-Windows platforms
if(NOT WIN32)
list(REMOVE_ITEM ToolKit_SRC_LIST ${ToolKit_ROOT}/win32/getopt.c)
else()
# Avoid conflicts between Windows.h and Winsock.h
list(APPEND COMPILE_DEFINITIONS
WIN32_LEAN_AND_MEAN MP4V2_NO_STDINT_DEFS
# Suppress CRT and Winsock deprecation warnings
_CRT_SECURE_NO_WARNINGS _WINSOCK_DEPRECATED_NO_WARNINGS)
endif()
# Build the toolkit as a static library
add_library(zltoolkit STATIC ${ToolKit_SRC_LIST})
add_library(ZLMediaKit::ToolKit ALIAS zltoolkit)
target_compile_definitions(zltoolkit
PUBLIC ${COMPILE_DEFINITIONS})
target_compile_options(zltoolkit
PRIVATE ${COMPILE_OPTIONS_DEFAULT})
target_include_directories(zltoolkit
PRIVATE
"$<BUILD_INTERFACE:${ToolKit_ROOT}/src>"
PUBLIC
"$<BUILD_INTERFACE:${ToolKit_ROOT}>/src")
add_subdirectory(ZLToolKit)
# Keep the ZLMediaKit::ToolKit alias for compatibility
add_library(ZLMediaKit::ToolKit ALIAS ZLToolKit)
# Register the toolkit in the cached link-libraries list
update_cached_list(MK_LINK_LIBRARIES ZLMediaKit::ToolKit)
if(USE_SOLUTION_FOLDERS AND (NOT GROUP_BY_EXPLORER))
# Group source files by directory in IDE project views
set_file_group(${ToolKit_ROOT}/src ${ToolKit_SRC_LIST})
endif()
##############################################################################
# Install toolkit headers when the C++ API is enabled
if(ENABLE_CXX_API)
# Install the ZLToolKit headers
install(DIRECTORY ${ToolKit_ROOT}/
DESTINATION ${INSTALL_PATH_INCLUDE}/ZLToolKit
REGEX "(.*[.](md|cpp)|win32)$" EXCLUDE)
install(TARGETS zltoolkit
DESTINATION ${INSTALL_PATH_LIB})
endif()
if (ENABLE_PYTHON)
# ============ pybind11 lib ============
add_subdirectory(pybind11)
update_cached_list(MK_LINK_LIBRARIES pybind11::embed)
include_directories(${CMAKE_CURRENT_SOURCE_DIR}/pybind11/include)
update_cached_list(MK_COMPILE_DEFINITIONS ENABLE_PYTHON)
endif ()

@ -1 +1 @@
- Subproject commit 04212017c0dc764f99f1db46240d59dcdf154700
+ Subproject commit 7302286cf4be39d416b023fec3fd4ca9c54af762

@ -1 +1 @@
- Subproject commit 69098a18b9af0c47549d9a271c054d13ca92b006
+ Subproject commit ca98c98457b1163cca1f7d8db62827c115fec6d1

@ -1 +1 @@
- Subproject commit 0658496d5fc7d238f41e10ea4d0a10113a8eed84
+ Subproject commit 21c4451ff2e4c4bb1c817e606c8b4e5deac1e719

3rdpart/pybind11 (new submodule)

@ -0,0 +1 @@
Subproject commit ed5057ded698e305210269dafa57574ecf964483


@ -1,28 +0,0 @@
wepoll - epoll for Windows
https://github.com/piscisaureus/wepoll
Copyright 2012-2020, Bert Belder <bertbelder@gmail.com>
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@ -1,202 +0,0 @@
# wepoll - epoll for windows
[![][ci status badge]][ci status link]
This library implements the [epoll][man epoll] API for Windows
applications. It is fast and scalable, and it closely resembles the API
and behavior of Linux's epoll.
## Rationale
Unlike Linux, OS X, and many other operating systems, Windows doesn't
have a good API for receiving socket state notifications. It only
supports the `select` and `WSAPoll` APIs, but they
[don't scale][select scale] and suffer from
[other issues][wsapoll broken].
Using I/O completion ports isn't always practical when software is
designed to be cross-platform. Wepoll offers an alternative that is
much closer to a drop-in replacement for software that was designed
to run on Linux.
## Features
* Can poll 100000s of sockets efficiently.
* Fully thread-safe.
* Multiple threads can poll the same epoll port.
* Sockets can be added to multiple epoll sets.
* All epoll events (`EPOLLIN`, `EPOLLOUT`, `EPOLLPRI`, `EPOLLRDHUP`)
are supported.
* Level-triggered and one-shot (`EPOLLONESHOT`) modes are supported.
* Trivial to embed: you need [only two files][dist].
## Limitations
* Only works with sockets.
* Edge-triggered (`EPOLLET`) mode isn't supported.
## How to use
The library is [distributed][dist] as a single source file
([wepoll.c][wepoll.c]) and a single header file ([wepoll.h][wepoll.h]).<br>
Compile the .c file as part of your project, and include the header wherever
needed.
## Compatibility
* Requires Windows Vista or higher.
* Can be compiled with recent versions of MSVC, Clang, and GCC.
## API
### General remarks
* The epoll port is a `HANDLE`, not a file descriptor.
* All functions set both `errno` and `GetLastError()` on failure.
* For more extensive documentation, see the [epoll(7) man page][man epoll],
and the per-function man pages that are linked below.
### epoll_create/epoll_create1
```c
HANDLE epoll_create(int size);
HANDLE epoll_create1(int flags);
```
* Create a new epoll instance (port).
* `size` is ignored but must be greater than zero.
* `flags` must be zero as there are no supported flags.
* Returns `NULL` on failure.
* [Linux man page][man epoll_create]
### epoll_close
```c
int epoll_close(HANDLE ephnd);
```
* Close an epoll port.
* Do not attempt to close the epoll port with `close()`,
`CloseHandle()` or `closesocket()`.
### epoll_ctl
```c
int epoll_ctl(HANDLE ephnd,
int op,
SOCKET sock,
struct epoll_event* event);
```
* Control which socket events are monitored by an epoll port.
* `ephnd` must be a HANDLE created by
[`epoll_create()`](#epoll_createepoll_create1) or
[`epoll_create1()`](#epoll_createepoll_create1).
* `op` must be one of `EPOLL_CTL_ADD`, `EPOLL_CTL_MOD`, `EPOLL_CTL_DEL`.
* `sock` must be a valid socket created by [`socket()`][msdn socket],
[`WSASocket()`][msdn wsasocket], or [`accept()`][msdn accept].
* `event` should be a pointer to a [`struct epoll_event`](#struct-epoll_event).<br>
If `op` is `EPOLL_CTL_DEL` then the `event` parameter is ignored, and it
may be `NULL`.
* Returns 0 on success, -1 on failure.
* It is recommended to always explicitly remove a socket from its epoll
set using `EPOLL_CTL_DEL` *before* closing it.<br>
As on Linux, closed sockets are automatically removed from the epoll set, but
wepoll may not be able to detect that a socket was closed until the next call
to [`epoll_wait()`](#epoll_wait).
* [Linux man page][man epoll_ctl]
### epoll_wait
```c
int epoll_wait(HANDLE ephnd,
struct epoll_event* events,
int maxevents,
int timeout);
```
* Receive socket events from an epoll port.
* `events` should point to a caller-allocated array of
[`epoll_event`](#struct-epoll_event) structs, which will receive the
reported events.
* `maxevents` is the maximum number of events that will be written to the
`events` array, and must be greater than zero.
* `timeout` specifies whether to block when no events are immediately available.
- `<0` block indefinitely
- `0` report any events that are already waiting, but don't block
- `≥1` block for at most N milliseconds
* Return value:
- `-1` an error occurred
- `0` timed out without any events to report
- `≥1` the number of events stored in the `events` buffer
* [Linux man page][man epoll_wait]
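Since wepoll mirrors the Linux epoll API, the usual create/ctl/wait sequence is the same on both platforms. The sketch below uses Linux `<sys/epoll.h>` and a pipe so it runs anywhere; with wepoll on Windows the port would be a `HANDLE`, the header `wepoll.h`, and the monitored object a `SOCKET` (wepoll only supports sockets).

```c
#include <sys/epoll.h>
#include <unistd.h>

/* Wait until fd becomes readable, or until timeout_ms expires.
 * Returns 1 if readable, 0 on timeout, -1 on error. */
int wait_readable(int fd, int timeout_ms) {
    int epfd = epoll_create1(0);
    if (epfd < 0)
        return -1;

    struct epoll_event ev = {0};
    ev.events = EPOLLIN;      /* monitor for incoming data */
    ev.data.fd = fd;          /* user data echoed back by epoll_wait() */
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev) < 0) {
        close(epfd);
        return -1;
    }

    struct epoll_event out;
    int n = epoll_wait(epfd, &out, 1, timeout_ms);

    /* Explicitly deregister before closing, as recommended above. */
    epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
    close(epfd);
    return n;
}
```

Note that on Windows the port must be closed with `epoll_close()` instead of `close()`.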
### struct epoll_event
```c
typedef union epoll_data {
void* ptr;
int fd;
uint32_t u32;
uint64_t u64;
SOCKET sock; /* Windows specific */
HANDLE hnd; /* Windows specific */
} epoll_data_t;
```
```c
struct epoll_event {
uint32_t events; /* Epoll events and flags */
epoll_data_t data; /* User data variable */
};
```
* The `events` field is a bit mask containing the events being
monitored/reported, and optional flags.<br>
Flags are accepted by [`epoll_ctl()`](#epoll_ctl), but they are not reported
back by [`epoll_wait()`](#epoll_wait).
* The `data` field can be used to associate application-specific information
with a socket; its value will be returned unmodified by
[`epoll_wait()`](#epoll_wait).
* [Linux man page][man epoll_ctl]
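A common pattern is to store a pointer to per-connection state in `data.ptr`; because `epoll_wait()` returns the value unmodified, no lookup table is needed. A minimal sketch (Linux headers; the `conn` struct is a hypothetical application type):

```c
#include <sys/epoll.h>
#include <stddef.h>

struct conn {
    int fd;
    int id;   /* any application-specific state */
};

/* Register c->fd for reading and attach the conn itself as user data. */
int register_conn(int epfd, struct conn *c) {
    struct epoll_event ev = {0};
    ev.events = EPOLLIN;
    ev.data.ptr = c;    /* returned unmodified by epoll_wait() */
    return epoll_ctl(epfd, EPOLL_CTL_ADD, c->fd, &ev);
}

/* Wait for one event and return the conn it belongs to (NULL on timeout). */
struct conn *next_ready(int epfd, int timeout_ms) {
    struct epoll_event out;
    if (epoll_wait(epfd, &out, 1, timeout_ms) != 1)
        return NULL;
    return (struct conn *)out.data.ptr;
}
```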
| Event | Description |
|---------------|----------------------------------------------------------------------|
| `EPOLLIN` | incoming data available, or incoming connection ready to be accepted |
| `EPOLLOUT` | ready to send data, or outgoing connection successfully established |
| `EPOLLRDHUP` | remote peer initiated graceful socket shutdown |
| `EPOLLPRI` | out-of-band data available for reading |
| `EPOLLERR` | socket error<sup>1</sup> |
| `EPOLLHUP` | socket hang-up<sup>1</sup> |
| `EPOLLRDNORM` | same as `EPOLLIN` |
| `EPOLLRDBAND` | same as `EPOLLPRI` |
| `EPOLLWRNORM` | same as `EPOLLOUT` |
| `EPOLLWRBAND` | same as `EPOLLOUT` |
| `EPOLLMSG` | never reported |
| Flag | Description |
|------------------|---------------------------|
| `EPOLLONESHOT` | report event(s) only once |
| `EPOLLET` | not supported by wepoll |
| `EPOLLEXCLUSIVE` | not supported by wepoll |
| `EPOLLWAKEUP` | not supported by wepoll |
<sup>1</sup>: the `EPOLLERR` and `EPOLLHUP` events may always be reported by
[`epoll_wait()`](#epoll_wait), regardless of the event mask that was passed to
[`epoll_ctl()`](#epoll_ctl).
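`EPOLLONESHOT` (which wepoll does support, unlike `EPOLLET`) disarms the socket after its first reported event; further waits time out until the entry is re-armed with `EPOLL_CTL_MOD`. A sketch using Linux headers, which would translate to wepoll by swapping in `wepoll.h`, a `HANDLE` port, and a `SOCKET`:

```c
#include <sys/epoll.h>
#include <unistd.h>

/* Demonstrate EPOLLONESHOT: after one event is reported the fd is
 * disarmed, and epoll_wait() times out until EPOLL_CTL_MOD re-arms it.
 * Returns the total number of events seen over three waits. */
int oneshot_demo(int rfd) {
    int seen = 0;
    int epfd = epoll_create1(0);
    if (epfd < 0)
        return -1;

    struct epoll_event ev = {0};
    ev.events = EPOLLIN | EPOLLONESHOT;
    ev.data.fd = rfd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, rfd, &ev);

    struct epoll_event out;
    seen += epoll_wait(epfd, &out, 1, 100);  /* reported once */
    seen += epoll_wait(epfd, &out, 1, 100);  /* disarmed: times out, adds 0 */

    ev.events = EPOLLIN | EPOLLONESHOT;
    ev.data.fd = rfd;
    epoll_ctl(epfd, EPOLL_CTL_MOD, rfd, &ev); /* re-arm the entry */
    seen += epoll_wait(epfd, &out, 1, 100);   /* reported again */

    close(epfd);
    return seen;
}
```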
[ci status badge]: https://ci.appveyor.com/api/projects/status/github/piscisaureus/wepoll?branch=master&svg=true
[ci status link]: https://ci.appveyor.com/project/piscisaureus/wepoll/branch/master
[dist]: https://github.com/piscisaureus/wepoll/tree/dist
[man epoll]: http://man7.org/linux/man-pages/man7/epoll.7.html
[man epoll_create]: http://man7.org/linux/man-pages/man2/epoll_create.2.html
[man epoll_ctl]: http://man7.org/linux/man-pages/man2/epoll_ctl.2.html
[man epoll_wait]: http://man7.org/linux/man-pages/man2/epoll_wait.2.html
[msdn accept]: https://msdn.microsoft.com/en-us/library/windows/desktop/ms737526(v=vs.85).aspx
[msdn socket]: https://msdn.microsoft.com/en-us/library/windows/desktop/ms740506(v=vs.85).aspx
[msdn wsasocket]: https://msdn.microsoft.com/en-us/library/windows/desktop/ms742212(v=vs.85).aspx
[select scale]: https://daniel.haxx.se/docs/poll-vs-select.html
[wsapoll broken]: https://daniel.haxx.se/blog/2012/10/10/wsapoll-is-broken/
[wepoll.c]: https://github.com/piscisaureus/wepoll/blob/dist/wepoll.c
[wepoll.h]: https://github.com/piscisaureus/wepoll/blob/dist/wepoll.h


@ -1,14 +0,0 @@
/*
* Copyright (c) 2016 The ZLToolKit project authors. All Rights Reserved.
*
* This file is part of ZLToolKit(https://github.com/ZLMediaKit/ZLToolKit).
*
* Use of this source code is governed by MIT license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#include "epoll.h"
std::map<int, HANDLE> toolkit::s_wepollHandleMap;
int toolkit::s_handleIndex = 0;
std::mutex toolkit::s_handleMtx;


@ -1,59 +0,0 @@
/*
* Copyright (c) 2016 The ZLToolKit project authors. All Rights Reserved.
*
* This file is part of ZLToolKit(https://github.com/ZLMediaKit/ZLToolKit).
*
* Use of this source code is governed by MIT license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#ifndef ZLMEDIAKIT_EPOLL_H
#define ZLMEDIAKIT_EPOLL_H
#include "wepoll.h"
#include <map>
#include <mutex>
// Mask out EPOLLET (edge-triggered mode is not supported by wepoll)
#define EPOLLET 0
namespace toolkit {
// Map an integer index to the wepoll HANDLE
extern std::map<int, HANDLE> s_wepollHandleMap;
extern int s_handleIndex;
extern std::mutex s_handleMtx;
// Hide the parameter differences of epoll_create/epoll_ctl/epoll_wait
inline int epoll_create(int size) {
HANDLE handle = ::epoll_create(size);
if (!handle) {
return -1;
}
{
std::lock_guard<std::mutex> lck(s_handleMtx);
int idx = ++s_handleIndex;
s_wepollHandleMap[idx] = handle;
return idx;
}
}
inline int epoll_ctl(int ephnd, int op, SOCKET sock, struct epoll_event *ev) {
HANDLE handle;
{
std::lock_guard<std::mutex> lck(s_handleMtx);
handle = s_wepollHandleMap[ephnd];
}
return ::epoll_ctl(handle, op, sock, ev);
}
inline int epoll_wait(int ephnd, struct epoll_event *events, int maxevents, int timeout) {
HANDLE handle;
{
std::lock_guard<std::mutex> lck(s_handleMtx);
handle = s_wepollHandleMap[ephnd];
}
return ::epoll_wait(handle, events, maxevents, timeout);
}
} // namespace toolkit
#endif // ZLMEDIAKIT_EPOLL_H

File diff suppressed because it is too large


@ -1,107 +0,0 @@
/*
* wepoll - epoll for Windows
* https://github.com/piscisaureus/wepoll
*
* Copyright 2012-2020, Bert Belder <bertbelder@gmail.com>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are
* met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
*
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef WEPOLL_H_
#define WEPOLL_H_
#ifndef WEPOLL_EXPORT
#define WEPOLL_EXPORT
#endif
#include <stdint.h>
enum EPOLL_EVENTS {
EPOLLIN = (int)(1U << 0),
EPOLLPRI = (int)(1U << 1),
EPOLLOUT = (int)(1U << 2),
EPOLLERR = (int)(1U << 3),
EPOLLHUP = (int)(1U << 4),
EPOLLRDNORM = (int)(1U << 6),
EPOLLRDBAND = (int)(1U << 7),
EPOLLWRNORM = (int)(1U << 8),
EPOLLWRBAND = (int)(1U << 9),
EPOLLMSG = (int)(1U << 10), /* Never reported. */
EPOLLRDHUP = (int)(1U << 13),
EPOLLONESHOT = (int)(1U << 31)
};
#define EPOLLIN (1U << 0)
#define EPOLLPRI (1U << 1)
#define EPOLLOUT (1U << 2)
#define EPOLLERR (1U << 3)
#define EPOLLHUP (1U << 4)
#define EPOLLRDNORM (1U << 6)
#define EPOLLRDBAND (1U << 7)
#define EPOLLWRNORM (1U << 8)
#define EPOLLWRBAND (1U << 9)
#define EPOLLMSG (1U << 10)
#define EPOLLRDHUP (1U << 13)
#define EPOLLONESHOT (1U << 31)
#define EPOLL_CTL_ADD 1
#define EPOLL_CTL_MOD 2
#define EPOLL_CTL_DEL 3
typedef void *HANDLE;
typedef uintptr_t SOCKET;
typedef union epoll_data {
void *ptr;
int fd;
uint32_t u32;
uint64_t u64;
SOCKET sock; /* Windows specific */
HANDLE hnd; /* Windows specific */
} epoll_data_t;
struct epoll_event {
uint32_t events; /* Epoll events and flags */
epoll_data_t data; /* User data variable */
};
#ifdef __cplusplus
extern "C" {
#endif
WEPOLL_EXPORT HANDLE epoll_create(int size);
WEPOLL_EXPORT HANDLE epoll_create1(int flags);
WEPOLL_EXPORT int epoll_close(HANDLE ephnd);
WEPOLL_EXPORT int epoll_ctl(HANDLE ephnd, int op, SOCKET sock, struct epoll_event *event);
WEPOLL_EXPORT int epoll_wait(HANDLE ephnd, struct epoll_event *events, int maxevents, int timeout);
#ifdef __cplusplus
} /* extern "C" */
#endif
#endif /* WEPOLL_H_ */

AUTHORS

@ -107,4 +107,22 @@ WuPeng <wp@zafu.edu.cn>
[huangcaichun](https://github.com/huangcaichun)
[jamesZHANG500](https://github.com/jamesZHANG500)
[weidelong](https://github.com/wdl1697454803)
[小强先生](https://github.com/linshangqiang)
[小强先生](https://github.com/linshangqiang)
[李之阳](https://github.com/leo94666)
[sgzed](https://github.com/sgzed)
[gaoshan](https://github.com/foobra)
[zhang2349](https://github.com/zhang2349)
[benshi](https://github.com/BenLocal)
[autoantwort](https://github.com/autoantwort)
[u7ko4](https://github.com/u7ko4)
[WengQiang](https://github.com/Tsubaki-01)
[wEnchanters](https://github.com/wEnchanters)
[sbkyy](https://github.com/sbkyy)
[wuxingzhong](https://github.com/wuxingzhong)
[286897655](https://github.com/286897655)
[ss002012](https://github.com/ss002012)
[a839419160](https://github.com/a839419160)
[oldma3095](https://github.com/oldma3095)
[Dary](https://github.com/watersounds)
[N.z](https://github.com/neesonqk)
[yanggs](https://github.com/callinglove)


@ -1,6 +1,6 @@
# MIT License
#
# Copyright (c) 2016-2022 The ZLMediaKit project authors. All Rights Reserved.
# Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
@ -21,7 +21,7 @@
# SOFTWARE.
#
cmake_minimum_required(VERSION 3.1.3)
cmake_minimum_required(VERSION 3.1.3...3.26)
#
# Load custom modules
@ -32,6 +32,8 @@ project(ZLMediaKit LANGUAGES C CXX)
# Enable C++11
set(CMAKE_CXX_STANDARD 11)
# Enable -fPIC
set(CMAKE_POSITION_INDEPENDENT_CODE ON)
option(ENABLE_API "Enable C API SDK" ON)
option(ENABLE_API_STATIC_LIB "Enable mk_api static lib" OFF)
@ -42,6 +44,7 @@ option(ENABLE_FFMPEG "Enable FFmpeg" OFF)
option(ENABLE_HLS "Enable HLS" ON)
option(ENABLE_JEMALLOC_STATIC "Enable static linking to the jemalloc library" OFF)
option(ENABLE_JEMALLOC_DUMP "Enable jemalloc to dump malloc statistics" OFF)
option(ENABLE_TCMALLOC "Enable linking to the tcmalloc library" OFF)
option(ENABLE_MEM_DEBUG "Enable Memory Debug" OFF)
option(ENABLE_MP4 "Enable MP4" ON)
option(ENABLE_MSVC_MT "Enable MSVC Mt/Mtd lib" ON)
@ -56,10 +59,14 @@ option(ENABLE_TESTS "Enable Tests" ON)
option(ENABLE_SCTP "Enable SCTP" ON)
option(ENABLE_WEBRTC "Enable WebRTC" ON)
option(ENABLE_X264 "Enable x264" OFF)
option(ENABLE_WEPOLL "Enable wepoll" ON)
option(ENABLE_VIDEOSTACK "Enable video stack" OFF)
option(DISABLE_REPORT "Disable report to report.zlmediakit.com" OFF)
option(USE_SOLUTION_FOLDERS "Enable solution dir supported" ON)
option(ENABLE_OBJCOPY "Enable use objcopy to generate debug info file" ON)
#
option(BUILD_SHARED_LIBS "Build shared instead of static" OFF)
option(ENABLE_PYTHON "Enable python plugin" OFF)
##############################################################################
# Set the default buffer size of the socket to 256k. If set to 0, the default buffer size of the socket will not be set,
@ -198,7 +205,10 @@ if(UNIX)
if("${CMAKE_BUILD_TYPE}" STREQUAL "Debug")
set(COMPILE_OPTIONS_DEFAULT ${COMPILE_OPTIONS_DEFAULT} "-g3")
else()
set(COMPILE_OPTIONS_DEFAULT ${COMPILE_OPTIONS_DEFAULT} "-g0")
find_program(OBJCOPY_FOUND objcopy)
if (OBJCOPY_FOUND AND ENABLE_OBJCOPY)
set(COMPILE_OPTIONS_DEFAULT ${COMPILE_OPTIONS_DEFAULT} "-g3")
endif()
endif()
elseif(WIN32)
if (MSVC)
@ -208,8 +218,8 @@ elseif(WIN32)
# warning C4530: C++ exception handler used, but unwind semantics are not enabled.
"/EHsc")
# disable Windows logo
list(APPEND COMPILE_OPTIONS_DEFAULT "/nologo")
list(APPEND CMAKE_STATIC_LINKER_FLAGS "/nologo")
string(REPLACE "/nologo" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")
set(CMAKE_STATIC_LINKER_FLAGS "")
endif()
endif()
@ -248,8 +258,8 @@ endif()
# Multiple modules depend on ffmpeg related libraries, unified search
if(ENABLE_FFMPEG)
find_package(PkgConfig QUIET)
# ffmpeg/libutil
# find ffmpeg/libutil installed
# ffmpeg/libavutil
# find ffmpeg/libavutil installed
if(PKG_CONFIG_FOUND)
pkg_check_modules(AVUTIL QUIET IMPORTED_TARGET libavutil)
if(AVUTIL_FOUND)
@ -288,8 +298,19 @@ if(ENABLE_FFMPEG)
endif()
endif()
# ffmpeg/libutil
# find ffmpeg/libutil installed
# ffmpeg/libavfilter
# find ffmpeg/libavfilter installed
if(PKG_CONFIG_FOUND)
pkg_check_modules(AVFILTER QUIET IMPORTED_TARGET libavfilter)
if(AVFILTER_FOUND)
update_cached_list(MK_LINK_LIBRARIES PkgConfig::AVFILTER)
message(STATUS "found library: ${AVFILTER_LIBRARIES}")
endif()
endif()
# ffmpeg/libavutil
# find ffmpeg/libavutil installed
if(NOT AVUTIL_FOUND)
find_package(AVUTIL QUIET)
if(AVUTIL_FOUND)
@ -332,7 +353,16 @@ if(ENABLE_FFMPEG)
endif()
endif()
if(AVUTIL_FOUND AND AVCODEC_FOUND AND SWSCALE_FOUND AND SWRESAMPLE_FOUND)
if(NOT AVFILTER_FOUND)
find_package(AVFILTER QUIET)
if(AVFILTER_FOUND)
include_directories(SYSTEM ${AVFILTER_INCLUDE_DIR})
update_cached_list(MK_LINK_LIBRARIES ${AVFILTER_LIBRARIES})
message(STATUS "found library: ${AVFILTER_LIBRARIES}")
endif()
endif()
if(AVUTIL_FOUND AND AVCODEC_FOUND AND SWSCALE_FOUND AND SWRESAMPLE_FOUND AND AVFILTER_FOUND)
update_cached_list(MK_COMPILE_DEFINITIONS ENABLE_FFMPEG)
update_cached_list(MK_LINK_LIBRARIES ${CMAKE_DL_LIBS})
else()
@ -393,6 +423,19 @@ if(JEMALLOC_FOUND)
endif ()
endif()
# tcmalloc
# find tcmalloc installed
if(ENABLE_TCMALLOC)
find_package(TCMALLOC QUIET)
if(TCMALLOC_FOUND)
message(STATUS "Link with tcmalloc library: ${TCMALLOC_LIBRARIES}")
update_cached_list(MK_LINK_LIBRARIES ${TCMALLOC_LIBRARIES})
else()
set(ENABLE_TCMALLOC OFF)
message(WARNING "tcmalloc library not found")
endif()
endif()
# openssl
# find openssl installed
find_package(OpenSSL QUIET)
@ -467,6 +510,17 @@ if(ENABLE_SRT)
update_cached_list(MK_COMPILE_DEFINITIONS ENABLE_SRT)
endif()
if(ENABLE_WEBRTC)
# srtp
find_package(SRTP QUIET)
if(SRTP_FOUND AND ENABLE_OPENSSL)
message(STATUS "found library: ${SRTP_LIBRARIES}, ENABLE_WEBRTC defined")
update_cached_list(MK_COMPILE_DEFINITIONS ENABLE_WEBRTC)
else()
set(ENABLE_WEBRTC OFF)
message(WARNING "srtp not found, failed to enable WebRTC features")
endif()
endif()
# ----------------------------------------------------------------------------
# Solution folders:
# ----------------------------------------------------------------------------
@ -551,6 +605,9 @@ file(COPY "${CMAKE_CURRENT_SOURCE_DIR}/www" DESTINATION ${EXECUTABLE_OUTPUT_PATH
file(COPY "${CMAKE_CURRENT_SOURCE_DIR}/conf/config.ini" DESTINATION ${EXECUTABLE_OUTPUT_PATH})
file(COPY "${CMAKE_CURRENT_SOURCE_DIR}/default.pem" DESTINATION ${EXECUTABLE_OUTPUT_PATH})
if (ENABLE_FFMPEG)
file(COPY "${CMAKE_CURRENT_SOURCE_DIR}/DejaVuSans.ttf" DESTINATION ${EXECUTABLE_OUTPUT_PATH})
endif ()
# VideoStack
# Copy the default background image used by VideoStack when there is no video stream
if (ENABLE_VIDEOSTACK AND ENABLE_FFMPEG AND ENABLE_X264)

DejaVuSans.ttf Normal file

Binary file not shown.


@ -36,7 +36,7 @@
- [Who is using ZLMediaKit?](https://github.com/ZLMediaKit/ZLMediaKit/issues/511)
- Full IPv6 network support
- Multi-track mode supported (multiple video/audio tracks in one stream)
- Full protocol support for H264/H265/AAC/G711/OPUS/MP3; partial support for VP8/VP9/AV1/JPEG/MP3/H266/ADPCM/SVAC/G722/G723/G729
- Full protocol support for H264/H265/AAC/G711/OPUS/MP3/VP8/VP9/AV1; partial support for JPEG/H266/ADPCM/SVAC/G722/G723/G729/MP2
## Project Positioning
@ -47,7 +47,7 @@
## Feature List
### Feature Overview
<img width="800" alt="Feature Overview" src="https://github.com/ZLMediaKit/ZLMediaKit/assets/11495632/481ea769-5b27-495e-bf7d-31191e6af9d2">
<img width="749" alt="Feature Overview" src="https://github.com/user-attachments/assets/7072fe1c-e2b3-47e9-bd50-e5266523edf1">
- RTSP[S]
- RTSP[S] server, supports RTMP/MP4/HLS to RTSP[S] conversion, supports devices such as Amazon Echo Show
@ -57,7 +57,7 @@
- Server/client fully supports Basic/Digest login authentication, with a fully asynchronous, configurable authentication interface
- Supports H265 encoding
- Server supports RTSP push streaming (including `rtp over udp` and `rtp over tcp`)
- Supports H264/H265/AAC/G711/OPUS/MJPEG/MP3 encoding; other codecs can be forwarded but not converted to other protocols
- Supports H264/H265/AAC/G711/OPUS/MJPEG/MP3/VP8/VP9/AV1/MP2 encoding; other codecs can be forwarded but not converted to other protocols
- RTMP[S]
- RTMP[S] playback server, supports RTSP/MP4/HLS to RTMP conversion
@ -70,25 +70,25 @@
- Supports H264/H265/AAC/G711/OPUS/MP3 encoding; other codecs can be forwarded but not converted to other protocols
- Supports [RTMP-H265](https://github.com/ksvc/FFmpeg/wiki)
- Supports [RTMP-OPUS](https://github.com/ZLMediaKit/ZLMediaKit/wiki/RTMP%E5%AF%B9H265%E5%92%8COPUS%E7%9A%84%E6%94%AF%E6%8C%81)
- Supports [enhanced-rtmp(H265)](https://github.com/veovera/enhanced-rtmp)
- Supports [enhanced-rtmp(H265/VP8/VP9/AV1/OPUS)](https://github.com/veovera/enhanced-rtmp)
- HLS
- Supports HLS file (mpegts/fmp4) generation, with a built-in HTTP file server
- Cookie-based tracking can model HLS playback as a long-lived connection, enabling on-demand HLS streaming, playback statistics, and similar features
- Supports an HLS player, pulling HLS streams and converting them to rtsp/rtmp/mp4
- Supports H264/H265/AAC/G711/OPUS/MP3 encoding
- Supports H264/H265/AAC/G711/OPUS/MP3/VP8/VP9/AV1/MP2 encoding
- Supports multi-track mode
- TS
- Supports http[s]-ts live streaming
- Supports ws[s]-ts live streaming
- Supports H264/H265/AAC/G711/OPUS/MP3 encoding
- Supports H264/H265/AAC/G711/OPUS/MP3/VP8/VP9/AV1/MP2 encoding
- Supports multi-track mode
- fMP4
- Supports http[s]-fmp4 live streaming
- Supports ws[s]-fmp4 live streaming
- Supports H264/H265/AAC/G711/OPUS/MJPEG/MP3 encoding
- Supports H264/H265/AAC/G711/OPUS/MJPEG/MP3/VP8/VP9/AV1/MP2 encoding
- Supports multi-track mode
- HTTP[S] and WebSocket
@ -103,7 +103,7 @@
- GB28181 and RTP push streaming
- Supports UDP/TCP RTP (PS/TS/ES) push streaming server, which can convert to RTSP/RTMP/HLS and other protocols
- Supports converting RTSP/RTMP/HLS and other protocols to RTP push streaming clients, with TCP/UDP modes, corresponding RESTful APIs, and both active and passive modes
- Supports H264/H265/AAC/G711/OPUS/MP3 encoding
- Supports H264/H265/AAC/G711/OPUS/MP3/VP8/VP9/AV1 encoding
- Supports es/ps/ts/ehome RTP push streaming
- Supports es/ps RTP relay
- Supports GB28181 active pull mode
@ -113,7 +113,7 @@
- MP4 VOD and recording
- Supports recording to FLV/HLS/MP4
- MP4 file VOD over RTSP/RTMP/HTTP-FLV/WS-FLV, with seek support
- Supports H264/H265/AAC/G711/OPUS/MP3 encoding
- Supports H264/H265/AAC/G711/OPUS/MP3/VP8/VP9/AV1 encoding
- Supports multi-track mode
- WebRTC
@ -131,11 +131,13 @@
- Supports webrtc over tcp mode
- Excellent NACK and jitter buffer algorithms with outstanding packet loss resistance
- Supports whip/whep protocols
- Supported encodings are the same as for the rtsp protocol
- [Supports ice-full, works as a WebRTC client for pulling streams, pushing streams, and P2P mode](./webrtc/USAGE.md)
- [SRT support](./srt/srt.md)
- Others
- Supports rich RESTful APIs and webhook events
- Supports simple telnet debugging
- Supports hot reloading of the configuration file
- Supports hot reloading of the configuration file and SSL certificates
- Supports events such as traffic statistics and push/pull stream authentication
- Supports virtual hosts to isolate different domains
- Supports on-demand pulling; streams with no viewers are automatically shut down
@ -146,7 +148,48 @@
- Supports on-demand demuxing and protocol conversion; conversion starts only when someone is watching, reducing CPU usage
- Supports cluster deployment in origin-tracing mode; tracing supports rtsp/rtmp/hls/http-ts, edge nodes support hls, and multiple origin servers are supported (traced in round-robin fashion)
- After an abnormal disconnect, rtsp/rtmp/webrtc push streams can reconnect within the timeout window without the player noticing
## Closed-Source Professional Edition
On top of the latest open-source code, the following [closed-source professional editions](https://github.com/xia-chu/zlmediakit-pro) are available:
- Audio/video transcoding
- 1. Arbitrary transcoding between audio and video codecs (including h265/h264/opus/g711/aac/g722/g722.1/mp3/svac/vp8/vp9/av1, etc.).
- 2. Configuration-file-based transcoding, with configurable bitrate, codec type, and other parameters.
- 3. Dynamic transcoding management via HTTP API, with configurable bitrate, resolution scaling, codec type, filters, and other parameters.
- 4. Adaptive hardware/software transcoding.
- 5. On-demand transcoding (transcode only when someone is watching); transparent transcoding mode requires no awareness or code changes on the business side.
- 6. Under high load, transcoding can proactively reduce the frame rate without visual artifacts.
- 7. Filter support, including OSD text overlays and logo watermarks.
- 8. Full-GPU hardware encoding/decoding and filtering, avoiding frequent copies between VRAM and RAM.
- JT1078 (Chinese transportation standard) edition
- 1. Receives jt1078 push streams and converts them to other protocols, adapting to both shared audio/video seq and separate seq modes.
- 2. Supports jt1078 cascading and jt1078 intercom.
- 3. jt1078 APIs, ports, and usage are consistent and compatible with GB28181.
- 4. Supports h264/h265/g711/aac/mp3/g721/g722/g723/g729/g726/adpcm and other codecs.
- IPTV edition
- 1. Pulls rtsp-ts/hls/http-ts/RTP multicast/UDP multicast streams and converts protocols; TS passthrough mode converts to rtsp-ts/hls/http-ts/srt without demuxing.
- 2. Receives rtsp-ts/srt/rtp-ts push streams; TS passthrough mode converts to rtsp-ts/hls/http-ts/srt without demuxing.
- 3. The features above also support demuxing TS into ES streams and converting to rtsp/rtmp/flv/http-ts/hls/hls-fmp4/mp4/fmp4/webrtc and other protocols.
- S3 cloud storage
- Supports s3/minio cloud storage with direct in-memory stream writes, removing the recording-file I/O bottleneck.
- Supports downloading and VOD playback of cloud storage files directly through ZLMediaKit's HTTP service.
- Supports browsing cloud storage files and generating HTTP directory pages.
- WebRTC cluster
- Supports RTC traffic proxying, solving the problem in k8s deployments of ZLMediaKit's webrtc service where HTTP signaling and RTC traffic cannot reach the same pod instance.
- AI inference
- Supports a yolo inference plugin for AI detection of people, vehicles, and other targets, with object tracking, polygonal zone monitoring, OCR, and rapid hybrid development via c++/python plugins.
- Supports tensorRT full-CUDA accelerated inference.
- Supports onnxruntime (cpu/gpu) inference.
- Supports ascend cann accelerated inference.
- The python plugin can call c++ interfaces to operate on media streams and draw on the current video frame.
- WebRTC MCU voice chat room
- Supports MCU multi-user voice chat rooms, with background noise suppression before mixing and muted users excluded from mixing, solving the unsuitability of SFU for very large voice chat rooms.
- Supports conferences with 100 speakers and thousands of listeners.
## Compilation and Testing
**Before compiling, be sure to carefully follow the wiki: [Quick Start](https://github.com/ZLMediaKit/ZLMediaKit/wiki/%E5%BF%AB%E9%80%9F%E5%BC%80%E5%A7%8B)!!!**
@ -191,17 +234,22 @@ bash build_docker_images.sh
- [jessibuca](https://github.com/langhuihui/jessibuca) A wasm-based player that supports H265
- [wsPlayer](https://github.com/v354412101/wsPlayer) An MSE-based websocket-fmp4 player
- [BXC_gb28181Player](https://github.com/any12345com/BXC_gb28181Player) A GB28181 video stream player developed in C++
- [RTCPlayer](https://github.com/leo94666/RTCPlayer) An Android RTC player
- [WebRTC-Vue-Demo](https://github.com/Heartbreaker16/ZLMediaKit-WebRTC-Vue-Demo) A Vue version of the zlmediakit webrtc player
- Web management sites
- [zlm_webassist](https://github.com/1002victor/zlm_webassist) This project's companion web management project with separated front and back ends
- [AKStreamNVR](https://github.com/langmansh/AKStreamNVR) A web project with separated front and back ends, supports webrtc playback
- [StreamUI](https://github.com/lmk123568/StreamUI) A minimal, lightweight video streaming management platform
- [PyMKUI](https://github.com/ZLMediaKit/pymkui) The management site officially released by ZLMediaKit
- SDK
- [spring-boot-starter](https://github.com/lunasaw/zlm-spring-boot-starter) A starter for this project's hook and REST APIs
- [java sdk](https://github.com/lidaofu-hub/j_zlm_sdk) A complete Java wrapper for this project's C SDK
- [c# sdk](https://github.com/malegend/ZLMediaKit.Autogen) A complete C# wrapper for this project's C SDK
- [metaRTC](https://github.com/metartc/metaRTC) A fully homegrown pure-C webrtc SDK
- Monitoring and operations
- [ZLMediaKit_exporter](https://github.com/guohuachan/ZLMediaKit_exporter) A Prometheus exporter that collects ZLMediaKit core metrics; combined with Grafana, it quickly builds a real-time monitoring dashboard
- Other projects (no longer updated)
- [A GB28181 platform implemented in NodeJS](https://gitee.com/hfwudao/GB28181_Node_Http)
@ -382,6 +430,9 @@ bash build_docker_images.sh
[ss002012](https://github.com/ss002012)
[a839419160](https://github.com/a839419160)
[oldma3095](https://github.com/oldma3095)
[Dary](https://github.com/watersounds)
[N.z](https://github.com/neesonqk)
[yanggs](https://github.com/callinglove)
Thanks also to JetBrains for supporting this open source project; ZLMediaKit is developed and debugged with CLion:


@ -45,7 +45,7 @@
## Feature List
### Overview of Features
<img width="800" alt="Overview of Features" src="https://github.com/ZLMediaKit/ZLMediaKit/assets/11495632/481ea769-5b27-495e-bf7d-31191e6af9d2">
<img width="749" alt="Overview of Features" src="https://github.com/user-attachments/assets/7072fe1c-e2b3-47e9-bd50-e5266523edf1">
- RTSP[S]
- RTSP[S] server, supports RTMP/MP4/HLS to RTSP[S] conversion, supports devices such as Amazon Echo Show
@ -124,6 +124,8 @@
- Supports WebRTC over TCP mode
- Excellent NACK and jitter buffer algorithms with outstanding packet loss resistance
- Supports WHIP/WHEP protocols
- [Supports ice-full, works as a WebRTC client for pulling streams, pushing streams, and P2P mode](./webrtc/USAGE.md)
- [SRT support](./srt/srt.md)
- Others
- Supports rich RESTful APIs and webhook events
@ -139,7 +141,36 @@
- Supports on-demand demultiplexing and protocol conversion, reducing CPU usage by only enabling it when someone is watching
- Supports cluster deployment in traceable mode, with RTSP/RTMP/HLS/HTTP-TS support for traceable mode and HLS support for edge stations and multiple sources for source stations (using round-robin tracing)
- Can reconnect to streaming after abnormal disconnection in RTSP/RTMP/WebRTC pushing within a timeout period, with no impact on the player.
## Closed-Source Professional Edition
Based on the latest open-source code, the following closed-source professional editions have been added. For details, please contact: 1213642868@qq.com
- Transcoding Version
- Supports arbitrary audio and video transcoding, including H.265/H.264/Opus/G.711/AAC/G.722/G.722.1/MP3/SVAC, etc.
- Configuration file-based transcoding, allowing customization of bitrate, codec type, and other parameters.
- Dynamic transcoding management via HTTP API, supporting settings for bitrate, resolution scaling, codec type, filters, etc.
- Supports adaptive hardware and software transcoding.
- Supports on-demand transcoding, only transcoding when a viewer is present. It also supports transparent transcoding mode, requiring no modifications to business logic.
- Supports automatic frame rate reduction under high load conditions to prevent video artifacts.
- Supports filters, including OSD text overlay and logo watermarking.
- Supports full GPU hardware encoding/decoding and filtering, minimizing frequent memory transfers between VRAM and RAM.
- Supports full GPU (CUDA) inference plugins, enabling AI-based object detection for people, vehicles, and other targets.
- JT1078 Version
- Supports JT1078 stream ingestion and protocol conversion, with adaptive audio-video shared sequence and individual sequence modes.
- Adds JT1078 cascading support and JT1078 intercom support.
- JT1078 APIs and usage remain consistent with GB28181, ensuring compatibility.
- Supports H.264/H.265/G.711/AAC/MP3/G.721/G.722/G.723/G.729/G.726/ADPCM encoding.
- IPTV Version
- Supports RTSP-TS/HLS/HTTP-TS/RTP multicast/UDP multicast stream ingestion and protocol conversion. Supports TS passthrough mode, eliminating the need for demuxing when converting to RTSP-TS/HLS/HTTP-TS/SRT.
- Supports RTSP-TS/SRT stream ingestion and TS passthrough mode, avoiding the need for demuxing when converting to RTSP-TS/HLS/HTTP-TS/SRT.
- All the above features also support demuxing TS into ES streams and converting them to RTSP/RTMP/FLV/HTTP-TS/HLS/HLS-FMP4/MP4/FMP4/WebRTC.
- VP9/AV1 Version
Fully supports AV1/VP9 encoding, with RTMP/RTSP/TS/PS/HLS/MP4/FMP4 protocol compatibility for AV1/VP9.
## System Requirements
- Compiler with c++11 support, such as GCC 4.8+, Clang 3.3+, or VC2015+.
@ -375,6 +406,8 @@ bash build_docker_images.sh
- [GB28181 player implemented in C++](https://github.com/any12345com/BXC_gb28181Player)
- [Android RTCPlayer](https://github.com/leo94666/RTCPlayer)
- Monitor
- [Prometheus Exporter for ZLMediaKit](https://github.com/guohuachan/ZLMediaKit_exporter)
## License
@ -542,6 +575,9 @@ Thanks to all those who have supported this project in various ways, including b
[ss002012](https://github.com/ss002012)
[a839419160](https://github.com/a839419160)
[oldma3095](https://github.com/oldma3095)
[Dary](https://github.com/watersounds)
[N.z](https://github.com/neesonqk)
[yanggs](https://github.com/callinglove)
Also thank to JetBrains for their support for open source project, we developed and debugged zlmediakit with CLion:


@ -1,6 +1,6 @@
# MIT License
#
# Copyright (c) 2016-2022 The ZLMediaKit project authors. All Rights Reserved.
# Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
@ -77,6 +77,31 @@ install(TARGETS mk_api
LIBRARY DESTINATION ${INSTALL_PATH_LIB}
RUNTIME DESTINATION ${INSTALL_PATH_RUNTIME})
if(MSVC)
set(RESOURCE_FILE "${CMAKE_SOURCE_DIR}/resource.rc")
set_source_files_properties(${RESOURCE_FILE} PROPERTIES LANGUAGE RC)
target_sources(mk_api PRIVATE ${RESOURCE_FILE})
endif()
# In release mode, strip the debug info into a separate .debug file
string(TOLOWER ${CMAKE_BUILD_TYPE} CMAKE_BUILD_TYPE_LOWER)
if(UNIX AND ENABLE_OBJCOPY)
if("${CMAKE_BUILD_TYPE_LOWER}" STREQUAL "release")
find_program(OBJCOPY_FOUND objcopy)
if (OBJCOPY_FOUND)
add_custom_command(TARGET mk_api
POST_BUILD
COMMAND objcopy --only-keep-debug ${EXECUTABLE_OUTPUT_PATH}/libmk_api.so ${EXECUTABLE_OUTPUT_PATH}/libmk_api.so.debug
COMMAND objcopy --strip-all ${EXECUTABLE_OUTPUT_PATH}/libmk_api.so
COMMAND objcopy --add-gnu-debuglink=${EXECUTABLE_OUTPUT_PATH}/libmk_api.so.debug ${EXECUTABLE_OUTPUT_PATH}/libmk_api.so
)
install(FILES ${EXECUTABLE_OUTPUT_PATH}/libmk_api.so.debug DESTINATION ${INSTALL_PATH_RUNTIME})
else()
message(STATUS "objcopy not found, skip generating libmk_api.so.debug")
endif()
endif()
endif()
# IOS
if(IOS)
return()


@ -259,31 +259,24 @@ API_EXPORT uint16_t API_CALL mk_rtp_server_start(uint16_t port);
*/
API_EXPORT uint16_t API_CALL mk_rtc_server_start(uint16_t port);
// Callback for getting the webrtc answer sdp [AUTO-TRANSLATED:10c93fa9]
typedef void(API_CALL *on_mk_webrtc_get_answer_sdp)(void *user_data, const char *answer, const char *err);
/**
* webrtc exchange sdp, generate answer sdp based on offer sdp
* @param user_data Callback user pointer
* @param cb Callback function
* @param type webrtc plugin type, supports echo, play, push
* @param offer webrtc offer sdp
* @param url rtc url, for example rtc://__defaultVhost/app/stream?key1=val1&key2=val2
* [AUTO-TRANSLATED:ea79659b]
*/
API_EXPORT void API_CALL mk_webrtc_get_answer_sdp(void *user_data, on_mk_webrtc_get_answer_sdp cb, const char *type,
const char *offer, const char *url);

/**
* Start the websocket[s] signaling server
* @param port websocket listening port
* @param ssl whether to start an ssl server
* @return 0 on failure, otherwise the listening port
*/
API_EXPORT uint16_t API_CALL mk_signaling_server_start(uint16_t port, int ssl);
/**
* Start the webrtc-ice[s] server
* @param port listening port
* @return 0 on failure, otherwise the listening port
*
*/
API_EXPORT uint16_t API_CALL mk_ice_server_start(uint16_t port);
API_EXPORT void API_CALL mk_webrtc_get_answer_sdp2(void *user_data, on_user_data_free user_data_free, on_mk_webrtc_get_answer_sdp cb, const char *type,
const char *offer, const char *url);
/**
* Start the srt server


@ -193,6 +193,8 @@ API_EXPORT uint64_t API_CALL mk_media_source_get_alive_second(const mk_media_sou
API_EXPORT int API_CALL mk_media_source_close(const mk_media_source ctx,int force);
//MediaSource::seekTo()
API_EXPORT int API_CALL mk_media_source_seek_to(const mk_media_source ctx,uint32_t stamp);
// MediaSource::setSpeed()
API_EXPORT void API_CALL mk_media_source_set_speed(const mk_media_source ctx, float speed);
/**
* Callback indicating whether rtp push streaming succeeded


@ -343,6 +343,40 @@ API_EXPORT void API_CALL mk_mpeg_muxer_init_complete(mk_mpeg_muxer ctx);
*/
API_EXPORT int API_CALL mk_mpeg_muxer_input_frame(mk_mpeg_muxer ctx, mk_frame frame);
//////////////////////////////////////////////////////////////////////
#if defined(ENABLE_RTPPROXY)
typedef struct mk_ps_decoder_t *mk_ps_decoder;
typedef void (API_CALL *on_mk_ps_decoder_stream)(void *user_data, int stream, int codecid, const void *ext, size_t ext_len, int finish);
typedef void(API_CALL *on_mk_ps_decoder_frame)(void *user_data, int stream, int codecid, int flags, int64_t pts, int64_t dts, const void *data, size_t bytes);
/**
* Create a ps demuxer
* @param scb Stream callback, invoked once for each discovered stream (track)
* @param dcb Frame callback, invoked for each demuxed frame
* @param user_data User data pointer passed to the callbacks
* @return ps demuxer object
*/
API_EXPORT mk_ps_decoder API_CALL mk_ps_decoder_create(on_mk_ps_decoder_stream scb, on_mk_ps_decoder_frame dcb, void * user_data);
/**
* Release the ps demuxer
* @param ctx ps demuxer object
*/
API_EXPORT void API_CALL mk_ps_decoder_release(mk_ps_decoder ctx);
/**
* Input ps data for demuxing
* @param ctx ps demuxer object
* @param data Pointer to the ps data
* @param bytes Length of the data in bytes
*/
API_EXPORT void API_CALL mk_ps_decoder_input(mk_ps_decoder ctx, const char * data, size_t bytes);
#endif // defined(ENABLE_RTPPROXY)
#ifdef __cplusplus
}
#endif


@ -27,5 +27,6 @@
#include "mk_frame.h"
#include "mk_track.h"
#include "mk_transcode.h"
#include "mk_webrtc.h"
#endif /* MK_API_H_ */


@ -125,6 +125,21 @@ API_EXPORT int API_CALL mk_recorder_start(int type, const char *vhost, const cha
*/
API_EXPORT int API_CALL mk_recorder_stop(int type, const char *vhost, const char *app, const char *stream);
/**
* Start an mp4 recording clip task
* @param vhost Virtual host
* @param app Application name
* @param stream Stream ID
* @param path Recording file save path
* @param back_ms How many milliseconds before now to include in the recording
* @param forward_ms How many milliseconds after now to keep recording
* @return 1: success, 0: failure
* */
API_EXPORT int API_CALL mk_recorder_start_task(const char *vhost, const char *app, const char *stream, const char *path, uint32_t back_ms, uint32_t forward_ms);
/**
* Get the mp4 file list
* @param vhost


@ -21,6 +21,7 @@ typedef struct mk_rtp_server_t *mk_rtp_server;
* @param port Listening port, 0 for a random port
* @param tcp_mode TCP mode (0: disabled, 1: passive, 2: active)
* @param stream_id Stream ID
* @param multiplex Whether the RTP server is multiplexed (1: yes, 0: no)
* @return Server object
* Create GB28181 RTP server
* @param port Listening port, 0 for random
@ -32,6 +33,7 @@ typedef struct mk_rtp_server_t *mk_rtp_server;
*/
API_EXPORT mk_rtp_server API_CALL mk_rtp_server_create(uint16_t port, int tcp_mode, const char *stream_id);
API_EXPORT mk_rtp_server API_CALL mk_rtp_server_create2(uint16_t port, int tcp_mode, const char *vhost, const char *app, const char *stream_id);
API_EXPORT mk_rtp_server API_CALL mk_rtp_server_create3(uint16_t port, int tcp_mode, const char *vhost, const char *app, const char *stream_id, int multiplex);
/**
* TCP
@ -110,6 +112,53 @@ typedef void(API_CALL *on_mk_rtp_server_detach)(void *user_data);
API_EXPORT void API_CALL mk_rtp_server_set_on_detach(mk_rtp_server ctx, on_mk_rtp_server_detach cb, void *user_data);
API_EXPORT void API_CALL mk_rtp_server_set_on_detach2(mk_rtp_server ctx, on_mk_rtp_server_detach cb, void *user_data, on_user_data_free user_data_free);
/**
 * Update the SSRC filter of the RTP server
 * @param ctx Server object
 * @param ssrc New ssrc to accept
 */
API_EXPORT void API_CALL mk_rtp_server_update_ssrc(mk_rtp_server ctx, uint32_t ssrc);
/**
 * Callback for querying rtp publisher info
 * @param exist Whether the rtp info exists, 0: no, 1: yes
 * @param peer_ip Remote ip
 * @param peer_port Remote port
 * @param local_ip Local ip
 * @param local_port Local port
 * @param identifier Session identifier
 */
typedef void(API_CALL *on_mk_rtp_get_info)(int exist, const char *peer_ip, uint16_t peer_port, const char *local_ip, uint16_t local_port, const char *identifier);
/**
 * Get info of an rtp publisher
 * @param app Application name
 * @param stream Stream id
 * @param cb Callback receiving the rtp info
 */
API_EXPORT void API_CALL mk_rtp_get_info(const char *app, const char *stream, on_mk_rtp_get_info cb);
/**
 * Pause the RTP timeout check
 * @param app Application name
 * @param stream Stream id
 */
API_EXPORT void API_CALL mk_rtp_pause_check(const char *app, const char *stream);
/**
 * Resume the RTP timeout check
 * @param app Application name
 * @param stream Stream id
 */
API_EXPORT void API_CALL mk_rtp_resume_check(const char *app, const char *stream);
#ifdef __cplusplus
}
#endif
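The `exist` flag contract of `on_mk_rtp_get_info` (null/zero fields when no publisher is found, populated fields otherwise) can be illustrated with a minimal self-contained mock. The registry struct and function names here are invented for the sketch and are not part of the real API:

```c
#include <stddef.h>
#include <string.h>

/* Illustrative, reduced stand-in for on_mk_rtp_get_info */
typedef void (*rtp_info_cb)(int exist, const char *peer_ip, unsigned short peer_port);

/* A tiny "registry" standing in for MediaSource::find + getRtpProcess */
typedef struct { const char *stream; const char *peer_ip; unsigned short peer_port; } rtp_entry;

static void mock_rtp_get_info(const rtp_entry *reg, size_t n, const char *stream, rtp_info_cb cb) {
    for (size_t i = 0; i < n; ++i) {
        if (strcmp(reg[i].stream, stream) == 0) {
            cb(1, reg[i].peer_ip, reg[i].peer_port); /* found: exist=1, fields populated */
            return;
        }
    }
    cb(0, NULL, 0); /* not found: exist=0, null/zero fields, mirroring the C API */
}

/* Capture callback for observing the result */
static int g_exist; static const char *g_ip; static unsigned short g_port;
static void capture_info(int exist, const char *peer_ip, unsigned short peer_port) {
    g_exist = exist; g_ip = peer_ip; g_port = peer_port;
}
```

Callers of the real `mk_rtp_get_info` should therefore always branch on `exist` before touching the string fields.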

111
api/include/mk_webrtc.h Normal file
View File

@ -0,0 +1,111 @@
/*
* Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/ZLMediaKit/ZLMediaKit).
*
* Use of this source code is governed by MIT-like license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#ifndef MK_WEBRTC_H
#define MK_WEBRTC_H
#include "mk_common.h"
#include "mk_proxyplayer.h"
#include <stdint.h>
#ifdef __cplusplus
extern "C" {
#endif
// 获取webrtc answer sdp回调函数 [AUTO-TRANSLATED:10c93fa9]
// Get webrtc answer sdp callback function
typedef void(API_CALL *on_mk_webrtc_get_answer_sdp)(void *user_data, const char *answer, const char *err);
// 获取webrtc proxy player信息回调函数
// Callback for getting webrtc proxy player info
typedef void(API_CALL *on_mk_webrtc_get_proxy_player_info_cb)(const char *info_json, const char *err);
// WebRTC-注册到信令服务器、WebRTC-从信令服务器注销回调函数
// Callback for registering to / unregistering from the WebRTC signaling server
typedef void(API_CALL *on_mk_webrtc_room_keeper_info_cb)(void *user_data, const char *room_key, const char *err);
// 获取WebRTC-Peer查看注册信息、WebRTC-信令服务器查看注册信息回调函数
// Callback for listing registration info of WebRTC peers / the signaling server
typedef void(API_CALL *on_mk_webrtc_room_keeper_data_cb)(const char *data);
/**
 * webrtc交换sdp, 根据offer sdp生成answer sdp
 * @param user_data 回调用户指针
 * @param cb 回调函数
 * @param type webrtc插件类型, 支持echo,play,push
 * @param offer webrtc offer sdp
 * @param url rtc url, 例如 rtc://__defaultVhost/app/stream?key1=val1&key2=val2
* webrtc exchange sdp, generate answer sdp based on offer sdp
* @param user_data Callback user pointer
* @param cb Callback function
* @param type webrtc plugin type, supports echo, play, push
* @param offer webrtc offer sdp
* @param url rtc url, for example rtc://__defaultVhost/app/stream?key1=val1&key2=val2
* [AUTO-TRANSLATED:ea79659b]
*/
API_EXPORT void API_CALL mk_webrtc_get_answer_sdp(void *user_data, on_mk_webrtc_get_answer_sdp cb, const char *type, const char *offer, const char *url);
API_EXPORT void API_CALL mk_webrtc_get_answer_sdp2(
void *user_data, on_user_data_free user_data_free, on_mk_webrtc_get_answer_sdp cb, const char *type, const char *offer, const char *url);
/**
 * Get webrtc proxy player info
 * @param ctx mk_proxy_player object
 * @param cb Callback function
 */
API_EXPORT void API_CALL mk_webrtc_get_proxy_player_info(mk_proxy_player ctx, on_mk_webrtc_get_proxy_player_info_cb cb);
/**
 * Register to the WebRTC signaling server as a room keeper
 * @param room_id Room id to register; the signaling server checks it for uniqueness
 * @param server_host Signaling server host
 * @param server_port Signaling server port
 * @param ssl Whether to connect over ssl
 * @param cb Result callback; receives the room_key on success
 * @param user_data User pointer passed to the callback
 */
API_EXPORT void API_CALL
mk_webrtc_add_room_keeper(const char *room_id, const char *server_host, uint16_t server_port, int ssl, on_mk_webrtc_room_keeper_info_cb cb, void *user_data);
API_EXPORT void API_CALL mk_webrtc_add_room_keeper2(
const char *room_id, const char *server_host, uint16_t server_port, int ssl, on_mk_webrtc_room_keeper_info_cb cb, void *user_data,
on_user_data_free user_data_free);
/**
 * Unregister from the WebRTC signaling server
 * @param room_key Room key returned when registering
 * @param cb Result callback
 * @param user_data User pointer passed to the callback
 */
API_EXPORT void API_CALL mk_webrtc_del_room_keeper(const char *room_key, on_mk_webrtc_room_keeper_info_cb cb, void *user_data);
API_EXPORT void API_CALL
mk_webrtc_del_room_keeper2(const char *room_key, on_mk_webrtc_room_keeper_info_cb cb, void *user_data, on_user_data_free user_data_free);
/**
 * List registration info of local WebRTC room keepers (peers)
 * @param cb Data callback, invoked once per entry with a json string
 */
API_EXPORT void API_CALL mk_webrtc_list_room_keeper(on_mk_webrtc_room_keeper_data_cb cb);
/**
 * List rooms registered on the WebRTC signaling server
 * @param cb Data callback, invoked once per room with a json string
 */
API_EXPORT void API_CALL mk_webrtc_list_rooms(on_mk_webrtc_room_keeper_data_cb cb);
#ifdef __cplusplus
}
#endif
#endif /* MK_WEBRTC_H */
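The add/del room-keeper lifecycle declared above (register a `room_id`, receive an opaque `room_key`, later unregister with that key) can be sketched as a self-contained mock. The fixed-size table and key format are invented for illustration and do not reflect the real signaling implementation:

```c
#include <stddef.h>
#include <string.h>
#include <stdio.h>

#define MAX_ROOMS 8

typedef struct { char room_id[64]; char room_key[64]; int used; } room_slot;
typedef struct { room_slot slots[MAX_ROOMS]; int next_key; } room_table;

/* Register a room id; returns the generated room_key, or NULL on a duplicate,
   mirroring the uniqueness check the signaling server performs on room_id. */
static const char *mock_add_room_keeper(room_table *t, const char *room_id) {
    for (int i = 0; i < MAX_ROOMS; ++i)
        if (t->slots[i].used && strcmp(t->slots[i].room_id, room_id) == 0)
            return NULL; /* duplicate room_id rejected */
    for (int i = 0; i < MAX_ROOMS; ++i) {
        if (!t->slots[i].used) {
            t->slots[i].used = 1;
            snprintf(t->slots[i].room_id, sizeof t->slots[i].room_id, "%s", room_id);
            snprintf(t->slots[i].room_key, sizeof t->slots[i].room_key, "key-%d", t->next_key++);
            return t->slots[i].room_key;
        }
    }
    return NULL; /* table full */
}

/* Unregister with the previously returned key; returns 1 on success. */
static int mock_del_room_keeper(room_table *t, const char *room_key) {
    for (int i = 0; i < MAX_ROOMS; ++i) {
        if (t->slots[i].used && strcmp(t->slots[i].room_key, room_key) == 0) {
            t->slots[i].used = 0;
            return 1;
        }
    }
    return 0;
}
```

The real API reports the same outcomes asynchronously through `on_mk_webrtc_room_keeper_info_cb` instead of return values.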

View File

@ -29,6 +29,7 @@ using namespace mediakit;
static TcpServer::Ptr rtsp_server[2];
static TcpServer::Ptr rtmp_server[2];
static TcpServer::Ptr http_server[2];
static TcpServer::Ptr signaling_server[2];
static TcpServer::Ptr shell_server;
#ifdef ENABLE_RTPPROXY
@ -37,9 +38,14 @@ static RtpServer::Ptr rtpServer;
#endif
#ifdef ENABLE_WEBRTC
#include "../webrtc/WebRtcSession.h"
#include "webrtc/WebRtcSession.h"
#include "webrtc/IceSession.hpp"
#include "webrtc/WebRtcSignalingSession.h"
#include "webrtc/WebRtcTransport.h"
static UdpServer::Ptr rtcServer_udp;
static TcpServer::Ptr rtcServer_tcp;
static UdpServer::Ptr iceServer_udp;
static TcpServer::Ptr iceServer_tcp;
#endif
#if defined(ENABLE_SRT)
@ -76,6 +82,9 @@ API_EXPORT void API_CALL mk_stop_all_server(){
#ifdef ENABLE_WEBRTC
rtcServer_udp = nullptr;
rtcServer_tcp = nullptr;
iceServer_udp = nullptr;
iceServer_tcp = nullptr;
CLEAR_ARR(signaling_server);
#endif
#ifdef ENABLE_SRT
srtServer = nullptr;
@ -288,46 +297,46 @@ API_EXPORT uint16_t API_CALL mk_rtc_server_start(uint16_t port) {
#endif
}
#ifdef ENABLE_WEBRTC
class WebRtcArgsUrl : public mediakit::WebRtcArgs {
public:
WebRtcArgsUrl(std::string url) { _url = std::move(url); }
toolkit::variant operator[](const std::string &key) const override {
if (key == "url") {
return _url;
API_EXPORT uint16_t API_CALL mk_signaling_server_start(uint16_t port, int ssl) {
#ifdef ENABLE_WEBRTC
ssl = MAX(0, MIN(ssl, 1));
try {
signaling_server[ssl] = std::make_shared<TcpServer>();
if (ssl) {
signaling_server[ssl]->start<WebRtcWebcosktSignalSslSession>(port);
} else {
signaling_server[ssl]->start<WebRtcWebcosktSignalingSession>(port);
}
return "";
return signaling_server[ssl]->getPort();
} catch (std::exception &ex) {
signaling_server[ssl] = nullptr;
WarnL << ex.what();
return 0;
}
private:
std::string _url;
};
#endif
API_EXPORT void API_CALL mk_webrtc_get_answer_sdp(void *user_data, on_mk_webrtc_get_answer_sdp cb, const char *type,
const char *offer, const char *url) {
mk_webrtc_get_answer_sdp2(user_data, nullptr, cb, type, offer, url);
}
API_EXPORT void API_CALL mk_webrtc_get_answer_sdp2(void *user_data, on_user_data_free user_data_free, on_mk_webrtc_get_answer_sdp cb, const char *type,
const char *offer, const char *url) {
#ifdef ENABLE_WEBRTC
assert(type && offer && url && cb);
auto session = std::make_shared<HttpSession>(Socket::createSocket());
std::string offer_str = offer;
std::shared_ptr<void> ptr(user_data, user_data_free ? user_data_free : [](void *) {});
auto args = std::make_shared<WebRtcArgsUrl>(url);
WebRtcPluginManager::Instance().negotiateSdp(*session, type, *args, [offer_str, session, ptr, cb](const WebRtcInterface &exchanger) mutable {
auto &handler = const_cast<WebRtcInterface &>(exchanger);
try {
auto sdp_answer = handler.getAnswerSdp(offer_str);
cb(ptr.get(), sdp_answer.data(), nullptr);
} catch (std::exception &ex) {
cb(ptr.get(), nullptr, ex.what());
}
});
#else
WarnL << "未启用webrtc功能, 编译时请开启ENABLE_WEBRTC";
return 0;
#endif
}
API_EXPORT uint16_t API_CALL mk_ice_server_start(uint16_t port){
#ifdef ENABLE_WEBRTC
try {
iceServer_tcp = std::make_shared<TcpServer>();
iceServer_udp = std::make_shared<UdpServer>();
iceServer_udp->start<IceSession>(port);
iceServer_tcp->start<IceSession>(port);
return 0;
} catch (std::exception &ex) {
iceServer_udp = nullptr;
iceServer_tcp = nullptr;
WarnL << ex.what();
return 0;
}
#else
WarnL << "未启用webrtc功能, 编译时请开启ENABLE_WEBRTC";
return 0;
#endif
}

View File

@ -296,6 +296,13 @@ API_EXPORT int API_CALL mk_media_source_seek_to(const mk_media_source ctx,uint32
MediaSource *src = (MediaSource *)ctx;
return src->seekTo(stamp);
}
API_EXPORT void API_CALL mk_media_source_set_speed(const mk_media_source ctx, float speed) {
assert(ctx);
MediaSource *src = (MediaSource *)ctx;
src->getOwnerPoller()->async([=]() mutable { src->speed(speed); });
}
API_EXPORT void API_CALL mk_media_source_start_send_rtp(const mk_media_source ctx, const char *dst_url, uint16_t dst_port, const char *ssrc, int con_type, on_mk_media_source_send_rtp_result cb, void *user_data) {
mk_media_source_start_send_rtp2(ctx, dst_url, dst_port, ssrc, con_type, cb, user_data, nullptr);
}
@ -347,6 +354,7 @@ API_EXPORT void API_CALL mk_media_source_start_send_rtp4(const mk_media_source c
args.close_delay_ms = (*ini_ptr)["close_delay_ms"].empty() ? 0 : (*ini_ptr)["close_delay_ms"].as<int>();
args.rtcp_timeout_ms = (*ini_ptr)["rtcp_timeout_ms"].empty() ? 30000 : (*ini_ptr)["rtcp_timeout_ms"].as<int>();
args.rtcp_send_interval_ms = (*ini_ptr)["rtcp_send_interval_ms"].empty() ? 5000 : (*ini_ptr)["rtcp_send_interval_ms"].as<int>();
args.enable_origin_recv_limit = (*ini_ptr)["enable_origin_recv_limit"].empty() ? false : (*ini_ptr)["enable_origin_recv_limit"].as<bool>();
std::shared_ptr<void> ptr(
user_data, user_data_free ? user_data_free : [](void *) {});
src->getOwnerPoller()->async([=]() mutable {

View File

@ -11,6 +11,7 @@
#include "mk_frame.h"
#include "Record/MPEG.h"
#include "Extension/Factory.h"
#include "Rtp/PSDecoder.h"
using namespace mediakit;
@ -223,4 +224,36 @@ API_EXPORT int API_CALL mk_mpeg_muxer_input_frame(mk_mpeg_muxer ctx, mk_frame fr
assert(ctx && frame);
auto ptr = reinterpret_cast<MpegMuxerForC *>(ctx);
return ptr->inputFrame(*((Frame::Ptr *) frame));
}
}
//////////////////////////////////////////////////////////////////////
#if defined(ENABLE_RTPPROXY)
API_EXPORT mk_ps_decoder API_CALL mk_ps_decoder_create(on_mk_ps_decoder_stream scb, on_mk_ps_decoder_frame dcb, void * user_data) {
assert(dcb);
auto ps_decoder = new PSDecoder();
std::shared_ptr<void> ptr(user_data, [](void *) {});
if (scb) {
ps_decoder->setOnStream([ptr,scb](int stream, int codecid, const void *extra, size_t bytes, int finish) {
scb(ptr.get(), stream, getCodecByMpegId(codecid), extra, bytes, finish);
});
}
ps_decoder->setOnDecode([ptr,dcb](int stream, int codecid, int flags, int64_t pts, int64_t dts, const void *data, size_t bytes) {
dcb(ptr.get(), stream,getCodecByMpegId(codecid),flags,pts,dts,data,bytes);
});
return reinterpret_cast<mk_ps_decoder>(ps_decoder);
}
API_EXPORT void API_CALL mk_ps_decoder_release(mk_ps_decoder ctx) {
assert(ctx);
auto ptr = reinterpret_cast<PSDecoder *>(ctx);
delete ptr;
}
API_EXPORT void API_CALL mk_ps_decoder_input(mk_ps_decoder ctx, const char * data, size_t bytes) {
assert(ctx && data);
auto ptr = reinterpret_cast<PSDecoder *>(ctx);
ptr->input(reinterpret_cast<const uint8_t *>(data), bytes);
}
#endif

View File

@ -309,7 +309,7 @@ API_EXPORT void API_CALL mk_media_start_send_rtp2(mk_media ctx, const char *dst_
auto ref = *obj;
std::shared_ptr<void> ptr(user_data, user_data_free ? user_data_free : [](void *) {});
(*obj)->getChannel()->getOwnerPoller(MediaSource::NullMediaSource())->async([args, ref, cb, ptr]() {
ref->getChannel()->startSendRtp(MediaSource::NullMediaSource(), args, [cb, ptr](uint16_t local_port, const SockException &ex) {
ref->getChannel()->getMuxer(MediaSource::NullMediaSource())->startSendRtp( args, [cb, ptr](uint16_t local_port, const SockException &ex) {
if (cb) {
cb(ptr.get(), local_port, ex.getErrCode(), ex.what());
}
@ -343,13 +343,14 @@ API_EXPORT void API_CALL mk_media_start_send_rtp4(mk_media ctx, const char *dst_
args.close_delay_ms = (*ini_ptr)["close_delay_ms"].empty() ? 30000 : (*ini_ptr)["close_delay_ms"].as<int>();
args.rtcp_timeout_ms = (*ini_ptr)["rtcp_timeout_ms"].empty() ? 30000 : (*ini_ptr)["rtcp_timeout_ms"].as<int>();
args.rtcp_send_interval_ms = (*ini_ptr)["rtcp_send_interval_ms"].empty() ? 5000 : (*ini_ptr)["rtcp_send_interval_ms"].as<int>();
args.enable_origin_recv_limit = (*ini_ptr)["enable_origin_recv_limit"].empty() ? false : (*ini_ptr)["enable_origin_recv_limit"].as<bool>();
// sender参数无用 [AUTO-TRANSLATED:21590ae5]
// The sender parameter is useless
auto ref = *obj;
std::shared_ptr<void> ptr(
user_data, user_data_free ? user_data_free : [](void *) {});
(*obj)->getChannel()->getOwnerPoller(MediaSource::NullMediaSource())->async([args, ref, cb, ptr]() {
ref->getChannel()->startSendRtp(MediaSource::NullMediaSource(), args, [cb, ptr](uint16_t local_port, const SockException &ex) {
ref->getChannel()->getMuxer(MediaSource::NullMediaSource())->startSendRtp(args, [cb, ptr](uint16_t local_port, const SockException &ex) {
if (cb) {
cb(ptr.get(), local_port, ex.getErrCode(), ex.what());
}
@ -365,7 +366,7 @@ API_EXPORT void API_CALL mk_media_stop_send_rtp(mk_media ctx, const char *ssrc)
auto ref = *obj;
string ssrc_str = ssrc ? ssrc : "";
(*obj)->getChannel()->getOwnerPoller(MediaSource::NullMediaSource())->async([ref, ssrc_str]() {
ref->getChannel()->stopSendRtp(MediaSource::NullMediaSource(), ssrc_str);
ref->getChannel()->getMuxer(MediaSource::NullMediaSource())->stopSendRtp(ssrc_str);
});
}

View File

@ -85,6 +85,27 @@ API_EXPORT int API_CALL mk_recorder_stop(int type, const char *vhost, const char
return stopRecord((Recorder::type)type,vhost,app,stream);
}
API_EXPORT int API_CALL mk_recorder_start_task(const char *vhost, const char *app, const char *stream, const char *path, uint32_t back_ms, uint32_t forward_ms) {
assert(vhost && app && stream);
auto src = MediaSource::find(vhost, app, stream);
if (!src) {
WarnL << "未找到相关的MediaSource,startRecordTask失败:" << vhost << "/" << app << "/" << stream;
return false;
}
bool ret;
src->getOwnerPoller()->async([=]() mutable {
std::string err;
try {
src->getMuxer()->startRecord(path, back_ms, forward_ms);
} catch (std::exception &ex) {
err = ex.what();
WarnL << "MediaSource开启startRecordTask失败:" << vhost << "/" << app << "/" << stream << " what: " << err;
}
ret = err.empty();
});
return ret;
}
API_EXPORT void API_CALL mk_load_mp4_file(const char *vhost, const char *app, const char *stream, const char *file_path, int file_repeat) {
mINI ini;
mk_load_mp4_file2(vhost, app, stream, file_path, file_repeat, (mk_ini)&ini);

View File

@ -31,6 +31,13 @@ API_EXPORT mk_rtp_server API_CALL mk_rtp_server_create2(uint16_t port, int tcp_m
return (mk_rtp_server)server;
}
API_EXPORT mk_rtp_server API_CALL mk_rtp_server_create3(uint16_t port, int tcp_mode, const char *vhost, const char *app, const char *stream_id, int multiplex) {
RtpServer::Ptr *server = new RtpServer::Ptr(new RtpServer);
GET_CONFIG(std::string, local_ip, General::kListenIP)
(*server)->start(port, local_ip.c_str(), MediaTuple { vhost, app, stream_id, "" }, (RtpServer::TcpMode)tcp_mode,multiplex);
return (mk_rtp_server)server;
}
API_EXPORT void API_CALL mk_rtp_server_connect(mk_rtp_server ctx, const char *dst_url, uint16_t dst_port, on_mk_rtp_server_connected cb, void *user_data) {
mk_rtp_server_connect2(ctx, dst_url, dst_port, cb, user_data, nullptr);
}
@ -73,6 +80,41 @@ API_EXPORT void API_CALL mk_rtp_server_set_on_detach2(mk_rtp_server ctx, on_mk_r
}
}
API_EXPORT void API_CALL mk_rtp_server_update_ssrc(mk_rtp_server ctx, uint32_t ssrc) {
assert(ctx);
RtpServer::Ptr *server = (RtpServer::Ptr *)ctx;
(*server)->updateSSRC(ssrc);
}
API_EXPORT void API_CALL mk_rtp_get_info(const char *app, const char *stream, on_mk_rtp_get_info cb) {
assert(cb);
auto src = MediaSource::find(DEFAULT_VHOST, app, stream);
auto process = src ? src->getRtpProcess() : nullptr;
if (!process) {
cb(0, nullptr, 0, nullptr, 0, nullptr);
return;
}
SockInfo *info = process.get();
cb(1, info->get_peer_ip().c_str(), info->get_peer_port(), info->get_local_ip().c_str(), info->get_local_port(), info->getIdentifier().c_str());
}
API_EXPORT void API_CALL mk_rtp_pause_check(const char *app, const char *stream) {
auto src = MediaSource::find(DEFAULT_VHOST, app, stream);
auto process = src ? src->getRtpProcess() : nullptr;
if (process) {
process->pauseRtpTimeout(true);
}
}
API_EXPORT void API_CALL mk_rtp_resume_check(const char *app, const char *stream) {
auto src = MediaSource::find(DEFAULT_VHOST, app, stream);
auto process = src ? src->getRtpProcess() : nullptr;
if (process) {
process->pauseRtpTimeout(false);
}
}
#else
API_EXPORT mk_rtp_server API_CALL mk_rtp_server_create(uint16_t port, int enable_tcp, const char *stream_id) {

190
api/source/mk_webrtc.cpp Normal file
View File

@ -0,0 +1,190 @@
/*
* Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/ZLMediaKit/ZLMediaKit).
*
* Use of this source code is governed by MIT-like license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#include "mk_webrtc.h"
#include "mk_util.h"
#include <stdarg.h>
#include <unordered_map>
#include "Util/logger.h"
#include "Util/SSLBox.h"
#include "Util/File.h"
#include "Network/TcpServer.h"
#include "Network/UdpServer.h"
#include "Thread/WorkThreadPool.h"
#include "Rtsp/RtspSession.h"
#include "Rtmp/RtmpSession.h"
#include "Http/HttpSession.h"
#include "Shell/ShellSession.h"
#include "Player/PlayerProxy.h"
using namespace std;
using namespace toolkit;
using namespace mediakit;
#ifdef ENABLE_WEBRTC
#include "webrtc/WebRtcProxyPlayer.h"
#include "webrtc/WebRtcProxyPlayerImp.h"
#include "webrtc/WebRtcSignalingPeer.h"
#include "webrtc/WebRtcSignalingSession.h"
#include "webrtc/WebRtcSession.h"
static UdpServer::Ptr rtcServer_udp;
static TcpServer::Ptr rtcServer_tcp;
class WebRtcArgsUrl : public mediakit::WebRtcArgs {
public:
WebRtcArgsUrl(std::string url) { _url = std::move(url); }
toolkit::variant operator[](const std::string &key) const override {
if (key == "url") {
return _url;
}
return "";
}
private:
std::string _url;
};
#endif
API_EXPORT void API_CALL mk_webrtc_get_answer_sdp(void *user_data, on_mk_webrtc_get_answer_sdp cb, const char *type, const char *offer, const char *url) {
mk_webrtc_get_answer_sdp2(user_data, nullptr, cb, type, offer, url);
}
API_EXPORT void API_CALL mk_webrtc_get_answer_sdp2(
void *user_data, on_user_data_free user_data_free, on_mk_webrtc_get_answer_sdp cb, const char *type, const char *offer, const char *url) {
#ifdef ENABLE_WEBRTC
assert(type && offer && url && cb);
auto session = std::make_shared<HttpSession>(Socket::createSocket());
std::string offer_str = offer;
std::shared_ptr<void> ptr(user_data, user_data_free ? user_data_free : [](void *) {});
auto args = std::make_shared<WebRtcArgsUrl>(url);
WebRtcPluginManager::Instance().negotiateSdp(*session, type, *args, [offer_str, session, ptr, cb](const WebRtcInterface &exchanger) mutable {
auto &handler = const_cast<WebRtcInterface &>(exchanger);
try {
auto sdp_answer = handler.getAnswerSdp(offer_str);
cb(ptr.get(), sdp_answer.data(), nullptr);
} catch (std::exception &ex) {
cb(ptr.get(), nullptr, ex.what());
}
});
#else
WarnL << "未启用webrtc功能, 编译时请开启ENABLE_WEBRTC";
#endif
}
API_EXPORT void API_CALL mk_webrtc_get_proxy_player_info(mk_proxy_player ctx, on_mk_webrtc_get_proxy_player_info_cb cb) {
#ifdef ENABLE_WEBRTC
assert(ctx && cb);
PlayerProxy::Ptr *obj = (PlayerProxy::Ptr *)ctx;
auto media_player = obj->get()->getDelegate();
if (!media_player) {
cb(nullptr, "Media player not found");
return;
}
auto webrtc_player_imp = std::dynamic_pointer_cast<WebRtcProxyPlayerImp>(media_player);
if (!webrtc_player_imp) {
cb(nullptr, "Stream proxy is not WebRTC type");
return;
}
auto webrtc_transport = webrtc_player_imp->getWebRtcTransport();
if (!webrtc_transport) {
cb(nullptr, "WebRTC transport not available");
return;
}
webrtc_transport->getTransportInfo([cb](Json::Value transport_info) mutable {
if (transport_info.isMember("error")) {
cb(nullptr, strdup(transport_info["error"].asCString()));
return;
}
cb(strdup(transport_info.toStyledString().c_str()), "");
});
#else
WarnL << "未启用webrtc功能, 编译时请开启ENABLE_WEBRTC";
#endif
}
API_EXPORT void API_CALL mk_webrtc_add_room_keeper(
const char *room_id, const char *server_host, uint16_t server_port, int ssl, on_mk_webrtc_room_keeper_info_cb cb, void *user_data) {
mk_webrtc_add_room_keeper2(room_id, server_host, server_port, ssl, cb, user_data, nullptr);
}
API_EXPORT void API_CALL mk_webrtc_add_room_keeper2(
const char *room_id, const char *server_host, uint16_t server_port, int ssl, on_mk_webrtc_room_keeper_info_cb cb, void *user_data,
on_user_data_free user_data_free) {
#ifdef ENABLE_WEBRTC
assert(server_host && server_port && room_id && cb);
// server_host: 信令服务器host
// server_port: 信令服务器port
// room_id: 注册的id,信令服务器会对该id进行唯一性检查
// server_host: signaling server host; server_port: signaling server port
// room_id: id to register; the signaling server checks it for uniqueness
std::string server_host_str(server_host), room_id_str(room_id);
std::shared_ptr<void> ptr(user_data, user_data_free ? user_data_free : [](void *) {});
addWebrtcRoomKeeper(server_host_str, server_port, room_id_str, ssl, [ptr,cb](const SockException &ex, const string &key) mutable {
if (ex) {
cb(ptr.get(), nullptr, ex.what());
} else {
cb(ptr.get(), key.c_str(), nullptr);
}
});
#else
WarnL << "未启用webrtc功能, 编译时请开启ENABLE_WEBRTC";
#endif
}
API_EXPORT void API_CALL mk_webrtc_del_room_keeper(const char *room_key, on_mk_webrtc_room_keeper_info_cb cb, void *user_data) {
mk_webrtc_del_room_keeper2(room_key,cb,user_data,nullptr);
}
API_EXPORT void API_CALL
mk_webrtc_del_room_keeper2(const char *room_key, on_mk_webrtc_room_keeper_info_cb cb, void *user_data, on_user_data_free user_data_free) {
#ifdef ENABLE_WEBRTC
assert(room_key && cb);
std::string room_key_str(room_key);
std::shared_ptr<void> ptr(user_data, user_data_free ? user_data_free : [](void *) {});
delWebrtcRoomKeeper(room_key_str, [room_key_str, ptr, cb](const SockException &ex) mutable {
if (ex) {
cb(ptr.get(), room_key_str.c_str(), ex.what());
return;
}
cb(ptr.get(), room_key_str.c_str(), nullptr);
});
#else
WarnL << "未启用webrtc功能, 编译时请开启ENABLE_WEBRTC";
#endif
}
API_EXPORT void API_CALL mk_webrtc_list_room_keeper(on_mk_webrtc_room_keeper_data_cb cb) {
#ifdef ENABLE_WEBRTC
assert(cb);
listWebrtcRoomKeepers([cb](const std::string &key, const WebRtcSignalingPeer::Ptr &p) {
Json::Value item = ToJson(p);
item["room_key"] = key;
cb(strdup(item.toStyledString().c_str()));
});
#else
WarnL << "未启用webrtc功能, 编译时请开启ENABLE_WEBRTC";
#endif
}
API_EXPORT void API_CALL mk_webrtc_list_rooms(on_mk_webrtc_room_keeper_data_cb cb){
#ifdef ENABLE_WEBRTC
assert(cb);
listWebrtcRooms([cb](const std::string &key, const WebRtcSignalingSession::Ptr &p) {
Json::Value item = ToJson(p);
item["room_id"] = key;
cb(strdup(item.toStyledString().c_str()));
});
#else
WarnL << "未启用webrtc功能, 编译时请开启ENABLE_WEBRTC";
#endif
}

View File

@ -1,6 +1,6 @@
# MIT License
#
# Copyright (c) 2016-2022 The ZLMediaKit project authors. All Rights Reserved.
# Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal

View File

@ -64,7 +64,8 @@ void API_CALL on_mk_push_event_func(void *user_data,int err_code,const char *err
void API_CALL on_mk_media_source_regist_func(void *user_data, mk_media_source sender, int regist){
Context *ctx = (Context *) user_data;
const char *schema = mk_media_source_get_schema(sender);
if (strncmp(schema, ctx->push_url, strlen(schema)) == 0) {
if (strncmp(schema, ctx->push_url, strlen(schema)) == 0 ||
(!strncmp(ctx->push_url, "webrtc", 5) && !strcmp(schema, "rtsp")) ) {
// 判断是否为推流协议相关的流注册或注销事件 [AUTO-TRANSLATED:00a88a17]
// Determine if it is a stream registration or deregistration event related to the streaming protocol
release_pusher(&(ctx->pusher));

16
cmake/FindAVFILTER.cmake Normal file
View File

@ -0,0 +1,16 @@
find_path(AVFILTER_INCLUDE_DIR
NAMES libavfilter/avfilter.h
HINTS ${FFMPEG_PATH_ROOT}
PATH_SUFFIXES include)
find_library(AVFILTER_LIBRARY
NAMES avfilter
HINTS ${FFMPEG_PATH_ROOT}
PATH_SUFFIXES bin lib)
set(AVFILTER_LIBRARIES ${AVFILTER_LIBRARY})
set(AVFILTER_INCLUDE_DIRS ${AVFILTER_INCLUDE_DIR})
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(AVFILTER DEFAULT_MSG AVFILTER_LIBRARY AVFILTER_INCLUDE_DIR)

16
cmake/FindTCMALLOC.cmake Normal file
View File

@ -0,0 +1,16 @@
find_path(Tcmalloc_INCLUDE_DIR
NAMES google/tcmalloc.h
)
find_library(Tcmalloc_LIBRARY
NAMES tcmalloc_minimal tcmalloc
)
set(TCMALLOC_LIBRARIES ${Tcmalloc_LIBRARY})
set(TCMALLOC_INCLUDE_DIRS ${Tcmalloc_INCLUDE_DIR})
INCLUDE(FindPackageHandleStandardArgs)
FIND_PACKAGE_HANDLE_STANDARD_ARGS(TCMALLOC
DEFAULT_MSG
TCMALLOC_LIBRARIES TCMALLOC_INCLUDE_DIRS
)

View File

@ -1,4 +1,4 @@
#include <atomic>
#include <atomic>
static int test()
{

View File

@ -4,52 +4,93 @@
#!!!!你如果修改此范例配置文件(conf/config.ini)并不会被MediaServer进程加载因为MediaServer进程默认加载的是release/${操作系统类型}/${编译类型}/config.ini。
#!!!!当然你每次执行cmake该文件确实会被拷贝至release/${操作系统类型}/${编译类型}/config.ini
#!!!!但是一般建议你直接修改release/${操作系统类型}/${编译类型}/config.ini文件修改此文件一般不起作用,除非你运行MediaServer时使用-c参数指定到此文件。
#!!!! This is a sample configuration file intended to explain the specific meanings and functions of each item.
#!!!! During the `cmake` execution, this file is copied to the `release/${OS type}/${build type}` directory.
#!!!! This directory is also the target path where the MediaServer executable runs and looks for `config.ini` by default.
#!!!! Modifying this sample file (`conf/config.ini`) will not affect the MediaServer process while it runs.
#!!!! Although executing `cmake` overwrites the target config file, it is highly recommended to modify `release/${OS type}/${build type}/config.ini` directly.
#!!!! Changes made here will only take effect if you explicitly load this file using the `-c` parameter when starting the MediaServer.
[api]
#是否调试http api,启用调试后会打印每次http请求的内容和回复
# 是否调试http api,启用调试后会打印每次http请求的内容和回复
# Enable HTTP API debugging. When enabled, it logs the content and responses of each HTTP request.
apiDebug=1
#一些比较敏感的http api在访问时需要提供secret否则无权限调用
#如果是通过127.0.0.1访问,那么可以不提供secret
# 一些比较敏感的http api在访问时需要提供secret否则无权限调用
# 如果是通过127.0.0.1访问,那么可以不提供secret
# For some sensitive HTTP APIs, a secret must be provided when accessing them, otherwise the call is unauthorized.
# If accessed via 127.0.0.1, the secret does not need to be provided.
secret=035c73f7-bb6b-4889-a715-d9eb2d1925cc
#截图保存路径根目录截图通过http api(/index/api/getSnap)生成和获取
# 截图保存路径根目录截图通过http api(/index/api/getSnap)生成和获取
# Root directory for saving snapshots generated via the `/index/api/getSnap` API.
snapRoot=./www/snap/
#默认截图图片在启动FFmpeg截图后但是截图还未生成时可以返回默认的预设图片
# 默认截图图片在启动FFmpeg截图后但是截图还未生成时可以返回默认的预设图片
# Default placeholder image returned while FFmpeg is generating the actual snapshot.
defaultSnap=./www/logo.png
#downloadFile http接口可访问文件的根目录支持多个目录不同目录通过分号(;)分隔
# downloadFile http接口可访问文件的根目录支持多个目录不同目录通过分号(;)分隔
# Root directories accessible via the `downloadFile` API. Separate multiple directories with semicolons (;).
downloadRoot=./www
[ffmpeg]
#FFmpeg可执行程序路径,支持相对路径/绝对路径
# FFmpeg可执行程序路径,支持相对路径/绝对路径
# Path to the FFmpeg executable. Both relative and absolute paths are supported.
bin=/usr/bin/ffmpeg
#FFmpeg拉流再推流的命令模板通过该模板可以设置再编码的一些参数
# FFmpeg拉流再推流的命令模板通过该模板可以设置诸如编码等的一些参数
# FFmpeg command template for pulling and re-publishing streams (used to define re-encoding parameters).
cmd=%s -re -i %s -c:a aac -strict -2 -ar 44100 -ab 48k -c:v libx264 -f flv %s
#FFmpeg生成截图的命令可以通过修改该配置改变截图分辨率或质量
# FFmpeg生成截图的命令可以通过修改该配置改变截图分辨率或质量
# FFmpeg command template for generating snapshots. Modify this to change resolution or quality.
snap=%s -i %s -y -f mjpeg -frames:v 1 -an %s
#FFmpeg日志的路径如果置空则不生成FFmpeg日志
#可以为相对(相对于本可执行程序目录)或绝对路径
# FFmpeg日志的路径如果置空则不生成FFmpeg日志
# 可以为相对(相对于本可执行程序目录)或绝对路径
# Path to the FFmpeg log file (relative or absolute). Leave empty to disable logging.
log=./ffmpeg/ffmpeg.log
# 自动重启的时间(秒), 默认为0, 也就是不自动重启. 主要是为了避免长时间ffmpeg拉流导致的不同步现象
# Automatic restart interval in seconds (0 to disable). Helps prevent A/V desync caused by prolonged FFmpeg stream pulling.
restart_sec=0
#转协议相关开关如果addStreamProxy api和on_publish hook回复未指定转协议参数则采用这些配置项
# 转协议相关开关如果addStreamProxy api和on_publish hook回复未指定转协议参数则采用这些配置项
# Protocol conversion default switches. Used if protocol conversions aren't specified via the `addStreamProxy` API or the `on_publish` webhook.
[protocol]
#转协议时,是否开启帧级时间戳覆盖
# 转协议时,是否开启帧级时间戳覆盖
# 0:采用源视频流绝对时间戳,不做任何改变
# 1:采用zlmediakit接收数据时的系统时间戳(有平滑处理)
# 2:采用源视频流时间戳相对时间戳(增长量),有做时间戳跳跃和回退矫正
# Frame-level timestamp override mode during protocol conversion:
# - 0: Use absolute timestamp from the source (no modification).
# - 1: Use ZLMediaKit system timestamp upon data reception (with smoothing).
# - 2: Use relative timestamp increments, with correction for jumps and backwards drifts.
modify_stamp=2
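Mode 2 above (relative timestamps with jump/rollback correction) can be sketched roughly as follows. This is a simplified guess at the behavior, assuming out-of-range deltas are simply dropped; it is not the actual ZLMediaKit smoothing code:

```c
#include <stdint.h>

typedef struct { int64_t last_in; int64_t out; int started; } stamp_ctx;

/* Feed an absolute input timestamp, return a monotonic relative one.
   Negative or implausibly large deltas (rollback / jump) contribute 0,
   so the output stream never regresses. */
static int64_t stamp_feed(stamp_ctx *c, int64_t in, int64_t max_delta) {
    if (!c->started) {     /* first sample anchors the relative clock at 0 */
        c->started = 1;
        c->last_in = in;
        c->out = 0;
        return 0;
    }
    int64_t delta = in - c->last_in;
    if (delta < 0 || delta > max_delta)
        delta = 0;         /* jump or rollback detected: ignore this delta */
    c->last_in = in;       /* re-anchor so the next delta is sane again */
    c->out += delta;
    return c->out;
}
```

This keeps downstream muxers fed with monotonic timestamps even when the source clock jumps, which is the failure mode modes 1 and 2 exist to absorb.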
#转协议是否开启音频
# 转协议是否开启音频
# Whether to enable audio output during protocol conversion.
enable_audio=1
#添加acc静音音频在关闭音频时此开关无效
# 添加AAC静音音频在关闭音频时此开关无效
# Whether to inject AAC silent audio (ignored if `enable_audio` is 0).
add_mute_audio=1
#无人观看时,是否直接关闭(而不是通过on_none_reader hook返回close)
#此配置置1时此流如果无人观看将不触发on_none_reader hook回调
#而是将直接关闭流
# 无人观看时,是否直接关闭(而不是通过on_none_reader hook返回close)
# 此配置置1时此流如果无人观看将不触发on_none_reader hook回调
# 而是将直接关闭流
# Whether to immediately close an unwatched stream directly instead of relying on the `on_none_reader` hook returning 'close'.
# If enabled (1), an unwatched stream is closed outright without triggering the hook callback.
auto_close=0
#推流断开后可以在超时时间内重新连接上继续推流,这样播放器会接着播放。
#置0关闭此特性(推流断开会导致立即断开播放器)
#此参数不应大于播放器超时时间;单位毫秒
# 推流断开后可以在超时时间内重新连接上继续推流,这样播放器会接着播放。
# 置0关闭此特性(推流断开会导致立即断开播放器)
# 此参数不应大于播放器超时时间;单位毫秒
# Defines a grace period (in milliseconds) allowing a disconnected publisher to reconnect and resume streaming.
# During this period, active player connections are maintained rather than dropped.
# Set to 0 to disable this feature, which means dropping a publisher will immediately disconnect all its current players.
# This value must not exceed the player's configured timeout.
continue_push_ms=15000
# 是否启用音频转码
# Whether to enable audio transcoding.
# 主要实现进出RTC音频流的自动转码代码实现详见 RtcMediaSource.h/cpp当前实现
# Mainly implements automatic transcoding of audio entering/leaving RTC streams; see RtcMediaSource.h/cpp for the current implementation.
@ -60,112 +101,204 @@ continue_push_ms=15000
# 此外音频转码正常都是用于webrtc的一般也会开启WEBRTC, 即-DENABLE_WEBRTC=1, 此前必须自己装好libsrtp库, 安装过程详见wiki
# Audio transcoding is normally used with WebRTC, so ENABLE_WEBRTC (-DENABLE_WEBRTC=1) is usually enabled as well; the libsrtp library must be installed first (see the wiki for instructions).
# audio_transcode配置项可通过配置文件hook来打开注意如果编译时没启用FFMPEG此选项会自动关闭使用此分支前得先确保启用FFMPEG
# The audio_transcode option can be enabled via the config file or hooks; note that it is automatically disabled if FFMPEG was not enabled at compile time, so make sure FFMPEG is enabled before using this branch.
audio_transcode=1
#平滑发送定时器间隔单位毫秒置0则关闭开启后影响cpu性能同时增加内存
#该配置开启后可以解决一些流发送不平滑导致zlmediakit转发也不平滑的问题
# 平滑发送定时器间隔单位毫秒置0则关闭开启后影响cpu性能同时增加内存
# 该配置开启后可以解决一些流发送不平滑导致zlmediakit转发也不平滑的问题
# Smooth sending timer interval in milliseconds (0 to disable). Enabling this increases CPU and memory usage.
# This solves the issue where unsteady upstream publishing causes ZLMediaKit's forwarding to also be unsteady.
paced_sender_ms=0
#是否开启转换为hls(mpegts)
# 是否开启转换为hls(mpegts)
# Whether to enable conversion to HLS (mpegts).
enable_hls=1
#是否开启转换为hls(fmp4)
# 是否开启转换为hls(fmp4)
# Whether to enable conversion to HLS (fmp4).
enable_hls_fmp4=0
#是否开启MP4录制
# 是否开启MP4录制
# Whether to enable MP4 recording.
enable_mp4=0
#是否开启转换为rtsp
# 是否开启转换为rtsp
# Whether to enable conversion to RTSP.
enable_rtsp=1
#是否开启转换为webrtc
# 是否开启转换为rtc
# Whether to enable conversion to WEBRTC.
enable_rtc=1
#是否开启转换为rtmp/flv
# 是否开启转换为rtmp/flv
# Whether to enable conversion to RTMP/FLV.
enable_rtmp=1
#是否开启转换为http-ts/ws-ts
# 是否开启转换为http-ts/ws-ts
# Whether to enable conversion to HTTP-TS/WS-TS.
enable_ts=1
#是否开启转换为http-fmp4/ws-fmp4
# 是否开启转换为http-fmp4/ws-fmp4
# Whether to enable conversion to HTTP-FMP4/WS-FMP4.
enable_fmp4=1
# 是否将mp4录制当做观看者
# Whether to treat MP4 recording tasks as active stream viewers.
mp4_as_player=0
# mp4切片大小单位秒
# Maximum duration of MP4 recording segments in seconds.
mp4_max_second=3600
# mp4录制保存路径
# Directory path for saving MP4 recordings.
mp4_save_path=./www
# hls录制保存路径
# Directory path for saving HLS recordings.
hls_save_path=./www
###### 以下是按需转协议的开关在测试ZLMediaKit的接收推流性能时请把下面开关置1
###### 对于不使用的协议,可以将开关设置为 1 以节省资源(虽然首个播放者体验稍差,但依然可以播放)。
###### 对于希望获得最佳用户体验的协议,请设置为 0首屏秒开且无花屏现象
###### On-demand protocol conversion switches. Set these to 1 during stream reception performance testing to save resources.
###### For unused protocols, setting them to 1 saves resources (with a slight startup delay for the first viewer).
###### For the best user experience (instant playback and no visual artifacts (glitches)), set them to 0.
# hls协议是否按需生成如果hls.segNum配置为0(意味着hls录制)那么hls将一直生成(不管此开关)
# Whether to generate HLS streams on demand. If `hls.segNum` is configured to 0 (implies HLS recording), HLS streams generate continuously regardless of this switch.
hls_demand=0
# rtsp[s]协议是否按需生成
# Whether to generate RTSP[S] streams on demand.
rtsp_demand=0
# rtc协议是否按需生成
# Whether to generate WEBRTC streams on demand.
rtc_demand=0
# rtmp[s]、http[s]-flv、ws[s]-flv协议是否按需生成
# Whether to generate RTMP[S], HTTP[S]-FLV, and WS[S]-FLV streams on demand.
rtmp_demand=0
# http[s]-ts协议是否按需生成
# Whether to generate HTTP[S]-TS streams on demand.
ts_demand=0
# http[s]-fmp4、ws[s]-fmp4协议是否按需生成
# Whether to generate HTTP[S]-FMP4 and WS[S]-FMP4 streams on demand.
fmp4_demand=0
[general]
# 是否启用虚拟主机
# Whether to enable virtual hosting.
enableVhost=0
# 播放器或推流器在断开后会触发hook.on_flow_report事件(使用多少流量事件)
# flowThreshold参数控制触发hook.on_flow_report事件阈值使用流量超过该阈值后才触发单位KB
# When a player or publisher disconnects, it triggers the `hook.on_flow_report` event (an event reporting how much traffic was used).
# The `flowThreshold` parameter controls the threshold for triggering the `hook.on_flow_report` event; it is only triggered when the used traffic exceeds this threshold, in KB.
flowThreshold=1024
# 播放最多等待时间,单位毫秒
# 播放在播放某个流时,如果该流不存在,
# ZLMediaKit会最多让播放器等待maxStreamWaitMS毫秒
# 如果在这个时间内,该流注册成功,那么会立即返回播放器播放成功
# 否则返回播放器未找到该流,该机制的目的是可以先播放再推流
# Maximum playback wait time in milliseconds.
# When a requested stream does not exist, ZLMediaKit delays the player for up to `maxStreamWaitMS`.
# If the stream is successfully registered within this period, it immediately returns playback success.
# Otherwise, it returns 'stream not found'. This mechanism enables 'play before push' workflows.
maxStreamWaitMS=15000
# 某个流无人观看时触发hook.on_stream_none_reader事件的最大等待时间单位毫秒
# 在配合hook.on_stream_none_reader事件时可以做到无人观看自动停止拉流或停止接收推流
# The continuous unwatched duration (in ms) required to trigger the `hook.on_stream_none_reader` event.
# Combined with the `hook.on_stream_none_reader` event, this enables automatically stopping origin pulls or disconnecting publishers when a stream remains unwatched.
streamNoneReaderDelayMS=20000
# 拉流代理时如果断流再重连成功是否删除前一次的媒体流数据,如果删除将重新开始,
# 如果不删除将会接着上一次的数据继续写(录制hls/mp4时会继续在前一个文件后面写)
# Whether to flush cached media data upon successfully reconnecting after an origin pull proxy disconnection. If flushed, the stream restarts cleanly.
# If not flushed, the new data will append directly to the previous data (when recording HLS/MP4, it continues appending to the previous file).
resetWhenRePlay=1
# 合并写缓存大小(单位毫秒)合并写指服务器缓存一定的数据后才会一次性写入socket这样能提高性能但是会提高延时
# 开启后会同时关闭TCP_NODELAY并开启MSG_MORE
# Write coalescing cache duration in ms. The server caches data up to this interval before writing to the socket in bulk, improving performance at the cost of slight latency.
# Enabling this disables `TCP_NODELAY` and enables `MSG_MORE`.
mergeWriteMS=0
# 服务器唯一id用于触发hook时区别是哪台服务器
# Unique server ID, used to identify which server triggered a hook.
mediaServerId=your_server_id
# 最多等待未初始化的Track时间单位毫秒超时之后会忽略未初始化的Track
# Maximum wait time (in ms) for uninitialized Tracks. After the timeout, any uninitialized Tracks will be ignored.
wait_track_ready_ms=10000
# 最多等待音频Track收到数据时间单位毫秒超时且完全没收到音频数据忽略音频Track
# 加快某些带封装的流metadata说明有音频但是实际上没有的流ready时间比如很多厂商的GB28181 PS
# Maximum wait time (in ms) before an audio track receives its first data packet. If it times out and absolutely no audio data has been received, the audio Track is ignored.
# This speeds up the ready time for certain packaged streams whose metadata falsely claims to include audio, but actually do not (e.g., GB28181 PS).
wait_audio_track_data_ms=1000
# 如果流只有单Track最多等待若干毫秒超时后未收到其他Track的数据则认为是单Track
# 如果协议元数据有声明特定track数那么无此等待时间
# Maximum wait time (in ms) for additional tracks if a stream currently has only one.
# If no data from other tracks is received within this timeout, it is considered a single-track stream.
# This delay is bypassed if protocol metadata explicitly declares the track count.
wait_add_track_ms=3000
# 如果track未就绪我们先缓存帧数据但是有最大个数限制防止内存溢出
# If a track is not ready, we first cache the frame data, but there is a maximum count limit to prevent memory overflow.
unready_frame_cache=100
# 是否启用观看人数变化事件广播置1则启用置0则关闭
# Whether to enable broadcasting of viewership change events. Set to 1 to enable, set to 0 to disable.
broadcast_player_count_changed=0
# 绑定的本地网卡ip
# Bound local network interface IP address.
listen_ip=::
[hls]
# hls写文件的buf大小调整参数可以提高文件io性能
# Buffer size used when writing HLS segment files. Increasing this value can improve disk I/O performance.
fileBufSize=65536
# hls最大切片时间
# Target maximum duration of a single HLS segment.
segDur=2
# m3u8索引中,hls保留切片个数(实际保留切片个数+segRetain个)
# 如果设置为0则不删除切片且m3u8文件全量记录切片列表
# Number of HLS segments retained within the m3u8 playlist index (actual chunks kept = this value + `segRetain`).
# Set to 0 to retain all segments and record the full segment list in the m3u8 file.
segNum=3
# HLS切片延迟个数大于0将生成hls_delay.m3u8文件0则不生成
# The segment delay count for HLS. If greater than 0, an `hls_delay.m3u8` variant playlist is generated; if 0, it will not be generated.
segDelay=0
# HLS切片从m3u8文件中移除后继续保留在磁盘上的个数
# Number of outdated HLS segments to keep on disk after removal from the m3u8 playlist.
segRetain=5
# 是否广播 hls切片(ts/fmp4)完成通知(on_record_ts)
# Whether to broadcast HLS segment (TS/FMP4) completion notifications via `on_record_ts`.
broadcastRecordTs=0
# 直播hls文件删除延时单位秒issue: #913
# Delay in seconds before deleting expired live HLS segments. Refer to issue: #913.
deleteDelaySec=10
# 此选项开启后m3u8文件还是表现为直播但是切片文件会被全部保留为点播用
# segDur设置为0或segKeep设置为1的情况下每个切片文件夹下会生成一个vod.m3u8文件用于点播该时间段的录像
# When enabled, the `m3u8` playlist functions as live media, but segment chunks are permanently preserved in storage for Video On Demand (VOD).
# If either `segKeep` is 1 or `segDur` is 0, a `vod.m3u8` playlist is also generated in each segment's folder for VOD playback of that specific recorded period.
segKeep=0
# 如果设置为1则第一个切片长度强制设置为1个GOP。当GOP小于segDur可以提高首屏速度
# If set to 1, the length of the first segment is forcibly set to exactly 1 GOP.
# When the GOP is smaller than `segDur`, this can improve the initial startup (instant playback) speed.
fastRegister=0
# 转码成opus音频时的比特率
# Bitrate used when transcoding audio to Opus.
opusBitrate=64000
# Bitrate used when transcoding audio to AAC.
aacBitrate=64000
[hook]
# 是否启用hook事件启用后推拉流都将进行鉴权
# Whether to enable webhook events. When enabled, pushing and pulling streams requires authentication.
enable=0
# 播放器或推流器使用流量事件,置空则关闭
# Traffic usage reporting event for players and publishers. Leave empty to disable.
on_flow_report=
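# A hook value is simply an HTTP(S) URL. A hypothetical example (the host, port, and path below are illustrative; point them at your own hook handler service):
# on_flow_report=https://127.0.0.1/index/hook/on_flow_report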
# 访问http文件鉴权事件置空则关闭鉴权
# HTTP file access authentication event. Leave empty to disable.
on_http_access=
# 播放鉴权事件,置空则关闭鉴权
# Playback authentication event. Leave empty to disable.
on_play=
# 推流鉴权事件,置空则关闭鉴权
# Publishing authentication event. Leave empty to disable.
on_publish=
# 录制mp4切片完成事件
# MP4 segment recording completion event.
on_record_mp4=
# 录制 hls ts(或fmp4) 切片完成事件
# HLS TS (or fmp4) segment recording completion event.
on_record_ts=
# rtsp播放鉴权事件此事件中比对rtsp的用户名密码
# RTSP playback authentication event (used to verify RTSP username and password).
on_rtsp_auth=
# rtsp播放是否开启专属鉴权事件置空则关闭rtsp鉴权。rtsp播放鉴权还支持url方式鉴权
# 建议开发者统一采用url参数方式鉴权rtsp用户名密码鉴权一般在设备上用的比较多
# 开启rtsp专属鉴权后将不再触发on_play鉴权事件
# Whether to enable a dedicated RTSP realm authentication event (leave empty to disable; URL-based auth remains supported).
# We recommend standardizing on URL parameters; RTSP username/password auth is mostly for hardware devices.
# Enabling this bypasses the standard `on_play` webhook.
on_rtsp_realm=
# 远程telnet调试鉴权事件
# Remote telnet debugging authentication event.
on_shell_login=
# 直播流注册或注销事件
# Live stream registration or unregistration event.
on_stream_changed=
# 过滤on_stream_changed hook的协议类型可以选择只监听某些感兴趣的协议置空则不过滤协议
# Filter the protocol types for the `on_stream_changed` hook to listen only to specific protocols. Leave empty to disable filtering.
stream_changed_schemas=rtsp/rtmp/fmp4/ts/hls/hls.fmp4
# 无人观看流事件通过该事件可以选择是否关闭无人观看的流。配合general.streamNoneReaderDelayMS选项一起使用
# Triggered when a stream has no viewers. Combined with `general.streamNoneReaderDelayMS`, this enables closing unwatched streams.
on_stream_none_reader=
# 播放时未找到流事件通过配合hook.on_stream_none_reader事件可以完成按需拉流
# Triggered when a requested stream is not found. Combined with `hook.on_stream_none_reader`, this enables on-demand origin pulling.
on_stream_not_found=
# 服务器启动报告,可以用于服务器的崩溃重启事件监听
# Server startup report. Useful for monitoring server crashes and restarts.
on_server_started=
# 服务器退出报告,当服务器正常退出时触发
# Server exit report, triggered when the server shuts down normally.
on_server_exited=
# server保活上报
# Server keep-alive reporting event.
on_server_keepalive=
# 发送rtp(startSendRtp)被动关闭时回调
# Callback triggered when RTP sending (`startSendRtp`) is passively closed.
on_send_rtp_stopped=
# rtp server 超时未收到数据
# RTP server timeout event due to not receiving data.
on_rtp_server_timeout=
# hook api最大等待回复时间单位秒
# Maximum wait time in seconds for Webhook API responses.
timeoutSec=10
# keepalive hook触发间隔,单位秒float类型
# Interval in seconds (float) for triggering the keep-alive webhook.
alive_interval=10.0
# hook通知失败重试次数,正整数。为0不重试1时重试一次以此类推
# Webhook notification failure retry attempts. Must be a non-negative integer (0 to disable).
retry=1
# hook通知失败重试延时单位秒float型
# Delay in seconds (float) between webhook retry attempts.
retry_delay=3.0
[cluster]
# 设置源站拉流url模板, 格式跟printf类似第一个%s指定app,第二个%s指定stream_id,
# 开启集群模式后on_stream_not_found和on_stream_none_reader hook将无效.
# 溯源模式支持以下类型:
# rtmp方式: rtmp://127.0.0.1:1935/%s/%s
# rtsp方式: rtsp://127.0.0.1:554/%s/%s
# hls方式: http://127.0.0.1:80/%s/%s/hls.m3u8
# http-ts方式: http://127.0.0.1:80/%s/%s.live.ts
# 支持多个源站,不同源站通过分号(;)分隔
# Origin pull URL template (printf style: first `%s` is app, second `%s` is stream_id).
# When cluster mode is enabled, `on_stream_not_found` and `on_stream_none_reader` webhooks are disabled.
# Supported origin pull protocols:
# RTMP mode: rtmp://127.0.0.1:1935/%s/%s
# RTSP mode: rtsp://127.0.0.1:554/%s/%s
# HLS mode: http://127.0.0.1:80/%s/%s/hls.m3u8
# HTTP-TS mode: http://127.0.0.1:80/%s/%s.live.ts
# Separate multiple origin servers with semicolons (;).
origin_url=
# 溯源总超时时长单位秒float型假如源站有3个那么单次溯源超时时间为timeout_sec除以3
# 单次溯源超时时间不要超过general.maxStreamWaitMS配置
# Total origin pull timeout in seconds (float).
# The single origin attempt timeout (total timeout divided by the number of origins) should not exceed `general.maxStreamWaitMS`.
timeout_sec=15
# 溯源失败尝试次数,-1时永久尝试
# Failure retry attempts for origin pulling (-1 for infinite retries).
retry_count=3
[http]
# http服务器字符编码集
# HTTP server character encoding.
charSet=utf-8
# http链接超时时间
# HTTP keep-alive timeout in seconds.
keepAliveSecond=30
# http请求体最大字节数如果post的body太大则不适合缓存body在内存
# Maximum HTTP request body size in bytes. POST bodies larger than this are unsuitable for caching in memory.
maxReqSize=40960
# 404网页内容用户可以自定义404网页
# Custom 404 page content. Users can customize the 404 response page here.
#notFound=<html><head><title>404 Not Found</title></head><body bgcolor="white"><center><h1>您访问的资源不存在!</h1></center><hr><center>ZLMediaKit-4.0</center></body></html>
# http服务器监听端口
# HTTP server listening port.
port=80
# http文件服务器根目录
# 可以为相对(相对于本可执行程序目录)或绝对路径
# HTTP file server root directory (relative or absolute path).
rootPath=./www
# http文件服务器读文件缓存大小单位BYTE调整该参数可以优化文件io性能
# HTTP file server read cache size in bytes. Tweak to optimize file I/O performance.
sendBufSize=65536
# https服务器监听端口
# HTTPS server listening port.
sslport=443
# 是否显示文件夹菜单,开启后可以浏览文件夹
# Whether to enable directory browsing menus.
dirMenu=1
# 虚拟目录, 虚拟目录名和文件路径使用","隔开,多个配置路径间用";"隔开
# 例如赋值为 app_a,/path/to/a;app_b,/path/to/b 那么
# 访问 http://127.0.0.1/app_a/file_a 对应的文件路径为 /path/to/a/file_a
# 访问 http://127.0.0.1/app_b/file_b 对应的文件路径为 /path/to/b/file_b
# 访问其他http路径,对应的文件路径还是在rootPath内
# Virtual directory mappings. Format: virtual_name,path;virtual_name,path (name and file path separated by ",", multiple mappings separated by ";").
# For example, set `app_a,/path/to/a;app_b,/path/to/b` then:
# Accessing `http://127.0.0.1/app_a/file_a` maps to `/path/to/a/file_a`.
# Accessing `http://127.0.0.1/app_b/file_b` maps to `/path/to/b/file_b`, while other HTTP paths still map to files under `rootPath`.
virtualPath=
# 禁止后缀的文件使用mmap缓存使用“,”隔开
# 例如赋值为 .mp4,.flv
# 那么访问后缀为.mp4与.flv 的文件不缓存
# Disables `mmap` caching for specific file extensions. Use `,` to separate multiple extensions.
# Example: `.mp4,.flv` means files with these extensions bypass the `mmap` cache.
forbidCacheSuffix=
# 可以把http代理前真实客户端ip放在http头中https://github.com/ZLMediaKit/ZLMediaKit/issues/1388
# 切勿暴露此key否则可能导致伪造客户端ip
# Header name to trust for extracting the real client IP from an HTTP proxy request header. See: https://github.com/ZLMediaKit/ZLMediaKit/issues/1388
# Do not expose this key, as it may lead to forged client IPs.
forwarded_ip_header=
# 默认允许所有跨域请求
# Whether to allow all cross-origin requests by default (sets generic CORS headers).
allow_cross_domains=1
# 允许访问http api和http文件索引的ip地址范围白名单置空情况下不做限制
# IP whitelist ranges allowed to access the HTTP API and file indexes. Leave empty to allow any IP without restrictions.
allow_ip_range=::1,127.0.0.1,172.16.0.0-172.31.255.255,192.168.0.0-192.168.255.255,10.0.0.0-10.255.255.255
[multicast]
# rtp组播截止组播ip地址
# Maximum IP address for the multicast pool.
addrMax=239.255.255.255
# rtp组播起始组播ip地址
# Minimum IP address for the multicast pool.
addrMin=239.0.0.0
# 组播udp ttl
# TTL (Time to Live) for multicast UDP packets.
udpTTL=64
[record]
# mp4录制或mp4点播的应用名通过限制应用名可以防止随意点播
# 点播的文件必须放置在此文件夹下
# Application name for MP4 recording/VOD. Restricting this prevents unauthorized VOD access.
# VOD files must be placed within this specific folder.
appName=record
# mp4录制写文件缓存单位BYTE,调整参数可以提高文件io性能
# MP4 recording write cache size in bytes. Tweak to optimize file I/O performance.
fileBufSize=65536
# mp4点播每次流化数据量单位毫秒
# 减少该值可以让点播数据发送量更平滑增大该值则更节省cpu资源
# Duration (in ms) of MP4 data streamed per VOD transmission block.
# Decreasing this value smooths transmission; increasing it saves CPU resources.
sampleMS=500
# mp4录制完成后是否进行二次关键帧索引写入头部
# Whether to write a secondary keyframe index into the MP4 header after recording completes (fast start).
fastStart=0
# MP4点播(rtsp/rtmp/http-flv/ws-flv)是否循环播放文件
# Controls whether MP4 VOD playback (rtsp/rtmp/http-flv/ws-flv) loops the file when it reaches the end.
fileRepeat=0
# MP4录制写文件格式是否采用fmp4启用的话断电未完成录制的文件也能正常打开
# Whether to use the fmp4 format for MP4 recording. Enables normal playback of interrupted recordings (e.g., due to power loss).
enableFmp4=0
[rtmp]
# rtmp必须在此时间内完成握手否则服务器会断开链接单位秒
# RTMP handshake timeout in seconds. The server drops the connection if not completed.
handshakeSecond=15
# rtmp超时时间如果该时间内未收到客户端的数据
# 或者tcp发送缓存超过这个时间则会断开连接单位秒
# RTMP keep-alive timeout in seconds. Connections drop if no data from the client is received,
# or if the TCP send buffer stall exceeds this duration.
keepAliveSecond=15
# rtmp服务器监听端口
# RTMP server listening port.
port=1935
# rtmps服务器监听地址
# RTMPS server listening port.
sslport=0
# rtmp是否直接代理模式
# Whether to enable direct proxy mode for RTMP.
directProxy=1
# h265/opus/vp8/vp9/av1 rtmp打包采用增强型rtmp标准还是国内拓展标准
# Whether RTMP packaging for H265/Opus/VP8/VP9/AV1 uses the Enhanced RTMP standard (1) or the domestic extended standard (0).
enhanced=1
[rtp]
# 音频mtu大小该参数限制rtp最大字节数推荐不要超过1400
# 加大该值会明显增加直播延时
# Audio MTU size (restricts max RTP payload in bytes). We recommend keeping this <= 1400.
# Increasing this value significantly increases live streaming latency.
audioMtuSize=600
# 视频mtu大小该参数限制rtp最大字节数推荐不要超过1400
# Video MTU size (restricts max RTP payload in bytes). We recommend keeping this <= 1400.
videoMtuSize=1400
# rtp包最大长度限制单位KB,主要用于识别TCP上下文破坏时获取到错误的rtp
# Max RTP packet length limit in KB. Mainly used to detect invalid RTP packets when the TCP stream context is corrupted.
rtpMaxSize=10
# rtp 打包时低延迟开关默认关闭为0h264存在一帧多个sliceNAL的情况在这种情况下如果开启可能会导致画面花屏
# Low-latency mode for RTP packaging (disabled by default). Enabling this for H.264 video with multiple slices per frame may cause visual artifacts (glitches).
lowLatency=0
# H264 rtp打包模式是否采用stap-a模式(为了在老版本浏览器上兼容webrtc)还是采用Single NAL unit packet per H.264 模式
# 有些老的rtsp设备不支持stap-a rtp设置此配置为0可提高兼容性
# Whether H.264 RTP packaging uses the `stap-a` mode (for older WebRTC browser compatibility) or the `Single NAL unit packet per H.264` mode.
# Set this to 0 to improve compatibility with legacy RTSP devices that do not support `stap-a`.
h264_stap_a=1
[rtp_proxy]
# 导出调试数据(包括rtp/ps/h264)至该目录,置空则关闭数据导出
# Directory for exporting debugging data (rtp/ps/h264). Leave empty to disable.
dumpDir=
# udp和tcp代理服务器支持rtp(必须是ts或ps类型)代理
# UDP/TCP proxy server listening port. Supports RTP proxying (must be TS or PS).
port=10000
# rtp超时时间单位秒
# RTP timeout in seconds.
timeoutSec=15
# 随机端口范围最少确保36个端口
# 该范围同时限制rtsp服务器udp端口范围
# Random port range (ensure at least 36 ports).
# This also restricts the UDP port range for the RTSP server.
port_range=30000-35000
# rtp h264 负载的pt
# RTP payload type (PT) for H.264.
h264_pt=98
# rtp h265 负载的pt
# RTP payload type (PT) for H.265.
h265_pt=99
# rtp ps 负载的pt
# RTP payload type (PT) for PS.
ps_pt=96
# rtp opus 负载的pt
# RTP payload type (PT) for Opus.
opus_pt=100
# startSendRtp、startRecord相关功能是否提前开启gop缓存优化级联秒开体验默认开启, 并缓存1个GOP
# 如果不调用startSendRtp、startRecord后相关接口可以置0节省内存如果缓存多个gop可以加大该参数
# Whether to pre-enable GOP caching for `startSendRtp` and `startRecord` to optimize instant playback for cascaded streams. Enabled by default, caching 1 GOP.
# If these functions are unused, set to 0 to save memory; to cache multiple GOPs, increase this value.
gop_cache=1
# 国标发送g711 rtp 打包时每个包的语音时长是多少默认是100 ms范围为20~180ms (gb28181-2016c.2.4规定)
# 最好为20 的倍数程序自动向20的倍数取整
# Audio duration (in ms) per packet when packaging G.711 RTP for GB standards. Defaults to 100 ms (range: 20~180ms per gb28181-2016, c.2.4).
# A multiple of 20 is recommended; the program automatically rounds the value to a multiple of 20.
rtp_g711_dur_ms=100
# udp接收数据socket buffer大小配置
# 4*1024*1024=4196304
# Socket buffer size for receiving UDP data.
udp_recv_socket_buffer=4194304
# ps/ts解析后是否等待下一帧以判断本帧是否完整开启后提高兼容性但是可能增加延时
# Whether to wait for the next frame after parsing PS/TS to verify frame completeness. Improves compatibility but may increase latency.
merge_frame=1
[rtc]
# webrtc 信令服务器端口
# WebRTC signaling server port.
signalingPort=3000
signalingSslPort=3001
# STUN/TURN服务器端口
# STUN/TURN server port.
icePort=3478
iceTcpPort=3478
# STUN/TURN端口是否使能TURN服务
# Whether to enable TURN services on the STUN/TURN ports.
enableTurn=1
# ICE传输策略0=不限制(默认)1=仅支持Relay转发2=仅支持P2P直连
# ICE transport policy: 0 (No restrictions, default), 1 (Relay forwarding only), 2 (P2P direct connection only).
iceTransportPolicy=0
# STUN/TURN 服务Ice密码
# ICE credentials for STUN/TURN services.
iceUfrag=ZLMediaKit
icePwd=ZLMediaKit
# webrtc datachannel是否回显数据测试用
# Whether WebRTC Datachannel echoes received data (used for testing).
datachannel_echo=1
# Maximum number of STUN request retries.
max_stun_retry=7
# TURN服务分配端口池
# Port range allocated for TURN services.
port_range=49152-65535
# rtc播放推流、播放超时时间
# Timeout in seconds for RTC stream publishing and playback.
timeoutSec=15
# 本机对rtc客户端的可见ip作为服务器时一般为公网ip可有多个用','分开当置空时会自动获取网卡ip
# 同时支持环境变量,以$开头,如"$EXTERN_IP"; 请参考https://github.com/ZLMediaKit/ZLMediaKit/pull/1786
# IP address(es) visible to RTC clients (typically public IPs). Separate multiple IPs with commas (',').
# Leave empty to auto-acquire network card IPs. Also supports env vars starting with `$`, e.g., `"$EXTERN_IP"`; please refer to: https://github.com/ZLMediaKit/ZLMediaKit/pull/1786
externIP=
# 当指定了interfaces,ICE服务器会使用指定网卡bind socket
# 以解决公网IP使用弹性公网IP配置实现(部署机器无法bind该公网ip的问题)
# 支持环境变量,以$开头,如"$PRIVATE_IP"
# If specified, the ICE server binds the socket to this specific network card.
# Solves binding issues on machines with Elastic Public IPs that cannot directly bind the public IP.
# Supports environment variables starting with `$`, e.g., `"$PRIVATE_IP"`.
interfaces=
# rtc udp服务器监听端口号所有rtc客户端将通过该端口传输stun/dtls/srtp/srtcp数据
# 该端口是多线程的,同时支持客户端网络切换导致的连接迁移
# 需要注意的是如果服务器在nat内需要做端口映射时必须确保外网映射端口跟该端口一致
# RTC UDP server listening port. Handles STUN/DTLS/SRTP/SRTCP data for all RTC clients.
# Multi-threaded and supports connection migration during client network switching.
# Note: For deployment behind a NAT, the external mapped port MUST match this port exactly.
port=8000
# rtc tcp服务器监听端口号在udp 不通的情况下会使用tcp传输数据
# 该端口是多线程的,同时支持客户端网络切换导致的连接迁移
# 需要注意的是如果服务器在nat内需要做端口映射时必须确保外网映射端口跟该端口一致
# RTC TCP server listening port. Used as a fallback if UDP is unreachable.
# Multi-threaded and supports connection migration during client network switching.
# Note: For deployment behind a NAT, the external mapped port MUST match this port exactly.
tcpPort=8000
# 设置remb比特率非0时关闭twcc并开启remb。该设置在rtc推流时有效可以控制推流画质
# 目前已经实现twcc自动调整码率关闭remb根据真实网络状况调整码率
# REMB bitrate threshold. Non-zero values disable TWCC and enable REMB (effective for RTC publishing to control picture quality).
# TWCC-based automatic bitrate adjustment is already implemented; keep REMB disabled (0) to let the bitrate adapt to actual network conditions.
rembBitRate=0
# rtc支持的音频codec类型,在前面的优先级更高
# 以下范例为所有支持的音频codec
# Supported RTC audio codecs (listed in descending priority).
preferredCodecA=PCMA,PCMU,opus,mpeg4-generic
# rtc支持的视频codec类型,在前面的优先级更高
# 以下范例为所有支持的视频codec
# Supported RTC video codecs (listed in descending priority).
preferredCodecV=H264,H265,AV1,VP9,VP8
# 是否开启RTC协议的G711转码开启后
# 能将传给rtc的g711音频转成opus
# 将由rtc流入g711音频转成aac并转给其他协议流
# Whether to enable G.711 transcoding for the RTC protocol. When enabled,
# G.711 audio delivered to RTC is transcoded to Opus,
# and G.711 audio flowing in from RTC is transcoded to AAC and forwarded to the other protocol streams.
transcodeG711=0
# webrtc比特率设置
# WebRTC bitrate settings.
start_bitrate=0
max_bitrate=0
min_bitrate=0
# nack接收端, rtp发送端zlm发送rtc流
# rtp重发缓存列队最大长度单位毫秒
# NACK receiver / RTP sender queue (ZLM sending RTC streams).
# Maximum length of the RTP retransmission cache queue in ms.
maxRtpCacheMS=5000
# rtp重发缓存列队最大长度单位个数
# Maximum length of the RTP retransmission cache queue in packet count.
maxRtpCacheSize=2048
# nack发送端rtp接收端zlm接收rtc推流
# 最大保留的rtp丢包状态个数
# NACK sender / RTP receiver queue (ZLM receiving RTC streams).
# Maximum number of retained RTP packet-loss states.
nackMaxSize=2048
# rtp丢包状态最长保留时间
# Maximum retention time for RTP packet-loss states in ms.
nackMaxMS=3000
# nack最多请求重传次数
# Maximum number of NACK retransmission requests.
nackMaxCount=15
# nack重传频率rtt的倍数
# NACK retransmission frequency (multiple of RTT).
nackIntervalRatio=1.0
# 视频nack包中rtp个数减小此值可以让nack包响应更灵敏
# Number of RTP packets in a video NACK packet. Lower values make NACK responses more sensitive.
nackRtpSize=8
# 音频nack包中rtp个数减小此值可以让nack包响应更灵敏
# Number of RTP packets in an audio NACK packet. Lower values make NACK responses more sensitive.
nackAudioRtpSize=4
# 是否尝试过滤 b帧
# Whether to attempt filtering out B-frames.
bfilter=0
# 是否优先采用webrtc over tcp模式
# Whether to prioritize WebRTC over TCP mode.
preferred_tcp=0
[srt]
# srt播放推流、播放超时时间,单位秒
# Timeout in seconds for SRT stream publishing and playback.
timeoutSec=5
# srt udp服务器监听端口号所有srt客户端将通过该端口传输srt数据
# 该端口是多线程的,同时支持客户端网络切换导致的连接迁移
# SRT UDP server listening port. Handles SRT data for all clients.
# Multi-threaded and supports connection migration during client network switching.
port=9000
# srt 协议中延迟缓存的估算参数在握手阶段估算rtt ,然后latencyMul*rtt 为最大缓存时长,此参数越大,表示等待重传的时长就越大
# SRT protocol delay buffer estimation parameter. The RTT is estimated during the handshake, and latencyMul * RTT sets the maximum buffer duration; larger values mean longer waits for retransmitted packets.
latencyMul=4
# 包缓存的大小
# Packet buffer size.
pktBufSize=8192
# srt udp服务器的密码,为空表示不加密
# SRT UDP server password (leave empty to disable encryption).
passPhrase=
[rtsp]
# rtsp专有鉴权方式是采用base64还是md5方式
# Whether RTSP dedicated authentication uses base64 or md5.
authBasic=0
# rtsp拉流、推流代理是否是直接代理模式
# 直接代理后支持任意编码格式但是会导致GOP缓存无法定位到I帧可能会导致开播花屏
# 并且如果是tcp方式拉流如果rtp大于mtu会导致无法使用udp方式代理
# 假定您的拉流源地址不是264或265或AAC那么你可以使用直接代理的方式来支持rtsp代理
# 如果你是rtsp推拉流但是webrtc播放也建议关闭直接代理模式
# 因为直接代理时rtp中可能没有sps pps,会导致webrtc无法播放; 另外webrtc也不支持Single NAL Unit Packets类型rtp
# 默认开启rtsp直接代理rtmp由于没有这些问题是强制开启直接代理的
# Whether to enable direct proxy mode for RTSP pulling/publishing.
# Direct proxying supports any codec but bypasses GOP cache I-frame detection, potentially causing initial visual artifacts.
# Furthermore, if pulling via TCP, an RTP payload exceeding the MTU will make UDP proxying unusable.
# Assuming your pull source format is not H264, H265, or AAC, you can use direct proxy mode to support RTSP proxying.
# If you are pulling/pushing via RTSP but playing via WebRTC, it is also recommended to disable direct proxy mode;
# this is because direct proxies may drop SPS/PPS (preventing WebRTC playback), and WebRTC does not support `Single NAL Unit Packets` RTP.
# RTSP direct proxy is enabled by default. RTMP natively enforces direct proxying because it lacks these issues.
directProxy=1
# rtsp必须在此时间内完成握手否则服务器会断开链接单位秒
# RTSP handshake timeout in seconds. The server drops the connection if not completed.
handshakeSecond=15
# rtsp超时时间如果该时间内未收到客户端的数据
# 或者tcp发送缓存超过这个时间则会断开连接单位秒
# RTSP keep-alive timeout in seconds. Connections drop if no data is received or if the TCP send buffer stalls for this duration.
keepAliveSecond=15
# rtsp服务器监听地址
# RTSP server listening port.
port=554
# rtsps服务器监听地址
# RTSPS server listening port.
sslport=0
# rtsp 转发是否使用低延迟模式当开启时不会缓存rtp包来提高并发可以降低一帧的延迟
# Whether RTSP forwarding uses low-latency mode. Skips RTP packet caching to improve concurrency and reduce latency by one frame.
lowLatency=0
# 强制协商rtp传输方式 (0:TCP,1:UDP,2:MULTICAST,-1:不限制)
# 当客户端发起RTSP SETUP的时候如果传输类型和此配置不一致则返回461 Unsupported transport
# 迫使客户端重新SETUP并切换到对应协议。目前支持FFMPEG和VLC
# Force RTP transport negotiation type (0: TCP, 1: UDP, 2: MULTICAST, -1: no restriction).
# When the client initiates RTSP SETUP, if the transport type conflicts with this configuration, it returns `461 Unsupported transport`.
# This forces the client to re-SETUP and switch to the corresponding protocol. Currently supports FFmpeg and VLC.
rtpTransportType=-1
[shell]
#调试telnet服务器接受最大bufffer大小
# 调试telnet服务器接受最大buffer大小
# Maximum buffer size accepted by the debugging Telnet server.
maxReqSize=1024
#调试telnet服务器监听端口
# 调试telnet服务器监听端口
# Debugging Telnet server listening port.
port=0
# onvif搜索用
# Used for ONVIF search.
[onvif]
port=3702

31
conf/readme_en.md Normal file
View File

@ -0,0 +1,31 @@
## Key parameters that affect performance in the configuration file
### 1. Protocol enable flags (e.g., protocol.enable_hls, protocol.enable_rtsp)
Controls the protocol conversion flags. Disabling unnecessary protocols will save CPU and memory resources.
### 2. On-demand protocol flags (e.g., protocol.hls_demand, protocol.rtsp_demand)
Controls on-demand protocol generation. When both this and the specific protocol are enabled, it saves CPU and memory when there are no active viewers. However, the first viewer will lose the instant playback capability, impacting the initial experience.
### 3. protocol.paced_sender_ms
The interval for the smooth sending timer. This helps address playback stuttering caused by irregular data transmission from the source. When enabled, the timer uses data timestamps to pace the transmission, improving the viewing experience.
However, this increases CPU and memory consumption. A shorter timer interval results in higher CPU usage but better smoothness. The recommended interval is between 30 and 100 milliseconds. For optimal results, use this feature in conjunction with setting `protocol.modify_stamp` to 2 (which suppresses timestamp jumps).
### 4. general.mergeWriteMS
Enables write coalescing, which reduces the number of system calls and the frequency of data sharing between threads during transmission. This significantly boosts forwarding performance but comes at the cost of increased playback latency and reduced transmission smoothness.
### 5. rtp_proxy.gop_cache
Enables the GOP (Group of Pictures) caching feature for the `startSendRtp` cascaded interface, designed to allow instant playback for cascading setups (e.g., GB28181). Note that this setting does not affect the instant playback capability of ZLMediaKit's external live streaming services.
Enabling this option increases memory usage but has a minimal impact on the CPU. We recommend disabling it if you don't use the `startSendRtp` interface.
### 6. hls.fileBufSize
Tuning this parameter can improve the disk I/O performance when writing HLS streams.
### 7. record.fileBufSize
Tuning this parameter can improve the disk I/O performance when recording MP4 files.
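The parameters above map onto `conf/config.ini` entries. A minimal sketch of the corresponding sections; the key names come from the paths listed above, while the values are purely illustrative and not tuning recommendations:

```ini
[protocol]
# Disable protocol conversions you do not serve to save CPU and memory.
enable_hls=0
# Generate RTSP on demand; the first viewer loses instant playback.
rtsp_demand=1
# Smooth-sending timer interval; 30-100 ms is the recommended range.
paced_sender_ms=50
# Suppress timestamp jumps; pairs well with paced sending.
modify_stamp=2

[general]
# Write coalescing window: higher forwarding throughput, higher latency.
mergeWriteMS=300

[rtp_proxy]
# GOP cache for the startSendRtp cascaded interface; disable if unused.
gop_cache=0
```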

View File

@ -1,5 +1,5 @@
FROM ubuntu:20.04 AS build
ARG MODEL
FROM ubuntu:24.04 AS build
ARG MODEL=Release
#shell,rtmp,rtsp,rtsps,http,https,rtp
EXPOSE 1935/tcp
EXPOSE 554/tcp
@ -27,6 +27,7 @@ RUN apt-get update && \
libssl-dev \
gcc \
g++ \
python3-dev \
gdb && \
apt-get autoremove -y && \
apt-get clean -y && \
@ -41,17 +42,17 @@ WORKDIR /opt/media/ZLMediaKit/3rdpart
RUN wget https://github.com/cisco/libsrtp/archive/v2.3.0.tar.gz -O libsrtp-2.3.0.tar.gz && \
tar xfv libsrtp-2.3.0.tar.gz && \
mv libsrtp-2.3.0 libsrtp && \
cd libsrtp && ./configure --enable-openssl && make -j $(nproc) && make install
cd libsrtp && CFLAGS="-fcommon" ./configure --enable-openssl && make -j $(nproc) && make install
#RUN git submodule update --init --recursive && \
RUN mkdir -p build release/linux/${MODEL}/
WORKDIR /opt/media/ZLMediaKit/build
RUN cmake -DCMAKE_BUILD_TYPE=${MODEL} -DENABLE_WEBRTC=true -DENABLE_FFMPEG=true -DENABLE_TESTS=false -DENABLE_API=false .. && \
RUN cmake -DENABLE_PYTHON=true -DCMAKE_BUILD_TYPE=${MODEL} -DENABLE_WEBRTC=true -DENABLE_FFMPEG=true -DENABLE_TESTS=false -DENABLE_API=false .. && \
make -j $(nproc)
FROM ubuntu:20.04
ARG MODEL
FROM ubuntu:24.04
ARG MODEL=Release
# ADD sources.list /etc/apt/sources.list
@ -67,6 +68,10 @@ RUN apt-get update && \
ffmpeg \
gcc \
g++ \
python3 \
python3-dev \
python3-venv \
python3-pip \
gdb && \
apt-get autoremove -y && \
apt-get clean -y && \

View File

@ -413,6 +413,12 @@ Track::Ptr getTrackBySdp(const SdpTrack::Ptr &track) {
// If aac config information cannot be obtained from sdp, then it cannot be obtained from rtp either, so ignore this Track
return nullptr;
}
while (aac_cfg_str.size() < 4) {
aac_cfg_str = '0' + aac_cfg_str;
}
if (aac_cfg_str.size() > 4) {
aac_cfg_str = aac_cfg_str.substr(0, 4);
}
string aac_cfg;
for (size_t i = 0; i < aac_cfg_str.size() / 2; ++i) {
unsigned int cfg;

95
ext-codec/AV1.cpp Normal file
View File

@ -0,0 +1,95 @@
/*
* Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/ZLMediaKit/ZLMediaKit).
*
* Use of this source code is governed by MIT-like license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#include "AV1.h"
#include "AV1Rtp.h"
#include "VpxRtmp.h"
#include "Extension/Factory.h"
using namespace std;
using namespace toolkit;
namespace mediakit {
bool AV1Track::inputFrame(const Frame::Ptr &frame) {
char *dataPtr = frame->data() + frame->prefixSize();
if (0 == aom_av1_codec_configuration_record_init(&_context, dataPtr, frame->size() - frame->prefixSize())) {
_width = _context.width;
_height = _context.height;
//InfoL << _width << "x" << _height;
}
return VideoTrackImp::inputFrame(frame);
}
Track::Ptr AV1Track::clone() const {
return std::make_shared<AV1Track>(*this);
}
Buffer::Ptr AV1Track::getExtraData() const {
if (_context.bytes <= 0)
return nullptr;
auto ret = BufferRaw::create(4 + _context.bytes);
ret->setSize(aom_av1_codec_configuration_record_save(&_context, (uint8_t *)ret->data(), ret->getCapacity()));
return ret;
}
void AV1Track::setExtraData(const uint8_t *data, size_t size) {
if (aom_av1_codec_configuration_record_load(data, size, &_context) > 0) {
_width = _context.width;
_height = _context.height;
}
}
namespace {
CodecId getCodec() {
return CodecAV1;
}
Track::Ptr getTrackByCodecId(int sample_rate, int channels, int sample_bit) {
return std::make_shared<AV1Track>();
}
Track::Ptr getTrackBySdp(const SdpTrack::Ptr &track) {
return std::make_shared<AV1Track>();
}
RtpCodec::Ptr getRtpEncoderByCodecId(uint8_t pt) {
return std::make_shared<AV1RtpEncoder>();
}
RtpCodec::Ptr getRtpDecoderByCodecId() {
return std::make_shared<AV1RtpDecoder>();
}
RtmpCodec::Ptr getRtmpEncoderByTrack(const Track::Ptr &track) {
return std::make_shared<VpxRtmpEncoder>(track);
}
RtmpCodec::Ptr getRtmpDecoderByTrack(const Track::Ptr &track) {
return std::make_shared<VpxRtmpDecoder>(track);
}
Frame::Ptr getFrameFromPtr(const char *data, size_t bytes, uint64_t dts, uint64_t pts) {
return std::make_shared<AV1FrameNoCacheAble>((char *)data, bytes, dts, pts, 0);
}
} // namespace
CodecPlugin av1_plugin = { getCodec,
getTrackByCodecId,
getTrackBySdp,
getRtpEncoderByCodecId,
getRtpDecoderByCodecId,
getRtmpEncoderByTrack,
getRtmpDecoderByTrack,
getFrameFromPtr };
} // namespace mediakit

65
ext-codec/AV1.h Normal file
View File

@ -0,0 +1,65 @@
/*
* Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/ZLMediaKit/ZLMediaKit).
*
* Use of this source code is governed by MIT-like license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#ifndef ZLMEDIAKIT_AV1_H
#define ZLMEDIAKIT_AV1_H
#include "Extension/Frame.h"
#include "Extension/Track.h"
#include "aom-av1.h"
namespace mediakit {
template <typename Parent>
class AV1FrameHelper : public Parent {
public:
friend class FrameImp;
//friend class toolkit::ResourcePool_l<Av1FrameHelper>;
using Ptr = std::shared_ptr<AV1FrameHelper>;
template <typename... ARGS>
AV1FrameHelper(ARGS &&...args)
: Parent(std::forward<ARGS>(args)...) {
this->_codec_id = CodecAV1;
}
bool keyFrame() const override {
auto ptr = (uint8_t *) this->data() + this->prefixSize();
return (*ptr & 0x78) >> 3 == 1;
}
bool configFrame() const override { return false; }
bool dropAble() const override { return false; }
bool decodeAble() const override { return true; }
};
/// AV1 frame classes
using AV1Frame = AV1FrameHelper<FrameImp>;
using AV1FrameNoCacheAble = AV1FrameHelper<FrameFromPtr>;
/**
* AV1 video track
*/
class AV1Track : public VideoTrackImp {
public:
using Ptr = std::shared_ptr<AV1Track>;
AV1Track() : VideoTrackImp(CodecAV1) {}
Track::Ptr clone() const override;
bool inputFrame(const Frame::Ptr &frame) override;
toolkit::Buffer::Ptr getExtraData() const override;
void setExtraData(const uint8_t *data, size_t size) override;
protected:
aom_av1_t _context {};
};
} // namespace mediakit
#endif

582
ext-codec/AV1Rtp.cpp Normal file
View File

@ -0,0 +1,582 @@
/*
* Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/ZLMediaKit/ZLMediaKit).
*
* Use of this source code is governed by MIT-like license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#include "AV1.h"
#include "AV1Rtp.h"
#include <algorithm>
#include <cstring>
#include <vector>
#include <sstream>
#include <iomanip>
using namespace std;
using namespace toolkit;
namespace mediakit {
// AV1 OBU type definitions
static constexpr int kObuTypeSequenceHeader = 1;
static constexpr int kObuTypeTemporalDelimiter = 2;
static constexpr int kObuTypeTileList = 8;
static constexpr int kObuTypePadding = 15;
// Bit definitions in the RTP aggregation header
static constexpr uint8_t kObuSizePresentBit = 0b00000010;
static constexpr int kAggregationHeaderSize = 1;
static constexpr int kMaxNumObusToOmitSize = 3;
// LEB128 encode/decode helpers
static size_t writeLeb128(uint64_t value, uint8_t* buffer) {
size_t size = 0;
do {
uint8_t byte = value & 0x7F;
value >>= 7;
if (value != 0) {
byte |= 0x80;
}
buffer[size++] = byte;
} while (value != 0);
return size;
}
static size_t leb128Size(uint64_t value) {
size_t size = 0;
do {
value >>= 7;
++size;
} while (value != 0);
return size;
}
static bool readLeb128(const uint8_t*& data, size_t& remaining, uint64_t& value) {
value = 0;
size_t shift = 0;
while (remaining > 0 && shift < 56) {
uint8_t byte = *data++;
remaining--;
value |= (uint64_t(byte & 0x7F) << shift);
shift += 7;
if ((byte & 0x80) == 0) {
return true;
}
}
// Compatibility handling: if the end of data is reached but the last byte's MSB is still 1,
// assume this marks the end of the LEB128 encoding
if (remaining == 0 && shift > 0) {
WarnL << "Tolerating non-standard LEB128 encoding (missing termination bit)";
return true;
}
return false;
}
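The LEB128 helpers above store a value 7 bits per byte, low bits first, with the top bit of each byte flagging that more bytes follow. A standalone sketch of the same encoding scheme (an illustrative copy, not the project's code):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Encode an unsigned integer as LEB128: emit the low 7 bits of the value
// per byte, setting the high bit whenever more bytes follow.
size_t write_leb128(uint64_t value, uint8_t *buffer) {
    size_t size = 0;
    do {
        uint8_t byte = value & 0x7F;
        value >>= 7;
        if (value != 0) {
            byte |= 0x80; // continuation bit: more bytes follow
        }
        buffer[size++] = byte;
    } while (value != 0);
    return size;
}
```

Values up to 127 fit in one byte; 128 is the first value that needs a continuation byte, which is why OBU element sizes stay compact for typical payloads.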
// OBU helpers
static bool obuHasExtension(uint8_t obu_header) {
return obu_header & 0b00000100;
}
static bool obuHasSize(uint8_t obu_header) {
return obu_header & kObuSizePresentBit;
}
static int obuType(uint8_t obu_header) {
return (obu_header & 0b01111000) >> 3;
}
static int maxFragmentSize(int remaining_bytes) {
if (remaining_bytes <= 1) {
return 0;
}
for (int i = 1; ; ++i) {
if (remaining_bytes < (1 << (7 * i)) + i) {
return remaining_bytes - i;
}
}
}
//////////////////////////////////////////////////////////////////////////
// AV1RtpEncoder implementation
//////////////////////////////////////////////////////////////////////////
AV1RtpEncoder::AV1RtpEncoder() {
}
std::vector<AV1RtpEncoder::ObuInfo> AV1RtpEncoder::parseObus(const uint8_t* data, size_t size) {
std::vector<ObuInfo> result;
const uint8_t* ptr = data;
size_t remaining = size;
while (remaining > 0) {
if (remaining < 1) {
WarnL << "Malformed AV1 input: expected OBU header";
return {};
}
ObuInfo obu{};
obu.header = *ptr++;
remaining--;
obu.has_extension = obuHasExtension(obu.header);
obu.has_size_field = obuHasSize(obu.header);
if (obu.has_extension) {
if (remaining < 1) {
WarnL << "Malformed AV1 input: expected extension header";
return {};
}
obu.extension_header = *ptr++;
remaining--;
}
uint64_t payload_size = 0;
if (obu.has_size_field) {
if (!readLeb128(ptr, remaining, payload_size)) {
WarnL << "Malformed AV1 input: failed to read OBU size";
return {};
}
if (payload_size > remaining) {
WarnL << "Malformed AV1 input: OBU size exceeds remaining data";
return {};
}
} else {
payload_size = remaining;
}
obu.payload_data = ptr;
obu.payload_size = payload_size;
ptr += payload_size;
remaining -= payload_size;
int type = obuType(obu.header);
if (type != kObuTypeTemporalDelimiter &&
type != kObuTypeTileList &&
type != kObuTypePadding) {
result.push_back(obu);
}
}
return result;
}
uint8_t AV1RtpEncoder::makeAggregationHeader(bool first_obu_is_fragment,
bool last_obu_is_fragment,
int num_obu_elements,
bool starts_new_coded_video_sequence) {
uint8_t header = 0;
// Z bit: first OBU element is continuation of previous OBU
if (first_obu_is_fragment) {
header |= 0x80;
}
// Y bit: last OBU element will be continued in next packet
if (last_obu_is_fragment) {
header |= 0x40;
}
// W field: number of OBU elements (when <= 3)
if (num_obu_elements <= kMaxNumObusToOmitSize) {
header |= (num_obu_elements << 4);
}
// N bit: beginning of new coded video sequence
if (starts_new_coded_video_sequence) {
header |= 0x08;
}
return header;
}
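The aggregation header built above is a single byte carrying the Z/Y/W/N fields of the AV1 RTP payload format. A standalone sketch of the same bit layout (hypothetical helper name, mirroring `makeAggregationHeader`):

```cpp
#include <cassert>
#include <cstdint>

// Pack the one-byte AV1 RTP aggregation header:
//   Z (bit 7): first OBU element continues an OBU from the previous packet
//   Y (bit 6): last OBU element continues into the next packet
//   W (bits 5-4): OBU element count, left as 0 when more than 3 elements
//   N (bit 3): packet starts a new coded video sequence
uint8_t make_aggregation_header(bool z, bool y, int w, bool n) {
    uint8_t header = 0;
    if (z) header |= 0x80;
    if (y) header |= 0x40;
    if (w <= 3) header |= uint8_t(w) << 4;
    if (n) header |= 0x08;
    return header;
}
```

The decoder's `parseAggregationHeader` reverses exactly these masks, so the two sides must agree on the W field being omitted (zero) when a packet carries more than three OBU elements.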
void AV1RtpEncoder::outputRtp(const uint8_t* data, size_t len, bool mark,
uint64_t stamp, uint8_t aggregation_header) {
auto rtp = getRtpInfo().makeRtp(TrackVideo, nullptr, len + kAggregationHeaderSize, mark, stamp);
auto payload = rtp->data() + RtpPacket::kRtpTcpHeaderSize + RtpPacket::kRtpHeaderSize;
// Write the aggregation header
payload[0] = aggregation_header;
// Copy the payload data
if (len > 0) {
memcpy(payload + kAggregationHeaderSize, data, len);
}
RtpCodec::inputRtp(std::move(rtp), false);
}
bool AV1RtpEncoder::inputFrame(const Frame::Ptr &frame) {
auto ptr = frame->data() + frame->prefixSize();
auto size = frame->size() - frame->prefixSize();
if (size == 0) {
return false;
}
// Parse OBUs
auto obus = parseObus((const uint8_t*)ptr, size);
if (obus.empty()) {
return false;
}
// Check whether a sequence header OBU is present (keyframe indicator)
bool has_sequence_header = false;
for (const auto& obu : obus) {
int type = obuType(obu.header);
if (type == kObuTypeSequenceHeader) {
has_sequence_header = true;
_got_key_frame = true;
break;
}
}
// If no keyframe has been received yet and this frame is not one, drop it
if (!_got_key_frame && !has_sequence_header) {
DebugL << "Dropping AV1 frame before first keyframe";
return false;
}
size_t max_payload_size = getRtpInfo().getMaxSize() - kAggregationHeaderSize;
if (max_payload_size == 0) {
WarnL << "Invalid RTP max payload size for AV1";
return false;
}
for (size_t i = 0; i < obus.size(); ++i) {
const auto& obu = obus[i];
bool is_first_obu = (i == 0);
bool is_last_obu = (i == obus.size() - 1);
if (!sendObu(obu, is_first_obu, is_last_obu,
has_sequence_header && is_first_obu, frame->pts(), max_payload_size)) {
return false;
}
}
return true;
}
bool AV1RtpEncoder::sendObu(const ObuInfo& obu,
bool is_first_obu,
bool is_last_obu,
bool starts_new_sequence,
uint64_t stamp,
size_t max_payload_size) {
std::vector<uint8_t> obu_bytes;
obu_bytes.reserve(1 + (obu.has_extension ? 1 : 0) + obu.payload_size);
obu_bytes.push_back(obu.header & ~kObuSizePresentBit);
if (obu.has_extension) {
obu_bytes.push_back(obu.extension_header);
}
if (obu.payload_size > 0) {
obu_bytes.insert(obu_bytes.end(), obu.payload_data, obu.payload_data + obu.payload_size);
}
size_t offset = 0;
bool first_fragment = true;
while (offset < obu_bytes.size()) {
size_t fragment_size = std::min<size_t>(max_payload_size, obu_bytes.size() - offset);
bool last_fragment = (offset + fragment_size) == obu_bytes.size();
uint8_t agg_header = makeAggregationHeader(
!first_fragment,
!last_fragment,
1,
first_fragment && starts_new_sequence
);
bool mark = last_fragment && is_last_obu;
outputRtp(obu_bytes.data() + offset, fragment_size, mark, stamp, agg_header);
offset += fragment_size;
first_fragment = false;
}
return true;
}
//////////////////////////////////////////////////////////////////////////
// AV1RtpDecoder implementation
//////////////////////////////////////////////////////////////////////////
AV1RtpDecoder::AV1RtpDecoder() {
obtainFrame();
}
void AV1RtpDecoder::obtainFrame() {
_frame = FrameImp::create<AV1Frame>();
}
AV1RtpDecoder::AggregationHeader AV1RtpDecoder::parseAggregationHeader(uint8_t header) {
AggregationHeader agg;
agg.first_obu_is_fragment = (header & 0x80) != 0;
agg.last_obu_is_fragment = (header & 0x40) != 0;
agg.num_obu_elements = (header & 0x30) >> 4;
agg.starts_new_coded_video_sequence = (header & 0x08) != 0;
return agg;
}
bool AV1RtpDecoder::inputRtp(const RtpPacket::Ptr &rtp, bool key_pos) {
auto payload_size = rtp->getPayloadSize();
if (payload_size < kAggregationHeaderSize) {
return false;
}
uint32_t ssrc = rtp->getSSRC();
if (!_has_last_ssrc || _last_ssrc != ssrc) {
resetState();
_last_ssrc = ssrc;
_has_last_ssrc = true;
}
auto stamp = rtp->getStampMS();
auto payload = rtp->getPayload();
auto seq = rtp->getSeq();
// Parse the aggregation header
auto agg_header = parseAggregationHeader(payload[0]);
const uint8_t* data = payload + kAggregationHeaderSize;
size_t remaining = payload_size - kAggregationHeaderSize;
// InfoL << "RTP seq=" << seq << ", Z=" << agg_header.first_obu_is_fragment
// << ", Y=" << agg_header.last_obu_is_fragment
// << ", W=" << agg_header.num_obu_elements
// << ", N=" << agg_header.starts_new_coded_video_sequence
// << ", payload_size=" << remaining;
// if (remaining > 0) {
// std::ostringstream hex_stream;
// for (size_t i = 0; i < std::min(remaining, size_t(16)); ++i) {
// hex_stream << std::hex << std::setw(2) << std::setfill('0') << (int)data[i] << " ";
// }
// InfoL << "RTP payload hex: " << hex_stream.str();
// }
// A new coded video sequence starts: clear previous state
if (agg_header.starts_new_coded_video_sequence) {
InfoL << "Starting new coded video sequence";
resetState();
obtainFrame();
}
if (_has_last_seq) {
uint16_t expected = _last_seq + 1;
if (seq != expected && _assembling_fragment) {
WarnL << "RTP seq gap while assembling fragment, expected=" << expected
<< " got=" << seq << ", dropping incomplete OBU";
_fragment_buffer.clear();
_assembling_fragment = false;
}
}
_last_seq = seq;
_has_last_seq = true;
if (!processPayload(agg_header, data, remaining)) {
resetState();
obtainFrame();
return false;
}
bool marker = rtp->getHeader()->mark;
if (marker) {
if (_assembling_fragment) {
WarnL << "Marker bit set while awaiting fragment continuation";
_fragment_buffer.clear();
_assembling_fragment = false;
}
_last_dts = stamp;
if (!_received_keyframe) {
WarnL << "AV1 RTP packet before keyframe, dropping";
_frame->_buffer.clear();
obtainFrame();
return false;
}
flushFrame(stamp);
return true;
}
_last_dts = stamp;
return false;
}
bool AV1RtpDecoder::processPayload(const AggregationHeader& agg_header,
const uint8_t* data,
size_t remaining) {
size_t element_index = 0;
int expected_elements = agg_header.num_obu_elements;
while (remaining > 0) {
uint64_t element_size = 0;
bool has_size = (expected_elements == 0) || (static_cast<int>(element_index) < expected_elements - 1);
if (has_size) {
if (!readLeb128(data, remaining, element_size)) {
WarnL << "Failed to read OBU element size, trying fallback parsing";
// Compatibility fallback: if LEB128 parsing fails, fall back to the remaining byte count
element_size = remaining;
} else if (element_size > remaining) {
WarnL << "OBU element size (" << element_size << ") exceeds remaining payload ("
<< remaining << "), using remaining size";
element_size = remaining;
}
} else {
element_size = remaining;
}
std::vector<uint8_t> element_bytes;
element_bytes.reserve(element_size);
if (element_size > 0) {
element_bytes.insert(element_bytes.end(), data, data + element_size);
data += element_size;
remaining -= element_size;
}
bool is_first = element_index == 0;
bool is_last = (remaining == 0);
if (is_first && agg_header.first_obu_is_fragment) {
if (_fragment_buffer.empty()) {
WarnL << "Unexpected fragment continuation in AV1 RTP packet";
return false;
}
_fragment_buffer.insert(_fragment_buffer.end(), element_bytes.begin(), element_bytes.end());
} else {
if (_assembling_fragment && !_fragment_buffer.empty()) {
WarnL << "Previous fragment never completed, discarding";
return false;
}
_fragment_buffer = std::move(element_bytes);
}
bool will_continue = is_last && agg_header.last_obu_is_fragment;
if (will_continue) {
_assembling_fragment = true;
} else {
if (!emitObu(_fragment_buffer.data(), _fragment_buffer.size())) {
return false;
}
_fragment_buffer.clear();
_assembling_fragment = false;
}
++element_index;
}
if (expected_elements > 0 && static_cast<int>(element_index) != expected_elements) {
WarnL << "Mismatch between W field (" << expected_elements
<< ") and parsed OBU elements (" << element_index
<< "), tolerating for compatibility";
// Do not return false; keep processing for better compatibility
}
return true;
}
bool AV1RtpDecoder::emitObu(const uint8_t* data, size_t size) {
if (size == 0) {
return true;
}
if (size < 1) {
WarnL << "Empty OBU fragment";
return false;
}
uint8_t obu_header = data[0];
size_t header_size = 1;
// Check whether the OBU header already carries the size bit
bool already_has_size = obuHasSize(obu_header);
// OBUs in the RTP packet that already carry a size field need special handling
if (already_has_size) {
//WarnL << "RTP OBU contains size field";
// Account for the extension header
if (obuHasExtension(obu_header)) {
if (size < 2) {
WarnL << "OBU with extension flag but insufficient data";
return false;
}
header_size = 2;
}
// Read the original size field
const uint8_t* ptr = data + header_size;
size_t remaining = size - header_size;
uint64_t original_size = 0;
if (!readLeb128(ptr, remaining, original_size)) {
WarnL << "Failed to read original OBU size field";
return false;
}
if (original_size != remaining) {
WarnL << "OBU size mismatch in RTP packet, original_size=" << original_size
<< " remaining=" << remaining;
}
// Copy the complete OBU verbatim, including its existing size field
_frame->_buffer.append((char*)data, size);
} else {
// Standard case: the OBU in the RTP packet has no size field, so add one
// Write the OBU header with the size bit set
_frame->_buffer.push_back(obu_header | kObuSizePresentBit);
if (obuHasExtension(obu_header)) {
if (size < 2) {
WarnL << "OBU with extension flag but insufficient data";
return false;
}
_frame->_buffer.push_back(data[1]);
header_size = 2;
}
if (size < header_size) {
WarnL << "Invalid OBU size";
return false;
}
// Compute the payload size and write it as a LEB128-encoded size field
uint64_t payload_size = size - header_size;
uint8_t size_bytes[8];
size_t size_len = writeLeb128(payload_size, size_bytes);
_frame->_buffer.append((char*)size_bytes, size_len);
// Copy the payload data
if (payload_size > 0) {
_frame->_buffer.append((char*)data + header_size, payload_size);
}
}
if (obuType(obu_header) == kObuTypeSequenceHeader) {
_received_keyframe = true;
}
return true;
}
void AV1RtpDecoder::flushFrame(uint64_t stamp) {
if (_frame->_buffer.empty()) {
return;
}
_frame->_dts = stamp;
_frame->_pts = stamp;
RtpCodec::inputFrame(_frame);
obtainFrame();
}
void AV1RtpDecoder::resetState() {
_fragment_buffer.clear();
_assembling_fragment = false;
_has_last_seq = false;
_received_keyframe = false;
}
} // namespace mediakit

95
ext-codec/AV1Rtp.h Normal file
View File

@ -0,0 +1,95 @@
/*
* Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/ZLMediaKit/ZLMediaKit).
*
* Use of this source code is governed by MIT-like license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#ifndef ZLMEDIAKIT_AV1RTP_H
#define ZLMEDIAKIT_AV1RTP_H
#include "Rtsp/RtpCodec.h"
#include "Extension/Frame.h"
#include "Extension/CommonRtp.h"
namespace mediakit {
/**
* AV1 RTP encoder
*/
class AV1RtpEncoder : public RtpCodec {
public:
using Ptr = std::shared_ptr<AV1RtpEncoder>;
AV1RtpEncoder();
~AV1RtpEncoder() override = default;
bool inputFrame(const Frame::Ptr &frame) override;
private:
// AV1 OBU info
struct ObuInfo {
uint8_t header;
uint8_t extension_header;
const uint8_t* payload_data;
size_t payload_size;
bool has_extension;
bool has_size_field;
};
std::vector<ObuInfo> parseObus(const uint8_t* data, size_t size);
void outputRtp(const uint8_t* data, size_t len, bool mark, uint64_t stamp, uint8_t aggregation_header);
uint8_t makeAggregationHeader(bool first_obu_is_fragment, bool last_obu_is_fragment,
int num_obu_elements, bool starts_new_coded_video_sequence);
bool sendObu(const ObuInfo& obu, bool is_first_obu, bool is_last_obu,
bool starts_new_sequence, uint64_t stamp, size_t max_payload_size);
private:
bool _got_key_frame = false;
};
/**
* AV1 RTP decoder
*/
class AV1RtpDecoder : public RtpCodec {
public:
using Ptr = std::shared_ptr<AV1RtpDecoder>;
AV1RtpDecoder();
~AV1RtpDecoder() override = default;
bool inputRtp(const RtpPacket::Ptr &rtp, bool key_pos = false) override;
private:
struct AggregationHeader {
bool first_obu_is_fragment; // Z bit
bool last_obu_is_fragment; // Y bit
int num_obu_elements; // W field (0 = any number)
bool starts_new_coded_video_sequence; // N bit
};
AggregationHeader parseAggregationHeader(uint8_t header);
void obtainFrame();
bool emitObu(const uint8_t* data, size_t size);
bool processPayload(const AggregationHeader& agg_header, const uint8_t* data,
size_t remaining);
void flushFrame(uint64_t stamp);
void resetState();
private:
uint64_t _last_dts = 0;
FrameImp::Ptr _frame;
std::vector<uint8_t> _fragment_buffer;
bool _assembling_fragment = false;
bool _received_keyframe = false;
bool _has_last_seq = false;
uint16_t _last_seq = 0;
bool _has_last_ssrc = false;
uint32_t _last_ssrc = 0;
};
}//namespace mediakit
#endif //ZLMEDIAKIT_AV1RTP_H

View File

@ -1,6 +1,6 @@
# MIT License
#
# Copyright (c) 2016-2022 The ZLMediaKit project authors. All Rights Reserved.
# Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal

View File

@ -13,18 +13,35 @@
#include "Extension/Factory.h"
#include "Extension/CommonRtp.h"
#include "Extension/CommonRtmp.h"
#include "riff-acm.h"
using namespace std;
using namespace toolkit;
namespace mediakit {
Track::Ptr G711Track::clone() const {
return std::make_shared<G711Track>(*this);
Buffer::Ptr G711Track::getExtraData() const {
struct wave_format_t wav {};
wav.wFormatTag = getCodecId() == CodecG711A ? WAVE_FORMAT_ALAW : WAVE_FORMAT_MULAW;
wav.nChannels = getAudioChannel();
wav.nSamplesPerSec = getAudioSampleRate();
wav.nAvgBytesPerSec = 8000;
wav.nBlockAlign = 1;
wav.wBitsPerSample = 8;
auto buff = BufferRaw::create(18 + wav.cbSize);
wave_format_save(&wav, (uint8_t*)buff->data(), buff->size());
return buff;
}
Sdp::Ptr G711Track::getSdp(uint8_t payload_type) const {
return std::make_shared<DefaultSdp>(payload_type, *this);
void G711Track::setExtraData(const uint8_t *data, size_t size) {
struct wave_format_t wav;
if (wave_format_load(data, size, &wav) > 0) {
// Successfully parsed the WAVE format header
_sample_rate = wav.nSamplesPerSec;
_channels = wav.nChannels;
_codecid = (wav.wFormatTag == WAVE_FORMAT_ALAW) ? CodecG711A : CodecG711U;
} else {
WarnL << "Failed to parse G711 extra data";
}
}
namespace {

View File

@ -18,19 +18,16 @@ namespace mediakit{
/**
* G711音频通道
* G711 audio channel
* [AUTO-TRANSLATED:57f8bc08]
*/
class G711Track : public AudioTrackImp{
public:
using Ptr = std::shared_ptr<G711Track>;
G711Track(CodecId codecId, int sample_rate = 8000, int channels = 1, int sample_bit = 16) : AudioTrackImp(codecId, sample_rate, channels, sample_bit) {}
toolkit::Buffer::Ptr getExtraData() const override;
void setExtraData(const uint8_t *data, size_t size) override;
private:
Sdp::Ptr getSdp(uint8_t payload_type) const override;
Track::Ptr clone() const override;
Track::Ptr clone() const override { return std::make_shared<G711Track>(*this); }
};
}//namespace mediakit

View File

@ -38,7 +38,8 @@ bool G711RtpEncoder::inputFrame(const Frame::Ptr &frame) {
_buffer.append(ptr, size);
while (_buffer.size() >= _pkt_bytes) {
RtpCodec::inputRtp(getRtpInfo().makeRtp(TrackAudio, _buffer.data(), _pkt_bytes, false, in_pts), false);
auto tmp = (in_pts+_pkt_dur_ms-1)/_pkt_dur_ms*_pkt_dur_ms;
RtpCodec::inputRtp(getRtpInfo().makeRtp(TrackAudio, _buffer.data(), _pkt_bytes, false, tmp), false);
in_pts += _pkt_dur_ms;
_buffer.erase(0, _pkt_bytes);
}

View File

@ -153,7 +153,6 @@ bool H264Track::ready() const {
bool H264Track::inputFrame(const Frame::Ptr &frame) {
using H264FrameInternal = FrameInternal<H264FrameNoCacheAble>;
int type = H264_TYPE(frame->data()[frame->prefixSize()]);
if ((type == H264Frame::NAL_B_P || type == H264Frame::NAL_IDR) && ready()) {
return inputFrame_l(frame);
}
@ -263,6 +262,10 @@ Track::Ptr H264Track::clone() const {
bool H264Track::inputFrame_l(const Frame::Ptr &frame) {
int type = H264_TYPE(frame->data()[frame->prefixSize()]);
if (type == H264Frame::NAL_AUD) {
// Discard AUD NAL units
return false;
}
bool ret = true;
switch (type) {
case H264Frame::NAL_SPS: {
@ -388,7 +391,7 @@ Track::Ptr getTrackBySdp(const SdpTrack::Ptr &track) {
// If there is no sps/pps in the sdp, then it may be possible to recover the sps/pps in the subsequent rtp
return std::make_shared<H264Track>();
}
return std::make_shared<H264Track>(sps, pps, 0, 0);
return std::make_shared<H264Track>(sps, pps, prefixSize(sps.data(), sps.size()), prefixSize(pps.data(), pps.size()));
}
RtpCodec::Ptr getRtpEncoderByCodecId(uint8_t pt) {

View File

@ -160,6 +160,7 @@ toolkit::Buffer::Ptr H265Track::getExtraData() const {
WarnL << "生成H265 extra_data 失败";
return nullptr;
}
extra_data.resize(extra_data_size);
return std::make_shared<BufferString>(std::move(extra_data));
#else
WarnL << "请开启MP4相关功能并使能\"ENABLE_MP4\",否则对H265的支持不完善";
@ -215,6 +216,108 @@ void H265Track::insertConfigFrame(const Frame::Ptr &frame) {
}
}
class BitReader {
public:
BitReader(const uint8_t* data, size_t size) : _data(data), _size(size), _bitPos(0) {}
uint32_t readBits(int n) {
uint32_t result = 0;
for (int i = 0; i < n; i++) {
if (_bitPos >= _size * 8) throw std::runtime_error("Out of range");
int bytePos = _bitPos / 8;
int bitOffset = 7 - (_bitPos % 8);
result = (result << 1) | ((_data[bytePos] >> bitOffset) & 0x01);
_bitPos++;
}
return result;
}
void skipBits(int n) {
_bitPos += n;
if (_bitPos > _size * 8) throw std::runtime_error("Skip out of range");
}
private:
const uint8_t* _data;
size_t _size;
size_t _bitPos;
};
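The `BitReader` above consumes bits MSB-first across byte boundaries, which is what the `profile_tier_level()` parser below relies on. A trimmed standalone sketch of the same idea (an illustrative copy; only `readBits` shown):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <stdexcept>

// Minimal MSB-first bit reader: each readBits(n) call shifts in the next
// n bits, crossing byte boundaries as needed.
class BitReader {
public:
    BitReader(const uint8_t *data, size_t size) : _data(data), _size(size) {}
    uint32_t readBits(int n) {
        uint32_t result = 0;
        for (int i = 0; i < n; i++) {
            if (_pos >= _size * 8) throw std::runtime_error("Out of range");
            // Take bit 7 first within each byte, then work downward.
            result = (result << 1) | ((_data[_pos / 8] >> (7 - _pos % 8)) & 1);
            ++_pos;
        }
        return result;
    }
private:
    const uint8_t *_data;
    size_t _size;
    size_t _pos = 0;
};
```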
struct HevcProfileInfo {
int profile_id = -1; // profile-id
int level_id = -1; // level-id
int tier_flag = -1; // tier-flag
};
// Remove 00 00 03 emulation prevention bytes
std::vector<uint8_t> removeEmulationPrevention(const uint8_t *data, size_t size) {
std::vector<uint8_t> out;
out.reserve(size);
for (size_t i = 0; i < size; i++) {
if (i + 2 < size && data[i] == 0x00 && data[i + 1] == 0x00 && data[i + 2] == 0x03) {
out.push_back(0x00);
out.push_back(0x00);
i += 2; // skip 0x00 0x00 0x03
} else {
out.push_back(data[i]);
}
}
return out;
}
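`removeEmulationPrevention` turns the NAL payload back into RBSP by dropping the `03` from every `00 00 03` run. The same transformation as a standalone sketch (illustrative copy):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Strip H.265 emulation-prevention bytes: every 00 00 03 sequence in the
// NAL payload becomes 00 00 in the RBSP.
std::vector<uint8_t> remove_emulation_prevention(const std::vector<uint8_t> &in) {
    std::vector<uint8_t> out;
    out.reserve(in.size());
    for (size_t i = 0; i < in.size(); ++i) {
        if (i + 2 < in.size() && in[i] == 0x00 && in[i + 1] == 0x00 && in[i + 2] == 0x03) {
            out.push_back(0x00);
            out.push_back(0x00);
            i += 2; // skip past the 0x00 0x00 0x03 triple
        } else {
            out.push_back(in[i]);
        }
    }
    return out;
}
```

Without this step, bit-level parsing of `profile_tier_level()` would misread any SPS/VPS whose raw bytes happen to contain the escape sequence.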
// Extract profile/level/tier info from a VPS or SPS
HevcProfileInfo parse_hevc_profile_tier_level(const uint8_t *nalu, size_t size) {
// Strip the start code (00 00 01 or 00 00 00 01)
size_t offset = 0;
if (size > 4 && nalu[0] == 0x00 && nalu[1] == 0x00) {
if (nalu[2] == 0x01)
offset = 3;
else if (nalu[2] == 0x00 && nalu[3] == 0x01)
offset = 4;
}
auto rbsp = removeEmulationPrevention(nalu + offset, size - offset);
BitReader br(rbsp.data(), rbsp.size());
// ---- NALU header ----
br.skipBits(1 + 6 + 6 + 3); // forbidden_zero_bit + nal_unit_type + nuh_layer_id + nuh_temporal_id_plus1
// Both VPS and SPS contain profile_tier_level()
// Parse only the minimum required fields first
// vps_video_parameter_set_id or sps_video_parameter_set_id (skipped)
br.readBits(4);
// The SPS additionally carries sps_max_sub_layers_minus1
uint32_t max_sub_layers_minus1 = br.readBits(3);
// temporal_id_nesting_flag
br.readBits(1);
// ---- profile_tier_level ----
HevcProfileInfo info;
uint32_t profile_space = br.readBits(2); // general_profile_space
info.tier_flag = br.readBits(1); // general_tier_flag
info.profile_id = br.readBits(5); // general_profile_idc
// general_profile_compatibility_flag[32]
for (int i = 0; i < 32; i++)
br.readBits(1);
// general_progressive_source_flag etc. (skipped)
br.readBits(1); // progressive_source_flag
br.readBits(1); // interlaced_source_flag
br.readBits(1); // non_packed_constraint_flag
br.readBits(1); // frame_only_constraint_flag
// general_reserved_zero_44bits
br.skipBits(44);
// general_level_idc (8 bits)
info.level_id = br.readBits(8);
return info;
}
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/**
@@ -247,7 +350,9 @@ public:
_printer << "b=AS:" << bitrate << "\r\n";
}
_printer << "a=rtpmap:" << payload_type << " " << getCodecName(CodecH265) << "/" << 90000 << "\r\n";
_printer << "a=fmtp:" << payload_type << " ";
auto info = parse_hevc_profile_tier_level((uint8_t *)strSPS.data(), strSPS.size());
_printer << "a=fmtp:" << payload_type << " level-id=" << info.level_id << "; profile-id=" << info.profile_id << "; tier-flag=" << info.tier_flag << "; ";
_printer << "sprop-vps=";
_printer << encodeBase64(strVPS) << "; ";
_printer << "sprop-sps=";
@@ -287,7 +392,10 @@ Track::Ptr getTrackBySdp(const SdpTrack::Ptr &track) {
// If there is no sps/pps in the sdp, then it may be possible to recover sps/pps from the subsequent rtp
return std::make_shared<H265Track>();
}
return std::make_shared<H265Track>(vps, sps, pps, 0, 0, 0);
return std::make_shared<H265Track>(vps, sps, pps,
prefixSize(vps.data(), vps.size()),
prefixSize(sps.data(), sps.size()),
prefixSize(pps.data(), pps.size()));
}
RtpCodec::Ptr getRtpEncoderByCodecId(uint8_t pt) {


@@ -268,12 +268,12 @@ void H265RtpEncoder::packRtpFu(const char *ptr, size_t len, uint64_t pts, bool i
auto nal_type = H265_TYPE(ptr[0]); // Get the 6-bit NALU type from the first header byte
unsigned char s_e_flags;
bool fu_start = true;
bool mark_bit = false;
bool fu_end = false;
size_t offset = 2;
while (!mark_bit) {
while (!fu_end) {
if (len <= offset + max_size) {
// FU end
mark_bit = true;
fu_end = true;
max_size = len - offset;
s_e_flags = (1 << 6) | nal_type;
} else if (fu_start) {
@@ -287,7 +287,9 @@ void H265RtpEncoder::packRtpFu(const char *ptr, size_t len, uint64_t pts, bool i
{
// 传入nullptr先不做payload的内存拷贝 [AUTO-TRANSLATED:7ed49f0a]
// Pass in nullptr first, do not copy the payload memory
auto rtp = getRtpInfo().makeRtp(TrackVideo, nullptr, max_size + 3, mark_bit, pts);
// Set the mark bit only on the last FU fragment and only when the whole frame needs it
bool mark_bit = fu_end && is_mark;
auto rtp = getRtpInfo().makeRtp(TrackVideo, nullptr, max_size + 3, mark_bit, pts); // Only the last rtp of the frame carries the mark bit (not per NALU; with multiple tiles one frame holds several NALUs)
// rtp payload 负载部分 [AUTO-TRANSLATED:03a5ef9b]
// rtp payload load part
uint8_t *payload = rtp->getPayload();


@@ -133,7 +133,7 @@ static inline void bytestream2_put_be16(PutByteContext *p, uint16_t value) {
}
}
static inline void bytestream2_put_be24(PutByteContext *p, uint16_t value) {
static inline void bytestream2_put_be24(PutByteContext *p, uint32_t value) {
if (!p->eof && (p->buffer_end - p->buffer >= 2)) {
p->buffer[0] = value >> 16;
p->buffer[1] = value >> 8;

ext-codec/MP2A.cpp Normal file

@@ -0,0 +1,218 @@
/*
* Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/ZLMediaKit/ZLMediaKit).
*
* Use of this source code is governed by MIT-like license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#include "MP2A.h"
#include "MP2ARtp.h"
#include "Extension/Factory.h"
#include "Extension/CommonRtmp.h"
#include "Rtsp/Rtsp.h"
using namespace std;
using namespace toolkit;
namespace mediakit {
// ======================== MpegAudioFrameInfo ========================
// MPEG Audio 版本表
// MPEG Audio version table
// Index: version_bits (2 bits from header)
// 00 = MPEG 2.5, 01 = reserved, 10 = MPEG 2, 11 = MPEG 1
static const int s_mpeg_version[] = { 3, 0, 2, 1 }; // 3=MPEG2.5, 0=reserved, 2=MPEG2, 1=MPEG1
// Layer table: 00=reserved, 01=III, 10=II, 11=I
static const int s_mpeg_layer[] = { 0, 3, 2, 1 };
// MPEG-1 bitrate table (kbps)
// bitrate_index: 0-15, layer: 1-3
static const int s_bitrate_mpeg1[][16] = {
// Layer I
{ 0, 32, 64, 96, 128, 160, 192, 224, 256, 288, 320, 352, 384, 416, 448, 0 },
// Layer II
{ 0, 32, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320, 384, 0 },
// Layer III
{ 0, 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320, 0 },
};
// MPEG-2/2.5 bitrate table (kbps)
static const int s_bitrate_mpeg2[][16] = {
// Layer I
{ 0, 32, 48, 56, 64, 80, 96, 112, 128, 144, 160, 176, 192, 224, 256, 0 },
// Layer II / III
{ 0, 8, 16, 24, 32, 40, 48, 56, 64, 80, 96, 112, 128, 144, 160, 0 },
};
// Sample rate table (Hz)
// Index: [version_index][samplerate_index]
static const int s_sample_rate[][4] = {
{ 44100, 48000, 32000, 0 }, // MPEG-1
{ 22050, 24000, 16000, 0 }, // MPEG-2
{ 11025, 12000, 8000, 0 }, // MPEG-2.5
};
bool MpegAudioFrameInfo::parse(const uint8_t *data, size_t size, MpegAudioFrameInfo &info) {
if (size < 4) {
return false;
}
// Check the sync word: 0xFFE0 (11 set bits)
if (data[0] != 0xFF || (data[1] & 0xE0) != 0xE0) {
return false;
}
int version_bits = (data[1] >> 3) & 0x03;
int layer_bits = (data[1] >> 1) & 0x03;
// int protection = !(data[1] & 0x01);
int bitrate_index = (data[2] >> 4) & 0x0F;
int samplerate_index = (data[2] >> 2) & 0x03;
int padding = (data[2] >> 1) & 0x01;
int channel_mode = (data[3] >> 6) & 0x03;
int ver = s_mpeg_version[version_bits];
int layer = s_mpeg_layer[layer_bits];
if (ver == 0 || layer == 0 || samplerate_index == 3 || bitrate_index == 0 || bitrate_index == 15) {
return false;
}
int ver_index = ver - 1; // 0=MPEG1, 1=MPEG2, 2=MPEG2.5
int sr = s_sample_rate[ver_index][samplerate_index];
if (sr == 0) {
return false;
}
int bitrate = 0;
if (ver == 1) {
// MPEG-1
bitrate = s_bitrate_mpeg1[layer - 1][bitrate_index];
} else {
// MPEG-2 / MPEG-2.5
if (layer == 1) {
bitrate = s_bitrate_mpeg2[0][bitrate_index];
} else {
bitrate = s_bitrate_mpeg2[1][bitrate_index];
}
}
info.version = ver;
info.layer = layer;
info.bitrate = bitrate;
info.sample_rate = sr;
info.channels = (channel_mode == 3) ? 1 : 2; // 3=mono, otherwise stereo
// Compute samples per frame and frame size
if (layer == 1) {
// Layer I: 384 samples
info.samples_per_frame = 384;
info.frame_size = (12 * bitrate * 1000 / sr + padding) * 4;
} else if (layer == 2) {
// Layer II: 1152 samples
info.samples_per_frame = 1152;
info.frame_size = 144 * bitrate * 1000 / sr + padding;
} else {
// Layer III
if (ver == 1) {
info.samples_per_frame = 1152;
info.frame_size = 144 * bitrate * 1000 / sr + padding;
} else {
info.samples_per_frame = 576;
info.frame_size = 72 * bitrate * 1000 / sr + padding;
}
}
return true;
}
// ======================== MP2ATrack ========================
bool MP2ATrack::inputFrame(const Frame::Ptr &frame) {
if (!_info_parsed) {
auto data = (const uint8_t *)frame->data() + frame->prefixSize();
auto size = frame->size() - frame->prefixSize();
MpegAudioFrameInfo info;
if (MpegAudioFrameInfo::parse(data, size, info)) {
_sample_rate = info.sample_rate;
_channels = info.channels;
_info_parsed = true;
}
}
return AudioTrackImp::inputFrame(frame);
}
Sdp::Ptr MP2ATrack::getSdp(uint8_t pt) const {
// RFC 2250/3551: MPA 的 RTP 时钟频率固定为 90000而不是音频采样率
// RFC 2250/3551: MPA RTP clock rate is fixed at 90000, not the audio sample rate
class MP2ASdp : public Sdp {
public:
// Note: the Sdp base class constructor must be given 90000 as the sample_rate
MP2ASdp(uint8_t payload_type, int channels, int bitrate)
: Sdp(90000, payload_type) {
_printer << "m=audio 0 RTP/AVP " << (int)payload_type << "\r\n";
if (bitrate) {
_printer << "b=AS:" << bitrate << "\r\n";
}
_printer << "a=rtpmap:" << (int)payload_type << " MPA/90000/" << channels << "\r\n";
}
std::string getSdp() const override { return _printer; }
private:
toolkit::_StrPrinter _printer;
};
return std::make_shared<MP2ASdp>(pt, getAudioChannel(), getBitRate() >> 10);
}
Track::Ptr MP2ATrack::clone() const {
return std::make_shared<MP2ATrack>(*this);
}
namespace {
CodecId getCodec() {
return CodecMP2A;
}
Track::Ptr getTrackByCodecId(int sample_rate, int channels, int sample_bit) {
return std::make_shared<MP2ATrack>(sample_rate, channels);
}
Track::Ptr getTrackBySdp(const SdpTrack::Ptr &track) {
return std::make_shared<MP2ATrack>(track->_samplerate, track->_channel);
}
RtpCodec::Ptr getRtpEncoderByCodecId(uint8_t pt) {
return std::make_shared<MP2ARtpEncoder>();
}
RtpCodec::Ptr getRtpDecoderByCodecId() {
return std::make_shared<MP2ARtpDecoder>();
}
RtmpCodec::Ptr getRtmpEncoderByTrack(const Track::Ptr &track) {
return std::make_shared<CommonRtmpEncoder>(track);
}
RtmpCodec::Ptr getRtmpDecoderByTrack(const Track::Ptr &track) {
return std::make_shared<CommonRtmpDecoder>(track);
}
Frame::Ptr getFrameFromPtr(const char *data, size_t bytes, uint64_t dts, uint64_t pts) {
return std::make_shared<MP2AFrameNoCacheAble>((char *)data, bytes, dts, pts);
}
} // namespace
CodecPlugin mp2a_plugin = { getCodec,
getTrackByCodecId,
getTrackBySdp,
getRtpEncoderByCodecId,
getRtpDecoderByCodecId,
getRtmpEncoderByTrack,
getRtmpDecoderByTrack,
getFrameFromPtr };
} // namespace mediakit

ext-codec/MP2A.h Normal file

@@ -0,0 +1,90 @@
/*
* Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/ZLMediaKit/ZLMediaKit).
*
* Use of this source code is governed by MIT-like license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#ifndef ZLMEDIAKIT_MP2A_H
#define ZLMEDIAKIT_MP2A_H
#include "Extension/Frame.h"
#include "Extension/Track.h"
namespace mediakit {
/**
 * MPEG-1/2 Audio (Layer I/II) frame helper class template
 */
template <typename Parent>
class MP2AFrameHelper : public Parent {
public:
using Ptr = std::shared_ptr<MP2AFrameHelper>;
template <typename... ARGS>
MP2AFrameHelper(ARGS &&...args)
: Parent(std::forward<ARGS>(args)...) {
this->_codec_id = CodecMP2A;
}
bool keyFrame() const override { return false; }
bool configFrame() const override { return false; }
};
/// MPEG-1/2 Audio frame classes
using MP2AFrame = MP2AFrameHelper<FrameImp>;
using MP2AFrameNoCacheAble = MP2AFrameHelper<FrameFromPtr>;
// MPEG Audio 帧头解析工具
// MPEG Audio frame header parsing utility
struct MpegAudioFrameInfo {
int version = 0; // 1: MPEG-1, 2: MPEG-2, 3: MPEG-2.5
int layer = 0; // 1: Layer I, 2: Layer II, 3: Layer III
int bitrate = 0; // kbps
int sample_rate = 0; // Hz
int channels = 0; // 1: mono, 2: stereo
int frame_size = 0; // bytes per frame
int samples_per_frame = 0;
/**
 * Parse frame header info starting at an MPEG Audio sync word
 * @param data frame header data (at least 4 bytes)
 * @param size size of data in bytes
 * @param info parsed frame info output
 * @return true if the header was parsed successfully
 */
static bool parse(const uint8_t *data, size_t size, MpegAudioFrameInfo &info);
};
/**
 * MPEG-1/2 Audio (Layer I/II) track, codec id CodecMP2A
 */
class MP2ATrack : public AudioTrackImp {
public:
using Ptr = std::shared_ptr<MP2ATrack>;
MP2ATrack(int sample_rate = 44100, int channels = 2)
: AudioTrackImp(CodecMP2A, sample_rate, channels, 16) {}
bool inputFrame(const Frame::Ptr &frame) override;
private:
/**
 * RFC 2250/3551 specify that the MPA RTP clock rate is fixed at 90000
 */
Sdp::Ptr getSdp(uint8_t payload_type) const override;
Track::Ptr clone() const override;
private:
bool _info_parsed = false;
};
} // namespace mediakit
#endif // ZLMEDIAKIT_MP2A_H

ext-codec/MP2ARtp.cpp Normal file

@@ -0,0 +1,175 @@
/*
* Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/ZLMediaKit/ZLMediaKit).
*
* Use of this source code is governed by MIT-like license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#include "MP2ARtp.h"
namespace mediakit {
// ======================== MP2ARtpEncoder ========================
void MP2ARtpEncoder::outputRtp(const char *data, size_t len, size_t frag_offset, bool mark, uint64_t stamp) {
// RFC 2250 Section 3.5:
// 4 bytes MPEG Audio-specific header + ES data
auto rtp = getRtpInfo().makeRtp(TrackAudio, nullptr, len + kMP2AHeaderSize, mark, stamp);
auto payload = rtp->getPayload();
// MPEG Audio-specific header
// MBZ (16 bits) = 0
payload[0] = 0;
payload[1] = 0;
// Frag_offset (16 bits)
payload[2] = (frag_offset >> 8) & 0xFF;
payload[3] = frag_offset & 0xFF;
// ES data
memcpy(payload + kMP2AHeaderSize, data, len);
RtpCodec::inputRtp(std::move(rtp), false);
}
bool MP2ARtpEncoder::inputFrame(const Frame::Ptr &frame) {
auto data = (const uint8_t *)frame->data() + frame->prefixSize();
auto total_size = (size_t)(frame->size() - frame->prefixSize());
if (total_size <= 0) {
return false;
}
auto max_payload = getRtpInfo().getMaxSize() - kMP2AHeaderSize;
auto base_dts = frame->dts();
// The TS demuxer may deliver several complete MPEG Audio frames in one callback (one PES packet).
// Each frame must be parsed and packed into RTP independently; otherwise receivers such as
// FFmpeg report "Header missing" because fragmented RTP payloads do not start with a sync word.
size_t pos = 0;
int frame_index = 0;
while (pos + 4 <= total_size) {
// Check for the MPEG Audio sync word
if (data[pos] != 0xFF || (data[pos + 1] & 0xE0) != 0xE0) {
// Skip invalid bytes while searching for the next sync word
++pos;
continue;
}
// Parse the frame header to obtain the frame size
MpegAudioFrameInfo info;
if (!MpegAudioFrameInfo::parse(data + pos, total_size - pos, info) || info.frame_size <= 0) {
++pos;
continue;
}
size_t frame_size = (size_t)info.frame_size;
if (pos + frame_size > total_size) {
// Incomplete frame, pack the remaining data
frame_size = total_size - pos;
}
// Compute the timestamp offset of this frame (in milliseconds)
// Each frame holds samples_per_frame samples at info.sample_rate
uint64_t stamp = base_dts;
if (frame_index > 0 && info.sample_rate > 0) {
stamp += (uint64_t)frame_index * info.samples_per_frame * 1000 / info.sample_rate;
}
// Pack this single MPEG Audio frame into RTP
auto ptr = (const char *)(data + pos);
size_t remain = frame_size;
size_t frag_offset = 0;
while (remain > 0) {
if (remain <= max_payload) {
outputRtp(ptr, remain, frag_offset, true, stamp);
break;
}
outputRtp(ptr, max_payload, frag_offset, false, stamp);
ptr += max_payload;
remain -= max_payload;
frag_offset += max_payload;
}
pos += frame_size;
++frame_index;
}
return true;
}
// ======================== MP2ARtpDecoder ========================
MP2ARtpDecoder::MP2ARtpDecoder() {
obtainFrame();
}
void MP2ARtpDecoder::obtainFrame() {
_frame = FrameImp::create<MP2AFrame>();
}
void MP2ARtpDecoder::flushData() {
if (_frame->_buffer.empty()) {
return;
}
RtpCodec::inputFrame(_frame);
obtainFrame();
}
bool MP2ARtpDecoder::inputRtp(const RtpPacket::Ptr &rtp, bool key_pos) {
auto payload_size = rtp->getPayloadSize();
if (payload_size <= (ssize_t)kMP2AHeaderSize) {
// Payload too small, no valid ES data
return false;
}
auto payload = rtp->getPayload();
auto stamp = rtp->getStamp();
auto seq = rtp->getSeq();
// Parse the MPEG Audio-specific header (RFC 2250 Section 3.5)
// MBZ (16 bits) + Frag_offset (16 bits)
uint16_t frag_offset = (payload[2] << 8) | payload[3];
auto es_data = payload + kMP2AHeaderSize;
auto es_size = payload_size - kMP2AHeaderSize;
if (frag_offset == 0) {
// frag_offset == 0 marks the start of a new (or complete) frame
// Flush the previously buffered frame first (if any)
flushData();
// Convert the 90 kHz timestamp to milliseconds
_frame->_dts = rtp->getStampMS();
_frame->_pts = _frame->_dts;
} else if (_frame->_buffer.empty()) {
// frag_offset != 0 with an empty buffer means the first fragment was lost; discard
_last_seq = seq;
_last_stamp = stamp;
return false;
} else if (seq != (uint16_t)(_last_seq + 1)) {
// Fragment seq discontinuity: packet loss, discard the current frame
WarnL << "mp2a rtp packet loss:" << _last_seq << " -> " << seq;
_frame->_buffer.clear();
_last_seq = seq;
_last_stamp = stamp;
return false;
}
_last_seq = seq;
_last_stamp = stamp;
// Append the ES data
_frame->_buffer.append((char *)es_data, es_size);
// The mark bit flags the last RTP packet of the frame; output it immediately
if (rtp->getHeader()->mark) {
flushData();
}
return false;
}
} // namespace mediakit

ext-codec/MP2ARtp.h Normal file

@@ -0,0 +1,87 @@
/*
* Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/ZLMediaKit/ZLMediaKit).
*
* Use of this source code is governed by MIT-like license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#ifndef ZLMEDIAKIT_MP2ARTP_H
#define ZLMEDIAKIT_MP2ARTP_H
#include "MP2A.h"
#include "Rtsp/RtpCodec.h"
namespace mediakit {
// RFC 2250 Section 3.5 MPEG Audio-specific header (4 bytes)
//
// 0 1 2 3
// 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
// +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
// | MBZ | Frag_offset |
// +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
//
// MBZ: Must Be Zero (16 bits)
// Frag_offset: Byte offset into the audio frame for the data in this packet (16 bits)
static constexpr size_t kMP2AHeaderSize = 4;
/**
 * MP2A (MPEG-1/2 Audio Layer I/II) RTP encoder
 * RFC 2250 Section 3.5
 */
class MP2ARtpEncoder : public RtpCodec {
public:
using Ptr = std::shared_ptr<MP2ARtpEncoder>;
/**
 * Pack an MPEG Audio frame into RTP packets
 * @param frame frame to be encoded
 */
bool inputFrame(const Frame::Ptr &frame) override;
private:
/**
 * Output a single RTP packet
 * @param data ES data
 * @param len ES data length
 * @param frag_offset byte offset of this fragment within the audio frame
 * @param mark RTP mark bit
 * @param stamp timestamp (ms)
 */
void outputRtp(const char *data, size_t len, size_t frag_offset, bool mark, uint64_t stamp);
};
/**
 * MP2A (MPEG-1/2 Audio Layer I/II) RTP decoder
 * RFC 2250 Section 3.5
 */
class MP2ARtpDecoder : public RtpCodec {
public:
using Ptr = std::shared_ptr<MP2ARtpDecoder>;
MP2ARtpDecoder();
/**
 * Decode an MPEG Audio RTP packet
 * @param rtp rtp packet
 * @param key_pos unused for audio
 */
bool inputRtp(const RtpPacket::Ptr &rtp, bool key_pos = false) override;
private:
void obtainFrame();
void flushData();
private:
uint16_t _last_seq = 0;
uint32_t _last_stamp = 0;
FrameImp::Ptr _frame;
};
} // namespace mediakit
#endif // ZLMEDIAKIT_MP2ARTP_H

ext-codec/MP2V.cpp Normal file

@@ -0,0 +1,116 @@
/*
* Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/ZLMediaKit/ZLMediaKit).
*
* Use of this source code is governed by MIT-like license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#include "MP2V.h"
#include "MP2VRtp.h"
#include "Extension/Factory.h"
#include "Rtsp/Rtsp.h"
using namespace std;
using namespace toolkit;
namespace mediakit {
// MPEG-2 sequence header 帧率表 (ISO 13818-2 Table 6-4)
// MPEG-2 sequence header frame rate table
static const float s_mp2v_frame_rate_table[] = {
0, // 0000 forbidden
24000.0 / 1001, // 0001 23.976
24.0, // 0010
25.0, // 0011
30000.0 / 1001, // 0100 29.97
30.0, // 0101
50.0, // 0110
60000.0 / 1001, // 0111 59.94
60.0, // 1000
};
void MP2VTrack::parseSequenceHeader(const uint8_t *data, size_t size) {
// 查找 sequence header start code: 00 00 01 B3
// Look for sequence header start code: 00 00 01 B3
for (size_t i = 0; i + 7 < size; ++i) {
if (data[i] == 0x00 && data[i + 1] == 0x00 && data[i + 2] == 0x01 && data[i + 3] == 0xB3) {
// sequence_header() structure:
// horizontal_size_value: 12 bits
// vertical_size_value: 12 bits
// aspect_ratio_information: 4 bits
// frame_rate_code: 4 bits
_width = (data[i + 4] << 4) | ((data[i + 5] >> 4) & 0x0F);
_height = ((data[i + 5] & 0x0F) << 8) | data[i + 6];
uint8_t frame_rate_code = data[i + 7] & 0x0F;
if (frame_rate_code > 0 && frame_rate_code <= 8) {
_fps = s_mp2v_frame_rate_table[frame_rate_code];
}
_seq_header_parsed = true;
return;
}
}
}
bool MP2VTrack::inputFrame(const Frame::Ptr &frame) {
if (!_seq_header_parsed) {
parseSequenceHeader((const uint8_t *)frame->data() + frame->prefixSize(),
frame->size() - frame->prefixSize());
}
return VideoTrackImp::inputFrame(frame);
}
Sdp::Ptr MP2VTrack::getSdp(uint8_t pt) const {
return std::make_shared<DefaultSdp>(pt, *this);
}
namespace {
CodecId getCodec() {
return CodecMP2V;
}
Track::Ptr getTrackByCodecId(int sample_rate, int channels, int sample_bit) {
return std::make_shared<MP2VTrack>();
}
Track::Ptr getTrackBySdp(const SdpTrack::Ptr &track) {
return std::make_shared<MP2VTrack>();
}
RtpCodec::Ptr getRtpEncoderByCodecId(uint8_t pt) {
return std::make_shared<MP2VRtpEncoder>();
}
RtpCodec::Ptr getRtpDecoderByCodecId() {
return std::make_shared<MP2VRtpDecoder>();
}
RtmpCodec::Ptr getRtmpEncoderByTrack(const Track::Ptr &track) {
WarnL << "Unsupported MP2V rtmp encoder";
return nullptr;
}
RtmpCodec::Ptr getRtmpDecoderByTrack(const Track::Ptr &track) {
WarnL << "Unsupported MP2V rtmp decoder";
return nullptr;
}
Frame::Ptr getFrameFromPtr(const char *data, size_t bytes, uint64_t dts, uint64_t pts) {
return std::make_shared<MP2VFrameNoCacheAble>((char *)data, bytes, dts, pts, 0);
}
} // namespace
CodecPlugin mp2v_plugin = { getCodec,
getTrackByCodecId,
getTrackBySdp,
getRtpEncoderByCodecId,
getRtpDecoderByCodecId,
getRtmpEncoderByTrack,
getRtmpDecoderByTrack,
getFrameFromPtr };
} // namespace mediakit

ext-codec/MP2V.h Normal file

@@ -0,0 +1,97 @@
/*
* Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/ZLMediaKit/ZLMediaKit).
*
* Use of this source code is governed by MIT-like license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#ifndef ZLMEDIAKIT_MP2V_H
#define ZLMEDIAKIT_MP2V_H
#include "Extension/Frame.h"
#include "Extension/Track.h"
namespace mediakit {
/**
 * MPEG-2 Video frame helper class template
 */
template <typename Parent>
class MP2VFrameHelper : public Parent {
public:
using Ptr = std::shared_ptr<MP2VFrameHelper>;
template <typename... ARGS>
MP2VFrameHelper(ARGS &&...args)
: Parent(std::forward<ARGS>(args)...) {
this->_codec_id = CodecMP2V;
}
/**
 * MPEG-2 video picture start code: 00 00 01 00 (picture_start_code)
 * I-frame detection: picture_coding_type == 1 (I-Picture);
 * picture_coding_type occupies the 3 bits after the 10-bit temporal_reference in the picture header
 */
bool keyFrame() const override {
auto data = (const uint8_t *)this->data() + this->prefixSize();
auto size = this->size() - this->prefixSize();
return isMP2VKeyFrame(data, size);
}
bool configFrame() const override { return false; }
static bool isMP2VKeyFrame(const uint8_t *data, size_t size) {
// 查找 picture start code (00 00 01 00),然后检查 picture_coding_type
// Look for picture start code (00 00 01 00), then check picture_coding_type
for (size_t i = 0; i + 5 < size; ++i) {
if (data[i] == 0x00 && data[i + 1] == 0x00 && data[i + 2] == 0x01 && data[i + 3] == 0x00) {
// picture header: temporal_reference(10bits) + picture_coding_type(3bits)
// picture_coding_type: 001 = I, 010 = P, 011 = B
uint8_t picture_coding_type = (data[i + 5] >> 3) & 0x07;
return picture_coding_type == 1;
}
}
return false;
}
};
/// MPEG-2 Video frame classes
using MP2VFrame = MP2VFrameHelper<FrameImp>;
using MP2VFrameNoCacheAble = MP2VFrameHelper<FrameFromPtr>;
/**
* MPEG-2 Video Track
*/
class MP2VTrack : public VideoTrackImp {
public:
using Ptr = std::shared_ptr<MP2VTrack>;
MP2VTrack() : VideoTrackImp(CodecMP2V) {}
Track::Ptr clone() const override { return std::make_shared<MP2VTrack>(*this); }
bool inputFrame(const Frame::Ptr &frame) override;
private:
Sdp::Ptr getSdp(uint8_t payload_type) const override;
/**
 * Parse width, height and fps from the sequence header
 */
void parseSequenceHeader(const uint8_t *data, size_t size);
private:
bool _seq_header_parsed = false;
};
} // namespace mediakit
#endif // ZLMEDIAKIT_MP2V_H

ext-codec/MP2VRtp.cpp Normal file

@@ -0,0 +1,274 @@
/*
* Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/ZLMediaKit/ZLMediaKit).
*
* Use of this source code is governed by MIT-like license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#include "MP2VRtp.h"
#include "Common/config.h"
namespace mediakit {
// ======================== MP2VRtpDecoder ========================
MP2VRtpDecoder::MP2VRtpDecoder() {
obtainFrame();
}
void MP2VRtpDecoder::obtainFrame() {
_frame = FrameImp::create<MP2VFrame>();
}
bool MP2VRtpDecoder::inputRtp(const RtpPacket::Ptr &rtp, bool key_pos) {
auto seq = rtp->getSeq();
auto last_gop_dropped = _gop_dropped;
bool is_gop_start = decodeRtp(rtp);
if (!_gop_dropped && seq != (uint16_t)(_last_seq + 1) && _last_seq) {
_gop_dropped = true;
WarnL << "start drop mp2v gop, last seq:" << _last_seq << ", rtp:\r\n" << rtp->dumpString();
}
_last_seq = seq;
return is_gop_start && !last_gop_dropped;
}
/**
* RFC 2250 MPEG Video-specific header (4 bytes):
*
* 0 1 2 3
* 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
* +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
* | MBZ |T| TR |AN|N|S|B|E| P | | BFC | | FFC |
* +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
* FBV FFV
*
* T: MPEG-2 specific header extension present (1 bit)
* TR: Temporal Reference (10 bits)
* AN: Active N bit (1 bit)
* N: New picture header (1 bit)
* S: Sequence-header-present (1 bit)
* B: Beginning-of-slice (1 bit)
* E: End-of-slice (1 bit)
* P: Picture-Type (3 bits): I(1), P(2), B(3), D(4)
* FBV: full_pel_backward_vector (1 bit)
* BFC: backward_f_code (3 bits)
* FFV: full_pel_forward_vector (1 bit)
* FFC: forward_f_code (3 bits)
*/
bool MP2VRtpDecoder::decodeRtp(const RtpPacket::Ptr &rtp) {
auto payload_size = rtp->getPayloadSize();
if (payload_size <= (ssize_t)kMP2VHeaderSize) {
// Payload too small to contain valid data
return false;
}
auto payload = rtp->getPayload();
auto stamp = rtp->getStampMS();
auto seq = rtp->getSeq();
// Parse the RFC 2250 MPEG Video-specific header
bool t_bit = (payload[0] >> 2) & 0x01;
// uint16_t temporal_ref = ((payload[0] & 0x03) << 8) | payload[1];
// bool seq_header_present = (payload[2] >> 5) & 0x01;
// bool begin_of_slice = (payload[2] >> 4) & 0x01;
// bool end_of_slice = (payload[2] >> 3) & 0x01;
uint8_t picture_type = (payload[2] & 0x07);
// If the T bit is set, a 4-byte MPEG-2 extension header follows and must be skipped
size_t header_size = kMP2VHeaderSize + (t_bit ? 4 : 0);
if (payload_size <= (ssize_t)header_size) {
return false;
}
auto es_data = payload + header_size;
auto es_size = payload_size - header_size;
// Check for a new frame (timestamp changed)
if (!_frame->_buffer.empty() && stamp != _frame->_pts) {
// Timestamp changed, output the previous frame
outputFrame(rtp);
}
if (_frame->_buffer.empty()) {
// Start of a new frame
_frame->_pts = stamp;
_drop_flag = false;
_picture_type = picture_type;
}
if (_drop_flag) {
return false;
}
// Seq discontinuity detected, discard the current frame
if (!_frame->_buffer.empty() && seq != (uint16_t)(_last_seq + 1) && _last_seq) {
_drop_flag = true;
_frame->_buffer.clear();
return false;
}
// Append the ES data
_frame->_buffer.append((char *)es_data, es_size);
// The RTP mark bit flags the end of the frame
if (rtp->getHeader()->mark) {
outputFrame(rtp);
return _picture_type == 1; // I-Picture
}
return false;
}
void MP2VRtpDecoder::outputFrame(const RtpPacket::Ptr &rtp) {
if (_frame->_buffer.empty()) {
return;
}
// Generate the DTS (MPEG-2 has B-frames, so PTS and DTS may differ)
_dts_generator.getDts(_frame->_pts, _frame->_dts);
bool is_key = _frame->keyFrame();
if (is_key && _gop_dropped) {
_gop_dropped = false;
InfoL << "new mp2v gop received, rtp:\r\n" << rtp->dumpString();
}
if (!_gop_dropped) {
RtpCodec::inputFrame(_frame);
}
obtainFrame();
}
// ======================== MP2VRtpEncoder ========================
bool MP2VRtpEncoder::hasSequenceHeader(const uint8_t *data, size_t size) {
// Look for the sequence header start code: 00 00 01 B3
for (size_t i = 0; i + 3 < size; ++i) {
if (data[i] == 0x00 && data[i + 1] == 0x00 && data[i + 2] == 0x01 && data[i + 3] == 0xB3) {
return true;
}
}
return false;
}
void MP2VRtpEncoder::parsePictureInfo(const uint8_t *data, size_t size) {
_temporal_ref = 0;
_picture_type = 0;
_fbv = 0;
_bfc = 0;
_ffv = 0;
_ffc = 0;
_has_seq_header = hasSequenceHeader(data, size);
// Look for the picture start code: 00 00 01 00
for (size_t i = 0; i + 5 < size; ++i) {
if (data[i] == 0x00 && data[i + 1] == 0x00 && data[i + 2] == 0x01 && data[i + 3] == 0x00) {
// temporal_reference: 10 bits, picture_coding_type: 3 bits
_temporal_ref = (data[i + 4] << 2) | ((data[i + 5] >> 6) & 0x03);
_picture_type = (data[i + 5] >> 3) & 0x07;
// Parse the motion vector codes (after vbv_delay)
// picture header: temporal_reference(10) + picture_coding_type(3) + vbv_delay(16)
if (i + 8 < size) {
uint8_t extra_byte = data[i + 8];
if (_picture_type == 2 /* P */ || _picture_type == 3 /* B */) {
// full_pel_forward_vector(1) + forward_f_code(3)
_ffv = (extra_byte >> 2) & 0x01;
_ffc = ((extra_byte & 0x03) << 1);
if (i + 9 < size) {
_ffc |= (data[i + 9] >> 7) & 0x01;
}
}
if (_picture_type == 3 /* B */) {
// full_pel_backward_vector(1) + backward_f_code(3) immediately follow the forward fields
if (i + 9 < size) {
_fbv = (data[i + 9] >> 6) & 0x01;
_bfc = (data[i + 9] >> 3) & 0x07;
}
}
}
return;
}
}
}
void MP2VRtpEncoder::buildMpvHeader(uint8_t *buf, const uint8_t *data, size_t size,
bool is_begin_of_slice, bool is_end_of_slice) {
// RFC 2250 Section 3.4
// Byte 0: MBZ(5) + T(1) + TR high 2 bits
// T = 0 (the MPEG-2 extension header is not sent)
buf[0] = (_temporal_ref >> 8) & 0x03;
// Byte 1: TR low 8 bits
buf[1] = _temporal_ref & 0xFF;
// Byte 2: AN(1) + N(1) + S(1) + B(1) + E(1) + P(3)
uint8_t byte2 = 0;
// AN = 0, N = 0
if (_has_seq_header) {
byte2 |= 0x20; // S bit
}
if (is_begin_of_slice) {
byte2 |= 0x10; // B bit
}
if (is_end_of_slice) {
byte2 |= 0x08; // E bit
}
byte2 |= (_picture_type & 0x07);
buf[2] = byte2;
// Byte 3: FBV(1) + BFC(3) + FFV(1) + FFC(3)
buf[3] = ((_fbv & 0x01) << 7) | ((_bfc & 0x07) << 4) | ((_ffv & 0x01) << 3) | (_ffc & 0x07);
}
bool MP2VRtpEncoder::inputFrame(const Frame::Ptr &frame) {
auto ptr = (const uint8_t *)frame->data() + frame->prefixSize();
auto size = frame->size() - frame->prefixSize();
if (size == 0) {
return false;
}
// Parse frame info (picture type, temporal reference, etc.)
parsePictureInfo(ptr, size);
bool is_key = frame->keyFrame();
auto max_payload = getRtpInfo().getMaxSize() - kMP2VHeaderSize;
size_t offset = 0;
while (offset < size) {
bool is_first = (offset == 0);
size_t payload_size;
bool is_last;
if (size - offset <= max_payload) {
payload_size = size - offset;
is_last = true;
} else {
payload_size = max_payload;
is_last = false;
}
// Build the MPEG Video-specific header
uint8_t mpv_header[kMP2VHeaderSize];
buildMpvHeader(mpv_header, ptr + offset, payload_size, is_first, is_last);
// Create the RTP packet (MPEG header + ES data)
auto rtp = getRtpInfo().makeRtp(TrackVideo, nullptr, kMP2VHeaderSize + payload_size, is_last, frame->pts());
auto rtp_payload = rtp->getPayload();
// Write the MPEG Video-specific header
memcpy(rtp_payload, mpv_header, kMP2VHeaderSize);
// Write the ES data
memcpy(rtp_payload + kMP2VHeaderSize, ptr + offset, payload_size);
// Feed into the RTP ring buffer
RtpCodec::inputRtp(rtp, is_key && is_first);
offset += payload_size;
}
return true;
}
} // namespace mediakit

ext-codec/MP2VRtp.h Normal file

@@ -0,0 +1,112 @@
/*
* Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/ZLMediaKit/ZLMediaKit).
*
* Use of this source code is governed by MIT-like license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#ifndef ZLMEDIAKIT_MP2VRTP_H
#define ZLMEDIAKIT_MP2VRTP_H
#include "MP2V.h"
#include "Common/Stamp.h"
#include "Rtsp/RtpCodec.h"
namespace mediakit {
// RFC 2250 MPEG Video-specific header (4 bytes)
//
// 0 1 2 3
// 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
// +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
// | MBZ |T| TR |N|S|B|E| P | | BFC | | FFC |
// +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
// AN FBV FFV
static constexpr size_t kMP2VHeaderSize = 4;
/**
 * MP2V (MPEG-2 Video) RTP decoder
 * Decodes MPEG-2 Video over RTP into MP2V frames
 * RFC 2250
 */
class MP2VRtpDecoder : public RtpCodec {
public:
using Ptr = std::shared_ptr<MP2VRtpDecoder>;
MP2VRtpDecoder();
/**
* Input an MPEG-2 Video RTP packet
* @param rtp the RTP packet
* @param key_pos whether the packet is at a key frame position
*/
bool inputRtp(const RtpPacket::Ptr &rtp, bool key_pos = true) override;
private:
bool decodeRtp(const RtpPacket::Ptr &rtp);
void outputFrame(const RtpPacket::Ptr &rtp);
void obtainFrame();
private:
bool _gop_dropped = true;
bool _drop_flag = false;
uint16_t _last_seq = 0;
uint8_t _picture_type = 0;
MP2VFrame::Ptr _frame;
DtsGenerator _dts_generator;
};
/**
* MP2V (MPEG-2 Video) RTP encoder
* Packs MPEG-2 Video frames into RTP packets
* See RFC 2250
*/
class MP2VRtpEncoder : public RtpCodec {
public:
using Ptr = std::shared_ptr<MP2VRtpEncoder>;
/**
* Input an MPEG-2 Video frame
* @param frame the frame to be packetized
*/
bool inputFrame(const Frame::Ptr &frame) override;
private:
/**
* Build the RFC 2250 MPEG Video-specific header
* @param buf output buffer, at least 4 bytes
* @param data MPEG-2 ES data of this fragment
* @param size size of the ES data
* @param is_begin_of_slice whether this fragment starts at a slice boundary
* @param is_end_of_slice whether this fragment ends at a slice boundary
*/
void buildMpvHeader(uint8_t *buf, const uint8_t *data, size_t size,
bool is_begin_of_slice, bool is_end_of_slice);
/**
* Parse the picture type and temporal reference from the picture header
*/
void parsePictureInfo(const uint8_t *data, size_t size);
/**
* Check whether the data contains a sequence header
*/
bool hasSequenceHeader(const uint8_t *data, size_t size);
private:
uint16_t _temporal_ref = 0;
uint8_t _picture_type = 0;
uint8_t _fbv = 0;
uint8_t _bfc = 0;
uint8_t _ffv = 0;
uint8_t _ffc = 0;
bool _has_seq_header = false;
};
} // namespace mediakit
#endif // ZLMEDIAKIT_MP2VRTP_H


@ -11,16 +11,32 @@
#include "Opus.h"
#include "Extension/Factory.h"
#include "Extension/CommonRtp.h"
#include "Extension/CommonRtmp.h"
#include "OpusRtmp.h"
#include "opus-head.h"
using namespace std;
using namespace toolkit;
namespace mediakit {
void OpusTrack::setExtraData(const uint8_t *data, size_t size) {
opus_head_t header;
if (opus_head_load(data, size, &header) > 0) {
// Successfully parsed Opus header
_sample_rate = header.input_sample_rate;
_channels = header.channels;
}
}
Sdp::Ptr OpusTrack::getSdp(uint8_t payload_type) const {
return std::make_shared<DefaultSdp>(payload_type, *this);
Buffer::Ptr OpusTrack::getExtraData() const {
struct opus_head_t opus {};
opus.version = 1;
opus.channels = getAudioChannel();
opus.input_sample_rate = getAudioSampleRate();
// opus.pre_skip = 120;
opus.channel_mapping_family = 0;
auto ret = BufferRaw::create(29);
ret->setSize(opus_head_save(&opus, (uint8_t *)ret->data(), ret->getCapacity()));
return ret;
}
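For reference, the extra data produced above follows the RFC 7845 "OpusHead" layout. A hedged sketch of that 19-byte structure, assuming channel mapping family 0 (`makeOpusHead` is an illustrative helper, not the actual `opus_head_save()`):

```cpp
#include <cstdint>
#include <vector>

// Sketch only: serialize a minimal RFC 7845 OpusHead (mapping family 0).
// All multi-byte fields are little-endian.
inline std::vector<uint8_t> makeOpusHead(uint8_t channels, uint32_t sample_rate,
                                         uint16_t pre_skip = 0, int16_t gain = 0) {
    std::vector<uint8_t> h;
    const char magic[8] = { 'O', 'p', 'u', 's', 'H', 'e', 'a', 'd' };
    h.insert(h.end(), magic, magic + 8);    // 8-byte magic signature
    h.push_back(1);                         // version
    h.push_back(channels);                  // output channel count
    h.push_back(uint8_t(pre_skip & 0xFF));  // pre-skip, little-endian
    h.push_back(uint8_t(pre_skip >> 8));
    for (int i = 0; i < 4; i++)             // input sample rate, little-endian
        h.push_back(uint8_t((sample_rate >> (8 * i)) & 0xFF));
    h.push_back(uint8_t(gain & 0xFF));      // output gain (Q7.8), little-endian
    h.push_back(uint8_t(uint16_t(gain) >> 8));
    h.push_back(0);                         // channel mapping family 0
    return h;
}
```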
namespace {
@ -46,11 +62,11 @@ RtpCodec::Ptr getRtpDecoderByCodecId() {
}
RtmpCodec::Ptr getRtmpEncoderByTrack(const Track::Ptr &track) {
return std::make_shared<CommonRtmpEncoder>(track);
return std::make_shared<OpusRtmpEncoder>(track);
}
RtmpCodec::Ptr getRtmpDecoderByTrack(const Track::Ptr &track) {
return std::make_shared<CommonRtmpDecoder>(track);
return std::make_shared<OpusRtmpDecoder>(track);
}
Frame::Ptr getFrameFromPtr(const char *data, size_t bytes, uint64_t dts, uint64_t pts) {


@ -19,23 +19,20 @@ namespace mediakit {
/**
* Opus帧音频通道
* Audio track carrying Opus frames
* [AUTO-TRANSLATED:522e95da]
*/
class OpusTrack : public AudioTrackImp{
class OpusTrack : public AudioTrackImp {
public:
using Ptr = std::shared_ptr<OpusTrack>;
OpusTrack() : AudioTrackImp(CodecOpus,48000,2,16){}
private:
// 克隆该Track [AUTO-TRANSLATED:9a15682a]
// Clone this Track
Track::Ptr clone() const override {
return std::make_shared<OpusTrack>(*this);
}
// 生成sdp [AUTO-TRANSLATED:663a9367]
// Generate sdp
Sdp::Ptr getSdp(uint8_t payload_type) const override ;
toolkit::Buffer::Ptr getExtraData() const override;
void setExtraData(const uint8_t *data, size_t size) override;
};
}//namespace mediakit

ext-codec/OpusRtmp.cpp (new file)

@ -0,0 +1,113 @@
/*
* Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/ZLMediaKit/ZLMediaKit).
*
* Use of this source code is governed by MIT-like license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#include "OpusRtmp.h"
#include "Rtmp/utils.h"
#include "Common/config.h"
#include "Extension/Factory.h"
using namespace std;
using namespace toolkit;
namespace mediakit {
void OpusRtmpDecoder::inputRtmp(const RtmpPacket::Ptr &pkt) {
auto data = pkt->data();
int size = pkt->size();
auto flags = (uint8_t)data[0];
auto codec = (RtmpAudioCodec)(flags >> 4);
auto type = flags & 0x0F;
data++; size--;
if (codec == RtmpAudioCodec::ex_header) {
// @todo parse enhance audio header and check fourcc
data += 4;
size -= 4;
if (type == (uint8_t)RtmpPacketType::PacketTypeSequenceStart) {
getTrack()->setExtraData((uint8_t *)data, size);
} else {
outputFrame(data, size, pkt->time_stamp, pkt->time_stamp);
}
} else {
if (codec == RtmpAudioCodec::aac) {
uint8_t pkt_type = *data;
data++; size--;
if (pkt_type == (uint8_t)RtmpAACPacketType::aac_config_header) {
getTrack()->setExtraData((uint8_t *)data, size);
return;
}
}
outputFrame(data, size, pkt->time_stamp, pkt->time_stamp);
}
}
void OpusRtmpDecoder::outputFrame(const char *data, size_t size, uint32_t dts, uint32_t pts) {
RtmpCodec::inputFrame(Factory::getFrameFromPtr(getTrack()->getCodecId(), data, size, dts, pts));
}
////////////////////////////////////////////////////////////////////////
OpusRtmpEncoder::OpusRtmpEncoder(const Track::Ptr &track) : RtmpCodec(track) {
_enhanced = mINI::Instance()[Rtmp::kEnhanced];
}
bool OpusRtmpEncoder::inputFrame(const Frame::Ptr &frame) {
auto packet = RtmpPacket::create();
if (_enhanced) {
uint8_t flags = ((uint8_t)RtmpAudioCodec::ex_header << 4) | (uint8_t)RtmpPacketType::PacketTypeCodedFrames;
packet->buffer.push_back(flags);
uint32_t fourcc = htonl(getCodecFourCC(getTrack()->getCodecId()));
packet->buffer.append(reinterpret_cast<char *>(&fourcc), 4);
} else {
uint8_t flags = getAudioRtmpFlags(getTrack());
packet->buffer.push_back(flags);
if (getTrack()->getCodecId() == CodecAAC) {
packet->buffer.push_back((uint8_t)RtmpAACPacketType::aac_raw);
}
}
packet->buffer.append(frame->data(), frame->size());
packet->body_size = packet->buffer.size();
packet->time_stamp = frame->dts();
packet->chunk_id = CHUNK_AUDIO;
packet->stream_index = STREAM_MEDIA;
packet->type_id = MSG_AUDIO;
// Output rtmp packet
RtmpCodec::inputRtmp(packet);
return true;
}
void OpusRtmpEncoder::makeConfigPacket() {
auto extra_data = getTrack()->getExtraData();
if (!extra_data || !extra_data->size())
return;
auto packet = RtmpPacket::create();
if (_enhanced) {
uint8_t flags = ((uint8_t)RtmpAudioCodec::ex_header << 4) | (uint8_t)RtmpPacketType::PacketTypeSequenceStart;
packet->buffer.push_back(flags);
uint32_t fourcc = htonl(getCodecFourCC(getTrack()->getCodecId()));
packet->buffer.append(reinterpret_cast<char *>(&fourcc), 4);
} else {
uint8_t flags = getAudioRtmpFlags(getTrack());
packet->buffer.push_back(flags);
if (getTrack()->getCodecId() == CodecAAC) {
packet->buffer.push_back((uint8_t)RtmpAACPacketType::aac_config_header);
} else {
return;
}
}
packet->buffer.append(extra_data->data(), extra_data->size());
packet->body_size = packet->buffer.size();
packet->chunk_id = CHUNK_AUDIO;
packet->stream_index = STREAM_MEDIA;
packet->time_stamp = 0;
packet->type_id = MSG_AUDIO;
RtmpCodec::inputRtmp(packet);
}
} // namespace mediakit

ext-codec/OpusRtmp.h (new file)

@ -0,0 +1,51 @@
/*
* Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/ZLMediaKit/ZLMediaKit).
*
* Use of this source code is governed by MIT-like license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#ifndef ZLMEDIAKIT_OPUS_RTMPCODEC_H
#define ZLMEDIAKIT_OPUS_RTMPCODEC_H
#include "Rtmp/RtmpCodec.h"
#include "Extension/Track.h"
namespace mediakit {
/**
* Opus RTMP decoder
* Demultiplexes Opus over RTMP into OpusFrame objects
*/
class OpusRtmpDecoder : public RtmpCodec {
public:
using Ptr = std::shared_ptr<OpusRtmpDecoder>;
OpusRtmpDecoder(const Track::Ptr &track) : RtmpCodec(track) {}
void inputRtmp(const RtmpPacket::Ptr &rtmp) override;
protected:
void outputFrame(const char *data, size_t size, uint32_t dts, uint32_t pts);
};
/**
* Opus RTMP encoder (packetizer)
*/
class OpusRtmpEncoder : public RtmpCodec {
bool _enhanced = false;
public:
using Ptr = std::shared_ptr<OpusRtmpEncoder>;
OpusRtmpEncoder(const Track::Ptr &track);
bool inputFrame(const Frame::Ptr &frame) override;
void makeConfigPacket() override;
};
} // namespace mediakit
#endif // ZLMEDIAKIT_OPUS_RTMPCODEC_H

ext-codec/VP8.cpp (new file)

@ -0,0 +1,79 @@
#include "VP8.h"
#include "VP8Rtp.h"
#include "VpxRtmp.h"
#include "Extension/Factory.h"
using namespace std;
using namespace toolkit;
namespace mediakit {
bool VP8Track::inputFrame(const Frame::Ptr &frame) {
char *dataPtr = frame->data() + frame->prefixSize();
if (frame->keyFrame()) {
if (frame->size() - frame->prefixSize() < 10)
return false;
_width = ((dataPtr[7] << 8) + dataPtr[6]) & 0x3FFF;
_height = ((dataPtr[9] << 8) + dataPtr[8]) & 0x3FFF;
webm_vpx_codec_configuration_record_from_vp8(&_vpx, &_width, &_height, dataPtr, frame->size() - frame->prefixSize());
// InfoL << _width << "x" << _height;
}
return VideoTrackImp::inputFrame(frame);
}
Buffer::Ptr VP8Track::getExtraData() const {
auto ret = BufferRaw::create(8 + _vpx.codec_intialization_data_size);
ret->setSize(webm_vpx_codec_configuration_record_save(&_vpx, (uint8_t *)ret->data(), ret->getCapacity()));
return ret;
}
void VP8Track::setExtraData(const uint8_t *data, size_t size) {
webm_vpx_codec_configuration_record_load(data, size, &_vpx);
}
namespace {
CodecId getCodec() {
return CodecVP8;
}
Track::Ptr getTrackByCodecId(int sample_rate, int channels, int sample_bit) {
return std::make_shared<VP8Track>();
}
Track::Ptr getTrackBySdp(const SdpTrack::Ptr &track) {
return std::make_shared<VP8Track>();
}
RtpCodec::Ptr getRtpEncoderByCodecId(uint8_t pt) {
return std::make_shared<VP8RtpEncoder>();
}
RtpCodec::Ptr getRtpDecoderByCodecId() {
return std::make_shared<VP8RtpDecoder>();
}
RtmpCodec::Ptr getRtmpEncoderByTrack(const Track::Ptr &track) {
return std::make_shared<VpxRtmpEncoder>(track);
}
RtmpCodec::Ptr getRtmpDecoderByTrack(const Track::Ptr &track) {
return std::make_shared<VpxRtmpDecoder>(track);
}
Frame::Ptr getFrameFromPtr(const char *data, size_t bytes, uint64_t dts, uint64_t pts) {
return std::make_shared<VP8FrameNoCacheAble>((char *)data, bytes, dts, pts, 0);
}
} // namespace
CodecPlugin vp8_plugin = { getCodec,
getTrackByCodecId,
getTrackBySdp,
getRtpEncoderByCodecId,
getRtpDecoderByCodecId,
getRtmpEncoderByTrack,
getRtmpDecoderByTrack,
getFrameFromPtr };
} // namespace mediakit

ext-codec/VP8.h (new file)

@ -0,0 +1,49 @@
#ifndef ZLMEDIAKIT_VP8_H
#define ZLMEDIAKIT_VP8_H
#include "Extension/Frame.h"
#include "Extension/Track.h"
#include "webm-vpx.h"
namespace mediakit {
template <typename Parent>
class VP8FrameHelper : public Parent {
public:
friend class FrameImp;
//friend class toolkit::ResourcePool_l<VP8FrameHelper>;
using Ptr = std::shared_ptr<VP8FrameHelper>;
template <typename... ARGS>
VP8FrameHelper(ARGS &&...args)
: Parent(std::forward<ARGS>(args)...) {
this->_codec_id = CodecVP8;
}
bool keyFrame() const override {
auto ptr = (uint8_t *) this->data() + this->prefixSize();
return !(*ptr & 0x01);
}
bool configFrame() const override { return false; }
bool dropAble() const override { return false; }
bool decodeAble() const override { return true; }
};
/// VP8 frame classes
using VP8Frame = VP8FrameHelper<FrameImp>;
using VP8FrameNoCacheAble = VP8FrameHelper<FrameFromPtr>;
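The `keyFrame()` test above and the width/height extraction in `VP8Track::inputFrame()` both read the VP8 uncompressed-data-chunk header (RFC 6386 §9.1). A hedged sketch of that parse (`parseVp8FrameTag` is an illustrative helper, not part of this codebase):

```cpp
#include <cstddef>
#include <cstdint>

struct Vp8FrameTag { bool key_frame; int width; int height; };

// Sketch only: decode the VP8 frame tag. Bit 0 of byte 0 is the inverse
// key-frame flag; key frames then carry the 3-byte start code 9d 01 2a
// followed by 14-bit little-endian width and height.
inline bool parseVp8FrameTag(const uint8_t *d, size_t size, Vp8FrameTag &out) {
    if (size < 3) return false;
    out.key_frame = !(d[0] & 0x01);
    out.width = out.height = 0;
    if (!out.key_frame) return true;            // inter frame: nothing more to read
    if (size < 10) return false;
    if (d[3] != 0x9d || d[4] != 0x01 || d[5] != 0x2a) return false; // start code
    out.width  = ((d[7] << 8) | d[6]) & 0x3FFF; // same masks as inputFrame() above
    out.height = ((d[9] << 8) | d[8]) & 0x3FFF;
    return true;
}
```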
class VP8Track : public VideoTrackImp {
public:
VP8Track() : VideoTrackImp(CodecVP8) {}
Track::Ptr clone() const override { return std::make_shared<VP8Track>(*this); }
bool inputFrame(const Frame::Ptr &frame) override;
toolkit::Buffer::Ptr getExtraData() const override;
void setExtraData(const uint8_t *data, size_t size) override;
private:
webm_vpx_t _vpx {};
};
} // namespace mediakit
#endif

ext-codec/VP8Rtp.cpp (new file)

@ -0,0 +1,356 @@
/*
* Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/ZLMediaKit/ZLMediaKit).
*
* Use of this source code is governed by MIT-like license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#include "VP8Rtp.h"
#include "Extension/Frame.h"
#include "Common/config.h"
namespace mediakit{
const int16_t kNoPictureId = -1;
const int8_t kNoTl0PicIdx = -1;
const uint8_t kNoTemporalIdx = 0xFF;
const int kNoKeyIdx = -1;
// internal bits
constexpr int kXBit = 0x80;
constexpr int kNBit = 0x20;
constexpr int kSBit = 0x10;
constexpr int kKeyIdxField = 0x1F;
constexpr int kIBit = 0x80;
constexpr int kLBit = 0x40;
constexpr int kTBit = 0x20;
constexpr int kKBit = 0x10;
constexpr int kYBit = 0x20;
constexpr int kFailedToParse = 0;
// VP8 payload descriptor
// https://datatracker.ietf.org/doc/html/rfc7741#section-4.2
//
// 0 1 2 3 4 5 6 7
// +-+-+-+-+-+-+-+-+
// |X|R|N|S|R| PID | (REQUIRED)
// +-+-+-+-+-+-+-+-+
// X: |I|L|T|K| RSV | (OPTIONAL)
// +-+-+-+-+-+-+-+-+
// I: |M| PictureID | (OPTIONAL)
// +-+-+-+-+-+-+-+-+
// | PictureID |
// +-+-+-+-+-+-+-+-+
// L: | TL0PICIDX | (OPTIONAL)
// +-+-+-+-+-+-+-+-+
// T/K: |TID|Y| KEYIDX | (OPTIONAL)
// +-+-+-+-+-+-+-+-+
struct RTPVideoHeaderVP8 {
void InitRTPVideoHeaderVP8();
int Size() const;
int Write(uint8_t *data, int size) const;
int Read(const uint8_t *data, int data_length);
bool isFirstPacket() const { return beginningOfPartition && partitionId == 0; }
friend bool operator!=(const RTPVideoHeaderVP8 &lhs, const RTPVideoHeaderVP8 &rhs) { return !(lhs == rhs); }
friend bool operator==(const RTPVideoHeaderVP8 &lhs, const RTPVideoHeaderVP8 &rhs) {
return lhs.nonReference == rhs.nonReference && lhs.pictureId == rhs.pictureId && lhs.tl0PicIdx == rhs.tl0PicIdx && lhs.temporalIdx == rhs.temporalIdx
&& lhs.layerSync == rhs.layerSync && lhs.keyIdx == rhs.keyIdx && lhs.partitionId == rhs.partitionId
&& lhs.beginningOfPartition == rhs.beginningOfPartition;
}
bool nonReference; // Frame is discardable.
int16_t pictureId; // Picture ID index, 15 bits;
// kNoPictureId if PictureID does not exist.
int8_t tl0PicIdx; // TL0PIC_IDX, 8 bits;
// kNoTl0PicIdx means no value provided.
uint8_t temporalIdx; // Temporal layer index, or kNoTemporalIdx.
bool layerSync; // This frame is a layer sync frame.
// Disabled if temporalIdx == kNoTemporalIdx.
int8_t keyIdx; // 5 bits; kNoKeyIdx means not used.
int8_t partitionId; // VP8 partition ID
bool beginningOfPartition; // True if this packet is the first
// in a VP8 partition. Otherwise false
};
void RTPVideoHeaderVP8::InitRTPVideoHeaderVP8() {
nonReference = false;
pictureId = kNoPictureId;
tl0PicIdx = kNoTl0PicIdx;
temporalIdx = kNoTemporalIdx;
layerSync = false;
keyIdx = kNoKeyIdx;
partitionId = 0;
beginningOfPartition = false;
}
int RTPVideoHeaderVP8::Size() const {
bool tid_present = this->temporalIdx != kNoTemporalIdx;
bool keyid_present = this->keyIdx != kNoKeyIdx;
bool tl0_pid_present = this->tl0PicIdx != kNoTl0PicIdx;
bool pid_present = this->pictureId != kNoPictureId;
int ret = 2;
if (pid_present)
ret += 2;
if (tl0_pid_present)
ret++;
if (tid_present || keyid_present)
ret++;
return ret == 2 ? 1 : ret;
}
int RTPVideoHeaderVP8::Write(uint8_t *data, int size) const {
int ret = 0;
bool tid_present = this->temporalIdx != kNoTemporalIdx;
bool keyid_present = this->keyIdx != kNoKeyIdx;
bool tl0_pid_present = this->tl0PicIdx != kNoTl0PicIdx;
bool pid_present = this->pictureId != kNoPictureId;
uint8_t x_field = 0;
if (pid_present)
x_field |= kIBit;
if (tl0_pid_present)
x_field |= kLBit;
if (tid_present)
x_field |= kTBit;
if (keyid_present)
x_field |= kKBit;
uint8_t flags = 0;
if (x_field != 0)
flags |= kXBit;
if (this->nonReference)
flags |= kNBit;
// Create header as first packet in the frame. NextPacket() will clear it
// after first use.
flags |= kSBit;
data[ret++] = flags;
if (x_field == 0) {
return ret;
}
data[ret++] = x_field;
if (pid_present) {
const uint16_t pic_id = static_cast<uint16_t>(this->pictureId);
data[ret++] = (0x80 | ((pic_id >> 8) & 0x7F));
data[ret++] = (pic_id & 0xFF);
}
if (tl0_pid_present) {
data[ret++] = this->tl0PicIdx;
}
if (tid_present || keyid_present) {
uint8_t data_field = 0;
if (tid_present) {
data_field |= this->temporalIdx << 6;
if (this->layerSync)
data_field |= kYBit;
}
if (keyid_present) {
data_field |= (this->keyIdx & kKeyIdxField);
}
data[ret++] = data_field;
}
return ret;
}
int RTPVideoHeaderVP8::Read(const uint8_t *data, int data_length) {
// RTC_DCHECK_GT(data_length, 0);
int parsed_bytes = 0;
// Parse mandatory first byte of payload descriptor.
bool extension = (*data & 0x80) ? true : false; // X bit
this->nonReference = (*data & 0x20) ? true : false; // N bit
this->beginningOfPartition = (*data & 0x10) ? true : false; // S bit
this->partitionId = (*data & 0x07); // PID field
data++;
parsed_bytes++;
data_length--;
if (!extension)
return parsed_bytes;
if (data_length == 0)
return kFailedToParse;
// Optional X field is present.
bool has_picture_id = (*data & 0x80) ? true : false; // I bit
bool has_tl0_pic_idx = (*data & 0x40) ? true : false; // L bit
bool has_tid = (*data & 0x20) ? true : false; // T bit
bool has_key_idx = (*data & 0x10) ? true : false; // K bit
// Advance data and decrease remaining payload size.
data++;
parsed_bytes++;
data_length--;
if (has_picture_id) {
if (data_length == 0)
return kFailedToParse;
this->pictureId = (*data & 0x7F);
if (*data & 0x80) {
data++;
parsed_bytes++;
if (--data_length == 0)
return kFailedToParse;
// PictureId is 15 bits
this->pictureId = (this->pictureId << 8) + *data;
}
data++;
parsed_bytes++;
data_length--;
}
if (has_tl0_pic_idx) {
if (data_length == 0)
return kFailedToParse;
this->tl0PicIdx = *data;
data++;
parsed_bytes++;
data_length--;
}
if (has_tid || has_key_idx) {
if (data_length == 0)
return kFailedToParse;
if (has_tid) {
this->temporalIdx = ((*data >> 6) & 0x03);
this->layerSync = (*data & 0x20) ? true : false; // Y bit
}
if (has_key_idx) {
this->keyIdx = *data & 0x1F;
}
data++;
parsed_bytes++;
data_length--;
}
return parsed_bytes;
}
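The descriptor layout handled by `Read()` can be summarized with a small standalone sketch that computes only the descriptor length and the optional PictureID under the same RFC 7741 rules (`vp8DescriptorSize` is a hypothetical helper):

```cpp
#include <cstddef>
#include <cstdint>

// Sketch only: length of the RFC 7741 VP8 payload descriptor, plus the
// optional 7/15-bit PictureID. Returns 0 on truncation (kFailedToParse).
inline int vp8DescriptorSize(const uint8_t *d, size_t len, int &picture_id) {
    picture_id = -1;
    if (len < 1) return 0;
    size_t pos = 1;
    if (!(d[0] & 0x80)) return (int)pos;            // X bit clear: 1-byte descriptor
    if (pos >= len) return 0;
    uint8_t x = d[pos++];                           // extension byte: I|L|T|K|RSV
    if (x & 0x80) {                                 // I: PictureID present
        if (pos >= len) return 0;
        picture_id = d[pos] & 0x7F;
        if (d[pos] & 0x80) {                        // M: 15-bit PictureID
            if (pos + 1 >= len) return 0;
            picture_id = (picture_id << 8) | d[pos + 1];
            pos++;
        }
        pos++;
    }
    if (x & 0x40) { if (pos >= len) return 0; pos++; } // L: TL0PICIDX byte
    if (x & 0x30) { if (pos >= len) return 0; pos++; } // T/K: TID|Y|KEYIDX byte
    return (int)pos;
}
```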
/////////////////////////////////////////////////
// VP8RtpDecoder
VP8RtpDecoder::VP8RtpDecoder() {
obtainFrame();
}
void VP8RtpDecoder::obtainFrame() {
_frame = FrameImp::create<VP8Frame>();
}
bool VP8RtpDecoder::inputRtp(const RtpPacket::Ptr &rtp, bool key_pos) {
auto seq = rtp->getSeq();
bool ret = decodeRtp(rtp);
if (!_gop_dropped && seq != (uint16_t)(_last_seq + 1) && _last_seq) {
_gop_dropped = true;
WarnL << "start drop vp8 gop, last seq:" << _last_seq << ", rtp:\r\n" << rtp->dumpString();
}
_last_seq = seq;
return ret;
}
bool VP8RtpDecoder::decodeRtp(const RtpPacket::Ptr &rtp) {
auto payload_size = rtp->getPayloadSize();
if (payload_size <= 0) {
// No actual payload
return false;
}
auto payload = rtp->getPayload();
auto stamp = rtp->getStampMS();
auto seq = rtp->getSeq();
RTPVideoHeaderVP8 info;
int offset = info.Read(payload, payload_size);
if (!offset) {
//_frame_drop = true;
return false;
}
bool start = info.isFirstPacket();
if (start) {
_frame->_pts = stamp;
_frame->_buffer.clear();
_frame_drop = false;
}
if (_frame_drop) {
// This frame is incomplete
return false;
}
if (!start && seq != (uint16_t)(_last_seq + 1)) {
// Intermediate and trailing RTP packets must have consecutive seq numbers; otherwise an RTP packet was lost, the frame is incomplete and must be discarded
_frame_drop = true;
_frame->_buffer.clear();
return false;
}
// Append data
_frame->_buffer.append((char *)payload + offset, payload_size - offset);
bool end = rtp->getHeader()->mark;
if (end) {
// Ensure the first packet of the next frame must be received before resuming
_frame_drop = true;
// 该帧最后一个rtp包,输出frame [AUTO-TRANSLATED:a648aaa5]
// The last rtp packet of this frame, output frame
outputFrame(rtp);
}
return (info.isFirstPacket() && (payload[offset] & 0x01) == 0);
}
void VP8RtpDecoder::outputFrame(const RtpPacket::Ptr &rtp) {
if (_frame->dropAble()) {
// 不参与dts生成 [AUTO-TRANSLATED:dff3b747]
// Not involved in dts generation
_frame->_dts = _frame->_pts;
} else {
// rtsp没有dts那么根据pts排序算法生成dts [AUTO-TRANSLATED:f37c17f3]
// Rtsp does not have dts, so dts is generated according to the pts sorting algorithm
_dts_generator.getDts(_frame->_pts, _frame->_dts);
}
if (_frame->keyFrame() && _gop_dropped) {
_gop_dropped = false;
InfoL << "new gop received, rtp:\r\n" << rtp->dumpString();
}
if (!_gop_dropped || _frame->configFrame()) {
RtpCodec::inputFrame(_frame);
}
obtainFrame();
}
////////////////////////////////////////////////////////////////////////
bool VP8RtpEncoder::inputFrame(const Frame::Ptr &frame) {
RTPVideoHeaderVP8 info;
info.InitRTPVideoHeaderVP8();
info.beginningOfPartition = true;
info.nonReference = !frame->dropAble();
uint8_t header[20];
int header_size = info.Write(header, sizeof(header));
int pdu_size = getRtpInfo().getMaxSize() - header_size;
const char *ptr = frame->data() + frame->prefixSize();
size_t len = frame->size() - frame->prefixSize();
bool key = frame->keyFrame();
bool mark = false;
for (size_t pos = 0; pos < len; pos += pdu_size) {
if (static_cast<int>(len - pos) <= pdu_size) {
pdu_size = len - pos;
mark = true;
}
auto rtp = getRtpInfo().makeRtp(TrackVideo, nullptr, pdu_size + header_size, mark, frame->pts());
if (rtp) {
uint8_t *payload = rtp->getPayload();
memcpy(payload, header, header_size);
memcpy(payload + header_size, ptr + pos, pdu_size);
RtpCodec::inputRtp(rtp, key);
}
key = false;
header[0] &= (~kSBit); // Clear 'Start of partition' bit.
}
return true;
}
} // namespace mediakit
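The fragmentation loop in `VP8RtpEncoder::inputFrame()` above follows a common pattern: cut the frame into MTU-sized pieces, set the RTP marker on the last packet, and clear the S bit after the first fragment. A minimal sketch of just the splitting arithmetic (`splitFrame` and `Fragment` are illustrative names):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Fragment { size_t offset; size_t size; bool first; bool mark; };

// Sketch only: compute the fragments an MTU-bound packetizer would emit.
// 'first' maps to the S bit, 'mark' to the RTP marker bit.
inline std::vector<Fragment> splitFrame(size_t len, size_t max_payload) {
    std::vector<Fragment> out;
    for (size_t pos = 0; pos < len;) {
        size_t n = std::min(max_payload, len - pos);
        out.push_back({ pos, n, pos == 0, pos + n == len });
        pos += n;
    }
    return out;
}
```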

ext-codec/VP8Rtp.h (new file)

@ -0,0 +1,63 @@
/*
* Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/ZLMediaKit/ZLMediaKit).
*
* Use of this source code is governed by MIT-like license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#ifndef ZLMEDIAKIT_VP8RTPCODEC_H
#define ZLMEDIAKIT_VP8RTPCODEC_H
#include "VP8.h"
// for DtsGenerator
#include "Common/Stamp.h"
#include "Rtsp/RtpCodec.h"
namespace mediakit {
/**
* VP8 RTP decoder
* Demultiplexes VP8 over RTSP/RTP into VP8Frame objects
*/
class VP8RtpDecoder : public RtpCodec {
public:
using Ptr = std::shared_ptr<VP8RtpDecoder>;
VP8RtpDecoder();
/**
* Input a VP8 RTP packet
* @param rtp the RTP packet
* @param key_pos whether the packet is at a key frame position
*/
bool inputRtp(const RtpPacket::Ptr &rtp, bool key_pos = true) override;
private:
bool decodeRtp(const RtpPacket::Ptr &rtp);
void outputFrame(const RtpPacket::Ptr &rtp);
void obtainFrame();
private:
bool _gop_dropped = false;
bool _frame_drop = true;
uint16_t _last_seq = 0;
VP8Frame::Ptr _frame;
DtsGenerator _dts_generator;
};
/**
* VP8 RTP encoder (packetizer)
*/
class VP8RtpEncoder : public RtpCodec {
public:
using Ptr = std::shared_ptr<VP8RtpEncoder>;
bool inputFrame(const Frame::Ptr &frame) override;
};
}//namespace mediakit
#endif //ZLMEDIAKIT_VP8RTPCODEC_H

ext-codec/VP9.cpp (new file)

@ -0,0 +1,76 @@
#include "VP9.h"
#include "VP9Rtp.h"
#include "VpxRtmp.h"
#include "Extension/Factory.h"
using namespace std;
using namespace toolkit;
namespace mediakit {
bool VP9Track::inputFrame(const Frame::Ptr &frame) {
char *dataPtr = frame->data() + frame->prefixSize();
if (frame->keyFrame()) {
if (frame->size() - frame->prefixSize() < 10)
return false;
webm_vpx_codec_configuration_record_from_vp9(&_vpx, &_width, &_height, dataPtr, frame->size() - frame->prefixSize());
}
return VideoTrackImp::inputFrame(frame);
}
Buffer::Ptr VP9Track::getExtraData() const {
auto ret = BufferRaw::create(8 + _vpx.codec_intialization_data_size);
ret->setSize(webm_vpx_codec_configuration_record_save(&_vpx, (uint8_t *)ret->data(), ret->getCapacity()));
return ret;
}
void VP9Track::setExtraData(const uint8_t *data, size_t size) {
webm_vpx_codec_configuration_record_load(data, size, &_vpx);
}
namespace {
CodecId getCodec() {
return CodecVP9;
}
Track::Ptr getTrackByCodecId(int sample_rate, int channels, int sample_bit) {
return std::make_shared<VP9Track>();
}
Track::Ptr getTrackBySdp(const SdpTrack::Ptr &track) {
return std::make_shared<VP9Track>();
}
RtpCodec::Ptr getRtpEncoderByCodecId(uint8_t pt) {
return std::make_shared<VP9RtpEncoder>();
}
RtpCodec::Ptr getRtpDecoderByCodecId() {
return std::make_shared<VP9RtpDecoder>();
}
RtmpCodec::Ptr getRtmpEncoderByTrack(const Track::Ptr &track) {
return std::make_shared<VpxRtmpEncoder>(track);
}
RtmpCodec::Ptr getRtmpDecoderByTrack(const Track::Ptr &track) {
return std::make_shared<VpxRtmpDecoder>(track);
}
Frame::Ptr getFrameFromPtr(const char *data, size_t bytes, uint64_t dts, uint64_t pts) {
return std::make_shared<VP9FrameNoCacheAble>((char *)data, bytes, dts, pts, 0);
}
} // namespace
CodecPlugin vp9_plugin = { getCodec,
getTrackByCodecId,
getTrackBySdp,
getRtpEncoderByCodecId,
getRtpDecoderByCodecId,
getRtmpEncoderByTrack,
getRtmpDecoderByTrack,
getFrameFromPtr };
} // namespace mediakit

ext-codec/VP9.h (new file)

@ -0,0 +1,49 @@
#ifndef ZLMEDIAKIT_VP9_H
#define ZLMEDIAKIT_VP9_H
#include "Extension/Frame.h"
#include "Extension/Track.h"
#include "webm-vpx.h"
namespace mediakit {
template <typename Parent>
class VP9FrameHelper : public Parent {
public:
friend class FrameImp;
//friend class toolkit::ResourcePool_l<VP9FrameHelper>;
using Ptr = std::shared_ptr<VP9FrameHelper>;
template <typename... ARGS>
VP9FrameHelper(ARGS &&...args)
: Parent(std::forward<ARGS>(args)...) {
this->_codec_id = CodecVP9;
}
bool keyFrame() const override {
auto ptr = (uint8_t *) this->data() + this->prefixSize();
return (*ptr & 0x80);
}
bool configFrame() const override { return false; }
bool dropAble() const override { return false; }
bool decodeAble() const override { return true; }
};
/// VP9 frame classes
using VP9Frame = VP9FrameHelper<FrameImp>;
using VP9FrameNoCacheAble = VP9FrameHelper<FrameFromPtr>;
class VP9Track : public VideoTrackImp {
public:
VP9Track() : VideoTrackImp(CodecVP9) {}
Track::Ptr clone() const override { return std::make_shared<VP9Track>(*this); }
bool inputFrame(const Frame::Ptr &frame) override;
toolkit::Buffer::Ptr getExtraData() const override;
void setExtraData(const uint8_t *data, size_t size) override;
private:
webm_vpx_t _vpx {};
};
} // namespace mediakit
#endif

ext-codec/VP9Rtp.cpp (new file)

@ -0,0 +1,342 @@
/*
* Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/ZLMediaKit/ZLMediaKit).
*
* Use of this source code is governed by MIT-like license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#include "VP9Rtp.h"
#include "Extension/Frame.h"
#include "Common/config.h"
namespace mediakit{
const int16_t kNoPictureId = -1;
const int8_t kNoTl0PicIdx = -1;
const uint8_t kNoTemporalIdx = 0xFF;
const int kNoKeyIdx = -1;
struct VP9ResolutionLayer {
int width;
int height;
};
struct RTPPayloadVP9 {
bool hasPictureID = false;
bool interPicturePrediction = false;
bool hasLayerIndices = false;
bool flexibleMode = false;
bool beginningOfLayerFrame = false;
bool endingOfLayerFrame = false;
bool hasScalabilityStructure = false;
bool largePictureID = false;
int pictureID = -1;
int temporalID = -1;
bool isSwitchingUp = false;
int spatialID = -1;
bool isInterLayeredDepUsed = false;
int tl0PicIdx = -1;
int referenceIdx = -1;
bool additionalReferenceIdx = false;
int spatialLayers = -1;
bool hasResolution = false;
bool hasGof = false;
int numberOfFramesInGof = -1;
std::vector<VP9ResolutionLayer> resolutions;
int parse(unsigned char* data, int dataLength);
bool keyFrame() const { return beginningOfLayerFrame && !interPicturePrediction; }
std::string dump() const {
char line[64] = {0};
snprintf(line, sizeof(line), "%c%c%c%c%c%c%c- %d %d, %d %d",
hasPictureID ? 'I' : ' ',
interPicturePrediction ? 'P' : ' ',
hasLayerIndices ? 'L' : ' ',
flexibleMode ? 'F' : ' ',
beginningOfLayerFrame ? 'B' : ' ',
endingOfLayerFrame ? 'E' : ' ',
hasScalabilityStructure ? 'V' : ' ',
pictureID, tl0PicIdx,
spatialID, temporalID);
return line;
}
};
//
// VP9 format:
//
// Payload descriptor (Flexible mode F = 1)
// 0 1 2 3 4 5 6 7
// +-+-+-+-+-+-+-+-+
// |I|P|L|F|B|E|V|-| (REQUIRED)
// +-+-+-+-+-+-+-+-+
// I: |M| PICTURE ID | (REQUIRED)
// +-+-+-+-+-+-+-+-+
// M: | EXTENDED PID | (RECOMMENDED)
// +-+-+-+-+-+-+-+-+
// L: | T |U| S |D| (CONDITIONALLY RECOMMENDED)
// +-+-+-+-+-+-+-+-+ -
// P,F: | P_DIFF |N| (CONDITIONALLY REQUIRED) - up to 3 times
// +-+-+-+-+-+-+-+-+ -
// V: | SS |
// | .. |
// +-+-+-+-+-+-+-+-+
//
// Payload descriptor (Non flexible mode F = 0)
//
// 0 1 2 3 4 5 6 7
// +-+-+-+-+-+-+-+-+
// |I|P|L|F|B|E|V|-| (REQUIRED)
// +-+-+-+-+-+-+-+-+
// I: |M| PICTURE ID | (RECOMMENDED)
// +-+-+-+-+-+-+-+-+
// M: | EXTENDED PID | (RECOMMENDED)
// +-+-+-+-+-+-+-+-+
// L: | T |U| S |D| (CONDITIONALLY RECOMMENDED)
// +-+-+-+-+-+-+-+-+
// | TL0PICIDX | (CONDITIONALLY REQUIRED)
// +-+-+-+-+-+-+-+-+
// V: | SS |
// | .. |
// +-+-+-+-+-+-+-+-+
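The required first byte shown in both diagrams can be decoded as follows (`parseVp9Flags` is an illustrative helper; a key-frame start is B set with P clear, matching `RTPPayloadVP9::keyFrame()` below):

```cpp
#include <cstdint>

// Sketch only: flags of the required first byte of the VP9 payload
// descriptor (draft-ietf-payload-vp9).
struct Vp9DescFlags {
    bool i; // I: PictureID present
    bool p; // P: inter-picture predicted (clear on key frames)
    bool l; // L: layer indices present
    bool f; // F: flexible mode
    bool b; // B: beginning of a layer frame
    bool e; // E: end of a layer frame
    bool v; // V: scalability structure present
};

inline Vp9DescFlags parseVp9Flags(uint8_t byte) {
    return {
        (byte & 0x80) != 0, (byte & 0x40) != 0, (byte & 0x20) != 0,
        (byte & 0x10) != 0, (byte & 0x08) != 0, (byte & 0x04) != 0,
        (byte & 0x02) != 0,
    };
}
```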
#define kIBit 0x80
#define kPBit 0x40
#define kLBit 0x20
#define kFBit 0x10
#define kBBit 0x08
#define kEBit 0x04
#define kVBit 0x02
int RTPPayloadVP9::parse(unsigned char *data, int dataLength) {
const unsigned char* dataPtr = data;
const unsigned char* dataEnd = data + dataLength;
#define VP9_CHECK_BOUNDS(n) do { if (dataPtr + (n) > dataEnd) return -1; } while (0)
// Parse mandatory first byte of payload descriptor
VP9_CHECK_BOUNDS(1);
this->hasPictureID = (*dataPtr & kIBit); // I bit
this->interPicturePrediction = (*dataPtr & kPBit); // P bit
this->hasLayerIndices = (*dataPtr & kLBit); // L bit
this->flexibleMode = (*dataPtr & kFBit); // F bit
this->beginningOfLayerFrame = (*dataPtr & kBBit); // B bit
this->endingOfLayerFrame = (*dataPtr & kEBit); // E bit
this->hasScalabilityStructure = (*dataPtr & kVBit); // V bit
dataPtr++;
if (this->hasPictureID) {
VP9_CHECK_BOUNDS(1);
this->largePictureID = (*dataPtr & 0x80); // M bit
this->pictureID = (*dataPtr & 0x7F);
if (this->largePictureID) {
dataPtr++;
VP9_CHECK_BOUNDS(1);
this->pictureID = (this->pictureID << 8) + (*dataPtr & 0xFF);
}
dataPtr++;
}
if (this->hasLayerIndices) {
VP9_CHECK_BOUNDS(1);
this->temporalID = (*dataPtr & 0xE0) >> 5; // T bits
this->isSwitchingUp = (*dataPtr & 0x10); // U bit
this->spatialID = (*dataPtr & 0x0E) >> 1; // S bits
this->isInterLayeredDepUsed = (*dataPtr & 0x01); // D bit
if (this->flexibleMode) { // marked in webrtc code
do {
dataPtr++;
VP9_CHECK_BOUNDS(1);
this->referenceIdx = (*dataPtr & 0xFE) >> 1;
this->additionalReferenceIdx = (*dataPtr & 0x01); // D bit
} while (this->additionalReferenceIdx);
} else {
dataPtr++;
VP9_CHECK_BOUNDS(1);
this->tl0PicIdx = (*dataPtr & 0xFF);
}
dataPtr++;
}
if (this->flexibleMode && this->interPicturePrediction) {
/* Skip reference indices */
uint8_t nbit;
do {
VP9_CHECK_BOUNDS(1);
uint8_t p_diff = (*dataPtr & 0xFE) >> 1;
nbit = (*dataPtr & 0x01);
dataPtr++;
} while (nbit);
}
if (this->hasScalabilityStructure) {
VP9_CHECK_BOUNDS(1);
this->spatialLayers = (*dataPtr & 0xE0) >> 5; // N_S bits
this->hasResolution = (*dataPtr & 0x10); // Y bit
this->hasGof = (*dataPtr & 0x08); // G bit
dataPtr++;
if (this->hasResolution) {
for (int i = 0; i <= this->spatialLayers; i++) {
VP9_CHECK_BOUNDS(4);
int width = (dataPtr[0] << 8) + dataPtr[1];
dataPtr += 2;
int height = (dataPtr[0] << 8) + dataPtr[1];
dataPtr += 2;
// InfoL << "got vp9 " << width << "x" << height;
this->resolutions.push_back({ width, height });
}
}
if (this->hasGof) {
VP9_CHECK_BOUNDS(1);
this->numberOfFramesInGof = *dataPtr & 0xFF; // N_G bits
dataPtr++;
for (int frame_index = 0; frame_index < this->numberOfFramesInGof; frame_index++) {
// TODO(javierc): Read these values if needed
VP9_CHECK_BOUNDS(1);
int reference_indices = (*dataPtr & 0x0C) >> 2; // R bits
dataPtr++;
VP9_CHECK_BOUNDS(reference_indices);
for (int reference_index = 0; reference_index < reference_indices; reference_index++) {
dataPtr++;
}
}
}
}
#undef VP9_CHECK_BOUNDS
return dataPtr - data;
}
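For reference, the required first byte of the VP9 payload descriptor consumed by `RTPPayloadVP9::parse` above has a fixed flag layout (I|P|L|F|B|E|V|Z, per the IETF VP9 RTP payload draft). The helper below is an illustrative sketch of that first-byte decode, not the struct used in this file:

```cpp
#include <cstdint>

// Flags of the first VP9 payload descriptor byte (I|P|L|F|B|E|V|Z),
// mirroring the fields read by RTPPayloadVP9::parse above.
struct Vp9DescriptorFlags {
    bool hasPictureID;            // I bit (0x80)
    bool interPicturePrediction;  // P bit (0x40)
    bool hasLayerIndices;         // L bit (0x20)
    bool flexibleMode;            // F bit (0x10)
    bool beginningOfLayerFrame;   // B bit (0x08)
    bool endingOfLayerFrame;      // E bit (0x04)
    bool hasScalabilityStructure; // V bit (0x02)
};

inline Vp9DescriptorFlags parseVp9FirstByte(uint8_t b) {
    return Vp9DescriptorFlags {
        (b & 0x80) != 0, (b & 0x40) != 0, (b & 0x20) != 0, (b & 0x10) != 0,
        (b & 0x08) != 0, (b & 0x04) != 0, (b & 0x02) != 0,
    };
}
```

For example, `0x8C` decodes as a single-packet picture carrying a picture ID (I, B and E set).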
////////////////////////////////////////////////////
VP9RtpDecoder::VP9RtpDecoder() {
obtainFrame();
}
void VP9RtpDecoder::obtainFrame() {
_frame = FrameImp::create<VP9Frame>();
}
bool VP9RtpDecoder::inputRtp(const RtpPacket::Ptr &rtp, bool key_pos) {
auto seq = rtp->getSeq();
bool is_gop = decodeRtp(rtp);
if (!_gop_dropped && seq != (uint16_t)(_last_seq + 1) && _last_seq) {
_gop_dropped = true;
WarnL << "start drop VP9 gop, last seq:" << _last_seq << ", rtp:\r\n" << rtp->dumpString();
}
_last_seq = seq;
return is_gop;
}
bool VP9RtpDecoder::decodeRtp(const RtpPacket::Ptr &rtp) {
auto payload_size = rtp->getPayloadSize();
if (payload_size < 1) {
// No actual payload
return false;
}
auto payload = rtp->getPayload();
auto stamp = rtp->getStampMS();
auto seq = rtp->getSeq();
RTPPayloadVP9 info;
int offset = info.parse(payload, payload_size);
if (offset < 0) {
WarnL << "VP9 RTP payload parse failed, seq:" << seq;
return false;
}
// InfoL << rtp->dumpString() << "\n" << info.dump();
bool start = info.beginningOfLayerFrame;
if (start) {
_frame->_pts = stamp;
_frame->_buffer.clear();
_frame_drop = false;
}
if (_frame_drop) {
// This frame is incomplete
return false;
}
if (!start && seq != (uint16_t)(_last_seq + 1)) {
// The seq of intermediate or trailing rtp packets must be consecutive; otherwise rtp packets were lost, so the frame is incomplete and must be dropped
_frame_drop = true;
_frame->_buffer.clear();
return false;
}
// Append data
_frame->_buffer.append((char *)payload + offset, payload_size - offset);
if (info.endingOfLayerFrame) { // rtp->getHeader()->mark
// Ensure the next packet must start with beginningOfLayerFrame
_frame_drop = true;
// Last rtp packet of this frame; output the frame
outputFrame(rtp);
}
return info.keyFrame();
}
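The continuity checks in `inputRtp` and `decodeRtp` rely on unsigned 16-bit wraparound, so the packet after seq 65535 is seq 0. A minimal sketch of that comparison:

```cpp
#include <cstdint>

// True when `seq` directly follows `last` in RTP sequence-number
// arithmetic (modulo 2^16), matching the `(uint16_t)(_last_seq + 1)`
// comparisons used above.
inline bool isConsecutiveSeq(uint16_t last, uint16_t seq) {
    return seq == static_cast<uint16_t>(last + 1);
}
```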
void VP9RtpDecoder::outputFrame(const RtpPacket::Ptr &rtp) {
if (_frame->dropAble()) {
// 不参与dts生成 [AUTO-TRANSLATED:dff3b747]
// Not involved in dts generation
_frame->_dts = _frame->_pts;
} else {
// rtsp没有dts那么根据pts排序算法生成dts [AUTO-TRANSLATED:f37c17f3]
// Rtsp does not have dts, so dts is generated according to the pts sorting algorithm
_dts_generator.getDts(_frame->_pts, _frame->_dts);
}
if (_frame->keyFrame() && _gop_dropped) {
_gop_dropped = false;
InfoL << "new gop received, rtp:\r\n" << rtp->dumpString();
}
if (!_gop_dropped || _frame->configFrame()) {
// InfoL << _frame->pts() << " size=" << _frame->size();
RtpCodec::inputFrame(_frame);
}
obtainFrame();
}
////////////////////////////////////////////////////////////////////////
bool VP9RtpEncoder::inputFrame(const Frame::Ptr &frame) {
uint8_t header[20] = { 0 };
int nheader = 1;
header[0] = kBBit;
bool key = frame->keyFrame();
if (!key)
header[0] |= kPBit;
#if 1
header[0] |= kIBit;
if (++_pic_id > 0x7FFF) {
_pic_id = 0;
}
header[1] = (0x80 | ((_pic_id >> 8) & 0x7F));
header[2] = (_pic_id & 0xFF);
nheader += 2;
#endif
const char *ptr = frame->data() + frame->prefixSize();
int len = frame->size() - frame->prefixSize();
int pdu_size = getRtpInfo().getMaxSize() - nheader;
bool mark = false;
for (int pos = 0; pos < len; pos += pdu_size) {
if (len - pos <= pdu_size) {
pdu_size = len - pos;
header[0] |= kEBit;
mark = true;
}
auto rtp = getRtpInfo().makeRtp(TrackVideo, nullptr, pdu_size + nheader, mark, frame->pts());
if (rtp) {
uint8_t *payload = rtp->getPayload();
memcpy(payload, header, nheader);
memcpy(payload + nheader, ptr + pos, pdu_size);
RtpCodec::inputRtp(rtp, key);
}
key = false;
header[0] &= (~kBBit); // Clear 'Begin of partition' bit.
}
return true;
}
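The loop above splits a frame into chunks of at most `getRtpInfo().getMaxSize() - nheader` bytes, setting the E bit and the RTP marker only on the final chunk; the resulting packet count is a plain ceiling division. A sketch with illustrative names:

```cpp
// Number of RTP packets produced when a `len`-byte frame payload is
// split into chunks of at most `pdu_size` bytes, as in the
// fragmentation loop of VP9RtpEncoder::inputFrame above.
inline int rtpPacketCount(int len, int pdu_size) {
    return (len + pdu_size - 1) / pdu_size; // ceiling division
}
```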
} // namespace mediakit

ext-codec/VP9Rtp.h Normal file

@ -0,0 +1,64 @@
/*
* Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/ZLMediaKit/ZLMediaKit).
*
* Use of this source code is governed by MIT-like license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#ifndef ZLMEDIAKIT_VP9RTPCODEC_H
#define ZLMEDIAKIT_VP9RTPCODEC_H
#include "VP9.h"
// for DtsGenerator
#include "Common/Stamp.h"
#include "Rtsp/RtpCodec.h"
namespace mediakit {
/**
* VP9 rtp decoder class
* Decodes VP9 over rtsp-rtp and outputs VP9Frame
*/
class VP9RtpDecoder : public RtpCodec {
public:
using Ptr = std::shared_ptr<VP9RtpDecoder>;
VP9RtpDecoder();
/**
* Input a VP9 rtp packet
* @param rtp rtp packet
* @param key_pos
*/
bool inputRtp(const RtpPacket::Ptr &rtp, bool key_pos = true) override;
private:
bool decodeRtp(const RtpPacket::Ptr &rtp);
void outputFrame(const RtpPacket::Ptr &rtp);
void obtainFrame();
private:
bool _gop_dropped = false;
bool _frame_drop = true;
uint16_t _last_seq = 0;
VP9Frame::Ptr _frame;
DtsGenerator _dts_generator;
};
/**
* VP9 rtp packetizer class
*/
class VP9RtpEncoder : public RtpCodec {
public:
using Ptr = std::shared_ptr<VP9RtpEncoder>;
bool inputFrame(const Frame::Ptr &frame) override;
private:
uint16_t _pic_id = 0;
};
}//namespace mediakit
#endif //ZLMEDIAKIT_VP9RTPCODEC_H

ext-codec/VpxRtmp.cpp Normal file

@ -0,0 +1,153 @@
/*
* Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/ZLMediaKit/ZLMediaKit).
*
* Use of this source code is governed by MIT-like license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#include "VpxRtmp.h"
#include "Rtmp/utils.h"
#include "Common/config.h"
#include "Extension/Factory.h"
using namespace std;
using namespace toolkit;
namespace mediakit {
void VpxRtmpDecoder::inputRtmp(const RtmpPacket::Ptr &pkt) {
if (_info.codec == CodecInvalid) {
// First, determine if it is an enhanced rtmp
parseVideoRtmpPacket((uint8_t *)pkt->data(), pkt->size(), &_info);
}
if (_info.is_enhanced) {
// Enhanced rtmp
parseVideoRtmpPacket((uint8_t *)pkt->data(), pkt->size(), &_info);
if (!_info.is_enhanced || _info.codec != getTrack()->getCodecId()) {
throw std::invalid_argument("Invalid enhanced-rtmp packet!");
}
auto data = (uint8_t *)pkt->data() + RtmpPacketInfo::kEnhancedRtmpHeaderSize;
auto size = pkt->size() - RtmpPacketInfo::kEnhancedRtmpHeaderSize;
switch (_info.video.pkt_type) {
case RtmpPacketType::PacketTypeSequenceStart: {
getTrack()->setExtraData(data, size);
break;
}
case RtmpPacketType::PacketTypeCodedFramesX:
case RtmpPacketType::PacketTypeCodedFrames: {
auto pts = pkt->time_stamp;
if (RtmpPacketType::PacketTypeCodedFrames == _info.video.pkt_type) {
CHECK_RET(size > 3);
// SI24 = [CompositionTime Offset]
int32_t cts = (load_be24(data) + 0xff800000) ^ 0xff800000;
pts += cts;
data += 3;
size -= 3;
}
outputFrame((char*)data, size, pkt->time_stamp, pts);
break;
}
default:
WarnL << "Unknown pkt_type: " << (int)_info.video.pkt_type;
break;
}
} else {
CHECK_RET(pkt->size() > 5);
uint8_t *cts_ptr = (uint8_t *)(pkt->buffer.data() + 2);
int32_t cts = (load_be24(cts_ptr) + 0xff800000) ^ 0xff800000;
// Chinese domestic extension (12): Vpx over rtmp
if (pkt->isConfigFrame()) {
getTrack()->setExtraData((uint8_t *)pkt->data() + 5, pkt->size() - 5);
} else {
outputFrame(pkt->data() + 5, pkt->size() - 5, pkt->time_stamp, pkt->time_stamp + cts);
}
}
}
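The composition-time offset above is a signed 24-bit big-endian value; the expression `(load_be24(p) + 0xff800000) ^ 0xff800000` sign-extends it to 32 bits. A standalone sketch of the same trick (`load_be24_demo` is a hypothetical stand-in for the project's `load_be24` helper):

```cpp
#include <cstdint>

// Hypothetical stand-in for the project's load_be24: read a
// big-endian 24-bit value from a byte pointer.
inline uint32_t load_be24_demo(const uint8_t *p) {
    return (uint32_t(p[0]) << 16) | (uint32_t(p[1]) << 8) | uint32_t(p[2]);
}

// Sign-extend a 24-bit value to int32 with the add/xor trick used in
// VpxRtmpDecoder::inputRtmp above.
inline int32_t signExtend24(uint32_t v) {
    return int32_t((v + 0xff800000u) ^ 0xff800000u);
}
```

Adding `0xff800000` carries out of bit 23 when the sign bit is clear, and the xor then restores or fills the upper bits accordingly.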
void VpxRtmpDecoder::outputFrame(const char *data, size_t size, uint32_t dts, uint32_t pts) {
RtmpCodec::inputFrame(Factory::getFrameFromPtr(getTrack()->getCodecId(), data, size, dts, pts));
}
////////////////////////////////////////////////////////////////////////
VpxRtmpEncoder::VpxRtmpEncoder(const Track::Ptr &track) : RtmpCodec(track) {
_enhanced = mINI::Instance()[Rtmp::kEnhanced];
}
bool VpxRtmpEncoder::inputFrame(const Frame::Ptr &frame) {
auto packet = RtmpPacket::create();
packet->buffer.resize(8 + frame->size());
char *buff = packet->data();
int32_t cts = frame->pts() - frame->dts();
if (_enhanced) {
auto header = (RtmpVideoHeaderEnhanced *)buff;
header->enhanced = 1;
header->frame_type = frame->keyFrame() ? (int)RtmpFrameType::key_frame : (int)RtmpFrameType::inter_frame;
header->fourcc = htonl(getCodecFourCC(frame->getCodecId()));
buff += RtmpPacketInfo::kEnhancedRtmpHeaderSize;
if (cts) {
header->pkt_type = (uint8_t)RtmpPacketType::PacketTypeCodedFrames;
set_be24(buff, cts);
buff += 3;
} else {
header->pkt_type = (uint8_t)RtmpPacketType::PacketTypeCodedFramesX;
}
} else {
// flags
uint8_t flags = getCodecFlags(frame->getCodecId());
flags |= (uint8_t)(frame->keyFrame() ? RtmpFrameType::key_frame : RtmpFrameType::inter_frame) << 4;
buff[0] = flags;
buff[1] = (uint8_t)RtmpH264PacketType::h264_nalu;
// cts
set_be24(&buff[2], cts);
buff += 5;
}
packet->time_stamp = frame->dts();
memcpy(buff, frame->data(), frame->size());
buff += frame->size();
packet->body_size = buff - packet->data();
packet->chunk_id = CHUNK_VIDEO;
packet->stream_index = STREAM_MEDIA;
packet->type_id = MSG_VIDEO;
// Output rtmp packet
RtmpCodec::inputRtmp(packet);
return true;
}
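Enhanced RTMP identifies the codec with a 32-bit FourCC written in network byte order (the `header->fourcc = htonl(...)` above). How a four-character tag maps to that value can be sketched as follows (`makeFourCC` is an illustrative helper, not the project's `getCodecFourCC`):

```cpp
#include <cstdint>

// Pack four characters so that, once written big-endian (network
// order), the bytes read as the characters in sequence, e.g. "vp09".
constexpr uint32_t makeFourCC(char a, char b, char c, char d) {
    return (uint32_t(uint8_t(a)) << 24) | (uint32_t(uint8_t(b)) << 16)
         | (uint32_t(uint8_t(c)) << 8) | uint32_t(uint8_t(d));
}
```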
void VpxRtmpEncoder::makeConfigPacket() {
auto extra_data = getTrack()->getExtraData();
if (!extra_data || !extra_data->size())
return;
auto pkt = RtmpPacket::create();
pkt->body_size = 5 + extra_data->size();
pkt->buffer.resize(pkt->body_size);
auto buff = pkt->buffer.data();
if (_enhanced) {
auto header = (RtmpVideoHeaderEnhanced *)buff;
header->enhanced = 1;
header->pkt_type = (int)RtmpPacketType::PacketTypeSequenceStart;
header->frame_type = (int)RtmpFrameType::key_frame;
header->fourcc = htonl(getCodecFourCC(getTrack()->getCodecId()));
} else {
uint8_t flags = getCodecFlags(getTrack()->getCodecId());
flags |= ((uint8_t)RtmpFrameType::key_frame << 4);
buff[0] = flags;
buff[1] = (uint8_t)RtmpH264PacketType::h264_config_header;
// cts
memset(buff + 2, 0, 3);
}
memcpy(buff+5, extra_data->data(), extra_data->size());
pkt->chunk_id = CHUNK_VIDEO;
pkt->stream_index = STREAM_MEDIA;
pkt->time_stamp = 0;
pkt->type_id = MSG_VIDEO;
RtmpCodec::inputRtmp(pkt);
}
} // namespace mediakit

ext-codec/VpxRtmp.h Normal file

@ -0,0 +1,54 @@
/*
* Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
*
* This file is part of ZLMediaKit(https://github.com/ZLMediaKit/ZLMediaKit).
*
* Use of this source code is governed by MIT-like license that can be found in the
* LICENSE file in the root of the source tree. All contributing project authors
* may be found in the AUTHORS file in the root of the source tree.
*/
#ifndef ZLMEDIAKIT_VPX_RTMPCODEC_H
#define ZLMEDIAKIT_VPX_RTMPCODEC_H
#include "Rtmp/RtmpCodec.h"
#include "Extension/Track.h"
namespace mediakit {
/**
* Rtmp decoder class
* Decodes Vpx over rtmp and outputs VpxFrame
*/
class VpxRtmpDecoder : public RtmpCodec {
public:
using Ptr = std::shared_ptr<VpxRtmpDecoder>;
VpxRtmpDecoder(const Track::Ptr &track) : RtmpCodec(track) {}
void inputRtmp(const RtmpPacket::Ptr &rtmp) override;
protected:
void outputFrame(const char *data, size_t size, uint32_t dts, uint32_t pts);
protected:
RtmpPacketInfo _info;
};
/**
* Rtmp packetizer class
*/
class VpxRtmpEncoder : public RtmpCodec {
bool _enhanced = false;
public:
using Ptr = std::shared_ptr<VpxRtmpEncoder>;
VpxRtmpEncoder(const Track::Ptr &track);
bool inputFrame(const Frame::Ptr &frame) override;
void makeConfigPacket() override;
};
} // namespace mediakit
#endif // ZLMEDIAKIT_VPX_RTMPCODEC_H


@ -26,7 +26,7 @@ void AudioSRC::setOutputAudioConfig(const SDL_AudioSpec &cfg) {
int format = _delegate->getPCMFormat();
int channels = _delegate->getPCMChannel();
if (-1 == SDL_BuildAudioCVT(&_audio_cvt, format, channels, freq, cfg.format, cfg.channels, cfg.freq)) {
throw std::runtime_error("the format conversion is not supported");
throw std::runtime_error("the format conversion is not supported, " + string(SDL_GetError()));
}
InfoL << "audio cvt origin format, freq:" << freq << ", format:" << hex << format << dec << ", channels:" << channels;
InfoL << "audio cvt info, "


@ -1,6 +1,6 @@
# MIT License
#
# Copyright (c) 2016-2022 The ZLMediaKit project authors. All Rights Reserved.
# Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal


@ -18,7 +18,10 @@ using namespace toolkit;
INSTANCE_IMP(SDLAudioDevice);
SDLAudioDevice::~SDLAudioDevice() {
SDL_CloseAudio();
if (_device) {
SDL_CloseAudioDevice(_device);
_device = 0;
}
}
SDLAudioDevice::SDLAudioDevice() {
@ -33,9 +36,13 @@ SDLAudioDevice::SDLAudioDevice() {
SDLAudioDevice *_this = (SDLAudioDevice *) userdata;
_this->onReqPCM((char *) stream, len);
};
if (SDL_OpenAudioDevice(NULL, 0, &wanted_spec, &_audio_config, SDL_AUDIO_ALLOW_ANY_CHANGE) < 0) {
throw std::runtime_error("SDL_OpenAudioDevice failed");
}
_device = SDL_OpenAudioDevice(NULL, 0, &wanted_spec, &_audio_config, 0);
if (_device <= 0)
_device = SDL_OpenAudioDevice(NULL, 0, &wanted_spec, &_audio_config, SDL_AUDIO_ALLOW_ANY_CHANGE);
if (_device <= 0) {
throw std::runtime_error("SDL_OpenAudioDevice failed");
}
InfoL << "actual audioSpec, " << "freq:" << _audio_config.freq
<< ", format:" << hex << _audio_config.format << dec
@ -51,7 +58,7 @@ SDLAudioDevice::SDLAudioDevice() {
void SDLAudioDevice::addChannel(AudioSRC *chn) {
lock_guard<recursive_mutex> lck(_channel_mtx);
if (_channels.empty()) {
SDL_PauseAudio(0);
SDL_PauseAudioDevice(_device, false);
}
chn->setOutputAudioConfig(_audio_config);
_channels.emplace(chn);
@ -61,7 +68,7 @@ void SDLAudioDevice::delChannel(AudioSRC *chn) {
lock_guard<recursive_mutex> lck(_channel_mtx);
_channels.erase(chn);
if (_channels.empty()) {
SDL_PauseAudio(true);
SDL_PauseAudioDevice(_device, true);
}
}


@ -40,6 +40,7 @@ private:
void onReqPCM(char *stream, int len);
private:
SDL_AudioDeviceID _device;
std::shared_ptr<char> _play_buf;
SDL_AudioSpec _audio_config;
std::recursive_mutex _channel_mtx;


@ -135,16 +135,27 @@ public:
}
bool displayYUV(AVFrame *pFrame){
if (!_win) {
int w, h;
double hw = 0.0f;
w = pFrame->width;
h = pFrame->height;
hw = (double)h / (double)w;
w = 720;
h = w * hw;
if (_hwnd) {
_win = SDL_CreateWindowFrom(_hwnd);
}else {
_win = SDL_CreateWindow(_title.data(),
SDL_WINDOWPOS_UNDEFINED,
SDL_WINDOWPOS_UNDEFINED,
pFrame->width,
pFrame->height,
SDL_WINDOW_OPENGL);
w,
h,
SDL_WINDOW_OPENGL |SDL_WINDOW_RESIZABLE | SDL_WINDOW_SHOWN); // 允许最大化
}
SDL_SetWindowInputFocus(_win);
SDL_RaiseWindow(_win);
// SDL_GL_SetSwapInterval(1); // 1: enabling vsync makes the program wait for the display, trading a little latency for complete, tear-free frames.
}
if (_win && ! _render){
#if 0


@ -41,6 +41,10 @@ int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstanc, LPSTR lpCmdLine,
freopen_s(&stream, "CON", "r", stdin); // redirect stdin
freopen_s(&stream, "CON", "w", stdout); // redirect stdout
// Clear stream buffers; on Win11 text still cannot be printed unless the following code is also added
std::cin.clear();
std::cout.clear();
// 3. If we need the console window handle, we can obtain it via FindWindow
HWND _consoleHwnd;
SetConsoleTitleA("test_player"); // set the window title
@ -56,8 +60,8 @@ int main(int argc, char *argv[]) {
Logger::Instance().add(std::make_shared<ConsoleChannel>());
Logger::Instance().setWriter(std::make_shared<AsyncLogWriter>());
if (argc < 3) {
ErrorL << "\r\nUsage: ./test_player rtxp_url rtp_type\r\n"
if (argc < 2) {
ErrorL << "\r\nUsage: ./test_player rtxp_url [rtp_type] [play_track]\r\n"
<< "e.g.: ./test_player rtsp://admin:123456@127.0.0.1/live/0 0\r\n";
return 0;
}
@ -97,10 +101,14 @@ int main(int argc, char *argv[]) {
decoder->setOnDecode([audio_player, swr](const FFmpegFrame::Ptr &frame) mutable {
if (!swr) {
# if LIBAVCODEC_VERSION_INT >= FF_CODEC_VER_7_1
swr = std::make_shared<FFmpegSwr>(AV_SAMPLE_FMT_S16, &(frame->get()->ch_layout), frame->get()->sample_rate);
#else
swr = std::make_shared<FFmpegSwr>(AV_SAMPLE_FMT_S16, frame->get()->channels, frame->get()->channel_layout, frame->get()->sample_rate);
#endif
}
auto pcm = swr->inputFrame(frame);
auto len = pcm->get()->nb_samples * pcm->get()->channels * av_get_bytes_per_sample((enum AVSampleFormat)pcm->get()->format);
auto len = pcm->get()->nb_samples * pcm->getChannels() * av_get_bytes_per_sample((enum AVSampleFormat)pcm->get()->format);
audio_player->playPCM((const char *)(pcm->get()->data[0]), MIN(len, frame->get()->linesize[0]));
});
audioTrack->addDelegate([decoder](const Frame::Ptr &frame) { return decoder->inputFrame(frame, false, true); });
@ -108,10 +116,11 @@ int main(int argc, char *argv[]) {
});
player->setOnShutdown([](const SockException &ex) { WarnL << "play shutdown: " << ex.what(); });
(*player)[Client::kRtpType] = atoi(argv[2]);
// Do not wait for tracks to be ready before firing the play-success callback; this speeds up instant playback start
(*player)[Client::kWaitTrackReady] = false;
if (argc > 2) {
(*player)[Client::kRtpType] = atoi(argv[2]);
}
if (argc > 3) {
(*player)[Client::kPlayTrack] = atoi(argv[3]);
}


@ -39,14 +39,15 @@
"method": "GET",
"header": [],
"url": {
"raw": "{{ZLMediaKit_URL}}/index/api/getApiList?secret={{ZLMediaKit_secret}}&id=stack_test",
"raw": "{{ZLMediaKit_URL}}/index/api/stack/stop?secret={{ZLMediaKit_secret}}&id=stack_test",
"host": [
"{{ZLMediaKit_URL}}"
],
"path": [
"index",
"api",
"getApiList"
"stack",
"stop"
],
"query": [
{
@ -56,7 +57,44 @@
},
{
"key": "id",
"value": "stack_test"
"value": "stack_test",
"description": "Multi-screen stacking id"
}
]
}
},
"response": []
},
{
"name": "Reset multi-screen stacking (stack/reset)",
"request": {
"method": "POST",
"header": [],
"body": {
"mode": "raw",
"raw": "{\r\n \"gapv\": 0.002,\r\n \"gaph\": 0.001,\r\n \"width\": 1920,\r\n \"url\": [\r\n [\r\n \"rtsp://kkem.me/live/test3\",\r\n \"rtsp://kkem.me/live/cy1\",\r\n \"rtsp://kkem.me/live/cy1\",\r\n \"rtsp://kkem.me/live/cy2\"\r\n ],\r\n [\r\n \"rtsp://kkem.me/live/cy1\",\r\n \"rtsp://kkem.me/live/cy5\",\r\n \"rtsp://kkem.me/live/cy3\",\r\n \"rtsp://kkem.me/live/cy4\"\r\n ],\r\n [\r\n \"rtsp://kkem.me/live/cy5\",\r\n \"rtsp://kkem.me/live/cy6\",\r\n \"rtsp://kkem.me/live/cy7\",\r\n \"rtsp://kkem.me/live/cy8\"\r\n ],\r\n [\r\n \"rtsp://kkem.me/live/cy9\",\r\n \"rtsp://kkem.me/live/cy10\",\r\n \"rtsp://kkem.me/live/cy11\",\r\n \"rtsp://kkem.me/live/cy12\"\r\n ]\r\n ],\r\n \"id\": \"89\",\r\n \"row\": 4,\r\n \"col\": 4,\r\n \"height\": 1080,\r\n \"span\": [\r\n [\r\n [\r\n 0,\r\n 0\r\n ],\r\n [\r\n 1,\r\n 1\r\n ]\r\n ],\r\n [\r\n [\r\n 3,\r\n 0\r\n ],\r\n [\r\n 3,\r\n 1\r\n ]\r\n ],\r\n [\r\n [\r\n 2,\r\n 3\r\n ],\r\n [\r\n 3,\r\n 3\r\n ]\r\n ]\r\n ]\r\n}",
"options": {
"raw": {
"language": "json"
}
}
},
"url": {
"raw": "{{ZLMediaKit_URL}}/index/api/stack/reset?secret={{ZLMediaKit_secret}}",
"host": [
"{{ZLMediaKit_URL}}"
],
"path": [
"index",
"api",
"stack",
"reset"
],
"query": [
{
"key": "secret",
"value": "{{ZLMediaKit_secret}}",
"description": "API secret (set in the config file)"
}
]
}
@ -310,6 +348,53 @@
},
"response": []
},
{
"name": "Delete snapshots (deleteSnapDirectory)",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{ZLMediaKit_URL}}/index/api/deleteSnapDirectory?secret={{ZLMediaKit_secret}}&vhost={{defaultVhost}}&app=live&stream=test&file=71_1740828613.jpg",
"host": [
"{{ZLMediaKit_URL}}"
],
"path": [
"index",
"api",
"deleteSnapDirectory"
],
"query": [
{
"key": "secret",
"value": "{{ZLMediaKit_secret}}",
"description": "API secret (set in the config file)"
},
{
"key": "vhost",
"value": "{{defaultVhost}}",
"description": "Filter by vhost, e.g. __defaultVhost__"
},
{
"key": "app",
"value": "live",
"description": "Filter by app name, e.g. live"
},
{
"key": "stream",
"value": "test",
"description": "Filter by stream id, e.g. test"
},
{
"key": "file",
"value": "",
"disabled": true,
"description": "File name, optional"
}
]
}
},
"response": []
},
{
"name": "Close a single stream (close_stream)",
"request": {
@ -522,7 +607,7 @@
"response": []
},
{
"name": "Add rtsp/rtmp/hls/srt pull stream proxy (addStreamProxy)",
"name": "Add pull stream proxy (addStreamProxy)",
"request": {
"method": "GET",
"header": [],
@ -560,7 +645,7 @@
{
"key": "url",
"value": "rtmp://live.hkstv.hk.lxdns.com/live/hks2",
"description": "Pull stream url, e.g. rtmp://live.hkstv.hk.lxdns.com/live/hks2"
"description": "Pull stream url; supports the rtsp/rtmp/hls/srt/http-flv/http-ts protocols"
},
{
"key": "rtp_type",
@ -828,6 +913,12 @@
"description": "Number of push retries; retries forever when this parameter is omitted or <= 0",
"disabled": true
},
{
"key": "force",
"value": null,
"description": "Whether to force-add the proxy, default 0; when set to 1, keeps retrying even if pulling fails",
"disabled": true
},
{
"key": "latency",
"value": null,
@ -1179,19 +1270,19 @@
"response": []
},
{
"name": "Get stream info (getMp4RecordFile)",
"name": "Get recording file list (getMP4RecordFile)",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{ZLMediaKit_URL}}/index/api/getMp4RecordFile?secret={{ZLMediaKit_secret}}&vhost={{defaultVhost}}&app=proxy&stream=2&customized_path=/www&period=2020-05-26",
"raw": "{{ZLMediaKit_URL}}/index/api/getMP4RecordFile?secret={{ZLMediaKit_secret}}&vhost={{defaultVhost}}&app=proxy&stream=2&customized_path=/www&period=2020-05-26",
"host": [
"{{ZLMediaKit_URL}}"
],
"path": [
"index",
"api",
"getMp4RecordFile"
"getMP4RecordFile"
],
"query": [
{
@ -1333,6 +1424,62 @@
},
"response": []
},
{
"name": "Start event video recording (startRecordTask)",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{ZLMediaKit_URL}}/index/api/startRecordTask?secret={{ZLMediaKit_secret}}&vhost={{defaultVhost}}&app=live&stream=test&path=1.mp4&back_ms=10000&forward_ms=10000",
"host": [
"{{ZLMediaKit_URL}}"
],
"path": [
"index",
"api",
"startRecordTask"
],
"query": [
{
"key": "secret",
"value": "{{ZLMediaKit_secret}}",
"description": "API secret (set in the config file)"
},
{
"key": "vhost",
"value": "{{defaultVhost}}",
"description": "Vhost, e.g. __defaultVhost__"
},
{
"key": "app",
"value": "live",
"description": "应用名,例如 live"
},
{
"key": "stream",
"value": "test",
"description": "Stream id, e.g. obs"
},
{
"key": "path",
"value": "1.mp4",
"description": "Relative path (including file name) to save the recording file"
},
{
"key": "back_ms",
"value": "10000",
"description": "Look-back recording duration"
},
{
"key": "forward_ms",
"value": "10000",
"description": "Follow-on recording duration"
}
]
}
},
"response": []
},
{
"name": "Set recording playback speed (setRecordSpeed)",
"request": {
@ -1552,6 +1699,12 @@
"key": "expire_sec",
"value": "1",
"description": "Snapshot expiry time; snapshots produced within this period are returned from cache"
},
{
"key": "async",
"value": "0",
"disabled": true,
"description": "Whether to take snapshots asynchronously with zlm's built-in player/decoder API; faster snapshots but lower compatibility when enabled"
}
]
}
@ -1910,6 +2063,12 @@
"key": "stream_id",
"value": "test",
"description": "Stream id bound to this port"
},
{
"key": "pause_seconds",
"value": "300",
"description": "Pause timeout monitoring; it resumes after pause_seconds",
"disabled": true
}
]
}
@ -2086,6 +2245,12 @@
"value": "",
"description": "Receive while sending rtp, usually for two-way voice intercom; if not empty, receiving is enabled and the value is the id of the received stream",
"disabled": true
},
{
"key": "enable_origin_recv_limit",
"value": "1",
"description": "When forwarding rtp (tcp mode), whether to throttle the source's receive speed if sending cannot keep up; mainly useful for multi-speed rtp forwarding",
"disabled": true
}
]
}
@ -2180,6 +2345,12 @@
"value": "5000",
"description": "Timeout for waiting for the tcp connection, in milliseconds, default 5000 ms",
"disabled": true
},
{
"key": "enable_origin_recv_limit",
"value": "1",
"description": "When forwarding rtp (tcp mode), whether to throttle the source's receive speed if sending cannot keep up; mainly useful for multi-speed rtp forwarding",
"disabled": true
}
]
}
@ -2255,6 +2426,12 @@
"value": "1",
"description": "When packing rtp as ES, whether to pack audio only; optional parameter",
"disabled": true
},
{
"key": "enable_origin_recv_limit",
"value": "1",
"description": "When forwarding rtp (tcp mode), whether to throttle the source's receive speed if sending cannot keep up; mainly useful for multi-speed rtp forwarding",
"disabled": true
}
]
}
@ -2479,6 +2656,18 @@
"description": "Whether to loop the on-demand mp4 file; ignored if looping is already enabled in the config file",
"disabled": true
},
{
"key": "seek_ms",
"value": "0",
"description": "Seek the on-demand playback to a given position, in milliseconds",
"disabled": true
},
{
"key": "speed",
"value": "1.0",
"description": "Playback speed, float",
"disabled": true
},
{
"key": "enable_hls",
"value": "",
@ -2599,7 +2788,498 @@
}
},
"response": []
}
},
{
"name": "WebRTC - register with the signaling server (addWebrtcRoomKeeper)",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{ZLMediaKit_URL}}/index/api/addWebrtcRoomKeeper?secret={{ZLMediaKit_secret}}&server_host=127.0.0.1&server_port=3000&room_id=peer_1",
"host": [
"{{ZLMediaKit_URL}}"
],
"path": [
"index",
"api",
"addWebrtcRoomKeeper"
],
"query": [
{
"key": "secret",
"value": "{{ZLMediaKit_secret}}"
},
{
"key": "server_host",
"value": "127.0.0.1",
"description": "Address of the signaling server to register with"
},
{
"key": "server_port",
"value": "3000",
"description": "要注册到的信令服务器端口"
},
{
"key": "room_id",
"value": "peer_1",
"description": "Room id to register with"
}
]
}
},
"response": []
},
{
"name": "WebRTC - deregister from the signaling server (delWebrtcRoomKeeper)",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{ZLMediaKit_URL}}/index/api/delWebrtcRoomKeeper?secret={{ZLMediaKit_secret}}&room_key=",
"host": [
"{{ZLMediaKit_URL}}"
],
"path": [
"index",
"api",
"delWebrtcRoomKeeper"
],
"query": [
{
"key": "secret",
"value": "{{ZLMediaKit_secret}}"
},
{
"key": "room_key",
"value": ""
}
]
}
},
"response": []
},
{
"name": "WebRTC - peer view of registration info (listWebrtcRoomKeepers)",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{ZLMediaKit_URL}}/index/api/listWebrtcRoomKeepers?secret={{ZLMediaKit_secret}}",
"host": [
"{{ZLMediaKit_URL}}"
],
"path": [
"index",
"api",
"listWebrtcRoomKeepers"
],
"query": [
{
"key": "secret",
"value": "{{ZLMediaKit_secret}}"
}
]
}
},
"response": []
},
{
"name": "WebRTC - signaling server view of registration info (listWebrtcRooms)",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{ZLMediaKit_URL}}/index/api/listWebrtcRooms?secret={{ZLMediaKit_secret}}",
"host": [
"{{ZLMediaKit_URL}}"
],
"path": [
"index",
"api",
"listWebrtcRooms"
],
"query": [
{
"key": "secret",
"value": "{{ZLMediaKit_secret}}"
}
]
}
},
"response": []
},
{
"name": "WebRTC - view WebRTCProxyPlayer connection info (getWebrtcProxyPlayerInfo)",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{ZLMediaKit_URL}}/index/api/getWebrtcProxyPlayerInfo?secret={{ZLMediaKit_secret}}&key=__defaultVhost__/live/test",
"host": [
"{{ZLMediaKit_URL}}"
],
"path": [
"index",
"api",
"getWebrtcProxyPlayerInfo"
],
"query": [
{
"key": "secret",
"value": "{{ZLMediaKit_secret}}"
},
{
"key": "key",
"value": "__defaultVhost__/live/test"
}
]
}
},
"response": []
},
{
"name": "Onvif search",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{ZLMediaKit_URL}}/index/api/searchOnvifDevice?secret={{ZLMediaKit_secret}}&timeout_ms=5000",
"host": [
"{{ZLMediaKit_URL}}"
],
"path": [
"index",
"api",
"searchOnvifDevice"
],
"query": [
{
"key": "secret",
"value": "{{ZLMediaKit_secret}}"
},
{
"key": "subnet_prefix",
"value": "192.168.1"
}
]
}
},
"response": []
},
{
"name": "Get onvif device url",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{ZLMediaKit_URL}}/index/api/getStreamUrl?secret={{ZLMediaKit_secret}}&onvif_url=http://xxxx/onvif/device_service",
"host": [
"{{ZLMediaKit_URL}}"
],
"path": [
"index",
"api",
"getStreamUrl"
],
"query": [
{
"key": "secret",
"value": "{{ZLMediaKit_secret}}"
},
{
"key": "onvif_url",
"value": "http://xxxx/onvif/device_service"
}
]
}
},
"response": []
},
{
"name": "Download the program binary (downloadBin)",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{ZLMediaKit_URL}}/index/api/downloadBin?secret={{ZLMediaKit_secret}}",
"host": [
"{{ZLMediaKit_URL}}"
],
"path": [
"index",
"api",
"downloadBin"
],
"query": [
{
"key": "secret",
"value": "{{ZLMediaKit_secret}}",
"description": "API secret (set in the config file)"
}
]
}
},
"response": []
},
{
"name": "WebRTC exchange (webrtc)",
"request": {
"method": "POST",
"header": [
{
"key": "Content-Type",
"value": "application/json"
}
],
"body": {
"mode": "raw",
"raw": ""
},
"url": {
"raw": "{{ZLMediaKit_URL}}/index/api/webrtc?secret={{ZLMediaKit_secret}}&type=play&app=live&stream=test",
"host": [
"{{ZLMediaKit_URL}}"
],
"path": [
"index",
"api",
"webrtc"
],
"query": [
{
"key": "type",
"value": "play",
"description": "Webrtc type: play for playback, push for publishing, echo for echo test"
},
{
"key": "app",
"value": "live",
"description": "App name"
},
{
"key": "stream",
"value": "test",
"description": "Stream id"
},
{
"key": "preferred_tcp",
"value": null,
"description": "Whether to prefer webrtc over tcp",
"disabled": true
},
{
"key": "cand_udp",
"value": "test",
"description": "Specify the zlm server udp candidate",
"disabled": true
},
{
"key": "cand_tcp",
"value": null,
"description": "Specify the zlm server tcp candidate",
"disabled": true
}
]
},
"description": "WebRTC exchange endpoint; the body is the SDP offer"
},
"response": []
},
{
"name": "WebRTC - WHIP publish (whip)",
"request": {
"method": "POST",
"header": [
{
"key": "Content-Type",
"value": "application/sdp"
}
],
"body": {
"mode": "raw",
"raw": ""
},
"url": {
"raw": "{{ZLMediaKit_URL}}/index/api/whip?app=live&stream=test",
"host": [
"{{ZLMediaKit_URL}}"
],
"path": [
"index",
"api",
"whip"
],
"query": [
{
"key": "app",
"value": "live",
"description": "App name"
},
{
"key": "stream",
"value": "test",
"description": "Stream id"
},
{
"key": "preferred_tcp",
"value": null,
"description": "Whether to prefer webrtc over tcp",
"disabled": true
},
{
"key": "cand_udp",
"value": "test",
"description": "Specify the zlm server udp candidate",
"disabled": true
},
{
"key": "cand_tcp",
"value": null,
"description": "指定zlm服务器tcp candidate",
"disabled": true
}
]
},
"description": "Standard WebRTC WHIP publish endpoint; the body is the SDP offer"
},
"response": []
},
{
"name": "WebRTC - WHEP play (whep)",
"request": {
"method": "POST",
"header": [
{
"key": "Content-Type",
"value": "application/sdp"
}
],
"body": {
"mode": "raw",
"raw": ""
},
"url": {
"raw": "{{ZLMediaKit_URL}}/index/api/whep?app=live&stream=test",
"host": [
"{{ZLMediaKit_URL}}"
],
"path": [
"index",
"api",
"whep"
],
"query": [
{
"key": "app",
"value": "live",
"description": "App name"
},
{
"key": "stream",
"value": "test",
"description": "Stream id"
},
{
"key": "preferred_tcp",
"value": null,
"description": "Whether to prefer webrtc over tcp",
"disabled": true
},
{
"key": "cand_udp",
"value": "test",
"description": "Specify the zlm server udp candidate",
"disabled": true
},
{
"key": "cand_tcp",
"value": null,
"description": "Specify the zlm server tcp candidate",
"disabled": true
}
]
},
"description": "Standard WebRTC WHEP play endpoint; the body is the SDP offer"
},
"response": []
},
{
"name": "WebRTC - delete connection (delete_webrtc)",
"request": {
"method": "DELETE",
"header": [],
"url": {
"raw": "{{ZLMediaKit_URL}}/index/api/delete_webrtc?id=&token=",
"host": [
"{{ZLMediaKit_URL}}"
],
"path": [
"index",
"api",
"delete_webrtc"
],
"query": [
{
"key": "id",
"value": "",
"description": "Unique id of the WebRTC connection"
},
{
"key": "token",
"value": "",
"description": "Verification token for the delete operation"
}
]
},
"description": "Delete a WebRTC connection; the DELETE method must be used. The id and token are obtained from the Location header returned by the whip/whep endpoints."
},
"response": []
},
{
"name": "Login (login)",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{ZLMediaKit_URL}}/index/api/login?digest=d00414822dfd8eabed87c5e24ffcdca7",
"host": [
"{{ZLMediaKit_URL}}"
],
"path": [
"index",
"api",
"login"
],
"query": [
{
"key": "digest",
"value": "",
"description": "MD5(\"zlmediakit:\"+${secret}+\":\" +${cookie})"
}
]
}
},
"response": []
},
{
"name": "Logout (logout)",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "{{ZLMediaKit_URL}}/index/api/logout",
"host": [
"{{ZLMediaKit_URL}}"
],
"path": [
"index",
"api",
"logout"
]
}
},
"response": []
}
],
"event": [
{

resource.rc Normal file

@ -0,0 +1,48 @@
#ifdef APSTUDIO_INVOKED
#error "This file is not editable by Visual C++."
#endif //APSTUDIO_INVOKED
#include "winres.h"
#if defined(ENABLE_VERSION)
#include "ZLMVersion.h"
#endif
#define ZLM_VERSION 8,0,0,1
// Concatenate BRANCH_NAME and COMMIT_HASH, e.g. master - 1c8ed1c
#define COMMIT_HASH_BRANCH_STR BRANCH_NAME " - " COMMIT_HASH
IDI_ICON1 ICON DISCARDABLE "www//logo.ico"
VS_VERSION_INFO VERSIONINFO
FILEVERSION ZLM_VERSION
PRODUCTVERSION ZLM_VERSION
FILEFLAGSMASK 0x17L
#ifdef _DEBUG
FILEFLAGS 0x1L
#else
FILEFLAGS 0x0L
#endif
FILEOS 0x4L
FILETYPE 0x2L
FILESUBTYPE 0x0L
BEGIN
BLOCK "StringFileInfo"
BEGIN
BLOCK "000004b0"
BEGIN
VALUE "CompanyName", "Applied ZLMediaKit Informatics Software"
VALUE "FileDescription", "This file is part of the C++ ZLM"
VALUE "FileVersion", COMMIT_HASH_BRANCH_STR
VALUE "InternalName", COMMIT_HASH_BRANCH_STR
VALUE "LegalCopyright", "Copyright (c) 2016-present The ZLMediaKit project authors"
VALUE "ProductName", "https://github.com/ZLMediaKit"
VALUE "ProductVersion", COMMIT_HASH_BRANCH_STR
END
END
BLOCK "VarFileInfo"
BEGIN
VALUE "Translation", 0x0, 1200
END
END


@ -1,6 +1,6 @@
# MIT License
#
# Copyright (c) 2016-2022 The ZLMediaKit project authors. All Rights Reserved.
# Copyright (c) 2016-present The ZLMediaKit project authors. All Rights Reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
@ -50,10 +50,43 @@ target_compile_definitions(MediaServer
target_compile_options(MediaServer
PRIVATE ${COMPILE_OPTIONS_DEFAULT})
if(MINGW)
update_cached_list(MK_LINK_LIBRARIES dbghelp)
endif()
if(CMAKE_SYSTEM_NAME MATCHES "Linux")
target_link_libraries(MediaServer -Wl,--start-group ${MK_LINK_LIBRARIES} -Wl,--end-group)
else()
target_link_libraries(MediaServer ${MK_LINK_LIBRARIES})
endif()
if(MSVC)
set(RESOURCE_FILE "${CMAKE_SOURCE_DIR}/resource.rc")
set_source_files_properties(${RESOURCE_FILE} PROPERTIES LANGUAGE RC)
target_sources(MediaServer PRIVATE ${RESOURCE_FILE})
else()
# Android, IOS, macOS ...
# CLion, GCC ...
endif()
install(TARGETS MediaServer DESTINATION ${INSTALL_PATH_RUNTIME})
# release / debug
string(TOLOWER ${CMAKE_BUILD_TYPE} CMAKE_BUILD_TYPE_LOWER)
if(UNIX AND ENABLE_OBJCOPY)
if("${CMAKE_BUILD_TYPE_LOWER}" STREQUAL "release")
find_program(OBJCOPY_FOUND objcopy)
if (OBJCOPY_FOUND)
add_custom_command(TARGET MediaServer
POST_BUILD
COMMAND objcopy --only-keep-debug ${EXECUTABLE_OUTPUT_PATH}/MediaServer ${EXECUTABLE_OUTPUT_PATH}/MediaServer.debug
COMMAND objcopy --strip-all ${EXECUTABLE_OUTPUT_PATH}/MediaServer
COMMAND objcopy --add-gnu-debuglink=${EXECUTABLE_OUTPUT_PATH}/MediaServer.debug ${EXECUTABLE_OUTPUT_PATH}/MediaServer
)
install(FILES ${EXECUTABLE_OUTPUT_PATH}/MediaServer.debug DESTINATION ${INSTALL_PATH_RUNTIME})
else()
message(STATUS "objcopy not found, skipping MediaServer.debug generation")
endif()
endif()
endif()


@@ -84,86 +84,92 @@ void FFmpegSource::play(const string &ffmpeg_cmd_key, const string &src_url, con
try {
_media_info.parse(dst_url);
} catch (std::exception &ex) {
cb(SockException(Err_other, ex.what()));
return;
}
auto ffmpeg_cmd = ffmpeg_cmd_default;
if (!ffmpeg_cmd_key.empty()) {
auto cmd_it = mINI::Instance().find(ffmpeg_cmd_key);
if (cmd_it != mINI::Instance().end()) {
ffmpeg_cmd = cmd_it->second;
auto ffmpeg_cmd = ffmpeg_cmd_default;
if (!ffmpeg_cmd_key.empty()) {
auto cmd_it = mINI::Instance().find(ffmpeg_cmd_key);
if (cmd_it != mINI::Instance().end()) {
ffmpeg_cmd = cmd_it->second;
} else {
WarnL << "配置文件中,ffmpeg命令模板(" << ffmpeg_cmd_key << ")不存在,已采用默认模板(" << ffmpeg_cmd_default << ")";
}
}
if (!toolkit::start_with(ffmpeg_cmd, "%s")) {
throw std::invalid_argument("ffmpeg cmd template must start with '%s'");
}
char cmd[2048] = { 0 };
snprintf(cmd, sizeof(cmd), ffmpeg_cmd.data(), File::absolutePath("", ffmpeg_bin).data(), src_url.data(), dst_url.data());
auto log_file = ffmpeg_log.empty() ? "" : File::absolutePath("", ffmpeg_log);
_process.run(cmd, log_file);
_cmd = cmd;
InfoL << cmd;
if (is_local_ip(_media_info.host)) {
// 推流给自己的,通过判断流是否注册上来判断是否正常 [AUTO-TRANSLATED:423f2be6]
// Push stream to yourself, judge whether the stream is registered to determine whether it is normal
if (_media_info.schema != RTSP_SCHEMA && _media_info.schema != RTMP_SCHEMA && _media_info.schema != "srt") {
cb(SockException(Err_other, "本服务只支持rtmp/rtsp/srt推流"));
return;
}
weak_ptr<FFmpegSource> weakSelf = shared_from_this();
findAsync(timeout_ms, [cb, weakSelf, timeout_ms](const MediaSource::Ptr &src) {
auto strongSelf = weakSelf.lock();
if (!strongSelf) {
// 自己已经销毁 [AUTO-TRANSLATED:3d45c3b0]
// Self has been destroyed
return;
}
if (src) {
// 推流给自己成功 [AUTO-TRANSLATED:65dba71b]
// Push stream to yourself successfully
cb(SockException());
strongSelf->onGetMediaSource(src);
strongSelf->startTimer(timeout_ms);
return;
}
// 推流失败 [AUTO-TRANSLATED:4d8d226a]
// Push stream failed
if (!strongSelf->_process.wait(false)) {
// ffmpeg进程已经退出 [AUTO-TRANSLATED:04193893]
// ffmpeg process has exited
cb(SockException(Err_other, StrPrinter << "ffmpeg已经退出,exit code = " << strongSelf->_process.exit_code()));
return;
}
// ffmpeg进程还在线但是等待推流超时 [AUTO-TRANSLATED:9f71f17b]
// ffmpeg process is still online, but waiting for the stream to timeout
cb(SockException(Err_other, "等待超时"));
});
} else {
WarnL << "配置文件中,ffmpeg命令模板(" << ffmpeg_cmd_key << ")不存在,已采用默认模板(" << ffmpeg_cmd_default << ")";
// 推流给其他服务器的通过判断FFmpeg进程是否在线判断是否成功 [AUTO-TRANSLATED:9b963da5]
// Push stream to other servers, judge whether it is successful by judging whether the FFmpeg process is online
weak_ptr<FFmpegSource> weakSelf = shared_from_this();
_timer = std::make_shared<Timer>(
timeout_ms / 1000.0f,
[weakSelf, cb, timeout_ms]() {
auto strongSelf = weakSelf.lock();
if (!strongSelf) {
// 自身已经销毁 [AUTO-TRANSLATED:5f954f8a]
// Self has been destroyed
return false;
}
// FFmpeg还在线那么我们认为推流成功 [AUTO-TRANSLATED:4330df49]
// FFmpeg is still online, so we think the push stream is successful
if (strongSelf->_process.wait(false)) {
cb(SockException());
strongSelf->startTimer(timeout_ms);
return false;
}
// ffmpeg进程已经退出 [AUTO-TRANSLATED:04193893]
// ffmpeg process has exited
cb(SockException(Err_other, StrPrinter << "ffmpeg已经退出,exit code = " << strongSelf->_process.exit_code()));
return false;
},
_poller);
}
}
char cmd[2048] = { 0 };
snprintf(cmd, sizeof(cmd), ffmpeg_cmd.data(), File::absolutePath("", ffmpeg_bin).data(), src_url.data(), dst_url.data());
auto log_file = ffmpeg_log.empty() ? "" : File::absolutePath("", ffmpeg_log);
_process.run(cmd, log_file);
_cmd = cmd;
InfoL << cmd;
if (is_local_ip(_media_info.host)) {
// 推流给自己的,通过判断流是否注册上来判断是否正常 [AUTO-TRANSLATED:423f2be6]
// Push stream to yourself, judge whether the stream is registered to determine whether it is normal
if (_media_info.schema != RTSP_SCHEMA && _media_info.schema != RTMP_SCHEMA) {
cb(SockException(Err_other, "本服务只支持rtmp/rtsp推流"));
return;
}
weak_ptr<FFmpegSource> weakSelf = shared_from_this();
findAsync(timeout_ms, [cb, weakSelf, timeout_ms](const MediaSource::Ptr &src) {
auto strongSelf = weakSelf.lock();
if (!strongSelf) {
// 自己已经销毁 [AUTO-TRANSLATED:3d45c3b0]
// Self has been destroyed
return;
}
if (src) {
// 推流给自己成功 [AUTO-TRANSLATED:65dba71b]
// Push stream to yourself successfully
cb(SockException());
strongSelf->onGetMediaSource(src);
strongSelf->startTimer(timeout_ms);
return;
}
// 推流失败 [AUTO-TRANSLATED:4d8d226a]
// Push stream failed
if (!strongSelf->_process.wait(false)) {
// ffmpeg进程已经退出 [AUTO-TRANSLATED:04193893]
// ffmpeg process has exited
cb(SockException(Err_other, StrPrinter << "ffmpeg已经退出,exit code = " << strongSelf->_process.exit_code()));
return;
}
// ffmpeg进程还在线但是等待推流超时 [AUTO-TRANSLATED:9f71f17b]
// ffmpeg process is still online, but waiting for the stream to timeout
cb(SockException(Err_other, "等待超时"));
});
} else {
// 推流给其他服务器的通过判断FFmpeg进程是否在线判断是否成功 [AUTO-TRANSLATED:9b963da5]
// Push stream to other servers, judge whether it is successful by judging whether the FFmpeg process is online
weak_ptr<FFmpegSource> weakSelf = shared_from_this();
_timer = std::make_shared<Timer>(timeout_ms / 1000.0f, [weakSelf, cb, timeout_ms]() {
auto strongSelf = weakSelf.lock();
if (!strongSelf) {
// 自身已经销毁 [AUTO-TRANSLATED:5f954f8a]
// Self has been destroyed
return false;
}
// FFmpeg还在线那么我们认为推流成功 [AUTO-TRANSLATED:4330df49]
// FFmpeg is still online, so we think the push stream is successful
if (strongSelf->_process.wait(false)) {
cb(SockException());
strongSelf->startTimer(timeout_ms);
return false;
}
// ffmpeg进程已经退出 [AUTO-TRANSLATED:04193893]
// ffmpeg process has exited
cb(SockException(Err_other, StrPrinter << "ffmpeg已经退出,exit code = " << strongSelf->_process.exit_code()));
return false;
}, _poller);
} catch (std::exception &ex) {
WarnL << ex.what();
cb(SockException(Err_other, ex.what()));
}
}
@@ -341,15 +347,70 @@ void FFmpegSource::onGetMediaSource(const MediaSource::Ptr &src) {
setDelegate(listener);
muxer->setDelegate(shared_from_this());
if (_enable_hls) {
src->setupRecord(Recorder::type_hls, true, "", 0);
src->getOwnerPoller()->async([=]() mutable {
src->setupRecord(Recorder::type_hls, true, "", 0);
});
}
if (_enable_mp4) {
src->setupRecord(Recorder::type_mp4, true, "", 0);
src->getOwnerPoller()->async([=]() mutable {
src->setupRecord(Recorder::type_mp4, true, "", 0);
});
}
}
}
void FFmpegSnap::makeSnap(const string &play_url, const string &save_path, float timeout_sec, const onSnap &cb) {
#if defined(ENABLE_FFMPEG)
#include "Player/MediaPlayer.h"
#include "Codec/Transcode.h"
static void makeSnapAsync(const string &play_url, const string &save_path, float timeout_sec, const FFmpegSnap::onSnap &cb) {
struct Holder {
MediaPlayer::Ptr player;
};
auto holder = std::make_shared<Holder>();
auto player = std::make_shared<MediaPlayer>();
(*player)[mediakit::Client::kTimeoutMS] = timeout_sec * 1000;
player->setOnPlayResult([holder, save_path, cb, timeout_sec](const SockException &ex) mutable {
onceToken token(nullptr, [&]() { holder->player = nullptr; });
auto video = ex ? nullptr : dynamic_pointer_cast<VideoTrack>(holder->player->getTrack(TrackVideo, false));
if (!video) {
cb(false, ex ? ex.what() : "none video track");
return;
}
auto decoder = std::make_shared<FFmpegDecoder>(video);
auto new_holder = std::make_shared<Holder>(*holder);
auto timer = EventPollerPool::Instance().getPoller()->doDelayTask(1000 * timeout_sec, [cb, new_holder]() {
// 防止解码失败导致播放器无法释放
// Prevent a decode failure from leaving the player unreleased
new_holder->player = nullptr;
cb(false, "decode frame timeout");
return 0;
});
auto done = false;
decoder->setOnDecode([save_path, new_holder, cb, done, timer](const FFmpegFrame::Ptr &frame) mutable {
if (done) {
return;
}
onceToken token(nullptr, [&]() { new_holder->player = nullptr; timer->cancel(); done = true; });
auto ret = FFmpegUtils::saveFrame(frame, save_path.data());
cb(std::get<0>(ret), std::get<1>(ret));
});
video->addDelegate([decoder](const Frame::Ptr &frame) { return decoder->inputFrame(frame, false, true); });
});
player->play(play_url);
holder->player = std::move(player);
}
#endif
void FFmpegSnap::makeSnap(bool async, const string &play_url, const string &save_path, float timeout_sec, const onSnap &cb) {
#if defined(ENABLE_FFMPEG)
if (async) {
makeSnapAsync(play_url, save_path, timeout_sec, cb);
return;
}
#endif
GET_CONFIG(string, ffmpeg_bin, FFmpeg::kBin);
GET_CONFIG(string, ffmpeg_snap, FFmpeg::kSnap);
GET_CONFIG(string, ffmpeg_log, FFmpeg::kLog);


@@ -26,17 +26,20 @@ namespace FFmpeg {
class FFmpegSnap {
public:
using onSnap = std::function<void(bool success, const std::string &err_msg)>;
/// 创建截图 [AUTO-TRANSLATED:6d334c49]
/// Create a screenshot
/// \param play_url 播放url地址,只要FFmpeg支持即可 [AUTO-TRANSLATED:609d4de4]
/// \param play_url The playback URL address, as long as FFmpeg supports it
/// \param save_path 截图jpeg文件保存路径 [AUTO-TRANSLATED:0fc0ac0d]
/// \param save_path The path to save the screenshot JPEG file
/// \param timeout_sec 生成截图超时时间(防止阻塞太久) [AUTO-TRANSLATED:0dcc0095]
/// \param timeout_sec Timeout for generating the screenshot (to prevent blocking for too long)
/// \param cb 生成截图成功与否回调 [AUTO-TRANSLATED:5b4b93c9]
/// \param cb Callback for whether the screenshot was generated successfully
static void makeSnap(const std::string &play_url, const std::string &save_path, float timeout_sec, const onSnap &cb);
/**
* 创建截图 [AUTO-TRANSLATED:6d334c49]
* Create a screenshot
* @param async 是否异步截图(使用zlm api而非ffmpeg命令行,需要zlm播放器支持的拉流协议)
* @param async Whether to take the snapshot asynchronously (via the zlm api instead of the ffmpeg command line; requires a stream protocol the zlm player supports)
* @param play_url 播放url地址,只要FFmpeg支持即可 [AUTO-TRANSLATED:609d4de4]
* @param play_url The playback URL address, as long as FFmpeg supports it
* @param save_path 截图jpeg文件保存路径 [AUTO-TRANSLATED:0fc0ac0d]
* @param save_path The path to save the screenshot JPEG file
* @param timeout_sec 生成截图超时时间(防止阻塞太久) [AUTO-TRANSLATED:0dcc0095]
* @param timeout_sec Timeout for generating the screenshot (to prevent blocking for too long)
* @param cb 生成截图成功与否回调 [AUTO-TRANSLATED:5b4b93c9]
* @param cb Callback for whether the screenshot was generated successfully
*/
static void makeSnap(bool async, const std::string &play_url, const std::string &save_path, float timeout_sec, const onSnap &cb);
private:
FFmpegSnap() = delete;

Some files were not shown because too many files have changed in this diff