My setup is a mix of the old object storage method (s3fs) and the new method. Because I use a CDN, I no longer mount the Wasabi locations with s3fs. My old videos seem to work fine with just the nginx component of the old setup.
When I upload a video or run a `create-import-video-file-job`, the upload or import job seems to complete successfully, as do the initial `video-transcoding` job that the import process kicks off and the `move-to-object-storage` job.
Job: 17
Type: video-file-import
Processed on Oct 4, 2021, 1:46:10 PM
Finished on Oct 4, 2021, 1:46:12 PM
{
"videoUUID": "c40348fe-9a95-4b90-8660-0e4b78605e43",
"filePath": "/mnt/volume/tmp/com.beatgames.beatsaber-20210909-124038.mp4"
}
Job: 1399
Type: video-transcoding
Processed on Oct 4, 2021, 1:46:12 PM
Finished on Oct 4, 2021, 1:46:17 PM
{
"type": "new-resolution-to-hls",
"videoUUID": "c40348fe-9a95-4b90-8660-0e4b78605e43",
"resolution": 1024,
"isPortraitMode": false,
"copyCodecs": true,
"isMaxQuality": false
}
Job: 13
Type: move-to-object-storage
Processed on Oct 4, 2021, 1:46:12 PM
Finished on Oct 4, 2021, 1:46:22 PM
{
"videoUUID": "c40348fe-9a95-4b90-8660-0e4b78605e43",
"isNewVideo": true
}
(Sometimes, though, the original video file does not appear in object storage even though the `move-to-object-storage` job succeeds; in those cases the video doesn't seem to get a row in the `videoFile` DB table, either.) Despite the success of the `move-to-object-storage` job, when I run `create-transcoding-job` shortly afterwards to generate all my configured resolutions, most or all of the transcoding jobs fail with `NoSuchKey` errors after reaching 100%.
The first transcoding job fails:
Job: 1400
Type: video-transcoding
Processed on Oct 4, 2021, 1:48:09 PM
Finished on Oct 4, 2021, 1:49:01 PM
{
"type": "new-resolution-to-hls",
"videoUUID": "c40348fe-9a95-4b90-8660-0e4b78605e43",
"resolution": 480,
"isPortraitMode": false,
"copyCodecs": false,
"isNewVideo": false,
"isMaxQuality": false
}
NoSuchKey: NoSuchKey
at deserializeAws_restXmlGetObjectCommandError (/var/www/peertube/versions/peertube-v3.4.0/node_modules/@aws-sdk/client-s3/dist/cjs/protocols/Aws_restXml.js:6280:41)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at async /var/www/peertube/versions/peertube-v3.4.0/node_modules/@aws-sdk/middleware-serde/dist/cjs/deserializerMiddleware.js:6:20
at async /var/www/peertube/versions/peertube-v3.4.0/node_modules/@aws-sdk/middleware-signing/dist/cjs/middleware.js:11:20
at async StandardRetryStrategy.retry (/var/www/peertube/versions/peertube-v3.4.0/node_modules/@aws-sdk/middleware-retry/dist/cjs/StandardRetryStrategy.js:51:46)
at async /var/www/peertube/versions/peertube-v3.4.0/node_modules/@aws-sdk/middleware-logger/dist/cjs/loggerMiddleware.js:6:22
The second one fails as well. The third one succeeds:
Job: 1402
Type: video-transcoding
Processed on Oct 4, 2021, 1:50:25 PM
Finished on Oct 4, 2021, 1:52:38 PM
{
"type": "new-resolution-to-hls",
"videoUUID": "c40348fe-9a95-4b90-8660-0e4b78605e43",
"resolution": 1024,
"isPortraitMode": false,
"copyCodecs": false,
"isNewVideo": false,
"isMaxQuality": false
}
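Since the failures look transient, one way to probe them would be to retry the existence check that's failing, with backoff. This is only a sketch: `head_object` is a placeholder callable (in practice, a thin wrapper around an S3 client's HeadObject call pointed at the Wasabi endpoint), not anything in PeerTube's code.

```python
import time

def wait_for_key(head_object, key, attempts=5, base_delay=2.0):
    """Retry a HEAD-style existence check with linear backoff.

    head_object(key) should return True when the key exists and
    False (or raise) on a NoSuchKey-style miss. Returns the attempt
    number that first succeeded, or None if the key never appeared.
    """
    for attempt in range(1, attempts + 1):
        try:
            if head_object(key):
                return attempt
        except Exception:
            pass  # treat an error like a miss and keep retrying
        if attempt < attempts:
            time.sleep(base_delay * attempt)  # 2s, 4s, 6s, ...
    return None
```

If a key that just produced `NoSuchKey` starts resolving after a few seconds, that would point at a visibility delay rather than a genuinely missing upload.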
If I rerun `create-transcoding-job` maybe 30 minutes later, the jobs usually all succeed without error, but not all of the `m3u8` files end up in the `streaming-playlists/hls/c40348fe-9a95-4b90-8660-0e4b78605e43` location in Wasabi. Furthermore, the actual video page returns a 404 in Chrome/Brave's network debugger, although the page itself loads and its assets attempt to load.
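To see which playlists actually landed, one could diff the keys present in the bucket against the ones expected for the configured resolutions. The key layout assumed below (`<prefix>/<uuid>/<uuid>-<resolution>.m3u8`) is a guess from browsing the bucket, so verify it against your own objects before relying on it:

```python
def missing_playlists(present_keys, uuid, resolutions,
                      prefix="streaming-playlists/hls"):
    """Return the resolutions whose per-resolution playlist is absent.

    present_keys: iterable of object keys actually in the bucket
    (e.g. from a ListObjectsV2 call). The key naming is an assumption,
    not confirmed PeerTube behavior.
    """
    expected = {
        f"{prefix}/{uuid}/{uuid}-{res}.m3u8": res for res in resolutions
    }
    present = set(present_keys)
    return sorted(res for key, res in expected.items() if key not in present)
```

Running this right after the jobs report success, and again 30 minutes later, would show whether the missing `m3u8` files eventually appear or are simply never written.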
This particular video is a tiny 85 MB compared to my usual multi-GB uploads/imports, but I reduced `max_upload_part` to 1 GB and then 512 MB, just in case. Large videos show the same behavior, just much more slowly.
I have been fighting these consistency issues so much since upgrading to v3.4.0 that I'm wondering whether something about Wasabi's service or its data-consistency model doesn't play well with PeerTube's use of the S3 API. Does that seem plausible? Or is this a speed issue on the Wasabi side, or a configuration issue on the PeerTube side?
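To test the consistency hypothesis directly, a tiny probe could time how long a freshly written object takes to become readable. `put_object` and `object_visible` here are placeholder callables (e.g. thin wrappers around an S3 client writing and then heading a throwaway marker key on the Wasabi endpoint), not PeerTube code:

```python
import time

def visibility_delay(put_object, object_visible, timeout=60.0, poll=1.0):
    """Write a marker object, then poll until it is readable.

    Returns the seconds elapsed between the PUT returning and the
    first successful read, or None if the object never became
    visible within the timeout.
    """
    put_object()
    start = time.monotonic()
    deadline = start + timeout
    while time.monotonic() < deadline:
        if object_visible():
            return time.monotonic() - start
        time.sleep(poll)
    return None
```

A consistently non-zero delay here would suggest the transcoding jobs are simply reading back objects faster than Wasabi makes them visible.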