Local hosting and scaling questions

I have been looking into PeerTube for some time and I have questions about scaling and related topics.

  1. Scaling via physical machines. Using a VPS is fine for a small instance, but scaling that way gets too expensive, so buying hardware sounds reasonable. Has anyone done this? For example, I have a Ryzen 3600, which is a competent enough CPU; how many users and how much encoding can its 8 cores / 16 threads handle?

  2. Can I have a multi-server architecture within a single instance? Say, two or three Ryzen 3600 machines for encoding and serving, plus a couple of storage servers in an S3 configuration? Essentially load balancing, but across entire physical servers.

  3. I have read about server redundancy, but that assumes you set up separate websites and servers and then link them up. My question is about having just one and scaling it internally as much as possible.
    Scaling PeerTube
    This post on scaling PeerTube covers the same idea. It shows multiple PeerTube instances, but I am assuming just one, with encoding and storage in multiples below it.

Hope this makes some sense.

Hi @nginxs, and congrats on hitting a current limitation of PeerTube! :sweat_smile: It seems you have already read the thread you linked, which suggests a solution for storage via an S3-compatible service. When it comes to transcoding, however, there is no good solution at the moment: we would need to implement a job system that lets child nodes connect to it and process individual transcoding tasks, and that is not yet done.

Technically you can pre-transcode a video, transfer it to your instance and then import it locally via the following script: https://github.com/Chocobozzz/PeerTube/blob/develop/support/doc/tools.md#create-import-video-file-jobjs | but it is arguably a transitional solution, only befitting a single-instance user.
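For completeness, here is a rough sketch of that workflow. The ffmpeg settings, file names and install paths are assumptions (a standard package-style PeerTube layout); check tools.md for the exact invocation on your version:

```shell
# 1. Pre-transcode on a separate machine (example H.264 settings, adjust to taste)
ffmpeg -i source.mp4 -vf scale=-2:720 -c:v libx264 -preset slow -crf 23 \
  -c:a aac out-720p.mp4

# 2. Copy the result to the instance, then create an import job for an
#    existing video (the UUID comes from the video's watch page / admin UI)
cd /var/www/peertube/peertube-latest
sudo -u peertube NODE_CONFIG_DIR=/var/www/peertube/config NODE_ENV=production \
  node dist/scripts/create-import-video-file-job.js -v <video-uuid> -i /tmp/out-720p.mp4
```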

Both questions are difficult to answer precisely, since transcoding and serving pages/videos are different workloads measured with different metrics:

  • Transcoding load is estimated from the cumulative number of seconds of video to transcode, at a given quality, preset and bitrate. So the bare minimum would be to know the number of seconds of video to be transcoded, say, per day, plus the list of resolutions enabled in your PeerTube configuration.
  • Serving pages is usually fine for most setups, since we heavily cache routes. However, some pages/routes are still heavy on SQL queries despite optimizations, and the main bottleneck is usually how efficient the database is. Estimate it from the peak/average number of concurrent users.
  • Serving files is the most bandwidth-intensive operation in PeerTube, and is estimated from the peak and average number of concurrent users. It can also be I/O-intensive if you don’t use a good caching configuration on your reverse proxy, as featured in our provided Nginx configuration. Using an S3 CDN to serve your files basically nullifies that bottleneck.
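As a back-of-envelope illustration of the bandwidth point (all numbers below are made up for the example, not PeerTube recommendations):

```shell
# Hypothetical peak load: 200 concurrent viewers, each pulling a ~5 Mbps 1080p stream.
CONCURRENT_VIEWERS=200
BITRATE_MBPS=5
echo "$(( CONCURRENT_VIEWERS * BITRATE_MBPS )) Mbps peak egress"   # prints "1000 Mbps peak egress"
```

A single gigabit uplink would already be saturated at that point, which is why offloading file serving to S3/CDN matters more than raw CPU for popular instances.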

Again, thanks for your interest in PeerTube!

OK, thanks for the response.

So, first: an S3-compatible store like MinIO would solve half the issue, since I can have separate storage servers. Cool.
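For anyone following along, a minimal sketch of the MinIO side using the `mc` client (endpoint, credentials and bucket name here are placeholders; the PeerTube side is then pointed at the bucket in its object storage configuration):

```shell
# Register the MinIO deployment under an alias (replace with your own values)
mc alias set peertube-s3 https://minio.example.com ACCESS_KEY SECRET_KEY

# Create a bucket for the transcoded videos
mc mb peertube-s3/peertube-videos

# Allow anonymous downloads so viewers can fetch files directly
mc anonymous set download peertube-s3/peertube-videos
```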

Secondly, you are saying PeerTube doesn't support a distributed job system yet. Can we expect that in the future? :heart_eyes: Also, can we not separate transcoding and page serving onto different servers, i.e. load balance them?

No, we do have a job system; it is just local to the server. Extending it with the capability to synchronise with child runners is something we might work on. No promises as to if or when that will happen, but we are aware of the possibilities it could open :slight_smile:

No, that is not planned.