How to limit upload and download speed from the server in golang?

Problem description:

How would I limit upload and download speed from the server in golang?

I'm writing a golang server to allow users to upload and download files. The files are big, about 1GB each. I want to limit the upload and download speed to (for instance) 1MB/s (configurable, of course).

Below is my upload code:

func uploadFile(w http.ResponseWriter, r *http.Request) {
    file, _, err := r.FormFile("file")

    if err != nil {
        http.Error(w, err.Error(), 500)
        return
    }

    defer file.Close()

    if err := os.MkdirAll(`e:\test`, os.ModePerm); err != nil {
        http.Error(w, err.Error(), 500)
        return
    }
    out, err := os.Create(`e:\test\test.mpg`)
    if err != nil {
        http.Error(w, err.Error(), 500)
        return
    }

    defer out.Close()

    _, err = io.Copy(out, file)
    if err != nil {
        http.Error(w, err.Error(), 500)
    }
}


A token bucket algorithm is helpful for implementing this kind of rate limit. There is an existing implementation you can use: https://github.com/juju/ratelimit

package main

import (
    "bytes"
    "fmt"
    "io"
    "time"

    "github.com/juju/ratelimit"
)

func main() {
    // Source holding 1MB
    src := bytes.NewReader(make([]byte, 1024*1024))
    // Destination
    dst := &bytes.Buffer{}

    // Bucket adding 100KB every second, holding max 100KB
    bucket := ratelimit.NewBucketWithRate(100*1024, 100*1024)

    start := time.Now()

    // Copy source to destination, but wrap our reader with rate limited one
    io.Copy(dst, ratelimit.Reader(src, bucket))

    fmt.Printf("Copied %d bytes in %s\n", dst.Len(), time.Since(start))
}

After running it, the output is:

Copied 1048576 bytes in 9.239607694s

You can use different bucket implementations to provide the desired behaviour. In your code, after setting up the right token bucket, you would call:

_, err = io.Copy(out, ratelimit.Reader(file, bucket))

You could check out the implementation of PuerkitoBio/throttled, presented in this article:

throttled, a Go package that implements various strategies to control access to HTTP handlers.
Out-of-the-box, it supports rate-limiting of requests, constant interval flow of requests and memory usage thresholds to grant or deny access, but it also provides mechanisms to extend its functionality.

Its rate limiting isn't exactly what you need, but it can give you a good idea of how to implement a similar feature.