Compare commits
58 Commits
v1.0.0-rc. ... v1.0.0-rc.
| Author | SHA1 | Date |
|---|---|---|
|  | fa6f56caa8 |  |
|  | 5a62f83ec8 |  |
|  | 8bcb3d7c62 |  |
|  | b2eebcaf6d |  |
|  | 45bcb9cd68 |  |
|  | e1ff4db330 |  |
|  | 606143b3ad |  |
|  | 57221bdd30 |  |
|  | 8f6494b092 |  |
|  | c415abf971 |  |
|  | e10844d32c |  |
|  | 0cd8dc2b5f |  |
|  | a31e805c5a |  |
|  | cff705f807 |  |
|  | c19fc25882 |  |
|  | fff9a89ebb |  |
|  | 891e5baa27 |  |
|  | 001bbfad1b |  |
|  | 5bead443ad |  |
|  | 77cf57d1ea |  |
|  | d743d8cfeb |  |
|  | a81146d730 |  |
|  | d50f9a83b6 |  |
|  | 7ba32aca84 |  |
|  | b9f6e17a7d |  |
|  | d99225c0fb |  |
|  | 490f874128 |  |
|  | 70afbbfc68 |  |
|  | e09cc3d2dd |  |
|  | b6008108ac |  |
|  | 1d22e84f21 |  |
|  | 481df3d543 |  |
|  | cbf914e7a1 |  |
|  | d742acf308 |  |
|  | 5f1d9ff151 |  |
|  | 1e633ae302 |  |
|  | 7d084e3049 |  |
|  | c69a0bb107 |  |
|  | 9780cf902d |  |
|  | f1343c1d77 |  |
|  | 3c662d5b07 |  |
|  | 7d8710824c |  |
|  | 711ee2258c |  |
|  | 60acee2df5 |  |
|  | 0dde54fc8f |  |
|  | 0e474c8b24 |  |
|  | 68ab705aac |  |
|  | a7ca6b2e34 |  |
|  | 5f1ceeff90 |  |
|  | c7e732d084 |  |
|  | 9eb795b1bb |  |
|  | a1288565f0 |  |
|  | f82c91afbe |  |
|  | b35ac76720 |  |
|  | 9f27119044 |  |
|  | 9cd416611f |  |
|  | 459c370229 |  |
|  | b3a2a76f25 |  |
.github/workflows/build-test.yml (vendored, 2 changes)

@@ -17,7 +17,7 @@ jobs:
go-version: '^1.15'

- name: Build
run: GOOS=linux GOARCH=amd64 go build -ldflags="-s -w" -ldflags "-X github.com/pterodactyl/wings/system.Version=dev-${GIT_COMMIT:0:7}" -o build/wings_linux_amd64 -v wings.go
run: GOOS=linux GOARCH=amd64 go build -ldflags="-s -w -X github.com/pterodactyl/wings/system.Version=dev-${GIT_COMMIT:0:7}" -o build/wings_linux_amd64 -v wings.go

- name: Test
run: go test ./...
.github/workflows/release.yml (vendored, 2 changes)

@@ -17,7 +17,7 @@ jobs:
- name: Build
env:
REF: ${{ github.ref }}
run: GOOS=linux GOARCH=amd64 go build -ldflags="-s -w" -ldflags "-X github.com/pterodactyl/wings/system.Version=${REF:11}" -o build/wings_linux_amd64 -v wings.go
run: GOOS=linux GOARCH=amd64 go build -ldflags="-s -w -X github.com/pterodactyl/wings/system.Version=${REF:11}" -o build/wings_linux_amd64 -v wings.go

- name: Test
run: go test ./...
README.md (43 changes)

@@ -1,16 +1,35 @@
# Alpha Project
Please refrain from opening PRs or Issues at this time. This project is still under heavy development, and until we have a solid foundation and plan for how everything will connect, we will not be accepting PRs or feature suggestions.
[](https://pterodactyl.io)

# Pterodactyl wings [](https://travis-ci.org/pterodactyl/wings) [](https://www.codacy.com/app/schrej/wings/dashboard) [](https://www.codacy.com/app/schrej/wings/files)
[](https://pterodactyl.io/discord)

```
    ____
__ Pterodactyl _____/___/_______ _______ ______
\_____\ \/\/ / / / __ / ___/
\___\ / / / / /_/ /___ /
\___/\___/___/___/___/___ /______/
/_______/ alpha
```

# Pterodactyl Wings
Wings is Pterodactyl's server control plane, built for the rapidly changing gaming industry and designed to be
highly performant and secure. Wings provides an HTTP API allowing you to interface directly with running server
instances, fetch server logs, generate backups, and control all aspects of the server lifecycle.

A new generation of the Pterodactyl daemon, written in go.
In addition, Wings ships with a built-in SFTP server allowing your system to remain free of Pterodactyl specific
dependencies, and allowing users to authenticate with the same credentials they would normally use to access the Panel.

## Sponsors
I would like to extend my sincere thanks to the following sponsors for helping find Pterodactyl's developement.
[Interested in becoming a sponsor?](https://github.com/sponsors/DaneEveritt)

| Company | About |
| ------- | ----- |
| [**BloomVPS**](https://bloomvps.com) | BloomVPS offers dedicated core VPS and Minecraft hosting with Ryzen 9 processors. With owned-hardware, we offer truly unbeatable prices on high-performance hosting. |
| [**VersatileNode**](https://versatilenode.com/) | Looking to host a minecraft server, vps, or a website? VersatileNode is one of the most affordable hosting providers to provide quality yet cheap services with incredible support. |
| [**MineStrator**](https://minestrator.com/) | Looking for a French highend hosting company for you minecraft server? More than 14,000 members on our discord, trust us. |
| [**DedicatedMC**](https://dedicatedmc.io/) | DedicatedMC provides Raw Power hosting at affordable pricing, making sure to never compromise on your performance and giving you the best performance money can buy. |
| [**Skynode**](https://www.skynode.pro/) | Skynode provides blazing fast game servers along with a top-notch user experience. Whatever our clients are looking for, we're able to provide it! |
| [**XCORE-SERVER.de**](https://xcore-server.de/) | XCORE-SERVER.de offers High-End Servers for hosting and gaming since 2012. Fast, excellent and well-known for eSports Gaming. |

## Documentation
* [Panel Documentation](https://pterodactyl.io/panel/1.0/getting_started.html)
* [Wings Documentation](https://pterodactyl.io/wings/1.0/installing.html)
* [Community Guides](https://pterodactyl.io/community/about.html)
* Or, get additional help [via Discord](https://discord.gg/pterodactyl)

## Reporting Issues
Please use the [pterodactyl/panel](https://github.com/pterodactyl/panel) repository to report any issues or make
feature requests for Wings. In addition, the [security policy](https://github.com/pterodactyl/panel/security/policy) listed
within that repository also applies to Wings.
@@ -59,9 +59,9 @@ func (r *PanelRequest) logDebug(req *http.Request) {
}

log.WithFields(log.Fields{
"method": req.Method,
"method":   req.Method,
"endpoint": req.URL.String(),
"headers": headers,
"headers":  headers,
}).Debug("making request to external HTTP endpoint")
}

@@ -144,7 +144,7 @@ type RequestError struct {

// Returns the error response in a string form that can be more easily consumed.
func (re *RequestError) Error() string {
return fmt.Sprintf("%s: %s (HTTP/%s)", re.Code, re.Detail, re.Status)
return fmt.Sprintf("Error response from Panel: %s: %s (HTTP/%s)", re.Code, re.Detail, re.Status)
}

func (re *RequestError) String() string {
@@ -2,11 +2,57 @@ package api

import (
"encoding/json"
"github.com/apex/log"
"github.com/pkg/errors"
"github.com/pterodactyl/sftp-server"
"regexp"
)

func (r *PanelRequest) ValidateSftpCredentials(request sftp_server.AuthenticationRequest) (*sftp_server.AuthenticationResponse, error) {
type SftpAuthRequest struct {
User string `json:"username"`
Pass string `json:"password"`
IP string `json:"ip"`
SessionID []byte `json:"session_id"`
ClientVersion []byte `json:"client_version"`
}

type SftpAuthResponse struct {
Server string `json:"server"`
Token string `json:"token"`
Permissions []string `json:"permissions"`
}

type sftpInvalidCredentialsError struct {
}

func (ice sftpInvalidCredentialsError) Error() string {
return "the credentials provided were invalid"
}

func IsInvalidCredentialsError(err error) bool {
_, ok := err.(*sftpInvalidCredentialsError)

return ok
}

// Usernames all follow the same format, so don't even bother hitting the API if the username is not
// at least in the expected format. This is very basic protection against random bots finding the SFTP
// server and sending a flood of usernames.
var validUsernameRegexp = regexp.MustCompile(`^(?i)(.+)\.([a-z0-9]{8})$`)

func (r *PanelRequest) ValidateSftpCredentials(request SftpAuthRequest) (*SftpAuthResponse, error) {
// If the username doesn't meet the expected format that the Panel would even recognize just go ahead
// and bail out of the process here to avoid accidentally brute forcing the panel if a bot decides
// to connect to spam username attempts.
if !validUsernameRegexp.MatchString(request.User) {
log.WithFields(log.Fields{
"subsystem": "sftp",
"username": request.User,
"ip": request.IP,
}).Warn("failed to validate user credentials (invalid format)")

return nil, new(sftpInvalidCredentialsError)
}

b, err := json.Marshal(request)
if err != nil {
return nil, err

@@ -22,7 +68,7 @@ func (r *PanelRequest) ValidateSftpCredentials(request sftp_server.Authenticatio

if r.HasError() {
if r.HttpResponseCode() >= 400 && r.HttpResponseCode() < 500 {
return nil, new(sftp_server.InvalidCredentialsError)
return nil, new(sftpInvalidCredentialsError)
}

rerr := errors.New(r.Error().String())

@@ -30,7 +76,7 @@ func (r *PanelRequest) ValidateSftpCredentials(request sftp_server.Authenticatio

return nil, rerr
}

response := new(sftp_server.AuthenticationResponse)
response := new(SftpAuthResponse)
body, _ := r.ReadBody()

if err := json.Unmarshal(body, response); err != nil {
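For context, a minimal sketch of how the new panel-backed credential check might be consumed by an SFTP authentication handler. Only the identifiers shown in the hunk above (PanelRequest, SftpAuthRequest, SftpAuthResponse, ValidateSftpCredentials, IsInvalidCredentialsError) come from this compare; the wrapper function and how the PanelRequest is obtained are assumptions.

```go
package example

import (
	"fmt"

	"github.com/pterodactyl/wings/api"
)

// handleSftpLogin is a hypothetical wrapper; the PanelRequest is assumed to be
// constructed by the caller in whatever way the rest of the code base does it.
func handleSftpLogin(r *api.PanelRequest, user, pass, ip string) (*api.SftpAuthResponse, error) {
	resp, err := r.ValidateSftpCredentials(api.SftpAuthRequest{
		User: user,
		Pass: pass,
		IP:   ip,
	})
	if err != nil {
		if api.IsInvalidCredentialsError(err) {
			// A malformed username or a 4xx response from the Panel both surface as this error type.
			return nil, fmt.Errorf("sftp: access denied for %s", user)
		}
		return nil, err
	}
	// resp.Server identifies the server the user may access; resp.Permissions
	// scopes what they may do inside it.
	return resp, nil
}
```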
@@ -66,7 +66,7 @@ func diagnosticsCmdRun(cmd *cobra.Command, args []string) {
Name: "ReviewBeforeUpload",
Prompt: &survey.Confirm{
Message: "Do you want to review the collected data before uploading to hastebin.com?",
Help: "The data, especially the logs, might contain sensitive information, so you should review it. You will be asked again if you want to uplaod.",
Help: "The data, especially the logs, might contain sensitive information, so you should review it. You will be asked again if you want to upload.",
Default: true,
},
},

@@ -82,7 +82,7 @@ func diagnosticsCmdRun(cmd *cobra.Command, args []string) {
_ = dockerInfo

output := &strings.Builder{}
fmt.Fprintln(output, "Pterodactly Wings - Diagnostics Report")
fmt.Fprintln(output, "Pterodactyl Wings - Diagnostics Report")
printHeader(output, "Versions")
fmt.Fprintln(output, "wings:", system.Version)
if dockerErr == nil {

@@ -210,7 +210,7 @@ func uploadToHastebin(hbUrl, content string) (string, error) {
u.Path = path.Join(u.Path, key)
return u.String(), nil
}
return "", errors.New("Couldn't find key in response")
return "", errors.New("failed to find key in response")
}

func redact(s string) string {
cmd/root.go (43 changes)

@@ -27,7 +27,6 @@ import (
"github.com/pterodactyl/wings/sftp"
"github.com/pterodactyl/wings/system"
"github.com/spf13/cobra"
"go.uber.org/zap"
)

var configPath = config.DefaultLocation

@@ -133,15 +132,18 @@ func rootCmdRun(*cobra.Command, []string) {
config.SetDebugViaFlag(debug)

if err := c.System.ConfigureDirectories(); err != nil {
log.WithError(err).Fatal("failed to configure system directories for pterodactyl")
os.Exit(1)
log.WithField("error", err).Fatal("failed to configure system directories for pterodactyl")
return
}

if err := c.System.EnableLogRotation(); err != nil {
log.WithField("error", err).Fatal("failed to configure log rotation on the system")
return
}

log.WithField("username", c.System.Username).Info("checking for pterodactyl system user")
if su, err := c.EnsurePterodactylUser(); err != nil {
log.WithError(err).Error("failed to create pterodactyl system user")
os.Exit(1)
log.WithField("error", err).Fatal("failed to create pterodactyl system user")
return
} else {
log.WithFields(log.Fields{

@@ -158,7 +160,7 @@ func rootCmdRun(*cobra.Command, []string) {

if err := environment.ConfigureDocker(&c.Docker); err != nil {
log.WithField("error", err).Fatal("failed to configure docker environment")
os.Exit(1)
return
}

if err := c.WriteToDisk(); err != nil {

@@ -218,9 +220,8 @@ func rootCmdRun(*cobra.Command, []string) {
pool.StopWait()

// Initialize the SFTP server.
if err := sftp.Initialize(c); err != nil {
log.WithError(err).Error("failed to initialize the sftp server")
os.Exit(1)
if err := sftp.Initialize(c.System); err != nil {
log.WithError(err).Fatal("failed to initialize the sftp server")
return
}

@@ -337,30 +338,22 @@ func Execute() error {
// Configures the global logger for Zap so that we can call it from any location
// in the code without having to pass around a logger instance.
func configureLogging(logDir string, debug bool) error {
cfg := zap.NewProductionConfig()
if debug {
cfg = zap.NewDevelopmentConfig()
if err := os.MkdirAll(path.Join(logDir, "/install"), 0700); err != nil {
return errors.WithStack(err)
}

cfg.Encoding = "console"
cfg.OutputPaths = []string{
"stdout",
}

logger, err := cfg.Build()
if err != nil {
return err
}

zap.ReplaceGlobals(logger)

p := filepath.Join(logDir, "/wings.log")
w, err := logrotate.NewFile(p)
if err != nil {
panic(errors.Wrap(err, "failed to open process log file"))
}

log.SetLevel(log.DebugLevel)
if debug {
log.SetLevel(log.DebugLevel)
} else {
log.SetLevel(log.InfoLevel)
}

log.SetHandler(multi.New(
cli.Default,
cli.New(w.File, false),
@@ -188,7 +188,7 @@ func NewFromPath(path string) (*Configuration, error) {
}

// Sets the path where the configuration file is located on the server. This function should
// not be called except by processes that are generating the configuration such as the configration
// not be called except by processes that are generating the configuration such as the configuration
// command shipped with this software.
func (c *Configuration) unsafeSetPath(path string) {
c.Lock()
@@ -2,8 +2,11 @@ package config

import (
"github.com/apex/log"
"github.com/pkg/errors"
"html/template"
"os"
"path"
"path/filepath"
)

// Defines basic system configuration settings.

@@ -33,11 +36,27 @@ type SystemConfiguration struct {
Gid int
}

// The amount of time in seconds that can elapse before a server's disk space calculation is
// considered stale and a re-check should occur. DANGER: setting this value too low can seriously
// impact system performance and cause massive I/O bottlenecks and high CPU usage for the Wings
// process.
DiskCheckInterval int64 `default:"150" yaml:"disk_check_interval"`

// Determines if Wings should detect a server that stops with a normal exit code of
// "0" as being crashed if the process stopped without any Wings interaction. E.g.
// the user did not press the stop button, but the process stopped cleanly.
DetectCleanExitAsCrash bool `default:"true" yaml:"detect_clean_exit_as_crash"`

// If set to true, file permissions for a server will be checked when the process is
// booted. This can cause boot delays if the server has a large amount of files. In most
// cases disabling this should not have any major impact unless external processes are
// frequently modifying a servers' files.
CheckPermissionsOnBoot bool `default:"true" yaml:"check_permissions_on_boot"`

// If set to false Wings will not attempt to write a log rotate configuration to the disk
// when it boots and one is not detected.
EnableLogRotate bool `default:"true" yaml:"enable_log_rotate"`

Sftp SftpConfiguration `yaml:"sftp"`
}

@@ -49,9 +68,20 @@ func (sc *SystemConfiguration) ConfigureDirectories() error {
return err
}

log.WithField("path", sc.LogDirectory).Debug("ensuring log directory exists")
if err := os.MkdirAll(path.Join(sc.LogDirectory, "/install"), 0700); err != nil {
return err
// There are a non-trivial number of users out there whose data directories are actually a
// symlink to another location on the disk. If we do not resolve that final destination at this
// point things will appear to work, but endless errors will be encountered when we try to
// verify accessed paths since they will all end up resolving outside the expected data directory.
//
// For the sake of automating away as much of this as possible, see if the data directory is a
// symlink, and if so resolve to its final real path, and then update the configuration to use
// that.
if d, err := filepath.EvalSymlinks(sc.Data); err != nil {
if !os.IsNotExist(err) {
return errors.WithStack(err)
}
} else if d != sc.Data {
sc.Data = d
}

log.WithField("path", sc.Data).Debug("ensuring server data directory exists")

@@ -72,6 +102,47 @@ func (sc *SystemConfiguration) ConfigureDirectories() error {
return nil
}

// Writes a logrotate file for wings to the system logrotate configuration directory if one
// exists and a logrotate file is not found. This allows us to basically automate away the log
// rotation for most installs, but also enable users to make modifications on their own.
func (sc *SystemConfiguration) EnableLogRotation() error {
// Do nothing if not enabled.
if sc.EnableLogRotate == false {
log.Info("skipping log rotate configuration, disabled in wings config file")

return nil
}

if st, err := os.Stat("/etc/logrotate.d"); err != nil && !os.IsNotExist(err) {
return errors.WithStack(err)
} else if (err != nil && os.IsNotExist(err)) || !st.IsDir() {
return nil
}

if _, err := os.Stat("/etc/logrotate.d/wings"); err != nil && !os.IsNotExist(err) {
return errors.WithStack(err)
} else if err == nil {
return nil
}

log.Info("no log rotation configuration found, system is configured to support it, adding file now")
// If we've gotten to this point it means the logrotate directory exists on the system
// but there is not a file for wings already. In that case, let us write a new file to
// it so files can be rotated easily.
f, err := os.Create("/etc/logrotate.d/wings")
if err != nil {
return errors.WithStack(err)
}
defer f.Close()

t, err := template.ParseFiles("templates/logrotate.tpl")
if err != nil {
return errors.WithStack(err)
}

return errors.Wrap(t.Execute(f, sc), "failed to write logrotate file to disk")
}

// Returns the location of the JSON file that tracks server states.
func (sc *SystemConfiguration) GetStatesPath() string {
return path.Join(sc.RootDirectory, "states.json")
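As an illustration of the new struct tags above, a standalone sketch of how default values and YAML overrides interact. It assumes github.com/creasty/defaults and gopkg.in/yaml.v2, both of which appear in the go.mod diff further down; the struct mirrors only a subset of SystemConfiguration and is not Wings' actual config loader.

```go
package main

import (
	"fmt"

	"github.com/creasty/defaults"
	"gopkg.in/yaml.v2"
)

// Illustrative subset of the new SystemConfiguration fields.
type systemOpts struct {
	DiskCheckInterval      int64 `default:"150" yaml:"disk_check_interval"`
	DetectCleanExitAsCrash bool  `default:"true" yaml:"detect_clean_exit_as_crash"`
	CheckPermissionsOnBoot bool  `default:"true" yaml:"check_permissions_on_boot"`
	EnableLogRotate        bool  `default:"true" yaml:"enable_log_rotate"`
}

func main() {
	var cfg systemOpts

	// Apply the default:"..." tag values first.
	if err := defaults.Set(&cfg); err != nil {
		panic(err)
	}

	// Keys present in the YAML override the defaults; everything else keeps its default.
	raw := []byte("disk_check_interval: 300\nenable_log_rotate: false\n")
	if err := yaml.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}

	fmt.Printf("%+v\n", cfg)
	// {DiskCheckInterval:300 DetectCleanExitAsCrash:true CheckPermissionsOnBoot:true EnableLogRotate:false}
}
```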
@@ -1,7 +1,7 @@
package config

type ConsoleThrottles struct {
// Wether or not the throttler is enabled for this instance.
// Whether or not the throttler is enabled for this instance.
Enabled bool `json:"enabled" yaml:"enabled" default:"true"`

// The total number of throttle activations that must accumulate before a server is
@@ -3,6 +3,7 @@ package environment

import (
"fmt"
"github.com/docker/go-connections/nat"
"github.com/pterodactyl/wings/config"
"strconv"
)

@@ -25,6 +26,8 @@ type Allocations struct {
// Converts the server allocation mappings into a format that can be understood by Docker. While
// we do strive to support multiple environments, using Docker's standardized format for the
// bindings certainly makes life a little easier for managing things.
//
// You'll want to use DockerBindings() if you need to re-map 127.0.0.1 to the Docker interface.
func (a *Allocations) Bindings() nat.PortMap {
var out = nat.PortMap{}

@@ -50,16 +53,47 @@ func (a *Allocations) Bindings() nat.PortMap {
return out
}

// Returns the bindings for the server in a way that is supported correctly by Docker. This replaces
// any reference to 127.0.0.1 with the IP of the pterodactyl0 network interface which will allow the
// server to operate on a local address while still being accessible by other containers.
func (a *Allocations) DockerBindings() nat.PortMap {
iface := config.Get().Docker.Network.Interface

out := a.Bindings()
// Loop over all of the bindings for this container, and convert any that reference 127.0.0.1
// to use the pterodactyl0 network interface IP, as that is the true local for what people are
// trying to do when creating servers.
for p, binds := range out {
for i, alloc := range binds {
if alloc.HostIP != "127.0.0.1" {
continue
}

// If using ISPN just delete the local allocation from the server.
if config.Get().Docker.Network.ISPN {
out[p] = append(out[p][:i], out[p][i+1:]...)
} else {
out[p][i] = nat.PortBinding{
HostIP: iface,
HostPort: alloc.HostPort,
}
}
}
}

return out
}

// Converts the server allocation mappings into a PortSet that can be understood
// by Docker. This formatting is slightly different than "Bindings" as it should
// return an empty struct rather than a binding.
//
// To accomplish this, we'll just get the values from "Bindings" and then set them
// To accomplish this, we'll just get the values from "DockerBindings" and then set them
// to empty structs. Because why not.
func (a *Allocations) Exposed() nat.PortSet {
var out = nat.PortSet{}

for port := range a.Bindings() {
for port := range a.DockerBindings() {
out[port] = struct{}{}
}
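A standalone sketch of the remapping performed by the new DockerBindings method: a 127.0.0.1 binding is rewritten to the Docker network interface address, while the ISPN branch would drop the binding instead. The 172.18.0.1 value is an assumed example; Wings reads the real address from its configuration.

```go
package main

import (
	"fmt"

	"github.com/docker/go-connections/nat"
)

func main() {
	iface := "172.18.0.1" // assumed pterodactyl0 address

	bindings := nat.PortMap{
		"25565/tcp": []nat.PortBinding{
			{HostIP: "127.0.0.1", HostPort: "25565"}, // will be remapped
			{HostIP: "10.0.0.5", HostPort: "25566"},  // left untouched
		},
	}

	for p, binds := range bindings {
		for i, alloc := range binds {
			if alloc.HostIP != "127.0.0.1" {
				continue
			}
			bindings[p][i] = nat.PortBinding{HostIP: iface, HostPort: alloc.HostPort}
		}
	}

	fmt.Printf("%+v\n", bindings)
}
```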
@@ -1,17 +1,13 @@
package environment

import (
"fmt"
"strings"
"sync"
"time"
)

type configurationSettings struct {
type Settings struct {
Mounts []Mount
Allocations Allocations
Limits Limits
Variables Variables
}

// Defines the actual configuration struct for the environment with all of the settings

@@ -19,20 +15,36 @@ type configurationSettings struct {
type Configuration struct {
mu sync.RWMutex

settings configurationSettings
environmentVariables []string
settings Settings
}

func NewConfiguration(m []Mount, a Allocations, l Limits, v Variables) *Configuration {
// Returns a new environment configuration with the given settings and environment variables
// defined within it.
func NewConfiguration(s Settings, envVars []string) *Configuration {
return &Configuration{
settings: configurationSettings{
Mounts: m,
Allocations: a,
Limits: l,
Variables: v,
},
environmentVariables: envVars,
settings: s,
}
}

// Updates the settings struct for this environment on the fly. This allows modified servers to
// automatically push those changes to the environment.
func (c *Configuration) SetSettings(s Settings) {
c.mu.Lock()
c.settings = s
c.mu.Unlock()
}

// Updates the environment variables associated with this environment by replacing the entire
// array of them with a new one.
func (c *Configuration) SetEnvironmentVariables(ev []string) {
c.mu.Lock()
c.environmentVariables = ev
c.mu.Unlock()
}

// Returns the limits assigned to this environment.
func (c *Configuration) Limits() Limits {
c.mu.RLock()
defer c.mu.RUnlock()

@@ -40,6 +52,7 @@ func (c *Configuration) Limits() Limits {
return c.settings.Limits
}

// Returns the allocations associated with this environment.
func (c *Configuration) Allocations() Allocations {
c.mu.RLock()
defer c.mu.RUnlock()

@@ -47,6 +60,7 @@ func (c *Configuration) Allocations() Allocations {
return c.settings.Allocations
}

// Returns all of the mounts associated with this environment.
func (c *Configuration) Mounts() []Mount {
c.mu.RLock()
defer c.mu.RUnlock()

@@ -54,31 +68,10 @@ func (c *Configuration) Mounts() []Mount {
return c.settings.Mounts
}

// Returns all of the environment variables that should be assigned to a running
// server instance.
// Returns the environment variables associated with this instance.
func (c *Configuration) EnvironmentVariables() []string {
c.mu.RLock()
c.mu.RUnlock()
defer c.mu.RUnlock()

zone, _ := time.Now().In(time.Local).Zone()

var out = []string{
fmt.Sprintf("TZ=%s", zone),
fmt.Sprintf("SERVER_MEMORY=%d", c.settings.Limits.MemoryLimit),
fmt.Sprintf("SERVER_IP=%s", c.settings.Allocations.DefaultMapping.Ip),
fmt.Sprintf("SERVER_PORT=%d", c.settings.Allocations.DefaultMapping.Port),
}

eloop:
for k := range c.settings.Variables {
for _, e := range out {
if strings.HasPrefix(e, strings.ToUpper(k)) {
continue eloop
}
}

out = append(out, fmt.Sprintf("%s=%s", strings.ToUpper(k), c.settings.Variables.Get(k)))
}

return out
return c.environmentVariables
}
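A minimal sketch of the reworked constructor: callers now build the environment variable slice themselves and pass it alongside a Settings value, rather than using the old four-argument form. The values used here are placeholders, not Wings' actual wiring.

```go
package main

import (
	"fmt"

	"github.com/pterodactyl/wings/environment"
)

func main() {
	// Mounts, Allocations, Limits and Variables would normally come from the
	// panel-provided server configuration; they are left zero-valued here.
	s := environment.Settings{}

	envVars := []string{
		"SERVER_MEMORY=1024",
		"SERVER_IP=127.0.0.1",
		"SERVER_PORT=25565",
	}

	cfg := environment.NewConfiguration(s, envVars)

	// EnvironmentVariables is now a plain accessor; the TZ/SERVER_* assembly seen
	// in the removed code happens before the slice is handed in.
	fmt.Println(cfg.EnvironmentVariables())

	// Both halves can be swapped at runtime when the server definition changes.
	cfg.SetSettings(environment.Settings{})
	cfg.SetEnvironmentVariables(append(envVars, "STARTUP=java -jar server.jar"))
}
```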
@@ -57,15 +57,21 @@ func (e *Environment) Attach() error {
e.SetStream(nil)
}()

// Poll resources in a seperate thread since this will block the copy call below
// from being reached until it is completed if not run in a seperate process. However,
// Poll resources in a separate thread since this will block the copy call below
// from being reached until it is completed if not run in a separate process. However,
// we still want it to be stopped when the copy operation below is finished running which
// indicates that the container is no longer running.
go e.pollResources(ctx)
go func(ctx context.Context) {
if err := e.pollResources(ctx); err != nil {
log.WithField("environment_id", e.Id).WithField("error", errors.WithStack(err)).Error("error during environment resource polling")
}
}(ctx)

// Stream the reader output to the console which will then fire off events and handle console
// throttling and sending the output to the user.
_, _ = io.Copy(console, e.stream.Reader)
if _, err := io.Copy(console, e.stream.Reader); err != nil {
log.WithField("environment_id", e.Id).WithField("error", errors.WithStack(err)).Error("error while copying environment output to console")
}
}(c)

return nil

@@ -137,6 +143,15 @@ func (e *Environment) Create() error {

a := e.Configuration.Allocations()

evs := e.Configuration.EnvironmentVariables()
for i, v := range evs {
// Convert 127.0.0.1 to the pterodactyl0 network interface if the environment is Docker
// so that the server operates as expected.
if v == "SERVER_IP=127.0.0.1" {
evs[i] = "SERVER_IP="+config.Get().Docker.Network.Interface
}
}

conf := &container.Config{
Hostname: e.Id,
Domainname: config.Get().Docker.Domainname,

@@ -148,7 +163,7 @@ func (e *Environment) Create() error {
Tty: true,
ExposedPorts: a.Exposed(),
Image: e.meta.Image,
Env: e.variables(),
Env: e.Configuration.EnvironmentVariables(),
Labels: map[string]string{
"Service": "Pterodactyl",
"ContainerType": "server_process",

@@ -158,16 +173,16 @@ func (e *Environment) Create() error {
tmpfsSize := strconv.Itoa(int(config.Get().Docker.TmpfsSize))

hostConf := &container.HostConfig{
PortBindings: a.Bindings(),
PortBindings: a.DockerBindings(),

// Configure the mounts for this container. First mount the server data directory
// into the container as a r/w bine.
// into the container as a r/w bind.
Mounts: e.convertMounts(),

// Configure the /tmp folder mapping in containers. This is necessary for some
// games that need to make use of it for downloads and other installation processes.
Tmpfs: map[string]string{
"/tmp": "rw,exec,nosuid,size="+tmpfsSize+"M",
"/tmp": "rw,exec,nosuid,size=" + tmpfsSize + "M",
},

// Define resource limits for the container based on the data passed through

@@ -204,12 +219,6 @@ func (e *Environment) Create() error {
return nil
}

func (e *Environment) variables() []string {
v := e.Configuration.EnvironmentVariables()

return append(v, fmt.Sprintf("STARTUP=%s", e.meta.Invocation))
}

func (e *Environment) convertMounts() []mount.Mount {
var out []mount.Mount

@@ -228,7 +237,7 @@ func (e *Environment) convertMounts() []mount.Mount {
// Remove the Docker container from the machine. If the container is currently running
// it will be forcibly stopped by Docker.
func (e *Environment) Destroy() error {
// We set it to stopping than offline to prevent crash detection from being triggeree.
// We set it to stopping than offline to prevent crash detection from being triggered.
e.setState(environment.ProcessStoppingState)

err := e.client.ContainerRemove(context.Background(), e.Id, types.ContainerRemoveOptions{

@@ -250,7 +259,7 @@ func (e *Environment) Destroy() error {
return err
}

// Attaches to the log for the container. This avoids us missing cruicial output that
// Attaches to the log for the container. This avoids us missing crucial output that
// happens in the split seconds before the code moves from 'Starting' to 'Attaching'
// on the process.
func (e *Environment) followOutput() error {

@@ -296,7 +305,7 @@ func (e *Environment) followOutput() error {
// cases an outage shouldn't affect users too badly. It'll at least keep existing servers working
// correctly if anything.
//
// TODO: handle authorization & local images
// TODO: local images
func (e *Environment) ensureImageExists(image string) error {
// Give it up to 15 minutes to pull the image. I think this should cover 99.8% of cases where an
// image pull might fail. I can't imagine it will ever take more than 15 minutes to fully pull

@@ -362,7 +371,7 @@ func (e *Environment) ensureImageExists(image string) error {
log.WithField("image", image).Debug("pulling docker image... this could take a bit of time")

// I'm not sure what the best approach here is, but this will block execution until the image
// is done being pulled, which is what we neee.
// is done being pulled, which is what we need.
scanner := bufio.NewScanner(out)
for scanner.Scan() {
continue
@@ -13,9 +13,8 @@ import (
)

type Metadata struct {
Invocation string
Image string
Stop *api.ProcessStopConfiguration
Image string
Stop *api.ProcessStopConfiguration
}

// Ensure that the Docker environment is always implementing all of the methods

@@ -71,12 +70,6 @@ func New(id string, m *Metadata, c *environment.Configuration) (*Environment, er
return e, nil
}

func (e *Environment) SetStopConfiguration(c *api.ProcessStopConfiguration) {
e.mu.Lock()
e.meta.Stop = c
e.mu.Unlock()
}

func (e *Environment) Type() string {
return "docker"
}

@@ -110,7 +103,7 @@ func (e *Environment) Events() *events.EventBus {
// Determines if the container exists in this environment. The ID passed through should be the
// server UUID since containers are created utilizing the server UUID as the name and docker
// will work fine when using the container name as the lookup parameter in addition to the longer
// ID auto-assigned when the container is createe.
// ID auto-assigned when the container is created.
func (e *Environment) Exists() (bool, error) {
_, err := e.client.ContainerInspect(context.Background(), e.Id)

@@ -144,7 +137,7 @@ func (e *Environment) IsRunning() (bool, error) {
return c.State.Running, nil
}

// Determine the container exit state and return the exit code and wether or not
// Determine the container exit state and return the exit code and whether or not
// the container was killed by the OOM killer.
func (e *Environment) ExitState() (uint32, bool, error) {
c, err := e.client.ContainerInspect(context.Background(), e.Id)

@@ -155,7 +148,7 @@ func (e *Environment) ExitState() (uint32, bool, error) {
//
// However, someone reported an error in Discord about this scenario happening,
// so I guess this should prevent it? They didn't tell me how they caused it though
// so that's a mystery that will have to go unsolvee.
// so that's a mystery that will have to go unsolved.
//
// @see https://github.com/pterodactyl/panel/issues/2003
if client.IsErrNotFound(err) {

@@ -167,3 +160,19 @@ func (e *Environment) ExitState() (uint32, bool, error) {

return uint32(c.State.ExitCode), c.State.OOMKilled, nil
}

// Returns the environment configuration allowing a process to make modifications of the
// environment on the fly.
func (e *Environment) Config() *environment.Configuration {
e.mu.RLock()
defer e.mu.RUnlock()

return e.Configuration
}

// Sets the stop configuration for the environment.
func (e *Environment) SetStopConfiguration(c *api.ProcessStopConfiguration) {
e.mu.Lock()
e.meta.Stop = c
e.mu.Unlock()
}
@@ -26,7 +26,7 @@ func (e *Environment) OnBeforeStart() error {
// the Panel is usee.
if err := e.client.ContainerRemove(context.Background(), e.Id, types.ContainerRemoveOptions{RemoveVolumes: true}); err != nil {
if !client.IsErrNotFound(err) {
return err
return errors.Wrap(err, "failed to remove server docker container during pre-boot")
}
}

@@ -35,7 +35,7 @@ func (e *Environment) OnBeforeStart() error {
// container and data storage directory.
//
// This won't actually run an installation process however, it is just here to ensure the
// environment gets created properly if it is missing and the server is startee. We're making
// environment gets created properly if it is missing and the server is started. We're making
// an assumption that all of the files will still exist at this point.
if err := e.Create(); err != nil {
return err

@@ -64,7 +64,7 @@ func (e *Environment) Start() error {

if c, err := e.client.ContainerInspect(context.Background(), e.Id); err != nil {
// Do nothing if the container is not found, we just don't want to continue
// to the next block of code here. This check was inlined here to guard againt
// to the next block of code here. This check was inlined here to guard against
// a nil-pointer when checking c.State below.
//
// @see https://github.com/pterodactyl/panel/issues/2000

@@ -128,7 +128,7 @@ func (e *Environment) Stop() error {

if s == nil || s.Type == api.ProcessStopSignal {
if s == nil {
log.WithField("container_id", e.Id).Warn("no stop configuration detected for environment, using termination proceedure")
log.WithField("container_id", e.Id).Warn("no stop configuration detected for environment, using termination procedure")
}

return e.Terminate(os.Kill)

@@ -217,7 +217,7 @@ func (e *Environment) Terminate(signal os.Signal) error {
return nil
}

// We set it to stopping than offline to prevent crash detection from being triggeree.
// We set it to stopping than offline to prevent crash detection from being triggered.
e.setState(environment.ProcessStoppingState)

sig := strings.TrimSuffix(strings.TrimPrefix(signal.String(), "signal "), "ed")

@@ -27,7 +27,7 @@ func (e *Environment) setState(state string) error {
// Get the current state of the environment before changing it.
prevState := e.State()

// Emit the event to any listeners that are currently registeree.
// Emit the event to any listeners that are currently registered.
if prevState != state {
// If the state changed make sure we update the internal tracking to note that.
e.stMu.Lock()
@@ -15,14 +15,20 @@ import (
// Attach to the instance and then automatically emit an event whenever the resource usage for the
// server process changes.
func (e *Environment) pollResources(ctx context.Context) error {
l := log.WithField("container_id", e.Id)

l.Debug("starting resource polling for container")
defer l.Debug("stopped resource polling for container")

if e.State() == environment.ProcessOfflineState {
return errors.New("attempting to enable resource polling on a stopped server instance")
return errors.New("cannot enable resource polling on a stopped server")
}

stats, err := e.client.ContainerStats(context.Background(), e.Id, true)
if err != nil {
return errors.WithStack(err)
}
defer stats.Body.Close()

dec := json.NewDecoder(stats.Body)

@@ -35,7 +41,9 @@ func (e *Environment) pollResources(ctx context.Context) error {

if err := dec.Decode(&v); err != nil {
if err != io.EOF {
log.WithField("container_id", e.Id).Warn("encountered error processing docker stats output, stopping collection")
l.WithField("error", errors.WithStack(err)).Warn("error while processing Docker stats output for container")
} else {
l.Debug("io.EOF encountered during stats decode, stopping polling...")
}

return nil

@@ -43,6 +51,7 @@ func (e *Environment) pollResources(ctx context.Context) error {

// Disable collection if the server is in an offline state and this process is still running.
if e.State() == environment.ProcessOfflineState {
l.Debug("process in offline state while resource polling is still active; stopping poll")
return nil
}

@@ -66,8 +75,11 @@ func (e *Environment) pollResources(ctx context.Context) error {
},
}

b, _ := json.Marshal(st)
e.Events().Publish(environment.ResourceEvent, string(b))
if b, err := json.Marshal(st); err != nil {
l.WithField("error", errors.WithStack(err)).Warn("error while marshaling stats object for environment")
} else {
e.Events().Publish(environment.ResourceEvent, string(b))
}
}
}
}
@@ -7,8 +7,8 @@ import (
"encoding/json"
"github.com/docker/docker/api/types"
"github.com/pkg/errors"
"io"
"os"
"github.com/pterodactyl/wings/environment"
"strconv"
)

type dockerLogLine struct {

@@ -31,6 +31,15 @@ func (e *Environment) SendCommand(c string) error {
return errors.New("attempting to send command to non-attached instance")
}

if e.meta.Stop != nil {
// If the command being processed is the same as the process stop command then we want to mark
// the server as entering the stopping state otherwise the process will stop and Wings will think
// it has crashed and attempt to restart it.
if e.meta.Stop.Type == "command" && c == e.meta.Stop.Value {
e.Events().Publish(environment.StateChangeEvent, environment.ProcessStoppingState)
}
}

_, err := e.stream.Conn.Write([]byte(c + "\n"))

return errors.WithStack(err)

@@ -38,44 +47,25 @@ func (e *Environment) SendCommand(c string) error {

// Reads the log file for the server. This does not care if the server is running or not, it will
// simply try to read the last X bytes of the file and return them.
func (e *Environment) Readlog(len int64) ([]string, error) {
j, err := e.client.ContainerInspect(context.Background(), e.Id)
func (e *Environment) Readlog(lines int) ([]string, error) {
r, err := e.client.ContainerLogs(context.Background(), e.Id, types.ContainerLogsOptions{
ShowStdout: true,
ShowStderr: true,
Tail: strconv.Itoa(lines),
})
if err != nil {
return nil, err
return nil, errors.WithStack(err)
}
defer r.Close()

var out []string

scanner := bufio.NewScanner(r)
for scanner.Scan() {
out = append(out, scanner.Text())
}

if j.LogPath == "" {
return nil, errors.New("empty log path defined for server")
}

f, err := os.Open(j.LogPath)
if err != nil {
return nil, err
}
defer f.Close()

// Check if the length of the file is smaller than the amount of data that was requested
// for reading. If so, adjust the length to be the total length of the file. If this is not
// done an error is thrown since we're reading backwards, and not forwards.
if stat, err := os.Stat(j.LogPath); err != nil {
return nil, err
} else if stat.Size() < len {
len = stat.Size()
}

// Seed to the end of the file and then move backwards until the length is met to avoid
// reading the entirety of the file into memory.
if _, err := f.Seek(-len, io.SeekEnd); err != nil {
return nil, err
}

b := make([]byte, len)

if _, err := f.Read(b); err != nil && err != io.EOF {
return nil, err
}

return e.parseLogToStrings(b)
return out, nil
}

// Docker stores the logs for server output in a JSON format. This function will iterate over the JSON

@@ -87,6 +77,7 @@ func (e *Environment) parseLogToStrings(b []byte) ([]string, error) {
scanner := bufio.NewScanner(bytes.NewReader(b))
for scanner.Scan() {
var l dockerLogLine

// Unmarshal the contents and allow up to a single error before bailing out of the process. We
// do this because if you're arbitrarily reading a length of the file you'll likely end up
// with the first line in the output being improperly formatted JSON. In those cases we want to
@@ -24,6 +24,9 @@ type ProcessEnvironment interface {
// Returns the name of the environment.
Type() string

// Returns the environment configuration to the caller.
Config() *Configuration

// Returns an event emitter instance that can be hooked into to listen for different
// events that are fired by the environment. This should not allow someone to publish
// events, only subscribe to them.

@@ -86,6 +89,6 @@ type ProcessEnvironment interface {
SendCommand(string) error

// Reads the log file for the process from the end backwards until the provided
// number of bytes is met.
Readlog(int64) ([]string, error)
// number of lines is met.
Readlog(int) ([]string, error)
}
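A small fragment showing the caller-side effect of the interface change: Readlog now takes a line count, which the Docker implementation maps onto the Tail option of the container logs API. The surrounding function is illustrative and not part of this compare.

```go
package example

import (
	"fmt"

	"github.com/pterodactyl/wings/environment"
)

// printRecentConsole fetches the last 100 log lines for any environment
// implementation; callers previously had to pass a byte count instead.
func printRecentConsole(e environment.ProcessEnvironment) error {
	lines, err := e.Readlog(100)
	if err != nil {
		return err
	}
	for _, l := range lines {
		fmt.Println(l)
	}
	return nil
}
```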
@@ -2,6 +2,7 @@ package environment

import (
"fmt"
"github.com/apex/log"
"math"
"strconv"
)

@@ -21,7 +22,7 @@ type Mount struct {
// that we're mounting into the container at the Target location.
Source string `json:"source"`

// Wether or not the directory is being mounted as read-only. It is up to the environment to
// Whether or not the directory is being mounted as read-only. It is up to the environment to
// handle this value correctly and ensure security expectations are met with its usage.
ReadOnly bool `json:"read_only"`
}

@@ -118,7 +119,13 @@ func (v Variables) Get(key string) string {
return fmt.Sprintf("%f", val.(float64))
case bool:
return strconv.FormatBool(val.(bool))
case string:
return val.(string)
}

return val.(string)
// TODO: I think we can add a check for val == nil and return an empty string for those
// and this warning should theoretically never happen?
log.Warn(fmt.Sprintf("failed to marshal environment variable \"%s\" of type %+v into string", key, val))

return ""
}
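A standalone illustration of the stringification rules the updated Variables.Get applies: the unconditional val.(string) assertion, which could panic on an unexpected type, is replaced by explicit cases plus a fallback to an empty string. The helper below mirrors only the cases shown in the hunk and is not the actual Wings code.

```go
package main

import (
	"fmt"
	"strconv"
)

// asEnvString mirrors the type switch in the hunk above for illustration only.
func asEnvString(val interface{}) string {
	switch v := val.(type) {
	case float64:
		return fmt.Sprintf("%f", v)
	case bool:
		return strconv.FormatBool(v)
	case string:
		return v
	}
	// Anything else, including nil, is treated as unrepresentable rather than panicking.
	return ""
}

func main() {
	fmt.Println(asEnvString("25565"), asEnvString(float64(512)), asEnvString(true), asEnvString(nil))
}
```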
@@ -47,8 +47,12 @@ func (e *EventBus) Publish(topic string, data string) {
defer e.RUnlock()

if ch, ok := e.subscribers[t]; ok {
e := Event{Data: data, Topic: topic}

for channel := range ch {
channel <- Event{Data: data, Topic: topic}
go func(channel chan Event, e Event) {
channel <- e
}(channel, e)
}
}
}()

@@ -66,29 +70,33 @@ func (e *EventBus) PublishJson(topic string, data interface{}) error {
}

// Subscribe to an emitter topic using a channel.
func (e *EventBus) Subscribe(topic string, ch chan Event) {
func (e *EventBus) Subscribe(topics []string, ch chan Event) {
e.Lock()
defer e.Unlock()

if _, exists := e.subscribers[topic]; !exists {
e.subscribers[topic] = make(map[chan Event]struct{})
}
for _, topic := range topics {
if _, exists := e.subscribers[topic]; !exists {
e.subscribers[topic] = make(map[chan Event]struct{})
}

// Only set the channel if there is not currently a matching one for this topic. This
// avoids registering two identical listeners for the same topic and causing pain in
// the unsubscribe functionality as well.
if _, exists := e.subscribers[topic][ch]; !exists {
e.subscribers[topic][ch] = struct{}{}
// Only set the channel if there is not currently a matching one for this topic. This
// avoids registering two identical listeners for the same topic and causing pain in
// the unsubscribe functionality as well.
if _, exists := e.subscribers[topic][ch]; !exists {
e.subscribers[topic][ch] = struct{}{}
}
}
}

// Unsubscribe a channel from a given topic.
func (e *EventBus) Unsubscribe(topic string, ch chan Event) {
func (e *EventBus) Unsubscribe(topics []string, ch chan Event) {
e.Lock()
defer e.Unlock()

if _, exists := e.subscribers[topic][ch]; exists {
delete(e.subscribers[topic], ch)
for _, topic := range topics {
if _, exists := e.subscribers[topic][ch]; exists {
delete(e.subscribers[topic], ch)
}
}
}
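A sketch of the new multi-topic subscription shape: a single channel is attached to several topics at once instead of one Subscribe call per topic. The package import paths and the way the bus is obtained are assumptions based on the identifiers used elsewhere in this compare.

```go
package example

import (
	"fmt"

	"github.com/pterodactyl/wings/environment"
	"github.com/pterodactyl/wings/events"
)

// watch drains state-change and resource events from a bus until the channel is closed.
func watch(bus *events.EventBus) {
	ch := make(chan events.Event)
	topics := []string{environment.StateChangeEvent, environment.ResourceEvent}

	bus.Subscribe(topics, ch)
	defer bus.Unsubscribe(topics, ch)

	for e := range ch {
		fmt.Printf("[%s] %s\n", e.Topic, e.Data)
	}
}
```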
go.mod (21 changes)

@@ -2,14 +2,6 @@ module github.com/pterodactyl/wings

go 1.13

// Uncomment this in development environments to make changes to the core SFTP
// server software. This assumes you're using the official Pterodactyl Environment
// otherwise this path will not work.
//
// @see https://github.com/pterodactyl/development
//
// replace github.com/pterodactyl/sftp-server => ../sftp-server

require (
github.com/AlecAivazis/survey/v2 v2.1.0
github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78 // indirect

@@ -32,6 +24,7 @@ require (
github.com/docker/go-metrics v0.0.1 // indirect
github.com/docker/go-units v0.4.0 // indirect
github.com/fatih/color v1.9.0
github.com/frankban/quicktest v1.10.2 // indirect
github.com/fsnotify/fsnotify v1.4.9 // indirect
github.com/gabriel-vasile/mimetype v1.1.1
github.com/gammazero/deque v0.0.0-20200721202602-07291166fe33 // indirect

@@ -40,13 +33,13 @@ require (
github.com/gin-gonic/gin v1.6.3
github.com/go-playground/validator/v10 v10.3.0 // indirect
github.com/gogo/protobuf v1.3.1 // indirect
github.com/golang/gddo v0.0.0-20200715224205-051695c33a3f // indirect
github.com/google/uuid v1.1.1
github.com/gorilla/mux v1.7.4 // indirect
github.com/gorilla/websocket v1.4.2
github.com/iancoleman/strcase v0.0.0-20191112232945-16388991a334
github.com/icza/dyno v0.0.0-20200205103839-49cb13720835
github.com/imdario/mergo v0.3.8
github.com/karrick/godirwalk v1.16.1
github.com/klauspost/compress v1.10.10 // indirect
github.com/klauspost/pgzip v1.2.4
github.com/magefile/mage v1.10.0 // indirect

@@ -65,25 +58,18 @@ require (
github.com/pierrec/lz4 v2.5.2+incompatible // indirect
github.com/pkg/errors v0.9.1
github.com/pkg/profile v1.5.0
github.com/pkg/sftp v1.11.0 // indirect
github.com/pkg/sftp v1.11.0
github.com/prometheus/common v0.11.1 // indirect
github.com/pterodactyl/sftp-server v1.1.5
github.com/remeh/sizedwaitgroup v1.0.0
github.com/sabhiram/go-gitignore v0.0.0-20180611051255-d3107576ba94
github.com/smartystreets/goconvey v1.6.4 // indirect
github.com/spf13/cobra v1.0.0
github.com/spf13/pflag v1.0.5 // indirect
github.com/uber-go/zap v1.9.1 // indirect
github.com/ulikunitz/xz v0.5.7 // indirect
go.uber.org/zap v1.15.0
golang.org/x/crypto v0.0.0-20200728195943-123391ffb6de
golang.org/x/lint v0.0.0-20200302205851-738671d3881b // indirect
golang.org/x/net v0.0.0-20200707034311-ab3426394381 // indirect
golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208
golang.org/x/text v0.3.3 // indirect
golang.org/x/time v0.0.0-20200630173020-3af7569d3a1e // indirect
golang.org/x/tools v0.0.0-20200509030707-2212a7e161a5 // indirect
golang.org/x/tools/gopls v0.1.3 // indirect
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
google.golang.org/genproto v0.0.0-20200806141610-86f49bd18e98 // indirect
google.golang.org/grpc v1.31.0 // indirect

@@ -92,5 +78,4 @@ require (
gopkg.in/ini.v1 v1.57.0
gopkg.in/yaml.v2 v2.3.0
gotest.tools v2.2.0+incompatible // indirect
honnef.co/go/tools v0.0.1-2020.1.3 // indirect
)
go.sum (132 changes)

@@ -1,21 +1,14 @@
cloud.google.com/go v0.16.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/AlecAivazis/survey/v2 v2.0.7 h1:+f825XHLse/hWd2tE/V5df04WFGimk34Eyg/z35w/rc=
github.com/AlecAivazis/survey/v2 v2.0.7/go.mod h1:mlizQTaPjnR4jcpwRSaSlkbsRfYFEyKgLQvYTzxxiHA=
github.com/AlecAivazis/survey/v2 v2.1.0 h1:AT4+23hOFopXYZaNGugbk7MWItkz0SfTmH/Hk92KeeE=
github.com/AlecAivazis/survey/v2 v2.1.0/go.mod h1:9FJRdMdDm8rnT+zHVbvQT2RTSTLq0Ttd6q3Vl2fahjk=
github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78 h1:w+iIsaOQNcT7OZ575w+acHgRric5iCyQh+xv+KJ4HB8=
github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/Jeffail/gabs/v2 v2.2.0 h1:7touC+WzbQ7LO5+mwgxT44miyTqAVCOlIWLA6PiIB5w=
github.com/Jeffail/gabs/v2 v2.2.0/go.mod h1:xCn81vdHKxFUuWWAaD5jCTQDNPBMh5pPs9IJ+NcziBI=
github.com/Jeffail/gabs/v2 v2.5.1 h1:ANfZYjpMlfTTKebycu4X1AgkVWumFVDYQl7JwOr4mDk=
github.com/Jeffail/gabs/v2 v2.5.1/go.mod h1:xCn81vdHKxFUuWWAaD5jCTQDNPBMh5pPs9IJ+NcziBI=
github.com/Knetic/govaluate v3.0.1-0.20171022003610-9aa49832a739+incompatible/go.mod h1:r7JcOSlj0wfOMncg0iLm8Leh48TZaKVeNIfJntJ2wa0=
github.com/Microsoft/go-winio v0.4.7 h1:vOvDiY/F1avSWlCWiKJjdYKz2jVjTK3pWPHndeG4OAY=
github.com/Microsoft/go-winio v0.4.7/go.mod h1:VhR8bwka0BXejwEJY73c50VrPtXAaKcyvVC4A4RozmA=
github.com/Microsoft/go-winio v0.4.14 h1:+hMXMk01us9KgxGb7ftKQt2Xpf5hH/yky+TDA+qxleU=
github.com/Microsoft/go-winio v0.4.14/go.mod h1:qXqCSQ3Xa7+6tgxaGTIe4Kpcdsi+P8jBhyzoq1bpyYA=
github.com/NYTimes/logrotate v1.0.0 h1:6jFGbon6jOtpy3t3kwZZKS4Gdmf1C/Wv5J4ll4Xn5yk=
@@ -38,11 +31,8 @@ github.com/andybalholm/brotli v1.0.0 h1:7UCwP93aiSfvWpapti8g88vVVGp2qqtGyePsSuDa
github.com/andybalholm/brotli v1.0.0/go.mod h1:loMXtMfwqflxFJPmdbJO0a3KNoPuLBgiu3qAvBg8x/Y=
github.com/apache/thrift v0.12.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/apache/thrift v0.13.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/apex/log v1.3.0 h1:1fyfbPvUwD10nMoh3hY6MXzvZShJQn9/ck7ATgAt5pA=
github.com/apex/log v1.3.0/go.mod h1:jd8Vpsr46WAe3EZSQ/IUMs2qQD/GOycT5rPWCO1yGcs=
github.com/apex/log v1.8.0 h1:+W4j+dttibFvynPLlctdnYFUn1eLKT37BZWWW2iMfEM=
github.com/apex/log v1.8.0/go.mod h1:m82fZlWIuiWzWP04XCTXmnX0xRkYYbCdYn8jbJeLBEA=
github.com/apex/logs v0.0.4/go.mod h1:XzxuLZ5myVHDy9SAmYpamKKRNApGj54PfYLcFrXqDwo=
github.com/apex/logs v1.0.0/go.mod h1:XzxuLZ5myVHDy9SAmYpamKKRNApGj54PfYLcFrXqDwo=
github.com/aphistic/golf v0.0.0-20180712155816-02c07f170c5a/go.mod h1:3NqKYiepwy8kCu4PNA+aP7WUV72eXWJeP9/r3/K9aLE=
github.com/aphistic/sweet v0.2.0/go.mod h1:fWDlIh/isSE9n6EPsRmC0det+whmX6dJid3stzu0Xys=
@@ -51,8 +41,6 @@ github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5
github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY=
github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/aryann/difflib v0.0.0-20170710044230-e206f873d14a/go.mod h1:DAHtR1m6lCRdSC2Tm3DSWRPvIPr6xNKyeHdqDQSQT+A=
github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a h1:idn718Q4B6AGu/h5Sxe66HYVdqdGu2l9Iebqhi/AEoA=
github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
github.com/asaskevich/govalidator v0.0.0-20200428143746-21a406dcc535 h1:4daAzAu0S6Vi7/lbWECcX0j45yZReDZ56BQsrVBOEEY=
github.com/asaskevich/govalidator v0.0.0-20200428143746-21a406dcc535/go.mod h1:oGkLhpf+kjZl6xBf758TQhh5XrAeiJv/7FRz/2spLIg=
|
||||
github.com/aws/aws-lambda-go v1.13.3/go.mod h1:4UKl9IzQMoD+QF79YdCuzCwp8VbmG4VAQwij/eHl5CU=
|
||||
@@ -68,9 +56,6 @@ github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+Ce
|
||||
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
|
||||
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
|
||||
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
|
||||
github.com/bradfitz/gomemcache v0.0.0-20170208213004-1952afaa557d/go.mod h1:PmM6Mmwb0LSuEubjR8N7PtNe1KxZLtOUHtbeikc5h60=
|
||||
github.com/buger/jsonparser v0.0.0-20191204142016-1a29609e0929 h1:MW/JDk68Rny52yI0M0N+P8lySNgB+NhpI/uAmhgOhUM=
|
||||
github.com/buger/jsonparser v0.0.0-20191204142016-1a29609e0929/go.mod h1:tgcrVJ81GPSF0mz+0nu1Xaz0fazGPrmmJfJtxjbHhUQ=
|
||||
github.com/buger/jsonparser v1.0.0 h1:etJTGF5ESxjI0Ic2UaLQs2LQQpa8G9ykQScukbh4L8A=
|
||||
github.com/buger/jsonparser v1.0.0/go.mod h1:tgcrVJ81GPSF0mz+0nu1Xaz0fazGPrmmJfJtxjbHhUQ=
|
||||
github.com/casbin/casbin/v2 v2.1.2/go.mod h1:YcPU1XXisHhLzuxH9coDNf2FbKpjGlbCg3n9yuLkIJQ=
|
||||
@@ -87,12 +72,8 @@ github.com/cobaugh/osrelease v0.0.0-20181218015638-a93a0a55a249 h1:R0IDH8daQ3lOD
|
||||
github.com/cobaugh/osrelease v0.0.0-20181218015638-a93a0a55a249/go.mod h1:EHKW9yNEYSBpTKzuu7Y9oOrft/UlzH57rMIB03oev6M=
|
||||
github.com/cockroachdb/datadriven v0.0.0-20190809214429-80d97fb3cbaa/go.mod h1:zn76sxSg3SzpJ0PPJaLDCu+Bu0Lg3sKTORVIj19EIF8=
|
||||
github.com/codahale/hdrhistogram v0.0.0-20161010025455-3a0bb77429bd/go.mod h1:sE/e/2PUdi/liOCUjSTXgM1o87ZssimdTWN964YiIeI=
|
||||
github.com/containerd/containerd v1.3.6 h1:SMfcKoQyWhaRsYq7290ioC6XFcHDNcHvcEMjF6ORpac=
|
||||
github.com/containerd/containerd v1.3.6/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
|
||||
github.com/containerd/containerd v1.3.7 h1:eFSOChY8TTcxvkzp8g+Ov1RL0MYww7XEeK0y+zqGpVc=
|
||||
github.com/containerd/containerd v1.3.7/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
|
||||
github.com/containerd/fifo v0.0.0-20190226154929-a9fb20d87448 h1:PUD50EuOMkXVcpBIA/R95d56duJR9VxhwncsFbNnxW4=
|
||||
github.com/containerd/fifo v0.0.0-20190226154929-a9fb20d87448/go.mod h1:ODA38xgv3Kuk8dQz2ZQXpnv/UZZUHUCL7pnLehbXgQI=
|
||||
github.com/containerd/fifo v0.0.0-20200410184934-f15a3290365b h1:qUtCegLdOUVfVJOw+KDg6eJyE1TGvLlkGEd1091kSSQ=
|
||||
github.com/containerd/fifo v0.0.0-20200410184934-f15a3290365b/go.mod h1:jPQ2IAeZRCYxpS/Cm1495vGFww6ecHmMk1YJH2Q5ln0=
|
||||
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
|
||||
@@ -105,8 +86,6 @@ github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfc
|
||||
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
|
||||
github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
|
||||
github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
|
||||
github.com/creasty/defaults v1.3.0 h1:uG+RAxYbJgOPCOdKEcec9ZJXeva7Y6mj/8egdzwmLtw=
|
||||
github.com/creasty/defaults v1.3.0/go.mod h1:CIEEvs7oIVZm30R8VxtFJs+4k201gReYyuYHJxZc68I=
|
||||
github.com/creasty/defaults v1.5.0 h1:DW6NAGGaKuNSKkntc8BCBrR2KOUAcXVnfcwu/LmJhaQ=
|
||||
github.com/creasty/defaults v1.5.0/go.mod h1:FPZ+Y0WNrbqOVw+c6av63eyHUAl6pMHZwqLPvXUZGfY=
|
||||
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
@@ -124,8 +103,6 @@ github.com/docker/go-connections v0.4.0 h1:El9xVISelRB7BuFusrZozjnkIM5YnzCViNKoh
|
||||
github.com/docker/go-connections v0.4.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec=
|
||||
github.com/docker/go-metrics v0.0.1 h1:AgB/0SvBxihN0X8OR4SjsblXkbMvalQ8cjmtKQ2rQV8=
|
||||
github.com/docker/go-metrics v0.0.1/go.mod h1:cG1hvH2utMXtqgqqYE9plW6lDxS3/5ayHzueweSI3Vw=
|
||||
github.com/docker/go-units v0.3.3 h1:Xk8S3Xj5sLGlG5g67hJmYMmUgXv5N4PhkjJHHqrwnTk=
|
||||
github.com/docker/go-units v0.3.3/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
|
||||
github.com/docker/go-units v0.4.0 h1:3uh0PgVws3nIA0Q+MwDC8yjEPf9zjRfZZWXZYDct3Tw=
|
||||
github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
|
||||
github.com/dsnet/compress v0.0.1 h1:PlZu0n3Tuv04TzpfPbrnI0HW/YwodEXDS+oPKahKF0Q=
|
||||
@@ -147,26 +124,20 @@ github.com/fatih/color v1.9.0 h1:8xPHl4/q1VyqGIPif1F+1V3Y3lSmrq01EabUW3CoW5s=
|
||||
github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU=
|
||||
github.com/franela/goblin v0.0.0-20200105215937-c9ffbefa60db/go.mod h1:7dvUGVsVBjqR7JHJk0brhHOZYGmfBYOrK0ZhYMEtBr4=
|
||||
github.com/franela/goreq v0.0.0-20171204163338-bcd34c9993f8/go.mod h1:ZhphrRTfi2rbfLwlschooIH4+wKKDR4Pdxhh+TRoA20=
|
||||
github.com/fsnotify/fsnotify v1.4.3-0.20170329110642-4da3e2cfbabc/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
|
||||
github.com/frankban/quicktest v1.10.2 h1:19ARM85nVi4xH7xPXuc5eM/udya5ieh7b/Sv+d844Tk=
|
||||
github.com/frankban/quicktest v1.10.2/go.mod h1:K+q6oSqb0W0Ininfk863uOk1lMy69l/P6txr3mVT54s=
|
||||
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
|
||||
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
|
||||
github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4=
|
||||
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
|
||||
github.com/gabriel-vasile/mimetype v0.1.4 h1:5mcsq3+DXypREUkW+1juhjeKmE/XnWgs+paHMJn7lf8=
|
||||
github.com/gabriel-vasile/mimetype v0.1.4/go.mod h1:kMJbg3SlWZCsj4R73F1WDzbT9AyGCOVmUtIxxwO5pmI=
|
||||
github.com/gabriel-vasile/mimetype v1.1.1 h1:qbN9MPuRf3bstHu9zkI9jDWNfH//9+9kHxr9oRBBBOA=
|
||||
github.com/gabriel-vasile/mimetype v1.1.1/go.mod h1:6CDPel/o/3/s4+bp6kIbsWATq8pmgOisOPG40CJa6To=
|
||||
github.com/gammazero/deque v0.0.0-20200227231300-1e9af0e52b46 h1:iX4+rD9Fjdx8SkmSO/O5WAIX/j79ll3kuqv5VdYt9J8=
|
||||
github.com/gammazero/deque v0.0.0-20200227231300-1e9af0e52b46/go.mod h1:D90+MBHVc9Sk1lJAbEVgws0eYEurY4mv2TDso3Nxh3w=
|
||||
github.com/gammazero/deque v0.0.0-20200721202602-07291166fe33 h1:UG4wNrJX9xSKnm/Gck5yTbxnOhpNleuE4MQRdmcGySo=
|
||||
github.com/gammazero/deque v0.0.0-20200721202602-07291166fe33/go.mod h1:D90+MBHVc9Sk1lJAbEVgws0eYEurY4mv2TDso3Nxh3w=
|
||||
github.com/gammazero/workerpool v0.0.0-20200608033439-1a5ca90a5753 h1:oSQ61LxZkz3Z4La0O5cbyVDvLWEfbNgiD43cSPdjPQQ=
|
||||
github.com/gammazero/workerpool v0.0.0-20200608033439-1a5ca90a5753/go.mod h1:/XWO2YAUUpPi3smDlFBl0vpX0JHwUomDM/oRMwRmnSs=
|
||||
github.com/gammazero/workerpool v1.0.0 h1:MfkJc6KL0tAmjrRDS203AZz3F+84Uod9YbL8KjpcQ00=
|
||||
github.com/gammazero/workerpool v1.0.0/go.mod h1:/XWO2YAUUpPi3smDlFBl0vpX0JHwUomDM/oRMwRmnSs=
|
||||
github.com/garyburd/redigo v1.1.1-0.20170914051019-70e1b1943d4f/go.mod h1:NR3MbYisc3/PwhQ00EMzDiPmrwpPxAn5GI05/YaO1SY=
|
||||
github.com/gbrlsnchs/jwt/v3 v3.0.0-rc.0 h1:7KeiSrO5puFH1+vdAdbpiie2TrNnkvFc/eOQzT60Z2k=
|
||||
github.com/gbrlsnchs/jwt/v3 v3.0.0-rc.0/go.mod h1:D1+3UtCYAJ1os1PI+zhTVEj6Tb+IHJvXjXKz83OstmM=
|
||||
github.com/gbrlsnchs/jwt/v3 v3.0.0-rc.2 h1:3t7jvTkeQfk1FdP0noXSNiM6AdBokLz7QmZDmnCHAAA=
|
||||
github.com/gbrlsnchs/jwt/v3 v3.0.0-rc.2/go.mod h1:AncDcjXz18xetI3A6STfXq2w+LuTx8pQ8bGEwRN8zVM=
|
||||
github.com/ghodss/yaml v1.0.0 h1:wQHKEahhL6wmXdzwWG11gIVCkOv05bNOh+Rxn0yngAk=
|
||||
@@ -193,7 +164,6 @@ github.com/go-playground/validator/v10 v10.2.0/go.mod h1:uOYAAleCW8F/7oMFd6aG0GO
|
||||
github.com/go-playground/validator/v10 v10.3.0 h1:nZU+7q+yJoFmwvNgv/LnPUkwPal62+b2xXj0AU1Es7o=
|
||||
github.com/go-playground/validator/v10 v10.3.0/go.mod h1:uOYAAleCW8F/7oMFd6aG0GOhaH6EGOAJShg8Id5JGkI=
|
||||
github.com/go-sql-driver/mysql v1.4.0/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w=
|
||||
github.com/go-stack/stack v1.6.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
|
||||
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
|
||||
github.com/gogo/googleapis v1.1.0/go.mod h1:gf4bu3Q80BeJ6H1S1vYPm8/ELATdvryBaNFGgqEef3s=
|
||||
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
|
||||
@@ -204,20 +174,16 @@ github.com/gogo/protobuf v1.3.1 h1:DqDEcV5aeaTmdFBePNpYsp3FlcVH/2ISVVM9Qf8PSls=
|
||||
github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
|
||||
github.com/golang/gddo v0.0.0-20190419222130-af0f2af80721 h1:KRMr9A3qfbVM7iV/WcLY/rL5LICqwMHLhwRXKu99fXw=
|
||||
github.com/golang/gddo v0.0.0-20190419222130-af0f2af80721/go.mod h1:xEhNfoBDX1hzLm2Nf80qUvZ2sVwoMZ8d6IE2SrsQfh4=
|
||||
github.com/golang/gddo v0.0.0-20200715224205-051695c33a3f/go.mod h1:sam69Hju0uq+5uvLJUMDlsKlQ21Vrs1Kd/1YFPNYdOU=
|
||||
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
|
||||
github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
|
||||
github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
|
||||
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
|
||||
github.com/golang/lint v0.0.0-20170918230701-e5d664eb928e/go.mod h1:tluoj9z5200jBnyusfRPU2LqT6J+DAorxEvtC7LHB+E=
|
||||
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
|
||||
github.com/golang/protobuf v1.2.0 h1:P3YflyNX/ehuJFLhxviNdFxQPkGK5cDcApsge1SqnvM=
|
||||
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||
github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
|
||||
github.com/golang/protobuf v1.3.5 h1:F768QJ1E9tib+q5Sc8MkdJi1RxLTbRcTf8LJV56aRls=
|
||||
github.com/golang/protobuf v1.3.5/go.mod h1:6O5/vntMXwX2lRkT1hjjk0nAC1IDOTvTlVgjlRvqsdk=
|
||||
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
|
||||
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
|
||||
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
|
||||
@@ -226,25 +192,25 @@ github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvq
|
||||
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
|
||||
github.com/golang/protobuf v1.4.2 h1:+Z5KGCizgyZCbGh1KZqA0fcLLkwbsjIzS4aV2v7wJX0=
|
||||
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
|
||||
github.com/golang/snappy v0.0.0-20170215233205-553a64147049/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
|
||||
github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
|
||||
github.com/golang/snappy v0.0.1 h1:Qgr9rKW7uDUkrbSmQeiDsGa8SjGyCOGtuasMWwvp2P4=
|
||||
github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
|
||||
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
|
||||
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
|
||||
github.com/google/go-cmp v0.1.1-0.20171103154506-982329095285/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
|
||||
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
|
||||
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
|
||||
github.com/google/go-cmp v0.3.1 h1:Xye71clBPdm5HgqGwUkwhbynsUJZhDbS20FvLhQ2izg=
|
||||
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
|
||||
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
|
||||
github.com/google/go-cmp v0.5.0 h1:/QaMHBdZ26BB3SSst0Iwl10Epc+xhTquomWX0oZEB6w=
|
||||
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
|
||||
github.com/google/go-cmp v0.5.2 h1:X2ev0eStA3AbceY54o37/0PQ/UWqKEiiO2dKL5OPaFM=
|
||||
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
|
||||
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
|
||||
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
|
||||
github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
||||
github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY=
|
||||
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
||||
github.com/googleapis/gax-go v2.0.0+incompatible/go.mod h1:SFVmujtThgffbyetf+mdk2eWhX2bMyUtNHzFKcPA9HY=
|
||||
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 h1:EGx4pi6eqNxGaHF6qqu48+N2wcFQ5qg5FXgOdqsJ5d8=
|
||||
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
|
||||
github.com/gorilla/context v1.1.1/go.mod h1:kBGZzfjB9CEq2AlWe17Uuf7NDRt0dE0s8S51q0aT7Yg=
|
||||
@@ -257,7 +223,6 @@ github.com/gorilla/websocket v1.4.0 h1:WDFjx/TMzVgy9VdMMQi2K2Emtwi2QcUQsztZ/zLaH
|
||||
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
|
||||
github.com/gorilla/websocket v1.4.2 h1:+/TMaTYc4QFitKJxsQ7Yye35DkWvkdLcvGKqM+x0Ufc=
|
||||
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
|
||||
github.com/gregjones/httpcache v0.0.0-20170920190843-316c5e0ff04e/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
|
||||
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
|
||||
github.com/grpc-ecosystem/go-grpc-middleware v1.0.1-0.20190118093823-f849b5445de4/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
|
||||
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
|
||||
@@ -279,7 +244,6 @@ github.com/hashicorp/go-version v1.2.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09
|
||||
github.com/hashicorp/go.net v0.0.1/go.mod h1:hjKkEWcCURg++eb33jQU7oqQcI9XDCnUzHA0oac0k90=
|
||||
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
|
||||
github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
|
||||
github.com/hashicorp/hcl v0.0.0-20170914154624-68e816d1c783/go.mod h1:oZtUIOe8dh44I2q6ScRibXws4Ajl+d+nod3AaR9vL5w=
|
||||
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
|
||||
github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64=
|
||||
github.com/hashicorp/mdns v1.0.0/go.mod h1:tL+uN++7HEJ6SQLQ2/p+z2pH24WQKWjBPkE0mNTz8vQ=
|
||||
@@ -295,9 +259,6 @@ github.com/icza/dyno v0.0.0-20200205103839-49cb13720835 h1:f1irK5f03uGGj+FjgQfZ5
|
||||
github.com/icza/dyno v0.0.0-20200205103839-49cb13720835/go.mod h1:c1tRKs5Tx7E2+uHGSyyncziFjvGpgv4H2HrqXeUQ/Uk=
|
||||
github.com/imdario/mergo v0.3.8 h1:CGgOkSJeqMRmt0D9XLWExdT4m4F1vd3FV3VPt+0VxkQ=
|
||||
github.com/imdario/mergo v0.3.8/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
|
||||
github.com/imdario/mergo v0.3.10 h1:6q5mVkdH/vYmqngx7kZQTjJ5HRsx+ImorDIEQ+beJgc=
|
||||
github.com/imdario/mergo v0.3.10/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
|
||||
github.com/inconshreveable/log15 v0.0.0-20170622235902-74a0988b5f80/go.mod h1:cOaXtrgN4ScfRrD9Bre7U1thNq5RtJ8ZoP4iXVGRj6o=
|
||||
github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM=
|
||||
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
|
||||
github.com/influxdata/influxdb1-client v0.0.0-20191209144304-8bf82d3c094d/go.mod h1:qj24IKcXYK6Iy9ceXlo3Tc+vtHo9lIhSX5JddghvEPo=
|
||||
@@ -317,6 +278,8 @@ github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfV
|
||||
github.com/julienschmidt/httprouter v1.2.0 h1:TDTW5Yz1mjftljbcKqRcrYhd4XeOoI98t+9HbQbYf7g=
|
||||
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
|
||||
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
|
||||
github.com/karrick/godirwalk v1.16.1 h1:DynhcF+bztK8gooS0+NDJFrdNZjJ3gzVzC545UNA9iw=
|
||||
github.com/karrick/godirwalk v1.16.1/go.mod h1:j4mkqPuvaLI8mp1DroR3P6ad7cyYd4c1qeJ3RV7ULlk=
|
||||
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 h1:Z9n2FFNUXsshfwJMBgNA0RU6/i7WVaAegv3PtuIHPMs=
|
||||
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51/go.mod h1:CzGEWj7cYgsdH8dAjBGEr58BoE7ScuLd+fwFZ44+/x8=
|
||||
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
|
||||
@@ -330,8 +293,6 @@ github.com/klauspost/compress v1.10.10/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdY
|
||||
github.com/klauspost/cpuid v1.2.0/go.mod h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek=
|
||||
github.com/klauspost/pgzip v1.2.1 h1:oIPZROsWuPHpOdMVWLuJZXwgjhrW8r1yEX8UqMyeNHM=
|
||||
github.com/klauspost/pgzip v1.2.1/go.mod h1:Ch1tH69qFZu15pkjo5kYi6mth2Zzwzt50oCQKQE9RUs=
|
||||
github.com/klauspost/pgzip v1.2.3 h1:Ce2to9wvs/cuJ2b86/CKQoTYr9VHfpanYosZ0UBJqdw=
|
||||
github.com/klauspost/pgzip v1.2.3/go.mod h1:Ch1tH69qFZu15pkjo5kYi6mth2Zzwzt50oCQKQE9RUs=
|
||||
github.com/klauspost/pgzip v1.2.4 h1:TQ7CNpYKovDOmqzRHKxJh0BeaBI7UdQZYc6p7pMQh1A=
|
||||
github.com/klauspost/pgzip v1.2.4/go.mod h1:Ch1tH69qFZu15pkjo5kYi6mth2Zzwzt50oCQKQE9RUs=
|
||||
github.com/konsorten/go-windows-terminal-sequences v1.0.1 h1:mweAR1A6xJ3oS2pRaGiHgQ4OO8tzTaLawm8vnODuwDk=
|
||||
@@ -344,6 +305,8 @@ github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFB
|
||||
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
|
||||
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
|
||||
github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
|
||||
github.com/kr/pretty v0.2.1 h1:Fmg33tUaq4/8ym9TJN1x7sLJnHVwhP33CNkpYV/7rwI=
|
||||
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
|
||||
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
|
||||
github.com/kr/pty v1.1.4 h1:5Myjjh3JY/NaAi4IsUbHADytDyl1VE1Y9PXDlL+P/VQ=
|
||||
github.com/kr/pty v1.1.4/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
|
||||
@@ -358,12 +321,10 @@ github.com/magefile/mage v1.9.0 h1:t3AU2wNwehMCW97vuqQLtw6puppWXHO+O2MHo5a50XE=
|
||||
github.com/magefile/mage v1.9.0/go.mod h1:z5UZb/iS3GoOSn0JgWuiw7dxlurVYTu+/jHXqQg881A=
|
||||
github.com/magefile/mage v1.10.0 h1:3HiXzCUY12kh9bIuyXShaVe529fJfyqoVM42o/uom2g=
|
||||
github.com/magefile/mage v1.10.0/go.mod h1:z5UZb/iS3GoOSn0JgWuiw7dxlurVYTu+/jHXqQg881A=
|
||||
github.com/magiconair/properties v1.7.4-0.20170902060319-8d7837e64d3c/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
|
||||
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
|
||||
github.com/magiconair/properties v1.8.1 h1:ZC2Vc7/ZFkGmsVC9KvOjumD+G5lXy2RtTKyzRKO2BQ4=
|
||||
github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
|
||||
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
|
||||
github.com/mattn/go-colorable v0.0.10-0.20170816031813-ad5389df28cd/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
|
||||
github.com/mattn/go-colorable v0.1.1/go.mod h1:FuOcm+DKB9mbwrcAfNl7/TZVBZ6rcnceauSikq3lYCQ=
|
||||
github.com/mattn/go-colorable v0.1.2 h1:/bC9yWikZXAL9uJdulbSfyVNIR3n3trXl+v8+1sx8mU=
|
||||
github.com/mattn/go-colorable v0.1.2/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
|
||||
@@ -371,7 +332,6 @@ github.com/mattn/go-colorable v0.1.4 h1:snbPLB8fVfU9iwbbo30TPtbLRzwWu6aJS6Xh4eaa
|
||||
github.com/mattn/go-colorable v0.1.4/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
|
||||
github.com/mattn/go-colorable v0.1.7 h1:bQGKb3vps/j0E9GfJQ03JyhRuxsvdAanXlT9BTw3mdw=
|
||||
github.com/mattn/go-colorable v0.1.7/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
|
||||
github.com/mattn/go-isatty v0.0.2/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
|
||||
github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
|
||||
github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
|
||||
github.com/mattn/go-isatty v0.0.5/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
|
||||
@@ -400,7 +360,6 @@ github.com/mitchellh/go-testing-interface v1.0.0/go.mod h1:kRemZodwjscx+RGhAo8eI
|
||||
github.com/mitchellh/gox v0.4.0/go.mod h1:Sd9lOJ0+aimLBi73mGofS1ycjY8lL3uZM3JPS42BGNg=
|
||||
github.com/mitchellh/iochan v1.0.0/go.mod h1:JwYml1nuB7xOzsp52dPpHFffvOCDupsG0QubkSMEySY=
|
||||
github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
|
||||
github.com/mitchellh/mapstructure v0.0.0-20170523030023-d0303fe80992/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
|
||||
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
|
||||
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
|
||||
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
|
||||
@@ -434,8 +393,6 @@ github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+W
|
||||
github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
|
||||
github.com/onsi/gomega v1.5.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
|
||||
github.com/op/go-logging v0.0.0-20160315200505-970db520ece7/go.mod h1:HzydrMdWErDVzsI23lYNej1Htcns9BCg93Dk0bBINWk=
|
||||
github.com/opencontainers/go-digest v1.0.0-rc1 h1:WzifXhOVOEOuFYOJAW6aQqW0TooG2iki3E3Ii+WN7gQ=
|
||||
github.com/opencontainers/go-digest v1.0.0-rc1/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
|
||||
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
|
||||
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
|
||||
github.com/opencontainers/image-spec v1.0.1 h1:JMemWkRwHx4Zj+fVxWoMCFm/8sYGGrUVojFA6h/TRcI=
|
||||
@@ -453,10 +410,8 @@ github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FI
|
||||
github.com/patrickmn/go-cache v2.1.0+incompatible h1:HRMgzkcYKYpi3C8ajMPV8OFXaaRUnok+kx1WdO15EQc=
|
||||
github.com/patrickmn/go-cache v2.1.0+incompatible/go.mod h1:3Qf8kWWT7OJRJbdiICTKqZju1ZixQ/KpMGzzAfe6+WQ=
|
||||
github.com/pborman/uuid v1.2.0/go.mod h1:X/NO0urCmaxf9VXbdlT7C2Yzkj2IKimNn4k+gtPdI/k=
|
||||
github.com/pelletier/go-toml v1.0.1-0.20170904195809-1d6b12b7cb29/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
|
||||
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
|
||||
github.com/performancecopilot/speed v3.0.0+incompatible/go.mod h1:/CLtqpZ5gBg1M9iaPbIdPPGyKcA8hKdoy6hAWba7Yac=
|
||||
github.com/pierrec/lz4 v1.0.1 h1:w6GMGWSsCI04fTM8wQRdnW74MuJISakuUU0onU0TYB4=
|
||||
github.com/pierrec/lz4 v1.0.2-0.20190131084431-473cd7ce01a1/go.mod h1:3/3N9NVKO0jef7pBehbT1qWhCMrIgbYNnFAZCqQ5LRc=
|
||||
github.com/pierrec/lz4 v2.0.5+incompatible h1:2xWsjqPFWcplujydGg4WmhC/6fZqK42wMM8aXeqhl0I=
|
||||
github.com/pierrec/lz4 v2.0.5+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY=
|
||||
@@ -469,12 +424,8 @@ github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINE
|
||||
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
|
||||
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
|
||||
github.com/pkg/profile v1.2.1/go.mod h1:hJw3o1OdXxsrSjjVksARp5W95eeEaEfptyVZyv6JUPA=
|
||||
github.com/pkg/profile v1.4.0 h1:uCmaf4vVbWAOZz36k1hrQD7ijGRzLwaME8Am/7a4jZI=
|
||||
github.com/pkg/profile v1.4.0/go.mod h1:NWz/XGvpEW1FyYQ7fCx4dqYBLlfTcE+A9FLAkNKqjFE=
|
||||
github.com/pkg/profile v1.5.0 h1:042Buzk+NhDI+DeSAA62RwJL8VAuZUMQZUjCsRz1Mug=
|
||||
github.com/pkg/profile v1.5.0/go.mod h1:qBsxPvzyUincmltOk6iyRVxHYg4adc0OFOv72ZdLa18=
|
||||
github.com/pkg/sftp v1.8.3 h1:9jSe2SxTM8/3bXZjtqnkgTBW+lA8db0knZJyns7gpBA=
|
||||
github.com/pkg/sftp v1.8.3/go.mod h1:NxmoDg/QLVWluQDUYG7XBZTLUpKeFa8e3aMf1BfjyHk=
|
||||
github.com/pkg/sftp v1.11.0 h1:4Zv0OGbpkg4yNuUtH0s8rvoYxRCNyT29NVUo6pgPmxI=
|
||||
github.com/pkg/sftp v1.11.0/go.mod h1:lYOWFsE0bwd1+KfKJaKeuokY15vzFx25BLbzYYoAxZI=
|
||||
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
|
||||
@@ -517,13 +468,7 @@ github.com/prometheus/procfs v0.0.8/go.mod h1:7Qr8sr6344vo1JqZ6HhLceV9o3AJ1Ff+Gx
|
||||
github.com/prometheus/procfs v0.1.3 h1:F0+tqvhOksq22sc6iCHF5WGlWjdwj92p0udFh1VFBS8=
|
||||
github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
|
||||
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
|
||||
github.com/pterodactyl/sftp-server v1.1.4 h1:JESuEuZ+d2tajMjuQblPOlGISM9Uc2xOzk7irVF9PQ0=
|
||||
github.com/pterodactyl/sftp-server v1.1.4/go.mod h1:KjSONrenRr1oCh94QIVAU6yEzMe+Hd7r/JHrh5/oQHs=
|
||||
github.com/pterodactyl/sftp-server v1.1.5 h1:r5RIfCDVLpn6MsfD8zcCQLtviy14GJ9E+9HzidjgAGw=
|
||||
github.com/pterodactyl/sftp-server v1.1.5/go.mod h1:YVx5g2gjln7fYFO7+c3iDRTwNyA5GuJtkKME0UDB8co=
|
||||
github.com/rcrowley/go-metrics v0.0.0-20181016184325-3113b8401b8a/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
|
||||
github.com/remeh/sizedwaitgroup v0.0.0-20180822144253-5e7302b12cce h1:aP+C+YbHZfOQlutA4p4soHi7rVUqHQdWEVMSkHfDTqY=
|
||||
github.com/remeh/sizedwaitgroup v0.0.0-20180822144253-5e7302b12cce/go.mod h1:3j2R4OIe/SeS6YDhICBy22RWjJC5eNCJ1V+9+NVNYlo=
|
||||
github.com/remeh/sizedwaitgroup v1.0.0 h1:VNGGFwNo/R5+MJBf6yrsr110p0m4/OX4S3DCy7Kyl5E=
|
||||
github.com/remeh/sizedwaitgroup v1.0.0/go.mod h1:3j2R4OIe/SeS6YDhICBy22RWjJC5eNCJ1V+9+NVNYlo=
|
||||
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
|
||||
@@ -554,24 +499,17 @@ github.com/smartystreets/gunit v1.0.0/go.mod h1:qwPWnhz6pn0NnRBP++URONOVyNkPyr4S
|
||||
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
|
||||
github.com/sony/gobreaker v0.4.1/go.mod h1:ZKptC7FHNvhBz7dN2LGjPVBz2sZJmc0/PkyDJOjmxWY=
|
||||
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
|
||||
github.com/spf13/afero v0.0.0-20170901052352-ee1bd8ee15a1/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
|
||||
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
|
||||
github.com/spf13/cast v1.1.0/go.mod h1:r2rcYCSwa1IExKTDiTfzaxqT2FNHs8hODu4LnUfgKEg=
|
||||
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
|
||||
github.com/spf13/cobra v0.0.3/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
|
||||
github.com/spf13/cobra v0.0.7 h1:FfTH+vuMXOas8jmfb5/M7dzEYx7LpcLb7a0LPe34uOU=
|
||||
github.com/spf13/cobra v0.0.7/go.mod h1:/6GTrnGXV9HjY+aR4k0oJ5tcvakLuG6EuKReYlHNrgE=
|
||||
github.com/spf13/cobra v1.0.0 h1:6m/oheQuQ13N9ks4hubMG6BnvwOeaJrqSPLahSnczz8=
|
||||
github.com/spf13/cobra v1.0.0/go.mod h1:/6GTrnGXV9HjY+aR4k0oJ5tcvakLuG6EuKReYlHNrgE=
|
||||
github.com/spf13/jwalterweatherman v0.0.0-20170901151539-12bd96e66386/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
|
||||
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
|
||||
github.com/spf13/pflag v1.0.1-0.20170901120850-7aff26db30c1/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
|
||||
github.com/spf13/pflag v1.0.1/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
|
||||
github.com/spf13/pflag v1.0.3 h1:zPAT6CGy6wXeQ7NtTnaTerfKOsV6V6F8agHXFiazDkg=
|
||||
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
|
||||
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
|
||||
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
|
||||
github.com/spf13/viper v1.0.0/go.mod h1:A8kyI5cUJhb8N+3pkfONlcEcZbueH6nhAm0Fq7SrnBM=
|
||||
github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE=
|
||||
github.com/streadway/amqp v0.0.0-20190404075320-75d898a42a94/go.mod h1:AZpEONHx3DKn8O/DFsRAY58/XVQiIPMTMB1SddzLXVw=
|
||||
github.com/streadway/amqp v0.0.0-20190827072141-edfb9018d271/go.mod h1:AZpEONHx3DKn8O/DFsRAY58/XVQiIPMTMB1SddzLXVw=
|
||||
@@ -583,11 +521,11 @@ github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXf
|
||||
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
|
||||
github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk=
|
||||
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
|
||||
github.com/stretchr/testify v1.5.1 h1:nOGnQDM7FYENwehXlg/kFVnos3rEvtKTjRvOWSzb6H4=
|
||||
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
|
||||
github.com/stretchr/testify v1.6.1 h1:hDPOHmpOpP40lSULcqw7IrRb/u7w6RpDC9399XyoNd0=
|
||||
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
|
||||
github.com/tj/assert v0.0.0-20171129193455-018094318fb0 h1:Rw8kxzWo1mr6FSaYXjQELRe88y2KdfynXdnK72rdjtA=
|
||||
github.com/tj/assert v0.0.0-20171129193455-018094318fb0/go.mod h1:mZ9/Rh9oLWpLLDRpvE+3b7gP/C2YyLFYxNmcLnPTMe0=
|
||||
github.com/tj/assert v0.0.3 h1:Df/BlaZ20mq6kuai7f5z2TvPFiwC3xaWJSDQNiIS3Rk=
|
||||
github.com/tj/assert v0.0.3/go.mod h1:Ne6X72Q+TB1AteidzQncjw9PabbMp4PBMZ1k+vd1Pvk=
|
||||
github.com/tj/go-buffer v1.1.0/go.mod h1:iyiJpfFcR2B9sXu7KvjbT9fpM4mOelRSDTbntVj52Uc=
|
||||
github.com/tj/go-elastic v0.0.0-20171221160941-36157cbbebc2/go.mod h1:WjeM0Oo1eNAjXGDx2yma7uG2XoyRZTq1uv3M/o7imD0=
|
||||
@@ -596,7 +534,6 @@ github.com/tj/go-spin v1.1.0 h1:lhdWZsvImxvZ3q1C5OIB7d72DuOwP4O2NdBg9PyzNds=
|
||||
github.com/tj/go-spin v1.1.0/go.mod h1:Mg1mzmePZm4dva8Qz60H2lHwmJ2loum4VIrLgVnKwh4=
|
||||
github.com/tmc/grpc-websocket-proxy v0.0.0-20170815181823-89b8d40f7ca8/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
|
||||
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
|
||||
github.com/uber-go/zap v1.9.1/go.mod h1:GY+83l3yxBcBw2kmHu/sAWwItnTn+ynxHCRo+WiIQOY=
|
||||
github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
|
||||
github.com/ugorji/go v1.1.7 h1:/68gy2h+1mWMrwZFeD1kQialdSzAb432dtpeJ42ovdo=
|
||||
github.com/ugorji/go v1.1.7/go.mod h1:kZn38zHttfInRq0xu/PH0az30d+z6vm202qpg1oXVMw=
|
||||
@@ -612,7 +549,6 @@ github.com/xi2/xz v0.0.0-20171230120015-48954b6210f8 h1:nIPpBwaJSVYIxUFsDv3M8ofm
|
||||
github.com/xi2/xz v0.0.0-20171230120015-48954b6210f8/go.mod h1:HUYIGzjTL3rfEspMxjDjgmT5uz5wzYJKVo23qUhYTos=
|
||||
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
|
||||
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
|
||||
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
|
||||
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
|
||||
go.etcd.io/bbolt v1.3.3/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
|
||||
go.etcd.io/etcd v0.0.0-20191023171146-3cf2f69b5738/go.mod h1:dnLIgRNXwCJa5e+c6mIZCrds/GIG4ncV9HhK5PX7jPg=
|
||||
@@ -623,23 +559,14 @@ go.uber.org/atomic v1.3.2 h1:2Oa65PReHzfn29GpvgsYwloV9AVFHPDk8tYxt2c2tr4=
|
||||
go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
|
||||
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
|
||||
go.uber.org/atomic v1.5.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
|
||||
go.uber.org/atomic v1.6.0 h1:Ezj3JGmsOnG1MoRWQkPBsKLe9DwWD9QeXzTRzzldNVk=
|
||||
go.uber.org/atomic v1.6.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
|
||||
go.uber.org/multierr v1.1.0 h1:HoEmRHQPVSqub6w2z2d2EOVs2fjyFRGyofhKuyDq0QI=
|
||||
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
|
||||
go.uber.org/multierr v1.3.0/go.mod h1:VgVr7evmIr6uPjLBxg28wmKNXyqE9akIJ5XnfpiKl+4=
|
||||
go.uber.org/multierr v1.5.0 h1:KCa4XfM8CWFCpxXRGok+Q0SS/0XBhMDbHHGABQLvD2A=
|
||||
go.uber.org/multierr v1.5.0/go.mod h1:FeouvMocqHpRaaGuG9EjoKcStLC43Zu/fmqdUMPcKYU=
|
||||
go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee h1:0mgffUl7nfd+FpvXMVz4IDEaUSmT1ysygQC7qYo7sG4=
|
||||
go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee/go.mod h1:vJERXedbb3MVM5f9Ejo0C68/HhF8uaILCdgjnY+goOA=
|
||||
go.uber.org/zap v1.9.1 h1:XCJQEf3W6eZaVwhRBof6ImoYGJSITeKWsyeh3HFu/5o=
|
||||
go.uber.org/zap v1.9.1/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
|
||||
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
|
||||
go.uber.org/zap v1.13.0/go.mod h1:zwrFLgMcdUuIBviXEYEH1YKNaOBnKXsx2IPda5bBwHM=
|
||||
go.uber.org/zap v1.15.0 h1:ZZCA22JRF2gQE5FoNmhmrf7jeJJ2uhqDUNRYKm8dvmM=
|
||||
go.uber.org/zap v1.15.0/go.mod h1:Mb2vm2krFEG5DV0W9qcHBYFtp/Wku1cvYaqPsS/WYfc=
|
||||
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
golang.org/x/crypto v0.0.0-20181025213731-e84da0312774/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
golang.org/x/crypto v0.0.0-20181029021203-45a5f77698d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
|
||||
golang.org/x/crypto v0.0.0-20190426145343-a29dc8fdc734/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
@@ -647,11 +574,8 @@ golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8U
|
||||
golang.org/x/crypto v0.0.0-20190530122614-20be4c3c3ed5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
golang.org/x/crypto v0.0.0-20190701094942-4def268fd1a4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
golang.org/x/crypto v0.0.0-20190820162420-60c769a6c586/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
golang.org/x/crypto v0.0.0-20190829043050-9756ffdc2472/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
golang.org/x/crypto v0.0.0-20190927123631-a832865fa7ad/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
golang.org/x/crypto v0.0.0-20200429183012-4b2356b1ed79 h1:IaQbIIB2X/Mp/DKctl6ROxz1KyMlKp4uyvL6+kQ7C88=
|
||||
golang.org/x/crypto v0.0.0-20200429183012-4b2356b1ed79/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
||||
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
||||
golang.org/x/crypto v0.0.0-20200728195943-123391ffb6de h1:ikNHVSjEfnvz6sxdSPCaPt572qowuyMDMJLLm3Db3ig=
|
||||
golang.org/x/crypto v0.0.0-20200728195943-123391ffb6de/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
||||
@@ -660,16 +584,11 @@ golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTk
|
||||
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
|
||||
golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
|
||||
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
|
||||
golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
|
||||
golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
|
||||
golang.org/x/lint v0.0.0-20190930215403-16217165b5de h1:5hukYrvBGR8/eNkX5mdUezrA6JiaEZDtJb9Ei+1LlBs=
|
||||
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
|
||||
golang.org/x/lint v0.0.0-20200302205851-738671d3881b h1:Wh+f8QHJXR411sJR8/vRBTZ7YapZaRvUcLFFJhusH0k=
|
||||
golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
|
||||
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
|
||||
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
|
||||
golang.org/x/mod v0.2.0 h1:KU7oHjnv3XNWfa5COkzUifxZmxp1TyI7ImMXqFxLwvQ=
|
||||
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
|
||||
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
@@ -687,16 +606,11 @@ golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR
|
||||
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20190813141303-74dc4d7220e7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20200425230154-ff2c4b7c35a0 h1:Jcxah/M+oLZ/R4/z5RzfPzGbPXnVDPkEDtf2JnuxN+U=
|
||||
golang.org/x/net v0.0.0-20200425230154-ff2c4b7c35a0/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
|
||||
golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
|
||||
golang.org/x/net v0.0.0-20200707034311-ab3426394381 h1:VXak5I6aEWmAXeQjA+QSZzlgNrpq9mjcfDemuexIKsU=
|
||||
golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
|
||||
golang.org/x/oauth2 v0.0.0-20170912212905-13449ad91cb2/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
|
||||
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
|
||||
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
||||
golang.org/x/sync v0.0.0-20170517211232-f52d1811a629/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f h1:wMNYb4v58l5UBM7MYRLPG6ZhfOqbKu7X5eyFl8ZhKvA=
|
||||
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
@@ -705,8 +619,6 @@ golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJ
|
||||
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e h1:vcxGaoTs7kV8m5Np9uUNQin4BrLOthgV7252N8V+FwY=
|
||||
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a h1:WXEvlFVvvGxCJLG6REjsT03iWnKLEWinaScsxF2Vm2o=
|
||||
golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208 h1:qwRHBd0NqMbJxfbotnDhm2ByMI1Shq4Y6oRJo21SGJA=
|
||||
golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
@@ -735,19 +647,15 @@ golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7w
|
||||
golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200509044756-6aff5f38e54f h1:mOhmO9WsBaJCNmaZHPtHs9wOcdqdKCjF6OPJlmDM3KI=
|
||||
golang.org/x/sys v0.0.0-20200509044756-6aff5f38e54f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae h1:Ih9Yo4hSPImZOpfGuA4bR/ORKTAbhZo2AbWNRCnevdo=
|
||||
golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200806125547-5acd03effb82 h1:6cBnXxYO+CiRVrChvCosSv7magqTPbyAgz1M8iOv5wM=
|
||||
golang.org/x/sys v0.0.0-20200806125547-5acd03effb82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg=
|
||||
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
|
||||
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
|
||||
golang.org/x/text v0.3.3 h1:cokOdA+Jmi5PJGXLlLllQSgYigAEfHXJAERHVMaCc2k=
|
||||
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
||||
golang.org/x/time v0.0.0-20170424234030-8be79e1e0910/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 h1:SvFZT6jyqRaOeXpc5h/JSfZenJ2O330aBsf7JfSUXmQ=
|
||||
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
|
||||
@@ -765,18 +673,11 @@ golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3
|
||||
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
|
||||
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
|
||||
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
|
||||
golang.org/x/tools v0.0.0-20190710153321-831012c29e42/go.mod h1:jcCCGcm9btYwXyDqrUWc6MKQKKGJCWEQ3AfLSRIbEuI=
|
||||
golang.org/x/tools v0.0.0-20190927191325-030b2cf1153e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5 h1:hKsoRgsbwY1NafxrwTs+k64bikrLBkAgPir1TNCj3Zs=
|
||||
golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20191130070609-6e064ea0cf2d/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20200103221440-774c71fcf114/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
|
||||
golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
|
||||
golang.org/x/tools v0.0.0-20200509030707-2212a7e161a5 h1:MeC2gMlMdkd67dn17MEby3rGXRxZtWeiRXOnISfTQ74=
|
||||
golang.org/x/tools v0.0.0-20200509030707-2212a7e161a5/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
|
||||
golang.org/x/tools/gopls v0.1.3/go.mod h1:vrCQzOKxvuiZLjCKSmbbov04oeBQQOb4VQqwYK2PWIY=
|
||||
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7 h1:9zdDQZ7Thm29KFXgAX/+yaf3eVbP7djjWp/dXAppNCc=
|
||||
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
@@ -784,13 +685,10 @@ golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IV
|
||||
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE=
|
||||
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
google.golang.org/api v0.0.0-20170921000349-586095a6e407/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0=
|
||||
google.golang.org/api v0.3.1/go.mod h1:6wY9I6uQWHQ8EM57III9mq/AjF+i8G65rmVagqKMtkk=
|
||||
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
|
||||
google.golang.org/appengine v1.2.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
|
||||
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
|
||||
google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
|
||||
google.golang.org/genproto v0.0.0-20170918111702-1e559d0a00ee/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
|
||||
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8 h1:Nw54tB0rB7hY/N0NQvRW8DG4Yk3Q6T9cu9RcFQDu1tc=
|
||||
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
|
||||
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
|
||||
@@ -800,7 +698,6 @@ google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98
|
||||
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
|
||||
google.golang.org/genproto v0.0.0-20200806141610-86f49bd18e98 h1:LCO0fg4kb6WwkXQXRQQgUYsFeFb5taTX5WAx5O/Vt28=
|
||||
google.golang.org/genproto v0.0.0-20200806141610-86f49bd18e98/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
|
||||
google.golang.org/grpc v1.2.1-0.20170921194603-d4b75ebd4f9f/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
|
||||
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
|
||||
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
|
||||
google.golang.org/grpc v1.20.0/go.mod h1:chYK+tFQF0nDUGJgXMSgLCQk3phJEuONr2DCgLDdAQM=
|
||||
@@ -838,8 +735,6 @@ gopkg.in/cheggaaa/pb.v1 v1.0.25/go.mod h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qS
|
||||
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
|
||||
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
|
||||
gopkg.in/gcfg.v1 v1.2.3/go.mod h1:yesOnuUOFQAhST5vPY4nbZsb/huCgGGXlipJsBn0b3o=
|
||||
gopkg.in/ini.v1 v1.51.0 h1:AQvPpx3LzTDM0AjnIRlVFwFFGC+npRopjZxLJj6gdno=
|
||||
gopkg.in/ini.v1 v1.51.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
|
||||
gopkg.in/ini.v1 v1.57.0 h1:9unxIsFcTt4I55uWluz+UmL95q4kdJ0buvQ1ZIqVQww=
|
||||
gopkg.in/ini.v1 v1.57.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
|
||||
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
|
||||
@@ -856,6 +751,7 @@ gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v2 v2.3.0 h1:clyUAQHOM3G0M3f5vQj7LuJrETvjVot3Z5el9nffUtU=
|
||||
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||
gopkg.in/yaml.v3 v3.0.0-20200605160147-a5ece683394c h1:grhR+C34yXImVGp7EzNk+DTIk+323eIUWOmEevy6bDo=
|
||||
gopkg.in/yaml.v3 v3.0.0-20200605160147-a5ece683394c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||
gotest.tools v2.2.0+incompatible h1:VsBPFP1AI068pPrMxtb/S8Zkgf9xEmTLJjfM+P5UIEo=
|
||||
gotest.tools v2.2.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw=
|
||||
@@ -864,7 +760,5 @@ honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWh
|
||||
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
||||
honnef.co/go/tools v0.0.1-2019.2.3 h1:3JgtbtFHMiCmsznwGVTUWbgGov+pVqnlf1dEJTNAXeM=
|
||||
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
|
||||
honnef.co/go/tools v0.0.1-2020.1.3 h1:sXmLre5bzIR6ypkjXCDI3jHPssRhc8KD/Ome589sc3U=
|
||||
honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
|
||||
sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o=
|
||||
sourcegraph.com/sourcegraph/appdash v0.0.0-20190731080439-ebfcffb1b5c0/go.mod h1:hI742Nqp5OhwiqlzhgfbWU4mW4yO10fP+LoT9WOswdU=
|
||||
|
||||
@@ -26,14 +26,11 @@ func New(data []byte) (*Installer, error) {
        return nil, NewValidationError("uuid provided was not in a valid format")
    }

    if !govalidator.IsUUIDv4(getString(data, "service", "egg")) {
        return nil, NewValidationError("service egg provided was not in a valid format")
    }

    cfg := &server.Configuration{
        Uuid:       getString(data, "uuid"),
        Suspended:  false,
        Invocation: getString(data, "invocation"),
        Uuid:           getString(data, "uuid"),
        Suspended:      false,
        Invocation:     getString(data, "invocation"),
        SkipEggScripts: getBoolean(data, "skip_egg_scripts"),
        Build: environment.Limits{
            MemoryLimit: getInt(data, "build", "memory"),
            Swap:        getInt(data, "build", "swap"),
@@ -117,7 +114,6 @@ func (i *Installer) Execute() {
    }

    l.Debug("creating required environment for server instance")
    // TODO: ensure data directory exists.
    if err := i.server.Environment.Create(); err != nil {
        l.WithField("error", err).Error("failed to create environment for server")
        return
@@ -139,3 +135,9 @@ func getInt(data []byte, key ...string) int64 {

    return value
}

func getBoolean(data []byte, key ...string) bool {
    value, _ := jsonparser.GetBoolean(data, key...)

    return value
}
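The getString/getInt/getBoolean helpers above all lean on buger/jsonparser's variadic key lookups and deliberately swallow lookup errors in favour of zero values, which is how a missing `skip_egg_scripts` key still produces a usable `false`. A minimal standalone sketch of that behaviour; the payload below is hypothetical and much smaller than a real install request from the panel:

```go
package main

import (
	"fmt"

	"github.com/buger/jsonparser"
)

// Mirrors the helpers in the diff above: a missing or malformed key simply
// yields the type's zero value instead of surfacing an error.
func getBoolean(data []byte, key ...string) bool {
	value, _ := jsonparser.GetBoolean(data, key...)
	return value
}

func getInt(data []byte, key ...string) int64 {
	value, _ := jsonparser.GetInt(data, key...)
	return value
}

func main() {
	// Hypothetical install payload; the real request carries many more fields.
	data := []byte(`{"uuid":"d3aac109-example","skip_egg_scripts":true,"build":{"memory":1024}}`)

	fmt.Println(getBoolean(data, "skip_egg_scripts")) // true
	fmt.Println(getInt(data, "build", "memory"))      // 1024
	fmt.Println(getBoolean(data, "not_present"))      // false (zero value)
}
```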
@@ -76,13 +76,13 @@ func (cfr *ConfigurationFileReplacement) getKeyValue(value []byte) interface{} {
func (f *ConfigurationFile) IterateOverJson(data []byte) (*gabs.Container, error) {
    parsed, err := gabs.ParseJSON(data)
    if err != nil {
        return nil, err
        return nil, errors.WithStack(err)
    }

    for _, v := range f.Replace {
        value, err := f.LookupConfigurationValue(v)
        if err != nil {
            return nil, err
            return nil, errors.WithStack(err)
        }

        // Check for a wildcard character, and if found split the key on that value to
@@ -97,12 +97,20 @@ func (f *ConfigurationFile) IterateOverJson(data []byte) (*gabs.Container, error
            // time this code is being written.
            for _, child := range parsed.Path(strings.Trim(parts[0], ".")).Children() {
                if err := v.SetAtPathway(child, strings.Trim(parts[1], "."), []byte(value)); err != nil {
                    return nil, err
                    if errors.Is(err, gabs.ErrNotFound) {
                        continue
                    }

                    return nil, errors.Wrap(err, "failed to set config value of array child")
                }
            }
        } else {
            if err = v.SetAtPathway(parsed, v.Match, []byte(value)); err != nil {
                return nil, err
                if errors.Is(err, gabs.ErrNotFound) {
                    continue
                }

                return nil, errors.Wrap(err, "unable to set config value at pathway: "+v.Match)
            }
        }
    }
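IterateOverJson drives every replacement through gabs dot-notation paths: parse the file into a container, check a path with ExistsP, and write through SetP. A small hedged sketch of that flow outside of Wings, with made-up config content standing in for a parsed server config file:

```go
package main

import (
	"fmt"

	"github.com/Jeffail/gabs/v2"
)

func main() {
	// Made-up config content; the real data comes from the server's config file on disk.
	parsed, err := gabs.ParseJSON([]byte(`{"server": {"ip": "0.0.0.0", "port": 8080}}`))
	if err != nil {
		panic(err)
	}

	// Replacements are addressed with dot-notated paths, exactly like v.Match above.
	if parsed.ExistsP("server.port") {
		if _, err := parsed.SetP(25565, "server.port"); err != nil {
			panic(err)
		}
	}

	fmt.Println(parsed.String()) // {"server":{"ip":"0.0.0.0","port":25565}}
}
```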
@@ -110,42 +118,113 @@ func (f *ConfigurationFile) IterateOverJson(data []byte) (*gabs.Container, error
|
||||
return parsed, nil
|
||||
}
|
||||
|
||||
// Sets the value at a specific pathway, but checks if we were looking for a specific
|
||||
// value or not before doing it.
|
||||
func (cfr *ConfigurationFileReplacement) SetAtPathway(c *gabs.Container, path string, value []byte) error {
|
||||
if cfr.IfValue != "" {
|
||||
// If this is a regex based matching, we need to get a little more creative since
|
||||
// we're only going to replacing part of the string, and not the whole thing.
|
||||
if c.Exists(path) && strings.HasPrefix(cfr.IfValue, "regex:") {
|
||||
// Regex used to check if there is an array element present in the given pathway by looking for something
// along the lines of "something[1]" or "something[1].nestedvalue" as the path.
var checkForArrayElement = regexp.MustCompile(`^([^\[\]]+)\[([\d]+)](\..+)?$`)

// Attempt to set the value of the path depending on if it is an array or not. Gabs cannot handle array
// values as "something[1]" but can parse them just fine. This is basically just overly complex code
// to handle that edge case and ensure the value gets set correctly.
//
// Bless thee who has to touch these most unholy waters.
func setValueAtPath(c *gabs.Container, path string, value interface{}) error {
	var err error

	matches := checkForArrayElement.FindStringSubmatch(path)
	if len(matches) < 3 {
		// Only update the value if the pathway actually exists in the configuration, otherwise
		// do nothing.
		if c.ExistsP(path) {
			_, err = c.SetP(value, path)
		}

		return err
	}

	i, _ := strconv.Atoi(matches[2])
	// Find the array element "i" or try to create it if "i" is equal to 0 and is not found
	// at the given path.
	ct, err := c.ArrayElementP(i, matches[1])
	if err != nil {
		if i != 0 || (!errors.Is(err, gabs.ErrNotArray) && !errors.Is(err, gabs.ErrNotFound)) {
			return errors.Wrap(err, "error while parsing array element at path")
		}

		var t = make([]interface{}, 1)
		// If the length of matches is 4 it means we're trying to access an object down in this array
		// key, so make sure we generate the array as an array of objects, and not just a generic nil
		// array.
		if len(matches) == 4 {
			t = []interface{}{map[string]interface{}{}}
		}

		// If the error is because this isn't an array or isn't found go ahead and create the array with
		// an empty object if we have additional things to set on the array, or just an empty array type
		// if there is not an object structure detected (no matches[3] available).
		if _, err = c.SetP(t, matches[1]); err != nil {
			return errors.Wrap(err, "failed to create empty array for missing element")
		}

		// Set our cursor to be the array element we expect, which in this case is just the first element
		// since we won't run this code unless the array element is 0. There is too much complexity in trying
		// to match additional elements. In those cases the server will just have to be rebooted or something.
		ct, err = c.ArrayElementP(0, matches[1])
		if err != nil {
			return errors.Wrap(err, "failed to find array element at path")
		}
	}

	// Try to set the value. If the path does not exist an error will be raised to the caller which will
	// then check if the error is because the path is missing. In those cases we just ignore the error since
	// we don't want to do anything specifically when that happens.
	//
	// If there are four matches in the regex it means that we managed to also match a trailing pathway
	// for the key, which should be found in the given array key item and modified further.
	if len(matches) == 4 {
		_, err = ct.SetP(value, strings.TrimPrefix(matches[3], "."))
	} else {
		_, err = ct.Set(value)
	}

	if err != nil {
		return errors.Wrap(err, "failed to set value at config path: "+path)
	}

	return nil
}

// Sets the value at a specific pathway, but checks if we were looking for a specific
|
||||
// value or not before doing it.
|
||||
func (cfr *ConfigurationFileReplacement) SetAtPathway(c *gabs.Container, path string, value []byte) error {
|
||||
if cfr.IfValue == "" {
|
||||
return setValueAtPath(c, path, cfr.getKeyValue(value))
|
||||
}
|
||||
|
||||
// If this is a regex based matching, we need to get a little more creative since
|
||||
// we're only going to replacing part of the string, and not the whole thing.
|
||||
if c.ExistsP(path) && strings.HasPrefix(cfr.IfValue, "regex:") {
|
||||
// We're doing some regex here.
|
||||
r, err := regexp.Compile(strings.TrimPrefix(cfr.IfValue, "regex:"))
|
||||
if err != nil {
|
||||
log.WithFields(log.Fields{"if_value": strings.TrimPrefix(cfr.IfValue, "regex:"), "error": err}).
|
||||
Warn("configuration if_value using invalid regexp, cannot perform replacement")
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// If the path exists and there is a regex match, go ahead and attempt the replacement
|
||||
// using the value we got from the key. This will only replace the one match.
|
||||
v := strings.Trim(string(c.Path(path).Bytes()), "\"")
|
||||
if r.Match([]byte(v)) {
|
||||
return setValueAtPath(c, path, r.ReplaceAllString(v, string(value)))
|
||||
}
|
||||
|
||||
return nil
|
||||
} else if !c.ExistsP(path) || (c.ExistsP(path) && !bytes.Equal(c.Bytes(), []byte(cfr.IfValue))) {
|
||||
return nil
|
||||
}
|
||||
|
||||
return setValueAtPath(c, path, cfr.getKeyValue(value))
|
||||
}
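
For readers less familiar with gabs, here is a minimal standalone sketch (not part of this diff; the JSON document, keys, and values are invented) of the two pathway styles that SetAtPathway and setValueAtPath have to handle: a plain dotted path versus an array element path such as "listeners[0].ip".

```go
package main

import (
	"fmt"

	"github.com/Jeffail/gabs/v2"
)

func main() {
	c, _ := gabs.ParseJSON([]byte(`{"server": {"port": 25565}, "listeners": [{"ip": "0.0.0.0"}]}`))

	// A plain dotted path can be set directly with SetP.
	c.SetP(8080, "server.port")

	// An array path such as "listeners[0].ip" has to be split up first, which is what
	// checkForArrayElement and setValueAtPath do: resolve element 0, then set "ip" on it.
	if ct, err := c.ArrayElementP(0, "listeners"); err == nil {
		ct.SetP("127.0.0.1", "ip")
	}

	fmt.Println(c.String())
	// {"listeners":[{"ip":"127.0.0.1"}],"server":{"port":8080}}
}
```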
|
||||
|
||||
// Looks up a configuration value on the Daemon given a dot-notated syntax.
|
||||
|
||||
@@ -3,7 +3,6 @@ package parser
|
||||
import (
|
||||
"bufio"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"github.com/apex/log"
|
||||
"github.com/beevik/etree"
|
||||
"github.com/buger/jsonparser"
|
||||
@@ -16,6 +15,7 @@ import (
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"regexp"
|
||||
"strconv"
|
||||
"strings"
|
||||
)
|
||||
|
||||
@@ -96,8 +96,7 @@ func (cfr *ConfigurationFileReplacement) UnmarshalJSON(data []byte) error {
|
||||
return err
|
||||
}
|
||||
|
||||
// See comment on the replacement regex to understand what exactly this is doing.
|
||||
cfr.Match = cfrMatchReplacement.ReplaceAllString(m, ".$1")
|
||||
cfr.Match = m
|
||||
|
||||
iv, err := jsonparser.GetString(data, "if_value")
|
||||
// We only check keypath here since match & replace_with should be present on all of
|
||||
@@ -163,7 +162,7 @@ func (f *ConfigurationFile) Parse(path string, internal bool) error {
|
||||
break
|
||||
}
|
||||
|
||||
if os.IsNotExist(err) {
|
||||
if errors.Is(err, os.ErrNotExist) {
|
||||
// File doesn't exist, we tried creating it, and same error is returned? Pretty
|
||||
// sure this pathway is impossible, but if not, abort here.
|
||||
if internal {
|
||||
@@ -349,33 +348,33 @@ func (f *ConfigurationFile) parseJsonFile(path string) error {
|
||||
func (f *ConfigurationFile) parseYamlFile(path string) error {
|
||||
b, err := readFileBytes(path)
|
||||
if err != nil {
|
||||
return err
|
||||
return errors.WithStack(err)
|
||||
}
|
||||
|
||||
i := make(map[string]interface{})
|
||||
if err := yaml.Unmarshal(b, &i); err != nil {
|
||||
return err
|
||||
return errors.WithStack(err)
|
||||
}
|
||||
|
||||
// Unmarshal the yaml data into a JSON interface such that we can work with
|
||||
// any arbitrary data structure. If we don't do this, I can't use gabs which
|
||||
// makes working with unknown JSON signficiantly easier.
|
||||
// makes working with unknown JSON significantly easier.
|
||||
jsonBytes, err := json.Marshal(dyno.ConvertMapI2MapS(i))
|
||||
if err != nil {
|
||||
return err
|
||||
return errors.WithStack(err)
|
||||
}
|
||||
|
||||
// Now that the data is converted, treat it just like JSON and pass it to the
|
||||
// iterator function to update values as necessary.
|
||||
data, err := f.IterateOverJson(jsonBytes)
|
||||
if err != nil {
|
||||
return err
|
||||
return errors.WithStack(err)
|
||||
}
|
||||
|
||||
// Remarshal the JSON into YAML format before saving it back to the disk.
|
||||
marshaled, err := yaml.Marshal(data.Data())
|
||||
if err != nil {
|
||||
return err
|
||||
return errors.WithStack(err)
|
||||
}
|
||||
|
||||
return ioutil.WriteFile(path, marshaled, 0644)
|
||||
@@ -426,15 +425,46 @@ func (f *ConfigurationFile) parseTextFile(path string) error {
|
||||
// Parses a properties file and updates the values within it to match those that
|
||||
// are passed. Writes the file once completed.
|
||||
func (f *ConfigurationFile) parsePropertiesFile(path string) error {
|
||||
p, err := properties.LoadFile(path, properties.UTF8)
|
||||
// Open the file.
|
||||
f2, err := os.Open(path)
|
||||
if err != nil {
|
||||
return err
|
||||
return errors.WithStack(err)
|
||||
}
|
||||
|
||||
var s strings.Builder
|
||||
|
||||
// Get any header comments from the file.
|
||||
scanner := bufio.NewScanner(f2)
|
||||
for scanner.Scan() {
|
||||
text := scanner.Text()
|
||||
|
||||
if text[0] != '#' {
|
||||
break
|
||||
}
|
||||
|
||||
s.WriteString(text)
|
||||
s.WriteString("\n")
|
||||
}
|
||||
|
||||
// Close the file.
|
||||
_ = f2.Close()
|
||||
|
||||
// Handle any scanner errors.
|
||||
if err := scanner.Err(); err != nil {
|
||||
return errors.WithStack(err)
|
||||
}
|
||||
|
||||
// Decode the properties file.
|
||||
p, err := properties.LoadFile(path, properties.UTF8)
|
||||
if err != nil {
|
||||
return errors.WithStack(err)
|
||||
}
|
||||
|
||||
// Replace any values that need to be replaced.
|
||||
for _, replace := range f.Replace {
|
||||
data, err := f.LookupConfigurationValue(replace)
|
||||
if err != nil {
|
||||
return err
|
||||
return errors.WithStack(err)
|
||||
}
|
||||
|
||||
v, ok := p.Get(replace.Match)
|
||||
@@ -446,27 +476,32 @@ func (f *ConfigurationFile) parsePropertiesFile(path string) error {
|
||||
}
|
||||
|
||||
if _, _, err := p.Set(replace.Match, data); err != nil {
|
||||
return err
|
||||
return errors.WithStack(err)
|
||||
}
|
||||
}
|
||||
|
||||
// Add the new file content to the string builder.
|
||||
for _, key := range p.Keys() {
|
||||
value, ok := p.Get(key)
|
||||
if !ok {
|
||||
continue
|
||||
}
|
||||
|
||||
s.WriteString(key)
|
||||
s.WriteByte('=')
|
||||
s.WriteString(strings.Trim(strconv.QuoteToASCII(value), `"`))
|
||||
s.WriteString("\n")
|
||||
}
|
||||
|
||||
// Open the file for writing.
|
||||
w, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0644)
|
||||
if err != nil {
|
||||
return err
|
||||
return errors.WithStack(err)
|
||||
}
|
||||
defer w.Close()
|
||||
|
||||
var s string
|
||||
// This is a copy of the properties.String() func except we don't plop spaces around
|
||||
// the key=value configurations since people like to complain about that.
|
||||
// func (p *Properties) String() string
|
||||
for _, key := range p.Keys() {
|
||||
value, _ := p.Get(key)
|
||||
|
||||
s = fmt.Sprintf("%s%s=%s\n", s, key, value)
|
||||
}
|
||||
|
||||
// Can't use the properties.Write() function since that doesn't apply our nicer formatting.
|
||||
if _, err := w.Write([]byte(s)); err != nil {
|
||||
// Write the data to the file.
|
||||
if _, err := w.Write([]byte(s.String())); err != nil {
|
||||
return err
|
||||
}
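
A quick, self-contained illustration (the sample key and value are invented) of what each generated line looks like and why the quotes around strconv.QuoteToASCII's output are trimmed: non-ASCII characters end up escaped the same way Java's Properties writer emits them, while the rest of the value is left untouched.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func main() {
	key, value := "motd", "Héllo §aWorld"

	// QuoteToASCII returns a double-quoted Go string literal; trimming the quotes leaves
	// just the escaped value, which is then joined to the key without surrounding spaces.
	line := key + "=" + strings.Trim(strconv.QuoteToASCII(value), `"`)
	fmt.Println(line) // motd=H\u00e9llo \u00a7aWorld
}
```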
|
||||
|
||||
|
||||
@@ -16,7 +16,7 @@ func SetAccessControlHeaders(c *gin.Context) {
|
||||
o := c.GetHeader("Origin")
|
||||
if o != config.Get().PanelLocation {
|
||||
for _, origin := range config.Get().AllowedOrigins {
|
||||
if o != origin {
|
||||
if origin != "*" && o != origin {
|
||||
continue
|
||||
}
|
||||
|
||||
|
||||
@@ -2,6 +2,7 @@ package router
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"errors"
|
||||
"github.com/gin-gonic/gin"
|
||||
"github.com/pterodactyl/wings/router/tokens"
|
||||
"github.com/pterodactyl/wings/server/backup"
|
||||
@@ -28,7 +29,7 @@ func getDownloadBackup(c *gin.Context) {
|
||||
|
||||
b, st, err := backup.LocateLocal(token.BackupUuid)
|
||||
if err != nil {
|
||||
if os.IsNotExist(err) {
|
||||
if errors.Is(err, os.ErrNotExist) {
|
||||
c.AbortWithStatusJSON(http.StatusNotFound, gin.H{
|
||||
"error": "The requested backup was not found on this server.",
|
||||
})
|
||||
|
||||
@@ -12,22 +12,30 @@ import (
|
||||
"strconv"
|
||||
)
|
||||
|
||||
type serverProcData struct {
|
||||
server.ResourceUsage
|
||||
Suspended bool `json:"suspended"`
|
||||
}
|
||||
|
||||
// Returns a single server from the collection of servers.
|
||||
func getServer(c *gin.Context) {
|
||||
s := GetServer(c.Param("server"))
|
||||
|
||||
p := *s.Proc()
|
||||
|
||||
c.JSON(http.StatusOK, p)
|
||||
c.JSON(http.StatusOK, serverProcData{
|
||||
ResourceUsage: *s.Proc(),
|
||||
Suspended: s.IsSuspended(),
|
||||
})
|
||||
}
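
As a standalone sketch (field names and tags invented, not the real ResourceUsage struct), this is why embedding works here: encoding/json flattens an embedded struct's fields into the outer object, so the response keeps every existing resource-usage key and simply gains "suspended" alongside them.

```go
package main

import (
	"encoding/json"
	"fmt"
)

type ResourceUsage struct {
	Memory int64   `json:"memory_bytes"`
	CPU    float64 `json:"cpu_absolute"`
}

type serverProcData struct {
	ResourceUsage
	Suspended bool `json:"suspended"`
}

func main() {
	b, _ := json.Marshal(serverProcData{
		ResourceUsage: ResourceUsage{Memory: 1024, CPU: 1.5},
		Suspended:     true,
	})
	fmt.Println(string(b)) // {"memory_bytes":1024,"cpu_absolute":1.5,"suspended":true}
}
```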
|
||||
|
||||
// Returns the logs for a given server instance.
|
||||
func getServerLogs(c *gin.Context) {
|
||||
s := GetServer(c.Param("server"))
|
||||
|
||||
l, _ := strconv.ParseInt(c.DefaultQuery("size", "8192"), 10, 64)
|
||||
l, _ := strconv.Atoi(c.DefaultQuery("size", "100"))
|
||||
if l <= 0 {
|
||||
l = 2048
|
||||
l = 100
|
||||
} else if l > 100 {
|
||||
l = 100
|
||||
}
|
||||
|
||||
out, err := s.ReadLogfile(l)
|
||||
@@ -50,7 +58,7 @@ func getServerLogs(c *gin.Context) {
|
||||
func postServerPower(c *gin.Context) {
|
||||
s := GetServer(c.Param("server"))
|
||||
|
||||
var data struct{
|
||||
var data struct {
|
||||
Action server.PowerAction `json:"action"`
|
||||
}
|
||||
|
||||
@@ -78,7 +86,7 @@ func postServerPower(c *gin.Context) {
|
||||
return
|
||||
}
|
||||
|
||||
// Pass the actual heavy processing off to a seperate thread to handle so that
|
||||
// Pass the actual heavy processing off to a separate thread to handle so that
|
||||
// we can immediately return a response from the server. Some of these actions
|
||||
// can take quite some time, especially stopping or restarting.
|
||||
go func(s *server.Server) {
|
||||
@@ -134,11 +142,13 @@ func patchServer(c *gin.Context) {
|
||||
buf := bytes.Buffer{}
|
||||
buf.ReadFrom(c.Request.Body)
|
||||
|
||||
if err := s.UpdateDataStructure(buf.Bytes(), true); err != nil {
|
||||
if err := s.UpdateDataStructure(buf.Bytes()); err != nil {
|
||||
TrackedServerError(err, s).AbortWithServerError(c)
|
||||
return
|
||||
}
|
||||
|
||||
s.SyncWithEnvironment()
|
||||
|
||||
c.Status(http.StatusNoContent)
|
||||
}
|
||||
|
||||
@@ -168,7 +178,7 @@ func postServerReinstall(c *gin.Context) {
|
||||
c.Status(http.StatusAccepted)
|
||||
}
|
||||
|
||||
// Deletes a server from the wings daemon and deassociates its objects.
|
||||
// Deletes a server from the wings daemon and dissociates its objects.
|
||||
func deleteServer(c *gin.Context) {
|
||||
s := GetServer(c.Param("server"))
|
||||
|
||||
@@ -206,7 +216,7 @@ func deleteServer(c *gin.Context) {
|
||||
go func(p string) {
|
||||
if err := os.RemoveAll(p); err != nil {
|
||||
log.WithFields(log.Fields{
|
||||
"path": p,
|
||||
"path": p,
|
||||
"error": errors.WithStack(err),
|
||||
}).Warn("failed to remove server files during deletion process")
|
||||
}
|
||||
|
||||
@@ -7,6 +7,7 @@ import (
|
||||
"github.com/pterodactyl/wings/server"
|
||||
"github.com/pterodactyl/wings/server/backup"
|
||||
"net/http"
|
||||
"os"
|
||||
)
|
||||
|
||||
// Backs up a server.
|
||||
@@ -46,19 +47,34 @@ func postServerBackup(c *gin.Context) {
|
||||
c.Status(http.StatusAccepted)
|
||||
}
|
||||
|
||||
// Deletes a local backup of a server.
|
||||
// Deletes a local backup of a server. If the backup is not found on the machine just return
|
||||
// a 404 error. The service calling this endpoint can make its own decisions as to how it wants
|
||||
// to handle that response.
|
||||
func deleteServerBackup(c *gin.Context) {
|
||||
s := GetServer(c.Param("server"))
|
||||
|
||||
b, _, err := backup.LocateLocal(c.Param("backup"))
|
||||
if err != nil {
|
||||
// Just return from the function at this point if the backup was not located.
|
||||
if errors.Is(err, os.ErrNotExist) {
|
||||
c.AbortWithStatusJSON(http.StatusNotFound, gin.H{
|
||||
"error": "The requested backup was not found on this server.",
|
||||
})
|
||||
return
|
||||
}
|
||||
|
||||
TrackedServerError(err, s).AbortWithServerError(c)
|
||||
return
|
||||
}
|
||||
|
||||
if err := b.Remove(); err != nil {
|
||||
TrackedServerError(err, s).AbortWithServerError(c)
|
||||
return
|
||||
// I'm not entirely sure how likely this is to happen, however if we did manage to locate
|
||||
// the backup previously and it is now missing when we go to delete, just treat it as having
|
||||
// been successful, rather than returning a 404.
|
||||
if !errors.Is(err, os.ErrNotExist) {
|
||||
TrackedServerError(err, s).AbortWithServerError(c)
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
c.Status(http.StatusNoContent)
|
||||
|
||||
@@ -149,6 +149,13 @@ func putServerRenameFiles(c *gin.Context) {
|
||||
}
|
||||
|
||||
if err := g.Wait(); err != nil {
|
||||
if errors.Is(err, os.ErrExist) {
|
||||
c.AbortWithStatusJSON(http.StatusBadRequest, gin.H{
|
||||
"error": "Cannot move or rename file, destination already exists.",
|
||||
})
|
||||
return
|
||||
}
|
||||
|
||||
TrackedServerError(err, s).AbortWithServerError(c)
|
||||
return
|
||||
}
|
||||
@@ -287,7 +294,7 @@ func postServerCompressFiles(c *gin.Context) {
|
||||
return
|
||||
}
|
||||
|
||||
if !s.Filesystem.HasSpaceAvailable() {
|
||||
if !s.Filesystem.HasSpaceAvailable(true) {
|
||||
c.AbortWithStatusJSON(http.StatusConflict, gin.H{
|
||||
"error": "This server does not have enough available disk space to generate a compressed archive.",
|
||||
})
|
||||
@@ -361,7 +368,7 @@ func postServerUploadFiles(c *gin.Context) {
|
||||
return
|
||||
}
|
||||
|
||||
if !s.Filesystem.HasSpaceAvailable() {
|
||||
if !s.Filesystem.HasSpaceAvailable(true) {
|
||||
c.AbortWithStatusJSON(http.StatusConflict, gin.H{
|
||||
"error": "This server does not have enough available disk space to accept any file uploads.",
|
||||
})
|
||||
@@ -371,15 +378,15 @@ func postServerUploadFiles(c *gin.Context) {
|
||||
form, err := c.MultipartForm()
|
||||
if err != nil {
|
||||
c.AbortWithStatusJSON(http.StatusBadRequest, gin.H{
|
||||
"error": "Failed to get multipart form.",
|
||||
"error": "Failed to get multipart form data from request.",
|
||||
})
|
||||
return
|
||||
}
|
||||
|
||||
headers, ok := form.File["files"]
|
||||
if !ok {
|
||||
c.AbortWithStatusJSON(http.StatusNotModified, gin.H{
|
||||
"error": "No files were attached to the request.",
|
||||
c.AbortWithStatusJSON(http.StatusBadRequest, gin.H{
|
||||
"error": "No files were found on the request body.",
|
||||
})
|
||||
return
|
||||
}
|
||||
|
||||
@@ -71,7 +71,7 @@ func postCreateServer(c *gin.Context) {
|
||||
func postUpdateConfiguration(c *gin.Context) {
|
||||
// A backup of the configuration for error purposes.
|
||||
ccopy := *config.Get()
|
||||
// A copy of the configuration we're using to bind the data recevied into.
|
||||
// A copy of the configuration we're using to bind the data received into.
|
||||
cfg := *config.Get()
|
||||
|
||||
// BindJSON sends 400 if the request fails, all we need to do is return
|
||||
|
||||
@@ -5,16 +5,16 @@ import (
|
||||
"bytes"
|
||||
"crypto/sha256"
|
||||
"encoding/hex"
|
||||
"errors"
|
||||
"github.com/apex/log"
|
||||
"github.com/buger/jsonparser"
|
||||
"github.com/gin-gonic/gin"
|
||||
"github.com/mholt/archiver/v3"
|
||||
"github.com/pkg/errors"
|
||||
"github.com/pterodactyl/wings/api"
|
||||
"github.com/pterodactyl/wings/config"
|
||||
"github.com/pterodactyl/wings/installer"
|
||||
"github.com/pterodactyl/wings/router/tokens"
|
||||
"github.com/pterodactyl/wings/server"
|
||||
"go.uber.org/zap"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"net/http"
|
||||
@@ -22,7 +22,6 @@ import (
|
||||
"path/filepath"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
)
|
||||
|
||||
func getServerArchive(c *gin.Context) {
|
||||
@@ -94,45 +93,34 @@ func getServerArchive(c *gin.Context) {
|
||||
func postServerArchive(c *gin.Context) {
|
||||
s := GetServer(c.Param("server"))
|
||||
|
||||
go func(server *server.Server) {
|
||||
start := time.Now()
|
||||
|
||||
if err := server.Archiver.Archive(); err != nil {
|
||||
zap.S().Errorw("failed to get archive for server", zap.String("server", server.Id()), zap.Error(err))
|
||||
go func(s *server.Server) {
|
||||
if err := s.Archiver.Archive(); err != nil {
|
||||
s.Log().WithField("error", err).Error("failed to get archive for server")
|
||||
return
|
||||
}
|
||||
|
||||
zap.S().Debugw(
|
||||
"successfully created archive for server",
|
||||
zap.String("server", server.Id()),
|
||||
zap.Duration("time", time.Now().Sub(start).Round(time.Microsecond)),
|
||||
)
|
||||
s.Log().Debug("successfully created server archive, notifying panel")
|
||||
|
||||
r := api.NewRequester()
|
||||
rerr, err := r.SendArchiveStatus(server.Id(), true)
|
||||
rerr, err := r.SendArchiveStatus(s.Id(), true)
|
||||
if rerr != nil || err != nil {
|
||||
if err != nil {
|
||||
zap.S().Errorw("failed to notify panel with archive status", zap.String("server", server.Id()), zap.Error(err))
|
||||
s.Log().WithField("error", err).Error("failed to notify panel of archive status")
|
||||
return
|
||||
}
|
||||
|
||||
zap.S().Errorw(
|
||||
"panel returned an error when sending the archive status",
|
||||
zap.String("server", server.Id()),
|
||||
zap.Error(errors.New(rerr.String())),
|
||||
)
|
||||
s.Log().WithField("error", rerr.String()).Error("panel returned an error when sending the archive status")
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
zap.S().Debugw("successfully notified panel about archive status", zap.String("server", server.Id()))
|
||||
s.Log().Debug("successfully notified panel of archive status")
|
||||
}(s)
|
||||
|
||||
c.Status(http.StatusAccepted)
|
||||
}
|
||||
|
||||
func postTransfer(c *gin.Context) {
|
||||
zap.S().Debug("incoming transfer from panel")
|
||||
|
||||
buf := bytes.Buffer{}
|
||||
buf.ReadFrom(c.Request.Body)
|
||||
|
||||
@@ -141,6 +129,7 @@ func postTransfer(c *gin.Context) {
|
||||
url, _ := jsonparser.GetString(data, "url")
|
||||
token, _ := jsonparser.GetString(data, "token")
|
||||
|
||||
l := log.WithField("server", serverID)
|
||||
// Create an http client with no timeout.
|
||||
client := &http.Client{Timeout: 0}
|
||||
|
||||
@@ -150,25 +139,25 @@ func postTransfer(c *gin.Context) {
|
||||
return
|
||||
}
|
||||
|
||||
zap.S().Errorw("server transfer has failed", zap.String("server", serverID))
|
||||
l.Info("server transfer failed, notifying panel")
|
||||
rerr, err := api.NewRequester().SendTransferFailure(serverID)
|
||||
if rerr != nil || err != nil {
|
||||
if err != nil {
|
||||
zap.S().Errorw("failed to notify panel with transfer failure", zap.String("server", serverID), zap.Error(err))
|
||||
l.WithField("error", err).Error("failed to notify panel with transfer failure")
|
||||
return
|
||||
}
|
||||
|
||||
zap.S().Errorw("panel returned an error when notifying of a transfer failure", zap.String("server", serverID), zap.Error(errors.New(rerr.String())))
|
||||
l.WithField("error", errors.WithStack(rerr)).Error("received error response from panel while notifying of transfer failure")
|
||||
return
|
||||
}
|
||||
|
||||
zap.S().Debugw("successfully notified panel about transfer failure", zap.String("server", serverID))
|
||||
l.Debug("notified panel of transfer failure")
|
||||
}()
|
||||
|
||||
// Make a new GET request to the URL the panel gave us.
|
||||
req, err := http.NewRequest("GET", url, nil)
|
||||
if err != nil {
|
||||
zap.S().Errorw("failed to create http request", zap.Error(err))
|
||||
log.WithField("error", errors.WithStack(err)).Error("failed to create http request for archive transfer")
|
||||
return
|
||||
}
|
||||
|
||||
@@ -178,36 +167,39 @@ func postTransfer(c *gin.Context) {
|
||||
// Execute the http request.
|
||||
res, err := client.Do(req)
|
||||
if err != nil {
|
||||
zap.S().Errorw("failed to send http request", zap.Error(err))
|
||||
l.WithField("error", errors.WithStack(err)).Error("failed to send archive http request")
|
||||
return
|
||||
}
|
||||
defer res.Body.Close()
|
||||
|
||||
// Handle non-200 status codes.
|
||||
if res.StatusCode != 200 {
|
||||
body, err := ioutil.ReadAll(res.Body)
|
||||
_, err := ioutil.ReadAll(res.Body)
|
||||
if err != nil {
|
||||
zap.S().Errorw("failed to read response body", zap.Int("status", res.StatusCode), zap.Error(err))
|
||||
l.WithField("error", errors.WithStack(err)).WithField("status", res.StatusCode).Error("failed to read transfer response body")
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
zap.S().Errorw("failed to request server archive", zap.Int("status", res.StatusCode), zap.String("body", string(body)))
|
||||
l.WithField("error", errors.WithStack(err)).WithField("status", res.StatusCode).Error("failed to request server archive")
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
// Get the path to the archive.
|
||||
archivePath := filepath.Join(config.Get().System.ArchiveDirectory, serverID + ".tar.gz")
|
||||
archivePath := filepath.Join(config.Get().System.ArchiveDirectory, serverID+".tar.gz")
|
||||
|
||||
// Check if the archive already exists and delete it if it does.
|
||||
_, err = os.Stat(archivePath)
|
||||
if err != nil {
|
||||
if !os.IsNotExist(err) {
|
||||
zap.S().Errorw("failed to stat file", zap.Error(err))
|
||||
l.WithField("error", errors.WithStack(err)).Error("failed to stat archive file")
|
||||
return
|
||||
}
|
||||
} else {
|
||||
if err := os.Remove(archivePath); err != nil {
|
||||
zap.S().Errorw("failed to delete old file", zap.Error(err))
|
||||
l.WithField("error", errors.WithStack(err)).Warn("failed to remove old archive file")
|
||||
|
||||
return
|
||||
}
|
||||
}
|
||||
@@ -215,65 +207,69 @@ func postTransfer(c *gin.Context) {
|
||||
// Create the file.
|
||||
file, err := os.Create(archivePath)
|
||||
if err != nil {
|
||||
zap.S().Errorw("failed to open file on disk", zap.Error(err))
|
||||
l.WithField("error", errors.WithStack(err)).Error("failed to open archive on disk")
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
// Copy the file.
|
||||
buf := make([]byte, 1024 * 4)
|
||||
buf := make([]byte, 1024*4)
|
||||
_, err = io.CopyBuffer(file, res.Body, buf)
|
||||
if err != nil {
|
||||
zap.S().Errorw("failed to copy file to disk", zap.Error(err))
|
||||
l.WithField("error", errors.WithStack(err)).Error("failed to copy archive file to disk")
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
// Close the file so it can be opened to verify the checksum.
|
||||
if err := file.Close(); err != nil {
|
||||
zap.S().Errorw("failed to close archive file", zap.Error(err))
|
||||
l.WithField("error", errors.WithStack(err)).Error("failed to close archive file")
|
||||
|
||||
return
|
||||
}
|
||||
zap.S().Debug("server archive has been downloaded, computing checksum..", zap.String("server", serverID))
|
||||
|
||||
l.WithField("server", serverID).Debug("server archive downloaded, computing checksum...")
|
||||
|
||||
// Open the archive file for computing a checksum.
|
||||
file, err = os.Open(archivePath)
|
||||
if err != nil {
|
||||
zap.S().Errorw("failed to open file on disk", zap.Error(err))
|
||||
l.WithField("error", errors.WithStack(err)).Error("failed to open archive on disk")
|
||||
return
|
||||
}
|
||||
|
||||
// Compute the sha256 checksum of the file.
|
||||
hash := sha256.New()
|
||||
buf = make([]byte, 1024 * 4)
|
||||
buf = make([]byte, 1024*4)
|
||||
if _, err := io.CopyBuffer(hash, file, buf); err != nil {
|
||||
zap.S().Errorw("failed to copy file for checksum verification", zap.Error(err))
|
||||
l.WithField("error", errors.WithStack(err)).Error("failed to copy archive file for checksum verification")
|
||||
return
|
||||
}
|
||||
|
||||
// Verify the two checksums.
|
||||
if hex.EncodeToString(hash.Sum(nil)) != res.Header.Get("X-Checksum") {
|
||||
zap.S().Errorw("checksum failed verification")
|
||||
l.Error("checksum verification failed for archive")
|
||||
return
|
||||
}
|
||||
|
||||
// Close the file.
|
||||
if err := file.Close(); err != nil {
|
||||
zap.S().Errorw("failed to close archive file", zap.Error(err))
|
||||
l.WithField("error", errors.WithStack(err)).Error("failed to close archive file after calculating checksum")
|
||||
return
|
||||
}
|
||||
|
||||
zap.S().Infow("server archive transfer was successful", zap.String("server", serverID))
|
||||
l.Info("server archive transfer was successful")
|
||||
|
||||
// Get the server data from the request.
|
||||
serverData, t, _, _ := jsonparser.Get(data, "server")
|
||||
if t != jsonparser.Object {
|
||||
zap.S().Errorw("invalid server data passed in request")
|
||||
l.Error("invalid server data passed in request")
|
||||
return
|
||||
}
|
||||
|
||||
// Create a new server installer (note this does not execute the install script)
|
||||
i, err := installer.New(serverData)
|
||||
if err != nil {
|
||||
zap.S().Warnw("failed to validate the received server data", zap.Error(err))
|
||||
l.WithField("error", errors.WithStack(err)).Error("failed to validate received server data")
|
||||
return
|
||||
}
|
||||
|
||||
@@ -285,7 +281,7 @@ func postTransfer(c *gin.Context) {
|
||||
|
||||
// Un-archive the archive. That sounds weird..
|
||||
if err := archiver.NewTarGz().Unarchive(archivePath, i.Server().Filesystem.Path()); err != nil {
|
||||
zap.S().Errorw("failed to extract archive", zap.String("server", serverID), zap.Error(err))
|
||||
l.WithField("error", errors.WithStack(err)).Error("failed to extract server archive")
|
||||
return
|
||||
}
|
||||
|
||||
@@ -300,15 +296,16 @@ func postTransfer(c *gin.Context) {
|
||||
rerr, err := api.NewRequester().SendTransferSuccess(serverID)
|
||||
if rerr != nil || err != nil {
|
||||
if err != nil {
|
||||
zap.S().Errorw("failed to notify panel with transfer success", zap.String("server", serverID), zap.Error(err))
|
||||
l.WithField("error", errors.WithStack(err)).Error("failed to notify panel of transfer success")
|
||||
return
|
||||
}
|
||||
|
||||
zap.S().Errorw("panel returned an error when notifying of a transfer success", zap.String("server", serverID), zap.Error(errors.New(rerr.String())))
|
||||
l.WithField("error", errors.WithStack(rerr)).Error("panel responded with error after transfer success")
|
||||
|
||||
return
|
||||
}
|
||||
|
||||
zap.S().Debugw("successfully notified panel about transfer success", zap.String("server", serverID))
|
||||
l.Info("successfully notified panel of transfer success")
|
||||
}(buf.Bytes())
|
||||
|
||||
c.Status(http.StatusAccepted)
|
||||
|
||||
@@ -28,7 +28,7 @@ func (h *Handler) ListenForExpiration(ctx context.Context) {
|
||||
if jwt != nil {
|
||||
if jwt.ExpirationTime.Unix()-time.Now().Unix() <= 0 {
|
||||
_ = h.SendJson(&Message{Event: TokenExpiredEvent})
|
||||
} else if jwt.ExpirationTime.Unix()-time.Now().Unix() <= 180 {
|
||||
} else if jwt.ExpirationTime.Unix()-time.Now().Unix() <= 60 {
|
||||
_ = h.SendJson(&Message{Event: TokenExpiringEvent})
|
||||
}
|
||||
}
|
||||
@@ -36,38 +36,37 @@ func (h *Handler) ListenForExpiration(ctx context.Context) {
|
||||
}
|
||||
}
|
||||
|
||||
var e = []string{
|
||||
server.StatsEvent,
|
||||
server.StatusEvent,
|
||||
server.ConsoleOutputEvent,
|
||||
server.InstallOutputEvent,
|
||||
server.InstallStartedEvent,
|
||||
server.InstallCompletedEvent,
|
||||
server.DaemonMessageEvent,
|
||||
server.BackupCompletedEvent,
|
||||
}
|
||||
|
||||
// Listens for different events happening on a server and sends them along
|
||||
// to the connected websocket.
|
||||
func (h *Handler) ListenForServerEvents(ctx context.Context) {
|
||||
e := []string{
|
||||
server.StatsEvent,
|
||||
server.StatusEvent,
|
||||
server.ConsoleOutputEvent,
|
||||
server.InstallOutputEvent,
|
||||
server.InstallStartedEvent,
|
||||
server.InstallCompletedEvent,
|
||||
server.DaemonMessageEvent,
|
||||
server.BackupCompletedEvent,
|
||||
}
|
||||
h.server.Log().Debug("listening for server events over websocket")
|
||||
|
||||
eventChannel := make(chan events.Event)
|
||||
for _, event := range e {
|
||||
h.server.Events().Subscribe(event, eventChannel)
|
||||
}
|
||||
h.server.Events().Subscribe(e, eventChannel)
|
||||
|
||||
for d := range eventChannel {
|
||||
go func(ctx context.Context) {
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
for _, event := range e {
|
||||
h.server.Events().Unsubscribe(event, eventChannel)
|
||||
}
|
||||
h.server.Events().Unsubscribe(e, eventChannel)
|
||||
|
||||
close(eventChannel)
|
||||
default:
|
||||
_ = h.SendJson(&Message{
|
||||
Event: d.Topic,
|
||||
Args: []string{d.Data},
|
||||
})
|
||||
}
|
||||
}(ctx)
|
||||
|
||||
for d := range eventChannel {
|
||||
if err := h.SendJson(&Message{Event: d.Topic, Args: []string{d.Data}}); err != nil {
|
||||
h.server.Log().WithField("error", err).Warn("error while sending server data over websocket")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -16,7 +16,7 @@ type Message struct {
|
||||
//
|
||||
// - status : Returns the server's power state.
|
||||
// - logs : Returns the server log data at the time of the request.
|
||||
// - power : Performs a power action aganist the server based the data.
|
||||
// - power : Performs a power action against the server based on the data.
|
||||
// - command : Performs a command on a server using the data field.
|
||||
Event string `json:"event"`
|
||||
|
||||
|
||||
@@ -64,6 +64,10 @@ func GetHandler(s *server.Server, w http.ResponseWriter, r *http.Request) (*Hand
|
||||
}
|
||||
|
||||
for _, origin := range config.Get().AllowedOrigins {
|
||||
if origin == "*" {
|
||||
return true
|
||||
}
|
||||
|
||||
if o != origin {
|
||||
continue
|
||||
}
|
||||
@@ -91,6 +95,8 @@ func (h *Handler) SendJson(v *Message) error {
|
||||
// Do not send JSON down the line if the JWT on the connection is not
|
||||
// valid!
|
||||
if err := h.TokenValid(); err != nil {
|
||||
h.server.Log().WithField("error", err).Warn("invalid JWT detected for server websocket!")
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -153,9 +159,10 @@ func (h *Handler) TokenValid() error {
|
||||
// error message, otherwise we just send back a standard error message.
|
||||
func (h *Handler) SendErrorJson(msg Message, err error, shouldLog ...bool) error {
|
||||
j := h.GetJwt()
|
||||
expected := errors.Is(err, server.ErrSuspended) || errors.Is(err, server.ErrIsRunning)
|
||||
|
||||
message := "an unexpected error was encountered while handling this request"
|
||||
if server.IsSuspendedError(err) || (j != nil && j.HasPermission(PermissionReceiveErrors)) {
|
||||
if expected || (j != nil && j.HasPermission(PermissionReceiveErrors)) {
|
||||
message = err.Error()
|
||||
}
|
||||
|
||||
@@ -165,7 +172,7 @@ func (h *Handler) SendErrorJson(msg Message, err error, shouldLog ...bool) error
|
||||
wsm.Args = []string{m}
|
||||
|
||||
if len(shouldLog) == 0 || (len(shouldLog) == 1 && shouldLog[0] == true) {
|
||||
if !server.IsSuspendedError(err) {
|
||||
if !expected {
|
||||
h.server.Log().WithFields(log.Fields{"event": msg.Event, "error_identifier": u.String(), "error": err}).
|
||||
Error("failed to handle websocket process; an error was encountered processing an event")
|
||||
}
|
||||
@@ -261,7 +268,7 @@ func (h *Handler) HandleInbound(m Message) error {
|
||||
// Only send the current disk usage if the server is offline, if docker container is running,
|
||||
// Environment#EnableResourcePolling() will send this data to all clients.
|
||||
if state == environment.ProcessOfflineState {
|
||||
_ = h.server.Filesystem.HasSpaceAvailable()
|
||||
_ = h.server.Filesystem.HasSpaceAvailable(false)
|
||||
|
||||
b, _ := json.Marshal(h.server.Proc())
|
||||
h.SendJson(&Message{
|
||||
@@ -309,7 +316,7 @@ func (h *Handler) HandleInbound(m Message) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
logs, err := h.server.Environment.Readlog(1024 * 16)
|
||||
logs, err := h.server.Environment.Readlog(100)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
@@ -20,13 +20,11 @@ func (s *Server) notifyPanelOfBackup(uuid string, ad *backup.ArchiveDetails, suc
|
||||
s.Log().WithFields(log.Fields{
|
||||
"backup": uuid,
|
||||
"error": err,
|
||||
}).Error("failed to notify panel of backup status due to internal code error")
|
||||
}).Error("failed to notify panel of backup status due to wings error")
|
||||
|
||||
return err
|
||||
}
|
||||
|
||||
s.Log().WithField("backup", uuid).Warn(rerr.String())
|
||||
|
||||
return errors.New(rerr.String())
|
||||
}
|
||||
|
||||
@@ -90,7 +88,7 @@ func (s *Server) Backup(b backup.BackupInterface) error {
|
||||
if notifyError := s.notifyPanelOfBackup(b.Identifier(), &backup.ArchiveDetails{}, false); notifyError != nil {
|
||||
s.Log().WithFields(log.Fields{
|
||||
"backup": b.Identifier(),
|
||||
"error": err,
|
||||
"error": notifyError,
|
||||
}).Warn("failed to notify panel of failed backup state")
|
||||
}
|
||||
|
||||
@@ -102,7 +100,7 @@ func (s *Server) Backup(b backup.BackupInterface) error {
|
||||
"file_size": 0,
|
||||
})
|
||||
|
||||
return errors.WithStack(err)
|
||||
return errors.Wrap(err, "error while generating server backup")
|
||||
}
|
||||
|
||||
// Try to notify the panel about the status of this backup. If for some reason this request
|
||||
|
||||
@@ -5,6 +5,7 @@ import (
|
||||
"context"
|
||||
"github.com/apex/log"
|
||||
gzip "github.com/klauspost/pgzip"
|
||||
"github.com/pkg/errors"
|
||||
"github.com/remeh/sizedwaitgroup"
|
||||
"golang.org/x/sync/errgroup"
|
||||
"io"
|
||||
@@ -25,7 +26,7 @@ type Archive struct {
|
||||
func (a *Archive) Create(dst string, ctx context.Context) (os.FileInfo, error) {
|
||||
f, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
return nil, errors.WithStack(err)
|
||||
}
|
||||
defer f.Close()
|
||||
|
||||
@@ -35,7 +36,7 @@ func (a *Archive) Create(dst string, ctx context.Context) (os.FileInfo, error) {
|
||||
}
|
||||
|
||||
gzw, _ := gzip.NewWriterLevel(f, gzip.BestSpeed)
|
||||
_ = gzw.SetConcurrency(1 << 20, maxCpu)
|
||||
_ = gzw.SetConcurrency(1<<20, maxCpu)
|
||||
|
||||
defer gzw.Flush()
|
||||
defer gzw.Close()
|
||||
@@ -49,23 +50,17 @@ func (a *Archive) Create(dst string, ctx context.Context) (os.FileInfo, error) {
|
||||
// Iterate over all of the files to be included and put them into the archive. This is
|
||||
// done as a concurrent goroutine to speed things along. If an error is encountered at
|
||||
// any step, the entire process is aborted.
|
||||
for p, s := range a.Files.All() {
|
||||
if (*s).IsDir() {
|
||||
continue
|
||||
}
|
||||
|
||||
pa := p
|
||||
st := s
|
||||
|
||||
for _, p := range a.Files.All() {
|
||||
p := p
|
||||
g.Go(func() error {
|
||||
wg.Add()
|
||||
defer wg.Done()
|
||||
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return ctx.Err()
|
||||
return errors.WithStack(ctx.Err())
|
||||
default:
|
||||
return a.addToArchive(pa, st, tw)
|
||||
return a.addToArchive(p, tw)
|
||||
}
|
||||
})
|
||||
}
|
||||
@@ -80,33 +75,48 @@ func (a *Archive) Create(dst string, ctx context.Context) (os.FileInfo, error) {
|
||||
log.WithField("location", dst).Warn("failed to delete corrupted backup archive")
|
||||
}
|
||||
|
||||
return nil, err
|
||||
return nil, errors.WithStack(err)
|
||||
}
|
||||
|
||||
st, err := f.Stat()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
return nil, errors.WithStack(err)
|
||||
}
|
||||
|
||||
return st, nil
|
||||
}
|
||||
|
||||
// Adds a single file to the existing tar archive writer.
|
||||
func (a *Archive) addToArchive(p string, s *os.FileInfo, w *tar.Writer) error {
|
||||
func (a *Archive) addToArchive(p string, w *tar.Writer) error {
|
||||
f, err := os.Open(p)
|
||||
if err != nil {
|
||||
return err
|
||||
// If you try to backup something that no longer exists (got deleted somewhere during the process
|
||||
// but not by this process), just skip over it and don't kill the entire backup.
|
||||
if os.IsNotExist(err) {
|
||||
return nil
|
||||
}
|
||||
|
||||
return errors.WithStack(err)
|
||||
}
|
||||
defer f.Close()
|
||||
|
||||
st := *s
|
||||
s, err := f.Stat()
|
||||
if err != nil {
|
||||
// Same as above, don't kill the process just because the file no longer exists.
|
||||
if os.IsNotExist(err) {
|
||||
return nil
|
||||
}
|
||||
|
||||
return errors.WithStack(err)
|
||||
}
|
||||
|
||||
header := &tar.Header{
|
||||
// Trim the long server path from the name of the file so that the resulting
|
||||
// archive is exactly how the user would see it in the panel file manager.
|
||||
Name: strings.TrimPrefix(p, a.TrimPrefix),
|
||||
Size: st.Size(),
|
||||
Mode: int64(st.Mode()),
|
||||
ModTime: st.ModTime(),
|
||||
Size: s.Size(),
|
||||
Mode: int64(s.Mode()),
|
||||
ModTime: s.ModTime(),
|
||||
}
|
||||
|
||||
// These actions must occur sequentially, even if this function is called multiple
|
||||
@@ -115,12 +125,12 @@ func (a *Archive) addToArchive(p string, s *os.FileInfo, w *tar.Writer) error {
|
||||
defer a.Unlock()
|
||||
|
||||
if err := w.WriteHeader(header); err != nil {
|
||||
return err
|
||||
return errors.WithStack(err)
|
||||
}
|
||||
|
||||
buf := make([]byte, 4*1024)
|
||||
if _, err := io.CopyBuffer(w, f, buf); err != nil {
|
||||
return err
|
||||
return errors.WithStack(err)
|
||||
}
|
||||
|
||||
return nil
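
The lock around WriteHeader and CopyBuffer matters because tar.Writer is not safe for concurrent use and a header must be immediately followed by its body. A standalone sketch of that pattern (not Wings code; file names and contents are invented) with errgroup driving the workers:

```go
package main

import (
	"archive/tar"
	"bytes"
	"fmt"
	"sync"

	"golang.org/x/sync/errgroup"
)

func main() {
	var mu sync.Mutex
	buf := &bytes.Buffer{}
	tw := tar.NewWriter(buf)

	var g errgroup.Group
	for i := 0; i < 4; i++ {
		i := i
		g.Go(func() error {
			name := fmt.Sprintf("file-%d.txt", i)
			body := []byte("hello")

			// Goroutines may prepare entries concurrently, but each header+body pair
			// is written under one mutex so entries never interleave in the archive.
			mu.Lock()
			defer mu.Unlock()
			if err := tw.WriteHeader(&tar.Header{Name: name, Mode: 0644, Size: int64(len(body))}); err != nil {
				return err
			}
			_, err := tw.Write(body)
			return err
		})
	}
	if err := g.Wait(); err != nil {
		panic(err)
	}
	tw.Close()
	fmt.Println("archive bytes:", buf.Len())
}
```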
|
||||
|
||||
@@ -24,11 +24,11 @@ func LocateLocal(uuid string) (*LocalBackup, os.FileInfo, error) {
|
||||
|
||||
st, err := os.Stat(b.Path())
|
||||
if err != nil {
|
||||
return nil, nil, err
|
||||
return nil, nil, errors.WithStack(err)
|
||||
}
|
||||
|
||||
if st.IsDir() {
|
||||
return nil, nil, errors.New("invalid archive found; is directory")
|
||||
return nil, nil, errors.New("invalid archive, is directory")
|
||||
}
|
||||
|
||||
return b, st, nil
|
||||
@@ -48,7 +48,7 @@ func (b *LocalBackup) Generate(included *IncludedFiles, prefix string) (*Archive
|
||||
}
|
||||
|
||||
if _, err := a.Create(b.Path(), context.Background()); err != nil {
|
||||
return nil, err
|
||||
return nil, errors.WithStack(err)
|
||||
}
|
||||
|
||||
return b.Details(), nil
|
||||
|
||||
@@ -4,6 +4,7 @@ import (
|
||||
"context"
|
||||
"fmt"
|
||||
"github.com/apex/log"
|
||||
"github.com/pkg/errors"
|
||||
"io"
|
||||
"net/http"
|
||||
"os"
|
||||
@@ -33,17 +34,17 @@ func (s *S3Backup) Generate(included *IncludedFiles, prefix string) (*ArchiveDet
|
||||
}
|
||||
|
||||
if _, err := a.Create(s.Path(), context.Background()); err != nil {
|
||||
return nil, err
|
||||
return nil, errors.WithStack(err)
|
||||
}
|
||||
|
||||
rc, err := os.Open(s.Path())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
return nil, errors.WithStack(err)
|
||||
}
|
||||
defer rc.Close()
|
||||
|
||||
if resp, err := s.generateRemoteRequest(rc); err != nil {
|
||||
return nil, err
|
||||
return nil, errors.WithStack(err)
|
||||
} else {
|
||||
resp.Body.Close()
|
||||
|
||||
@@ -79,7 +80,7 @@ func (s *S3Backup) generateRemoteRequest(rc io.ReadCloser) (*http.Response, erro
|
||||
|
||||
log.WithFields(log.Fields{
|
||||
"endpoint": s.PresignedUrl,
|
||||
"headers": r.Header,
|
||||
"headers": r.Header,
|
||||
}).Debug("uploading backup to remote S3 endpoint")
|
||||
|
||||
return http.DefaultClient.Do(r)
|
||||
|
||||
@@ -1,29 +1,23 @@
|
||||
package backup
|
||||
|
||||
import (
|
||||
"os"
|
||||
"sync"
|
||||
)
|
||||
|
||||
type IncludedFiles struct {
|
||||
sync.RWMutex
|
||||
files map[string]*os.FileInfo
|
||||
files []string
|
||||
}
|
||||
|
||||
// Pushes an additional file or folder onto the struct.
|
||||
func (i *IncludedFiles) Push(info *os.FileInfo, p string) {
|
||||
func (i *IncludedFiles) Push(p string) {
|
||||
i.Lock()
|
||||
defer i.Unlock()
|
||||
|
||||
if i.files == nil {
|
||||
i.files = make(map[string]*os.FileInfo)
|
||||
}
|
||||
|
||||
i.files[p] = info
|
||||
i.files = append(i.files, p)
|
||||
i.Unlock()
|
||||
}
|
||||
|
||||
// Returns all of the files that were marked as being included.
|
||||
func (i *IncludedFiles) All() map[string]*os.FileInfo {
|
||||
func (i *IncludedFiles) All() []string {
|
||||
i.RLock()
|
||||
defer i.RUnlock()
|
||||
|
||||
|
||||
@@ -20,6 +20,10 @@ type Configuration struct {
|
||||
// The command that should be used when booting up the server instance.
|
||||
Invocation string `json:"invocation"`
|
||||
|
||||
// By default this is false, however if selected within the Panel while installing or re-installing a
|
||||
// server, specific installation scripts will be skipped for the server process.
|
||||
SkipEggScripts bool `default:"false" json:"skip_egg_scripts"`
|
||||
|
||||
// An array of environment variables that should be passed along to the running
|
||||
// server process.
|
||||
EnvVars environment.Variables `json:"environment"`
|
||||
@@ -43,6 +47,20 @@ func (s *Server) Config() *Configuration {
|
||||
return &s.cfg
|
||||
}
|
||||
|
||||
func (s *Server) DiskSpace() int64 {
|
||||
s.cfg.mu.RLock()
|
||||
defer s.cfg.mu.RUnlock()
|
||||
|
||||
return s.cfg.Build.DiskSpace
|
||||
}
|
||||
|
||||
func (s *Server) MemoryLimit() int64 {
|
||||
s.cfg.mu.RLock()
|
||||
defer s.cfg.mu.RUnlock()
|
||||
|
||||
return s.cfg.Build.MemoryLimit
|
||||
}
|
||||
|
||||
func (c *Configuration) GetUuid() string {
|
||||
c.mu.RLock()
|
||||
defer c.mu.RUnlock()
|
||||
|
||||
@@ -96,7 +96,6 @@ func (s *Server) Throttler() *ConsoleThrottler {
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
// Sends output to the server console formatted to appear correctly as being sent
|
||||
// from Wings.
|
||||
func (s *Server) PublishConsoleOutputFromDaemon(data string) {
|
||||
|
||||
@@ -35,7 +35,7 @@ func (cd *CrashHandler) SetLastCrash(t time.Time) {
|
||||
// if it was the result of an event that we should try to recover from.
|
||||
//
|
||||
// This function assumes it is called under circumstances where a crash is suspected
|
||||
// of occuring. It will not do anything to determine if it was actually a crash, just
|
||||
// of occurring. It will not do anything to determine if it was actually a crash, just
|
||||
// look at the exit state and check if it meets the criteria of being called a crash
|
||||
// by Wings.
|
||||
//
|
||||
@@ -75,7 +75,7 @@ func (s *Server) handleServerCrash() error {
|
||||
c := s.crasher.LastCrashTime()
|
||||
// If the last crash time was within the last 60 seconds we do not want to perform
|
||||
// an automatic reboot of the process. Return an error that can be handled.
|
||||
if !c.IsZero() && c.Add(time.Second * 60).After(time.Now()) {
|
||||
if !c.IsZero() && c.Add(time.Second*60).After(time.Now()) {
|
||||
s.PublishConsoleOutputFromDaemon("Aborting automatic reboot: last crash occurred less than 60 seconds ago.")
|
||||
|
||||
return &crashTooFrequent{}
|
||||
|
||||
@@ -1,17 +1,9 @@
|
||||
package server
|
||||
|
||||
type suspendedError struct {
|
||||
}
|
||||
import "github.com/pkg/errors"
|
||||
|
||||
func (e *suspendedError) Error() string {
|
||||
return "server is currently in a suspended state"
|
||||
}
|
||||
|
||||
func IsSuspendedError(err error) bool {
|
||||
_, ok := err.(*suspendedError)
|
||||
|
||||
return ok
|
||||
}
|
||||
var ErrIsRunning = errors.New("server is running")
|
||||
var ErrSuspended = errors.New("server is currently in a suspended state")
|
||||
|
||||
type crashTooFrequent struct {
|
||||
}
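
A small standalone sketch (error text copied from above, everything else invented) of why the switch to sentinel errors works well with errors.Is: wrapping with github.com/pkg/errors keeps the sentinel discoverable, so callers such as the websocket handler can treat "expected" failures differently without needing a custom error type and a type assertion helper.

```go
package main

import (
	"fmt"

	"github.com/pkg/errors"
)

var ErrSuspended = errors.New("server is currently in a suspended state")

func start() error {
	return errors.Wrap(ErrSuspended, "failed to start server")
}

func main() {
	err := start()
	fmt.Println(errors.Is(err, ErrSuspended)) // true, even through the wrap
}
```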
|
||||
|
||||
@@ -6,6 +6,7 @@ import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"github.com/gabriel-vasile/mimetype"
|
||||
"github.com/karrick/godirwalk"
|
||||
"github.com/pkg/errors"
|
||||
"github.com/pterodactyl/wings/config"
|
||||
"github.com/pterodactyl/wings/server/backup"
|
||||
@@ -21,6 +22,7 @@ import (
|
||||
"strings"
|
||||
"sync"
|
||||
"sync/atomic"
|
||||
"syscall"
|
||||
"time"
|
||||
)
|
||||
|
||||
@@ -39,10 +41,12 @@ func IsPathResolutionError(err error) bool {
|
||||
}
|
||||
|
||||
type Filesystem struct {
|
||||
mu sync.RWMutex
|
||||
mu sync.Mutex
|
||||
lookupTimeMu sync.RWMutex
|
||||
|
||||
lastLookupTime time.Time
|
||||
diskUsage int64
|
||||
lastLookupTime time.Time
|
||||
lookupInProgress int32
|
||||
disk int64
|
||||
|
||||
Server *Server
|
||||
}
|
||||
@@ -180,14 +184,14 @@ func (fs *Filesystem) ParallelSafePath(paths []string) ([]string, error) {
|
||||
pi := p
|
||||
|
||||
// Recursively call this function to continue digging through the directory tree within
|
||||
// a seperate goroutine. If the context is canceled abort this process.
|
||||
// a separate goroutine. If the context is canceled abort this process.
|
||||
g.Go(func() error {
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return ctx.Err()
|
||||
default:
|
||||
// If the callback returns true, go ahead and keep walking deeper. This allows
|
||||
// us to programatically continue deeper into directories, or stop digging
|
||||
// us to programmatically continue deeper into directories, or stop digging
|
||||
// if that pathway knows it needs nothing else.
|
||||
if c, err := fs.SafePath(pi); err != nil {
|
||||
return err
|
||||
@@ -204,15 +208,20 @@ func (fs *Filesystem) ParallelSafePath(paths []string) ([]string, error) {
|
||||
return cleaned, g.Wait()
|
||||
}
|
||||
|
||||
type SpaceCheckingOpts struct {
|
||||
AllowStaleResponse bool
|
||||
}
|
||||
|
||||
// Determines if the directory a file is trying to be added to has enough space available
|
||||
// for the file to be written to.
|
||||
//
|
||||
// Because determining the amount of space being used by a server is a taxing operation we
|
||||
// will load it all up into a cache and pull from that as long as the key is not expired.
|
||||
func (fs *Filesystem) HasSpaceAvailable() bool {
|
||||
space := fs.Server.Build().DiskSpace
|
||||
|
||||
size, err := fs.getCachedDiskUsage()
|
||||
//
|
||||
// This operation will potentially block unless allowStaleValue is set to true. See the
|
||||
// documentation on DiskUsage for how this affects the call.
|
||||
func (fs *Filesystem) HasSpaceAvailable(allowStaleValue bool) bool {
|
||||
size, err := fs.DiskUsage(allowStaleValue)
|
||||
if err != nil {
|
||||
fs.Server.Log().WithField("error", err).Warn("failed to determine root server directory size")
|
||||
}
|
||||
@@ -221,6 +230,7 @@ func (fs *Filesystem) HasSpaceAvailable() bool {
|
||||
// been allocated.
|
||||
fs.Server.Proc().SetDisk(size)
|
||||
|
||||
space := fs.Server.DiskSpace()
|
||||
// If space is -1 or 0 just return true, means they're allowed unlimited.
|
||||
//
|
||||
// Technically we could skip disk space calculation because we don't need to check if the server exceeds its limit
|
||||
@@ -237,19 +247,52 @@ func (fs *Filesystem) HasSpaceAvailable() bool {
|
||||
// as needed without overly taxing the system. This will prioritize the value from the cache to avoid
|
||||
// excessive IO usage. We will only walk the filesystem and determine the size of the directory if there
|
||||
// is no longer a cached value.
|
||||
func (fs *Filesystem) getCachedDiskUsage() (int64, error) {
|
||||
//
|
||||
// If "allowStaleValue" is set to true, a stale value MAY be returned to the caller if there is an
|
||||
// expired cache value AND there is currently another lookup in progress. If there is no cached value but
|
||||
// no other lookup is in progress, a fresh disk space response will be returned to the caller.
|
||||
//
|
||||
// This is primarily to avoid a bunch of I/O operations from piling up on the server, especially on servers
|
||||
// with a large amount of files.
|
||||
func (fs *Filesystem) DiskUsage(allowStaleValue bool) (int64, error) {
|
||||
// Check if cache is expired.
|
||||
fs.lookupTimeMu.RLock()
|
||||
isValidInCache := fs.lastLookupTime.After(time.Now().Add(time.Second * time.Duration(-1*config.Get().System.DiskCheckInterval)))
|
||||
fs.lookupTimeMu.RUnlock()
|
||||
|
||||
if !isValidInCache {
|
||||
// If we are now allowing a stale response go ahead and perform the lookup and return the fresh
|
||||
// value. This is a blocking operation to the calling process.
|
||||
if !allowStaleValue {
|
||||
return fs.updateCachedDiskUsage()
|
||||
} else if atomic.LoadInt32(&fs.lookupInProgress) == 0 {
|
||||
// Otherwise, if we allow a stale value and there isn't a valid item in the cache and we aren't
|
||||
// currently performing a lookup, just do the disk usage calculation in the background.
|
||||
go func(fs *Filesystem) {
|
||||
if _, err := fs.updateCachedDiskUsage(); err != nil {
|
||||
fs.Server.Log().WithField("error", errors.WithStack(err)).Warn("failed to determine disk usage in go-routine")
|
||||
}
|
||||
}(fs)
|
||||
}
|
||||
}
|
||||
|
||||
// Return the currently cached value back to the calling function.
|
||||
return atomic.LoadInt64(&fs.disk), nil
|
||||
}
|
||||
|
||||
// Updates the currently used disk space for a server.
|
||||
func (fs *Filesystem) updateCachedDiskUsage() (int64, error) {
|
||||
// Obtain an exclusive lock on this process so that we don't unintentionally run it at the same
|
||||
// time as another running process. Once the lock is available it'll read from the cache for the
|
||||
// second call rather than hitting the disk in parallel.
|
||||
//
|
||||
// This effectively the same speed as running this call in parallel since this cache will return
|
||||
// instantly on the second call.
|
||||
fs.mu.Lock()
|
||||
defer fs.mu.Unlock()
|
||||
|
||||
if fs.lastLookupTime.After(time.Now().Add(time.Second * -60)) {
|
||||
return fs.diskUsage, nil
|
||||
}
|
||||
// Signal that we're currently updating the disk size so that other calls to the disk checking
|
||||
// functions can determine if they should queue up additional calls to this function. Ensure that
|
||||
// we always set this back to 0 when this process is done executing.
|
||||
atomic.StoreInt32(&fs.lookupInProgress, 1)
|
||||
defer atomic.StoreInt32(&fs.lookupInProgress, 0)
|
||||
|
||||
// If there is no size its either because there is no data (in which case running this function
|
||||
// will have effectively no impact), or there is nothing in the cache, in which case we need to
|
||||
@@ -260,8 +303,11 @@ func (fs *Filesystem) getCachedDiskUsage() (int64, error) {
|
||||
// Always cache the size, even if there is an error. We want to always return that value
|
||||
// so that we don't cause an endless loop of determining the disk size if there is a temporary
|
||||
// error encountered.
|
||||
fs.lookupTimeMu.Lock()
|
||||
fs.lastLookupTime = time.Now()
|
||||
atomic.StoreInt64(&fs.diskUsage, size)
|
||||
fs.lookupTimeMu.Unlock()
|
||||
|
||||
atomic.StoreInt64(&fs.disk, size)
|
||||
|
||||
return size, err
|
||||
}
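
A simplified, standalone sketch of the same stale-while-revalidate idea used by DiskUsage and updateCachedDiskUsage above; it is not the Wings implementation and all names are invented. Callers that can tolerate a stale number read the cached value immediately, at most one goroutine refreshes it in the background, and callers that need a fresh number block on the recalculation.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

type usageCache struct {
	mu         sync.Mutex   // serializes the expensive recalculation
	refreshing int32        // 1 while a background refresh is running
	updatedAt  atomic.Value // time.Time of the last successful refresh
	bytes      int64
}

func (c *usageCache) Usage(allowStale bool, ttl time.Duration, calc func() int64) int64 {
	last, _ := c.updatedAt.Load().(time.Time)
	if time.Since(last) > ttl {
		if !allowStale {
			return c.refresh(calc) // block the caller until a fresh value is available
		}
		if atomic.CompareAndSwapInt32(&c.refreshing, 0, 1) {
			go func() { // refresh in the background; return the stale value below
				defer atomic.StoreInt32(&c.refreshing, 0)
				c.refresh(calc)
			}()
		}
	}
	return atomic.LoadInt64(&c.bytes)
}

func (c *usageCache) refresh(calc func() int64) int64 {
	c.mu.Lock()
	defer c.mu.Unlock()
	size := calc()
	atomic.StoreInt64(&c.bytes, size)
	c.updatedAt.Store(time.Now())
	return size
}

func main() {
	c := &usageCache{}
	calc := func() int64 { return 1 << 20 } // stand-in for walking the disk

	fmt.Println(c.Usage(false, time.Minute, calc)) // blocking, fresh value: 1048576
	fmt.Println(c.Usage(true, time.Minute, calc))  // cached value, no blocking: 1048576
}
```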
|
||||
@@ -270,20 +316,40 @@ func (fs *Filesystem) getCachedDiskUsage() (int64, error) {
|
||||
// through all of the folders. Returns the size in bytes. This can be a fairly taxing operation
|
||||
// on locations with tons of files, so it is recommended that you cache the output.
|
||||
func (fs *Filesystem) DirectorySize(dir string) (int64, error) {
|
||||
d, err := fs.SafePath(dir)
|
||||
if err != nil {
|
||||
return 0, errors.WithStack(err)
|
||||
}
|
||||
|
||||
var size int64
|
||||
err := fs.Walk(dir, func(_ string, f os.FileInfo, err error) error {
|
||||
if err != nil {
|
||||
return fs.handleWalkerError(err, f)
|
||||
}
|
||||
var st syscall.Stat_t
|
||||
|
||||
if !f.IsDir() {
|
||||
atomic.AddInt64(&size, f.Size())
|
||||
}
|
||||
err = godirwalk.Walk(d, &godirwalk.Options{
|
||||
Unsorted: true,
|
||||
Callback: func(p string, e *godirwalk.Dirent) error {
|
||||
// If this is a symlink then resolve the final destination of it before trying to continue walking
|
||||
// over its contents. If it resolves outside the server data directory just skip everything else for
|
||||
// it. Otherwise, allow it to continue.
|
||||
if e.IsSymlink() {
|
||||
if _, err := fs.SafePath(p); err != nil {
|
||||
if IsPathResolutionError(err) {
|
||||
return godirwalk.SkipThis
|
||||
}
|
||||
|
||||
return nil
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
if !e.IsDir() {
|
||||
syscall.Lstat(p, &st)
|
||||
atomic.AddInt64(&size, st.Size)
|
||||
}
|
||||
|
||||
return nil
|
||||
},
|
||||
})
|
||||
|
||||
return size, err
|
||||
return size, errors.WithStack(err)
|
||||
}
|
||||
|
||||
// Reads a file on the system and returns it as a byte representation in a file
|
||||
@@ -346,7 +412,7 @@ func (fs *Filesystem) Writefile(p string, r io.Reader) error {
|
||||
sz, err := io.CopyBuffer(file, r, buf)
|
||||
|
||||
// Adjust the disk usage to account for the old size and the new size of the file.
|
||||
atomic.AddInt64(&fs.diskUsage, sz-currentSize)
|
||||
atomic.AddInt64(&fs.disk, sz-currentSize)
|
||||
|
||||
// Finally, chown the file to ensure the permissions don't end up out-of-whack
|
||||
// if we had just created it.
|
||||
@@ -442,17 +508,22 @@ func (fs *Filesystem) Rename(from string, to string) error {
|
||||
return errors.WithStack(err)
|
||||
}
|
||||
|
||||
if f, err := os.Stat(cleanedFrom); err != nil {
|
||||
return errors.WithStack(err)
|
||||
} else {
|
||||
d := cleanedTo
|
||||
if !f.IsDir() {
|
||||
d = strings.TrimSuffix(d, path.Base(cleanedTo))
|
||||
}
|
||||
// If the target file or directory already exists the rename function will fail, so just
|
||||
// bail out now.
|
||||
if _, err := os.Stat(cleanedTo); err == nil {
|
||||
return os.ErrExist
|
||||
}
|
||||
|
||||
// Ensure that the directory we're moving into exists correctly on the system.
|
||||
if cleanedTo == fs.Path() {
|
||||
return errors.New("attempting to rename into an invalid directory space")
|
||||
}
|
||||
|
||||
d := strings.TrimSuffix(cleanedTo, path.Base(cleanedTo))
|
||||
// Ensure that the directory we're moving into exists correctly on the system. Only do this if
|
||||
// we're not at the root directory level.
|
||||
if d != fs.Path() {
|
||||
if mkerr := os.MkdirAll(d, 0644); mkerr != nil {
|
||||
return errors.WithStack(mkerr)
|
||||
return errors.Wrap(mkerr, "failed to create directory structure for file rename")
|
||||
}
|
||||
}
|
||||
|
||||
@@ -485,19 +556,22 @@ func (fs *Filesystem) Chown(path string) error {
|
||||
|
||||
// If this was a directory, begin walking over its contents recursively and ensure that all
|
||||
// of the subfiles and directories get their permissions updated as well.
|
||||
return fs.Walk(cleaned, func(path string, f os.FileInfo, err error) error {
|
||||
if err != nil {
|
||||
return fs.handleWalkerError(err, f)
|
||||
}
|
||||
return godirwalk.Walk(cleaned, &godirwalk.Options{
|
||||
Unsorted: true,
|
||||
Callback: func(p string, e *godirwalk.Dirent) error {
|
||||
// Do not attempt to chmod a symlink. Go's os.Chown function will affect the symlink
|
||||
// so if it points to a location outside the data directory the user would be able to
|
||||
// (un)intentionally modify that files permissions.
|
||||
if e.IsSymlink() {
|
||||
if e.IsDir() {
|
||||
return godirwalk.SkipThis
|
||||
}
|
||||
|
||||
// Do not attempt to chmod a symlink. Go's os.Chown function will affect the symlink
|
||||
// so if it points to a location outside the data directory the user would be able to
|
||||
// (un)intentionally modify that files permissions.
|
||||
if f.Mode()&os.ModeSymlink != 0 {
|
||||
return nil
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
return os.Chown(path, uid, gid)
|
||||
return os.Chown(p, uid, gid)
|
||||
},
|
||||
})
|
||||
}
|
||||
|
||||
@@ -574,7 +648,8 @@ func (fs *Filesystem) Copy(p string) error {
|
||||
}
|
||||
defer dest.Close()
|
||||
|
||||
if _, err := io.Copy(dest, source); err != nil {
|
||||
buf := make([]byte, 1024*4)
|
||||
if _, err := io.CopyBuffer(dest, source, buf); err != nil {
|
||||
return errors.WithStack(err)
|
||||
}
|
||||
|
||||
@@ -584,7 +659,7 @@ func (fs *Filesystem) Copy(p string) error {
|
||||
// Deletes a file or folder from the system. Prevents the user from accidentally
|
||||
// (or maliciously) removing their root server data directory.
|
||||
func (fs *Filesystem) Delete(p string) error {
|
||||
// This is one of the few (only?) places in the codebase where we're explictly not using
|
||||
// This is one of the few (only?) places in the codebase where we're explicitly not using
|
||||
// the SafePath functionality when working with user provided input. If we did, you would
|
||||
// not be able to delete a file that is a symlink pointing to a location outside of the data
|
||||
// directory.
|
||||
@@ -609,11 +684,11 @@ func (fs *Filesystem) Delete(p string) error {
|
||||
}
|
||||
} else {
|
||||
if !st.IsDir() {
|
||||
atomic.SwapInt64(&fs.diskUsage, -st.Size())
|
||||
atomic.SwapInt64(&fs.disk, -st.Size())
|
||||
} else {
|
||||
go func(st os.FileInfo, resolved string) {
|
||||
if s, err := fs.DirectorySize(resolved); err == nil {
|
||||
atomic.AddInt64(&fs.diskUsage, -s)
|
||||
atomic.AddInt64(&fs.disk, -s)
|
||||
}
|
||||
}(st, resolved)
|
||||
}
|
||||
@@ -732,26 +807,38 @@ func (fs *Filesystem) GetIncludedFiles(dir string, ignored []string) (*backup.In
|
||||
// files found, and will keep walking deeper and deeper into directories.
|
||||
inc := new(backup.IncludedFiles)
|
||||
|
||||
if err := fs.Walk(cleaned, func(p string, f os.FileInfo, err error) error {
|
||||
if err != nil {
|
||||
return fs.handleWalkerError(err, f)
|
||||
}
|
||||
err = godirwalk.Walk(cleaned, &godirwalk.Options{
|
||||
Unsorted: true,
|
||||
Callback: func(p string, e *godirwalk.Dirent) error {
|
||||
sp := p
|
||||
if e.IsSymlink() {
|
||||
sp, err = fs.SafePath(p)
|
||||
if err != nil {
|
||||
if IsPathResolutionError(err) {
|
||||
return godirwalk.SkipThis
|
||||
}
|
||||
|
||||
// Avoid unnecessary parsing if there are no ignored files, nothing will match anyways
|
||||
// so no reason to call the function.
|
||||
if len(ignored) == 0 || !i.MatchesPath(strings.TrimPrefix(p, fs.Path()+"/")) {
|
||||
inc.Push(&f, p)
|
||||
}
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
// We can't just abort if the path is technically ignored. It is possible there is a nested
|
||||
// file or folder that should not be excluded, so in this case we need to just keep going
|
||||
// until we get to a final state.
|
||||
return nil
|
||||
}); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
// Only push files into the result array since archives can't create an empty directory within them.
|
||||
if !e.IsDir() {
|
||||
// Avoid unnecessary parsing if there are no ignored files, nothing will match anyways
|
||||
// so no reason to call the function.
|
||||
if len(ignored) == 0 || !i.MatchesPath(strings.TrimPrefix(sp, fs.Path()+"/")) {
|
||||
inc.Push(sp)
|
||||
}
|
||||
}
|
||||
|
||||
return inc, nil
|
||||
// We can't just abort if the path is technically ignored. It is possible there is a nested
|
||||
// file or folder that should not be excluded, so in this case we need to just keep going
|
||||
// until we get to a final state.
|
||||
return nil
|
||||
},
|
||||
})
|
||||
|
||||
return inc, errors.WithStack(err)
|
||||
}
|
||||
|
||||
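The walker above matches the symlink-resolved, server-relative path against the ignore list before pushing a file into the backup set. The sketch below is a stdlib-only stand-in, assuming a glob-style check via path.Match rather than the gitignore-style matcher the real code uses; the root and patterns are hypothetical.

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// isIgnored trims the server root off the absolute path and tests the
// remainder against each pattern. path.Match is only a stand-in here for the
// gitignore-style matcher used by the real code.
func isIgnored(root, p string, patterns []string) bool {
	rel := strings.TrimPrefix(p, root+"/")
	for _, pattern := range patterns {
		if ok, _ := path.Match(pattern, rel); ok {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isIgnored("/srv/server", "/srv/server/logs/latest.log", []string{"logs/*"}))
}
```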
// Compresses all of the files matching the given paths in the specified directory. This function
|
||||
@@ -788,24 +875,38 @@ func (fs *Filesystem) CompressFiles(dir string, paths []string) (os.FileInfo, er
|
||||
continue
|
||||
}
|
||||
|
||||
if f.IsDir() {
|
||||
err := fs.Walk(p, func(s string, info os.FileInfo, err error) error {
|
||||
if err != nil {
|
||||
return fs.handleWalkerError(err, info)
|
||||
}
|
||||
if !f.IsDir() {
|
||||
inc.Push(p)
|
||||
} else {
|
||||
err := godirwalk.Walk(p, &godirwalk.Options{
|
||||
Unsorted: true,
|
||||
Callback: func(p string, e *godirwalk.Dirent) error {
|
||||
sp := p
|
||||
if e.IsSymlink() {
|
||||
// Ensure that any symlinks are properly resolved to their final destination. If
|
||||
// that destination is outside the server directory skip over this entire item, otherwise
|
||||
// use the resolved location for the rest of this function.
|
||||
sp, err = fs.SafePath(p)
|
||||
if err != nil {
|
||||
if IsPathResolutionError(err) {
|
||||
return godirwalk.SkipThis
|
||||
}
|
||||
|
||||
if !info.IsDir() {
|
||||
inc.Push(&info, s)
|
||||
}
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
if !e.IsDir() {
|
||||
inc.Push(sp)
|
||||
}
|
||||
|
||||
return nil
|
||||
},
|
||||
})
|
||||
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
} else {
|
||||
inc.Push(&f, p)
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
@@ -9,5 +9,5 @@ import (
func (s *Stat) CTime() time.Time {
	st := s.Info.Sys().(*syscall.Stat_t)

	return time.Unix(int64(st.Ctimespec.Sec), int64(st.Ctimespec.Nsec))
	return time.Unix(st.Ctimespec.Sec, st.Ctimespec.Nsec)
}
@@ -9,5 +9,5 @@ import (
func (s *Stat) CTime() time.Time {
	st := s.Info.Sys().(*syscall.Stat_t)

	return time.Unix(int64(st.Ctim.Sec), int64(st.Ctim.Nsec))
	return time.Unix(st.Ctim.Sec, st.Ctim.Nsec)
}
@@ -10,7 +10,6 @@ import (
|
||||
"os"
|
||||
"path/filepath"
|
||||
"reflect"
|
||||
"sync"
|
||||
"sync/atomic"
|
||||
)
|
||||
|
||||
@@ -19,7 +18,7 @@ import (
|
||||
func (fs *Filesystem) SpaceAvailableForDecompression(dir string, file string) (bool, error) {
|
||||
// Don't waste time trying to determine this if we know the server will have the space for
|
||||
// it since there is no limit.
|
||||
if fs.Server.Build().DiskSpace <= 0 {
|
||||
if fs.Server.DiskSpace() <= 0 {
|
||||
return true, nil
|
||||
}
|
||||
|
||||
@@ -28,37 +27,19 @@ func (fs *Filesystem) SpaceAvailableForDecompression(dir string, file string) (b
|
||||
return false, err
|
||||
}
|
||||
|
||||
wg := new(sync.WaitGroup)
|
||||
|
||||
var dirSize int64
|
||||
var cErr error
|
||||
// Get the cached size in a parallel process so that if it is not cached we are not
|
||||
// waiting an unnecessary amount of time on this call.
|
||||
go func() {
|
||||
wg.Add(1)
|
||||
defer wg.Done()
|
||||
|
||||
dirSize, cErr = fs.getCachedDiskUsage()
|
||||
}()
|
||||
dirSize, err := fs.DiskUsage(false)
|
||||
|
||||
var size int64
|
||||
// In a seperate thread, walk over the archive and figure out just how large the final
|
||||
// output would be from dearchiving it.
|
||||
go func() {
|
||||
wg.Add(1)
|
||||
defer wg.Done()
|
||||
// Walk over the archive and figure out just how large the final output would be from unarchiving it.
|
||||
archiver.Walk(source, func(f archiver.File) error {
|
||||
atomic.AddInt64(&size, f.Size())
|
||||
|
||||
// Walk all of the files and calculate the total decompressed size of this archive.
|
||||
archiver.Walk(source, func(f archiver.File) error {
|
||||
atomic.AddInt64(&size, f.Size())
|
||||
return nil
|
||||
})
|
||||
|
||||
return nil
|
||||
})
|
||||
}()
|
||||
|
||||
wg.Wait()
|
||||
|
||||
return ((dirSize + size) / 1000.0 / 1000.0) <= fs.Server.Build().DiskSpace, cErr
|
||||
return ((dirSize + size) / 1000.0 / 1000.0) <= fs.Server.DiskSpace(), errors.WithStack(err)
|
||||
}
|
||||
|
||||
// Decompress a file in a given directory by using the archiver tool to infer the file
|
||||
|
||||
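This hunk drops the WaitGroup plumbing and simply totals the uncompressed sizes reported while walking the archive, then compares that against the disk limit. A hedged sketch of that size check follows; the archiver import path and the archive/limit values are assumptions, not taken from the diff.

```go
package main

import (
	"fmt"

	"github.com/mholt/archiver/v3" // import path assumed for the v3-era archiver
)

// fitsWithin walks the archive headers, totals the uncompressed sizes, and
// reports whether the result stays under the byte limit. Archive path and
// limit in main are hypothetical.
func fitsWithin(archive string, limitBytes int64) (bool, error) {
	var total int64
	err := archiver.Walk(archive, func(f archiver.File) error {
		total += f.Size()
		return nil
	})
	return total <= limitBytes, err
}

func main() {
	ok, err := fitsWithin("/tmp/backup.tar.gz", 1024*1024*1024)
	fmt.Println(ok, err)
}
```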
@@ -1,141 +0,0 @@
|
||||
package server
|
||||
|
||||
import (
|
||||
"context"
|
||||
"github.com/gammazero/workerpool"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"runtime"
|
||||
"sync"
|
||||
)
|
||||
|
||||
type FileWalker struct {
|
||||
*Filesystem
|
||||
}
|
||||
|
||||
type PooledFileWalker struct {
|
||||
wg sync.WaitGroup
|
||||
pool *workerpool.WorkerPool
|
||||
callback filepath.WalkFunc
|
||||
cancel context.CancelFunc
|
||||
|
||||
err error
|
||||
errOnce sync.Once
|
||||
|
||||
Filesystem *Filesystem
|
||||
}
|
||||
|
||||
// Returns a new walker instance.
|
||||
func (fs *Filesystem) NewWalker() *FileWalker {
|
||||
return &FileWalker{fs}
|
||||
}
|
||||
|
||||
// Creates a new pooled file walker that will concurrently walk over a given directory but limit itself
|
||||
// to a worker pool as to not completely flood out the system or cause a process crash.
|
||||
func newPooledWalker(fs *Filesystem) *PooledFileWalker {
|
||||
return &PooledFileWalker{
|
||||
Filesystem: fs,
|
||||
// Create a worker pool that is the same size as the number of processors available on the
|
||||
// system. Going much higher doesn't provide much of a performance boost, and is only more
|
||||
// likely to lead to resource overloading anyways.
|
||||
pool: workerpool.New(runtime.NumCPU()),
|
||||
}
|
||||
}
|
||||
|
||||
// Process a given path by calling the callback function for all of the files and directories within
|
||||
// the path, and then dropping into any directories that we come across.
|
||||
func (w *PooledFileWalker) process(path string) error {
|
||||
p, err := w.Filesystem.SafePath(path)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
files, err := ioutil.ReadDir(p)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Loop over all of the files and directories in the given directory and call the provided
|
||||
// callback function. If we encounter a directory, push that directory onto the worker queue
|
||||
// to be processed.
|
||||
for _, f := range files {
|
||||
sp, err := w.Filesystem.SafeJoin(p, f)
|
||||
if err != nil {
|
||||
// Let the callback function handle what to do if there is a path resolution error because a
|
||||
// dangerous path was resolved. If there is an error returned, return from this entire process
|
||||
// otherwise just skip over this specific file. We don't care if its a file or a directory at
|
||||
// this point since either way we're skipping it, however, still check for the SkipDir since that
|
||||
// would be thrown otherwise.
|
||||
if err = w.callback(sp, f, err); err != nil && err != filepath.SkipDir {
|
||||
return err
|
||||
}
|
||||
|
||||
continue
|
||||
}
|
||||
|
||||
i, err := os.Stat(sp)
|
||||
// You might end up getting an error about a file or folder not existing if the given path
|
||||
// if it is an invalid symlink. We can safely just skip over these files I believe.
|
||||
if os.IsNotExist(err) {
|
||||
continue
|
||||
}
|
||||
|
||||
// Call the user-provided callback for this file or directory. If an error is returned that is
|
||||
// not a SkipDir call, abort the entire process and bubble that error up.
|
||||
if err = w.callback(sp, i, err); err != nil && err != filepath.SkipDir {
|
||||
return err
|
||||
}
|
||||
|
||||
// If this is a directory, and we didn't get a SkipDir error, continue through by pushing another
|
||||
// job to the pool to handle it. If we requested a skip, don't do anything just continue on to the
|
||||
// next item.
|
||||
if i.IsDir() && err != filepath.SkipDir {
|
||||
w.push(sp)
|
||||
} else if !i.IsDir() && err == filepath.SkipDir {
|
||||
// Per the spec for the callback, if we get a SkipDir error but it is returned for an item
|
||||
// that is _not_ a directory, abort the remaining operations on the directory.
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Push a new path into the worker pool and increment the waitgroup so that we do not return too
|
||||
// early and cause panic's as internal directories attempt to submit to the pool.
|
||||
func (w *PooledFileWalker) push(path string) {
|
||||
w.wg.Add(1)
|
||||
w.pool.Submit(func() {
|
||||
defer w.wg.Done()
|
||||
if err := w.process(path); err != nil {
|
||||
w.errOnce.Do(func() {
|
||||
w.err = err
|
||||
if w.cancel != nil {
|
||||
w.cancel()
|
||||
}
|
||||
})
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
// Walks the given directory and executes the callback function for all of the files and directories
|
||||
// that are encountered.
|
||||
func (fs *Filesystem) Walk(dir string, callback filepath.WalkFunc) error {
|
||||
w := newPooledWalker(fs)
|
||||
w.callback = callback
|
||||
|
||||
_, cancel := context.WithCancel(context.Background())
|
||||
w.cancel = cancel
|
||||
|
||||
w.push(dir)
|
||||
|
||||
w.wg.Wait()
|
||||
w.pool.StopWait()
|
||||
|
||||
if w.err != nil {
|
||||
return w.err
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
@@ -25,7 +25,7 @@ import (
|
||||
// Executes the installation stack for a server process. Bubbles any errors up to the calling
|
||||
// function which should handle contacting the panel to notify it of the server state.
|
||||
//
|
||||
// Pass true as the first arugment in order to execute a server sync before the process to
|
||||
// Pass true as the first argument in order to execute a server sync before the process to
|
||||
// ensure the latest information is used.
|
||||
func (s *Server) Install(sync bool) error {
|
||||
if sync {
|
||||
@@ -35,10 +35,17 @@ func (s *Server) Install(sync bool) error {
|
||||
}
|
||||
}
|
||||
|
||||
// Send the start event so the Panel can automatically update.
|
||||
s.Events().Publish(InstallStartedEvent, "")
|
||||
var err error
|
||||
if !s.Config().SkipEggScripts {
|
||||
// Send the start event so the Panel can automatically update. We don't send this unless the process
|
||||
// is actually going to run, otherwise all sorts of weird rapid UI behavior happens since there isn't
|
||||
// an actual install process being executed.
|
||||
s.Events().Publish(InstallStartedEvent, "")
|
||||
|
||||
err := s.internalInstall()
|
||||
err = s.internalInstall()
|
||||
} else {
|
||||
s.Log().Info("server configured to skip running installation scripts for this egg, not executing process")
|
||||
}
|
||||
|
||||
s.Log().Debug("notifying panel of server install state")
|
||||
if serr := s.SyncInstallState(err == nil); serr != nil {
|
||||
@@ -190,7 +197,7 @@ func (ip *InstallationProcess) RemoveContainer() {
|
||||
}
|
||||
}
|
||||
|
||||
// Runs the installation process, this is done as a backgrounded thread. This will configure
|
||||
// Runs the installation process, this is done as in a background thread. This will configure
|
||||
// the required environment, and then spin up the installation container.
|
||||
//
|
||||
// Once the container finishes installing the results will be stored in an installation
|
||||
@@ -203,7 +210,7 @@ func (ip *InstallationProcess) Run() error {
|
||||
|
||||
// We now have an exclusive lock on this installation process. Ensure that whenever this
|
||||
// process is finished that the semaphore is released so that other processes and be executed
|
||||
// without encounting a wait timeout.
|
||||
// without encountering a wait timeout.
|
||||
defer func() {
|
||||
ip.Server.Log().Debug("releasing installation process lock")
|
||||
ip.Server.installer.sem.Release(1)
|
||||
@@ -457,13 +464,13 @@ func (ip *InstallationProcess) Execute() (string, error) {
|
||||
ip.Server.Events().Publish(DaemonMessageEvent, "Installation process completed.")
|
||||
}(r.ID)
|
||||
|
||||
sChann, eChann := ip.client.ContainerWait(ip.context, r.ID, container.WaitConditionNotRunning)
|
||||
sChan, eChan := ip.client.ContainerWait(ip.context, r.ID, container.WaitConditionNotRunning)
|
||||
select {
|
||||
case err := <-eChann:
|
||||
case err := <-eChan:
|
||||
if err != nil {
|
||||
return "", errors.WithStack(err)
|
||||
}
|
||||
case <-sChann:
|
||||
case <-sChan:
|
||||
}
|
||||
|
||||
return r.ID, nil
|
||||
|
||||
@@ -8,6 +8,7 @@ import (
|
||||
"github.com/pterodactyl/wings/environment"
|
||||
"github.com/pterodactyl/wings/events"
|
||||
"regexp"
|
||||
"strconv"
|
||||
)
|
||||
|
||||
// Adds all of the internal event listeners we want to use for a server.
|
||||
@@ -16,45 +17,53 @@ func (s *Server) StartEventListeners() {
|
||||
state := make(chan events.Event)
|
||||
stats := make(chan events.Event)
|
||||
|
||||
s.Environment.Events().Subscribe(environment.ConsoleOutputEvent, console)
|
||||
s.Environment.Events().Subscribe(environment.StateChangeEvent, state)
|
||||
s.Environment.Events().Subscribe(environment.ResourceEvent, stats)
|
||||
go func(console chan events.Event) {
|
||||
for data := range console {
|
||||
// Immediately emit this event back over the server event stream since it is
|
||||
// being called from the environment event stream and things probably aren't
|
||||
// listening to that event.
|
||||
s.Events().Publish(ConsoleOutputEvent, data.Data)
|
||||
|
||||
// TODO: this is leaky I imagine since the routines aren't destroyed when the server is?
|
||||
go func() {
|
||||
for {
|
||||
select {
|
||||
case data := <-console:
|
||||
// Immediately emit this event back over the server event stream since it is
|
||||
// being called from the environment event stream and things probably aren't
|
||||
// listening to that event.
|
||||
s.Events().Publish(ConsoleOutputEvent, data.Data)
|
||||
|
||||
// Also pass the data along to the console output channel.
|
||||
s.onConsoleOutput(data.Data)
|
||||
case data := <-state:
|
||||
s.SetState(data.Data)
|
||||
case data := <-stats:
|
||||
st := new(environment.Stats)
|
||||
if err := json.Unmarshal([]byte(data.Data), st); err != nil {
|
||||
s.Log().WithField("error", errors.WithStack(err)).Warn("failed to unmarshal server environment stats")
|
||||
continue
|
||||
}
|
||||
|
||||
// Update the server resource tracking object with the resources we got here.
|
||||
s.resources.mu.Lock()
|
||||
s.resources.Stats = *st
|
||||
s.resources.mu.Unlock()
|
||||
|
||||
// TODO: we'll need to handle this better since calling it in rapid succession will
|
||||
// cause it to block until the first call is done calculating disk usage, which will
|
||||
// case stat events to pile up for the server.
|
||||
s.Filesystem.HasSpaceAvailable()
|
||||
|
||||
s.emitProcUsage()
|
||||
}
|
||||
// Also pass the data along to the console output channel.
|
||||
s.onConsoleOutput(data.Data)
|
||||
}
|
||||
}()
|
||||
|
||||
s.Log().Fatal("unexpected end-of-range for server console channel")
|
||||
}(console)
|
||||
|
||||
go func(state chan events.Event) {
|
||||
for data := range state {
|
||||
s.SetState(data.Data)
|
||||
}
|
||||
|
||||
s.Log().Fatal("unexpected end-of-range for server state channel")
|
||||
}(state)
|
||||
|
||||
go func(stats chan events.Event) {
|
||||
for data := range stats {
|
||||
st := new(environment.Stats)
|
||||
if err := json.Unmarshal([]byte(data.Data), st); err != nil {
|
||||
s.Log().WithField("error", errors.WithStack(err)).Warn("failed to unmarshal server environment stats")
|
||||
continue
|
||||
}
|
||||
|
||||
// Update the server resource tracking object with the resources we got here.
|
||||
s.resources.mu.Lock()
|
||||
s.resources.Stats = *st
|
||||
s.resources.mu.Unlock()
|
||||
|
||||
s.Filesystem.HasSpaceAvailable(true)
|
||||
|
||||
s.emitProcUsage()
|
||||
}
|
||||
|
||||
s.Log().Fatal("unexpected end-of-range for server stats channel")
|
||||
}(stats)
|
||||
|
||||
s.Log().Info("registering event listeners: console, state, resources...")
|
||||
s.Environment.Events().Subscribe([]string{environment.ConsoleOutputEvent}, console)
|
||||
s.Environment.Events().Subscribe([]string{environment.StateChangeEvent}, state)
|
||||
s.Environment.Events().Subscribe([]string{environment.ResourceEvent}, stats)
|
||||
}
|
||||
|
||||
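The listener rewrite above replaces one goroutine per subscription with a single goroutine that selects over the console, state, and stats channels. A stripped-down, self-contained sketch of that fan-in pattern, using plain string payloads purely for illustration:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	console := make(chan string)
	state := make(chan string)
	stats := make(chan string)

	// One goroutine handles every subscription instead of one goroutine each,
	// so all server event handling happens in a single place.
	go func() {
		for {
			select {
			case line := <-console:
				fmt.Println("console:", line)
			case st := <-state:
				fmt.Println("state:", st)
			case s := <-stats:
				fmt.Println("stats:", s)
			}
		}
	}()

	console <- "Server started"
	state <- "running"
	stats <- `{"memory": 1024}`

	// Give the handler a moment to print the last event before exiting.
	time.Sleep(100 * time.Millisecond)
}
```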
var stripAnsiRegex = regexp.MustCompile("[\u001B\u009B][[\\]()#;?]*(?:(?:(?:[a-zA-Z\\d]*(?:;[a-zA-Z\\d]*)*)?\u0007)|(?:(?:\\d{1,4}(?:;\\d{0,4})*)?[\\dA-PRZcf-ntqry=><~]))")
|
||||
@@ -81,7 +90,7 @@ func (s *Server) onConsoleOutput(data string) {
|
||||
|
||||
s.Log().WithFields(log.Fields{
|
||||
"match": l.String(),
|
||||
"against": data,
|
||||
"against": strconv.QuoteToASCII(data),
|
||||
}).Debug("detected server in running state based on console line output")
|
||||
|
||||
// If the specific line of output is one that would mark the server as started,
|
||||
|
||||
@@ -93,7 +93,7 @@ func FromConfiguration(data *api.ServerConfigurationResponse) (*Server, error) {
|
||||
}
|
||||
|
||||
s.cfg = cfg
|
||||
if err := s.UpdateDataStructure(data.Settings, false); err != nil {
|
||||
if err := s.UpdateDataStructure(data.Settings); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
@@ -103,10 +103,15 @@ func FromConfiguration(data *api.ServerConfigurationResponse) (*Server, error) {
|
||||
// Right now we only support a Docker based environment, so I'm going to hard code
|
||||
// this logic in. When we're ready to support other environment we'll need to make
|
||||
// some modifications here obviously.
|
||||
envCfg := environment.NewConfiguration(s.Mounts(), s.cfg.Allocations, s.cfg.Build, s.cfg.EnvVars)
|
||||
settings := environment.Settings{
|
||||
Mounts: s.Mounts(),
|
||||
Allocations: s.cfg.Allocations,
|
||||
Limits: s.cfg.Build,
|
||||
}
|
||||
|
||||
envCfg := environment.NewConfiguration(settings, s.GetEnvironmentVariables())
|
||||
meta := docker.Metadata{
|
||||
Invocation: s.Config().Invocation,
|
||||
Image: s.Config().Container.Image,
|
||||
Image: s.Config().Container.Image,
|
||||
}
|
||||
|
||||
if env, err := docker.New(s.Id(), &meta, envCfg); err != nil {
|
||||
@@ -123,7 +128,7 @@ func FromConfiguration(data *api.ServerConfigurationResponse) (*Server, error) {
|
||||
|
||||
// If the server's data directory exists, force disk usage calculation.
|
||||
if _, err := os.Stat(s.Filesystem.Path()); err == nil {
|
||||
go s.Filesystem.HasSpaceAvailable()
|
||||
s.Filesystem.HasSpaceAvailable(true)
|
||||
}
|
||||
|
||||
return s, nil
|
||||
|
||||
@@ -3,6 +3,8 @@ package server
|
||||
import (
|
||||
"context"
|
||||
"github.com/pkg/errors"
|
||||
"github.com/pterodactyl/wings/config"
|
||||
"github.com/pterodactyl/wings/environment"
|
||||
"golang.org/x/sync/semaphore"
|
||||
"os"
|
||||
"time"
|
||||
@@ -86,6 +88,10 @@ func (s *Server) HandlePowerAction(action PowerAction, waitSeconds ...int) error
|
||||
|
||||
switch action {
|
||||
case PowerActionStart:
|
||||
if s.GetState() != environment.ProcessOfflineState {
|
||||
return ErrIsRunning
|
||||
}
|
||||
|
||||
// Run the pre-boot logic for the server before processing the environment start.
|
||||
if err := s.onBeforeStart(); err != nil {
|
||||
return err
|
||||
@@ -93,7 +99,7 @@ func (s *Server) HandlePowerAction(action PowerAction, waitSeconds ...int) error
|
||||
|
||||
return s.Environment.Start()
|
||||
case PowerActionStop:
|
||||
// We're specificially waiting for the process to be stopped here, otherwise the lock is released
|
||||
// We're specifically waiting for the process to be stopped here, otherwise the lock is released
|
||||
// too soon, and you can rack up all sorts of issues.
|
||||
return s.Environment.WaitForStop(10*60, true)
|
||||
case PowerActionRestart:
|
||||
@@ -125,32 +131,46 @@ func (s *Server) HandlePowerAction(action PowerAction, waitSeconds ...int) error
|
||||
// Execute a few functions before actually calling the environment start commands. This ensures
|
||||
// that everything is ready to go for environment booting, and that the server can even be started.
|
||||
func (s *Server) onBeforeStart() error {
|
||||
// Disallow start & restart if the server is suspended.
|
||||
if s.IsSuspended() {
|
||||
return new(suspendedError)
|
||||
}
|
||||
|
||||
s.Log().Info("syncing server configuration with panel")
|
||||
if err := s.Sync(); err != nil {
|
||||
return errors.Wrap(err, "unable to sync server data from Panel instance")
|
||||
}
|
||||
|
||||
if !s.Filesystem.HasSpaceAvailable() {
|
||||
return errors.New("cannot start server, not enough disk space available")
|
||||
// Disallow start & restart if the server is suspended. Do this check after performing a sync
|
||||
// action with the Panel to ensure that we have the most up-to-date information for that server.
|
||||
if s.IsSuspended() {
|
||||
return ErrSuspended
|
||||
}
|
||||
|
||||
// Ensure we sync the server information with the environment so that any new environment variables
|
||||
// and process resource limits are correctly applied.
|
||||
s.SyncWithEnvironment()
|
||||
|
||||
// If a server has unlimited disk space, we don't care enough to block the startup to check remaining.
|
||||
// However, we should trigger a size anyway, as it'd be good to kick it off for other processes.
|
||||
if s.DiskSpace() <= 0 {
|
||||
s.Filesystem.HasSpaceAvailable(true)
|
||||
} else {
|
||||
s.PublishConsoleOutputFromDaemon("Checking server disk space usage, this could take a few seconds...")
|
||||
if !s.Filesystem.HasSpaceAvailable(false) {
|
||||
return errors.New("cannot start server, not enough disk space available")
|
||||
}
|
||||
}
|
||||
|
||||
// Update the configuration files defined for the server before beginning the boot process.
|
||||
// This process executes a bunch of parallel updates, so we just block until that process
|
||||
// is completee. Any errors as a result of this will just be bubbled out in the logger,
|
||||
// is complete. Any errors as a result of this will just be bubbled out in the logger,
|
||||
// we don't need to actively do anything about it at this point, worst comes to worst the
|
||||
// server starts in a weird state and the user can manually adjust.
|
||||
s.PublishConsoleOutputFromDaemon("Updating process configuration files...")
|
||||
s.UpdateConfigurationFiles()
|
||||
|
||||
s.PublishConsoleOutputFromDaemon("Ensuring file permissions are set correctly, this could take a few seconds...")
|
||||
// Ensure all of the server file permissions are set correctly before booting the process.
|
||||
if err := s.Filesystem.Chown("/"); err != nil {
|
||||
return errors.Wrap(err, "failed to chown root server directory during pre-boot process")
|
||||
if config.Get().System.CheckPermissionsOnBoot {
|
||||
s.PublishConsoleOutputFromDaemon("Ensuring file permissions are set correctly, this could take a few seconds...")
|
||||
// Ensure all of the server file permissions are set correctly before booting the process.
|
||||
if err := s.Filesystem.Chown("/"); err != nil {
|
||||
return errors.Wrap(err, "failed to chown root server directory during pre-boot process")
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
|
||||
@@ -39,8 +39,12 @@ func (s *Server) emitProcUsage() {
	s.resources.mu.RLock()
	defer s.resources.mu.RUnlock()

	b, _ := json.Marshal(s.resources)
	s.Events().Publish(StatsEvent, string(b))
	b, err := json.Marshal(s.resources)
	if err == nil {
		s.Events().Publish(StatsEvent, string(b))
	}

	// TODO: This might be a good place to add a debug log if stats are not sending.
}

// Returns the servers current state.
@@ -77,7 +77,7 @@ func (s *Server) GetEnvironmentVariables() []string {
|
||||
var out = []string{
|
||||
fmt.Sprintf("TZ=%s", zone),
|
||||
fmt.Sprintf("STARTUP=%s", s.Config().Invocation),
|
||||
fmt.Sprintf("SERVER_MEMORY=%d", s.Build().MemoryLimit),
|
||||
fmt.Sprintf("SERVER_MEMORY=%d", s.MemoryLimit()),
|
||||
fmt.Sprintf("SERVER_IP=%s", s.Config().Allocations.DefaultMapping.Ip),
|
||||
fmt.Sprintf("SERVER_PORT=%d", s.Config().Allocations.DefaultMapping.Port),
|
||||
}
|
||||
@@ -125,7 +125,7 @@ func (s *Server) Sync() error {
|
||||
|
||||
func (s *Server) SyncWithConfiguration(cfg *api.ServerConfigurationResponse) error {
|
||||
// Update the data structure and persist it to the disk.
|
||||
if err := s.UpdateDataStructure(cfg.Settings, false); err != nil {
|
||||
if err := s.UpdateDataStructure(cfg.Settings); err != nil {
|
||||
return errors.WithStack(err)
|
||||
}
|
||||
|
||||
@@ -144,7 +144,7 @@ func (s *Server) SyncWithConfiguration(cfg *api.ServerConfigurationResponse) err
|
||||
}
|
||||
|
||||
// Reads the log file for a server up to a specified number of bytes.
|
||||
func (s *Server) ReadLogfile(len int64) ([]string, error) {
|
||||
func (s *Server) ReadLogfile(len int) ([]string, error) {
|
||||
return s.Environment.Readlog(len)
|
||||
}
|
||||
|
||||
@@ -156,7 +156,7 @@ func (s *Server) IsBootable() bool {
|
||||
return exists
|
||||
}
|
||||
|
||||
// Initalizes a server instance. This will run through and ensure that the environment
|
||||
// Initializes a server instance. This will run through and ensure that the environment
|
||||
// for the server is setup, and that all of the necessary files are created.
|
||||
func (s *Server) CreateEnvironment() error {
|
||||
// Ensure the data directory exists before getting too far through this process.
|
||||
@@ -177,10 +177,6 @@ func (s *Server) IsSuspended() bool {
|
||||
return s.Config().Suspended
|
||||
}
|
||||
|
||||
func (s *Server) Build() *environment.Limits {
|
||||
return &s.Config().Build
|
||||
}
|
||||
|
||||
func (s *Server) ProcessConfiguration() *api.ProcessConfiguration {
|
||||
s.RLock()
|
||||
defer s.RUnlock()
|
||||
|
||||
@@ -15,7 +15,7 @@ import (
|
||||
// The server will be marked as requiring a rebuild on the next boot sequence,
|
||||
// it is up to the specific environment to determine what needs to happen when
|
||||
// that is the case.
|
||||
func (s *Server) UpdateDataStructure(data []byte, background bool) error {
|
||||
func (s *Server) UpdateDataStructure(data []byte) error {
|
||||
src := new(Configuration)
|
||||
if err := json.Unmarshal(data, src); err != nil {
|
||||
return errors.WithStack(err)
|
||||
@@ -80,6 +80,14 @@ func (s *Server) UpdateDataStructure(data []byte, background bool) error {
|
||||
c.Suspended = v
|
||||
}
|
||||
|
||||
if v, err := jsonparser.GetBoolean(data, "skip_egg_scripts"); err != nil {
|
||||
if err != jsonparser.KeyPathNotFoundError {
|
||||
return errors.WithStack(err)
|
||||
}
|
||||
} else {
|
||||
c.SkipEggScripts = v
|
||||
}
|
||||
|
||||
// Environment and Mappings should be treated as a full update at all times, never a
|
||||
// true patch, otherwise we can't know what we're passing along.
|
||||
if src.EnvVars != nil && len(src.EnvVars) > 0 {
|
||||
@@ -97,36 +105,52 @@ func (s *Server) UpdateDataStructure(data []byte, background bool) error {
|
||||
// Update the configuration once we have a lock on the configuration object.
|
||||
s.cfg = c
|
||||
|
||||
if background {
|
||||
go s.runBackgroundActions()
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Runs through different actions once a server's configuration has been persisted
|
||||
// to the disk. This function does not return anything as any failures should be logged
|
||||
// but have no effect on actually updating the server itself.
|
||||
// Updates the environment for the server to match any of the changed data. This pushes new settings and
|
||||
// environment variables to the environment. In addition, the in-situ update method is called on the
|
||||
// environment which will allow environments that make use of it (such as Docker) to immediately apply
|
||||
// some settings without having to wait on a server to restart.
|
||||
//
|
||||
// These tasks run in independent threads where relevant to speed up any updates
|
||||
// that need to happen.
|
||||
func (s *Server) runBackgroundActions() {
|
||||
// Check if the s is now suspended, and if so and the process is not terminated
|
||||
// yet, do it immediately.
|
||||
if s.IsSuspended() && s.GetState() != environment.ProcessOfflineState {
|
||||
s.Log().Info("server suspended with running process state, terminating now")
|
||||
// This functionality allows a server's resources limits to be modified on the fly and have them apply
|
||||
// right away allowing for dynamic resource allocation and responses to abusive server processes.
|
||||
func (s *Server) SyncWithEnvironment() {
|
||||
s.Log().Debug("syncing server settings with environment")
|
||||
|
||||
if err := s.Environment.WaitForStop(10, true); err != nil {
|
||||
s.Log().WithField("error", err).Warn("failed to terminate server environment after suspension")
|
||||
}
|
||||
}
|
||||
// Update the environment settings using the new information from this server.
|
||||
s.Environment.Config().SetSettings(environment.Settings{
|
||||
Mounts: s.Mounts(),
|
||||
Allocations: s.Config().Allocations,
|
||||
Limits: s.Config().Build,
|
||||
})
|
||||
|
||||
// If build limits are changed, environment variables also change. Plus, any modifications to
|
||||
// the startup command also need to be properly propagated to this environment.
|
||||
//
|
||||
// @see https://github.com/pterodactyl/panel/issues/2255
|
||||
s.Environment.Config().SetEnvironmentVariables(s.GetEnvironmentVariables())
|
||||
|
||||
if !s.IsSuspended() {
|
||||
// Update the environment in place, allowing memory and CPU usage to be adjusted
|
||||
// on the fly without the user needing to reboot (theoretically).
|
||||
s.Log().Info("performing server limit modification on-the-fly")
|
||||
if err := s.Environment.InSituUpdate(); err != nil {
|
||||
// This is not a failure, the process is still running fine and will fix itself on the
|
||||
// next boot, or fail out entirely in a more logical position.
|
||||
s.Log().WithField("error", err).Warn("failed to perform on-the-fly update of the server environment")
|
||||
}
|
||||
} else {
|
||||
// Checks if the server is now in a suspended state. If so and a server process is currently running it
|
||||
// will be gracefully stopped (and terminated if it refuses to stop).
|
||||
if s.GetState() != environment.ProcessOfflineState {
|
||||
s.Log().Info("server suspended with running process state, terminating now")
|
||||
|
||||
go func(s *Server) {
|
||||
if err := s.Environment.WaitForStop(60, true); err != nil {
|
||||
s.Log().WithField("error", err).Warn("failed to terminate server environment after suspension")
|
||||
}
|
||||
}(s)
|
||||
}
|
||||
}
|
||||
}
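The UpdateDataStructure hunk above treats jsonparser.KeyPathNotFoundError as "field not present, keep the current value" rather than as a failure. A small sketch of that patch-style parsing, with a hypothetical payload and field:

```go
package main

import (
	"fmt"

	"github.com/buger/jsonparser"
)

// patchSuspended only overwrites the current value when the key is actually
// present in the partial update; a missing key is not an error.
func patchSuspended(current bool, data []byte) (bool, error) {
	v, err := jsonparser.GetBoolean(data, "suspended")
	if err != nil {
		if err == jsonparser.KeyPathNotFoundError {
			return current, nil
		}
		return current, err
	}
	return v, nil
}

func main() {
	v, err := patchSuspended(false, []byte(`{"suspended": true}`))
	fmt.Println(v, err)
}
```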
sftp/errors.go (new file, 19 lines)
@@ -0,0 +1,19 @@
package sftp

type fxerr uint32

const (
	// Extends the default SFTP server to return a quota exceeded error to the client.
	//
	// @see https://tools.ietf.org/id/draft-ietf-secsh-filexfer-13.txt
	ErrSshQuotaExceeded = fxerr(15)
)

func (e fxerr) Error() string {
	switch e {
	case ErrSshQuotaExceeded:
		return "Quota Exceeded"
	default:
		return "Failure"
	}
}
sftp/handler.go (new file, 380 lines)
@@ -0,0 +1,380 @@
|
||||
package sftp
|
||||
|
||||
import (
|
||||
"github.com/apex/log"
|
||||
"github.com/patrickmn/go-cache"
|
||||
"github.com/pkg/errors"
|
||||
"github.com/pkg/sftp"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"sync"
|
||||
)
|
||||
|
||||
type FileSystem struct {
|
||||
UUID string
|
||||
Permissions []string
|
||||
ReadOnly bool
|
||||
User User
|
||||
Cache *cache.Cache
|
||||
|
||||
PathValidator func(fs FileSystem, p string) (string, error)
|
||||
HasDiskSpace func(fs FileSystem) bool
|
||||
|
||||
logger *log.Entry
|
||||
lock sync.Mutex
|
||||
}
|
||||
|
||||
func (fs FileSystem) buildPath(p string) (string, error) {
|
||||
return fs.PathValidator(fs, p)
|
||||
}
|
||||
|
||||
const (
|
||||
PermissionFileRead = "file.read"
|
||||
PermissionFileReadContent = "file.read-content"
|
||||
PermissionFileCreate = "file.create"
|
||||
PermissionFileUpdate = "file.update"
|
||||
PermissionFileDelete = "file.delete"
|
||||
)
|
||||
|
||||
// Fileread creates a reader for a file on the system and returns the reader back.
|
||||
func (fs FileSystem) Fileread(request *sftp.Request) (io.ReaderAt, error) {
|
||||
// Check first if the user can actually open and view a file. This permission is named
|
||||
// really poorly, but it is checking if they can read. There is an addition permission,
|
||||
// "save-files" which determines if they can write that file.
|
||||
if !fs.can(PermissionFileReadContent) {
|
||||
return nil, sftp.ErrSshFxPermissionDenied
|
||||
}
|
||||
|
||||
p, err := fs.buildPath(request.Filepath)
|
||||
if err != nil {
|
||||
return nil, sftp.ErrSshFxNoSuchFile
|
||||
}
|
||||
|
||||
fs.lock.Lock()
|
||||
defer fs.lock.Unlock()
|
||||
|
||||
if _, err := os.Stat(p); os.IsNotExist(err) {
|
||||
return nil, sftp.ErrSshFxNoSuchFile
|
||||
} else if err != nil {
|
||||
fs.logger.WithField("error", errors.WithStack(err)).Error("error while processing file stat")
|
||||
|
||||
return nil, sftp.ErrSshFxFailure
|
||||
}
|
||||
|
||||
file, err := os.Open(p)
|
||||
if err != nil {
|
||||
fs.logger.WithField("source", p).WithField("error", errors.WithStack(err)).Error("could not open file for reading")
|
||||
return nil, sftp.ErrSshFxFailure
|
||||
}
|
||||
|
||||
return file, nil
|
||||
}
|
||||
|
||||
// Filewrite handles the write actions for a file on the system.
|
||||
func (fs FileSystem) Filewrite(request *sftp.Request) (io.WriterAt, error) {
|
||||
if fs.ReadOnly {
|
||||
return nil, sftp.ErrSshFxOpUnsupported
|
||||
}
|
||||
|
||||
p, err := fs.buildPath(request.Filepath)
|
||||
if err != nil {
|
||||
return nil, sftp.ErrSshFxNoSuchFile
|
||||
}
|
||||
|
||||
var l = fs.logger.WithField("source", p)
|
||||
|
||||
// If the user doesn't have enough space left on the server it should respond with an
|
||||
// error since we won't be letting them write this file to the disk.
|
||||
if !fs.HasDiskSpace(fs) {
|
||||
return nil, ErrSshQuotaExceeded
|
||||
}
|
||||
|
||||
fs.lock.Lock()
|
||||
defer fs.lock.Unlock()
|
||||
|
||||
stat, statErr := os.Stat(p)
|
||||
// If the file doesn't exist we need to create it, as well as the directory pathway
|
||||
// leading up to where that file will be created.
|
||||
if os.IsNotExist(statErr) {
|
||||
// This is a different pathway than just editing an existing file. If it doesn't exist already
|
||||
// we need to determine if this user has permission to create files.
|
||||
if !fs.can(PermissionFileCreate) {
|
||||
return nil, sftp.ErrSshFxPermissionDenied
|
||||
}
|
||||
|
||||
// Create all of the directories leading up to the location where this file is being created.
|
||||
if err := os.MkdirAll(filepath.Dir(p), 0755); err != nil {
|
||||
l.WithFields(log.Fields{
|
||||
"path": filepath.Dir(p),
|
||||
"error": errors.WithStack(err),
|
||||
}).Error("error making path for file")
|
||||
|
||||
return nil, sftp.ErrSshFxFailure
|
||||
}
|
||||
|
||||
file, err := os.Create(p)
|
||||
if err != nil {
|
||||
l.WithField("error", errors.WithStack(err)).Error("failed to create file")
|
||||
|
||||
return nil, sftp.ErrSshFxFailure
|
||||
}
|
||||
|
||||
// Not failing here is intentional. We still made the file, it is just owned incorrectly
|
||||
// and will likely cause some issues.
|
||||
if err := os.Chown(p, fs.User.Uid, fs.User.Gid); err != nil {
|
||||
l.WithField("error", errors.WithStack(err)).Warn("failed to set permissions on file")
|
||||
}
|
||||
|
||||
return file, nil
|
||||
}
|
||||
|
||||
// If the stat error isn't about the file not existing, there is some other issue
|
||||
// at play and we need to go ahead and bail out of the process.
|
||||
if statErr != nil {
|
||||
l.WithField("error", errors.WithStack(statErr)).Error("encountered error performing file stat")
|
||||
|
||||
return nil, sftp.ErrSshFxFailure
|
||||
}
|
||||
|
||||
// If we've made it here it means the file already exists and we don't need to do anything
|
||||
// fancy to handle it. Just pass over the request flags so the system knows what the end
|
||||
// goal with the file is going to be.
|
||||
//
|
||||
// But first, check that the user has permission to save modified files.
|
||||
if !fs.can(PermissionFileUpdate) {
|
||||
return nil, sftp.ErrSshFxPermissionDenied
|
||||
}
|
||||
|
||||
// Not sure this would ever happen, but lets not find out.
|
||||
if stat.IsDir() {
|
||||
return nil, sftp.ErrSshFxOpUnsupported
|
||||
}
|
||||
|
||||
file, err := os.Create(p)
|
||||
if err != nil {
|
||||
// Prevent errors if the file is deleted between the stat and this call.
|
||||
if os.IsNotExist(err) {
|
||||
return nil, sftp.ErrSSHFxNoSuchFile
|
||||
}
|
||||
|
||||
l.WithField("flags", request.Flags).WithField("error", errors.WithStack(err)).Error("failed to open existing file on system")
|
||||
return nil, sftp.ErrSshFxFailure
|
||||
}
|
||||
|
||||
// Not failing here is intentional. We still made the file, it is just owned incorrectly
|
||||
// and will likely cause some issues.
|
||||
if err := os.Chown(p, fs.User.Uid, fs.User.Gid); err != nil {
|
||||
l.WithField("error", errors.WithStack(err)).Warn("error chowning file")
|
||||
}
|
||||
|
||||
return file, nil
|
||||
}
|
||||
|
||||
// Filecmd hander for basic SFTP system calls related to files, but not anything to do with reading
|
||||
// or writing to those files.
|
||||
func (fs FileSystem) Filecmd(request *sftp.Request) error {
|
||||
if fs.ReadOnly {
|
||||
return sftp.ErrSshFxOpUnsupported
|
||||
}
|
||||
|
||||
p, err := fs.buildPath(request.Filepath)
|
||||
if err != nil {
|
||||
return sftp.ErrSshFxNoSuchFile
|
||||
}
|
||||
|
||||
var l = fs.logger.WithField("source", p)
|
||||
|
||||
var target string
|
||||
// If a target is provided in this request validate that it is going to the correct
|
||||
// location for the server. If it is not, return an operation unsupported error. This
|
||||
// is maybe not the best error response, but its not wrong either.
|
||||
if request.Target != "" {
|
||||
target, err = fs.buildPath(request.Target)
|
||||
if err != nil {
|
||||
return sftp.ErrSshFxOpUnsupported
|
||||
}
|
||||
}
|
||||
|
||||
switch request.Method {
|
||||
case "Setstat":
|
||||
if !fs.can(PermissionFileUpdate) {
|
||||
return sftp.ErrSshFxPermissionDenied
|
||||
}
|
||||
|
||||
var mode os.FileMode = 0644
|
||||
// If the client passed a valid file permission use that, otherwise use the
|
||||
// default of 0644 set above.
|
||||
if request.Attributes().FileMode().Perm() != 0000 {
|
||||
mode = request.Attributes().FileMode().Perm()
|
||||
}
|
||||
|
||||
// Force directories to be 0755
|
||||
if request.Attributes().FileMode().IsDir() {
|
||||
mode = 0755
|
||||
}
|
||||
|
||||
if err := os.Chmod(p, mode); err != nil {
|
||||
if os.IsNotExist(err) {
|
||||
return sftp.ErrSSHFxNoSuchFile
|
||||
}
|
||||
|
||||
l.WithField("error", errors.WithStack(err)).Error("failed to perform setstat on item")
|
||||
return sftp.ErrSSHFxFailure
|
||||
}
|
||||
return nil
|
||||
case "Rename":
|
||||
if !fs.can(PermissionFileUpdate) {
|
||||
return sftp.ErrSSHFxPermissionDenied
|
||||
}
|
||||
|
||||
if err := os.Rename(p, target); err != nil {
|
||||
if os.IsNotExist(err) {
|
||||
return sftp.ErrSSHFxNoSuchFile
|
||||
}
|
||||
|
||||
l.WithField("target", target).WithField("error", errors.WithStack(err)).Error("failed to rename file")
|
||||
|
||||
return sftp.ErrSshFxFailure
|
||||
}
|
||||
|
||||
break
|
||||
case "Rmdir":
|
||||
if !fs.can(PermissionFileDelete) {
|
||||
return sftp.ErrSshFxPermissionDenied
|
||||
}
|
||||
|
||||
if err := os.RemoveAll(p); err != nil {
|
||||
l.WithField("error", errors.WithStack(err)).Error("failed to remove directory")
|
||||
|
||||
return sftp.ErrSshFxFailure
|
||||
}
|
||||
|
||||
return sftp.ErrSshFxOk
|
||||
case "Mkdir":
|
||||
if !fs.can(PermissionFileCreate) {
|
||||
return sftp.ErrSshFxPermissionDenied
|
||||
}
|
||||
|
||||
if err := os.MkdirAll(p, 0755); err != nil {
|
||||
l.WithField("error", errors.WithStack(err)).Error("failed to create directory")
|
||||
|
||||
return sftp.ErrSshFxFailure
|
||||
}
|
||||
|
||||
break
|
||||
case "Symlink":
|
||||
if !fs.can(PermissionFileCreate) {
|
||||
return sftp.ErrSshFxPermissionDenied
|
||||
}
|
||||
|
||||
if err := os.Symlink(p, target); err != nil {
|
||||
l.WithField("target", target).WithField("error", errors.WithStack(err)).Error("failed to create symlink")
|
||||
|
||||
return sftp.ErrSshFxFailure
|
||||
}
|
||||
|
||||
break
|
||||
case "Remove":
|
||||
if !fs.can(PermissionFileDelete) {
|
||||
return sftp.ErrSshFxPermissionDenied
|
||||
}
|
||||
|
||||
if err := os.Remove(p); err != nil {
|
||||
if os.IsNotExist(err) {
|
||||
return sftp.ErrSSHFxNoSuchFile
|
||||
}
|
||||
|
||||
l.WithField("error", errors.WithStack(err)).Error("failed to remove a file")
|
||||
|
||||
return sftp.ErrSshFxFailure
|
||||
}
|
||||
|
||||
return sftp.ErrSshFxOk
|
||||
default:
|
||||
return sftp.ErrSshFxOpUnsupported
|
||||
}
|
||||
|
||||
var fileLocation = p
|
||||
if target != "" {
|
||||
fileLocation = target
|
||||
}
|
||||
|
||||
// Not failing here is intentional. We still made the file, it is just owned incorrectly
|
||||
// and will likely cause some issues. There is no logical check for if the file was removed
|
||||
// because both of those cases (Rmdir, Remove) have an explicit return rather than break.
|
||||
if err := os.Chown(fileLocation, fs.User.Uid, fs.User.Gid); err != nil {
|
||||
l.WithField("error", errors.WithStack(err)).Warn("error chowning file")
|
||||
}
|
||||
|
||||
return sftp.ErrSshFxOk
|
||||
}
|
||||
|
||||
// Filelist is the handler for SFTP filesystem list calls. This will handle calls to list the contents of
|
||||
// a directory as well as perform file/folder stat calls.
|
||||
func (fs FileSystem) Filelist(request *sftp.Request) (sftp.ListerAt, error) {
|
||||
p, err := fs.buildPath(request.Filepath)
|
||||
if err != nil {
|
||||
return nil, sftp.ErrSshFxNoSuchFile
|
||||
}
|
||||
|
||||
switch request.Method {
|
||||
case "List":
|
||||
if !fs.can(PermissionFileRead) {
|
||||
return nil, sftp.ErrSshFxPermissionDenied
|
||||
}
|
||||
|
||||
files, err := ioutil.ReadDir(p)
|
||||
if err != nil {
|
||||
fs.logger.WithField("error", errors.WithStack(err)).Error("error while listing directory")
|
||||
|
||||
return nil, sftp.ErrSshFxFailure
|
||||
}
|
||||
|
||||
return ListerAt(files), nil
|
||||
case "Stat":
|
||||
if !fs.can(PermissionFileRead) {
|
||||
return nil, sftp.ErrSshFxPermissionDenied
|
||||
}
|
||||
|
||||
s, err := os.Stat(p)
|
||||
if os.IsNotExist(err) {
|
||||
return nil, sftp.ErrSshFxNoSuchFile
|
||||
} else if err != nil {
|
||||
fs.logger.WithField("source", p).WithField("error", errors.WithStack(err)).Error("error performing stat on file")
|
||||
|
||||
return nil, sftp.ErrSshFxFailure
|
||||
}
|
||||
|
||||
return ListerAt([]os.FileInfo{s}), nil
|
||||
default:
|
||||
// Before adding readlink support we need to evaluate any potential security risks
|
||||
// as a result of navigating around to a location that is outside the home directory
|
||||
// for the logged in user. I don't foresee it being much of a problem, but I do want to
|
||||
// check it out before slapping some code here. Until then, we'll just return an
|
||||
// unsupported response code.
|
||||
return nil, sftp.ErrSshFxOpUnsupported
|
||||
}
|
||||
}
|
||||
|
||||
// Determines if a user has permission to perform a specific action on the SFTP server. These
|
||||
// permissions are defined and returned by the Panel API.
|
||||
func (fs FileSystem) can(permission string) bool {
|
||||
// Server owners and super admins have their permissions returned as '[*]' via the Panel
|
||||
// API, so for the sake of speed do an initial check for that before iterating over the
|
||||
// entire array of permissions.
|
||||
if len(fs.Permissions) == 1 && fs.Permissions[0] == "*" {
|
||||
return true
|
||||
}
|
||||
|
||||
// Not the owner or an admin, loop over the permissions that were returned to determine
|
||||
// if they have the passed permission.
|
||||
for _, p := range fs.Permissions {
|
||||
if p == permission {
|
||||
return true
|
||||
}
|
||||
}
|
||||
|
||||
return false
|
||||
}
|
||||
sftp/lister.go (new file, 22 lines)
@@ -0,0 +1,22 @@
package sftp

import (
	"io"
	"os"
)

type ListerAt []os.FileInfo

// Returns the number of entries copied and an io.EOF error if we made it to the end of the file list.
// Take a look at the pkg/sftp godoc for more information about how this function should work.
func (l ListerAt) ListAt(f []os.FileInfo, offset int64) (int, error) {
	if offset >= int64(len(l)) {
		return 0, io.EOF
	}

	if n := copy(f, l[offset:]); n < len(f) {
		return n, io.EOF
	} else {
		return n, nil
	}
}
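As a hypothetical illustration of the contract above, pkg/sftp keeps calling ListAt with a growing offset until io.EOF comes back. The snippet below re-states the tiny type outside the package and pages through a directory listing two entries at a time.

```go
package main

import (
	"fmt"
	"io"
	"io/ioutil"
	"os"
)

// ListerAt here is a self-contained re-statement of the type in sftp/lister.go,
// purely for demonstration.
type ListerAt []os.FileInfo

func (l ListerAt) ListAt(f []os.FileInfo, offset int64) (int, error) {
	if offset >= int64(len(l)) {
		return 0, io.EOF
	}
	if n := copy(f, l[offset:]); n < len(f) {
		return n, io.EOF
	} else {
		return n, nil
	}
}

func main() {
	files, _ := ioutil.ReadDir("/tmp") // any directory works for the demo
	l := ListerAt(files)

	// Page through the listing two entries at a time, the same way an SFTP
	// client drives ListAt with an increasing offset.
	buf := make([]os.FileInfo, 2)
	for offset := int64(0); ; {
		n, err := l.ListAt(buf, offset)
		for _, fi := range buf[:n] {
			fmt.Println(fi.Name())
		}
		offset += int64(n)
		if err == io.EOF {
			break
		}
	}
}
```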
sftp/server.go (308 lines)
@@ -1,118 +1,238 @@
|
||||
package sftp
|
||||
|
||||
import (
|
||||
"crypto/rand"
|
||||
"crypto/rsa"
|
||||
"crypto/x509"
|
||||
"encoding/pem"
|
||||
"fmt"
|
||||
"github.com/apex/log"
|
||||
"github.com/pkg/errors"
|
||||
"github.com/pterodactyl/sftp-server"
|
||||
"github.com/patrickmn/go-cache"
|
||||
"github.com/pkg/sftp"
|
||||
"github.com/pterodactyl/wings/api"
|
||||
"github.com/pterodactyl/wings/config"
|
||||
"github.com/pterodactyl/wings/server"
|
||||
"go.uber.org/zap"
|
||||
"regexp"
|
||||
"golang.org/x/crypto/ssh"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"net"
|
||||
"os"
|
||||
"path"
|
||||
"strings"
|
||||
"time"
|
||||
)
|
||||
|
||||
func Initialize(config *config.Configuration) error {
|
||||
c := &sftp_server.Server{
|
||||
User: sftp_server.SftpUser{
|
||||
Uid: config.System.User.Uid,
|
||||
Gid: config.System.User.Gid,
|
||||
},
|
||||
Settings: sftp_server.Settings{
|
||||
BasePath: config.System.Data,
|
||||
ReadOnly: config.System.Sftp.ReadOnly,
|
||||
BindAddress: config.System.Sftp.Address,
|
||||
BindPort: config.System.Sftp.Port,
|
||||
},
|
||||
CredentialValidator: validateCredentials,
|
||||
PathValidator: validatePath,
|
||||
DiskSpaceValidator: validateDiskSpace,
|
||||
}
|
||||
type Settings struct {
|
||||
BasePath string
|
||||
ReadOnly bool
|
||||
BindPort int
|
||||
BindAddress string
|
||||
}
|
||||
|
||||
if err := sftp_server.New(c); err != nil {
|
||||
return err
|
||||
}
|
||||
type User struct {
|
||||
Uid int
|
||||
Gid int
|
||||
}
|
||||
|
||||
c.ConfigureLogger(func() *zap.SugaredLogger {
|
||||
return zap.S().Named("sftp")
|
||||
})
|
||||
type Server struct {
|
||||
cache *cache.Cache
|
||||
|
||||
// Initialize the SFTP server in a background thread since this is
|
||||
// a long running operation.
|
||||
go func(instance *sftp_server.Server) {
|
||||
if err := c.Initialize(); err != nil {
|
||||
log.WithField("subsystem", "sftp").WithField("error", errors.WithStack(err)).Error("failed to initialize SFTP subsystem")
|
||||
}
|
||||
}(c)
|
||||
Settings Settings
|
||||
User User
|
||||
|
||||
PathValidator func(fs FileSystem, p string) (string, error)
|
||||
DiskSpaceValidator func(fs FileSystem) bool
|
||||
|
||||
// Validator function that is called when a user connects to the server. This should
|
||||
// check against whatever system is desired to confirm if the given username and password
|
||||
// combination is valid. If so, should return an authentication response.
|
||||
CredentialValidator func(r api.SftpAuthRequest) (*api.SftpAuthResponse, error)
|
||||
}
|
||||
|
||||
// Create a new server configuration instance.
|
||||
func New(c *Server) error {
|
||||
c.cache = cache.New(5*time.Minute, 10*time.Minute)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func validatePath(fs sftp_server.FileSystem, p string) (string, error) {
|
||||
s := server.GetServers().Find(func(server *server.Server) bool {
|
||||
return server.Id() == fs.UUID
|
||||
})
|
||||
// Initialize the SFTP server and add a persistent listener to handle inbound SFTP connections.
|
||||
func (c *Server) Initialize() error {
|
||||
serverConfig := &ssh.ServerConfig{
|
||||
NoClientAuth: false,
|
||||
MaxAuthTries: 6,
|
||||
PasswordCallback: func(conn ssh.ConnMetadata, pass []byte) (*ssh.Permissions, error) {
|
||||
resp, err := c.CredentialValidator(api.SftpAuthRequest{
|
||||
User: conn.User(),
|
||||
Pass: string(pass),
|
||||
IP: conn.RemoteAddr().String(),
|
||||
SessionID: conn.SessionID(),
|
||||
ClientVersion: conn.ClientVersion(),
|
||||
})
|
||||
|
||||
if s == nil {
|
||||
return "", errors.New("no server found with that UUID")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
sshPerm := &ssh.Permissions{
|
||||
Extensions: map[string]string{
|
||||
"uuid": resp.Server,
|
||||
"user": conn.User(),
|
||||
"permissions": strings.Join(resp.Permissions, ","),
|
||||
},
|
||||
}
|
||||
|
||||
return sshPerm, nil
|
||||
},
|
||||
}
|
||||
|
||||
return s.Filesystem.SafePath(p)
|
||||
}
|
||||
|
||||
func validateDiskSpace(fs sftp_server.FileSystem) bool {
|
||||
s := server.GetServers().Find(func(server *server.Server) bool {
|
||||
return server.Id() == fs.UUID
|
||||
})
|
||||
|
||||
if s == nil {
|
||||
return false
|
||||
if _, err := os.Stat(path.Join(c.Settings.BasePath, ".sftp/id_rsa")); os.IsNotExist(err) {
|
||||
if err := c.generatePrivateKey(); err != nil {
|
||||
return err
|
||||
}
|
||||
} else if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return s.Filesystem.HasSpaceAvailable()
|
||||
}
|
||||
|
||||
var validUsernameRegexp = regexp.MustCompile(`^(?i)(.+)\.([a-z0-9]{8})$`)
|
||||
|
||||
// Validates a set of credentials for a SFTP login aganist Pterodactyl Panel and returns
|
||||
// the server's UUID if the credentials were valid.
|
||||
func validateCredentials(c sftp_server.AuthenticationRequest) (*sftp_server.AuthenticationResponse, error) {
|
||||
log.WithFields(log.Fields{"subsystem": "sftp", "username": c.User}).Debug("validating credentials for SFTP connection")
|
||||
|
||||
f := log.Fields{
|
||||
"subsystem": "sftp",
|
||||
"username": c.User,
|
||||
"ip": c.IP,
|
||||
}
|
||||
|
||||
// If the username doesn't meet the expected format that the Panel would even recognize just go ahead
|
||||
// and bail out of the process here to avoid accidentially brute forcing the panel if a bot decides
|
||||
// to connect to spam username attempts.
|
||||
if !validUsernameRegexp.MatchString(c.User) {
|
||||
log.WithFields(f).Warn("failed to validate user credentials (invalid format)")
|
||||
|
||||
return nil, new(sftp_server.InvalidCredentialsError)
|
||||
}
|
||||
|
||||
resp, err := api.NewRequester().ValidateSftpCredentials(c)
|
||||
privateBytes, err := ioutil.ReadFile(path.Join(c.Settings.BasePath, ".sftp/id_rsa"))
|
||||
if err != nil {
|
||||
if sftp_server.IsInvalidCredentialsError(err) {
|
||||
log.WithFields(f).Warn("failed to validate user credentials (invalid username or password)")
|
||||
} else {
|
||||
log.WithFields(f).Error("encountered an error while trying to validate user credentials")
|
||||
return err
|
||||
}
|
||||
|
||||
private, err := ssh.ParsePrivateKey(privateBytes)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Add our private key to the server configuration.
|
||||
serverConfig.AddHostKey(private)
|
||||
|
||||
listener, err := net.Listen("tcp", fmt.Sprintf("%s:%d", c.Settings.BindAddress, c.Settings.BindPort))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
log.WithField("host", c.Settings.BindAddress).WithField("port", c.Settings.BindPort).Info("sftp subsystem listening for connections")
|
||||
|
||||
for {
|
||||
conn, _ := listener.Accept()
|
||||
if conn != nil {
|
||||
go c.AcceptInboundConnection(conn, serverConfig)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Handles an inbound connection to the instance and determines if we should serve the request
|
||||
// or not.
|
||||
func (c Server) AcceptInboundConnection(conn net.Conn, config *ssh.ServerConfig) {
|
||||
defer conn.Close()
|
||||
|
||||
// Before beginning a handshake must be performed on the incoming net.Conn
|
||||
sconn, chans, reqs, err := ssh.NewServerConn(conn, config)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
defer sconn.Close()
|
||||
|
||||
go ssh.DiscardRequests(reqs)
|
||||
|
||||
for newChannel := range chans {
|
||||
// If its not a session channel we just move on because its not something we
|
||||
// know how to handle at this point.
|
||||
if newChannel.ChannelType() != "session" {
|
||||
newChannel.Reject(ssh.UnknownChannelType, "unknown channel type")
|
||||
continue
|
||||
}
|
||||
|
||||
return resp, err
|
||||
channel, requests, err := newChannel.Accept()
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
|
||||
// Channels have a type that is dependent on the protocol. For SFTP this is "subsystem"
|
||||
// with a payload that (should) be "sftp". Discard anything else we receive ("pty", "shell", etc)
|
||||
go func(in <-chan *ssh.Request) {
|
||||
for req := range in {
|
||||
ok := false
|
||||
|
||||
switch req.Type {
|
||||
case "subsystem":
|
||||
if string(req.Payload[4:]) == "sftp" {
|
||||
ok = true
|
||||
}
|
||||
}
|
||||
|
||||
req.Reply(ok, nil)
|
||||
}
|
||||
}(requests)
|
||||
|
||||
// Configure the user's home folder for the rest of the request cycle.
|
||||
if sconn.Permissions.Extensions["uuid"] == "" {
|
||||
continue
|
||||
}
|
||||
|
||||
// Create a new handler for the currently logged in user's server.
|
||||
fs := c.createHandler(sconn)
|
||||
|
||||
// Create the server instance for the channel using the filesystem we created above.
|
||||
server := sftp.NewRequestServer(channel, fs)
|
||||
|
||||
if err := server.Serve(); err == io.EOF {
|
||||
server.Close()
|
||||
}
|
||||
}
|
||||
|
||||
}

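The `req.Payload[4:]` comparison above relies on how SSH encodes a "subsystem" request: per RFC 4254 the payload is a single length-prefixed string, so the first four bytes carry the name's length and the remaining bytes are the name itself. A minimal, illustrative sketch (not part of the commit) of that layout:

```go
// Illustrative only: build a "subsystem" request payload the way an SSH client
// encodes it (uint32 big-endian length, then the subsystem name), and show why
// the handler above can simply compare req.Payload[4:] against "sftp".
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	name := "sftp"

	payload := make([]byte, 4+len(name))
	binary.BigEndian.PutUint32(payload, uint32(len(name)))
	copy(payload[4:], name)

	// The server-side check from the diff reduces to this comparison.
	fmt.Println(string(payload[4:]) == "sftp") // true
}
```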
// Creates a new SFTP handler for a given server. The directory argument should
// be the base directory for a server. All actions done on the server will be
// relative to that directory, and the user will not be able to escape out of it.
func (c Server) createHandler(sc *ssh.ServerConn) sftp.Handlers {
	p := FileSystem{
		UUID:          sc.Permissions.Extensions["uuid"],
		Permissions:   strings.Split(sc.Permissions.Extensions["permissions"], ","),
		ReadOnly:      c.Settings.ReadOnly,
		Cache:         c.cache,
		User:          c.User,
		HasDiskSpace:  c.DiskSpaceValidator,
		PathValidator: c.PathValidator,
		logger: log.WithFields(log.Fields{
			"subsystem": "sftp",
			"username":  sc.User(),
			"ip":        sc.RemoteAddr(),
		}),
	}

	return sftp.Handlers{
		FileGet:  p,
		FilePut:  p,
		FileCmd:  p,
		FileList: p,
	}
}

// Generates a private key that will be used by the SFTP server.
func (c Server) generatePrivateKey() error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}

	if err := os.MkdirAll(path.Join(c.Settings.BasePath, ".sftp"), 0755); err != nil {
		return err
	}

	o, err := os.OpenFile(path.Join(c.Settings.BasePath, ".sftp/id_rsa"), os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0600)
	if err != nil {
		return err
	}
	defer o.Close()

	pkey := &pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	}

	if err := pem.Encode(o, pkey); err != nil {
		return err
	}

	return nil
}

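The key written by `generatePrivateKey` is read back in `Initialize` via `ssh.ParsePrivateKey`. A small standalone sketch (not from the commit) showing that the PKCS#1/PEM encoding used above is accepted by `golang.org/x/crypto/ssh` as a host key:

```go
// Illustrative sketch: generate an RSA key, encode it the same way generatePrivateKey
// does, and confirm ssh.ParsePrivateKey yields a usable host key signer.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	pemBytes := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})

	signer, err := ssh.ParsePrivateKey(pemBytes)
	if err != nil {
		panic(err)
	}

	fmt.Println("host key type:", signer.PublicKey().Type()) // ssh-rsa
}
```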
97
sftp/sftp.go
Normal file
97
sftp/sftp.go
Normal file
@@ -0,0 +1,97 @@
package sftp

import (
	"github.com/apex/log"
	"github.com/pkg/errors"
	"github.com/pterodactyl/wings/api"
	"github.com/pterodactyl/wings/config"
	"github.com/pterodactyl/wings/server"
)

var noMatchingServerError = errors.New("no matching server with that UUID was found")

func Initialize(config config.SystemConfiguration) error {
	s := &Server{
		User: User{
			Uid: config.User.Uid,
			Gid: config.User.Gid,
		},
		Settings: Settings{
			BasePath:    config.Data,
			ReadOnly:    config.Sftp.ReadOnly,
			BindAddress: config.Sftp.Address,
			BindPort:    config.Sftp.Port,
		},
		CredentialValidator: validateCredentials,
		PathValidator:       validatePath,
		DiskSpaceValidator:  validateDiskSpace,
	}

	if err := New(s); err != nil {
		return errors.WithStack(err)
	}

	// Initialize the SFTP server in a background thread since this is
	// a long running operation.
	go func(s *Server) {
		if err := s.Initialize(); err != nil {
			log.WithField("subsystem", "sftp").WithField("error", errors.WithStack(err)).Error("failed to initialize SFTP subsystem")
		}
	}(s)

	return nil
}

func validatePath(fs FileSystem, p string) (string, error) {
	s := server.GetServers().Find(func(server *server.Server) bool {
		return server.Id() == fs.UUID
	})

	if s == nil {
		return "", noMatchingServerError
	}

	return s.Filesystem.SafePath(p)
}

func validateDiskSpace(fs FileSystem) bool {
	s := server.GetServers().Find(func(server *server.Server) bool {
		return server.Id() == fs.UUID
	})

	if s == nil {
		return false
	}

	return s.Filesystem.HasSpaceAvailable(true)
}

// Validates a set of credentials for an SFTP login against Pterodactyl Panel and returns
// the server's UUID if the credentials were valid.
func validateCredentials(c api.SftpAuthRequest) (*api.SftpAuthResponse, error) {
	f := log.Fields{"subsystem": "sftp", "username": c.User, "ip": c.IP}

	log.WithFields(f).Debug("validating credentials for SFTP connection")
	resp, err := api.NewRequester().ValidateSftpCredentials(c)
	if err != nil {
		if api.IsInvalidCredentialsError(err) {
			log.WithFields(f).Warn("failed to validate user credentials (invalid username or password)")
		} else {
			log.WithFields(f).Error("encountered an error while trying to validate user credentials")
		}

		return resp, err
	}

	s := server.GetServers().Find(func(server *server.Server) bool {
		return server.Id() == resp.Server
	})

	if s == nil {
		return resp, noMatchingServerError
	}

	s.Log().WithFields(f).Debug("credentials successfully validated and matched user to server instance")

	return resp, err
}

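For orientation, the exported `Initialize` above is the entry point the rest of the daemon is expected to call at boot so the SFTP listener comes up with the configured data path, bind address, and port. A hedged sketch of such a call site; the `config.Get()` accessor and the exact wiring are assumptions, not shown in this diff:

```go
// Hypothetical boot-time wiring; the real call site and configuration loading in wings
// may differ. Assumes config.Get() returns the loaded daemon configuration with a
// System field of type config.SystemConfiguration.
package main

import (
	"github.com/apex/log"

	"github.com/pterodactyl/wings/config"
	"github.com/pterodactyl/wings/sftp"
)

func main() {
	if err := sftp.Initialize(config.Get().System); err != nil {
		log.WithField("error", err).Fatal("failed to initialize the SFTP subsystem")
	}

	// Block forever; the SFTP listener itself runs in a goroutine started by Initialize.
	select {}
}
```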
13
templates/logrotate.tpl
Normal file
13
templates/logrotate.tpl
Normal file
@@ -0,0 +1,13 @@
{{.LogDirectory}}/wings.log {
    size 10M
    compress
    delaycompress
    dateext
    maxage 7
    missingok
    notifempty
    create 0640 {{.User.Uid}} {{.User.Gid}}
    postrotate
        killall -SIGHUP wings
    endscript
}
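The template only takes effect once it is rendered with the daemon's log directory and runtime user and installed under `/etc/logrotate.d/`. A minimal sketch of that rendering with `text/template`; the struct shape, sample values, and output path are assumptions for illustration, not taken from this commit:

```go
// Illustrative sketch: render templates/logrotate.tpl with hypothetical values and
// write the result to /etc/logrotate.d/wings so logrotate signals wings via SIGHUP.
package main

import (
	"os"
	"text/template"
)

type logrotateData struct {
	LogDirectory string
	User         struct{ Uid, Gid int }
}

func main() {
	tmpl, err := template.ParseFiles("templates/logrotate.tpl")
	if err != nil {
		panic(err)
	}

	data := logrotateData{LogDirectory: "/var/log/pterodactyl"} // hypothetical path
	data.User.Uid = 988                                         // hypothetical uid
	data.User.Gid = 988                                         // hypothetical gid

	f, err := os.Create("/etc/logrotate.d/wings")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	if err := tmpl.Execute(f, data); err != nil {
		panic(err)
	}
}
```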