Compare commits

...

29 Commits

Author SHA1 Message Date
Dane Everitt
930abfb4a7 Update CHANGELOG.md 2021-08-02 20:17:02 -07:00
Dane Everitt
ec57f43dd4 Add deprecation flag on the directory, don't remove it entirely 2021-08-02 20:15:25 -07:00
Dane Everitt
a33ac304ca Perhaps don't break _everything_ on people. 2021-08-02 20:02:27 -07:00
Matthew Penner
2a370a8776 downloader: fix internal range check 2021-08-02 15:16:38 -06:00
Matthew Penner
3c54c1f840 break everything
- upgrade dependencies
- run gofmt and goimports to organize code
- fix typos
- other small tweaks
2021-08-02 15:07:00 -06:00
Matthew Penner
4a5e0bb86f docker: fix build 2021-07-17 10:40:14 -06:00
Matthew Penner
e09ee449d1 docker: change final image from busybox to distroless
This should resolve any issues with missing ca-certificates or tzdata.

Fixes https://github.com/pterodactyl/panel/issues/3442
2021-07-17 10:34:31 -06:00
Matthew Penner
7a24e976ef feat(logrotate): fix config with bad user
fixes https://github.com/pterodactyl/panel/issues/3452
2021-07-17 10:25:33 -06:00
Matthew Penner
31ff3f8b56 server(fs): keep file mode when extracting archive 2021-07-15 15:37:38 -06:00
Matthew Penner
f422081695 change minimum go version to 1.16, add multiplatform docker image 2021-07-12 11:06:22 -06:00
Matthew Penner
29b2d6826a archive: fix socket files aborting backups 2021-07-12 10:17:56 -06:00
Matthew Penner
73570c7144 installer: support 'start_on_completion' (#96) 2021-07-04 15:08:05 -07:00
kaziu687
c0a487c47e Fix environment variables with the same prefix being skipped unintentionally (#98)
If you have two env variables (for example ONE_VARIABLE and ONE_VARIABLE_NAME), ONE_VARIABLE_NAME starts with the prefix ONE_VARIABLE and would be skipped unintentionally. (A sketch of the underlying prefix-match pitfall follows the commit list below.)

Co-authored-by: Jakob <dev@schrej.net>
2021-07-04 15:07:46 -07:00
Dane Everitt
1c8efa2fd0 Update codeql-analysis.yml 2021-07-04 15:03:39 -07:00
Dane Everitt
b618ec8877 Bump PID limit to 512 by default 2021-06-28 17:52:42 -07:00
Dane Everitt
08a7ccd175 Update CHANGELOG.md 2021-06-20 18:07:20 -07:00
Dane Everitt
8336f6ff29 Apply container limits to install containers, defaulting to minimums if the server's resources are set too low 2021-06-20 17:21:51 -07:00
Dane Everitt
e0078eee0a [security] enforce process limits at a per-container level to avoid abusive clients impacting other instances 2021-06-20 16:54:00 -07:00
Dane Everitt
c0063d2c61 Update CHANGELOG.md 2021-06-05 08:50:26 -07:00
Dane Everitt
f74a74cd5e Merge pull request #93 from JulienTant/develop
Add decompress tests
2021-06-05 08:46:14 -07:00
Dane Everitt
8055d1355d Update CHANGELOG.md 2021-05-02 15:52:34 -07:00
Dane Everitt
c1ff32ad32 Update test based on corrected error response logic 2021-05-02 15:43:22 -07:00
Dane Everitt
49dd1f7bde Better support for retrying failed requests with the API
Also implements more logical error returns from the Get/Post functions in the client, rather than making the developer call r.Error() on responses.
2021-05-02 15:41:02 -07:00
Dane Everitt
3f47bfd292 Add backoff retries to API calls from Wings 2021-05-02 15:16:30 -07:00
Dane Everitt
ddfd6d9cce Modify backup process to utilize contexts and exponential backoffs
If a request to upload a file part to S3 fails with a 5xx error, the upload is retried with an exponential backoff until up to a minute has been spent trying to reach the endpoint. (A sketch of this retry pattern follows the commit list below.)

This should resolve temporary resolution issues with URLs and certain S3-compatible systems such as B2 that sometimes return a 5xx error and just need a retry to be successful.

Also uses the server context to ensure backups are terminated when a server is deleted, and replaces the HTTP call that had no timeout with a 2-hour timeout to account for connections as slow as 10Mbps uploading a huge file.
2021-05-02 12:28:36 -07:00
Dane Everitt
da74ac8291 Trim "~" from container prefix; closes pterodactyl/panel#3310 2021-05-02 11:00:10 -07:00
Dane Everitt
3fda548541 Update CHANGELOG.md 2021-04-27 19:07:31 -07:00
Julien Tant
35b2c420ec add decompress tests 2021-04-25 16:44:54 -07:00
Dane Everitt
daaef5044e Correctly determine name for archive files when decompressing; closes pterodactyl/panel#3296 2021-04-25 15:36:00 -07:00
76 changed files with 1883 additions and 885 deletions
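The environment-variable fix above (c0a487c47e) addresses a classic prefix-match pitfall when scanning KEY=VALUE entries: checking only whether an entry starts with the variable name also matches longer names that share that prefix. The sketch below illustrates the safer check; the function name and surrounding code are illustrative, not the actual Wings implementation.

package main

import (
	"fmt"
	"strings"
)

// hasVariable reports whether the variable named key is already present in a
// list of KEY=VALUE entries. Matching on "KEY=" rather than just "KEY" ensures
// that ONE_VARIABLE does not shadow ONE_VARIABLE_NAME (or vice versa).
func hasVariable(entries []string, key string) bool {
	prefix := key + "="
	for _, e := range entries {
		if strings.HasPrefix(e, prefix) {
			return true
		}
	}
	return false
}

func main() {
	entries := []string{"ONE_VARIABLE=foo"}
	fmt.Println(hasVariable(entries, "ONE_VARIABLE"))      // true
	fmt.Println(hasVariable(entries, "ONE_VARIABLE_NAME")) // false, so it is no longer skipped
}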
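The backup change above (ddfd6d9cce) describes retrying failed S3 part uploads with an exponential backoff for up to a minute. Below is a rough sketch of that pattern using the cenkalti/backoff/v4 package added to go.mod in this changeset; the function name, the upload callback, and the exact status-code handling are assumptions for illustration, not the actual backup code.

package sketch

import (
	"context"
	"errors"
	"net/http"
	"time"

	"github.com/cenkalti/backoff/v4"
)

// uploadPartWithRetry retries a single part upload with exponential backoff
// for up to one minute, and stops early if the server's context is cancelled.
func uploadPartWithRetry(ctx context.Context, upload func() (*http.Response, error)) error {
	policy := backoff.NewExponentialBackOff()
	policy.MaxElapsedTime = time.Minute

	return backoff.Retry(func() error {
		res, err := upload()
		if err != nil {
			return err // transient network error: retry
		}
		defer res.Body.Close()
		if res.StatusCode >= 500 {
			return errors.New("upload: endpoint returned a 5xx response") // retry
		}
		if res.StatusCode >= 400 {
			// Client errors will not succeed on a retry; give up immediately.
			return backoff.Permanent(errors.New("upload: request rejected"))
		}
		return nil
	}, backoff.WithContext(policy, ctx))
}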

4
.github/FUNDING.yml vendored
View File

@@ -1,2 +1,2 @@
github: [DaneEveritt] github: [ DaneEveritt ]
custom: ["https://paypal.me/PterodactylSoftware"] custom: [ "https://paypal.me/PterodactylSoftware" ]

View File

@@ -2,17 +2,17 @@ name: Run Tests
on: on:
push: push:
branches: branches:
- 'develop' - develop
pull_request: pull_request:
branches: branches:
- 'develop' - develop
jobs: jobs:
build: build:
strategy: strategy:
fail-fast: false fail-fast: false
matrix: matrix:
os: [ ubuntu-20.04 ] os: [ ubuntu-20.04 ]
go: [ '^1.15', '^1.16' ] go: [ '^1.16' ]
goos: [ linux ] goos: [ linux ]
goarch: [ amd64, arm64 ] goarch: [ amd64, arm64 ]
runs-on: ${{ matrix.os }} runs-on: ${{ matrix.os }}
@@ -60,7 +60,7 @@ jobs:
run: go test ./... run: go test ./...
- name: Upload Artifact - name: Upload Artifact
uses: actions/upload-artifact@v2 uses: actions/upload-artifact@v2
if: ${{ matrix.go == '^1.15' && (github.ref == 'refs/heads/develop' || github.event_name == 'pull_request') }} if: ${{ matrix.go == '^1.16' && (github.ref == 'refs/heads/develop' || github.event_name == 'pull_request') }}
with: with:
name: wings_${{ matrix.goos }}_${{ matrix.goarch }} name: wings_${{ matrix.goos }}_${{ matrix.goarch }}
path: build/wings_${{ matrix.goos }}_${{ matrix.goarch }} path: build/wings_${{ matrix.goos }}_${{ matrix.goarch }}

View File

@@ -2,30 +2,29 @@ name: CodeQL
on: on:
push: push:
branches: branches:
- 'develop' - develop
pull_request: pull_request:
branches: branches:
- 'develop' - develop
schedule: schedule:
- cron: '0 9 * * 4' - cron: '0 9 * * 4'
jobs: jobs:
analyze: analyze:
name: Analyze name: Analyze
runs-on: ubuntu-20.04 runs-on: ubuntu-latest
permissions:
actions: read
contents: read
security-events: write
strategy: strategy:
fail-fast: false fail-fast: false
matrix: matrix:
language: language: [ 'go' ]
- go
steps: steps:
- name: Code Checkout - uses: actions/checkout@v2
uses: actions/checkout@v2
- name: Checkout Head
run: git checkout HEAD^2
if: ${{ github.event_name == 'pull_request' }}
- name: Initialize CodeQL - name: Initialize CodeQL
uses: github/codeql-action/init@v1 uses: github/codeql-action/init@v1
with: with:
languages: ${{ matrix.language }} languages: ${{ matrix.language }}
- name: Perform CodeQL Analysis - uses: github/codeql-action/autobuild@v1
uses: github/codeql-action/analyze@v1 - uses: github/codeql-action/analyze@v1

View File

@@ -2,8 +2,7 @@ name: Publish Docker Image
on: on:
push: push:
branches: branches:
- 'develop' - develop
tags: tags:
- 'v*' - 'v*'
jobs: jobs:
@@ -44,6 +43,7 @@ jobs:
build-args: | build-args: |
VERSION=${{ steps.build_info.outputs.version_tag }} VERSION=${{ steps.build_info.outputs.version_tag }}
labels: ${{ steps.docker_meta.outputs.labels }} labels: ${{ steps.docker_meta.outputs.labels }}
platforms: linux/amd64,linux/arm64
push: true push: true
tags: ${{ steps.docker_meta.outputs.tags }} tags: ${{ steps.docker_meta.outputs.tags }}
- name: Release Development Build - name: Release Development Build
@@ -53,5 +53,6 @@ jobs:
build-args: | build-args: |
VERSION=dev-${{ steps.build_info.outputs.short_sha }} VERSION=dev-${{ steps.build_info.outputs.short_sha }}
labels: ${{ steps.docker_meta.outputs.labels }} labels: ${{ steps.docker_meta.outputs.labels }}
platforms: linux/amd64,linux/arm64
push: ${{ github.event_name != 'pull_request' }} push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.docker_meta.outputs.tags }} tags: ${{ steps.docker_meta.outputs.tags }}

View File

@@ -11,7 +11,7 @@ jobs:
uses: actions/checkout@v2 uses: actions/checkout@v2
- uses: actions/setup-go@v2 - uses: actions/setup-go@v2
with: with:
go-version: '^1.15' go-version: '^1.16'
- name: Build - name: Build
env: env:
REF: ${{ github.ref }} REF: ${{ github.ref }}

View File

@@ -1,5 +1,47 @@
# Changelog # Changelog
## v1.4.6
### Fixed
* Environment variables starting with the same prefix no longer get merged into a single environment variable value (skipping all but the first).
* The `start_on_completion` flag for server installs will now properly start the server.
* Fixes socket files unintentionally causing backups to be aborted.
* Files extracted from a backup now have their prior file mode properly set on the restored files, rather than defaulting to 0644.
* Fixes logrotate issues due to a bad user configuration on some systems.
### Updated
* The minimum Go version required to compile Wings is now `go1.16`.
### Deprecated
> Both of these deprecations will be removed in `Wings@2.0.0`.
* The `Server.Id()` method has been deprecated in favor of `Server.ID()`.
* The `directory` field on the `/api/servers/:server/files/pull` endpoint is deprecated and should be updated to use `root` instead for consistency with other endpoints.
## v1.4.5
### Changed
* Upped the process limit for a container from `256` to `512` in order to address edge cases for some games that spawn a lot of processes.
## v1.4.4
### Added
* **[security]** Adds support for limiting the total number of pids any one container can have active at once to prevent malicious users from impacting other instances on the same node.
* Server install containers now use the limits assigned to the server, or a globally defined minimum amount of memory and CPU rather than having unlimited resources.
## v1.4.3
This build was created to address `CVE-2021-33196` in `Go` which requires a new binary
be built on the latest `go1.15` version.
## v1.4.2
### Fixed
* Fixes the `~` character not being properly trimmed from container image names when creating a new server.
### Changed
* Implemented exponential backoff for S3 uploads when working with backups. This should resolve many issues with external S3-compatible systems that sometimes return 5xx-level errors that should be re-attempted automatically.
* Implements exponential backoff behavior for all API calls to the Panel that do not immediately return a 401, 403, or 429 error response. This should address fragility in some API calls and reduce random call failures due to connection drops or random DNS resolution errors.
## v1.4.1
### Fixed
* Fixes a bug that would cause the file unarchiving process to put all files in the base directory rather than the directory in which the files should be located.
## v1.4.0 ## v1.4.0
### Fixed ### Fixed
* **[Breaking]** Fixes `/api/servers` and `/api/servers/:server` not properly returning all of the relevant server information and resource usage. * **[Breaking]** Fixes `/api/servers` and `/api/servers/:server` not properly returning all of the relevant server information and resource usage.

View File

@@ -1,5 +1,5 @@
# Stage 1 (Build) # Stage 1 (Build)
FROM golang:1.15-alpine3.12 AS builder FROM --platform=$BUILDPLATFORM golang:1.16-alpine AS builder
ARG VERSION ARG VERSION
RUN apk add --update --no-cache git make upx RUN apk add --update --no-cache git make upx
@@ -14,9 +14,10 @@ RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build \
-o wings \ -o wings \
wings.go wings.go
RUN upx wings RUN upx wings
RUN echo "ID=\"distroless\"" > /etc/os-release
# Stage 2 (Final) # Stage 2 (Final)
FROM busybox:1.33.0 FROM gcr.io/distroless/static:latest
RUN echo "ID=\"busybox\"" > /etc/os-release COPY --from=builder /etc/os-release /etc/os-release
COPY --from=builder /app/wings /usr/bin/ COPY --from=builder /app/wings /usr/bin/
CMD [ "wings", "--config", "/etc/pterodactyl/config.yml" ] CMD [ "/usr/bin/wings", "--config", "/etc/pterodactyl/config.yml" ]

View File

@@ -14,8 +14,9 @@ import (
"github.com/AlecAivazis/survey/v2" "github.com/AlecAivazis/survey/v2"
"github.com/AlecAivazis/survey/v2/terminal" "github.com/AlecAivazis/survey/v2/terminal"
"github.com/pterodactyl/wings/config"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"github.com/pterodactyl/wings/config"
) )
var ( var (

View File

@@ -21,11 +21,12 @@ import (
"github.com/docker/docker/api/types" "github.com/docker/docker/api/types"
"github.com/docker/docker/pkg/parsers/kernel" "github.com/docker/docker/pkg/parsers/kernel"
"github.com/docker/docker/pkg/parsers/operatingsystem" "github.com/docker/docker/pkg/parsers/operatingsystem"
"github.com/spf13/cobra"
"github.com/pterodactyl/wings/config" "github.com/pterodactyl/wings/config"
"github.com/pterodactyl/wings/environment" "github.com/pterodactyl/wings/environment"
"github.com/pterodactyl/wings/loggers/cli" "github.com/pterodactyl/wings/loggers/cli"
"github.com/pterodactyl/wings/system" "github.com/pterodactyl/wings/system"
"github.com/spf13/cobra"
) )
const DefaultHastebinUrl = "https://ptero.co" const DefaultHastebinUrl = "https://ptero.co"

View File

@@ -20,6 +20,10 @@ import (
"github.com/gammazero/workerpool" "github.com/gammazero/workerpool"
"github.com/mitchellh/colorstring" "github.com/mitchellh/colorstring"
"github.com/pkg/profile" "github.com/pkg/profile"
"github.com/spf13/cobra"
"golang.org/x/crypto/acme"
"golang.org/x/crypto/acme/autocert"
"github.com/pterodactyl/wings/config" "github.com/pterodactyl/wings/config"
"github.com/pterodactyl/wings/environment" "github.com/pterodactyl/wings/environment"
"github.com/pterodactyl/wings/loggers/cli" "github.com/pterodactyl/wings/loggers/cli"
@@ -28,9 +32,6 @@ import (
"github.com/pterodactyl/wings/server" "github.com/pterodactyl/wings/server"
"github.com/pterodactyl/wings/sftp" "github.com/pterodactyl/wings/sftp"
"github.com/pterodactyl/wings/system" "github.com/pterodactyl/wings/system"
"github.com/spf13/cobra"
"golang.org/x/crypto/acme"
"golang.org/x/crypto/acme/autocert"
) )
var ( var (
@@ -40,7 +41,7 @@ var (
var rootCommand = &cobra.Command{ var rootCommand = &cobra.Command{
Use: "wings", Use: "wings",
Short: "Runs the API server allowing programatic control of game servers for Pterodactyl Panel.", Short: "Runs the API server allowing programmatic control of game servers for Pterodactyl Panel.",
PreRun: func(cmd *cobra.Command, args []string) { PreRun: func(cmd *cobra.Command, args []string) {
initConfig() initConfig()
initLogging() initLogging()
@@ -90,9 +91,9 @@ func rootCmdRun(cmd *cobra.Command, _ []string) {
case "mem": case "mem":
defer profile.Start(profile.MemProfile).Stop() defer profile.Start(profile.MemProfile).Stop()
case "alloc": case "alloc":
defer profile.Start(profile.MemProfile, profile.MemProfileAllocs()).Stop() defer profile.Start(profile.MemProfile, profile.MemProfileAllocs).Stop()
case "heap": case "heap":
defer profile.Start(profile.MemProfile, profile.MemProfileHeap()).Stop() defer profile.Start(profile.MemProfile, profile.MemProfileHeap).Stop()
case "routines": case "routines":
defer profile.Start(profile.GoroutineProfile).Stop() defer profile.Start(profile.GoroutineProfile).Stop()
case "mutex": case "mutex":
@@ -122,11 +123,6 @@ func rootCmdRun(cmd *cobra.Command, _ []string) {
log.WithField("error", err).Fatal("failed to configure system directories for pterodactyl") log.WithField("error", err).Fatal("failed to configure system directories for pterodactyl")
return return
} }
if err := config.EnableLogRotation(); err != nil {
log.WithField("error", err).Fatal("failed to configure log rotation on the system")
return
}
log.WithField("username", config.Get().System.User).Info("checking for pterodactyl system user") log.WithField("username", config.Get().System.User).Info("checking for pterodactyl system user")
if err := config.EnsurePterodactylUser(); err != nil { if err := config.EnsurePterodactylUser(); err != nil {
log.WithField("error", err).Fatal("failed to create pterodactyl system user") log.WithField("error", err).Fatal("failed to create pterodactyl system user")
@@ -136,6 +132,10 @@ func rootCmdRun(cmd *cobra.Command, _ []string) {
"uid": config.Get().System.User.Uid, "uid": config.Get().System.User.Uid,
"gid": config.Get().System.User.Gid, "gid": config.Get().System.User.Gid,
}).Info("configured system user successfully") }).Info("configured system user successfully")
if err := config.EnableLogRotation(); err != nil {
log.WithField("error", err).Fatal("failed to configure log rotation on the system")
return
}
pclient := remote.New( pclient := remote.New(
config.Get().PanelLocation, config.Get().PanelLocation,
@@ -160,7 +160,7 @@ func rootCmdRun(cmd *cobra.Command, _ []string) {
// Just for some nice log output. // Just for some nice log output.
for _, s := range manager.All() { for _, s := range manager.All() {
log.WithField("server", s.Id()).Info("finished loading configuration for server") log.WithField("server", s.ID()).Info("finished loading configuration for server")
} }
states, err := manager.ReadStates() states, err := manager.ReadStates()
@@ -202,14 +202,14 @@ func rootCmdRun(cmd *cobra.Command, _ []string) {
pool.Submit(func() { pool.Submit(func() {
s.Log().Info("configuring server environment and restoring to previous state") s.Log().Info("configuring server environment and restoring to previous state")
var st string var st string
if state, exists := states[s.Id()]; exists { if state, exists := states[s.ID()]; exists {
st = state st = state
} }
r, err := s.Environment.IsRunning() r, err := s.Environment.IsRunning()
// We ignore missing containers because we don't want to actually block booting of wings at this // We ignore missing containers because we don't want to actually block booting of wings at this
// point. If we didn't do this and you pruned all of the images and then started wings you could // point. If we didn't do this, and you pruned all the images and then started wings you could
// end up waiting a long period of time for all of the images to be re-pulled on Wings boot rather // end up waiting a long period of time for all the images to be re-pulled on Wings boot rather
// than when the server itself is started. // than when the server itself is started.
if err != nil && !client.IsErrNotFound(err) { if err != nil && !client.IsErrNotFound(err) {
s.Log().WithField("error", err).Error("error checking server environment status") s.Log().WithField("error", err).Error("error checking server environment status")
@@ -247,10 +247,10 @@ func rootCmdRun(cmd *cobra.Command, _ []string) {
}) })
} }
// Wait until all of the servers are ready to go before we fire up the SFTP and HTTP servers. // Wait until all the servers are ready to go before we fire up the SFTP and HTTP servers.
pool.StopWait() pool.StopWait()
defer func() { defer func() {
// Cancel the context on all of the running servers at this point, even though the // Cancel the context on all the running servers at this point, even though the
// program is just shutting down. // program is just shutting down.
for _, s := range manager.All() { for _, s := range manager.All() {
s.CtxCancel() s.CtxCancel()
@@ -267,7 +267,7 @@ func rootCmdRun(cmd *cobra.Command, _ []string) {
go func() { go func() {
log.Info("updating server states on Panel: marking installing/restoring servers as normal") log.Info("updating server states on Panel: marking installing/restoring servers as normal")
// Update all of the servers on the Panel to be in a valid state if they're // Update all the servers on the Panel to be in a valid state if they're
// currently marked as installing/restoring now that Wings is restarted. // currently marked as installing/restoring now that Wings is restarted.
if err := pclient.ResetServersState(cmd.Context()); err != nil { if err := pclient.ResetServersState(cmd.Context()); err != nil {
log.WithField("error", err).Error("failed to reset server states on Panel: some instances may be stuck in an installing/restoring state unexpectedly") log.WithField("error", err).Error("failed to reset server states on Panel: some instances may be stuck in an installing/restoring state unexpectedly")
@@ -349,7 +349,7 @@ func rootCmdRun(cmd *cobra.Command, _ []string) {
} }
// Reads the configuration from the disk and then sets up the global singleton // Reads the configuration from the disk and then sets up the global singleton
// with all of the configuration values. // with all the configuration values.
func initConfig() { func initConfig() {
if !strings.HasPrefix(configPath, "/") { if !strings.HasPrefix(configPath, "/") {
d, err := os.Getwd() d, err := os.Getwd()

View File

@@ -21,8 +21,9 @@ import (
"github.com/cobaugh/osrelease" "github.com/cobaugh/osrelease"
"github.com/creasty/defaults" "github.com/creasty/defaults"
"github.com/gbrlsnchs/jwt/v3" "github.com/gbrlsnchs/jwt/v3"
"github.com/pterodactyl/wings/system"
"gopkg.in/yaml.v2" "gopkg.in/yaml.v2"
"github.com/pterodactyl/wings/system"
) )
const DefaultLocation = "/etc/pterodactyl/config.yml" const DefaultLocation = "/etc/pterodactyl/config.yml"
@@ -53,7 +54,7 @@ var _jwtAlgo *jwt.HMACSHA
var _debugViaFlag bool var _debugViaFlag bool
// Locker specific to writing the configuration to the disk, this happens // Locker specific to writing the configuration to the disk, this happens
// in areas that might already be locked so we don't want to crash the process. // in areas that might already be locked, so we don't want to crash the process.
var _writeLock sync.Mutex var _writeLock sync.Mutex
// SftpConfiguration defines the configuration of the internal SFTP server. // SftpConfiguration defines the configuration of the internal SFTP server.
@@ -394,7 +395,7 @@ func EnsurePterodactylUser() error {
} }
// Our way of detecting if wings is running inside of Docker. // Our way of detecting if wings is running inside of Docker.
if sysName == "busybox" { if sysName == "distroless" {
_config.System.Username = system.FirstNotEmpty(os.Getenv("WINGS_USERNAME"), "pterodactyl") _config.System.Username = system.FirstNotEmpty(os.Getenv("WINGS_USERNAME"), "pterodactyl")
_config.System.User.Uid = system.MustInt(system.FirstNotEmpty(os.Getenv("WINGS_UID"), "988")) _config.System.User.Uid = system.MustInt(system.FirstNotEmpty(os.Getenv("WINGS_UID"), "988"))
_config.System.User.Gid = system.MustInt(system.FirstNotEmpty(os.Getenv("WINGS_GID"), "988")) _config.System.User.Gid = system.MustInt(system.FirstNotEmpty(os.Getenv("WINGS_GID"), "988"))
@@ -538,8 +539,7 @@ func EnableLogRotation() error {
} }
defer f.Close() defer f.Close()
t, err := template.New("logrotate").Parse(` t, err := template.New("logrotate").Parse(`{{.LogDirectory}}/wings.log {
{{.LogDirectory}}/wings.log {
size 10M size 10M
compress compress
delaycompress delaycompress
@@ -547,9 +547,8 @@ func EnableLogRotation() error {
maxage 7 maxage 7
missingok missingok
notifempty notifempty
create 0640 {{.User.Uid}} {{.User.Gid}}
postrotate postrotate
killall -SIGHUP wings /usr/bin/systemctl kill -s HUP wings.service >/dev/null 2>&1 || true
endscript endscript
}`) }`)
if err != nil { if err != nil {

View File

@@ -55,6 +55,21 @@ type DockerConfiguration struct {
// utilizes host memory for this value, and that we do not keep track of the space used here // utilizes host memory for this value, and that we do not keep track of the space used here
// so avoid allocating too much to a server. // so avoid allocating too much to a server.
TmpfsSize uint `default:"100" json:"tmpfs_size" yaml:"tmpfs_size"` TmpfsSize uint `default:"100" json:"tmpfs_size" yaml:"tmpfs_size"`
// ContainerPidLimit sets the total number of processes that can be active in a container
// at any given moment. This is a security concern in shared-hosting environments where a
// malicious process could create enough processes to cause the host node to run out of
// available pids and crash.
ContainerPidLimit int64 `default:"512" json:"container_pid_limit" yaml:"container_pid_limit"`
// InstallerLimits defines the limits on the installer containers that prevent a server's
// installation process from unintentionally consuming more resources than expected. This
// is used in conjunction with the server's defined limits. Whichever value is higher will
// take precedence in the install containers.
InstallerLimits struct {
Memory int64 `default:"1024" json:"memory" yaml:"memory"`
Cpu int64 `default:"100" json:"cpu" yaml:"cpu"`
} `json:"installer_limits" yaml:"installer_limits"`
} }
// RegistryConfiguration defines the authentication credentials for a given // RegistryConfiguration defines the authentication credentials for a given
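The InstallerLimits comment above states that whichever value is higher, the server's own limit or the configured installer minimum, takes precedence for the install container. A minimal sketch of that rule (the helper name is illustrative; values are in MiB for memory and percentage points for CPU, matching the fields above):

package sketch

// installerLimit returns the effective limit for an install container: the
// server's own build limit, raised to the configured installer minimum when
// the server is provisioned with less than that.
func installerLimit(serverLimit, installerMinimum int64) int64 {
	if installerMinimum > serverLimit {
		return installerMinimum
	}
	return serverLimit
}

// With the defaults shown above, a server built with 512 MiB of memory would
// still get a 1024 MiB install container, while a server with 4096 MiB keeps
// its full allowance:
//
//	installerLimit(512, 1024)  // 1024
//	installerLimit(4096, 1024) // 4096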

View File

@@ -5,6 +5,7 @@ import (
"strconv" "strconv"
"github.com/docker/go-connections/nat" "github.com/docker/go-connections/nat"
"github.com/pterodactyl/wings/config" "github.com/pterodactyl/wings/config"
) )
@@ -19,7 +20,7 @@ type Allocations struct {
Port int `json:"port"` Port int `json:"port"`
} `json:"default"` } `json:"default"`
// Mappings contains all of the ports that should be assigned to a given server // Mappings contains all the ports that should be assigned to a given server
// attached to the IP they correspond to. // attached to the IP they correspond to.
Mappings map[string][]int `json:"mappings"` Mappings map[string][]int `json:"mappings"`
} }
@@ -62,7 +63,7 @@ func (a *Allocations) DockerBindings() nat.PortMap {
iface := config.Get().Docker.Network.Interface iface := config.Get().Docker.Network.Interface
out := a.Bindings() out := a.Bindings()
// Loop over all of the bindings for this container, and convert any that reference 127.0.0.1 // Loop over all the bindings for this container, and convert any that reference 127.0.0.1
// to use the pterodactyl0 network interface IP, as that is the true local for what people are // to use the pterodactyl0 network interface IP, as that is the true local for what people are
// trying to do when creating servers. // trying to do when creating servers.
for p, binds := range out { for p, binds := range out {

View File

@@ -10,6 +10,7 @@ import (
"github.com/docker/docker/api/types" "github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/network" "github.com/docker/docker/api/types/network"
"github.com/docker/docker/client" "github.com/docker/docker/client"
"github.com/pterodactyl/wings/config" "github.com/pterodactyl/wings/config"
) )

View File

@@ -17,6 +17,7 @@ import (
"github.com/docker/docker/api/types/mount" "github.com/docker/docker/api/types/mount"
"github.com/docker/docker/client" "github.com/docker/docker/client"
"github.com/docker/docker/daemon/logger/jsonfilelog" "github.com/docker/docker/daemon/logger/jsonfilelog"
"github.com/pterodactyl/wings/config" "github.com/pterodactyl/wings/config"
"github.com/pterodactyl/wings/environment" "github.com/pterodactyl/wings/environment"
"github.com/pterodactyl/wings/system" "github.com/pterodactyl/wings/system"
@@ -132,14 +133,14 @@ func (e *Environment) InSituUpdate() error {
// //
// @see https://github.com/moby/moby/issues/41946 // @see https://github.com/moby/moby/issues/41946
if _, err := e.client.ContainerUpdate(ctx, e.Id, container.UpdateConfig{ if _, err := e.client.ContainerUpdate(ctx, e.Id, container.UpdateConfig{
Resources: e.resources(), Resources: e.Configuration.Limits().AsContainerResources(),
}); err != nil { }); err != nil {
return errors.Wrap(err, "environment/docker: could not update container") return errors.Wrap(err, "environment/docker: could not update container")
} }
return nil return nil
} }
// Create creates a new container for the server using all of the data that is // Create creates a new container for the server using all the data that is
// currently available for it. If the container already exists it will be // currently available for it. If the container already exists it will be
// returned. // returned.
func (e *Environment) Create() error { func (e *Environment) Create() error {
@@ -178,7 +179,7 @@ func (e *Environment) Create() error {
OpenStdin: true, OpenStdin: true,
Tty: true, Tty: true,
ExposedPorts: a.Exposed(), ExposedPorts: a.Exposed(),
Image: e.meta.Image, Image: strings.TrimPrefix(e.meta.Image, "~"),
Env: e.Configuration.EnvironmentVariables(), Env: e.Configuration.EnvironmentVariables(),
Labels: map[string]string{ Labels: map[string]string{
"Service": "Pterodactyl", "Service": "Pterodactyl",
@@ -192,7 +193,7 @@ func (e *Environment) Create() error {
PortBindings: a.DockerBindings(), PortBindings: a.DockerBindings(),
// Configure the mounts for this container. First mount the server data directory // Configure the mounts for this container. First mount the server data directory
// into the container as a r/w bind. // into the container as an r/w bind.
Mounts: e.convertMounts(), Mounts: e.convertMounts(),
// Configure the /tmp folder mapping in containers. This is necessary for some // Configure the /tmp folder mapping in containers. This is necessary for some
@@ -203,7 +204,7 @@ func (e *Environment) Create() error {
// Define resource limits for the container based on the data passed through // Define resource limits for the container based on the data passed through
// from the Panel. // from the Panel.
Resources: e.resources(), Resources: e.Configuration.Limits().AsContainerResources(),
DNS: config.Get().Docker.Network.Dns, DNS: config.Get().Docker.Network.Dns,
@@ -340,11 +341,9 @@ func (e *Environment) scanOutput(reader io.ReadCloser) {
events := e.Events() events := e.Events()
err := system.ScanReader(reader, func(line string) { if err := system.ScanReader(reader, func(line string) {
events.Publish(environment.ConsoleOutputEvent, line) events.Publish(environment.ConsoleOutputEvent, line)
}) }); err != nil && err != io.EOF {
if err != nil && err != io.EOF {
log.WithField("error", err).WithField("container_id", e.Id).Warn("error processing scanner line in console output") log.WithField("error", err).WithField("container_id", e.Id).Warn("error processing scanner line in console output")
return return
} }
@@ -354,7 +353,7 @@ func (e *Environment) scanOutput(reader io.ReadCloser) {
return return
} }
// Close the current reader before starting a new one, the defer will still run // Close the current reader before starting a new one, the defer will still run,
// but it will do nothing if we already closed the stream. // but it will do nothing if we already closed the stream.
_ = reader.Close() _ = reader.Close()
@@ -372,7 +371,7 @@ type imagePullStatus struct {
// error to the logger but continue with the process. // error to the logger but continue with the process.
// //
// The reasoning behind this is that Quay has had some serious outages as of // The reasoning behind this is that Quay has had some serious outages as of
// late, and we don't need to block all of the servers from booting just because // late, and we don't need to block all the servers from booting just because
// of that. I'd imagine in a lot of cases an outage shouldn't affect users too // of that. I'd imagine in a lot of cases an outage shouldn't affect users too
// badly. It'll at least keep existing servers working correctly if anything. // badly. It'll at least keep existing servers working correctly if anything.
func (e *Environment) ensureImageExists(image string) error { func (e *Environment) ensureImageExists(image string) error {
@@ -486,6 +485,7 @@ func (e *Environment) convertMounts() []mount.Mount {
func (e *Environment) resources() container.Resources { func (e *Environment) resources() container.Resources {
l := e.Configuration.Limits() l := e.Configuration.Limits()
pids := l.ProcessLimit()
return container.Resources{ return container.Resources{
Memory: l.BoundedMemoryLimit(), Memory: l.BoundedMemoryLimit(),
@@ -497,5 +497,6 @@ func (e *Environment) resources() container.Resources {
BlkioWeight: l.IoWeight, BlkioWeight: l.IoWeight,
OomKillDisable: &l.OOMDisabled, OomKillDisable: &l.OOMDisabled,
CpusetCpus: l.Threads, CpusetCpus: l.Threads,
PidsLimit: &pids,
} }
} }

View File

@@ -10,6 +10,7 @@ import (
"github.com/apex/log" "github.com/apex/log"
"github.com/docker/docker/api/types" "github.com/docker/docker/api/types"
"github.com/docker/docker/client" "github.com/docker/docker/client"
"github.com/pterodactyl/wings/environment" "github.com/pterodactyl/wings/environment"
"github.com/pterodactyl/wings/events" "github.com/pterodactyl/wings/events"
"github.com/pterodactyl/wings/remote" "github.com/pterodactyl/wings/remote"
@@ -21,7 +22,7 @@ type Metadata struct {
Stop remote.ProcessStopConfiguration Stop remote.ProcessStopConfiguration
} }
// Ensure that the Docker environment is always implementing all of the methods // Ensure that the Docker environment is always implementing all the methods
// from the base environment interface. // from the base environment interface.
var _ environment.ProcessEnvironment = (*Environment)(nil) var _ environment.ProcessEnvironment = (*Environment)(nil)

View File

@@ -12,6 +12,7 @@ import (
"github.com/docker/docker/api/types" "github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/container" "github.com/docker/docker/api/types/container"
"github.com/docker/docker/client" "github.com/docker/docker/client"
"github.com/pterodactyl/wings/environment" "github.com/pterodactyl/wings/environment"
"github.com/pterodactyl/wings/remote" "github.com/pterodactyl/wings/remote"
) )
@@ -81,7 +82,7 @@ func (e *Environment) Start() error {
return e.Attach() return e.Attach()
} }
// Truncate the log file so we don't end up outputting a bunch of useless log information // Truncate the log file, so we don't end up outputting a bunch of useless log information
// to the websocket and whatnot. Check first that the path and file exist before trying // to the websocket and whatnot. Check first that the path and file exist before trying
// to truncate them. // to truncate them.
if _, err := os.Stat(c.LogPath); err == nil { if _, err := os.Stat(c.LogPath); err == nil {
@@ -242,7 +243,7 @@ func (e *Environment) Terminate(signal os.Signal) error {
} }
if !c.State.Running { if !c.State.Running {
// If the container is not running but we're not already in a stopped state go ahead // If the container is not running, but we're not already in a stopped state go ahead
// and update things to indicate we should be completely stopped now. Set to stopping // and update things to indicate we should be completely stopped now. Set to stopping
// first so crash detection is not triggered. // first so crash detection is not triggered.
if e.st.Load() != environment.ProcessOfflineState { if e.st.Load() != environment.ProcessOfflineState {

View File

@@ -2,12 +2,14 @@ package docker
import ( import (
"context" "context"
"emperror.dev/errors"
"encoding/json" "encoding/json"
"github.com/docker/docker/api/types"
"github.com/pterodactyl/wings/environment"
"io" "io"
"math" "math"
"emperror.dev/errors"
"github.com/docker/docker/api/types"
"github.com/pterodactyl/wings/environment"
) )
// Attach to the instance and then automatically emit an event whenever the resource usage for the // Attach to the instance and then automatically emit an event whenever the resource usage for the
@@ -73,9 +75,8 @@ func (e *Environment) pollResources(ctx context.Context) error {
// value which can be rather confusing to people trying to compare panel usage to // value which can be rather confusing to people trying to compare panel usage to
// their stats output. // their stats output.
// //
// This math is straight up lifted from their CLI repository in order to show the same // This math is from their CLI repository in order to show the same values to avoid people
// values to avoid people bothering me about it. It should also reflect a slightly more // bothering me about it. It should also reflect a slightly more correct memory value anyways.
// correct memory value anyways.
// //
// @see https://github.com/docker/cli/blob/96e1d1d6/cli/command/container/stats_helpers.go#L227-L249 // @see https://github.com/docker/cli/blob/96e1d1d6/cli/command/container/stats_helpers.go#L227-L249
func calculateDockerMemory(stats types.MemoryStats) uint64 { func calculateDockerMemory(stats types.MemoryStats) uint64 {

View File

@@ -6,6 +6,9 @@ import (
"strconv" "strconv"
"github.com/apex/log" "github.com/apex/log"
"github.com/docker/docker/api/types/container"
"github.com/pterodactyl/wings/config"
) )
type Mount struct { type Mount struct {
@@ -23,13 +26,13 @@ type Mount struct {
// that we're mounting into the container at the Target location. // that we're mounting into the container at the Target location.
Source string `json:"source"` Source string `json:"source"`
// Whether or not the directory is being mounted as read-only. It is up to the environment to // Whether the directory is being mounted as read-only. It is up to the environment to
// handle this value correctly and ensure security expectations are met with its usage. // handle this value correctly and ensure security expectations are met with its usage.
ReadOnly bool `json:"read_only"` ReadOnly bool `json:"read_only"`
} }
// The build settings for a given server that impact docker container creation and // Limits is the build settings for a given server that impact docker container
// resource limits for a server instance. // creation and resource limits for a server instance.
type Limits struct { type Limits struct {
// The total amount of memory in megabytes that this server is allowed to // The total amount of memory in megabytes that this server is allowed to
// use on the host system. // use on the host system.
@@ -56,51 +59,76 @@ type Limits struct {
OOMDisabled bool `json:"oom_disabled"` OOMDisabled bool `json:"oom_disabled"`
} }
// Converts the CPU limit for a server build into a number that can be better understood // ConvertedCpuLimit converts the CPU limit for a server build into a number
// by the Docker environment. If there is no limit set, return -1 which will indicate to // that can be better understood by the Docker environment. If there is no limit
// Docker that it has unlimited CPU quota. // set, return -1 which will indicate to Docker that it has unlimited CPU quota.
func (r *Limits) ConvertedCpuLimit() int64 { func (l Limits) ConvertedCpuLimit() int64 {
if r.CpuLimit == 0 { if l.CpuLimit == 0 {
return -1 return -1
} }
return r.CpuLimit * 1000 return l.CpuLimit * 1000
} }
// Set the hard limit for memory usage to be 5% more than the amount of memory assigned to // MemoryOverheadMultiplier sets the hard limit for memory usage to be 5% more
// the server. If the memory limit for the server is < 4G, use 10%, if less than 2G use // than the amount of memory assigned to the server. If the memory limit for the
// 15%. This avoids unexpected crashes from processes like Java which run over the limit. // server is < 4G, use 10%, if less than 2G use 15%. This avoids unexpected
func (r *Limits) MemoryOverheadMultiplier() float64 { // crashes from processes like Java which run over the limit.
if r.MemoryLimit <= 2048 { func (l Limits) MemoryOverheadMultiplier() float64 {
if l.MemoryLimit <= 2048 {
return 1.15 return 1.15
} else if r.MemoryLimit <= 4096 { } else if l.MemoryLimit <= 4096 {
return 1.10 return 1.10
} }
return 1.05 return 1.05
} }
func (r *Limits) BoundedMemoryLimit() int64 { func (l Limits) BoundedMemoryLimit() int64 {
return int64(math.Round(float64(r.MemoryLimit) * r.MemoryOverheadMultiplier() * 1_000_000)) return int64(math.Round(float64(l.MemoryLimit) * l.MemoryOverheadMultiplier() * 1_000_000))
} }
// Returns the amount of swap available as a total in bytes. This is returned as the amount // ConvertedSwap returns the amount of swap available as a total in bytes. This
// of memory available to the server initially, PLUS the amount of additional swap to include // is returned as the amount of memory available to the server initially, PLUS
// which is the format used by Docker. // the amount of additional swap to include which is the format used by Docker.
func (r *Limits) ConvertedSwap() int64 { func (l Limits) ConvertedSwap() int64 {
if r.Swap < 0 { if l.Swap < 0 {
return -1 return -1
} }
return (r.Swap * 1_000_000) + r.BoundedMemoryLimit() return (l.Swap * 1_000_000) + l.BoundedMemoryLimit()
}
// ProcessLimit returns the process limit for a container. This is currently
// defined at a system level and not on a per-server basis.
func (l Limits) ProcessLimit() int64 {
return config.Get().Docker.ContainerPidLimit
}
func (l Limits) AsContainerResources() container.Resources {
pids := l.ProcessLimit()
return container.Resources{
Memory: l.BoundedMemoryLimit(),
MemoryReservation: l.MemoryLimit * 1_000_000,
MemorySwap: l.ConvertedSwap(),
CPUQuota: l.ConvertedCpuLimit(),
CPUPeriod: 100_000,
CPUShares: 1024,
BlkioWeight: l.IoWeight,
OomKillDisable: &l.OOMDisabled,
CpusetCpus: l.Threads,
PidsLimit: &pids,
}
} }
type Variables map[string]interface{} type Variables map[string]interface{}
// Ugly hacky function to handle environment variables that get passed through as not-a-string // Get is an ugly hacky function to handle environment variables that get passed
// from the Panel. Ideally we'd just say only pass strings, but that is a fragile idea and if a // through as not-a-string from the Panel. Ideally we'd just say only pass
// string wasn't passed through you'd cause a crash or the server to become unavailable. For now // strings, but that is a fragile idea and if a string wasn't passed through
// try to handle the most likely values from the JSON and hope for the best. // you'd cause a crash or the server to become unavailable. For now try to
// handle the most likely values from the JSON and hope for the best.
func (v Variables) Get(key string) string { func (v Variables) Get(key string) string {
val, ok := v[key] val, ok := v[key]
if !ok { if !ok {

View File

@@ -12,7 +12,7 @@ type Stats struct {
// The total amount of memory this container or resource can use. Inside Docker this is // The total amount of memory this container or resource can use. Inside Docker this is
// going to be higher than you'd expect because we're automatically allocating overhead // going to be higher than you'd expect because we're automatically allocating overhead
// abilities for the container, so its not going to be a perfect match. // abilities for the container, so it's not going to be a perfect match.
MemoryLimit uint64 `json:"memory_limit_bytes"` MemoryLimit uint64 `json:"memory_limit_bytes"`
// The absolute CPU usage is the amount of CPU used in relation to the entire system and // The absolute CPU usage is the amount of CPU used in relation to the entire system and

View File

@@ -30,7 +30,7 @@ func (e *EventBus) Publish(topic string, data string) {
// Some of our topics for the socket support passing a more specific namespace, // Some of our topics for the socket support passing a more specific namespace,
// such as "backup completed:1234" to indicate which specific backup was completed. // such as "backup completed:1234" to indicate which specific backup was completed.
// //
// In these cases, we still need to the send the event using the standard listener // In these cases, we still need to send the event using the standard listener
// name of "backup completed". // name of "backup completed".
if strings.Contains(topic, ":") { if strings.Contains(topic, ":") {
parts := strings.SplitN(topic, ":", 2) parts := strings.SplitN(topic, ":", 2)
@@ -43,7 +43,7 @@ func (e *EventBus) Publish(topic string, data string) {
e.mu.RLock() e.mu.RLock()
defer e.mu.RUnlock() defer e.mu.RUnlock()
// Acquire a read lock and loop over all of the channels registered for the topic. This // Acquire a read lock and loop over all the channels registered for the topic. This
// avoids a panic crash if the process tries to unregister the channel while this routine // avoids a panic crash if the process tries to unregister the channel while this routine
// is running. // is running.
if cp, ok := e.pools[t]; ok { if cp, ok := e.pools[t]; ok {
@@ -65,7 +65,7 @@ func (e *EventBus) Publish(topic string, data string) {
} }
} }
// Publishes a JSON message to a given topic. // PublishJson publishes a JSON message to a given topic.
func (e *EventBus) PublishJson(topic string, data interface{}) error { func (e *EventBus) PublishJson(topic string, data interface{}) error {
b, err := json.Marshal(data) b, err := json.Marshal(data)
if err != nil { if err != nil {
@@ -77,7 +77,7 @@ func (e *EventBus) PublishJson(topic string, data interface{}) error {
return nil return nil
} }
// Register a callback function that will be executed each time one of the events using the topic // On adds a callback function that will be executed each time one of the events using the topic
// name is called. // name is called.
func (e *EventBus) On(topic string, callback *func(Event)) { func (e *EventBus) On(topic string, callback *func(Event)) {
e.mu.Lock() e.mu.Lock()
@@ -97,7 +97,7 @@ func (e *EventBus) On(topic string, callback *func(Event)) {
e.pools[topic].Add(callback) e.pools[topic].Add(callback)
} }
// Removes an event listener from the bus. // Off removes an event listener from the bus.
func (e *EventBus) Off(topic string, callback *func(Event)) { func (e *EventBus) Off(topic string, callback *func(Event)) {
e.mu.Lock() e.mu.Lock()
defer e.mu.Unlock() defer e.mu.Unlock()
@@ -107,7 +107,7 @@ func (e *EventBus) Off(topic string, callback *func(Event)) {
} }
} }
// Removes all of the event listeners that have been registered for any topic. Also stops the worker // Destroy removes all the event listeners that have been registered for any topic. Also stops the worker
// pool to close that routine. // pool to close that routine.
func (e *EventBus) Destroy() { func (e *EventBus) Destroy() {
e.mu.Lock() e.mu.Lock()

97
go.mod
View File

@@ -1,81 +1,72 @@
module github.com/pterodactyl/wings module github.com/pterodactyl/wings
go 1.14 go 1.16
require ( require (
emperror.dev/errors v0.8.0 emperror.dev/errors v0.8.0
github.com/AlecAivazis/survey/v2 v2.2.7 github.com/AlecAivazis/survey/v2 v2.2.15
github.com/Jeffail/gabs/v2 v2.6.0 github.com/Jeffail/gabs/v2 v2.6.1
github.com/Microsoft/go-winio v0.4.16 // indirect github.com/Microsoft/go-winio v0.5.0 // indirect
github.com/Microsoft/hcsshim v0.8.14 // indirect github.com/Microsoft/hcsshim v0.8.20 // indirect
github.com/NYTimes/logrotate v1.0.0 github.com/NYTimes/logrotate v1.0.0
github.com/andybalholm/brotli v1.0.1 // indirect github.com/andybalholm/brotli v1.0.3 // indirect
github.com/apex/log v1.9.0 github.com/apex/log v1.9.0
github.com/asaskevich/govalidator v0.0.0-20200907205600-7a23bdc65eef github.com/asaskevich/govalidator v0.0.0-20210307081110-f21760c49a8d
github.com/beevik/etree v1.1.0 github.com/beevik/etree v1.1.0
github.com/buger/jsonparser v1.1.0 github.com/buger/jsonparser v1.1.1
github.com/cenkalti/backoff/v4 v4.1.1
github.com/cobaugh/osrelease v0.0.0-20181218015638-a93a0a55a249 github.com/cobaugh/osrelease v0.0.0-20181218015638-a93a0a55a249
github.com/containerd/containerd v1.4.3 // indirect github.com/containerd/containerd v1.5.5 // indirect
github.com/containerd/fifo v0.0.0-20201026212402-0724c46b320c // indirect
github.com/creasty/defaults v1.5.1 github.com/creasty/defaults v1.5.1
github.com/docker/distribution v2.7.1+incompatible // indirect github.com/docker/docker v20.10.7+incompatible
github.com/docker/docker v20.10.1+incompatible
github.com/docker/go-connections v0.4.0 github.com/docker/go-connections v0.4.0
github.com/docker/go-metrics v0.0.1 // indirect github.com/fatih/color v1.12.0
github.com/fatih/color v1.10.0
github.com/franela/goblin v0.0.0-20200825194134-80c0062ed6cd github.com/franela/goblin v0.0.0-20200825194134-80c0062ed6cd
github.com/fsnotify/fsnotify v1.4.9 // indirect github.com/gabriel-vasile/mimetype v1.3.1
github.com/gabriel-vasile/mimetype v1.1.2 github.com/gammazero/workerpool v1.1.2
github.com/gammazero/deque v0.0.0-20201010052221-3932da5530cc // indirect github.com/gbrlsnchs/jwt/v3 v3.0.1
github.com/gammazero/workerpool v1.1.1 github.com/gin-gonic/gin v1.7.2
github.com/gbrlsnchs/jwt/v3 v3.0.0 github.com/go-playground/validator/v10 v10.8.0 // indirect
github.com/gin-gonic/gin v1.6.3 github.com/golang/snappy v0.0.4 // indirect
github.com/go-playground/validator/v10 v10.4.1 // indirect github.com/google/uuid v1.3.0
github.com/golang/snappy v0.0.2 // indirect
github.com/google/go-cmp v0.5.2 // indirect
github.com/google/uuid v1.1.2
github.com/gorilla/mux v1.7.4 // indirect github.com/gorilla/mux v1.7.4 // indirect
github.com/gorilla/websocket v1.4.2 github.com/gorilla/websocket v1.4.2
github.com/iancoleman/strcase v0.1.2 github.com/iancoleman/strcase v0.2.0
github.com/icza/dyno v0.0.0-20200205103839-49cb13720835 github.com/icza/dyno v0.0.0-20210726202311-f1bafe5d9996
github.com/imdario/mergo v0.3.9 github.com/imdario/mergo v0.3.12
github.com/juju/ratelimit v1.0.1 github.com/juju/ratelimit v1.0.1
github.com/karrick/godirwalk v1.16.1 github.com/karrick/godirwalk v1.16.1
github.com/klauspost/compress v1.11.4 // indirect github.com/klauspost/compress v1.13.2 // indirect
github.com/klauspost/pgzip v1.2.5 github.com/klauspost/pgzip v1.2.5
github.com/leodido/go-urn v1.2.1 // indirect github.com/magefile/mage v1.11.0 // indirect
github.com/magefile/mage v1.10.0 // indirect github.com/magiconair/properties v1.8.5
github.com/magiconair/properties v1.8.4
github.com/mattn/go-colorable v0.1.8 github.com/mattn/go-colorable v0.1.8
github.com/mattn/go-isatty v0.0.13 // indirect
github.com/mgutz/ansi v0.0.0-20200706080929-d51e80ef957d // indirect github.com/mgutz/ansi v0.0.0-20200706080929-d51e80ef957d // indirect
github.com/mholt/archiver/v3 v3.5.0 github.com/mholt/archiver/v3 v3.5.0
github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db
github.com/moby/term v0.0.0-20201216013528-df9cb8a40635 // indirect github.com/moby/term v0.0.0-20210619224110-3f7ff695adc6 // indirect
github.com/morikuni/aec v1.0.0 // indirect github.com/morikuni/aec v1.0.0 // indirect
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e // indirect github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect github.com/nwaples/rardecode v1.1.1 // indirect
github.com/opencontainers/image-spec v1.0.1 // indirect
github.com/patrickmn/go-cache v2.1.0+incompatible github.com/patrickmn/go-cache v2.1.0+incompatible
github.com/pierrec/lz4/v4 v4.1.2 // indirect github.com/pierrec/lz4/v4 v4.1.8 // indirect
github.com/pkg/profile v1.5.0 github.com/pkg/profile v1.6.0
github.com/pkg/sftp v1.12.0 github.com/pkg/sftp v1.13.2
github.com/prometheus/client_golang v1.9.0 // indirect github.com/prometheus/common v0.30.0 // indirect
github.com/prometheus/procfs v0.7.1 // indirect
github.com/sabhiram/go-gitignore v0.0.0-20201211210132-54b8a0bf510f github.com/sabhiram/go-gitignore v0.0.0-20201211210132-54b8a0bf510f
github.com/sirupsen/logrus v1.7.0 // indirect github.com/spf13/cobra v1.2.1
github.com/spf13/cobra v1.1.1 github.com/stretchr/testify v1.7.0
github.com/stretchr/testify v1.6.1 github.com/ulikunitz/xz v0.5.10 // indirect
github.com/ugorji/go v1.2.2 // indirect go.uber.org/atomic v1.9.0 // indirect
github.com/ulikunitz/xz v0.5.9 // indirect go.uber.org/multierr v1.7.0 // indirect
golang.org/x/crypto v0.0.0-20201221181555-eec23a3978ad golang.org/x/crypto v0.0.0-20210711020723-a769d52b0f97
golang.org/x/net v0.0.0-20201224014010-6772e930b67b // indirect golang.org/x/net v0.0.0-20210726213435-c6fcb2dbf985 // indirect
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
golang.org/x/sys v0.0.0-20201223074533-0d417f636930 // indirect golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b // indirect
golang.org/x/term v0.0.0-20201210144234-2321bbc49cbf // indirect golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac // indirect
golang.org/x/text v0.3.4 // indirect google.golang.org/genproto v0.0.0-20210729151513-df9385d47c1b // indirect
golang.org/x/time v0.0.0-20201208040808-7e3f01d25324 // indirect
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
google.golang.org/genproto v0.0.0-20201214200347-8c77b98c765d // indirect
google.golang.org/grpc v1.34.0 // indirect
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f // indirect gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f // indirect
gopkg.in/ini.v1 v1.62.0 gopkg.in/ini.v1 v1.62.0
gopkg.in/yaml.v2 v2.4.0 gopkg.in/yaml.v2 v2.4.0

992
go.sum

File diff suppressed because it is too large.

View File

@@ -7,6 +7,7 @@ import (
"emperror.dev/errors" "emperror.dev/errors"
"github.com/asaskevich/govalidator" "github.com/asaskevich/govalidator"
"github.com/buger/jsonparser" "github.com/buger/jsonparser"
"github.com/pterodactyl/wings/environment" "github.com/pterodactyl/wings/environment"
"github.com/pterodactyl/wings/remote" "github.com/pterodactyl/wings/remote"
"github.com/pterodactyl/wings/server" "github.com/pterodactyl/wings/server"
@@ -16,7 +17,7 @@ type Installer struct {
server *server.Server server *server.Server
} }
// New validates the received data to ensure that all of the required fields // New validates the received data to ensure that all the required fields
// have been passed along in the request. This should be manually run before // have been passed along in the request. This should be manually run before
// calling Execute(). // calling Execute().
func New(ctx context.Context, manager *server.Manager, data []byte) (*Installer, error) { func New(ctx context.Context, manager *server.Manager, data []byte) (*Installer, error) {
@@ -25,10 +26,11 @@ func New(ctx context.Context, manager *server.Manager, data []byte) (*Installer,
} }
cfg := &server.Configuration{ cfg := &server.Configuration{
Uuid: getString(data, "uuid"), Uuid: getString(data, "uuid"),
Suspended: false, Suspended: false,
Invocation: getString(data, "invocation"), Invocation: getString(data, "invocation"),
SkipEggScripts: getBoolean(data, "skip_egg_scripts"), SkipEggScripts: getBoolean(data, "skip_egg_scripts"),
StartOnCompletion: getBoolean(data, "start_on_completion"),
Build: environment.Limits{ Build: environment.Limits{
MemoryLimit: getInt(data, "build", "memory"), MemoryLimit: getInt(data, "build", "memory"),
Swap: getInt(data, "build", "swap"), Swap: getInt(data, "build", "swap"),
@@ -84,7 +86,7 @@ func New(ctx context.Context, manager *server.Manager, data []byte) (*Installer,
// Uuid returns the UUID associated with this installer instance. // Uuid returns the UUID associated with this installer instance.
func (i *Installer) Uuid() string { func (i *Installer) Uuid() string {
return i.server.Id() return i.server.ID()
} }
// Server returns the server instance. // Server returns the server instance.

View File

@@ -1,17 +1,18 @@
package cli package cli
import ( import (
"emperror.dev/errors"
"fmt" "fmt"
"github.com/apex/log"
"github.com/apex/log/handlers/cli"
color2 "github.com/fatih/color"
"github.com/mattn/go-colorable"
"io" "io"
"os" "os"
"strings" "strings"
"sync" "sync"
"time" "time"
"emperror.dev/errors"
"github.com/apex/log"
"github.com/apex/log/handlers/cli"
color2 "github.com/fatih/color"
"github.com/mattn/go-colorable"
) )
var Default = New(os.Stderr, true) var Default = New(os.Stderr, true)

View File

@@ -15,9 +15,10 @@ import (
"github.com/buger/jsonparser" "github.com/buger/jsonparser"
"github.com/icza/dyno" "github.com/icza/dyno"
"github.com/magiconair/properties" "github.com/magiconair/properties"
"github.com/pterodactyl/wings/config"
"gopkg.in/ini.v1" "gopkg.in/ini.v1"
"gopkg.in/yaml.v2" "gopkg.in/yaml.v2"
"github.com/pterodactyl/wings/config"
) )
// The file parsing options that are available for a server configuration file. // The file parsing options that are available for a server configuration file.

View File

@@ -3,6 +3,8 @@ package remote
import ( import (
"fmt" "fmt"
"net/http" "net/http"
"emperror.dev/errors"
) )
type RequestErrors struct { type RequestErrors struct {
@@ -16,13 +18,31 @@ type RequestError struct {
Detail string `json:"detail"` Detail string `json:"detail"`
} }
// IsRequestError checks if the given error is of the RequestError type.
func IsRequestError(err error) bool { func IsRequestError(err error) bool {
_, ok := err.(*RequestError) var rerr *RequestError
if err == nil {
return ok return false
}
return errors.As(err, &rerr)
} }
// Returns the error response in a string form that can be more easily consumed. // AsRequestError transforms the error into a RequestError if it is currently
// one, checking the wrap status from the other error handlers. If the error
// is not a RequestError nil is returned.
func AsRequestError(err error) *RequestError {
if err == nil {
return nil
}
var rerr *RequestError
if errors.As(err, &rerr) {
return rerr
}
return nil
}
// Error returns the error response in a string form that can be more easily
// consumed.
func (re *RequestError) Error() string { func (re *RequestError) Error() string {
c := 0 c := 0
if re.response != nil { if re.response != nil {
@@ -32,6 +52,11 @@ func (re *RequestError) Error() string {
 	return fmt.Sprintf("Error response from Panel: %s: %s (HTTP/%d)", re.Code, re.Detail, c)
 }

+// StatusCode returns the status code of the response.
+func (re *RequestError) StatusCode() int {
+	return re.response.StatusCode
+}
+
 type SftpInvalidCredentialsError struct {
 }
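
Because RequestError values can now be wrapped by other error handlers, callers are expected to go through errors.As (or the AsRequestError helper above) rather than a direct type assertion. A minimal caller-side sketch; handlePanelError is hypothetical and not part of Wings:

    package main

    import (
        "log"

        "github.com/pterodactyl/wings/remote"
    )

    func handlePanelError(err error) {
        if err == nil {
            return
        }
        // AsRequestError walks the wrap chain; a nil result means the failure
        // happened before any response was parsed from the Panel.
        if rerr := remote.AsRequestError(err); rerr != nil {
            log.Printf("panel responded with HTTP %d: %s", rerr.StatusCode(), rerr.Error())
            return
        }
        log.Printf("request failed without a Panel response: %v", err)
    }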


@@ -8,11 +8,14 @@ import (
"io" "io"
"io/ioutil" "io/ioutil"
"net/http" "net/http"
"strconv"
"strings" "strings"
"time" "time"
"emperror.dev/errors" "emperror.dev/errors"
"github.com/apex/log" "github.com/apex/log"
"github.com/cenkalti/backoff/v4"
"github.com/pterodactyl/wings/system" "github.com/pterodactyl/wings/system"
) )
@@ -31,11 +34,11 @@ type Client interface {
 }

 type client struct {
 	httpClient  *http.Client
 	baseUrl     string
 	tokenId     string
 	token       string
-	attempts    int
+	maxAttempts int
 }

 // New returns a new HTTP request client that is used for making authenticated
@@ -46,7 +49,7 @@ func New(base string, opts ...ClientOption) Client {
 		httpClient: &http.Client{
 			Timeout: time.Second * 15,
 		},
-		attempts: 1,
+		maxAttempts: 0,
 	}
 	for _, opt := range opts {
 		opt(&c)
@@ -71,11 +74,31 @@ func WithHttpClient(httpClient *http.Client) ClientOption {
 	}
 }

+// Get executes a HTTP GET request.
+func (c *client) Get(ctx context.Context, path string, query q) (*Response, error) {
+	return c.request(ctx, http.MethodGet, path, nil, func(r *http.Request) {
+		q := r.URL.Query()
+		for k, v := range query {
+			q.Set(k, v)
+		}
+		r.URL.RawQuery = q.Encode()
+	})
+}
+
+// Post executes a HTTP POST request.
+func (c *client) Post(ctx context.Context, path string, data interface{}) (*Response, error) {
+	b, err := json.Marshal(data)
+	if err != nil {
+		return nil, err
+	}
+	return c.request(ctx, http.MethodPost, path, bytes.NewBuffer(b))
+}
+
 // requestOnce creates a http request and executes it once. Prefer request()
 // over this method when possible. It appends the path to the endpoint of the
 // client and adds the authentication token to the request.
 func (c *client) requestOnce(ctx context.Context, method, path string, body io.Reader, opts ...func(r *http.Request)) (*Response, error) {
-	req, err := http.NewRequest(method, c.baseUrl+path, body)
+	req, err := http.NewRequestWithContext(ctx, method, c.baseUrl+path, body)
 	if err != nil {
 		return nil, err
 	}
@@ -92,45 +115,86 @@ func (c *client) requestOnce(ctx context.Context, method, path string, body io.R
 	debugLogRequest(req)

-	res, err := c.httpClient.Do(req.WithContext(ctx))
+	res, err := c.httpClient.Do(req)
 	return &Response{res}, err
 }

-// request executes a http request and attempts when errors occur.
-// It appends the path to the endpoint of the client and adds the authentication token to the request.
-func (c *client) request(ctx context.Context, method, path string, body io.Reader, opts ...func(r *http.Request)) (res *Response, err error) {
-	for i := 0; i < c.attempts; i++ {
-		res, err = c.requestOnce(ctx, method, path, body, opts...)
-		if err == nil &&
-			res.StatusCode < http.StatusInternalServerError &&
-			res.StatusCode != http.StatusTooManyRequests {
-			break
-		}
-	}
-	if err != nil {
-		return nil, errors.WithStack(err)
-	}
-	return
-}
-
-// get executes a http get request.
-func (c *client) get(ctx context.Context, path string, query q) (*Response, error) {
-	return c.request(ctx, http.MethodGet, path, nil, func(r *http.Request) {
-		q := r.URL.Query()
-		for k, v := range query {
-			q.Set(k, v)
-		}
-		r.URL.RawQuery = q.Encode()
-	})
-}
-
-// post executes a http post request.
-func (c *client) post(ctx context.Context, path string, data interface{}) (*Response, error) {
-	b, err := json.Marshal(data)
+// request executes an HTTP request against the Panel API. If there is an error
+// encountered with the request it will be retried using an exponential backoff.
+// If the error returned from the Panel is due to API throttling or because there
+// are invalid authentication credentials provided the request will _not_ be
+// retried by the backoff.
+//
+// This function automatically appends the path to the current client endpoint
+// and adds the required authentication headers to the request that is being
+// created. Errors returned will be of the RequestError type if there was some
+// type of response from the API that can be parsed.
+func (c *client) request(ctx context.Context, method, path string, body io.Reader, opts ...func(r *http.Request)) (*Response, error) {
+	var res *Response
+	err := backoff.Retry(func() error {
+		r, err := c.requestOnce(ctx, method, path, body, opts...)
+		if err != nil {
+			if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
+				return backoff.Permanent(err)
+			}
+			return errors.WrapIf(err, "http: request creation failed")
+		}
+		res = r
+		if r.HasError() {
+			// Close the request body after returning the error to free up resources.
+			defer r.Body.Close()
+			// Don't keep spamming the endpoint if we've already made too many requests or
+			// if we're not even authenticated correctly. Retrying generally won't fix either
+			// of these issues.
+			if r.StatusCode == http.StatusForbidden ||
+				r.StatusCode == http.StatusTooManyRequests ||
+				r.StatusCode == http.StatusUnauthorized {
+				return backoff.Permanent(r.Error())
+			}
+			return r.Error()
+		}
+		return nil
+	}, c.backoff(ctx))
 	if err != nil {
+		if v, ok := err.(*backoff.PermanentError); ok {
+			return nil, v.Unwrap()
+		}
 		return nil, err
 	}
-	return c.request(ctx, http.MethodPost, path, bytes.NewBuffer(b))
+	return res, nil
+}
+
+// backoff returns an exponential backoff function for use with remote API
+// requests. This will allow an API call to be executed approximately 10 times
+// before it is finally reported back as an error.
+//
+// This allows for issues with DNS resolution, or rare race conditions due to
+// slower SQL queries on the Panel to potentially self-resolve without just
+// immediately failing the first request. The example below shows the amount of
+// time that has elapsed between each call to the handler when an error is
+// returned. You can tweak these values as needed to get the effect you desire.
+//
+// If maxAttempts is a value greater than 0 the backoff will be capped at a total
+// number of executions, or the MaxElapsedTime, whichever comes first.
+//
+// call(): 0s
+// call(): 552.330144ms
+// call(): 1.63271196s
+// call(): 2.94284202s
+// call(): 4.525234711s
+// call(): 6.865723375s
+// call(): 11.37194223s
+// call(): 14.593421816s
+// call(): 20.202045293s
+// call(): 27.36567952s <-- Stops here as MaxElapsedTime is 30 seconds
+func (c *client) backoff(ctx context.Context) backoff.BackOffContext {
+	b := backoff.NewExponentialBackOff()
+	b.MaxInterval = time.Second * 12
+	b.MaxElapsedTime = time.Second * 30
+	if c.maxAttempts > 0 {
+		return backoff.WithContext(backoff.WithMaxRetries(b, uint64(c.maxAttempts)), ctx)
+	}
+	return backoff.WithContext(b, ctx)
 }

 // Response is a custom response type that allows for commonly used error
@@ -157,15 +221,12 @@ func (r *Response) HasError() bool {
 func (r *Response) Read() ([]byte, error) {
 	var b []byte
 	if r.Response == nil {
-		return nil, errors.New("http: attempting to read missing response")
+		return nil, errors.New("remote: attempting to read missing response")
 	}
 	if r.Response.Body != nil {
 		b, _ = ioutil.ReadAll(r.Response.Body)
 	}
 	r.Response.Body = ioutil.NopCloser(bytes.NewBuffer(b))
 	return b, nil
 }
@@ -177,15 +238,16 @@ func (r *Response) BindJSON(v interface{}) error {
 	if err != nil {
 		return err
 	}
 	if err := json.Unmarshal(b, &v); err != nil {
-		return errors.Wrap(err, "http: could not unmarshal response")
+		return errors.Wrap(err, "remote: could not unmarshal response")
 	}
 	return nil
 }

 // Returns the first error message from the API call as a string. The error
-// message will be formatted similar to the below example:
+// message will be formatted similar to the below example. If there is no error
+// that can be parsed out of the API you'll still get a RequestError returned
+// but the RequestError.Code will be "_MissingResponseCode".
 //
 // HttpNotFoundException: The requested resource does not exist. (HTTP/404)
 func (r *Response) Error() error {
@@ -196,14 +258,18 @@ func (r *Response) Error() error {
 	var errs RequestErrors
 	_ = r.BindJSON(&errs)

-	e := &RequestError{}
+	e := &RequestError{
+		Code:   "_MissingResponseCode",
+		Status: strconv.Itoa(r.StatusCode),
+		Detail: "No error response returned from API endpoint.",
+	}
 	if len(errs.Errors) > 0 {
 		e = &errs.Errors[0]
 	}

 	e.response = r.Response
-	return e
+	return errors.WithStackDepth(e, 1)
 }

 // Logs the request into the debug log with all of the important request bits.
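
The retry behaviour above is built on github.com/cenkalti/backoff/v4. The standalone sketch below shows the same pattern of wrapping an operation in backoff.Retry and short-circuiting with backoff.Permanent; fetchWithRetry is a hypothetical helper, not code from Wings:

    package main

    import (
        "context"
        "errors"
        "fmt"
        "net/http"
        "time"

        "github.com/cenkalti/backoff/v4"
    )

    func fetchWithRetry(ctx context.Context, url string) (*http.Response, error) {
        var res *http.Response
        b := backoff.NewExponentialBackOff()
        b.MaxElapsedTime = 30 * time.Second
        err := backoff.Retry(func() error {
            req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
            if err != nil {
                // A malformed URL will never succeed, so stop retrying immediately.
                return backoff.Permanent(err)
            }
            r, err := http.DefaultClient.Do(req)
            if err != nil {
                return err // transient transport errors are retried with growing delays
            }
            if r.StatusCode == http.StatusUnauthorized {
                r.Body.Close()
                return backoff.Permanent(errors.New("unauthorized"))
            }
            res = r
            return nil
        }, backoff.WithContext(b, ctx))
        if err != nil {
            return nil, fmt.Errorf("request failed: %w", err)
        }
        return res, nil
    }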


@@ -12,12 +12,11 @@ import (
 func createTestClient(h http.HandlerFunc) (*client, *httptest.Server) {
 	s := httptest.NewServer(h)
 	c := &client{
 		httpClient: s.Client(),
 		baseUrl:    s.URL,
-		attempts:   1,
-		tokenId:    "testid",
-		token:      "testtoken",
+		maxAttempts: 1,
+		tokenId:     "testid",
+		token:       "testtoken",
 	}
 	return c, s
 }
@@ -47,7 +46,7 @@ func TestRequestRetry(t *testing.T) {
 		}
 		i++
 	})
-	c.attempts = 2
+	c.maxAttempts = 2
 	r, err := c.request(context.Background(), "", "", nil)
 	assert.NoError(t, err)
 	assert.NotNil(t, r)
@@ -60,12 +59,15 @@ func TestRequestRetry(t *testing.T) {
 		rw.WriteHeader(http.StatusInternalServerError)
 		i++
 	})
-	c.attempts = 2
+	c.maxAttempts = 2
 	r, err = c.request(context.Background(), "get", "", nil)
-	assert.NoError(t, err)
-	assert.NotNil(t, r)
-	assert.Equal(t, http.StatusInternalServerError, r.StatusCode)
-	assert.Equal(t, 2, i)
+	assert.Error(t, err)
+	assert.Nil(t, r)
+
+	v := AsRequestError(err)
+	assert.NotNil(t, v)
+	assert.Equal(t, http.StatusInternalServerError, v.StatusCode())
+	assert.Equal(t, 3, i)
 }

 func TestGet(t *testing.T) {
@@ -74,7 +76,7 @@ func TestGet(t *testing.T) {
 		assert.Len(t, r.URL.Query(), 1)
 		assert.Equal(t, "world", r.URL.Query().Get("hello"))
 	})
-	r, err := c.get(context.Background(), "/test", q{"hello": "world"})
+	r, err := c.Get(context.Background(), "/test", q{"hello": "world"})
 	assert.NoError(t, err)
 	assert.NotNil(t, r)
 }
@@ -87,7 +89,7 @@ func TestPost(t *testing.T) {
 		assert.Equal(t, http.MethodPost, r.Method)
 	})
-	r, err := c.post(context.Background(), "/test", test)
+	r, err := c.Post(context.Background(), "/test", test)
 	assert.NoError(t, err)
 	assert.NotNil(t, r)
 }
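
Note the retry arithmetic the updated test relies on: backoff.WithMaxRetries(b, n) allows n retries after the initial attempt, so maxAttempts = 2 means the failing handler is hit three times. A tiny illustration of that behaviour, assuming the backoff/v4 and standard errors packages are imported:

    calls := 0
    _ = backoff.Retry(func() error {
        calls++
        return errors.New("always failing")
    }, backoff.WithMaxRetries(&backoff.ZeroBackOff{}, 2))
    // calls == 3: one initial attempt plus two retries.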


@@ -58,62 +58,54 @@ func (c *client) GetServers(ctx context.Context, limit int) ([]RawServerData, er
 // things in a bad state within the Panel. This API call is executed once Wings
 // has fully booted all of the servers.
 func (c *client) ResetServersState(ctx context.Context) error {
-	res, err := c.post(ctx, "/servers/reset", nil)
+	res, err := c.Post(ctx, "/servers/reset", nil)
 	if err != nil {
-		return errors.WrapIf(err, "remote/servers: failed to reset server state on Panel")
+		return errors.WrapIf(err, "remote: failed to reset server state on Panel")
 	}
-	res.Body.Close()
+	_ = res.Body.Close()
 	return nil
 }

 func (c *client) GetServerConfiguration(ctx context.Context, uuid string) (ServerConfigurationResponse, error) {
 	var config ServerConfigurationResponse
-	res, err := c.get(ctx, fmt.Sprintf("/servers/%s", uuid), nil)
+	res, err := c.Get(ctx, fmt.Sprintf("/servers/%s", uuid), nil)
 	if err != nil {
 		return config, err
 	}
 	defer res.Body.Close()
-
-	if res.HasError() {
-		return config, res.Error()
-	}
 	err = res.BindJSON(&config)
 	return config, err
 }

 func (c *client) GetInstallationScript(ctx context.Context, uuid string) (InstallationScript, error) {
-	res, err := c.get(ctx, fmt.Sprintf("/servers/%s/install", uuid), nil)
+	res, err := c.Get(ctx, fmt.Sprintf("/servers/%s/install", uuid), nil)
 	if err != nil {
 		return InstallationScript{}, err
 	}
 	defer res.Body.Close()
-
-	if res.HasError() {
-		return InstallationScript{}, res.Error()
-	}
 	var config InstallationScript
 	err = res.BindJSON(&config)
 	return config, err
 }

 func (c *client) SetInstallationStatus(ctx context.Context, uuid string, successful bool) error {
-	resp, err := c.post(ctx, fmt.Sprintf("/servers/%s/install", uuid), d{"successful": successful})
+	resp, err := c.Post(ctx, fmt.Sprintf("/servers/%s/install", uuid), d{"successful": successful})
 	if err != nil {
 		return err
 	}
-	defer resp.Body.Close()
-	return resp.Error()
+	_ = resp.Body.Close()
+	return nil
 }

 func (c *client) SetArchiveStatus(ctx context.Context, uuid string, successful bool) error {
-	resp, err := c.post(ctx, fmt.Sprintf("/servers/%s/archive", uuid), d{"successful": successful})
+	resp, err := c.Post(ctx, fmt.Sprintf("/servers/%s/archive", uuid), d{"successful": successful})
 	if err != nil {
 		return err
 	}
-	defer resp.Body.Close()
-	return resp.Error()
+	_ = resp.Body.Close()
+	return nil
 }

 func (c *client) SetTransferStatus(ctx context.Context, uuid string, successful bool) error {
@@ -121,12 +113,12 @@ func (c *client) SetTransferStatus(ctx context.Context, uuid string, successful
 	if successful {
 		state = "success"
 	}
-	resp, err := c.get(ctx, fmt.Sprintf("/servers/%s/transfer/%s", uuid, state), nil)
+	resp, err := c.Get(ctx, fmt.Sprintf("/servers/%s/transfer/%s", uuid, state), nil)
 	if err != nil {
 		return err
 	}
-	defer resp.Body.Close()
-	return resp.Error()
+	_ = resp.Body.Close()
+	return nil
 }

 // ValidateSftpCredentials makes a request to determine if the username and
@@ -136,66 +128,54 @@ func (c *client) SetTransferStatus(ctx context.Context, uuid string, successful
 // all of the authorization security logic to the Panel.
 func (c *client) ValidateSftpCredentials(ctx context.Context, request SftpAuthRequest) (SftpAuthResponse, error) {
 	var auth SftpAuthResponse
-	res, err := c.post(ctx, "/sftp/auth", request)
+	res, err := c.Post(ctx, "/sftp/auth", request)
 	if err != nil {
+		if err := AsRequestError(err); err != nil && (err.StatusCode() >= 400 && err.StatusCode() < 500) {
+			log.WithFields(log.Fields{"subsystem": "sftp", "username": request.User, "ip": request.IP}).Warn(err.Error())
+			return auth, &SftpInvalidCredentialsError{}
+		}
 		return auth, err
 	}
 	defer res.Body.Close()

-	e := res.Error()
-	if e != nil {
-		if res.StatusCode >= 400 && res.StatusCode < 500 {
-			log.WithFields(log.Fields{
-				"subsystem": "sftp",
-				"username":  request.User,
-				"ip":        request.IP,
-			}).Warn(e.Error())
-			return auth, &SftpInvalidCredentialsError{}
-		}
-		return auth, errors.New(e.Error())
-	}
-
-	err = res.BindJSON(&auth)
-	return auth, err
+	if err := res.BindJSON(&auth); err != nil {
+		return auth, err
+	}
+	return auth, nil
 }

 func (c *client) GetBackupRemoteUploadURLs(ctx context.Context, backup string, size int64) (BackupRemoteUploadResponse, error) {
 	var data BackupRemoteUploadResponse
-	res, err := c.get(ctx, fmt.Sprintf("/backups/%s", backup), q{"size": strconv.FormatInt(size, 10)})
+	res, err := c.Get(ctx, fmt.Sprintf("/backups/%s", backup), q{"size": strconv.FormatInt(size, 10)})
 	if err != nil {
 		return data, err
 	}
 	defer res.Body.Close()
-
-	if res.HasError() {
-		return data, res.Error()
-	}
-
-	err = res.BindJSON(&data)
-	return data, err
+	if err := res.BindJSON(&data); err != nil {
+		return data, err
+	}
+	return data, nil
 }

 func (c *client) SetBackupStatus(ctx context.Context, backup string, data BackupRequest) error {
-	resp, err := c.post(ctx, fmt.Sprintf("/backups/%s", backup), data)
+	resp, err := c.Post(ctx, fmt.Sprintf("/backups/%s", backup), data)
 	if err != nil {
 		return err
 	}
-	defer resp.Body.Close()
-	return resp.Error()
+	_ = resp.Body.Close()
+	return nil
 }

 // SendRestorationStatus triggers a request to the Panel to notify it that a
 // restoration has been completed and the server should be marked as being
 // activated again.
 func (c *client) SendRestorationStatus(ctx context.Context, backup string, successful bool) error {
-	resp, err := c.post(ctx, fmt.Sprintf("/backups/%s/restore", backup), d{"successful": successful})
+	resp, err := c.Post(ctx, fmt.Sprintf("/backups/%s/restore", backup), d{"successful": successful})
 	if err != nil {
 		return err
 	}
-	defer resp.Body.Close()
-	return resp.Error()
+	_ = resp.Body.Close()
+	return nil
 }

 // getServersPaged returns a subset of servers from the Panel API using the
@@ -206,7 +186,7 @@ func (c *client) getServersPaged(ctx context.Context, page, limit int) ([]RawSer
 		Meta Pagination `json:"meta"`
 	}

-	res, err := c.get(ctx, "/servers", q{
+	res, err := c.Get(ctx, "/servers", q{
 		"page":     strconv.Itoa(page),
 		"per_page": strconv.Itoa(limit),
 	})
@@ -214,10 +194,6 @@ func (c *client) getServersPaged(ctx context.Context, page, limit int) ([]RawSer
 		return nil, r.Meta, err
 	}
 	defer res.Body.Close()
-
-	if res.HasError() {
-		return nil, r.Meta, res.Error()
-	}
 	if err := res.BindJSON(&r); err != nil {
 		return nil, r.Meta, err
 	}
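
With request() now converting non-2xx responses into *RequestError values, the helpers above no longer need per-call res.HasError() checks. Any new client method can follow the same shape; the endpoint below is hypothetical and only illustrates the pattern:

    // SetExampleFlag is a hypothetical wrapper: transport and HTTP-level errors
    // are already surfaced by c.Post, so the caller only closes the body.
    func (c *client) SetExampleFlag(ctx context.Context, uuid string, flag bool) error {
        res, err := c.Post(ctx, fmt.Sprintf("/servers/%s/example", uuid), d{"flag": flag})
        if err != nil {
            return err
        }
        _ = res.Body.Close()
        return nil
    }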


@@ -6,6 +6,7 @@ import (
"strings" "strings"
"github.com/apex/log" "github.com/apex/log"
"github.com/pterodactyl/wings/parser" "github.com/pterodactyl/wings/parser"
) )
@@ -32,7 +33,7 @@ type Pagination struct {
 // ServerConfigurationResponse holds the server configuration data returned from
 // the Panel. When a server process is started, Wings communicates with the
-// Panel to fetch the latest build information as well as get all of the details
+// Panel to fetch the latest build information as well as get all the details
 // needed to parse the given Egg.
 //
 // This means we do not need to hit Wings each time part of the server is


@@ -2,26 +2,26 @@ package downloader
 import (
 	"context"
-	"emperror.dev/errors"
 	"encoding/json"
 	"fmt"
-	"github.com/google/uuid"
-	"github.com/pterodactyl/wings/server"
 	"io"
 	"net"
 	"net/http"
 	"net/url"
 	"path/filepath"
-	"regexp"
-	"strconv"
 	"strings"
 	"sync"
 	"time"
+
+	"emperror.dev/errors"
+	"github.com/google/uuid"
+	"github.com/pterodactyl/wings/server"
 )

 var client = &http.Client{
 	Timeout: time.Hour * 12,
-	// Disallow any redirect on a HTTP call. This is a security requirement: do not modify
+	// Disallow any redirect on an HTTP call. This is a security requirement: do not modify
 	// this logic without first ensuring that the new target location IS NOT within the current
 	// instance's local network.
 	//
@@ -36,18 +36,14 @@ var client = &http.Client{
 }

 var instance = &Downloader{
-	// Tracks all of the active downloads.
+	// Tracks all the active downloads.
 	downloadCache: make(map[string]*Download),
-	// Tracks all of the downloads active for a given server instance. This is
+	// Tracks all the downloads active for a given server instance. This is
 	// primarily used to make things quicker and keep the code a little more
 	// legible throughout here.
 	serverCache: make(map[string][]string),
 }

-// Regex to match the end of an IPv4/IPv6 address. This allows the port to be removed
-// so that we are just working with the raw IP address in question.
-var ipMatchRegex = regexp.MustCompile(`(:\d+)$`)
-
 // Internal IP ranges that should be blocked if the resource requested resolves within.
 var internalRanges = []*net.IPNet{
 	mustParseCIDR("127.0.0.1/8"),
@@ -60,9 +56,11 @@ var internalRanges = []*net.IPNet{
mustParseCIDR("fc00::/7"), mustParseCIDR("fc00::/7"),
} }
const ErrInternalResolution = errors.Sentinel("downloader: destination resolves to internal network location") const (
const ErrInvalidIPAddress = errors.Sentinel("downloader: invalid IP address") ErrInternalResolution = errors.Sentinel("downloader: destination resolves to internal network location")
const ErrDownloadFailed = errors.Sentinel("downloader: download request failed") ErrInvalidIPAddress = errors.Sentinel("downloader: invalid IP address")
ErrDownloadFailed = errors.Sentinel("downloader: download request failed")
)
type Counter struct { type Counter struct {
total int total int
@@ -77,8 +75,8 @@ func (c *Counter) Write(p []byte) (int, error) {
 }

 type DownloadRequest struct {
-	URL       *url.URL
 	Directory string
+	URL       *url.URL
 }

 type Download struct {
@@ -90,7 +88,7 @@ type Download struct {
 	cancelFunc *context.CancelFunc
 }

-// Starts a new tracked download which allows for cancellation later on by calling
+// New starts a new tracked download which allows for cancellation later on by calling
 // the Downloader.Cancel function.
 func New(s *server.Server, r DownloadRequest) *Download {
 	dl := Download{
@@ -102,14 +100,14 @@ func New(s *server.Server, r DownloadRequest) *Download {
 	return &dl
 }

-// Returns all of the tracked downloads for a given server instance.
+// ByServer returns all the tracked downloads for a given server instance.
 func ByServer(sid string) []*Download {
 	instance.mu.Lock()
 	defer instance.mu.Unlock()
 	var downloads []*Download
 	if v, ok := instance.serverCache[sid]; ok {
 		for _, id := range v {
-			if dl, dlok := instance.downloadCache[id]; dlok {
+			if dl, ok := instance.downloadCache[id]; ok {
 				downloads = append(downloads, dl)
 			}
 		}
@@ -117,7 +115,7 @@ func ByServer(sid string) []*Download {
 	return downloads
 }

-// Returns a single Download matching a given identifier. If no download is found
+// ByID returns a single Download matching a given identifier. If no download is found
 // the second argument in the response will be false.
 func ByID(dlid string) *Download {
 	return instance.find(dlid)
@@ -134,7 +132,7 @@ func (dl Download) MarshalJSON() ([]byte, error) {
 	})
 }

-// Executes a given download for the server and begins writing the file to the disk. Once
+// Execute executes a given download for the server and begins writing the file to the disk. Once
 // completed the download will be removed from the cache.
 func (dl *Download) Execute() error {
 	ctx, cancel := context.WithTimeout(context.Background(), time.Hour*12)
@@ -185,7 +183,7 @@ func (dl *Download) Execute() error {
 	return nil
 }

-// Cancels a running download and frees up the associated resources. If a file is being
+// Cancel cancels a running download and frees up the associated resources. If a file is being
 // written a partial file will remain present on the disk.
 func (dl *Download) Cancel() {
 	if dl.cancelFunc != nil {
@@ -194,12 +192,12 @@ func (dl *Download) Cancel() {
 	instance.remove(dl.Identifier)
 }

-// Checks if the given download belongs to the provided server.
+// BelongsTo checks if the given download belongs to the provided server.
 func (dl *Download) BelongsTo(s *server.Server) bool {
-	return dl.server.Id() == s.Id()
+	return dl.server.ID() == s.ID()
 }

-// Returns the current progress of the download as a float value between 0 and 1 where
+// Progress returns the current progress of the download as a float value between 0 and 1 where
 // 1 indicates that the download is completed.
 func (dl *Download) Progress() float64 {
 	dl.mu.RLock()
@@ -232,15 +230,19 @@ func (dl *Download) isExternalNetwork(ctx context.Context) error {
 	// This cluster-fuck of math and integer shit converts an integer IP into a proper IPv4.
 	// For example: 16843009 would become 1.1.1.1
-	if i, err := strconv.ParseInt(host, 10, 64); err == nil {
-		host = strconv.FormatInt((i>>24)&0xFF, 10) + "." + strconv.FormatInt((i>>16)&0xFF, 10) + "." + strconv.FormatInt((i>>8)&0xFF, 10) + "." + strconv.FormatInt(i&0xFF, 10)
-	}
+	//if i, err := strconv.ParseInt(host, 10, 64); err == nil {
+	//	host = strconv.FormatInt((i>>24)&0xFF, 10) + "." + strconv.FormatInt((i>>16)&0xFF, 10) + "." + strconv.FormatInt((i>>8)&0xFF, 10) + "." + strconv.FormatInt(i&0xFF, 10)
+	//}

-	if !ipMatchRegex.MatchString(host) {
-		if dl.req.URL.Scheme == "https" {
-			host = host + ":443"
-		} else {
-			host = host + ":80"
+	if _, _, err := net.SplitHostPort(host); err != nil {
+		if !strings.Contains(err.Error(), "missing port in address") {
+			return errors.WithStack(err)
+		}
+		switch dl.req.URL.Scheme {
+		case "http":
+			host += ":80"
+		case "https":
+			host += ":443"
 		}
 	}
@@ -250,7 +252,11 @@ func (dl *Download) isExternalNetwork(ctx context.Context) error {
 	}
 	_ = c.Close()

-	ip := net.ParseIP(ipMatchRegex.ReplaceAllString(c.RemoteAddr().String(), ""))
+	ipStr, _, err := net.SplitHostPort(c.RemoteAddr().String())
+	if err != nil {
+		return errors.WithStack(err)
+	}
+	ip := net.ParseIP(ipStr)
 	if ip == nil {
 		return errors.WithStack(ErrInvalidIPAddress)
 	}
@@ -265,7 +271,7 @@ func (dl *Download) isExternalNetwork(ctx context.Context) error {
 	return nil
 }

-// Defines a global downloader struct that keeps track of all currently processing downloads
+// Downloader represents a global downloader that keeps track of all currently processing downloads
 // for the machine.
 type Downloader struct {
 	mu sync.RWMutex
@@ -273,11 +279,11 @@ type Downloader struct {
 	serverCache map[string][]string
 }

-// Tracks a download in the internal cache for this instance.
+// track tracks a download in the internal cache for this instance.
 func (d *Downloader) track(dl *Download) {
 	d.mu.Lock()
 	defer d.mu.Unlock()
-	sid := dl.server.Id()
+	sid := dl.server.ID()
 	if _, ok := d.downloadCache[dl.Identifier]; !ok {
 		d.downloadCache[dl.Identifier] = dl
 		if _, ok := d.serverCache[sid]; !ok {
@@ -287,7 +293,7 @@ func (d *Downloader) track(dl *Download) {
 	}
 }

-// Finds a given download entry using the provided ID and returns it.
+// find finds a given download entry using the provided ID and returns it.
 func (d *Downloader) find(dlid string) *Download {
 	d.mu.RLock()
 	defer d.mu.RUnlock()
@@ -297,24 +303,24 @@ func (d *Downloader) find(dlid string) *Download {
 	return nil
 }

-// Remove the given download reference from the cache storing them. This also updates
+// remove removes the given download reference from the cache storing them. This also updates
 // the slice of active downloads for a given server to not include this download.
-func (d *Downloader) remove(dlid string) {
+func (d *Downloader) remove(dlID string) {
 	d.mu.Lock()
 	defer d.mu.Unlock()
-	if _, ok := d.downloadCache[dlid]; !ok {
+	if _, ok := d.downloadCache[dlID]; !ok {
 		return
 	}
-	sid := d.downloadCache[dlid].server.Id()
-	delete(d.downloadCache, dlid)
-	if tracked, ok := d.serverCache[sid]; ok {
+	sID := d.downloadCache[dlID].server.ID()
+	delete(d.downloadCache, dlID)
+	if tracked, ok := d.serverCache[sID]; ok {
 		var out []string
 		for _, k := range tracked {
-			if k != dlid {
+			if k != dlID {
 				out = append(out, k)
 			}
 		}
-		d.serverCache[sid] = out
+		d.serverCache[sID] = out
 	}
 }
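
The regex-based port stripping was replaced with net.SplitHostPort from the standard library. A small standalone illustration of the default-port handling used above (ensurePort and the sample values are only for demonstration):

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // ensurePort appends a default port only when SplitHostPort reports one is missing.
    func ensurePort(host, scheme string) (string, error) {
        if _, _, err := net.SplitHostPort(host); err != nil {
            if !strings.Contains(err.Error(), "missing port in address") {
                return "", err
            }
            switch scheme {
            case "http":
                host += ":80"
            case "https":
                host += ":443"
            }
        }
        return host, nil
    }

    func main() {
        h, _ := ensurePort("example.com", "https")
        fmt.Println(h) // example.com:443
        h, _ = ensurePort("[::1]:8080", "http")
        fmt.Println(h) // [::1]:8080 is left untouched
    }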


@@ -10,6 +10,7 @@ import (
"github.com/apex/log" "github.com/apex/log"
"github.com/gin-gonic/gin" "github.com/gin-gonic/gin"
"github.com/google/uuid" "github.com/google/uuid"
"github.com/pterodactyl/wings/server" "github.com/pterodactyl/wings/server"
"github.com/pterodactyl/wings/server/filesystem" "github.com/pterodactyl/wings/server/filesystem"
) )


@@ -2,6 +2,7 @@ package router
 import (
 	"github.com/gin-gonic/gin"
+
 	"github.com/pterodactyl/wings/router/middleware"
 	"github.com/pterodactyl/wings/server"
 )


@@ -12,6 +12,7 @@ import (
"github.com/apex/log" "github.com/apex/log"
"github.com/gin-gonic/gin" "github.com/gin-gonic/gin"
"github.com/google/uuid" "github.com/google/uuid"
"github.com/pterodactyl/wings/config" "github.com/pterodactyl/wings/config"
"github.com/pterodactyl/wings/remote" "github.com/pterodactyl/wings/remote"
"github.com/pterodactyl/wings/server" "github.com/pterodactyl/wings/server"
@@ -62,7 +63,7 @@ func (re *RequestError) Abort(c *gin.Context, status int) {
 	// server triggered this error.
 	if s, ok := c.Get("server"); ok {
 		if s, ok := s.(*server.Server); ok {
-			event = event.WithField("server_id", s.Id())
+			event = event.WithField("server_id", s.ID())
 		}
 	}
@@ -262,14 +263,14 @@ func ServerExists() gin.HandlerFunc {
if c.Param("server") != "" { if c.Param("server") != "" {
manager := ExtractManager(c) manager := ExtractManager(c)
s = manager.Find(func(s *server.Server) bool { s = manager.Find(func(s *server.Server) bool {
return c.Param("server") == s.Id() return c.Param("server") == s.ID()
}) })
} }
if s == nil { if s == nil {
c.AbortWithStatusJSON(http.StatusNotFound, gin.H{"error": "The requested resource does not exist on this instance."}) c.AbortWithStatusJSON(http.StatusNotFound, gin.H{"error": "The requested resource does not exist on this instance."})
return return
} }
c.Set("logger", ExtractLogger(c).WithField("server_id", s.Id())) c.Set("logger", ExtractLogger(c).WithField("server_id", s.ID()))
c.Set("server", s) c.Set("server", s)
c.Next() c.Next()
} }


@@ -3,6 +3,7 @@ package router
 import (
 	"github.com/apex/log"
 	"github.com/gin-gonic/gin"
+
 	"github.com/pterodactyl/wings/remote"
 	"github.com/pterodactyl/wings/router/middleware"
 	"github.com/pterodactyl/wings/server"


@@ -8,6 +8,7 @@ import (
"strconv" "strconv"
"github.com/gin-gonic/gin" "github.com/gin-gonic/gin"
"github.com/pterodactyl/wings/router/middleware" "github.com/pterodactyl/wings/router/middleware"
"github.com/pterodactyl/wings/router/tokens" "github.com/pterodactyl/wings/router/tokens"
"github.com/pterodactyl/wings/server/backup" "github.com/pterodactyl/wings/server/backup"


@@ -10,6 +10,7 @@ import (
"emperror.dev/errors" "emperror.dev/errors"
"github.com/apex/log" "github.com/apex/log"
"github.com/gin-gonic/gin" "github.com/gin-gonic/gin"
"github.com/pterodactyl/wings/router/downloader" "github.com/pterodactyl/wings/router/downloader"
"github.com/pterodactyl/wings/router/middleware" "github.com/pterodactyl/wings/router/middleware"
"github.com/pterodactyl/wings/router/tokens" "github.com/pterodactyl/wings/router/tokens"
@@ -195,7 +196,7 @@ func deleteServer(c *gin.Context) {
 	s.Websockets().CancelAll()

 	// Remove any pending remote file downloads for the server.
-	for _, dl := range downloader.ByServer(s.Id()) {
+	for _, dl := range downloader.ByServer(s.ID()) {
 		dl.Cancel()
 	}
@@ -220,7 +221,7 @@ func deleteServer(c *gin.Context) {
 	}(s.Filesystem().Path())

 	middleware.ExtractManager(c).Remove(func(server *server.Server) bool {
-		return server.Id() == s.Id()
+		return server.ID() == s.ID()
 	})

 	// Deallocate the reference to this server.


@@ -8,6 +8,7 @@ import (
"emperror.dev/errors" "emperror.dev/errors"
"github.com/apex/log" "github.com/apex/log"
"github.com/gin-gonic/gin" "github.com/gin-gonic/gin"
"github.com/pterodactyl/wings/router/middleware" "github.com/pterodactyl/wings/router/middleware"
"github.com/pterodactyl/wings/server" "github.com/pterodactyl/wings/server"
"github.com/pterodactyl/wings/server/backup" "github.com/pterodactyl/wings/server/backup"
@@ -42,7 +43,7 @@ func postServerBackup(c *gin.Context) {
 	// Attach the server ID and the request ID to the adapter log context for easier
 	// parsing in the logs.
 	adapter.WithLogContext(map[string]interface{}{
-		"server":     s.Id(),
+		"server":     s.ID(),
 		"request_id": c.GetString("request_id"),
 	})


@@ -16,12 +16,13 @@ import (
"emperror.dev/errors" "emperror.dev/errors"
"github.com/apex/log" "github.com/apex/log"
"github.com/gin-gonic/gin" "github.com/gin-gonic/gin"
"golang.org/x/sync/errgroup"
"github.com/pterodactyl/wings/router/downloader" "github.com/pterodactyl/wings/router/downloader"
"github.com/pterodactyl/wings/router/middleware" "github.com/pterodactyl/wings/router/middleware"
"github.com/pterodactyl/wings/router/tokens" "github.com/pterodactyl/wings/router/tokens"
"github.com/pterodactyl/wings/server" "github.com/pterodactyl/wings/server"
"github.com/pterodactyl/wings/server/filesystem" "github.com/pterodactyl/wings/server/filesystem"
"golang.org/x/sync/errgroup"
) )
// getServerFileContents returns the contents of a file on the server. // getServerFileContents returns the contents of a file on the server.
@@ -245,7 +246,7 @@ func postServerWriteFile(c *gin.Context) {
 func getServerPullingFiles(c *gin.Context) {
 	s := ExtractServer(c)

 	c.JSON(http.StatusOK, gin.H{
-		"downloads": downloader.ByServer(s.Id()),
+		"downloads": downloader.ByServer(s.ID()),
 	})
 }
@@ -253,13 +254,20 @@ func getServerPullingFiles(c *gin.Context) {
 func postServerPullRemoteFile(c *gin.Context) {
 	s := ExtractServer(c)
 	var data struct {
+		// Deprecated
+		Directory string `binding:"required_without=RootPath,omitempty" json:"directory"`
+		RootPath  string `binding:"required_without=Directory,omitempty" json:"root"`
 		URL       string `binding:"required" json:"url"`
-		Directory string `binding:"required,omitempty" json:"directory"`
 	}
 	if err := c.BindJSON(&data); err != nil {
 		return
 	}

+	// Handle the deprecated Directory field in the struct until it is removed.
+	if data.Directory != "" && data.RootPath == "" {
+		data.RootPath = data.Directory
+	}
+
 	u, err := url.Parse(data.URL)
 	if err != nil {
 		if e, ok := err.(*url.Error); ok {
@@ -277,7 +285,7 @@ func postServerPullRemoteFile(c *gin.Context) {
 		return
 	}

 	// Do not allow more than three simultaneous remote file downloads at one time.
-	if len(downloader.ByServer(s.Id())) >= 3 {
+	if len(downloader.ByServer(s.ID())) >= 3 {
 		c.AbortWithStatusJSON(http.StatusBadRequest, gin.H{
 			"error": "This server has reached its limit of 3 simultaneous remote file downloads at once. Please wait for one to complete before trying again.",
 		})
@@ -285,11 +293,11 @@ func postServerPullRemoteFile(c *gin.Context) {
 	}

 	dl := downloader.New(s, downloader.DownloadRequest{
+		Directory: data.RootPath,
 		URL:       u,
-		Directory: data.Directory,
 	})

-	// Execute this pull in a seperate thread since it may take a long time to complete.
+	// Execute this pull in a separate thread since it may take a long time to complete.
 	go func() {
 		s.Log().WithField("download_id", dl.Identifier).WithField("url", u.String()).Info("starting pull of remote file to disk")
 		if err := dl.Execute(); err != nil {
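
Clients should now send the new root field; directory is still accepted, but only as a deprecated alias that the handler copies into RootPath. Hypothetical request bodies for this endpoint (the URL and paths are examples only):

    // Preferred form using the new "root" key.
    preferred := []byte(`{"url": "https://example.com/world.zip", "root": "/worlds"}`)
    // Deprecated form; still mapped onto RootPath until the field is removed.
    legacy := []byte(`{"url": "https://example.com/world.zip", "directory": "/worlds"}`)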


@@ -7,6 +7,7 @@ import (
"github.com/gin-gonic/gin" "github.com/gin-gonic/gin"
ws "github.com/gorilla/websocket" ws "github.com/gorilla/websocket"
"github.com/pterodactyl/wings/router/middleware" "github.com/pterodactyl/wings/router/middleware"
"github.com/pterodactyl/wings/router/websocket" "github.com/pterodactyl/wings/router/websocket"
) )


@@ -2,11 +2,14 @@ package router
 import (
 	"bytes"
+	"context"
+	"errors"
 	"net/http"
 	"strings"

 	"github.com/apex/log"
 	"github.com/gin-gonic/gin"
+
 	"github.com/pterodactyl/wings/config"
 	"github.com/pterodactyl/wings/installer"
 	"github.com/pterodactyl/wings/router/middleware"
@@ -65,14 +68,30 @@ func postCreateServer(c *gin.Context) {
 	// cycle. If there are any errors they will be logged and communicated back
 	// to the Panel where a reinstall may take place.
 	go func(i *installer.Installer) {
-		err := i.Server().CreateEnvironment()
-		if err != nil {
+		if err := i.Server().CreateEnvironment(); err != nil {
 			i.Server().Log().WithField("error", err).Error("failed to create server environment during install process")
 			return
 		}

 		if err := i.Server().Install(false); err != nil {
 			log.WithFields(log.Fields{"server": i.Uuid(), "error": err}).Error("failed to run install process for server")
+			return
+		}
+
+		if i.Server().Config().StartOnCompletion {
+			log.WithField("server_id", i.Server().ID()).Debug("starting server after successful installation")
+			if err := i.Server().HandlePowerAction(server.PowerActionStart, 30); err != nil {
+				if errors.Is(err, context.DeadlineExceeded) {
+					log.WithFields(log.Fields{"server_id": i.Server().ID(), "action": "start"}).
+						Warn("could not acquire a lock while attempting to perform a power action")
+				} else {
+					log.WithFields(log.Fields{"server_id": i.Server().ID(), "action": "start", "error": err}).
+						Error("encountered error processing a server power action in the background")
+				}
+			}
+		} else {
+			log.WithField("server_id", i.Server().ID()).
+				Debug("skipping automatic start after successful server installation")
 		}
 	}(install)


@@ -23,6 +23,7 @@ import (
"github.com/juju/ratelimit" "github.com/juju/ratelimit"
"github.com/mholt/archiver/v3" "github.com/mholt/archiver/v3"
"github.com/mitchellh/colorstring" "github.com/mitchellh/colorstring"
"github.com/pterodactyl/wings/config" "github.com/pterodactyl/wings/config"
"github.com/pterodactyl/wings/installer" "github.com/pterodactyl/wings/installer"
"github.com/pterodactyl/wings/remote" "github.com/pterodactyl/wings/remote"
@@ -75,14 +76,14 @@ func getServerArchive(c *gin.Context) {
 	}

 	s := ExtractServer(c)
-	if token.Subject != s.Id() {
+	if token.Subject != s.ID() {
 		c.AbortWithStatusJSON(http.StatusForbidden, gin.H{
 			"error": "Missing required token subject, or subject is not valid for the requested server.",
 		})
 		return
 	}

-	archivePath := getArchivePath(s.Id())
+	archivePath := getArchivePath(s.ID())

 	// Stat the archive file.
 	st, err := os.Lstat(archivePath)
@@ -123,7 +124,7 @@ func getServerArchive(c *gin.Context) {
c.Header("X-Checksum", checksum) c.Header("X-Checksum", checksum)
c.Header("X-Mime-Type", "application/tar+gzip") c.Header("X-Mime-Type", "application/tar+gzip")
c.Header("Content-Length", strconv.Itoa(int(st.Size()))) c.Header("Content-Length", strconv.Itoa(int(st.Size())))
c.Header("Content-Disposition", "attachment; filename="+strconv.Quote(s.Id()+".tar.gz")) c.Header("Content-Disposition", "attachment; filename="+strconv.Quote(s.ID()+".tar.gz"))
c.Header("Content-Type", "application/octet-stream") c.Header("Content-Type", "application/octet-stream")
_, _ = bufio.NewReader(f).WriteTo(c.Writer) _, _ = bufio.NewReader(f).WriteTo(c.Writer)
@@ -134,7 +135,7 @@ func postServerArchive(c *gin.Context) {
 	manager := middleware.ExtractManager(c)

 	go func(s *server.Server) {
-		l := log.WithField("server", s.Id())
+		l := log.WithField("server", s.ID())

 		// This function automatically adds the Source Node prefix and Timestamp to the log
 		// output before sending it over the websocket.
@@ -157,7 +158,7 @@ func postServerArchive(c *gin.Context) {
 			s.Events().Publish(server.TransferStatusEvent, "failure")

 			sendTransferLog("Attempting to notify panel of archive failure..")
-			if err := manager.Client().SetArchiveStatus(s.Context(), s.Id(), false); err != nil {
+			if err := manager.Client().SetArchiveStatus(s.Context(), s.ID(), false); err != nil {
 				if !remote.IsRequestError(err) {
 					sendTransferLog("Failed to notify panel of archive failure: " + err.Error())
 					l.WithField("error", err).Error("failed to notify panel of failed archive status")
@@ -190,7 +191,7 @@ func postServerArchive(c *gin.Context) {
 		}

 		// Attempt to get an archive of the server.
-		if err := a.Create(getArchivePath(s.Id())); err != nil {
+		if err := a.Create(getArchivePath(s.ID())); err != nil {
 			sendTransferLog("An error occurred while archiving the server: " + err.Error())
 			l.WithField("error", err).Error("failed to get transfer archive for server")
 			return
@@ -199,7 +200,7 @@ func postServerArchive(c *gin.Context) {
sendTransferLog("Successfully created archive, attempting to notify panel..") sendTransferLog("Successfully created archive, attempting to notify panel..")
l.Info("successfully created server transfer archive, notifying panel..") l.Info("successfully created server transfer archive, notifying panel..")
if err := manager.Client().SetArchiveStatus(s.Context(), s.Id(), true); err != nil { if err := manager.Client().SetArchiveStatus(s.Context(), s.ID(), true); err != nil {
if !remote.IsRequestError(err) { if !remote.IsRequestError(err) {
sendTransferLog("Failed to notify panel of archive success: " + err.Error()) sendTransferLog("Failed to notify panel of archive success: " + err.Error())
l.WithField("error", err).Error("failed to notify panel of successful archive status") l.WithField("error", err).Error("failed to notify panel of successful archive status")
@@ -360,7 +361,7 @@ func postTransfer(c *gin.Context) {
sendTransferLog("Server transfer failed, check Wings logs for additional information.") sendTransferLog("Server transfer failed, check Wings logs for additional information.")
s.Events().Publish(server.TransferStatusEvent, "failure") s.Events().Publish(server.TransferStatusEvent, "failure")
manager.Remove(func(match *server.Server) bool { manager.Remove(func(match *server.Server) bool {
return match.Id() == s.Id() return match.ID() == s.ID()
}) })
// If the transfer status was successful but the request failed, act like the transfer failed. // If the transfer status was successful but the request failed, act like the transfer failed.


@@ -1,9 +1,11 @@
 package tokens

 import (
-	"github.com/gbrlsnchs/jwt/v3"
-	"github.com/pterodactyl/wings/config"
 	"time"
+
+	"github.com/gbrlsnchs/jwt/v3"
+
+	"github.com/pterodactyl/wings/config"
 )

 type TokenData interface {


@@ -1,9 +1,10 @@
 package tokens

 import (
-	"github.com/patrickmn/go-cache"
 	"sync"
 	"time"
+
+	"github.com/patrickmn/go-cache"
 )

 type TokenStore struct {


@@ -2,11 +2,12 @@ package tokens
 import (
 	"encoding/json"
-	"github.com/apex/log"
-	"github.com/gbrlsnchs/jwt/v3"
 	"strings"
 	"sync"
 	"time"
+
+	"github.com/apex/log"
+	"github.com/gbrlsnchs/jwt/v3"
 )

 // The time at which Wings was booted. No JWT's created before this time are allowed to


@@ -2,9 +2,10 @@ package websocket
 import (
 	"context"
+	"time"
+
 	"github.com/pterodactyl/wings/events"
 	"github.com/pterodactyl/wings/server"
-	"time"
 )

 // Checks the time to expiration on the JWT every 30 seconds until the token has


@@ -2,22 +2,24 @@ package websocket
 import (
 	"context"
-	"emperror.dev/errors"
 	"encoding/json"
 	"fmt"
+	"net/http"
+	"strings"
+	"sync"
+	"time"
+
+	"emperror.dev/errors"
 	"github.com/apex/log"
 	"github.com/gbrlsnchs/jwt/v3"
 	"github.com/google/uuid"
 	"github.com/gorilla/websocket"
+
 	"github.com/pterodactyl/wings/config"
 	"github.com/pterodactyl/wings/environment"
 	"github.com/pterodactyl/wings/environment/docker"
 	"github.com/pterodactyl/wings/router/tokens"
 	"github.com/pterodactyl/wings/server"
-	"net/http"
-	"strings"
-	"sync"
-	"time"
 )

 const (
@@ -55,11 +57,10 @@ func IsJwtError(err error) bool {
 		errors.Is(err, jwt.ErrExpValidation)
 }

-// Parses a JWT into a websocket token payload.
+// NewTokenPayload parses a JWT into a websocket token payload.
 func NewTokenPayload(token []byte) (*tokens.WebsocketPayload, error) {
-	payload := tokens.WebsocketPayload{}
-	err := tokens.ParseToken(token, &payload)
-	if err != nil {
+	var payload tokens.WebsocketPayload
+	if err := tokens.ParseToken(token, &payload); err != nil {
 		return nil, err
 	}
@@ -180,7 +181,7 @@ func (h *Handler) unsafeSendJson(v interface{}) error {
 	return h.Connection.WriteJSON(v)
 }

-// Checks if the JWT is still valid.
+// TokenValid checks if the JWT is still valid.
 func (h *Handler) TokenValid() error {
 	j := h.GetJwt()
 	if j == nil {
@@ -199,14 +200,14 @@ func (h *Handler) TokenValid() error {
 		return ErrJwtNoConnectPerm
 	}

-	if h.server.Id() != j.GetServerUuid() {
+	if h.server.ID() != j.GetServerUuid() {
 		return ErrJwtUuidMismatch
 	}

 	return nil
 }

-// Sends an error back to the connected websocket instance by checking the permissions
+// SendErrorJson sends an error back to the connected websocket instance by checking the permissions
 // of the token. If the user has the "receive-errors" grant we will send back the actual
 // error message, otherwise we just send back a standard error message.
 func (h *Handler) SendErrorJson(msg Message, err error, shouldLog ...bool) error {
@@ -236,7 +237,7 @@ func (h *Handler) SendErrorJson(msg Message, err error, shouldLog ...bool) error
 	return h.unsafeSendJson(wsm)
 }

-// Converts an error message into a more readable representation and returns a UUID
+// GetErrorMessage converts an error message into a more readable representation and returns a UUID
 // that can be cross-referenced to find the specific error that triggered.
 func (h *Handler) GetErrorMessage(msg string) (string, uuid.UUID) {
 	u := uuid.Must(uuid.NewRandom())
@@ -246,13 +247,7 @@ func (h *Handler) GetErrorMessage(msg string) (string, uuid.UUID) {
 	return m, u
 }

-// Sets the JWT for the websocket in a race-safe manner.
-func (h *Handler) setJwt(token *tokens.WebsocketPayload) {
-	h.Lock()
-	h.jwt = token
-	h.Unlock()
-}
-
+// GetJwt returns the JWT for the websocket in a race-safe manner.
 func (h *Handler) GetJwt() *tokens.WebsocketPayload {
 	h.RLock()
 	defer h.RUnlock()
@@ -260,7 +255,14 @@ func (h *Handler) GetJwt() *tokens.WebsocketPayload {
 	return h.jwt
 }

-// Handle the inbound socket request and route it to the proper server action.
+// setJwt sets the JWT for the websocket in a race-safe manner.
+func (h *Handler) setJwt(token *tokens.WebsocketPayload) {
+	h.Lock()
+	h.jwt = token
+	h.Unlock()
+}
+
+// HandleInbound handles an inbound socket request and route it to the proper action.
 func (h *Handler) HandleInbound(m Message) error {
 	if m.Event != AuthenticationEvent {
 		if err := h.TokenValid(); err != nil {


@@ -2,12 +2,14 @@ package server
 import (
 	"io"
+	"io/fs"
 	"io/ioutil"
 	"os"

 	"emperror.dev/errors"
 	"github.com/apex/log"
 	"github.com/docker/docker/client"
+
 	"github.com/pterodactyl/wings/environment"
 	"github.com/pterodactyl/wings/remote"
 	"github.com/pterodactyl/wings/server/backup"
@@ -60,13 +62,13 @@ func (s *Server) Backup(b backup.BackupInterface) error {
 	ignored := b.Ignored()
 	if b.Ignored() == "" {
 		if i, err := s.getServerwideIgnoredFiles(); err != nil {
-			log.WithField("server", s.Id()).WithField("error", err).Warn("failed to get server-wide ignored files")
+			log.WithField("server", s.ID()).WithField("error", err).Warn("failed to get server-wide ignored files")
 		} else {
 			ignored = i
 		}
 	}

-	ad, err := b.Generate(s.Filesystem().Path(), ignored)
+	ad, err := b.Generate(s.Context(), s.Filesystem().Path(), ignored)
 	if err != nil {
 		if err := s.notifyPanelOfBackup(b.Identifier(), &backup.ArchiveDetails{}, false); err != nil {
 			s.Log().WithFields(log.Fields{
@@ -150,9 +152,12 @@ func (s *Server) RestoreBackup(b backup.BackupInterface, reader io.ReadCloser) (
// Attempt to restore the backup to the server by running through each entry // Attempt to restore the backup to the server by running through each entry
// in the file one at a time and writing them to the disk. // in the file one at a time and writing them to the disk.
s.Log().Debug("starting file writing process for backup restoration") s.Log().Debug("starting file writing process for backup restoration")
err = b.Restore(reader, func(file string, r io.Reader) error { err = b.Restore(s.Context(), reader, func(file string, r io.Reader, mode fs.FileMode) error {
s.Events().Publish(DaemonMessageEvent, "(restoring): "+file) s.Events().Publish(DaemonMessageEvent, "(restoring): "+file)
return s.Filesystem().Writefile(file, r) if err := s.Filesystem().Writefile(file, r); err != nil {
return err
}
return s.Filesystem().Chmod(file, mode)
}) })
return errors.WithStackIf(err) return errors.WithStackIf(err)

View File

@@ -1,14 +1,18 @@
package backup package backup
import ( import (
"context"
"crypto/sha1" "crypto/sha1"
"encoding/hex" "encoding/hex"
"io" "io"
"io/fs"
"os" "os"
"path" "path"
"sync"
"emperror.dev/errors"
"github.com/apex/log" "github.com/apex/log"
"golang.org/x/sync/errgroup"
"github.com/pterodactyl/wings/config" "github.com/pterodactyl/wings/config"
"github.com/pterodactyl/wings/remote" "github.com/pterodactyl/wings/remote"
) )
@@ -22,22 +26,39 @@ const (
// RestoreCallback is a generic restoration callback that exists for both local // RestoreCallback is a generic restoration callback that exists for both local
// and remote backups allowing the files to be restored. // and remote backups allowing the files to be restored.
type RestoreCallback func(file string, r io.Reader) error type RestoreCallback func(file string, r io.Reader, mode fs.FileMode) error
type ArchiveDetails struct { // noinspection GoNameStartsWithPackageName
Checksum string `json:"checksum"` type BackupInterface interface {
ChecksumType string `json:"checksum_type"` // SetClient sets the API request client on the backup interface.
Size int64 `json:"size"` SetClient(c remote.Client)
} // Identifier returns the UUID of this backup as tracked by the panel
// instance.
// ToRequest returns a request object. Identifier() string
func (ad *ArchiveDetails) ToRequest(successful bool) remote.BackupRequest { // WithLogContext attaches additional context to the log output for this
return remote.BackupRequest{ // backup.
Checksum: ad.Checksum, WithLogContext(map[string]interface{})
ChecksumType: ad.ChecksumType, // Generate creates a backup in whatever the configured source for the
Size: ad.Size, // specific implementation is.
Successful: successful, Generate(ctx context.Context, basePath string, ignore string) (*ArchiveDetails, error)
} // Ignored returns the ignored files for this backup instance.
Ignored() string
// Checksum returns a SHA1 checksum for the generated backup.
Checksum() ([]byte, error)
// Size returns the size of the generated backup.
Size() (int64, error)
// Path returns the path to the backup on the machine. This is not always
// the final storage location of the backup, simply the location we're using
// to store it until it is moved to the final spot.
Path() string
// Details returns details about the archive.
Details(ctx context.Context) (*ArchiveDetails, error)
// Remove removes a backup file.
Remove() error
// Restore is called when a backup is ready to be restored to the disk from
// the given source. Not every backup implementation will support this nor
// will every implementation require a reader be provided.
Restore(ctx context.Context, reader io.Reader, callback RestoreCallback) error
} }
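For orientation, here is a minimal sketch of how a caller drives the reworked interface; runBackup and its error handling are illustrative only (the real flow lives in server/backup.go), but Generate now taking a context and ToRequest producing the panel payload are exactly as defined above.

package example

import (
	"context"

	"github.com/pterodactyl/wings/server/backup"
)

// runBackup drives a backup through the context-aware interface and then
// builds the request body the Panel expects for a successful backup.
func runBackup(ctx context.Context, b backup.BackupInterface, basePath, ignore string) error {
	ad, err := b.Generate(ctx, basePath, ignore)
	if err != nil {
		return err
	}
	req := ad.ToRequest(true)
	_ = req // in Wings this is sent to the Panel via the remote.Client
	return nil
}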
type Backup struct { type Backup struct {
@@ -54,39 +75,6 @@ type Backup struct {
logContext map[string]interface{} logContext map[string]interface{}
} }
// noinspection GoNameStartsWithPackageName
type BackupInterface interface {
// SetClient sets the API request client on the backup interface.
SetClient(c remote.Client)
// Identifier returns the UUID of this backup as tracked by the panel
// instance.
Identifier() string
// WithLogContext attaches additional context to the log output for this
// backup.
WithLogContext(map[string]interface{})
// Generate creates a backup in whatever the configured source for the
// specific implementation is.
Generate(string, string) (*ArchiveDetails, error)
// Ignored returns the ignored files for this backup instance.
Ignored() string
// Checksum returns a SHA1 checksum for the generated backup.
Checksum() ([]byte, error)
// Size returns the size of the generated backup.
Size() (int64, error)
// Path returns the path to the backup on the machine. This is not always
// the final storage location of the backup, simply the location we're using
// to store it until it is moved to the final spot.
Path() string
// Details returns details about the archive.
Details() *ArchiveDetails
// Remove removes a backup file.
Remove() error
// Restore is called when a backup is ready to be restored to the disk from
// the given source. Not every backup implementation will support this nor
// will every implementation require a reader be provided.
Restore(reader io.Reader, callback RestoreCallback) error
}
func (b *Backup) SetClient(c remote.Client) { func (b *Backup) SetClient(c remote.Client) {
b.client = c b.client = c
} }
@@ -95,12 +83,12 @@ func (b *Backup) Identifier() string {
return b.Uuid return b.Uuid
} }
// Returns the path for this specific backup. // Path returns the path for this specific backup.
func (b *Backup) Path() string { func (b *Backup) Path() string {
return path.Join(config.Get().System.BackupDirectory, b.Identifier()+".tar.gz") return path.Join(config.Get().System.BackupDirectory, b.Identifier()+".tar.gz")
} }
// Return the size of the generated backup. // Size returns the size of the generated backup.
func (b *Backup) Size() (int64, error) { func (b *Backup) Size() (int64, error) {
st, err := os.Stat(b.Path()) st, err := os.Stat(b.Path())
if err != nil { if err != nil {
@@ -110,7 +98,7 @@ func (b *Backup) Size() (int64, error) {
return st.Size(), nil return st.Size(), nil
} }
// Returns the SHA256 checksum of a backup. // Checksum returns the SHA256 checksum of a backup.
func (b *Backup) Checksum() ([]byte, error) { func (b *Backup) Checksum() ([]byte, error) {
h := sha1.New() h := sha1.New()
@@ -128,51 +116,34 @@ func (b *Backup) Checksum() ([]byte, error) {
return h.Sum(nil), nil return h.Sum(nil), nil
} }
// Returns details of the archive by utilizing two go-routines to get the checksum and // Details returns both the checksum and size of the archive currently stored on
// the size of the archive. // the disk to the caller.
func (b *Backup) Details() *ArchiveDetails { func (b *Backup) Details(ctx context.Context) (*ArchiveDetails, error) {
wg := sync.WaitGroup{} ad := ArchiveDetails{ChecksumType: "sha1"}
wg.Add(2) g, ctx := errgroup.WithContext(ctx)
l := log.WithField("backup_id", b.Uuid) g.Go(func() error {
var checksum string
// Calculate the checksum for the file.
go func() {
defer wg.Done()
l.Info("computing checksum for backup...")
resp, err := b.Checksum() resp, err := b.Checksum()
if err != nil { if err != nil {
log.WithFields(log.Fields{ return err
"backup": b.Identifier(),
"error": err,
}).Error("failed to calculate checksum for backup")
return
} }
ad.Checksum = hex.EncodeToString(resp)
return nil
})
checksum = hex.EncodeToString(resp) g.Go(func() error {
l.WithField("checksum", checksum).Info("computed checksum for backup") s, err := b.Size()
}() if err != nil {
return err
var sz int64
go func() {
defer wg.Done()
if s, err := b.Size(); err != nil {
return
} else {
sz = s
} }
}() ad.Size = s
return nil
})
wg.Wait() if err := g.Wait(); err != nil {
return nil, errors.WithStackDepth(err, 1)
return &ArchiveDetails{
Checksum: checksum,
ChecksumType: "sha1",
Size: sz,
} }
return &ad, nil
} }
func (b *Backup) Ignored() string { func (b *Backup) Ignored() string {
@@ -188,3 +159,19 @@ func (b *Backup) log() *log.Entry {
} }
return l return l
} }
type ArchiveDetails struct {
Checksum string `json:"checksum"`
ChecksumType string `json:"checksum_type"`
Size int64 `json:"size"`
}
// ToRequest returns a request object.
func (ad *ArchiveDetails) ToRequest(successful bool) remote.BackupRequest {
return remote.BackupRequest{
Checksum: ad.Checksum,
ChecksumType: ad.ChecksumType,
Size: ad.Size,
Successful: successful,
}
}

View File

@@ -1,14 +1,15 @@
package backup package backup
import ( import (
"errors" "context"
"io" "io"
"os" "os"
"github.com/pterodactyl/wings/server/filesystem" "emperror.dev/errors"
"github.com/mholt/archiver/v3" "github.com/mholt/archiver/v3"
"github.com/pterodactyl/wings/remote" "github.com/pterodactyl/wings/remote"
"github.com/pterodactyl/wings/server/filesystem"
) )
type LocalBackup struct { type LocalBackup struct {
@@ -56,28 +57,38 @@ func (b *LocalBackup) WithLogContext(c map[string]interface{}) {
// Generate generates a backup of the selected files and pushes it to the // Generate generates a backup of the selected files and pushes it to the
// defined location for this instance. // defined location for this instance.
func (b *LocalBackup) Generate(basePath, ignore string) (*ArchiveDetails, error) { func (b *LocalBackup) Generate(ctx context.Context, basePath, ignore string) (*ArchiveDetails, error) {
a := &filesystem.Archive{ a := &filesystem.Archive{
BasePath: basePath, BasePath: basePath,
Ignore: ignore, Ignore: ignore,
} }
b.log().Info("creating backup for server...") b.log().WithField("path", b.Path()).Info("creating backup for server")
if err := a.Create(b.Path()); err != nil { if err := a.Create(b.Path()); err != nil {
return nil, err return nil, err
} }
b.log().Info("created backup successfully") b.log().Info("created backup successfully")
return b.Details(), nil ad, err := b.Details(ctx)
if err != nil {
return nil, errors.WrapIf(err, "backup: failed to get archive details for local backup")
}
return ad, nil
} }
// Restore will walk over the archive and call the callback function for each // Restore will walk over the archive and call the callback function for each
// file encountered. // file encountered.
func (b *LocalBackup) Restore(_ io.Reader, callback RestoreCallback) error { func (b *LocalBackup) Restore(ctx context.Context, _ io.Reader, callback RestoreCallback) error {
return archiver.Walk(b.Path(), func(f archiver.File) error { return archiver.Walk(b.Path(), func(f archiver.File) error {
if f.IsDir() { select {
return nil case <-ctx.Done():
// Stop walking if the context is canceled.
return archiver.ErrStopWalk
default:
if f.IsDir() {
return nil
}
return callback(filesystem.ExtractNameFromArchive(f), f, f.Mode())
} }
return callback(f.Name(), f)
}) })
} }
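A rough sketch of a callback matching the new signature, which now receives the file mode recorded in the archive; dest and the direct os calls are assumptions here, since Wings itself writes through the server's Filesystem and then applies the mode, as shown in server/backup.go above.

package example

import (
	"io"
	"io/fs"
	"os"
	"path/filepath"
)

// exampleCallback writes each restored entry beneath dest and then re-applies
// the mode that was stored in the archive.
func exampleCallback(dest string) func(file string, r io.Reader, mode fs.FileMode) error {
	return func(file string, r io.Reader, mode fs.FileMode) error {
		p := filepath.Join(dest, file)
		if err := os.MkdirAll(filepath.Dir(p), 0o755); err != nil {
			return err
		}
		f, err := os.Create(p)
		if err != nil {
			return err
		}
		defer f.Close()
		if _, err := io.Copy(f, r); err != nil {
			return err
		}
		return os.Chmod(p, mode)
	}
}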

View File

@@ -5,13 +5,19 @@ import (
"compress/gzip" "compress/gzip"
"context" "context"
"fmt" "fmt"
"github.com/pterodactyl/wings/server/filesystem"
"io" "io"
"net/http" "net/http"
"os" "os"
"strconv" "strconv"
"time"
"emperror.dev/errors"
"github.com/cenkalti/backoff/v4"
"github.com/pterodactyl/wings/server/filesystem"
"github.com/juju/ratelimit" "github.com/juju/ratelimit"
"github.com/pterodactyl/wings/config" "github.com/pterodactyl/wings/config"
"github.com/pterodactyl/wings/remote" "github.com/pterodactyl/wings/remote"
) )
@@ -45,7 +51,7 @@ func (s *S3Backup) WithLogContext(c map[string]interface{}) {
// Generate creates a new backup on the disk, moves it into the S3 bucket via // Generate creates a new backup on the disk, moves it into the S3 bucket via
// the provided presigned URL, and then deletes the backup from the disk. // the provided presigned URL, and then deletes the backup from the disk.
func (s *S3Backup) Generate(basePath, ignore string) (*ArchiveDetails, error) { func (s *S3Backup) Generate(ctx context.Context, basePath, ignore string) (*ArchiveDetails, error) {
defer s.Remove() defer s.Remove()
a := &filesystem.Archive{ a := &filesystem.Archive{
@@ -53,7 +59,7 @@ func (s *S3Backup) Generate(basePath, ignore string) (*ArchiveDetails, error) {
Ignore: ignore, Ignore: ignore,
} }
s.log().Info("creating backup for server...") s.log().WithField("path", s.Path()).Info("creating backup for server")
if err := a.Create(s.Path()); err != nil { if err := a.Create(s.Path()); err != nil {
return nil, err return nil, err
} }
@@ -61,29 +67,65 @@ func (s *S3Backup) Generate(basePath, ignore string) (*ArchiveDetails, error) {
rc, err := os.Open(s.Path()) rc, err := os.Open(s.Path())
if err != nil { if err != nil {
return nil, err return nil, errors.Wrap(err, "backup: could not read archive from disk")
} }
defer rc.Close() defer rc.Close()
if err := s.generateRemoteRequest(rc); err != nil { if err := s.generateRemoteRequest(ctx, rc); err != nil {
return nil, err return nil, err
} }
ad, err := s.Details(ctx)
return s.Details(), nil if err != nil {
return nil, errors.WrapIf(err, "backup: failed to get archive details after upload")
}
return ad, nil
} }
// Reader provides a wrapper around an existing io.Reader // Restore will read from the provided reader assuming that it is a gzipped
// but implements io.Closer in order to satisfy an io.ReadCloser. // tar reader. When a file is encountered in the archive the callback function
type Reader struct { // will be triggered. If the callback returns an error the entire process is
io.Reader // stopped, otherwise this function will run until all files have been written.
} //
// This restoration uses a workerpool to use up to the number of CPUs available
func (Reader) Close() error { // on the machine when writing files to the disk.
func (s *S3Backup) Restore(ctx context.Context, r io.Reader, callback RestoreCallback) error {
reader := r
// Steal the logic we use for making backups which will be applied when restoring
// this specific backup. This allows us to prevent overloading the disk unintentionally.
if writeLimit := int64(config.Get().System.Backups.WriteLimit * 1024 * 1024); writeLimit > 0 {
reader = ratelimit.Reader(r, ratelimit.NewBucketWithRate(float64(writeLimit), writeLimit))
}
gr, err := gzip.NewReader(reader)
if err != nil {
return err
}
defer gr.Close()
tr := tar.NewReader(gr)
for {
select {
case <-ctx.Done():
return nil
default:
// Do nothing, fall through to the next block of code in this loop.
}
header, err := tr.Next()
if err != nil {
if err == io.EOF {
break
}
return err
}
if header.Typeflag == tar.TypeReg {
if err := callback(header.Name, tr, header.FileInfo().Mode()); err != nil {
return err
}
}
}
return nil return nil
} }
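The write-limit wrapping borrowed from the backup code can be pulled out as a small helper; this is only an illustrative sketch, but it uses the same config value and juju/ratelimit calls seen above.

package example

import (
	"io"

	"github.com/juju/ratelimit"
	"github.com/pterodactyl/wings/config"
)

// throttled wraps a reader with the configured backup write limit so that a
// restore cannot saturate the disk. A limit of 0 disables throttling.
func throttled(r io.Reader) io.Reader {
	limit := int64(config.Get().System.Backups.WriteLimit * 1024 * 1024)
	if limit <= 0 {
		return r
	}
	return ratelimit.Reader(r, ratelimit.NewBucketWithRate(float64(limit), limit))
}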
// Generates the remote S3 request and begins the upload. // Generates the remote S3 request and begins the upload.
func (s *S3Backup) generateRemoteRequest(rc io.ReadCloser) error { func (s *S3Backup) generateRemoteRequest(ctx context.Context, rc io.ReadCloser) error {
defer rc.Close() defer rc.Close()
s.log().Debug("attempting to get size of backup...") s.log().Debug("attempting to get size of backup...")
@@ -101,37 +143,7 @@ func (s *S3Backup) generateRemoteRequest(rc io.ReadCloser) error {
s.log().Debug("got S3 upload urls from the Panel") s.log().Debug("got S3 upload urls from the Panel")
s.log().WithField("parts", len(urls.Parts)).Info("attempting to upload backup to s3 endpoint...") s.log().WithField("parts", len(urls.Parts)).Info("attempting to upload backup to s3 endpoint...")
handlePart := func(part string, size int64) (string, error) { uploader := newS3FileUploader(rc)
r, err := http.NewRequest(http.MethodPut, part, nil)
if err != nil {
return "", err
}
r.ContentLength = size
r.Header.Add("Content-Length", strconv.Itoa(int(size)))
r.Header.Add("Content-Type", "application/x-gzip")
// Limit the reader to the size of the part.
r.Body = Reader{Reader: io.LimitReader(rc, size)}
// This http request can block forever due to it not having a timeout,
// but we are uploading up to 5GB of data, so there is not really
// a good way to handle a timeout on this.
res, err := http.DefaultClient.Do(r)
if err != nil {
return "", err
}
defer res.Body.Close()
// Handle non-200 status codes.
if res.StatusCode != http.StatusOK {
return "", fmt.Errorf("failed to put S3 object part, %d:%s", res.StatusCode, res.Status)
}
// Get the ETag from the uploaded part, this should be sent with the CompleteMultipartUpload request.
return res.Header.Get("ETag"), nil
}
for i, part := range urls.Parts { for i, part := range urls.Parts {
// Get the size for the current part. // Get the size for the current part.
var partSize int64 var partSize int64
@@ -144,7 +156,7 @@ func (s *S3Backup) generateRemoteRequest(rc io.ReadCloser) error {
} }
// Attempt to upload the part. // Attempt to upload the part.
if _, err := handlePart(part, partSize); err != nil { if _, err := uploader.uploadPart(ctx, part, partSize); err != nil {
s.log().WithField("part_id", i+1).WithError(err).Warn("failed to upload part") s.log().WithField("part_id", i+1).WithError(err).Warn("failed to upload part")
return err return err
} }
@@ -157,39 +169,97 @@ func (s *S3Backup) generateRemoteRequest(rc io.ReadCloser) error {
return nil return nil
} }
// Restore will read from the provided reader assuming that it is a gzipped type s3FileUploader struct {
// tar reader. When a file is encountered in the archive the callback function io.ReadCloser
// will be triggered. If the callback returns an error the entire process is client *http.Client
// stopped, otherwise this function will run until all files have been written. }
// newS3FileUploader returns a new file uploader instance.
func newS3FileUploader(file io.ReadCloser) *s3FileUploader {
return &s3FileUploader{
ReadCloser: file,
// We purposefully use a super high timeout on this request since we need to upload
// a 5GB file. This assumes at worst a 10Mbps connection for uploading. While technically
// you could go slower we're targeting mostly hosted servers that should have 100Mbps
// connections anyways.
client: &http.Client{Timeout: time.Hour * 2},
}
}
// backoff returns a new exponential backoff implementation using a context that
// will also stop the backoff if it is canceled.
func (fu *s3FileUploader) backoff(ctx context.Context) backoff.BackOffContext {
b := backoff.NewExponentialBackOff()
b.Multiplier = 2
b.MaxElapsedTime = time.Minute
return backoff.WithContext(b, ctx)
}
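Isolated from the upload specifics, the retry semantics look roughly like the sketch below; retryWithBackoff and op are illustrative names, but the backoff settings mirror the ones above: retry until success, until a minute has elapsed, or until the context is canceled, with backoff.Permanent short-circuiting the retries.

package example

import (
	"context"
	"errors"
	"time"

	"github.com/cenkalti/backoff/v4"
)

// retryWithBackoff retries op with exponential backoff, treating a canceled
// context as a permanent failure rather than something worth retrying.
func retryWithBackoff(ctx context.Context, op func() error) error {
	b := backoff.NewExponentialBackOff()
	b.Multiplier = 2
	b.MaxElapsedTime = time.Minute
	return backoff.Retry(func() error {
		if err := op(); err != nil {
			if errors.Is(err, context.Canceled) {
				return backoff.Permanent(err) // stop immediately on cancellation
			}
			return err // transient; retry with backoff
		}
		return nil
	}, backoff.WithContext(b, ctx))
}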
// uploadPart attempts to upload a given S3 file part to the S3 system. If a
// 5xx error is returned from the endpoint this will continue with an exponential
// backoff to try and successfully upload the part.
// //
// This restoration uses a workerpool to use up to the number of CPUs available // Once uploaded the ETag is returned to the caller.
// on the machine when writing files to the disk. func (fu *s3FileUploader) uploadPart(ctx context.Context, part string, size int64) (string, error) {
func (s *S3Backup) Restore(r io.Reader, callback RestoreCallback) error { r, err := http.NewRequestWithContext(ctx, http.MethodPut, part, nil)
reader := r
// Steal the logic we use for making backups which will be applied when restoring
// this specific backup. This allows us to prevent overloading the disk unintentionally.
if writeLimit := int64(config.Get().System.Backups.WriteLimit * 1024 * 1024); writeLimit > 0 {
reader = ratelimit.Reader(r, ratelimit.NewBucketWithRate(float64(writeLimit), writeLimit))
}
gr, err := gzip.NewReader(reader)
if err != nil { if err != nil {
return err return "", errors.Wrap(err, "backup: could not create request for S3")
} }
defer gr.Close()
tr := tar.NewReader(gr) r.ContentLength = size
for { r.Header.Add("Content-Length", strconv.Itoa(int(size)))
header, err := tr.Next() r.Header.Add("Content-Type", "application/x-gzip")
// Limit the reader to the size of the part.
r.Body = Reader{Reader: io.LimitReader(fu.ReadCloser, size)}
var etag string
err = backoff.Retry(func() error {
res, err := fu.client.Do(r)
if err != nil { if err != nil {
if err == io.EOF { if errors.Is(err, context.DeadlineExceeded) || errors.Is(err, context.Canceled) {
break return backoff.Permanent(err)
} }
return err // Don't use a permanent error here, if there is a temporary resolution error with
// the URL due to DNS issues we want to keep re-trying.
return errors.Wrap(err, "backup: S3 HTTP request failed")
} }
if header.Typeflag == tar.TypeReg { _ = res.Body.Close()
if err := callback(header.Name, tr); err != nil {
if res.StatusCode != http.StatusOK {
err := errors.New(fmt.Sprintf("backup: failed to put S3 object: [HTTP/%d] %s", res.StatusCode, res.Status))
// Only attempt a backoff retry if this error is because of a 5xx error from
// the S3 endpoint. Any 4xx error should be treated as an error that a retry
// would not fix.
if res.StatusCode >= http.StatusInternalServerError {
return err return err
} }
return backoff.Permanent(err)
} }
// Get the ETag from the uploaded part, this should be sent with the
// CompleteMultipartUpload request.
etag = res.Header.Get("ETag")
return nil
}, fu.backoff(ctx))
if err != nil {
if v, ok := err.(*backoff.PermanentError); ok {
return "", v.Unwrap()
}
return "", err
} }
return etag, nil
}
// Reader provides a wrapper around an existing io.Reader
// but implements io.Closer in order to satisfy an io.ReadCloser.
type Reader struct {
io.Reader
}
func (Reader) Close() error {
return nil return nil
} }

View File

@@ -33,7 +33,9 @@ type Configuration struct {
// By default this is false, however if selected within the Panel while installing or re-installing a // By default this is false, however if selected within the Panel while installing or re-installing a
// server, specific installation scripts will be skipped for the server process. // server, specific installation scripts will be skipped for the server process.
SkipEggScripts bool `default:"false" json:"skip_egg_scripts"` SkipEggScripts bool `json:"skip_egg_scripts"`
StartOnCompletion bool `json:"start_on_completion"`
// An array of environment variables that should be passed along to the running // An array of environment variables that should be passed along to the running
// server process. // server process.
@@ -41,7 +43,7 @@ type Configuration struct {
Allocations environment.Allocations `json:"allocations"` Allocations environment.Allocations `json:"allocations"`
Build environment.Limits `json:"build"` Build environment.Limits `json:"build"`
CrashDetectionEnabled bool `default:"true" json:"enabled" yaml:"enabled"` CrashDetectionEnabled bool `json:"crash_detection_enabled"`
Mounts []Mount `json:"mounts"` Mounts []Mount `json:"mounts"`
Egg EggConfiguration `json:"egg,omitempty"` Egg EggConfiguration `json:"egg,omitempty"`
@@ -54,34 +56,30 @@ type Configuration struct {
func (s *Server) Config() *Configuration { func (s *Server) Config() *Configuration {
s.cfg.mu.RLock() s.cfg.mu.RLock()
defer s.cfg.mu.RUnlock() defer s.cfg.mu.RUnlock()
return &s.cfg return &s.cfg
} }
// Returns the amount of disk space available to a server in bytes. // DiskSpace returns the amount of disk space available to a server in bytes.
func (s *Server) DiskSpace() int64 { func (s *Server) DiskSpace() int64 {
s.cfg.mu.RLock() s.cfg.mu.RLock()
defer s.cfg.mu.RUnlock() defer s.cfg.mu.RUnlock()
return s.cfg.Build.DiskSpace * 1024.0 * 1024.0 return s.cfg.Build.DiskSpace * 1024.0 * 1024.0
} }
func (s *Server) MemoryLimit() int64 { func (s *Server) MemoryLimit() int64 {
s.cfg.mu.RLock() s.cfg.mu.RLock()
defer s.cfg.mu.RUnlock() defer s.cfg.mu.RUnlock()
return s.cfg.Build.MemoryLimit return s.cfg.Build.MemoryLimit
} }
func (c *Configuration) GetUuid() string { func (c *Configuration) GetUuid() string {
c.mu.RLock() c.mu.RLock()
defer c.mu.RUnlock() defer c.mu.RUnlock()
return c.Uuid return c.Uuid
} }
func (c *Configuration) SetSuspended(s bool) { func (c *Configuration) SetSuspended(s bool) {
c.mu.Lock() c.mu.Lock()
defer c.mu.Unlock()
c.Suspended = s c.Suspended = s
c.mu.Unlock()
} }

View File

@@ -9,6 +9,7 @@ import (
"emperror.dev/errors" "emperror.dev/errors"
"github.com/mitchellh/colorstring" "github.com/mitchellh/colorstring"
"github.com/pterodactyl/wings/config" "github.com/pterodactyl/wings/config"
"github.com/pterodactyl/wings/system" "github.com/pterodactyl/wings/system"
) )

View File

@@ -3,6 +3,7 @@ package filesystem
import ( import (
"archive/tar" "archive/tar"
"io" "io"
"io/fs"
"os" "os"
"path/filepath" "path/filepath"
"strings" "strings"
@@ -13,8 +14,9 @@ import (
"github.com/juju/ratelimit" "github.com/juju/ratelimit"
"github.com/karrick/godirwalk" "github.com/karrick/godirwalk"
"github.com/klauspost/pgzip" "github.com/klauspost/pgzip"
ignore "github.com/sabhiram/go-gitignore"
"github.com/pterodactyl/wings/config" "github.com/pterodactyl/wings/config"
"github.com/sabhiram/go-gitignore"
) )
const memory = 4 * 1024 const memory = 4 * 1024
@@ -156,9 +158,15 @@ func (a *Archive) addToArchive(p string, rp string, w *tar.Writer) error {
return errors.WrapIff(err, "failed executing os.Lstat on '%s'", rp) return errors.WrapIff(err, "failed executing os.Lstat on '%s'", rp)
} }
// Skip socket files as they are unsupported by archive/tar.
// Error will come from tar#FileInfoHeader: "archive/tar: sockets not supported"
if s.Mode()&fs.ModeSocket != 0 {
return nil
}
// Resolve the symlink target if the file is a symlink. // Resolve the symlink target if the file is a symlink.
var target string var target string
if s.Mode()&os.ModeSymlink != 0 { if s.Mode()&fs.ModeSymlink != 0 {
// Read the target of the symlink. If there are any errors we will dump them out to // Read the target of the symlink. If there are any errors we will dump them out to
// the logs, but we're not going to stop the backup. There are far too many cases of // the logs, but we're not going to stop the backup. There are far too many cases of
// symlinks causing all sorts of unnecessary pain in this process. Sucks to suck if // symlinks causing all sorts of unnecessary pain in this process. Sucks to suck if
@@ -180,7 +188,7 @@ func (a *Archive) addToArchive(p string, rp string, w *tar.Writer) error {
} }
// Fix the header name if the file is not a symlink. // Fix the header name if the file is not a symlink.
if s.Mode()&os.ModeSymlink == 0 { if s.Mode()&fs.ModeSymlink == 0 {
header.Name = rp header.Name = rp
} }

View File

@@ -1,6 +1,9 @@
package filesystem package filesystem
import ( import (
"archive/tar"
"archive/zip"
"compress/gzip"
"fmt" "fmt"
"os" "os"
"path" "path"
@@ -121,7 +124,7 @@ func (fs *Filesystem) DecompressFile(dir string, file string) error {
if f.IsDir() { if f.IsDir() {
return nil return nil
} }
p := filepath.Join(dir, f.Name()) p := filepath.Join(dir, ExtractNameFromArchive(f))
// If it is ignored, just don't do anything with the file and skip over it. // If it is ignored, just don't do anything with the file and skip over it.
if err := fs.IsIgnored(p); err != nil { if err := fs.IsIgnored(p); err != nil {
return nil return nil
@@ -129,6 +132,10 @@ func (fs *Filesystem) DecompressFile(dir string, file string) error {
if err := fs.Writefile(p, f); err != nil { if err := fs.Writefile(p, f); err != nil {
return wrapError(err, source) return wrapError(err, source)
} }
// Update the file permissions to the one set in the archive.
if err := fs.Chmod(p, f.Mode()); err != nil {
return wrapError(err, source)
}
return nil return nil
}) })
if err != nil { if err != nil {
@@ -139,3 +146,35 @@ func (fs *Filesystem) DecompressFile(dir string, file string) error {
} }
return nil return nil
} }
// ExtractNameFromArchive looks at an archive file to try and determine the name
// for a given element in an archive. Because of... who knows why, each file type
// uses different methods to determine the file name.
//
// If there is an archiver.File#Sys() value present we will try to use the name
// present in there, otherwise falling back to archiver.File#Name() if all else
// fails. Without this logic present, some archive types such as zip/tars/etc.
// will write all of the files to the base directory, rather than the nested
// directory that is expected.
//
// For files like ".rar" types, there is no f.Sys() value present, and the value
// of archiver.File#Name() will be what you need.
func ExtractNameFromArchive(f archiver.File) string {
sys := f.Sys()
// Some archive types won't have a value returned when you call f.Sys() on them,
// such as ".rar" archives for example. In those cases the only thing you can do
// is hope that "f.Name()" is actually correct for them.
if sys == nil {
return f.Name()
}
switch s := sys.(type) {
case *tar.Header:
return s.Name
case *gzip.Header:
return s.Name
case *zip.FileHeader:
return s.Name
default:
return f.Name()
}
}
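An illustrative use of the helper; listArchive is hypothetical, but it mirrors how DecompressFile and LocalBackup.Restore consume it to keep nested paths intact instead of relying on f.Name() alone.

package example

import (
	"fmt"

	"github.com/mholt/archiver/v3"
	"github.com/pterodactyl/wings/server/filesystem"
)

// listArchive walks an archive and prints the header-aware name and mode of
// every regular entry it contains.
func listArchive(path string) error {
	return archiver.Walk(path, func(f archiver.File) error {
		if f.IsDir() {
			return nil
		}
		fmt.Printf("%s (%s)\n", filesystem.ExtractNameFromArchive(f), f.Mode())
		return nil
	})
}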

View File

@@ -0,0 +1,55 @@
package filesystem
import (
"io/ioutil"
"sync/atomic"
"testing"
. "github.com/franela/goblin"
)
// Given an archive named test.{ext}, with the following file structure:
// test/
// |──inside/
// |────finside.txt
// |──outside.txt
// this test will ensure that it's being decompressed as expected
func TestFilesystem_DecompressFile(t *testing.T) {
g := Goblin(t)
fs, rfs := NewFs()
g.Describe("Decompress", func() {
for _, ext := range []string{"zip", "rar", "tar", "tar.gz"} {
g.It("can decompress a "+ext, func() {
// copy the file to the new FS
c, err := ioutil.ReadFile("./testdata/test." + ext)
g.Assert(err).IsNil()
err = rfs.CreateServerFile("./test."+ext, c)
g.Assert(err).IsNil()
// decompress
err = fs.DecompressFile("/", "test."+ext)
g.Assert(err).IsNil()
// make sure everything is where it is supposed to be
_, err = rfs.StatServerFile("test/outside.txt")
g.Assert(err).IsNil()
st, err := rfs.StatServerFile("test/inside")
g.Assert(err).IsNil()
g.Assert(st.IsDir()).IsTrue()
_, err = rfs.StatServerFile("test/inside/finside.txt")
g.Assert(err).IsNil()
g.Assert(st.IsDir()).IsTrue()
})
}
g.AfterEach(func() {
rfs.reset()
atomic.StoreInt64(&fs.diskUsed, 0)
atomic.StoreInt64(&fs.diskLimit, 0)
})
})
}
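Assuming the testdata archives are present, this test can be run on its own with the standard Go toolchain from the repository root, e.g. go test ./server/filesystem -run TestFilesystem_DecompressFile -v.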

View File

@@ -17,9 +17,10 @@ import (
"emperror.dev/errors" "emperror.dev/errors"
"github.com/gabriel-vasile/mimetype" "github.com/gabriel-vasile/mimetype"
"github.com/karrick/godirwalk" "github.com/karrick/godirwalk"
ignore "github.com/sabhiram/go-gitignore"
"github.com/pterodactyl/wings/config" "github.com/pterodactyl/wings/config"
"github.com/pterodactyl/wings/system" "github.com/pterodactyl/wings/system"
ignore "github.com/sabhiram/go-gitignore"
) )
type Filesystem struct { type Filesystem struct {

View File

@@ -12,6 +12,7 @@ import (
"unicode/utf8" "unicode/utf8"
. "github.com/franela/goblin" . "github.com/franela/goblin"
"github.com/pterodactyl/wings/config" "github.com/pterodactyl/wings/config"
) )
@@ -44,17 +45,21 @@ type rootFs struct {
root string root string
} }
func (rfs *rootFs) CreateServerFile(p string, c string) error { func (rfs *rootFs) CreateServerFile(p string, c []byte) error {
f, err := os.Create(filepath.Join(rfs.root, "/server", p)) f, err := os.Create(filepath.Join(rfs.root, "/server", p))
if err == nil { if err == nil {
f.Write([]byte(c)) f.Write(c)
f.Close() f.Close()
} }
return err return err
} }
func (rfs *rootFs) CreateServerFileFromString(p string, c string) error {
return rfs.CreateServerFile(p, []byte(c))
}
func (rfs *rootFs) StatServerFile(p string) (os.FileInfo, error) { func (rfs *rootFs) StatServerFile(p string) (os.FileInfo, error) {
return os.Stat(filepath.Join(rfs.root, "/server", p)) return os.Stat(filepath.Join(rfs.root, "/server", p))
} }
@@ -79,7 +84,7 @@ func TestFilesystem_Readfile(t *testing.T) {
buf := &bytes.Buffer{} buf := &bytes.Buffer{}
g.It("opens a file if it exists on the system", func() { g.It("opens a file if it exists on the system", func() {
err := rfs.CreateServerFile("test.txt", "testing") err := rfs.CreateServerFileFromString("test.txt", "testing")
g.Assert(err).IsNil() g.Assert(err).IsNil()
err = fs.Readfile("test.txt", buf) err = fs.Readfile("test.txt", buf)
@@ -103,7 +108,7 @@ func TestFilesystem_Readfile(t *testing.T) {
}) })
g.It("cannot open a file outside the root directory", func() { g.It("cannot open a file outside the root directory", func() {
err := rfs.CreateServerFile("/../test.txt", "testing") err := rfs.CreateServerFileFromString("/../test.txt", "testing")
g.Assert(err).IsNil() g.Assert(err).IsNil()
err = fs.Readfile("/../test.txt", buf) err = fs.Readfile("/../test.txt", buf)
@@ -281,13 +286,13 @@ func TestFilesystem_Rename(t *testing.T) {
g.Describe("Rename", func() { g.Describe("Rename", func() {
g.BeforeEach(func() { g.BeforeEach(func() {
if err := rfs.CreateServerFile("source.txt", "text content"); err != nil { if err := rfs.CreateServerFileFromString("source.txt", "text content"); err != nil {
panic(err) panic(err)
} }
}) })
g.It("returns an error if the target already exists", func() { g.It("returns an error if the target already exists", func() {
err := rfs.CreateServerFile("target.txt", "taget content") err := rfs.CreateServerFileFromString("target.txt", "taget content")
g.Assert(err).IsNil() g.Assert(err).IsNil()
err = fs.Rename("source.txt", "target.txt") err = fs.Rename("source.txt", "target.txt")
@@ -314,7 +319,7 @@ func TestFilesystem_Rename(t *testing.T) {
}) })
g.It("does not allow renaming from a location outside the root", func() { g.It("does not allow renaming from a location outside the root", func() {
err := rfs.CreateServerFile("/../ext-source.txt", "taget content") err := rfs.CreateServerFileFromString("/../ext-source.txt", "taget content")
err = fs.Rename("/../ext-source.txt", "target.txt") err = fs.Rename("/../ext-source.txt", "target.txt")
g.Assert(err).IsNotNil() g.Assert(err).IsNotNil()
@@ -378,7 +383,7 @@ func TestFilesystem_Copy(t *testing.T) {
g.Describe("Copy", func() { g.Describe("Copy", func() {
g.BeforeEach(func() { g.BeforeEach(func() {
if err := rfs.CreateServerFile("source.txt", "text content"); err != nil { if err := rfs.CreateServerFileFromString("source.txt", "text content"); err != nil {
panic(err) panic(err)
} }
@@ -392,7 +397,7 @@ func TestFilesystem_Copy(t *testing.T) {
}) })
g.It("should return an error if the source is outside the root", func() { g.It("should return an error if the source is outside the root", func() {
err := rfs.CreateServerFile("/../ext-source.txt", "text content") err := rfs.CreateServerFileFromString("/../ext-source.txt", "text content")
err = fs.Copy("../ext-source.txt") err = fs.Copy("../ext-source.txt")
g.Assert(err).IsNotNil() g.Assert(err).IsNotNil()
@@ -403,7 +408,7 @@ func TestFilesystem_Copy(t *testing.T) {
err := os.MkdirAll(filepath.Join(rfs.root, "/nested/in/dir"), 0755) err := os.MkdirAll(filepath.Join(rfs.root, "/nested/in/dir"), 0755)
g.Assert(err).IsNil() g.Assert(err).IsNil()
err = rfs.CreateServerFile("/../nested/in/dir/ext-source.txt", "external content") err = rfs.CreateServerFileFromString("/../nested/in/dir/ext-source.txt", "external content")
g.Assert(err).IsNil() g.Assert(err).IsNil()
err = fs.Copy("../nested/in/dir/ext-source.txt") err = fs.Copy("../nested/in/dir/ext-source.txt")
@@ -464,7 +469,7 @@ func TestFilesystem_Copy(t *testing.T) {
err := os.MkdirAll(filepath.Join(rfs.root, "/server/nested/in/dir"), 0755) err := os.MkdirAll(filepath.Join(rfs.root, "/server/nested/in/dir"), 0755)
g.Assert(err).IsNil() g.Assert(err).IsNil()
err = rfs.CreateServerFile("nested/in/dir/source.txt", "test content") err = rfs.CreateServerFileFromString("nested/in/dir/source.txt", "test content")
g.Assert(err).IsNil() g.Assert(err).IsNil()
err = fs.Copy("nested/in/dir/source.txt") err = fs.Copy("nested/in/dir/source.txt")
@@ -492,7 +497,7 @@ func TestFilesystem_Delete(t *testing.T) {
g.Describe("Delete", func() { g.Describe("Delete", func() {
g.BeforeEach(func() { g.BeforeEach(func() {
if err := rfs.CreateServerFile("source.txt", "test content"); err != nil { if err := rfs.CreateServerFileFromString("source.txt", "test content"); err != nil {
panic(err) panic(err)
} }
@@ -500,7 +505,7 @@ func TestFilesystem_Delete(t *testing.T) {
}) })
g.It("does not delete files outside the root directory", func() { g.It("does not delete files outside the root directory", func() {
err := rfs.CreateServerFile("/../ext-source.txt", "external content") err := rfs.CreateServerFileFromString("/../ext-source.txt", "external content")
err = fs.Delete("../ext-source.txt") err = fs.Delete("../ext-source.txt")
g.Assert(err).IsNotNil() g.Assert(err).IsNotNil()
@@ -544,7 +549,7 @@ func TestFilesystem_Delete(t *testing.T) {
g.Assert(err).IsNil() g.Assert(err).IsNil()
for _, s := range sources { for _, s := range sources {
err = rfs.CreateServerFile(s, "test content") err = rfs.CreateServerFileFromString(s, "test content")
g.Assert(err).IsNil() g.Assert(err).IsNil()
} }

View File

@@ -103,7 +103,7 @@ func TestFilesystem_Blocks_Symlinks(t *testing.T) {
g := Goblin(t) g := Goblin(t)
fs, rfs := NewFs() fs, rfs := NewFs()
if err := rfs.CreateServerFile("/../malicious.txt", "external content"); err != nil { if err := rfs.CreateServerFileFromString("/../malicious.txt", "external content"); err != nil {
panic(err) panic(err)
} }
@@ -181,7 +181,7 @@ func TestFilesystem_Blocks_Symlinks(t *testing.T) {
}) })
g.It("cannot rename a file to a location outside the directory root", func() { g.It("cannot rename a file to a location outside the directory root", func() {
rfs.CreateServerFile("my_file.txt", "internal content") rfs.CreateServerFileFromString("my_file.txt", "internal content")
err := fs.Rename("my_file.txt", "external_dir/my_file.txt") err := fs.Rename("my_file.txt", "external_dir/my_file.txt")
g.Assert(err).IsNotNil() g.Assert(err).IsNotNil()

BIN
server/filesystem/testdata/test.rar vendored Normal file

Binary file not shown.

BIN
server/filesystem/testdata/test.tar vendored Normal file

Binary file not shown.

BIN
server/filesystem/testdata/test.tar.gz vendored Normal file

Binary file not shown.

BIN
server/filesystem/testdata/test.zip vendored Normal file

Binary file not shown.

View File

@@ -17,6 +17,7 @@ import (
"github.com/docker/docker/api/types/container" "github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/mount" "github.com/docker/docker/api/types/mount"
"github.com/docker/docker/client" "github.com/docker/docker/client"
"github.com/pterodactyl/wings/config" "github.com/pterodactyl/wings/config"
"github.com/pterodactyl/wings/environment" "github.com/pterodactyl/wings/environment"
"github.com/pterodactyl/wings/remote" "github.com/pterodactyl/wings/remote"
@@ -88,15 +89,10 @@ func (s *Server) Reinstall() error {
// Internal installation function used to simplify reporting back to the Panel. // Internal installation function used to simplify reporting back to the Panel.
func (s *Server) internalInstall() error { func (s *Server) internalInstall() error {
script, err := s.client.GetInstallationScript(s.Context(), s.Id()) script, err := s.client.GetInstallationScript(s.Context(), s.ID())
if err != nil { if err != nil {
if !remote.IsRequestError(err) { return err
return err
}
return errors.New(err.Error())
} }
p, err := NewInstallationProcess(s, &script) p, err := NewInstallationProcess(s, &script)
if err != nil { if err != nil {
return err return err
@@ -161,7 +157,7 @@ func (s *Server) SetRestoring(state bool) {
// Removes the installer container for the server. // Removes the installer container for the server.
func (ip *InstallationProcess) RemoveContainer() error { func (ip *InstallationProcess) RemoveContainer() error {
err := ip.client.ContainerRemove(ip.context, ip.Server.Id()+"_installer", types.ContainerRemoveOptions{ err := ip.client.ContainerRemove(ip.context, ip.Server.ID()+"_installer", types.ContainerRemoveOptions{
RemoveVolumes: true, RemoveVolumes: true,
Force: true, Force: true,
}) })
@@ -211,7 +207,7 @@ func (ip *InstallationProcess) Run() error {
// Returns the location of the temporary data for the installation process. // Returns the location of the temporary data for the installation process.
func (ip *InstallationProcess) tempDir() string { func (ip *InstallationProcess) tempDir() string {
return filepath.Join(os.TempDir(), "pterodactyl/", ip.Server.Id()) return filepath.Join(os.TempDir(), "pterodactyl/", ip.Server.ID())
} }
// Writes the installation script to a temporary file on the host machine so that it // Writes the installation script to a temporary file on the host machine so that it
@@ -334,7 +330,7 @@ func (ip *InstallationProcess) BeforeExecute() error {
// Returns the log path for the installation process. // Returns the log path for the installation process.
func (ip *InstallationProcess) GetLogPath() string { func (ip *InstallationProcess) GetLogPath() string {
return filepath.Join(config.Get().System.LogDirectory, "/install", ip.Server.Id()+".log") return filepath.Join(config.Get().System.LogDirectory, "/install", ip.Server.ID()+".log")
} }
// Cleans up after the execution of the installation process. This grabs the logs from the // Cleans up after the execution of the installation process. This grabs the logs from the
@@ -370,7 +366,7 @@ func (ip *InstallationProcess) AfterExecute(containerId string) error {
| |
| Details | Details
| ------------------------------ | ------------------------------
Server UUID: {{.Server.Id}} Server UUID: {{.Server.ID}}
Container Image: {{.Script.ContainerImage}} Container Image: {{.Script.ContainerImage}}
Container Entrypoint: {{.Script.Entrypoint}} Container Entrypoint: {{.Script.Entrypoint}}
@@ -439,6 +435,7 @@ func (ip *InstallationProcess) Execute() (string, error) {
ReadOnly: false, ReadOnly: false,
}, },
}, },
Resources: ip.resourceLimits(),
Tmpfs: map[string]string{ Tmpfs: map[string]string{
"/tmp": "rw,exec,nosuid,size=" + tmpfsSize + "M", "/tmp": "rw,exec,nosuid,size=" + tmpfsSize + "M",
}, },
@@ -473,7 +470,7 @@ func (ip *InstallationProcess) Execute() (string, error) {
} }
}() }()
r, err := ip.client.ContainerCreate(ctx, conf, hostConf, nil, nil, ip.Server.Id()+"_installer") r, err := ip.client.ContainerCreate(ctx, conf, hostConf, nil, nil, ip.Server.ID()+"_installer")
if err != nil { if err != nil {
return "", err return "", err
} }
@@ -535,19 +532,47 @@ func (ip *InstallationProcess) StreamOutput(ctx context.Context, id string) erro
return nil return nil
} }
// Makes a HTTP request to the Panel instance notifying it that the server has // resourceLimits returns the install container specific resource limits. This
// completed the installation process, and what the state of the server is. A boolean // looks at the globally defined install container limits and attempts to use
// value of "true" means everything was successful, "false" means something went // the higher of the two (defined limits & server limits). This allows for servers
// wrong and the server must be deleted and re-created. // with super low limits (e.g. Discord bots with 128Mb of memory) to perform more
func (s *Server) SyncInstallState(successful bool) error { // intensive installation processes if needed.
err := s.client.SetInstallationStatus(s.Context(), s.Id(), successful) //
if err != nil { // This also avoids a server with limits such as 4GB of memory from accidentally
if !remote.IsRequestError(err) { // consuming 2-5x the defined limits during the install process and causing
return err // system instability.
} func (ip *InstallationProcess) resourceLimits() container.Resources {
limits := config.Get().Docker.InstallerLimits
return errors.New(err.Error()) // Create a copy of the configuration so we're not accidentally making changes
// to the underlying server build data.
c := *ip.Server.Config()
cfg := c.Build
if cfg.MemoryLimit < limits.Memory {
cfg.MemoryLimit = limits.Memory
}
// Only apply the CPU limit if neither one is currently set to unlimited. If the
// installer CPU limit is unlimited don't even waste time with the logic, just
// set the config to unlimited for this.
if limits.Cpu == 0 {
cfg.CpuLimit = 0
} else if cfg.CpuLimit != 0 && cfg.CpuLimit < limits.Cpu {
cfg.CpuLimit = limits.Cpu
} }
return nil resources := cfg.AsContainerResources()
// Explicitly remove the PID limits for the installation container. These scripts are
// defined at an administrative level and users can't manually execute things like a
// fork bomb during this process.
resources.PidsLimit = nil
return resources
}
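The merge rule is easier to see with plain numbers; the sketch below mirrors the logic above without the config and container types, which are left out on purpose.

package example

// mergeLimits resolves the install container limits from the server build
// limits and the globally configured installer limits: memory takes the
// higher of the two, and CPU does the same unless either side is unlimited.
func mergeLimits(serverMem, installerMem, serverCpu, installerCpu int64) (mem, cpu int64) {
	mem, cpu = serverMem, serverCpu
	if mem < installerMem {
		mem = installerMem
	}
	switch {
	case installerCpu == 0:
		cpu = 0 // installer limit is unlimited, so run unlimited
	case cpu != 0 && cpu < installerCpu:
		cpu = installerCpu
	}
	return mem, cpu
}

With hypothetical installer limits of 1024 MB memory and 100% CPU, a server built with 128 MB and 50% would run its installer with 1024 MB and 100%, while a 4 GB server with unlimited CPU keeps its own values.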
// SyncInstallState makes an HTTP request to the Panel instance notifying it that
// the server has completed the installation process, and what the state of the
// server is. A boolean value of "true" means everything was successful, "false"
// means something went wrong and the server must be deleted and re-created.
func (s *Server) SyncInstallState(successful bool) error {
return s.client.SetInstallationStatus(s.Context(), s.ID(), successful)
} }

View File

@@ -7,6 +7,7 @@ import (
"sync" "sync"
"github.com/apex/log" "github.com/apex/log"
"github.com/pterodactyl/wings/config" "github.com/pterodactyl/wings/config"
"github.com/pterodactyl/wings/environment" "github.com/pterodactyl/wings/environment"
"github.com/pterodactyl/wings/events" "github.com/pterodactyl/wings/events"
@@ -40,7 +41,7 @@ func (dsl *diskSpaceLimiter) Reset() {
// 15 seconds, and terminate it forcefully if it does not stop. // 15 seconds, and terminate it forcefully if it does not stop.
// //
// This function is only executed one time, so whenever a server is marked as booting the limiter // This function is only executed one time, so whenever a server is marked as booting the limiter
// should be reset so it can properly be triggered as needed. // should be reset, so it can properly be triggered as needed.
func (dsl *diskSpaceLimiter) Trigger() { func (dsl *diskSpaceLimiter) Trigger() {
dsl.o.Do(func() { dsl.o.Do(func() {
dsl.server.PublishConsoleOutputFromDaemon("Server is exceeding the assigned disk space limit, stopping process now.") dsl.server.PublishConsoleOutputFromDaemon("Server is exceeding the assigned disk space limit, stopping process now.")
@@ -50,7 +51,7 @@ func (dsl *diskSpaceLimiter) Trigger() {
}) })
} }
// Adds all of the internal event listeners we want to use for a server. These listeners can only be // StartEventListeners adds all the internal event listeners we want to use for a server. These listeners can only be
// removed by deleting the server as they should last for the duration of the process' lifetime. // removed by deleting the server as they should last for the duration of the process' lifetime.
func (s *Server) StartEventListeners() { func (s *Server) StartEventListeners() {
console := func(e events.Event) { console := func(e events.Event) {
@@ -106,15 +107,15 @@ func (s *Server) StartEventListeners() {
} }
stats := func(e events.Event) { stats := func(e events.Event) {
st := new(environment.Stats) var st environment.Stats
if err := json.Unmarshal([]byte(e.Data), st); err != nil { if err := json.Unmarshal([]byte(e.Data), &st); err != nil {
s.Log().WithField("error", err).Warn("failed to unmarshal server environment stats") s.Log().WithField("error", err).Warn("failed to unmarshal server environment stats")
return return
} }
// Update the server resource tracking object with the resources we got here. // Update the server resource tracking object with the resources we got here.
s.resources.mu.Lock() s.resources.mu.Lock()
s.resources.Stats = *st s.resources.Stats = st
s.resources.mu.Unlock() s.resources.mu.Unlock()
// If there is no disk space available at this point, trigger the server disk limiter logic // If there is no disk space available at this point, trigger the server disk limiter logic

View File

@@ -15,6 +15,7 @@ import (
"emperror.dev/errors" "emperror.dev/errors"
"github.com/apex/log" "github.com/apex/log"
"github.com/gammazero/workerpool" "github.com/gammazero/workerpool"
"github.com/pterodactyl/wings/config" "github.com/pterodactyl/wings/config"
"github.com/pterodactyl/wings/environment" "github.com/pterodactyl/wings/environment"
"github.com/pterodactyl/wings/environment/docker" "github.com/pterodactyl/wings/environment/docker"
@@ -28,9 +29,9 @@ type Manager struct {
servers []*Server servers []*Server
} }
// NewManager returns a new server manager instance. This will boot up all of // NewManager returns a new server manager instance. This will boot up all the
// the servers that are currently present on the filesystem and set them into // servers that are currently present on the filesystem and set them into the
// the manager. // manager.
func NewManager(ctx context.Context, client remote.Client) (*Manager, error) { func NewManager(ctx context.Context, client remote.Client) (*Manager, error) {
m := NewEmptyManager(client) m := NewEmptyManager(client)
if err := m.init(ctx); err != nil { if err := m.init(ctx); err != nil {
@@ -52,7 +53,7 @@ func (m *Manager) Client() remote.Client {
return m.client return m.client
} }
// Put replaces all of the current values in the collection with the value that // Put replaces all the current values in the collection with the value that
// is passed through. // is passed through.
func (m *Manager) Put(s []*Server) { func (m *Manager) Put(s []*Server) {
m.mu.Lock() m.mu.Lock()
@@ -60,7 +61,7 @@ func (m *Manager) Put(s []*Server) {
m.mu.Unlock() m.mu.Unlock()
} }
// All returns all of the items in the collection. // All returns all the items in the collection.
func (m *Manager) All() []*Server { func (m *Manager) All() []*Server {
m.mu.RLock() m.mu.RLock()
defer m.mu.RUnlock() defer m.mu.RUnlock()
@@ -78,7 +79,7 @@ func (m *Manager) Add(s *Server) {
// found in the global collection or not. // found in the global collection or not.
func (m *Manager) Get(uuid string) (*Server, bool) { func (m *Manager) Get(uuid string) (*Server, bool) {
match := m.Find(func(server *Server) bool { match := m.Find(func(server *Server) bool {
return server.Id() == uuid return server.ID() == uuid
}) })
return match, match != nil return match, match != nil
} }
@@ -130,7 +131,7 @@ func (m *Manager) Remove(filter func(match *Server) bool) {
func (m *Manager) PersistStates() error { func (m *Manager) PersistStates() error {
states := map[string]string{} states := map[string]string{}
for _, s := range m.All() { for _, s := range m.All() {
states[s.Id()] = s.Environment.State() states[s.ID()] = s.Environment.State()
} }
data, err := json.Marshal(states) data, err := json.Marshal(states)
if err != nil { if err != nil {
@@ -175,11 +176,11 @@ func (m *Manager) InitServer(data remote.ServerConfigurationResponse) (*Server,
return nil, err return nil, err
} }
s.fs = filesystem.New(filepath.Join(config.Get().System.Data, s.Id()), s.DiskSpace(), s.Config().Egg.FileDenylist) s.fs = filesystem.New(filepath.Join(config.Get().System.Data, s.ID()), s.DiskSpace(), s.Config().Egg.FileDenylist)
// Right now we only support a Docker based environment, so I'm going to hard code // Right now we only support a Docker based environment, so I'm going to hard code
// this logic in. When we're ready to support other environment we'll need to make // this logic in. When we're ready to support other environment we'll need to make
// some modifications here obviously. // some modifications here, obviously.
settings := environment.Settings{ settings := environment.Settings{
Mounts: s.Mounts(), Mounts: s.Mounts(),
Allocations: s.cfg.Allocations, Allocations: s.cfg.Allocations,
@@ -191,7 +192,7 @@ func (m *Manager) InitServer(data remote.ServerConfigurationResponse) (*Server,
Image: s.Config().Container.Image, Image: s.Config().Container.Image,
} }
if env, err := docker.New(s.Id(), &meta, envCfg); err != nil { if env, err := docker.New(s.ID(), &meta, envCfg); err != nil {
return nil, err return nil, err
} else { } else {
s.Environment = env s.Environment = env
@@ -212,7 +213,7 @@ func (m *Manager) InitServer(data remote.ServerConfigurationResponse) (*Server,
return s, nil return s, nil
} }
// initializeFromRemoteSource iterates over a given directory and loads all of // initializeFromRemoteSource iterates over a given directory and loads all
// the servers listed before returning them to the calling function. // the servers listed before returning them to the calling function.
func (m *Manager) init(ctx context.Context) error { func (m *Manager) init(ctx context.Context) error {
log.Info("fetching list of servers from API") log.Info("fetching list of servers from API")
@@ -252,7 +253,7 @@ func (m *Manager) init(ctx context.Context) error {
}) })
} }
// Wait until we've processed all of the configuration files in the directory // Wait until we've processed all the configuration files in the directory
// before continuing. // before continuing.
pool.StopWait() pool.StopWait()

View File

@@ -5,6 +5,7 @@ import (
"strings" "strings"
"github.com/apex/log" "github.com/apex/log"
"github.com/pterodactyl/wings/config" "github.com/pterodactyl/wings/config"
"github.com/pterodactyl/wings/environment" "github.com/pterodactyl/wings/environment"
) )

View File

@@ -6,9 +6,10 @@ import (
"time" "time"
"emperror.dev/errors" "emperror.dev/errors"
"golang.org/x/sync/semaphore"
"github.com/pterodactyl/wings/config" "github.com/pterodactyl/wings/config"
"github.com/pterodactyl/wings/environment" "github.com/pterodactyl/wings/environment"
"golang.org/x/sync/semaphore"
) )
type PowerAction string type PowerAction string
@@ -18,7 +19,7 @@ type PowerAction string
 // example, sending two "start" actions back to back will not process the second action until
 // the first action has been completed.
 //
-// This utilizes a workerpool with a limit of one worker so that all of the actions execute
+// This utilizes a workerpool with a limit of one worker so that all the actions execute
 // in a sync manner.
 const (
 	PowerActionStart = "start"
@@ -27,7 +28,7 @@ const (
 	PowerActionTerminate = "kill"
 )
 
-// Checks if the power action being received is valid.
+// IsValid checks if the power action being received is valid.
 func (pa PowerAction) IsValid() bool {
 	return pa == PowerActionStart ||
 		pa == PowerActionStop ||
@@ -39,7 +40,7 @@ func (pa PowerAction) IsStart() bool {
 	return pa == PowerActionStart || pa == PowerActionRestart
 }
 
-// Check if there is currently a power action being processed for the server.
+// ExecutingPowerAction checks if there is currently a power action being processed for the server.
 func (s *Server) ExecutingPowerAction() bool {
 	if s.powerLock == nil {
 		return false
@@ -54,9 +55,9 @@ func (s *Server) ExecutingPowerAction() bool {
 	return !ok
 }
 
-// Helper function that can receive a power action and then process the actions that need
-// to occur for it. This guards against someone calling Start() twice at the same time, or
-// trying to restart while another restart process is currently running.
+// HandlePowerAction is a helper function that can receive a power action and then process the
+// actions that need to occur for it. This guards against someone calling Start() twice at the
+// same time, or trying to restart while another restart process is currently running.
 //
 // However, the code design for the daemon does depend on the user correctly calling this
 // function rather than making direct calls to the start/stop/restart functions on the
@@ -107,7 +108,7 @@ func (s *Server) HandlePowerAction(action PowerAction, waitSeconds ...int) error
 		// Release the lock once the process being requested has finished executing.
 		defer s.powerLock.Release(1)
 	} else {
-		// Still try to acquire the lock if terminating and it is available, just so that other power
+		// Still try to acquire the lock if terminating, and it is available, just so that other power
 		// actions are blocked until it has completed. However, if it is unavailable we won't stop
 		// the entire process.
 		if ok := s.powerLock.TryAcquire(1); ok {
@@ -190,14 +191,14 @@ func (s *Server) onBeforeStart() error {
 	// Update the configuration files defined for the server before beginning the boot process.
 	// This process executes a bunch of parallel updates, so we just block until that process
 	// is complete. Any errors as a result of this will just be bubbled out in the logger,
-	// we don't need to actively do anything about it at this point, worst comes to worst the
+	// we don't need to actively do anything about it at this point, worse comes to worst the
 	// server starts in a weird state and the user can manually adjust.
 	s.PublishConsoleOutputFromDaemon("Updating process configuration files...")
 	s.UpdateConfigurationFiles()
 
 	if config.Get().System.CheckPermissionsOnBoot {
 		s.PublishConsoleOutputFromDaemon("Ensuring file permissions are set correctly, this could take a few seconds...")
-		// Ensure all of the server file permissions are set correctly before booting the process.
+		// Ensure all the server file permissions are set correctly before booting the process.
 		if err := s.Filesystem().Chown("/"); err != nil {
 			return errors.WithMessage(err, "failed to chown root server directory during pre-boot process")
 		}
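The comments in these power.go hunks describe the locking model: a single-slot lock serializes start/stop/restart, while a termination only opportunistically grabs the lock via TryAcquire so it is never blocked behind a stuck action. A minimal sketch of that pattern using golang.org/x/sync/semaphore (the function names and 30s timeout here are illustrative, not wings' actual implementation):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/sync/semaphore"
)

// powerLock plays the role of the one-worker "workerpool" described above:
// only one power action may hold it at a time.
var powerLock = semaphore.NewWeighted(1)

func handlePowerAction(ctx context.Context, action string) error {
	if action == "kill" {
		// Terminations should never be blocked behind a hung start/stop, so
		// only take the lock if it happens to be free.
		if powerLock.TryAcquire(1) {
			defer powerLock.Release(1)
		}
	} else {
		// Everything else waits (up to an illustrative 30s) for the previous
		// action to finish before proceeding.
		waitCtx, cancel := context.WithTimeout(ctx, 30*time.Second)
		defer cancel()
		if err := powerLock.Acquire(waitCtx, 1); err != nil {
			return fmt.Errorf("power: another action is still running: %w", err)
		}
		defer powerLock.Release(1)
	}

	fmt.Println("executing power action:", action)
	return nil
}

func main() {
	_ = handlePowerAction(context.Background(), "start")
	_ = handlePowerAction(context.Background(), "kill")
}
```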

View File

@@ -8,7 +8,7 @@ import (
 	"github.com/pterodactyl/wings/system"
 )
 
-// Defines the current resource usage for a given server instance. If a server is offline you
+// ResourceUsage defines the current resource usage for a given server instance. If a server is offline you
 // should obviously expect memory and CPU usage to be 0. However, disk will always be returned
 // since that is not dependent on the server being running to collect that data.
 type ResourceUsage struct {
@@ -26,7 +26,7 @@ type ResourceUsage struct {
 	Disk int64 `json:"disk_bytes"`
 }
 
-// Returns the current resource usage stats for the server instance. This returns
+// Proc returns the current resource usage stats for the server instance. This returns
 // a copy of the tracked resources, so making any changes to the response will not
 // have the desired outcome for you most likely.
 func (s *Server) Proc() ResourceUsage {
@@ -38,11 +38,12 @@ func (s *Server) Proc() ResourceUsage {
 	return s.resources
 }
 
-// Resets the usages values to zero, used when a server is stopped to ensure we don't hold
+// Reset resets the usages values to zero, used when a server is stopped to ensure we don't hold
 // onto any values incorrectly.
 func (ru *ResourceUsage) Reset() {
 	ru.mu.Lock()
 	defer ru.mu.Unlock()
 	ru.Memory = 0
 	ru.CpuAbsolute = 0
 	ru.Network.TxBytes = 0
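The Proc/Reset hunks hinge on the same idea: stats are read and zeroed under a lock, and callers only ever receive a copy. A stripped-down sketch of that pattern, with types invented for illustration rather than taken from wings:

```go
package main

import (
	"fmt"
	"sync"
)

// usage is the copyable snapshot handed to callers; tracker owns the lock.
type usage struct {
	Memory uint64
	CPU    float64
}

type tracker struct {
	mu sync.RWMutex
	u  usage
}

// Proc returns a copy, so callers mutating the result never affect the
// tracked values, mirroring the comment on Proc() above.
func (t *tracker) Proc() usage {
	t.mu.RLock()
	defer t.mu.RUnlock()
	return t.u
}

// Reset zeroes everything under the write lock, as is done when a server
// stops so the Panel UI reports 0 across the board.
func (t *tracker) Reset() {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.u = usage{}
}

func main() {
	t := &tracker{u: usage{Memory: 1024, CPU: 42.5}}
	snap := t.Proc()
	snap.Memory = 0 // only the copy changes
	t.Reset()
	fmt.Println(t.Proc(), snap)
}
```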

View File

@@ -3,6 +3,7 @@ package server
 import (
 	"context"
 	"fmt"
+	"net/http"
 	"os"
 	"strings"
 	"sync"
@@ -10,6 +11,8 @@ import (
 	"emperror.dev/errors"
 	"github.com/apex/log"
 	"github.com/creasty/defaults"
+	"golang.org/x/sync/semaphore"
+
 	"github.com/pterodactyl/wings/config"
 	"github.com/pterodactyl/wings/environment"
 	"github.com/pterodactyl/wings/environment/docker"
@@ -17,7 +20,6 @@ import (
 	"github.com/pterodactyl/wings/remote"
 	"github.com/pterodactyl/wings/server/filesystem"
 	"github.com/pterodactyl/wings/system"
-	"golang.org/x/sync/semaphore"
 )
 
 // Server is the high level definition for a server instance being controlled
@@ -92,11 +94,19 @@ func New(client remote.Client) (*Server, error) {
 	return &s, nil
 }
 
-// Id returns the UUID for the server instance.
-func (s *Server) Id() string {
+// ID returns the UUID for the server instance.
+func (s *Server) ID() string {
 	return s.Config().GetUuid()
 }
 
+// Id returns the UUID for the server instance. This function is deprecated
+// in favor of Server.ID().
+//
+// Deprecated
+func (s *Server) Id() string {
+	return s.ID()
+}
+
 // Cancels the context assigned to this server instance. Assuming background tasks
 // are using this server's context for things, all of the background tasks will be
 // stopped as a result.
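The new Id() wrapper keeps existing callers compiling while pointing them at ID(). Worth noting: most Go tooling (godoc, staticcheck, gopls) only surfaces deprecations when the comment paragraph starts with "Deprecated:" including the colon, which the hunk above omits. A self-contained sketch of that convention, using a toy Server type rather than wings' own:

```go
package main

import "fmt"

type Server struct{ uuid string }

// ID returns the UUID for the server instance.
func (s *Server) ID() string { return s.uuid }

// Id returns the UUID for the server instance.
//
// Deprecated: Use Server.ID instead.
func (s *Server) Id() string { return s.ID() }

func main() {
	s := &Server{uuid: "0c2e5648-31a6-4364-b661-c12e12e58931"}
	fmt.Println(s.Id() == s.ID()) // true; old callers keep working
}
```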
@@ -128,7 +138,7 @@ eloop:
 	for k := range s.Config().EnvVars {
 		// Don't allow any environment variables that we have already set above.
 		for _, e := range out {
-			if strings.HasPrefix(e, strings.ToUpper(k)) {
+			if strings.HasPrefix(e, strings.ToUpper(k)+"=") {
 				continue eloop
 			}
 		}
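The one-character change in this hunk fixes variables that share a prefix: entries in `out` are "NAME=value" strings, so checking only the bare key as a prefix lets ONE_VARIABLE and ONE_VARIABLE_NAME shadow one another, and the shadowed variable is silently skipped. Appending "=" anchors the comparison to the full variable name. A runnable illustration of both checks:

```go
package main

import (
	"fmt"
	"strings"
)

// Sketch of the prefix bug fixed above. Entries in out are "NAME=value"
// strings that were already emitted; k is a variable we are about to add.
// With a bare prefix check, ONE_VARIABLE_NAME masks ONE_VARIABLE and the
// latter is wrongly skipped; anchoring the prefix with "=" compares full names.
func main() {
	out := []string{"ONE_VARIABLE_NAME=foo"}
	k := "ONE_VARIABLE"

	for _, e := range out {
		fmt.Println(strings.HasPrefix(e, strings.ToUpper(k)))     // true  -> k would be skipped
		fmt.Println(strings.HasPrefix(e, strings.ToUpper(k)+"=")) // false -> k is correctly kept
	}
}
```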
@@ -140,7 +150,7 @@
 }
 
 func (s *Server) Log() *log.Entry {
-	return log.WithField("server", s.Id())
+	return log.WithField("server", s.ID())
 }
 
 // Sync syncs the state of the server on the Panel with Wings. This ensures that
@@ -150,19 +160,13 @@ func (s *Server) Log() *log.Entry {
 // This also means mass actions can be performed against servers on the Panel
 // and they will automatically sync with Wings when the server is started.
 func (s *Server) Sync() error {
-	cfg, err := s.client.GetServerConfiguration(s.Context(), s.Id())
+	cfg, err := s.client.GetServerConfiguration(s.Context(), s.ID())
 	if err != nil {
-		if !remote.IsRequestError(err) {
-			return err
-		}
-		if err.(*remote.RequestError).Status == "404" {
+		if err := remote.AsRequestError(err); err != nil && err.StatusCode() == http.StatusNotFound {
 			return &serverDoesNotExist{}
 		}
-		return errors.New(err.Error())
+		return errors.WithStackIf(err)
 	}
 	return s.SyncWithConfiguration(cfg)
 }
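The rewritten Sync() error handling converts the API error into a typed request error and branches on the HTTP status code instead of comparing the string "404". The helper below only illustrates that shape with errors.As; it is a stand-in, not wings' actual remote package:

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
)

// RequestError is a stand-in for a typed API error such as the one the hunk
// above consults via remote.AsRequestError; only the pattern is the point.
type RequestError struct{ Code int }

func (e *RequestError) Error() string { return fmt.Sprintf("api: http %d", e.Code) }

// asRequestError unwraps err into *RequestError if it is one, mirroring the
// "convert, then branch on status" flow used in Sync().
func asRequestError(err error) *RequestError {
	var re *RequestError
	if errors.As(err, &re) {
		return re
	}
	return nil
}

func main() {
	err := fmt.Errorf("fetching configuration: %w", &RequestError{Code: http.StatusNotFound})
	if re := asRequestError(err); re != nil && re.Code == http.StatusNotFound {
		fmt.Println("server does not exist on the Panel")
		return
	}
	fmt.Println("some other failure:", err)
}
```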
@@ -251,7 +255,7 @@ func (s *Server) EnsureDataDirectoryExists() error {
 	return nil
 }
 
-// Sets the state of the server internally. This function handles crash detection as
+// OnStateChange sets the state of the server internally. This function handles crash detection as
 // well as reporting to event listeners for the server.
 func (s *Server) OnStateChange() {
 	prevState := s.resources.State.Load()
@@ -266,7 +270,7 @@ func (s *Server) OnStateChange() {
 		s.Events().Publish(StatusEvent, st)
 	}
 
-	// Reset the resource usage to 0 when the process fully stops so that all of the UI
+	// Reset the resource usage to 0 when the process fully stops so that all the UI
 	// views in the Panel correctly display 0.
 	if st == environment.ProcessOfflineState {
 		s.resources.Reset()
@@ -298,7 +302,7 @@
 }
 
 // IsRunning determines if the server state is running or not. This is different
-// than the environment state, it is simply the tracked state from this daemon
+// from the environment state, it is simply the tracked state from this daemon
 // instance, and not the response from Docker.
 func (s *Server) IsRunning() bool {
 	st := s.Environment.State()

View File

@@ -6,6 +6,7 @@ import (
 	"emperror.dev/errors"
 	"github.com/buger/jsonparser"
 	"github.com/imdario/mergo"
+
 	"github.com/pterodactyl/wings/environment"
 )
@@ -25,7 +26,7 @@ func (s *Server) UpdateDataStructure(data []byte) error {
 	// Don't allow obviously corrupted data to pass through into this function. If the UUID
 	// doesn't match something has gone wrong and the API is attempting to meld this server
 	// instance into a totally different one, which would be bad.
-	if src.Uuid != "" && s.Id() != "" && src.Uuid != s.Id() {
+	if src.Uuid != "" && s.ID() != "" && src.Uuid != s.ID() {
 		return errors.New("server/update: attempting to merge a data stack with an invalid UUID")
 	}
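This guard refuses to merge configuration data whose UUID names a different server. A toy version of the same check, with types invented for the sketch rather than taken from wings:

```go
package main

import (
	"errors"
	"fmt"
)

// configuration is an illustrative stand-in for the data merged by
// UpdateDataStructure above.
type configuration struct {
	Uuid   string
	Memory int64
}

// merge only applies the incoming values when the UUIDs agree, so a stray
// payload for another server can never be melded into this one.
func merge(current *configuration, incoming configuration) error {
	if incoming.Uuid != "" && current.Uuid != "" && incoming.Uuid != current.Uuid {
		return errors.New("update: attempting to merge a data stack with an invalid UUID")
	}
	if incoming.Memory != 0 {
		current.Memory = incoming.Memory
	}
	return nil
}

func main() {
	cfg := configuration{Uuid: "aaaa", Memory: 512}
	fmt.Println(merge(&cfg, configuration{Uuid: "bbbb", Memory: 1024}))             // rejected
	fmt.Println(merge(&cfg, configuration{Uuid: "aaaa", Memory: 1024}), cfg.Memory) // applied
}
```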

View File

@@ -12,7 +12,7 @@ type WebsocketBag struct {
 	conns map[uuid.UUID]*context.CancelFunc
 }
 
-// Returns the websocket bag which contains all of the currently open websocket connections
+// Websockets returns the websocket bag which contains all the currently open websocket connections
 // for the server instance.
 func (s *Server) Websockets() *WebsocketBag {
 	s.wsBagLocker.Lock()
@@ -25,7 +25,7 @@ func (s *Server) Websockets() *WebsocketBag {
 	return s.wsBag
 }
 
-// Adds a new websocket connection to the stack.
+// Push adds a new websocket connection to the end of the stack.
 func (w *WebsocketBag) Push(u uuid.UUID, cancel *context.CancelFunc) {
 	w.mu.Lock()
 	defer w.mu.Unlock()
@@ -37,14 +37,14 @@ func (w *WebsocketBag) Push(u uuid.UUID, cancel *context.CancelFunc) {
 	w.conns[u] = cancel
 }
 
-// Removes a connection from the stack.
+// Remove removes a connection from the stack.
 func (w *WebsocketBag) Remove(u uuid.UUID) {
 	w.mu.Lock()
 	delete(w.conns, u)
 	w.mu.Unlock()
 }
 
-// Cancels all of the stored cancel functions which has the effect of disconnecting
+// CancelAll cancels all the stored cancel functions which has the effect of disconnecting
 // every listening websocket for the server.
 func (w *WebsocketBag) CancelAll() {
 	w.mu.Lock()
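The WebsocketBag is essentially a mutex-guarded map of context cancel functions keyed by connection ID, and CancelAll disconnects everything at once (for example when a server is deleted). A pared-down sketch of that pattern, with the key type simplified to an int for illustration:

```go
package main

import (
	"context"
	"fmt"
	"sync"
)

// bag mirrors the WebsocketBag idea: each connection registers a cancel func,
// and cancelAll tears every registered connection down together.
type bag struct {
	mu    sync.Mutex
	conns map[int]context.CancelFunc
}

func (b *bag) push(id int, cancel context.CancelFunc) {
	b.mu.Lock()
	defer b.mu.Unlock()
	if b.conns == nil {
		b.conns = make(map[int]context.CancelFunc)
	}
	b.conns[id] = cancel
}

func (b *bag) cancelAll() {
	b.mu.Lock()
	defer b.mu.Unlock()
	for _, cancel := range b.conns {
		cancel()
	}
	b.conns = make(map[int]context.CancelFunc)
}

func main() {
	var b bag
	ctx, cancel := context.WithCancel(context.Background())
	b.push(1, cancel)
	b.cancelAll()
	fmt.Println(ctx.Err()) // context canceled
}
```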

View File

@@ -11,9 +11,10 @@ import (
 	"emperror.dev/errors"
 	"github.com/apex/log"
 	"github.com/pkg/sftp"
+	"golang.org/x/crypto/ssh"
 
 	"github.com/pterodactyl/wings/config"
 	"github.com/pterodactyl/wings/server/filesystem"
-	"golang.org/x/crypto/ssh"
 )
 
 const (

View File

@@ -18,10 +18,11 @@ import (
 	"emperror.dev/errors"
 	"github.com/apex/log"
 	"github.com/pkg/sftp"
+	"golang.org/x/crypto/ssh"
 
 	"github.com/pterodactyl/wings/config"
 	"github.com/pterodactyl/wings/remote"
 	"github.com/pterodactyl/wings/server"
-	"golang.org/x/crypto/ssh"
 )
 
 // Usernames all follow the same format, so don't even bother hitting the API if the username is not
@@ -132,7 +133,7 @@ func (c *SFTPServer) AcceptInbound(conn net.Conn, config *ssh.ServerConfig) {
 		if uuid == "" {
 			return false
 		}
-		return s.Id() == uuid
+		return s.ID() == uuid
 	})
 	if srv == nil {
 		continue
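The final hunk simply switches the SFTP credential check to the new ID() accessor; the surrounding code looks up the authenticated server by matching the UUID parsed from the connection's credentials. A generic sketch of that kind of predicate lookup, with types invented for illustration:

```go
package main

import "fmt"

// srv is a stand-in for a managed server; only the ID matters here.
type srv struct{ id string }

func (s *srv) ID() string { return s.id }

// find returns the first server matching the predicate, or nil when none does,
// mirroring the "match by UUID, skip the connection otherwise" flow above.
func find(servers []*srv, match func(*srv) bool) *srv {
	for _, s := range servers {
		if match(s) {
			return s
		}
	}
	return nil
}

func main() {
	servers := []*srv{{id: "aaa"}, {id: "bbb"}}
	uuid := "bbb"
	if s := find(servers, func(s *srv) bool { return s.ID() == uuid }); s != nil {
		fmt.Println("authenticated against server", s.ID())
	}
}
```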