Compare commits


69 Commits

Author SHA1 Message Date
DaneEveritt
83861a6dec Update CHANGELOG.md 2022-07-24 19:43:43 -04:00
DaneEveritt
231e24aa33 Support new metadata from panel for servers 2022-07-24 17:16:45 -04:00
DaneEveritt
e3ab241d7f Track file upload activity 2022-07-24 17:12:47 -04:00
DaneEveritt
c18e844689 Support more rapid insertion; ignore issues with i/o 2022-07-24 16:58:03 -04:00
DaneEveritt
8cee18a92b Save activity in a background routine to speed things along; cap query time at 3 seconds 2022-07-24 16:27:25 -04:00
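A minimal sketch of what this commit describes, assuming a hypothetical Activity model and the gorm and apex/log packages that appear in the go.mod diff further down; the actual Wings code will differ:

```go
package activity

import (
	"context"
	"time"

	"github.com/apex/log"
	"gorm.io/gorm"
)

// Activity is a hypothetical stand-in for a logged event.
type Activity struct {
	Event string
}

// Save persists an entry in a background goroutine so the caller never
// blocks on disk I/O, and caps the insert at three seconds as the
// commit message above describes.
func Save(db *gorm.DB, a Activity) {
	go func() {
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		defer cancel()
		if err := db.WithContext(ctx).Create(&a).Error; err != nil {
			log.WithField("error", err).Warn("failed to save activity entry")
		}
	}()
}
```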
DaneEveritt
f952efd9c7 Don't try to store nil for the metadata 2022-07-24 16:27:05 -04:00
DaneEveritt
21cf66b2b4 Use single connection in pool to avoid simultaneous write lock issues 2022-07-24 16:26:52 -04:00
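With SQLite only one writer can hold the database lock at a time, so the fix amounts to capping the pool at one connection. A hedged sketch using gorm's underlying sql.DB handle:

```go
package database

import "gorm.io/gorm"

// configurePool restricts gorm's underlying sql.DB pool to a single
// connection so concurrent inserts never race for SQLite's write lock
// and trigger "database is locked" errors.
func configurePool(db *gorm.DB) error {
	sqlDB, err := db.DB()
	if err != nil {
		return err
	}
	sqlDB.SetMaxOpenConns(1)
	return nil
}
```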
DaneEveritt
251f91a08e Fix crons to actually run correctly using the configuration values 2022-07-24 15:59:17 -04:00
DaneEveritt
4634c93182 Add cron to handle parsing SFTP events 2022-07-24 14:40:06 -04:00
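The cron system is built on the go-co-op/gocron dependency added in the go.mod diff below. A minimal sketch of registering such a job; the processSftpEvents callback is a hypothetical placeholder:

```go
package cron

import (
	"time"

	"github.com/go-co-op/gocron"
)

// newScheduler registers a job that runs every interval seconds. The
// caller later starts it in the background with s.StartAsync(), as
// shown in the cmd/root.go diff further down.
func newScheduler(interval int, processSftpEvents func()) (*gocron.Scheduler, error) {
	s := gocron.NewScheduler(time.UTC)
	if _, err := s.Every(interval).Seconds().Do(processSftpEvents); err != nil {
		return nil, err
	}
	return s, nil
}
```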
DaneEveritt
8a867ccc44 Switch to gorm for activity logging 2022-07-24 11:43:48 -04:00
DaneEveritt
61baccb1a3 Push draft of sftp reconciliation details 2022-07-24 10:28:42 -04:00
DaneEveritt
7bd11c1c28 Switch to SQLite for activity tracking 2022-07-10 16:51:11 -04:00
DaneEveritt
e1e7916790 Handle ErrKeyNotFound as a non-error 2022-07-10 14:53:54 -04:00
DaneEveritt
f28e06267c Better tracking of SFTP events 2022-07-10 14:30:32 -04:00
DaneEveritt
59fbd2bcea Add initial naive implementation of SFTP logging
This will end up flooding the activity logs due to the way SFTP works; we'll need an intermediate step in Wings that batches events every 10 seconds or so and submits them as a single "event" for activity.
2022-07-09 19:37:42 -04:00
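The intermediate batching step proposed in that message could look roughly like this; the Event type and flush mechanics are illustrative assumptions, not the eventual implementation:

```go
package activity

import (
	"sync"
	"time"
)

// Event is a hypothetical stand-in for one SFTP action.
type Event struct {
	User, Action, Path string
}

// Batcher queues events and flushes them as a single submission on a
// fixed interval instead of emitting one activity entry per operation.
type Batcher struct {
	mu     sync.Mutex
	events []Event
}

// Add queues a single event for the next flush.
func (b *Batcher) Add(e Event) {
	b.mu.Lock()
	b.events = append(b.events, e)
	b.mu.Unlock()
}

// Run flushes queued events every interval until stop is closed.
func (b *Batcher) Run(interval time.Duration, stop <-chan struct{}, flush func([]Event)) {
	t := time.NewTicker(interval)
	defer t.Stop()
	for {
		select {
		case <-t.C:
			b.mu.Lock()
			batch := b.events
			b.events = nil
			b.mu.Unlock()
			if len(batch) > 0 {
				flush(batch)
			}
		case <-stop:
			return
		}
	}
}
```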
Michael
204a4375fc Make the Docker network MTU configurable (#130) 2022-07-09 18:08:52 -04:00
DaneEveritt
dda7d10d37 Use the natural panel event names 2022-07-09 17:52:59 -04:00
DaneEveritt
ed330fa6be Squashed commit of the following:
commit f5baab4e88
Author: DaneEveritt <dane@daneeveritt.com>
Date:   Sat Jul 9 17:50:53 2022 -0400

    Finalize activity event sending logic and cron config

commit 9830387f21
Author: DaneEveritt <dane@daneeveritt.com>
Date:   Sat Jul 9 16:26:13 2022 -0400

    Send power events in a more usable format

commit 49f3a61d16
Author: DaneEveritt <dane@daneeveritt.com>
Date:   Sat Jul 9 15:47:24 2022 -0400

    Configure cron to actually send to endpoint

commit 28137c4c14
Author: DaneEveritt <dane@daneeveritt.com>
Date:   Sat Jul 9 15:42:29 2022 -0400

    Copy the body buffer otherwise subsequent backoff attempts will not have a buffer to send

commit 20e44bdc55
Author: DaneEveritt <dane@daneeveritt.com>
Date:   Sat Jul 9 14:38:41 2022 -0400

    Add internal logic to process activity events and send them to the panel

commit 0380488cd2
Author: DaneEveritt <dane@daneeveritt.com>
Date:   Mon Jul 4 17:55:17 2022 -0400

    Track power events

commit 9eab08b92f
Author: DaneEveritt <dane@daneeveritt.com>
Date:   Mon Jul 4 17:36:03 2022 -0400

    Initial logic to support logging activity on Wings to send back to the panel
2022-07-09 17:51:19 -04:00
DaneEveritt
9864a0fe34 Update README.md 2022-07-09 12:01:15 -04:00
Michael (Parker) Parker
214baf83fb Fix/arm64 docker (#133)
* fix: arm64 docker builds

Don't hardcode amd64 platform for the Wings binary.

* update docker file

don't specify buildplatform
remove upx as it causes arm64 failures
remove goos as the build is on linux hosts.

Co-authored-by: softwarenoob <admin@softwarenoob.com>
2022-07-03 11:09:07 -04:00
Dane Everitt
41fc1973d1 Update README.md 2022-06-24 10:51:58 -04:00
DaneEveritt
a51ce6f4ac Update README.md 2022-06-16 20:37:01 -04:00
DaneEveritt
cec51f11f0 Update CHANGELOG.md 2022-05-31 14:36:24 -04:00
DaneEveritt
b1be2081eb Better archive detection logic; try to use reflection as last ditch effort if unmatched
closes pterodactyl/panel#4101
2022-05-30 18:42:31 -04:00
DaneEveritt
203a2091a0 Use the correct CPU period when throttling servers; closes pterodactyl/panel#4102 2022-05-30 17:45:41 -04:00
DaneEveritt
7fa7cc313f Fix permissions not being checked correctly for admins 2022-05-29 21:48:49 -04:00
DaneEveritt
f390784973 Include error in log output if one occurs during move 2022-05-21 17:01:12 -04:00
DaneEveritt
5df1acd10e We don't return public keys 2022-05-15 16:41:26 -04:00
DaneEveritt
1927a59cd0 Send key correctly; don't retry 4xx errors 2022-05-15 16:17:06 -04:00
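The 4xx behavior matches what the CHANGELOG below calls out: client errors abort the retry loop instead of backing off. A sketch using the cenkalti/backoff/v4 package from go.mod; the surrounding function is illustrative:

```go
package remote

import (
	"fmt"
	"net/http"

	"github.com/cenkalti/backoff/v4"
)

// doWithRetry retries transport failures with exponential backoff, but
// wraps 4xx responses in backoff.Permanent so the loop stops at once.
// Note: requests with bodies need their buffer copied before each
// attempt, which is what one of the squashed commits above addresses.
func doWithRetry(c *http.Client, req *http.Request) error {
	return backoff.Retry(func() error {
		res, err := c.Do(req)
		if err != nil {
			return err // retried with exponential backoff
		}
		defer res.Body.Close()
		if res.StatusCode >= 400 && res.StatusCode < 500 {
			return backoff.Permanent(fmt.Errorf("remote: client error: %d", res.StatusCode))
		}
		return nil
	}, backoff.NewExponentialBackOff())
}
```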
DaneEveritt
5bcf4164fb Add support for public key based auth 2022-05-15 16:01:52 -04:00
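Public-key auth for SFTP ultimately hangs off the SSH server configuration. A speculative sketch with golang.org/x/crypto/ssh; the authorize helper that would consult the Panel is hypothetical:

```go
package sftp

import (
	"errors"

	"golang.org/x/crypto/ssh"
)

// newServerConfig wires a public-key callback into the SSH server
// config backing SFTP. authorize is a hypothetical helper asking the
// Panel whether the presented key is registered for the user.
func newServerConfig(authorize func(user string, key ssh.PublicKey) bool) *ssh.ServerConfig {
	return &ssh.ServerConfig{
		PublicKeyCallback: func(conn ssh.ConnMetadata, key ssh.PublicKey) (*ssh.Permissions, error) {
			if authorize(conn.User(), key) {
				return &ssh.Permissions{}, nil
			}
			return nil, errors.New("sftp: public key rejected")
		},
	}
}
```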
DaneEveritt
37e4d57cdf Don't include files and folders with identical name prefixes when archiving; closes pterodactyl/panel#3946 2022-05-12 18:00:55 -04:00
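The essence of that fix is a separator-aware prefix check; a minimal sketch, not the actual archiver code:

```go
package filesystem

import (
	"path/filepath"
	"strings"
)

// matchesBase reports whether p is base itself or a path inside it.
// Comparing against base plus a trailing separator is what keeps a
// request for "map" from also matching "map2" and "map3".
func matchesBase(base, p string) bool {
	base = filepath.Clean(base)
	p = filepath.Clean(p)
	return p == base || strings.HasPrefix(p, base+string(filepath.Separator))
}
```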
DaneEveritt
7ededdb9a2 Update CHANGELOG.md 2022-05-12 17:57:26 -04:00
DaneEveritt
1d197714df Fix faulty handling of named pipes; closes pterodactyl/panel#4059 2022-05-07 15:53:08 -04:00
DaneEveritt
6c98a955e3 Only set cpu limits if specified; closes pterodactyl/panel#3988 2022-05-07 15:23:56 -04:00
Matthew Penner
8bd1ebe360 go: update dependencies 2022-03-25 10:04:57 -06:00
Matthew Penner
93664fd112 router: add additional fields to remote file pull 2022-02-23 15:03:15 -07:00
Matthew Penner
3a738e44d6 run gofumpt 2022-02-23 15:02:19 -07:00
Noah van der Aa
067ca5bb60 Actually enforce upload file size limit (#122) 2022-02-21 14:59:28 -08:00
Dane Everitt
f85509a0c7 Support a custom tmp directory location 2022-02-13 11:59:53 -05:00
Dane Everitt
225a89be72 Update CHANGELOG.md 2022-02-05 12:41:53 -05:00
Dane Everitt
5d1d3cc9e6 Fix panic conditions 2022-02-05 12:11:00 -05:00
Dane Everitt
9f985ae044 Check for error before prefix; fixes abandoned routine; closes pterodactyl/panel#3911
Due to the order of the previous logic in ScanReader, an error not caused by EOF would effectively get ignored since an error will always be returned with `isPrefix` equal to false, thus triggering the first break, and error checking is not performed beyond that point.

Thus, canceling an installation process for a server while this process was running would hang the routine and cause the loop to run endlessly, even with a canceled context.
2022-02-05 11:56:17 -05:00
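The corrected ordering can be sketched as follows; this is an illustration of the pattern, not the actual ScanReader source:

```go
package main

import (
	"bufio"
	"io"
)

// readLines checks the read error before acting on isPrefix. Under the
// old ordering a non-EOF error (for example, a canceled context) arrived
// with isPrefix == false, the loop broke out of prefix handling, and the
// error was never inspected, so the routine spun forever.
func readLines(r *bufio.Reader, onLine func([]byte)) error {
	for {
		line, isPrefix, err := r.ReadLine()
		if err != nil {
			if err == io.EOF {
				return nil
			}
			return err // surface cancellations instead of dropping them
		}
		if isPrefix {
			// A full implementation would accumulate the partial line here.
			continue
		}
		onLine(line)
	}
}
```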
Dane Everitt
1372eba84e Remove unused function 2022-02-05 11:14:48 -05:00
Dane Everitt
879dcd8df5 Don't trigger a panic condition decoding event stats; closes pterodactyl/panel#3941 2022-02-05 11:06:11 -05:00
Dane Everitt
72476c61ec Simplify the event bus system; address pterodactyl/panel#3903
If my debugging is correct, this should address pterodactyl/panel#3903 in its entirety by addressing a few areas where it was possible for a channel to lock up and cause everything to block
2022-02-02 21:03:53 -05:00
Dane Everitt
0f2e9fcc0b Move the sink pool to be a shared tool 2022-02-02 19:16:34 -05:00
Dane Everitt
5c3e2c2c94 Fix failing test 2022-01-31 19:33:32 -05:00
Dane Everitt
7051feee01 Add additional debug points to server start process 2022-01-31 19:30:07 -05:00
Dane Everitt
cd67e5fdb9 Fix logic for context based environment stopping
Uses two contexts: a timed context handles stopping, and the parent context terminates the entire process loop if it gets canceled.
2022-01-31 19:09:08 -05:00
Dane Everitt
84bbefdadc Pass a context through to the start/stop/terminate actions 2022-01-31 18:40:15 -05:00
Dane Everitt
6a4178648f Return context cancelations as a locker locked error 2022-01-31 18:39:41 -05:00
Dane Everitt
1e52ffef64 Fix panic condition when no response is returned 2022-01-31 18:37:02 -05:00
Dane Everitt
0f9f80c181 Improve support for block/mutex contention in pprof 2022-01-30 21:02:18 -05:00
Dane Everitt
4b702052c7 Update CHANGELOG.md 2022-01-30 20:27:26 -05:00
Dane Everitt
7ee6c48fb0 Use a more efficient logging format for containers
JSON has a huge amount of overhead from Docker when we're trying to process large amounts of log data. It makes more sense to just use a better format.
2022-01-30 19:51:23 -05:00
Dane Everitt
2b2b5200eb Rewrite console throttling logic; drop complex timer usage and use a very simple throttle
This also removes server process termination logic when a server is breaching the output limits. It simply continues to efficiently throttle the console output.
2022-01-30 19:31:04 -05:00
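A very simple throttle of the kind described could look like this; the limit and reset period are illustrative, and this is not the Wings implementation:

```go
package throttle

import (
	"sync/atomic"
	"time"
)

// Simple gates console output at Limit lines per period. It never
// terminates the server; excess lines are merely suppressed.
type Simple struct {
	count uint64
	Limit uint64
}

// Allow increments the line counter and reports whether this line is
// still within the current period's budget.
func (s *Simple) Allow() bool {
	return atomic.AddUint64(&s.count, 1) <= s.Limit
}

// Reset zeroes the counter every period until stop is closed.
func (s *Simple) Reset(period time.Duration, stop <-chan struct{}) {
	t := time.NewTicker(period)
	defer t.Stop()
	for {
		select {
		case <-t.C:
			atomic.StoreUint64(&s.count, 0)
		case <-stop:
			return
		}
	}
}
```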
Dane Everitt
fb73d5dbbf Always run pprof when running debug through makefile 2022-01-30 15:11:17 -05:00
Dane Everitt
fd7ec2aaac Generate normal and debug artifacts 2022-01-30 15:06:56 -05:00
Dane Everitt
c3df8d2309 Add support for proper use of pprof 2022-01-30 14:50:37 -05:00
Dane Everitt
1965e68a78 Include debug symbols in non-release binaries 2022-01-30 14:05:55 -05:00
Dane Everitt
3208b8579b Add test coverage 2022-01-30 13:58:36 -05:00
Dane Everitt
c4ee82c4dc Code cleanup, providing better commentary to decisions 2022-01-30 12:56:25 -05:00
Dane Everitt
0ec0fffa4d Handle future scenarios where we forgot to add a listener 2022-01-30 11:58:53 -05:00
Dane Everitt
57daf0889a Cleanup logic for updating stats to avoid calling mutex outside of file 2022-01-30 11:55:59 -05:00
Dane Everitt
d7c7155802 Make the powerlocker logic a little more idiomatic 2022-01-30 11:46:27 -05:00
Dane Everitt
11ae5e69ed Improve performance of console output watcher; work directly with bytes rather than string conversions 2022-01-30 11:28:06 -05:00
Dane Everitt
fab88a380e Use buffered channels and ring-buffer logic when processing console data
This change fixes pterodactyl/panel#3921 by implementing logic to drop the oldest message in a channel and push the newest message onto the channel when the channel buffer is full.

This is distinctly different than the previous implementation which just dropped the newest messages, leading to confusing behavior on the client side when a large amount of data was sent over the connection.

Up to 10ms per channel is allowed for blocking before falling back to the drop logic.
2022-01-30 10:55:45 -05:00
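The drop-oldest behavior with the 10ms allowance can be sketched like so; illustrative only, not the SinkPool code:

```go
package main

import "time"

// push delivers v to ch, waiting up to 10ms for space. If the channel
// is still full it drops the oldest buffered message to make room, so
// the newest data is what survives when a client falls behind.
func push(ch chan []byte, v []byte) {
	select {
	case ch <- v:
		return
	case <-time.After(10 * time.Millisecond):
	}
	select {
	case <-ch: // drop the oldest buffered message
	default:
	}
	select {
	case ch <- v: // best effort; racing writers may have refilled ch
	default:
	}
}
```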
Matthew Penner
68d4fb454f actions(test): fix caching, run tests with race detector 2022-01-24 19:06:14 -07:00
Matthew Penner
136540111d docker: attach to container before starting 2022-01-24 19:01:33 -07:00
73 changed files with 2718 additions and 1467 deletions


@@ -32,17 +32,20 @@ jobs:
go env
printf "\n\nSystem Environment:\n\n"
env
printf "Git Version: $(git version)\n\n"
echo "::set-output name=version_tag::${GITHUB_REF/refs\/tags\//}"
echo "::set-output name=short_sha::$(git rev-parse --short HEAD)"
echo "::set-output name=go_cache::$(go env GOCACHE)"
echo "::set-output name=go_mod_cache::$(go env GOMODCACHE)"
- name: Build Cache
uses: actions/cache@v2
with:
path: ${{ steps.env.outputs.go_cache }}
key: ${{ runner.os }}-${{ matrix.go }}-go-${{ hashFiles('**/go.sum') }}
key: ${{ runner.os }}-go${{ matrix.go }}-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-${{ matrix.go }}-go
${{ runner.os }}-go${{ matrix.go }}-
path: |
${{ steps.env.outputs.go_cache }}
${{ steps.env.outputs.go_mod_cache }}
- name: Get Dependencies
run: |
go get -v -t -d ./...
@@ -53,14 +56,21 @@ jobs:
CGO_ENABLED: 0
SRC_PATH: github.com/pterodactyl/wings
run: |
go build -v -trimpath -ldflags="-s -w -X ${SRC_PATH}/system.Version=dev-${GIT_COMMIT:0:7}" -o build/wings_${{ matrix.goos }}_${{ matrix.goarch }} wings.go
upx build/wings_${{ matrix.goos }}_${{ matrix.goarch }}
chmod +x build/wings_${{ matrix.goos }}_${{ matrix.goarch }}
- name: Test
run: go test ./...
- name: Upload Artifact
go build -v -trimpath -ldflags="-s -w -X ${SRC_PATH}/system.Version=dev-${GIT_COMMIT:0:7}" -o build/wings_${GOOS}_${GOARCH} wings.go
go build -v -trimpath -ldflags="-X ${SRC_PATH}/system.Version=dev-${GIT_COMMIT:0:7}" -o build/wings_${GOOS}_${GOARCH}_debug wings.go
upx build/wings_${GOOS}_${{ matrix.goarch }}
chmod +x build/*
- name: Tests
run: go test -race ./...
- name: Upload Release Artifact
uses: actions/upload-artifact@v2
if: ${{ github.ref == 'refs/heads/develop' || github.event_name == 'pull_request' }}
with:
name: wings_${{ matrix.goos }}_${{ matrix.goarch }}
path: build/wings_${{ matrix.goos }}_${{ matrix.goarch }}
name: wings_linux_${{ matrix.goarch }}
path: build/wings_linux_${{ matrix.goarch }}
- name: Upload Debug Artifact
uses: actions/upload-artifact@v2
if: ${{ github.ref == 'refs/heads/develop' || github.event_name == 'pull_request' }}
with:
name: wings_linux_${{ matrix.goarch }}_debug
path: build/wings_linux_${{ matrix.goarch }}_debug

.gitignore

@@ -49,3 +49,4 @@ debug
.DS_Store
*.pprof
*.pdf
pprof.*


@@ -1,5 +1,55 @@
# Changelog
## v1.7.0
### Fixed
* Fixes multi-platform support for Wings' Docker image.
### Added
* Adds support for tracking of SFTP actions, power actions, server commands, and file uploads by utilizing a local SQLite database and processing events before sending them to the Panel.
* Adds support for configuring the MTU on the `pterodactyl0` network.
## v1.6.4
### Fixed
* Fixes a bug causing CPU limiting to not be properly applied to servers.
* Fixes a bug causing zip archives to decompress without taking into account nested folder structures.
## v1.6.3
### Fixed
* Fixes SFTP authentication failing for administrative users due to a permissions adjustment on the Panel.
## v1.6.2
### Fixed
* Fixes file upload size not being properly enforced.
* Fixes a bug that prevented listing a directory when it contained a named pipe. Also added a check to prevent attempting to read a named pipe directly.
* Fixes a bug with the archiver logic that would include folders that had the same name prefix. (for example, requesting only `map` would also include `map2` and `map3`)
* Requests to the Panel that return a client error (4xx response code) no longer trigger an exponential backoff; they immediately stop the request.
### Changed
* CPU limit fields are only set on the Docker container if they have been specified for the server — otherwise they are left empty.
### Added
* Added the ability to define the location of the temporary folder used by Wings — defaults to `/tmp/pterodactyl`.
* Adds the ability to authenticate for SFTP using public keys (requires `Panel@1.8.0`).
## v1.6.1
### Fixed
* Fixes an error that would sometimes occur when starting a server, causing the temporary power action lock to never be released due to a blocked channel.
* Fixes a bug causing the CPU usage of Wings to get stuck at 100% when a server is deleted while the installation process is running.
### Changed
* Cleans up a lot of the logic for handling events between the server and environment process to make it easier to modify down the road.
* Cleans up the `StopAndWait` logic for stopping a server gracefully before terminating the process if it does not respond.
## v1.6.0
### Fixed
* Internal logic for processing a server start event has been adjusted to attach to the Docker container before attempting to start the container. This should fix issues where a server would get stuck after pulling the container image.
* Fixes a bug in the console output that was dropping console lines when a large number of lines were sent at once.
### Changed
* Removed the console throttle logic that would terminate a server instance that was sending too much data. This logic has been replaced with simpler logic that only throttles the console; it does not try to terminate the server. In addition, this change has reduced the number of go-routines needed by the application and dramatically simplified internal logic.
* Removed the `--profiler` flag and replaced it with `--pprof` which will start an internal server listening on `localhost:6060` allowing you to use Go's standard `pprof` tooling.
* Replaced the `json` log driver for Docker containers with `local` to reduce the amount of overhead when it comes to streaming logs from instances.
## v1.5.6
### Fixed
* Rewrote handler logic for the power actions lock to hopefully address issues people have been having when a server crashes and they're unable to start it again until restarting Wings.


@@ -1,19 +1,18 @@
# Stage 1 (Build)
FROM --platform=$BUILDPLATFORM golang:1.17-alpine AS builder
FROM golang:1.17-alpine AS builder
ARG VERSION
RUN apk add --update --no-cache git make upx
RUN apk add --update --no-cache git make
WORKDIR /app/
COPY go.mod go.sum /app/
RUN go mod download
COPY . /app/
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build \
RUN CGO_ENABLED=0 go build \
-ldflags="-s -w -X github.com/pterodactyl/wings/system.Version=$VERSION" \
-v \
-trimpath \
-o wings \
wings.go
RUN upx wings
RUN echo "ID=\"distroless\"" > /etc/os-release
# Stage 2 (Final)


@@ -5,8 +5,8 @@ build:
GOOS=linux GOARCH=arm64 go build -ldflags="-s -w" -gcflags "all=-trimpath=$(pwd)" -o build/wings_linux_arm64 -v wings.go
debug:
go build -ldflags="-X github.com/pterodactyl/wings/system.Version=$(GIT_HEAD)" -race
sudo ./wings --debug --ignore-certificate-errors --config config.yml
go build -ldflags="-X github.com/pterodactyl/wings/system.Version=$(GIT_HEAD)"
sudo ./wings --debug --ignore-certificate-errors --config config.yml --pprof --pprof-block-rate 1
# Runs a remotely debuggable session for Wings allowing an IDE to connect and target
# different breakpoints.


@@ -19,22 +19,17 @@ I would like to extend my sincere thanks to the following sponsors for helping f
| Company | About |
| ------- | ----- |
| [**WISP**](https://wisp.gg) | Extra features. |
| [**MixmlHosting**](https://mixmlhosting.com) | MixmlHosting provides high quality Virtual Private Servers along with game servers, all at a affordable price. |
| [**BisectHosting**](https://www.bisecthosting.com/) | BisectHosting provides Minecraft, Valheim and other server hosting services with the highest reliability and lightning fast support since 2012. |
| [**Fragnet**](https://fragnet.net) | Providing low latency, high-end game hosting solutions to gamers, game studios and eSports platforms. |
| [**Tempest**](https://tempest.net/) | Tempest Hosting is a subsidiary of Path Network, Inc. offering unmetered DDoS protected 10Gbps dedicated servers, starting at just $80/month. Full anycast, tons of filters. |
| [**Bloom.host**](https://bloom.host) | Bloom.host offers dedicated core VPS and Minecraft hosting with Ryzen 9 processors. With owned-hardware, we offer truly unbeatable prices on high-performance hosting. |
| [**MineStrator**](https://minestrator.com/) | Looking for a French highend hosting company for you minecraft server? More than 14,000 members on our discord, trust us. |
| [**DedicatedMC**](https://dedicatedmc.io/) | DedicatedMC provides Raw Power hosting at affordable pricing, making sure to never compromise on your performance and giving you the best performance money can buy. |
| [**MineStrator**](https://minestrator.com/) | Looking for the most highend French hosting company for your minecraft server? More than 24,000 members on our discord trust us. Give us a try! |
| [**Skynode**](https://www.skynode.pro/) | Skynode provides blazing fast game servers along with a top-notch user experience. Whatever our clients are looking for, we're able to provide it! |
| [**XCORE**](https://xcore-server.de/) | XCORE offers High-End Servers for hosting and gaming since 2012. Fast, excellent and well-known for eSports Gaming. |
| [**RoyaleHosting**](https://royalehosting.net/) | Build your dreams and deploy them with RoyaleHostings reliable servers and network. Easy to use, provisioned in a couple of minutes. |
| [**Spill Hosting**](https://spillhosting.no/) | Spill Hosting is a Norwegian hosting service, which aims for inexpensive services on quality servers. Premium i9-9900K processors will run your game like a dream. |
| [**DeinServerHost**](https://deinserverhost.de/) | DeinServerHost offers Dedicated, vps and Gameservers for many popular Games like Minecraft and Rust in Germany since 2013. |
| [**HostBend**](https://hostbend.com/) | HostBend offers a variety of solutions for developers, students, and others who have a tight budget but don't want to compromise quality and support. |
| [**Capitol Hosting Solutions**](https://chs.gg/) | CHS is *the* budget friendly hosting company for Australian and American gamers, offering a variety of plans from Web Hosting to Game Servers; Custom Solutions too! |
| [**ByteAnia**](https://byteania.com/?utm_source=pterodactyl) | ByteAnia offers the best performing and most affordable **Ryzen 5000 Series hosting** on the market for *unbeatable prices*! |
| [**Aussie Server Hosts**](https://aussieserverhosts.com/) | No frills Australian Owned and operated High Performance Server hosting for some of the most demanding games serving Australia and New Zealand. |
| [**HostEZ**](https://hostez.io) | Providing North America Valheim, Minecraft and other popular games with low latency, high uptime and maximum availability. EZ! |
| [**VibeGAMES**](https://vibegames.net/) | VibeGAMES is a game server provider that specializes in DDOS protection for the games we offer. We have multiple locations in the US, Brazil, France, Germany, Singapore, Australia and South Africa.|
| [**RocketNode**](https://rocketnode.net) | RocketNode is a VPS and Game Server provider that offers the best performing VPS and Game hosting Solutions at affordable prices! |
| [**Gamenodes**](https://gamenodes.nl) | Gamenodes love quality. For Minecraft, Discord Bots and other services, among others. With our own programmers, we provide just that little bit of extra service! |
## Documentation
* [Panel Documentation](https://pterodactyl.io/panel/1.0/getting_started.html)


@@ -5,11 +5,15 @@ import (
"crypto/tls"
"errors"
"fmt"
"github.com/pterodactyl/wings/internal/cron"
"github.com/pterodactyl/wings/internal/database"
log2 "log"
"net/http"
_ "net/http/pprof"
"os"
"path"
"path/filepath"
"runtime"
"strconv"
"strings"
"time"
@@ -20,7 +24,6 @@ import (
"github.com/docker/docker/client"
"github.com/gammazero/workerpool"
"github.com/mitchellh/colorstring"
"github.com/pkg/profile"
"github.com/spf13/cobra"
"golang.org/x/crypto/acme"
"golang.org/x/crypto/acme/autocert"
@@ -75,7 +78,9 @@ func init() {
rootCommand.PersistentFlags().BoolVar(&debug, "debug", false, "pass in order to run wings in debug mode")
// Flags specifically used when running the API.
rootCommand.Flags().String("profiler", "", "the profiler to run for this instance")
rootCommand.Flags().Bool("pprof", false, "if the pprof profiler should be enabled. The profiler will bind to localhost:6060 by default")
rootCommand.Flags().Int("pprof-block-rate", 0, "enables block profile support, may have performance impacts")
rootCommand.Flags().Int("pprof-port", 6060, "If provided with --pprof, the port it will run on")
rootCommand.Flags().Bool("auto-tls", false, "pass in order to have wings generate and manage it's own SSL certificates using Let's Encrypt")
rootCommand.Flags().String("tls-hostname", "", "required with --auto-tls, the FQDN for the generated SSL certificate")
rootCommand.Flags().Bool("ignore-certificate-errors", false, "ignore certificate verification errors when executing API calls")
@@ -86,25 +91,6 @@ func init() {
}
func rootCmdRun(cmd *cobra.Command, _ []string) {
switch cmd.Flag("profiler").Value.String() {
case "cpu":
defer profile.Start(profile.CPUProfile).Stop()
case "mem":
defer profile.Start(profile.MemProfile).Stop()
case "alloc":
defer profile.Start(profile.MemProfile, profile.MemProfileAllocs).Stop()
case "heap":
defer profile.Start(profile.MemProfile, profile.MemProfileHeap).Stop()
case "routines":
defer profile.Start(profile.GoroutineProfile).Stop()
case "mutex":
defer profile.Start(profile.MutexProfile).Stop()
case "threads":
defer profile.Start(profile.ThreadcreationProfile).Stop()
case "block":
defer profile.Start(profile.BlockProfile).Stop()
}
printLogo()
log.Debug("running in debug mode")
log.WithField("config_file", configPath).Info("loading configuration from file")
@@ -146,6 +132,10 @@ func rootCmdRun(cmd *cobra.Command, _ []string) {
}),
)
if err := database.Initialize(); err != nil {
log.WithField("error", err).Fatal("failed to initialize database")
}
manager, err := server.NewManager(cmd.Context(), pclient)
if err != nil {
log.WithField("error", err).Fatal("failed to load server configurations")
@@ -275,6 +265,13 @@ func rootCmdRun(cmd *cobra.Command, _ []string) {
}
}()
if s, err := cron.Scheduler(cmd.Context(), manager); err != nil {
log.WithField("error", err).Fatal("failed to initialize cron system")
} else {
log.WithField("subsystem", "cron").Info("starting cron processes")
s.StartAsync()
}
go func() {
// Run the SFTP server.
if err := sftp.New(manager).Run(); err != nil {
@@ -325,6 +322,20 @@ func rootCmdRun(cmd *cobra.Command, _ []string) {
TLSConfig: config.DefaultTLSConfig,
}
profile, _ := cmd.Flags().GetBool("pprof")
if profile {
if r, _ := cmd.Flags().GetInt("pprof-block-rate"); r > 0 {
runtime.SetBlockProfileRate(r)
}
// Catch at least 1% of mutex contention issues.
runtime.SetMutexProfileFraction(100)
profilePort, _ := cmd.Flags().GetInt("pprof-port")
go func() {
http.ListenAndServe(fmt.Sprintf("localhost:%d", profilePort), nil)
}()
}
// Check if the server should run with TLS but using autocert.
if autotls {
m := autocert.Manager{


@@ -89,8 +89,8 @@ type ApiConfiguration struct {
// servers.
DisableRemoteDownload bool `json:"disable_remote_download" yaml:"disable_remote_download"`
// The maximum size for files uploaded through the Panel in bytes.
UploadLimit int `default:"100" json:"upload_limit" yaml:"upload_limit"`
// The maximum size for files uploaded through the Panel in MB.
UploadLimit int64 `default:"100" json:"upload_limit" yaml:"upload_limit"`
}
// RemoteQueryConfiguration defines the configuration settings for remote requests
@@ -132,6 +132,10 @@ type SystemConfiguration struct {
// Directory where local backups will be stored on the machine.
BackupDirectory string `default:"/var/lib/pterodactyl/backups" yaml:"backup_directory"`
// TmpDirectory specifies where temporary files for Pterodactyl installation processes
// should be created. This supports environments running docker-in-docker.
TmpDirectory string `default:"/tmp/pterodactyl" yaml:"tmp_directory"`
// The user that should own all of the server files, and be used for containers.
Username string `default:"pterodactyl" yaml:"username"`
@@ -159,6 +163,15 @@ type SystemConfiguration struct {
// disk usage is not a concern.
DiskCheckInterval int64 `default:"150" yaml:"disk_check_interval"`
// ActivitySendInterval is the amount of time that should elapse between aggregated server activity
// being sent to the Panel. By default this will send activity collected over the last minute. Keep
// in mind that only a fixed number of activity log entries, defined by ActivitySendCount, will be sent
// in each run.
ActivitySendInterval int `default:"60" yaml:"activity_send_interval"`
// ActivitySendCount is the number of activity events to send per batch.
ActivitySendCount int `default:"100" yaml:"activity_send_count"`
// If set to true, file permissions for a server will be checked when the process is
// booted. This can cause boot delays if the server has a large amount of files. In most
// cases disabling this should not have any major impact unless external processes are
@@ -222,26 +235,14 @@ type ConsoleThrottles struct {
// Whether or not the throttler is enabled for this instance.
Enabled bool `json:"enabled" yaml:"enabled" default:"true"`
// The total number of lines that can be output in a given LineResetInterval period before
// The total number of lines that can be output in a given Period before
// a warning is triggered and counted against the server.
Lines uint64 `json:"lines" yaml:"lines" default:"2000"`
// The total number of throttle activations that can accumulate before a server is considered
// to be breaching and will be stopped. This value is decremented by one every DecayInterval.
MaximumTriggerCount uint64 `json:"maximum_trigger_count" yaml:"maximum_trigger_count" default:"5"`
// The amount of time after which the number of lines processed is reset to 0. This runs in
// a constant loop and is not affected by the current console output volumes. By default, this
// will reset the processed line count back to 0 every 100ms.
LineResetInterval uint64 `json:"line_reset_interval" yaml:"line_reset_interval" default:"100"`
// The amount of time in milliseconds that must pass without an output warning being triggered
// before a throttle activation is decremented.
DecayInterval uint64 `json:"decay_interval" yaml:"decay_interval" default:"10000"`
// The amount of time that a server is allowed to be stopping for before it is terminated
// forcefully if it triggers output throttles.
StopGracePeriod uint `json:"stop_grace_period" yaml:"stop_grace_period" default:"15"`
Period uint64 `json:"line_reset_interval" yaml:"line_reset_interval" default:"100"`
}
type Configuration struct {


@@ -36,6 +36,7 @@ type DockerNetworkConfiguration struct {
Mode string `default:"pterodactyl_nw" yaml:"network_mode"`
IsInternal bool `default:"false" yaml:"is_internal"`
EnableICC bool `default:"true" yaml:"enable_icc"`
NetworkMTU int64 `default:"1500" yaml:"network_mtu"`
Interfaces dockerNetworkInterfaces `yaml:"interfaces"`
}


@@ -92,7 +92,7 @@ func createDockerNetwork(ctx context.Context, cli *client.Client) error {
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "pterodactyl0",
"com.docker.network.driver.mtu": "1500",
"com.docker.network.driver.mtu": strconv.FormatInt(nw.NetworkMTU, 10),
},
})
if err != nil {


@@ -73,6 +73,9 @@ func (e *Environment) ContainerInspect(ctx context.Context) (types.ContainerJSON
res, err := e.client.HTTPClient().Do(req)
if err != nil {
if res == nil {
return st, errdefs.Unknown(err)
}
return st, errdefs.FromStatusCode(err, res.StatusCode)
}


@@ -16,7 +16,7 @@ import (
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/mount"
"github.com/docker/docker/client"
"github.com/docker/docker/daemon/logger/jsonfilelog"
"github.com/docker/docker/daemon/logger/local"
"github.com/pterodactyl/wings/config"
"github.com/pterodactyl/wings/environment"
@@ -38,13 +38,13 @@ func (nw noopWriter) Write(b []byte) (int, error) {
}
// Attach attaches to the docker container itself and ensures that we can pipe
// data in and out of the process stream. This should not be used for reading
// console data as you *will* miss important output at the beginning because of
// the time delay with attaching to the output.
// data in and out of the process stream. This should always be called before
// you have started the container, but after you've ensured it exists.
//
// Calling this function will poll resources for the container in the background
// until the provided context is canceled by the caller. Failure to cancel said
// context will cause background memory leaks as the goroutine will not exit.
// until the container is stopped. The context provided to this function is used
// for the purposes of attaching to the container; a second context is created
// within the function for managing polling.
func (e *Environment) Attach(ctx context.Context) error {
if e.IsAttached() {
return nil
@@ -216,11 +216,12 @@ func (e *Environment) Create() error {
// since we only need it for the last few hundred lines of output and don't care
// about anything else in it.
LogConfig: container.LogConfig{
Type: jsonfilelog.Name,
Type: local.Name,
Config: map[string]string{
"max-size": "5m",
"max-file": "1",
"compress": "false",
"mode": "non-blocking",
},
},
@@ -479,21 +480,3 @@ func (e *Environment) convertMounts() []mount.Mount {
return out
}
func (e *Environment) resources() container.Resources {
l := e.Configuration.Limits()
pids := l.ProcessLimit()
return container.Resources{
Memory: l.BoundedMemoryLimit(),
MemoryReservation: l.MemoryLimit * 1_000_000,
MemorySwap: l.ConvertedSwap(),
CPUQuota: l.ConvertedCpuLimit(),
CPUPeriod: 100_000,
CPUShares: 1024,
BlkioWeight: l.IoWeight,
OomKillDisable: &l.OOMDisabled,
CpusetCpus: l.Threads,
PidsLimit: &pids,
}
}


@@ -27,7 +27,6 @@ var _ environment.ProcessEnvironment = (*Environment)(nil)
type Environment struct {
mu sync.RWMutex
eventMu sync.Once
// The public identifier for this environment. In this case it is the Docker container
// name that will be used for all instances created under it.
@@ -73,6 +72,7 @@ func New(id string, m *Metadata, c *environment.Configuration) (*Environment, er
meta: m,
client: cli,
st: system.NewAtomicString(environment.ProcessOfflineState),
emitter: events.NewBus(),
}
return e, nil
@@ -86,34 +86,33 @@ func (e *Environment) Type() string {
return "docker"
}
// Set if this process is currently attached to the process.
// SetStream sets the current stream value from the Docker client. If a nil
// value is provided we assume that the stream is no longer operational and the
// instance is effectively offline.
func (e *Environment) SetStream(s *types.HijackedResponse) {
e.mu.Lock()
defer e.mu.Unlock()
e.stream = s
e.mu.Unlock()
}
// Determine if the this process is currently attached to the container.
// IsAttached determines if this process is currently attached to the
// container instance by checking if the stream is nil or not.
func (e *Environment) IsAttached() bool {
e.mu.RLock()
defer e.mu.RUnlock()
return e.stream != nil
}
// Events returns an event bus for the environment.
func (e *Environment) Events() *events.Bus {
e.eventMu.Do(func() {
e.emitter = events.NewBus()
})
return e.emitter
}
// Determines if the container exists in this environment. The ID passed through should be the
// server UUID since containers are created utilizing the server UUID as the name and docker
// will work fine when using the container name as the lookup parameter in addition to the longer
// ID auto-assigned when the container is created.
// Exists determines if the container exists in this environment. The ID passed
// through should be the server UUID since containers are created utilizing the
// server UUID as the name and docker will work fine when using the container
// name as the lookup parameter in addition to the longer ID auto-assigned when
// the container is created.
func (e *Environment) Exists() (bool, error) {
_, err := e.ContainerInspect(context.Background())
if err != nil {
@@ -122,10 +121,8 @@ func (e *Environment) Exists() (bool, error) {
if client.IsErrNotFound(err) {
return false, nil
}
return false, err
}
return true, nil
}
@@ -146,7 +143,7 @@ func (e *Environment) IsRunning(ctx context.Context) (bool, error) {
return c.State.Running, nil
}
// Determine the container exit state and return the exit code and whether or not
// ExitState returns the container exit state, the exit code and whether or not
// the container was killed by the OOM killer.
func (e *Environment) ExitState() (uint32, bool, error) {
c, err := e.ContainerInspect(context.Background())
@@ -163,15 +160,13 @@ func (e *Environment) ExitState() (uint32, bool, error) {
if client.IsErrNotFound(err) {
return 1, false, nil
}
return 0, false, err
}
return uint32(c.State.ExitCode), c.State.OOMKilled, nil
}
// Returns the environment configuration allowing a process to make modifications of the
// environment on the fly.
// Config returns the environment configuration allowing a process to make
// modifications of the environment on the fly.
func (e *Environment) Config() *environment.Configuration {
e.mu.RLock()
defer e.mu.RUnlock()
@@ -179,12 +174,11 @@ func (e *Environment) Config() *environment.Configuration {
return e.Configuration
}
// Sets the stop configuration for the environment.
// SetStopConfiguration sets the stop configuration for the environment.
func (e *Environment) SetStopConfiguration(c remote.ProcessStopConfiguration) {
e.mu.Lock()
defer e.mu.Unlock()
e.meta.Stop = c
e.mu.Unlock()
}
func (e *Environment) SetImage(i string) {


@@ -111,14 +111,24 @@ func (e *Environment) Start(ctx context.Context) error {
actx, cancel := context.WithTimeout(ctx, time.Second*30)
defer cancel()
// You must attach to the instance _before_ you start the container. If you do this
// in the opposite order you'll enter a deadlock condition where we're attached to
// the instance successfully, but the container has already stopped and you'll get
// the entire program into a very confusing state.
//
// By explicitly attaching to the instance before we start it, we can immediately
// react to errors/output stopping/etc. when starting.
if err := e.Attach(actx); err != nil {
return err
}
if err := e.client.ContainerStart(actx, e.Id, types.ContainerStartOptions{}); err != nil {
return errors.WrapIf(err, "environment/docker: failed to start container")
}
// No errors, good to continue through.
sawError = false
return e.Attach(actx)
return nil
}
// Stop stops the container that the server is running in. This will allow up to
@@ -128,9 +138,7 @@ func (e *Environment) Start(ctx context.Context) error {
// You most likely want to be using WaitForStop() rather than this function,
// since this will return as soon as the command is sent, rather than waiting
// for the process to be completed stopped.
//
// TODO: pass context through from the server instance.
func (e *Environment) Stop() error {
func (e *Environment) Stop(ctx context.Context) error {
e.mu.RLock()
s := e.meta.Stop
e.mu.RUnlock()
@@ -154,7 +162,7 @@ func (e *Environment) Stop() error {
case "SIGTERM":
signal = syscall.SIGTERM
}
return e.Terminate(signal)
return e.Terminate(ctx, signal)
}
// If the process is already offline don't switch it back to stopping. Just leave it how
@@ -169,8 +177,10 @@ func (e *Environment) Stop() error {
return e.SendCommand(s.Value)
}
t := time.Second * 30
if err := e.client.ContainerStop(context.Background(), e.Id, &t); err != nil {
// Allow the stop action to run for however long it takes, similar to executing a command
// and using a different logic pathway to wait for the container to stop successfully.
t := time.Duration(-1)
if err := e.client.ContainerStop(ctx, e.Id, &t); err != nil {
// If the container does not exist just mark the process as stopped and return without
// an error.
if client.IsErrNotFound(err) {
@@ -188,45 +198,66 @@ func (e *Environment) Stop() error {
// command. If the server does not stop after seconds have passed, an error will
// be returned, or the instance will be terminated forcefully depending on the
// value of the second argument.
func (e *Environment) WaitForStop(seconds uint, terminate bool) error {
if err := e.Stop(); err != nil {
return err
//
// Calls to Environment.Terminate() in this function use the context passed
// through since we don't want to prevent termination of the server instance
// just because the context.WithTimeout() has expired.
func (e *Environment) WaitForStop(ctx context.Context, duration time.Duration, terminate bool) error {
tctx, cancel := context.WithTimeout(context.Background(), duration)
defer cancel()
// If the parent context is canceled, abort the timed context for termination.
go func() {
select {
case <-ctx.Done():
cancel()
case <-tctx.Done():
// When the timed context is canceled, terminate this routine since we no longer
// need to worry about the parent routine being canceled.
break
}
}()
doTermination := func(s string) error {
e.log().WithField("step", s).WithField("duration", duration).Warn("container stop did not complete in time, terminating process...")
return e.Terminate(ctx, os.Kill)
}
ctx, cancel := context.WithTimeout(context.Background(), time.Duration(seconds)*time.Second)
defer cancel()
// We pass through the timed context for this stop action so that if one of the
// internal docker calls fails to ever finish before we've exhausted the time limit
// the resources get cleaned up, and the execution is stopped.
if err := e.Stop(tctx); err != nil {
if terminate && errors.Is(err, context.DeadlineExceeded) {
return doTermination("stop")
}
return err
}
// Block the return of this function until the container has been marked as no
// longer running. If this wait does not end by the time seconds have passed,
// attempt to terminate the container, or return an error.
ok, errChan := e.client.ContainerWait(ctx, e.Id, container.WaitConditionNotRunning)
ok, errChan := e.client.ContainerWait(tctx, e.Id, container.WaitConditionNotRunning)
select {
case <-ctx.Done():
if ctxErr := ctx.Err(); ctxErr != nil {
if err := ctx.Err(); err != nil {
if terminate {
log.WithField("container_id", e.Id).Info("server did not stop in time, executing process termination")
return e.Terminate(os.Kill)
return doTermination("parent-context")
}
return ctxErr
return err
}
case err := <-errChan:
// If the error stems from the container not existing there is no point in wasting
// CPU time to then try and terminate it.
if err != nil && !client.IsErrNotFound(err) {
if terminate {
l := log.WithField("container_id", e.Id)
if errors.Is(err, context.DeadlineExceeded) {
l.Warn("deadline exceeded for container stop; terminating process")
} else {
l.WithField("error", err).Warn("error while waiting for container stop; terminating process")
if err == nil || client.IsErrNotFound(err) {
return nil
}
return e.Terminate(os.Kill)
if terminate {
if !errors.Is(err, context.DeadlineExceeded) {
e.log().WithField("error", err).Warn("error while waiting for container stop; terminating process")
}
return doTermination("wait")
}
return errors.WrapIf(err, "environment/docker: error waiting on container to enter \"not-running\" state")
}
case <-ok:
}
@@ -234,8 +265,8 @@ func (e *Environment) WaitForStop(seconds uint, terminate bool) error {
}
// Terminate forcefully terminates the container using the signal provided.
func (e *Environment) Terminate(signal os.Signal) error {
c, err := e.ContainerInspect(context.Background())
func (e *Environment) Terminate(ctx context.Context, signal os.Signal) error {
c, err := e.ContainerInspect(ctx)
if err != nil {
// Treat missing containers as an okay error state, means it is obviously
// already terminated at this point.
@@ -260,7 +291,7 @@ func (e *Environment) Terminate(signal os.Signal) error {
// We set it to stopping than offline to prevent crash detection from being triggered.
e.SetState(environment.ProcessStoppingState)
sig := strings.TrimSuffix(strings.TrimPrefix(signal.String(), "signal "), "ed")
if err := e.client.ContainerKill(context.Background(), e.Id, sig); err != nil && !client.IsErrNotFound(err) {
if err := e.client.ContainerKill(ctx, e.Id, sig); err != nil && !client.IsErrNotFound(err) {
return errors.WithStack(err)
}
e.SetState(environment.ProcessOfflineState)


@@ -3,6 +3,7 @@ package environment
import (
"context"
"os"
"time"
"github.com/pterodactyl/wings/events"
)
@@ -58,18 +59,20 @@ type ProcessEnvironment interface {
// can be started an error should be returned.
Start(ctx context.Context) error
// Stops a server instance. If the server is already stopped an error should
// not be returned.
Stop() error
// Stop stops a server instance. If the server is already stopped an error will
// not be returned; this function will act as a no-op.
Stop(ctx context.Context) error
// Waits for a server instance to stop gracefully. If the server is still detected
// as running after seconds, an error will be returned, or the server will be terminated
// depending on the value of the second argument.
WaitForStop(seconds uint, terminate bool) error
// WaitForStop waits for a server instance to stop gracefully. If the server is
// still detected as running after "duration", an error will be returned, or the server
// will be terminated depending on the value of the second argument. If the context
// provided is canceled the underlying wait conditions will be stopped and the
// entire loop will be ended (potentially without stopping or terminating).
WaitForStop(ctx context.Context, duration time.Duration, terminate bool) error
// Terminates a running server instance using the provided signal. If the server
// is not running no error should be returned.
Terminate(signal os.Signal) error
// Terminate stops a running server instance using the provided signal. This function
// is a no-op if the server is already stopped.
Terminate(ctx context.Context, signal os.Signal) error
// Destroys the environment removing any containers that were created (in Docker
// environments at least).


@@ -99,21 +99,36 @@ func (l Limits) ProcessLimit() int64 {
return config.Get().Docker.ContainerPidLimit
}
// AsContainerResources returns the available resources for a container in a format
// that Docker understands.
func (l Limits) AsContainerResources() container.Resources {
pids := l.ProcessLimit()
return container.Resources{
resources := container.Resources{
Memory: l.BoundedMemoryLimit(),
MemoryReservation: l.MemoryLimit * 1_000_000,
MemorySwap: l.ConvertedSwap(),
CPUQuota: l.ConvertedCpuLimit(),
CPUPeriod: 100_000,
CPUShares: 1024,
BlkioWeight: l.IoWeight,
OomKillDisable: &l.OOMDisabled,
CpusetCpus: l.Threads,
PidsLimit: &pids,
}
// If the CPU Limit is not set, don't send any of these fields through. Providing
// them seems to break some Java services that try to read the available processors.
//
// @see https://github.com/pterodactyl/panel/issues/3988
if l.CpuLimit > 0 {
resources.CPUQuota = l.CpuLimit * 1_000
resources.CPUPeriod = 100_000
resources.CPUShares = 1024
}
// Similar to above, don't set the specific assigned CPUs if we didn't actually limit
// the server to any of them.
if l.Threads != "" {
resources.CpusetCpus = l.Threads
}
return resources
}
type Variables map[string]interface{}


@@ -2,10 +2,11 @@ package events
import (
"strings"
"sync"
)
type Listener chan Event
"emperror.dev/errors"
"github.com/goccy/go-json"
"github.com/pterodactyl/wings/system"
)
// Event represents an Event sent over a Bus.
type Event struct {
@@ -15,137 +16,55 @@ type Event struct {
// Bus represents an Event Bus.
type Bus struct {
listenersMx sync.Mutex
listeners map[string][]Listener
*system.SinkPool
}
// NewBus returns a new empty Event Bus.
// NewBus returns a new empty Bus. This is simply a nicer wrapper around the
// system.SinkPool implementation that allows for more simplistic usage within
// the codebase.
//
// All of the events emitted out of this bus are byte slices that can be decoded
// back into an events.Event interface.
func NewBus() *Bus {
return &Bus{
listeners: make(map[string][]Listener),
}
}
// Off unregisters a listener from the specified topics on the Bus.
func (b *Bus) Off(listener Listener, topics ...string) {
b.listenersMx.Lock()
defer b.listenersMx.Unlock()
var closed bool
for _, topic := range topics {
ok := b.off(topic, listener)
if !closed && ok {
close(listener)
closed = true
}
}
}
func (b *Bus) off(topic string, listener Listener) bool {
listeners, ok := b.listeners[topic]
if !ok {
return false
}
for i, l := range listeners {
if l != listener {
continue
}
listeners = append(listeners[:i], listeners[i+1:]...)
b.listeners[topic] = listeners
return true
}
return false
}
// On registers a listener to the specified topics on the Bus.
func (b *Bus) On(listener Listener, topics ...string) {
b.listenersMx.Lock()
defer b.listenersMx.Unlock()
for _, topic := range topics {
b.on(topic, listener)
}
}
func (b *Bus) on(topic string, listener Listener) {
listeners, ok := b.listeners[topic]
if !ok {
b.listeners[topic] = []Listener{listener}
} else {
b.listeners[topic] = append(listeners, listener)
system.NewSinkPool(),
}
}
// Publish publishes a message to the Bus.
func (b *Bus) Publish(topic string, data interface{}) {
// Some of our topics for the socket support passing a more specific namespace,
// Some of our actions for the socket support passing a more specific namespace,
// such as "backup completed:1234" to indicate which specific backup was completed.
//
// In these cases, we still need to send the event using the standard listener
// name of "backup completed".
if strings.Contains(topic, ":") {
parts := strings.SplitN(topic, ":", 2)
if len(parts) == 2 {
topic = parts[0]
}
}
b.listenersMx.Lock()
defer b.listenersMx.Unlock()
enc, err := json.Marshal(Event{Topic: topic, Data: data})
if err != nil {
panic(errors.WithStack(err))
}
b.Push(enc)
}
listeners, ok := b.listeners[topic]
if !ok {
// MustDecode decodes the event byte slice back into an events.Event struct or
// panics if an error is encountered during this process.
func MustDecode(data []byte) (e Event) {
if err := DecodeTo(data, &e); err != nil {
panic(err)
}
return
}
if len(listeners) < 1 {
return
}
var wg sync.WaitGroup
event := Event{Topic: topic, Data: data}
for _, listener := range listeners {
l := listener
wg.Add(1)
go func(l Listener, event Event) {
defer wg.Done()
l <- event
}(l, event)
}
wg.Wait()
}
// Destroy destroys the Event Bus by unregistering and closing all listeners.
func (b *Bus) Destroy() {
b.listenersMx.Lock()
defer b.listenersMx.Unlock()
// Track what listeners have already been closed. Because the same listener
// can be listening on multiple topics, we need a way to essentially
// "de-duplicate" all the listeners across all the topics.
var closed []Listener
for _, listeners := range b.listeners {
for _, listener := range listeners {
if contains(closed, listener) {
continue
// DecodeTo decodes a byte slice of event data into the given interface.
func DecodeTo(data []byte, v interface{}) error {
if err := json.Unmarshal(data, &v); err != nil {
return errors.Wrap(err, "events: failed to decode byte slice")
}
close(listener)
closed = append(closed, listener)
}
}
b.listeners = make(map[string][]Listener)
}
func contains(closed []Listener, listener Listener) bool {
for _, c := range closed {
if c == listener {
return true
}
}
return false
return nil
}
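Based on the new implementation above and the test diff below, listeners now receive raw byte slices and decode them back into an Event. A minimal usage sketch:

```go
package main

import (
	"fmt"

	"github.com/pterodactyl/wings/events"
)

func main() {
	// Listeners are plain byte-slice channels handled by the SinkPool.
	listener := make(chan []byte, 8)
	bus := events.NewBus()
	bus.On(listener)

	// Namespaced topics such as "backup completed:1234" are collapsed
	// to their base name before the event is pushed to listeners.
	bus.Publish("backup completed:1234", "done")

	e := events.MustDecode(<-listener)
	fmt.Println(e.Topic, e.Data) // "backup completed" "done"

	bus.Off(listener)
}
```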


@@ -9,107 +9,34 @@ import (
func TestNewBus(t *testing.T) {
g := Goblin(t)
bus := NewBus()
g.Describe("Events", func() {
var bus *Bus
g.BeforeEach(func() {
bus = NewBus()
})
g.Describe("NewBus", func() {
g.It("is not nil", func() {
g.Assert(bus).IsNotNil("Bus expected to not be nil")
g.Assert(bus.listeners).IsNotNil("Bus#listeners expected to not be nil")
})
})
}
func TestBus_Off(t *testing.T) {
g := Goblin(t)
const topic = "test"
g.Describe("Off", func() {
g.It("unregisters listener", func() {
bus := NewBus()
g.Assert(bus.listeners[topic]).IsNotNil()
g.Assert(len(bus.listeners[topic])).IsZero()
listener := make(chan Event)
bus.On(listener, topic)
g.Assert(len(bus.listeners[topic])).Equal(1, "Listener was not registered")
bus.Off(listener, topic)
g.Assert(len(bus.listeners[topic])).Equal(0, "Topic still has one or more listeners")
})
g.It("unregisters correct listener", func() {
bus := NewBus()
listener := make(chan Event)
listener2 := make(chan Event)
listener3 := make(chan Event)
bus.On(listener, topic)
bus.On(listener2, topic)
bus.On(listener3, topic)
g.Assert(len(bus.listeners[topic])).Equal(3, "Listeners were not registered")
bus.Off(listener, topic)
bus.Off(listener3, topic)
g.Assert(len(bus.listeners[topic])).Equal(1, "Expected 1 listener to remain")
if bus.listeners[topic][0] != listener2 {
// A normal Assert does not properly compare channels.
g.Fail("wrong listener unregistered")
}
// Cleanup
bus.Off(listener2, topic)
})
})
}
func TestBus_On(t *testing.T) {
g := Goblin(t)
const topic = "test"
g.Describe("On", func() {
g.It("registers listener", func() {
bus := NewBus()
g.Assert(bus.listeners[topic]).IsNotNil()
g.Assert(len(bus.listeners[topic])).IsZero()
listener := make(chan Event)
bus.On(listener, topic)
g.Assert(len(bus.listeners[topic])).Equal(1, "Listener was not registered")
if bus.listeners[topic][0] != listener {
// A normal Assert does not properly compare channels.
g.Fail("wrong listener registered")
}
// Cleanup
bus.Off(listener, topic)
})
})
}
func TestBus_Publish(t *testing.T) {
g := Goblin(t)
g.Describe("Publish", func() {
const topic = "test"
const message = "this is a test message!"
g.Describe("Publish", func() {
g.It("publishes message", func() {
bus := NewBus()
g.Assert(bus.listeners[topic]).IsNotNil()
g.Assert(len(bus.listeners[topic])).IsZero()
listener := make(chan Event)
bus.On(listener, topic)
g.Assert(len(bus.listeners[topic])).Equal(1, "Listener was not registered")
listener := make(chan []byte)
bus.On(listener)
done := make(chan struct{}, 1)
go func() {
select {
case m := <-listener:
case v := <-listener:
m := MustDecode(v)
g.Assert(m.Topic).Equal(topic)
g.Assert(m.Data).Equal(message)
case <-time.After(1 * time.Second):
@@ -121,33 +48,33 @@ func TestBus_Publish(t *testing.T) {
<-done
// Cleanup
bus.Off(listener, topic)
bus.Off(listener)
})
g.It("publishes message to all listeners", func() {
bus := NewBus()
g.Assert(bus.listeners[topic]).IsNotNil()
g.Assert(len(bus.listeners[topic])).IsZero()
listener := make(chan Event)
listener2 := make(chan Event)
listener3 := make(chan Event)
bus.On(listener, topic)
bus.On(listener2, topic)
bus.On(listener3, topic)
g.Assert(len(bus.listeners[topic])).Equal(3, "Listener was not registered")
listener := make(chan []byte)
listener2 := make(chan []byte)
listener3 := make(chan []byte)
bus.On(listener)
bus.On(listener2)
bus.On(listener3)
done := make(chan struct{}, 1)
go func() {
for i := 0; i < 3; i++ {
select {
case m := <-listener:
case v := <-listener:
m := MustDecode(v)
g.Assert(m.Topic).Equal(topic)
g.Assert(m.Data).Equal(message)
case m := <-listener2:
case v := <-listener2:
m := MustDecode(v)
g.Assert(m.Topic).Equal(topic)
g.Assert(m.Data).Equal(message)
case m := <-listener3:
case v := <-listener3:
m := MustDecode(v)
g.Assert(m.Topic).Equal(topic)
g.Assert(m.Data).Equal(message)
case <-time.After(1 * time.Second):
@@ -162,9 +89,10 @@ func TestBus_Publish(t *testing.T) {
<-done
// Cleanup
bus.Off(listener, topic)
bus.Off(listener2, topic)
bus.Off(listener3, topic)
bus.Off(listener)
bus.Off(listener2)
bus.Off(listener3)
})
})
})
}

go.mod

@@ -3,116 +3,127 @@ module github.com/pterodactyl/wings
go 1.17
require (
emperror.dev/errors v0.8.0
github.com/AlecAivazis/survey/v2 v2.2.15
emperror.dev/errors v0.8.1
github.com/AlecAivazis/survey/v2 v2.3.4
github.com/Jeffail/gabs/v2 v2.6.1
github.com/NYTimes/logrotate v1.0.0
github.com/apex/log v1.9.0
github.com/asaskevich/govalidator v0.0.0-20210307081110-f21760c49a8d
github.com/beevik/etree v1.1.0
github.com/buger/jsonparser v1.1.1
github.com/cenkalti/backoff/v4 v4.1.1
github.com/cenkalti/backoff/v4 v4.1.2
github.com/cobaugh/osrelease v0.0.0-20181218015638-a93a0a55a249
github.com/creasty/defaults v1.5.1
github.com/docker/docker v20.10.7+incompatible
github.com/creasty/defaults v1.5.2
github.com/docker/docker v20.10.14+incompatible
github.com/docker/go-connections v0.4.0
github.com/fatih/color v1.12.0
github.com/fatih/color v1.13.0
github.com/franela/goblin v0.0.0-20200825194134-80c0062ed6cd
github.com/gabriel-vasile/mimetype v1.3.1
github.com/gabriel-vasile/mimetype v1.4.0
github.com/gammazero/workerpool v1.1.2
github.com/gbrlsnchs/jwt/v3 v3.0.1
github.com/gin-gonic/gin v1.7.2
github.com/gin-gonic/gin v1.7.7
github.com/google/uuid v1.3.0
github.com/gorilla/websocket v1.4.2
github.com/gorilla/websocket v1.5.0
github.com/iancoleman/strcase v0.2.0
github.com/icza/dyno v0.0.0-20210726202311-f1bafe5d9996
github.com/juju/ratelimit v1.0.1
github.com/karrick/godirwalk v1.16.1
github.com/klauspost/pgzip v1.2.5
github.com/magiconair/properties v1.8.5
github.com/mattn/go-colorable v0.1.8
github.com/mholt/archiver/v3 v3.5.0
github.com/magiconair/properties v1.8.6
github.com/mattn/go-colorable v0.1.12
github.com/mholt/archiver/v3 v3.5.1
github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db
github.com/patrickmn/go-cache v2.1.0+incompatible
github.com/pkg/profile v1.6.0
github.com/pkg/sftp v1.13.2
github.com/sabhiram/go-gitignore v0.0.0-20201211210132-54b8a0bf510f
github.com/spf13/cobra v1.2.1
github.com/stretchr/testify v1.7.0
golang.org/x/crypto v0.0.0-20210711020723-a769d52b0f97
github.com/pkg/sftp v1.13.4
github.com/sabhiram/go-gitignore v0.0.0-20210923224102-525f6e181f06
github.com/spf13/cobra v1.4.0
github.com/stretchr/testify v1.7.5
golang.org/x/crypto v0.0.0-20220321153916-2c7772ba3064
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c
gopkg.in/ini.v1 v1.62.0
gopkg.in/ini.v1 v1.66.4
gopkg.in/yaml.v2 v2.4.0
)
require github.com/goccy/go-json v0.9.4
require (
github.com/glebarez/sqlite v1.4.6
github.com/go-co-op/gocron v1.15.0
github.com/goccy/go-json v0.9.6
github.com/klauspost/compress v1.15.1
gorm.io/gorm v1.23.8
)
require golang.org/x/sys v0.0.0-20211110154304-99a53858aa08 // indirect
require golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f // indirect
require (
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 // indirect
github.com/Microsoft/go-winio v0.5.0 // indirect
github.com/Microsoft/hcsshim v0.8.20 // indirect
github.com/andybalholm/brotli v1.0.3 // indirect
github.com/Microsoft/go-winio v0.5.2 // indirect
github.com/Microsoft/hcsshim v0.9.2 // indirect
github.com/andybalholm/brotli v1.0.4 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/cespare/xxhash/v2 v2.1.1 // indirect
github.com/containerd/containerd v1.5.5 // indirect
github.com/cespare/xxhash/v2 v2.1.2 // indirect
github.com/containerd/containerd v1.6.2 // indirect
github.com/containerd/fifo v1.0.0 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/docker/distribution v2.7.1+incompatible // indirect
github.com/docker/distribution v2.8.1+incompatible // indirect
github.com/docker/go-metrics v0.0.1 // indirect
github.com/docker/go-units v0.4.0 // indirect
github.com/dsnet/compress v0.0.1 // indirect
github.com/fsnotify/fsnotify v1.4.9 // indirect
github.com/gammazero/deque v0.1.0 // indirect
github.com/dsnet/compress v0.0.2-0.20210315054119-f66993602bf5 // indirect
github.com/fsnotify/fsnotify v1.5.1 // indirect
github.com/gammazero/deque v0.1.1 // indirect
github.com/gin-contrib/sse v0.1.0 // indirect
github.com/go-playground/locales v0.13.0 // indirect
github.com/go-playground/universal-translator v0.17.0 // indirect
github.com/go-playground/validator/v10 v10.8.0 // indirect
github.com/glebarez/go-sqlite v1.17.3 // indirect
github.com/go-playground/locales v0.14.0 // indirect
github.com/go-playground/universal-translator v0.18.0 // indirect
github.com/go-playground/validator/v10 v10.10.1 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/golang/snappy v0.0.4 // indirect
github.com/gorilla/mux v1.7.4 // indirect
github.com/inconshreveable/mousetrap v1.0.0 // indirect
github.com/json-iterator/go v1.1.11 // indirect
github.com/jinzhu/inflection v1.0.0 // indirect
github.com/jinzhu/now v1.1.5 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 // indirect
github.com/klauspost/compress v1.13.2 // indirect
github.com/kr/fs v0.1.0 // indirect
github.com/leodido/go-urn v1.2.1 // indirect
github.com/magefile/mage v1.11.0 // indirect
github.com/mattn/go-isatty v0.0.13 // indirect
github.com/magefile/mage v1.13.0 // indirect
github.com/mattn/go-isatty v0.0.14 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 // indirect
github.com/mgutz/ansi v0.0.0-20200706080929-d51e80ef957d // indirect
github.com/moby/term v0.0.0-20210619224110-3f7ff695adc6 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.1 // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/morikuni/aec v1.0.0 // indirect
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e // indirect
github.com/nwaples/rardecode v1.1.1 // indirect
github.com/nwaples/rardecode v1.1.3 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.0.1 // indirect
github.com/pierrec/lz4/v4 v4.1.8 // indirect
github.com/opencontainers/image-spec v1.0.2 // indirect
github.com/pierrec/lz4/v4 v4.1.14 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/prometheus/client_golang v1.11.0 // indirect
github.com/prometheus/client_golang v1.12.1 // indirect
github.com/prometheus/client_model v0.2.0 // indirect
github.com/prometheus/common v0.30.0 // indirect
github.com/prometheus/procfs v0.7.1 // indirect
github.com/prometheus/common v0.32.1 // indirect
github.com/prometheus/procfs v0.7.3 // indirect
github.com/remyoudompheng/bigfft v0.0.0-20200410134404-eec4a21b6bb0 // indirect
github.com/robfig/cron/v3 v3.0.1 // indirect
github.com/sirupsen/logrus v1.8.1 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/ugorji/go/codec v1.1.7 // indirect
github.com/ugorji/go/codec v1.2.7 // indirect
github.com/ulikunitz/xz v0.5.10 // indirect
github.com/xi2/xz v0.0.0-20171230120015-48954b6210f8 // indirect
go.uber.org/atomic v1.9.0 // indirect
go.uber.org/multierr v1.7.0 // indirect
golang.org/x/net v0.0.0-20210726213435-c6fcb2dbf985 // indirect
golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b // indirect
golang.org/x/text v0.3.6 // indirect
golang.org/x/time v0.0.0-20210723032227-1f47c861a9ac // indirect
go.uber.org/multierr v1.8.0 // indirect
golang.org/x/net v0.0.0-20220225172249-27dd8689420f // indirect
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 // indirect
golang.org/x/text v0.3.7 // indirect
golang.org/x/time v0.0.0-20220224211638-0e9765cccd65 // indirect
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 // indirect
google.golang.org/genproto v0.0.0-20210729151513-df9385d47c1b // indirect
google.golang.org/grpc v1.39.0 // indirect
google.golang.org/protobuf v1.27.1 // indirect
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f // indirect
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
google.golang.org/genproto v0.0.0-20220324131243-acbaeb5b85eb // indirect
google.golang.org/grpc v1.45.0 // indirect
google.golang.org/protobuf v1.28.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
modernc.org/libc v1.16.17 // indirect
modernc.org/mathutil v1.4.1 // indirect
modernc.org/memory v1.1.1 // indirect
modernc.org/sqlite v1.17.3 // indirect
)

go.sum

File diff suppressed because it is too large

@@ -0,0 +1,57 @@
package cron
import (
"context"
"emperror.dev/errors"
"github.com/pterodactyl/wings/internal/database"
"github.com/pterodactyl/wings/internal/models"
"github.com/pterodactyl/wings/server"
"github.com/pterodactyl/wings/system"
)
type activityCron struct {
mu *system.AtomicBool
manager *server.Manager
max int
}
// Run executes the cronjob and ensures we fetch and send all of the stored activity to the
// Panel instance. Once activity is sent it is deleted from the local database instance. Any
// SFTP-specific events are not handled in this cron; they're handled separately to account
// for de-duplication and event merging.
func (ac *activityCron) Run(ctx context.Context) error {
// Don't execute this cron if there is currently one running. Once this task is completed
// go ahead and mark it as no longer running.
if !ac.mu.SwapIf(true) {
return errors.WithStack(ErrCronRunning)
}
defer ac.mu.Store(false)
var activity []models.Activity
tx := database.Instance().WithContext(ctx).
Where("event NOT LIKE ?", "server:sftp.%").
Limit(ac.max).
Find(&activity)
if tx.Error != nil {
return errors.WithStack(tx.Error)
}
if len(activity) == 0 {
return nil
}
if err := ac.manager.Client().SendActivityLogs(ctx, activity); err != nil {
return errors.WrapIf(err, "cron: failed to send activity events to Panel")
}
var ids []int
for _, v := range activity {
ids = append(ids, v.ID)
}
tx = database.Instance().WithContext(ctx).Where("id IN ?", ids).Delete(&models.Activity{})
if tx.Error != nil {
return errors.WithStack(tx.Error)
}
return nil
}
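
The SwapIf guard above is what keeps overlapping cron fires from double-sending the same rows. A minimal standalone sketch of the same pattern, assuming only sync/atomic in place of Wings' own system.AtomicBool helper:

package main

import (
	"errors"
	"fmt"
	"sync/atomic"
)

var ErrCronRunning = errors.New("cron: job already running")

// running stands in for system.AtomicBool; CompareAndSwapInt32 from 0 to 1
// mirrors what SwapIf(true) does in the cron above.
var running int32

func run() error {
	if !atomic.CompareAndSwapInt32(&running, 0, 1) {
		return ErrCronRunning
	}
	// Mark the job as no longer running once it completes.
	defer atomic.StoreInt32(&running, 0)
	fmt.Println("flushing stored activity to the Panel...")
	return nil
}

func main() {
	fmt.Println(run()) // <nil>
}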

internal/cron/cron.go Normal file

@@ -0,0 +1,71 @@
package cron
import (
"context"
"emperror.dev/errors"
log2 "github.com/apex/log"
"github.com/go-co-op/gocron"
"github.com/pterodactyl/wings/config"
"github.com/pterodactyl/wings/server"
"github.com/pterodactyl/wings/system"
"time"
)
const ErrCronRunning = errors.Sentinel("cron: job already running")
var o system.AtomicBool
// Scheduler configures the internal cronjob system for Wings and returns the scheduler
// instance to the caller. This should only be called once per application lifecycle; additional
// calls will result in an error being returned.
func Scheduler(ctx context.Context, m *server.Manager) (*gocron.Scheduler, error) {
if !o.SwapIf(true) {
return nil, errors.New("cron: cannot call scheduler more than once in application lifecycle")
}
l, err := time.LoadLocation(config.Get().System.Timezone)
if err != nil {
return nil, errors.Wrap(err, "cron: failed to parse configured system timezone")
}
activity := activityCron{
mu: system.NewAtomicBool(false),
manager: m,
max: config.Get().System.ActivitySendCount,
}
sftp := sftpCron{
mu: system.NewAtomicBool(false),
manager: m,
max: config.Get().System.ActivitySendCount,
}
s := gocron.NewScheduler(l)
log := log2.WithField("subsystem", "cron")
interval := time.Duration(config.Get().System.ActivitySendInterval) * time.Second
log.WithField("interval", interval).Info("configuring system crons")
_, _ = s.Tag("activity").Every(interval).Do(func() {
log.WithField("cron", "activity").Debug("sending internal activity events to Panel")
if err := activity.Run(ctx); err != nil {
if errors.Is(err, ErrCronRunning) {
log.WithField("cron", "activity").Warn("activity process is already running, skipping...")
} else {
log.WithField("cron", "activity").WithField("error", err).Error("activity process failed to execute")
}
}
})
_, _ = s.Tag("sftp").Every(interval).Do(func() {
log.WithField("cron", "sftp").Debug("sending sftp events to Panel")
if err := sftp.Run(ctx); err != nil {
if errors.Is(err, ErrCronRunning) {
log.WithField("cron", "sftp").Warn("sftp events process already running, skipping...")
} else {
log.WithField("cron", "sftp").WithField("error", err).Error("sftp events process failed to execute")
}
}
})
return s, nil
}
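
For reference, a self-contained sketch of the gocron wiring used above; the two-second interval and the job body are illustrative placeholders:

package main

import (
	"fmt"
	"time"

	"github.com/go-co-op/gocron"
)

func main() {
	s := gocron.NewScheduler(time.UTC)
	// One tagged job on a fixed interval, mirroring the activity cron above.
	_, _ = s.Tag("activity").Every(2 * time.Second).Do(func() {
		fmt.Println("tick: send internal activity events to the Panel")
	})
	// StartAsync runs the scheduler on a background goroutine.
	s.StartAsync()
	time.Sleep(5 * time.Second)
	s.Stop()
}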

internal/cron/sftp_cron.go Normal file

@@ -0,0 +1,175 @@
package cron
import (
"context"
"emperror.dev/errors"
"github.com/pterodactyl/wings/internal/database"
"github.com/pterodactyl/wings/internal/models"
"github.com/pterodactyl/wings/server"
"github.com/pterodactyl/wings/system"
"reflect"
)
type sftpCron struct {
mu *system.AtomicBool
manager *server.Manager
max int
}
type mapKey struct {
User string
Server string
IP string
Event models.Event
Timestamp string
}
type eventMap struct {
max int
ids []int
m map[mapKey]*models.Activity
}
// Run executes the SFTP reconciliation cron. This job will pull all of the SFTP specific events
// and merge them together across user, server, ip, and event. This allows an SFTP event that deletes
// tens or hundreds of files to be tracked as a single "deletion" event so long as they all occur
// within the same one minute period of time (starting at the first timestamp for the group). Without
// this we'd end up flooding the Panel event log with excessive data that is of no use to end users.
func (sc *sftpCron) Run(ctx context.Context) error {
if !sc.mu.SwapIf(true) {
return errors.WithStack(ErrCronRunning)
}
defer sc.mu.Store(false)
var o int
activity, err := sc.fetchRecords(ctx, o)
if err != nil {
return err
}
o += len(activity)
events := &eventMap{
m: map[mapKey]*models.Activity{},
ids: []int{},
max: sc.max,
}
for {
if len(activity) == 0 {
break
}
slen := len(events.ids)
for _, a := range activity {
events.Push(a)
}
if len(events.ids) > slen {
// Execute the query again, we found some events so we want to continue
// with this. Start at the next offset.
activity, err = sc.fetchRecords(ctx, o)
if err != nil {
return errors.WithStack(err)
}
o += len(activity)
} else {
break
}
}
if len(events.m) == 0 {
return nil
}
if err := sc.manager.Client().SendActivityLogs(ctx, events.Elements()); err != nil {
return errors.Wrap(err, "failed to send sftp activity logs to Panel")
}
if tx := database.Instance().Where("id IN ?", events.ids).Delete(&models.Activity{}); tx.Error != nil {
return errors.WithStack(tx.Error)
}
return nil
}
// fetchRecords returns a group of activity events starting at the given offset. Multiple
// database queries may be needed to select enough events to properly fill up our request
// to the given maximum, because this cron merges any activity that lines up across user,
// server, ip, and event into a single activity record when sending the data to the Panel.
func (sc *sftpCron) fetchRecords(ctx context.Context, offset int) (activity []models.Activity, err error) {
tx := database.Instance().WithContext(ctx).
Where("event LIKE ?", "server:sftp.%").
Order("event DESC").
Offset(offset).
Limit(sc.max).
Find(&activity)
if tx.Error != nil {
err = errors.WithStack(tx.Error)
}
return
}
// Push adds an activity to the event mapping, or de-duplicates it and merges the files metadata
// into the existing entry.
func (em *eventMap) Push(a models.Activity) {
m := em.forActivity(a)
// If no activity entity is returned we've hit the cap for the number of events to
// send along to the Panel. Just skip over this record and we'll account for it in
// the next iteration.
if m == nil {
return
}
em.ids = append(em.ids, a.ID)
// Always reduce this to the first timestamp that was recorded for the set
// of events, and not the most recent one.
if a.Timestamp.Before(m.Timestamp) {
m.Timestamp = a.Timestamp
}
list := m.Metadata["files"].([]interface{})
if s, ok := a.Metadata["files"]; ok {
v := reflect.ValueOf(s)
if v.Kind() != reflect.Slice || v.IsNil() {
return
}
for i := 0; i < v.Len(); i++ {
list = append(list, v.Index(i).Interface())
}
// You must set it again at the end of the process, otherwise you've only updated the file
// slice in this one loop since it isn't passed by reference. This is just shorter than having
// to explicitly keep casting it to the slice.
m.Metadata["files"] = list
}
}
// Elements returns the finalized activity models.
func (em *eventMap) Elements() (out []models.Activity) {
for _, v := range em.m {
out = append(out, *v)
}
return
}
// forActivity returns an event entity from our map which allows existing matches to be
// updated with additional files.
func (em *eventMap) forActivity(a models.Activity) *models.Activity {
key := mapKey{
User: a.User.String,
Server: a.Server,
IP: a.IP,
Event: a.Event,
// We group by the minute, don't care about the seconds for this logic.
Timestamp: a.Timestamp.Format("2006-01-02_15:04"),
}
if v, ok := em.m[key]; ok {
return v
}
// Cap the size of the events map at the defined maximum events to send to the Panel. Just
// return nil and let the caller handle it.
if len(em.m) >= em.max {
return nil
}
// Doesn't exist in our map yet, create a copy of the activity passed into this
// function and then assign it into the map with an empty metadata value.
v := a
v.Metadata = models.ActivityMeta{
"files": make([]interface{}, 0),
}
em.m[key] = &v
return &v
}
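
The minute-resolution grouping used by forActivity is easy to see in isolation; a sketch using only the standard library:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Two SFTP deletes twenty seconds apart share a bucket; a third,
	// ninety seconds later, starts a new bucket and thus a new Panel event.
	a := time.Date(2022, 7, 24, 15, 4, 5, 0, time.UTC)
	for _, ts := range []time.Time{a, a.Add(20 * time.Second), a.Add(90 * time.Second)} {
		fmt.Println(ts.Format("2006-01-02_15:04"))
	}
	// Output:
	// 2022-07-24_15:04
	// 2022-07-24_15:04
	// 2022-07-24_15:05
}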


@@ -0,0 +1,57 @@
package database
import (
"emperror.dev/errors"
"github.com/glebarez/sqlite"
"github.com/pterodactyl/wings/config"
"github.com/pterodactyl/wings/internal/models"
"github.com/pterodactyl/wings/system"
"gorm.io/gorm"
"gorm.io/gorm/logger"
"path/filepath"
"time"
)
var o system.AtomicBool
var db *gorm.DB
// Initialize configures the local SQLite database for Wings and ensures that the models have
// been fully migrated.
func Initialize() error {
if !o.SwapIf(true) {
panic("database: attempt to initialize more than once during application lifecycle")
}
p := filepath.Join(config.Get().System.RootDirectory, "wings.db")
instance, err := gorm.Open(sqlite.Open(p), &gorm.Config{
Logger: logger.Default.LogMode(logger.Silent),
})
if err != nil {
return errors.Wrap(err, "database: could not open database file")
}
db = instance
if sql, err := db.DB(); err != nil {
return errors.WithStack(err)
} else {
sql.SetMaxOpenConns(1)
sql.SetConnMaxLifetime(time.Hour)
}
if tx := db.Exec("PRAGMA synchronous = OFF"); tx.Error != nil {
return errors.WithStack(tx.Error)
}
if tx := db.Exec("PRAGMA journal_mode = MEMORY"); tx.Error != nil {
return errors.WithStack(tx.Error)
}
if err := db.AutoMigrate(&models.Activity{}); err != nil {
return errors.WithStack(err)
}
return nil
}
// Instance returns the gorm database instance that was configured when the application was
// booted.
func Instance() *gorm.DB {
if db == nil {
panic("database: attempt to access instance before initialized")
}
return db
}
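
A self-contained sketch of the same gorm plus SQLite setup; the model and file name here are simplified stand-ins for the real Activity model and wings.db:

package main

import (
	"fmt"

	"github.com/glebarez/sqlite"
	"gorm.io/gorm"
	"gorm.io/gorm/logger"
)

type Activity struct {
	ID    int    `gorm:"primaryKey;not null"`
	Event string `gorm:"index;not null"`
}

func main() {
	db, err := gorm.Open(sqlite.Open("example.db"), &gorm.Config{
		Logger: logger.Default.LogMode(logger.Silent),
	})
	if err != nil {
		panic(err)
	}
	// A single connection sidesteps SQLite's concurrent-writer lock errors.
	if sql, err := db.DB(); err == nil {
		sql.SetMaxOpenConns(1)
	}
	if err := db.AutoMigrate(&Activity{}); err != nil {
		panic(err)
	}
	db.Create(&Activity{Event: "server:console.command"})
	var count int64
	db.Model(&Activity{}).Count(&count)
	fmt.Println("rows:", count) // rows: 1
}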


@@ -0,0 +1,67 @@
package models
import (
"github.com/pterodactyl/wings/system"
"gorm.io/gorm"
"time"
)
type Event string
type ActivityMeta map[string]interface{}
// Activity defines an activity log event for a server entity performed by a user. This is
// used for tracking commands, power actions, and SFTP events so that they can be reconciled
// and sent back to the Panel instance to be displayed to the user.
type Activity struct {
ID int `gorm:"primaryKey;not null" json:"-"`
// User is the UUID of the user that triggered this event, or an empty string if the event
// cannot be tied to a specific user, in which case we will assume it was the system
// user.
User JsonNullString `gorm:"type:uuid" json:"user"`
// Server is the UUID of the server this event is associated with.
Server string `gorm:"type:uuid;not null" json:"server"`
// Event is a string that describes what occurred, and is used by the Panel instance to
// properly associate this event in the activity logs.
Event Event `gorm:"index;not null" json:"event"`
// Metadata is either a null value, string, or a JSON blob with additional event specific
// metadata that can be provided.
Metadata ActivityMeta `gorm:"serializer:json" json:"metadata"`
// IP is the IP address that triggered this event, or an empty string if it cannot be
// determined properly. This should be the connecting user's IP address, and not the
// internal system IP.
IP string `gorm:"not null" json:"ip"`
Timestamp time.Time `gorm:"not null" json:"timestamp"`
}
// SetUser sets the current user that performed the action. If an empty string is provided
// it is cast into a null value when stored.
func (a Activity) SetUser(u string) *Activity {
var ns JsonNullString
if u == "" {
if err := ns.Scan(nil); err != nil {
panic(err)
}
} else {
if err := ns.Scan(u); err != nil {
panic(err)
}
}
a.User = ns
return &a
}
// BeforeCreate executes before we create any activity entry to ensure the IP address
// is trimmed down to remove any extraneous data, and the timestamp is set to the current
// system time and then stored as UTC.
func (a *Activity) BeforeCreate(_ *gorm.DB) error {
a.IP = system.TrimIPSuffix(a.IP)
if a.Timestamp.IsZero() {
a.Timestamp = time.Now()
}
a.Timestamp = a.Timestamp.UTC()
if a.Metadata == nil {
a.Metadata = ActivityMeta{}
}
return nil
}

internal/models/models.go Normal file

@@ -0,0 +1,31 @@
package models
import (
"database/sql"
"emperror.dev/errors"
"github.com/goccy/go-json"
)
type JsonNullString struct {
sql.NullString
}
func (v JsonNullString) MarshalJSON() ([]byte, error) {
if v.Valid {
return json.Marshal(v.String)
} else {
return json.Marshal(nil)
}
}
func (v *JsonNullString) UnmarshalJSON(data []byte) error {
var s *string
if err := json.Unmarshal(data, &s); err != nil {
return errors.WithStack(err)
}
if s != nil {
v.String = *s
}
v.Valid = s != nil
return nil
}
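
The wrapper exists because a plain sql.NullString marshals as an object with String and Valid fields rather than as a bare value. A standalone sketch, using encoding/json instead of goccy/go-json to keep it dependency-free:

package main

import (
	"database/sql"
	"encoding/json"
	"fmt"
)

type JsonNullString struct {
	sql.NullString
}

func (v JsonNullString) MarshalJSON() ([]byte, error) {
	if v.Valid {
		return json.Marshal(v.String)
	}
	return json.Marshal(nil)
}

func main() {
	var system JsonNullString // zero value: Valid == false, the "system" user
	user := JsonNullString{sql.NullString{String: "abc-123", Valid: true}}
	a, _ := json.Marshal(system)
	b, _ := json.Marshal(user)
	fmt.Println(string(a), string(b)) // null "abc-123"
}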


@@ -11,9 +11,9 @@ import (
"github.com/apex/log"
"github.com/beevik/etree"
"github.com/buger/jsonparser"
"github.com/goccy/go-json"
"github.com/icza/dyno"
"github.com/magiconair/properties"
"github.com/goccy/go-json"
"gopkg.in/ini.v1"
"gopkg.in/yaml.v2"


@@ -4,6 +4,7 @@ import (
"bytes"
"context"
"fmt"
"github.com/pterodactyl/wings/internal/models"
"io"
"net/http"
"strconv"
@@ -30,6 +31,7 @@ type Client interface {
SetInstallationStatus(ctx context.Context, uuid string, successful bool) error
SetTransferStatus(ctx context.Context, uuid string, successful bool) error
ValidateSftpCredentials(ctx context.Context, request SftpAuthRequest) (SftpAuthResponse, error)
SendActivityLogs(ctx context.Context, activity []models.Activity) error
}
type client struct {
@@ -128,10 +130,19 @@ func (c *client) requestOnce(ctx context.Context, method, path string, body io.R
// and adds the required authentication headers to the request that is being
// created. Errors returned will be of the RequestError type if there was some
// type of response from the API that can be parsed.
func (c *client) request(ctx context.Context, method, path string, body io.Reader, opts ...func(r *http.Request)) (*Response, error) {
func (c *client) request(ctx context.Context, method, path string, body *bytes.Buffer, opts ...func(r *http.Request)) (*Response, error) {
var res *Response
err := backoff.Retry(func() error {
r, err := c.requestOnce(ctx, method, path, body, opts...)
var b bytes.Buffer
if body != nil {
// We have to create a copy of the body, otherwise attempting this request again will
// send no data if there was initially a body since the "requestOnce" method will read
// the whole buffer, thus leaving it empty at the end.
if _, err := b.Write(body.Bytes()); err != nil {
return backoff.Permanent(errors.Wrap(err, "http: failed to copy body buffer"))
}
}
r, err := c.requestOnce(ctx, method, path, &b, opts...)
if err != nil {
if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
return backoff.Permanent(err)
@@ -142,12 +153,10 @@ func (c *client) request(ctx context.Context, method, path string, body io.Reade
if r.HasError() {
// Close the request body after returning the error to free up resources.
defer r.Body.Close()
// Don't keep spamming the endpoint if we've already made too many requests or
// if we're not even authenticated correctly. Retrying generally won't fix either
// of these issues.
if r.StatusCode == http.StatusForbidden ||
r.StatusCode == http.StatusTooManyRequests ||
r.StatusCode == http.StatusUnauthorized {
// Don't keep attempting to access this endpoint if the response is a 4XX
// level error which indicates a client mistake. Only retry when the error
// is due to a server issue (5XX error).
if r.StatusCode >= 400 && r.StatusCode < 500 {
return backoff.Permanent(r.Error())
}
return r.Error()
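
The buffer copy above matters because a bytes.Buffer is consumed by its first reader; without the copy, every retry after the first would send an empty body. A minimal demonstration:

package main

import (
	"bytes"
	"fmt"
	"io"
)

func main() {
	body := bytes.NewBufferString(`{"data":[]}`)

	// Attempt 1 reads the buffer directly and drains it.
	_, _ = io.ReadAll(body)
	fmt.Println("left for a retry:", body.Len()) // prints 0; the retry would send nothing

	// The fix: keep the original bytes and hand each attempt a fresh copy.
	original := []byte(`{"data":[]}`)
	for attempt := 1; attempt <= 2; attempt++ {
		b := bytes.NewBuffer(original)
		sent, _ := io.ReadAll(b)
		fmt.Printf("attempt %d sent %d bytes\n", attempt, len(sent))
	}
}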


@@ -3,6 +3,7 @@ package remote
import (
"context"
"fmt"
"github.com/pterodactyl/wings/internal/models"
"strconv"
"sync"
@@ -178,6 +179,16 @@ func (c *client) SendRestorationStatus(ctx context.Context, backup string, succe
return nil
}
// SendActivityLogs sends activity logs back to the Panel for processing.
func (c *client) SendActivityLogs(ctx context.Context, activity []models.Activity) error {
resp, err := c.Post(ctx, "/activity", d{"data": activity})
if err != nil {
return errors.WithStackIf(err)
}
_ = resp.Body.Close()
return nil
}
// getServersPaged returns a subset of servers from the Panel API using the
// pagination query parameters.
func (c *client) getServersPaged(ctx context.Context, page, limit int) ([]RawServerData, Pagination, error) {


@@ -1,15 +1,20 @@
package remote
import (
"bytes"
"github.com/apex/log"
"github.com/goccy/go-json"
"regexp"
"strings"
"github.com/apex/log"
"github.com/goccy/go-json"
"github.com/pterodactyl/wings/parser"
)
const (
SftpAuthPassword = SftpAuthRequestType("password")
SftpAuthPublicKey = SftpAuthRequestType("public_key")
)
// A generic type allowing for easy binding use when making requests to API
// endpoints that only expect a singular argument or something that would not
// benefit from being a typed struct.
@@ -62,9 +67,12 @@ type RawServerData struct {
ProcessConfiguration json.RawMessage `json:"process_configuration"`
}
type SftpAuthRequestType string
// SftpAuthRequest defines the request details that are passed along to the Panel
// when determining if the credentials provided to Wings are valid.
type SftpAuthRequest struct {
Type SftpAuthRequestType `json:"type"`
User string `json:"username"`
Pass string `json:"password"`
IP string `json:"ip"`
@@ -78,44 +86,45 @@ type SftpAuthRequest struct {
// user for the SFTP subsystem.
type SftpAuthResponse struct {
Server string `json:"server"`
Token string `json:"token"`
User string `json:"user"`
Permissions []string `json:"permissions"`
}
type OutputLineMatcher struct {
// The raw string to match against. This may or may not be prefixed with
// regex: which indicates we want to match against the regex expression.
raw string
raw []byte
reg *regexp.Regexp
}
// Matches determines if a given string "s" matches the given line.
func (olm *OutputLineMatcher) Matches(s string) bool {
// Matches determines if the provided byte string matches the given regex or
// raw string provided to the matcher.
func (olm *OutputLineMatcher) Matches(s []byte) bool {
if olm.reg == nil {
return strings.Contains(s, olm.raw)
return bytes.Contains(s, olm.raw)
}
return olm.reg.MatchString(s)
return olm.reg.Match(s)
}
// String returns the matcher's raw comparison string.
func (olm *OutputLineMatcher) String() string {
return olm.raw
return string(olm.raw)
}
// UnmarshalJSON unmarshals the startup lines into individual structs for easier
// matching abilities.
func (olm *OutputLineMatcher) UnmarshalJSON(data []byte) error {
if err := json.Unmarshal(data, &olm.raw); err != nil {
var r string
if err := json.Unmarshal(data, &r); err != nil {
return err
}
if strings.HasPrefix(olm.raw, "regex:") && len(olm.raw) > 6 {
r, err := regexp.Compile(strings.TrimPrefix(olm.raw, "regex:"))
olm.raw = []byte(r)
if bytes.HasPrefix(olm.raw, []byte("regex:")) && len(olm.raw) > 6 {
r, err := regexp.Compile(strings.TrimPrefix(string(olm.raw), "regex:"))
if err != nil {
log.WithField("error", err).WithField("raw", olm.raw).Warn("failed to compile output line marked as being regex")
log.WithField("error", err).WithField("raw", string(olm.raw)).Warn("failed to compile output line marked as being regex")
}
olm.reg = r
}


@@ -4,6 +4,7 @@ import (
"context"
"fmt"
"io"
"mime"
"net"
"net/http"
"net/url"
@@ -13,8 +14,8 @@ import (
"time"
"emperror.dev/errors"
"github.com/google/uuid"
"github.com/goccy/go-json"
"github.com/google/uuid"
"github.com/pterodactyl/wings/server"
)
@@ -77,10 +78,13 @@ func (c *Counter) Write(p []byte) (int, error) {
type DownloadRequest struct {
Directory string
URL *url.URL
FileName string
UseHeader bool
}
type Download struct {
Identifier string
path string
mu sync.RWMutex
req DownloadRequest
server *server.Server
@@ -172,8 +176,28 @@ func (dl *Download) Execute() error {
}
}
fnameparts := strings.Split(dl.req.URL.Path, "/")
p := filepath.Join(dl.req.Directory, fnameparts[len(fnameparts)-1])
if dl.req.UseHeader {
if contentDisposition := res.Header.Get("Content-Disposition"); contentDisposition != "" {
_, params, err := mime.ParseMediaType(contentDisposition)
if err != nil {
return errors.WrapIf(err, "downloader: invalid \"Content-Disposition\" header")
}
if v, ok := params["filename"]; ok {
dl.path = v
}
}
}
if dl.path == "" {
if dl.req.FileName != "" {
dl.path = dl.req.FileName
} else {
parts := strings.Split(dl.req.URL.Path, "/")
dl.path = parts[len(parts)-1]
}
}
p := dl.Path()
dl.server.Log().WithField("path", p).Debug("writing remote file to disk")
r := io.TeeReader(res.Body, dl.counter(res.ContentLength))
@@ -205,6 +229,10 @@ func (dl *Download) Progress() float64 {
return dl.progress
}
func (dl *Download) Path() string {
return filepath.Join(dl.req.Directory, dl.path)
}
// Handles a write event by updating the progress completed percentage and firing off
// events to the server websocket as needed.
func (dl *Download) counter(contentLength int64) *Counter {
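
The Content-Disposition handling above leans on mime.ParseMediaType from the standard library; a self-contained example with a hypothetical header value:

package main

import (
	"fmt"
	"mime"
)

func main() {
	header := `attachment; filename="world-backup.tar.gz"`
	// ParseMediaType splits the media type from its parameters.
	mediaType, params, err := mime.ParseMediaType(header)
	if err != nil {
		panic(err)
	}
	fmt.Println(mediaType)          // attachment
	fmt.Println(params["filename"]) // world-backup.tar.gz
}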


@@ -3,6 +3,7 @@ package router
import (
"bufio"
"context"
"github.com/pterodactyl/wings/internal/models"
"io"
"mime/multipart"
"net/http"
@@ -13,6 +14,8 @@ import (
"strconv"
"strings"
"github.com/pterodactyl/wings/config"
"emperror.dev/errors"
"github.com/apex/log"
"github.com/gin-gonic/gin"
@@ -35,6 +38,15 @@ func getServerFileContents(c *gin.Context) {
return
}
defer f.Close()
// Don't allow a named pipe to be opened.
//
// @see https://github.com/pterodactyl/panel/issues/4059
if st.Mode()&os.ModeNamedPipe != 0 {
c.AbortWithStatusJSON(http.StatusBadRequest, gin.H{
"error": "Cannot open files of this type.",
})
return
}
c.Header("X-Mime-Type", st.Mimetype)
c.Header("Content-Length", strconv.Itoa(int(st.Size())))
@@ -120,6 +132,10 @@ func putServerRenameFiles(c *gin.Context) {
// Return nil if the error is a "does not exist" error.
// NOTE: os.IsNotExist() does not work if the error is wrapped.
if errors.Is(err, os.ErrNotExist) {
s.Log().WithField("error", err).
WithField("from_path", pf).
WithField("to_path", pt).
Warn("failed to rename: source or target does not exist")
return nil
}
return err
@@ -258,6 +274,9 @@ func postServerPullRemoteFile(c *gin.Context) {
Directory string `binding:"required_without=RootPath,omitempty" json:"directory"`
RootPath string `binding:"required_without=Directory,omitempty" json:"root"`
URL string `binding:"required" json:"url"`
FileName string `json:"file_name"`
UseHeader bool `json:"use_header"`
Foreground bool `json:"foreground"`
}
if err := c.BindJSON(&data); err != nil {
return
@@ -295,21 +314,41 @@ func postServerPullRemoteFile(c *gin.Context) {
dl := downloader.New(s, downloader.DownloadRequest{
Directory: data.RootPath,
URL: u,
FileName: data.FileName,
UseHeader: data.UseHeader,
})
// Execute this pull in a separate thread since it may take a long time to complete.
go func() {
download := func() error {
s.Log().WithField("download_id", dl.Identifier).WithField("url", u.String()).Info("starting pull of remote file to disk")
if err := dl.Execute(); err != nil {
s.Log().WithField("download_id", dl.Identifier).WithField("error", err).Error("failed to pull remote file")
return err
} else {
s.Log().WithField("download_id", dl.Identifier).Info("completed pull of remote file")
}
return nil
}
if !data.Foreground {
go func() {
_ = download()
}()
c.JSON(http.StatusAccepted, gin.H{
"identifier": dl.Identifier,
})
return
}
if err := download(); err != nil {
NewServerError(err, s).Abort(c)
return
}
st, err := s.Filesystem().Stat(dl.Path())
if err != nil {
NewServerError(err, s).AbortFilesystemError(c)
return
}
c.JSON(http.StatusOK, &st)
}
// Stops a remote file download if it exists and belongs to this server.
@@ -537,8 +576,16 @@ func postServerUploadFiles(c *gin.Context) {
directory := c.Query("directory")
maxFileSize := config.Get().Api.UploadLimit
maxFileSizeBytes := maxFileSize * 1024 * 1024
var totalSize int64
for _, header := range headers {
if header.Size > maxFileSizeBytes {
c.AbortWithStatusJSON(http.StatusBadRequest, gin.H{
"error": "File " + header.Filename + " is larger than the maximum file upload size of " + strconv.FormatInt(maxFileSize, 10) + " MB.",
})
return
}
totalSize += header.Size
}
@@ -554,6 +601,11 @@ func postServerUploadFiles(c *gin.Context) {
if err := handleFileUpload(p, s, header); err != nil {
NewServerError(err, s).Abort(c)
return
} else {
s.SaveActivity(s.NewRequestActivity(token.UserUuid, c.Request.RemoteAddr), server.ActivityFileUploaded, models.ActivityMeta{
"file": header.Filename,
"directory": filepath.Clean(directory),
})
}
}
}
@@ -571,6 +623,5 @@ func handleFileUpload(p string, s *server.Server, header *multipart.FileHeader)
if err := s.Filesystem().Writefile(p, file); err != nil {
return err
}
return nil
}
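
The named-pipe guard earlier in this file is a plain mode-bit check; a standalone sketch, with a hypothetical FIFO path:

package main

import (
	"fmt"
	"os"
)

func main() {
	st, err := os.Stat("/tmp/example-fifo") // hypothetical named pipe
	if err != nil {
		fmt.Println(err)
		return
	}
	// Reading a FIFO with no writer blocks forever, so refuse it up front.
	if st.Mode()&os.ModeNamedPipe != 0 {
		fmt.Println("cannot open files of this type")
		return
	}
	fmt.Println("regular file, safe to stream")
}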


@@ -5,8 +5,8 @@ import (
"time"
"github.com/gin-gonic/gin"
ws "github.com/gorilla/websocket"
"github.com/goccy/go-json"
ws "github.com/gorilla/websocket"
"github.com/pterodactyl/wings/router/middleware"
"github.com/pterodactyl/wings/router/websocket"


@@ -178,7 +178,7 @@ func postServerArchive(c *gin.Context) {
// Ensure the server is offline. Sometimes a "No such container" error gets through
// which means the server is already stopped. We can ignore that.
if err := s.Environment.WaitForStop(60, false); err != nil && !strings.Contains(strings.ToLower(err.Error()), "no such container") {
if err := s.Environment.WaitForStop(s.Context(), time.Minute, false); err != nil && !strings.Contains(strings.ToLower(err.Error()), "no such container") {
sendTransferLog("Failed to stop server, aborting transfer..")
l.WithField("error", err).Error("failed to stop server")
return


@@ -8,6 +8,7 @@ type UploadPayload struct {
jwt.Payload
ServerUuid string `json:"server_uuid"`
UserUuid string `json:"user_uuid"`
UniqueId string `json:"unique_id"`
}


@@ -7,7 +7,6 @@ import (
"github.com/apex/log"
"github.com/gbrlsnchs/jwt/v3"
"github.com/goccy/go-json"
)
// The time at which Wings was booted. No JWTs created before this time are allowed to
@@ -35,13 +34,13 @@ func DenyJTI(jti string) {
denylist.Store(jti, time.Now())
}
// A JWT payload for Websocket connections. This JWT is passed along to the Websocket after
// it has been connected to by sending an "auth" event.
// WebsocketPayload defines the JWT payload for a websocket connection. This JWT is passed along to
// the websocket after it has been connected to by sending an "auth" event.
type WebsocketPayload struct {
jwt.Payload
sync.RWMutex
UserID json.Number `json:"user_id"`
UserUUID string `json:"user_uuid"`
ServerUUID string `json:"server_uuid"`
Permissions []string `json:"permissions"`
}


@@ -7,8 +7,9 @@ import (
"emperror.dev/errors"
"github.com/goccy/go-json"
"github.com/pterodactyl/wings/events"
"github.com/pterodactyl/wings/system"
"github.com/pterodactyl/wings/server"
)
@@ -88,12 +89,13 @@ func (h *Handler) listenForServerEvents(ctx context.Context) error {
ctx, cancel := context.WithCancel(ctx)
defer cancel()
eventChan := make(chan events.Event)
logOutput := make(chan []byte)
installOutput := make(chan []byte)
h.server.Events().On(eventChan, e...)
h.server.Sink(server.LogSink).On(logOutput)
h.server.Sink(server.InstallSink).On(installOutput)
eventChan := make(chan []byte)
logOutput := make(chan []byte, 8)
installOutput := make(chan []byte, 4)
h.server.Events().On(eventChan) // TODO: make a sinky
h.server.Sink(system.LogSink).On(logOutput)
h.server.Sink(system.InstallSink).On(installOutput)
onError := func(evt string, err2 error) {
h.Logger().WithField("event", evt).WithField("error", err2).Error("failed to send event over server websocket")
@@ -110,19 +112,23 @@ func (h *Handler) listenForServerEvents(ctx context.Context) error {
select {
case <-ctx.Done():
break
case e := <-logOutput:
sendErr := h.SendJson(Message{Event: server.ConsoleOutputEvent, Args: []string{string(e)}})
case b := <-logOutput:
sendErr := h.SendJson(Message{Event: server.ConsoleOutputEvent, Args: []string{string(b)}})
if sendErr == nil {
continue
}
onError(server.ConsoleOutputEvent, sendErr)
case e := <-installOutput:
sendErr := h.SendJson(Message{Event: server.InstallOutputEvent, Args: []string{string(e)}})
case b := <-installOutput:
sendErr := h.SendJson(Message{Event: server.InstallOutputEvent, Args: []string{string(b)}})
if sendErr == nil {
continue
}
onError(server.InstallOutputEvent, sendErr)
case e := <-eventChan:
case b := <-eventChan:
var e events.Event
if err := events.DecodeTo(b, &e); err != nil {
continue
}
var sendErr error
message := Message{Event: e.Topic}
if str, ok := e.Data.(string); ok {
@@ -148,9 +154,9 @@ func (h *Handler) listenForServerEvents(ctx context.Context) error {
}
// These functions will automatically close the channel if it hasn't been already.
h.server.Events().Off(eventChan, e...)
h.server.Sink(server.LogSink).Off(logOutput)
h.server.Sink(server.InstallSink).Off(installOutput)
h.server.Events().Off(eventChan)
h.server.Sink(system.LogSink).Off(logOutput)
h.server.Sink(system.InstallSink).Off(installOutput)
// If the internal context is stopped it is either because the parent context
// got canceled or because we ran into an error. If the "err" variable is nil


@@ -3,6 +3,7 @@ package websocket
import (
"context"
"fmt"
"github.com/pterodactyl/wings/internal/models"
"net/http"
"strings"
"sync"
@@ -11,9 +12,10 @@ import (
"emperror.dev/errors"
"github.com/apex/log"
"github.com/gbrlsnchs/jwt/v3"
"github.com/goccy/go-json"
"github.com/google/uuid"
"github.com/gorilla/websocket"
"github.com/goccy/go-json"
"github.com/pterodactyl/wings/system"
"github.com/pterodactyl/wings/config"
"github.com/pterodactyl/wings/environment"
@@ -39,6 +41,7 @@ type Handler struct {
Connection *websocket.Conn `json:"-"`
jwt *tokens.WebsocketPayload
server *server.Server
ra server.RequestActivity
uuid uuid.UUID
}
@@ -108,6 +111,7 @@ func GetHandler(s *server.Server, w http.ResponseWriter, r *http.Request) (*Hand
Connection: conn,
jwt: nil,
server: s,
ra: s.NewRequestActivity("", r.RemoteAddr),
uuid: u,
}, nil
}
@@ -263,6 +267,7 @@ func (h *Handler) GetJwt() *tokens.WebsocketPayload {
// setJwt sets the JWT for the websocket in a race-safe manner.
func (h *Handler) setJwt(token *tokens.WebsocketPayload) {
h.Lock()
h.ra = h.ra.SetUser(token.UserUUID)
h.jwt = token
h.Unlock()
}
@@ -353,7 +358,7 @@ func (h *Handler) HandleInbound(ctx context.Context, m Message) error {
}
err := h.server.HandlePowerAction(action)
if errors.Is(err, context.DeadlineExceeded) {
if errors.Is(err, system.ErrLockerLocked) {
m, _ := h.GetErrorMessage("another power action is currently being processed for this server, please try again later")
_ = h.SendJson(Message{
@@ -364,6 +369,10 @@ func (h *Handler) HandleInbound(ctx context.Context, m Message) error {
return nil
}
if err == nil {
h.server.SaveActivity(h.ra, models.Event(server.ActivityPowerPrefix+action), nil)
}
return err
}
case SendServerLogsEvent:
@@ -420,7 +429,13 @@ func (h *Handler) HandleInbound(ctx context.Context, m Message) error {
}
}
return h.server.Environment.SendCommand(strings.Join(m.Args, ""))
if err := h.server.Environment.SendCommand(strings.Join(m.Args, "")); err != nil {
return err
}
h.server.SaveActivity(h.ra, server.ActivityConsoleCommand, models.ActivityMeta{
"command": strings.Join(m.Args, ""),
})
return nil
}
}

server/activity.go Normal file

@@ -0,0 +1,64 @@
package server
import (
"context"
"emperror.dev/errors"
"github.com/pterodactyl/wings/internal/database"
"github.com/pterodactyl/wings/internal/models"
"time"
)
const ActivityPowerPrefix = "server:power."
const (
ActivityConsoleCommand = models.Event("server:console.command")
ActivitySftpWrite = models.Event("server:sftp.write")
ActivitySftpCreate = models.Event("server:sftp.create")
ActivitySftpCreateDirectory = models.Event("server:sftp.create-directory")
ActivitySftpRename = models.Event("server:sftp.rename")
ActivitySftpDelete = models.Event("server:sftp.delete")
ActivityFileUploaded = models.Event("server:file.uploaded")
)
// RequestActivity is a wrapper around an activity event that tracks additional request
// specific metadata, including the specific user and IP address associated with all subsequent
// events. The internal activity model can be extracted by calling RequestActivity.Event().
type RequestActivity struct {
server string
user string
ip string
}
// Event returns the underlying activity model from the RequestActivity instance and sets the
// specific event and metadata on it.
func (ra RequestActivity) Event(event models.Event, metadata models.ActivityMeta) *models.Activity {
a := models.Activity{Server: ra.server, IP: ra.ip, Event: event, Metadata: metadata}
return a.SetUser(ra.user)
}
// SetUser clones the RequestActivity struct and sets a new user value on the copy
// before returning it.
func (ra RequestActivity) SetUser(u string) RequestActivity {
c := ra
c.user = u
return c
}
func (s *Server) NewRequestActivity(user string, ip string) RequestActivity {
return RequestActivity{server: s.ID(), user: user, ip: ip}
}
// SaveActivity saves an activity entry to the database in a background routine. If an error is
// encountered it is logged but not returned to the caller.
func (s *Server) SaveActivity(a RequestActivity, event models.Event, metadata models.ActivityMeta) {
ctx, cancel := context.WithTimeout(s.Context(), time.Second*3)
go func() {
defer cancel()
if tx := database.Instance().WithContext(ctx).Create(a.Event(event, metadata)); tx.Error != nil {
s.Log().WithField("error", errors.WithStack(tx.Error)).
WithField("event", event).
Error("activity: failed to save event")
}
}()
}
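
The shape of SaveActivity, a fire-and-forget goroutine capped at three seconds, can be sketched in isolation; the sleep below stands in for the database insert:

package main

import (
	"context"
	"fmt"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	done := make(chan struct{})
	go func() {
		defer close(done)
		defer cancel()
		select {
		case <-time.After(50 * time.Millisecond): // stand-in for the INSERT
			fmt.Println("activity saved")
		case <-ctx.Done():
			// A slow or locked database never blocks the request path;
			// the write is simply abandoned and logged.
			fmt.Println("save abandoned:", ctx.Err())
		}
	}()
	<-done
}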


@@ -142,7 +142,7 @@ func (s *Server) RestoreBackup(b backup.BackupInterface, reader io.ReadCloser) (
// instance, otherwise you'll likely hit all types of write errors due to the
// server being suspended.
if s.Environment.State() != environment.ProcessOfflineState {
if err = s.Environment.WaitForStop(120, false); err != nil {
if err = s.Environment.WaitForStop(s.Context(), time.Minute*2, false); err != nil {
if !client.IsErrNotFound(err) {
return errors.WrapIf(err, "server/backup: restore: failed to wait for container stop")
}


@@ -6,12 +6,14 @@ import (
"github.com/gammazero/workerpool"
)
// Parent function that will update all of the defined configuration files for a server
// automatically to ensure that they always use the specified values.
// UpdateConfigurationFiles updates all of the defined configuration files for
// a server automatically to ensure that they always use the specified values.
func (s *Server) UpdateConfigurationFiles() {
pool := workerpool.New(runtime.NumCPU())
s.Log().Debug("acquiring process configuration files...")
files := s.ProcessConfiguration().ConfigurationFiles
s.Log().Debug("acquired process configuration files")
for _, cf := range files {
f := cf
@@ -26,6 +28,8 @@ func (s *Server) UpdateConfigurationFiles() {
if err := f.Parse(p, false); err != nil {
s.Log().WithField("error", err).Error("failed to parse and update server configuration file")
}
s.Log().WithField("path", f.FileName).Debug("finished processing server configuration file")
})
}


@@ -16,6 +16,11 @@ type EggConfiguration struct {
FileDenylist []string `json:"file_denylist"`
}
type ConfigurationMeta struct {
Name string `json:"name"`
Description string `json:"description"`
}
type Configuration struct {
mu sync.RWMutex
@@ -24,6 +29,8 @@ type Configuration struct {
// docker containers as well as in log output.
Uuid string `json:"uuid"`
Meta ConfigurationMeta `json:"meta"`
// Whether or not the server is in a suspended state. Suspended servers cannot
// be started or modified except in certain scenarios by an admin user.
Suspended bool `json:"suspended"`


@@ -1,15 +1,11 @@
package server
import (
"context"
"fmt"
"sync"
"sync/atomic"
"time"
"emperror.dev/errors"
"github.com/mitchellh/colorstring"
"github.com/pterodactyl/wings/config"
"github.com/pterodactyl/wings/system"
)
@@ -18,118 +14,8 @@ import (
// the configuration every time we need to send output along to the websocket for
// a server.
var appName string
var appNameSync sync.Once
var ErrTooMuchConsoleData = errors.New("console is outputting too much data")
type ConsoleThrottler struct {
mu sync.Mutex
config.ConsoleThrottles
// The total number of activations that have occurred thus far.
activations uint64
// The total number of lines that have been sent since the last reset timer period.
count uint64
// Whether or not the console output is being throttled. It is up to the calling code to
// determine what to do if it is.
isThrottled *system.AtomicBool
// The total number of lines processed so far during the given time period.
timerCancel *context.CancelFunc
}
// Resets the state of the throttler.
func (ct *ConsoleThrottler) Reset() {
atomic.StoreUint64(&ct.count, 0)
atomic.StoreUint64(&ct.activations, 0)
ct.isThrottled.Store(false)
}
// Triggers an activation for a server. You can also decrement the number of activations
// by passing a negative number.
func (ct *ConsoleThrottler) markActivation(increment bool) uint64 {
if !increment {
if atomic.LoadUint64(&ct.activations) == 0 {
return 0
}
// Adding ^uint64(0) wraps around and subtracts 1 from the activation count.
return atomic.AddUint64(&ct.activations, ^uint64(0))
}
return atomic.AddUint64(&ct.activations, 1)
}
// Determines if the console is currently being throttled. Calls to this function can be used to
// determine if output should be funneled along to the websocket processes.
func (ct *ConsoleThrottler) Throttled() bool {
return ct.isThrottled.Load()
}
// Starts a timer that runs in a separate thread and will continually decrement the lines processed
// and number of activations, regardless of the current console message volume. All of the timers
// are canceled if the context passed through is canceled.
func (ct *ConsoleThrottler) StartTimer(ctx context.Context) {
system.Every(ctx, time.Duration(int64(ct.LineResetInterval))*time.Millisecond, func(_ time.Time) {
ct.isThrottled.Store(false)
atomic.StoreUint64(&ct.count, 0)
})
system.Every(ctx, time.Duration(int64(ct.DecayInterval))*time.Millisecond, func(_ time.Time) {
ct.markActivation(false)
})
}
// Handles output from a server's console. This code ensures that a server is not outputting
// an excessive amount of data to the console that could indicate a malicious or run-away process
// and lead to performance issues for other users.
//
// This was much more of a problem for the NodeJS version of the daemon which struggled to handle
// large volumes of output. However, this code is much more performant so I generally feel a lot
// better about its abilities.
//
// However, extreme output is still somewhat of a DoS attack vector against this software since we
// are still logging it to the disk temporarily and will want to avoid dumping a huge amount of
// data all at once. These values are all configurable via the wings configuration file; however, the
// defaults have been in the wild for almost two years at the time of this writing, so I feel quite
// confident in them.
//
// This function returns an error if the server should be stopped due to violating throttle constraints
// and a boolean value indicating if a throttle is being violated when it is checked.
func (ct *ConsoleThrottler) Increment(onTrigger func()) error {
if !ct.Enabled {
return nil
}
// Increment the line count and if we have now output more lines than are allowed, trigger a throttle
// activation. Once the throttle is triggered and has passed the kill at value we will trigger a server
// stop automatically.
if atomic.AddUint64(&ct.count, 1) >= ct.Lines && !ct.Throttled() {
ct.isThrottled.Store(true)
if ct.markActivation(true) >= ct.MaximumTriggerCount {
return ErrTooMuchConsoleData
}
onTrigger()
}
return nil
}
// Returns the throttler instance for the server or creates a new one.
func (s *Server) Throttler() *ConsoleThrottler {
s.throttleOnce.Do(func() {
s.throttler = &ConsoleThrottler{
isThrottled: system.NewAtomicBool(false),
ConsoleThrottles: config.Get().Throttles,
}
})
return s.throttler
}
// PublishConsoleOutputFromDaemon sends output to the server console formatted
// to appear correctly as being sent from Wings.
func (s *Server) PublishConsoleOutputFromDaemon(data string) {
@@ -141,3 +27,55 @@ func (s *Server) PublishConsoleOutputFromDaemon(data string) {
colorstring.Color(fmt.Sprintf("[yellow][bold][%s Daemon]:[default] %s", appName, data)),
)
}
// Throttler returns the throttler instance for the server or creates a new one.
func (s *Server) Throttler() *ConsoleThrottle {
s.throttleOnce.Do(func() {
throttles := config.Get().Throttles
period := time.Duration(throttles.Period) * time.Millisecond
s.throttler = newConsoleThrottle(throttles.Lines, period)
s.throttler.strike = func() {
s.PublishConsoleOutputFromDaemon("Server is outputting console data too quickly -- throttling...")
}
})
return s.throttler
}
type ConsoleThrottle struct {
limit *system.Rate
lock *system.Locker
strike func()
}
func newConsoleThrottle(lines uint64, period time.Duration) *ConsoleThrottle {
return &ConsoleThrottle{
limit: system.NewRate(lines, period),
lock: system.NewLocker(),
}
}
// Allow checks if the console is allowed to process more output data, or if too
// much has already been sent over the line. If there is too much output the
// strike callback function is triggered, but only if it has not already been
// triggered at this point in the process.
//
// If output is allowed, the lock on the throttler is released and the next time
// it is triggered the strike function will be re-executed.
func (ct *ConsoleThrottle) Allow() bool {
if !ct.limit.Try() {
if err := ct.lock.Acquire(); err == nil {
if ct.strike != nil {
ct.strike()
}
}
return false
}
ct.lock.Release()
return true
}
// Reset resets the console throttler internal rate limiter and overage counter.
func (ct *ConsoleThrottle) Reset() {
ct.limit.Reset()
}

server/console_test.go Normal file

@@ -0,0 +1,62 @@
package server
import (
"testing"
"time"
"github.com/franela/goblin"
)
func TestName(t *testing.T) {
g := goblin.Goblin(t)
g.Describe("ConsoleThrottler", func() {
g.It("keeps count of the number of overages in a time period", func() {
t := newConsoleThrottle(1, time.Second)
g.Assert(t.Allow()).IsTrue()
g.Assert(t.Allow()).IsFalse()
g.Assert(t.Allow()).IsFalse()
})
g.It("calls strike once per time period", func() {
t := newConsoleThrottle(1, time.Millisecond*20)
var times int
t.strike = func() {
times = times + 1
}
t.Allow()
t.Allow()
t.Allow()
time.Sleep(time.Millisecond * 100)
t.Allow()
t.Reset()
t.Allow()
t.Allow()
t.Allow()
g.Assert(times).Equal(2)
})
g.It("is properly reset", func() {
t := newConsoleThrottle(10, time.Second)
for i := 0; i < 10; i++ {
g.Assert(t.Allow()).IsTrue()
}
g.Assert(t.Allow()).IsFalse()
t.Reset()
g.Assert(t.Allow()).IsTrue()
})
})
}
func BenchmarkConsoleThrottle(b *testing.B) {
t := newConsoleThrottle(10, time.Millisecond*10)
b.ReportAllocs()
for i := 0; i < b.N; i++ {
t.Allow()
}
}


@@ -2,6 +2,7 @@ package server
import (
"github.com/pterodactyl/wings/events"
"github.com/pterodactyl/wings/system"
)
// Defines all of the possible output events for a server.
@@ -20,7 +21,7 @@ const (
TransferStatusEvent = "transfer status"
)
// Returns the server's emitter instance.
// Events returns the server's emitter instance.
func (s *Server) Events() *events.Bus {
s.emitterLock.Lock()
defer s.emitterLock.Unlock()
@@ -31,3 +32,24 @@ func (s *Server) Events() *events.Bus {
return s.emitter
}
// Sink returns the instantiated and named sink for a server. If the sink has
// not been configured yet this function will cause a panic condition.
func (s *Server) Sink(name system.SinkName) *system.SinkPool {
sink, ok := s.sinks[name]
if !ok {
s.Log().Fatalf("attempt to access nil sink: %s", name)
}
return sink
}
// DestroyAllSinks iterates over all of the sinks configured for the server and
// destroys their instances. Note that this will cause a panic if you attempt
// to call Server.Sink() again after. This function is only used when a server
// is being deleted from the system.
func (s *Server) DestroyAllSinks() {
s.Log().Info("destroying all registered sinks for server instance")
for _, sink := range s.sinks {
sink.Destroy()
}
}


@@ -130,7 +130,7 @@ func (a *Archive) withFilesCallback(tw *tar.Writer) func(path string, de *godirw
for _, f := range a.Files {
// If the given path doesn't match, or doesn't share the same prefix, continue
// to the next item in the loop.
if p != f && !strings.HasPrefix(p, f) {
if p != f && !strings.HasPrefix(strings.TrimSuffix(p, "/")+"/", f) {
continue
}


@@ -5,9 +5,12 @@ import (
"archive/zip"
"compress/gzip"
"fmt"
gzip2 "github.com/klauspost/compress/gzip"
zip2 "github.com/klauspost/compress/zip"
"os"
"path"
"path/filepath"
"reflect"
"strings"
"sync/atomic"
"time"
@@ -172,13 +175,26 @@ func ExtractNameFromArchive(f archiver.File) string {
return f.Name()
}
switch s := sys.(type) {
case *zip.FileHeader:
return s.Name
case *zip2.FileHeader:
return s.Name
case *tar.Header:
return s.Name
case *gzip.Header:
return s.Name
case *zip.FileHeader:
case *gzip2.Header:
return s.Name
default:
// At this point we cannot figure out what type of archive this might be so
// just try to find the name field in the struct. If it is found return it.
field := reflect.Indirect(reflect.ValueOf(sys)).FieldByName("Name")
if field.IsValid() {
return field.String()
}
// Fallback to the basename of the file at this point. There is nothing we can really
// do to try and figure out what the underlying directory of the file is supposed to
// be since it didn't implement a name field.
return f.Name()
}
}
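
The reflection fallback only assumes the header struct exposes a Name field; a self-contained sketch with a made-up header type:

package main

import (
	"fmt"
	"reflect"
)

// fakeHeader stands in for whatever header type an archive
// library attaches to its file entries.
type fakeHeader struct{ Name string }

func main() {
	var sys interface{} = &fakeHeader{Name: "world/level.dat"}
	// Dereference the pointer and look for a "Name" field by name.
	field := reflect.Indirect(reflect.ValueOf(sys)).FieldByName("Name")
	if field.IsValid() {
		fmt.Println(field.String()) // world/level.dat
	}
}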


@@ -115,19 +115,6 @@ func (fs *Filesystem) Touch(p string, flag int) (*os.File, error) {
return f, nil
}
// Reads a file on the system and returns it as a byte representation in a file
// reader. This is not the most memory efficient usage since it will be reading the
// entirety of the file into memory.
func (fs *Filesystem) Readfile(p string, w io.Writer) error {
file, _, err := fs.File(p)
if err != nil {
return err
}
defer file.Close()
_, err = bufio.NewReader(file).WriteTo(w)
return err
}
// Writefile writes a file to the system. If the file does not already exist one
// will be created. This will also properly recalculate the disk space used by
// the server when writing new files or modifying existing ones.
@@ -184,16 +171,16 @@ func (fs *Filesystem) CreateDirectory(name string, p string) error {
return os.MkdirAll(cleaned, 0o755)
}
// Moves (or renames) a file or directory.
// Rename moves (or renames) a file or directory.
func (fs *Filesystem) Rename(from string, to string) error {
cleanedFrom, err := fs.SafePath(from)
if err != nil {
return err
return errors.WithStack(err)
}
cleanedTo, err := fs.SafePath(to)
if err != nil {
return err
return errors.WithStack(err)
}
// If the target file or directory already exists the rename function will fail, so just
@@ -215,7 +202,10 @@ func (fs *Filesystem) Rename(from string, to string) error {
}
}
return os.Rename(cleanedFrom, cleanedTo)
if err := os.Rename(cleanedFrom, cleanedTo); err != nil {
return errors.WithStack(err)
}
return nil
}
// Recursively iterates over a file or directory and sets the permissions on all of the
@@ -492,7 +482,11 @@ func (fs *Filesystem) ListDirectory(p string) ([]Stat, error) {
cleanedp, _ = fs.SafePath(filepath.Join(cleaned, f.Name()))
}
if cleanedp != "" {
// Don't try to detect the type on a pipe; this will just hang the application and
// you'll never get a response back.
//
// @see https://github.com/pterodactyl/panel/issues/4059
if cleanedp != "" && f.Mode()&os.ModeNamedPipe == 0 {
m, _ = mimetype.DetectFile(filepath.Join(cleaned, f.Name()))
} else {
// Just pass this for an unknown type because the file could not safely be resolved within


@@ -1,6 +1,7 @@
package filesystem
import (
"bufio"
"bytes"
"errors"
"math/rand"
@@ -44,6 +45,14 @@ type rootFs struct {
root string
}
func getFileContent(file *os.File) string {
var w bytes.Buffer
if _, err := bufio.NewReader(file).WriteTo(&w); err != nil {
panic(err)
}
return w.String()
}
func (rfs *rootFs) CreateServerFile(p string, c []byte) error {
f, err := os.Create(filepath.Join(rfs.root, "/server", p))
@@ -75,54 +84,6 @@ func (rfs *rootFs) reset() {
}
}
func TestFilesystem_Readfile(t *testing.T) {
g := Goblin(t)
fs, rfs := NewFs()
g.Describe("Readfile", func() {
buf := &bytes.Buffer{}
g.It("opens a file if it exists on the system", func() {
err := rfs.CreateServerFileFromString("test.txt", "testing")
g.Assert(err).IsNil()
err = fs.Readfile("test.txt", buf)
g.Assert(err).IsNil()
g.Assert(buf.String()).Equal("testing")
})
g.It("returns an error if the file does not exist", func() {
err := fs.Readfile("test.txt", buf)
g.Assert(err).IsNotNil()
g.Assert(errors.Is(err, os.ErrNotExist)).IsTrue()
})
g.It("returns an error if the \"file\" is a directory", func() {
err := os.Mkdir(filepath.Join(rfs.root, "/server/test.txt"), 0o755)
g.Assert(err).IsNil()
err = fs.Readfile("test.txt", buf)
g.Assert(err).IsNotNil()
g.Assert(IsErrorCode(err, ErrCodeIsDirectory)).IsTrue()
})
g.It("cannot open a file outside the root directory", func() {
err := rfs.CreateServerFileFromString("/../test.txt", "testing")
g.Assert(err).IsNil()
err = fs.Readfile("/../test.txt", buf)
g.Assert(err).IsNotNil()
g.Assert(IsErrorCode(err, ErrCodePathResolution)).IsTrue()
})
g.AfterEach(func() {
buf.Truncate(0)
atomic.StoreInt64(&fs.diskUsed, 0)
rfs.reset()
})
})
}
func TestFilesystem_Writefile(t *testing.T) {
g := Goblin(t)
fs, rfs := NewFs()
@@ -140,9 +101,10 @@ func TestFilesystem_Writefile(t *testing.T) {
err := fs.Writefile("test.txt", r)
g.Assert(err).IsNil()
err = fs.Readfile("test.txt", buf)
f, _, err := fs.File("test.txt")
g.Assert(err).IsNil()
g.Assert(buf.String()).Equal("test file content")
defer f.Close()
g.Assert(getFileContent(f)).Equal("test file content")
g.Assert(atomic.LoadInt64(&fs.diskUsed)).Equal(r.Size())
})
@@ -152,9 +114,10 @@ func TestFilesystem_Writefile(t *testing.T) {
err := fs.Writefile("/some/nested/test.txt", r)
g.Assert(err).IsNil()
err = fs.Readfile("/some/nested/test.txt", buf)
f, _, err := fs.File("/some/nested/test.txt")
g.Assert(err).IsNil()
g.Assert(buf.String()).Equal("test file content")
defer f.Close()
g.Assert(getFileContent(f)).Equal("test file content")
})
g.It("can create a new file inside a nested directory without a trailing slash", func() {
@@ -163,9 +126,10 @@ func TestFilesystem_Writefile(t *testing.T) {
err := fs.Writefile("some/../foo/bar/test.txt", r)
g.Assert(err).IsNil()
err = fs.Readfile("foo/bar/test.txt", buf)
f, _, err := fs.File("foo/bar/test.txt")
g.Assert(err).IsNil()
g.Assert(buf.String()).Equal("test file content")
defer f.Close()
g.Assert(getFileContent(f)).Equal("test file content")
})
g.It("cannot create a file outside the root directory", func() {
@@ -190,28 +154,6 @@ func TestFilesystem_Writefile(t *testing.T) {
g.Assert(IsErrorCode(err, ErrCodeDiskSpace)).IsTrue()
})
/*g.It("updates the total space used when a file is appended to", func() {
atomic.StoreInt64(&fs.diskUsed, 100)
b := make([]byte, 100)
_, _ = rand.Read(b)
r := bytes.NewReader(b)
err := fs.Writefile("test.txt", r)
g.Assert(err).IsNil()
g.Assert(atomic.LoadInt64(&fs.diskUsed)).Equal(int64(200))
// If we write less data than already exists, we should expect the total
// disk used to be decremented.
b = make([]byte, 50)
_, _ = rand.Read(b)
r = bytes.NewReader(b)
err = fs.Writefile("test.txt", r)
g.Assert(err).IsNil()
g.Assert(atomic.LoadInt64(&fs.diskUsed)).Equal(int64(150))
})*/
g.It("truncates the file when writing new contents", func() {
r := bytes.NewReader([]byte("original data"))
err := fs.Writefile("test.txt", r)
@@ -221,9 +163,10 @@ func TestFilesystem_Writefile(t *testing.T) {
err = fs.Writefile("test.txt", r)
g.Assert(err).IsNil()
err = fs.Readfile("test.txt", buf)
f, _, err := fs.File("test.txt")
g.Assert(err).IsNil()
g.Assert(buf.String()).Equal("new data")
defer f.Close()
g.Assert(getFileContent(f)).Equal("new data")
})
g.AfterEach(func() {

View File

@@ -119,16 +119,6 @@ func TestFilesystem_Blocks_Symlinks(t *testing.T) {
panic(err)
}
g.Describe("Readfile", func() {
g.It("cannot read a file symlinked outside the root", func() {
b := bytes.Buffer{}
err := fs.Readfile("symlinked.txt", &b)
g.Assert(err).IsNotNil()
g.Assert(IsErrorCode(err, ErrCodePathResolution)).IsTrue()
})
})
g.Describe("Writefile", func() {
g.It("cannot write to a file symlinked outside the root", func() {
r := bytes.NewReader([]byte("testing"))

View File

@@ -10,6 +10,7 @@ import (
"path/filepath"
"strconv"
"strings"
"time"
"emperror.dev/errors"
"github.com/apex/log"
@@ -17,23 +18,23 @@ import (
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/mount"
"github.com/docker/docker/client"
"github.com/pterodactyl/wings/config"
"github.com/pterodactyl/wings/environment"
"github.com/pterodactyl/wings/remote"
"github.com/pterodactyl/wings/system"
)
// Executes the installation stack for a server process. Bubbles any errors up to the calling
// function which should handle contacting the panel to notify it of the server state.
// Install executes the installation stack for a server process. Bubbles any
// errors up to the calling function which should handle contacting the panel to
// notify it of the server state.
//
// Pass true as the first argument in order to execute a server sync before the process to
// ensure the latest information is used.
// Pass true as the first argument in order to execute a server sync before the
// process to ensure the latest information is used.
func (s *Server) Install(sync bool) error {
if sync {
s.Log().Info("syncing server state with remote source before executing installation process")
if err := s.Sync(); err != nil {
return err
return errors.WrapIf(err, "install: failed to sync server state with Panel")
}
}
@@ -57,7 +58,7 @@ func (s *Server) Install(sync bool) error {
// error to this log entry. Otherwise ignore it in this log since whatever is calling
// this function should handle the error and will end up logging the same one.
if err == nil {
l.WithField("error", serr)
l.WithField("error", err)
}
l.Warn("failed to notify panel of server install state")
@@ -71,7 +72,7 @@ func (s *Server) Install(sync bool) error {
// the install is completed.
s.Events().Publish(InstallCompletedEvent, "")
return err
return errors.WithStackIf(err)
}
// Reinstalls a server's software by utilizing the install script for the server egg. This
@@ -79,8 +80,8 @@ func (s *Server) Install(sync bool) error {
func (s *Server) Reinstall() error {
if s.Environment.State() != environment.ProcessOfflineState {
s.Log().Debug("waiting for server instance to enter a stopped state")
if err := s.Environment.WaitForStop(10, true); err != nil {
return err
if err := s.Environment.WaitForStop(s.Context(), time.Second*10, true); err != nil {
return errors.WrapIf(err, "install: failed to stop running environment")
}
}
@@ -110,9 +111,7 @@ func (s *Server) internalInstall() error {
type InstallationProcess struct {
Server *Server
Script *remote.InstallationScript
client *client.Client
context context.Context
}
// Generates a new installation process struct that will be used to create containers,
@@ -127,7 +126,6 @@ func NewInstallationProcess(s *Server, script *remote.InstallationScript) (*Inst
return nil, err
} else {
proc.client = c
proc.context = s.Context()
}
return proc, nil
@@ -157,7 +155,7 @@ func (s *Server) SetRestoring(state bool) {
// Removes the installer container for the server.
func (ip *InstallationProcess) RemoveContainer() error {
err := ip.client.ContainerRemove(ip.context, ip.Server.ID()+"_installer", types.ContainerRemoveOptions{
err := ip.client.ContainerRemove(ip.Server.Context(), ip.Server.ID()+"_installer", types.ContainerRemoveOptions{
RemoveVolumes: true,
Force: true,
})
@@ -167,11 +165,10 @@ func (ip *InstallationProcess) RemoveContainer() error {
return nil
}
// Runs the installation process; this is done in a background thread. This will configure
// the required environment, and then spin up the installation container.
//
// Once the container finishes installing the results will be stored in an installation
// log in the server's configuration directory.
// Run runs the installation process; this is done in a background thread.
// This will configure the required environment, and then spin up the
// installation container. Once the container finishes installing, the results
// are stored in an installation log in the server's configuration directory.
func (ip *InstallationProcess) Run() error {
ip.Server.Log().Debug("acquiring installation process lock")
if !ip.Server.installing.SwapIf(true) {
@@ -207,7 +204,7 @@ func (ip *InstallationProcess) Run() error {
// Returns the location of the temporary data for the installation process.
func (ip *InstallationProcess) tempDir() string {
return filepath.Join(os.TempDir(), "pterodactyl/", ip.Server.ID())
return filepath.Join(config.Get().System.TmpDirectory, ip.Server.ID())
}
// Writes the installation script to a temporary file on the host machine so that it
@@ -267,9 +264,9 @@ func (ip *InstallationProcess) pullInstallationImage() error {
imagePullOptions.RegistryAuth = b64
}
r, err := ip.client.ImagePull(context.Background(), ip.Script.ContainerImage, imagePullOptions)
r, err := ip.client.ImagePull(ip.Server.Context(), ip.Script.ContainerImage, imagePullOptions)
if err != nil {
images, ierr := ip.client.ImageList(context.Background(), types.ImageListOptions{})
images, ierr := ip.client.ImageList(ip.Server.Context(), types.ImageListOptions{})
if ierr != nil {
// Well damn, something has gone really wrong here, just go ahead and abort there
// isn't much anything we can do to try and self-recover from this.
@@ -312,9 +309,10 @@ func (ip *InstallationProcess) pullInstallationImage() error {
return nil
}
// Runs before the container is executed. This pulls down the required docker container image
// as well as writes the installation script to the disk. This process is executed in an async
// manner; if either one fails the error is returned.
// BeforeExecute runs before the container is executed. This pulls down the
// required docker container image as well as writes the installation script to
// the disk. This process is executed in an async manner; if either one fails
// the error is returned.
func (ip *InstallationProcess) BeforeExecute() error {
if err := ip.writeScriptToDisk(); err != nil {
return errors.WithMessage(err, "failed to write installation script to disk")
@@ -340,7 +338,7 @@ func (ip *InstallationProcess) AfterExecute(containerId string) error {
defer ip.RemoveContainer()
ip.Server.Log().WithField("container_id", containerId).Debug("pulling installation logs for server")
reader, err := ip.client.ContainerLogs(ip.context, containerId, types.ContainerLogsOptions{
reader, err := ip.client.ContainerLogs(ip.Server.Context(), containerId, types.ContainerLogsOptions{
ShowStdout: true,
ShowStderr: true,
Follow: false,
@@ -395,12 +393,13 @@ func (ip *InstallationProcess) AfterExecute(containerId string) error {
return nil
}
// Executes the installation process inside a specially created docker container.
// Execute executes the installation process inside a specially created docker
// container.
func (ip *InstallationProcess) Execute() (string, error) {
// Create a child context that is canceled once this function is done running. This
// will also be canceled if the parent context (from the Server struct) is canceled
// which occurs if the server is deleted.
ctx, cancel := context.WithCancel(ip.context)
ctx, cancel := context.WithCancel(ip.Server.Context())
defer cancel()
conf := &container.Config{
@@ -511,18 +510,15 @@ func (ip *InstallationProcess) Execute() (string, error) {
// the server configuration directory, as well as to a websocket listener so
// that the process can be viewed in the panel by administrators.
func (ip *InstallationProcess) StreamOutput(ctx context.Context, id string) error {
reader, err := ip.client.ContainerLogs(ctx, id, types.ContainerLogsOptions{
ShowStdout: true,
ShowStderr: true,
Follow: true,
})
opts := types.ContainerLogsOptions{ShowStdout: true, ShowStderr: true, Follow: true}
reader, err := ip.client.ContainerLogs(ctx, id, opts)
if err != nil {
return err
}
defer reader.Close()
err = system.ScanReader(reader, ip.Server.Sink(InstallSink).Push)
if err != nil {
err = system.ScanReader(reader, ip.Server.Sink(system.InstallSink).Push)
if err != nil && !errors.Is(err, context.Canceled) {
ip.Server.Log().WithFields(log.Fields{"container_id": id, "error": err}).Warn("error processing install output lines")
}
return nil
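Taken together, these changes route every Docker API call through the server's own context instead of a context cached on the installation struct. A minimal standalone sketch of the pattern; the function and variable names here are hypothetical, not part of the diff:

package main

import (
	"context"
	"fmt"
)

// runWithServerContext derives a child context from a long-lived parent (the
// server context above) so that cancelling the parent, e.g. when the server
// is deleted, also aborts any in-flight installation work.
func runWithServerContext(parent context.Context, work func(context.Context) error) error {
	ctx, cancel := context.WithCancel(parent)
	// Always cancel the child when this call returns so its resources are
	// released even if the parent context lives on.
	defer cancel()
	return work(ctx)
}

func main() {
	err := runWithServerContext(context.Background(), func(ctx context.Context) error {
		select {
		case <-ctx.Done():
			return ctx.Err() // parent or child was cancelled mid-install
		default:
			return nil // the container work would run here
		}
	})
	fmt.Println(err)
}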

View File

@@ -1,15 +1,17 @@
package server
import (
"bytes"
"regexp"
"strconv"
"sync"
"time"
"github.com/apex/log"
"github.com/pterodactyl/wings/config"
"github.com/pterodactyl/wings/environment"
"github.com/pterodactyl/wings/events"
"github.com/pterodactyl/wings/system"
"github.com/pterodactyl/wings/environment"
"github.com/pterodactyl/wings/remote"
)
@@ -44,136 +46,133 @@ func (dsl *diskSpaceLimiter) Reset() {
func (dsl *diskSpaceLimiter) Trigger() {
dsl.o.Do(func() {
dsl.server.PublishConsoleOutputFromDaemon("Server is exceeding the assigned disk space limit, stopping process now.")
if err := dsl.server.Environment.WaitForStop(60, true); err != nil {
if err := dsl.server.Environment.WaitForStop(dsl.server.Context(), time.Minute, true); err != nil {
dsl.server.Log().WithField("error", err).Error("failed to stop server after exceeding space limit!")
}
})
}
// processConsoleOutputEvent handles output from a server's Docker container
// and runs through different limiting logic to ensure that spam console output
// does not cause negative effects to the system. This will also monitor the
// output lines to determine if the server is started yet, and if the output is
// not being throttled, will send the data over to the websocket.
func (s *Server) processConsoleOutputEvent(v []byte) {
t := s.Throttler()
err := t.Increment(func() {
s.PublishConsoleOutputFromDaemon("Your server is outputting too much data and is being throttled.")
})
// An error is only returned if the server has breached the thresholds set.
if err != nil {
// If the process is already stopping, just let it continue with that action rather than attempting
// to terminate again.
if s.Environment.State() != environment.ProcessStoppingState {
s.Environment.SetState(environment.ProcessStoppingState)
// Always process the console output, but do this in a separate thread since we
// don't really care about side-effects from this call, and don't want it to block
// the console sending logic.
go s.onConsoleOutput(v)
go func() {
s.Log().Warn("stopping server instance, violating throttle limits")
s.PublishConsoleOutputFromDaemon("Your server is being stopped for outputting too much data in a short period of time.")
// Completely skip over server power actions and terminate the running instance. This gives the
// server 15 seconds to finish stopping gracefully before it is forcefully terminated.
if err := s.Environment.WaitForStop(config.Get().Throttles.StopGracePeriod, true); err != nil {
// If there is an error set the process back to running so that this throttler is called
// again and hopefully kills the server.
if s.Environment.State() != environment.ProcessOfflineState {
s.Environment.SetState(environment.ProcessRunningState)
// If the console is being throttled, do nothing else with it; we don't want
// to waste time. This code previously terminated server instances after violating
// different throttle limits. That code was clunky and difficult to reason about,
// in addition to being a consistent pain point for users.
//
// In the interest of building highly efficient software, that code has been removed
// here, and we'll rely on the host to detect bad actors through their own means.
if !s.Throttler().Allow() {
return
}
s.Log().WithField("error", err).Error("failed to terminate environment after triggering throttle")
}
}()
}
}
// If we are not throttled, go ahead and output the data.
if !t.Throttled() {
s.Sink(LogSink).Push(v)
}
// Also pass the data along to the console output channel.
s.onConsoleOutput(string(v))
s.Sink(system.LogSink).Push(v)
}
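The Allow method itself is not part of this diff; a plausible sketch of its shape, assuming the rewritten throttle wraps the system.Rate limiter introduced later in this changeset:

package server

import "github.com/pterodactyl/wings/system"

// consoleThrottleSketch is a sketch only; the real ConsoleThrottle fields are
// not shown in this diff. The point is that Allow becomes a thin,
// non-blocking check against a rolling rate window, replacing the old
// increment-and-terminate flow.
type consoleThrottleSketch struct {
	limit *system.Rate
}

func (ct *consoleThrottleSketch) Allow() bool {
	return ct.limit.Try()
}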
// StartEventListeners adds all the internal event listeners we want to use for a server. These listeners can only be
// removed by deleting the server as they should last for the duration of the process' lifetime.
// StartEventListeners adds all the internal event listeners we want to use for
// a server. These listeners can only be removed by deleting the server as they
// should last for the duration of the process' lifetime.
func (s *Server) StartEventListeners() {
state := make(chan events.Event)
stats := make(chan events.Event)
docker := make(chan events.Event)
c := make(chan []byte, 8)
limit := newDiskLimiter(s)
s.Log().Debug("registering event listeners: console, state, resources...")
s.Environment.Events().On(c)
s.Environment.SetLogCallback(s.processConsoleOutputEvent)
go func() {
l := newDiskLimiter(s)
for {
select {
case e := <-state:
go func() {
case v := <-c:
go func(v []byte, limit *diskSpaceLimiter) {
var e events.Event
if err := events.DecodeTo(v, &e); err != nil {
return
}
switch e.Topic {
case environment.ResourceEvent:
{
var stats struct {
Topic string
Data environment.Stats
}
if err := events.DecodeTo(v, &stats); err != nil {
s.Log().WithField("error", err).Warn("failed to decode server resource event")
return
}
s.resources.UpdateStats(stats.Data)
// If there is no disk space available at this point, trigger the server
// disk limiter logic which will start to stop the running instance.
if !s.Filesystem().HasSpaceAvailable(true) {
limit.Trigger()
}
s.Events().Publish(StatsEvent, s.Proc())
}
case environment.StateChangeEvent:
{
// Reset the throttler when the process is started.
if e.Data == environment.ProcessStartingState {
l.Reset()
limit.Reset()
s.Throttler().Reset()
}
s.OnStateChange()
}()
case e := <-stats:
go func() {
// Update the server resource tracking object with the resources we got here.
s.resources.mu.Lock()
s.resources.Stats = e.Data.(environment.Stats)
s.resources.mu.Unlock()
// If there is no disk space available at this point, trigger the server disk limiter logic
// which will start to stop the running instance.
if !s.Filesystem().HasSpaceAvailable(true) {
l.Trigger()
}
s.Events().Publish(StatsEvent, s.Proc())
}()
case e := <-docker:
go func() {
switch e.Topic {
case environment.DockerImagePullStatus:
s.Events().Publish(InstallOutputEvent, e.Data)
case environment.DockerImagePullStarted:
s.PublishConsoleOutputFromDaemon("Pulling Docker container image, this could take a few minutes to complete...")
default:
case environment.DockerImagePullCompleted:
s.PublishConsoleOutputFromDaemon("Finished pulling Docker container image")
default:
}
}()
}(v, limit)
case <-s.Context().Done():
return
}
}
}()
s.Log().Debug("registering event listeners: console, state, resources...")
s.Environment.SetLogCallback(s.processConsoleOutputEvent)
s.Environment.Events().On(state, environment.StateChangeEvent)
s.Environment.Events().On(stats, environment.ResourceEvent)
s.Environment.Events().On(docker, dockerEvents...)
}
var stripAnsiRegex = regexp.MustCompile("[\u001B\u009B][[\\]()#;?]*(?:(?:(?:[a-zA-Z\\d]*(?:;[a-zA-Z\\d]*)*)?\u0007)|(?:(?:\\d{1,4}(?:;\\d{0,4})*)?[\\dA-PRZcf-ntqry=><~]))")
// Custom listener for console output events that will check if the given line
// of output matches one that should mark the server as started or not.
func (s *Server) onConsoleOutput(data string) {
// Get the server's process configuration.
func (s *Server) onConsoleOutput(data []byte) {
if s.Environment.State() != environment.ProcessStartingState && !s.IsRunning() {
return
}
processConfiguration := s.ProcessConfiguration()
// Make a copy of the data provided since it is by reference, otherwise you'll
// potentially introduce a race condition by modifying the value.
v := make([]byte, len(data))
copy(v, data)
// Check if the server is currently starting.
if s.Environment.State() == environment.ProcessStartingState {
// Check if we should strip ansi color codes.
if processConfiguration.Startup.StripAnsi {
// Strip ansi color codes from the data string.
data = stripAnsiRegex.ReplaceAllString(data, "")
v = stripAnsiRegex.ReplaceAll(v, []byte(""))
}
// Iterate over all the done lines.
for _, l := range processConfiguration.Startup.Done {
if !l.Matches(data) {
if !l.Matches(v) {
continue
}
s.Log().WithFields(log.Fields{
"match": l.String(),
"against": strconv.QuoteToASCII(data),
"against": strconv.QuoteToASCII(string(v)),
}).Debug("detected server in running state based on console line output")
// If the specific line of output is one that would mark the server as started,
@@ -190,7 +189,7 @@ func (s *Server) onConsoleOutput(data string) {
if s.IsRunning() {
stop := processConfiguration.Stop
if stop.Type == remote.ProcessStopCommand && data == stop.Value {
if stop.Type == remote.ProcessStopCommand && bytes.Equal(v, []byte(stop.Value)) {
s.Environment.SetState(environment.ProcessOfflineState)
}
}
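The rewrite above collapses the three typed channels into a single byte channel whose payloads are decoded per topic. A rough standalone sketch of that fan-in shape, using encoding/json in place of the internal events package:

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the minimal shape the listener needs: a topic to switch on,
// plus raw data that is only decoded for the topics that actually use it.
type event struct {
	Topic string          `json:"topic"`
	Data  json.RawMessage `json:"data"`
}

func handle(v []byte) {
	var e event
	if err := json.Unmarshal(v, &e); err != nil {
		return // malformed payloads are dropped, as in the diff
	}
	switch e.Topic {
	case "resources":
		fmt.Println("decode the stats payload and update resource tracking")
	case "state change":
		fmt.Println("reset limiters and handle the state change")
	}
}

func main() {
	handle([]byte(`{"topic":"state change"}`))
}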

View File

@@ -52,6 +52,24 @@ func (m *Manager) Client() remote.Client {
return m.client
}
// Len returns the count of servers stored in the manager instance.
func (m *Manager) Len() int {
m.mu.RLock()
defer m.mu.RUnlock()
return len(m.servers)
}
// Keys returns all of the server UUIDs stored in the manager set.
func (m *Manager) Keys() []string {
m.mu.RLock()
defer m.mu.RUnlock()
keys := make([]string, len(m.servers))
for i, s := range m.servers {
keys[i] = s.ID()
}
return keys
}
// Put replaces all the current values in the collection with the value that
// is passed through.
func (m *Manager) Put(s []*Server) {
@@ -199,7 +217,6 @@ func (m *Manager) InitServer(data remote.ServerConfigurationResponse) (*Server,
} else {
s.Environment = env
s.StartEventListeners()
s.Throttler().StartTimer(s.Context())
}
// If the server's data directory exists, force disk usage calculation.

View File

@@ -4,7 +4,6 @@ import (
"context"
"fmt"
"os"
"sync"
"time"
"emperror.dev/errors"
@@ -41,81 +40,6 @@ func (pa PowerAction) IsStart() bool {
return pa == PowerActionStart || pa == PowerActionRestart
}
type powerLocker struct {
mu sync.RWMutex
ch chan bool
}
func newPowerLocker() *powerLocker {
return &powerLocker{
ch: make(chan bool, 1),
}
}
type errPowerLockerLocked struct{}
func (e errPowerLockerLocked) Error() string {
return "cannot acquire a lock on the power state: already locked"
}
var ErrPowerLockerLocked error = errPowerLockerLocked{}
// IsLocked returns the current state of the locker channel. If there is
// currently a value in the channel, it is assumed to be locked.
func (pl *powerLocker) IsLocked() bool {
pl.mu.RLock()
defer pl.mu.RUnlock()
return len(pl.ch) == 1
}
// Acquire will acquire the power lock if it is not currently locked. If it is
// already locked, acquire will fail to acquire the lock, and will return false.
func (pl *powerLocker) Acquire() error {
pl.mu.Lock()
defer pl.mu.Unlock()
if len(pl.ch) == 1 {
return errors.WithStack(ErrPowerLockerLocked)
}
pl.ch <- true
return nil
}
// TryAcquire will attempt to acquire a power-lock until the context provided
// is canceled.
func (pl *powerLocker) TryAcquire(ctx context.Context) error {
select {
case pl.ch <- true:
return nil
case <-ctx.Done():
if err := ctx.Err(); err != nil {
return errors.WithStack(err)
}
return nil
}
}
// Release will drain the locker channel so that we can properly re-acquire it
// at a later time.
func (pl *powerLocker) Release() {
pl.mu.Lock()
if len(pl.ch) == 1 {
<-pl.ch
}
pl.mu.Unlock()
}
// Destroy cleans up the power locker by closing the channel.
func (pl *powerLocker) Destroy() {
pl.mu.Lock()
if pl.ch != nil {
if len(pl.ch) == 1 {
<-pl.ch
}
close(pl.ch)
}
pl.mu.Unlock()
}
// ExecutingPowerAction checks if there is currently a power action being
// processed for the server.
func (s *Server) ExecutingPowerAction() bool {
@@ -209,11 +133,11 @@ func (s *Server) HandlePowerAction(action PowerAction, waitSeconds ...int) error
return s.Environment.Start(s.Context())
case PowerActionStop:
// We're specifically waiting for the process to be stopped here, otherwise the lock is released
// too soon, and you can rack up all sorts of issues.
return s.Environment.WaitForStop(10*60, true)
fallthrough
case PowerActionRestart:
if err := s.Environment.WaitForStop(10*60, true); err != nil {
// We're specifically waiting for the process to be stopped here, otherwise the lock is
// released too soon, and you can rack up all sorts of issues.
if err := s.Environment.WaitForStop(s.Context(), time.Minute*10, true); err != nil {
// Even timeout errors should be bubbled back up the stack. If the process didn't stop
// nicely, but the terminate argument was passed then the server is stopped without an
// error being returned.
@@ -225,6 +149,10 @@ func (s *Server) HandlePowerAction(action PowerAction, waitSeconds ...int) error
return err
}
if action == PowerActionStop {
return nil
}
// Now actually try to start the process by executing the normal pre-boot logic.
if err := s.onBeforeStart(); err != nil {
return err
@@ -232,7 +160,7 @@ func (s *Server) HandlePowerAction(action PowerAction, waitSeconds ...int) error
return s.Environment.Start(s.Context())
case PowerActionTerminate:
return s.Environment.Terminate(os.Kill)
return s.Environment.Terminate(s.Context(), os.Kill)
}
return errors.New("attempting to handle unknown power action")
@@ -273,15 +201,19 @@ func (s *Server) onBeforeStart() error {
// we don't need to actively do anything about it at this point, worse comes to worst the
// server starts in a weird state and the user can manually adjust.
s.PublishConsoleOutputFromDaemon("Updating process configuration files...")
s.Log().Debug("updating server configuration files...")
s.UpdateConfigurationFiles()
s.Log().Debug("updated server configuration files")
if config.Get().System.CheckPermissionsOnBoot {
s.PublishConsoleOutputFromDaemon("Ensuring file permissions are set correctly, this could take a few seconds...")
// Ensure all the server file permissions are set correctly before booting the process.
s.Log().Debug("chowning server root directory...")
if err := s.Filesystem().Chown("/"); err != nil {
return errors.WithMessage(err, "failed to chown root server directory during pre-boot process")
}
}
s.Log().Info("completed server preflight, starting boot process...")
return nil
}
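With the powerLocker extracted, the interesting change here is the control flow: stop now falls through into restart so both share one WaitForStop path, and a plain stop returns early once the process is down. A compact illustration with hypothetical names:

package main

import "fmt"

func handlePower(action string) error {
	switch action {
	case "stop":
		// A stop reuses restart's wait-for-stop logic below.
		fallthrough
	case "restart":
		fmt.Println("waiting up to ten minutes for the process to stop")
		if action == "stop" {
			return nil // a plain stop ends here; restart continues to boot
		}
		fmt.Println("running pre-boot logic and starting the environment")
	}
	return nil
}

func main() {
	_ = handlePower("stop")
	_ = handlePower("restart")
}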

View File

@@ -1,154 +1,18 @@
package server
import (
"context"
"testing"
"time"
"emperror.dev/errors"
. "github.com/franela/goblin"
"github.com/pterodactyl/wings/system"
)
func TestPower(t *testing.T) {
g := Goblin(t)
g.Describe("PowerLocker", func() {
var pl *powerLocker
g.BeforeEach(func() {
pl = newPowerLocker()
})
g.Describe("PowerLocker#IsLocked", func() {
g.It("should return false when the channel is empty", func() {
g.Assert(cap(pl.ch)).Equal(1)
g.Assert(pl.IsLocked()).IsFalse()
})
g.It("should return true when the channel is at capacity", func() {
pl.ch <- true
g.Assert(pl.IsLocked()).IsTrue()
<-pl.ch
g.Assert(pl.IsLocked()).IsFalse()
// We don't care what the channel value is, just that there is
// something in it.
pl.ch <- false
g.Assert(pl.IsLocked()).IsTrue()
g.Assert(cap(pl.ch)).Equal(1)
})
})
g.Describe("PowerLocker#Acquire", func() {
g.It("should acquire a lock when channel is empty", func() {
err := pl.Acquire()
g.Assert(err).IsNil()
g.Assert(cap(pl.ch)).Equal(1)
g.Assert(len(pl.ch)).Equal(1)
})
g.It("should return an error when the channel is full", func() {
pl.ch <- true
err := pl.Acquire()
g.Assert(err).IsNotNil()
g.Assert(errors.Is(err, ErrPowerLockerLocked)).IsTrue()
g.Assert(cap(pl.ch)).Equal(1)
g.Assert(len(pl.ch)).Equal(1)
})
})
g.Describe("PowerLocker#TryAcquire", func() {
g.It("should acquire a lock when channel is empty", func() {
g.Timeout(time.Second)
err := pl.TryAcquire(context.Background())
g.Assert(err).IsNil()
g.Assert(cap(pl.ch)).Equal(1)
g.Assert(len(pl.ch)).Equal(1)
g.Assert(pl.IsLocked()).IsTrue()
})
g.It("should block until context is canceled if channel is full", func() {
g.Timeout(time.Second)
ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond*500)
defer cancel()
pl.ch <- true
err := pl.TryAcquire(ctx)
g.Assert(err).IsNotNil()
g.Assert(errors.Is(err, context.DeadlineExceeded)).IsTrue()
g.Assert(cap(pl.ch)).Equal(1)
g.Assert(len(pl.ch)).Equal(1)
g.Assert(pl.IsLocked()).IsTrue()
})
g.It("should block until lock can be acquired", func() {
g.Timeout(time.Second)
ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond*200)
defer cancel()
pl.Acquire()
go func() {
time.AfterFunc(time.Millisecond * 50, func() {
pl.Release()
})
}()
err := pl.TryAcquire(ctx)
g.Assert(err).IsNil()
g.Assert(cap(pl.ch)).Equal(1)
g.Assert(len(pl.ch)).Equal(1)
g.Assert(pl.IsLocked()).IsTrue()
})
})
g.Describe("PowerLocker#Release", func() {
g.It("should release when channel is full", func() {
pl.Acquire()
g.Assert(pl.IsLocked()).IsTrue()
pl.Release()
g.Assert(cap(pl.ch)).Equal(1)
g.Assert(len(pl.ch)).Equal(0)
g.Assert(pl.IsLocked()).IsFalse()
})
g.It("should release when channel is empty", func() {
g.Assert(pl.IsLocked()).IsFalse()
pl.Release()
g.Assert(cap(pl.ch)).Equal(1)
g.Assert(len(pl.ch)).Equal(0)
g.Assert(pl.IsLocked()).IsFalse()
})
})
g.Describe("PowerLocker#Destroy", func() {
g.It("should unlock and close the channel", func() {
pl.Acquire()
g.Assert(pl.IsLocked()).IsTrue()
pl.Destroy()
g.Assert(pl.IsLocked()).IsFalse()
defer func() {
r := recover()
g.Assert(r).IsNotNil()
g.Assert(r.(error).Error()).Equal("send on closed channel")
}()
pl.Acquire()
})
})
})
g.Describe("Server#ExecutingPowerAction", func() {
g.It("should return based on locker status", func() {
s := &Server{powerLock: newPowerLocker()}
s := &Server{powerLock: system.NewLocker()}
g.Assert(s.ExecutingPowerAction()).IsFalse()
s.powerLock.Acquire()

View File

@@ -38,6 +38,13 @@ func (s *Server) Proc() ResourceUsage {
return s.resources
}
// UpdateStats updates the current stats for the server's resource usage.
func (ru *ResourceUsage) UpdateStats(stats environment.Stats) {
ru.mu.Lock()
ru.Stats = stats
ru.mu.Unlock()
}
// Reset resets the usages values to zero, used when a server is stopped to ensure we don't hold
// onto any values incorrectly.
func (ru *ResourceUsage) Reset() {

View File

@@ -31,8 +31,7 @@ type Server struct {
ctxCancel *context.CancelFunc
emitterLock sync.Mutex
powerLock *powerLocker
throttleOnce sync.Once
powerLock *system.Locker
// Maintains the configuration for the server. This is the data that gets returned by the Panel
// such as build settings and container images.
@@ -64,16 +63,17 @@ type Server struct {
restoring *system.AtomicBool
// The console throttler instance used to control outputs.
throttler *ConsoleThrottler
throttler *ConsoleThrottle
throttleOnce sync.Once
// Tracks open websocket connections for the server.
wsBag *WebsocketBag
wsBagLocker sync.Mutex
sinks map[SinkName]*sinkPool
sinks map[system.SinkName]*system.SinkPool
logSink *sinkPool
installSink *sinkPool
logSink *system.SinkPool
installSink *system.SinkPool
}
// New returns a new server instance with a context and all of the default
@@ -87,10 +87,10 @@ func New(client remote.Client) (*Server, error) {
installing: system.NewAtomicBool(false),
transferring: system.NewAtomicBool(false),
restoring: system.NewAtomicBool(false),
powerLock: newPowerLocker(),
sinks: map[SinkName]*sinkPool{
LogSink: newSinkPool(),
InstallSink: newSinkPool(),
powerLock: system.NewLocker(),
sinks: map[system.SinkName]*system.SinkPool{
system.LogSink: system.NewSinkPool(),
system.InstallSink: system.NewSinkPool(),
},
}
if err := defaults.Set(&s); err != nil {
@@ -239,14 +239,6 @@ func (s *Server) ReadLogfile(len int) ([]string, error) {
return s.Environment.Readlog(len)
}
// Determine if the server is bootable in its current state or not. This will not
// indicate why a server is not bootable, only if it is.
func (s *Server) IsBootable() bool {
exists, _ := s.Environment.Exists()
return exists
}
// Initializes a server instance. This will run through and ensure that the environment
// for the server is setup, and that all of the necessary files are created.
func (s *Server) CreateEnvironment() error {

View File

@@ -1,117 +0,0 @@
package server
import (
"sync"
)
// SinkName represents one of the registered sinks for a server.
type SinkName string
const (
// LogSink handles console output for game servers, including messages being
// sent via Wings to the console instance.
LogSink SinkName = "log"
// InstallSink handles installation output for a server.
InstallSink SinkName = "install"
)
// sinkPool represents a pool with sinks.
type sinkPool struct {
mu sync.RWMutex
sinks []chan []byte
}
// newSinkPool returns a new empty sinkPool. A sink pool generally lives with a
// server instance for its full lifetime.
func newSinkPool() *sinkPool {
return &sinkPool{}
}
// On adds a channel to the sink pool instance.
func (p *sinkPool) On(c chan []byte) {
p.mu.Lock()
p.sinks = append(p.sinks, c)
p.mu.Unlock()
}
// Off removes a given channel from the sink pool. If no matching sink is found
// this function is a no-op. If a matching channel is found, it will be removed.
func (p *sinkPool) Off(c chan []byte) {
p.mu.Lock()
defer p.mu.Unlock()
sinks := p.sinks
for i, sink := range sinks {
if c != sink {
continue
}
// We need to maintain the order of the sinks in the slice we're tracking,
// so shift everything to the left, rather than changing the order of the
// elements.
copy(sinks[i:], sinks[i+1:])
sinks[len(sinks)-1] = nil
sinks = sinks[:len(sinks)-1]
p.sinks = sinks
// Avoid a panic if the sink channel is nil at this point.
if c != nil {
close(c)
}
return
}
}
// Destroy destroys the pool by removing and closing all sinks and destroying
// all of the channels that are present.
func (p *sinkPool) Destroy() {
p.mu.Lock()
defer p.mu.Unlock()
for _, c := range p.sinks {
if c != nil {
close(c)
}
}
p.sinks = nil
}
// Push sends a given message to each of the channels registered in the pool.
func (p *sinkPool) Push(data []byte) {
p.mu.RLock()
// Attempt to send the data over to the channels. If the channel buffer is full,
// or otherwise blocked for some reason (such as being a nil channel), just discard
// the event data and move on to the next channel in the slice. If you don't
// implement the "default" on the select you'll block execution until the channel
// becomes unblocked, which is not what we want to do here.
for _, c := range p.sinks {
select {
case c <- data:
default:
}
}
p.mu.RUnlock()
}
// Sink returns the instantiated and named sink for a server. If the sink has
// not been configured yet this function will cause a panic condition.
func (s *Server) Sink(name SinkName) *sinkPool {
sink, ok := s.sinks[name]
if !ok {
s.Log().Fatalf("attempt to access nil sink: %s", name)
}
return sink
}
// DestroyAllSinks iterates over all of the sinks configured for the server and
// destroys their instances. Note that this will cause a panic if you attempt
// to call Server.Sink() again after. This function is only used when a server
// is being deleted from the system.
func (s *Server) DestroyAllSinks() {
s.Log().Info("destroying all registered sinks for server instance")
for _, sink := range s.sinks {
sink.Destroy()
}
}

View File

@@ -1,6 +1,8 @@
package server
import (
"time"
"github.com/pterodactyl/wings/environment/docker"
"github.com/pterodactyl/wings/environment"
@@ -58,7 +60,7 @@ func (s *Server) SyncWithEnvironment() {
s.Log().Info("server suspended with running process state, terminating now")
go func(s *Server) {
if err := s.Environment.WaitForStop(60, true); err != nil {
if err := s.Environment.WaitForStop(s.Context(), time.Minute, true); err != nil {
s.Log().WithField("error", err).Warn("failed to terminate server environment after suspension")
}
}(s)

58 sftp/event.go Normal file
View File

@@ -0,0 +1,58 @@
package sftp
import (
"emperror.dev/errors"
"github.com/apex/log"
"github.com/pterodactyl/wings/internal/database"
"github.com/pterodactyl/wings/internal/models"
)
type eventHandler struct {
ip string
user string
server string
}
type FileAction struct {
// Entity is the targeted file or directory (depending on the event) that the action
// is being performed _against_, such as "/foo/test.txt". This will always be the full
// path to the element.
Entity string
// Target is an optional (often blank) field that only has a value in it when the event
// is specifically modifying the entity, such as a rename or move event. In that case
// the Target field will be the final value, such as "/bar/new.txt".
Target string
}
// Log parses an SFTP-specific file activity event and then passes it off to be stored
// in the normal activity database.
func (eh *eventHandler) Log(e models.Event, fa FileAction) error {
metadata := map[string]interface{}{
"files": []string{fa.Entity},
}
if fa.Target != "" {
metadata["files"] = []map[string]string{
{"from": fa.Entity, "to": fa.Target},
}
}
a := models.Activity{
Server: eh.server,
Event: e,
Metadata: metadata,
IP: eh.ip,
}
if tx := database.Instance().Create(a.SetUser(eh.user)); tx.Error != nil {
return errors.WithStack(tx.Error)
}
return nil
}
// MustLog is a wrapper around Log that reports any error encountered while
// logging the event rather than returning it to the caller.
func (eh *eventHandler) MustLog(e models.Event, fa FileAction) {
if err := eh.Log(e, fa); err != nil {
log.WithField("error", errors.WithStack(err)).WithField("event", e).Error("sftp: failed to log event")
}
}
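A rename therefore produces a from/to pair in the metadata while every other event carries a flat file list. A small sketch of just that shaping logic, mirroring Log above:

package main

import "fmt"

// metadataFor mirrors how eventHandler.Log shapes its payload: a plain file
// list normally, and a from/to pair when the event has a Target.
func metadataFor(entity, target string) map[string]interface{} {
	if target == "" {
		return map[string]interface{}{"files": []string{entity}}
	}
	return map[string]interface{}{
		"files": []map[string]string{{"from": entity, "to": target}},
	}
}

func main() {
	fmt.Println(metadataFor("/foo/test.txt", ""))             // create, write, delete
	fmt.Println(metadataFor("/foo/test.txt", "/bar/new.txt")) // rename
}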

View File

@@ -28,31 +28,39 @@ const (
type Handler struct {
mu sync.Mutex
permissions []string
server *server.Server
fs *filesystem.Filesystem
events *eventHandler
permissions []string
logger *log.Entry
ro bool
}
// Returns a new connection handler for the SFTP server. This allows a given user
// NewHandler returns a new connection handler for the SFTP server. This allows a given user
// to access the underlying filesystem.
func NewHandler(sc *ssh.ServerConn, srv *server.Server) *Handler {
func NewHandler(sc *ssh.ServerConn, srv *server.Server) (*Handler, error) {
uuid, ok := sc.Permissions.Extensions["user"]
if !ok {
return nil, errors.New("sftp: mismatched Wings and Panel versions — Panel 1.10 is required for this version of Wings.")
}
events := eventHandler{
ip: sc.RemoteAddr().String(),
user: uuid,
server: srv.ID(),
}
return &Handler{
permissions: strings.Split(sc.Permissions.Extensions["permissions"], ","),
server: srv,
fs: srv.Filesystem(),
events: &events,
ro: config.Get().System.Sftp.ReadOnly,
logger: log.WithFields(log.Fields{
"subsystem": "sftp",
"username": sc.User(),
"ip": sc.RemoteAddr(),
}),
}
logger: log.WithFields(log.Fields{"subsystem": "sftp", "user": uuid, "ip": sc.RemoteAddr()}),
}, nil
}
// Returns the sftp.Handlers for this struct.
// Handlers returns the sftp.Handlers for this struct.
func (h *Handler) Handlers() sftp.Handlers {
return sftp.Handlers{
FileGet: h,
@@ -121,7 +129,12 @@ func (h *Handler) Filewrite(request *sftp.Request) (io.WriterAt, error) {
}
// Chown may or may not have been called in the touch function, so always do
// it at this point to avoid the file being improperly owned.
_ = h.server.Filesystem().Chown(request.Filepath)
_ = h.fs.Chown(request.Filepath)
event := server.ActivitySftpWrite
if permission == PermissionFileCreate {
event = server.ActivitySftpCreate
}
h.events.MustLog(event, FileAction{Entity: request.Filepath})
return f, nil
}
@@ -172,6 +185,7 @@ func (h *Handler) Filecmd(request *sftp.Request) error {
l.WithField("error", err).Error("failed to rename file")
return sftp.ErrSSHFxFailure
}
h.events.MustLog(server.ActivitySftpRename, FileAction{Entity: request.Filepath, Target: request.Target})
break
// Handle deletion of a directory. This will properly delete all of the files and
// folders within that directory if it is not already empty (unlike a lot of SFTP
@@ -180,10 +194,12 @@ func (h *Handler) Filecmd(request *sftp.Request) error {
if !h.can(PermissionFileDelete) {
return sftp.ErrSSHFxPermissionDenied
}
if err := h.fs.Delete(request.Filepath); err != nil {
p := filepath.Clean(request.Filepath)
if err := h.fs.Delete(p); err != nil {
l.WithField("error", err).Error("failed to remove directory")
return sftp.ErrSSHFxFailure
}
h.events.MustLog(server.ActivitySftpDelete, FileAction{Entity: request.Filepath})
return sftp.ErrSSHFxOk
// Handle requests to create a new Directory.
case "Mkdir":
@@ -191,11 +207,12 @@ func (h *Handler) Filecmd(request *sftp.Request) error {
return sftp.ErrSSHFxPermissionDenied
}
name := strings.Split(filepath.Clean(request.Filepath), "/")
err := h.fs.CreateDirectory(name[len(name)-1], strings.Join(name[0:len(name)-1], "/"))
if err != nil {
p := strings.Join(name[0:len(name)-1], "/")
if err := h.fs.CreateDirectory(name[len(name)-1], p); err != nil {
l.WithField("error", err).Error("failed to create directory")
return sftp.ErrSSHFxFailure
}
h.events.MustLog(server.ActivitySftpCreateDirectory, FileAction{Entity: request.Filepath})
break
// Support creating symlinks between files. The source and target must resolve within
// the server home directory.
@@ -228,6 +245,7 @@ func (h *Handler) Filecmd(request *sftp.Request) error {
l.WithField("error", err).Error("failed to remove a file")
return sftp.ErrSSHFxFailure
}
h.events.MustLog(server.ActivitySftpDelete, FileAction{Entity: request.Filepath})
return sftp.ErrSSHFxOk
default:
return sftp.ErrSSHFxOpUnsupported
@@ -287,15 +305,10 @@ func (h *Handler) can(permission string) bool {
if h.server.IsSuspended() {
return false
}
// SFTPServer owners and super admins have their permissions returned as '[*]' via the Panel
// API, so for the sake of speed do an initial check for that before iterating over the
// entire array of permissions.
if len(h.permissions) == 1 && h.permissions[0] == "*" {
return true
}
for _, p := range h.permissions {
if p == permission {
// If we match the permission specifically, or the user has been granted the "*"
// permission because they're an admin, let them through.
if p == permission || p == "*" {
return true
}
}

View File

@@ -70,7 +70,12 @@ func (c *SFTPServer) Run() error {
conf := &ssh.ServerConfig{
NoClientAuth: false,
MaxAuthTries: 6,
PasswordCallback: c.passwordCallback,
PasswordCallback: func(conn ssh.ConnMetadata, password []byte) (*ssh.Permissions, error) {
return c.makeCredentialsRequest(conn, remote.SftpAuthPassword, string(password))
},
PublicKeyCallback: func(conn ssh.ConnMetadata, key ssh.PublicKey) (*ssh.Permissions, error) {
return c.makeCredentialsRequest(conn, remote.SftpAuthPublicKey, string(ssh.MarshalAuthorizedKey(key)))
},
}
conf.AddHostKey(private)
@@ -86,19 +91,21 @@ func (c *SFTPServer) Run() error {
if conn, _ := listener.Accept(); conn != nil {
go func(conn net.Conn) {
defer conn.Close()
c.AcceptInbound(conn, conf)
if err := c.AcceptInbound(conn, conf); err != nil {
log.WithField("error", err).Error("sftp: failed to accept inbound connection")
}
}(conn)
}
}
}
// Handles an inbound connection to the instance and determines if we should serve the
// request or not.
func (c *SFTPServer) AcceptInbound(conn net.Conn, config *ssh.ServerConfig) {
// AcceptInbound handles an inbound connection to the instance and determines if we should
// serve the request or not.
func (c *SFTPServer) AcceptInbound(conn net.Conn, config *ssh.ServerConfig) error {
// Before beginning, a handshake must be performed on the incoming net.Conn.
sconn, chans, reqs, err := ssh.NewServerConn(conn, config)
if err != nil {
return
return errors.WithStack(err)
}
defer sconn.Close()
go ssh.DiscardRequests(reqs)
@@ -144,11 +151,17 @@ func (c *SFTPServer) AcceptInbound(conn net.Conn, config *ssh.ServerConfig) {
// Spin up a SFTP server instance for the authenticated user's server allowing
// them access to the underlying filesystem.
handler := sftp.NewRequestServer(channel, NewHandler(sconn, srv).Handlers())
if err := handler.Serve(); err == io.EOF {
handler.Close()
handler, err := NewHandler(sconn, srv)
if err != nil {
return errors.WithStackIf(err)
}
rs := sftp.NewRequestServer(channel, handler.Handlers())
if err := rs.Serve(); err == io.EOF {
_ = rs.Close()
}
}
return nil
}
// Generates a new ED25519 private key that is used for host authentication when
@@ -177,17 +190,17 @@ func (c *SFTPServer) generateED25519PrivateKey() error {
return nil
}
// A function capable of validating user credentials with the Panel API.
func (c *SFTPServer) passwordCallback(conn ssh.ConnMetadata, pass []byte) (*ssh.Permissions, error) {
func (c *SFTPServer) makeCredentialsRequest(conn ssh.ConnMetadata, t remote.SftpAuthRequestType, p string) (*ssh.Permissions, error) {
request := remote.SftpAuthRequest{
Type: t,
User: conn.User(),
Pass: string(pass),
Pass: p,
IP: conn.RemoteAddr().String(),
SessionID: conn.SessionID(),
ClientVersion: conn.ClientVersion(),
}
logger := log.WithFields(log.Fields{"subsystem": "sftp", "username": conn.User(), "ip": conn.RemoteAddr().String()})
logger := log.WithFields(log.Fields{"subsystem": "sftp", "method": request.Type, "username": request.User, "ip": request.IP})
logger.Debug("validating credentials for SFTP connection")
if !validUsernameRegexp.MatchString(request.User) {
@@ -206,15 +219,16 @@ func (c *SFTPServer) passwordCallback(conn ssh.ConnMetadata, pass []byte) (*ssh.
}
logger.WithField("server", resp.Server).Debug("credentials validated and matched to server instance")
sshPerm := &ssh.Permissions{
permissions := ssh.Permissions{
Extensions: map[string]string{
"ip": conn.RemoteAddr().String(),
"uuid": resp.Server,
"user": conn.User(),
"user": resp.User,
"permissions": strings.Join(resp.Permissions, ","),
},
}
return sshPerm, nil
return &permissions, nil
}
// PrivateKeyPath returns the path the host private key for this server instance.

View File

@@ -1,3 +1,3 @@
package system
var Version = "1.5.6"
var Version = "develop"

84 system/locker.go Normal file
View File

@@ -0,0 +1,84 @@
package system
import (
"context"
"sync"
"emperror.dev/errors"
)
var ErrLockerLocked = errors.Sentinel("locker: cannot acquire lock, already locked")
type Locker struct {
mu sync.RWMutex
ch chan bool
}
// NewLocker returns a new Locker instance.
func NewLocker() *Locker {
return &Locker{
ch: make(chan bool, 1),
}
}
// IsLocked returns the current state of the locker channel. If there is
// currently a value in the channel, it is assumed to be locked.
func (l *Locker) IsLocked() bool {
l.mu.RLock()
defer l.mu.RUnlock()
return len(l.ch) == 1
}
// Acquire will acquire the lock if it is not currently held. If it is already
// held, Acquire fails immediately and returns ErrLockerLocked.
func (l *Locker) Acquire() error {
l.mu.Lock()
defer l.mu.Unlock()
select {
case l.ch <- true:
default:
return ErrLockerLocked
}
return nil
}
// TryAcquire will attempt to acquire the lock, blocking until it succeeds or
// the provided context is canceled.
func (l *Locker) TryAcquire(ctx context.Context) error {
select {
case l.ch <- true:
return nil
case <-ctx.Done():
if err := ctx.Err(); err != nil {
if errors.Is(err, context.DeadlineExceeded) || errors.Is(err, context.Canceled) {
return ErrLockerLocked
}
}
return nil
}
}
// Release will drain the locker channel so that we can properly re-acquire it
// at a later time. If the channel is not currently locked this function is a
// no-op and will immediately return.
func (l *Locker) Release() {
l.mu.Lock()
select {
case <-l.ch:
default:
}
l.mu.Unlock()
}
// Destroy cleans up the locker by closing the channel.
func (l *Locker) Destroy() {
l.mu.Lock()
if l.ch != nil {
select {
case <-l.ch:
default:
}
close(l.ch)
}
l.mu.Unlock()
}
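A typical caller bounds the wait with a context and always releases the lock; a hedged sketch of that pattern, written as if it lived alongside this package:

package system

import (
	"context"
	"time"
)

// exampleLockerUsage sketches the intended call pattern for Locker: bound the
// wait, bail out if the lock stays held, and always Release on the way out.
func exampleLockerUsage() {
	l := NewLocker()
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if err := l.TryAcquire(ctx); err != nil {
		return // the lock is held elsewhere and the wait timed out
	}
	defer l.Release()
	// ... perform the guarded action while holding the lock ...
}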

148 system/locker_test.go Normal file
View File

@@ -0,0 +1,148 @@
package system
import (
"context"
"testing"
"time"
"emperror.dev/errors"
. "github.com/franela/goblin"
)
func TestPower(t *testing.T) {
g := Goblin(t)
g.Describe("Locker", func() {
var l *Locker
g.BeforeEach(func() {
l = NewLocker()
})
g.Describe("PowerLocker#IsLocked", func() {
g.It("should return false when the channel is empty", func() {
g.Assert(cap(l.ch)).Equal(1)
g.Assert(l.IsLocked()).IsFalse()
})
g.It("should return true when the channel is at capacity", func() {
l.ch <- true
g.Assert(l.IsLocked()).IsTrue()
<-l.ch
g.Assert(l.IsLocked()).IsFalse()
// We don't care what the channel value is, just that there is
// something in it.
l.ch <- false
g.Assert(l.IsLocked()).IsTrue()
g.Assert(cap(l.ch)).Equal(1)
})
})
g.Describe("PowerLocker#Acquire", func() {
g.It("should acquire a lock when channel is empty", func() {
err := l.Acquire()
g.Assert(err).IsNil()
g.Assert(cap(l.ch)).Equal(1)
g.Assert(len(l.ch)).Equal(1)
})
g.It("should return an error when the channel is full", func() {
l.ch <- true
err := l.Acquire()
g.Assert(err).IsNotNil()
g.Assert(errors.Is(err, ErrLockerLocked)).IsTrue()
g.Assert(cap(l.ch)).Equal(1)
g.Assert(len(l.ch)).Equal(1)
})
})
g.Describe("PowerLocker#TryAcquire", func() {
g.It("should acquire a lock when channel is empty", func() {
g.Timeout(time.Second)
err := l.TryAcquire(context.Background())
g.Assert(err).IsNil()
g.Assert(cap(l.ch)).Equal(1)
g.Assert(len(l.ch)).Equal(1)
g.Assert(l.IsLocked()).IsTrue()
})
g.It("should block until context is canceled if channel is full", func() {
g.Timeout(time.Second)
ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond*500)
defer cancel()
l.ch <- true
err := l.TryAcquire(ctx)
g.Assert(err).IsNotNil()
g.Assert(errors.Is(err, ErrLockerLocked)).IsTrue()
g.Assert(cap(l.ch)).Equal(1)
g.Assert(len(l.ch)).Equal(1)
g.Assert(l.IsLocked()).IsTrue()
})
g.It("should block until lock can be acquired", func() {
g.Timeout(time.Second)
ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond*200)
defer cancel()
l.Acquire()
go func() {
time.AfterFunc(time.Millisecond*50, func() {
l.Release()
})
}()
err := l.TryAcquire(ctx)
g.Assert(err).IsNil()
g.Assert(cap(l.ch)).Equal(1)
g.Assert(len(l.ch)).Equal(1)
g.Assert(l.IsLocked()).IsTrue()
})
})
g.Describe("PowerLocker#Release", func() {
g.It("should release when channel is full", func() {
l.Acquire()
g.Assert(l.IsLocked()).IsTrue()
l.Release()
g.Assert(cap(l.ch)).Equal(1)
g.Assert(len(l.ch)).Equal(0)
g.Assert(l.IsLocked()).IsFalse()
})
g.It("should release when channel is empty", func() {
g.Assert(l.IsLocked()).IsFalse()
l.Release()
g.Assert(cap(l.ch)).Equal(1)
g.Assert(len(l.ch)).Equal(0)
g.Assert(l.IsLocked()).IsFalse()
})
})
g.Describe("PowerLocker#Destroy", func() {
g.It("should unlock and close the channel", func() {
l.Acquire()
g.Assert(l.IsLocked()).IsTrue()
l.Destroy()
g.Assert(l.IsLocked()).IsFalse()
defer func() {
r := recover()
g.Assert(r).IsNotNil()
g.Assert(r.(error).Error()).Equal("send on closed channel")
}()
l.Acquire()
})
})
})
}

50 system/rate.go Normal file
View File

@@ -0,0 +1,50 @@
package system
import (
"sync"
"time"
)
// Rate defines a rate limiter of n items (limit) per duration of time.
type Rate struct {
mu sync.Mutex
limit uint64
duration time.Duration
count uint64
last time.Time
}
func NewRate(limit uint64, duration time.Duration) *Rate {
return &Rate{
limit: limit,
duration: duration,
last: time.Now(),
}
}
// Try returns true if under the rate limit defined, or false if the rate limit
// has been exceeded for the current duration.
func (r *Rate) Try() bool {
r.mu.Lock()
defer r.mu.Unlock()
now := time.Now()
// If it has been more than the duration, reset the timer and count.
if now.Sub(r.last) > r.duration {
r.count = 0
r.last = now
}
if (r.count + 1) > r.limit {
return false
}
// Hit this once, and return.
r.count = r.count + 1
return true
}
// Reset resets the internal state of the rate limiter back to zero.
func (r *Rate) Reset() {
r.mu.Lock()
r.count = 0
r.last = time.Now()
r.mu.Unlock()
}
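As a usage sketch, with an illustrative limit and window rather than anything taken from the configuration:

package system

import "time"

// throttleLines drops lines once more than 100 arrive inside a rolling 100ms
// window; the numbers here are illustrative only.
func throttleLines(lines <-chan []byte, emit func([]byte)) {
	r := NewRate(100, 100*time.Millisecond)
	for line := range lines {
		if !r.Try() {
			continue // over the limit for this window, discard the line
		}
		emit(line)
	}
}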

67 system/rate_test.go Normal file
View File

@@ -0,0 +1,67 @@
package system
import (
"testing"
"time"
. "github.com/franela/goblin"
)
func TestRate(t *testing.T) {
g := Goblin(t)
g.Describe("Rate", func() {
g.It("properly rate limits a bucket", func() {
r := NewRate(10, time.Millisecond*100)
for i := 0; i < 100; i++ {
ok := r.Try()
if i < 10 && !ok {
g.Failf("should have allowed take on try %d", i)
} else if i >= 10 && ok {
g.Failf("should have blocked take on try %d", i)
}
}
})
g.It("handles rate limiting in chunks", func() {
var out []int
r := NewRate(12, time.Millisecond*10)
for i := 0; i < 100; i++ {
if i%20 == 0 {
// Give it time to recover.
time.Sleep(time.Millisecond * 10)
}
if r.Try() {
out = append(out, i)
}
}
g.Assert(len(out)).Equal(60)
g.Assert(out[0]).Equal(0)
g.Assert(out[12]).Equal(20)
g.Assert(out[len(out)-1]).Equal(91)
})
g.It("resets back to zero when called", func() {
r := NewRate(10, time.Second)
for i := 0; i < 100; i++ {
if i%10 == 0 {
r.Reset()
}
g.Assert(r.Try()).IsTrue()
}
g.Assert(r.Try()).IsFalse("final attempt should not allow taking")
})
})
}
func BenchmarkRate_Try(b *testing.B) {
r := NewRate(10, time.Millisecond*100)
b.ReportAllocs()
for i := 0; i < b.N; i++ {
r.Try()
}
}

121 system/sink_pool.go Normal file
View File

@@ -0,0 +1,121 @@
package system
import (
"sync"
"time"
)
// SinkName represents one of the registered sinks for a server.
type SinkName string
const (
// LogSink handles console output for game servers, including messages being
// sent via Wings to the console instance.
LogSink SinkName = "log"
// InstallSink handles installation output for a server.
InstallSink SinkName = "install"
)
// SinkPool represents a pool with sinks.
type SinkPool struct {
mu sync.RWMutex
sinks []chan []byte
}
// NewSinkPool returns a new empty SinkPool. A sink pool generally lives with a
// server instance for its full lifetime.
func NewSinkPool() *SinkPool {
return &SinkPool{}
}
// On adds a channel to the sink pool instance.
func (p *SinkPool) On(c chan []byte) {
p.mu.Lock()
p.sinks = append(p.sinks, c)
p.mu.Unlock()
}
// Off removes a given channel from the sink pool. If no matching sink is found
// this function is a no-op. If a matching channel is found, it will be removed.
func (p *SinkPool) Off(c chan []byte) {
p.mu.Lock()
defer p.mu.Unlock()
sinks := p.sinks
for i, sink := range sinks {
if c != sink {
continue
}
// We need to maintain the order of the sinks in the slice we're tracking,
// so shift everything to the left, rather than changing the order of the
// elements.
copy(sinks[i:], sinks[i+1:])
sinks[len(sinks)-1] = nil
sinks = sinks[:len(sinks)-1]
p.sinks = sinks
// Avoid a panic if the sink channel is nil at this point.
if c != nil {
close(c)
}
return
}
}
// Destroy destroys the pool by removing and closing all sinks and destroying
// all of the channels that are present.
func (p *SinkPool) Destroy() {
p.mu.Lock()
defer p.mu.Unlock()
for _, c := range p.sinks {
if c != nil {
close(c)
}
}
p.sinks = nil
}
// Push sends a given message to each of the channels registered in the pool.
// This will use a Ring Buffer channel in order to avoid blocking the channel
// sends, and attempt to push though the most recent messages in the queue in
// favor of the oldest messages.
//
// If the channel becomes full and isn't being drained fast enough, this
// function will remove the oldest message in the channel, and then push the
// message that it got onto the end, effectively making the channel a rolling
// buffer.
//
// There is a potential for data to be lost when passing it through this
// function, but only in instances where the channel buffer is full and the
// channel is not drained fast enough, in which case dropping messages is most
// likely the best option anyways. This uses waitgroups to allow every channel
// to attempt its send concurrently, thus making the total blocking time of this
// function "O(1)" instead of "O(n)".
func (p *SinkPool) Push(data []byte) {
p.mu.RLock()
defer p.mu.RUnlock()
var wg sync.WaitGroup
wg.Add(len(p.sinks))
for _, c := range p.sinks {
go func(c chan []byte) {
defer wg.Done()
select {
case c <- data:
case <-time.After(time.Millisecond * 10):
// If there is nothing in the channel to read, but we also cannot write
// to the channel, just skip over sending data. If we don't do this you'll
// end up blocking the application on the channel read below.
if len(c) == 0 {
break
}
<-c
c <- data
}
}(c)
}
wg.Wait()
}
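The subscriber side of that contract looks roughly like this; the buffer size is an assumption, not something this diff prescribes:

package system

// consumeSink sketches a subscriber: attach a buffered channel, read until
// the pool closes it (via Off or Destroy), and rely on Push's ring-buffer
// behaviour to keep the producer from ever blocking on this reader.
func consumeSink(p *SinkPool) {
	ch := make(chan []byte, 8) // a small buffer absorbs short bursts
	p.On(ch)
	for line := range ch {
		_ = line // forward to a websocket, log file, etc.
	}
}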

View File

@@ -1,9 +1,11 @@
package server
package system
import (
"fmt"
"reflect"
"sync"
"testing"
"time"
. "github.com/franela/goblin"
)
@@ -21,7 +23,7 @@ func TestSink(t *testing.T) {
g.Describe("SinkPool#On", func() {
g.It("pushes additional channels to a sink", func() {
pool := &sinkPool{}
pool := &SinkPool{}
g.Assert(pool.sinks).IsZero()
@@ -34,9 +36,9 @@ func TestSink(t *testing.T) {
})
g.Describe("SinkPool#Off", func() {
var pool *sinkPool
var pool *SinkPool
g.BeforeEach(func() {
pool = &sinkPool{}
pool = &SinkPool{}
})
g.It("works when no sinks are registered", func() {
@@ -81,7 +83,7 @@ func TestSink(t *testing.T) {
g.It("does not panic if a nil channel is provided", func() {
ch := make([]chan []byte, 1)
defer func () {
defer func() {
if r := recover(); r != nil {
g.Fail("removing a nil channel should not cause a panic")
}
@@ -95,9 +97,9 @@ func TestSink(t *testing.T) {
})
g.Describe("SinkPool#Push", func() {
var pool *sinkPool
var pool *SinkPool
g.BeforeEach(func() {
pool = &sinkPool{}
pool = &SinkPool{}
})
g.It("works when no sinks are registered", func() {
@@ -123,29 +125,74 @@ func TestSink(t *testing.T) {
g.Assert(len(pool.sinks)).Equal(2)
})
g.It("does not block if a channel is nil or otherwise full", func() {
ch := make([]chan []byte, 2)
ch[1] = make(chan []byte, 1)
ch[1] <- []byte("test")
g.It("uses a ring-buffer to avoid blocking when the channel is full", func() {
ch1 := make(chan []byte, 1)
ch2 := make(chan []byte, 2)
ch3 := make(chan []byte)
pool.On(ch[0])
pool.On(ch[1])
// ch1 and ch2 are now full, and would block if the code doesn't account
// for a full buffer.
ch1 <- []byte("pre-test")
ch2 <- []byte("pre-test")
ch2 <- []byte("pre-test 2")
pool.On(ch1)
pool.On(ch2)
pool.On(ch3)
pool.Push([]byte("testing"))
time.Sleep(time.Millisecond * 20)
g.Assert(MutexLocked(&pool.mu)).IsFalse()
g.Assert(<-ch[1]).Equal([]byte("test"))
// We expect that value previously in the channel to have been dumped
// and therefore only the value we pushed will be present. For ch2 we
// expect only the first message was dropped, and the second one is now
// the first in the out queue.
g.Assert(<-ch1).Equal([]byte("testing"))
g.Assert(<-ch2).Equal([]byte("pre-test 2"))
g.Assert(<-ch2).Equal([]byte("testing"))
// Because nothing in this test was listening for ch3, it would have
// blocked for the 10ms duration, and then been skipped over entirely
// because it had no length to try and push onto.
g.Assert(len(ch3)).Equal(0)
// Now, push again and expect similar results.
pool.Push([]byte("testing 2"))
time.Sleep(time.Millisecond * 20)
pool.Push([]byte("test2"))
g.Assert(<-ch[1]).Equal([]byte("test2"))
g.Assert(MutexLocked(&pool.mu)).IsFalse()
g.Assert(<-ch1).Equal([]byte("testing 2"))
g.Assert(<-ch2).Equal([]byte("testing 2"))
})
g.It("can handle concurrent pushes FIFO", func() {
ch := make(chan []byte, 4)
pool.On(ch)
pool.On(make(chan []byte))
for i := 0; i < 100; i++ {
pool.Push([]byte(fmt.Sprintf("iteration %d", i)))
}
time.Sleep(time.Millisecond * 20)
g.Assert(MutexLocked(&pool.mu)).IsFalse()
g.Assert(len(ch)).Equal(4)
g.Timeout(time.Millisecond * 500)
g.Assert(<-ch).Equal([]byte("iteration 96"))
g.Assert(<-ch).Equal([]byte("iteration 97"))
g.Assert(<-ch).Equal([]byte("iteration 98"))
g.Assert(<-ch).Equal([]byte("iteration 99"))
g.Assert(len(ch)).Equal(0)
})
})
g.Describe("SinkPool#Destroy", func() {
var pool *sinkPool
var pool *SinkPool
g.BeforeEach(func() {
pool = &sinkPool{}
pool = &SinkPool{}
})
g.It("works if no sinks are registered", func() {

29 system/strings.go Normal file
View File

@@ -0,0 +1,29 @@
package system
import (
"math/rand"
"regexp"
"strings"
)
var ipTrimRegex = regexp.MustCompile(`(:\d*)?$`)
const characters = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890"
// RandomString generates a random string of alpha-numeric characters using a
// pseudo-random number generator. The output of this function IS NOT cryptographically
// secure, it is used solely for generating random strings outside a security context.
func RandomString(n int) string {
var b strings.Builder
b.Grow(n)
for i := 0; i < n; i++ {
b.WriteByte(characters[rand.Intn(len(characters))])
}
return b.String()
}
// TrimIPSuffix removes the internal port value from an IP address to ensure we're only
// ever working directly with the IP address.
func TrimIPSuffix(s string) string {
return ipTrimRegex.ReplaceAllString(s, "")
}
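A quick illustration of both helpers (hypothetical usage; the import path follows the module path seen elsewhere in this diff):

package main

import (
	"fmt"

	"github.com/pterodactyl/wings/system"
)

func main() {
	fmt.Println(system.TrimIPSuffix("192.168.1.10:25565")) // "192.168.1.10"
	fmt.Println(system.TrimIPSuffix("192.168.1.10"))       // unchanged: the port group is optional
	// Caveat: a bare IPv6 address like "::1" would also lose its trailing
	// ":1", so this helper assumes IPv4 or bracketed IPv6 input.
	fmt.Println(system.RandomString(12)) // e.g. "k2JdO9aQxB1m" (12 pseudo-random chars)
}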

system/utils.go

@@ -3,12 +3,10 @@ package system
import (
"bufio"
"bytes"
"context"
"fmt"
"io"
"strconv"
"sync"
"time"
"emperror.dev/errors"
"github.com/goccy/go-json"
@@ -90,16 +88,16 @@ func ScanReader(r io.Reader, callback func(line []byte)) error {
} else {
buf.Write(line)
}
-// If we encountered an error with something in ReadLine that was not an
-// EOF just abort the entire process here.
-if err != nil && err != io.EOF {
-return err
-}
// Finish this loop and begin outputting the line if there is no prefix
// (the line fit into the default buffer), or if we hit the end of the line.
if !isPrefix || err == io.EOF {
break
}
+// If we encountered an error with something in ReadLine that was not an
+// EOF just abort the entire process here.
+if err != nil {
+return err
+}
}
// Send the full buffer length over to the event handler to be emitted in
@@ -122,22 +120,6 @@ func ScanReader(r io.Reader, callback func(line []byte)) error {
return nil
}
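The reorder above keeps behavior identical while simplifying the condition: io.EOF now triggers the break before the error check runs, so the old "&& err != io.EOF" guard is no longer needed. A caller-side sketch of how ScanReader is driven (the reader and callback here are illustrative):

package main

import (
	"fmt"
	"strings"

	"github.com/pterodactyl/wings/system"
)

func main() {
	// The final line has no trailing "\n"; it is still emitted because the
	// loop breaks on io.EOF and only then considers other errors.
	r := strings.NewReader("first line\nsecond line\nfinal line without newline")
	err := system.ScanReader(r, func(line []byte) {
		fmt.Printf("-> %s\n", line)
	})
	if err != nil {
		fmt.Println("scan error:", err)
	}
}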
-// Runs a given work function every "d" duration until the provided context is canceled.
-func Every(ctx context.Context, d time.Duration, work func(t time.Time)) {
-ticker := time.NewTicker(d)
-go func() {
-for {
-select {
-case <-ctx.Done():
-ticker.Stop()
-return
-case t := <-ticker.C:
-work(t)
-}
-}
-}()
-}
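Every appears to have been the file's only consumer of the context and time imports, which lines up with both dropping out of the import block above. Any caller that still needs the behavior can inline the same ticker pattern; a self-contained stand-in:

package main

import (
	"context"
	"fmt"
	"time"
)

// every reproduces the removed helper: call work every d until ctx is done.
func every(ctx context.Context, d time.Duration, work func(t time.Time)) {
	ticker := time.NewTicker(d)
	go func() {
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return
			case t := <-ticker.C:
				work(t)
			}
		}
	}()
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 350*time.Millisecond)
	defer cancel()
	every(ctx, 100*time.Millisecond, func(t time.Time) {
		fmt.Println("tick at", t.Format(time.StampMilli))
	})
	<-ctx.Done() // allow a few ticks to fire before main exits
}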
func FormatBytes(b int64) string {
if b < 1024 {
return fmt.Sprintf("%d B", b)
@@ -165,9 +147,9 @@ func (ab *AtomicBool) Store(v bool) {
ab.mu.Unlock()
}
-// Stores the value "v" if the current value stored in the AtomicBool is the opposite
-// boolean value. If successfully swapped, the response is "true", otherwise "false"
-// is returned.
+// SwapIf stores the value "v" if the current value stored in the AtomicBool is
+// the opposite boolean value. If successfully swapped, the response is "true",
+// otherwise "false" is returned.
func (ab *AtomicBool) SwapIf(v bool) bool {
ab.mu.Lock()
defer ab.mu.Unlock()
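The diff view truncates here; going by the rewritten doc comment, the remainder of SwapIf is a compare-then-set under the write lock. A sketch (the bool field name v is an assumption, not confirmed by this diff):

func (ab *AtomicBool) SwapIf(v bool) bool {
	ab.mu.Lock()
	defer ab.mu.Unlock()
	if ab.v == v {
		return false // already holds v; nothing was swapped
	}
	ab.v = v
	return true
}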

system/utils_test.go

@@ -3,10 +3,12 @@ package system
import (
"math/rand"
"strings"
"sync"
"testing"
"time"
. "github.com/franela/goblin"
"github.com/goccy/go-json"
)
func Test_Utils(t *testing.T) {
@@ -40,6 +42,80 @@ func Test_Utils(t *testing.T) {
g.Assert(lines).Equal([]string{"test\rstrin", "another\rli", "hodor\r\r\rhe", "material g"})
})
})
g.Describe("AtomicBool", func() {
var b *AtomicBool
g.BeforeEach(func() {
b = NewAtomicBool(false)
})
g.It("initalizes with the provided start value", func() {
b = NewAtomicBool(true)
g.Assert(b.Load()).IsTrue()
b = NewAtomicBool(false)
g.Assert(b.Load()).IsFalse()
})
g.Describe("AtomicBool#Store", func() {
g.It("stores the provided value", func() {
g.Assert(b.Load()).IsFalse()
b.Store(true)
g.Assert(b.Load()).IsTrue()
})
// This test makes no assertions, it just expects to not hit a race condition
// by having multiple things writing at the same time.
g.It("handles contention from multiple routines", func() {
var wg sync.WaitGroup
wg.Add(100)
for i := 0; i < 100; i++ {
go func(i int) {
b.Store(i%2 == 0)
wg.Done()
}(i)
}
wg.Wait()
})
})
g.Describe("AtomicBool#SwapIf", func() {
g.It("swaps the value out if different than what is stored", func() {
o := b.SwapIf(false)
g.Assert(o).IsFalse()
g.Assert(b.Load()).IsFalse()
o = b.SwapIf(true)
g.Assert(o).IsTrue()
g.Assert(b.Load()).IsTrue()
o = b.SwapIf(true)
g.Assert(o).IsFalse()
g.Assert(b.Load()).IsTrue()
o = b.SwapIf(false)
g.Assert(o).IsTrue()
g.Assert(b.Load()).IsFalse()
})
})
g.Describe("can be marshaled with JSON", func() {
type testStruct struct {
Value AtomicBool `json:"value"`
}
var o testStruct
err := json.Unmarshal([]byte(`{"value":true}`), &o)
g.Assert(err).IsNil()
g.Assert(o.Value.Load()).IsTrue()
b, err2 := json.Marshal(&o)
g.Assert(err2).IsNil()
g.Assert(b).Equal([]byte(`{"value":true}`))
})
})
}
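The final Describe block only passes if AtomicBool implements the json.Marshaler and json.Unmarshaler interfaces so the wrapper serializes as a bare boolean. A hypothetical sketch of those hooks (json is goccy/go-json per the imports above; the v field name is an assumption):

func (ab *AtomicBool) UnmarshalJSON(b []byte) error {
	ab.mu.Lock()
	defer ab.mu.Unlock()
	return json.Unmarshal(b, &ab.v)
}

func (ab *AtomicBool) MarshalJSON() ([]byte, error) {
	ab.mu.RLock()
	defer ab.mu.RUnlock()
	return json.Marshal(ab.v)
}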
func Benchmark_ScanReader(b *testing.B) {

wings.go

@@ -2,8 +2,16 @@ package main
import (
"github.com/pterodactyl/wings/cmd"
"math/rand"
"time"
)
func main() {
+// Since we make use of the math/rand package in the code, especially for generating
+// non-cryptographically secure random strings we need to seed the RNG. Just make use
+// of the current time for this.
+rand.Seed(time.Now().UnixNano())
+// Execute the main binary code.
cmd.Execute()
}
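One note on the seeding: before Go 1.20, the global math/rand source defaults to a fixed seed of 1, so without this call helpers like RandomString would emit the identical "random" sequence on every daemon start. A minimal demonstration:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	// Remove the Seed call and this prints the same number on every run;
	// with it, the output varies per invocation.
	rand.Seed(time.Now().UnixNano())
	fmt.Println(rand.Intn(1_000_000))
}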