Compare commits


24 Commits

Author SHA1 Message Date
Nicola Murino
f117d6d55e set version to 2.0.4 2021-04-10 08:33:34 +02:00
Nicola Murino
3b668813b4 CI: replace deprecated actions with gh CLI 2021-04-08 21:30:21 +02:00
Nicola Murino
4d7a0660fc release workflow: re-add the Linux bundle build
it is used as the source for the PPA packages
2021-04-08 08:41:23 +02:00
Nicola Murino
1ac61468b9 CI: replace xgo with QEMU
currently xgo doesn't allow choosing the build OS, which can cause
unexpected issues; for example, the v2.0.3 packages for arm64 and ppc64
don't run on Ubuntu 18.04
2021-04-07 15:38:15 +02:00
Nicola Murino
7dd795e6ed prepare for 2.0.3 release 2021-03-28 14:54:27 +02:00
Nicola Murino
a3d4f5e800 improve signals handling 2021-03-26 16:55:28 +01:00
Nicola Murino
30ce6ef736 add a test case for UID/GID limits 2021-03-25 18:05:18 +01:00
Mike Unitskyi
2e6497ea17 Increase uid:gid limits
Fixes #361
2021-03-25 17:42:26 +01:00
Nicola Murino
cc9db96257 OpenAPI schema: remove some superfluous required definitions
Fixes #356
2021-03-22 19:24:42 +01:00
Nicola Murino
dd485509f6 initialize argon params before creating the data provider 2021-03-21 20:22:17 +01:00
Nicola Murino
df2e490680 try to auto create virtual folders if missing 2021-03-11 18:56:24 +01:00
Nicola Murino
7b0ea8f731 httpclient: load CA certificates only when required
on Windows x509.SystemCertPool is not implemented and therefore we end
up with an empty certificate pool if we load the CA certificates
unconditionally (a sketch follows the commit list below)
2021-03-11 18:56:17 +01:00
Nicola Murino
591bebef0c update deps and use Go 1.16 2021-03-07 11:57:14 +01:00
Nicola Murino
bdb6f585c7 add a setting to skip natural keys validation
With the "skip_natural_keys_validation" data provider setting enabled,
the natural keys for the REST API/Web Admin, such as usernames, admin
names and folder names, are no longer restricted to unreserved URI chars

Fixes #334 #308
2021-03-05 19:08:22 +01:00
Nicola Murino
db354e838c portable mode: fix WebDAV support 2021-03-05 08:50:47 +01:00
Nicola Murino
1e785077f2 add Segmed to the sponsors section 2021-03-03 18:57:31 +01:00
Nicola Murino
700ca7550c SSH system command: add os separator to the resolved path when appropriate
Fixes #327
2021-03-02 11:36:59 +01:00
Nicola Murino
791846adee CI: re-enable cross build
It was accidentally disabled
2021-02-28 12:15:57 +01:00
Nicola Murino
af6f1f6026 change license to AGPL-3 2021-02-28 11:46:13 +01:00
Nicola Murino
5d3288c37d TLS: allow to configure cipher suites
Fixes #316
2021-02-25 19:58:50 +01:00
Nicola Murino
82b26f81d6 don't upload coverage
we only upload coverage for the main branch
2021-02-21 12:13:22 +01:00
Nicola Murino
e0bbed1260 fix os versions too 2021-02-21 12:09:19 +01:00
Nicola Murino
13e81530e9 CI: fix go version for cross builds 2021-02-18 20:34:29 +01:00
Nicola Murino
c10b236f5d create branch 2.0.x 2021-02-18 08:50:42 +01:00
398 changed files with 16387 additions and 78170 deletions
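
Commit 7b0ea8f731 above relies on a real platform gap: before Go 1.18, `x509.SystemCertPool` returns an error on Windows, so code that unconditionally builds a custom pool would silently drop the system roots there. Below is a minimal sketch of the conditional-loading pattern the commit message describes; the function name, config shape and package are illustrative, not SFTPGo's actual code:

```go
// Hedged sketch: only touch the certificate pool when custom CA files are
// actually configured; otherwise leave RootCAs nil so Go uses the host's
// default verification (which works on Windows too).
package httpclient

import (
	"crypto/tls"
	"crypto/x509"
	"net/http"
	"os"
)

// newHTTPClient is illustrative, not the actual SFTPGo implementation.
func newHTTPClient(caFiles []string) (*http.Client, error) {
	var rootCAs *x509.CertPool
	if len(caFiles) > 0 {
		pool, err := x509.SystemCertPool()
		if err != nil {
			// e.g. Windows before Go 1.18: fall back to an empty pool,
			// accepting that only the configured CAs will be trusted.
			pool = x509.NewCertPool()
		}
		for _, f := range caFiles {
			pem, err := os.ReadFile(f)
			if err != nil {
				return nil, err
			}
			pool.AppendCertsFromPEM(pem)
		}
		rootCAs = pool
	}
	tlsConfig := &tls.Config{RootCAs: rootCAs} // nil RootCAs = system default
	return &http.Client{Transport: &http.Transport{TLSClientConfig: tlsConfig}}, nil
}
```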


@@ -1,20 +0,0 @@
version: 2
updates:
- package-ecosystem: "gomod"
directory: "/"
schedule:
interval: "weekly"
open-pull-requests-limit: 2
- package-ecosystem: "docker"
directory: "/"
schedule:
interval: "weekly"
open-pull-requests-limit: 2
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"
open-pull-requests-limit: 2


@@ -2,7 +2,7 @@ name: CI
on:
push:
branches: [main]
branches: [2.0.x]
pull_request:
jobs:
@@ -11,13 +11,9 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
go: [1.17]
os: [ubuntu-latest, macos-latest]
upload-coverage: [true]
include:
- go: 1.17
os: windows-latest
upload-coverage: false
go: [1.16]
os: [ubuntu-18.04, macos-10.15, windows-2019]
upload-coverage: [false]
steps:
- uses: actions/checkout@v2
@@ -29,42 +25,23 @@ jobs:
with:
go-version: ${{ matrix.go }}
- name: Build for Linux/macOS x86_64
- name: Build for Linux/macOS
if: startsWith(matrix.os, 'windows-') != true
run: |
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
cd tests/eventsearcher
go mod tidy
go build -trimpath -ldflags "-s -w" -o eventsearcher
cd -
- name: Build for macOS arm64
if: startsWith(matrix.os, 'macos-') == true
run: CGO_ENABLED=1 GOOS=darwin GOARCH=arm64 SDKROOT=$(xcrun --sdk macosx --show-sdk-path) go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo_arm64
run: go build -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
- name: Build for Windows
if: startsWith(matrix.os, 'windows-')
run: |
$GIT_COMMIT = (git describe --always --dirty) | Out-String
$DATE_TIME = ([datetime]::Now.ToUniversalTime().toString("yyyy-MM-ddTHH:mm:ssZ")) | Out-String
$LATEST_TAG = ((git describe --tags $(git rev-list --tags --max-count=1)) | Out-String).Trim()
$REV_LIST=$LATEST_TAG+"..HEAD"
$COMMITS_FROM_TAG= ((git rev-list $REV_LIST --count) | Out-String).Trim()
$FILE_VERSION = $LATEST_TAG.substring(1) + "." + $COMMITS_FROM_TAG
go install github.com/tc-hib/go-winres@latest
go-winres simply --arch amd64 --product-version $LATEST_TAG-dev-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o sftpgo.exe
cd tests/eventsearcher
go mod tidy
go build -trimpath -ldflags "-s -w" -o eventsearcher.exe
cd ../..
go build -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/version.date=$DATE_TIME" -o sftpgo.exe
- name: Run test cases using SQLite provider
run: go test -v -p 1 -timeout 15m ./... -coverprofile=coverage.txt -covermode=atomic
run: go test -v -p 1 -timeout 10m ./... -coverprofile=coverage.txt -covermode=atomic
- name: Upload coverage to Codecov
if: ${{ matrix.upload-coverage }}
uses: codecov/codecov-action@v2
uses: codecov/codecov-action@v1
with:
file: ./coverage.txt
fail_ci_if_error: false
@@ -72,81 +49,68 @@ jobs:
- name: Run test cases using bolt provider
run: |
go test -v -p 1 -timeout 2m ./config -covermode=atomic
go test -v -p 1 -timeout 5m ./common -covermode=atomic
go test -v -p 1 -timeout 5m ./httpd -covermode=atomic
go test -v -p 1 -timeout 2m ./common -covermode=atomic
go test -v -p 1 -timeout 3m ./httpd -covermode=atomic
go test -v -p 1 -timeout 8m ./sftpd -covermode=atomic
go test -v -p 1 -timeout 5m ./ftpd -covermode=atomic
go test -v -p 1 -timeout 5m ./webdavd -covermode=atomic
go test -v -p 1 -timeout 2m ./ftpd -covermode=atomic
go test -v -p 1 -timeout 2m ./webdavd -covermode=atomic
go test -v -p 1 -timeout 2m ./telemetry -covermode=atomic
go test -v -p 1 -timeout 2m ./mfa -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: bolt
SFTPGO_DATA_PROVIDER__NAME: 'sftpgo_bolt.db'
- name: Run test cases using memory provider
run: go test -v -p 1 -timeout 15m ./... -covermode=atomic
run: go test -v -p 1 -timeout 10m ./... -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: memory
SFTPGO_DATA_PROVIDER__NAME: ''
- name: Prepare build artifact for macOS
if: startsWith(matrix.os, 'macos-') == true
- name: Gather cross build info
id: cross_info
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
run: |
mkdir -p output/{init,bash_completion,zsh_completion}
cp sftpgo output/sftpgo_x86_64
cp sftpgo_arm64 output/
GIT_COMMIT=$(git describe --always)
BUILD_DATE=$(date -u +%FT%TZ)
echo ::set-output name=sha::${GIT_COMMIT}
echo ::set-output name=created::${BUILD_DATE}
- name: Cross build with xgo
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
uses: crazy-max/ghaction-xgo@v1
with:
go_version: 1.16.x
dest: cross
prefix: sftpgo
targets: linux/arm64,linux/ppc64le
v: true
x: false
race: false
ldflags: -s -w -X github.com/drakkan/sftpgo/version.commit=${{ steps.cross_info.outputs.sha }} -X github.com/drakkan/sftpgo/version.date=${{ steps.cross_info.outputs.created }}
buildmode: default
- name: Prepare build artifact for Linux/macOS
if: startsWith(matrix.os, 'windows-') != true
run: |
mkdir -p output/{bash_completion,zsh_completion}
cp sftpgo output/
cp sftpgo.json output/
cp -r templates output/
cp -r static output/
cp -r openapi output/
cp init/com.github.drakkan.sftpgo.plist output/init/
cp -r init output/
./sftpgo gen completion bash > output/bash_completion/sftpgo
./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
./sftpgo gen man -d output/man/man1
gzip output/man/man1/*
- name: Prepare Windows installer
if: ${{ startsWith(matrix.os, 'windows-') && github.event_name != 'pull_request' }}
- name: Copy cross compiled Linux binaries
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
run: |
Remove-Item -LiteralPath "output" -Force -Recurse -ErrorAction Ignore
mkdir output
copy .\sftpgo.exe .\output
copy .\sftpgo.json .\output
copy .\sftpgo.db .\output
copy .\LICENSE .\output\LICENSE.txt
mkdir output\templates
xcopy .\templates .\output\templates\ /E
mkdir output\static
xcopy .\static .\output\static\ /E
mkdir output\openapi
xcopy .\openapi .\output\openapi\ /E
$LATEST_TAG = ((git describe --tags $(git rev-list --tags --max-count=1)) | Out-String).Trim()
$REV_LIST=$LATEST_TAG+"..HEAD"
$COMMITS_FROM_TAG= ((git rev-list $REV_LIST --count) | Out-String).Trim()
$Env:SFTPGO_ISS_DEV_VERSION = $LATEST_TAG + "." + $COMMITS_FROM_TAG
$CERT_PATH=(Get-Location -PSProvider FileSystem).ProviderPath + "\cert.pfx"
[IO.File]::WriteAllBytes($CERT_PATH,[System.Convert]::FromBase64String($Env:CERT_DATA))
certutil -f -p "$Env:CERT_PASS" -importpfx MY "$CERT_PATH"
rm "$CERT_PATH"
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.17763.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\sftpgo.exe
$INNO_S='/Ssigntool=$qC:/Program Files (x86)/Windows Kits/10/bin/10.0.17763.0/x86/signtool.exe$q sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n $qNicola Murino$q /d $qSFTPGo$q $f'
iscc "$INNO_S" .\windows-installer\sftpgo.iss
certutil -delstore MY "Nicola Murino"
env:
CERT_DATA: ${{ secrets.CERT_DATA }}
CERT_PASS: ${{ secrets.CERT_PASS }}
- name: Upload Windows installer artifact
if: ${{ startsWith(matrix.os, 'windows-') && github.event_name != 'pull_request' }}
uses: actions/upload-artifact@v2
with:
name: sftpgo_windows_installer_x86_64
path: ./sftpgo_windows_x86_64.exe
cp cross/sftpgo-linux-arm64 output/
cp cross/sftpgo-linux-ppc64le output/
- name: Prepare build artifact for Windows
if: startsWith(matrix.os, 'windows-')
run: |
Remove-Item -LiteralPath "output" -Force -Recurse -ErrorAction Ignore
mkdir output
copy .\sftpgo.exe .\output
copy .\sftpgo.json .\output
@@ -154,47 +118,81 @@ jobs:
xcopy .\templates .\output\templates\ /E
mkdir output\static
xcopy .\static .\output\static\ /E
mkdir output\openapi
xcopy .\openapi .\output\openapi\ /E
- name: Upload build artifact
if: startsWith(matrix.os, 'ubuntu-') != true
uses: actions/upload-artifact@v2
with:
name: sftpgo-${{ matrix.os }}-go-${{ matrix.go }}
name: sftpgo-${{ matrix.os }}-go${{ matrix.go }}
path: output
test-goarch-386:
name: Run test cases on 32-bit arch
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: 1.17
- name: Build
- name: Build Linux Packages
id: build_linux_pkgs
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
run: |
cd tests/eventsearcher
go mod tidy
go build -trimpath -ldflags "-s -w" -o eventsearcher
cd -
env:
GOARCH: 386
cp -r pkgs pkgs_arm64
cp -r pkgs pkgs_ppc64le
cd pkgs
./build.sh
cd ..
export NFPM_ARCH=arm64
export BIN_SUFFIX=-linux-arm64
cp cross/sftpgo${BIN_SUFFIX} .
cd pkgs_arm64
./build.sh
cd ..
export NFPM_ARCH=ppc64le
export BIN_SUFFIX=-linux-ppc64le
cp cross/sftpgo${BIN_SUFFIX} .
cd pkgs_ppc64le
./build.sh
PKG_VERSION=$(cat dist/version)
echo "::set-output name=pkg-version::${PKG_VERSION}"
- name: Run test cases
run: go test -v -p 1 -timeout 15m ./... -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: memory
SFTPGO_DATA_PROVIDER__NAME: ''
GOARCH: 386
- name: Upload Debian Package
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
uses: actions/upload-artifact@v2
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-x86_64-deb
path: pkgs/dist/deb/*
test-postgresql-mysql-crdb:
name: Test with PgSQL/MySQL/Cockroach
runs-on: ubuntu-latest
- name: Upload RPM Package
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
uses: actions/upload-artifact@v2
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-x86_64-rpm
path: pkgs/dist/rpm/*
- name: Upload Debian Package arm64
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
uses: actions/upload-artifact@v2
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-arm64-deb
path: pkgs_arm64/dist/deb/*
- name: Upload RPM Package arm64
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
uses: actions/upload-artifact@v2
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-arm64-rpm
path: pkgs_arm64/dist/rpm/*
- name: Upload Debian Package ppc64le
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
uses: actions/upload-artifact@v2
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-ppc64le-deb
path: pkgs_ppc64le/dist/deb/*
- name: Upload RPM Package ppc64le
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
uses: actions/upload-artifact@v2
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-ppc64le-rpm
path: pkgs_ppc64le/dist/rpm/*
test-postgresql-mysql:
name: Test with PostgreSQL/MySQL
runs-on: ubuntu-18.04
services:
postgres:
@@ -231,18 +229,14 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: 1.17
go-version: 1.16
- name: Build
run: |
cd tests/eventsearcher
go mod tidy
go build -trimpath -ldflags "-s -w" -o eventsearcher
cd -
run: go build -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
- name: Run tests using PostgreSQL provider
run: |
go test -v -p 1 -timeout 15m ./... -covermode=atomic
go test -v -p 1 -timeout 10m ./... -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: postgresql
SFTPGO_DATA_PROVIDER__NAME: sftpgo
@@ -253,7 +247,7 @@ jobs:
- name: Run tests using MySQL provider
run: |
go test -v -p 1 -timeout 15m ./... -covermode=atomic
go test -v -p 1 -timeout 10m ./... -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: mysql
SFTPGO_DATA_PROVIDER__NAME: sftpgo
@@ -262,147 +256,12 @@ jobs:
SFTPGO_DATA_PROVIDER__USERNAME: sftpgo
SFTPGO_DATA_PROVIDER__PASSWORD: sftpgo
- name: Run tests using CockroachDB provider
run: |
docker run --rm --name crdb --health-cmd "curl -I http://127.0.0.1:8080" --health-interval 10s --health-timeout 5s --health-retries 6 -p 26257:26257 -d cockroachdb/cockroach:latest start-single-node --insecure --listen-addr 0.0.0.0:26257
docker exec crdb cockroach sql --insecure -e 'create database "sftpgo"'
go test -v -p 1 -timeout 15m ./... -covermode=atomic
docker stop crdb
env:
SFTPGO_DATA_PROVIDER__DRIVER: cockroachdb
SFTPGO_DATA_PROVIDER__NAME: sftpgo
SFTPGO_DATA_PROVIDER__HOST: localhost
SFTPGO_DATA_PROVIDER__PORT: 26257
SFTPGO_DATA_PROVIDER__USERNAME: root
SFTPGO_DATA_PROVIDER__PASSWORD:
build-linux-packages:
name: Build Linux packages
runs-on: ubuntu-18.04
strategy:
matrix:
include:
- arch: amd64
go: 1.17
go-arch: amd64
- arch: aarch64
distro: ubuntu18.04
go: latest
go-arch: arm64
- arch: ppc64le
distro: ubuntu18.04
go: latest
go-arch: ppc64le
- arch: armv7
distro: ubuntu18.04
go: latest
go-arch: arm7
steps:
- uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Set up Go
if: ${{ matrix.arch == 'amd64' }}
uses: actions/setup-go@v2
with:
go-version: ${{ matrix.go }}
- name: Build on amd64
if: ${{ matrix.arch == 'amd64' }}
run: |
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
mkdir -p output/{init,bash_completion,zsh_completion}
cp sftpgo.json output/
cp -r templates output/
cp -r static output/
cp -r openapi output/
cp init/sftpgo.service output/init/
./sftpgo gen completion bash > output/bash_completion/sftpgo
./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
./sftpgo gen man -d output/man/man1
gzip output/man/man1/*
cp sftpgo output/
- uses: uraimo/run-on-arch-action@v2.1.1
if: ${{ matrix.arch != 'amd64' }}
name: Build for ${{ matrix.arch }}
id: build
with:
arch: ${{ matrix.arch }}
distro: ${{ matrix.distro }}
setup: |
mkdir -p "${PWD}/output"
dockerRunArgs: |
--volume "${PWD}/output:/output"
shell: /bin/bash
install: |
apt-get update -q -y
apt-get install -q -y curl gcc git
if [ ${{ matrix.go }} == 'latest' ]
then
GO_VERSION=$(curl -L https://go.dev/VERSION?m=text)
else
GO_VERSION=${{ matrix.go }}
fi
GO_DOWNLOAD_ARCH=${{ matrix.go-arch }}
if [ ${{ matrix.arch}} == 'armv7' ]
then
GO_DOWNLOAD_ARCH=armv6l
fi
curl --retry 5 --retry-delay 2 --connect-timeout 10 -o go.tar.gz -L https://go.dev/dl/${GO_VERSION}.linux-${GO_DOWNLOAD_ARCH}.tar.gz
tar -C /usr/local -xzf go.tar.gz
run: |
export PATH=$PATH:/usr/local/go/bin
if [ ${{ matrix.arch}} == 'armv7' ]
then
export GOARM=7
fi
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
mkdir -p output/{init,bash_completion,zsh_completion}
cp sftpgo.json output/
cp -r templates output/
cp -r static output/
cp -r openapi output/
cp init/sftpgo.service output/init/
./sftpgo gen completion bash > output/bash_completion/sftpgo
./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
./sftpgo gen man -d output/man/man1
gzip output/man/man1/*
cp sftpgo output/
- name: Upload build artifact
uses: actions/upload-artifact@v2
with:
name: sftpgo-linux-${{ matrix.arch }}-go-${{ matrix.go }}
path: output
- name: Build Packages
id: build_linux_pkgs
run: |
export NFPM_ARCH=${{ matrix.go-arch }}
cd pkgs
./build.sh
PKG_VERSION=$(cat dist/version)
echo "::set-output name=pkg-version::${PKG_VERSION}"
- name: Upload Debian Package
uses: actions/upload-artifact@v2
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-${{ matrix.go-arch }}-deb
path: pkgs/dist/deb/*
- name: Upload RPM Package
uses: actions/upload-artifact@v2
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-${{ matrix.go-arch }}-rpm
path: pkgs/dist/rpm/*
golangci-lint:
name: golangci-lint
runs-on: ubuntu-latest
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v2
- name: Run golangci-lint
uses: golangci/golangci-lint-action@v2
with:
version: latest
version: v1.37.1


@@ -1,11 +1,7 @@
name: Docker
on:
#schedule:
# - cron: '0 4 * * *' # everyday at 4:00 AM UTC
push:
branches:
- main
tags:
- v*
pull_request:
@@ -17,21 +13,25 @@ jobs:
strategy:
matrix:
os:
- ubuntu-latest
- ubuntu-18.04
docker_pkg:
- debian
- alpine
optional_deps:
- true
- false
include:
- os: ubuntu-latest
docker_pkg: distroless
optional_deps: false
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Repo metadata
id: repo
uses: actions/github-script@v3
with:
script: |
const repo = await github.repos.get(context.repo)
return repo.data
- name: Gather image information
id: info
run: |
@@ -60,11 +60,8 @@ jobs:
VERSION="${VERSION}-alpine"
VERSION_SLIM="${VERSION}-slim"
DOCKERFILE=Dockerfile.alpine
elif [[ $DOCKER_PKG == distroless ]]; then
VERSION="${VERSION}-distroless"
VERSION_SLIM="${VERSION}-slim"
DOCKERFILE=Dockerfile.distroless
fi
DOCKER_IMAGES=("drakkan/sftpgo" "ghcr.io/drakkan/sftpgo")
TAGS="${DOCKER_IMAGES[0]}:${VERSION}"
TAGS_SLIM="${DOCKER_IMAGES[0]}:${VERSION_SLIM}"
@@ -82,13 +79,6 @@ jobs:
fi
TAGS="${TAGS},${DOCKER_IMAGE}:latest"
TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:slim"
elif [[ $DOCKER_PKG == distroless ]]; then
if [[ -n $MAJOR && -n $MINOR ]]; then
TAGS="${TAGS},${DOCKER_IMAGE}:${MINOR}-distroless,${DOCKER_IMAGE}:${MAJOR}-distroless"
TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:${MINOR}-distroless-slim,${DOCKER_IMAGE}:${MAJOR}-distroless-slim"
fi
TAGS="${TAGS},${DOCKER_IMAGE}:distroless"
TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:distroless-slim"
else
if [[ -n $MAJOR && -n $MINOR ]]; then
TAGS="${TAGS},${DOCKER_IMAGE}:${MINOR}-alpine,${DOCKER_IMAGE}:${MAJOR}-alpine"
@@ -135,13 +125,12 @@ jobs:
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
password: ${{ secrets.CR_PAT }}
if: ${{ github.event_name != 'pull_request' }}
- name: Build and push
uses: docker/build-push-action@v2
with:
context: .
builder: ${{ steps.builder.outputs.name }}
file: ./${{ steps.info.outputs.dockerfile }}
platforms: linux/amd64,linux/arm64,linux/ppc64le
@@ -153,10 +142,10 @@ jobs:
labels: |
org.opencontainers.image.title=SFTPGo
org.opencontainers.image.description=Fully featured and highly configurable SFTP server with optional FTP/S and WebDAV support
org.opencontainers.image.url=https://github.com/drakkan/sftpgo
org.opencontainers.image.documentation=https://github.com/drakkan/sftpgo/blob/${{ github.sha }}/docker/README.md
org.opencontainers.image.source=https://github.com/drakkan/sftpgo
org.opencontainers.image.url=${{ fromJson(steps.repo.outputs.result).html_url }}
org.opencontainers.image.documentation=${{ fromJson(steps.repo.outputs.result).html_url }}/blob/${{ github.sha }}/docker/README.md
org.opencontainers.image.source=${{ fromJson(steps.repo.outputs.result).html_url }}
org.opencontainers.image.version=${{ steps.info.outputs.version }}
org.opencontainers.image.created=${{ steps.info.outputs.created }}
org.opencontainers.image.revision=${{ github.sha }}
org.opencontainers.image.licenses=AGPL-3.0
org.opencontainers.image.licenses=${{ fromJson(steps.repo.outputs.result).license.spdx_id }}


@@ -5,7 +5,7 @@ on:
tags: 'v*'
env:
GO_VERSION: 1.17.3
GO_VERSION: 1.16.3
jobs:
prepare-sources-with-deps:
@@ -51,6 +51,21 @@ jobs:
with:
go-version: ${{ env.GO_VERSION }}
- name: Build for macOS
if: startsWith(matrix.os, 'windows-') != true
run: go build -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
- name: Build for Windows
if: startsWith(matrix.os, 'windows-')
run: |
$GIT_COMMIT = (git describe --always --dirty) | Out-String
$DATE_TIME = ([datetime]::Now.ToUniversalTime().toString("yyyy-MM-ddTHH:mm:ssZ")) | Out-String
go build -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/version.date=$DATE_TIME" -o sftpgo.exe
- name: Initialize data provider
run: ./sftpgo initprovider
shell: bash
- name: Get SFTPGo version
id: get_version
run: echo ::set-output name=VERSION::${GITHUB_REF/refs\/tags\//}
@@ -69,30 +84,6 @@ jobs:
env:
MATRIX_OS: ${{ matrix.os }}
- name: Build for macOS x86_64
if: startsWith(matrix.os, 'windows-') != true
run: go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
- name: Build for macOS arm64
if: startsWith(matrix.os, 'macos-') == true
run: CGO_ENABLED=1 GOOS=darwin GOARCH=arm64 SDKROOT=$(xcrun --sdk macosx --show-sdk-path) go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo_arm64
- name: Build for Windows
if: startsWith(matrix.os, 'windows-')
run: |
$GIT_COMMIT = (git describe --always --dirty) | Out-String
$DATE_TIME = ([datetime]::Now.ToUniversalTime().toString("yyyy-MM-ddTHH:mm:ssZ")) | Out-String
$FILE_VERSION = $Env:SFTPGO_VERSION.substring(1) + ".0"
go install github.com/tc-hib/go-winres@latest
go-winres simply --arch amd64 --product-version $Env:SFTPGO_VERSION-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o sftpgo.exe
env:
SFTPGO_VERSION: ${{ steps.get_version.outputs.VERSION }}
- name: Initialize data provider
run: ./sftpgo initprovider
shell: bash
- name: Prepare Release for macOS
if: startsWith(matrix.os, 'macos-')
run: |
@@ -105,7 +96,6 @@ jobs:
cp sftpgo.json output/
cp sftpgo.db output/sqlite/
cp -r static output/
cp -r openapi output/
cp -r templates output/
cp init/com.github.drakkan.sftpgo.plist output/init/
./sftpgo gen completion bash > output/bash_completion/sftpgo
@@ -113,11 +103,7 @@ jobs:
./sftpgo gen man -d output/man/man1
gzip output/man/man1/*
cd output
tar cJvf ../sftpgo_${SFTPGO_VERSION}_${OS}_x86_64.tar.xz *
cd ..
cp sftpgo_arm64 output/sftpgo
cd output
tar cJvf ../sftpgo_${SFTPGO_VERSION}_${OS}_arm64.tar.xz *
tar cJvf sftpgo_${SFTPGO_VERSION}_${OS}_x86_64.tar.xz *
cd ..
env:
SFTPGO_VERSION: ${{ steps.get_version.outputs.VERSION }}
@@ -135,21 +121,10 @@ jobs:
xcopy .\templates .\output\templates\ /E
mkdir output\static
xcopy .\static .\output\static\ /E
mkdir output\openapi
xcopy .\openapi .\output\openapi\ /E
$CERT_PATH=(Get-Location -PSProvider FileSystem).ProviderPath + "\cert.pfx"
[IO.File]::WriteAllBytes($CERT_PATH,[System.Convert]::FromBase64String($Env:CERT_DATA))
certutil -f -p "$Env:CERT_PASS" -importpfx MY "$CERT_PATH"
rm "$CERT_PATH"
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.17763.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\sftpgo.exe
$INNO_S='/Ssigntool=$qC:/Program Files (x86)/Windows Kits/10/bin/10.0.17763.0/x86/signtool.exe$q sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n $qNicola Murino$q /d $qSFTPGo$q $f'
iscc "$INNO_S" .\windows-installer\sftpgo.iss
certutil -delstore MY "Nicola Murino"
iscc windows-installer\sftpgo.iss
env:
SFTPGO_ISS_VERSION: ${{ steps.get_version.outputs.VERSION }}
SFTPGO_ISS_DOC_URL: https://github.com/drakkan/sftpgo/blob/${{ steps.get_version.outputs.VERSION }}/README.md
CERT_DATA: ${{ secrets.CERT_DATA }}
CERT_PASS: ${{ secrets.CERT_PASS }}
- name: Prepare Portable Release for Windows
if: startsWith(matrix.os, 'windows-')
@@ -163,24 +138,17 @@ jobs:
xcopy .\templates .\win-portable\templates\ /E
mkdir win-portable\static
xcopy .\static .\win-portable\static\ /E
mkdir win-portable\openapi
xcopy .\openapi .\win-portable\openapi\ /E
Compress-Archive .\win-portable\* sftpgo_portable_x86_64.zip
env:
SFTPGO_VERSION: ${{ steps.get_version.outputs.VERSION }}
OS: ${{ steps.get_os_name.outputs.OS }}
- name: Upload macOS x86_64 artifact
- name: Upload macOS artifact
if: startsWith(matrix.os, 'macos-')
uses: actions/upload-artifact@v2
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.tar.xz
path: ./sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.tar.xz
retention-days: 1
- name: Upload macOS arm64 artifact
if: startsWith(matrix.os, 'macos-')
uses: actions/upload-artifact@v2
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_arm64.tar.xz
path: ./sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_arm64.tar.xz
path: ./output/sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.tar.xz
retention-days: 1
- name: Upload Windows installer artifact
@@ -222,12 +190,6 @@ jobs:
deb-arch: ppc64el
rpm-arch: ppc64le
tar-arch: ppc64le
- arch: armv7
distro: ubuntu18.04
go-arch: arm7
deb-arch: armhf
rpm-arch: armv7hl
tar-arch: armv7
steps:
- uses: actions/checkout@v2
@@ -249,7 +211,7 @@ jobs:
- name: Build on amd64
if: ${{ matrix.arch == 'amd64' }}
run: |
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
go build -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
mkdir -p output/{init,sqlite,bash_completion,zsh_completion}
echo "For documentation please take a look here:" > output/README.txt
echo "" >> output/README.txt
@@ -258,7 +220,6 @@ jobs:
cp sftpgo.json output/
cp -r templates output/
cp -r static output/
cp -r openapi output/
cp init/sftpgo.service output/init/
./sftpgo initprovider
./sftpgo gen completion bash > output/bash_completion/sftpgo
@@ -273,7 +234,7 @@ jobs:
env:
SFTPGO_VERSION: ${{ steps.get_version.outputs.SFTPGO_VERSION }}
- uses: uraimo/run-on-arch-action@v2.1.1
- uses: uraimo/run-on-arch-action@v2.0.9
if: ${{ matrix.arch != 'amd64' }}
name: Build for ${{ matrix.arch }}
id: build
@@ -288,16 +249,11 @@ jobs:
install: |
apt-get update -q -y
apt-get install -q -y curl gcc git xz-utils
GO_DOWNLOAD_ARCH=${{ matrix.go-arch }}
if [ ${{ matrix.arch}} == 'armv7' ]
then
GO_DOWNLOAD_ARCH=armv6l
fi
curl --retry 5 --retry-delay 2 --connect-timeout 10 -o go.tar.gz -L https://go.dev/dl/go${{ steps.get_version.outputs.GO_VERSION }}.linux-${GO_DOWNLOAD_ARCH}.tar.gz
curl --retry 5 --retry-delay 2 --connect-timeout 10 -o go.tar.gz -L https://golang.org/dl/go${{ steps.get_version.outputs.GO_VERSION }}.linux-${{ matrix.go-arch }}.tar.gz
tar -C /usr/local -xzf go.tar.gz
run: |
export PATH=$PATH:/usr/local/go/bin
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
go build -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
mkdir -p output/{init,sqlite,bash_completion,zsh_completion}
echo "For documentation please take a look here:" > output/README.txt
echo "" >> output/README.txt
@@ -306,7 +262,6 @@ jobs:
cp sftpgo.json output/
cp -r templates output/
cp -r static output/
cp -r openapi output/
cp init/sftpgo.service output/init/
./sftpgo initprovider
./sftpgo gen completion bash > output/bash_completion/sftpgo
@@ -378,23 +333,16 @@ jobs:
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_ppc64le.tar.xz
- name: Download armv7 artifact
uses: actions/download-artifact@v2
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_armv7.tar.xz
- name: Build bundle
shell: bash
run: |
mkdir -p bundle/{arm64,ppc64le,armv7}
mkdir -p bundle/{arm64,ppc64le}
cd bundle
tar xvf ../sftpgo_${SFTPGO_VERSION}_linux_x86_64.tar.xz
cd arm64
tar xvf ../../sftpgo_${SFTPGO_VERSION}_linux_arm64.tar.xz sftpgo
cd ../ppc64le
tar xvf ../../sftpgo_${SFTPGO_VERSION}_linux_ppc64le.tar.xz sftpgo
cd ../armv7
tar xvf ../../sftpgo_${SFTPGO_VERSION}_linux_armv7.tar.xz sftpgo
cd ..
tar cJvf sftpgo_${SFTPGO_VERSION}_linux_bundle.tar.xz *
cd ..
@@ -439,11 +387,6 @@ jobs:
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_ppc64le.tar.xz
- name: Download armv7 artifact
uses: actions/download-artifact@v2
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_armv7.tar.xz
- name: Download Linux bundle artifact
uses: actions/download-artifact@v2
with:
@@ -464,11 +407,6 @@ jobs:
with:
name: sftpgo_${{ steps.get_version.outputs.PKG_VERSION }}-1_ppc64el.deb
- name: Download Deb armv7 artifact
uses: actions/download-artifact@v2
with:
name: sftpgo_${{ steps.get_version.outputs.PKG_VERSION }}-1_armhf.deb
- name: Download RPM x86_64 artifact
uses: actions/download-artifact@v2
with:
@@ -484,21 +422,11 @@ jobs:
with:
name: sftpgo-${{ steps.get_version.outputs.PKG_VERSION }}-1.ppc64le.rpm
- name: Download RPM armv7 artifact
uses: actions/download-artifact@v2
with:
name: sftpgo-${{ steps.get_version.outputs.PKG_VERSION }}-1.armv7hl.rpm
- name: Download macOS x86_64 artifact
uses: actions/download-artifact@v2
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_macOS_x86_64.tar.xz
- name: Download macOS arm64 artifact
uses: actions/download-artifact@v2
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_macOS_arm64.tar.xz
- name: Download Windows installer x86_64 artifact
uses: actions/download-artifact@v2
with:
@@ -518,7 +446,7 @@ jobs:
run: |
mv sftpgo_windows_x86_64.exe sftpgo_${SFTPGO_VERSION}_windows_x86_64.exe
mv sftpgo_portable_x86_64.zip sftpgo_${SFTPGO_VERSION}_windows_portable_x86_64.zip
gh release create "${SFTPGO_VERSION}" -t "${SFTPGO_VERSION}"
gh release create "${SFTPGO_VERSION}"
gh release upload "${SFTPGO_VERSION}" sftpgo_*.xz --clobber
gh release upload "${SFTPGO_VERSION}" sftpgo-*.rpm --clobber
gh release upload "${SFTPGO_VERSION}" sftpgo_*.deb --clobber


@@ -19,11 +19,8 @@ linters-settings:
simplify: true
goimports:
local-prefixes: github.com/drakkan/sftpgo
#govet:
# report about shadowed variables
#check-shadowing: true
#enable:
# - fieldalignment
maligned:
suggest-new: true
linters:
enable:
@@ -31,14 +28,15 @@ linters:
- errcheck
- gofmt
- goimports
- revive
- golint
- unconvert
- unparam
- bodyclose
- gocyclo
- misspell
- maligned
- whitespace
- dupl
- scopelint
- rowserrcheck
- dogsled
- govet
- dogsled


@@ -1,4 +1,4 @@
FROM golang:1.17-bullseye as builder
FROM golang:1.16-buster as builder
ENV GOFLAGS="-mod=readonly"
@@ -21,16 +21,16 @@ COPY . .
RUN set -xe && \
export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --dirty)} && \
go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -v -o sftpgo
go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -v -o sftpgo
FROM debian:bullseye-slim
FROM debian:buster-slim
# Set to "true" to install the optional git dependency
# Set to "true" to install the optional git and rsync dependencies
ARG INSTALL_OPTIONAL_PACKAGES=false
RUN apt-get update && apt-get install --no-install-recommends -y ca-certificates media-types && rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install --no-install-recommends -y ca-certificates mime-support && rm -rf /var/lib/apt/lists/*
RUN if [ "${INSTALL_OPTIONAL_PACKAGES}" = "true" ]; then apt-get update && apt-get install --no-install-recommends -y git && rm -rf /var/lib/apt/lists/*; fi
RUN if [ "${INSTALL_OPTIONAL_PACKAGES}" = "true" ]; then apt-get update && apt-get install --no-install-recommends -y git rsync && rm -rf /var/lib/apt/lists/*; fi
RUN mkdir -p /etc/sftpgo /var/lib/sftpgo /usr/share/sftpgo /srv/sftpgo/data /srv/sftpgo/backups
@@ -42,20 +42,18 @@ RUN groupadd --system -g 1000 sftpgo && \
COPY --from=builder /workspace/sftpgo.json /etc/sftpgo/sftpgo.json
COPY --from=builder /workspace/templates /usr/share/sftpgo/templates
COPY --from=builder /workspace/static /usr/share/sftpgo/static
COPY --from=builder /workspace/openapi /usr/share/sftpgo/openapi
COPY --from=builder /workspace/sftpgo /usr/local/bin/
# Log to the stdout so the logs will be available using docker logs
ENV SFTPGO_LOG_FILE_PATH=""
# templates and static paths are inside the container
ENV SFTPGO_HTTPD__TEMPLATES_PATH=/usr/share/sftpgo/templates
ENV SFTPGO_SMTP__TEMPLATES_PATH=/usr/share/sftpgo/templates
ENV SFTPGO_HTTPD__STATIC_FILES_PATH=/usr/share/sftpgo/static
ENV SFTPGO_HTTPD__OPENAPI_PATH=/usr/share/sftpgo/openapi
# Modify the default configuration file
RUN sed -i "s|\"users_base_dir\": \"\",|\"users_base_dir\": \"/srv/sftpgo/data\",|" /etc/sftpgo/sftpgo.json && \
sed -i "s|\"backups\"|\"/srv/sftpgo/backups\"|" /etc/sftpgo/sftpgo.json
sed -i "s|\"backups\"|\"/srv/sftpgo/backups\"|" /etc/sftpgo/sftpgo.json && \
sed -i "s|\"address\": \"127.0.0.1\",|\"address\": \"\",|" /etc/sftpgo/sftpgo.json
RUN chown -R sftpgo:sftpgo /etc/sftpgo /srv/sftpgo && chown sftpgo:sftpgo /var/lib/sftpgo && chmod 700 /srv/sftpgo/backups
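
The `-ldflags "-s -w -X …/version.commit=… -X …/version.date=…"` incantation repeated throughout the workflows and Dockerfiles above injects build metadata into package-level string variables at link time. A minimal sketch of the consuming side, assuming a `version` package with plain (non-constant) string variables; the names and module path are illustrative:

```go
// Package version: variables meant to be overridden at link time, e.g.
//   go build -ldflags "-X example.com/app/version.commit=$(git describe --always)"
// Note that -X only works on package-level string variables, not constants.
package version

import "fmt"

var (
	version = "2.0.4" // base version kept in source
	commit  = ""      // injected via -X .../version.commit=<sha>
	date    = ""      // injected via -X .../version.date=<timestamp>
)

// GetAsString is illustrative; it assembles the full version string.
func GetAsString() string {
	return fmt.Sprintf("%v-%v-%v", version, commit, date)
}
```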


@@ -1,4 +1,4 @@
FROM golang:1.17-alpine3.14 AS builder
FROM golang:1.16-alpine3.12 AS builder
ENV GOFLAGS="-mod=readonly"
@@ -23,17 +23,17 @@ COPY . .
RUN set -xe && \
export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --dirty)} && \
go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -v -o sftpgo
go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -v -o sftpgo
FROM alpine:3.14
FROM alpine:3.12
# Set to "true" to install the optional git dependency
# Set to "true" to install the optional git and rsync dependencies
ARG INSTALL_OPTIONAL_PACKAGES=false
RUN apk add --update --no-cache ca-certificates tzdata mailcap
RUN if [ "${INSTALL_OPTIONAL_PACKAGES}" = "true" ]; then apk add --update --no-cache git; fi
RUN if [ "${INSTALL_OPTIONAL_PACKAGES}" = "true" ]; then apk add --update --no-cache rsync git; fi
# set up nsswitch.conf for Go's "netgo" implementation
# https://github.com/gliderlabs/docker-alpine/issues/367#issuecomment-424546457
@@ -47,20 +47,18 @@ RUN addgroup -g 1000 -S sftpgo && \
COPY --from=builder /workspace/sftpgo.json /etc/sftpgo/sftpgo.json
COPY --from=builder /workspace/templates /usr/share/sftpgo/templates
COPY --from=builder /workspace/static /usr/share/sftpgo/static
COPY --from=builder /workspace/openapi /usr/share/sftpgo/openapi
COPY --from=builder /workspace/sftpgo /usr/local/bin/
# Log to the stdout so the logs will be available using docker logs
ENV SFTPGO_LOG_FILE_PATH=""
# templates and static paths are inside the container
ENV SFTPGO_HTTPD__TEMPLATES_PATH=/usr/share/sftpgo/templates
ENV SFTPGO_SMTP__TEMPLATES_PATH=/usr/share/sftpgo/templates
ENV SFTPGO_HTTPD__STATIC_FILES_PATH=/usr/share/sftpgo/static
ENV SFTPGO_HTTPD__OPENAPI_PATH=/usr/share/sftpgo/openapi
# Modify the default configuration file
RUN sed -i "s|\"users_base_dir\": \"\",|\"users_base_dir\": \"/srv/sftpgo/data\",|" /etc/sftpgo/sftpgo.json && \
sed -i "s|\"backups\"|\"/srv/sftpgo/backups\"|" /etc/sftpgo/sftpgo.json
sed -i "s|\"backups\"|\"/srv/sftpgo/backups\"|" /etc/sftpgo/sftpgo.json && \
sed -i "s|\"address\": \"127.0.0.1\",|\"address\": \"\",|" /etc/sftpgo/sftpgo.json
RUN chown -R sftpgo:sftpgo /etc/sftpgo /srv/sftpgo && chown sftpgo:sftpgo /var/lib/sftpgo && chmod 700 /srv/sftpgo/backups


@@ -1,62 +0,0 @@
FROM golang:1.17-bullseye as builder
ENV CGO_ENABLED=0 GOFLAGS="-mod=readonly"
RUN mkdir -p /workspace
WORKDIR /workspace
ARG GOPROXY
COPY go.mod go.sum ./
RUN go mod download
ARG COMMIT_SHA
# This ARG allows disabling some optional features and might be useful if you build the image yourself.
# For this variant we disable SQLite support since it requires CGO and so a C runtime which is not installed
# in distroless/static-* images
ARG FEATURES=nosqlite
COPY . .
RUN set -xe && \
export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --dirty)} && \
go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -v -o sftpgo
# Modify the default configuration file
RUN sed -i "s|\"users_base_dir\": \"\",|\"users_base_dir\": \"/srv/sftpgo/data\",|" sftpgo.json && \
sed -i "s|\"backups\"|\"/srv/sftpgo/backups\"|" sftpgo.json && \
sed -i "s|\"sqlite\"|\"bolt\"|" sftpgo.json
RUN apt-get update && apt-get install --no-install-recommends -y media-types && rm -rf /var/lib/apt/lists/*
RUN mkdir /etc/sftpgo /var/lib/sftpgo /srv/sftpgo
FROM gcr.io/distroless/static-debian11
COPY --from=builder --chown=1000:1000 /etc/sftpgo /etc/sftpgo
COPY --from=builder --chown=1000:1000 /srv/sftpgo /srv/sftpgo
COPY --from=builder --chown=1000:1000 /var/lib/sftpgo /var/lib/sftpgo
COPY --from=builder --chown=1000:1000 /workspace/sftpgo.json /etc/sftpgo/sftpgo.json
COPY --from=builder /workspace/templates /usr/share/sftpgo/templates
COPY --from=builder /workspace/static /usr/share/sftpgo/static
COPY --from=builder /workspace/openapi /usr/share/sftpgo/openapi
COPY --from=builder /workspace/sftpgo /usr/local/bin/
COPY --from=builder /etc/mime.types /etc/mime.types
# Log to the stdout so the logs will be available using docker logs
ENV SFTPGO_LOG_FILE_PATH=""
# templates and static paths are inside the container
ENV SFTPGO_HTTPD__TEMPLATES_PATH=/usr/share/sftpgo/templates
ENV SFTPGO_SMTP__TEMPLATES_PATH=/usr/share/sftpgo/templates
ENV SFTPGO_HTTPD__STATIC_FILES_PATH=/usr/share/sftpgo/static
ENV SFTPGO_HTTPD__OPENAPI_PATH=/usr/share/sftpgo/openapi
# These env vars are required to avoid the following error when calling user.Current():
# unable to get the current user: user: Current requires cgo or $USER set in environment
ENV USER=sftpgo
ENV HOME=/var/lib/sftpgo
WORKDIR /var/lib/sftpgo
USER 1000:1000
CMD ["sftpgo", "serve"]
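
The removed distroless Dockerfile above builds with `FEATURES=nosqlite`, which becomes `go build -tags nosqlite`. The usual way such a tag disables a CGO-dependent driver is a Go build constraint; a hedged sketch of the pattern (file layout, package and driver import are illustrative, not necessarily how SFTPGo wires it):

```go
// provider_sqlite.go: compiled only when the "nosqlite" tag is absent, so
// `go build -tags nosqlite` produces a binary without the CGO-based driver.
// (Go 1.16-era "+build" syntax; newer Go also accepts //go:build lines.)

// +build !nosqlite

package dataprovider

import (
	// Blank import registers the SQLite driver with database/sql; the
	// import path is the common CGO driver, assumed here for illustration.
	_ "github.com/mattn/go-sqlite3"
)
```

Likewise, the `USER` and `HOME` environment variables in that Dockerfile work around a documented `os/user` limitation: with `CGO_ENABLED=0` there is no libc to query, so `user.Current()` falls back to passwd parsing and then to the environment. A tiny self-contained sketch of that behaviour (illustrative only):

```go
// Build with CGO_ENABLED=0 and run where the uid has no passwd entry and
// USER is unset (as in a distroless image) to reproduce the quoted error.
package main

import (
	"fmt"
	"os/user"
)

func main() {
	u, err := user.Current()
	if err != nil {
		// When neither a passwd entry nor $USER is available, this returns:
		// "user: Current requires cgo or $USER set in environment"
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("running as:", u.Username, "home:", u.HomeDir)
}
```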

README.md

@@ -2,59 +2,56 @@
![CI Status](https://github.com/drakkan/sftpgo/workflows/CI/badge.svg?branch=main&event=push)
[![Code Coverage](https://codecov.io/gh/drakkan/sftpgo/branch/main/graph/badge.svg)](https://codecov.io/gh/drakkan/sftpgo/branch/main)
[![Go Report Card](https://goreportcard.com/badge/github.com/drakkan/sftpgo)](https://goreportcard.com/report/github.com/drakkan/sftpgo)
[![License: AGPL v3](https://img.shields.io/badge/License-AGPLv3-blue.svg)](https://www.gnu.org/licenses/agpl-3.0)
[![Docker Pulls](https://img.shields.io/docker/pulls/drakkan/sftpgo)](https://hub.docker.com/r/drakkan/sftpgo)
[![Mentioned in Awesome Go](https://awesome.re/mentioned-badge.svg)](https://github.com/avelino/awesome-go)
Fully featured and highly configurable SFTP server with optional HTTP, FTP/S and WebDAV support, written in Go.
Fully featured and highly configurable SFTP server with optional FTP/S and WebDAV support, written in Go.
Several storage backends are supported: local filesystem, encrypted local filesystem, S3 (compatible) Object Storage, Google Cloud Storage, Azure Blob Storage, SFTP.
## Features
- Support for serving local filesystem, encrypted local filesystem, S3 Compatible Object Storage, Google Cloud Storage, Azure Blob Storage or other SFTP accounts over SFTP/SCP/FTP/WebDAV.
- Virtual folders are supported: a virtual folder can use any of the supported storage backends. So you can have, for example, an S3 user that exposes a GCS bucket (or part of it) on a specified path and an encrypted local filesystem on another one. Virtual folders can be private or shared among multiple users, for shared virtual folders you can define different quota limits for each user.
- Configurable [custom commands and/or HTTP hooks](./docs/custom-actions.md) on file upload, pre-upload, download, pre-download, delete, pre-delete, rename, mkdir, rmdir, on SSH commands and on user add, update and delete.
- Virtual accounts stored within a "data provider".
- SQLite, MySQL, PostgreSQL, CockroachDB, Bolt (key/value store in pure Go) and in-memory data providers are supported.
- Chroot isolation for local accounts. Cloud-based accounts can be restricted to a certain base path.
- Per user and per directory virtual permissions, for each exposed path you can allow or deny: directory listing, upload, overwrite, download, delete, rename, create directories, create symlinks, change owner/group/file mode.
- [REST API](./docs/rest-api.md) for users and folders management, data retention, backup, restore and real time reports of the active connections with possibility of forcibly closing a connection.
- [Web based administration interface](./docs/web-admin.md) to easily manage users, folders and connections.
- [Web client interface](./docs/web-client.md) so that end users can change their credentials, manage and share their files.
- SFTPGo uses virtual accounts stored inside a "data provider".
- SQLite, MySQL, PostgreSQL, bbolt (key/value store in pure Go) and in-memory data providers are supported.
- Each local account is chrooted in its home directory, for cloud-based accounts you can restrict access to a certain base path.
- Public key and password authentication. Multiple public keys per user are supported.
- SSH user [certificate authentication](https://cvsweb.openbsd.org/src/usr.bin/ssh/PROTOCOL.certkeys?rev=1.8).
- Keyboard interactive authentication. You can easily setup a customizable multi-factor authentication.
- Partial authentication. You can configure multi-step authentication requiring, for example, the user password after successful public key authentication.
- Per user authentication methods.
- Two-factor authentication based on time-based one time passwords (RFC 6238) which works with Authy, Google Authenticator and other compatible apps.
- Custom authentication via external programs/HTTP API.
- [Data At Rest Encryption](./docs/dare.md).
- Dynamic user modification before login via external programs/HTTP API.
- Per user authentication methods. You can configure the allowed authentication methods for each user.
- Custom authentication via external programs/HTTP API is supported.
- [Data At Rest Encryption](./docs/dare.md) is supported.
- Dynamic user modification before login via external programs/HTTP API is supported.
- Quota support: accounts can have individual quota expressed as max total size and/or max number of files.
- Bandwidth throttling, with distinct settings for upload and download.
- Per-protocol [rate limiting](./docs/rate-limiting.md) is supported and can be optionally connected to the built-in defender to automatically block hosts that repeatedly exceed the configured limit.
- Bandwidth throttling is supported, with distinct settings for upload and download.
- Per user maximum concurrent sessions.
- Per user and global IP filters: login can be restricted to specific ranges of IP addresses or to a specific IP address.
- Per user and per directory shell like patterns filters: files can be allowed or denied based on shell like patterns.
- Automatically terminating idle connections.
- Automatic blocklist management using the built-in [defender](./docs/defender.md).
- Atomic uploads are configurable.
- Per user and per directory permission management: list directory contents, upload, overwrite, download, delete, rename, create directories, create symlinks, change owner/group and mode, change access and modification times.
- Per user files/folders ownership mapping: you can map all the users to the system account that runs SFTPGo (all platforms are supported) or you can run SFTPGo as root user and map each user or group of users to a different system account (\*NIX only).
- Per user IP filters are supported: login can be restricted to specific ranges of IP addresses or to a specific IP address.
- Per user and per directory shell like patterns filters are supported: files can be allowed or denied based on shell like patterns.
- Virtual folders are supported: directories outside the user home directory can be exposed as virtual folders.
- Configurable custom commands and/or HTTP notifications on file upload, download, pre-delete, delete, rename, on SSH commands and on user add, update and delete.
- Automatically terminating idle connections.
- Automatic blocklist management is supported using the built-in [defender](./docs/defender.md).
- Atomic uploads are configurable.
- Support for Git repositories over SSH.
- SCP and rsync are supported.
- FTP/S is supported. You can configure the FTP service to require TLS for both control and data connections.
- [WebDAV](./docs/webdav.md) is supported.
- Two-Way TLS authentication, aka TLS with client certificate authentication, is supported for REST API/Web Admin, FTPS and WebDAV over HTTPS.
- Support for serving local filesystem, encrypted local filesystem, S3 Compatible Object Storage, Google Cloud Storage, Azure Blob Storage or other SFTP accounts over SFTP/SCP/FTP/WebDAV.
- Per user protocols restrictions. You can configure the allowed protocols (SSH/FTP/WebDAV) for each user.
- [Prometheus metrics](./docs/metrics.md) are exposed.
- Support for HAProxy PROXY protocol: you can proxy and/or load balance the SFTP/SCP/FTP/WebDAV service without losing the information about the client's address.
- [REST API](./docs/rest-api.md) for users and folders management, backup, restore and real time reports of the active connections with possibility of forcibly closing a connection.
- [Web based administration interface](./docs/web-admin.md) to easily manage users, folders and connections.
- Easy [migration](./examples/convertusers) from Linux system user accounts.
- [Portable mode](./docs/portable-mode.md): a convenient way to share a single directory on demand.
- [SFTP subsystem mode](./docs/sftp-subsystem.md): you can use SFTPGo as OpenSSH's SFTP subsystem.
- Performance analysis using built-in [profiler](./docs/profiling.md).
- Configuration format is at your choice: JSON, TOML, YAML, HCL, envfile are supported.
- Log files are accurate and they are saved in the easily parsable JSON format ([more information](./docs/logs.md)).
- SFTPGo supports a [plugin system](./docs/plugins.md) and therefore can be extended using external plugins.
## Platforms
@@ -62,8 +59,8 @@ SFTPGo is developed and tested on Linux. After each commit, the code is automati
## Requirements
- Go as build only dependency. We support the Go version(s) used in [continuous integration workflows](./tree/main/.github/workflows).
- A suitable SQL server to use as data provider: PostgreSQL 9.4+ or MySQL 5.6+ or SQLite 3.x or CockroachDB stable.
- Go 1.15 or higher as build only dependency.
- A suitable SQL server to use as data provider: PostgreSQL 9.4+ or MySQL 5.6+ or SQLite 3.x.
- The SQL server is optional: you can choose to use an embedded bolt database as key/value store or an in memory data provider.
## Installation
@@ -81,22 +78,10 @@ Some Linux distro packages are available:
- Deb and RPM packages are built after each commit and for each release.
- For Ubuntu a PPA is available [here](https://launchpad.net/~sftpgo/+archive/ubuntu/sftpgo).
SFTPGo is also available on [AWS Marketplace](https://aws.amazon.com/marketplace/seller-profile?id=6e849ab8-70a6-47de-9a43-13c3fa849335), purchasing from there will help keep SFTPGo a long-term sustainable project.
On FreeBSD you can install from the [SFTPGo port](https://www.freshports.org/ftp/sftpgo).
On Windows you can use:
- The Windows installer to install and run SFTPGo as a Windows service.
- The portable package to start SFTPGo on demand.
- The [Chocolatey package](https://community.chocolatey.org/packages/sftpgo) to install and run SFTPGo as a Windows service.
You can easily test new features by selecting a commit from the [Actions](https://github.com/drakkan/sftpgo/actions) page and downloading the matching build artifacts for Linux, macOS or Windows. GitHub stores artifacts for 90 days.
Alternately, you can [build from source](./docs/build-from-source.md).
[Getting Started Guide for the Impatient](./docs/howto/getting-started.md).
## Configuration
A full explanation of all configuration methods can be found [here](./docs/full-configuration.md).
@@ -115,7 +100,7 @@ Check out [this documentation](./docs/service.md) if you want to run SFTPGo as a
Before starting the SFTPGo server please ensure that the configured data provider is properly initialized/updated.
For PostgreSQL, MySQL and CockroachDB providers, you need to create the configured database. For SQLite, the configured database will be automatically created at startup. Memory and bolt data providers do not require an initialization but they could require an update to the existing data after upgrading SFTPGo.
For PostgreSQL and MySQL providers, you need to create the configured database. For SQLite, the configured database will be automatically created at startup. Memory and bolt data providers do not require an initialization but they could require an update to the existing data after upgrading SFTPGo.
SFTPGo will attempt to automatically detect if the data provider is initialized/updated and, if not, will attempt to initialize/update it on startup as needed.
@@ -135,56 +120,27 @@ sftpgo initprovider --help
You can disable automatic data provider checks/updates at startup by setting the `update_mode` configuration key to `1`.
You can also reset your provider by using the `resetprovider` sub-command. Take a look at the CLI usage for more details:
```bash
sftpgo resetprovider --help
```
## Create the first admin
To start using SFTPGo you need to create an admin user. You can do it in several ways:
- by using the web admin interface. The default URL is [http://127.0.0.1:8080/web/admin](http://127.0.0.1:8080/web/admin)
- by loading initial data
- by enabling `create_default_admin` in your configuration file and setting the environment variables `SFTPGO_DEFAULT_ADMIN_USERNAME` and `SFTPGO_DEFAULT_ADMIN_PASSWORD`
## Upgrading
SFTPGo supports upgrading from the previous release branch to the current one.
Some examples for supported upgrade paths are:
- from 1.2.x to 2.0.x
- from 2.0.x to 2.1.x and so on.
For supported upgrade paths, the data and schema are migrated automatically; alternatively, you can use the `initprovider` command.
So if, for example, you want to upgrade from a version before 1.2.x to 2.0.x, you must first install version 1.2.x, update the data provider and finally install version 2.0.x. It is recommended to always install the latest available minor version, i.e. do not install 1.2.0 if 1.2.2 is available.
Loading data from a provider independent JSON dump is supported from the previous release branch to the current one too. After upgrading SFTPGo it is advisable to regenerate the JSON dump from the new version.
## Downgrading
If for some reason you want to downgrade SFTPGo, you may need to downgrade your data provider schema and data as well. You can use the `revertprovider` command for this task.
As for upgrading, SFTPGo supports downgrading from the previous release branch to the current one.
We support the following schema versions:
So, if you plan to downgrade from 2.0.x to 1.2.x, before uninstalling the 2.0.x version, you can prepare your data provider by executing the following command from the configuration directory:
- `8`, this is the latest version
- `4`, this is the schema for v1.0.0-v1.2.x
So, if you plan to downgrade from 2.0.x to 1.2.x, you can prepare your data provider executing the following command from the configuration directory:
```shell
sftpgo revertprovider --to-version 4
```
Take a look at the CLI usage to see the supported parameter for the `--to-version` argument and to learn how to specify a different configuration file:
Take a look at the CLI usage to learn how to specify a different configuration file:
```shell
```bash
sftpgo revertprovider --help
```
The `revertprovider` command is not supported for the memory provider.
Please note that we only support the current release branch and the current main branch; if you find a bug it is better to report it rather than downgrading to an older unsupported version.
## Users and folders management
After starting SFTPGo you can manage users and folders using:
@@ -194,8 +150,6 @@ After starting SFTPGo you can manage users and folders using:
To support embedded data providers like `bolt` and `SQLite` we can't have a CLI that directly writes users and folders to the data provider; we always have to use the REST API.
Full details for users, folders, admins and other resources are documented in the [OpenAPI](/openapi/openapi.yaml) schema. If you want to render the schema without importing it manually, you can explore it on [Stoplight](https://sftpgo.stoplight.io/docs/sftpgo/openapi.yaml).
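
A hedged sketch of what going through the REST API looks like in practice. The endpoint paths (`/api/v2/token` exchanging admin basic auth for a JWT, `/api/v2/users` for user creation) and the payload fields are assumptions to be checked against the OpenAPI schema linked above:

```go
// Create a user via the HTTP API instead of touching the data provider.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	base := "http://127.0.0.1:8080" // default HTTPD binding, adjust as needed

	// 1) Obtain a JWT using admin credentials (flow assumed from the schema).
	req, _ := http.NewRequest(http.MethodGet, base+"/api/v2/token", nil)
	req.SetBasicAuth("admin", "password")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		AccessToken string `json:"access_token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// 2) Create the user with the bearer token; the fields are illustrative.
	user := map[string]interface{}{
		"username":    "user1",
		"password":    "secret",
		"home_dir":    "/srv/sftpgo/data/user1",
		"status":      1,
		"permissions": map[string][]string{"/": {"*"}},
	}
	payload, _ := json.Marshal(user)
	create, _ := http.NewRequest(http.MethodPost, base+"/api/v2/users", bytes.NewReader(payload))
	create.Header.Set("Authorization", "Bearer "+tok.AccessToken)
	create.Header.Set("Content-Type", "application/json")
	resp2, err := http.DefaultClient.Do(create)
	if err != nil {
		panic(err)
	}
	defer resp2.Body.Close()
	fmt.Println("create user:", resp2.Status)
}
```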
## Tutorials
Some step-to-step tutorials can be found inside the source tree [howto](./docs/howto "How-to") directory.
@@ -219,13 +173,13 @@ A user can be created or modified by an external program just before the login.
## Custom Actions
SFTPGo allows you to configure custom commands and/or HTTP hooks to receive notifications about file uploads, deletions and several other events.
SFTPGo allows to configure custom commands and/or HTTP notifications on file upload, download, delete, rename, on SSH commands and on user add, update and delete.
More information about custom actions can be found [here](./docs/custom-actions.md).
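As a minimal sketch, a command hook can read the notification from the `SFTPGO_ACTION_*` environment variables (see `notificationAsEnvVars` in the diff below); the log path is illustrative and the `SFTPGO_ACTION` variable carrying the action name is an assumption:

```shell
#!/bin/sh
# append each notification to a log file; the variables are set by SFTPGo
echo "$(date -u) action=${SFTPGO_ACTION} user=${SFTPGO_ACTION_USERNAME} path=${SFTPGO_ACTION_PATH} status=${SFTPGO_ACTION_STATUS}" >> /tmp/sftpgo-actions.log
```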
## Virtual folders
Directories outside the user home directory or based on a different storage provider can be exposed as virtual folders, more information [here](./docs/virtual-folders.md).
## Other hooks
@@ -299,7 +253,7 @@ I'd like to make SFTPGo into a sustainable long term project and your [sponsorsh
Thank you to our sponsors!
[<img src="https://www.7digital.com/wp-content/themes/sevendigital/images/top_logo.png" alt="7digital logo">](https://www.7digital.com/)
[<img src="https://images.squarespace-cdn.com/content/5e5db7f1ded5fc06a4e9628b/1583608099266-T5NW2WNQL7PC15LPRB16/logo+black.png?format=1500w&content-type=image%2Fpng" width="33%" alt="segmed logo">](https://www.segmed.ai/)
## License

@@ -3,117 +3,84 @@ package cmd
import (
"os"
"github.com/rs/zerolog"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/logger"
)
var genCompletionCmd = &cobra.Command{
Use: "completion [bash|zsh|fish|powershell]",
Short: "Generate the autocompletion script for the specified shell",
Long: `Generate the autocompletion script for sftpgo for the specified shell.
Short: "Generate shell completion script",
Long: `To load completions:
See each sub-command's help for details on how to use the generated script.
`,
}
var genCompletionBashCmd = &cobra.Command{
Use: "bash",
Short: "Generate the autocompletion script for bash",
Long: `Generate the autocompletion script for the bash shell.
This script depends on the 'bash-completion' package.
If it is not installed already, you can install it via your OS's package
manager.
To load completions in your current shell session:
Bash:
$ source <(sftpgo gen completion bash)
To load completions for every new session, execute once:
To load completions for each session, execute once:
Linux:
$ sudo sftpgo gen completion bash > /usr/share/bash-completion/completions/sftpgo
$ sudo sftpgo gen completion bash > /usr/share/bash-completion/completions/sftpgo
MacOS:
$ sudo sftpgo gen completion bash > /usr/local/etc/bash_completion.d/sftpgo
You will need to start a new shell for this setup to take effect.
`,
DisableFlagsInUseLine: true,
RunE: func(cmd *cobra.Command, args []string) error {
return cmd.Root().GenBashCompletionV2(os.Stdout, true)
},
}
$ sudo sftpgo gen completion bash > /usr/local/etc/bash_completion.d/sftpgo
var genCompletionZshCmd = &cobra.Command{
Use: "zsh",
Short: "Generate the autocompletion script for zsh",
Long: `Generate the autocompletion script for the zsh shell.
Zsh:
If shell completion is not already enabled in your environment you will need
to enable it. You can execute the following once:
$ echo "autoload -U compinit; compinit" >> ~/.zshrc
To load completions for every new session, execute once:
To load completions for each session, execute once:
Linux:
$ sftpgo gen completion zsh > > "${fpath[1]}/_sftpgo"
$ sftpgo gen completion zsh > "${fpath[1]}/_sftpgo"
macOS:
$ sudo sftpgo gen completion zsh > /usr/local/share/zsh/site-functions/_sftpgo
You will need to start a new shell for this setup to take effect.
`,
DisableFlagsInUseLine: true,
RunE: func(cmd *cobra.Command, args []string) error {
return cmd.Root().GenZshCompletion(os.Stdout)
},
}
var genCompletionFishCmd = &cobra.Command{
Use: "fish",
Short: "Generate the autocompletion script for fish",
Long: `Generate the autocompletion script for the fish shell.
To load completions in your current shell session:
Fish:
$ sftpgo gen completion fish | source
To load completions for every new session, execute once:
To load completions for each session, execute once:
$ sftpgo gen completion fish > ~/.config/fish/completions/sftpgo.fish
You will need to start a new shell for this setup to take effect.
Powershell:
PS> sftpgo gen completion powershell | Out-String | Invoke-Expression
To load completions for every new session, run:
PS> sftpgo gen completion powershell > sftpgo.ps1
and source this file from your powershell profile.
`,
DisableFlagsInUseLine: true,
RunE: func(cmd *cobra.Command, args []string) error {
return cmd.Root().GenFishCompletion(os.Stdout, true)
},
}
var genCompletionPowerShellCmd = &cobra.Command{
Use: "powershell",
Short: "Generate the autocompletion script for powershell",
Long: `Generate the autocompletion script for powershell.
To load completions in your current shell session:
PS C:\> sftpgo gen completion powershell | Out-String | Invoke-Expression
To load completions for every new session, add the output of the above command
to your powershell profile.
`,
DisableFlagsInUseLine: true,
RunE: func(cmd *cobra.Command, args []string) error {
return cmd.Root().GenPowerShellCompletionWithDesc(os.Stdout)
ValidArgs: []string{"bash", "zsh", "fish", "powershell"},
Args: cobra.ExactValidArgs(1),
Run: func(cmd *cobra.Command, args []string) {
var err error
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
switch args[0] {
case "bash":
err = cmd.Root().GenBashCompletion(os.Stdout)
case "zsh":
err = cmd.Root().GenZshCompletion(os.Stdout)
case "fish":
err = cmd.Root().GenFishCompletion(os.Stdout, true)
case "powershell":
err = cmd.Root().GenPowerShellCompletionWithDesc(os.Stdout)
}
if err != nil {
logger.WarnToConsole("Unable to generate shell completion script: %v", err)
os.Exit(1)
}
},
}
func init() {
genCompletionCmd.AddCommand(genCompletionBashCmd)
genCompletionCmd.AddCommand(genCompletionZshCmd)
genCompletionCmd.AddCommand(genCompletionFishCmd)
genCompletionCmd.AddCommand(genCompletionPowerShellCmd)
genCmd.AddCommand(genCompletionCmd)
}

@@ -8,19 +8,18 @@ import (
"github.com/spf13/cobra"
"github.com/spf13/cobra/doc"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/version"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/version"
)
var (
manDir string
genManCmd = &cobra.Command{
Use: "man",
Short: "Generate man pages for sftpgo",
Short: "Generate man pages for SFTPGo CLI",
Long: `This command automatically generates up-to-date man pages of SFTPGo's
command-line interface.
By default, it creates the man page files in the "man" directory under the
current directory.
command-line interface. By default, it creates the man page files
in the "man" directory under the current directory.
`,
Run: func(cmd *cobra.Command, args []string) {
logger.DisableLogger()

@@ -7,16 +7,16 @@ import (
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/drakkan/sftpgo/v2/config"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/config"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
)
var (
initProviderCmd = &cobra.Command{
Use: "initprovider",
Short: "Initialize and/or updates the configured data provider",
Short: "Initializes and/or updates the configured data provider",
Long: `This command reads the data provider connection details from the specified
configuration file and creates the initial structure or updates the existing one,
as needed.
@@ -37,7 +37,7 @@ Please take a look at the usage below to customize the options.`,
Run: func(cmd *cobra.Command, args []string) {
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
configDir = util.CleanDirInput(configDir)
configDir = utils.CleanDirInput(configDir)
err := config.LoadConfig(configDir, configFile)
if err != nil {
logger.WarnToConsole("Unable to initialize data provider, config load error: %v", err)

@@ -7,8 +7,8 @@ import (
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/service"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/service"
"github.com/drakkan/sftpgo/utils"
)
var (
@@ -23,7 +23,7 @@ sftpgo service install
Please take a look at the usage below to customize the startup options`,
Run: func(cmd *cobra.Command, args []string) {
s := service.Service{
ConfigDir: util.CleanDirInput(configDir),
ConfigDir: utils.CleanDirInput(configDir),
ConfigFile: configFile,
LogFilePath: logFilePath,
LogMaxSize: logMaxSize,
@@ -31,7 +31,6 @@ Please take a look at the usage below to customize the startup options`,
LogMaxAge: logMaxAge,
LogCompress: logCompress,
LogVerbose: logVerbose,
LogUTCTime: logUTCTime,
Shutdown: make(chan bool),
}
winService := service.WindowsService{
@@ -61,7 +60,7 @@ func init() {
func getCustomServeFlags() []string {
result := []string{}
if configDir != defaultConfigDir {
configDir = util.CleanDirInput(configDir)
configDir = utils.CleanDirInput(configDir)
result = append(result, "--"+configDirFlag)
result = append(result, configDir)
}
@@ -88,9 +87,6 @@ func getCustomServeFlags() []string {
if logVerbose != defaultLogVerbose {
result = append(result, "--"+logVerboseFlag+"=false")
}
if logUTCTime != defaultLogUTCTime {
result = append(result, "--"+logUTCTimeFlag+"=true")
}
if logCompress != defaultLogCompress {
result = append(result, "--"+logCompressFlag+"=true")
}

@@ -1,10 +1,10 @@
//go:build !noportable
// +build !noportable
package cmd
import (
"fmt"
"io/ioutil"
"os"
"path"
"path/filepath"
@@ -12,76 +12,70 @@ import (
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/common"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/sdk"
"github.com/drakkan/sftpgo/v2/service"
"github.com/drakkan/sftpgo/v2/sftpd"
"github.com/drakkan/sftpgo/v2/version"
"github.com/drakkan/sftpgo/v2/vfs"
"github.com/drakkan/sftpgo/common"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/kms"
"github.com/drakkan/sftpgo/service"
"github.com/drakkan/sftpgo/sftpd"
"github.com/drakkan/sftpgo/version"
"github.com/drakkan/sftpgo/vfs"
)
var (
directoryToServe string
portableSFTPDPort int
portableAdvertiseService bool
portableAdvertiseCredentials bool
portableUsername string
portablePassword string
portableLogFile string
portableLogVerbose bool
portableLogUTCTime bool
portablePublicKeys []string
portablePermissions []string
portableSSHCommands []string
portableAllowedPatterns []string
portableDeniedPatterns []string
portableFsProvider string
portableS3Bucket string
portableS3Region string
portableS3AccessKey string
portableS3AccessSecret string
portableS3Endpoint string
portableS3StorageClass string
portableS3ACL string
portableS3KeyPrefix string
portableS3ULPartSize int
portableS3ULConcurrency int
portableS3ForcePathStyle bool
portableGCSBucket string
portableGCSCredentialsFile string
portableGCSAutoCredentials int
portableGCSStorageClass string
portableGCSKeyPrefix string
portableFTPDPort int
portableFTPSCert string
portableFTPSKey string
portableWebDAVPort int
portableWebDAVCert string
portableWebDAVKey string
portableAzContainer string
portableAzAccountName string
portableAzAccountKey string
portableAzEndpoint string
portableAzAccessTier string
portableAzSASURL string
portableAzKeyPrefix string
portableAzULPartSize int
portableAzULConcurrency int
portableAzUseEmulator bool
portableCryptPassphrase string
portableSFTPEndpoint string
portableSFTPUsername string
portableSFTPPassword string
portableSFTPPrivateKeyPath string
portableSFTPFingerprints []string
portableSFTPPrefix string
portableSFTPDisableConcurrentReads bool
portableSFTPDBufferSize int64
portableCmd = &cobra.Command{
directoryToServe string
portableSFTPDPort int
portableAdvertiseService bool
portableAdvertiseCredentials bool
portableUsername string
portablePassword string
portableLogFile string
portableLogVerbose bool
portablePublicKeys []string
portablePermissions []string
portableSSHCommands []string
portableAllowedPatterns []string
portableDeniedPatterns []string
portableFsProvider int
portableS3Bucket string
portableS3Region string
portableS3AccessKey string
portableS3AccessSecret string
portableS3Endpoint string
portableS3StorageClass string
portableS3KeyPrefix string
portableS3ULPartSize int
portableS3ULConcurrency int
portableGCSBucket string
portableGCSCredentialsFile string
portableGCSAutoCredentials int
portableGCSStorageClass string
portableGCSKeyPrefix string
portableFTPDPort int
portableFTPSCert string
portableFTPSKey string
portableWebDAVPort int
portableWebDAVCert string
portableWebDAVKey string
portableAzContainer string
portableAzAccountName string
portableAzAccountKey string
portableAzEndpoint string
portableAzAccessTier string
portableAzSASURL string
portableAzKeyPrefix string
portableAzULPartSize int
portableAzULConcurrency int
portableAzUseEmulator bool
portableCryptPassphrase string
portableSFTPEndpoint string
portableSFTPUsername string
portableSFTPPassword string
portableSFTPPrivateKeyPath string
portableSFTPFingerprints []string
portableSFTPPrefix string
portableCmd = &cobra.Command{
Use: "portable",
Short: "Serve a single directory/account",
Short: "Serve a single directory",
Long: `To serve the current working directory with auto generated credentials simply
use:
@@ -90,9 +84,9 @@ $ sftpgo portable
Please take a look at the usage below to customize the serving parameters`,
Run: func(cmd *cobra.Command, args []string) {
portableDir := directoryToServe
fsProvider := sdk.GetProviderByName(portableFsProvider)
fsProvider := dataprovider.FilesystemProvider(portableFsProvider)
if !filepath.IsAbs(portableDir) {
if fsProvider == sdk.LocalFilesystemProvider {
if fsProvider == dataprovider.LocalFilesystemProvider {
portableDir, _ = filepath.Abs(portableDir)
} else {
portableDir = os.TempDir()
@@ -101,7 +95,7 @@ Please take a look at the usage below to customize the serving parameters`,
permissions := make(map[string][]string)
permissions["/"] = portablePermissions
portableGCSCredentials := ""
if fsProvider == sdk.GCSFilesystemProvider && portableGCSCredentialsFile != "" {
if fsProvider == dataprovider.GCSFilesystemProvider && portableGCSCredentialsFile != "" {
contents, err := getFileContents(portableGCSCredentialsFile)
if err != nil {
fmt.Printf("Unable to get GCS credentials: %v\n", err)
@@ -111,7 +105,7 @@ Please take a look at the usage below to customize the serving parameters`,
portableGCSAutoCredentials = 0
}
portableSFTPPrivateKey := ""
if fsProvider == sdk.SFTPFilesystemProvider && portableSFTPPrivateKeyPath != "" {
if fsProvider == dataprovider.SFTPFilesystemProvider && portableSFTPPrivateKeyPath != "" {
contents, err := getFileContents(portableSFTPPrivateKeyPath)
if err != nil {
fmt.Printf("Unable to get SFTP private key: %v\n", err)
@@ -146,79 +140,62 @@ Please take a look at the usage below to customize the serving parameters`,
LogMaxAge: defaultLogMaxAge,
LogCompress: defaultLogCompress,
LogVerbose: portableLogVerbose,
LogUTCTime: portableLogUTCTime,
Shutdown: make(chan bool),
PortableMode: 1,
PortableUser: dataprovider.User{
BaseUser: sdk.BaseUser{
Username: portableUsername,
Password: portablePassword,
PublicKeys: portablePublicKeys,
Permissions: permissions,
HomeDir: portableDir,
Status: 1,
Filters: sdk.UserFilters{
FilePatterns: parsePatternsFilesFilters(),
},
},
FsConfig: vfs.Filesystem{
Provider: sdk.GetProviderByName(portableFsProvider),
Username: portableUsername,
Password: portablePassword,
PublicKeys: portablePublicKeys,
Permissions: permissions,
HomeDir: portableDir,
Status: 1,
FsConfig: dataprovider.Filesystem{
Provider: dataprovider.FilesystemProvider(portableFsProvider),
S3Config: vfs.S3FsConfig{
S3FsConfig: sdk.S3FsConfig{
Bucket: portableS3Bucket,
Region: portableS3Region,
AccessKey: portableS3AccessKey,
AccessSecret: kms.NewPlainSecret(portableS3AccessSecret),
Endpoint: portableS3Endpoint,
StorageClass: portableS3StorageClass,
ACL: portableS3ACL,
KeyPrefix: portableS3KeyPrefix,
UploadPartSize: int64(portableS3ULPartSize),
UploadConcurrency: portableS3ULConcurrency,
ForcePathStyle: portableS3ForcePathStyle,
},
Bucket: portableS3Bucket,
Region: portableS3Region,
AccessKey: portableS3AccessKey,
AccessSecret: kms.NewPlainSecret(portableS3AccessSecret),
Endpoint: portableS3Endpoint,
StorageClass: portableS3StorageClass,
KeyPrefix: portableS3KeyPrefix,
UploadPartSize: int64(portableS3ULPartSize),
UploadConcurrency: portableS3ULConcurrency,
},
GCSConfig: vfs.GCSFsConfig{
GCSFsConfig: sdk.GCSFsConfig{
Bucket: portableGCSBucket,
Credentials: kms.NewPlainSecret(portableGCSCredentials),
AutomaticCredentials: portableGCSAutoCredentials,
StorageClass: portableGCSStorageClass,
KeyPrefix: portableGCSKeyPrefix,
},
Bucket: portableGCSBucket,
Credentials: kms.NewPlainSecret(portableGCSCredentials),
AutomaticCredentials: portableGCSAutoCredentials,
StorageClass: portableGCSStorageClass,
KeyPrefix: portableGCSKeyPrefix,
},
AzBlobConfig: vfs.AzBlobFsConfig{
AzBlobFsConfig: sdk.AzBlobFsConfig{
Container: portableAzContainer,
AccountName: portableAzAccountName,
AccountKey: kms.NewPlainSecret(portableAzAccountKey),
Endpoint: portableAzEndpoint,
AccessTier: portableAzAccessTier,
SASURL: kms.NewPlainSecret(portableAzSASURL),
KeyPrefix: portableAzKeyPrefix,
UseEmulator: portableAzUseEmulator,
UploadPartSize: int64(portableAzULPartSize),
UploadConcurrency: portableAzULConcurrency,
},
Container: portableAzContainer,
AccountName: portableAzAccountName,
AccountKey: kms.NewPlainSecret(portableAzAccountKey),
Endpoint: portableAzEndpoint,
AccessTier: portableAzAccessTier,
SASURL: portableAzSASURL,
KeyPrefix: portableAzKeyPrefix,
UseEmulator: portableAzUseEmulator,
UploadPartSize: int64(portableAzULPartSize),
UploadConcurrency: portableAzULConcurrency,
},
CryptConfig: vfs.CryptFsConfig{
CryptFsConfig: sdk.CryptFsConfig{
Passphrase: kms.NewPlainSecret(portableCryptPassphrase),
},
Passphrase: kms.NewPlainSecret(portableCryptPassphrase),
},
SFTPConfig: vfs.SFTPFsConfig{
SFTPFsConfig: sdk.SFTPFsConfig{
Endpoint: portableSFTPEndpoint,
Username: portableSFTPUsername,
Password: kms.NewPlainSecret(portableSFTPPassword),
PrivateKey: kms.NewPlainSecret(portableSFTPPrivateKey),
Fingerprints: portableSFTPFingerprints,
Prefix: portableSFTPPrefix,
DisableCouncurrentReads: portableSFTPDisableConcurrentReads,
BufferSize: portableSFTPDBufferSize,
},
Endpoint: portableSFTPEndpoint,
Username: portableSFTPUsername,
Password: kms.NewPlainSecret(portableSFTPPassword),
PrivateKey: kms.NewPlainSecret(portableSFTPPrivateKey),
Fingerprints: portableSFTPFingerprints,
Prefix: portableSFTPPrefix,
},
},
Filters: dataprovider.UserFilters{
FilePatterns: parsePatternsFilesFilters(),
},
},
}
if err := service.StartPortableMode(portableSFTPDPort, portableFTPDPort, portableWebDAVPort, portableSSHCommands, portableAdvertiseService,
@@ -257,7 +234,6 @@ value`)
value`)
portableCmd.Flags().StringVarP(&portableLogFile, logFilePathFlag, "l", "", "Leave empty to disable logging")
portableCmd.Flags().BoolVarP(&portableLogVerbose, logVerboseFlag, "v", false, "Enable verbose logs")
portableCmd.Flags().BoolVar(&portableLogUTCTime, logUTCTimeFlag, false, "Use UTC time for logging")
portableCmd.Flags().StringSliceVarP(&portablePublicKeys, "public-key", "k", []string{}, "")
portableCmd.Flags().StringSliceVarP(&portablePermissions, "permissions", "g", []string{"list", "download"},
`User's permissions. "*" means any
@@ -280,19 +256,18 @@ multicast DNS`)
advertised via multicast DNS, this
flag allows to put username/password
inside the advertised TXT record`)
portableCmd.Flags().StringVarP(&portableFsProvider, "fs-provider", "f", "osfs", `osfs => local filesystem (legacy value: 0)
s3fs => AWS S3 compatible (legacy: 1)
gcsfs => Google Cloud Storage (legacy: 2)
azblobfs => Azure Blob Storage (legacy: 3)
cryptfs => Encrypted local filesystem (legacy: 4)
sftpfs => SFTP (legacy: 5)`)
portableCmd.Flags().IntVarP(&portableFsProvider, "fs-provider", "f", int(dataprovider.LocalFilesystemProvider), `0 => local filesystem
1 => AWS S3 compatible
2 => Google Cloud Storage
3 => Azure Blob Storage
4 => Encrypted local filesystem
5 => SFTP`)
portableCmd.Flags().StringVar(&portableS3Bucket, "s3-bucket", "", "")
portableCmd.Flags().StringVar(&portableS3Region, "s3-region", "", "")
portableCmd.Flags().StringVar(&portableS3AccessKey, "s3-access-key", "", "")
portableCmd.Flags().StringVar(&portableS3AccessSecret, "s3-access-secret", "", "")
portableCmd.Flags().StringVar(&portableS3Endpoint, "s3-endpoint", "", "")
portableCmd.Flags().StringVar(&portableS3StorageClass, "s3-storage-class", "", "")
portableCmd.Flags().StringVar(&portableS3ACL, "s3-acl", "", "")
portableCmd.Flags().StringVar(&portableS3KeyPrefix, "s3-key-prefix", "", `Allows to restrict access to the
virtual folder identified by this
prefix and its contents`)
@@ -300,7 +275,6 @@ prefix and its contents`)
(MB)`)
portableCmd.Flags().IntVar(&portableS3ULConcurrency, "s3-upload-concurrency", 2, `How many parts are uploaded in
parallel`)
portableCmd.Flags().BoolVar(&portableS3ForcePathStyle, "s3-force-path-style", false, `Force path style bucket URL`)
portableCmd.Flags().StringVar(&portableGCSBucket, "gcs-bucket", "", "")
portableCmd.Flags().StringVar(&portableGCSStorageClass, "gcs-storage-class", "", "")
portableCmd.Flags().StringVar(&portableGCSKeyPrefix, "gcs-key-prefix", "", `Allows to restrict access to the
@@ -344,26 +318,15 @@ key for SFTP provider`)
portableCmd.Flags().StringVar(&portableSFTPPrefix, "sftp-prefix", "", `SFTP prefix allows restrict all
operations to a given path within the
remote SFTP server`)
portableCmd.Flags().BoolVar(&portableSFTPDisableConcurrentReads, "sftp-disable-concurrent-reads", false, `Concurrent reads are safe to use and
disabling them will degrade performance.
Disable for read once servers`)
portableCmd.Flags().Int64Var(&portableSFTPDBufferSize, "sftp-buffer-size", 0, `The size of the buffer (in MB) to use
for transfers. By enabling buffering,
the reads and writes, from/to the
remote SFTP server, are split in
multiple concurrent requests and this
allows data to be transferred at a
faster rate, over high latency networks,
by overlapping round-trip times`)
rootCmd.AddCommand(portableCmd)
}
func parsePatternsFilesFilters() []sdk.PatternsFilter {
var patterns []sdk.PatternsFilter
func parsePatternsFilesFilters() []dataprovider.PatternsFilter {
var patterns []dataprovider.PatternsFilter
for _, val := range portableAllowedPatterns {
p, exts := getPatternsFilterValues(strings.TrimSpace(val))
if p != "" {
patterns = append(patterns, sdk.PatternsFilter{
patterns = append(patterns, dataprovider.PatternsFilter{
Path: path.Clean(p),
AllowedPatterns: exts,
DeniedPatterns: []string{},
@@ -382,7 +345,7 @@ func parsePatternsFilesFilters() []sdk.PatternsFilter {
}
}
if !found {
patterns = append(patterns, sdk.PatternsFilter{
patterns = append(patterns, dataprovider.PatternsFilter{
Path: path.Clean(p),
AllowedPatterns: []string{},
DeniedPatterns: exts,
@@ -421,7 +384,7 @@ func getFileContents(name string) (string, error) {
if fi.Size() > 1048576 {
return "", fmt.Errorf("%#v is too big %v/1048576 bytes", name, fi.Size())
}
contents, err := os.ReadFile(name)
contents, err := ioutil.ReadFile(name)
if err != nil {
return "", err
}

@@ -1,9 +1,8 @@
//go:build noportable
// +build noportable
package cmd
import "github.com/drakkan/sftpgo/v2/version"
import "github.com/drakkan/sftpgo/version"
func init() {
version.AddFeature("-portable")

@@ -6,7 +6,7 @@ import (
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/service"
"github.com/drakkan/sftpgo/service"
)
var (

@@ -1,75 +0,0 @@
package cmd
import (
"bufio"
"os"
"strings"
"github.com/rs/zerolog"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/drakkan/sftpgo/v2/config"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
var (
resetProviderForce bool
resetProviderCmd = &cobra.Command{
Use: "resetprovider",
Short: "Reset the configured provider, any data will be lost",
Long: `This command reads the data provider connection details from the specified
configuration file and resets the provider by deleting all data and schemas.
This command is not supported for the memory provider.
Please take a look at the usage below to customize the options.`,
Run: func(cmd *cobra.Command, args []string) {
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
configDir = util.CleanDirInput(configDir)
err := config.LoadConfig(configDir, configFile)
if err != nil {
logger.WarnToConsole("Unable to initialize data provider, config load error: %v", err)
os.Exit(1)
}
kmsConfig := config.GetKMSConfig()
err = kmsConfig.Initialize()
if err != nil {
logger.ErrorToConsole("unable to initialize KMS: %v", err)
os.Exit(1)
}
providerConf := config.GetProviderConf()
if !resetProviderForce {
logger.WarnToConsole("You are about to delete all the SFTPGo data for provider %#v, config file: %#v",
providerConf.Driver, viper.ConfigFileUsed())
logger.WarnToConsole("Are you sure? (Y/n)")
reader := bufio.NewReader(os.Stdin)
answer, err := reader.ReadString('\n')
if err != nil {
logger.ErrorToConsole("unable to read your answer: %v", err)
os.Exit(1)
}
if strings.ToUpper(strings.TrimSpace(answer)) != "Y" {
logger.InfoToConsole("command aborted")
os.Exit(1)
}
}
logger.InfoToConsole("Resetting provider: %#v, config file: %#v", providerConf.Driver, viper.ConfigFileUsed())
err = dataprovider.ResetDatabase(providerConf, configDir)
if err != nil {
logger.WarnToConsole("Error resetting provider: %v", err)
os.Exit(1)
}
logger.InfoToConsole("Tha data provider was successfully reset")
},
}
)
func init() {
addConfigFlags(resetProviderCmd)
resetProviderCmd.Flags().BoolVar(&resetProviderForce, "force", false, `reset the provider without asking for confirmation`)
rootCmd.AddCommand(resetProviderCmd)
}

@@ -7,10 +7,10 @@ import (
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/drakkan/sftpgo/v2/config"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/config"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
)
var (
@@ -26,11 +26,11 @@ Please take a look at the usage below to customize the options.`,
Run: func(cmd *cobra.Command, args []string) {
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
if revertProviderTargetVersion != 10 {
logger.WarnToConsole("Unsupported target version, 10 is the only supported one")
if revertProviderTargetVersion != 4 {
logger.WarnToConsole("Unsupported target version, 4 is the only supported one")
os.Exit(1)
}
configDir = util.CleanDirInput(configDir)
configDir = utils.CleanDirInput(configDir)
err := config.LoadConfig(configDir, configFile)
if err != nil {
logger.WarnToConsole("Unable to initialize data provider, config load error: %v", err)
@@ -57,7 +57,7 @@ Please take a look at the usage below to customize the options.`,
func init() {
addConfigFlags(revertProviderCmd)
revertProviderCmd.Flags().IntVar(&revertProviderTargetVersion, "to-version", 10, `10 means the version supported in v2.1.x`)
revertProviderCmd.Flags().IntVar(&revertProviderTargetVersion, "to-version", 0, `4 means the version supported in v1.0.0-v1.2.x`)
revertProviderCmd.MarkFlagRequired("to-version") //nolint:errcheck
rootCmd.AddCommand(revertProviderCmd)

@@ -8,7 +8,7 @@ import (
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/drakkan/sftpgo/v2/version"
"github.com/drakkan/sftpgo/version"
)
const (
@@ -28,8 +28,6 @@ const (
logCompressKey = "log_compress"
logVerboseFlag = "log-verbose"
logVerboseKey = "log_verbose"
logUTCTimeFlag = "log-utc-time"
logUTCTimeKey = "log_utc_time"
loadDataFromFlag = "loaddata-from"
loadDataFromKey = "loaddata_from"
loadDataModeFlag = "loaddata-mode"
@@ -46,7 +44,6 @@ const (
defaultLogMaxAge = 28
defaultLogCompress = false
defaultLogVerbose = true
defaultLogUTCTime = false
defaultLoadDataFrom = ""
defaultLoadDataMode = 1
defaultLoadDataQuotaScan = 0
@@ -62,7 +59,6 @@ var (
logMaxAge int
logCompress bool
logVerbose bool
logUTCTime bool
loadDataFrom string
loadDataMode int
loadDataQuotaScan int
@@ -75,7 +71,6 @@ var (
)
func init() {
rootCmd.CompletionOptions.DisableDefaultCmd = true
rootCmd.Flags().BoolP("version", "v", false, "")
rootCmd.Version = version.GetAsString()
rootCmd.SetVersionTemplate(`{{printf "SFTPGo "}}{{printf "%s" .Version}}
@@ -184,14 +179,6 @@ using SFTPGO_LOG_VERBOSE env var too.
`)
viper.BindPFlag(logVerboseKey, cmd.Flags().Lookup(logVerboseFlag)) //nolint:errcheck
viper.SetDefault(logUTCTimeKey, defaultLogUTCTime)
viper.BindEnv(logUTCTimeKey, "SFTPGO_LOG_UTC_TIME") //nolint:errcheck
cmd.Flags().BoolVar(&logUTCTime, logUTCTimeFlag, viper.GetBool(logUTCTimeKey),
`Use UTC time for logging. This flag can be set
using SFTPGO_LOG_UTC_TIME env var too.
`)
viper.BindPFlag(logUTCTimeKey, cmd.Flags().Lookup(logUTCTimeFlag)) //nolint:errcheck
viper.SetDefault(loadDataFromKey, defaultLoadDataFrom)
viper.BindEnv(loadDataFromKey, "SFTPGO_LOADDATA_FROM") //nolint:errcheck
cmd.Flags().StringVar(&loadDataFrom, loadDataFromFlag, viper.GetString(loadDataFromKey),

@@ -6,7 +6,7 @@ import (
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/service"
"github.com/drakkan/sftpgo/service"
)
var (

@@ -5,14 +5,14 @@ import (
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/service"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/service"
"github.com/drakkan/sftpgo/utils"
)
var (
serveCmd = &cobra.Command{
Use: "serve",
Short: "Start the SFTPGo service",
Short: "Start the SFTP Server",
Long: `To start the SFTPGo with the default values for the command line flags simply
use:
@@ -21,7 +21,7 @@ $ sftpgo serve
Please take a look at the usage below to customize the startup options`,
Run: func(cmd *cobra.Command, args []string) {
service := service.Service{
ConfigDir: util.CleanDirInput(configDir),
ConfigDir: utils.CleanDirInput(configDir),
ConfigFile: configFile,
LogFilePath: logFilePath,
LogMaxSize: logMaxSize,
@@ -29,7 +29,6 @@ Please take a look at the usage below to customize the startup options`,
LogMaxAge: logMaxAge,
LogCompress: logCompress,
LogVerbose: logVerbose,
LogUTCTime: logUTCTime,
LoadDataFrom: loadDataFrom,
LoadDataMode: loadDataMode,
LoadDataQuotaScan: loadDataQuotaScan,

@@ -7,7 +7,7 @@ import (
var (
serviceCmd = &cobra.Command{
Use: "service",
Short: "Manage the SFTPGo Windows Service",
Short: "Manage SFTPGo Windows Service",
}
)

@@ -1,54 +0,0 @@
package cmd
import (
"os"
"github.com/rs/zerolog"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/config"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/smtp"
"github.com/drakkan/sftpgo/v2/util"
)
var (
smtpTestRecipient string
smtpTestCmd = &cobra.Command{
Use: "smtptest",
Short: "Test the SMTP configuration",
Long: `SFTPGo will try to send a test email to the specified recipient.
If the SMTP configuration is correct you should receive this email.`,
Run: func(cmd *cobra.Command, args []string) {
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
configDir = util.CleanDirInput(configDir)
err := config.LoadConfig(configDir, configFile)
if err != nil {
logger.WarnToConsole("Unable to initialize data provider, config load error: %v", err)
os.Exit(1)
}
smtpConfig := config.GetSMTPConfig()
err = smtpConfig.Initialize(configDir)
if err != nil {
logger.ErrorToConsole("unable to initialize SMTP configuration: %v", err)
os.Exit(1)
}
err = smtp.SendEmail(smtpTestRecipient, "SFTPGo - Testing Email Settings", "It appears your SFTPGo email is setup correctly!",
smtp.EmailContentTypeTextPlain)
if err != nil {
logger.WarnToConsole("Error sending email: %v", err)
os.Exit(1)
}
logger.InfoToConsole("No errors were reported while sending an email. Please check your inbox to make sure.")
},
}
)
func init() {
addConfigFlags(smtpTestCmd)
smtpTestCmd.Flags().StringVar(&smtpTestRecipient, "recipient", "", `email address to send the test e-mail to`)
smtpTestCmd.MarkFlagRequired("recipient") //nolint:errcheck
rootCmd.AddCommand(smtpTestCmd)
}

@@ -7,17 +7,17 @@ import (
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/service"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/service"
"github.com/drakkan/sftpgo/utils"
)
var (
startCmd = &cobra.Command{
Use: "start",
Short: "Start the SFTPGo Windows Service",
Short: "Start SFTPGo Windows Service",
Run: func(cmd *cobra.Command, args []string) {
configDir = util.CleanDirInput(configDir)
if !filepath.IsAbs(logFilePath) && util.IsFileInputValid(logFilePath) {
configDir = utils.CleanDirInput(configDir)
if !filepath.IsAbs(logFilePath) && utils.IsFileInputValid(logFilePath) {
logFilePath = filepath.Join(configDir, logFilePath)
}
s := service.Service{
@@ -29,7 +29,6 @@ var (
LogMaxAge: logMaxAge,
LogCompress: logCompress,
LogVerbose: logVerbose,
LogUTCTime: logUTCTime,
Shutdown: make(chan bool),
}
winService := service.WindowsService{

@@ -11,13 +11,12 @@ import (
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/drakkan/sftpgo/v2/common"
"github.com/drakkan/sftpgo/v2/config"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/sdk/plugin"
"github.com/drakkan/sftpgo/v2/sftpd"
"github.com/drakkan/sftpgo/v2/version"
"github.com/drakkan/sftpgo/common"
"github.com/drakkan/sftpgo/config"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/sftpd"
"github.com/drakkan/sftpgo/version"
)
var (
@@ -26,7 +25,7 @@ var (
baseHomeDir = ""
subsystemCmd = &cobra.Command{
Use: "startsubsys",
Short: "Use sftpgo as SFTP file transfer subsystem",
Short: "Use SFTPGo as SFTP file transfer subsystem",
Long: `In this mode SFTPGo speaks the server side of SFTP protocol to stdout and
expects client requests from stdin.
This mode is not intended to be called directly, but from sshd using the
@@ -44,7 +43,6 @@ Command-line flags should be specified in the Subsystem declaration.
if !logVerbose {
logLevel = zerolog.InfoLevel
}
logger.SetLogTime(logUTCTime)
if logJournalD {
logger.InitJournalDLogger(logLevel)
} else {
@@ -77,22 +75,6 @@ Command-line flags should be specified in the Subsystem declaration.
logger.Error(logSender, connectionID, "unable to initialize KMS: %v", err)
os.Exit(1)
}
mfaConfig := config.GetMFAConfig()
err = mfaConfig.Initialize()
if err != nil {
logger.Error(logSender, "", "unable to initialize MFA: %v", err)
os.Exit(1)
}
if err := plugin.Initialize(config.GetPluginsConfig(), logVerbose); err != nil {
logger.Error(logSender, connectionID, "unable to initialize plugin system: %v", err)
os.Exit(1)
}
smtpConfig := config.GetSMTPConfig()
err = smtpConfig.Initialize(configDir)
if err != nil {
logger.Error(logSender, connectionID, "unable to initialize SMTP configuration: %v", err)
os.Exit(1)
}
dataProviderConf := config.GetProviderConf()
if dataProviderConf.Driver == dataprovider.SQLiteDataProviderName || dataProviderConf.Driver == dataprovider.BoltDataProviderName {
logger.Debug(logSender, connectionID, "data provider %#v not supported in subsystem mode, using %#v provider",
@@ -117,7 +99,7 @@ Command-line flags should be specified in the Subsystem declaration.
if user.HomeDir != filepath.Clean(homedir) && !preserveHomeDir {
// update the user
user.HomeDir = filepath.Clean(homedir)
err = dataprovider.UpdateUser(&user, dataprovider.ActionExecutorSystem, "")
err = dataprovider.UpdateUser(&user)
if err != nil {
logger.Error(logSender, connectionID, "unable to update user %#v: %v", username, err)
os.Exit(1)
@@ -134,7 +116,7 @@ Command-line flags should be specified in the Subsystem declaration.
user.Password = connectionID
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
err = dataprovider.AddUser(&user, dataprovider.ActionExecutorSystem, "")
err = dataprovider.AddUser(&user)
if err != nil {
logger.Error(logSender, connectionID, "unable to add user %#v: %v", username, err)
os.Exit(1)
@@ -146,7 +128,6 @@ Command-line flags should be specified in the Subsystem declaration.
os.Exit(1)
}
logger.Info(logSender, connectionID, "serving subsystem finished")
plugin.Handler.Cleanup()
os.Exit(0)
},
}
@@ -181,13 +162,5 @@ using SFTPGO_LOG_VERBOSE env var too.
`)
viper.BindPFlag(logVerboseKey, subsystemCmd.Flags().Lookup(logVerboseFlag)) //nolint:errcheck
viper.SetDefault(logUTCTimeKey, defaultLogUTCTime)
viper.BindEnv(logUTCTimeKey, "SFTPGO_LOG_UTC_TIME") //nolint:errcheck
subsystemCmd.Flags().BoolVar(&logUTCTime, logUTCTimeFlag, viper.GetBool(logUTCTimeKey),
`Use UTC time for logging. This flag can be set
using SFTPGO_LOG_UTC_TIME env var too.
`)
viper.BindPFlag(logUTCTimeKey, subsystemCmd.Flags().Lookup(logUTCTimeFlag)) //nolint:errcheck
rootCmd.AddCommand(subsystemCmd)
}

@@ -6,7 +6,7 @@ import (
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/service"
"github.com/drakkan/sftpgo/service"
)
var (

@@ -6,13 +6,13 @@ import (
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/service"
"github.com/drakkan/sftpgo/service"
)
var (
stopCmd = &cobra.Command{
Use: "stop",
Short: "Stop the SFTPGo Windows Service",
Short: "Stop SFTPGo Windows Service",
Run: func(cmd *cobra.Command, args []string) {
s := service.WindowsService{
Service: service.Service{

@@ -6,13 +6,13 @@ import (
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/service"
"github.com/drakkan/sftpgo/service"
)
var (
uninstallCmd = &cobra.Command{
Use: "uninstall",
Short: "Uninstall the SFTPGo Windows Service",
Short: "Uninstall SFTPGo Windows Service",
Run: func(cmd *cobra.Command, args []string) {
s := service.WindowsService{
Service: service.Service{

@@ -14,12 +14,10 @@ import (
"strings"
"time"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/httpclient"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/sdk"
"github.com/drakkan/sftpgo/v2/sdk/plugin"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/httpclient"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
)
var (
@@ -32,11 +30,6 @@ var (
type ProtocolActions struct {
// Valid values are download, upload, pre-delete, delete, rename, ssh_cmd. Empty slice to disable
ExecuteOn []string `json:"execute_on" mapstructure:"execute_on"`
// Actions to be performed synchronously.
// The pre-delete action is always executed synchronously while the other ones are asynchronous.
// Executing an action synchronously means that SFTPGo will not return a result code to the client
// (which is waiting for it) until your hook have completed its execution.
ExecuteSync []string `json:"execute_sync" mapstructure:"execute_sync"`
// Absolute path to an external program or an HTTP URL
Hook string `json:"hook" mapstructure:"hook"`
}
@@ -50,40 +43,11 @@ func InitializeActionHandler(handler ActionHandler) {
actionHandler = handler
}
// ExecutePreAction executes a pre-* action and returns the result
func ExecutePreAction(user *dataprovider.User, operation, filePath, virtualPath, protocol, ip string, fileSize int64,
openFlags int,
) error {
plugin.Handler.NotifyFsEvent(time.Now().UnixNano(), operation, user.Username, filePath, "", "", protocol, ip, virtualPath, "", fileSize, nil)
if !util.IsStringInSlice(operation, Config.Actions.ExecuteOn) {
// for pre-delete we execute the internal handling on error, so we must return errUnconfiguredAction.
// Other pre action will deny the operation on error so if we have no configuration we must return
// a nil error
if operation == operationPreDelete {
return errUnconfiguredAction
}
return nil
}
notification := newActionNotification(user, operation, filePath, virtualPath, "", "", "", protocol, ip, fileSize,
openFlags, nil)
return actionHandler.Handle(notification)
}
// SSHCommandActionNotification executes the defined action for the specified SSH command.
func SSHCommandActionNotification(user *dataprovider.User, filePath, target, sshCmd string, err error) {
notification := newActionNotification(user, operationSSHCmd, filePath, target, sshCmd, ProtocolSSH, 0, err)
// ExecuteActionNotification executes the defined hook, if any, for the specified action
func ExecuteActionNotification(user *dataprovider.User, operation, filePath, virtualPath, target, virtualTarget, sshCmd,
protocol, ip string, fileSize int64, err error,
) {
plugin.Handler.NotifyFsEvent(time.Now().UnixNano(), operation, user.Username, filePath, target, sshCmd, protocol, ip, virtualPath,
virtualTarget, fileSize, err)
notification := newActionNotification(user, operation, filePath, virtualPath, target, virtualTarget, sshCmd, protocol,
ip, fileSize, 0, err)
if util.IsStringInSlice(operation, Config.Actions.ExecuteSync) {
actionHandler.Handle(notification) //nolint:errcheck
return
}
go actionHandler.Handle(notification) //nolint:errcheck
go actionHandler.Handle(notification) // nolint:errcheck
}
// ActionHandler handles a notification for a Protocol Action.
@@ -93,81 +57,67 @@ type ActionHandler interface {
// ActionNotification defines a notification for a Protocol Action.
type ActionNotification struct {
Action string `json:"action"`
Username string `json:"username"`
Path string `json:"path"`
TargetPath string `json:"target_path,omitempty"`
VirtualPath string `json:"virtual_path"`
VirtualTargetPath string `json:"virtual_target_path,omitempty"`
SSHCmd string `json:"ssh_cmd,omitempty"`
FileSize int64 `json:"file_size,omitempty"`
FsProvider int `json:"fs_provider"`
Bucket string `json:"bucket,omitempty"`
Endpoint string `json:"endpoint,omitempty"`
Status int `json:"status"`
Protocol string `json:"protocol"`
IP string `json:"ip"`
Timestamp int64 `json:"timestamp"`
OpenFlags int `json:"open_flags,omitempty"`
Action string `json:"action"`
Username string `json:"username"`
Path string `json:"path"`
TargetPath string `json:"target_path,omitempty"`
SSHCmd string `json:"ssh_cmd,omitempty"`
FileSize int64 `json:"file_size,omitempty"`
FsProvider int `json:"fs_provider"`
Bucket string `json:"bucket,omitempty"`
Endpoint string `json:"endpoint,omitempty"`
Status int `json:"status"`
Protocol string `json:"protocol"`
}
func newActionNotification(
user *dataprovider.User,
operation, filePath, virtualPath, target, virtualTarget, sshCmd, protocol, ip string,
operation, filePath, target, sshCmd, protocol string,
fileSize int64,
openFlags int,
err error,
) *ActionNotification {
var bucket, endpoint string
status := 1
fsConfig := user.GetFsConfigForPath(virtualPath)
switch fsConfig.Provider {
case sdk.S3FilesystemProvider:
bucket = fsConfig.S3Config.Bucket
endpoint = fsConfig.S3Config.Endpoint
case sdk.GCSFilesystemProvider:
bucket = fsConfig.GCSConfig.Bucket
case sdk.AzureBlobFilesystemProvider:
bucket = fsConfig.AzBlobConfig.Container
if fsConfig.AzBlobConfig.Endpoint != "" {
endpoint = fsConfig.AzBlobConfig.Endpoint
if user.FsConfig.Provider == dataprovider.S3FilesystemProvider {
bucket = user.FsConfig.S3Config.Bucket
endpoint = user.FsConfig.S3Config.Endpoint
} else if user.FsConfig.Provider == dataprovider.GCSFilesystemProvider {
bucket = user.FsConfig.GCSConfig.Bucket
} else if user.FsConfig.Provider == dataprovider.AzureBlobFilesystemProvider {
bucket = user.FsConfig.AzBlobConfig.Container
if user.FsConfig.AzBlobConfig.SASURL != "" {
endpoint = user.FsConfig.AzBlobConfig.SASURL
} else {
endpoint = user.FsConfig.AzBlobConfig.Endpoint
}
case sdk.SFTPFilesystemProvider:
endpoint = fsConfig.SFTPConfig.Endpoint
}
if err == ErrQuotaExceeded {
status = 3
} else if err != nil {
status = 2
} else if err != nil {
status = 0
}
return &ActionNotification{
Action: operation,
Username: user.Username,
Path: filePath,
TargetPath: target,
VirtualPath: virtualPath,
VirtualTargetPath: virtualTarget,
SSHCmd: sshCmd,
FileSize: fileSize,
FsProvider: int(fsConfig.Provider),
Bucket: bucket,
Endpoint: endpoint,
Status: status,
Protocol: protocol,
IP: ip,
OpenFlags: openFlags,
Timestamp: time.Now().UnixNano(),
Action: operation,
Username: user.Username,
Path: filePath,
TargetPath: target,
SSHCmd: sshCmd,
FileSize: fileSize,
FsProvider: int(user.FsConfig.Provider),
Bucket: bucket,
Endpoint: endpoint,
Status: status,
Protocol: protocol,
}
}
type defaultActionHandler struct{}
func (h *defaultActionHandler) Handle(notification *ActionNotification) error {
if !util.IsStringInSlice(notification.Action, Config.Actions.ExecuteOn) {
if !utils.IsStringInSlice(notification.Action, Config.Actions.ExecuteOn) {
return errUnconfiguredAction
}
@@ -188,16 +138,19 @@ func (h *defaultActionHandler) handleHTTP(notification *ActionNotification) erro
u, err := url.Parse(Config.Actions.Hook)
if err != nil {
logger.Warn(notification.Protocol, "", "Invalid hook %#v for operation %#v: %v", Config.Actions.Hook, notification.Action, err)
return err
}
startTime := time.Now()
respCode := 0
httpClient := httpclient.GetRetraybleHTTPClient()
var b bytes.Buffer
_ = json.NewEncoder(&b).Encode(notification)
resp, err := httpclient.RetryablePost(Config.Actions.Hook, "application/json", &b)
resp, err := httpClient.Post(u.String(), "application/json", &b)
if err == nil {
respCode = resp.StatusCode
resp.Body.Close()
@@ -207,8 +160,7 @@ func (h *defaultActionHandler) handleHTTP(notification *ActionNotification) erro
}
}
logger.Debug(notification.Protocol, "", "notified operation %#v to URL: %v status code: %v, elapsed: %v err: %v",
notification.Action, u.Redacted(), respCode, time.Since(startTime), err)
logger.Debug(notification.Protocol, "", "notified operation %#v to URL: %v status code: %v, elapsed: %v err: %v", notification.Action, u.String(), respCode, time.Since(startTime), err)
return err
}
@@ -224,14 +176,14 @@ func (h *defaultActionHandler) handleCommand(notification *ActionNotification) e
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
cmd := exec.CommandContext(ctx, Config.Actions.Hook)
cmd := exec.CommandContext(ctx, Config.Actions.Hook, notification.Action, notification.Username, notification.Path, notification.TargetPath, notification.SSHCmd)
cmd.Env = append(os.Environ(), notificationAsEnvVars(notification)...)
startTime := time.Now()
err := cmd.Run()
logger.Debug(notification.Protocol, "", "executed command %#v, elapsed: %v, error: %v",
Config.Actions.Hook, time.Since(startTime), err)
logger.Debug(notification.Protocol, "", "executed command %#v with arguments: %#v, %#v, %#v, %#v, %#v, elapsed: %v, error: %v",
Config.Actions.Hook, notification.Action, notification.Username, notification.Path, notification.TargetPath, notification.SSHCmd, time.Since(startTime), err)
return err
}
@@ -242,8 +194,6 @@ func notificationAsEnvVars(notification *ActionNotification) []string {
fmt.Sprintf("SFTPGO_ACTION_USERNAME=%v", notification.Username),
fmt.Sprintf("SFTPGO_ACTION_PATH=%v", notification.Path),
fmt.Sprintf("SFTPGO_ACTION_TARGET=%v", notification.TargetPath),
fmt.Sprintf("SFTPGO_ACTION_VIRTUAL_PATH=%v", notification.VirtualPath),
fmt.Sprintf("SFTPGO_ACTION_VIRTUAL_TARGET=%v", notification.VirtualTargetPath),
fmt.Sprintf("SFTPGO_ACTION_SSH_CMD=%v", notification.SSHCmd),
fmt.Sprintf("SFTPGO_ACTION_FILE_SIZE=%v", notification.FileSize),
fmt.Sprintf("SFTPGO_ACTION_FS_PROVIDER=%v", notification.FsProvider),
@@ -251,8 +201,5 @@ func notificationAsEnvVars(notification *ActionNotification) []string {
fmt.Sprintf("SFTPGO_ACTION_ENDPOINT=%v", notification.Endpoint),
fmt.Sprintf("SFTPGO_ACTION_STATUS=%v", notification.Status),
fmt.Sprintf("SFTPGO_ACTION_PROTOCOL=%v", notification.Protocol),
fmt.Sprintf("SFTPGO_ACTION_IP=%v", notification.IP),
fmt.Sprintf("SFTPGO_ACTION_OPEN_FLAGS=%v", notification.OpenFlags),
fmt.Sprintf("SFTPGO_ACTION_TIMESTAMP=%v", notification.Timestamp),
}
}

@@ -3,6 +3,7 @@ package common
import (
"errors"
"fmt"
"io/ioutil"
"os"
"os/exec"
"path/filepath"
@@ -11,73 +12,56 @@ import (
"github.com/stretchr/testify/assert"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/sdk"
"github.com/drakkan/sftpgo/v2/vfs"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/vfs"
)
func TestNewActionNotification(t *testing.T) {
user := &dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "username",
},
Username: "username",
}
user.FsConfig.Provider = sdk.LocalFilesystemProvider
user.FsConfig.Provider = dataprovider.LocalFilesystemProvider
user.FsConfig.S3Config = vfs.S3FsConfig{
S3FsConfig: sdk.S3FsConfig{
Bucket: "s3bucket",
Endpoint: "endpoint",
},
Bucket: "s3bucket",
Endpoint: "endpoint",
}
user.FsConfig.GCSConfig = vfs.GCSFsConfig{
GCSFsConfig: sdk.GCSFsConfig{
Bucket: "gcsbucket",
},
Bucket: "gcsbucket",
}
user.FsConfig.AzBlobConfig = vfs.AzBlobFsConfig{
AzBlobFsConfig: sdk.AzBlobFsConfig{
Container: "azcontainer",
Endpoint: "azendpoint",
},
Container: "azcontainer",
SASURL: "azsasurl",
Endpoint: "azendpoint",
}
user.FsConfig.SFTPConfig = vfs.SFTPFsConfig{
SFTPFsConfig: sdk.SFTPFsConfig{
Endpoint: "sftpendpoint",
},
}
a := newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSFTP, "", 123, 0, errors.New("fake error"))
a := newActionNotification(user, operationDownload, "path", "target", "", ProtocolSFTP, 123, errors.New("fake error"))
assert.Equal(t, user.Username, a.Username)
assert.Equal(t, 0, len(a.Bucket))
assert.Equal(t, 0, len(a.Endpoint))
assert.Equal(t, 2, a.Status)
assert.Equal(t, 0, a.Status)
user.FsConfig.Provider = sdk.S3FilesystemProvider
a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSSH, "", 123, 0, nil)
user.FsConfig.Provider = dataprovider.S3FilesystemProvider
a = newActionNotification(user, operationDownload, "path", "target", "", ProtocolSSH, 123, nil)
assert.Equal(t, "s3bucket", a.Bucket)
assert.Equal(t, "endpoint", a.Endpoint)
assert.Equal(t, 1, a.Status)
user.FsConfig.Provider = sdk.GCSFilesystemProvider
a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSCP, "", 123, 0, ErrQuotaExceeded)
user.FsConfig.Provider = dataprovider.GCSFilesystemProvider
a = newActionNotification(user, operationDownload, "path", "target", "", ProtocolSCP, 123, ErrQuotaExceeded)
assert.Equal(t, "gcsbucket", a.Bucket)
assert.Equal(t, 0, len(a.Endpoint))
assert.Equal(t, 3, a.Status)
assert.Equal(t, 2, a.Status)
user.FsConfig.Provider = sdk.AzureBlobFilesystemProvider
a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSCP, "", 123, 0, nil)
user.FsConfig.Provider = dataprovider.AzureBlobFilesystemProvider
a = newActionNotification(user, operationDownload, "path", "target", "", ProtocolSCP, 123, nil)
assert.Equal(t, "azcontainer", a.Bucket)
assert.Equal(t, "azsasurl", a.Endpoint)
assert.Equal(t, 1, a.Status)
user.FsConfig.AzBlobConfig.SASURL = ""
a = newActionNotification(user, operationDownload, "path", "target", "", ProtocolSCP, 123, nil)
assert.Equal(t, "azcontainer", a.Bucket)
assert.Equal(t, "azendpoint", a.Endpoint)
assert.Equal(t, 1, a.Status)
a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSCP, "", 123, os.O_APPEND, nil)
assert.Equal(t, "azcontainer", a.Bucket)
assert.Equal(t, "azendpoint", a.Endpoint)
assert.Equal(t, 1, a.Status)
assert.Equal(t, os.O_APPEND, a.OpenFlags)
user.FsConfig.Provider = sdk.SFTPFilesystemProvider
a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSFTP, "", 123, 0, nil)
assert.Equal(t, "sftpendpoint", a.Endpoint)
}
func TestActionHTTP(t *testing.T) {
@@ -88,11 +72,9 @@ func TestActionHTTP(t *testing.T) {
Hook: fmt.Sprintf("http://%v", httpAddr),
}
user := &dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "username",
},
Username: "username",
}
a := newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSFTP, "", 123, 0, nil)
a := newActionNotification(user, operationDownload, "path", "target", "", ProtocolSFTP, 123, nil)
err := actionHandler.Handle(a)
assert.NoError(t, err)
@@ -123,15 +105,13 @@ func TestActionCMD(t *testing.T) {
Hook: hookCmd,
}
user := &dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "username",
},
Username: "username",
}
a := newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSFTP, "", 123, 0, nil)
a := newActionNotification(user, operationDownload, "path", "target", "", ProtocolSFTP, 123, nil)
err = actionHandler.Handle(a)
assert.NoError(t, err)
ExecuteActionNotification(user, OperationSSHCmd, "path", "vpath", "target", "vtarget", "sha1sum", ProtocolSSH, "", 0, nil)
SSHCommandActionNotification(user, "path", "target", "sha1sum", nil)
Config.Actions = actionsCopy
}
@@ -148,12 +128,10 @@ func TestWrongActions(t *testing.T) {
Hook: badCommand,
}
user := &dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "username",
},
Username: "username",
}
a := newActionNotification(user, operationUpload, "", "", "", "", "", ProtocolSFTP, "", 123, 0, nil)
a := newActionNotification(user, operationUpload, "", "", "", ProtocolSFTP, 123, nil)
err := actionHandler.Handle(a)
assert.Error(t, err, "action with bad command must fail")
@@ -197,22 +175,20 @@ func TestPreDeleteAction(t *testing.T) {
err = os.MkdirAll(homeDir, os.ModePerm)
assert.NoError(t, err)
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "username",
HomeDir: homeDir,
},
Username: "username",
HomeDir: homeDir,
}
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
fs := vfs.NewOsFs("id", homeDir, "")
c := NewBaseConnection("id", ProtocolSFTP, "", "", user)
fs := vfs.NewOsFs("id", homeDir, nil)
c := NewBaseConnection("id", ProtocolSFTP, user, fs)
testfile := filepath.Join(user.HomeDir, "testfile")
err = os.WriteFile(testfile, []byte("test"), os.ModePerm)
err = ioutil.WriteFile(testfile, []byte("test"), os.ModePerm)
assert.NoError(t, err)
info, err := os.Stat(testfile)
assert.NoError(t, err)
err = c.RemoveFile(fs, testfile, "testfile", info)
err = c.RemoveFile(testfile, "testfile", info)
assert.NoError(t, err)
assert.FileExists(t, testfile)

@@ -1,51 +0,0 @@
package common
import (
"sync"
"sync/atomic"
"github.com/drakkan/sftpgo/v2/logger"
)
// clientsMap is a struct containing the map of the connected clients
type clientsMap struct {
totalConnections int32
mu sync.RWMutex
clients map[string]int
}
func (c *clientsMap) add(source string) {
atomic.AddInt32(&c.totalConnections, 1)
c.mu.Lock()
defer c.mu.Unlock()
c.clients[source]++
}
func (c *clientsMap) remove(source string) {
c.mu.Lock()
defer c.mu.Unlock()
if val, ok := c.clients[source]; ok {
atomic.AddInt32(&c.totalConnections, -1)
c.clients[source]--
if val > 1 {
return
}
delete(c.clients, source)
} else {
logger.Warn(logSender, "", "cannot remove client %v it is not mapped", source)
}
}
func (c *clientsMap) getTotal() int32 {
return atomic.LoadInt32(&c.totalConnections)
}
func (c *clientsMap) getTotalFrom(source string) int {
c.mu.RLock()
defer c.mu.RUnlock()
return c.clients[source]
}

@@ -1,59 +0,0 @@
package common
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestClientsMap(t *testing.T) {
m := clientsMap{
clients: make(map[string]int),
}
ip1 := "192.168.1.1"
ip2 := "192.168.1.2"
m.add(ip1)
assert.Equal(t, int32(1), m.getTotal())
assert.Equal(t, 1, m.getTotalFrom(ip1))
assert.Equal(t, 0, m.getTotalFrom(ip2))
m.add(ip1)
m.add(ip2)
assert.Equal(t, int32(3), m.getTotal())
assert.Equal(t, 2, m.getTotalFrom(ip1))
assert.Equal(t, 1, m.getTotalFrom(ip2))
m.add(ip1)
m.add(ip1)
m.add(ip2)
assert.Equal(t, int32(6), m.getTotal())
assert.Equal(t, 4, m.getTotalFrom(ip1))
assert.Equal(t, 2, m.getTotalFrom(ip2))
m.remove(ip2)
assert.Equal(t, int32(5), m.getTotal())
assert.Equal(t, 4, m.getTotalFrom(ip1))
assert.Equal(t, 1, m.getTotalFrom(ip2))
m.remove("unknown")
assert.Equal(t, int32(5), m.getTotal())
assert.Equal(t, 4, m.getTotalFrom(ip1))
assert.Equal(t, 1, m.getTotalFrom(ip2))
m.remove(ip2)
assert.Equal(t, int32(4), m.getTotal())
assert.Equal(t, 4, m.getTotalFrom(ip1))
assert.Equal(t, 0, m.getTotalFrom(ip2))
m.remove(ip1)
m.remove(ip1)
m.remove(ip1)
assert.Equal(t, int32(1), m.getTotal())
assert.Equal(t, 1, m.getTotalFrom(ip1))
assert.Equal(t, 0, m.getTotalFrom(ip2))
m.remove(ip1)
assert.Equal(t, int32(0), m.getTotal())
assert.Equal(t, 0, m.getTotalFrom(ip1))
assert.Equal(t, 0, m.getTotalFrom(ip2))
}

@@ -11,7 +11,6 @@ import (
"os"
"os/exec"
"path/filepath"
"strconv"
"strings"
"sync"
"sync/atomic"
@@ -19,41 +18,33 @@ import (
"github.com/pires/go-proxyproto"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/httpclient"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/metric"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/vfs"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/httpclient"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/metrics"
"github.com/drakkan/sftpgo/utils"
)
// constants
const (
logSender = "common"
uploadLogSender = "Upload"
downloadLogSender = "Download"
renameLogSender = "Rename"
rmdirLogSender = "Rmdir"
mkdirLogSender = "Mkdir"
symlinkLogSender = "Symlink"
removeLogSender = "Remove"
chownLogSender = "Chown"
chmodLogSender = "Chmod"
chtimesLogSender = "Chtimes"
truncateLogSender = "Truncate"
operationDownload = "download"
operationUpload = "upload"
operationDelete = "delete"
// Pre-download action name
OperationPreDownload = "pre-download"
// Pre-upload action name
OperationPreUpload = "pre-upload"
operationPreDelete = "pre-delete"
operationRename = "rename"
operationMkdir = "mkdir"
operationRmdir = "rmdir"
// SSH command action name
OperationSSHCmd = "ssh_cmd"
logSender = "common"
uploadLogSender = "Upload"
downloadLogSender = "Download"
renameLogSender = "Rename"
rmdirLogSender = "Rmdir"
mkdirLogSender = "Mkdir"
symlinkLogSender = "Symlink"
removeLogSender = "Remove"
chownLogSender = "Chown"
chmodLogSender = "Chmod"
chtimesLogSender = "Chtimes"
truncateLogSender = "Truncate"
operationDownload = "download"
operationUpload = "upload"
operationDelete = "delete"
operationPreDelete = "pre-delete"
operationRename = "rename"
operationSSHCmd = "ssh_cmd"
chtimesFormat = "2006-01-02T15:04:05" // YYYY-MM-DDTHH:MM:SS
idleTimeoutCheckInterval = 3 * time.Minute
)
@@ -74,14 +65,11 @@ const (
// Supported protocols
const (
ProtocolSFTP = "SFTP"
ProtocolSCP = "SCP"
ProtocolSSH = "SSH"
ProtocolFTP = "FTP"
ProtocolWebDAV = "DAV"
ProtocolHTTP = "HTTP"
ProtocolHTTPShare = "HTTPShare"
ProtocolDataRetention = "DataRetention"
ProtocolSFTP = "SFTP"
ProtocolSCP = "SCP"
ProtocolSSH = "SSH"
ProtocolFTP = "FTP"
ProtocolWebDAV = "DAV"
)
// Upload modes
@@ -91,12 +79,6 @@ const (
UploadModeAtomicWithResume
)
func init() {
Connections.clients = clientsMap{
clients: make(map[string]int),
}
}
// errors definitions
var (
ErrPermissionDenied = errors.New("permission denied")
@@ -108,8 +90,6 @@ var (
ErrConnectionDenied = errors.New("you are not allowed to connect")
ErrNoBinding = errors.New("no binding configured")
ErrCrtRevoked = errors.New("your certificate has been revoked")
ErrNoCredentials = errors.New("no credential provided")
ErrInternalFailure = errors.New("internal failure")
errNoTransfer = errors.New("requested transfer not found")
errTransferMismatch = errors.New("transfer mismatch")
)
@@ -123,11 +103,7 @@ var (
QuotaScans ActiveScans
idleTimeoutTicker *time.Ticker
idleTimeoutTickerDone chan bool
supportedProtocols = []string{ProtocolSFTP, ProtocolSCP, ProtocolSSH, ProtocolFTP, ProtocolWebDAV,
ProtocolHTTP, ProtocolHTTPShare}
disconnHookProtocols = []string{ProtocolSFTP, ProtocolSCP, ProtocolSSH, ProtocolFTP}
// the map key is the protocol, for each protocol we can have multiple rate limiters
rateLimiters map[string][]*rateLimiter
supportedProtocols = []string{ProtocolSFTP, ProtocolSCP, ProtocolSSH, ProtocolFTP, ProtocolWebDAV}
)
// Initialize sets the common configuration
@@ -147,42 +123,9 @@ func Initialize(c Configuration) error {
logger.Info(logSender, "", "defender initialized with config %+v", c.DefenderConfig)
Config.defender = defender
}
rateLimiters = make(map[string][]*rateLimiter)
for _, rlCfg := range c.RateLimitersConfig {
if rlCfg.isEnabled() {
if err := rlCfg.validate(); err != nil {
return fmt.Errorf("rate limiters initialization error: %v", err)
}
allowList, err := util.ParseAllowedIPAndRanges(rlCfg.AllowList)
if err != nil {
return fmt.Errorf("unable to parse rate limiter allow list %v: %v", rlCfg.AllowList, err)
}
rateLimiter := rlCfg.getLimiter()
rateLimiter.allowList = allowList
for _, protocol := range rlCfg.Protocols {
rateLimiters[protocol] = append(rateLimiters[protocol], rateLimiter)
}
}
}
vfs.SetTempPath(c.TempPath)
dataprovider.SetTempPath(c.TempPath)
return nil
}
// LimitRate blocks until all the configured rate limiters
// allow one event to happen.
// It returns an error if the time to wait exceeds the max
// allowed delay
func LimitRate(protocol, ip string) (time.Duration, error) {
for _, limiter := range rateLimiters[protocol] {
if delay, err := limiter.Wait(ip); err != nil {
logger.Debug(logSender, "", "protocol %v ip %v: %v", protocol, ip, err)
return delay, err
}
}
return 0, nil
}
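
A sketch of how a listener's accept path might consult LimitRate before doing any further work; the handler name is hypothetical, and the returned delay could be propagated to protocols that support a "retry later" reply:

func handleConn(conn net.Conn) {
    ip := util.GetIPFromRemoteAddress(conn.RemoteAddr().String())
    if _, err := LimitRate(ProtocolSFTP, ip); err != nil {
        // the configured rate was exceeded and waiting would take longer
        // than the max allowed delay: drop the connection early
        conn.Close()
        return
    }
    // continue with the protocol handshake
}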
// ReloadDefender reloads the defender's block and safe lists
func ReloadDefender() error {
if Config.defender == nil {
@@ -211,31 +154,13 @@ func GetDefenderBanTime(ip string) *time.Time {
return Config.defender.GetBanTime(ip)
}
// GetDefenderHosts returns hosts that are banned or for which some violations have been detected
func GetDefenderHosts() []*DefenderEntry {
if Config.defender == nil {
return nil
}
return Config.defender.GetHosts()
}
// GetDefenderHost returns a defender host by ip, if any
func GetDefenderHost(ip string) (*DefenderEntry, error) {
if Config.defender == nil {
return nil, errors.New("defender is disabled")
}
return Config.defender.GetHost(ip)
}
// DeleteDefenderHost removes the specified IP address from the defender lists
func DeleteDefenderHost(ip string) bool {
// Unban removes the specified IP address from the banned ones
func Unban(ip string) bool {
if Config.defender == nil {
return false
}
return Config.defender.DeleteHost(ip)
return Config.defender.Unban(ip)
}
// GetDefenderScore returns the score for the given IP
@@ -291,14 +216,12 @@ type ActiveTransfer interface {
SignalClose()
Truncate(fsPath string, size int64) (int64, error)
GetRealFsPath(fsPath string) string
SetTimes(fsPath string, atime time.Time, mtime time.Time) bool
}
// ActiveConnection defines the interface for the current active connections
type ActiveConnection interface {
GetID() string
GetUsername() string
GetLocalAddress() string
GetRemoteAddress() string
GetClientVersion() string
GetProtocol() string
@@ -342,10 +265,10 @@ func (t *ConnectionTransfer) getConnectionTransferAsString() string {
}
result += fmt.Sprintf("%#v ", t.VirtualPath)
if t.Size > 0 {
elapsed := time.Since(util.GetTimeFromMsecSinceEpoch(t.StartTime))
speed := float64(t.Size) / float64(util.GetTimeAsMsSinceEpoch(time.Now())-t.StartTime)
result += fmt.Sprintf("Size: %#v Elapsed: %#v Speed: \"%.1f KB/s\"", util.ByteCountIEC(t.Size),
util.GetDurationAsString(elapsed), speed)
elapsed := time.Since(utils.GetTimeFromMsecSinceEpoch(t.StartTime))
speed := float64(t.Size) / float64(utils.GetTimeAsMsSinceEpoch(time.Now())-t.StartTime)
result += fmt.Sprintf("Size: %#v Elapsed: %#v Speed: \"%.1f KB/s\"", utils.ByteCountIEC(t.Size),
utils.GetDurationAsString(elapsed), speed)
}
return result
}
@@ -372,12 +295,6 @@ type Configuration struct {
// 2 means "ignore mode for cloud fs": requests for changing permissions and owner/group/time are
// silently ignored for cloud based filesystem such as S3, GCS, Azure Blob
SetstatMode int `json:"setstat_mode" mapstructure:"setstat_mode"`
// TempPath defines the path for temporary files such as those used for atomic uploads or file pipes.
// If you set this option you must make sure that the defined path exists, is accessible for writing
// by the user running SFTPGo, and is on the same filesystem as the users home directories otherwise
// the renaming for atomic uploads will become a copy and therefore may take a long time.
// The temporary files are not namespaced. The default is generally fine. Leave empty for the default.
TempPath string `json:"temp_path" mapstructure:"temp_path"`
// Support for HAProxy PROXY protocol.
// If you are running SFTPGo behind a proxy server such as HAProxy, AWS ELB or NGINX, you can enable
// the proxy protocol. It provides a convenient way to safely transport connection information
@@ -395,29 +312,14 @@ type Configuration struct {
// If proxy protocol is set to 2 and we receive a proxy header from an IP that is not in the list then the
// connection will be rejected.
ProxyAllowed []string `json:"proxy_allowed" mapstructure:"proxy_allowed"`
// Absolute path to an external program or an HTTP URL to invoke as soon as SFTPGo starts.
// If you define an HTTP URL it will be invoked using a `GET` request.
// Please note that SFTPGo services may not yet be available when this hook is run.
// Leave empty to disable.
StartupHook string `json:"startup_hook" mapstructure:"startup_hook"`
// Absolute path to an external program or an HTTP URL to invoke after a user connects
// and before they try to log in. It allows you to reject the connection based on the source
// IP address. Leave empty to disable.
PostConnectHook string `json:"post_connect_hook" mapstructure:"post_connect_hook"`
// Absolute path to an external program or an HTTP URL to invoke after an SSH/FTP connection ends.
// Leave empty to disable.
PostDisconnectHook string `json:"post_disconnect_hook" mapstructure:"post_disconnect_hook"`
// Absolute path to an external program or an HTTP URL to invoke after a data retention check completes.
// Leave empty to disable.
DataRetentionHook string `json:"data_retention_hook" mapstructure:"data_retention_hook"`
// Maximum number of concurrent client connections. 0 means unlimited
MaxTotalConnections int `json:"max_total_connections" mapstructure:"max_total_connections"`
// Maximum number of concurrent client connections from the same host (IP). 0 means unlimited
MaxPerHostConnections int `json:"max_per_host_connections" mapstructure:"max_per_host_connections"`
// Defender configuration
DefenderConfig DefenderConfig `json:"defender" mapstructure:"defender"`
// Rate limiter configurations
RateLimitersConfig []RateLimiterConfig `json:"rate_limiters" mapstructure:"rate_limiters"`
DefenderConfig DefenderConfig `json:"defender" mapstructure:"defender"`
idleTimeoutAsDuration time.Duration
idleLoginTimeout time.Duration
defender Defender
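
For illustration, a minimal Configuration literal wiring together the limits and hooks from the fuller (removed) side of this hunk; the values are arbitrary examples, not defaults:

cfg := Configuration{
    ProxyProtocol:         1,                    // parse PROXY headers when present
    ProxyAllowed:          []string{"10.0.0.1"}, // trust headers from this proxy only
    MaxTotalConnections:   100,
    MaxPerHostConnections: 10,
    PostConnectHook:       "http://127.0.0.1:8000/hook", // hypothetical receiver
}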
@@ -429,8 +331,9 @@ func (c *Configuration) IsAtomicUploadEnabled() bool {
}
// GetProxyListener returns a wrapper for the given listener that supports the
// HAProxy Proxy Protocol
// HAProxy Proxy Protocol or nil if the proxy protocol is not configured
func (c *Configuration) GetProxyListener(listener net.Listener) (*proxyproto.Listener, error) {
var proxyListener *proxyproto.Listener
var err error
if c.ProxyProtocol > 0 {
var policyFunc func(upstream net.Addr) (proxyproto.Policy, error)
@@ -452,105 +355,12 @@ func (c *Configuration) GetProxyListener(listener net.Listener) (*proxyproto.Lis
}
}
}
return &proxyproto.Listener{
Listener: listener,
Policy: policyFunc,
ReadHeaderTimeout: 5 * time.Second,
}, nil
}
return nil, errors.New("proxy protocol not configured")
}
// ExecuteStartupHook runs the startup hook if defined
func (c *Configuration) ExecuteStartupHook() error {
if c.StartupHook == "" {
return nil
}
if strings.HasPrefix(c.StartupHook, "http") {
var url *url.URL
url, err := url.Parse(c.StartupHook)
if err != nil {
logger.Warn(logSender, "", "Invalid startup hook %#v: %v", c.StartupHook, err)
return err
proxyListener = &proxyproto.Listener{
Listener: listener,
Policy: policyFunc,
}
startTime := time.Now()
resp, err := httpclient.RetryableGet(url.String())
if err != nil {
logger.Warn(logSender, "", "Error executing startup hook: %v", err)
return err
}
defer resp.Body.Close()
logger.Debug(logSender, "", "Startup hook executed, elapsed: %v, response code: %v", time.Since(startTime), resp.StatusCode)
return nil
}
if !filepath.IsAbs(c.StartupHook) {
err := fmt.Errorf("invalid startup hook %#v", c.StartupHook)
logger.Warn(logSender, "", "Invalid startup hook %#v", c.StartupHook)
return err
}
startTime := time.Now()
ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
defer cancel()
cmd := exec.CommandContext(ctx, c.StartupHook)
err := cmd.Run()
logger.Debug(logSender, "", "Startup hook executed, elapsed: %v, error: %v", time.Since(startTime), err)
return nil
}
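
For the HTTP variant of the startup hook, the receiver only has to answer a plain GET issued as soon as SFTPGo starts; a sketch with a hypothetical endpoint path:

http.HandleFunc("/startup", func(w http.ResponseWriter, r *http.Request) {
    // no query parameters are sent, the request itself is the notification
    w.WriteHeader(http.StatusOK)
})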
func (c *Configuration) executePostDisconnectHook(remoteAddr, protocol, username, connID string, connectionTime time.Time) {
ipAddr := util.GetIPFromRemoteAddress(remoteAddr)
connDuration := int64(time.Since(connectionTime) / time.Millisecond)
if strings.HasPrefix(c.PostDisconnectHook, "http") {
var url *url.URL
url, err := url.Parse(c.PostDisconnectHook)
if err != nil {
logger.Warn(protocol, connID, "Invalid post disconnect hook %#v: %v", c.PostDisconnectHook, err)
return
}
q := url.Query()
q.Add("ip", ipAddr)
q.Add("protocol", protocol)
q.Add("username", username)
q.Add("connection_duration", strconv.FormatInt(connDuration, 10))
url.RawQuery = q.Encode()
startTime := time.Now()
resp, err := httpclient.RetryableGet(url.String())
respCode := 0
if err == nil {
respCode = resp.StatusCode
resp.Body.Close()
}
logger.Debug(protocol, connID, "Post disconnect hook response code: %v, elapsed: %v, err: %v",
respCode, time.Since(startTime), err)
return
}
if !filepath.IsAbs(c.PostDisconnectHook) {
logger.Debug(protocol, connID, "invalid post disconnect hook %#v", c.PostDisconnectHook)
return
}
ctx, cancel := context.WithTimeout(context.Background(), 20*time.Second)
defer cancel()
startTime := time.Now()
cmd := exec.CommandContext(ctx, c.PostDisconnectHook)
cmd.Env = append(os.Environ(),
fmt.Sprintf("SFTPGO_CONNECTION_IP=%v", ipAddr),
fmt.Sprintf("SFTPGO_CONNECTION_USERNAME=%v", username),
fmt.Sprintf("SFTPGO_CONNECTION_DURATION=%v", connDuration),
fmt.Sprintf("SFTPGO_CONNECTION_PROTOCOL=%v", protocol))
err := cmd.Run()
logger.Debug(protocol, connID, "Post disconnect hook executed, elapsed: %v error: %v", time.Since(startTime), err)
}
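
A matching sketch for the HTTP variant of the post-disconnect hook; the endpoint path is hypothetical, the query parameter names are the ones added above:

http.HandleFunc("/disconnect", func(w http.ResponseWriter, r *http.Request) {
    q := r.URL.Query()
    log.Printf("user %q disconnected: ip=%s protocol=%s duration=%s ms",
        q.Get("username"), q.Get("ip"), q.Get("protocol"), q.Get("connection_duration"))
    w.WriteHeader(http.StatusOK)
})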
func (c *Configuration) checkPostDisconnectHook(remoteAddr, protocol, username, connID string, connectionTime time.Time) {
if c.PostDisconnectHook == "" {
return
}
if !util.IsStringInSlice(protocol, disconnHookProtocols) {
return
}
go c.executePostDisconnectHook(remoteAddr, protocol, username, connID, connectionTime)
return proxyListener, nil
}
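
Since the 2.0.x variant returns a nil listener when the proxy protocol is disabled, callers are expected to fall back to the plain listener; a usage sketch under that assumption:

func listenWithProxySupport(c *Configuration, addr string) (net.Listener, error) {
    listener, err := net.Listen("tcp", addr)
    if err != nil {
        return nil, err
    }
    proxyListener, err := c.GetProxyListener(listener)
    if err != nil {
        return nil, err
    }
    if proxyListener != nil {
        // PROXY protocol enabled: accept through the wrapper so the real
        // client address replaces the proxy address
        return proxyListener, nil
    }
    return listener, nil
}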
// ExecutePostConnectHook executes the post connect hook if defined
@@ -566,12 +376,13 @@ func (c *Configuration) ExecutePostConnectHook(ipAddr, protocol string) error {
ipAddr, c.PostConnectHook, err)
return err
}
httpClient := httpclient.GetRetraybleHTTPClient()
q := url.Query()
q.Add("ip", ipAddr)
q.Add("protocol", protocol)
url.RawQuery = q.Encode()
resp, err := httpclient.RetryableGet(url.String())
resp, err := httpClient.Get(url.String())
if err != nil {
logger.Warn(protocol, "", "Login from ip %#v denied, error executing post connect hook: %v", ipAddr, err)
return err
@@ -640,9 +451,6 @@ func (c *SSHConnection) Close() error {
// ActiveConnections holds the current active connections with the associated transfers
type ActiveConnections struct {
// clients contains both the authenticated, established connections and the ones waiting
// for authentication
clients clientsMap
sync.RWMutex
connections []ActiveConnection
sshConnections []*SSHConnection
@@ -669,9 +477,8 @@ func (conns *ActiveConnections) Add(c ActiveConnection) {
defer conns.Unlock()
conns.connections = append(conns.connections, c)
metric.UpdateActiveConnectionsSize(len(conns.connections))
logger.Debug(c.GetProtocol(), c.GetID(), "connection added, local address %#v, remote address %#v, num open connections: %v",
c.GetLocalAddress(), c.GetRemoteAddress(), len(conns.connections))
metrics.UpdateActiveConnectionsSize(len(conns.connections))
logger.Debug(c.GetProtocol(), c.GetID(), "connection added, num open connections: %v", len(conns.connections))
}
// Swap replaces an existing connection with the given one.
@@ -684,10 +491,8 @@ func (conns *ActiveConnections) Swap(c ActiveConnection) error {
for idx, conn := range conns.connections {
if conn.GetID() == c.GetID() {
err := conn.CloseFS()
conns.connections[idx] = c
logger.Debug(logSender, c.GetID(), "connection swapped, close fs error: %v", err)
conn = nil
conns.connections[idx] = c
return nil
}
}
@@ -706,11 +511,9 @@ func (conns *ActiveConnections) Remove(connectionID string) {
conns.connections[idx] = conns.connections[lastIdx]
conns.connections[lastIdx] = nil
conns.connections = conns.connections[:lastIdx]
metric.UpdateActiveConnectionsSize(lastIdx)
logger.Debug(conn.GetProtocol(), conn.GetID(), "connection removed, local address %#v, remote address %#v close fs error: %v, num open connections: %v",
conn.GetLocalAddress(), conn.GetRemoteAddress(), err, lastIdx)
Config.checkPostDisconnectHook(conn.GetRemoteAddress(), conn.GetProtocol(), conn.GetUsername(),
conn.GetID(), conn.GetConnectionTime())
metrics.UpdateActiveConnectionsSize(lastIdx)
logger.Debug(conn.GetProtocol(), conn.GetID(), "connection removed, close fs error: %v, num open connections: %v",
err, lastIdx)
return
}
}
@@ -800,9 +603,9 @@ func (conns *ActiveConnections) checkIdles() {
logger.Debug(conn.GetProtocol(), conn.GetID(), "close idle connection, idle time: %v, username: %#v close err: %v",
time.Since(conn.GetLastActivity()), conn.GetUsername(), err)
if isFTPNoAuth {
ip := util.GetIPFromRemoteAddress(c.GetRemoteAddress())
ip := utils.GetIPFromRemoteAddress(c.GetRemoteAddress())
logger.ConnectionFailedLog("", ip, dataprovider.LoginMethodNoAuthTryed, c.GetProtocol(), "client idle")
metric.AddNoAuthTryed()
metrics.AddNoAuthTryed()
AddDefenderEvent(ip, HostEventNoLoginTried)
dataprovider.ExecutePostLoginHook(&dataprovider.User{}, dataprovider.LoginMethodNoAuthTryed, ip, c.GetProtocol(),
dataprovider.ErrNoAuthTryed)
@@ -814,51 +617,16 @@ func (conns *ActiveConnections) checkIdles() {
conns.RUnlock()
}
// AddClientConnection stores a new client connection
func (conns *ActiveConnections) AddClientConnection(ipAddr string) {
conns.clients.add(ipAddr)
}
// RemoveClientConnection removes a disconnected client from the tracked ones
func (conns *ActiveConnections) RemoveClientConnection(ipAddr string) {
conns.clients.remove(ipAddr)
}
// GetClientConnections returns the total number of client connections
func (conns *ActiveConnections) GetClientConnections() int32 {
return conns.clients.getTotal()
}
// IsNewConnectionAllowed returns false if the maximum number of concurrent allowed connections is exceeded
func (conns *ActiveConnections) IsNewConnectionAllowed(ipAddr string) bool {
if Config.MaxTotalConnections == 0 && Config.MaxPerHostConnections == 0 {
func (conns *ActiveConnections) IsNewConnectionAllowed() bool {
if Config.MaxTotalConnections == 0 {
return true
}
if Config.MaxPerHostConnections > 0 {
if total := conns.clients.getTotalFrom(ipAddr); total > Config.MaxPerHostConnections {
logger.Debug(logSender, "", "active connections from %v %v/%v", ipAddr, total, Config.MaxPerHostConnections)
AddDefenderEvent(ipAddr, HostEventLimitExceeded)
return false
}
}
conns.RLock()
defer conns.RUnlock()
if Config.MaxTotalConnections > 0 {
if total := conns.clients.getTotal(); total > int32(Config.MaxTotalConnections) {
logger.Debug(logSender, "", "active client connections %v/%v", total, Config.MaxTotalConnections)
return false
}
// on a single SFTP connection we could have multiple SFTP channels or commands
// so we check the established connections too
conns.RLock()
defer conns.RUnlock()
return len(conns.connections) < Config.MaxTotalConnections
}
return true
return len(conns.connections) < Config.MaxTotalConnections
}
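
A sketch of the intended call site, using the fuller (removed) signature that takes the source IP; the accept loop and the serve function are hypothetical, and serve must call Connections.RemoveClientConnection(ipAddr) when it finishes:

for {
    conn, err := listener.Accept()
    if err != nil {
        return err
    }
    ipAddr := util.GetIPFromRemoteAddress(conn.RemoteAddr().String())
    if !Connections.IsNewConnectionAllowed(ipAddr) {
        conn.Close() // over the global or per-host limit
        continue
    }
    Connections.AddClientConnection(ipAddr)
    go serve(conn, ipAddr)
}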
// GetStats returns stats for active connections
@@ -873,8 +641,8 @@ func (conns *ActiveConnections) GetStats() []*ConnectionStatus {
ConnectionID: c.GetID(),
ClientVersion: c.GetClientVersion(),
RemoteAddress: c.GetRemoteAddress(),
ConnectionTime: util.GetTimeAsMsSinceEpoch(c.GetConnectionTime()),
LastActivity: util.GetTimeAsMsSinceEpoch(c.GetLastActivity()),
ConnectionTime: utils.GetTimeAsMsSinceEpoch(c.GetConnectionTime()),
LastActivity: utils.GetTimeAsMsSinceEpoch(c.GetLastActivity()),
Protocol: c.GetProtocol(),
Command: c.GetCommand(),
Transfers: c.GetTransfers(),
@@ -908,8 +676,8 @@ type ConnectionStatus struct {
// GetConnectionDuration returns the connection duration as string
func (c *ConnectionStatus) GetConnectionDuration() string {
elapsed := time.Since(util.GetTimeFromMsecSinceEpoch(c.ConnectionTime))
return util.GetDurationAsString(elapsed)
elapsed := time.Since(utils.GetTimeFromMsecSinceEpoch(c.ConnectionTime))
return utils.GetDurationAsString(elapsed)
}
// GetConnectionInfo returns connection info.
@@ -964,8 +732,8 @@ type ActiveVirtualFolderQuotaScan struct {
// ActiveScans holds the active quota scans
type ActiveScans struct {
sync.RWMutex
UserScans []ActiveQuotaScan
FolderScans []ActiveVirtualFolderQuotaScan
UserHomeScans []ActiveQuotaScan
FolderScans []ActiveVirtualFolderQuotaScan
}
// GetUsersQuotaScans returns the active quota scans for users home directories
@@ -973,8 +741,8 @@ func (s *ActiveScans) GetUsersQuotaScans() []ActiveQuotaScan {
s.RLock()
defer s.RUnlock()
scans := make([]ActiveQuotaScan, len(s.UserScans))
copy(scans, s.UserScans)
scans := make([]ActiveQuotaScan, len(s.UserHomeScans))
copy(scans, s.UserHomeScans)
return scans
}
@@ -984,14 +752,14 @@ func (s *ActiveScans) AddUserQuotaScan(username string) bool {
s.Lock()
defer s.Unlock()
for _, scan := range s.UserScans {
for _, scan := range s.UserHomeScans {
if scan.Username == username {
return false
}
}
s.UserScans = append(s.UserScans, ActiveQuotaScan{
s.UserHomeScans = append(s.UserHomeScans, ActiveQuotaScan{
Username: username,
StartTime: util.GetTimeAsMsSinceEpoch(time.Now()),
StartTime: utils.GetTimeAsMsSinceEpoch(time.Now()),
})
return true
}
@@ -1002,15 +770,18 @@ func (s *ActiveScans) RemoveUserQuotaScan(username string) bool {
s.Lock()
defer s.Unlock()
for idx, scan := range s.UserScans {
indexToRemove := -1
for i, scan := range s.UserHomeScans {
if scan.Username == username {
lastIdx := len(s.UserScans) - 1
s.UserScans[idx] = s.UserScans[lastIdx]
s.UserScans = s.UserScans[:lastIdx]
return true
indexToRemove = i
break
}
}
if indexToRemove >= 0 {
s.UserHomeScans[indexToRemove] = s.UserHomeScans[len(s.UserHomeScans)-1]
s.UserHomeScans = s.UserHomeScans[:len(s.UserHomeScans)-1]
return true
}
return false
}
@@ -1036,7 +807,7 @@ func (s *ActiveScans) AddVFolderQuotaScan(folderName string) bool {
}
s.FolderScans = append(s.FolderScans, ActiveVirtualFolderQuotaScan{
Name: folderName,
StartTime: util.GetTimeAsMsSinceEpoch(time.Now()),
StartTime: utils.GetTimeAsMsSinceEpoch(time.Now()),
})
return true
}
@@ -1047,14 +818,17 @@ func (s *ActiveScans) RemoveVFolderQuotaScan(folderName string) bool {
s.Lock()
defer s.Unlock()
for idx, scan := range s.FolderScans {
indexToRemove := -1
for i, scan := range s.FolderScans {
if scan.Name == folderName {
lastIdx := len(s.FolderScans) - 1
s.FolderScans[idx] = s.FolderScans[lastIdx]
s.FolderScans = s.FolderScans[:lastIdx]
return true
indexToRemove = i
break
}
}
if indexToRemove >= 0 {
s.FolderScans[indexToRemove] = s.FolderScans[len(s.FolderScans)-1]
s.FolderScans = s.FolderScans[:len(s.FolderScans)-1]
return true
}
return false
}
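
Both Remove methods above rely on the same O(1) swap-remove idiom: the element to drop is overwritten with the last one and the slice is shortened by one. Ordering is not preserved, which is acceptable for a set of active scans. In isolation:

func swapRemove(s []ActiveQuotaScan, idx int) []ActiveQuotaScan {
    s[idx] = s[len(s)-1] // overwrite the victim with the last element
    return s[:len(s)-1]  // shrink the slice; order is not preserved
}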


@@ -1,9 +1,9 @@
package common
import (
"encoding/json"
"fmt"
"net"
"net/http"
"os"
"os/exec"
"path/filepath"
@@ -13,37 +13,45 @@ import (
"testing"
"time"
"github.com/alexedwards/argon2id"
"github.com/rs/zerolog"
"github.com/spf13/viper"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/crypto/bcrypt"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/sdk"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/vfs"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/httpclient"
"github.com/drakkan/sftpgo/kms"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/vfs"
)
const (
logSenderTest = "common_test"
httpAddr = "127.0.0.1:9999"
httpProxyAddr = "127.0.0.1:7777"
configDir = ".."
osWindows = "windows"
userTestUsername = "common_test_username"
userTestPwd = "common_test_pwd"
)
type providerConf struct {
Config dataprovider.Config `json:"data_provider" mapstructure:"data_provider"`
}
type fakeConnection struct {
*BaseConnection
command string
}
func (c *fakeConnection) AddUser(user dataprovider.User) error {
_, err := user.GetFilesystem(c.GetID())
fs, err := user.GetFilesystem(c.GetID())
if err != nil {
return err
}
c.BaseConnection.User = user
c.BaseConnection.Fs = fs
return nil
}
@@ -60,10 +68,6 @@ func (c *fakeConnection) GetCommand() string {
return c.command
}
func (c *fakeConnection) GetLocalAddress() string {
return ""
}
func (c *fakeConnection) GetRemoteAddress() string {
return ""
}
@@ -80,6 +84,110 @@ func (c *customNetConn) Close() error {
return c.Conn.Close()
}
func TestMain(m *testing.M) {
logfilePath := "common_test.log"
logger.InitLogger(logfilePath, 5, 1, 28, false, zerolog.DebugLevel)
viper.SetEnvPrefix("sftpgo")
replacer := strings.NewReplacer(".", "__")
viper.SetEnvKeyReplacer(replacer)
viper.SetConfigName("sftpgo")
viper.AutomaticEnv()
viper.AllowEmptyEnv(true)
driver, err := initializeDataprovider(-1)
if err != nil {
logger.WarnToConsole("error initializing data provider: %v", err)
os.Exit(1)
}
logger.InfoToConsole("Starting COMMON tests, provider: %v", driver)
err = Initialize(Configuration{})
if err != nil {
logger.WarnToConsole("error initializing common: %v", err)
os.Exit(1)
}
httpConfig := httpclient.Config{
Timeout: 5,
}
httpConfig.Initialize(configDir) //nolint:errcheck
go func() {
// start a test HTTP server to receive action notifications
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
fmt.Fprintf(w, "OK\n")
})
http.HandleFunc("/404", func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusNotFound)
fmt.Fprintf(w, "Not found\n")
})
if err := http.ListenAndServe(httpAddr, nil); err != nil {
logger.ErrorToConsole("could not start HTTP notification server: %v", err)
os.Exit(1)
}
}()
go func() {
Config.ProxyProtocol = 2
listener, err := net.Listen("tcp", httpProxyAddr)
if err != nil {
logger.ErrorToConsole("error creating listener for proxy protocol server: %v", err)
os.Exit(1)
}
proxyListener, err := Config.GetProxyListener(listener)
if err != nil {
logger.ErrorToConsole("error creating proxy protocol listener: %v", err)
os.Exit(1)
}
Config.ProxyProtocol = 0
s := &http.Server{}
if err := s.Serve(proxyListener); err != nil {
logger.ErrorToConsole("could not start HTTP proxy protocol server: %v", err)
os.Exit(1)
}
}()
waitTCPListening(httpAddr)
waitTCPListening(httpProxyAddr)
exitCode := m.Run()
os.Remove(logfilePath) //nolint:errcheck
os.Exit(exitCode)
}
func waitTCPListening(address string) {
for {
conn, err := net.Dial("tcp", address)
if err != nil {
logger.WarnToConsole("tcp server %v not listening: %v", address, err)
time.Sleep(100 * time.Millisecond)
continue
}
logger.InfoToConsole("tcp server %v now listening", address)
conn.Close()
break
}
}
func initializeDataprovider(trackQuota int) (string, error) {
configDir := ".."
viper.AddConfigPath(configDir)
if err := viper.ReadInConfig(); err != nil {
return "", err
}
var cfg providerConf
if err := viper.Unmarshal(&cfg); err != nil {
return "", err
}
if trackQuota >= 0 && trackQuota <= 2 {
cfg.Config.TrackQuota = trackQuota
}
return cfg.Config.Driver, dataprovider.Initialize(cfg.Config, configDir, true)
}
func closeDataprovider() error {
return dataprovider.Close()
}
func TestSSHConnections(t *testing.T) {
conn1, conn2 := net.Pipe()
now := time.Now()
@@ -135,11 +243,8 @@ func TestDefenderIntegration(t *testing.T) {
assert.False(t, IsBanned(ip))
assert.Nil(t, GetDefenderBanTime(ip))
assert.False(t, DeleteDefenderHost(ip))
assert.False(t, Unban(ip))
assert.Equal(t, 0, GetDefenderScore(ip))
_, err := GetDefenderHost(ip)
assert.Error(t, err)
assert.Nil(t, GetDefenderHosts())
Config.DefenderConfig = DefenderConfig{
Enabled: true,
@@ -152,7 +257,7 @@ func TestDefenderIntegration(t *testing.T) {
EntriesSoftLimit: 100,
EntriesHardLimit: 150,
}
err = Initialize(Config)
err := Initialize(Config)
assert.Error(t, err)
Config.DefenderConfig.Threshold = 3
err = Initialize(Config)
@@ -162,164 +267,40 @@ func TestDefenderIntegration(t *testing.T) {
AddDefenderEvent(ip, HostEventNoLoginTried)
assert.False(t, IsBanned(ip))
assert.Equal(t, 2, GetDefenderScore(ip))
entry, err := GetDefenderHost(ip)
assert.NoError(t, err)
asJSON, err := json.Marshal(&entry)
assert.NoError(t, err)
assert.Equal(t, `{"id":"3132372e312e312e31","ip":"127.1.1.1","score":2}`, string(asJSON), "entry %v", entry)
assert.True(t, DeleteDefenderHost(ip))
assert.False(t, Unban(ip))
assert.Nil(t, GetDefenderBanTime(ip))
AddDefenderEvent(ip, HostEventLoginFailed)
AddDefenderEvent(ip, HostEventNoLoginTried)
assert.True(t, IsBanned(ip))
assert.Equal(t, 0, GetDefenderScore(ip))
assert.NotNil(t, GetDefenderBanTime(ip))
assert.Len(t, GetDefenderHosts(), 1)
entry, err = GetDefenderHost(ip)
assert.NoError(t, err)
assert.False(t, entry.BanTime.IsZero())
assert.True(t, DeleteDefenderHost(ip))
assert.Len(t, GetDefenderHosts(), 0)
assert.True(t, Unban(ip))
assert.Nil(t, GetDefenderBanTime(ip))
assert.False(t, DeleteDefenderHost(ip))
Config = configCopy
}
func TestRateLimitersIntegration(t *testing.T) {
// by default defender is nil
configCopy := Config
Config.RateLimitersConfig = []RateLimiterConfig{
{
Average: 100,
Period: 10,
Burst: 5,
Type: int(rateLimiterTypeGlobal),
Protocols: rateLimiterProtocolValues,
},
{
Average: 1,
Period: 1000,
Burst: 1,
Type: int(rateLimiterTypeSource),
Protocols: []string{ProtocolWebDAV, ProtocolWebDAV, ProtocolFTP},
GenerateDefenderEvents: true,
EntriesSoftLimit: 100,
EntriesHardLimit: 150,
},
}
err := Initialize(Config)
assert.Error(t, err)
Config.RateLimitersConfig[0].Period = 1000
Config.RateLimitersConfig[0].AllowList = []string{"1.1.1", "1.1.1.2"}
err = Initialize(Config)
if assert.Error(t, err) {
assert.Contains(t, err.Error(), "unable to parse rate limiter allow list")
}
Config.RateLimitersConfig[0].AllowList = []string{"172.16.24.7"}
Config.RateLimitersConfig[1].AllowList = []string{"172.16.0.0/16"}
err = Initialize(Config)
assert.NoError(t, err)
assert.Len(t, rateLimiters, 4)
assert.Len(t, rateLimiters[ProtocolSSH], 1)
assert.Len(t, rateLimiters[ProtocolFTP], 2)
assert.Len(t, rateLimiters[ProtocolWebDAV], 2)
assert.Len(t, rateLimiters[ProtocolHTTP], 1)
source1 := "127.1.1.1"
source2 := "127.1.1.2"
source3 := "172.16.24.7" // whitelisted
_, err = LimitRate(ProtocolSSH, source1)
assert.NoError(t, err)
_, err = LimitRate(ProtocolFTP, source1)
assert.NoError(t, err)
// sleep to allow the configured burst to be added back to the token bucket.
// This sleep is not enough to replenish the per-source burst
time.Sleep(20 * time.Millisecond)
_, err = LimitRate(ProtocolWebDAV, source2)
assert.NoError(t, err)
_, err = LimitRate(ProtocolFTP, source1)
assert.Error(t, err)
_, err = LimitRate(ProtocolWebDAV, source2)
assert.Error(t, err)
_, err = LimitRate(ProtocolSSH, source1)
assert.NoError(t, err)
_, err = LimitRate(ProtocolSSH, source2)
assert.NoError(t, err)
for i := 0; i < 10; i++ {
_, err = LimitRate(ProtocolWebDAV, source3)
assert.NoError(t, err)
}
assert.False(t, Unban(ip))
Config = configCopy
}
func TestMaxConnections(t *testing.T) {
oldValue := Config.MaxTotalConnections
perHost := Config.MaxPerHostConnections
Config.MaxPerHostConnections = 0
ipAddr := "192.168.7.8"
assert.True(t, Connections.IsNewConnectionAllowed(ipAddr))
Config.MaxTotalConnections = 1
Config.MaxPerHostConnections = perHost
assert.True(t, Connections.IsNewConnectionAllowed(ipAddr))
c := NewBaseConnection("id", ProtocolSFTP, "", "", dataprovider.User{})
assert.True(t, Connections.IsNewConnectionAllowed())
c := NewBaseConnection("id", ProtocolSFTP, dataprovider.User{}, nil)
fakeConn := &fakeConnection{
BaseConnection: c,
}
Connections.Add(fakeConn)
assert.Len(t, Connections.GetStats(), 1)
assert.False(t, Connections.IsNewConnectionAllowed(ipAddr))
assert.False(t, Connections.IsNewConnectionAllowed())
res := Connections.Close(fakeConn.GetID())
assert.True(t, res)
assert.Eventually(t, func() bool { return len(Connections.GetStats()) == 0 }, 300*time.Millisecond, 50*time.Millisecond)
assert.True(t, Connections.IsNewConnectionAllowed(ipAddr))
Connections.AddClientConnection(ipAddr)
Connections.AddClientConnection(ipAddr)
assert.False(t, Connections.IsNewConnectionAllowed(ipAddr))
Connections.RemoveClientConnection(ipAddr)
assert.True(t, Connections.IsNewConnectionAllowed(ipAddr))
Connections.RemoveClientConnection(ipAddr)
Config.MaxTotalConnections = oldValue
}
func TestMaxConnectionPerHost(t *testing.T) {
oldValue := Config.MaxPerHostConnections
Config.MaxPerHostConnections = 2
ipAddr := "192.168.9.9"
Connections.AddClientConnection(ipAddr)
assert.True(t, Connections.IsNewConnectionAllowed(ipAddr))
Connections.AddClientConnection(ipAddr)
assert.True(t, Connections.IsNewConnectionAllowed(ipAddr))
Connections.AddClientConnection(ipAddr)
assert.False(t, Connections.IsNewConnectionAllowed(ipAddr))
assert.Equal(t, int32(3), Connections.GetClientConnections())
Connections.RemoveClientConnection(ipAddr)
Connections.RemoveClientConnection(ipAddr)
Connections.RemoveClientConnection(ipAddr)
assert.Equal(t, int32(0), Connections.GetClientConnections())
Config.MaxPerHostConnections = oldValue
}
func TestIdleConnections(t *testing.T) {
configCopy := Config
@@ -341,11 +322,9 @@ func TestIdleConnections(t *testing.T) {
username := "test_user"
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: username,
},
Username: username,
}
c := NewBaseConnection(sshConn1.id+"_1", ProtocolSFTP, "", "", user)
c := NewBaseConnection(sshConn1.id+"_1", ProtocolSFTP, user, nil)
c.lastActivity = time.Now().Add(-24 * time.Hour).UnixNano()
fakeConn := &fakeConnection{
BaseConnection: c,
@@ -357,7 +336,7 @@ func TestIdleConnections(t *testing.T) {
Connections.AddSSHConnection(sshConn1)
Connections.Add(fakeConn)
assert.Equal(t, Connections.GetActiveSessions(username), 1)
c = NewBaseConnection(sshConn2.id+"_1", ProtocolSSH, "", "", user)
c = NewBaseConnection(sshConn2.id+"_1", ProtocolSSH, user, nil)
fakeConn = &fakeConnection{
BaseConnection: c,
}
@@ -365,7 +344,7 @@ func TestIdleConnections(t *testing.T) {
Connections.Add(fakeConn)
assert.Equal(t, Connections.GetActiveSessions(username), 2)
cFTP := NewBaseConnection("id2", ProtocolFTP, "", "", dataprovider.User{})
cFTP := NewBaseConnection("id2", ProtocolFTP, dataprovider.User{}, nil)
cFTP.lastActivity = time.Now().UnixNano()
fakeConn = &fakeConnection{
BaseConnection: cFTP,
@@ -396,7 +375,6 @@ func TestIdleConnections(t *testing.T) {
defer Connections.RUnlock()
return len(Connections.sshConnections) == 0
}, 1*time.Second, 200*time.Millisecond)
assert.Equal(t, int32(0), Connections.GetClientConnections())
stopIdleTimeoutTicker()
assert.True(t, customConn1.isClosed)
assert.True(t, customConn2.isClosed)
@@ -405,11 +383,11 @@ func TestIdleConnections(t *testing.T) {
}
func TestCloseConnection(t *testing.T) {
c := NewBaseConnection("id", ProtocolSFTP, "", "", dataprovider.User{})
c := NewBaseConnection("id", ProtocolSFTP, dataprovider.User{}, nil)
fakeConn := &fakeConnection{
BaseConnection: c,
}
assert.True(t, Connections.IsNewConnectionAllowed("127.0.0.1"))
assert.True(t, Connections.IsNewConnectionAllowed())
Connections.Add(fakeConn)
assert.Len(t, Connections.GetStats(), 1)
res := Connections.Close(fakeConn.GetID())
@@ -421,7 +399,7 @@ func TestCloseConnection(t *testing.T) {
}
func TestSwapConnection(t *testing.T) {
c := NewBaseConnection("id", ProtocolFTP, "", "", dataprovider.User{})
c := NewBaseConnection("id", ProtocolFTP, dataprovider.User{}, nil)
fakeConn := &fakeConnection{
BaseConnection: c,
}
@@ -429,11 +407,9 @@ func TestSwapConnection(t *testing.T) {
if assert.Len(t, Connections.GetStats(), 1) {
assert.Equal(t, "", Connections.GetStats()[0].Username)
}
c = NewBaseConnection("id", ProtocolFTP, "", "", dataprovider.User{
BaseUser: sdk.BaseUser{
Username: userTestUsername,
},
})
c = NewBaseConnection("id", ProtocolFTP, dataprovider.User{
Username: userTestUsername,
}, nil)
fakeConn = &fakeConnection{
BaseConnection: c,
}
@@ -465,30 +441,28 @@ func TestAtomicUpload(t *testing.T) {
func TestConnectionStatus(t *testing.T) {
username := "test_user"
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: username,
},
Username: username,
}
fs := vfs.NewOsFs("", os.TempDir(), "")
c1 := NewBaseConnection("id1", ProtocolSFTP, "", "", user)
fs := vfs.NewOsFs("", os.TempDir(), nil)
c1 := NewBaseConnection("id1", ProtocolSFTP, user, fs)
fakeConn1 := &fakeConnection{
BaseConnection: c1,
}
t1 := NewBaseTransfer(nil, c1, nil, "/p1", "/p1", "/r1", TransferUpload, 0, 0, 0, true, fs)
t1 := NewBaseTransfer(nil, c1, nil, "/p1", "/r1", TransferUpload, 0, 0, 0, true, fs)
t1.BytesReceived = 123
t2 := NewBaseTransfer(nil, c1, nil, "/p2", "/p2", "/r2", TransferDownload, 0, 0, 0, true, fs)
t2 := NewBaseTransfer(nil, c1, nil, "/p2", "/r2", TransferDownload, 0, 0, 0, true, fs)
t2.BytesSent = 456
c2 := NewBaseConnection("id2", ProtocolSSH, "", "", user)
c2 := NewBaseConnection("id2", ProtocolSSH, user, nil)
fakeConn2 := &fakeConnection{
BaseConnection: c2,
command: "md5sum",
}
c3 := NewBaseConnection("id3", ProtocolWebDAV, "", "", user)
c3 := NewBaseConnection("id3", ProtocolWebDAV, user, nil)
fakeConn3 := &fakeConnection{
BaseConnection: c3,
command: "PROPFIND",
}
t3 := NewBaseTransfer(nil, c3, nil, "/p2", "/p2", "/r2", TransferDownload, 0, 0, 0, true, fs)
t3 := NewBaseTransfer(nil, c3, nil, "/p2", "/r2", TransferDownload, 0, 0, 0, true, fs)
Connections.Add(fakeConn1)
Connections.Add(fakeConn2)
Connections.Add(fakeConn3)
@@ -548,18 +522,13 @@ func TestQuotaScans(t *testing.T) {
username := "username"
assert.True(t, QuotaScans.AddUserQuotaScan(username))
assert.False(t, QuotaScans.AddUserQuotaScan(username))
usersScans := QuotaScans.GetUsersQuotaScans()
if assert.Len(t, usersScans, 1) {
assert.Equal(t, usersScans[0].Username, username)
assert.Equal(t, QuotaScans.UserScans[0].StartTime, usersScans[0].StartTime)
QuotaScans.UserScans[0].StartTime = 0
assert.NotEqual(t, QuotaScans.UserScans[0].StartTime, usersScans[0].StartTime)
if assert.Len(t, QuotaScans.GetUsersQuotaScans(), 1) {
assert.Equal(t, QuotaScans.GetUsersQuotaScans()[0].Username, username)
}
assert.True(t, QuotaScans.RemoveUserQuotaScan(username))
assert.False(t, QuotaScans.RemoveUserQuotaScan(username))
assert.Len(t, QuotaScans.GetUsersQuotaScans(), 0)
assert.Len(t, usersScans, 1)
folderName := "folder"
assert.True(t, QuotaScans.AddVFolderQuotaScan(folderName))
@@ -575,13 +544,8 @@ func TestQuotaScans(t *testing.T) {
func TestProxyProtocolVersion(t *testing.T) {
c := Configuration{
ProxyProtocol: 0,
ProxyProtocol: 1,
}
_, err := c.GetProxyListener(nil)
if assert.Error(t, err) {
assert.Contains(t, err.Error(), "proxy protocol not configured")
}
c.ProxyProtocol = 1
proxyListener, err := c.GetProxyListener(nil)
assert.NoError(t, err)
assert.Nil(t, proxyListener.Policy)
@@ -601,62 +565,13 @@ func TestProxyProtocolVersion(t *testing.T) {
assert.Error(t, err)
}
func TestStartupHook(t *testing.T) {
Config.StartupHook = ""
assert.NoError(t, Config.ExecuteStartupHook())
Config.StartupHook = "http://foo\x7f.com/startup"
assert.Error(t, Config.ExecuteStartupHook())
Config.StartupHook = "http://invalid:5678/"
assert.Error(t, Config.ExecuteStartupHook())
Config.StartupHook = fmt.Sprintf("http://%v", httpAddr)
assert.NoError(t, Config.ExecuteStartupHook())
Config.StartupHook = "invalidhook"
assert.Error(t, Config.ExecuteStartupHook())
if runtime.GOOS != osWindows {
hookCmd, err := exec.LookPath("true")
assert.NoError(t, err)
Config.StartupHook = hookCmd
assert.NoError(t, Config.ExecuteStartupHook())
func TestProxyProtocol(t *testing.T) {
httpClient := httpclient.GetHTTPClient()
resp, err := httpClient.Get(fmt.Sprintf("http://%v", httpProxyAddr))
if assert.NoError(t, err) {
defer resp.Body.Close()
assert.Equal(t, http.StatusBadRequest, resp.StatusCode)
}
Config.StartupHook = ""
}
func TestPostDisconnectHook(t *testing.T) {
Config.PostDisconnectHook = "http://127.0.0.1/"
remoteAddr := "127.0.0.1:80"
Config.checkPostDisconnectHook(remoteAddr, ProtocolHTTP, "", "", time.Now())
Config.checkPostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())
Config.PostDisconnectHook = "http://bar\x7f.com/"
Config.executePostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())
Config.PostDisconnectHook = fmt.Sprintf("http://%v", httpAddr)
Config.executePostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())
Config.PostDisconnectHook = "relativePath"
Config.executePostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())
if runtime.GOOS == osWindows {
Config.PostDisconnectHook = "C:\\a\\bad\\command"
Config.executePostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())
} else {
Config.PostDisconnectHook = "/invalid/path"
Config.executePostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())
hookCmd, err := exec.LookPath("true")
assert.NoError(t, err)
Config.PostDisconnectHook = hookCmd
Config.executePostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())
}
Config.PostDisconnectHook = ""
}
func TestPostConnectHook(t *testing.T) {
@@ -699,11 +614,7 @@ func TestPostConnectHook(t *testing.T) {
func TestCryptoConvertFileInfo(t *testing.T) {
name := "name"
fs, err := vfs.NewCryptFs("connID1", os.TempDir(), "", vfs.CryptFsConfig{
CryptFsConfig: sdk.CryptFsConfig{
Passphrase: kms.NewPlainSecret("secret"),
},
})
fs, err := vfs.NewCryptFs("connID1", os.TempDir(), vfs.CryptFsConfig{Passphrase: kms.NewPlainSecret("secret")})
require.NoError(t, err)
cryptFs := fs.(*vfs.CryptFs)
info := vfs.NewFileInfo(name, true, 48, time.Now(), false)
@@ -723,158 +634,19 @@ func TestFolderCopy(t *testing.T) {
MappedPath: filepath.Clean(os.TempDir()),
UsedQuotaSize: 4096,
UsedQuotaFiles: 2,
LastQuotaUpdate: util.GetTimeAsMsSinceEpoch(time.Now()),
LastQuotaUpdate: utils.GetTimeAsMsSinceEpoch(time.Now()),
Users: []string{"user1", "user2"},
}
folderCopy := folder.GetACopy()
folder.ID = 2
folder.Users = []string{"user3"}
require.Len(t, folderCopy.Users, 2)
require.True(t, util.IsStringInSlice("user1", folderCopy.Users))
require.True(t, util.IsStringInSlice("user2", folderCopy.Users))
require.True(t, utils.IsStringInSlice("user1", folderCopy.Users))
require.True(t, utils.IsStringInSlice("user2", folderCopy.Users))
require.Equal(t, int64(1), folderCopy.ID)
require.Equal(t, folder.Name, folderCopy.Name)
require.Equal(t, folder.MappedPath, folderCopy.MappedPath)
require.Equal(t, folder.UsedQuotaSize, folderCopy.UsedQuotaSize)
require.Equal(t, folder.UsedQuotaFiles, folderCopy.UsedQuotaFiles)
require.Equal(t, folder.LastQuotaUpdate, folderCopy.LastQuotaUpdate)
folder.FsConfig = vfs.Filesystem{
CryptConfig: vfs.CryptFsConfig{
CryptFsConfig: sdk.CryptFsConfig{
Passphrase: kms.NewPlainSecret("crypto secret"),
},
},
}
folderCopy = folder.GetACopy()
folder.FsConfig.CryptConfig.Passphrase = kms.NewEmptySecret()
require.Len(t, folderCopy.Users, 1)
require.True(t, util.IsStringInSlice("user3", folderCopy.Users))
require.Equal(t, int64(2), folderCopy.ID)
require.Equal(t, folder.Name, folderCopy.Name)
require.Equal(t, folder.MappedPath, folderCopy.MappedPath)
require.Equal(t, folder.UsedQuotaSize, folderCopy.UsedQuotaSize)
require.Equal(t, folder.UsedQuotaFiles, folderCopy.UsedQuotaFiles)
require.Equal(t, folder.LastQuotaUpdate, folderCopy.LastQuotaUpdate)
require.Equal(t, "crypto secret", folderCopy.FsConfig.CryptConfig.Passphrase.GetPayload())
}
func TestCachedFs(t *testing.T) {
user := dataprovider.User{
BaseUser: sdk.BaseUser{
HomeDir: filepath.Clean(os.TempDir()),
},
}
conn := NewBaseConnection("id", ProtocolSFTP, "", "", user)
// changing the user should not affect the connection
user.HomeDir = filepath.Join(os.TempDir(), "temp")
err := os.Mkdir(user.HomeDir, os.ModePerm)
assert.NoError(t, err)
fs, err := user.GetFilesystem("")
assert.NoError(t, err)
p, err := fs.ResolvePath("/")
assert.NoError(t, err)
assert.Equal(t, user.GetHomeDir(), p)
_, p, err = conn.GetFsAndResolvedPath("/")
assert.NoError(t, err)
assert.Equal(t, filepath.Clean(os.TempDir()), p)
user.FsConfig.Provider = sdk.S3FilesystemProvider
_, err = user.GetFilesystem("")
assert.Error(t, err)
conn.User.FsConfig.Provider = sdk.S3FilesystemProvider
_, p, err = conn.GetFsAndResolvedPath("/")
assert.NoError(t, err)
assert.Equal(t, filepath.Clean(os.TempDir()), p)
err = os.Remove(user.HomeDir)
assert.NoError(t, err)
}
func TestParseAllowedIPAndRanges(t *testing.T) {
_, err := util.ParseAllowedIPAndRanges([]string{"1.1.1.1", "not an ip"})
assert.Error(t, err)
_, err = util.ParseAllowedIPAndRanges([]string{"1.1.1.5", "192.168.1.0/240"})
assert.Error(t, err)
allow, err := util.ParseAllowedIPAndRanges([]string{"192.168.1.2", "172.16.0.0/24"})
assert.NoError(t, err)
assert.True(t, allow[0](net.ParseIP("192.168.1.2")))
assert.False(t, allow[0](net.ParseIP("192.168.2.2")))
assert.True(t, allow[1](net.ParseIP("172.16.0.1")))
assert.False(t, allow[1](net.ParseIP("172.16.1.1")))
}
func TestHideConfidentialData(t *testing.T) {
for _, provider := range sdk.ListProviders() {
u := dataprovider.User{
FsConfig: vfs.Filesystem{
Provider: provider,
},
}
u.PrepareForRendering()
f := vfs.BaseVirtualFolder{
FsConfig: vfs.Filesystem{
Provider: provider,
},
}
f.PrepareForRendering()
}
a := dataprovider.Admin{}
a.HideConfidentialData()
}
func TestUserPerms(t *testing.T) {
u := dataprovider.User{}
u.Permissions = make(map[string][]string)
u.Permissions["/"] = []string{dataprovider.PermUpload, dataprovider.PermDelete}
assert.True(t, u.HasAnyPerm([]string{dataprovider.PermRename, dataprovider.PermDelete}, "/"))
assert.False(t, u.HasAnyPerm([]string{dataprovider.PermRename, dataprovider.PermCreateDirs}, "/"))
u.Permissions["/"] = []string{dataprovider.PermDelete, dataprovider.PermCreateDirs}
assert.True(t, u.HasPermsDeleteAll("/"))
assert.False(t, u.HasPermsRenameAll("/"))
u.Permissions["/"] = []string{dataprovider.PermDeleteDirs, dataprovider.PermDeleteFiles, dataprovider.PermRenameDirs}
assert.True(t, u.HasPermsDeleteAll("/"))
assert.False(t, u.HasPermsRenameAll("/"))
u.Permissions["/"] = []string{dataprovider.PermDeleteDirs, dataprovider.PermRenameFiles, dataprovider.PermRenameDirs}
assert.False(t, u.HasPermsDeleteAll("/"))
assert.True(t, u.HasPermsRenameAll("/"))
}
func BenchmarkBcryptHashing(b *testing.B) {
bcryptPassword := "bcryptpassword"
for i := 0; i < b.N; i++ {
_, err := bcrypt.GenerateFromPassword([]byte(bcryptPassword), 10)
if err != nil {
panic(err)
}
}
}
func BenchmarkCompareBcryptPassword(b *testing.B) {
bcryptPassword := "$2a$10$lPDdnDimJZ7d5/GwL6xDuOqoZVRXok6OHHhivCnanWUtcgN0Zafki"
for i := 0; i < b.N; i++ {
err := bcrypt.CompareHashAndPassword([]byte(bcryptPassword), []byte("password"))
if err != nil {
panic(err)
}
}
}
func BenchmarkArgon2Hashing(b *testing.B) {
argonPassword := "argon2password"
for i := 0; i < b.N; i++ {
_, err := argon2id.CreateHash(argonPassword, argon2id.DefaultParams)
if err != nil {
panic(err)
}
}
}
func BenchmarkCompareArgon2Password(b *testing.B) {
argon2Password := "$argon2id$v=19$m=65536,t=1,p=2$aOoAOdAwvzhOgi7wUFjXlw$wn/y37dBWdKHtPXHR03nNaKHWKPXyNuVXOknaU+YZ+s"
for i := 0; i < b.N; i++ {
_, err := argon2id.ComparePasswordAndHash("password", argon2Password)
if err != nil {
panic(err)
}
}
}

File diff suppressed because it is too large.

File diff suppressed because it is too large.


@@ -1,463 +0,0 @@
package common
import (
"bytes"
"context"
"encoding/json"
"fmt"
"net/http"
"net/url"
"os"
"os/exec"
"path"
"path/filepath"
"strings"
"sync"
"time"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/httpclient"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/smtp"
"github.com/drakkan/sftpgo/v2/util"
)
// RetentionCheckNotification defines the supported notification methods for a retention check result
type RetentionCheckNotification = string
const (
// notify results using the defined "data_retention_hook"
RetentionCheckNotificationHook = "Hook"
// notify results by email
RetentionCheckNotificationEmail = "Email"
)
var (
// RetentionChecks is the list of active retention checks
RetentionChecks ActiveRetentionChecks
)
// ActiveRetentionChecks holds the active retention checks
type ActiveRetentionChecks struct {
sync.RWMutex
Checks []RetentionCheck
}
// Get returns the active retention checks
func (c *ActiveRetentionChecks) Get() []RetentionCheck {
c.RLock()
defer c.RUnlock()
checks := make([]RetentionCheck, 0, len(c.Checks))
for _, check := range c.Checks {
foldersCopy := make([]FolderRetention, len(check.Folders))
copy(foldersCopy, check.Folders)
notificationsCopy := make([]string, len(check.Notifications))
copy(notificationsCopy, check.Notifications)
checks = append(checks, RetentionCheck{
Username: check.Username,
StartTime: check.StartTime,
Notifications: notificationsCopy,
Email: check.Email,
Folders: foldersCopy,
})
}
return checks
}
// Add adds a new retention check and returns nil if a retention check for the given
// username is already active. The returned result can be used to start the check
func (c *ActiveRetentionChecks) Add(check RetentionCheck, user *dataprovider.User) *RetentionCheck {
c.Lock()
defer c.Unlock()
for _, val := range c.Checks {
if val.Username == user.Username {
return nil
}
}
// we silently ignore file patterns
user.Filters.FilePatterns = nil
conn := NewBaseConnection("", "", "", "", *user)
conn.SetProtocol(ProtocolDataRetention)
conn.ID = fmt.Sprintf("data_retention_%v", user.Username)
check.Username = user.Username
check.StartTime = util.GetTimeAsMsSinceEpoch(time.Now())
check.conn = conn
check.updateUserPermissions()
c.Checks = append(c.Checks, check)
return &check
}
// remove removes a user from the ones with active retention checks
// and returns true if the user was removed
func (c *ActiveRetentionChecks) remove(username string) bool {
c.Lock()
defer c.Unlock()
for idx, check := range c.Checks {
if check.Username == username {
lastIdx := len(c.Checks) - 1
c.Checks[idx] = c.Checks[lastIdx]
c.Checks = c.Checks[:lastIdx]
return true
}
}
return false
}
// FolderRetention defines the retention policy for the specified directory path
type FolderRetention struct {
// Path is the exposed virtual directory path. If no other specific retention is defined,
// the retention applies to subdirectories too. For example, if retention is defined
// for the paths "/" and "/sub", then the retention for "/" is applied to any file outside
// the "/sub" directory
Path string `json:"path"`
// Retention time in hours. 0 means exclude this path
Retention int `json:"retention"`
// DeleteEmptyDirs defines if empty directories will be deleted.
// The user needs the delete permission
DeleteEmptyDirs bool `json:"delete_empty_dirs,omitempty"`
// IgnoreUserPermissions defines whether files are deleted even if the user does not have the delete permission.
// The default is "false" which means that files will be skipped if the user does not have the permission
// to delete them. This applies to subdirectories too.
IgnoreUserPermissions bool `json:"ignore_user_permissions,omitempty"`
}
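
A sketch of a retention policy combining the fields above: everything under "/" older than one week is deleted, while "/archive" is excluded by setting its retention to 0 (the paths and values are examples only):

folders := []FolderRetention{
    {
        Path:            "/",
        Retention:       24 * 7, // hours: delete files older than one week
        DeleteEmptyDirs: true,
    },
    {
        Path:      "/archive",
        Retention: 0, // 0 excludes this path and its subdirectories
    },
}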
func (f *FolderRetention) isValid() error {
f.Path = path.Clean(f.Path)
if !path.IsAbs(f.Path) {
return util.NewValidationError(fmt.Sprintf("folder retention: invalid path %#v, please specify an absolute POSIX path",
f.Path))
}
if f.Retention < 0 {
return util.NewValidationError(fmt.Sprintf("invalid folder retention %v, it must be greater or equal to zero",
f.Retention))
}
return nil
}
type folderRetentionCheckResult struct {
Path string `json:"path"`
Retention int `json:"retention"`
DeletedFiles int `json:"deleted_files"`
DeletedSize int64 `json:"deleted_size"`
Elapsed time.Duration `json:"-"`
Info string `json:"info,omitempty"`
Error string `json:"error,omitempty"`
}
// RetentionCheck defines an active retention check
type RetentionCheck struct {
// Username to which the retention check refers
Username string `json:"username"`
// retention check start time as unix timestamp in milliseconds
StartTime int64 `json:"start_time"`
// affected folders
Folders []FolderRetention `json:"folders"`
// how cleanup results will be notified
Notifications []RetentionCheckNotification `json:"notifications,omitempty"`
// email to use if the notification method is set to email
Email string `json:"email,omitempty"`
// Cleanup results
results []*folderRetentionCheckResult `json:"-"`
conn *BaseConnection
}
// Validate returns an error if the specified folders are not valid
func (c *RetentionCheck) Validate() error {
folderPaths := make(map[string]bool)
nothingToDo := true
for idx := range c.Folders {
f := &c.Folders[idx]
if err := f.isValid(); err != nil {
return err
}
if f.Retention > 0 {
nothingToDo = false
}
if _, ok := folderPaths[f.Path]; ok {
return util.NewValidationError(fmt.Sprintf("duplicated folder path %#v", f.Path))
}
folderPaths[f.Path] = true
}
if nothingToDo {
return util.NewValidationError("nothing to delete!")
}
for _, notification := range c.Notifications {
switch notification {
case RetentionCheckNotificationEmail:
if !smtp.IsEnabled() {
return util.NewValidationError("in order to notify results via email you must configure an SMTP server")
}
if c.Email == "" {
return util.NewValidationError("in order to notify results via email you must add a valid email address to your profile")
}
case RetentionCheckNotificationHook:
if Config.DataRetentionHook == "" {
return util.NewValidationError("in order to notify results via hook you must define a data_retention_hook")
}
default:
return util.NewValidationError(fmt.Sprintf("invalid notification %#v", notification))
}
}
return nil
}
func (c *RetentionCheck) updateUserPermissions() {
for _, folder := range c.Folders {
if folder.IgnoreUserPermissions {
c.conn.User.Permissions[folder.Path] = []string{dataprovider.PermAny}
}
}
}
func (c *RetentionCheck) getFolderRetention(folderPath string) (FolderRetention, error) {
dirsForPath := util.GetDirsForVirtualPath(folderPath)
for _, dirPath := range dirsForPath {
for _, folder := range c.Folders {
if folder.Path == dirPath {
return folder, nil
}
}
}
return FolderRetention{}, fmt.Errorf("unable to find folder retention for %#v", folderPath)
}
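
The lookup walks the directory chain from the most specific path to "/", so the deepest configured policy wins. With the example policies above, and assuming util.GetDirsForVirtualPath returns the ancestors deepest-first:

// getFolderRetention("/archive/2020") -> the "/archive" policy (retention 0)
// getFolderRetention("/docs")         -> the "/" policy (retention 168)
// because GetDirsForVirtualPath("/archive/2020") is assumed to yield
// ["/archive/2020", "/archive", "/"], checked in that order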
func (c *RetentionCheck) removeFile(virtualPath string, info os.FileInfo) error {
fs, fsPath, err := c.conn.GetFsAndResolvedPath(virtualPath)
if err != nil {
return err
}
return c.conn.RemoveFile(fs, fsPath, virtualPath, info)
}
func (c *RetentionCheck) cleanupFolder(folderPath string) error {
deleteFilesPerms := []string{dataprovider.PermDelete, dataprovider.PermDeleteFiles}
startTime := time.Now()
result := &folderRetentionCheckResult{
Path: folderPath,
}
c.results = append(c.results, result)
if !c.conn.User.HasPerm(dataprovider.PermListItems, folderPath) || !c.conn.User.HasAnyPerm(deleteFilesPerms, folderPath) {
result.Elapsed = time.Since(startTime)
result.Info = "data retention check skipped: no permissions"
c.conn.Log(logger.LevelInfo, "user %#v does not have permissions to check retention on %#v, retention check skipped",
c.conn.User, folderPath)
return nil
}
folderRetention, err := c.getFolderRetention(folderPath)
if err != nil {
result.Elapsed = time.Since(startTime)
result.Error = "unable to get folder retention"
c.conn.Log(logger.LevelError, "unable to get folder retention for path %#v", folderPath)
return err
}
result.Retention = folderRetention.Retention
if folderRetention.Retention == 0 {
result.Elapsed = time.Since(startTime)
result.Info = "data retention check skipped: retention is set to 0"
c.conn.Log(logger.LevelDebug, "retention check skipped for folder %#v, retention is set to 0", folderPath)
return nil
}
c.conn.Log(logger.LevelDebug, "start retention check for folder %#v, retention: %v hours, delete empty dirs? %v, ignore user perms? %v",
folderPath, folderRetention.Retention, folderRetention.DeleteEmptyDirs, folderRetention.IgnoreUserPermissions)
files, err := c.conn.ListDir(folderPath)
if err != nil {
result.Elapsed = time.Since(startTime)
if err == c.conn.GetNotExistError() {
result.Info = "data retention check skipped, folder does not exist"
c.conn.Log(logger.LevelDebug, "folder %#v does not exist, retention check skipped", folderPath)
return nil
}
result.Error = fmt.Sprintf("unable to list directory %#v", folderPath)
c.conn.Log(logger.LevelWarn, result.Error)
return err
}
for _, info := range files {
virtualPath := path.Join(folderPath, info.Name())
if info.IsDir() {
if err := c.cleanupFolder(virtualPath); err != nil {
result.Elapsed = time.Since(startTime)
result.Error = fmt.Sprintf("unable to check folder: %v", err)
c.conn.Log(logger.LevelWarn, "unable to cleanup folder %#v: %v", virtualPath, err)
return err
}
} else {
retentionTime := info.ModTime().Add(time.Duration(folderRetention.Retention) * time.Hour)
if retentionTime.Before(time.Now()) {
if err := c.removeFile(virtualPath, info); err != nil {
result.Elapsed = time.Since(startTime)
result.Error = fmt.Sprintf("unable to remove file %#v: %v", virtualPath, err)
c.conn.Log(logger.LevelWarn, "unable to remove file %#v, retention %v: %v",
virtualPath, retentionTime, err)
return err
}
c.conn.Log(logger.LevelDebug, "removed file %#v, modification time: %v, retention: %v hours, retention time: %v",
virtualPath, info.ModTime(), folderRetention.Retention, retentionTime)
result.DeletedFiles++
result.DeletedSize += info.Size()
}
}
}
if folderRetention.DeleteEmptyDirs {
c.checkEmptyDirRemoval(folderPath)
}
result.Elapsed = time.Since(startTime)
c.conn.Log(logger.LevelDebug, "retention check completed for folder %#v, deleted files: %v, deleted size: %v bytes",
folderPath, result.DeletedFiles, result.DeletedSize)
return nil
}
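// Illustrative sketch, not part of the original changeset: a file is eligible
// for removal once its modification time plus the folder retention, expressed
// in hours, falls in the past. The helper name is hypothetical.
func exampleIsExpired(info os.FileInfo, retentionHours int) bool {
    retentionTime := info.ModTime().Add(time.Duration(retentionHours) * time.Hour)
    // e.g. a file modified 8 days ago with a 168 hour (7 day) retention
    // exceeded its retention time one day ago, so it is removed
    return retentionTime.Before(time.Now())
}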
func (c *RetentionCheck) checkEmptyDirRemoval(folderPath string) {
if folderPath != "/" && c.conn.User.HasAnyPerm([]string{
dataprovider.PermDelete,
dataprovider.PermDeleteDirs,
}, path.Dir(folderPath),
) {
files, err := c.conn.ListDir(folderPath)
if err == nil && len(files) == 0 {
err = c.conn.RemoveDir(folderPath)
c.conn.Log(logger.LevelDebug, "tried to remove empty dir %#v, error: %v", folderPath, err)
}
}
}
// Start starts the retention check
func (c *RetentionCheck) Start() {
c.conn.Log(logger.LevelInfo, "retention check started")
defer RetentionChecks.remove(c.conn.User.Username)
defer c.conn.CloseFS() //nolint:errcheck
startTime := time.Now()
for _, folder := range c.Folders {
if folder.Retention > 0 {
if err := c.cleanupFolder(folder.Path); err != nil {
c.conn.Log(logger.LevelWarn, "retention check failed, unable to cleanup folder %#v", folder.Path)
c.sendNotifications(time.Since(startTime), err)
return
}
}
}
c.conn.Log(logger.LevelInfo, "retention check completed")
c.sendNotifications(time.Since(startTime), nil)
}
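// Illustrative sketch, not part of the original changeset: a retention check
// is registered through RetentionChecks.Add, which returns nil if a check is
// already running for the same user, and then executed asynchronously. The
// helper name is hypothetical.
func exampleScheduleRetentionCheck(user *dataprovider.User) error {
    check := RetentionCheck{
        Folders: []FolderRetention{{Path: "/", Retention: 48, DeleteEmptyDirs: true}},
    }
    if err := check.Validate(); err != nil {
        return err
    }
    c := RetentionChecks.Add(check, user)
    if c == nil {
        return fmt.Errorf("a retention check is already running for user %#v", user.Username)
    }
    go c.Start()
    return nil
}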
func (c *RetentionCheck) sendNotifications(elapsed time.Duration, err error) {
for _, notification := range c.Notifications {
switch notification {
case RetentionCheckNotificationEmail:
c.sendEmailNotification(elapsed, err) //nolint:errcheck
case RetentionCheckNotificationHook:
c.sendHookNotification(elapsed, err) //nolint:errcheck
}
}
}
func (c *RetentionCheck) sendEmailNotification(elapsed time.Duration, errCheck error) error {
body := new(bytes.Buffer)
data := make(map[string]interface{})
data["Results"] = c.results
totalDeletedFiles := 0
totalDeletedSize := int64(0)
for _, result := range c.results {
totalDeletedFiles += result.DeletedFiles
totalDeletedSize += result.DeletedSize
}
data["HumanizeSize"] = util.ByteCountIEC
data["TotalFiles"] = totalDeletedFiles
data["TotalSize"] = totalDeletedSize
data["Elapsed"] = elapsed
data["Username"] = c.conn.User.Username
data["StartTime"] = util.GetTimeFromMsecSinceEpoch(c.StartTime)
if errCheck == nil {
data["Status"] = "Succeeded"
} else {
data["Status"] = "Failed"
}
if err := smtp.RenderRetentionReportTemplate(body, data); err != nil {
c.conn.Log(logger.LevelWarn, "unable to render retention check template: %v", err)
return err
}
startTime := time.Now()
subject := fmt.Sprintf("Retention check completed for user %#v", c.conn.User.Username)
if err := smtp.SendEmail(c.Email, subject, body.String(), smtp.EmailContentTypeTextHTML); err != nil {
c.conn.Log(logger.LevelWarn, "unable to notify retention check result via email: %v, elapsed: %v", err,
time.Since(startTime))
return err
}
c.conn.Log(logger.LevelInfo, "retention check result successfully notified via email, elapsed: %v", time.Since(startTime))
return nil
}
func (c *RetentionCheck) sendHookNotification(elapsed time.Duration, errCheck error) error {
data := make(map[string]interface{})
totalDeletedFiles := 0
totalDeletedSize := int64(0)
for _, result := range c.results {
totalDeletedFiles += result.DeletedFiles
totalDeletedSize += result.DeletedSize
}
data["username"] = c.conn.User.Username
data["start_time"] = c.StartTime
data["elapsed"] = elapsed.Milliseconds()
if errCheck == nil {
data["status"] = 1
} else {
data["status"] = 0
}
data["total_deleted_files"] = totalDeletedFiles
data["total_deleted_size"] = totalDeletedSize
data["details"] = c.results
jsonData, _ := json.Marshal(data)
startTime := time.Now()
if strings.HasPrefix(Config.DataRetentionHook, "http") {
url, err := url.Parse(Config.DataRetentionHook)
if err != nil {
c.conn.Log(logger.LevelWarn, "invalid data retention hook %#v: %v", Config.DataRetentionHook, err)
return err
}
respCode := 0
resp, err := httpclient.RetryablePost(url.String(), "application/json", bytes.NewBuffer(jsonData))
if err == nil {
respCode = resp.StatusCode
resp.Body.Close()
if respCode != http.StatusOK {
err = errUnexpectedHTTResponse
}
}
c.conn.Log(logger.LevelDebug, "notified result to URL: %#v, status code: %v, elapsed: %v, err: %v",
url.Redacted(), respCode, time.Since(startTime), err)
return err
}
if !filepath.IsAbs(Config.DataRetentionHook) {
err := fmt.Errorf("invalid data retention hook %#v", Config.DataRetentionHook)
c.conn.Log(logger.LevelWarn, "%v", err)
return err
}
ctx, cancel := context.WithTimeout(context.Background(), 20*time.Second)
defer cancel()
cmd := exec.CommandContext(ctx, Config.DataRetentionHook)
cmd.Env = append(os.Environ(),
fmt.Sprintf("SFTPGO_DATA_RETENTION_RESULT=%v", string(jsonData)))
err := cmd.Run()
c.conn.Log(logger.LevelDebug, "notified result using command: %v, elapsed: %v err: %v",
Config.DataRetentionHook, time.Since(startTime), err)
return err
}
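// Illustrative sketch, not part of the original changeset: the JSON document
// POSTed to an HTTP data retention hook, or exposed to command hooks via the
// SFTPGO_DATA_RETENTION_RESULT environment variable, can be decoded with a
// struct like this. HTTP hooks must answer with status 200, any other status
// code is treated as an error. The struct name is hypothetical.
type exampleRetentionHookPayload struct {
    Username          string                        `json:"username"`
    StartTime         int64                         `json:"start_time"` // milliseconds since epoch
    Elapsed           int64                         `json:"elapsed"`    // milliseconds
    Status            int                           `json:"status"`     // 1 success, 0 failure
    TotalDeletedFiles int                           `json:"total_deleted_files"`
    TotalDeletedSize  int64                         `json:"total_deleted_size"` // bytes
    Details           []*folderRetentionCheckResult `json:"details"`
}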


@@ -1,340 +0,0 @@
package common
import (
"errors"
"fmt"
"os/exec"
"runtime"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/sdk"
"github.com/drakkan/sftpgo/v2/smtp"
)
func TestRetentionValidation(t *testing.T) {
check := RetentionCheck{}
check.Folders = append(check.Folders, FolderRetention{
Path: "relative",
Retention: 10,
})
err := check.Validate()
require.Error(t, err)
assert.Contains(t, err.Error(), "please specify an absolute POSIX path")
check.Folders = []FolderRetention{
{
Path: "/",
Retention: -1,
},
}
err = check.Validate()
require.Error(t, err)
assert.Contains(t, err.Error(), "invalid folder retention")
check.Folders = []FolderRetention{
{
Path: "/ab/..",
Retention: 0,
},
}
err = check.Validate()
require.Error(t, err)
assert.Contains(t, err.Error(), "nothing to delete")
assert.Equal(t, "/", check.Folders[0].Path)
check.Folders = append(check.Folders, FolderRetention{
Path: "/../..",
Retention: 24,
})
err = check.Validate()
require.Error(t, err)
assert.Contains(t, err.Error(), `duplicated folder path "/"`)
check.Folders = []FolderRetention{
{
Path: "/dir1",
Retention: 48,
},
{
Path: "/dir2",
Retention: 96,
},
}
err = check.Validate()
assert.NoError(t, err)
assert.Len(t, check.Notifications, 0)
assert.Empty(t, check.Email)
check.Notifications = []RetentionCheckNotification{RetentionCheckNotificationEmail}
err = check.Validate()
require.Error(t, err)
assert.Contains(t, err.Error(), "you must configure an SMTP server")
smtpCfg := smtp.Config{
Host: "mail.example.com",
Port: 25,
TemplatesPath: "templates",
}
err = smtpCfg.Initialize("..")
require.NoError(t, err)
err = check.Validate()
require.Error(t, err)
assert.Contains(t, err.Error(), "you must add a valid email address")
check.Email = "admin@example.com"
err = check.Validate()
assert.NoError(t, err)
smtpCfg = smtp.Config{}
err = smtpCfg.Initialize("..")
require.NoError(t, err)
check.Notifications = []RetentionCheckNotification{RetentionCheckNotificationHook}
err = check.Validate()
require.Error(t, err)
assert.Contains(t, err.Error(), "data_retention_hook")
check.Notifications = []string{"not valid"}
err = check.Validate()
require.Error(t, err)
assert.Contains(t, err.Error(), "invalid notification")
}
func TestRetentionEmailNotifications(t *testing.T) {
smtpCfg := smtp.Config{
Host: "127.0.0.1",
Port: 2525,
TemplatesPath: "templates",
}
err := smtpCfg.Initialize("..")
require.NoError(t, err)
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "user1",
},
}
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
check := RetentionCheck{
Notifications: []RetentionCheckNotification{RetentionCheckNotificationEmail},
Email: "notification@example.com",
results: []*folderRetentionCheckResult{
{
Path: "/",
Retention: 24,
DeletedFiles: 10,
DeletedSize: 32657,
Elapsed: 10 * time.Second,
},
},
}
conn := NewBaseConnection("", "", "", "", user)
conn.SetProtocol(ProtocolDataRetention)
conn.ID = fmt.Sprintf("data_retention_%v", user.Username)
check.conn = conn
check.sendNotifications(1*time.Second, nil)
err = check.sendEmailNotification(1*time.Second, nil)
assert.NoError(t, err)
err = check.sendEmailNotification(1*time.Second, errors.New("test error"))
assert.NoError(t, err)
smtpCfg.Port = 2626
err = smtpCfg.Initialize("..")
require.NoError(t, err)
err = check.sendEmailNotification(1*time.Second, nil)
assert.Error(t, err)
smtpCfg = smtp.Config{}
err = smtpCfg.Initialize("..")
require.NoError(t, err)
err = check.sendEmailNotification(1*time.Second, nil)
assert.Error(t, err)
}
func TestRetentionHookNotifications(t *testing.T) {
dataRetentionHook := Config.DataRetentionHook
Config.DataRetentionHook = fmt.Sprintf("http://%v", httpAddr)
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "user2",
},
}
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
check := RetentionCheck{
Notifications: []RetentionCheckNotification{RetentionCheckNotificationHook},
results: []*folderRetentionCheckResult{
{
Path: "/",
Retention: 24,
DeletedFiles: 10,
DeletedSize: 32657,
Elapsed: 10 * time.Second,
},
},
}
conn := NewBaseConnection("", "", "", "", user)
conn.SetProtocol(ProtocolDataRetention)
conn.ID = fmt.Sprintf("data_retention_%v", user.Username)
check.conn = conn
check.sendNotifications(1*time.Second, nil)
err := check.sendHookNotification(1*time.Second, nil)
assert.NoError(t, err)
Config.DataRetentionHook = fmt.Sprintf("http://%v/404", httpAddr)
err = check.sendHookNotification(1*time.Second, nil)
assert.ErrorIs(t, err, errUnexpectedHTTResponse)
Config.DataRetentionHook = "http://foo\x7f.com/retention"
err = check.sendHookNotification(1*time.Second, err)
assert.Error(t, err)
Config.DataRetentionHook = "relativepath"
err = check.sendHookNotification(1*time.Second, err)
assert.Error(t, err)
if runtime.GOOS != osWindows {
hookCmd, err := exec.LookPath("true")
assert.NoError(t, err)
Config.DataRetentionHook = hookCmd
err = check.sendHookNotification(1*time.Second, err)
assert.NoError(t, err)
}
Config.DataRetentionHook = dataRetentionHook
}
func TestRetentionPermissionsAndGetFolder(t *testing.T) {
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "user1",
},
}
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermListItems, dataprovider.PermDelete}
user.Permissions["/dir1"] = []string{dataprovider.PermListItems}
user.Permissions["/dir2/sub1"] = []string{dataprovider.PermCreateDirs}
user.Permissions["/dir2/sub2"] = []string{dataprovider.PermDelete}
check := RetentionCheck{
Folders: []FolderRetention{
{
Path: "/dir2",
Retention: 24 * 7,
IgnoreUserPermissions: true,
},
{
Path: "/dir3",
Retention: 24 * 7,
IgnoreUserPermissions: false,
},
{
Path: "/dir2/sub1/sub",
Retention: 24,
IgnoreUserPermissions: true,
},
},
}
conn := NewBaseConnection("", "", "", "", user)
conn.SetProtocol(ProtocolDataRetention)
conn.ID = fmt.Sprintf("data_retention_%v", user.Username)
check.conn = conn
check.updateUserPermissions()
assert.Equal(t, []string{dataprovider.PermListItems, dataprovider.PermDelete}, conn.User.Permissions["/"])
assert.Equal(t, []string{dataprovider.PermListItems}, conn.User.Permissions["/dir1"])
assert.Equal(t, []string{dataprovider.PermAny}, conn.User.Permissions["/dir2"])
assert.Equal(t, []string{dataprovider.PermAny}, conn.User.Permissions["/dir2/sub1/sub"])
assert.Equal(t, []string{dataprovider.PermCreateDirs}, conn.User.Permissions["/dir2/sub1"])
assert.Equal(t, []string{dataprovider.PermDelete}, conn.User.Permissions["/dir2/sub2"])
_, err := check.getFolderRetention("/")
assert.Error(t, err)
folder, err := check.getFolderRetention("/dir3")
assert.NoError(t, err)
assert.Equal(t, "/dir3", folder.Path)
folder, err = check.getFolderRetention("/dir2/sub3")
assert.NoError(t, err)
assert.Equal(t, "/dir2", folder.Path)
folder, err = check.getFolderRetention("/dir2/sub2")
assert.NoError(t, err)
assert.Equal(t, "/dir2", folder.Path)
folder, err = check.getFolderRetention("/dir2/sub1")
assert.NoError(t, err)
assert.Equal(t, "/dir2", folder.Path)
folder, err = check.getFolderRetention("/dir2/sub1/sub/sub")
assert.NoError(t, err)
assert.Equal(t, "/dir2/sub1/sub", folder.Path)
}
func TestRetentionCheckAddRemove(t *testing.T) {
username := "username"
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: username,
},
}
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
check := RetentionCheck{
Folders: []FolderRetention{
{
Path: "/",
Retention: 48,
},
},
Notifications: []RetentionCheckNotification{RetentionCheckNotificationHook},
}
assert.NotNil(t, RetentionChecks.Add(check, &user))
checks := RetentionChecks.Get()
require.Len(t, checks, 1)
assert.Equal(t, username, checks[0].Username)
assert.Greater(t, checks[0].StartTime, int64(0))
require.Len(t, checks[0].Folders, 1)
assert.Equal(t, check.Folders[0].Path, checks[0].Folders[0].Path)
assert.Equal(t, check.Folders[0].Retention, checks[0].Folders[0].Retention)
require.Len(t, checks[0].Notifications, 1)
assert.Equal(t, RetentionCheckNotificationHook, checks[0].Notifications[0])
assert.Nil(t, RetentionChecks.Add(check, &user))
assert.True(t, RetentionChecks.remove(username))
require.Len(t, RetentionChecks.Get(), 0)
assert.False(t, RetentionChecks.remove(username))
}
func TestCleanupErrors(t *testing.T) {
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "u",
},
}
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
check := &RetentionCheck{
Folders: []FolderRetention{
{
Path: "/path",
Retention: 48,
},
},
}
check = RetentionChecks.Add(*check, &user)
require.NotNil(t, check)
err := check.removeFile("missing file", nil)
assert.Error(t, err)
err = check.cleanupFolder("/")
assert.Error(t, err)
assert.True(t, RetentionChecks.remove(user.Username))
}


@@ -1,9 +1,9 @@
package common
import (
"encoding/hex"
"encoding/json"
"fmt"
"io/ioutil"
"net"
"os"
"sort"
@@ -12,11 +12,11 @@ import (
"github.com/yl2chen/cidranger"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
)
// HostEvent is the enumerable for the supported host events
// HostEvent is the enumerable for the supported host event
type HostEvent int
// Supported host events
@@ -24,53 +24,15 @@ const (
HostEventLoginFailed HostEvent = iota
HostEventUserNotFound
HostEventNoLoginTried
HostEventLimitExceeded
)
// DefenderEntry defines a defender entry
type DefenderEntry struct {
IP string `json:"ip"`
Score int `json:"score,omitempty"`
BanTime time.Time `json:"ban_time,omitempty"`
}
// GetID returns a unique ID for a defender entry
func (d *DefenderEntry) GetID() string {
return hex.EncodeToString([]byte(d.IP))
}
// GetBanTime returns the ban time for a defender entry as string
func (d *DefenderEntry) GetBanTime() string {
if d.BanTime.IsZero() {
return ""
}
return d.BanTime.UTC().Format(time.RFC3339)
}
// MarshalJSON returns the JSON encoding of a DefenderEntry.
func (d *DefenderEntry) MarshalJSON() ([]byte, error) {
return json.Marshal(&struct {
ID string `json:"id"`
IP string `json:"ip"`
Score int `json:"score,omitempty"`
BanTime string `json:"ban_time,omitempty"`
}{
ID: d.GetID(),
IP: d.IP,
Score: d.Score,
BanTime: d.GetBanTime(),
})
}
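// Illustrative sketch, not part of the original changeset: the marshaled
// entry carries a stable ID, the hex encoding of the IP address, and the ban
// time formatted as RFC3339 UTC. The function name is hypothetical.
func exampleDefenderEntryJSON() {
    entry := DefenderEntry{
        IP:      "192.168.1.1",
        BanTime: time.Date(2021, 4, 10, 8, 0, 0, 0, time.UTC),
    }
    data, _ := json.Marshal(&entry)
    fmt.Println(string(data))
    // {"id":"3139322e3136382e312e31","ip":"192.168.1.1","ban_time":"2021-04-10T08:00:00Z"}
}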
// Defender defines the interface that a defender must implements
type Defender interface {
GetHosts() []*DefenderEntry
GetHost(ip string) (*DefenderEntry, error)
AddEvent(ip string, event HostEvent)
IsBanned(ip string) bool
GetBanTime(ip string) *time.Time
GetScore(ip string) int
DeleteHost(ip string) bool
Unban(ip string) bool
Reload() error
}
@@ -89,9 +51,6 @@ type DefenderConfig struct {
ScoreInvalid int `json:"score_invalid" mapstructure:"score_invalid"`
// Score for valid login attempts, eg. user accounts that exist
ScoreValid int `json:"score_valid" mapstructure:"score_valid"`
// Score for limit exceeded events, generated from the rate limiters or for max connections
// per-host exceeded
ScoreLimitExceeded int `json:"score_limit_exceeded" mapstructure:"score_limit_exceeded"`
// Defines the time window, in minutes, for tracking client errors.
// A host is banned if it has exceeded the defined threshold during
// the last observation time minutes
@@ -165,9 +124,6 @@ func (c *DefenderConfig) validate() error {
if c.ScoreValid >= c.Threshold {
return fmt.Errorf("score_valid %v cannot be greater than threshold %v", c.ScoreValid, c.Threshold)
}
if c.ScoreLimitExceeded >= c.Threshold {
return fmt.Errorf("score_limit_exceeded %v cannot be greater than threshold %v", c.ScoreLimitExceeded, c.Threshold)
}
if c.BanTime <= 0 {
return fmt.Errorf("invalid ban_time %v", c.BanTime)
}
@@ -228,70 +184,6 @@ func (d *memoryDefender) Reload() error {
return nil
}
// GetHosts returns hosts that are banned or for which some violations have been detected
func (d *memoryDefender) GetHosts() []*DefenderEntry {
d.RLock()
defer d.RUnlock()
var result []*DefenderEntry
for k, v := range d.banned {
if v.After(time.Now()) {
result = append(result, &DefenderEntry{
IP: k,
BanTime: v,
})
}
}
for k, v := range d.hosts {
score := 0
for _, event := range v.Events {
if event.dateTime.Add(time.Duration(d.config.ObservationTime) * time.Minute).After(time.Now()) {
score += event.score
}
}
if score > 0 {
result = append(result, &DefenderEntry{
IP: k,
Score: score,
})
}
}
return result
}
// GetHost returns a defender host by ip, if any
func (d *memoryDefender) GetHost(ip string) (*DefenderEntry, error) {
d.RLock()
defer d.RUnlock()
if banTime, ok := d.banned[ip]; ok {
if banTime.After(time.Now()) {
return &DefenderEntry{
IP: ip,
BanTime: banTime,
}, nil
}
}
if hs, ok := d.hosts[ip]; ok {
score := 0
for _, event := range hs.Events {
if event.dateTime.Add(time.Duration(d.config.ObservationTime) * time.Minute).After(time.Now()) {
score += event.score
}
}
if score > 0 {
return &DefenderEntry{
IP: ip,
Score: score,
}, nil
}
}
return nil, util.NewRecordNotFoundError("host not found")
}
// IsBanned returns true if the specified IP is banned
// and increases the ban time if the IP is found.
// This method must be called as soon as the client connects
@@ -329,8 +221,8 @@ func (d *memoryDefender) IsBanned(ip string) bool {
return false
}
// DeleteHost removes the specified IP from the defender lists
func (d *memoryDefender) DeleteHost(ip string) bool {
// Unban removes the specified IP address from the banned ones
func (d *memoryDefender) Unban(ip string) bool {
d.Lock()
defer d.Unlock()
@@ -339,11 +231,6 @@ func (d *memoryDefender) DeleteHost(ip string) bool {
return true
}
if _, ok := d.hosts[ip]; ok {
delete(d.hosts, ip)
return true
}
return false
}
@@ -357,21 +244,11 @@ func (d *memoryDefender) AddEvent(ip string, event HostEvent) {
return
}
// ignore events for already banned hosts
if v, ok := d.banned[ip]; ok {
if v.After(time.Now()) {
return
}
delete(d.banned, ip)
}
var score int
switch event {
case HostEventLoginFailed:
score = d.config.ScoreValid
case HostEventLimitExceeded:
score = d.config.ScoreLimitExceeded
case HostEventUserNotFound, HostEventNoLoginTried:
score = d.config.ScoreInvalid
}
@@ -522,7 +399,7 @@ func loadHostListFromFile(name string) (*HostList, error) {
if name == "" {
return nil, nil
}
if !util.IsFileInputValid(name) {
if !utils.IsFileInputValid(name) {
return nil, fmt.Errorf("invalid host list file name %#v", name)
}
@@ -536,7 +413,7 @@ func loadHostListFromFile(name string) (*HostList, error) {
return nil, fmt.Errorf("host list file %#v is too big: %v bytes", name, info.Size())
}
content, err := os.ReadFile(name)
content, err := ioutil.ReadFile(name)
if err != nil {
return nil, fmt.Errorf("unable to read input file %#v: %v", name, err)
}


@@ -2,9 +2,9 @@ package common
import (
"crypto/rand"
"encoding/hex"
"encoding/json"
"fmt"
"io/ioutil"
"net"
"os"
"path/filepath"
@@ -32,28 +32,27 @@ func TestBasicDefender(t *testing.T) {
data, err := json.Marshal(bl)
assert.NoError(t, err)
err = os.WriteFile(blFile, data, os.ModePerm)
err = ioutil.WriteFile(blFile, data, os.ModePerm)
assert.NoError(t, err)
data, err = json.Marshal(sl)
assert.NoError(t, err)
err = os.WriteFile(slFile, data, os.ModePerm)
err = ioutil.WriteFile(slFile, data, os.ModePerm)
assert.NoError(t, err)
config := &DefenderConfig{
Enabled: true,
BanTime: 10,
BanTimeIncrement: 2,
Threshold: 5,
ScoreInvalid: 2,
ScoreValid: 1,
ScoreLimitExceeded: 3,
ObservationTime: 15,
EntriesSoftLimit: 1,
EntriesHardLimit: 2,
SafeListFile: "slFile",
BlockListFile: "blFile",
Enabled: true,
BanTime: 10,
BanTimeIncrement: 2,
Threshold: 5,
ScoreInvalid: 2,
ScoreValid: 1,
ObservationTime: 15,
EntriesSoftLimit: 1,
EntriesHardLimit: 2,
SafeListFile: "slFile",
BlockListFile: "blFile",
}
_, err = newInMemoryDefender(config)
@@ -73,13 +72,9 @@ func TestBasicDefender(t *testing.T) {
assert.False(t, defender.IsBanned("invalid ip"))
assert.Equal(t, 0, defender.countBanned())
assert.Equal(t, 0, defender.countHosts())
assert.Len(t, defender.GetHosts(), 0)
_, err = defender.GetHost("10.8.0.4")
assert.Error(t, err)
defender.AddEvent("172.16.1.4", HostEventLoginFailed)
defender.AddEvent("192.168.8.4", HostEventUserNotFound)
defender.AddEvent("172.16.1.3", HostEventLimitExceeded)
assert.Equal(t, 0, defender.countHosts())
testIP := "12.34.56.78"
@@ -87,39 +82,16 @@ func TestBasicDefender(t *testing.T) {
assert.Equal(t, 1, defender.countHosts())
assert.Equal(t, 0, defender.countBanned())
assert.Equal(t, 1, defender.GetScore(testIP))
if assert.Len(t, defender.GetHosts(), 1) {
assert.Equal(t, 1, defender.GetHosts()[0].Score)
assert.True(t, defender.GetHosts()[0].BanTime.IsZero())
assert.Empty(t, defender.GetHosts()[0].GetBanTime())
}
host, err := defender.GetHost(testIP)
assert.NoError(t, err)
assert.Equal(t, 1, host.Score)
assert.Empty(t, host.GetBanTime())
assert.Nil(t, defender.GetBanTime(testIP))
defender.AddEvent(testIP, HostEventLimitExceeded)
defender.AddEvent(testIP, HostEventNoLoginTried)
assert.Equal(t, 1, defender.countHosts())
assert.Equal(t, 0, defender.countBanned())
assert.Equal(t, 4, defender.GetScore(testIP))
if assert.Len(t, defender.GetHosts(), 1) {
assert.Equal(t, 4, defender.GetHosts()[0].Score)
}
defender.AddEvent(testIP, HostEventNoLoginTried)
assert.Equal(t, 3, defender.GetScore(testIP))
defender.AddEvent(testIP, HostEventNoLoginTried)
assert.Equal(t, 0, defender.countHosts())
assert.Equal(t, 1, defender.countBanned())
assert.Equal(t, 0, defender.GetScore(testIP))
assert.NotNil(t, defender.GetBanTime(testIP))
if assert.Len(t, defender.GetHosts(), 1) {
assert.Equal(t, 0, defender.GetHosts()[0].Score)
assert.False(t, defender.GetHosts()[0].BanTime.IsZero())
assert.NotEmpty(t, defender.GetHosts()[0].GetBanTime())
assert.Equal(t, hex.EncodeToString([]byte(testIP)), defender.GetHosts()[0].GetID())
}
host, err = defender.GetHost(testIP)
assert.NoError(t, err)
assert.Equal(t, 0, host.Score)
assert.NotEmpty(t, host.GetBanTime())
// now test cleanup, testIP is already banned
testIP1 := "12.34.56.79"
@@ -170,8 +142,8 @@ func TestBasicDefender(t *testing.T) {
assert.True(t, newBanTime.After(*banTime))
}
assert.True(t, defender.DeleteHost(testIP3))
assert.False(t, defender.DeleteHost(testIP3))
assert.True(t, defender.Unban(testIP3))
assert.False(t, defender.Unban(testIP3))
err = os.Remove(slFile)
assert.NoError(t, err)
@@ -179,79 +151,6 @@ func TestBasicDefender(t *testing.T) {
assert.NoError(t, err)
}
func TestExpiredHostBans(t *testing.T) {
config := &DefenderConfig{
Enabled: true,
BanTime: 10,
BanTimeIncrement: 2,
Threshold: 5,
ScoreInvalid: 2,
ScoreValid: 1,
ScoreLimitExceeded: 3,
ObservationTime: 15,
EntriesSoftLimit: 1,
EntriesHardLimit: 2,
}
d, err := newInMemoryDefender(config)
assert.NoError(t, err)
defender := d.(*memoryDefender)
testIP := "1.2.3.4"
defender.banned[testIP] = time.Now().Add(-24 * time.Hour)
// the ban is expired so testIP should not be listed
res := defender.GetHosts()
assert.Len(t, res, 0)
assert.False(t, defender.IsBanned(testIP))
_, err = defender.GetHost(testIP)
assert.Error(t, err)
_, ok := defender.banned[testIP]
assert.True(t, ok)
// now add an event for an expired banned ip, it should be removed
defender.AddEvent(testIP, HostEventLoginFailed)
assert.False(t, defender.IsBanned(testIP))
entry, err := defender.GetHost(testIP)
assert.NoError(t, err)
assert.Equal(t, testIP, entry.IP)
assert.Empty(t, entry.GetBanTime())
assert.Equal(t, 1, entry.Score)
res = defender.GetHosts()
if assert.Len(t, res, 1) {
assert.Equal(t, testIP, res[0].IP)
assert.Empty(t, res[0].GetBanTime())
assert.Equal(t, 1, res[0].Score)
}
events := []hostEvent{
{
dateTime: time.Now().Add(-24 * time.Hour),
score: 2,
},
{
dateTime: time.Now().Add(-24 * time.Hour),
score: 3,
},
}
hs := hostScore{
Events: events,
TotalScore: 5,
}
defender.hosts[testIP] = hs
// the recorded scores are too old
res = defender.GetHosts()
assert.Len(t, res, 0)
_, err = defender.GetHost(testIP)
assert.Error(t, err)
_, ok = defender.hosts[testIP]
assert.True(t, ok)
}
func TestLoadHostListFromFile(t *testing.T) {
_, err := loadHostListFromFile(".")
assert.Error(t, err)
@@ -261,7 +160,7 @@ func TestLoadHostListFromFile(t *testing.T) {
_, err = rand.Read(content)
assert.NoError(t, err)
err = os.WriteFile(hostsFilePath, content, os.ModePerm)
err = ioutil.WriteFile(hostsFilePath, content, os.ModePerm)
assert.NoError(t, err)
_, err = loadHostListFromFile(hostsFilePath)
@@ -274,7 +173,7 @@ func TestLoadHostListFromFile(t *testing.T) {
asJSON, err := json.Marshal(hl)
assert.NoError(t, err)
err = os.WriteFile(hostsFilePath, asJSON, os.ModePerm)
err = ioutil.WriteFile(hostsFilePath, asJSON, os.ModePerm)
assert.NoError(t, err)
hostList, err := loadHostListFromFile(hostsFilePath)
@@ -284,7 +183,7 @@ func TestLoadHostListFromFile(t *testing.T) {
hl.IPAddresses = append(hl.IPAddresses, "invalidip")
asJSON, err = json.Marshal(hl)
assert.NoError(t, err)
err = os.WriteFile(hostsFilePath, asJSON, os.ModePerm)
err = ioutil.WriteFile(hostsFilePath, asJSON, os.ModePerm)
assert.NoError(t, err)
hostList, err = loadHostListFromFile(hostsFilePath)
@@ -296,7 +195,7 @@ func TestLoadHostListFromFile(t *testing.T) {
asJSON, err = json.Marshal(hl)
assert.NoError(t, err)
err = os.WriteFile(hostsFilePath, asJSON, os.ModePerm)
err = ioutil.WriteFile(hostsFilePath, asJSON, os.ModePerm)
assert.NoError(t, err)
hostList, err = loadHostListFromFile(hostsFilePath)
@@ -316,7 +215,7 @@ func TestLoadHostListFromFile(t *testing.T) {
assert.NoError(t, err)
}
err = os.WriteFile(hostsFilePath, []byte("non json content"), os.ModePerm)
err = ioutil.WriteFile(hostsFilePath, []byte("non json content"), os.ModePerm)
assert.NoError(t, err)
_, err = loadHostListFromFile(hostsFilePath)
assert.Error(t, err)
@@ -417,11 +316,6 @@ func TestDefenderConfig(t *testing.T) {
require.Error(t, err)
c.ScoreInvalid = 2
c.ScoreLimitExceeded = 10
err = c.validate()
require.Error(t, err)
c.ScoreLimitExceeded = 2
c.ScoreValid = 10
err = c.validate()
require.Error(t, err)


@@ -10,8 +10,8 @@ import (
"github.com/GehirnInc/crypt/md5_crypt"
"golang.org/x/crypto/bcrypt"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
)
const (
@@ -114,7 +114,7 @@ func (p *basicAuthProvider) getHashedPassword(username string) (string, bool) {
// ValidateCredentials returns true if the credentials are valid
func (p *basicAuthProvider) ValidateCredentials(username, password string) bool {
if hashedPwd, ok := p.getHashedPassword(username); ok {
if util.IsStringPrefixInSlice(hashedPwd, bcryptPwdPrefixes) {
if utils.IsStringPrefixInSlice(hashedPwd, bcryptPwdPrefixes) {
err := bcrypt.CompareHashAndPassword([]byte(hashedPwd), []byte(password))
return err == nil
}


@@ -1,6 +1,7 @@
package common
import (
"io/ioutil"
"os"
"path/filepath"
"runtime"
@@ -19,7 +20,7 @@ func TestBasicAuth(t *testing.T) {
authUserFile := filepath.Join(os.TempDir(), "http_users.txt")
authUserData := []byte("test1:$2y$05$bcHSED7aO1cfLto6ZdDBOOKzlwftslVhtpIkRhAtSa4GuLmk5mola\n")
err = os.WriteFile(authUserFile, authUserData, os.ModePerm)
err = ioutil.WriteFile(authUserFile, authUserData, os.ModePerm)
require.NoError(t, err)
httpAuth, err = NewBasicAuthProvider(authUserFile)
@@ -30,30 +31,30 @@ func TestBasicAuth(t *testing.T) {
require.True(t, httpAuth.ValidateCredentials("test1", "password1"))
authUserData = append(authUserData, []byte("test2:$1$OtSSTL8b$bmaCqEksI1e7rnZSjsIDR1\n")...)
err = os.WriteFile(authUserFile, authUserData, os.ModePerm)
err = ioutil.WriteFile(authUserFile, authUserData, os.ModePerm)
require.NoError(t, err)
require.False(t, httpAuth.ValidateCredentials("test2", "wrong2"))
require.True(t, httpAuth.ValidateCredentials("test2", "password2"))
authUserData = append(authUserData, []byte("test2:$apr1$gLnIkRIf$Xr/6aJfmIrihP4b2N2tcs/\n")...)
err = os.WriteFile(authUserFile, authUserData, os.ModePerm)
err = ioutil.WriteFile(authUserFile, authUserData, os.ModePerm)
require.NoError(t, err)
require.False(t, httpAuth.ValidateCredentials("test2", "wrong2"))
require.True(t, httpAuth.ValidateCredentials("test2", "password2"))
authUserData = append(authUserData, []byte("test3:$apr1$gLnIkRIf$Xr/6aJfmIrihP4b2N2tcs/\n")...)
err = os.WriteFile(authUserFile, authUserData, os.ModePerm)
err = ioutil.WriteFile(authUserFile, authUserData, os.ModePerm)
require.NoError(t, err)
require.False(t, httpAuth.ValidateCredentials("test3", "password3"))
authUserData = append(authUserData, []byte("test4:$invalid$gLnIkRIf$Xr/6$aJfmIr$ihP4b2N2tcs/\n")...)
err = os.WriteFile(authUserFile, authUserData, os.ModePerm)
err = ioutil.WriteFile(authUserFile, authUserData, os.ModePerm)
require.NoError(t, err)
require.False(t, httpAuth.ValidateCredentials("test4", "password3"))
if runtime.GOOS != "windows" {
authUserData = append(authUserData, []byte("test5:$apr1$gLnIkRIf$Xr/6aJfmIrihP4b2N2tcs/\n")...)
err = os.WriteFile(authUserFile, authUserData, os.ModePerm)
err = ioutil.WriteFile(authUserFile, authUserData, os.ModePerm)
require.NoError(t, err)
err = os.Chmod(authUserFile, 0001)
require.NoError(t, err)
@@ -62,7 +63,7 @@ func TestBasicAuth(t *testing.T) {
require.NoError(t, err)
}
authUserData = append(authUserData, []byte("\"foo\"bar\"\r\n")...)
err = os.WriteFile(authUserFile, authUserData, os.ModePerm)
err = ioutil.WriteFile(authUserFile, authUserData, os.ModePerm)
require.NoError(t, err)
require.False(t, httpAuth.ValidateCredentials("test2", "password2"))

File diff suppressed because it is too large


@@ -1,243 +0,0 @@
package common
import (
"errors"
"fmt"
"net"
"sort"
"sync"
"sync/atomic"
"time"
"golang.org/x/time/rate"
"github.com/drakkan/sftpgo/v2/util"
)
var (
errNoBucket = errors.New("no bucket found")
errReserve = errors.New("unable to reserve token")
rateLimiterProtocolValues = []string{ProtocolSSH, ProtocolFTP, ProtocolWebDAV, ProtocolHTTP}
)
// RateLimiterType defines the supported rate limiters types
type RateLimiterType int
// Supported rate limiter types
const (
rateLimiterTypeGlobal RateLimiterType = iota + 1
rateLimiterTypeSource
)
// RateLimiterConfig defines the configuration for a rate limiter
type RateLimiterConfig struct {
// Average defines the maximum rate allowed. 0 means disabled
Average int64 `json:"average" mapstructure:"average"`
// Period defines the period as milliseconds. Default: 1000 (1 second).
// The rate is actually defined by dividing average by period.
// So for a rate below 1 req/s, one needs to define a period larger than a second.
Period int64 `json:"period" mapstructure:"period"`
// Burst is the maximum number of requests allowed to go through in the
// same arbitrarily small period of time. Default: 1.
Burst int `json:"burst" mapstructure:"burst"`
// Type defines the rate limiter type:
// - rateLimiterTypeGlobal is a global rate limiter independent from the source
// - rateLimiterTypeSource is a per-source rate limiter
Type int `json:"type" mapstructure:"type"`
// Protocols defines the protocols for this rate limiter.
// Available protocols are: "SFTP", "FTP", "DAV".
// A rate limiter with no protocols defined is disabled
Protocols []string `json:"protocols" mapstructure:"protocols"`
// AllowList defines a list of IP addresses and IP ranges excluded from rate limiting
AllowList []string `json:"allow_list" mapstructure:"allow_list"`
// If the rate limit is exceeded, the defender is enabled, and this is a per-source limiter,
// a new defender event will be generated
GenerateDefenderEvents bool `json:"generate_defender_events" mapstructure:"generate_defender_events"`
// The number of per-ip rate limiters kept in memory will vary between the
// soft and hard limit
EntriesSoftLimit int `json:"entries_soft_limit" mapstructure:"entries_soft_limit"`
EntriesHardLimit int `json:"entries_hard_limit" mapstructure:"entries_hard_limit"`
}
func (r *RateLimiterConfig) isEnabled() bool {
return r.Average > 0 && len(r.Protocols) > 0
}
func (r *RateLimiterConfig) validate() error {
if r.Burst < 1 {
return fmt.Errorf("invalid burst %v. It must be >= 1", r.Burst)
}
if r.Period < 100 {
return fmt.Errorf("invalid period %v. It must be >= 100", r.Period)
}
if r.Type != int(rateLimiterTypeGlobal) && r.Type != int(rateLimiterTypeSource) {
return fmt.Errorf("invalid type %v", r.Type)
}
if r.Type != int(rateLimiterTypeGlobal) {
if r.EntriesSoftLimit <= 0 {
return fmt.Errorf("invalid entries_soft_limit %v", r.EntriesSoftLimit)
}
if r.EntriesHardLimit <= r.EntriesSoftLimit {
return fmt.Errorf("invalid entries_hard_limit %v must be > %v", r.EntriesHardLimit, r.EntriesSoftLimit)
}
}
r.Protocols = util.RemoveDuplicates(r.Protocols)
for _, protocol := range r.Protocols {
if !util.IsStringInSlice(protocol, rateLimiterProtocolValues) {
return fmt.Errorf("invalid protocol %#v", protocol)
}
}
return nil
}
func (r *RateLimiterConfig) getLimiter() *rateLimiter {
limiter := &rateLimiter{
burst: r.Burst,
globalBucket: nil,
generateDefenderEvents: r.GenerateDefenderEvents,
}
var maxDelay time.Duration
period := time.Duration(r.Period) * time.Millisecond
rtl := float64(r.Average*int64(time.Second)) / float64(period)
limiter.rate = rate.Limit(rtl)
if rtl < 1 {
maxDelay = period / 2
} else {
maxDelay = time.Second / (time.Duration(rtl) * 2)
}
if maxDelay > 10*time.Second {
maxDelay = 10 * time.Second
}
limiter.maxDelay = maxDelay
limiter.buckets = sourceBuckets{
buckets: make(map[string]sourceRateLimiter),
hardLimit: r.EntriesHardLimit,
softLimit: r.EntriesSoftLimit,
}
if r.Type != int(rateLimiterTypeSource) {
limiter.globalBucket = rate.NewLimiter(limiter.rate, limiter.burst)
}
return limiter
}
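// Illustrative sketch, not part of the original changeset: the effective rate
// is average/period and the maximum tolerated wait derives from it (capped at
// 10 seconds), matching the expectations in TestRateLimiterConfig. The
// function name is hypothetical.
func exampleLimiterDelay() {
    cfg := RateLimiterConfig{
        Average: 1,
        Period:  10000, // 10 seconds
        Burst:   1,
        Type:    int(rateLimiterTypeGlobal),
    }
    limiter := cfg.getLimiter()
    // rtl = 1 * 1e9 / 1e10 = 0.1 req/s, below 1 req/s, so maxDelay = period/2
    fmt.Println(limiter.maxDelay) // 5s
}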
// RateLimiter defines a rate limiter
type rateLimiter struct {
rate rate.Limit
burst int
maxDelay time.Duration
globalBucket *rate.Limiter
buckets sourceBuckets
generateDefenderEvents bool
allowList []func(net.IP) bool
}
// Wait blocks until the limit allows one event to happen
// or returns an error if the time to wait exceeds the max
// allowed delay
func (rl *rateLimiter) Wait(source string) (time.Duration, error) {
if len(rl.allowList) > 0 {
ip := net.ParseIP(source)
if ip != nil {
for idx := range rl.allowList {
if rl.allowList[idx](ip) {
return 0, nil
}
}
}
}
var res *rate.Reservation
if rl.globalBucket != nil {
res = rl.globalBucket.Reserve()
} else {
var err error
res, err = rl.buckets.reserve(source)
if err != nil {
rateLimiter := rate.NewLimiter(rl.rate, rl.burst)
res = rl.buckets.addAndReserve(rateLimiter, source)
}
}
if !res.OK() {
return 0, errReserve
}
delay := res.Delay()
if delay > rl.maxDelay {
res.Cancel()
if rl.generateDefenderEvents && rl.globalBucket == nil {
AddDefenderEvent(source, HostEventLimitExceeded)
}
return delay, fmt.Errorf("rate limit exceeded, wait time to respect rate %v, max wait time allowed %v", delay, rl.maxDelay)
}
time.Sleep(delay)
return 0, nil
}
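// Illustrative sketch, not part of the original changeset: protocol handlers
// are expected to call Wait with the client source address before serving a
// request and to reject the connection when the allowed delay would be
// exceeded. The helper name is hypothetical.
func exampleWaitUsage(rl *rateLimiter, remoteAddr string) error {
    if delay, err := rl.Wait(remoteAddr); err != nil {
        return fmt.Errorf("too many requests, suggested retry delay %v: %w", delay, err)
    }
    return nil
}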
type sourceRateLimiter struct {
lastActivity int64
bucket *rate.Limiter
}
func (s *sourceRateLimiter) updateLastActivity() {
atomic.StoreInt64(&s.lastActivity, time.Now().UnixNano())
}
func (s *sourceRateLimiter) getLastActivity() int64 {
return atomic.LoadInt64(&s.lastActivity)
}
type sourceBuckets struct {
sync.RWMutex
buckets map[string]sourceRateLimiter
hardLimit int
softLimit int
}
func (b *sourceBuckets) reserve(source string) (*rate.Reservation, error) {
b.RLock()
defer b.RUnlock()
if src, ok := b.buckets[source]; ok {
src.updateLastActivity()
return src.bucket.Reserve(), nil
}
return nil, errNoBucket
}
func (b *sourceBuckets) addAndReserve(r *rate.Limiter, source string) *rate.Reservation {
b.Lock()
defer b.Unlock()
b.cleanup()
src := sourceRateLimiter{
bucket: r,
}
src.updateLastActivity()
b.buckets[source] = src
return src.bucket.Reserve()
}
func (b *sourceBuckets) cleanup() {
if len(b.buckets) >= b.hardLimit {
numToRemove := len(b.buckets) - b.softLimit
kvList := make(kvList, 0, len(b.buckets))
for k, v := range b.buckets {
kvList = append(kvList, kv{
Key: k,
Value: v.getLastActivity(),
})
}
sort.Sort(kvList)
for idx, kv := range kvList {
if idx >= numToRemove {
break
}
delete(b.buckets, kv.Key)
}
}
}
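// Illustrative note, not part of the original changeset: with
// EntriesSoftLimit = 1 and EntriesHardLimit = 3, registering a fourth source
// triggers cleanup, the two least recently active buckets are evicted
// (len - softLimit) and, counting the newly added source, two buckets
// survive, as asserted in TestLimiterCleanup.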


@@ -1,148 +0,0 @@
package common
import (
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/drakkan/sftpgo/v2/util"
)
func TestRateLimiterConfig(t *testing.T) {
config := RateLimiterConfig{}
err := config.validate()
require.Error(t, err)
config.Burst = 1
config.Period = 10
err = config.validate()
require.Error(t, err)
config.Period = 1000
config.Type = 100
err = config.validate()
require.Error(t, err)
config.Type = int(rateLimiterTypeSource)
config.EntriesSoftLimit = 0
err = config.validate()
require.Error(t, err)
config.EntriesSoftLimit = 150
config.EntriesHardLimit = 0
err = config.validate()
require.Error(t, err)
config.EntriesHardLimit = 200
config.Protocols = []string{"unsupported protocol"}
err = config.validate()
require.Error(t, err)
config.Protocols = rateLimiterProtocolValues
err = config.validate()
require.NoError(t, err)
limiter := config.getLimiter()
require.Equal(t, 500*time.Millisecond, limiter.maxDelay)
require.Nil(t, limiter.globalBucket)
config.Type = int(rateLimiterTypeGlobal)
config.Average = 1
config.Period = 10000
limiter = config.getLimiter()
require.Equal(t, 5*time.Second, limiter.maxDelay)
require.NotNil(t, limiter.globalBucket)
config.Period = 100000
limiter = config.getLimiter()
require.Equal(t, 10*time.Second, limiter.maxDelay)
config.Period = 500
config.Average = 1
limiter = config.getLimiter()
require.Equal(t, 250*time.Millisecond, limiter.maxDelay)
}
func TestRateLimiter(t *testing.T) {
config := RateLimiterConfig{
Average: 1,
Period: 1000,
Burst: 1,
Type: int(rateLimiterTypeGlobal),
Protocols: rateLimiterProtocolValues,
}
limiter := config.getLimiter()
_, err := limiter.Wait("")
require.NoError(t, err)
_, err = limiter.Wait("")
require.Error(t, err)
config.Type = int(rateLimiterTypeSource)
config.GenerateDefenderEvents = true
config.EntriesSoftLimit = 5
config.EntriesHardLimit = 10
limiter = config.getLimiter()
source := "192.168.1.2"
_, err = limiter.Wait(source)
require.NoError(t, err)
_, err = limiter.Wait(source)
require.Error(t, err)
// a different source should work
_, err = limiter.Wait(source + "1")
require.NoError(t, err)
allowList := []string{"192.168.1.0/24"}
allowFuncs, err := util.ParseAllowedIPAndRanges(allowList)
assert.NoError(t, err)
limiter.allowList = allowFuncs
for i := 0; i < 5; i++ {
_, err = limiter.Wait(source)
require.NoError(t, err)
}
_, err = limiter.Wait("not an ip")
require.NoError(t, err)
config.Burst = 0
limiter = config.getLimiter()
_, err = limiter.Wait(source)
require.ErrorIs(t, err, errReserve)
}
func TestLimiterCleanup(t *testing.T) {
config := RateLimiterConfig{
Average: 100,
Period: 1000,
Burst: 1,
Type: int(rateLimiterTypeSource),
Protocols: rateLimiterProtocolValues,
EntriesSoftLimit: 1,
EntriesHardLimit: 3,
}
limiter := config.getLimiter()
source1 := "10.8.0.1"
source2 := "10.8.0.2"
source3 := "10.8.0.3"
source4 := "10.8.0.4"
_, err := limiter.Wait(source1)
assert.NoError(t, err)
time.Sleep(20 * time.Millisecond)
_, err = limiter.Wait(source2)
assert.NoError(t, err)
time.Sleep(20 * time.Millisecond)
assert.Len(t, limiter.buckets.buckets, 2)
_, ok := limiter.buckets.buckets[source1]
assert.True(t, ok)
_, ok = limiter.buckets.buckets[source2]
assert.True(t, ok)
_, err = limiter.Wait(source3)
assert.NoError(t, err)
assert.Len(t, limiter.buckets.buckets, 3)
_, ok = limiter.buckets.buckets[source1]
assert.True(t, ok)
_, ok = limiter.buckets.buckets[source2]
assert.True(t, ok)
_, ok = limiter.buckets.buckets[source3]
assert.True(t, ok)
time.Sleep(20 * time.Millisecond)
_, err = limiter.Wait(source4)
assert.NoError(t, err)
assert.Len(t, limiter.buckets.buckets, 2)
_, ok = limiter.buckets.buckets[source3]
assert.True(t, ok)
_, ok = limiter.buckets.buckets[source4]
assert.True(t, ok)
}


@@ -5,13 +5,13 @@ import (
"crypto/x509"
"crypto/x509/pkix"
"fmt"
"os"
"io/ioutil"
"path/filepath"
"sync"
"time"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
)
// CertManager defines a TLS certificate manager
@@ -98,13 +98,13 @@ func (m *CertManager) LoadCRLs() error {
var crls []*pkix.CertificateList
for _, revocationList := range m.caRevocationLists {
if !util.IsFileInputValid(revocationList) {
if !utils.IsFileInputValid(revocationList) {
return fmt.Errorf("invalid root CA revocation list %#v", revocationList)
}
if revocationList != "" && !filepath.IsAbs(revocationList) {
revocationList = filepath.Join(m.configDir, revocationList)
}
crlBytes, err := os.ReadFile(revocationList)
crlBytes, err := ioutil.ReadFile(revocationList)
if err != nil {
logger.Warn(m.logSender, "unable to read revocation list %#v", revocationList)
return err
@@ -145,13 +145,13 @@ func (m *CertManager) LoadRootCAs() error {
rootCAs := x509.NewCertPool()
for _, rootCA := range m.caCertificates {
if !util.IsFileInputValid(rootCA) {
if !utils.IsFileInputValid(rootCA) {
return fmt.Errorf("invalid root CA certificate %#v", rootCA)
}
if rootCA != "" && !filepath.IsAbs(rootCA) {
rootCA = filepath.Join(m.configDir, rootCA)
}
crt, err := os.ReadFile(rootCA)
crt, err := ioutil.ReadFile(rootCA)
if err != nil {
return err
}


@@ -3,6 +3,7 @@ package common
import (
"crypto/tls"
"crypto/x509"
"io/ioutil"
"os"
"path/filepath"
"testing"
@@ -272,13 +273,13 @@ func TestLoadCertificate(t *testing.T) {
caCrlPath := filepath.Join(os.TempDir(), "testcrl.crt")
certPath := filepath.Join(os.TempDir(), "test.crt")
keyPath := filepath.Join(os.TempDir(), "test.key")
err := os.WriteFile(caCrtPath, []byte(caCRT), os.ModePerm)
err := ioutil.WriteFile(caCrtPath, []byte(caCRT), os.ModePerm)
assert.NoError(t, err)
err = os.WriteFile(caCrlPath, []byte(caCRL), os.ModePerm)
err = ioutil.WriteFile(caCrlPath, []byte(caCRL), os.ModePerm)
assert.NoError(t, err)
err = os.WriteFile(certPath, []byte(serverCert), os.ModePerm)
err = ioutil.WriteFile(certPath, []byte(serverCert), os.ModePerm)
assert.NoError(t, err)
err = os.WriteFile(keyPath, []byte(serverKey), os.ModePerm)
err = ioutil.WriteFile(keyPath, []byte(serverKey), os.ModePerm)
assert.NoError(t, err)
certManager, err := NewCertManager(certPath, keyPath, configDir, logSenderTest)
assert.NoError(t, err)


@@ -7,10 +7,10 @@ import (
"sync/atomic"
"time"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/metric"
"github.com/drakkan/sftpgo/v2/vfs"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/metrics"
"github.com/drakkan/sftpgo/vfs"
)
var (
@@ -20,62 +20,52 @@ var (
// BaseTransfer contains the transfer details common to all protocols for an upload or a download.
type BaseTransfer struct { //nolint:maligned
ID uint64
BytesSent int64
BytesReceived int64
Fs vfs.Fs
File vfs.File
Connection *BaseConnection
cancelFn func()
fsPath string
effectiveFsPath string
requestPath string
ftpMode string
start time.Time
MaxWriteSize int64
MinWriteOffset int64
InitialSize int64
isNewFile bool
transferType int
AbortTransfer int32
aTime time.Time
mTime time.Time
ID uint64
BytesSent int64
BytesReceived int64
Fs vfs.Fs
File vfs.File
Connection *BaseConnection
cancelFn func()
fsPath string
requestPath string
start time.Time
MaxWriteSize int64
MinWriteOffset int64
InitialSize int64
isNewFile bool
transferType int
AbortTransfer int32
sync.Mutex
ErrTransfer error
}
// NewBaseTransfer returns a new BaseTransfer and adds it to the given connection
func NewBaseTransfer(file vfs.File, conn *BaseConnection, cancelFn func(), fsPath, effectiveFsPath, requestPath string,
transferType int, minWriteOffset, initialSize, maxWriteSize int64, isNewFile bool, fs vfs.Fs) *BaseTransfer {
func NewBaseTransfer(file vfs.File, conn *BaseConnection, cancelFn func(), fsPath, requestPath string, transferType int,
minWriteOffset, initialSize, maxWriteSize int64, isNewFile bool, fs vfs.Fs) *BaseTransfer {
t := &BaseTransfer{
ID: conn.GetTransferID(),
File: file,
Connection: conn,
cancelFn: cancelFn,
fsPath: fsPath,
effectiveFsPath: effectiveFsPath,
start: time.Now(),
transferType: transferType,
MinWriteOffset: minWriteOffset,
InitialSize: initialSize,
isNewFile: isNewFile,
requestPath: requestPath,
BytesSent: 0,
BytesReceived: 0,
MaxWriteSize: maxWriteSize,
AbortTransfer: 0,
Fs: fs,
ID: conn.GetTransferID(),
File: file,
Connection: conn,
cancelFn: cancelFn,
fsPath: fsPath,
start: time.Now(),
transferType: transferType,
MinWriteOffset: minWriteOffset,
InitialSize: initialSize,
isNewFile: isNewFile,
requestPath: requestPath,
BytesSent: 0,
BytesReceived: 0,
MaxWriteSize: maxWriteSize,
AbortTransfer: 0,
Fs: fs,
}
conn.AddTransfer(t)
return t
}
// SetFtpMode sets the FTP mode for the current transfer
func (t *BaseTransfer) SetFtpMode(mode string) {
t.ftpMode = mode
}
// GetID returns the transfer ID
func (t *BaseTransfer) GetID() uint64 {
return t.ID
@@ -117,15 +107,6 @@ func (t *BaseTransfer) GetFsPath() string {
return t.fsPath
}
func (t *BaseTransfer) SetTimes(fsPath string, atime time.Time, mtime time.Time) bool {
if fsPath == t.GetFsPath() {
t.aTime = atime
t.mTime = mtime
return true
}
return false
}
// GetRealFsPath returns the real transfer filesystem path.
// If atomic uploads are enabled this differs from fsPath
func (t *BaseTransfer) GetRealFsPath(fsPath string) string {
@@ -156,7 +137,7 @@ func (t *BaseTransfer) Truncate(fsPath string, size int64) (int64, error) {
if t.MaxWriteSize > 0 {
sizeDiff := initialSize - size
t.MaxWriteSize += sizeDiff
metric.TransferCompleted(atomic.LoadInt64(&t.BytesSent), atomic.LoadInt64(&t.BytesReceived), t.transferType, t.ErrTransfer)
metrics.TransferCompleted(atomic.LoadInt64(&t.BytesSent), atomic.LoadInt64(&t.BytesReceived), t.transferType, t.ErrTransfer)
atomic.StoreInt64(&t.BytesReceived, 0)
}
t.Unlock()
@@ -167,12 +148,9 @@ func (t *BaseTransfer) Truncate(fsPath string, size int64) (int64, error) {
}
if size == 0 && atomic.LoadInt64(&t.BytesSent) == 0 {
// for cloud providers the file is always truncated to zero, we don't support append/resume for uploads
// for buffered SFTP we can have buffered bytes so we return an error
if !vfs.IsBufferedSFTPFs(t.Fs) {
return 0, nil
}
return 0, nil
}
return 0, vfs.ErrVfsUnsupported
return 0, ErrOpUnsupported
}
return 0, errTransferMismatch
}
@@ -202,7 +180,7 @@ func (t *BaseTransfer) getUploadFileSize() (int64, error) {
fileSize = info.Size()
}
if vfs.IsCryptOsFs(t.Fs) && t.ErrTransfer != nil {
errDelete := t.Fs.Remove(t.fsPath, false)
errDelete := t.Connection.Fs.Remove(t.fsPath, false)
if errDelete != nil {
t.Connection.Log(logger.LevelWarn, "error removing partial crypto file %#v: %v", t.fsPath, errDelete)
}
@@ -223,10 +201,10 @@ func (t *BaseTransfer) Close() error {
if t.isNewFile {
numFiles = 1
}
metric.TransferCompleted(atomic.LoadInt64(&t.BytesSent), atomic.LoadInt64(&t.BytesReceived), t.transferType, t.ErrTransfer)
if t.File != nil && t.Connection.IsQuotaExceededError(t.ErrTransfer) {
metrics.TransferCompleted(atomic.LoadInt64(&t.BytesSent), atomic.LoadInt64(&t.BytesReceived), t.transferType, t.ErrTransfer)
if t.ErrTransfer == ErrQuotaExceeded && t.File != nil {
// if quota is exceeded we try to remove the partial file for uploads to local filesystem
err = t.Fs.Remove(t.File.Name(), false)
err = t.Connection.Fs.Remove(t.File.Name(), false)
if err == nil {
numFiles--
atomic.StoreInt64(&t.BytesReceived, 0)
@@ -234,15 +212,15 @@ func (t *BaseTransfer) Close() error {
}
t.Connection.Log(logger.LevelWarn, "upload denied due to space limit, delete temporary file: %#v, deletion error: %v",
t.File.Name(), err)
} else if t.transferType == TransferUpload && t.effectiveFsPath != t.fsPath {
} else if t.transferType == TransferUpload && t.File != nil && t.File.Name() != t.fsPath {
if t.ErrTransfer == nil || Config.UploadMode == UploadModeAtomicWithResume {
err = t.Fs.Rename(t.effectiveFsPath, t.fsPath)
err = t.Connection.Fs.Rename(t.File.Name(), t.fsPath)
t.Connection.Log(logger.LevelDebug, "atomic upload completed, rename: %#v -> %#v, error: %v",
t.effectiveFsPath, t.fsPath, err)
t.File.Name(), t.fsPath, err)
} else {
err = t.Fs.Remove(t.effectiveFsPath, false)
err = t.Connection.Fs.Remove(t.File.Name(), false)
t.Connection.Log(logger.LevelWarn, "atomic upload completed with error: \"%v\", delete temporary file: %#v, "+
"deletion error: %v", t.ErrTransfer, t.effectiveFsPath, err)
"deletion error: %v", t.ErrTransfer, t.File.Name(), err)
if err == nil {
numFiles--
atomic.StoreInt64(&t.BytesReceived, 0)
@@ -253,9 +231,10 @@ func (t *BaseTransfer) Close() error {
elapsed := time.Since(t.start).Nanoseconds() / 1000000
if t.transferType == TransferDownload {
logger.TransferLog(downloadLogSender, t.fsPath, elapsed, atomic.LoadInt64(&t.BytesSent), t.Connection.User.Username,
t.Connection.ID, t.Connection.protocol, t.Connection.localAddr, t.Connection.remoteAddr, t.ftpMode)
ExecuteActionNotification(&t.Connection.User, operationDownload, t.fsPath, t.requestPath, "", "", "", t.Connection.protocol,
t.Connection.GetRemoteIP(), atomic.LoadInt64(&t.BytesSent), t.ErrTransfer)
t.Connection.ID, t.Connection.protocol)
action := newActionNotification(&t.Connection.User, operationDownload, t.fsPath, "", "", t.Connection.protocol,
atomic.LoadInt64(&t.BytesSent), t.ErrTransfer)
go actionHandler.Handle(action) //nolint:errcheck
} else {
fileSize := atomic.LoadInt64(&t.BytesReceived) + t.MinWriteOffset
if statSize, err := t.getUploadFileSize(); err == nil {
@@ -263,11 +242,11 @@ func (t *BaseTransfer) Close() error {
}
t.Connection.Log(logger.LevelDebug, "uploaded file size %v", fileSize)
t.updateQuota(numFiles, fileSize)
t.updateTimes()
logger.TransferLog(uploadLogSender, t.fsPath, elapsed, atomic.LoadInt64(&t.BytesReceived), t.Connection.User.Username,
t.Connection.ID, t.Connection.protocol, t.Connection.localAddr, t.Connection.remoteAddr, t.ftpMode)
ExecuteActionNotification(&t.Connection.User, operationUpload, t.fsPath, t.requestPath, "", "", "", t.Connection.protocol,
t.Connection.GetRemoteIP(), fileSize, t.ErrTransfer)
t.Connection.ID, t.Connection.protocol)
action := newActionNotification(&t.Connection.User, operationUpload, t.fsPath, "", "", t.Connection.protocol,
fileSize, t.ErrTransfer)
go actionHandler.Handle(action) //nolint:errcheck
}
if t.ErrTransfer != nil {
t.Connection.Log(logger.LevelWarn, "transfer error: %v, path: %#v", t.ErrTransfer, t.fsPath)
@@ -278,14 +257,6 @@ func (t *BaseTransfer) Close() error {
return err
}
func (t *BaseTransfer) updateTimes() {
if !t.aTime.IsZero() && !t.mTime.IsZero() {
err := t.Fs.Chtimes(t.fsPath, t.aTime, t.mTime)
t.Connection.Log(logger.LevelDebug, "set times for file %#v, atime: %v, mtime: %v, err: %v",
t.fsPath, t.aTime, t.mTime, err)
}
}
func (t *BaseTransfer) updateQuota(numFiles int, fileSize int64) bool {
// S3 uploads are atomic, if there is an error nothing is uploaded
if t.File == nil && t.ErrTransfer != nil {


@@ -2,6 +2,7 @@ package common
import (
"errors"
"io/ioutil"
"os"
"path/filepath"
"testing"
@@ -10,19 +11,18 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/sdk"
"github.com/drakkan/sftpgo/v2/vfs"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/kms"
"github.com/drakkan/sftpgo/vfs"
)
func TestTransferUpdateQuota(t *testing.T) {
conn := NewBaseConnection("", ProtocolSFTP, "", "", dataprovider.User{})
conn := NewBaseConnection("", ProtocolSFTP, dataprovider.User{}, nil)
transfer := BaseTransfer{
Connection: conn,
transferType: TransferUpload,
BytesReceived: 123,
Fs: vfs.NewOsFs("", os.TempDir(), ""),
Fs: vfs.NewOsFs("", os.TempDir(), nil),
}
errFake := errors.New("fake error")
transfer.TransferError(errFake)
@@ -51,21 +51,19 @@ func TestTransferUpdateQuota(t *testing.T) {
func TestTransferThrottling(t *testing.T) {
u := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "test",
UploadBandwidth: 50,
DownloadBandwidth: 40,
},
Username: "test",
UploadBandwidth: 50,
DownloadBandwidth: 40,
}
fs := vfs.NewOsFs("", os.TempDir(), "")
fs := vfs.NewOsFs("", os.TempDir(), nil)
testFileSize := int64(131072)
wantedUploadElapsed := 1000 * (testFileSize / 1024) / u.UploadBandwidth
wantedDownloadElapsed := 1000 * (testFileSize / 1024) / u.DownloadBandwidth
// some tolerance
wantedUploadElapsed -= wantedDownloadElapsed / 10
wantedDownloadElapsed -= wantedDownloadElapsed / 10
conn := NewBaseConnection("id", ProtocolSCP, "", "", u)
transfer := NewBaseTransfer(nil, conn, nil, "", "", "", TransferUpload, 0, 0, 0, true, fs)
conn := NewBaseConnection("id", ProtocolSCP, u, nil)
transfer := NewBaseTransfer(nil, conn, nil, "", "", TransferUpload, 0, 0, 0, true, fs)
transfer.BytesReceived = testFileSize
transfer.Connection.UpdateLastActivity()
startTime := transfer.Connection.GetLastActivity()
@@ -75,7 +73,7 @@ func TestTransferThrottling(t *testing.T) {
err := transfer.Close()
assert.NoError(t, err)
transfer = NewBaseTransfer(nil, conn, nil, "", "", "", TransferDownload, 0, 0, 0, true, fs)
transfer = NewBaseTransfer(nil, conn, nil, "", "", TransferDownload, 0, 0, 0, true, fs)
transfer.BytesSent = testFileSize
transfer.Connection.UpdateLastActivity()
startTime = transfer.Connection.GetLastActivity()
@@ -89,19 +87,17 @@ func TestTransferThrottling(t *testing.T) {
func TestRealPath(t *testing.T) {
testFile := filepath.Join(os.TempDir(), "afile.txt")
fs := vfs.NewOsFs("123", os.TempDir(), "")
fs := vfs.NewOsFs("123", os.TempDir(), nil)
u := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "user",
HomeDir: os.TempDir(),
},
Username: "user",
HomeDir: os.TempDir(),
}
u.Permissions = make(map[string][]string)
u.Permissions["/"] = []string{dataprovider.PermAny}
file, err := os.Create(testFile)
require.NoError(t, err)
conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, "", "", u)
transfer := NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, u, fs)
transfer := NewBaseTransfer(file, conn, nil, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
rPath := transfer.GetRealFsPath(testFile)
assert.Equal(t, testFile, rPath)
rPath = conn.getRealFsPath(testFile)
@@ -122,12 +118,10 @@ func TestRealPath(t *testing.T) {
func TestTruncate(t *testing.T) {
testFile := filepath.Join(os.TempDir(), "transfer_test_file")
fs := vfs.NewOsFs("123", os.TempDir(), "")
fs := vfs.NewOsFs("123", os.TempDir(), nil)
u := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "user",
HomeDir: os.TempDir(),
},
Username: "user",
HomeDir: os.TempDir(),
}
u.Permissions = make(map[string][]string)
u.Permissions["/"] = []string{dataprovider.PermAny}
@@ -137,10 +131,10 @@ func TestTruncate(t *testing.T) {
}
_, err = file.Write([]byte("hello"))
assert.NoError(t, err)
conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, "", "", u)
transfer := NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 5, 100, false, fs)
conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, u, fs)
transfer := NewBaseTransfer(file, conn, nil, testFile, "/transfer_test_file", TransferUpload, 0, 5, 100, false, fs)
err = conn.SetStat("/transfer_test_file", &StatAttributes{
err = conn.SetStat(testFile, "/transfer_test_file", &StatAttributes{
Size: 2,
Flags: StatAttrSize,
})
@@ -155,9 +149,9 @@ func TestTruncate(t *testing.T) {
assert.Equal(t, int64(2), fi.Size())
}
transfer = NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 0, 100, true, fs)
transfer = NewBaseTransfer(file, conn, nil, testFile, "/transfer_test_file", TransferUpload, 0, 0, 100, true, fs)
// file.Stat will fail on a closed file
err = conn.SetStat("/transfer_test_file", &StatAttributes{
err = conn.SetStat(testFile, "/transfer_test_file", &StatAttributes{
Size: 2,
Flags: StatAttrSize,
})
@@ -165,13 +159,13 @@ func TestTruncate(t *testing.T) {
err = transfer.Close()
assert.NoError(t, err)
transfer = NewBaseTransfer(nil, conn, nil, testFile, testFile, "", TransferUpload, 0, 0, 0, true, fs)
transfer = NewBaseTransfer(nil, conn, nil, testFile, "", TransferUpload, 0, 0, 0, true, fs)
_, err = transfer.Truncate("mismatch", 0)
assert.EqualError(t, err, errTransferMismatch.Error())
_, err = transfer.Truncate(testFile, 0)
assert.NoError(t, err)
_, err = transfer.Truncate(testFile, 1)
assert.EqualError(t, err, vfs.ErrVfsUnsupported.Error())
assert.EqualError(t, err, ErrOpUnsupported.Error())
err = transfer.Close()
assert.NoError(t, err)
@@ -188,21 +182,19 @@ func TestTransferErrors(t *testing.T) {
isCancelled = true
}
testFile := filepath.Join(os.TempDir(), "transfer_test_file")
fs := vfs.NewOsFs("id", os.TempDir(), "")
fs := vfs.NewOsFs("id", os.TempDir(), nil)
u := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "test",
HomeDir: os.TempDir(),
},
Username: "test",
HomeDir: os.TempDir(),
}
err := os.WriteFile(testFile, []byte("test data"), os.ModePerm)
err := ioutil.WriteFile(testFile, []byte("test data"), os.ModePerm)
assert.NoError(t, err)
file, err := os.Open(testFile)
if !assert.NoError(t, err) {
assert.FailNow(t, "unable to open test file")
}
conn := NewBaseConnection("id", ProtocolSFTP, "", "", u)
transfer := NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
conn := NewBaseConnection("id", ProtocolSFTP, u, fs)
transfer := NewBaseTransfer(file, conn, nil, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
assert.Nil(t, transfer.cancelFn)
assert.Equal(t, testFile, transfer.GetFsPath())
transfer.SetCancelFn(cancelFn)
@@ -221,14 +213,14 @@ func TestTransferErrors(t *testing.T) {
}
assert.NoFileExists(t, testFile)
err = os.WriteFile(testFile, []byte("test data"), os.ModePerm)
err = ioutil.WriteFile(testFile, []byte("test data"), os.ModePerm)
assert.NoError(t, err)
file, err = os.Open(testFile)
if !assert.NoError(t, err) {
assert.FailNow(t, "unable to open test file")
}
fsPath := filepath.Join(os.TempDir(), "test_file")
transfer = NewBaseTransfer(file, conn, nil, fsPath, file.Name(), "/test_file", TransferUpload, 0, 0, 0, true, fs)
transfer = NewBaseTransfer(file, conn, nil, fsPath, "/test_file", TransferUpload, 0, 0, 0, true, fs)
transfer.BytesReceived = 9
transfer.TransferError(errFake)
assert.Error(t, transfer.ErrTransfer, errFake.Error())
@@ -241,13 +233,13 @@ func TestTransferErrors(t *testing.T) {
}
assert.NoFileExists(t, testFile)
err = os.WriteFile(testFile, []byte("test data"), os.ModePerm)
err = ioutil.WriteFile(testFile, []byte("test data"), os.ModePerm)
assert.NoError(t, err)
file, err = os.Open(testFile)
if !assert.NoError(t, err) {
assert.FailNow(t, "unable to open test file")
}
transfer = NewBaseTransfer(file, conn, nil, fsPath, file.Name(), "/test_file", TransferUpload, 0, 0, 0, true, fs)
transfer = NewBaseTransfer(file, conn, nil, fsPath, "/test_file", TransferUpload, 0, 0, 0, true, fs)
transfer.BytesReceived = 9
	// the file is closed by the embedding struct before Close is called
err = file.Close()
@@ -264,36 +256,21 @@ func TestTransferErrors(t *testing.T) {
func TestRemovePartialCryptoFile(t *testing.T) {
testFile := filepath.Join(os.TempDir(), "transfer_test_file")
fs, err := vfs.NewCryptFs("id", os.TempDir(), "", vfs.CryptFsConfig{CryptFsConfig: sdk.CryptFsConfig{Passphrase: kms.NewPlainSecret("secret")}})
fs, err := vfs.NewCryptFs("id", os.TempDir(), vfs.CryptFsConfig{Passphrase: kms.NewPlainSecret("secret")})
require.NoError(t, err)
u := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "test",
HomeDir: os.TempDir(),
},
Username: "test",
HomeDir: os.TempDir(),
}
conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, "", "", u)
transfer := NewBaseTransfer(nil, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, u, fs)
transfer := NewBaseTransfer(nil, conn, nil, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
transfer.ErrTransfer = errors.New("test error")
_, err = transfer.getUploadFileSize()
assert.Error(t, err)
err = os.WriteFile(testFile, []byte("test data"), os.ModePerm)
err = ioutil.WriteFile(testFile, []byte("test data"), os.ModePerm)
assert.NoError(t, err)
size, err := transfer.getUploadFileSize()
assert.NoError(t, err)
assert.Equal(t, int64(9), size)
assert.NoFileExists(t, testFile)
}
func TestFTPMode(t *testing.T) {
conn := NewBaseConnection("", ProtocolFTP, "", "", dataprovider.User{})
transfer := BaseTransfer{
Connection: conn,
transferType: TransferUpload,
BytesReceived: 123,
Fs: vfs.NewOsFs("", os.TempDir(), ""),
}
assert.Empty(t, transfer.ftpMode)
transfer.SetFtpMode("active")
assert.Equal(t, "active", transfer.ftpMode)
}

File diff suppressed because it is too large


@@ -1,4 +1,3 @@
//go:build linux
// +build linux
package config


@@ -1,4 +1,3 @@
//go:build !linux
// +build !linux
package config


@@ -2,6 +2,7 @@ package config_test
import (
"encoding/json"
"io/ioutil"
"os"
"path/filepath"
"strings"
@@ -11,18 +12,15 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/drakkan/sftpgo/v2/common"
"github.com/drakkan/sftpgo/v2/config"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/ftpd"
"github.com/drakkan/sftpgo/v2/httpclient"
"github.com/drakkan/sftpgo/v2/httpd"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/mfa"
"github.com/drakkan/sftpgo/v2/sdk/plugin"
"github.com/drakkan/sftpgo/v2/sftpd"
"github.com/drakkan/sftpgo/v2/smtp"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/common"
"github.com/drakkan/sftpgo/config"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/ftpd"
"github.com/drakkan/sftpgo/httpclient"
"github.com/drakkan/sftpgo/httpd"
"github.com/drakkan/sftpgo/sftpd"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/webdavd"
)
const (
@@ -44,16 +42,15 @@ func TestLoadConfigTest(t *testing.T) {
assert.NotEqual(t, dataprovider.Config{}, config.GetProviderConf())
assert.NotEqual(t, sftpd.Configuration{}, config.GetSFTPDConfig())
assert.NotEqual(t, httpclient.Config{}, config.GetHTTPConfig())
assert.NotEqual(t, smtp.Config{}, config.GetSMTPConfig())
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
err = os.WriteFile(configFilePath, []byte("{invalid json}"), os.ModePerm)
err = ioutil.WriteFile(configFilePath, []byte("{invalid json}"), os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
err = os.WriteFile(configFilePath, []byte(`{"sftpd": {"max_auth_tries": "a"}}`), os.ModePerm)
err = ioutil.WriteFile(configFilePath, []byte("{\"sftpd\": {\"bind_port\": \"a\"}}"), os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.Error(t, err)
@@ -82,7 +79,7 @@ func TestEmptyBanner(t *testing.T) {
c := make(map[string]sftpd.Configuration)
c["sftpd"] = sftpdConf
jsonConf, _ := json.Marshal(c)
err = os.WriteFile(configFilePath, jsonConf, os.ModePerm)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
@@ -96,7 +93,7 @@ func TestEmptyBanner(t *testing.T) {
c1 := make(map[string]ftpd.Configuration)
c1["ftpd"] = ftpdConf
jsonConf, _ = json.Marshal(c1)
err = os.WriteFile(configFilePath, jsonConf, os.ModePerm)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
@@ -106,35 +103,6 @@ func TestEmptyBanner(t *testing.T) {
assert.NoError(t, err)
}
func TestEnabledSSHCommands(t *testing.T) {
reset()
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
reset()
sftpdConf := config.GetSFTPDConfig()
sftpdConf.EnabledSSHCommands = []string{"scp"}
c := make(map[string]sftpd.Configuration)
c["sftpd"] = sftpdConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = os.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
sftpdConf = config.GetSFTPDConfig()
if assert.Len(t, sftpdConf.EnabledSSHCommands, 1) {
assert.Equal(t, "scp", sftpdConf.EnabledSSHCommands[0])
}
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
func TestInvalidUploadMode(t *testing.T) {
reset()
@@ -149,7 +117,7 @@ func TestInvalidUploadMode(t *testing.T) {
c["common"] = commonConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = os.WriteFile(configFilePath, jsonConf, os.ModePerm)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
@@ -167,12 +135,12 @@ func TestInvalidExternalAuthScope(t *testing.T) {
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
providerConf := config.GetProviderConf()
providerConf.ExternalAuthScope = 100
providerConf.ExternalAuthScope = 10
c := make(map[string]dataprovider.Config)
c["data_provider"] = providerConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = os.WriteFile(configFilePath, jsonConf, os.ModePerm)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
@@ -195,7 +163,7 @@ func TestInvalidCredentialsPath(t *testing.T) {
c["data_provider"] = providerConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = os.WriteFile(configFilePath, jsonConf, os.ModePerm)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
@@ -218,7 +186,7 @@ func TestInvalidProxyProtocol(t *testing.T) {
c["common"] = commonConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = os.WriteFile(configFilePath, jsonConf, os.ModePerm)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
@@ -241,7 +209,7 @@ func TestInvalidUsersBaseDir(t *testing.T) {
c["data_provider"] = providerConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = os.WriteFile(configFilePath, jsonConf, os.ModePerm)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
@@ -250,6 +218,76 @@ func TestInvalidUsersBaseDir(t *testing.T) {
assert.NoError(t, err)
}
func TestCommonParamsCompatibility(t *testing.T) {
reset()
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
sftpdConf := config.GetSFTPDConfig()
sftpdConf.IdleTimeout = 21 //nolint:staticcheck
sftpdConf.Actions.Hook = "http://hook"
sftpdConf.Actions.ExecuteOn = []string{"upload"}
sftpdConf.SetstatMode = 1 //nolint:staticcheck
sftpdConf.UploadMode = common.UploadModeAtomicWithResume //nolint:staticcheck
sftpdConf.ProxyProtocol = 1 //nolint:staticcheck
sftpdConf.ProxyAllowed = []string{"192.168.1.1"} //nolint:staticcheck
c := make(map[string]sftpd.Configuration)
c["sftpd"] = sftpdConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
commonConf := config.GetCommonConfig()
assert.Equal(t, 21, commonConf.IdleTimeout)
assert.Equal(t, "http://hook", commonConf.Actions.Hook)
assert.Len(t, commonConf.Actions.ExecuteOn, 1)
assert.True(t, utils.IsStringInSlice("upload", commonConf.Actions.ExecuteOn))
assert.Equal(t, 1, commonConf.SetstatMode)
assert.Equal(t, 1, commonConf.ProxyProtocol)
assert.Len(t, commonConf.ProxyAllowed, 1)
assert.True(t, utils.IsStringInSlice("192.168.1.1", commonConf.ProxyAllowed))
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
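
// Hedged illustration of the migration tested above: settings that moved from
// the sftpd section to the common section (idle_timeout, actions, setstat_mode,
// upload_mode, proxy_protocol, proxy_allowed) are still honored when found
// under "sftpd" in a legacy config file and are copied into the common
// configuration at load time.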
func TestHostKeyCompatibility(t *testing.T) {
reset()
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
sftpdConf := config.GetSFTPDConfig()
sftpdConf.Keys = []sftpd.Key{ //nolint:staticcheck
{
PrivateKey: "rsa",
},
{
PrivateKey: "ecdsa",
},
}
c := make(map[string]sftpd.Configuration)
c["sftpd"] = sftpdConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
sftpdConf = config.GetSFTPDConfig()
assert.Equal(t, 2, len(sftpdConf.HostKeys))
assert.True(t, utils.IsStringInSlice("rsa", sftpdConf.HostKeys))
assert.True(t, utils.IsStringInSlice("ecdsa", sftpdConf.HostKeys))
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
func TestSetGetConfig(t *testing.T) {
reset()
@@ -293,15 +331,6 @@ func TestSetGetConfig(t *testing.T) {
config.SetTelemetryConfig(telemetryConf)
assert.Equal(t, telemetryConf.BindPort, config.GetTelemetryConfig().BindPort)
assert.Equal(t, telemetryConf.BindAddress, config.GetTelemetryConfig().BindAddress)
pluginConf := []plugin.Config{
{
Type: "eventsearcher",
},
}
config.SetPluginsConfig(pluginConf)
if assert.Len(t, config.GetPluginsConfig(), 1) {
assert.Equal(t, pluginConf[0].Type, config.GetPluginsConfig()[0].Type)
}
}
func TestServiceToStart(t *testing.T) {
@@ -333,262 +362,141 @@ func TestServiceToStart(t *testing.T) {
assert.True(t, config.HasServicesToStart())
}
func TestSSHCommandsFromEnv(t *testing.T) {
reset()
os.Setenv("SFTPGO_SFTPD__ENABLED_SSH_COMMANDS", "cd,scp")
t.Cleanup(func() {
os.Unsetenv("SFTPGO_SFTPD__ENABLED_SSH_COMMANDS")
})
configDir := ".."
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
sftpdConf := config.GetSFTPDConfig()
if assert.Len(t, sftpdConf.EnabledSSHCommands, 2) {
assert.Equal(t, "cd", sftpdConf.EnabledSSHCommands[0])
assert.Equal(t, "scp", sftpdConf.EnabledSSHCommands[1])
}
}
func TestSMTPFromEnv(t *testing.T) {
reset()
os.Setenv("SFTPGO_SMTP__HOST", "smtp.example.com")
os.Setenv("SFTPGO_SMTP__PORT", "587")
t.Cleanup(func() {
os.Unsetenv("SFTPGO_SMTP__HOST")
os.Unsetenv("SFTPGO_SMTP__PORT")
})
configDir := ".."
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
smtpConfig := config.GetSMTPConfig()
assert.Equal(t, "smtp.example.com", smtpConfig.Host)
assert.Equal(t, 587, smtpConfig.Port)
}
func TestMFAFromEnv(t *testing.T) {
reset()
os.Setenv("SFTPGO_MFA__TOTP__0__NAME", "main")
os.Setenv("SFTPGO_MFA__TOTP__1__NAME", "additional_name")
os.Setenv("SFTPGO_MFA__TOTP__1__ISSUER", "additional_issuer")
os.Setenv("SFTPGO_MFA__TOTP__1__ALGO", "sha256")
t.Cleanup(func() {
os.Unsetenv("SFTPGO_MFA__TOTP__0__NAME")
os.Unsetenv("SFTPGO_MFA__TOTP__1__NAME")
os.Unsetenv("SFTPGO_MFA__TOTP__1__ISSUER")
os.Unsetenv("SFTPGO_MFA__TOTP__1__ALGO")
})
configDir := ".."
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
mfaConf := config.GetMFAConfig()
require.Len(t, mfaConf.TOTP, 2)
require.Equal(t, "main", mfaConf.TOTP[0].Name)
require.Equal(t, "SFTPGo", mfaConf.TOTP[0].Issuer)
require.Equal(t, "sha1", mfaConf.TOTP[0].Algo)
require.Equal(t, "additional_name", mfaConf.TOTP[1].Name)
require.Equal(t, "additional_issuer", mfaConf.TOTP[1].Issuer)
require.Equal(t, "sha256", mfaConf.TOTP[1].Algo)
}
func TestDisabledMFAConfig(t *testing.T) {
func TestSFTPDBindingsCompatibility(t *testing.T) {
reset()
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
mfaConf := config.GetMFAConfig()
assert.Len(t, mfaConf.TOTP, 1)
reset()
c := make(map[string]mfa.Config)
c["mfa"] = mfa.Config{}
sftpdConf := config.GetSFTPDConfig()
require.Len(t, sftpdConf.Bindings, 1)
sftpdConf.Bindings = nil
sftpdConf.BindPort = 9022 //nolint:staticcheck
sftpdConf.BindAddress = "127.0.0.1" //nolint:staticcheck
c := make(map[string]sftpd.Configuration)
c["sftpd"] = sftpdConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = os.WriteFile(configFilePath, jsonConf, os.ModePerm)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
mfaConf = config.GetMFAConfig()
assert.Len(t, mfaConf.TOTP, 0)
sftpdConf = config.GetSFTPDConfig()
	// the default binding should be replaced with one built from the deprecated configuration
require.Len(t, sftpdConf.Bindings, 1)
require.Equal(t, 9022, sftpdConf.Bindings[0].Port)
require.Equal(t, "127.0.0.1", sftpdConf.Bindings[0].Address)
require.True(t, sftpdConf.Bindings[0].ApplyProxyConfig)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
sftpdConf = config.GetSFTPDConfig()
require.Len(t, sftpdConf.Bindings, 1)
require.Equal(t, 9022, sftpdConf.Bindings[0].Port)
require.Equal(t, "127.0.0.1", sftpdConf.Bindings[0].Address)
require.True(t, sftpdConf.Bindings[0].ApplyProxyConfig)
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
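
// Hedged illustration (not part of this diff) of the fallback tested above:
// a legacy configuration such as
//   {"sftpd": {"bind_port": 9022, "bind_address": "127.0.0.1"}}
// is expected to load as the equivalent bindings form
//   {"sftpd": {"bindings": [{"port": 9022, "address": "127.0.0.1", "apply_proxy_config": true}]}}
// so existing config files keep working unchanged.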
func TestPluginsFromEnv(t *testing.T) {
func TestFTPDBindingsCompatibility(t *testing.T) {
reset()
os.Setenv("SFTPGO_PLUGINS__0__TYPE", "notifier")
os.Setenv("SFTPGO_PLUGINS__0__NOTIFIER_OPTIONS__FS_EVENTS", "upload,download")
os.Setenv("SFTPGO_PLUGINS__0__NOTIFIER_OPTIONS__PROVIDER_EVENTS", "add,update")
os.Setenv("SFTPGO_PLUGINS__0__NOTIFIER_OPTIONS__PROVIDER_OBJECTS", "user,admin")
os.Setenv("SFTPGO_PLUGINS__0__NOTIFIER_OPTIONS__RETRY_MAX_TIME", "2")
os.Setenv("SFTPGO_PLUGINS__0__NOTIFIER_OPTIONS__RETRY_QUEUE_MAX_SIZE", "1000")
os.Setenv("SFTPGO_PLUGINS__0__CMD", "plugin_start_cmd")
os.Setenv("SFTPGO_PLUGINS__0__ARGS", "arg1,arg2")
os.Setenv("SFTPGO_PLUGINS__0__SHA256SUM", "0a71ded61fccd59c4f3695b51c1b3d180da8d2d77ea09ccee20dac242675c193")
os.Setenv("SFTPGO_PLUGINS__0__AUTO_MTLS", "1")
os.Setenv("SFTPGO_PLUGINS__0__KMS_OPTIONS__SCHEME", kms.SchemeAWS)
os.Setenv("SFTPGO_PLUGINS__0__KMS_OPTIONS__ENCRYPTED_STATUS", kms.SecretStatusAWS)
os.Setenv("SFTPGO_PLUGINS__0__AUTH_OPTIONS__SCOPE", "14")
t.Cleanup(func() {
os.Unsetenv("SFTPGO_PLUGINS__0__TYPE")
os.Unsetenv("SFTPGO_PLUGINS__0__NOTIFIER_OPTIONS__FS_EVENTS")
os.Unsetenv("SFTPGO_PLUGINS__0__NOTIFIER_OPTIONS__PROVIDER_EVENTS")
os.Unsetenv("SFTPGO_PLUGINS__0__NOTIFIER_OPTIONS__PROVIDER_OBJECTS")
os.Unsetenv("SFTPGO_PLUGINS__0__NOTIFIER_OPTIONS__RETRY_MAX_TIME")
os.Unsetenv("SFTPGO_PLUGINS__0__NOTIFIER_OPTIONS__RETRY_QUEUE_MAX_SIZE")
os.Unsetenv("SFTPGO_PLUGINS__0__CMD")
os.Unsetenv("SFTPGO_PLUGINS__0__ARGS")
os.Unsetenv("SFTPGO_PLUGINS__0__SHA256SUM")
os.Unsetenv("SFTPGO_PLUGINS__0__AUTO_MTLS")
os.Unsetenv("SFTPGO_PLUGINS__0__KMS_OPTIONS__SCHEME")
os.Unsetenv("SFTPGO_PLUGINS__0__KMS_OPTIONS__ENCRYPTED_STATUS")
os.Unsetenv("SFTPGO_PLUGINS__0__AUTH_OPTIONS__SCOPE")
})
configDir := ".."
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
pluginsConf := config.GetPluginsConfig()
require.Len(t, pluginsConf, 1)
pluginConf := pluginsConf[0]
require.Equal(t, "notifier", pluginConf.Type)
require.Len(t, pluginConf.NotifierOptions.FsEvents, 2)
require.True(t, util.IsStringInSlice("upload", pluginConf.NotifierOptions.FsEvents))
require.True(t, util.IsStringInSlice("download", pluginConf.NotifierOptions.FsEvents))
require.Len(t, pluginConf.NotifierOptions.ProviderEvents, 2)
require.Equal(t, "add", pluginConf.NotifierOptions.ProviderEvents[0])
require.Equal(t, "update", pluginConf.NotifierOptions.ProviderEvents[1])
require.Len(t, pluginConf.NotifierOptions.ProviderObjects, 2)
require.Equal(t, "user", pluginConf.NotifierOptions.ProviderObjects[0])
require.Equal(t, "admin", pluginConf.NotifierOptions.ProviderObjects[1])
require.Equal(t, 2, pluginConf.NotifierOptions.RetryMaxTime)
require.Equal(t, 1000, pluginConf.NotifierOptions.RetryQueueMaxSize)
require.Equal(t, "plugin_start_cmd", pluginConf.Cmd)
require.Len(t, pluginConf.Args, 2)
require.Equal(t, "arg1", pluginConf.Args[0])
require.Equal(t, "arg2", pluginConf.Args[1])
require.Equal(t, "0a71ded61fccd59c4f3695b51c1b3d180da8d2d77ea09ccee20dac242675c193", pluginConf.SHA256Sum)
require.True(t, pluginConf.AutoMTLS)
require.Equal(t, kms.SchemeAWS, pluginConf.KMSOptions.Scheme)
require.Equal(t, kms.SecretStatusAWS, pluginConf.KMSOptions.EncryptedStatus)
require.Equal(t, 14, pluginConf.AuthOptions.Scope)
configAsJSON, err := json.Marshal(pluginsConf)
require.NoError(t, err)
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err = os.WriteFile(configFilePath, configAsJSON, os.ModePerm)
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
ftpdConf := config.GetFTPDConfig()
require.Len(t, ftpdConf.Bindings, 1)
ftpdConf.Bindings = nil
ftpdConf.BindPort = 9022 //nolint:staticcheck
ftpdConf.BindAddress = "127.1.0.1" //nolint:staticcheck
ftpdConf.ForcePassiveIP = "127.1.1.1" //nolint:staticcheck
ftpdConf.TLSMode = 2 //nolint:staticcheck
c := make(map[string]ftpd.Configuration)
c["ftpd"] = ftpdConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
os.Setenv("SFTPGO_PLUGINS__0__CMD", "plugin_start_cmd1")
os.Setenv("SFTPGO_PLUGINS__0__ARGS", "")
os.Setenv("SFTPGO_PLUGINS__0__AUTO_MTLS", "0")
os.Setenv("SFTPGO_PLUGINS__0__KMS_OPTIONS__SCHEME", kms.SchemeVaultTransit)
os.Setenv("SFTPGO_PLUGINS__0__KMS_OPTIONS__ENCRYPTED_STATUS", kms.SecretStatusVaultTransit)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
pluginsConf = config.GetPluginsConfig()
require.Len(t, pluginsConf, 1)
pluginConf = pluginsConf[0]
require.Equal(t, "notifier", pluginConf.Type)
require.Len(t, pluginConf.NotifierOptions.FsEvents, 2)
require.True(t, util.IsStringInSlice("upload", pluginConf.NotifierOptions.FsEvents))
require.True(t, util.IsStringInSlice("download", pluginConf.NotifierOptions.FsEvents))
require.Len(t, pluginConf.NotifierOptions.ProviderEvents, 2)
require.Equal(t, "add", pluginConf.NotifierOptions.ProviderEvents[0])
require.Equal(t, "update", pluginConf.NotifierOptions.ProviderEvents[1])
require.Len(t, pluginConf.NotifierOptions.ProviderObjects, 2)
require.Equal(t, "user", pluginConf.NotifierOptions.ProviderObjects[0])
require.Equal(t, "admin", pluginConf.NotifierOptions.ProviderObjects[1])
require.Equal(t, 2, pluginConf.NotifierOptions.RetryMaxTime)
require.Equal(t, 1000, pluginConf.NotifierOptions.RetryQueueMaxSize)
require.Equal(t, "plugin_start_cmd1", pluginConf.Cmd)
require.Len(t, pluginConf.Args, 0)
require.Equal(t, "0a71ded61fccd59c4f3695b51c1b3d180da8d2d77ea09ccee20dac242675c193", pluginConf.SHA256Sum)
require.False(t, pluginConf.AutoMTLS)
require.Equal(t, kms.SchemeVaultTransit, pluginConf.KMSOptions.Scheme)
require.Equal(t, kms.SecretStatusVaultTransit, pluginConf.KMSOptions.EncryptedStatus)
require.Equal(t, 14, pluginConf.AuthOptions.Scope)
ftpdConf = config.GetFTPDConfig()
	// the default binding should be replaced with one built from the deprecated configuration
require.Len(t, ftpdConf.Bindings, 1)
require.Equal(t, 9022, ftpdConf.Bindings[0].Port)
require.Equal(t, "127.1.0.1", ftpdConf.Bindings[0].Address)
require.True(t, ftpdConf.Bindings[0].ApplyProxyConfig)
require.Equal(t, 2, ftpdConf.Bindings[0].TLSMode)
require.Equal(t, "127.1.1.1", ftpdConf.Bindings[0].ForcePassiveIP)
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
func TestRateLimitersFromEnv(t *testing.T) {
func TestWebDAVDBindingsCompatibility(t *testing.T) {
reset()
os.Setenv("SFTPGO_COMMON__RATE_LIMITERS__0__AVERAGE", "100")
os.Setenv("SFTPGO_COMMON__RATE_LIMITERS__0__PERIOD", "2000")
os.Setenv("SFTPGO_COMMON__RATE_LIMITERS__0__BURST", "10")
os.Setenv("SFTPGO_COMMON__RATE_LIMITERS__0__TYPE", "2")
os.Setenv("SFTPGO_COMMON__RATE_LIMITERS__0__PROTOCOLS", "SSH, FTP")
os.Setenv("SFTPGO_COMMON__RATE_LIMITERS__0__GENERATE_DEFENDER_EVENTS", "1")
os.Setenv("SFTPGO_COMMON__RATE_LIMITERS__0__ENTRIES_SOFT_LIMIT", "50")
os.Setenv("SFTPGO_COMMON__RATE_LIMITERS__0__ENTRIES_HARD_LIMIT", "100")
os.Setenv("SFTPGO_COMMON__RATE_LIMITERS__0__ALLOW_LIST", ", 172.16.2.4, ")
os.Setenv("SFTPGO_COMMON__RATE_LIMITERS__8__AVERAGE", "50")
os.Setenv("SFTPGO_COMMON__RATE_LIMITERS__8__ALLOW_LIST", "192.168.1.1, 192.168.2.0/24")
t.Cleanup(func() {
os.Unsetenv("SFTPGO_COMMON__RATE_LIMITERS__0__AVERAGE")
os.Unsetenv("SFTPGO_COMMON__RATE_LIMITERS__0__PERIOD")
os.Unsetenv("SFTPGO_COMMON__RATE_LIMITERS__0__BURST")
os.Unsetenv("SFTPGO_COMMON__RATE_LIMITERS__0__TYPE")
os.Unsetenv("SFTPGO_COMMON__RATE_LIMITERS__0__PROTOCOLS")
os.Unsetenv("SFTPGO_COMMON__RATE_LIMITERS__0__GENERATE_DEFENDER_EVENTS")
os.Unsetenv("SFTPGO_COMMON__RATE_LIMITERS__0__ENTRIES_SOFT_LIMIT")
os.Unsetenv("SFTPGO_COMMON__RATE_LIMITERS__0__ENTRIES_HARD_LIMIT")
os.Unsetenv("SFTPGO_COMMON__RATE_LIMITERS__0__ALLOW_LIST")
os.Unsetenv("SFTPGO_COMMON__RATE_LIMITERS__8__AVERAGE")
os.Unsetenv("SFTPGO_COMMON__RATE_LIMITERS__8__ALLOW_LIST")
})
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
limiters := config.GetCommonConfig().RateLimitersConfig
require.Len(t, limiters, 2)
require.Equal(t, int64(100), limiters[0].Average)
require.Equal(t, int64(2000), limiters[0].Period)
require.Equal(t, 10, limiters[0].Burst)
require.Equal(t, 2, limiters[0].Type)
protocols := limiters[0].Protocols
require.Len(t, protocols, 2)
require.True(t, util.IsStringInSlice(common.ProtocolFTP, protocols))
require.True(t, util.IsStringInSlice(common.ProtocolSSH, protocols))
require.True(t, limiters[0].GenerateDefenderEvents)
require.Equal(t, 50, limiters[0].EntriesSoftLimit)
require.Equal(t, 100, limiters[0].EntriesHardLimit)
require.Len(t, limiters[0].AllowList, 1)
require.Equal(t, "172.16.2.4", limiters[0].AllowList[0])
require.Equal(t, int64(50), limiters[1].Average)
require.Len(t, limiters[1].AllowList, 2)
require.Equal(t, "192.168.1.1", limiters[1].AllowList[0])
require.Equal(t, "192.168.2.0/24", limiters[1].AllowList[1])
// we check the default values here
require.Equal(t, int64(1000), limiters[1].Period)
require.Equal(t, 1, limiters[1].Burst)
require.Equal(t, 2, limiters[1].Type)
protocols = limiters[1].Protocols
require.Len(t, protocols, 4)
require.True(t, util.IsStringInSlice(common.ProtocolFTP, protocols))
require.True(t, util.IsStringInSlice(common.ProtocolSSH, protocols))
require.True(t, util.IsStringInSlice(common.ProtocolWebDAV, protocols))
require.True(t, util.IsStringInSlice(common.ProtocolHTTP, protocols))
require.False(t, limiters[1].GenerateDefenderEvents)
require.Equal(t, 100, limiters[1].EntriesSoftLimit)
require.Equal(t, 150, limiters[1].EntriesHardLimit)
webdavConf := config.GetWebDAVDConfig()
require.Len(t, webdavConf.Bindings, 1)
webdavConf.Bindings = nil
webdavConf.BindPort = 9080 //nolint:staticcheck
webdavConf.BindAddress = "127.0.0.1" //nolint:staticcheck
c := make(map[string]webdavd.Configuration)
c["webdavd"] = webdavConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
webdavConf = config.GetWebDAVDConfig()
	// the default binding should be replaced with one built from the deprecated configuration
require.Len(t, webdavConf.Bindings, 1)
require.Equal(t, 9080, webdavConf.Bindings[0].Port)
require.Equal(t, "127.0.0.1", webdavConf.Bindings[0].Address)
require.False(t, webdavConf.Bindings[0].EnableHTTPS)
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
func TestHTTPDBindingsCompatibility(t *testing.T) {
reset()
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
httpdConf := config.GetHTTPDConfig()
require.Len(t, httpdConf.Bindings, 1)
httpdConf.Bindings = nil
httpdConf.BindPort = 9080 //nolint:staticcheck
httpdConf.BindAddress = "127.1.1.1" //nolint:staticcheck
c := make(map[string]httpd.Conf)
c["httpd"] = httpdConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
httpdConf = config.GetHTTPDConfig()
	// the default binding should be replaced with one built from the deprecated configuration
require.Len(t, httpdConf.Bindings, 1)
require.Equal(t, 9080, httpdConf.Bindings[0].Port)
require.Equal(t, "127.1.1.1", httpdConf.Bindings[0].Address)
require.False(t, httpdConf.Bindings[0].EnableHTTPS)
require.True(t, httpdConf.Bindings[0].EnableWebAdmin)
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
func TestSFTPDBindingsFromEnv(t *testing.T) {
@@ -599,12 +507,14 @@ func TestSFTPDBindingsFromEnv(t *testing.T) {
os.Setenv("SFTPGO_SFTPD__BINDINGS__0__APPLY_PROXY_CONFIG", "false")
os.Setenv("SFTPGO_SFTPD__BINDINGS__3__ADDRESS", "127.0.1.1")
os.Setenv("SFTPGO_SFTPD__BINDINGS__3__PORT", "2203")
os.Setenv("SFTPGO_SFTPD__BINDINGS__3__APPLY_PROXY_CONFIG", "1")
t.Cleanup(func() {
os.Unsetenv("SFTPGO_SFTPD__BINDINGS__0__ADDRESS")
os.Unsetenv("SFTPGO_SFTPD__BINDINGS__0__PORT")
os.Unsetenv("SFTPGO_SFTPD__BINDINGS__0__APPLY_PROXY_CONFIG")
os.Unsetenv("SFTPGO_SFTPD__BINDINGS__3__ADDRESS")
os.Unsetenv("SFTPGO_SFTPD__BINDINGS__3__PORT")
os.Unsetenv("SFTPGO_SFTPD__BINDINGS__3__APPLY_PROXY_CONFIG")
})
configDir := ".."
@@ -617,7 +527,7 @@ func TestSFTPDBindingsFromEnv(t *testing.T) {
require.False(t, bindings[0].ApplyProxyConfig)
require.Equal(t, 2203, bindings[1].Port)
require.Equal(t, "127.0.1.1", bindings[1].Address)
require.True(t, bindings[1].ApplyProxyConfig) // default value
require.True(t, bindings[1].ApplyProxyConfig)
}
func TestFTPDBindingsFromEnv(t *testing.T) {
@@ -628,18 +538,13 @@ func TestFTPDBindingsFromEnv(t *testing.T) {
os.Setenv("SFTPGO_FTPD__BINDINGS__0__APPLY_PROXY_CONFIG", "f")
os.Setenv("SFTPGO_FTPD__BINDINGS__0__TLS_MODE", "2")
os.Setenv("SFTPGO_FTPD__BINDINGS__0__FORCE_PASSIVE_IP", "127.0.1.2")
os.Setenv("SFTPGO_FTPD__BINDINGS__0__PASSIVE_IP_OVERRIDES__0__IP", "172.16.1.1")
os.Setenv("SFTPGO_FTPD__BINDINGS__0__TLS_CIPHER_SUITES", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256")
os.Setenv("SFTPGO_FTPD__BINDINGS__0__PASSIVE_CONNECTIONS_SECURITY", "1")
os.Setenv("SFTPGO_FTPD__BINDINGS__9__ADDRESS", "127.0.1.1")
os.Setenv("SFTPGO_FTPD__BINDINGS__9__PORT", "2203")
os.Setenv("SFTPGO_FTPD__BINDINGS__9__APPLY_PROXY_CONFIG", "t")
os.Setenv("SFTPGO_FTPD__BINDINGS__9__TLS_MODE", "1")
os.Setenv("SFTPGO_FTPD__BINDINGS__9__FORCE_PASSIVE_IP", "127.0.1.1")
os.Setenv("SFTPGO_FTPD__BINDINGS__9__PASSIVE_IP_OVERRIDES__3__IP", "192.168.1.1")
os.Setenv("SFTPGO_FTPD__BINDINGS__9__PASSIVE_IP_OVERRIDES__3__NETWORKS", "192.168.1.0/24, 192.168.3.0/25")
os.Setenv("SFTPGO_FTPD__BINDINGS__9__CLIENT_AUTH_TYPE", "2")
os.Setenv("SFTPGO_FTPD__BINDINGS__9__DEBUG", "1")
os.Setenv("SFTPGO_FTPD__BINDINGS__9__ACTIVE_CONNECTIONS_SECURITY", "1")
os.Setenv("SFTPGO_FTPD__BINDINGS__9__CLIENT_AUTH_TYPE", "1")
t.Cleanup(func() {
os.Unsetenv("SFTPGO_FTPD__BINDINGS__0__ADDRESS")
@@ -647,18 +552,13 @@ func TestFTPDBindingsFromEnv(t *testing.T) {
os.Unsetenv("SFTPGO_FTPD__BINDINGS__0__APPLY_PROXY_CONFIG")
os.Unsetenv("SFTPGO_FTPD__BINDINGS__0__TLS_MODE")
os.Unsetenv("SFTPGO_FTPD__BINDINGS__0__FORCE_PASSIVE_IP")
os.Unsetenv("SFTPGO_FTPD__BINDINGS__0__PASSIVE_IP_OVERRIDES__0__IP")
os.Unsetenv("SFTPGO_FTPD__BINDINGS__0__TLS_CIPHER_SUITES")
os.Unsetenv("SFTPGO_FTPD__BINDINGS__0__ACTIVE_CONNECTIONS_SECURITY")
os.Unsetenv("SFTPGO_FTPD__BINDINGS__9__ADDRESS")
os.Unsetenv("SFTPGO_FTPD__BINDINGS__9__PORT")
os.Unsetenv("SFTPGO_FTPD__BINDINGS__9__APPLY_PROXY_CONFIG")
os.Unsetenv("SFTPGO_FTPD__BINDINGS__9__TLS_MODE")
os.Unsetenv("SFTPGO_FTPD__BINDINGS__9__FORCE_PASSIVE_IP")
os.Unsetenv("SFTPGO_FTPD__BINDINGS__9__PASSIVE_IP_OVERRIDES__3__IP")
os.Unsetenv("SFTPGO_FTPD__BINDINGS__9__PASSIVE_IP_OVERRIDES__3__NETWORKS")
os.Unsetenv("SFTPGO_FTPD__BINDINGS__9__CLIENT_AUTH_TYPE")
os.Unsetenv("SFTPGO_FTPD__BINDINGS__9__DEBUG")
os.Unsetenv("SFTPGO_FTPD__BINDINGS__9__ACTIVE_CONNECTIONS_SECURITY")
})
configDir := ".."
@@ -671,29 +571,17 @@ func TestFTPDBindingsFromEnv(t *testing.T) {
require.False(t, bindings[0].ApplyProxyConfig)
require.Equal(t, 2, bindings[0].TLSMode)
require.Equal(t, "127.0.1.2", bindings[0].ForcePassiveIP)
require.Len(t, bindings[0].PassiveIPOverrides, 0)
require.Equal(t, 0, bindings[0].ClientAuthType)
require.Len(t, bindings[0].TLSCipherSuites, 2)
require.Equal(t, "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256", bindings[0].TLSCipherSuites[0])
require.Equal(t, "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", bindings[0].TLSCipherSuites[1])
require.False(t, bindings[0].Debug)
require.Equal(t, 1, bindings[0].PassiveConnectionsSecurity)
require.Equal(t, 0, bindings[0].ActiveConnectionsSecurity)
require.Equal(t, 2203, bindings[1].Port)
require.Equal(t, "127.0.1.1", bindings[1].Address)
require.True(t, bindings[1].ApplyProxyConfig) // default value
require.True(t, bindings[1].ApplyProxyConfig)
require.Equal(t, 1, bindings[1].TLSMode)
require.Equal(t, "127.0.1.1", bindings[1].ForcePassiveIP)
require.Len(t, bindings[1].PassiveIPOverrides, 1)
require.Equal(t, "192.168.1.1", bindings[1].PassiveIPOverrides[0].IP)
require.Len(t, bindings[1].PassiveIPOverrides[0].Networks, 2)
require.Equal(t, "192.168.1.0/24", bindings[1].PassiveIPOverrides[0].Networks[0])
require.Equal(t, "192.168.3.0/25", bindings[1].PassiveIPOverrides[0].Networks[1])
require.Equal(t, 2, bindings[1].ClientAuthType)
require.Equal(t, 1, bindings[1].ClientAuthType)
require.Nil(t, bindings[1].TLSCipherSuites)
require.Equal(t, 0, bindings[1].PassiveConnectionsSecurity)
require.Equal(t, 1, bindings[1].ActiveConnectionsSecurity)
require.True(t, bindings[1].Debug)
}
func TestWebDAVBindingsFromEnv(t *testing.T) {
@@ -703,23 +591,19 @@ func TestWebDAVBindingsFromEnv(t *testing.T) {
os.Setenv("SFTPGO_WEBDAVD__BINDINGS__1__PORT", "8000")
os.Setenv("SFTPGO_WEBDAVD__BINDINGS__1__ENABLE_HTTPS", "0")
os.Setenv("SFTPGO_WEBDAVD__BINDINGS__1__TLS_CIPHER_SUITES", "TLS_RSA_WITH_AES_128_CBC_SHA ")
os.Setenv("SFTPGO_WEBDAVD__BINDINGS__1__PROXY_ALLOWED", "192.168.10.1")
os.Setenv("SFTPGO_WEBDAVD__BINDINGS__2__ADDRESS", "127.0.1.1")
os.Setenv("SFTPGO_WEBDAVD__BINDINGS__2__PORT", "9000")
os.Setenv("SFTPGO_WEBDAVD__BINDINGS__2__ENABLE_HTTPS", "1")
os.Setenv("SFTPGO_WEBDAVD__BINDINGS__2__CLIENT_AUTH_TYPE", "1")
os.Setenv("SFTPGO_WEBDAVD__BINDINGS__2__PREFIX", "/dav2")
t.Cleanup(func() {
os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__1__ADDRESS")
os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__1__PORT")
os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__1__ENABLE_HTTPS")
os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__1__TLS_CIPHER_SUITES")
os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__1__PROXY_ALLOWED")
os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__2__ADDRESS")
os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__2__PORT")
os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__2__ENABLE_HTTPS")
os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__2__CLIENT_AUTH_TYPE")
os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__2__PREFIX")
})
configDir := ".."
@@ -731,21 +615,17 @@ func TestWebDAVBindingsFromEnv(t *testing.T) {
require.Empty(t, bindings[0].Address)
require.False(t, bindings[0].EnableHTTPS)
require.Len(t, bindings[0].TLSCipherSuites, 0)
require.Empty(t, bindings[0].Prefix)
require.Equal(t, 8000, bindings[1].Port)
require.Equal(t, "127.0.0.1", bindings[1].Address)
require.False(t, bindings[1].EnableHTTPS)
require.Equal(t, 0, bindings[1].ClientAuthType)
require.Len(t, bindings[1].TLSCipherSuites, 1)
require.Equal(t, "TLS_RSA_WITH_AES_128_CBC_SHA", bindings[1].TLSCipherSuites[0])
require.Equal(t, "192.168.10.1", bindings[1].ProxyAllowed[0])
require.Empty(t, bindings[1].Prefix)
require.Equal(t, 9000, bindings[2].Port)
require.Equal(t, "127.0.1.1", bindings[2].Address)
require.True(t, bindings[2].EnableHTTPS)
require.Equal(t, 1, bindings[2].ClientAuthType)
require.Nil(t, bindings[2].TLSCipherSuites)
require.Equal(t, "/dav2", bindings[2].Prefix)
}
func TestHTTPDBindingsFromEnv(t *testing.T) {
@@ -759,17 +639,13 @@ func TestHTTPDBindingsFromEnv(t *testing.T) {
os.Setenv("SFTPGO_HTTPD__BINDINGS__1__ADDRESS", "127.0.0.1")
os.Setenv("SFTPGO_HTTPD__BINDINGS__1__PORT", "8000")
os.Setenv("SFTPGO_HTTPD__BINDINGS__1__ENABLE_HTTPS", "0")
os.Setenv("SFTPGO_HTTPD__BINDINGS__1__HIDE_LOGIN_URL", " 1")
os.Setenv("SFTPGO_HTTPD__BINDINGS__1__ENABLE_WEB_ADMIN", "1")
os.Setenv("SFTPGO_HTTPD__BINDINGS__2__ADDRESS", "127.0.1.1")
os.Setenv("SFTPGO_HTTPD__BINDINGS__2__PORT", "9000")
os.Setenv("SFTPGO_HTTPD__BINDINGS__2__ENABLE_WEB_ADMIN", "0")
os.Setenv("SFTPGO_HTTPD__BINDINGS__2__ENABLE_WEB_CLIENT", "0")
os.Setenv("SFTPGO_HTTPD__BINDINGS__2__RENDER_OPENAPI", "0")
os.Setenv("SFTPGO_HTTPD__BINDINGS__2__ENABLE_HTTPS", "1 ")
os.Setenv("SFTPGO_HTTPD__BINDINGS__2__ENABLE_HTTPS", "1")
os.Setenv("SFTPGO_HTTPD__BINDINGS__2__CLIENT_AUTH_TYPE", "1")
os.Setenv("SFTPGO_HTTPD__BINDINGS__2__TLS_CIPHER_SUITES", " TLS_AES_256_GCM_SHA384 , TLS_CHACHA20_POLY1305_SHA256")
os.Setenv("SFTPGO_HTTPD__BINDINGS__2__PROXY_ALLOWED", " 192.168.9.1 , 172.16.25.0/24")
os.Setenv("SFTPGO_HTTPD__BINDINGS__2__HIDE_LOGIN_URL", "3")
t.Cleanup(func() {
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__0__ADDRESS")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__0__PORT")
@@ -777,17 +653,13 @@ func TestHTTPDBindingsFromEnv(t *testing.T) {
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__1__ADDRESS")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__1__PORT")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__1__ENABLE_HTTPS")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__1__HIDE_LOGIN_URL")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__1__ENABLE_WEB_ADMIN")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__2__ADDRESS")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__2__PORT")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__2__ENABLE_HTTPS")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__2__ENABLE_WEB_ADMIN")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__2__ENABLE_WEB_CLIENT")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__2__RENDER_OPENAPI")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__2__CLIENT_AUTH_TYPE")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__2__TLS_CIPHER_SUITES")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__2__PROXY_ALLOWED")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__2__HIDE_LOGIN_URL")
})
configDir := ".."
@@ -799,34 +671,22 @@ func TestHTTPDBindingsFromEnv(t *testing.T) {
require.Equal(t, sockPath, bindings[0].Address)
require.False(t, bindings[0].EnableHTTPS)
require.True(t, bindings[0].EnableWebAdmin)
require.True(t, bindings[0].EnableWebClient)
require.True(t, bindings[0].RenderOpenAPI)
require.Len(t, bindings[0].TLSCipherSuites, 1)
require.Equal(t, "TLS_AES_128_GCM_SHA256", bindings[0].TLSCipherSuites[0])
require.Equal(t, 0, bindings[0].HideLoginURL)
require.Equal(t, 8000, bindings[1].Port)
require.Equal(t, "127.0.0.1", bindings[1].Address)
require.False(t, bindings[1].EnableHTTPS)
require.True(t, bindings[1].EnableWebAdmin)
require.True(t, bindings[1].EnableWebClient)
require.True(t, bindings[1].RenderOpenAPI)
require.Nil(t, bindings[1].TLSCipherSuites)
require.Equal(t, 1, bindings[1].HideLoginURL)
require.Equal(t, 9000, bindings[2].Port)
require.Equal(t, "127.0.1.1", bindings[2].Address)
require.True(t, bindings[2].EnableHTTPS)
require.False(t, bindings[2].EnableWebAdmin)
require.False(t, bindings[2].EnableWebClient)
require.False(t, bindings[2].RenderOpenAPI)
require.Equal(t, 1, bindings[2].ClientAuthType)
require.Len(t, bindings[2].TLSCipherSuites, 2)
require.Equal(t, "TLS_AES_256_GCM_SHA384", bindings[2].TLSCipherSuites[0])
require.Equal(t, "TLS_CHACHA20_POLY1305_SHA256", bindings[2].TLSCipherSuites[1])
require.Len(t, bindings[2].ProxyAllowed, 2)
require.Equal(t, "192.168.9.1", bindings[2].ProxyAllowed[0])
require.Equal(t, "172.16.25.0/24", bindings[2].ProxyAllowed[1])
require.Equal(t, 3, bindings[2].HideLoginURL)
}
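
// Hedged note on the convention exercised by the *FromEnv tests above:
// configuration keys can be overridden with environment variables named
// SFTPGO_<SECTION>__<FIELD>, where each "__" descends one nesting level and a
// numeric segment indexes an array element, e.g.
// SFTPGO_HTTPD__BINDINGS__2__PORT=9000 acts like
//   {"httpd": {"bindings": [..., ..., {"port": 9000}]}}
// in the config file. Sparse indexes are compacted, which is why the FTPD
// bindings set with index 9 above end up in bindings[1].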
func TestHTTPClientCertificatesFromEnv(t *testing.T) {
@@ -846,7 +706,7 @@ func TestHTTPClientCertificatesFromEnv(t *testing.T) {
c["http"] = httpConf
jsonConf, err := json.Marshal(c)
require.NoError(t, err)
err = os.WriteFile(configFilePath, jsonConf, os.ModePerm)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
require.NoError(t, err)
err = config.LoadConfig(configDir, confName)
require.NoError(t, err)
@@ -890,77 +750,6 @@ func TestHTTPClientCertificatesFromEnv(t *testing.T) {
require.Equal(t, "key9", config.GetHTTPConfig().Certificates[1].Key)
}
func TestHTTPClientHeadersFromEnv(t *testing.T) {
reset()
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
httpConf := config.GetHTTPConfig()
httpConf.Headers = append(httpConf.Headers, httpclient.Header{
Key: "key",
Value: "value",
URL: "url",
})
c := make(map[string]httpclient.Config)
c["http"] = httpConf
jsonConf, err := json.Marshal(c)
require.NoError(t, err)
err = os.WriteFile(configFilePath, jsonConf, os.ModePerm)
require.NoError(t, err)
err = config.LoadConfig(configDir, confName)
require.NoError(t, err)
require.Len(t, config.GetHTTPConfig().Headers, 1)
require.Equal(t, "key", config.GetHTTPConfig().Headers[0].Key)
require.Equal(t, "value", config.GetHTTPConfig().Headers[0].Value)
require.Equal(t, "url", config.GetHTTPConfig().Headers[0].URL)
os.Setenv("SFTPGO_HTTP__HEADERS__0__KEY", "key0")
os.Setenv("SFTPGO_HTTP__HEADERS__0__VALUE", "value0")
os.Setenv("SFTPGO_HTTP__HEADERS__0__URL", "url0")
os.Setenv("SFTPGO_HTTP__HEADERS__8__KEY", "key8")
os.Setenv("SFTPGO_HTTP__HEADERS__9__KEY", "key9")
os.Setenv("SFTPGO_HTTP__HEADERS__9__VALUE", "value9")
os.Setenv("SFTPGO_HTTP__HEADERS__9__URL", "url9")
t.Cleanup(func() {
os.Unsetenv("SFTPGO_HTTP__HEADERS__0__KEY")
os.Unsetenv("SFTPGO_HTTP__HEADERS__0__VALUE")
os.Unsetenv("SFTPGO_HTTP__HEADERS__0__URL")
os.Unsetenv("SFTPGO_HTTP__HEADERS__8__KEY")
os.Unsetenv("SFTPGO_HTTP__HEADERS__9__KEY")
os.Unsetenv("SFTPGO_HTTP__HEADERS__9__VALUE")
os.Unsetenv("SFTPGO_HTTP__HEADERS__9__URL")
})
err = config.LoadConfig(configDir, confName)
require.NoError(t, err)
require.Len(t, config.GetHTTPConfig().Headers, 2)
require.Equal(t, "key0", config.GetHTTPConfig().Headers[0].Key)
require.Equal(t, "value0", config.GetHTTPConfig().Headers[0].Value)
require.Equal(t, "url0", config.GetHTTPConfig().Headers[0].URL)
require.Equal(t, "key9", config.GetHTTPConfig().Headers[1].Key)
require.Equal(t, "value9", config.GetHTTPConfig().Headers[1].Value)
require.Equal(t, "url9", config.GetHTTPConfig().Headers[1].URL)
err = os.Remove(configFilePath)
assert.NoError(t, err)
config.Init()
err = config.LoadConfig(configDir, "")
require.NoError(t, err)
require.Len(t, config.GetHTTPConfig().Headers, 2)
require.Equal(t, "key0", config.GetHTTPConfig().Headers[0].Key)
require.Equal(t, "value0", config.GetHTTPConfig().Headers[0].Value)
require.Equal(t, "url0", config.GetHTTPConfig().Headers[0].URL)
require.Equal(t, "key9", config.GetHTTPConfig().Headers[1].Key)
require.Equal(t, "value9", config.GetHTTPConfig().Headers[1].Value)
require.Equal(t, "url9", config.GetHTTPConfig().Headers[1].URL)
}
func TestConfigFromEnv(t *testing.T) {
reset()
@@ -968,7 +757,6 @@ func TestConfigFromEnv(t *testing.T) {
os.Setenv("SFTPGO_WEBDAVD__BINDINGS__0__PORT", "12000")
os.Setenv("SFTPGO_DATA_PROVIDER__PASSWORD_HASHING__ARGON2_OPTIONS__ITERATIONS", "41")
os.Setenv("SFTPGO_DATA_PROVIDER__POOL_SIZE", "10")
os.Setenv("SFTPGO_DATA_PROVIDER__IS_SHARED", "1")
os.Setenv("SFTPGO_DATA_PROVIDER__ACTIONS__EXECUTE_ON", "add")
os.Setenv("SFTPGO_KMS__SECRETS__URL", "local")
os.Setenv("SFTPGO_KMS__SECRETS__MASTER_KEY_PATH", "path")
@@ -978,7 +766,6 @@ func TestConfigFromEnv(t *testing.T) {
os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__0__PORT")
os.Unsetenv("SFTPGO_DATA_PROVIDER__PASSWORD_HASHING__ARGON2_OPTIONS__ITERATIONS")
os.Unsetenv("SFTPGO_DATA_PROVIDER__POOL_SIZE")
os.Unsetenv("SFTPGO_DATA_PROVIDER__IS_SHARED")
os.Unsetenv("SFTPGO_DATA_PROVIDER__ACTIONS__EXECUTE_ON")
os.Unsetenv("SFTPGO_KMS__SECRETS__URL")
os.Unsetenv("SFTPGO_KMS__SECRETS__MASTER_KEY_PATH")
@@ -992,7 +779,6 @@ func TestConfigFromEnv(t *testing.T) {
dataProviderConf := config.GetProviderConf()
assert.Equal(t, uint32(41), dataProviderConf.PasswordHashing.Argon2Options.Iterations)
assert.Equal(t, 10, dataProviderConf.PoolSize)
assert.Equal(t, 1, dataProviderConf.IsShared)
assert.Len(t, dataProviderConf.Actions.ExecuteOn, 1)
assert.Contains(t, dataProviderConf.Actions.ExecuteOn, "add")
kmsConfig := config.GetKMSConfig()


@@ -1,106 +0,0 @@
package dataprovider
import (
"bytes"
"context"
"fmt"
"net/url"
"os"
"os/exec"
"path/filepath"
"strings"
"time"
"github.com/drakkan/sftpgo/v2/httpclient"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/sdk/plugin"
"github.com/drakkan/sftpgo/v2/util"
)
const (
	// ActionExecutorSelf is used as the username for self actions, for example a user/admin updating itself
ActionExecutorSelf = "__self__"
	// ActionExecutorSystem is used as the username for actions with no explicit executor, for example
	// adding/updating a user/admin by loading initial data
ActionExecutorSystem = "__system__"
)
const (
actionObjectUser = "user"
actionObjectAdmin = "admin"
actionObjectAPIKey = "api_key"
actionObjectShare = "share"
)
func executeAction(operation, executor, ip, objectType, objectName string, object plugin.Renderer) {
plugin.Handler.NotifyProviderEvent(time.Now().UnixNano(), operation, executor, objectType, objectName, ip, object)
if config.Actions.Hook == "" {
return
}
if !util.IsStringInSlice(operation, config.Actions.ExecuteOn) ||
!util.IsStringInSlice(objectType, config.Actions.ExecuteFor) {
return
}
go func() {
dataAsJSON, err := object.RenderAsJSON(operation != operationDelete)
if err != nil {
providerLog(logger.LevelWarn, "unable to serialize user as JSON for operation %#v: %v", operation, err)
return
}
if strings.HasPrefix(config.Actions.Hook, "http") {
var url *url.URL
url, err := url.Parse(config.Actions.Hook)
if err != nil {
providerLog(logger.LevelWarn, "Invalid http_notification_url %#v for operation %#v: %v", config.Actions.Hook, operation, err)
return
}
q := url.Query()
q.Add("action", operation)
q.Add("username", executor)
q.Add("ip", ip)
q.Add("object_type", objectType)
q.Add("object_name", objectName)
q.Add("timestamp", fmt.Sprintf("%v", time.Now().UnixNano()))
url.RawQuery = q.Encode()
startTime := time.Now()
resp, err := httpclient.RetryablePost(url.String(), "application/json", bytes.NewBuffer(dataAsJSON))
respCode := 0
if err == nil {
respCode = resp.StatusCode
resp.Body.Close()
}
providerLog(logger.LevelDebug, "notified operation %#v to URL: %v status code: %v, elapsed: %v err: %v",
operation, url.Redacted(), respCode, time.Since(startTime), err)
} else {
executeNotificationCommand(operation, executor, ip, objectType, objectName, dataAsJSON) //nolint:errcheck // the error is used in test cases only
}
}()
}
func executeNotificationCommand(operation, executor, ip, objectType, objectName string, objectAsJSON []byte) error {
if !filepath.IsAbs(config.Actions.Hook) {
err := fmt.Errorf("invalid notification command %#v", config.Actions.Hook)
logger.Warn(logSender, "", "unable to execute notification command: %v", err)
return err
}
ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
defer cancel()
cmd := exec.CommandContext(ctx, config.Actions.Hook)
cmd.Env = append(os.Environ(),
fmt.Sprintf("SFTPGO_PROVIDER_ACTION=%v", operation),
fmt.Sprintf("SFTPGO_PROVIDER_OBJECT_TYPE=%v", objectType),
fmt.Sprintf("SFTPGO_PROVIDER_OBJECT_NAME=%v", objectName),
fmt.Sprintf("SFTPGO_PROVIDER_USERNAME=%v", executor),
fmt.Sprintf("SFTPGO_PROVIDER_IP=%v", ip),
fmt.Sprintf("SFTPGO_PROVIDER_TIMESTAMP=%v", util.GetTimeAsMsSinceEpoch(time.Now())),
fmt.Sprintf("SFTPGO_PROVIDER_OBJECT=%v", string(objectAsJSON)))
startTime := time.Now()
err := cmd.Run()
providerLog(logger.LevelDebug, "executed command %#v, elapsed: %v, error: %v", config.Actions.Hook,
time.Since(startTime), err)
return err
}
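
// Illustrative sketch, not part of this diff: a minimal receiver for the HTTP
// provider-actions hook implemented by executeAction above. The listen address
// and handler path are assumptions; the query parameters and the JSON body
// mirror what executeAction sends.
package main

import (
	"io"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		q := r.URL.Query()
		// the body is the affected object (user/admin/api_key/share) as JSON
		body, err := io.ReadAll(r.Body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		log.Printf("action=%s executor=%s ip=%s object=%s/%s payload=%d bytes",
			q.Get("action"), q.Get("username"), q.Get("ip"),
			q.Get("object_type"), q.Get("object_name"), len(body))
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:8000", nil))
}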


@@ -1,25 +1,17 @@
package dataprovider
import (
"crypto/sha256"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"net"
"os"
"regexp"
"strings"
"github.com/alexedwards/argon2id"
passwordvalidator "github.com/wagslane/go-password-validator"
"golang.org/x/crypto/bcrypt"
"github.com/minio/sha256-simd"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/mfa"
"github.com/drakkan/sftpgo/v2/sdk"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/utils"
)
// Available permissions for SFTPGo admins
@@ -33,69 +25,26 @@ const (
PermAdminCloseConnections = "close_conns"
PermAdminViewServerStatus = "view_status"
PermAdminManageAdmins = "manage_admins"
PermAdminManageAPIKeys = "manage_apikeys"
PermAdminQuotaScans = "quota_scans"
PermAdminManageSystem = "manage_system"
PermAdminManageDefender = "manage_defender"
PermAdminViewDefender = "view_defender"
PermAdminRetentionChecks = "retention_checks"
PermAdminViewEvents = "view_events"
)
var (
emailRegex = regexp.MustCompile("^(?:(?:(?:(?:[a-zA-Z]|\\d|[!#\\$%&'\\*\\+\\-\\/=\\?\\^_`{\\|}~]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])+(?:\\.([a-zA-Z]|\\d|[!#\\$%&'\\*\\+\\-\\/=\\?\\^_`{\\|}~]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])+)*)|(?:(?:\\x22)(?:(?:(?:(?:\\x20|\\x09)*(?:\\x0d\\x0a))?(?:\\x20|\\x09)+)?(?:(?:[\\x01-\\x08\\x0b\\x0c\\x0e-\\x1f\\x7f]|\\x21|[\\x23-\\x5b]|[\\x5d-\\x7e]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(?:(?:[\\x01-\\x09\\x0b\\x0c\\x0d-\\x7f]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}]))))*(?:(?:(?:\\x20|\\x09)*(?:\\x0d\\x0a))?(\\x20|\\x09)+)?(?:\\x22))))@(?:(?:(?:[a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(?:(?:[a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])(?:[a-zA-Z]|\\d|-|\\.|~|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])*(?:[a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])))\\.)+(?:(?:[a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(?:(?:[a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])(?:[a-zA-Z]|\\d|-|\\.|~|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])*(?:[a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])))\\.?$")
validAdminPerms = []string{PermAdminAny, PermAdminAddUsers, PermAdminChangeUsers, PermAdminDeleteUsers,
PermAdminViewUsers, PermAdminViewConnections, PermAdminCloseConnections, PermAdminViewServerStatus,
PermAdminManageAdmins, PermAdminManageAPIKeys, PermAdminQuotaScans, PermAdminManageSystem,
PermAdminManageDefender, PermAdminViewDefender, PermAdminRetentionChecks, PermAdminViewEvents}
PermAdminManageAdmins, PermAdminQuotaScans, PermAdminManageSystem, PermAdminManageDefender,
PermAdminViewDefender}
)
// TOTPConfig defines the time-based one time password configuration
type TOTPConfig struct {
Enabled bool `json:"enabled,omitempty"`
ConfigName string `json:"config_name,omitempty"`
Secret *kms.Secret `json:"secret,omitempty"`
}
func (c *TOTPConfig) validate(username string) error {
if !c.Enabled {
c.ConfigName = ""
c.Secret = kms.NewEmptySecret()
return nil
}
if c.ConfigName == "" {
return util.NewValidationError("totp: config name is mandatory")
}
if !util.IsStringInSlice(c.ConfigName, mfa.GetAvailableTOTPConfigNames()) {
return util.NewValidationError(fmt.Sprintf("totp: config name %#v not found", c.ConfigName))
}
if c.Secret.IsEmpty() {
return util.NewValidationError("totp: secret is mandatory")
}
if c.Secret.IsPlain() {
c.Secret.SetAdditionalData(username)
if err := c.Secret.Encrypt(); err != nil {
return util.NewValidationError(fmt.Sprintf("totp: unable to encrypt secret: %v", err))
}
}
return nil
}
// AdminFilters defines additional restrictions for SFTPGo admins
// TODO: rename to AdminOptions in v3
type AdminFilters struct {
// only clients connecting from these IP/Mask are allowed.
// IP/Mask must be in CIDR notation as defined in RFC 4632 and RFC 4291
// for example "192.0.2.0/24" or "2001:db8::/32"
AllowList []string `json:"allow_list,omitempty"`
// API key auth allows to impersonate this administrator with an API key
AllowAPIKeyAuth bool `json:"allow_api_key_auth,omitempty"`
// Time-based one time passwords configuration
TOTPConfig TOTPConfig `json:"totp_config,omitempty"`
// Recovery codes to use if the user loses access to their second factor auth device.
	// Each code can only be used once; use these codes to log in and then disable or
	// reset 2FA for your account
RecoveryCodes []sdk.RecoveryCode `json:"recovery_codes,omitempty"`
}
// Admin defines a SFTPGo admin
@@ -107,124 +56,55 @@ type Admin struct {
// Username
Username string `json:"username"`
Password string `json:"password,omitempty"`
Email string `json:"email,omitempty"`
Email string `json:"email"`
Permissions []string `json:"permissions"`
Filters AdminFilters `json:"filters,omitempty"`
Description string `json:"description,omitempty"`
AdditionalInfo string `json:"additional_info,omitempty"`
// Creation time as unix timestamp in milliseconds. It will be 0 for admins created before v2.2.0
CreatedAt int64 `json:"created_at"`
// last update time as unix timestamp in milliseconds
UpdatedAt int64 `json:"updated_at"`
// Last login as unix timestamp in milliseconds
LastLogin int64 `json:"last_login"`
}
// CountUnusedRecoveryCodes returns the number of unused recovery codes
func (a *Admin) CountUnusedRecoveryCodes() int {
unused := 0
for _, code := range a.Filters.RecoveryCodes {
if !code.Used {
unused++
}
}
return unused
}
func (a *Admin) hashPassword() error {
if a.Password != "" && !util.IsStringPrefixInSlice(a.Password, internalHashPwdPrefixes) {
if config.PasswordValidation.Admins.MinEntropy > 0 {
if err := passwordvalidator.Validate(a.Password, config.PasswordValidation.Admins.MinEntropy); err != nil {
return util.NewValidationError(err.Error())
}
}
if config.PasswordHashing.Algo == HashingAlgoBcrypt {
pwd, err := bcrypt.GenerateFromPassword([]byte(a.Password), config.PasswordHashing.BcryptOptions.Cost)
if err != nil {
return err
}
a.Password = string(pwd)
} else {
pwd, err := argon2id.CreateHash(a.Password, argon2Params)
if err != nil {
return err
}
a.Password = pwd
}
}
return nil
}
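
// Hedged example of the settings consulted above: with
//   {"data_provider": {"password_hashing": {"algo": "bcrypt", "bcrypt_options": {"cost": 10}}}}
// new admin passwords are stored as bcrypt hashes; any other value of "algo"
// selects argon2id with the configured argon2_options.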
func (a *Admin) hasRedactedSecret() bool {
return a.Filters.TOTPConfig.Secret.IsRedacted()
}
func (a *Admin) validateRecoveryCodes() error {
for i := 0; i < len(a.Filters.RecoveryCodes); i++ {
code := &a.Filters.RecoveryCodes[i]
if code.Secret.IsEmpty() {
return util.NewValidationError("mfa: recovery code cannot be empty")
}
if code.Secret.IsPlain() {
code.Secret.SetAdditionalData(a.Username)
if err := code.Secret.Encrypt(); err != nil {
return util.NewValidationError(fmt.Sprintf("mfa: unable to encrypt recovery code: %v", err))
}
}
}
return nil
}
func (a *Admin) validatePermissions() error {
a.Permissions = util.RemoveDuplicates(a.Permissions)
if len(a.Permissions) == 0 {
return util.NewValidationError("please grant some permissions to this admin")
}
if util.IsStringInSlice(PermAdminAny, a.Permissions) {
a.Permissions = []string{PermAdminAny}
}
for _, perm := range a.Permissions {
if !util.IsStringInSlice(perm, validAdminPerms) {
return util.NewValidationError(fmt.Sprintf("invalid permission: %#v", perm))
func (a *Admin) checkPassword() error {
if a.Password != "" && !strings.HasPrefix(a.Password, argonPwdPrefix) {
pwd, err := argon2id.CreateHash(a.Password, argon2Params)
if err != nil {
return err
}
a.Password = pwd
}
return nil
}
func (a *Admin) validate() error {
a.SetEmptySecretsIfNil()
if a.Username == "" {
return util.NewValidationError("username is mandatory")
return &ValidationError{err: "username is mandatory"}
}
if a.Password == "" {
return util.NewValidationError("please set a password")
}
if a.hasRedactedSecret() {
return util.NewValidationError("cannot save an admin with a redacted secret")
}
if err := a.Filters.TOTPConfig.validate(a.Username); err != nil {
return err
}
if err := a.validateRecoveryCodes(); err != nil {
return err
return &ValidationError{err: "please set a password"}
}
if !config.SkipNaturalKeysValidation && !usernameRegex.MatchString(a.Username) {
return util.NewValidationError(fmt.Sprintf("username %#v is not valid, the following characters are allowed: a-zA-Z0-9-_.~", a.Username))
return &ValidationError{err: fmt.Sprintf("username %#v is not valid, the following characters are allowed: a-zA-Z0-9-_.~", a.Username)}
}
if err := a.hashPassword(); err != nil {
if err := a.checkPassword(); err != nil {
return err
}
if err := a.validatePermissions(); err != nil {
return err
a.Permissions = utils.RemoveDuplicates(a.Permissions)
if len(a.Permissions) == 0 {
return &ValidationError{err: "please grant some permissions to this admin"}
}
if utils.IsStringInSlice(PermAdminAny, a.Permissions) {
a.Permissions = []string{PermAdminAny}
}
for _, perm := range a.Permissions {
if !utils.IsStringInSlice(perm, validAdminPerms) {
return &ValidationError{err: fmt.Sprintf("invalid permission: %#v", perm)}
}
}
if a.Email != "" && !emailRegex.MatchString(a.Email) {
return util.NewValidationError(fmt.Sprintf("email %#v is not valid", a.Email))
return &ValidationError{err: fmt.Sprintf("email %#v is not valid", a.Email)}
}
a.Filters.AllowList = util.RemoveDuplicates(a.Filters.AllowList)
for _, IPMask := range a.Filters.AllowList {
_, _, err := net.ParseCIDR(IPMask)
if err != nil {
return util.NewValidationError(fmt.Sprintf("could not parse allow list entry %#v : %v", IPMask, err))
return &ValidationError{err: fmt.Sprintf("could not parse allow list entry %#v : %v", IPMask, err)}
}
}
@@ -233,17 +113,7 @@ func (a *Admin) validate() error {
// CheckPassword verifies the admin password
func (a *Admin) CheckPassword(password string) (bool, error) {
if strings.HasPrefix(a.Password, bcryptPwdPrefix) {
if err := bcrypt.CompareHashAndPassword([]byte(a.Password), []byte(password)); err != nil {
return false, ErrInvalidCredentials
}
return true, nil
}
match, err := argon2id.ComparePasswordAndHash(password, a.Password)
if !match || err != nil {
return false, ErrInvalidCredentials
}
return match, err
return argon2id.ComparePasswordAndHash(password, a.Password)
}
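
To make the prefix-based dispatch above concrete, here is a small, self-contained sketch that round-trips a password through both supported hash formats; it assumes only the argon2id and bcrypt packages already used by this file:

package main

import (
    "fmt"
    "strings"

    "github.com/alexedwards/argon2id"
    "golang.org/x/crypto/bcrypt"
)

// checkPassword mirrors the dispatch above: hashes starting with the
// bcrypt prefix ("$2") go through bcrypt, everything else is treated
// as an argon2id hash.
func checkPassword(hash, password string) (bool, error) {
    if strings.HasPrefix(hash, "$2") {
        if err := bcrypt.CompareHashAndPassword([]byte(hash), []byte(password)); err != nil {
            return false, err
        }
        return true, nil
    }
    return argon2id.ComparePasswordAndHash(password, hash)
}

func main() {
    argonHash, _ := argon2id.CreateHash("s3cret", argon2id.DefaultParams)
    bcryptHash, _ := bcrypt.GenerateFromPassword([]byte("s3cret"), bcrypt.DefaultCost)
    for _, h := range []string{argonHash, string(bcryptHash)} {
        match, err := checkPassword(h, "s3cret")
        fmt.Println(match, err) // true <nil> in both cases
    }
}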
// CanLoginFromIP returns true if login from the given IP is allowed
@@ -268,21 +138,10 @@ func (a *Admin) CanLoginFromIP(ip string) bool {
return false
}
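
Most of CanLoginFromIP's body falls outside this hunk. For reference, a self-contained sketch of a conventional CIDR allow-list check (illustrative only; SFTPGo's exact logic is in the elided lines):

package main

import (
    "fmt"
    "net"
)

// canLoginFromIP sketches the allow-list semantics suggested above:
// an empty list allows every IP, otherwise the login IP must fall
// inside at least one of the configured CIDR masks.
func canLoginFromIP(ip string, allowList []string) bool {
    if len(allowList) == 0 {
        return true
    }
    parsed := net.ParseIP(ip)
    if parsed == nil {
        return false
    }
    for _, mask := range allowList {
        _, ipNet, err := net.ParseCIDR(mask)
        if err == nil && ipNet.Contains(parsed) {
            return true
        }
    }
    return false
}

func main() {
    fmt.Println(canLoginFromIP("192.168.1.10", []string{"192.168.1.0/24"})) // true
    fmt.Println(canLoginFromIP("10.0.0.1", []string{"192.168.1.0/24"}))     // false
}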
// CanLogin returns an error if the login is not allowed
func (a *Admin) CanLogin(ip string) error {
func (a *Admin) checkUserAndPass(password, ip string) error {
if a.Status != 1 {
return fmt.Errorf("admin %#v is disabled", a.Username)
}
if !a.CanLoginFromIP(ip) {
return fmt.Errorf("login from IP %v not allowed", ip)
}
return nil
}
func (a *Admin) checkUserAndPass(password, ip string) error {
if err := a.CanLogin(ip); err != nil {
return err
}
if a.Password == "" || password == "" {
return errors.New("credentials cannot be null or empty")
}
@@ -293,60 +152,23 @@ func (a *Admin) checkUserAndPass(password, ip string) error {
if !match {
return ErrInvalidCredentials
}
return nil
}
// RenderAsJSON implements the renderer interface used within plugins
func (a *Admin) RenderAsJSON(reload bool) ([]byte, error) {
if reload {
admin, err := provider.adminExists(a.Username)
if err != nil {
providerLog(logger.LevelWarn, "unable to reload admin before rendering as json: %v", err)
return nil, err
}
admin.HideConfidentialData()
return json.Marshal(admin)
if !a.CanLoginFromIP(ip) {
return fmt.Errorf("login from IP %v not allowed", ip)
}
a.HideConfidentialData()
return json.Marshal(a)
return nil
}
// HideConfidentialData hides admin confidential data
func (a *Admin) HideConfidentialData() {
a.Password = ""
if a.Filters.TOTPConfig.Secret != nil {
a.Filters.TOTPConfig.Secret.Hide()
}
for _, code := range a.Filters.RecoveryCodes {
if code.Secret != nil {
code.Secret.Hide()
}
}
a.SetNilSecretsIfEmpty()
}
// SetEmptySecretsIfNil sets the secrets to empty if nil
func (a *Admin) SetEmptySecretsIfNil() {
if a.Filters.TOTPConfig.Secret == nil {
a.Filters.TOTPConfig.Secret = kms.NewEmptySecret()
}
}
// SetNilSecretsIfEmpty sets the secrets to nil if empty.
// This is useful before rendering as JSON so the empty fields
// will not be serialized.
func (a *Admin) SetNilSecretsIfEmpty() {
if a.Filters.TOTPConfig.Secret != nil && a.Filters.TOTPConfig.Secret.IsEmpty() {
a.Filters.TOTPConfig.Secret = nil
}
}
// HasPermission returns true if the admin has the specified permission
func (a *Admin) HasPermission(perm string) bool {
if util.IsStringInSlice(PermAdminAny, a.Permissions) {
if utils.IsStringInSlice(PermAdminAny, a.Permissions) {
return true
}
return util.IsStringInSlice(perm, a.Permissions)
return utils.IsStringInSlice(perm, a.Permissions)
}
// GetPermissionsAsString returns permission as string
@@ -366,19 +188,14 @@ func (a *Admin) GetValidPerms() []string {
// GetInfoString returns admin's info as string.
func (a *Admin) GetInfoString() string {
var result strings.Builder
var result string
if a.Email != "" {
result.WriteString(fmt.Sprintf("Email: %v. ", a.Email))
result = fmt.Sprintf("Email: %v. ", a.Email)
}
if len(a.Filters.AllowList) > 0 {
result.WriteString(fmt.Sprintf("Allowed IP/Mask: %v. ", len(a.Filters.AllowList)))
result += fmt.Sprintf("Allowed IP/Mask: %v. ", len(a.Filters.AllowList))
}
return result.String()
}
// CanManageMFA returns true if the admin can add a multi-factor authentication configuration
func (a *Admin) CanManageMFA() bool {
return len(mfa.GetAvailableTOTPConfigs()) > 0
return result
}
// GetSignature returns a signature for this admin.
@@ -391,26 +208,11 @@ func (a *Admin) GetSignature() string {
}
func (a *Admin) getACopy() Admin {
a.SetEmptySecretsIfNil()
permissions := make([]string, len(a.Permissions))
copy(permissions, a.Permissions)
filters := AdminFilters{}
filters.AllowList = make([]string, len(a.Filters.AllowList))
filters.AllowAPIKeyAuth = a.Filters.AllowAPIKeyAuth
filters.TOTPConfig.Enabled = a.Filters.TOTPConfig.Enabled
filters.TOTPConfig.ConfigName = a.Filters.TOTPConfig.ConfigName
filters.TOTPConfig.Secret = a.Filters.TOTPConfig.Secret.Clone()
copy(filters.AllowList, a.Filters.AllowList)
filters.RecoveryCodes = make([]sdk.RecoveryCode, 0)
for _, code := range a.Filters.RecoveryCodes {
if code.Secret == nil {
code.Secret = kms.NewEmptySecret()
}
filters.RecoveryCodes = append(filters.RecoveryCodes, sdk.RecoveryCode{
Secret: code.Secret.Clone(),
Used: code.Used,
})
}
return Admin{
ID: a.ID,
@@ -421,22 +223,13 @@ func (a *Admin) getACopy() Admin {
Permissions: permissions,
Filters: filters,
AdditionalInfo: a.AdditionalInfo,
Description: a.Description,
LastLogin: a.LastLogin,
CreatedAt: a.CreatedAt,
UpdatedAt: a.UpdatedAt,
}
}
func (a *Admin) setFromEnv() error {
envUsername := strings.TrimSpace(os.Getenv("SFTPGO_DEFAULT_ADMIN_USERNAME"))
envPassword := strings.TrimSpace(os.Getenv("SFTPGO_DEFAULT_ADMIN_PASSWORD"))
if envUsername == "" || envPassword == "" {
return errors.New(`to create the default admin you need to set the env vars "SFTPGO_DEFAULT_ADMIN_USERNAME" and "SFTPGO_DEFAULT_ADMIN_PASSWORD"`)
}
a.Username = envUsername
a.Password = envPassword
// setDefaults sets the appropriate value for the default admin
func (a *Admin) setDefaults() {
a.Username = "admin"
a.Password = "password"
a.Status = 1
a.Permissions = []string{PermAdminAny}
return nil
}


@@ -1,186 +0,0 @@
package dataprovider
import (
"encoding/json"
"fmt"
"strings"
"time"
"github.com/alexedwards/argon2id"
"golang.org/x/crypto/bcrypt"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
// APIKeyScope defines the supported API key scopes
type APIKeyScope int
// Supported API key scopes
const (
// the API key will be used for an admin
APIKeyScopeAdmin APIKeyScope = iota + 1
// the API key will be used for a user
APIKeyScopeUser
)
// APIKey defines a SFTPGo API key.
// API keys can be used as authentication alternative to short lived tokens
// for REST API
type APIKey struct {
// Database unique identifier
ID int64 `json:"-"`
// Unique key identifier, used for key lookups.
// The generated key is in the format `KeyID.hash(Key)` so we can split
// and lookup by KeyID and then verify if the key matches the recorded hash
KeyID string `json:"id"`
// User friendly key name
Name string `json:"name"`
// we store the hash of the key, this is just like a password
Key string `json:"key,omitempty"`
Scope APIKeyScope `json:"scope"`
CreatedAt int64 `json:"created_at"`
UpdatedAt int64 `json:"updated_at"`
// 0 means never used
LastUseAt int64 `json:"last_use_at,omitempty"`
// 0 means never expire
ExpiresAt int64 `json:"expires_at,omitempty"`
Description string `json:"description,omitempty"`
// Username associated with this API key.
// If empty and the scope is APIKeyScopeUser the key is valid for any user
User string `json:"user,omitempty"`
// Admin username associated with this API key.
// If empty and the scope is APIKeyScopeAdmin the key is valid for any admin
Admin string `json:"admin,omitempty"`
// these fields are for internal use
userID int64
adminID int64
plainKey string
}
func (k *APIKey) getACopy() APIKey {
return APIKey{
ID: k.ID,
KeyID: k.KeyID,
Name: k.Name,
Key: k.Key,
Scope: k.Scope,
CreatedAt: k.CreatedAt,
UpdatedAt: k.UpdatedAt,
LastUseAt: k.LastUseAt,
ExpiresAt: k.ExpiresAt,
Description: k.Description,
User: k.User,
Admin: k.Admin,
userID: k.userID,
adminID: k.adminID,
}
}
// RenderAsJSON implements the renderer interface used within plugins
func (k *APIKey) RenderAsJSON(reload bool) ([]byte, error) {
if reload {
apiKey, err := provider.apiKeyExists(k.KeyID)
if err != nil {
providerLog(logger.LevelWarn, "unable to reload api key before rendering as json: %v", err)
return nil, err
}
apiKey.HideConfidentialData()
return json.Marshal(apiKey)
}
k.HideConfidentialData()
return json.Marshal(k)
}
// HideConfidentialData hides API key confidential data
func (k *APIKey) HideConfidentialData() {
k.Key = ""
}
func (k *APIKey) hashKey() error {
if k.Key != "" && !util.IsStringPrefixInSlice(k.Key, internalHashPwdPrefixes) {
if config.PasswordHashing.Algo == HashingAlgoBcrypt {
hashed, err := bcrypt.GenerateFromPassword([]byte(k.Key), config.PasswordHashing.BcryptOptions.Cost)
if err != nil {
return err
}
k.Key = string(hashed)
} else {
hashed, err := argon2id.CreateHash(k.Key, argon2Params)
if err != nil {
return err
}
k.Key = hashed
}
}
return nil
}
func (k *APIKey) generateKey() {
if k.KeyID != "" || k.Key != "" {
return
}
k.KeyID = util.GenerateUniqueID()
k.Key = util.GenerateUniqueID()
k.plainKey = k.Key
}
// DisplayKey returns the key to show to the user
func (k *APIKey) DisplayKey() string {
return fmt.Sprintf("%v.%v", k.KeyID, k.plainKey)
}
func (k *APIKey) validate() error {
if k.Name == "" {
return util.NewValidationError("name is mandatory")
}
if k.Scope != APIKeyScopeAdmin && k.Scope != APIKeyScopeUser {
return util.NewValidationError(fmt.Sprintf("invalid scope: %v", k.Scope))
}
k.generateKey()
if err := k.hashKey(); err != nil {
return err
}
if k.User != "" && k.Admin != "" {
return util.NewValidationError("an API key can be related to a user or an admin, not both")
}
if k.Scope == APIKeyScopeAdmin {
k.User = ""
}
if k.Scope == APIKeyScopeUser {
k.Admin = ""
}
if k.User != "" {
_, err := provider.userExists(k.User)
if err != nil {
return util.NewValidationError(fmt.Sprintf("unable to check API key user %v: %v", k.User, err))
}
}
if k.Admin != "" {
_, err := provider.adminExists(k.Admin)
if err != nil {
return util.NewValidationError(fmt.Sprintf("unable to check API key admin %v: %v", k.Admin, err))
}
}
return nil
}
// Authenticate tries to authenticate the provided plain key
func (k *APIKey) Authenticate(plainKey string) error {
if k.ExpiresAt > 0 && k.ExpiresAt < util.GetTimeAsMsSinceEpoch(time.Now()) {
return fmt.Errorf("API key %#v is expired, expiration timestamp: %v current timestamp: %v", k.KeyID,
k.ExpiresAt, util.GetTimeAsMsSinceEpoch(time.Now()))
}
if strings.HasPrefix(k.Key, bcryptPwdPrefix) {
if err := bcrypt.CompareHashAndPassword([]byte(k.Key), []byte(plainKey)); err != nil {
return ErrInvalidCredentials
}
} else if strings.HasPrefix(k.Key, argonPwdPrefix) {
match, err := argon2id.ComparePasswordAndHash(plainKey, k.Key)
if err != nil || !match {
return ErrInvalidCredentials
}
}
return nil
}
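
As a usage note for the `KeyID.hash(Key)` scheme: a client presents the value returned by DisplayKey, and the server splits it on the first dot to locate the record before calling Authenticate. The helper below is hypothetical and only illustrates that flow:

// splitDisplayKey is a hypothetical helper: given "KeyID.plainKey" as
// presented by a client, it returns both halves so the caller can look
// the record up by KeyID and then run apiKey.Authenticate(plainKey).
func splitDisplayKey(displayKey string) (keyID, plainKey string, err error) {
    parts := strings.SplitN(displayKey, ".", 2)
    if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
        return "", "", fmt.Errorf("malformed API key")
    }
    return parts[0], parts[1], nil
}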

File diff suppressed because it is too large.


@@ -1,4 +1,3 @@
//go:build nobolt
// +build nobolt
package dataprovider
@@ -6,7 +5,7 @@ package dataprovider
import (
"errors"
"github.com/drakkan/sftpgo/v2/version"
"github.com/drakkan/sftpgo/version"
)
func init() {


@@ -1,62 +0,0 @@
package dataprovider
import (
"sync"
)
var cachedPasswords passwordsCache
func init() {
cachedPasswords = passwordsCache{
cache: make(map[string]string),
}
}
type passwordsCache struct {
sync.RWMutex
cache map[string]string
}
func (c *passwordsCache) Add(username, password string) {
if !config.PasswordCaching || username == "" || password == "" {
return
}
c.Lock()
defer c.Unlock()
c.cache[username] = password
}
func (c *passwordsCache) Remove(username string) {
if !config.PasswordCaching {
return
}
c.Lock()
defer c.Unlock()
delete(c.cache, username)
}
// Check returns whether the user is found and whether the password matches
func (c *passwordsCache) Check(username, password string) (bool, bool) {
if username == "" || password == "" {
return false, false
}
c.RLock()
defer c.RUnlock()
pwd, ok := c.cache[username]
if !ok {
return false, false
}
return true, pwd == password
}
// CheckCachedPassword is a utility method used only in test cases
func CheckCachedPassword(username, password string) (bool, bool) {
return cachedPasswords.Check(username, password)
}
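
For context, a self-contained sketch of the read-through pattern this cache enables; the locking and config gate are stripped, and verify is a hypothetical stand-in for the expensive argon2id/bcrypt comparison:

package main

import "fmt"

// verify is a hypothetical stand-in for the real hash comparison.
func verify(username, password string) bool { return password == "s3cret" }

// cache maps username -> last successfully verified plaintext password,
// mirroring passwordsCache above without its locking and config checks.
var cache = map[string]string{}

func authenticate(username, password string) bool {
    if cached, ok := cache[username]; ok && cached == password {
        return true // hit: skip the expensive hash comparison
    }
    if verify(username, password) { // miss or changed password: slow path
        cache[username] = password
        return true
    }
    return false
}

func main() {
    fmt.Println(authenticate("alice", "s3cret")) // slow path, then cached
    fmt.Println(authenticate("alice", "s3cret")) // fast path
}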


@@ -1,149 +0,0 @@
package dataprovider
import (
"sync"
"time"
"golang.org/x/net/webdav"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
var (
webDAVUsersCache *usersCache
)
func init() {
webDAVUsersCache = &usersCache{
users: map[string]CachedUser{},
}
}
// InitializeWebDAVUserCache initializes the cache for webdav users
func InitializeWebDAVUserCache(maxSize int) {
webDAVUsersCache = &usersCache{
users: map[string]CachedUser{},
maxSize: maxSize,
}
}
// CachedUser adds fields useful for caching to a SFTPGo user
type CachedUser struct {
User User
Expiration time.Time
Password string
LockSystem webdav.LockSystem
}
// IsExpired returns true if the cached user is expired
func (c *CachedUser) IsExpired() bool {
if c.Expiration.IsZero() {
return false
}
return c.Expiration.Before(time.Now())
}
type usersCache struct {
sync.RWMutex
users map[string]CachedUser
maxSize int
}
func (cache *usersCache) updateLastLogin(username string) {
cache.Lock()
defer cache.Unlock()
if cachedUser, ok := cache.users[username]; ok {
cachedUser.User.LastLogin = util.GetTimeAsMsSinceEpoch(time.Now())
cache.users[username] = cachedUser
}
}
// swap updates an existing cached user with the specified one
// preserving the lock fs if possible
func (cache *usersCache) swap(user *User) {
cache.Lock()
defer cache.Unlock()
if cachedUser, ok := cache.users[user.Username]; ok {
if cachedUser.User.Password != user.Password {
providerLog(logger.LevelDebug, "current password different from the cached one for user %#v, removing from cache",
user.Username)
// the password changed, the cached user is no longer valid
delete(cache.users, user.Username)
return
}
if cachedUser.User.isFsEqual(user) {
// the updated user has the same fs as the cached one, we can preserve the lock filesystem
providerLog(logger.LevelDebug, "current password and fs unchanged for for user %#v, swap cached one",
user.Username)
cachedUser.User = *user
cache.users[user.Username] = cachedUser
} else {
// filesystem changed, the cached user is no longer valid
providerLog(logger.LevelDebug, "current fs different from the cached one for user %#v, removing from cache",
user.Username)
delete(cache.users, user.Username)
}
}
}
func (cache *usersCache) add(cachedUser *CachedUser) {
cache.Lock()
defer cache.Unlock()
if cache.maxSize > 0 && len(cache.users) >= cache.maxSize {
var userToRemove string
var expirationTime time.Time
for k, v := range cache.users {
if userToRemove == "" {
userToRemove = k
expirationTime = v.Expiration
continue
}
expireTime := v.Expiration
if !expireTime.IsZero() && expireTime.Before(expirationTime) {
userToRemove = k
expirationTime = expireTime
}
}
delete(cache.users, userToRemove)
}
if cachedUser.User.Username != "" {
cache.users[cachedUser.User.Username] = *cachedUser
}
}
func (cache *usersCache) remove(username string) {
cache.Lock()
defer cache.Unlock()
delete(cache.users, username)
}
func (cache *usersCache) get(username string) (*CachedUser, bool) {
cache.RLock()
defer cache.RUnlock()
cachedUser, ok := cache.users[username]
return &cachedUser, ok
}
// CacheWebDAVUser add a user to the WebDAV cache
func CacheWebDAVUser(cachedUser *CachedUser) {
webDAVUsersCache.add(cachedUser)
}
// GetCachedWebDAVUser returns a previously cached WebDAV user
func GetCachedWebDAVUser(username string) (*CachedUser, bool) {
return webDAVUsersCache.get(username)
}
// RemoveCachedWebDAVUser removes a cached WebDAV user
func RemoveCachedWebDAVUser(username string) {
webDAVUsersCache.remove(username)
}
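
A sketch of how a WebDAV handler could drive this cache; authenticateUser and cacheTTL are hypothetical stand-ins, and the real wiring in the webdavd package may differ:

func lookupWebDAVUser(username, password string) (User, error) {
    if cached, ok := GetCachedWebDAVUser(username); ok {
        if !cached.IsExpired() && cached.Password == password {
            return cached.User, nil // fresh hit: reuse the cached user and lock fs
        }
        RemoveCachedWebDAVUser(username) // expired or password changed
    }
    user, err := authenticateUser(username, password) // hypothetical provider check
    if err != nil {
        return user, err
    }
    CacheWebDAVUser(&CachedUser{
        User:       user,
        Password:   password,
        Expiration: time.Now().Add(cacheTTL), // cacheTTL: hypothetical setting
    })
    return user, nil
}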

dataprovider/compat.go Normal file

@@ -0,0 +1,358 @@
package dataprovider
import (
"encoding/json"
"fmt"
"io/ioutil"
"path/filepath"
"github.com/drakkan/sftpgo/kms"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/vfs"
)
type compatUserV2 struct {
ID int64 `json:"id"`
Username string `json:"username"`
Password string `json:"password,omitempty"`
PublicKeys []string `json:"public_keys,omitempty"`
HomeDir string `json:"home_dir"`
UID int `json:"uid"`
GID int `json:"gid"`
MaxSessions int `json:"max_sessions"`
QuotaSize int64 `json:"quota_size"`
QuotaFiles int `json:"quota_files"`
Permissions []string `json:"permissions"`
UsedQuotaSize int64 `json:"used_quota_size"`
UsedQuotaFiles int `json:"used_quota_files"`
LastQuotaUpdate int64 `json:"last_quota_update"`
UploadBandwidth int64 `json:"upload_bandwidth"`
DownloadBandwidth int64 `json:"download_bandwidth"`
ExpirationDate int64 `json:"expiration_date"`
LastLogin int64 `json:"last_login"`
Status int `json:"status"`
}
type compatS3FsConfigV4 struct {
Bucket string `json:"bucket,omitempty"`
KeyPrefix string `json:"key_prefix,omitempty"`
Region string `json:"region,omitempty"`
AccessKey string `json:"access_key,omitempty"`
AccessSecret string `json:"access_secret,omitempty"`
Endpoint string `json:"endpoint,omitempty"`
StorageClass string `json:"storage_class,omitempty"`
UploadPartSize int64 `json:"upload_part_size,omitempty"`
UploadConcurrency int `json:"upload_concurrency,omitempty"`
}
type compatGCSFsConfigV4 struct {
Bucket string `json:"bucket,omitempty"`
KeyPrefix string `json:"key_prefix,omitempty"`
CredentialFile string `json:"-"`
Credentials []byte `json:"credentials,omitempty"`
AutomaticCredentials int `json:"automatic_credentials,omitempty"`
StorageClass string `json:"storage_class,omitempty"`
}
type compatAzBlobFsConfigV4 struct {
Container string `json:"container,omitempty"`
AccountName string `json:"account_name,omitempty"`
AccountKey string `json:"account_key,omitempty"`
Endpoint string `json:"endpoint,omitempty"`
SASURL string `json:"sas_url,omitempty"`
KeyPrefix string `json:"key_prefix,omitempty"`
UploadPartSize int64 `json:"upload_part_size,omitempty"`
UploadConcurrency int `json:"upload_concurrency,omitempty"`
UseEmulator bool `json:"use_emulator,omitempty"`
AccessTier string `json:"access_tier,omitempty"`
}
type compatFilesystemV4 struct {
Provider FilesystemProvider `json:"provider"`
S3Config compatS3FsConfigV4 `json:"s3config,omitempty"`
GCSConfig compatGCSFsConfigV4 `json:"gcsconfig,omitempty"`
AzBlobConfig compatAzBlobFsConfigV4 `json:"azblobconfig,omitempty"`
}
type compatUserV4 struct {
ID int64 `json:"id"`
Status int `json:"status"`
Username string `json:"username"`
ExpirationDate int64 `json:"expiration_date"`
Password string `json:"password,omitempty"`
PublicKeys []string `json:"public_keys,omitempty"`
HomeDir string `json:"home_dir"`
VirtualFolders []vfs.VirtualFolder `json:"virtual_folders,omitempty"`
UID int `json:"uid"`
GID int `json:"gid"`
MaxSessions int `json:"max_sessions"`
QuotaSize int64 `json:"quota_size"`
QuotaFiles int `json:"quota_files"`
Permissions map[string][]string `json:"permissions"`
UsedQuotaSize int64 `json:"used_quota_size"`
UsedQuotaFiles int `json:"used_quota_files"`
LastQuotaUpdate int64 `json:"last_quota_update"`
UploadBandwidth int64 `json:"upload_bandwidth"`
DownloadBandwidth int64 `json:"download_bandwidth"`
LastLogin int64 `json:"last_login"`
Filters UserFilters `json:"filters"`
FsConfig compatFilesystemV4 `json:"filesystem"`
}
type backupDataV4Compat struct {
Users []compatUserV4 `json:"users"`
Folders []vfs.BaseVirtualFolder `json:"folders"`
}
func createUserFromV4(u compatUserV4, fsConfig Filesystem) User {
user := User{
ID: u.ID,
Status: u.Status,
Username: u.Username,
ExpirationDate: u.ExpirationDate,
Password: u.Password,
PublicKeys: u.PublicKeys,
HomeDir: u.HomeDir,
VirtualFolders: u.VirtualFolders,
UID: u.UID,
GID: u.GID,
MaxSessions: u.MaxSessions,
QuotaSize: u.QuotaSize,
QuotaFiles: u.QuotaFiles,
Permissions: u.Permissions,
UsedQuotaSize: u.UsedQuotaSize,
UsedQuotaFiles: u.UsedQuotaFiles,
LastQuotaUpdate: u.LastQuotaUpdate,
UploadBandwidth: u.UploadBandwidth,
DownloadBandwidth: u.DownloadBandwidth,
LastLogin: u.LastLogin,
Filters: u.Filters,
}
user.FsConfig = fsConfig
user.SetEmptySecretsIfNil()
return user
}
func convertUserToV4(u User, fsConfig compatFilesystemV4) compatUserV4 {
user := compatUserV4{
ID: u.ID,
Status: u.Status,
Username: u.Username,
ExpirationDate: u.ExpirationDate,
Password: u.Password,
PublicKeys: u.PublicKeys,
HomeDir: u.HomeDir,
VirtualFolders: u.VirtualFolders,
UID: u.UID,
GID: u.GID,
MaxSessions: u.MaxSessions,
QuotaSize: u.QuotaSize,
QuotaFiles: u.QuotaFiles,
Permissions: u.Permissions,
UsedQuotaSize: u.UsedQuotaSize,
UsedQuotaFiles: u.UsedQuotaFiles,
LastQuotaUpdate: u.LastQuotaUpdate,
UploadBandwidth: u.UploadBandwidth,
DownloadBandwidth: u.DownloadBandwidth,
LastLogin: u.LastLogin,
Filters: u.Filters,
}
user.FsConfig = fsConfig
return user
}
func getCGSCredentialsFromV4(config compatGCSFsConfigV4) (*kms.Secret, error) {
secret := kms.NewEmptySecret()
var err error
if len(config.Credentials) > 0 {
secret = kms.NewPlainSecret(string(config.Credentials))
return secret, nil
}
if config.CredentialFile != "" {
creds, err := ioutil.ReadFile(config.CredentialFile)
if err != nil {
return secret, err
}
secret = kms.NewPlainSecret(string(creds))
return secret, nil
}
return secret, err
}
func getCGSCredentialsFromV6(config vfs.GCSFsConfig, username string) (string, error) {
if config.Credentials == nil {
config.Credentials = kms.NewEmptySecret()
}
if config.Credentials.IsEmpty() {
config.CredentialFile = filepath.Join(credentialsDirPath, fmt.Sprintf("%v_gcs_credentials.json",
username))
creds, err := ioutil.ReadFile(config.CredentialFile)
if err != nil {
return "", err
}
err = json.Unmarshal(creds, &config.Credentials)
if err != nil {
return "", err
}
}
if config.Credentials.IsEncrypted() {
err := config.Credentials.Decrypt()
if err != nil {
return "", err
}
// in V4 GCS credentials were not encrypted
return config.Credentials.GetPayload(), nil
}
return "", nil
}
func convertFsConfigToV4(fs Filesystem, username string) (compatFilesystemV4, error) {
fsV4 := compatFilesystemV4{
Provider: fs.Provider,
S3Config: compatS3FsConfigV4{},
AzBlobConfig: compatAzBlobFsConfigV4{},
GCSConfig: compatGCSFsConfigV4{},
}
switch fs.Provider {
case S3FilesystemProvider:
fsV4.S3Config = compatS3FsConfigV4{
Bucket: fs.S3Config.Bucket,
KeyPrefix: fs.S3Config.KeyPrefix,
Region: fs.S3Config.Region,
AccessKey: fs.S3Config.AccessKey,
AccessSecret: "",
Endpoint: fs.S3Config.Endpoint,
StorageClass: fs.S3Config.StorageClass,
UploadPartSize: fs.S3Config.UploadPartSize,
UploadConcurrency: fs.S3Config.UploadConcurrency,
}
if fs.S3Config.AccessSecret.IsEncrypted() {
err := fs.S3Config.AccessSecret.Decrypt()
if err != nil {
return fsV4, err
}
secretV4, err := utils.EncryptData(fs.S3Config.AccessSecret.GetPayload())
if err != nil {
return fsV4, err
}
fsV4.S3Config.AccessSecret = secretV4
}
case AzureBlobFilesystemProvider:
fsV4.AzBlobConfig = compatAzBlobFsConfigV4{
Container: fs.AzBlobConfig.Container,
AccountName: fs.AzBlobConfig.AccountName,
AccountKey: "",
Endpoint: fs.AzBlobConfig.Endpoint,
SASURL: fs.AzBlobConfig.SASURL,
KeyPrefix: fs.AzBlobConfig.KeyPrefix,
UploadPartSize: fs.AzBlobConfig.UploadPartSize,
UploadConcurrency: fs.AzBlobConfig.UploadConcurrency,
UseEmulator: fs.AzBlobConfig.UseEmulator,
AccessTier: fs.AzBlobConfig.AccessTier,
}
if fs.AzBlobConfig.AccountKey.IsEncrypted() {
err := fs.AzBlobConfig.AccountKey.Decrypt()
if err != nil {
return fsV4, err
}
secretV4, err := utils.EncryptData(fs.AzBlobConfig.AccountKey.GetPayload())
if err != nil {
return fsV4, err
}
fsV4.AzBlobConfig.AccountKey = secretV4
}
case GCSFilesystemProvider:
fsV4.GCSConfig = compatGCSFsConfigV4{
Bucket: fs.GCSConfig.Bucket,
KeyPrefix: fs.GCSConfig.KeyPrefix,
CredentialFile: fs.GCSConfig.CredentialFile,
AutomaticCredentials: fs.GCSConfig.AutomaticCredentials,
StorageClass: fs.GCSConfig.StorageClass,
}
if fs.GCSConfig.AutomaticCredentials == 0 {
creds, err := getCGSCredentialsFromV6(fs.GCSConfig, username)
if err != nil {
return fsV4, err
}
fsV4.GCSConfig.Credentials = []byte(creds)
}
default:
// a provider not supported in v4, the configuration will be lost
providerLog(logger.LevelWarn, "provider %v was not supported in v4, the configuration for the user %#v will be lost",
fs.Provider, username)
fsV4.Provider = 0
}
return fsV4, nil
}
func convertFsConfigFromV4(compatFs compatFilesystemV4, username string) (Filesystem, error) {
fsConfig := Filesystem{
Provider: compatFs.Provider,
S3Config: vfs.S3FsConfig{},
AzBlobConfig: vfs.AzBlobFsConfig{},
GCSConfig: vfs.GCSFsConfig{},
}
switch compatFs.Provider {
case S3FilesystemProvider:
fsConfig.S3Config = vfs.S3FsConfig{
Bucket: compatFs.S3Config.Bucket,
KeyPrefix: compatFs.S3Config.KeyPrefix,
Region: compatFs.S3Config.Region,
AccessKey: compatFs.S3Config.AccessKey,
AccessSecret: kms.NewEmptySecret(),
Endpoint: compatFs.S3Config.Endpoint,
StorageClass: compatFs.S3Config.StorageClass,
UploadPartSize: compatFs.S3Config.UploadPartSize,
UploadConcurrency: compatFs.S3Config.UploadConcurrency,
}
if compatFs.S3Config.AccessSecret != "" {
secret, err := kms.GetSecretFromCompatString(compatFs.S3Config.AccessSecret)
if err != nil {
providerLog(logger.LevelError, "unable to convert v4 filesystem for user %#v: %v", username, err)
return fsConfig, err
}
fsConfig.S3Config.AccessSecret = secret
}
case AzureBlobFilesystemProvider:
fsConfig.AzBlobConfig = vfs.AzBlobFsConfig{
Container: compatFs.AzBlobConfig.Container,
AccountName: compatFs.AzBlobConfig.AccountName,
AccountKey: kms.NewEmptySecret(),
Endpoint: compatFs.AzBlobConfig.Endpoint,
SASURL: compatFs.AzBlobConfig.SASURL,
KeyPrefix: compatFs.AzBlobConfig.KeyPrefix,
UploadPartSize: compatFs.AzBlobConfig.UploadPartSize,
UploadConcurrency: compatFs.AzBlobConfig.UploadConcurrency,
UseEmulator: compatFs.AzBlobConfig.UseEmulator,
AccessTier: compatFs.AzBlobConfig.AccessTier,
}
if compatFs.AzBlobConfig.AccountKey != "" {
secret, err := kms.GetSecretFromCompatString(compatFs.AzBlobConfig.AccountKey)
if err != nil {
providerLog(logger.LevelError, "unable to convert v4 filesystem for user %#v: %v", username, err)
return fsConfig, err
}
fsConfig.AzBlobConfig.AccountKey = secret
}
case GCSFilesystemProvider:
fsConfig.GCSConfig = vfs.GCSFsConfig{
Bucket: compatFs.GCSConfig.Bucket,
KeyPrefix: compatFs.GCSConfig.KeyPrefix,
CredentialFile: compatFs.GCSConfig.CredentialFile,
AutomaticCredentials: compatFs.GCSConfig.AutomaticCredentials,
StorageClass: compatFs.GCSConfig.StorageClass,
}
if compatFs.GCSConfig.AutomaticCredentials == 0 {
compatFs.GCSConfig.CredentialFile = filepath.Join(credentialsDirPath, fmt.Sprintf("%v_gcs_credentials.json",
username))
}
secret, err := getCGSCredentialsFromV4(compatFs.GCSConfig)
if err != nil {
providerLog(logger.LevelError, "unable to convert v4 filesystem for user %#v: %v", username, err)
return fsConfig, err
}
fsConfig.GCSConfig.Credentials = secret
}
return fsConfig, nil
}
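
Taken together, the converters above pair up during a v4 backup restore; a simplified sketch of the restore side (helper name hypothetical, error handling trimmed):

func restoreUserFromV4(compat compatUserV4) (User, error) {
    fsConfig, err := convertFsConfigFromV4(compat.FsConfig, compat.Username)
    if err != nil {
        return User{}, err
    }
    return createUserFromV4(compat, fsConfig), nil
}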

File diff suppressed because it is too large.


@@ -1,18 +1,18 @@
package dataprovider
import (
"crypto/x509"
"errors"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"sort"
"sync"
"time"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/vfs"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/vfs"
)
var (
@@ -36,14 +36,6 @@ type memoryProviderHandle struct {
admins map[string]Admin
// slice with ordered admins
adminsUsernames []string
// map for API keys, keyID is the key
apiKeys map[string]APIKey
// slice with ordered API keys KeyID
apiKeysIDs []string
// map for shares, shareID is the key
shares map[string]Share
// slice with ordered shares shareID
sharesIDs []string
}
// MemoryProvider auth provider for a memory store
@@ -52,8 +44,9 @@ type MemoryProvider struct {
}
func initializeMemoryProvider(basePath string) {
logSender = fmt.Sprintf("dataprovider_%v", MemoryDataProviderName)
configFile := ""
if util.IsFileInputValid(config.Name) {
if utils.IsFileInputValid(config.Name) {
configFile = config.Name
if !filepath.IsAbs(configFile) {
configFile = filepath.Join(basePath, configFile)
@@ -68,10 +61,6 @@ func initializeMemoryProvider(basePath string) {
vfoldersNames: []string{},
admins: make(map[string]Admin),
adminsUsernames: []string{},
apiKeys: make(map[string]APIKey),
apiKeysIDs: []string{},
shares: make(map[string]Share),
sharesIDs: []string{},
configFile: configFile,
},
}
@@ -100,23 +89,10 @@ func (p *MemoryProvider) close() error {
return nil
}
func (p *MemoryProvider) validateUserAndTLSCert(username, protocol string, tlsCert *x509.Certificate) (User, error) {
var user User
if tlsCert == nil {
return user, errors.New("TLS certificate cannot be null or empty")
}
user, err := p.userExists(username)
if err != nil {
providerLog(logger.LevelWarn, "error authenticating user %#v: %v", username, err)
return user, err
}
return checkUserAndTLSCertificate(&user, protocol, tlsCert)
}
func (p *MemoryProvider) validateUserAndPass(username, password, ip, protocol string) (User, error) {
var user User
if password == "" {
return user, errors.New("credentials cannot be null or empty")
return user, errors.New("Credentials cannot be null or empty")
}
user, err := p.userExists(username)
if err != nil {
@@ -129,7 +105,7 @@ func (p *MemoryProvider) validateUserAndPass(username, password, ip, protocol st
func (p *MemoryProvider) validateUserAndPubKey(username string, pubKey []byte) (User, string, error) {
var user User
if len(pubKey) == 0 {
return user, "", errors.New("credentials cannot be null or empty")
return user, "", errors.New("Credentials cannot be null or empty")
}
user, err := p.userExists(username)
if err != nil {
@@ -143,41 +119,12 @@ func (p *MemoryProvider) validateAdminAndPass(username, password, ip string) (Ad
admin, err := p.adminExists(username)
if err != nil {
providerLog(logger.LevelWarn, "error authenticating admin %#v: %v", username, err)
return admin, ErrInvalidCredentials
return admin, err
}
err = admin.checkUserAndPass(password, ip)
return admin, err
}
func (p *MemoryProvider) updateAPIKeyLastUse(keyID string) error {
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
if p.dbHandle.isClosed {
return errMemoryProviderClosed
}
apiKey, err := p.apiKeyExistsInternal(keyID)
if err != nil {
return err
}
apiKey.LastUseAt = util.GetTimeAsMsSinceEpoch(time.Now())
p.dbHandle.apiKeys[apiKey.KeyID] = apiKey
return nil
}
func (p *MemoryProvider) setUpdatedAt(username string) {
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
if p.dbHandle.isClosed {
return
}
user, err := p.userExistsInternal(username)
if err != nil {
return
}
user.UpdatedAt = util.GetTimeAsMsSinceEpoch(time.Now())
p.dbHandle.users[user.Username] = user
}
func (p *MemoryProvider) updateLastLogin(username string) error {
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
@@ -188,26 +135,11 @@ func (p *MemoryProvider) updateLastLogin(username string) error {
if err != nil {
return err
}
user.LastLogin = util.GetTimeAsMsSinceEpoch(time.Now())
user.LastLogin = utils.GetTimeAsMsSinceEpoch(time.Now())
p.dbHandle.users[user.Username] = user
return nil
}
func (p *MemoryProvider) updateAdminLastLogin(username string) error {
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
if p.dbHandle.isClosed {
return errMemoryProviderClosed
}
admin, err := p.adminExistsInternal(username)
if err != nil {
return err
}
admin.LastLogin = util.GetTimeAsMsSinceEpoch(time.Now())
p.dbHandle.admins[admin.Username] = admin
return nil
}
func (p *MemoryProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
@@ -226,7 +158,7 @@ func (p *MemoryProvider) updateQuota(username string, filesAdd int, sizeAdd int6
user.UsedQuotaSize += sizeAdd
user.UsedQuotaFiles += filesAdd
}
user.LastQuotaUpdate = util.GetTimeAsMsSinceEpoch(time.Now())
user.LastQuotaUpdate = utils.GetTimeAsMsSinceEpoch(time.Now())
providerLog(logger.LevelDebug, "quota updated for user %#v, files increment: %v size increment: %v is reset? %v",
username, filesAdd, sizeAdd, reset)
p.dbHandle.users[user.Username] = user
@@ -270,8 +202,6 @@ func (p *MemoryProvider) addUser(user *User) error {
user.UsedQuotaSize = 0
user.UsedQuotaFiles = 0
user.LastLogin = 0
user.CreatedAt = util.GetTimeAsMsSinceEpoch(time.Now())
user.UpdatedAt = util.GetTimeAsMsSinceEpoch(time.Now())
user.VirtualFolders = p.joinVirtualFoldersFields(user)
p.dbHandle.users[user.Username] = user.getACopy()
p.dbHandle.usernames = append(p.dbHandle.usernames, user.Username)
@@ -305,8 +235,6 @@ func (p *MemoryProvider) updateUser(user *User) error {
user.UsedQuotaSize = u.UsedQuotaSize
user.UsedQuotaFiles = u.UsedQuotaFiles
user.LastLogin = u.LastLogin
user.CreatedAt = u.CreatedAt
user.UpdatedAt = util.GetTimeAsMsSinceEpoch(time.Now())
user.ID = u.ID
// pre-login and external auth hook will use the passed *user so save a copy
p.dbHandle.users[user.Username] = user.getACopy()
@@ -333,8 +261,6 @@ func (p *MemoryProvider) deleteUser(user *User) error {
p.dbHandle.usernames = append(p.dbHandle.usernames, username)
}
sort.Strings(p.dbHandle.usernames)
p.deleteAPIKeysWithUser(user.Username)
p.deleteSharesWithUser(user.Username)
return nil
}
@@ -371,11 +297,6 @@ func (p *MemoryProvider) dumpFolders() ([]vfs.BaseVirtualFolder, error) {
return folders, nil
}
// memory provider cannot be shared, so we always return no recently updated users
func (p *MemoryProvider) getRecentlyUpdatedUsers(after int64) ([]User, error) {
return nil, nil
}
func (p *MemoryProvider) getUsers(limit int, offset int, order string) ([]User, error) {
users := make([]User, 0, limit)
var err error
@@ -396,7 +317,7 @@ func (p *MemoryProvider) getUsers(limit int, offset int, order string) ([]User,
}
u := p.dbHandle.users[username]
user := u.getACopy()
user.PrepareForRendering()
user.HideConfidentialData()
users = append(users, user)
if len(users) >= limit {
break
@@ -411,7 +332,7 @@ func (p *MemoryProvider) getUsers(limit int, offset int, order string) ([]User,
username := p.dbHandle.usernames[i]
u := p.dbHandle.users[username]
user := u.getACopy()
user.PrepareForRendering()
user.HideConfidentialData()
users = append(users, user)
if len(users) >= limit {
break
@@ -434,7 +355,7 @@ func (p *MemoryProvider) userExistsInternal(username string) (User, error) {
if val, ok := p.dbHandle.users[username]; ok {
return val.getACopy(), nil
}
return User{}, util.NewRecordNotFoundError(fmt.Sprintf("username %#v does not exist", username))
return User{}, &RecordNotFoundError{err: fmt.Sprintf("username %#v does not exist", username)}
}
func (p *MemoryProvider) addAdmin(admin *Admin) error {
@@ -452,9 +373,6 @@ func (p *MemoryProvider) addAdmin(admin *Admin) error {
return fmt.Errorf("admin %#v already exists", admin.Username)
}
admin.ID = p.getNextAdminID()
admin.CreatedAt = util.GetTimeAsMsSinceEpoch(time.Now())
admin.UpdatedAt = util.GetTimeAsMsSinceEpoch(time.Now())
admin.LastLogin = 0
p.dbHandle.admins[admin.Username] = admin.getACopy()
p.dbHandle.adminsUsernames = append(p.dbHandle.adminsUsernames, admin.Username)
sort.Strings(p.dbHandle.adminsUsernames)
@@ -476,9 +394,6 @@ func (p *MemoryProvider) updateAdmin(admin *Admin) error {
return err
}
admin.ID = a.ID
admin.CreatedAt = a.CreatedAt
admin.LastLogin = a.LastLogin
admin.UpdatedAt = util.GetTimeAsMsSinceEpoch(time.Now())
p.dbHandle.admins[admin.Username] = admin.getACopy()
return nil
}
@@ -501,7 +416,6 @@ func (p *MemoryProvider) deleteAdmin(admin *Admin) error {
p.dbHandle.adminsUsernames = append(p.dbHandle.adminsUsernames, username)
}
sort.Strings(p.dbHandle.adminsUsernames)
p.deleteAPIKeysWithAdmin(admin.Username)
return nil
}
@@ -518,7 +432,7 @@ func (p *MemoryProvider) adminExistsInternal(username string) (Admin, error) {
if val, ok := p.dbHandle.admins[username]; ok {
return val.getACopy(), nil
}
return Admin{}, util.NewRecordNotFoundError(fmt.Sprintf("admin %#v does not exist", username))
return Admin{}, &RecordNotFoundError{err: fmt.Sprintf("admin %#v does not exist", username)}
}
func (p *MemoryProvider) dumpAdmins() ([]Admin, error) {
@@ -600,7 +514,7 @@ func (p *MemoryProvider) updateFolderQuota(name string, filesAdd int, sizeAdd in
folder.UsedQuotaSize += sizeAdd
folder.UsedQuotaFiles += filesAdd
}
folder.LastQuotaUpdate = util.GetTimeAsMsSinceEpoch(time.Now())
folder.LastQuotaUpdate = utils.GetTimeAsMsSinceEpoch(time.Now())
p.dbHandle.vfolders[name] = folder
return nil
}
@@ -621,12 +535,15 @@ func (p *MemoryProvider) getUsedFolderQuota(name string) (int, int64, error) {
func (p *MemoryProvider) joinVirtualFoldersFields(user *User) []vfs.VirtualFolder {
var folders []vfs.VirtualFolder
for idx := range user.VirtualFolders {
folder := &user.VirtualFolders[idx]
f, err := p.addOrUpdateFolderInternal(&folder.BaseVirtualFolder, user.Username, 0, 0, 0)
for _, folder := range user.VirtualFolders {
f, err := p.addOrGetFolderInternal(folder.Name, folder.MappedPath, user.Username)
if err == nil {
folder.BaseVirtualFolder = f
folders = append(folders, *folder)
folder.UsedQuotaFiles = f.UsedQuotaFiles
folder.UsedQuotaSize = f.UsedQuotaSize
folder.LastQuotaUpdate = f.LastQuotaUpdate
folder.ID = f.ID
folder.MappedPath = f.MappedPath
folders = append(folders, folder)
}
}
return folders
@@ -648,35 +565,30 @@ func (p *MemoryProvider) removeUserFromFolderMapping(folderName, username string
func (p *MemoryProvider) updateFoldersMappingInternal(folder vfs.BaseVirtualFolder) {
p.dbHandle.vfolders[folder.Name] = folder
if !util.IsStringInSlice(folder.Name, p.dbHandle.vfoldersNames) {
if !utils.IsStringInSlice(folder.Name, p.dbHandle.vfoldersNames) {
p.dbHandle.vfoldersNames = append(p.dbHandle.vfoldersNames, folder.Name)
sort.Strings(p.dbHandle.vfoldersNames)
}
}
func (p *MemoryProvider) addOrUpdateFolderInternal(baseFolder *vfs.BaseVirtualFolder, username string, usedQuotaSize int64,
usedQuotaFiles int, lastQuotaUpdate int64) (vfs.BaseVirtualFolder, error) {
folder, err := p.folderExistsInternal(baseFolder.Name)
if err == nil {
// exists
folder.MappedPath = baseFolder.MappedPath
folder.Description = baseFolder.Description
folder.FsConfig = baseFolder.FsConfig.GetACopy()
if !util.IsStringInSlice(username, folder.Users) {
folder.Users = append(folder.Users, username)
func (p *MemoryProvider) addOrGetFolderInternal(folderName, folderMappedPath, username string) (vfs.BaseVirtualFolder, error) {
folder, err := p.folderExistsInternal(folderName)
if _, ok := err.(*RecordNotFoundError); ok {
folder := vfs.BaseVirtualFolder{
ID: p.getNextFolderID(),
Name: folderName,
MappedPath: folderMappedPath,
UsedQuotaSize: 0,
UsedQuotaFiles: 0,
LastQuotaUpdate: 0,
Users: []string{username},
}
p.updateFoldersMappingInternal(folder)
return folder, nil
}
if _, ok := err.(*util.RecordNotFoundError); ok {
folder = baseFolder.GetACopy()
folder.ID = p.getNextFolderID()
folder.UsedQuotaSize = usedQuotaSize
folder.UsedQuotaFiles = usedQuotaFiles
folder.LastQuotaUpdate = lastQuotaUpdate
folder.Users = []string{username}
if err == nil && !utils.IsStringInSlice(username, folder.Users) {
folder.Users = append(folder.Users, username)
p.updateFoldersMappingInternal(folder)
return folder, nil
}
return folder, err
}
@@ -685,7 +597,7 @@ func (p *MemoryProvider) folderExistsInternal(name string) (vfs.BaseVirtualFolde
if val, ok := p.dbHandle.vfolders[name]; ok {
return val, nil
}
return vfs.BaseVirtualFolder{}, util.NewRecordNotFoundError(fmt.Sprintf("folder %#v does not exist", name))
return vfs.BaseVirtualFolder{}, &RecordNotFoundError{err: fmt.Sprintf("folder %#v does not exist", name)}
}
func (p *MemoryProvider) getFolders(limit, offset int, order string) ([]vfs.BaseVirtualFolder, error) {
@@ -706,9 +618,7 @@ func (p *MemoryProvider) getFolders(limit, offset int, order string) ([]vfs.Base
if itNum <= offset {
continue
}
f := p.dbHandle.vfolders[name]
folder := f.GetACopy()
folder.PrepareForRendering()
folder := p.dbHandle.vfolders[name]
folders = append(folders, folder)
if len(folders) >= limit {
break
@@ -721,9 +631,7 @@ func (p *MemoryProvider) getFolders(limit, offset int, order string) ([]vfs.Base
continue
}
name := p.dbHandle.vfoldersNames[i]
f := p.dbHandle.vfolders[name]
folder := f.GetACopy()
folder.PrepareForRendering()
folder := p.dbHandle.vfolders[name]
folders = append(folders, folder)
if len(folders) >= limit {
break
@@ -739,11 +647,7 @@ func (p *MemoryProvider) getFolderByName(name string) (vfs.BaseVirtualFolder, er
if p.dbHandle.isClosed {
return vfs.BaseVirtualFolder{}, errMemoryProviderClosed
}
folder, err := p.folderExistsInternal(name)
if err != nil {
return vfs.BaseVirtualFolder{}, err
}
return folder.GetACopy(), nil
return p.folderExistsInternal(name)
}
func (p *MemoryProvider) addFolder(folder *vfs.BaseVirtualFolder) error {
@@ -791,22 +695,6 @@ func (p *MemoryProvider) updateFolder(folder *vfs.BaseVirtualFolder) error {
folder.UsedQuotaSize = f.UsedQuotaSize
folder.Users = f.Users
p.dbHandle.vfolders[folder.Name] = folder.GetACopy()
// now update the related users
for _, username := range folder.Users {
user, err := p.userExistsInternal(username)
if err == nil {
var folders []vfs.VirtualFolder
for idx := range user.VirtualFolders {
userFolder := &user.VirtualFolders[idx]
if folder.Name == userFolder.Name {
userFolder.BaseVirtualFolder = folder.GetACopy()
}
folders = append(folders, *userFolder)
}
user.VirtualFolders = folders
p.dbHandle.users[user.Username] = user
}
}
return nil
}
@@ -825,10 +713,9 @@ func (p *MemoryProvider) deleteFolder(folder *vfs.BaseVirtualFolder) error {
user, err := p.userExistsInternal(username)
if err == nil {
var folders []vfs.VirtualFolder
for idx := range user.VirtualFolders {
userFolder := &user.VirtualFolders[idx]
for _, userFolder := range user.VirtualFolders {
if folder.Name != userFolder.Name {
folders = append(folders, *userFolder)
folders = append(folders, userFolder)
}
}
user.VirtualFolders = folders
@@ -844,418 +731,6 @@ func (p *MemoryProvider) deleteFolder(folder *vfs.BaseVirtualFolder) error {
return nil
}
func (p *MemoryProvider) apiKeyExistsInternal(keyID string) (APIKey, error) {
if val, ok := p.dbHandle.apiKeys[keyID]; ok {
return val.getACopy(), nil
}
return APIKey{}, util.NewRecordNotFoundError(fmt.Sprintf("API key %#v does not exist", keyID))
}
func (p *MemoryProvider) apiKeyExists(keyID string) (APIKey, error) {
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
if p.dbHandle.isClosed {
return APIKey{}, errMemoryProviderClosed
}
return p.apiKeyExistsInternal(keyID)
}
func (p *MemoryProvider) addAPIKey(apiKey *APIKey) error {
err := apiKey.validate()
if err != nil {
return err
}
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
if p.dbHandle.isClosed {
return errMemoryProviderClosed
}
_, err = p.apiKeyExistsInternal(apiKey.KeyID)
if err == nil {
return fmt.Errorf("API key %#v already exists", apiKey.KeyID)
}
if apiKey.User != "" {
if _, err := p.userExistsInternal(apiKey.User); err != nil {
return util.NewValidationError(fmt.Sprintf("related user %#v does not exists", apiKey.User))
}
}
if apiKey.Admin != "" {
if _, err := p.adminExistsInternal(apiKey.Admin); err != nil {
return util.NewValidationError(fmt.Sprintf("related admin %#v does not exists", apiKey.User))
}
}
apiKey.CreatedAt = util.GetTimeAsMsSinceEpoch(time.Now())
apiKey.UpdatedAt = util.GetTimeAsMsSinceEpoch(time.Now())
apiKey.LastUseAt = 0
p.dbHandle.apiKeys[apiKey.KeyID] = apiKey.getACopy()
p.dbHandle.apiKeysIDs = append(p.dbHandle.apiKeysIDs, apiKey.KeyID)
sort.Strings(p.dbHandle.apiKeysIDs)
return nil
}
func (p *MemoryProvider) updateAPIKey(apiKey *APIKey) error {
err := apiKey.validate()
if err != nil {
return err
}
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
if p.dbHandle.isClosed {
return errMemoryProviderClosed
}
k, err := p.apiKeyExistsInternal(apiKey.KeyID)
if err != nil {
return err
}
if apiKey.User != "" {
if _, err := p.userExistsInternal(apiKey.User); err != nil {
return util.NewValidationError(fmt.Sprintf("related user %#v does not exists", apiKey.User))
}
}
if apiKey.Admin != "" {
if _, err := p.adminExistsInternal(apiKey.Admin); err != nil {
return util.NewValidationError(fmt.Sprintf("related admin %#v does not exists", apiKey.User))
}
}
apiKey.ID = k.ID
apiKey.KeyID = k.KeyID
apiKey.Key = k.Key
apiKey.CreatedAt = k.CreatedAt
apiKey.LastUseAt = k.LastUseAt
apiKey.UpdatedAt = util.GetTimeAsMsSinceEpoch(time.Now())
p.dbHandle.apiKeys[apiKey.KeyID] = apiKey.getACopy()
return nil
}
func (p *MemoryProvider) deleteAPIKey(apiKey *APIKey) error {
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
if p.dbHandle.isClosed {
return errMemoryProviderClosed
}
_, err := p.apiKeyExistsInternal(apiKey.KeyID)
if err != nil {
return err
}
delete(p.dbHandle.apiKeys, apiKey.KeyID)
p.updateAPIKeysOrdering()
return nil
}
func (p *MemoryProvider) getAPIKeys(limit int, offset int, order string) ([]APIKey, error) {
apiKeys := make([]APIKey, 0, limit)
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
if p.dbHandle.isClosed {
return apiKeys, errMemoryProviderClosed
}
if limit <= 0 {
return apiKeys, nil
}
itNum := 0
if order == OrderDESC {
for i := len(p.dbHandle.apiKeysIDs) - 1; i >= 0; i-- {
itNum++
if itNum <= offset {
continue
}
keyID := p.dbHandle.apiKeysIDs[i]
k := p.dbHandle.apiKeys[keyID]
apiKey := k.getACopy()
apiKey.HideConfidentialData()
apiKeys = append(apiKeys, apiKey)
if len(apiKeys) >= limit {
break
}
}
} else {
for _, keyID := range p.dbHandle.apiKeysIDs {
itNum++
if itNum <= offset {
continue
}
k := p.dbHandle.apiKeys[keyID]
apiKey := k.getACopy()
apiKey.HideConfidentialData()
apiKeys = append(apiKeys, apiKey)
if len(apiKeys) >= limit {
break
}
}
}
return apiKeys, nil
}
func (p *MemoryProvider) dumpAPIKeys() ([]APIKey, error) {
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
apiKeys := make([]APIKey, 0, len(p.dbHandle.apiKeys))
if p.dbHandle.isClosed {
return apiKeys, errMemoryProviderClosed
}
for _, k := range p.dbHandle.apiKeys {
apiKeys = append(apiKeys, k)
}
return apiKeys, nil
}
func (p *MemoryProvider) deleteAPIKeysWithUser(username string) {
found := false
for k, v := range p.dbHandle.apiKeys {
if v.User == username {
delete(p.dbHandle.apiKeys, k)
found = true
}
}
if found {
p.updateAPIKeysOrdering()
}
}
func (p *MemoryProvider) deleteAPIKeysWithAdmin(username string) {
found := false
for k, v := range p.dbHandle.apiKeys {
if v.Admin == username {
delete(p.dbHandle.apiKeys, k)
found = true
}
}
if found {
p.updateAPIKeysOrdering()
}
}
func (p *MemoryProvider) deleteSharesWithUser(username string) {
found := false
for k, v := range p.dbHandle.shares {
if v.Username == username {
delete(p.dbHandle.shares, k)
found = true
}
}
if found {
p.updateSharesOrdering()
}
}
func (p *MemoryProvider) updateAPIKeysOrdering() {
// this could be more efficient
p.dbHandle.apiKeysIDs = make([]string, 0, len(p.dbHandle.apiKeys))
for keyID := range p.dbHandle.apiKeys {
p.dbHandle.apiKeysIDs = append(p.dbHandle.apiKeysIDs, keyID)
}
sort.Strings(p.dbHandle.apiKeysIDs)
}
func (p *MemoryProvider) updateSharesOrdering() {
// this could be more efficient
p.dbHandle.sharesIDs = make([]string, 0, len(p.dbHandle.shares))
for shareID := range p.dbHandle.shares {
p.dbHandle.sharesIDs = append(p.dbHandle.sharesIDs, shareID)
}
sort.Strings(p.dbHandle.sharesIDs)
}
func (p *MemoryProvider) shareExistsInternal(shareID, username string) (Share, error) {
if val, ok := p.dbHandle.shares[shareID]; ok {
if username != "" && val.Username != username {
return Share{}, util.NewRecordNotFoundError(fmt.Sprintf("Share %#v does not exist", shareID))
}
return val.getACopy(), nil
}
return Share{}, util.NewRecordNotFoundError(fmt.Sprintf("Share %#v does not exist", shareID))
}
func (p *MemoryProvider) shareExists(shareID, username string) (Share, error) {
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
if p.dbHandle.isClosed {
return Share{}, errMemoryProviderClosed
}
return p.shareExistsInternal(shareID, username)
}
func (p *MemoryProvider) addShare(share *Share) error {
err := share.validate()
if err != nil {
return err
}
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
if p.dbHandle.isClosed {
return errMemoryProviderClosed
}
_, err = p.shareExistsInternal(share.ShareID, share.Username)
if err == nil {
return fmt.Errorf("share %#v already exists", share.ShareID)
}
if _, err := p.userExistsInternal(share.Username); err != nil {
return util.NewValidationError(fmt.Sprintf("related user %#v does not exists", share.Username))
}
if !share.IsRestore {
share.CreatedAt = util.GetTimeAsMsSinceEpoch(time.Now())
share.UpdatedAt = share.CreatedAt
share.LastUseAt = 0
share.UsedTokens = 0
}
if share.CreatedAt == 0 {
share.CreatedAt = util.GetTimeAsMsSinceEpoch(time.Now())
}
if share.UpdatedAt == 0 {
share.UpdatedAt = share.CreatedAt
}
p.dbHandle.shares[share.ShareID] = share.getACopy()
p.dbHandle.sharesIDs = append(p.dbHandle.sharesIDs, share.ShareID)
sort.Strings(p.dbHandle.sharesIDs)
return nil
}
func (p *MemoryProvider) updateShare(share *Share) error {
err := share.validate()
if err != nil {
return err
}
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
if p.dbHandle.isClosed {
return errMemoryProviderClosed
}
s, err := p.shareExistsInternal(share.ShareID, share.Username)
if err != nil {
return err
}
if _, err := p.userExistsInternal(share.Username); err != nil {
return util.NewValidationError(fmt.Sprintf("related user %#v does not exists", share.Username))
}
share.ID = s.ID
share.ShareID = s.ShareID
if !share.IsRestore {
share.UsedTokens = s.UsedTokens
share.CreatedAt = s.CreatedAt
share.LastUseAt = s.LastUseAt
share.UpdatedAt = util.GetTimeAsMsSinceEpoch(time.Now())
}
if share.CreatedAt == 0 {
share.CreatedAt = util.GetTimeAsMsSinceEpoch(time.Now())
}
if share.UpdatedAt == 0 {
share.UpdatedAt = share.CreatedAt
}
p.dbHandle.shares[share.ShareID] = share.getACopy()
return nil
}
func (p *MemoryProvider) deleteShare(share *Share) error {
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
if p.dbHandle.isClosed {
return errMemoryProviderClosed
}
_, err := p.shareExistsInternal(share.ShareID, share.Username)
if err != nil {
return err
}
delete(p.dbHandle.shares, share.ShareID)
p.updateSharesOrdering()
return nil
}
func (p *MemoryProvider) getShares(limit int, offset int, order, username string) ([]Share, error) {
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
if p.dbHandle.isClosed {
return []Share{}, errMemoryProviderClosed
}
if limit <= 0 {
return []Share{}, nil
}
shares := make([]Share, 0, limit)
itNum := 0
if order == OrderDESC {
for i := len(p.dbHandle.sharesIDs) - 1; i >= 0; i-- {
shareID := p.dbHandle.sharesIDs[i]
s := p.dbHandle.shares[shareID]
if s.Username != username {
continue
}
itNum++
if itNum <= offset {
continue
}
share := s.getACopy()
share.HideConfidentialData()
shares = append(shares, share)
if len(shares) >= limit {
break
}
}
} else {
for _, shareID := range p.dbHandle.sharesIDs {
s := p.dbHandle.shares[shareID]
if s.Username != username {
continue
}
itNum++
if itNum <= offset {
continue
}
share := s.getACopy()
share.HideConfidentialData()
shares = append(shares, share)
if len(shares) >= limit {
break
}
}
}
return shares, nil
}
func (p *MemoryProvider) dumpShares() ([]Share, error) {
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
shares := make([]Share, 0, len(p.dbHandle.shares))
if p.dbHandle.isClosed {
return shares, errMemoryProviderClosed
}
for _, s := range p.dbHandle.shares {
shares = append(shares, s)
}
return shares, nil
}
func (p *MemoryProvider) updateShareLastUse(shareID string, numTokens int) error {
p.dbHandle.Lock()
defer p.dbHandle.Unlock()
if p.dbHandle.isClosed {
return errMemoryProviderClosed
}
share, err := p.shareExistsInternal(shareID, "")
if err != nil {
return err
}
share.LastUseAt = util.GetTimeAsMsSinceEpoch(time.Now())
share.UsedTokens += numTokens
p.dbHandle.shares[share.ShareID] = share
return nil
}
func (p *MemoryProvider) getNextID() int64 {
nextID := int64(1)
for _, v := range p.dbHandle.users {
@@ -1295,10 +770,6 @@ func (p *MemoryProvider) clear() {
p.dbHandle.vfolders = make(map[string]vfs.BaseVirtualFolder)
p.dbHandle.admins = make(map[string]Admin)
p.dbHandle.adminsUsernames = []string{}
p.dbHandle.apiKeys = make(map[string]APIKey)
p.dbHandle.apiKeysIDs = []string{}
p.dbHandle.shares = make(map[string]Share)
p.dbHandle.sharesIDs = []string{}
}
func (p *MemoryProvider) reloadConfig() error {
@@ -1322,7 +793,7 @@ func (p *MemoryProvider) reloadConfig() error {
providerLog(logger.LevelWarn, "error loading dump: %v", err)
return err
}
content, err := os.ReadFile(p.dbHandle.configFile)
content, err := ioutil.ReadFile(p.dbHandle.configFile)
if err != nil {
providerLog(logger.LevelWarn, "error loading dump: %v", err)
return err
@@ -1346,79 +817,23 @@ func (p *MemoryProvider) reloadConfig() error {
return err
}
if err := p.restoreAPIKeys(&dump); err != nil {
return err
}
if err := p.restoreShares(&dump); err != nil {
return err
}
providerLog(logger.LevelDebug, "config loaded from file: %#v", p.dbHandle.configFile)
return nil
}
func (p *MemoryProvider) restoreShares(dump *BackupData) error {
for _, share := range dump.Shares {
s, err := p.shareExists(share.ShareID, "")
share := share // pin
share.IsRestore = true
if err == nil {
share.ID = s.ID
err = UpdateShare(&share, ActionExecutorSystem, "")
if err != nil {
providerLog(logger.LevelWarn, "error updating share %#v: %v", share.ShareID, err)
return err
}
} else {
err = AddShare(&share, ActionExecutorSystem, "")
if err != nil {
providerLog(logger.LevelWarn, "error adding share %#v: %v", share.ShareID, err)
return err
}
}
}
return nil
}
func (p *MemoryProvider) restoreAPIKeys(dump *BackupData) error {
for _, apiKey := range dump.APIKeys {
if apiKey.Key == "" {
return fmt.Errorf("cannot restore an empty API key: %+v", apiKey)
}
k, err := p.apiKeyExists(apiKey.KeyID)
apiKey := apiKey // pin
if err == nil {
apiKey.ID = k.ID
err = UpdateAPIKey(&apiKey, ActionExecutorSystem, "")
if err != nil {
providerLog(logger.LevelWarn, "error updating API key %#v: %v", apiKey.KeyID, err)
return err
}
} else {
err = AddAPIKey(&apiKey, ActionExecutorSystem, "")
if err != nil {
providerLog(logger.LevelWarn, "error adding API key %#v: %v", apiKey.KeyID, err)
return err
}
}
}
return nil
}
func (p *MemoryProvider) restoreAdmins(dump *BackupData) error {
for _, admin := range dump.Admins {
a, err := p.adminExists(admin.Username)
admin := admin // pin
if err == nil {
admin.ID = a.ID
err = UpdateAdmin(&admin, ActionExecutorSystem, "")
err = p.updateAdmin(&admin)
if err != nil {
providerLog(logger.LevelWarn, "error updating admin %#v: %v", admin.Username, err)
return err
}
} else {
err = AddAdmin(&admin, ActionExecutorSystem, "")
err = p.addAdmin(&admin)
if err != nil {
providerLog(logger.LevelWarn, "error adding admin %#v: %v", admin.Username, err)
return err
@@ -1434,14 +849,14 @@ func (p *MemoryProvider) restoreFolders(dump *BackupData) error {
f, err := p.getFolderByName(folder.Name)
if err == nil {
folder.ID = f.ID
err = UpdateFolder(&folder, f.Users, ActionExecutorSystem, "")
err = p.updateFolder(&folder)
if err != nil {
providerLog(logger.LevelWarn, "error updating folder %#v: %v", folder.Name, err)
return err
}
} else {
folder.Users = nil
err = AddFolder(&folder)
err = p.addFolder(&folder)
if err != nil {
providerLog(logger.LevelWarn, "error adding folder %#v: %v", folder.Name, err)
return err
@@ -1457,13 +872,13 @@ func (p *MemoryProvider) restoreUsers(dump *BackupData) error {
u, err := p.userExists(user.Username)
if err == nil {
user.ID = u.ID
err = UpdateUser(&user, ActionExecutorSystem, "")
err = p.updateUser(&user)
if err != nil {
providerLog(logger.LevelWarn, "error updating user %#v: %v", user.Username, err)
return err
}
} else {
err = AddUser(&user, ActionExecutorSystem, "")
err = p.addUser(&user)
if err != nil {
providerLog(logger.LevelWarn, "error adding user %#v: %v", user.Username, err)
return err
@@ -1485,7 +900,3 @@ func (p *MemoryProvider) migrateDatabase() error {
func (p *MemoryProvider) revertDatabase(targetVersion int) error {
return errors.New("memory provider does not store data, revert not possible")
}
func (p *MemoryProvider) resetDatabase() error {
return errors.New("memory provider does not store data, reset not possible")
}


@@ -1,13 +1,10 @@
//go:build !nomysql
// +build !nomysql
package dataprovider
import (
"context"
"crypto/x509"
"database/sql"
"errors"
"fmt"
"strings"
"time"
@@ -15,73 +12,47 @@ import (
// we import go-sql-driver/mysql here to be able to disable MySQL support using a build tag
_ "github.com/go-sql-driver/mysql"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/version"
"github.com/drakkan/sftpgo/v2/vfs"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/version"
"github.com/drakkan/sftpgo/vfs"
)
const (
mysqlResetSQL = "DROP TABLE IF EXISTS `{{api_keys}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{folders_mapping}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{admins}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{folders}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{shares}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{users}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{schema_version}}` CASCADE;"
mysqlInitialSQL = "CREATE TABLE `{{schema_version}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `version` integer NOT NULL);" +
"CREATE TABLE `{{admins}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `username` varchar(255) NOT NULL UNIQUE, " +
"`description` varchar(512) NULL, `password` varchar(255) NOT NULL, `email` varchar(255) NULL, `status` integer NOT NULL, " +
"`permissions` longtext NOT NULL, `filters` longtext NULL, `additional_info` longtext NULL);" +
"CREATE TABLE `{{folders}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `name` varchar(255) NOT NULL UNIQUE, " +
"`description` varchar(512) NULL, `path` varchar(512) NULL, `used_quota_size` bigint NOT NULL, " +
"`used_quota_files` integer NOT NULL, `last_quota_update` bigint NOT NULL, `filesystem` longtext NULL);" +
"CREATE TABLE `{{users}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `username` varchar(255) NOT NULL UNIQUE, " +
"`status` integer NOT NULL, `expiration_date` bigint NOT NULL, `description` varchar(512) NULL, `password` longtext NULL, " +
"`public_keys` longtext NULL, `home_dir` varchar(512) NOT NULL, `uid` integer NOT NULL, `gid` integer NOT NULL, " +
"`max_sessions` integer NOT NULL, `quota_size` bigint NOT NULL, `quota_files` integer NOT NULL, " +
"`permissions` longtext NOT NULL, `used_quota_size` bigint NOT NULL, `used_quota_files` integer NOT NULL, " +
"`last_quota_update` bigint NOT NULL, `upload_bandwidth` integer NOT NULL, `download_bandwidth` integer NOT NULL, " +
"`last_login` bigint NOT NULL, `filters` longtext NULL, `filesystem` longtext NULL, `additional_info` longtext NULL);" +
mysqlUsersTableSQL = "CREATE TABLE `{{users}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
"`username` varchar(255) NOT NULL UNIQUE, `password` varchar(255) NULL, `public_keys` longtext NULL, " +
"`home_dir` varchar(255) NOT NULL, `uid` integer NOT NULL, `gid` integer NOT NULL, `max_sessions` integer NOT NULL, " +
" `quota_size` bigint NOT NULL, `quota_files` integer NOT NULL, `permissions` longtext NOT NULL, " +
"`used_quota_size` bigint NOT NULL, `used_quota_files` integer NOT NULL, `last_quota_update` bigint NOT NULL, " +
"`upload_bandwidth` integer NOT NULL, `download_bandwidth` integer NOT NULL, `expiration_date` bigint(20) NOT NULL, " +
"`last_login` bigint(20) NOT NULL, `status` int(11) NOT NULL, `filters` longtext DEFAULT NULL, " +
"`filesystem` longtext DEFAULT NULL);"
mysqlSchemaTableSQL = "CREATE TABLE `{{schema_version}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `version` integer NOT NULL);"
mysqlV2SQL = "ALTER TABLE `{{users}}` ADD COLUMN `virtual_folders` longtext NULL;"
mysqlV3SQL = "ALTER TABLE `{{users}}` MODIFY `password` longtext NULL;"
mysqlV4SQL = "CREATE TABLE `{{folders}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `path` varchar(512) NOT NULL UNIQUE," +
"`used_quota_size` bigint NOT NULL, `used_quota_files` integer NOT NULL, `last_quota_update` bigint NOT NULL);" +
"ALTER TABLE `{{users}}` MODIFY `home_dir` varchar(512) NOT NULL;" +
"ALTER TABLE `{{users}}` DROP COLUMN `virtual_folders`;" +
"CREATE TABLE `{{folders_mapping}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `virtual_path` varchar(512) NOT NULL, " +
"`quota_size` bigint NOT NULL, `quota_files` integer NOT NULL, `folder_id` integer NOT NULL, `user_id` integer NOT NULL);" +
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `{{prefix}}unique_mapping` UNIQUE (`user_id`, `folder_id`);" +
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `{{prefix}}folders_mapping_folder_id_fk_folders_id` FOREIGN KEY (`folder_id`) REFERENCES `{{folders}}` (`id`) ON DELETE CASCADE;" +
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `{{prefix}}folders_mapping_user_id_fk_users_id` FOREIGN KEY (`user_id`) REFERENCES `{{users}}` (`id`) ON DELETE CASCADE;" +
"INSERT INTO {{schema_version}} (version) VALUES (10);"
mysqlV11SQL = "CREATE TABLE `{{api_keys}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `name` varchar(255) NOT NULL, `key_id` varchar(50) NOT NULL UNIQUE," +
"`api_key` varchar(255) NOT NULL UNIQUE, `scope` integer NOT NULL, `created_at` bigint NOT NULL, `updated_at` bigint NOT NULL, `last_use_at` bigint NOT NULL, " +
"`expires_at` bigint NOT NULL, `description` longtext NULL, `admin_id` integer NULL, `user_id` integer NULL);" +
"ALTER TABLE `{{api_keys}}` ADD CONSTRAINT `{{prefix}}api_keys_admin_id_fk_admins_id` FOREIGN KEY (`admin_id`) REFERENCES `{{admins}}` (`id`) ON DELETE CASCADE;" +
"ALTER TABLE `{{api_keys}}` ADD CONSTRAINT `{{prefix}}api_keys_user_id_fk_users_id` FOREIGN KEY (`user_id`) REFERENCES `{{users}}` (`id`) ON DELETE CASCADE;"
mysqlV11DownSQL = "DROP TABLE `{{api_keys}}` CASCADE;"
mysqlV12SQL = "ALTER TABLE `{{admins}}` ADD COLUMN `created_at` bigint DEFAULT 0 NOT NULL;" +
"ALTER TABLE `{{admins}}` ALTER COLUMN `created_at` DROP DEFAULT;" +
"ALTER TABLE `{{admins}}` ADD COLUMN `updated_at` bigint DEFAULT 0 NOT NULL;" +
"ALTER TABLE `{{admins}}` ALTER COLUMN `updated_at` DROP DEFAULT;" +
"ALTER TABLE `{{admins}}` ADD COLUMN `last_login` bigint DEFAULT 0 NOT NULL;" +
"ALTER TABLE `{{admins}}` ALTER COLUMN `last_login` DROP DEFAULT;" +
"ALTER TABLE `{{users}}` ADD COLUMN `created_at` bigint DEFAULT 0 NOT NULL;" +
"ALTER TABLE `{{users}}` ALTER COLUMN `created_at` DROP DEFAULT;" +
"ALTER TABLE `{{users}}` ADD COLUMN `updated_at` bigint DEFAULT 0 NOT NULL;" +
"ALTER TABLE `{{users}}` ALTER COLUMN `updated_at` DROP DEFAULT;" +
"CREATE INDEX `{{prefix}}users_updated_at_idx` ON `{{users}}` (`updated_at`);"
mysqlV12DownSQL = "ALTER TABLE `{{admins}}` DROP COLUMN `updated_at`;" +
"ALTER TABLE `{{admins}}` DROP COLUMN `created_at`;" +
"ALTER TABLE `{{admins}}` DROP COLUMN `last_login`;" +
"ALTER TABLE `{{users}}` DROP COLUMN `created_at`;" +
"ALTER TABLE `{{users}}` DROP COLUMN `updated_at`;"
mysqlV13SQL = "ALTER TABLE `{{users}}` ADD COLUMN `email` varchar(255) NULL;"
mysqlV13DownSQL = "ALTER TABLE `{{users}}` DROP COLUMN `email`;"
mysqlV14SQL = "CREATE TABLE `{{shares}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
"`share_id` varchar(60) NOT NULL UNIQUE, `name` varchar(255) NOT NULL, `description` varchar(512) NULL, " +
"`scope` integer NOT NULL, `paths` longtext NOT NULL, `created_at` bigint NOT NULL, " +
"`updated_at` bigint NOT NULL, `last_use_at` bigint NOT NULL, `expires_at` bigint NOT NULL, " +
"`password` longtext NULL, `max_tokens` integer NOT NULL, `used_tokens` integer NOT NULL, " +
"`allow_from` longtext NULL, `user_id` integer NOT NULL);" +
"ALTER TABLE `{{shares}}` ADD CONSTRAINT `{{prefix}}shares_user_id_fk_users_id` " +
"FOREIGN KEY (`user_id`) REFERENCES `{{users}}` (`id`) ON DELETE CASCADE;"
mysqlV14DownSQL = "DROP TABLE `{{shares}}` CASCADE;"
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `unique_mapping` UNIQUE (`user_id`, `folder_id`);" +
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `folders_mapping_folder_id_fk_folders_id` FOREIGN KEY (`folder_id`) REFERENCES `{{folders}}` (`id`) ON DELETE CASCADE;" +
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `folders_mapping_user_id_fk_users_id` FOREIGN KEY (`user_id`) REFERENCES `{{users}}` (`id`) ON DELETE CASCADE;"
mysqlV6SQL = "ALTER TABLE `{{users}}` ADD COLUMN `additional_info` longtext NULL;"
mysqlV6DownSQL = "ALTER TABLE `{{users}}` DROP COLUMN `additional_info`;"
mysqlV7SQL = "CREATE TABLE `{{admins}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `username` varchar(255) NOT NULL UNIQUE, " +
"`password` varchar(255) NOT NULL, `email` varchar(255) NULL, `status` integer NOT NULL, `permissions` longtext NOT NULL, " +
"`filters` longtext NULL, `additional_info` longtext NULL);"
mysqlV7DownSQL = "DROP TABLE `{{admins}}` CASCADE;"
mysqlV8SQL = "ALTER TABLE `{{folders}}` ADD COLUMN `name` varchar(255) NULL;" +
"ALTER TABLE `{{folders}}` MODIFY `path` varchar(512) NULL;" +
"ALTER TABLE `{{folders}}` DROP INDEX `path`;" +
"UPDATE `{{folders}}` f1 SET name = CONCAT('folder',f1.id);" +
"ALTER TABLE `{{folders}}` MODIFY `name` varchar(255) NOT NULL;" +
"ALTER TABLE `{{folders}}` ADD CONSTRAINT `name` UNIQUE (`name`);"
mysqlV8DownSQL = "ALTER TABLE `{{folders}}` DROP COLUMN `name`;" +
"ALTER TABLE `{{folders}}` MODIFY `path` varchar(512) NOT NULL;" +
"ALTER TABLE `{{folders}}` ADD CONSTRAINT `path` UNIQUE (`path`);"
)
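
Both the schema and migration constants keep table names as `{{users}}`-style placeholders so a configurable table prefix can be applied at runtime; the provider expands them with `strings.ReplaceAll` before executing. A minimal sketch of that expansion using `strings.NewReplacer` (equivalent to the chained `ReplaceAll` calls), with hypothetical resolved table names:

package main

import (
	"fmt"
	"strings"
)

// expand resolves the table-name placeholders; SFTPGo derives the real
// names from the configured sql_tables_prefix.
func expand(tpl, prefix string) string {
	repl := strings.NewReplacer(
		"{{users}}", prefix+"users",
		"{{folders}}", prefix+"folders",
		"{{prefix}}", prefix,
	)
	return repl.Replace(tpl)
}

func main() {
	tpl := "ALTER TABLE `{{users}}` ADD CONSTRAINT `{{prefix}}users_fk` " +
		"FOREIGN KEY (`folder_id`) REFERENCES `{{folders}}` (`id`);"
	fmt.Println(expand(tpl, "sftpgo_"))
}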
// MySQLProvider auth provider for MySQL/MariaDB database
@@ -95,7 +66,7 @@ func init() {
func initializeMySQLProvider() error {
var err error
logSender = fmt.Sprintf("dataprovider_%v", MySQLDataProviderName)
dbHandle, err := sql.Open("mysql", getMySQLConnectionString(false))
if err == nil {
providerLog(logger.LevelDebug, "mysql database handle created, connection string: %#v, pool size: %v",
@@ -121,7 +92,7 @@ func getMySQLConnectionString(redactedPwd bool) string {
if redactedPwd {
password = "[redacted]"
}
connectionString = fmt.Sprintf("%v:%v@tcp([%v]:%v)/%v?charset=utf8mb4&interpolateParams=true&timeout=10s&parseTime=true&tls=%v&writeTimeout=10s&readTimeout=10s",
connectionString = fmt.Sprintf("%v:%v@tcp([%v]:%v)/%v?charset=utf8&interpolateParams=true&timeout=10s&tls=%v&writeTimeout=10s&readTimeout=10s",
config.Username, password, config.Host, config.Port, config.Name, getSSLMode())
} else {
connectionString = config.ConnectionString
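
This hunk shows two things: the v2 DSN switches from `charset=utf8` to `charset=utf8mb4` and adds `parseTime=true`, and the builder takes a `redactedPwd` flag so the connection string can be logged without leaking the password. A sketch of the redaction idea; the field names here are assumptions, not SFTPGo's config struct:

package main

import "fmt"

// buildDSN assembles a go-sql-driver/mysql DSN; when redacted is true the
// password is masked so the result is safe to log.
func buildDSN(user, password, host string, port int, db string, redacted bool) string {
	if redacted {
		password = "[redacted]"
	}
	return fmt.Sprintf("%v:%v@tcp([%v]:%v)/%v?charset=utf8mb4&parseTime=true&timeout=10s",
		user, password, host, port, db)
}

func main() {
	fmt.Println(buildDSN("sftpgo", "secret", "127.0.0.1", 3306, "sftpgo", true))
}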
@@ -137,10 +108,6 @@ func (p *MySQLProvider) validateUserAndPass(username, password, ip, protocol str
return sqlCommonValidateUserAndPass(username, password, ip, protocol, p.dbHandle)
}
func (p *MySQLProvider) validateUserAndTLSCert(username, protocol string, tlsCert *x509.Certificate) (User, error) {
return sqlCommonValidateUserAndTLSCertificate(username, protocol, tlsCert, p.dbHandle)
}
func (p *MySQLProvider) validateUserAndPubKey(username string, publicKey []byte) (User, string, error) {
return sqlCommonValidateUserAndPubKey(username, publicKey, p.dbHandle)
}
@@ -153,18 +120,10 @@ func (p *MySQLProvider) getUsedQuota(username string) (int, int64, error) {
return sqlCommonGetUsedQuota(username, p.dbHandle)
}
func (p *MySQLProvider) setUpdatedAt(username string) {
sqlCommonSetUpdatedAt(username, p.dbHandle)
}
func (p *MySQLProvider) updateLastLogin(username string) error {
return sqlCommonUpdateLastLogin(username, p.dbHandle)
}
func (p *MySQLProvider) updateAdminLastLogin(username string) error {
return sqlCommonUpdateAdminLastLogin(username, p.dbHandle)
}
func (p *MySQLProvider) userExists(username string) (User, error) {
return sqlCommonGetUserByUsername(username, p.dbHandle)
}
@@ -185,10 +144,6 @@ func (p *MySQLProvider) dumpUsers() ([]User, error) {
return sqlCommonDumpUsers(p.dbHandle)
}
func (p *MySQLProvider) getRecentlyUpdatedUsers(after int64) ([]User, error) {
return sqlCommonGetRecentlyUpdatedUsers(after, p.dbHandle)
}
func (p *MySQLProvider) getUsers(limit int, offset int, order string) ([]User, error) {
return sqlCommonGetUsers(limit, offset, order, p.dbHandle)
}
@@ -255,62 +210,6 @@ func (p *MySQLProvider) validateAdminAndPass(username, password, ip string) (Adm
return sqlCommonValidateAdminAndPass(username, password, ip, p.dbHandle)
}
func (p *MySQLProvider) apiKeyExists(keyID string) (APIKey, error) {
return sqlCommonGetAPIKeyByID(keyID, p.dbHandle)
}
func (p *MySQLProvider) addAPIKey(apiKey *APIKey) error {
return sqlCommonAddAPIKey(apiKey, p.dbHandle)
}
func (p *MySQLProvider) updateAPIKey(apiKey *APIKey) error {
return sqlCommonUpdateAPIKey(apiKey, p.dbHandle)
}
func (p *MySQLProvider) deleteAPIKey(apiKey *APIKey) error {
return sqlCommonDeleteAPIKey(apiKey, p.dbHandle)
}
func (p *MySQLProvider) getAPIKeys(limit int, offset int, order string) ([]APIKey, error) {
return sqlCommonGetAPIKeys(limit, offset, order, p.dbHandle)
}
func (p *MySQLProvider) dumpAPIKeys() ([]APIKey, error) {
return sqlCommonDumpAPIKeys(p.dbHandle)
}
func (p *MySQLProvider) updateAPIKeyLastUse(keyID string) error {
return sqlCommonUpdateAPIKeyLastUse(keyID, p.dbHandle)
}
func (p *MySQLProvider) shareExists(shareID, username string) (Share, error) {
return sqlCommonGetShareByID(shareID, username, p.dbHandle)
}
func (p *MySQLProvider) addShare(share *Share) error {
return sqlCommonAddShare(share, p.dbHandle)
}
func (p *MySQLProvider) updateShare(share *Share) error {
return sqlCommonUpdateShare(share, p.dbHandle)
}
func (p *MySQLProvider) deleteShare(share *Share) error {
return sqlCommonDeleteShare(share, p.dbHandle)
}
func (p *MySQLProvider) getShares(limit int, offset int, order, username string) ([]Share, error) {
return sqlCommonGetShares(limit, offset, order, username, p.dbHandle)
}
func (p *MySQLProvider) dumpShares() ([]Share, error) {
return sqlCommonDumpShares(p.dbHandle)
}
func (p *MySQLProvider) updateShareLastUse(shareID string, numTokens int) error {
return sqlCommonUpdateShareLastUse(shareID, numTokens, p.dbHandle)
}
func (p *MySQLProvider) close() error {
return p.dbHandle.Close()
}
@@ -325,200 +224,228 @@ func (p *MySQLProvider) initializeDatabase() error {
if err == nil && dbVersion.Version > 0 {
return ErrNoInitRequired
}
if errors.Is(err, sql.ErrNoRows) {
return errSchemaVersionEmpty
}
initialSQL := strings.ReplaceAll(mysqlInitialSQL, "{{schema_version}}", sqlTableSchemaVersion)
initialSQL = strings.ReplaceAll(initialSQL, "{{admins}}", sqlTableAdmins)
initialSQL = strings.ReplaceAll(initialSQL, "{{folders}}", sqlTableFolders)
initialSQL = strings.ReplaceAll(initialSQL, "{{users}}", sqlTableUsers)
initialSQL = strings.ReplaceAll(initialSQL, "{{folders_mapping}}", sqlTableFoldersMapping)
initialSQL = strings.ReplaceAll(initialSQL, "{{prefix}}", config.SQLTablesPrefix)
sqlUsers := strings.Replace(mysqlUsersTableSQL, "{{users}}", sqlTableUsers, 1)
ctx, cancel := context.WithTimeout(context.Background(), longSQLQueryTimeout)
defer cancel()
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, strings.Split(initialSQL, ";"), 10)
tx, err := p.dbHandle.BeginTx(ctx, nil)
if err != nil {
return err
}
_, err = tx.Exec(sqlUsers)
if err != nil {
return err
}
_, err = tx.Exec(strings.Replace(mysqlSchemaTableSQL, "{{schema_version}}", sqlTableSchemaVersion, 1))
if err != nil {
return err
}
_, err = tx.Exec(strings.Replace(initialDBVersionSQL, "{{schema_version}}", sqlTableSchemaVersion, 1))
if err != nil {
return err
}
return tx.Commit()
}
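
The transactional initializer in the 2.0.x branch returns as soon as an `Exec` fails, leaving the open transaction to be reclaimed with the connection; a common hardening is to `defer tx.Rollback()` so it is released promptly on every error path (calling `Rollback` after a successful `Commit` only returns a harmless error). A sketch, assuming an already-open `*sql.DB`:

package dbutil

import (
	"context"
	"database/sql"
	"time"
)

// execInTx runs the given statements in one transaction with a timeout,
// rolling back automatically on any error path.
func execInTx(db *sql.DB, stmts []string) error {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	defer tx.Rollback() //nolint:errcheck // no-op error after a successful Commit
	for _, s := range stmts {
		if _, err := tx.ExecContext(ctx, s); err != nil {
			return err
		}
	}
	return tx.Commit()
}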
//nolint:dupl
func (p *MySQLProvider) migrateDatabase() error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
if err != nil {
return err
}
switch version := dbVersion.Version; {
case version == sqlDatabaseVersion:
providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", version)
if dbVersion.Version == sqlDatabaseVersion {
providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", dbVersion.Version)
return ErrNoInitRequired
case version < 10:
err = fmt.Errorf("database version %v is too old, please see the upgrading docs", version)
providerLog(logger.LevelError, "%v", err)
logger.ErrorToConsole("%v", err)
return err
case version == 10:
return updateMySQLDatabaseFromV10(p.dbHandle)
case version == 11:
return updateMySQLDatabaseFromV11(p.dbHandle)
case version == 12:
return updateMySQLDatabaseFromV12(p.dbHandle)
case version == 13:
return updateMySQLDatabaseFromV13(p.dbHandle)
}
switch dbVersion.Version {
case 1:
return updateMySQLDatabaseFromV1(p.dbHandle)
case 2:
return updateMySQLDatabaseFromV2(p.dbHandle)
case 3:
return updateMySQLDatabaseFromV3(p.dbHandle)
case 4:
return updateMySQLDatabaseFromV4(p.dbHandle)
case 5:
return updateMySQLDatabaseFromV5(p.dbHandle)
case 6:
return updateMySQLDatabaseFromV6(p.dbHandle)
case 7:
return updateMySQLDatabaseFromV7(p.dbHandle)
default:
if version > sqlDatabaseVersion {
providerLog(logger.LevelWarn, "database version %v is newer than the supported one: %v", version,
if dbVersion.Version > sqlDatabaseVersion {
providerLog(logger.LevelWarn, "database version %v is newer than the supported: %v", dbVersion.Version,
sqlDatabaseVersion)
logger.WarnToConsole("database version %v is newer than the supported one: %v", version,
logger.WarnToConsole("database version %v is newer than the supported: %v", dbVersion.Version,
sqlDatabaseVersion)
return nil
}
return fmt.Errorf("database version not handled: %v", version)
return fmt.Errorf("Database version not handled: %v", dbVersion.Version)
}
}
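
Each `updateMySQLDatabaseFromVn` helper applies one step and then delegates to the next, so entering the chain at the detected version walks the schema forward one version at a time. The same idea expressed as a table-driven loop; this is a sketch of the technique, not the project's actual layout:

package dbutil

import "fmt"

// step upgrades the schema from some version v to v+1.
type step func() error

// migrate walks from current up to target, applying one step per version;
// steps[v] must upgrade v -> v+1.
func migrate(current, target int, steps map[int]step) error {
	for v := current; v < target; v++ {
		s, ok := steps[v]
		if !ok {
			return fmt.Errorf("database version not handled: %v", v)
		}
		if err := s(); err != nil {
			return fmt.Errorf("upgrading %v -> %v: %w", v, v+1, err)
		}
	}
	return nil
}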
//nolint:dupl
func (p *MySQLProvider) revertDatabase(targetVersion int) error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
if err != nil {
return err
}
if dbVersion.Version == targetVersion {
return errors.New("current version match target version, nothing to do")
return fmt.Errorf("current version match target version, nothing to do")
}
switch dbVersion.Version {
case 14:
return downgradeMySQLDatabaseFromV14(p.dbHandle)
case 13:
return downgradeMySQLDatabaseFromV13(p.dbHandle)
case 12:
return downgradeMySQLDatabaseFromV12(p.dbHandle)
case 11:
return downgradeMySQLDatabaseFromV11(p.dbHandle)
case 8:
err = downgradeMySQLDatabaseFrom8To7(p.dbHandle)
if err != nil {
return err
}
err = downgradeMySQLDatabaseFrom7To6(p.dbHandle)
if err != nil {
return err
}
err = downgradeMySQLDatabaseFrom6To5(p.dbHandle)
if err != nil {
return err
}
return downgradeMySQLDatabaseFrom5To4(p.dbHandle)
case 7:
err = downgradeMySQLDatabaseFrom7To6(p.dbHandle)
if err != nil {
return err
}
err = downgradeMySQLDatabaseFrom6To5(p.dbHandle)
if err != nil {
return err
}
return downgradeMySQLDatabaseFrom5To4(p.dbHandle)
case 6:
err = downgradeMySQLDatabaseFrom6To5(p.dbHandle)
if err != nil {
return err
}
return downgradeMySQLDatabaseFrom5To4(p.dbHandle)
case 5:
return downgradeMySQLDatabaseFrom5To4(p.dbHandle)
default:
return fmt.Errorf("database version not handled: %v", dbVersion.Version)
return fmt.Errorf("Database version not handled: %v", dbVersion.Version)
}
}
func (p *MySQLProvider) resetDatabase() error {
sql := strings.ReplaceAll(mysqlResetSQL, "{{schema_version}}", sqlTableSchemaVersion)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{folders_mapping}}", sqlTableFoldersMapping)
sql = strings.ReplaceAll(sql, "{{api_keys}}", sqlTableAPIKeys)
sql = strings.ReplaceAll(sql, "{{shares}}", sqlTableShares)
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, strings.Split(sql, ";"), 0)
}
func updateMySQLDatabaseFromV10(dbHandle *sql.DB) error {
if err := updateMySQLDatabaseFrom10To11(dbHandle); err != nil {
func updateMySQLDatabaseFromV1(dbHandle *sql.DB) error {
err := updateMySQLDatabaseFrom1To2(dbHandle)
if err != nil {
return err
}
return updateMySQLDatabaseFromV11(dbHandle)
return updateMySQLDatabaseFromV2(dbHandle)
}
func updateMySQLDatabaseFromV11(dbHandle *sql.DB) error {
if err := updateMySQLDatabaseFrom11To12(dbHandle); err != nil {
func updateMySQLDatabaseFromV2(dbHandle *sql.DB) error {
err := updateMySQLDatabaseFrom2To3(dbHandle)
if err != nil {
return err
}
return updateMySQLDatabaseFromV12(dbHandle)
return updateMySQLDatabaseFromV3(dbHandle)
}
func updateMySQLDatabaseFromV12(dbHandle *sql.DB) error {
if err := updateMySQLDatabaseFrom12To13(dbHandle); err != nil {
func updateMySQLDatabaseFromV3(dbHandle *sql.DB) error {
err := updateMySQLDatabaseFrom3To4(dbHandle)
if err != nil {
return err
}
return updateMySQLDatabaseFromV13(dbHandle)
return updateMySQLDatabaseFromV4(dbHandle)
}
func updateMySQLDatabaseFromV13(dbHandle *sql.DB) error {
return updateMySQLDatabaseFrom13To14(dbHandle)
}
func downgradeMySQLDatabaseFromV14(dbHandle *sql.DB) error {
if err := downgradeMySQLDatabaseFrom14To13(dbHandle); err != nil {
func updateMySQLDatabaseFromV4(dbHandle *sql.DB) error {
err := updateMySQLDatabaseFrom4To5(dbHandle)
if err != nil {
return err
}
return downgradeMySQLDatabaseFromV13(dbHandle)
return updateMySQLDatabaseFromV5(dbHandle)
}
func downgradeMySQLDatabaseFromV13(dbHandle *sql.DB) error {
if err := downgradeMySQLDatabaseFrom13To12(dbHandle); err != nil {
func updateMySQLDatabaseFromV5(dbHandle *sql.DB) error {
err := updateMySQLDatabaseFrom5To6(dbHandle)
if err != nil {
return err
}
return downgradeMySQLDatabaseFromV12(dbHandle)
return updateMySQLDatabaseFromV6(dbHandle)
}
func downgradeMySQLDatabaseFromV12(dbHandle *sql.DB) error {
if err := downgradeMySQLDatabaseFrom12To11(dbHandle); err != nil {
func updateMySQLDatabaseFromV6(dbHandle *sql.DB) error {
err := updateMySQLDatabaseFrom6To7(dbHandle)
if err != nil {
return err
}
return downgradeMySQLDatabaseFromV11(dbHandle)
return updateMySQLDatabaseFromV7(dbHandle)
}
func downgradeMySQLDatabaseFromV11(dbHandle *sql.DB) error {
return downgradeMySQLDatabaseFrom11To10(dbHandle)
func updateMySQLDatabaseFromV7(dbHandle *sql.DB) error {
return updateMySQLDatabaseFrom7To8(dbHandle)
}
func updateMySQLDatabaseFrom13To14(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 13 -> 14")
providerLog(logger.LevelInfo, "updating database version: 13 -> 14")
sql := strings.ReplaceAll(mysqlV14SQL, "{{shares}}", sqlTableShares)
sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 14)
func updateMySQLDatabaseFrom1To2(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 1 -> 2")
providerLog(logger.LevelInfo, "updating database version: 1 -> 2")
sql := strings.Replace(mysqlV2SQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 2)
}
func downgradeMySQLDatabaseFrom14To13(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 14 -> 13")
providerLog(logger.LevelInfo, "downgrading database version: 14 -> 13")
sql := strings.ReplaceAll(mysqlV14DownSQL, "{{shares}}", sqlTableShares)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 13)
func updateMySQLDatabaseFrom2To3(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 2 -> 3")
providerLog(logger.LevelInfo, "updating database version: 2 -> 3")
sql := strings.Replace(mysqlV3SQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 3)
}
func updateMySQLDatabaseFrom12To13(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 12 -> 13")
providerLog(logger.LevelInfo, "updating database version: 12 -> 13")
sql := strings.ReplaceAll(mysqlV13SQL, "{{users}}", sqlTableUsers)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 13)
func updateMySQLDatabaseFrom3To4(dbHandle *sql.DB) error {
return sqlCommonUpdateDatabaseFrom3To4(mysqlV4SQL, dbHandle)
}
func downgradeMySQLDatabaseFrom13To12(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 13 -> 12")
providerLog(logger.LevelInfo, "downgrading database version: 13 -> 12")
sql := strings.ReplaceAll(mysqlV13DownSQL, "{{users}}", sqlTableUsers)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 12)
func updateMySQLDatabaseFrom4To5(dbHandle *sql.DB) error {
return sqlCommonUpdateDatabaseFrom4To5(dbHandle)
}
func updateMySQLDatabaseFrom11To12(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 11 -> 12")
providerLog(logger.LevelInfo, "updating database version: 11 -> 12")
sql := strings.ReplaceAll(mysqlV12SQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 12)
func updateMySQLDatabaseFrom5To6(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 5 -> 6")
providerLog(logger.LevelInfo, "updating database version: 5 -> 6")
sql := strings.Replace(mysqlV6SQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 6)
}
func downgradeMySQLDatabaseFrom12To11(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 12 -> 11")
providerLog(logger.LevelInfo, "downgrading database version: 12 -> 11")
sql := strings.ReplaceAll(mysqlV12DownSQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 11)
func updateMySQLDatabaseFrom6To7(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 6 -> 7")
providerLog(logger.LevelInfo, "updating database version: 6 -> 7")
sql := strings.Replace(mysqlV7SQL, "{{admins}}", sqlTableAdmins, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 7)
}
func updateMySQLDatabaseFrom10To11(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 10 -> 11")
providerLog(logger.LevelInfo, "updating database version: 10 -> 11")
sql := strings.ReplaceAll(mysqlV11SQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql = strings.ReplaceAll(sql, "{{api_keys}}", sqlTableAPIKeys)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 11)
func updateMySQLDatabaseFrom7To8(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 7 -> 8")
providerLog(logger.LevelInfo, "updating database version: 7 -> 8")
sql := strings.ReplaceAll(mysqlV8SQL, "{{folders}}", sqlTableFolders)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 8)
}
func downgradeMySQLDatabaseFrom11To10(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 11 -> 10")
providerLog(logger.LevelInfo, "downgrading database version: 11 -> 10")
sql := strings.ReplaceAll(mysqlV11DownSQL, "{{api_keys}}", sqlTableAPIKeys)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 10)
func downgradeMySQLDatabaseFrom8To7(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 8 -> 7")
providerLog(logger.LevelInfo, "downgrading database version: 8 -> 7")
sql := strings.ReplaceAll(mysqlV8DownSQL, "{{folders}}", sqlTableFolders)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 7)
}
func downgradeMySQLDatabaseFrom7To6(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 7 -> 6")
providerLog(logger.LevelInfo, "downgrading database version: 7 -> 6")
sql := strings.Replace(mysqlV7DownSQL, "{{admins}}", sqlTableAdmins, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 6)
}
func downgradeMySQLDatabaseFrom6To5(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 6 -> 5")
providerLog(logger.LevelInfo, "downgrading database version: 6 -> 5")
sql := strings.Replace(mysqlV6DownSQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 5)
}
func downgradeMySQLDatabaseFrom5To4(dbHandle *sql.DB) error {
return sqlCommonDowngradeDatabaseFrom5To4(dbHandle)
}

View File

@@ -1,4 +1,3 @@
//go:build nomysql
// +build nomysql
package dataprovider
@@ -6,7 +5,7 @@ package dataprovider
import (
"errors"
"github.com/drakkan/sftpgo/v2/version"
"github.com/drakkan/sftpgo/version"
)
func init() {

View File

@@ -1,13 +1,10 @@
//go:build !nopgsql
// +build !nopgsql
package dataprovider
import (
"context"
"crypto/x509"
"database/sql"
"errors"
"fmt"
"strings"
"time"
@@ -15,88 +12,50 @@ import (
// we import lib/pq here to be able to disable PostgreSQL support using a build tag
_ "github.com/lib/pq"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/version"
"github.com/drakkan/sftpgo/v2/vfs"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/version"
"github.com/drakkan/sftpgo/vfs"
)
const (
pgsqlResetSQL = `DROP TABLE IF EXISTS "{{api_keys}}" CASCADE;
DROP TABLE IF EXISTS "{{folders_mapping}}" CASCADE;
DROP TABLE IF EXISTS "{{admins}}" CASCADE;
DROP TABLE IF EXISTS "{{folders}}" CASCADE;
DROP TABLE IF EXISTS "{{shares}}" CASCADE;
DROP TABLE IF EXISTS "{{users}}" CASCADE;
DROP TABLE IF EXISTS "{{schema_version}}" CASCADE;
pgsqlUsersTableSQL = `CREATE TABLE "{{users}}" ("id" serial NOT NULL PRIMARY KEY, "username" varchar(255) NOT NULL UNIQUE,
"password" varchar(255) NULL, "public_keys" text NULL, "home_dir" varchar(255) NOT NULL, "uid" integer NOT NULL,
"gid" integer NOT NULL, "max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL,
"permissions" text NOT NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL,
"last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL,
"expiration_date" bigint NOT NULL, "last_login" bigint NOT NULL, "status" integer NOT NULL, "filters" text NULL,
"filesystem" text NULL);`
pgsqlSchemaTableSQL = `CREATE TABLE "{{schema_version}}" ("id" serial NOT NULL PRIMARY KEY, "version" integer NOT NULL);`
pgsqlV2SQL = `ALTER TABLE "{{users}}" ADD COLUMN "virtual_folders" text NULL;`
pgsqlV3SQL = `ALTER TABLE "{{users}}" ALTER COLUMN "password" TYPE text USING "password"::text;`
pgsqlV4SQL = `CREATE TABLE "{{folders}}" ("id" serial NOT NULL PRIMARY KEY, "path" varchar(512) NOT NULL UNIQUE, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL);
ALTER TABLE "{{users}}" ALTER COLUMN "home_dir" TYPE varchar(512) USING "home_dir"::varchar(512);
ALTER TABLE "{{users}}" DROP COLUMN "virtual_folders" CASCADE;
CREATE TABLE "{{folders_mapping}}" ("id" serial NOT NULL PRIMARY KEY, "virtual_path" varchar(512) NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "folder_id" integer NOT NULL, "user_id" integer NOT NULL);
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "unique_mapping" UNIQUE ("user_id", "folder_id");
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "folders_mapping_folder_id_fk_folders_id" FOREIGN KEY ("folder_id") REFERENCES "{{folders}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "folders_mapping_user_id_fk_users_id" FOREIGN KEY ("user_id") REFERENCES "{{users}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
CREATE INDEX "folders_mapping_folder_id_idx" ON "{{folders_mapping}}" ("folder_id");
CREATE INDEX "folders_mapping_user_id_idx" ON "{{folders_mapping}}" ("user_id");
`
pgsqlInitial = `CREATE TABLE "{{schema_version}}" ("id" serial NOT NULL PRIMARY KEY, "version" integer NOT NULL);
CREATE TABLE "{{admins}}" ("id" serial NOT NULL PRIMARY KEY, "username" varchar(255) NOT NULL UNIQUE,
"description" varchar(512) NULL, "password" varchar(255) NOT NULL, "email" varchar(255) NULL, "status" integer NOT NULL,
"permissions" text NOT NULL, "filters" text NULL, "additional_info" text NULL);
CREATE TABLE "{{folders}}" ("id" serial NOT NULL PRIMARY KEY, "name" varchar(255) NOT NULL UNIQUE, "description" varchar(512) NULL,
"path" varchar(512) NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL,
"filesystem" text NULL);
CREATE TABLE "{{users}}" ("id" serial NOT NULL PRIMARY KEY, "username" varchar(255) NOT NULL UNIQUE, "status" integer NOT NULL,
"expiration_date" bigint NOT NULL, "description" varchar(512) NULL, "password" text NULL, "public_keys" text NULL,
"home_dir" varchar(512) NOT NULL, "uid" integer NOT NULL, "gid" integer NOT NULL, "max_sessions" integer NOT NULL,
"quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "permissions" text NOT NULL, "used_quota_size" bigint NOT NULL,
"used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL,
"download_bandwidth" integer NOT NULL, "last_login" bigint NOT NULL, "filters" text NULL, "filesystem" text NULL,
"additional_info" text NULL);
CREATE TABLE "{{folders_mapping}}" ("id" serial NOT NULL PRIMARY KEY, "virtual_path" varchar(512) NOT NULL,
"quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "folder_id" integer NOT NULL, "user_id" integer NOT NULL);
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "{{prefix}}unique_mapping" UNIQUE ("user_id", "folder_id");
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "{{prefix}}folders_mapping_folder_id_fk_folders_id"
FOREIGN KEY ("folder_id") REFERENCES "{{folders}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "{{prefix}}folders_mapping_user_id_fk_users_id"
FOREIGN KEY ("user_id") REFERENCES "{{users}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
CREATE INDEX "{{prefix}}folders_mapping_folder_id_idx" ON "{{folders_mapping}}" ("folder_id");
CREATE INDEX "{{prefix}}folders_mapping_user_id_idx" ON "{{folders_mapping}}" ("user_id");
INSERT INTO {{schema_version}} (version) VALUES (10);
pgsqlV6SQL = `ALTER TABLE "{{users}}" ADD COLUMN "additional_info" text NULL;`
pgsqlV6DownSQL = `ALTER TABLE "{{users}}" DROP COLUMN "additional_info" CASCADE;`
pgsqlV7SQL = `CREATE TABLE "{{admins}}" ("id" serial NOT NULL PRIMARY KEY, "username" varchar(255) NOT NULL UNIQUE,
"password" varchar(255) NOT NULL, "email" varchar(255) NULL, "status" integer NOT NULL, "permissions" text NOT NULL,
"filters" text NULL, "additional_info" text NULL);
`
pgsqlV11SQL = `CREATE TABLE "{{api_keys}}" ("id" serial NOT NULL PRIMARY KEY, "name" varchar(255) NOT NULL,
"key_id" varchar(50) NOT NULL UNIQUE, "api_key" varchar(255) NOT NULL UNIQUE, "scope" integer NOT NULL,
"created_at" bigint NOT NULL, "updated_at" bigint NOT NULL, "last_use_at" bigint NOT NULL,"expires_at" bigint NOT NULL,
"description" text NULL, "admin_id" integer NULL, "user_id" integer NULL);
ALTER TABLE "{{api_keys}}" ADD CONSTRAINT "{{prefix}}api_keys_admin_id_fk_admins_id" FOREIGN KEY ("admin_id")
REFERENCES "{{admins}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
ALTER TABLE "{{api_keys}}" ADD CONSTRAINT "{{prefix}}api_keys_user_id_fk_users_id" FOREIGN KEY ("user_id")
REFERENCES "{{users}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
CREATE INDEX "{{prefix}}api_keys_admin_id_idx" ON "{{api_keys}}" ("admin_id");
CREATE INDEX "{{prefix}}api_keys_user_id_idx" ON "{{api_keys}}" ("user_id");
pgsqlV7DownSQL = `DROP TABLE "{{admins}}" CASCADE;`
pgsqlV8SQL = `ALTER TABLE "{{folders}}" ADD COLUMN "name" varchar(255) NULL;
ALTER TABLE "folders" ALTER COLUMN "path" DROP NOT NULL;
ALTER TABLE "{{folders}}" DROP CONSTRAINT IF EXISTS folders_path_key;
UPDATE "{{folders}}" f1 SET name = (SELECT CONCAT('folder',f2.id) FROM "{{folders}}" f2 WHERE f2.id = f1.id);
ALTER TABLE "{{folders}}" ALTER COLUMN "name" SET NOT NULL;
ALTER TABLE "{{folders}}" ADD CONSTRAINT "folders_name_uniq" UNIQUE ("name");
`
pgsqlV11DownSQL = `DROP TABLE "{{api_keys}}" CASCADE;`
pgsqlV12SQL = `ALTER TABLE "{{admins}}" ADD COLUMN "created_at" bigint DEFAULT 0 NOT NULL;
ALTER TABLE "{{admins}}" ALTER COLUMN "created_at" DROP DEFAULT;
ALTER TABLE "{{admins}}" ADD COLUMN "updated_at" bigint DEFAULT 0 NOT NULL;
ALTER TABLE "{{admins}}" ALTER COLUMN "updated_at" DROP DEFAULT;
ALTER TABLE "{{admins}}" ADD COLUMN "last_login" bigint DEFAULT 0 NOT NULL;
ALTER TABLE "{{admins}}" ALTER COLUMN "last_login" DROP DEFAULT;
ALTER TABLE "{{users}}" ADD COLUMN "created_at" bigint DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ALTER COLUMN "created_at" DROP DEFAULT;
ALTER TABLE "{{users}}" ADD COLUMN "updated_at" bigint DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ALTER COLUMN "updated_at" DROP DEFAULT;
CREATE INDEX "{{prefix}}users_updated_at_idx" ON "{{users}}" ("updated_at");
pgsqlV8DownSQL = `ALTER TABLE "{{folders}}" DROP COLUMN "name" CASCADE;
ALTER TABLE "{{folders}}" ALTER COLUMN "path" SET NOT NULL;
ALTER TABLE "{{folders}}" ADD CONSTRAINT folders_path_key UNIQUE (path);
`
pgsqlV12DownSQL = `ALTER TABLE "{{users}}" DROP COLUMN "updated_at" CASCADE;
ALTER TABLE "{{users}}" DROP COLUMN "created_at" CASCADE;
ALTER TABLE "{{admins}}" DROP COLUMN "created_at" CASCADE;
ALTER TABLE "{{admins}}" DROP COLUMN "updated_at" CASCADE;
ALTER TABLE "{{admins}}" DROP COLUMN "last_login" CASCADE;
`
pgsqlV13SQL = `ALTER TABLE "{{users}}" ADD COLUMN "email" varchar(255) NULL;`
pgsqlV13DownSQL = `ALTER TABLE "{{users}}" DROP COLUMN "email" CASCADE;`
pgsqlV14SQL = `CREATE TABLE "{{shares}}" ("id" serial NOT NULL PRIMARY KEY,
"share_id" varchar(60) NOT NULL UNIQUE, "name" varchar(255) NOT NULL, "description" varchar(512) NULL,
"scope" integer NOT NULL, "paths" text NOT NULL, "created_at" bigint NOT NULL, "updated_at" bigint NOT NULL,
"last_use_at" bigint NOT NULL, "expires_at" bigint NOT NULL, "password" text NULL,
"max_tokens" integer NOT NULL, "used_tokens" integer NOT NULL, "allow_from" text NULL,
"user_id" integer NOT NULL);
ALTER TABLE "{{shares}}" ADD CONSTRAINT "{{prefix}}shares_user_id_fk_users_id" FOREIGN KEY ("user_id")
REFERENCES "{{users}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
CREATE INDEX "{{prefix}}shares_user_id_idx" ON "{{shares}}" ("user_id");
`
pgsqlV14DownSQL = `DROP TABLE "{{shares}}" CASCADE;`
)
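
`pgsqlV8SQL` shows the standard pattern for introducing a mandatory unique column on a populated table: add it as nullable, backfill a value for every existing row, then tighten the constraints. The same sequence in isolation, against an illustrative table (wrapped in a Go constant, matching how this file stores its SQL):

package dbutil

// backfillNameSQL adds a NOT NULL, UNIQUE "name" column to a populated
// table without violating constraints mid-migration. Table and column
// names are illustrative.
const backfillNameSQL = `
ALTER TABLE "folders" ADD COLUMN "name" varchar(255) NULL;
UPDATE "folders" f1 SET name = (SELECT CONCAT('folder', f2.id) FROM "folders" f2 WHERE f2.id = f1.id);
ALTER TABLE "folders" ALTER COLUMN "name" SET NOT NULL;
ALTER TABLE "folders" ADD CONSTRAINT "folders_name_uniq" UNIQUE ("name");
`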
// PGSQLProvider auth provider for PostgreSQL database
@@ -110,6 +69,7 @@ func init() {
func initializePGSQLProvider() error {
var err error
logSender = fmt.Sprintf("dataprovider_%v", PGSQLDataProviderName)
dbHandle, err := sql.Open("postgres", getPGSQLConnectionString(false))
if err == nil {
providerLog(logger.LevelDebug, "postgres database handle created, connection string: %#v, pool size: %v",
@@ -152,10 +112,6 @@ func (p *PGSQLProvider) validateUserAndPass(username, password, ip, protocol str
return sqlCommonValidateUserAndPass(username, password, ip, protocol, p.dbHandle)
}
func (p *PGSQLProvider) validateUserAndTLSCert(username, protocol string, tlsCert *x509.Certificate) (User, error) {
return sqlCommonValidateUserAndTLSCertificate(username, protocol, tlsCert, p.dbHandle)
}
func (p *PGSQLProvider) validateUserAndPubKey(username string, publicKey []byte) (User, string, error) {
return sqlCommonValidateUserAndPubKey(username, publicKey, p.dbHandle)
}
@@ -168,18 +124,10 @@ func (p *PGSQLProvider) getUsedQuota(username string) (int, int64, error) {
return sqlCommonGetUsedQuota(username, p.dbHandle)
}
func (p *PGSQLProvider) setUpdatedAt(username string) {
sqlCommonSetUpdatedAt(username, p.dbHandle)
}
func (p *PGSQLProvider) updateLastLogin(username string) error {
return sqlCommonUpdateLastLogin(username, p.dbHandle)
}
func (p *PGSQLProvider) updateAdminLastLogin(username string) error {
return sqlCommonUpdateAdminLastLogin(username, p.dbHandle)
}
func (p *PGSQLProvider) userExists(username string) (User, error) {
return sqlCommonGetUserByUsername(username, p.dbHandle)
}
@@ -200,10 +148,6 @@ func (p *PGSQLProvider) dumpUsers() ([]User, error) {
return sqlCommonDumpUsers(p.dbHandle)
}
func (p *PGSQLProvider) getRecentlyUpdatedUsers(after int64) ([]User, error) {
return sqlCommonGetRecentlyUpdatedUsers(after, p.dbHandle)
}
func (p *PGSQLProvider) getUsers(limit int, offset int, order string) ([]User, error) {
return sqlCommonGetUsers(limit, offset, order, p.dbHandle)
}
@@ -270,62 +214,6 @@ func (p *PGSQLProvider) validateAdminAndPass(username, password, ip string) (Adm
return sqlCommonValidateAdminAndPass(username, password, ip, p.dbHandle)
}
func (p *PGSQLProvider) apiKeyExists(keyID string) (APIKey, error) {
return sqlCommonGetAPIKeyByID(keyID, p.dbHandle)
}
func (p *PGSQLProvider) addAPIKey(apiKey *APIKey) error {
return sqlCommonAddAPIKey(apiKey, p.dbHandle)
}
func (p *PGSQLProvider) updateAPIKey(apiKey *APIKey) error {
return sqlCommonUpdateAPIKey(apiKey, p.dbHandle)
}
func (p *PGSQLProvider) deleteAPIKey(apiKey *APIKey) error {
return sqlCommonDeleteAPIKey(apiKey, p.dbHandle)
}
func (p *PGSQLProvider) getAPIKeys(limit int, offset int, order string) ([]APIKey, error) {
return sqlCommonGetAPIKeys(limit, offset, order, p.dbHandle)
}
func (p *PGSQLProvider) dumpAPIKeys() ([]APIKey, error) {
return sqlCommonDumpAPIKeys(p.dbHandle)
}
func (p *PGSQLProvider) updateAPIKeyLastUse(keyID string) error {
return sqlCommonUpdateAPIKeyLastUse(keyID, p.dbHandle)
}
func (p *PGSQLProvider) shareExists(shareID, username string) (Share, error) {
return sqlCommonGetShareByID(shareID, username, p.dbHandle)
}
func (p *PGSQLProvider) addShare(share *Share) error {
return sqlCommonAddShare(share, p.dbHandle)
}
func (p *PGSQLProvider) updateShare(share *Share) error {
return sqlCommonUpdateShare(share, p.dbHandle)
}
func (p *PGSQLProvider) deleteShare(share *Share) error {
return sqlCommonDeleteShare(share, p.dbHandle)
}
func (p *PGSQLProvider) getShares(limit int, offset int, order, username string) ([]Share, error) {
return sqlCommonGetShares(limit, offset, order, username, p.dbHandle)
}
func (p *PGSQLProvider) dumpShares() ([]Share, error) {
return sqlCommonDumpShares(p.dbHandle)
}
func (p *PGSQLProvider) updateShareLastUse(shareID string, numTokens int) error {
return sqlCommonUpdateShareLastUse(shareID, numTokens, p.dbHandle)
}
func (p *PGSQLProvider) close() error {
return p.dbHandle.Close()
}
@@ -340,206 +228,228 @@ func (p *PGSQLProvider) initializeDatabase() error {
if err == nil && dbVersion.Version > 0 {
return ErrNoInitRequired
}
if errors.Is(err, sql.ErrNoRows) {
return errSchemaVersionEmpty
}
initialSQL := strings.ReplaceAll(pgsqlInitial, "{{schema_version}}", sqlTableSchemaVersion)
initialSQL = strings.ReplaceAll(initialSQL, "{{admins}}", sqlTableAdmins)
initialSQL = strings.ReplaceAll(initialSQL, "{{folders}}", sqlTableFolders)
initialSQL = strings.ReplaceAll(initialSQL, "{{users}}", sqlTableUsers)
initialSQL = strings.ReplaceAll(initialSQL, "{{folders_mapping}}", sqlTableFoldersMapping)
initialSQL = strings.ReplaceAll(initialSQL, "{{prefix}}", config.SQLTablesPrefix)
if config.Driver == CockroachDataProviderName {
// Cockroach does not support deferrable constraint validation and we don't need it;
// we keep these definitions for the PostgreSQL driver to avoid changes for users
// upgrading from old SFTPGo versions
initialSQL = strings.ReplaceAll(initialSQL, "DEFERRABLE INITIALLY DEFERRED", "")
}
sqlUsers := strings.Replace(pgsqlUsersTableSQL, "{{users}}", sqlTableUsers, 1)
ctx, cancel := context.WithTimeout(context.Background(), longSQLQueryTimeout)
defer cancel()
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, []string{initialSQL}, 10)
tx, err := p.dbHandle.BeginTx(ctx, nil)
if err != nil {
return err
}
_, err = tx.Exec(sqlUsers)
if err != nil {
return err
}
_, err = tx.Exec(strings.Replace(pgsqlSchemaTableSQL, "{{schema_version}}", sqlTableSchemaVersion, 1))
if err != nil {
return err
}
_, err = tx.Exec(strings.Replace(initialDBVersionSQL, "{{schema_version}}", sqlTableSchemaVersion, 1))
if err != nil {
return err
}
return tx.Commit()
}
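
The initializer above strips `DEFERRABLE INITIALLY DEFERRED` when the configured driver is CockroachDB, since Cockroach rejects deferrable constraints while PostgreSQL tolerates their absence. A sketch of that driver-conditional rewrite; the driver name string is an assumption for the sketch:

package dbutil

import "strings"

// adaptForDriver removes PostgreSQL-only constraint modifiers when
// targeting CockroachDB, leaving the SQL untouched otherwise.
func adaptForDriver(sqlText, driver string) string {
	if driver == "cockroachdb" {
		return strings.ReplaceAll(sqlText, "DEFERRABLE INITIALLY DEFERRED", "")
	}
	return sqlText
}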
//nolint:dupl
func (p *PGSQLProvider) migrateDatabase() error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
if err != nil {
return err
}
switch version := dbVersion.Version; {
case version == sqlDatabaseVersion:
providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", version)
if dbVersion.Version == sqlDatabaseVersion {
providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", dbVersion.Version)
return ErrNoInitRequired
case version < 10:
err = fmt.Errorf("database version %v is too old, please see the upgrading docs", version)
providerLog(logger.LevelError, "%v", err)
logger.ErrorToConsole("%v", err)
return err
case version == 10:
return updatePGSQLDatabaseFromV10(p.dbHandle)
case version == 11:
return updatePGSQLDatabaseFromV11(p.dbHandle)
case version == 12:
return updatePGSQLDatabaseFromV12(p.dbHandle)
case version == 13:
return updatePGSQLDatabaseFromV13(p.dbHandle)
}
switch dbVersion.Version {
case 1:
return updatePGSQLDatabaseFromV1(p.dbHandle)
case 2:
return updatePGSQLDatabaseFromV2(p.dbHandle)
case 3:
return updatePGSQLDatabaseFromV3(p.dbHandle)
case 4:
return updatePGSQLDatabaseFromV4(p.dbHandle)
case 5:
return updatePGSQLDatabaseFromV5(p.dbHandle)
case 6:
return updatePGSQLDatabaseFromV6(p.dbHandle)
case 7:
return updatePGSQLDatabaseFromV7(p.dbHandle)
default:
if version > sqlDatabaseVersion {
providerLog(logger.LevelWarn, "database version %v is newer than the supported one: %v", version,
if dbVersion.Version > sqlDatabaseVersion {
providerLog(logger.LevelWarn, "database version %v is newer than the supported: %v", dbVersion.Version,
sqlDatabaseVersion)
logger.WarnToConsole("database version %v is newer than the supported one: %v", version,
logger.WarnToConsole("database version %v is newer than the supported: %v", dbVersion.Version,
sqlDatabaseVersion)
return nil
}
return fmt.Errorf("database version not handled: %v", version)
return fmt.Errorf("Database version not handled: %v", dbVersion.Version)
}
}
//nolint:dupl
func (p *PGSQLProvider) revertDatabase(targetVersion int) error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
if err != nil {
return err
}
if dbVersion.Version == targetVersion {
return errors.New("current version match target version, nothing to do")
return fmt.Errorf("current version match target version, nothing to do")
}
switch dbVersion.Version {
case 14:
return downgradePGSQLDatabaseFromV14(p.dbHandle)
case 13:
return downgradePGSQLDatabaseFromV13(p.dbHandle)
case 12:
return downgradePGSQLDatabaseFromV12(p.dbHandle)
case 11:
return downgradePGSQLDatabaseFromV11(p.dbHandle)
case 8:
err = downgradePGSQLDatabaseFrom8To7(p.dbHandle)
if err != nil {
return err
}
err = downgradePGSQLDatabaseFrom7To6(p.dbHandle)
if err != nil {
return err
}
err = downgradePGSQLDatabaseFrom6To5(p.dbHandle)
if err != nil {
return err
}
return downgradePGSQLDatabaseFrom5To4(p.dbHandle)
case 7:
err = downgradePGSQLDatabaseFrom7To6(p.dbHandle)
if err != nil {
return err
}
err = downgradePGSQLDatabaseFrom6To5(p.dbHandle)
if err != nil {
return err
}
return downgradePGSQLDatabaseFrom5To4(p.dbHandle)
case 6:
err = downgradePGSQLDatabaseFrom6To5(p.dbHandle)
if err != nil {
return err
}
return downgradePGSQLDatabaseFrom5To4(p.dbHandle)
case 5:
return downgradePGSQLDatabaseFrom5To4(p.dbHandle)
default:
return fmt.Errorf("database version not handled: %v", dbVersion.Version)
return fmt.Errorf("Database version not handled: %v", dbVersion.Version)
}
}
func (p *PGSQLProvider) resetDatabase() error {
sql := strings.ReplaceAll(pgsqlResetSQL, "{{schema_version}}", sqlTableSchemaVersion)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{folders_mapping}}", sqlTableFoldersMapping)
sql = strings.ReplaceAll(sql, "{{api_keys}}", sqlTableAPIKeys)
sql = strings.ReplaceAll(sql, "{{shares}}", sqlTableShares)
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, []string{sql}, 0)
}
func updatePGSQLDatabaseFromV10(dbHandle *sql.DB) error {
if err := updatePGSQLDatabaseFrom10To11(dbHandle); err != nil {
func updatePGSQLDatabaseFromV1(dbHandle *sql.DB) error {
err := updatePGSQLDatabaseFrom1To2(dbHandle)
if err != nil {
return err
}
return updatePGSQLDatabaseFromV11(dbHandle)
return updatePGSQLDatabaseFromV2(dbHandle)
}
func updatePGSQLDatabaseFromV11(dbHandle *sql.DB) error {
if err := updatePGSQLDatabaseFrom11To12(dbHandle); err != nil {
func updatePGSQLDatabaseFromV2(dbHandle *sql.DB) error {
err := updatePGSQLDatabaseFrom2To3(dbHandle)
if err != nil {
return err
}
return updatePGSQLDatabaseFromV12(dbHandle)
return updatePGSQLDatabaseFromV3(dbHandle)
}
func updatePGSQLDatabaseFromV12(dbHandle *sql.DB) error {
if err := updatePGSQLDatabaseFrom12To13(dbHandle); err != nil {
func updatePGSQLDatabaseFromV3(dbHandle *sql.DB) error {
err := updatePGSQLDatabaseFrom3To4(dbHandle)
if err != nil {
return err
}
return updatePGSQLDatabaseFromV13(dbHandle)
return updatePGSQLDatabaseFromV4(dbHandle)
}
func updatePGSQLDatabaseFromV13(dbHandle *sql.DB) error {
return updatePGSQLDatabaseFrom13To14(dbHandle)
}
func downgradePGSQLDatabaseFromV14(dbHandle *sql.DB) error {
if err := downgradePGSQLDatabaseFrom14To13(dbHandle); err != nil {
func updatePGSQLDatabaseFromV4(dbHandle *sql.DB) error {
err := updatePGSQLDatabaseFrom4To5(dbHandle)
if err != nil {
return err
}
return downgradePGSQLDatabaseFromV13(dbHandle)
return updatePGSQLDatabaseFromV5(dbHandle)
}
func downgradePGSQLDatabaseFromV13(dbHandle *sql.DB) error {
if err := downgradePGSQLDatabaseFrom13To12(dbHandle); err != nil {
func updatePGSQLDatabaseFromV5(dbHandle *sql.DB) error {
err := updatePGSQLDatabaseFrom5To6(dbHandle)
if err != nil {
return err
}
return downgradePGSQLDatabaseFromV12(dbHandle)
return updatePGSQLDatabaseFromV6(dbHandle)
}
func downgradePGSQLDatabaseFromV12(dbHandle *sql.DB) error {
if err := downgradePGSQLDatabaseFrom12To11(dbHandle); err != nil {
func updatePGSQLDatabaseFromV6(dbHandle *sql.DB) error {
err := updatePGSQLDatabaseFrom6To7(dbHandle)
if err != nil {
return err
}
return downgradePGSQLDatabaseFromV11(dbHandle)
return updatePGSQLDatabaseFromV7(dbHandle)
}
func downgradePGSQLDatabaseFromV11(dbHandle *sql.DB) error {
return downgradePGSQLDatabaseFrom11To10(dbHandle)
func updatePGSQLDatabaseFromV7(dbHandle *sql.DB) error {
return updatePGSQLDatabaseFrom7To8(dbHandle)
}
func updatePGSQLDatabaseFrom13To14(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 13 -> 14")
providerLog(logger.LevelInfo, "updating database version: 13 -> 14")
sql := strings.ReplaceAll(pgsqlV14SQL, "{{shares}}", sqlTableShares)
sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 14)
func updatePGSQLDatabaseFrom1To2(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 1 -> 2")
providerLog(logger.LevelInfo, "updating database version: 1 -> 2")
sql := strings.Replace(pgsqlV2SQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 2)
}
func downgradePGSQLDatabaseFrom14To13(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 14 -> 13")
providerLog(logger.LevelInfo, "downgrading database version: 14 -> 13")
sql := strings.ReplaceAll(pgsqlV14DownSQL, "{{shares}}", sqlTableShares)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 13)
func updatePGSQLDatabaseFrom2To3(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 2 -> 3")
providerLog(logger.LevelInfo, "updating database version: 2 -> 3")
sql := strings.Replace(pgsqlV3SQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 3)
}
func updatePGSQLDatabaseFrom12To13(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 12 -> 13")
providerLog(logger.LevelInfo, "updating database version: 12 -> 13")
sql := strings.ReplaceAll(pgsqlV13SQL, "{{users}}", sqlTableUsers)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 13)
func updatePGSQLDatabaseFrom3To4(dbHandle *sql.DB) error {
return sqlCommonUpdateDatabaseFrom3To4(pgsqlV4SQL, dbHandle)
}
func downgradePGSQLDatabaseFrom13To12(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 13 -> 12")
providerLog(logger.LevelInfo, "downgrading database version: 13 -> 12")
sql := strings.ReplaceAll(pgsqlV13DownSQL, "{{users}}", sqlTableUsers)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 12)
func updatePGSQLDatabaseFrom4To5(dbHandle *sql.DB) error {
return sqlCommonUpdateDatabaseFrom4To5(dbHandle)
}
func updatePGSQLDatabaseFrom11To12(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 11 -> 12")
providerLog(logger.LevelInfo, "updating database version: 11 -> 12")
sql := strings.ReplaceAll(pgsqlV12SQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 12)
func updatePGSQLDatabaseFrom5To6(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 5 -> 6")
providerLog(logger.LevelInfo, "updating database version: 5 -> 6")
sql := strings.Replace(pgsqlV6SQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 6)
}
func downgradePGSQLDatabaseFrom12To11(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 12 -> 11")
providerLog(logger.LevelInfo, "downgrading database version: 12 -> 11")
sql := strings.ReplaceAll(pgsqlV12DownSQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 11)
func updatePGSQLDatabaseFrom6To7(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 6 -> 7")
providerLog(logger.LevelInfo, "updating database version: 6 -> 7")
sql := strings.Replace(pgsqlV7SQL, "{{admins}}", sqlTableAdmins, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 7)
}
func updatePGSQLDatabaseFrom10To11(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 10 -> 11")
providerLog(logger.LevelInfo, "updating database version: 10 -> 11")
sql := strings.ReplaceAll(pgsqlV11SQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql = strings.ReplaceAll(sql, "{{api_keys}}", sqlTableAPIKeys)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 11)
func updatePGSQLDatabaseFrom7To8(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 7 -> 8")
providerLog(logger.LevelInfo, "updating database version: 7 -> 8")
sql := strings.ReplaceAll(pgsqlV8SQL, "{{folders}}", sqlTableFolders)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 8)
}
func downgradePGSQLDatabaseFrom11To10(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 11 -> 10")
providerLog(logger.LevelInfo, "downgrading database version: 11 -> 10")
sql := strings.ReplaceAll(pgsqlV11DownSQL, "{{api_keys}}", sqlTableAPIKeys)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 10)
func downgradePGSQLDatabaseFrom8To7(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 8 -> 7")
providerLog(logger.LevelInfo, "downgrading database version: 8 -> 7")
sql := strings.ReplaceAll(pgsqlV8DownSQL, "{{folders}}", sqlTableFolders)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 7)
}
func downgradePGSQLDatabaseFrom7To6(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 7 -> 6")
providerLog(logger.LevelInfo, "downgrading database version: 7 -> 6")
sql := strings.Replace(pgsqlV7DownSQL, "{{admins}}", sqlTableAdmins, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 6)
}
func downgradePGSQLDatabaseFrom6To5(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 6 -> 5")
providerLog(logger.LevelInfo, "downgrading database version: 6 -> 5")
sql := strings.Replace(pgsqlV6DownSQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 5)
}
func downgradePGSQLDatabaseFrom5To4(dbHandle *sql.DB) error {
return sqlCommonDowngradeDatabaseFrom5To4(dbHandle)
}

View File

@@ -1,4 +1,3 @@
//go:build nopgsql
// +build nopgsql
package dataprovider
@@ -6,7 +5,7 @@ package dataprovider
import (
"errors"
"github.com/drakkan/sftpgo/v2/version"
"github.com/drakkan/sftpgo/version"
)
func init() {

View File

@@ -1,183 +0,0 @@
package dataprovider
import (
"sync"
"time"
"github.com/drakkan/sftpgo/v2/logger"
)
var delayedQuotaUpdater quotaUpdater
func init() {
delayedQuotaUpdater = newQuotaUpdater()
}
type quotaObject struct {
size int64
files int
}
type quotaUpdater struct {
paramsMutex sync.RWMutex
waitTime time.Duration
sync.RWMutex
pendingUserQuotaUpdates map[string]quotaObject
pendingFolderQuotaUpdates map[string]quotaObject
}
func newQuotaUpdater() quotaUpdater {
return quotaUpdater{
pendingUserQuotaUpdates: make(map[string]quotaObject),
pendingFolderQuotaUpdates: make(map[string]quotaObject),
}
}
func (q *quotaUpdater) start() {
q.setWaitTime(config.DelayedQuotaUpdate)
go q.loop()
}
func (q *quotaUpdater) loop() {
waitTime := q.getWaitTime()
providerLog(logger.LevelDebug, "delayed quota update loop started, wait time: %v", waitTime)
for waitTime > 0 {
// We do this with a time.Sleep instead of a time.Ticker because we don't know
// how long each quota processing cycle will take, and we want to make
// sure we wait the configured seconds between each iteration
time.Sleep(waitTime)
providerLog(logger.LevelDebug, "delayed quota update check start")
q.storeUsersQuota()
q.storeFoldersQuota()
providerLog(logger.LevelDebug, "delayed quota update check end")
waitTime = q.getWaitTime()
}
providerLog(logger.LevelDebug, "delayed quota update loop ended, wait time: %v", waitTime)
}
func (q *quotaUpdater) setWaitTime(secs int) {
q.paramsMutex.Lock()
defer q.paramsMutex.Unlock()
q.waitTime = time.Duration(secs) * time.Second
}
func (q *quotaUpdater) getWaitTime() time.Duration {
q.paramsMutex.RLock()
defer q.paramsMutex.RUnlock()
return q.waitTime
}
func (q *quotaUpdater) resetUserQuota(username string) {
q.Lock()
defer q.Unlock()
delete(q.pendingUserQuotaUpdates, username)
}
func (q *quotaUpdater) updateUserQuota(username string, files int, size int64) {
q.Lock()
defer q.Unlock()
obj := q.pendingUserQuotaUpdates[username]
obj.size += size
obj.files += files
if obj.files == 0 && obj.size == 0 {
delete(q.pendingUserQuotaUpdates, username)
return
}
q.pendingUserQuotaUpdates[username] = obj
}
func (q *quotaUpdater) getUserPendingQuota(username string) (int, int64) {
q.RLock()
defer q.RUnlock()
obj := q.pendingUserQuotaUpdates[username]
return obj.files, obj.size
}
func (q *quotaUpdater) resetFolderQuota(name string) {
q.Lock()
defer q.Unlock()
delete(q.pendingFolderQuotaUpdates, name)
}
func (q *quotaUpdater) updateFolderQuota(name string, files int, size int64) {
q.Lock()
defer q.Unlock()
obj := q.pendingFolderQuotaUpdates[name]
obj.size += size
obj.files += files
if obj.files == 0 && obj.size == 0 {
delete(q.pendingFolderQuotaUpdates, name)
return
}
q.pendingFolderQuotaUpdates[name] = obj
}
func (q *quotaUpdater) getFolderPendingQuota(name string) (int, int64) {
q.RLock()
defer q.RUnlock()
obj := q.pendingFolderQuotaUpdates[name]
return obj.files, obj.size
}
func (q *quotaUpdater) getUsernames() []string {
q.RLock()
defer q.RUnlock()
result := make([]string, 0, len(q.pendingUserQuotaUpdates))
for username := range q.pendingUserQuotaUpdates {
result = append(result, username)
}
return result
}
func (q *quotaUpdater) getFoldernames() []string {
q.RLock()
defer q.RUnlock()
result := make([]string, 0, len(q.pendingFolderQuotaUpdates))
for name := range q.pendingFolderQuotaUpdates {
result = append(result, name)
}
return result
}
func (q *quotaUpdater) storeUsersQuota() {
for _, username := range q.getUsernames() {
files, size := q.getUserPendingQuota(username)
if size != 0 || files != 0 {
err := provider.updateQuota(username, files, size, false)
if err != nil {
providerLog(logger.LevelWarn, "unable to update quota delayed for user %#v: %v", username, err)
continue
}
q.updateUserQuota(username, -files, -size)
}
}
}
func (q *quotaUpdater) storeFoldersQuota() {
for _, name := range q.getFoldernames() {
files, size := q.getFolderPendingQuota(name)
if size != 0 || files != 0 {
err := provider.updateFolderQuota(name, files, size, false)
if err != nil {
providerLog(logger.LevelWarn, "unable to update quota delayed for folder %#v: %v", name, err)
continue
}
q.updateFolderQuota(name, -files, -size)
}
}
}
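
The deleted quota updater implements a common batching pattern: deltas accumulate in memory under a lock, and a background loop flushes them on a fixed cadence, using `time.Sleep` rather than `time.Ticker` so the full configured wait always elapses between flushes. A simplified, self-contained sketch of that pattern (the `flush` callback and all names are illustrative; the real code also re-queues deltas that fail to persist):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// delta accumulates pending file/size counters for one user or folder.
type delta struct {
	size  int64
	files int
}

type batcher struct {
	mu      sync.Mutex
	pending map[string]delta
}

// add records a change; opposite changes cancel out before any write.
func (b *batcher) add(key string, files int, size int64) {
	b.mu.Lock()
	defer b.mu.Unlock()
	d := b.pending[key]
	d.size += size
	d.files += files
	if d.size == 0 && d.files == 0 {
		delete(b.pending, key)
		return
	}
	b.pending[key] = d
}

// loop flushes on a fixed cadence. time.Sleep, not time.Ticker, so the
// full wait always elapses between iterations even when a flush is slow.
func (b *batcher) loop(wait time.Duration, flush func(string, delta)) {
	for {
		time.Sleep(wait)
		b.mu.Lock()
		snapshot := b.pending
		b.pending = make(map[string]delta)
		b.mu.Unlock()
		for k, d := range snapshot {
			flush(k, d)
		}
	}
}

func main() {
	b := &batcher{pending: make(map[string]delta)}
	go b.loop(time.Second, func(k string, d delta) {
		fmt.Printf("flush %s: %d files, %d bytes\n", k, d.files, d.size)
	})
	b.add("alice", 1, 1024)
	b.add("alice", 2, 4096)
	time.Sleep(2 * time.Second)
}
```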

View File

@@ -1,293 +0,0 @@
package dataprovider
import (
"encoding/json"
"fmt"
"net"
"strings"
"time"
"github.com/alexedwards/argon2id"
"golang.org/x/crypto/bcrypt"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
// ShareScope defines the supported share scopes
type ShareScope int
// Supported share scopes
const (
ShareScopeRead ShareScope = iota + 1
ShareScopeWrite
)
const (
redactedPassword = "[**redacted**]"
)
// Share defines files and or directories shared with external users
type Share struct {
// Database unique identifier
ID int64 `json:"-"`
// Unique ID used to access this object
ShareID string `json:"id"`
Name string `json:"name"`
Description string `json:"description,omitempty"`
Scope ShareScope `json:"scope"`
// Paths to files or directories, for ShareScopeWrite it must be exactly one directory
Paths []string `json:"paths"`
// Username who shared this object
Username string `json:"username"`
CreatedAt int64 `json:"created_at"`
UpdatedAt int64 `json:"updated_at"`
// 0 means never used
LastUseAt int64 `json:"last_use_at,omitempty"`
// ExpiresAt expiration date/time as unix timestamp in milliseconds, 0 means no expiration
ExpiresAt int64 `json:"expires_at,omitempty"`
// Optional password to protect the share
Password string `json:"password"`
// Limit the available access tokens, 0 means no limit
MaxTokens int `json:"max_tokens,omitempty"`
// Used tokens
UsedTokens int `json:"used_tokens,omitempty"`
// Limit the share availability to these IPs/CIDR networks
AllowFrom []string `json:"allow_from,omitempty"`
// set for restores, we don't have to validate the expiration date
// otherwise we fail to restore existing shares and we have to insert
// all the previous values with no modifications
IsRestore bool `json:"-"`
}
// GetScopeAsString returns the share's scope as string.
// Used in web pages
func (s *Share) GetScopeAsString() string {
switch s.Scope {
case ShareScopeRead:
return "Read"
default:
return "Write"
}
}
// IsExpired returns true if the share is expired
func (s *Share) IsExpired() bool {
if s.ExpiresAt > 0 {
return s.ExpiresAt < util.GetTimeAsMsSinceEpoch(time.Now())
}
return false
}
// GetInfoString returns share's info as string.
func (s *Share) GetInfoString() string {
var result strings.Builder
if s.ExpiresAt > 0 {
t := util.GetTimeFromMsecSinceEpoch(s.ExpiresAt)
result.WriteString(fmt.Sprintf("Expiration: %v. ", t.Format("2006-01-02 15:04"))) // YYYY-MM-DD HH:MM
}
if s.LastUseAt > 0 {
t := util.GetTimeFromMsecSinceEpoch(s.LastUseAt)
result.WriteString(fmt.Sprintf("Last use: %v. ", t.Format("2006-01-02 15:04")))
}
if s.MaxTokens > 0 {
result.WriteString(fmt.Sprintf("Usage: %v/%v. ", s.UsedTokens, s.MaxTokens))
} else {
result.WriteString(fmt.Sprintf("Used tokens: %v. ", s.UsedTokens))
}
if len(s.AllowFrom) > 0 {
result.WriteString(fmt.Sprintf("Allowed IP/Mask: %v. ", len(s.AllowFrom)))
}
if s.Password != "" {
result.WriteString("Password protected.")
}
return result.String()
}
// GetAllowedFromAsString returns the allowed IP as comma separated string
func (s *Share) GetAllowedFromAsString() string {
return strings.Join(s.AllowFrom, ",")
}
func (s *Share) getACopy() Share {
allowFrom := make([]string, len(s.AllowFrom))
copy(allowFrom, s.AllowFrom)
return Share{
ID: s.ID,
ShareID: s.ShareID,
Name: s.Name,
Description: s.Description,
Scope: s.Scope,
Paths: s.Paths,
Username: s.Username,
CreatedAt: s.CreatedAt,
UpdatedAt: s.UpdatedAt,
LastUseAt: s.LastUseAt,
ExpiresAt: s.ExpiresAt,
Password: s.Password,
MaxTokens: s.MaxTokens,
UsedTokens: s.UsedTokens,
AllowFrom: allowFrom,
}
}
// RenderAsJSON implements the renderer interface used within plugins
func (s *Share) RenderAsJSON(reload bool) ([]byte, error) {
if reload {
share, err := provider.shareExists(s.ShareID, s.Username)
if err != nil {
providerLog(logger.LevelWarn, "unable to reload share before rendering as json: %v", err)
return nil, err
}
share.HideConfidentialData()
return json.Marshal(share)
}
s.HideConfidentialData()
return json.Marshal(s)
}
// HideConfidentialData hides share confidential data
func (s *Share) HideConfidentialData() {
if s.Password != "" {
s.Password = redactedPassword
}
}
// HasRedactedPassword returns true if this share has a redacted password
func (s *Share) HasRedactedPassword() bool {
return s.Password == redactedPassword
}
func (s *Share) hashPassword() error {
if s.Password != "" && !util.IsStringPrefixInSlice(s.Password, internalHashPwdPrefixes) {
if config.PasswordHashing.Algo == HashingAlgoBcrypt {
hashed, err := bcrypt.GenerateFromPassword([]byte(s.Password), config.PasswordHashing.BcryptOptions.Cost)
if err != nil {
return err
}
s.Password = string(hashed)
} else {
hashed, err := argon2id.CreateHash(s.Password, argon2Params)
if err != nil {
return err
}
s.Password = hashed
}
}
return nil
}
func (s *Share) validatePaths() error {
var paths []string
for _, p := range s.Paths {
p = strings.TrimSpace(p)
if p != "" {
paths = append(paths, p)
}
}
s.Paths = paths
if len(s.Paths) == 0 {
return util.NewValidationError("at least a shared path is required")
}
for idx := range s.Paths {
s.Paths[idx] = util.CleanPath(s.Paths[idx])
}
s.Paths = util.RemoveDuplicates(s.Paths)
if s.Scope == ShareScopeWrite && len(s.Paths) != 1 {
return util.NewValidationError("the write share scope requires exactly one path")
}
return nil
}
func (s *Share) validate() error {
if s.ShareID == "" {
return util.NewValidationError("share_id is mandatory")
}
if s.Name == "" {
return util.NewValidationError("name is mandatory")
}
if s.Scope != ShareScopeRead && s.Scope != ShareScopeWrite {
return util.NewValidationError(fmt.Sprintf("invalid scope: %v", s.Scope))
}
if err := s.validatePaths(); err != nil {
return err
}
if s.ExpiresAt > 0 {
if !s.IsRestore && s.ExpiresAt < util.GetTimeAsMsSinceEpoch(time.Now()) {
return util.NewValidationError("expiration must be in the future")
}
} else {
s.ExpiresAt = 0
}
if s.MaxTokens < 0 {
return util.NewValidationError("invalid max tokens")
}
if s.Username == "" {
return util.NewValidationError("username is mandatory")
}
if s.HasRedactedPassword() {
return util.NewValidationError("cannot save a share with a redacted password")
}
if err := s.hashPassword(); err != nil {
return err
}
s.AllowFrom = util.RemoveDuplicates(s.AllowFrom)
for _, IPMask := range s.AllowFrom {
_, _, err := net.ParseCIDR(IPMask)
if err != nil {
return util.NewValidationError(fmt.Sprintf("could not parse allow from entry %#v : %v", IPMask, err))
}
}
return nil
}
// CheckPassword verifies the share password if set
func (s *Share) CheckPassword(password string) (bool, error) {
if s.Password == "" {
return true, nil
}
if password == "" {
return false, ErrInvalidCredentials
}
if strings.HasPrefix(s.Password, bcryptPwdPrefix) {
if err := bcrypt.CompareHashAndPassword([]byte(s.Password), []byte(password)); err != nil {
return false, ErrInvalidCredentials
}
return true, nil
}
match, err := argon2id.ComparePasswordAndHash(password, s.Password)
if !match || err != nil {
return false, ErrInvalidCredentials
}
return match, err
}
// IsUsable checks if the share is usable from the specified IP
func (s *Share) IsUsable(ip string) (bool, error) {
if s.MaxTokens > 0 && s.UsedTokens >= s.MaxTokens {
return false, util.NewRecordNotFoundError("max share usage exceeded")
}
if s.ExpiresAt > 0 {
if s.ExpiresAt < util.GetTimeAsMsSinceEpoch(time.Now()) {
return false, util.NewRecordNotFoundError("share expired")
}
}
if len(s.AllowFrom) == 0 {
return true, nil
}
parsedIP := net.ParseIP(ip)
if parsedIP == nil {
return false, ErrLoginNotAllowedFromIP
}
for _, ipMask := range s.AllowFrom {
_, network, err := net.ParseCIDR(ipMask)
if err != nil {
continue
}
if network.Contains(parsedIP) {
return true, nil
}
}
return false, ErrLoginNotAllowedFromIP
}
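
`Share.CheckPassword` above dispatches on the stored hash's prefix: bcrypt hashes are recognized by their prefix, anything else is treated as argon2id. A standalone sketch of that dual-scheme verification using the same two libraries (the `verify` helper and sample values are illustrative):

```go
package main

import (
	"fmt"
	"strings"

	"github.com/alexedwards/argon2id"
	"golang.org/x/crypto/bcrypt"
)

// verify accepts hashes produced by either bcrypt or argon2id, mirroring
// the prefix dispatch in Share.CheckPassword ("$2..." marks bcrypt).
func verify(hash, password string) bool {
	if strings.HasPrefix(hash, "$2") {
		return bcrypt.CompareHashAndPassword([]byte(hash), []byte(password)) == nil
	}
	match, err := argon2id.ComparePasswordAndHash(password, hash)
	return err == nil && match
}

func main() {
	hash, err := argon2id.CreateHash("secret", argon2id.DefaultParams)
	if err != nil {
		panic(err)
	}
	fmt.Println(verify(hash, "secret"), verify(hash, "wrong")) // true false
}
```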

File diff suppressed because it is too large

View File

@@ -1,13 +1,10 @@
//go:build !nosqlite
// +build !nosqlite
package dataprovider
import (
"context"
"crypto/x509"
"database/sql"
"errors"
"fmt"
"path/filepath"
"strings"
@@ -15,77 +12,92 @@ import (
// we import go-sqlite3 here to be able to disable SQLite support using a build tag
_ "github.com/mattn/go-sqlite3"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/version"
"github.com/drakkan/sftpgo/v2/vfs"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/version"
"github.com/drakkan/sftpgo/vfs"
)
const (
sqliteResetSQL = `DROP TABLE IF EXISTS "{{api_keys}}";
DROP TABLE IF EXISTS "{{folders_mapping}}";
DROP TABLE IF EXISTS "{{admins}}";
DROP TABLE IF EXISTS "{{folders}}";
DROP TABLE IF EXISTS "{{shares}}";
DROP TABLE IF EXISTS "{{users}}";
DROP TABLE IF EXISTS "{{schema_version}}";
`
sqliteInitialSQL = `CREATE TABLE "{{schema_version}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "version" integer NOT NULL);
CREATE TABLE "{{admins}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE,
"description" varchar(512) NULL, "password" varchar(255) NOT NULL, "email" varchar(255) NULL, "status" integer NOT NULL,
"permissions" text NOT NULL, "filters" text NULL, "additional_info" text NULL);
CREATE TABLE "{{folders}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "name" varchar(255) NOT NULL UNIQUE,
"description" varchar(512) NULL, "path" varchar(512) NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL,
"last_quota_update" bigint NOT NULL, "filesystem" text NULL);
CREATE TABLE "{{users}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE,
"status" integer NOT NULL, "expiration_date" bigint NOT NULL, "description" varchar(512) NULL, "password" text NULL,
"public_keys" text NULL, "home_dir" varchar(512) NOT NULL, "uid" integer NOT NULL, "gid" integer NOT NULL,
"max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "permissions" text NOT NULL,
"used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL,
"upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL, "last_login" bigint NOT NULL, "filters" text NULL,
"filesystem" text NULL, "additional_info" text NULL);
sqliteUsersTableSQL = `CREATE TABLE "{{users}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255)
NOT NULL UNIQUE, "password" varchar(255) NULL, "public_keys" text NULL, "home_dir" varchar(255) NOT NULL, "uid" integer NOT NULL,
"gid" integer NOT NULL, "max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL,
"permissions" text NOT NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL,
"last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL,
"expiration_date" bigint NOT NULL, "last_login" bigint NOT NULL, "status" integer NOT NULL, "filters" text NULL,
"filesystem" text NULL);`
sqliteSchemaTableSQL = `CREATE TABLE "{{schema_version}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "version" integer NOT NULL);`
sqliteV2SQL = `ALTER TABLE "{{users}}" ADD COLUMN "virtual_folders" text NULL;`
sqliteV3SQL = `CREATE TABLE "new__users" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE,
"password" text NULL, "public_keys" text NULL, "home_dir" varchar(255) NOT NULL, "uid" integer NOT NULL,
"gid" integer NOT NULL, "max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL,
"permissions" text NOT NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL,
"upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL, "expiration_date" bigint NOT NULL, "last_login" bigint NOT NULL,
"status" integer NOT NULL, "filters" text NULL, "filesystem" text NULL, "virtual_folders" text NULL);
INSERT INTO "new__users" ("id", "username", "public_keys", "home_dir", "uid", "gid", "max_sessions", "quota_size", "quota_files",
"permissions", "used_quota_size", "used_quota_files", "last_quota_update", "upload_bandwidth", "download_bandwidth", "expiration_date",
"last_login", "status", "filters", "filesystem", "virtual_folders", "password") SELECT "id", "username", "public_keys", "home_dir",
"uid", "gid", "max_sessions", "quota_size", "quota_files", "permissions", "used_quota_size", "used_quota_files", "last_quota_update",
"upload_bandwidth", "download_bandwidth", "expiration_date", "last_login", "status", "filters", "filesystem", "virtual_folders",
"password" FROM "{{users}}";
DROP TABLE "{{users}}";
ALTER TABLE "new__users" RENAME TO "{{users}}";`
sqliteV4SQL = `CREATE TABLE "{{folders}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "path" varchar(512) NOT NULL UNIQUE,
"used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL);
CREATE TABLE "{{folders_mapping}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "virtual_path" varchar(512) NOT NULL,
"quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "folder_id" integer NOT NULL REFERENCES "{{folders}}" ("id")
ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED, "user_id" integer NOT NULL REFERENCES "{{users}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
CONSTRAINT "{{prefix}}unique_mapping" UNIQUE ("user_id", "folder_id"));
CREATE INDEX "{{prefix}}folders_mapping_folder_id_idx" ON "{{folders_mapping}}" ("folder_id");
CREATE INDEX "{{prefix}}folders_mapping_user_id_idx" ON "{{folders_mapping}}" ("user_id");
INSERT INTO {{schema_version}} (version) VALUES (10);
CONSTRAINT "unique_mapping" UNIQUE ("user_id", "folder_id"));
CREATE TABLE "new__users" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE, "password" text NULL,
"public_keys" text NULL, "home_dir" varchar(512) NOT NULL, "uid" integer NOT NULL, "gid" integer NOT NULL, "max_sessions" integer NOT NULL,
"quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "permissions" text NOT NULL, "used_quota_size" bigint NOT NULL,
"used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL,
"expiration_date" bigint NOT NULL, "last_login" bigint NOT NULL, "status" integer NOT NULL, "filters" text NULL, "filesystem" text NULL);
INSERT INTO "new__users" ("id", "username", "password", "public_keys", "home_dir", "uid", "gid", "max_sessions", "quota_size", "quota_files",
"permissions", "used_quota_size", "used_quota_files", "last_quota_update", "upload_bandwidth", "download_bandwidth", "expiration_date",
"last_login", "status", "filters", "filesystem") SELECT "id", "username", "password", "public_keys", "home_dir", "uid", "gid", "max_sessions",
"quota_size", "quota_files", "permissions", "used_quota_size", "used_quota_files", "last_quota_update", "upload_bandwidth", "download_bandwidth",
"expiration_date", "last_login", "status", "filters", "filesystem" FROM "{{users}}";
DROP TABLE "{{users}}";
ALTER TABLE "new__users" RENAME TO "{{users}}";
CREATE INDEX "folders_mapping_folder_id_idx" ON "{{folders_mapping}}" ("folder_id");
CREATE INDEX "folders_mapping_user_id_idx" ON "{{folders_mapping}}" ("user_id");
`
sqliteV11SQL = `CREATE TABLE "{{api_keys}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "name" varchar(255) NOT NULL,
"key_id" varchar(50) NOT NULL UNIQUE, "api_key" varchar(255) NOT NULL UNIQUE, "scope" integer NOT NULL, "created_at" bigint NOT NULL,
"updated_at" bigint NOT NULL, "last_use_at" bigint NOT NULL, "expires_at" bigint NOT NULL, "description" text NULL,
"admin_id" integer NULL REFERENCES "{{admins}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
"user_id" integer NULL REFERENCES "{{users}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED);
CREATE INDEX "{{prefix}}api_keys_admin_id_idx" ON "{{api_keys}}" ("admin_id");
CREATE INDEX "{{prefix}}api_keys_user_id_idx" ON "{{api_keys}}" ("user_id");
sqliteV6SQL = `ALTER TABLE "{{users}}" ADD COLUMN "additional_info" text NULL;`
sqliteV6DownSQL = `CREATE TABLE "new__users" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE,
"password" text NULL, "public_keys" text NULL, "home_dir" varchar(512) NOT NULL, "uid" integer NOT NULL, "gid" integer NOT NULL,
"max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "permissions" text NOT NULL,
"used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL,
"download_bandwidth" integer NOT NULL, "expiration_date" bigint NOT NULL, "last_login" bigint NOT NULL, "status" integer NOT NULL,
"filters" text NULL, "filesystem" text NULL);
INSERT INTO "new__users" ("id", "username", "password", "public_keys", "home_dir", "uid", "gid", "max_sessions", "quota_size", "quota_files",
"permissions", "used_quota_size", "used_quota_files", "last_quota_update", "upload_bandwidth", "download_bandwidth", "expiration_date",
"last_login", "status", "filters", "filesystem") SELECT "id", "username", "password", "public_keys", "home_dir", "uid", "gid", "max_sessions",
"quota_size", "quota_files", "permissions", "used_quota_size", "used_quota_files", "last_quota_update", "upload_bandwidth", "download_bandwidth",
"expiration_date", "last_login", "status", "filters", "filesystem" FROM "{{users}}";
DROP TABLE "{{users}}";
ALTER TABLE "new__users" RENAME TO "{{users}}";
`
sqliteV11DownSQL = `DROP TABLE "{{api_keys}}";`
sqliteV12SQL = `ALTER TABLE "{{admins}}" ADD COLUMN "created_at" bigint DEFAULT 0 NOT NULL;
ALTER TABLE "{{admins}}" ADD COLUMN "updated_at" bigint DEFAULT 0 NOT NULL;
ALTER TABLE "{{admins}}" ADD COLUMN "last_login" bigint DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ADD COLUMN "created_at" bigint DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ADD COLUMN "updated_at" bigint DEFAULT 0 NOT NULL;
CREATE INDEX "{{prefix}}users_updated_at_idx" ON "{{users}}" ("updated_at");
sqliteV7SQL = `CREATE TABLE "{{admins}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE,
"password" varchar(255) NOT NULL, "email" varchar(255) NULL, "status" integer NOT NULL, "permissions" text NOT NULL, "filters" text NULL,
"additional_info" text NULL);`
sqliteV7DownSQL = `DROP TABLE "{{admins}}";`
sqliteV8SQL = `CREATE TABLE "new__folders" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"name" varchar(255) NOT NULL UNIQUE, "path" varchar(512) NULL, "used_quota_size" bigint NOT NULL,
"used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL);
INSERT INTO "new__folders" ("id", "path", "used_quota_size", "used_quota_files", "last_quota_update", "name")
SELECT "id", "path", "used_quota_size", "used_quota_files", "last_quota_update", ('folder' || "id") FROM "{{folders}}";
DROP TABLE "{{folders}}";
ALTER TABLE "new__folders" RENAME TO "{{folders}}";
`
sqliteV12DownSQL = `DROP INDEX "{{prefix}}users_updated_at_idx";
ALTER TABLE "{{users}}" DROP COLUMN "updated_at";
ALTER TABLE "{{users}}" DROP COLUMN "created_at";
ALTER TABLE "{{admins}}" DROP COLUMN "created_at";
ALTER TABLE "{{admins}}" DROP COLUMN "updated_at";
ALTER TABLE "{{admins}}" DROP COLUMN "last_login";
sqliteV8DownSQL = `CREATE TABLE "new__folders" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"path" varchar(512) NOT NULL UNIQUE, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL,
"last_quota_update" bigint NOT NULL);
INSERT INTO "new__folders" ("id", "path", "used_quota_size", "used_quota_files", "last_quota_update")
SELECT "id", "path", "used_quota_size", "used_quota_files", "last_quota_update" FROM "{{folders}}";
DROP TABLE "{{folders}}";
ALTER TABLE "new__folders" RENAME TO "{{folders}}";
`
sqliteV13SQL = `ALTER TABLE "{{users}}" ADD COLUMN "email" varchar(255) NULL;`
sqliteV13DownSQL = `ALTER TABLE "{{users}}" DROP COLUMN "email";`
sqliteV14SQL = `CREATE TABLE "{{shares}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"share_id" varchar(60) NOT NULL UNIQUE, "name" varchar(255) NOT NULL, "description" varchar(512) NULL,
"scope" integer NOT NULL, "paths" text NOT NULL, "created_at" bigint NOT NULL, "updated_at" bigint NOT NULL,
"last_use_at" bigint NOT NULL, "expires_at" bigint NOT NULL, "password" text NULL, "max_tokens" integer NOT NULL,
"used_tokens" integer NOT NULL, "allow_from" text NULL,
"user_id" integer NOT NULL REFERENCES "{{users}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED);
CREATE INDEX "{{prefix}}shares_user_id_idx" ON "{{shares}}" ("user_id");
`
sqliteV14DownSQL = `DROP TABLE "{{shares}}";`
)
// SQLiteProvider auth provider for SQLite database
@@ -100,11 +112,11 @@ func init() {
func initializeSQLiteProvider(basePath string) error {
var err error
var connectionString string
logSender = fmt.Sprintf("dataprovider_%v", SQLiteDataProviderName)
if config.ConnectionString == "" {
dbPath := config.Name
if !util.IsFileInputValid(dbPath) {
return fmt.Errorf("invalid database path: %#v", dbPath)
if !utils.IsFileInputValid(dbPath) {
return fmt.Errorf("Invalid database path: %#v", dbPath)
}
if !filepath.IsAbs(dbPath) {
dbPath = filepath.Join(basePath, dbPath)
@@ -133,10 +145,6 @@ func (p *SQLiteProvider) validateUserAndPass(username, password, ip, protocol st
return sqlCommonValidateUserAndPass(username, password, ip, protocol, p.dbHandle)
}
func (p *SQLiteProvider) validateUserAndTLSCert(username, protocol string, tlsCert *x509.Certificate) (User, error) {
return sqlCommonValidateUserAndTLSCertificate(username, protocol, tlsCert, p.dbHandle)
}
func (p *SQLiteProvider) validateUserAndPubKey(username string, publicKey []byte) (User, string, error) {
return sqlCommonValidateUserAndPubKey(username, publicKey, p.dbHandle)
}
@@ -149,18 +157,10 @@ func (p *SQLiteProvider) getUsedQuota(username string) (int, int64, error) {
return sqlCommonGetUsedQuota(username, p.dbHandle)
}
func (p *SQLiteProvider) setUpdatedAt(username string) {
sqlCommonSetUpdatedAt(username, p.dbHandle)
}
func (p *SQLiteProvider) updateLastLogin(username string) error {
return sqlCommonUpdateLastLogin(username, p.dbHandle)
}
func (p *SQLiteProvider) updateAdminLastLogin(username string) error {
return sqlCommonUpdateAdminLastLogin(username, p.dbHandle)
}
func (p *SQLiteProvider) userExists(username string) (User, error) {
return sqlCommonGetUserByUsername(username, p.dbHandle)
}
@@ -181,11 +181,6 @@ func (p *SQLiteProvider) dumpUsers() ([]User, error) {
return sqlCommonDumpUsers(p.dbHandle)
}
// SQLite provider cannot be shared, so we always return no recently updated users
func (p *SQLiteProvider) getRecentlyUpdatedUsers(after int64) ([]User, error) {
return nil, nil
}
func (p *SQLiteProvider) getUsers(limit int, offset int, order string) ([]User, error) {
return sqlCommonGetUsers(limit, offset, order, p.dbHandle)
}
@@ -252,62 +247,6 @@ func (p *SQLiteProvider) validateAdminAndPass(username, password, ip string) (Ad
return sqlCommonValidateAdminAndPass(username, password, ip, p.dbHandle)
}
func (p *SQLiteProvider) apiKeyExists(keyID string) (APIKey, error) {
return sqlCommonGetAPIKeyByID(keyID, p.dbHandle)
}
func (p *SQLiteProvider) addAPIKey(apiKey *APIKey) error {
return sqlCommonAddAPIKey(apiKey, p.dbHandle)
}
func (p *SQLiteProvider) updateAPIKey(apiKey *APIKey) error {
return sqlCommonUpdateAPIKey(apiKey, p.dbHandle)
}
func (p *SQLiteProvider) deleteAPIKey(apiKey *APIKey) error {
return sqlCommonDeleteAPIKey(apiKey, p.dbHandle)
}
func (p *SQLiteProvider) getAPIKeys(limit int, offset int, order string) ([]APIKey, error) {
return sqlCommonGetAPIKeys(limit, offset, order, p.dbHandle)
}
func (p *SQLiteProvider) dumpAPIKeys() ([]APIKey, error) {
return sqlCommonDumpAPIKeys(p.dbHandle)
}
func (p *SQLiteProvider) updateAPIKeyLastUse(keyID string) error {
return sqlCommonUpdateAPIKeyLastUse(keyID, p.dbHandle)
}
func (p *SQLiteProvider) shareExists(shareID, username string) (Share, error) {
return sqlCommonGetShareByID(shareID, username, p.dbHandle)
}
func (p *SQLiteProvider) addShare(share *Share) error {
return sqlCommonAddShare(share, p.dbHandle)
}
func (p *SQLiteProvider) updateShare(share *Share) error {
return sqlCommonUpdateShare(share, p.dbHandle)
}
func (p *SQLiteProvider) deleteShare(share *Share) error {
return sqlCommonDeleteShare(share, p.dbHandle)
}
func (p *SQLiteProvider) getShares(limit int, offset int, order, username string) ([]Share, error) {
return sqlCommonGetShares(limit, offset, order, username, p.dbHandle)
}
func (p *SQLiteProvider) dumpShares() ([]Share, error) {
return sqlCommonDumpShares(p.dbHandle)
}
func (p *SQLiteProvider) updateShareLastUse(shareID string, numTokens int) error {
return sqlCommonUpdateShareLastUse(shareID, numTokens, p.dbHandle)
}
func (p *SQLiteProvider) close() error {
return p.dbHandle.Close()
}
@@ -322,205 +261,214 @@ func (p *SQLiteProvider) initializeDatabase() error {
if err == nil && dbVersion.Version > 0 {
return ErrNoInitRequired
}
if errors.Is(err, sql.ErrNoRows) {
return errSchemaVersionEmpty
}
initialSQL := strings.ReplaceAll(sqliteInitialSQL, "{{schema_version}}", sqlTableSchemaVersion)
initialSQL = strings.ReplaceAll(initialSQL, "{{admins}}", sqlTableAdmins)
initialSQL = strings.ReplaceAll(initialSQL, "{{folders}}", sqlTableFolders)
initialSQL = strings.ReplaceAll(initialSQL, "{{users}}", sqlTableUsers)
initialSQL = strings.ReplaceAll(initialSQL, "{{folders_mapping}}", sqlTableFoldersMapping)
initialSQL = strings.ReplaceAll(initialSQL, "{{prefix}}", config.SQLTablesPrefix)
sqlUsers := strings.Replace(sqliteUsersTableSQL, "{{users}}", sqlTableUsers, 1)
ctx, cancel := context.WithTimeout(context.Background(), longSQLQueryTimeout)
defer cancel()
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, []string{initialSQL}, 10)
tx, err := p.dbHandle.BeginTx(ctx, nil)
if err != nil {
return err
}
_, err = tx.Exec(sqlUsers)
if err != nil {
return err
}
_, err = tx.Exec(strings.Replace(sqliteSchemaTableSQL, "{{schema_version}}", sqlTableSchemaVersion, 1))
if err != nil {
return err
}
_, err = tx.Exec(strings.Replace(initialDBVersionSQL, "{{schema_version}}", sqlTableSchemaVersion, 1))
if err != nil {
return err
}
return tx.Commit()
}
//nolint:dupl
func (p *SQLiteProvider) migrateDatabase() error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
if err != nil {
return err
}
switch version := dbVersion.Version; {
case version == sqlDatabaseVersion:
providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", version)
if dbVersion.Version == sqlDatabaseVersion {
providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", dbVersion.Version)
return ErrNoInitRequired
case version < 10:
err = fmt.Errorf("database version %v is too old, please see the upgrading docs", version)
providerLog(logger.LevelError, "%v", err)
logger.ErrorToConsole("%v", err)
return err
case version == 10:
return updateSQLiteDatabaseFromV10(p.dbHandle)
case version == 11:
return updateSQLiteDatabaseFromV11(p.dbHandle)
case version == 12:
return updateSQLiteDatabaseFromV12(p.dbHandle)
case version == 13:
return updateSQLiteDatabaseFromV13(p.dbHandle)
}
switch dbVersion.Version {
case 1:
return updateSQLiteDatabaseFromV1(p.dbHandle)
case 2:
return updateSQLiteDatabaseFromV2(p.dbHandle)
case 3:
return updateSQLiteDatabaseFromV3(p.dbHandle)
case 4:
return updateSQLiteDatabaseFromV4(p.dbHandle)
case 5:
return updateSQLiteDatabaseFromV5(p.dbHandle)
case 6:
return updateSQLiteDatabaseFromV6(p.dbHandle)
case 7:
return updateSQLiteDatabaseFromV7(p.dbHandle)
default:
if version > sqlDatabaseVersion {
providerLog(logger.LevelWarn, "database version %v is newer than the supported one: %v", version,
if dbVersion.Version > sqlDatabaseVersion {
providerLog(logger.LevelWarn, "database version %v is newer than the supported: %v", dbVersion.Version,
sqlDatabaseVersion)
logger.WarnToConsole("database version %v is newer than the supported one: %v", version,
logger.WarnToConsole("database version %v is newer than the supported: %v", dbVersion.Version,
sqlDatabaseVersion)
return nil
}
return fmt.Errorf("database version not handled: %v", version)
return fmt.Errorf("Database version not handled: %v", dbVersion.Version)
}
}
//nolint:dupl
func (p *SQLiteProvider) revertDatabase(targetVersion int) error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
if err != nil {
return err
}
if dbVersion.Version == targetVersion {
return errors.New("current version match target version, nothing to do")
return fmt.Errorf("current version match target version, nothing to do")
}
switch dbVersion.Version {
case 14:
return downgradeSQLiteDatabaseFromV14(p.dbHandle)
case 13:
return downgradeSQLiteDatabaseFromV13(p.dbHandle)
case 12:
return downgradeSQLiteDatabaseFromV12(p.dbHandle)
case 11:
return downgradeSQLiteDatabaseFromV11(p.dbHandle)
case 8:
err = downgradeSQLiteDatabaseFrom8To7(p.dbHandle)
if err != nil {
return err
}
err = downgradeSQLiteDatabaseFrom7To6(p.dbHandle)
if err != nil {
return err
}
err = downgradeSQLiteDatabaseFrom6To5(p.dbHandle)
if err != nil {
return err
}
return downgradeSQLiteDatabaseFrom5To4(p.dbHandle)
case 7:
err = downgradeSQLiteDatabaseFrom7To6(p.dbHandle)
if err != nil {
return err
}
err = downgradeSQLiteDatabaseFrom6To5(p.dbHandle)
if err != nil {
return err
}
return downgradeSQLiteDatabaseFrom5To4(p.dbHandle)
case 6:
err = downgradeSQLiteDatabaseFrom6To5(p.dbHandle)
if err != nil {
return err
}
return downgradeSQLiteDatabaseFrom5To4(p.dbHandle)
case 5:
return downgradeSQLiteDatabaseFrom5To4(p.dbHandle)
default:
return fmt.Errorf("database version not handled: %v", dbVersion.Version)
return fmt.Errorf("Database version not handled: %v", dbVersion.Version)
}
}
func (p *SQLiteProvider) resetDatabase() error {
sql := strings.ReplaceAll(sqliteResetSQL, "{{schema_version}}", sqlTableSchemaVersion)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{folders_mapping}}", sqlTableFoldersMapping)
sql = strings.ReplaceAll(sql, "{{api_keys}}", sqlTableAPIKeys)
sql = strings.ReplaceAll(sql, "{{shares}}", sqlTableShares)
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, []string{sql}, 0)
}
func updateSQLiteDatabaseFromV10(dbHandle *sql.DB) error {
if err := updateSQLiteDatabaseFrom10To11(dbHandle); err != nil {
func updateSQLiteDatabaseFromV1(dbHandle *sql.DB) error {
err := updateSQLiteDatabaseFrom1To2(dbHandle)
if err != nil {
return err
}
return updateSQLiteDatabaseFromV11(dbHandle)
return updateSQLiteDatabaseFromV2(dbHandle)
}
func updateSQLiteDatabaseFromV11(dbHandle *sql.DB) error {
if err := updateSQLiteDatabaseFrom11To12(dbHandle); err != nil {
func updateSQLiteDatabaseFromV2(dbHandle *sql.DB) error {
err := updateSQLiteDatabaseFrom2To3(dbHandle)
if err != nil {
return err
}
return updateSQLiteDatabaseFromV12(dbHandle)
return updateSQLiteDatabaseFromV3(dbHandle)
}
func updateSQLiteDatabaseFromV12(dbHandle *sql.DB) error {
if err := updateSQLiteDatabaseFrom12To13(dbHandle); err != nil {
func updateSQLiteDatabaseFromV3(dbHandle *sql.DB) error {
err := updateSQLiteDatabaseFrom3To4(dbHandle)
if err != nil {
return err
}
return updateSQLiteDatabaseFromV13(dbHandle)
return updateSQLiteDatabaseFromV4(dbHandle)
}
func updateSQLiteDatabaseFromV13(dbHandle *sql.DB) error {
return updateSQLiteDatabaseFrom13To14(dbHandle)
}
func downgradeSQLiteDatabaseFromV14(dbHandle *sql.DB) error {
if err := downgradeSQLiteDatabaseFrom14To13(dbHandle); err != nil {
func updateSQLiteDatabaseFromV4(dbHandle *sql.DB) error {
err := updateSQLiteDatabaseFrom4To5(dbHandle)
if err != nil {
return err
}
return downgradeSQLiteDatabaseFromV13(dbHandle)
return updateSQLiteDatabaseFromV5(dbHandle)
}
func downgradeSQLiteDatabaseFromV13(dbHandle *sql.DB) error {
if err := downgradeSQLiteDatabaseFrom13To12(dbHandle); err != nil {
func updateSQLiteDatabaseFromV5(dbHandle *sql.DB) error {
err := updateSQLiteDatabaseFrom5To6(dbHandle)
if err != nil {
return err
}
return downgradeSQLiteDatabaseFromV12(dbHandle)
return updateSQLiteDatabaseFromV6(dbHandle)
}
func downgradeSQLiteDatabaseFromV12(dbHandle *sql.DB) error {
if err := downgradeSQLiteDatabaseFrom12To11(dbHandle); err != nil {
func updateSQLiteDatabaseFromV6(dbHandle *sql.DB) error {
err := updateSQLiteDatabaseFrom6To7(dbHandle)
if err != nil {
return err
}
return downgradeSQLiteDatabaseFromV11(dbHandle)
return updateSQLiteDatabaseFromV7(dbHandle)
}
func downgradeSQLiteDatabaseFromV11(dbHandle *sql.DB) error {
return downgradeSQLiteDatabaseFrom11To10(dbHandle)
func updateSQLiteDatabaseFromV7(dbHandle *sql.DB) error {
return updateSQLiteDatabaseFrom7To8(dbHandle)
}
func updateSQLiteDatabaseFrom13To14(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 13 -> 14")
providerLog(logger.LevelInfo, "updating database version: 13 -> 14")
sql := strings.ReplaceAll(sqliteV14SQL, "{{shares}}", sqlTableShares)
sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 14)
func updateSQLiteDatabaseFrom1To2(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 1 -> 2")
providerLog(logger.LevelInfo, "updating database version: 1 -> 2")
sql := strings.Replace(sqliteV2SQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 2)
}
func downgradeSQLiteDatabaseFrom14To13(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 14 -> 13")
providerLog(logger.LevelInfo, "downgrading database version: 14 -> 13")
sql := strings.ReplaceAll(sqliteV14DownSQL, "{{shares}}", sqlTableShares)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 13)
func updateSQLiteDatabaseFrom2To3(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 2 -> 3")
providerLog(logger.LevelInfo, "updating database version: 2 -> 3")
sql := strings.ReplaceAll(sqliteV3SQL, "{{users}}", sqlTableUsers)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 3)
}
func updateSQLiteDatabaseFrom12To13(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 12 -> 13")
providerLog(logger.LevelInfo, "updating database version: 12 -> 13")
sql := strings.ReplaceAll(sqliteV13SQL, "{{users}}", sqlTableUsers)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 13)
func updateSQLiteDatabaseFrom3To4(dbHandle *sql.DB) error {
return sqlCommonUpdateDatabaseFrom3To4(sqliteV4SQL, dbHandle)
}
func downgradeSQLiteDatabaseFrom13To12(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 13 -> 12")
providerLog(logger.LevelInfo, "downgrading database version: 13 -> 12")
sql := strings.ReplaceAll(sqliteV13DownSQL, "{{users}}", sqlTableUsers)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 12)
func updateSQLiteDatabaseFrom4To5(dbHandle *sql.DB) error {
return sqlCommonUpdateDatabaseFrom4To5(dbHandle)
}
func updateSQLiteDatabaseFrom11To12(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 11 -> 12")
providerLog(logger.LevelInfo, "updating database version: 11 -> 12")
sql := strings.ReplaceAll(sqliteV12SQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 12)
func updateSQLiteDatabaseFrom5To6(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 5 -> 6")
providerLog(logger.LevelInfo, "updating database version: 5 -> 6")
sql := strings.Replace(sqliteV6SQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 6)
}
func downgradeSQLiteDatabaseFrom12To11(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 12 -> 11")
providerLog(logger.LevelInfo, "downgrading database version: 12 -> 11")
sql := strings.ReplaceAll(sqliteV12DownSQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 11)
func updateSQLiteDatabaseFrom6To7(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 6 -> 7")
providerLog(logger.LevelInfo, "updating database version: 6 -> 7")
sql := strings.Replace(sqliteV7SQL, "{{admins}}", sqlTableAdmins, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 7)
}
func updateSQLiteDatabaseFrom10To11(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 10 -> 11")
providerLog(logger.LevelInfo, "updating database version: 10 -> 11")
sql := strings.ReplaceAll(sqliteV11SQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql = strings.ReplaceAll(sql, "{{api_keys}}", sqlTableAPIKeys)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 11)
func updateSQLiteDatabaseFrom7To8(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 7 -> 8")
providerLog(logger.LevelInfo, "updating database version: 7 -> 8")
if err := setPragmaFK(dbHandle, "OFF"); err != nil {
return err
}
sql := strings.ReplaceAll(sqliteV8SQL, "{{folders}}", sqlTableFolders)
if err := sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 8); err != nil {
return err
}
return setPragmaFK(dbHandle, "ON")
}
func downgradeSQLiteDatabaseFrom11To10(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 11 -> 10")
providerLog(logger.LevelInfo, "downgrading database version: 11 -> 10")
sql := strings.ReplaceAll(sqliteV11DownSQL, "{{api_keys}}", sqlTableAPIKeys)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 10)
}
/*func setPragmaFK(dbHandle *sql.DB, value string) error {
func setPragmaFK(dbHandle *sql.DB, value string) error {
ctx, cancel := context.WithTimeout(context.Background(), longSQLQueryTimeout)
defer cancel()
@@ -528,4 +476,35 @@ func downgradeSQLiteDatabaseFrom11To10(dbHandle *sql.DB) error {
_, err := dbHandle.ExecContext(ctx, sql)
return err
}*/
}
func downgradeSQLiteDatabaseFrom8To7(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 8 -> 7")
providerLog(logger.LevelInfo, "downgrading database version: 8 -> 7")
if err := setPragmaFK(dbHandle, "OFF"); err != nil {
return err
}
sql := strings.ReplaceAll(sqliteV8DownSQL, "{{folders}}", sqlTableFolders)
if err := sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 7); err != nil {
return err
}
return setPragmaFK(dbHandle, "ON")
}
func downgradeSQLiteDatabaseFrom7To6(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 7 -> 6")
providerLog(logger.LevelInfo, "downgrading database version: 7 -> 6")
sql := strings.Replace(sqliteV7DownSQL, "{{admins}}", sqlTableAdmins, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 6)
}
func downgradeSQLiteDatabaseFrom6To5(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 6 -> 5")
providerLog(logger.LevelInfo, "downgrading database version: 6 -> 5")
sql := strings.ReplaceAll(sqliteV6DownSQL, "{{users}}", sqlTableUsers)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 5)
}
func downgradeSQLiteDatabaseFrom5To4(dbHandle *sql.DB) error {
return sqlCommonDowngradeDatabaseFrom5To4(dbHandle)
}
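
The `!nosqlite` build tags and the blank `go-sqlite3` import at the top of this file are what make SQLite support optional at compile time; the companion `nosqlite` file (next diff below) supplies the stub. A minimal sketch of the pattern:

```go
//go:build !nosqlite
// +build !nosqlite

package dataprovider

// The blank import registers the "sqlite3" driver with database/sql as a
// side effect of the package's init(). Building with
// `go build -tags nosqlite` drops this file entirely, so the cgo-based
// driver is never compiled in; the nosqlite variant of the file instead
// provides an init() that reports SQLite support as disabled.
import _ "github.com/mattn/go-sqlite3"
```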

View File

@@ -1,4 +1,3 @@
//go:build nosqlite
// +build nosqlite
package dataprovider
@@ -6,7 +5,7 @@ package dataprovider
import (
"errors"
"github.com/drakkan/sftpgo/v2/version"
"github.com/drakkan/sftpgo/version"
)
func init() {

View File

@@ -5,24 +5,20 @@ import (
"strconv"
"strings"
"github.com/drakkan/sftpgo/v2/vfs"
"github.com/drakkan/sftpgo/vfs"
)
const (
selectUserFields = "id,username,password,public_keys,home_dir,uid,gid,max_sessions,quota_size,quota_files,permissions,used_quota_size," +
"used_quota_files,last_quota_update,upload_bandwidth,download_bandwidth,expiration_date,last_login,status,filters,filesystem," +
"additional_info,description,email,created_at,updated_at"
selectFolderFields = "id,path,used_quota_size,used_quota_files,last_quota_update,name,description,filesystem"
selectAdminFields = "id,username,password,status,email,permissions,filters,additional_info,description,created_at,updated_at,last_login"
selectAPIKeyFields = "key_id,name,api_key,scope,created_at,updated_at,last_use_at,expires_at,description,user_id,admin_id"
selectShareFields = "s.share_id,s.name,s.description,s.scope,s.paths,u.username,s.created_at,s.updated_at,s.last_use_at," +
"s.expires_at,s.password,s.max_tokens,s.used_tokens,s.allow_from"
"used_quota_files,last_quota_update,upload_bandwidth,download_bandwidth,expiration_date,last_login,status,filters,filesystem,additional_info"
selectFolderFields = "id,path,used_quota_size,used_quota_files,last_quota_update,name"
selectAdminFields = "id,username,password,status,email,permissions,filters,additional_info"
)
func getSQLPlaceholders() []string {
var placeholders []string
for i := 1; i <= 30; i++ {
if config.Driver == PGSQLDataProviderName || config.Driver == CockroachDataProviderName {
for i := 1; i <= 20; i++ {
if config.Driver == PGSQLDataProviderName {
placeholders = append(placeholders, fmt.Sprintf("$%v", i))
} else {
placeholders = append(placeholders, "?")
@@ -45,142 +41,21 @@ func getDumpAdminsQuery() string {
}
func getAddAdminQuery() string {
return fmt.Sprintf(`INSERT INTO %v (username,password,status,email,permissions,filters,additional_info,description,created_at,updated_at,last_login)
VALUES (%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,0)`, sqlTableAdmins, sqlPlaceholders[0], sqlPlaceholders[1],
sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7],
sqlPlaceholders[8], sqlPlaceholders[9])
return fmt.Sprintf(`INSERT INTO %v (username,password,status,email,permissions,filters,additional_info)
VALUES (%v,%v,%v,%v,%v,%v,%v)`, sqlTableAdmins, sqlPlaceholders[0], sqlPlaceholders[1],
sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6])
}
func getUpdateAdminQuery() string {
return fmt.Sprintf(`UPDATE %v SET password=%v,status=%v,email=%v,permissions=%v,filters=%v,additional_info=%v,description=%v,updated_at=%v
return fmt.Sprintf(`UPDATE %v SET password=%v,status=%v,email=%v,permissions=%v,filters=%v,additional_info=%v
WHERE username = %v`, sqlTableAdmins, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2],
sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7], sqlPlaceholders[8])
sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6])
}
func getDeleteAdminQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE username = %v`, sqlTableAdmins, sqlPlaceholders[0])
}
func getShareByIDQuery(filterUser bool) string {
if filterUser {
return fmt.Sprintf(`SELECT %v FROM %v s INNER JOIN %v u ON s.user_id = u.id WHERE s.share_id = %v AND u.username = %v`,
selectShareFields, sqlTableShares, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1])
}
return fmt.Sprintf(`SELECT %v FROM %v s INNER JOIN %v u ON s.user_id = u.id WHERE s.share_id = %v`,
selectShareFields, sqlTableShares, sqlTableUsers, sqlPlaceholders[0])
}
func getSharesQuery(order string) string {
return fmt.Sprintf(`SELECT %v FROM %v s INNER JOIN %v u ON s.user_id = u.id WHERE u.username = %v ORDER BY s.share_id %v LIMIT %v OFFSET %v`,
selectShareFields, sqlTableShares, sqlTableUsers, sqlPlaceholders[0], order, sqlPlaceholders[1], sqlPlaceholders[2])
}
func getDumpSharesQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v s INNER JOIN %v u ON s.user_id = u.id`,
selectShareFields, sqlTableShares, sqlTableUsers)
}
func getAddShareQuery() string {
return fmt.Sprintf(`INSERT INTO %v (share_id,name,description,scope,paths,created_at,updated_at,last_use_at,
expires_at,password,max_tokens,used_tokens,allow_from,user_id) VALUES (%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,%v)`,
sqlTableShares, sqlPlaceholders[0], sqlPlaceholders[1],
sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6],
sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9], sqlPlaceholders[10], sqlPlaceholders[11],
sqlPlaceholders[12], sqlPlaceholders[13])
}
func getUpdateShareRestoreQuery() string {
return fmt.Sprintf(`UPDATE %v SET name=%v,description=%v,scope=%v,paths=%v,created_at=%v,updated_at=%v,
last_use_at=%v,expires_at=%v,password=%v,max_tokens=%v,used_tokens=%v,allow_from=%v,user_id=%v WHERE share_id = %v`, sqlTableShares,
sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4],
sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9],
sqlPlaceholders[10], sqlPlaceholders[11], sqlPlaceholders[12], sqlPlaceholders[13])
}
func getUpdateShareQuery() string {
return fmt.Sprintf(`UPDATE %v SET name=%v,description=%v,scope=%v,paths=%v,updated_at=%v,expires_at=%v,
password=%v,max_tokens=%v,allow_from=%v,user_id=%v WHERE share_id = %v`, sqlTableShares,
sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4],
sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9],
sqlPlaceholders[10])
}
func getDeleteShareQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE share_id = %v`, sqlTableShares, sqlPlaceholders[0])
}
func getAPIKeyByIDQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v WHERE key_id = %v`, selectAPIKeyFields, sqlTableAPIKeys, sqlPlaceholders[0])
}
func getAPIKeysQuery(order string) string {
return fmt.Sprintf(`SELECT %v FROM %v ORDER BY key_id %v LIMIT %v OFFSET %v`, selectAPIKeyFields, sqlTableAPIKeys,
order, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getDumpAPIKeysQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v`, selectAPIKeyFields, sqlTableAPIKeys)
}
func getAddAPIKeyQuery() string {
return fmt.Sprintf(`INSERT INTO %v (key_id,name,api_key,scope,created_at,updated_at,last_use_at,expires_at,description,user_id,admin_id)
VALUES (%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,%v)`, sqlTableAPIKeys, sqlPlaceholders[0], sqlPlaceholders[1],
sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6],
sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9], sqlPlaceholders[10])
}
func getUpdateAPIKeyQuery() string {
return fmt.Sprintf(`UPDATE %v SET name=%v,scope=%v,expires_at=%v,user_id=%v,admin_id=%v,description=%v,updated_at=%v
WHERE key_id = %v`, sqlTableAPIKeys, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2],
sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7])
}
func getDeleteAPIKeyQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE key_id = %v`, sqlTableAPIKeys, sqlPlaceholders[0])
}
func getRelatedUsersForAPIKeysQuery(apiKeys []APIKey) string {
var sb strings.Builder
for _, k := range apiKeys {
if k.userID == 0 {
continue
}
if sb.Len() == 0 {
sb.WriteString("(")
} else {
sb.WriteString(",")
}
sb.WriteString(strconv.FormatInt(k.userID, 10))
}
if sb.Len() > 0 {
sb.WriteString(")")
} else {
sb.WriteString("(0)")
}
return fmt.Sprintf(`SELECT id,username FROM %v WHERE id IN %v`, sqlTableUsers, sb.String())
}
func getRelatedAdminsForAPIKeysQuery(apiKeys []APIKey) string {
var sb strings.Builder
for _, k := range apiKeys {
if k.adminID == 0 {
continue
}
if sb.Len() == 0 {
sb.WriteString("(")
} else {
sb.WriteString(",")
}
sb.WriteString(strconv.FormatInt(k.adminID, 10))
}
if sb.Len() > 0 {
sb.WriteString(")")
} else {
sb.WriteString("(0)")
}
return fmt.Sprintf(`SELECT id,username FROM %v WHERE id IN %v`, sqlTableAdmins, sb.String())
}
func getUserByUsernameQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v WHERE username = %v`, selectUserFields, sqlTableUsers, sqlPlaceholders[0])
}
@@ -190,10 +65,6 @@ func getUsersQuery(order string) string {
order, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getRecentlyUpdatedUsersQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v WHERE updated_at >= %v`, selectUserFields, sqlTableUsers, sqlPlaceholders[0])
}
func getDumpUsersQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v`, selectUserFields, sqlTableUsers)
}
@@ -211,27 +82,10 @@ func getUpdateQuotaQuery(reset bool) string {
WHERE username = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
}
func getSetUpdateAtQuery() string {
return fmt.Sprintf(`UPDATE %v SET updated_at = %v WHERE username = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getUpdateLastLoginQuery() string {
return fmt.Sprintf(`UPDATE %v SET last_login = %v WHERE username = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getUpdateAdminLastLoginQuery() string {
return fmt.Sprintf(`UPDATE %v SET last_login = %v WHERE username = %v`, sqlTableAdmins, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getUpdateAPIKeyLastUseQuery() string {
return fmt.Sprintf(`UPDATE %v SET last_use_at = %v WHERE key_id = %v`, sqlTableAPIKeys, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getUpdateShareLastUseQuery() string {
return fmt.Sprintf(`UPDATE %v SET last_use_at = %v, used_tokens = used_tokens +%v WHERE share_id = %v`,
sqlTableShares, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2])
}
func getQuotaQuery() string {
return fmt.Sprintf(`SELECT used_quota_size,used_quota_files FROM %v WHERE username = %v`, sqlTableUsers,
sqlPlaceholders[0])
@@ -240,21 +94,20 @@ func getQuotaQuery() string {
func getAddUserQuery() string {
return fmt.Sprintf(`INSERT INTO %v (username,password,public_keys,home_dir,uid,gid,max_sessions,quota_size,quota_files,permissions,
used_quota_size,used_quota_files,last_quota_update,upload_bandwidth,download_bandwidth,status,last_login,expiration_date,filters,
filesystem,additional_info,description,email,created_at,updated_at)
VALUES (%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,0,0,0,%v,%v,%v,0,%v,%v,%v,%v,%v,%v,%v,%v)`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1],
filesystem,additional_info)
VALUES (%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,0,0,0,%v,%v,%v,0,%v,%v,%v,%v)`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1],
sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7],
sqlPlaceholders[8], sqlPlaceholders[9], sqlPlaceholders[10], sqlPlaceholders[11], sqlPlaceholders[12], sqlPlaceholders[13],
sqlPlaceholders[14], sqlPlaceholders[15], sqlPlaceholders[16], sqlPlaceholders[17], sqlPlaceholders[18], sqlPlaceholders[19],
sqlPlaceholders[20])
sqlPlaceholders[14], sqlPlaceholders[15], sqlPlaceholders[16])
}
func getUpdateUserQuery() string {
return fmt.Sprintf(`UPDATE %v SET password=%v,public_keys=%v,home_dir=%v,uid=%v,gid=%v,max_sessions=%v,quota_size=%v,
quota_files=%v,permissions=%v,upload_bandwidth=%v,download_bandwidth=%v,status=%v,expiration_date=%v,filters=%v,filesystem=%v,
additional_info=%v,description=%v,email=%v,updated_at=%v WHERE id = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3],
additional_info=%v WHERE id = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3],
sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9],
sqlPlaceholders[10], sqlPlaceholders[11], sqlPlaceholders[12], sqlPlaceholders[13], sqlPlaceholders[14], sqlPlaceholders[15],
sqlPlaceholders[16], sqlPlaceholders[17], sqlPlaceholders[18], sqlPlaceholders[19])
sqlPlaceholders[16])
}
func getDeleteUserQuery() string {
@@ -265,19 +118,13 @@ func getFolderByNameQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v WHERE name = %v`, selectFolderFields, sqlTableFolders, sqlPlaceholders[0])
}
func checkFolderNameQuery() string {
return fmt.Sprintf(`SELECT name FROM %v WHERE name = %v`, sqlTableFolders, sqlPlaceholders[0])
}
func getAddFolderQuery() string {
return fmt.Sprintf(`INSERT INTO %v (path,used_quota_size,used_quota_files,last_quota_update,name,description,filesystem)
VALUES (%v,%v,%v,%v,%v,%v,%v)`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2],
sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6])
return fmt.Sprintf(`INSERT INTO %v (path,used_quota_size,used_quota_files,last_quota_update,name) VALUES (%v,%v,%v,%v,%v)`,
sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4])
}
func getUpdateFolderQuery() string {
return fmt.Sprintf(`UPDATE %v SET path=%v,description=%v,filesystem=%v WHERE name = %v`, sqlTableFolders, sqlPlaceholders[0],
sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
return fmt.Sprintf(`UPDATE %v SET path = %v WHERE name = %v`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getDeleteFolderQuery() string {
@@ -327,9 +174,9 @@ func getRelatedFoldersForUsersQuery(users []User) string {
if sb.Len() > 0 {
sb.WriteString(")")
}
return fmt.Sprintf(`SELECT f.id,f.name,f.path,f.used_quota_size,f.used_quota_files,f.last_quota_update,fm.virtual_path,
fm.quota_size,fm.quota_files,fm.user_id,f.filesystem,f.description FROM %v f INNER JOIN %v fm ON f.id = fm.folder_id WHERE
fm.user_id IN %v ORDER BY fm.user_id`, sqlTableFolders, sqlTableFoldersMapping, sb.String())
return fmt.Sprintf(`SELECT f.id,f.name,f.path,f.used_quota_size,f.used_quota_files,f.last_quota_update,fm.virtual_path,fm.quota_size,fm.quota_files,fm.user_id
FROM %v f INNER JOIN %v fm ON f.id = fm.folder_id WHERE fm.user_id IN %v ORDER BY fm.user_id`, sqlTableFolders,
sqlTableFoldersMapping, sb.String())
}
func getRelatedUsersForFoldersQuery(folders []vfs.BaseVirtualFolder) string {
@@ -356,3 +203,15 @@ func getDatabaseVersionQuery() string {
func getUpdateDBVersionQuery() string {
return fmt.Sprintf(`UPDATE %v SET version=%v`, sqlTableSchemaVersion, sqlPlaceholders[0])
}
/*func getCompatVirtualFoldersQuery() string {
return fmt.Sprintf(`SELECT id,username,virtual_folders FROM %v`, sqlTableUsers)
}*/
func getCompatV4FsConfigQuery() string {
return fmt.Sprintf(`SELECT id,username,filesystem FROM %v`, sqlTableUsers)
}
func updateCompatV4FsConfigQuery() string {
return fmt.Sprintf(`UPDATE %v SET filesystem=%v WHERE id=%v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1])
}

File diff suppressed because it is too large

View File

@@ -4,16 +4,14 @@ SFTPGo provides an official Docker image, it is available on both [Docker Hub](h
## Supported tags and respective Dockerfile links
- [v2.2.0, v2.2, v2, latest](https://github.com/drakkan/sftpgo/blob/v2.2.0/Dockerfile)
- [v2.2.0-alpine, v2.2-alpine, v2-alpine, alpine](https://github.com/drakkan/sftpgo/blob/v2.2.0/Dockerfile.alpine)
- [v2.2.0-slim, v2.2-slim, v2-slim, slim](https://github.com/drakkan/sftpgo/blob/v2.2.0/Dockerfile)
- [v2.2.0-alpine-slim, v2.2-alpine-slim, v2-alpine-slim, alpine-slim](https://github.com/drakkan/sftpgo/blob/v2.2.0/Dockerfile.alpine)
- [v2.2.0-distroless-slim, v2.2-distroless-slim, v2-distroless-slim, distroless-slim](https://github.com/drakkan/sftpgo/blob/v2.2.0/Dockerfile.distroless)
- [v2.0.4, v2.0, v2, latest](https://github.com/drakkan/sftpgo/blob/v2.0.4/Dockerfile)
- [v2.0.4-alpine, v2.0-alpine, v2-alpine, alpine](https://github.com/drakkan/sftpgo/blob/v2.0.4/Dockerfile.alpine)
- [v2.0.4-slim, v2.0-slim, v2-slim, slim](https://github.com/drakkan/sftpgo/blob/v2.0.4/Dockerfile)
- [v2.0.4-alpine-slim, v2.0-alpine-slim, v2-alpine-slim, alpine-slim](https://github.com/drakkan/sftpgo/blob/v2.0.4/Dockerfile.alpine)
- [edge](../Dockerfile)
- [edge-alpine](../Dockerfile.alpine)
- [edge-slim](../Dockerfile)
- [edge-alpine-slim](../Dockerfile.alpine)
- [edge-distroless-slim](../Dockerfile.distroless)
## How to use the SFTPGo image
@@ -22,59 +20,15 @@ SFTPGo provides an official Docker image, it is available on both [Docker Hub](h
Starting a SFTPGo instance is simple:
```shell
docker run --name some-sftpgo -p 8080:8080 -p 2022:2022 -d "drakkan/sftpgo:tag"
docker run --name some-sftpgo -p 127.0.0.1:8080:8080 -p 2022:2022 -d "drakkan/sftpgo:tag"
```
... where `some-sftpgo` is the name you want to assign to your container, and `tag` is the tag specifying the SFTPGo version you want. See the list above for relevant tags.
Now visit [http://localhost:8080/web/admin](http://localhost:8080/web/admin), replacing `localhost` with the appropriate IP address if SFTPGo is not reachable on localhost, create the first admin and a new SFTPGo user. The SFTP service is available on port 2022.
If you don't want to persist any files, for example for testing purposes, you can run an SFTPGo instance like this:
```shell
docker run --rm --name some-sftpgo -p 8080:8080 -p 2022:2022 -d "drakkan/sftpgo:tag"
```
Now visit [http://localhost:8080/](http://localhost:8080/) and create a new SFTPGo user. The SFTP service is available on port 2022.
If you prefer GitHub Container Registry to Docker Hub replace `drakkan/sftpgo:tag` with `ghcr.io/drakkan/sftpgo:tag`.
### Enable FTP service
FTP is disabled by default, you can enable the FTP service by starting the SFTPGo instance in this way:
```shell
docker run --name some-sftpgo \
-p 8080:8080 \
-p 2022:2022 \
-p 2121:2121 \
-p 50000-50100:50000-50100 \
-e SFTPGO_FTPD__BINDINGS__0__PORT=2121 \
-e SFTPGO_FTPD__BINDINGS__0__FORCE_PASSIVE_IP=<your external ip here> \
-d "drakkan/sftpgo:tag"
```
The FTP service is now available on port 2121 and SFTP on port 2022.
You can change the passive ports range (`50000-50100` by default) by setting the environment variables `SFTPGO_FTPD__PASSIVE_PORT_RANGE__START` and `SFTPGO_FTPD__PASSIVE_PORT_RANGE__END`.
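For example, you can combine the settings above to serve FTP with a custom passive range; the `50000-50050` range and the port mappings here are arbitrary example choices, not defaults:

```shell
docker run --name some-sftpgo \
  -p 8080:8080 \
  -p 2121:2121 \
  -p 50000-50050:50000-50050 \
  -e SFTPGO_FTPD__BINDINGS__0__PORT=2121 \
  -e SFTPGO_FTPD__BINDINGS__0__FORCE_PASSIVE_IP=<your external ip here> \
  -e SFTPGO_FTPD__PASSIVE_PORT_RANGE__START=50000 \
  -e SFTPGO_FTPD__PASSIVE_PORT_RANGE__END=50050 \
  -d "drakkan/sftpgo:tag"
```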
It is recommended that you provide a certificate and key file to expose FTP over TLS. You should prefer SFTP to FTP even if you configure TLS; please don't blindly enable the old FTP protocol.
### Enable WebDAV service
WebDAV is disabled by default, you can enable the WebDAV service by starting the SFTPGo instance in this way:
```shell
docker run --name some-sftpgo \
-p 8080:8080 \
-p 2022:2022 \
-p 10080:10080 \
-e SFTPGO_WEBDAVD__BINDINGS__0__PORT=10080 \
-d "drakkan/sftpgo:tag"
```
The WebDAV service is now available on port 10080 and SFTP on port 2022.
It is recommended that you provide a certificate and key file to expose WebDAV over HTTPS.
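A minimal sketch, assuming the `certificate_file` and `certificate_key_file` keys of the `webdavd` section can be set through the usual `SFTPGO_` environment variable convention; `/my/own/certs` and the file names are example paths, not defaults:

```shell
docker run --name some-sftpgo \
  -p 8080:8080 \
  -p 10080:10080 \
  -e SFTPGO_WEBDAVD__BINDINGS__0__PORT=10080 \
  -e SFTPGO_WEBDAVD__CERTIFICATE_FILE=/srv/certs/webdav.crt \
  -e SFTPGO_WEBDAVD__CERTIFICATE_KEY_FILE=/srv/certs/webdav.key \
  --mount type=bind,source=/my/own/certs,target=/srv/certs,readonly \
  -d "drakkan/sftpgo:tag"
```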
### Container shell access and viewing SFTPGo logs
The docker exec command allows you to run commands inside a Docker container. The following command line will give you a shell inside your `sftpgo` container:
@@ -89,8 +43,6 @@ The logs are available through Docker's container log:
docker logs some-sftpgo
```
**Note:** the [distroless](../Dockerfile.distroless) image contains only a statically linked sftpgo binary and its minimal runtime dependencies. A shell is not available in this image.
### Where to Store Data
Important note: There are several ways to store data used by applications that run in Docker containers. We encourage users of the SFTPGo images to familiarize themselves with the options available, including:
@@ -106,7 +58,7 @@ The Docker documentation is a good starting point for understanding the differen
```shell
docker run --name some-sftpgo \
-p 8080:8090 \
-p 127.0.0.1:8080:8090 \
-p 2022:2022 \
--mount type=bind,source=/my/own/sftpgodata,target=/srv/sftpgo \
--mount type=bind,source=/my/own/sftpgohome,target=/var/lib/sftpgo \
@@ -154,7 +106,7 @@ With the above directory permissions, you can start a SFTPGo instance like this:
```shell
docker run --name some-sftpgo \
--user 1100:1100 \
-p 8080:8080 \
-p 127.0.0.1:8080:8080 \
-p 2022:2022 \
--mount type=bind,source="${PWD}/data",target=/srv/sftpgo \
--mount type=bind,source="${PWD}/config",target=/var/lib/sftpgo \
@@ -170,11 +122,9 @@ RUN chown -R 1100:1100 /etc/sftpgo && chown 1100:1100 /var/lib/sftpgo /srv/sftpg
USER 1100:1100
```
**Note:** the above Dockerfile will not work if you use the [distroless](../Dockerfile.distroless) image as base since the `chown` command is not available there.
## Image Variants
The `sftpgo` images come in many flavors, each designed for a specific use case. The `edge`, `edge-slim`, `edge-alpine`, `edge-alpine-slim` and `edge-distroless-slim` tags are updated after each new commit.
The `sftpgo` images come in many flavors, each designed for a specific use case. The `edge` and `edge-alpine` tags are updated after each new commit.
### `sftpgo:<version>`
@@ -186,18 +136,9 @@ This image is based on the popular [Alpine Linux project](https://alpinelinux.or
This variant is highly recommended when final image size being as small as possible is desired. The main caveat to note is that it does use [musl libc](https://musl.libc.org/) instead of [glibc and friends](https://www.etalabs.net/compare_libcs.html), so certain software might run into issues depending on the depth of their libc requirements. However, most software doesn't have an issue with this, so this variant is usually a very safe choice. See [this Hacker News comment thread](https://news.ycombinator.com/item?id=10782897) for more discussion of the issues that might arise and some pro/con comparisons of using Alpine-based images.
### `sftpgo:<version>-distroless`
This image is based on the popular [Distroless project](https://github.com/GoogleContainerTools/distroless). We use the latest Debian based distroless image as base.
The distroless variant contains only a statically linked sftpgo binary and its minimal runtime dependencies, so it doesn't allow shell access (no shell is installed).
SQLite support is disabled since it requires CGO, and therefore a C runtime, which is not installed.
The default data provider is `bolt`; all the supported data providers except `sqlite` work.
We only provide the slim variant and so the optional `git` dependency is not available.
### `sftpgo:<suite>-slim`
These tags provide a slimmer image that does not include the optional `git` dependency.
These tags provide a slimmer image that does not include the optional `git` and `rsync` dependencies.
## Helm Chart

View File

@@ -1,6 +1,6 @@
# Account's configuration properties
Please take a look at the [OpenAPI schema](../openapi/openapi.yaml) for the exact definitions of user, folder and admin fields.
Please take a look at the [OpenAPI schema](../httpd/schema/openapi.yaml) for the exact definitions of user, folder and admin fields.
If you need an example you can export a dump using the Web Admin or by invoking the `dumpdata` endpoint directly, you need to obtain an access token first, for example:
```shell

View File

@@ -26,13 +26,13 @@ The compiler is a build time only dependency. It is not required at runtime.
Version info, such as git commit and build date, can be embedded setting the following string variables at build time:
- `github.com/drakkan/sftpgo/v2/version.commit`
- `github.com/drakkan/sftpgo/v2/version.date`
- `github.com/drakkan/sftpgo/version.commit`
- `github.com/drakkan/sftpgo/version.date`
For example, you can build using the following command:
```bash
go build -tags nogcs,nos3,nosqlite -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
go build -tags nogcs,nos3,nosqlite -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
```
You should get a version that includes git commit, build date and available features like this one:

View File

@@ -16,7 +16,7 @@ If the hook defines an external program it can read the following environment va
- `SFTPGO_AUTHD_USERNAME`
- `SFTPGO_AUTHD_PASSWORD`
- `SFTPGO_AUTHD_IP`
- `SFTPGO_AUTHD_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`, `HTTP`
- `SFTPGO_AUTHD_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`
Previous global environment variables aren't cleared when the script is called. The content of these variables is _not_ quoted. They may contain special characters. They are under the control of a possibly malicious remote user.
@@ -42,6 +42,4 @@ You can also restrict the hook scope using the `check_password_scope` configurat
You can combine the scopes. For example, 6 means FTP and WebDAV.
You can disable the hook on a per-user basis.
An example check password program allowing 2FA using password + one time token can be found inside the source tree [checkpwd](../examples/OTP/authy/checkpwd) directory.

View File

@@ -1,114 +1,76 @@
# Custom Actions
SFTPGo can notify filesystem and provider events using custom actions. A custom action can be an external program or an HTTP URL.
## Filesystem events
The `actions` struct inside the `common` configuration section allows you to configure the actions for file operations and SSH commands.
The `actions` struct inside the "common" configuration section allows you to configure the actions for file operations and SSH commands.
The `hook` can be defined as the absolute path of your program or an HTTP URL.
The following `actions` are supported:
- `download`
- `pre-download`
- `upload`
- `pre-upload`
- `delete`
- `pre-delete`
- `rename`
- `mkdir`
- `rmdir`
- `ssh_cmd`
The `upload` condition includes both uploads to new files and overwrite of existing ones. If an upload is aborted due to quota limits, SFTPGo tries to remove the partial file, so if the notification reports a zero size file and a quota exceeded error the file has been deleted. The `ssh_cmd` condition will be triggered after a command is successfully executed via SSH. `scp` will trigger the `download` and `upload` conditions and not `ssh_cmd`.
For cloud backends directories are virtual, they are created implicitly when you upload a file and are implicitly removed when the last file within a directory is removed. The `mkdir` and `rmdir` notifications are sent only when a directory is explicitly created or removed.
The `upload` condition includes both uploads to new files and overwrite of existing files. If an upload is aborted due to quota limits, SFTPGo tries to remove the partial file, so if the notification reports a zero size file and a quota exceeded error the file has been deleted. The `ssh_cmd` condition will be triggered after a command is successfully executed via SSH. `scp` will trigger the `download` and `upload` conditions and not `ssh_cmd`.
The notification will indicate if an error is detected and so, for example, a partial file is uploaded.
The `pre-delete` action, if defined, will be called just before files deletion. If the external command completes with a zero exit status or the HTTP notification response code is `200` then SFTPGo will assume that the file was already deleted/moved and so it will not try to remove the file and it will not execute the hook defined for the `delete` action.
The `pre-download` and `pre-upload` actions will be called before downloads and uploads. If the external command completes with a zero exit status or the HTTP notification response code is `200` then SFTPGo allows the operation, otherwise the client will get a permission denied error.
If the `hook` defines a path to an external program, then this program is invoked with the following arguments:
If the `hook` defines a path to an external program, then this program can read the following environment variables:
- `action`, string, possible values are: `download`, `upload`, `pre-delete`, `delete`, `rename`, `ssh_cmd`
- `username`
- `path` is the full filesystem path, can be empty for some ssh commands
- `target_path`, non-empty for `rename` action and for `sftpgo-copy` SSH command
- `ssh_cmd`, non-empty for `ssh_cmd` action
- `SFTPGO_ACTION`, supported action
The external program can also read the following environment variables:
- `SFTPGO_ACTION`
- `SFTPGO_ACTION_USERNAME`
- `SFTPGO_ACTION_PATH`, is the full filesystem path, can be empty for some ssh commands
- `SFTPGO_ACTION_TARGET`, full filesystem path, non-empty for `rename` `SFTPGO_ACTION` and for some SSH commands
- `SFTPGO_ACTION_VIRTUAL_PATH`, virtual path, seen by SFTPGo users
- `SFTPGO_ACTION_VIRTUAL_TARGET`, virtual target path, seen by SFTPGo users
- `SFTPGO_ACTION_PATH`
- `SFTPGO_ACTION_TARGET`, non-empty for `rename` `SFTPGO_ACTION`
- `SFTPGO_ACTION_SSH_CMD`, non-empty for `ssh_cmd` `SFTPGO_ACTION`
- `SFTPGO_ACTION_FILE_SIZE`, non-zero for `pre-upload`, `upload`, `download` and `delete` actions if the file size is greater than `0`
- `SFTPGO_ACTION_FS_PROVIDER`, `0` for local filesystem, `1` for S3 backend, `2` for Google Cloud Storage (GCS) backend, `3` for Azure Blob Storage backend, `4` for local encrypted backend, `5` for SFTP backend
- `SFTPGO_ACTION_FILE_SIZE`, non-empty for `upload`, `download` and `delete` `SFTPGO_ACTION`
- `SFTPGO_ACTION_FS_PROVIDER`, `0` for local filesystem, `1` for S3 backend, `2` for Google Cloud Storage (GCS) backend, `3` for Azure Blob Storage backend
- `SFTPGO_ACTION_BUCKET`, non-empty for S3, GCS and Azure backends
- `SFTPGO_ACTION_ENDPOINT`, non-empty for S3, SFTP and Azure backend if configured. For Azure this is the endpoint, if configured
- `SFTPGO_ACTION_STATUS`, integer. Status for `upload`, `download` and `ssh_cmd` actions. 1 means no error, 2 means a generic error occurred, 3 means quota exceeded error
- `SFTPGO_ACTION_PROTOCOL`, string. Possible values are `SSH`, `SFTP`, `SCP`, `FTP`, `DAV`, `HTTP`, `DataRetention`
- `SFTPGO_ACTION_IP`, the action was executed from this IP address
- `SFTPGO_ACTION_OPEN_FLAGS`, integer. File open flags, can be non-zero for `pre-upload` action. If `SFTPGO_ACTION_FILE_SIZE` is greater than zero and `SFTPGO_ACTION_OPEN_FLAGS&512 == 0` the target file will not be truncated
- `SFTPGO_ACTION_TIMESTAMP`, int64. Event timestamp as nanoseconds since epoch
- `SFTPGO_ACTION_ENDPOINT`, non-empty for S3 and Azure backend if configured. For Azure this is the SAS URL, if configured, otherwise the endpoint
- `SFTPGO_ACTION_STATUS`, integer. 0 means a generic error occurred. 1 means no error, 2 means quota exceeded error
- `SFTPGO_ACTION_PROTOCOL`, string. Possible values are `SSH`, `SFTP`, `SCP`, `FTP`, `DAV`
Previous global environment variables aren't cleared when the script is called.
The program must finish within 30 seconds.
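As a minimal sketch, a hook program could simply log the event using the environment variables above; the log file path is an arbitrary example, not an SFTPGo default:

```shell
#!/bin/sh
# Append a one-line summary of the filesystem event to a log file.
echo "$(date -u +%FT%TZ) action=${SFTPGO_ACTION} user=${SFTPGO_ACTION_USERNAME} path=${SFTPGO_ACTION_PATH} status=${SFTPGO_ACTION_STATUS}" >> /var/log/sftpgo-events.log
```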
If the `hook` defines an HTTP URL then this URL will be invoked as HTTP POST. The request body will contain a JSON serialized struct with the following fields:
- `action`, string
- `username`, string
- `path`, string
- `target_path`, string, included for `rename` action and `sftpgo-copy` SSH command
- `virtual_path`, string, virtual path, seen by SFTPGo users
- `virtual_target_path`, string, virtual target path, seen by SFTPGo users
- `ssh_cmd`, string, included for `ssh_cmd` action
- `file_size`, int64, included for `pre-upload`, `upload`, `download`, `delete` actions if the file size is greater than `0`
- `fs_provider`, integer, `0` for local filesystem, `1` for S3 backend, `2` for Google Cloud Storage (GCS) backend, `3` for Azure Blob Storage backend, `4` for local encrypted backend, `5` for SFTP backend
- `bucket`, string, included for S3, GCS and Azure backends
- `endpoint`, string, included for S3, SFTP and Azure backend if configured
- `status`, integer. Status for `upload`, `download` and `ssh_cmd` actions. 1 means no error, 2 means a generic error occurred, 3 means quota exceeded error
- `protocol`, string. Possible values are `SSH`, `SFTP`, `SCP`, `FTP`, `DAV`, `HTTP`, `DataRetention`
- `ip`, string. The action was executed from this IP address
- `open_flags`, integer. File open flags, can be non-zero for `pre-upload` action. If `file_size` is greater than zero and `open_flags&512 == 0` the target file will not be truncated
- `timestamp`, int64. Event timestamp as nanoseconds since epoch
- `action`
- `username`
- `path`
- `target_path`, not null for `rename` action
- `ssh_cmd`, not null for `ssh_cmd` action
- `file_size`, not null for `upload`, `download`, `delete` actions
- `fs_provider`, `0` for local filesystem, `1` for S3 backend, `2` for Google Cloud Storage (GCS) backend, `3` for Azure Blob Storage backend
- `bucket`, not null for S3, GCS and Azure backends
- `endpoint`, not null for S3 and Azure backend if configured. For Azure this is the SAS URL, if configured, otherwise the endpoint
- `status`, integer. 0 means a generic error occurred. 1 means no error, 2 means quota exceeded error
- `protocol`, string. Possible values are `SSH`, `FTP`, `DAV`
The HTTP hook will use the global configuration for HTTP clients and will respect the retry configurations.
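To test an HTTP hook endpoint by hand you can simulate the POST body described above; the URL and all field values below are made up for illustration:

```shell
# Simulate the JSON body SFTPGo would POST to a filesystem-event hook.
curl -s -X POST -H "Content-Type: application/json" \
  -d '{"action":"upload","username":"user1","path":"/home/user1/file.txt","file_size":1024,"fs_provider":0,"status":1,"protocol":"SSH"}' \
  http://127.0.0.1:8000/notify
```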
The `pre-*` actions are always executed synchronously while the other ones are asynchronous. You can specify the actions to run synchronously via the `execute_sync` configuration key. Executing an action synchronously means that SFTPGo will not return a result code to the client (which is waiting for it) until your hook has completed its execution. If your hook takes a long time to complete, this could cause a timeout on the client side, which wouldn't receive the server response in a timely manner and could eventually drop the connection.
## Provider events
The `actions` struct inside the `data_provider` configuration section allows you to configure actions on data provider object add, update and delete events.
The supported object types are:
- `user`
- `admin`
- `api_key`
The `actions` struct inside the "data_provider" configuration section allows you to configure actions on user add, update, delete.
Actions will not be fired for internal updates, such as the last login or the user quota fields, or after external authentication.
If the `hook` defines a path to an external program, then this program can read the following environment variables:
If the `hook` defines a path to an external program, then this program is invoked with the following arguments:
- `SFTPGO_PROVIDER_ACTION`, supported values are `add`, `update`, `delete`
- `SFTPGO_PROVIDER_OBJECT_TYPE`, affected object type
- `SFTPGO_PROVIDER_OBJECT_NAME`, unique identifier for the affected object, for example username or key id
- `SFTPGO_PROVIDER_USERNAME`, the username that executed the action. There are two special usernames: `__self__` identifies a user/admin that updates itself and `__system__` identifies an action that does not have an explicit executor associated with it, for example users/admins can be added/updated by loading them from initial data
- `SFTPGO_PROVIDER_IP`, the action was executed from this IP address
- `SFTPGO_PROVIDER_TIMESTAMP`, event timestamp as nanoseconds since epoch
- `SFTPGO_PROVIDER_OBJECT`, object serialized as JSON with sensitive fields removed
- `action`, string, possible values are: `add`, `update`, `delete`
- `username`
- `ID`
- `status`
- `expiration_date`
- `home_dir`
- `uid`
- `gid`
The external program can also read the following environment variables:
- `SFTPGO_USER_ACTION`
- `SFTPGO_USER`, user serialized as JSON with sensitive fields removed
Previous global environment variables aren't cleared when the script is called.
The program must finish within 15 seconds.
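A minimal sketch using the 2.0.x variable names above; `jq` is an external dependency used here only to extract the username from the JSON, and the log path is an example:

```shell
#!/bin/sh
# Log which user was added/updated/deleted by the data provider.
user=$(echo "$SFTPGO_USER" | jq -r '.username')
echo "$(date -u +%FT%TZ) action=${SFTPGO_USER_ACTION} user=${user}" >> /var/log/sftpgo-provider.log
```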
If the `hook` defines an HTTP URL then this URL will be invoked as HTTP POST. The action, username, ip, object_type, object_name and timestamp are added to the query string, for example `<hook>?action=update&username=admin&ip=127.0.0.1&object_type=user&object_name=user1&timestamp=1633860803249`, and the full object is sent serialized as JSON inside the POST body with sensitive fields removed.
If the `hook` defines an HTTP URL then this URL will be invoked as HTTP POST. The action is added to the query string, for example `<hook>?action=update`, and the user is sent serialized as JSON inside the POST body with sensitive fields removed.
The HTTP hook will use the global configuration for HTTP clients and will respect the retry configurations.
The structure for SFTPGo objects can be found within the [OpenAPI schema](../openapi/openapi.yaml).
## Pub/Sub services
You can forward SFTPGo events to several publish/subscribe systems using the [sftpgo-plugin-pubsub](https://github.com/sftpgo/sftpgo-plugin-pubsub). The notifiers SFTPGo plugins are not suitable for interactive actions such as `pre-*` events. Their scope is to simply forward events to external services. A custom hook is a better choice if you need to react to `pre-*` events.
## Database services
You can store SFTPGo events in database systems using the [sftpgo-plugin-eventstore](https://github.com/sftpgo/sftpgo-plugin-eventstore) and you can search the stored events using the [sftpgo-plugin-eventsearch](https://github.com/sftpgo/sftpgo-plugin-eventsearch).

View File

@@ -1,8 +1,6 @@
# Data At Rest Encryption (DARE)
SFTPGo supports data at-rest encryption via its `cryptfs` virtual file system, in this mode SFTPGo transparently encrypts and decrypts data (to/from the local disk) on-the-fly during uploads and/or downloads, making sure that the files at-rest on the server-side are always encrypted.
Data At Rest Encryption is supported for local filesystem, for cloud storage backends you can use their server side encryption feature.
SFTPGo supports data at-rest encryption via its `cryptfs` virtual file system, in this mode SFTPGo transparently encrypts and decrypts data (to/from the disk) on-the-fly during uploads and/or downloads, making sure that the files at-rest on the server-side are always encrypted.
So, because of the way it works as described above, when you set up an encrypted filesystem for a user you need to make sure it points to an empty path/directory (one that contains no files). Otherwise, it would try to decrypt existing files that are not encrypted in the first place and fail.
@@ -14,7 +12,8 @@ The passphrase is stored encrypted itself according to your [KMS configuration](
The encrypted filesystem has some limitations compared to the local, unencrypted, one:
- Resuming uploads is not supported.
- Upload resume is not supported.
- Opening a file for both reading and writing at the same time is not supported, so clients that require advanced filesystem-like features such as `sshfs` are not supported either.
- Truncate is not supported.
- System commands such as `git` or `rsync` are not supported: they will store data unencrypted.
- Virtual folders are not implemented for now, if you are interested in this feature, please consider submitting a well written pull request (fully covered by test cases) or sponsoring this development. We could add a filesystem configuration to each virtual folder so we can mount encrypted or cloud backends as subfolders for local filesystems and vice versa.

View File

@@ -1,32 +0,0 @@
# Data retention hook
This hook runs after a data retention check completes if you include `Hook` among the notification methods when you start the check.
The `data_retention_hook` can be defined as the absolute path of your program or an HTTP URL.
If the hook defines an external program it can read the following environment variable:
- `SFTPGO_DATA_RETENTION_RESULT`, it contains the data retention check result JSON serialized.
Previous global environment variables aren't cleared when the script is called.
The program must finish within 20 seconds.
If the hook defines an HTTP URL then this URL will be invoked as HTTP POST and the POST body contains the data retention check result JSON serialized.
The HTTP hook will use the global configuration for HTTP clients and will respect the retry configurations.
Here is the schema for the data retention check result:
- `username`, string
- `status`, int. 1 means success, 0 error
- `start_time`, int64. Start time as UNIX timestamp in milliseconds
- `total_deleted_files`, int. Total number of files deleted
- `total_deleted_size`, int64. Total size deleted in bytes
- `elapsed`, int64. Elapsed time in milliseconds
- `details`, list of struct with details for each checked path, each struct contains the following fields:
- `path`, string
- `retention`, int. Retention time in hours
- `deleted_files`, int. Number of files deleted
- `deleted_size`, int64. Size deleted in bytes
- `info`, string. Informative, non-fatal message, if any. For example it can indicate that the check was skipped because the user doesn't have the required permissions on this path
- `error`, string. Error message if any
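A minimal hook sketch that extracts a few of these fields; `jq` is an external dependency, not shipped with SFTPGo:

```shell
#!/bin/sh
# Print a one-line summary of the retention check result.
echo "$SFTPGO_DATA_RETENTION_RESULT" | \
  jq -r '"user=\(.username) status=\(.status) deleted_files=\(.total_deleted_files) deleted_size=\(.total_deleted_size)"'
```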

View File

@@ -4,11 +4,10 @@ The built-in `defender` allows you to configure an auto-blocking policy for SFTP
If enabled it will protect SFTP, FTP and WebDAV services and it will automatically block hosts (IP addresses) that continually fail to log in or attempt to connect.
You can configure a score for the following events:
You can configure a score for each event type:
- `score_valid`, defines the score for valid login attempts, eg. user accounts that exist. Default `1`.
- `score_invalid`, defines the score for invalid login attempts, eg. non-existent user accounts or client disconnected for inactivity without authentication attempts. Default `2`.
- `score_limit_exceeded`, defines the score for hosts that exceeded the configured rate limits or the configured max connections per host. Default `3`.
And then you can configure:
@@ -16,9 +15,7 @@ And then you can configure:
- `threshold`, defines the threshold value before banning a host.
- `ban_time`, defines the time to ban a client, as minutes
So a host is banned, for `ban_time` minutes, if the sum of the scores has exceeded the defined threshold during the last observation time minutes.
By defining the scores, each type of event can be weighted. Let's see an example: if `score_invalid` is 3 and `threshold` is 8, a host will be banned after 3 login attempts with a non-existent user within the configured `observation_time`.
So a host is banned, for `ban_time` minutes, if it has exceeded the defined threshold during the last observation time minutes.
A banned IP has no score, it makes no sense to accumulate host events in memory for an already banned IP address.
@@ -28,10 +25,13 @@ The `ban_time_increment` is calculated as percentage of `ban_time`, so if `ban_t
The `defender` will keep in memory both the host scores and the banned hosts, you can limit the memory usage using the `entries_soft_limit` and `entries_hard_limit` configuration keys.
Using the REST API you can:
The REST API allows:
- list hosts within the defender's lists
- remove hosts from the defender's lists
- to retrieve the score for an IP address
- to retrieve the ban time for an IP address
- to unban an IP address
We don't return the whole list of the banned IP addresses or all stored scores because we store them as a hash map and iterating over all the keys of a hash map is not a fast operation and would slow down the recording of new events.
The `defender` can also load a permanent block list and/or a safe list of ip addresses/networks from a file:
@@ -52,7 +52,7 @@ Here is a small example:
"2001:db8::68"
],
"networks":[
"192.0.3.0/24",
"192.0.2.0/24",
"2001:db8:1234::/48"
]
}

View File

@@ -6,9 +6,9 @@ To enable dynamic user modification, you must set the absolute path of your prog
The external program can read the following environment variables to get info about the user trying to login:
- `SFTPGO_LOGIND_USER`, it contains the user trying to login serialized as JSON. A JSON serialized user id equal to zero means the user does not exist inside SFTPGo
- `SFTPGO_LOGIND_METHOD`, possible values are: `password`, `publickey`, `keyboard-interactive`, `TLSCertificate`
- `SFTPGO_LOGIND_METHOD`, possible values are: `password`, `publickey` and `keyboard-interactive`
- `SFTPGO_LOGIND_IP`, ip address of the user trying to login
- `SFTPGO_LOGIND_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`, `HTTP`
- `SFTPGO_LOGIND_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`
The program must write, on its standard output:
@@ -35,8 +35,6 @@ If an error happens while executing the hook then login will be denied.
"Dynamic user creation or modification" and "External Authentication" are mutually exclusive, they are quite similar, the difference is that "External Authentication" returns an already authenticated user while using "Dynamic users modification" you simply create or update a user. The authentication will be checked inside SFTPGo.
In other words while using "External Authentication" the external program receives the credentials of the user trying to login (for example the cleartext password) and it needs to validate them. While using "Dynamic users modification" the pre-login program receives the user stored inside the dataprovider (it includes the hashed password if any) and it can modify it, after the modification SFTPGo will check the credentials of the user trying to login.
You can disable the hook on a per-user basis.
Let's see a very basic example. Our sample program will grant access to the existing user `test_user` only in the time range 10:00-18:00. Other users will not be modified since the program will terminate with no output.
```shell
@@ -55,5 +53,3 @@ fi
```
Please note that this is a demo program and it might not work in all cases. For example, the username should be obtained by parsing the JSON serialized user and not by searching the username inside the JSON as shown here.
The structure for SFTPGo users can be found within the [OpenAPI schema](../openapi/openapi.yaml).
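For example, a more robust variant could parse the JSON serialized user instead of searching for the username, as suggested above; `jq` is an external dependency and `test_user` is the same example account used in the demo:

```shell
#!/bin/sh
# Extract the username from the JSON serialized user.
username=$(echo "$SFTPGO_LOGIND_USER" | jq -r '.username')
if [ "$username" != "test_user" ]; then
  # No output: the user is not modified.
  exit 0
fi
```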

View File

@@ -5,54 +5,37 @@ To enable external authentication, you must set the absolute path of your authen
The external program can read the following environment variables to get info about the user trying to authenticate:
- `SFTPGO_AUTHD_USERNAME`
- `SFTPGO_AUTHD_USER`, SFTPGo user serialized as JSON, empty if the user does not exist within the data provider
- `SFTPGO_AUTHD_IP`
- `SFTPGO_AUTHD_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`, `HTTP`
- `SFTPGO_AUTHD_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`
- `SFTPGO_AUTHD_PASSWORD`, not empty for password authentication
- `SFTPGO_AUTHD_PUBLIC_KEY`, not empty for public key authentication
- `SFTPGO_AUTHD_KEYBOARD_INTERACTIVE`, not empty for keyboard interactive authentication
- `SFTPGO_AUTHD_TLS_CERT`, TLS client certificate PEM encoded. Not empty for TLS certificate authentication
Previous global environment variables aren't cleared when the script is called. The content of these variables is _not_ quoted. They may contain special characters. They are under the control of a possibly malicious remote user.
The program can inspect the SFTPGo user, if it exists, using the `SFTPGO_AUTHD_USER` environment variable.
The program must write, on its standard output:
- a valid SFTPGo user serialized as JSON if the authentication succeeds. The user will be added/updated within the defined data provider
- an empty string, or no response at all, if authentication succeeds and the existing SFTPGo user does not need to be updated. Please note that in versions 2.0.x and earlier an empty response was interpreted as an authentication error
- a user with an empty username if the authentication fails
The program must write, on its standard output, a valid SFTPGo user serialized as JSON if the authentication succeeds or a user with an empty username if the authentication fails.
If the hook is an HTTP URL then it will be invoked as HTTP POST. The request body will contain a JSON serialized struct with the following fields:
- `username`
- `ip`
- `user`, SFTPGo user serialized as JSON, omitted if the user does not exist within the data provider
- `protocol`, possible values are `SSH`, `FTP`, `DAV`, `HTTP`
- `protocol`, possible values are `SSH`, `FTP`, `DAV`
- `password`, not empty for password authentication
- `public_key`, not empty for public key authentication
- `keyboard_interactive`, not empty for keyboard interactive authentication
- `tls_cert`, TLS client certificate PEM encoded. Not empty for TLS certificate authentication
If authentication succeeds the HTTP response code must be 200 and the response body can be:
If authentication succeeds the HTTP response code must be 200 and the response body must be a valid SFTPGo user serialized as JSON. If the authentication fails the HTTP response code must be != 200 or the response body must be empty.
- a valid SFTPGo user serialized as JSON. The user will be added/updated within the defined data provider
- empty, the existing SFTPGo user does not need to be updated. Please note that in versions 2.0.x and earlier an empty response was interpreted as an authentication error
If the authentication fails the HTTP response code must be != 200 or the returned SFTPGo user must have an empty username.
If the hook returns a user who is only allowed to authenticate using public key + password (multi-step authentication), your hook will be invoked for each authentication step, so it must validate the public key and password separately. SFTPGo will take care that the client uses the allowed sequence.
Actions defined for users added/updated will not be executed in this case and an already logged in user with the same username will not be disconnected.
If the authentication succeeds, the user will be automatically added/updated inside the defined data provider. Actions defined for users added/updated will not be executed in this case and an already logged in user with the same username will not be disconnected, you have to handle these things yourself.
The program hook must finish within 30 seconds, the HTTP hook timeout will use the global configuration for HTTP clients.
This method is slower than built-in authentication, but it's very flexible as anyone can easily write their own authentication hooks.
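As a bare-bones sketch, a hook could accept a single hard-coded account; the credentials and `home_dir` below are illustrative only, and a real hook would validate against LDAP, a database or similar:

```shell
#!/bin/sh
# Accept only test_user/secret and return a minimal SFTPGo user as JSON.
# Never hard-code credentials in a real deployment.
if [ "$SFTPGO_AUTHD_USERNAME" = "test_user" ] && [ "$SFTPGO_AUTHD_PASSWORD" = "secret" ]; then
  echo '{"status":1,"username":"test_user","home_dir":"/tmp/test_user","permissions":{"/":["*"]}}'
else
  echo '{"username":""}'
fi
```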
You can also restrict the authentication scope for the hook using the `external_auth_scope` configuration key:
- `0` means all supported authentication scopes. The external hook will be used for password, public key, keyboard interactive and TLS certificate authentication
- `0` means all supported authentication scopes. The external hook will be used for password, public key and keyboard interactive authentication
- `1` means passwords only
- `2` means public keys only
- `4` means keyboard interactive only
- `8` means TLS certificate only
You can combine the scopes. For example, 3 means password and public key, 5 means password and keyboard interactive, and so on.
@@ -68,10 +51,6 @@ else
fi
```
The structure for SFTPGo users can be found within the [OpenAPI schema](../openapi/openapi.yaml).
You can disable the hook on a per-user basis so that you can mix external and internal users.
An example authentication program that authenticates against an LDAP server can be found inside the source tree [ldapauth](../examples/ldapauth) directory.
An example server, usable as an HTTP authentication hook, that authenticates against an LDAP server can be found inside the source tree [ldapauthserver](../examples/ldapauthserver) directory.

View File

@@ -9,14 +9,11 @@ Usage:
sftpgo [command]
Available Commands:
gen A collection of useful generators
help Help about any command
initprovider Initialize and/or updates the configured data provider
portable Serve a single directory/account
revertprovider Revert the configured data provider to a previous version
serve Start the SFTPGo service
smtptest Test the SMTP configuration
startsubsys Use sftpgo as SFTP file transfer subsystem
gen A collection of useful generators
help Help about any command
initprovider Initializes and/or updates the configured data provider
portable Serve a single directory
serve Start the SFTP Server
Flags:
-h, --help help for sftpgo
@@ -39,7 +36,7 @@ The `serve` command supports the following flags:
- `--log-max-backups` int. Maximum number of old log files to retain. Default 5 or the value of `SFTPGO_LOG_MAX_BACKUPS` environment variable. It is unused if `log-file-path` is empty.
- `--log-max-size` int. Maximum size in megabytes of the log file before it gets rotated. Default 10 or the value of `SFTPGO_LOG_MAX_SIZE` environment variable. It is unused if `log-file-path` is empty.
- `--log-verbose` boolean. Enable verbose logs. Default `true` or the value of `SFTPGO_LOG_VERBOSE` environment variable (1 or `true`, 0 or `false`).
- `--log-utc-time` boolean. Enable UTC time for logging. Default `false` or the value of `SFTPGO_LOG_UTC_TIME` environment variable (1 or `true`, 0 or `false`)
- `--profiler` boolean. Enable the built-in profiler. The profiler will be accessible via HTTP/HTTPS using the base URL "/debug/pprof/". Default `false` or the value of `SFTPGO_PROFILER` environment variable (1 or `true`, 0 or `false`).
Log file can be rotated on demand sending a `SIGUSR1` signal on Unix based systems and using the command `sftpgo service rotatelogs` on Windows.
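For example, on a Unix system with a single running instance (assuming `pidof` is available):

```shell
# Ask the running sftpgo process to rotate its log file.
kill -USR1 "$(pidof sftpgo)"
```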
@@ -53,26 +50,20 @@ The configuration file contains the following sections:
- **"common"**, configuration parameters shared among all the supported protocols
- `idle_timeout`, integer. Time in minutes after which an idle client will be disconnected. 0 means disabled. Default: 15
- `upload_mode` integer. 0 means standard: the files are uploaded directly to the requested path. 1 means atomic: files are uploaded to a temporary path and renamed to the requested path when the client ends the upload. Atomic mode avoids problems such as a web server that serves partial files when the files are being uploaded. In atomic mode, if there is an upload error, the temporary file is deleted and so the requested upload path will not contain a partial file. 2 means atomic with resume support: same as atomic but if there is an upload error, the temporary file is renamed to the requested path and not deleted. This way, a client can reconnect and resume the upload. Default: 0
- `upload_mode` integer. 0 means standard: the files are uploaded directly to the requested path. 1 means atomic: files are uploaded to a temporary path and renamed to the requested path when the client ends the upload. Atomic mode avoids problems such as a web server that serves partial files when the files are being uploaded. In atomic mode, if there is an upload error, the temporary file is deleted and so the requested upload path will not contain a partial file. 2 means atomic with resume support: same as atomic but if there is an upload error, the temporary file is renamed to the requested path and not deleted. This way, a client can reconnect and resume the upload.
- `actions`, struct. It contains the command to execute and/or the HTTP URL to notify and the trigger conditions. See [Custom Actions](./custom-actions.md) for more details
- `execute_on`, list of strings. Valid values are `pre-download`, `download`, `pre-upload`, `upload`, `pre-delete`, `delete`, `rename`, `ssh_cmd`. Leave empty to disable actions.
- `execute_sync`, list of strings. Actions to be performed synchronously. The `pre-delete` action is always executed synchronously while the other ones are asynchronous. Executing an action synchronously means that SFTPGo will not return a result code to the client (which is waiting for it) until your hook has completed its execution. Leave empty to execute only the `pre-delete` hook synchronously
- `execute_on`, list of strings. Valid values are `download`, `upload`, `pre-delete`, `delete`, `rename`, `ssh_cmd`. Leave empty to disable actions.
- `hook`, string. Absolute path to the command to execute or HTTP URL to notify.
- `setstat_mode`, integer. 0 means "normal mode": requests for changing permissions, owner/group and access/modification times are executed. 1 means "ignore mode": requests for changing permissions, owner/group and access/modification times are silently ignored. 2 means "ignore mode for cloud based filesystems": requests for changing permissions, owner/group and access/modification times are silently ignored for cloud filesystems and executed for local filesystem.
- `temp_path`, string. Defines the path for temporary files such as those used for atomic uploads or file pipes. If you set this option you must make sure that the defined path exists, is accessible for writing by the user running SFTPGo, and is on the same filesystem as the users' home directories, otherwise the renaming for atomic uploads will become a copy and therefore may take a long time. The temporary files are not namespaced. The default is generally fine. Leave empty for the default.
- `proxy_protocol`, integer. Support for [HAProxy PROXY protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt). If you are running SFTPGo behind a proxy server such as HAProxy, AWS ELB or NGINX, you can enable the proxy protocol. It provides a convenient way to safely transport connection information such as a client's address across multiple layers of NAT or TCP proxies to get the real client IP address instead of the proxy IP. Both protocol versions 1 and 2 are supported. If the proxy protocol is enabled in SFTPGo then you have to enable the protocol in your proxy configuration too. For example, for HAProxy, add `send-proxy` or `send-proxy-v2` to each server configuration line. The following modes are supported:
- 0, disabled
- 1, enabled. If the upstream IP is not allowed to send a proxy header the header will be ignored. Using this mode does not mean that we can accept connections with and without the proxy header. We always try to read the proxy header and we ignore it if the upstream IP is not allowed to send a proxy header
- 2, required. If the upstream IP is not allowed to send a proxy header the connection will be rejected
- 1, enabled. Proxy header will be used and requests without proxy header will be accepted
- 2, required. Proxy header will be used and requests without proxy header will be rejected
- `proxy_allowed`, List of IP addresses and IP ranges allowed to send the proxy header:
- If `proxy_protocol` is set to 1 and we receive a proxy header from an IP that is not in the list then the connection will be accepted and the header will be ignored
- If `proxy_protocol` is set to 2 and we receive a proxy header from an IP that is not in the list then the connection will be rejected
- `startup_hook`, string. Absolute path to an external program or an HTTP URL to invoke as soon as SFTPGo starts. If you define an HTTP URL it will be invoked using a `GET` request. Please note that SFTPGo services may not yet be available when this hook is run. Leave empty to disable
- `post_connect_hook`, string. Absolute path to the command to execute or HTTP URL to notify. See [Post-connect hook](./post-connect-hook.md) for more details. Leave empty to disable
- `post_disconnect_hook`, string. Absolute path to the command to execute or HTTP URL to notify. See [Post-disconnect hook](./post-disconnect-hook.md) for more details. Leave empty to disable
- `data_retention_hook`, string. Absolute path to the command to execute or HTTP URL to notify. See [Data retention hook](./data-retention-hook.md) for more details. Leave empty to disable
- `max_total_connections`, integer. Maximum number of concurrent client connections. 0 means unlimited. Default: 0.
- `max_per_host_connections`, integer. Maximum number of concurrent client connections from the same host (IP). If the defender is enabled, exceeding this limit will generate `score_limit_exceeded` events and thus hosts that repeatedly exceed the max allowed connections can be automatically blocked. 0 means unlimited. Default: 20.
- `post_connect_hook`, string. Absolute path to the command to execute or HTTP URL to notify. See [Post connect hook](./post-connect-hook.md) for more details. Leave empty to disable
- `max_total_connections`, integer. Maximum number of concurrent client connections. 0 means unlimited
- `defender`, struct containing the defender configuration. See [Defender](./defender.md) for more details.
- `enabled`, boolean. Default `false`.
- `ban_time`, integer. Ban time in minutes.
@@ -80,58 +71,52 @@ The configuration file contains the following sections:
- `threshold`, integer. Threshold value for banning a client.
- `score_invalid`, integer. Score for invalid login attempts, eg. non-existent user accounts or client disconnected for inactivity without authentication attempts.
- `score_valid`, integer. Score for valid login attempts, eg. user accounts that exist.
- `score_limit_exceeded`, integer. Score for hosts that exceeded the configured rate limits or the maximum, per-host, allowed connections.
- `observation_time`, integer. Defines the time window, in minutes, for tracking client errors. A host is banned if it has exceeded the defined threshold during the last observation time minutes.
- `entries_soft_limit`, integer.
- `entries_hard_limit`, integer. The number of banned IPs and host scores kept in memory will vary between the soft and hard limit.
- `safelist_file`, string. Path to a file containing a list of ip addresses and/or networks to never ban.
- `blocklist_file`, string. Path to a file containing a list of ip addresses and/or networks to always ban. The lists can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows. A host that is already banned will not be automatically unbanned if you put it inside the safe list; you have to unban it using the REST API.
- `rate_limiters`, list of structs containing the rate limiters configuration. Take a look [here](./rate-limiting.md) for more details. Each struct has the following fields:
- `average`, integer. Average defines the maximum rate allowed. 0 means disabled. Default: 0
- `period`, integer. Period defines the period in milliseconds. The rate is actually defined by dividing average by period. Default: 1000 (1 second).
- `burst`, integer. Burst defines the maximum number of requests allowed to go through in the same arbitrarily small period of time. Default: 1
- `type`, integer. 1 means a global rate limiter, independent from the source host. 2 means a per-ip rate limiter. Default: 2
- `protocols`, list of strings. Available protocols are `SSH`, `FTP`, `DAV`, `HTTP`. By default all supported protocols are enabled
- `allow_list`, list of IP addresses and IP ranges excluded from rate limiting. Default: empty
- `generate_defender_events`, boolean. If `true`, and the defender is enabled, and this is not a global rate limiter, a new defender event will be generated each time the configured limit is exceeded. Default `false`
- `entries_soft_limit`, integer.
- `entries_hard_limit`, integer. The number of per-ip rate limiters kept in memory will vary between the soft and hard limit
- **"sftpd"**, the configuration for the SFTP server
- `bindings`, list of structs. Each struct has the following fields:
- `port`, integer. The port used for serving SFTP requests. 0 means disabled. Default: 2022
- `address`, string. Leave blank to listen on all available network interfaces. Default: ""
- `apply_proxy_config`, boolean. If enabled the common proxy configuration, if any, will be applied. Default `true`
- `bind_port`, integer. Deprecated, please use `bindings`
- `bind_address`, string. Deprecated, please use `bindings`
- `idle_timeout`, integer. Deprecated, please use the same key in `common` section.
- `max_auth_tries` integer. Maximum number of authentication attempts permitted per connection. If set to a negative number, the number of attempts is unlimited. If set to zero, the number of attempts is limited to 6.
- `banner`, string. Identification string used by the server. Leave empty to use the default banner. Default `SFTPGo_<version>`, for example `SSH-2.0-SFTPGo_0.9.5`
- `upload_mode` integer. Deprecated, please use the same key in `common` section.
- `actions`, struct. Deprecated, please use the same key in `common` section.
- `keys`, struct array. Deprecated, please use `host_keys`.
- `private_key`, path to the private key file. It can be a path relative to the config dir or an absolute one.
- `host_keys`, list of strings. It contains the daemon's private host keys. Each host key can be defined as a path relative to the configuration directory or an absolute one. If empty, the daemon will search or try to generate `id_rsa`, `id_ecdsa` and `id_ed25519` keys inside the configuration directory. If you configure absolute paths to files named `id_rsa`, `id_ecdsa` and/or `id_ed25519` then SFTPGo will try to generate these keys using the default settings.
- `kex_algorithms`, list of strings. Available KEX (Key Exchange) algorithms in preference order. Leave empty to use default values. The supported values can be found here: [`crypto/ssh`](https://github.com/golang/crypto/blob/master/ssh/common.go#L46 "Supported kex algos")
- `ciphers`, list of strings. Allowed ciphers. Leave empty to use default values. The supported values can be found here: [crypto/ssh](https://github.com/golang/crypto/blob/master/ssh/common.go#L28 "Supported ciphers")
- `macs`, list of strings. Available MAC (message authentication code) algorithms in preference order. Leave empty to use default values. The supported values can be found here: [crypto/ssh](https://github.com/golang/crypto/blob/master/ssh/common.go#L84 "Supported MACs")
- `trusted_user_ca_keys`, list of public keys paths of certificate authorities that are trusted to sign user certificates for authentication. The paths can be absolute or relative to the configuration directory.
- `login_banner_file`, path to the login banner file. The contents of the specified file, if any, are sent to the remote user before authentication is allowed. It can be a path relative to the config dir or an absolute one. Leave empty to disable login banner.
- `setstat_mode`, integer. Deprecated, please use the same key in `common` section.
- `enabled_ssh_commands`, list of enabled SSH commands. `*` enables all supported commands. More information can be found [here](./ssh-commands.md).
- `keyboard_interactive_authentication`, boolean. This setting specifies whether keyboard interactive authentication is allowed. If no keyboard interactive hook or auth plugin is defined the default is to prompt for the user password and then the one time authentication code, if defined. Default: `false`.
- `keyboard_interactive_auth_hook`, string. Absolute path to an external program or an HTTP URL to invoke for keyboard interactive authentication. See [Keyboard Interactive Authentication](./keyboard-interactive.md) for more details.
- `password_authentication`, boolean. Set to false to disable password authentication. This setting will disable multi-step authentication method using public key + password too. It is useful for public key only configurations if you need to manage old clients that will not attempt to authenticate with public keys if the password login method is advertised. Default: `true`.
- `folder_prefix`, string. Virtual root folder prefix to include in all file operations (ex: `/files`). The virtual paths used for per-directory permissions, file patterns etc. must not include the folder prefix. The prefix is only applied to SFTP requests (in SFTP server mode), SCP and other SSH commands will be automatically disabled if you configure a prefix. The prefix is ignored while running as OpenSSH's SFTP subsystem. This setting can help some specific migrations from SFTP servers based on OpenSSH and it is not recommended for general usage. Default: empty.
- `password_authentication`, boolean. Set to false to disable password authentication. This setting will disable multi-step authentication method using public key + password too. It is useful for public key only configurations if you need to manage old clients that will not attempt to authenticate with public keys if the password login method is advertised. Default: true.
- `proxy_protocol`, integer. Deprecated, please use the same key in `common` section.
- `proxy_allowed`, list of strings. Deprecated, please use the same key in `common` section.
- **"ftpd"**, the configuration for the FTP server
- `bindings`, list of structs. Each struct has the following fields:
- `port`, integer. The port used for serving FTP requests. 0 means disabled. Default: 0.
- `address`, string. Leave blank to listen on all available network interfaces. Default: "".
- `apply_proxy_config`, boolean. If enabled the common proxy configuration, if any, will be applied. Please note that we expect the proxy header on control and data connections. Default `true`.
- `apply_proxy_config`, boolean. If enabled the common proxy configuration, if any, will be applied. Default `true`.
- `tls_mode`, integer. 0 means accept both cleartext and encrypted sessions. 1 means TLS is required for both control and data connection. 2 means implicit TLS. Do not enable this blindly: please check that a proper TLS config is in place if you set `tls_mode` to a value different from 0.
- `force_passive_ip`, ip address. External IP address to expose for passive connections. Leave empty to autodetect. If not empty, it must be a valid IPv4 address. Default: "".
- `passive_ip_overrides`, list of struct that allows to return a different passive ip based on the client IP address. Each struct has the following fields:
- `networks`, list of strings. Each string must define a network in CIDR notation, for example 192.168.1.0/24.
- `ip`, string. Passive IP to return if the client IP address belongs to the defined networks. Empty means autodetect.
- `client_auth_type`, integer. Set to `1` to require a client certificate and verify it. Set to `2` to request a client certificate during the TLS handshake and verify it if given; in this mode the client is allowed not to send a certificate. At least one certification authority must be defined in order to verify client certificates. If no certification authority is defined, this setting is ignored. Default: 0.
- `force_passive_ip`, ip address. External IP address to expose for passive connections. Leave empty to autodetect. Default: "".
- `client_auth_type`, integer. Set to `1` to require client certificate authentication in addition to FTP authentication. You need to define at least a certificate authority for this to work. Default: 0.
- `tls_cipher_suites`, list of strings. List of supported cipher suites for TLS version 1.2. If empty, a default list of secure cipher suites is used, with a preference order based on hardware performance. Note that TLS 1.3 ciphersuites are not configurable. The supported ciphersuites names are defined [here](https://github.com/golang/go/blob/master/src/crypto/tls/cipher_suites.go#L52). Any invalid name will be silently ignored. The order matters, the ciphers listed first will be the preferred ones. Default: empty.
- `passive_connections_security`, integer. Defines the security checks for passive data connections. Set to `0` to require matching peer IP addresses of control and data connection. Set to `1` to disable any checks. Please note that if you run the FTP service behind a proxy you must enable the proxy protocol for control and data connections. Default: `0`.
- `active_connections_security`, integer. Defines the security checks for active data connections. The supported values are the same as described for `passive_connections_security`. Please note that by disabling the security checks you make the FTP service vulnerable to bounce attacks on active data connections, so change the default value only if you are on a trusted/internal network. Default: `0`.
- `debug`, boolean. If enabled, any FTP command will be logged. This will generate a lot of logs. Enable it only if you are investigating a client compatibility issue or something similar. You shouldn't leave this setting enabled on production servers. Default `false`.
- `bind_port`, integer. Deprecated, please use `bindings`
- `bind_address`, string. Deprecated, please use `bindings`
- `banner`, string. Greeting banner displayed when a connection first comes in. Leave empty to use the default banner. Default `SFTPGo <version> ready`, for example `SFTPGo 1.0.0-dev ready`.
- `banner_file`, path to the banner file. The contents of the specified file, if any, are displayed when someone connects to the server. It can be a path relative to the config dir or an absolute one. If set, it overrides the banner string provided by the `banner` option. Leave empty to disable.
- `active_transfers_port_non_20`, boolean. Do not impose port 20 for active data transfers. Enabling this option allows running SFTPGo with fewer privileges. Default: false.
- `force_passive_ip`, ip address. Deprecated, please use `bindings`
- `passive_port_range`, struct containing the key `start` and `end`. Port Range for data connections. Random if not specified. Default range is 50000-50100.
- `disable_active_mode`, boolean. Set to `true` to disable active FTP, default `false`.
- `enable_site`, boolean. Set to true to enable the FTP SITE command. We support `chmod` and `symlink` if SITE support is enabled. Default `false`
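As an illustration of the binding options above, here is a minimal sketch of an `ftpd` binding that requires TLS and returns a different passive IP to clients on a private network; all IP addresses are placeholders:

```json
"ftpd": {
  "bindings": [
    {
      "port": 2121,
      "address": "",
      "apply_proxy_config": true,
      "tls_mode": 1,
      "force_passive_ip": "203.0.113.10",
      "passive_ip_overrides": [
        {
          "networks": ["192.168.1.0/24"],
          "ip": "192.168.1.10"
        }
      ],
      "client_auth_type": 0,
      "tls_cipher_suites": [],
      "passive_connections_security": 0,
      "active_connections_security": 0,
      "debug": false
    }
  ]
}
```

With this sketch, clients connecting from `192.168.1.0/24` would receive `192.168.1.10` as passive IP, while everyone else would receive `203.0.113.10`.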
@@ -141,15 +126,16 @@ The configuration file contains the following sections:
- `certificate_key_file`, string. Private key matching the above certificate. This can be an absolute path or a path relative to the config dir. A certificate and the private key are required to enable explicit and implicit TLS. Certificate and key files can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows.
- `ca_certificates`, list of strings. Set of root certificate authorities to be used to verify client certificates.
- `ca_revocation_lists`, list of strings. Set of revocation lists, one for each root CA, to be used to check whether a client certificate has been revoked. The revocation lists can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows.
- `tls_mode`, integer. Deprecated, please use `bindings`
- **"webdavd"**, the configuration for the WebDAV server, more info [here](./webdav.md)
- `bindings`, list of structs. Each struct has the following fields:
- `port`, integer. The port used for serving WebDAV requests. 0 means disabled. Default: 0.
- `address`, string. Leave blank to listen on all available network interfaces. Default: "".
- `enable_https`, boolean. Set to `true` and provide both a certificate and a key file to enable HTTPS connection for this binding. Default `false`.
- `client_auth_type`, integer. Set to `1` to require a client certificate and verify it. Set to `2` to request a client certificate during the TLS handshake and verify it if given, in this mode the client is allowed not to send a certificate. At least one certification authority must be defined in order to verify client certificates. If no certification authority is defined, this setting is ignored. Default: 0.
- `tls_cipher_suites`, list of strings. List of supported cipher suites for TLS version 1.2. If empty, a default list of secure cipher suites is used, with a preference order based on hardware performance. Note that TLS 1.3 ciphersuites are not configurable. The supported ciphersuites names are defined [here](https://github.com/golang/go/blob/master/src/crypto/tls/cipher_suites.go#L52). Any invalid name will be silently ignored. The order matters, the ciphers listed first will be the preferred ones. Default: empty.
- `prefix`, string. Prefix for WebDAV resources, if empty WebDAV resources will be available at the `/` URI. If defined it must be an absolute URI, for example `/dav`. Default: "".
- `proxy_allowed`, list of IP addresses and IP ranges allowed to set `X-Forwarded-For`, `X-Real-IP`, `CF-Connecting-IP`, `True-Client-IP` headers. Any of the indicated headers, if set on requests from a connection address not in this list, will be silently ignored. Default: empty.
- `bind_port`, integer. Deprecated, please use `bindings`.
- `bind_address`, string. Deprecated, please use `bindings`.
- `certificate_file`, string. Certificate for WebDAV over HTTPS. This can be an absolute path or a path relative to the config dir.
- `certificate_key_file`, string. Private key matching the above certificate. This can be an absolute path or a path relative to the config dir. A certificate and a private key are required to enable HTTPS connections. Certificate and key files can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows.
- `ca_certificates`, list of strings. Set of root certificate authorities to be used to verify client certificates.
@@ -167,7 +153,7 @@ The configuration file contains the following sections:
- `expiration_time`, integer. Expiration time, in minutes, for the cached users. 0 means unlimited. Default: 0.
- `max_size`, integer. Maximum number of users to cache. 0 means unlimited. Default: 50.
- **"data_provider"**, the configuration for the data provider
- `driver`, string. Supported drivers are `sqlite`, `mysql`, `postgresql`, `cockroachdb`, `bolt`, `memory`
- `name`, string. Database name. For driver `sqlite` this can be the database name relative to the config dir or the absolute path to the SQLite database. For driver `memory` this is the (optional) path relative to the config dir or the absolute path to the provider dump, obtained using the `dumpdata` REST API, to load. This dump will be loaded at startup and can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows. The `memory` provider will not modify the provided file so quota usage and last login will not be persisted. If you plan to use a SQLite database over a `cifs` network share (this is not recommended in general) you must use the `nobrl` mount option otherwise you will get the `database is locked` error. Some users reported that the `bolt` provider works fine over `cifs` shares.
- `host`, string. Database host. Leave empty for drivers `sqlite`, `bolt` and `memory`
- `port`, integer. Database port. Leave empty for drivers `sqlite`, `bolt` and `memory`
@@ -180,71 +166,46 @@ The configuration file contains the following sections:
- 0, disable quota tracking. REST API to scan users home directories/virtual folders and update quota will do nothing
- 1, quota is updated each time a user uploads or deletes a file, even if the user has no quota restrictions
- 2, quota is updated each time a user uploads or deletes a file, but only for users with quota restrictions and for virtual folders. With this configuration, the `quota scan` and `folder_quota_scan` REST API can still be used to periodically update space usage for users without quota restrictions and for folders
- `delayed_quota_update`, integer. This configuration parameter defines the number of seconds to accumulate quota updates. If there are many uploads close together in time, accumulating quota updates can save many queries to the data provider. If you want to track quotas, a scheduled quota update is recommended in any case: the stored quota may be incorrect for several reasons, such as an unexpected shutdown while uploading files, temporary provider failures, files copied outside of SFTPGo, and so on. You could use the [quotascan example](../examples/quotascan) as a starting point. 0 means immediate quota update.
- `pool_size`, integer. Sets the maximum number of open connections for `mysql` and `postgresql` driver. Default 0 (unlimited)
- `users_base_dir`, string. Users default base directory. If no home dir is defined while adding a new user, and this value is a valid absolute path, then the user home dir will be automatically defined as the path obtained joining the base dir and the username
- `actions`, struct. It contains the command to execute and/or the HTTP URL to notify and the trigger conditions. See [Custom Actions](./custom-actions.md) for more details
- `execute_on`, list of strings. Valid values are `add`, `update`, `delete`. `update` action will not be fired for internal updates such as the last login or the user quota fields.
- `execute_for`, list of strings. Defines the provider objects that trigger the action. Valid values are `user`, `admin`, `api_key`.
- `hook`, string. Absolute path to the command to execute or HTTP URL to notify.
- `external_auth_program`, string. Deprecated, please use `external_auth_hook`.
- `external_auth_hook`, string. Absolute path to an external program or an HTTP URL to invoke for users authentication. See [External Authentication](./external-auth.md) for more details. Leave empty to disable.
- `external_auth_scope`, integer. 0 means all supported authentication scopes (passwords, public keys and keyboard interactive). 1 means passwords only. 2 means public keys only. 4 means key keyboard interactive only. 8 means TLS certificate. The flags can be combined, for example 6 means public keys and keyboard interactive
- `credentials_path`, string. It defines the directory for storing user provided credential files such as Google Cloud Storage credentials. This can be an absolute path or a path relative to the config dir
- `prefer_database_credentials`, boolean. When true, users' Google Cloud Storage credentials will be written to the data provider instead of disk, though pre-existing credentials on disk will be used as a fallback. When false, they will be written to the directory specified by `credentials_path`.
- `pre_login_program`, string. Deprecated, please use `pre_login_hook`.
- `pre_login_hook`, string. Absolute path to an external program or an HTTP URL to invoke to modify user details just before the login. See [Dynamic user modification](./dynamic-user-mod.md) for more details. Leave empty to disable.
- `post_login_hook`, string. Absolute path to an external program or an HTTP URL to invoke to notify a successful or failed login. See [Post-login hook](./post-login-hook.md) for more details. Leave empty to disable.
- `post_login_scope`, defines the scope for the post-login hook. 0 means notify both failed and successful logins. 1 means notify failed logins. 2 means notify successful logins.
- `check_password_hook`, string. Absolute path to an external program or an HTTP URL to invoke to check the user provided password. See [Check password hook](./check-password-hook.md) for more details. Leave empty to disable.
- `check_password_scope`, defines the scope for the check password hook. 0 means all protocols, 1 means SSH, 2 means FTP, 4 means WebDAV. You can combine the scopes, for example 6 means FTP and WebDAV.
- `password_hashing`, struct. It contains the configuration parameters to be used to generate the password hash. SFTPGo can verify passwords in several formats and uses, by default, the `bcrypt` algorithm to hash passwords in plain-text before storing them inside the data provider. These options allow you to customize how the hash is generated.
- `argon2_options`, struct containing the options for argon2id hashing algorithm. The `memory` and `iterations` parameters control the computational cost of hashing the password. The higher these figures are, the greater the cost of generating the hash and the longer the runtime. It also follows that the greater the cost will be for any attacker trying to guess the password. If the code is running on a machine with multiple cores, then you can decrease the runtime without reducing the cost by increasing the `parallelism` parameter. This controls the number of threads that the work is spread across.
- `memory`, unsigned integer. The amount of memory used by the algorithm (in kibibytes). Default: 65536.
- `iterations`, unsigned integer. The number of iterations over the memory. Default: 1.
- `parallelism`, unsigned 8-bit integer. The number of threads (or lanes) used by the algorithm. Default: 2.
- `bcrypt_options`, struct containing the options for bcrypt hashing algorithm
- `cost`, integer between 4 and 31. Default: 10
- `algo`, string. Algorithm to use for hashing passwords. Available algorithms: `argon2id`, `bcrypt`. For bcrypt hashing we use the `$2a$` prefix. Default: `bcrypt`
- `password_validation` struct. It defines the password validation rules for admins and protocol users.
- `admins`, struct. It defines the password validation rules for SFTPGo admins.
- `min_entropy`, float. Defines the minimum password entropy. Take a look [here](https://github.com/wagslane/go-password-validator#what-entropy-value-should-i-use) for more details. `0` means disabled, any password will be accepted. Default: `0`.
- `users`, struct. It defines the password validation rules for SFTPGo protocol users.
- `min_entropy`, float. Default: `0`.
- `password_caching`, boolean. Verifying argon2id passwords has a high memory and computational cost, and verifying bcrypt passwords has a high computational cost; by enabling in-memory password caching you reduce these costs. Default: `true`
- `update_mode`, integer. Defines how the database will be initialized/updated. 0 means automatically. 1 means manually using the initprovider sub-command.
- `skip_natural_keys_validation`, boolean. If `true` you can use any UTF-8 character for natural keys as username, admin name, folder name. These keys are used in URIs for REST API and Web admin. If `false` only unreserved URI characters are allowed: ALPHA / DIGIT / "-" / "." / "_" / "~". Default: `false`.
- `create_default_admin`, boolean. Before you can use SFTPGo you need to create an admin account. If you open the admin web UI, a setup screen will guide you in creating the first admin account. You can automatically create the first admin account by enabling this setting and setting the environment variables `SFTPGO_DEFAULT_ADMIN_USERNAME` and `SFTPGO_DEFAULT_ADMIN_PASSWORD`. You can also create the first admin by loading initial data. This setting has no effect if an admin account is already found within the data provider. Default `false`.
- `is_shared`, integer. If the data provider is shared across multiple SFTPGo instances, set this parameter to `1`. `MySQL`, `PostgreSQL` and `CockroachDB` can be shared, this setting is ignored for other data providers. For shared data providers, SFTPGo periodically reloads the latest updated users, based on the `updated_at` field, and updates its internal caches if users are updated from a different instance. This check, if enabled, is executed every 10 minutes. Default: `0`.
- **"httpd"**, the configuration for the HTTP server used to serve REST API and to expose the built-in web interface
- `bindings`, list of structs. Each struct has the following fields:
- `port`, integer. The port used for serving HTTP requests. Default: 8080.
- `address`, string. Leave blank to listen on all available network interfaces. On \*NIX you can specify an absolute path to listen on a Unix-domain socket. Default: "127.0.0.1".
- `enable_web_admin`, boolean. Set to `false` to disable the built-in web admin for this binding. You also need to define `templates_path` and `static_files_path` to use the built-in web admin interface. Default `true`.
- `enable_web_client`, boolean. Set to `false` to disable the built-in web client for this binding. You also need to define `templates_path` and `static_files_path` to use the built-in web client interface. Default `true`.
- `enable_https`, boolean. Set to `true` and provide both a certificate and a key file to enable HTTPS connection for this binding. Default `false`.
- `client_auth_type`, integer. Set to `1` to require client certificate authentication in addition to JWT/Web authentication. You need to define at least a certificate authority for this to work. Default: 0.
- `tls_cipher_suites`, list of strings. List of supported cipher suites for TLS version 1.2. If empty, a default list of secure cipher suites is used, with a preference order based on hardware performance. Note that TLS 1.3 ciphersuites are not configurable. The supported ciphersuites names are defined [here](https://github.com/golang/go/blob/master/src/crypto/tls/cipher_suites.go#L52). Any invalid name will be silently ignored. The order matters, the ciphers listed first will be the preferred ones. Default: empty.
- `proxy_allowed`, list of IP addresses and IP ranges allowed to set `X-Forwarded-For`, `X-Real-IP`, `X-Forwarded-Proto`, `CF-Connecting-IP`, `True-Client-IP` headers. Any of the indicated headers, if set on requests from a connection address not in this list, will be silently ignored. Default: empty.
- `hide_login_url`, integer. If both the web admin and the web client are enabled, each login page shows a link to the other one. This setting allows you to hide this link. 0 means that the login links are displayed on both the admin and client login pages. This is the default. 1 means that the link to the web client login page is hidden on the admin login page. 2 means that the link to the web admin login page is hidden on the client login page. The flags can be combined, for example 3 will disable both login links.
- `render_openapi`, boolean. Set to `false` to disable serving of the OpenAPI schema and renderer. Default `true`.
- `bind_port`, integer. Deprecated, please use `bindings`.
- `bind_address`, string. Deprecated, please use `bindings`. Leave blank to listen on all available network interfaces. On \*NIX you can specify an absolute path to listen on a Unix-domain socket. Default: "127.0.0.1"
- `templates_path`, string. Path to the HTML web templates. This can be an absolute path or a path relative to the config dir
- `static_files_path`, string. Path to the static files for the web interface. This can be an absolute path or a path relative to the config dir. If both `templates_path` and `static_files_path` are empty the built-in web interface will be disabled
- `backups_path`, string. Path to the backup directory. This can be an absolute path or a path relative to the config dir. We don't allow backups in arbitrary paths for security reasons
- `openapi_path`, string. Path to the directory that contains the OpenAPI schema and the default renderer. This can be an absolute path or a path relative to the config dir. If empty the OpenAPI schema and the renderer will not be served regardless of the `render_openapi` directive
- `web_root`, string. Defines a base URL for the web admin and client interfaces. If empty web admin and client resources will be available at the root ("/") URI. If defined it must be an absolute URI or it will be ignored
- `certificate_file`, string. Certificate for HTTPS. This can be an absolute path or a path relative to the config dir.
- `certificate_key_file`, string. Private key matching the above certificate. This can be an absolute path or a path relative to the config dir. If both the certificate and the private key are provided, the server will expect HTTPS connections. Certificate and key files can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows.
- `ca_certificates`, list of strings. Set of root certificate authorities to be used to verify client certificates.
- `ca_revocation_lists`, list of strings. Set of revocation lists, one for each root CA, to be used to check whether a client certificate has been revoked. The revocation lists can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows.
- `signing_passphrase`, string. Passphrase to use to derive the signing key for JWT and CSRF tokens. If empty a random signing key will be generated each time SFTPGo starts. If you set a signing passphrase you should consider rotating it periodically for added security.
- `max_upload_file_size`, integer. Defines the maximum request body size, in bytes, for Web Client/API HTTP upload requests. 0 means no limit. Default: 1048576000.
- `cors` struct containing CORS configuration. SFTPGo uses [Go CORS handler](https://github.com/rs/cors), please refer to upstream documentation for fields meaning and their default values.
- `enabled`, boolean, set to true to enable CORS.
- `allowed_origins`, list of strings.
- `allowed_methods`, list of strings.
- `allowed_headers`, list of strings.
- `exposed_headers`, list of strings.
- `allow_credentials` boolean.
- `max_age`, integer.
- **"telemetry"**, the configuration for the telemetry server, more details [below](#telemetry-server)
- `bind_port`, integer. The port used for serving HTTP requests. Set to 0 to disable HTTP server. Default: 10000
- `bind_address`, string. Leave blank to listen on all available network interfaces. On \*NIX you can specify an absolute path to listen on a Unix-domain socket. Default: "127.0.0.1"
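As an illustration of the `httpd` bindings described above, here is a minimal sketch that exposes both the web admin and the web client on localhost; the values shown match the documented defaults:

```json
"httpd": {
  "bindings": [
    {
      "port": 8080,
      "address": "127.0.0.1",
      "enable_web_admin": true,
      "enable_web_client": true,
      "enable_https": false,
      "client_auth_type": 0,
      "tls_cipher_suites": [],
      "proxy_allowed": [],
      "hide_login_url": 0,
      "render_openapi": true
    }
  ]
}
```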
@@ -254,7 +215,7 @@ The configuration file contains the following sections:
- `certificate_key_file`, string. Private key matching the above certificate. This can be an absolute path or a path relative to the config dir. If both the certificate and the private key are provided, the server will expect HTTPS connections. Certificate and key files can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows.
- `tls_cipher_suites`, list of strings. List of supported cipher suites for TLS version 1.2. If empty, a default list of secure cipher suites is used, with a preference order based on hardware performance. Note that TLS 1.3 ciphersuites are not configurable. The supported ciphersuites names are defined [here](https://github.com/golang/go/blob/master/src/crypto/tls/cipher_suites.go#L52). Any invalid name will be silently ignored. The order matters, the ciphers listed first will be the preferred ones. Default: empty.
- **"http"**, the configuration for HTTP clients. HTTP clients are used for executing hooks. Some hooks use a retryable HTTP client, for these hooks you can configure the time between retries and the number of retries. Please check the hook specific documentation to understand which hooks use a retryable HTTP client.
- `timeout`, float. Timeout specifies a time limit, in seconds, for requests. For requests with retries this is the timeout for a single request
- `retry_wait_min`, integer. Defines the minimum waiting time between attempts in seconds.
- `retry_wait_max`, integer. Defines the maximum waiting time between attempts in seconds. The backoff algorithm will perform exponential backoff based on the attempt number and limited by the provided minimum and maximum durations.
- `retry_max`, integer. Defines the maximum number of retries if the first request fails.
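For example, here is a sketch of the `http` retry settings allowing up to three retries with exponential backoff between 2 and 30 seconds; the values are illustrative and the actual defaults may differ:

```json
"http": {
  "timeout": 20,
  "retry_wait_min": 2,
  "retry_wait_max": 30,
  "retry_max": 3
}
```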
@@ -263,49 +224,10 @@ The configuration file contains the following sections:
- `cert`, string. Path to the certificate file. The path can be absolute or relative to the config dir.
- `key`, string. Path to the key file. The path can be absolute or relative to the config dir.
- `skip_tls_verify`, boolean. if enabled the HTTP client accepts any TLS certificate presented by the server and any host name in that certificate. In this mode, TLS is susceptible to man-in-the-middle attacks. This should be used only for testing.
- `headers`, list of structs. You can define a list of http headers to add to each hook. Each struct has the following fields:
- `key`, string
- `value`, string. The header is silently ignored if `key` or `value` are empty
- `url`, string, optional. If not empty, the header will be added only if the request URL starts with the one specified here
- **kms**, configuration for the Key Management Service, more details can be found [here](./kms.md)
- `secrets`
- `url`, string. Defines the URI to the KMS service. Default: empty.
- `master_key`, string. Defines the master encryption key as string. If not empty, it takes precedence over `master_key_path`. Default: empty.
- `master_key_path`, string. Defines the absolute path to a file containing the master encryption key. Default: empty.
- **mfa**, multi-factor authentication settings
- `totp`, list of struct that define settings for time-based one time passwords (RFC 6238). Each struct has the following fields:
- `name`, string. Unique configuration name. This name should not be changed if there are users or admins using the configuration. The name is not exposed to the authentication apps. Default: `Default`.
- `issuer`, string. Name of the issuing Organization/Company. Default: `SFTPGo`.
- `algo`, string. Algorithm to use for HMAC. The supported algorithms are: `sha1`, `sha256`, `sha512`. Currently Google Authenticator app on iPhone seems to only support `sha1`, please check the compatibility with your target apps/device before setting a different algorithm. You can also define multiple configurations, for example one that uses `sha256` or `sha512` and another one that uses `sha1` and instruct your users to use the appropriate configuration for their devices/apps. The algorithm should not be changed if there are users or admins using the configuration. Default: `sha1`.
- **smtp**, SMTP configuration enables SFTPGo email sending capabilities
- `host`, string. Location of the SMTP email server. Leave empty to disable email sending capabilities. Default: empty.
- `port`, integer. Port of SMTP email server.
- `from`, string. From address, for example `SFTPGo <sftpgo@example.com>`. Many SMTP servers reject emails without a `From` header so, if not set, SFTPGo will try to use the username as fallback, this may or may not be appropriate. Default: empty
- `user`, string. SMTP username. Default: empty
- `password`, string. SMTP password. Leaving both username and password empty the SMTP authentication will be disabled. Default: empty
- `auth_type`, integer. 0 means `Plain`, 1 means `Login`, 2 means `CRAM-MD5`. Default: `0`.
- `encryption`, integer. 0 means no encryption, 1 means `TLS`, 2 means `STARTTLS`. Default: `0`.
- `domain`, string. Domain to use for `HELO` command, if empty `localhost` will be used. Default: empty.
- `templates_path`, string. Path to the email templates. This can be an absolute path or a path relative to the config dir. Templates are searched within a subdirectory named "email" in the specified path. You can customize the email templates by simply specifying an alternate path and putting your custom templates there.
- **plugins**, list of external plugins. Each plugin is configured using a struct with the following fields:
- `type`, string. Defines the plugin type. Supported types: `notifier`, `kms`, `auth`.
- `notifier_options`, struct. Defines the options for notifier plugins.
- `fs_events`, list of strings. Defines the filesystem events that will be notified to this plugin.
- `provider_events`, list of strings. Defines the provider events that will be notified to this plugin.
- `provider_objects`, list of strings. Defines the provider objects that will be notified to this plugin.
- `retry_max_time`, integer. Defines the maximum number of seconds an event can be late. SFTPGo adds a timestamp to each event and adds to an internal queue any events that the plugin fails to handle (the plugin returns an error or is not running). If a plugin fails to handle an event that is too late, based on this configuration, the event will be discarded. SFTPGo will try to resend queued events every 30 seconds. 0 means no retry.
- `retry_queue_max_size`, integer. Defines the maximum number of events that the internal queue can hold. Once the queue is full, the events that cannot be sent to the plugin will be discarded. 0 means no limit.
- `kms_options`, struct. Defines the options for kms plugins.
- `scheme`, string. KMS scheme. Supported schemes are: `awskms`, `gcpkms`, `hashivault`, `azurekeyvault`.
- `encrypted_status`, string. Encrypted status for a KMS secret. Supported statuses are: `AWS`, `GCP`, `VaultTransit`, `AzureKeyVault`.
- `auth_options`, struct. Defines the options for auth plugins.
- `scope`, integer. 1 means passwords only. 2 means public keys only. 4 means key keyboard interactive only. 8 means TLS certificate. The flags can be combined, for example 6 means public keys and keyboard interactive. The scope must be explicit, `0` is not a valid option.
- `cmd`, string. Path to the plugin executable.
- `args`, list of strings. Optional arguments to pass to the plugin executable.
- `sha256sum`, string. SHA256 checksum for the plugin executable. If not empty it will be used to verify the integrity of the executable.
- `auto_mtls`, boolean. If enabled the client and the server automatically negotiate mutual TLS for transport authentication. This ensures that only the original client will be allowed to connect to the server, and all other connections will be rejected. The client will also refuse to connect to any server that isn't the original instance started by the client.
Please note that the plugin system is experimental; the exposed configuration parameters and interfaces may change in a backward incompatible way in the future.
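As an example of the `smtp` settings described earlier, here is a minimal sketch enabling email sending through a STARTTLS-capable server; the host and credentials are placeholders:

```json
"smtp": {
  "host": "smtp.example.com",
  "port": 587,
  "from": "SFTPGo <sftpgo@example.com>",
  "user": "sftpgo@example.com",
  "password": "your password here",
  "auth_type": 0,
  "encryption": 2,
  "domain": "",
  "templates_path": ""
}
```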
A full example showing the default config (in JSON format) can be found [here](../sftpgo.json).
@@ -344,35 +266,6 @@ Let's see some examples:
- To set the `port` for the first sftpd binding, you need to define the env var `SFTPGO_SFTPD__BINDINGS__0__PORT`
- To set the `execute_on` actions, you need to define the env var `SFTPGO_COMMON__ACTIONS__EXECUTE_ON`. For example `SFTPGO_COMMON__ACTIONS__EXECUTE_ON=upload,download`
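For example, assuming a Linux shell, you could start SFTPGo with these overrides like this:

```shell
export SFTPGO_SFTPD__BINDINGS__0__PORT=2022
export SFTPGO_COMMON__ACTIONS__EXECUTE_ON=upload,download
sftpgo serve
```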
On some hardware you can get faster SFTP performance by replacing the Go `crypto/sha256` implementation with [sha256-simd](https://github.com/minio/sha256-simd).
The performance of SHA256 is relevant for clients using AES CTR ciphers and `hmac-sha2-256` as Message Authentication Code (MAC).
Up to the 2.0.x versions SFTPGo automatically used `sha256-simd`, but over time the standard Go implementation improved a lot and is now faster than `sha256-simd` on some CPUs.
You can select `sha256-simd` by setting the environment variable `SFTPGO_MINIO_SHA256_SIMD` to `1`.
`sha256-simd` is particularly useful if you have an Intel CPU with SHA extensions or an ARM CPU with Cryptography Extensions.
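For example, assuming a Linux shell:

```shell
SFTPGO_MINIO_SHA256_SIMD=1 sftpgo serve
```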
## Binding to privileged ports
On Linux, if you want to use Internet domain privileged ports (port numbers less than 1024) without running the SFTPGo service as the root user, you can set the `cap_net_bind_service` capability on the `sftpgo` binary. To set the capability you can use the following command:
```shell
$ sudo setcap cap_net_bind_service=+ep /usr/bin/sftpgo
# Check that the capability is added
$ getcap /usr/bin/sftpgo
/usr/bin/sftpgo cap_net_bind_service=ep
```
Now you can use privileged ports such as 21, 22, 443 etc. without running the SFTPGo service as the root user. You have to set the `cap_net_bind_service` capability again each time you update the `sftpgo` binary.
An alternative method is to use `iptables`, for example you run the SFTP service on port `2022` and redirect traffic from port `22` to port `2022`:
```shell
sudo iptables -t nat -A PREROUTING -d <ip> -p tcp --dport 22 -m addrtype --dst-type LOCAL -j DNAT --to-destination <ip>:2022
sudo iptables -t nat -A OUTPUT -d <ip> -p tcp --dport 22 -m addrtype --dst-type LOCAL -j DNAT --to-destination <ip>:2022
```
## Telemetry Server
The telemetry server exposes the following endpoints:
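For example, assuming the default configuration and that the health check endpoint is exposed at `/healthz` (an assumption based on the default routes), you can verify that the server is up with:

```shell
curl http://127.0.0.1:10000/healthz
```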


@@ -2,8 +2,4 @@
Here we collect step-by-step tutorials. SFTPGo users are encouraged to contribute!
- [Getting Started](./getting-started.md)
- [SFTPGo with PostgreSQL data provider and S3 backend](./postgresql-s3.md)
- [SFTPGo on Windows with Active Directory Integration + Caddy Static File Server](https://www.youtube.com/watch?v=M5UcJI8t4AI)
- [Securing SFTPGo with a free Let's Encrypt TLS Certificate](./lets-encrypt-certificate.md)
- [SFTPGo as OpenSSH's SFTP subsystem](./openssh-sftp-subsystem.md)


@@ -1,491 +0,0 @@
# Getting Started
SFTPGo allows you to securely share your files over SFTP and optionally FTP/S and WebDAV too.
Several storage backends are supported and they are configurable per user, so you can serve a local directory for a user and an S3 bucket (or part of it) for another one.
SFTPGo also supports virtual folders, a virtual folder can use any of the supported storage backends. So you can have, for example, an S3 user that exposes a GCS bucket (or part of it) on a specified path and an encrypted local filesystem on another one.
Virtual folders can be private or shared among multiple users, for shared virtual folders you can define different quota limits for each user.
In this tutorial we explore the main features and concepts using the built-in web admin interface. Advanced users can also use the SFTPGo [REST API](https://sftpgo.stoplight.io/docs/sftpgo/openapi.yaml)
- [Installation](#Installation)
- [Initial configuration](#Initial-configuration)
- [Creating users](#Creating-users)
- [Creating users with a Cloud Storage backend](#Creating-users-with-a-Cloud-Storage-backend)
- [Creating users with a local encrypted backend (Data At Rest Encryption)](#Creating-users-with-a-local-encrypted-backend-Data-At-Rest-Encryption)
- [Virtual permissions](#Virtual-permissions)
- [Virtual folders](#Virtual-folders)
- [Configuration parameters](#Configuration-parameters)
- [Use PostgreSQL data provider](#Use-PostgreSQL-data-provider)
- [Use MySQL/MariaDB data provider](#Use-MySQLMariaDB-data-provider)
- [Use CockroachDB data provider](#Use-CockroachDB-data-provider)
- [Enable FTP service](#Enable-FTP-service)
- [Enable WebDAV service](#Enable-WebDAV-service)
## Installation
You can easily install SFTPGo by downloading the appropriate package for your operating system and architecture. Please visit the [releases](https://github.com/drakkan/sftpgo/releases "releases") page.
An official Docker image is available. Documentation is [here](./../../docker/README.md).
In this guide, we assume that SFTPGo is already installed and running using the default configuration.
## Initial configuration
Before you can use SFTPGo you need to create an admin account, so open [http://127.0.0.1:8080/web/admin](http://127.0.0.1:8080/web/admin) in your web browser, replacing `127.0.0.1` with the appropriate IP address if SFTPGo is not running on localhost.
![Setup](./img/setup.png)
After creating the admin account you will be automatically logged in.
![Users list](./img/initial-screen.png)
The web admin is now available at the following URL:
[http://127.0.0.1:8080/web/admin](http://127.0.0.1:8080/web/admin)
From the `Status` page you can see the active services.
![Status](./img/status.png)
The default configuration enables the SFTP service on port `2022` and uses `SQLite` as data provider.
## Creating users
Let's create our first local user:
- from the users page click the `+` icon to open the Add user page
- the only required fields are the `Username`, a `Password` or a `Public key`, and the default `Permissions`
- if you are on Windows or you installed SFTPGo manually and no `users_base_dir` is defined in your configuration file you also have to set a `Home Dir`. It must be an absolute path, for example `/srv/sftpgo/data/username` on Linux or `C:\sftpgo\data\username` on Windows. SFTPGo will try to automatically create the home directory, if missing, when the user logs in. Each user can only access files and folders inside its home directory.
- click `Submit`
![Add user](./img/add-user.png)
Now test the new user. We use the `sftp` CLI here, but you can use any SFTP client.
```shell
$ sftp -P 2022 nicola@127.0.0.1
nicola@127.0.0.1's password:
Connected to 127.0.0.1.
sftp> ls
sftp> put file.txt
Uploading file.txt to /file.txt
file.txt 100% 4034 3.9MB/s 00:00
sftp> ls
file.txt
sftp> mkdir adir
sftp> cd adir/
sftp> put file.txt
Uploading file.txt to /adir/file.txt
file.txt 100% 4034 4.0MB/s 00:00
sftp> ls
file.txt
sftp> get file.txt
Fetching /adir/file.txt to file.txt
/adir/file.txt 100% 4034 1.9MB/s 00:00
```
It worked! We can upload/download files and create directories.
Each user can browse and download their files and change their credentials using the web client interface available at the following URL:
[http://127.0.0.1:8080/web/client](http://127.0.0.1:8080/web/client)
![Web client files](./img/web-client-files.png)
![Web client credentials](./img/web-client-credentials.png)
### Creating users with a Cloud Storage backend
The procedure is similar to the one described for local users; you only have to specify the Cloud Storage backend and its credentials.
The screenshot below shows an example configuration for an S3 backend.
![S3 user](./img/s3-user.png)
The screenshot below shows an example configuration for an Azure Blob Storage backend.
![Azure Blob user](./img/az-user.png)
The screenshot below shows an example configuration for a Google Cloud Storage backend.
![Google Cloud user](./img/gcs-user.png)
The screenshot below shows an example configuration for an SFTP server as storage backend.
![User using another SFTP server as storage backend](./img/sftp-user.png)
By setting a `Key Prefix` you restrict the user to a specific "folder" in the bucket, so that the same bucket can be shared among different users by assigning each user a specific portion of the bucket, as in the sketch below.
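For illustration, here is a sketch of the corresponding `filesystem` fragment of an S3 user definition in the user JSON format; the bucket name, region and prefix are placeholders and the credentials are omitted for brevity:

```json
"filesystem": {
  "provider": 1,
  "s3config": {
    "bucket": "mybucket",
    "region": "us-east-1",
    "key_prefix": "users/user1/"
  }
}
```

With this prefix the user would only see and manipulate objects under `users/user1/` within the shared bucket.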
### Creating users with a local encrypted backend (Data At Rest Encryption)
The procedure is similar to the one described for local users; you only have to specify the encryption passphrase.
The screenshot below shows an example configuration.
![User with cryptfs backend](./img/local-encrypted.png)
You can find more details about Data At Rest Encryption [here](../dare.md).
## Virtual permissions
SFTPGo supports per-directory virtual permissions. For each user you specify global permissions and can then override them on a per-directory basis.
Take a look at the following screens.
![Virtual permissions](./img/virtual-permissions.png)
![Per-directory permissions](./img/dir-permissions.png)
This user has full access by default (`*`), can only list and download from the `/read-only` path and has no permissions at all for the `/subdir` path.
Let's test it. We use the `sftp` CLI here, but you can use any SFTP client.
```shell
$ sftp -P 2022 nicola@127.0.0.1
Connected to 127.0.0.1.
sftp> ls
adir file.txt read-only subdir
sftp> put file.txt
Uploading file.txt to /file.txt
file.txt 100% 4034 19.4MB/s 00:00
sftp> rm file.txt
Removing /file.txt
sftp> ls
adir read-only subdir
sftp> cd read-only/
sftp> ls
file.txt
sftp> put file1.txt
Uploading file1.txt to /read-only/file1.txt
remote open("/read-only/file1.txt"): Permission denied
sftp> get file.txt
Fetching /read-only/file.txt to file.txt
/read-only/file.txt 100% 4034 2.2MB/s 00:00
sftp> cd ..
sftp> ls
adir read-only subdir
sftp> cd /subdir
sftp> ls
remote readdir("/subdir"): Permission denied
```
As you can see, it worked as expected.
## Virtual folders
From the web admin interface click `Folders` and then the `+` icon.
![Add folder](./img/add-folder.png)
To create a local folder you need to specify a `Name` and an `Absolute path`. For other backends you have to specify the backend type and its credentials; this is the same procedure already detailed for creating users with cloud backends.
Suppose we created two virtual folders named `localfolder` and `minio` as you can see in the following screen.
![Folders](./img/folders.png)
- `localfolder` uses the local filesystem as storage backend
- `minio` uses MinIO (S3 compatible) as storage backend
Now click `Users` on the left menu, select a user and click the `Edit` icon to update the user and associate the virtual folders.
Virtual folders must be referenced using their unique name and you can expose them on a configurable virtual path. Take a look at the following screenshot.
![Virtual Folders](./img/virtual-folders.png)
We exposed the folder named `localfolder` on the path `/vdirlocal` (this must be an absolute UNIX-style path, even on Windows) and the folder named `minio` on the path `/vdirminio`. For `localfolder` the quota usage is included within the user quota, while for the `minio` folder we defined separate quota limits: at most 2 files and at most 100MB, whichever is reached first.
The folder `minio` can be shared with other users and we can define different quota limits on a per-user basis. The folder `localfolder` is considered private since we have included its quota limits within those of the user; if we shared it with other users we would break the quota calculation.
Let's test these virtual folders. We use the `sftp` CLI here, but you can use any SFTP client.
```shell
$ sftp -P 2022 nicola@127.0.0.1
nicola@127.0.0.1's password:
Connected to 127.0.0.1.
sftp> ls
adir read-only subdir vdirlocal vdirminio
sftp> cd vdirlocal
sftp> put file.txt
Uploading file.txt to /vdirlocal/file.txt
file.txt 100% 4034 17.3MB/s 00:00
sftp> ls
file.txt
sftp> cd ..
sftp> cd vdirminio/
sftp> put file.txt
Uploading file.txt to /vdirminio/file.txt
file.txt 100% 4034 4.8MB/s 00:00
sftp> ls
file.txt
sftp> put file.txt file1.txt
Uploading file.txt to /vdirminio/file1.txt
file.txt 100% 4034 2.8MB/s 00:00
sftp> put file.txt file2.txt
Uploading file.txt to /vdirminio/file2.txt
remote open("/vdirminio/file2.txt"): Failure
sftp> quit
```
The last upload failed since we exceeded the quota limit on the number of files.
## Configuration parameters
Until now we have used the default configuration. To change the global service parameters you have to edit the configuration file, or set the appropriate environment variables, and restart SFTPGo to apply the changes.
A full explanation of all configuration methods can be found [here](./../full-configuration.md); here we explore some common use cases. Please keep in mind that SFTPGo can also be configured via [environment variables](../full-configuration.md#environment-variables), which is very convenient if you are using Docker.
The default configuration file is `sftpgo.json` and it can be found within the `/etc/sftpgo` directory if you installed from Linux distro packages. On Windows the configuration file can be found within the `{commonappdata}\SFTPGo` directory where `{commonappdata}` is typically `C:\ProgramData`. SFTPGo also supports reading from TOML and YAML configuration files.
The following snippets assume you are running SFTPGo on Linux, but they can be easily adapted for other operating systems.
### Use PostgreSQL data provider
Create a PostgreSQL database named `sftpgo` and a PostgreSQL user with the correct permissions, for example using the `psql` CLI.
```shell
sudo -i -u postgres psql
CREATE DATABASE "sftpgo" WITH ENCODING='UTF8' CONNECTION LIMIT=-1;
create user "sftpgo" with encrypted password 'your password here';
grant all privileges on database "sftpgo" to "sftpgo";
\q
```
Open the SFTPGo configuration file, search for the `data_provider` section and change it as follows.
```json
"data_provider": {
"driver": "postgresql",
"name": "sftpgo",
"host": "127.0.0.1",
"port": 5432,
"username": "sftpgo",
"password": "your password here",
...
}
```
Confirm that the database connection works by initializing the data provider.
```shell
$ sudo su - sftpgo -s /bin/bash -c 'sftpgo initprovider -c /etc/sftpgo'
2021-05-19T22:21:54.000 INF Initializing provider: "postgresql" config file: "/etc/sftpgo/sftpgo.json"
2021-05-19T22:21:54.000 INF updating database version: 8 -> 9
2021-05-19T22:21:54.000 INF Data provider successfully initialized/updated
```
Ensure that SFTPGo starts after the database service.
```shell
sudo systemctl edit sftpgo.service
```
And override the unit definition with the following snippet.
```shell
[Unit]
After=postgresql.service
```
Restart SFTPGo to apply the changes.
### Use MySQL/MariaDB data provider
Create a MySQL database named `sftpgo` and a MySQL user with the correct permissions, for example using the `mysql` CLI.
```shell
$ mysql -u root
MariaDB [(none)]> CREATE DATABASE sftpgo CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
Query OK, 1 row affected (0.000 sec)
MariaDB [(none)]> grant all privileges on sftpgo.* to sftpgo@localhost identified by 'your password here';
Query OK, 0 rows affected (0.027 sec)
MariaDB [(none)]> quit
Bye
```
Open the SFTPGo configuration file, search for the `data_provider` section and change it as follows.
```json
"data_provider": {
"driver": "mysql",
"name": "sftpgo",
"host": "127.0.0.1",
"port": 3306,
"username": "sftpgo",
"password": "your password here",
...
}
```
Confirm that the database connection works by initializing the data provider.
```shell
$ sudo su - sftpgo -s /bin/bash -c 'sftpgo initprovider -c /etc/sftpgo'
2021-05-19T22:29:30.000 INF Initializing provider: "mysql" config file: "/etc/sftpgo/sftpgo.json"
2021-05-19T22:29:30.000 INF updating database version: 8 -> 9
2021-05-19T22:29:30.000 INF Data provider successfully initialized/updated
```
Ensure that SFTPGo starts after the database service.
```shell
sudo systemctl edit sftpgo.service
```
And override the unit definition with the following snippet.
```shell
[Unit]
After=mariadb.service
```
Restart SFTPGo to apply the changes.
### Use CockroachDB data provider
We assume you have installed CockroachDB this way:
```shell
sudo su
export CRDB_VERSION=21.1.2 # set the latest available version here
wget -qO- https://binaries.cockroachdb.com/cockroach-v${CRDB_VERSION}.linux-amd64.tgz | tar xvz
cp -i cockroach-v${CRDB_VERSION}.linux-amd64/cockroach /usr/local/bin/
mkdir -p /usr/local/lib/cockroach
cp -i cockroach-v${CRDB_VERSION}.linux-amd64/lib/libgeos.so /usr/local/lib/cockroach/
cp -i cockroach-v${CRDB_VERSION}.linux-amd64/lib/libgeos_c.so /usr/local/lib/cockroach/
mkdir /var/lib/cockroach
chown sftpgo:sftpgo /var/lib/cockroach
mkdir -p /etc/cockroach/{certs,ca}
chmod 700 /etc/cockroach/ca
/usr/local/bin/cockroach cert create-ca --certs-dir=/etc/cockroach/certs --ca-key=/etc/cockroach/ca/ca.key
/usr/local/bin/cockroach cert create-node localhost $(hostname) --certs-dir=/etc/cockroach/certs --ca-key=/etc/cockroach/ca/ca.key
/usr/local/bin/cockroach cert create-client root --certs-dir=/etc/cockroach/certs --ca-key=/etc/cockroach/ca/ca.key
chown -R sftpgo:sftpgo /etc/cockroach/certs
exit
```
and you are running it using a systemd unit like this one:
```shell
[Unit]
Description=Cockroach Database single node
Requires=network.target
[Service]
Type=notify
WorkingDirectory=/var/lib/cockroach
ExecStart=/usr/local/bin/cockroach start-single-node --certs-dir=/etc/cockroach/certs --http-addr 127.0.0.1:8888 --listen-addr 127.0.0.1:26257 --cache=.25 --max-sql-memory=.25 --store=path=/var/lib/cockroach
TimeoutStopSec=60
Restart=always
RestartSec=10
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=cockroach
User=sftpgo
[Install]
WantedBy=default.target
```
Create a CockroachDB database named `sftpgo`.
```shell
$ sudo /usr/local/bin/cockroach sql --certs-dir=/etc/cockroach/certs -e 'create database "sftpgo"'
CREATE DATABASE
Time: 13ms
```
Open the SFTPGo configuration file, search for the `data_provider` section and change it as follows.
```json
"data_provider": {
"driver": "cockroachdb",
"name": "",
"host": "",
"port": 0,
"username": "",
"password": "",
"sslmode": 0,
"connection_string": "postgresql://root@localhost:26257/sftpgo?sslcert=%2Fetc%2Fcockroach%2Fcerts%2Fclient.root.crt&sslkey=%2Fetc%2Fcockroach%2Fcerts%2Fclient.root.key&sslmode=verify-full&sslrootcert=%2Fetc%2Fcockroach%2Fcerts%2Fca.crt&connect_timeout=10"
...
}
```
Confirm that the database connection works by initializing the data provider.
```shell
$ sudo su - sftpgo -s /bin/bash -c 'sftpgo initprovider -c /etc/sftpgo'
2021-05-19T22:41:53.000 INF Initializing provider: "cockroachdb" config file: "/etc/sftpgo/sftpgo.json"
2021-05-19T22:41:53.000 INF updating database version: 8 -> 9
2021-05-19T22:41:53.000 INF Data provider successfully initialized/updated
```
Ensure that SFTPGo starts after the database service.
```shell
sudo systemctl edit sftpgo.service
```
And override the unit definition with the following snippet.
```shell
[Unit]
After=cockroachdb.service
```
Restart SFTPGo to apply the changes.
### Enable FTP service
Open the SFTPGo configuration file, search for the `ftpd` section and change it as follows.
```json
"ftpd": {
"bindings": [
{
"port": 2121,
"address": "",
"apply_proxy_config": true,
"tls_mode": 0,
"force_passive_ip": "",
"client_auth_type": 0,
"tls_cipher_suites": []
}
],
"banner": "",
"banner_file": "",
"active_transfers_port_non_20": true,
"passive_port_range": {
"start": 50000,
"end": 50100
},
...
}
```
Restart SFTPGo to apply the changes. The FTP service is now available on port `2121`.
You can also configure the passive port range (`50000-50100` by default); these ports must be reachable for passive FTP to work. If your FTP server is on the private network side of a NAT configuration you have to set `force_passive_ip` to your external IP address. You may also need to open the passive port range on your firewall.
It is recommended that you provide a certificate and key file to expose FTP over TLS. You should prefer SFTP to FTP even if you configure TLS; please don't blindly enable the old FTP protocol.
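For example, here is a sketch of the relevant `ftpd` fragment with explicit TLS required; the certificate paths are placeholders:

```json
"ftpd": {
  "bindings": [
    {
      "port": 2121,
      "tls_mode": 1,
      ...
    }
  ],
  "certificate_file": "/etc/sftpgo/certs/ftpd.crt",
  "certificate_key_file": "/etc/sftpgo/certs/ftpd.key",
  ...
}
```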
### Enable WebDAV service
Open the SFTPGo configuration file, search for the `webdavd` section and change it as follows.
```json
"webdavd": {
"bindings": [
{
"port": 10080,
"address": "",
"enable_https": false,
"client_auth_type": 0,
"tls_cipher_suites": [],
"prefix": "",
"proxy_allowed": []
}
],
...
}
```
Restart SFTPGo to apply the changes. The WebDAV service is now available on port `10080`. It is recommended that you provide a certificate and key file to expose WebDAV over HTTPS.
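For example, here is a sketch of the relevant `webdavd` fragment with HTTPS enabled; the port and certificate paths are placeholders:

```json
"webdavd": {
  "bindings": [
    {
      "port": 10443,
      "enable_https": true,
      ...
    }
  ],
  "certificate_file": "/etc/sftpgo/certs/webdavd.crt",
  "certificate_key_file": "/etc/sftpgo/certs/webdavd.key",
  ...
}
```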
