Compare commits

357 Commits

Nicola Murino
665016ed1e set version to 2.3.3
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-05 09:47:00 +02:00
Nicola Murino
97d5680d1e azblob: fix SAS URL with embedded container name
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-01 21:52:32 +02:00
Nicola Murino
e7866047aa allow users logged in via OIDC to edit their profile
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-08-01 21:49:53 +02:00
Nicola Murino
5f313cc6be macOS: add config file search path
this way the default config file is used in the brew package if no config
file is set

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-29 17:55:01 +02:00
Nicola Murino
3c2c703408 user templates: apply placeholders also for start directory
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-27 19:09:54 +02:00
Nicola Murino
78a399eed4 download as zip: improve filename
include username and also filename/directory name if the user downloads
a single file/directory

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-26 17:53:04 +02:00
Nicola Murino
e6d434654d backport from main
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-24 08:56:31 +02:00
Nicola Murino
d34446e6e9 web client: add HTML5 player
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-23 16:30:27 +02:00
Nicola Murino
2da19ef233 backport OIDC related changes from main
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-23 15:31:57 +02:00
Nicola Murino
b34bc2b818 add license header to source files
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-18 13:43:25 +02:00
Nicola Murino
378995147b try to better highlight donation and sponsorship options ...
... and to better explain why they are required.

Please don't say "someone else will help the project, I'll just use it"

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-16 20:29:10 +02:00
Nicola Murino
6b995db864 oidc: allow to configure oauth2 scopes
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-16 19:25:04 +02:00
Nicola Murino
371012a46e backport some fixes from main
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-07-15 20:09:06 +02:00
Nicola Murino
d3d788c8d0 s3: improve rename performance
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-30 18:25:40 +02:00
maximethebault
756b122ab8 S3: Fix timeout error when renaming large files (#899)
Remove AWS SDK Transport ResponseHeaderTimeout (finer-grained timeout are already handled by the callers)
Lower the threshold for MultipartCopy (5GB -> 500MB) to improve copy performance and reduce chance of hitting Single part copy timeout

Fixes #898

Signed-off-by: Maxime Thébault <contact@maximethebault.me>
2022-06-30 10:25:04 +02:00
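
The threshold logic reads roughly like the sketch below; `copyObject` and its helpers are illustrative stand-ins, not SFTPGo's actual functions (CopyObject and UploadPartCopy are the underlying AWS SDK operations).

```go
package main

import "fmt"

const multipartCopyThreshold = 500 * 1024 * 1024 // 500 MB, lowered from 5 GB

// copyObject picks the copy strategy from the source object size: large
// objects are copied in parts so that no single request runs long enough
// to hit a response timeout.
func copyObject(size int64, src, dst string) error {
	if size > multipartCopyThreshold {
		return multipartCopy(src, dst)
	}
	return singlePartCopy(src, dst)
}

// Placeholders standing in for the real SDK calls; they only print
// what they would do.
func singlePartCopy(src, dst string) error { fmt.Println("CopyObject:", src, "->", dst); return nil }
func multipartCopy(src, dst string) error  { fmt.Println("UploadPartCopy:", src, "->", dst); return nil }

func main() {
	_ = copyObject(600<<20, "bucket/big.bin", "bucket/copy.bin") // 600 MB: multipart
}
```
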
Nicola Murino
e244ba37b2 config: fix replace from env vars for some sub list
ensure configuration from files is merged with configuration from env vars

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-28 19:17:16 +02:00
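
The merge rule, in short: values from the config file are loaded first and a matching environment variable wins. A minimal sketch, assuming SFTPGo's documented `SFTPGO_` env var prefix (the specific key below is illustrative):

```go
package main

import (
	"fmt"
	"os"
)

// mergeEnv returns the env value for key when set, otherwise the value
// read from the configuration file.
func mergeEnv(key, fromFile string) string {
	if v, ok := os.LookupEnv(key); ok {
		return v
	}
	return fromFile
}

func main() {
	os.Setenv("SFTPGO_HTTPD__BIND_PORT", "9090")
	fmt.Println(mergeEnv("SFTPGO_HTTPD__BIND_PORT", "8080"))    // 9090: env wins
	fmt.Println(mergeEnv("SFTPGO_HTTPD__BIND_ADDRESS", "::1"))  // file value kept
}
```
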
Nicola Murino
5610b98d19 fix get branding from env
Fixes #895

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-28 10:46:25 +02:00
Nicola Murino
b3ca20b5e6 dataprovider: fix sql tables prefix handling
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-24 12:26:43 +02:00
Nicola Murino
d0b6ca8d2f backup: include folders set on groups
Fixes #885

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-21 14:13:25 +02:00
Nicola Murino
550158ff4b fix database reset
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-13 19:40:24 +02:00
Nicola Murino
14a3803c8f OpenAPI schema: improve compatibility with some generators
Fixes #875

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-11 19:00:04 +02:00
Nicola Murino
ca4da2f64e set version to 2.3.1
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-10 18:42:13 +02:00
Nicola Murino
049c2b7430 mysql: groups is a reserved keyword since MySQL 8.0.2
add mysql to CI

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-10 17:36:26 +02:00
Nicola Murino
7fd5558400 parse IP proxy header also if listening on UNIX domain socket
Fixes #867

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-09 09:48:39 +02:00
Nicola Murino
b60255752f web UIs: fix date formatting on Safari
Fixes #869

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-09 09:47:02 +02:00
Nicola Murino
37f79650c8 APT and YUM repo are now available
This is possible thanks to Oregon State University's free
mirroring service

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-09 09:46:57 +02:00
Nicola Murino
8988d6542b create branch 2.3.x
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-06 19:01:24 +02:00
Nicola Murino
560e7f316a allow to set an additional shared data path at build time
This is useful to simplify the brew package

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-04 21:24:50 +02:00
Nicola Murino
eee5d74e87 back to development
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-04 19:29:46 +02:00
Nicola Murino
9ae473fcdc set version to 2.3.0
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-04 05:01:48 +02:00
Nicola Murino
b774289c6d change default value for naming_rules to 1
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-03 16:09:02 +02:00
Nicola Murino
ecf715880f update docs
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-03 14:36:38 +02:00
Nicola Murino
b2e28fe3a2 groups: apply placeholders to the fs config of virtual folders
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-06-02 09:45:01 +02:00
Nicola Murino
cc2f23bd89 trim values for string lists which can be set as env vars
See #857

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-31 18:22:18 +02:00
Nicola Murino
7329cd804b Fixes #855
update OpenAPI definition, add test cases, fix lint

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-30 19:01:12 +02:00
sunilke
84e3132ed1 Feat private key passphrase for sftpfs (#855)
Signed-off-by: Sunil Keswani <sunilke@zeta.tech>
2022-05-30 19:00:39 +02:00
Nicola Murino
f6b11c2d01 httpd/webdav: allow to configure trusted proxy header and depth
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-28 19:47:23 +02:00
Nicola Murino
32da923dfe httpd: add a setting to customize tokens validation
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-28 13:28:50 +02:00
Nicola Murino
91dfa501f8 improve some docs
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-27 10:09:53 +02:00
Nicola Murino
7c724e18fe add support for ACME compliant certificate authorities
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-27 07:39:55 +02:00
Nicola Murino
302f83c7a4 CI: fix for cockroach 22
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-24 11:31:49 +02:00
Nicola Murino
984ca1fb7e web UIs: update js and css deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-23 19:14:39 +02:00
Nicola Murino
87f6a18476 web admin UI: add column visibility control to the groups table as well
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-22 19:19:14 +02:00
Nicola Murino
90c21458b8 OIDC: add support for implicit roles
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-22 14:38:25 +02:00
Nicola Murino
f536c64043 admin UI: allow to control columns visibility and ordering
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-22 11:45:49 +02:00
Nicola Murino
1a33b5bb53 allow different TLS certificates for each binding
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-21 16:34:47 +02:00
Nicola Murino
0ecaa862bd web UIs: allow to replace the default CSS
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-21 11:05:58 +02:00
Nicola Murino
751946f47a allow to customize timeout and env vars for program based hooks
Fixes #847

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-20 19:30:54 +02:00
Nicola Murino
796ea1dde9 allow to store temporary sessions within the data provider
so we can persist password reset codes, OIDC auth sessions and tokens.
These features now also work in multi-node setups without sticky
sessions

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-19 19:49:51 +02:00
Tim Birkett
a87aa9b98e feat: make MFA status visible in WebAdmin (#844)
Signed-off-by: Tim Birkett <tim.birkett@sainsburys.co.uk>
2022-05-17 19:27:12 +02:00
Nicola Murino
9abd186166 external auth http hook: properly serialize the user in the POST body
For historical reasons we send the JSON serialized user as a string field.
I initially copied the code used in the script hook, where it is appropriate
to convert the JSON user to a string.

After some time I noticed this error. I know that changing it now might
break existing external authentication hooks, but we cannot continue with
this mistake; new users are surprised by this behavior, sorry

Fixes #836

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-15 18:26:07 +02:00
Nicola Murino
18d0bf9dc3 execute db migrations holding a database-level lock
so migrations cannot be executed concurrently if you run them from multiple
SFTPGo instances at the same time.

CockroachDB doesn't support database-level locks

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-15 15:25:12 +02:00
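
A minimal sketch of the approach for PostgreSQL, where `pg_advisory_lock` provides the database-level lock (the lock ID and DSN are illustrative; as noted above, CockroachDB has no equivalent):

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // PostgreSQL driver
)

const migrationLockID = 101 // arbitrary application-defined lock key

func main() {
	db, err := sql.Open("postgres", "postgres://sftpgo:secret@localhost/sftpgo?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// pg_advisory_lock blocks until the lock is free, so a second SFTPGo
	// instance waits here instead of running the same migration twice.
	if _, err := db.Exec("SELECT pg_advisory_lock($1)", migrationLockID); err != nil {
		log.Fatal(err)
	}
	defer db.Exec("SELECT pg_advisory_unlock($1)", migrationLockID)

	// ... run schema migrations here ...
}
```
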
Nicola Murino
c9bd08cf9c UI branding: use the short name on the login pages
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-15 07:30:36 +02:00
Nicola Murino
d2f4edcdb6 sftpd statvfs: check the virtual quota against that of the filesystem
if the virtual quota limit is greater than the filesystem available space,
we need to return the filesystem limits

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-14 14:53:26 +02:00
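
The underlying rule is a simple clamp; a minimal sketch (names are illustrative):

```go
package main

import "fmt"

// effectiveAvail returns what statvfs should report: the virtual quota
// cannot promise more space than the underlying filesystem really has.
func effectiveAvail(quotaAvail, fsAvail uint64) uint64 {
	if quotaAvail > fsAvail {
		return fsAvail
	}
	return quotaAvail
}

func main() {
	// quota says 1 TB left, disk only has 100 GB: report the disk limit
	fmt.Println(effectiveAvail(1<<40, 100<<30))
}
```
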
Nicola Murino
67abf03fe3 web UIs: move common css to a separate template file
so we can reuse it instead of copying the same CSS every time

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-14 11:54:55 +02:00
Nicola Murino
5d7f6960f3 web UIs: add branding support
Fixes #829

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-13 19:40:52 +02:00
Paul Laffitte
4bea9ed760 add sftpgo logo on login pages (#835)
Signed-off-by: Paul Laffitte <paul.laffitte@enix.fr>
2022-05-13 17:12:52 +02:00
Nicola Murino
4995cf1b02 defender: allow to load blocklist/safelist also from config/env vars
Fixes #831

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-13 14:46:07 +02:00
Tim Birkett
a5d0cbbe44 chore: fix a linting error (#834)
Signed-off-by: Tim Birkett <tim.birkett@sainsburys.co.uk>
2022-05-13 11:14:38 +02:00
Tim Birkett
7b1a0d3cd3 chore: disallow all crawlers with robots.txt (#833)
Signed-off-by: Tim Birkett <tim.birkett@sainsburys.co.uk>
2022-05-13 09:23:43 +02:00
Nicola Murino
1e0b3a2a8c web client: add share mode read/write
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-09 19:09:43 +02:00
Nicola Murino
e72bb1e124 allow building with bolt provider disabled
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-09 11:19:29 +02:00
Nicola Murino
164621289c awscontainer: add a flag to disable the installation code
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-07 12:50:49 +02:00
Nicola Murino
737109b2b8 sftpfs: add more ciphers, KEXs and MACs
they are negotiated in order.
Restrictions are generally configured server side;
I want to avoid exposing other settings for now.

Fixes #817

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-06 09:21:57 +02:00
Herbert He
8b8e27b702 docs(cn): support README translation for Simplified Chinese (#818)
Signed-off-by: Herbert <herbert.he0229@gmail.com>
2022-05-05 19:15:49 +02:00
Dylan Legendre
4b099640de Updating typos in openapi/swagger documentation as well as various markdown documentation files (#816)
Signed-off-by: Dylan Legendre <dylanlegendre09@gmail.com>
2022-05-05 18:26:22 +02:00
Nicola Murino
80da2dc722 try to automatically find shared data dirs in system-wide paths
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-05 11:27:19 +02:00
Nicola Murino
61947e67ae ftpd: add basic wildcard support
this is the minimal implementation to allow mget and similar commands with
wildcards.

We only support wildcards at the last path level, for example:

- mget *.xml is supported
- mget dir/*.xml is supported
- mget */*.xml is not supported

Removed . and .. from FTP directory listing

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-04 19:32:02 +02:00
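
A minimal sketch of that restriction using the standard library's `path.Match`; the helper name is illustrative:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// matchLastLevel reports whether name matches pattern, where only the
// final path component may contain wildcards (mget *.xml, mget dir/*.xml).
func matchLastLevel(pattern, name string) (bool, error) {
	dir, base := path.Split(pattern)
	if strings.ContainsAny(dir, "*?[") {
		return false, fmt.Errorf("wildcards are only supported at the last path level")
	}
	if !strings.HasPrefix(name, dir) {
		return false, nil
	}
	return path.Match(base, strings.TrimPrefix(name, dir))
}

func main() {
	fmt.Println(matchLastLevel("dir/*.xml", "dir/a.xml")) // true <nil>
	fmt.Println(matchLastLevel("*/*.xml", "dir/a.xml"))   // false, unsupported
}
```
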
Nicola Murino
9a37e3d159 back to development
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-05-01 21:51:58 +02:00
Nicola Murino
14fb6c4038 always check recently updated users
also fix the query to get users for quota check for sql based providers

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-30 11:59:36 +02:00
Nicola Murino
dd9c5b2149 sql provider: enhanced folder mapping query using an upsert
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-28 14:49:57 +02:00
Nicola Murino
ecd488a840 data provider: remove prefer_database_credentials
Google Cloud Storage credentials are now always stored within the data
provider.

Added a migration to read credentials from disk and store them inside the
data provider.

After v2.3 we can also remove credentials_path

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-28 12:55:01 +02:00
Nicola Murino
4a44a7dfe1 improved readlink handling
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-27 18:38:46 +02:00
Nicola Murino
16a44a144b webclient: don't restore checkbox status
Fixes #807

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-26 09:15:26 +02:00
Nicola Murino
97f8142b1e azblobfs: update to the latest sdk and fix compatibility
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-25 17:34:52 +02:00
Nicola Murino
504cd3efda add groups support
Using groups simplifies the administration of multiple accounts by
letting you assign settings once to a group, instead of multiple
times to each individual user.

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-25 15:49:11 +02:00
zemsten
857b6cc10a Update full-configuration.md (#799)
NGINX spelling

Signed-off-by: Samuel Zarn <samz@localhost.localdomain>
2022-04-22 09:22:36 +02:00
Nicola Murino
002a06629e refactoring of user session counters
Fixes #792

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-14 19:07:41 +02:00
Nicola Murino
5bc0f4f8af linux pkgs: disable vcs stamping
it seems to create build issues since go 1.18.1.

I need to investigate further

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-13 10:25:29 +02:00
Nicola Murino
cacfffc5bf OIDC: add support for custom fields
These fields can be used in the pre-login hook to implement custom
logic

Fixes #787

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-12 19:31:25 +02:00
dependabot[bot]
87d7854453 Bump codecov/codecov-action from 2 to 3 (#789)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 2 to 3.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/master/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-04-11 09:24:54 +02:00
dependabot[bot]
aa34388de0 Bump actions/download-artifact from 2 to 3 (#790)
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 2 to 3.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v2...v3)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-04-11 09:24:09 +02:00
Nicola Murino
a3f50029ba update moment.js to v2.29.2
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-09 10:05:15 +02:00
Nicola Murino
f9d8b83c2a sshd: disable by default ssh-rsa host key algo
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-04 18:52:19 +02:00
Nicola Murino
7c8bb5b18a fix quota for uploads outside home dir if rename fails
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-03 13:48:56 +02:00
Nicola Murino
254b2ae87f add support for AWS container
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-03 08:52:36 +02:00
Nicola Murino
5a40f998ae check and update the password hashing algorithm on user login
also add ldap md5 variant as per-user request

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-02 22:20:21 +02:00
Nicola Murino
77f3400161 allow to mount virtual folders on root (/) path
Fixes #783

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-02 18:32:46 +02:00
Nicola Murino
3521bacc4a web user templates: ensure we can save valid users
users with no public key and password are now valid after the recent
changes

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-01 09:47:54 +02:00
Nicola Murino
55f8171dd1 sshd: add support for host key certificates
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-01 08:03:56 +02:00
Nicola Murino
a7b159aebb ssh user certs: add a revoked list
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-31 21:49:06 +02:00
Nicola Murino
5c114b28e3 sshd: we don't need the user certificate
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-31 18:16:50 +02:00
Nicola Murino
e079444e8a update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-31 09:54:59 +02:00
Nicola Murino
3cb23ac956 be sure to close an SSH connection if all channels are idle
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-30 10:59:25 +02:00
Nicola Murino
8fb256ac91 add link to an external Traefik tutorial
update deps

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-29 18:13:43 +02:00
Nicola Murino
ca32cd5e0e allow placeholders for add/update users and folders
remove session token for S3, a temporary token is useless for our usage

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-27 16:32:21 +02:00
Nicola Murino
e0defafa26 azblob: fix the error returned in fs.Stat
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-26 11:47:12 +01:00
Nicola Murino
5cccb872bb add support to redirect HTTP to HTTPS
Fixes #777

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-26 10:00:02 +01:00
Nicola Murino
aaf940edab enforce CSRF token usage by the same IP for which it was issued
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-26 08:41:50 +01:00
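
A minimal sketch of the check, assuming the issuing IP is stored as a claim in the token (names are illustrative):

```go
package main

import (
	"fmt"
	"net"
	"net/http"
)

// validateTokenIP rejects a token presented from a different client IP
// than the one it was issued to (tokenIP comes from a token claim).
func validateTokenIP(r *http.Request, tokenIP string) error {
	host, _, err := net.SplitHostPort(r.RemoteAddr)
	if err != nil {
		host = r.RemoteAddr
	}
	if host != tokenIP {
		return fmt.Errorf("token issued to %q, used from %q", tokenIP, host)
	}
	return nil
}

func main() {
	r := &http.Request{RemoteAddr: "203.0.113.5:52101"}
	fmt.Println(validateTokenIP(r, "203.0.113.5"))  // <nil>
	fmt.Println(validateTokenIP(r, "198.51.100.9")) // error
}
```
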
ismail BASKIN
853086b942 Add role field array support (#774)
oidc: add array role field support

Signed-off-by: Ismail Baskin <ismailbaskin5@gmail.com>
2022-03-25 10:36:35 +01:00
Nicola Murino
81bdba6782 docker: re-add ppc64le
The alpine image for Go 1.18 is now available

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-25 09:32:48 +01:00
Nicola Murino
d955ddcef9 check that the JWT token is used by the same IP for which it was issued
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-24 22:03:17 +01:00
Nicola Murino
4bbb195711 plugin: reload IP filter plugin on demand
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-24 10:21:13 +01:00
Nicola Murino
a193089646 add jq to full docker image variants
Fixes #767

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-23 11:35:23 +01:00
Nicola Murino
9bfdc10172 add support for ipfilter plugins
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-23 10:58:01 +01:00
Nicola Murino
b062b38ef4 docker: add rsync to "full" images
there are better alternatives and rsync will only work on the local
filesystem, but it can still be useful to some people

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-22 17:29:14 +01:00
Nicola Murino
a31a9dc32c update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-21 17:52:18 +01:00
Pr0pHesyer
fa43791ea9 Optimized typography for better readability
Signed-off-by: Pr0pHesyer <proskire@protonmail.com>
2022-03-21 15:05:43 +01:00
Nicola Murino
93b9c1617e web UI: allow to load custom css
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-19 21:44:27 +01:00
Nicola Murino
4c710d731f update to Go 1.18
temporarily disabled the docker image for ppc64le as the alpine image
is not yet available

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-18 21:52:00 +01:00
Nicola Murino
d9f30e7ac5 add a global whitelist
if defined only the listed IPs/networks can access the configured
services, all other client connections will be dropped before they
even try to authenticate

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-17 22:10:52 +01:00
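
A minimal sketch of such a whitelist check, supporting both single IPs and CIDR networks (illustrative, not SFTPGo's actual implementation):

```go
package main

import (
	"fmt"
	"net"
)

// allowed reports whether ip matches any entry of the whitelist, which
// may contain single IPs and CIDR networks.
func allowed(ip string, list []string) bool {
	parsed := net.ParseIP(ip)
	if parsed == nil {
		return false
	}
	for _, entry := range list {
		if _, network, err := net.ParseCIDR(entry); err == nil {
			if network.Contains(parsed) {
				return true
			}
		} else if entry == ip {
			return true
		}
	}
	return false
}

func main() {
	wl := []string{"192.0.2.10", "10.8.0.0/24"}
	fmt.Println(allowed("10.8.0.33", wl))    // true
	fmt.Println(allowed("198.51.100.7", wl)) // false: drop before auth
}
```
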
Nicola Murino
03da7f696c SFTPGo is now listed on Azure Marketplace
Fixes #684

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-17 14:59:02 +01:00
Nicola Murino
883a3dceaf db defender: fix getHost query and add more test cases
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-16 18:22:08 +01:00
lucatiozzo91
7b86e2ac59 always show banned hosts in the UI
When a host is banned but the updated_time field is in the past, the UI
didn't show the record.

Fixes #758

Signed-off-by: lucatiozzo91 <luca.tiozzo91@gmail.com>
2022-03-16 17:38:29 +01:00
Nicola Murino
8502d7b051 improve transfer quota limits test case
ReadAll can read more bytes than the effective size; io.Copy is better
for this test

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-15 22:13:07 +01:00
Nicola Murino
6f8b71b89f s3fs: migrate to AWS SDK V2
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-15 19:16:50 +01:00
Nicola Murino
7e7f662a23 ensure that defaults defined in code match the default config file
Fixes #754

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-14 10:42:14 +01:00
Nicola Murino
0bec1c6012 change the default value for prefer_database_credentials to true ...
... and deprecate this setting.

In the future we'll remove prefer_database_credentials and
credentials_path and we will not allow the credentials to be saved on
the filesystem

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-13 14:29:11 +01:00
Nicola Murino
5582f5c811 data provider: add automatic backups
Automatic backups are enabled by default; a new backup is saved
each day at midnight.

The backups_path setting was moved from the httpd section to the
data_provider one, please adjust your configuration file and/or your
env vars

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-13 13:45:07 +01:00
Nicola Murino
48ed3dab1f update docs and deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-11 17:11:49 +01:00
Nicola Murino
d8de0faef5 allow to require two-factor auth for users
Fixes #721

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-06 16:57:13 +01:00
Nicola Murino
df828b6021 gcsfs: use pagers when listing bucket objects
Hopefully fixes #746

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-04 18:46:17 +01:00
Nicola Murino
056daaddfc always execute fs checks for users not logged in after an update
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-03 19:31:54 +01:00
Nicola Murino
5c2fd8d52a add support for a start directory
Fixes #705

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-03 12:44:56 +01:00
Nicola Murino
4519bffa39 S3: add support for assume role
Fixes #736

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-28 20:19:13 +01:00
Nicola Murino
1ea7429921 initprovider: add load data options
Fixes #741

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-28 17:05:18 +01:00
dependabot[bot]
816c174036 Bump github.com/mattn/go-sqlite3 from 1.14.11 to 1.14.12
Bumps [github.com/mattn/go-sqlite3](https://github.com/mattn/go-sqlite3) from 1.14.11 to 1.14.12.
- [Release notes](https://github.com/mattn/go-sqlite3/releases)
- [Commits](https://github.com/mattn/go-sqlite3/compare/v1.14.11...v1.14.12)

---
updated-dependencies:
- dependency-name: github.com/mattn/go-sqlite3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-02-28 09:26:17 +01:00
Nicola Murino
79857a8733 config: restore defaults for smtp templates path
It was mistakenly deleted in the previous commit

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-27 14:16:38 +01:00
Nicola Murino
dcc3292dbc web setup: add an optional installation code
The purpose of this code is to prevent anyone who can access the
initial setup screen from creating an admin user

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-27 13:08:47 +01:00
Nicola Murino
7f674a7fb3 add more details to the server status page
add all supported fields to the OpenAPI docs

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-26 16:43:29 +01:00
Nicola Murino
b64d3c2fbf simplify rename permission
before this patch we allowed a rename in the following cases:

- the user has rename permission on both source and target path
- the user has delete permission on the source path and create/upload on
  the target path

we now check only the rename/rename_files/rename_dirs permissions.
This is what SFTPGo users expect.

This is a backward incompatible change and it will not be backported to
the 2.2.x branch

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-26 12:19:09 +01:00
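
A minimal sketch of the simplified check, using the permission names mentioned in the commit message (the helper itself is illustrative):

```go
package main

import "fmt"

// Permission names as described above.
const (
	permRename      = "rename"
	permRenameFiles = "rename_files"
	permRenameDirs  = "rename_dirs"
)

// canRename is the simplified check: only the rename permissions count,
// the old delete+upload fallback is gone.
func canRename(perms []string, isDir bool) bool {
	for _, p := range perms {
		switch p {
		case permRename:
			return true
		case permRenameFiles:
			if !isDir {
				return true
			}
		case permRenameDirs:
			if isDir {
				return true
			}
		}
	}
	return false
}

func main() {
	fmt.Println(canRename([]string{"rename_files"}, false))     // true
	fmt.Println(canRename([]string{"delete", "upload"}, false)) // false now
}
```
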
Nicola Murino
7fc5cb80d6 deb/rpm packages: attempt to set the cap_net_bind_service capability
so the service can bind to privileged ports without running as root user

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-26 10:10:51 +01:00
Andrea Mattia
92460f811f Simplify sed commands in Dockerfile(s)
Closes #740

Signed-off-by: Andrea Mattia <andrea.mattia@kireygroup.com>
2022-02-25 20:58:26 +01:00
Nicola Murino
e18ad55067 S3: add support for session tokens
Fixes #736

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-25 15:30:04 +01:00
Nicola Murino
4e9dae6fa4 allow to cache external authentications
Fixes #733

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-25 11:51:10 +01:00
Nicola Murino
f5a0559be6 don't execute fs check if the user has recent activity
The check could be expensive with some backends and is generally
only required the first time that a user logs in

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-24 16:11:35 +01:00
Nicola Murino
670018f05e OIDC: add profile and email scope to OAuth2 config
Fixes #728

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-22 10:20:14 +01:00
Nicola Murino
8bbf54d2b6 azure blobs: add support for multipart downloads
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-21 19:01:31 +01:00
Nicola Murino
d31cccf85f azblob: switch to the new azure-go-sdk
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-20 14:43:24 +01:00
Nicola Murino
c19b03a3f7 shares: add permission to deny sharing without password
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-19 13:31:58 +01:00
Nicola Murino
c6b8644828 OIDC: execute pre-login hook after IDP authentication
so the SFTPGo users can be auto-created using the hook

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-19 10:53:35 +01:00
Nicola Murino
f1a255aa6c httpd: allow to restrict allowed hosts ...
... and to add security headers to the responses

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-17 18:22:27 +01:00
Nicola Murino
876bf8aa4f sftpfs: improve remove
we know whether the client asked to remove a file or a directory, so
let's issue the appropriate command instead of letting the sftp library
guess the intended behavior

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-16 16:46:28 +01:00
Nicola Murino
900e519ff1 SFTP: respect file open flags also for file creation
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-16 16:05:56 +01:00
Nicola Murino
f1832d4478 shares: add an upload form for shares with write scope
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-15 19:19:25 +01:00
Nicola Murino
ebbbf81e65 logger: fix UTC time func
Fixes #719

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-14 12:30:00 +01:00
Nicola Murino
1fccd05e9e allow to configure the minimum version of TLS to be enabled
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-13 15:56:07 +01:00
Nicola Murino
66945c0a02 Web UIs: add OpenID Connect support
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-13 14:30:20 +01:00
Nicola Murino
fa0ca8fe89 quota summary and docs improvements
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-08 12:43:08 +01:00
clach04
c478c7dae9 Docker readme typo
Signed-off-by: clach04 <clach04@gmail.com>
2022-02-06 19:21:35 +01:00
Nicola Murino
9382db751c make HTTP shares browsable
if you share a single folder with read scope, you can now browse the share
and download single files

Fixes #674
See #677

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-06 16:46:43 +01:00
Nicola Murino
7e2a8e70c9 update zerolog deps
The updated version avoids always creating a socket connected to
journald on application start.

Now the socket is only created if we log to journald

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-03 17:55:36 +01:00
Nicola Murino
cd35636939 S3: add a timeout for single part uploads
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-01 12:15:56 +01:00
Nicola Murino
d51adb041e update data transfer quota only if the current IP has some limits
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-31 19:30:25 +01:00
Nicola Murino
02db00d008 dataprovider: add naming rules
naming rules allow supporting case-insensitive usernames, trimming
trailing and leading white space, and accepting any valid UTF-8
character in usernames.

If you were enabling `skip_natural_keys_validation`, you now need to
set `naming_rules` to `1`

Fixes #687

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-31 18:01:37 +01:00
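
A minimal sketch of how such bit-flag naming rules could be applied; the flag values here are illustrative, check the SFTPGo docs for the real ones:

```go
package main

import (
	"fmt"
	"strings"
)

// Illustrative bit flags for the rules described above.
const (
	trimSpaces      = 1
	caseInsensitive = 2
)

// applyNamingRules normalizes a username according to the enabled rules.
func applyNamingRules(username string, rules int) string {
	if rules&trimSpaces != 0 {
		username = strings.TrimSpace(username)
	}
	if rules&caseInsensitive != 0 {
		username = strings.ToLower(username)
	}
	return username
}

func main() {
	fmt.Printf("%q\n", applyNamingRules("  Alice ", trimSpaces|caseInsensitive)) // "alice"
}
```
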
Nicola Murino
fb2d59ec92 data provider: add config options for certs validation/authentication
Fixes #682

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-30 18:04:03 +01:00
Nicola Murino
1df1225eed add support for data transfer bandwidth limits
with a total limit or separate settings for uploads and downloads, and
overrides based on the client's IP address.

Limits can be reset using the REST API

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-30 11:42:36 +01:00
Moroi
aca71bff7a Fixed typo
Signed-off-by: Moroi <4635854+Rango-dz@users.noreply.github.com>
2022-01-26 22:00:59 +01:00
Jeremy Clerc
9709aed5e6 httpd: webpath redirect using status found (302)
301 Moved Permanently is cached by the browser, which can be annoying
when it is on a base path like / while one may reuse the domain
(e.g. localhost) for other apps/tests.

Fixes #695

Signed-off-by: Jeremy Clerc <jeremy@clerc.io>
2022-01-26 21:50:37 +01:00
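
A minimal sketch of the change in plain net/http terms (the target path is illustrative):

```go
package main

import "net/http"

func main() {
	// http.StatusFound (302) is not cached by browsers, so remapping /
	// to another app later does not leave a stale permanent redirect.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		http.Redirect(w, r, "/web/client", http.StatusFound)
	})
	http.ListenAndServe(":8080", nil)
}
```
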
Nicola Murino
d2a4178846 check quota usage between ongoing transfers
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-20 18:19:20 +01:00
Nicola Murino
d73be7aee5 remove the use of some unnecessary pointers
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-16 12:09:17 +01:00
Nicola Murino
ffe7f7ff16 doc improvements and minor changes
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-16 09:50:23 +01:00
Nicola Murino
a6ed6fc721 pattern filters: don't allow files in hidden dirs
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-15 19:06:02 +01:00
Nicola Murino
c3831de94e add hide policy to pattern filters
Disallowed files/dirs can be completly hidden. This may cause performance
issues for large directories

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-15 17:16:49 +01:00
Marc
9b6b9cca3d systemd-security: add some easy wins
We can tighten security by adding the following to
the systemd service file:

* NoNewPrivileges: should never be needed
* DevicePolicy: only basics required
* PrivateDevices: only needs mounted stuff, never devs
* ProtectSystem: no need to change boot
* RestrictAddressFamilies: INET, UNIX only

Signed-off-by: Marc <mail@lpcvoid.com>
2022-01-15 13:31:59 +01:00
Nicola Murino
64d1ea2d89 update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 18:48:08 +01:00
Nicola Murino
1c51239da8 Admin UI: allow to create multiple users/folders from templates
the clone button is no longer needed: you can select a user and
click on template to generate one or more similar users, or you can
create users/folders from an empty template

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-12 19:01:19 +01:00
Nicola Murino
51c15de892 web admin: simplify user page
The page to add/edit users should be less intimidating now.
All the advanced settings are hidden by default. Permissions are set
to any, so if you also have a users base dir set, adding a user only
requires a username and a password or public key

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-10 19:44:16 +01:00
Nicola Murino
b8efb1b8ec squash database migrations.
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-09 12:25:53 +01:00
Nicola Murino
ec1d20f46f sshd: improve docs about supported ciphers, KEX and MACs
also added a check to ensure that the configured values are valid

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-06 18:09:49 +01:00
Nicola Murino
1f619d5ea6 make the sdk a separate module
The SFTPGo SDK now is at the following URL

https://github.com/sftpgo/sdk

Fixes #657

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-06 11:54:43 +01:00
Nicola Murino
6d3d94a01f move kms implementation outside the sdk package
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-06 10:11:47 +01:00
Nicola Murino
0a3d94f73d log at info level the service configurations
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-05 13:22:49 +01:00
Nicola Murino
7c68b03d07 move plugin handling outside the sdk package
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-05 11:37:45 +01:00
Nicola Murino
2912b2e92e sdk: add a logger interface
we are now ready to make the sdk a separate module

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-04 16:07:41 +01:00
Nicola Murino
a6fe802370 move kms definitions to the sdk package
This is the first step to make the sdk a separate module

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-04 12:49:30 +01:00
Nicola Murino
ad483b7581 httpd: switch back to chi Recoverer now that the required patch is merged
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-04 09:48:16 +01:00
Nicola Murino
df86955f28 eventsearcher plugin: add support to search for provider, bucket, endpoint
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-03 17:02:52 +01:00
Nicola Murino
00ec426a80 notifier plugins: add provider, bucket and endpoint to notifier params
Fixes #656

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-02 19:22:44 +01:00
Nicola Murino
222db53410 notifiers plugin: replace params with a struct
Fixes #658

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-02 15:16:35 +01:00
Nicola Murino
4d85dc108f document that SFTPGo is also available as a winget package
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-01 18:19:48 +01:00
Nicola Murino
6d582a821b back to development
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2021-12-31 16:01:23 +01:00
Nicola Murino
794afbf85e update release workflow
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2021-12-31 14:17:51 +01:00
Nicola Murino
e3f3997c5e set version to 2.2.1
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2021-12-31 13:42:03 +01:00
Nicola Murino
f78090e47f update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2021-12-29 18:11:00 +01:00
Nicola Murino
4d7a4aa99a check rename source and target 2021-12-28 12:03:52 +01:00
Nicola Murino
c36217c654 improve some docs 2021-12-26 14:54:29 +01:00
Nicola Murino
59bb578b89 web client: allow to move files between folders
Fixes #653
2021-12-25 17:13:23 +01:00
Nicola Murino
7d8823307f defender: add provider driver
Fixes #616
2021-12-25 12:08:07 +01:00
Nicola Murino
8174349032 console logger: enable colors on Windows too ...
... now that zerolog supports this feature
2021-12-20 18:47:18 +01:00
Nicola Murino
00a02dc14d howto: add two-factor authentication 2021-12-19 18:08:12 +01:00
Nicola Murino
ced73ed04e REST API: add an option to create missing dirs 2021-12-19 12:14:53 +01:00
Nicola Murino
cc73bb811b change log level from warn to error where appropriate
Fixes #649
2021-12-16 19:53:00 +01:00
Nicola Murino
a587228cf0 add support for metadata plugins 2021-12-16 18:18:36 +01:00
Nicola Murino
1472a0f415 hooks: preserve MFA related configs
if a user is updated using pre-login or external auth hook we need to
preserve the MFA related configs in the same way we do if the user is
updated using the REST API
2021-12-11 11:08:20 +01:00
Nicola Murino
0bb141960f add support for different bandwidth limits based on client IP 2021-12-10 18:43:26 +01:00
Nicola Murino
c153330ab8 web client: use fetch to upload files
also add REST API to upload a single file as POST body
2021-12-08 19:25:22 +01:00
Nicola Murino
5b4ef0ee3b windows installer: rename the sample configuration with the default values
The previous name sftpgo.json.default could create confusion for Windows
users
2021-12-05 07:58:53 +01:00
Nicola Murino
9632b6ee94 events search: improve test cases 2021-12-04 18:18:59 +01:00
Nicola Murino
78eb1c1166 update OpenAPI schema 2021-12-04 17:57:48 +01:00
Nicola Murino
a7c0b07a2a add session id to notifier plugins/hook 2021-12-04 17:27:24 +01:00
Nicola Murino
dc1cc88a46 keyboard interactive hooks: allow to validate passcode 2021-12-04 15:14:44 +01:00
Nicola Murino
3f5451eab6 web client: save/restore file list preferences 2021-12-04 07:58:49 +01:00
Nicola Murino
30d98326ca docker: update alpine image to 3.15 2021-12-03 19:33:37 +01:00
Nicola Murino
bedc8e288b web client: add support for integrating external viewers/editors 2021-12-03 18:33:08 +01:00
Nicola Murino
6092b6628e logs: use info level for login related messages
so enabling the debug level is no longer required, for example, just to
understand that a user exceeded the allowed sessions.

Also set the cache update frequency as documented
2021-12-02 19:36:42 +01:00
Nicola Murino
6ee51c5cc1 kms: remove support for compat secrets
also document how to activate the deprecated builtin provider
2021-12-01 17:53:19 +01:00
Nicola Murino
4df0ae82ac web client: allow downloading of single shared files without compression
Fixes #629
2021-11-30 20:32:10 +01:00
Nicola Murino
5db31f0fb3 web client: allow to upload/delete multiple files 2021-11-30 18:40:50 +01:00
Nicola Murino
0f8170c10f improve some docs and disable telemetry server by default 2021-11-29 17:58:10 +01:00
Nicola Murino
3c24cb773f SFTP: log users connections at info level
uniform SFTP and FTP logs

Fixes #626
2021-11-29 10:15:46 +01:00
Nicola Murino
bec54ac8ae CI: add windows x86
there still seem to be people using x86 on Windows ...
2021-11-28 21:30:31 +01:00
Nicola Murino
c330ac8418 CI: add windows arm64 2021-11-28 18:56:30 +01:00
Nicola Murino
3e478f42ea update lint rules and fix some warnings 2021-11-27 17:04:13 +01:00
Nicola Murino
18ab757216 back to development 2021-11-27 15:07:31 +01:00
Nicola Murino
b6bcf0cd94 set version to 2.2.0 2021-11-27 11:46:05 +01:00
Nicola Murino
015aa36c56 loaddata: improve shares restore
usage and timestamps are now preserved
2021-11-27 11:12:51 +01:00
Nicola Murino
f2480ce5c9 improve chtimes handling on open files 2021-11-26 19:00:44 +01:00
Vincent Murphy
f828c58dca Add --s3-force-path-style to portable 2021-11-26 17:40:23 +01:00
Nicola Murino
dc19921b0c web client: don't show the link for expired shares 2021-11-25 20:09:11 +01:00
Nicola Murino
3f3591bae0 web client: allow to preview images and pdf
pdf depends on browser support. It does not work on mobile devices.
2021-11-25 19:24:32 +01:00
Nicola Murino
fc048728d9 add 7digital to the sponsors section 2021-11-25 13:49:32 +01:00
Nicola Murino
aeb4675196 web admin: use a textarea for allowed/denied ip mask fields
Fixes #621
2021-11-25 13:08:12 +01:00
Nicola Murino
4652f9ede8 FTPD: allow to set different passive IPs based on the client's IP address 2021-11-25 12:45:09 +01:00
Nicola Murino
531cb5b5a1 sftpd: handle setstat requests with multiple attrs 2021-11-24 11:55:14 +01:00
Nicola Murino
9fb43b2c46 docs: clarify how multi-step auth works with external authentication
Fixes #617
2021-11-24 11:27:32 +01:00
Nicola Murino
8a8298ad46 web client: improve file upload 2021-11-22 12:25:36 +01:00
Nicola Murino
3d6b09e949 REST API: expose OpenAPI schema and render it using Swagger UI
Fixes #609
2021-11-21 09:32:51 +01:00
Nicola Murino
fb8f013ea7 web: update permissions on cookie refresh 2021-11-20 10:48:39 +01:00
Nicola Murino
c41319bb7a CI: sign windows installer and executable 2021-11-19 22:44:50 +01:00
Nicola Murino
46157ebbb6 CI docker: remove armv7 support
CI is still unreliable if we enable armv7 support
2021-11-16 09:07:10 +01:00
Nicola Murino
200b1d08c7 docker: add armv7 2021-11-15 21:58:35 +01:00
Nicola Murino
24b0352eb6 GCS: add ACL support 2021-11-15 21:57:41 +01:00
Nicola Murino
52f3a98cc8 preserve GCS credentials on update if not set
credentials were not preserved if "prefer_database_credentials" was
set to true

Fixes #613
2021-11-15 19:12:58 +01:00
Nicola Murino
e29a3efd39 add resetprovider sub-command
Fixes #608
2021-11-15 18:40:31 +01:00
Nicola Murino
ca730e77a5 add separate permissions to delete and rename files and dirs
perm_delete and perm_rename still exist for backward compatibility;
they are now aliases that assign both of the new split permissions
2021-11-14 16:23:33 +01:00
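
A minimal sketch of the alias expansion; the split permission names follow the commit message's description and are otherwise illustrative:

```go
package main

import "fmt"

// expandPermAliases keeps backward compatibility: the old aggregate
// permissions become both of the new split ones.
func expandPermAliases(perms []string) []string {
	var out []string
	for _, p := range perms {
		switch p {
		case "delete":
			out = append(out, "delete_files", "delete_dirs")
		case "rename":
			out = append(out, "rename_files", "rename_dirs")
		default:
			out = append(out, p)
		}
	}
	return out
}

func main() {
	fmt.Println(expandPermAliases([]string{"list", "delete", "rename"}))
	// [list delete_files delete_dirs rename_files rename_dirs]
}
```
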
Nicola Murino
0833b4698e httpd service: add CORS support 2021-11-13 23:14:50 +01:00
Nicola Murino
ee5c5e033d S3: add ACL support
Fixes #610
2021-11-13 16:05:40 +01:00
Nicola Murino
78233ff9a3 web UI/REST API: add password reset
In order to reset the password from the admin/client user interface,
an SMTP configuration must be added and the user/admin must have an email
address.
You can prohibit the reset functionality on a per-user basis by using a
specific restriction.

Fixes #597
2021-11-13 13:25:43 +01:00
Nicola Murino
b331dc5686 web client: show share last use and used tokens 2021-11-07 09:53:35 +01:00
Nicola Murino
dfcfcee208 Windows: fix UTC time logging 2021-11-06 16:27:01 +01:00
Nicola Murino
094ee1522e logger: add a flag to use UTC time for logging 2021-11-06 15:18:16 +01:00
Nicola Murino
3bc58f5988 WebClient/REST API: add sharing support 2021-11-06 14:13:20 +01:00
Martijn Pieters
f6938e76dc Parse auth plugin information from env 2021-11-02 11:36:30 +01:00
Nicola Murino
570964deb3 add post-disconnect hook
Fixes #587
2021-10-29 19:55:18 +02:00
Nicola Murino
31984ffec1 update logo and add it to windows exe and installer
thanks to @asheroto for donating the new logo
2021-10-23 19:27:39 +02:00
Nicola Murino
74fc3aaf37 REST API: add events search 2021-10-23 15:47:21 +02:00
Nicola Murino
97d0a48557 plugins: improve notifier and searcher 2021-10-20 19:39:49 +02:00
Nicola Murino
3bbe67571f plugins: add eventsearcher 2021-10-17 16:43:05 +02:00
Nicola Murino
f131ef130b add a link to the new events store plugin 2021-10-16 17:08:34 +02:00
Nicola Murino
4a6a4ce28d sftpfs: map path resolution error to permission denied
we do the same for os fs so that the problematic directory is excluded
from the webdav listing instead of failing the whole directory listing
2021-10-16 10:32:18 +02:00
Nicola Murino
a80ac80fcd pkgs: update nfpm to 2.7 and use xz as compression for both deb and rpm 2021-10-13 09:15:04 +02:00
Nicola Murino
4aa9686e3b refactor custom actions
SFTPGo is now fully auditable: all fs and provider events that change
something are notified and can be collected using hooks/plugins.

There are some backward incompatible changes for command hooks
2021-10-10 13:08:05 +02:00
Nicola Murino
64e87d64bd web client UI: allow to edit plain text files
Fixes #567
2021-10-09 14:17:28 +02:00
Nicola Murino
9ca0b46f30 UI connections page: add a refresh button 2021-10-07 18:28:31 +02:00
Nicola Murino
6eb154bb74 webdav: add support for lock discovery 2021-10-06 09:11:56 +02:00
Nicola Murino
ea01c3a125 rate limiting: allow to exclude IP addresses/ranges
Fixes #563
2021-10-03 20:50:05 +02:00
Nicola Murino
1b4a1fbbe5 add data retention check hook 2021-10-03 15:17:49 +02:00
Nicola Murino
ec81a7ac29 actions: add a specific protocol for data retention 2021-10-03 10:22:47 +02:00
Nicola Murino
22d28a37b6 cmd: improve completion sub-commands 2021-10-03 08:14:57 +02:00
Nicola Murino
cc134cad9a data retention: allow to notify results via e-mail 2021-10-02 22:25:41 +02:00
Nicola Murino
1459150024 WebDAV: improve logs 2021-10-01 20:37:23 +02:00
root
87751e562e Flesh out examples/ldapauth, specifically:
Support 'virtual' users who have no homeDirectory, uidNumber or gidNumber.
Permit read-only access by a user named "anonymous", with any password.
Assume a conventional DIT with users under ou=people,dc=example,dc=com.
Read the LDAP bindPassword from a file (not baked into the code).
Log progress and problems to syslog.
2021-10-01 09:10:13 +02:00
Nicola Murino
e6f969cb04 web UI: update js and css deps 2021-09-30 10:23:25 +02:00
Nicola Murino
ba1febba73 rework user and admin profiles
users and admins can now also update their email and description
2021-09-29 18:46:15 +02:00
Nicola Murino
af8fa7ff81 Docker: remove rsync from default images
it's time to encourage people to switch to more modern alternatives like
rclone
2021-09-27 11:34:11 +02:00
Nicola Murino
4ab2e4088a CI docker: remove armv7 support
building docker images now takes too long and often fails with random
errors. I have to restart the build several times to be able to push
the images to docker hub and gcr
2021-09-27 10:25:21 +02:00
Nicola Murino
da0ccc6426 add SMTP support
it will be used in a future update to add email sending capabilities
2021-09-26 20:25:37 +02:00
Maharanjan
0661876e99 Added email field for user account 2021-09-25 19:06:13 +02:00
Nicola Murino
cd72ac4fc9 CI: add armv7 support 2021-09-25 14:14:21 +02:00
Nicola Murino
da5a061b65 add basic REST APIs for data retention
Fixes #495
2021-09-25 12:20:31 +02:00
Nicola Murino
65948a47f1 systemd unit: set LimitNOFILE to 8192 2021-09-19 17:37:18 +02:00
Nicola Murino
bf4b3e6840 httpd: move the check connection middleware before the logger middleware
Fixes #543
2021-09-19 08:14:59 +02:00
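
In chi (which SFTPGo's httpd service uses), middleware runs in registration order, so the fix is a matter of ordering. A minimal sketch; `checkConnection` is an illustrative stand-in for SFTPGo's middleware:

```go
package main

import (
	"net/http"

	"github.com/go-chi/chi/v5"
	"github.com/go-chi/chi/v5/middleware"
)

// checkConnection is a stand-in for SFTPGo's IP filtering middleware.
func checkConnection(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// ... reject blocked IPs here, before anything gets logged ...
		next.ServeHTTP(w, r)
	})
}

func main() {
	r := chi.NewRouter()
	r.Use(checkConnection)   // runs first: rejected requests...
	r.Use(middleware.Logger) // ...never reach the logger middleware
	r.Get("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	http.ListenAndServe(":8080", r)
}
```
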
Nicola Murino
6ea38188e8 minor fixes and doc improvements 2021-09-18 10:50:17 +02:00
Nicola Murino
b5639a51fd don't generate defender events for HTTP/WebDAV requests with no auth
it is quite common for HTTP clients to send a first request without
the Authorization header and then send the credentials after receiving
a 401 response. We don't want to generate defender events in this case
2021-09-11 18:23:11 +02:00
Nicola Murino
5c34d814d6 fix a possible nil pointer dereference
it can happen when upgrading from very old versions
2021-09-11 14:19:17 +02:00
Nicola Murino
0eca4f1866 update deps 2021-09-08 12:29:47 +02:00
Nicola Murino
b52f829f05 docker: replace mime-support package with media-types
This way the size of the slim image is similar to the previous buster
based images
2021-09-07 21:04:46 +02:00
Nicola Murino
90f64c9f63 distroless image: minor changes 2021-09-07 19:52:28 +02:00
Oleksandr Shvets
c106498dd8 docker: added distroless image 2021-09-06 19:10:28 +02:00
Nicola Murino
7bad65a43e user: add a permission to disable changing api key authentication
also implement the missing APIs to enable/disable api key authentication
2021-09-06 18:46:35 +02:00
Nicola Murino
101c2962ab web client UI: add a permission to disable password change
Fixes #528
2021-09-05 18:49:13 +02:00
Nicola Murino
59140a6d51 add additional data to MFA secrets and fix pointers management 2021-09-05 14:10:12 +02:00
Nicola Murino
b1d54f69d9 admin: fix possible nil pointer dereference
this possible bug was introduced in the previous commit
2021-09-04 13:56:29 +02:00
Nicola Murino
374de07c7b update deps 2021-09-04 13:30:23 +02:00
Nicola Murino
8a4c21b64a add builtin two-factor auth support
The builtin two-factor authentication is based on time-based one time
passwords (RFC 6238), which work with Authy, Google Authenticator and
other compatible apps.
2021-09-04 12:11:04 +02:00
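
For reference, RFC 6238 TOTP is small enough to sketch with the standard library alone: HMAC-SHA1 over the count of 30-second steps, then dynamic truncation to 6 digits. This is an illustrative implementation, not SFTPGo's code:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha1"
	"encoding/binary"
	"fmt"
	"time"
)

// totp computes an RFC 6238 time-based one time password: HMAC-SHA1 over
// the number of 30-second steps since the Unix epoch, then dynamic
// truncation (RFC 4226) to a 6 digit code.
func totp(secret []byte, t time.Time) string {
	var counter [8]byte
	binary.BigEndian.PutUint64(counter[:], uint64(t.Unix()/30))
	mac := hmac.New(sha1.New, secret)
	mac.Write(counter[:])
	sum := mac.Sum(nil)
	offset := sum[len(sum)-1] & 0x0f
	code := binary.BigEndian.Uint32(sum[offset:offset+4]) &^ (1 << 31)
	return fmt.Sprintf("%06d", code%1000000)
}

func main() {
	// The shared secret normally comes from the QR code scanned into the
	// authenticator app during enrollment.
	fmt.Println(totp([]byte("12345678901234567890"), time.Now()))
}
```
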
Nicola Murino
16ba7ddb34 CI: also runs test cases using GOARCH 386
This way we can detect unaligned 64-bit atomic operations that only happen
on 32 bit platforms
2021-08-28 12:03:23 +02:00
Nicola Murino
bd9506da42 BaseConnection struct: ensure 64 bit alignment
Fixes #516
2021-08-28 10:06:49 +02:00
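
The fix comes down to field ordering: on 32-bit platforms sync/atomic requires 64-bit values to be 64-bit aligned, and the first word of an allocated struct is guaranteed to be aligned. A minimal sketch (field names are illustrative, not SFTPGo's actual struct):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// On GOARCH=386 and other 32-bit platforms, putting the int64 counters
// after the smaller fields could panic at runtime, so they go first.
type baseConnection struct {
	bytesSent int64 // accessed atomically: must stay first
	bytesRecv int64
	id        string
	protocol  string
}

func main() {
	c := &baseConnection{}
	atomic.AddInt64(&c.bytesSent, 512)
	fmt.Println(atomic.LoadInt64(&c.bytesSent))
}
```
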
Nicola Murino
b903a6e46f data provider: remove default admin
you need to load initial data or set "create_default_admin" to true
and the appropriate env vars if you don't want to use the web admin
setup screen to create the default admin
2021-08-20 10:37:51 +02:00
Nicola Murino
bcf088f586 data provider: update internal caches if the data provider is shared 2021-08-20 09:35:06 +02:00
Nicola Murino
be3857d572 dataprovider: add timestamp fields for users and admins 2021-08-19 15:51:43 +02:00
Nicola Murino
b99d4ce82e fix folders validation
Fixes #510
2021-08-19 11:28:53 +02:00
Nicola Murino
0a558203da improve proxy documentation
Fixes #507
2021-08-18 15:27:07 +02:00
Nicola Murino
5a549a88fe update to Go 1.17 2021-08-18 14:39:56 +02:00
Nicola Murino
fe953d6b38 REST API: add support for API key authentication 2021-08-17 18:08:32 +02:00
erwiese
05c62b9f40 add documentation for defender scores (#500)
Co-authored-by: Erwin Wiesensarter <erwin.wiesensarter@bkg.bund.de>
2021-08-13 15:40:33 +02:00
Nicola Murino
555dc3b0c0 transfer logs: add FTP mode 2021-08-10 13:07:38 +02:00
Nicola Murino
0de0d3308c improve error messages for generic failures 2021-08-08 19:30:21 +02:00
Nicola Murino
a20373b613 add support for auth plugins 2021-08-08 17:09:48 +02:00
Nicola Murino
ced2e16f41 add support for password validation rules
Fixes #494
2021-08-06 18:56:07 +02:00
Nicola Murino
3ac832c8dd docker: bump Alpine to 3.14 2021-08-05 19:38:30 +02:00
Nicola Murino
a3c087456b ftpd: add some security checks 2021-08-05 18:38:15 +02:00
Nicola Murino
419774158a remove PayPal link
I'm having some issues with my PayPal account, remove it for now
2021-08-03 20:36:10 +02:00
Nicola Murino
0503215e7a web client: try to prevent browsers from caching requests
Fixes #493
2021-08-03 19:58:03 +02:00
dependabot[bot]
9541843ff7 Bump github.com/shirou/gopsutil/v3 from 3.21.6 to 3.21.7 (#491)
Bumps [github.com/shirou/gopsutil/v3](https://github.com/shirou/gopsutil) from 3.21.6 to 3.21.7.
- [Release notes](https://github.com/shirou/gopsutil/releases)
- [Commits](https://github.com/shirou/gopsutil/compare/v3.21.6...v3.21.7)

---
updated-dependencies:
- dependency-name: github.com/shirou/gopsutil/v3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-08-02 10:11:09 +02:00
dependabot[bot]
98f22ba110 Bump uraimo/run-on-arch-action from 2.1.0 to 2.1.1 (#490)
Bumps [uraimo/run-on-arch-action](https://github.com/uraimo/run-on-arch-action) from 2.1.0 to 2.1.1.
- [Release notes](https://github.com/uraimo/run-on-arch-action/releases)
- [Commits](https://github.com/uraimo/run-on-arch-action/compare/v2.1.0...v2.1.1)

---
updated-dependencies:
- dependency-name: uraimo/run-on-arch-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-08-02 10:10:24 +02:00
Nicola Murino
1e9a19e326 add a howto to use SFTPGo as OpenSSH's SFTP subsystem 2021-07-31 19:09:09 +02:00
mmcgeefeedo
0046c9960a add support to override default admin credentials via env vars 2021-07-31 10:39:53 +02:00
Nicola Murino
7640612a95 update deps 2021-07-31 10:22:38 +02:00
Nicola Murino
a26962f367 add dot and dot dot directories to sftp/ftp file listing 2021-07-31 09:42:23 +02:00
Nicola Murino
f778e47d22 sftpd: minor improvements and docs for the prefix middleware 2021-07-29 20:12:23 +02:00
Nicola Murino
4781921336 fix loading enabled_ssh_commands config key 2021-07-29 00:54:22 +02:00
mmcgeefeedo
3ae8abda9e sftpd: add folder prefix middleware 2021-07-29 00:32:55 +02:00
Nicola Murino
90b324d707 Add a link on the login pages to switch between admin and web client login
The links are hidden if only the web admin or only thw web client is
enabled and can also be controlled using the "hide_login_url" setting

Fixes #485
2021-07-27 18:43:00 +02:00
Nicola Murino
3a22aae34f web UI: add support for upload, create dirs, rename, delete 2021-07-26 20:55:49 +02:00
dependabot[bot]
45a0473fec Bump codecov/codecov-action from 1 to 2.0.2 (#486)
* Bump codecov/codecov-action from 1 to 2

Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 1 to 2.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/master/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v1...v2.0.2)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicola Murino <nicola.murino@gmail.com>
2021-07-26 11:08:48 +02:00
Nicola Murino
a7313e4492 webdav: add new test cases and fix some lock related issues
Our net/webdav branch now includes the following patches:

https://github.com/golang/net/pull/92
https://github.com/golang/net/pull/93
https://github.com/golang/net/pull/94
2021-07-25 09:55:14 +02:00
Nicola Murino
c41ae116eb improve logging
Fixes #381
2021-07-24 20:11:17 +02:00
Nicola Murino
83c7453957 user API: allow to disable writes ...
... even if the user has permissions for these actions
2021-07-23 21:41:02 +02:00
Nicola Murino
85a47810ff S3: expose more properties, possible backward incompatible change
Before these changes we implicitly set S3ForcePathStyle if an endpoint
was provided.

This can cause issues with some S3 compatible object storages and must
be explicitly set now.

AWS is also deprecating this setting

https://aws.amazon.com/it/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/
2021-07-23 16:56:48 +02:00
Nicola Murino
c997ef876c S3: fix Ceph compatibility
This hack will no longer be needed once Ceph tags a new version and vendors
using it update their servers.

This code is taken from rclone, thank you!

Fixes #483
2021-07-23 11:41:31 +02:00
Nicola Murino
ae8ccadad2 users API: add API to create, delete, rename files and directories 2021-07-23 10:19:27 +02:00
Nicola Murino
5967aa1aa5 FTP: enable ftpserverlib logging and make debug mode configurable 2021-07-20 17:22:08 +02:00
Nicola Murino
c900cde8e4 notifiers plugin: add settings to retry unhandled events 2021-07-20 12:51:21 +02:00
Nicola Murino
13183a9f76 deps cleanup 2021-07-17 15:42:59 +02:00
Nicola Murino
5a568b4077 KMS: allow to provide the master encryption key as string 2021-07-17 15:34:48 +02:00
Nicola Murino
030507a2ce add some docs for the plugin system 2021-07-17 14:14:42 +02:00
Nicola Murino
338301955f move cloud KMS providers to an external plugin 2021-07-17 13:08:05 +02:00
Nicola Murino
6d313f6d8f expose KMS as plugin 2021-07-16 18:22:42 +02:00
Nicola Murino
776dffcf12 kms: improve modularity 2021-07-13 21:17:21 +02:00
Nicola Murino
e1a2451c22 s3: allow to configure the chunk download timeout 2021-07-11 18:39:45 +02:00
Nicola Murino
7344366ce8 sftpd: remove workarounds for directory listing
The underlying issue was fixed in pkg/sftp 1.13.2
2021-07-11 16:26:40 +02:00
Nicola Murino
bd5191dfc5 add experimental plugin system 2021-07-11 15:26:51 +02:00
Nicola Murino
bfa4085932 improve docs 2021-07-03 18:23:36 +02:00
Nicola Murino
302ec2558c add notifications for mkdir/rmdir 2021-07-03 18:07:55 +02:00
Nicola Murino
ff19879ffd allow to use a persistent signing key for JWT and CSRF tokens
Fixes #466
2021-07-01 20:17:40 +02:00
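To see why a persistent key matters: with a key generated at each startup, every issued token is invalidated on restart, while a configured key keeps sessions valid across restarts. A generic sketch using github.com/golang-jwt/jwt/v4 (not SFTPGo's actual implementation; the key handling is an assumption):

package main

import (
	"fmt"
	"time"

	"github.com/golang-jwt/jwt/v4"
)

func main() {
	// A fixed key, e.g. loaded from configuration, survives restarts, so
	// previously issued tokens keep validating; a random per-process key
	// would not.
	signingKey := []byte("load-me-from-config") // placeholder secret

	token := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{
		"sub": "admin",
		"exp": time.Now().Add(20 * time.Minute).Unix(),
	})
	signed, err := token.SignedString(signingKey)
	if err != nil {
		panic(err)
	}

	parsed, err := jwt.Parse(signed, func(t *jwt.Token) (interface{}, error) {
		return signingKey, nil
	})
	fmt.Println(parsed.Valid, err) // true <nil>
}
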
Nicola Murino
04001f7ad3 FTP: try to return more specific error codes/messages for some errors
We now return the 552 status code for quota exceeded errors and 553 in the
following cases:

- filename denied by a filter
- no upload permission
- no overwrite permission
- pre upload hook error

Fixes #442
2021-06-28 19:40:04 +02:00
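A schematic Go sketch of this kind of mapping (illustrative only: the sentinel errors and function are hypothetical, the status codes are the ones listed above):

package main

import (
	"errors"
	"fmt"
)

// Hypothetical sentinel errors standing in for the cases listed above.
var (
	errQuotaExceeded   = errors.New("quota exceeded")
	errDeniedByFilter  = errors.New("filename denied by a filter")
	errNoUploadPerm    = errors.New("no upload permission")
	errNoOverwritePerm = errors.New("no overwrite permission")
	errPreUploadHook   = errors.New("pre upload hook error")
)

// ftpCode maps an upload error to the FTP reply code described above:
// 552 for quota errors, 553 for the other denied cases, 550 otherwise.
func ftpCode(err error) int {
	switch {
	case errors.Is(err, errQuotaExceeded):
		return 552
	case errors.Is(err, errDeniedByFilter),
		errors.Is(err, errNoUploadPerm),
		errors.Is(err, errNoOverwritePerm),
		errors.Is(err, errPreUploadHook):
		return 553
	default:
		return 550
	}
}

func main() {
	fmt.Println(ftpCode(errQuotaExceeded)) // 552
	fmt.Println(ftpCode(errNoUploadPerm))  // 553
}
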
Nicola Murino
076b2f0ee0 modules: add v2 support 2021-06-26 07:31:41 +02:00
Nicola Murino
93dfb03eaf GCS: add a trailing / to "directories"
This way SFTPGo should be compatible with Google Cloud console.

This change should be backward compatible, testing is welcome

Fixes #464
2021-06-24 19:36:01 +02:00
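The usual way to make a "directory" visible to GCS tooling is a zero-byte object whose name ends with "/". A minimal sketch with cloud.google.com/go/storage (a generic illustration of the convention, not the SFTPGo patch itself; the bucket name is a placeholder):

package main

import (
	"context"
	"log"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// A zero-byte object named "photos/" acts as the directory marker that
	// the Google Cloud console and other tools expect.
	w := client.Bucket("my-bucket").Object("photos/").NewWriter(ctx) // placeholder bucket
	if err := w.Close(); err != nil { // closing with no writes creates the empty object
		log.Fatal(err)
	}
}
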
Nicola Murino
e09bdd43d4 defender: fix GetHost for blocklist entries too 2021-06-20 21:57:19 +02:00
Nicola Murino
ac8d8a3da1 update portable mode docs 2021-06-19 19:40:53 +02:00
Manuel Reithuber
a4157e83e9 template fsconfig: updated form-group css classes so we can further improve onFilesystemChanged()
it doesn't reference any vfs providers at all anymore :)
2021-06-19 19:27:54 +02:00
Manuel Reithuber
13f23838a1 template fsconfig.html: using string provider name in onFilesystemChanged() 2021-06-19 19:27:54 +02:00
Manuel Reithuber
fd4c388b23 added vfs.ListProviders() and using it in template fsconfig.html (added a new ListFSProviders template function for that) 2021-06-19 19:27:54 +02:00
Manuel Reithuber
88b10da596 updated utils.LoadTemplate() to call template.ParseFiles() directly and added a way to specify a base template (will be used in the next commit) 2021-06-19 19:27:54 +02:00
Manuel Reithuber
c07dc74d48 template fsconfig.html: simplified code in onFilesystemChanged() 2021-06-19 19:27:54 +02:00
Manuel Reithuber
b48e01155c FilesystemProvider: added .Name() which reverses vfs.GetProviderByName(), and added .ShortInfo(); using .ShortInfo() in User.GetInfoString() 2021-06-19 19:27:54 +02:00
Manuel Reithuber
0ff010cc94 added vfs.GetProviderByName(), using it for sftpgo portable and for parsing the webadmin form field 2021-06-19 19:27:54 +02:00
Nicola Murino
81aac15a6c defender: don't return expired hosts/banned ip in GetHost too 2021-06-19 18:51:33 +02:00
Nicola Murino
c1b862394d move other errors to utils package 2021-06-19 13:06:01 +02:00
Manuel Reithuber
f19937b715 move Filesystem config validation to vfs 2021-06-19 12:24:43 +02:00
Nicola Murino
f2f612b450 defender: don't return expired hosts/banned ip 2021-06-19 11:02:46 +02:00
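Taken together with the GetHost fixes above, the intent can be shown in a small self-contained sketch (the types and names are hypothetical, not SFTPGo's internals): an entry whose ban time has already passed is treated as absent rather than returned.

package main

import (
	"fmt"
	"time"
)

// hostEntry is a hypothetical record tracked by the defender.
type hostEntry struct {
	banTime time.Time // zero value means the host is not banned
}

// getHost returns a tracked host only if its ban, if any, is still active.
func getHost(hosts map[string]hostEntry, ip string, now time.Time) (hostEntry, bool) {
	h, ok := hosts[ip]
	if !ok {
		return hostEntry{}, false
	}
	if !h.banTime.IsZero() && h.banTime.Before(now) {
		delete(hosts, ip) // the ban expired: drop the stale entry
		return hostEntry{}, false
	}
	return h, true
}

func main() {
	hosts := map[string]hostEntry{
		"192.0.2.1": {banTime: time.Now().Add(-time.Minute)}, // already expired
	}
	_, ok := getHost(hosts, "192.0.2.1", time.Now())
	fmt.Println(ok) // false: expired bans are no longer reported
}
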
Nicola Murino
0c2640bbab update deps 2021-06-19 09:56:49 +02:00
Nicola Murino
3bb0ca1d2b config: remove deprecated configuration keys 2021-06-19 09:47:06 +02:00
Nicola Murino
d5b42f72e2 squash database migrations, remove compat data provider code 2021-06-19 09:03:20 +02:00
Nicola Murino
62744e081b get HTTPD binding from env: respect the documented default 2021-06-17 15:57:41 +02:00
Nicola Murino
9dcaf1555f back to development 2021-06-16 19:28:25 +02:00
440 changed files with 87489 additions and 17377 deletions

.github/FUNDING.yml

@@ -9,4 +9,4 @@ community_bridge: # Replace with a single Community Bridge project-name e.g., cl
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
custom: ['https://www.paypal.com/donate?hosted_button_id=JQL6GBT8GXRKC'] # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']


@@ -2,7 +2,7 @@ name: CI
on:
push:
branches: [main]
branches: [2.3.x]
pull_request:
jobs:
@@ -11,45 +11,77 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
go: [1.16]
go: [1.18]
os: [ubuntu-latest, macos-latest]
upload-coverage: [true]
include:
- go: 1.16
- go: 1.18
os: windows-latest
upload-coverage: false
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Go
uses: actions/setup-go@v2
uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go }}
- name: Build for Linux/macOS x86_64
if: startsWith(matrix.os, 'windows-') != true
run: go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
run: |
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
cd tests/eventsearcher
go build -trimpath -ldflags "-s -w" -o eventsearcher
cd -
cd tests/ipfilter
go build -trimpath -ldflags "-s -w" -o ipfilter
cd -
- name: Build for macOS arm64
if: startsWith(matrix.os, 'macos-') == true
run: CGO_ENABLED=1 GOOS=darwin GOARCH=arm64 SDKROOT=$(xcrun --sdk macosx --show-sdk-path) go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo_arm64
run: CGO_ENABLED=1 GOOS=darwin GOARCH=arm64 SDKROOT=$(xcrun --sdk macosx --show-sdk-path) go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo_arm64
- name: Build for Windows
if: startsWith(matrix.os, 'windows-')
run: |
$GIT_COMMIT = (git describe --always --dirty) | Out-String
$DATE_TIME = ([datetime]::Now.ToUniversalTime().toString("yyyy-MM-ddTHH:mm:ssZ")) | Out-String
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/version.date=$DATE_TIME" -o sftpgo.exe
$LATEST_TAG = ((git describe --tags $(git rev-list --tags --max-count=1)) | Out-String).Trim()
$REV_LIST=$LATEST_TAG+"..HEAD"
$COMMITS_FROM_TAG= ((git rev-list $REV_LIST --count) | Out-String).Trim()
$FILE_VERSION = $LATEST_TAG.substring(1) + "." + $COMMITS_FROM_TAG
go install github.com/tc-hib/go-winres@latest
go-winres simply --arch amd64 --product-version $LATEST_TAG-dev-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o sftpgo.exe
cd tests/eventsearcher
go build -trimpath -ldflags "-s -w" -o eventsearcher.exe
cd ../..
cd tests/ipfilter
go build -trimpath -ldflags "-s -w" -o ipfilter.exe
cd ../..
mkdir arm64
$Env:CGO_ENABLED='0'
$Env:GOOS='windows'
$Env:GOARCH='arm64'
go-winres simply --arch arm64 --product-version $LATEST_TAG-dev-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o .\arm64\sftpgo.exe
mkdir x86
$Env:GOARCH='386'
go-winres simply --arch 386 --product-version $LATEST_TAG-dev-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o .\x86\sftpgo.exe
Remove-Item Env:\CGO_ENABLED
Remove-Item Env:\GOOS
Remove-Item Env:\GOARCH
- name: Run test cases using SQLite provider
run: go test -v -p 1 -timeout 10m ./... -coverprofile=coverage.txt -covermode=atomic
run: go test -v -p 1 -timeout 15m ./... -coverprofile=coverage.txt -covermode=atomic
- name: Upload coverage to Codecov
if: ${{ matrix.upload-coverage }}
uses: codecov/codecov-action@v1
uses: codecov/codecov-action@v3
with:
file: ./coverage.txt
fail_ci_if_error: false
@@ -63,12 +95,14 @@ jobs:
go test -v -p 1 -timeout 5m ./ftpd -covermode=atomic
go test -v -p 1 -timeout 5m ./webdavd -covermode=atomic
go test -v -p 1 -timeout 2m ./telemetry -covermode=atomic
go test -v -p 1 -timeout 2m ./mfa -covermode=atomic
go test -v -p 1 -timeout 2m ./command -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: bolt
SFTPGO_DATA_PROVIDER__NAME: 'sftpgo_bolt.db'
- name: Run test cases using memory provider
run: go test -v -p 1 -timeout 10m ./... -covermode=atomic
run: go test -v -p 1 -timeout 15m ./... -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: memory
SFTPGO_DATA_PROVIDER__NAME: ''
@@ -82,30 +116,140 @@ jobs:
cp sftpgo.json output/
cp -r templates output/
cp -r static output/
cp -r openapi output/
cp init/com.github.drakkan.sftpgo.plist output/init/
./sftpgo gen completion bash > output/bash_completion/sftpgo
./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
./sftpgo gen man -d output/man/man1
gzip output/man/man1/*
- name: Prepare build artifact for Windows
if: startsWith(matrix.os, 'windows-')
- name: Prepare Windows installer
if: ${{ startsWith(matrix.os, 'windows-') && github.event_name != 'pull_request' }}
run: |
Remove-Item -LiteralPath "output" -Force -Recurse -ErrorAction Ignore
mkdir output
copy .\sftpgo.exe .\output
copy .\sftpgo.json .\output
copy .\sftpgo.db .\output
copy .\LICENSE .\output\LICENSE.txt
mkdir output\templates
xcopy .\templates .\output\templates\ /E
mkdir output\static
xcopy .\static .\output\static\ /E
mkdir output\openapi
xcopy .\openapi .\output\openapi\ /E
$LATEST_TAG = ((git describe --tags $(git rev-list --tags --max-count=1)) | Out-String).Trim()
$REV_LIST=$LATEST_TAG+"..HEAD"
$COMMITS_FROM_TAG= ((git rev-list $REV_LIST --count) | Out-String).Trim()
$Env:SFTPGO_ISS_DEV_VERSION = $LATEST_TAG + "." + $COMMITS_FROM_TAG
$CERT_PATH=(Get-Location -PSProvider FileSystem).ProviderPath + "\cert.pfx"
[IO.File]::WriteAllBytes($CERT_PATH,[System.Convert]::FromBase64String($Env:CERT_DATA))
certutil -f -p "$Env:CERT_PASS" -importpfx MY "$CERT_PATH"
rm "$CERT_PATH"
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\sftpgo.exe
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\arm64\sftpgo.exe
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\x86\sftpgo.exe
$INNO_S='/Ssigntool=$qC:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe$q sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n $qNicola Murino$q /d $qSFTPGo$q $f'
iscc "$INNO_S" .\windows-installer\sftpgo.iss
rm .\output\sftpgo.exe
rm .\output\sftpgo.db
copy .\arm64\sftpgo.exe .\output
(Get-Content .\output\sftpgo.json).replace('"sqlite"', '"bolt"') | Set-Content .\output\sftpgo.json
$Env:SFTPGO_DATA_PROVIDER__DRIVER='bolt'
$Env:SFTPGO_DATA_PROVIDER__NAME='.\output\sftpgo.db'
.\sftpgo.exe initprovider
Remove-Item Env:\SFTPGO_DATA_PROVIDER__DRIVER
Remove-Item Env:\SFTPGO_DATA_PROVIDER__NAME
$Env:SFTPGO_ISS_ARCH='arm64'
iscc "$INNO_S" .\windows-installer\sftpgo.iss
rm .\output\sftpgo.exe
copy .\x86\sftpgo.exe .\output
$Env:SFTPGO_ISS_ARCH='x86'
iscc "$INNO_S" .\windows-installer\sftpgo.iss
certutil -delstore MY "Nicola Murino"
env:
CERT_DATA: ${{ secrets.CERT_DATA }}
CERT_PASS: ${{ secrets.CERT_PASS }}
- name: Upload Windows installer x86_64 artifact
if: ${{ startsWith(matrix.os, 'windows-') && github.event_name != 'pull_request' }}
uses: actions/upload-artifact@v3
with:
name: sftpgo_windows_installer_x86_64
path: ./sftpgo_windows_x86_64.exe
- name: Upload Windows installer arm64 artifact
if: ${{ startsWith(matrix.os, 'windows-') && github.event_name != 'pull_request' }}
uses: actions/upload-artifact@v3
with:
name: sftpgo_windows_installer_arm64
path: ./sftpgo_windows_arm64.exe
- name: Upload Windows installer x86 artifact
if: ${{ startsWith(matrix.os, 'windows-') && github.event_name != 'pull_request' }}
uses: actions/upload-artifact@v3
with:
name: sftpgo_windows_installer_x86
path: ./sftpgo_windows_x86.exe
- name: Prepare build artifact for Windows
if: startsWith(matrix.os, 'windows-')
run: |
Remove-Item -LiteralPath "output" -Force -Recurse -ErrorAction Ignore
mkdir output
copy .\sftpgo.exe .\output
mkdir output\arm64
copy .\arm64\sftpgo.exe .\output\arm64
mkdir output\x86
copy .\x86\sftpgo.exe .\output\x86
copy .\sftpgo.json .\output
(Get-Content .\output\sftpgo.json).replace('"sqlite"', '"bolt"') | Set-Content .\output\sftpgo.json
mkdir output\templates
xcopy .\templates .\output\templates\ /E
mkdir output\static
xcopy .\static .\output\static\ /E
mkdir output\openapi
xcopy .\openapi .\output\openapi\ /E
- name: Upload build artifact
if: startsWith(matrix.os, 'ubuntu-') != true
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: sftpgo-${{ matrix.os }}-go-${{ matrix.go }}
path: output
test-goarch-386:
name: Run test cases on 32-bit arch
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: 1.18
- name: Build
run: |
cd tests/eventsearcher
go build -trimpath -ldflags "-s -w" -o eventsearcher
cd -
cd tests/ipfilter
go build -trimpath -ldflags "-s -w" -o ipfilter
cd -
env:
GOARCH: 386
- name: Run test cases
run: go test -v -p 1 -timeout 15m ./... -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: memory
SFTPGO_DATA_PROVIDER__NAME: ''
GOARCH: 386
test-postgresql-mysql-crdb:
name: Test with PgSQL/MySQL/Cockroach
runs-on: ubuntu-latest
@@ -139,20 +283,41 @@ jobs:
ports:
- 3307:3306
mysql:
image: mysql:latest
env:
MYSQL_ROOT_PASSWORD: mysql
MYSQL_DATABASE: sftpgo
MYSQL_USER: sftpgo
MYSQL_PASSWORD: sftpgo
options: >-
--health-cmd "mysqladmin status -h 127.0.0.1 -P 3306 -u root -p$MYSQL_ROOT_PASSWORD"
--health-interval 10s
--health-timeout 5s
--health-retries 6
ports:
- 3308:3306
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v2
uses: actions/setup-go@v3
with:
go-version: 1.16
go-version: 1.18
- name: Build
run: go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
run: |
cd tests/eventsearcher
go build -trimpath -ldflags "-s -w" -o eventsearcher
cd -
cd tests/ipfilter
go build -trimpath -ldflags "-s -w" -o ipfilter
cd -
- name: Run tests using PostgreSQL provider
run: |
go test -v -p 1 -timeout 10m ./... -covermode=atomic
go test -v -p 1 -timeout 15m ./... -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: postgresql
SFTPGO_DATA_PROVIDER__NAME: sftpgo
@@ -163,7 +328,18 @@ jobs:
- name: Run tests using MySQL provider
run: |
go test -v -p 1 -timeout 10m ./... -covermode=atomic
go test -v -p 1 -timeout 15m ./... -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: mysql
SFTPGO_DATA_PROVIDER__NAME: sftpgo
SFTPGO_DATA_PROVIDER__HOST: localhost
SFTPGO_DATA_PROVIDER__PORT: 3308
SFTPGO_DATA_PROVIDER__USERNAME: sftpgo
SFTPGO_DATA_PROVIDER__PASSWORD: sftpgo
- name: Run tests using MariaDB provider
run: |
go test -v -p 1 -timeout 15m ./... -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: mysql
SFTPGO_DATA_PROVIDER__NAME: sftpgo
@@ -171,12 +347,14 @@ jobs:
SFTPGO_DATA_PROVIDER__PORT: 3307
SFTPGO_DATA_PROVIDER__USERNAME: sftpgo
SFTPGO_DATA_PROVIDER__PASSWORD: sftpgo
SFTPGO_DATA_PROVIDER__SQL_TABLES_PREFIX: prefix_
- name: Run tests using CockroachDB provider
run: |
docker run --rm --name crdb --health-cmd "curl -I http://127.0.0.1:8080" --health-interval 10s --health-timeout 5s --health-retries 6 -p 26257:26257 -d cockroachdb/cockroach:latest start-single-node --insecure --listen-addr 0.0.0.0:26257
docker run --rm --name crdb --health-cmd "curl -I http://127.0.0.1:8080" --health-interval 10s --health-timeout 5s --health-retries 6 -p 26257:26257 -d cockroachdb/cockroach:latest start-single-node --insecure --listen-addr :26257
sleep 10
docker exec crdb cockroach sql --insecure -e 'create database "sftpgo"'
go test -v -p 1 -timeout 10m ./... -covermode=atomic
go test -v -p 1 -timeout 15m ./... -covermode=atomic
docker stop crdb
env:
SFTPGO_DATA_PROVIDER__DRIVER: cockroachdb
@@ -185,6 +363,7 @@ jobs:
SFTPGO_DATA_PROVIDER__PORT: 26257
SFTPGO_DATA_PROVIDER__USERNAME: root
SFTPGO_DATA_PROVIDER__PASSWORD:
SFTPGO_DATA_PROVIDER__SQL_TABLES_PREFIX: prefix_
build-linux-packages:
name: Build Linux packages
@@ -193,7 +372,7 @@ jobs:
matrix:
include:
- arch: amd64
go: 1.16
go: 1.18
go-arch: amd64
- arch: aarch64
distro: ubuntu18.04
@@ -203,24 +382,29 @@ jobs:
distro: ubuntu18.04
go: latest
go-arch: ppc64le
- arch: armv7
distro: ubuntu18.04
go: latest
go-arch: arm7
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Go
if: ${{ matrix.arch == 'amd64' }}
uses: actions/setup-go@v2
uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go }}
- name: Build on amd64
if: ${{ matrix.arch == 'amd64' }}
run: |
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
mkdir -p output/{init,bash_completion,zsh_completion}
cp sftpgo.json output/
cp -r templates output/
cp -r static output/
cp -r openapi output/
cp init/sftpgo.service output/init/
./sftpgo gen completion bash > output/bash_completion/sftpgo
./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
@@ -228,7 +412,7 @@ jobs:
gzip output/man/man1/*
cp sftpgo output/
- uses: uraimo/run-on-arch-action@v2.0.10
- uses: uraimo/run-on-arch-action@v2
if: ${{ matrix.arch != 'amd64' }}
name: Build for ${{ matrix.arch }}
id: build
@@ -245,19 +429,29 @@ jobs:
apt-get install -q -y curl gcc git
if [ ${{ matrix.go }} == 'latest' ]
then
GO_VERSION=$(curl https://golang.org/VERSION?m=text)
GO_VERSION=$(curl -L https://go.dev/VERSION?m=text)
else
GO_VERSION=${{ matrix.go }}
fi
curl --retry 5 --retry-delay 2 --connect-timeout 10 -o go.tar.gz -L https://golang.org/dl/${GO_VERSION}.linux-${{ matrix.go-arch }}.tar.gz
GO_DOWNLOAD_ARCH=${{ matrix.go-arch }}
if [ ${{ matrix.arch}} == 'armv7' ]
then
GO_DOWNLOAD_ARCH=armv6l
fi
curl --retry 5 --retry-delay 2 --connect-timeout 10 -o go.tar.gz -L https://go.dev/dl/${GO_VERSION}.linux-${GO_DOWNLOAD_ARCH}.tar.gz
tar -C /usr/local -xzf go.tar.gz
run: |
export PATH=$PATH:/usr/local/go/bin
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
if [ ${{ matrix.arch}} == 'armv7' ]
then
export GOARM=7
fi
go build -buildvcs=false -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
mkdir -p output/{init,bash_completion,zsh_completion}
cp sftpgo.json output/
cp -r templates output/
cp -r static output/
cp -r openapi output/
cp init/sftpgo.service output/init/
./sftpgo gen completion bash > output/bash_completion/sftpgo
./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
@@ -266,7 +460,7 @@ jobs:
cp sftpgo output/
- name: Upload build artifact
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: sftpgo-linux-${{ matrix.arch }}-go-${{ matrix.go }}
path: output
@@ -281,13 +475,13 @@ jobs:
echo "::set-output name=pkg-version::${PKG_VERSION}"
- name: Upload Debian Package
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-${{ matrix.go-arch }}-deb
path: pkgs/dist/deb/*
- name: Upload RPM Package
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-${{ matrix.go-arch }}-rpm
path: pkgs/dist/rpm/*
@@ -296,13 +490,12 @@ jobs:
name: golangci-lint
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Go
uses: actions/setup-go@v2
uses: actions/setup-go@v3
with:
go-version: 1.16
go-version: 1.18
- uses: actions/checkout@v3
- name: Run golangci-lint
uses: golangci/golangci-lint-action@v2
uses: golangci/golangci-lint-action@v3
with:
version: latest
skip-go-installation: true
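The repeated -X flags in the build commands above overwrite package-level string variables at link time. A minimal sketch of the pattern they rely on (the variable names mirror the flags; the rest of the package layout is illustrative):

// Package version: a file such as version/version.go in the module.
package version

import "fmt"

// These defaults are replaced at build time via
//   -ldflags "-X <module>/version.commit=... -X <module>/version.date=..."
var (
	commit = "unknown"
	date   = "unknown"
)

// Get returns the build information embedded by the linker.
func Get() string {
	return fmt.Sprintf("commit: %s, build date: %s", commit, date)
}
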


@@ -5,7 +5,7 @@ on:
# - cron: '0 4 * * *' # everyday at 4:00 AM UTC
push:
branches:
- main
- 2.3.x
tags:
- v*
pull_request:
@@ -24,17 +24,13 @@ jobs:
optional_deps:
- true
- false
include:
- os: ubuntu-latest
docker_pkg: distroless
optional_deps: false
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Repo metadata
id: repo
uses: actions/github-script@v4
with:
script: |
const repo = await github.repos.get(context.repo)
return repo.data
uses: actions/checkout@v3
- name: Gather image information
id: info
@@ -64,8 +60,11 @@ jobs:
VERSION="${VERSION}-alpine"
VERSION_SLIM="${VERSION}-slim"
DOCKERFILE=Dockerfile.alpine
elif [[ $DOCKER_PKG == distroless ]]; then
VERSION="${VERSION}-distroless"
VERSION_SLIM="${VERSION}-slim"
DOCKERFILE=Dockerfile.distroless
fi
DOCKER_IMAGES=("drakkan/sftpgo" "ghcr.io/drakkan/sftpgo")
TAGS="${DOCKER_IMAGES[0]}:${VERSION}"
TAGS_SLIM="${DOCKER_IMAGES[0]}:${VERSION_SLIM}"
@@ -83,6 +82,13 @@ jobs:
fi
TAGS="${TAGS},${DOCKER_IMAGE}:latest"
TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:slim"
elif [[ $DOCKER_PKG == distroless ]]; then
if [[ -n $MAJOR && -n $MINOR ]]; then
TAGS="${TAGS},${DOCKER_IMAGE}:${MINOR}-distroless,${DOCKER_IMAGE}:${MAJOR}-distroless"
TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:${MINOR}-distroless-slim,${DOCKER_IMAGE}:${MAJOR}-distroless-slim"
fi
TAGS="${TAGS},${DOCKER_IMAGE}:distroless"
TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:distroless-slim"
else
if [[ -n $MAJOR && -n $MINOR ]]; then
TAGS="${TAGS},${DOCKER_IMAGE}:${MINOR}-alpine,${DOCKER_IMAGE}:${MAJOR}-alpine"
@@ -111,30 +117,31 @@ jobs:
OPTIONAL_DEPS: ${{ matrix.optional_deps }}
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
uses: docker/setup-qemu-action@v2
- name: Set up builder
uses: docker/setup-buildx-action@v1
uses: docker/setup-buildx-action@v2
id: builder
- name: Login to Docker Hub
uses: docker/login-action@v1
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
if: ${{ github.event_name != 'pull_request' }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v1
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.CR_PAT }}
password: ${{ secrets.GITHUB_TOKEN }}
if: ${{ github.event_name != 'pull_request' }}
- name: Build and push
uses: docker/build-push-action@v2
uses: docker/build-push-action@v3
with:
context: .
builder: ${{ steps.builder.outputs.name }}
file: ./${{ steps.info.outputs.dockerfile }}
platforms: linux/amd64,linux/arm64,linux/ppc64le
@@ -145,11 +152,11 @@ jobs:
INSTALL_OPTIONAL_PACKAGES=${{ steps.info.outputs.full }}
labels: |
org.opencontainers.image.title=SFTPGo
org.opencontainers.image.description=Fully featured and highly configurable SFTP server with optional FTP/S and WebDAV support
org.opencontainers.image.url=${{ fromJson(steps.repo.outputs.result).html_url }}
org.opencontainers.image.documentation=${{ fromJson(steps.repo.outputs.result).html_url }}/blob/${{ github.sha }}/docker/README.md
org.opencontainers.image.source=${{ fromJson(steps.repo.outputs.result).html_url }}
org.opencontainers.image.description=Fully featured and highly configurable SFTP server with optional HTTP, FTP/S and WebDAV support
org.opencontainers.image.url=https://github.com/drakkan/sftpgo
org.opencontainers.image.documentation=https://github.com/drakkan/sftpgo/blob/${{ github.sha }}/docker/README.md
org.opencontainers.image.source=https://github.com/drakkan/sftpgo
org.opencontainers.image.version=${{ steps.info.outputs.version }}
org.opencontainers.image.created=${{ steps.info.outputs.created }}
org.opencontainers.image.revision=${{ github.sha }}
org.opencontainers.image.licenses=${{ fromJson(steps.repo.outputs.result).license.spdx_id }}
org.opencontainers.image.licenses=AGPL-3.0


@@ -5,16 +5,16 @@ on:
tags: 'v*'
env:
GO_VERSION: 1.16.5
GO_VERSION: 1.18.5
jobs:
prepare-sources-with-deps:
name: Prepare sources with deps
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v2
uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
@@ -31,7 +31,7 @@ jobs:
SFTPGO_VERSION: ${{ steps.get_version.outputs.VERSION }}
- name: Upload build artifact
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_src_with_deps.tar.xz
path: ./sftpgo_${{ steps.get_version.outputs.VERSION }}_src_with_deps.tar.xz
@@ -42,34 +42,15 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [macos-10.15, windows-2019]
os: [macos-11, windows-2022]
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v2
uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
- name: Build for macOS x86_64
if: startsWith(matrix.os, 'windows-') != true
run: go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
- name: Build for macOS arm64
if: startsWith(matrix.os, 'macos-') == true
run: CGO_ENABLED=1 GOOS=darwin GOARCH=arm64 SDKROOT=$(xcrun --sdk macosx --show-sdk-path) go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo_arm64
- name: Build for Windows
if: startsWith(matrix.os, 'windows-')
run: |
$GIT_COMMIT = (git describe --always --dirty) | Out-String
$DATE_TIME = ([datetime]::Now.ToUniversalTime().toString("yyyy-MM-ddTHH:mm:ssZ")) | Out-String
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/version.date=$DATE_TIME" -o sftpgo.exe
- name: Initialize data provider
run: ./sftpgo initprovider
shell: bash
- name: Get SFTPGo version
id: get_version
run: echo ::set-output name=VERSION::${GITHUB_REF/refs\/tags\//}
@@ -88,6 +69,43 @@ jobs:
env:
MATRIX_OS: ${{ matrix.os }}
- name: Build for macOS x86_64
if: startsWith(matrix.os, 'windows-') != true
run: go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
- name: Build for macOS arm64
if: startsWith(matrix.os, 'macos-') == true
run: CGO_ENABLED=1 GOOS=darwin GOARCH=arm64 SDKROOT=$(xcrun --sdk macosx --show-sdk-path) go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo_arm64
- name: Build for Windows
if: startsWith(matrix.os, 'windows-')
run: |
$GIT_COMMIT = (git describe --always --dirty) | Out-String
$DATE_TIME = ([datetime]::Now.ToUniversalTime().toString("yyyy-MM-ddTHH:mm:ssZ")) | Out-String
$FILE_VERSION = $Env:SFTPGO_VERSION.substring(1) + ".0"
go install github.com/tc-hib/go-winres@latest
go-winres simply --arch amd64 --product-version $Env:SFTPGO_VERSION-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o sftpgo.exe
mkdir arm64
$Env:CGO_ENABLED='0'
$Env:GOOS='windows'
$Env:GOARCH='arm64'
go-winres simply --arch arm64 --product-version $Env:SFTPGO_VERSION-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o .\arm64\sftpgo.exe
mkdir x86
$Env:GOARCH='386'
go-winres simply --arch 386 --product-version $Env:SFTPGO_VERSION-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o .\x86\sftpgo.exe
Remove-Item Env:\CGO_ENABLED
Remove-Item Env:\GOOS
Remove-Item Env:\GOARCH
env:
SFTPGO_VERSION: ${{ steps.get_version.outputs.VERSION }}
- name: Initialize data provider
run: ./sftpgo initprovider
shell: bash
- name: Prepare Release for macOS
if: startsWith(matrix.os, 'macos-')
run: |
@@ -100,6 +118,7 @@ jobs:
cp sftpgo.json output/
cp sftpgo.db output/sqlite/
cp -r static output/
cp -r openapi output/
cp -r templates output/
cp init/com.github.drakkan.sftpgo.plist output/init/
./sftpgo gen completion bash > output/bash_completion/sftpgo
@@ -129,31 +148,65 @@ jobs:
xcopy .\templates .\output\templates\ /E
mkdir output\static
xcopy .\static .\output\static\ /E
iscc windows-installer\sftpgo.iss
mkdir output\openapi
xcopy .\openapi .\output\openapi\ /E
$CERT_PATH=(Get-Location -PSProvider FileSystem).ProviderPath + "\cert.pfx"
[IO.File]::WriteAllBytes($CERT_PATH,[System.Convert]::FromBase64String($Env:CERT_DATA))
certutil -f -p "$Env:CERT_PASS" -importpfx MY "$CERT_PATH"
rm "$CERT_PATH"
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\sftpgo.exe
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\arm64\sftpgo.exe
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\x86\sftpgo.exe
$INNO_S='/Ssigntool=$qC:/Program Files (x86)/Windows Kits/10/bin/10.0.20348.0/x86/signtool.exe$q sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n $qNicola Murino$q /d $qSFTPGo$q $f'
iscc "$INNO_S" .\windows-installer\sftpgo.iss
rm .\output\sftpgo.exe
rm .\output\sftpgo.db
copy .\arm64\sftpgo.exe .\output
(Get-Content .\output\sftpgo.json).replace('"sqlite"', '"bolt"') | Set-Content .\output\sftpgo.json
$Env:SFTPGO_DATA_PROVIDER__DRIVER='bolt'
$Env:SFTPGO_DATA_PROVIDER__NAME='.\output\sftpgo.db'
.\sftpgo.exe initprovider
Remove-Item Env:\SFTPGO_DATA_PROVIDER__DRIVER
Remove-Item Env:\SFTPGO_DATA_PROVIDER__NAME
$Env:SFTPGO_ISS_ARCH='arm64'
iscc "$INNO_S" .\windows-installer\sftpgo.iss
rm .\output\sftpgo.exe
copy .\x86\sftpgo.exe .\output
$Env:SFTPGO_ISS_ARCH='x86'
iscc "$INNO_S" .\windows-installer\sftpgo.iss
certutil -delstore MY "Nicola Murino"
env:
SFTPGO_ISS_VERSION: ${{ steps.get_version.outputs.VERSION }}
SFTPGO_ISS_DOC_URL: https://github.com/drakkan/sftpgo/blob/${{ steps.get_version.outputs.VERSION }}/README.md
CERT_DATA: ${{ secrets.CERT_DATA }}
CERT_PASS: ${{ secrets.CERT_PASS }}
- name: Prepare Portable Release for Windows
if: startsWith(matrix.os, 'windows-')
run: |
mkdir win-portable
copy .\sftpgo.exe .\win-portable
mkdir win-portable\arm64
copy .\arm64\sftpgo.exe .\win-portable\arm64
mkdir win-portable\x86
copy .\x86\sftpgo.exe .\win-portable\x86
copy .\sftpgo.json .\win-portable
copy .\sftpgo.db .\win-portable
(Get-Content .\win-portable\sftpgo.json).replace('"sqlite"', '"bolt"') | Set-Content .\win-portable\sftpgo.json
copy .\output\sftpgo.db .\win-portable
copy .\LICENSE .\win-portable\LICENSE.txt
mkdir win-portable\templates
xcopy .\templates .\win-portable\templates\ /E
mkdir win-portable\static
xcopy .\static .\win-portable\static\ /E
Compress-Archive .\win-portable\* sftpgo_portable_x86_64.zip
env:
SFTPGO_VERSION: ${{ steps.get_version.outputs.VERSION }}
OS: ${{ steps.get_os_name.outputs.OS }}
mkdir win-portable\openapi
xcopy .\openapi .\win-portable\openapi\ /E
Compress-Archive .\win-portable\* sftpgo_portable.zip
- name: Upload macOS x86_64 artifact
if: startsWith(matrix.os, 'macos-')
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.tar.xz
path: ./sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.tar.xz
@@ -161,26 +214,42 @@ jobs:
- name: Upload macOS arm64 artifact
if: startsWith(matrix.os, 'macos-')
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_arm64.tar.xz
path: ./sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_arm64.tar.xz
retention-days: 1
- name: Upload Windows installer artifact
- name: Upload Windows installer x86_64 artifact
if: startsWith(matrix.os, 'windows-')
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.exe
path: ./sftpgo_windows_x86_64.exe
retention-days: 1
- name: Upload Windows installer arm64 artifact
if: startsWith(matrix.os, 'windows-')
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_arm64.exe
path: ./sftpgo_windows_arm64.exe
retention-days: 1
- name: Upload Windows installer x86 artifact
if: startsWith(matrix.os, 'windows-')
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86.exe
path: ./sftpgo_windows_x86.exe
retention-days: 1
- name: Upload Windows portable artifact
if: startsWith(matrix.os, 'windows-')
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_portable_x86_64.zip
path: ./sftpgo_portable_x86_64.zip
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_portable.zip
path: ./sftpgo_portable.zip
retention-days: 1
prepare-linux:
@@ -206,12 +275,18 @@ jobs:
deb-arch: ppc64el
rpm-arch: ppc64le
tar-arch: ppc64le
- arch: armv7
distro: ubuntu18.04
go-arch: arm7
deb-arch: armhf
rpm-arch: armv7hl
tar-arch: armv7
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Set up Go
if: ${{ matrix.arch == 'amd64' }}
uses: actions/setup-go@v2
uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
@@ -227,7 +302,7 @@ jobs:
- name: Build on amd64
if: ${{ matrix.arch == 'amd64' }}
run: |
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
mkdir -p output/{init,sqlite,bash_completion,zsh_completion}
echo "For documentation please take a look here:" > output/README.txt
echo "" >> output/README.txt
@@ -236,6 +311,7 @@ jobs:
cp sftpgo.json output/
cp -r templates output/
cp -r static output/
cp -r openapi output/
cp init/sftpgo.service output/init/
./sftpgo initprovider
./sftpgo gen completion bash > output/bash_completion/sftpgo
@@ -250,7 +326,7 @@ jobs:
env:
SFTPGO_VERSION: ${{ steps.get_version.outputs.SFTPGO_VERSION }}
- uses: uraimo/run-on-arch-action@v2.0.10
- uses: uraimo/run-on-arch-action@v2
if: ${{ matrix.arch != 'amd64' }}
name: Build for ${{ matrix.arch }}
id: build
@@ -265,11 +341,16 @@ jobs:
install: |
apt-get update -q -y
apt-get install -q -y curl gcc git xz-utils
curl --retry 5 --retry-delay 2 --connect-timeout 10 -o go.tar.gz -L https://golang.org/dl/go${{ steps.get_version.outputs.GO_VERSION }}.linux-${{ matrix.go-arch }}.tar.gz
GO_DOWNLOAD_ARCH=${{ matrix.go-arch }}
if [ ${{ matrix.arch}} == 'armv7' ]
then
GO_DOWNLOAD_ARCH=armv6l
fi
curl --retry 5 --retry-delay 2 --connect-timeout 10 -o go.tar.gz -L https://go.dev/dl/go${{ steps.get_version.outputs.GO_VERSION }}.linux-${GO_DOWNLOAD_ARCH}.tar.gz
tar -C /usr/local -xzf go.tar.gz
run: |
export PATH=$PATH:/usr/local/go/bin
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
go build -buildvcs=false -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
mkdir -p output/{init,sqlite,bash_completion,zsh_completion}
echo "For documentation please take a look here:" > output/README.txt
echo "" >> output/README.txt
@@ -278,6 +359,7 @@ jobs:
cp sftpgo.json output/
cp -r templates output/
cp -r static output/
cp -r openapi output/
cp init/sftpgo.service output/init/
./sftpgo initprovider
./sftpgo gen completion bash > output/bash_completion/sftpgo
@@ -291,7 +373,7 @@ jobs:
cd ..
- name: Upload build artifact for ${{ matrix.arch }}
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_${{ matrix.tar-arch }}.tar.xz
path: ./output/sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_${{ matrix.tar-arch }}.tar.xz
@@ -309,14 +391,14 @@ jobs:
SFTPGO_VERSION: ${{ steps.get_version.outputs.SFTPGO_VERSION }}
- name: Upload Deb Package
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.build_linux_pkgs.outputs.pkg-version }}-1_${{ matrix.deb-arch}}.deb
path: ./pkgs/dist/deb/sftpgo_${{ steps.build_linux_pkgs.outputs.pkg-version }}-1_${{ matrix.deb-arch}}.deb
retention-days: 1
- name: Upload RPM Package
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-1.${{ matrix.rpm-arch}}.rpm
path: ./pkgs/dist/rpm/sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-1.${{ matrix.rpm-arch}}.rpm
@@ -335,30 +417,37 @@ jobs:
shell: bash
- name: Download amd64 artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_x86_64.tar.xz
- name: Download arm64 artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_arm64.tar.xz
- name: Download ppc64le artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_ppc64le.tar.xz
- name: Download armv7 artifact
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_armv7.tar.xz
- name: Build bundle
shell: bash
run: |
mkdir -p bundle/{arm64,ppc64le}
mkdir -p bundle/{arm64,ppc64le,armv7}
cd bundle
tar xvf ../sftpgo_${SFTPGO_VERSION}_linux_x86_64.tar.xz
cd arm64
tar xvf ../../sftpgo_${SFTPGO_VERSION}_linux_arm64.tar.xz sftpgo
cd ../ppc64le
tar xvf ../../sftpgo_${SFTPGO_VERSION}_linux_ppc64le.tar.xz sftpgo
cd ../armv7
tar xvf ../../sftpgo_${SFTPGO_VERSION}_linux_armv7.tar.xz sftpgo
cd ..
tar cJvf sftpgo_${SFTPGO_VERSION}_linux_bundle.tar.xz *
cd ..
@@ -366,7 +455,7 @@ jobs:
SFTPGO_VERSION: ${{ steps.get_version.outputs.SFTPGO_VERSION }}
- name: Upload Linux bundle
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_bundle.tar.xz
path: ./bundle/sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_bundle.tar.xz
@@ -378,7 +467,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Get versions
id: get_version
run: |
@@ -389,84 +478,111 @@ jobs:
shell: bash
- name: Download amd64 artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_x86_64.tar.xz
- name: Download arm64 artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_arm64.tar.xz
- name: Download ppc64le artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_ppc64le.tar.xz
- name: Download armv7 artifact
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_armv7.tar.xz
- name: Download Linux bundle artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_bundle.tar.xz
- name: Download Deb amd64 artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.PKG_VERSION }}-1_amd64.deb
- name: Download Deb arm64 artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.PKG_VERSION }}-1_arm64.deb
- name: Download Deb ppc64le artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.PKG_VERSION }}-1_ppc64el.deb
- name: Download Deb armv7 artifact
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.PKG_VERSION }}-1_armhf.deb
- name: Download RPM x86_64 artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo-${{ steps.get_version.outputs.PKG_VERSION }}-1.x86_64.rpm
- name: Download RPM aarch64 artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo-${{ steps.get_version.outputs.PKG_VERSION }}-1.aarch64.rpm
- name: Download RPM ppc64le artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo-${{ steps.get_version.outputs.PKG_VERSION }}-1.ppc64le.rpm
- name: Download RPM armv7 artifact
uses: actions/download-artifact@v3
with:
name: sftpgo-${{ steps.get_version.outputs.PKG_VERSION }}-1.armv7hl.rpm
- name: Download macOS x86_64 artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_macOS_x86_64.tar.xz
- name: Download macOS arm64 artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_macOS_arm64.tar.xz
- name: Download Windows installer x86_64 artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_windows_x86_64.exe
- name: Download Windows portable x86_64 artifact
uses: actions/download-artifact@v2
- name: Download Windows installer arm64 artifact
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_windows_portable_x86_64.zip
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_windows_arm64.exe
- name: Download Windows installer x86 artifact
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_windows_x86.exe
- name: Download Windows portable artifact
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_windows_portable.zip
- name: Download source with deps artifact
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_src_with_deps.tar.xz
- name: Create release
run: |
mv sftpgo_windows_x86_64.exe sftpgo_${SFTPGO_VERSION}_windows_x86_64.exe
mv sftpgo_portable_x86_64.zip sftpgo_${SFTPGO_VERSION}_windows_portable_x86_64.zip
mv sftpgo_windows_arm64.exe sftpgo_${SFTPGO_VERSION}_windows_arm64.exe
mv sftpgo_windows_x86.exe sftpgo_${SFTPGO_VERSION}_windows_x86.exe
mv sftpgo_portable.zip sftpgo_${SFTPGO_VERSION}_windows_portable.zip
gh release create "${SFTPGO_VERSION}" -t "${SFTPGO_VERSION}"
gh release upload "${SFTPGO_VERSION}" sftpgo_*.xz --clobber
gh release upload "${SFTPGO_VERSION}" sftpgo-*.rpm --clobber


@@ -25,6 +25,14 @@ linters-settings:
#enable:
# - fieldalignment
issues:
include:
- EXC0002
- EXC0012
- EXC0013
- EXC0014
- EXC0015
linters:
enable:
- goconst
@@ -40,4 +48,5 @@ linters:
- whitespace
- dupl
- rowserrcheck
- dogsled
- dogsled
- govet

DCO (new file)

@@ -0,0 +1,34 @@
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.


@@ -1,4 +1,4 @@
FROM golang:1.16-buster as builder
FROM golang:1.18-bullseye as builder
ENV GOFLAGS="-mod=readonly"
@@ -21,16 +21,16 @@ COPY . .
RUN set -xe && \
export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --dirty)} && \
go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -v -o sftpgo
go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -v -o sftpgo
FROM debian:buster-slim
FROM debian:bullseye-slim
# Set to "true" to install the optional git and rsync dependencies
# Set to "true" to install jq and the optional git and rsync dependencies
ARG INSTALL_OPTIONAL_PACKAGES=false
RUN apt-get update && apt-get install --no-install-recommends -y ca-certificates mime-support && rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install --no-install-recommends -y ca-certificates media-types && rm -rf /var/lib/apt/lists/*
RUN if [ "${INSTALL_OPTIONAL_PACKAGES}" = "true" ]; then apt-get update && apt-get install --no-install-recommends -y git rsync && rm -rf /var/lib/apt/lists/*; fi
RUN if [ "${INSTALL_OPTIONAL_PACKAGES}" = "true" ]; then apt-get update && apt-get install --no-install-recommends -y jq git rsync && rm -rf /var/lib/apt/lists/*; fi
RUN mkdir -p /etc/sftpgo /var/lib/sftpgo /usr/share/sftpgo /srv/sftpgo/data /srv/sftpgo/backups
@@ -42,17 +42,15 @@ RUN groupadd --system -g 1000 sftpgo && \
COPY --from=builder /workspace/sftpgo.json /etc/sftpgo/sftpgo.json
COPY --from=builder /workspace/templates /usr/share/sftpgo/templates
COPY --from=builder /workspace/static /usr/share/sftpgo/static
COPY --from=builder /workspace/openapi /usr/share/sftpgo/openapi
COPY --from=builder /workspace/sftpgo /usr/local/bin/
# Log to the stdout so the logs will be available using docker logs
ENV SFTPGO_LOG_FILE_PATH=""
# templates and static paths are inside the container
ENV SFTPGO_HTTPD__TEMPLATES_PATH=/usr/share/sftpgo/templates
ENV SFTPGO_HTTPD__STATIC_FILES_PATH=/usr/share/sftpgo/static
# Modify the default configuration file
RUN sed -i "s|\"users_base_dir\": \"\",|\"users_base_dir\": \"/srv/sftpgo/data\",|" /etc/sftpgo/sftpgo.json && \
sed -i "s|\"backups\"|\"/srv/sftpgo/backups\"|" /etc/sftpgo/sftpgo.json
RUN sed -i 's|"users_base_dir": "",|"users_base_dir": "/srv/sftpgo/data",|' /etc/sftpgo/sftpgo.json && \
sed -i 's|"backups"|"/srv/sftpgo/backups"|' /etc/sftpgo/sftpgo.json
RUN chown -R sftpgo:sftpgo /etc/sftpgo /srv/sftpgo && chown sftpgo:sftpgo /var/lib/sftpgo && chmod 700 /srv/sftpgo/backups


@@ -1,4 +1,4 @@
FROM golang:1.16-alpine3.13 AS builder
FROM golang:1.18-alpine3.16 AS builder
ENV GOFLAGS="-mod=readonly"
@@ -23,17 +23,17 @@ COPY . .
RUN set -xe && \
export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --dirty)} && \
go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -v -o sftpgo
go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -v -o sftpgo
FROM alpine:3.13
FROM alpine:3.16
# Set to "true" to install the optional git and rsync dependencies
# Set to "true" to install jq and the optional git and rsync dependencies
ARG INSTALL_OPTIONAL_PACKAGES=false
RUN apk add --update --no-cache ca-certificates tzdata mailcap
RUN if [ "${INSTALL_OPTIONAL_PACKAGES}" = "true" ]; then apk add --update --no-cache rsync git; fi
RUN if [ "${INSTALL_OPTIONAL_PACKAGES}" = "true" ]; then apk add --update --no-cache jq git rsync; fi
# set up nsswitch.conf for Go's "netgo" implementation
# https://github.com/gliderlabs/docker-alpine/issues/367#issuecomment-424546457
@@ -47,17 +47,15 @@ RUN addgroup -g 1000 -S sftpgo && \
COPY --from=builder /workspace/sftpgo.json /etc/sftpgo/sftpgo.json
COPY --from=builder /workspace/templates /usr/share/sftpgo/templates
COPY --from=builder /workspace/static /usr/share/sftpgo/static
COPY --from=builder /workspace/openapi /usr/share/sftpgo/openapi
COPY --from=builder /workspace/sftpgo /usr/local/bin/
# Log to the stdout so the logs will be available using docker logs
ENV SFTPGO_LOG_FILE_PATH=""
# templates and static paths are inside the container
ENV SFTPGO_HTTPD__TEMPLATES_PATH=/usr/share/sftpgo/templates
ENV SFTPGO_HTTPD__STATIC_FILES_PATH=/usr/share/sftpgo/static
# Modify the default configuration file
RUN sed -i "s|\"users_base_dir\": \"\",|\"users_base_dir\": \"/srv/sftpgo/data\",|" /etc/sftpgo/sftpgo.json && \
sed -i "s|\"backups\"|\"/srv/sftpgo/backups\"|" /etc/sftpgo/sftpgo.json
RUN sed -i 's|"users_base_dir": "",|"users_base_dir": "/srv/sftpgo/data",|' /etc/sftpgo/sftpgo.json && \
sed -i 's|"backups"|"/srv/sftpgo/backups"|' /etc/sftpgo/sftpgo.json
RUN chown -R sftpgo:sftpgo /etc/sftpgo /srv/sftpgo && chown sftpgo:sftpgo /var/lib/sftpgo && chmod 700 /srv/sftpgo/backups

Dockerfile.distroless (new file)

@@ -0,0 +1,57 @@
FROM golang:1.18-bullseye as builder
ENV CGO_ENABLED=0 GOFLAGS="-mod=readonly"
RUN mkdir -p /workspace
WORKDIR /workspace
ARG GOPROXY
COPY go.mod go.sum ./
RUN go mod download
ARG COMMIT_SHA
# This ARG allows to disable some optional features and it might be useful if you build the image yourself.
# For this variant we disable SQLite support since it requires CGO and so a C runtime which is not installed
# in distroless/static-* images
ARG FEATURES=nosqlite
COPY . .
RUN set -xe && \
export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --dirty)} && \
go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -v -o sftpgo
# Modify the default configuration file
RUN sed -i 's|"users_base_dir": "",|"users_base_dir": "/srv/sftpgo/data",|' sftpgo.json && \
sed -i 's|"backups"|"/srv/sftpgo/backups"|' sftpgo.json && \
sed -i 's|"sqlite"|"bolt"|' sftpgo.json
RUN apt-get update && apt-get install --no-install-recommends -y media-types && rm -rf /var/lib/apt/lists/*
RUN mkdir /etc/sftpgo /var/lib/sftpgo /srv/sftpgo
FROM gcr.io/distroless/static-debian11
COPY --from=builder --chown=1000:1000 /etc/sftpgo /etc/sftpgo
COPY --from=builder --chown=1000:1000 /srv/sftpgo /srv/sftpgo
COPY --from=builder --chown=1000:1000 /var/lib/sftpgo /var/lib/sftpgo
COPY --from=builder --chown=1000:1000 /workspace/sftpgo.json /etc/sftpgo/sftpgo.json
COPY --from=builder /workspace/templates /usr/share/sftpgo/templates
COPY --from=builder /workspace/static /usr/share/sftpgo/static
COPY --from=builder /workspace/openapi /usr/share/sftpgo/openapi
COPY --from=builder /workspace/sftpgo /usr/local/bin/
COPY --from=builder /etc/mime.types /etc/mime.types
# Log to the stdout so the logs will be available using docker logs
ENV SFTPGO_LOG_FILE_PATH=""
# These env vars are required to avoid the following error when calling user.Current():
# unable to get the current user: user: Current requires cgo or $USER set in environment
ENV USER=sftpgo
ENV HOME=/var/lib/sftpgo
WORKDIR /var/lib/sftpgo
USER 1000:1000
CMD ["sftpgo", "serve"]
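The FEATURES=nosqlite argument above works through Go build constraints: a file guarded by a tag is simply excluded from the build, so the CGO-based SQLite driver never gets compiled in. A generic sketch of the mechanism as two alternative files (the file names, package and constant are illustrative, not SFTPGo's source layout):

// --- driver_sqlite.go (hypothetical) ---
//go:build !nosqlite

package dataprovider

// Compiled only without the "nosqlite" tag; free to require CGO.
const sqliteAvailable = true

// --- driver_nosqlite.go (hypothetical) ---
//go:build nosqlite

package dataprovider

// Compiled when building with: go build -tags nosqlite
const sqliteAvailable = false
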

README.md

@@ -2,67 +2,94 @@
![CI Status](https://github.com/drakkan/sftpgo/workflows/CI/badge.svg?branch=main&event=push)
[![Code Coverage](https://codecov.io/gh/drakkan/sftpgo/branch/main/graph/badge.svg)](https://codecov.io/gh/drakkan/sftpgo/branch/main)
[![Go Report Card](https://goreportcard.com/badge/github.com/drakkan/sftpgo)](https://goreportcard.com/report/github.com/drakkan/sftpgo)
[![License: AGPL v3](https://img.shields.io/badge/License-AGPLv3-blue.svg)](https://www.gnu.org/licenses/agpl-3.0)
[![Docker Pulls](https://img.shields.io/docker/pulls/drakkan/sftpgo)](https://hub.docker.com/r/drakkan/sftpgo)
[![Mentioned in Awesome Go](https://awesome.re/mentioned-badge.svg)](https://github.com/avelino/awesome-go)
Fully featured and highly configurable SFTP server with optional FTP/S and WebDAV support, written in Go.
[English](./README.md) | [简体中文](./README.zh_CN.md)
Fully featured and highly configurable SFTP server with optional HTTP/S, FTP/S and WebDAV support.
Several storage backends are supported: local filesystem, encrypted local filesystem, S3 (compatible) Object Storage, Google Cloud Storage, Azure Blob Storage, SFTP.
## Sponsors
If you find SFTPGo useful please consider supporting this Open Source project.
Maintaining and evolving SFTPGo is a lot of work - easily the equivalent of a full time job - for me.
I'd like to make SFTPGo into a sustainable long term project and would not like to introduce a dual licensing option and limit some features to the proprietary version only.
If you use SFTPGo, it is in your best interest to ensure that the project you rely on stays healthy and well maintained.
This can only happen with your donations and [sponsorships](https://github.com/sponsors/drakkan) :heart:
If you just take and don't return anything back, the project will die in the long run and you will be forced to pay for a similar proprietary solution.
More [info](https://github.com/drakkan/sftpgo/issues/452).
Thank you to our sponsors!
[<img src="https://www.7digital.com/wp-content/themes/sevendigital/images/top_logo.png" alt="7digital logo">](https://www.7digital.com/)
## Features
- Support for serving local filesystem, encrypted local filesystem, S3 Compatible Object Storage, Google Cloud Storage, Azure Blob Storage or other SFTP accounts over SFTP/SCP/FTP/WebDAV.
- Virtual folders are supported: a virtual folder can use any of the supported storage backends. So you can have, for example, an S3 user that exposes a GCS bucket (or part of it) on a specified path and an encrypted local filesystem on another one. Virtual folders can be private or shared among multiple users, for shared virtual folders you can define different quota limits for each user.
- Configurable custom commands and/or HTTP hooks on file upload, pre-upload, download, pre-download, delete, pre-delete, rename, on SSH commands and on user add, update and delete.
- Configurable [custom commands and/or HTTP hooks](./docs/custom-actions.md) on upload, pre-upload, download, pre-download, delete, pre-delete, rename, mkdir, rmdir, on SSH commands and on user add, update and delete.
- Virtual accounts stored within a "data provider".
- SQLite, MySQL, PostgreSQL, CockroachDB, Bolt (key/value store in pure Go) and in-memory data providers are supported.
- Chroot isolation for local accounts. Cloud-based accounts can be restricted to a certain base path.
- Per user and per directory virtual permissions, for each exposed path you can allow or deny: directory listing, upload, overwrite, download, delete, rename, create directories, create symlinks, change owner/group/file mode.
- [REST API](./docs/rest-api.md) for users and folders management, backup, restore and real time reports of the active connections with possibility of forcibly closing a connection.
- Per-user and per-directory virtual permissions, for each exposed path you can allow or deny: directory listing, upload, overwrite, download, delete, rename, create directories, create symlinks, change owner/group/file mode and modification time.
- [REST API](./docs/rest-api.md) for users and folders management, data retention, backup, restore and real time reports of the active connections with possibility of forcibly closing a connection.
- [Web based administration interface](./docs/web-admin.md) to easily manage users, folders and connections.
- [Web client interface](./docs/web-client.md) so that end users can change their credentials and browse their files.
- Public key and password authentication. Multiple public keys per user are supported.
- [Web client interface](./docs/web-client.md) so that end users can change their credentials, manage and share their files in the browser.
- Public key and password authentication. Multiple public keys per-user are supported.
- SSH user [certificate authentication](https://cvsweb.openbsd.org/src/usr.bin/ssh/PROTOCOL.certkeys?rev=1.8).
- Keyboard interactive authentication. You can easily set up a customizable multi-factor authentication.
- Partial authentication. You can configure multi-step authentication requiring, for example, the user password after successful public key authentication.
- Per user authentication methods.
- Per-user authentication methods.
- [Two-factor authentication](./docs/howto/two-factor-authentication.md) based on time-based one time passwords (RFC 6238) which works with Authy, Google Authenticator and other compatible apps.
- Simplified user administration using [groups](./docs/groups.md).
- Custom authentication via external programs/HTTP API.
- Web Client and Web Admin user interfaces support [OpenID Connect](https://openid.net/connect/) authentication and so they can be integrated with identity providers such as [Keycloak](https://www.keycloak.org/). You can find more details [here](./docs/oidc.md).
- [Data At Rest Encryption](./docs/dare.md).
- Dynamic user modification before login via external programs/HTTP API.
- Quota support: accounts can have individual quota expressed as max total size and/or max number of files.
- Bandwidth throttling, with distinct settings for upload and download.
- Quota support: accounts can have individual disk quota expressed as max total size and/or max number of files.
- Bandwidth throttling, with separate settings for upload and download and overrides based on the client's IP address.
- Data transfer bandwidth limits, with total limit or separate settings for uploads and downloads and overrides based on the client's IP address. Limits can be reset using the REST API.
- Per-protocol [rate limiting](./docs/rate-limiting.md) is supported and can be optionally connected to the built-in defender to automatically block hosts that repeatedly exceed the configured limit.
- Per user maximum concurrent sessions.
- Per user and global IP filters: login can be restricted to specific ranges of IP addresses or to a specific IP address.
- Per user and per directory shell like patterns filters: files can be allowed or denied based on shell like patterns.
- Per-user maximum concurrent sessions.
- Per-user and global IP filters: login can be restricted to specific ranges of IP addresses or to a specific IP address.
- Per-user and per-directory shell-like pattern filters: files can be allowed, denied and optionally hidden based on shell-like patterns.
- Automatically terminating idle connections.
- Automatic blocklist management using the built-in [defender](./docs/defender.md).
- Geo-IP filtering using a [plugin](https://github.com/sftpgo/sftpgo-plugin-geoipfilter).
- Atomic uploads are configurable.
- Per user files/folders ownership mapping: you can map all the users to the system account that runs SFTPGo (all platforms are supported) or you can run SFTPGo as root user and map each user or group of users to a different system account (\*NIX only).
- Per-user files/folders ownership mapping: you can map all the users to the system account that runs SFTPGo (all platforms are supported) or you can run SFTPGo as root user and map each user or group of users to a different system account (\*NIX only).
- Support for Git repositories over SSH.
- SCP and rsync are supported.
- FTP/S is supported. You can configure the FTP service to require TLS for both control and data connections.
- [WebDAV](./docs/webdav.md) is supported.
- ACME protocol is supported. SFTPGo can obtain and automatically renew TLS certificates for HTTPS, WebDAV and FTPS from `Let's Encrypt` or other ACME compliant certificate authorities, using the `HTTP-01` or `TLS-ALPN-01` [challenge types](https://letsencrypt.org/docs/challenge-types/).
- Two-Way TLS authentication, aka TLS with client certificate authentication, is supported for REST API/Web Admin, FTPS and WebDAV over HTTPS.
- Per user protocols restrictions. You can configure the allowed protocols (SSH/FTP/WebDAV) for each user.
- Per-user protocols restrictions. You can configure the allowed protocols (SSH/HTTP/FTP/WebDAV) for each user.
- [Prometheus metrics](./docs/metrics.md) are exposed.
- Support for HAProxy PROXY protocol: you can proxy and/or load balance the SFTP/SCP/FTP/WebDAV service without losing the information about the client's address.
- Support for HAProxy PROXY protocol: you can proxy and/or load balance the SFTP/SCP/FTP service without losing the information about the client's address.
- Easy [migration](./examples/convertusers) from Linux system user accounts.
- [Portable mode](./docs/portable-mode.md): a convenient way to share a single directory on demand.
- [SFTP subsystem mode](./docs/sftp-subsystem.md): you can use SFTPGo as OpenSSH's SFTP subsystem.
- Performance analysis using built-in [profiler](./docs/profiling.md).
- The configuration format is at your choice: JSON, TOML, YAML, HCL and envfile are supported.
- Log files are accurate and they are saved in the easily parsable JSON format ([more information](./docs/logs.md)).
- SFTPGo supports a [plugin system](./docs/plugins.md) and therefore can be extended using external plugins.
## Platforms
SFTPGo is developed and tested on Linux. After each commit, the code is automatically built and tested on Linux, macOS and Windows using a [GitHub Action](./.github/workflows/development.yml). The test cases are regularly manually executed and passed on FreeBSD. Other *BSD variants should work too.
SFTPGo is developed and tested on Linux. After each commit, the code is automatically built and tested on Linux, macOS and Windows using [GitHub Actions](./.github/workflows/development.yml). The test cases are regularly manually executed and passed on FreeBSD. Other *BSD variants should work too.
## Requirements
- Go as build only dependency. We support the Go version(s) used in [continuous integration workflows](./tree/main/.github/workflows).
- A suitable SQL server to use as data provider: PostgreSQL 9.4+ or MySQL 5.6+ or SQLite 3.x or CockroachDB stable.
- Go as build only dependency. We support the Go version(s) used in [continuous integration workflows](./.github/workflows).
- A suitable SQL server to use as data provider: PostgreSQL 9.4+, MySQL 5.6+, SQLite 3.x, CockroachDB stable.
- The SQL server is optional: you can choose to use an embedded bolt database as key/value store or an in memory data provider.
## Installation
@@ -71,7 +98,9 @@ Binary releases for Linux, macOS, and Windows are available. Please visit the [r
An official Docker image is available. Documentation is [here](./docker/README.md).
Some Linux distro packages are available:
<details>
<summary>Some Linux distro packages are available</summary>
- For Arch Linux via AUR:
- [sftpgo](https://aur.archlinux.org/packages/sftpgo/). This package follows stable releases. It requires `git`, `gcc` and `go` to build.
@@ -79,15 +108,26 @@ Some Linux distro packages are available:
- [sftpgo-git](https://aur.archlinux.org/packages/sftpgo-git/). This package builds and installs the latest git `main` branch. It requires `git`, `gcc` and `go` to build.
- Deb and RPM packages are built after each commit and for each release.
- For Ubuntu a PPA is available [here](https://launchpad.net/~sftpgo/+archive/ubuntu/sftpgo).
- Void Linux provides an [official package](https://github.com/void-linux/void-packages/tree/master/srcpkgs/sftpgo).
SFTPGo is also available on [AWS Marketplace](https://aws.amazon.com/marketplace/seller-profile?id=6e849ab8-70a6-47de-9a43-13c3fa849335), purchasing from there will help keep SFTPGo a long-term sustainable project.
</details>
On FreeBSD you can install from the [SFTPGo port](https://www.freshports.org/ftp/sftpgo).
APT and YUM repositories are [available](./docs/repo.md).
On Windows you can use:
SFTPGo is also available on [AWS Marketplace](https://aws.amazon.com/marketplace/seller-profile?id=6e849ab8-70a6-47de-9a43-13c3fa849335) and [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/prasselsrl1645470739547.sftpgo_linux), purchasing from there will help keep SFTPGo a long-term sustainable project.
<details><summary>Windows packages</summary>
- The Windows installer to install and run SFTPGo as a Windows service.
- The portable package to start SFTPGo on demand.
- The [winget](https://docs.microsoft.com/en-us/windows/package-manager/winget/install) package to install and run SFTPGo as a Windows service: `winget install SFTPGo`.
- The [Chocolatey package](https://community.chocolatey.org/packages/sftpgo) to install and run SFTPGo as a Windows service.
</details>
On macOS you can install from the Homebrew [Formula](https://formulae.brew.sh/formula/sftpgo).
On FreeBSD you can install from the [SFTPGo port](https://www.freshports.org/ftp/sftpgo).
On DragonFlyBSD you can install SFTPGo from [DPorts](https://github.com/DragonFlyBSD/DPorts/tree/master/ftp/sftpgo).
You can easily test new features by selecting a commit from the [Actions](https://github.com/drakkan/sftpgo/actions) page and downloading the matching build artifacts for Linux, macOS or Windows. GitHub stores artifacts for 90 days.
@@ -133,13 +173,21 @@ sftpgo initprovider --help
You can disable automatic data provider checks/updates at startup by setting the `update_mode` configuration key to `1`.
You can also reset your provider by using the `resetprovider` sub-command. Take a look at the CLI usage for more details:
```bash
sftpgo resetprovider --help
```
:warning: Please note that some data providers (e.g. MySQL and CockroachDB) do not support schema changes within a transaction; this means that you may end up with an inconsistent schema if migrations are forcibly aborted. CockroachDB doesn't support database-level locks, so make sure you don't execute migrations concurrently.
## Create the first admin
To start using SFTPGo you need to create an admin user, you can do it in several ways:
- by using the web admin interface. The default URL is [http://127.0.0.1:8080/web/admin](http://127.0.0.1:8080/web/admin)
- by loading initial data
- by enabling `create_default_admin` in your configuration file. In this case the credentials are `admin`/`password`
- by enabling `create_default_admin` in your configuration file and setting the environment variables `SFTPGO_DEFAULT_ADMIN_USERNAME` and `SFTPGO_DEFAULT_ADMIN_PASSWORD`
## Upgrading
@@ -186,7 +234,7 @@ After starting SFTPGo you can manage users and folders using:
To support embedded data providers like `bolt` and `SQLite` we can't have a CLI that directly writes users and folders to the data provider; we always have to use the REST API.
Full details for users, folders, admins and other resources are documented in the [OpenAPI](/httpd/schema/openapi.yaml) schema. If you want to render the schema without importing it manually, you can explore it on [Stoplight](https://sftpgo.stoplight.io/docs/sftpgo/openapi.yaml).
Full details for users, folders, admins and other resources are documented in the [OpenAPI](./openapi/openapi.yaml) schema. If you want to render the schema without importing it manually, you can explore it on [Stoplight](https://sftpgo.stoplight.io/docs/sftpgo/openapi.yaml).
## Tutorials
@@ -194,24 +242,28 @@ Some step-to-step tutorials can be found inside the source tree [howto](./docs/h
## Authentication options
### External Authentication
<details><summary> External Authentication</summary>
Custom authentication methods can easily be added. SFTPGo supports external authentication modules, and writing a new backend can be as simple as a few lines of shell script. More information can be found [here](./docs/external-auth.md).
### Keyboard Interactive Authentication
</details>
<details><summary> Keyboard Interactive Authentication</summary>
Keyboard interactive authentication is, in general, a series of questions asked by the server with responses provided by the client.
This authentication method is typically used for multi-factor authentication.
More information can be found [here](./docs/keyboard-interactive.md).
</details>
## Dynamic user creation or modification
A user can be created or modified by an external program just before the login. More information about this can be found [here](./docs/dynamic-user-mod.md).
## Custom Actions
SFTPGo allows to configure custom commands and/or HTTP notifications on file upload, download, delete, rename, on SSH commands and on user add, update and delete.
SFTPGo allows you to configure custom commands and/or HTTP hooks to receive notifications about file uploads, deletions and several other events.
More information about custom actions can be found [here](./docs/custom-actions.md).
@@ -226,17 +278,9 @@ You can use your own hook to [check passwords](./docs/check-password-hook.md).
## Storage backends
### S3 Compatible Object Storage backends
### S3/GCP/Azure
Each user can be mapped to the whole bucket or to a bucket virtual folder. This way, the mapped bucket/virtual folder is exposed over SFTP/SCP/FTP/WebDAV. More information about S3 integration can be found [here](./docs/s3.md).
### Google Cloud Storage backend
Each user can be mapped with a Google Cloud Storage bucket or a bucket virtual folder. This way, the mapped bucket/virtual folder is exposed over SFTP/SCP/FTP/WebDAV. More information about Google Cloud Storage integration can be found [here](./docs/google-cloud-storage.md).
### Azure Blob Storage backend
Each user can be mapped with an Azure Blob Storage container or a container virtual folder. This way, the mapped container/virtual folder is exposed over SFTP/SCP/FTP/WebDAV. More information about Azure Blob Storage integration can be found [here](./docs/azure-blob-storage.md).
Each user can be mapped with an [S3 Compatible Object Storage](./docs/s3.md)/[Google Cloud Storage](./docs/google-cloud-storage.md)/[Azure Blob Storage](./docs/azure-blob-storage.md) bucket or a bucket virtual folder that is exposed over SFTP/SCP/FTP/WebDAV.
### SFTP backend
@@ -255,13 +299,13 @@ Adding new storage backends is quite easy:
- update the web interface and the REST API CLI
- add the flags for the new storage backed to the `portable` mode
Anyway, some backends require a pay per use account (or they offer free account for a limited time period only). To be able to add support for such backends or to review pull requests, please provide a test account. The test account must be available for enough time to be able to maintain the backend and do basic tests before each new release.
Anyway, some backends require a pay-per-use account (or they offer a free account for a limited time period only). To be able to add support for such backends or to review pull requests, please provide a test account. The test account must be available for enough time to be able to maintain the backend and do basic tests before each new release.
## Brute force protection
The [connection failed logs](./docs/logs.md) can be used for integration in tools such as [Fail2ban](http://www.fail2ban.org/). Example of [jails](./fail2ban/jails) and [filters](./fail2ban/filters) working with `systemd`/`journald` are available in fail2ban directory.
SFTPGo supports a built-in [defender](./docs/defender.md).
You can also use the built-in [defender](./docs/defender.md).
Alternately you can use the [connection failed logs](./docs/logs.md) for integration in tools such as [Fail2ban](http://www.fail2ban.org/). Examples of [jails](./fail2ban/jails) and [filters](./fail2ban/filters) working with `systemd`/`journald` are available in the fail2ban directory.
## Account's configuration properties
@@ -285,12 +329,6 @@ We are very grateful to all the people who contributed with ideas and/or pull re
Thank you [ysura](https://www.ysura.com/) for granting me stable access to a test AWS S3 account.
## Sponsors
I'd like to make SFTPGo into a sustainable long term project and your [sponsorship](https://github.com/sponsors/drakkan) will really help :heart:
Thank you to our sponsors!
## License
GNU AGPLv3

README.zh_CN.md Normal file

@@ -0,0 +1,318 @@
# SFTPGo
![CI Status](https://github.com/drakkan/sftpgo/workflows/CI/badge.svg?branch=main&event=push)
[![Code Coverage](https://codecov.io/gh/drakkan/sftpgo/branch/main/graph/badge.svg)](https://codecov.io/gh/drakkan/sftpgo/branch/main)
[![License: AGPL v3](https://img.shields.io/badge/License-AGPLv3-blue.svg)](https://www.gnu.org/licenses/agpl-3.0)
[![Docker Pulls](https://img.shields.io/docker/pulls/drakkan/sftpgo)](https://hub.docker.com/r/drakkan/sftpgo)
[![Mentioned in Awesome Go](https://awesome.re/mentioned-badge.svg)](https://github.com/avelino/awesome-go)
[English](./README.md) | [简体中文](./README.zh_CN.md)
Fully featured and highly configurable SFTP server with optional HTTP/S, FTP/S and WebDAV support.
Several storage backends are supported: local filesystem, encrypted local filesystem, S3 (compatible) Object Storage, Google Cloud Storage, Azure Blob Storage, SFTP.
## Features
- Support for serving local filesystem, encrypted local filesystem, S3 Compatible Object Storage, Google Cloud Storage, Azure Blob Storage or other SFTP accounts over SFTP/SCP/FTP/WebDAV.
- Virtual folders are supported: a virtual folder can use any of the supported storage backends. So you can have, for example, an S3 user that exposes a GCS bucket (or part of it) on a specified path and an encrypted local filesystem on another one. Virtual folders can be private or shared among multiple users; for shared virtual folders you can define different quota limits for each user.
- Configurable [custom commands and/or HTTP hooks](./docs/custom-actions.md) on upload, pre-upload, download, pre-download, delete, pre-delete, rename, mkdir, rmdir, on SSH commands and on user add, update and delete.
- Virtual accounts stored within a "data provider".
- SQLite, MySQL, PostgreSQL, CockroachDB, Bolt (key/value store in pure Go) and in-memory data providers are supported.
- Chroot isolation for local accounts. Cloud-based accounts can be restricted to a certain base path.
- Per-user and per-directory virtual permissions, for each exposed path you can allow or deny: directory listing, upload, overwrite, download, delete, rename, create directories, create symlinks, change owner/group/file mode and modification time.
- [REST API](./docs/rest-api.md) for users and folders management, data retention, backup, restore and real time reports of the active connections with possibility of forcibly closing a connection.
- [Web based administration interface](./docs/web-admin.md) to easily manage users, folders and connections.
- [Web client interface](./docs/web-client.md) so that end users can change their credentials, manage and share their files in the browser.
- Public key and password authentication. Multiple public keys per user are supported.
- SSH user [certificate authentication](https://cvsweb.openbsd.org/src/usr.bin/ssh/PROTOCOL.certkeys?rev=1.8).
- Keyboard interactive authentication. You can easily set up a customizable multi-factor authentication.
- Partial authentication. You can configure multi-step authentication requiring, for example, the user password after successful public key authentication.
- Per-user authentication methods.
- [Two-factor authentication](./docs/howto/two-factor-authentication.md) based on time-based one time passwords (RFC 6238) which works with Authy, Google Authenticator and other compatible apps.
- Simplified user administration using [groups](./docs/groups.md).
- Custom authentication via external programs/HTTP API.
- Web Client and Web Admin user interfaces support [OpenID Connect](https://openid.net/connect/) authentication and so they can be integrated with identity providers such as [Keycloak](https://www.keycloak.org/). You can find more details [here](./docs/oidc.md).
- [Data At Rest Encryption](./docs/dare.md).
- Dynamic user modification before login via external programs/HTTP API.
- Quota support: accounts can have individual disk quota expressed as max total size and/or max number of files.
- Bandwidth throttling, with separate settings for upload and download and overrides based on the client's IP address.
- Data transfer bandwidth limits, with total limit or separate settings for uploads and downloads and overrides based on the client's IP address. Limits can be reset using the REST API.
- Per-protocol [rate limiting](./docs/rate-limiting.md) is supported and can be optionally connected to the built-in defender to automatically block hosts that repeatedly exceed the configured limit.
- Per-user maximum concurrent sessions.
- Per-user and global IP filters: login can be restricted to specific ranges of IP addresses or to a specific IP address.
- Per-user and per-directory shell-like pattern filters: files can be allowed, denied and optionally hidden based on shell-like patterns.
- Automatically terminating idle connections.
- Automatic blocklist management using the built-in [defender](./docs/defender.md).
- Geo-IP filtering using a [plugin](https://github.com/sftpgo/sftpgo-plugin-geoipfilter).
- Atomic uploads are configurable.
- Per-user files/folders ownership mapping: you can map all the users to the system account that runs SFTPGo (all platforms are supported) or you can run SFTPGo as root user and map each user or group of users to a different system account (\*NIX only).
- Support for Git repositories over SSH.
- SCP and rsync are supported.
- FTP/S is supported. You can configure the FTP service to require TLS for both control and data connections.
- [WebDAV](./docs/webdav.md) is supported.
- Two-Way TLS authentication, aka TLS with client certificate authentication, is supported for REST API/Web Admin, FTPS and WebDAV over HTTPS.
- Per-user protocols restrictions. You can configure the allowed protocols (SSH/HTTP/FTP/WebDAV) for each user.
- [Prometheus metrics](./docs/metrics.md) are exposed.
- Support for HAProxy PROXY protocol: you can proxy and/or load balance the SFTP/SCP/FTP service without losing the information about the client's address.
- Easy [migration](./examples/convertusers) from Linux system user accounts.
- [Portable mode](./docs/portable-mode.md): a convenient way to share a single directory on demand.
- [SFTP subsystem mode](./docs/sftp-subsystem.md): you can use SFTPGo as OpenSSH's SFTP subsystem.
- Performance analysis using built-in [profiler](./docs/profiling.md).
- The configuration format is at your choice: JSON, TOML, YAML, HCL and envfile are supported.
- Log files are accurate and they are saved in the easily parsable JSON format ([more information](./docs/logs.md)).
- SFTPGo supports a [plugin system](./docs/plugins.md) and therefore can be extended using external plugins.
## Platforms
SFTPGo is developed and tested on Linux. After each commit, the code is automatically built and tested on Linux, macOS and Windows using [GitHub Actions](./.github/workflows/development.yml). The test cases are regularly manually executed and passed on FreeBSD. Other *BSD variants should work too.
## Requirements
- Go as build only dependency. We support the Go version(s) used in [continuous integration workflows](./.github/workflows).
- A suitable SQL server to use as data provider: PostgreSQL 9.4+, MySQL 5.6+, SQLite 3.x, CockroachDB stable.
- The SQL server is optional: you can choose to use an embedded bolt database as key/value store or an in-memory data provider.
## Installation
Binary releases for Linux, macOS, and Windows are available. Please visit the [releases](https://github.com/drakkan/sftpgo/releases "releases") page.
An official Docker image is available. Documentation is [here](./docker/README.md).
<details>
<summary>Some Linux distro packages are available</summary>
- For Arch Linux via AUR:
- [sftpgo](https://aur.archlinux.org/packages/sftpgo/). This package follows stable releases. It requires `git`, `gcc` and `go` to build.
- [sftpgo-bin](https://aur.archlinux.org/packages/sftpgo-bin/). This package follows stable releases downloading the prebuilt Linux binary from GitHub. It does not require `git`, `gcc` and `go` to build.
- [sftpgo-git](https://aur.archlinux.org/packages/sftpgo-git/). This package builds and installs the latest git `main` branch. It requires `git`, `gcc` and `go` to build.
- Deb and RPM packages are built after each commit and for each release.
- For Ubuntu a PPA is available [here](https://launchpad.net/~sftpgo/+archive/ubuntu/sftpgo).
- Void Linux provides an [official package](https://github.com/void-linux/void-packages/tree/master/srcpkgs/sftpgo).
</details>
SFTPGo is also available on [AWS Marketplace](https://aws.amazon.com/marketplace/seller-profile?id=6e849ab8-70a6-47de-9a43-13c3fa849335) and [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/prasselsrl1645470739547.sftpgo_linux), purchasing from there will help keep SFTPGo a long-term sustainable project.
<details><summary>Windows packages</summary>
- The Windows installer to install and run SFTPGo as a Windows service.
- The portable package to start SFTPGo on demand.
- The [winget](https://docs.microsoft.com/en-us/windows/package-manager/winget/install) package to install and run SFTPGo as a Windows service: `winget install SFTPGo`.
- The [Chocolatey package](https://community.chocolatey.org/packages/sftpgo) to install and run SFTPGo as a Windows service.
</details>
On FreeBSD you can install from the [SFTPGo port](https://www.freshports.org/ftp/sftpgo).
On DragonFlyBSD you can install SFTPGo from [DPorts](https://github.com/DragonFlyBSD/DPorts/tree/master/ftp/sftpgo).
You can easily test new features by selecting a commit from the [Actions](https://github.com/drakkan/sftpgo/actions) page and downloading the matching build artifacts for Linux, macOS or Windows. GitHub stores artifacts for 90 days.
Alternately, you can [build from source](./docs/build-from-source.md).
[Getting started guide for the impatient](./docs/howto/getting-started.md).
## Configuration
A full explanation of all configuration methods can be found [here](./docs/full-configuration.md).
Please make sure to [initialize the data provider](#data-provider-initialization-and-management) before running the daemon.
To start SFTPGo with the default settings, simply run:
```bash
sftpgo serve
```
Check out [this documentation](./docs/service.md) if you want to run SFTPGo as a service.
### Data provider initialization and management
Before starting the SFTPGo server, please ensure that the configured data provider is properly initialized/updated.
For PostgreSQL, MySQL and CockroachDB providers you need to create the configured database. For SQLite, the configured database will be automatically created at startup. Memory and bolt data providers do not require an initialization, but they may require an update to the existing data after upgrading SFTPGo.
SFTPGo will attempt to automatically detect if the data provider is initialized/updated; if not, it will attempt to initialize/update it on startup.
Alternately, you can create/update the required data provider structures yourself using the `initprovider` command.
For example, you can simply execute the following command from the configuration directory:
```bash
sftpgo initprovider
```
Take a look at the CLI usage to learn how to specify a different configuration file:
```bash
sftpgo initprovider --help
```
You can disable automatic data provider checks/updates at startup by setting the `update_mode` configuration key to `1`.
You can reset your data provider by using the `resetprovider` sub-command. Take a look at the CLI usage for more details:
```bash
sftpgo resetprovider --help
```
:warning: Please note that some data providers (e.g. MySQL and CockroachDB) do not support schema changes within a transaction; this means that you may end up with an inconsistent schema if migrations are forcibly aborted or run concurrently by multiple instances.
## Create the first admin
To start using SFTPGo you need to create an admin user, you can do it in several ways:
- by using the web admin interface. The default URL is [http://127.0.0.1:8080/web/admin](http://127.0.0.1:8080/web/admin)
- by loading initial data
- by enabling `create_default_admin` in your configuration file and setting the environment variables `SFTPGO_DEFAULT_ADMIN_USERNAME` and `SFTPGO_DEFAULT_ADMIN_PASSWORD`
## Upgrading
SFTPGo supports upgrading from the previous release branch to the current one.
Some examples of supported upgrade paths are:
- from 1.2.x to 2.0.x
- from 2.0.x to 2.1.x and so on.
For supported upgrade paths, data and schema are migrated automatically; alternately you can use the `initprovider` command.
So if, for example, you want to upgrade from a version before 1.2.x to 2.0.x, you must first install version 1.2.x, upgrade the data provider and finally install version 2.0.x. It is recommended to always install the latest available minor version, i.e. do not install 1.2.0 if 1.2.2 is available.
Loading data from a provider-independent JSON dump is supported from the previous release branch to the current one too. After upgrading SFTPGo it is advisable to regenerate the JSON dump from the new version.
## Downgrading
If for some reason you want to downgrade SFTPGo, you may need to downgrade your data provider schema and data as well. You can use the `revertprovider` command for this task.
As for upgrading, SFTPGo supports downgrading from the previous release branch to the current one.
So, if you plan to downgrade from 2.0.x to 1.2.x, before uninstalling the 2.0.x version you can prepare your data provider by executing the following command from the configuration directory:
```shell
sftpgo revertprovider --to-version 4
```
Take a look at the CLI usage to see the supported parameters for the `--to-version` argument and to learn how to specify a different configuration file:
```shell
sftpgo revertprovider --help
```
The `revertprovider` command is not supported for the memory data provider.
Please note that we only support the current release branch and the current main branch; if you find a bug it is better to report it than to downgrade to an older, unsupported version.
## Users and folders management
After starting SFTPGo you can manage users and folders using:
- the [web based administration interface](./docs/web-admin.md)
- the [REST API](./docs/rest-api.md)
To support embedded data providers like `bolt` and `SQLite` we can't have a CLI that directly writes users and folders to the data provider; we always have to use the REST API.
Full details for users, folders, admins and other resources are documented in the [OpenAPI](./openapi/openapi.yaml) schema. If you want to render the schema without importing it manually, you can explore it on [Stoplight](https://sftpgo.stoplight.io/docs/sftpgo/openapi.yaml).
## Tutorials
Some step-by-step tutorials can be found inside the source tree [howto](./docs/howto "How-to") directory.
## Authentication options
<details><summary>External Authentication</summary>
Custom authentication methods can easily be added. SFTPGo supports external authentication modules, and writing a new backend can be as simple as a few lines of shell script. More information can be found [here](./docs/external-auth.md).
</details>
<details><summary>Keyboard Interactive Authentication</summary>
Keyboard interactive authentication is, in general, a series of questions asked by the server with responses provided by the client.
This authentication method is typically used for multi-factor authentication.
More information can be found [here](./docs/keyboard-interactive.md).
</details>
## Dynamic user creation or modification
A user can be created or modified by an external program just before the login. More information about this can be found [here](./docs/dynamic-user-mod.md).
## Custom Actions
SFTPGo allows you to configure custom commands and/or HTTP hooks to receive notifications about file uploads, deletions and several other events.
More information about custom actions can be found [here](./docs/custom-actions.md).
## Virtual folders
Directories outside the user home directory, or based on a different storage provider, can be exposed as virtual folders; detailed information can be found [here](./docs/virtual-folders.md).
## Other hooks
You can get notified as soon as a new connection is established using the [Post-connect hook](./docs/post-connect-hook.md) and after each login using the [Post-login hook](./docs/post-login-hook.md). You can use your own hook to [check passwords](./docs/check-password-hook.md).
## Storage backends
### S3/GCP/Azure
Each user can be mapped with an [S3 Compatible Object Storage](./docs/s3.md)/[Google Cloud Storage](./docs/google-cloud-storage.md)/[Azure Blob Storage](./docs/azure-blob-storage.md) bucket or a bucket virtual folder that is exposed over SFTP/SCP/FTP/WebDAV.
### SFTP backend
Each user can be mapped to another SFTP server account or a subfolder of it. More information can be found [here](./docs/sftpfs.md).
### Encrypted backend
Data at-rest encryption is supported via the [cryptfs backend](./docs/dare.md).
### Other storage backends
Adding new storage backends is quite easy:
- implement the [Fs interface](./vfs/vfs.go#L28 "interface for filesystem backends")
- update the user method `GetFilesystem` to return the new backend
- update the web interface and the REST API CLI
- add the flags for the new storage backend to the `portable` mode
Anyway, some backends require a pay-per-use account (or they offer a free account for a limited time period only). To be able to add support for such backends or to review pull requests, please provide a test account. The test account must be available for enough time to be able to maintain the backend and do basic tests before each new release.
## Brute force protection
SFTPGo supports a built-in [defender](./docs/defender.md).
You can use the [connection failed logs](./docs/logs.md) for integration in tools such as [Fail2ban](http://www.fail2ban.org/). Examples of [jails](./fail2ban/jails) and [filters](./fail2ban/filters) working with `systemd`/`journald` are available in the fail2ban directory.
## Account's configuration properties
Detailed information about account configuration properties can be found [here](./docs/account.md).
## Performance
SFTPGo can easily saturate a Gigabit connection on low end hardware with no special configuration; this is generally fine for most use cases.
More in-depth analysis of performance can be found [here](./docs/performance.md).
## Release Cadence
SFTPGo releases are feature-driven, we don't have a fixed time based schedule. As a rough estimate, you can expect one or two new releases per year.
## Acknowledgements
SFTPGo makes use of the third party libraries listed inside [go.mod](./go.mod).
We are very grateful to all the people who contributed with ideas and/or pull requests.
Thank you to [ysura](https://www.ysura.com/) for granting me stable access to a test AWS S3 account.
## Sponsors
I'd like to make SFTPGo into a sustainable long term project and your [sponsorship](https://github.com/sponsors/drakkan) will really help :heart:
Thank you to our sponsors!
[<img src="https://www.7digital.com/wp-content/themes/sevendigital/images/top_logo.png" alt="7digital logo">](https://www.7digital.com/)
## License
GNU AGPLv3

acme/account.go Normal file

@@ -0,0 +1,46 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package acme
import (
"crypto"
"github.com/go-acme/lego/v4/registration"
)
type account struct {
Email string `json:"email"`
Registration *registration.Resource `json:"registration"`
key crypto.PrivateKey
}
/** Implementation of the registration.User interface **/
// GetEmail returns the email address for the account.
func (a *account) GetEmail() string {
return a.Email
}
// GetRegistration returns the server registration.
func (a *account) GetRegistration() *registration.Resource {
return a.Registration
}
// GetPrivateKey returns the private account key.
func (a *account) GetPrivateKey() crypto.PrivateKey {
return a.key
}
/** End **/
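// A compile-time sketch (not part of the original file) documenting the
// contract implemented above: *account must satisfy lego's registration.User
// interface, which is what lets the ACME configuration pass it straight to
// lego.NewConfig(&account) later in acme/acme.go.
//
//	var _ registration.User = (*account)(nil)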

acme/acme.go Normal file

@@ -0,0 +1,699 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// Package acme provides automatic access to certificates from Let's Encrypt and any other ACME-based CA
// The code here is largely copied from https://github.com/go-acme/lego/tree/master/cmd
// This package is intended to provide basic functionality for obtaining and renewing certificates
// and implements the "HTTP-01" and "TLSALPN-01" challenge types.
// For more advanced features use external tools such as "lego"
package acme
import (
"crypto"
"crypto/x509"
"encoding/json"
"encoding/pem"
"errors"
"fmt"
"math/rand"
"net/url"
"os"
"path/filepath"
"strconv"
"strings"
"time"
"github.com/go-acme/lego/v4/certcrypto"
"github.com/go-acme/lego/v4/certificate"
"github.com/go-acme/lego/v4/challenge"
"github.com/go-acme/lego/v4/challenge/http01"
"github.com/go-acme/lego/v4/challenge/tlsalpn01"
"github.com/go-acme/lego/v4/lego"
"github.com/go-acme/lego/v4/log"
"github.com/go-acme/lego/v4/providers/http/webroot"
"github.com/go-acme/lego/v4/registration"
"github.com/robfig/cron/v3"
"github.com/drakkan/sftpgo/v2/ftpd"
"github.com/drakkan/sftpgo/v2/httpd"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/telemetry"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/version"
"github.com/drakkan/sftpgo/v2/webdavd"
)
const (
logSender = "acme"
)
var (
config *Configuration
scheduler *cron.Cron
logMode int
)
// GetCertificates tries to obtain the certificates for the configured domains
func GetCertificates() error {
if config == nil {
return errors.New("acme is disabled")
}
return config.getCertificates()
}
// HTTP01Challenge defines the configuration for HTTP-01 challenge type
type HTTP01Challenge struct {
Port int `json:"port" mapstructure:"port"`
WebRoot string `json:"webroot" mapstructure:"webroot"`
ProxyHeader string `json:"proxy_header" mapstructure:"proxy_header"`
}
func (c *HTTP01Challenge) isEnabled() bool {
return c.Port > 0 || c.WebRoot != ""
}
func (c *HTTP01Challenge) validate() error {
if !c.isEnabled() {
return nil
}
if c.WebRoot != "" {
if !filepath.IsAbs(c.WebRoot) {
return fmt.Errorf("invalid HTTP-01 challenge web root, please set an absolute path")
}
_, err := os.Stat(c.WebRoot)
if err != nil {
return fmt.Errorf("invalid HTTP-01 challenge web root: %w", err)
}
} else {
if c.Port > 65535 {
return fmt.Errorf("invalid HTTP-01 challenge port: %d", c.Port)
}
}
return nil
}
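// A hedged example of how these validation rules play out (all values are
// assumptions, not part of the original file):
//
//	c := HTTP01Challenge{WebRoot: "/var/www/acme"} // must be an absolute, existing path
//	_ = c.validate() // nil if /var/www/acme exists
//	c = HTTP01Challenge{Port: 8080} // standalone provider, port must not exceed 65535
//	_ = c.validate() // nil
//	c = HTTP01Challenge{} // neither port nor web root: challenge disabled, validation skipped
//	_ = c.validate() // nil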
// TLSALPN01Challenge defines the configuration for TLSALPN-01 challenge type
type TLSALPN01Challenge struct {
Port int `json:"port" mapstructure:"port"`
}
func (c *TLSALPN01Challenge) isEnabled() bool {
return c.Port > 0
}
func (c *TLSALPN01Challenge) validate() error {
if !c.isEnabled() {
return nil
}
if c.Port > 65535 {
return fmt.Errorf("invalid TLSALPN-01 challenge port: %d", c.Port)
}
return nil
}
// Configuration holds the ACME configuration
type Configuration struct {
Email string `json:"email" mapstructure:"email"`
KeyType string `json:"key_type" mapstructure:"key_type"`
CertsPath string `json:"certs_path" mapstructure:"certs_path"`
CAEndpoint string `json:"ca_endpoint" mapstructure:"ca_endpoint"`
// if a certificate is to be valid for multiple domains specify the names separated by commas,
// for example: example.com,www.example.com
Domains []string `json:"domains" mapstructure:"domains"`
RenewDays int `json:"renew_days" mapstructure:"renew_days"`
HTTP01Challenge HTTP01Challenge `json:"http01_challenge" mapstructure:"http01_challenge"`
TLSALPN01Challenge TLSALPN01Challenge `json:"tls_alpn01_challenge" mapstructure:"tls_alpn01_challenge"`
accountConfigPath string
accountKeyPath string
lockPath string
tempDir string
}
// Initialize validates and sets the configuration
func (c *Configuration) Initialize(configDir string, checkRenew bool) error {
config = nil
setLogMode(checkRenew)
c.checkDomains()
if len(c.Domains) == 0 {
acmeLog(logger.LevelInfo, "no domains configured, acme disabled")
return nil
}
if c.Email == "" || !util.IsEmailValid(c.Email) {
return fmt.Errorf("invalid email address %#v", c.Email)
}
if c.RenewDays < 1 {
return fmt.Errorf("invalid number of days remaining before renewal: %d", c.RenewDays)
}
supportedKeyTypes := []string{
string(certcrypto.EC256),
string(certcrypto.EC384),
string(certcrypto.RSA2048),
string(certcrypto.RSA4096),
string(certcrypto.RSA8192),
}
if !util.Contains(supportedKeyTypes, c.KeyType) {
return fmt.Errorf("invalid key type %#v", c.KeyType)
}
caURL, err := url.Parse(c.CAEndpoint)
if err != nil {
return fmt.Errorf("invalid CA endopoint: %w", err)
}
if !util.IsFileInputValid(c.CertsPath) {
return fmt.Errorf("invalid certs path %#v", c.CertsPath)
}
if !filepath.IsAbs(c.CertsPath) {
c.CertsPath = filepath.Join(configDir, c.CertsPath)
}
err = os.MkdirAll(c.CertsPath, 0700)
if err != nil {
return fmt.Errorf("unable to create certs path %#v: %w", c.CertsPath, err)
}
c.tempDir = filepath.Join(c.CertsPath, "temp")
err = os.MkdirAll(c.tempDir, 0700)
if err != nil {
return fmt.Errorf("unable to create certs temp path %#v: %w", c.tempDir, err)
}
serverPath := strings.NewReplacer(":", "_", "/", string(os.PathSeparator)).Replace(caURL.Host)
accountPath := filepath.Join(c.CertsPath, serverPath)
err = os.MkdirAll(accountPath, 0700)
if err != nil {
return fmt.Errorf("unable to create account path %#v: %w", accountPath, err)
}
c.accountConfigPath = filepath.Join(accountPath, c.Email+".json")
c.accountKeyPath = filepath.Join(accountPath, c.Email+".key")
c.lockPath = filepath.Join(c.CertsPath, "lock")
if err = c.validateChallenges(); err != nil {
return err
}
acmeLog(logger.LevelInfo, "configured domains: %+v", c.Domains)
config = c
if checkRenew {
return startScheduler()
}
return nil
}
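// A minimal usage sketch with hypothetical values; the field names come from
// the Configuration struct above and the endpoint constant from the imported
// lego package:
//
//	cfg := &Configuration{
//		Email:           "admin@example.com",
//		KeyType:         string(certcrypto.EC256),
//		CertsPath:       "certs",
//		CAEndpoint:      lego.LEDirectoryStaging,
//		Domains:         []string{"sftp.example.com"},
//		RenewDays:       30,
//		HTTP01Challenge: HTTP01Challenge{Port: 80},
//	}
//	err := cfg.Initialize("/etc/sftpgo", true) // true also starts the renewal scheduler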
func (c *Configuration) validateChallenges() error {
if !c.HTTP01Challenge.isEnabled() && !c.TLSALPN01Challenge.isEnabled() {
return fmt.Errorf("no challenge type defined")
}
if err := c.HTTP01Challenge.validate(); err != nil {
return err
}
if err := c.TLSALPN01Challenge.validate(); err != nil {
return err
}
return nil
}
func (c *Configuration) checkDomains() {
var domains []string
for _, domain := range c.Domains {
domain = strings.TrimSpace(domain)
if domain == "" {
continue
}
if d, ok := isDomainValid(domain); ok {
domains = append(domains, d)
}
}
c.Domains = util.RemoveDuplicates(domains, true)
}
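// For example, Domains equal to []string{" example.com ", "", "example.com"}
// is normalized to []string{"example.com"}: entries are trimmed, empty ones
// are dropped and duplicates are removed.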
func (c *Configuration) setLockTime() error {
lockTime := fmt.Sprintf("%v", util.GetTimeAsMsSinceEpoch(time.Now()))
err := os.WriteFile(c.lockPath, []byte(lockTime), 0600)
if err != nil {
acmeLog(logger.LevelError, "unable to save lock time to %#v: %v", c.lockPath, err)
return fmt.Errorf("unable to save lock time: %w", err)
}
acmeLog(logger.LevelDebug, "lock time saved: %#v", lockTime)
return nil
}
func (c *Configuration) getLockTime() (time.Time, error) {
content, err := os.ReadFile(c.lockPath)
if err != nil {
if os.IsNotExist(err) {
acmeLog(logger.LevelDebug, "lock file %#v not found", c.lockPath)
return time.Time{}, nil
}
acmeLog(logger.LevelError, "unable to read lock file %#v: %v", c.lockPath, err)
return time.Time{}, err
}
msec, err := strconv.ParseInt(strings.TrimSpace(string(content)), 10, 64)
if err != nil {
acmeLog(logger.LevelError, "unable to parse lock time: %v", err)
return time.Time{}, fmt.Errorf("unable to parse lock time: %w", err)
}
return util.GetTimeFromMsecSinceEpoch(msec), nil
}
func (c *Configuration) saveAccount(account *account) error {
jsonBytes, err := json.MarshalIndent(account, "", "\t")
if err != nil {
return err
}
err = os.WriteFile(c.accountConfigPath, jsonBytes, 0600)
if err != nil {
acmeLog(logger.LevelError, "unable to save account to file %#v: %v", c.accountConfigPath, err)
return fmt.Errorf("unable to save account: %w", err)
}
return nil
}
func (c *Configuration) getAccount(privateKey crypto.PrivateKey) (account, error) {
_, err := os.Stat(c.accountConfigPath)
if err != nil && os.IsNotExist(err) {
acmeLog(logger.LevelDebug, "account does not exist")
return account{Email: c.Email, key: privateKey}, nil
}
var account account
fileBytes, err := os.ReadFile(c.accountConfigPath)
if err != nil {
acmeLog(logger.LevelError, "unable to read account from file %#v: %v", c.accountConfigPath, err)
return account, fmt.Errorf("unable to read account from file: %w", err)
}
err = json.Unmarshal(fileBytes, &account)
if err != nil {
acmeLog(logger.LevelError, "invalid account file content: %v", err)
return account, fmt.Errorf("unable to parse account file as JSON: %w", err)
}
account.key = privateKey
if account.Registration == nil || account.Registration.Body.Status == "" {
acmeLog(logger.LevelInfo, "couldn't load account but got a key. Try to look the account up")
reg, err := c.tryRecoverRegistration(privateKey)
if err != nil {
acmeLog(logger.LevelError, "unable to look the account up: %v", err)
return account, fmt.Errorf("unable to look the account up: %w", err)
}
account.Registration = reg
err = c.saveAccount(&account)
if err != nil {
return account, err
}
}
return account, nil
}
func (c *Configuration) loadPrivateKey() (crypto.PrivateKey, error) {
keyBytes, err := os.ReadFile(c.accountKeyPath)
if err != nil {
acmeLog(logger.LevelError, "unable to read account key from file %#v: %v", c.accountKeyPath, err)
return nil, fmt.Errorf("unable to read account key: %w", err)
}
keyBlock, _ := pem.Decode(keyBytes)
// guard against invalid PEM content: pem.Decode returns nil in that case
if keyBlock == nil {
acmeLog(logger.LevelError, "unable to decode PEM block from account key file %#v", c.accountKeyPath)
return nil, fmt.Errorf("invalid account key: no PEM data found")
}
var privateKey crypto.PrivateKey
switch keyBlock.Type {
case "RSA PRIVATE KEY":
privateKey, err = x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
case "EC PRIVATE KEY":
privateKey, err = x509.ParseECPrivateKey(keyBlock.Bytes)
default:
err = fmt.Errorf("unknown private key type %#v", keyBlock.Type)
}
if err != nil {
acmeLog(logger.LevelError, "unable to parse private key from file %#v: %v", c.accountKeyPath, err)
return privateKey, fmt.Errorf("unable to parse private key: %w", err)
}
return privateKey, nil
}
func (c *Configuration) generatePrivateKey() (crypto.PrivateKey, error) {
privateKey, err := certcrypto.GeneratePrivateKey(certcrypto.KeyType(c.KeyType))
if err != nil {
acmeLog(logger.LevelError, "unable to generate private key: %v", err)
return nil, fmt.Errorf("unable to generate private key: %w", err)
}
certOut, err := os.Create(c.accountKeyPath)
if err != nil {
acmeLog(logger.LevelError, "unable to save private key to file %#v: %v", c.accountKeyPath, err)
return nil, fmt.Errorf("unable to save private key: %w", err)
}
defer certOut.Close()
pemKey := certcrypto.PEMBlock(privateKey)
err = pem.Encode(certOut, pemKey)
if err != nil {
acmeLog(logger.LevelError, "unable to encode private key: %v", err)
return nil, fmt.Errorf("unable to encode private key: %w", err)
}
acmeLog(logger.LevelDebug, "new account private key generated")
return privateKey, nil
}
func (c *Configuration) getPrivateKey() (crypto.PrivateKey, error) {
_, err := os.Stat(c.accountKeyPath)
if err != nil && os.IsNotExist(err) {
acmeLog(logger.LevelDebug, "private key file %#v does not exist, generating new private key", c.accountKeyPath)
return c.generatePrivateKey()
}
acmeLog(logger.LevelDebug, "loading private key from file %#v, stat error: %v", c.accountKeyPath, err)
return c.loadPrivateKey()
}
func (c *Configuration) loadCertificatesForDomain(domain string) ([]*x509.Certificate, error) {
domain = sanitizedDomain(domain)
acmeLog(logger.LevelDebug, "loading certificates for domain %#v", domain)
content, err := os.ReadFile(filepath.Join(c.CertsPath, domain+".crt"))
if err != nil {
acmeLog(logger.LevelError, "unable to load certificates for domain %#v: %v", domain, err)
return nil, fmt.Errorf("unable to load certificates for domain %#v: %w", domain, err)
}
certs, err := certcrypto.ParsePEMBundle(content)
if err != nil {
acmeLog(logger.LevelError, "unable to parse certificates for domain %#v: %v", domain, err)
return certs, fmt.Errorf("unable to parse certificates for domain %#v: %w", domain, err)
}
return certs, nil
}
func (c *Configuration) needRenewal(x509Cert *x509.Certificate, domain string) bool {
if x509Cert.IsCA {
acmeLog(logger.LevelError, "certificate bundle starts with a CA certificate, cannot renew domain %v", domain)
return false
}
notAfter := int(time.Until(x509Cert.NotAfter).Hours() / 24.0)
if notAfter > c.RenewDays {
acmeLog(logger.LevelDebug, "the certificate for domain %#v expires in %d days, no renewal", domain, notAfter)
return false
}
return true
}
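// Worked example: with RenewDays set to 30, a certificate expiring in 45 days
// gives notAfter == 45, 45 > 30 holds and the renewal is skipped; at 10 days
// remaining the check fails and the certificate is renewed.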
func (c *Configuration) setup() (*account, *lego.Client, error) {
privateKey, err := c.getPrivateKey()
if err != nil {
return nil, nil, err
}
account, err := c.getAccount(privateKey)
if err != nil {
return nil, nil, err
}
config := lego.NewConfig(&account)
config.CADirURL = c.CAEndpoint
config.Certificate.KeyType = certcrypto.KeyType(c.KeyType)
config.UserAgent = fmt.Sprintf("SFTPGo/%v", version.Get().Version)
client, err := lego.NewClient(config)
if err != nil {
acmeLog(logger.LevelError, "unable to get ACME client: %v", err)
return nil, nil, fmt.Errorf("unable to get ACME client: %w", err)
}
err = c.setupChallenges(client)
if err != nil {
return nil, nil, err
}
return &account, client, nil
}
func (c *Configuration) setupChallenges(client *lego.Client) error {
client.Challenge.Remove(challenge.DNS01)
if c.HTTP01Challenge.isEnabled() {
if c.HTTP01Challenge.WebRoot != "" {
acmeLog(logger.LevelDebug, "configuring HTTP-01 web root challenge, path %#v", c.HTTP01Challenge.WebRoot)
providerServer, err := webroot.NewHTTPProvider(c.HTTP01Challenge.WebRoot)
if err != nil {
acmeLog(logger.LevelError, "unable to create HTTP-01 web root challenge provider from path %#v: %v",
c.HTTP01Challenge.WebRoot, err)
return fmt.Errorf("unable to create HTTP-01 web root challenge provider: %w", err)
}
err = client.Challenge.SetHTTP01Provider(providerServer)
if err != nil {
acmeLog(logger.LevelError, "unable to set HTTP-01 challenge provider: %v", err)
return fmt.Errorf("unable to set HTTP-01 challenge provider: %w", err)
}
} else {
acmeLog(logger.LevelDebug, "configuring HTTP-01 challenge, port %d", c.HTTP01Challenge.Port)
providerServer := http01.NewProviderServer("", fmt.Sprintf("%d", c.HTTP01Challenge.Port))
if c.HTTP01Challenge.ProxyHeader != "" {
acmeLog(logger.LevelDebug, "setting proxy header to \"%s\"", c.HTTP01Challenge.ProxyHeader)
providerServer.SetProxyHeader(c.HTTP01Challenge.ProxyHeader)
}
err := client.Challenge.SetHTTP01Provider(providerServer)
if err != nil {
acmeLog(logger.LevelError, "unable to set HTTP-01 challenge provider: %v", err)
return fmt.Errorf("unable to set HTTP-01 challenge provider: %w", err)
}
}
} else {
client.Challenge.Remove(challenge.HTTP01)
}
if c.TLSALPN01Challenge.isEnabled() {
acmeLog(logger.LevelDebug, "configuring TLSALPN-01 challenge, port %d", c.TLSALPN01Challenge.Port)
err := client.Challenge.SetTLSALPN01Provider(tlsalpn01.NewProviderServer("", fmt.Sprintf("%d", c.TLSALPN01Challenge.Port)))
if err != nil {
acmeLog(logger.LevelError, "unable to set TLSALPN-01 challenge provider: %v", err)
return fmt.Errorf("unable to set TLSALPN-01 challenge provider: %w", err)
}
} else {
client.Challenge.Remove(challenge.TLSALPN01)
}
return nil
}
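// Note that the DNS-01 challenge is always removed above, so this package
// cannot issue wildcard certificates; as the package documentation says, use
// external tools such as lego for more advanced setups.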
func (c *Configuration) register(client *lego.Client) (*registration.Resource, error) {
return client.Registration.Register(registration.RegisterOptions{TermsOfServiceAgreed: true})
}
func (c *Configuration) tryRecoverRegistration(privateKey crypto.PrivateKey) (*registration.Resource, error) {
config := lego.NewConfig(&account{key: privateKey})
config.CADirURL = c.CAEndpoint
config.UserAgent = fmt.Sprintf("SFTPGo/%v", version.Get().Version)
client, err := lego.NewClient(config)
if err != nil {
acmeLog(logger.LevelError, "unable to get the ACME client: %v", err)
return nil, err
}
return client.Registration.ResolveAccountByKey()
}
func (c *Configuration) obtainAndSaveCertificate(client *lego.Client, domain string) error {
var domains []string
for _, d := range strings.Split(domain, ",") {
d = strings.TrimSpace(d)
if d != "" {
domains = append(domains, d)
}
}
acmeLog(logger.LevelInfo, "requesting certificates for domains %+v", domains)
request := certificate.ObtainRequest{
Domains: domains,
Bundle: true,
MustStaple: false,
PreferredChain: "",
AlwaysDeactivateAuthorizations: false,
}
cert, err := client.Certificate.Obtain(request)
if err != nil {
acmeLog(logger.LevelError, "unable to obtain certificates for domains %+v: %v", domains, err)
return fmt.Errorf("unable to obtain certificates: %w", err)
}
domain = sanitizedDomain(domain)
err = os.WriteFile(filepath.Join(c.CertsPath, domain+".crt"), cert.Certificate, 0600)
if err != nil {
acmeLog(logger.LevelError, "unable to save certificate for domain %v: %v", domain, err)
return fmt.Errorf("unable to save certificate: %w", err)
}
err = os.WriteFile(filepath.Join(c.CertsPath, domain+".key"), cert.PrivateKey, 0600)
if err != nil {
acmeLog(logger.LevelError, "unable to save private key for domain %v: %v", domain, err)
return fmt.Errorf("unable to save private key: %w", err)
}
jsonBytes, err := json.MarshalIndent(cert, "", "\t")
if err != nil {
acmeLog(logger.LevelError, "unable to marshal certificate resources for domain %v: %v", domain, err)
return err
}
err = os.WriteFile(filepath.Join(c.CertsPath, domain+".json"), jsonBytes, 0600)
if err != nil {
acmeLog(logger.LevelError, "unable to save certificate resources for domain %v: %v", domain, err)
return fmt.Errorf("unable to save certificate resources: %w", err)
}
acmeLog(logger.LevelInfo, "certificates for domains %+v saved", domains)
return nil
}
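// The certificate and key are saved as <CertsPath>/<sanitized domain>.crt and
// <CertsPath>/<sanitized domain>.key, so a consumer can load the pair with the
// standard library. A sketch with assumed values:
//
//	pair, err := tls.LoadX509KeyPair(
//		filepath.Join(certsPath, "sftp.example.com.crt"),
//		filepath.Join(certsPath, "sftp.example.com.key"),
//	)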
func (c *Configuration) getCertificates() error {
account, client, err := c.setup()
if err != nil {
return err
}
if account.Registration == nil {
reg, err := c.register(client)
if err != nil {
acmeLog(logger.LevelError, "unable to register account: %v", err)
return fmt.Errorf("unable to register account: %w", err)
}
account.Registration = reg
err = c.saveAccount(account)
if err != nil {
return err
}
}
for _, domain := range c.Domains {
err = c.obtainAndSaveCertificate(client, domain)
if err != nil {
return err
}
}
return nil
}
func (c *Configuration) renewCertificates() error {
lockTime, err := c.getLockTime()
if err != nil {
return err
}
acmeLog(logger.LevelDebug, "certificate renew lock time %v", lockTime)
if lockTime.Add(-30*time.Second).Before(time.Now()) && lockTime.Add(5*time.Minute).After(time.Now()) {
acmeLog(logger.LevelInfo, "certificate renew skipped, lock time too close: %v", lockTime)
return nil
}
err = c.setLockTime()
if err != nil {
return err
}
account, client, err := c.setup()
if err != nil {
return err
}
if account.Registration == nil {
acmeLog(logger.LevelError, "cannot renew certificates, your account is not registered")
return fmt.Errorf("cannot renew certificates, your account is not registered")
}
var errRenew error
needReload := false
for _, domain := range c.Domains {
certificates, err := c.loadCertificatesForDomain(domain)
if err != nil {
return err
}
cert := certificates[0]
if !c.needRenewal(cert, domain) {
continue
}
err = c.obtainAndSaveCertificate(client, domain)
if err != nil {
errRenew = err
} else {
needReload = true
}
}
if needReload {
// at least one certificate has been renewed, send a reload to all services that may be using certificates
err = ftpd.ReloadCertificateMgr()
acmeLog(logger.LevelInfo, "ftpd certificate manager reloaded, error: %v", err)
err = httpd.ReloadCertificateMgr()
acmeLog(logger.LevelInfo, "httpd certificate manager reloaded, error: %v", err)
err = webdavd.ReloadCertificateMgr()
acmeLog(logger.LevelInfo, "webdavd certificate manager reloaded, error: %v", err)
err = telemetry.ReloadCertificateMgr()
acmeLog(logger.LevelInfo, "telemetry certificate manager reloaded, error: %v", err)
}
return errRenew
}
func isDomainValid(domain string) (string, bool) {
isValid := false
for _, d := range strings.Split(domain, ",") {
d = strings.TrimSpace(d)
if d != "" {
isValid = true
break
}
}
return domain, isValid
}
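// sanitizedDomain makes a comma-separated domain list usable as a file name,
// e.g. "*.example.com,example.com" becomes "_.example.com_example.com".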
func sanitizedDomain(domain string) string {
return strings.NewReplacer(":", "_", "*", "_", ",", "_").Replace(domain)
}
func stopScheduler() {
if scheduler != nil {
scheduler.Stop()
scheduler = nil
}
}
func startScheduler() error {
stopScheduler()
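// spread the 12 hour renewal checks by a random number of seconds so that
// several instances started at the same time do not contact the CA at the
// same moment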
rand.Seed(time.Now().UnixNano())
randSecs := rand.Intn(59)
scheduler = cron.New()
_, err := scheduler.AddFunc(fmt.Sprintf("@every 12h0m%ds", randSecs), renewCertificates)
if err != nil {
return fmt.Errorf("unable to schedule certificates renewal: %w", err)
}
acmeLog(logger.LevelInfo, "starting scheduler, initial certificates check in %d seconds", randSecs)
initialTimer := time.NewTimer(time.Duration(randSecs) * time.Second)
go func() {
<-initialTimer.C
renewCertificates()
}()
scheduler.Start()
return nil
}
func renewCertificates() {
if config != nil {
if err := config.renewCertificates(); err != nil {
acmeLog(logger.LevelError, "unable to renew certificates: %v", err)
}
}
}
func setLogMode(checkRenew bool) {
if checkRenew {
logMode = 1
} else {
logMode = 2
}
log.Logger = &logger.LegoAdapter{
LogToConsole: logMode != 1,
}
}
func acmeLog(level logger.LogLevel, format string, v ...any) {
if logMode == 1 {
logger.Log(level, logSender, "", format, v...)
} else {
switch level {
case logger.LevelDebug:
logger.DebugToConsole(format, v...)
case logger.LevelInfo:
logger.InfoToConsole(format, v...)
case logger.LevelWarn:
logger.WarnToConsole(format, v...)
default:
logger.ErrorToConsole(format, v...)
}
}
}

cmd/acme.go Normal file

@@ -0,0 +1,69 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
import (
"os"
"github.com/rs/zerolog"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/acme"
"github.com/drakkan/sftpgo/v2/config"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
var (
acmeCmd = &cobra.Command{
Use: "acme",
Short: "Obtain TLS certificates from ACME-based CAs like Let's Encrypt",
}
acmeRunCmd = &cobra.Command{
Use: "run",
Short: "Register your account and obtain certificates",
Long: `This command must be run to obtain TLS certificates the first time or every
time you add a new domain to your configuration file.
Certificates are saved in the configured "certs_path".
After this initial step, the certificates are automatically checked and
renewed by the SFTPGo service.
`,
Run: func(cmd *cobra.Command, args []string) {
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
configDir = util.CleanDirInput(configDir)
err := config.LoadConfig(configDir, configFile)
if err != nil {
logger.ErrorToConsole("Unable to initialize data provider, config load error: %v", err)
return
}
acmeConfig := config.GetACMEConfig()
err = acmeConfig.Initialize(configDir, false)
if err != nil {
logger.ErrorToConsole("Unable to initialize ACME configuration: %v", err)
}
if err = acme.GetCertificates(); err != nil {
logger.ErrorToConsole("Cannot get certificates: %v", err)
os.Exit(1)
}
},
}
)
func init() {
addConfigFlags(acmeRunCmd)
acmeCmd.AddCommand(acmeRunCmd)
rootCmd.AddCommand(acmeCmd)
}
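// A typical first run, assuming the configuration directory flag added by
// addConfigFlags (the path is only an example):
//
//	sftpgo acme run --config-dir /etc/sftpgo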

cmd/awscontainer.go Normal file

@@ -0,0 +1,34 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//go:build awscontainer
// +build awscontainer
package cmd
import (
"github.com/spf13/cobra"
"github.com/spf13/viper"
)
func addAWSContainerFlags(cmd *cobra.Command) {
viper.SetDefault("disable_aws_installation_code", false)
viper.BindEnv("disable_aws_installation_code", "SFTPGO_DISABLE_AWS_INSTALLATION_CODE") //nolint:errcheck
cmd.Flags().BoolVar(&disableAWSInstallationCode, "disable-aws-installation-code", viper.GetBool("disable_aws_installation_code"),
`Disable installation code for the AWS container.
This flag can be set using
SFTPGO_DISABLE_AWS_INSTALLATION_CODE env var too.
`)
viper.BindPFlag("disable_aws_installation_code", cmd.Flags().Lookup("disable-aws-installation-code")) //nolint:errcheck
}


@@ -0,0 +1,24 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//go:build !awscontainer
// +build !awscontainer
package cmd
import (
"github.com/spf13/cobra"
)
func addAWSContainerFlags(cmd *cobra.Command) {}


@@ -1,3 +1,17 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
import "github.com/spf13/cobra"


@@ -1,86 +1,133 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
import (
"os"
"github.com/rs/zerolog"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/logger"
)
var genCompletionCmd = &cobra.Command{
Use: "completion [bash|zsh|fish|powershell]",
Short: "Generate shell completion script",
Long: `To load completions:
Short: "Generate the autocompletion script for the specified shell",
Long: `Generate the autocompletion script for sftpgo for the specified shell.
Bash:
See each sub-command's help for details on how to use the generated script.
`,
}
var genCompletionBashCmd = &cobra.Command{
Use: "bash",
Short: "Generate the autocompletion script for bash",
Long: `Generate the autocompletion script for the bash shell.
This script depends on the 'bash-completion' package.
If it is not installed already, you can install it via your OS's package
manager.
To load completions in your current shell session:
$ source <(sftpgo gen completion bash)
To load completions for each session, execute once:
To load completions for every new session, execute once:
Linux:
$ sudo sftpgo gen completion bash > /usr/share/bash-completion/completions/sftpgo
MacOS:
$ sudo sftpgo gen completion bash > /usr/local/etc/bash_completion.d/sftpgo
You will need to start a new shell for this setup to take effect.
`,
DisableFlagsInUseLine: true,
RunE: func(cmd *cobra.Command, args []string) error {
return cmd.Root().GenBashCompletionV2(os.Stdout, true)
},
}
Zsh:
var genCompletionZshCmd = &cobra.Command{
Use: "zsh",
Short: "Generate the autocompletion script for zsh",
Long: `Generate the autocompletion script for the zsh shell.
If shell completion is not already enabled in your environment you will need
to enable it. You can execute the following once:
$ echo "autoload -U compinit; compinit" >> ~/.zshrc
To load completions for each session, execute once:
To load completions for every new session, execute once:
$ sftpgo gen completion zsh > "${fpath[1]}/_sftpgo"
Linux:
$ sftpgo gen completion zsh > "${fpath[1]}/_sftpgo"
Fish:
macOS:
$ sudo sftpgo gen completion zsh > /usr/local/share/zsh/site-functions/_sftpgo
You will need to start a new shell for this setup to take effect.
`,
DisableFlagsInUseLine: true,
RunE: func(cmd *cobra.Command, args []string) error {
return cmd.Root().GenZshCompletion(os.Stdout)
},
}
var genCompletionFishCmd = &cobra.Command{
Use: "fish",
Short: "Generate the autocompletion script for fish",
Long: `Generate the autocompletion script for the fish shell.
To load completions in your current shell session:
$ sftpgo gen completion fish | source
To load completions for each session, execute once:
To load completions for every new session, execute once:
$ sftpgo gen completion fish > ~/.config/fish/completions/sftpgo.fish
Powershell:
PS> sftpgo gen completion powershell | Out-String | Invoke-Expression
To load completions for every new session, run:
PS> sftpgo gen completion powershell > sftpgo.ps1
and source this file from your powershell profile.
You will need to start a new shell for this setup to take effect.
`,
DisableFlagsInUseLine: true,
ValidArgs: []string{"bash", "zsh", "fish", "powershell"},
Args: cobra.ExactValidArgs(1),
Run: func(cmd *cobra.Command, args []string) {
var err error
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
switch args[0] {
case "bash":
err = cmd.Root().GenBashCompletion(os.Stdout)
case "zsh":
err = cmd.Root().GenZshCompletion(os.Stdout)
case "fish":
err = cmd.Root().GenFishCompletion(os.Stdout, true)
case "powershell":
err = cmd.Root().GenPowerShellCompletionWithDesc(os.Stdout)
}
if err != nil {
logger.WarnToConsole("Unable to generate shell completion script: %v", err)
os.Exit(1)
}
RunE: func(cmd *cobra.Command, args []string) error {
return cmd.Root().GenFishCompletion(os.Stdout, true)
},
}
var genCompletionPowerShellCmd = &cobra.Command{
Use: "powershell",
Short: "Generate the autocompletion script for powershell",
Long: `Generate the autocompletion script for powershell.
To load completions in your current shell session:
PS C:\> sftpgo gen completion powershell | Out-String | Invoke-Expression
To load completions for every new session, add the output of the above command
to your powershell profile.
`,
DisableFlagsInUseLine: true,
RunE: func(cmd *cobra.Command, args []string) error {
return cmd.Root().GenPowerShellCompletionWithDesc(os.Stdout)
},
}
func init() {
genCompletionCmd.AddCommand(genCompletionBashCmd)
genCompletionCmd.AddCommand(genCompletionZshCmd)
genCompletionCmd.AddCommand(genCompletionFishCmd)
genCompletionCmd.AddCommand(genCompletionPowerShellCmd)
genCmd.AddCommand(genCompletionCmd)
}
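Each sub-command above just delegates to the matching cobra generator on the root command. A short sketch writing the bash script to a file instead of stdout; the output path is illustrative:

package main

import (
	"log"
	"os"

	"github.com/spf13/cobra"
)

func main() {
	root := &cobra.Command{Use: "demo"}
	f, err := os.Create("demo-completion.bash") // illustrative output path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	// the second argument enables completion descriptions, as the bash
	// sub-command above does when writing to stdout
	if err := root.GenBashCompletionV2(f, true); err != nil {
		log.Fatalf("unable to generate shell completion script: %v", err)
	}
}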

View File

@@ -1,30 +1,47 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
import (
"errors"
"fmt"
"io/fs"
"os"
"github.com/rs/zerolog"
"github.com/spf13/cobra"
"github.com/spf13/cobra/doc"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/version"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/version"
)
var (
manDir string
genManCmd = &cobra.Command{
Use: "man",
Short: "Generate man pages for SFTPGo CLI",
Short: "Generate man pages for sftpgo",
Long: `This command automatically generates up-to-date man pages of SFTPGo's
command-line interface. By default, it creates the man page files
in the "man" directory under the current directory.
command-line interface.
By default, it creates the man page files in the "man" directory under the
current directory.
`,
Run: func(cmd *cobra.Command, args []string) {
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
if _, err := os.Stat(manDir); os.IsNotExist(err) {
if _, err := os.Stat(manDir); errors.Is(err, fs.ErrNotExist) {
err = os.MkdirAll(manDir, os.ModePerm)
if err != nil {
logger.WarnToConsole("Unable to generate man page files: %v", err)
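The os.IsNotExist to errors.Is(err, fs.ErrNotExist) switch is the idiomatic Go 1.13+ form: errors.Is follows wrapped error chains, while os.IsNotExist only inspects the error it is handed. A small sketch of the difference:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	_, err := os.Stat("does-not-exist")
	// wrap the *PathError the way callers often do
	wrapped := fmt.Errorf("stat failed: %w", err)
	fmt.Println(errors.Is(wrapped, fs.ErrNotExist)) // true: unwraps the chain
	fmt.Println(os.IsNotExist(wrapped))             // false: no %w unwrapping
}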

View File

@@ -1,3 +1,17 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
import (
@@ -7,16 +21,17 @@ import (
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/drakkan/sftpgo/config"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/v2/config"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/service"
"github.com/drakkan/sftpgo/v2/util"
)
var (
initProviderCmd = &cobra.Command{
Use: "initprovider",
Short: "Initializes and/or updates the configured data provider",
Short: "Initialize and/or updates the configured data provider",
Long: `This command reads the data provider connection details from the specified
configuration file and creates the initial structure or update the existing one,
as needed.
@@ -33,23 +48,28 @@ To initialize/update the data provider from the configuration directory simply u
$ sftpgo initprovider
Any defined action is ignored.
Please take a look at the usage below to customize the options.`,
Run: func(cmd *cobra.Command, args []string) {
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
configDir = utils.CleanDirInput(configDir)
configDir = util.CleanDirInput(configDir)
err := config.LoadConfig(configDir, configFile)
if err != nil {
logger.WarnToConsole("Unable to initialize data provider, config load error: %v", err)
logger.ErrorToConsole("Unable to initialize data provider, config load error: %v", err)
return
}
kmsConfig := config.GetKMSConfig()
err = kmsConfig.Initialize()
if err != nil {
logger.ErrorToConsole("unable to initialize KMS: %v", err)
logger.ErrorToConsole("Unable to initialize KMS: %v", err)
os.Exit(1)
}
providerConf := config.GetProviderConf()
// ignore actions
providerConf.Actions.Hook = ""
providerConf.Actions.ExecuteFor = nil
providerConf.Actions.ExecuteOn = nil
logger.InfoToConsole("Initializing provider: %#v config file: %#v", providerConf.Driver, viper.ConfigFileUsed())
err = dataprovider.InitializeDatabase(providerConf, configDir)
if err == nil {
@@ -57,9 +77,21 @@ Please take a look at the usage below to customize the options.`,
} else if err == dataprovider.ErrNoInitRequired {
logger.InfoToConsole("%v", err.Error())
} else {
logger.WarnToConsole("Unable to initialize/update the data provider: %v", err)
logger.ErrorToConsole("Unable to initialize/update the data provider: %v", err)
os.Exit(1)
}
if providerConf.Driver != dataprovider.MemoryDataProviderName && loadDataFrom != "" {
service := service.Service{
LoadDataFrom: loadDataFrom,
LoadDataMode: loadDataMode,
LoadDataQuotaScan: loadDataQuotaScan,
LoadDataClean: loadDataClean,
}
if err = service.LoadInitialData(); err != nil {
logger.ErrorToConsole("Cannot load initial data: %v", err)
os.Exit(1)
}
}
},
}
)
@@ -67,4 +99,5 @@ Please take a look at the usage below to customize the options.`,
func init() {
rootCmd.AddCommand(initProviderCmd)
addConfigFlags(initProviderCmd)
addBaseLoadDataFlags(initProviderCmd)
}
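Condensed, the Run handler above is a fixed sequence: load the configuration, initialize the KMS, drop any configured provider actions, then create or upgrade the schema. A sketch assembled from the calls shown in the diff; it assumes default config locations and is not a drop-in replacement for the command:

package main

import (
	"log"

	"github.com/drakkan/sftpgo/v2/config"
	"github.com/drakkan/sftpgo/v2/dataprovider"
)

func main() {
	configDir, configFile := ".", "" // assumption: defaults
	if err := config.LoadConfig(configDir, configFile); err != nil {
		log.Fatalf("config load error: %v", err)
	}
	if err := config.GetKMSConfig().Initialize(); err != nil {
		log.Fatalf("unable to initialize KMS: %v", err)
	}
	providerConf := config.GetProviderConf()
	// initprovider ignores any defined provider actions
	providerConf.Actions.Hook = ""
	providerConf.Actions.ExecuteOn = nil
	providerConf.Actions.ExecuteFor = nil
	err := dataprovider.InitializeDatabase(providerConf, configDir)
	if err != nil && err != dataprovider.ErrNoInitRequired {
		log.Fatalf("unable to initialize/update the data provider: %v", err)
	}
}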

View File

@@ -1,3 +1,17 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
import (
@@ -7,8 +21,8 @@ import (
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/service"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/v2/service"
"github.com/drakkan/sftpgo/v2/util"
)
var (
@@ -23,7 +37,7 @@ sftpgo service install
Please take a look at the usage below to customize the startup options`,
Run: func(cmd *cobra.Command, args []string) {
s := service.Service{
ConfigDir: utils.CleanDirInput(configDir),
ConfigDir: util.CleanDirInput(configDir),
ConfigFile: configFile,
LogFilePath: logFilePath,
LogMaxSize: logMaxSize,
@@ -31,6 +45,7 @@ Please take a look at the usage below to customize the startup options`,
LogMaxAge: logMaxAge,
LogCompress: logCompress,
LogVerbose: logVerbose,
LogUTCTime: logUTCTime,
Shutdown: make(chan bool),
}
winService := service.WindowsService{
@@ -60,7 +75,7 @@ func init() {
func getCustomServeFlags() []string {
result := []string{}
if configDir != defaultConfigDir {
configDir = utils.CleanDirInput(configDir)
configDir = util.CleanDirInput(configDir)
result = append(result, "--"+configDirFlag)
result = append(result, configDir)
}
@@ -87,6 +102,9 @@ func getCustomServeFlags() []string {
if logVerbose != defaultLogVerbose {
result = append(result, "--"+logVerboseFlag+"=false")
}
if logUTCTime != defaultLogUTCTime {
result = append(result, "--"+logUTCTimeFlag+"=true")
}
if logCompress != defaultLogCompress {
result = append(result, "--"+logCompressFlag+"=true")
}

View File

@@ -1,3 +1,18 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//go:build !noportable
// +build !noportable
package cmd
@@ -9,15 +24,16 @@ import (
"path/filepath"
"strings"
"github.com/sftpgo/sdk"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/common"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/kms"
"github.com/drakkan/sftpgo/service"
"github.com/drakkan/sftpgo/sftpd"
"github.com/drakkan/sftpgo/version"
"github.com/drakkan/sftpgo/vfs"
"github.com/drakkan/sftpgo/v2/common"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/service"
"github.com/drakkan/sftpgo/v2/sftpd"
"github.com/drakkan/sftpgo/v2/version"
"github.com/drakkan/sftpgo/v2/vfs"
)
var (
@@ -27,23 +43,28 @@ var (
portableAdvertiseCredentials bool
portableUsername string
portablePassword string
portableStartDir string
portableLogFile string
portableLogVerbose bool
portableLogUTCTime bool
portablePublicKeys []string
portablePermissions []string
portableSSHCommands []string
portableAllowedPatterns []string
portableDeniedPatterns []string
portableFsProvider int
portableFsProvider string
portableS3Bucket string
portableS3Region string
portableS3AccessKey string
portableS3AccessSecret string
portableS3RoleARN string
portableS3Endpoint string
portableS3StorageClass string
portableS3ACL string
portableS3KeyPrefix string
portableS3ULPartSize int
portableS3ULConcurrency int
portableS3ForcePathStyle bool
portableGCSBucket string
portableGCSCredentialsFile string
portableGCSAutoCredentials int
@@ -64,6 +85,8 @@ var (
portableAzKeyPrefix string
portableAzULPartSize int
portableAzULConcurrency int
portableAzDLPartSize int
portableAzDLConcurrency int
portableAzUseEmulator bool
portableCryptPassphrase string
portableSFTPEndpoint string
@@ -76,7 +99,7 @@ var (
portableSFTPDBufferSize int64
portableCmd = &cobra.Command{
Use: "portable",
Short: "Serve a single directory",
Short: "Serve a single directory/account",
Long: `To serve the current working directory with auto generated credentials simply
use:
@@ -85,9 +108,9 @@ $ sftpgo portable
Please take a look at the usage below to customize the serving parameters`,
Run: func(cmd *cobra.Command, args []string) {
portableDir := directoryToServe
fsProvider := vfs.FilesystemProvider(portableFsProvider)
fsProvider := sdk.GetProviderByName(portableFsProvider)
if !filepath.IsAbs(portableDir) {
if fsProvider == vfs.LocalFilesystemProvider {
if fsProvider == sdk.LocalFilesystemProvider {
portableDir, _ = filepath.Abs(portableDir)
} else {
portableDir = os.TempDir()
@@ -96,7 +119,7 @@ Please take a look at the usage below to customize the serving parameters`,
permissions := make(map[string][]string)
permissions["/"] = portablePermissions
portableGCSCredentials := ""
if fsProvider == vfs.GCSFilesystemProvider && portableGCSCredentialsFile != "" {
if fsProvider == sdk.GCSFilesystemProvider && portableGCSCredentialsFile != "" {
contents, err := getFileContents(portableGCSCredentialsFile)
if err != nil {
fmt.Printf("Unable to get GCS credentials: %v\n", err)
@@ -106,7 +129,7 @@ Please take a look at the usage below to customize the serving parameters`,
portableGCSAutoCredentials = 0
}
portableSFTPPrivateKey := ""
if fsProvider == vfs.SFTPFilesystemProvider && portableSFTPPrivateKeyPath != "" {
if fsProvider == sdk.SFTPFilesystemProvider && portableSFTPPrivateKeyPath != "" {
contents, err := getFileContents(portableSFTPPrivateKeyPath)
if err != nil {
fmt.Printf("Unable to get SFTP private key: %v\n", err)
@@ -114,8 +137,15 @@ Please take a look at the usage below to customize the serving parameters`,
}
portableSFTPPrivateKey = contents
}
if portableFTPDPort >= 0 && len(portableFTPSCert) > 0 && len(portableFTPSKey) > 0 {
_, err := common.NewCertManager(portableFTPSCert, portableFTPSKey, filepath.Clean(defaultConfigDir),
if portableFTPDPort >= 0 && portableFTPSCert != "" && portableFTPSKey != "" {
keyPairs := []common.TLSKeyPair{
{
Cert: portableFTPSCert,
Key: portableFTPSKey,
ID: common.DefaultTLSKeyPaidID,
},
}
_, err := common.NewCertManager(keyPairs, filepath.Clean(defaultConfigDir),
"FTP portable")
if err != nil {
fmt.Printf("Unable to load FTPS key pair, cert file %#v key file %#v error: %v\n",
@@ -123,8 +153,15 @@ Please take a look at the usage below to customize the serving parameters`,
os.Exit(1)
}
}
if portableWebDAVPort > 0 && len(portableWebDAVCert) > 0 && len(portableWebDAVKey) > 0 {
_, err := common.NewCertManager(portableWebDAVCert, portableWebDAVKey, filepath.Clean(defaultConfigDir),
if portableWebDAVPort > 0 && portableWebDAVCert != "" && portableWebDAVKey != "" {
keyPairs := []common.TLSKeyPair{
{
Cert: portableWebDAVCert,
Key: portableWebDAVKey,
ID: common.DefaultTLSKeyPaidID,
},
}
_, err := common.NewCertManager(keyPairs, filepath.Clean(defaultConfigDir),
"WebDAV portable")
if err != nil {
fmt.Printf("Unable to load WebDAV key pair, cert file %#v key file %#v error: %v\n",
@@ -141,64 +178,83 @@ Please take a look at the usage below to customize the serving parameters`,
LogMaxAge: defaultLogMaxAge,
LogCompress: defaultLogCompress,
LogVerbose: portableLogVerbose,
LogUTCTime: portableLogUTCTime,
Shutdown: make(chan bool),
PortableMode: 1,
PortableUser: dataprovider.User{
Username: portableUsername,
Password: portablePassword,
PublicKeys: portablePublicKeys,
Permissions: permissions,
HomeDir: portableDir,
Status: 1,
BaseUser: sdk.BaseUser{
Username: portableUsername,
Password: portablePassword,
PublicKeys: portablePublicKeys,
Permissions: permissions,
HomeDir: portableDir,
Status: 1,
},
Filters: dataprovider.UserFilters{
BaseUserFilters: sdk.BaseUserFilters{
FilePatterns: parsePatternsFilesFilters(),
StartDirectory: portableStartDir,
},
},
FsConfig: vfs.Filesystem{
Provider: vfs.FilesystemProvider(portableFsProvider),
Provider: sdk.GetProviderByName(portableFsProvider),
S3Config: vfs.S3FsConfig{
Bucket: portableS3Bucket,
Region: portableS3Region,
AccessKey: portableS3AccessKey,
AccessSecret: kms.NewPlainSecret(portableS3AccessSecret),
Endpoint: portableS3Endpoint,
StorageClass: portableS3StorageClass,
KeyPrefix: portableS3KeyPrefix,
UploadPartSize: int64(portableS3ULPartSize),
UploadConcurrency: portableS3ULConcurrency,
BaseS3FsConfig: sdk.BaseS3FsConfig{
Bucket: portableS3Bucket,
Region: portableS3Region,
AccessKey: portableS3AccessKey,
RoleARN: portableS3RoleARN,
Endpoint: portableS3Endpoint,
StorageClass: portableS3StorageClass,
ACL: portableS3ACL,
KeyPrefix: portableS3KeyPrefix,
UploadPartSize: int64(portableS3ULPartSize),
UploadConcurrency: portableS3ULConcurrency,
ForcePathStyle: portableS3ForcePathStyle,
},
AccessSecret: kms.NewPlainSecret(portableS3AccessSecret),
},
GCSConfig: vfs.GCSFsConfig{
Bucket: portableGCSBucket,
Credentials: kms.NewPlainSecret(portableGCSCredentials),
AutomaticCredentials: portableGCSAutoCredentials,
StorageClass: portableGCSStorageClass,
KeyPrefix: portableGCSKeyPrefix,
BaseGCSFsConfig: sdk.BaseGCSFsConfig{
Bucket: portableGCSBucket,
AutomaticCredentials: portableGCSAutoCredentials,
StorageClass: portableGCSStorageClass,
KeyPrefix: portableGCSKeyPrefix,
},
Credentials: kms.NewPlainSecret(portableGCSCredentials),
},
AzBlobConfig: vfs.AzBlobFsConfig{
Container: portableAzContainer,
AccountName: portableAzAccountName,
AccountKey: kms.NewPlainSecret(portableAzAccountKey),
Endpoint: portableAzEndpoint,
AccessTier: portableAzAccessTier,
SASURL: kms.NewPlainSecret(portableAzSASURL),
KeyPrefix: portableAzKeyPrefix,
UseEmulator: portableAzUseEmulator,
UploadPartSize: int64(portableAzULPartSize),
UploadConcurrency: portableAzULConcurrency,
BaseAzBlobFsConfig: sdk.BaseAzBlobFsConfig{
Container: portableAzContainer,
AccountName: portableAzAccountName,
Endpoint: portableAzEndpoint,
AccessTier: portableAzAccessTier,
KeyPrefix: portableAzKeyPrefix,
UseEmulator: portableAzUseEmulator,
UploadPartSize: int64(portableAzULPartSize),
UploadConcurrency: portableAzULConcurrency,
DownloadPartSize: int64(portableAzDLPartSize),
DownloadConcurrency: portableAzDLConcurrency,
},
AccountKey: kms.NewPlainSecret(portableAzAccountKey),
SASURL: kms.NewPlainSecret(portableAzSASURL),
},
CryptConfig: vfs.CryptFsConfig{
Passphrase: kms.NewPlainSecret(portableCryptPassphrase),
},
SFTPConfig: vfs.SFTPFsConfig{
Endpoint: portableSFTPEndpoint,
Username: portableSFTPUsername,
Password: kms.NewPlainSecret(portableSFTPPassword),
PrivateKey: kms.NewPlainSecret(portableSFTPPrivateKey),
Fingerprints: portableSFTPFingerprints,
Prefix: portableSFTPPrefix,
DisableCouncurrentReads: portableSFTPDisableConcurrentReads,
BufferSize: portableSFTPDBufferSize,
BaseSFTPFsConfig: sdk.BaseSFTPFsConfig{
Endpoint: portableSFTPEndpoint,
Username: portableSFTPUsername,
Fingerprints: portableSFTPFingerprints,
Prefix: portableSFTPPrefix,
DisableCouncurrentReads: portableSFTPDisableConcurrentReads,
BufferSize: portableSFTPDBufferSize,
},
Password: kms.NewPlainSecret(portableSFTPPassword),
PrivateKey: kms.NewPlainSecret(portableSFTPPrivateKey),
},
},
Filters: dataprovider.UserFilters{
FilePatterns: parsePatternsFilesFilters(),
},
},
}
if err := service.StartPortableMode(portableSFTPDPort, portableFTPDPort, portableWebDAVPort, portableSSHCommands, portableAdvertiseService,
@@ -220,6 +276,9 @@ func init() {
This can be an absolute path or a path
relative to the current directory
`)
portableCmd.Flags().StringVar(&portableStartDir, "start-directory", "/", `Alternate start directory.
This is a virtual path not a filesystem
path`)
portableCmd.Flags().IntVarP(&portableSFTPDPort, "sftpd-port", "s", 0, `0 means a random unprivileged port,
< 0 disabled`)
portableCmd.Flags().IntVar(&portableFTPDPort, "ftpd-port", -1, `0 means a random unprivileged port,
@@ -237,6 +296,7 @@ value`)
value`)
portableCmd.Flags().StringVarP(&portableLogFile, logFilePathFlag, "l", "", "Leave empty to disable logging")
portableCmd.Flags().BoolVarP(&portableLogVerbose, logVerboseFlag, "v", false, "Enable verbose logs")
portableCmd.Flags().BoolVar(&portableLogUTCTime, logUTCTimeFlag, false, "Use UTC time for logging")
portableCmd.Flags().StringSliceVarP(&portablePublicKeys, "public-key", "k", []string{}, "")
portableCmd.Flags().StringSliceVarP(&portablePermissions, "permissions", "g", []string{"list", "download"},
`User's permissions. "*" means any
@@ -259,18 +319,20 @@ multicast DNS`)
advertised via multicast DNS, this
flag allows to put username/password
inside the advertised TXT record`)
portableCmd.Flags().IntVarP(&portableFsProvider, "fs-provider", "f", int(vfs.LocalFilesystemProvider), `0 => local filesystem
1 => AWS S3 compatible
2 => Google Cloud Storage
3 => Azure Blob Storage
4 => Encrypted local filesystem
5 => SFTP`)
portableCmd.Flags().StringVarP(&portableFsProvider, "fs-provider", "f", "osfs", `osfs => local filesystem (legacy value: 0)
s3fs => AWS S3 compatible (legacy: 1)
gcsfs => Google Cloud Storage (legacy: 2)
azblobfs => Azure Blob Storage (legacy: 3)
cryptfs => Encrypted local filesystem (legacy: 4)
sftpfs => SFTP (legacy: 5)`)
portableCmd.Flags().StringVar(&portableS3Bucket, "s3-bucket", "", "")
portableCmd.Flags().StringVar(&portableS3Region, "s3-region", "", "")
portableCmd.Flags().StringVar(&portableS3AccessKey, "s3-access-key", "", "")
portableCmd.Flags().StringVar(&portableS3AccessSecret, "s3-access-secret", "", "")
portableCmd.Flags().StringVar(&portableS3RoleARN, "s3-role-arn", "", "")
portableCmd.Flags().StringVar(&portableS3Endpoint, "s3-endpoint", "", "")
portableCmd.Flags().StringVar(&portableS3StorageClass, "s3-storage-class", "", "")
portableCmd.Flags().StringVar(&portableS3ACL, "s3-acl", "", "")
portableCmd.Flags().StringVar(&portableS3KeyPrefix, "s3-key-prefix", "", `Allows to restrict access to the
virtual folder identified by this
prefix and its contents`)
@@ -278,6 +340,7 @@ prefix and its contents`)
(MB)`)
portableCmd.Flags().IntVar(&portableS3ULConcurrency, "s3-upload-concurrency", 2, `How many parts are uploaded in
parallel`)
portableCmd.Flags().BoolVar(&portableS3ForcePathStyle, "s3-force-path-style", false, `Force path style bucket URL`)
portableCmd.Flags().StringVar(&portableGCSBucket, "gcs-bucket", "", "")
portableCmd.Flags().StringVar(&portableGCSStorageClass, "gcs-storage-class", "", "")
portableCmd.Flags().StringVar(&portableGCSKeyPrefix, "gcs-key-prefix", "", `Allows to restrict access to the
@@ -305,9 +368,13 @@ container setting`)
portableCmd.Flags().StringVar(&portableAzKeyPrefix, "az-key-prefix", "", `Allows to restrict access to the
virtual folder identified by this
prefix and its contents`)
portableCmd.Flags().IntVar(&portableAzULPartSize, "az-upload-part-size", 4, `The buffer size for multipart uploads
portableCmd.Flags().IntVar(&portableAzULPartSize, "az-upload-part-size", 5, `The buffer size for multipart uploads
(MB)`)
portableCmd.Flags().IntVar(&portableAzULConcurrency, "az-upload-concurrency", 2, `How many parts are uploaded in
portableCmd.Flags().IntVar(&portableAzULConcurrency, "az-upload-concurrency", 5, `How many parts are uploaded in
parallel`)
portableCmd.Flags().IntVar(&portableAzDLPartSize, "az-download-part-size", 5, `The buffer size for multipart downloads
(MB)`)
portableCmd.Flags().IntVar(&portableAzDLConcurrency, "az-download-concurrency", 5, `How many parts are downloaded in
parallel`)
portableCmd.Flags().BoolVar(&portableAzUseEmulator, "az-use-emulator", false, "")
portableCmd.Flags().StringVar(&portableCryptPassphrase, "crypto-passphrase", "", `Passphrase for encryption/decryption`)
@@ -335,12 +402,12 @@ by overlapping round-trip times`)
rootCmd.AddCommand(portableCmd)
}
func parsePatternsFilesFilters() []dataprovider.PatternsFilter {
var patterns []dataprovider.PatternsFilter
func parsePatternsFilesFilters() []sdk.PatternsFilter {
var patterns []sdk.PatternsFilter
for _, val := range portableAllowedPatterns {
p, exts := getPatternsFilterValues(strings.TrimSpace(val))
if p != "" {
patterns = append(patterns, dataprovider.PatternsFilter{
patterns = append(patterns, sdk.PatternsFilter{
Path: path.Clean(p),
AllowedPatterns: exts,
DeniedPatterns: []string{},
@@ -359,7 +426,7 @@ func parsePatternsFilesFilters() []dataprovider.PatternsFilter {
}
}
if !found {
patterns = append(patterns, dataprovider.PatternsFilter{
patterns = append(patterns, sdk.PatternsFilter{
Path: path.Clean(p),
AllowedPatterns: []string{},
DeniedPatterns: exts,
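The --fs-provider flag now takes provider names and sdk.GetProviderByName resolves them; the legacy numeric values listed in the help text map to the same constants. A sketch printing the mapping (formatting with %d assumes the sdk provider type stays integer-based, as the current sdk defines it):

package main

import (
	"fmt"

	"github.com/sftpgo/sdk"
)

func main() {
	for _, name := range []string{"osfs", "s3fs", "gcsfs", "azblobfs", "cryptfs", "sftpfs"} {
		// resolves the new string values used by --fs-provider
		fmt.Printf("%-9s => provider %d\n", name, sdk.GetProviderByName(name))
	}
}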

View File

@@ -1,8 +1,23 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//go:build noportable
// +build noportable
package cmd
import "github.com/drakkan/sftpgo/version"
import "github.com/drakkan/sftpgo/v2/version"
func init() {
version.AddFeature("-portable")

View File

@@ -1,3 +1,17 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
import (
@@ -6,7 +20,7 @@ import (
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/service"
"github.com/drakkan/sftpgo/v2/service"
)
var (

cmd/resetprovider.go Normal file
View File

@@ -0,0 +1,89 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
import (
"bufio"
"os"
"strings"
"github.com/rs/zerolog"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/drakkan/sftpgo/v2/config"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
var (
resetProviderForce bool
resetProviderCmd = &cobra.Command{
Use: "resetprovider",
Short: "Reset the configured provider, any data will be lost",
Long: `This command reads the data provider connection details from the specified
configuration file and resets the provider by deleting all data and schemas.
This command is not supported for the memory provider.
Please take a look at the usage below to customize the options.`,
Run: func(cmd *cobra.Command, args []string) {
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
configDir = util.CleanDirInput(configDir)
err := config.LoadConfig(configDir, configFile)
if err != nil {
logger.WarnToConsole("Unable to initialize data provider, config load error: %v", err)
os.Exit(1)
}
kmsConfig := config.GetKMSConfig()
err = kmsConfig.Initialize()
if err != nil {
logger.ErrorToConsole("unable to initialize KMS: %v", err)
os.Exit(1)
}
providerConf := config.GetProviderConf()
if !resetProviderForce {
logger.WarnToConsole("You are about to delete all the SFTPGo data for provider %#v, config file: %#v",
providerConf.Driver, viper.ConfigFileUsed())
logger.WarnToConsole("Are you sure? (Y/n)")
reader := bufio.NewReader(os.Stdin)
answer, err := reader.ReadString('\n')
if err != nil {
logger.ErrorToConsole("unable to read your answer: %v", err)
os.Exit(1)
}
if strings.ToUpper(strings.TrimSpace(answer)) != "Y" {
logger.InfoToConsole("command aborted")
os.Exit(1)
}
}
logger.InfoToConsole("Resetting provider: %#v, config file: %#v", providerConf.Driver, viper.ConfigFileUsed())
err = dataprovider.ResetDatabase(providerConf, configDir)
if err != nil {
logger.WarnToConsole("Error resetting provider: %v", err)
os.Exit(1)
}
logger.InfoToConsole("Tha data provider was successfully reset")
},
}
)
func init() {
addConfigFlags(resetProviderCmd)
resetProviderCmd.Flags().BoolVar(&resetProviderForce, "force", false, `reset the provider without asking for confirmation`)
rootCmd.AddCommand(resetProviderCmd)
}
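Unless --force is set, resetprovider reads one line from stdin and proceeds only on a "Y"/"y" answer. The same guard as a standalone sketch:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// confirm returns true only if the user answers Y/y, mirroring the
// resetprovider prompt above.
func confirm(prompt string) bool {
	fmt.Println(prompt + " (Y/n)")
	answer, err := bufio.NewReader(os.Stdin).ReadString('\n')
	if err != nil {
		return false
	}
	return strings.ToUpper(strings.TrimSpace(answer)) == "Y"
}

func main() {
	if !confirm("You are about to delete all the SFTPGo data. Are you sure?") {
		fmt.Println("command aborted")
		os.Exit(1)
	}
	fmt.Println("resetting provider...")
}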

View File

@@ -1,3 +1,17 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
import (
@@ -7,10 +21,10 @@ import (
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/drakkan/sftpgo/config"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/v2/config"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
var (
@@ -26,11 +40,11 @@ Please take a look at the usage below to customize the options.`,
Run: func(cmd *cobra.Command, args []string) {
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
if revertProviderTargetVersion != 8 {
logger.WarnToConsole("Unsupported target version, 8 is the only supported one")
if revertProviderTargetVersion != 15 {
logger.WarnToConsole("Unsupported target version, 15 is the only supported one")
os.Exit(1)
}
configDir = utils.CleanDirInput(configDir)
configDir = util.CleanDirInput(configDir)
err := config.LoadConfig(configDir, configFile)
if err != nil {
logger.WarnToConsole("Unable to initialize data provider, config load error: %v", err)
@@ -57,8 +71,7 @@ Please take a look at the usage below to customize the options.`,
func init() {
addConfigFlags(revertProviderCmd)
revertProviderCmd.Flags().IntVar(&revertProviderTargetVersion, "to-version", 0, `8 means the version supported in v2.0.x`)
revertProviderCmd.MarkFlagRequired("to-version") //nolint:errcheck
revertProviderCmd.Flags().IntVar(&revertProviderTargetVersion, "to-version", 15, `15 means the version supported in v2.2.x`)
rootCmd.AddCommand(revertProviderCmd)
}

View File

@@ -1,3 +1,17 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// Package cmd provides Command Line Interface support
package cmd
@@ -8,7 +22,7 @@ import (
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/drakkan/sftpgo/version"
"github.com/drakkan/sftpgo/v2/version"
)
const (
@@ -28,6 +42,8 @@ const (
logCompressKey = "log_compress"
logVerboseFlag = "log-verbose"
logVerboseKey = "log_verbose"
logUTCTimeFlag = "log-utc-time"
logUTCTimeKey = "log_utc_time"
loadDataFromFlag = "loaddata-from"
loadDataFromKey = "loaddata_from"
loadDataModeFlag = "loaddata-mode"
@@ -44,6 +60,7 @@ const (
defaultLogMaxAge = 28
defaultLogCompress = false
defaultLogVerbose = true
defaultLogUTCTime = false
defaultLoadDataFrom = ""
defaultLoadDataMode = 1
defaultLoadDataQuotaScan = 0
@@ -59,10 +76,13 @@ var (
logMaxAge int
logCompress bool
logVerbose bool
logUTCTime bool
loadDataFrom string
loadDataMode int
loadDataQuotaScan int
loadDataClean bool
// used if awscontainer build tag is enabled
disableAWSInstallationCode bool
rootCmd = &cobra.Command{
Use: "sftpgo",
@@ -71,6 +91,7 @@ var (
)
func init() {
rootCmd.CompletionOptions.DisableDefaultCmd = true
rootCmd.Flags().BoolP("version", "v", false, "")
rootCmd.Version = version.GetAsString()
rootCmd.SetVersionTemplate(`{{printf "SFTPGo "}}{{printf "%s" .Version}}
@@ -121,6 +142,43 @@ env var too.`)
viper.BindPFlag(configFileKey, cmd.Flags().Lookup(configFileFlag)) //nolint:errcheck
}
func addBaseLoadDataFlags(cmd *cobra.Command) {
viper.SetDefault(loadDataFromKey, defaultLoadDataFrom)
viper.BindEnv(loadDataFromKey, "SFTPGO_LOADDATA_FROM") //nolint:errcheck
cmd.Flags().StringVar(&loadDataFrom, loadDataFromFlag, viper.GetString(loadDataFromKey),
`Load users and folders from this file.
The file must be specified as absolute path
and it must contain a backup obtained using
the "dumpdata" REST API or compatible content.
This flag can be set using SFTPGO_LOADDATA_FROM
env var too.
`)
viper.BindPFlag(loadDataFromKey, cmd.Flags().Lookup(loadDataFromFlag)) //nolint:errcheck
viper.SetDefault(loadDataModeKey, defaultLoadDataMode)
viper.BindEnv(loadDataModeKey, "SFTPGO_LOADDATA_MODE") //nolint:errcheck
cmd.Flags().IntVar(&loadDataMode, loadDataModeFlag, viper.GetInt(loadDataModeKey),
`Restore mode for data to load:
0 - new users are added, existing users are
updated
1 - New users are added, existing users are
not modified
This flag can be set using SFTPGO_LOADDATA_MODE
env var too.
`)
viper.BindPFlag(loadDataModeKey, cmd.Flags().Lookup(loadDataModeFlag)) //nolint:errcheck
viper.SetDefault(loadDataCleanKey, defaultLoadDataClean)
viper.BindEnv(loadDataCleanKey, "SFTPGO_LOADDATA_CLEAN") //nolint:errcheck
cmd.Flags().BoolVar(&loadDataClean, loadDataCleanFlag, viper.GetBool(loadDataCleanKey),
`Determine if the loaddata-from file should
be removed after a successful load. This flag
can be set using SFTPGO_LOADDATA_CLEAN env var
too. (default "false")
`)
viper.BindPFlag(loadDataCleanKey, cmd.Flags().Lookup(loadDataCleanFlag)) //nolint:errcheck
}
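addBaseLoadDataFlags factors the shared loaddata flags out of addServeFlags so that initprovider and the serve commands register one identical set. A sketch of the same register-once, attach-everywhere pattern with illustrative flags:

package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

// addSharedFlags attaches the same flag set to any command that needs it.
func addSharedFlags(cmd *cobra.Command) {
	cmd.Flags().String("loaddata-from", "", "load users and folders from this file")
	cmd.Flags().Int("loaddata-mode", 1, "restore mode")
	cmd.Flags().Bool("loaddata-clean", false, "remove the file after a successful load")
}

func main() {
	root := &cobra.Command{Use: "demo"}
	for _, c := range []*cobra.Command{
		{Use: "initprovider", Run: func(*cobra.Command, []string) { fmt.Println("init") }},
		{Use: "serve", Run: func(*cobra.Command, []string) { fmt.Println("serve") }},
	} {
		addSharedFlags(c)
		root.AddCommand(c)
	}
	root.Execute() //nolint:errcheck
}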
func addServeFlags(cmd *cobra.Command) {
addConfigFlags(cmd)
@@ -179,30 +237,15 @@ using SFTPGO_LOG_VERBOSE env var too.
`)
viper.BindPFlag(logVerboseKey, cmd.Flags().Lookup(logVerboseFlag)) //nolint:errcheck
viper.SetDefault(loadDataFromKey, defaultLoadDataFrom)
viper.BindEnv(loadDataFromKey, "SFTPGO_LOADDATA_FROM") //nolint:errcheck
cmd.Flags().StringVar(&loadDataFrom, loadDataFromFlag, viper.GetString(loadDataFromKey),
`Load users and folders from this file.
The file must be specified as absolute path
and it must contain a backup obtained using
the "dumpdata" REST API or compatible content.
This flag can be set using SFTPGO_LOADDATA_FROM
env var too.
viper.SetDefault(logUTCTimeKey, defaultLogUTCTime)
viper.BindEnv(logUTCTimeKey, "SFTPGO_LOG_UTC_TIME") //nolint:errcheck
cmd.Flags().BoolVar(&logUTCTime, logUTCTimeFlag, viper.GetBool(logUTCTimeKey),
`Use UTC time for logging. This flag can be set
using SFTPGO_LOG_UTC_TIME env var too.
`)
viper.BindPFlag(loadDataFromKey, cmd.Flags().Lookup(loadDataFromFlag)) //nolint:errcheck
viper.BindPFlag(logUTCTimeKey, cmd.Flags().Lookup(logUTCTimeFlag)) //nolint:errcheck
viper.SetDefault(loadDataModeKey, defaultLoadDataMode)
viper.BindEnv(loadDataModeKey, "SFTPGO_LOADDATA_MODE") //nolint:errcheck
cmd.Flags().IntVar(&loadDataMode, loadDataModeFlag, viper.GetInt(loadDataModeKey),
`Restore mode for data to load:
0 - new users are added, existing users are
updated
1 - New users are added, existing users are
not modified
This flag can be set using SFTPGO_LOADDATA_MODE
env var too.
`)
viper.BindPFlag(loadDataModeKey, cmd.Flags().Lookup(loadDataModeFlag)) //nolint:errcheck
addBaseLoadDataFlags(cmd)
viper.SetDefault(loadDataQuotaScanKey, defaultLoadDataQuotaScan)
viper.BindEnv(loadDataQuotaScanKey, "SFTPGO_LOADDATA_QUOTA_SCAN") //nolint:errcheck
@@ -215,14 +258,4 @@ This flag can be set using SFTPGO_LOADDATA_QUOTA_SCAN
env var too.
(default 0)`)
viper.BindPFlag(loadDataQuotaScanKey, cmd.Flags().Lookup(loadDataQuotaScanFlag)) //nolint:errcheck
viper.SetDefault(loadDataCleanKey, defaultLoadDataClean)
viper.BindEnv(loadDataCleanKey, "SFTPGO_LOADDATA_CLEAN") //nolint:errcheck
cmd.Flags().BoolVar(&loadDataClean, loadDataCleanFlag, viper.GetBool(loadDataCleanKey),
`Determine if the loaddata-from file should
be removed after a successful load. This flag
can be set using SFTPGO_LOADDATA_CLEAN env var
too. (default "false")
`)
viper.BindPFlag(logCompressKey, cmd.Flags().Lookup(logCompressFlag)) //nolint:errcheck
}

View File

@@ -1,3 +1,17 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
import (
@@ -6,7 +20,7 @@ import (
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/service"
"github.com/drakkan/sftpgo/v2/service"
)
var (

View File

@@ -1,3 +1,17 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
import (
@@ -5,14 +19,14 @@ import (
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/service"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/v2/service"
"github.com/drakkan/sftpgo/v2/util"
)
var (
serveCmd = &cobra.Command{
Use: "serve",
Short: "Start the SFTP Server",
Short: "Start the SFTPGo service",
Long: `To start the SFTPGo with the default values for the command line flags simply
use:
@@ -21,7 +35,7 @@ $ sftpgo serve
Please take a look at the usage below to customize the startup options`,
Run: func(cmd *cobra.Command, args []string) {
service := service.Service{
ConfigDir: utils.CleanDirInput(configDir),
ConfigDir: util.CleanDirInput(configDir),
ConfigFile: configFile,
LogFilePath: logFilePath,
LogMaxSize: logMaxSize,
@@ -29,13 +43,14 @@ Please take a look at the usage below to customize the startup options`,
LogMaxAge: logMaxAge,
LogCompress: logCompress,
LogVerbose: logVerbose,
LogUTCTime: logUTCTime,
LoadDataFrom: loadDataFrom,
LoadDataMode: loadDataMode,
LoadDataQuotaScan: loadDataQuotaScan,
LoadDataClean: loadDataClean,
Shutdown: make(chan bool),
}
if err := service.Start(); err == nil {
if err := service.Start(disableAWSInstallationCode); err == nil {
service.Wait()
if service.Error == nil {
os.Exit(0)
@@ -49,4 +64,5 @@ Please take a look at the usage below to customize the startup options`,
func init() {
rootCmd.AddCommand(serveCmd)
addServeFlags(serveCmd)
addAWSContainerFlags(serveCmd)
}

View File

@@ -1,3 +1,17 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
import (
@@ -7,7 +21,7 @@ import (
var (
serviceCmd = &cobra.Command{
Use: "service",
Short: "Manage SFTPGo Windows Service",
Short: "Manage the SFTPGo Windows Service",
}
)

cmd/smtptest.go Normal file
View File

@@ -0,0 +1,68 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
import (
"os"
"github.com/rs/zerolog"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/config"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/smtp"
"github.com/drakkan/sftpgo/v2/util"
)
var (
smtpTestRecipient string
smtpTestCmd = &cobra.Command{
Use: "smtptest",
Short: "Test the SMTP configuration",
Long: `SFTPGo will try to send a test email to the specified recipient.
If the SMTP configuration is correct you should receive this email.`,
Run: func(cmd *cobra.Command, args []string) {
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
configDir = util.CleanDirInput(configDir)
err := config.LoadConfig(configDir, configFile)
if err != nil {
logger.WarnToConsole("Unable to initialize data provider, config load error: %v", err)
os.Exit(1)
}
smtpConfig := config.GetSMTPConfig()
err = smtpConfig.Initialize(configDir)
if err != nil {
logger.ErrorToConsole("unable to initialize SMTP configuration: %v", err)
os.Exit(1)
}
err = smtp.SendEmail(smtpTestRecipient, "SFTPGo - Testing Email Settings", "It appears your SFTPGo email is setup correctly!",
smtp.EmailContentTypeTextPlain)
if err != nil {
logger.WarnToConsole("Error sending email: %v", err)
os.Exit(1)
}
logger.InfoToConsole("No errors were reported while sending an email. Please check your inbox to make sure.")
},
}
)
func init() {
addConfigFlags(smtpTestCmd)
smtpTestCmd.Flags().StringVar(&smtpTestRecipient, "recipient", "", `email address to send the test e-mail to`)
smtpTestCmd.MarkFlagRequired("recipient") //nolint:errcheck
rootCmd.AddCommand(smtpTestCmd)
}
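smtptest wraps three calls: load the config, initialize the SMTP settings, send one message. A sketch using the same calls; the recipient address is illustrative and the SMTP section of the loaded config must be valid:

package main

import (
	"log"

	"github.com/drakkan/sftpgo/v2/config"
	"github.com/drakkan/sftpgo/v2/smtp"
)

func main() {
	configDir := "." // assumption: config file in the current directory
	if err := config.LoadConfig(configDir, ""); err != nil {
		log.Fatalf("config load error: %v", err)
	}
	smtpConfig := config.GetSMTPConfig()
	if err := smtpConfig.Initialize(configDir); err != nil {
		log.Fatalf("unable to initialize SMTP configuration: %v", err)
	}
	err := smtp.SendEmail("admin@example.com", "SFTPGo - Testing Email Settings",
		"test body", smtp.EmailContentTypeTextPlain)
	if err != nil {
		log.Fatalf("error sending email: %v", err)
	}
}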

View File

@@ -1,3 +1,17 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
import (
@@ -7,17 +21,17 @@ import (
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/service"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/v2/service"
"github.com/drakkan/sftpgo/v2/util"
)
var (
startCmd = &cobra.Command{
Use: "start",
Short: "Start SFTPGo Windows Service",
Short: "Start the SFTPGo Windows Service",
Run: func(cmd *cobra.Command, args []string) {
configDir = utils.CleanDirInput(configDir)
if !filepath.IsAbs(logFilePath) && utils.IsFileInputValid(logFilePath) {
configDir = util.CleanDirInput(configDir)
if !filepath.IsAbs(logFilePath) && util.IsFileInputValid(logFilePath) {
logFilePath = filepath.Join(configDir, logFilePath)
}
s := service.Service{
@@ -29,6 +43,7 @@ var (
LogMaxAge: logMaxAge,
LogCompress: logCompress,
LogVerbose: logVerbose,
LogUTCTime: logUTCTime,
Shutdown: make(chan bool),
}
winService := service.WindowsService{

View File

@@ -1,3 +1,17 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
import (
@@ -11,12 +25,13 @@ import (
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/drakkan/sftpgo/common"
"github.com/drakkan/sftpgo/config"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/sftpd"
"github.com/drakkan/sftpgo/version"
"github.com/drakkan/sftpgo/v2/common"
"github.com/drakkan/sftpgo/v2/config"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/plugin"
"github.com/drakkan/sftpgo/v2/sftpd"
"github.com/drakkan/sftpgo/v2/version"
)
var (
@@ -25,7 +40,7 @@ var (
baseHomeDir = ""
subsystemCmd = &cobra.Command{
Use: "startsubsys",
Short: "Use SFTPGo as SFTP file transfer subsystem",
Short: "Use sftpgo as SFTP file transfer subsystem",
Long: `In this mode SFTPGo speaks the server side of SFTP protocol to stdout and
expects client requests from stdin.
This mode is not intended to be called directly, but from sshd using the
@@ -43,6 +58,7 @@ Command-line flags should be specified in the Subsystem declaration.
if !logVerbose {
logLevel = zerolog.InfoLevel
}
logger.SetLogTime(logUTCTime)
if logJournalD {
logger.InitJournalDLogger(logLevel)
} else {
@@ -62,11 +78,12 @@ Command-line flags should be specified in the Subsystem declaration.
logger.Error(logSender, connectionID, "unable to load configuration: %v", err)
os.Exit(1)
}
dataProviderConf := config.GetProviderConf()
commonConfig := config.GetCommonConfig()
// idle connections are managed externally
commonConfig.IdleTimeout = 0
config.SetCommonConfig(commonConfig)
if err := common.Initialize(config.GetCommonConfig()); err != nil {
if err := common.Initialize(config.GetCommonConfig(), dataProviderConf.GetShared()); err != nil {
logger.Error(logSender, connectionID, "%v", err)
os.Exit(1)
}
@@ -75,13 +92,27 @@ Command-line flags should be specified in the Subsystem declaration.
logger.Error(logSender, connectionID, "unable to initialize KMS: %v", err)
os.Exit(1)
}
dataProviderConf := config.GetProviderConf()
mfaConfig := config.GetMFAConfig()
err = mfaConfig.Initialize()
if err != nil {
logger.Error(logSender, "", "unable to initialize MFA: %v", err)
os.Exit(1)
}
if err := plugin.Initialize(config.GetPluginsConfig(), logVerbose); err != nil {
logger.Error(logSender, connectionID, "unable to initialize plugin system: %v", err)
os.Exit(1)
}
smtpConfig := config.GetSMTPConfig()
err = smtpConfig.Initialize(configDir)
if err != nil {
logger.Error(logSender, connectionID, "unable to initialize SMTP configuration: %v", err)
os.Exit(1)
}
if dataProviderConf.Driver == dataprovider.SQLiteDataProviderName || dataProviderConf.Driver == dataprovider.BoltDataProviderName {
logger.Debug(logSender, connectionID, "data provider %#v not supported in subsystem mode, using %#v provider",
dataProviderConf.Driver, dataprovider.MemoryDataProviderName)
dataProviderConf.Driver = dataprovider.MemoryDataProviderName
dataProviderConf.Name = ""
dataProviderConf.PreferDatabaseCredentials = true
}
config.SetProviderConf(dataProviderConf)
err = dataprovider.Initialize(dataProviderConf, configDir, false)
@@ -94,12 +125,17 @@ Command-line flags should be specified in the Subsystem declaration.
logger.Error(logSender, connectionID, "unable to initialize http client: %v", err)
os.Exit(1)
}
commandConfig := config.GetCommandConfig()
if err := commandConfig.Initialize(); err != nil {
logger.Error(logSender, connectionID, "unable to initialize commands configuration: %v", err)
os.Exit(1)
}
user, err := dataprovider.UserExists(username)
if err == nil {
if user.HomeDir != filepath.Clean(homedir) && !preserveHomeDir {
// update the user
user.HomeDir = filepath.Clean(homedir)
err = dataprovider.UpdateUser(&user)
err = dataprovider.UpdateUser(&user, dataprovider.ActionExecutorSystem, "")
if err != nil {
logger.Error(logSender, connectionID, "unable to update user %#v: %v", username, err)
os.Exit(1)
@@ -116,18 +152,24 @@ Command-line flags should be specified in the Subsystem declaration.
user.Password = connectionID
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
err = dataprovider.AddUser(&user)
err = dataprovider.AddUser(&user, dataprovider.ActionExecutorSystem, "")
if err != nil {
logger.Error(logSender, connectionID, "unable to add user %#v: %v", username, err)
os.Exit(1)
}
}
err = user.LoadAndApplyGroupSettings()
if err != nil {
logger.Error(logSender, connectionID, "unable to apply group settings for user %#v: %v", username, err)
os.Exit(1)
}
err = sftpd.ServeSubSystemConnection(&user, connectionID, os.Stdin, os.Stdout)
if err != nil && err != io.EOF {
logger.Warn(logSender, connectionID, "serving subsystem finished with error: %v", err)
os.Exit(1)
}
logger.Info(logSender, connectionID, "serving subsystem finished")
plugin.Handler.Cleanup()
os.Exit(0)
},
}
@@ -162,5 +204,13 @@ using SFTPGO_LOG_VERBOSE env var too.
`)
viper.BindPFlag(logVerboseKey, subsystemCmd.Flags().Lookup(logVerboseFlag)) //nolint:errcheck
viper.SetDefault(logUTCTimeKey, defaultLogUTCTime)
viper.BindEnv(logUTCTimeKey, "SFTPGO_LOG_UTC_TIME") //nolint:errcheck
subsystemCmd.Flags().BoolVar(&logUTCTime, logUTCTimeFlag, viper.GetBool(logUTCTimeKey),
`Use UTC time for logging. This flag can be set
using SFTPGO_LOG_UTC_TIME env var too.
`)
viper.BindPFlag(logUTCTimeKey, subsystemCmd.Flags().Lookup(logUTCTimeFlag)) //nolint:errcheck
rootCmd.AddCommand(subsystemCmd)
}
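At its core the subsystem mode resolves one user and hands stdin/stdout to the SFTP code, since sshd pipes the client through those descriptors. A stripped-down sketch of that final step; the username is illustrative and it assumes the data provider and common packages were initialized first, as the full command above does:

package main

import (
	"io"
	"log"
	"os"

	"github.com/drakkan/sftpgo/v2/dataprovider"
	"github.com/drakkan/sftpgo/v2/sftpd"
)

func main() {
	// assumes dataprovider.Initialize and common.Initialize already ran
	user, err := dataprovider.UserExists("nicola") // illustrative username
	if err != nil {
		log.Fatalf("unable to find user: %v", err)
	}
	// sshd connects the client to our stdin/stdout in subsystem mode
	err = sftpd.ServeSubSystemConnection(&user, "connection-id", os.Stdin, os.Stdout)
	if err != nil && err != io.EOF {
		log.Fatalf("serving subsystem finished with error: %v", err)
	}
}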

View File

@@ -1,3 +1,17 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
import (
@@ -6,7 +20,7 @@ import (
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/service"
"github.com/drakkan/sftpgo/v2/service"
)
var (

View File

@@ -1,3 +1,17 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
import (
@@ -6,13 +20,13 @@ import (
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/service"
"github.com/drakkan/sftpgo/v2/service"
)
var (
stopCmd = &cobra.Command{
Use: "stop",
Short: "Stop SFTPGo Windows Service",
Short: "Stop the SFTPGo Windows Service",
Run: func(cmd *cobra.Command, args []string) {
s := service.WindowsService{
Service: service.Service{

View File

@@ -1,3 +1,17 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package cmd
import (
@@ -6,13 +20,13 @@ import (
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/service"
"github.com/drakkan/sftpgo/v2/service"
)
var (
uninstallCmd = &cobra.Command{
Use: "uninstall",
Short: "Uninstall SFTPGo Windows Service",
Short: "Uninstall the SFTPGo Windows Service",
Run: func(cmd *cobra.Command, args []string) {
s := service.WindowsService{
Service: service.Service{

command/command.go Normal file (114 lines)
View File

@@ -0,0 +1,114 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package command
import (
"fmt"
"os"
"strings"
"time"
)
const (
minTimeout = 1
maxTimeout = 300
defaultTimeout = 30
)
var (
config Config
)
// Command defines the configuration for a specific command
type Command struct {
// Path is the command path as defined in the hook configuration
Path string `json:"path" mapstructure:"path"`
// Timeout specifies a time limit, in seconds, for the command execution.
// This value overrides the global timeout if set.
Timeout int `json:"timeout" mapstructure:"timeout"`
// Env defines additional environment variables for the command.
// Each entry is of the form "key=value".
// These values are added to the global environment variables, if any.
// Do not use variables with the SFTPGO_ prefix to avoid conflicts with env
// vars that SFTPGo sets.
Env []string `json:"env" mapstructure:"env"`
}
// Config defines the configuration for external commands such as
// program-based hooks
type Config struct {
// Timeout specifies a global time limit, in seconds, for external command execution
Timeout int `json:"timeout" mapstructure:"timeout"`
// Env defines additional environment variables for all commands.
// Each entry is of the form "key=value".
// Do not use variables with the SFTPGO_ prefix to avoid conflicts with env
// vars that SFTPGo sets.
Env []string `json:"env" mapstructure:"env"`
// Commands defines configuration for specific commands
Commands []Command `json:"commands" mapstructure:"commands"`
}
func init() {
config = Config{
Timeout: defaultTimeout,
}
}
// Initialize configures commands
func (c Config) Initialize() error {
if c.Timeout < minTimeout || c.Timeout > maxTimeout {
return fmt.Errorf("invalid timeout %v", c.Timeout)
}
for _, env := range c.Env {
if len(strings.Split(env, "=")) != 2 {
return fmt.Errorf("invalid env var %#v", env)
}
}
for idx, cmd := range c.Commands {
if cmd.Path == "" {
return fmt.Errorf("invalid path %#v", cmd.Path)
}
if cmd.Timeout == 0 {
c.Commands[idx].Timeout = c.Timeout
} else {
if cmd.Timeout < minTimeout || cmd.Timeout > maxTimeout {
return fmt.Errorf("invalid timeout %v for command %#v", cmd.Timeout, cmd.Path)
}
}
for _, env := range cmd.Env {
if len(strings.Split(env, "=")) != 2 {
return fmt.Errorf("invalid env var %#v for command %#v", env, cmd.Path)
}
}
}
config = c
return nil
}
// GetConfig returns the configuration for the specified command
func GetConfig(command string) (time.Duration, []string) {
env := os.Environ()
timeout := time.Duration(config.Timeout) * time.Second
env = append(env, config.Env...)
for _, cmd := range config.Commands {
if cmd.Path == command {
timeout = time.Duration(cmd.Timeout) * time.Second
env = append(env, cmd.Env...)
break
}
}
return timeout, env
}
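For illustration, a minimal usage sketch of the new package (not part of the diff; the hook path is a placeholder), mirroring how the hook executors below consume it:

package main

import (
	"context"
	"fmt"
	"os/exec"

	"github.com/drakkan/sftpgo/v2/command"
)

func runHook(hookPath string) error {
	// GetConfig returns the per-command timeout if one is configured for
	// hookPath, otherwise the global one, together with os.Environ()
	// merged with the global and per-command Env entries.
	timeout, env := command.GetConfig(hookPath)
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	cmd := exec.CommandContext(ctx, hookPath)
	cmd.Env = env
	return cmd.Run()
}

func main() {
	// "/usr/local/bin/myhook" is a hypothetical path
	if err := runHook("/usr/local/bin/myhook"); err != nil {
		fmt.Println("hook failed:", err)
	}
}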

command/command_test.go Normal file (119 lines)
View File

@@ -0,0 +1,119 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package command
import (
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestCommandConfig(t *testing.T) {
require.Equal(t, defaultTimeout, config.Timeout)
cfg := Config{
Timeout: 10,
Env: []string{"a=b"},
}
err := cfg.Initialize()
require.NoError(t, err)
assert.Equal(t, cfg.Timeout, config.Timeout)
assert.Equal(t, cfg.Env, config.Env)
assert.Len(t, cfg.Commands, 0)
timeout, env := GetConfig("cmd")
assert.Equal(t, time.Duration(config.Timeout)*time.Second, timeout)
assert.Contains(t, env, "a=b")
cfg.Commands = []Command{
{
Path: "cmd1",
Timeout: 30,
Env: []string{"c=d"},
},
{
Path: "cmd2",
Timeout: 0,
Env: []string{"e=f"},
},
}
err = cfg.Initialize()
require.NoError(t, err)
assert.Equal(t, cfg.Timeout, config.Timeout)
assert.Equal(t, cfg.Env, config.Env)
if assert.Len(t, config.Commands, 2) {
assert.Equal(t, cfg.Commands[0].Path, config.Commands[0].Path)
assert.Equal(t, cfg.Commands[0].Timeout, config.Commands[0].Timeout)
assert.Equal(t, cfg.Commands[0].Env, config.Commands[0].Env)
assert.Equal(t, cfg.Commands[1].Path, config.Commands[1].Path)
assert.Equal(t, cfg.Timeout, config.Commands[1].Timeout)
assert.Equal(t, cfg.Commands[1].Env, config.Commands[1].Env)
}
timeout, env = GetConfig("cmd1")
assert.Equal(t, time.Duration(config.Commands[0].Timeout)*time.Second, timeout)
assert.Contains(t, env, "a=b")
assert.Contains(t, env, "c=d")
assert.NotContains(t, env, "e=f")
timeout, env = GetConfig("cmd2")
assert.Equal(t, time.Duration(config.Timeout)*time.Second, timeout)
assert.Contains(t, env, "a=b")
assert.NotContains(t, env, "c=d")
assert.Contains(t, env, "e=f")
}
func TestConfigErrors(t *testing.T) {
c := Config{}
err := c.Initialize()
if assert.Error(t, err) {
assert.Contains(t, err.Error(), "invalid timeout")
}
c.Timeout = 10
c.Env = []string{"a"}
err = c.Initialize()
if assert.Error(t, err) {
assert.Contains(t, err.Error(), "invalid env var")
}
c.Env = nil
c.Commands = []Command{
{
Path: "",
},
}
err = c.Initialize()
if assert.Error(t, err) {
assert.Contains(t, err.Error(), "invalid path")
}
c.Commands = []Command{
{
Path: "path",
Timeout: 10000,
},
}
err = c.Initialize()
if assert.Error(t, err) {
assert.Contains(t, err.Error(), "invalid timeout")
}
c.Commands = []Command{
{
Path: "path",
Timeout: 30,
Env: []string{"b"},
},
}
err = c.Initialize()
if assert.Error(t, err) {
assert.Contains(t, err.Error(), "invalid env var")
}
}

View File

@@ -1,3 +1,17 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
@@ -8,17 +22,20 @@ import (
"fmt"
"net/http"
"net/url"
"os"
"os/exec"
"path/filepath"
"strings"
"time"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/httpclient"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/vfs"
"github.com/sftpgo/sdk"
"github.com/sftpgo/sdk/plugin/notifier"
"github.com/drakkan/sftpgo/v2/command"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/httpclient"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/plugin"
"github.com/drakkan/sftpgo/v2/util"
)
var (
@@ -49,127 +66,144 @@ func InitializeActionHandler(handler ActionHandler) {
actionHandler = handler
}
// ExecutePreAction executes a pre-* action and returns the result
func ExecutePreAction(user *dataprovider.User, operation, filePath, virtualPath, protocol string, fileSize int64, openFlags int) error {
if !utils.IsStringInSlice(operation, Config.Actions.ExecuteOn) {
// for pre-delete we execute the internal handling on error, so we must return errUnconfiguredAction.
// Other pre action will deny the operation on error so if we have no configuration we must return
// a nil error
if operation == operationPreDelete {
return errUnconfiguredAction
}
return nil
func handleUnconfiguredPreAction(operation string) error {
// for pre-delete we execute the internal handling on error, so we must return errUnconfiguredAction.
// Other pre actions will deny the operation on error, so if we have no
// configuration we must return a nil error
if operation == operationPreDelete {
return errUnconfiguredAction
}
notification := newActionNotification(user, operation, filePath, virtualPath, "", "", protocol, fileSize, openFlags, nil)
return actionHandler.Handle(notification)
return nil
}
// ExecutePreAction executes a pre-* action and returns the result
func ExecutePreAction(conn *BaseConnection, operation, filePath, virtualPath string, fileSize int64, openFlags int) error {
var event *notifier.FsEvent
hasNotifiersPlugin := plugin.Handler.HasNotifiers()
hasHook := util.Contains(Config.Actions.ExecuteOn, operation)
if !hasHook && !hasNotifiersPlugin {
return handleUnconfiguredPreAction(operation)
}
event = newActionNotification(&conn.User, operation, filePath, virtualPath, "", "", "",
conn.protocol, conn.GetRemoteIP(), conn.ID, fileSize, openFlags, nil)
if hasNotifiersPlugin {
plugin.Handler.NotifyFsEvent(event)
}
if !hasHook {
return handleUnconfiguredPreAction(operation)
}
return actionHandler.Handle(event)
}
// ExecuteActionNotification executes the defined hook, if any, for the specified action
func ExecuteActionNotification(user *dataprovider.User, operation, filePath, virtualPath, target, sshCmd, protocol string, fileSize int64, err error) {
notification := newActionNotification(user, operation, filePath, virtualPath, target, sshCmd, protocol, fileSize, 0, err)
if utils.IsStringInSlice(operation, Config.Actions.ExecuteSync) {
actionHandler.Handle(notification) //nolint:errcheck
func ExecuteActionNotification(conn *BaseConnection, operation, filePath, virtualPath, target, virtualTarget, sshCmd string,
fileSize int64, err error,
) {
hasNotifiersPlugin := plugin.Handler.HasNotifiers()
hasHook := util.Contains(Config.Actions.ExecuteOn, operation)
if !hasHook && !hasNotifiersPlugin {
return
}
notification := newActionNotification(&conn.User, operation, filePath, virtualPath, target, virtualTarget, sshCmd,
conn.protocol, conn.GetRemoteIP(), conn.ID, fileSize, 0, err)
if hasNotifiersPlugin {
plugin.Handler.NotifyFsEvent(notification)
}
go actionHandler.Handle(notification) //nolint:errcheck
if hasHook {
if util.Contains(Config.Actions.ExecuteSync, operation) {
actionHandler.Handle(notification) //nolint:errcheck
return
}
go actionHandler.Handle(notification) //nolint:errcheck
}
}
// ActionHandler handles a notification for a Protocol Action.
type ActionHandler interface {
Handle(notification *ActionNotification) error
}
// ActionNotification defines a notification for a Protocol Action.
type ActionNotification struct {
Action string `json:"action"`
Username string `json:"username"`
Path string `json:"path"`
TargetPath string `json:"target_path,omitempty"`
SSHCmd string `json:"ssh_cmd,omitempty"`
FileSize int64 `json:"file_size,omitempty"`
FsProvider int `json:"fs_provider"`
Bucket string `json:"bucket,omitempty"`
Endpoint string `json:"endpoint,omitempty"`
Status int `json:"status"`
Protocol string `json:"protocol"`
OpenFlags int `json:"open_flags,omitempty"`
Handle(notification *notifier.FsEvent) error
}
func newActionNotification(
user *dataprovider.User,
operation, filePath, virtualPath, target, sshCmd, protocol string,
operation, filePath, virtualPath, target, virtualTarget, sshCmd, protocol, ip, sessionID string,
fileSize int64,
openFlags int,
err error,
) *ActionNotification {
) *notifier.FsEvent {
var bucket, endpoint string
status := 1
fsConfig := user.GetFsConfigForPath(virtualPath)
switch fsConfig.Provider {
case vfs.S3FilesystemProvider:
case sdk.S3FilesystemProvider:
bucket = fsConfig.S3Config.Bucket
endpoint = fsConfig.S3Config.Endpoint
case vfs.GCSFilesystemProvider:
case sdk.GCSFilesystemProvider:
bucket = fsConfig.GCSConfig.Bucket
case vfs.AzureBlobFilesystemProvider:
case sdk.AzureBlobFilesystemProvider:
bucket = fsConfig.AzBlobConfig.Container
if fsConfig.AzBlobConfig.Endpoint != "" {
endpoint = fsConfig.AzBlobConfig.Endpoint
}
case vfs.SFTPFilesystemProvider:
case sdk.SFTPFilesystemProvider:
endpoint = fsConfig.SFTPConfig.Endpoint
}
if err == ErrQuotaExceeded {
status = 2
status = 3
} else if err != nil {
status = 0
status = 2
}
return &ActionNotification{
Action: operation,
Username: user.Username,
Path: filePath,
TargetPath: target,
SSHCmd: sshCmd,
FileSize: fileSize,
FsProvider: int(fsConfig.Provider),
Bucket: bucket,
Endpoint: endpoint,
Status: status,
Protocol: protocol,
OpenFlags: openFlags,
return &notifier.FsEvent{
Action: operation,
Username: user.Username,
Path: filePath,
TargetPath: target,
VirtualPath: virtualPath,
VirtualTargetPath: virtualTarget,
SSHCmd: sshCmd,
FileSize: fileSize,
FsProvider: int(fsConfig.Provider),
Bucket: bucket,
Endpoint: endpoint,
Status: status,
Protocol: protocol,
IP: ip,
SessionID: sessionID,
OpenFlags: openFlags,
Timestamp: time.Now().UnixNano(),
}
}
type defaultActionHandler struct{}
func (h *defaultActionHandler) Handle(notification *ActionNotification) error {
if !utils.IsStringInSlice(notification.Action, Config.Actions.ExecuteOn) {
func (h *defaultActionHandler) Handle(event *notifier.FsEvent) error {
if !util.Contains(Config.Actions.ExecuteOn, event.Action) {
return errUnconfiguredAction
}
if Config.Actions.Hook == "" {
logger.Warn(notification.Protocol, "", "Unable to send notification, no hook is defined")
logger.Warn(event.Protocol, "", "Unable to send notification, no hook is defined")
return errNoHook
}
if strings.HasPrefix(Config.Actions.Hook, "http") {
return h.handleHTTP(notification)
return h.handleHTTP(event)
}
return h.handleCommand(notification)
return h.handleCommand(event)
}
func (h *defaultActionHandler) handleHTTP(notification *ActionNotification) error {
func (h *defaultActionHandler) handleHTTP(event *notifier.FsEvent) error {
u, err := url.Parse(Config.Actions.Hook)
if err != nil {
logger.Warn(notification.Protocol, "", "Invalid hook %#v for operation %#v: %v", Config.Actions.Hook, notification.Action, err)
logger.Error(event.Protocol, "", "Invalid hook %#v for operation %#v: %v",
Config.Actions.Hook, event.Action, err)
return err
}
@@ -177,7 +211,7 @@ func (h *defaultActionHandler) handleHTTP(notification *ActionNotification) erro
respCode := 0
var b bytes.Buffer
_ = json.NewEncoder(&b).Encode(notification)
_ = json.NewEncoder(&b).Encode(event)
resp, err := httpclient.RetryablePost(Config.Actions.Hook, "application/json", &b)
if err == nil {
@@ -189,48 +223,54 @@ func (h *defaultActionHandler) handleHTTP(notification *ActionNotification) erro
}
}
logger.Debug(notification.Protocol, "", "notified operation %#v to URL: %v status code: %v, elapsed: %v err: %v",
notification.Action, u.Redacted(), respCode, time.Since(startTime), err)
logger.Debug(event.Protocol, "", "notified operation %#v to URL: %v status code: %v, elapsed: %v err: %v",
event.Action, u.Redacted(), respCode, time.Since(startTime), err)
return err
}
func (h *defaultActionHandler) handleCommand(notification *ActionNotification) error {
func (h *defaultActionHandler) handleCommand(event *notifier.FsEvent) error {
if !filepath.IsAbs(Config.Actions.Hook) {
err := fmt.Errorf("invalid notification command %#v", Config.Actions.Hook)
logger.Warn(notification.Protocol, "", "unable to execute notification command: %v", err)
logger.Warn(event.Protocol, "", "unable to execute notification command: %v", err)
return err
}
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
timeout, env := command.GetConfig(Config.Actions.Hook)
ctx, cancel := context.WithTimeout(context.Background(), timeout)
defer cancel()
cmd := exec.CommandContext(ctx, Config.Actions.Hook, notification.Action, notification.Username, notification.Path, notification.TargetPath, notification.SSHCmd)
cmd.Env = append(os.Environ(), notificationAsEnvVars(notification)...)
cmd := exec.CommandContext(ctx, Config.Actions.Hook)
cmd.Env = append(env, notificationAsEnvVars(event)...)
startTime := time.Now()
err := cmd.Run()
logger.Debug(notification.Protocol, "", "executed command %#v with arguments: %#v, %#v, %#v, %#v, %#v, elapsed: %v, error: %v",
Config.Actions.Hook, notification.Action, notification.Username, notification.Path, notification.TargetPath, notification.SSHCmd, time.Since(startTime), err)
logger.Debug(event.Protocol, "", "executed command %#v, elapsed: %v, error: %v",
Config.Actions.Hook, time.Since(startTime), err)
return err
}
func notificationAsEnvVars(notification *ActionNotification) []string {
func notificationAsEnvVars(event *notifier.FsEvent) []string {
return []string{
fmt.Sprintf("SFTPGO_ACTION=%v", notification.Action),
fmt.Sprintf("SFTPGO_ACTION_USERNAME=%v", notification.Username),
fmt.Sprintf("SFTPGO_ACTION_PATH=%v", notification.Path),
fmt.Sprintf("SFTPGO_ACTION_TARGET=%v", notification.TargetPath),
fmt.Sprintf("SFTPGO_ACTION_SSH_CMD=%v", notification.SSHCmd),
fmt.Sprintf("SFTPGO_ACTION_FILE_SIZE=%v", notification.FileSize),
fmt.Sprintf("SFTPGO_ACTION_FS_PROVIDER=%v", notification.FsProvider),
fmt.Sprintf("SFTPGO_ACTION_BUCKET=%v", notification.Bucket),
fmt.Sprintf("SFTPGO_ACTION_ENDPOINT=%v", notification.Endpoint),
fmt.Sprintf("SFTPGO_ACTION_STATUS=%v", notification.Status),
fmt.Sprintf("SFTPGO_ACTION_PROTOCOL=%v", notification.Protocol),
fmt.Sprintf("SFTPGO_ACTION_OPEN_FLAGS=%v", notification.OpenFlags),
fmt.Sprintf("SFTPGO_ACTION=%v", event.Action),
fmt.Sprintf("SFTPGO_ACTION_USERNAME=%v", event.Username),
fmt.Sprintf("SFTPGO_ACTION_PATH=%v", event.Path),
fmt.Sprintf("SFTPGO_ACTION_TARGET=%v", event.TargetPath),
fmt.Sprintf("SFTPGO_ACTION_VIRTUAL_PATH=%v", event.VirtualPath),
fmt.Sprintf("SFTPGO_ACTION_VIRTUAL_TARGET=%v", event.VirtualTargetPath),
fmt.Sprintf("SFTPGO_ACTION_SSH_CMD=%v", event.SSHCmd),
fmt.Sprintf("SFTPGO_ACTION_FILE_SIZE=%v", event.FileSize),
fmt.Sprintf("SFTPGO_ACTION_FS_PROVIDER=%v", event.FsProvider),
fmt.Sprintf("SFTPGO_ACTION_BUCKET=%v", event.Bucket),
fmt.Sprintf("SFTPGO_ACTION_ENDPOINT=%v", event.Endpoint),
fmt.Sprintf("SFTPGO_ACTION_STATUS=%v", event.Status),
fmt.Sprintf("SFTPGO_ACTION_PROTOCOL=%v", event.Protocol),
fmt.Sprintf("SFTPGO_ACTION_IP=%v", event.IP),
fmt.Sprintf("SFTPGO_ACTION_SESSION_ID=%v", event.SessionID),
fmt.Sprintf("SFTPGO_ACTION_OPEN_FLAGS=%v", event.OpenFlags),
fmt.Sprintf("SFTPGO_ACTION_TIMESTAMP=%v", event.Timestamp),
}
}
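Since the hook is now executed without positional arguments, an external hook reads everything from its environment. A minimal sketch of such a hook program (editorial, not part of the diff):

package main

import (
	"fmt"
	"os"
)

func main() {
	// All event data arrives via the SFTPGO_ACTION_* variables set by
	// notificationAsEnvVars above; the hook gets no positional arguments.
	action := os.Getenv("SFTPGO_ACTION")
	username := os.Getenv("SFTPGO_ACTION_USERNAME")
	vpath := os.Getenv("SFTPGO_ACTION_VIRTUAL_PATH")
	// per newActionNotification: 1 = success, 2 = generic error,
	// 3 = quota exceeded
	status := os.Getenv("SFTPGO_ACTION_STATUS")
	fmt.Printf("action=%s user=%s path=%s status=%s\n", action, username, vpath, status)
}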

View File

@@ -1,3 +1,17 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
@@ -9,63 +23,85 @@ import (
"runtime"
"testing"
"github.com/lithammer/shortuuid/v3"
"github.com/rs/xid"
"github.com/sftpgo/sdk"
"github.com/sftpgo/sdk/plugin/notifier"
"github.com/stretchr/testify/assert"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/vfs"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/plugin"
"github.com/drakkan/sftpgo/v2/vfs"
)
func TestNewActionNotification(t *testing.T) {
user := &dataprovider.User{
Username: "username",
BaseUser: sdk.BaseUser{
Username: "username",
},
}
user.FsConfig.Provider = vfs.LocalFilesystemProvider
user.FsConfig.Provider = sdk.LocalFilesystemProvider
user.FsConfig.S3Config = vfs.S3FsConfig{
Bucket: "s3bucket",
Endpoint: "endpoint",
BaseS3FsConfig: sdk.BaseS3FsConfig{
Bucket: "s3bucket",
Endpoint: "endpoint",
},
}
user.FsConfig.GCSConfig = vfs.GCSFsConfig{
Bucket: "gcsbucket",
BaseGCSFsConfig: sdk.BaseGCSFsConfig{
Bucket: "gcsbucket",
},
}
user.FsConfig.AzBlobConfig = vfs.AzBlobFsConfig{
Container: "azcontainer",
Endpoint: "azendpoint",
BaseAzBlobFsConfig: sdk.BaseAzBlobFsConfig{
Container: "azcontainer",
Endpoint: "azendpoint",
},
}
user.FsConfig.SFTPConfig = vfs.SFTPFsConfig{
Endpoint: "sftpendpoint",
BaseSFTPFsConfig: sdk.BaseSFTPFsConfig{
Endpoint: "sftpendpoint",
},
}
a := newActionNotification(user, operationDownload, "path", "vpath", "target", "", ProtocolSFTP, 123, 0, errors.New("fake error"))
sessionID := xid.New().String()
a := newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSFTP, "", sessionID,
123, 0, errors.New("fake error"))
assert.Equal(t, user.Username, a.Username)
assert.Equal(t, 0, len(a.Bucket))
assert.Equal(t, 0, len(a.Endpoint))
assert.Equal(t, 0, a.Status)
assert.Equal(t, 2, a.Status)
user.FsConfig.Provider = vfs.S3FilesystemProvider
a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", ProtocolSSH, 123, 0, nil)
user.FsConfig.Provider = sdk.S3FilesystemProvider
a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSSH, "", sessionID,
123, 0, nil)
assert.Equal(t, "s3bucket", a.Bucket)
assert.Equal(t, "endpoint", a.Endpoint)
assert.Equal(t, 1, a.Status)
user.FsConfig.Provider = vfs.GCSFilesystemProvider
a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", ProtocolSCP, 123, 0, ErrQuotaExceeded)
user.FsConfig.Provider = sdk.GCSFilesystemProvider
a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSCP, "", sessionID,
123, 0, ErrQuotaExceeded)
assert.Equal(t, "gcsbucket", a.Bucket)
assert.Equal(t, 0, len(a.Endpoint))
assert.Equal(t, 2, a.Status)
assert.Equal(t, 3, a.Status)
user.FsConfig.Provider = vfs.AzureBlobFilesystemProvider
a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", ProtocolSCP, 123, 0, nil)
user.FsConfig.Provider = sdk.AzureBlobFilesystemProvider
a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSCP, "", sessionID,
123, 0, nil)
assert.Equal(t, "azcontainer", a.Bucket)
assert.Equal(t, "azendpoint", a.Endpoint)
assert.Equal(t, 1, a.Status)
a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", ProtocolSCP, 123, os.O_APPEND, nil)
a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSCP, "", sessionID,
123, os.O_APPEND, nil)
assert.Equal(t, "azcontainer", a.Bucket)
assert.Equal(t, "azendpoint", a.Endpoint)
assert.Equal(t, 1, a.Status)
assert.Equal(t, os.O_APPEND, a.OpenFlags)
user.FsConfig.Provider = vfs.SFTPFilesystemProvider
a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", ProtocolSFTP, 123, 0, nil)
user.FsConfig.Provider = sdk.SFTPFilesystemProvider
a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSFTP, "", sessionID,
123, 0, nil)
assert.Equal(t, "sftpendpoint", a.Endpoint)
}
@@ -77,9 +113,12 @@ func TestActionHTTP(t *testing.T) {
Hook: fmt.Sprintf("http://%v", httpAddr),
}
user := &dataprovider.User{
Username: "username",
BaseUser: sdk.BaseUser{
Username: "username",
},
}
a := newActionNotification(user, operationDownload, "path", "vpath", "target", "", ProtocolSFTP, 123, 0, nil)
a := newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSFTP, "",
xid.New().String(), 123, 0, nil)
err := actionHandler.Handle(a)
assert.NoError(t, err)
@@ -110,13 +149,20 @@ func TestActionCMD(t *testing.T) {
Hook: hookCmd,
}
user := &dataprovider.User{
Username: "username",
BaseUser: sdk.BaseUser{
Username: "username",
},
}
a := newActionNotification(user, operationDownload, "path", "vpath", "target", "", ProtocolSFTP, 123, 0, nil)
sessionID := shortuuid.New()
a := newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSFTP, "", sessionID,
123, 0, nil)
err = actionHandler.Handle(a)
assert.NoError(t, err)
ExecuteActionNotification(user, OperationSSHCmd, "path", "vpath", "target", "sha1sum", ProtocolSSH, 0, nil)
c := NewBaseConnection("id", ProtocolSFTP, "", "", *user)
ExecuteActionNotification(c, OperationSSHCmd, "path", "vpath", "target", "vtarget", "sha1sum", 0, nil)
ExecuteActionNotification(c, operationDownload, "path", "vpath", "", "", "", 0, nil)
Config.Actions = actionsCopy
}
@@ -133,10 +179,13 @@ func TestWrongActions(t *testing.T) {
Hook: badCommand,
}
user := &dataprovider.User{
Username: "username",
BaseUser: sdk.BaseUser{
Username: "username",
},
}
a := newActionNotification(user, operationUpload, "", "", "", "", ProtocolSFTP, 123, 0, nil)
a := newActionNotification(user, operationUpload, "", "", "", "", "", ProtocolSFTP, "", xid.New().String(),
123, 0, nil)
err := actionHandler.Handle(a)
assert.Error(t, err, "action with bad command must fail")
@@ -180,13 +229,15 @@ func TestPreDeleteAction(t *testing.T) {
err = os.MkdirAll(homeDir, os.ModePerm)
assert.NoError(t, err)
user := dataprovider.User{
Username: "username",
HomeDir: homeDir,
BaseUser: sdk.BaseUser{
Username: "username",
HomeDir: homeDir,
},
}
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
fs := vfs.NewOsFs("id", homeDir, "")
c := NewBaseConnection("id", ProtocolSFTP, "", user)
c := NewBaseConnection("id", ProtocolSFTP, "", "", user)
testfile := filepath.Join(user.HomeDir, "testfile")
err = os.WriteFile(testfile, []byte("test"), os.ModePerm)
@@ -202,11 +253,42 @@ func TestPreDeleteAction(t *testing.T) {
Config.Actions = actionsCopy
}
func TestUnconfiguredHook(t *testing.T) {
actionsCopy := Config.Actions
Config.Actions = ProtocolActions{
ExecuteOn: []string{operationDownload},
Hook: "",
}
pluginsConfig := []plugin.Config{
{
Type: "notifier",
},
}
err := plugin.Initialize(pluginsConfig, true)
assert.Error(t, err)
assert.True(t, plugin.Handler.HasNotifiers())
c := NewBaseConnection("id", ProtocolSFTP, "", "", dataprovider.User{})
err = ExecutePreAction(c, OperationPreDownload, "", "", 0, 0)
assert.NoError(t, err)
err = ExecutePreAction(c, operationPreDelete, "", "", 0, 0)
assert.ErrorIs(t, err, errUnconfiguredAction)
ExecuteActionNotification(c, operationDownload, "", "", "", "", "", 0, nil)
err = plugin.Initialize(nil, true)
assert.NoError(t, err)
assert.False(t, plugin.Handler.HasNotifiers())
Config.Actions = actionsCopy
}
type actionHandlerStub struct {
called bool
}
func (h *actionHandlerStub) Handle(notification *ActionNotification) error {
func (h *actionHandlerStub) Handle(event *notifier.FsEvent) error {
h.called = true
return nil
@@ -220,7 +302,7 @@ func TestInitializeActionHandler(t *testing.T) {
InitializeActionHandler(&defaultActionHandler{})
})
err := actionHandler.Handle(&ActionNotification{})
err := actionHandler.Handle(&notifier.FsEvent{})
assert.NoError(t, err)
assert.True(t, handler.called)

View File

@@ -1,10 +1,24 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
"sync"
"sync/atomic"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/v2/logger"
)
// clientsMap is a struct containing the map of the connected clients

View File

@@ -1,3 +1,17 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (

View File

@@ -1,3 +1,17 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
// Package common defines code shared among file transfer packages and protocols
package common
@@ -11,6 +25,7 @@ import (
"os"
"os/exec"
"path/filepath"
"strconv"
"strings"
"sync"
"sync/atomic"
@@ -18,12 +33,14 @@ import (
"github.com/pires/go-proxyproto"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/httpclient"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/metrics"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/vfs"
"github.com/drakkan/sftpgo/v2/command"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/httpclient"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/metric"
"github.com/drakkan/sftpgo/v2/plugin"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/vfs"
)
// constants
@@ -49,10 +66,13 @@ const (
OperationPreUpload = "pre-upload"
operationPreDelete = "pre-delete"
operationRename = "rename"
operationMkdir = "mkdir"
operationRmdir = "rmdir"
// SSH command action name
OperationSSHCmd = "ssh_cmd"
chtimesFormat = "2006-01-02T15:04:05" // YYYY-MM-DDTHH:MM:SS
idleTimeoutCheckInterval = 3 * time.Minute
OperationSSHCmd = "ssh_cmd"
chtimesFormat = "2006-01-02T15:04:05" // YYYY-MM-DDTHH:MM:SS
idleTimeoutCheckInterval = 3 * time.Minute
periodicTimeoutCheckInterval = 1 * time.Minute
)
// Stat flags
@@ -71,12 +91,15 @@ const (
// Supported protocols
const (
ProtocolSFTP = "SFTP"
ProtocolSCP = "SCP"
ProtocolSSH = "SSH"
ProtocolFTP = "FTP"
ProtocolWebDAV = "DAV"
ProtocolHTTP = "HTTP"
ProtocolSFTP = "SFTP"
ProtocolSCP = "SCP"
ProtocolSSH = "SSH"
ProtocolFTP = "FTP"
ProtocolWebDAV = "DAV"
ProtocolHTTP = "HTTP"
ProtocolHTTPShare = "HTTPShare"
ProtocolDataRetention = "DataRetention"
ProtocolOIDC = "OIDC"
)
// Upload modes
@@ -90,6 +113,7 @@ func init() {
Connections.clients = clientsMap{
clients: make(map[string]int),
}
Connections.perUserConns = make(map[string]int)
}
// errors definitions
@@ -99,12 +123,14 @@ var (
ErrOpUnsupported = errors.New("operation unsupported")
ErrGenericFailure = errors.New("failure")
ErrQuotaExceeded = errors.New("denying write due to space limit")
ErrReadQuotaExceeded = errors.New("denying read due to quota limit")
ErrSkipPermissionsCheck = errors.New("permission check skipped")
ErrConnectionDenied = errors.New("you are not allowed to connect")
ErrNoBinding = errors.New("no binding configured")
ErrCrtRevoked = errors.New("your certificate has been revoked")
ErrNoCredentials = errors.New("no credential provided")
ErrInternalFailure = errors.New("internal failure")
ErrTransferAborted = errors.New("transfer aborted")
errNoTransfer = errors.New("requested transfer not found")
errTransferMismatch = errors.New("transfer mismatch")
)
@@ -115,45 +141,76 @@ var (
// Connections is the list of active connections
Connections ActiveConnections
// QuotaScans is the list of active quota scans
QuotaScans ActiveScans
idleTimeoutTicker *time.Ticker
idleTimeoutTickerDone chan bool
supportedProtocols = []string{ProtocolSFTP, ProtocolSCP, ProtocolSSH, ProtocolFTP, ProtocolWebDAV, ProtocolHTTP}
QuotaScans ActiveScans
transfersChecker TransfersChecker
periodicTimeoutTicker *time.Ticker
periodicTimeoutTickerDone chan bool
supportedProtocols = []string{ProtocolSFTP, ProtocolSCP, ProtocolSSH, ProtocolFTP, ProtocolWebDAV,
ProtocolHTTP, ProtocolHTTPShare, ProtocolOIDC}
disconnHookProtocols = []string{ProtocolSFTP, ProtocolSCP, ProtocolSSH, ProtocolFTP}
// the map key is the protocol, for each protocol we can have multiple rate limiters
rateLimiters map[string][]*rateLimiter
)
// Initialize sets the common configuration
func Initialize(c Configuration) error {
func Initialize(c Configuration, isShared int) error {
Config = c
Config.Actions.ExecuteOn = util.RemoveDuplicates(Config.Actions.ExecuteOn, true)
Config.Actions.ExecuteSync = util.RemoveDuplicates(Config.Actions.ExecuteSync, true)
Config.ProxyAllowed = util.RemoveDuplicates(Config.ProxyAllowed, true)
Config.idleLoginTimeout = 2 * time.Minute
Config.idleTimeoutAsDuration = time.Duration(Config.IdleTimeout) * time.Minute
if Config.IdleTimeout > 0 {
startIdleTimeoutTicker(idleTimeoutCheckInterval)
}
startPeriodicTimeoutTicker(periodicTimeoutCheckInterval)
Config.defender = nil
Config.whitelist = nil
rateLimiters = make(map[string][]*rateLimiter)
for _, rlCfg := range c.RateLimitersConfig {
if rlCfg.isEnabled() {
if err := rlCfg.validate(); err != nil {
return fmt.Errorf("rate limiters initialization error: %w", err)
}
allowList, err := util.ParseAllowedIPAndRanges(rlCfg.AllowList)
if err != nil {
return fmt.Errorf("unable to parse rate limiter allow list %v: %v", rlCfg.AllowList, err)
}
rateLimiter := rlCfg.getLimiter()
rateLimiter.allowList = allowList
for _, protocol := range rlCfg.Protocols {
rateLimiters[protocol] = append(rateLimiters[protocol], rateLimiter)
}
}
}
if c.DefenderConfig.Enabled {
defender, err := newInMemoryDefender(&c.DefenderConfig)
if !util.Contains(supportedDefenderDrivers, c.DefenderConfig.Driver) {
return fmt.Errorf("unsupported defender driver %#v", c.DefenderConfig.Driver)
}
var defender Defender
var err error
switch c.DefenderConfig.Driver {
case DefenderDriverProvider:
defender, err = newDBDefender(&c.DefenderConfig)
default:
defender, err = newInMemoryDefender(&c.DefenderConfig)
}
if err != nil {
return fmt.Errorf("defender initialization error: %v", err)
}
logger.Info(logSender, "", "defender initialized with config %+v", c.DefenderConfig)
Config.defender = defender
}
rateLimiters = make(map[string][]*rateLimiter)
for _, rlCfg := range c.RateLimitersConfig {
if rlCfg.isEnabled() {
if err := rlCfg.validate(); err != nil {
return fmt.Errorf("rate limiters initialization error: %v", err)
}
rateLimiter := rlCfg.getLimiter()
for _, protocol := range rlCfg.Protocols {
rateLimiters[protocol] = append(rateLimiters[protocol], rateLimiter)
}
if c.WhiteListFile != "" {
whitelist := &whitelist{
fileName: c.WhiteListFile,
}
if err := whitelist.reload(); err != nil {
return fmt.Errorf("whitelist initialization error: %w", err)
}
logger.Info(logSender, "", "whitelist initialized from file: %#v", c.WhiteListFile)
Config.whitelist = whitelist
}
vfs.SetTempPath(c.TempPath)
dataprovider.SetTempPath(c.TempPath)
transfersChecker = getTransfersChecker(isShared)
return nil
}
@@ -171,17 +228,27 @@ func LimitRate(protocol, ip string) (time.Duration, error) {
return 0, nil
}
// ReloadDefender reloads the defender's block and safe lists
func ReloadDefender() error {
if Config.defender == nil {
return nil
// Reload reloads the whitelist, the IP filter plugin and the defender's block and safe lists
func Reload() error {
plugin.Handler.ReloadFilter()
var errWhitelist error
if Config.whitelist != nil {
errWhitelist = Config.whitelist.reload()
}
return Config.defender.Reload()
if Config.defender == nil {
return errWhitelist
}
if err := Config.defender.Reload(); err != nil {
return err
}
return errWhitelist
}
// IsBanned returns true if the specified IP address is banned
func IsBanned(ip string) bool {
if plugin.Handler.IsIPBanned(ip) {
return true
}
if Config.defender == nil {
return false
}
@@ -191,27 +258,27 @@ func IsBanned(ip string) bool {
// GetDefenderBanTime returns the ban time for the given IP
// or nil if the IP is not banned or the defender is disabled
func GetDefenderBanTime(ip string) *time.Time {
func GetDefenderBanTime(ip string) (*time.Time, error) {
if Config.defender == nil {
return nil
return nil, nil
}
return Config.defender.GetBanTime(ip)
}
// GetDefenderHosts returns hosts that are banned or for which some violations have been detected
func GetDefenderHosts() []*DefenderEntry {
func GetDefenderHosts() ([]dataprovider.DefenderEntry, error) {
if Config.defender == nil {
return nil
return nil, nil
}
return Config.defender.GetHosts()
}
// GetDefenderHost returns a defender host by ip, if any
func GetDefenderHost(ip string) (*DefenderEntry, error) {
func GetDefenderHost(ip string) (dataprovider.DefenderEntry, error) {
if Config.defender == nil {
return nil, errors.New("defender is disabled")
return dataprovider.DefenderEntry{}, errors.New("defender is disabled")
}
return Config.defender.GetHost(ip)
@@ -227,9 +294,9 @@ func DeleteDefenderHost(ip string) bool {
}
// GetDefenderScore returns the score for the given IP
func GetDefenderScore(ip string) int {
func GetDefenderScore(ip string) (int, error) {
if Config.defender == nil {
return 0
return 0, nil
}
return Config.defender.GetScore(ip)
@@ -245,46 +312,60 @@ func AddDefenderEvent(ip string, event HostEvent) {
}
// the ticker cannot be started/stopped from multiple goroutines
func startIdleTimeoutTicker(duration time.Duration) {
stopIdleTimeoutTicker()
idleTimeoutTicker = time.NewTicker(duration)
idleTimeoutTickerDone = make(chan bool)
func startPeriodicTimeoutTicker(duration time.Duration) {
stopPeriodicTimeoutTicker()
periodicTimeoutTicker = time.NewTicker(duration)
periodicTimeoutTickerDone = make(chan bool)
go func() {
counter := int64(0)
ratio := idleTimeoutCheckInterval / periodicTimeoutCheckInterval
for {
select {
case <-idleTimeoutTickerDone:
case <-periodicTimeoutTickerDone:
return
case <-idleTimeoutTicker.C:
Connections.checkIdles()
case <-periodicTimeoutTicker.C:
counter++
if Config.IdleTimeout > 0 && counter >= int64(ratio) {
counter = 0
Connections.checkIdles()
}
go Connections.checkTransfers()
}
}
}()
}
func stopIdleTimeoutTicker() {
if idleTimeoutTicker != nil {
idleTimeoutTicker.Stop()
idleTimeoutTickerDone <- true
idleTimeoutTicker = nil
func stopPeriodicTimeoutTicker() {
if periodicTimeoutTicker != nil {
periodicTimeoutTicker.Stop()
periodicTimeoutTickerDone <- true
periodicTimeoutTicker = nil
}
}
// ActiveTransfer defines the interface for the current active transfers
type ActiveTransfer interface {
GetID() uint64
GetID() int64
GetType() int
GetSize() int64
GetDownloadedSize() int64
GetUploadedSize() int64
GetVirtualPath() string
GetStartTime() time.Time
SignalClose()
SignalClose(err error)
Truncate(fsPath string, size int64) (int64, error)
GetRealFsPath(fsPath string) string
SetTimes(fsPath string, atime time.Time, mtime time.Time) bool
GetTruncatedSize() int64
HasSizeLimit() bool
}
// ActiveConnection defines the interface for the current active connections
type ActiveConnection interface {
GetID() string
GetUsername() string
GetMaxSessions() int
GetLocalAddress() string
GetRemoteAddress() string
GetClientVersion() string
GetProtocol() string
@@ -295,6 +376,7 @@ type ActiveConnection interface {
AddTransfer(t ActiveTransfer)
RemoveTransfer(t ActiveTransfer)
GetTransfers() []ConnectionTransfer
SignalTransferClose(transferID int64, err error)
CloseFS() error
}
@@ -311,11 +393,14 @@ type StatAttributes struct {
// ConnectionTransfer defines the transfer details to expose
type ConnectionTransfer struct {
ID uint64 `json:"-"`
ID int64 `json:"-"`
OperationType string `json:"operation_type"`
StartTime int64 `json:"start_time"`
Size int64 `json:"size"`
VirtualPath string `json:"path"`
HasSizeLimit bool `json:"-"`
ULSize int64 `json:"-"`
DLSize int64 `json:"-"`
}
func (t *ConnectionTransfer) getConnectionTransferAsString() string {
@@ -328,14 +413,43 @@ func (t *ConnectionTransfer) getConnectionTransferAsString() string {
}
result += fmt.Sprintf("%#v ", t.VirtualPath)
if t.Size > 0 {
elapsed := time.Since(utils.GetTimeFromMsecSinceEpoch(t.StartTime))
speed := float64(t.Size) / float64(utils.GetTimeAsMsSinceEpoch(time.Now())-t.StartTime)
result += fmt.Sprintf("Size: %#v Elapsed: %#v Speed: \"%.1f KB/s\"", utils.ByteCountIEC(t.Size),
utils.GetDurationAsString(elapsed), speed)
elapsed := time.Since(util.GetTimeFromMsecSinceEpoch(t.StartTime))
speed := float64(t.Size) / float64(util.GetTimeAsMsSinceEpoch(time.Now())-t.StartTime)
result += fmt.Sprintf("Size: %#v Elapsed: %#v Speed: \"%.1f KB/s\"", util.ByteCountIEC(t.Size),
util.GetDurationAsString(elapsed), speed)
}
return result
}
type whitelist struct {
fileName string
sync.RWMutex
list HostList
}
func (l *whitelist) reload() error {
list, err := loadHostListFromFile(l.fileName)
if err != nil {
return err
}
if list == nil {
return errors.New("cannot accept a nil whitelist")
}
l.Lock()
defer l.Unlock()
l.list = *list
return nil
}
func (l *whitelist) isAllowed(ip string) bool {
l.RLock()
defer l.RUnlock()
return l.list.isListed(ip)
}
// Configuration defines configuration parameters common to all supported protocols
type Configuration struct {
// Maximum idle timeout as minutes. If a client is idle for a time that exceeds this setting it will be disconnected.
@@ -355,8 +469,9 @@ type Configuration struct {
Actions ProtocolActions `json:"actions" mapstructure:"actions"`
// SetstatMode 0 means "normal mode": requests for changing permissions and owner/group are executed.
// 1 means "ignore mode": requests for changing permissions and owner/group are silently ignored.
// 2 means "ignore mode for cloud fs": requests for changing permissions and owner/group/time are
// silently ignored for cloud based filesystem such as S3, GCS, Azure Blob
// 2 means "ignore mode for cloud fs": requests for changing permissions and owner/group are
// silently ignored for cloud-based filesystems such as S3, GCS, Azure Blob. Requests for changing
// modification times are ignored for cloud-based filesystems if they are not supported.
SetstatMode int `json:"setstat_mode" mapstructure:"setstat_mode"`
// TempPath defines the path for temporary files such as those used for atomic uploads or file pipes.
// If you set this option you must make sure that the defined path exists, is accessible for writing
@@ -390,10 +505,20 @@ type Configuration struct {
// and before they try to login. It allows you to reject the connection based on the source
// IP address. Leave empty to disable.
PostConnectHook string `json:"post_connect_hook" mapstructure:"post_connect_hook"`
// Absolute path to an external program or an HTTP URL to invoke after an SSH/FTP connection ends.
// Leave empty to disable.
PostDisconnectHook string `json:"post_disconnect_hook" mapstructure:"post_disconnect_hook"`
// Absolute path to an external program or an HTTP URL to invoke after a data retention check completes.
// Leave empty to disable.
DataRetentionHook string `json:"data_retention_hook" mapstructure:"data_retention_hook"`
// Maximum number of concurrent client connections. 0 means unlimited
MaxTotalConnections int `json:"max_total_connections" mapstructure:"max_total_connections"`
// Maximum number of concurrent client connections from the same host (IP). 0 means unlimited
MaxPerHostConnections int `json:"max_per_host_connections" mapstructure:"max_per_host_connections"`
// Path to a file containing a list of IP addresses and/or networks to allow.
// Only the listed IPs/networks can access the configured services; all other client connections
// will be dropped before they even try to authenticate.
WhiteListFile string `json:"whitelist_file" mapstructure:"whitelist_file"`
// Defender configuration
DefenderConfig DefenderConfig `json:"defender" mapstructure:"defender"`
// Rate limiter configurations
@@ -401,6 +526,7 @@ type Configuration struct {
idleTimeoutAsDuration time.Duration
idleLoginTimeout time.Duration
defender Defender
whitelist *whitelist
}
// IsAtomicUploadEnabled returns true if atomic upload is enabled
@@ -409,9 +535,8 @@ func (c *Configuration) IsAtomicUploadEnabled() bool {
}
// GetProxyListener returns a wrapper for the given listener that supports the
// HAProxy Proxy Protocol or nil if the proxy protocol is not configured
// HAProxy Proxy Protocol
func (c *Configuration) GetProxyListener(listener net.Listener) (*proxyproto.Listener, error) {
var proxyListener *proxyproto.Listener
var err error
if c.ProxyProtocol > 0 {
var policyFunc func(upstream net.Addr) (proxyproto.Policy, error)
@@ -433,12 +558,13 @@ func (c *Configuration) GetProxyListener(listener net.Listener) (*proxyproto.Lis
}
}
}
proxyListener = &proxyproto.Listener{
Listener: listener,
Policy: policyFunc,
}
return &proxyproto.Listener{
Listener: listener,
Policy: policyFunc,
ReadHeaderTimeout: 10 * time.Second,
}, nil
}
return proxyListener, nil
return nil, errors.New("proxy protocol not configured")
}
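A sketch of how a call site adapts to the new contract (hypothetical helper, not from the diff): GetProxyListener no longer returns a nil wrapper when the proxy protocol is disabled, so it should only be invoked when ProxyProtocol > 0.

package main

import (
	"log"
	"net"

	"github.com/drakkan/sftpgo/v2/common"
)

// applyProxyWrapper is a hypothetical helper: it wraps the listener only
// when the proxy protocol is enabled, since GetProxyListener now returns
// an error instead of a nil listener when it is not configured.
func applyProxyWrapper(cfg *common.Configuration, ln net.Listener) (net.Listener, error) {
	if cfg.ProxyProtocol > 0 {
		return cfg.GetProxyListener(ln)
	}
	return ln, nil
}

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		log.Fatal(err)
	}
	defer ln.Close()
	if _, err := applyProxyWrapper(&common.Config, ln); err != nil {
		log.Fatal(err)
	}
}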
// ExecuteStartupHook runs the startup hook if defined
@@ -469,14 +595,74 @@ func (c *Configuration) ExecuteStartupHook() error {
return err
}
startTime := time.Now()
ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
timeout, env := command.GetConfig(c.StartupHook)
ctx, cancel := context.WithTimeout(context.Background(), timeout)
defer cancel()
cmd := exec.CommandContext(ctx, c.StartupHook)
cmd.Env = env
err := cmd.Run()
logger.Debug(logSender, "", "Startup hook executed, elapsed: %v, error: %v", time.Since(startTime), err)
return nil
}
func (c *Configuration) executePostDisconnectHook(remoteAddr, protocol, username, connID string, connectionTime time.Time) {
ipAddr := util.GetIPFromRemoteAddress(remoteAddr)
connDuration := int64(time.Since(connectionTime) / time.Millisecond)
if strings.HasPrefix(c.PostDisconnectHook, "http") {
var url *url.URL
url, err := url.Parse(c.PostDisconnectHook)
if err != nil {
logger.Warn(protocol, connID, "Invalid post disconnect hook %#v: %v", c.PostDisconnectHook, err)
return
}
q := url.Query()
q.Add("ip", ipAddr)
q.Add("protocol", protocol)
q.Add("username", username)
q.Add("connection_duration", strconv.FormatInt(connDuration, 10))
url.RawQuery = q.Encode()
startTime := time.Now()
resp, err := httpclient.RetryableGet(url.String())
respCode := 0
if err == nil {
respCode = resp.StatusCode
resp.Body.Close()
}
logger.Debug(protocol, connID, "Post disconnect hook response code: %v, elapsed: %v, err: %v",
respCode, time.Since(startTime), err)
return
}
if !filepath.IsAbs(c.PostDisconnectHook) {
logger.Debug(protocol, connID, "invalid post disconnect hook %#v", c.PostDisconnectHook)
return
}
timeout, env := command.GetConfig(c.PostDisconnectHook)
ctx, cancel := context.WithTimeout(context.Background(), timeout)
defer cancel()
startTime := time.Now()
cmd := exec.CommandContext(ctx, c.PostDisconnectHook)
cmd.Env = append(env,
fmt.Sprintf("SFTPGO_CONNECTION_IP=%v", ipAddr),
fmt.Sprintf("SFTPGO_CONNECTION_USERNAME=%v", username),
fmt.Sprintf("SFTPGO_CONNECTION_DURATION=%v", connDuration),
fmt.Sprintf("SFTPGO_CONNECTION_PROTOCOL=%v", protocol))
err := cmd.Run()
logger.Debug(protocol, connID, "Post disconnect hook executed, elapsed: %v error: %v", time.Since(startTime), err)
}
func (c *Configuration) checkPostDisconnectHook(remoteAddr, protocol, username, connID string, connectionTime time.Time) {
if c.PostDisconnectHook == "" {
return
}
if !util.Contains(disconnHookProtocols, protocol) {
return
}
go c.executePostDisconnectHook(remoteAddr, protocol, username, connID, connectionTime)
}
// ExecutePostConnectHook executes the post connect hook if defined
func (c *Configuration) ExecutePostConnectHook(ipAddr, protocol string) error {
if c.PostConnectHook == "" {
@@ -512,10 +698,12 @@ func (c *Configuration) ExecutePostConnectHook(ipAddr, protocol string) error {
logger.Warn(protocol, "", "Login from ip %#v denied: %v", ipAddr, err)
return err
}
ctx, cancel := context.WithTimeout(context.Background(), 20*time.Second)
timeout, env := command.GetConfig(c.PostConnectHook)
ctx, cancel := context.WithTimeout(context.Background(), timeout)
defer cancel()
cmd := exec.CommandContext(ctx, c.PostConnectHook)
cmd.Env = append(os.Environ(),
cmd.Env = append(env,
fmt.Sprintf("SFTPGO_CONNECTION_IP=%v", ipAddr),
fmt.Sprintf("SFTPGO_CONNECTION_PROTOCOL=%v", protocol))
err := cmd.Run()
@@ -566,10 +754,34 @@ func (c *SSHConnection) Close() error {
type ActiveConnections struct {
// clients contains both authenticated and established connections and the ones waiting
// for authentication
clients clientsMap
clients clientsMap
transfersCheckStatus int32
sync.RWMutex
connections []ActiveConnection
sshConnections []*SSHConnection
perUserConns map[string]int
}
// internal method, must be called within a locked block
func (conns *ActiveConnections) addUserConnection(username string) {
if username == "" {
return
}
conns.perUserConns[username]++
}
// internal method, must be called within a locked block
func (conns *ActiveConnections) removeUserConnection(username string) {
if username == "" {
return
}
if val, ok := conns.perUserConns[username]; ok {
conns.perUserConns[username]--
if val > 1 {
return
}
delete(conns.perUserConns, username)
}
}
// GetActiveSessions returns the number of active sessions for the given username.
@@ -578,23 +790,27 @@ func (conns *ActiveConnections) GetActiveSessions(username string) int {
conns.RLock()
defer conns.RUnlock()
numSessions := 0
for _, c := range conns.connections {
if c.GetUsername() == username {
numSessions++
}
}
return numSessions
return conns.perUserConns[username]
}
// Add adds a new connection to the active ones
func (conns *ActiveConnections) Add(c ActiveConnection) {
func (conns *ActiveConnections) Add(c ActiveConnection) error {
conns.Lock()
defer conns.Unlock()
if username := c.GetUsername(); username != "" {
if maxSessions := c.GetMaxSessions(); maxSessions > 0 {
if val := conns.perUserConns[username]; val >= maxSessions {
return fmt.Errorf("too many open sessions: %d/%d", val, maxSessions)
}
}
conns.addUserConnection(username)
}
conns.connections = append(conns.connections, c)
metrics.UpdateActiveConnectionsSize(len(conns.connections))
logger.Debug(c.GetProtocol(), c.GetID(), "connection added, num open connections: %v", len(conns.connections))
metric.UpdateActiveConnectionsSize(len(conns.connections))
logger.Debug(c.GetProtocol(), c.GetID(), "connection added, local address %#v, remote address %#v, num open connections: %v",
c.GetLocalAddress(), c.GetRemoteAddress(), len(conns.connections))
return nil
}
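Call sites must now check the returned error; a hedged fragment (hypothetical caller, not from the diff):

package example

import (
	"fmt"

	"github.com/drakkan/sftpgo/v2/common"
)

// registerConnection is a hypothetical call site: Add now enforces the
// per-user MaxSessions limit and fails when it is exceeded, so the caller
// must reject the session instead of assuming registration succeeded.
func registerConnection(conn common.ActiveConnection) error {
	if err := common.Connections.Add(conn); err != nil {
		return fmt.Errorf("session rejected: %w", err)
	}
	return nil
}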
// Swap replaces an existing connection with the given one.
@@ -607,8 +823,20 @@ func (conns *ActiveConnections) Swap(c ActiveConnection) error {
for idx, conn := range conns.connections {
if conn.GetID() == c.GetID() {
conn = nil
conns.removeUserConnection(conn.GetUsername())
if username := c.GetUsername(); username != "" {
if maxSessions := c.GetMaxSessions(); maxSessions > 0 {
if val := conns.perUserConns[username]; val >= maxSessions {
conns.addUserConnection(conn.GetUsername())
return fmt.Errorf("too many open sessions: %d/%d", val, maxSessions)
}
}
conns.addUserConnection(username)
}
err := conn.CloseFS()
conns.connections[idx] = c
logger.Debug(logSender, c.GetID(), "connection swapped, close fs error: %v", err)
conn = nil
return nil
}
}
@@ -627,9 +855,12 @@ func (conns *ActiveConnections) Remove(connectionID string) {
conns.connections[idx] = conns.connections[lastIdx]
conns.connections[lastIdx] = nil
conns.connections = conns.connections[:lastIdx]
metrics.UpdateActiveConnectionsSize(lastIdx)
logger.Debug(conn.GetProtocol(), conn.GetID(), "connection removed, close fs error: %v, num open connections: %v",
err, lastIdx)
conns.removeUserConnection(conn.GetUsername())
metric.UpdateActiveConnectionsSize(lastIdx)
logger.Debug(conn.GetProtocol(), conn.GetID(), "connection removed, local address %#v, remote address %#v, close fs error: %v, num open connections: %v",
conn.GetLocalAddress(), conn.GetRemoteAddress(), err, lastIdx)
Config.checkPostDisconnectHook(conn.GetRemoteAddress(), conn.GetProtocol(), conn.GetUsername(),
conn.GetID(), conn.GetConnectionTime())
return
}
}
@@ -690,13 +921,15 @@ func (conns *ActiveConnections) checkIdles() {
for _, sshConn := range conns.sshConnections {
idleTime := time.Since(sshConn.GetLastActivity())
if idleTime > Config.idleTimeoutAsDuration {
// we close the an ssh connection if it has no active connections associated
idToMatch := fmt.Sprintf("_%v_", sshConn.GetID())
// we close an SSH connection if it has no active connections associated
idToMatch := fmt.Sprintf("_%s_", sshConn.GetID())
toClose := true
for _, conn := range conns.connections {
if strings.Contains(conn.GetID(), idToMatch) {
toClose = false
break
if time.Since(conn.GetLastActivity()) <= Config.idleTimeoutAsDuration {
toClose = false
break
}
}
}
if toClose {
@@ -719,9 +952,9 @@ func (conns *ActiveConnections) checkIdles() {
logger.Debug(conn.GetProtocol(), conn.GetID(), "close idle connection, idle time: %v, username: %#v close err: %v",
time.Since(conn.GetLastActivity()), conn.GetUsername(), err)
if isFTPNoAuth {
ip := utils.GetIPFromRemoteAddress(c.GetRemoteAddress())
ip := util.GetIPFromRemoteAddress(c.GetRemoteAddress())
logger.ConnectionFailedLog("", ip, dataprovider.LoginMethodNoAuthTryed, c.GetProtocol(), "client idle")
metrics.AddNoAuthTryed()
metric.AddNoAuthTryed()
AddDefenderEvent(ip, HostEventNoLoginTried)
dataprovider.ExecutePostLoginHook(&dataprovider.User{}, dataprovider.LoginMethodNoAuthTryed, ip, c.GetProtocol(),
dataprovider.ErrNoAuthTryed)
@@ -733,6 +966,69 @@ func (conns *ActiveConnections) checkIdles() {
conns.RUnlock()
}
func (conns *ActiveConnections) checkTransfers() {
if atomic.LoadInt32(&conns.transfersCheckStatus) == 1 {
logger.Warn(logSender, "", "the previous transfer check is still running, skipping execution")
return
}
atomic.StoreInt32(&conns.transfersCheckStatus, 1)
defer atomic.StoreInt32(&conns.transfersCheckStatus, 0)
conns.RLock()
if len(conns.connections) < 2 {
conns.RUnlock()
return
}
var wg sync.WaitGroup
logger.Debug(logSender, "", "start concurrent transfers check")
// update the current size for the transfers to monitor
for _, c := range conns.connections {
for _, t := range c.GetTransfers() {
if t.HasSizeLimit {
wg.Add(1)
go func(transfer ConnectionTransfer, connID string) {
defer wg.Done()
transfersChecker.UpdateTransferCurrentSizes(transfer.ULSize, transfer.DLSize, transfer.ID, connID)
}(t, c.GetID())
}
}
}
conns.RUnlock()
logger.Debug(logSender, "", "waiting for the update of the transfers' current size")
wg.Wait()
logger.Debug(logSender, "", "getting overquota transfers")
overquotaTransfers := transfersChecker.GetOverquotaTransfers()
logger.Debug(logSender, "", "number of overquota transfers: %v", len(overquotaTransfers))
if len(overquotaTransfers) == 0 {
return
}
conns.RLock()
defer conns.RUnlock()
for _, c := range conns.connections {
for _, overquotaTransfer := range overquotaTransfers {
if c.GetID() == overquotaTransfer.ConnID {
logger.Info(logSender, c.GetID(), "user %#v is overquota, try to close transfer id %v",
c.GetUsername(), overquotaTransfer.TransferID)
var err error
if overquotaTransfer.TransferType == TransferDownload {
err = getReadQuotaExceededError(c.GetProtocol())
} else {
err = getQuotaExceededError(c.GetProtocol())
}
c.SignalTransferClose(overquotaTransfer.TransferID, err)
}
}
}
logger.Debug(logSender, "", "transfers check completed")
}
// AddClientConnection stores a new client connection
func (conns *ActiveConnections) AddClientConnection(ipAddr string) {
conns.clients.add(ipAddr)
@@ -749,7 +1045,13 @@ func (conns *ActiveConnections) GetClientConnections() int32 {
}
// IsNewConnectionAllowed returns false if the maximum number of concurrent allowed connections is exceeded
// or a whitelist is defined and the specified ipAddr is not listed
func (conns *ActiveConnections) IsNewConnectionAllowed(ipAddr string) bool {
if Config.whitelist != nil {
if !Config.whitelist.isAllowed(ipAddr) {
return false
}
}
if Config.MaxTotalConnections == 0 && Config.MaxPerHostConnections == 0 {
return true
}
@@ -781,19 +1083,19 @@ func (conns *ActiveConnections) IsNewConnectionAllowed(ipAddr string) bool {
}
// GetStats returns stats for active connections
func (conns *ActiveConnections) GetStats() []ConnectionStatus {
conns.RLock()
defer conns.RUnlock()
stats := make([]ConnectionStatus, 0, len(conns.connections))
for _, c := range conns.connections {
stat := ConnectionStatus{
Username: c.GetUsername(),
ConnectionID: c.GetID(),
ClientVersion: c.GetClientVersion(),
RemoteAddress: c.GetRemoteAddress(),
ConnectionTime: util.GetTimeAsMsSinceEpoch(c.GetConnectionTime()),
LastActivity: util.GetTimeAsMsSinceEpoch(c.GetLastActivity()),
Protocol: c.GetProtocol(),
Command: c.GetCommand(),
Transfers: c.GetTransfers(),
@@ -827,8 +1129,8 @@ type ConnectionStatus struct {
// GetConnectionDuration returns the connection duration as string
func (c *ConnectionStatus) GetConnectionDuration() string {
elapsed := time.Since(util.GetTimeFromMsecSinceEpoch(c.ConnectionTime))
return util.GetDurationAsString(elapsed)
}
// GetConnectionInfo returns connection info.
@@ -883,8 +1185,8 @@ type ActiveVirtualFolderQuotaScan struct {
// ActiveScans holds the active quota scans
type ActiveScans struct {
sync.RWMutex
UserScans []ActiveQuotaScan
FolderScans []ActiveVirtualFolderQuotaScan
}
// GetUsersQuotaScans returns the active quota scans for users home directories
@@ -892,8 +1194,8 @@ func (s *ActiveScans) GetUsersQuotaScans() []ActiveQuotaScan {
s.RLock()
defer s.RUnlock()
scans := make([]ActiveQuotaScan, len(s.UserScans))
copy(scans, s.UserScans)
return scans
}
@@ -903,14 +1205,14 @@ func (s *ActiveScans) AddUserQuotaScan(username string) bool {
s.Lock()
defer s.Unlock()
for _, scan := range s.UserScans {
if scan.Username == username {
return false
}
}
s.UserScans = append(s.UserScans, ActiveQuotaScan{
Username: username,
StartTime: util.GetTimeAsMsSinceEpoch(time.Now()),
})
return true
}
@@ -921,18 +1223,15 @@ func (s *ActiveScans) RemoveUserQuotaScan(username string) bool {
s.Lock()
defer s.Unlock()
for idx, scan := range s.UserScans {
if scan.Username == username {
lastIdx := len(s.UserScans) - 1
s.UserScans[idx] = s.UserScans[lastIdx]
s.UserScans = s.UserScans[:lastIdx]
return true
}
}
return false
}
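// Editor's note: the rewritten RemoveUserQuotaScan above uses the swap-with-last idiom:
// overwrite the matched element with the final one and truncate, removing in O(1) at the
// cost of element order. A generic sketch of the idiom (illustrative only, Go 1.18+):
func removeUnordered[T any](s []T, idx int) []T {
lastIdx := len(s) - 1
s[idx] = s[lastIdx]
return s[:lastIdx]
}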
@@ -958,7 +1257,7 @@ func (s *ActiveScans) AddVFolderQuotaScan(folderName string) bool {
}
s.FolderScans = append(s.FolderScans, ActiveVirtualFolderQuotaScan{
Name: folderName,
StartTime: util.GetTimeAsMsSinceEpoch(time.Now()),
})
return true
}
@@ -969,17 +1268,14 @@ func (s *ActiveScans) RemoveVFolderQuotaScan(folderName string) bool {
s.Lock()
defer s.Unlock()
for idx, scan := range s.FolderScans {
if scan.Name == folderName {
lastIdx := len(s.FolderScans) - 1
s.FolderScans[idx] = s.FolderScans[lastIdx]
s.FolderScans = s.FolderScans[:lastIdx]
return true
}
}
return false
}

View File

@@ -1,6 +1,21 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
"crypto/tls"
"encoding/json"
"fmt"
"net"
@@ -14,14 +29,16 @@ import (
"time"
"github.com/alexedwards/argon2id"
"github.com/sftpgo/sdk"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/crypto/bcrypt"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/kms"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/vfs"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/plugin"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/vfs"
)
const (
@@ -59,6 +76,10 @@ func (c *fakeConnection) GetCommand() string {
return c.command
}
func (c *fakeConnection) GetLocalAddress() string {
return ""
}
func (c *fakeConnection) GetRemoteAddress() string {
return ""
}
@@ -122,22 +143,46 @@ func TestDefenderIntegration(t *testing.T) {
// by default defender is nil
configCopy := Config
wdPath, err := os.Getwd()
require.NoError(t, err)
pluginsConfig := []plugin.Config{
{
Type: "ipfilter",
Cmd: filepath.Join(wdPath, "..", "tests", "ipfilter", "ipfilter"),
AutoMTLS: true,
},
}
if runtime.GOOS == osWindows {
pluginsConfig[0].Cmd += ".exe"
}
err = plugin.Initialize(pluginsConfig, true)
require.NoError(t, err)
ip := "127.1.1.1"
assert.Nil(t, Reload())
// 192.168.1.12 is banned from the ipfilter plugin
assert.True(t, IsBanned("192.168.1.12"))
AddDefenderEvent(ip, HostEventNoLoginTried)
assert.False(t, IsBanned(ip))
banTime, err := GetDefenderBanTime(ip)
assert.NoError(t, err)
assert.Nil(t, banTime)
assert.False(t, DeleteDefenderHost(ip))
score, err := GetDefenderScore(ip)
assert.NoError(t, err)
assert.Equal(t, 0, score)
_, err = GetDefenderHost(ip)
assert.Error(t, err)
hosts, err := GetDefenderHosts()
assert.NoError(t, err)
assert.Nil(t, hosts)
Config.DefenderConfig = DefenderConfig{
Enabled: true,
Driver: DefenderDriverProvider,
BanTime: 10,
BanTimeIncrement: 50,
Threshold: 0,
@@ -147,36 +192,69 @@ func TestDefenderIntegration(t *testing.T) {
EntriesSoftLimit: 100,
EntriesHardLimit: 150,
}
err = Initialize(Config, 0)
// ScoreInvalid cannot be greater than threshold
assert.Error(t, err)
Config.DefenderConfig.Driver = "unsupported"
err = Initialize(Config, 0)
if assert.Error(t, err) {
assert.Contains(t, err.Error(), "unsupported defender driver")
}
Config.DefenderConfig.Driver = DefenderDriverMemory
err = Initialize(Config, 0)
// ScoreInvalid cannot be greater than threshold
assert.Error(t, err)
Config.DefenderConfig.Threshold = 3
Config.DefenderConfig.SafeListFile = filepath.Join(os.TempDir(), "sl.json")
err = os.WriteFile(Config.DefenderConfig.SafeListFile, []byte(`{}`), 0644)
assert.NoError(t, err)
defer os.Remove(Config.DefenderConfig.SafeListFile)
err = Initialize(Config, 0)
assert.NoError(t, err)
assert.Nil(t, Reload())
err = os.WriteFile(Config.DefenderConfig.SafeListFile, []byte(`{`), 0644)
assert.NoError(t, err)
err = Reload()
assert.Error(t, err)
AddDefenderEvent(ip, HostEventNoLoginTried)
assert.False(t, IsBanned(ip))
score, err = GetDefenderScore(ip)
assert.NoError(t, err)
assert.Equal(t, 2, score)
entry, err := GetDefenderHost(ip)
assert.NoError(t, err)
asJSON, err := json.Marshal(&entry)
assert.NoError(t, err)
assert.Equal(t, `{"id":"3132372e312e312e31","ip":"127.1.1.1","score":2}`, string(asJSON), "entry %v", entry)
assert.True(t, DeleteDefenderHost(ip))
banTime, err = GetDefenderBanTime(ip)
assert.NoError(t, err)
assert.Nil(t, banTime)
AddDefenderEvent(ip, HostEventLoginFailed)
AddDefenderEvent(ip, HostEventNoLoginTried)
assert.True(t, IsBanned(ip))
score, err = GetDefenderScore(ip)
assert.NoError(t, err)
assert.Equal(t, 0, score)
banTime, err = GetDefenderBanTime(ip)
assert.NoError(t, err)
assert.NotNil(t, banTime)
hosts, err = GetDefenderHosts()
assert.NoError(t, err)
assert.Len(t, hosts, 1)
entry, err = GetDefenderHost(ip)
assert.NoError(t, err)
assert.False(t, entry.BanTime.IsZero())
assert.True(t, DeleteDefenderHost(ip))
hosts, err = GetDefenderHosts()
assert.NoError(t, err)
assert.Len(t, hosts, 0)
banTime, err = GetDefenderBanTime(ip)
assert.NoError(t, err)
assert.Nil(t, banTime)
assert.False(t, DeleteDefenderHost(ip))
Config = configCopy
@@ -205,10 +283,18 @@ func TestRateLimitersIntegration(t *testing.T) {
EntriesHardLimit: 150,
},
}
err := Initialize(Config, 0)
assert.Error(t, err)
Config.RateLimitersConfig[0].Period = 1000
Config.RateLimitersConfig[0].AllowList = []string{"1.1.1", "1.1.1.2"}
err = Initialize(Config, 0)
if assert.Error(t, err) {
assert.Contains(t, err.Error(), "unable to parse rate limiter allow list")
}
Config.RateLimitersConfig[0].AllowList = []string{"172.16.24.7"}
Config.RateLimitersConfig[1].AllowList = []string{"172.16.0.0/16"}
err = Initialize(Config, 0)
assert.NoError(t, err)
assert.Len(t, rateLimiters, 4)
@@ -219,6 +305,7 @@ func TestRateLimitersIntegration(t *testing.T) {
source1 := "127.1.1.1"
source2 := "127.1.1.2"
source3 := "172.16.24.7" // whitelisted
_, err = LimitRate(ProtocolSSH, source1)
assert.NoError(t, err)
@@ -237,10 +324,89 @@ func TestRateLimitersIntegration(t *testing.T) {
assert.NoError(t, err)
_, err = LimitRate(ProtocolSSH, source2)
assert.NoError(t, err)
for i := 0; i < 10; i++ {
_, err = LimitRate(ProtocolWebDAV, source3)
assert.NoError(t, err)
}
Config = configCopy
}
func TestWhitelist(t *testing.T) {
configCopy := Config
Config.whitelist = &whitelist{}
err := Config.whitelist.reload()
if assert.Error(t, err) {
assert.Contains(t, err.Error(), "cannot accept a nil whitelist")
}
wlFile := filepath.Join(os.TempDir(), "wl.json")
Config.WhiteListFile = wlFile
err = os.WriteFile(wlFile, []byte(`invalid list file`), 0664)
assert.NoError(t, err)
err = Initialize(Config, 0)
assert.Error(t, err)
wl := HostListFile{
IPAddresses: []string{"172.18.1.1", "172.18.1.2"},
CIDRNetworks: []string{"10.8.7.0/24"},
}
data, err := json.Marshal(wl)
assert.NoError(t, err)
err = os.WriteFile(wlFile, data, 0664)
assert.NoError(t, err)
defer os.Remove(wlFile)
err = Initialize(Config, 0)
assert.NoError(t, err)
assert.True(t, Connections.IsNewConnectionAllowed("172.18.1.1"))
assert.False(t, Connections.IsNewConnectionAllowed("172.18.1.3"))
assert.True(t, Connections.IsNewConnectionAllowed("10.8.7.3"))
assert.False(t, Connections.IsNewConnectionAllowed("10.8.8.2"))
wl.IPAddresses = append(wl.IPAddresses, "172.18.1.3")
wl.CIDRNetworks = append(wl.CIDRNetworks, "10.8.8.0/24")
data, err = json.Marshal(wl)
assert.NoError(t, err)
err = os.WriteFile(wlFile, data, 0664)
assert.NoError(t, err)
assert.False(t, Connections.IsNewConnectionAllowed("10.8.8.3"))
err = Reload()
assert.NoError(t, err)
assert.True(t, Connections.IsNewConnectionAllowed("10.8.8.3"))
assert.True(t, Connections.IsNewConnectionAllowed("172.18.1.3"))
assert.True(t, Connections.IsNewConnectionAllowed("172.18.1.2"))
assert.False(t, Connections.IsNewConnectionAllowed("172.18.1.12"))
Config = configCopy
}
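// Editor's note: a minimal sketch of preparing a whitelist file the way TestWhitelist
// above does: marshal a HostListFile and point Config.WhiteListFile at the result. The
// path and entries are arbitrary examples; Reload() then picks up later edits to the
// file without a restart, as the test shows.
func writeExampleWhitelist(path string) error {
wl := HostListFile{
IPAddresses:  []string{"172.18.1.1"},
CIDRNetworks: []string{"10.8.7.0/24"},
}
data, err := json.Marshal(wl)
if err != nil {
return err
}
return os.WriteFile(path, data, 0664)
}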
func TestUserMaxSessions(t *testing.T) {
c := NewBaseConnection("id", ProtocolSFTP, "", "", dataprovider.User{
BaseUser: sdk.BaseUser{
Username: userTestUsername,
MaxSessions: 1,
},
})
fakeConn := &fakeConnection{
BaseConnection: c,
}
err := Connections.Add(fakeConn)
assert.NoError(t, err)
err = Connections.Add(fakeConn)
assert.Error(t, err)
err = Connections.Swap(fakeConn)
assert.NoError(t, err)
Connections.Remove(fakeConn.GetID())
Connections.Lock()
Connections.removeUserConnection(userTestUsername)
Connections.Unlock()
assert.Len(t, Connections.GetStats(), 0)
}
func TestMaxConnections(t *testing.T) {
oldValue := Config.MaxTotalConnections
perHost := Config.MaxPerHostConnections
@@ -254,11 +420,12 @@ func TestMaxConnections(t *testing.T) {
Config.MaxPerHostConnections = perHost
assert.True(t, Connections.IsNewConnectionAllowed(ipAddr))
c := NewBaseConnection("id", ProtocolSFTP, "", dataprovider.User{})
c := NewBaseConnection("id", ProtocolSFTP, "", "", dataprovider.User{})
fakeConn := &fakeConnection{
BaseConnection: c,
}
err := Connections.Add(fakeConn)
assert.NoError(t, err)
assert.Len(t, Connections.GetStats(), 1)
assert.False(t, Connections.IsNewConnectionAllowed(ipAddr))
@@ -306,7 +473,7 @@ func TestIdleConnections(t *testing.T) {
configCopy := Config
Config.IdleTimeout = 1
err := Initialize(Config, 0)
assert.NoError(t, err)
conn1, conn2 := net.Pipe()
@@ -323,9 +490,11 @@ func TestIdleConnections(t *testing.T) {
username := "test_user"
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: username,
},
}
c := NewBaseConnection(sshConn1.id+"_1", ProtocolSFTP, "", "", user)
c.lastActivity = time.Now().Add(-24 * time.Hour).UnixNano()
fakeConn := &fakeConnection{
BaseConnection: c,
@@ -335,41 +504,44 @@ func TestIdleConnections(t *testing.T) {
sshConn1.lastActivity = c.lastActivity
sshConn2.lastActivity = c.lastActivity
Connections.AddSSHConnection(sshConn1)
err = Connections.Add(fakeConn)
assert.NoError(t, err)
assert.Equal(t, Connections.GetActiveSessions(username), 1)
c = NewBaseConnection(sshConn2.id+"_1", ProtocolSSH, "", "", user)
fakeConn = &fakeConnection{
BaseConnection: c,
}
Connections.AddSSHConnection(sshConn2)
err = Connections.Add(fakeConn)
assert.NoError(t, err)
assert.Equal(t, Connections.GetActiveSessions(username), 2)
cFTP := NewBaseConnection("id2", ProtocolFTP, "", dataprovider.User{})
cFTP := NewBaseConnection("id2", ProtocolFTP, "", "", dataprovider.User{})
cFTP.lastActivity = time.Now().UnixNano()
fakeConn = &fakeConnection{
BaseConnection: cFTP,
}
err = Connections.Add(fakeConn)
assert.NoError(t, err)
assert.Equal(t, Connections.GetActiveSessions(username), 2)
assert.Len(t, Connections.GetStats(), 3)
Connections.RLock()
assert.Len(t, Connections.sshConnections, 2)
Connections.RUnlock()
startPeriodicTimeoutTicker(100 * time.Millisecond)
assert.Eventually(t, func() bool { return Connections.GetActiveSessions(username) == 1 }, 1*time.Second, 200*time.Millisecond)
assert.Eventually(t, func() bool {
Connections.RLock()
defer Connections.RUnlock()
return len(Connections.sshConnections) == 1
}, 1*time.Second, 200*time.Millisecond)
stopPeriodicTimeoutTicker()
assert.Len(t, Connections.GetStats(), 2)
c.lastActivity = time.Now().Add(-24 * time.Hour).UnixNano()
cFTP.lastActivity = time.Now().Add(-24 * time.Hour).UnixNano()
sshConn2.lastActivity = c.lastActivity
startPeriodicTimeoutTicker(100 * time.Millisecond)
assert.Eventually(t, func() bool { return len(Connections.GetStats()) == 0 }, 1*time.Second, 200*time.Millisecond)
assert.Eventually(t, func() bool {
Connections.RLock()
@@ -377,7 +549,7 @@ func TestIdleConnections(t *testing.T) {
return len(Connections.sshConnections) == 0
}, 1*time.Second, 200*time.Millisecond)
assert.Equal(t, int32(0), Connections.GetClientConnections())
stopPeriodicTimeoutTicker()
assert.True(t, customConn1.isClosed)
assert.True(t, customConn2.isClosed)
@@ -385,12 +557,13 @@ func TestIdleConnections(t *testing.T) {
}
func TestCloseConnection(t *testing.T) {
c := NewBaseConnection("id", ProtocolSFTP, "", dataprovider.User{})
c := NewBaseConnection("id", ProtocolSFTP, "", "", dataprovider.User{})
fakeConn := &fakeConnection{
BaseConnection: c,
}
assert.True(t, Connections.IsNewConnectionAllowed("127.0.0.1"))
err := Connections.Add(fakeConn)
assert.NoError(t, err)
assert.Len(t, Connections.GetStats(), 1)
res := Connections.Close(fakeConn.GetID())
assert.True(t, res)
@@ -401,21 +574,38 @@ func TestCloseConnection(t *testing.T) {
}
func TestSwapConnection(t *testing.T) {
c := NewBaseConnection("id", ProtocolFTP, "", dataprovider.User{})
c := NewBaseConnection("id", ProtocolFTP, "", "", dataprovider.User{})
fakeConn := &fakeConnection{
BaseConnection: c,
}
err := Connections.Add(fakeConn)
assert.NoError(t, err)
if assert.Len(t, Connections.GetStats(), 1) {
assert.Equal(t, "", Connections.GetStats()[0].Username)
}
c = NewBaseConnection("id", ProtocolFTP, "", dataprovider.User{
Username: userTestUsername,
c = NewBaseConnection("id", ProtocolFTP, "", "", dataprovider.User{
BaseUser: sdk.BaseUser{
Username: userTestUsername,
MaxSessions: 1,
},
})
fakeConn = &fakeConnection{
BaseConnection: c,
}
c1 := NewBaseConnection("id1", ProtocolFTP, "", "", dataprovider.User{
BaseUser: sdk.BaseUser{
Username: userTestUsername,
},
})
fakeConn1 := &fakeConnection{
BaseConnection: c1,
}
err = Connections.Add(fakeConn1)
assert.NoError(t, err)
err = Connections.Swap(fakeConn)
assert.Error(t, err)
Connections.Remove(fakeConn1.ID)
err = Connections.Swap(fakeConn)
assert.NoError(t, err)
if assert.Len(t, Connections.GetStats(), 1) {
assert.Equal(t, userTestUsername, Connections.GetStats()[0].Username)
@@ -443,31 +633,36 @@ func TestAtomicUpload(t *testing.T) {
func TestConnectionStatus(t *testing.T) {
username := "test_user"
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: username,
},
}
fs := vfs.NewOsFs("", os.TempDir(), "")
c1 := NewBaseConnection("id1", ProtocolSFTP, "", user)
c1 := NewBaseConnection("id1", ProtocolSFTP, "", "", user)
fakeConn1 := &fakeConnection{
BaseConnection: c1,
}
t1 := NewBaseTransfer(nil, c1, nil, "/p1", "/p1", "/r1", TransferUpload, 0, 0, 0, 0, true, fs, dataprovider.TransferQuota{})
t1.BytesReceived = 123
t2 := NewBaseTransfer(nil, c1, nil, "/p2", "/p2", "/r2", TransferDownload, 0, 0, 0, 0, true, fs, dataprovider.TransferQuota{})
t2.BytesSent = 456
c2 := NewBaseConnection("id2", ProtocolSSH, "", user)
c2 := NewBaseConnection("id2", ProtocolSSH, "", "", user)
fakeConn2 := &fakeConnection{
BaseConnection: c2,
command: "md5sum",
}
c3 := NewBaseConnection("id3", ProtocolWebDAV, "", user)
c3 := NewBaseConnection("id3", ProtocolWebDAV, "", "", user)
fakeConn3 := &fakeConnection{
BaseConnection: c3,
command: "PROPFIND",
}
t3 := NewBaseTransfer(nil, c3, nil, "/p2", "/p2", "/r2", TransferDownload, 0, 0, 0, 0, true, fs, dataprovider.TransferQuota{})
err := Connections.Add(fakeConn1)
assert.NoError(t, err)
err = Connections.Add(fakeConn2)
assert.NoError(t, err)
err = Connections.Add(fakeConn3)
assert.NoError(t, err)
stats := Connections.GetStats()
assert.Len(t, stats, 3)
@@ -493,7 +688,7 @@ func TestConnectionStatus(t *testing.T) {
}
}
err = t1.Close()
assert.NoError(t, err)
err = t2.Close()
assert.NoError(t, err)
@@ -524,13 +719,18 @@ func TestQuotaScans(t *testing.T) {
username := "username"
assert.True(t, QuotaScans.AddUserQuotaScan(username))
assert.False(t, QuotaScans.AddUserQuotaScan(username))
usersScans := QuotaScans.GetUsersQuotaScans()
if assert.Len(t, usersScans, 1) {
assert.Equal(t, usersScans[0].Username, username)
assert.Equal(t, QuotaScans.UserScans[0].StartTime, usersScans[0].StartTime)
QuotaScans.UserScans[0].StartTime = 0
assert.NotEqual(t, QuotaScans.UserScans[0].StartTime, usersScans[0].StartTime)
}
assert.True(t, QuotaScans.RemoveUserQuotaScan(username))
assert.False(t, QuotaScans.RemoveUserQuotaScan(username))
assert.Len(t, QuotaScans.GetUsersQuotaScans(), 0)
assert.Len(t, usersScans, 1)
folderName := "folder"
assert.True(t, QuotaScans.AddVFolderQuotaScan(folderName))
@@ -546,8 +746,13 @@ func TestQuotaScans(t *testing.T) {
func TestProxyProtocolVersion(t *testing.T) {
c := Configuration{
ProxyProtocol: 0,
}
_, err := c.GetProxyListener(nil)
if assert.Error(t, err) {
assert.Contains(t, err.Error(), "proxy protocol not configured")
}
c.ProxyProtocol = 1
proxyListener, err := c.GetProxyListener(nil)
assert.NoError(t, err)
assert.Nil(t, proxyListener.Policy)
@@ -594,6 +799,37 @@ func TestStartupHook(t *testing.T) {
Config.StartupHook = ""
}
func TestPostDisconnectHook(t *testing.T) {
Config.PostDisconnectHook = "http://127.0.0.1/"
remoteAddr := "127.0.0.1:80"
Config.checkPostDisconnectHook(remoteAddr, ProtocolHTTP, "", "", time.Now())
Config.checkPostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())
Config.PostDisconnectHook = "http://bar\x7f.com/"
Config.executePostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())
Config.PostDisconnectHook = fmt.Sprintf("http://%v", httpAddr)
Config.executePostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())
Config.PostDisconnectHook = "relativePath"
Config.executePostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())
if runtime.GOOS == osWindows {
Config.PostDisconnectHook = "C:\\a\\bad\\command"
Config.executePostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())
} else {
Config.PostDisconnectHook = "/invalid/path"
Config.executePostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())
hookCmd, err := exec.LookPath("true")
assert.NoError(t, err)
Config.PostDisconnectHook = hookCmd
Config.executePostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())
}
Config.PostDisconnectHook = ""
}
func TestPostConnectHook(t *testing.T) {
Config.PostConnectHook = ""
@@ -634,7 +870,9 @@ func TestPostConnectHook(t *testing.T) {
func TestCryptoConvertFileInfo(t *testing.T) {
name := "name"
fs, err := vfs.NewCryptFs("connID1", os.TempDir(), "", vfs.CryptFsConfig{Passphrase: kms.NewPlainSecret("secret")})
fs, err := vfs.NewCryptFs("connID1", os.TempDir(), "", vfs.CryptFsConfig{
Passphrase: kms.NewPlainSecret("secret"),
})
require.NoError(t, err)
cryptFs := fs.(*vfs.CryptFs)
info := vfs.NewFileInfo(name, true, 48, time.Now(), false)
@@ -654,15 +892,15 @@ func TestFolderCopy(t *testing.T) {
MappedPath: filepath.Clean(os.TempDir()),
UsedQuotaSize: 4096,
UsedQuotaFiles: 2,
LastQuotaUpdate: util.GetTimeAsMsSinceEpoch(time.Now()),
Users: []string{"user1", "user2"},
}
folderCopy := folder.GetACopy()
folder.ID = 2
folder.Users = []string{"user3"}
require.Len(t, folderCopy.Users, 2)
require.True(t, utils.IsStringInSlice("user1", folderCopy.Users))
require.True(t, utils.IsStringInSlice("user2", folderCopy.Users))
require.True(t, util.Contains(folderCopy.Users, "user1"))
require.True(t, util.Contains(folderCopy.Users, "user2"))
require.Equal(t, int64(1), folderCopy.ID)
require.Equal(t, folder.Name, folderCopy.Name)
require.Equal(t, folder.MappedPath, folderCopy.MappedPath)
@@ -678,7 +916,7 @@ func TestFolderCopy(t *testing.T) {
folderCopy = folder.GetACopy()
folder.FsConfig.CryptConfig.Passphrase = kms.NewEmptySecret()
require.Len(t, folderCopy.Users, 1)
require.True(t, utils.IsStringInSlice("user3", folderCopy.Users))
require.True(t, util.Contains(folderCopy.Users, "user3"))
require.Equal(t, int64(2), folderCopy.ID)
require.Equal(t, folder.Name, folderCopy.Name)
require.Equal(t, folder.MappedPath, folderCopy.MappedPath)
@@ -690,9 +928,11 @@ func TestFolderCopy(t *testing.T) {
func TestCachedFs(t *testing.T) {
user := dataprovider.User{
BaseUser: sdk.BaseUser{
HomeDir: filepath.Clean(os.TempDir()),
},
}
conn := NewBaseConnection("id", ProtocolSFTP, "", user)
conn := NewBaseConnection("id", ProtocolSFTP, "", "", user)
// changing the user should not affect the connection
user.HomeDir = filepath.Join(os.TempDir(), "temp")
err := os.Mkdir(user.HomeDir, os.ModePerm)
@@ -706,23 +946,27 @@ func TestCachedFs(t *testing.T) {
_, p, err = conn.GetFsAndResolvedPath("/")
assert.NoError(t, err)
assert.Equal(t, filepath.Clean(os.TempDir()), p)
// the filesystem is cached changing the provider will not affect the connection
conn.User.FsConfig.Provider = sdk.S3FilesystemProvider
_, p, err = conn.GetFsAndResolvedPath("/")
assert.NoError(t, err)
assert.Equal(t, filepath.Clean(os.TempDir()), p)
user = dataprovider.User{}
user.HomeDir = filepath.Join(os.TempDir(), "temp")
user.FsConfig.Provider = sdk.S3FilesystemProvider
_, err = user.GetFilesystem("")
assert.Error(t, err)
err = os.Remove(user.HomeDir)
assert.NoError(t, err)
}
func TestParseAllowedIPAndRanges(t *testing.T) {
_, err := util.ParseAllowedIPAndRanges([]string{"1.1.1.1", "not an ip"})
assert.Error(t, err)
_, err = util.ParseAllowedIPAndRanges([]string{"1.1.1.5", "192.168.1.0/240"})
assert.Error(t, err)
allow, err := util.ParseAllowedIPAndRanges([]string{"192.168.1.2", "172.16.0.0/24"})
assert.NoError(t, err)
assert.True(t, allow[0](net.ParseIP("192.168.1.2")))
assert.False(t, allow[0](net.ParseIP("192.168.2.2")))
@@ -730,6 +974,90 @@ func TestParseAllowedIPAndRanges(t *testing.T) {
assert.False(t, allow[1](net.ParseIP("172.16.1.1")))
}
func TestHideConfidentialData(t *testing.T) {
for _, provider := range []sdk.FilesystemProvider{sdk.LocalFilesystemProvider,
sdk.CryptedFilesystemProvider, sdk.S3FilesystemProvider, sdk.GCSFilesystemProvider,
sdk.AzureBlobFilesystemProvider, sdk.SFTPFilesystemProvider,
} {
u := dataprovider.User{
FsConfig: vfs.Filesystem{
Provider: provider,
},
}
u.PrepareForRendering()
f := vfs.BaseVirtualFolder{
FsConfig: vfs.Filesystem{
Provider: provider,
},
}
f.PrepareForRendering()
}
a := dataprovider.Admin{}
a.HideConfidentialData()
}
func TestUserPerms(t *testing.T) {
u := dataprovider.User{}
u.Permissions = make(map[string][]string)
u.Permissions["/"] = []string{dataprovider.PermUpload, dataprovider.PermDelete}
assert.True(t, u.HasAnyPerm([]string{dataprovider.PermRename, dataprovider.PermDelete}, "/"))
assert.False(t, u.HasAnyPerm([]string{dataprovider.PermRename, dataprovider.PermCreateDirs}, "/"))
u.Permissions["/"] = []string{dataprovider.PermDelete, dataprovider.PermCreateDirs}
assert.True(t, u.HasPermsDeleteAll("/"))
assert.False(t, u.HasPermsRenameAll("/"))
u.Permissions["/"] = []string{dataprovider.PermDeleteDirs, dataprovider.PermDeleteFiles, dataprovider.PermRenameDirs}
assert.True(t, u.HasPermsDeleteAll("/"))
assert.False(t, u.HasPermsRenameAll("/"))
u.Permissions["/"] = []string{dataprovider.PermDeleteDirs, dataprovider.PermRenameFiles, dataprovider.PermRenameDirs}
assert.False(t, u.HasPermsDeleteAll("/"))
assert.True(t, u.HasPermsRenameAll("/"))
}
func TestGetTLSVersion(t *testing.T) {
tlsVer := util.GetTLSVersion(0)
assert.Equal(t, uint16(tls.VersionTLS12), tlsVer)
tlsVer = util.GetTLSVersion(12)
assert.Equal(t, uint16(tls.VersionTLS12), tlsVer)
tlsVer = util.GetTLSVersion(2)
assert.Equal(t, uint16(tls.VersionTLS12), tlsVer)
tlsVer = util.GetTLSVersion(13)
assert.Equal(t, uint16(tls.VersionTLS13), tlsVer)
}
func TestCleanPath(t *testing.T) {
assert.Equal(t, "/", util.CleanPath("/"))
assert.Equal(t, "/", util.CleanPath("."))
assert.Equal(t, "/", util.CleanPath(""))
assert.Equal(t, "/", util.CleanPath("/."))
assert.Equal(t, "/", util.CleanPath("/a/.."))
assert.Equal(t, "/a", util.CleanPath("/a/"))
assert.Equal(t, "/a", util.CleanPath("a/"))
// filepath.ToSlash does not convert the backslash character on Unix systems,
// so os.PathSeparator is used to keep these tests Windows-compatible
bslash := string(os.PathSeparator)
assert.Equal(t, "/", util.CleanPath(bslash))
assert.Equal(t, "/", util.CleanPath(bslash+bslash))
assert.Equal(t, "/a", util.CleanPath(bslash+"a"+bslash))
assert.Equal(t, "/a", util.CleanPath("a"+bslash))
assert.Equal(t, "/a/b/c", util.CleanPath(bslash+"a"+bslash+bslash+"b"+bslash+bslash+"c"+bslash))
assert.Equal(t, "/C:/a", util.CleanPath("C:"+bslash+"a"))
}
func TestUserRecentActivity(t *testing.T) {
u := dataprovider.User{}
res := u.HasRecentActivity()
assert.False(t, res)
u.LastLogin = util.GetTimeAsMsSinceEpoch(time.Now())
res = u.HasRecentActivity()
assert.True(t, res)
u.LastLogin = util.GetTimeAsMsSinceEpoch(time.Now().Add(1 * time.Minute))
res = u.HasRecentActivity()
assert.False(t, res)
u.LastLogin = util.GetTimeAsMsSinceEpoch(time.Now().Add(1 * time.Second))
res = u.HasRecentActivity()
assert.True(t, res)
}
func BenchmarkBcryptHashing(b *testing.B) {
bcryptPassword := "bcryptpassword"
for i := 0; i < b.N; i++ {

View File

@@ -1,3 +1,17 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
@@ -10,19 +24,24 @@ import (
"sync/atomic"
"time"
ftpserver "github.com/fclairamb/ftpserverlib"
"github.com/pkg/sftp"
"github.com/sftpgo/sdk"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/vfs"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/vfs"
)
// BaseConnection defines common fields for a connection using any supported protocol
type BaseConnection struct {
// last activity for this connection.
// Since this field is accessed atomically we put it as first element of the struct to achieve 64 bit alignment
lastActivity int64
// unique ID for a transfer.
// This field is accessed atomically so we put it at the beginning of the struct to achieve 64 bit alignment
transferID int64
// Unique identifier for the connection
ID string
// user associated with this connection if any
@@ -31,22 +50,24 @@ type BaseConnection struct {
startTime time.Time
protocol string
remoteAddr string
localAddr string
sync.RWMutex
activeTransfers []ActiveTransfer
}
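// Editor's note: the field ordering above is deliberate. On 32-bit platforms sync/atomic
// requires 64-bit operands to be 8-byte aligned, and Go only guarantees that alignment
// for the first word of an allocated struct. A reduced sketch of the rule (illustrative
// type, not SFTPGo code):
type alignedCounters struct {
// accessed via atomic.AddInt64/LoadInt64, so they must be placed first
lastActivity int64
transferID   int64
// fields that are not accessed atomically can follow in any order
name string
}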
// NewBaseConnection returns a new BaseConnection
func NewBaseConnection(id, protocol, localAddr, remoteAddr string, user dataprovider.User) *BaseConnection {
connID := id
if util.Contains(supportedProtocols, protocol) {
connID = fmt.Sprintf("%s_%s", protocol, id)
}
user.UploadBandwidth, user.DownloadBandwidth = user.GetBandwidthForIP(util.GetIPFromRemoteAddress(remoteAddr), connID)
return &BaseConnection{
ID: connID,
User: user,
startTime: time.Now(),
protocol: protocol,
localAddr: localAddr,
remoteAddr: remoteAddr,
lastActivity: time.Now().UnixNano(),
transferID: 0,
@@ -54,13 +75,13 @@ func NewBaseConnection(id, protocol, remoteAddr string, user dataprovider.User)
}
// Log outputs a log entry to the configured logger
func (c *BaseConnection) Log(level logger.LogLevel, format string, v ...any) {
logger.Log(level, c.protocol, c.ID, format, v...)
}
// GetTransferID returns a unique transfer ID for this connection
func (c *BaseConnection) GetTransferID() int64 {
return atomic.AddInt64(&c.transferID, 1)
}
// GetID returns the connection ID
@@ -73,15 +94,25 @@ func (c *BaseConnection) GetUsername() string {
return c.User.Username
}
// GetMaxSessions returns the maximum number of concurrent sessions allowed
func (c *BaseConnection) GetMaxSessions() int {
return c.User.MaxSessions
}
// GetProtocol returns the protocol for the connection
func (c *BaseConnection) GetProtocol() string {
return c.protocol
}
// GetRemoteIP returns the remote ip address
func (c *BaseConnection) GetRemoteIP() string {
return util.GetIPFromRemoteAddress(c.remoteAddr)
}
// SetProtocol sets the protocol for this connection
func (c *BaseConnection) SetProtocol(protocol string) {
c.protocol = protocol
if util.Contains(supportedProtocols, c.protocol) {
c.ID = fmt.Sprintf("%v_%v", c.protocol, c.ID)
}
}
@@ -113,6 +144,28 @@ func (c *BaseConnection) AddTransfer(t ActiveTransfer) {
c.activeTransfers = append(c.activeTransfers, t)
c.Log(logger.LevelDebug, "transfer added, id: %v, active transfers: %v", t.GetID(), len(c.activeTransfers))
if t.HasSizeLimit() {
folderName := ""
if t.GetType() == TransferUpload {
vfolder, err := c.User.GetVirtualFolderForPath(path.Dir(t.GetVirtualPath()))
if err == nil {
if !vfolder.IsIncludedInUserQuota() {
folderName = vfolder.Name
}
}
}
go transfersChecker.AddTransfer(dataprovider.ActiveTransfer{
ID: t.GetID(),
Type: t.GetType(),
ConnID: c.ID,
Username: c.GetUsername(),
FolderName: folderName,
IP: c.GetRemoteIP(),
TruncatedSize: t.GetTruncatedSize(),
CreatedAt: util.GetTimeAsMsSinceEpoch(time.Now()),
UpdatedAt: util.GetTimeAsMsSinceEpoch(time.Now()),
})
}
}
// RemoveTransfer removes the specified transfer from the active ones
@@ -120,20 +173,34 @@ func (c *BaseConnection) RemoveTransfer(t ActiveTransfer) {
c.Lock()
defer c.Unlock()
if t.HasSizeLimit() {
go transfersChecker.RemoveTransfer(t.GetID(), c.ID)
}
for idx, transfer := range c.activeTransfers {
if transfer.GetID() == t.GetID() {
lastIdx := len(c.activeTransfers) - 1
c.activeTransfers[idx] = c.activeTransfers[lastIdx]
c.activeTransfers[lastIdx] = nil
c.activeTransfers = c.activeTransfers[:lastIdx]
c.Log(logger.LevelDebug, "transfer removed, id: %v active transfers: %v", t.GetID(), len(c.activeTransfers))
return
}
}
c.Log(logger.LevelWarn, "transfer to remove with id %v not found!", t.GetID())
}
// SignalTransferClose makes the transfer fail on the next read/write with the
// specified error
func (c *BaseConnection) SignalTransferClose(transferID int64, err error) {
c.RLock()
defer c.RUnlock()
for _, t := range c.activeTransfers {
if t.GetID() == transferID {
c.Log(logger.LevelInfo, "signal transfer close for transfer id %v", transferID)
t.SignalClose(err)
}
}
}
@@ -154,9 +221,12 @@ func (c *BaseConnection) GetTransfers() []ConnectionTransfer {
transfers = append(transfers, ConnectionTransfer{
ID: t.GetID(),
OperationType: operationType,
StartTime: util.GetTimeAsMsSinceEpoch(t.GetStartTime()),
Size: t.GetSize(),
VirtualPath: t.GetVirtualPath(),
HasSizeLimit: t.HasSizeLimit(),
ULSize: t.GetUploadedSize(),
DLSize: t.GetDownloadedSize(),
})
}
@@ -173,7 +243,7 @@ func (c *BaseConnection) SignalTransfersAbort() error {
}
for _, t := range c.activeTransfers {
t.SignalClose(ErrTransferAborted)
}
return nil
}
@@ -190,6 +260,18 @@ func (c *BaseConnection) getRealFsPath(fsPath string) string {
return fsPath
}
func (c *BaseConnection) setTimes(fsPath string, atime time.Time, mtime time.Time) bool {
c.RLock()
defer c.RUnlock()
for _, t := range c.activeTransfers {
if t.SetTimes(fsPath, atime, mtime) {
return true
}
}
return false
}
func (c *BaseConnection) truncateOpenHandle(fsPath string, size int64) (int64, error) {
c.RLock()
defer c.RUnlock()
@@ -215,17 +297,51 @@ func (c *BaseConnection) ListDir(virtualPath string) ([]os.FileInfo, error) {
}
files, err := fs.ReadDir(fsPath)
if err != nil {
c.Log(logger.LevelWarn, "error listing directory: %+v", err)
c.Log(logger.LevelDebug, "error listing directory: %+v", err)
return nil, c.GetFsError(fs, err)
}
return c.User.FilterListDir(files, virtualPath), nil
}
// CheckParentDirs tries to create the specified directory and any missing parent dirs
func (c *BaseConnection) CheckParentDirs(virtualPath string) error {
fs, err := c.User.GetFilesystemForPath(virtualPath, "")
if err != nil {
return err
}
if fs.HasVirtualFolders() {
return nil
}
if _, err := c.DoStat(virtualPath, 0, false); !c.IsNotExistError(err) {
return err
}
dirs := util.GetDirsForVirtualPath(virtualPath)
for idx := len(dirs) - 1; idx >= 0; idx-- {
fs, err = c.User.GetFilesystemForPath(dirs[idx], "")
if err != nil {
return err
}
if fs.HasVirtualFolders() {
continue
}
if err = c.createDirIfMissing(dirs[idx]); err != nil {
return fmt.Errorf("unable to check/create missing parent dir %#v for virtual path %#v: %w",
dirs[idx], virtualPath, err)
}
}
return nil
}
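// Editor's note: CheckParentDirs above iterates the ancestor chain from the end of the
// slice, so the shallowest missing directory is created first. A hypothetical stand-in
// for util.GetDirsForVirtualPath, assuming it returns the path and every parent up to "/":
func dirsForVirtualPath(virtualPath string) []string {
dirs := []string{}
for {
dirs = append(dirs, virtualPath)
if virtualPath == "/" {
break
}
virtualPath = path.Dir(virtualPath)
}
return dirs // e.g. "/a/b/c" -> ["/a/b/c", "/a/b", "/a", "/"]
}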
// CreateDir creates a new directory at the specified fsPath
func (c *BaseConnection) CreateDir(virtualPath string, checkFilePatterns bool) error {
if !c.User.HasPerm(dataprovider.PermCreateDirs, path.Dir(virtualPath)) {
return c.GetPermissionDeniedError()
}
if checkFilePatterns {
if ok, _ := c.User.IsFileAllowed(virtualPath); !ok {
return c.GetPermissionDeniedError()
}
}
if c.User.IsVirtualFolder(virtualPath) {
c.Log(logger.LevelWarn, "mkdir not allowed %#v is a virtual folder", virtualPath)
return c.GetPermissionDeniedError()
@@ -235,23 +351,25 @@ func (c *BaseConnection) CreateDir(virtualPath string) error {
return err
}
if err := fs.Mkdir(fsPath); err != nil {
c.Log(logger.LevelWarn, "error creating dir: %#v error: %+v", fsPath, err)
c.Log(logger.LevelError, "error creating dir: %#v error: %+v", fsPath, err)
return c.GetFsError(fs, err)
}
vfs.SetPathPermissions(fs, fsPath, c.User.GetUID(), c.User.GetGID())
logger.CommandLog(mkdirLogSender, fsPath, "", c.User.Username, "", c.ID, c.protocol, -1, -1, "", "", "", -1,
c.localAddr, c.remoteAddr)
ExecuteActionNotification(c, operationMkdir, fsPath, virtualPath, "", "", "", 0, nil)
return nil
}
// IsRemoveFileAllowed returns an error if removing this file is not allowed
func (c *BaseConnection) IsRemoveFileAllowed(virtualPath string) error {
if !c.User.HasAnyPerm([]string{dataprovider.PermDeleteFiles, dataprovider.PermDelete}, path.Dir(virtualPath)) {
return c.GetPermissionDeniedError()
}
if ok, policy := c.User.IsFileAllowed(virtualPath); !ok {
c.Log(logger.LevelDebug, "removing file %#v is not allowed", virtualPath)
return c.GetErrorForDeniedFile(policy)
}
return nil
}
@@ -263,17 +381,18 @@ func (c *BaseConnection) RemoveFile(fs vfs.Fs, fsPath, virtualPath string, info
}
size := info.Size()
actionErr := ExecutePreAction(c, operationPreDelete, fsPath, virtualPath, size, 0)
if actionErr == nil {
c.Log(logger.LevelDebug, "remove for path %#v handled by pre-delete action", fsPath)
} else {
if err := fs.Remove(fsPath, false); err != nil {
c.Log(logger.LevelWarn, "failed to remove a file/symlink %#v: %+v", fsPath, err)
c.Log(logger.LevelError, "failed to remove file/symlink %#v: %+v", fsPath, err)
return c.GetFsError(fs, err)
}
}
logger.CommandLog(removeLogSender, fsPath, "", c.User.Username, "", c.ID, c.protocol, -1, -1, "", "", "", -1,
c.localAddr, c.remoteAddr)
if info.Mode()&os.ModeSymlink == 0 {
vfolder, err := c.User.GetVirtualFolderForPath(path.Dir(virtualPath))
if err == nil {
@@ -286,7 +405,7 @@ func (c *BaseConnection) RemoveFile(fs vfs.Fs, fsPath, virtualPath string, info
}
}
if actionErr != nil {
ExecuteActionNotification(c, operationDelete, fsPath, virtualPath, "", "", "", size, nil)
}
return nil
}
@@ -309,9 +428,13 @@ func (c *BaseConnection) IsRemoveDirAllowed(fs vfs.Fs, fsPath, virtualPath strin
c.Log(logger.LevelWarn, "removing a directory mapped as virtual folder is not allowed: %#v", fsPath)
return c.GetPermissionDeniedError()
}
if !c.User.HasAnyPerm([]string{dataprovider.PermDeleteDirs, dataprovider.PermDelete}, path.Dir(virtualPath)) {
return c.GetPermissionDeniedError()
}
if ok, policy := c.User.IsFileAllowed(virtualPath); !ok {
c.Log(logger.LevelDebug, "removing directory %#v is not allowed", virtualPath)
return c.GetErrorForDeniedFile(policy)
}
return nil
}
@@ -331,25 +454,30 @@ func (c *BaseConnection) RemoveDir(virtualPath string) error {
if fs.IsNotExist(err) && fs.HasVirtualFolders() {
return nil
}
c.Log(logger.LevelWarn, "failed to remove a dir %#v: stat error: %+v", fsPath, err)
c.Log(logger.LevelError, "failed to remove a dir %#v: stat error: %+v", fsPath, err)
return c.GetFsError(fs, err)
}
if !fi.IsDir() || fi.Mode()&os.ModeSymlink != 0 {
c.Log(logger.LevelDebug, "cannot remove %#v is not a directory", fsPath)
c.Log(logger.LevelError, "cannot remove %#v is not a directory", fsPath)
return c.GetGenericError(nil)
}
if err := fs.Remove(fsPath, true); err != nil {
c.Log(logger.LevelWarn, "failed to remove directory %#v: %+v", fsPath, err)
c.Log(logger.LevelError, "failed to remove directory %#v: %+v", fsPath, err)
return c.GetFsError(fs, err)
}
logger.CommandLog(rmdirLogSender, fsPath, "", c.User.Username, "", c.ID, c.protocol, -1, -1, "", "", "", -1,
c.localAddr, c.remoteAddr)
ExecuteActionNotification(c, operationRmdir, fsPath, virtualPath, "", "", "", 0, nil)
return nil
}
// Rename renames (moves) virtualSourcePath to virtualTargetPath
func (c *BaseConnection) Rename(virtualSourcePath, virtualTargetPath string) error {
if virtualSourcePath == virtualTargetPath {
return fmt.Errorf("the rename source and target cannot be the same: %w", c.GetOpUnsupportedError())
}
fsSrc, fsSourcePath, err := c.GetFsAndResolvedPath(virtualSourcePath)
if err != nil {
return err
@@ -398,14 +526,15 @@ func (c *BaseConnection) Rename(virtualSourcePath, virtualTargetPath string) err
return c.GetGenericError(ErrQuotaExceeded)
}
if err := fsSrc.Rename(fsSourcePath, fsTargetPath); err != nil {
c.Log(logger.LevelWarn, "failed to rename %#v -> %#v: %+v", fsSourcePath, fsTargetPath, err)
c.Log(logger.LevelError, "failed to rename %#v -> %#v: %+v", fsSourcePath, fsTargetPath, err)
return c.GetFsError(fsSrc, err)
}
vfs.SetPathPermissions(fsDst, fsTargetPath, c.User.GetUID(), c.User.GetGID())
c.updateQuotaAfterRename(fsDst, virtualSourcePath, virtualTargetPath, fsTargetPath, initialSize) //nolint:errcheck
logger.CommandLog(renameLogSender, fsSourcePath, fsTargetPath, c.User.Username, "", c.ID, c.protocol, -1, -1,
"", "", "", -1, c.remoteAddr)
ExecuteActionNotification(&c.User, operationRename, fsSourcePath, virtualSourcePath, fsTargetPath, "", c.protocol, 0, nil)
"", "", "", -1, c.localAddr, c.remoteAddr)
ExecuteActionNotification(c, operationRename, fsSourcePath, virtualSourcePath, fsTargetPath, virtualTargetPath,
"", 0, nil)
return nil
}
@@ -426,22 +555,31 @@ func (c *BaseConnection) CreateSymlink(virtualSourcePath, virtualTargetPath stri
return c.GetFsError(fs, err)
}
if fs.GetRelativePath(fsSourcePath) == "/" {
c.Log(logger.LevelWarn, "symlinking root dir is not allowed")
c.Log(logger.LevelError, "symlinking root dir is not allowed")
return c.GetPermissionDeniedError()
}
if fs.GetRelativePath(fsTargetPath) == "/" {
c.Log(logger.LevelWarn, "symlinking to root dir is not allowed")
c.Log(logger.LevelError, "symlinking to root dir is not allowed")
return c.GetPermissionDeniedError()
}
if !c.User.HasPerm(dataprovider.PermCreateSymlinks, path.Dir(virtualTargetPath)) {
return c.GetPermissionDeniedError()
}
ok, policy := c.User.IsFileAllowed(virtualSourcePath)
if !ok && policy == sdk.DenyPolicyHide {
c.Log(logger.LevelError, "symlink source path %#v is not allowed", virtualSourcePath)
return c.GetNotExistError()
}
if ok, _ = c.User.IsFileAllowed(virtualTargetPath); !ok {
c.Log(logger.LevelError, "symlink target path %#v is not allowed", virtualTargetPath)
return c.GetPermissionDeniedError()
}
if err := fs.Symlink(fsSourcePath, fsTargetPath); err != nil {
c.Log(logger.LevelWarn, "failed to create symlink %#v -> %#v: %+v", fsSourcePath, fsTargetPath, err)
c.Log(logger.LevelError, "failed to create symlink %#v -> %#v: %+v", fsSourcePath, fsTargetPath, err)
return c.GetFsError(fs, err)
}
logger.CommandLog(symlinkLogSender, fsSourcePath, fsTargetPath, c.User.Username, "", c.ID, c.protocol, -1, -1, "",
"", "", -1, c.remoteAddr)
"", "", -1, c.localAddr, c.remoteAddr)
return nil
}
@@ -456,13 +594,19 @@ func (c *BaseConnection) getPathForSetStatPerms(fs vfs.Fs, fsPath, virtualPath s
}
// DoStat executes a Stat if mode = 0, a Lstat if mode = 1
func (c *BaseConnection) DoStat(virtualPath string, mode int, checkFilePatterns bool) (os.FileInfo, error) {
// for some vfs we don't create intermediary folders so we cannot simply check
// if virtualPath is a virtual folder
vfolders := c.User.GetVirtualFoldersInPath(path.Dir(virtualPath))
if _, ok := vfolders[virtualPath]; ok {
return vfs.NewFileInfo(virtualPath, true, 0, time.Now(), false), nil
}
if checkFilePatterns {
ok, policy := c.User.IsFileAllowed(virtualPath)
if !ok && policy == sdk.DenyPolicyHide {
return nil, c.GetNotExistError()
}
}
var info os.FileInfo
@@ -477,6 +621,7 @@ func (c *BaseConnection) DoStat(virtualPath string, mode int) (os.FileInfo, erro
info, err = fs.Stat(c.getRealFsPath(fsPath))
}
if err != nil {
c.Log(logger.LevelError, "stat error for path %#v: %+v", virtualPath, err)
return info, c.GetFsError(fs, err)
}
if vfs.IsCryptOsFs(fs) {
@@ -485,6 +630,14 @@ func (c *BaseConnection) DoStat(virtualPath string, mode int) (os.FileInfo, erro
return info, nil
}
func (c *BaseConnection) createDirIfMissing(name string) error {
_, err := c.DoStat(name, 0, false)
if c.IsNotExistError(err) {
return c.CreateDir(name, false)
}
return err
}
func (c *BaseConnection) ignoreSetStat(fs vfs.Fs) bool {
if Config.SetstatMode == 1 {
return true
@@ -503,11 +656,11 @@ func (c *BaseConnection) handleChmod(fs vfs.Fs, fsPath, pathForPerms string, att
return nil
}
if err := fs.Chmod(c.getRealFsPath(fsPath), attributes.Mode); err != nil {
c.Log(logger.LevelWarn, "failed to chmod path %#v, mode: %v, err: %+v", fsPath, attributes.Mode.String(), err)
c.Log(logger.LevelError, "failed to chmod path %#v, mode: %v, err: %+v", fsPath, attributes.Mode.String(), err)
return c.GetFsError(fs, err)
}
logger.CommandLog(chmodLogSender, fsPath, "", c.User.Username, attributes.Mode.String(), c.ID, c.protocol,
-1, -1, "", "", "", -1, c.remoteAddr)
-1, -1, "", "", "", -1, c.localAddr, c.remoteAddr)
return nil
}
@@ -519,12 +672,12 @@ func (c *BaseConnection) handleChown(fs vfs.Fs, fsPath, pathForPerms string, att
return nil
}
if err := fs.Chown(c.getRealFsPath(fsPath), attributes.UID, attributes.GID); err != nil {
c.Log(logger.LevelWarn, "failed to chown path %#v, uid: %v, gid: %v, err: %+v", fsPath, attributes.UID,
c.Log(logger.LevelError, "failed to chown path %#v, uid: %v, gid: %v, err: %+v", fsPath, attributes.UID,
attributes.GID, err)
return c.GetFsError(fs, err)
}
logger.CommandLog(chownLogSender, fsPath, "", c.User.Username, "", c.ID, c.protocol, attributes.UID, attributes.GID,
"", "", "", -1, c.remoteAddr)
"", "", "", -1, c.localAddr, c.remoteAddr)
return nil
}
@@ -532,39 +685,53 @@ func (c *BaseConnection) handleChtimes(fs vfs.Fs, fsPath, pathForPerms string, a
if !c.User.HasPerm(dataprovider.PermChtimes, pathForPerms) {
return c.GetPermissionDeniedError()
}
if Config.SetstatMode == 1 {
return nil
}
isUploading := c.setTimes(fsPath, attributes.Atime, attributes.Mtime)
if err := fs.Chtimes(c.getRealFsPath(fsPath), attributes.Atime, attributes.Mtime, isUploading); err != nil {
c.setTimes(fsPath, time.Time{}, time.Time{})
if errors.Is(err, vfs.ErrVfsUnsupported) && Config.SetstatMode == 2 {
return nil
}
c.Log(logger.LevelError, "failed to chtimes for path %#v, access time: %v, modification time: %v, err: %+v",
fsPath, attributes.Atime, attributes.Mtime, err)
return c.GetFsError(fs, err)
}
accessTimeString := attributes.Atime.Format(chtimesFormat)
modificationTimeString := attributes.Mtime.Format(chtimesFormat)
logger.CommandLog(chtimesLogSender, fsPath, "", c.User.Username, "", c.ID, c.protocol, -1, -1,
accessTimeString, modificationTimeString, "", -1, c.remoteAddr)
accessTimeString, modificationTimeString, "", -1, c.localAddr, c.remoteAddr)
return nil
}
// SetStat set StatAttributes for the specified fsPath
func (c *BaseConnection) SetStat(virtualPath string, attributes *StatAttributes) error {
if ok, policy := c.User.IsFileAllowed(virtualPath); !ok {
return c.GetErrorForDeniedFile(policy)
}
fs, fsPath, err := c.GetFsAndResolvedPath(virtualPath)
if err != nil {
return err
}
pathForPerms := c.getPathForSetStatPerms(fs, fsPath, virtualPath)
if attributes.Flags&StatAttrTimes != 0 {
if err = c.handleChtimes(fs, fsPath, pathForPerms, attributes); err != nil {
return err
}
}
if attributes.Flags&StatAttrPerms != 0 {
if err = c.handleChmod(fs, fsPath, pathForPerms, attributes); err != nil {
return err
}
}
if attributes.Flags&StatAttrUIDGID != 0 {
if err = c.handleChown(fs, fsPath, pathForPerms, attributes); err != nil {
return err
}
}
if attributes.Flags&StatAttrSize != 0 {
@@ -572,12 +739,12 @@ func (c *BaseConnection) SetStat(virtualPath string, attributes *StatAttributes)
return c.GetPermissionDeniedError()
}
if err = c.truncateFile(fs, fsPath, virtualPath, attributes.Size); err != nil {
c.Log(logger.LevelError, "failed to truncate path %#v, size: %v, err: %+v", fsPath, attributes.Size, err)
return c.GetFsError(fs, err)
}
logger.CommandLog(truncateLogSender, fsPath, "", c.User.Username, "", c.ID, c.protocol, -1, -1, "", "",
"", attributes.Size, c.remoteAddr)
"", attributes.Size, c.localAddr, c.remoteAddr)
}
return nil
@@ -615,12 +782,6 @@ func (c *BaseConnection) truncateFile(fs vfs.Fs, fsPath, virtualPath string, siz
}
func (c *BaseConnection) checkRecursiveRenameDirPermissions(fsSrc, fsDst vfs.Fs, sourcePath, targetPath string) error {
err := fsSrc.Walk(sourcePath, func(walkedPath string, info os.FileInfo, err error) error {
if err != nil {
return c.GetFsError(fsSrc, err)
@@ -632,12 +793,8 @@ func (c *BaseConnection) checkRecursiveRenameDirPermissions(fsSrc, fsDst vfs.Fs,
// inside the parent path was checked. If the current dir has no subdirs with defined permissions inside it
// and it has all the possible permissions we can stop scanning
if !c.User.HasPermissionsInside(path.Dir(virtualSrcPath)) && !c.User.HasPermissionsInside(path.Dir(virtualDstPath)) {
if c.User.HasPermsRenameAll(path.Dir(virtualSrcPath)) &&
c.User.HasPermsRenameAll(path.Dir(virtualDstPath)) {
return ErrSkipPermissionsCheck
}
}
@@ -655,21 +812,29 @@ func (c *BaseConnection) checkRecursiveRenameDirPermissions(fsSrc, fsDst vfs.Fs,
}
func (c *BaseConnection) hasRenamePerms(virtualSourcePath, virtualTargetPath string, fi os.FileInfo) bool {
if c.User.HasPermsRenameAll(path.Dir(virtualSourcePath)) &&
c.User.HasPermsRenameAll(path.Dir(virtualTargetPath)) {
return true
}
if fi == nil {
// we don't know if this is a file or a directory and we don't have all the rename perms, return false
return false
}
if fi.IsDir() {
perms := []string{
dataprovider.PermRenameDirs,
dataprovider.PermRename,
}
return c.User.HasAnyPerm(perms, path.Dir(virtualSourcePath)) &&
c.User.HasAnyPerm(perms, path.Dir(virtualTargetPath))
}
// file or symlink
perms := []string{
dataprovider.PermRenameFiles,
dataprovider.PermRename,
}
return c.User.HasAnyPerm(perms, path.Dir(virtualSourcePath)) &&
c.User.HasAnyPerm(perms, path.Dir(virtualTargetPath))
}
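// Editor's note: a worked illustration of the granular checks above, using an assumed
// permissions map rather than a real user. With only PermRenameFiles granted on "/":
// renaming "/a.txt" to "/b.txt" succeeds (a file rename), while renaming "/dir1" to
// "/dir2" is denied because neither PermRenameDirs nor the coarse PermRename is present
// on the parent directory of the source or the target.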
func (c *BaseConnection) isRenamePermitted(fsSrc, fsDst vfs.Fs, fsSourcePath, fsTargetPath, virtualSourcePath, virtualTargetPath string, fi os.FileInfo) bool {
@@ -694,12 +859,12 @@ func (c *BaseConnection) isRenamePermitted(fsSrc, fsDst vfs.Fs, fsSourcePath, fs
c.Log(logger.LevelWarn, "renaming a virtual folder is not allowed")
return false
}
isSrcAllowed, _ := c.User.IsFileAllowed(virtualSourcePath)
isDstAllowed, _ := c.User.IsFileAllowed(virtualTargetPath)
if !isSrcAllowed || !isDstAllowed {
c.Log(logger.LevelDebug, "renaming source: %#v to target: %#v not allowed", virtualSourcePath,
virtualTargetPath)
return false
}
return c.hasRenamePerms(virtualSourcePath, virtualTargetPath, fi)
}
@@ -726,7 +891,7 @@ func (c *BaseConnection) hasSpaceForRename(fs vfs.Fs, virtualSourcePath, virtual
// rename between user root dir and a virtual folder included in user quota
return true
}
quotaResult := c.HasSpace(true, false, virtualTargetPath)
quotaResult, _ := c.HasSpace(true, false, virtualTargetPath)
return c.hasSpaceForCrossRename(fs, quotaResult, initialSize, fsSourcePath)
}
@@ -738,7 +903,7 @@ func (c *BaseConnection) hasSpaceForCrossRename(fs vfs.Fs, quotaResult vfs.Quota
}
fi, err := fs.Lstat(sourcePath)
if err != nil {
c.Log(logger.LevelWarn, "cross rename denied, stat error for path %#v: %v", sourcePath, err)
c.Log(logger.LevelError, "cross rename denied, stat error for path %#v: %v", sourcePath, err)
return false
}
var sizeDiff int64
@@ -753,7 +918,7 @@ func (c *BaseConnection) hasSpaceForCrossRename(fs vfs.Fs, quotaResult vfs.Quota
} else if fi.IsDir() {
filesDiff, sizeDiff, err = fs.GetDirSize(sourcePath)
if err != nil {
c.Log(logger.LevelWarn, "cross rename denied, error getting size for directory %#v: %v", sourcePath, err)
c.Log(logger.LevelError, "cross rename denied, error getting size for directory %#v: %v", sourcePath, err)
return false
}
}
@@ -788,7 +953,9 @@ func (c *BaseConnection) hasSpaceForCrossRename(fs vfs.Fs, quotaResult vfs.Quota
// GetMaxWriteSize returns the allowed size for an upload or an error
// if not enough space is available for a resume/append
func (c *BaseConnection) GetMaxWriteSize(quotaResult vfs.QuotaCheckResult, isResume bool, fileSize int64, isUploadResumeSupported bool) (int64, error) {
func (c *BaseConnection) GetMaxWriteSize(quotaResult vfs.QuotaCheckResult, isResume bool, fileSize int64,
isUploadResumeSupported bool,
) (int64, error) {
maxWriteSize := quotaResult.GetRemainingSize()
if isResume {
@@ -796,7 +963,7 @@ func (c *BaseConnection) GetMaxWriteSize(quotaResult vfs.QuotaCheckResult, isRes
return 0, c.GetOpUnsupportedError()
}
if c.User.Filters.MaxUploadFileSize > 0 && c.User.Filters.MaxUploadFileSize <= fileSize {
return 0, ErrQuotaExceeded
return 0, c.GetQuotaExceededError()
}
if c.User.Filters.MaxUploadFileSize > 0 {
maxUploadSize := c.User.Filters.MaxUploadFileSize - fileSize
@@ -816,8 +983,49 @@ func (c *BaseConnection) GetMaxWriteSize(quotaResult vfs.QuotaCheckResult, isRes
return maxWriteSize, nil
}
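// Hedged usage sketch (hypothetical limit): when the per-user MaxUploadFileSize
// is already reached, GetMaxWriteSize now returns a protocol-aware error that
// can be checked with IsQuotaExceededError regardless of the protocol in use.
func exampleGetMaxWriteSize(conn *BaseConnection) {
	conn.User.Filters.MaxUploadFileSize = 100
	_, err := conn.GetMaxWriteSize(vfs.QuotaCheckResult{HasSpace: true}, true, 100, true)
	_ = conn.IsQuotaExceededError(err) // true for SFTP, FTP and the other protocols
}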
// GetTransferQuota returns the data transfers quota
func (c *BaseConnection) GetTransferQuota() dataprovider.TransferQuota {
result, _, _ := c.checkUserQuota()
return result
}
func (c *BaseConnection) checkUserQuota() (dataprovider.TransferQuota, int, int64) {
clientIP := c.GetRemoteIP()
ul, dl, total := c.User.GetDataTransferLimits(clientIP)
result := dataprovider.TransferQuota{
ULSize: ul,
DLSize: dl,
TotalSize: total,
AllowedULSize: 0,
AllowedDLSize: 0,
AllowedTotalSize: 0,
}
if !c.User.HasTransferQuotaRestrictions() {
return result, -1, -1
}
usedFiles, usedSize, usedULSize, usedDLSize, err := dataprovider.GetUsedQuota(c.User.Username)
if err != nil {
c.Log(logger.LevelError, "error getting used quota for %#v: %v", c.User.Username, err)
result.AllowedTotalSize = -1
return result, -1, -1
}
if result.TotalSize > 0 {
result.AllowedTotalSize = result.TotalSize - (usedULSize + usedDLSize)
}
if result.ULSize > 0 {
result.AllowedULSize = result.ULSize - usedULSize
}
if result.DLSize > 0 {
result.AllowedDLSize = result.DLSize - usedDLSize
}
return result, usedFiles, usedSize
}
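// Illustrative arithmetic for the allowances computed above (hypothetical
// values): with a 100 MB total transfer limit, 30 MB already uploaded and
// 20 MB already downloaded, 50 MB of combined transfer remain.
func exampleTransferQuotaMath() {
	tq := dataprovider.TransferQuota{TotalSize: 100 * 1048576}
	usedULSize, usedDLSize := int64(30*1048576), int64(20*1048576)
	tq.AllowedTotalSize = tq.TotalSize - (usedULSize + usedDLSize) // 50 MB left
}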
// HasSpace checks user's quota usage
func (c *BaseConnection) HasSpace(checkFiles, getUsage bool, requestPath string) vfs.QuotaCheckResult {
func (c *BaseConnection) HasSpace(checkFiles, getUsage bool, requestPath string) (vfs.QuotaCheckResult,
dataprovider.TransferQuota,
) {
result := vfs.QuotaCheckResult{
HasSpace: true,
AllowedSize: 0,
@@ -827,32 +1035,39 @@ func (c *BaseConnection) HasSpace(checkFiles, getUsage bool, requestPath string)
QuotaSize: 0,
QuotaFiles: 0,
}
if dataprovider.GetQuotaTracking() == 0 {
return result
return result, dataprovider.TransferQuota{}
}
transferQuota, usedFiles, usedSize := c.checkUserQuota()
var err error
var vfolder vfs.VirtualFolder
vfolder, err = c.User.GetVirtualFolderForPath(path.Dir(requestPath))
if err == nil && !vfolder.IsIncludedInUserQuota() {
if vfolder.HasNoQuotaRestrictions(checkFiles) && !getUsage {
return result
return result, transferQuota
}
result.QuotaSize = vfolder.QuotaSize
result.QuotaFiles = vfolder.QuotaFiles
result.UsedFiles, result.UsedSize, err = dataprovider.GetUsedVirtualFolderQuota(vfolder.Name)
} else {
if c.User.HasNoQuotaRestrictions(checkFiles) && !getUsage {
return result
return result, transferQuota
}
result.QuotaSize = c.User.QuotaSize
result.QuotaFiles = c.User.QuotaFiles
result.UsedFiles, result.UsedSize, err = dataprovider.GetUsedQuota(c.User.Username)
if usedSize == -1 {
result.UsedFiles, result.UsedSize, _, _, err = dataprovider.GetUsedQuota(c.User.Username)
} else {
err = nil
result.UsedFiles = usedFiles
result.UsedSize = usedSize
}
}
if err != nil {
c.Log(logger.LevelWarn, "error getting used quota for %#v request path %#v: %v", c.User.Username, requestPath, err)
c.Log(logger.LevelError, "error getting used quota for %#v request path %#v: %v", c.User.Username, requestPath, err)
result.HasSpace = false
return result
return result, transferQuota
}
result.AllowedFiles = result.QuotaFiles - result.UsedFiles
result.AllowedSize = result.QuotaSize - result.UsedSize
@@ -861,9 +1076,9 @@ func (c *BaseConnection) HasSpace(checkFiles, getUsage bool, requestPath string)
c.Log(logger.LevelDebug, "quota exceed for user %#v, request path %#v, num files: %v/%v, size: %v/%v check files: %v",
c.User.Username, requestPath, result.UsedFiles, result.QuotaFiles, result.UsedSize, result.QuotaSize, checkFiles)
result.HasSpace = false
return result
return result, transferQuota
}
return result
return result, transferQuota
}
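// Hedged sketch of the new two-value signature (hypothetical path and error
// handling): callers now receive the data transfer quota together with the
// storage quota result, instead of querying it separately.
func exampleHasSpace(conn *BaseConnection) error {
	quotaResult, transferQuota := conn.HasSpace(true, false, "/adir/afile")
	if !quotaResult.HasSpace {
		return conn.GetQuotaExceededError()
	}
	// transferQuota carries the remaining allowances computed in checkUserQuota
	if transferQuota.TotalSize > 0 && transferQuota.AllowedTotalSize <= 0 {
		return conn.GetQuotaExceededError()
	}
	return nil
}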
// returns true if this is a rename on the same fs or local virtual folders
@@ -878,22 +1093,22 @@ func (c *BaseConnection) isLocalOrSameFolderRename(virtualSourcePath, virtualTar
return true
}
// we have different folders, only local fs is supported
if sourceFolder.FsConfig.Provider == vfs.LocalFilesystemProvider &&
dstFolder.FsConfig.Provider == vfs.LocalFilesystemProvider {
if sourceFolder.FsConfig.Provider == sdk.LocalFilesystemProvider &&
dstFolder.FsConfig.Provider == sdk.LocalFilesystemProvider {
return true
}
return false
}
if c.User.FsConfig.Provider != vfs.LocalFilesystemProvider {
if c.User.FsConfig.Provider != sdk.LocalFilesystemProvider {
return false
}
if errSrc == nil {
if sourceFolder.FsConfig.Provider == vfs.LocalFilesystemProvider {
if sourceFolder.FsConfig.Provider == sdk.LocalFilesystemProvider {
return true
}
}
if errDst == nil {
if dstFolder.FsConfig.Provider == vfs.LocalFilesystemProvider {
if dstFolder.FsConfig.Provider == sdk.LocalFilesystemProvider {
return true
}
}
@@ -1001,7 +1216,7 @@ func (c *BaseConnection) updateQuotaAfterRename(fs vfs.Fs, virtualSourcePath, vi
if fi.Mode().IsDir() {
numFiles, filesSize, err = fs.GetDirSize(targetPath)
if err != nil {
c.Log(logger.LevelWarn, "failed to update quota after rename, error scanning moved folder %#v: %v",
c.Log(logger.LevelError, "failed to update quota after rename, error scanning moved folder %#v: %v",
targetPath, err)
return err
}
@@ -1009,7 +1224,7 @@ func (c *BaseConnection) updateQuotaAfterRename(fs vfs.Fs, virtualSourcePath, vi
filesSize = fi.Size()
}
} else {
c.Log(logger.LevelWarn, "failed to update quota after rename, file %#v stat error: %+v", targetPath, err)
c.Log(logger.LevelError, "failed to update quota after rename, file %#v stat error: %+v", targetPath, err)
return err
}
if errSrc == nil && errDst == nil {
@@ -1024,12 +1239,34 @@ func (c *BaseConnection) updateQuotaAfterRename(fs vfs.Fs, virtualSourcePath, vi
return nil
}
// IsNotExistError returns true if the specified fs error is not exist for the connection protocol
func (c *BaseConnection) IsNotExistError(err error) bool {
switch c.protocol {
case ProtocolSFTP:
return errors.Is(err, sftp.ErrSSHFxNoSuchFile)
case ProtocolWebDAV, ProtocolFTP, ProtocolHTTP, ProtocolOIDC, ProtocolHTTPShare, ProtocolDataRetention:
return errors.Is(err, os.ErrNotExist)
default:
return errors.Is(err, ErrNotExist)
}
}
// GetErrorForDeniedFile return permission denied or not exist error based on the specified policy
func (c *BaseConnection) GetErrorForDeniedFile(policy int) error {
switch policy {
case sdk.DenyPolicyHide:
return c.GetNotExistError()
default:
return c.GetPermissionDeniedError()
}
}
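// Sketch (hypothetical caller): with the "hide" deny policy a denied file
// looks nonexistent to the client, while the default policy reports
// permission denied.
func exampleDeniedFileError(conn *BaseConnection) {
	err := conn.GetErrorForDeniedFile(sdk.DenyPolicyHide)
	_ = conn.IsNotExistError(err) // true
}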
// GetPermissionDeniedError returns an appropriate permission denied error for the connection protocol
func (c *BaseConnection) GetPermissionDeniedError() error {
switch c.protocol {
case ProtocolSFTP:
return sftp.ErrSSHFxPermissionDenied
case ProtocolWebDAV, ProtocolFTP, ProtocolHTTP:
case ProtocolWebDAV, ProtocolFTP, ProtocolHTTP, ProtocolOIDC, ProtocolHTTPShare, ProtocolDataRetention:
return os.ErrPermission
default:
return ErrPermissionDenied
@@ -1041,7 +1278,7 @@ func (c *BaseConnection) GetNotExistError() error {
switch c.protocol {
case ProtocolSFTP:
return sftp.ErrSSHFxNoSuchFile
case ProtocolWebDAV, ProtocolFTP, ProtocolHTTP:
case ProtocolWebDAV, ProtocolFTP, ProtocolHTTP, ProtocolOIDC, ProtocolHTTPShare, ProtocolDataRetention:
return os.ErrNotExist
default:
return ErrNotExist
@@ -1058,12 +1295,66 @@ func (c *BaseConnection) GetOpUnsupportedError() error {
}
}
func getQuotaExceededError(protocol string) error {
switch protocol {
case ProtocolSFTP:
return fmt.Errorf("%w: %v", sftp.ErrSSHFxFailure, ErrQuotaExceeded.Error())
case ProtocolFTP:
return ftpserver.ErrStorageExceeded
default:
return ErrQuotaExceeded
}
}
func getReadQuotaExceededError(protocol string) error {
switch protocol {
case ProtocolSFTP:
return fmt.Errorf("%w: %v", sftp.ErrSSHFxFailure, ErrReadQuotaExceeded.Error())
default:
return ErrReadQuotaExceeded
}
}
// GetQuotaExceededError returns an appropriate storage limit exceeded error for the connection protocol
func (c *BaseConnection) GetQuotaExceededError() error {
return getQuotaExceededError(c.protocol)
}
// GetReadQuotaExceededError returns an appropriate read quota limit exceeded error for the connection protocol
func (c *BaseConnection) GetReadQuotaExceededError() error {
return getReadQuotaExceededError(c.protocol)
}
// IsQuotaExceededError returns true if the given error is a quota exceeded error
func (c *BaseConnection) IsQuotaExceededError(err error) bool {
switch c.protocol {
case ProtocolSFTP:
if err == nil {
return false
}
if errors.Is(err, ErrQuotaExceeded) {
return true
}
return errors.Is(err, sftp.ErrSSHFxFailure) && strings.Contains(err.Error(), ErrQuotaExceeded.Error())
case ProtocolFTP:
return errors.Is(err, ftpserver.ErrStorageExceeded) || errors.Is(err, ErrQuotaExceeded)
default:
return errors.Is(err, ErrQuotaExceeded)
}
}
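// Sketch: the protocol specific quota errors round-trip through
// IsQuotaExceededError, so callers no longer need to compare against
// ErrQuotaExceeded directly.
func exampleQuotaErrorRoundTrip(conn *BaseConnection) {
	for _, protocol := range []string{ProtocolSFTP, ProtocolFTP, ProtocolWebDAV} {
		conn.SetProtocol(protocol)
		err := conn.GetQuotaExceededError()
		_ = conn.IsQuotaExceededError(err) // always true
	}
}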
// GetGenericError returns an appropriate generic error for the connection protocol
func (c *BaseConnection) GetGenericError(err error) error {
switch c.protocol {
case ProtocolSFTP:
if err == vfs.ErrStorageSizeUnavailable {
return sftp.ErrSSHFxOpUnsupported
return fmt.Errorf("%w: %v", sftp.ErrSSHFxOpUnsupported, err.Error())
}
if err != nil {
if e, ok := err.(*os.PathError); ok {
return fmt.Errorf("%w: %v %v", sftp.ErrSSHFxFailure, e.Op, e.Err.Error())
}
return fmt.Errorf("%w: %v", sftp.ErrSSHFxFailure, err.Error())
}
return sftp.ErrSSHFxFailure
default:

common/connection_test.go

@@ -1,3 +1,17 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
@@ -9,10 +23,14 @@ import (
"time"
"github.com/pkg/sftp"
"github.com/rs/xid"
"github.com/sftpgo/sdk"
"github.com/stretchr/testify/assert"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/vfs"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/vfs"
)
// MockOsFs mockable OsFs
@@ -22,19 +40,23 @@ type MockOsFs struct {
}
// Name returns the name for the Fs implementation
func (fs MockOsFs) Name() string {
func (fs *MockOsFs) Name() string {
return "mockOsFs"
}
// HasVirtualFolders returns true if folders are emulated
func (fs MockOsFs) HasVirtualFolders() bool {
func (fs *MockOsFs) HasVirtualFolders() bool {
return fs.hasVirtualFolders
}
func (fs MockOsFs) IsUploadResumeSupported() bool {
func (fs *MockOsFs) IsUploadResumeSupported() bool {
return !fs.hasVirtualFolders
}
func (fs *MockOsFs) Chtimes(name string, atime, mtime time.Time, isUploading bool) error {
return vfs.ErrVfsUnsupported
}
func newMockOsFs(hasVirtualFolders bool, connectionID, rootDir string) vfs.Fs {
return &MockOsFs{
Fs: vfs.NewOsFs(connectionID, rootDir, ""),
@@ -47,8 +69,10 @@ func TestRemoveErrors(t *testing.T) {
homePath := filepath.Join(os.TempDir(), "home")
user := dataprovider.User{
Username: "remove_errors_user",
HomeDir: homePath,
BaseUser: sdk.BaseUser{
Username: "remove_errors_user",
HomeDir: homePath,
},
VirtualFolders: []vfs.VirtualFolder{
{
BaseVirtualFolder: vfs.BaseVirtualFolder{
@@ -62,7 +86,7 @@ func TestRemoveErrors(t *testing.T) {
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
fs := vfs.NewOsFs("", os.TempDir(), "")
conn := NewBaseConnection("", ProtocolFTP, "", user)
conn := NewBaseConnection("", ProtocolFTP, "", "", user)
err := conn.IsRemoveDirAllowed(fs, mappedPath, "/virtualpath1")
if assert.Error(t, err) {
assert.Contains(t, err.Error(), "permission denied")
@@ -78,12 +102,14 @@ func TestSetStatMode(t *testing.T) {
fakePath := "fake path"
user := dataprovider.User{
HomeDir: os.TempDir(),
BaseUser: sdk.BaseUser{
HomeDir: os.TempDir(),
},
}
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
fs := newMockOsFs(true, "", user.GetHomeDir())
conn := NewBaseConnection("", ProtocolWebDAV, "", user)
conn := NewBaseConnection("", ProtocolWebDAV, "", "", user)
err := conn.handleChmod(fs, fakePath, fakePath, nil)
assert.NoError(t, err)
err = conn.handleChown(fs, fakePath, fakePath, nil)
@@ -94,20 +120,25 @@ func TestSetStatMode(t *testing.T) {
Config.SetstatMode = 2
err = conn.handleChmod(fs, fakePath, fakePath, nil)
assert.NoError(t, err)
err = conn.handleChtimes(fs, fakePath, fakePath, &StatAttributes{
Atime: time.Now(),
Mtime: time.Now(),
})
assert.NoError(t, err)
Config.SetstatMode = oldSetStatMode
}
func TestRecursiveRenameWalkError(t *testing.T) {
fs := vfs.NewOsFs("", os.TempDir(), "")
conn := NewBaseConnection("", ProtocolWebDAV, "", dataprovider.User{})
conn := NewBaseConnection("", ProtocolWebDAV, "", "", dataprovider.User{})
err := conn.checkRecursiveRenameDirPermissions(fs, fs, "/source", "/target")
assert.ErrorIs(t, err, os.ErrNotExist)
}
func TestCrossRenameFsErrors(t *testing.T) {
fs := vfs.NewOsFs("", os.TempDir(), "")
conn := NewBaseConnection("", ProtocolWebDAV, "", dataprovider.User{})
conn := NewBaseConnection("", ProtocolWebDAV, "", "", dataprovider.User{})
res := conn.hasSpaceForCrossRename(fs, vfs.QuotaCheckResult{}, 1, "missingsource")
assert.False(t, res)
if runtime.GOOS != osWindows {
@@ -138,15 +169,65 @@ func TestRenameVirtualFolders(t *testing.T) {
VirtualPath: vdir,
})
fs := vfs.NewOsFs("", os.TempDir(), "")
conn := NewBaseConnection("", ProtocolFTP, "", u)
conn := NewBaseConnection("", ProtocolFTP, "", "", u)
res := conn.isRenamePermitted(fs, fs, "source", "target", vdir, "vdirtarget", nil)
assert.False(t, res)
}
func TestRenamePerms(t *testing.T) {
src := "source"
target := "target"
sub := "/sub"
subTarget := sub + "/target"
u := dataprovider.User{}
u.Permissions = map[string][]string{}
u.Permissions["/"] = []string{dataprovider.PermCreateDirs, dataprovider.PermUpload, dataprovider.PermCreateSymlinks,
dataprovider.PermDeleteFiles}
conn := NewBaseConnection("", ProtocolSFTP, "", "", u)
assert.False(t, conn.hasRenamePerms(src, target, nil))
u.Permissions["/"] = []string{dataprovider.PermRename}
assert.True(t, conn.hasRenamePerms(src, target, nil))
u.Permissions["/"] = []string{dataprovider.PermCreateDirs, dataprovider.PermUpload, dataprovider.PermDeleteFiles,
dataprovider.PermDeleteDirs}
assert.False(t, conn.hasRenamePerms(src, target, nil))
info := vfs.NewFileInfo(src, true, 0, time.Now(), false)
u.Permissions["/"] = []string{dataprovider.PermRenameFiles}
assert.False(t, conn.hasRenamePerms(src, target, info))
u.Permissions["/"] = []string{dataprovider.PermRenameDirs}
assert.True(t, conn.hasRenamePerms(src, target, info))
u.Permissions["/"] = []string{dataprovider.PermRename}
assert.True(t, conn.hasRenamePerms(src, target, info))
u.Permissions["/"] = []string{dataprovider.PermDownload, dataprovider.PermUpload, dataprovider.PermDeleteDirs}
assert.False(t, conn.hasRenamePerms(src, target, info))
// test with different permissions between source and target
u.Permissions["/"] = []string{dataprovider.PermRename}
u.Permissions[sub] = []string{dataprovider.PermRenameFiles}
assert.False(t, conn.hasRenamePerms(src, subTarget, info))
u.Permissions[sub] = []string{dataprovider.PermRenameDirs}
assert.True(t, conn.hasRenamePerms(src, subTarget, info))
// test files
info = vfs.NewFileInfo(src, false, 0, time.Now(), false)
u.Permissions["/"] = []string{dataprovider.PermRenameDirs}
assert.False(t, conn.hasRenamePerms(src, target, info))
u.Permissions["/"] = []string{dataprovider.PermRenameFiles}
assert.True(t, conn.hasRenamePerms(src, target, info))
u.Permissions["/"] = []string{dataprovider.PermRename}
assert.True(t, conn.hasRenamePerms(src, target, info))
// test with different permissions between source and target
u.Permissions["/"] = []string{dataprovider.PermRename}
u.Permissions[sub] = []string{dataprovider.PermRenameDirs}
assert.False(t, conn.hasRenamePerms(src, subTarget, info))
u.Permissions[sub] = []string{dataprovider.PermRenameFiles}
assert.True(t, conn.hasRenamePerms(src, subTarget, info))
}
func TestUpdateQuotaAfterRename(t *testing.T) {
user := dataprovider.User{
Username: userTestUsername,
HomeDir: filepath.Join(os.TempDir(), "home"),
BaseUser: sdk.BaseUser{
Username: userTestUsername,
HomeDir: filepath.Join(os.TempDir(), "home"),
},
}
mappedPath := filepath.Join(os.TempDir(), "vdir")
user.Permissions = make(map[string][]string)
@@ -173,7 +254,7 @@ func TestUpdateQuotaAfterRename(t *testing.T) {
assert.NoError(t, err)
fs, err := user.GetFilesystem("id")
assert.NoError(t, err)
c := NewBaseConnection("", ProtocolSFTP, "", user)
c := NewBaseConnection("", ProtocolSFTP, "", "", user)
request := sftp.NewRequest("Rename", "/testfile")
if runtime.GOOS != osWindows {
request.Filepath = "/dir"
@@ -218,13 +299,15 @@ func TestUpdateQuotaAfterRename(t *testing.T) {
func TestErrorsMapping(t *testing.T) {
fs := vfs.NewOsFs("", os.TempDir(), "")
conn := NewBaseConnection("", ProtocolSFTP, "", dataprovider.User{HomeDir: os.TempDir()})
conn := NewBaseConnection("", ProtocolSFTP, "", "", dataprovider.User{BaseUser: sdk.BaseUser{HomeDir: os.TempDir()}})
osErrorsProtocols := []string{ProtocolWebDAV, ProtocolFTP, ProtocolHTTP, ProtocolHTTPShare,
ProtocolDataRetention, ProtocolOIDC}
for _, protocol := range supportedProtocols {
conn.SetProtocol(protocol)
err := conn.GetFsError(fs, os.ErrNotExist)
if protocol == ProtocolSFTP {
assert.EqualError(t, err, sftp.ErrSSHFxNoSuchFile.Error())
} else if protocol == ProtocolWebDAV || protocol == ProtocolFTP || protocol == ProtocolHTTP {
assert.ErrorIs(t, err, sftp.ErrSSHFxNoSuchFile)
} else if util.Contains(osErrorsProtocols, protocol) {
assert.EqualError(t, err, os.ErrNotExist.Error())
} else {
assert.EqualError(t, err, ErrNotExist.Error())
@@ -237,13 +320,15 @@ func TestErrorsMapping(t *testing.T) {
}
err = conn.GetFsError(fs, os.ErrClosed)
if protocol == ProtocolSFTP {
assert.EqualError(t, err, sftp.ErrSSHFxFailure.Error())
assert.ErrorIs(t, err, sftp.ErrSSHFxFailure)
assert.Contains(t, err.Error(), os.ErrClosed.Error())
} else {
assert.EqualError(t, err, ErrGenericFailure.Error())
}
err = conn.GetFsError(fs, ErrPermissionDenied)
if protocol == ProtocolSFTP {
assert.EqualError(t, err, sftp.ErrSSHFxFailure.Error())
assert.ErrorIs(t, err, sftp.ErrSSHFxFailure)
assert.Contains(t, err.Error(), ErrPermissionDenied.Error())
} else {
assert.EqualError(t, err, ErrPermissionDenied.Error())
}
@@ -255,10 +340,22 @@ func TestErrorsMapping(t *testing.T) {
}
err = conn.GetFsError(fs, vfs.ErrStorageSizeUnavailable)
if protocol == ProtocolSFTP {
assert.EqualError(t, err, sftp.ErrSSHFxOpUnsupported.Error())
assert.ErrorIs(t, err, sftp.ErrSSHFxOpUnsupported)
assert.Contains(t, err.Error(), vfs.ErrStorageSizeUnavailable.Error())
} else {
assert.EqualError(t, err, vfs.ErrStorageSizeUnavailable.Error())
}
err = conn.GetQuotaExceededError()
assert.True(t, conn.IsQuotaExceededError(err))
err = conn.GetReadQuotaExceededError()
if protocol == ProtocolSFTP {
assert.ErrorIs(t, err, sftp.ErrSSHFxFailure)
assert.Contains(t, err.Error(), ErrReadQuotaExceeded.Error())
} else {
assert.ErrorIs(t, err, ErrReadQuotaExceeded)
}
err = conn.GetNotExistError()
assert.True(t, conn.IsNotExistError(err))
err = conn.GetFsError(fs, nil)
assert.NoError(t, err)
err = conn.GetOpUnsupportedError()
@@ -274,13 +371,15 @@ func TestMaxWriteSize(t *testing.T) {
permissions := make(map[string][]string)
permissions["/"] = []string{dataprovider.PermAny}
user := dataprovider.User{
Username: userTestUsername,
Permissions: permissions,
HomeDir: filepath.Clean(os.TempDir()),
BaseUser: sdk.BaseUser{
Username: userTestUsername,
Permissions: permissions,
HomeDir: filepath.Clean(os.TempDir()),
},
}
fs, err := user.GetFilesystem("123")
assert.NoError(t, err)
conn := NewBaseConnection("", ProtocolFTP, "", user)
conn := NewBaseConnection("", ProtocolFTP, "", "", user)
quotaResult := vfs.QuotaCheckResult{
HasSpace: true,
}
@@ -307,7 +406,7 @@ func TestMaxWriteSize(t *testing.T) {
quotaResult.QuotaSize = 0
quotaResult.UsedSize = 0
size, err = conn.GetMaxWriteSize(quotaResult, true, 100, fs.IsUploadResumeSupported())
assert.EqualError(t, err, ErrQuotaExceeded.Error())
assert.True(t, conn.IsQuotaExceededError(err))
assert.Equal(t, int64(0), size)
size, err = conn.GetMaxWriteSize(quotaResult, true, 10, fs.IsUploadResumeSupported())
@@ -319,3 +418,76 @@ func TestMaxWriteSize(t *testing.T) {
assert.EqualError(t, err, ErrOpUnsupported.Error())
assert.Equal(t, int64(0), size)
}
func TestCheckParentDirsErrors(t *testing.T) {
permissions := make(map[string][]string)
permissions["/"] = []string{dataprovider.PermAny}
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: userTestUsername,
Permissions: permissions,
HomeDir: filepath.Clean(os.TempDir()),
},
FsConfig: vfs.Filesystem{
Provider: sdk.CryptedFilesystemProvider,
},
}
c := NewBaseConnection(xid.New().String(), ProtocolSFTP, "", "", user)
err := c.CheckParentDirs("/a/dir")
assert.Error(t, err)
user.FsConfig.Provider = sdk.LocalFilesystemProvider
user.VirtualFolders = nil
user.VirtualFolders = append(user.VirtualFolders, vfs.VirtualFolder{
BaseVirtualFolder: vfs.BaseVirtualFolder{
FsConfig: vfs.Filesystem{
Provider: sdk.CryptedFilesystemProvider,
},
},
VirtualPath: "/vdir",
})
user.VirtualFolders = append(user.VirtualFolders, vfs.VirtualFolder{
BaseVirtualFolder: vfs.BaseVirtualFolder{
MappedPath: filepath.Clean(os.TempDir()),
},
VirtualPath: "/vdir/sub",
})
c = NewBaseConnection(xid.New().String(), ProtocolSFTP, "", "", user)
err = c.CheckParentDirs("/vdir/sub/dir")
assert.Error(t, err)
user = dataprovider.User{
BaseUser: sdk.BaseUser{
Username: userTestUsername,
Permissions: permissions,
HomeDir: filepath.Clean(os.TempDir()),
},
FsConfig: vfs.Filesystem{
Provider: sdk.S3FilesystemProvider,
S3Config: vfs.S3FsConfig{
BaseS3FsConfig: sdk.BaseS3FsConfig{
Bucket: "buck",
Region: "us-east-1",
AccessKey: "key",
},
AccessSecret: kms.NewPlainSecret("s3secret"),
},
},
}
c = NewBaseConnection(xid.New().String(), ProtocolSFTP, "", "", user)
err = c.CheckParentDirs("/a/dir")
assert.NoError(t, err)
user.VirtualFolders = append(user.VirtualFolders, vfs.VirtualFolder{
BaseVirtualFolder: vfs.BaseVirtualFolder{
MappedPath: filepath.Clean(os.TempDir()),
},
VirtualPath: "/local/dir",
})
c = NewBaseConnection(xid.New().String(), ProtocolSFTP, "", "", user)
err = c.CheckParentDirs("/local/dir/sub-dir")
assert.NoError(t, err)
err = os.RemoveAll(filepath.Join(os.TempDir(), "sub-dir"))
assert.NoError(t, err)
}

common/dataretention.go Normal file

@@ -0,0 +1,480 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
"bytes"
"context"
"encoding/json"
"fmt"
"net/http"
"net/url"
"os"
"os/exec"
"path"
"path/filepath"
"strings"
"sync"
"time"
"github.com/drakkan/sftpgo/v2/command"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/httpclient"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/smtp"
"github.com/drakkan/sftpgo/v2/util"
)
// RetentionCheckNotification defines the supported notification methods for a retention check result
type RetentionCheckNotification = string
// Supported notification methods
const (
// notify results using the defined "data_retention_hook"
RetentionCheckNotificationHook = "Hook"
// notify results by email
RetentionCheckNotificationEmail = "Email"
)
var (
// RetentionChecks is the list of active retention checks
RetentionChecks ActiveRetentionChecks
)
// ActiveRetentionChecks holds the active retention checks
type ActiveRetentionChecks struct {
sync.RWMutex
Checks []RetentionCheck
}
// Get returns the active retention checks
func (c *ActiveRetentionChecks) Get() []RetentionCheck {
c.RLock()
defer c.RUnlock()
checks := make([]RetentionCheck, 0, len(c.Checks))
for _, check := range c.Checks {
foldersCopy := make([]FolderRetention, len(check.Folders))
copy(foldersCopy, check.Folders)
notificationsCopy := make([]string, len(check.Notifications))
copy(notificationsCopy, check.Notifications)
checks = append(checks, RetentionCheck{
Username: check.Username,
StartTime: check.StartTime,
Notifications: notificationsCopy,
Email: check.Email,
Folders: foldersCopy,
})
}
return checks
}
// Add a new retention check, returns nil if a retention check for the given
// username is already active. The returned result can be used to start the check
func (c *ActiveRetentionChecks) Add(check RetentionCheck, user *dataprovider.User) *RetentionCheck {
c.Lock()
defer c.Unlock()
for _, val := range c.Checks {
if val.Username == user.Username {
return nil
}
}
// we silently ignore file patterns
user.Filters.FilePatterns = nil
conn := NewBaseConnection("", "", "", "", *user)
conn.SetProtocol(ProtocolDataRetention)
conn.ID = fmt.Sprintf("data_retention_%v", user.Username)
check.Username = user.Username
check.StartTime = util.GetTimeAsMsSinceEpoch(time.Now())
check.conn = conn
check.updateUserPermissions()
c.Checks = append(c.Checks, check)
return &check
}
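// Hedged usage sketch (hypothetical user): Add returns nil if a check for the
// same username is already active, so concurrent requests are rejected.
func exampleRetentionCheckAdd(user *dataprovider.User) {
	check := RetentionCheck{
		Folders: []FolderRetention{{Path: "/", Retention: 48}},
	}
	if active := RetentionChecks.Add(check, user); active != nil {
		// Start removes the check from the active list when it completes
		go active.Start()
	}
}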
// remove a user from the ones with active retention checks
// and returns true if the user is removed
func (c *ActiveRetentionChecks) remove(username string) bool {
c.Lock()
defer c.Unlock()
for idx, check := range c.Checks {
if check.Username == username {
lastIdx := len(c.Checks) - 1
c.Checks[idx] = c.Checks[lastIdx]
c.Checks = c.Checks[:lastIdx]
return true
}
}
return false
}
// FolderRetention defines the retention policy for the specified directory path
type FolderRetention struct {
// Path is the exposed virtual directory path, if no other specific retention is defined,
// the retention applies to sub directories too. For example if retention is defined
// for the paths "/" and "/sub" then the retention for "/" is applied to any file outside
// the "/sub" directory
Path string `json:"path"`
// Retention time in hours. 0 means exclude this path
Retention int `json:"retention"`
// DeleteEmptyDirs defines whether empty directories will be deleted.
// The user needs the delete permission
DeleteEmptyDirs bool `json:"delete_empty_dirs,omitempty"`
// IgnoreUserPermissions defines whether to delete files even if the user does not have the delete permission.
// The default is "false", which means that files will be skipped if the user does not have the permission
// to delete them. This applies to sub directories too.
IgnoreUserPermissions bool `json:"ignore_user_permissions,omitempty"`
}
func (f *FolderRetention) isValid() error {
f.Path = path.Clean(f.Path)
if !path.IsAbs(f.Path) {
return util.NewValidationError(fmt.Sprintf("folder retention: invalid path %#v, please specify an absolute POSIX path",
f.Path))
}
if f.Retention < 0 {
return util.NewValidationError(fmt.Sprintf("invalid folder retention %v, it must be greater or equal to zero",
f.Retention))
}
return nil
}
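// Illustrative policy (hypothetical paths): files under "/logs" older than one
// week are removed and empty directories cleaned up, while "/logs/audit" is
// excluded via a zero retention, since the most specific path wins.
func exampleRetentionPolicy() error {
	check := RetentionCheck{
		Folders: []FolderRetention{
			{Path: "/logs", Retention: 24 * 7, DeleteEmptyDirs: true},
			{Path: "/logs/audit", Retention: 0}, // 0 means exclude this path
		},
	}
	return check.Validate()
}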
type folderRetentionCheckResult struct {
Path string `json:"path"`
Retention int `json:"retention"`
DeletedFiles int `json:"deleted_files"`
DeletedSize int64 `json:"deleted_size"`
Elapsed time.Duration `json:"-"`
Info string `json:"info,omitempty"`
Error string `json:"error,omitempty"`
}
// RetentionCheck defines an active retention check
type RetentionCheck struct {
// Username to which the retention check refers
Username string `json:"username"`
// retention check start time as unix timestamp in milliseconds
StartTime int64 `json:"start_time"`
// affected folders
Folders []FolderRetention `json:"folders"`
// how cleanup results will be notified
Notifications []RetentionCheckNotification `json:"notifications,omitempty"`
// email to use if the notification method is set to email
Email string `json:"email,omitempty"`
// Cleanup results
results []*folderRetentionCheckResult `json:"-"`
conn *BaseConnection
}
// Validate returns an error if the specified folders are not valid
func (c *RetentionCheck) Validate() error {
folderPaths := make(map[string]bool)
nothingToDo := true
for idx := range c.Folders {
f := &c.Folders[idx]
if err := f.isValid(); err != nil {
return err
}
if f.Retention > 0 {
nothingToDo = false
}
if _, ok := folderPaths[f.Path]; ok {
return util.NewValidationError(fmt.Sprintf("duplicated folder path %#v", f.Path))
}
folderPaths[f.Path] = true
}
if nothingToDo {
return util.NewValidationError("nothing to delete!")
}
for _, notification := range c.Notifications {
switch notification {
case RetentionCheckNotificationEmail:
if !smtp.IsEnabled() {
return util.NewValidationError("in order to notify results via email you must configure an SMTP server")
}
if c.Email == "" {
return util.NewValidationError("in order to notify results via email you must add a valid email address to your profile")
}
case RetentionCheckNotificationHook:
if Config.DataRetentionHook == "" {
return util.NewValidationError("in order to notify results via hook you must define a data_retention_hook")
}
default:
return util.NewValidationError(fmt.Sprintf("invalid notification %#v", notification))
}
}
return nil
}
func (c *RetentionCheck) updateUserPermissions() {
for _, folder := range c.Folders {
if folder.IgnoreUserPermissions {
c.conn.User.Permissions[folder.Path] = []string{dataprovider.PermAny}
}
}
}
func (c *RetentionCheck) getFolderRetention(folderPath string) (FolderRetention, error) {
dirsForPath := util.GetDirsForVirtualPath(folderPath)
for _, dirPath := range dirsForPath {
for _, folder := range c.Folders {
if folder.Path == dirPath {
return folder, nil
}
}
}
return FolderRetention{}, fmt.Errorf("unable to find folder retention for %#v", folderPath)
}
func (c *RetentionCheck) removeFile(virtualPath string, info os.FileInfo) error {
fs, fsPath, err := c.conn.GetFsAndResolvedPath(virtualPath)
if err != nil {
return err
}
return c.conn.RemoveFile(fs, fsPath, virtualPath, info)
}
func (c *RetentionCheck) cleanupFolder(folderPath string) error {
deleteFilesPerms := []string{dataprovider.PermDelete, dataprovider.PermDeleteFiles}
startTime := time.Now()
result := &folderRetentionCheckResult{
Path: folderPath,
}
c.results = append(c.results, result)
if !c.conn.User.HasPerm(dataprovider.PermListItems, folderPath) || !c.conn.User.HasAnyPerm(deleteFilesPerms, folderPath) {
result.Elapsed = time.Since(startTime)
result.Info = "data retention check skipped: no permissions"
c.conn.Log(logger.LevelInfo, "user %#v does not have permissions to check retention on %#v, retention check skipped",
c.conn.User.Username, folderPath)
return nil
}
folderRetention, err := c.getFolderRetention(folderPath)
if err != nil {
result.Elapsed = time.Since(startTime)
result.Error = "unable to get folder retention"
c.conn.Log(logger.LevelError, "unable to get folder retention for path %#v", folderPath)
return err
}
result.Retention = folderRetention.Retention
if folderRetention.Retention == 0 {
result.Elapsed = time.Since(startTime)
result.Info = "data retention check skipped: retention is set to 0"
c.conn.Log(logger.LevelDebug, "retention check skipped for folder %#v, retention is set to 0", folderPath)
return nil
}
c.conn.Log(logger.LevelDebug, "start retention check for folder %#v, retention: %v hours, delete empty dirs? %v, ignore user perms? %v",
folderPath, folderRetention.Retention, folderRetention.DeleteEmptyDirs, folderRetention.IgnoreUserPermissions)
files, err := c.conn.ListDir(folderPath)
if err != nil {
result.Elapsed = time.Since(startTime)
if err == c.conn.GetNotExistError() {
result.Info = "data retention check skipped, folder does not exist"
c.conn.Log(logger.LevelDebug, "folder %#v does not exist, retention check skipped", folderPath)
return nil
}
result.Error = fmt.Sprintf("unable to list directory %#v", folderPath)
c.conn.Log(logger.LevelError, result.Error)
return err
}
for _, info := range files {
virtualPath := path.Join(folderPath, info.Name())
if info.IsDir() {
if err := c.cleanupFolder(virtualPath); err != nil {
result.Elapsed = time.Since(startTime)
result.Error = fmt.Sprintf("unable to check folder: %v", err)
c.conn.Log(logger.LevelError, "unable to cleanup folder %#v: %v", virtualPath, err)
return err
}
} else {
retentionTime := info.ModTime().Add(time.Duration(folderRetention.Retention) * time.Hour)
if retentionTime.Before(time.Now()) {
if err := c.removeFile(virtualPath, info); err != nil {
result.Elapsed = time.Since(startTime)
result.Error = fmt.Sprintf("unable to remove file %#v: %v", virtualPath, err)
c.conn.Log(logger.LevelError, "unable to remove file %#v, retention %v: %v",
virtualPath, retentionTime, err)
return err
}
c.conn.Log(logger.LevelDebug, "removed file %#v, modification time: %v, retention: %v hours, retention time: %v",
virtualPath, info.ModTime(), folderRetention.Retention, retentionTime)
result.DeletedFiles++
result.DeletedSize += info.Size()
}
}
}
if folderRetention.DeleteEmptyDirs {
c.checkEmptyDirRemoval(folderPath)
}
result.Elapsed = time.Since(startTime)
c.conn.Log(logger.LevelDebug, "retention check completed for folder %#v, deleted files: %v, deleted size: %v bytes",
folderPath, result.DeletedFiles, result.DeletedSize)
return nil
}
func (c *RetentionCheck) checkEmptyDirRemoval(folderPath string) {
if folderPath != "/" && c.conn.User.HasAnyPerm([]string{
dataprovider.PermDelete,
dataprovider.PermDeleteDirs,
}, path.Dir(folderPath),
) {
files, err := c.conn.ListDir(folderPath)
if err == nil && len(files) == 0 {
err = c.conn.RemoveDir(folderPath)
c.conn.Log(logger.LevelDebug, "tryed to remove empty dir %#v, error: %v", folderPath, err)
}
}
}
// Start starts the retention check
func (c *RetentionCheck) Start() {
c.conn.Log(logger.LevelInfo, "retention check started")
defer RetentionChecks.remove(c.conn.User.Username)
defer c.conn.CloseFS() //nolint:errcheck
startTime := time.Now()
for _, folder := range c.Folders {
if folder.Retention > 0 {
if err := c.cleanupFolder(folder.Path); err != nil {
c.conn.Log(logger.LevelError, "retention check failed, unable to cleanup folder %#v", folder.Path)
c.sendNotifications(time.Since(startTime), err)
return
}
}
}
c.conn.Log(logger.LevelInfo, "retention check completed")
c.sendNotifications(time.Since(startTime), nil)
}
func (c *RetentionCheck) sendNotifications(elapsed time.Duration, err error) {
for _, notification := range c.Notifications {
switch notification {
case RetentionCheckNotificationEmail:
c.sendEmailNotification(elapsed, err) //nolint:errcheck
case RetentionCheckNotificationHook:
c.sendHookNotification(elapsed, err) //nolint:errcheck
}
}
}
func (c *RetentionCheck) sendEmailNotification(elapsed time.Duration, errCheck error) error {
body := new(bytes.Buffer)
data := make(map[string]any)
data["Results"] = c.results
totalDeletedFiles := 0
totalDeletedSize := int64(0)
for _, result := range c.results {
totalDeletedFiles += result.DeletedFiles
totalDeletedSize += result.DeletedSize
}
data["HumanizeSize"] = util.ByteCountIEC
data["TotalFiles"] = totalDeletedFiles
data["TotalSize"] = totalDeletedSize
data["Elapsed"] = elapsed
data["Username"] = c.conn.User.Username
data["StartTime"] = util.GetTimeFromMsecSinceEpoch(c.StartTime)
if errCheck == nil {
data["Status"] = "Succeeded"
} else {
data["Status"] = "Failed"
}
if err := smtp.RenderRetentionReportTemplate(body, data); err != nil {
c.conn.Log(logger.LevelError, "unable to render retention check template: %v", err)
return err
}
startTime := time.Now()
subject := fmt.Sprintf("Retention check completed for user %#v", c.conn.User.Username)
if err := smtp.SendEmail(c.Email, subject, body.String(), smtp.EmailContentTypeTextHTML); err != nil {
c.conn.Log(logger.LevelError, "unable to notify retention check result via email: %v, elapsed: %v", err,
time.Since(startTime))
return err
}
c.conn.Log(logger.LevelInfo, "retention check result successfully notified via email, elapsed: %v", time.Since(startTime))
return nil
}
func (c *RetentionCheck) sendHookNotification(elapsed time.Duration, errCheck error) error {
data := make(map[string]any)
totalDeletedFiles := 0
totalDeletedSize := int64(0)
for _, result := range c.results {
totalDeletedFiles += result.DeletedFiles
totalDeletedSize += result.DeletedSize
}
data["username"] = c.conn.User.Username
data["start_time"] = c.StartTime
data["elapsed"] = elapsed.Milliseconds()
if errCheck == nil {
data["status"] = 1
} else {
data["status"] = 0
}
data["total_deleted_files"] = totalDeletedFiles
data["total_deleted_size"] = totalDeletedSize
data["details"] = c.results
jsonData, _ := json.Marshal(data)
startTime := time.Now()
if strings.HasPrefix(Config.DataRetentionHook, "http") {
var url *url.URL
url, err := url.Parse(Config.DataRetentionHook)
if err != nil {
c.conn.Log(logger.LevelError, "invalid data retention hook %#v: %v", Config.DataRetentionHook, err)
return err
}
respCode := 0
resp, err := httpclient.RetryablePost(url.String(), "application/json", bytes.NewBuffer(jsonData))
if err == nil {
respCode = resp.StatusCode
resp.Body.Close()
if respCode != http.StatusOK {
err = errUnexpectedHTTResponse
}
}
c.conn.Log(logger.LevelDebug, "notified result to URL: %#v, status code: %v, elapsed: %v err: %v",
url.Redacted(), respCode, time.Since(startTime), err)
return err
}
if !filepath.IsAbs(Config.DataRetentionHook) {
err := fmt.Errorf("invalid data retention hook %#v", Config.DataRetentionHook)
c.conn.Log(logger.LevelError, "%v", err)
return err
}
timeout, env := command.GetConfig(Config.DataRetentionHook)
ctx, cancel := context.WithTimeout(context.Background(), timeout)
defer cancel()
cmd := exec.CommandContext(ctx, Config.DataRetentionHook)
cmd.Env = append(env,
fmt.Sprintf("SFTPGO_DATA_RETENTION_RESULT=%v", string(jsonData)))
err := cmd.Run()
c.conn.Log(logger.LevelDebug, "notified result using command: %v, elapsed: %v err: %v",
Config.DataRetentionHook, time.Since(startTime), err)
return err
}
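// For reference, a hedged sketch of the JSON document built above (values
// hypothetical). HTTP hooks receive it as the POST body, command hooks via the
// SFTPGO_DATA_RETENTION_RESULT environment variable:
//
// {
//   "username": "user1",
//   "start_time": 1659688800000,
//   "elapsed": 12345,
//   "status": 1,
//   "total_deleted_files": 10,
//   "total_deleted_size": 32657,
//   "details": [{"path": "/", "retention": 24, "deleted_files": 10, "deleted_size": 32657}]
// }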

common/dataretention_test.go

@@ -0,0 +1,354 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
"errors"
"fmt"
"os/exec"
"runtime"
"testing"
"time"
"github.com/sftpgo/sdk"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/smtp"
)
func TestRetentionValidation(t *testing.T) {
check := RetentionCheck{}
check.Folders = append(check.Folders, FolderRetention{
Path: "relative",
Retention: 10,
})
err := check.Validate()
require.Error(t, err)
assert.Contains(t, err.Error(), "please specify an absolute POSIX path")
check.Folders = []FolderRetention{
{
Path: "/",
Retention: -1,
},
}
err = check.Validate()
require.Error(t, err)
assert.Contains(t, err.Error(), "invalid folder retention")
check.Folders = []FolderRetention{
{
Path: "/ab/..",
Retention: 0,
},
}
err = check.Validate()
require.Error(t, err)
assert.Contains(t, err.Error(), "nothing to delete")
assert.Equal(t, "/", check.Folders[0].Path)
check.Folders = append(check.Folders, FolderRetention{
Path: "/../..",
Retention: 24,
})
err = check.Validate()
require.Error(t, err)
assert.Contains(t, err.Error(), `duplicated folder path "/"`)
check.Folders = []FolderRetention{
{
Path: "/dir1",
Retention: 48,
},
{
Path: "/dir2",
Retention: 96,
},
}
err = check.Validate()
assert.NoError(t, err)
assert.Len(t, check.Notifications, 0)
assert.Empty(t, check.Email)
check.Notifications = []RetentionCheckNotification{RetentionCheckNotificationEmail}
err = check.Validate()
require.Error(t, err)
assert.Contains(t, err.Error(), "you must configure an SMTP server")
smtpCfg := smtp.Config{
Host: "mail.example.com",
Port: 25,
TemplatesPath: "templates",
}
err = smtpCfg.Initialize("..")
require.NoError(t, err)
err = check.Validate()
require.Error(t, err)
assert.Contains(t, err.Error(), "you must add a valid email address")
check.Email = "admin@example.com"
err = check.Validate()
assert.NoError(t, err)
smtpCfg = smtp.Config{}
err = smtpCfg.Initialize("..")
require.NoError(t, err)
check.Notifications = []RetentionCheckNotification{RetentionCheckNotificationHook}
err = check.Validate()
require.Error(t, err)
assert.Contains(t, err.Error(), "data_retention_hook")
check.Notifications = []string{"not valid"}
err = check.Validate()
require.Error(t, err)
assert.Contains(t, err.Error(), "invalid notification")
}
func TestRetentionEmailNotifications(t *testing.T) {
smtpCfg := smtp.Config{
Host: "127.0.0.1",
Port: 2525,
TemplatesPath: "templates",
}
err := smtpCfg.Initialize("..")
require.NoError(t, err)
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "user1",
},
}
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
check := RetentionCheck{
Notifications: []RetentionCheckNotification{RetentionCheckNotificationEmail},
Email: "notification@example.com",
results: []*folderRetentionCheckResult{
{
Path: "/",
Retention: 24,
DeletedFiles: 10,
DeletedSize: 32657,
Elapsed: 10 * time.Second,
},
},
}
conn := NewBaseConnection("", "", "", "", user)
conn.SetProtocol(ProtocolDataRetention)
conn.ID = fmt.Sprintf("data_retention_%v", user.Username)
check.conn = conn
check.sendNotifications(1*time.Second, nil)
err = check.sendEmailNotification(1*time.Second, nil)
assert.NoError(t, err)
err = check.sendEmailNotification(1*time.Second, errors.New("test error"))
assert.NoError(t, err)
smtpCfg.Port = 2626
err = smtpCfg.Initialize("..")
require.NoError(t, err)
err = check.sendEmailNotification(1*time.Second, nil)
assert.Error(t, err)
smtpCfg = smtp.Config{}
err = smtpCfg.Initialize("..")
require.NoError(t, err)
err = check.sendEmailNotification(1*time.Second, nil)
assert.Error(t, err)
}
func TestRetentionHookNotifications(t *testing.T) {
dataRetentionHook := Config.DataRetentionHook
Config.DataRetentionHook = fmt.Sprintf("http://%v", httpAddr)
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "user2",
},
}
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
check := RetentionCheck{
Notifications: []RetentionCheckNotification{RetentionCheckNotificationHook},
results: []*folderRetentionCheckResult{
{
Path: "/",
Retention: 24,
DeletedFiles: 10,
DeletedSize: 32657,
Elapsed: 10 * time.Second,
},
},
}
conn := NewBaseConnection("", "", "", "", user)
conn.SetProtocol(ProtocolDataRetention)
conn.ID = fmt.Sprintf("data_retention_%v", user.Username)
check.conn = conn
check.sendNotifications(1*time.Second, nil)
err := check.sendHookNotification(1*time.Second, nil)
assert.NoError(t, err)
Config.DataRetentionHook = fmt.Sprintf("http://%v/404", httpAddr)
err = check.sendHookNotification(1*time.Second, nil)
assert.ErrorIs(t, err, errUnexpectedHTTResponse)
Config.DataRetentionHook = "http://foo\x7f.com/retention"
err = check.sendHookNotification(1*time.Second, err)
assert.Error(t, err)
Config.DataRetentionHook = "relativepath"
err = check.sendHookNotification(1*time.Second, err)
assert.Error(t, err)
if runtime.GOOS != osWindows {
hookCmd, err := exec.LookPath("true")
assert.NoError(t, err)
Config.DataRetentionHook = hookCmd
err = check.sendHookNotification(1*time.Second, err)
assert.NoError(t, err)
}
Config.DataRetentionHook = dataRetentionHook
}
func TestRetentionPermissionsAndGetFolder(t *testing.T) {
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "user1",
},
}
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermListItems, dataprovider.PermDelete}
user.Permissions["/dir1"] = []string{dataprovider.PermListItems}
user.Permissions["/dir2/sub1"] = []string{dataprovider.PermCreateDirs}
user.Permissions["/dir2/sub2"] = []string{dataprovider.PermDelete}
check := RetentionCheck{
Folders: []FolderRetention{
{
Path: "/dir2",
Retention: 24 * 7,
IgnoreUserPermissions: true,
},
{
Path: "/dir3",
Retention: 24 * 7,
IgnoreUserPermissions: false,
},
{
Path: "/dir2/sub1/sub",
Retention: 24,
IgnoreUserPermissions: true,
},
},
}
conn := NewBaseConnection("", "", "", "", user)
conn.SetProtocol(ProtocolDataRetention)
conn.ID = fmt.Sprintf("data_retention_%v", user.Username)
check.conn = conn
check.updateUserPermissions()
assert.Equal(t, []string{dataprovider.PermListItems, dataprovider.PermDelete}, conn.User.Permissions["/"])
assert.Equal(t, []string{dataprovider.PermListItems}, conn.User.Permissions["/dir1"])
assert.Equal(t, []string{dataprovider.PermAny}, conn.User.Permissions["/dir2"])
assert.Equal(t, []string{dataprovider.PermAny}, conn.User.Permissions["/dir2/sub1/sub"])
assert.Equal(t, []string{dataprovider.PermCreateDirs}, conn.User.Permissions["/dir2/sub1"])
assert.Equal(t, []string{dataprovider.PermDelete}, conn.User.Permissions["/dir2/sub2"])
_, err := check.getFolderRetention("/")
assert.Error(t, err)
folder, err := check.getFolderRetention("/dir3")
assert.NoError(t, err)
assert.Equal(t, "/dir3", folder.Path)
folder, err = check.getFolderRetention("/dir2/sub3")
assert.NoError(t, err)
assert.Equal(t, "/dir2", folder.Path)
folder, err = check.getFolderRetention("/dir2/sub2")
assert.NoError(t, err)
assert.Equal(t, "/dir2", folder.Path)
folder, err = check.getFolderRetention("/dir2/sub1")
assert.NoError(t, err)
assert.Equal(t, "/dir2", folder.Path)
folder, err = check.getFolderRetention("/dir2/sub1/sub/sub")
assert.NoError(t, err)
assert.Equal(t, "/dir2/sub1/sub", folder.Path)
}
func TestRetentionCheckAddRemove(t *testing.T) {
username := "username"
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: username,
},
}
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
check := RetentionCheck{
Folders: []FolderRetention{
{
Path: "/",
Retention: 48,
},
},
Notifications: []RetentionCheckNotification{RetentionCheckNotificationHook},
}
assert.NotNil(t, RetentionChecks.Add(check, &user))
checks := RetentionChecks.Get()
require.Len(t, checks, 1)
assert.Equal(t, username, checks[0].Username)
assert.Greater(t, checks[0].StartTime, int64(0))
require.Len(t, checks[0].Folders, 1)
assert.Equal(t, check.Folders[0].Path, checks[0].Folders[0].Path)
assert.Equal(t, check.Folders[0].Retention, checks[0].Folders[0].Retention)
require.Len(t, checks[0].Notifications, 1)
assert.Equal(t, RetentionCheckNotificationHook, checks[0].Notifications[0])
assert.Nil(t, RetentionChecks.Add(check, &user))
assert.True(t, RetentionChecks.remove(username))
require.Len(t, RetentionChecks.Get(), 0)
assert.False(t, RetentionChecks.remove(username))
}
func TestCleanupErrors(t *testing.T) {
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "u",
},
}
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
check := &RetentionCheck{
Folders: []FolderRetention{
{
Path: "/path",
Retention: 48,
},
},
}
check = RetentionChecks.Add(*check, &user)
require.NotNil(t, check)
err := check.removeFile("missing file", nil)
assert.Error(t, err)
err = check.cleanupFolder("/")
assert.Error(t, err)
assert.True(t, RetentionChecks.remove(user.Username))
}

common/defender.go

@@ -1,23 +1,36 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
"encoding/hex"
"encoding/json"
"fmt"
"net"
"os"
"sort"
"strings"
"sync"
"time"
"github.com/yl2chen/cidranger"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
// HostEvent is the enumerable for the support host event
// HostEvent is the enumerable for the supported host events
type HostEvent int
// Supported host events
@@ -28,49 +41,24 @@ const (
HostEventLimitExceeded
)
// DefenderEntry defines a defender entry
type DefenderEntry struct {
IP string `json:"ip"`
Score int `json:"score,omitempty"`
BanTime time.Time `json:"ban_time,omitempty"`
}
// Supported defender drivers
const (
DefenderDriverMemory = "memory"
DefenderDriverProvider = "provider"
)
// GetID returns a unique ID for a defender entry
func (d *DefenderEntry) GetID() string {
return hex.EncodeToString([]byte(d.IP))
}
// GetBanTime returns the ban time for a defender entry as string
func (d *DefenderEntry) GetBanTime() string {
if d.BanTime.IsZero() {
return ""
}
return d.BanTime.UTC().Format(time.RFC3339)
}
// MarshalJSON returns the JSON encoding of a DefenderEntry.
func (d *DefenderEntry) MarshalJSON() ([]byte, error) {
return json.Marshal(&struct {
ID string `json:"id"`
IP string `json:"ip"`
Score int `json:"score,omitempty"`
BanTime string `json:"ban_time,omitempty"`
}{
ID: d.GetID(),
IP: d.IP,
Score: d.Score,
BanTime: d.GetBanTime(),
})
}
var (
supportedDefenderDrivers = []string{DefenderDriverMemory, DefenderDriverProvider}
)
// Defender defines the interface that a defender must implement
type Defender interface {
GetHosts() []*DefenderEntry
GetHost(ip string) (*DefenderEntry, error)
GetHosts() ([]dataprovider.DefenderEntry, error)
GetHost(ip string) (dataprovider.DefenderEntry, error)
AddEvent(ip string, event HostEvent)
IsBanned(ip string) bool
GetBanTime(ip string) *time.Time
GetScore(ip string) int
GetBanTime(ip string) (*time.Time, error)
GetScore(ip string) (int, error)
DeleteHost(ip string) bool
Reload() error
}
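// Hedged caller sketch (hypothetical helper): the getters now return an error
// because the "provider" driver has to query the data provider and the lookup
// can fail.
func exampleDefenderUsage(defender Defender, ip string) bool {
	banTime, err := defender.GetBanTime(ip)
	// a non-nil ban time means the host is currently banned
	return err == nil && banTime != nil
}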
@@ -79,6 +67,11 @@ type Defender interface {
type DefenderConfig struct {
// Set to true to enable the defender
Enabled bool `json:"enabled" mapstructure:"enabled"`
// Defender implementation to use. We support "memory" and "provider".
// Using "provider" as the driver you can share defender events among
// multiple SFTPGo instances. For a single instance the "memory" driver will
// be much faster
Driver string `json:"driver" mapstructure:"driver"`
// BanTime is the number of minutes that a host is banned
BanTime int `json:"ban_time" mapstructure:"ban_time"`
// Percentage increase of the ban time if a banned host tries to connect again
@@ -98,28 +91,78 @@ type DefenderConfig struct {
// the last observation time minutes
ObservationTime int `json:"observation_time" mapstructure:"observation_time"`
// The number of banned IPs and host scores kept in memory will vary between the
// soft and hard limit
// soft and hard limit for the "memory" driver. For the "provider" driver the
// soft limit is ignored and the hard limit is used to limit the number of entries
// to return when you request for the entire host list from the defender
EntriesSoftLimit int `json:"entries_soft_limit" mapstructure:"entries_soft_limit"`
EntriesHardLimit int `json:"entries_hard_limit" mapstructure:"entries_hard_limit"`
// Path to a file containing a list of ip addresses and/or networks to never ban
// Path to a file containing a list of IP addresses and/or networks to never ban
SafeListFile string `json:"safelist_file" mapstructure:"safelist_file"`
// Path to a file containing a list of ip addresses and/or networks to always ban
// Path to a file containing a list of IP addresses and/or networks to always ban
BlockListFile string `json:"blocklist_file" mapstructure:"blocklist_file"`
// List of IP addresses and/or networks to never ban.
// For large lists prefer SafeListFile
SafeList []string `json:"safelist" mapstructure:"safelist"`
// List of IP addresses and/or networks to always ban.
// For large lists prefer BlockListFile
BlockList []string `json:"blocklist" mapstructure:"blocklist"`
}
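// A hedged configuration sketch using the fields above (values hypothetical):
// the "provider" driver shares events among instances at the cost of data
// provider queries.
func exampleDefenderConfig() DefenderConfig {
	return DefenderConfig{
		Enabled:          true,
		Driver:           DefenderDriverProvider,
		BanTime:          30,  // minutes
		ObservationTime:  15,  // minutes
		EntriesSoftLimit: 100, // ignored by the "provider" driver
		EntriesHardLimit: 150,
		SafeList:         []string{"192.168.1.0/24"},
	}
}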
type memoryDefender struct {
type baseDefender struct {
config *DefenderConfig
sync.RWMutex
// IP addresses of the clients trying to connect are stored inside hosts,
// they are added to banned once the threshold is reached.
// A violation from a banned host will increase the ban time
// based on the configured BanTimeIncrement
hosts map[string]hostScore // the key is the host IP
banned map[string]time.Time // the key is the host IP
safeList *HostList
blockList *HostList
}
// Reload reloads block and safe lists
func (d *baseDefender) Reload() error {
blockList, err := loadHostListFromFile(d.config.BlockListFile)
if err != nil {
return err
}
blockList = addEntriesToList(d.config.BlockList, blockList, "blocklist")
d.Lock()
d.blockList = blockList
d.Unlock()
safeList, err := loadHostListFromFile(d.config.SafeListFile)
if err != nil {
return err
}
safeList = addEntriesToList(d.config.SafeList, safeList, "safelist")
d.Lock()
d.safeList = safeList
d.Unlock()
return nil
}
func (d *baseDefender) isBanned(ip string) bool {
if d.blockList != nil && d.blockList.isListed(ip) {
// permanent ban
return true
}
return false
}
func (d *baseDefender) getScore(event HostEvent) int {
var score int
switch event {
case HostEventLoginFailed:
score = d.config.ScoreValid
case HostEventLimitExceeded:
score = d.config.ScoreLimitExceeded
case HostEventUserNotFound, HostEventNoLoginTried:
score = d.config.ScoreInvalid
}
return score
}
// HostListFile defines the structure expected for safe/block list files
type HostListFile struct {
IPAddresses []string `json:"addresses"`
@@ -188,319 +231,11 @@ func (c *DefenderConfig) validate() error {
return nil
}
func newInMemoryDefender(config *DefenderConfig) (Defender, error) {
err := config.validate()
if err != nil {
return nil, err
}
defender := &memoryDefender{
config: config,
hosts: make(map[string]hostScore),
banned: make(map[string]time.Time),
}
if err := defender.Reload(); err != nil {
return nil, err
}
return defender, nil
}
// Reload reloads block and safe lists
func (d *memoryDefender) Reload() error {
blockList, err := loadHostListFromFile(d.config.BlockListFile)
if err != nil {
return err
}
d.Lock()
d.blockList = blockList
d.Unlock()
safeList, err := loadHostListFromFile(d.config.SafeListFile)
if err != nil {
return err
}
d.Lock()
d.safeList = safeList
d.Unlock()
return nil
}
// GetHosts returns hosts that are banned or for which some violations have been detected
func (d *memoryDefender) GetHosts() []*DefenderEntry {
d.RLock()
defer d.RUnlock()
var result []*DefenderEntry
for k, v := range d.banned {
result = append(result, &DefenderEntry{
IP: k,
BanTime: v,
})
}
for k, v := range d.hosts {
result = append(result, &DefenderEntry{
IP: k,
Score: v.TotalScore,
})
}
return result
}
// GetHost returns a defender host by ip, if any
func (d *memoryDefender) GetHost(ip string) (*DefenderEntry, error) {
d.RLock()
defer d.RUnlock()
if banTime, ok := d.banned[ip]; ok {
return &DefenderEntry{
IP: ip,
BanTime: banTime,
}, nil
}
if ev, ok := d.hosts[ip]; ok {
return &DefenderEntry{
IP: ip,
Score: ev.TotalScore,
}, nil
}
return nil, dataprovider.NewRecordNotFoundError("host not found")
}
// IsBanned returns true if the specified IP is banned
// and increases the ban time if the IP is found.
// This method must be called as soon as the client connects
func (d *memoryDefender) IsBanned(ip string) bool {
d.RLock()
if banTime, ok := d.banned[ip]; ok {
if banTime.After(time.Now()) {
increment := d.config.BanTime * d.config.BanTimeIncrement / 100
if increment == 0 {
increment++
}
d.RUnlock()
// we can save an earlier ban time if there are concurrent updates
// but this should not make much difference. I prefer to hold a read lock
// as long as possible for performance reasons: this method is called each
// time a new client connects and it must be as fast as possible
d.Lock()
d.banned[ip] = banTime.Add(time.Duration(increment) * time.Minute)
d.Unlock()
return true
}
}
defer d.RUnlock()
if d.blockList != nil && d.blockList.isListed(ip) {
// permanent ban
return true
}
return false
}
// DeleteHost removes the specified IP from the defender lists
func (d *memoryDefender) DeleteHost(ip string) bool {
d.Lock()
defer d.Unlock()
if _, ok := d.banned[ip]; ok {
delete(d.banned, ip)
return true
}
if _, ok := d.hosts[ip]; ok {
delete(d.hosts, ip)
return true
}
return false
}
// AddEvent adds an event for the given IP.
// This method must be called for clients not yet banned
func (d *memoryDefender) AddEvent(ip string, event HostEvent) {
d.Lock()
defer d.Unlock()
if d.safeList != nil && d.safeList.isListed(ip) {
return
}
// ignore events for already banned hosts
if _, ok := d.banned[ip]; ok {
return
}
var score int
switch event {
case HostEventLoginFailed:
score = d.config.ScoreValid
case HostEventLimitExceeded:
score = d.config.ScoreLimitExceeded
case HostEventUserNotFound, HostEventNoLoginTried:
score = d.config.ScoreInvalid
}
ev := hostEvent{
dateTime: time.Now(),
score: score,
}
if hs, ok := d.hosts[ip]; ok {
hs.Events = append(hs.Events, ev)
hs.TotalScore = 0
idx := 0
for _, event := range hs.Events {
if event.dateTime.Add(time.Duration(d.config.ObservationTime) * time.Minute).After(time.Now()) {
hs.Events[idx] = event
hs.TotalScore += event.score
idx++
}
}
hs.Events = hs.Events[:idx]
if hs.TotalScore >= d.config.Threshold {
d.banned[ip] = time.Now().Add(time.Duration(d.config.BanTime) * time.Minute)
delete(d.hosts, ip)
d.cleanupBanned()
} else {
d.hosts[ip] = hs
}
} else {
d.hosts[ip] = hostScore{
TotalScore: ev.score,
Events: []hostEvent{ev},
}
d.cleanupHosts()
}
}
func (d *memoryDefender) countBanned() int {
d.RLock()
defer d.RUnlock()
return len(d.banned)
}
func (d *memoryDefender) countHosts() int {
d.RLock()
defer d.RUnlock()
return len(d.hosts)
}
// GetBanTime returns the ban time for the given IP or nil if the IP is not banned
func (d *memoryDefender) GetBanTime(ip string) *time.Time {
d.RLock()
defer d.RUnlock()
if banTime, ok := d.banned[ip]; ok {
return &banTime
}
return nil
}
// GetScore returns the score for the given IP
func (d *memoryDefender) GetScore(ip string) int {
d.RLock()
defer d.RUnlock()
score := 0
if hs, ok := d.hosts[ip]; ok {
for _, event := range hs.Events {
if event.dateTime.Add(time.Duration(d.config.ObservationTime) * time.Minute).After(time.Now()) {
score += event.score
}
}
}
return score
}
func (d *memoryDefender) cleanupBanned() {
if len(d.banned) > d.config.EntriesHardLimit {
kvList := make(kvList, 0, len(d.banned))
for k, v := range d.banned {
if v.Before(time.Now()) {
delete(d.banned, k)
}
kvList = append(kvList, kv{
Key: k,
Value: v.UnixNano(),
})
}
// we removed expired IP addresses, if any, above; this could be enough
numToRemove := len(d.banned) - d.config.EntriesSoftLimit
if numToRemove <= 0 {
return
}
sort.Sort(kvList)
for idx, kv := range kvList {
if idx >= numToRemove {
break
}
delete(d.banned, kv.Key)
}
}
}
func (d *memoryDefender) cleanupHosts() {
if len(d.hosts) > d.config.EntriesHardLimit {
kvList := make(kvList, 0, len(d.hosts))
for k, v := range d.hosts {
value := int64(0)
if len(v.Events) > 0 {
value = v.Events[len(v.Events)-1].dateTime.UnixNano()
}
kvList = append(kvList, kv{
Key: k,
Value: value,
})
}
sort.Sort(kvList)
numToRemove := len(d.hosts) - d.config.EntriesSoftLimit
for idx, kv := range kvList {
if idx >= numToRemove {
break
}
delete(d.hosts, kv.Key)
}
}
}
func loadHostListFromFile(name string) (*HostList, error) {
if name == "" {
return nil, nil
}
if !utils.IsFileInputValid(name) {
if !util.IsFileInputValid(name) {
return nil, fmt.Errorf("invalid host list file name %#v", name)
}
@@ -544,7 +279,7 @@ func loadHostListFromFile(name string) (*HostList, error) {
for _, cidrNet := range hostList.CIDRNetworks {
_, network, err := net.ParseCIDR(cidrNet)
if err != nil {
logger.Warn(logSender, "", "unable to parse CIDR network %#v", cidrNet)
logger.Warn(logSender, "", "unable to parse CIDR network %#v: %v", cidrNet, err)
continue
}
err = result.Ranges.Insert(cidranger.NewBasicRangerEntry(*network))
@@ -561,13 +296,47 @@ func loadHostListFromFile(name string) (*HostList, error) {
return nil, nil
}
type kv struct {
Key string
Value int64
func addEntriesToList(entries []string, hostList *HostList, listName string) *HostList {
if len(entries) == 0 {
return hostList
}
if hostList == nil {
hostList = &HostList{
IPAddresses: make(map[string]bool),
Ranges: cidranger.NewPCTrieRanger(),
}
}
ipCount := 0
ipLoaded := 0
cdrCount := 0
cdrLoaded := 0
for _, entry := range entries {
entry = strings.TrimSpace(entry)
if strings.LastIndex(entry, "/") > 0 {
cdrCount++
_, network, err := net.ParseCIDR(entry)
if err != nil {
logger.Warn(logSender, "", "unable to parse CIDR network %#v: %v", entry, err)
continue
}
err = hostList.Ranges.Insert(cidranger.NewBasicRangerEntry(*network))
if err == nil {
cdrLoaded++
}
} else {
ipCount++
if net.ParseIP(entry) == nil {
logger.Warn(logSender, "", "unable to parse IP %#v", entry)
continue
}
hostList.IPAddresses[entry] = true
ipLoaded++
}
}
logger.Info(logSender, "", "%s from config loaded, IP addresses loaded: %v/%v, networks loaded: %v/%v",
listName, ipLoaded, ipCount, cdrLoaded, cdrCount)
return hostList
}
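A minimal usage sketch (mirroring the Reload code earlier; cfg is hypothetical). Note that addEntriesToList tolerates a nil input list and allocates one on demand:

list, err := loadHostListFromFile(cfg.BlockListFile) // nil if no file is configured
if err != nil {
	return err
}
// inline config entries are merged on top of the file-based ones
list = addEntriesToList(cfg.BlockList, list, "blocklist")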
type kvList []kv
func (p kvList) Len() int { return len(p) }
func (p kvList) Less(i, j int) bool { return p[i].Value < p[j].Value }
func (p kvList) Swap(i, j int) { p[i], p[j] = p[j], p[i] }


@@ -1,3 +1,17 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
@@ -54,6 +68,8 @@ func TestBasicDefender(t *testing.T) {
EntriesHardLimit: 2,
SafeListFile: "slFile",
BlockListFile: "blFile",
SafeList: []string{"192.168.1.3", "192.168.1.4", "192.168.9.0/24"},
BlockList: []string{"192.168.1.1", "192.168.1.2", "10.8.9.0/24"},
}
_, err = newInMemoryDefender(config)
@@ -67,54 +83,80 @@ func TestBasicDefender(t *testing.T) {
defender := d.(*memoryDefender)
assert.True(t, defender.IsBanned("172.16.1.1"))
assert.True(t, defender.IsBanned("192.168.1.1"))
assert.False(t, defender.IsBanned("172.16.1.10"))
assert.False(t, defender.IsBanned("192.168.1.10"))
assert.False(t, defender.IsBanned("10.8.2.3"))
assert.False(t, defender.IsBanned("10.9.2.3"))
assert.True(t, defender.IsBanned("10.8.0.3"))
assert.True(t, defender.IsBanned("10.8.9.3"))
assert.False(t, defender.IsBanned("invalid ip"))
assert.Equal(t, 0, defender.countBanned())
assert.Equal(t, 0, defender.countHosts())
assert.Len(t, defender.GetHosts(), 0)
hosts, err := defender.GetHosts()
assert.NoError(t, err)
assert.Len(t, hosts, 0)
_, err = defender.GetHost("10.8.0.4")
assert.Error(t, err)
defender.AddEvent("172.16.1.4", HostEventLoginFailed)
defender.AddEvent("192.168.1.4", HostEventLoginFailed)
defender.AddEvent("192.168.8.4", HostEventUserNotFound)
defender.AddEvent("172.16.1.3", HostEventLimitExceeded)
defender.AddEvent("192.168.1.3", HostEventLimitExceeded)
assert.Equal(t, 0, defender.countHosts())
testIP := "12.34.56.78"
defender.AddEvent(testIP, HostEventLoginFailed)
assert.Equal(t, 1, defender.countHosts())
assert.Equal(t, 0, defender.countBanned())
assert.Equal(t, 1, defender.GetScore(testIP))
if assert.Len(t, defender.GetHosts(), 1) {
assert.Equal(t, 1, defender.GetHosts()[0].Score)
assert.True(t, defender.GetHosts()[0].BanTime.IsZero())
assert.Empty(t, defender.GetHosts()[0].GetBanTime())
score, err := defender.GetScore(testIP)
assert.NoError(t, err)
assert.Equal(t, 1, score)
hosts, err = defender.GetHosts()
assert.NoError(t, err)
if assert.Len(t, hosts, 1) {
assert.Equal(t, 1, hosts[0].Score)
assert.True(t, hosts[0].BanTime.IsZero())
assert.Empty(t, hosts[0].GetBanTime())
}
host, err := defender.GetHost(testIP)
assert.NoError(t, err)
assert.Equal(t, 1, host.Score)
assert.Empty(t, host.GetBanTime())
assert.Nil(t, defender.GetBanTime(testIP))
banTime, err := defender.GetBanTime(testIP)
assert.NoError(t, err)
assert.Nil(t, banTime)
defender.AddEvent(testIP, HostEventLimitExceeded)
assert.Equal(t, 1, defender.countHosts())
assert.Equal(t, 0, defender.countBanned())
assert.Equal(t, 4, defender.GetScore(testIP))
if assert.Len(t, defender.GetHosts(), 1) {
assert.Equal(t, 4, defender.GetHosts()[0].Score)
score, err = defender.GetScore(testIP)
assert.NoError(t, err)
assert.Equal(t, 4, score)
hosts, err = defender.GetHosts()
assert.NoError(t, err)
if assert.Len(t, hosts, 1) {
assert.Equal(t, 4, hosts[0].Score)
assert.True(t, hosts[0].BanTime.IsZero())
assert.Empty(t, hosts[0].GetBanTime())
}
defender.AddEvent(testIP, HostEventNoLoginTried)
defender.AddEvent(testIP, HostEventNoLoginTried)
assert.Equal(t, 0, defender.countHosts())
assert.Equal(t, 1, defender.countBanned())
assert.Equal(t, 0, defender.GetScore(testIP))
assert.NotNil(t, defender.GetBanTime(testIP))
if assert.Len(t, defender.GetHosts(), 1) {
assert.Equal(t, 0, defender.GetHosts()[0].Score)
assert.False(t, defender.GetHosts()[0].BanTime.IsZero())
assert.NotEmpty(t, defender.GetHosts()[0].GetBanTime())
assert.Equal(t, hex.EncodeToString([]byte(testIP)), defender.GetHosts()[0].GetID())
score, err = defender.GetScore(testIP)
assert.NoError(t, err)
assert.Equal(t, 0, score)
banTime, err = defender.GetBanTime(testIP)
assert.NoError(t, err)
assert.NotNil(t, banTime)
hosts, err = defender.GetHosts()
assert.NoError(t, err)
if assert.Len(t, hosts, 1) {
assert.Equal(t, 0, hosts[0].Score)
assert.False(t, hosts[0].BanTime.IsZero())
assert.NotEmpty(t, hosts[0].GetBanTime())
assert.Equal(t, hex.EncodeToString([]byte(testIP)), hosts[0].GetID())
}
host, err = defender.GetHost(testIP)
assert.NoError(t, err)
@@ -134,14 +176,22 @@ func TestBasicDefender(t *testing.T) {
assert.Equal(t, defender.config.EntriesSoftLimit, defender.countHosts())
// testIP1 and testIP2 should be removed
assert.Equal(t, defender.config.EntriesSoftLimit, defender.countHosts())
assert.Equal(t, 0, defender.GetScore(testIP1))
assert.Equal(t, 0, defender.GetScore(testIP2))
assert.Equal(t, 2, defender.GetScore(testIP3))
score, err = defender.GetScore(testIP1)
assert.NoError(t, err)
assert.Equal(t, 0, score)
score, err = defender.GetScore(testIP2)
assert.NoError(t, err)
assert.Equal(t, 0, score)
score, err = defender.GetScore(testIP3)
assert.NoError(t, err)
assert.Equal(t, 2, score)
defender.AddEvent(testIP3, HostEventNoLoginTried)
defender.AddEvent(testIP3, HostEventNoLoginTried)
// IP3 is now banned
assert.NotNil(t, defender.GetBanTime(testIP3))
banTime, err = defender.GetBanTime(testIP3)
assert.NoError(t, err)
assert.NotNil(t, banTime)
assert.Equal(t, 0, defender.countHosts())
time.Sleep(20 * time.Millisecond)
@@ -150,9 +200,15 @@ func TestBasicDefender(t *testing.T) {
}
assert.Equal(t, 0, defender.countHosts())
assert.Equal(t, config.EntriesSoftLimit, defender.countBanned())
assert.Nil(t, defender.GetBanTime(testIP))
assert.Nil(t, defender.GetBanTime(testIP3))
assert.NotNil(t, defender.GetBanTime(testIP1))
banTime, err = defender.GetBanTime(testIP)
assert.NoError(t, err)
assert.Nil(t, banTime)
banTime, err = defender.GetBanTime(testIP3)
assert.NoError(t, err)
assert.Nil(t, banTime)
banTime, err = defender.GetBanTime(testIP1)
assert.NoError(t, err)
assert.NotNil(t, banTime)
for i := 0; i < 3; i++ {
defender.AddEvent(testIP, HostEventNoLoginTried)
@@ -162,11 +218,13 @@ func TestBasicDefender(t *testing.T) {
assert.Equal(t, 0, defender.countHosts())
assert.Equal(t, defender.config.EntriesSoftLimit, defender.countBanned())
banTime := defender.GetBanTime(testIP3)
banTime, err = defender.GetBanTime(testIP3)
assert.NoError(t, err)
if assert.NotNil(t, banTime) {
assert.True(t, defender.IsBanned(testIP3))
// ban time should increase
newBanTime := defender.GetBanTime(testIP3)
newBanTime, err := defender.GetBanTime(testIP3)
assert.NoError(t, err)
assert.True(t, newBanTime.After(*banTime))
}
@@ -179,6 +237,82 @@ func TestBasicDefender(t *testing.T) {
assert.NoError(t, err)
}
func TestExpiredHostBans(t *testing.T) {
config := &DefenderConfig{
Enabled: true,
BanTime: 10,
BanTimeIncrement: 2,
Threshold: 5,
ScoreInvalid: 2,
ScoreValid: 1,
ScoreLimitExceeded: 3,
ObservationTime: 15,
EntriesSoftLimit: 1,
EntriesHardLimit: 2,
}
d, err := newInMemoryDefender(config)
assert.NoError(t, err)
defender := d.(*memoryDefender)
testIP := "1.2.3.4"
defender.banned[testIP] = time.Now().Add(-24 * time.Hour)
// the ban is expired, testIP should not be listed
res, err := defender.GetHosts()
assert.NoError(t, err)
assert.Len(t, res, 0)
assert.False(t, defender.IsBanned(testIP))
_, err = defender.GetHost(testIP)
assert.Error(t, err)
_, ok := defender.banned[testIP]
assert.True(t, ok)
// now add an event for an expired banned ip, it should be removed
defender.AddEvent(testIP, HostEventLoginFailed)
assert.False(t, defender.IsBanned(testIP))
entry, err := defender.GetHost(testIP)
assert.NoError(t, err)
assert.Equal(t, testIP, entry.IP)
assert.Empty(t, entry.GetBanTime())
assert.Equal(t, 1, entry.Score)
res, err = defender.GetHosts()
assert.NoError(t, err)
if assert.Len(t, res, 1) {
assert.Equal(t, testIP, res[0].IP)
assert.Empty(t, res[0].GetBanTime())
assert.Equal(t, 1, res[0].Score)
}
events := []hostEvent{
{
dateTime: time.Now().Add(-24 * time.Hour),
score: 2,
},
{
dateTime: time.Now().Add(-24 * time.Hour),
score: 3,
},
}
hs := hostScore{
Events: events,
TotalScore: 5,
}
defender.hosts[testIP] = hs
// the recorded scores are too old
res, err = defender.GetHosts()
assert.NoError(t, err)
assert.Len(t, res, 0)
_, err = defender.GetHost(testIP)
assert.Error(t, err)
_, ok = defender.hosts[testIP]
assert.True(t, ok)
}
func TestLoadHostListFromFile(t *testing.T) {
_, err := loadHostListFromFile(".")
assert.Error(t, err)
@@ -252,15 +386,32 @@ func TestLoadHostListFromFile(t *testing.T) {
assert.NoError(t, err)
}
func TestAddEntriesToHostList(t *testing.T) {
name := "testList"
hostlist := addEntriesToList([]string{"192.168.6.1", "10.7.0.0/25"}, nil, name)
require.NotNil(t, hostlist)
assert.True(t, hostlist.isListed("192.168.6.1"))
assert.False(t, hostlist.isListed("192.168.6.2"))
assert.True(t, hostlist.isListed("10.7.0.28"))
assert.False(t, hostlist.isListed("10.7.0.129"))
// load invalid values
hostlist = addEntriesToList([]string{"invalidip", "invalidnet/24"}, nil, name)
require.NotNil(t, hostlist)
assert.Len(t, hostlist.IPAddresses, 0)
assert.Equal(t, 0, hostlist.Ranges.Len())
}
func TestDefenderCleanup(t *testing.T) {
d := memoryDefender{
baseDefender: baseDefender{
config: &DefenderConfig{
ObservationTime: 1,
EntriesSoftLimit: 2,
EntriesHardLimit: 3,
},
},
banned: make(map[string]time.Time),
hosts: make(map[string]hostScore),
config: &DefenderConfig{
ObservationTime: 1,
EntriesSoftLimit: 2,
EntriesHardLimit: 3,
},
}
d.banned["1.1.1.1"] = time.Now().Add(-24 * time.Hour)
@@ -278,7 +429,9 @@ func TestDefenderCleanup(t *testing.T) {
d.cleanupBanned()
assert.Equal(t, d.config.EntriesSoftLimit, d.countBanned())
assert.Nil(t, d.GetBanTime("2.2.2.3"))
banTime, err := d.GetBanTime("2.2.2.3")
assert.NoError(t, err)
assert.Nil(t, banTime)
d.hosts["3.3.3.3"] = hostScore{
TotalScore: 0,
@@ -325,11 +478,15 @@ func TestDefenderCleanup(t *testing.T) {
},
}
assert.Equal(t, 1, d.GetScore("3.3.3.3"))
score, err := d.GetScore("3.3.3.3")
assert.NoError(t, err)
assert.Equal(t, 1, score)
d.cleanupHosts()
assert.Equal(t, d.config.EntriesSoftLimit, d.countHosts())
assert.Equal(t, 0, d.GetScore("3.3.3.4"))
score, err = d.GetScore("3.3.3.4")
assert.NoError(t, err)
assert.Equal(t, 0, score)
}
func TestDefenderConfig(t *testing.T) {
@@ -540,7 +697,9 @@ func getDefenderForBench() *memoryDefender {
EntriesHardLimit: 100,
}
return &memoryDefender{
config: config,
baseDefender: baseDefender{
config: config,
},
hosts: make(map[string]hostScore),
banned: make(map[string]time.Time),
}

common/defenderdb.go (new file, 171 lines)

@@ -0,0 +1,171 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
"time"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
type dbDefender struct {
baseDefender
lastCleanup time.Time
}
func newDBDefender(config *DefenderConfig) (Defender, error) {
err := config.validate()
if err != nil {
return nil, err
}
defender := &dbDefender{
baseDefender: baseDefender{
config: config,
},
lastCleanup: time.Time{},
}
if err := defender.Reload(); err != nil {
return nil, err
}
return defender, nil
}
// GetHosts returns hosts that are banned or for which some violations have been detected
func (d *dbDefender) GetHosts() ([]dataprovider.DefenderEntry, error) {
return dataprovider.GetDefenderHosts(d.getStartObservationTime(), d.config.EntriesHardLimit)
}
// GetHost returns a defender host by ip, if any
func (d *dbDefender) GetHost(ip string) (dataprovider.DefenderEntry, error) {
return dataprovider.GetDefenderHostByIP(ip, d.getStartObservationTime())
}
// IsBanned returns true if the specified IP is banned
// and increases the ban time if the IP is found.
// This method must be called as soon as the client connects
func (d *dbDefender) IsBanned(ip string) bool {
d.RLock()
if d.baseDefender.isBanned(ip) {
d.RUnlock()
return true
}
d.RUnlock()
_, err := dataprovider.IsDefenderHostBanned(ip)
if err != nil {
// not found or another error, we allow this host
return false
}
increment := d.config.BanTime * d.config.BanTimeIncrement / 100
if increment == 0 {
increment++
}
dataprovider.UpdateDefenderBanTime(ip, increment) //nolint:errcheck
return true
}
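For example, with BanTime = 10 and BanTimeIncrement = 2 (the values used in the tests below) the computed increment is 10*2/100 = 0, so the one minute minimum applies; with BanTimeIncrement = 50 every connection attempt from a banned host would extend the ban by 5 minutes.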
// DeleteHost removes the specified IP from the defender lists
func (d *dbDefender) DeleteHost(ip string) bool {
if _, err := d.GetHost(ip); err != nil {
return false
}
return dataprovider.DeleteDefenderHost(ip) == nil
}
// AddEvent adds an event for the given IP.
// This method must be called for clients not yet banned
func (d *dbDefender) AddEvent(ip string, event HostEvent) {
d.RLock()
if d.safeList != nil && d.safeList.isListed(ip) {
d.RUnlock()
return
}
d.RUnlock()
score := d.baseDefender.getScore(event)
host, err := dataprovider.AddDefenderEvent(ip, score, d.getStartObservationTime())
if err != nil {
return
}
if host.Score > d.config.Threshold {
banTime := time.Now().Add(time.Duration(d.config.BanTime) * time.Minute)
err = dataprovider.SetDefenderBanTime(ip, util.GetTimeAsMsSinceEpoch(banTime))
}
if err == nil {
d.cleanup()
}
}
// GetBanTime returns the ban time for the given IP or nil if the IP is not banned
func (d *dbDefender) GetBanTime(ip string) (*time.Time, error) {
host, err := d.GetHost(ip)
if err != nil {
return nil, err
}
if host.BanTime.IsZero() {
return nil, nil
}
return &host.BanTime, nil
}
// GetScore returns the score for the given IP
func (d *dbDefender) GetScore(ip string) (int, error) {
host, err := d.GetHost(ip)
if err != nil {
return 0, err
}
return host.Score, nil
}
func (d *dbDefender) cleanup() {
lastCleanup := d.getLastCleanup()
if lastCleanup.IsZero() || lastCleanup.Add(time.Duration(d.config.ObservationTime)*time.Minute*3).Before(time.Now()) {
// FIXME: this could be racy in rare cases but it is better than acquiring the lock for the cleanup duration
// or always acquiring a read/write lock.
// Concurrent cleanups could happen anyway from multiple SFTPGo instances and should not cause any issues
d.setLastCleanup(time.Now())
expireTime := time.Now().Add(-time.Duration(d.config.ObservationTime+1) * time.Minute)
logger.Debug(logSender, "", "cleanup defender hosts before %v, last cleanup %v", expireTime, lastCleanup)
if err := dataprovider.CleanupDefender(util.GetTimeAsMsSinceEpoch(expireTime)); err != nil {
logger.Error(logSender, "", "defender cleanup error: %v, reset last cleanup to %v", err, lastCleanup)
d.setLastCleanup(lastCleanup)
}
}
}
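With the test configuration below (ObservationTime = 15) this means a cleanup runs at most once every 45 minutes (3 × ObservationTime) and removes entries older than 16 minutes (ObservationTime + 1); if the cleanup fails, restoring the previous timestamp ensures the next call retries.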
func (d *dbDefender) getStartObservationTime() int64 {
t := time.Now().Add(-time.Duration(d.config.ObservationTime) * time.Minute)
return util.GetTimeAsMsSinceEpoch(t)
}
func (d *dbDefender) getLastCleanup() time.Time {
d.RLock()
defer d.RUnlock()
return d.lastCleanup
}
func (d *dbDefender) setLastCleanup(when time.Time) {
d.Lock()
defer d.Unlock()
d.lastCleanup = when
}

common/defenderdb_test.go (new file, 311 lines)

@@ -0,0 +1,311 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
"encoding/hex"
"encoding/json"
"os"
"path/filepath"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/util"
)
func TestBasicDbDefender(t *testing.T) {
if !isDbDefenderSupported() {
t.Skip("this test is not supported with the current database provider")
}
config := &DefenderConfig{
Enabled: true,
BanTime: 10,
BanTimeIncrement: 2,
Threshold: 5,
ScoreInvalid: 2,
ScoreValid: 1,
ScoreLimitExceeded: 3,
ObservationTime: 15,
EntriesSoftLimit: 1,
EntriesHardLimit: 10,
SafeListFile: "slFile",
BlockListFile: "blFile",
}
_, err := newDBDefender(config)
assert.Error(t, err)
bl := HostListFile{
IPAddresses: []string{"172.16.1.1", "172.16.1.2"},
CIDRNetworks: []string{"10.8.0.0/24"},
}
sl := HostListFile{
IPAddresses: []string{"172.16.1.3", "172.16.1.4"},
CIDRNetworks: []string{"192.168.8.0/24"},
}
blFile := filepath.Join(os.TempDir(), "bl.json")
slFile := filepath.Join(os.TempDir(), "sl.json")
data, err := json.Marshal(bl)
assert.NoError(t, err)
err = os.WriteFile(blFile, data, os.ModePerm)
assert.NoError(t, err)
data, err = json.Marshal(sl)
assert.NoError(t, err)
err = os.WriteFile(slFile, data, os.ModePerm)
assert.NoError(t, err)
config.BlockListFile = blFile
_, err = newDBDefender(config)
assert.Error(t, err)
config.SafeListFile = slFile
d, err := newDBDefender(config)
assert.NoError(t, err)
defender := d.(*dbDefender)
assert.True(t, defender.IsBanned("172.16.1.1"))
assert.False(t, defender.IsBanned("172.16.1.10"))
assert.False(t, defender.IsBanned("10.8.1.3"))
assert.True(t, defender.IsBanned("10.8.0.4"))
assert.False(t, defender.IsBanned("invalid ip"))
hosts, err := defender.GetHosts()
assert.NoError(t, err)
assert.Len(t, hosts, 0)
_, err = defender.GetHost("10.8.0.3")
assert.Error(t, err)
defender.AddEvent("172.16.1.4", HostEventLoginFailed)
defender.AddEvent("192.168.8.4", HostEventUserNotFound)
defender.AddEvent("172.16.1.3", HostEventLimitExceeded)
hosts, err = defender.GetHosts()
assert.NoError(t, err)
assert.Len(t, hosts, 0)
assert.True(t, defender.getLastCleanup().IsZero())
testIP := "123.45.67.89"
defender.AddEvent(testIP, HostEventLoginFailed)
lastCleanup := defender.getLastCleanup()
assert.False(t, lastCleanup.IsZero())
score, err := defender.GetScore(testIP)
assert.NoError(t, err)
assert.Equal(t, 1, score)
hosts, err = defender.GetHosts()
assert.NoError(t, err)
if assert.Len(t, hosts, 1) {
assert.Equal(t, 1, hosts[0].Score)
assert.True(t, hosts[0].BanTime.IsZero())
assert.Empty(t, hosts[0].GetBanTime())
}
host, err := defender.GetHost(testIP)
assert.NoError(t, err)
assert.Equal(t, 1, host.Score)
assert.Empty(t, host.GetBanTime())
banTime, err := defender.GetBanTime(testIP)
assert.NoError(t, err)
assert.Nil(t, banTime)
defender.AddEvent(testIP, HostEventLimitExceeded)
score, err = defender.GetScore(testIP)
assert.NoError(t, err)
assert.Equal(t, 4, score)
hosts, err = defender.GetHosts()
assert.NoError(t, err)
if assert.Len(t, hosts, 1) {
assert.Equal(t, 4, hosts[0].Score)
assert.True(t, hosts[0].BanTime.IsZero())
assert.Empty(t, hosts[0].GetBanTime())
}
defender.AddEvent(testIP, HostEventNoLoginTried)
defender.AddEvent(testIP, HostEventNoLoginTried)
score, err = defender.GetScore(testIP)
assert.NoError(t, err)
assert.Equal(t, 0, score)
banTime, err = defender.GetBanTime(testIP)
assert.NoError(t, err)
assert.NotNil(t, banTime)
hosts, err = defender.GetHosts()
assert.NoError(t, err)
if assert.Len(t, hosts, 1) {
assert.Equal(t, 0, hosts[0].Score)
assert.False(t, hosts[0].BanTime.IsZero())
assert.NotEmpty(t, hosts[0].GetBanTime())
assert.Equal(t, hex.EncodeToString([]byte(testIP)), hosts[0].GetID())
}
host, err = defender.GetHost(testIP)
assert.NoError(t, err)
assert.Equal(t, 0, host.Score)
assert.NotEmpty(t, host.GetBanTime())
// ban time should increase
assert.True(t, defender.IsBanned(testIP))
newBanTime, err := defender.GetBanTime(testIP)
assert.NoError(t, err)
assert.True(t, newBanTime.After(*banTime))
assert.True(t, defender.DeleteHost(testIP))
assert.False(t, defender.DeleteHost(testIP))
// test cleanup
testIP1 := "123.45.67.90"
testIP2 := "123.45.67.91"
testIP3 := "123.45.67.92"
for i := 0; i < 3; i++ {
defender.AddEvent(testIP, HostEventNoLoginTried)
defender.AddEvent(testIP1, HostEventNoLoginTried)
defender.AddEvent(testIP2, HostEventNoLoginTried)
}
hosts, err = defender.GetHosts()
assert.NoError(t, err)
assert.Len(t, hosts, 3)
for _, host := range hosts {
assert.Equal(t, 0, host.Score)
assert.False(t, host.BanTime.IsZero())
assert.NotEmpty(t, host.GetBanTime())
}
defender.AddEvent(testIP3, HostEventLoginFailed)
hosts, err = defender.GetHosts()
assert.NoError(t, err)
assert.Len(t, hosts, 4)
// now set a ban time in the past, so the host will be cleaned up
for _, ip := range []string{testIP1, testIP2} {
err = dataprovider.SetDefenderBanTime(ip, util.GetTimeAsMsSinceEpoch(time.Now().Add(-1*time.Minute)))
assert.NoError(t, err)
}
hosts, err = defender.GetHosts()
assert.NoError(t, err)
assert.Len(t, hosts, 4)
for _, host := range hosts {
switch host.IP {
case testIP:
assert.Equal(t, 0, host.Score)
assert.False(t, host.BanTime.IsZero())
assert.NotEmpty(t, host.GetBanTime())
case testIP3:
assert.Equal(t, 1, host.Score)
assert.True(t, host.BanTime.IsZero())
assert.Empty(t, host.GetBanTime())
default:
assert.Equal(t, 6, host.Score)
assert.True(t, host.BanTime.IsZero())
assert.Empty(t, host.GetBanTime())
}
}
host, err = defender.GetHost(testIP)
assert.NoError(t, err)
assert.Equal(t, 0, host.Score)
assert.False(t, host.BanTime.IsZero())
assert.NotEmpty(t, host.GetBanTime())
host, err = defender.GetHost(testIP3)
assert.NoError(t, err)
assert.Equal(t, 1, host.Score)
assert.True(t, host.BanTime.IsZero())
assert.Empty(t, host.GetBanTime())
// set a negative observation time so the from field in the queries will be in the future
// we still should get the banned hosts
defender.config.ObservationTime = -2
assert.Greater(t, defender.getStartObservationTime(), time.Now().UnixMilli())
hosts, err = defender.GetHosts()
assert.NoError(t, err)
if assert.Len(t, hosts, 1) {
assert.Equal(t, testIP, hosts[0].IP)
assert.Equal(t, 0, hosts[0].Score)
assert.False(t, hosts[0].BanTime.IsZero())
assert.NotEmpty(t, hosts[0].GetBanTime())
}
_, err = defender.GetHost(testIP)
assert.NoError(t, err)
// cleanup db
err = dataprovider.CleanupDefender(util.GetTimeAsMsSinceEpoch(time.Now().Add(10 * time.Minute)))
assert.NoError(t, err)
// the banned host must still be there
hosts, err = defender.GetHosts()
assert.NoError(t, err)
if assert.Len(t, hosts, 1) {
assert.Equal(t, testIP, hosts[0].IP)
assert.Equal(t, 0, hosts[0].Score)
assert.False(t, hosts[0].BanTime.IsZero())
assert.NotEmpty(t, hosts[0].GetBanTime())
}
_, err = defender.GetHost(testIP)
assert.NoError(t, err)
err = dataprovider.SetDefenderBanTime(testIP, util.GetTimeAsMsSinceEpoch(time.Now().Add(-1*time.Minute)))
assert.NoError(t, err)
err = dataprovider.CleanupDefender(util.GetTimeAsMsSinceEpoch(time.Now().Add(10 * time.Minute)))
assert.NoError(t, err)
hosts, err = defender.GetHosts()
assert.NoError(t, err)
assert.Len(t, hosts, 0)
err = os.Remove(slFile)
assert.NoError(t, err)
err = os.Remove(blFile)
assert.NoError(t, err)
}
func TestDbDefenderCleanup(t *testing.T) {
if !isDbDefenderSupported() {
t.Skip("this test is not supported with the current database provider")
}
config := &DefenderConfig{
Enabled: true,
BanTime: 10,
BanTimeIncrement: 2,
Threshold: 5,
ScoreInvalid: 2,
ScoreValid: 1,
ScoreLimitExceeded: 3,
ObservationTime: 15,
EntriesSoftLimit: 1,
EntriesHardLimit: 10,
}
d, err := newDBDefender(config)
assert.NoError(t, err)
defender := d.(*dbDefender)
lastCleanup := defender.getLastCleanup()
assert.True(t, lastCleanup.IsZero())
defender.cleanup()
lastCleanup = defender.getLastCleanup()
assert.False(t, lastCleanup.IsZero())
defender.cleanup()
assert.Equal(t, lastCleanup, defender.getLastCleanup())
defender.setLastCleanup(time.Now().Add(-time.Duration(config.ObservationTime) * time.Minute * 4))
time.Sleep(20 * time.Millisecond)
defender.cleanup()
assert.True(t, lastCleanup.Before(defender.getLastCleanup()))
providerConf := dataprovider.GetProviderConfig()
err = dataprovider.Close()
assert.NoError(t, err)
lastCleanup = time.Now().Add(-time.Duration(config.ObservationTime) * time.Minute * 4)
defender.setLastCleanup(lastCleanup)
defender.cleanup()
// cleanup will fail and so last cleanup should be reset to the previous value
assert.Equal(t, lastCleanup, defender.getLastCleanup())
err = dataprovider.Initialize(providerConf, configDir, true)
assert.NoError(t, err)
}
func isDbDefenderSupported() bool {
// SQLite shares the implementation with the other SQL-based providers but it makes no sense
// to use it outside test cases
switch dataprovider.GetProviderStatus().Driver {
case dataprovider.MySQLDataProviderName, dataprovider.PGSQLDataProviderName,
dataprovider.CockroachDataProviderName, dataprovider.SQLiteDataProviderName:
return true
default:
return false
}
}

common/defendermem.go (new file, 340 lines)

@@ -0,0 +1,340 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
"sort"
"time"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/util"
)
type memoryDefender struct {
baseDefender
// IP addresses of the clients trying to connect are stored inside hosts,
// they are added to banned once the threshold is reached.
// A violation from a banned host will increase the ban time
// based on the configured BanTimeIncrement
hosts map[string]hostScore // the key is the host IP
banned map[string]time.Time // the key is the host IP
}
func newInMemoryDefender(config *DefenderConfig) (Defender, error) {
err := config.validate()
if err != nil {
return nil, err
}
defender := &memoryDefender{
baseDefender: baseDefender{
config: config,
},
hosts: make(map[string]hostScore),
banned: make(map[string]time.Time),
}
if err := defender.Reload(); err != nil {
return nil, err
}
return defender, nil
}
// GetHosts returns hosts that are banned or for which some violations have been detected
func (d *memoryDefender) GetHosts() ([]dataprovider.DefenderEntry, error) {
d.RLock()
defer d.RUnlock()
var result []dataprovider.DefenderEntry
for k, v := range d.banned {
if v.After(time.Now()) {
result = append(result, dataprovider.DefenderEntry{
IP: k,
BanTime: v,
})
}
}
for k, v := range d.hosts {
score := 0
for _, event := range v.Events {
if event.dateTime.Add(time.Duration(d.config.ObservationTime) * time.Minute).After(time.Now()) {
score += event.score
}
}
if score > 0 {
result = append(result, dataprovider.DefenderEntry{
IP: k,
Score: score,
})
}
}
return result, nil
}
// GetHost returns a defender host by ip, if any
func (d *memoryDefender) GetHost(ip string) (dataprovider.DefenderEntry, error) {
d.RLock()
defer d.RUnlock()
if banTime, ok := d.banned[ip]; ok {
if banTime.After(time.Now()) {
return dataprovider.DefenderEntry{
IP: ip,
BanTime: banTime,
}, nil
}
}
if hs, ok := d.hosts[ip]; ok {
score := 0
for _, event := range hs.Events {
if event.dateTime.Add(time.Duration(d.config.ObservationTime) * time.Minute).After(time.Now()) {
score += event.score
}
}
if score > 0 {
return dataprovider.DefenderEntry{
IP: ip,
Score: score,
}, nil
}
}
return dataprovider.DefenderEntry{}, util.NewRecordNotFoundError("host not found")
}
// IsBanned returns true if the specified IP is banned
// and increases the ban time if the IP is found.
// This method must be called as soon as the client connects
func (d *memoryDefender) IsBanned(ip string) bool {
d.RLock()
if banTime, ok := d.banned[ip]; ok {
if banTime.After(time.Now()) {
increment := d.config.BanTime * d.config.BanTimeIncrement / 100
if increment == 0 {
increment++
}
d.RUnlock()
// we can save an earlier ban time if there are concurrent updates
// but this should not make much difference. I prefer to hold a read lock
// as long as possible for performance reasons: this method is called each
// time a new client connects and it must be as fast as possible
d.Lock()
d.banned[ip] = banTime.Add(time.Duration(increment) * time.Minute)
d.Unlock()
return true
}
}
defer d.RUnlock()
return d.baseDefender.isBanned(ip)
}
// DeleteHost removes the specified IP from the defender lists
func (d *memoryDefender) DeleteHost(ip string) bool {
d.Lock()
defer d.Unlock()
if _, ok := d.banned[ip]; ok {
delete(d.banned, ip)
return true
}
if _, ok := d.hosts[ip]; ok {
delete(d.hosts, ip)
return true
}
return false
}
// AddEvent adds an event for the given IP.
// This method must be called for clients not yet banned
func (d *memoryDefender) AddEvent(ip string, event HostEvent) {
d.Lock()
defer d.Unlock()
if d.safeList != nil && d.safeList.isListed(ip) {
return
}
// ignore events for already banned hosts
if v, ok := d.banned[ip]; ok {
if v.After(time.Now()) {
return
}
delete(d.banned, ip)
}
score := d.baseDefender.getScore(event)
ev := hostEvent{
dateTime: time.Now(),
score: score,
}
if hs, ok := d.hosts[ip]; ok {
hs.Events = append(hs.Events, ev)
hs.TotalScore = 0
idx := 0
for _, event := range hs.Events {
if event.dateTime.Add(time.Duration(d.config.ObservationTime) * time.Minute).After(time.Now()) {
hs.Events[idx] = event
hs.TotalScore += event.score
idx++
}
}
hs.Events = hs.Events[:idx]
if hs.TotalScore >= d.config.Threshold {
d.banned[ip] = time.Now().Add(time.Duration(d.config.BanTime) * time.Minute)
delete(d.hosts, ip)
d.cleanupBanned()
} else {
d.hosts[ip] = hs
}
} else {
d.hosts[ip] = hostScore{
TotalScore: ev.score,
Events: []hostEvent{ev},
}
d.cleanupHosts()
}
}
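Concretely, with the test values ObservationTime = 15, Threshold = 5 and ScoreInvalid = 2: two HostEventUserNotFound events within the window leave a total score of 4, a third pushes it to 6 and the host moves from hosts to banned for BanTime minutes; events older than 15 minutes are dropped before the comparison, so a sufficiently slow attacker never reaches the threshold.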
func (d *memoryDefender) countBanned() int {
d.RLock()
defer d.RUnlock()
return len(d.banned)
}
func (d *memoryDefender) countHosts() int {
d.RLock()
defer d.RUnlock()
return len(d.hosts)
}
// GetBanTime returns the ban time for the given IP or nil if the IP is not banned
func (d *memoryDefender) GetBanTime(ip string) (*time.Time, error) {
d.RLock()
defer d.RUnlock()
if banTime, ok := d.banned[ip]; ok {
return &banTime, nil
}
return nil, nil
}
// GetScore returns the score for the given IP
func (d *memoryDefender) GetScore(ip string) (int, error) {
d.RLock()
defer d.RUnlock()
score := 0
if hs, ok := d.hosts[ip]; ok {
for _, event := range hs.Events {
if event.dateTime.Add(time.Duration(d.config.ObservationTime) * time.Minute).After(time.Now()) {
score += event.score
}
}
}
return score, nil
}
func (d *memoryDefender) cleanupBanned() {
if len(d.banned) > d.config.EntriesHardLimit {
kvList := make(kvList, 0, len(d.banned))
for k, v := range d.banned {
if v.Before(time.Now()) {
delete(d.banned, k)
}
kvList = append(kvList, kv{
Key: k,
Value: v.UnixNano(),
})
}
// we removed expired IP addresses, if any, above; this could be enough
numToRemove := len(d.banned) - d.config.EntriesSoftLimit
if numToRemove <= 0 {
return
}
sort.Sort(kvList)
for idx, kv := range kvList {
if idx >= numToRemove {
break
}
delete(d.banned, kv.Key)
}
}
}
func (d *memoryDefender) cleanupHosts() {
if len(d.hosts) > d.config.EntriesHardLimit {
kvList := make(kvList, 0, len(d.hosts))
for k, v := range d.hosts {
value := int64(0)
if len(v.Events) > 0 {
value = v.Events[len(v.Events)-1].dateTime.UnixNano()
}
kvList = append(kvList, kv{
Key: k,
Value: value,
})
}
sort.Sort(kvList)
numToRemove := len(d.hosts) - d.config.EntriesSoftLimit
for idx, kv := range kvList {
if idx >= numToRemove {
break
}
delete(d.hosts, kv.Key)
}
}
}
type kv struct {
Key string
Value int64
}
type kvList []kv
func (p kvList) Len() int { return len(p) }
func (p kvList) Less(i, j int) bool { return p[i].Value < p[j].Value }
func (p kvList) Swap(i, j int) { p[i], p[j] = p[j], p[i] }


@@ -1,3 +1,17 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
@@ -10,8 +24,8 @@ import (
"github.com/GehirnInc/crypt/md5_crypt"
"golang.org/x/crypto/bcrypt"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
const (
@@ -114,7 +128,7 @@ func (p *basicAuthProvider) getHashedPassword(username string) (string, bool) {
// ValidateCredentials returns true if the credentials are valid
func (p *basicAuthProvider) ValidateCredentials(username, password string) bool {
if hashedPwd, ok := p.getHashedPassword(username); ok {
if utils.IsStringPrefixInSlice(hashedPwd, bcryptPwdPrefixes) {
if util.IsStringPrefixInSlice(hashedPwd, bcryptPwdPrefixes) {
err := bcrypt.CompareHashAndPassword([]byte(hashedPwd), []byte(password))
return err == nil
}


@@ -1,3 +1,17 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (

File diff suppressed because it is too large


@@ -1,8 +1,23 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
"errors"
"fmt"
"net"
"sort"
"sync"
"sync/atomic"
@@ -10,7 +25,7 @@ import (
"golang.org/x/time/rate"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/v2/util"
)
var (
@@ -47,6 +62,8 @@ type RateLimiterConfig struct {
// Available protocols are: "SFTP", "FTP", "DAV".
// A rate limiter with no protocols defined is disabled
Protocols []string `json:"protocols" mapstructure:"protocols"`
// AllowList defines a list of IP addresses and IP ranges excluded from rate limiting
AllowList []string `json:"allow_list" mapstructure:"allow_list"`
// If the rate limit is exceeded, the defender is enabled, and this is a per-source limiter,
// a new defender event will be generated
GenerateDefenderEvents bool `json:"generate_defender_events" mapstructure:"generate_defender_events"`
@@ -78,9 +95,9 @@ func (r *RateLimiterConfig) validate() error {
return fmt.Errorf("invalid entries_hard_limit %v must be > %v", r.EntriesHardLimit, r.EntriesSoftLimit)
}
}
r.Protocols = utils.RemoveDuplicates(r.Protocols)
r.Protocols = util.RemoveDuplicates(r.Protocols, true)
for _, protocol := range r.Protocols {
if !utils.IsStringInSlice(protocol, rateLimiterProtocolValues) {
if !util.Contains(rateLimiterProtocolValues, protocol) {
return fmt.Errorf("invalid protocol %#v", protocol)
}
}
@@ -125,12 +142,23 @@ type rateLimiter struct {
globalBucket *rate.Limiter
buckets sourceBuckets
generateDefenderEvents bool
allowList []func(net.IP) bool
}
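The matcher functions are built once from the configured CIDR strings; a sketch based on the helper the rate limiter test below exercises (limiter is hypothetical):

allowFuncs, err := util.ParseAllowedIPAndRanges([]string{"192.168.1.0/24"})
if err != nil {
	return err
}
limiter.allowList = allowFuncs // sources matching any entry skip rate limiting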
// Wait blocks until the limit allows one event to happen
// or returns an error if the time to wait exceeds the max
// allowed delay
func (rl *rateLimiter) Wait(source string) (time.Duration, error) {
if len(rl.allowList) > 0 {
ip := net.ParseIP(source)
if ip != nil {
for idx := range rl.allowList {
if rl.allowList[idx](ip) {
return 0, nil
}
}
}
}
var res *rate.Reservation
if rl.globalBucket != nil {
res = rl.globalBucket.Reserve()


@@ -1,3 +1,17 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
@@ -6,6 +20,8 @@ import (
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/drakkan/sftpgo/v2/util"
)
func TestRateLimiterConfig(t *testing.T) {
@@ -83,6 +99,17 @@ func TestRateLimiter(t *testing.T) {
_, err = limiter.Wait(source + "1")
require.NoError(t, err)
allowList := []string{"192.168.1.0/24"}
allowFuncs, err := util.ParseAllowedIPAndRanges(allowList)
assert.NoError(t, err)
limiter.allowList = allowFuncs
for i := 0; i < 5; i++ {
_, err = limiter.Wait(source)
require.NoError(t, err)
}
_, err = limiter.Wait("not an ip")
require.NoError(t, err)
config.Burst = 0
limiter = config.getLimiter()
_, err = limiter.Wait(source)


@@ -1,36 +1,62 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
"crypto/tls"
"crypto/x509"
"crypto/x509/pkix"
"errors"
"fmt"
"os"
"path/filepath"
"sync"
"time"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
const (
// DefaultTLSKeyPaidID defines the ID to use for key pairs not bound to a specific binding
DefaultTLSKeyPaidID = "default"
)
// TLSKeyPair defines the paths and the unique identifier for a TLS key pair
type TLSKeyPair struct {
Cert string
Key string
ID string
}
// CertManager defines a TLS certificate manager
type CertManager struct {
certPath string
keyPath string
keyPairs []TLSKeyPair
configDir string
logSender string
sync.RWMutex
caCertificates []string
caRevocationLists []string
cert *tls.Certificate
certs map[string]*tls.Certificate
rootCAs *x509.CertPool
crls []*pkix.CertificateList
}
// Reload tries to reload certificate and CRLs
func (m *CertManager) Reload() error {
errCrt := m.loadCertificate()
errCrt := m.loadCertificates()
errCRLs := m.LoadCRLs()
if errCrt != nil {
@@ -39,30 +65,48 @@ func (m *CertManager) Reload() error {
return errCRLs
}
// LoadCertificate loads the configured x509 key pair
func (m *CertManager) loadCertificate() error {
newCert, err := tls.LoadX509KeyPair(m.certPath, m.keyPath)
if err != nil {
logger.Warn(m.logSender, "", "unable to load X509 key pair, cert file %#v key file %#v error: %v",
m.certPath, m.keyPath, err)
return err
// LoadCertificates tries to load the configured x509 key pairs
func (m *CertManager) loadCertificates() error {
if len(m.keyPairs) == 0 {
return errors.New("no key pairs defined")
}
certs := make(map[string]*tls.Certificate)
for _, keyPair := range m.keyPairs {
if keyPair.ID == "" {
return errors.New("TLS certificate without ID")
}
newCert, err := tls.LoadX509KeyPair(keyPair.Cert, keyPair.Key)
if err != nil {
logger.Warn(m.logSender, "", "unable to load X509 key pair, cert file %#v key file %#v error: %v",
keyPair.Cert, keyPair.Key, err)
return err
}
if _, ok := certs[keyPair.ID]; ok {
return fmt.Errorf("TLS certificate with id %#v is duplicated", keyPair.ID)
}
logger.Debug(m.logSender, "", "TLS certificate %#v successfully loaded, id %v", keyPair.Cert, keyPair.ID)
certs[keyPair.ID] = &newCert
}
logger.Debug(m.logSender, "", "TLS certificate %#v successfully loaded", m.certPath)
m.Lock()
defer m.Unlock()
m.cert = &newCert
m.certs = certs
return nil
}
// GetCertificateFunc returns the loaded certificate
func (m *CertManager) GetCertificateFunc() func(*tls.ClientHelloInfo) (*tls.Certificate, error) {
func (m *CertManager) GetCertificateFunc(certID string) func(*tls.ClientHelloInfo) (*tls.Certificate, error) {
return func(clientHello *tls.ClientHelloInfo) (*tls.Certificate, error) {
m.RLock()
defer m.RUnlock()
return m.cert, nil
val, ok := m.certs[certID]
if !ok {
return nil, fmt.Errorf("no certificate for id %v", certID)
}
return val, nil
}
}
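A sketch of how a binding could wire this into its TLS listener (certMgr, the address and the variable names are illustrative):

tlsConfig := &tls.Config{
	GetCertificate: certMgr.GetCertificateFunc(DefaultTLSKeyPaidID),
	MinVersion:     tls.VersionTLS12,
}
listener, err := tls.Listen("tcp", ":8443", tlsConfig)
if err != nil {
	return err
}
defer listener.Close()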
@@ -98,7 +142,7 @@ func (m *CertManager) LoadCRLs() error {
var crls []*pkix.CertificateList
for _, revocationList := range m.caRevocationLists {
if !utils.IsFileInputValid(revocationList) {
if !util.IsFileInputValid(revocationList) {
return fmt.Errorf("invalid root CA revocation list %#v", revocationList)
}
if revocationList != "" && !filepath.IsAbs(revocationList) {
@@ -145,7 +189,7 @@ func (m *CertManager) LoadRootCAs() error {
rootCAs := x509.NewCertPool()
for _, rootCA := range m.caCertificates {
if !utils.IsFileInputValid(rootCA) {
if !util.IsFileInputValid(rootCA) {
return fmt.Errorf("invalid root CA certificate %#v", rootCA)
}
if rootCA != "" && !filepath.IsAbs(rootCA) {
@@ -174,25 +218,24 @@ func (m *CertManager) LoadRootCAs() error {
// SetCACertificates sets the root CA authorities file paths.
// This should not be changed at runtime
func (m *CertManager) SetCACertificates(caCertificates []string) {
m.caCertificates = caCertificates
m.caCertificates = util.RemoveDuplicates(caCertificates, true)
}
// SetCARevocationLists sets the CA revocation lists file paths.
// This should not be changed at runtime
func (m *CertManager) SetCARevocationLists(caRevocationLists []string) {
m.caRevocationLists = caRevocationLists
m.caRevocationLists = util.RemoveDuplicates(caRevocationLists, true)
}
// NewCertManager creates a new certificate manager
func NewCertManager(certificateFile, certificateKeyFile, configDir, logSender string) (*CertManager, error) {
func NewCertManager(keyPairs []TLSKeyPair, configDir, logSender string) (*CertManager, error) {
manager := &CertManager{
cert: nil,
certPath: certificateFile,
keyPath: certificateKeyFile,
keyPairs: keyPairs,
certs: make(map[string]*tls.Certificate),
configDir: configDir,
logSender: logSender,
}
err := manager.loadCertificate()
err := manager.loadCertificates()
if err != nil {
return nil, err
}


@@ -1,3 +1,17 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
@@ -280,9 +294,35 @@ func TestLoadCertificate(t *testing.T) {
assert.NoError(t, err)
err = os.WriteFile(keyPath, []byte(serverKey), os.ModePerm)
assert.NoError(t, err)
certManager, err := NewCertManager(certPath, keyPath, configDir, logSenderTest)
keyPairs := []TLSKeyPair{
{
Cert: certPath,
Key: keyPath,
ID: DefaultTLSKeyPaidID,
},
{
Cert: certPath,
Key: keyPath,
ID: DefaultTLSKeyPaidID,
},
}
certManager, err := NewCertManager(keyPairs, configDir, logSenderTest)
if assert.Error(t, err) {
assert.Contains(t, err.Error(), "is duplicated")
}
assert.Nil(t, certManager)
keyPairs = []TLSKeyPair{
{
Cert: certPath,
Key: keyPath,
ID: DefaultTLSKeyPaidID,
},
}
certManager, err = NewCertManager(keyPairs, configDir, logSenderTest)
assert.NoError(t, err)
certFunc := certManager.GetCertificateFunc()
certFunc := certManager.GetCertificateFunc(DefaultTLSKeyPaidID)
if assert.NotNil(t, certFunc) {
hello := &tls.ClientHelloInfo{
ServerName: "localhost",
@@ -290,9 +330,19 @@ func TestLoadCertificate(t *testing.T) {
}
cert, err := certFunc(hello)
assert.NoError(t, err)
assert.Equal(t, certManager.cert, cert)
assert.Equal(t, certManager.certs[DefaultTLSKeyPaidID], cert)
}
certFunc = certManager.GetCertificateFunc("unknownID")
if assert.NotNil(t, certFunc) {
hello := &tls.ClientHelloInfo{
ServerName: "localhost",
CipherSuites: []uint16{tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305},
}
_, err = certFunc(hello)
if assert.Error(t, err) {
assert.Contains(t, err.Error(), "no certificate for id unknownID")
}
}
certManager.SetCACertificates(nil)
err = certManager.LoadRootCAs()
assert.NoError(t, err)
@@ -380,7 +430,32 @@ func TestLoadCertificate(t *testing.T) {
}
func TestLoadInvalidCert(t *testing.T) {
certManager, err := NewCertManager("test.crt", "test.key", configDir, logSenderTest)
certManager, err := NewCertManager(nil, configDir, logSenderTest)
if assert.Error(t, err) {
assert.Contains(t, err.Error(), "no key pairs defined")
}
assert.Nil(t, certManager)
keyPairs := []TLSKeyPair{
{
Cert: "test.crt",
Key: "test.key",
ID: DefaultTLSKeyPaidID,
},
}
certManager, err = NewCertManager(keyPairs, configDir, logSenderTest)
assert.Error(t, err)
assert.Nil(t, certManager)
keyPairs = []TLSKeyPair{
{
Cert: "test.crt",
Key: "test.key",
},
}
certManager, err = NewCertManager(keyPairs, configDir, logSenderTest)
if assert.Error(t, err) {
assert.Contains(t, err.Error(), "TLS certificate without ID")
}
assert.Nil(t, certManager)
}


@@ -1,3 +1,17 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
@@ -7,10 +21,10 @@ import (
"sync/atomic"
"time"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/metrics"
"github.com/drakkan/sftpgo/vfs"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/metric"
"github.com/drakkan/sftpgo/v2/vfs"
)
var (
@@ -20,7 +34,7 @@ var (
// BaseTransfer contains transfer details common to all protocols for an upload or a download.
type BaseTransfer struct { //nolint:maligned
ID uint64
ID int64
BytesSent int64
BytesReceived int64
Fs vfs.Fs
@@ -30,20 +44,28 @@ type BaseTransfer struct { //nolint:maligned
fsPath string
effectiveFsPath string
requestPath string
ftpMode string
start time.Time
MaxWriteSize int64
MinWriteOffset int64
InitialSize int64
truncatedSize int64
isNewFile bool
transferType int
AbortTransfer int32
aTime time.Time
mTime time.Time
transferQuota dataprovider.TransferQuota
sync.Mutex
errAbort error
ErrTransfer error
}
// NewBaseTransfer returns a new BaseTransfer and adds it to the given connection
func NewBaseTransfer(file vfs.File, conn *BaseConnection, cancelFn func(), fsPath, effectiveFsPath, requestPath string,
transferType int, minWriteOffset, initialSize, maxWriteSize int64, isNewFile bool, fs vfs.Fs) *BaseTransfer {
transferType int, minWriteOffset, initialSize, maxWriteSize, truncatedSize int64, isNewFile bool, fs vfs.Fs,
transferQuota dataprovider.TransferQuota,
) *BaseTransfer {
t := &BaseTransfer{
ID: conn.GetTransferID(),
File: file,
@@ -61,6 +83,8 @@ func NewBaseTransfer(file vfs.File, conn *BaseConnection, cancelFn func(), fsPat
BytesReceived: 0,
MaxWriteSize: maxWriteSize,
AbortTransfer: 0,
truncatedSize: truncatedSize,
transferQuota: transferQuota,
Fs: fs,
}
@@ -68,8 +92,18 @@ func NewBaseTransfer(file vfs.File, conn *BaseConnection, cancelFn func(), fsPat
return t
}
// GetTransferQuota returns data transfer quota limits
func (t *BaseTransfer) GetTransferQuota() dataprovider.TransferQuota {
return t.transferQuota
}
// SetFtpMode sets the FTP mode for the current transfer
func (t *BaseTransfer) SetFtpMode(mode string) {
t.ftpMode = mode
}
// GetID returns the transfer ID
func (t *BaseTransfer) GetID() uint64 {
func (t *BaseTransfer) GetID() int64 {
return t.ID
}
@@ -86,19 +120,60 @@ func (t *BaseTransfer) GetSize() int64 {
return atomic.LoadInt64(&t.BytesReceived)
}
// GetDownloadedSize returns the transferred size
func (t *BaseTransfer) GetDownloadedSize() int64 {
return atomic.LoadInt64(&t.BytesSent)
}
// GetUploadedSize returns the transferred size
func (t *BaseTransfer) GetUploadedSize() int64 {
return atomic.LoadInt64(&t.BytesReceived)
}
// GetStartTime returns the start time
func (t *BaseTransfer) GetStartTime() time.Time {
return t.start
}
// SignalClose signals that the transfer should be closed.
// For some protocols, for example WebDAV, we have no
// access to the network connection, so we use this method
// to make the next read or write fail
func (t *BaseTransfer) SignalClose() {
// GetAbortError returns the error to send to the client if the transfer was aborted
func (t *BaseTransfer) GetAbortError() error {
t.Lock()
defer t.Unlock()
if t.errAbort != nil {
return t.errAbort
}
return getQuotaExceededError(t.Connection.protocol)
}
// SignalClose signals that the transfer should be closed after the next read/write.
// The optional error argument allows sending a specific error; otherwise a generic
// transfer aborted error is sent
func (t *BaseTransfer) SignalClose(err error) {
t.Lock()
t.errAbort = err
t.Unlock()
atomic.StoreInt32(&(t.AbortTransfer), 1)
}
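
A minimal sketch of the abort pattern defined above, using a hypothetical abortableWriter instead of the real transfer type: SignalClose stores the reason under the mutex and flips the atomic flag, and the next write observes the flag and surfaces the stored error.

package main

import (
	"errors"
	"fmt"
	"sync"
	"sync/atomic"
)

type abortableWriter struct {
	abort    int32
	mu       sync.Mutex
	errAbort error
}

// SignalClose records the abort reason and raises the atomic flag.
func (w *abortableWriter) SignalClose(err error) {
	w.mu.Lock()
	w.errAbort = err
	w.mu.Unlock()
	atomic.StoreInt32(&w.abort, 1)
}

// Write fails with the stored error once the flag is set.
func (w *abortableWriter) Write(p []byte) (int, error) {
	if atomic.LoadInt32(&w.abort) == 1 {
		w.mu.Lock()
		defer w.mu.Unlock()
		return 0, w.errAbort
	}
	// a real transfer would hand the chunk to the filesystem here
	return len(p), nil
}

func main() {
	w := &abortableWriter{}
	w.SignalClose(errors.New("transfer quota exceeded"))
	_, err := w.Write([]byte("data"))
	fmt.Println(err) // transfer quota exceeded
}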
// GetTruncatedSize returns the truncated size if this is an upload overwriting
// an existing file
func (t *BaseTransfer) GetTruncatedSize() int64 {
return t.truncatedSize
}
// HasSizeLimit returns true if there is an upload or download size limit
func (t *BaseTransfer) HasSizeLimit() bool {
if t.MaxWriteSize > 0 {
return true
}
if t.transferQuota.HasSizeLimits() {
return true
}
return false
}
// GetVirtualPath returns the transfer virtual path
func (t *BaseTransfer) GetVirtualPath() string {
return t.requestPath
@@ -109,6 +184,16 @@ func (t *BaseTransfer) GetFsPath() string {
return t.fsPath
}
// SetTimes stores access and modification times if fsPath matches the current file
func (t *BaseTransfer) SetTimes(fsPath string, atime time.Time, mtime time.Time) bool {
if fsPath == t.GetFsPath() {
t.aTime = atime
t.mTime = mtime
return true
}
return false
}
// GetRealFsPath returns the real transfer filesystem path.
// If atomic uploads are enabled this differs from fsPath
func (t *BaseTransfer) GetRealFsPath(fsPath string) string {
@@ -126,6 +211,43 @@ func (t *BaseTransfer) SetCancelFn(cancelFn func()) {
t.cancelFn = cancelFn
}
// CheckRead returns an error if reading is not allowed
func (t *BaseTransfer) CheckRead() error {
if t.transferQuota.AllowedDLSize == 0 && t.transferQuota.AllowedTotalSize == 0 {
return nil
}
if t.transferQuota.AllowedTotalSize > 0 {
if atomic.LoadInt64(&t.BytesSent)+atomic.LoadInt64(&t.BytesReceived) > t.transferQuota.AllowedTotalSize {
return t.Connection.GetReadQuotaExceededError()
}
} else if t.transferQuota.AllowedDLSize > 0 {
if atomic.LoadInt64(&t.BytesSent) > t.transferQuota.AllowedDLSize {
return t.Connection.GetReadQuotaExceededError()
}
}
return nil
}
// CheckWrite returns an error if writing is not allowed
func (t *BaseTransfer) CheckWrite() error {
if t.MaxWriteSize > 0 && atomic.LoadInt64(&t.BytesReceived) > t.MaxWriteSize {
return t.Connection.GetQuotaExceededError()
}
if t.transferQuota.AllowedULSize == 0 && t.transferQuota.AllowedTotalSize == 0 {
return nil
}
if t.transferQuota.AllowedTotalSize > 0 {
if atomic.LoadInt64(&t.BytesSent)+atomic.LoadInt64(&t.BytesReceived) > t.transferQuota.AllowedTotalSize {
return t.Connection.GetQuotaExceededError()
}
} else if t.transferQuota.AllowedULSize > 0 {
if atomic.LoadInt64(&t.BytesReceived) > t.transferQuota.AllowedULSize {
return t.Connection.GetQuotaExceededError()
}
}
return nil
}
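
The rules above can be condensed into a standalone helper for illustration (checkWriteAllowed is a hypothetical restatement, not part of SFTPGo): the per-transfer MaxWriteSize is checked first, a positive total-size allowance takes precedence over the per-direction upload allowance, and zero limits mean unlimited.

package main

import "fmt"

func checkWriteAllowed(bytesSent, bytesReceived, maxWrite, allowedTotal, allowedUL int64) bool {
	if maxWrite > 0 && bytesReceived > maxWrite {
		return false // per-transfer write limit exceeded
	}
	if allowedUL == 0 && allowedTotal == 0 {
		return true // no transfer quota configured
	}
	if allowedTotal > 0 {
		return bytesSent+bytesReceived <= allowedTotal // total quota wins
	}
	if allowedUL > 0 {
		return bytesReceived <= allowedUL // per-direction upload quota
	}
	return true
}

func main() {
	fmt.Println(checkWriteAllowed(4, 5, 0, 10, 0))  // true: 9 of 10 total bytes used
	fmt.Println(checkWriteAllowed(6, 5, 0, 10, 0))  // false: 11 > 10, total quota exceeded
	fmt.Println(checkWriteAllowed(0, 11, 0, 0, 10)) // false: upload quota exceeded
}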
// Truncate changes the size of the opened file.
// Supported for local fs only
func (t *BaseTransfer) Truncate(fsPath string, size int64) (int64, error) {
@@ -139,7 +261,12 @@ func (t *BaseTransfer) Truncate(fsPath string, size int64) (int64, error) {
if t.MaxWriteSize > 0 {
sizeDiff := initialSize - size
t.MaxWriteSize += sizeDiff
metrics.TransferCompleted(atomic.LoadInt64(&t.BytesSent), atomic.LoadInt64(&t.BytesReceived), t.transferType, t.ErrTransfer)
metric.TransferCompleted(atomic.LoadInt64(&t.BytesSent), atomic.LoadInt64(&t.BytesReceived), t.transferType, t.ErrTransfer)
if t.transferQuota.HasSizeLimits() {
go func(ulSize, dlSize int64, user dataprovider.User) {
dataprovider.UpdateUserTransferQuota(&user, ulSize, dlSize, false) //nolint:errcheck
}(atomic.LoadInt64(&t.BytesReceived), atomic.LoadInt64(&t.BytesSent), t.Connection.User)
}
atomic.StoreInt64(&t.BytesReceived, 0)
}
t.Unlock()
@@ -173,7 +300,7 @@ func (t *BaseTransfer) TransferError(err error) {
t.cancelFn()
}
elapsed := time.Since(t.start).Nanoseconds() / 1000000
t.Connection.Log(logger.LevelWarn, "Unexpected error for transfer, path: %#v, error: \"%v\" bytes sent: %v, "+
t.Connection.Log(logger.LevelError, "Unexpected error for transfer, path: %#v, error: \"%v\" bytes sent: %v, "+
"bytes received: %v transfer running since %v ms", t.fsPath, t.ErrTransfer, atomic.LoadInt64(&t.BytesSent),
atomic.LoadInt64(&t.BytesReceived), elapsed)
}
@@ -193,6 +320,23 @@ func (t *BaseTransfer) getUploadFileSize() (int64, error) {
return fileSize, err
}
// return 1 if the file is outside the user home dir
func (t *BaseTransfer) checkUploadOutsideHomeDir(err error) int {
if err == nil {
return 0
}
if Config.TempPath == "" {
return 0
}
err = t.Fs.Remove(t.effectiveFsPath, false)
t.Connection.Log(logger.LevelWarn, "upload in temp path cannot be renamed, delete temporary file: %#v, deletion error: %v",
t.effectiveFsPath, err)
// the file is outside the home dir so don't update the quota
atomic.StoreInt64(&t.BytesReceived, 0)
t.MinWriteOffset = 0
return 1
}
// Close is called when the transfer is completed.
// It logs the transfer info, updates the user quota (for uploads)
// and executes any defined action.
@@ -206,8 +350,13 @@ func (t *BaseTransfer) Close() error {
if t.isNewFile {
numFiles = 1
}
metrics.TransferCompleted(atomic.LoadInt64(&t.BytesSent), atomic.LoadInt64(&t.BytesReceived), t.transferType, t.ErrTransfer)
if t.ErrTransfer == ErrQuotaExceeded && t.File != nil {
metric.TransferCompleted(atomic.LoadInt64(&t.BytesSent), atomic.LoadInt64(&t.BytesReceived),
t.transferType, t.ErrTransfer)
if t.transferQuota.HasSizeLimits() {
dataprovider.UpdateUserTransferQuota(&t.Connection.User, atomic.LoadInt64(&t.BytesReceived), //nolint:errcheck
atomic.LoadInt64(&t.BytesSent), false)
}
if t.File != nil && t.Connection.IsQuotaExceededError(t.ErrTransfer) {
// if quota is exceeded we try to remove the partial file for uploads to local filesystem
err = t.Fs.Remove(t.File.Name(), false)
if err == nil {
@@ -222,10 +371,12 @@ func (t *BaseTransfer) Close() error {
err = t.Fs.Rename(t.effectiveFsPath, t.fsPath)
t.Connection.Log(logger.LevelDebug, "atomic upload completed, rename: %#v -> %#v, error: %v",
t.effectiveFsPath, t.fsPath, err)
// the file must be removed if it is uploaded to a path outside the home dir and cannot be renamed
numFiles -= t.checkUploadOutsideHomeDir(err)
} else {
err = t.Fs.Remove(t.effectiveFsPath, false)
t.Connection.Log(logger.LevelWarn, "atomic upload completed with error: \"%v\", delete temporary file: %#v, "+
"deletion error: %v", t.ErrTransfer, t.effectiveFsPath, err)
t.Connection.Log(logger.LevelWarn, "atomic upload completed with error: \"%v\", delete temporary file: %#v, deletion error: %v",
t.ErrTransfer, t.effectiveFsPath, err)
if err == nil {
numFiles--
atomic.StoreInt64(&t.BytesReceived, 0)
@@ -236,23 +387,23 @@ func (t *BaseTransfer) Close() error {
elapsed := time.Since(t.start).Nanoseconds() / 1000000
if t.transferType == TransferDownload {
logger.TransferLog(downloadLogSender, t.fsPath, elapsed, atomic.LoadInt64(&t.BytesSent), t.Connection.User.Username,
t.Connection.ID, t.Connection.protocol, t.Connection.remoteAddr)
ExecuteActionNotification(&t.Connection.User, operationDownload, t.fsPath, t.requestPath, "", "", t.Connection.protocol,
t.Connection.ID, t.Connection.protocol, t.Connection.localAddr, t.Connection.remoteAddr, t.ftpMode)
ExecuteActionNotification(t.Connection, operationDownload, t.fsPath, t.requestPath, "", "", "",
atomic.LoadInt64(&t.BytesSent), t.ErrTransfer)
} else {
fileSize := atomic.LoadInt64(&t.BytesReceived) + t.MinWriteOffset
if statSize, err := t.getUploadFileSize(); err == nil {
if statSize, errStat := t.getUploadFileSize(); errStat == nil {
fileSize = statSize
}
t.Connection.Log(logger.LevelDebug, "uploaded file size %v", fileSize)
t.updateQuota(numFiles, fileSize)
t.updateTimes()
logger.TransferLog(uploadLogSender, t.fsPath, elapsed, atomic.LoadInt64(&t.BytesReceived), t.Connection.User.Username,
t.Connection.ID, t.Connection.protocol, t.Connection.remoteAddr)
ExecuteActionNotification(&t.Connection.User, operationUpload, t.fsPath, t.requestPath, "", "", t.Connection.protocol, fileSize,
t.ErrTransfer)
t.Connection.ID, t.Connection.protocol, t.Connection.localAddr, t.Connection.remoteAddr, t.ftpMode)
ExecuteActionNotification(t.Connection, operationUpload, t.fsPath, t.requestPath, "", "", "", fileSize, t.ErrTransfer)
}
if t.ErrTransfer != nil {
t.Connection.Log(logger.LevelWarn, "transfer error: %v, path: %#v", t.ErrTransfer, t.fsPath)
t.Connection.Log(logger.LevelError, "transfer error: %v, path: %#v", t.ErrTransfer, t.fsPath)
if err == nil {
err = t.ErrTransfer
}
@@ -260,9 +411,17 @@ func (t *BaseTransfer) Close() error {
return err
}
func (t *BaseTransfer) updateTimes() {
if !t.aTime.IsZero() && !t.mTime.IsZero() {
err := t.Fs.Chtimes(t.fsPath, t.aTime, t.mTime, true)
t.Connection.Log(logger.LevelDebug, "set times for file %#v, atime: %v, mtime: %v, err: %v",
t.fsPath, t.aTime, t.mTime, err)
}
}
func (t *BaseTransfer) updateQuota(numFiles int, fileSize int64) bool {
// S3 uploads are atomic: if there is an error nothing is uploaded
if t.File == nil && t.ErrTransfer != nil {
if t.File == nil && t.ErrTransfer != nil && !t.Connection.User.HasBufferedSFTP(t.GetVirtualPath()) {
return false
}
sizeDiff := fileSize - t.InitialSize

View File

@@ -1,3 +1,17 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
@@ -7,16 +21,17 @@ import (
"testing"
"time"
"github.com/sftpgo/sdk"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/kms"
"github.com/drakkan/sftpgo/vfs"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/vfs"
)
func TestTransferUpdateQuota(t *testing.T) {
conn := NewBaseConnection("", ProtocolSFTP, "", dataprovider.User{})
conn := NewBaseConnection("", ProtocolSFTP, "", "", dataprovider.User{})
transfer := BaseTransfer{
Connection: conn,
transferType: TransferUpload,
@@ -50,9 +65,11 @@ func TestTransferUpdateQuota(t *testing.T) {
func TestTransferThrottling(t *testing.T) {
u := dataprovider.User{
Username: "test",
UploadBandwidth: 50,
DownloadBandwidth: 40,
BaseUser: sdk.BaseUser{
Username: "test",
UploadBandwidth: 50,
DownloadBandwidth: 40,
},
}
fs := vfs.NewOsFs("", os.TempDir(), "")
testFileSize := int64(131072)
@@ -61,8 +78,8 @@ func TestTransferThrottling(t *testing.T) {
// some tolerance
wantedUploadElapsed -= wantedDownloadElapsed / 10
wantedDownloadElapsed -= wantedDownloadElapsed / 10
conn := NewBaseConnection("id", ProtocolSCP, "", u)
transfer := NewBaseTransfer(nil, conn, nil, "", "", "", TransferUpload, 0, 0, 0, true, fs)
conn := NewBaseConnection("id", ProtocolSCP, "", "", u)
transfer := NewBaseTransfer(nil, conn, nil, "", "", "", TransferUpload, 0, 0, 0, 0, true, fs, dataprovider.TransferQuota{})
transfer.BytesReceived = testFileSize
transfer.Connection.UpdateLastActivity()
startTime := transfer.Connection.GetLastActivity()
@@ -72,7 +89,7 @@ func TestTransferThrottling(t *testing.T) {
err := transfer.Close()
assert.NoError(t, err)
transfer = NewBaseTransfer(nil, conn, nil, "", "", "", TransferDownload, 0, 0, 0, true, fs)
transfer = NewBaseTransfer(nil, conn, nil, "", "", "", TransferDownload, 0, 0, 0, 0, true, fs, dataprovider.TransferQuota{})
transfer.BytesSent = testFileSize
transfer.Connection.UpdateLastActivity()
startTime = transfer.Connection.GetLastActivity()
@@ -88,15 +105,18 @@ func TestRealPath(t *testing.T) {
testFile := filepath.Join(os.TempDir(), "afile.txt")
fs := vfs.NewOsFs("123", os.TempDir(), "")
u := dataprovider.User{
Username: "user",
HomeDir: os.TempDir(),
BaseUser: sdk.BaseUser{
Username: "user",
HomeDir: os.TempDir(),
},
}
u.Permissions = make(map[string][]string)
u.Permissions["/"] = []string{dataprovider.PermAny}
file, err := os.Create(testFile)
require.NoError(t, err)
conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, "", u)
transfer := NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, "", "", u)
transfer := NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file",
TransferUpload, 0, 0, 0, 0, true, fs, dataprovider.TransferQuota{})
rPath := transfer.GetRealFsPath(testFile)
assert.Equal(t, testFile, rPath)
rPath = conn.getRealFsPath(testFile)
@@ -119,8 +139,10 @@ func TestTruncate(t *testing.T) {
testFile := filepath.Join(os.TempDir(), "transfer_test_file")
fs := vfs.NewOsFs("123", os.TempDir(), "")
u := dataprovider.User{
Username: "user",
HomeDir: os.TempDir(),
BaseUser: sdk.BaseUser{
Username: "user",
HomeDir: os.TempDir(),
},
}
u.Permissions = make(map[string][]string)
u.Permissions["/"] = []string{dataprovider.PermAny}
@@ -130,8 +152,9 @@ func TestTruncate(t *testing.T) {
}
_, err = file.Write([]byte("hello"))
assert.NoError(t, err)
conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, "", u)
transfer := NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 5, 100, false, fs)
conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, "", "", u)
transfer := NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 5,
100, 0, false, fs, dataprovider.TransferQuota{})
err = conn.SetStat("/transfer_test_file", &StatAttributes{
Size: 2,
@@ -148,7 +171,8 @@ func TestTruncate(t *testing.T) {
assert.Equal(t, int64(2), fi.Size())
}
transfer = NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 0, 100, true, fs)
transfer = NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 0,
100, 0, true, fs, dataprovider.TransferQuota{})
// file.Stat will fail on a closed file
err = conn.SetStat("/transfer_test_file", &StatAttributes{
Size: 2,
@@ -158,7 +182,8 @@ func TestTruncate(t *testing.T) {
err = transfer.Close()
assert.NoError(t, err)
transfer = NewBaseTransfer(nil, conn, nil, testFile, testFile, "", TransferUpload, 0, 0, 0, true, fs)
transfer = NewBaseTransfer(nil, conn, nil, testFile, testFile, "", TransferUpload, 0, 0, 0, 0, true,
fs, dataprovider.TransferQuota{})
_, err = transfer.Truncate("mismatch", 0)
assert.EqualError(t, err, errTransferMismatch.Error())
_, err = transfer.Truncate(testFile, 0)
@@ -183,8 +208,10 @@ func TestTransferErrors(t *testing.T) {
testFile := filepath.Join(os.TempDir(), "transfer_test_file")
fs := vfs.NewOsFs("id", os.TempDir(), "")
u := dataprovider.User{
Username: "test",
HomeDir: os.TempDir(),
BaseUser: sdk.BaseUser{
Username: "test",
HomeDir: os.TempDir(),
},
}
err := os.WriteFile(testFile, []byte("test data"), os.ModePerm)
assert.NoError(t, err)
@@ -192,8 +219,9 @@ func TestTransferErrors(t *testing.T) {
if !assert.NoError(t, err) {
assert.FailNow(t, "unable to open test file")
}
conn := NewBaseConnection("id", ProtocolSFTP, "", u)
transfer := NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
conn := NewBaseConnection("id", ProtocolSFTP, "", "", u)
transfer := NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload,
0, 0, 0, 0, true, fs, dataprovider.TransferQuota{})
assert.Nil(t, transfer.cancelFn)
assert.Equal(t, testFile, transfer.GetFsPath())
transfer.SetCancelFn(cancelFn)
@@ -219,7 +247,8 @@ func TestTransferErrors(t *testing.T) {
assert.FailNow(t, "unable to open test file")
}
fsPath := filepath.Join(os.TempDir(), "test_file")
transfer = NewBaseTransfer(file, conn, nil, fsPath, file.Name(), "/test_file", TransferUpload, 0, 0, 0, true, fs)
transfer = NewBaseTransfer(file, conn, nil, fsPath, file.Name(), "/test_file", TransferUpload, 0, 0, 0, 0, true,
fs, dataprovider.TransferQuota{})
transfer.BytesReceived = 9
transfer.TransferError(errFake)
assert.Error(t, transfer.ErrTransfer, errFake.Error())
@@ -238,7 +267,8 @@ func TestTransferErrors(t *testing.T) {
if !assert.NoError(t, err) {
assert.FailNow(t, "unable to open test file")
}
transfer = NewBaseTransfer(file, conn, nil, fsPath, file.Name(), "/test_file", TransferUpload, 0, 0, 0, true, fs)
transfer = NewBaseTransfer(file, conn, nil, fsPath, file.Name(), "/test_file", TransferUpload, 0, 0, 0, 0, true,
fs, dataprovider.TransferQuota{})
transfer.BytesReceived = 9
// the file is closed by the embedding struct before calling Close
err = file.Close()
@@ -258,11 +288,14 @@ func TestRemovePartialCryptoFile(t *testing.T) {
fs, err := vfs.NewCryptFs("id", os.TempDir(), "", vfs.CryptFsConfig{Passphrase: kms.NewPlainSecret("secret")})
require.NoError(t, err)
u := dataprovider.User{
Username: "test",
HomeDir: os.TempDir(),
BaseUser: sdk.BaseUser{
Username: "test",
HomeDir: os.TempDir(),
},
}
conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, "", u)
transfer := NewBaseTransfer(nil, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, "", "", u)
transfer := NewBaseTransfer(nil, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload,
0, 0, 0, 0, true, fs, dataprovider.TransferQuota{})
transfer.ErrTransfer = errors.New("test error")
_, err = transfer.getUploadFileSize()
assert.Error(t, err)
@@ -273,3 +306,164 @@ func TestRemovePartialCryptoFile(t *testing.T) {
assert.Equal(t, int64(9), size)
assert.NoFileExists(t, testFile)
}
func TestFTPMode(t *testing.T) {
conn := NewBaseConnection("", ProtocolFTP, "", "", dataprovider.User{})
transfer := BaseTransfer{
Connection: conn,
transferType: TransferUpload,
BytesReceived: 123,
Fs: vfs.NewOsFs("", os.TempDir(), ""),
}
assert.Empty(t, transfer.ftpMode)
transfer.SetFtpMode("active")
assert.Equal(t, "active", transfer.ftpMode)
}
func TestTransferQuota(t *testing.T) {
user := dataprovider.User{
BaseUser: sdk.BaseUser{
TotalDataTransfer: -1,
UploadDataTransfer: -1,
DownloadDataTransfer: -1,
},
}
user.Filters.DataTransferLimits = []sdk.DataTransferLimit{
{
Sources: []string{"127.0.0.1/32", "192.168.1.0/24"},
TotalDataTransfer: 100,
UploadDataTransfer: 0,
DownloadDataTransfer: 0,
},
{
Sources: []string{"172.16.0.0/24"},
TotalDataTransfer: 0,
UploadDataTransfer: 120,
DownloadDataTransfer: 150,
},
}
ul, dl, total := user.GetDataTransferLimits("127.0.1.1")
assert.Equal(t, int64(0), ul)
assert.Equal(t, int64(0), dl)
assert.Equal(t, int64(0), total)
ul, dl, total = user.GetDataTransferLimits("127.0.0.1")
assert.Equal(t, int64(0), ul)
assert.Equal(t, int64(0), dl)
assert.Equal(t, int64(100*1048576), total)
ul, dl, total = user.GetDataTransferLimits("192.168.1.4")
assert.Equal(t, int64(0), ul)
assert.Equal(t, int64(0), dl)
assert.Equal(t, int64(100*1048576), total)
ul, dl, total = user.GetDataTransferLimits("172.16.0.2")
assert.Equal(t, int64(120*1048576), ul)
assert.Equal(t, int64(150*1048576), dl)
assert.Equal(t, int64(0), total)
transferQuota := dataprovider.TransferQuota{}
assert.True(t, transferQuota.HasDownloadSpace())
assert.True(t, transferQuota.HasUploadSpace())
transferQuota.TotalSize = -1
transferQuota.ULSize = -1
transferQuota.DLSize = -1
assert.True(t, transferQuota.HasDownloadSpace())
assert.True(t, transferQuota.HasUploadSpace())
transferQuota.TotalSize = 100
transferQuota.AllowedTotalSize = 10
assert.True(t, transferQuota.HasDownloadSpace())
assert.True(t, transferQuota.HasUploadSpace())
transferQuota.AllowedTotalSize = 0
assert.False(t, transferQuota.HasDownloadSpace())
assert.False(t, transferQuota.HasUploadSpace())
transferQuota.TotalSize = 0
transferQuota.DLSize = 100
transferQuota.ULSize = 50
transferQuota.AllowedTotalSize = 0
assert.False(t, transferQuota.HasDownloadSpace())
assert.False(t, transferQuota.HasUploadSpace())
transferQuota.AllowedDLSize = 1
transferQuota.AllowedULSize = 1
assert.True(t, transferQuota.HasDownloadSpace())
assert.True(t, transferQuota.HasUploadSpace())
transferQuota.AllowedDLSize = -10
transferQuota.AllowedULSize = -1
assert.False(t, transferQuota.HasDownloadSpace())
assert.False(t, transferQuota.HasUploadSpace())
conn := NewBaseConnection("", ProtocolSFTP, "", "", user)
transfer := NewBaseTransfer(nil, conn, nil, "file.txt", "file.txt", "/transfer_test_file", TransferUpload,
0, 0, 0, 0, true, vfs.NewOsFs("", os.TempDir(), ""), dataprovider.TransferQuota{})
err := transfer.CheckRead()
assert.NoError(t, err)
err = transfer.CheckWrite()
assert.NoError(t, err)
transfer.transferQuota = dataprovider.TransferQuota{
AllowedTotalSize: 10,
}
transfer.BytesReceived = 5
transfer.BytesSent = 4
err = transfer.CheckRead()
assert.NoError(t, err)
err = transfer.CheckWrite()
assert.NoError(t, err)
transfer.BytesSent = 6
err = transfer.CheckRead()
if assert.Error(t, err) {
assert.Contains(t, err.Error(), ErrReadQuotaExceeded.Error())
}
err = transfer.CheckWrite()
assert.True(t, conn.IsQuotaExceededError(err))
transferQuota = dataprovider.TransferQuota{
AllowedTotalSize: 0,
AllowedULSize: 10,
AllowedDLSize: 5,
}
transfer.transferQuota = transferQuota
assert.Equal(t, transferQuota, transfer.GetTransferQuota())
err = transfer.CheckRead()
if assert.Error(t, err) {
assert.Contains(t, err.Error(), ErrReadQuotaExceeded.Error())
}
err = transfer.CheckWrite()
assert.NoError(t, err)
transfer.BytesReceived = 11
err = transfer.CheckRead()
if assert.Error(t, err) {
assert.Contains(t, err.Error(), ErrReadQuotaExceeded.Error())
}
err = transfer.CheckWrite()
assert.True(t, conn.IsQuotaExceededError(err))
}
func TestUploadOutsideHomeRenameError(t *testing.T) {
oldTempPath := Config.TempPath
conn := NewBaseConnection("", ProtocolSFTP, "", "", dataprovider.User{})
transfer := BaseTransfer{
Connection: conn,
transferType: TransferUpload,
BytesReceived: 123,
Fs: vfs.NewOsFs("", filepath.Join(os.TempDir(), "home"), ""),
}
fileName := filepath.Join(os.TempDir(), "_temp")
err := os.WriteFile(fileName, []byte(`data`), 0644)
assert.NoError(t, err)
transfer.effectiveFsPath = fileName
res := transfer.checkUploadOutsideHomeDir(os.ErrPermission)
assert.Equal(t, 0, res)
Config.TempPath = filepath.Clean(os.TempDir())
res = transfer.checkUploadOutsideHomeDir(nil)
assert.Equal(t, 0, res)
assert.Greater(t, transfer.BytesReceived, int64(0))
res = transfer.checkUploadOutsideHomeDir(os.ErrPermission)
assert.Equal(t, 1, res)
assert.Equal(t, int64(0), transfer.BytesReceived)
assert.NoFileExists(t, fileName)
Config.TempPath = oldTempPath
}

common/transferschecker.go Normal file (329 lines added)
View File

@@ -0,0 +1,329 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
"errors"
"sync"
"time"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
type overquotaTransfer struct {
ConnID string
TransferID int64
TransferType int
}
type uploadAggregationKey struct {
Username string
FolderName string
}
// TransfersChecker defines the interface that transfer checkers must implement.
// A transfer checker ensures that multiple concurrent transfers do not exceed
// the remaining user quota
type TransfersChecker interface {
AddTransfer(transfer dataprovider.ActiveTransfer)
RemoveTransfer(ID int64, connectionID string)
UpdateTransferCurrentSizes(ulSize, dlSize, ID int64, connectionID string)
GetOverquotaTransfers() []overquotaTransfer
}
func getTransfersChecker(isShared int) TransfersChecker {
if isShared == 1 {
logger.Info(logSender, "", "using provider transfer checker")
return &transfersCheckerDB{}
}
logger.Info(logSender, "", "using memory transfer checker")
return &transfersCheckerMem{}
}
type baseTransferChecker struct {
transfers []dataprovider.ActiveTransfer
}
func (t *baseTransferChecker) isDataTransferExceeded(user dataprovider.User, transfer dataprovider.ActiveTransfer, ulSize,
dlSize int64,
) bool {
ulQuota, dlQuota, totalQuota := user.GetDataTransferLimits(transfer.IP)
if totalQuota > 0 {
allowedSize := totalQuota - (user.UsedUploadDataTransfer + user.UsedDownloadDataTransfer)
if ulSize+dlSize > allowedSize {
return transfer.CurrentDLSize > 0 || transfer.CurrentULSize > 0
}
}
if dlQuota > 0 {
allowedSize := dlQuota - user.UsedDownloadDataTransfer
if dlSize > allowedSize {
return transfer.CurrentDLSize > 0
}
}
if ulQuota > 0 {
allowedSize := ulQuota - user.UsedUploadDataTransfer
if ulSize > allowedSize {
return transfer.CurrentULSize > 0
}
}
return false
}
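Worked example (mirroring TestDataTransferExceeded below): with TotalDataTransfer set to 1 MB and 1.5 MB already consumed across both directions, the remaining allowance is negative, so any aggregated session size trips the first branch; even then the function reports only transfers whose CurrentULSize or CurrentDLSize is above zero, leaving idle transfers to fail their own CheckRead/CheckWrite instead.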
func (t *baseTransferChecker) getRemainingDiskQuota(user dataprovider.User, folderName string) (int64, error) {
var result int64
if folderName != "" {
for _, folder := range user.VirtualFolders {
if folder.Name == folderName {
if folder.QuotaSize > 0 {
return folder.QuotaSize - folder.UsedQuotaSize, nil
}
}
}
} else {
if user.QuotaSize > 0 {
return user.QuotaSize - user.UsedQuotaSize, nil
}
}
return result, errors.New("no quota limit defined")
}
func (t *baseTransferChecker) aggregateTransfersByUser(usersToFetch map[string]bool,
) (map[string]bool, map[string][]dataprovider.ActiveTransfer) {
aggregations := make(map[string][]dataprovider.ActiveTransfer)
for _, transfer := range t.transfers {
aggregations[transfer.Username] = append(aggregations[transfer.Username], transfer)
if len(aggregations[transfer.Username]) > 1 {
if _, ok := usersToFetch[transfer.Username]; !ok {
usersToFetch[transfer.Username] = false
}
}
}
return usersToFetch, aggregations
}
func (t *baseTransferChecker) aggregateUploadTransfers() (map[string]bool, map[int][]dataprovider.ActiveTransfer) {
usersToFetch := make(map[string]bool)
aggregations := make(map[int][]dataprovider.ActiveTransfer)
var keys []uploadAggregationKey
for _, transfer := range t.transfers {
if transfer.Type != TransferUpload {
continue
}
key := -1
for idx, k := range keys {
if k.Username == transfer.Username && k.FolderName == transfer.FolderName {
key = idx
break
}
}
if key == -1 {
key = len(keys)
keys = append(keys, uploadAggregationKey{
Username: transfer.Username,
FolderName: transfer.FolderName,
})
}
aggregations[key] = append(aggregations[key], transfer)
if len(aggregations[key]) > 1 {
if transfer.FolderName != "" {
usersToFetch[transfer.Username] = true
} else {
if _, ok := usersToFetch[transfer.Username]; !ok {
usersToFetch[transfer.Username] = false
}
}
}
}
return usersToFetch, aggregations
}
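In short, uploads are aggregated per (username, folder name) pair, and a username is queued for fetching only once a pair accumulates at least two concurrent uploads; the boolean value marks whether the user's virtual folders must be loaded too (true when the contended uploads target a folder). TestAggregateTransfers below steps through exactly these cases.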
func (t *baseTransferChecker) getUsersToCheck(usersToFetch map[string]bool) (map[string]dataprovider.User, error) {
users, err := dataprovider.GetUsersForQuotaCheck(usersToFetch)
if err != nil {
return nil, err
}
usersMap := make(map[string]dataprovider.User)
for _, user := range users {
usersMap[user.Username] = user
}
return usersMap, nil
}
func (t *baseTransferChecker) getOverquotaTransfers(usersToFetch map[string]bool,
uploadAggregations map[int][]dataprovider.ActiveTransfer,
userAggregations map[string][]dataprovider.ActiveTransfer,
) []overquotaTransfer {
if len(usersToFetch) == 0 {
return nil
}
usersMap, err := t.getUsersToCheck(usersToFetch)
if err != nil {
logger.Warn(logSender, "", "unable to check transfers, error getting users quota: %v", err)
return nil
}
var overquotaTransfers []overquotaTransfer
for _, transfers := range uploadAggregations {
username := transfers[0].Username
folderName := transfers[0].FolderName
remainingDiskQuota, err := t.getRemainingDiskQuota(usersMap[username], folderName)
if err != nil {
continue
}
var usedDiskQuota int64
for _, tr := range transfers {
// We optimistically assume that a cloud transfer that replaces an existing
// file will be successful
usedDiskQuota += tr.CurrentULSize - tr.TruncatedSize
}
logger.Debug(logSender, "", "username %#v, folder %#v, concurrent transfers: %v, remaining disk quota (bytes): %v, disk quota used in ongoing transfers (bytes): %v",
username, folderName, len(transfers), remainingDiskQuota, usedDiskQuota)
if usedDiskQuota > remainingDiskQuota {
for _, tr := range transfers {
if tr.CurrentULSize > tr.TruncatedSize {
overquotaTransfers = append(overquotaTransfers, overquotaTransfer{
ConnID: tr.ConnID,
TransferID: tr.ID,
TransferType: tr.Type,
})
}
}
}
}
for username, transfers := range userAggregations {
var ulSize, dlSize int64
for _, tr := range transfers {
ulSize += tr.CurrentULSize
dlSize += tr.CurrentDLSize
}
logger.Debug(logSender, "", "username %#v, concurrent transfers: %v, quota (bytes) used in ongoing transfers, ul: %v, dl: %v",
username, len(transfers), ulSize, dlSize)
for _, tr := range transfers {
if t.isDataTransferExceeded(usersMap[username], tr, ulSize, dlSize) {
overquotaTransfers = append(overquotaTransfers, overquotaTransfer{
ConnID: tr.ConnID,
TransferID: tr.ID,
TransferType: tr.Type,
})
}
}
}
return overquotaTransfers
}
type transfersCheckerMem struct {
sync.RWMutex
baseTransferChecker
}
func (t *transfersCheckerMem) AddTransfer(transfer dataprovider.ActiveTransfer) {
t.Lock()
defer t.Unlock()
t.transfers = append(t.transfers, transfer)
}
func (t *transfersCheckerMem) RemoveTransfer(ID int64, connectionID string) {
t.Lock()
defer t.Unlock()
for idx, transfer := range t.transfers {
if transfer.ID == ID && transfer.ConnID == connectionID {
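// order does not matter here, so replace the match with the last element and shrink the slice (O(1) delete)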
lastIdx := len(t.transfers) - 1
t.transfers[idx] = t.transfers[lastIdx]
t.transfers = t.transfers[:lastIdx]
return
}
}
}
func (t *transfersCheckerMem) UpdateTransferCurrentSizes(ulSize, dlSize, ID int64, connectionID string) {
t.Lock()
defer t.Unlock()
for idx := range t.transfers {
if t.transfers[idx].ID == ID && t.transfers[idx].ConnID == connectionID {
t.transfers[idx].CurrentDLSize = dlSize
t.transfers[idx].CurrentULSize = ulSize
t.transfers[idx].UpdatedAt = util.GetTimeAsMsSinceEpoch(time.Now())
return
}
}
}
func (t *transfersCheckerMem) GetOverquotaTransfers() []overquotaTransfer {
t.RLock()
usersToFetch, uploadAggregations := t.aggregateUploadTransfers()
usersToFetch, userAggregations := t.aggregateTransfersByUser(usersToFetch)
t.RUnlock()
return t.getOverquotaTransfers(usersToFetch, uploadAggregations, userAggregations)
}
type transfersCheckerDB struct {
baseTransferChecker
lastCleanup time.Time
}
func (t *transfersCheckerDB) AddTransfer(transfer dataprovider.ActiveTransfer) {
dataprovider.AddActiveTransfer(transfer)
}
func (t *transfersCheckerDB) RemoveTransfer(ID int64, connectionID string) {
dataprovider.RemoveActiveTransfer(ID, connectionID)
}
func (t *transfersCheckerDB) UpdateTransferCurrentSizes(ulSize, dlSize, ID int64, connectionID string) {
dataprovider.UpdateActiveTransferSizes(ulSize, dlSize, ID, connectionID)
}
func (t *transfersCheckerDB) GetOverquotaTransfers() []overquotaTransfer {
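// throttle the cleanup of stale transfer records: run it at most once every 15 check intervals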
if t.lastCleanup.IsZero() || t.lastCleanup.Add(periodicTimeoutCheckInterval*15).Before(time.Now()) {
before := time.Now().Add(-periodicTimeoutCheckInterval * 5)
err := dataprovider.CleanupActiveTransfers(before)
logger.Debug(logSender, "", "cleanup active transfers completed, err: %v", err)
if err == nil {
t.lastCleanup = time.Now()
}
}
var err error
from := time.Now().Add(-periodicTimeoutCheckInterval * 2)
t.transfers, err = dataprovider.GetActiveTransfers(from)
if err != nil {
logger.Error(logSender, "", "unable to check overquota transfers, error getting active transfers: %v", err)
return nil
}
usersToFetch, uploadAggregations := t.aggregateUploadTransfers()
usersToFetch, userAggregations := t.aggregateTransfersByUser(usersToFetch)
return t.getOverquotaTransfers(usersToFetch, uploadAggregations, userAggregations)
}
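
For context, a sketch of how the checker output might be consumed (startOverquotaWatcher and its abort callback are hypothetical; in SFTPGo the connection manager resolves the reported IDs and signals the matching transfers). It assumes it lives alongside the checker in package common, since overquotaTransfer is unexported:

package common

import "time"

func startOverquotaWatcher(checker TransfersChecker, abort func(connID string, transferID int64), stop <-chan struct{}) {
	ticker := time.NewTicker(30 * time.Second) // illustrative interval
	go func() {
		defer ticker.Stop()
		for {
			select {
			case <-stop:
				return
			case <-ticker.C:
				// the checker only reports IDs; aborting is the caller's job
				for _, tr := range checker.GetOverquotaTransfers() {
					abort(tr.ConnID, tr.TransferID)
				}
			}
		}
	}()
}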

View File

@@ -0,0 +1,768 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package common
import (
"fmt"
"os"
"path"
"path/filepath"
"strconv"
"strings"
"sync/atomic"
"testing"
"time"
"github.com/rs/xid"
"github.com/sftpgo/sdk"
"github.com/stretchr/testify/assert"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/vfs"
)
func TestTransfersCheckerDiskQuota(t *testing.T) {
username := "transfers_check_username"
folderName := "test_transfers_folder"
groupName := "test_transfers_group"
vdirPath := "/vdir"
group := dataprovider.Group{
BaseGroup: sdk.BaseGroup{
Name: groupName,
},
UserSettings: dataprovider.GroupUserSettings{
BaseGroupUserSettings: sdk.BaseGroupUserSettings{
QuotaSize: 120,
},
},
}
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: username,
Password: "testpwd",
HomeDir: filepath.Join(os.TempDir(), username),
Status: 1,
QuotaSize: 0, // the quota size defined for the group is used
Permissions: map[string][]string{
"/": {dataprovider.PermAny},
},
},
VirtualFolders: []vfs.VirtualFolder{
{
BaseVirtualFolder: vfs.BaseVirtualFolder{
Name: folderName,
MappedPath: filepath.Join(os.TempDir(), folderName),
},
VirtualPath: vdirPath,
QuotaSize: 100,
},
},
Groups: []sdk.GroupMapping{
{
Name: groupName,
Type: sdk.GroupTypePrimary,
},
},
}
err := dataprovider.AddGroup(&group, "", "")
assert.NoError(t, err)
group, err = dataprovider.GroupExists(groupName)
assert.NoError(t, err)
assert.Equal(t, int64(120), group.UserSettings.QuotaSize)
err = dataprovider.AddUser(&user, "", "")
assert.NoError(t, err)
user, err = dataprovider.GetUserWithGroupSettings(username)
assert.NoError(t, err)
connID1 := xid.New().String()
fsUser, err := user.GetFilesystemForPath("/file1", connID1)
assert.NoError(t, err)
conn1 := NewBaseConnection(connID1, ProtocolSFTP, "", "", user)
fakeConn1 := &fakeConnection{
BaseConnection: conn1,
}
transfer1 := NewBaseTransfer(nil, conn1, nil, filepath.Join(user.HomeDir, "file1"), filepath.Join(user.HomeDir, "file1"),
"/file1", TransferUpload, 0, 0, 120, 0, true, fsUser, dataprovider.TransferQuota{})
transfer1.BytesReceived = 150
err = Connections.Add(fakeConn1)
assert.NoError(t, err)
// the transferschecker will do nothing if there is only one ongoing transfer
Connections.checkTransfers()
assert.Nil(t, transfer1.errAbort)
connID2 := xid.New().String()
conn2 := NewBaseConnection(connID2, ProtocolSFTP, "", "", user)
fakeConn2 := &fakeConnection{
BaseConnection: conn2,
}
transfer2 := NewBaseTransfer(nil, conn2, nil, filepath.Join(user.HomeDir, "file2"), filepath.Join(user.HomeDir, "file2"),
"/file2", TransferUpload, 0, 0, 120, 40, true, fsUser, dataprovider.TransferQuota{})
transfer1.BytesReceived = 50
transfer2.BytesReceived = 60
err = Connections.Add(fakeConn2)
assert.NoError(t, err)
connID3 := xid.New().String()
conn3 := NewBaseConnection(connID3, ProtocolSFTP, "", "", user)
fakeConn3 := &fakeConnection{
BaseConnection: conn3,
}
transfer3 := NewBaseTransfer(nil, conn3, nil, filepath.Join(user.HomeDir, "file3"), filepath.Join(user.HomeDir, "file3"),
"/file3", TransferDownload, 0, 0, 120, 0, true, fsUser, dataprovider.TransferQuota{})
transfer3.BytesReceived = 60 // this value will be ignored since this is a download
err = Connections.Add(fakeConn3)
assert.NoError(t, err)
// the transfers are not overquota
Connections.checkTransfers()
assert.Nil(t, transfer1.errAbort)
assert.Nil(t, transfer2.errAbort)
assert.Nil(t, transfer3.errAbort)
transfer1.BytesReceived = 80 // the truncated size is subtracted, so we are not overquota
Connections.checkTransfers()
assert.Nil(t, transfer1.errAbort)
assert.Nil(t, transfer2.errAbort)
assert.Nil(t, transfer3.errAbort)
transfer1.BytesReceived = 120
// we are now overquota
// if another check is in progress nothing is done
atomic.StoreInt32(&Connections.transfersCheckStatus, 1)
Connections.checkTransfers()
assert.Nil(t, transfer1.errAbort)
assert.Nil(t, transfer2.errAbort)
assert.Nil(t, transfer3.errAbort)
atomic.StoreInt32(&Connections.transfersCheckStatus, 0)
Connections.checkTransfers()
assert.True(t, conn1.IsQuotaExceededError(transfer1.errAbort), transfer1.errAbort)
assert.True(t, conn2.IsQuotaExceededError(transfer2.errAbort), transfer2.errAbort)
assert.True(t, conn1.IsQuotaExceededError(transfer1.GetAbortError()))
assert.Nil(t, transfer3.errAbort)
assert.True(t, conn3.IsQuotaExceededError(transfer3.GetAbortError()))
// update the user quota size
group.UserSettings.QuotaSize = 1000
err = dataprovider.UpdateGroup(&group, []string{username}, "", "")
assert.NoError(t, err)
transfer1.errAbort = nil
transfer2.errAbort = nil
Connections.checkTransfers()
assert.Nil(t, transfer1.errAbort)
assert.Nil(t, transfer2.errAbort)
assert.Nil(t, transfer3.errAbort)
group.UserSettings.QuotaSize = 0
err = dataprovider.UpdateGroup(&group, []string{username}, "", "")
assert.NoError(t, err)
Connections.checkTransfers()
assert.Nil(t, transfer1.errAbort)
assert.Nil(t, transfer2.errAbort)
assert.Nil(t, transfer3.errAbort)
// now check a public folder
transfer1.BytesReceived = 0
transfer2.BytesReceived = 0
connID4 := xid.New().String()
fsFolder, err := user.GetFilesystemForPath(path.Join(vdirPath, "/file1"), connID4)
assert.NoError(t, err)
conn4 := NewBaseConnection(connID4, ProtocolSFTP, "", "", user)
fakeConn4 := &fakeConnection{
BaseConnection: conn4,
}
transfer4 := NewBaseTransfer(nil, conn4, nil, filepath.Join(os.TempDir(), folderName, "file1"),
filepath.Join(os.TempDir(), folderName, "file1"), path.Join(vdirPath, "/file1"), TransferUpload, 0, 0,
100, 0, true, fsFolder, dataprovider.TransferQuota{})
err = Connections.Add(fakeConn4)
assert.NoError(t, err)
connID5 := xid.New().String()
conn5 := NewBaseConnection(connID5, ProtocolSFTP, "", "", user)
fakeConn5 := &fakeConnection{
BaseConnection: conn5,
}
transfer5 := NewBaseTransfer(nil, conn5, nil, filepath.Join(os.TempDir(), folderName, "file2"),
filepath.Join(os.TempDir(), folderName, "file2"), path.Join(vdirPath, "/file2"), TransferUpload, 0, 0,
100, 0, true, fsFolder, dataprovider.TransferQuota{})
err = Connections.Add(fakeConn5)
assert.NoError(t, err)
transfer4.BytesReceived = 50
transfer5.BytesReceived = 40
Connections.checkTransfers()
assert.Nil(t, transfer4.errAbort)
assert.Nil(t, transfer5.errAbort)
transfer5.BytesReceived = 60
Connections.checkTransfers()
assert.Nil(t, transfer1.errAbort)
assert.Nil(t, transfer2.errAbort)
assert.Nil(t, transfer3.errAbort)
assert.True(t, conn1.IsQuotaExceededError(transfer4.errAbort))
assert.True(t, conn2.IsQuotaExceededError(transfer5.errAbort))
if dataprovider.GetProviderStatus().Driver != dataprovider.MemoryDataProviderName {
providerConf := dataprovider.GetProviderConfig()
err = dataprovider.Close()
assert.NoError(t, err)
transfer4.errAbort = nil
transfer5.errAbort = nil
Connections.checkTransfers()
assert.Nil(t, transfer1.errAbort)
assert.Nil(t, transfer2.errAbort)
assert.Nil(t, transfer3.errAbort)
assert.Nil(t, transfer4.errAbort)
assert.Nil(t, transfer5.errAbort)
err = dataprovider.Initialize(providerConf, configDir, true)
assert.NoError(t, err)
}
err = transfer1.Close()
assert.NoError(t, err)
err = transfer2.Close()
assert.NoError(t, err)
err = transfer3.Close()
assert.NoError(t, err)
err = transfer4.Close()
assert.NoError(t, err)
err = transfer5.Close()
assert.NoError(t, err)
Connections.Remove(fakeConn1.GetID())
Connections.Remove(fakeConn2.GetID())
Connections.Remove(fakeConn3.GetID())
Connections.Remove(fakeConn4.GetID())
Connections.Remove(fakeConn5.GetID())
stats := Connections.GetStats()
assert.Len(t, stats, 0)
err = dataprovider.DeleteUser(user.Username, "", "")
assert.NoError(t, err)
err = os.RemoveAll(user.GetHomeDir())
assert.NoError(t, err)
err = dataprovider.DeleteFolder(folderName, "", "")
assert.NoError(t, err)
err = os.RemoveAll(filepath.Join(os.TempDir(), folderName))
assert.NoError(t, err)
err = dataprovider.DeleteGroup(groupName, "", "")
assert.NoError(t, err)
}
func TestTransferCheckerTransferQuota(t *testing.T) {
username := "transfers_check_username"
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: username,
Password: "test_pwd",
HomeDir: filepath.Join(os.TempDir(), username),
Status: 1,
TotalDataTransfer: 1,
Permissions: map[string][]string{
"/": {dataprovider.PermAny},
},
},
}
err := dataprovider.AddUser(&user, "", "")
assert.NoError(t, err)
connID1 := xid.New().String()
fsUser, err := user.GetFilesystemForPath("/file1", connID1)
assert.NoError(t, err)
conn1 := NewBaseConnection(connID1, ProtocolSFTP, "", "192.168.1.1", user)
fakeConn1 := &fakeConnection{
BaseConnection: conn1,
}
transfer1 := NewBaseTransfer(nil, conn1, nil, filepath.Join(user.HomeDir, "file1"), filepath.Join(user.HomeDir, "file1"),
"/file1", TransferUpload, 0, 0, 0, 0, true, fsUser, dataprovider.TransferQuota{AllowedTotalSize: 100})
transfer1.BytesReceived = 150
err = Connections.Add(fakeConn1)
assert.NoError(t, err)
// the transferschecker will do nothing if there is only one ongoing transfer
Connections.checkTransfers()
assert.Nil(t, transfer1.errAbort)
connID2 := xid.New().String()
conn2 := NewBaseConnection(connID2, ProtocolSFTP, "", "127.0.0.1", user)
fakeConn2 := &fakeConnection{
BaseConnection: conn2,
}
transfer2 := NewBaseTransfer(nil, conn2, nil, filepath.Join(user.HomeDir, "file2"), filepath.Join(user.HomeDir, "file2"),
"/file2", TransferUpload, 0, 0, 0, 0, true, fsUser, dataprovider.TransferQuota{AllowedTotalSize: 100})
transfer2.BytesReceived = 150
err = Connections.Add(fakeConn2)
assert.NoError(t, err)
Connections.checkTransfers()
assert.Nil(t, transfer1.errAbort)
assert.Nil(t, transfer2.errAbort)
// now test overquota
transfer1.BytesReceived = 1024*1024 + 1
transfer2.BytesReceived = 0
Connections.checkTransfers()
assert.True(t, conn1.IsQuotaExceededError(transfer1.errAbort))
assert.Nil(t, transfer2.errAbort)
transfer1.errAbort = nil
transfer1.BytesReceived = 1024*1024 + 1
transfer2.BytesReceived = 1024
Connections.checkTransfers()
assert.True(t, conn1.IsQuotaExceededError(transfer1.errAbort))
assert.True(t, conn2.IsQuotaExceededError(transfer2.errAbort))
transfer1.BytesReceived = 0
transfer2.BytesReceived = 0
transfer1.errAbort = nil
transfer2.errAbort = nil
err = transfer1.Close()
assert.NoError(t, err)
err = transfer2.Close()
assert.NoError(t, err)
Connections.Remove(fakeConn1.GetID())
Connections.Remove(fakeConn2.GetID())
connID3 := xid.New().String()
conn3 := NewBaseConnection(connID3, ProtocolSFTP, "", "", user)
fakeConn3 := &fakeConnection{
BaseConnection: conn3,
}
transfer3 := NewBaseTransfer(nil, conn3, nil, filepath.Join(user.HomeDir, "file1"), filepath.Join(user.HomeDir, "file1"),
"/file1", TransferDownload, 0, 0, 0, 0, true, fsUser, dataprovider.TransferQuota{AllowedDLSize: 100})
transfer3.BytesSent = 150
err = Connections.Add(fakeConn3)
assert.NoError(t, err)
connID4 := xid.New().String()
conn4 := NewBaseConnection(connID4, ProtocolSFTP, "", "", user)
fakeConn4 := &fakeConnection{
BaseConnection: conn4,
}
transfer4 := NewBaseTransfer(nil, conn4, nil, filepath.Join(user.HomeDir, "file2"), filepath.Join(user.HomeDir, "file2"),
"/file2", TransferDownload, 0, 0, 0, 0, true, fsUser, dataprovider.TransferQuota{AllowedDLSize: 100})
transfer4.BytesSent = 150
err = Connections.Add(fakeConn4)
assert.NoError(t, err)
Connections.checkTransfers()
assert.Nil(t, transfer3.errAbort)
assert.Nil(t, transfer4.errAbort)
transfer3.BytesSent = 512 * 1024
transfer4.BytesSent = 512*1024 + 1
Connections.checkTransfers()
if assert.Error(t, transfer3.errAbort) {
assert.Contains(t, transfer3.errAbort.Error(), ErrReadQuotaExceeded.Error())
}
if assert.Error(t, transfer4.errAbort) {
assert.Contains(t, transfer4.errAbort.Error(), ErrReadQuotaExceeded.Error())
}
Connections.Remove(fakeConn3.GetID())
Connections.Remove(fakeConn4.GetID())
stats := Connections.GetStats()
assert.Len(t, stats, 0)
err = dataprovider.DeleteUser(user.Username, "", "")
assert.NoError(t, err)
err = os.RemoveAll(user.GetHomeDir())
assert.NoError(t, err)
}
func TestAggregateTransfers(t *testing.T) {
checker := transfersCheckerMem{}
checker.AddTransfer(dataprovider.ActiveTransfer{
ID: 1,
Type: TransferUpload,
ConnID: "1",
Username: "user",
FolderName: "",
TruncatedSize: 0,
CurrentULSize: 100,
CurrentDLSize: 0,
CreatedAt: util.GetTimeAsMsSinceEpoch(time.Now()),
UpdatedAt: util.GetTimeAsMsSinceEpoch(time.Now()),
})
usersToFetch, aggregations := checker.aggregateUploadTransfers()
assert.Len(t, usersToFetch, 0)
assert.Len(t, aggregations, 1)
checker.AddTransfer(dataprovider.ActiveTransfer{
ID: 1,
Type: TransferDownload,
ConnID: "2",
Username: "user",
FolderName: "",
TruncatedSize: 0,
CurrentULSize: 0,
CurrentDLSize: 100,
CreatedAt: util.GetTimeAsMsSinceEpoch(time.Now()),
UpdatedAt: util.GetTimeAsMsSinceEpoch(time.Now()),
})
usersToFetch, aggregations = checker.aggregateUploadTransfers()
assert.Len(t, usersToFetch, 0)
assert.Len(t, aggregations, 1)
checker.AddTransfer(dataprovider.ActiveTransfer{
ID: 1,
Type: TransferUpload,
ConnID: "3",
Username: "user",
FolderName: "folder",
TruncatedSize: 0,
CurrentULSize: 10,
CurrentDLSize: 0,
CreatedAt: util.GetTimeAsMsSinceEpoch(time.Now()),
UpdatedAt: util.GetTimeAsMsSinceEpoch(time.Now()),
})
usersToFetch, aggregations = checker.aggregateUploadTransfers()
assert.Len(t, usersToFetch, 0)
assert.Len(t, aggregations, 2)
checker.AddTransfer(dataprovider.ActiveTransfer{
ID: 1,
Type: TransferUpload,
ConnID: "4",
Username: "user1",
FolderName: "",
TruncatedSize: 0,
CurrentULSize: 100,
CurrentDLSize: 0,
CreatedAt: util.GetTimeAsMsSinceEpoch(time.Now()),
UpdatedAt: util.GetTimeAsMsSinceEpoch(time.Now()),
})
usersToFetch, aggregations = checker.aggregateUploadTransfers()
assert.Len(t, usersToFetch, 0)
assert.Len(t, aggregations, 3)
checker.AddTransfer(dataprovider.ActiveTransfer{
ID: 1,
Type: TransferUpload,
ConnID: "5",
Username: "user",
FolderName: "",
TruncatedSize: 0,
CurrentULSize: 100,
CurrentDLSize: 0,
CreatedAt: util.GetTimeAsMsSinceEpoch(time.Now()),
UpdatedAt: util.GetTimeAsMsSinceEpoch(time.Now()),
})
usersToFetch, aggregations = checker.aggregateUploadTransfers()
assert.Len(t, usersToFetch, 1)
val, ok := usersToFetch["user"]
assert.True(t, ok)
assert.False(t, val)
assert.Len(t, aggregations, 3)
aggregate, ok := aggregations[0]
assert.True(t, ok)
assert.Len(t, aggregate, 2)
checker.AddTransfer(dataprovider.ActiveTransfer{
ID: 1,
Type: TransferUpload,
ConnID: "6",
Username: "user",
FolderName: "",
TruncatedSize: 0,
CurrentULSize: 100,
CurrentDLSize: 0,
CreatedAt: util.GetTimeAsMsSinceEpoch(time.Now()),
UpdatedAt: util.GetTimeAsMsSinceEpoch(time.Now()),
})
usersToFetch, aggregations = checker.aggregateUploadTransfers()
assert.Len(t, usersToFetch, 1)
val, ok = usersToFetch["user"]
assert.True(t, ok)
assert.False(t, val)
assert.Len(t, aggregations, 3)
aggregate, ok = aggregations[0]
assert.True(t, ok)
assert.Len(t, aggregate, 3)
checker.AddTransfer(dataprovider.ActiveTransfer{
ID: 1,
Type: TransferUpload,
ConnID: "7",
Username: "user",
FolderName: "folder",
TruncatedSize: 0,
CurrentULSize: 10,
CurrentDLSize: 0,
CreatedAt: util.GetTimeAsMsSinceEpoch(time.Now()),
UpdatedAt: util.GetTimeAsMsSinceEpoch(time.Now()),
})
usersToFetch, aggregations = checker.aggregateUploadTransfers()
assert.Len(t, usersToFetch, 1)
val, ok = usersToFetch["user"]
assert.True(t, ok)
assert.True(t, val)
assert.Len(t, aggregations, 3)
aggregate, ok = aggregations[0]
assert.True(t, ok)
assert.Len(t, aggregate, 3)
aggregate, ok = aggregations[1]
assert.True(t, ok)
assert.Len(t, aggregate, 2)
checker.AddTransfer(dataprovider.ActiveTransfer{
ID: 1,
Type: TransferUpload,
ConnID: "8",
Username: "user",
FolderName: "",
TruncatedSize: 0,
CurrentULSize: 100,
CurrentDLSize: 0,
CreatedAt: util.GetTimeAsMsSinceEpoch(time.Now()),
UpdatedAt: util.GetTimeAsMsSinceEpoch(time.Now()),
})
usersToFetch, aggregations = checker.aggregateUploadTransfers()
assert.Len(t, usersToFetch, 1)
val, ok = usersToFetch["user"]
assert.True(t, ok)
assert.True(t, val)
assert.Len(t, aggregations, 3)
aggregate, ok = aggregations[0]
assert.True(t, ok)
assert.Len(t, aggregate, 4)
aggregate, ok = aggregations[1]
assert.True(t, ok)
assert.Len(t, aggregate, 2)
}
func TestDataTransferExceeded(t *testing.T) {
user := dataprovider.User{
BaseUser: sdk.BaseUser{
TotalDataTransfer: 1,
},
}
transfer := dataprovider.ActiveTransfer{
CurrentULSize: 0,
CurrentDLSize: 0,
}
user.UsedDownloadDataTransfer = 1024 * 1024
user.UsedUploadDataTransfer = 512 * 1024
checker := transfersCheckerMem{}
res := checker.isDataTransferExceeded(user, transfer, 100, 100)
assert.False(t, res)
transfer.CurrentULSize = 1
res = checker.isDataTransferExceeded(user, transfer, 100, 100)
assert.True(t, res)
user.UsedDownloadDataTransfer = 512*1024 - 100
user.UsedUploadDataTransfer = 512*1024 - 100
res = checker.isDataTransferExceeded(user, transfer, 100, 100)
assert.False(t, res)
res = checker.isDataTransferExceeded(user, transfer, 101, 100)
assert.True(t, res)
user.TotalDataTransfer = 0
user.DownloadDataTransfer = 1
user.UsedDownloadDataTransfer = 512 * 1024
transfer.CurrentULSize = 0
transfer.CurrentDLSize = 100
res = checker.isDataTransferExceeded(user, transfer, 0, 512*1024)
assert.False(t, res)
res = checker.isDataTransferExceeded(user, transfer, 0, 512*1024+1)
assert.True(t, res)
user.DownloadDataTransfer = 0
user.UploadDataTransfer = 1
user.UsedUploadDataTransfer = 512 * 1024
transfer.CurrentULSize = 0
transfer.CurrentDLSize = 0
res = checker.isDataTransferExceeded(user, transfer, 512*1024+1, 0)
assert.False(t, res)
transfer.CurrentULSize = 1
res = checker.isDataTransferExceeded(user, transfer, 512*1024+1, 0)
assert.True(t, res)
}
func TestGetUsersForQuotaCheck(t *testing.T) {
usersToFetch := make(map[string]bool)
for i := 0; i < 50; i++ {
usersToFetch[fmt.Sprintf("user%v", i)] = i%2 == 0
}
users, err := dataprovider.GetUsersForQuotaCheck(usersToFetch)
assert.NoError(t, err)
assert.Len(t, users, 0)
for i := 0; i < 40; i++ {
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: fmt.Sprintf("user%v", i),
Password: "pwd",
HomeDir: filepath.Join(os.TempDir(), fmt.Sprintf("user%v", i)),
Status: 1,
QuotaSize: 120,
Permissions: map[string][]string{
"/": {dataprovider.PermAny},
},
},
VirtualFolders: []vfs.VirtualFolder{
{
BaseVirtualFolder: vfs.BaseVirtualFolder{
Name: fmt.Sprintf("f%v", i),
MappedPath: filepath.Join(os.TempDir(), fmt.Sprintf("f%v", i)),
},
VirtualPath: "/vfolder",
QuotaSize: 100,
},
},
Filters: dataprovider.UserFilters{
BaseUserFilters: sdk.BaseUserFilters{
DataTransferLimits: []sdk.DataTransferLimit{
{
Sources: []string{"172.16.0.0/16"},
UploadDataTransfer: 50,
DownloadDataTransfer: 80,
},
},
},
},
}
err = dataprovider.AddUser(&user, "", "")
assert.NoError(t, err)
err = dataprovider.UpdateVirtualFolderQuota(&vfs.BaseVirtualFolder{Name: fmt.Sprintf("f%v", i)}, 1, 50, false)
assert.NoError(t, err)
}
users, err = dataprovider.GetUsersForQuotaCheck(usersToFetch)
assert.NoError(t, err)
assert.Len(t, users, 40)
for _, user := range users {
userIdxStr := strings.Replace(user.Username, "user", "", 1)
userIdx, err := strconv.Atoi(userIdxStr)
assert.NoError(t, err)
if userIdx%2 == 0 {
if assert.Len(t, user.VirtualFolders, 1, user.Username) {
assert.Equal(t, int64(100), user.VirtualFolders[0].QuotaSize)
assert.Equal(t, int64(50), user.VirtualFolders[0].UsedQuotaSize)
}
} else {
switch dataprovider.GetProviderStatus().Driver {
case dataprovider.MySQLDataProviderName, dataprovider.PGSQLDataProviderName,
dataprovider.CockroachDataProviderName, dataprovider.SQLiteDataProviderName:
assert.Len(t, user.VirtualFolders, 0, user.Username)
}
}
ul, dl, total := user.GetDataTransferLimits("127.1.1.1")
assert.Equal(t, int64(0), ul)
assert.Equal(t, int64(0), dl)
assert.Equal(t, int64(0), total)
ul, dl, total = user.GetDataTransferLimits("172.16.2.3")
assert.Equal(t, int64(50*1024*1024), ul)
assert.Equal(t, int64(80*1024*1024), dl)
assert.Equal(t, int64(0), total)
}
for i := 0; i < 40; i++ {
err = dataprovider.DeleteUser(fmt.Sprintf("user%v", i), "", "")
assert.NoError(t, err)
err = dataprovider.DeleteFolder(fmt.Sprintf("f%v", i), "", "")
assert.NoError(t, err)
}
users, err = dataprovider.GetUsersForQuotaCheck(usersToFetch)
assert.NoError(t, err)
assert.Len(t, users, 0)
}
func TestDBTransferChecker(t *testing.T) {
if !isDbTransferCheckerSupported() {
t.Skip("this test is not supported with the current database provider")
}
providerConf := dataprovider.GetProviderConfig()
err := dataprovider.Close()
assert.NoError(t, err)
providerConf.IsShared = 1
err = dataprovider.Initialize(providerConf, configDir, true)
assert.NoError(t, err)
c := getTransfersChecker(1)
checker, ok := c.(*transfersCheckerDB)
assert.True(t, ok)
assert.True(t, checker.lastCleanup.IsZero())
transfer1 := dataprovider.ActiveTransfer{
ID: 1,
Type: TransferDownload,
ConnID: xid.New().String(),
Username: "user1",
FolderName: "folder1",
IP: "127.0.0.1",
}
checker.AddTransfer(transfer1)
transfers, err := dataprovider.GetActiveTransfers(time.Now().Add(24 * time.Hour))
assert.NoError(t, err)
assert.Len(t, transfers, 0)
transfers, err = dataprovider.GetActiveTransfers(time.Now().Add(-periodicTimeoutCheckInterval * 2))
assert.NoError(t, err)
var createdAt, updatedAt int64
if assert.Len(t, transfers, 1) {
transfer := transfers[0]
assert.Equal(t, transfer1.ID, transfer.ID)
assert.Equal(t, transfer1.Type, transfer.Type)
assert.Equal(t, transfer1.ConnID, transfer.ConnID)
assert.Equal(t, transfer1.Username, transfer.Username)
assert.Equal(t, transfer1.IP, transfer.IP)
assert.Equal(t, transfer1.FolderName, transfer.FolderName)
assert.Greater(t, transfer.CreatedAt, int64(0))
assert.Greater(t, transfer.UpdatedAt, int64(0))
assert.Equal(t, int64(0), transfer.CurrentDLSize)
assert.Equal(t, int64(0), transfer.CurrentULSize)
createdAt = transfer.CreatedAt
updatedAt = transfer.UpdatedAt
}
time.Sleep(100 * time.Millisecond)
checker.UpdateTransferCurrentSizes(100, 150, transfer1.ID, transfer1.ConnID)
transfers, err = dataprovider.GetActiveTransfers(time.Now().Add(-periodicTimeoutCheckInterval * 2))
assert.NoError(t, err)
if assert.Len(t, transfers, 1) {
transfer := transfers[0]
assert.Equal(t, int64(150), transfer.CurrentDLSize)
assert.Equal(t, int64(100), transfer.CurrentULSize)
assert.Equal(t, createdAt, transfer.CreatedAt)
assert.Greater(t, transfer.UpdatedAt, updatedAt)
}
res := checker.GetOverquotaTransfers()
assert.Len(t, res, 0)
checker.RemoveTransfer(transfer1.ID, transfer1.ConnID)
transfers, err = dataprovider.GetActiveTransfers(time.Now().Add(-periodicTimeoutCheckInterval * 2))
assert.NoError(t, err)
assert.Len(t, transfers, 0)
err = dataprovider.Close()
assert.NoError(t, err)
res = checker.GetOverquotaTransfers()
assert.Len(t, res, 0)
providerConf.IsShared = 0
err = dataprovider.Initialize(providerConf, configDir, true)
assert.NoError(t, err)
}
func isDbTransferCheckerSupported() bool {
// SQLite shares the implementation with the other SQL-based providers but it makes no sense
// to use it outside test cases
switch dataprovider.GetProviderStatus().Driver {
case dataprovider.MySQLDataProviderName, dataprovider.PGSQLDataProviderName,
dataprovider.CockroachDataProviderName, dataprovider.SQLiteDataProviderName:
return true
default:
return false
}
}
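For orientation, the lifecycle exercised by TestDBTransferChecker above can be condensed into a short sketch. This is illustrative only: it assumes the same package scope as the test, and error handling is omitted.

transfer := dataprovider.ActiveTransfer{
	ID:       1,
	Type:     TransferDownload,
	ConnID:   xid.New().String(),
	Username: "user1",
	IP:       "127.0.0.1",
}
checker := getTransfersChecker(1) // DB-backed checker when the provider is shared
checker.AddTransfer(transfer)     // persist the new transfer
checker.UpdateTransferCurrentSizes(0, 1024, transfer.ID, transfer.ConnID) // ul/dl progress
_ = checker.GetOverquotaTransfers()                  // transfers exceeding their limits
checker.RemoveTransfer(transfer.ID, transfer.ConnID) // transfer completed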

File diff suppressed because it is too large

config/config_darwin.go Normal file

@@ -0,0 +1,25 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//go:build darwin
// +build darwin
package config
import "github.com/spf13/viper"
// macOS specific config search path
func setViperAdditionalConfigPaths() {
viper.AddConfigPath("/usr/local/etc/sftpgo")
}

config/config_fallback.go Normal file

@@ -0,0 +1,20 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//go:build !linux && !darwin
// +build !linux,!darwin
package config
func setViperAdditionalConfigPaths() {}

@@ -1,3 +1,18 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//go:build linux
// +build linux
package config
@@ -8,4 +23,5 @@ import "github.com/spf13/viper"
func setViperAdditionalConfigPaths() {
viper.AddConfigPath("$HOME/.config/sftpgo")
viper.AddConfigPath("/etc/sftpgo")
viper.AddConfigPath("/usr/local/etc/sftpgo")
}
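For context, here is a minimal sketch of how these viper search paths are typically consumed. The wiring below is hypothetical (the real load logic lives elsewhere in the config package); only SetConfigName, AddConfigPath and ReadInConfig are standard viper calls.

package config

import "github.com/spf13/viper"

// loadConfig is a hypothetical illustration: viper tries each registered
// path in order and loads the first sftpgo.<ext> file it finds.
func loadConfig() error {
	viper.SetConfigName("sftpgo")   // match sftpgo.json/yaml/toml/...
	viper.AddConfigPath(".")        // working directory first
	setViperAdditionalConfigPaths() // then the OS-specific paths above
	return viper.ReadInConfig()
}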

@@ -1,7 +0,0 @@
// +build !linux
package config
func setViperAdditionalConfigPaths() {
}

File diff suppressed because it is too large

dataprovider/actions.go Normal file

@@ -0,0 +1,135 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package dataprovider
import (
"bytes"
"context"
"fmt"
"net/url"
"os/exec"
"path/filepath"
"strings"
"time"
"github.com/sftpgo/sdk/plugin/notifier"
"github.com/drakkan/sftpgo/v2/command"
"github.com/drakkan/sftpgo/v2/httpclient"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/plugin"
"github.com/drakkan/sftpgo/v2/util"
)
const (
// ActionExecutorSelf is used as username for self action, for example a user/admin that updates itself
ActionExecutorSelf = "__self__"
// ActionExecutorSystem is used as username for actions with no explicit executor associated, for example
// adding/updating a user/admin by loading initial data
ActionExecutorSystem = "__system__"
)
const (
actionObjectUser = "user"
actionObjectFolder = "folder"
actionObjectGroup = "group"
actionObjectAdmin = "admin"
actionObjectAPIKey = "api_key"
actionObjectShare = "share"
)
func executeAction(operation, executor, ip, objectType, objectName string, object plugin.Renderer) {
if plugin.Handler.HasNotifiers() {
plugin.Handler.NotifyProviderEvent(&notifier.ProviderEvent{
Action: operation,
Username: executor,
ObjectType: objectType,
ObjectName: objectName,
IP: ip,
Timestamp: time.Now().UnixNano(),
}, object)
}
if config.Actions.Hook == "" {
return
}
if !util.Contains(config.Actions.ExecuteOn, operation) ||
!util.Contains(config.Actions.ExecuteFor, objectType) {
return
}
go func() {
dataAsJSON, err := object.RenderAsJSON(operation != operationDelete)
if err != nil {
providerLog(logger.LevelError, "unable to serialize user as JSON for operation %#v: %v", operation, err)
return
}
if strings.HasPrefix(config.Actions.Hook, "http") {
var url *url.URL
url, err := url.Parse(config.Actions.Hook)
if err != nil {
providerLog(logger.LevelError, "Invalid http_notification_url %#v for operation %#v: %v",
config.Actions.Hook, operation, err)
return
}
q := url.Query()
q.Add("action", operation)
q.Add("username", executor)
q.Add("ip", ip)
q.Add("object_type", objectType)
q.Add("object_name", objectName)
q.Add("timestamp", fmt.Sprintf("%v", time.Now().UnixNano()))
url.RawQuery = q.Encode()
startTime := time.Now()
resp, err := httpclient.RetryablePost(url.String(), "application/json", bytes.NewBuffer(dataAsJSON))
respCode := 0
if err == nil {
respCode = resp.StatusCode
resp.Body.Close()
}
providerLog(logger.LevelDebug, "notified operation %#v to URL: %v status code: %v, elapsed: %v err: %v",
operation, url.Redacted(), respCode, time.Since(startTime), err)
} else {
executeNotificationCommand(operation, executor, ip, objectType, objectName, dataAsJSON) //nolint:errcheck // the error is used in test cases only
}
}()
}
func executeNotificationCommand(operation, executor, ip, objectType, objectName string, objectAsJSON []byte) error {
if !filepath.IsAbs(config.Actions.Hook) {
err := fmt.Errorf("invalid notification command %#v", config.Actions.Hook)
logger.Warn(logSender, "", "unable to execute notification command: %v", err)
return err
}
timeout, env := command.GetConfig(config.Actions.Hook)
ctx, cancel := context.WithTimeout(context.Background(), timeout)
defer cancel()
cmd := exec.CommandContext(ctx, config.Actions.Hook)
cmd.Env = append(env,
fmt.Sprintf("SFTPGO_PROVIDER_ACTION=%v", operation),
fmt.Sprintf("SFTPGO_PROVIDER_OBJECT_TYPE=%v", objectType),
fmt.Sprintf("SFTPGO_PROVIDER_OBJECT_NAME=%v", objectName),
fmt.Sprintf("SFTPGO_PROVIDER_USERNAME=%v", executor),
fmt.Sprintf("SFTPGO_PROVIDER_IP=%v", ip),
fmt.Sprintf("SFTPGO_PROVIDER_TIMESTAMP=%v", util.GetTimeAsMsSinceEpoch(time.Now())),
fmt.Sprintf("SFTPGO_PROVIDER_OBJECT=%v", string(objectAsJSON)))
startTime := time.Now()
err := cmd.Run()
providerLog(logger.LevelDebug, "executed command %#v, elapsed: %v, error: %v", config.Actions.Hook,
time.Since(startTime), err)
return err
}
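executeAction above POSTs the serialized object to the hook URL with the action metadata as query parameters, or passes the same data via SFTPGO_PROVIDER_* environment variables when the hook is a command. A minimal sketch of an HTTP receiver (a hypothetical external service, not part of SFTPGo):

package main

import (
	"io"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/hook", func(w http.ResponseWriter, r *http.Request) {
		q := r.URL.Query()
		body, _ := io.ReadAll(r.Body) // the object rendered as JSON
		log.Printf("action=%s type=%s name=%s executor=%s ip=%s payload=%d bytes",
			q.Get("action"), q.Get("object_type"), q.Get("object_name"),
			q.Get("username"), q.Get("ip"), len(body))
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":8000", nil))
}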

@@ -1,18 +1,37 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package dataprovider
import (
"crypto/sha256"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"net"
"regexp"
"os"
"strings"
"github.com/alexedwards/argon2id"
passwordvalidator "github.com/wagslane/go-password-validator"
"golang.org/x/crypto/bcrypt"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/mfa"
"github.com/drakkan/sftpgo/v2/util"
)
// Available permissions for SFTPGo admins
@@ -26,20 +45,56 @@ const (
PermAdminCloseConnections = "close_conns"
PermAdminViewServerStatus = "view_status"
PermAdminManageAdmins = "manage_admins"
PermAdminManageGroups = "manage_groups"
PermAdminManageAPIKeys = "manage_apikeys"
PermAdminQuotaScans = "quota_scans"
PermAdminManageSystem = "manage_system"
PermAdminManageDefender = "manage_defender"
PermAdminViewDefender = "view_defender"
PermAdminRetentionChecks = "retention_checks"
PermAdminMetadataChecks = "metadata_checks"
PermAdminViewEvents = "view_events"
)
var (
emailRegex = regexp.MustCompile("^(?:(?:(?:(?:[a-zA-Z]|\\d|[!#\\$%&'\\*\\+\\-\\/=\\?\\^_`{\\|}~]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])+(?:\\.([a-zA-Z]|\\d|[!#\\$%&'\\*\\+\\-\\/=\\?\\^_`{\\|}~]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])+)*)|(?:(?:\\x22)(?:(?:(?:(?:\\x20|\\x09)*(?:\\x0d\\x0a))?(?:\\x20|\\x09)+)?(?:(?:[\\x01-\\x08\\x0b\\x0c\\x0e-\\x1f\\x7f]|\\x21|[\\x23-\\x5b]|[\\x5d-\\x7e]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(?:(?:[\\x01-\\x09\\x0b\\x0c\\x0d-\\x7f]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}]))))*(?:(?:(?:\\x20|\\x09)*(?:\\x0d\\x0a))?(\\x20|\\x09)+)?(?:\\x22))))@(?:(?:(?:[a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(?:(?:[a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])(?:[a-zA-Z]|\\d|-|\\.|~|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])*(?:[a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])))\\.)+(?:(?:[a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(?:(?:[a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])(?:[a-zA-Z]|\\d|-|\\.|~|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])*(?:[a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])))\\.?$")
validAdminPerms = []string{PermAdminAny, PermAdminAddUsers, PermAdminChangeUsers, PermAdminDeleteUsers,
PermAdminViewUsers, PermAdminViewConnections, PermAdminCloseConnections, PermAdminViewServerStatus,
PermAdminManageAdmins, PermAdminQuotaScans, PermAdminManageSystem, PermAdminManageDefender,
PermAdminViewDefender}
PermAdminViewUsers, PermAdminManageGroups, PermAdminViewConnections, PermAdminCloseConnections,
PermAdminViewServerStatus, PermAdminManageAdmins, PermAdminManageAPIKeys, PermAdminQuotaScans,
PermAdminManageSystem, PermAdminManageDefender, PermAdminViewDefender, PermAdminRetentionChecks,
PermAdminMetadataChecks, PermAdminViewEvents}
)
// AdminTOTPConfig defines the time-based one time password configuration
type AdminTOTPConfig struct {
Enabled bool `json:"enabled,omitempty"`
ConfigName string `json:"config_name,omitempty"`
Secret *kms.Secret `json:"secret,omitempty"`
}
func (c *AdminTOTPConfig) validate(username string) error {
if !c.Enabled {
c.ConfigName = ""
c.Secret = kms.NewEmptySecret()
return nil
}
if c.ConfigName == "" {
return util.NewValidationError("totp: config name is mandatory")
}
if !util.Contains(mfa.GetAvailableTOTPConfigNames(), c.ConfigName) {
return util.NewValidationError(fmt.Sprintf("totp: config name %#v not found", c.ConfigName))
}
if c.Secret.IsEmpty() {
return util.NewValidationError("totp: secret is mandatory")
}
if c.Secret.IsPlain() {
c.Secret.SetAdditionalData(username)
if err := c.Secret.Encrypt(); err != nil {
return util.NewValidationError(fmt.Sprintf("totp: unable to encrypt secret: %v", err))
}
}
return nil
}
// AdminFilters defines additional restrictions for SFTPGo admins
// TODO: rename to AdminOptions in v3
type AdminFilters struct {
@@ -47,6 +102,14 @@ type AdminFilters struct {
// IP/Mask must be in CIDR notation as defined in RFC 4632 and RFC 4291
// for example "192.0.2.0/24" or "2001:db8::/32"
AllowList []string `json:"allow_list,omitempty"`
// API key auth allows impersonating this administrator with an API key
AllowAPIKeyAuth bool `json:"allow_api_key_auth,omitempty"`
// Time-based one time passwords configuration
TOTPConfig AdminTOTPConfig `json:"totp_config,omitempty"`
// Recovery codes to use if the user loses access to their second factor auth device.
// Each code can only be used once, you should use these codes to login and disable or
// reset 2FA for your account
RecoveryCodes []RecoveryCode `json:"recovery_codes,omitempty"`
}
// Admin defines a SFTPGo admin
@@ -58,15 +121,37 @@ type Admin struct {
// Username
Username string `json:"username"`
Password string `json:"password,omitempty"`
Email string `json:"email"`
Email string `json:"email,omitempty"`
Permissions []string `json:"permissions"`
Filters AdminFilters `json:"filters,omitempty"`
Description string `json:"description,omitempty"`
AdditionalInfo string `json:"additional_info,omitempty"`
// Creation time as unix timestamp in milliseconds. It will be 0 for admins created before v2.2.0
CreatedAt int64 `json:"created_at"`
// last update time as unix timestamp in milliseconds
UpdatedAt int64 `json:"updated_at"`
// Last login as unix timestamp in milliseconds
LastLogin int64 `json:"last_login"`
}
func (a *Admin) checkPassword() error {
if a.Password != "" && !utils.IsStringPrefixInSlice(a.Password, internalHashPwdPrefixes) {
// CountUnusedRecoveryCodes returns the number of unused recovery codes
func (a *Admin) CountUnusedRecoveryCodes() int {
unused := 0
for _, code := range a.Filters.RecoveryCodes {
if !code.Used {
unused++
}
}
return unused
}
func (a *Admin) hashPassword() error {
if a.Password != "" && !util.IsStringPrefixInSlice(a.Password, internalHashPwdPrefixes) {
if config.PasswordValidation.Admins.MinEntropy > 0 {
if err := passwordvalidator.Validate(a.Password, config.PasswordValidation.Admins.MinEntropy); err != nil {
return util.NewValidationError(err.Error())
}
}
if config.PasswordHashing.Algo == HashingAlgoBcrypt {
pwd, err := bcrypt.GenerateFromPassword([]byte(a.Password), config.PasswordHashing.BcryptOptions.Cost)
if err != nil {
@@ -84,38 +169,76 @@ func (a *Admin) checkPassword() error {
return nil
}
func (a *Admin) validate() error {
if a.Username == "" {
return &ValidationError{err: "username is mandatory"}
func (a *Admin) hasRedactedSecret() bool {
return a.Filters.TOTPConfig.Secret.IsRedacted()
}
func (a *Admin) validateRecoveryCodes() error {
for i := 0; i < len(a.Filters.RecoveryCodes); i++ {
code := &a.Filters.RecoveryCodes[i]
if code.Secret.IsEmpty() {
return util.NewValidationError("mfa: recovery code cannot be empty")
}
if code.Secret.IsPlain() {
code.Secret.SetAdditionalData(a.Username)
if err := code.Secret.Encrypt(); err != nil {
return util.NewValidationError(fmt.Sprintf("mfa: unable to encrypt recovery code: %v", err))
}
}
}
if a.Password == "" {
return &ValidationError{err: "please set a password"}
}
if !config.SkipNaturalKeysValidation && !usernameRegex.MatchString(a.Username) {
return &ValidationError{err: fmt.Sprintf("username %#v is not valid, the following characters are allowed: a-zA-Z0-9-_.~", a.Username)}
}
if err := a.checkPassword(); err != nil {
return err
}
a.Permissions = utils.RemoveDuplicates(a.Permissions)
return nil
}
func (a *Admin) validatePermissions() error {
a.Permissions = util.RemoveDuplicates(a.Permissions, false)
if len(a.Permissions) == 0 {
return &ValidationError{err: "please grant some permissions to this admin"}
return util.NewValidationError("please grant some permissions to this admin")
}
if utils.IsStringInSlice(PermAdminAny, a.Permissions) {
if util.Contains(a.Permissions, PermAdminAny) {
a.Permissions = []string{PermAdminAny}
}
for _, perm := range a.Permissions {
if !utils.IsStringInSlice(perm, validAdminPerms) {
return &ValidationError{err: fmt.Sprintf("invalid permission: %#v", perm)}
if !util.Contains(validAdminPerms, perm) {
return util.NewValidationError(fmt.Sprintf("invalid permission: %#v", perm))
}
}
if a.Email != "" && !emailRegex.MatchString(a.Email) {
return &ValidationError{err: fmt.Sprintf("email %#v is not valid", a.Email)}
return nil
}
func (a *Admin) validate() error {
a.SetEmptySecretsIfNil()
if a.Username == "" {
return util.NewValidationError("username is mandatory")
}
if a.Password == "" {
return util.NewValidationError("please set a password")
}
if a.hasRedactedSecret() {
return util.NewValidationError("cannot save an admin with a redacted secret")
}
if err := a.Filters.TOTPConfig.validate(a.Username); err != nil {
return err
}
if err := a.validateRecoveryCodes(); err != nil {
return err
}
if config.NamingRules&1 == 0 && !usernameRegex.MatchString(a.Username) {
return util.NewValidationError(fmt.Sprintf("username %#v is not valid, the following characters are allowed: a-zA-Z0-9-_.~", a.Username))
}
if err := a.hashPassword(); err != nil {
return err
}
if err := a.validatePermissions(); err != nil {
return err
}
if a.Email != "" && !util.IsEmailValid(a.Email) {
return util.NewValidationError(fmt.Sprintf("email %#v is not valid", a.Email))
}
a.Filters.AllowList = util.RemoveDuplicates(a.Filters.AllowList, false)
for _, IPMask := range a.Filters.AllowList {
_, _, err := net.ParseCIDR(IPMask)
if err != nil {
return &ValidationError{err: fmt.Sprintf("could not parse allow list entry %#v : %v", IPMask, err)}
return util.NewValidationError(fmt.Sprintf("could not parse allow list entry %#v : %v", IPMask, err))
}
}
@@ -130,7 +253,11 @@ func (a *Admin) CheckPassword(password string) (bool, error) {
}
return true, nil
}
return argon2id.ComparePasswordAndHash(password, a.Password)
match, err := argon2id.ComparePasswordAndHash(password, a.Password)
if !match || err != nil {
return false, ErrInvalidCredentials
}
return match, err
}
// CanLoginFromIP returns true if login from the given IP is allowed
@@ -155,10 +282,21 @@ func (a *Admin) CanLoginFromIP(ip string) bool {
return false
}
func (a *Admin) checkUserAndPass(password, ip string) error {
// CanLogin returns an error if the login is not allowed
func (a *Admin) CanLogin(ip string) error {
if a.Status != 1 {
return fmt.Errorf("admin %#v is disabled", a.Username)
}
if !a.CanLoginFromIP(ip) {
return fmt.Errorf("login from IP %v not allowed", ip)
}
return nil
}
func (a *Admin) checkUserAndPass(password, ip string) error {
if err := a.CanLogin(ip); err != nil {
return err
}
if a.Password == "" || password == "" {
return errors.New("credentials cannot be null or empty")
}
@@ -169,23 +307,60 @@ func (a *Admin) checkUserAndPass(password, ip string) error {
if !match {
return ErrInvalidCredentials
}
if !a.CanLoginFromIP(ip) {
return fmt.Errorf("login from IP %v not allowed", ip)
}
return nil
}
// RenderAsJSON implements the renderer interface used within plugins
func (a *Admin) RenderAsJSON(reload bool) ([]byte, error) {
if reload {
admin, err := provider.adminExists(a.Username)
if err != nil {
providerLog(logger.LevelError, "unable to reload admin before rendering as json: %v", err)
return nil, err
}
admin.HideConfidentialData()
return json.Marshal(admin)
}
a.HideConfidentialData()
return json.Marshal(a)
}
// HideConfidentialData hides admin confidential data
func (a *Admin) HideConfidentialData() {
a.Password = ""
if a.Filters.TOTPConfig.Secret != nil {
a.Filters.TOTPConfig.Secret.Hide()
}
for _, code := range a.Filters.RecoveryCodes {
if code.Secret != nil {
code.Secret.Hide()
}
}
a.SetNilSecretsIfEmpty()
}
// SetEmptySecretsIfNil sets the secrets to empty if nil
func (a *Admin) SetEmptySecretsIfNil() {
if a.Filters.TOTPConfig.Secret == nil {
a.Filters.TOTPConfig.Secret = kms.NewEmptySecret()
}
}
// SetNilSecretsIfEmpty sets the secrets to nil if empty.
// This is useful before rendering as JSON so the empty fields
// will not be serialized.
func (a *Admin) SetNilSecretsIfEmpty() {
if a.Filters.TOTPConfig.Secret != nil && a.Filters.TOTPConfig.Secret.IsEmpty() {
a.Filters.TOTPConfig.Secret = nil
}
}
// HasPermission returns true if the admin has the specified permission
func (a *Admin) HasPermission(perm string) bool {
if utils.IsStringInSlice(PermAdminAny, a.Permissions) {
if util.Contains(a.Permissions, PermAdminAny) {
return true
}
return utils.IsStringInSlice(perm, a.Permissions)
return util.Contains(a.Permissions, perm)
}
// GetPermissionsAsString returns permission as string
@@ -193,6 +368,14 @@ func (a *Admin) GetPermissionsAsString() string {
return strings.Join(a.Permissions, ", ")
}
// GetLastLoginAsString returns the last login as string
func (a *Admin) GetLastLoginAsString() string {
if a.LastLogin > 0 {
return util.GetTimeFromMsecSinceEpoch(a.LastLogin).UTC().Format(iso8601UTCFormat)
}
return ""
}
// GetAllowedIPAsString returns the allowed IP as comma separated string
func (a *Admin) GetAllowedIPAsString() string {
return strings.Join(a.Filters.AllowList, ",")
@@ -203,16 +386,9 @@ func (a *Admin) GetValidPerms() []string {
return validAdminPerms
}
// GetInfoString returns admin's info as string.
func (a *Admin) GetInfoString() string {
var result string
if a.Email != "" {
result = fmt.Sprintf("Email: %v. ", a.Email)
}
if len(a.Filters.AllowList) > 0 {
result += fmt.Sprintf("Allowed IP/Mask: %v. ", len(a.Filters.AllowList))
}
return result
// CanManageMFA returns true if the admin can add a multi-factor authentication configuration
func (a *Admin) CanManageMFA() bool {
return len(mfa.GetAvailableTOTPConfigs()) > 0
}
// GetSignature returns a signature for this admin.
@@ -225,11 +401,26 @@ func (a *Admin) GetSignature() string {
}
func (a *Admin) getACopy() Admin {
a.SetEmptySecretsIfNil()
permissions := make([]string, len(a.Permissions))
copy(permissions, a.Permissions)
filters := AdminFilters{}
filters.AllowList = make([]string, len(a.Filters.AllowList))
filters.AllowAPIKeyAuth = a.Filters.AllowAPIKeyAuth
filters.TOTPConfig.Enabled = a.Filters.TOTPConfig.Enabled
filters.TOTPConfig.ConfigName = a.Filters.TOTPConfig.ConfigName
filters.TOTPConfig.Secret = a.Filters.TOTPConfig.Secret.Clone()
copy(filters.AllowList, a.Filters.AllowList)
filters.RecoveryCodes = make([]RecoveryCode, 0)
for _, code := range a.Filters.RecoveryCodes {
if code.Secret == nil {
code.Secret = kms.NewEmptySecret()
}
filters.RecoveryCodes = append(filters.RecoveryCodes, RecoveryCode{
Secret: code.Secret.Clone(),
Used: code.Used,
})
}
return Admin{
ID: a.ID,
@@ -241,13 +432,21 @@ func (a *Admin) getACopy() Admin {
Filters: filters,
AdditionalInfo: a.AdditionalInfo,
Description: a.Description,
LastLogin: a.LastLogin,
CreatedAt: a.CreatedAt,
UpdatedAt: a.UpdatedAt,
}
}
// setDefaults sets the appropriate value for the default admin
func (a *Admin) setDefaults() {
a.Username = "admin"
a.Password = "password"
func (a *Admin) setFromEnv() error {
envUsername := strings.TrimSpace(os.Getenv("SFTPGO_DEFAULT_ADMIN_USERNAME"))
envPassword := strings.TrimSpace(os.Getenv("SFTPGO_DEFAULT_ADMIN_PASSWORD"))
if envUsername == "" || envPassword == "" {
return errors.New(`to create the default admin you need to set the env vars "SFTPGO_DEFAULT_ADMIN_USERNAME" and "SFTPGO_DEFAULT_ADMIN_PASSWORD"`)
}
a.Username = envUsername
a.Password = envPassword
a.Status = 1
a.Permissions = []string{PermAdminAny}
return nil
}
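Admin passwords are stored as either bcrypt or argon2id hashes depending on config.PasswordHashing.Algo, and CheckPassword verifies against whichever scheme produced the stored value. A standalone sketch of the dual-scheme check (verifyPassword is a hypothetical helper; the real logic lives in Admin.CheckPassword):

package main

import (
	"strings"

	"github.com/alexedwards/argon2id"
	"golang.org/x/crypto/bcrypt"
)

// verifyPassword matches a plain password against a stored hash, picking
// the scheme from the hash prefix ("$2" for bcrypt, argon2id otherwise).
func verifyPassword(plain, stored string) bool {
	if strings.HasPrefix(stored, "$2") {
		return bcrypt.CompareHashAndPassword([]byte(stored), []byte(plain)) == nil
	}
	match, err := argon2id.ComparePasswordAndHash(plain, stored)
	return err == nil && match
}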

dataprovider/apikey.go Normal file

@@ -0,0 +1,200 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package dataprovider
import (
"encoding/json"
"fmt"
"strings"
"time"
"github.com/alexedwards/argon2id"
"golang.org/x/crypto/bcrypt"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
// APIKeyScope defines the supported API key scopes
type APIKeyScope int
// Supported API key scopes
const (
// the API key will be used for an admin
APIKeyScopeAdmin APIKeyScope = iota + 1
// the API key will be used for a user
APIKeyScopeUser
)
// APIKey defines a SFTPGo API key.
// API keys can be used as an authentication alternative to short-lived tokens
// for the REST API
type APIKey struct {
// Database unique identifier
ID int64 `json:"-"`
// Unique key identifier, used for key lookups.
// The generated key is in the format `KeyID.hash(Key)` so we can split
// and look it up by KeyID, then verify that the key matches the recorded hash
KeyID string `json:"id"`
// User friendly key name
Name string `json:"name"`
// we store the hash of the key, this is just like a password
Key string `json:"key,omitempty"`
Scope APIKeyScope `json:"scope"`
CreatedAt int64 `json:"created_at"`
UpdatedAt int64 `json:"updated_at"`
// 0 means never used
LastUseAt int64 `json:"last_use_at,omitempty"`
// 0 means never expire
ExpiresAt int64 `json:"expires_at,omitempty"`
Description string `json:"description,omitempty"`
// Username associated with this API key.
// If empty and the scope is APIKeyScopeUser the key is valid for any user
User string `json:"user,omitempty"`
// Admin username associated with this API key.
// If empty and the scope is APIKeyScopeAdmin the key is valid for any admin
Admin string `json:"admin,omitempty"`
// these fields are for internal use
userID int64
adminID int64
plainKey string
}
func (k *APIKey) getACopy() APIKey {
return APIKey{
ID: k.ID,
KeyID: k.KeyID,
Name: k.Name,
Key: k.Key,
Scope: k.Scope,
CreatedAt: k.CreatedAt,
UpdatedAt: k.UpdatedAt,
LastUseAt: k.LastUseAt,
ExpiresAt: k.ExpiresAt,
Description: k.Description,
User: k.User,
Admin: k.Admin,
userID: k.userID,
adminID: k.adminID,
}
}
// RenderAsJSON implements the renderer interface used within plugins
func (k *APIKey) RenderAsJSON(reload bool) ([]byte, error) {
if reload {
apiKey, err := provider.apiKeyExists(k.KeyID)
if err != nil {
providerLog(logger.LevelError, "unable to reload api key before rendering as json: %v", err)
return nil, err
}
apiKey.HideConfidentialData()
return json.Marshal(apiKey)
}
k.HideConfidentialData()
return json.Marshal(k)
}
// HideConfidentialData hides API key confidential data
func (k *APIKey) HideConfidentialData() {
k.Key = ""
}
func (k *APIKey) hashKey() error {
if k.Key != "" && !util.IsStringPrefixInSlice(k.Key, internalHashPwdPrefixes) {
if config.PasswordHashing.Algo == HashingAlgoBcrypt {
hashed, err := bcrypt.GenerateFromPassword([]byte(k.Key), config.PasswordHashing.BcryptOptions.Cost)
if err != nil {
return err
}
k.Key = string(hashed)
} else {
hashed, err := argon2id.CreateHash(k.Key, argon2Params)
if err != nil {
return err
}
k.Key = hashed
}
}
return nil
}
func (k *APIKey) generateKey() {
if k.KeyID != "" || k.Key != "" {
return
}
k.KeyID = util.GenerateUniqueID()
k.Key = util.GenerateUniqueID()
k.plainKey = k.Key
}
// DisplayKey returns the key to show to the user
func (k *APIKey) DisplayKey() string {
return fmt.Sprintf("%v.%v", k.KeyID, k.plainKey)
}
func (k *APIKey) validate() error {
if k.Name == "" {
return util.NewValidationError("name is mandatory")
}
if k.Scope != APIKeyScopeAdmin && k.Scope != APIKeyScopeUser {
return util.NewValidationError(fmt.Sprintf("invalid scope: %v", k.Scope))
}
k.generateKey()
if err := k.hashKey(); err != nil {
return err
}
if k.User != "" && k.Admin != "" {
return util.NewValidationError("an API key can be related to a user or an admin, not both")
}
if k.Scope == APIKeyScopeAdmin {
k.User = ""
}
if k.Scope == APIKeyScopeUser {
k.Admin = ""
}
if k.User != "" {
_, err := provider.userExists(k.User)
if err != nil {
return util.NewValidationError(fmt.Sprintf("unable to check API key user %v: %v", k.User, err))
}
}
if k.Admin != "" {
_, err := provider.adminExists(k.Admin)
if err != nil {
return util.NewValidationError(fmt.Sprintf("unable to check API key admin %v: %v", k.Admin, err))
}
}
return nil
}
// Authenticate tries to authenticate the provided plain key
func (k *APIKey) Authenticate(plainKey string) error {
if k.ExpiresAt > 0 && k.ExpiresAt < util.GetTimeAsMsSinceEpoch(time.Now()) {
return fmt.Errorf("API key %#v is expired, expiration timestamp: %v current timestamp: %v", k.KeyID,
k.ExpiresAt, util.GetTimeAsMsSinceEpoch(time.Now()))
}
if strings.HasPrefix(k.Key, bcryptPwdPrefix) {
if err := bcrypt.CompareHashAndPassword([]byte(k.Key), []byte(plainKey)); err != nil {
return ErrInvalidCredentials
}
} else if strings.HasPrefix(k.Key, argonPwdPrefix) {
match, err := argon2id.ComparePasswordAndHash(plainKey, k.Key)
if err != nil || !match {
return ErrInvalidCredentials
}
}
return nil
}
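As the comments above describe, the value shown to the user is KeyID.plainKey (see DisplayKey) while only a hash of the random part is stored. A sketch of how a server can split a presented key before looking it up by KeyID and calling Authenticate (splitAPIKey is a hypothetical helper):

package main

import (
	"errors"
	"strings"
)

// splitAPIKey separates the public key identifier from the secret part of
// a presented "KeyID.plainKey" value.
func splitAPIKey(presented string) (keyID, plainKey string, err error) {
	parts := strings.SplitN(presented, ".", 2)
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return "", "", errors.New("malformed API key")
	}
	return parts[0], parts[1], nil // look up by keyID, then APIKey.Authenticate(plainKey)
}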

File diff suppressed because it is too large

@@ -1,3 +1,18 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//go:build nobolt
// +build nobolt
package dataprovider
@@ -5,7 +20,7 @@ package dataprovider
import (
"errors"
"github.com/drakkan/sftpgo/version"
"github.com/drakkan/sftpgo/v2/version"
)
func init() {

@@ -1,3 +1,17 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package dataprovider
import (

@@ -1,3 +1,17 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package dataprovider
import (
@@ -6,7 +20,8 @@ import (
"golang.org/x/net/webdav"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
var (
@@ -54,29 +69,45 @@ func (cache *usersCache) updateLastLogin(username string) {
defer cache.Unlock()
if cachedUser, ok := cache.users[username]; ok {
cachedUser.User.LastLogin = utils.GetTimeAsMsSinceEpoch(time.Now())
cachedUser.User.LastLogin = util.GetTimeAsMsSinceEpoch(time.Now())
cache.users[username] = cachedUser
}
}
// swapWebDAVUser updates an existing cached user with the specified one
// preserving the lock fs if possible
func (cache *usersCache) swap(user *User) {
// FIXME: this could be racy in rare cases
func (cache *usersCache) swap(userRef *User) {
user := userRef.getACopy()
err := user.LoadAndApplyGroupSettings()
cache.Lock()
defer cache.Unlock()
if cachedUser, ok := cache.users[user.Username]; ok {
if cachedUser.User.Password != user.Password {
providerLog(logger.LevelDebug, "current password different from the cached one for user %#v, removing from cache",
user.Username)
// the password changed, the cached user is no longer valid
delete(cache.users, user.Username)
return
}
if cachedUser.User.isFsEqual(user) {
if err != nil {
providerLog(logger.LevelDebug, "unable to load group settings, for user %#v, removing from cache, err :%v",
user.Username, err)
delete(cache.users, user.Username)
return
}
if cachedUser.User.isFsEqual(&user) {
// the updated user has the same fs as the cached one, we can preserve the lock filesystem
cachedUser.User = *user
providerLog(logger.LevelDebug, "current password and fs unchanged for for user %#v, swap cached one",
user.Username)
cachedUser.User = user
cache.users[user.Username] = cachedUser
} else {
// filesystem changed, the cached user is no longer valid
providerLog(logger.LevelDebug, "current fs different from the cached one for user %#v, removing from cache",
user.Username)
delete(cache.users, user.Username)
}
}

@@ -1,118 +0,0 @@
package dataprovider
import (
"github.com/drakkan/sftpgo/kms"
"github.com/drakkan/sftpgo/vfs"
)
type compatAzBlobFsConfigV9 struct {
Container string `json:"container,omitempty"`
AccountName string `json:"account_name,omitempty"`
AccountKey *kms.Secret `json:"account_key,omitempty"`
Endpoint string `json:"endpoint,omitempty"`
SASURL string `json:"sas_url,omitempty"`
KeyPrefix string `json:"key_prefix,omitempty"`
UploadPartSize int64 `json:"upload_part_size,omitempty"`
UploadConcurrency int `json:"upload_concurrency,omitempty"`
UseEmulator bool `json:"use_emulator,omitempty"`
AccessTier string `json:"access_tier,omitempty"`
}
type compatFilesystemV9 struct {
Provider vfs.FilesystemProvider `json:"provider"`
S3Config vfs.S3FsConfig `json:"s3config,omitempty"`
GCSConfig vfs.GCSFsConfig `json:"gcsconfig,omitempty"`
AzBlobConfig compatAzBlobFsConfigV9 `json:"azblobconfig,omitempty"`
CryptConfig vfs.CryptFsConfig `json:"cryptconfig,omitempty"`
SFTPConfig vfs.SFTPFsConfig `json:"sftpconfig,omitempty"`
}
type compatBaseFolderV9 struct {
ID int64 `json:"id"`
Name string `json:"name"`
MappedPath string `json:"mapped_path,omitempty"`
Description string `json:"description,omitempty"`
UsedQuotaSize int64 `json:"used_quota_size"`
UsedQuotaFiles int `json:"used_quota_files"`
LastQuotaUpdate int64 `json:"last_quota_update"`
Users []string `json:"users,omitempty"`
FsConfig compatFilesystemV9 `json:"filesystem"`
}
type compatFolderV9 struct {
compatBaseFolderV9
VirtualPath string `json:"virtual_path"`
QuotaSize int64 `json:"quota_size"`
QuotaFiles int `json:"quota_files"`
}
type compatUserV9 struct {
ID int64 `json:"id"`
Username string `json:"username"`
FsConfig compatFilesystemV9 `json:"filesystem"`
}
func convertFsConfigFromV9(compatFs compatFilesystemV9, aead string) (vfs.Filesystem, error) {
fsConfig := vfs.Filesystem{
Provider: compatFs.Provider,
S3Config: compatFs.S3Config,
GCSConfig: compatFs.GCSConfig,
CryptConfig: compatFs.CryptConfig,
SFTPConfig: compatFs.SFTPConfig,
}
azSASURL := kms.NewEmptySecret()
if compatFs.Provider == vfs.AzureBlobFilesystemProvider && compatFs.AzBlobConfig.SASURL != "" {
azSASURL = kms.NewPlainSecret(compatFs.AzBlobConfig.SASURL)
}
if compatFs.AzBlobConfig.AccountKey == nil {
compatFs.AzBlobConfig.AccountKey = kms.NewEmptySecret()
}
fsConfig.AzBlobConfig = vfs.AzBlobFsConfig{
Container: compatFs.AzBlobConfig.Container,
AccountName: compatFs.AzBlobConfig.AccountName,
AccountKey: compatFs.AzBlobConfig.AccountKey,
Endpoint: compatFs.AzBlobConfig.Endpoint,
SASURL: azSASURL,
KeyPrefix: compatFs.AzBlobConfig.KeyPrefix,
UploadPartSize: compatFs.AzBlobConfig.UploadPartSize,
UploadConcurrency: compatFs.AzBlobConfig.UploadConcurrency,
UseEmulator: compatFs.AzBlobConfig.UseEmulator,
AccessTier: compatFs.AzBlobConfig.AccessTier,
}
err := fsConfig.AzBlobConfig.EncryptCredentials(aead)
return fsConfig, err
}
func convertFsConfigToV9(fs vfs.Filesystem) (compatFilesystemV9, error) {
azSASURL := ""
if fs.Provider == vfs.AzureBlobFilesystemProvider {
if fs.AzBlobConfig.SASURL != nil && fs.AzBlobConfig.SASURL.IsEncrypted() {
err := fs.AzBlobConfig.SASURL.Decrypt()
if err != nil {
return compatFilesystemV9{}, err
}
azSASURL = fs.AzBlobConfig.SASURL.GetPayload()
}
}
azFsCompat := compatAzBlobFsConfigV9{
Container: fs.AzBlobConfig.Container,
AccountName: fs.AzBlobConfig.AccountName,
AccountKey: fs.AzBlobConfig.AccountKey,
Endpoint: fs.AzBlobConfig.Endpoint,
SASURL: azSASURL,
KeyPrefix: fs.AzBlobConfig.KeyPrefix,
UploadPartSize: fs.AzBlobConfig.UploadPartSize,
UploadConcurrency: fs.AzBlobConfig.UploadConcurrency,
UseEmulator: fs.AzBlobConfig.UseEmulator,
AccessTier: fs.AzBlobConfig.AccessTier,
}
fsV9 := compatFilesystemV9{
Provider: fs.Provider,
S3Config: fs.S3Config,
GCSConfig: fs.GCSConfig,
AzBlobConfig: azFsCompat,
CryptConfig: fs.CryptConfig,
SFTPConfig: fs.SFTPConfig,
}
return fsV9, nil
}

File diff suppressed because it is too large Load Diff

dataprovider/group.go Normal file

@@ -0,0 +1,234 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package dataprovider
import (
"encoding/json"
"fmt"
"path/filepath"
"strings"
"github.com/sftpgo/sdk"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/plugin"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/vfs"
)
// GroupUserSettings defines the settings to apply to users
type GroupUserSettings struct {
sdk.BaseGroupUserSettings
// Filesystem configuration details
FsConfig vfs.Filesystem `json:"filesystem"`
}
// Group defines an SFTPGo group.
// Groups are used to easily configure similar users
type Group struct {
sdk.BaseGroup
// settings to apply to users for whom this is a primary group
UserSettings GroupUserSettings `json:"user_settings,omitempty"`
// Mapping between virtual paths and virtual folders
VirtualFolders []vfs.VirtualFolder `json:"virtual_folders,omitempty"`
}
// GetPermissions returns the permissions as list
func (g *Group) GetPermissions() []sdk.DirectoryPermissions {
result := make([]sdk.DirectoryPermissions, 0, len(g.UserSettings.Permissions))
for k, v := range g.UserSettings.Permissions {
result = append(result, sdk.DirectoryPermissions{
Path: k,
Permissions: v,
})
}
return result
}
// GetAllowedIPAsString returns the allowed IP as comma separated string
func (g *Group) GetAllowedIPAsString() string {
return strings.Join(g.UserSettings.Filters.AllowedIP, ",")
}
// GetDeniedIPAsString returns the denied IP as comma separated string
func (g *Group) GetDeniedIPAsString() string {
return strings.Join(g.UserSettings.Filters.DeniedIP, ",")
}
// HasExternalAuth returns true if the external authentication is globally enabled
// and it is not disabled for this group
func (g *Group) HasExternalAuth() bool {
if g.UserSettings.Filters.Hooks.ExternalAuthDisabled {
return false
}
if config.ExternalAuthHook != "" {
return true
}
return plugin.Handler.HasAuthenticators()
}
// SetEmptySecretsIfNil sets the secrets to empty if nil
func (g *Group) SetEmptySecretsIfNil() {
g.UserSettings.FsConfig.SetEmptySecretsIfNil()
for idx := range g.VirtualFolders {
vfolder := &g.VirtualFolders[idx]
vfolder.FsConfig.SetEmptySecretsIfNil()
}
}
// PrepareForRendering prepares a group for rendering.
// It hides confidential data and sets the empty secrets to nil
// so they are not serialized
func (g *Group) PrepareForRendering() {
g.UserSettings.FsConfig.HideConfidentialData()
g.UserSettings.FsConfig.SetNilSecretsIfEmpty()
for idx := range g.VirtualFolders {
folder := &g.VirtualFolders[idx]
folder.PrepareForRendering()
}
}
// RenderAsJSON implements the renderer interface used within plugins
func (g *Group) RenderAsJSON(reload bool) ([]byte, error) {
if reload {
group, err := provider.groupExists(g.Name)
if err != nil {
providerLog(logger.LevelError, "unable to reload group before rendering as json: %v", err)
return nil, err
}
group.PrepareForRendering()
return json.Marshal(group)
}
g.PrepareForRendering()
return json.Marshal(g)
}
// GetEncryptionAdditionalData returns the additional data to use for AEAD
func (g *Group) GetEncryptionAdditionalData() string {
return fmt.Sprintf("group_%v", g.Name)
}
// hasRedactedSecret returns true if the group has a redacted secret
func (g *Group) hasRedactedSecret() bool {
for idx := range g.VirtualFolders {
folder := &g.VirtualFolders[idx]
if folder.HasRedactedSecret() {
return true
}
}
return g.UserSettings.FsConfig.HasRedactedSecret()
}
func (g *Group) validate() error {
g.SetEmptySecretsIfNil()
if g.Name == "" {
return util.NewValidationError("name is mandatory")
}
if config.NamingRules&1 == 0 && !usernameRegex.MatchString(g.Name) {
return util.NewValidationError(fmt.Sprintf("name %#v is not valid, the following characters are allowed: a-zA-Z0-9-_.~", g.Name))
}
if g.hasRedactedSecret() {
return util.NewValidationError("cannot save a user with a redacted secret")
}
vfolders, err := validateAssociatedVirtualFolders(g.VirtualFolders)
if err != nil {
return err
}
g.VirtualFolders = vfolders
return g.validateUserSettings()
}
func (g *Group) validateUserSettings() error {
if g.UserSettings.HomeDir != "" {
g.UserSettings.HomeDir = filepath.Clean(g.UserSettings.HomeDir)
if !filepath.IsAbs(g.UserSettings.HomeDir) {
return util.NewValidationError(fmt.Sprintf("home_dir must be an absolute path, actual value: %v",
g.UserSettings.HomeDir))
}
}
if err := g.UserSettings.FsConfig.Validate(g.GetEncryptionAdditionalData()); err != nil {
return err
}
if g.UserSettings.TotalDataTransfer > 0 {
// if a total data transfer is defined we reset the separate upload and download limits
g.UserSettings.UploadDataTransfer = 0
g.UserSettings.DownloadDataTransfer = 0
}
if len(g.UserSettings.Permissions) > 0 {
permissions, err := validateUserPermissions(g.UserSettings.Permissions)
if err != nil {
return err
}
g.UserSettings.Permissions = permissions
}
if err := validateBaseFilters(&g.UserSettings.Filters); err != nil {
return err
}
if !g.HasExternalAuth() {
g.UserSettings.Filters.ExternalAuthCacheTime = 0
}
g.UserSettings.Filters.UserType = ""
return nil
}
func (g *Group) getACopy() Group {
users := make([]string, len(g.Users))
copy(users, g.Users)
virtualFolders := make([]vfs.VirtualFolder, 0, len(g.VirtualFolders))
for idx := range g.VirtualFolders {
vfolder := g.VirtualFolders[idx].GetACopy()
virtualFolders = append(virtualFolders, vfolder)
}
permissions := make(map[string][]string)
for k, v := range g.UserSettings.Permissions {
perms := make([]string, len(v))
copy(perms, v)
permissions[k] = perms
}
return Group{
BaseGroup: sdk.BaseGroup{
ID: g.ID,
Name: g.Name,
Description: g.Description,
CreatedAt: g.CreatedAt,
UpdatedAt: g.UpdatedAt,
Users: users,
},
UserSettings: GroupUserSettings{
BaseGroupUserSettings: sdk.BaseGroupUserSettings{
HomeDir: g.UserSettings.HomeDir,
MaxSessions: g.UserSettings.MaxSessions,
QuotaSize: g.UserSettings.QuotaSize,
QuotaFiles: g.UserSettings.QuotaFiles,
Permissions: permissions,
UploadBandwidth: g.UserSettings.UploadBandwidth,
DownloadBandwidth: g.UserSettings.DownloadBandwidth,
UploadDataTransfer: g.UserSettings.UploadDataTransfer,
DownloadDataTransfer: g.UserSettings.DownloadDataTransfer,
TotalDataTransfer: g.UserSettings.TotalDataTransfer,
Filters: copyBaseUserFilters(g.UserSettings.Filters),
},
FsConfig: g.UserSettings.FsConfig.GetACopy(),
},
VirtualFolders: virtualFolders,
}
}
// GetUsersAsString returns the list of users as comma separated string
func (g *Group) GetUsersAsString() string {
return strings.Join(g.Users, ",")
}
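Note how validateUserSettings gives a total transfer budget precedence over per-direction limits: when TotalDataTransfer > 0 the upload/download values are zeroed. An illustrative sketch under that assumption (same package scope as the code above; values are arbitrary):

g := Group{}
g.Name = "example-group"                  // via the embedded sdk.BaseGroup
g.UserSettings.TotalDataTransfer = 1024   // total budget wins
g.UserSettings.UploadDataTransfer = 512   // reset to 0 by validate()
g.UserSettings.DownloadDataTransfer = 512 // reset to 0 by validate()
if err := g.validate(); err == nil {
	// g.UserSettings.UploadDataTransfer == 0
	// g.UserSettings.DownloadDataTransfer == 0
}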

File diff suppressed because it is too large

@@ -1,56 +1,180 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//go:build !nomysql
// +build !nomysql
package dataprovider
import (
"context"
"crypto/tls"
"crypto/x509"
"database/sql"
"errors"
"fmt"
"os"
"strings"
"time"
// we import go-sql-driver/mysql here to be able to disable MySQL support using a build tag
_ "github.com/go-sql-driver/mysql"
"github.com/go-sql-driver/mysql"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/version"
"github.com/drakkan/sftpgo/vfs"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/version"
"github.com/drakkan/sftpgo/v2/vfs"
)
const (
mysqlResetSQL = "DROP TABLE IF EXISTS `{{api_keys}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{folders_mapping}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{users_folders_mapping}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{users_groups_mapping}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{groups_folders_mapping}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{admins}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{folders}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{shares}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{users}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{groups}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{defender_events}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{defender_hosts}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{active_transfers}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{shared_sessions}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{schema_version}}` CASCADE;"
mysqlInitialSQL = "CREATE TABLE `{{schema_version}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `version` integer NOT NULL);" +
"CREATE TABLE `{{admins}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `username` varchar(255) NOT NULL UNIQUE, " +
"`password` varchar(255) NOT NULL, `email` varchar(255) NULL, `status` integer NOT NULL, `permissions` longtext NOT NULL, " +
"`filters` longtext NULL, `additional_info` longtext NULL);" +
"`description` varchar(512) NULL, `password` varchar(255) NOT NULL, `email` varchar(255) NULL, `status` integer NOT NULL, " +
"`permissions` longtext NOT NULL, `filters` longtext NULL, `additional_info` longtext NULL, `last_login` bigint NOT NULL, " +
"`created_at` bigint NOT NULL, `updated_at` bigint NOT NULL);" +
"CREATE TABLE `{{defender_hosts}}` (`id` bigint AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
"`ip` varchar(50) NOT NULL UNIQUE, `ban_time` bigint NOT NULL, `updated_at` bigint NOT NULL);" +
"CREATE TABLE `{{defender_events}}` (`id` bigint AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
"`date_time` bigint NOT NULL, `score` integer NOT NULL, `host_id` bigint NOT NULL);" +
"ALTER TABLE `{{defender_events}}` ADD CONSTRAINT `{{prefix}}defender_events_host_id_fk_defender_hosts_id` " +
"FOREIGN KEY (`host_id`) REFERENCES `{{defender_hosts}}` (`id`) ON DELETE CASCADE;" +
"CREATE TABLE `{{folders}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `name` varchar(255) NOT NULL UNIQUE, " +
"`path` varchar(512) NULL, `used_quota_size` bigint NOT NULL, `used_quota_files` integer NOT NULL, " +
"`last_quota_update` bigint NOT NULL);" +
"CREATE TABLE `{{users}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `status` integer NOT NULL, " +
"`expiration_date` bigint NOT NULL, `username` varchar(255) NOT NULL UNIQUE, `password` longtext NULL, " +
"`public_keys` longtext NULL, `home_dir` varchar(512) NOT NULL, `uid` integer NOT NULL, `gid` integer NOT NULL, " +
"`description` varchar(512) NULL, `path` longtext NULL, `used_quota_size` bigint NOT NULL, " +
"`used_quota_files` integer NOT NULL, `last_quota_update` bigint NOT NULL, `filesystem` longtext NULL);" +
"CREATE TABLE `{{users}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `username` varchar(255) NOT NULL UNIQUE, " +
"`status` integer NOT NULL, `expiration_date` bigint NOT NULL, `description` varchar(512) NULL, `password` longtext NULL, " +
"`public_keys` longtext NULL, `home_dir` longtext NOT NULL, `uid` bigint NOT NULL, `gid` bigint NOT NULL, " +
"`max_sessions` integer NOT NULL, `quota_size` bigint NOT NULL, `quota_files` integer NOT NULL, " +
"`permissions` longtext NOT NULL, `used_quota_size` bigint NOT NULL, `used_quota_files` integer NOT NULL, " +
"`last_quota_update` bigint NOT NULL, `upload_bandwidth` integer NOT NULL, `download_bandwidth` integer NOT NULL, " +
"`last_login` bigint NOT NULL, `filters` longtext NULL, `filesystem` longtext NULL, `additional_info` longtext NULL);" +
"CREATE TABLE `{{folders_mapping}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `virtual_path` varchar(512) NOT NULL, " +
"`last_login` bigint NOT NULL, `filters` longtext NULL, `filesystem` longtext NULL, `additional_info` longtext NULL, " +
"`created_at` bigint NOT NULL, `updated_at` bigint NOT NULL, `email` varchar(255) NULL);" +
"CREATE TABLE `{{folders_mapping}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `virtual_path` longtext NOT NULL, " +
"`quota_size` bigint NOT NULL, `quota_files` integer NOT NULL, `folder_id` integer NOT NULL, `user_id` integer NOT NULL);" +
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `{{prefix}}unique_mapping` UNIQUE (`user_id`, `folder_id`);" +
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `{{prefix}}folders_mapping_folder_id_fk_folders_id` FOREIGN KEY (`folder_id`) REFERENCES `{{folders}}` (`id`) ON DELETE CASCADE;" +
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `{{prefix}}folders_mapping_user_id_fk_users_id` FOREIGN KEY (`user_id`) REFERENCES `{{users}}` (`id`) ON DELETE CASCADE;" +
"INSERT INTO {{schema_version}} (version) VALUES (8);"
mysqlV9SQL = "ALTER TABLE `{{admins}}` ADD COLUMN `description` varchar(512) NULL;" +
"ALTER TABLE `{{folders}}` ADD COLUMN `description` varchar(512) NULL;" +
"ALTER TABLE `{{folders}}` ADD COLUMN `filesystem` longtext NULL;" +
"ALTER TABLE `{{users}}` ADD COLUMN `description` varchar(512) NULL;"
mysqlV9DownSQL = "ALTER TABLE `{{users}}` DROP COLUMN `description`;" +
"ALTER TABLE `{{folders}}` DROP COLUMN `filesystem`;" +
"ALTER TABLE `{{folders}}` DROP COLUMN `description`;" +
"ALTER TABLE `{{admins}}` DROP COLUMN `description`;"
"CREATE TABLE `{{shares}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
"`share_id` varchar(60) NOT NULL UNIQUE, `name` varchar(255) NOT NULL, `description` varchar(512) NULL, " +
"`scope` integer NOT NULL, `paths` longtext NOT NULL, `created_at` bigint NOT NULL, " +
"`updated_at` bigint NOT NULL, `last_use_at` bigint NOT NULL, `expires_at` bigint NOT NULL, " +
"`password` longtext NULL, `max_tokens` integer NOT NULL, `used_tokens` integer NOT NULL, " +
"`allow_from` longtext NULL, `user_id` integer NOT NULL);" +
"ALTER TABLE `{{shares}}` ADD CONSTRAINT `{{prefix}}shares_user_id_fk_users_id` " +
"FOREIGN KEY (`user_id`) REFERENCES `{{users}}` (`id`) ON DELETE CASCADE;" +
"CREATE TABLE `{{api_keys}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `name` varchar(255) NOT NULL, `key_id` varchar(50) NOT NULL UNIQUE," +
"`api_key` varchar(255) NOT NULL UNIQUE, `scope` integer NOT NULL, `created_at` bigint NOT NULL, `updated_at` bigint NOT NULL, `last_use_at` bigint NOT NULL, " +
"`expires_at` bigint NOT NULL, `description` longtext NULL, `admin_id` integer NULL, `user_id` integer NULL);" +
"ALTER TABLE `{{api_keys}}` ADD CONSTRAINT `{{prefix}}api_keys_admin_id_fk_admins_id` FOREIGN KEY (`admin_id`) REFERENCES `{{admins}}` (`id`) ON DELETE CASCADE;" +
"ALTER TABLE `{{api_keys}}` ADD CONSTRAINT `{{prefix}}api_keys_user_id_fk_users_id` FOREIGN KEY (`user_id`) REFERENCES `{{users}}` (`id`) ON DELETE CASCADE;" +
"CREATE INDEX `{{prefix}}users_updated_at_idx` ON `{{users}}` (`updated_at`);" +
"CREATE INDEX `{{prefix}}defender_hosts_updated_at_idx` ON `{{defender_hosts}}` (`updated_at`);" +
"CREATE INDEX `{{prefix}}defender_hosts_ban_time_idx` ON `{{defender_hosts}}` (`ban_time`);" +
"CREATE INDEX `{{prefix}}defender_events_date_time_idx` ON `{{defender_events}}` (`date_time`);" +
"INSERT INTO {{schema_version}} (version) VALUES (15);"
mysqlV16SQL = "ALTER TABLE `{{users}}` ADD COLUMN `download_data_transfer` integer DEFAULT 0 NOT NULL;" +
"ALTER TABLE `{{users}}` ALTER COLUMN `download_data_transfer` DROP DEFAULT;" +
"ALTER TABLE `{{users}}` ADD COLUMN `total_data_transfer` integer DEFAULT 0 NOT NULL;" +
"ALTER TABLE `{{users}}` ALTER COLUMN `total_data_transfer` DROP DEFAULT;" +
"ALTER TABLE `{{users}}` ADD COLUMN `upload_data_transfer` integer DEFAULT 0 NOT NULL;" +
"ALTER TABLE `{{users}}` ALTER COLUMN `upload_data_transfer` DROP DEFAULT;" +
"ALTER TABLE `{{users}}` ADD COLUMN `used_download_data_transfer` integer DEFAULT 0 NOT NULL;" +
"ALTER TABLE `{{users}}` ALTER COLUMN `used_download_data_transfer` DROP DEFAULT;" +
"ALTER TABLE `{{users}}` ADD COLUMN `used_upload_data_transfer` integer DEFAULT 0 NOT NULL;" +
"ALTER TABLE `{{users}}` ALTER COLUMN `used_upload_data_transfer` DROP DEFAULT;" +
"CREATE TABLE `{{active_transfers}}` (`id` bigint AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
"`connection_id` varchar(100) NOT NULL, `transfer_id` bigint NOT NULL, `transfer_type` integer NOT NULL, " +
"`username` varchar(255) NOT NULL, `folder_name` varchar(255) NULL, `ip` varchar(50) NOT NULL, " +
"`truncated_size` bigint NOT NULL, `current_ul_size` bigint NOT NULL, `current_dl_size` bigint NOT NULL, " +
"`created_at` bigint NOT NULL, `updated_at` bigint NOT NULL);" +
"CREATE INDEX `{{prefix}}active_transfers_connection_id_idx` ON `{{active_transfers}}` (`connection_id`);" +
"CREATE INDEX `{{prefix}}active_transfers_transfer_id_idx` ON `{{active_transfers}}` (`transfer_id`);" +
"CREATE INDEX `{{prefix}}active_transfers_updated_at_idx` ON `{{active_transfers}}` (`updated_at`);"
mysqlV16DownSQL = "ALTER TABLE `{{users}}` DROP COLUMN `used_upload_data_transfer`;" +
"ALTER TABLE `{{users}}` DROP COLUMN `used_download_data_transfer`;" +
"ALTER TABLE `{{users}}` DROP COLUMN `upload_data_transfer`;" +
"ALTER TABLE `{{users}}` DROP COLUMN `total_data_transfer`;" +
"ALTER TABLE `{{users}}` DROP COLUMN `download_data_transfer`;" +
"DROP TABLE `{{active_transfers}}` CASCADE;"
mysqlV17SQL = "CREATE TABLE `{{groups}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
"`name` varchar(255) NOT NULL UNIQUE, `description` varchar(512) NULL, `created_at` bigint NOT NULL, " +
"`updated_at` bigint NOT NULL, `user_settings` longtext NULL);" +
"CREATE TABLE `{{groups_folders_mapping}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
"`group_id` integer NOT NULL, `folder_id` integer NOT NULL, " +
"`virtual_path` longtext NOT NULL, `quota_size` bigint NOT NULL, `quota_files` integer NOT NULL);" +
"CREATE TABLE `{{users_groups_mapping}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
"`user_id` integer NOT NULL, `group_id` integer NOT NULL, `group_type` integer NOT NULL);" +
"ALTER TABLE `{{folders_mapping}}` DROP FOREIGN KEY `{{prefix}}folders_mapping_folder_id_fk_folders_id`;" +
"ALTER TABLE `{{folders_mapping}}` DROP FOREIGN KEY `{{prefix}}folders_mapping_user_id_fk_users_id`;" +
"ALTER TABLE `{{folders_mapping}}` DROP INDEX `{{prefix}}unique_mapping`;" +
"RENAME TABLE `{{folders_mapping}}` TO `{{users_folders_mapping}}`;" +
"ALTER TABLE `{{users_folders_mapping}}` ADD CONSTRAINT `{{prefix}}unique_user_folder_mapping` " +
"UNIQUE (`user_id`, `folder_id`);" +
"ALTER TABLE `{{users_folders_mapping}}` ADD CONSTRAINT `{{prefix}}users_folders_mapping_user_id_fk_users_id` " +
"FOREIGN KEY (`user_id`) REFERENCES `{{users}}` (`id`) ON DELETE CASCADE;" +
"ALTER TABLE `{{users_folders_mapping}}` ADD CONSTRAINT `{{prefix}}users_folders_mapping_folder_id_fk_folders_id` " +
"FOREIGN KEY (`folder_id`) REFERENCES `{{folders}}` (`id`) ON DELETE CASCADE;" +
"ALTER TABLE `{{users_groups_mapping}}` ADD CONSTRAINT `{{prefix}}unique_user_group_mapping` UNIQUE (`user_id`, `group_id`);" +
"ALTER TABLE `{{groups_folders_mapping}}` ADD CONSTRAINT `{{prefix}}unique_group_folder_mapping` UNIQUE (`group_id`, `folder_id`);" +
"ALTER TABLE `{{users_groups_mapping}}` ADD CONSTRAINT `{{prefix}}users_groups_mapping_group_id_fk_groups_id` " +
"FOREIGN KEY (`group_id`) REFERENCES `{{groups}}` (`id`) ON DELETE NO ACTION;" +
"ALTER TABLE `{{users_groups_mapping}}` ADD CONSTRAINT `{{prefix}}users_groups_mapping_user_id_fk_users_id` " +
"FOREIGN KEY (`user_id`) REFERENCES `{{users}}` (`id`) ON DELETE CASCADE;" +
"ALTER TABLE `{{groups_folders_mapping}}` ADD CONSTRAINT `{{prefix}}groups_folders_mapping_folder_id_fk_folders_id` " +
"FOREIGN KEY (`folder_id`) REFERENCES `{{folders}}` (`id`) ON DELETE CASCADE;" +
"ALTER TABLE `{{groups_folders_mapping}}` ADD CONSTRAINT `{{prefix}}groups_folders_mapping_group_id_fk_groups_id` " +
"FOREIGN KEY (`group_id`) REFERENCES `{{groups}}` (`id`) ON DELETE CASCADE;" +
"CREATE INDEX `{{prefix}}groups_updated_at_idx` ON `{{groups}}` (`updated_at`);"
mysqlV17DownSQL = "ALTER TABLE `{{groups_folders_mapping}}` DROP FOREIGN KEY `{{prefix}}groups_folders_mapping_group_id_fk_groups_id`;" +
"ALTER TABLE `{{groups_folders_mapping}}` DROP FOREIGN KEY `{{prefix}}groups_folders_mapping_folder_id_fk_folders_id`;" +
"ALTER TABLE `{{users_groups_mapping}}` DROP FOREIGN KEY `{{prefix}}users_groups_mapping_user_id_fk_users_id`;" +
"ALTER TABLE `{{users_groups_mapping}}` DROP FOREIGN KEY `{{prefix}}users_groups_mapping_group_id_fk_groups_id`;" +
"ALTER TABLE `{{groups_folders_mapping}}` DROP INDEX `{{prefix}}unique_group_folder_mapping`;" +
"ALTER TABLE `{{users_groups_mapping}}` DROP INDEX `{{prefix}}unique_user_group_mapping`;" +
"DROP TABLE `{{users_groups_mapping}}` CASCADE;" +
"DROP TABLE `{{groups_folders_mapping}}` CASCADE;" +
"DROP TABLE `{{groups}}` CASCADE;" +
"ALTER TABLE `{{users_folders_mapping}}` DROP FOREIGN KEY `{{prefix}}users_folders_mapping_folder_id_fk_folders_id`;" +
"ALTER TABLE `{{users_folders_mapping}}` DROP FOREIGN KEY `{{prefix}}users_folders_mapping_user_id_fk_users_id`;" +
"ALTER TABLE `{{users_folders_mapping}}` DROP INDEX `{{prefix}}unique_user_folder_mapping`;" +
"RENAME TABLE `{{users_folders_mapping}}` TO `{{folders_mapping}}`;" +
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `{{prefix}}unique_mapping` UNIQUE (`user_id`, `folder_id`);" +
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `{{prefix}}folders_mapping_user_id_fk_users_id` " +
"FOREIGN KEY (`user_id`) REFERENCES `{{users}}` (`id`) ON DELETE CASCADE;" +
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `{{prefix}}folders_mapping_folder_id_fk_folders_id` " +
"FOREIGN KEY (`folder_id`) REFERENCES `{{folders}}` (`id`) ON DELETE CASCADE;"
mysqlV19SQL = "CREATE TABLE `{{shared_sessions}}` (`key` varchar(128) NOT NULL PRIMARY KEY, " +
"`data` longtext NOT NULL, `type` integer NOT NULL, `timestamp` bigint NOT NULL);" +
"CREATE INDEX `{{prefix}}shared_sessions_type_idx` ON `{{shared_sessions}}` (`type`);" +
"CREATE INDEX `{{prefix}}shared_sessions_timestamp_idx` ON `{{shared_sessions}}` (`timestamp`);"
mysqlV19DownSQL = "DROP TABLE `{{shared_sessions}}` CASCADE;"
)
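The constants above keep every table name and index prefix behind {{placeholder}} tokens so the same migration scripts work with any configured table prefix. A minimal, self-contained sketch of that expansion, assuming a sqlReplaceAll-style helper (the real one lives elsewhere in this package and may differ):

package main

import (
	"fmt"
	"strings"
)

// expandSQL swaps the {{token}} placeholders for concrete names, mirroring
// the strings.ReplaceAll chains used by the migration functions below.
func expandSQL(tmpl string, names map[string]string) string {
	pairs := make([]string, 0, len(names)*2)
	for token, name := range names {
		pairs = append(pairs, "{{"+token+"}}", name)
	}
	return strings.NewReplacer(pairs...).Replace(tmpl)
}

func main() {
	sql := "CREATE INDEX `{{prefix}}shared_sessions_type_idx` ON `{{shared_sessions}}` (`type`);"
	fmt.Println(expandSQL(sql, map[string]string{
		"prefix":          "sftpgo_",
		"shared_sessions": "sftpgo_shared_sessions",
	}))
}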
// MySQLProvider auth provider for MySQL/MariaDB database
// MySQLProvider defines the auth provider for MySQL/MariaDB database
type MySQLProvider struct {
dbHandle *sql.DB
}
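The method changes in this diff hint at how the provider contract evolved in this release: value receivers for deletions, transfer-quota accounting, groups, and shared sessions. A hedged partial sketch reconstructed only from signatures visible in this diff (not the real declaration; User, Session and vfs.BaseVirtualFolder are the package's own types):

type providerSketch interface {
	validateUserAndPubKey(username string, publicKey []byte, isSSHCert bool) (User, string, error)
	updateTransferQuota(username string, uploadSize, downloadSize int64, reset bool) error
	getUsedQuota(username string) (files int, size int64, ulTransfer int64, dlTransfer int64, err error)
	getFolders(limit, offset int, order string, minimal bool) ([]vfs.BaseVirtualFolder, error)
	deleteUser(user User) error
	getSharedSession(key string) (Session, error)
}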
@@ -62,10 +186,18 @@ func init() {
func initializeMySQLProvider() error {
var err error
dbHandle, err := sql.Open("mysql", getMySQLConnectionString(false))
connString, err := getMySQLConnectionString(false)
if err != nil {
return err
}
redactedConnString, err := getMySQLConnectionString(true)
if err != nil {
return err
}
dbHandle, err := sql.Open("mysql", connString)
if err == nil {
providerLog(logger.LevelDebug, "mysql database handle created, connection string: %#v, pool size: %v",
getMySQLConnectionString(true), config.PoolSize)
redactedConnString, config.PoolSize)
dbHandle.SetMaxOpenConns(config.PoolSize)
if config.PoolSize > 0 {
dbHandle.SetMaxIdleConns(config.PoolSize)
@@ -75,24 +207,59 @@ func initializeMySQLProvider() error {
dbHandle.SetConnMaxLifetime(240 * time.Second)
provider = &MySQLProvider{dbHandle: dbHandle}
} else {
providerLog(logger.LevelWarn, "error creating mysql database handler, connection string: %#v, error: %v",
getMySQLConnectionString(true), err)
providerLog(logger.LevelError, "error creating mysql database handler, connection string: %#v, error: %v",
redactedConnString, err)
}
return err
}
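A self-contained sketch of the pool behaviour configured above, using only database/sql (the DSN and pool size are placeholders; go-sql-driver/mysql is imported for its driver registration side effect):

package main

import (
	"database/sql"
	"fmt"
	"time"

	_ "github.com/go-sql-driver/mysql"
)

func openPool(dsn string, poolSize int) (*sql.DB, error) {
	db, err := sql.Open("mysql", dsn) // lazy: validates the DSN, opens nothing yet
	if err != nil {
		return nil, err
	}
	db.SetMaxOpenConns(poolSize) // 0 means unlimited
	if poolSize > 0 {
		db.SetMaxIdleConns(poolSize) // keep the whole pool warm
	}
	db.SetConnMaxLifetime(240 * time.Second) // recycle before server-side idle timeouts
	return db, db.Ping() // Ping forces a real connection attempt
}

func main() {
	if _, err := openPool("user:pass@tcp(127.0.0.1:3306)/sftpgo", 10); err != nil {
		fmt.Println("connection failed (expected without a running server):", err)
	}
}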
func getMySQLConnectionString(redactedPwd bool) string {
func getMySQLConnectionString(redactedPwd bool) (string, error) {
var connectionString string
if config.ConnectionString == "" {
password := config.Password
if redactedPwd {
if redactedPwd && password != "" {
password = "[redacted]"
}
connectionString = fmt.Sprintf("%v:%v@tcp([%v]:%v)/%v?charset=utf8&interpolateParams=true&timeout=10s&tls=%v&writeTimeout=10s&readTimeout=10s",
config.Username, password, config.Host, config.Port, config.Name, getSSLMode())
sslMode := getSSLMode()
if sslMode == "custom" && !redactedPwd {
tlsConfig := &tls.Config{}
if config.RootCert != "" {
rootCAs, err := x509.SystemCertPool()
if err != nil {
rootCAs = x509.NewCertPool()
}
rootCrt, err := os.ReadFile(config.RootCert)
if err != nil {
return "", fmt.Errorf("unable to load root certificate %#v: %v", config.RootCert, err)
}
if !rootCAs.AppendCertsFromPEM(rootCrt) {
return "", fmt.Errorf("unable to parse root certificate %#v", config.RootCert)
}
tlsConfig.RootCAs = rootCAs
}
if config.ClientCert != "" && config.ClientKey != "" {
clientCert := make([]tls.Certificate, 0, 1)
tlsCert, err := tls.LoadX509KeyPair(config.ClientCert, config.ClientKey)
if err != nil {
return "", fmt.Errorf("unable to load key pair %#v, %#v: %v", config.ClientCert, config.ClientKey, err)
}
clientCert = append(clientCert, tlsCert)
tlsConfig.Certificates = clientCert
}
if config.SSLMode == 2 {
tlsConfig.InsecureSkipVerify = true
}
providerLog(logger.LevelInfo, "registering custom TLS config, root cert %#v, client cert %#v, client key %#v",
config.RootCert, config.ClientCert, config.ClientKey)
if err := mysql.RegisterTLSConfig("custom", tlsConfig); err != nil {
return "", fmt.Errorf("unable to register tls config: %v", err)
}
}
connectionString = fmt.Sprintf("%v:%v@tcp([%v]:%v)/%v?charset=utf8mb4&interpolateParams=true&timeout=10s&parseTime=true&tls=%v&writeTimeout=60s&readTimeout=60s",
config.Username, password, config.Host, config.Port, config.Name, sslMode)
} else {
connectionString = config.ConnectionString
}
return connectionString
return connectionString, nil
}
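Compared to the old builder, the new DSN switches to utf8mb4, enables parseTime, lengthens the read/write timeouts, and can reference a named TLS config. With tls=custom, go-sql-driver/mysql resolves the name through a config registered beforehand; a hypothetical example (host and credentials invented):

package main

import (
	"crypto/tls"
	"fmt"

	"github.com/go-sql-driver/mysql"
)

func main() {
	// Must happen before sql.Open resolves tls=custom in the DSN.
	if err := mysql.RegisterTLSConfig("custom", &tls.Config{MinVersion: tls.VersionTLS12}); err != nil {
		panic(err)
	}
	fmt.Println("sftpgo:secret@tcp([127.0.0.1]:3306)/sftpgo" +
		"?charset=utf8mb4&interpolateParams=true&timeout=10s&parseTime=true" +
		"&tls=custom&writeTimeout=60s&readTimeout=60s")
}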
func (p *MySQLProvider) checkAvailability() error {
@@ -107,22 +274,34 @@ func (p *MySQLProvider) validateUserAndTLSCert(username, protocol string, tlsCer
return sqlCommonValidateUserAndTLSCertificate(username, protocol, tlsCert, p.dbHandle)
}
func (p *MySQLProvider) validateUserAndPubKey(username string, publicKey []byte) (User, string, error) {
return sqlCommonValidateUserAndPubKey(username, publicKey, p.dbHandle)
func (p *MySQLProvider) validateUserAndPubKey(username string, publicKey []byte, isSSHCert bool) (User, string, error) {
return sqlCommonValidateUserAndPubKey(username, publicKey, isSSHCert, p.dbHandle)
}
func (p *MySQLProvider) updateTransferQuota(username string, uploadSize, downloadSize int64, reset bool) error {
return sqlCommonUpdateTransferQuota(username, uploadSize, downloadSize, reset, p.dbHandle)
}
func (p *MySQLProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
return sqlCommonUpdateQuota(username, filesAdd, sizeAdd, reset, p.dbHandle)
}
func (p *MySQLProvider) getUsedQuota(username string) (int, int64, error) {
func (p *MySQLProvider) getUsedQuota(username string) (int, int64, int64, int64, error) {
return sqlCommonGetUsedQuota(username, p.dbHandle)
}
func (p *MySQLProvider) setUpdatedAt(username string) {
sqlCommonSetUpdatedAt(username, p.dbHandle)
}
func (p *MySQLProvider) updateLastLogin(username string) error {
return sqlCommonUpdateLastLogin(username, p.dbHandle)
}
func (p *MySQLProvider) updateAdminLastLogin(username string) error {
return sqlCommonUpdateAdminLastLogin(username, p.dbHandle)
}
func (p *MySQLProvider) userExists(username string) (User, error) {
return sqlCommonGetUserByUsername(username, p.dbHandle)
}
@@ -135,24 +314,36 @@ func (p *MySQLProvider) updateUser(user *User) error {
return sqlCommonUpdateUser(user, p.dbHandle)
}
func (p *MySQLProvider) deleteUser(user *User) error {
func (p *MySQLProvider) deleteUser(user User) error {
return sqlCommonDeleteUser(user, p.dbHandle)
}
func (p *MySQLProvider) updateUserPassword(username, password string) error {
return sqlCommonUpdateUserPassword(username, password, p.dbHandle)
}
func (p *MySQLProvider) dumpUsers() ([]User, error) {
return sqlCommonDumpUsers(p.dbHandle)
}
func (p *MySQLProvider) getRecentlyUpdatedUsers(after int64) ([]User, error) {
return sqlCommonGetRecentlyUpdatedUsers(after, p.dbHandle)
}
func (p *MySQLProvider) getUsers(limit int, offset int, order string) ([]User, error) {
return sqlCommonGetUsers(limit, offset, order, p.dbHandle)
}
func (p *MySQLProvider) getUsersForQuotaCheck(toFetch map[string]bool) ([]User, error) {
return sqlCommonGetUsersForQuotaCheck(toFetch, p.dbHandle)
}
func (p *MySQLProvider) dumpFolders() ([]vfs.BaseVirtualFolder, error) {
return sqlCommonDumpFolders(p.dbHandle)
}
func (p *MySQLProvider) getFolders(limit, offset int, order string) ([]vfs.BaseVirtualFolder, error) {
return sqlCommonGetFolders(limit, offset, order, p.dbHandle)
func (p *MySQLProvider) getFolders(limit, offset int, order string, minimal bool) ([]vfs.BaseVirtualFolder, error) {
return sqlCommonGetFolders(limit, offset, order, minimal, p.dbHandle)
}
func (p *MySQLProvider) getFolderByName(name string) (vfs.BaseVirtualFolder, error) {
@@ -169,7 +360,7 @@ func (p *MySQLProvider) updateFolder(folder *vfs.BaseVirtualFolder) error {
return sqlCommonUpdateFolder(folder, p.dbHandle)
}
func (p *MySQLProvider) deleteFolder(folder *vfs.BaseVirtualFolder) error {
func (p *MySQLProvider) deleteFolder(folder vfs.BaseVirtualFolder) error {
return sqlCommonDeleteFolder(folder, p.dbHandle)
}
@@ -181,6 +372,38 @@ func (p *MySQLProvider) getUsedFolderQuota(name string) (int, int64, error) {
return sqlCommonGetFolderUsedQuota(name, p.dbHandle)
}
func (p *MySQLProvider) getGroups(limit, offset int, order string, minimal bool) ([]Group, error) {
return sqlCommonGetGroups(limit, offset, order, minimal, p.dbHandle)
}
func (p *MySQLProvider) getGroupsWithNames(names []string) ([]Group, error) {
return sqlCommonGetGroupsWithNames(names, p.dbHandle)
}
func (p *MySQLProvider) getUsersInGroups(names []string) ([]string, error) {
return sqlCommonGetUsersInGroups(names, p.dbHandle)
}
func (p *MySQLProvider) groupExists(name string) (Group, error) {
return sqlCommonGetGroupByName(name, p.dbHandle)
}
func (p *MySQLProvider) addGroup(group *Group) error {
return sqlCommonAddGroup(group, p.dbHandle)
}
func (p *MySQLProvider) updateGroup(group *Group) error {
return sqlCommonUpdateGroup(group, p.dbHandle)
}
func (p *MySQLProvider) deleteGroup(group Group) error {
return sqlCommonDeleteGroup(group, p.dbHandle)
}
func (p *MySQLProvider) dumpGroups() ([]Group, error) {
return sqlCommonDumpGroups(p.dbHandle)
}
func (p *MySQLProvider) adminExists(username string) (Admin, error) {
return sqlCommonGetAdminByUsername(username, p.dbHandle)
}
@@ -193,7 +416,7 @@ func (p *MySQLProvider) updateAdmin(admin *Admin) error {
return sqlCommonUpdateAdmin(admin, p.dbHandle)
}
func (p *MySQLProvider) deleteAdmin(admin *Admin) error {
func (p *MySQLProvider) deleteAdmin(admin Admin) error {
return sqlCommonDeleteAdmin(admin, p.dbHandle)
}
@@ -209,6 +432,130 @@ func (p *MySQLProvider) validateAdminAndPass(username, password, ip string) (Adm
return sqlCommonValidateAdminAndPass(username, password, ip, p.dbHandle)
}
func (p *MySQLProvider) apiKeyExists(keyID string) (APIKey, error) {
return sqlCommonGetAPIKeyByID(keyID, p.dbHandle)
}
func (p *MySQLProvider) addAPIKey(apiKey *APIKey) error {
return sqlCommonAddAPIKey(apiKey, p.dbHandle)
}
func (p *MySQLProvider) updateAPIKey(apiKey *APIKey) error {
return sqlCommonUpdateAPIKey(apiKey, p.dbHandle)
}
func (p *MySQLProvider) deleteAPIKey(apiKey APIKey) error {
return sqlCommonDeleteAPIKey(apiKey, p.dbHandle)
}
func (p *MySQLProvider) getAPIKeys(limit int, offset int, order string) ([]APIKey, error) {
return sqlCommonGetAPIKeys(limit, offset, order, p.dbHandle)
}
func (p *MySQLProvider) dumpAPIKeys() ([]APIKey, error) {
return sqlCommonDumpAPIKeys(p.dbHandle)
}
func (p *MySQLProvider) updateAPIKeyLastUse(keyID string) error {
return sqlCommonUpdateAPIKeyLastUse(keyID, p.dbHandle)
}
func (p *MySQLProvider) shareExists(shareID, username string) (Share, error) {
return sqlCommonGetShareByID(shareID, username, p.dbHandle)
}
func (p *MySQLProvider) addShare(share *Share) error {
return sqlCommonAddShare(share, p.dbHandle)
}
func (p *MySQLProvider) updateShare(share *Share) error {
return sqlCommonUpdateShare(share, p.dbHandle)
}
func (p *MySQLProvider) deleteShare(share Share) error {
return sqlCommonDeleteShare(share, p.dbHandle)
}
func (p *MySQLProvider) getShares(limit int, offset int, order, username string) ([]Share, error) {
return sqlCommonGetShares(limit, offset, order, username, p.dbHandle)
}
func (p *MySQLProvider) dumpShares() ([]Share, error) {
return sqlCommonDumpShares(p.dbHandle)
}
func (p *MySQLProvider) updateShareLastUse(shareID string, numTokens int) error {
return sqlCommonUpdateShareLastUse(shareID, numTokens, p.dbHandle)
}
func (p *MySQLProvider) getDefenderHosts(from int64, limit int) ([]DefenderEntry, error) {
return sqlCommonGetDefenderHosts(from, limit, p.dbHandle)
}
func (p *MySQLProvider) getDefenderHostByIP(ip string, from int64) (DefenderEntry, error) {
return sqlCommonGetDefenderHostByIP(ip, from, p.dbHandle)
}
func (p *MySQLProvider) isDefenderHostBanned(ip string) (DefenderEntry, error) {
return sqlCommonIsDefenderHostBanned(ip, p.dbHandle)
}
func (p *MySQLProvider) updateDefenderBanTime(ip string, minutes int) error {
return sqlCommonDefenderIncrementBanTime(ip, minutes, p.dbHandle)
}
func (p *MySQLProvider) deleteDefenderHost(ip string) error {
return sqlCommonDeleteDefenderHost(ip, p.dbHandle)
}
func (p *MySQLProvider) addDefenderEvent(ip string, score int) error {
return sqlCommonAddDefenderHostAndEvent(ip, score, p.dbHandle)
}
func (p *MySQLProvider) setDefenderBanTime(ip string, banTime int64) error {
return sqlCommonSetDefenderBanTime(ip, banTime, p.dbHandle)
}
func (p *MySQLProvider) cleanupDefender(from int64) error {
return sqlCommonDefenderCleanup(from, p.dbHandle)
}
func (p *MySQLProvider) addActiveTransfer(transfer ActiveTransfer) error {
return sqlCommonAddActiveTransfer(transfer, p.dbHandle)
}
func (p *MySQLProvider) updateActiveTransferSizes(ulSize, dlSize, transferID int64, connectionID string) error {
return sqlCommonUpdateActiveTransferSizes(ulSize, dlSize, transferID, connectionID, p.dbHandle)
}
func (p *MySQLProvider) removeActiveTransfer(transferID int64, connectionID string) error {
return sqlCommonRemoveActiveTransfer(transferID, connectionID, p.dbHandle)
}
func (p *MySQLProvider) cleanupActiveTransfers(before time.Time) error {
return sqlCommonCleanupActiveTransfers(before, p.dbHandle)
}
func (p *MySQLProvider) getActiveTransfers(from time.Time) ([]ActiveTransfer, error) {
return sqlCommonGetActiveTransfers(from, p.dbHandle)
}
func (p *MySQLProvider) addSharedSession(session Session) error {
return sqlCommonAddSession(session, p.dbHandle)
}
func (p *MySQLProvider) deleteSharedSession(key string) error {
return sqlCommonDeleteSession(key, p.dbHandle)
}
func (p *MySQLProvider) getSharedSession(key string) (Session, error) {
return sqlCommonGetSession(key, p.dbHandle)
}
func (p *MySQLProvider) cleanupSharedSessions(sessionType SessionType, before int64) error {
return sqlCommonCleanupSessions(sessionType, before, p.dbHandle)
}
func (p *MySQLProvider) close() error {
return p.dbHandle.Close()
}
@@ -223,17 +570,26 @@ func (p *MySQLProvider) initializeDatabase() error {
if err == nil && dbVersion.Version > 0 {
return ErrNoInitRequired
}
if errors.Is(err, sql.ErrNoRows) {
return errSchemaVersionEmpty
}
logger.InfoToConsole("creating initial database schema, version 15")
providerLog(logger.LevelInfo, "creating initial database schema, version 15")
initialSQL := strings.ReplaceAll(mysqlInitialSQL, "{{schema_version}}", sqlTableSchemaVersion)
initialSQL = strings.ReplaceAll(initialSQL, "{{admins}}", sqlTableAdmins)
initialSQL = strings.ReplaceAll(initialSQL, "{{folders}}", sqlTableFolders)
initialSQL = strings.ReplaceAll(initialSQL, "{{users}}", sqlTableUsers)
initialSQL = strings.ReplaceAll(initialSQL, "{{folders_mapping}}", sqlTableFoldersMapping)
initialSQL = strings.ReplaceAll(initialSQL, "{{api_keys}}", sqlTableAPIKeys)
initialSQL = strings.ReplaceAll(initialSQL, "{{shares}}", sqlTableShares)
initialSQL = strings.ReplaceAll(initialSQL, "{{defender_events}}", sqlTableDefenderEvents)
initialSQL = strings.ReplaceAll(initialSQL, "{{defender_hosts}}", sqlTableDefenderHosts)
initialSQL = strings.ReplaceAll(initialSQL, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, strings.Split(initialSQL, ";"), 8)
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, strings.Split(initialSQL, ";"), 15, true)
}
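sqlCommonExecSQLAndUpdateDBVersion itself is not part of this diff; a plausible shape, assuming it executes the statements and records the schema version inside one transaction (table name and placeholder style simplified, the real helper in sqlcommon may differ):

package migrate

import (
	"database/sql"
	"strings"
)

func execSQLAndUpdateVersion(db *sql.DB, stmts []string, newVersion int) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op once Commit succeeds
	for _, q := range stmts {
		if strings.TrimSpace(q) == "" {
			continue // the trailing split on ";" yields an empty fragment
		}
		if _, err := tx.Exec(q); err != nil {
			return err
		}
	}
	if _, err := tx.Exec("UPDATE schema_version SET version = ?", newVersion); err != nil {
		return err
	}
	return tx.Commit()
}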
func (p *MySQLProvider) migrateDatabase() error {
func (p *MySQLProvider) migrateDatabase() error { //nolint:dupl
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
if err != nil {
return err
@@ -243,18 +599,22 @@ func (p *MySQLProvider) migrateDatabase() error {
case version == sqlDatabaseVersion:
providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", version)
return ErrNoInitRequired
case version < 8:
case version < 15:
err = fmt.Errorf("database version %v is too old, please see the upgrading docs", version)
providerLog(logger.LevelError, "%v", err)
logger.ErrorToConsole("%v", err)
return err
case version == 8:
return updateMySQLDatabaseFromV8(p.dbHandle)
case version == 9:
return updateMySQLDatabaseFromV9(p.dbHandle)
case version == 15:
return updateMySQLDatabaseFromV15(p.dbHandle)
case version == 16:
return updateMySQLDatabaseFromV16(p.dbHandle)
case version == 17:
return updateMySQLDatabaseFromV17(p.dbHandle)
case version == 18:
return updateMySQLDatabaseFromV18(p.dbHandle)
default:
if version > sqlDatabaseVersion {
providerLog(logger.LevelWarn, "database version %v is newer than the supported one: %v", version,
providerLog(logger.LevelError, "database version %v is newer than the supported one: %v", version,
sqlDatabaseVersion)
logger.WarnToConsole("database version %v is newer than the supported one: %v", version,
sqlDatabaseVersion)
@@ -274,59 +634,145 @@ func (p *MySQLProvider) revertDatabase(targetVersion int) error {
}
switch dbVersion.Version {
case 9:
return downgradeMySQLDatabaseFromV9(p.dbHandle)
case 10:
return downgradeMySQLDatabaseFromV10(p.dbHandle)
case 16:
return downgradeMySQLDatabaseFromV16(p.dbHandle)
case 17:
return downgradeMySQLDatabaseFromV17(p.dbHandle)
case 18:
return downgradeMySQLDatabaseFromV18(p.dbHandle)
case 19:
return downgradeMySQLDatabaseFromV19(p.dbHandle)
default:
return fmt.Errorf("database version not handled: %v", dbVersion.Version)
}
}
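Each updateMySQLDatabaseFromVN helper below applies a single step and then delegates to the next one, so any supported starting version walks forward to the current schema, and the downgrade helpers walk the same chain backwards. A generic sketch of the pattern (not SFTPGo API):

package migrate

import "database/sql"

type migrationStep func(*sql.DB) error

func migrateForward(db *sql.DB, version int, steps map[int]migrationStep) error {
	for {
		step, ok := steps[version]
		if !ok {
			return nil // no step registered: schema is current
		}
		if err := step(db); err != nil {
			return err
		}
		version++
	}
}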
func updateMySQLDatabaseFromV8(dbHandle *sql.DB) error {
if err := updateMySQLDatabaseFrom8To9(dbHandle); err != nil {
func (p *MySQLProvider) resetDatabase() error {
sql := sqlReplaceAll(mysqlResetSQL)
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, strings.Split(sql, ";"), 0, false)
}
func updateMySQLDatabaseFromV15(dbHandle *sql.DB) error {
if err := updateMySQLDatabaseFrom15To16(dbHandle); err != nil {
return err
}
return updateMySQLDatabaseFromV9(dbHandle)
return updateMySQLDatabaseFromV16(dbHandle)
}
func updateMySQLDatabaseFromV9(dbHandle *sql.DB) error {
return updateMySQLDatabaseFrom9To10(dbHandle)
}
func downgradeMySQLDatabaseFromV9(dbHandle *sql.DB) error {
return downgradeMySQLDatabaseFrom9To8(dbHandle)
}
func downgradeMySQLDatabaseFromV10(dbHandle *sql.DB) error {
if err := downgradeMySQLDatabaseFrom10To9(dbHandle); err != nil {
func updateMySQLDatabaseFromV16(dbHandle *sql.DB) error {
if err := updateMySQLDatabaseFrom16To17(dbHandle); err != nil {
return err
}
return downgradeMySQLDatabaseFromV9(dbHandle)
return updateMySQLDatabaseFromV17(dbHandle)
}
func updateMySQLDatabaseFrom8To9(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 8 -> 9")
providerLog(logger.LevelInfo, "updating database version: 8 -> 9")
sql := strings.ReplaceAll(mysqlV9SQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
func updateMySQLDatabaseFromV17(dbHandle *sql.DB) error {
if err := updateMySQLDatabaseFrom17To18(dbHandle); err != nil {
return err
}
return updateMySQLDatabaseFromV18(dbHandle)
}
func updateMySQLDatabaseFromV18(dbHandle *sql.DB) error {
return updateMySQLDatabaseFrom18To19(dbHandle)
}
func downgradeMySQLDatabaseFromV16(dbHandle *sql.DB) error {
return downgradeMySQLDatabaseFrom16To15(dbHandle)
}
func downgradeMySQLDatabaseFromV17(dbHandle *sql.DB) error {
if err := downgradeMySQLDatabaseFrom17To16(dbHandle); err != nil {
return err
}
return downgradeMySQLDatabaseFromV16(dbHandle)
}
func downgradeMySQLDatabaseFromV18(dbHandle *sql.DB) error {
if err := downgradeMySQLDatabaseFrom18To17(dbHandle); err != nil {
return err
}
return downgradeMySQLDatabaseFromV17(dbHandle)
}
func downgradeMySQLDatabaseFromV19(dbHandle *sql.DB) error {
if err := downgradeMySQLDatabaseFrom19To18(dbHandle); err != nil {
return err
}
return downgradeMySQLDatabaseFromV18(dbHandle)
}
func updateMySQLDatabaseFrom15To16(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 15 -> 16")
providerLog(logger.LevelInfo, "updating database version: 15 -> 16")
sql := strings.ReplaceAll(mysqlV16SQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{active_transfers}}", sqlTableActiveTransfers)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 16, true)
}
func updateMySQLDatabaseFrom16To17(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 16 -> 17")
providerLog(logger.LevelInfo, "updating database version: 16 -> 17")
sql := strings.ReplaceAll(mysqlV17SQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{groups}}", sqlTableGroups)
sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 9)
sql = strings.ReplaceAll(sql, "{{folders_mapping}}", sqlTableFoldersMapping)
sql = strings.ReplaceAll(sql, "{{users_folders_mapping}}", sqlTableUsersFoldersMapping)
sql = strings.ReplaceAll(sql, "{{users_groups_mapping}}", sqlTableUsersGroupsMapping)
sql = strings.ReplaceAll(sql, "{{groups_folders_mapping}}", sqlTableGroupsFoldersMapping)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 17, true)
}
func downgradeMySQLDatabaseFrom9To8(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 9 -> 8")
providerLog(logger.LevelInfo, "downgrading database version: 9 -> 8")
sql := strings.ReplaceAll(mysqlV9DownSQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
func updateMySQLDatabaseFrom17To18(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 17 -> 18")
providerLog(logger.LevelInfo, "updating database version: 17 -> 18")
if err := importGCSCredentials(); err != nil {
return err
}
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, nil, 18, true)
}
func updateMySQLDatabaseFrom18To19(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 18 -> 19")
providerLog(logger.LevelInfo, "updating database version: 18 -> 19")
sql := strings.ReplaceAll(mysqlV19SQL, "{{shared_sessions}}", sqlTableSharedSessions)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 19, true)
}
func downgradeMySQLDatabaseFrom16To15(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 16 -> 15")
providerLog(logger.LevelInfo, "downgrading database version: 16 -> 15")
sql := strings.ReplaceAll(mysqlV16DownSQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{active_transfers}}", sqlTableActiveTransfers)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 15, false)
}
func downgradeMySQLDatabaseFrom17To16(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 17 -> 16")
providerLog(logger.LevelInfo, "downgrading database version: 17 -> 16")
sql := strings.ReplaceAll(mysqlV17DownSQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{groups}}", sqlTableGroups)
sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 8)
sql = strings.ReplaceAll(sql, "{{folders_mapping}}", sqlTableFoldersMapping)
sql = strings.ReplaceAll(sql, "{{users_folders_mapping}}", sqlTableUsersFoldersMapping)
sql = strings.ReplaceAll(sql, "{{users_groups_mapping}}", sqlTableUsersGroupsMapping)
sql = strings.ReplaceAll(sql, "{{groups_folders_mapping}}", sqlTableGroupsFoldersMapping)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 16, false)
}
func updateMySQLDatabaseFrom9To10(dbHandle *sql.DB) error {
return sqlCommonUpdateDatabaseFrom9To10(dbHandle)
func downgradeMySQLDatabaseFrom18To17(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 18 -> 17")
providerLog(logger.LevelInfo, "downgrading database version: 18 -> 17")
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, nil, 17, false)
}
func downgradeMySQLDatabaseFrom10To9(dbHandle *sql.DB) error {
return sqlCommonDowngradeDatabaseFrom10To9(dbHandle)
func downgradeMySQLDatabaseFrom19To18(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 19 -> 18")
providerLog(logger.LevelInfo, "downgrading database version: 19 -> 18")
sql := strings.ReplaceAll(mysqlV19DownSQL, "{{shared_sessions}}", sqlTableSharedSessions)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 18, false)
}
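Taken together, a hedged usage sketch of the entry points above; the methods and ErrNoInitRequired appear in this diff, everything else (the provider value, the target version, the logging) is assumed for illustration:

// p is an initialized MySQLProvider; ErrNoInitRequired means nothing to do.
if err := p.migrateDatabase(); err != nil && err != ErrNoInitRequired {
	log.Fatal(err)
}
// Before downgrading SFTPGo itself, the schema can be walked back to the
// version the older release expects, e.g.:
if err := p.revertDatabase(16); err != nil {
	log.Fatal(err)
}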


@@ -1,3 +1,18 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//go:build nomysql
// +build nomysql
package dataprovider
@@ -5,7 +20,7 @@ package dataprovider
import (
"errors"
"github.com/drakkan/sftpgo/version"
"github.com/drakkan/sftpgo/v2/version"
)
func init() {


@@ -1,3 +1,18 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//go:build !nopgsql
// +build !nopgsql
package dataprovider
@@ -14,50 +29,158 @@ import (
// we import lib/pq here to be able to disable PostgreSQL support using a build tag
_ "github.com/lib/pq"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/version"
"github.com/drakkan/sftpgo/vfs"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/version"
"github.com/drakkan/sftpgo/v2/vfs"
)
const (
pgsqlResetSQL = `DROP TABLE IF EXISTS "{{api_keys}}" CASCADE;
DROP TABLE IF EXISTS "{{folders_mapping}}" CASCADE;
DROP TABLE IF EXISTS "{{users_folders_mapping}}" CASCADE;
DROP TABLE IF EXISTS "{{users_groups_mapping}}" CASCADE;
DROP TABLE IF EXISTS "{{groups_folders_mapping}}" CASCADE;
DROP TABLE IF EXISTS "{{admins}}" CASCADE;
DROP TABLE IF EXISTS "{{folders}}" CASCADE;
DROP TABLE IF EXISTS "{{shares}}" CASCADE;
DROP TABLE IF EXISTS "{{users}}" CASCADE;
DROP TABLE IF EXISTS "{{groups}}" CASCADE;
DROP TABLE IF EXISTS "{{defender_events}}" CASCADE;
DROP TABLE IF EXISTS "{{defender_hosts}}" CASCADE;
DROP TABLE IF EXISTS "{{active_transfers}}" CASCADE;
DROP TABLE IF EXISTS "{{shared_sessions}}" CASCADE;
DROP TABLE IF EXISTS "{{schema_version}}" CASCADE;
`
pgsqlInitial = `CREATE TABLE "{{schema_version}}" ("id" serial NOT NULL PRIMARY KEY, "version" integer NOT NULL);
CREATE TABLE "{{admins}}" ("id" serial NOT NULL PRIMARY KEY, "username" varchar(255) NOT NULL UNIQUE,
"password" varchar(255) NOT NULL, "email" varchar(255) NULL, "status" integer NOT NULL, "permissions" text NOT NULL,
"filters" text NULL, "additional_info" text NULL);
CREATE TABLE "{{folders}}" ("id" serial NOT NULL PRIMARY KEY, "name" varchar(255) NOT NULL UNIQUE,
"path" varchar(512) NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL,
"last_quota_update" bigint NOT NULL);
CREATE TABLE "{{users}}" ("id" serial NOT NULL PRIMARY KEY, "status" integer NOT NULL, "expiration_date" bigint NOT NULL,
"username" varchar(255) NOT NULL UNIQUE, "password" text NULL, "public_keys" text NULL, "home_dir" varchar(512) NOT NULL,
"uid" integer NOT NULL, "gid" integer NOT NULL, "max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL,
"quota_files" integer NOT NULL, "permissions" text NOT NULL, "used_quota_size" bigint NOT NULL,
"description" varchar(512) NULL, "password" varchar(255) NOT NULL, "email" varchar(255) NULL, "status" integer NOT NULL,
"permissions" text NOT NULL, "filters" text NULL, "additional_info" text NULL, "last_login" bigint NOT NULL,
"created_at" bigint NOT NULL, "updated_at" bigint NOT NULL);
CREATE TABLE "{{defender_hosts}}" ("id" bigserial NOT NULL PRIMARY KEY, "ip" varchar(50) NOT NULL UNIQUE,
"ban_time" bigint NOT NULL, "updated_at" bigint NOT NULL);
CREATE TABLE "{{defender_events}}" ("id" bigserial NOT NULL PRIMARY KEY, "date_time" bigint NOT NULL, "score" integer NOT NULL,
"host_id" bigint NOT NULL);
ALTER TABLE "{{defender_events}}" ADD CONSTRAINT "{{prefix}}defender_events_host_id_fk_defender_hosts_id" FOREIGN KEY
("host_id") REFERENCES "{{defender_hosts}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
CREATE TABLE "{{folders}}" ("id" serial NOT NULL PRIMARY KEY, "name" varchar(255) NOT NULL UNIQUE, "description" varchar(512) NULL,
"path" text NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL,
"filesystem" text NULL);
CREATE TABLE "{{users}}" ("id" serial NOT NULL PRIMARY KEY, "username" varchar(255) NOT NULL UNIQUE, "status" integer NOT NULL,
"expiration_date" bigint NOT NULL, "description" varchar(512) NULL, "password" text NULL, "public_keys" text NULL,
"home_dir" text NOT NULL, "uid" bigint NOT NULL, "gid" bigint NOT NULL, "max_sessions" integer NOT NULL,
"quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "permissions" text NOT NULL, "used_quota_size" bigint NOT NULL,
"used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL,
"download_bandwidth" integer NOT NULL, "last_login" bigint NOT NULL, "filters" text NULL, "filesystem" text NULL,
"additional_info" text NULL);
CREATE TABLE "{{folders_mapping}}" ("id" serial NOT NULL PRIMARY KEY, "virtual_path" varchar(512) NOT NULL,
"additional_info" text NULL, "created_at" bigint NOT NULL, "updated_at" bigint NOT NULL, "email" varchar(255) NULL);
CREATE TABLE "{{folders_mapping}}" ("id" serial NOT NULL PRIMARY KEY, "virtual_path" text NOT NULL,
"quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "folder_id" integer NOT NULL, "user_id" integer NOT NULL);
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "{{prefix}}unique_mapping" UNIQUE ("user_id", "folder_id");
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "{{prefix}}folders_mapping_folder_id_fk_folders_id"
FOREIGN KEY ("folder_id") REFERENCES "{{folders}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
FOREIGN KEY ("folder_id") REFERENCES "{{folders}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "{{prefix}}folders_mapping_user_id_fk_users_id"
FOREIGN KEY ("user_id") REFERENCES "{{users}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
FOREIGN KEY ("user_id") REFERENCES "{{users}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
CREATE TABLE "{{shares}}" ("id" serial NOT NULL PRIMARY KEY,
"share_id" varchar(60) NOT NULL UNIQUE, "name" varchar(255) NOT NULL, "description" varchar(512) NULL,
"scope" integer NOT NULL, "paths" text NOT NULL, "created_at" bigint NOT NULL, "updated_at" bigint NOT NULL,
"last_use_at" bigint NOT NULL, "expires_at" bigint NOT NULL, "password" text NULL,
"max_tokens" integer NOT NULL, "used_tokens" integer NOT NULL, "allow_from" text NULL,
"user_id" integer NOT NULL);
ALTER TABLE "{{shares}}" ADD CONSTRAINT "{{prefix}}shares_user_id_fk_users_id" FOREIGN KEY ("user_id")
REFERENCES "{{users}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
CREATE TABLE "{{api_keys}}" ("id" serial NOT NULL PRIMARY KEY, "name" varchar(255) NOT NULL,
"key_id" varchar(50) NOT NULL UNIQUE, "api_key" varchar(255) NOT NULL UNIQUE, "scope" integer NOT NULL,
"created_at" bigint NOT NULL, "updated_at" bigint NOT NULL, "last_use_at" bigint NOT NULL,"expires_at" bigint NOT NULL,
"description" text NULL, "admin_id" integer NULL, "user_id" integer NULL);
ALTER TABLE "{{api_keys}}" ADD CONSTRAINT "{{prefix}}api_keys_admin_id_fk_admins_id" FOREIGN KEY ("admin_id")
REFERENCES "{{admins}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
ALTER TABLE "{{api_keys}}" ADD CONSTRAINT "{{prefix}}api_keys_user_id_fk_users_id" FOREIGN KEY ("user_id")
REFERENCES "{{users}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
CREATE INDEX "{{prefix}}folders_mapping_folder_id_idx" ON "{{folders_mapping}}" ("folder_id");
CREATE INDEX "{{prefix}}folders_mapping_user_id_idx" ON "{{folders_mapping}}" ("user_id");
INSERT INTO {{schema_version}} (version) VALUES (8);
CREATE INDEX "{{prefix}}api_keys_admin_id_idx" ON "{{api_keys}}" ("admin_id");
CREATE INDEX "{{prefix}}api_keys_user_id_idx" ON "{{api_keys}}" ("user_id");
CREATE INDEX "{{prefix}}users_updated_at_idx" ON "{{users}}" ("updated_at");
CREATE INDEX "{{prefix}}shares_user_id_idx" ON "{{shares}}" ("user_id");
CREATE INDEX "{{prefix}}defender_hosts_updated_at_idx" ON "{{defender_hosts}}" ("updated_at");
CREATE INDEX "{{prefix}}defender_hosts_ban_time_idx" ON "{{defender_hosts}}" ("ban_time");
CREATE INDEX "{{prefix}}defender_events_date_time_idx" ON "{{defender_events}}" ("date_time");
CREATE INDEX "{{prefix}}defender_events_host_id_idx" ON "{{defender_events}}" ("host_id");
INSERT INTO {{schema_version}} (version) VALUES (15);
`
pgsqlV9SQL = `ALTER TABLE "{{admins}}" ADD COLUMN "description" varchar(512) NULL;
ALTER TABLE "{{folders}}" ADD COLUMN "description" varchar(512) NULL;
ALTER TABLE "{{folders}}" ADD COLUMN "filesystem" text NULL;
ALTER TABLE "{{users}}" ADD COLUMN "description" varchar(512) NULL;
pgsqlV16SQL = `ALTER TABLE "{{users}}" ADD COLUMN "download_data_transfer" integer DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ALTER COLUMN "download_data_transfer" DROP DEFAULT;
ALTER TABLE "{{users}}" ADD COLUMN "total_data_transfer" integer DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ALTER COLUMN "total_data_transfer" DROP DEFAULT;
ALTER TABLE "{{users}}" ADD COLUMN "upload_data_transfer" integer DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ALTER COLUMN "upload_data_transfer" DROP DEFAULT;
ALTER TABLE "{{users}}" ADD COLUMN "used_download_data_transfer" integer DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ALTER COLUMN "used_download_data_transfer" DROP DEFAULT;
ALTER TABLE "{{users}}" ADD COLUMN "used_upload_data_transfer" integer DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ALTER COLUMN "used_upload_data_transfer" DROP DEFAULT;
CREATE TABLE "{{active_transfers}}" ("id" bigserial NOT NULL PRIMARY KEY, "connection_id" varchar(100) NOT NULL,
"transfer_id" bigint NOT NULL, "transfer_type" integer NOT NULL, "username" varchar(255) NOT NULL,
"folder_name" varchar(255) NULL, "ip" varchar(50) NOT NULL, "truncated_size" bigint NOT NULL,
"current_ul_size" bigint NOT NULL, "current_dl_size" bigint NOT NULL, "created_at" bigint NOT NULL,
"updated_at" bigint NOT NULL);
CREATE INDEX "{{prefix}}active_transfers_connection_id_idx" ON "{{active_transfers}}" ("connection_id");
CREATE INDEX "{{prefix}}active_transfers_transfer_id_idx" ON "{{active_transfers}}" ("transfer_id");
CREATE INDEX "{{prefix}}active_transfers_updated_at_idx" ON "{{active_transfers}}" ("updated_at");
`
pgsqlV9DownSQL = `ALTER TABLE "{{users}}" DROP COLUMN "description" CASCADE;
ALTER TABLE "{{folders}}" DROP COLUMN "filesystem" CASCADE;
ALTER TABLE "{{folders}}" DROP COLUMN "description" CASCADE;
ALTER TABLE "{{admins}}" DROP COLUMN "description" CASCADE;
pgsqlV16DownSQL = `ALTER TABLE "{{users}}" DROP COLUMN "used_upload_data_transfer" CASCADE;
ALTER TABLE "{{users}}" DROP COLUMN "used_download_data_transfer" CASCADE;
ALTER TABLE "{{users}}" DROP COLUMN "upload_data_transfer" CASCADE;
ALTER TABLE "{{users}}" DROP COLUMN "total_data_transfer" CASCADE;
ALTER TABLE "{{users}}" DROP COLUMN "download_data_transfer" CASCADE;
DROP TABLE "{{active_transfers}}" CASCADE;
`
pgsqlV17SQL = `CREATE TABLE "{{groups}}" ("id" serial NOT NULL PRIMARY KEY, "name" varchar(255) NOT NULL UNIQUE,
"description" varchar(512) NULL, "created_at" bigint NOT NULL, "updated_at" bigint NOT NULL, "user_settings" text NULL);
CREATE TABLE "{{groups_folders_mapping}}" ("id" serial NOT NULL PRIMARY KEY, "group_id" integer NOT NULL,
"folder_id" integer NOT NULL, "virtual_path" text NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL);
CREATE TABLE "{{users_groups_mapping}}" ("id" serial NOT NULL PRIMARY KEY, "user_id" integer NOT NULL,
"group_id" integer NOT NULL, "group_type" integer NOT NULL);
DROP INDEX "{{prefix}}folders_mapping_folder_id_idx";
DROP INDEX "{{prefix}}folders_mapping_user_id_idx";
ALTER TABLE "{{folders_mapping}}" DROP CONSTRAINT "{{prefix}}unique_mapping";
ALTER TABLE "{{folders_mapping}}" RENAME TO "{{users_folders_mapping}}";
ALTER TABLE "{{users_folders_mapping}}" ADD CONSTRAINT "{{prefix}}unique_user_folder_mapping" UNIQUE ("user_id", "folder_id");
CREATE INDEX "{{prefix}}users_folders_mapping_folder_id_idx" ON "{{users_folders_mapping}}" ("folder_id");
CREATE INDEX "{{prefix}}users_folders_mapping_user_id_idx" ON "{{users_folders_mapping}}" ("user_id");
ALTER TABLE "{{users_groups_mapping}}" ADD CONSTRAINT "{{prefix}}unique_user_group_mapping" UNIQUE ("user_id", "group_id");
ALTER TABLE "{{groups_folders_mapping}}" ADD CONSTRAINT "{{prefix}}unique_group_folder_mapping" UNIQUE ("group_id", "folder_id");
CREATE INDEX "{{prefix}}users_groups_mapping_group_id_idx" ON "{{users_groups_mapping}}" ("group_id");
ALTER TABLE "{{users_groups_mapping}}" ADD CONSTRAINT "{{prefix}}users_groups_mapping_group_id_fk_groups_id"
FOREIGN KEY ("group_id") REFERENCES "{{groups}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION;
CREATE INDEX "{{prefix}}users_groups_mapping_user_id_idx" ON "{{users_groups_mapping}}" ("user_id");
ALTER TABLE "{{users_groups_mapping}}" ADD CONSTRAINT "{{prefix}}users_groups_mapping_user_id_fk_users_id"
FOREIGN KEY ("user_id") REFERENCES "{{users}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
CREATE INDEX "{{prefix}}groups_folders_mapping_folder_id_idx" ON "{{groups_folders_mapping}}" ("folder_id");
ALTER TABLE "{{groups_folders_mapping}}" ADD CONSTRAINT "{{prefix}}groups_folders_mapping_folder_id_fk_folders_id"
FOREIGN KEY ("folder_id") REFERENCES "{{folders}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
CREATE INDEX "{{prefix}}groups_folders_mapping_group_id_idx" ON "{{groups_folders_mapping}}" ("group_id");
ALTER TABLE "{{groups_folders_mapping}}" ADD CONSTRAINT "{{prefix}}groups_folders_mapping_group_id_fk_groups_id"
FOREIGN KEY ("group_id") REFERENCES "{{groups}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
CREATE INDEX "{{prefix}}groups_updated_at_idx" ON "{{groups}}" ("updated_at");
`
pgsqlV17DownSQL = `DROP TABLE "{{users_groups_mapping}}" CASCADE;
DROP TABLE "{{groups_folders_mapping}}" CASCADE;
DROP TABLE "{{groups}}" CASCADE;
DROP INDEX "{{prefix}}users_folders_mapping_folder_id_idx";
DROP INDEX "{{prefix}}users_folders_mapping_user_id_idx";
ALTER TABLE "{{users_folders_mapping}}" DROP CONSTRAINT "{{prefix}}unique_user_folder_mapping";
ALTER TABLE "{{users_folders_mapping}}" RENAME TO "{{folders_mapping}}";
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "{{prefix}}unique_mapping" UNIQUE ("user_id", "folder_id");
CREATE INDEX "{{prefix}}folders_mapping_folder_id_idx" ON "{{folders_mapping}}" ("folder_id");
CREATE INDEX "{{prefix}}folders_mapping_user_id_idx" ON "{{folders_mapping}}" ("user_id");
`
pgsqlV19SQL = `CREATE TABLE "{{shared_sessions}}" ("key" varchar(128) NOT NULL PRIMARY KEY,
"data" text NOT NULL, "type" integer NOT NULL, "timestamp" bigint NOT NULL);
CREATE INDEX "{{prefix}}shared_sessions_type_idx" ON "{{shared_sessions}}" ("type");
CREATE INDEX "{{prefix}}shared_sessions_timestamp_idx" ON "{{shared_sessions}}" ("timestamp");`
pgsqlV19DownSQL = `DROP TABLE "{{shared_sessions}}" CASCADE;`
)
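The PostgreSQL constants mirror the MySQL ones almost statement for statement; the duplication mostly comes down to identifier quoting and type spellings (backticks, AUTO_INCREMENT and longtext versus double quotes, serial/bigserial and text). An illustrative helper, not used by SFTPGo:

// quoteIdent returns an identifier quoted for the given driver.
func quoteIdent(driver, ident string) string {
	if driver == "mysql" {
		return "`" + ident + "`"
	}
	return `"` + ident + `"` // PostgreSQL and CockroachDB
}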
// PGSQLProvider auth provider for PostgreSQL database
// PGSQLProvider defines the auth provider for PostgreSQL database
type PGSQLProvider struct {
dbHandle *sql.DB
}
@@ -81,7 +204,7 @@ func initializePGSQLProvider() error {
dbHandle.SetConnMaxLifetime(240 * time.Second)
provider = &PGSQLProvider{dbHandle: dbHandle}
} else {
providerLog(logger.LevelWarn, "error creating postgres database handler, connection string: %#v, error: %v",
providerLog(logger.LevelError, "error creating postgres database handler, connection string: %#v, error: %v",
getPGSQLConnectionString(true), err)
}
return err
@@ -91,11 +214,17 @@ func getPGSQLConnectionString(redactedPwd bool) string {
var connectionString string
if config.ConnectionString == "" {
password := config.Password
if redactedPwd {
if redactedPwd && password != "" {
password = "[redacted]"
}
connectionString = fmt.Sprintf("host='%v' port=%v dbname='%v' user='%v' password='%v' sslmode=%v connect_timeout=10",
config.Host, config.Port, config.Name, config.Username, password, getSSLMode())
if config.RootCert != "" {
connectionString += fmt.Sprintf(" sslrootcert='%v'", config.RootCert)
}
if config.ClientCert != "" && config.ClientKey != "" {
connectionString += fmt.Sprintf(" sslcert='%v' sslkey='%v'", config.ClientCert, config.ClientKey)
}
} else {
connectionString = config.ConnectionString
}
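Unlike the URL-style MySQL DSN, lib/pq takes space-separated keyword/value pairs, and the new TLS options are appended as the standard libpq parameters sslrootcert, sslcert and sslkey. A hypothetical assembled string (paths and credentials invented; sql.Open is lazy, so this runs without a server):

package main

import (
	"database/sql"
	"fmt"

	_ "github.com/lib/pq"
)

func main() {
	dsn := "host='127.0.0.1' port=5432 dbname='sftpgo' user='sftpgo' password='secret'" +
		" sslmode=verify-full connect_timeout=10" +
		" sslrootcert='/etc/sftpgo/root.crt' sslcert='/etc/sftpgo/client.crt' sslkey='/etc/sftpgo/client.key'"
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		panic(err)
	}
	fmt.Println("DSN accepted:", db != nil)
}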
@@ -114,22 +243,34 @@ func (p *PGSQLProvider) validateUserAndTLSCert(username, protocol string, tlsCer
return sqlCommonValidateUserAndTLSCertificate(username, protocol, tlsCert, p.dbHandle)
}
func (p *PGSQLProvider) validateUserAndPubKey(username string, publicKey []byte) (User, string, error) {
return sqlCommonValidateUserAndPubKey(username, publicKey, p.dbHandle)
func (p *PGSQLProvider) validateUserAndPubKey(username string, publicKey []byte, isSSHCert bool) (User, string, error) {
return sqlCommonValidateUserAndPubKey(username, publicKey, isSSHCert, p.dbHandle)
}
func (p *PGSQLProvider) updateTransferQuota(username string, uploadSize, downloadSize int64, reset bool) error {
return sqlCommonUpdateTransferQuota(username, uploadSize, downloadSize, reset, p.dbHandle)
}
func (p *PGSQLProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
return sqlCommonUpdateQuota(username, filesAdd, sizeAdd, reset, p.dbHandle)
}
func (p *PGSQLProvider) getUsedQuota(username string) (int, int64, error) {
func (p *PGSQLProvider) getUsedQuota(username string) (int, int64, int64, int64, error) {
return sqlCommonGetUsedQuota(username, p.dbHandle)
}
func (p *PGSQLProvider) setUpdatedAt(username string) {
sqlCommonSetUpdatedAt(username, p.dbHandle)
}
func (p *PGSQLProvider) updateLastLogin(username string) error {
return sqlCommonUpdateLastLogin(username, p.dbHandle)
}
func (p *PGSQLProvider) updateAdminLastLogin(username string) error {
return sqlCommonUpdateAdminLastLogin(username, p.dbHandle)
}
func (p *PGSQLProvider) userExists(username string) (User, error) {
return sqlCommonGetUserByUsername(username, p.dbHandle)
}
@@ -142,24 +283,36 @@ func (p *PGSQLProvider) updateUser(user *User) error {
return sqlCommonUpdateUser(user, p.dbHandle)
}
func (p *PGSQLProvider) deleteUser(user *User) error {
func (p *PGSQLProvider) deleteUser(user User) error {
return sqlCommonDeleteUser(user, p.dbHandle)
}
func (p *PGSQLProvider) updateUserPassword(username, password string) error {
return sqlCommonUpdateUserPassword(username, password, p.dbHandle)
}
func (p *PGSQLProvider) dumpUsers() ([]User, error) {
return sqlCommonDumpUsers(p.dbHandle)
}
func (p *PGSQLProvider) getRecentlyUpdatedUsers(after int64) ([]User, error) {
return sqlCommonGetRecentlyUpdatedUsers(after, p.dbHandle)
}
func (p *PGSQLProvider) getUsers(limit int, offset int, order string) ([]User, error) {
return sqlCommonGetUsers(limit, offset, order, p.dbHandle)
}
func (p *PGSQLProvider) getUsersForQuotaCheck(toFetch map[string]bool) ([]User, error) {
return sqlCommonGetUsersForQuotaCheck(toFetch, p.dbHandle)
}
func (p *PGSQLProvider) dumpFolders() ([]vfs.BaseVirtualFolder, error) {
return sqlCommonDumpFolders(p.dbHandle)
}
func (p *PGSQLProvider) getFolders(limit, offset int, order string) ([]vfs.BaseVirtualFolder, error) {
return sqlCommonGetFolders(limit, offset, order, p.dbHandle)
func (p *PGSQLProvider) getFolders(limit, offset int, order string, minimal bool) ([]vfs.BaseVirtualFolder, error) {
return sqlCommonGetFolders(limit, offset, order, minimal, p.dbHandle)
}
func (p *PGSQLProvider) getFolderByName(name string) (vfs.BaseVirtualFolder, error) {
@@ -176,7 +329,7 @@ func (p *PGSQLProvider) updateFolder(folder *vfs.BaseVirtualFolder) error {
return sqlCommonUpdateFolder(folder, p.dbHandle)
}
func (p *PGSQLProvider) deleteFolder(folder *vfs.BaseVirtualFolder) error {
func (p *PGSQLProvider) deleteFolder(folder vfs.BaseVirtualFolder) error {
return sqlCommonDeleteFolder(folder, p.dbHandle)
}
@@ -188,6 +341,38 @@ func (p *PGSQLProvider) getUsedFolderQuota(name string) (int, int64, error) {
return sqlCommonGetFolderUsedQuota(name, p.dbHandle)
}
func (p *PGSQLProvider) getGroups(limit, offset int, order string, minimal bool) ([]Group, error) {
return sqlCommonGetGroups(limit, offset, order, minimal, p.dbHandle)
}
func (p *PGSQLProvider) getGroupsWithNames(names []string) ([]Group, error) {
return sqlCommonGetGroupsWithNames(names, p.dbHandle)
}
func (p *PGSQLProvider) getUsersInGroups(names []string) ([]string, error) {
return sqlCommonGetUsersInGroups(names, p.dbHandle)
}
func (p *PGSQLProvider) groupExists(name string) (Group, error) {
return sqlCommonGetGroupByName(name, p.dbHandle)
}
func (p *PGSQLProvider) addGroup(group *Group) error {
return sqlCommonAddGroup(group, p.dbHandle)
}
func (p *PGSQLProvider) updateGroup(group *Group) error {
return sqlCommonUpdateGroup(group, p.dbHandle)
}
func (p *PGSQLProvider) deleteGroup(group Group) error {
return sqlCommonDeleteGroup(group, p.dbHandle)
}
func (p *PGSQLProvider) dumpGroups() ([]Group, error) {
return sqlCommonDumpGroups(p.dbHandle)
}
func (p *PGSQLProvider) adminExists(username string) (Admin, error) {
return sqlCommonGetAdminByUsername(username, p.dbHandle)
}
@@ -200,7 +385,7 @@ func (p *PGSQLProvider) updateAdmin(admin *Admin) error {
return sqlCommonUpdateAdmin(admin, p.dbHandle)
}
func (p *PGSQLProvider) deleteAdmin(admin *Admin) error {
func (p *PGSQLProvider) deleteAdmin(admin Admin) error {
return sqlCommonDeleteAdmin(admin, p.dbHandle)
}
@@ -216,6 +401,130 @@ func (p *PGSQLProvider) validateAdminAndPass(username, password, ip string) (Adm
return sqlCommonValidateAdminAndPass(username, password, ip, p.dbHandle)
}
func (p *PGSQLProvider) apiKeyExists(keyID string) (APIKey, error) {
return sqlCommonGetAPIKeyByID(keyID, p.dbHandle)
}
func (p *PGSQLProvider) addAPIKey(apiKey *APIKey) error {
return sqlCommonAddAPIKey(apiKey, p.dbHandle)
}
func (p *PGSQLProvider) updateAPIKey(apiKey *APIKey) error {
return sqlCommonUpdateAPIKey(apiKey, p.dbHandle)
}
func (p *PGSQLProvider) deleteAPIKey(apiKey APIKey) error {
return sqlCommonDeleteAPIKey(apiKey, p.dbHandle)
}
func (p *PGSQLProvider) getAPIKeys(limit int, offset int, order string) ([]APIKey, error) {
return sqlCommonGetAPIKeys(limit, offset, order, p.dbHandle)
}
func (p *PGSQLProvider) dumpAPIKeys() ([]APIKey, error) {
return sqlCommonDumpAPIKeys(p.dbHandle)
}
func (p *PGSQLProvider) updateAPIKeyLastUse(keyID string) error {
return sqlCommonUpdateAPIKeyLastUse(keyID, p.dbHandle)
}
func (p *PGSQLProvider) shareExists(shareID, username string) (Share, error) {
return sqlCommonGetShareByID(shareID, username, p.dbHandle)
}
func (p *PGSQLProvider) addShare(share *Share) error {
return sqlCommonAddShare(share, p.dbHandle)
}
func (p *PGSQLProvider) updateShare(share *Share) error {
return sqlCommonUpdateShare(share, p.dbHandle)
}
func (p *PGSQLProvider) deleteShare(share Share) error {
return sqlCommonDeleteShare(share, p.dbHandle)
}
func (p *PGSQLProvider) getShares(limit int, offset int, order, username string) ([]Share, error) {
return sqlCommonGetShares(limit, offset, order, username, p.dbHandle)
}
func (p *PGSQLProvider) dumpShares() ([]Share, error) {
return sqlCommonDumpShares(p.dbHandle)
}
func (p *PGSQLProvider) updateShareLastUse(shareID string, numTokens int) error {
return sqlCommonUpdateShareLastUse(shareID, numTokens, p.dbHandle)
}
func (p *PGSQLProvider) getDefenderHosts(from int64, limit int) ([]DefenderEntry, error) {
return sqlCommonGetDefenderHosts(from, limit, p.dbHandle)
}
func (p *PGSQLProvider) getDefenderHostByIP(ip string, from int64) (DefenderEntry, error) {
return sqlCommonGetDefenderHostByIP(ip, from, p.dbHandle)
}
func (p *PGSQLProvider) isDefenderHostBanned(ip string) (DefenderEntry, error) {
return sqlCommonIsDefenderHostBanned(ip, p.dbHandle)
}
func (p *PGSQLProvider) updateDefenderBanTime(ip string, minutes int) error {
return sqlCommonDefenderIncrementBanTime(ip, minutes, p.dbHandle)
}
func (p *PGSQLProvider) deleteDefenderHost(ip string) error {
return sqlCommonDeleteDefenderHost(ip, p.dbHandle)
}
func (p *PGSQLProvider) addDefenderEvent(ip string, score int) error {
return sqlCommonAddDefenderHostAndEvent(ip, score, p.dbHandle)
}
func (p *PGSQLProvider) setDefenderBanTime(ip string, banTime int64) error {
return sqlCommonSetDefenderBanTime(ip, banTime, p.dbHandle)
}
func (p *PGSQLProvider) cleanupDefender(from int64) error {
return sqlCommonDefenderCleanup(from, p.dbHandle)
}
func (p *PGSQLProvider) addActiveTransfer(transfer ActiveTransfer) error {
return sqlCommonAddActiveTransfer(transfer, p.dbHandle)
}
func (p *PGSQLProvider) updateActiveTransferSizes(ulSize, dlSize, transferID int64, connectionID string) error {
return sqlCommonUpdateActiveTransferSizes(ulSize, dlSize, transferID, connectionID, p.dbHandle)
}
func (p *PGSQLProvider) removeActiveTransfer(transferID int64, connectionID string) error {
return sqlCommonRemoveActiveTransfer(transferID, connectionID, p.dbHandle)
}
func (p *PGSQLProvider) cleanupActiveTransfers(before time.Time) error {
return sqlCommonCleanupActiveTransfers(before, p.dbHandle)
}
func (p *PGSQLProvider) getActiveTransfers(from time.Time) ([]ActiveTransfer, error) {
return sqlCommonGetActiveTransfers(from, p.dbHandle)
}
func (p *PGSQLProvider) addSharedSession(session Session) error {
return sqlCommonAddSession(session, p.dbHandle)
}
func (p *PGSQLProvider) deleteSharedSession(key string) error {
return sqlCommonDeleteSession(key, p.dbHandle)
}
func (p *PGSQLProvider) getSharedSession(key string) (Session, error) {
return sqlCommonGetSession(key, p.dbHandle)
}
func (p *PGSQLProvider) cleanupSharedSessions(sessionType SessionType, before int64) error {
return sqlCommonCleanupSessions(sessionType, before, p.dbHandle)
}
func (p *PGSQLProvider) close() error {
return p.dbHandle.Close()
}
@@ -230,23 +539,32 @@ func (p *PGSQLProvider) initializeDatabase() error {
if err == nil && dbVersion.Version > 0 {
return ErrNoInitRequired
}
if errors.Is(err, sql.ErrNoRows) {
return errSchemaVersionEmpty
}
logger.InfoToConsole("creating initial database schema, version 15")
providerLog(logger.LevelInfo, "creating initial database schema, version 15")
initialSQL := strings.ReplaceAll(pgsqlInitial, "{{schema_version}}", sqlTableSchemaVersion)
initialSQL = strings.ReplaceAll(initialSQL, "{{admins}}", sqlTableAdmins)
initialSQL = strings.ReplaceAll(initialSQL, "{{folders}}", sqlTableFolders)
initialSQL = strings.ReplaceAll(initialSQL, "{{users}}", sqlTableUsers)
initialSQL = strings.ReplaceAll(initialSQL, "{{folders_mapping}}", sqlTableFoldersMapping)
initialSQL = strings.ReplaceAll(initialSQL, "{{api_keys}}", sqlTableAPIKeys)
initialSQL = strings.ReplaceAll(initialSQL, "{{shares}}", sqlTableShares)
initialSQL = strings.ReplaceAll(initialSQL, "{{defender_events}}", sqlTableDefenderEvents)
initialSQL = strings.ReplaceAll(initialSQL, "{{defender_hosts}}", sqlTableDefenderHosts)
initialSQL = strings.ReplaceAll(initialSQL, "{{prefix}}", config.SQLTablesPrefix)
if config.Driver == CockroachDataProviderName {
// Cockroach does not support deferrable constraint validation, we don't need it,
// Cockroach does not support deferrable constraint validation, we don't need them,
// we keep these definitions for the PostgreSQL driver to avoid changes for users
// upgrading from old SFTPGo versions
initialSQL = strings.ReplaceAll(initialSQL, "DEFERRABLE INITIALLY DEFERRED", "")
}
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, []string{initialSQL}, 8)
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, []string{initialSQL}, 15, true)
}
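One backend difference worth noting between the two initializeDatabase implementations: the MySQL provider splits the script on ";" so the driver executes one statement per call, while the PostgreSQL provider passes the whole script as a single element, letting lib/pq run it as one multi-statement simple query. The two shapes, side by side (sketch of the calls shown above):

// MySQL: one statement per Exec
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, strings.Split(initialSQL, ";"), 15, true)
// PostgreSQL: the whole script at once
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, []string{initialSQL}, 15, true)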
func (p *PGSQLProvider) migrateDatabase() error {
func (p *PGSQLProvider) migrateDatabase() error { //nolint:dupl
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
if err != nil {
return err
@@ -256,18 +574,22 @@ func (p *PGSQLProvider) migrateDatabase() error {
case version == sqlDatabaseVersion:
providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", version)
return ErrNoInitRequired
case version < 8:
case version < 15:
err = fmt.Errorf("database version %v is too old, please see the upgrading docs", version)
providerLog(logger.LevelError, "%v", err)
logger.ErrorToConsole("%v", err)
return err
case version == 8:
return updatePGSQLDatabaseFromV8(p.dbHandle)
case version == 9:
return updatePGSQLDatabaseFromV9(p.dbHandle)
case version == 15:
return updatePGSQLDatabaseFromV15(p.dbHandle)
case version == 16:
return updatePGSQLDatabaseFromV16(p.dbHandle)
case version == 17:
return updatePGSQLDatabaseFromV17(p.dbHandle)
case version == 18:
return updatePGSQLDatabaseFromV18(p.dbHandle)
default:
if version > sqlDatabaseVersion {
providerLog(logger.LevelWarn, "database version %v is newer than the supported one: %v", version,
providerLog(logger.LevelError, "database version %v is newer than the supported one: %v", version,
sqlDatabaseVersion)
logger.WarnToConsole("database version %v is newer than the supported one: %v", version,
sqlDatabaseVersion)
@@ -287,59 +609,171 @@ func (p *PGSQLProvider) revertDatabase(targetVersion int) error {
}
switch dbVersion.Version {
case 9:
return downgradePGSQLDatabaseFromV9(p.dbHandle)
case 10:
return downgradePGSQLDatabaseFromV10(p.dbHandle)
case 16:
return downgradePGSQLDatabaseFromV16(p.dbHandle)
case 17:
return downgradePGSQLDatabaseFromV17(p.dbHandle)
case 18:
return downgradePGSQLDatabaseFromV18(p.dbHandle)
case 19:
return downgradePGSQLDatabaseFromV19(p.dbHandle)
default:
return fmt.Errorf("database version not handled: %v", dbVersion.Version)
}
}
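The updatePGSQLDatabaseFromVN/downgradePGSQLDatabaseFromVN helpers around these switches form a chain: each one applies a single N -> N+1 (or N -> N-1) step and then delegates to the helper for the next version, so dispatching once on the detected version replays every remaining step in order. A minimal, self-contained sketch of the same pattern, using hypothetical step functions rather than the SFTPGo API:

package main

import "fmt"

const targetVersion = 19

// one migration step per schema version bump
var steps = map[int]func() error{
	15: func() error { fmt.Println("applying 15 -> 16"); return nil },
	16: func() error { fmt.Println("applying 16 -> 17"); return nil },
	17: func() error { fmt.Println("applying 17 -> 18"); return nil },
	18: func() error { fmt.Println("applying 18 -> 19"); return nil },
}

// migrate replays every step from the detected version up to targetVersion,
// mirroring the chained updateXXXDatabaseFromVN helpers.
func migrate(current int) error {
	for v := current; v < targetVersion; v++ {
		step, ok := steps[v]
		if !ok {
			return fmt.Errorf("database version not handled: %v", v)
		}
		if err := step(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := migrate(16); err != nil {
		fmt.Println("migration failed:", err)
	}
}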
func updatePGSQLDatabaseFromV8(dbHandle *sql.DB) error {
if err := updatePGSQLDatabaseFrom8To9(dbHandle); err != nil {
func (p *PGSQLProvider) resetDatabase() error {
sql := sqlReplaceAll(pgsqlResetSQL)
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, []string{sql}, 0, false)
}
func updatePGSQLDatabaseFromV15(dbHandle *sql.DB) error {
if err := updatePGSQLDatabaseFrom15To16(dbHandle); err != nil {
return err
}
return updatePGSQLDatabaseFromV9(dbHandle)
return updatePGSQLDatabaseFromV16(dbHandle)
}
func updatePGSQLDatabaseFromV9(dbHandle *sql.DB) error {
return updatePGSQLDatabaseFrom9To10(dbHandle)
}
func downgradePGSQLDatabaseFromV9(dbHandle *sql.DB) error {
return downgradePGSQLDatabaseFrom9To8(dbHandle)
}
func downgradePGSQLDatabaseFromV10(dbHandle *sql.DB) error {
if err := downgradePGSQLDatabaseFrom10To9(dbHandle); err != nil {
func updatePGSQLDatabaseFromV16(dbHandle *sql.DB) error {
if err := updatePGSQLDatabaseFrom16To17(dbHandle); err != nil {
return err
}
return downgradePGSQLDatabaseFromV9(dbHandle)
return updatePGSQLDatabaseFromV17(dbHandle)
}
func updatePGSQLDatabaseFrom8To9(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 8 -> 9")
providerLog(logger.LevelInfo, "updating database version: 8 -> 9")
sql := strings.ReplaceAll(pgsqlV9SQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
func updatePGSQLDatabaseFromV17(dbHandle *sql.DB) error {
if err := updatePGSQLDatabaseFrom17To18(dbHandle); err != nil {
return err
}
return updatePGSQLDatabaseFromV18(dbHandle)
}
func updatePGSQLDatabaseFromV18(dbHandle *sql.DB) error {
return updatePGSQLDatabaseFrom18To19(dbHandle)
}
func downgradePGSQLDatabaseFromV16(dbHandle *sql.DB) error {
return downgradePGSQLDatabaseFrom16To15(dbHandle)
}
func downgradePGSQLDatabaseFromV17(dbHandle *sql.DB) error {
if err := downgradePGSQLDatabaseFrom17To16(dbHandle); err != nil {
return err
}
return downgradePGSQLDatabaseFromV16(dbHandle)
}
func downgradePGSQLDatabaseFromV18(dbHandle *sql.DB) error {
if err := downgradePGSQLDatabaseFrom18To17(dbHandle); err != nil {
return err
}
return downgradePGSQLDatabaseFromV17(dbHandle)
}
func downgradePGSQLDatabaseFromV19(dbHandle *sql.DB) error {
if err := downgradePGSQLDatabaseFrom19To18(dbHandle); err != nil {
return err
}
return downgradePGSQLDatabaseFromV18(dbHandle)
}
func updatePGSQLDatabaseFrom15To16(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 15 -> 16")
providerLog(logger.LevelInfo, "updating database version: 15 -> 16")
sql := strings.ReplaceAll(pgsqlV16SQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{active_transfers}}", sqlTableActiveTransfers)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
if config.Driver == CockroachDataProviderName {
// Cockroach does not allow running this schema migration within a transaction
ctx, cancel := context.WithTimeout(context.Background(), longSQLQueryTimeout)
defer cancel()
for _, q := range strings.Split(sql, ";") {
if strings.TrimSpace(q) == "" {
continue
}
_, err := dbHandle.ExecContext(ctx, q)
if err != nil {
return err
}
}
return sqlCommonUpdateDatabaseVersion(ctx, dbHandle, 16)
}
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 16, true)
}
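For CockroachDB the 15 -> 16 script cannot run inside a transaction, so the branch above splits it on ";" and executes each statement with its own ExecContext. A standalone sketch of that split-and-execute loop, assuming a generic *sql.DB and statements with no embedded semicolons (true for these scripts); schema-version bookkeeping is left to the caller:

package migrations

import (
	"context"
	"database/sql"
	"fmt"
	"strings"
	"time"
)

// execStatements runs a multi-statement migration script one statement at a
// time, outside a transaction, mirroring the CockroachDB branch above.
func execStatements(db *sql.DB, script string) error {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	for _, q := range strings.Split(script, ";") {
		q = strings.TrimSpace(q)
		if q == "" {
			continue
		}
		if _, err := db.ExecContext(ctx, q); err != nil {
			return fmt.Errorf("statement %q failed: %w", q, err)
		}
	}
	return nil
}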
func updatePGSQLDatabaseFrom16To17(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 16 -> 17")
providerLog(logger.LevelInfo, "updating database version: 16 -> 17")
sql := pgsqlV17SQL
if config.Driver == CockroachDataProviderName {
sql = strings.ReplaceAll(sql, `ALTER TABLE "{{folders_mapping}}" DROP CONSTRAINT "{{prefix}}unique_mapping";`,
`DROP INDEX "{{prefix}}unique_mapping" CASCADE;`)
}
sql = strings.ReplaceAll(sql, "{{groups}}", sqlTableGroups)
sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 9)
sql = strings.ReplaceAll(sql, "{{folders_mapping}}", sqlTableFoldersMapping)
sql = strings.ReplaceAll(sql, "{{users_folders_mapping}}", sqlTableUsersFoldersMapping)
sql = strings.ReplaceAll(sql, "{{users_groups_mapping}}", sqlTableUsersGroupsMapping)
sql = strings.ReplaceAll(sql, "{{groups_folders_mapping}}", sqlTableGroupsFoldersMapping)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 17, true)
}
func downgradePGSQLDatabaseFrom9To8(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 9 -> 8")
providerLog(logger.LevelInfo, "downgrading database version: 9 -> 8")
sql := strings.ReplaceAll(pgsqlV9DownSQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
func updatePGSQLDatabaseFrom17To18(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 17 -> 18")
providerLog(logger.LevelInfo, "updating database version: 17 -> 18")
if err := importGCSCredentials(); err != nil {
return err
}
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, nil, 18, true)
}
func updatePGSQLDatabaseFrom18To19(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 18 -> 19")
providerLog(logger.LevelInfo, "updating database version: 18 -> 19")
sql := strings.ReplaceAll(pgsqlV19SQL, "{{shared_sessions}}", sqlTableSharedSessions)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 19, true)
}
func downgradePGSQLDatabaseFrom16To15(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 16 -> 15")
providerLog(logger.LevelInfo, "downgrading database version: 16 -> 15")
sql := strings.ReplaceAll(pgsqlV16DownSQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{active_transfers}}", sqlTableActiveTransfers)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 15, false)
}
func downgradePGSQLDatabaseFrom17To16(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 17 -> 16")
providerLog(logger.LevelInfo, "downgrading database version: 17 -> 16")
sql := pgsqlV17DownSQL
if config.Driver == CockroachDataProviderName {
sql = strings.ReplaceAll(sql, `ALTER TABLE "{{users_folders_mapping}}" DROP CONSTRAINT "{{prefix}}unique_user_folder_mapping";`,
`DROP INDEX "{{prefix}}unique_user_folder_mapping" CASCADE;`)
}
sql = strings.ReplaceAll(sql, "{{groups}}", sqlTableGroups)
sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 8)
sql = strings.ReplaceAll(sql, "{{folders_mapping}}", sqlTableFoldersMapping)
sql = strings.ReplaceAll(sql, "{{users_folders_mapping}}", sqlTableUsersFoldersMapping)
sql = strings.ReplaceAll(sql, "{{users_groups_mapping}}", sqlTableUsersGroupsMapping)
sql = strings.ReplaceAll(sql, "{{groups_folders_mapping}}", sqlTableGroupsFoldersMapping)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 16, false)
}
func updatePGSQLDatabaseFrom9To10(dbHandle *sql.DB) error {
return sqlCommonUpdateDatabaseFrom9To10(dbHandle)
func downgradePGSQLDatabaseFrom18To17(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 18 -> 17")
providerLog(logger.LevelInfo, "downgrading database version: 18 -> 17")
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, nil, 17, false)
}
func downgradePGSQLDatabaseFrom10To9(dbHandle *sql.DB) error {
return sqlCommonDowngradeDatabaseFrom10To9(dbHandle)
func downgradePGSQLDatabaseFrom19To18(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 19 -> 18")
providerLog(logger.LevelInfo, "downgrading database version: 19 -> 18")
sql := strings.ReplaceAll(pgsqlV19DownSQL, "{{shared_sessions}}", sqlTableSharedSessions)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 18, false)
}
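All of these scripts keep table names as {{name}} placeholders plus a {{prefix}} for the configured table prefix; sqlReplaceAll and the strings.ReplaceAll chains above expand them before execution. A small sketch of the expansion step, with a hypothetical renderSQL helper standing in for the SFTPGo function:

package main

import (
	"fmt"
	"strings"
)

// renderSQL expands the {{name}} placeholders used by the migration scripts.
func renderSQL(tmpl, prefix string, tables map[string]string) string {
	out := tmpl
	for name, table := range tables {
		out = strings.ReplaceAll(out, "{{"+name+"}}", table)
	}
	return strings.ReplaceAll(out, "{{prefix}}", prefix)
}

func main() {
	tmpl := `CREATE INDEX "{{prefix}}shared_sessions_type_idx" ON "{{shared_sessions}}" ("type");`
	fmt.Println(renderSQL(tmpl, "sftpgo_", map[string]string{
		"shared_sessions": "sftpgo_shared_sessions",
	}))
}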


@@ -1,3 +1,18 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//go:build nopgsql
// +build nopgsql
package dataprovider
@@ -5,7 +20,7 @@ package dataprovider
import (
"errors"
"github.com/drakkan/sftpgo/version"
"github.com/drakkan/sftpgo/v2/version"
)
func init() {


@@ -1,10 +1,24 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package dataprovider
import (
"sync"
"time"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/v2/logger"
)
var delayedQuotaUpdater quotaUpdater
@@ -18,18 +32,25 @@ type quotaObject struct {
files int
}
type transferQuotaObject struct {
ulSize int64
dlSize int64
}
type quotaUpdater struct {
paramsMutex sync.RWMutex
waitTime time.Duration
sync.RWMutex
pendingUserQuotaUpdates map[string]quotaObject
pendingFolderQuotaUpdates map[string]quotaObject
pendingUserQuotaUpdates map[string]quotaObject
pendingFolderQuotaUpdates map[string]quotaObject
pendingTransferQuotaUpdates map[string]transferQuotaObject
}
func newQuotaUpdater() quotaUpdater {
return quotaUpdater{
pendingUserQuotaUpdates: make(map[string]quotaObject),
pendingFolderQuotaUpdates: make(map[string]quotaObject),
pendingUserQuotaUpdates: make(map[string]quotaObject),
pendingFolderQuotaUpdates: make(map[string]quotaObject),
pendingTransferQuotaUpdates: make(map[string]transferQuotaObject),
}
}
@@ -50,6 +71,7 @@ func (q *quotaUpdater) loop() {
providerLog(logger.LevelDebug, "delayed quota update check start")
q.storeUsersQuota()
q.storeFoldersQuota()
q.storeUsersTransferQuota()
providerLog(logger.LevelDebug, "delayed quota update check end")
waitTime = q.getWaitTime()
}
@@ -130,6 +152,36 @@ func (q *quotaUpdater) getFolderPendingQuota(name string) (int, int64) {
return obj.files, obj.size
}
func (q *quotaUpdater) resetUserTransferQuota(username string) {
q.Lock()
defer q.Unlock()
delete(q.pendingTransferQuotaUpdates, username)
}
func (q *quotaUpdater) updateUserTransferQuota(username string, ulSize, dlSize int64) {
q.Lock()
defer q.Unlock()
obj := q.pendingTransferQuotaUpdates[username]
obj.ulSize += ulSize
obj.dlSize += dlSize
if obj.ulSize == 0 && obj.dlSize == 0 {
delete(q.pendingTransferQuotaUpdates, username)
return
}
q.pendingTransferQuotaUpdates[username] = obj
}
func (q *quotaUpdater) getUserPendingTransferQuota(username string) (int64, int64) {
q.RLock()
defer q.RUnlock()
obj := q.pendingTransferQuotaUpdates[username]
return obj.ulSize, obj.dlSize
}
func (q *quotaUpdater) getUsernames() []string {
q.RLock()
defer q.RUnlock()
@@ -154,6 +206,18 @@ func (q *quotaUpdater) getFoldernames() []string {
return result
}
func (q *quotaUpdater) getTransferQuotaUsernames() []string {
q.RLock()
defer q.RUnlock()
result := make([]string, 0, len(q.pendingTransferQuotaUpdates))
for username := range q.pendingTransferQuotaUpdates {
result = append(result, username)
}
return result
}
func (q *quotaUpdater) storeUsersQuota() {
for _, username := range q.getUsernames() {
files, size := q.getUserPendingQuota(username)
@@ -181,3 +245,17 @@ func (q *quotaUpdater) storeFoldersQuota() {
}
}
}
func (q *quotaUpdater) storeUsersTransferQuota() {
for _, username := range q.getTransferQuotaUsernames() {
ulSize, dlSize := q.getUserPendingTransferQuota(username)
if ulSize != 0 || dlSize != 0 {
err := provider.updateTransferQuota(username, ulSize, dlSize, false)
if err != nil {
providerLog(logger.LevelWarn, "unable to update transfer quota delayed for user %#v: %v", username, err)
continue
}
q.updateUserTransferQuota(username, -ulSize, -dlSize)
}
}
}
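storeUsersTransferQuota persists each pending delta and then calls updateUserTransferQuota with the negated amounts instead of deleting the entry outright, so any transfer recorded while the store was in flight keeps its contribution for the next cycle. A compact sketch of that accumulate/flush/compensate pattern, with a hypothetical store callback standing in for provider.updateTransferQuota:

package main

import (
	"fmt"
	"sync"
)

// delta accumulates pending upload/download sizes for one user.
type delta struct{ ul, dl int64 }

type batcher struct {
	mu      sync.Mutex
	pending map[string]delta
}

// add records a transfer; deltas that cancel out are dropped, matching
// updateUserTransferQuota above.
func (b *batcher) add(user string, ul, dl int64) {
	b.mu.Lock()
	defer b.mu.Unlock()
	d := b.pending[user]
	d.ul += ul
	d.dl += dl
	if d.ul == 0 && d.dl == 0 {
		delete(b.pending, user)
		return
	}
	b.pending[user] = d
}

// flush persists a snapshot per user, then subtracts what was stored so any
// concurrently recorded transfers survive for the next cycle.
func (b *batcher) flush(store func(user string, ul, dl int64) error) {
	b.mu.Lock()
	users := make([]string, 0, len(b.pending))
	for u := range b.pending {
		users = append(users, u)
	}
	b.mu.Unlock()
	for _, u := range users {
		b.mu.Lock()
		d := b.pending[u]
		b.mu.Unlock()
		if d.ul == 0 && d.dl == 0 {
			continue
		}
		if err := store(u, d.ul, d.dl); err != nil {
			continue // retry on the next cycle
		}
		b.add(u, -d.ul, -d.dl)
	}
}

func main() {
	b := &batcher{pending: make(map[string]delta)}
	b.add("alice", 1024, 0)
	b.flush(func(user string, ul, dl int64) error {
		fmt.Printf("persist %s: ul=%d dl=%d\n", user, ul, dl)
		return nil
	})
}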

dataprovider/scheduler.go

@@ -0,0 +1,110 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package dataprovider
import (
"fmt"
"sync/atomic"
"time"
"github.com/robfig/cron/v3"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/metric"
"github.com/drakkan/sftpgo/v2/util"
)
var (
scheduler *cron.Cron
lastCachesUpdate int64
// used for bolt and memory providers, so we avoid iterating all users
// to find recently modified ones
lastUserUpdate int64
)
func stopScheduler() {
if scheduler != nil {
scheduler.Stop()
scheduler = nil
}
}
func startScheduler() error {
stopScheduler()
scheduler = cron.New()
_, err := scheduler.AddFunc("@every 30s", checkDataprovider)
if err != nil {
return fmt.Errorf("unable to schedule dataprovider availability check: %w", err)
}
if config.AutoBackup.Enabled {
spec := fmt.Sprintf("0 %v * * %v", config.AutoBackup.Hour, config.AutoBackup.DayOfWeek)
_, err = scheduler.AddFunc(spec, config.doBackup)
if err != nil {
return fmt.Errorf("unable to schedule auto backup: %w", err)
}
}
err = addScheduledCacheUpdates()
if err != nil {
return err
}
scheduler.Start()
return nil
}
func addScheduledCacheUpdates() error {
lastCachesUpdate = util.GetTimeAsMsSinceEpoch(time.Now())
_, err := scheduler.AddFunc("@every 10m", checkCacheUpdates)
if err != nil {
return fmt.Errorf("unable to schedule cache updates: %w", err)
}
return nil
}
func checkDataprovider() {
err := provider.checkAvailability()
if err != nil {
providerLog(logger.LevelError, "check availability error: %v", err)
}
metric.UpdateDataProviderAvailability(err)
}
func checkCacheUpdates() {
providerLog(logger.LevelDebug, "start caches check, update time %v", util.GetTimeFromMsecSinceEpoch(lastCachesUpdate))
checkTime := util.GetTimeAsMsSinceEpoch(time.Now())
users, err := provider.getRecentlyUpdatedUsers(lastCachesUpdate)
if err != nil {
providerLog(logger.LevelError, "unable to get recently updated users: %v", err)
return
}
for _, user := range users {
providerLog(logger.LevelDebug, "invalidate caches for user %#v", user.Username)
webDAVUsersCache.swap(&user)
cachedPasswords.Remove(user.Username)
}
lastCachesUpdate = checkTime
providerLog(logger.LevelDebug, "end caches check, new update time %v", util.GetTimeFromMsecSinceEpoch(lastCachesUpdate))
}
func setLastUserUpdate() {
atomic.StoreInt64(&lastUserUpdate, util.GetTimeAsMsSinceEpoch(time.Now()))
}
func getLastUserUpdate() int64 {
return atomic.LoadInt64(&lastUserUpdate)
}
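The scheduler is a thin wrapper over github.com/robfig/cron/v3: one "@every 30s" job for the availability check, an optional five-field spec for auto backups, and an "@every 10m" cache refresh. A minimal runnable example of the same API, using a 2-second demo job rather than an SFTPGo spec:

package main

import (
	"fmt"
	"time"

	"github.com/robfig/cron/v3"
)

func main() {
	c := cron.New()
	// "@every" intervals and standard five-field cron specs are both
	// accepted, as in startScheduler above.
	if _, err := c.AddFunc("@every 2s", func() {
		fmt.Println("tick", time.Now().Format("15:04:05"))
	}); err != nil {
		panic(err)
	}
	c.Start()
	defer c.Stop()
	time.Sleep(5 * time.Second)
}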

dataprovider/session.go

@@ -0,0 +1,48 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package dataprovider
import (
"errors"
"fmt"
)
// SessionType defines the supported session types
type SessionType int
// Supported session types
const (
SessionTypeOIDCAuth SessionType = iota + 1
SessionTypeOIDCToken
SessionTypeResetCode
)
// Session defines a shared session persisted in the data provider
type Session struct {
Key string
Data any
Type SessionType
Timestamp int64
}
func (s *Session) validate() error {
if s.Key == "" {
return errors.New("unable to save a session with an empty key")
}
if s.Type < SessionTypeOIDCAuth || s.Type > SessionTypeResetCode {
return fmt.Errorf("invalid session type: %v", s.Type)
}
return nil
}
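A session is only persisted after validate() passes. The sketch below restates the type so it compiles standalone and shows a reset-code session being built and checked; the map payload is an arbitrary example, and timestamps are milliseconds since the epoch as elsewhere in the provider:

package main

import (
	"errors"
	"fmt"
	"time"
)

type SessionType int

const (
	SessionTypeOIDCAuth SessionType = iota + 1
	SessionTypeOIDCToken
	SessionTypeResetCode
)

type Session struct {
	Key       string
	Data      any
	Type      SessionType
	Timestamp int64
}

func (s *Session) validate() error {
	if s.Key == "" {
		return errors.New("unable to save a session with an empty key")
	}
	if s.Type < SessionTypeOIDCAuth || s.Type > SessionTypeResetCode {
		return fmt.Errorf("invalid session type: %v", s.Type)
	}
	return nil
}

func main() {
	s := Session{
		Key:       "reset-code-abc123",
		Data:      map[string]string{"username": "alice"},
		Type:      SessionTypeResetCode,
		Timestamp: time.Now().UnixMilli(),
	}
	if err := s.validate(); err != nil {
		fmt.Println("rejected:", err)
		return
	}
	fmt.Println("session ok, ready to persist")
}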

dataprovider/share.go

@@ -0,0 +1,334 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package dataprovider
import (
"encoding/json"
"fmt"
"net"
"strings"
"time"
"github.com/alexedwards/argon2id"
"golang.org/x/crypto/bcrypt"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
// ShareScope defines the supported share scopes
type ShareScope int
// Supported share scopes
const (
ShareScopeRead ShareScope = iota + 1
ShareScopeWrite
ShareScopeReadWrite
)
const (
redactedPassword = "[**redacted**]"
)
// Share defines files and/or directories shared with external users
type Share struct {
// Database unique identifier
ID int64 `json:"-"`
// Unique ID used to access this object
ShareID string `json:"id"`
Name string `json:"name"`
Description string `json:"description,omitempty"`
Scope ShareScope `json:"scope"`
// Paths to files or directories; for ShareScopeWrite it must be exactly one directory
Paths []string `json:"paths"`
// Username who shared this object
Username string `json:"username"`
CreatedAt int64 `json:"created_at"`
UpdatedAt int64 `json:"updated_at"`
// 0 means never used
LastUseAt int64 `json:"last_use_at,omitempty"`
// ExpiresAt expiration date/time as unix timestamp in milliseconds, 0 means no expiration
ExpiresAt int64 `json:"expires_at,omitempty"`
// Optional password to protect the share
Password string `json:"password"`
// Limit the available access tokens, 0 means no limit
MaxTokens int `json:"max_tokens,omitempty"`
// Used tokens
UsedTokens int `json:"used_tokens,omitempty"`
// Limit the share availability to these IPs/CIDR networks
AllowFrom []string `json:"allow_from,omitempty"`
// set for restores: we don't have to validate the expiration date,
// otherwise we would fail to restore existing shares and would have to
// re-insert all the previous values unmodified
IsRestore bool `json:"-"`
}
// GetScopeAsString returns the share's scope as string.
// Used in web pages
func (s *Share) GetScopeAsString() string {
switch s.Scope {
case ShareScopeWrite:
return "Write"
case ShareScopeReadWrite:
return "Read/Write"
default:
return "Read"
}
}
// IsExpired returns true if the share is expired
func (s *Share) IsExpired() bool {
if s.ExpiresAt > 0 {
return s.ExpiresAt < util.GetTimeAsMsSinceEpoch(time.Now())
}
return false
}
// GetInfoString returns the share's info as a string.
func (s *Share) GetInfoString() string {
var result strings.Builder
if s.ExpiresAt > 0 {
t := util.GetTimeFromMsecSinceEpoch(s.ExpiresAt)
result.WriteString(fmt.Sprintf("Expiration: %v. ", t.Format("2006-01-02 15:04"))) // YYYY-MM-DD HH:MM
}
if s.LastUseAt > 0 {
t := util.GetTimeFromMsecSinceEpoch(s.LastUseAt)
result.WriteString(fmt.Sprintf("Last use: %v. ", t.Format("2006-01-02 15:04")))
}
if s.MaxTokens > 0 {
result.WriteString(fmt.Sprintf("Usage: %v/%v. ", s.UsedTokens, s.MaxTokens))
} else {
result.WriteString(fmt.Sprintf("Used tokens: %v. ", s.UsedTokens))
}
if len(s.AllowFrom) > 0 {
result.WriteString(fmt.Sprintf("Allowed IP/Mask: %v. ", len(s.AllowFrom)))
}
if s.Password != "" {
result.WriteString("Password protected.")
}
return result.String()
}
// GetAllowedFromAsString returns the allowed IPs as a comma-separated string
func (s *Share) GetAllowedFromAsString() string {
return strings.Join(s.AllowFrom, ",")
}
func (s *Share) getACopy() Share {
allowFrom := make([]string, len(s.AllowFrom))
copy(allowFrom, s.AllowFrom)
return Share{
ID: s.ID,
ShareID: s.ShareID,
Name: s.Name,
Description: s.Description,
Scope: s.Scope,
Paths: s.Paths,
Username: s.Username,
CreatedAt: s.CreatedAt,
UpdatedAt: s.UpdatedAt,
LastUseAt: s.LastUseAt,
ExpiresAt: s.ExpiresAt,
Password: s.Password,
MaxTokens: s.MaxTokens,
UsedTokens: s.UsedTokens,
AllowFrom: allowFrom,
}
}
// RenderAsJSON implements the renderer interface used within plugins
func (s *Share) RenderAsJSON(reload bool) ([]byte, error) {
if reload {
share, err := provider.shareExists(s.ShareID, s.Username)
if err != nil {
providerLog(logger.LevelError, "unable to reload share before rendering as json: %v", err)
return nil, err
}
share.HideConfidentialData()
return json.Marshal(share)
}
s.HideConfidentialData()
return json.Marshal(s)
}
// HideConfidentialData hides share confidential data
func (s *Share) HideConfidentialData() {
if s.Password != "" {
s.Password = redactedPassword
}
}
// HasRedactedPassword returns true if this share has a redacted password
func (s *Share) HasRedactedPassword() bool {
return s.Password == redactedPassword
}
func (s *Share) hashPassword() error {
if s.Password != "" && !util.IsStringPrefixInSlice(s.Password, internalHashPwdPrefixes) {
if config.PasswordHashing.Algo == HashingAlgoBcrypt {
hashed, err := bcrypt.GenerateFromPassword([]byte(s.Password), config.PasswordHashing.BcryptOptions.Cost)
if err != nil {
return err
}
s.Password = string(hashed)
} else {
hashed, err := argon2id.CreateHash(s.Password, argon2Params)
if err != nil {
return err
}
s.Password = hashed
}
}
return nil
}
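hashPassword dispatches on the configured algorithm: bcrypt via golang.org/x/crypto/bcrypt or, by default, argon2id via github.com/alexedwards/argon2id. A self-contained round trip with both libraries, using bcrypt.DefaultCost and argon2id.DefaultParams in place of SFTPGo's configured cost and argon2Params:

package main

import (
	"fmt"

	"github.com/alexedwards/argon2id"
	"golang.org/x/crypto/bcrypt"
)

func main() {
	// bcrypt: hash and verify, as in the HashingAlgoBcrypt branch above.
	bHash, err := bcrypt.GenerateFromPassword([]byte("s3cret"), bcrypt.DefaultCost)
	if err != nil {
		panic(err)
	}
	fmt.Println("bcrypt ok:", bcrypt.CompareHashAndPassword(bHash, []byte("s3cret")) == nil)

	// argon2id: the default branch; DefaultParams stands in for the
	// configured argon2Params.
	aHash, err := argon2id.CreateHash("s3cret", argon2id.DefaultParams)
	if err != nil {
		panic(err)
	}
	match, err := argon2id.ComparePasswordAndHash("s3cret", aHash)
	fmt.Println("argon2id ok:", match, err)
}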
func (s *Share) validatePaths() error {
var paths []string
for _, p := range s.Paths {
p = strings.TrimSpace(p)
if p != "" {
paths = append(paths, p)
}
}
s.Paths = paths
if len(s.Paths) == 0 {
return util.NewValidationError("at least a shared path is required")
}
for idx := range s.Paths {
s.Paths[idx] = util.CleanPath(s.Paths[idx])
}
s.Paths = util.RemoveDuplicates(s.Paths, false)
if s.Scope >= ShareScopeWrite && len(s.Paths) != 1 {
return util.NewValidationError("the write share scope requires exactly one path")
}
// check nested paths
if len(s.Paths) > 1 {
for idx := range s.Paths {
for innerIdx := range s.Paths {
if idx == innerIdx {
continue
}
if isVirtualDirOverlapped(s.Paths[idx], s.Paths[innerIdx], true) {
return util.NewGenericError("shared paths cannot be nested")
}
}
}
}
return nil
}
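validatePaths trims, cleans, and de-duplicates the requested paths before the scope and nesting checks run. A sketch of that normalization step, under the assumption that util.CleanPath behaves like the standard library's path.Clean anchored at "/":

package main

import (
	"fmt"
	"path"
	"strings"
)

// normalizePaths trims, cleans, and de-duplicates shared paths, keeping the
// first occurrence of each normalized path.
func normalizePaths(in []string) []string {
	seen := make(map[string]bool)
	var out []string
	for _, p := range in {
		p = strings.TrimSpace(p)
		if p == "" {
			continue
		}
		p = path.Clean("/" + p)
		if !seen[p] {
			seen[p] = true
			out = append(out, p)
		}
	}
	return out
}

func main() {
	fmt.Println(normalizePaths([]string{" /docs/ ", "/docs", "", "photos/../music"}))
	// [/docs /music]
}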
func (s *Share) validate() error {
if s.ShareID == "" {
return util.NewValidationError("share_id is mandatory")
}
if s.Name == "" {
return util.NewValidationError("name is mandatory")
}
if s.Scope < ShareScopeRead || s.Scope > ShareScopeReadWrite {
return util.NewValidationError(fmt.Sprintf("invalid scope: %v", s.Scope))
}
if err := s.validatePaths(); err != nil {
return err
}
if s.ExpiresAt > 0 {
if !s.IsRestore && s.ExpiresAt < util.GetTimeAsMsSinceEpoch(time.Now()) {
return util.NewValidationError("expiration must be in the future")
}
} else {
s.ExpiresAt = 0
}
if s.MaxTokens < 0 {
return util.NewValidationError("invalid max tokens")
}
if s.Username == "" {
return util.NewValidationError("username is mandatory")
}
if s.HasRedactedPassword() {
return util.NewValidationError("cannot save a share with a redacted password")
}
if err := s.hashPassword(); err != nil {
return err
}
s.AllowFrom = util.RemoveDuplicates(s.AllowFrom, false)
for _, IPMask := range s.AllowFrom {
_, _, err := net.ParseCIDR(IPMask)
if err != nil {
return util.NewValidationError(fmt.Sprintf("could not parse allow from entry %#v : %v", IPMask, err))
}
}
return nil
}
// CheckCredentials verifies the share credentials if a password is set
func (s *Share) CheckCredentials(username, password string) (bool, error) {
if s.Password == "" {
return true, nil
}
if username == "" || password == "" {
return false, ErrInvalidCredentials
}
if username != s.Username {
return false, ErrInvalidCredentials
}
if strings.HasPrefix(s.Password, bcryptPwdPrefix) {
if err := bcrypt.CompareHashAndPassword([]byte(s.Password), []byte(password)); err != nil {
return false, ErrInvalidCredentials
}
return true, nil
}
match, err := argon2id.ComparePasswordAndHash(password, s.Password)
if !match || err != nil {
return false, ErrInvalidCredentials
}
return match, err
}
// GetRelativePath returns the specified absolute path as relative to the share base path
func (s *Share) GetRelativePath(name string) string {
if len(s.Paths) == 0 {
return ""
}
return util.CleanPath(strings.TrimPrefix(name, s.Paths[0]))
}
// IsUsable checks if the share is usable from the specified IP
func (s *Share) IsUsable(ip string) (bool, error) {
if s.MaxTokens > 0 && s.UsedTokens >= s.MaxTokens {
return false, util.NewRecordNotFoundError("max share usage exceeded")
}
if s.ExpiresAt > 0 {
if s.ExpiresAt < util.GetTimeAsMsSinceEpoch(time.Now()) {
return false, util.NewRecordNotFoundError("share expired")
}
}
if len(s.AllowFrom) == 0 {
return true, nil
}
parsedIP := net.ParseIP(ip)
if parsedIP == nil {
return false, ErrLoginNotAllowedFromIP
}
for _, ipMask := range s.AllowFrom {
_, network, err := net.ParseCIDR(ipMask)
if err != nil {
continue
}
if network.Contains(parsedIP) {
return true, nil
}
}
return false, ErrLoginNotAllowedFromIP
}
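The AllowFrom check parses each entry as a CIDR network and skips entries that fail to parse, denying only when no network matches. The same logic extracted into a standalone helper (the name allowedFrom is hypothetical):

package main

import (
	"fmt"
	"net"
)

// allowedFrom reports whether ip falls inside any of the allow-list CIDR
// networks; unparsable entries are skipped rather than failing the list.
func allowedFrom(ip string, allowFrom []string) bool {
	if len(allowFrom) == 0 {
		return true
	}
	parsed := net.ParseIP(ip)
	if parsed == nil {
		return false
	}
	for _, mask := range allowFrom {
		_, network, err := net.ParseCIDR(mask)
		if err != nil {
			continue
		}
		if network.Contains(parsed) {
			return true
		}
	}
	return false
}

func main() {
	allow := []string{"192.168.1.0/24", "10.0.0.0/8"}
	fmt.Println(allowedFrom("192.168.1.10", allow)) // true
	fmt.Println(allowedFrom("172.16.0.1", allow))   // false
}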

File diff suppressed because it is too large

@@ -1,3 +1,18 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//go:build !nosqlite
// +build !nosqlite
package dataprovider
@@ -10,76 +25,154 @@ import (
"fmt"
"path/filepath"
"strings"
"time"
// we import go-sqlite3 here to be able to disable SQLite support using a build tag
_ "github.com/mattn/go-sqlite3"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/version"
"github.com/drakkan/sftpgo/vfs"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/version"
"github.com/drakkan/sftpgo/v2/vfs"
)
const (
sqliteResetSQL = `DROP TABLE IF EXISTS "{{api_keys}}";
DROP TABLE IF EXISTS "{{folders_mapping}}";
DROP TABLE IF EXISTS "{{users_folders_mapping}}";
DROP TABLE IF EXISTS "{{users_groups_mapping}}";
DROP TABLE IF EXISTS "{{groups_folders_mapping}}";
DROP TABLE IF EXISTS "{{admins}}";
DROP TABLE IF EXISTS "{{folders}}";
DROP TABLE IF EXISTS "{{shares}}";
DROP TABLE IF EXISTS "{{users}}";
DROP TABLE IF EXISTS "{{groups}}";
DROP TABLE IF EXISTS "{{defender_events}}";
DROP TABLE IF EXISTS "{{defender_hosts}}";
DROP TABLE IF EXISTS "{{active_transfers}}";
DROP TABLE IF EXISTS "{{shared_sessions}}";
DROP TABLE IF EXISTS "{{schema_version}}";
`
sqliteInitialSQL = `CREATE TABLE "{{schema_version}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "version" integer NOT NULL);
CREATE TABLE "{{admins}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE,
"password" varchar(255) NOT NULL, "email" varchar(255) NULL, "status" integer NOT NULL, "permissions" text NOT NULL,
"filters" text NULL, "additional_info" text NULL);
"description" varchar(512) NULL, "password" varchar(255) NOT NULL, "email" varchar(255) NULL, "status" integer NOT NULL,
"permissions" text NOT NULL, "filters" text NULL, "additional_info" text NULL, "last_login" bigint NOT NULL,
"created_at" bigint NOT NULL, "updated_at" bigint NOT NULL);
CREATE TABLE "{{defender_hosts}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "ip" varchar(50) NOT NULL UNIQUE,
"ban_time" bigint NOT NULL, "updated_at" bigint NOT NULL);
CREATE TABLE "{{defender_events}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "date_time" bigint NOT NULL,
"score" integer NOT NULL, "host_id" integer NOT NULL REFERENCES "{{defender_hosts}}" ("id") ON DELETE CASCADE
DEFERRABLE INITIALLY DEFERRED);
CREATE TABLE "{{folders}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "name" varchar(255) NOT NULL UNIQUE,
"path" varchar(512) NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL,
"last_quota_update" bigint NOT NULL);
"description" varchar(512) NULL, "path" text NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL,
"last_quota_update" bigint NOT NULL, "filesystem" text NULL);
CREATE TABLE "{{users}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE,
"password" text NULL, "public_keys" text NULL, "home_dir" varchar(512) NOT NULL, "uid" integer NOT NULL, "gid" integer NOT NULL,
"status" integer NOT NULL, "expiration_date" bigint NOT NULL, "description" varchar(512) NULL, "password" text NULL,
"public_keys" text NULL, "home_dir" text NOT NULL, "uid" bigint NOT NULL, "gid" bigint NOT NULL,
"max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "permissions" text NOT NULL,
"used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL,
"upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL, "expiration_date" bigint NOT NULL,
"last_login" bigint NOT NULL, "status" integer NOT NULL, "filters" text NULL, "filesystem" text NULL,
"additional_info" text NULL);
CREATE TABLE "{{folders_mapping}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "virtual_path" varchar(512) NOT NULL,
"upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL, "last_login" bigint NOT NULL, "filters" text NULL,
"filesystem" text NULL, "additional_info" text NULL, "created_at" bigint NOT NULL, "updated_at" bigint NOT NULL,
"email" varchar(255) NULL);
CREATE TABLE "{{folders_mapping}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "virtual_path" text NOT NULL,
"quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "folder_id" integer NOT NULL REFERENCES "{{folders}}" ("id")
ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED, "user_id" integer NOT NULL REFERENCES "{{users}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
CONSTRAINT "{{prefix}}unique_mapping" UNIQUE ("user_id", "folder_id"));
CREATE TABLE "{{shares}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "share_id" varchar(60) NOT NULL UNIQUE,
"name" varchar(255) NOT NULL, "description" varchar(512) NULL, "scope" integer NOT NULL, "paths" text NOT NULL,
"created_at" bigint NOT NULL, "updated_at" bigint NOT NULL, "last_use_at" bigint NOT NULL, "expires_at" bigint NOT NULL,
"password" text NULL, "max_tokens" integer NOT NULL, "used_tokens" integer NOT NULL, "allow_from" text NULL,
"user_id" integer NOT NULL REFERENCES "{{users}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED);
CREATE TABLE "{{api_keys}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "name" varchar(255) NOT NULL,
"key_id" varchar(50) NOT NULL UNIQUE, "api_key" varchar(255) NOT NULL UNIQUE, "scope" integer NOT NULL,
"created_at" bigint NOT NULL, "updated_at" bigint NOT NULL, "last_use_at" bigint NOT NULL, "expires_at" bigint NOT NULL,
"description" text NULL, "admin_id" integer NULL REFERENCES "{{admins}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
"user_id" integer NULL REFERENCES "{{users}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED);
CREATE INDEX "{{prefix}}folders_mapping_folder_id_idx" ON "{{folders_mapping}}" ("folder_id");
CREATE INDEX "{{prefix}}folders_mapping_user_id_idx" ON "{{folders_mapping}}" ("user_id");
INSERT INTO {{schema_version}} (version) VALUES (8);
CREATE INDEX "{{prefix}}api_keys_admin_id_idx" ON "{{api_keys}}" ("admin_id");
CREATE INDEX "{{prefix}}api_keys_user_id_idx" ON "{{api_keys}}" ("user_id");
CREATE INDEX "{{prefix}}users_updated_at_idx" ON "{{users}}" ("updated_at");
CREATE INDEX "{{prefix}}shares_user_id_idx" ON "{{shares}}" ("user_id");
CREATE INDEX "{{prefix}}defender_hosts_updated_at_idx" ON "{{defender_hosts}}" ("updated_at");
CREATE INDEX "{{prefix}}defender_hosts_ban_time_idx" ON "{{defender_hosts}}" ("ban_time");
CREATE INDEX "{{prefix}}defender_events_date_time_idx" ON "{{defender_events}}" ("date_time");
CREATE INDEX "{{prefix}}defender_events_host_id_idx" ON "{{defender_events}}" ("host_id");
INSERT INTO {{schema_version}} (version) VALUES (15);
`
sqliteV9SQL = `ALTER TABLE "{{admins}}" ADD COLUMN "description" varchar(512) NULL;
ALTER TABLE "{{folders}}" ADD COLUMN "description" varchar(512) NULL;
ALTER TABLE "{{folders}}" ADD COLUMN "filesystem" text NULL;
ALTER TABLE "{{users}}" ADD COLUMN "description" varchar(512) NULL;
sqliteV16SQL = `ALTER TABLE "{{users}}" ADD COLUMN "download_data_transfer" integer DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ADD COLUMN "total_data_transfer" integer DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ADD COLUMN "upload_data_transfer" integer DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ADD COLUMN "used_download_data_transfer" integer DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ADD COLUMN "used_upload_data_transfer" integer DEFAULT 0 NOT NULL;
CREATE TABLE "{{active_transfers}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "connection_id" varchar(100) NOT NULL,
"transfer_id" bigint NOT NULL, "transfer_type" integer NOT NULL, "username" varchar(255) NOT NULL,
"folder_name" varchar(255) NULL, "ip" varchar(50) NOT NULL, "truncated_size" bigint NOT NULL,
"current_ul_size" bigint NOT NULL, "current_dl_size" bigint NOT NULL, "created_at" bigint NOT NULL,
"updated_at" bigint NOT NULL);
CREATE INDEX "{{prefix}}active_transfers_connection_id_idx" ON "{{active_transfers}}" ("connection_id");
CREATE INDEX "{{prefix}}active_transfers_transfer_id_idx" ON "{{active_transfers}}" ("transfer_id");
CREATE INDEX "{{prefix}}active_transfers_updated_at_idx" ON "{{active_transfers}}" ("updated_at");
`
sqliteV9DownSQL = `CREATE TABLE "new__users" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "status" integer NOT NULL,
"expiration_date" bigint NOT NULL, "username" varchar(255) NOT NULL UNIQUE, "password" text NULL, "public_keys" text NULL,
"home_dir" varchar(512) NOT NULL, "uid" integer NOT NULL, "gid" integer NOT NULL, "max_sessions" integer NOT NULL,
"quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "permissions" text NOT NULL, "used_quota_size" bigint NOT NULL,
"used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL,
"download_bandwidth" integer NOT NULL, "last_login" bigint NOT NULL, "filters" text NULL, "filesystem" text NULL,
"additional_info" text NULL);
INSERT INTO "new__users" ("id", "status", "expiration_date", "username", "password", "public_keys", "home_dir", "uid", "gid",
"max_sessions", "quota_size", "quota_files", "permissions", "used_quota_size", "used_quota_files", "last_quota_update",
"upload_bandwidth", "download_bandwidth", "last_login", "filters", "filesystem", "additional_info")
SELECT "id", "status", "expiration_date", "username", "password", "public_keys", "home_dir", "uid", "gid", "max_sessions",
"quota_size", "quota_files", "permissions", "used_quota_size", "used_quota_files", "last_quota_update", "upload_bandwidth",
"download_bandwidth", "last_login", "filters", "filesystem", "additional_info" FROM "{{users}}";
DROP TABLE "{{users}}";
ALTER TABLE "new__users" RENAME TO "{{users}}";
CREATE TABLE "new__admins" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE,
"password" varchar(255) NOT NULL, "email" varchar(255) NULL, "status" integer NOT NULL, "permissions" text NOT NULL,
"filters" text NULL, "additional_info" text NULL);
INSERT INTO "new__admins" ("id", "username", "password", "email", "status", "permissions", "filters", "additional_info")
SELECT "id", "username", "password", "email", "status", "permissions", "filters", "additional_info" FROM "{{admins}}";
DROP TABLE "{{admins}}";
ALTER TABLE "new__admins" RENAME TO "{{admins}}";
CREATE TABLE "new__folders" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "name" varchar(255) NOT NULL UNIQUE,
"path" varchar(512) NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL);
INSERT INTO "new__folders" ("id", "name", "path", "used_quota_size", "used_quota_files", "last_quota_update")
SELECT "id", "name", "path", "used_quota_size", "used_quota_files", "last_quota_update" FROM "{{folders}}";
DROP TABLE "{{folders}}";
ALTER TABLE "new__folders" RENAME TO "{{folders}}";
sqliteV16DownSQL = `ALTER TABLE "{{users}}" DROP COLUMN "used_upload_data_transfer";
ALTER TABLE "{{users}}" DROP COLUMN "used_download_data_transfer";
ALTER TABLE "{{users}}" DROP COLUMN "upload_data_transfer";
ALTER TABLE "{{users}}" DROP COLUMN "total_data_transfer";
ALTER TABLE "{{users}}" DROP COLUMN "download_data_transfer";
DROP TABLE "{{active_transfers}}";
`
sqliteV17SQL = `CREATE TABLE "{{groups}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "name" varchar(255) NOT NULL UNIQUE,
"description" varchar(512) NULL, "created_at" bigint NOT NULL, "updated_at" bigint NOT NULL, "user_settings" text NULL);
CREATE TABLE "{{groups_folders_mapping}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"folder_id" integer NOT NULL REFERENCES "{{folders}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
"group_id" integer NOT NULL REFERENCES "{{groups}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
"virtual_path" text NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL,
CONSTRAINT "{{prefix}}unique_group_folder_mapping" UNIQUE ("group_id", "folder_id"));
CREATE TABLE "{{users_groups_mapping}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"user_id" integer NOT NULL REFERENCES "{{users}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
"group_id" integer NOT NULL REFERENCES "{{groups}}" ("id") ON DELETE NO ACTION,
"group_type" integer NOT NULL, CONSTRAINT "{{prefix}}unique_user_group_mapping" UNIQUE ("user_id", "group_id"));
CREATE TABLE "new__folders_mapping" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"user_id" integer NOT NULL REFERENCES "{{users}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
"folder_id" integer NOT NULL REFERENCES "{{folders}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
"virtual_path" text NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL,
CONSTRAINT "{{prefix}}unique_user_folder_mapping" UNIQUE ("user_id", "folder_id"));
INSERT INTO "new__folders_mapping" ("id", "virtual_path", "quota_size", "quota_files", "folder_id", "user_id") SELECT "id",
"virtual_path", "quota_size", "quota_files", "folder_id", "user_id" FROM "{{folders_mapping}}";
DROP TABLE "{{folders_mapping}}";
ALTER TABLE "new__folders_mapping" RENAME TO "{{users_folders_mapping}}";
CREATE INDEX "{{prefix}}groups_updated_at_idx" ON "{{groups}}" ("updated_at");
CREATE INDEX "{{prefix}}users_folders_mapping_folder_id_idx" ON "{{users_folders_mapping}}" ("folder_id");
CREATE INDEX "{{prefix}}users_folders_mapping_user_id_idx" ON "{{users_folders_mapping}}" ("user_id");
CREATE INDEX "{{prefix}}users_groups_mapping_group_id_idx" ON "{{users_groups_mapping}}" ("group_id");
CREATE INDEX "{{prefix}}users_groups_mapping_user_id_idx" ON "{{users_groups_mapping}}" ("user_id");
CREATE INDEX "{{prefix}}groups_folders_mapping_folder_id_idx" ON "{{groups_folders_mapping}}" ("folder_id");
CREATE INDEX "{{prefix}}groups_folders_mapping_group_id_idx" ON "{{groups_folders_mapping}}" ("group_id");
`
sqliteV17DownSQL = `DROP TABLE "{{users_groups_mapping}}";
DROP TABLE "{{groups_folders_mapping}}";
DROP TABLE "{{groups}}";
CREATE TABLE "new__folders_mapping" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"user_id" integer NOT NULL REFERENCES "{{users}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
"folder_id" integer NOT NULL REFERENCES "{{folders}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
"virtual_path" text NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL,
CONSTRAINT "{{prefix}}unique_folder_mapping" UNIQUE ("user_id", "folder_id"));
INSERT INTO "new__folders_mapping" ("id", "virtual_path", "quota_size", "quota_files", "folder_id", "user_id") SELECT "id",
"virtual_path", "quota_size", "quota_files", "folder_id", "user_id" FROM "{{users_folders_mapping}}";
DROP TABLE "{{users_folders_mapping}}";
ALTER TABLE "new__folders_mapping" RENAME TO "{{folders_mapping}}";
CREATE INDEX "{{prefix}}folders_mapping_folder_id_idx" ON "{{folders_mapping}}" ("folder_id");
CREATE INDEX "{{prefix}}folders_mapping_user_id_idx" ON "{{folders_mapping}}" ("user_id");
`
sqliteV19SQL = `CREATE TABLE "{{shared_sessions}}" ("key" varchar(128) NOT NULL PRIMARY KEY, "data" text NOT NULL,
"type" integer NOT NULL, "timestamp" bigint NOT NULL);
CREATE INDEX "{{prefix}}shared_sessions_type_idx" ON "{{shared_sessions}}" ("type");
CREATE INDEX "{{prefix}}shared_sessions_timestamp_idx" ON "{{shared_sessions}}" ("timestamp");
`
sqliteV19DownSQL = `DROP TABLE "{{shared_sessions}}";`
)
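Unlike PostgreSQL, SQLite cannot drop or rewrite most constraints in place, which is why sqliteV17SQL and its downgrade rebuild tables: create the new shape, copy the rows, drop the old table, rename. A minimal end-to-end run of that recipe against an in-memory database, using a toy table t rather than an SFTPGo table:

package main

import (
	"database/sql"
	"fmt"

	_ "github.com/mattn/go-sqlite3"
)

const rebuild = `
CREATE TABLE t ("id" integer PRIMARY KEY, "name" text NOT NULL);
INSERT INTO t ("id", "name") VALUES (1, 'alice');
CREATE TABLE new__t ("id" integer PRIMARY KEY, "name" text NOT NULL UNIQUE);
INSERT INTO new__t ("id", "name") SELECT "id", "name" FROM t;
DROP TABLE t;
ALTER TABLE new__t RENAME TO t;
`

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		panic(err)
	}
	defer db.Close()
	// go-sqlite3 executes the statements sequentially when Exec is
	// called without arguments.
	if _, err := db.Exec(rebuild); err != nil {
		panic(err)
	}
	var name string
	if err := db.QueryRow(`SELECT "name" FROM t WHERE "id" = 1`).Scan(&name); err != nil {
		panic(err)
	}
	fmt.Println("rebuilt table still holds:", name)
}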
// SQLiteProvider auth provider for SQLite database
// SQLiteProvider defines the auth provider for SQLite database
type SQLiteProvider struct {
dbHandle *sql.DB
}
@@ -94,7 +187,7 @@ func initializeSQLiteProvider(basePath string) error {
if config.ConnectionString == "" {
dbPath := config.Name
if !utils.IsFileInputValid(dbPath) {
if !util.IsFileInputValid(dbPath) {
return fmt.Errorf("invalid database path: %#v", dbPath)
}
if !filepath.IsAbs(dbPath) {
@@ -110,7 +203,7 @@ func initializeSQLiteProvider(basePath string) error {
dbHandle.SetMaxOpenConns(1)
provider = &SQLiteProvider{dbHandle: dbHandle}
} else {
providerLog(logger.LevelWarn, "error creating sqlite database handler, connection string: %#v, error: %v",
providerLog(logger.LevelError, "error creating sqlite database handler, connection string: %#v, error: %v",
connectionString, err)
}
return err
@@ -128,22 +221,34 @@ func (p *SQLiteProvider) validateUserAndTLSCert(username, protocol string, tlsCe
return sqlCommonValidateUserAndTLSCertificate(username, protocol, tlsCert, p.dbHandle)
}
func (p *SQLiteProvider) validateUserAndPubKey(username string, publicKey []byte) (User, string, error) {
return sqlCommonValidateUserAndPubKey(username, publicKey, p.dbHandle)
func (p *SQLiteProvider) validateUserAndPubKey(username string, publicKey []byte, isSSHCert bool) (User, string, error) {
return sqlCommonValidateUserAndPubKey(username, publicKey, isSSHCert, p.dbHandle)
}
func (p *SQLiteProvider) updateTransferQuota(username string, uploadSize, downloadSize int64, reset bool) error {
return sqlCommonUpdateTransferQuota(username, uploadSize, downloadSize, reset, p.dbHandle)
}
func (p *SQLiteProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
return sqlCommonUpdateQuota(username, filesAdd, sizeAdd, reset, p.dbHandle)
}
func (p *SQLiteProvider) getUsedQuota(username string) (int, int64, error) {
func (p *SQLiteProvider) getUsedQuota(username string) (int, int64, int64, int64, error) {
return sqlCommonGetUsedQuota(username, p.dbHandle)
}
func (p *SQLiteProvider) setUpdatedAt(username string) {
sqlCommonSetUpdatedAt(username, p.dbHandle)
}
func (p *SQLiteProvider) updateLastLogin(username string) error {
return sqlCommonUpdateLastLogin(username, p.dbHandle)
}
func (p *SQLiteProvider) updateAdminLastLogin(username string) error {
return sqlCommonUpdateAdminLastLogin(username, p.dbHandle)
}
func (p *SQLiteProvider) userExists(username string) (User, error) {
return sqlCommonGetUserByUsername(username, p.dbHandle)
}
@@ -156,24 +261,36 @@ func (p *SQLiteProvider) updateUser(user *User) error {
return sqlCommonUpdateUser(user, p.dbHandle)
}
func (p *SQLiteProvider) deleteUser(user *User) error {
func (p *SQLiteProvider) deleteUser(user User) error {
return sqlCommonDeleteUser(user, p.dbHandle)
}
func (p *SQLiteProvider) updateUserPassword(username, password string) error {
return sqlCommonUpdateUserPassword(username, password, p.dbHandle)
}
func (p *SQLiteProvider) dumpUsers() ([]User, error) {
return sqlCommonDumpUsers(p.dbHandle)
}
func (p *SQLiteProvider) getRecentlyUpdatedUsers(after int64) ([]User, error) {
return sqlCommonGetRecentlyUpdatedUsers(after, p.dbHandle)
}
func (p *SQLiteProvider) getUsers(limit int, offset int, order string) ([]User, error) {
return sqlCommonGetUsers(limit, offset, order, p.dbHandle)
}
func (p *SQLiteProvider) getUsersForQuotaCheck(toFetch map[string]bool) ([]User, error) {
return sqlCommonGetUsersForQuotaCheck(toFetch, p.dbHandle)
}
func (p *SQLiteProvider) dumpFolders() ([]vfs.BaseVirtualFolder, error) {
return sqlCommonDumpFolders(p.dbHandle)
}
func (p *SQLiteProvider) getFolders(limit, offset int, order string) ([]vfs.BaseVirtualFolder, error) {
return sqlCommonGetFolders(limit, offset, order, p.dbHandle)
func (p *SQLiteProvider) getFolders(limit, offset int, order string, minimal bool) ([]vfs.BaseVirtualFolder, error) {
return sqlCommonGetFolders(limit, offset, order, minimal, p.dbHandle)
}
func (p *SQLiteProvider) getFolderByName(name string) (vfs.BaseVirtualFolder, error) {
@@ -190,7 +307,7 @@ func (p *SQLiteProvider) updateFolder(folder *vfs.BaseVirtualFolder) error {
return sqlCommonUpdateFolder(folder, p.dbHandle)
}
func (p *SQLiteProvider) deleteFolder(folder *vfs.BaseVirtualFolder) error {
func (p *SQLiteProvider) deleteFolder(folder vfs.BaseVirtualFolder) error {
return sqlCommonDeleteFolder(folder, p.dbHandle)
}
@@ -202,6 +319,38 @@ func (p *SQLiteProvider) getUsedFolderQuota(name string) (int, int64, error) {
return sqlCommonGetFolderUsedQuota(name, p.dbHandle)
}
func (p *SQLiteProvider) getGroups(limit, offset int, order string, minimal bool) ([]Group, error) {
return sqlCommonGetGroups(limit, offset, order, minimal, p.dbHandle)
}
func (p *SQLiteProvider) getGroupsWithNames(names []string) ([]Group, error) {
return sqlCommonGetGroupsWithNames(names, p.dbHandle)
}
func (p *SQLiteProvider) getUsersInGroups(names []string) ([]string, error) {
return sqlCommonGetUsersInGroups(names, p.dbHandle)
}
func (p *SQLiteProvider) groupExists(name string) (Group, error) {
return sqlCommonGetGroupByName(name, p.dbHandle)
}
func (p *SQLiteProvider) addGroup(group *Group) error {
return sqlCommonAddGroup(group, p.dbHandle)
}
func (p *SQLiteProvider) updateGroup(group *Group) error {
return sqlCommonUpdateGroup(group, p.dbHandle)
}
func (p *SQLiteProvider) deleteGroup(group Group) error {
return sqlCommonDeleteGroup(group, p.dbHandle)
}
func (p *SQLiteProvider) dumpGroups() ([]Group, error) {
return sqlCommonDumpGroups(p.dbHandle)
}
func (p *SQLiteProvider) adminExists(username string) (Admin, error) {
return sqlCommonGetAdminByUsername(username, p.dbHandle)
}
@@ -214,7 +363,7 @@ func (p *SQLiteProvider) updateAdmin(admin *Admin) error {
return sqlCommonUpdateAdmin(admin, p.dbHandle)
}
func (p *SQLiteProvider) deleteAdmin(admin *Admin) error {
func (p *SQLiteProvider) deleteAdmin(admin Admin) error {
return sqlCommonDeleteAdmin(admin, p.dbHandle)
}
@@ -230,6 +379,130 @@ func (p *SQLiteProvider) validateAdminAndPass(username, password, ip string) (Ad
return sqlCommonValidateAdminAndPass(username, password, ip, p.dbHandle)
}
func (p *SQLiteProvider) apiKeyExists(keyID string) (APIKey, error) {
return sqlCommonGetAPIKeyByID(keyID, p.dbHandle)
}
func (p *SQLiteProvider) addAPIKey(apiKey *APIKey) error {
return sqlCommonAddAPIKey(apiKey, p.dbHandle)
}
func (p *SQLiteProvider) updateAPIKey(apiKey *APIKey) error {
return sqlCommonUpdateAPIKey(apiKey, p.dbHandle)
}
func (p *SQLiteProvider) deleteAPIKey(apiKey APIKey) error {
return sqlCommonDeleteAPIKey(apiKey, p.dbHandle)
}
func (p *SQLiteProvider) getAPIKeys(limit int, offset int, order string) ([]APIKey, error) {
return sqlCommonGetAPIKeys(limit, offset, order, p.dbHandle)
}
func (p *SQLiteProvider) dumpAPIKeys() ([]APIKey, error) {
return sqlCommonDumpAPIKeys(p.dbHandle)
}
func (p *SQLiteProvider) updateAPIKeyLastUse(keyID string) error {
return sqlCommonUpdateAPIKeyLastUse(keyID, p.dbHandle)
}
func (p *SQLiteProvider) shareExists(shareID, username string) (Share, error) {
return sqlCommonGetShareByID(shareID, username, p.dbHandle)
}
func (p *SQLiteProvider) addShare(share *Share) error {
return sqlCommonAddShare(share, p.dbHandle)
}
func (p *SQLiteProvider) updateShare(share *Share) error {
return sqlCommonUpdateShare(share, p.dbHandle)
}
func (p *SQLiteProvider) deleteShare(share Share) error {
return sqlCommonDeleteShare(share, p.dbHandle)
}
func (p *SQLiteProvider) getShares(limit int, offset int, order, username string) ([]Share, error) {
return sqlCommonGetShares(limit, offset, order, username, p.dbHandle)
}
func (p *SQLiteProvider) dumpShares() ([]Share, error) {
return sqlCommonDumpShares(p.dbHandle)
}
func (p *SQLiteProvider) updateShareLastUse(shareID string, numTokens int) error {
return sqlCommonUpdateShareLastUse(shareID, numTokens, p.dbHandle)
}
func (p *SQLiteProvider) getDefenderHosts(from int64, limit int) ([]DefenderEntry, error) {
return sqlCommonGetDefenderHosts(from, limit, p.dbHandle)
}
func (p *SQLiteProvider) getDefenderHostByIP(ip string, from int64) (DefenderEntry, error) {
return sqlCommonGetDefenderHostByIP(ip, from, p.dbHandle)
}
func (p *SQLiteProvider) isDefenderHostBanned(ip string) (DefenderEntry, error) {
return sqlCommonIsDefenderHostBanned(ip, p.dbHandle)
}
func (p *SQLiteProvider) updateDefenderBanTime(ip string, minutes int) error {
return sqlCommonDefenderIncrementBanTime(ip, minutes, p.dbHandle)
}
func (p *SQLiteProvider) deleteDefenderHost(ip string) error {
return sqlCommonDeleteDefenderHost(ip, p.dbHandle)
}
func (p *SQLiteProvider) addDefenderEvent(ip string, score int) error {
return sqlCommonAddDefenderHostAndEvent(ip, score, p.dbHandle)
}
func (p *SQLiteProvider) setDefenderBanTime(ip string, banTime int64) error {
return sqlCommonSetDefenderBanTime(ip, banTime, p.dbHandle)
}
func (p *SQLiteProvider) cleanupDefender(from int64) error {
return sqlCommonDefenderCleanup(from, p.dbHandle)
}
func (p *SQLiteProvider) addActiveTransfer(transfer ActiveTransfer) error {
return sqlCommonAddActiveTransfer(transfer, p.dbHandle)
}
func (p *SQLiteProvider) updateActiveTransferSizes(ulSize, dlSize, transferID int64, connectionID string) error {
return sqlCommonUpdateActiveTransferSizes(ulSize, dlSize, transferID, connectionID, p.dbHandle)
}
func (p *SQLiteProvider) removeActiveTransfer(transferID int64, connectionID string) error {
return sqlCommonRemoveActiveTransfer(transferID, connectionID, p.dbHandle)
}
func (p *SQLiteProvider) cleanupActiveTransfers(before time.Time) error {
return sqlCommonCleanupActiveTransfers(before, p.dbHandle)
}
func (p *SQLiteProvider) getActiveTransfers(from time.Time) ([]ActiveTransfer, error) {
return sqlCommonGetActiveTransfers(from, p.dbHandle)
}
func (p *SQLiteProvider) addSharedSession(session Session) error {
return sqlCommonAddSession(session, p.dbHandle)
}
func (p *SQLiteProvider) deleteSharedSession(key string) error {
return sqlCommonDeleteSession(key, p.dbHandle)
}
func (p *SQLiteProvider) getSharedSession(key string) (Session, error) {
return sqlCommonGetSession(key, p.dbHandle)
}
func (p *SQLiteProvider) cleanupSharedSessions(sessionType SessionType, before int64) error {
return sqlCommonCleanupSessions(sessionType, before, p.dbHandle)
}
func (p *SQLiteProvider) close() error {
return p.dbHandle.Close()
}
@@ -244,17 +517,26 @@ func (p *SQLiteProvider) initializeDatabase() error {
if err == nil && dbVersion.Version > 0 {
return ErrNoInitRequired
}
if errors.Is(err, sql.ErrNoRows) {
return errSchemaVersionEmpty
}
logger.InfoToConsole("creating initial database schema, version 15")
providerLog(logger.LevelInfo, "creating initial database schema, version 15")
initialSQL := strings.ReplaceAll(sqliteInitialSQL, "{{schema_version}}", sqlTableSchemaVersion)
initialSQL = strings.ReplaceAll(initialSQL, "{{admins}}", sqlTableAdmins)
initialSQL = strings.ReplaceAll(initialSQL, "{{folders}}", sqlTableFolders)
initialSQL = strings.ReplaceAll(initialSQL, "{{users}}", sqlTableUsers)
initialSQL = strings.ReplaceAll(initialSQL, "{{folders_mapping}}", sqlTableFoldersMapping)
initialSQL = strings.ReplaceAll(initialSQL, "{{api_keys}}", sqlTableAPIKeys)
initialSQL = strings.ReplaceAll(initialSQL, "{{shares}}", sqlTableShares)
initialSQL = strings.ReplaceAll(initialSQL, "{{defender_events}}", sqlTableDefenderEvents)
initialSQL = strings.ReplaceAll(initialSQL, "{{defender_hosts}}", sqlTableDefenderHosts)
initialSQL = strings.ReplaceAll(initialSQL, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, []string{initialSQL}, 8)
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, []string{initialSQL}, 15, true)
}
func (p *SQLiteProvider) migrateDatabase() error {
func (p *SQLiteProvider) migrateDatabase() error { //nolint:dupl
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
if err != nil {
return err
@@ -264,18 +546,22 @@ func (p *SQLiteProvider) migrateDatabase() error {
case version == sqlDatabaseVersion:
providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", version)
return ErrNoInitRequired
case version < 8:
case version < 15:
err = fmt.Errorf("database version %v is too old, please see the upgrading docs", version)
providerLog(logger.LevelError, "%v", err)
logger.ErrorToConsole("%v", err)
return err
case version == 8:
return updateSQLiteDatabaseFromV8(p.dbHandle)
case version == 9:
return updateSQLiteDatabaseFromV9(p.dbHandle)
case version == 15:
return updateSQLiteDatabaseFromV15(p.dbHandle)
case version == 16:
return updateSQLiteDatabaseFromV16(p.dbHandle)
case version == 17:
return updateSQLiteDatabaseFromV17(p.dbHandle)
case version == 18:
return updateSQLiteDatabaseFromV18(p.dbHandle)
default:
if version > sqlDatabaseVersion {
providerLog(logger.LevelWarn, "database version %v is newer than the supported one: %v", version,
providerLog(logger.LevelError, "database version %v is newer than the supported one: %v", version,
sqlDatabaseVersion)
logger.WarnToConsole("database version %v is newer than the supported one: %v", version,
sqlDatabaseVersion)
@@ -295,67 +581,160 @@ func (p *SQLiteProvider) revertDatabase(targetVersion int) error {
}
switch dbVersion.Version {
case 9:
return downgradeSQLiteDatabaseFromV9(p.dbHandle)
case 10:
return downgradeSQLiteDatabaseFromV10(p.dbHandle)
case 16:
return downgradeSQLiteDatabaseFromV16(p.dbHandle)
case 17:
return downgradeSQLiteDatabaseFromV17(p.dbHandle)
case 18:
return downgradeSQLiteDatabaseFromV18(p.dbHandle)
case 19:
return downgradeSQLiteDatabaseFromV19(p.dbHandle)
default:
return fmt.Errorf("database version not handled: %v", dbVersion.Version)
}
}
func updateSQLiteDatabaseFromV8(dbHandle *sql.DB) error {
if err := updateSQLiteDatabaseFrom8To9(dbHandle); err != nil {
func (p *SQLiteProvider) resetDatabase() error {
sql := sqlReplaceAll(sqliteResetSQL)
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, []string{sql}, 0, false)
}
func updateSQLiteDatabaseFromV15(dbHandle *sql.DB) error {
if err := updateSQLiteDatabaseFrom15To16(dbHandle); err != nil {
return err
}
return updateSQLiteDatabaseFromV9(dbHandle)
return updateSQLiteDatabaseFromV16(dbHandle)
}
func updateSQLiteDatabaseFromV9(dbHandle *sql.DB) error {
return updateSQLiteDatabaseFrom9To10(dbHandle)
}
func downgradeSQLiteDatabaseFromV9(dbHandle *sql.DB) error {
return downgradeSQLiteDatabaseFrom9To8(dbHandle)
}
func downgradeSQLiteDatabaseFromV10(dbHandle *sql.DB) error {
if err := downgradeSQLiteDatabaseFrom10To9(dbHandle); err != nil {
func updateSQLiteDatabaseFromV16(dbHandle *sql.DB) error {
if err := updateSQLiteDatabaseFrom16To17(dbHandle); err != nil {
return err
}
return downgradeSQLiteDatabaseFromV9(dbHandle)
return updateSQLiteDatabaseFromV17(dbHandle)
}
func updateSQLiteDatabaseFrom8To9(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 8 -> 9")
providerLog(logger.LevelInfo, "updating database version: 8 -> 9")
sql := strings.ReplaceAll(sqliteV9SQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 9)
func updateSQLiteDatabaseFromV17(dbHandle *sql.DB) error {
if err := updateSQLiteDatabaseFrom17To18(dbHandle); err != nil {
return err
}
return updateSQLiteDatabaseFromV18(dbHandle)
}
func downgradeSQLiteDatabaseFrom9To8(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 9 -> 8")
providerLog(logger.LevelInfo, "downgrading database version: 9 -> 8")
func updateSQLiteDatabaseFromV18(dbHandle *sql.DB) error {
return updateSQLiteDatabaseFrom18To19(dbHandle)
}
func downgradeSQLiteDatabaseFromV16(dbHandle *sql.DB) error {
return downgradeSQLiteDatabaseFrom16To15(dbHandle)
}
func downgradeSQLiteDatabaseFromV17(dbHandle *sql.DB) error {
if err := downgradeSQLiteDatabaseFrom17To16(dbHandle); err != nil {
return err
}
return downgradeSQLiteDatabaseFromV16(dbHandle)
}
func downgradeSQLiteDatabaseFromV18(dbHandle *sql.DB) error {
if err := downgradeSQLiteDatabaseFrom18To17(dbHandle); err != nil {
return err
}
return downgradeSQLiteDatabaseFromV17(dbHandle)
}
func downgradeSQLiteDatabaseFromV19(dbHandle *sql.DB) error {
if err := downgradeSQLiteDatabaseFrom19To18(dbHandle); err != nil {
return err
}
return downgradeSQLiteDatabaseFromV18(dbHandle)
}
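
Each `updateSQLiteDatabaseFromVN` function applies one migration step and delegates to the next, so upgrading from any supported version replays every intermediate step exactly once; the downgrade chain mirrors this in reverse. For illustration only, the same dispatch could be written table-driven — this is not how SFTPGo does it. The snippet assumes the `database/sql` and `fmt` imports and reuses the step functions defined above.

```go
// Illustration only: a table-driven equivalent of the hand-written
// updateSQLiteDatabaseFromVN chain; SFTPGo itself uses the explicit
// functions shown above.
var upgradeSteps = map[int]func(*sql.DB) error{
	15: updateSQLiteDatabaseFrom15To16,
	16: updateSQLiteDatabaseFrom16To17,
	17: updateSQLiteDatabaseFrom17To18,
	18: updateSQLiteDatabaseFrom18To19,
}

// upgradeFrom walks the chain one version at a time until target is reached.
func upgradeFrom(db *sql.DB, current, target int) error {
	for v := current; v < target; v++ {
		step, ok := upgradeSteps[v]
		if !ok {
			return fmt.Errorf("no upgrade step defined for version %d", v)
		}
		if err := step(db); err != nil {
			return err
		}
	}
	return nil
}
```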
func updateSQLiteDatabaseFrom15To16(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 15 -> 16")
providerLog(logger.LevelInfo, "updating database version: 15 -> 16")
sql := strings.ReplaceAll(sqliteV16SQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{active_transfers}}", sqlTableActiveTransfers)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 16, true)
}
func updateSQLiteDatabaseFrom16To17(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 16 -> 17")
providerLog(logger.LevelInfo, "updating database version: 16 -> 17")
if err := setPragmaFK(dbHandle, "OFF"); err != nil {
return err
}
sql := strings.ReplaceAll(sqliteV9DownSQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql := strings.ReplaceAll(sqliteV17SQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{groups}}", sqlTableGroups)
sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
if err := sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 8); err != nil {
sql = strings.ReplaceAll(sql, "{{folders_mapping}}", sqlTableFoldersMapping)
sql = strings.ReplaceAll(sql, "{{users_folders_mapping}}", sqlTableUsersFoldersMapping)
sql = strings.ReplaceAll(sql, "{{users_groups_mapping}}", sqlTableUsersGroupsMapping)
sql = strings.ReplaceAll(sql, "{{groups_folders_mapping}}", sqlTableGroupsFoldersMapping)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
if err := sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 17, true); err != nil {
return err
}
return setPragmaFK(dbHandle, "ON")
}
func updateSQLiteDatabaseFrom9To10(dbHandle *sql.DB) error {
return sqlCommonUpdateDatabaseFrom9To10(dbHandle)
func updateSQLiteDatabaseFrom17To18(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 17 -> 18")
providerLog(logger.LevelInfo, "updating database version: 17 -> 18")
if err := importGCSCredentials(); err != nil {
return err
}
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, nil, 18, true)
}
func downgradeSQLiteDatabaseFrom10To9(dbHandle *sql.DB) error {
return sqlCommonDowngradeDatabaseFrom10To9(dbHandle)
func updateSQLiteDatabaseFrom18To19(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 18 -> 19")
providerLog(logger.LevelInfo, "updating database version: 18 -> 19")
sql := strings.ReplaceAll(sqliteV19SQL, "{{shared_sessions}}", sqlTableSharedSessions)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 19, true)
}
func downgradeSQLiteDatabaseFrom16To15(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 16 -> 15")
providerLog(logger.LevelInfo, "downgrading database version: 16 -> 15")
sql := strings.ReplaceAll(sqliteV16DownSQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{active_transfers}}", sqlTableActiveTransfers)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 15, false)
}
func downgradeSQLiteDatabaseFrom17To16(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 17 -> 16")
providerLog(logger.LevelInfo, "downgrading database version: 17 -> 16")
if err := setPragmaFK(dbHandle, "OFF"); err != nil {
return err
}
sql := strings.ReplaceAll(sqliteV17DownSQL, "{{groups}}", sqlTableGroups)
sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
sql = strings.ReplaceAll(sql, "{{folders_mapping}}", sqlTableFoldersMapping)
sql = strings.ReplaceAll(sql, "{{users_folders_mapping}}", sqlTableUsersFoldersMapping)
sql = strings.ReplaceAll(sql, "{{users_groups_mapping}}", sqlTableUsersGroupsMapping)
sql = strings.ReplaceAll(sql, "{{groups_folders_mapping}}", sqlTableGroupsFoldersMapping)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
if err := sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 16, false); err != nil {
return err
}
return setPragmaFK(dbHandle, "ON")
}
func downgradeSQLiteDatabaseFrom18To17(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 18 -> 17")
providerLog(logger.LevelInfo, "downgrading database version: 18 -> 17")
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, nil, 17, false)
}
func downgradeSQLiteDatabaseFrom19To18(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 19 -> 18")
providerLog(logger.LevelInfo, "downgrading database version: 19 -> 18")
sql := strings.ReplaceAll(sqliteV19DownSQL, "{{shared_sessions}}", sqlTableSharedSessions)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 18, false)
}
func setPragmaFK(dbHandle *sql.DB, value string) error {
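
The 16 -> 17 migration and its downgrade wrap their table rebuilds in `setPragmaFK(dbHandle, "OFF")` / `setPragmaFK(dbHandle, "ON")` so SQLite's foreign-key enforcement doesn't block dropping and recreating referenced tables. The body of `setPragmaFK` is cut off in this view; a minimal sketch of a plausible implementation:

```go
// Sketch only: the body of setPragmaFK is not shown in this diff. Toggling
// SQLite foreign-key enforcement is a single PRAGMA statement, so a plausible
// implementation looks like this.
func setPragmaFKSketch(dbHandle *sql.DB, value string) error {
	_, err := dbHandle.Exec("PRAGMA foreign_keys=" + value + ";")
	return err
}
```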


@@ -1,3 +1,18 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//go:build nosqlite
// +build nosqlite
package dataprovider
@@ -5,7 +20,7 @@ package dataprovider
import (
"errors"
"github.com/drakkan/sftpgo/version"
"github.com/drakkan/sftpgo/v2/version"
)
func init() {


@@ -1,3 +1,17 @@
// Copyright (C) 2019-2022 Nicola Murino
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published
// by the Free Software Foundation, version 3.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package dataprovider
import (
@@ -5,20 +19,25 @@ import (
"strconv"
"strings"
"github.com/drakkan/sftpgo/vfs"
"github.com/drakkan/sftpgo/v2/vfs"
)
const (
selectUserFields = "id,username,password,public_keys,home_dir,uid,gid,max_sessions,quota_size,quota_files,permissions,used_quota_size," +
"used_quota_files,last_quota_update,upload_bandwidth,download_bandwidth,expiration_date,last_login,status,filters,filesystem," +
"additional_info,description"
"additional_info,description,email,created_at,updated_at,upload_data_transfer,download_data_transfer,total_data_transfer," +
"used_upload_data_transfer,used_download_data_transfer"
selectFolderFields = "id,path,used_quota_size,used_quota_files,last_quota_update,name,description,filesystem"
selectAdminFields = "id,username,password,status,email,permissions,filters,additional_info,description"
selectAdminFields = "id,username,password,status,email,permissions,filters,additional_info,description,created_at,updated_at,last_login"
selectAPIKeyFields = "key_id,name,api_key,scope,created_at,updated_at,last_use_at,expires_at,description,user_id,admin_id"
selectShareFields = "s.share_id,s.name,s.description,s.scope,s.paths,u.username,s.created_at,s.updated_at,s.last_use_at," +
"s.expires_at,s.password,s.max_tokens,s.used_tokens,s.allow_from"
selectGroupFields = "id,name,description,created_at,updated_at,user_settings"
)
func getSQLPlaceholders() []string {
var placeholders []string
for i := 1; i <= 20; i++ {
for i := 1; i <= 50; i++ {
if config.Driver == PGSQLDataProviderName || config.Driver == CockroachDataProviderName {
placeholders = append(placeholders, fmt.Sprintf("$%v", i))
} else {
@@ -28,6 +47,191 @@ func getSQLPlaceholders() []string {
return placeholders
}
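
`getSQLPlaceholders` pre-computes the positional placeholders once, now 50 of them instead of 20, presumably because the wider queries below bind more arguments. PostgreSQL and CockroachDB use `$1..$n` placeholders while MySQL and SQLite use `?`. A self-contained sketch of the same idea, with illustrative driver names in place of SFTPGo's actual constants:

```go
// buildPlaceholders sketches the loop above. The driver name strings are
// illustrative, not SFTPGo's PGSQLDataProviderName/CockroachDataProviderName
// constants.
func buildPlaceholders(driver string, n int) []string {
	placeholders := make([]string, 0, n)
	for i := 1; i <= n; i++ {
		if driver == "postgresql" || driver == "cockroachdb" {
			// positional placeholders: $1, $2, ...
			placeholders = append(placeholders, fmt.Sprintf("$%d", i))
		} else {
			// MySQL and SQLite use anonymous placeholders
			placeholders = append(placeholders, "?")
		}
	}
	return placeholders
}
```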
func getSQLTableGroups() string {
if config.Driver == MySQLDataProviderName {
return fmt.Sprintf("`%s`", sqlTableGroups)
}
return sqlTableGroups
}
func getAddSessionQuery() string {
if config.Driver == MySQLDataProviderName {
return fmt.Sprintf("INSERT INTO %s (`key`,`data`,`type`,`timestamp`) VALUES (%s,%s,%s,%s) "+
"ON DUPLICATE KEY UPDATE `data`=VALUES(`data`), `timestamp`=VALUES(`timestamp`)",
sqlTableSharedSessions, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
}
return fmt.Sprintf(`INSERT INTO %s (key,data,type,timestamp) VALUES (%s,%s,%s,%s) ON CONFLICT(key) DO UPDATE SET data=
EXCLUDED.data, timestamp=EXCLUDED.timestamp`,
sqlTableSharedSessions, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
}
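
`getAddSessionQuery` is a dialect-aware upsert: MySQL needs backticks around the reserved word `key` and uses `ON DUPLICATE KEY UPDATE`, while the PostgreSQL/SQLite form uses `ON CONFLICT ... DO UPDATE` with `EXCLUDED`. A hypothetical caller, with illustrative argument names and types, assuming a `*sql.DB` handle:

```go
// Hypothetical usage of the session upsert above with database/sql; the
// argument names and types are illustrative.
func addSharedSession(db *sql.DB, key string, data []byte, sessionType int, timestamp int64) error {
	_, err := db.Exec(getAddSessionQuery(), key, data, sessionType, timestamp)
	return err
}
```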
func getDeleteSessionQuery() string {
if config.Driver == MySQLDataProviderName {
return fmt.Sprintf("DELETE FROM %s WHERE `key` = %s", sqlTableSharedSessions, sqlPlaceholders[0])
}
return fmt.Sprintf(`DELETE FROM %s WHERE key = %s`, sqlTableSharedSessions, sqlPlaceholders[0])
}
func getSessionQuery() string {
if config.Driver == MySQLDataProviderName {
return fmt.Sprintf("SELECT `key`,`data`,`type`,`timestamp` FROM %s WHERE `key` = %s", sqlTableSharedSessions,
sqlPlaceholders[0])
}
return fmt.Sprintf(`SELECT key,data,type,timestamp FROM %s WHERE key = %s`, sqlTableSharedSessions,
sqlPlaceholders[0])
}
func getCleanupSessionsQuery() string {
return fmt.Sprintf(`DELETE from %s WHERE type = %s AND timestamp < %s`,
sqlTableSharedSessions, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getAddDefenderHostQuery() string {
if config.Driver == MySQLDataProviderName {
return fmt.Sprintf("INSERT INTO %v (`ip`,`updated_at`,`ban_time`) VALUES (%v,%v,0) ON DUPLICATE KEY UPDATE `updated_at`=VALUES(`updated_at`)",
sqlTableDefenderHosts, sqlPlaceholders[0], sqlPlaceholders[1])
}
return fmt.Sprintf(`INSERT INTO %v (ip,updated_at,ban_time) VALUES (%v,%v,0) ON CONFLICT (ip) DO UPDATE SET updated_at = EXCLUDED.updated_at RETURNING id`,
sqlTableDefenderHosts, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getAddDefenderEventQuery() string {
return fmt.Sprintf(`INSERT INTO %v (date_time,score,host_id) VALUES (%v,%v,(SELECT id from %v WHERE ip = %v))`,
sqlTableDefenderEvents, sqlPlaceholders[0], sqlPlaceholders[1], sqlTableDefenderHosts, sqlPlaceholders[2])
}
func getDefenderHostsQuery() string {
return fmt.Sprintf(`SELECT id,ip,ban_time FROM %v WHERE updated_at >= %v OR ban_time > 0 ORDER BY updated_at DESC LIMIT %v`,
sqlTableDefenderHosts, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getDefenderHostQuery() string {
return fmt.Sprintf(`SELECT id,ip,ban_time FROM %v WHERE ip = %v AND (updated_at >= %v OR ban_time > 0)`,
sqlTableDefenderHosts, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getDefenderEventsQuery(hostIDS []int64) string {
var sb strings.Builder
for _, hID := range hostIDS {
if sb.Len() == 0 {
sb.WriteString("(")
} else {
sb.WriteString(",")
}
sb.WriteString(strconv.FormatInt(hID, 10))
}
if sb.Len() > 0 {
sb.WriteString(")")
} else {
sb.WriteString("(0)")
}
return fmt.Sprintf(`SELECT host_id,SUM(score) FROM %v WHERE date_time >= %v AND host_id IN %v GROUP BY host_id`,
sqlTableDefenderEvents, sqlPlaceholders[0], sb.String())
}
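
Several builders here expand a variable-length `IN` list with a `strings.Builder`, falling back to `(0)` (or `('')` for name lists) so the generated SQL stays syntactically valid when the slice is empty. The pattern, extracted into a generic sketch that fits this file's existing `strings` and `strconv` imports:

```go
// buildInClause joins int64 IDs into "(1,2,3)" and falls back to "(0)" for an
// empty slice, so the final "... IN (...)" clause remains valid SQL.
func buildInClause(ids []int64) string {
	if len(ids) == 0 {
		return "(0)"
	}
	var sb strings.Builder
	sb.WriteString("(")
	for i, id := range ids {
		if i > 0 {
			sb.WriteString(",")
		}
		sb.WriteString(strconv.FormatInt(id, 10))
	}
	sb.WriteString(")")
	return sb.String()
}
```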
func getDefenderIsHostBannedQuery() string {
return fmt.Sprintf(`SELECT id FROM %v WHERE ip = %v AND ban_time >= %v`,
sqlTableDefenderHosts, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getDefenderIncrementBanTimeQuery() string {
return fmt.Sprintf(`UPDATE %v SET ban_time = ban_time + %v WHERE ip = %v`,
sqlTableDefenderHosts, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getDefenderSetBanTimeQuery() string {
return fmt.Sprintf(`UPDATE %v SET ban_time = %v WHERE ip = %v`,
sqlTableDefenderHosts, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getDeleteDefenderHostQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE ip = %v`, sqlTableDefenderHosts, sqlPlaceholders[0])
}
func getDefenderHostsCleanupQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE ban_time < %v AND NOT EXISTS (
SELECT id FROM %v WHERE %v.host_id = %v.id AND %v.date_time > %v)`,
sqlTableDefenderHosts, sqlPlaceholders[0], sqlTableDefenderEvents, sqlTableDefenderEvents, sqlTableDefenderHosts,
sqlTableDefenderEvents, sqlPlaceholders[1])
}
func getDefenderEventsCleanupQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE date_time < %v`, sqlTableDefenderEvents, sqlPlaceholders[0])
}
func getGroupByNameQuery() string {
return fmt.Sprintf(`SELECT %s FROM %s WHERE name = %s`, selectGroupFields, getSQLTableGroups(), sqlPlaceholders[0])
}
func getGroupsQuery(order string, minimal bool) string {
var fieldSelection string
if minimal {
fieldSelection = "id,name"
} else {
fieldSelection = selectGroupFields
}
return fmt.Sprintf(`SELECT %s FROM %s ORDER BY name %s LIMIT %v OFFSET %v`, fieldSelection, getSQLTableGroups(),
order, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getGroupsWithNamesQuery(numArgs int) string {
var sb strings.Builder
for idx := 0; idx < numArgs; idx++ {
if sb.Len() == 0 {
sb.WriteString("(")
} else {
sb.WriteString(",")
}
sb.WriteString(sqlPlaceholders[idx])
}
if sb.Len() > 0 {
sb.WriteString(")")
} else {
sb.WriteString("('')")
}
return fmt.Sprintf(`SELECT %s FROM %s WHERE name in %s`, selectGroupFields, getSQLTableGroups(), sb.String())
}
func getUsersInGroupsQuery(numArgs int) string {
var sb strings.Builder
for idx := 0; idx < numArgs; idx++ {
if sb.Len() == 0 {
sb.WriteString("(")
} else {
sb.WriteString(",")
}
sb.WriteString(sqlPlaceholders[idx])
}
if sb.Len() > 0 {
sb.WriteString(")")
} else {
sb.WriteString("('')")
}
return fmt.Sprintf(`SELECT username FROM %s WHERE id IN (SELECT user_id from %s WHERE group_id IN (SELECT id FROM %s WHERE name IN (%s)))`,
sqlTableUsers, sqlTableUsersGroupsMapping, getSQLTableGroups(), sb.String())
}
func getDumpGroupsQuery() string {
return fmt.Sprintf(`SELECT %s FROM %s`, selectGroupFields, getSQLTableGroups())
}
func getAddGroupQuery() string {
return fmt.Sprintf(`INSERT INTO %s (name,description,created_at,updated_at,user_settings)
VALUES (%v,%v,%v,%v,%v)`, getSQLTableGroups(), sqlPlaceholders[0], sqlPlaceholders[1],
sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4])
}
func getUpdateGroupQuery() string {
return fmt.Sprintf(`UPDATE %s SET description=%v,user_settings=%v,updated_at=%v
WHERE name = %s`, getSQLTableGroups(), sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2],
sqlPlaceholders[3])
}
func getDeleteGroupQuery() string {
return fmt.Sprintf(`DELETE FROM %s WHERE name = %s`, getSQLTableGroups(), sqlPlaceholders[0])
}
func getAdminByUsernameQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v WHERE username = %v`, selectAdminFields, sqlTableAdmins, sqlPlaceholders[0])
}
@@ -42,21 +246,142 @@ func getDumpAdminsQuery() string {
}
func getAddAdminQuery() string {
return fmt.Sprintf(`INSERT INTO %v (username,password,status,email,permissions,filters,additional_info,description)
VALUES (%v,%v,%v,%v,%v,%v,%v,%v)`, sqlTableAdmins, sqlPlaceholders[0], sqlPlaceholders[1],
sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7])
return fmt.Sprintf(`INSERT INTO %v (username,password,status,email,permissions,filters,additional_info,description,created_at,updated_at,last_login)
VALUES (%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,0)`, sqlTableAdmins, sqlPlaceholders[0], sqlPlaceholders[1],
sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7],
sqlPlaceholders[8], sqlPlaceholders[9])
}
func getUpdateAdminQuery() string {
return fmt.Sprintf(`UPDATE %v SET password=%v,status=%v,email=%v,permissions=%v,filters=%v,additional_info=%v,description=%v
return fmt.Sprintf(`UPDATE %v SET password=%v,status=%v,email=%v,permissions=%v,filters=%v,additional_info=%v,description=%v,updated_at=%v
WHERE username = %v`, sqlTableAdmins, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2],
sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7])
sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7], sqlPlaceholders[8])
}
func getDeleteAdminQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE username = %v`, sqlTableAdmins, sqlPlaceholders[0])
}
func getShareByIDQuery(filterUser bool) string {
if filterUser {
return fmt.Sprintf(`SELECT %v FROM %v s INNER JOIN %v u ON s.user_id = u.id WHERE s.share_id = %v AND u.username = %v`,
selectShareFields, sqlTableShares, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1])
}
return fmt.Sprintf(`SELECT %v FROM %v s INNER JOIN %v u ON s.user_id = u.id WHERE s.share_id = %v`,
selectShareFields, sqlTableShares, sqlTableUsers, sqlPlaceholders[0])
}
func getSharesQuery(order string) string {
return fmt.Sprintf(`SELECT %v FROM %v s INNER JOIN %v u ON s.user_id = u.id WHERE u.username = %v ORDER BY s.share_id %v LIMIT %v OFFSET %v`,
selectShareFields, sqlTableShares, sqlTableUsers, sqlPlaceholders[0], order, sqlPlaceholders[1], sqlPlaceholders[2])
}
func getDumpSharesQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v s INNER JOIN %v u ON s.user_id = u.id`,
selectShareFields, sqlTableShares, sqlTableUsers)
}
func getAddShareQuery() string {
return fmt.Sprintf(`INSERT INTO %v (share_id,name,description,scope,paths,created_at,updated_at,last_use_at,
expires_at,password,max_tokens,used_tokens,allow_from,user_id) VALUES (%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,%v)`,
sqlTableShares, sqlPlaceholders[0], sqlPlaceholders[1],
sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6],
sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9], sqlPlaceholders[10], sqlPlaceholders[11],
sqlPlaceholders[12], sqlPlaceholders[13])
}
func getUpdateShareRestoreQuery() string {
return fmt.Sprintf(`UPDATE %v SET name=%v,description=%v,scope=%v,paths=%v,created_at=%v,updated_at=%v,
last_use_at=%v,expires_at=%v,password=%v,max_tokens=%v,used_tokens=%v,allow_from=%v,user_id=%v WHERE share_id = %v`, sqlTableShares,
sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4],
sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9],
sqlPlaceholders[10], sqlPlaceholders[11], sqlPlaceholders[12], sqlPlaceholders[13])
}
func getUpdateShareQuery() string {
return fmt.Sprintf(`UPDATE %v SET name=%v,description=%v,scope=%v,paths=%v,updated_at=%v,expires_at=%v,
password=%v,max_tokens=%v,allow_from=%v,user_id=%v WHERE share_id = %v`, sqlTableShares,
sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4],
sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9],
sqlPlaceholders[10])
}
func getDeleteShareQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE share_id = %v`, sqlTableShares, sqlPlaceholders[0])
}
func getAPIKeyByIDQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v WHERE key_id = %v`, selectAPIKeyFields, sqlTableAPIKeys, sqlPlaceholders[0])
}
func getAPIKeysQuery(order string) string {
return fmt.Sprintf(`SELECT %v FROM %v ORDER BY key_id %v LIMIT %v OFFSET %v`, selectAPIKeyFields, sqlTableAPIKeys,
order, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getDumpAPIKeysQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v`, selectAPIKeyFields, sqlTableAPIKeys)
}
func getAddAPIKeyQuery() string {
return fmt.Sprintf(`INSERT INTO %v (key_id,name,api_key,scope,created_at,updated_at,last_use_at,expires_at,description,user_id,admin_id)
VALUES (%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,%v)`, sqlTableAPIKeys, sqlPlaceholders[0], sqlPlaceholders[1],
sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6],
sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9], sqlPlaceholders[10])
}
func getUpdateAPIKeyQuery() string {
return fmt.Sprintf(`UPDATE %v SET name=%v,scope=%v,expires_at=%v,user_id=%v,admin_id=%v,description=%v,updated_at=%v
WHERE key_id = %v`, sqlTableAPIKeys, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2],
sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7])
}
func getDeleteAPIKeyQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE key_id = %v`, sqlTableAPIKeys, sqlPlaceholders[0])
}
func getRelatedUsersForAPIKeysQuery(apiKeys []APIKey) string {
var sb strings.Builder
for _, k := range apiKeys {
if k.userID == 0 {
continue
}
if sb.Len() == 0 {
sb.WriteString("(")
} else {
sb.WriteString(",")
}
sb.WriteString(strconv.FormatInt(k.userID, 10))
}
if sb.Len() > 0 {
sb.WriteString(")")
} else {
sb.WriteString("(0)")
}
return fmt.Sprintf(`SELECT id,username FROM %v WHERE id IN %v`, sqlTableUsers, sb.String())
}
func getRelatedAdminsForAPIKeysQuery(apiKeys []APIKey) string {
var sb strings.Builder
for _, k := range apiKeys {
if k.adminID == 0 {
continue
}
if sb.Len() == 0 {
sb.WriteString("(")
} else {
sb.WriteString(",")
}
sb.WriteString(strconv.FormatInt(k.adminID, 10))
}
if sb.Len() > 0 {
sb.WriteString(")")
} else {
sb.WriteString("(0)")
}
return fmt.Sprintf(`SELECT id,username FROM %v WHERE id IN %v`, sqlTableAdmins, sb.String())
}
func getUserByUsernameQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v WHERE username = %v`, selectUserFields, sqlTableUsers, sqlPlaceholders[0])
}
@@ -66,6 +391,28 @@ func getUsersQuery(order string) string {
order, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getUsersForQuotaCheckQuery(numArgs int) string {
var sb strings.Builder
for idx := 0; idx < numArgs; idx++ {
if sb.Len() == 0 {
sb.WriteString("(")
} else {
sb.WriteString(",")
}
sb.WriteString(sqlPlaceholders[idx])
}
if sb.Len() > 0 {
sb.WriteString(")")
}
return fmt.Sprintf(`SELECT id,username,quota_size,used_quota_size,total_data_transfer,upload_data_transfer,
download_data_transfer,used_upload_data_transfer,used_download_data_transfer,filters FROM %v WHERE username IN %v`,
sqlTableUsers, sb.String())
}
func getRecentlyUpdatedUsersQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v WHERE updated_at >= %v`, selectUserFields, sqlTableUsers, sqlPlaceholders[0])
}
func getDumpUsersQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v`, selectUserFields, sqlTableUsers)
}
@@ -74,6 +421,16 @@ func getDumpFoldersQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v`, selectFolderFields, sqlTableFolders)
}
func getUpdateTransferQuotaQuery(reset bool) string {
if reset {
return fmt.Sprintf(`UPDATE %v SET used_upload_data_transfer = %v,used_download_data_transfer = %v,last_quota_update = %v
WHERE username = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
}
return fmt.Sprintf(`UPDATE %v SET used_upload_data_transfer = used_upload_data_transfer + %v,
used_download_data_transfer = used_download_data_transfer + %v,last_quota_update = %v
WHERE username = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
}
func getUpdateQuotaQuery(reset bool) string {
if reset {
return fmt.Sprintf(`UPDATE %v SET used_quota_size = %v,used_quota_files = %v,last_quota_update = %v
@@ -83,32 +440,60 @@ func getUpdateQuotaQuery(reset bool) string {
WHERE username = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
}
func getSetUpdateAtQuery() string {
return fmt.Sprintf(`UPDATE %v SET updated_at = %v WHERE username = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getUpdateLastLoginQuery() string {
return fmt.Sprintf(`UPDATE %v SET last_login = %v WHERE username = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getUpdateAdminLastLoginQuery() string {
return fmt.Sprintf(`UPDATE %v SET last_login = %v WHERE username = %v`, sqlTableAdmins, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getUpdateAPIKeyLastUseQuery() string {
return fmt.Sprintf(`UPDATE %v SET last_use_at = %v WHERE key_id = %v`, sqlTableAPIKeys, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getUpdateShareLastUseQuery() string {
return fmt.Sprintf(`UPDATE %v SET last_use_at = %v, used_tokens = used_tokens +%v WHERE share_id = %v`,
sqlTableShares, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2])
}
func getQuotaQuery() string {
return fmt.Sprintf(`SELECT used_quota_size,used_quota_files FROM %v WHERE username = %v`, sqlTableUsers,
sqlPlaceholders[0])
return fmt.Sprintf(`SELECT used_quota_size,used_quota_files,used_upload_data_transfer,
used_download_data_transfer FROM %v WHERE username = %v`,
sqlTableUsers, sqlPlaceholders[0])
}
func getAddUserQuery() string {
return fmt.Sprintf(`INSERT INTO %v (username,password,public_keys,home_dir,uid,gid,max_sessions,quota_size,quota_files,permissions,
used_quota_size,used_quota_files,last_quota_update,upload_bandwidth,download_bandwidth,status,last_login,expiration_date,filters,
filesystem,additional_info,description)
VALUES (%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,0,0,0,%v,%v,%v,0,%v,%v,%v,%v,%v)`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1],
sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7],
sqlPlaceholders[8], sqlPlaceholders[9], sqlPlaceholders[10], sqlPlaceholders[11], sqlPlaceholders[12], sqlPlaceholders[13],
sqlPlaceholders[14], sqlPlaceholders[15], sqlPlaceholders[16], sqlPlaceholders[17])
filesystem,additional_info,description,email,created_at,updated_at,upload_data_transfer,download_data_transfer,total_data_transfer,
used_upload_data_transfer,used_download_data_transfer)
VALUES (%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,0,0,0,%v,%v,%v,0,%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,0,0)`,
sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4],
sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9],
sqlPlaceholders[10], sqlPlaceholders[11], sqlPlaceholders[12], sqlPlaceholders[13], sqlPlaceholders[14],
sqlPlaceholders[15], sqlPlaceholders[16], sqlPlaceholders[17], sqlPlaceholders[18], sqlPlaceholders[19],
sqlPlaceholders[20], sqlPlaceholders[21], sqlPlaceholders[22], sqlPlaceholders[23])
}
func getUpdateUserQuery() string {
return fmt.Sprintf(`UPDATE %v SET password=%v,public_keys=%v,home_dir=%v,uid=%v,gid=%v,max_sessions=%v,quota_size=%v,
quota_files=%v,permissions=%v,upload_bandwidth=%v,download_bandwidth=%v,status=%v,expiration_date=%v,filters=%v,filesystem=%v,
additional_info=%v,description=%v WHERE id = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3],
sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9],
sqlPlaceholders[10], sqlPlaceholders[11], sqlPlaceholders[12], sqlPlaceholders[13], sqlPlaceholders[14], sqlPlaceholders[15],
sqlPlaceholders[16], sqlPlaceholders[17])
additional_info=%v,description=%v,email=%v,updated_at=%v,upload_data_transfer=%v,download_data_transfer=%v,
total_data_transfer=%v WHERE id = %v`,
sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4],
sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9],
sqlPlaceholders[10], sqlPlaceholders[11], sqlPlaceholders[12], sqlPlaceholders[13], sqlPlaceholders[14],
sqlPlaceholders[15], sqlPlaceholders[16], sqlPlaceholders[17], sqlPlaceholders[18], sqlPlaceholders[19],
sqlPlaceholders[20], sqlPlaceholders[21], sqlPlaceholders[22])
}
func getUpdateUserPasswordQuery() string {
return fmt.Sprintf(`UPDATE %v SET password=%v WHERE username = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getDeleteUserQuery() string {
@@ -119,10 +504,6 @@ func getFolderByNameQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v WHERE name = %v`, selectFolderFields, sqlTableFolders, sqlPlaceholders[0])
}
func checkFolderNameQuery() string {
return fmt.Sprintf(`SELECT name FROM %v WHERE name = %v`, sqlTableFolders, sqlPlaceholders[0])
}
func getAddFolderQuery() string {
return fmt.Sprintf(`INSERT INTO %v (path,used_quota_size,used_quota_files,last_quota_update,name,description,filesystem)
VALUES (%v,%v,%v,%v,%v,%v,%v)`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2],
@@ -138,19 +519,63 @@ func getDeleteFolderQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE id = %v`, sqlTableFolders, sqlPlaceholders[0])
}
func getClearFolderMappingQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE user_id = (SELECT id FROM %v WHERE username = %v)`, sqlTableFoldersMapping,
func getUpsertFolderQuery() string {
if config.Driver == MySQLDataProviderName {
return fmt.Sprintf("INSERT INTO %v (`path`,`used_quota_size`,`used_quota_files`,`last_quota_update`,`name`,"+
"`description`,`filesystem`) VALUES (%v,%v,%v,%v,%v,%v,%v) ON DUPLICATE KEY UPDATE "+
"`path`=VALUES(`path`),`description`=VALUES(`description`),`filesystem`=VALUES(`filesystem`)",
sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4],
sqlPlaceholders[5], sqlPlaceholders[6])
}
return fmt.Sprintf(`INSERT INTO %v (path,used_quota_size,used_quota_files,last_quota_update,name,description,filesystem)
VALUES (%v,%v,%v,%v,%v,%v,%v) ON CONFLICT (name) DO UPDATE SET path = EXCLUDED.path,description=EXCLUDED.description,
filesystem=EXCLUDED.filesystem`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2],
sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6])
}
func getClearUserGroupMappingQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE user_id = (SELECT id FROM %v WHERE username = %v)`, sqlTableUsersGroupsMapping,
sqlTableUsers, sqlPlaceholders[0])
}
func getAddFolderMappingQuery() string {
return fmt.Sprintf(`INSERT INTO %v (virtual_path,quota_size,quota_files,folder_id,user_id)
VALUES (%v,%v,%v,%v,(SELECT id FROM %v WHERE username = %v))`, sqlTableFoldersMapping, sqlPlaceholders[0],
sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3], sqlTableUsers, sqlPlaceholders[4])
func getAddUserGroupMappingQuery() string {
return fmt.Sprintf(`INSERT INTO %v (user_id,group_id,group_type) VALUES ((SELECT id FROM %v WHERE username = %v),
(SELECT id FROM %v WHERE name = %v),%v)`,
sqlTableUsersGroupsMapping, sqlTableUsers, sqlPlaceholders[0], getSQLTableGroups(), sqlPlaceholders[1], sqlPlaceholders[2])
}
func getFoldersQuery(order string) string {
return fmt.Sprintf(`SELECT %v FROM %v ORDER BY name %v LIMIT %v OFFSET %v`, selectFolderFields, sqlTableFolders,
func getClearGroupFolderMappingQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE group_id = (SELECT id FROM %v WHERE name = %v)`, sqlTableGroupsFoldersMapping,
getSQLTableGroups(), sqlPlaceholders[0])
}
func getAddGroupFolderMappingQuery() string {
return fmt.Sprintf(`INSERT INTO %v (virtual_path,quota_size,quota_files,folder_id,group_id)
VALUES (%v,%v,%v,(SELECT id FROM %v WHERE name = %v),(SELECT id FROM %v WHERE name = %v))`,
sqlTableGroupsFoldersMapping, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlTableFolders,
sqlPlaceholders[3], getSQLTableGroups(), sqlPlaceholders[4])
}
func getClearUserFolderMappingQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE user_id = (SELECT id FROM %v WHERE username = %v)`, sqlTableUsersFoldersMapping,
sqlTableUsers, sqlPlaceholders[0])
}
func getAddUserFolderMappingQuery() string {
return fmt.Sprintf(`INSERT INTO %v (virtual_path,quota_size,quota_files,folder_id,user_id)
VALUES (%v,%v,%v,(SELECT id FROM %v WHERE name = %v),(SELECT id FROM %v WHERE username = %v))`,
sqlTableUsersFoldersMapping, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlTableFolders,
sqlPlaceholders[3], sqlTableUsers, sqlPlaceholders[4])
}
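
The mapping inserts resolve foreign keys with embedded subselects, so callers bind names instead of numeric IDs and save a lookup round-trip. A hypothetical caller of the query above, with illustrative argument names, assuming a `*sql.DB` handle:

```go
// Hypothetical usage of the user/folder mapping insert above: only names are
// bound as parameters; the numeric foreign keys are resolved by the embedded
// subselects on the folders and users tables.
func addUserFolderMapping(db *sql.DB, virtualPath string, quotaSize, quotaFiles int64, folderName, username string) error {
	_, err := db.Exec(getAddUserFolderMappingQuery(), virtualPath, quotaSize, quotaFiles, folderName, username)
	return err
}
```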
func getFoldersQuery(order string, minimal bool) string {
var fieldSelection string
if minimal {
fieldSelection = "id,name"
} else {
fieldSelection = selectFolderFields
}
return fmt.Sprintf(`SELECT %v FROM %v ORDER BY name %v LIMIT %v OFFSET %v`, fieldSelection, sqlTableFolders,
order, sqlPlaceholders[0], sqlPlaceholders[1])
}
@@ -168,6 +593,23 @@ func getQuotaFolderQuery() string {
sqlPlaceholders[0])
}
func getRelatedGroupsForUsersQuery(users []User) string {
var sb strings.Builder
for _, u := range users {
if sb.Len() == 0 {
sb.WriteString("(")
} else {
sb.WriteString(",")
}
sb.WriteString(strconv.FormatInt(u.ID, 10))
}
if sb.Len() > 0 {
sb.WriteString(")")
}
return fmt.Sprintf(`SELECT g.name,ug.group_type,ug.user_id FROM %v g INNER JOIN %v ug ON g.id = ug.group_id WHERE
ug.user_id IN %v ORDER BY ug.user_id`, getSQLTableGroups(), sqlTableUsersGroupsMapping, sb.String())
}
func getRelatedFoldersForUsersQuery(users []User) string {
var sb strings.Builder
for _, u := range users {
@@ -183,7 +625,7 @@ func getRelatedFoldersForUsersQuery(users []User) string {
}
return fmt.Sprintf(`SELECT f.id,f.name,f.path,f.used_quota_size,f.used_quota_files,f.last_quota_update,fm.virtual_path,
fm.quota_size,fm.quota_files,fm.user_id,f.filesystem,f.description FROM %v f INNER JOIN %v fm ON f.id = fm.folder_id WHERE
fm.user_id IN %v ORDER BY fm.user_id`, sqlTableFolders, sqlTableFoldersMapping, sb.String())
fm.user_id IN %v ORDER BY fm.user_id`, sqlTableFolders, sqlTableUsersFoldersMapping, sb.String())
}
func getRelatedUsersForFoldersQuery(folders []vfs.BaseVirtualFolder) string {
@@ -200,7 +642,87 @@ func getRelatedUsersForFoldersQuery(folders []vfs.BaseVirtualFolder) string {
sb.WriteString(")")
}
return fmt.Sprintf(`SELECT fm.folder_id,u.username FROM %v fm INNER JOIN %v u ON fm.user_id = u.id
WHERE fm.folder_id IN %v ORDER BY fm.folder_id`, sqlTableFoldersMapping, sqlTableUsers, sb.String())
WHERE fm.folder_id IN %v ORDER BY fm.folder_id`, sqlTableUsersFoldersMapping, sqlTableUsers, sb.String())
}
func getRelatedGroupsForFoldersQuery(folders []vfs.BaseVirtualFolder) string {
var sb strings.Builder
for _, f := range folders {
if sb.Len() == 0 {
sb.WriteString("(")
} else {
sb.WriteString(",")
}
sb.WriteString(strconv.FormatInt(f.ID, 10))
}
if sb.Len() > 0 {
sb.WriteString(")")
}
return fmt.Sprintf(`SELECT fm.folder_id,g.name FROM %v fm INNER JOIN %v g ON fm.group_id = g.id
WHERE fm.folder_id IN %v ORDER BY fm.folder_id`, sqlTableGroupsFoldersMapping, getSQLTableGroups(), sb.String())
}
func getRelatedUsersForGroupsQuery(groups []Group) string {
var sb strings.Builder
for _, g := range groups {
if sb.Len() == 0 {
sb.WriteString("(")
} else {
sb.WriteString(",")
}
sb.WriteString(strconv.FormatInt(g.ID, 10))
}
if sb.Len() > 0 {
sb.WriteString(")")
}
return fmt.Sprintf(`SELECT um.group_id,u.username FROM %v um INNER JOIN %v u ON um.user_id = u.id
WHERE um.group_id IN %v ORDER BY um.group_id`, sqlTableUsersGroupsMapping, sqlTableUsers, sb.String())
}
func getRelatedFoldersForGroupsQuery(groups []Group) string {
var sb strings.Builder
for _, g := range groups {
if sb.Len() == 0 {
sb.WriteString("(")
} else {
sb.WriteString(",")
}
sb.WriteString(strconv.FormatInt(g.ID, 10))
}
if sb.Len() > 0 {
sb.WriteString(")")
}
return fmt.Sprintf(`SELECT f.id,f.name,f.path,f.used_quota_size,f.used_quota_files,f.last_quota_update,fm.virtual_path,
fm.quota_size,fm.quota_files,fm.group_id,f.filesystem,f.description FROM %s f INNER JOIN %s fm ON f.id = fm.folder_id WHERE
fm.group_id IN %v ORDER BY fm.group_id`, sqlTableFolders, sqlTableGroupsFoldersMapping, sb.String())
}
func getActiveTransfersQuery() string {
return fmt.Sprintf(`SELECT transfer_id,connection_id,transfer_type,username,folder_name,ip,truncated_size,
current_ul_size,current_dl_size,created_at,updated_at FROM %v WHERE updated_at > %v`,
sqlTableActiveTransfers, sqlPlaceholders[0])
}
func getAddActiveTransferQuery() string {
return fmt.Sprintf(`INSERT INTO %v (transfer_id,connection_id,transfer_type,username,folder_name,ip,truncated_size,
current_ul_size,current_dl_size,created_at,updated_at) VALUES (%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,%v)`,
sqlTableActiveTransfers, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3],
sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7], sqlPlaceholders[8],
sqlPlaceholders[9], sqlPlaceholders[10])
}
func getUpdateActiveTransferSizesQuery() string {
return fmt.Sprintf(`UPDATE %v SET current_ul_size=%v,current_dl_size=%v,updated_at=%v WHERE connection_id = %v AND transfer_id = %v`,
sqlTableActiveTransfers, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4])
}
func getRemoveActiveTransferQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE connection_id = %v AND transfer_id = %v`,
sqlTableActiveTransfers, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getCleanupActiveTransfersQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE updated_at < %v`, sqlTableActiveTransfers, sqlPlaceholders[0])
}
func getDatabaseVersionQuery() string {
@@ -210,19 +732,3 @@ func getDatabaseVersionQuery() string {
func getUpdateDBVersionQuery() string {
return fmt.Sprintf(`UPDATE %v SET version=%v`, sqlTableSchemaVersion, sqlPlaceholders[0])
}
func getCompatUserV10FsConfigQuery() string {
return fmt.Sprintf(`SELECT id,username,filesystem FROM %v`, sqlTableUsers)
}
func updateCompatUserV10FsConfigQuery() string {
return fmt.Sprintf(`UPDATE %v SET filesystem=%v WHERE id=%v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getCompatFolderV10FsConfigQuery() string {
return fmt.Sprintf(`SELECT id,name,filesystem FROM %v`, sqlTableFolders)
}
func updateCompatFolderV10FsConfigQuery() string {
return fmt.Sprintf(`UPDATE %v SET filesystem=%v WHERE id=%v`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1])
}

File diff suppressed because it is too large


@@ -4,14 +4,16 @@ SFTPGo provides an official Docker image, it is available on both [Docker Hub](h
## Supported tags and respective Dockerfile links
- [v2.1.0, v2.1, v2, latest](https://github.com/drakkan/sftpgo/blob/v2.1.0/Dockerfile)
- [v2.1.0-alpine, v2.1-alpine, v2-alpine, alpine](https://github.com/drakkan/sftpgo/blob/v2.1.0/Dockerfile.alpine)
- [v2.1.0-slim, v2.1-slim, v2-slim, slim](https://github.com/drakkan/sftpgo/blob/v2.1.0/Dockerfile)
- [v2.1.0-alpine-slim, v2.1-alpine-slim, v2-alpine-slim, alpine-slim](https://github.com/drakkan/sftpgo/blob/v2.1.0/Dockerfile.alpine)
- [v2.3.3, v2.3, v2, latest](https://github.com/drakkan/sftpgo/blob/v2.3.3/Dockerfile)
- [v2.3.3-alpine, v2.3-alpine, v2-alpine, alpine](https://github.com/drakkan/sftpgo/blob/v2.3.3/Dockerfile.alpine)
- [v2.3.3-slim, v2.3-slim, v2-slim, slim](https://github.com/drakkan/sftpgo/blob/v2.3.3/Dockerfile)
- [v2.3.3-alpine-slim, v2.3-alpine-slim, v2-alpine-slim, alpine-slim](https://github.com/drakkan/sftpgo/blob/v2.3.3/Dockerfile.alpine)
- [v2.3.3-distroless-slim, v2.3-distroless-slim, v2-distroless-slim, distroless-slim](https://github.com/drakkan/sftpgo/blob/v2.3.3/Dockerfile.distroless)
- [edge](../Dockerfile)
- [edge-alpine](../Dockerfile.alpine)
- [edge-slim](../Dockerfile)
- [edge-alpine-slim](../Dockerfile.alpine)
- [edge-distroless-slim](../Dockerfile.distroless)
## How to use the SFTPGo image
@@ -20,12 +22,12 @@ SFTPGo provides an official Docker image, it is available on both [Docker Hub](h
Starting a SFTPGo instance is simple:
```shell
docker run --name some-sftpgo -p 127.0.0.1:8080:8080 -p 2022:2022 -d "drakkan/sftpgo:tag"
docker run --name some-sftpgo -p 8080:8080 -p 2022:2022 -d "drakkan/sftpgo:tag"
```
... where `some-sftpgo` is the name you want to assign to your container, and `tag` is the tag specifying the SFTPGo version you want. See the list above for relevant tags.
Now visit [http://localhost:8080/web/admin](http://localhost:8080/web/admin), create the first admin and then log in and create a new SFTPGo user. The SFTP service is available on port 2022.
Now visit [http://localhost:8080/web/admin](http://localhost:8080/web/admin), replacing `localhost` with the appropriate IP address if SFTPGo is not reachable on localhost, create the first admin and a new SFTPGo user. The SFTP service is available on port 2022.
If you don't want to persist any files, for example for testing purposes, you can run an SFTPGo instance like this:
@@ -87,6 +89,8 @@ The logs are available through Docker's container log:
docker logs some-sftpgo
```
**Note:** the [distroless](../Dockerfile.distroless) image contains only a statically linked sftpgo binary and its minimal runtime dependencies. No shell is available in this image.
### Where to Store Data
Important note: There are several ways to store data used by applications that run in Docker containers. We encourage users of the SFTPGo images to familiarize themselves with the options available, including:
@@ -96,13 +100,13 @@ Important note: There are several ways to store data used by applications that r
The Docker documentation is a good starting point for understanding the different storage options and variations, and there are multiple blogs and forum postings that discuss and give advice in this area. We will simply show the basic procedure here for the latter option above:
1. Create a data directory on a suitable volume on your host system, e.g. `/my/own/sftpgodata`.
2. Create a home directory for the sftpgo container user on your host system e.g. `/my/own/sftpgohome`.
1. Create a data directory on a suitable volume on your host system, e.g. `/my/own/sftpgodata`. The user with ID `1000` must be able to write to this directory. Please note that you don't need an actual user with ID `1000` on your host system: `chown -R 1000:1000 /my/own/sftpgodata` is enough even if there is no user/group with UID/GID `1000`.
2. Create a home directory for the sftpgo container user on your host system e.g. `/my/own/sftpgohome`. As with the data directory above, make sure that the user with ID `1000` can write to this directory: `chown -R 1000:1000 /my/own/sftpgohome`
3. Start your SFTPGo container like this:
```shell
docker run --name some-sftpgo \
-p 127.0.0.1:8080:8090 \
-p 8080:8090 \
-p 2022:2022 \
--mount type=bind,source=/my/own/sftpgodata,target=/srv/sftpgo \
--mount type=bind,source=/my/own/sftpgohome,target=/var/lib/sftpgo \
@@ -121,7 +125,7 @@ If you want to get fine grained control, you can also mount `/srv/sftpgo/data` a
The runtime configuration can be customized via environment variables that you can set by passing the `-e` option to the `docker run` command, or inside the `environment` section if you are using [docker stack deploy](https://docs.docker.com/engine/reference/commandline/stack_deploy/) or [docker-compose](https://github.com/docker/compose).
Please take a look [here](../docs/full-configuration.md#environment-variables) to learn how to configure SFTPGo via environment variables.
Please take a look [here](../docs/full-configuration.md) to learn how to configure SFTPGo via environment variables.
Alternatively, you can mount your custom configuration file to `/var/lib/sftpgo` or `/var/lib/sftpgo/.config/sftpgo`.
@@ -129,7 +133,7 @@ Alternately you can mount your custom configuration file to `/var/lib/sftpgo` or
Initial data can be loaded in the following ways:
- via the `--loaddata-from` flag or the `SFTPGO_LOADDATA_FROM` environment variable
- via the `--loaddata-from` flag or the `SFTPGO_LOADDATA_FROM` environment variable. This flag is supported for both the `serve` command (load initial data and start the service) and the `initprovider` command (initialize the provider, load initial data and exit)
- by providing a dump file to the memory provider
Please take a look [here](../docs/full-configuration.md) for more details.
@@ -150,7 +154,7 @@ With the above directory permissions, you can start a SFTPGo instance like this:
```shell
docker run --name some-sftpgo \
--user 1100:1100 \
-p 127.0.0.1:8080:8080 \
-p 8080:8080 \
-p 2022:2022 \
--mount type=bind,source="${PWD}/data",target=/srv/sftpgo \
--mount type=bind,source="${PWD}/config",target=/var/lib/sftpgo \
@@ -166,9 +170,11 @@ RUN chown -R 1100:1100 /etc/sftpgo && chown 1100:1100 /var/lib/sftpgo /srv/sftpg
USER 1100:1100
```
**Note:** the above Dockerfile will not work if you use the [distroless](../Dockerfile.distroless) image as base since the `chown` command is not available there.
## Image Variants
The `sftpgo` images come in many flavors, each designed for a specific use case. The `edge` and `edge-alpine` tags are updated after each new commit.
The `sftpgo` images come in many flavors, each designed for a specific use case. The `edge`, `edge-slim`, `edge-alpine`, `edge-alpine-slim` and `edge-distroless-slim` tags are updated after each new commit.
### `sftpgo:<version>`
@@ -180,9 +186,18 @@ This image is based on the popular [Alpine Linux project](https://alpinelinux.or
This variant is highly recommended when final image size being as small as possible is desired. The main caveat to note is that it does use [musl libc](https://musl.libc.org/) instead of [glibc and friends](https://www.etalabs.net/compare_libcs.html), so certain software might run into issues depending on the depth of their libc requirements. However, most software doesn't have an issue with this, so this variant is usually a very safe choice. See [this Hacker News comment thread](https://news.ycombinator.com/item?id=10782897) for more discussion of the issues that might arise and some pro/con comparisons of using Alpine-based images.
### `sftpgo:<version>-distroless`
This image is based on the popular [Distroless project](https://github.com/GoogleContainerTools/distroless). We use the latest Debian-based distroless image as the base.
The distroless variant contains only a statically linked sftpgo binary and its minimal runtime dependencies, so it doesn't allow shell access (no shell is installed).
SQLite support is disabled since it requires CGO, and therefore a C runtime, which is not installed.
The default data provider is `bolt`; all the supported data providers except `sqlite` work.
We only provide the slim variant, so the optional `git` dependency is not available.
### `sftpgo:<suite>-slim`
These tags provide a slimmer image that does not include the optional `git` and `rsync` dependencies.
These tags provide a slimmer image that does not include the optional `git` dependency.
## Helm Chart


@@ -1,6 +1,6 @@
# Account's configuration properties
Please take a look at the [OpenAPI schema](../httpd/schema/openapi.yaml) for the exact definitions of user, folder and admin fields.
Please take a look at the [OpenAPI schema](../openapi/openapi.yaml) for the exact definitions of user, folder and admin fields.
If you need an example you can export a dump using the Web Admin or by invoking the `dumpdata` endpoint directly; you need to obtain an access token first, for example:
```shell
@@ -10,7 +10,7 @@ $ curl "http://admin:password@127.0.0.1:8080/api/v2/token"
curl -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOlsiQVBJIl0sImV4cCI6MTYxMzMzNTI2MSwianRpIjoiYzBrb2gxZmNkcnBjaHNzMGZwZmciLCJuYmYiOjE2MTMzMzQ2MzEsInBlcm1pc3Npb25zIjpbIioiXSwic3ViIjoiYUJ0SHUwMHNBUmxzZ29yeEtLQ1pZZWVqSTRKVTlXbThHSGNiVWtWVmc1TT0iLCJ1c2VybmFtZSI6ImFkbWluIn0.WiyqvUF-92zCr--y4Q_sxn-tPnISFzGZd_exsG-K7ME" "http://127.0.0.1:8080/api/v2/dumpdata?output-data=1"
```
the dump is a JSON with users, folders and admins.
the dump is a JSON file with all SFTPGo data, including users, folders and admins.
These properties are stored inside the configured data provider.


@@ -17,4 +17,4 @@ For multipart uploads you can customize the parts size and the upload concurrenc
The configured container must exist.
This backend is very similar to the [S3](./s3.md) backend, and it has the same limitations.
This backend is very similar to the [S3](./s3.md) backend, and it has the same limitations. As with S3, `chtime` will fail with the default configuration; you can install the [metadata plugin](https://github.com/sftpgo/sftpgo-plugin-metadata) to make it work and thus be able to preserve/change file modification times.


@@ -13,9 +13,6 @@ The following build tags are available:
- `nosqlite`, disable SQLite data provider, default enabled
- `noportable`, disable portable mode, default enabled
- `nometrics`, disable Prometheus metrics, default enabled
- `novaultkms`, disable Vault transit secret engine, default enabled
- `noawskms`, disable AWS KMS, default enabled
- `nogcpkms`, disable GCP KMS, default enabled
If no build tag is specified the build will include the default features.
@@ -26,13 +23,13 @@ The compiler is a build time only dependency. It is not required at runtime.
Version info, such as git commit and build date, can be embedded by setting the following string variables at build time:
- `github.com/drakkan/sftpgo/version.commit`
- `github.com/drakkan/sftpgo/version.date`
- `github.com/drakkan/sftpgo/v2/version.commit`
- `github.com/drakkan/sftpgo/v2/version.date`
For example, you can build using the following command:
```bash
go build -tags nogcs,nos3,nosqlite -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
go build -tags nogcs,nos3,nosqlite -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
```
You should get a version that includes git commit, build date and available features like this one:


@@ -1,5 +1,9 @@
# Custom Actions
SFTPGo can notify filesystem and provider events using custom actions. A custom action can be an external program or an HTTP URL.
## Filesystem events
The `actions` struct inside the `common` configuration section allows you to configure the actions for file operations and SSH commands.
The `hook` can be defined as the absolute path of your program or an HTTP URL.
@@ -12,9 +16,12 @@ The following `actions` are supported:
- `delete`
- `pre-delete`
- `rename`
- `mkdir`
- `rmdir`
- `ssh_cmd`
The `upload` condition includes both uploads to new files and overwrites of existing files. If an upload is aborted due to quota limits SFTPGo tries to remove the partial file, so if the notification reports a zero size file and a quota exceeded error the file has been deleted. The `ssh_cmd` condition will be triggered after a command is successfully executed via SSH. `scp` will trigger the `download` and `upload` conditions and not `ssh_cmd`.
The `upload` condition includes both uploads to new files and overwrites of existing ones. If an upload is aborted due to quota limits SFTPGo tries to remove the partial file, so if the notification reports a zero size file and a quota exceeded error the file has been deleted. The `ssh_cmd` condition will be triggered after a command is successfully executed via SSH. `scp` will trigger the `download` and `upload` conditions and not `ssh_cmd`.
For cloud backends, directories are virtual: they are created implicitly when you upload a file and are implicitly removed when the last file within a directory is removed. The `mkdir` and `rmdir` notifications are sent only when a directory is explicitly created or removed.
The notification will indicate whether an error was detected and so, for example, whether only a partial file was uploaded.
@@ -22,76 +29,90 @@ The `pre-delete` action, if defined, will be called just before files deletion.
The `pre-download` and `pre-upload` actions will be called before downloads and uploads. If the external command completes with a zero exit status or the HTTP notification response code is `200` then SFTPGo allows the operation, otherwise the client will get a permission denied error.
If the `hook` defines a path to an external program, then this program is invoked with the following arguments:
If the `hook` defines a path to an external program, then this program can read the following environment variables:
- `action`, string, supported action
- `username`
- `path` is the full filesystem path, can be empty for some ssh commands
- `target_path`, non-empty for `rename` action and for `sftpgo-copy` SSH command
- `ssh_cmd`, non-empty for `ssh_cmd` action
The external program can also read the following environment variables:
- `SFTPGO_ACTION`
- `SFTPGO_ACTION`, supported action
- `SFTPGO_ACTION_USERNAME`
- `SFTPGO_ACTION_PATH`
- `SFTPGO_ACTION_TARGET`, non-empty for `rename` `SFTPGO_ACTION`
- `SFTPGO_ACTION_PATH`, is the full filesystem path, can be empty for some ssh commands
- `SFTPGO_ACTION_TARGET`, full filesystem path, non-empty for `rename` `SFTPGO_ACTION` and for some SSH commands
- `SFTPGO_ACTION_VIRTUAL_PATH`, virtual path, seen by SFTPGo users
- `SFTPGO_ACTION_VIRTUAL_TARGET`, virtual target path, seen by SFTPGo users
- `SFTPGO_ACTION_SSH_CMD`, non-empty for `ssh_cmd` `SFTPGO_ACTION`
- `SFTPGO_ACTION_FILE_SIZE`, non-zero for `pre-upload`,`upload`, `download` and `delete` actions if the file size is greater than `0`
- `SFTPGO_ACTION_FS_PROVIDER`, `0` for local filesystem, `1` for S3 backend, `2` for Google Cloud Storage (GCS) backend, `3` for Azure Blob Storage backend, `4` for local encrypted backend, `5` for SFTP backend
- `SFTPGO_ACTION_BUCKET`, non-empty for S3, GCS and Azure backends
- `SFTPGO_ACTION_ENDPOINT`, non-empty for S3, SFTP and Azure backend if configured. For Azure this is the endpoint, if configured
- `SFTPGO_ACTION_STATUS`, integer. Status for `upload`, `download` and `ssh_cmd` actions. 0 means a generic error occurred. 1 means no error, 2 means quota exceeded error
- `SFTPGO_ACTION_PROTOCOL`, string. Possible values are `SSH`, `SFTP`, `SCP`, `FTP`, `DAV`, `HTTP`
- `SFTPGO_ACTION_ENDPOINT`, non-empty for S3, SFTP and Azure backend if configured
- `SFTPGO_ACTION_STATUS`, integer. Status for `upload`, `download` and `ssh_cmd` actions. 1 means no error, 2 means a generic error occurred, 3 means quota exceeded error
- `SFTPGO_ACTION_PROTOCOL`, string. Possible values are `SSH`, `SFTP`, `SCP`, `FTP`, `DAV`, `HTTP`, `HTTPShare`, `OIDC`, `DataRetention`
- `SFTPGO_ACTION_IP`, the action was executed from this IP address
- `SFTPGO_ACTION_SESSION_ID`, string. Unique protocol session identifier. For stateless protocols such as HTTP the session id will change for each request
- `SFTPGO_ACTION_OPEN_FLAGS`, integer. File open flags, can be non-zero for `pre-upload` action. If `SFTPGO_ACTION_FILE_SIZE` is greater than zero and `SFTPGO_ACTION_OPEN_FLAGS&512 == 0` the target file will not be truncated
- `SFTPGO_ACTION_TIMESTAMP`, int64. Event timestamp as nanoseconds since epoch
The existing global environment variables aren't cleared when the script is called.
The program must finish within 30 seconds.
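As an illustration, a minimal Go program usable as such a hook might look like the sketch below. It only reads a few of the variables listed above, and the overwrite-denial policy and log format are invented for this example:

```go
// Hypothetical hook sketch: log a few of the SFTPGO_ACTION_* variables
// and deny overwrites of existing files on pre-upload.
package main

import (
	"fmt"
	"os"
	"strconv"
)

func main() {
	action := os.Getenv("SFTPGO_ACTION")
	username := os.Getenv("SFTPGO_ACTION_USERNAME")
	virtualPath := os.Getenv("SFTPGO_ACTION_VIRTUAL_PATH")
	fmt.Fprintf(os.Stderr, "action=%q user=%q path=%q\n", action, username, virtualPath)

	if action == "pre-upload" {
		// For pre-upload, a non-zero file size means the target file already exists.
		size, _ := strconv.ParseInt(os.Getenv("SFTPGO_ACTION_FILE_SIZE"), 10, 64)
		if size > 0 {
			os.Exit(1) // non-zero exit status: SFTPGo denies the operation
		}
	}
	// zero exit status: SFTPGo allows the operation
}
```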
If the `hook` defines an HTTP URL then this URL will be invoked as HTTP POST. The request body will contain a JSON serialized struct with the following fields:
- `action`, string
- `username`, string
- `path`, string
- `target_path`, string, included for `rename` action and `sftpgo-copy` SSH command
- `virtual_path`, string, virtual path, seen by SFTPGo users
- `virtual_target_path`, string, virtual target path, seen by SFTPGo users
- `ssh_cmd`, string, included for `ssh_cmd` action
- `file_size`, int64, included for `pre-upload`, `upload`, `download`, `delete` actions if the file size is greater than `0`
- `fs_provider`, integer, `0` for local filesystem, `1` for S3 backend, `2` for Google Cloud Storage (GCS) backend, `3` for Azure Blob Storage backend, `4` for local encrypted backend, `5` for SFTP backend
- `bucket`, string, included for S3, GCS and Azure backends
- `endpoint`, string, included for S3, SFTP and Azure backend if configured
- `status`, integer. Status for `upload`, `download` and `ssh_cmd` actions. 1 means no error, 2 means a generic error occurred, 3 means quota exceeded error
- `protocol`, string. Possible values are `SSH`, `SFTP`, `SCP`, `FTP`, `DAV`, `HTTP`, `HTTPShare`, `OIDC`, `DataRetention`
- `ip`, string. The action was executed from this IP address
- `session_id`, string. Unique protocol session identifier. For stateless protocols such as HTTP the session id will change for each request
- `open_flags`, integer. File open flags, can be non-zero for `pre-upload` action. If `file_size` is greater than zero and `open_flags&512 == 0` the target file will not be truncated
- `timestamp`, int64. Event timestamp as nanoseconds since epoch
The HTTP hook will use the global configuration for HTTP clients and will respect the retry configurations.
The `pre-*` actions are always executed synchronously while the other ones are asynchronous. You can specify the actions to run synchronously via the `execute_sync` configuration key. Executing an action synchronously means that SFTPGo will not return a result code to the client (which is waiting for it) until your hook has completed its execution. If your hook takes a long time to complete, this could cause a timeout on the client side: the client wouldn't receive the server response in a timely manner and could eventually drop the connection.
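For illustration, a hypothetical HTTP hook receiver in Go could decode the JSON body described above and use the response status code to allow or deny `pre-*` operations. The `/hook` path, the port and the deny rule are invented for this sketch, and only the inspected JSON fields are declared:

```go
// Hypothetical HTTP hook receiver: answer pre-* notifications with
// 200 (allow) or a non-200 status (deny).
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"strings"
)

// Subset of the notification fields documented above.
type event struct {
	Action      string `json:"action"`
	Username    string `json:"username"`
	VirtualPath string `json:"virtual_path"`
	FileSize    int64  `json:"file_size"`
	Protocol    string `json:"protocol"`
}

func main() {
	http.HandleFunc("/hook", func(w http.ResponseWriter, r *http.Request) {
		var ev event
		if err := json.NewDecoder(r.Body).Decode(&ev); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		log.Printf("action=%s user=%s path=%s", ev.Action, ev.Username, ev.VirtualPath)
		// Made-up policy: deny downloads of files ending in ".secret".
		if ev.Action == "pre-download" && strings.HasSuffix(ev.VirtualPath, ".secret") {
			w.WriteHeader(http.StatusForbidden) // any non-200 code denies the operation
			return
		}
		w.WriteHeader(http.StatusOK) // 200 allows the operation
	})
	log.Fatal(http.ListenAndServe(":8000", nil))
}
```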
## Provider events
The `actions` struct inside the `data_provider` configuration section allows you to configure actions when data provider objects are added, updated or deleted.
The supported object types are:
- `user`
- `folder`
- `group`
- `admin`
- `api_key`
Actions will not be fired for internal updates, such as the last login or the user quota fields, or after external authentication.
If the `hook` defines a path to an external program, then this program can read the following environment variables:
- `SFTPGO_PROVIDER_ACTION`, supported values are `add`, `update`, `delete`
- `SFTPGO_PROVIDER_OBJECT_TYPE`, affected object type
- `SFTPGO_PROVIDER_OBJECT_NAME`, unique identifier for the affected object, for example username or key id
- `SFTPGO_PROVIDER_USERNAME`, the username that executed the action. There are two special usernames: `__self__` identifies a user/admin that updates itself, and `__system__` identifies an action that does not have an explicit executor associated with it; for example, users/admins can be added/updated by loading them from initial data
- `SFTPGO_PROVIDER_IP`, the action was executed from this IP address
- `SFTPGO_PROVIDER_TIMESTAMP`, event timestamp as nanoseconds since epoch
- `SFTPGO_PROVIDER_OBJECT`, object serialized as JSON with sensitive fields removed
The existing global environment variables aren't cleared when the script is called.
The program must finish within 15 seconds.
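A minimal sketch of such a program in Go, reading the `SFTPGO_PROVIDER_*` variables listed above (the log format is arbitrary):

```go
// Hypothetical provider-events hook: read the documented environment
// variables and count the fields of the serialized object.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	action := os.Getenv("SFTPGO_PROVIDER_ACTION")       // add, update or delete
	objType := os.Getenv("SFTPGO_PROVIDER_OBJECT_TYPE") // user, folder, group, admin, api_key
	objName := os.Getenv("SFTPGO_PROVIDER_OBJECT_NAME")

	// The affected object is serialized as JSON with sensitive fields removed.
	var obj map[string]any
	if err := json.Unmarshal([]byte(os.Getenv("SFTPGO_PROVIDER_OBJECT")), &obj); err == nil {
		fmt.Printf("%s %s %q (%d fields)\n", action, objType, objName, len(obj))
	}
}
```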
If the `hook` defines an HTTP URL then this URL will be invoked as HTTP POST. The action, username, ip, object_type, object_name and timestamp are added to the query string, for example `<hook>?action=update&username=admin&ip=127.0.0.1&object_type=user&object_name=user1&timestamp=1633860803249`, and the full object is sent serialized as JSON inside the POST body, with sensitive fields removed.
The HTTP hook will use the global configuration for HTTP clients and will respect the retry configurations.
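For illustration, a hypothetical Go receiver for this hook could parse the metadata from the query string and read the serialized object from the POST body (the `/provider-hook` path and port are invented):

```go
// Hypothetical receiver for the provider-events HTTP hook: metadata
// arrives in the query string, the object in the POST body.
package main

import (
	"io"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/provider-hook", func(w http.ResponseWriter, r *http.Request) {
		q := r.URL.Query()
		body, _ := io.ReadAll(r.Body) // JSON object, sensitive fields removed
		log.Printf("action=%s object_type=%s object_name=%s user=%s ip=%s ts=%s body=%d bytes",
			q.Get("action"), q.Get("object_type"), q.Get("object_name"),
			q.Get("username"), q.Get("ip"), q.Get("timestamp"), len(body))
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":9000", nil))
}
```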
The structure for SFTPGo objects can be found within the [OpenAPI schema](../openapi/openapi.yaml).
## Pub/Sub services
You can forward SFTPGo events to several publish/subscribe systems using the [sftpgo-plugin-pubsub](https://github.com/sftpgo/sftpgo-plugin-pubsub). The SFTPGo notifier plugins are not suitable for interactive actions such as `pre-*` events: their scope is simply to forward events to external services. A custom hook is a better choice if you need to react to `pre-*` events.
## Database services
You can store SFTPGo events in database systems using the [sftpgo-plugin-eventstore](https://github.com/sftpgo/sftpgo-plugin-eventstore) and you can search the stored events using the [sftpgo-plugin-eventsearch](https://github.com/sftpgo/sftpgo-plugin-eventsearch).


@@ -1,6 +1,8 @@
# Data At Rest Encryption (DARE)
SFTPGo supports data at-rest encryption via its `cryptfs` virtual file system. In this mode, SFTPGo transparently encrypts and decrypts data (to/from the local disk) on the fly during uploads and downloads, making sure that the files at rest on the server side are always encrypted.
Data At Rest Encryption is supported for the local filesystem; for cloud storage backends you can use their server-side encryption features.
Because of the way it works, as described above, when you set up an encrypted filesystem for a user you need to make sure it points to an empty path/directory (one that contains no files). Otherwise, SFTPGo would try to decrypt existing files that were never encrypted in the first place and fail.
