Compare commits


173 Commits

Author SHA1 Message Date
Nicola Murino
29836edf2b fix a possible nil pointer dereference
It can happen when upgrading from very old versions
2021-09-11 12:48:41 +02:00
Nicola Murino
0ad6f031e8 set version to 2.1.1 2021-09-11 06:33:48 +02:00
Nicola Murino
a9838d2e6d update deps and backport some fixes from main branch 2021-09-09 19:43:00 +02:00
Nicola Murino
7a9272ddfc admin setup: remove maxlength from HTML form
Fixes #515
2021-09-04 12:13:49 +02:00
Nicola Murino
69c3a85ea5 BaseConnection struct: ensure 64 bit alignment
Fixes #516
2021-08-28 12:23:29 +02:00
Nicola Murino
d03020e2b8 fix folders validation
Fixes #510
2021-08-19 11:33:00 +02:00
Nicola Murino
89bc50aa81 fix loading enabled_ssh_commands config key 2021-07-31 09:47:53 +02:00
Nicola Murino
d27c32e06b S3: fix Ceph compatibility
This hack will no longer be needed once Ceph tags a new version and vendors
using it update their servers.

This code is taken from rclone, thank you!

Fixes #483
2021-07-23 17:29:58 +02:00
Nicola Murino
53eca2c2bb S3: add per-chunk download timeout
We hard-code 3 minutes here; this is configurable in the main branch
2021-07-16 18:49:28 +02:00
Nicola Murino
0e9351e4ad GCS: add a trailing / to "directories"
This way SFTPGo should be compatible with Google Cloud console.

This change should be backward compatible; testing is welcome

Fixes #464
2021-07-16 18:41:56 +02:00
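
A minimal Go sketch of the convention adopted here, assuming the cloud.google.com/go/storage client; the bucket and directory names are placeholders. GCS has no real directories: a zero-byte object whose name ends with "/" is what the Google Cloud console renders as a folder.

```go
package main

import (
	"context"
	"log"
	"strings"

	"cloud.google.com/go/storage"
)

// createDirMarker writes a zero-byte object whose name ends with "/":
// the trailing slash is what makes GCS tools treat it as a folder.
func createDirMarker(ctx context.Context, client *storage.Client, bucket, dir string) error {
	if !strings.HasSuffix(dir, "/") {
		dir += "/"
	}
	w := client.Bucket(bucket).Object(dir).NewWriter(ctx)
	// closing the writer without writing any data commits the empty marker
	return w.Close()
}

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	if err := createDirMarker(ctx, client, "my-bucket", "photos"); err != nil {
		log.Fatal(err)
	}
}
```
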
Nicola Murino
43f468547f update deps 2021-07-16 18:40:56 +02:00
Nicola Murino
c372ca4136 enable CI 2021-06-20 22:01:34 +02:00
Nicola Murino
2b328c82bb defender: don't return expired hosts/banned IPs in GetHost too 2021-06-20 21:55:12 +02:00
Nicola Murino
6321d51a24 defender: don't return expired hosts/banned IPs 2021-06-20 21:47:38 +02:00
Nicola Murino
62744e081b get HTTPD binding from env: respect the documented default 2021-06-17 15:57:41 +02:00
Nicola Murino
9dcaf1555f back to development 2021-06-16 19:28:25 +02:00
Nicola Murino
a09cf5c8b9 set version to 2.1.0 2021-06-16 17:45:09 +02:00
Nicola Murino
47ebe42375 FTP: fix LIST on files 2021-06-15 06:38:56 +02:00
Nicola Murino
4d97ab9eb9 Let's Encrypt tutorial: use sudo where appropriate 2021-06-14 22:35:08 +02:00
Nicola Murino
8ed13dc4a9 docs: document how to use Let's Encrypt Certificates 2021-06-14 22:05:55 +02:00
Nicola Murino
3b66dd0873 Linux packages: fix static resources copy 2021-06-14 14:18:15 +02:00
Nicola Murino
d992f0ffcc update deps 2021-06-13 08:54:22 +02:00
Nicola Murino
6c5a7e8f13 improve installation docs, add paypal link to fundings 2021-06-12 10:05:25 +02:00
Nicola Murino
9d3d7db29c azblob: store SAS URL as kms.Secret 2021-06-11 22:27:36 +02:00
Nicola Murino
8607788975 s3fs: use "application/x-directory" as folder mime type
This change improves s3fs-fuse compatibility

Fixes #451
2021-06-08 13:52:36 +02:00
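
A hedged sketch of the same convention using the aws-sdk-go v1 API the project depended on at the time; the bucket, key and region are placeholders. s3fs-fuse recognizes a zero-byte object with the "application/x-directory" content type as a directory.

```go
package main

import (
	"bytes"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := s3.New(sess)
	// a zero-byte object marked as "application/x-directory" is what
	// s3fs-fuse treats as a folder
	_, err := svc.PutObject(&s3.PutObjectInput{
		Bucket:      aws.String("my-bucket"),
		Key:         aws.String("mydir/"),
		Body:        bytes.NewReader(nil),
		ContentType: aws.String("application/x-directory"),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```
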
Nicola Murino
4be6307d87 webadmin: add defender page 2021-06-08 13:24:28 +02:00
Nicola Murino
feec2118bb improve defender and quotas REST API 2021-06-07 21:52:43 +02:00
Nicola Murino
43182fc25e OpenAPI: add users API
These new APIs match the web client features.

I'm aware that some APIs do not follow REST best practices.

I want to avoid things like "/user/folders/<path>"

where "path" must be encoded, and making it optional creates issues, so
I defined resources as query parameters instead of path parameters
2021-06-05 16:07:09 +02:00
Nicola Murino
976f588863 improve docs to enable FTP/WebDAV
Fixes #447
2021-06-02 09:49:31 +02:00
Nicola Murino
575bcf1f03 add remote address to transfer and commands logs 2021-06-01 22:28:43 +02:00
Nicola Murino
969c992bfd pre-upload: execute the hook just before opening the target file 2021-05-31 22:40:47 +02:00
Nicola Murino
c1239fbf59 pre-upload action: add file open flags
By reading the flags, the hook receiver can detect whether the client wants
to truncate the target file
2021-05-31 22:33:23 +02:00
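
A sketch of a hook receiver that inspects the new flags; the JSON field names here are assumptions for illustration, not the exact SFTPGo payload.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"os"
)

// hookPayload models only the fields this example needs; the real
// notification contains more, and the exact field names may differ.
type hookPayload struct {
	Action    string `json:"action"`
	Path      string `json:"path"`
	OpenFlags int    `json:"open_flags"` // assumed field name
}

func preUploadHandler(w http.ResponseWriter, r *http.Request) {
	var p hookPayload
	if err := json.NewDecoder(r.Body).Decode(&p); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	// if O_TRUNC is set the client wants to overwrite the target file
	if p.OpenFlags&os.O_TRUNC != 0 {
		log.Printf("client wants to truncate %q", p.Path)
	}
	w.WriteHeader(http.StatusOK) // a non-2xx response would deny the upload
}

func main() {
	http.HandleFunc("/pre-upload", preUploadHandler)
	log.Fatal(http.ListenAndServe(":8000", nil))
}
```
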
Nicola Murino
c63b923ec3 cryptfs: add support for atomic uploads 2021-05-31 21:45:29 +02:00
dependabot[bot]
574c4029fc Bump uraimo/run-on-arch-action from 2.0.9 to 2.0.10 (#444)
Bumps [uraimo/run-on-arch-action](https://github.com/uraimo/run-on-arch-action) from 2.0.9 to 2.0.10.
- [Release notes](https://github.com/uraimo/run-on-arch-action/releases)
- [Commits](https://github.com/uraimo/run-on-arch-action/compare/v2.0.9...v2.0.10)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-31 10:05:25 +02:00
Nicola Murino
423d8306be webclient: allow to download multiple files as zip 2021-05-30 23:07:46 +02:00
Nicola Murino
fc7066a25c cross-device rename: remove the source if the copy succeeded 2021-05-27 22:23:14 +02:00
Nicola Murino
e1bf46c6a5 local fs rename: if it fails with a cross device error try a copy
I don't want to add a new setting for this, at least until we get the
first complaint about a slow rename :)

Fixes #440
2021-05-27 20:14:12 +02:00
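
A Unix-flavored sketch of the fallback described by these two commits, not SFTPGo's actual implementation: detect the cross-device error from os.Rename, copy the contents, and remove the source only after the copy succeeded.

```go
package fsutil

import (
	"errors"
	"io"
	"os"
	"syscall"
)

// RenameOrCopy renames src to dst; if the rename fails with EXDEV
// (src and dst live on different devices), it falls back to copy+remove.
func RenameOrCopy(src, dst string) error {
	err := os.Rename(src, dst)
	var linkErr *os.LinkError
	if err == nil || !errors.As(err, &linkErr) || linkErr.Err != syscall.EXDEV {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	if _, err = io.Copy(out, in); err != nil {
		out.Close()
		return err
	}
	if err = out.Close(); err != nil {
		return err
	}
	return os.Remove(src) // remove the source only after a successful copy
}
```
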
Nicola Murino
3b46e6a6fb add support for a global temp path
Fixes #436
2021-05-27 15:38:27 +02:00
Nicola Murino
7a85c66ee7 webclient: defer file list rendering
Combined with server-side processing, I can now list a directory with
about 100,000 files in less than 2 seconds without losing client-side
filtering and pagination
2021-05-27 09:40:46 +02:00
Nicola Murino
25a44030f9 actions: add pre-download and pre-upload
Downloads and uploads can be denied based on the hook response
2021-05-26 07:48:37 +02:00
Nicola Murino
600268ebb8 httpclient: allow to set custom headers 2021-05-25 08:36:01 +02:00
Nicola Murino
1223957f91 webclient: use different icons based on the file extension 2021-05-24 19:09:03 +02:00
Nicola Murino
15cde2dd1a improve test coverage 2021-05-23 22:29:55 +02:00
Nicola Murino
50e441849a try to make the web admin more user friendly
removed all the textareas with fields separated using "::".
This should, hopefully, improve the user experience
2021-05-23 22:02:01 +02:00
Nicola Murino
02bb09ec01 remove deprecated file extensions filters
these filters were deprecated a long time ago; everyone should use
pattern filters now
2021-05-22 12:28:05 +02:00
Nicola Murino
402947a43c update deps 2021-05-22 10:42:30 +02:00
Nicola Murino
b9bc8d722d try to improve web client credentials page
I should do the same for the admin page too
2021-05-22 09:54:27 +02:00
Nicola Murino
0cb5c49cf3 map path resolution errors to Permission errors
this way the affected paths will be ignored in WebDAV

Fixes #432
2021-05-21 13:04:22 +02:00
Nicola Murino
9fc4be6d40 minor doc fixes 2021-05-20 18:34:38 +02:00
Nicola Murino
ecfed4dc04 Add a Getting Started Guide 2021-05-20 18:16:27 +02:00
dependabot[bot]
b415e4d98f Bump github.com/lib/pq from 1.10.1 to 1.10.2 (#429)
Bumps [github.com/lib/pq](https://github.com/lib/pq) from 1.10.1 to 1.10.2.
- [Release notes](https://github.com/lib/pq/releases)
- [Commits](https://github.com/lib/pq/compare/v1.10.1...v1.10.2)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-17 09:27:16 +02:00
Nicola Murino
7d059efe06 add an example backup script 2021-05-16 22:28:08 +02:00
Nicola Murino
60cfbd2989 setup: auto login after creating the first admin 2021-05-16 21:36:57 +02:00
Nicola Murino
8ecf64f481 httpclient: accepts timeouts as float
Fixes #428
2021-05-16 12:50:06 +02:00
Nicola Murino
019b0f2fd5 http cookie: add max-age and samesite
update deps too
2021-05-16 09:13:00 +02:00
Nicola Murino
15d6cd144a another try to better understand the random webdav test case failure 2021-05-15 08:56:36 +02:00
Nicola Murino
f59f62317e sftpd: fix file upload resume detection
WinSCP does not set the APPEND flag while resuming a file upload,
so we detect a file upload resume if the TRUNCATE flag is not set.
The APPEND flag is now ignored.

Fixes #420
2021-05-15 08:39:01 +02:00
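
Restating the rule above as a tiny sketch (illustrative, not the actual code): with the APPEND flag ignored, an upload to an existing file is a resume exactly when the client did not ask to truncate it.

```go
package sftpd

import "os"

// isUploadResume treats an upload to an existing file as a resume when
// O_TRUNC is absent; O_APPEND is deliberately ignored because WinSCP
// does not set it while resuming.
func isUploadResume(openFlags int, fileExists bool) bool {
	return fileExists && openFlags&os.O_TRUNC == 0
}
```
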
Nicola Murino
f2b93c0402 add a setup screen to create the first admin user
If you prefer to auto-create the first admin you can enable the
"create_default_admin" configuration key and SFTPGo will work as before.

You can also create the first admin by loading initial data: now you can
set both the username and the password; before, you could only change the password
2021-05-14 19:21:15 +02:00
Nicola Murino
0540b8780e redact credentials within hooks
go-retryablehttp does not redact credentials, so we still log them
when we use it

https://github.com/hashicorp/go-retryablehttp/pull/133
2021-05-12 22:44:17 +02:00
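
For the hook URLs SFTPGo logs itself, the standard library already provides the redaction: url.URL.Redacted (Go 1.15+) masks the password. A small demonstration with a placeholder URL:

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	u, _ := url.Parse("https://user:secret@example.com/hook")
	// Redacted replaces the password with "xxxxx" before logging
	fmt.Println(u.Redacted()) // https://user:xxxxx@example.com/hook
}
```
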
Nicola Murino
fa45c9c138 allow to execute actions for file operations and SSH commands synchronously
The actions to run synchronously can be configured via the `execute_sync`
configuration key.

Executing an action synchronously means that SFTPGo will not return a result
code to the client until your hook has completed its execution.

Fixes #409
2021-05-11 12:45:14 +02:00
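
A simplified sketch of the difference (command path and arguments are placeholders): a synchronous hook blocks the caller, and therefore the client, until it finishes; an asynchronous one runs in a goroutine.

```go
package main

import (
	"context"
	"log"
	"os/exec"
)

// runHook executes an action hook. When sync is true the caller waits for
// the hook to complete; otherwise it runs in the background and failures
// are only logged.
func runHook(ctx context.Context, sync bool, cmdPath string, args ...string) error {
	if sync {
		return exec.CommandContext(ctx, cmdPath, args...).Run()
	}
	go func() {
		if err := exec.Command(cmdPath, args...).Run(); err != nil {
			log.Printf("async hook failed: %v", err)
		}
	}()
	return nil
}
```
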
Nicola Murino
b67cd0d3df ensure no client is connected before running max connections test cases 2021-05-11 08:04:57 +02:00
Nicola Murino
c8f7fc9bc9 httpd/webdav: add a list of hosts allowed to send proxy headers
X-Forwarded-For, X-Real-IP and X-Forwarded-Proto headers will be ignored
for hosts not included in this list.

This is a backward-incompatible change; before, the proxy headers were
always used
2021-05-11 06:54:06 +02:00
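
A sketch of the idea on the HTTP side (the helper and the allow-list contents are illustrative): the forwarded headers are honored only when the direct peer is a known proxy; otherwise the socket address wins.

```go
package main

import (
	"net"
	"net/http"
)

// clientIP trusts X-Forwarded-For only when the direct peer is in the
// allow list of proxy hosts; otherwise the header is ignored.
func clientIP(r *http.Request, allowedProxies map[string]bool) string {
	host, _, err := net.SplitHostPort(r.RemoteAddr)
	if err != nil {
		return r.RemoteAddr
	}
	if allowedProxies[host] {
		if fwd := r.Header.Get("X-Forwarded-For"); fwd != "" {
			return fwd // a real implementation would parse the comma-separated hops
		}
	}
	return host
}
```
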
dependabot[bot]
f1b998ce16 Bump github.com/otiai10/copy from 1.5.1 to 1.6.0 (#414)
Bumps [github.com/otiai10/copy](https://github.com/otiai10/copy) from 1.5.1 to 1.6.0.
- [Release notes](https://github.com/otiai10/copy/releases)
- [Commits](https://github.com/otiai10/copy/compare/v1.5.1...v1.6.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-10 14:02:09 +02:00
dependabot[bot]
aaa758e978 Bump github.com/minio/sio from 0.2.1 to 0.3.0 (#412)
Bumps [github.com/minio/sio](https://github.com/minio/sio) from 0.2.1 to 0.3.0.
- [Release notes](https://github.com/minio/sio/releases)
- [Commits](https://github.com/minio/sio/compare/v0.2.1...v0.3.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-10 11:34:01 +02:00
dependabot[bot]
716946a148 Bump github.com/aws/aws-sdk-go from 1.38.35 to 1.38.36 (#413)
Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.38.35 to 1.38.36.
- [Release notes](https://github.com/aws/aws-sdk-go/releases)
- [Changelog](https://github.com/aws/aws-sdk-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go/compare/v1.38.35...v1.38.36)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-10 11:10:58 +02:00
Nicola Murino
15934d72cc webdav test: increase log size
the last 10 lines are not enough to understand the issue; try with 20
2021-05-09 10:09:25 +02:00
Nicola Murino
8f6cdacd00 allow to limit the number of per-host connections 2021-05-08 19:45:21 +02:00
Nicola Murino
8f736da4b8 webdav test: add some more logs
the QuotaLimits test case sometimes fails when running in CI; try to
understand the reason
2021-05-07 22:24:06 +02:00
Nicola Murino
4ea4202b99 httpd/webdav: use a custom listener with read and write deadlines 2021-05-07 20:41:20 +02:00
Nicola Murino
d4bfc3f6b5 fix lint configuration and a warning 2021-05-06 22:06:22 +02:00
Nicola Murino
23d9ebfc91 add a basic front-end web interface for end-users
Fixes #339 #321 #398
2021-05-06 21:35:43 +02:00
dependabot[bot]
5c99f4fb60 Bump github.com/shirou/gopsutil/v3 from 3.21.3 to 3.21.4 (#406)
Bumps [github.com/shirou/gopsutil/v3](https://github.com/shirou/gopsutil) from 3.21.3 to 3.21.4.
- [Release notes](https://github.com/shirou/gopsutil/releases)
- [Commits](https://github.com/shirou/gopsutil/compare/v3.21.3...v3.21.4)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-03 14:44:07 +02:00
dependabot[bot]
2263c7e20f Bump github.com/hashicorp/go-retryablehttp from 0.6.8 to 0.7.0 (#405)
Bumps [github.com/hashicorp/go-retryablehttp](https://github.com/hashicorp/go-retryablehttp) from 0.6.8 to 0.7.0.
- [Release notes](https://github.com/hashicorp/go-retryablehttp/releases)
- [Commits](https://github.com/hashicorp/go-retryablehttp/compare/v0.6.8...v0.7.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-03 14:43:53 +02:00
dependabot[bot]
515b2d917e Bump github.com/fclairamb/ftpserverlib from 0.13.0 to 0.13.1 (#404)
Bumps [github.com/fclairamb/ftpserverlib](https://github.com/fclairamb/ftpserverlib) from 0.13.0 to 0.13.1.
- [Release notes](https://github.com/fclairamb/ftpserverlib/releases)
- [Commits](https://github.com/fclairamb/ftpserverlib/compare/v0.13.0...v0.13.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-03 13:29:54 +02:00
dependabot[bot]
af4723356d Bump github.com/lestrrat-go/jwx from 1.1.7 to 1.2.0 (#403)
Bumps [github.com/lestrrat-go/jwx](https://github.com/lestrrat-go/jwx) from 1.1.7 to 1.2.0.
- [Release notes](https://github.com/lestrrat-go/jwx/releases)
- [Changelog](https://github.com/lestrrat-go/jwx/blob/main/Changes)
- [Commits](https://github.com/lestrrat-go/jwx/compare/v1.1.7...v1.2.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-03 13:29:16 +02:00
dependabot[bot]
068dd34a38 Bump github.com/aws/aws-sdk-go from 1.38.25 to 1.38.30 (#402)
Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.38.25 to 1.38.30.
- [Release notes](https://github.com/aws/aws-sdk-go/releases)
- [Changelog](https://github.com/aws/aws-sdk-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go/compare/v1.38.25...v1.38.30)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-03 11:41:25 +02:00
dependabot[bot]
b16a5c2caf Bump github.com/go-chi/chi/v5 from 5.0.2 to 5.0.3 (#401)
Bumps [github.com/go-chi/chi/v5](https://github.com/go-chi/chi) from 5.0.2 to 5.0.3.
- [Release notes](https://github.com/go-chi/chi/releases)
- [Changelog](https://github.com/go-chi/chi/blob/master/CHANGELOG.md)
- [Commits](https://github.com/go-chi/chi/compare/v5.0.2...v5.0.3)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-03 11:41:09 +02:00
Nicola Murino
a383957cfa OpenAPI: document that folder-quota-update also supports partial updates 2021-04-28 19:33:32 +02:00
Nicola Murino
00f97aabb4 OpenAPI: document that quota-update supports partial updates
If the update mode is "add" and you pass only used_quota_size or only
used_quota_files, the missing field will remain unchanged
2021-04-28 19:16:15 +02:00
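
The documented semantics in miniature (struct and field names are illustrative): pointer fields distinguish "not sent" from zero, so in "add" mode only the fields actually provided change.

```go
package main

import "fmt"

// quotaUpdate uses pointers so an omitted JSON field arrives as nil.
type quotaUpdate struct {
	UsedQuotaSize  *int64 `json:"used_quota_size,omitempty"`
	UsedQuotaFiles *int   `json:"used_quota_files,omitempty"`
}

// applyAdd increments only the fields that were actually sent.
func applyAdd(size *int64, files *int, upd quotaUpdate) {
	if upd.UsedQuotaSize != nil {
		*size += *upd.UsedQuotaSize
	}
	if upd.UsedQuotaFiles != nil {
		*files += *upd.UsedQuotaFiles
	}
}

func main() {
	size, files := int64(100), 3
	delta := int64(50)
	applyAdd(&size, &files, quotaUpdate{UsedQuotaSize: &delta}) // files not sent
	fmt.Println(size, files)                                    // 150 3: the missing field is unchanged
}
```
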
Nicola Murino
32db0787bb add an example script for scheduled quota updates 2021-04-26 21:53:09 +02:00
Nicola Murino
1275328fdf Authentication errors: try to avoid user enumeration
Fixes #395
2021-04-26 19:48:21 +02:00
Nicola Murino
7778716fa7 update crypto and net dependencies 2021-04-25 18:12:02 +02:00
dependabot[bot]
77476d0f56 Bump github.com/aws/aws-sdk-go from 1.38.21 to 1.38.25 (#394)
Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.38.21 to 1.38.25.
- [Release notes](https://github.com/aws/aws-sdk-go/releases)
- [Changelog](https://github.com/aws/aws-sdk-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go/compare/v1.38.21...v1.38.25)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-04-25 17:07:59 +02:00
dependabot[bot]
c7a1fc2996 Bump cloud.google.com/go/storage from 1.14.0 to 1.15.0 (#392)
Bumps [cloud.google.com/go/storage](https://github.com/googleapis/google-cloud-go) from 1.14.0 to 1.15.0.
- [Release notes](https://github.com/googleapis/google-cloud-go/releases)
- [Changelog](https://github.com/googleapis/google-cloud-go/blob/master/CHANGES.md)
- [Commits](https://github.com/googleapis/google-cloud-go/compare/spanner/v1.14.0...spanner/v1.15.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-04-25 17:07:36 +02:00
dependabot[bot]
e7d8e73be8 Bump github.com/lib/pq from 1.10.0 to 1.10.1 (#391)
Bumps [github.com/lib/pq](https://github.com/lib/pq) from 1.10.0 to 1.10.1.
- [Release notes](https://github.com/lib/pq/releases)
- [Commits](https://github.com/lib/pq/compare/v1.10.0...v1.10.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-04-25 17:07:26 +02:00
dependabot[bot]
3ee27f4370 Bump golangci/golangci-lint-action from v2 to v2.5.2 (#389)
Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from v2 to v2.5.2.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v2...5c56cd6c9dc07901af25baab6f2b0d9f3b7c3018)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-04-25 16:41:17 +02:00
Nicola Murino
92424cd1c2 dependabot: limit the number of open pull requests 2021-04-25 16:39:41 +02:00
Nicola Murino
0190dad984 docker: update github script to v4 2021-04-25 15:59:29 +02:00
Nicola Murino
198258f4e7 add dependabot
Fixes #388
2021-04-25 15:54:19 +02:00
Nicola Murino
5be4b6bd44 localfs: fix subdir check if the user has the root dir as home 2021-04-25 14:36:29 +02:00
Nicola Murino
3941255733 docs: fix a typo 2021-04-25 09:42:19 +02:00
Nicola Murino
46998252e5 use bcrypt as default password hashing algo
argon2id has a high memory cost and, if not properly tuned, it can lead to
resource starvation.

Advanced users can still configure and use argon2id.
Passwords stored as argon2id will continue to work
2021-04-25 09:38:33 +02:00
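
For reference, the bcrypt primitives involved, via golang.org/x/crypto/bcrypt: unlike argon2id, bcrypt's memory footprint is small and fixed, which is the motivation given above.

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/bcrypt"
)

func main() {
	hash, err := bcrypt.GenerateFromPassword([]byte("secret"), bcrypt.DefaultCost)
	if err != nil {
		panic(err)
	}
	// CompareHashAndPassword returns nil on a match
	fmt.Println(bcrypt.CompareHashAndPassword(hash, []byte("secret")) == nil) // true
}
```
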
Nicola Murino
74b51f0ad3 update nfpm 2021-04-23 22:53:13 +02:00
Nicola Murino
b11865f971 CI: add support for darwin/arm64
I have no way to test the produced binaries on real Apple Silicon (M1) hardware
2021-04-20 23:00:27 +02:00
Nicola Murino
f4369cdbef fix max connections check
Also make sure to close the ssh client connection in test cases
2021-04-20 18:12:16 +02:00
Nicola Murino
92638ce93d add support for hashing password using bcrypt
argon2id remains the default
2021-04-20 13:55:09 +02:00
Nicola Murino
6ef85d6026 add optional, in-memory password caching
Verifying argon2 passwords has a high memory and computational cost;
by enabling in-memory password caching you reduce this cost
2021-04-20 09:39:36 +02:00
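
One way such a cache can work, as a sketch under assumed types (not the actual implementation): remember a cheap digest of the last successfully verified password per user, and fall back to the full argon2 check on a miss, calling Store after it succeeds.

```go
package auth

import (
	"crypto/sha256"
	"sync"
)

// passwordCache stores a cheap digest of the last verified password per
// user, so repeated logins can skip the expensive argon2 verification.
type passwordCache struct {
	mu    sync.RWMutex
	users map[string][32]byte
}

func newPasswordCache() *passwordCache {
	return &passwordCache{users: make(map[string][32]byte)}
}

// Check returns true only for a cached, previously verified password.
func (c *passwordCache) Check(user, password string) bool {
	c.mu.RLock()
	defer c.mu.RUnlock()
	h, ok := c.users[user]
	return ok && h == sha256.Sum256([]byte(password))
}

// Store records a password after a successful full verification.
func (c *passwordCache) Store(user, password string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.users[user] = sha256.Sum256([]byte(password))
}
```
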
Nicola Murino
bc88503f25 sql providers: reuse the same context where appropriate 2021-04-19 18:58:53 +02:00
Nicola Murino
47317bed9b make sure that the Retry-After header has a value greater than zero 2021-04-19 09:16:27 +02:00
Nicola Murino
f45c89fc46 add rate limiting support for REST API/web admin too 2021-04-19 08:14:04 +02:00
Nicola Murino
112e3b2fc2 add rate limiting support 2021-04-18 12:31:06 +02:00
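
A sketch of how these three rate-limiting commits fit together, using golang.org/x/time/rate (the middleware shape and limits are illustrative, not SFTPGo's code): a reservation yields the wait time, which becomes a Retry-After header clamped to at least one second.

```go
package main

import (
	"fmt"
	"math"
	"net/http"
	"time"

	"golang.org/x/time/rate"
)

// rateLimit rejects requests that exceed the limiter, advertising when to
// retry; the Retry-After value is clamped so it is always greater than zero.
func rateLimit(next http.Handler, limiter *rate.Limiter) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		res := limiter.Reserve()
		if d := res.Delay(); d > 0 {
			res.Cancel() // we are rejecting, so give the slot back
			secs := int64(math.Ceil(d.Seconds()))
			if secs <= 0 {
				secs = 1 // a zero Retry-After would mean "retry immediately"
			}
			w.Header().Set("Retry-After", fmt.Sprintf("%d", secs))
			http.Error(w, http.StatusText(http.StatusTooManyRequests), http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	limiter := rate.NewLimiter(rate.Every(100*time.Millisecond), 5) // ~10 req/s, burst 5
	ok := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { fmt.Fprintln(w, "ok") })
	http.ListenAndServe(":8080", rateLimit(ok, limiter))
}
```
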
Nicola Murino
124c471a2b FTPD: make sure that the passive ip, if provided, is valid
The server will refuse to start if the provided passive ip is not a
valid IPv4 address.

Fixes #376
2021-04-16 15:08:10 +02:00
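
The validation reduces to a couple of standard library calls; a sketch:

```go
package main

import (
	"fmt"
	"net"
)

// validatePassiveIP refuses startup when the configured FTP passive IP
// is not a valid IPv4 address.
func validatePassiveIP(passiveIP string) error {
	ip := net.ParseIP(passiveIP)
	if ip == nil || ip.To4() == nil {
		return fmt.Errorf("the provided passive IP %q is not a valid IPv4 address", passiveIP)
	}
	return nil
}

func main() {
	fmt.Println(validatePassiveIP("203.0.113.10")) // <nil>
	fmt.Println(validatePassiveIP("not-an-ip"))    // error
}
```
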
Nicola Murino
683ba6cd5b get binding from env: respect the documented default
Fixes #377
2021-04-16 13:35:13 +02:00
Nicola Murino
21fbcf4556 FTP: add support for TLS session resumption on the data connection
Fixes #374
2021-04-16 09:00:40 +02:00
Nicola Murino
2ffefbeb33 add sql_tables_prefix also to indexes and constraints
This allows you to reuse the same database for multiple SFTPGo instances

Fixes #372
2021-04-12 20:00:49 +02:00
Nicola Murino
c844fc7477 add support for delayed quota update
If many uploads finish close together, accumulating quota updates can
save you many queries to the data provider
2021-04-11 08:38:43 +02:00
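
A sketch of the batching idea (types and flush interval are illustrative): accumulate per-user deltas in memory and issue one provider update per user per interval instead of one per upload.

```go
package quota

import (
	"sync"
	"time"
)

// quotaAccumulator batches per-user quota deltas and flushes them to the
// data provider on a timer, turning many small updates into one.
type quotaAccumulator struct {
	mu     sync.Mutex
	deltas map[string]int64 // username -> pending size delta
}

func (a *quotaAccumulator) Add(user string, size int64) {
	a.mu.Lock()
	a.deltas[user] += size
	a.mu.Unlock()
}

// flushLoop periodically swaps out the pending map and applies one update
// per user; it runs for the lifetime of the process.
func (a *quotaAccumulator) flushLoop(interval time.Duration, update func(user string, size int64)) {
	for range time.Tick(interval) {
		a.mu.Lock()
		pending := a.deltas
		a.deltas = make(map[string]int64)
		a.mu.Unlock()
		for user, size := range pending {
			update(user, size) // one query per user instead of one per upload
		}
	}
}
```
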
Nicola Murino
4b98f37df1 back to development 2021-04-10 09:40:02 +02:00
Nicola Murino
0bc4db9950 web admin: make base url configurable 2021-04-09 22:02:48 +02:00
Nicola Murino
5acf29dae6 CI: replace deprecated actions with gh CLI 2021-04-08 21:29:09 +02:00
Nicola Murino
e9a42cd508 release workflow: re-add build Linux bundle
it is used as the source for PPA packages
2021-04-08 08:38:51 +02:00
Nicola Murino
ed26d68948 portable mode: add SFTP buffer size 2021-04-07 19:47:39 +02:00
Nicola Murino
b389f93d97 allow to select sha256-simd using an env var 2021-04-07 16:25:58 +02:00
Nicola Murino
150aebf8d2 CI: replace xgo with QEMU
currently xgo doesn't allow choosing the build OS; this could cause
unexpected issues, for example the v2.0.3 packages for arm64 and ppc64
don't run on Ubuntu 18.04
2021-04-07 15:12:09 +02:00
Nicola Murino
74e0223eb9 remove sha256-simd usage
sha256-simd is now deprecated

https://github.com/minio/sha256-simd/issues/58

This could slow down sha256 computation on some CPUs
2021-04-05 18:23:40 +02:00
Nicola Murino
0823928f98 allow to disable login filesystem checks
SFTPGo requires that the user's home directory, virtual folder root,
and intermediate paths to virtual folders exist to work properly.
If you already know that the required directories exist, disabling
these checks will speed up login.
2021-04-05 17:57:30 +02:00
Nicola Murino
f895059660 web: add responsive table style to connections too
Fixed a small issue for sftpfs too
2021-04-05 11:28:28 +02:00
Nicola Murino
acb4310c11 add a startup hook 2021-04-05 10:07:59 +02:00
Nicola Murino
fdf3f23df5 allow to disable some hooks on a per-user basis
This way you can, for example, mix external and internal users
2021-04-04 22:32:25 +02:00
Nicola Murino
d92861a8e8 sftpfs: disable buffering for downloads if concurrent reads are disabled 2021-04-04 09:53:29 +02:00
Nicola Murino
1ee843757d fix OpenAPI schema 2021-04-03 17:09:08 +02:00
Nicola Murino
ea26d7786c sftpfs: add buffering support
this way we improve performance over high-latency networks
2021-04-03 16:00:55 +02:00
Nicola Murino
6eb43baf3d web: fix content type for folders form
Fixes #367
2021-04-01 19:42:18 +02:00
Nicola Murino
2f56375121 improve SFTP loop detection 2021-04-01 18:53:48 +02:00
Nicola Murino
3bfd7e4d17 sftpfs: try to detect if an SFTP user points to itself
this will cause an infinite loop on login. The check should be improved
2021-03-29 21:53:44 +02:00
Nicola Murino
e1c66d96a1 back to development 2021-03-28 22:25:24 +02:00
Nicola Murino
a43854ae9b OpenAPI: document that secrets are automatically encrypted before saving 2021-03-28 11:23:06 +02:00
Nicola Murino
183bedd6ed webui: add responsive extension 2021-03-28 11:02:11 +02:00
Nicola Murino
2a89a8f664 webui: minor improvements 2021-03-27 22:23:01 +01:00
Nicola Murino
5cd27ce529 document Cockroach driver name 2021-03-27 19:41:00 +01:00
Nicola Murino
cee2e18caf convertusers: fix permissions
Fixes #363
2021-03-27 19:18:01 +01:00
Nicola Murino
9ad750da54 WebDAV: try to preserve the lock fs as much as possible 2021-03-27 19:10:27 +01:00
Nicola Murino
5f49af1780 external auth: allow to inspect and preserve an existing user 2021-03-26 15:19:01 +01:00
Nicola Murino
d5f092284a improve signals handling 2021-03-25 19:31:21 +01:00
Nicola Murino
0e50310a66 add a test case for UID/GID limits 2021-03-25 17:30:39 +01:00
Mike Unitskyi
5939ac4801 Increase uid:gid limits (#362)
Fixes #361
2021-03-25 17:11:42 +01:00
Nicola Murino
db274f1093 crdb: fix transactions handling 2021-03-25 09:07:56 +01:00
Nicola Murino
6bc5c64a3a webdav: ignore path, perm and not exist errors in PROPFIND
Fixes #340
2021-03-24 13:32:20 +01:00
Nicola Murino
70e035315e data provider: add CockroachDB support 2021-03-23 19:14:15 +01:00
Nicola Murino
8a1249878a OpenAPI schema: remove some superfluous required definitions
Fixes #356
2021-03-22 19:22:41 +01:00
Nicola Murino
5e375f56dd kms: add a lock, secrets could be modified concurrently for cached users
also reduce the size of the JSON payload by omitting empty secrets
2021-03-22 19:03:25 +01:00
Nicola Murino
28f1d66ae5 link the Active Directory example in the howto section 2021-03-22 09:52:05 +01:00
Omar Ramos
79060d37a7 Added in a first draft of the page related to sftpgo-ldap-http-server. 2021-03-22 08:59:29 +01:00
Nicola Murino
800e64404b update deps 2021-03-22 08:55:35 +01:00
Nicola Murino
54c0c1b80d Windows: manually check if we can bind on the configured port/ports
Windows allows the coexistence of three types of sockets on the same
transport-layer service port, for example, 127.0.0.1:8080, [::1]:8080
and [::ffff:0.0.0.0]:8080

Go doesn't handle this properly, so we use an ugly hack

Fixes #350
2021-03-21 22:21:04 +01:00
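
A Windows-oriented sketch of such a manual probe (the address list is illustrative): listen and immediately close on each address form that Windows allows to coexist on the same port, surfacing conflicts that a single net.Listen call would miss.

```go
package main

import (
	"fmt"
	"net"
)

// checkPort probes the address forms that Windows lets coexist on the
// same port; Listen+Close on each one detects an already-bound socket.
func checkPort(port int) error {
	for _, host := range []string{"127.0.0.1", "::1", "::ffff:0.0.0.0"} {
		l, err := net.Listen("tcp", net.JoinHostPort(host, fmt.Sprintf("%d", port)))
		if err != nil {
			return fmt.Errorf("port %d is not usable on %s: %w", port, host, err)
		}
		l.Close()
	}
	return nil
}

func main() {
	fmt.Println(checkPort(8080))
}
```
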
Nicola Murino
f7c7e2951d initialize argon params before creating the data provider
Fixes #349
2021-03-21 19:58:57 +01:00
Nicola Murino
f249286cb1 docs: add some notes about the new virtual folders support
fix a failing test case for the memory provider
2021-03-21 19:47:11 +01:00
Nicola Murino
d6dc3a507e extend virtual folders support to all storage backends
Fixes #241
2021-03-21 19:15:47 +01:00
Nicola Murino
0286da2356 try to auto create virtual folders if missing 2021-03-10 22:30:56 +01:00
Nicola Murino
76c08baaa0 httpclient: load CA certificates only when required
on Windows x509.SystemCertPool is not implemented and therefore we end
up with an empty certificate pool if we load the CA certificates
unconditionally
2021-03-10 21:45:48 +01:00
Nicola Murino
67ea75cf03 improve OpenAPI schema so it is better rendered on Stoplight 2021-03-07 18:41:56 +01:00
Nicola Murino
4c658bb6f0 webdav: add prefix support 2021-03-07 17:10:45 +01:00
Nicola Murino
1ab02d5891 OpenAPI: improve schema
Fix some lint warnings
2021-03-06 17:08:24 +01:00
Nicola Murino
055506e518 sftpfs: add an option to disable concurrent reads 2021-03-06 15:41:40 +01:00
Nicola Murino
88122ba2f8 update jwtauth to v5 2021-03-05 18:50:45 +01:00
Nicola Murino
bfe0c18976 portable mode: fix WebDAV support 2021-03-05 08:41:24 +01:00
Nicola Murino
df41f0c556 add a setting to skip natural keys validation
By enabling the "skip_natural_keys_validation" data provider setting,
the natural keys for the REST API/Web Admin, such as usernames, admin
names and folder names, are no longer restricted to unreserved URI chars

Fixes #334 #308
2021-03-04 09:48:53 +01:00
Nicola Murino
561c5021dd add Segmed to the sponsors section 2021-03-03 18:55:47 +01:00
Nicola Murino
ad07fc78eb update nfpm and deps 2021-03-03 18:39:58 +01:00
Nicola Murino
3243181c5f Add a link to the OpenAPI schema where relevant
Fixes #329
2021-03-01 22:22:05 +01:00
Nicola Murino
895117718e SSH system command: add os separator to the resolved path when appropriate
Fixes #327
2021-03-01 22:10:45 +01:00
Nicola Murino
534b253c20 WebDAV: improve TLS certificate authentication
For each user you can now configure:

- TLS certificate auth
- TLS certificate auth and password
- Password auth

For TLS certificate auth, the certificate common name is used as
username
2021-03-01 19:28:11 +01:00
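
Extracting the username from a verified client certificate is a short operation over net/http's TLS state; a sketch:

```go
package main

import (
	"errors"
	"net/http"
)

// usernameFromCert returns the common name of the verified client
// certificate, which serves as the username for TLS certificate auth.
func usernameFromCert(r *http.Request) (string, error) {
	if r.TLS == nil || len(r.TLS.PeerCertificates) == 0 {
		return "", errors.New("no client certificate provided")
	}
	return r.TLS.PeerCertificates[0].Subject.CommonName, nil
}
```
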
Nicola Murino
901cafc6da metrics: reduce complexity for AddLoginResult method
fix a gocyclo warning
2021-02-28 12:23:48 +01:00
Nicola Murino
a6e36e7cad FTP: improve TLS certificate authentication
For each user you can now configure:

- TLS certificate auth
- TLS certificate auth and password
- Password auth

For TLS auth, the certificate common name must match the name provided
using the "USER" FTP command
2021-02-28 12:10:40 +01:00
Nicola Murino
b566457e12 change license to AGPL-3 2021-02-26 19:47:48 +01:00
Nicola Murino
ca3e15578e Use new methods in the io and os packages instead of ioutil ones
ioutil is deprecated in Go 1.16. SFTPGo is an application, not
a library, so we have no reason to keep compatibility with old Go
versions.

Go 1.16 fixes some CIFS-related issues too.
2021-02-25 21:53:04 +01:00
Nicola Murino
4b2edff6dd update deps 2021-02-24 22:27:52 +01:00
Nicola Murino
2146b83343 data providers: add filesystem to folder ...
... and some descriptive fields.
The filesystem support for virtual folders will be implemented in
future commits
2021-02-24 19:40:29 +01:00
Nicola Murino
3e1b07324d GCS: remove compat code 2021-02-22 22:06:23 +01:00
Nicola Murino
8cc2dfe5c2 update pkg/sftp
we don't need my branch anymore now that all the required features for
the sftpfs are available upstream too
2021-02-22 16:27:45 +01:00
Nicola Murino
78a837e8f1 remove other compat code 2021-02-22 09:13:26 +01:00
Nicola Murino
49830516be squash database migrations and remove compat code 2021-02-22 08:37:50 +01:00
Nicola Murino
41e1d9e68a use Go 1.16 for CI and Docker images 2021-02-21 12:01:37 +01:00
Nicola Murino
5da4f931c5 TLS: allow to configure cipher suites
Fixes #316
2021-02-18 20:17:16 +01:00
249 changed files with 32199 additions and 11753 deletions

.github/FUNDING.yml

@@ -9,4 +9,4 @@ community_bridge: # Replace with a single Community Bridge project-name e.g., cl
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']
custom: ['https://www.paypal.com/donate?hosted_button_id=JQL6GBT8GXRKC'] # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']

.github/dependabot.yml

@@ -0,0 +1,20 @@
version: 2
updates:
- package-ecosystem: "gomod"
directory: "/"
schedule:
interval: "weekly"
open-pull-requests-limit: 2
- package-ecosystem: "docker"
directory: "/"
schedule:
interval: "weekly"
open-pull-requests-limit: 2
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"
open-pull-requests-limit: 2


@@ -2,7 +2,7 @@ name: CI
on:
push:
branches: [2.0.x]
branches: [2.1.x]
pull_request:
jobs:
@@ -12,8 +12,12 @@ jobs:
strategy:
matrix:
go: [1.16]
os: [ubuntu-18.04, macos-10.15, windows-2019]
upload-coverage: [false]
os: [ubuntu-latest, macos-latest]
upload-coverage: [true]
include:
- go: 1.16
os: windows-latest
upload-coverage: false
steps:
- uses: actions/checkout@v2
@@ -25,16 +29,20 @@ jobs:
with:
go-version: ${{ matrix.go }}
- name: Build for Linux/macOS
- name: Build for Linux/macOS x86_64
if: startsWith(matrix.os, 'windows-') != true
run: go build -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
run: go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
- name: Build for macOS arm64
if: startsWith(matrix.os, 'macos-') == true
run: CGO_ENABLED=1 GOOS=darwin GOARCH=arm64 SDKROOT=$(xcrun --sdk macosx --show-sdk-path) go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo_arm64
- name: Build for Windows
if: startsWith(matrix.os, 'windows-')
run: |
$GIT_COMMIT = (git describe --always --dirty) | Out-String
$DATE_TIME = ([datetime]::Now.ToUniversalTime().toString("yyyy-MM-ddTHH:mm:ssZ")) | Out-String
go build -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/version.date=$DATE_TIME" -o sftpgo.exe
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/version.date=$DATE_TIME" -o sftpgo.exe
- name: Run test cases using SQLite provider
run: go test -v -p 1 -timeout 10m ./... -coverprofile=coverage.txt -covermode=atomic
@@ -49,11 +57,11 @@ jobs:
- name: Run test cases using bolt provider
run: |
go test -v -p 1 -timeout 2m ./config -covermode=atomic
go test -v -p 1 -timeout 2m ./common -covermode=atomic
go test -v -p 1 -timeout 3m ./httpd -covermode=atomic
go test -v -p 1 -timeout 5m ./common -covermode=atomic
go test -v -p 1 -timeout 5m ./httpd -covermode=atomic
go test -v -p 1 -timeout 8m ./sftpd -covermode=atomic
go test -v -p 1 -timeout 2m ./ftpd -covermode=atomic
go test -v -p 1 -timeout 2m ./webdavd -covermode=atomic
go test -v -p 1 -timeout 5m ./ftpd -covermode=atomic
go test -v -p 1 -timeout 5m ./webdavd -covermode=atomic
go test -v -p 1 -timeout 2m ./telemetry -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: bolt
@@ -65,49 +73,21 @@ jobs:
SFTPGO_DATA_PROVIDER__DRIVER: memory
SFTPGO_DATA_PROVIDER__NAME: ''
- name: Gather cross build info
id: cross_info
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
- name: Prepare build artifact for macOS
if: startsWith(matrix.os, 'macos-') == true
run: |
GIT_COMMIT=$(git describe --always)
BUILD_DATE=$(date -u +%FT%TZ)
echo ::set-output name=sha::${GIT_COMMIT}
echo ::set-output name=created::${BUILD_DATE}
- name: Cross build with xgo
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
uses: crazy-max/ghaction-xgo@v1
with:
go_version: 1.16.x
dest: cross
prefix: sftpgo
targets: linux/arm64,linux/ppc64le
v: true
x: false
race: false
ldflags: -s -w -X github.com/drakkan/sftpgo/version.commit=${{ steps.cross_info.outputs.sha }} -X github.com/drakkan/sftpgo/version.date=${{ steps.cross_info.outputs.created }}
buildmode: default
- name: Prepare build artifact for Linux/macOS
if: startsWith(matrix.os, 'windows-') != true
run: |
mkdir -p output/{bash_completion,zsh_completion}
cp sftpgo output/
mkdir -p output/{init,bash_completion,zsh_completion}
cp sftpgo output/sftpgo_x86_64
cp sftpgo_arm64 output/
cp sftpgo.json output/
cp -r templates output/
cp -r static output/
cp -r init output/
cp init/com.github.drakkan.sftpgo.plist output/init/
./sftpgo gen completion bash > output/bash_completion/sftpgo
./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
./sftpgo gen man -d output/man/man1
gzip output/man/man1/*
- name: Copy cross compiled Linux binaries
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
run: |
cp cross/sftpgo-linux-arm64 output/
cp cross/sftpgo-linux-ppc64le output/
- name: Prepare build artifact for Windows
if: startsWith(matrix.os, 'windows-')
run: |
@@ -120,79 +100,15 @@ jobs:
xcopy .\static .\output\static\ /E
- name: Upload build artifact
if: startsWith(matrix.os, 'ubuntu-') != true
uses: actions/upload-artifact@v2
with:
name: sftpgo-${{ matrix.os }}-go${{ matrix.go }}
name: sftpgo-${{ matrix.os }}-go-${{ matrix.go }}
path: output
- name: Build Linux Packages
id: build_linux_pkgs
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
run: |
cp -r pkgs pkgs_arm64
cp -r pkgs pkgs_ppc64le
cd pkgs
./build.sh
cd ..
export NFPM_ARCH=arm64
export BIN_SUFFIX=-linux-arm64
cp cross/sftpgo${BIN_SUFFIX} .
cd pkgs_arm64
./build.sh
cd ..
export NFPM_ARCH=ppc64le
export BIN_SUFFIX=-linux-ppc64le
cp cross/sftpgo${BIN_SUFFIX} .
cd pkgs_ppc64le
./build.sh
PKG_VERSION=$(cat dist/version)
echo "::set-output name=pkg-version::${PKG_VERSION}"
- name: Upload Debian Package
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
uses: actions/upload-artifact@v2
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-x86_64-deb
path: pkgs/dist/deb/*
- name: Upload RPM Package
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
uses: actions/upload-artifact@v2
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-x86_64-rpm
path: pkgs/dist/rpm/*
- name: Upload Debian Package arm64
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
uses: actions/upload-artifact@v2
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-arm64-deb
path: pkgs_arm64/dist/deb/*
- name: Upload RPM Package arm64
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
uses: actions/upload-artifact@v2
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-arm64-rpm
path: pkgs_arm64/dist/rpm/*
- name: Upload Debian Package ppc64le
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
uses: actions/upload-artifact@v2
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-ppc64le-deb
path: pkgs_ppc64le/dist/deb/*
- name: Upload RPM Package ppc64le
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
uses: actions/upload-artifact@v2
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-ppc64le-rpm
path: pkgs_ppc64le/dist/rpm/*
test-postgresql-mysql:
name: Test with PostgreSQL/MySQL
runs-on: ubuntu-18.04
test-postgresql-mysql-crdb:
name: Test with PgSQL/MySQL/Cockroach
runs-on: ubuntu-latest
services:
postgres:
@@ -232,7 +148,7 @@ jobs:
go-version: 1.16
- name: Build
run: go build -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
run: go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
- name: Run tests using PostgreSQL provider
run: |
@@ -256,12 +172,137 @@ jobs:
SFTPGO_DATA_PROVIDER__USERNAME: sftpgo
SFTPGO_DATA_PROVIDER__PASSWORD: sftpgo
golangci-lint:
name: golangci-lint
- name: Run tests using CockroachDB provider
run: |
docker run --rm --name crdb --health-cmd "curl -I http://127.0.0.1:8080" --health-interval 10s --health-timeout 5s --health-retries 6 -p 26257:26257 -d cockroachdb/cockroach:latest start-single-node --insecure --listen-addr 0.0.0.0:26257
docker exec crdb cockroach sql --insecure -e 'create database "sftpgo"'
go test -v -p 1 -timeout 10m ./... -covermode=atomic
docker stop crdb
env:
SFTPGO_DATA_PROVIDER__DRIVER: cockroachdb
SFTPGO_DATA_PROVIDER__NAME: sftpgo
SFTPGO_DATA_PROVIDER__HOST: localhost
SFTPGO_DATA_PROVIDER__PORT: 26257
SFTPGO_DATA_PROVIDER__USERNAME: root
SFTPGO_DATA_PROVIDER__PASSWORD:
build-linux-packages:
name: Build Linux packages
runs-on: ubuntu-18.04
strategy:
matrix:
include:
- arch: amd64
go: 1.16
go-arch: amd64
- arch: aarch64
distro: ubuntu18.04
go: latest
go-arch: arm64
- arch: ppc64le
distro: ubuntu18.04
go: latest
go-arch: ppc64le
steps:
- uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Set up Go
if: ${{ matrix.arch == 'amd64' }}
uses: actions/setup-go@v2
with:
go-version: ${{ matrix.go }}
- name: Build on amd64
if: ${{ matrix.arch == 'amd64' }}
run: |
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
mkdir -p output/{init,bash_completion,zsh_completion}
cp sftpgo.json output/
cp -r templates output/
cp -r static output/
cp init/sftpgo.service output/init/
./sftpgo gen completion bash > output/bash_completion/sftpgo
./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
./sftpgo gen man -d output/man/man1
gzip output/man/man1/*
cp sftpgo output/
- uses: uraimo/run-on-arch-action@v2.0.10
if: ${{ matrix.arch != 'amd64' }}
name: Build for ${{ matrix.arch }}
id: build
with:
arch: ${{ matrix.arch }}
distro: ${{ matrix.distro }}
setup: |
mkdir -p "${PWD}/output"
dockerRunArgs: |
--volume "${PWD}/output:/output"
shell: /bin/bash
install: |
apt-get update -q -y
apt-get install -q -y curl gcc git
if [ ${{ matrix.go }} == 'latest' ]
then
GO_VERSION=$(curl https://golang.org/VERSION?m=text)
else
GO_VERSION=${{ matrix.go }}
fi
curl --retry 5 --retry-delay 2 --connect-timeout 10 -o go.tar.gz -L https://golang.org/dl/${GO_VERSION}.linux-${{ matrix.go-arch }}.tar.gz
tar -C /usr/local -xzf go.tar.gz
run: |
export PATH=$PATH:/usr/local/go/bin
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
mkdir -p output/{init,bash_completion,zsh_completion}
cp sftpgo.json output/
cp -r templates output/
cp -r static output/
cp init/sftpgo.service output/init/
./sftpgo gen completion bash > output/bash_completion/sftpgo
./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
./sftpgo gen man -d output/man/man1
gzip output/man/man1/*
cp sftpgo output/
- name: Upload build artifact
uses: actions/upload-artifact@v2
with:
name: sftpgo-linux-${{ matrix.arch }}-go-${{ matrix.go }}
path: output
- name: Build Packages
id: build_linux_pkgs
run: |
export NFPM_ARCH=${{ matrix.go-arch }}
cd pkgs
./build.sh
PKG_VERSION=$(cat dist/version)
echo "::set-output name=pkg-version::${PKG_VERSION}"
- name: Upload Debian Package
uses: actions/upload-artifact@v2
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-${{ matrix.go-arch }}-deb
path: pkgs/dist/deb/*
- name: Upload RPM Package
uses: actions/upload-artifact@v2
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-${{ matrix.go-arch }}-rpm
path: pkgs/dist/rpm/*
golangci-lint:
name: golangci-lint
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Go
uses: actions/setup-go@v2
with:
go-version: 1.16
- name: Run golangci-lint
uses: golangci/golangci-lint-action@v2
with:
version: v1.37.1
version: latest
skip-go-installation: true


@@ -1,6 +1,8 @@
name: Docker
on:
#schedule:
# - cron: '0 4 * * *' # everyday at 4:00 AM UTC
push:
tags:
- v*
@@ -13,7 +15,7 @@ jobs:
strategy:
matrix:
os:
- ubuntu-18.04
- ubuntu-latest
docker_pkg:
- debian
- alpine
@@ -26,7 +28,7 @@ jobs:
- name: Repo metadata
id: repo
uses: actions/github-script@v3
uses: actions/github-script@v4
with:
script: |
const repo = await github.repos.get(context.repo)


@@ -5,7 +5,7 @@ on:
tags: 'v*'
env:
GO_VERSION: 1.16.3
GO_VERSION: 1.16.8
jobs:
prepare-sources-with-deps:
@@ -51,16 +51,20 @@ jobs:
with:
go-version: ${{ env.GO_VERSION }}
- name: Build for macOS
- name: Build for macOS x86_64
if: startsWith(matrix.os, 'windows-') != true
run: go build -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
run: go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
- name: Build for macOS arm64
if: startsWith(matrix.os, 'macos-') == true
run: CGO_ENABLED=1 GOOS=darwin GOARCH=arm64 SDKROOT=$(xcrun --sdk macosx --show-sdk-path) go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo_arm64
- name: Build for Windows
if: startsWith(matrix.os, 'windows-')
run: |
$GIT_COMMIT = (git describe --always --dirty) | Out-String
$DATE_TIME = ([datetime]::Now.ToUniversalTime().toString("yyyy-MM-ddTHH:mm:ssZ")) | Out-String
go build -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/version.date=$DATE_TIME" -o sftpgo.exe
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/version.date=$DATE_TIME" -o sftpgo.exe
- name: Initialize data provider
run: ./sftpgo initprovider
@@ -103,7 +107,11 @@ jobs:
./sftpgo gen man -d output/man/man1
gzip output/man/man1/*
cd output
tar cJvf sftpgo_${SFTPGO_VERSION}_${OS}_x86_64.tar.xz *
tar cJvf ../sftpgo_${SFTPGO_VERSION}_${OS}_x86_64.tar.xz *
cd ..
cp sftpgo_arm64 output/sftpgo
cd output
tar cJvf ../sftpgo_${SFTPGO_VERSION}_${OS}_arm64.tar.xz *
cd ..
env:
SFTPGO_VERSION: ${{ steps.get_version.outputs.VERSION }}
@@ -143,12 +151,20 @@ jobs:
SFTPGO_VERSION: ${{ steps.get_version.outputs.VERSION }}
OS: ${{ steps.get_os_name.outputs.OS }}
- name: Upload macOS artifact
- name: Upload macOS x86_64 artifact
if: startsWith(matrix.os, 'macos-')
uses: actions/upload-artifact@v2
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.tar.xz
path: ./output/sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.tar.xz
path: ./sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.tar.xz
retention-days: 1
- name: Upload macOS arm64 artifact
if: startsWith(matrix.os, 'macos-')
uses: actions/upload-artifact@v2
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_arm64.tar.xz
path: ./sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_arm64.tar.xz
retention-days: 1
- name: Upload Windows installer artifact
@@ -211,7 +227,7 @@ jobs:
- name: Build on amd64
if: ${{ matrix.arch == 'amd64' }}
run: |
go build -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
mkdir -p output/{init,sqlite,bash_completion,zsh_completion}
echo "For documentation please take a look here:" > output/README.txt
echo "" >> output/README.txt
@@ -234,7 +250,7 @@ jobs:
env:
SFTPGO_VERSION: ${{ steps.get_version.outputs.SFTPGO_VERSION }}
- uses: uraimo/run-on-arch-action@v2.0.9
- uses: uraimo/run-on-arch-action@v2.0.10
if: ${{ matrix.arch != 'amd64' }}
name: Build for ${{ matrix.arch }}
id: build
@@ -253,7 +269,7 @@ jobs:
tar -C /usr/local -xzf go.tar.gz
run: |
export PATH=$PATH:/usr/local/go/bin
go build -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
mkdir -p output/{init,sqlite,bash_completion,zsh_completion}
echo "For documentation please take a look here:" > output/README.txt
echo "" >> output/README.txt
@@ -427,6 +443,11 @@ jobs:
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_macOS_x86_64.tar.xz
- name: Download macOS arm64 artifact
uses: actions/download-artifact@v2
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_macOS_arm64.tar.xz
- name: Download Windows installer x86_64 artifact
uses: actions/download-artifact@v2
with:
@@ -446,7 +467,7 @@ jobs:
run: |
mv sftpgo_windows_x86_64.exe sftpgo_${SFTPGO_VERSION}_windows_x86_64.exe
mv sftpgo_portable_x86_64.zip sftpgo_${SFTPGO_VERSION}_windows_portable_x86_64.zip
gh release create "${SFTPGO_VERSION}"
gh release create "${SFTPGO_VERSION}" -t "${SFTPGO_VERSION}"
gh release upload "${SFTPGO_VERSION}" sftpgo_*.xz --clobber
gh release upload "${SFTPGO_VERSION}" sftpgo-*.rpm --clobber
gh release upload "${SFTPGO_VERSION}" sftpgo_*.deb --clobber


@@ -19,8 +19,11 @@ linters-settings:
simplify: true
goimports:
local-prefixes: github.com/drakkan/sftpgo
maligned:
suggest-new: true
#govet:
# report about shadowed variables
#check-shadowing: true
#enable:
# - fieldalignment
linters:
enable:
@@ -28,15 +31,13 @@ linters:
- errcheck
- gofmt
- goimports
- golint
- revive
- unconvert
- unparam
- bodyclose
- gocyclo
- misspell
- maligned
- whitespace
- dupl
- scopelint
- rowserrcheck
- dogsled


@@ -21,7 +21,7 @@ COPY . .
RUN set -xe && \
export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --dirty)} && \
go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -v -o sftpgo
go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -v -o sftpgo
FROM debian:buster-slim
@@ -52,8 +52,7 @@ ENV SFTPGO_HTTPD__STATIC_FILES_PATH=/usr/share/sftpgo/static
# Modify the default configuration file
RUN sed -i "s|\"users_base_dir\": \"\",|\"users_base_dir\": \"/srv/sftpgo/data\",|" /etc/sftpgo/sftpgo.json && \
sed -i "s|\"backups\"|\"/srv/sftpgo/backups\"|" /etc/sftpgo/sftpgo.json && \
sed -i "s|\"address\": \"127.0.0.1\",|\"address\": \"\",|" /etc/sftpgo/sftpgo.json
sed -i "s|\"backups\"|\"/srv/sftpgo/backups\"|" /etc/sftpgo/sftpgo.json
RUN chown -R sftpgo:sftpgo /etc/sftpgo /srv/sftpgo && chown sftpgo:sftpgo /var/lib/sftpgo && chmod 700 /srv/sftpgo/backups


@@ -1,4 +1,4 @@
FROM golang:1.16-alpine3.12 AS builder
FROM golang:1.16-alpine3.13 AS builder
ENV GOFLAGS="-mod=readonly"
@@ -23,10 +23,10 @@ COPY . .
RUN set -xe && \
export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --dirty)} && \
go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -v -o sftpgo
go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -v -o sftpgo
FROM alpine:3.12
FROM alpine:3.13
# Set to "true" to install the optional git and rsync dependencies
ARG INSTALL_OPTIONAL_PACKAGES=false
@@ -57,8 +57,7 @@ ENV SFTPGO_HTTPD__STATIC_FILES_PATH=/usr/share/sftpgo/static
# Modify the default configuration file
RUN sed -i "s|\"users_base_dir\": \"\",|\"users_base_dir\": \"/srv/sftpgo/data\",|" /etc/sftpgo/sftpgo.json && \
sed -i "s|\"backups\"|\"/srv/sftpgo/backups\"|" /etc/sftpgo/sftpgo.json && \
sed -i "s|\"address\": \"127.0.0.1\",|\"address\": \"\",|" /etc/sftpgo/sftpgo.json
sed -i "s|\"backups\"|\"/srv/sftpgo/backups\"|" /etc/sftpgo/sftpgo.json
RUN chown -R sftpgo:sftpgo /etc/sftpgo /srv/sftpgo && chown sftpgo:sftpgo /var/lib/sftpgo && chmod 700 /srv/sftpgo/backups


@@ -12,40 +12,42 @@ Several storage backends are supported: local filesystem, encrypted local filesy
## Features
- SFTPGo uses virtual accounts stored inside a "data provider".
- SQLite, MySQL, PostgreSQL, bbolt (key/value store in pure Go) and in-memory data providers are supported.
- Each local account is chrooted in its home directory, for cloud-based accounts you can restrict access to a certain base path.
- Support for serving local filesystem, encrypted local filesystem, S3 Compatible Object Storage, Google Cloud Storage, Azure Blob Storage or other SFTP accounts over SFTP/SCP/FTP/WebDAV.
- Virtual folders are supported: a virtual folder can use any of the supported storage backends. So you can have, for example, an S3 user that exposes a GCS bucket (or part of it) on a specified path and an encrypted local filesystem on another one. Virtual folders can be private or shared among multiple users, for shared virtual folders you can define different quota limits for each user.
- Configurable custom commands and/or HTTP hooks on file upload, pre-upload, download, pre-download, delete, pre-delete, rename, on SSH commands and on user add, update and delete.
- Virtual accounts stored within a "data provider".
- SQLite, MySQL, PostgreSQL, CockroachDB, Bolt (key/value store in pure Go) and in-memory data providers are supported.
- Chroot isolation for local accounts. Cloud-based accounts can be restricted to a certain base path.
- Per user and per directory virtual permissions, for each exposed path you can allow or deny: directory listing, upload, overwrite, download, delete, rename, create directories, create symlinks, change owner/group/file mode.
- [REST API](./docs/rest-api.md) for users and folders management, backup, restore and real time reports of the active connections with possibility of forcibly closing a connection.
- [Web based administration interface](./docs/web-admin.md) to easily manage users, folders and connections.
- [Web client interface](./docs/web-client.md) so that end users can change their credentials and browse their files.
- Public key and password authentication. Multiple public keys per user are supported.
- SSH user [certificate authentication](https://cvsweb.openbsd.org/src/usr.bin/ssh/PROTOCOL.certkeys?rev=1.8).
- Keyboard interactive authentication. You can easily setup a customizable multi-factor authentication.
- Partial authentication. You can configure multi-step authentication requiring, for example, the user password after successful public key authentication.
- Per user authentication methods. You can configure the allowed authentication methods for each user.
- Custom authentication via external programs/HTTP API is supported.
- [Data At Rest Encryption](./docs/dare.md) is supported.
- Dynamic user modification before login via external programs/HTTP API is supported.
- Per user authentication methods.
- Custom authentication via external programs/HTTP API.
- [Data At Rest Encryption](./docs/dare.md).
- Dynamic user modification before login via external programs/HTTP API.
- Quota support: accounts can have individual quota expressed as max total size and/or max number of files.
- Bandwidth throttling is supported, with distinct settings for upload and download.
- Bandwidth throttling, with distinct settings for upload and download.
- Per-protocol [rate limiting](./docs/rate-limiting.md) is supported and can be optionally connected to the built-in defender to automatically block hosts that repeatedly exceed the configured limit.
- Per user maximum concurrent sessions.
- Per user and per directory permission management: list directory contents, upload, overwrite, download, delete, rename, create directories, create symlinks, change owner/group and mode, change access and modification times.
- Per user files/folders ownership mapping: you can map all the users to the system account that runs SFTPGo (all platforms are supported) or you can run SFTPGo as root user and map each user or group of users to a different system account (\*NIX only).
- Per user IP filters are supported: login can be restricted to specific ranges of IP addresses or to a specific IP address.
- Per user and per directory shell like patterns filters are supported: files can be allowed or denied based on shell like patterns.
- Virtual folders are supported: directories outside the user home directory can be exposed as virtual folders.
- Configurable custom commands and/or HTTP notifications on file upload, download, pre-delete, delete, rename, on SSH commands and on user add, update and delete.
- Per user and global IP filters: login can be restricted to specific ranges of IP addresses or to a specific IP address.
- Per user and per directory shell like patterns filters: files can be allowed or denied based on shell like patterns.
- Automatically terminating idle connections.
- Automatic blocklist management is supported using the built-in [defender](./docs/defender.md).
- Automatic blocklist management using the built-in [defender](./docs/defender.md).
- Atomic uploads are configurable.
- Per user files/folders ownership mapping: you can map all the users to the system account that runs SFTPGo (all platforms are supported) or you can run SFTPGo as root user and map each user or group of users to a different system account (\*NIX only).
- Support for Git repositories over SSH.
- SCP and rsync are supported.
- FTP/S is supported. You can configure the FTP service to require TLS for both control and data connections.
- [WebDAV](./docs/webdav.md) is supported.
- Two-Way TLS authentication, aka TLS with client certificate authentication, is supported for REST API/Web Admin, FTPS and WebDAV over HTTPS.
- Support for serving local filesystem, encrypted local filesystem, S3 Compatible Object Storage, Google Cloud Storage, Azure Blob Storage or other SFTP accounts over SFTP/SCP/FTP/WebDAV.
- Per user protocols restrictions. You can configure the allowed protocols (SSH/FTP/WebDAV) for each user.
- [Prometheus metrics](./docs/metrics.md) are exposed.
- Support for HAProxy PROXY protocol: you can proxy and/or load balance the SFTP/SCP/FTP/WebDAV service without losing the information about the client's address.
- [REST API](./docs/rest-api.md) for users and folders management, backup, restore and real time reports of the active connections with possibility of forcibly closing a connection.
- [Web based administration interface](./docs/web-admin.md) to easily manage users, folders and connections.
- Easy [migration](./examples/convertusers) from Linux system user accounts.
- [Portable mode](./docs/portable-mode.md): a convenient way to share a single directory on demand.
- [SFTP subsystem mode](./docs/sftp-subsystem.md): you can use SFTPGo as OpenSSH's SFTP subsystem.
@@ -59,8 +61,8 @@ SFTPGo is developed and tested on Linux. After each commit, the code is automati
## Requirements
- Go as build only dependency. We support the Go version(s) used in [continuous integration workflows](./tree/main/.github/workflows).
- A suitable SQL server to use as data provider: PostgreSQL 9.4+ or MySQL 5.6+ or SQLite 3.x or CockroachDB stable.
- The SQL server is optional: you can choose to use an embedded bolt database as key/value store or an in memory data provider.
## Installation
@@ -78,10 +80,21 @@ Some Linux distro packages are available:
- Deb and RPM packages are built after each commit and for each release.
- For Ubuntu a PPA is available [here](https://launchpad.net/~sftpgo/+archive/ubuntu/sftpgo).
SFTPGo is also available on [AWS Marketplace](https://aws.amazon.com/marketplace/seller-profile?id=6e849ab8-70a6-47de-9a43-13c3fa849335); purchasing from there helps keep SFTPGo a long-term sustainable project.
On FreeBSD you can install from the [SFTPGo port](https://www.freshports.org/ftp/sftpgo).
On Windows you can use:
- The Windows installer to install and run SFTPGo as a Windows service.
- The portable package to start SFTPGo on demand.
You can easily test new features by selecting a commit from the [Actions](https://github.com/drakkan/sftpgo/actions) page and downloading the matching build artifacts for Linux, macOS or Windows. GitHub stores artifacts for 90 days.
Alternatively, you can [build from source](./docs/build-from-source.md).
[Getting Started Guide for the Impatient](./docs/howto/getting-started.md).
## Configuration
A full explanation of all configuration methods can be found [here](./docs/full-configuration.md).
@@ -100,7 +113,7 @@ Check out [this documentation](./docs/service.md) if you want to run SFTPGo as a
Before starting the SFTPGo server please ensure that the configured data provider is properly initialized/updated.
For PostgreSQL, MySQL and CockroachDB providers, you need to create the configured database. For SQLite, the configured database will be automatically created at startup. Memory and bolt data providers do not require an initialization but they could require an update to the existing data after upgrading SFTPGo.
SFTPGo will attempt to automatically detect if the data provider is initialized/updated and, if not, will attempt to initialize/update it on startup as needed.
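If you prefer to run the initialization/update manually, a minimal sketch (assuming `/etc/sftpgo` as your configuration directory):

```shell
sftpgo initprovider --config-dir /etc/sftpgo
```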
@@ -120,27 +133,50 @@ sftpgo initprovider --help
You can disable automatic data provider checks/updates at startup by setting the `update_mode` configuration key to `1`.
## Create the first admin
To start using SFTPGo you need to create an admin user. You can do it in several ways:
- by using the web admin interface. The default URL is [http://127.0.0.1:8080/web/admin](http://127.0.0.1:8080/web/admin)
- by loading initial data
- by enabling `create_default_admin` in your configuration file. In this case the credentials are `admin`/`password` (see the example below)
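As a sketch, assuming the standard `SFTPGO_` environment variable mapping (configuration keys uppercased, with `.` replaced by `__`), the default admin could also be enabled without editing the configuration file:

```shell
SFTPGO_DATA_PROVIDER__CREATE_DEFAULT_ADMIN=1 sftpgo serve
```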
## Upgrading
SFTPGo supports upgrading from the previous release branch to the current one.
Some examples for supported upgrade paths are:
- from 1.2.x to 2.0.x
- from 2.0.x to 2.1.x and so on.
For supported upgrade paths, the data and schema are migrated automatically; alternatively, you can use the `initprovider` command.
So if, for example, you want to upgrade from a version before 1.2.x to 2.0.x, you must first install version 1.2.x, update the data provider and finally install version 2.0.x. It is recommended to always install the latest available minor version, i.e. do not install 1.2.0 if 1.2.2 is available.
Loading data from a provider independent JSON dump is supported from the previous release branch to the current one too. After upgrading SFTPGo it is advisable to regenerate the JSON dump from the new version.
## Downgrading
If for some reason you want to downgrade SFTPGo, you may need to downgrade your data provider schema and data as well. You can use the `revertprovider` command for this task.
As for upgrading, SFTPGo supports downgrading from the previous release branch to the current one.
So, if you plan to downgrade from 2.0.x to 1.2.x, before uninstalling the 2.0.x version you can prepare your data provider by executing the following command from the configuration directory:
```shell
sftpgo revertprovider --to-version 4
```
Take a look at the CLI usage to see the supported values for the `--to-version` argument and to learn how to specify a different configuration file:
```shell
sftpgo revertprovider --help
```
The `revertprovider` command is not supported for the memory provider.
Please note that we only support the current release branch and the current main branch; if you find a bug it is better to report it than to downgrade to an older, unsupported version.
## Users and folders management
After starting SFTPGo you can manage users and folders using:
@@ -150,6 +186,8 @@ After starting SFTPGo you can manage users and folders using:
To support embedded data providers like `bolt` and `SQLite`, we can't have a CLI that writes users and folders directly to the data provider; we always have to use the REST API.
Full details for users, folders, admins and other resources are documented in the [OpenAPI](/httpd/schema/openapi.yaml) schema. If you want to render the schema without importing it manually, you can explore it on [Stoplight](https://sftpgo.stoplight.io/docs/sftpgo/openapi.yaml).
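As an illustrative sketch of the REST flow, you first obtain a JWT token and then call the user endpoints. The `/api/v2` paths and the default `127.0.0.1:8080` binding below are assumptions based on the OpenAPI schema and the default configuration:

```shell
# get a token using the admin credentials
curl -u admin:password "http://127.0.0.1:8080/api/v2/token"
# create a user, with TOKEN set to the access token returned above
curl -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{"username":"user1","password":"secret","home_dir":"/srv/sftpgo/user1","permissions":{"/":["*"]},"status":1}' \
  "http://127.0.0.1:8080/api/v2/users"
```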
## Tutorials
Some step-by-step tutorials can be found inside the source tree [howto](./docs/howto "How-to") directory.
@@ -179,7 +217,7 @@ More information about custom actions can be found [here](./docs/custom-actions.
## Virtual folders
Directories outside the user home directory or based on a different storage provider can be exposed as virtual folders, more information [here](./docs/virtual-folders.md).
## Other hooks
@@ -253,8 +291,6 @@ I'd like to make SFTPGo into a sustainable long term project and your [sponsorsh
Thank you to our sponsors!
[<img src="https://images.squarespace-cdn.com/content/5e5db7f1ded5fc06a4e9628b/1583608099266-T5NW2WNQL7PC15LPRB16/logo+black.png?format=1500w&content-type=image%2Fpng" width="33%" alt="segmed logo">](https://www.segmed.ai/)
## License
GNU AGPLv3


@@ -1,10 +1,10 @@
//go:build !noportable
// +build !noportable
package cmd
import (
"fmt"
"io/ioutil"
"os"
"path"
"path/filepath"
@@ -22,58 +22,60 @@ import (
)
var (
directoryToServe string
portableSFTPDPort int
portableAdvertiseService bool
portableAdvertiseCredentials bool
portableUsername string
portablePassword string
portableLogFile string
portableLogVerbose bool
portablePublicKeys []string
portablePermissions []string
portableSSHCommands []string
portableAllowedPatterns []string
portableDeniedPatterns []string
portableFsProvider int
portableS3Bucket string
portableS3Region string
portableS3AccessKey string
portableS3AccessSecret string
portableS3Endpoint string
portableS3StorageClass string
portableS3KeyPrefix string
portableS3ULPartSize int
portableS3ULConcurrency int
portableGCSBucket string
portableGCSCredentialsFile string
portableGCSAutoCredentials int
portableGCSStorageClass string
portableGCSKeyPrefix string
portableFTPDPort int
portableFTPSCert string
portableFTPSKey string
portableWebDAVPort int
portableWebDAVCert string
portableWebDAVKey string
portableAzContainer string
portableAzAccountName string
portableAzAccountKey string
portableAzEndpoint string
portableAzAccessTier string
portableAzSASURL string
portableAzKeyPrefix string
portableAzULPartSize int
portableAzULConcurrency int
portableAzUseEmulator bool
portableCryptPassphrase string
portableSFTPEndpoint string
portableSFTPUsername string
portableSFTPPassword string
portableSFTPPrivateKeyPath string
portableSFTPFingerprints []string
portableSFTPPrefix string
portableSFTPDisableConcurrentReads bool
portableSFTPDBufferSize int64
portableCmd = &cobra.Command{
Use: "portable",
Short: "Serve a single directory",
Long: `To serve the current working directory with auto generated credentials simply
@@ -84,9 +86,9 @@ $ sftpgo portable
Please take a look at the usage below to customize the serving parameters`,
Run: func(cmd *cobra.Command, args []string) {
portableDir := directoryToServe
fsProvider := vfs.FilesystemProvider(portableFsProvider)
if !filepath.IsAbs(portableDir) {
if fsProvider == vfs.LocalFilesystemProvider {
portableDir, _ = filepath.Abs(portableDir)
} else {
portableDir = os.TempDir()
@@ -95,7 +97,7 @@ Please take a look at the usage below to customize the serving parameters`,
permissions := make(map[string][]string)
permissions["/"] = portablePermissions
portableGCSCredentials := ""
if fsProvider == vfs.GCSFilesystemProvider && portableGCSCredentialsFile != "" {
contents, err := getFileContents(portableGCSCredentialsFile)
if err != nil {
fmt.Printf("Unable to get GCS credentials: %v\n", err)
@@ -105,7 +107,7 @@ Please take a look at the usage below to customize the serving parameters`,
portableGCSAutoCredentials = 0
}
portableSFTPPrivateKey := ""
if fsProvider == vfs.SFTPFilesystemProvider && portableSFTPPrivateKeyPath != "" {
contents, err := getFileContents(portableSFTPPrivateKeyPath)
if err != nil {
fmt.Printf("Unable to get SFTP private key: %v\n", err)
@@ -149,8 +151,8 @@ Please take a look at the usage below to customize the serving parameters`,
Permissions: permissions,
HomeDir: portableDir,
Status: 1,
FsConfig: vfs.Filesystem{
Provider: vfs.FilesystemProvider(portableFsProvider),
S3Config: vfs.S3FsConfig{
Bucket: portableS3Bucket,
Region: portableS3Region,
@@ -175,7 +177,7 @@ Please take a look at the usage below to customize the serving parameters`,
AccountKey: kms.NewPlainSecret(portableAzAccountKey),
Endpoint: portableAzEndpoint,
AccessTier: portableAzAccessTier,
SASURL: kms.NewPlainSecret(portableAzSASURL),
KeyPrefix: portableAzKeyPrefix,
UseEmulator: portableAzUseEmulator,
UploadPartSize: int64(portableAzULPartSize),
@@ -185,12 +187,14 @@ Please take a look at the usage below to customize the serving parameters`,
Passphrase: kms.NewPlainSecret(portableCryptPassphrase),
},
SFTPConfig: vfs.SFTPFsConfig{
Endpoint: portableSFTPEndpoint,
Username: portableSFTPUsername,
Password: kms.NewPlainSecret(portableSFTPPassword),
PrivateKey: kms.NewPlainSecret(portableSFTPPrivateKey),
Fingerprints: portableSFTPFingerprints,
Prefix: portableSFTPPrefix,
DisableCouncurrentReads: portableSFTPDisableConcurrentReads,
BufferSize: portableSFTPDBufferSize,
},
},
Filters: dataprovider.UserFilters{
@@ -256,7 +260,7 @@ multicast DNS`)
advertised via multicast DNS, this
flag allows to put username/password
inside the advertised TXT record`)
portableCmd.Flags().IntVarP(&portableFsProvider, "fs-provider", "f", int(dataprovider.LocalFilesystemProvider), `0 => local filesystem
portableCmd.Flags().IntVarP(&portableFsProvider, "fs-provider", "f", int(vfs.LocalFilesystemProvider), `0 => local filesystem
1 => AWS S3 compatible
2 => Google Cloud Storage
3 => Azure Blob Storage
@@ -318,6 +322,17 @@ key for SFTP provider`)
portableCmd.Flags().StringVar(&portableSFTPPrefix, "sftp-prefix", "", `SFTP prefix allows restrict all
operations to a given path within the
remote SFTP server`)
portableCmd.Flags().BoolVar(&portableSFTPDisableConcurrentReads, "sftp-disable-concurrent-reads", false, `Concurrent reads are safe to use and
disabling them will degrade performance.
Disable for read once servers`)
portableCmd.Flags().Int64Var(&portableSFTPDBufferSize, "sftp-buffer-size", 0, `The size of the buffer (in MB) to use
for transfers. By enabling buffering,
the reads and writes, from/to the
remote SFTP server, are split in
multiple concurrent requests and this
allows data to be transferred at a
faster rate, over high latency networks,
by overlapping round-trip times`)
rootCmd.AddCommand(portableCmd)
}
@@ -384,7 +399,7 @@ func getFileContents(name string) (string, error) {
if fi.Size() > 1048576 {
return "", fmt.Errorf("%#v is too big %v/1048576 bytes", name, fi.Size())
}
contents, err := os.ReadFile(name)
if err != nil {
return "", err
}


@@ -1,3 +1,4 @@
//go:build noportable
// +build noportable
package cmd


@@ -26,8 +26,8 @@ Please take a look at the usage below to customize the options.`,
Run: func(cmd *cobra.Command, args []string) {
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
if revertProviderTargetVersion != 8 {
logger.WarnToConsole("Unsupported target version, 8 is the only supported one")
os.Exit(1)
}
configDir = utils.CleanDirInput(configDir)
@@ -57,7 +57,7 @@ Please take a look at the usage below to customize the options.`,
func init() {
addConfigFlags(revertProviderCmd)
revertProviderCmd.Flags().IntVar(&revertProviderTargetVersion, "to-version", 0, `4 means the version supported in v1.0.0-v1.2.x`)
revertProviderCmd.Flags().IntVar(&revertProviderTargetVersion, "to-version", 0, `8 means the version supported in v2.0.x`)
revertProviderCmd.MarkFlagRequired("to-version") //nolint:errcheck
rootCmd.AddCommand(revertProviderCmd)


@@ -18,6 +18,7 @@ import (
"github.com/drakkan/sftpgo/httpclient"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/vfs"
)
var (
@@ -30,6 +31,11 @@ var (
type ProtocolActions struct {
// Valid values are download, upload, pre-delete, delete, rename, ssh_cmd. Empty slice to disable
ExecuteOn []string `json:"execute_on" mapstructure:"execute_on"`
// Actions to be performed synchronously.
// The pre-delete action is always executed synchronously while the other ones are asynchronous.
// Executing an action synchronously means that SFTPGo will not return a result code to the client
// (which is waiting for it) until your hook has completed its execution.
ExecuteSync []string `json:"execute_sync" mapstructure:"execute_sync"`
// Absolute path to an external program or an HTTP URL
Hook string `json:"hook" mapstructure:"hook"`
}
@@ -43,11 +49,31 @@ func InitializeActionHandler(handler ActionHandler) {
actionHandler = handler
}
// ExecutePreAction executes a pre-* action and returns the result
func ExecutePreAction(user *dataprovider.User, operation, filePath, virtualPath, protocol string, fileSize int64, openFlags int) error {
if !utils.IsStringInSlice(operation, Config.Actions.ExecuteOn) {
// for pre-delete we execute the internal handling on error, so we must return errUnconfiguredAction.
// Other pre actions will deny the operation on error, so if we have no configuration we must return
// a nil error
if operation == operationPreDelete {
return errUnconfiguredAction
}
return nil
}
notification := newActionNotification(user, operation, filePath, virtualPath, "", "", protocol, fileSize, openFlags, nil)
return actionHandler.Handle(notification)
}
// ExecuteActionNotification executes the defined hook, if any, for the specified action
func ExecuteActionNotification(user *dataprovider.User, operation, filePath, virtualPath, target, sshCmd, protocol string, fileSize int64, err error) {
notification := newActionNotification(user, operation, filePath, virtualPath, target, sshCmd, protocol, fileSize, 0, err)
if utils.IsStringInSlice(operation, Config.Actions.ExecuteSync) {
actionHandler.Handle(notification) //nolint:errcheck
return
}
go actionHandler.Handle(notification) //nolint:errcheck
}
// ActionHandler handles a notification for a Protocol Action.
@@ -68,29 +94,34 @@ type ActionNotification struct {
Endpoint string `json:"endpoint,omitempty"`
Status int `json:"status"`
Protocol string `json:"protocol"`
OpenFlags int `json:"open_flags,omitempty"`
}
func newActionNotification(
user *dataprovider.User,
operation, filePath, virtualPath, target, sshCmd, protocol string,
fileSize int64,
openFlags int,
err error,
) *ActionNotification {
var bucket, endpoint string
status := 1
fsConfig := user.GetFsConfigForPath(virtualPath)
switch fsConfig.Provider {
case vfs.S3FilesystemProvider:
bucket = fsConfig.S3Config.Bucket
endpoint = fsConfig.S3Config.Endpoint
case vfs.GCSFilesystemProvider:
bucket = fsConfig.GCSConfig.Bucket
case vfs.AzureBlobFilesystemProvider:
bucket = fsConfig.AzBlobConfig.Container
if fsConfig.AzBlobConfig.Endpoint != "" {
endpoint = fsConfig.AzBlobConfig.Endpoint
}
case vfs.SFTPFilesystemProvider:
endpoint = fsConfig.SFTPConfig.Endpoint
}
if err == ErrQuotaExceeded {
@@ -106,11 +137,12 @@ func newActionNotification(
TargetPath: target,
SSHCmd: sshCmd,
FileSize: fileSize,
FsProvider: int(fsConfig.Provider),
Bucket: bucket,
Endpoint: endpoint,
Status: status,
Protocol: protocol,
OpenFlags: openFlags,
}
}
@@ -138,19 +170,16 @@ func (h *defaultActionHandler) handleHTTP(notification *ActionNotification) erro
u, err := url.Parse(Config.Actions.Hook)
if err != nil {
logger.Warn(notification.Protocol, "", "Invalid hook %#v for operation %#v: %v", Config.Actions.Hook, notification.Action, err)
return err
}
startTime := time.Now()
respCode := 0
var b bytes.Buffer
_ = json.NewEncoder(&b).Encode(notification)
resp, err := httpclient.RetryablePost(Config.Actions.Hook, "application/json", &b)
if err == nil {
respCode = resp.StatusCode
resp.Body.Close()
@@ -160,7 +189,8 @@ func (h *defaultActionHandler) handleHTTP(notification *ActionNotification) erro
}
}
logger.Debug(notification.Protocol, "", "notified operation %#v to URL: %v status code: %v, elapsed: %v err: %v", notification.Action, u.String(), respCode, time.Since(startTime), err)
logger.Debug(notification.Protocol, "", "notified operation %#v to URL: %v status code: %v, elapsed: %v err: %v",
notification.Action, u.Redacted(), respCode, time.Since(startTime), err)
return err
}
@@ -201,5 +231,6 @@ func notificationAsEnvVars(notification *ActionNotification) []string {
fmt.Sprintf("SFTPGO_ACTION_ENDPOINT=%v", notification.Endpoint),
fmt.Sprintf("SFTPGO_ACTION_STATUS=%v", notification.Status),
fmt.Sprintf("SFTPGO_ACTION_PROTOCOL=%v", notification.Protocol),
fmt.Sprintf("SFTPGO_ACTION_OPEN_FLAGS=%v", notification.OpenFlags),
}
}


@@ -3,7 +3,6 @@ package common
import (
"errors"
"fmt"
"io/ioutil"
"os"
"os/exec"
"path/filepath"
@@ -20,7 +19,7 @@ func TestNewActionNotification(t *testing.T) {
user := &dataprovider.User{
Username: "username",
}
user.FsConfig.Provider = vfs.LocalFilesystemProvider
user.FsConfig.S3Config = vfs.S3FsConfig{
Bucket: "s3bucket",
Endpoint: "endpoint",
@@ -30,38 +29,44 @@ func TestNewActionNotification(t *testing.T) {
}
user.FsConfig.AzBlobConfig = vfs.AzBlobFsConfig{
Container: "azcontainer",
SASURL: "azsasurl",
Endpoint: "azendpoint",
}
a := newActionNotification(user, operationDownload, "path", "target", "", ProtocolSFTP, 123, errors.New("fake error"))
user.FsConfig.SFTPConfig = vfs.SFTPFsConfig{
Endpoint: "sftpendpoint",
}
a := newActionNotification(user, operationDownload, "path", "vpath", "target", "", ProtocolSFTP, 123, 0, errors.New("fake error"))
assert.Equal(t, user.Username, a.Username)
assert.Equal(t, 0, len(a.Bucket))
assert.Equal(t, 0, len(a.Endpoint))
assert.Equal(t, 0, a.Status)
user.FsConfig.Provider = vfs.S3FilesystemProvider
a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", ProtocolSSH, 123, 0, nil)
assert.Equal(t, "s3bucket", a.Bucket)
assert.Equal(t, "endpoint", a.Endpoint)
assert.Equal(t, 1, a.Status)
user.FsConfig.Provider = vfs.GCSFilesystemProvider
a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", ProtocolSCP, 123, 0, ErrQuotaExceeded)
assert.Equal(t, "gcsbucket", a.Bucket)
assert.Equal(t, 0, len(a.Endpoint))
assert.Equal(t, 2, a.Status)
user.FsConfig.Provider = vfs.AzureBlobFilesystemProvider
a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", ProtocolSCP, 123, 0, nil)
assert.Equal(t, "azcontainer", a.Bucket)
assert.Equal(t, "azendpoint", a.Endpoint)
assert.Equal(t, 1, a.Status)
a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", ProtocolSCP, 123, os.O_APPEND, nil)
assert.Equal(t, "azcontainer", a.Bucket)
assert.Equal(t, "azendpoint", a.Endpoint)
assert.Equal(t, 1, a.Status)
assert.Equal(t, os.O_APPEND, a.OpenFlags)
user.FsConfig.Provider = vfs.SFTPFilesystemProvider
a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", ProtocolSFTP, 123, 0, nil)
assert.Equal(t, "sftpendpoint", a.Endpoint)
}
func TestActionHTTP(t *testing.T) {
@@ -74,7 +79,7 @@ func TestActionHTTP(t *testing.T) {
user := &dataprovider.User{
Username: "username",
}
a := newActionNotification(user, operationDownload, "path", "target", "", ProtocolSFTP, 123, nil)
a := newActionNotification(user, operationDownload, "path", "vpath", "target", "", ProtocolSFTP, 123, 0, nil)
err := actionHandler.Handle(a)
assert.NoError(t, err)
@@ -107,11 +112,11 @@ func TestActionCMD(t *testing.T) {
user := &dataprovider.User{
Username: "username",
}
a := newActionNotification(user, operationDownload, "path", "target", "", ProtocolSFTP, 123, nil)
a := newActionNotification(user, operationDownload, "path", "vpath", "target", "", ProtocolSFTP, 123, 0, nil)
err = actionHandler.Handle(a)
assert.NoError(t, err)
SSHCommandActionNotification(user, "path", "target", "sha1sum", nil)
ExecuteActionNotification(user, OperationSSHCmd, "path", "vpath", "target", "sha1sum", ProtocolSSH, 0, nil)
Config.Actions = actionsCopy
}
@@ -131,7 +136,7 @@ func TestWrongActions(t *testing.T) {
Username: "username",
}
a := newActionNotification(user, operationUpload, "", "", "", ProtocolSFTP, 123, nil)
a := newActionNotification(user, operationUpload, "", "", "", "", ProtocolSFTP, 123, 0, nil)
err := actionHandler.Handle(a)
assert.Error(t, err, "action with bad command must fail")
@@ -180,15 +185,15 @@ func TestPreDeleteAction(t *testing.T) {
}
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
fs := vfs.NewOsFs("id", homeDir, nil)
c := NewBaseConnection("id", ProtocolSFTP, user, fs)
fs := vfs.NewOsFs("id", homeDir, "")
c := NewBaseConnection("id", ProtocolSFTP, "", user)
testfile := filepath.Join(user.HomeDir, "testfile")
err = os.WriteFile(testfile, []byte("test"), os.ModePerm)
assert.NoError(t, err)
info, err := os.Stat(testfile)
assert.NoError(t, err)
err = c.RemoveFile(fs, testfile, "testfile", info)
assert.NoError(t, err)
assert.FileExists(t, testfile)

common/clientsmap.go (new file)

@@ -0,0 +1,51 @@
package common
import (
"sync"
"sync/atomic"
"github.com/drakkan/sftpgo/logger"
)
// clientsMap is a struct containing the map of the connected clients
type clientsMap struct {
totalConnections int32
mu sync.RWMutex
clients map[string]int
}
func (c *clientsMap) add(source string) {
atomic.AddInt32(&c.totalConnections, 1)
c.mu.Lock()
defer c.mu.Unlock()
c.clients[source]++
}
func (c *clientsMap) remove(source string) {
c.mu.Lock()
defer c.mu.Unlock()
if val, ok := c.clients[source]; ok {
atomic.AddInt32(&c.totalConnections, -1)
c.clients[source]--
if val > 1 {
return
}
delete(c.clients, source)
} else {
logger.Warn(logSender, "", "cannot remove client %v it is not mapped", source)
}
}
func (c *clientsMap) getTotal() int32 {
return atomic.LoadInt32(&c.totalConnections)
}
func (c *clientsMap) getTotalFrom(source string) int {
c.mu.RLock()
defer c.mu.RUnlock()
return c.clients[source]
}

common/clientsmap_test.go (new file)

@@ -0,0 +1,59 @@
package common
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestClientsMap(t *testing.T) {
m := clientsMap{
clients: make(map[string]int),
}
ip1 := "192.168.1.1"
ip2 := "192.168.1.2"
m.add(ip1)
assert.Equal(t, int32(1), m.getTotal())
assert.Equal(t, 1, m.getTotalFrom(ip1))
assert.Equal(t, 0, m.getTotalFrom(ip2))
m.add(ip1)
m.add(ip2)
assert.Equal(t, int32(3), m.getTotal())
assert.Equal(t, 2, m.getTotalFrom(ip1))
assert.Equal(t, 1, m.getTotalFrom(ip2))
m.add(ip1)
m.add(ip1)
m.add(ip2)
assert.Equal(t, int32(6), m.getTotal())
assert.Equal(t, 4, m.getTotalFrom(ip1))
assert.Equal(t, 2, m.getTotalFrom(ip2))
m.remove(ip2)
assert.Equal(t, int32(5), m.getTotal())
assert.Equal(t, 4, m.getTotalFrom(ip1))
assert.Equal(t, 1, m.getTotalFrom(ip2))
m.remove("unknown")
assert.Equal(t, int32(5), m.getTotal())
assert.Equal(t, 4, m.getTotalFrom(ip1))
assert.Equal(t, 1, m.getTotalFrom(ip2))
m.remove(ip2)
assert.Equal(t, int32(4), m.getTotal())
assert.Equal(t, 4, m.getTotalFrom(ip1))
assert.Equal(t, 0, m.getTotalFrom(ip2))
m.remove(ip1)
m.remove(ip1)
m.remove(ip1)
assert.Equal(t, int32(1), m.getTotal())
assert.Equal(t, 1, m.getTotalFrom(ip1))
assert.Equal(t, 0, m.getTotalFrom(ip2))
m.remove(ip1)
assert.Equal(t, int32(0), m.getTotal())
assert.Equal(t, 0, m.getTotalFrom(ip1))
assert.Equal(t, 0, m.getTotalFrom(ip2))
}


@@ -23,28 +23,34 @@ import (
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/metrics"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/vfs"
)
// constants
const (
logSender = "common"
uploadLogSender = "Upload"
downloadLogSender = "Download"
renameLogSender = "Rename"
rmdirLogSender = "Rmdir"
mkdirLogSender = "Mkdir"
symlinkLogSender = "Symlink"
removeLogSender = "Remove"
chownLogSender = "Chown"
chmodLogSender = "Chmod"
chtimesLogSender = "Chtimes"
truncateLogSender = "Truncate"
operationDownload = "download"
operationUpload = "upload"
operationDelete = "delete"
operationPreDelete = "pre-delete"
operationRename = "rename"
operationSSHCmd = "ssh_cmd"
logSender = "common"
uploadLogSender = "Upload"
downloadLogSender = "Download"
renameLogSender = "Rename"
rmdirLogSender = "Rmdir"
mkdirLogSender = "Mkdir"
symlinkLogSender = "Symlink"
removeLogSender = "Remove"
chownLogSender = "Chown"
chmodLogSender = "Chmod"
chtimesLogSender = "Chtimes"
truncateLogSender = "Truncate"
operationDownload = "download"
operationUpload = "upload"
operationDelete = "delete"
// Pre-download action name
OperationPreDownload = "pre-download"
// Pre-upload action name
OperationPreUpload = "pre-upload"
operationPreDelete = "pre-delete"
operationRename = "rename"
// SSH command action name
OperationSSHCmd = "ssh_cmd"
chtimesFormat = "2006-01-02T15:04:05" // YYYY-MM-DDTHH:MM:SS
idleTimeoutCheckInterval = 3 * time.Minute
)
@@ -70,6 +76,7 @@ const (
ProtocolSSH = "SSH"
ProtocolFTP = "FTP"
ProtocolWebDAV = "DAV"
ProtocolHTTP = "HTTP"
)
// Upload modes
@@ -79,6 +86,12 @@ const (
UploadModeAtomicWithResume
)
func init() {
Connections.clients = clientsMap{
clients: make(map[string]int),
}
}
// errors definitions
var (
ErrPermissionDenied = errors.New("permission denied")
@@ -90,6 +103,8 @@ var (
ErrConnectionDenied = errors.New("you are not allowed to connect")
ErrNoBinding = errors.New("no binding configured")
ErrCrtRevoked = errors.New("your certificate has been revoked")
ErrNoCredentials = errors.New("no credential provided")
ErrInternalFailure = errors.New("internal failure")
errNoTransfer = errors.New("requested transfer not found")
errTransferMismatch = errors.New("transfer mismatch")
)
@@ -103,7 +118,9 @@ var (
QuotaScans ActiveScans
idleTimeoutTicker *time.Ticker
idleTimeoutTickerDone chan bool
supportedProtocols = []string{ProtocolSFTP, ProtocolSCP, ProtocolSSH, ProtocolFTP, ProtocolWebDAV, ProtocolHTTP}
// the map key is the protocol, for each protocol we can have multiple rate limiters
rateLimiters map[string][]*rateLimiter
)
// Initialize sets the common configuration
@@ -123,9 +140,37 @@ func Initialize(c Configuration) error {
logger.Info(logSender, "", "defender initialized with config %+v", c.DefenderConfig)
Config.defender = defender
}
rateLimiters = make(map[string][]*rateLimiter)
for _, rlCfg := range c.RateLimitersConfig {
if rlCfg.isEnabled() {
if err := rlCfg.validate(); err != nil {
return fmt.Errorf("rate limiters initialization error: %v", err)
}
rateLimiter := rlCfg.getLimiter()
for _, protocol := range rlCfg.Protocols {
rateLimiters[protocol] = append(rateLimiters[protocol], rateLimiter)
}
}
}
vfs.SetTempPath(c.TempPath)
dataprovider.SetTempPath(c.TempPath)
return nil
}
// LimitRate blocks until all the configured rate limiters
// allow one event to happen.
// It returns an error if the time to wait exceeds the max
// allowed delay
func LimitRate(protocol, ip string) (time.Duration, error) {
for _, limiter := range rateLimiters[protocol] {
if delay, err := limiter.Wait(ip); err != nil {
logger.Debug(logSender, "", "protocol %v ip %v: %v", protocol, ip, err)
return delay, err
}
}
return 0, nil
}
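// Usage sketch (illustrative, not part of this change set): protocol
// services call LimitRate before serving a new connection, e.g.
//
//	if _, err := LimitRate(ProtocolFTP, ipAddr); err != nil {
//		// the wait time exceeded the configured maximum delay: drop the connection
//		return
//	}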
// ReloadDefender reloads the defender's block and safe lists
func ReloadDefender() error {
if Config.defender == nil {
@@ -154,13 +199,31 @@ func GetDefenderBanTime(ip string) *time.Time {
return Config.defender.GetBanTime(ip)
}
// GetDefenderHosts returns hosts that are banned or for which some violations have been detected
func GetDefenderHosts() []*DefenderEntry {
if Config.defender == nil {
return nil
}
return Config.defender.GetHosts()
}
// GetDefenderHost returns a defender host by ip, if any
func GetDefenderHost(ip string) (*DefenderEntry, error) {
if Config.defender == nil {
return nil, errors.New("defender is disabled")
}
return Config.defender.GetHost(ip)
}
// DeleteDefenderHost removes the specified IP address from the defender lists
func DeleteDefenderHost(ip string) bool {
if Config.defender == nil {
return false
}
return Config.defender.DeleteHost(ip)
}
// GetDefenderScore returns the score for the given IP
@@ -295,6 +358,12 @@ type Configuration struct {
// 2 means "ignore mode for cloud fs": requests for changing permissions and owner/group/time are
// silently ignored for cloud based filesystem such as S3, GCS, Azure Blob
SetstatMode int `json:"setstat_mode" mapstructure:"setstat_mode"`
// TempPath defines the path for temporary files such as those used for atomic uploads or file pipes.
// If you set this option you must make sure that the defined path exists, is accessible for writing
// by the user running SFTPGo, and is on the same filesystem as the users' home directories, otherwise
// the renaming for atomic uploads will become a copy and therefore may take a long time.
// The temporary files are not namespaced. The default is generally fine. Leave empty for the default.
TempPath string `json:"temp_path" mapstructure:"temp_path"`
// Support for HAProxy PROXY protocol.
// If you are running SFTPGo behind a proxy server such as HAProxy, AWS ELB or NGNIX, you can enable
// the proxy protocol. It provides a convenient way to safely transport connection information
@@ -312,14 +381,23 @@ type Configuration struct {
// If proxy protocol is set to 2 and we receive a proxy header from an IP that is not in the list then the
// connection will be rejected.
ProxyAllowed []string `json:"proxy_allowed" mapstructure:"proxy_allowed"`
// Absolute path to an external program or an HTTP URL to invoke as soon as SFTPGo starts.
// If you define an HTTP URL it will be invoked using a `GET` request.
// Please note that SFTPGo services may not yet be available when this hook is run.
// Leave empty to disable.
StartupHook string `json:"startup_hook" mapstructure:"startup_hook"`
// Absolute path to an external program or an HTTP URL to invoke after a user connects
// and before he tries to login. It allows you to reject the connection based on the source
// IP address. Leave empty to disable.
PostConnectHook string `json:"post_connect_hook" mapstructure:"post_connect_hook"`
// Maximum number of concurrent client connections. 0 means unlimited
MaxTotalConnections int `json:"max_total_connections" mapstructure:"max_total_connections"`
// Maximum number of concurrent client connections from the same host (IP). 0 means unlimited
MaxPerHostConnections int `json:"max_per_host_connections" mapstructure:"max_per_host_connections"`
// Defender configuration
DefenderConfig DefenderConfig `json:"defender" mapstructure:"defender"`
// Rate limiter configurations
RateLimitersConfig []RateLimiterConfig `json:"rate_limiters" mapstructure:"rate_limiters"`
idleTimeoutAsDuration time.Duration
idleLoginTimeout time.Duration
defender Defender
@@ -331,9 +409,8 @@ func (c *Configuration) IsAtomicUploadEnabled() bool {
}
// GetProxyListener returns a wrapper for the given listener that supports the
// HAProxy Proxy Protocol
func (c *Configuration) GetProxyListener(listener net.Listener) (*proxyproto.Listener, error) {
if c.ProxyProtocol > 0 {
var policyFunc func(upstream net.Addr) (proxyproto.Policy, error)
@@ -355,12 +432,49 @@ func (c *Configuration) GetProxyListener(listener net.Listener) (*proxyproto.Lis
}
}
}
return &proxyproto.Listener{
Listener: listener,
Policy: policyFunc,
ReadHeaderTimeout: 5 * time.Second,
}, nil
}
return nil, errors.New("proxy protocol not configured")
}
// ExecuteStartupHook runs the startup hook if defined
func (c *Configuration) ExecuteStartupHook() error {
if c.StartupHook == "" {
return nil
}
if strings.HasPrefix(c.StartupHook, "http") {
var url *url.URL
url, err := url.Parse(c.StartupHook)
if err != nil {
logger.Warn(logSender, "", "Invalid startup hook %#v: %v", c.StartupHook, err)
return err
}
startTime := time.Now()
resp, err := httpclient.RetryableGet(url.String())
if err != nil {
logger.Warn(logSender, "", "Error executing startup hook: %v", err)
return err
}
defer resp.Body.Close()
logger.Debug(logSender, "", "Startup hook executed, elapsed: %v, response code: %v", time.Since(startTime), resp.StatusCode)
return nil
}
if !filepath.IsAbs(c.StartupHook) {
err := fmt.Errorf("invalid startup hook %#v", c.StartupHook)
logger.Warn(logSender, "", "Invalid startup hook %#v", c.StartupHook)
return err
}
startTime := time.Now()
ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
defer cancel()
cmd := exec.CommandContext(ctx, c.StartupHook)
err := cmd.Run()
logger.Debug(logSender, "", "Startup hook executed, elapsed: %v, error: %v", time.Since(startTime), err)
return nil
}
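// Illustrative values (assumptions, not part of this change set): StartupHook
// can be an absolute path such as "/usr/local/bin/sftpgo-startup", executed
// with the 60 seconds timeout above, or an HTTP(S) URL such as
// "https://example.com/sftpgo/startup", invoked with a GET request.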
// ExecutePostConnectHook executes the post connect hook if defined
@@ -376,13 +490,12 @@ func (c *Configuration) ExecutePostConnectHook(ipAddr, protocol string) error {
ipAddr, c.PostConnectHook, err)
return err
}
q := url.Query()
q.Add("ip", ipAddr)
q.Add("protocol", protocol)
url.RawQuery = q.Encode()
resp, err := httpclient.RetryableGet(url.String())
if err != nil {
logger.Warn(protocol, "", "Login from ip %#v denied, error executing post connect hook: %v", ipAddr, err)
return err
@@ -451,6 +564,9 @@ func (c *SSHConnection) Close() error {
// ActiveConnections holds the current active connections with the associated transfers
type ActiveConnections struct {
// clients contains both authenticated and established connections and the ones waiting
// for authentication
clients clientsMap
sync.RWMutex
connections []ActiveConnection
sshConnections []*SSHConnection
@@ -617,16 +733,51 @@ func (conns *ActiveConnections) checkIdles() {
conns.RUnlock()
}
// AddClientConnection stores a new client connection
func (conns *ActiveConnections) AddClientConnection(ipAddr string) {
conns.clients.add(ipAddr)
}
// RemoveClientConnection removes a disconnected client from the tracked ones
func (conns *ActiveConnections) RemoveClientConnection(ipAddr string) {
conns.clients.remove(ipAddr)
}
// GetClientConnections returns the total number of client connections
func (conns *ActiveConnections) GetClientConnections() int32 {
return conns.clients.getTotal()
}
// IsNewConnectionAllowed returns false if the maximum number of concurrent allowed connections is exceeded
func (conns *ActiveConnections) IsNewConnectionAllowed(ipAddr string) bool {
if Config.MaxTotalConnections == 0 && Config.MaxPerHostConnections == 0 {
return true
}
if Config.MaxPerHostConnections > 0 {
if total := conns.clients.getTotalFrom(ipAddr); total > Config.MaxPerHostConnections {
logger.Debug(logSender, "", "active connections from %v %v/%v", ipAddr, total, Config.MaxPerHostConnections)
AddDefenderEvent(ipAddr, HostEventLimitExceeded)
return false
}
}
if Config.MaxTotalConnections > 0 {
if total := conns.clients.getTotal(); total > int32(Config.MaxTotalConnections) {
logger.Debug(logSender, "", "active client connections %v/%v", total, Config.MaxTotalConnections)
return false
}
// on a single SFTP connection we could have multiple SFTP channels or commands
// so we check the established connections too
conns.RLock()
defer conns.RUnlock()
return len(conns.connections) < Config.MaxTotalConnections
}
return true
}
// GetStats returns stats for active connections


@@ -1,9 +1,9 @@
package common
import (
"encoding/json"
"fmt"
"net"
"net/http"
"os"
"os/exec"
"path/filepath"
@@ -13,15 +13,13 @@ import (
"testing"
"time"
"github.com/rs/zerolog"
"github.com/spf13/viper"
"github.com/alexedwards/argon2id"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/crypto/bcrypt"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/httpclient"
"github.com/drakkan/sftpgo/kms"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/vfs"
)
@@ -29,29 +27,22 @@ import (
const (
logSenderTest = "common_test"
httpAddr = "127.0.0.1:9999"
httpProxyAddr = "127.0.0.1:7777"
configDir = ".."
osWindows = "windows"
userTestUsername = "common_test_username"
userTestPwd = "common_test_pwd"
)
type providerConf struct {
Config dataprovider.Config `json:"data_provider" mapstructure:"data_provider"`
}
type fakeConnection struct {
*BaseConnection
command string
}
func (c *fakeConnection) AddUser(user dataprovider.User) error {
_, err := user.GetFilesystem(c.GetID())
if err != nil {
return err
}
c.BaseConnection.User = user
return nil
}
@@ -84,110 +75,6 @@ func (c *customNetConn) Close() error {
return c.Conn.Close()
}
func TestMain(m *testing.M) {
logfilePath := "common_test.log"
logger.InitLogger(logfilePath, 5, 1, 28, false, zerolog.DebugLevel)
viper.SetEnvPrefix("sftpgo")
replacer := strings.NewReplacer(".", "__")
viper.SetEnvKeyReplacer(replacer)
viper.SetConfigName("sftpgo")
viper.AutomaticEnv()
viper.AllowEmptyEnv(true)
driver, err := initializeDataprovider(-1)
if err != nil {
logger.WarnToConsole("error initializing data provider: %v", err)
os.Exit(1)
}
logger.InfoToConsole("Starting COMMON tests, provider: %v", driver)
err = Initialize(Configuration{})
if err != nil {
logger.WarnToConsole("error initializing common: %v", err)
os.Exit(1)
}
httpConfig := httpclient.Config{
Timeout: 5,
}
httpConfig.Initialize(configDir) //nolint:errcheck
go func() {
// start a test HTTP server to receive action notifications
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
fmt.Fprintf(w, "OK\n")
})
http.HandleFunc("/404", func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusNotFound)
fmt.Fprintf(w, "Not found\n")
})
if err := http.ListenAndServe(httpAddr, nil); err != nil {
logger.ErrorToConsole("could not start HTTP notification server: %v", err)
os.Exit(1)
}
}()
go func() {
Config.ProxyProtocol = 2
listener, err := net.Listen("tcp", httpProxyAddr)
if err != nil {
logger.ErrorToConsole("error creating listener for proxy protocol server: %v", err)
os.Exit(1)
}
proxyListener, err := Config.GetProxyListener(listener)
if err != nil {
logger.ErrorToConsole("error creating proxy protocol listener: %v", err)
os.Exit(1)
}
Config.ProxyProtocol = 0
s := &http.Server{}
if err := s.Serve(proxyListener); err != nil {
logger.ErrorToConsole("could not start HTTP proxy protocol server: %v", err)
os.Exit(1)
}
}()
waitTCPListening(httpAddr)
waitTCPListening(httpProxyAddr)
exitCode := m.Run()
os.Remove(logfilePath) //nolint:errcheck
os.Exit(exitCode)
}
func waitTCPListening(address string) {
for {
conn, err := net.Dial("tcp", address)
if err != nil {
logger.WarnToConsole("tcp server %v not listening: %v", address, err)
time.Sleep(100 * time.Millisecond)
continue
}
logger.InfoToConsole("tcp server %v now listening", address)
conn.Close()
break
}
}
func initializeDataprovider(trackQuota int) (string, error) {
configDir := ".."
viper.AddConfigPath(configDir)
if err := viper.ReadInConfig(); err != nil {
return "", err
}
var cfg providerConf
if err := viper.Unmarshal(&cfg); err != nil {
return "", err
}
if trackQuota >= 0 && trackQuota <= 2 {
cfg.Config.TrackQuota = trackQuota
}
return cfg.Config.Driver, dataprovider.Initialize(cfg.Config, configDir, true)
}
func closeDataprovider() error {
return dataprovider.Close()
}
func TestSSHConnections(t *testing.T) {
conn1, conn2 := net.Pipe()
now := time.Now()
@@ -243,8 +130,11 @@ func TestDefenderIntegration(t *testing.T) {
assert.False(t, IsBanned(ip))
assert.Nil(t, GetDefenderBanTime(ip))
assert.False(t, DeleteDefenderHost(ip))
assert.Equal(t, 0, GetDefenderScore(ip))
_, err := GetDefenderHost(ip)
assert.Error(t, err)
assert.Nil(t, GetDefenderHosts())
Config.DefenderConfig = DefenderConfig{
Enabled: true,
@@ -257,7 +147,7 @@ func TestDefenderIntegration(t *testing.T) {
EntriesSoftLimit: 100,
EntriesHardLimit: 150,
}
err = Initialize(Config)
assert.Error(t, err)
Config.DefenderConfig.Threshold = 3
err = Initialize(Config)
@@ -267,40 +157,151 @@ func TestDefenderIntegration(t *testing.T) {
AddDefenderEvent(ip, HostEventNoLoginTried)
assert.False(t, IsBanned(ip))
assert.Equal(t, 2, GetDefenderScore(ip))
entry, err := GetDefenderHost(ip)
assert.NoError(t, err)
asJSON, err := json.Marshal(&entry)
assert.NoError(t, err)
assert.Equal(t, `{"id":"3132372e312e312e31","ip":"127.1.1.1","score":2}`, string(asJSON), "entry %v", entry)
assert.True(t, DeleteDefenderHost(ip))
assert.Nil(t, GetDefenderBanTime(ip))
AddDefenderEvent(ip, HostEventLoginFailed)
AddDefenderEvent(ip, HostEventNoLoginTried)
assert.True(t, IsBanned(ip))
assert.Equal(t, 0, GetDefenderScore(ip))
assert.NotNil(t, GetDefenderBanTime(ip))
assert.Len(t, GetDefenderHosts(), 1)
entry, err = GetDefenderHost(ip)
assert.NoError(t, err)
assert.False(t, entry.BanTime.IsZero())
assert.True(t, DeleteDefenderHost(ip))
assert.Len(t, GetDefenderHosts(), 0)
assert.Nil(t, GetDefenderBanTime(ip))
assert.False(t, DeleteDefenderHost(ip))
Config = configCopy
}
func TestRateLimitersIntegration(t *testing.T) {
// by default defender is nil
configCopy := Config
Config.RateLimitersConfig = []RateLimiterConfig{
{
Average: 100,
Period: 10,
Burst: 5,
Type: int(rateLimiterTypeGlobal),
Protocols: rateLimiterProtocolValues,
},
{
Average: 1,
Period: 1000,
Burst: 1,
Type: int(rateLimiterTypeSource),
Protocols: []string{ProtocolWebDAV, ProtocolWebDAV, ProtocolFTP},
GenerateDefenderEvents: true,
EntriesSoftLimit: 100,
EntriesHardLimit: 150,
},
}
err := Initialize(Config)
assert.Error(t, err)
Config.RateLimitersConfig[0].Period = 1000
err = Initialize(Config)
assert.NoError(t, err)
assert.Len(t, rateLimiters, 4)
assert.Len(t, rateLimiters[ProtocolSSH], 1)
assert.Len(t, rateLimiters[ProtocolFTP], 2)
assert.Len(t, rateLimiters[ProtocolWebDAV], 2)
assert.Len(t, rateLimiters[ProtocolHTTP], 1)
source1 := "127.1.1.1"
source2 := "127.1.1.2"
_, err = LimitRate(ProtocolSSH, source1)
assert.NoError(t, err)
_, err = LimitRate(ProtocolFTP, source1)
assert.NoError(t, err)
// sleep to allow the configured burst to be added back to the token bucket.
// This sleep is not enough to add the per-source burst
time.Sleep(20 * time.Millisecond)
_, err = LimitRate(ProtocolWebDAV, source2)
assert.NoError(t, err)
_, err = LimitRate(ProtocolFTP, source1)
assert.Error(t, err)
_, err = LimitRate(ProtocolWebDAV, source2)
assert.Error(t, err)
_, err = LimitRate(ProtocolSSH, source1)
assert.NoError(t, err)
_, err = LimitRate(ProtocolSSH, source2)
assert.NoError(t, err)
Config = configCopy
}
func TestMaxConnections(t *testing.T) {
oldValue := Config.MaxTotalConnections
perHost := Config.MaxPerHostConnections
Config.MaxPerHostConnections = 0
ipAddr := "192.168.7.8"
assert.True(t, Connections.IsNewConnectionAllowed(ipAddr))
Config.MaxTotalConnections = 1
Config.MaxPerHostConnections = perHost
assert.True(t, Connections.IsNewConnectionAllowed(ipAddr))
c := NewBaseConnection("id", ProtocolSFTP, "", dataprovider.User{})
fakeConn := &fakeConnection{
BaseConnection: c,
}
Connections.Add(fakeConn)
assert.Len(t, Connections.GetStats(), 1)
assert.False(t, Connections.IsNewConnectionAllowed(ipAddr))
res := Connections.Close(fakeConn.GetID())
assert.True(t, res)
assert.Eventually(t, func() bool { return len(Connections.GetStats()) == 0 }, 300*time.Millisecond, 50*time.Millisecond)
assert.True(t, Connections.IsNewConnectionAllowed(ipAddr))
Connections.AddClientConnection(ipAddr)
Connections.AddClientConnection(ipAddr)
assert.False(t, Connections.IsNewConnectionAllowed(ipAddr))
Connections.RemoveClientConnection(ipAddr)
assert.True(t, Connections.IsNewConnectionAllowed(ipAddr))
Connections.RemoveClientConnection(ipAddr)
Config.MaxTotalConnections = oldValue
}
func TestMaxConnectionPerHost(t *testing.T) {
oldValue := Config.MaxPerHostConnections
Config.MaxPerHostConnections = 2
ipAddr := "192.168.9.9"
Connections.AddClientConnection(ipAddr)
assert.True(t, Connections.IsNewConnectionAllowed(ipAddr))
Connections.AddClientConnection(ipAddr)
assert.True(t, Connections.IsNewConnectionAllowed(ipAddr))
Connections.AddClientConnection(ipAddr)
assert.False(t, Connections.IsNewConnectionAllowed(ipAddr))
assert.Equal(t, int32(3), Connections.GetClientConnections())
Connections.RemoveClientConnection(ipAddr)
Connections.RemoveClientConnection(ipAddr)
Connections.RemoveClientConnection(ipAddr)
assert.Equal(t, int32(0), Connections.GetClientConnections())
Config.MaxPerHostConnections = oldValue
}
func TestIdleConnections(t *testing.T) {
configCopy := Config
@@ -324,7 +325,7 @@ func TestIdleConnections(t *testing.T) {
user := dataprovider.User{
Username: username,
}
c := NewBaseConnection(sshConn1.id+"_1", ProtocolSFTP, user, nil)
c := NewBaseConnection(sshConn1.id+"_1", ProtocolSFTP, "", user)
c.lastActivity = time.Now().Add(-24 * time.Hour).UnixNano()
fakeConn := &fakeConnection{
BaseConnection: c,
@@ -336,7 +337,7 @@ func TestIdleConnections(t *testing.T) {
Connections.AddSSHConnection(sshConn1)
Connections.Add(fakeConn)
assert.Equal(t, Connections.GetActiveSessions(username), 1)
c = NewBaseConnection(sshConn2.id+"_1", ProtocolSSH, user, nil)
c = NewBaseConnection(sshConn2.id+"_1", ProtocolSSH, "", user)
fakeConn = &fakeConnection{
BaseConnection: c,
}
@@ -344,7 +345,7 @@ func TestIdleConnections(t *testing.T) {
Connections.Add(fakeConn)
assert.Equal(t, Connections.GetActiveSessions(username), 2)
cFTP := NewBaseConnection("id2", ProtocolFTP, dataprovider.User{}, nil)
cFTP := NewBaseConnection("id2", ProtocolFTP, "", dataprovider.User{})
cFTP.lastActivity = time.Now().UnixNano()
fakeConn = &fakeConnection{
BaseConnection: cFTP,
@@ -375,6 +376,7 @@ func TestIdleConnections(t *testing.T) {
defer Connections.RUnlock()
return len(Connections.sshConnections) == 0
}, 1*time.Second, 200*time.Millisecond)
assert.Equal(t, int32(0), Connections.GetClientConnections())
stopIdleTimeoutTicker()
assert.True(t, customConn1.isClosed)
assert.True(t, customConn2.isClosed)
@@ -383,11 +385,11 @@ func TestIdleConnections(t *testing.T) {
}
func TestCloseConnection(t *testing.T) {
c := NewBaseConnection("id", ProtocolSFTP, dataprovider.User{}, nil)
c := NewBaseConnection("id", ProtocolSFTP, "", dataprovider.User{})
fakeConn := &fakeConnection{
BaseConnection: c,
}
assert.True(t, Connections.IsNewConnectionAllowed("127.0.0.1"))
Connections.Add(fakeConn)
assert.Len(t, Connections.GetStats(), 1)
res := Connections.Close(fakeConn.GetID())
@@ -399,7 +401,7 @@ func TestCloseConnection(t *testing.T) {
}
func TestSwapConnection(t *testing.T) {
c := NewBaseConnection("id", ProtocolFTP, dataprovider.User{}, nil)
c := NewBaseConnection("id", ProtocolFTP, "", dataprovider.User{})
fakeConn := &fakeConnection{
BaseConnection: c,
}
@@ -407,9 +409,9 @@ func TestSwapConnection(t *testing.T) {
if assert.Len(t, Connections.GetStats(), 1) {
assert.Equal(t, "", Connections.GetStats()[0].Username)
}
c = NewBaseConnection("id", ProtocolFTP, dataprovider.User{
c = NewBaseConnection("id", ProtocolFTP, "", dataprovider.User{
Username: userTestUsername,
})
fakeConn = &fakeConnection{
BaseConnection: c,
}
@@ -443,26 +445,26 @@ func TestConnectionStatus(t *testing.T) {
user := dataprovider.User{
Username: username,
}
fs := vfs.NewOsFs("", os.TempDir(), nil)
c1 := NewBaseConnection("id1", ProtocolSFTP, user, fs)
fs := vfs.NewOsFs("", os.TempDir(), "")
c1 := NewBaseConnection("id1", ProtocolSFTP, "", user)
fakeConn1 := &fakeConnection{
BaseConnection: c1,
}
t1 := NewBaseTransfer(nil, c1, nil, "/p1", "/r1", TransferUpload, 0, 0, 0, true, fs)
t1 := NewBaseTransfer(nil, c1, nil, "/p1", "/p1", "/r1", TransferUpload, 0, 0, 0, true, fs)
t1.BytesReceived = 123
t2 := NewBaseTransfer(nil, c1, nil, "/p2", "/r2", TransferDownload, 0, 0, 0, true, fs)
t2 := NewBaseTransfer(nil, c1, nil, "/p2", "/p2", "/r2", TransferDownload, 0, 0, 0, true, fs)
t2.BytesSent = 456
c2 := NewBaseConnection("id2", ProtocolSSH, user, nil)
c2 := NewBaseConnection("id2", ProtocolSSH, "", user)
fakeConn2 := &fakeConnection{
BaseConnection: c2,
command: "md5sum",
}
c3 := NewBaseConnection("id3", ProtocolWebDAV, user, nil)
c3 := NewBaseConnection("id3", ProtocolWebDAV, "", user)
fakeConn3 := &fakeConnection{
BaseConnection: c3,
command: "PROPFIND",
}
t3 := NewBaseTransfer(nil, c3, nil, "/p2", "/r2", TransferDownload, 0, 0, 0, true, fs)
t3 := NewBaseTransfer(nil, c3, nil, "/p2", "/p2", "/r2", TransferDownload, 0, 0, 0, true, fs)
Connections.Add(fakeConn1)
Connections.Add(fakeConn2)
Connections.Add(fakeConn3)
@@ -565,13 +567,31 @@ func TestProxyProtocolVersion(t *testing.T) {
assert.Error(t, err)
}
func TestStartupHook(t *testing.T) {
Config.StartupHook = ""
assert.NoError(t, Config.ExecuteStartupHook())
Config.StartupHook = "http://foo\x7f.com/startup"
assert.Error(t, Config.ExecuteStartupHook())
Config.StartupHook = "http://invalid:5678/"
assert.Error(t, Config.ExecuteStartupHook())
Config.StartupHook = fmt.Sprintf("http://%v", httpAddr)
assert.NoError(t, Config.ExecuteStartupHook())
Config.StartupHook = "invalidhook"
assert.Error(t, Config.ExecuteStartupHook())
if runtime.GOOS != osWindows {
hookCmd, err := exec.LookPath("true")
assert.NoError(t, err)
Config.StartupHook = hookCmd
assert.NoError(t, Config.ExecuteStartupHook())
}
Config.StartupHook = ""
}
func TestPostConnectHook(t *testing.T) {
@@ -614,7 +634,7 @@ func TestPostConnectHook(t *testing.T) {
func TestCryptoConvertFileInfo(t *testing.T) {
name := "name"
fs, err := vfs.NewCryptFs("connID1", os.TempDir(), vfs.CryptFsConfig{Passphrase: kms.NewPlainSecret("secret")})
fs, err := vfs.NewCryptFs("connID1", os.TempDir(), "", vfs.CryptFsConfig{Passphrase: kms.NewPlainSecret("secret")})
require.NoError(t, err)
cryptFs := fs.(*vfs.CryptFs)
info := vfs.NewFileInfo(name, true, 48, time.Now(), false)
@@ -649,4 +669,123 @@ func TestFolderCopy(t *testing.T) {
require.Equal(t, folder.UsedQuotaSize, folderCopy.UsedQuotaSize)
require.Equal(t, folder.UsedQuotaFiles, folderCopy.UsedQuotaFiles)
require.Equal(t, folder.LastQuotaUpdate, folderCopy.LastQuotaUpdate)
folder.FsConfig = vfs.Filesystem{
CryptConfig: vfs.CryptFsConfig{
Passphrase: kms.NewPlainSecret("crypto secret"),
},
}
folderCopy = folder.GetACopy()
folder.FsConfig.CryptConfig.Passphrase = kms.NewEmptySecret()
require.Len(t, folderCopy.Users, 1)
require.True(t, utils.IsStringInSlice("user3", folderCopy.Users))
require.Equal(t, int64(2), folderCopy.ID)
require.Equal(t, folder.Name, folderCopy.Name)
require.Equal(t, folder.MappedPath, folderCopy.MappedPath)
require.Equal(t, folder.UsedQuotaSize, folderCopy.UsedQuotaSize)
require.Equal(t, folder.UsedQuotaFiles, folderCopy.UsedQuotaFiles)
require.Equal(t, folder.LastQuotaUpdate, folderCopy.LastQuotaUpdate)
require.Equal(t, "crypto secret", folderCopy.FsConfig.CryptConfig.Passphrase.GetPayload())
}
func TestCachedFs(t *testing.T) {
user := dataprovider.User{
HomeDir: filepath.Clean(os.TempDir()),
}
conn := NewBaseConnection("id", ProtocolSFTP, "", user)
// changing the user should not affect the connection
user.HomeDir = filepath.Join(os.TempDir(), "temp")
err := os.Mkdir(user.HomeDir, os.ModePerm)
assert.NoError(t, err)
fs, err := user.GetFilesystem("")
assert.NoError(t, err)
p, err := fs.ResolvePath("/")
assert.NoError(t, err)
assert.Equal(t, user.GetHomeDir(), p)
_, p, err = conn.GetFsAndResolvedPath("/")
assert.NoError(t, err)
assert.Equal(t, filepath.Clean(os.TempDir()), p)
user.FsConfig.Provider = vfs.S3FilesystemProvider
_, err = user.GetFilesystem("")
assert.Error(t, err)
conn.User.FsConfig.Provider = vfs.S3FilesystemProvider
_, p, err = conn.GetFsAndResolvedPath("/")
assert.NoError(t, err)
assert.Equal(t, filepath.Clean(os.TempDir()), p)
err = os.Remove(user.HomeDir)
assert.NoError(t, err)
}
func TestParseAllowedIPAndRanges(t *testing.T) {
_, err := utils.ParseAllowedIPAndRanges([]string{"1.1.1.1", "not an ip"})
assert.Error(t, err)
_, err = utils.ParseAllowedIPAndRanges([]string{"1.1.1.5", "192.168.1.0/240"})
assert.Error(t, err)
allow, err := utils.ParseAllowedIPAndRanges([]string{"192.168.1.2", "172.16.0.0/24"})
assert.NoError(t, err)
assert.True(t, allow[0](net.ParseIP("192.168.1.2")))
assert.False(t, allow[0](net.ParseIP("192.168.2.2")))
assert.True(t, allow[1](net.ParseIP("172.16.0.1")))
assert.False(t, allow[1](net.ParseIP("172.16.1.1")))
}
func TestHideConfidentialData(t *testing.T) {
for _, provider := range []vfs.FilesystemProvider{vfs.S3FilesystemProvider, vfs.GCSFilesystemProvider,
vfs.AzureBlobFilesystemProvider, vfs.CryptedFilesystemProvider, vfs.SFTPFilesystemProvider} {
u := dataprovider.User{
FsConfig: vfs.Filesystem{
Provider: provider,
},
}
u.PrepareForRendering()
f := vfs.BaseVirtualFolder{
FsConfig: vfs.Filesystem{
Provider: provider,
},
}
f.PrepareForRendering()
}
a := dataprovider.Admin{}
a.HideConfidentialData()
}
func BenchmarkBcryptHashing(b *testing.B) {
bcryptPassword := "bcryptpassword"
for i := 0; i < b.N; i++ {
_, err := bcrypt.GenerateFromPassword([]byte(bcryptPassword), 10)
if err != nil {
panic(err)
}
}
}
func BenchmarkCompareBcryptPassword(b *testing.B) {
bcryptPassword := "$2a$10$lPDdnDimJZ7d5/GwL6xDuOqoZVRXok6OHHhivCnanWUtcgN0Zafki"
for i := 0; i < b.N; i++ {
err := bcrypt.CompareHashAndPassword([]byte(bcryptPassword), []byte("password"))
if err != nil {
panic(err)
}
}
}
func BenchmarkArgon2Hashing(b *testing.B) {
argonPassword := "argon2password"
for i := 0; i < b.N; i++ {
_, err := argon2id.CreateHash(argonPassword, argon2id.DefaultParams)
if err != nil {
panic(err)
}
}
}
func BenchmarkCompareArgon2Password(b *testing.B) {
argon2Password := "$argon2id$v=19$m=65536,t=1,p=2$aOoAOdAwvzhOgi7wUFjXlw$wn/y37dBWdKHtPXHR03nNaKHWKPXyNuVXOknaU+YZ+s"
for i := 0; i < b.N; i++ {
_, err := argon2id.ComparePasswordAndHash("password", argon2Password)
if err != nil {
panic(err)
}
}
}


@@ -21,23 +21,25 @@ import (
// BaseConnection defines common fields for a connection using any supported protocol
type BaseConnection struct {
// last activity for this connection.
// Since this is accessed atomically we put as first element of the struct achieve 64 bit alignment
// Since this field is accessed atomically we put it as first element of the struct to achieve 64 bit alignment
lastActivity int64
// unique ID for a transfer.
// This field is accessed atomically so we put it at the beginning of the struct to achieve 64 bit alignment
transferID uint64
// Unique identifier for the connection
ID string
// user associated with this connection if any
User dataprovider.User
// start time for this connection
startTime time.Time
protocol string
Fs vfs.Fs
startTime time.Time
protocol string
remoteAddr string
sync.RWMutex
transferID uint64
activeTransfers []ActiveTransfer
}
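The alignment comments above matter on 32 bit platforms: Go's sync/atomic operations on 64 bit words require 64 bit alignment there, and the language only guarantees that alignment for the first word of an allocated struct. A minimal sketch of the pattern, with hypothetical names:

    import (
        "sync"
        "sync/atomic"
    )

    type stats struct {
        // accessed atomically: kept first so it is 64 bit aligned even on 32 bit targets
        bytesSent int64
        sync.RWMutex
        name string
    }

    func (s *stats) addSent(n int64) int64 {
        return atomic.AddInt64(&s.bytesSent, n)
    }

This is why lastActivity and transferID both sit at the top of BaseConnection, before the mutex and the remaining fields.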
// NewBaseConnection returns a new BaseConnection
func NewBaseConnection(id, protocol string, user dataprovider.User, fs vfs.Fs) *BaseConnection {
func NewBaseConnection(id, protocol, remoteAddr string, user dataprovider.User) *BaseConnection {
connID := id
if utils.IsStringInSlice(protocol, supportedProtocols) {
connID = fmt.Sprintf("%v_%v", protocol, id)
@@ -47,7 +49,7 @@ func NewBaseConnection(id, protocol string, user dataprovider.User, fs vfs.Fs) *
User: user,
startTime: time.Now(),
protocol: protocol,
Fs: fs,
remoteAddr: remoteAddr,
lastActivity: time.Now().UnixNano(),
transferID: 0,
}
@@ -103,10 +105,7 @@ func (c *BaseConnection) GetLastActivity() time.Time {
// CloseFS closes the underlying fs
func (c *BaseConnection) CloseFS() error {
if c.Fs != nil {
return c.Fs.Close()
}
return nil
return c.User.CloseFs()
}
// AddTransfer associates a new transfer to this connection
@@ -207,21 +206,25 @@ func (c *BaseConnection) truncateOpenHandle(fsPath string, size int64) (int64, e
return 0, errNoTransfer
}
// ListDir reads the directory named by fsPath and returns a list of directory entries
func (c *BaseConnection) ListDir(fsPath, virtualPath string) ([]os.FileInfo, error) {
// ListDir reads the directory matching virtualPath and returns a list of directory entries
func (c *BaseConnection) ListDir(virtualPath string) ([]os.FileInfo, error) {
if !c.User.HasPerm(dataprovider.PermListItems, virtualPath) {
return nil, c.GetPermissionDeniedError()
}
files, err := c.Fs.ReadDir(fsPath)
fs, fsPath, err := c.GetFsAndResolvedPath(virtualPath)
if err != nil {
return nil, err
}
files, err := fs.ReadDir(fsPath)
if err != nil {
c.Log(logger.LevelWarn, "error listing directory: %+v", err)
return nil, c.GetFsError(err)
return nil, c.GetFsError(fs, err)
}
return c.User.AddVirtualDirs(files, virtualPath), nil
}
// CreateDir creates a new directory matching virtualPath
func (c *BaseConnection) CreateDir(fsPath, virtualPath string) error {
func (c *BaseConnection) CreateDir(virtualPath string) error {
if !c.User.HasPerm(dataprovider.PermCreateDirs, path.Dir(virtualPath)) {
return c.GetPermissionDeniedError()
}
@@ -229,46 +232,50 @@ func (c *BaseConnection) CreateDir(fsPath, virtualPath string) error {
c.Log(logger.LevelWarn, "mkdir not allowed %#v is a virtual folder", virtualPath)
return c.GetPermissionDeniedError()
}
if err := c.Fs.Mkdir(fsPath); err != nil {
c.Log(logger.LevelWarn, "error creating dir: %#v error: %+v", fsPath, err)
return c.GetFsError(err)
fs, fsPath, err := c.GetFsAndResolvedPath(virtualPath)
if err != nil {
return err
}
vfs.SetPathPermissions(c.Fs, fsPath, c.User.GetUID(), c.User.GetGID())
if err := fs.Mkdir(fsPath); err != nil {
c.Log(logger.LevelWarn, "error creating dir: %#v error: %+v", fsPath, err)
return c.GetFsError(fs, err)
}
vfs.SetPathPermissions(fs, fsPath, c.User.GetUID(), c.User.GetGID())
logger.CommandLog(mkdirLogSender, fsPath, "", c.User.Username, "", c.ID, c.protocol, -1, -1, "", "", "", -1)
logger.CommandLog(mkdirLogSender, fsPath, "", c.User.Username, "", c.ID, c.protocol, -1, -1, "", "", "", -1, c.remoteAddr)
return nil
}
// IsRemoveFileAllowed returns an error if removing this file is not allowed
func (c *BaseConnection) IsRemoveFileAllowed(fsPath, virtualPath string) error {
func (c *BaseConnection) IsRemoveFileAllowed(virtualPath string) error {
if !c.User.HasPerm(dataprovider.PermDelete, path.Dir(virtualPath)) {
return c.GetPermissionDeniedError()
}
if !c.User.IsFileAllowed(virtualPath) {
c.Log(logger.LevelDebug, "removing file %#v is not allowed", fsPath)
c.Log(logger.LevelDebug, "removing file %#v is not allowed", virtualPath)
return c.GetPermissionDeniedError()
}
return nil
}
// RemoveFile removes a file at the specified fsPath
func (c *BaseConnection) RemoveFile(fsPath, virtualPath string, info os.FileInfo) error {
if err := c.IsRemoveFileAllowed(fsPath, virtualPath); err != nil {
func (c *BaseConnection) RemoveFile(fs vfs.Fs, fsPath, virtualPath string, info os.FileInfo) error {
if err := c.IsRemoveFileAllowed(virtualPath); err != nil {
return err
}
size := info.Size()
action := newActionNotification(&c.User, operationPreDelete, fsPath, "", "", c.protocol, size, nil)
actionErr := actionHandler.Handle(action)
actionErr := ExecutePreAction(&c.User, operationPreDelete, fsPath, virtualPath, c.protocol, size, 0)
if actionErr == nil {
c.Log(logger.LevelDebug, "remove for path %#v handled by pre-delete action", fsPath)
} else {
if err := c.Fs.Remove(fsPath, false); err != nil {
if err := fs.Remove(fsPath, false); err != nil {
c.Log(logger.LevelWarn, "failed to remove a file/symlink %#v: %+v", fsPath, err)
return c.GetFsError(err)
return c.GetFsError(fs, err)
}
}
logger.CommandLog(removeLogSender, fsPath, "", c.User.Username, "", c.ID, c.protocol, -1, -1, "", "", "", -1)
logger.CommandLog(removeLogSender, fsPath, "", c.User.Username, "", c.ID, c.protocol, -1, -1, "", "", "", -1, c.remoteAddr)
if info.Mode()&os.ModeSymlink == 0 {
vfolder, err := c.User.GetVirtualFolderForPath(path.Dir(virtualPath))
if err == nil {
@@ -281,15 +288,14 @@ func (c *BaseConnection) RemoveFile(fsPath, virtualPath string, info os.FileInfo
}
}
if actionErr != nil {
action := newActionNotification(&c.User, operationDelete, fsPath, "", "", c.protocol, size, nil)
go actionHandler.Handle(action) // nolint:errcheck
ExecuteActionNotification(&c.User, operationDelete, fsPath, virtualPath, "", "", c.protocol, size, nil)
}
return nil
}
// IsRemoveDirAllowed returns an error if removing this directory is not allowed
func (c *BaseConnection) IsRemoveDirAllowed(fsPath, virtualPath string) error {
if c.Fs.GetRelativePath(fsPath) == "/" {
func (c *BaseConnection) IsRemoveDirAllowed(fs vfs.Fs, fsPath, virtualPath string) error {
if fs.GetRelativePath(fsPath) == "/" {
c.Log(logger.LevelWarn, "removing root dir is not allowed")
return c.GetPermissionDeniedError()
}
@@ -312,54 +318,57 @@ func (c *BaseConnection) IsRemoveDirAllowed(fsPath, virtualPath string) error {
}
// RemoveDir removes the directory matching virtualPath
func (c *BaseConnection) RemoveDir(fsPath, virtualPath string) error {
if err := c.IsRemoveDirAllowed(fsPath, virtualPath); err != nil {
func (c *BaseConnection) RemoveDir(virtualPath string) error {
fs, fsPath, err := c.GetFsAndResolvedPath(virtualPath)
if err != nil {
return err
}
if err := c.IsRemoveDirAllowed(fs, fsPath, virtualPath); err != nil {
return err
}
var fi os.FileInfo
var err error
if fi, err = c.Fs.Lstat(fsPath); err != nil {
if fi, err = fs.Lstat(fsPath); err != nil {
// see #149
if c.Fs.IsNotExist(err) && c.Fs.HasVirtualFolders() {
if fs.IsNotExist(err) && fs.HasVirtualFolders() {
return nil
}
c.Log(logger.LevelWarn, "failed to remove a dir %#v: stat error: %+v", fsPath, err)
return c.GetFsError(err)
return c.GetFsError(fs, err)
}
if !fi.IsDir() || fi.Mode()&os.ModeSymlink != 0 {
c.Log(logger.LevelDebug, "cannot remove %#v is not a directory", fsPath)
return c.GetGenericError(nil)
}
if err := c.Fs.Remove(fsPath, true); err != nil {
if err := fs.Remove(fsPath, true); err != nil {
c.Log(logger.LevelWarn, "failed to remove directory %#v: %+v", fsPath, err)
return c.GetFsError(err)
return c.GetFsError(fs, err)
}
logger.CommandLog(rmdirLogSender, fsPath, "", c.User.Username, "", c.ID, c.protocol, -1, -1, "", "", "", -1)
logger.CommandLog(rmdirLogSender, fsPath, "", c.User.Username, "", c.ID, c.protocol, -1, -1, "", "", "", -1, c.remoteAddr)
return nil
}
// Rename renames (moves) fsSourcePath to fsTargetPath
func (c *BaseConnection) Rename(fsSourcePath, fsTargetPath, virtualSourcePath, virtualTargetPath string) error {
if c.User.IsMappedPath(fsSourcePath) {
c.Log(logger.LevelWarn, "renaming a directory mapped as virtual folder is not allowed: %#v", fsSourcePath)
return c.GetPermissionDeniedError()
}
if c.User.IsMappedPath(fsTargetPath) {
c.Log(logger.LevelWarn, "renaming to a directory mapped as virtual folder is not allowed: %#v", fsTargetPath)
return c.GetPermissionDeniedError()
}
srcInfo, err := c.Fs.Lstat(fsSourcePath)
// Rename renames (moves) virtualSourcePath to virtualTargetPath
func (c *BaseConnection) Rename(virtualSourcePath, virtualTargetPath string) error {
fsSrc, fsSourcePath, err := c.GetFsAndResolvedPath(virtualSourcePath)
if err != nil {
return c.GetFsError(err)
return err
}
if !c.isRenamePermitted(fsSourcePath, virtualSourcePath, virtualTargetPath, srcInfo) {
fsDst, fsTargetPath, err := c.GetFsAndResolvedPath(virtualTargetPath)
if err != nil {
return err
}
srcInfo, err := fsSrc.Lstat(fsSourcePath)
if err != nil {
return c.GetFsError(fsSrc, err)
}
if !c.isRenamePermitted(fsSrc, fsDst, fsSourcePath, fsTargetPath, virtualSourcePath, virtualTargetPath, srcInfo) {
return c.GetPermissionDeniedError()
}
initialSize := int64(-1)
if dstInfo, err := c.Fs.Lstat(fsTargetPath); err == nil {
if dstInfo, err := fsDst.Lstat(fsTargetPath); err == nil {
if dstInfo.IsDir() {
c.Log(logger.LevelWarn, "attempted to rename %#v overwriting an existing directory %#v",
fsSourcePath, fsTargetPath)
@@ -370,8 +379,8 @@ func (c *BaseConnection) Rename(fsSourcePath, fsTargetPath, virtualSourcePath, v
initialSize = dstInfo.Size()
}
if !c.User.HasPerm(dataprovider.PermOverwrite, path.Dir(virtualTargetPath)) {
c.Log(logger.LevelDebug, "renaming is not allowed, %#v -> %#v. Target exists but the user "+
"has no overwrite permission", virtualSourcePath, virtualTargetPath)
c.Log(logger.LevelDebug, "renaming %#v -> %#v is not allowed. Target exists but the user %#v"+
"has no overwrite permission", virtualSourcePath, virtualTargetPath, c.User.Username)
return c.GetPermissionDeniedError()
}
}
@@ -381,67 +390,66 @@ func (c *BaseConnection) Rename(fsSourcePath, fsTargetPath, virtualSourcePath, v
virtualSourcePath)
return c.GetOpUnsupportedError()
}
if err = c.checkRecursiveRenameDirPermissions(fsSourcePath, fsTargetPath); err != nil {
if err = c.checkRecursiveRenameDirPermissions(fsSrc, fsDst, fsSourcePath, fsTargetPath); err != nil {
c.Log(logger.LevelDebug, "error checking recursive permissions before renaming %#v: %+v", fsSourcePath, err)
return c.GetFsError(err)
return err
}
}
if !c.hasSpaceForRename(virtualSourcePath, virtualTargetPath, initialSize, fsSourcePath) {
if !c.hasSpaceForRename(fsSrc, virtualSourcePath, virtualTargetPath, initialSize, fsSourcePath) {
c.Log(logger.LevelInfo, "denying cross rename due to space limit")
return c.GetGenericError(ErrQuotaExceeded)
}
if err := c.Fs.Rename(fsSourcePath, fsTargetPath); err != nil {
if err := fsSrc.Rename(fsSourcePath, fsTargetPath); err != nil {
c.Log(logger.LevelWarn, "failed to rename %#v -> %#v: %+v", fsSourcePath, fsTargetPath, err)
return c.GetFsError(err)
}
if dataprovider.GetQuotaTracking() > 0 {
c.updateQuotaAfterRename(virtualSourcePath, virtualTargetPath, fsTargetPath, initialSize) //nolint:errcheck
return c.GetFsError(fsSrc, err)
}
vfs.SetPathPermissions(fsDst, fsTargetPath, c.User.GetUID(), c.User.GetGID())
c.updateQuotaAfterRename(fsDst, virtualSourcePath, virtualTargetPath, fsTargetPath, initialSize) //nolint:errcheck
logger.CommandLog(renameLogSender, fsSourcePath, fsTargetPath, c.User.Username, "", c.ID, c.protocol, -1, -1,
"", "", "", -1)
action := newActionNotification(&c.User, operationRename, fsSourcePath, fsTargetPath, "", c.protocol, 0, nil)
// the returned error is used in test cases only, we already log the error inside action.execute
go actionHandler.Handle(action) // nolint:errcheck
"", "", "", -1, c.remoteAddr)
ExecuteActionNotification(&c.User, operationRename, fsSourcePath, virtualSourcePath, fsTargetPath, "", c.protocol, 0, nil)
return nil
}
// CreateSymlink creates fsTargetPath as a symbolic link to fsSourcePath
func (c *BaseConnection) CreateSymlink(fsSourcePath, fsTargetPath, virtualSourcePath, virtualTargetPath string) error {
if c.Fs.GetRelativePath(fsSourcePath) == "/" {
func (c *BaseConnection) CreateSymlink(virtualSourcePath, virtualTargetPath string) error {
if c.isCrossFoldersRequest(virtualSourcePath, virtualTargetPath) {
c.Log(logger.LevelWarn, "cross folder symlink is not supported, src: %v dst: %v", virtualSourcePath, virtualTargetPath)
return c.GetOpUnsupportedError()
}
// we cannot have a cross folder request here so only one fs is enough
fs, fsSourcePath, err := c.GetFsAndResolvedPath(virtualSourcePath)
if err != nil {
return err
}
fsTargetPath, err := fs.ResolvePath(virtualTargetPath)
if err != nil {
return c.GetFsError(fs, err)
}
if fs.GetRelativePath(fsSourcePath) == "/" {
c.Log(logger.LevelWarn, "symlinking root dir is not allowed")
return c.GetPermissionDeniedError()
}
if c.User.IsVirtualFolder(virtualTargetPath) {
c.Log(logger.LevelWarn, "symlinking a virtual folder is not allowed")
if fs.GetRelativePath(fsTargetPath) == "/" {
c.Log(logger.LevelWarn, "symlinking to root dir is not allowed")
return c.GetPermissionDeniedError()
}
if !c.User.HasPerm(dataprovider.PermCreateSymlinks, path.Dir(virtualTargetPath)) {
return c.GetPermissionDeniedError()
}
if c.isCrossFoldersRequest(virtualSourcePath, virtualTargetPath) {
c.Log(logger.LevelWarn, "cross folder symlink is not supported, src: %v dst: %v", virtualSourcePath, virtualTargetPath)
return c.GetOpUnsupportedError()
}
if c.User.IsMappedPath(fsSourcePath) {
c.Log(logger.LevelWarn, "symlinking a directory mapped as virtual folder is not allowed: %#v", fsSourcePath)
return c.GetPermissionDeniedError()
}
if c.User.IsMappedPath(fsTargetPath) {
c.Log(logger.LevelWarn, "symlinking to a directory mapped as virtual folder is not allowed: %#v", fsTargetPath)
return c.GetPermissionDeniedError()
}
if err := c.Fs.Symlink(fsSourcePath, fsTargetPath); err != nil {
if err := fs.Symlink(fsSourcePath, fsTargetPath); err != nil {
c.Log(logger.LevelWarn, "failed to create symlink %#v -> %#v: %+v", fsSourcePath, fsTargetPath, err)
return c.GetFsError(err)
return c.GetFsError(fs, err)
}
logger.CommandLog(symlinkLogSender, fsSourcePath, fsTargetPath, c.User.Username, "", c.ID, c.protocol, -1, -1, "", "", "", -1)
logger.CommandLog(symlinkLogSender, fsSourcePath, fsTargetPath, c.User.Username, "", c.ID, c.protocol, -1, -1, "",
"", "", -1, c.remoteAddr)
return nil
}
func (c *BaseConnection) getPathForSetStatPerms(fsPath, virtualPath string) string {
func (c *BaseConnection) getPathForSetStatPerms(fs vfs.Fs, fsPath, virtualPath string) string {
pathForPerms := virtualPath
if fi, err := c.Fs.Lstat(fsPath); err == nil {
if fi, err := fs.Lstat(fsPath); err == nil {
if fi.IsDir() {
pathForPerms = path.Dir(virtualPath)
}
@@ -450,96 +458,115 @@ func (c *BaseConnection) getPathForSetStatPerms(fsPath, virtualPath string) stri
}
// DoStat executes a Stat if mode = 0, or a Lstat if mode = 1
func (c *BaseConnection) DoStat(fsPath string, mode int) (os.FileInfo, error) {
func (c *BaseConnection) DoStat(virtualPath string, mode int) (os.FileInfo, error) {
// for some vfs we don't create intermediary folders so we cannot simply check
// if virtualPath is a virtual folder
vfolders := c.User.GetVirtualFoldersInPath(path.Dir(virtualPath))
if _, ok := vfolders[virtualPath]; ok {
return vfs.NewFileInfo(virtualPath, true, 0, time.Now(), false), nil
}
var info os.FileInfo
var err error
fs, fsPath, err := c.GetFsAndResolvedPath(virtualPath)
if err != nil {
return info, err
}
if mode == 1 {
info, err = c.Fs.Lstat(c.getRealFsPath(fsPath))
info, err = fs.Lstat(c.getRealFsPath(fsPath))
} else {
info, err = c.Fs.Stat(c.getRealFsPath(fsPath))
info, err = fs.Stat(c.getRealFsPath(fsPath))
}
if err == nil && vfs.IsCryptOsFs(c.Fs) {
info = c.Fs.(*vfs.CryptFs).ConvertFileInfo(info)
if err != nil {
return info, c.GetFsError(fs, err)
}
return info, err
if vfs.IsCryptOsFs(fs) {
info = fs.(*vfs.CryptFs).ConvertFileInfo(info)
}
return info, nil
}
func (c *BaseConnection) ignoreSetStat() bool {
func (c *BaseConnection) ignoreSetStat(fs vfs.Fs) bool {
if Config.SetstatMode == 1 {
return true
}
if Config.SetstatMode == 2 && !vfs.IsLocalOrSFTPFs(c.Fs) {
if Config.SetstatMode == 2 && !vfs.IsLocalOrSFTPFs(fs) && !vfs.IsCryptOsFs(fs) {
return true
}
return false
}
func (c *BaseConnection) handleChmod(fsPath, pathForPerms string, attributes *StatAttributes) error {
func (c *BaseConnection) handleChmod(fs vfs.Fs, fsPath, pathForPerms string, attributes *StatAttributes) error {
if !c.User.HasPerm(dataprovider.PermChmod, pathForPerms) {
return c.GetPermissionDeniedError()
}
if c.ignoreSetStat() {
if c.ignoreSetStat(fs) {
return nil
}
if err := c.Fs.Chmod(c.getRealFsPath(fsPath), attributes.Mode); err != nil {
if err := fs.Chmod(c.getRealFsPath(fsPath), attributes.Mode); err != nil {
c.Log(logger.LevelWarn, "failed to chmod path %#v, mode: %v, err: %+v", fsPath, attributes.Mode.String(), err)
return c.GetFsError(err)
return c.GetFsError(fs, err)
}
logger.CommandLog(chmodLogSender, fsPath, "", c.User.Username, attributes.Mode.String(), c.ID, c.protocol,
-1, -1, "", "", "", -1)
-1, -1, "", "", "", -1, c.remoteAddr)
return nil
}
func (c *BaseConnection) handleChown(fsPath, pathForPerms string, attributes *StatAttributes) error {
func (c *BaseConnection) handleChown(fs vfs.Fs, fsPath, pathForPerms string, attributes *StatAttributes) error {
if !c.User.HasPerm(dataprovider.PermChown, pathForPerms) {
return c.GetPermissionDeniedError()
}
if c.ignoreSetStat() {
if c.ignoreSetStat(fs) {
return nil
}
if err := c.Fs.Chown(c.getRealFsPath(fsPath), attributes.UID, attributes.GID); err != nil {
if err := fs.Chown(c.getRealFsPath(fsPath), attributes.UID, attributes.GID); err != nil {
c.Log(logger.LevelWarn, "failed to chown path %#v, uid: %v, gid: %v, err: %+v", fsPath, attributes.UID,
attributes.GID, err)
return c.GetFsError(err)
return c.GetFsError(fs, err)
}
logger.CommandLog(chownLogSender, fsPath, "", c.User.Username, "", c.ID, c.protocol, attributes.UID, attributes.GID,
"", "", "", -1)
"", "", "", -1, c.remoteAddr)
return nil
}
func (c *BaseConnection) handleChtimes(fsPath, pathForPerms string, attributes *StatAttributes) error {
func (c *BaseConnection) handleChtimes(fs vfs.Fs, fsPath, pathForPerms string, attributes *StatAttributes) error {
if !c.User.HasPerm(dataprovider.PermChtimes, pathForPerms) {
return c.GetPermissionDeniedError()
}
if c.ignoreSetStat() {
if c.ignoreSetStat(fs) {
return nil
}
if err := c.Fs.Chtimes(c.getRealFsPath(fsPath), attributes.Atime, attributes.Mtime); err != nil {
if err := fs.Chtimes(c.getRealFsPath(fsPath), attributes.Atime, attributes.Mtime); err != nil {
c.Log(logger.LevelWarn, "failed to chtimes for path %#v, access time: %v, modification time: %v, err: %+v",
fsPath, attributes.Atime, attributes.Mtime, err)
return c.GetFsError(err)
return c.GetFsError(fs, err)
}
accessTimeString := attributes.Atime.Format(chtimesFormat)
modificationTimeString := attributes.Mtime.Format(chtimesFormat)
logger.CommandLog(chtimesLogSender, fsPath, "", c.User.Username, "", c.ID, c.protocol, -1, -1,
accessTimeString, modificationTimeString, "", -1)
accessTimeString, modificationTimeString, "", -1, c.remoteAddr)
return nil
}
// SetStat sets StatAttributes for the path matching virtualPath
func (c *BaseConnection) SetStat(fsPath, virtualPath string, attributes *StatAttributes) error {
pathForPerms := c.getPathForSetStatPerms(fsPath, virtualPath)
func (c *BaseConnection) SetStat(virtualPath string, attributes *StatAttributes) error {
fs, fsPath, err := c.GetFsAndResolvedPath(virtualPath)
if err != nil {
return err
}
pathForPerms := c.getPathForSetStatPerms(fs, fsPath, virtualPath)
if attributes.Flags&StatAttrPerms != 0 {
return c.handleChmod(fsPath, pathForPerms, attributes)
return c.handleChmod(fs, fsPath, pathForPerms, attributes)
}
if attributes.Flags&StatAttrUIDGID != 0 {
return c.handleChown(fsPath, pathForPerms, attributes)
return c.handleChown(fs, fsPath, pathForPerms, attributes)
}
if attributes.Flags&StatAttrTimes != 0 {
return c.handleChtimes(fsPath, pathForPerms, attributes)
return c.handleChtimes(fs, fsPath, pathForPerms, attributes)
}
if attributes.Flags&StatAttrSize != 0 {
@@ -547,17 +574,18 @@ func (c *BaseConnection) SetStat(fsPath, virtualPath string, attributes *StatAtt
return c.GetPermissionDeniedError()
}
if err := c.truncateFile(fsPath, virtualPath, attributes.Size); err != nil {
if err := c.truncateFile(fs, fsPath, virtualPath, attributes.Size); err != nil {
c.Log(logger.LevelWarn, "failed to truncate path %#v, size: %v, err: %+v", fsPath, attributes.Size, err)
return c.GetFsError(err)
return c.GetFsError(fs, err)
}
logger.CommandLog(truncateLogSender, fsPath, "", c.User.Username, "", c.ID, c.protocol, -1, -1, "", "", "", attributes.Size)
logger.CommandLog(truncateLogSender, fsPath, "", c.User.Username, "", c.ID, c.protocol, -1, -1, "", "",
"", attributes.Size, c.remoteAddr)
}
return nil
}
func (c *BaseConnection) truncateFile(fsPath, virtualPath string, size int64) error {
func (c *BaseConnection) truncateFile(fs vfs.Fs, fsPath, virtualPath string, size int64) error {
// check first if we have an open transfer for the given path and try to truncate the file already opened
// if we found no transfer we truncate by path.
var initialSize int64
@@ -566,14 +594,14 @@ func (c *BaseConnection) truncateFile(fsPath, virtualPath string, size int64) er
if err == errNoTransfer {
c.Log(logger.LevelDebug, "file path %#v not found in active transfers, execute trucate by path", fsPath)
var info os.FileInfo
info, err = c.Fs.Stat(fsPath)
info, err = fs.Stat(fsPath)
if err != nil {
return err
}
initialSize = info.Size()
err = c.Fs.Truncate(fsPath, size)
err = fs.Truncate(fsPath, size)
}
if err == nil && vfs.IsLocalOrSFTPFs(c.Fs) {
if err == nil && vfs.IsLocalOrSFTPFs(fs) {
sizeDiff := initialSize - size
vfolder, err := c.User.GetVirtualFolderForPath(path.Dir(virtualPath))
if err == nil {
@@ -588,20 +616,20 @@ func (c *BaseConnection) truncateFile(fsPath, virtualPath string, size int64) er
return err
}
func (c *BaseConnection) checkRecursiveRenameDirPermissions(sourcePath, targetPath string) error {
func (c *BaseConnection) checkRecursiveRenameDirPermissions(fsSrc, fsDst vfs.Fs, sourcePath, targetPath string) error {
dstPerms := []string{
dataprovider.PermCreateDirs,
dataprovider.PermUpload,
dataprovider.PermCreateSymlinks,
}
err := c.Fs.Walk(sourcePath, func(walkedPath string, info os.FileInfo, err error) error {
err := fsSrc.Walk(sourcePath, func(walkedPath string, info os.FileInfo, err error) error {
if err != nil {
return err
return c.GetFsError(fsSrc, err)
}
dstPath := strings.Replace(walkedPath, sourcePath, targetPath, 1)
virtualSrcPath := c.Fs.GetRelativePath(walkedPath)
virtualDstPath := c.Fs.GetRelativePath(dstPath)
virtualSrcPath := fsSrc.GetRelativePath(walkedPath)
virtualDstPath := fsDst.GetRelativePath(dstPath)
// walk scans the directory tree in order; since we check the parent directory permissions first, we are sure
// that all contents inside the parent path were already checked. If the current dir has no subdirs with defined
// permissions inside it and it has all the possible permissions we can stop scanning
@@ -615,10 +643,10 @@ func (c *BaseConnection) checkRecursiveRenameDirPermissions(sourcePath, targetPa
return ErrSkipPermissionsCheck
}
}
if !c.isRenamePermitted(walkedPath, virtualSrcPath, virtualDstPath, info) {
if !c.isRenamePermitted(fsSrc, fsDst, walkedPath, dstPath, virtualSrcPath, virtualDstPath, info) {
c.Log(logger.LevelInfo, "rename %#v -> %#v is not allowed, virtual destination path: %#v",
walkedPath, dstPath, virtualDstPath)
return os.ErrPermission
return c.GetPermissionDeniedError()
}
return nil
})
@@ -628,22 +656,7 @@ func (c *BaseConnection) checkRecursiveRenameDirPermissions(sourcePath, targetPa
return err
}
func (c *BaseConnection) isRenamePermitted(fsSourcePath, virtualSourcePath, virtualTargetPath string, fi os.FileInfo) bool {
if c.Fs.GetRelativePath(fsSourcePath) == "/" {
c.Log(logger.LevelWarn, "renaming root dir is not allowed")
return false
}
if c.User.IsVirtualFolder(virtualSourcePath) || c.User.IsVirtualFolder(virtualTargetPath) {
c.Log(logger.LevelWarn, "renaming a virtual folder is not allowed")
return false
}
if !c.User.IsFileAllowed(virtualSourcePath) || !c.User.IsFileAllowed(virtualTargetPath) {
if fi != nil && fi.Mode().IsRegular() {
c.Log(logger.LevelDebug, "renaming file is not allowed, source: %#v target: %#v",
virtualSourcePath, virtualTargetPath)
return false
}
}
func (c *BaseConnection) hasRenamePerms(virtualSourcePath, virtualTargetPath string, fi os.FileInfo) bool {
if c.User.HasPerm(dataprovider.PermRename, path.Dir(virtualSourcePath)) &&
c.User.HasPerm(dataprovider.PermRename, path.Dir(virtualTargetPath)) {
return true
@@ -661,7 +674,39 @@ func (c *BaseConnection) isRenamePermitted(fsSourcePath, virtualSourcePath, virt
return c.User.HasPerm(dataprovider.PermUpload, path.Dir(virtualTargetPath))
}
func (c *BaseConnection) hasSpaceForRename(virtualSourcePath, virtualTargetPath string, initialSize int64,
func (c *BaseConnection) isRenamePermitted(fsSrc, fsDst vfs.Fs, fsSourcePath, fsTargetPath, virtualSourcePath, virtualTargetPath string, fi os.FileInfo) bool {
if !c.isLocalOrSameFolderRename(virtualSourcePath, virtualTargetPath) {
c.Log(logger.LevelInfo, "rename %#v->%#v is not allowed: the paths must be local or on the same virtual folder",
virtualSourcePath, virtualTargetPath)
return false
}
if c.User.IsMappedPath(fsSourcePath) && vfs.IsLocalOrCryptoFs(fsSrc) {
c.Log(logger.LevelWarn, "renaming a directory mapped as virtual folder is not allowed: %#v", fsSourcePath)
return false
}
if c.User.IsMappedPath(fsTargetPath) && vfs.IsLocalOrCryptoFs(fsDst) {
c.Log(logger.LevelWarn, "renaming to a directory mapped as virtual folder is not allowed: %#v", fsTargetPath)
return false
}
if fsSrc.GetRelativePath(fsSourcePath) == "/" {
c.Log(logger.LevelWarn, "renaming root dir is not allowed")
return false
}
if c.User.IsVirtualFolder(virtualSourcePath) || c.User.IsVirtualFolder(virtualTargetPath) {
c.Log(logger.LevelWarn, "renaming a virtual folder is not allowed")
return false
}
if !c.User.IsFileAllowed(virtualSourcePath) || !c.User.IsFileAllowed(virtualTargetPath) {
if fi != nil && fi.Mode().IsRegular() {
c.Log(logger.LevelDebug, "renaming file is not allowed, source: %#v target: %#v",
virtualSourcePath, virtualTargetPath)
return false
}
}
return c.hasRenamePerms(virtualSourcePath, virtualTargetPath, fi)
}
func (c *BaseConnection) hasSpaceForRename(fs vfs.Fs, virtualSourcePath, virtualTargetPath string, initialSize int64,
fsSourcePath string) bool {
if dataprovider.GetQuotaTracking() == 0 {
return true
@@ -674,7 +719,7 @@ func (c *BaseConnection) hasSpaceForRename(virtualSourcePath, virtualTargetPath
}
if errSrc == nil && errDst == nil {
// rename between virtual folders
if sourceFolder.MappedPath == dstFolder.MappedPath {
if sourceFolder.Name == dstFolder.Name {
// rename inside the same virtual folder
return true
}
@@ -684,16 +729,16 @@ func (c *BaseConnection) hasSpaceForRename(virtualSourcePath, virtualTargetPath
return true
}
quotaResult := c.HasSpace(true, false, virtualTargetPath)
return c.hasSpaceForCrossRename(quotaResult, initialSize, fsSourcePath)
return c.hasSpaceForCrossRename(fs, quotaResult, initialSize, fsSourcePath)
}
// hasSpaceForCrossRename checks the quota after a rename between different folders
func (c *BaseConnection) hasSpaceForCrossRename(quotaResult vfs.QuotaCheckResult, initialSize int64, sourcePath string) bool {
func (c *BaseConnection) hasSpaceForCrossRename(fs vfs.Fs, quotaResult vfs.QuotaCheckResult, initialSize int64, sourcePath string) bool {
if !quotaResult.HasSpace && initialSize == -1 {
// we are over quota and this is not a file replace
return false
}
fi, err := c.Fs.Lstat(sourcePath)
fi, err := fs.Lstat(sourcePath)
if err != nil {
c.Log(logger.LevelWarn, "cross rename denied, stat error for path %#v: %v", sourcePath, err)
return false
@@ -708,7 +753,7 @@ func (c *BaseConnection) hasSpaceForCrossRename(quotaResult vfs.QuotaCheckResult
filesDiff = 0
}
} else if fi.IsDir() {
filesDiff, sizeDiff, err = c.Fs.GetDirSize(sourcePath)
filesDiff, sizeDiff, err = fs.GetDirSize(sourcePath)
if err != nil {
c.Log(logger.LevelWarn, "cross rename denied, error getting size for directory %#v: %v", sourcePath, err)
return false
@@ -745,11 +790,11 @@ func (c *BaseConnection) hasSpaceForCrossRename(quotaResult vfs.QuotaCheckResult
// GetMaxWriteSize returns the allowed size for an upload or an error
// if not enough space is available for a resume/append
func (c *BaseConnection) GetMaxWriteSize(quotaResult vfs.QuotaCheckResult, isResume bool, fileSize int64) (int64, error) {
func (c *BaseConnection) GetMaxWriteSize(quotaResult vfs.QuotaCheckResult, isResume bool, fileSize int64, isUploadResumeSupported bool) (int64, error) {
maxWriteSize := quotaResult.GetRemainingSize()
if isResume {
if !c.Fs.IsUploadResumeSupported() {
if !isUploadResumeSupported {
return 0, c.GetOpUnsupportedError()
}
if c.User.Filters.MaxUploadFileSize > 0 && c.User.Filters.MaxUploadFileSize <= fileSize {
@@ -823,6 +868,40 @@ func (c *BaseConnection) HasSpace(checkFiles, getUsage bool, requestPath string)
return result
}
// returns true if this is a rename on the same fs or between local virtual folders
func (c *BaseConnection) isLocalOrSameFolderRename(virtualSourcePath, virtualTargetPath string) bool {
sourceFolder, errSrc := c.User.GetVirtualFolderForPath(virtualSourcePath)
dstFolder, errDst := c.User.GetVirtualFolderForPath(virtualTargetPath)
if errSrc != nil && errDst != nil {
return true
}
if errSrc == nil && errDst == nil {
if sourceFolder.Name == dstFolder.Name {
return true
}
// we have different folders, only local fs is supported
if sourceFolder.FsConfig.Provider == vfs.LocalFilesystemProvider &&
dstFolder.FsConfig.Provider == vfs.LocalFilesystemProvider {
return true
}
return false
}
if c.User.FsConfig.Provider != vfs.LocalFilesystemProvider {
return false
}
if errSrc == nil {
if sourceFolder.FsConfig.Provider == vfs.LocalFilesystemProvider {
return true
}
}
if errDst == nil {
if dstFolder.FsConfig.Provider == vfs.LocalFilesystemProvider {
return true
}
}
return false
}
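In short: a rename with both paths inside the user home dir is always evaluated; a rename between two different virtual folders proceeds only when both are backed by the local filesystem; and a rename crossing between the home dir and a virtual folder requires the provider to be local on both sides. So, for example, moving a file from an S3-backed virtual folder to a GCS-backed one is rejected by isRenamePermitted with a permission denied error before any quota check runs.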
func (c *BaseConnection) isCrossFoldersRequest(virtualSourcePath, virtualTargetPath string) bool {
sourceFolder, errSrc := c.User.GetVirtualFolderForPath(virtualSourcePath)
dstFolder, errDst := c.User.GetVirtualFolderForPath(virtualTargetPath)
@@ -830,14 +909,14 @@ func (c *BaseConnection) isCrossFoldersRequest(virtualSourcePath, virtualTargetP
return false
}
if errSrc == nil && errDst == nil {
return sourceFolder.MappedPath != dstFolder.MappedPath
return sourceFolder.Name != dstFolder.Name
}
return true
}
func (c *BaseConnection) updateQuotaMoveBetweenVFolders(sourceFolder, dstFolder *vfs.VirtualFolder, initialSize,
filesSize int64, numFiles int) {
if sourceFolder.MappedPath == dstFolder.MappedPath {
if sourceFolder.Name == dstFolder.Name {
// both files are inside the same virtual folder
if initialSize != -1 {
dataprovider.UpdateVirtualFolderQuota(&dstFolder.BaseVirtualFolder, -numFiles, -initialSize, false) //nolint:errcheck
@@ -897,7 +976,10 @@ func (c *BaseConnection) updateQuotaMoveToVFolder(dstFolder *vfs.VirtualFolder,
}
}
func (c *BaseConnection) updateQuotaAfterRename(virtualSourcePath, virtualTargetPath, targetPath string, initialSize int64) error {
func (c *BaseConnection) updateQuotaAfterRename(fs vfs.Fs, virtualSourcePath, virtualTargetPath, targetPath string, initialSize int64) error {
if dataprovider.GetQuotaTracking() == 0 {
return nil
}
// we don't allow overwriting an existing directory, so targetPath can be:
// - a new file, a symlink is treated as a new file here
// - a file overwriting an existing one
@@ -908,7 +990,8 @@ func (c *BaseConnection) updateQuotaAfterRename(virtualSourcePath, virtualTarget
if errSrc != nil && errDst != nil {
// both files are contained inside the user home dir
if initialSize != -1 {
// we cannot have a directory here
// we cannot have a directory here, we are overwriting an existing file
// we need to subtract the size of the overwritten file from the user quota
dataprovider.UpdateUserQuota(&c.User, -1, -initialSize, false) //nolint:errcheck
}
return nil
@@ -916,9 +999,9 @@ func (c *BaseConnection) updateQuotaAfterRename(virtualSourcePath, virtualTarget
filesSize := int64(0)
numFiles := 1
if fi, err := c.Fs.Stat(targetPath); err == nil {
if fi, err := fs.Stat(targetPath); err == nil {
if fi.Mode().IsDir() {
numFiles, filesSize, err = c.Fs.GetDirSize(targetPath)
numFiles, filesSize, err = fs.GetDirSize(targetPath)
if err != nil {
c.Log(logger.LevelWarn, "failed to update quota after rename, error scanning moved folder %#v: %v",
targetPath, err)
@@ -948,7 +1031,7 @@ func (c *BaseConnection) GetPermissionDeniedError() error {
switch c.protocol {
case ProtocolSFTP:
return sftp.ErrSSHFxPermissionDenied
case ProtocolWebDAV, ProtocolFTP:
case ProtocolWebDAV, ProtocolFTP, ProtocolHTTP:
return os.ErrPermission
default:
return ErrPermissionDenied
@@ -960,7 +1043,7 @@ func (c *BaseConnection) GetNotExistError() error {
switch c.protocol {
case ProtocolSFTP:
return sftp.ErrSSHFxNoSuchFile
case ProtocolWebDAV, ProtocolFTP:
case ProtocolWebDAV, ProtocolFTP, ProtocolHTTP:
return os.ErrNotExist
default:
return ErrNotExist
@@ -995,15 +1078,35 @@ func (c *BaseConnection) GetGenericError(err error) error {
}
// GetFsError converts a filesystem error to a protocol error
func (c *BaseConnection) GetFsError(err error) error {
if c.Fs.IsNotExist(err) {
func (c *BaseConnection) GetFsError(fs vfs.Fs, err error) error {
if fs.IsNotExist(err) {
return c.GetNotExistError()
} else if c.Fs.IsPermission(err) {
} else if fs.IsPermission(err) {
return c.GetPermissionDeniedError()
} else if c.Fs.IsNotSupported(err) {
} else if fs.IsNotSupported(err) {
return c.GetOpUnsupportedError()
} else if err != nil {
return c.GetGenericError(err)
}
return nil
}
// GetFsAndResolvedPath returns the fs and the fs path matching virtualPath
func (c *BaseConnection) GetFsAndResolvedPath(virtualPath string) (vfs.Fs, string, error) {
fs, err := c.User.GetFilesystemForPath(virtualPath, c.ID)
if err != nil {
if c.protocol == ProtocolWebDAV && strings.Contains(err.Error(), vfs.ErrSFTPLoop.Error()) {
// if there is an SFTP loop we return a permission error for WebDAV, so the problematic folder
// will not be listed
return nil, "", c.GetPermissionDeniedError()
}
return nil, "", err
}
fsPath, err := fs.ResolvePath(virtualPath)
if err != nil {
return nil, "", c.GetFsError(fs, err)
}
return fs, fsPath, nil
}
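Call sites that previously went through the connection-level c.Fs now resolve the filesystem per virtual path, which is what lets a single connection span the home dir and virtual folders backed by different providers. A minimal hypothetical caller, using only the signatures shown above:

    func (c *BaseConnection) fileSize(virtualPath string) (int64, error) {
        fs, fsPath, err := c.GetFsAndResolvedPath(virtualPath)
        if err != nil {
            return 0, err
        }
        info, err := fs.Stat(fsPath)
        if err != nil {
            // map the fs specific error to a protocol error, as the methods above do
            return 0, c.GetFsError(fs, err)
        }
        return info.Size(), nil
    }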

File diff suppressed because it is too large.

@@ -1,9 +1,9 @@
package common
import (
"encoding/hex"
"encoding/json"
"fmt"
"io/ioutil"
"net"
"os"
"sort"
@@ -12,6 +12,7 @@ import (
"github.com/yl2chen/cidranger"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
)
@@ -24,15 +25,53 @@ const (
HostEventLoginFailed HostEvent = iota
HostEventUserNotFound
HostEventNoLoginTried
HostEventLimitExceeded
)
// DefenderEntry defines a defender entry
type DefenderEntry struct {
IP string `json:"ip"`
Score int `json:"score,omitempty"`
BanTime time.Time `json:"ban_time,omitempty"`
}
// GetID returns a unique ID for a defender entry
func (d *DefenderEntry) GetID() string {
return hex.EncodeToString([]byte(d.IP))
}
// GetBanTime returns the ban time for a defender entry as string
func (d *DefenderEntry) GetBanTime() string {
if d.BanTime.IsZero() {
return ""
}
return d.BanTime.UTC().Format(time.RFC3339)
}
// MarshalJSON returns the JSON encoding of a DefenderEntry.
func (d *DefenderEntry) MarshalJSON() ([]byte, error) {
return json.Marshal(&struct {
ID string `json:"id"`
IP string `json:"ip"`
Score int `json:"score,omitempty"`
BanTime string `json:"ban_time,omitempty"`
}{
ID: d.GetID(),
IP: d.IP,
Score: d.Score,
BanTime: d.GetBanTime(),
})
}
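For illustration, with values derived from the code above rather than taken from the repository: an entry with IP 192.168.1.1 and score 3 that is not banned marshals to

    {"id":"3139322e3136382e312e31","ip":"192.168.1.1","score":3}

ban_time is omitted because GetBanTime returns an empty string for a zero time and the field is marked omitempty, while the id is just the hex encoding of the IP, so the IP can be recovered from it.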
// Defender defines the interface that a defender must implement
type Defender interface {
GetHosts() []*DefenderEntry
GetHost(ip string) (*DefenderEntry, error)
AddEvent(ip string, event HostEvent)
IsBanned(ip string) bool
GetBanTime(ip string) *time.Time
GetScore(ip string) int
Unban(ip string) bool
DeleteHost(ip string) bool
Reload() error
}
@@ -51,6 +90,9 @@ type DefenderConfig struct {
ScoreInvalid int `json:"score_invalid" mapstructure:"score_invalid"`
// Score for valid login attempts, e.g. user accounts that exist
ScoreValid int `json:"score_valid" mapstructure:"score_valid"`
// Score for limit exceeded events, generated by the rate limiters or when the
// maximum allowed connections per host is exceeded
ScoreLimitExceeded int `json:"score_limit_exceeded" mapstructure:"score_limit_exceeded"`
// Defines the time window, in minutes, for tracking client errors.
// A host is banned if it has exceeded the defined threshold during
// the last observation time minutes
@@ -124,6 +166,9 @@ func (c *DefenderConfig) validate() error {
if c.ScoreValid >= c.Threshold {
return fmt.Errorf("score_valid %v cannot be greater than threshold %v", c.ScoreValid, c.Threshold)
}
if c.ScoreLimitExceeded >= c.Threshold {
return fmt.Errorf("score_limit_exceeded %v cannot be greater than threshold %v", c.ScoreLimitExceeded, c.Threshold)
}
if c.BanTime <= 0 {
return fmt.Errorf("invalid ban_time %v", c.BanTime)
}
@@ -184,6 +229,70 @@ func (d *memoryDefender) Reload() error {
return nil
}
// GetHosts returns hosts that are banned or for which some violations have been detected
func (d *memoryDefender) GetHosts() []*DefenderEntry {
d.RLock()
defer d.RUnlock()
var result []*DefenderEntry
for k, v := range d.banned {
if v.After(time.Now()) {
result = append(result, &DefenderEntry{
IP: k,
BanTime: v,
})
}
}
for k, v := range d.hosts {
score := 0
for _, event := range v.Events {
if event.dateTime.Add(time.Duration(d.config.ObservationTime) * time.Minute).After(time.Now()) {
score += event.score
}
}
if score > 0 {
result = append(result, &DefenderEntry{
IP: k,
Score: score,
})
}
}
return result
}
// GetHost returns a defender host by ip, if any
func (d *memoryDefender) GetHost(ip string) (*DefenderEntry, error) {
d.RLock()
defer d.RUnlock()
if banTime, ok := d.banned[ip]; ok {
if banTime.After(time.Now()) {
return &DefenderEntry{
IP: ip,
BanTime: banTime,
}, nil
}
}
if hs, ok := d.hosts[ip]; ok {
score := 0
for _, event := range hs.Events {
if event.dateTime.Add(time.Duration(d.config.ObservationTime) * time.Minute).After(time.Now()) {
score += event.score
}
}
if score > 0 {
return &DefenderEntry{
IP: ip,
Score: score,
}, nil
}
}
return nil, dataprovider.NewRecordNotFoundError("host not found")
}
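A worked example of the observation window used above: with ObservationTime set to 15 (minutes), an event recorded 20 minutes ago no longer satisfies dateTime + 15m > now, so it contributes nothing to the score. A host whose events are all older than the window therefore gets a record-not-found error here even though its entry may still sit in the hosts map until the next cleanup; TestExpiredHostBans below exercises exactly this.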
// IsBanned returns true if the specified IP is banned
// and increases the ban time if the IP is found.
// This method must be called as soon as the client connects
@@ -221,8 +330,8 @@ func (d *memoryDefender) IsBanned(ip string) bool {
return false
}
// Unban removes the specified IP address from the banned ones
func (d *memoryDefender) Unban(ip string) bool {
// DeleteHost removes the specified IP from the defender lists
func (d *memoryDefender) DeleteHost(ip string) bool {
d.Lock()
defer d.Unlock()
@@ -231,6 +340,11 @@ func (d *memoryDefender) Unban(ip string) bool {
return true
}
if _, ok := d.hosts[ip]; ok {
delete(d.hosts, ip)
return true
}
return false
}
@@ -244,11 +358,21 @@ func (d *memoryDefender) AddEvent(ip string, event HostEvent) {
return
}
// ignore events for already banned hosts
if v, ok := d.banned[ip]; ok {
if v.After(time.Now()) {
return
}
delete(d.banned, ip)
}
var score int
switch event {
case HostEventLoginFailed:
score = d.config.ScoreValid
case HostEventLimitExceeded:
score = d.config.ScoreLimitExceeded
case HostEventUserNotFound, HostEventNoLoginTried:
score = d.config.ScoreInvalid
}
@@ -413,7 +537,7 @@ func loadHostListFromFile(name string) (*HostList, error) {
return nil, fmt.Errorf("host list file %#v is too big: %v bytes", name, info.Size())
}
content, err := ioutil.ReadFile(name)
content, err := os.ReadFile(name)
if err != nil {
return nil, fmt.Errorf("unable to read input file %#v: %v", name, err)
}


@@ -2,9 +2,9 @@ package common
import (
"crypto/rand"
"encoding/hex"
"encoding/json"
"fmt"
"io/ioutil"
"net"
"os"
"path/filepath"
@@ -32,27 +32,28 @@ func TestBasicDefender(t *testing.T) {
data, err := json.Marshal(bl)
assert.NoError(t, err)
err = ioutil.WriteFile(blFile, data, os.ModePerm)
err = os.WriteFile(blFile, data, os.ModePerm)
assert.NoError(t, err)
data, err = json.Marshal(sl)
assert.NoError(t, err)
err = ioutil.WriteFile(slFile, data, os.ModePerm)
err = os.WriteFile(slFile, data, os.ModePerm)
assert.NoError(t, err)
config := &DefenderConfig{
Enabled: true,
BanTime: 10,
BanTimeIncrement: 2,
Threshold: 5,
ScoreInvalid: 2,
ScoreValid: 1,
ObservationTime: 15,
EntriesSoftLimit: 1,
EntriesHardLimit: 2,
SafeListFile: "slFile",
BlockListFile: "blFile",
Enabled: true,
BanTime: 10,
BanTimeIncrement: 2,
Threshold: 5,
ScoreInvalid: 2,
ScoreValid: 1,
ScoreLimitExceeded: 3,
ObservationTime: 15,
EntriesSoftLimit: 1,
EntriesHardLimit: 2,
SafeListFile: "slFile",
BlockListFile: "blFile",
}
_, err = newInMemoryDefender(config)
@@ -72,9 +73,13 @@ func TestBasicDefender(t *testing.T) {
assert.False(t, defender.IsBanned("invalid ip"))
assert.Equal(t, 0, defender.countBanned())
assert.Equal(t, 0, defender.countHosts())
assert.Len(t, defender.GetHosts(), 0)
_, err = defender.GetHost("10.8.0.4")
assert.Error(t, err)
defender.AddEvent("172.16.1.4", HostEventLoginFailed)
defender.AddEvent("192.168.8.4", HostEventUserNotFound)
defender.AddEvent("172.16.1.3", HostEventLimitExceeded)
assert.Equal(t, 0, defender.countHosts())
testIP := "12.34.56.78"
@@ -82,16 +87,39 @@ func TestBasicDefender(t *testing.T) {
assert.Equal(t, 1, defender.countHosts())
assert.Equal(t, 0, defender.countBanned())
assert.Equal(t, 1, defender.GetScore(testIP))
if assert.Len(t, defender.GetHosts(), 1) {
assert.Equal(t, 1, defender.GetHosts()[0].Score)
assert.True(t, defender.GetHosts()[0].BanTime.IsZero())
assert.Empty(t, defender.GetHosts()[0].GetBanTime())
}
host, err := defender.GetHost(testIP)
assert.NoError(t, err)
assert.Equal(t, 1, host.Score)
assert.Empty(t, host.GetBanTime())
assert.Nil(t, defender.GetBanTime(testIP))
defender.AddEvent(testIP, HostEventNoLoginTried)
defender.AddEvent(testIP, HostEventLimitExceeded)
assert.Equal(t, 1, defender.countHosts())
assert.Equal(t, 0, defender.countBanned())
assert.Equal(t, 3, defender.GetScore(testIP))
assert.Equal(t, 4, defender.GetScore(testIP))
if assert.Len(t, defender.GetHosts(), 1) {
assert.Equal(t, 4, defender.GetHosts()[0].Score)
}
defender.AddEvent(testIP, HostEventNoLoginTried)
defender.AddEvent(testIP, HostEventNoLoginTried)
assert.Equal(t, 0, defender.countHosts())
assert.Equal(t, 1, defender.countBanned())
assert.Equal(t, 0, defender.GetScore(testIP))
assert.NotNil(t, defender.GetBanTime(testIP))
if assert.Len(t, defender.GetHosts(), 1) {
assert.Equal(t, 0, defender.GetHosts()[0].Score)
assert.False(t, defender.GetHosts()[0].BanTime.IsZero())
assert.NotEmpty(t, defender.GetHosts()[0].GetBanTime())
assert.Equal(t, hex.EncodeToString([]byte(testIP)), defender.GetHosts()[0].GetID())
}
host, err = defender.GetHost(testIP)
assert.NoError(t, err)
assert.Equal(t, 0, host.Score)
assert.NotEmpty(t, host.GetBanTime())
// now test cleanup, testIP is already banned
testIP1 := "12.34.56.79"
@@ -142,8 +170,8 @@ func TestBasicDefender(t *testing.T) {
assert.True(t, newBanTime.After(*banTime))
}
assert.True(t, defender.Unban(testIP3))
assert.False(t, defender.Unban(testIP3))
assert.True(t, defender.DeleteHost(testIP3))
assert.False(t, defender.DeleteHost(testIP3))
err = os.Remove(slFile)
assert.NoError(t, err)
@@ -151,6 +179,77 @@ func TestBasicDefender(t *testing.T) {
assert.NoError(t, err)
}
func TestExpiredHostBans(t *testing.T) {
config := &DefenderConfig{
Enabled: true,
BanTime: 10,
BanTimeIncrement: 2,
Threshold: 5,
ScoreInvalid: 2,
ScoreValid: 1,
ScoreLimitExceeded: 3,
ObservationTime: 15,
EntriesSoftLimit: 1,
EntriesHardLimit: 2,
}
d, err := newInMemoryDefender(config)
assert.NoError(t, err)
defender := d.(*memoryDefender)
testIP := "1.2.3.4"
defender.banned[testIP] = time.Now().Add(-24 * time.Hour)
// the ban is expired, testIP should not be listed
res := defender.GetHosts()
assert.Len(t, res, 0)
assert.False(t, defender.IsBanned(testIP))
_, err = defender.GetHost(testIP)
assert.Error(t, err)
_, ok := defender.banned[testIP]
assert.True(t, ok)
// now add an event for an expired banned ip, it should be removed
defender.AddEvent(testIP, HostEventLoginFailed)
assert.False(t, defender.IsBanned(testIP))
entry, err := defender.GetHost(testIP)
assert.NoError(t, err)
assert.Equal(t, testIP, entry.IP)
assert.Empty(t, entry.GetBanTime())
assert.Equal(t, 1, entry.Score)
res = defender.GetHosts()
if assert.Len(t, res, 1) {
assert.Equal(t, testIP, res[0].IP)
assert.Empty(t, res[0].GetBanTime())
assert.Equal(t, 1, res[0].Score)
}
events := []hostEvent{
{
dateTime: time.Now().Add(-24 * time.Hour),
score: 2,
},
{
dateTime: time.Now().Add(-24 * time.Hour),
score: 3,
},
}
hs := hostScore{
Events: events,
TotalScore: 5,
}
defender.hosts[testIP] = hs
// the recorded scores are too old
res = defender.GetHosts()
assert.Len(t, res, 0)
_, err = defender.GetHost(testIP)
assert.Error(t, err)
}
func TestLoadHostListFromFile(t *testing.T) {
_, err := loadHostListFromFile(".")
assert.Error(t, err)
@@ -160,7 +259,7 @@ func TestLoadHostListFromFile(t *testing.T) {
_, err = rand.Read(content)
assert.NoError(t, err)
err = ioutil.WriteFile(hostsFilePath, content, os.ModePerm)
err = os.WriteFile(hostsFilePath, content, os.ModePerm)
assert.NoError(t, err)
_, err = loadHostListFromFile(hostsFilePath)
@@ -173,7 +272,7 @@ func TestLoadHostListFromFile(t *testing.T) {
asJSON, err := json.Marshal(hl)
assert.NoError(t, err)
err = ioutil.WriteFile(hostsFilePath, asJSON, os.ModePerm)
err = os.WriteFile(hostsFilePath, asJSON, os.ModePerm)
assert.NoError(t, err)
hostList, err := loadHostListFromFile(hostsFilePath)
@@ -183,7 +282,7 @@ func TestLoadHostListFromFile(t *testing.T) {
hl.IPAddresses = append(hl.IPAddresses, "invalidip")
asJSON, err = json.Marshal(hl)
assert.NoError(t, err)
err = ioutil.WriteFile(hostsFilePath, asJSON, os.ModePerm)
err = os.WriteFile(hostsFilePath, asJSON, os.ModePerm)
assert.NoError(t, err)
hostList, err = loadHostListFromFile(hostsFilePath)
@@ -195,7 +294,7 @@ func TestLoadHostListFromFile(t *testing.T) {
asJSON, err = json.Marshal(hl)
assert.NoError(t, err)
err = ioutil.WriteFile(hostsFilePath, asJSON, os.ModePerm)
err = os.WriteFile(hostsFilePath, asJSON, os.ModePerm)
assert.NoError(t, err)
hostList, err = loadHostListFromFile(hostsFilePath)
@@ -215,7 +314,7 @@ func TestLoadHostListFromFile(t *testing.T) {
assert.NoError(t, err)
}
err = ioutil.WriteFile(hostsFilePath, []byte("non json content"), os.ModePerm)
err = os.WriteFile(hostsFilePath, []byte("non json content"), os.ModePerm)
assert.NoError(t, err)
_, err = loadHostListFromFile(hostsFilePath)
assert.Error(t, err)
@@ -316,6 +415,11 @@ func TestDefenderConfig(t *testing.T) {
require.Error(t, err)
c.ScoreInvalid = 2
c.ScoreLimitExceeded = 10
err = c.validate()
require.Error(t, err)
c.ScoreLimitExceeded = 2
c.ScoreValid = 10
err = c.validate()
require.Error(t, err)


@@ -1,7 +1,6 @@
package common
import (
"io/ioutil"
"os"
"path/filepath"
"runtime"
@@ -20,7 +19,7 @@ func TestBasicAuth(t *testing.T) {
authUserFile := filepath.Join(os.TempDir(), "http_users.txt")
authUserData := []byte("test1:$2y$05$bcHSED7aO1cfLto6ZdDBOOKzlwftslVhtpIkRhAtSa4GuLmk5mola\n")
err = ioutil.WriteFile(authUserFile, authUserData, os.ModePerm)
err = os.WriteFile(authUserFile, authUserData, os.ModePerm)
require.NoError(t, err)
httpAuth, err = NewBasicAuthProvider(authUserFile)
@@ -31,30 +30,30 @@ func TestBasicAuth(t *testing.T) {
require.True(t, httpAuth.ValidateCredentials("test1", "password1"))
authUserData = append(authUserData, []byte("test2:$1$OtSSTL8b$bmaCqEksI1e7rnZSjsIDR1\n")...)
err = ioutil.WriteFile(authUserFile, authUserData, os.ModePerm)
err = os.WriteFile(authUserFile, authUserData, os.ModePerm)
require.NoError(t, err)
require.False(t, httpAuth.ValidateCredentials("test2", "wrong2"))
require.True(t, httpAuth.ValidateCredentials("test2", "password2"))
authUserData = append(authUserData, []byte("test2:$apr1$gLnIkRIf$Xr/6aJfmIrihP4b2N2tcs/\n")...)
err = ioutil.WriteFile(authUserFile, authUserData, os.ModePerm)
err = os.WriteFile(authUserFile, authUserData, os.ModePerm)
require.NoError(t, err)
require.False(t, httpAuth.ValidateCredentials("test2", "wrong2"))
require.True(t, httpAuth.ValidateCredentials("test2", "password2"))
authUserData = append(authUserData, []byte("test3:$apr1$gLnIkRIf$Xr/6aJfmIrihP4b2N2tcs/\n")...)
err = ioutil.WriteFile(authUserFile, authUserData, os.ModePerm)
err = os.WriteFile(authUserFile, authUserData, os.ModePerm)
require.NoError(t, err)
require.False(t, httpAuth.ValidateCredentials("test3", "password3"))
authUserData = append(authUserData, []byte("test4:$invalid$gLnIkRIf$Xr/6$aJfmIr$ihP4b2N2tcs/\n")...)
err = ioutil.WriteFile(authUserFile, authUserData, os.ModePerm)
err = os.WriteFile(authUserFile, authUserData, os.ModePerm)
require.NoError(t, err)
require.False(t, httpAuth.ValidateCredentials("test4", "password3"))
if runtime.GOOS != "windows" {
authUserData = append(authUserData, []byte("test5:$apr1$gLnIkRIf$Xr/6aJfmIrihP4b2N2tcs/\n")...)
err = ioutil.WriteFile(authUserFile, authUserData, os.ModePerm)
err = os.WriteFile(authUserFile, authUserData, os.ModePerm)
require.NoError(t, err)
err = os.Chmod(authUserFile, 0001)
require.NoError(t, err)
@@ -63,7 +62,7 @@ func TestBasicAuth(t *testing.T) {
require.NoError(t, err)
}
authUserData = append(authUserData, []byte("\"foo\"bar\"\r\n")...)
err = ioutil.WriteFile(authUserFile, authUserData, os.ModePerm)
err = os.WriteFile(authUserFile, authUserData, os.ModePerm)
require.NoError(t, err)
require.False(t, httpAuth.ValidateCredentials("test2", "password2"))

common/protocol_test.go (new file, 2773 lines)

File diff suppressed because it is too large.

common/ratelimiter.go (new file, 229 lines)

@@ -0,0 +1,229 @@
package common
import (
"errors"
"fmt"
"sort"
"sync"
"sync/atomic"
"time"
"golang.org/x/time/rate"
"github.com/drakkan/sftpgo/utils"
)
var (
errNoBucket = errors.New("no bucket found")
errReserve = errors.New("unable to reserve token")
rateLimiterProtocolValues = []string{ProtocolSSH, ProtocolFTP, ProtocolWebDAV, ProtocolHTTP}
)
// RateLimiterType defines the supported rate limiters types
type RateLimiterType int
// Supported rate limiter types
const (
rateLimiterTypeGlobal RateLimiterType = iota + 1
rateLimiterTypeSource
)
// RateLimiterConfig defines the configuration for a rate limiter
type RateLimiterConfig struct {
// Average defines the maximum rate allowed. 0 means disabled
Average int64 `json:"average" mapstructure:"average"`
// Period defines the period as milliseconds. Default: 1000 (1 second).
// The rate is actually defined by dividing average by period.
// So for a rate below 1 req/s, one needs to define a period larger than a second.
Period int64 `json:"period" mapstructure:"period"`
// Burst is the maximum number of requests allowed to go through in the
// same arbitrarily small period of time. Default: 1.
Burst int `json:"burst" mapstructure:"burst"`
// Type defines the rate limiter type:
// - rateLimiterTypeGlobal is a global rate limiter independent from the source
// - rateLimiterTypeSource is a per-source rate limiter
Type int `json:"type" mapstructure:"type"`
// Protocols defines the protocols for this rate limiter.
// Available protocols are: "SFTP", "FTP", "DAV".
// A rate limiter with no protocols defined is disabled
Protocols []string `json:"protocols" mapstructure:"protocols"`
// If the rate limit is exceeded while the defender is enabled and this is a per-source limiter,
// a new defender event will be generated
GenerateDefenderEvents bool `json:"generate_defender_events" mapstructure:"generate_defender_events"`
// The number of per-ip rate limiters kept in memory will vary between the
// soft and hard limit
EntriesSoftLimit int `json:"entries_soft_limit" mapstructure:"entries_soft_limit"`
EntriesHardLimit int `json:"entries_hard_limit" mapstructure:"entries_hard_limit"`
}
func (r *RateLimiterConfig) isEnabled() bool {
return r.Average > 0 && len(r.Protocols) > 0
}
func (r *RateLimiterConfig) validate() error {
if r.Burst < 1 {
return fmt.Errorf("invalid burst %v. It must be >= 1", r.Burst)
}
if r.Period < 100 {
return fmt.Errorf("invalid period %v. It must be >= 100", r.Period)
}
if r.Type != int(rateLimiterTypeGlobal) && r.Type != int(rateLimiterTypeSource) {
return fmt.Errorf("invalid type %v", r.Type)
}
if r.Type != int(rateLimiterTypeGlobal) {
if r.EntriesSoftLimit <= 0 {
return fmt.Errorf("invalid entries_soft_limit %v", r.EntriesSoftLimit)
}
if r.EntriesHardLimit <= r.EntriesSoftLimit {
return fmt.Errorf("invalid entries_hard_limit %v must be > %v", r.EntriesHardLimit, r.EntriesSoftLimit)
}
}
r.Protocols = utils.RemoveDuplicates(r.Protocols)
for _, protocol := range r.Protocols {
if !utils.IsStringInSlice(protocol, rateLimiterProtocolValues) {
return fmt.Errorf("invalid protocol %#v", protocol)
}
}
return nil
}
func (r *RateLimiterConfig) getLimiter() *rateLimiter {
limiter := &rateLimiter{
burst: r.Burst,
globalBucket: nil,
generateDefenderEvents: r.GenerateDefenderEvents,
}
var maxDelay time.Duration
period := time.Duration(r.Period) * time.Millisecond
rtl := float64(r.Average*int64(time.Second)) / float64(period)
limiter.rate = rate.Limit(rtl)
if rtl < 1 {
maxDelay = period / 2
} else {
maxDelay = time.Second / (time.Duration(rtl) * 2)
}
if maxDelay > 10*time.Second {
maxDelay = 10 * time.Second
}
limiter.maxDelay = maxDelay
limiter.buckets = sourceBuckets{
buckets: make(map[string]sourceRateLimiter),
hardLimit: r.EntriesHardLimit,
softLimit: r.EntriesSoftLimit,
}
if r.Type != int(rateLimiterTypeSource) {
limiter.globalBucket = rate.NewLimiter(limiter.rate, limiter.burst)
}
return limiter
}
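// Editor's note, a worked example of the maxDelay computation above
// (illustrative, not part of the committed file). With Average = 1 and
// Period = 500, rtl = 1s/500ms = 2, so maxDelay = 1s/(2*2) = 250ms. With
// Average = 1 and Period = 10000, rtl = 0.1 < 1, so maxDelay = period/2 = 5s.
// Anything above 10s is capped to 10s. The unit tests below assert exactly
// these values.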
// rateLimiter defines a rate limiter
type rateLimiter struct {
rate rate.Limit
burst int
maxDelay time.Duration
globalBucket *rate.Limiter
buckets sourceBuckets
generateDefenderEvents bool
}
// Wait blocks until the limit allows one event to happen
// or returns an error if the time to wait exceeds the max
// allowed delay
func (rl *rateLimiter) Wait(source string) (time.Duration, error) {
var res *rate.Reservation
if rl.globalBucket != nil {
res = rl.globalBucket.Reserve()
} else {
var err error
res, err = rl.buckets.reserve(source)
if err != nil {
rateLimiter := rate.NewLimiter(rl.rate, rl.burst)
res = rl.buckets.addAndReserve(rateLimiter, source)
}
}
if !res.OK() {
return 0, errReserve
}
delay := res.Delay()
if delay > rl.maxDelay {
res.Cancel()
if rl.generateDefenderEvents && rl.globalBucket == nil {
AddDefenderEvent(source, HostEventLimitExceeded)
}
return delay, fmt.Errorf("rate limit exceed, wait time to respect rate %v, max wait time allowed %v", delay, rl.maxDelay)
}
time.Sleep(delay)
return 0, nil
}
type sourceRateLimiter struct {
lastActivity int64
bucket *rate.Limiter
}
func (s *sourceRateLimiter) updateLastActivity() {
atomic.StoreInt64(&s.lastActivity, time.Now().UnixNano())
}
func (s *sourceRateLimiter) getLastActivity() int64 {
return atomic.LoadInt64(&s.lastActivity)
}
type sourceBuckets struct {
sync.RWMutex
buckets map[string]sourceRateLimiter
hardLimit int
softLimit int
}
func (b *sourceBuckets) reserve(source string) (*rate.Reservation, error) {
b.RLock()
defer b.RUnlock()
if src, ok := b.buckets[source]; ok {
src.updateLastActivity()
return src.bucket.Reserve(), nil
}
return nil, errNoBucket
}
func (b *sourceBuckets) addAndReserve(r *rate.Limiter, source string) *rate.Reservation {
b.Lock()
defer b.Unlock()
b.cleanup()
src := sourceRateLimiter{
bucket: r,
}
src.updateLastActivity()
b.buckets[source] = src
return src.bucket.Reserve()
}
func (b *sourceBuckets) cleanup() {
if len(b.buckets) >= b.hardLimit {
numToRemove := len(b.buckets) - b.softLimit
kvList := make(kvList, 0, len(b.buckets))
for k, v := range b.buckets {
kvList = append(kvList, kv{
Key: k,
Value: v.getLastActivity(),
})
}
sort.Sort(kvList)
for idx, kv := range kvList {
if idx >= numToRemove {
break
}
delete(b.buckets, kv.Key)
}
}
}
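Editor's note: the following is a minimal usage sketch, not part of the changeset. It assumes in-package access (getLimiter, validate and the rateLimiter type are unexported) and a made-up source IP; it only illustrates how the pieces above fit together.

func exampleRateLimiterUsage() error {
	cfg := RateLimiterConfig{
		Average:          1,    // one request...
		Period:           1000, // ...per second (rate = Average / Period)
		Burst:            1,
		Type:             int(rateLimiterTypeSource),
		Protocols:        []string{ProtocolSSH},
		EntriesSoftLimit: 100,
		EntriesHardLimit: 150,
	}
	if err := cfg.validate(); err != nil {
		return err
	}
	limiter := cfg.getLimiter()
	// The first call for a new source is admitted immediately. A second,
	// immediate call from the same source would have to wait about one
	// second; when the required wait exceeds maxDelay, Wait cancels the
	// reservation and returns an error instead of sleeping (and, with
	// GenerateDefenderEvents set on a per-source limiter, it also records
	// a defender event).
	if delay, err := limiter.Wait("203.0.113.10"); err != nil {
		return fmt.Errorf("throttled, required delay %v: %w", delay, err)
	}
	return nil
}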

common/ratelimiter_test.go: new file, 135 lines

@@ -0,0 +1,135 @@
package common
import (
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestRateLimiterConfig(t *testing.T) {
config := RateLimiterConfig{}
err := config.validate()
require.Error(t, err)
config.Burst = 1
config.Period = 10
err = config.validate()
require.Error(t, err)
config.Period = 1000
config.Type = 100
err = config.validate()
require.Error(t, err)
config.Type = int(rateLimiterTypeSource)
config.EntriesSoftLimit = 0
err = config.validate()
require.Error(t, err)
config.EntriesSoftLimit = 150
config.EntriesHardLimit = 0
err = config.validate()
require.Error(t, err)
config.EntriesHardLimit = 200
config.Protocols = []string{"unsupported protocol"}
err = config.validate()
require.Error(t, err)
config.Protocols = rateLimiterProtocolValues
err = config.validate()
require.NoError(t, err)
limiter := config.getLimiter()
require.Equal(t, 500*time.Millisecond, limiter.maxDelay)
require.Nil(t, limiter.globalBucket)
config.Type = int(rateLimiterTypeGlobal)
config.Average = 1
config.Period = 10000
limiter = config.getLimiter()
require.Equal(t, 5*time.Second, limiter.maxDelay)
require.NotNil(t, limiter.globalBucket)
config.Period = 100000
limiter = config.getLimiter()
require.Equal(t, 10*time.Second, limiter.maxDelay)
config.Period = 500
config.Average = 1
limiter = config.getLimiter()
require.Equal(t, 250*time.Millisecond, limiter.maxDelay)
}
func TestRateLimiter(t *testing.T) {
config := RateLimiterConfig{
Average: 1,
Period: 1000,
Burst: 1,
Type: int(rateLimiterTypeGlobal),
Protocols: rateLimiterProtocolValues,
}
limiter := config.getLimiter()
_, err := limiter.Wait("")
require.NoError(t, err)
_, err = limiter.Wait("")
require.Error(t, err)
config.Type = int(rateLimiterTypeSource)
config.GenerateDefenderEvents = true
config.EntriesSoftLimit = 5
config.EntriesHardLimit = 10
limiter = config.getLimiter()
source := "192.168.1.2"
_, err = limiter.Wait(source)
require.NoError(t, err)
_, err = limiter.Wait(source)
require.Error(t, err)
// a different source should work
_, err = limiter.Wait(source + "1")
require.NoError(t, err)
config.Burst = 0
limiter = config.getLimiter()
_, err = limiter.Wait(source)
require.ErrorIs(t, err, errReserve)
}
func TestLimiterCleanup(t *testing.T) {
config := RateLimiterConfig{
Average: 100,
Period: 1000,
Burst: 1,
Type: int(rateLimiterTypeSource),
Protocols: rateLimiterProtocolValues,
EntriesSoftLimit: 1,
EntriesHardLimit: 3,
}
limiter := config.getLimiter()
source1 := "10.8.0.1"
source2 := "10.8.0.2"
source3 := "10.8.0.3"
source4 := "10.8.0.4"
_, err := limiter.Wait(source1)
assert.NoError(t, err)
time.Sleep(20 * time.Millisecond)
_, err = limiter.Wait(source2)
assert.NoError(t, err)
time.Sleep(20 * time.Millisecond)
assert.Len(t, limiter.buckets.buckets, 2)
_, ok := limiter.buckets.buckets[source1]
assert.True(t, ok)
_, ok = limiter.buckets.buckets[source2]
assert.True(t, ok)
_, err = limiter.Wait(source3)
assert.NoError(t, err)
assert.Len(t, limiter.buckets.buckets, 3)
_, ok = limiter.buckets.buckets[source1]
assert.True(t, ok)
_, ok = limiter.buckets.buckets[source2]
assert.True(t, ok)
_, ok = limiter.buckets.buckets[source3]
assert.True(t, ok)
time.Sleep(20 * time.Millisecond)
_, err = limiter.Wait(source4)
assert.NoError(t, err)
assert.Len(t, limiter.buckets.buckets, 2)
_, ok = limiter.buckets.buckets[source3]
assert.True(t, ok)
_, ok = limiter.buckets.buckets[source4]
assert.True(t, ok)
}


@@ -5,7 +5,7 @@ import (
"crypto/x509"
"crypto/x509/pkix"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"sync"
"time"
@@ -104,7 +104,7 @@ func (m *CertManager) LoadCRLs() error {
if revocationList != "" && !filepath.IsAbs(revocationList) {
revocationList = filepath.Join(m.configDir, revocationList)
}
-crlBytes, err := ioutil.ReadFile(revocationList)
+crlBytes, err := os.ReadFile(revocationList)
if err != nil {
logger.Warn(m.logSender, "unable to read revocation list %#v", revocationList)
return err
@@ -151,7 +151,7 @@ func (m *CertManager) LoadRootCAs() error {
if rootCA != "" && !filepath.IsAbs(rootCA) {
rootCA = filepath.Join(m.configDir, rootCA)
}
-crt, err := ioutil.ReadFile(rootCA)
+crt, err := os.ReadFile(rootCA)
if err != nil {
return err
}


@@ -3,7 +3,6 @@ package common
import (
"crypto/tls"
"crypto/x509"
"io/ioutil"
"os"
"path/filepath"
"testing"
@@ -273,13 +272,13 @@ func TestLoadCertificate(t *testing.T) {
caCrlPath := filepath.Join(os.TempDir(), "testcrl.crt")
certPath := filepath.Join(os.TempDir(), "test.crt")
keyPath := filepath.Join(os.TempDir(), "test.key")
-err := ioutil.WriteFile(caCrtPath, []byte(caCRT), os.ModePerm)
+err := os.WriteFile(caCrtPath, []byte(caCRT), os.ModePerm)
assert.NoError(t, err)
-err = ioutil.WriteFile(caCrlPath, []byte(caCRL), os.ModePerm)
+err = os.WriteFile(caCrlPath, []byte(caCRL), os.ModePerm)
assert.NoError(t, err)
-err = ioutil.WriteFile(certPath, []byte(serverCert), os.ModePerm)
+err = os.WriteFile(certPath, []byte(serverCert), os.ModePerm)
assert.NoError(t, err)
-err = ioutil.WriteFile(keyPath, []byte(serverKey), os.ModePerm)
+err = os.WriteFile(keyPath, []byte(serverKey), os.ModePerm)
assert.NoError(t, err)
certManager, err := NewCertManager(certPath, keyPath, configDir, logSenderTest)
assert.NoError(t, err)


@@ -20,46 +20,48 @@ var (
// BaseTransfer contains the transfer details common to all protocols, for an upload or a download.
type BaseTransfer struct { //nolint:maligned
ID uint64
BytesSent int64
BytesReceived int64
Fs vfs.Fs
File vfs.File
Connection *BaseConnection
cancelFn func()
fsPath string
+effectiveFsPath string
requestPath string
start time.Time
MaxWriteSize int64
MinWriteOffset int64
InitialSize int64
isNewFile bool
transferType int
AbortTransfer int32
sync.Mutex
ErrTransfer error
}
// NewBaseTransfer returns a new BaseTransfer and adds it to the given connection
-func NewBaseTransfer(file vfs.File, conn *BaseConnection, cancelFn func(), fsPath, requestPath string, transferType int,
-minWriteOffset, initialSize, maxWriteSize int64, isNewFile bool, fs vfs.Fs) *BaseTransfer {
+func NewBaseTransfer(file vfs.File, conn *BaseConnection, cancelFn func(), fsPath, effectiveFsPath, requestPath string,
+transferType int, minWriteOffset, initialSize, maxWriteSize int64, isNewFile bool, fs vfs.Fs) *BaseTransfer {
t := &BaseTransfer{
ID: conn.GetTransferID(),
File: file,
Connection: conn,
cancelFn: cancelFn,
fsPath: fsPath,
+effectiveFsPath: effectiveFsPath,
start: time.Now(),
transferType: transferType,
MinWriteOffset: minWriteOffset,
InitialSize: initialSize,
isNewFile: isNewFile,
requestPath: requestPath,
BytesSent: 0,
BytesReceived: 0,
MaxWriteSize: maxWriteSize,
AbortTransfer: 0,
Fs: fs,
}
conn.AddTransfer(t)
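// Editor's note (added for clarity, not part of the committed file):
// effectiveFsPath is the filesystem path actually being written, while fsPath
// is the final target. The two differ for atomic uploads, where data goes to a
// temporary file first; as the Close() changes below show, the temporary file
// is then renamed to fsPath on success or removed on error.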
@@ -148,9 +150,12 @@ func (t *BaseTransfer) Truncate(fsPath string, size int64) (int64, error) {
}
if size == 0 && atomic.LoadInt64(&t.BytesSent) == 0 {
// for cloud providers the file is always truncated to zero, we don't support append/resume for uploads
-return 0, nil
+// for buffered SFTP we can have buffered bytes so we return an error
+if !vfs.IsBufferedSFTPFs(t.Fs) {
+return 0, nil
+}
}
-return 0, ErrOpUnsupported
+return 0, vfs.ErrVfsUnsupported
}
return 0, errTransferMismatch
}
@@ -180,7 +185,7 @@ func (t *BaseTransfer) getUploadFileSize() (int64, error) {
fileSize = info.Size()
}
if vfs.IsCryptOsFs(t.Fs) && t.ErrTransfer != nil {
-errDelete := t.Connection.Fs.Remove(t.fsPath, false)
+errDelete := t.Fs.Remove(t.fsPath, false)
if errDelete != nil {
t.Connection.Log(logger.LevelWarn, "error removing partial crypto file %#v: %v", t.fsPath, errDelete)
}
@@ -204,7 +209,7 @@ func (t *BaseTransfer) Close() error {
metrics.TransferCompleted(atomic.LoadInt64(&t.BytesSent), atomic.LoadInt64(&t.BytesReceived), t.transferType, t.ErrTransfer)
if t.ErrTransfer == ErrQuotaExceeded && t.File != nil {
// if quota is exceeded we try to remove the partial file for uploads to local filesystem
-err = t.Connection.Fs.Remove(t.File.Name(), false)
+err = t.Fs.Remove(t.File.Name(), false)
if err == nil {
numFiles--
atomic.StoreInt64(&t.BytesReceived, 0)
@@ -212,15 +217,15 @@ func (t *BaseTransfer) Close() error {
}
t.Connection.Log(logger.LevelWarn, "upload denied due to space limit, delete temporary file: %#v, deletion error: %v",
t.File.Name(), err)
-} else if t.transferType == TransferUpload && t.File != nil && t.File.Name() != t.fsPath {
+} else if t.transferType == TransferUpload && t.effectiveFsPath != t.fsPath {
if t.ErrTransfer == nil || Config.UploadMode == UploadModeAtomicWithResume {
-err = t.Connection.Fs.Rename(t.File.Name(), t.fsPath)
+err = t.Fs.Rename(t.effectiveFsPath, t.fsPath)
t.Connection.Log(logger.LevelDebug, "atomic upload completed, rename: %#v -> %#v, error: %v",
-t.File.Name(), t.fsPath, err)
+t.effectiveFsPath, t.fsPath, err)
} else {
-err = t.Connection.Fs.Remove(t.File.Name(), false)
+err = t.Fs.Remove(t.effectiveFsPath, false)
t.Connection.Log(logger.LevelWarn, "atomic upload completed with error: \"%v\", delete temporary file: %#v, "+
-"deletion error: %v", t.ErrTransfer, t.File.Name(), err)
+"deletion error: %v", t.ErrTransfer, t.effectiveFsPath, err)
if err == nil {
numFiles--
atomic.StoreInt64(&t.BytesReceived, 0)
@@ -231,10 +236,9 @@ func (t *BaseTransfer) Close() error {
elapsed := time.Since(t.start).Nanoseconds() / 1000000
if t.transferType == TransferDownload {
logger.TransferLog(downloadLogSender, t.fsPath, elapsed, atomic.LoadInt64(&t.BytesSent), t.Connection.User.Username,
-t.Connection.ID, t.Connection.protocol)
-action := newActionNotification(&t.Connection.User, operationDownload, t.fsPath, "", "", t.Connection.protocol,
+t.Connection.ID, t.Connection.protocol, t.Connection.remoteAddr)
+ExecuteActionNotification(&t.Connection.User, operationDownload, t.fsPath, t.requestPath, "", "", t.Connection.protocol,
atomic.LoadInt64(&t.BytesSent), t.ErrTransfer)
-go actionHandler.Handle(action) //nolint:errcheck
} else {
fileSize := atomic.LoadInt64(&t.BytesReceived) + t.MinWriteOffset
if statSize, err := t.getUploadFileSize(); err == nil {
@@ -243,10 +247,9 @@ func (t *BaseTransfer) Close() error {
t.Connection.Log(logger.LevelDebug, "uploaded file size %v", fileSize)
t.updateQuota(numFiles, fileSize)
logger.TransferLog(uploadLogSender, t.fsPath, elapsed, atomic.LoadInt64(&t.BytesReceived), t.Connection.User.Username,
-t.Connection.ID, t.Connection.protocol)
-action := newActionNotification(&t.Connection.User, operationUpload, t.fsPath, "", "", t.Connection.protocol,
-fileSize, t.ErrTransfer)
-go actionHandler.Handle(action) //nolint:errcheck
+t.Connection.ID, t.Connection.protocol, t.Connection.remoteAddr)
+ExecuteActionNotification(&t.Connection.User, operationUpload, t.fsPath, t.requestPath, "", "", t.Connection.protocol, fileSize,
+t.ErrTransfer)
}
if t.ErrTransfer != nil {
t.Connection.Log(logger.LevelWarn, "transfer error: %v, path: %#v", t.ErrTransfer, t.fsPath)


@@ -2,7 +2,6 @@ package common
import (
"errors"
"io/ioutil"
"os"
"path/filepath"
"testing"
@@ -17,12 +16,12 @@ import (
)
func TestTransferUpdateQuota(t *testing.T) {
conn := NewBaseConnection("", ProtocolSFTP, dataprovider.User{}, nil)
conn := NewBaseConnection("", ProtocolSFTP, "", dataprovider.User{})
transfer := BaseTransfer{
Connection: conn,
transferType: TransferUpload,
BytesReceived: 123,
Fs: vfs.NewOsFs("", os.TempDir(), nil),
Fs: vfs.NewOsFs("", os.TempDir(), ""),
}
errFake := errors.New("fake error")
transfer.TransferError(errFake)
@@ -55,15 +54,15 @@ func TestTransferThrottling(t *testing.T) {
UploadBandwidth: 50,
DownloadBandwidth: 40,
}
fs := vfs.NewOsFs("", os.TempDir(), nil)
fs := vfs.NewOsFs("", os.TempDir(), "")
testFileSize := int64(131072)
wantedUploadElapsed := 1000 * (testFileSize / 1024) / u.UploadBandwidth
wantedDownloadElapsed := 1000 * (testFileSize / 1024) / u.DownloadBandwidth
// some tolerance
wantedUploadElapsed -= wantedDownloadElapsed / 10
wantedDownloadElapsed -= wantedDownloadElapsed / 10
conn := NewBaseConnection("id", ProtocolSCP, u, nil)
transfer := NewBaseTransfer(nil, conn, nil, "", "", TransferUpload, 0, 0, 0, true, fs)
conn := NewBaseConnection("id", ProtocolSCP, "", u)
transfer := NewBaseTransfer(nil, conn, nil, "", "", "", TransferUpload, 0, 0, 0, true, fs)
transfer.BytesReceived = testFileSize
transfer.Connection.UpdateLastActivity()
startTime := transfer.Connection.GetLastActivity()
@@ -73,7 +72,7 @@ func TestTransferThrottling(t *testing.T) {
err := transfer.Close()
assert.NoError(t, err)
transfer = NewBaseTransfer(nil, conn, nil, "", "", TransferDownload, 0, 0, 0, true, fs)
transfer = NewBaseTransfer(nil, conn, nil, "", "", "", TransferDownload, 0, 0, 0, true, fs)
transfer.BytesSent = testFileSize
transfer.Connection.UpdateLastActivity()
startTime = transfer.Connection.GetLastActivity()
@@ -87,7 +86,7 @@ func TestTransferThrottling(t *testing.T) {
func TestRealPath(t *testing.T) {
testFile := filepath.Join(os.TempDir(), "afile.txt")
fs := vfs.NewOsFs("123", os.TempDir(), nil)
fs := vfs.NewOsFs("123", os.TempDir(), "")
u := dataprovider.User{
Username: "user",
HomeDir: os.TempDir(),
@@ -96,8 +95,8 @@ func TestRealPath(t *testing.T) {
u.Permissions["/"] = []string{dataprovider.PermAny}
file, err := os.Create(testFile)
require.NoError(t, err)
-conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, u, fs)
-transfer := NewBaseTransfer(file, conn, nil, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
+conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, "", u)
+transfer := NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
rPath := transfer.GetRealFsPath(testFile)
assert.Equal(t, testFile, rPath)
rPath = conn.getRealFsPath(testFile)
@@ -118,7 +117,7 @@ func TestRealPath(t *testing.T) {
func TestTruncate(t *testing.T) {
testFile := filepath.Join(os.TempDir(), "transfer_test_file")
fs := vfs.NewOsFs("123", os.TempDir(), nil)
fs := vfs.NewOsFs("123", os.TempDir(), "")
u := dataprovider.User{
Username: "user",
HomeDir: os.TempDir(),
@@ -131,10 +130,10 @@ func TestTruncate(t *testing.T) {
}
_, err = file.Write([]byte("hello"))
assert.NoError(t, err)
-conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, u, fs)
-transfer := NewBaseTransfer(file, conn, nil, testFile, "/transfer_test_file", TransferUpload, 0, 5, 100, false, fs)
+conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, "", u)
+transfer := NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 5, 100, false, fs)
-err = conn.SetStat(testFile, "/transfer_test_file", &StatAttributes{
+err = conn.SetStat("/transfer_test_file", &StatAttributes{
Size: 2,
Flags: StatAttrSize,
})
@@ -149,9 +148,9 @@ func TestTruncate(t *testing.T) {
assert.Equal(t, int64(2), fi.Size())
}
transfer = NewBaseTransfer(file, conn, nil, testFile, "/transfer_test_file", TransferUpload, 0, 0, 100, true, fs)
transfer = NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 0, 100, true, fs)
// file.Stat will fail on a closed file
-err = conn.SetStat(testFile, "/transfer_test_file", &StatAttributes{
+err = conn.SetStat("/transfer_test_file", &StatAttributes{
Size: 2,
Flags: StatAttrSize,
})
@@ -159,13 +158,13 @@ func TestTruncate(t *testing.T) {
err = transfer.Close()
assert.NoError(t, err)
transfer = NewBaseTransfer(nil, conn, nil, testFile, "", TransferUpload, 0, 0, 0, true, fs)
transfer = NewBaseTransfer(nil, conn, nil, testFile, testFile, "", TransferUpload, 0, 0, 0, true, fs)
_, err = transfer.Truncate("mismatch", 0)
assert.EqualError(t, err, errTransferMismatch.Error())
_, err = transfer.Truncate(testFile, 0)
assert.NoError(t, err)
_, err = transfer.Truncate(testFile, 1)
-assert.EqualError(t, err, ErrOpUnsupported.Error())
+assert.EqualError(t, err, vfs.ErrVfsUnsupported.Error())
err = transfer.Close()
assert.NoError(t, err)
@@ -182,19 +181,19 @@ func TestTransferErrors(t *testing.T) {
isCancelled = true
}
testFile := filepath.Join(os.TempDir(), "transfer_test_file")
fs := vfs.NewOsFs("id", os.TempDir(), nil)
fs := vfs.NewOsFs("id", os.TempDir(), "")
u := dataprovider.User{
Username: "test",
HomeDir: os.TempDir(),
}
-err := ioutil.WriteFile(testFile, []byte("test data"), os.ModePerm)
+err := os.WriteFile(testFile, []byte("test data"), os.ModePerm)
assert.NoError(t, err)
file, err := os.Open(testFile)
if !assert.NoError(t, err) {
assert.FailNow(t, "unable to open test file")
}
conn := NewBaseConnection("id", ProtocolSFTP, u, fs)
transfer := NewBaseTransfer(file, conn, nil, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
conn := NewBaseConnection("id", ProtocolSFTP, "", u)
transfer := NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
assert.Nil(t, transfer.cancelFn)
assert.Equal(t, testFile, transfer.GetFsPath())
transfer.SetCancelFn(cancelFn)
@@ -213,14 +212,14 @@ func TestTransferErrors(t *testing.T) {
}
assert.NoFileExists(t, testFile)
-err = ioutil.WriteFile(testFile, []byte("test data"), os.ModePerm)
+err = os.WriteFile(testFile, []byte("test data"), os.ModePerm)
assert.NoError(t, err)
file, err = os.Open(testFile)
if !assert.NoError(t, err) {
assert.FailNow(t, "unable to open test file")
}
fsPath := filepath.Join(os.TempDir(), "test_file")
-transfer = NewBaseTransfer(file, conn, nil, fsPath, "/test_file", TransferUpload, 0, 0, 0, true, fs)
+transfer = NewBaseTransfer(file, conn, nil, fsPath, file.Name(), "/test_file", TransferUpload, 0, 0, 0, true, fs)
transfer.BytesReceived = 9
transfer.TransferError(errFake)
assert.Error(t, transfer.ErrTransfer, errFake.Error())
@@ -233,13 +232,13 @@ func TestTransferErrors(t *testing.T) {
}
assert.NoFileExists(t, testFile)
-err = ioutil.WriteFile(testFile, []byte("test data"), os.ModePerm)
+err = os.WriteFile(testFile, []byte("test data"), os.ModePerm)
assert.NoError(t, err)
file, err = os.Open(testFile)
if !assert.NoError(t, err) {
assert.FailNow(t, "unable to open test file")
}
-transfer = NewBaseTransfer(file, conn, nil, fsPath, "/test_file", TransferUpload, 0, 0, 0, true, fs)
+transfer = NewBaseTransfer(file, conn, nil, fsPath, file.Name(), "/test_file", TransferUpload, 0, 0, 0, true, fs)
transfer.BytesReceived = 9
// the file is closed by the embedding struct before Close is called
err = file.Close()
@@ -256,18 +255,18 @@ func TestTransferErrors(t *testing.T) {
func TestRemovePartialCryptoFile(t *testing.T) {
testFile := filepath.Join(os.TempDir(), "transfer_test_file")
fs, err := vfs.NewCryptFs("id", os.TempDir(), vfs.CryptFsConfig{Passphrase: kms.NewPlainSecret("secret")})
fs, err := vfs.NewCryptFs("id", os.TempDir(), "", vfs.CryptFsConfig{Passphrase: kms.NewPlainSecret("secret")})
require.NoError(t, err)
u := dataprovider.User{
Username: "test",
HomeDir: os.TempDir(),
}
-conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, u, fs)
-transfer := NewBaseTransfer(nil, conn, nil, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
+conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, "", u)
+transfer := NewBaseTransfer(nil, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
transfer.ErrTransfer = errors.New("test error")
_, err = transfer.getUploadFileSize()
assert.Error(t, err)
-err = ioutil.WriteFile(testFile, []byte("test data"), os.ModePerm)
+err = os.WriteFile(testFile, []byte("test data"), os.ModePerm)
assert.NoError(t, err)
size, err := transfer.getUploadFileSize()
assert.NoError(t, err)


@@ -59,14 +59,28 @@ var (
EnableHTTPS: false,
ClientAuthType: 0,
TLSCipherSuites: nil,
Prefix: "",
ProxyAllowed: nil,
}
defaultHTTPDBinding = httpd.Binding{
Address: "127.0.0.1",
Port: 8080,
EnableWebAdmin: true,
EnableWebClient: true,
EnableHTTPS: false,
ClientAuthType: 0,
TLSCipherSuites: nil,
ProxyAllowed: nil,
}
defaultRateLimiter = common.RateLimiterConfig{
Average: 0,
Period: 1000,
Burst: 1,
Type: 2,
Protocols: []string{common.ProtocolSSH, common.ProtocolFTP, common.ProtocolWebDAV, common.ProtocolHTTP},
GenerateDefenderEvents: false,
EntriesSoftLimit: 100,
EntriesHardLimit: 150,
}
)
@@ -96,27 +110,32 @@ func Init() {
IdleTimeout: 15,
UploadMode: 0,
Actions: common.ProtocolActions{
ExecuteOn: []string{},
+ExecuteSync: []string{},
Hook: "",
},
SetstatMode: 0,
+TempPath: "",
ProxyProtocol: 0,
ProxyAllowed: []string{},
PostConnectHook: "",
MaxTotalConnections: 0,
+MaxPerHostConnections: 20,
DefenderConfig: common.DefenderConfig{
Enabled: false,
BanTime: 30,
BanTimeIncrement: 50,
Threshold: 15,
ScoreInvalid: 2,
ScoreValid: 1,
+ScoreLimitExceeded: 3,
ObservationTime: 30,
EntriesSoftLimit: 100,
EntriesHardLimit: 150,
SafeListFile: "",
BlockListFile: "",
},
+RateLimitersConfig: []common.RateLimiterConfig{defaultRateLimiter},
},
SFTPD: sftpd.Configuration{
Banner: defaultSFTPDBanner,
@@ -128,7 +147,7 @@ func Init() {
MACs: []string{},
TrustedUserCAKeys: []string{},
LoginBannerFile: "",
-EnabledSSHCommands: sftpd.GetDefaultSSHCommands(),
+EnabledSSHCommands: []string{},
KeyboardInteractiveHook: "",
PasswordAuthentication: true,
},
@@ -180,7 +199,7 @@ func Init() {
Driver: "sqlite",
Name: "sftpgo.db",
Host: "",
-Port: 5432,
+Port: 0,
Username: "",
Password: "",
ConnectionString: "",
@@ -207,18 +226,28 @@ func Init() {
Iterations: 1,
Parallelism: 2,
},
BcryptOptions: dataprovider.BcryptOptions{
Cost: 10,
},
Algo: dataprovider.HashingAlgoBcrypt,
},
PasswordCaching: true,
UpdateMode: 0,
PreferDatabaseCredentials: false,
SkipNaturalKeysValidation: false,
DelayedQuotaUpdate: 0,
CreateDefaultAdmin: false,
},
HTTPDConfig: httpd.Conf{
Bindings: []httpd.Binding{defaultHTTPDBinding},
TemplatesPath: "templates",
StaticFilesPath: "static",
BackupsPath: "backups",
WebRoot: "",
CertificateFile: "",
CertificateKeyFile: "",
CACertificates: nil,
CARevocationLists: nil,
},
HTTPConfig: httpclient.Config{
Timeout: 20,
@@ -228,6 +257,7 @@ func Init() {
CACertificates: nil,
Certificates: nil,
SkipTLSVerify: false,
Headers: nil,
},
KMSConfig: kms.Configuration{
Secrets: kms.Secrets{
@@ -357,7 +387,16 @@ func HasServicesToStart() bool {
func getRedactedGlobalConf() globalConfig {
conf := globalConf
conf.Common.Actions.Hook = utils.GetRedactedURL(conf.Common.Actions.Hook)
conf.Common.StartupHook = utils.GetRedactedURL(conf.Common.StartupHook)
conf.Common.PostConnectHook = utils.GetRedactedURL(conf.Common.PostConnectHook)
conf.SFTPD.KeyboardInteractiveHook = utils.GetRedactedURL(conf.SFTPD.KeyboardInteractiveHook)
conf.ProviderConf.Password = "[redacted]"
conf.ProviderConf.Actions.Hook = utils.GetRedactedURL(conf.ProviderConf.Actions.Hook)
conf.ProviderConf.ExternalAuthHook = utils.GetRedactedURL(conf.ProviderConf.ExternalAuthHook)
conf.ProviderConf.PreLoginHook = utils.GetRedactedURL(conf.ProviderConf.PreLoginHook)
conf.ProviderConf.PostLoginHook = utils.GetRedactedURL(conf.ProviderConf.PostLoginHook)
conf.ProviderConf.CheckPasswordHook = utils.GetRedactedURL(conf.ProviderConf.CheckPasswordHook)
return conf
}
@@ -402,7 +441,6 @@ func LoadConfig(configDir, configFile string) error {
}
// viper only supports slice of strings from env vars, so we use our custom method
loadBindingsFromEnv()
checkCommonParamsCompatibility()
if strings.TrimSpace(globalConf.SFTPD.Banner) == "" {
globalConf.SFTPD.Banner = defaultSFTPDBanner
}
@@ -429,7 +467,7 @@ func LoadConfig(configDir, configFile string) error {
logger.Warn(logSender, "", "Configuration error: %v", warn)
logger.WarnToConsole("Configuration error: %v", warn)
}
if globalConf.ProviderConf.ExternalAuthScope < 0 || globalConf.ProviderConf.ExternalAuthScope > 7 {
if globalConf.ProviderConf.ExternalAuthScope < 0 || globalConf.ProviderConf.ExternalAuthScope > 15 {
warn := fmt.Sprintf("invalid external_auth_scope: %v reset to 0", globalConf.ProviderConf.ExternalAuthScope)
globalConf.ProviderConf.ExternalAuthScope = 0
logger.Warn(logSender, "", "Configuration error: %v", warn)
@@ -441,53 +479,10 @@ func LoadConfig(configDir, configFile string) error {
logger.Warn(logSender, "", "Configuration error: %v", warn)
logger.WarnToConsole("Configuration error: %v", warn)
}
checkHostKeyCompatibility()
logger.Debug(logSender, "", "config file used: '%#v', config loaded: %+v", viper.ConfigFileUsed(), getRedactedGlobalConf())
return nil
}
func checkHostKeyCompatibility() {
// we copy deprecated fields to new ones to keep backward compatibility so lint is disabled
if len(globalConf.SFTPD.Keys) > 0 && len(globalConf.SFTPD.HostKeys) == 0 { //nolint:staticcheck
logger.Warn(logSender, "", "keys is deprecated, please use host_keys")
logger.WarnToConsole("keys is deprecated, please use host_keys")
for _, k := range globalConf.SFTPD.Keys { //nolint:staticcheck
globalConf.SFTPD.HostKeys = append(globalConf.SFTPD.HostKeys, k.PrivateKey)
}
}
}
func checkCommonParamsCompatibility() {
// we copy deprecated fields to new ones to keep backward compatibility so lint is disabled
if globalConf.SFTPD.IdleTimeout > 0 { //nolint:staticcheck
logger.Warn(logSender, "", "sftpd.idle_timeout is deprecated, please use common.idle_timeout")
logger.WarnToConsole("sftpd.idle_timeout is deprecated, please use common.idle_timeout")
globalConf.Common.IdleTimeout = globalConf.SFTPD.IdleTimeout //nolint:staticcheck
}
if globalConf.SFTPD.Actions.Hook != "" && len(globalConf.Common.Actions.Hook) == 0 { //nolint:staticcheck
logger.Warn(logSender, "", "sftpd.actions is deprecated, please use common.actions")
logger.WarnToConsole("sftpd.actions is deprecated, please use common.actions")
globalConf.Common.Actions.ExecuteOn = globalConf.SFTPD.Actions.ExecuteOn //nolint:staticcheck
globalConf.Common.Actions.Hook = globalConf.SFTPD.Actions.Hook //nolint:staticcheck
}
if globalConf.SFTPD.SetstatMode > 0 && globalConf.Common.SetstatMode == 0 { //nolint:staticcheck
logger.Warn(logSender, "", "sftpd.setstat_mode is deprecated, please use common.setstat_mode")
logger.WarnToConsole("sftpd.setstat_mode is deprecated, please use common.setstat_mode")
globalConf.Common.SetstatMode = globalConf.SFTPD.SetstatMode //nolint:staticcheck
}
if globalConf.SFTPD.UploadMode > 0 && globalConf.Common.UploadMode == 0 { //nolint:staticcheck
logger.Warn(logSender, "", "sftpd.upload_mode is deprecated, please use common.upload_mode")
logger.WarnToConsole("sftpd.upload_mode is deprecated, please use common.upload_mode")
globalConf.Common.UploadMode = globalConf.SFTPD.UploadMode //nolint:staticcheck
}
if globalConf.SFTPD.ProxyProtocol > 0 && globalConf.Common.ProxyProtocol == 0 { //nolint:staticcheck
logger.Warn(logSender, "", "sftpd.proxy_protocol is deprecated, please use common.proxy_protocol")
logger.WarnToConsole("sftpd.proxy_protocol is deprecated, please use common.proxy_protocol")
globalConf.Common.ProxyProtocol = globalConf.SFTPD.ProxyProtocol //nolint:staticcheck
globalConf.Common.ProxyAllowed = globalConf.SFTPD.ProxyAllowed //nolint:staticcheck
}
}
func checkSFTPDBindingsCompatibility() {
if globalConf.SFTPD.BindPort == 0 { //nolint:staticcheck
return
@@ -577,18 +572,86 @@ func loadBindingsFromEnv() {
checkWebDAVDBindingCompatibility()
checkHTTPDBindingCompatibility()
-maxBindings := make([]int, 10)
-for idx := range maxBindings {
+for idx := 0; idx < 10; idx++ {
+getRateLimitersFromEnv(idx)
getSFTPDBindindFromEnv(idx)
getFTPDBindingFromEnv(idx)
getWebDAVDBindingFromEnv(idx)
getHTTPDBindingFromEnv(idx)
getHTTPClientCertificatesFromEnv(idx)
+getHTTPClientHeadersFromEnv(idx)
}
}
func getRateLimitersFromEnv(idx int) {
rtlConfig := defaultRateLimiter
if len(globalConf.Common.RateLimitersConfig) > idx {
rtlConfig = globalConf.Common.RateLimitersConfig[idx]
}
isSet := false
average, ok := lookupIntFromEnv(fmt.Sprintf("SFTPGO_COMMON__RATE_LIMITERS__%v__AVERAGE", idx))
if ok {
rtlConfig.Average = average
isSet = true
}
period, ok := lookupIntFromEnv(fmt.Sprintf("SFTPGO_COMMON__RATE_LIMITERS__%v__PERIOD", idx))
if ok {
rtlConfig.Period = period
isSet = true
}
burst, ok := lookupIntFromEnv(fmt.Sprintf("SFTPGO_COMMON__RATE_LIMITERS__%v__BURST", idx))
if ok {
rtlConfig.Burst = int(burst)
isSet = true
}
rtlType, ok := lookupIntFromEnv(fmt.Sprintf("SFTPGO_COMMON__RATE_LIMITERS__%v__TYPE", idx))
if ok {
rtlConfig.Type = int(rtlType)
isSet = true
}
protocols, ok := lookupStringListFromEnv(fmt.Sprintf("SFTPGO_COMMON__RATE_LIMITERS__%v__PROTOCOLS", idx))
if ok {
rtlConfig.Protocols = protocols
isSet = true
}
generateEvents, ok := lookupBoolFromEnv(fmt.Sprintf("SFTPGO_COMMON__RATE_LIMITERS__%v__GENERATE_DEFENDER_EVENTS", idx))
if ok {
rtlConfig.GenerateDefenderEvents = generateEvents
isSet = true
}
softLimit, ok := lookupIntFromEnv(fmt.Sprintf("SFTPGO_COMMON__RATE_LIMITERS__%v__ENTRIES_SOFT_LIMIT", idx))
if ok {
rtlConfig.EntriesSoftLimit = int(softLimit)
isSet = true
}
hardLimit, ok := lookupIntFromEnv(fmt.Sprintf("SFTPGO_COMMON__RATE_LIMITERS__%v__ENTRIES_HARD_LIMIT", idx))
if ok {
rtlConfig.EntriesHardLimit = int(hardLimit)
isSet = true
}
if isSet {
if len(globalConf.Common.RateLimitersConfig) > idx {
globalConf.Common.RateLimitersConfig[idx] = rtlConfig
} else {
globalConf.Common.RateLimitersConfig = append(globalConf.Common.RateLimitersConfig, rtlConfig)
}
}
}
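// Editor's note, illustrative only (not part of the changeset): each indexed
// environment variable overrides one field of the rate limiter at that index,
// and unset fields keep the defaultRateLimiter values (Period 1000, Burst 1,
// per-source type, all protocols). The same pattern is exercised by
// TestRateLimitersFromEnv later in this compare. For example, to define a
// second, global limiter purely from the environment:
//
//	os.Setenv("SFTPGO_COMMON__RATE_LIMITERS__1__AVERAGE", "10")
//	os.Setenv("SFTPGO_COMMON__RATE_LIMITERS__1__TYPE", "1") // 1 = global, 2 = per-source
//	os.Setenv("SFTPGO_COMMON__RATE_LIMITERS__1__PROTOCOLS", "DAV, HTTP")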
func getSFTPDBindindFromEnv(idx int) {
-binding := sftpd.Binding{}
+binding := sftpd.Binding{
+ApplyProxyConfig: true,
+}
if len(globalConf.SFTPD.Bindings) > idx {
binding = globalConf.SFTPD.Bindings[idx]
}
@@ -597,7 +660,7 @@ func getSFTPDBindindFromEnv(idx int) {
port, ok := lookupIntFromEnv(fmt.Sprintf("SFTPGO_SFTPD__BINDINGS__%v__PORT", idx))
if ok {
-binding.Port = port
+binding.Port = int(port)
isSet = true
}
@@ -623,7 +686,9 @@ func getSFTPDBindindFromEnv(idx int) {
}
func getFTPDBindingFromEnv(idx int) {
-binding := ftpd.Binding{}
+binding := ftpd.Binding{
+ApplyProxyConfig: true,
+}
if len(globalConf.FTPD.Bindings) > idx {
binding = globalConf.FTPD.Bindings[idx]
}
@@ -632,7 +697,7 @@ func getFTPDBindingFromEnv(idx int) {
port, ok := lookupIntFromEnv(fmt.Sprintf("SFTPGO_FTPD__BINDINGS__%v__PORT", idx))
if ok {
-binding.Port = port
+binding.Port = int(port)
isSet = true
}
@@ -650,7 +715,7 @@ func getFTPDBindingFromEnv(idx int) {
tlsMode, ok := lookupIntFromEnv(fmt.Sprintf("SFTPGO_FTPD__BINDINGS__%v__TLS_MODE", idx))
if ok {
-binding.TLSMode = tlsMode
+binding.TLSMode = int(tlsMode)
isSet = true
}
@@ -662,7 +727,7 @@ func getFTPDBindingFromEnv(idx int) {
clientAuthType, ok := lookupIntFromEnv(fmt.Sprintf("SFTPGO_FTPD__BINDINGS__%v__CLIENT_AUTH_TYPE", idx))
if ok {
-binding.ClientAuthType = clientAuthType
+binding.ClientAuthType = int(clientAuthType)
isSet = true
}
@@ -691,7 +756,7 @@ func getWebDAVDBindingFromEnv(idx int) {
port, ok := lookupIntFromEnv(fmt.Sprintf("SFTPGO_WEBDAVD__BINDINGS__%v__PORT", idx))
if ok {
-binding.Port = port
+binding.Port = int(port)
isSet = true
}
@@ -709,7 +774,7 @@ func getWebDAVDBindingFromEnv(idx int) {
clientAuthType, ok := lookupIntFromEnv(fmt.Sprintf("SFTPGO_WEBDAVD__BINDINGS__%v__CLIENT_AUTH_TYPE", idx))
if ok {
-binding.ClientAuthType = clientAuthType
+binding.ClientAuthType = int(clientAuthType)
isSet = true
}
@@ -719,6 +784,18 @@ func getWebDAVDBindingFromEnv(idx int) {
isSet = true
}
proxyAllowed, ok := lookupStringListFromEnv(fmt.Sprintf("SFTPGO_WEBDAVD__BINDINGS__%v__PROXY_ALLOWED", idx))
if ok {
binding.ProxyAllowed = proxyAllowed
isSet = true
}
prefix, ok := os.LookupEnv(fmt.Sprintf("SFTPGO_WEBDAVD__BINDINGS__%v__PREFIX", idx))
if ok {
binding.Prefix = prefix
isSet = true
}
if isSet {
if len(globalConf.WebDAVD.Bindings) > idx {
globalConf.WebDAVD.Bindings[idx] = binding
@@ -729,7 +806,10 @@ func getWebDAVDBindingFromEnv(idx int) {
}
func getHTTPDBindingFromEnv(idx int) {
-binding := httpd.Binding{}
+binding := httpd.Binding{
+EnableWebAdmin: true,
+EnableWebClient: true,
+}
if len(globalConf.HTTPDConfig.Bindings) > idx {
binding = globalConf.HTTPDConfig.Bindings[idx]
}
@@ -738,7 +818,7 @@ func getHTTPDBindingFromEnv(idx int) {
port, ok := lookupIntFromEnv(fmt.Sprintf("SFTPGO_HTTPD__BINDINGS__%v__PORT", idx))
if ok {
-binding.Port = port
+binding.Port = int(port)
isSet = true
}
@@ -754,6 +834,12 @@ func getHTTPDBindingFromEnv(idx int) {
isSet = true
}
enableWebClient, ok := lookupBoolFromEnv(fmt.Sprintf("SFTPGO_HTTPD__BINDINGS__%v__ENABLE_WEB_CLIENT", idx))
if ok {
binding.EnableWebClient = enableWebClient
isSet = true
}
enableHTTPS, ok := lookupBoolFromEnv(fmt.Sprintf("SFTPGO_HTTPD__BINDINGS__%v__ENABLE_HTTPS", idx))
if ok {
binding.EnableHTTPS = enableHTTPS
@@ -762,7 +848,7 @@ func getHTTPDBindingFromEnv(idx int) {
clientAuthType, ok := lookupIntFromEnv(fmt.Sprintf("SFTPGO_HTTPD__BINDINGS__%v__CLIENT_AUTH_TYPE", idx))
if ok {
-binding.ClientAuthType = clientAuthType
+binding.ClientAuthType = int(clientAuthType)
isSet = true
}
@@ -772,6 +858,12 @@ func getHTTPDBindingFromEnv(idx int) {
isSet = true
}
proxyAllowed, ok := lookupStringListFromEnv(fmt.Sprintf("SFTPGO_HTTPD__BINDINGS__%v__PROXY_ALLOWED", idx))
if ok {
binding.ProxyAllowed = proxyAllowed
isSet = true
}
if isSet {
if len(globalConf.HTTPDConfig.Bindings) > idx {
globalConf.HTTPDConfig.Bindings[idx] = binding
@@ -803,22 +895,53 @@ func getHTTPClientCertificatesFromEnv(idx int) {
}
}
func getHTTPClientHeadersFromEnv(idx int) {
header := httpclient.Header{}
key, ok := os.LookupEnv(fmt.Sprintf("SFTPGO_HTTP__HEADERS__%v__KEY", idx))
if ok {
header.Key = key
}
value, ok := os.LookupEnv(fmt.Sprintf("SFTPGO_HTTP__HEADERS__%v__VALUE", idx))
if ok {
header.Value = value
}
url, ok := os.LookupEnv(fmt.Sprintf("SFTPGO_HTTP__HEADERS__%v__URL", idx))
if ok {
header.URL = url
}
if header.Key != "" && header.Value != "" {
if len(globalConf.HTTPConfig.Headers) > idx {
globalConf.HTTPConfig.Headers[idx] = header
} else {
globalConf.HTTPConfig.Headers = append(globalConf.HTTPConfig.Headers, header)
}
}
}
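// Editor's note, illustrative only (key, value and URL are made up): a header
// is stored only when both KEY and VALUE are set; URL is optional and is meant
// to scope the header to matching requests (the matching itself happens in the
// HTTP client, not in this function). For example:
//
//	os.Setenv("SFTPGO_HTTP__HEADERS__0__KEY", "X-API-Key")
//	os.Setenv("SFTPGO_HTTP__HEADERS__0__VALUE", "super-secret")
//	os.Setenv("SFTPGO_HTTP__HEADERS__0__URL", "https://hooks.example.com")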
func setViperDefaults() {
viper.SetDefault("common.idle_timeout", globalConf.Common.IdleTimeout)
viper.SetDefault("common.upload_mode", globalConf.Common.UploadMode)
viper.SetDefault("common.actions.execute_on", globalConf.Common.Actions.ExecuteOn)
viper.SetDefault("common.actions.execute_sync", globalConf.Common.Actions.ExecuteSync)
viper.SetDefault("common.actions.hook", globalConf.Common.Actions.Hook)
viper.SetDefault("common.setstat_mode", globalConf.Common.SetstatMode)
viper.SetDefault("common.temp_path", globalConf.Common.TempPath)
viper.SetDefault("common.proxy_protocol", globalConf.Common.ProxyProtocol)
viper.SetDefault("common.proxy_allowed", globalConf.Common.ProxyAllowed)
viper.SetDefault("common.post_connect_hook", globalConf.Common.PostConnectHook)
viper.SetDefault("common.max_total_connections", globalConf.Common.MaxTotalConnections)
viper.SetDefault("common.max_per_host_connections", globalConf.Common.MaxPerHostConnections)
viper.SetDefault("common.defender.enabled", globalConf.Common.DefenderConfig.Enabled)
viper.SetDefault("common.defender.ban_time", globalConf.Common.DefenderConfig.BanTime)
viper.SetDefault("common.defender.ban_time_increment", globalConf.Common.DefenderConfig.BanTimeIncrement)
viper.SetDefault("common.defender.threshold", globalConf.Common.DefenderConfig.Threshold)
viper.SetDefault("common.defender.score_invalid", globalConf.Common.DefenderConfig.ScoreInvalid)
viper.SetDefault("common.defender.score_valid", globalConf.Common.DefenderConfig.ScoreValid)
viper.SetDefault("common.defender.score_limit_exceeded", globalConf.Common.DefenderConfig.ScoreLimitExceeded)
viper.SetDefault("common.defender.observation_time", globalConf.Common.DefenderConfig.ObservationTime)
viper.SetDefault("common.defender.entries_soft_limit", globalConf.Common.DefenderConfig.EntriesSoftLimit)
viper.SetDefault("common.defender.entries_hard_limit", globalConf.Common.DefenderConfig.EntriesHardLimit)
@@ -832,7 +955,7 @@ func setViperDefaults() {
viper.SetDefault("sftpd.macs", globalConf.SFTPD.MACs)
viper.SetDefault("sftpd.trusted_user_ca_keys", globalConf.SFTPD.TrustedUserCAKeys)
viper.SetDefault("sftpd.login_banner_file", globalConf.SFTPD.LoginBannerFile)
viper.SetDefault("sftpd.enabled_ssh_commands", globalConf.SFTPD.EnabledSSHCommands)
viper.SetDefault("sftpd.enabled_ssh_commands", sftpd.GetDefaultSSHCommands())
viper.SetDefault("sftpd.keyboard_interactive_auth_hook", globalConf.SFTPD.KeyboardInteractiveHook)
viper.SetDefault("sftpd.password_authentication", globalConf.SFTPD.PasswordAuthentication)
viper.SetDefault("ftpd.banner", globalConf.FTPD.Banner)
@@ -886,14 +1009,20 @@ func setViperDefaults() {
viper.SetDefault("data_provider.post_login_scope", globalConf.ProviderConf.PostLoginScope)
viper.SetDefault("data_provider.check_password_hook", globalConf.ProviderConf.CheckPasswordHook)
viper.SetDefault("data_provider.check_password_scope", globalConf.ProviderConf.CheckPasswordScope)
viper.SetDefault("data_provider.password_hashing.bcrypt_options.cost", globalConf.ProviderConf.PasswordHashing.BcryptOptions.Cost)
viper.SetDefault("data_provider.password_hashing.argon2_options.memory", globalConf.ProviderConf.PasswordHashing.Argon2Options.Memory)
viper.SetDefault("data_provider.password_hashing.argon2_options.iterations", globalConf.ProviderConf.PasswordHashing.Argon2Options.Iterations)
viper.SetDefault("data_provider.password_hashing.argon2_options.parallelism", globalConf.ProviderConf.PasswordHashing.Argon2Options.Parallelism)
viper.SetDefault("data_provider.password_hashing.algo", globalConf.ProviderConf.PasswordHashing.Algo)
viper.SetDefault("data_provider.password_caching", globalConf.ProviderConf.PasswordCaching)
viper.SetDefault("data_provider.update_mode", globalConf.ProviderConf.UpdateMode)
viper.SetDefault("data_provider.skip_natural_keys_validation", globalConf.ProviderConf.SkipNaturalKeysValidation)
viper.SetDefault("data_provider.delayed_quota_update", globalConf.ProviderConf.DelayedQuotaUpdate)
viper.SetDefault("data_provider.create_default_admin", globalConf.ProviderConf.CreateDefaultAdmin)
viper.SetDefault("httpd.templates_path", globalConf.HTTPDConfig.TemplatesPath)
viper.SetDefault("httpd.static_files_path", globalConf.HTTPDConfig.StaticFilesPath)
viper.SetDefault("httpd.backups_path", globalConf.HTTPDConfig.BackupsPath)
viper.SetDefault("httpd.web_root", globalConf.HTTPDConfig.WebRoot)
viper.SetDefault("httpd.certificate_file", globalConf.HTTPDConfig.CertificateFile)
viper.SetDefault("httpd.certificate_key_file", globalConf.HTTPDConfig.CertificateKeyFile)
viper.SetDefault("httpd.ca_certificates", globalConf.HTTPDConfig.CACertificates)
@@ -927,12 +1056,12 @@ func lookupBoolFromEnv(envName string) (bool, bool) {
return false, false
}
-func lookupIntFromEnv(envName string) (int, bool) {
+func lookupIntFromEnv(envName string) (int64, bool) {
value, ok := os.LookupEnv(envName)
if ok {
-converted, err := strconv.ParseInt(value, 10, 16)
+converted, err := strconv.ParseInt(value, 10, 64)
if err == nil {
-return int(converted), ok
+return converted, ok
}
}
}


@@ -1,3 +1,4 @@
//go:build linux
// +build linux
package config


@@ -1,3 +1,4 @@
//go:build !linux
// +build !linux
package config


@@ -2,7 +2,6 @@ package config_test
import (
"encoding/json"
"io/ioutil"
"os"
"path/filepath"
"strings"
@@ -46,11 +45,11 @@ func TestLoadConfigTest(t *testing.T) {
configFilePath := filepath.Join(configDir, confName)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
-err = ioutil.WriteFile(configFilePath, []byte("{invalid json}"), os.ModePerm)
+err = os.WriteFile(configFilePath, []byte("{invalid json}"), os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
-err = ioutil.WriteFile(configFilePath, []byte("{\"sftpd\": {\"bind_port\": \"a\"}}"), os.ModePerm)
+err = os.WriteFile(configFilePath, []byte("{\"sftpd\": {\"bind_port\": \"a\"}}"), os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.Error(t, err)
@@ -79,7 +78,7 @@ func TestEmptyBanner(t *testing.T) {
c := make(map[string]sftpd.Configuration)
c["sftpd"] = sftpdConf
jsonConf, _ := json.Marshal(c)
-err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
+err = os.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
@@ -93,7 +92,7 @@ func TestEmptyBanner(t *testing.T) {
c1 := make(map[string]ftpd.Configuration)
c1["ftpd"] = ftpdConf
jsonConf, _ = json.Marshal(c1)
-err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
+err = os.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
@@ -103,6 +102,35 @@ func TestEmptyBanner(t *testing.T) {
assert.NoError(t, err)
}
func TestEnabledSSHCommands(t *testing.T) {
reset()
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
reset()
sftpdConf := config.GetSFTPDConfig()
sftpdConf.EnabledSSHCommands = []string{"scp"}
c := make(map[string]sftpd.Configuration)
c["sftpd"] = sftpdConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = os.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
sftpdConf = config.GetSFTPDConfig()
if assert.Len(t, sftpdConf.EnabledSSHCommands, 1) {
assert.Equal(t, "scp", sftpdConf.EnabledSSHCommands[0])
}
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
func TestInvalidUploadMode(t *testing.T) {
reset()
@@ -117,7 +145,7 @@ func TestInvalidUploadMode(t *testing.T) {
c["common"] = commonConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
-err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
+err = os.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
@@ -135,12 +163,12 @@ func TestInvalidExternalAuthScope(t *testing.T) {
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
providerConf := config.GetProviderConf()
-providerConf.ExternalAuthScope = 10
+providerConf.ExternalAuthScope = 100
c := make(map[string]dataprovider.Config)
c["data_provider"] = providerConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
-err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
+err = os.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
@@ -163,7 +191,7 @@ func TestInvalidCredentialsPath(t *testing.T) {
c["data_provider"] = providerConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
-err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
+err = os.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
@@ -186,7 +214,7 @@ func TestInvalidProxyProtocol(t *testing.T) {
c["common"] = commonConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
-err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
+err = os.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
@@ -209,7 +237,7 @@ func TestInvalidUsersBaseDir(t *testing.T) {
c["data_provider"] = providerConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
-err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
+err = os.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
@@ -218,76 +246,6 @@ func TestInvalidUsersBaseDir(t *testing.T) {
assert.NoError(t, err)
}
func TestCommonParamsCompatibility(t *testing.T) {
reset()
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
sftpdConf := config.GetSFTPDConfig()
sftpdConf.IdleTimeout = 21 //nolint:staticcheck
sftpdConf.Actions.Hook = "http://hook"
sftpdConf.Actions.ExecuteOn = []string{"upload"}
sftpdConf.SetstatMode = 1 //nolint:staticcheck
sftpdConf.UploadMode = common.UploadModeAtomicWithResume //nolint:staticcheck
sftpdConf.ProxyProtocol = 1 //nolint:staticcheck
sftpdConf.ProxyAllowed = []string{"192.168.1.1"} //nolint:staticcheck
c := make(map[string]sftpd.Configuration)
c["sftpd"] = sftpdConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
commonConf := config.GetCommonConfig()
assert.Equal(t, 21, commonConf.IdleTimeout)
assert.Equal(t, "http://hook", commonConf.Actions.Hook)
assert.Len(t, commonConf.Actions.ExecuteOn, 1)
assert.True(t, utils.IsStringInSlice("upload", commonConf.Actions.ExecuteOn))
assert.Equal(t, 1, commonConf.SetstatMode)
assert.Equal(t, 1, commonConf.ProxyProtocol)
assert.Len(t, commonConf.ProxyAllowed, 1)
assert.True(t, utils.IsStringInSlice("192.168.1.1", commonConf.ProxyAllowed))
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
func TestHostKeyCompatibility(t *testing.T) {
reset()
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
sftpdConf := config.GetSFTPDConfig()
sftpdConf.Keys = []sftpd.Key{ //nolint:staticcheck
{
PrivateKey: "rsa",
},
{
PrivateKey: "ecdsa",
},
}
c := make(map[string]sftpd.Configuration)
c["sftpd"] = sftpdConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
sftpdConf = config.GetSFTPDConfig()
assert.Equal(t, 2, len(sftpdConf.HostKeys))
assert.True(t, utils.IsStringInSlice("rsa", sftpdConf.HostKeys))
assert.True(t, utils.IsStringInSlice("ecdsa", sftpdConf.HostKeys))
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
func TestSetGetConfig(t *testing.T) {
reset()
@@ -379,7 +337,7 @@ func TestSFTPDBindingsCompatibility(t *testing.T) {
c["sftpd"] = sftpdConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
-err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
+err = os.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
@@ -420,7 +378,7 @@ func TestFTPDBindingsCompatibility(t *testing.T) {
c["ftpd"] = ftpdConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
-err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
+err = os.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
@@ -453,7 +411,7 @@ func TestWebDAVDBindingsCompatibility(t *testing.T) {
c["webdavd"] = webdavConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
-err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
+err = os.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
@@ -484,7 +442,7 @@ func TestHTTPDBindingsCompatibility(t *testing.T) {
c["httpd"] = httpdConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
-err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
+err = os.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
@@ -499,6 +457,62 @@ func TestHTTPDBindingsCompatibility(t *testing.T) {
assert.NoError(t, err)
}
func TestRateLimitersFromEnv(t *testing.T) {
reset()
os.Setenv("SFTPGO_COMMON__RATE_LIMITERS__0__AVERAGE", "100")
os.Setenv("SFTPGO_COMMON__RATE_LIMITERS__0__PERIOD", "2000")
os.Setenv("SFTPGO_COMMON__RATE_LIMITERS__0__BURST", "10")
os.Setenv("SFTPGO_COMMON__RATE_LIMITERS__0__TYPE", "2")
os.Setenv("SFTPGO_COMMON__RATE_LIMITERS__0__PROTOCOLS", "SSH, FTP")
os.Setenv("SFTPGO_COMMON__RATE_LIMITERS__0__GENERATE_DEFENDER_EVENTS", "1")
os.Setenv("SFTPGO_COMMON__RATE_LIMITERS__0__ENTRIES_SOFT_LIMIT", "50")
os.Setenv("SFTPGO_COMMON__RATE_LIMITERS__0__ENTRIES_HARD_LIMIT", "100")
os.Setenv("SFTPGO_COMMON__RATE_LIMITERS__8__AVERAGE", "50")
t.Cleanup(func() {
os.Unsetenv("SFTPGO_COMMON__RATE_LIMITERS__0__AVERAGE")
os.Unsetenv("SFTPGO_COMMON__RATE_LIMITERS__0__PERIOD")
os.Unsetenv("SFTPGO_COMMON__RATE_LIMITERS__0__BURST")
os.Unsetenv("SFTPGO_COMMON__RATE_LIMITERS__0__TYPE")
os.Unsetenv("SFTPGO_COMMON__RATE_LIMITERS__0__PROTOCOLS")
os.Unsetenv("SFTPGO_COMMON__RATE_LIMITERS__0__GENERATE_DEFENDER_EVENTS")
os.Unsetenv("SFTPGO_COMMON__RATE_LIMITERS__0__ENTRIES_SOFT_LIMIT")
os.Unsetenv("SFTPGO_COMMON__RATE_LIMITERS__0__ENTRIES_HARD_LIMIT")
os.Unsetenv("SFTPGO_COMMON__RATE_LIMITERS__8__AVERAGE")
})
configDir := ".."
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
limiters := config.GetCommonConfig().RateLimitersConfig
require.Len(t, limiters, 2)
require.Equal(t, int64(100), limiters[0].Average)
require.Equal(t, int64(2000), limiters[0].Period)
require.Equal(t, 10, limiters[0].Burst)
require.Equal(t, 2, limiters[0].Type)
protocols := limiters[0].Protocols
require.Len(t, protocols, 2)
require.True(t, utils.IsStringInSlice(common.ProtocolFTP, protocols))
require.True(t, utils.IsStringInSlice(common.ProtocolSSH, protocols))
require.True(t, limiters[0].GenerateDefenderEvents)
require.Equal(t, 50, limiters[0].EntriesSoftLimit)
require.Equal(t, 100, limiters[0].EntriesHardLimit)
require.Equal(t, int64(50), limiters[1].Average)
// we check the default values here
require.Equal(t, int64(1000), limiters[1].Period)
require.Equal(t, 1, limiters[1].Burst)
require.Equal(t, 2, limiters[1].Type)
protocols = limiters[1].Protocols
require.Len(t, protocols, 4)
require.True(t, utils.IsStringInSlice(common.ProtocolFTP, protocols))
require.True(t, utils.IsStringInSlice(common.ProtocolSSH, protocols))
require.True(t, utils.IsStringInSlice(common.ProtocolWebDAV, protocols))
require.True(t, utils.IsStringInSlice(common.ProtocolHTTP, protocols))
require.False(t, limiters[1].GenerateDefenderEvents)
require.Equal(t, 100, limiters[1].EntriesSoftLimit)
require.Equal(t, 150, limiters[1].EntriesHardLimit)
}
func TestSFTPDBindingsFromEnv(t *testing.T) {
reset()
@@ -507,14 +521,12 @@ func TestSFTPDBindingsFromEnv(t *testing.T) {
os.Setenv("SFTPGO_SFTPD__BINDINGS__0__APPLY_PROXY_CONFIG", "false")
os.Setenv("SFTPGO_SFTPD__BINDINGS__3__ADDRESS", "127.0.1.1")
os.Setenv("SFTPGO_SFTPD__BINDINGS__3__PORT", "2203")
os.Setenv("SFTPGO_SFTPD__BINDINGS__3__APPLY_PROXY_CONFIG", "1")
t.Cleanup(func() {
os.Unsetenv("SFTPGO_SFTPD__BINDINGS__0__ADDRESS")
os.Unsetenv("SFTPGO_SFTPD__BINDINGS__0__PORT")
os.Unsetenv("SFTPGO_SFTPD__BINDINGS__0__APPLY_PROXY_CONFIG")
os.Unsetenv("SFTPGO_SFTPD__BINDINGS__3__ADDRESS")
os.Unsetenv("SFTPGO_SFTPD__BINDINGS__3__PORT")
os.Unsetenv("SFTPGO_SFTPD__BINDINGS__3__APPLY_PROXY_CONFIG")
})
configDir := ".."
@@ -527,7 +539,7 @@ func TestSFTPDBindingsFromEnv(t *testing.T) {
require.False(t, bindings[0].ApplyProxyConfig)
require.Equal(t, 2203, bindings[1].Port)
require.Equal(t, "127.0.1.1", bindings[1].Address)
-require.True(t, bindings[1].ApplyProxyConfig)
+require.True(t, bindings[1].ApplyProxyConfig) // default value
}
func TestFTPDBindingsFromEnv(t *testing.T) {
@@ -541,10 +553,9 @@ func TestFTPDBindingsFromEnv(t *testing.T) {
os.Setenv("SFTPGO_FTPD__BINDINGS__0__TLS_CIPHER_SUITES", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256")
os.Setenv("SFTPGO_FTPD__BINDINGS__9__ADDRESS", "127.0.1.1")
os.Setenv("SFTPGO_FTPD__BINDINGS__9__PORT", "2203")
os.Setenv("SFTPGO_FTPD__BINDINGS__9__APPLY_PROXY_CONFIG", "t")
os.Setenv("SFTPGO_FTPD__BINDINGS__9__TLS_MODE", "1")
os.Setenv("SFTPGO_FTPD__BINDINGS__9__FORCE_PASSIVE_IP", "127.0.1.1")
os.Setenv("SFTPGO_FTPD__BINDINGS__9__CLIENT_AUTH_TYPE", "1")
os.Setenv("SFTPGO_FTPD__BINDINGS__9__CLIENT_AUTH_TYPE", "2")
t.Cleanup(func() {
os.Unsetenv("SFTPGO_FTPD__BINDINGS__0__ADDRESS")
@@ -555,7 +566,6 @@ func TestFTPDBindingsFromEnv(t *testing.T) {
os.Unsetenv("SFTPGO_FTPD__BINDINGS__0__TLS_CIPHER_SUITES")
os.Unsetenv("SFTPGO_FTPD__BINDINGS__9__ADDRESS")
os.Unsetenv("SFTPGO_FTPD__BINDINGS__9__PORT")
os.Unsetenv("SFTPGO_FTPD__BINDINGS__9__APPLY_PROXY_CONFIG")
os.Unsetenv("SFTPGO_FTPD__BINDINGS__9__TLS_MODE")
os.Unsetenv("SFTPGO_FTPD__BINDINGS__9__FORCE_PASSIVE_IP")
os.Unsetenv("SFTPGO_FTPD__BINDINGS__9__CLIENT_AUTH_TYPE")
@@ -577,10 +587,10 @@ func TestFTPDBindingsFromEnv(t *testing.T) {
require.Equal(t, "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", bindings[0].TLSCipherSuites[1])
require.Equal(t, 2203, bindings[1].Port)
require.Equal(t, "127.0.1.1", bindings[1].Address)
require.True(t, bindings[1].ApplyProxyConfig)
require.True(t, bindings[1].ApplyProxyConfig) // default value
require.Equal(t, 1, bindings[1].TLSMode)
require.Equal(t, "127.0.1.1", bindings[1].ForcePassiveIP)
require.Equal(t, 1, bindings[1].ClientAuthType)
require.Equal(t, 2, bindings[1].ClientAuthType)
require.Nil(t, bindings[1].TLSCipherSuites)
}
@@ -591,19 +601,23 @@ func TestWebDAVBindingsFromEnv(t *testing.T) {
os.Setenv("SFTPGO_WEBDAVD__BINDINGS__1__PORT", "8000")
os.Setenv("SFTPGO_WEBDAVD__BINDINGS__1__ENABLE_HTTPS", "0")
os.Setenv("SFTPGO_WEBDAVD__BINDINGS__1__TLS_CIPHER_SUITES", "TLS_RSA_WITH_AES_128_CBC_SHA ")
os.Setenv("SFTPGO_WEBDAVD__BINDINGS__1__PROXY_ALLOWED", "192.168.10.1")
os.Setenv("SFTPGO_WEBDAVD__BINDINGS__2__ADDRESS", "127.0.1.1")
os.Setenv("SFTPGO_WEBDAVD__BINDINGS__2__PORT", "9000")
os.Setenv("SFTPGO_WEBDAVD__BINDINGS__2__ENABLE_HTTPS", "1")
os.Setenv("SFTPGO_WEBDAVD__BINDINGS__2__CLIENT_AUTH_TYPE", "1")
os.Setenv("SFTPGO_WEBDAVD__BINDINGS__2__PREFIX", "/dav2")
t.Cleanup(func() {
os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__1__ADDRESS")
os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__1__PORT")
os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__1__ENABLE_HTTPS")
os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__1__TLS_CIPHER_SUITES")
os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__1__PROXY_ALLOWED")
os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__2__ADDRESS")
os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__2__PORT")
os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__2__ENABLE_HTTPS")
os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__2__CLIENT_AUTH_TYPE")
os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__2__PREFIX")
})
configDir := ".."
@@ -615,17 +629,21 @@ func TestWebDAVBindingsFromEnv(t *testing.T) {
require.Empty(t, bindings[0].Address)
require.False(t, bindings[0].EnableHTTPS)
require.Len(t, bindings[0].TLSCipherSuites, 0)
require.Empty(t, bindings[0].Prefix)
require.Equal(t, 8000, bindings[1].Port)
require.Equal(t, "127.0.0.1", bindings[1].Address)
require.False(t, bindings[1].EnableHTTPS)
require.Equal(t, 0, bindings[1].ClientAuthType)
require.Len(t, bindings[1].TLSCipherSuites, 1)
require.Equal(t, "TLS_RSA_WITH_AES_128_CBC_SHA", bindings[1].TLSCipherSuites[0])
require.Equal(t, "192.168.10.1", bindings[1].ProxyAllowed[0])
require.Empty(t, bindings[1].Prefix)
require.Equal(t, 9000, bindings[2].Port)
require.Equal(t, "127.0.1.1", bindings[2].Address)
require.True(t, bindings[2].EnableHTTPS)
require.Equal(t, 1, bindings[2].ClientAuthType)
require.Nil(t, bindings[2].TLSCipherSuites)
require.Equal(t, "/dav2", bindings[2].Prefix)
}
func TestHTTPDBindingsFromEnv(t *testing.T) {
@@ -639,13 +657,14 @@ func TestHTTPDBindingsFromEnv(t *testing.T) {
os.Setenv("SFTPGO_HTTPD__BINDINGS__1__ADDRESS", "127.0.0.1")
os.Setenv("SFTPGO_HTTPD__BINDINGS__1__PORT", "8000")
os.Setenv("SFTPGO_HTTPD__BINDINGS__1__ENABLE_HTTPS", "0")
os.Setenv("SFTPGO_HTTPD__BINDINGS__1__ENABLE_WEB_ADMIN", "1")
os.Setenv("SFTPGO_HTTPD__BINDINGS__2__ADDRESS", "127.0.1.1")
os.Setenv("SFTPGO_HTTPD__BINDINGS__2__PORT", "9000")
os.Setenv("SFTPGO_HTTPD__BINDINGS__2__ENABLE_WEB_ADMIN", "0")
os.Setenv("SFTPGO_HTTPD__BINDINGS__2__ENABLE_WEB_CLIENT", "0")
os.Setenv("SFTPGO_HTTPD__BINDINGS__2__ENABLE_HTTPS", "1")
os.Setenv("SFTPGO_HTTPD__BINDINGS__2__CLIENT_AUTH_TYPE", "1")
os.Setenv("SFTPGO_HTTPD__BINDINGS__2__TLS_CIPHER_SUITES", " TLS_AES_256_GCM_SHA384 , TLS_CHACHA20_POLY1305_SHA256")
os.Setenv("SFTPGO_HTTPD__BINDINGS__2__PROXY_ALLOWED", " 192.168.9.1 , 172.16.25.0/24")
t.Cleanup(func() {
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__0__ADDRESS")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__0__PORT")
@@ -653,13 +672,14 @@ func TestHTTPDBindingsFromEnv(t *testing.T) {
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__1__ADDRESS")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__1__PORT")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__1__ENABLE_HTTPS")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__1__ENABLE_WEB_ADMIN")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__2__ADDRESS")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__2__PORT")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__2__ENABLE_HTTPS")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__2__ENABLE_WEB_ADMIN")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__2__ENABLE_WEB_CLIENT")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__2__CLIENT_AUTH_TYPE")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__2__TLS_CIPHER_SUITES")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__2__PROXY_ALLOWED")
})
configDir := ".."
@@ -671,22 +691,28 @@ func TestHTTPDBindingsFromEnv(t *testing.T) {
require.Equal(t, sockPath, bindings[0].Address)
require.False(t, bindings[0].EnableHTTPS)
require.True(t, bindings[0].EnableWebAdmin)
require.True(t, bindings[0].EnableWebClient)
require.Len(t, bindings[0].TLSCipherSuites, 1)
require.Equal(t, "TLS_AES_128_GCM_SHA256", bindings[0].TLSCipherSuites[0])
require.Equal(t, 8000, bindings[1].Port)
require.Equal(t, "127.0.0.1", bindings[1].Address)
require.False(t, bindings[1].EnableHTTPS)
require.True(t, bindings[1].EnableWebAdmin)
require.True(t, bindings[1].EnableWebClient)
require.Nil(t, bindings[1].TLSCipherSuites)
require.Equal(t, 9000, bindings[2].Port)
require.Equal(t, "127.0.1.1", bindings[2].Address)
require.True(t, bindings[2].EnableHTTPS)
require.False(t, bindings[2].EnableWebAdmin)
require.False(t, bindings[2].EnableWebClient)
require.Equal(t, 1, bindings[2].ClientAuthType)
require.Len(t, bindings[2].TLSCipherSuites, 2)
require.Equal(t, "TLS_AES_256_GCM_SHA384", bindings[2].TLSCipherSuites[0])
require.Equal(t, "TLS_CHACHA20_POLY1305_SHA256", bindings[2].TLSCipherSuites[1])
require.Len(t, bindings[2].ProxyAllowed, 2)
require.Equal(t, "192.168.9.1", bindings[2].ProxyAllowed[0])
require.Equal(t, "172.16.25.0/24", bindings[2].ProxyAllowed[1])
}
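Note how TLS_CIPHER_SUITES and PROXY_ALLOWED are deliberately set with stray spaces (" TLS_AES_256_GCM_SHA384 , ...") while the assertions expect clean values: the loader evidently splits comma-separated env values and trims each element. A standalone sketch of that normalization, under the assumption the loader behaves like this illustrative helper:

package main

import (
	"fmt"
	"strings"
)

// splitAndTrim mirrors the behavior the assertions above rely on:
// comma-separated env values are split and each element is trimmed.
// This helper is illustrative, not the loader's actual function.
func splitAndTrim(v string) []string {
	var out []string
	for _, item := range strings.Split(v, ",") {
		if trimmed := strings.TrimSpace(item); trimmed != "" {
			out = append(out, trimmed)
		}
	}
	return out
}

func main() {
	fmt.Println(splitAndTrim(" 192.168.9.1 , 172.16.25.0/24"))
	// [192.168.9.1 172.16.25.0/24]
}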
func TestHTTPClientCertificatesFromEnv(t *testing.T) {
@@ -706,7 +732,7 @@ func TestHTTPClientCertificatesFromEnv(t *testing.T) {
c["http"] = httpConf
jsonConf, err := json.Marshal(c)
require.NoError(t, err)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
err = os.WriteFile(configFilePath, jsonConf, os.ModePerm)
require.NoError(t, err)
err = config.LoadConfig(configDir, confName)
require.NoError(t, err)
@@ -750,6 +776,77 @@ func TestHTTPClientCertificatesFromEnv(t *testing.T) {
require.Equal(t, "key9", config.GetHTTPConfig().Certificates[1].Key)
}
func TestHTTPClientHeadersFromEnv(t *testing.T) {
reset()
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
httpConf := config.GetHTTPConfig()
httpConf.Headers = append(httpConf.Headers, httpclient.Header{
Key: "key",
Value: "value",
URL: "url",
})
c := make(map[string]httpclient.Config)
c["http"] = httpConf
jsonConf, err := json.Marshal(c)
require.NoError(t, err)
err = os.WriteFile(configFilePath, jsonConf, os.ModePerm)
require.NoError(t, err)
err = config.LoadConfig(configDir, confName)
require.NoError(t, err)
require.Len(t, config.GetHTTPConfig().Headers, 1)
require.Equal(t, "key", config.GetHTTPConfig().Headers[0].Key)
require.Equal(t, "value", config.GetHTTPConfig().Headers[0].Value)
require.Equal(t, "url", config.GetHTTPConfig().Headers[0].URL)
os.Setenv("SFTPGO_HTTP__HEADERS__0__KEY", "key0")
os.Setenv("SFTPGO_HTTP__HEADERS__0__VALUE", "value0")
os.Setenv("SFTPGO_HTTP__HEADERS__0__URL", "url0")
os.Setenv("SFTPGO_HTTP__HEADERS__8__KEY", "key8")
os.Setenv("SFTPGO_HTTP__HEADERS__9__KEY", "key9")
os.Setenv("SFTPGO_HTTP__HEADERS__9__VALUE", "value9")
os.Setenv("SFTPGO_HTTP__HEADERS__9__URL", "url9")
t.Cleanup(func() {
os.Unsetenv("SFTPGO_HTTP__HEADERS__0__KEY")
os.Unsetenv("SFTPGO_HTTP__HEADERS__0__VALUE")
os.Unsetenv("SFTPGO_HTTP__HEADERS__0__URL")
os.Unsetenv("SFTPGO_HTTP__HEADERS__8__KEY")
os.Unsetenv("SFTPGO_HTTP__HEADERS__9__KEY")
os.Unsetenv("SFTPGO_HTTP__HEADERS__9__VALUE")
os.Unsetenv("SFTPGO_HTTP__HEADERS__9__URL")
})
err = config.LoadConfig(configDir, confName)
require.NoError(t, err)
require.Len(t, config.GetHTTPConfig().Headers, 2)
require.Equal(t, "key0", config.GetHTTPConfig().Headers[0].Key)
require.Equal(t, "value0", config.GetHTTPConfig().Headers[0].Value)
require.Equal(t, "url0", config.GetHTTPConfig().Headers[0].URL)
require.Equal(t, "key9", config.GetHTTPConfig().Headers[1].Key)
require.Equal(t, "value9", config.GetHTTPConfig().Headers[1].Value)
require.Equal(t, "url9", config.GetHTTPConfig().Headers[1].URL)
err = os.Remove(configFilePath)
assert.NoError(t, err)
config.Init()
err = config.LoadConfig(configDir, "")
require.NoError(t, err)
require.Len(t, config.GetHTTPConfig().Headers, 2)
require.Equal(t, "key0", config.GetHTTPConfig().Headers[0].Key)
require.Equal(t, "value0", config.GetHTTPConfig().Headers[0].Value)
require.Equal(t, "url0", config.GetHTTPConfig().Headers[0].URL)
require.Equal(t, "key9", config.GetHTTPConfig().Headers[1].Key)
require.Equal(t, "value9", config.GetHTTPConfig().Headers[1].Value)
require.Equal(t, "url9", config.GetHTTPConfig().Headers[1].URL)
}
func TestConfigFromEnv(t *testing.T) {
reset()


@@ -1,6 +1,7 @@
package dataprovider
import (
"crypto/sha256"
"encoding/base64"
"errors"
"fmt"
@@ -9,7 +10,7 @@ import (
"strings"
"github.com/alexedwards/argon2id"
"github.com/minio/sha256-simd"
"golang.org/x/crypto/bcrypt"
"github.com/drakkan/sftpgo/utils"
)
@@ -40,6 +41,7 @@ var (
)
// AdminFilters defines additional restrictions for SFTPGo admins
// TODO: rename to AdminOptions in v3
type AdminFilters struct {
// only clients connecting from these IP/Mask are allowed.
// IP/Mask must be in CIDR notation as defined in RFC 4632 and RFC 4291
@@ -59,16 +61,25 @@ type Admin struct {
Email string `json:"email"`
Permissions []string `json:"permissions"`
Filters AdminFilters `json:"filters,omitempty"`
Description string `json:"description,omitempty"`
AdditionalInfo string `json:"additional_info,omitempty"`
}
func (a *Admin) checkPassword() error {
if a.Password != "" && !strings.HasPrefix(a.Password, argonPwdPrefix) {
pwd, err := argon2id.CreateHash(a.Password, argon2Params)
if err != nil {
return err
if a.Password != "" && !utils.IsStringPrefixInSlice(a.Password, internalHashPwdPrefixes) {
if config.PasswordHashing.Algo == HashingAlgoBcrypt {
pwd, err := bcrypt.GenerateFromPassword([]byte(a.Password), config.PasswordHashing.BcryptOptions.Cost)
if err != nil {
return err
}
a.Password = string(pwd)
} else {
pwd, err := argon2id.CreateHash(a.Password, argon2Params)
if err != nil {
return err
}
a.Password = pwd
}
a.Password = pwd
}
return nil
}
@@ -113,6 +124,12 @@ func (a *Admin) validate() error {
// CheckPassword verifies the admin password
func (a *Admin) CheckPassword(password string) (bool, error) {
if strings.HasPrefix(a.Password, bcryptPwdPrefix) {
if err := bcrypt.CompareHashAndPassword([]byte(a.Password), []byte(password)); err != nil {
return false, ErrInvalidCredentials
}
return true, nil
}
return argon2id.ComparePasswordAndHash(password, a.Password)
}
@@ -223,6 +240,7 @@ func (a *Admin) getACopy() Admin {
Permissions: permissions,
Filters: filters,
AdditionalInfo: a.AdditionalInfo,
Description: a.Description,
}
}
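The admin changes above introduce a second hashing scheme: checkPassword now hashes new passwords with bcrypt or argon2id depending on config.PasswordHashing.Algo, while CheckPassword picks the verifier from the stored hash prefix. A self-contained sketch of that prefix dispatch; the real code uses the bcryptPwdPrefix and argonPwdPrefix constants, which are not shown in this hunk:

package main

import (
	"errors"
	"strings"

	"github.com/alexedwards/argon2id"
	"golang.org/x/crypto/bcrypt"
)

// verifyPassword dispatches on the stored hash prefix, as CheckPassword
// does above: "$2" marks a bcrypt hash, anything else is tried as argon2id.
// A sketch under those assumptions, not the exact production code.
func verifyPassword(stored, plain string) (bool, error) {
	if strings.HasPrefix(stored, "$2") {
		if err := bcrypt.CompareHashAndPassword([]byte(stored), []byte(plain)); err != nil {
			return false, errors.New("invalid credentials")
		}
		return true, nil
	}
	return argon2id.ComparePasswordAndHash(plain, stored)
}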

File diff suppressed because it is too large


@@ -1,3 +1,4 @@
//go:build nobolt
// +build nobolt
package dataprovider


@@ -0,0 +1,62 @@
package dataprovider
import (
"sync"
)
var cachedPasswords passwordsCache
func init() {
cachedPasswords = passwordsCache{
cache: make(map[string]string),
}
}
type passwordsCache struct {
sync.RWMutex
cache map[string]string
}
func (c *passwordsCache) Add(username, password string) {
if !config.PasswordCaching || username == "" || password == "" {
return
}
c.Lock()
defer c.Unlock()
c.cache[username] = password
}
func (c *passwordsCache) Remove(username string) {
if !config.PasswordCaching {
return
}
c.Lock()
defer c.Unlock()
delete(c.cache, username)
}
// Check returns whether the user is found and whether the password matches
func (c *passwordsCache) Check(username, password string) (bool, bool) {
if username == "" || password == "" {
return false, false
}
c.RLock()
defer c.RUnlock()
pwd, ok := c.cache[username]
if !ok {
return false, false
}
return true, pwd == password
}
// CheckCachedPassword is a utility method used only in test cases
func CheckCachedPassword(username, password string) (bool, bool) {
return cachedPasswords.Check(username, password)
}
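Check deliberately returns two booleans, (found, match), so callers can distinguish a cache miss from a wrong password. A standalone replica of that contract for illustration; the real cache additionally honors config.PasswordCaching and skips empty usernames and passwords:

package main

import (
	"fmt"
	"sync"
)

type pwdCache struct {
	mu    sync.RWMutex
	cache map[string]string
}

func (c *pwdCache) Add(user, pwd string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.cache[user] = pwd
}

// Check reports (found, match), mirroring passwordsCache.Check above.
func (c *pwdCache) Check(user, pwd string) (found, match bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	stored, ok := c.cache[user]
	return ok, ok && stored == pwd
}

func main() {
	c := &pwdCache{cache: map[string]string{}}
	c.Add("alice", "secret")
	fmt.Println(c.Check("alice", "secret")) // true true: hit, password matches
	fmt.Println(c.Check("alice", "wrong"))  // true false: hit, wrong password
	fmt.Println(c.Check("bob", "secret"))   // false false: cache miss
}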

dataprovider/cacheduser.go (new file, 142 lines)

@@ -0,0 +1,142 @@
package dataprovider
import (
"sync"
"time"
"golang.org/x/net/webdav"
"github.com/drakkan/sftpgo/utils"
)
var (
webDAVUsersCache *usersCache
)
func init() {
webDAVUsersCache = &usersCache{
users: map[string]CachedUser{},
}
}
// InitializeWebDAVUserCache initializes the cache for webdav users
func InitializeWebDAVUserCache(maxSize int) {
webDAVUsersCache = &usersCache{
users: map[string]CachedUser{},
maxSize: maxSize,
}
}
// CachedUser adds fields useful for caching to a SFTPGo user
type CachedUser struct {
User User
Expiration time.Time
Password string
LockSystem webdav.LockSystem
}
// IsExpired returns true if the cached user is expired
func (c *CachedUser) IsExpired() bool {
if c.Expiration.IsZero() {
return false
}
return c.Expiration.Before(time.Now())
}
type usersCache struct {
sync.RWMutex
users map[string]CachedUser
maxSize int
}
func (cache *usersCache) updateLastLogin(username string) {
cache.Lock()
defer cache.Unlock()
if cachedUser, ok := cache.users[username]; ok {
cachedUser.User.LastLogin = utils.GetTimeAsMsSinceEpoch(time.Now())
cache.users[username] = cachedUser
}
}
// swap updates an existing cached user with the specified one
// preserving the lock fs if possible
func (cache *usersCache) swap(user *User) {
cache.Lock()
defer cache.Unlock()
if cachedUser, ok := cache.users[user.Username]; ok {
if cachedUser.User.Password != user.Password {
// the password changed, the cached user is no longer valid
delete(cache.users, user.Username)
return
}
if cachedUser.User.isFsEqual(user) {
// the updated user has the same fs as the cached one, we can preserve the lock filesystem
cachedUser.User = *user
cache.users[user.Username] = cachedUser
} else {
// filesystem changed, the cached user is no longer valid
delete(cache.users, user.Username)
}
}
}
func (cache *usersCache) add(cachedUser *CachedUser) {
cache.Lock()
defer cache.Unlock()
if cache.maxSize > 0 && len(cache.users) >= cache.maxSize {
var userToRemove string
var expirationTime time.Time
for k, v := range cache.users {
if userToRemove == "" {
userToRemove = k
expirationTime = v.Expiration
continue
}
expireTime := v.Expiration
if !expireTime.IsZero() && expireTime.Before(expirationTime) {
userToRemove = k
expirationTime = expireTime
}
}
delete(cache.users, userToRemove)
}
if cachedUser.User.Username != "" {
cache.users[cachedUser.User.Username] = *cachedUser
}
}
func (cache *usersCache) remove(username string) {
cache.Lock()
defer cache.Unlock()
delete(cache.users, username)
}
func (cache *usersCache) get(username string) (*CachedUser, bool) {
cache.RLock()
defer cache.RUnlock()
cachedUser, ok := cache.users[username]
return &cachedUser, ok
}
// CacheWebDAVUser adds a user to the WebDAV cache
func CacheWebDAVUser(cachedUser *CachedUser) {
webDAVUsersCache.add(cachedUser)
}
// GetCachedWebDAVUser returns a previously cached WebDAV user
func GetCachedWebDAVUser(username string) (*CachedUser, bool) {
return webDAVUsersCache.get(username)
}
// RemoveCachedWebDAVUser removes a cached WebDAV user
func RemoveCachedWebDAVUser(username string) {
webDAVUsersCache.remove(username)
}
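A hedged usage sketch of the exported WebDAV cache API defined above. It assumes the sftpgo module is importable and that a dataprovider.User can be built from just a Username; when the cache is full, add evicts the entry with the earliest non-zero expiration before storing the new one:

package main

import (
	"fmt"
	"time"

	"github.com/drakkan/sftpgo/dataprovider"
)

func main() {
	// cap the cache at 50 users; see usersCache.add above for the
	// eviction rule applied once the cap is reached
	dataprovider.InitializeWebDAVUserCache(50)

	cached := &dataprovider.CachedUser{
		User:       dataprovider.User{Username: "alice"},
		Expiration: time.Now().Add(5 * time.Minute),
	}
	dataprovider.CacheWebDAVUser(cached)

	if cu, ok := dataprovider.GetCachedWebDAVUser("alice"); ok && !cu.IsExpired() {
		fmt.Println("cache hit for", cu.User.Username)
	}
	dataprovider.RemoveCachedWebDAVUser("alice")
}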


@@ -1,358 +1,118 @@
package dataprovider
import (
"encoding/json"
"fmt"
"io/ioutil"
"path/filepath"
"github.com/drakkan/sftpgo/kms"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/vfs"
)
type compatUserV2 struct {
ID int64 `json:"id"`
Username string `json:"username"`
Password string `json:"password,omitempty"`
PublicKeys []string `json:"public_keys,omitempty"`
HomeDir string `json:"home_dir"`
UID int `json:"uid"`
GID int `json:"gid"`
MaxSessions int `json:"max_sessions"`
QuotaSize int64 `json:"quota_size"`
QuotaFiles int `json:"quota_files"`
Permissions []string `json:"permissions"`
UsedQuotaSize int64 `json:"used_quota_size"`
UsedQuotaFiles int `json:"used_quota_files"`
LastQuotaUpdate int64 `json:"last_quota_update"`
UploadBandwidth int64 `json:"upload_bandwidth"`
DownloadBandwidth int64 `json:"download_bandwidth"`
ExpirationDate int64 `json:"expiration_date"`
LastLogin int64 `json:"last_login"`
Status int `json:"status"`
type compatAzBlobFsConfigV9 struct {
Container string `json:"container,omitempty"`
AccountName string `json:"account_name,omitempty"`
AccountKey *kms.Secret `json:"account_key,omitempty"`
Endpoint string `json:"endpoint,omitempty"`
SASURL string `json:"sas_url,omitempty"`
KeyPrefix string `json:"key_prefix,omitempty"`
UploadPartSize int64 `json:"upload_part_size,omitempty"`
UploadConcurrency int `json:"upload_concurrency,omitempty"`
UseEmulator bool `json:"use_emulator,omitempty"`
AccessTier string `json:"access_tier,omitempty"`
}
type compatS3FsConfigV4 struct {
Bucket string `json:"bucket,omitempty"`
KeyPrefix string `json:"key_prefix,omitempty"`
Region string `json:"region,omitempty"`
AccessKey string `json:"access_key,omitempty"`
AccessSecret string `json:"access_secret,omitempty"`
Endpoint string `json:"endpoint,omitempty"`
StorageClass string `json:"storage_class,omitempty"`
UploadPartSize int64 `json:"upload_part_size,omitempty"`
UploadConcurrency int `json:"upload_concurrency,omitempty"`
type compatFilesystemV9 struct {
Provider vfs.FilesystemProvider `json:"provider"`
S3Config vfs.S3FsConfig `json:"s3config,omitempty"`
GCSConfig vfs.GCSFsConfig `json:"gcsconfig,omitempty"`
AzBlobConfig compatAzBlobFsConfigV9 `json:"azblobconfig,omitempty"`
CryptConfig vfs.CryptFsConfig `json:"cryptconfig,omitempty"`
SFTPConfig vfs.SFTPFsConfig `json:"sftpconfig,omitempty"`
}
type compatGCSFsConfigV4 struct {
Bucket string `json:"bucket,omitempty"`
KeyPrefix string `json:"key_prefix,omitempty"`
CredentialFile string `json:"-"`
Credentials []byte `json:"credentials,omitempty"`
AutomaticCredentials int `json:"automatic_credentials,omitempty"`
StorageClass string `json:"storage_class,omitempty"`
type compatBaseFolderV9 struct {
ID int64 `json:"id"`
Name string `json:"name"`
MappedPath string `json:"mapped_path,omitempty"`
Description string `json:"description,omitempty"`
UsedQuotaSize int64 `json:"used_quota_size"`
UsedQuotaFiles int `json:"used_quota_files"`
LastQuotaUpdate int64 `json:"last_quota_update"`
Users []string `json:"users,omitempty"`
FsConfig compatFilesystemV9 `json:"filesystem"`
}
type compatAzBlobFsConfigV4 struct {
Container string `json:"container,omitempty"`
AccountName string `json:"account_name,omitempty"`
AccountKey string `json:"account_key,omitempty"`
Endpoint string `json:"endpoint,omitempty"`
SASURL string `json:"sas_url,omitempty"`
KeyPrefix string `json:"key_prefix,omitempty"`
UploadPartSize int64 `json:"upload_part_size,omitempty"`
UploadConcurrency int `json:"upload_concurrency,omitempty"`
UseEmulator bool `json:"use_emulator,omitempty"`
AccessTier string `json:"access_tier,omitempty"`
type compatFolderV9 struct {
compatBaseFolderV9
VirtualPath string `json:"virtual_path"`
QuotaSize int64 `json:"quota_size"`
QuotaFiles int `json:"quota_files"`
}
type compatFilesystemV4 struct {
Provider FilesystemProvider `json:"provider"`
S3Config compatS3FsConfigV4 `json:"s3config,omitempty"`
GCSConfig compatGCSFsConfigV4 `json:"gcsconfig,omitempty"`
AzBlobConfig compatAzBlobFsConfigV4 `json:"azblobconfig,omitempty"`
type compatUserV9 struct {
ID int64 `json:"id"`
Username string `json:"username"`
FsConfig compatFilesystemV9 `json:"filesystem"`
}
type compatUserV4 struct {
ID int64 `json:"id"`
Status int `json:"status"`
Username string `json:"username"`
ExpirationDate int64 `json:"expiration_date"`
Password string `json:"password,omitempty"`
PublicKeys []string `json:"public_keys,omitempty"`
HomeDir string `json:"home_dir"`
VirtualFolders []vfs.VirtualFolder `json:"virtual_folders,omitempty"`
UID int `json:"uid"`
GID int `json:"gid"`
MaxSessions int `json:"max_sessions"`
QuotaSize int64 `json:"quota_size"`
QuotaFiles int `json:"quota_files"`
Permissions map[string][]string `json:"permissions"`
UsedQuotaSize int64 `json:"used_quota_size"`
UsedQuotaFiles int `json:"used_quota_files"`
LastQuotaUpdate int64 `json:"last_quota_update"`
UploadBandwidth int64 `json:"upload_bandwidth"`
DownloadBandwidth int64 `json:"download_bandwidth"`
LastLogin int64 `json:"last_login"`
Filters UserFilters `json:"filters"`
FsConfig compatFilesystemV4 `json:"filesystem"`
}
type backupDataV4Compat struct {
Users []compatUserV4 `json:"users"`
Folders []vfs.BaseVirtualFolder `json:"folders"`
}
func createUserFromV4(u compatUserV4, fsConfig Filesystem) User {
user := User{
ID: u.ID,
Status: u.Status,
Username: u.Username,
ExpirationDate: u.ExpirationDate,
Password: u.Password,
PublicKeys: u.PublicKeys,
HomeDir: u.HomeDir,
VirtualFolders: u.VirtualFolders,
UID: u.UID,
GID: u.GID,
MaxSessions: u.MaxSessions,
QuotaSize: u.QuotaSize,
QuotaFiles: u.QuotaFiles,
Permissions: u.Permissions,
UsedQuotaSize: u.UsedQuotaSize,
UsedQuotaFiles: u.UsedQuotaFiles,
LastQuotaUpdate: u.LastQuotaUpdate,
UploadBandwidth: u.UploadBandwidth,
DownloadBandwidth: u.DownloadBandwidth,
LastLogin: u.LastLogin,
Filters: u.Filters,
func convertFsConfigFromV9(compatFs compatFilesystemV9, aead string) (vfs.Filesystem, error) {
fsConfig := vfs.Filesystem{
Provider: compatFs.Provider,
S3Config: compatFs.S3Config,
GCSConfig: compatFs.GCSConfig,
CryptConfig: compatFs.CryptConfig,
SFTPConfig: compatFs.SFTPConfig,
}
user.FsConfig = fsConfig
user.SetEmptySecretsIfNil()
return user
azSASURL := kms.NewEmptySecret()
if compatFs.Provider == vfs.AzureBlobFilesystemProvider && compatFs.AzBlobConfig.SASURL != "" {
azSASURL = kms.NewPlainSecret(compatFs.AzBlobConfig.SASURL)
}
if compatFs.AzBlobConfig.AccountKey == nil {
compatFs.AzBlobConfig.AccountKey = kms.NewEmptySecret()
}
fsConfig.AzBlobConfig = vfs.AzBlobFsConfig{
Container: compatFs.AzBlobConfig.Container,
AccountName: compatFs.AzBlobConfig.AccountName,
AccountKey: compatFs.AzBlobConfig.AccountKey,
Endpoint: compatFs.AzBlobConfig.Endpoint,
SASURL: azSASURL,
KeyPrefix: compatFs.AzBlobConfig.KeyPrefix,
UploadPartSize: compatFs.AzBlobConfig.UploadPartSize,
UploadConcurrency: compatFs.AzBlobConfig.UploadConcurrency,
UseEmulator: compatFs.AzBlobConfig.UseEmulator,
AccessTier: compatFs.AzBlobConfig.AccessTier,
}
err := fsConfig.AzBlobConfig.EncryptCredentials(aead)
return fsConfig, err
}
func convertUserToV4(u User, fsConfig compatFilesystemV4) compatUserV4 {
user := compatUserV4{
ID: u.ID,
Status: u.Status,
Username: u.Username,
ExpirationDate: u.ExpirationDate,
Password: u.Password,
PublicKeys: u.PublicKeys,
HomeDir: u.HomeDir,
VirtualFolders: u.VirtualFolders,
UID: u.UID,
GID: u.GID,
MaxSessions: u.MaxSessions,
QuotaSize: u.QuotaSize,
QuotaFiles: u.QuotaFiles,
Permissions: u.Permissions,
UsedQuotaSize: u.UsedQuotaSize,
UsedQuotaFiles: u.UsedQuotaFiles,
LastQuotaUpdate: u.LastQuotaUpdate,
UploadBandwidth: u.UploadBandwidth,
DownloadBandwidth: u.DownloadBandwidth,
LastLogin: u.LastLogin,
Filters: u.Filters,
}
user.FsConfig = fsConfig
return user
}
func getCGSCredentialsFromV4(config compatGCSFsConfigV4) (*kms.Secret, error) {
secret := kms.NewEmptySecret()
var err error
if len(config.Credentials) > 0 {
secret = kms.NewPlainSecret(string(config.Credentials))
return secret, nil
}
if config.CredentialFile != "" {
creds, err := ioutil.ReadFile(config.CredentialFile)
if err != nil {
return secret, err
}
secret = kms.NewPlainSecret(string(creds))
return secret, nil
}
return secret, err
}
func getCGSCredentialsFromV6(config vfs.GCSFsConfig, username string) (string, error) {
if config.Credentials == nil {
config.Credentials = kms.NewEmptySecret()
}
if config.Credentials.IsEmpty() {
config.CredentialFile = filepath.Join(credentialsDirPath, fmt.Sprintf("%v_gcs_credentials.json",
username))
creds, err := ioutil.ReadFile(config.CredentialFile)
if err != nil {
return "", err
}
err = json.Unmarshal(creds, &config.Credentials)
if err != nil {
return "", err
func convertFsConfigToV9(fs vfs.Filesystem) (compatFilesystemV9, error) {
azSASURL := ""
if fs.Provider == vfs.AzureBlobFilesystemProvider {
if fs.AzBlobConfig.SASURL != nil && fs.AzBlobConfig.SASURL.IsEncrypted() {
err := fs.AzBlobConfig.SASURL.Decrypt()
if err != nil {
return compatFilesystemV9{}, err
}
azSASURL = fs.AzBlobConfig.SASURL.GetPayload()
}
}
if config.Credentials.IsEncrypted() {
err := config.Credentials.Decrypt()
if err != nil {
return "", err
}
// in V4 GCS credentials were not encrypted
return config.Credentials.GetPayload(), nil
azFsCompat := compatAzBlobFsConfigV9{
Container: fs.AzBlobConfig.Container,
AccountName: fs.AzBlobConfig.AccountName,
AccountKey: fs.AzBlobConfig.AccountKey,
Endpoint: fs.AzBlobConfig.Endpoint,
SASURL: azSASURL,
KeyPrefix: fs.AzBlobConfig.KeyPrefix,
UploadPartSize: fs.AzBlobConfig.UploadPartSize,
UploadConcurrency: fs.AzBlobConfig.UploadConcurrency,
UseEmulator: fs.AzBlobConfig.UseEmulator,
AccessTier: fs.AzBlobConfig.AccessTier,
}
return "", nil
}
func convertFsConfigToV4(fs Filesystem, username string) (compatFilesystemV4, error) {
fsV4 := compatFilesystemV4{
fsV9 := compatFilesystemV9{
Provider: fs.Provider,
S3Config: compatS3FsConfigV4{},
AzBlobConfig: compatAzBlobFsConfigV4{},
GCSConfig: compatGCSFsConfigV4{},
S3Config: fs.S3Config,
GCSConfig: fs.GCSConfig,
AzBlobConfig: azFsCompat,
CryptConfig: fs.CryptConfig,
SFTPConfig: fs.SFTPConfig,
}
switch fs.Provider {
case S3FilesystemProvider:
fsV4.S3Config = compatS3FsConfigV4{
Bucket: fs.S3Config.Bucket,
KeyPrefix: fs.S3Config.KeyPrefix,
Region: fs.S3Config.Region,
AccessKey: fs.S3Config.AccessKey,
AccessSecret: "",
Endpoint: fs.S3Config.Endpoint,
StorageClass: fs.S3Config.StorageClass,
UploadPartSize: fs.S3Config.UploadPartSize,
UploadConcurrency: fs.S3Config.UploadConcurrency,
}
if fs.S3Config.AccessSecret.IsEncrypted() {
err := fs.S3Config.AccessSecret.Decrypt()
if err != nil {
return fsV4, err
}
secretV4, err := utils.EncryptData(fs.S3Config.AccessSecret.GetPayload())
if err != nil {
return fsV4, err
}
fsV4.S3Config.AccessSecret = secretV4
}
case AzureBlobFilesystemProvider:
fsV4.AzBlobConfig = compatAzBlobFsConfigV4{
Container: fs.AzBlobConfig.Container,
AccountName: fs.AzBlobConfig.AccountName,
AccountKey: "",
Endpoint: fs.AzBlobConfig.Endpoint,
SASURL: fs.AzBlobConfig.SASURL,
KeyPrefix: fs.AzBlobConfig.KeyPrefix,
UploadPartSize: fs.AzBlobConfig.UploadPartSize,
UploadConcurrency: fs.AzBlobConfig.UploadConcurrency,
UseEmulator: fs.AzBlobConfig.UseEmulator,
AccessTier: fs.AzBlobConfig.AccessTier,
}
if fs.AzBlobConfig.AccountKey.IsEncrypted() {
err := fs.AzBlobConfig.AccountKey.Decrypt()
if err != nil {
return fsV4, err
}
secretV4, err := utils.EncryptData(fs.AzBlobConfig.AccountKey.GetPayload())
if err != nil {
return fsV4, err
}
fsV4.AzBlobConfig.AccountKey = secretV4
}
case GCSFilesystemProvider:
fsV4.GCSConfig = compatGCSFsConfigV4{
Bucket: fs.GCSConfig.Bucket,
KeyPrefix: fs.GCSConfig.KeyPrefix,
CredentialFile: fs.GCSConfig.CredentialFile,
AutomaticCredentials: fs.GCSConfig.AutomaticCredentials,
StorageClass: fs.GCSConfig.StorageClass,
}
if fs.GCSConfig.AutomaticCredentials == 0 {
creds, err := getCGSCredentialsFromV6(fs.GCSConfig, username)
if err != nil {
return fsV4, err
}
fsV4.GCSConfig.Credentials = []byte(creds)
}
default:
// a provider not supported in v4, the configuration will be lost
providerLog(logger.LevelWarn, "provider %v was not supported in v4, the configuration for the user %#v will be lost",
fs.Provider, username)
fsV4.Provider = 0
}
return fsV4, nil
}
func convertFsConfigFromV4(compatFs compatFilesystemV4, username string) (Filesystem, error) {
fsConfig := Filesystem{
Provider: compatFs.Provider,
S3Config: vfs.S3FsConfig{},
AzBlobConfig: vfs.AzBlobFsConfig{},
GCSConfig: vfs.GCSFsConfig{},
}
switch compatFs.Provider {
case S3FilesystemProvider:
fsConfig.S3Config = vfs.S3FsConfig{
Bucket: compatFs.S3Config.Bucket,
KeyPrefix: compatFs.S3Config.KeyPrefix,
Region: compatFs.S3Config.Region,
AccessKey: compatFs.S3Config.AccessKey,
AccessSecret: kms.NewEmptySecret(),
Endpoint: compatFs.S3Config.Endpoint,
StorageClass: compatFs.S3Config.StorageClass,
UploadPartSize: compatFs.S3Config.UploadPartSize,
UploadConcurrency: compatFs.S3Config.UploadConcurrency,
}
if compatFs.S3Config.AccessSecret != "" {
secret, err := kms.GetSecretFromCompatString(compatFs.S3Config.AccessSecret)
if err != nil {
providerLog(logger.LevelError, "unable to convert v4 filesystem for user %#v: %v", username, err)
return fsConfig, err
}
fsConfig.S3Config.AccessSecret = secret
}
case AzureBlobFilesystemProvider:
fsConfig.AzBlobConfig = vfs.AzBlobFsConfig{
Container: compatFs.AzBlobConfig.Container,
AccountName: compatFs.AzBlobConfig.AccountName,
AccountKey: kms.NewEmptySecret(),
Endpoint: compatFs.AzBlobConfig.Endpoint,
SASURL: compatFs.AzBlobConfig.SASURL,
KeyPrefix: compatFs.AzBlobConfig.KeyPrefix,
UploadPartSize: compatFs.AzBlobConfig.UploadPartSize,
UploadConcurrency: compatFs.AzBlobConfig.UploadConcurrency,
UseEmulator: compatFs.AzBlobConfig.UseEmulator,
AccessTier: compatFs.AzBlobConfig.AccessTier,
}
if compatFs.AzBlobConfig.AccountKey != "" {
secret, err := kms.GetSecretFromCompatString(compatFs.AzBlobConfig.AccountKey)
if err != nil {
providerLog(logger.LevelError, "unable to convert v4 filesystem for user %#v: %v", username, err)
return fsConfig, err
}
fsConfig.AzBlobConfig.AccountKey = secret
}
case GCSFilesystemProvider:
fsConfig.GCSConfig = vfs.GCSFsConfig{
Bucket: compatFs.GCSConfig.Bucket,
KeyPrefix: compatFs.GCSConfig.KeyPrefix,
CredentialFile: compatFs.GCSConfig.CredentialFile,
AutomaticCredentials: compatFs.GCSConfig.AutomaticCredentials,
StorageClass: compatFs.GCSConfig.StorageClass,
}
if compatFs.GCSConfig.AutomaticCredentials == 0 {
compatFs.GCSConfig.CredentialFile = filepath.Join(credentialsDirPath, fmt.Sprintf("%v_gcs_credentials.json",
username))
}
secret, err := getCGSCredentialsFromV4(compatFs.GCSConfig)
if err != nil {
providerLog(logger.LevelError, "unable to convert v4 filesystem for user %#v: %v", username, err)
return fsConfig, err
}
fsConfig.GCSConfig.Credentials = secret
}
return fsConfig, nil
return fsV9, nil
}
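The V9 <-> V10 conversion above mostly moves the Azure SAS URL between a plain string and a kms.Secret. Reduced to its core, and using only the kms API visible in this diff (NewEmptySecret, NewPlainSecret, IsEncrypted, Decrypt, GetPayload, IsEmpty), the round trip looks roughly like this sketch:

package main

import (
	"fmt"

	"github.com/drakkan/sftpgo/kms"
)

// upgradeSASURL mirrors convertFsConfigFromV9: a non-empty plain string
// becomes a plain kms.Secret (sealed later by EncryptCredentials).
func upgradeSASURL(v9SASURL string) *kms.Secret {
	if v9SASURL == "" {
		return kms.NewEmptySecret()
	}
	return kms.NewPlainSecret(v9SASURL)
}

// downgradeSASURL mirrors convertFsConfigToV9: only encrypted secrets are
// decrypted back to a string; anything else downgrades to "".
func downgradeSASURL(secret *kms.Secret) (string, error) {
	if secret == nil || !secret.IsEncrypted() {
		return "", nil
	}
	if err := secret.Decrypt(); err != nil {
		return "", err
	}
	return secret.GetPayload(), nil
}

func main() {
	// placeholder URL, for illustration only
	s := upgradeSASURL("https://demo.blob.example/container?sig=placeholder")
	fmt.Println("empty:", s.IsEmpty())
}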

File diff suppressed because it is too large


@@ -1,9 +1,9 @@
package dataprovider
import (
"crypto/x509"
"errors"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"sort"
@@ -44,7 +44,6 @@ type MemoryProvider struct {
}
func initializeMemoryProvider(basePath string) {
logSender = fmt.Sprintf("dataprovider_%v", MemoryDataProviderName)
configFile := ""
if utils.IsFileInputValid(config.Name) {
configFile = config.Name
@@ -89,10 +88,23 @@ func (p *MemoryProvider) close() error {
return nil
}
func (p *MemoryProvider) validateUserAndTLSCert(username, protocol string, tlsCert *x509.Certificate) (User, error) {
var user User
if tlsCert == nil {
return user, errors.New("TLS certificate cannot be null or empty")
}
user, err := p.userExists(username)
if err != nil {
providerLog(logger.LevelWarn, "error authenticating user %#v: %v", username, err)
return user, err
}
return checkUserAndTLSCertificate(&user, protocol, tlsCert)
}
func (p *MemoryProvider) validateUserAndPass(username, password, ip, protocol string) (User, error) {
var user User
if password == "" {
return user, errors.New("Credentials cannot be null or empty")
return user, errors.New("credentials cannot be null or empty")
}
user, err := p.userExists(username)
if err != nil {
@@ -105,7 +117,7 @@ func (p *MemoryProvider) validateUserAndPass(username, password, ip, protocol st
func (p *MemoryProvider) validateUserAndPubKey(username string, pubKey []byte) (User, string, error) {
var user User
if len(pubKey) == 0 {
return user, "", errors.New("Credentials cannot be null or empty")
return user, "", errors.New("credentials cannot be null or empty")
}
user, err := p.userExists(username)
if err != nil {
@@ -119,7 +131,7 @@ func (p *MemoryProvider) validateAdminAndPass(username, password, ip string) (Ad
admin, err := p.adminExists(username)
if err != nil {
providerLog(logger.LevelWarn, "error authenticating admin %#v: %v", username, err)
return admin, err
return admin, ErrInvalidCredentials
}
err = admin.checkUserAndPass(password, ip)
return admin, err
@@ -317,7 +329,7 @@ func (p *MemoryProvider) getUsers(limit int, offset int, order string) ([]User,
}
u := p.dbHandle.users[username]
user := u.getACopy()
user.HideConfidentialData()
user.PrepareForRendering()
users = append(users, user)
if len(users) >= limit {
break
@@ -332,7 +344,7 @@ func (p *MemoryProvider) getUsers(limit int, offset int, order string) ([]User,
username := p.dbHandle.usernames[i]
u := p.dbHandle.users[username]
user := u.getACopy()
user.HideConfidentialData()
user.PrepareForRendering()
users = append(users, user)
if len(users) >= limit {
break
@@ -535,15 +547,12 @@ func (p *MemoryProvider) getUsedFolderQuota(name string) (int, int64, error) {
func (p *MemoryProvider) joinVirtualFoldersFields(user *User) []vfs.VirtualFolder {
var folders []vfs.VirtualFolder
for _, folder := range user.VirtualFolders {
f, err := p.addOrGetFolderInternal(folder.Name, folder.MappedPath, user.Username)
for idx := range user.VirtualFolders {
folder := &user.VirtualFolders[idx]
f, err := p.addOrUpdateFolderInternal(&folder.BaseVirtualFolder, user.Username, 0, 0, 0)
if err == nil {
folder.UsedQuotaFiles = f.UsedQuotaFiles
folder.UsedQuotaSize = f.UsedQuotaSize
folder.LastQuotaUpdate = f.LastQuotaUpdate
folder.ID = f.ID
folder.MappedPath = f.MappedPath
folders = append(folders, folder)
folder.BaseVirtualFolder = f
folders = append(folders, *folder)
}
}
return folders
@@ -571,24 +580,29 @@ func (p *MemoryProvider) updateFoldersMappingInternal(folder vfs.BaseVirtualFold
}
}
func (p *MemoryProvider) addOrGetFolderInternal(folderName, folderMappedPath, username string) (vfs.BaseVirtualFolder, error) {
folder, err := p.folderExistsInternal(folderName)
if _, ok := err.(*RecordNotFoundError); ok {
folder := vfs.BaseVirtualFolder{
ID: p.getNextFolderID(),
Name: folderName,
MappedPath: folderMappedPath,
UsedQuotaSize: 0,
UsedQuotaFiles: 0,
LastQuotaUpdate: 0,
Users: []string{username},
func (p *MemoryProvider) addOrUpdateFolderInternal(baseFolder *vfs.BaseVirtualFolder, username string, usedQuotaSize int64,
usedQuotaFiles int, lastQuotaUpdate int64) (vfs.BaseVirtualFolder, error) {
folder, err := p.folderExistsInternal(baseFolder.Name)
if err == nil {
// exists
folder.MappedPath = baseFolder.MappedPath
folder.Description = baseFolder.Description
folder.FsConfig = baseFolder.FsConfig.GetACopy()
if !utils.IsStringInSlice(username, folder.Users) {
folder.Users = append(folder.Users, username)
}
p.updateFoldersMappingInternal(folder)
return folder, nil
}
if err == nil && !utils.IsStringInSlice(username, folder.Users) {
folder.Users = append(folder.Users, username)
if _, ok := err.(*RecordNotFoundError); ok {
folder = baseFolder.GetACopy()
folder.ID = p.getNextFolderID()
folder.UsedQuotaSize = usedQuotaSize
folder.UsedQuotaFiles = usedQuotaFiles
folder.LastQuotaUpdate = lastQuotaUpdate
folder.Users = []string{username}
p.updateFoldersMappingInternal(folder)
return folder, nil
}
return folder, err
}
@@ -618,7 +632,9 @@ func (p *MemoryProvider) getFolders(limit, offset int, order string) ([]vfs.Base
if itNum <= offset {
continue
}
folder := p.dbHandle.vfolders[name]
f := p.dbHandle.vfolders[name]
folder := f.GetACopy()
folder.PrepareForRendering()
folders = append(folders, folder)
if len(folders) >= limit {
break
@@ -631,7 +647,9 @@ func (p *MemoryProvider) getFolders(limit, offset int, order string) ([]vfs.Base
continue
}
name := p.dbHandle.vfoldersNames[i]
folder := p.dbHandle.vfolders[name]
f := p.dbHandle.vfolders[name]
folder := f.GetACopy()
folder.PrepareForRendering()
folders = append(folders, folder)
if len(folders) >= limit {
break
@@ -647,7 +665,11 @@ func (p *MemoryProvider) getFolderByName(name string) (vfs.BaseVirtualFolder, er
if p.dbHandle.isClosed {
return vfs.BaseVirtualFolder{}, errMemoryProviderClosed
}
return p.folderExistsInternal(name)
folder, err := p.folderExistsInternal(name)
if err != nil {
return vfs.BaseVirtualFolder{}, err
}
return folder.GetACopy(), nil
}
func (p *MemoryProvider) addFolder(folder *vfs.BaseVirtualFolder) error {
@@ -695,6 +717,22 @@ func (p *MemoryProvider) updateFolder(folder *vfs.BaseVirtualFolder) error {
folder.UsedQuotaSize = f.UsedQuotaSize
folder.Users = f.Users
p.dbHandle.vfolders[folder.Name] = folder.GetACopy()
// now update the related users
for _, username := range folder.Users {
user, err := p.userExistsInternal(username)
if err == nil {
var folders []vfs.VirtualFolder
for idx := range user.VirtualFolders {
userFolder := &user.VirtualFolders[idx]
if folder.Name == userFolder.Name {
userFolder.BaseVirtualFolder = folder.GetACopy()
}
folders = append(folders, *userFolder)
}
user.VirtualFolders = folders
p.dbHandle.users[user.Username] = user
}
}
return nil
}
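The updateFolder change above propagates a folder edit to every user that references it, overwriting the stale embedded copy in each user's VirtualFolders. The same pattern, reduced to simplified stand-in types:

package main

import "fmt"

type Folder struct {
	Name       string
	MappedPath string
}

type User struct {
	Name    string
	Folders []Folder
}

// propagateFolderUpdate rewrites every denormalized copy of the updated
// folder held by its referencing users, as the memory provider does above.
func propagateFolderUpdate(users map[string]*User, updated Folder, referencing []string) {
	for _, username := range referencing {
		u, ok := users[username]
		if !ok {
			continue
		}
		for i := range u.Folders {
			if u.Folders[i].Name == updated.Name {
				u.Folders[i] = updated // overwrite the stale copy
			}
		}
	}
}

func main() {
	users := map[string]*User{
		"alice": {Name: "alice", Folders: []Folder{{Name: "shared", MappedPath: "/old"}}},
	}
	propagateFolderUpdate(users, Folder{Name: "shared", MappedPath: "/new"}, []string{"alice"})
	fmt.Println(users["alice"].Folders[0].MappedPath) // /new
}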
@@ -713,9 +751,10 @@ func (p *MemoryProvider) deleteFolder(folder *vfs.BaseVirtualFolder) error {
user, err := p.userExistsInternal(username)
if err == nil {
var folders []vfs.VirtualFolder
for _, userFolder := range user.VirtualFolders {
for idx := range user.VirtualFolders {
userFolder := &user.VirtualFolders[idx]
if folder.Name != userFolder.Name {
folders = append(folders, userFolder)
folders = append(folders, *userFolder)
}
}
user.VirtualFolders = folders
@@ -793,7 +832,7 @@ func (p *MemoryProvider) reloadConfig() error {
providerLog(logger.LevelWarn, "error loading dump: %v", err)
return err
}
content, err := ioutil.ReadFile(p.dbHandle.configFile)
content, err := os.ReadFile(p.dbHandle.configFile)
if err != nil {
providerLog(logger.LevelWarn, "error loading dump: %v", err)
return err


@@ -1,10 +1,13 @@
//go:build !nomysql
// +build !nomysql
package dataprovider
import (
"context"
"crypto/x509"
"database/sql"
"errors"
"fmt"
"strings"
"time"
@@ -18,41 +21,34 @@ import (
)
const (
mysqlUsersTableSQL = "CREATE TABLE `{{users}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
"`username` varchar(255) NOT NULL UNIQUE, `password` varchar(255) NULL, `public_keys` longtext NULL, " +
"`home_dir` varchar(255) NOT NULL, `uid` integer NOT NULL, `gid` integer NOT NULL, `max_sessions` integer NOT NULL, " +
" `quota_size` bigint NOT NULL, `quota_files` integer NOT NULL, `permissions` longtext NOT NULL, " +
"`used_quota_size` bigint NOT NULL, `used_quota_files` integer NOT NULL, `last_quota_update` bigint NOT NULL, " +
"`upload_bandwidth` integer NOT NULL, `download_bandwidth` integer NOT NULL, `expiration_date` bigint(20) NOT NULL, " +
"`last_login` bigint(20) NOT NULL, `status` int(11) NOT NULL, `filters` longtext DEFAULT NULL, " +
"`filesystem` longtext DEFAULT NULL);"
mysqlSchemaTableSQL = "CREATE TABLE `{{schema_version}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `version` integer NOT NULL);"
mysqlV2SQL = "ALTER TABLE `{{users}}` ADD COLUMN `virtual_folders` longtext NULL;"
mysqlV3SQL = "ALTER TABLE `{{users}}` MODIFY `password` longtext NULL;"
mysqlV4SQL = "CREATE TABLE `{{folders}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `path` varchar(512) NOT NULL UNIQUE," +
"`used_quota_size` bigint NOT NULL, `used_quota_files` integer NOT NULL, `last_quota_update` bigint NOT NULL);" +
"ALTER TABLE `{{users}}` MODIFY `home_dir` varchar(512) NOT NULL;" +
"ALTER TABLE `{{users}}` DROP COLUMN `virtual_folders`;" +
mysqlInitialSQL = "CREATE TABLE `{{schema_version}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `version` integer NOT NULL);" +
"CREATE TABLE `{{admins}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `username` varchar(255) NOT NULL UNIQUE, " +
"`password` varchar(255) NOT NULL, `email` varchar(255) NULL, `status` integer NOT NULL, `permissions` longtext NOT NULL, " +
"`filters` longtext NULL, `additional_info` longtext NULL);" +
"CREATE TABLE `{{folders}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `name` varchar(255) NOT NULL UNIQUE, " +
"`path` varchar(512) NULL, `used_quota_size` bigint NOT NULL, `used_quota_files` integer NOT NULL, " +
"`last_quota_update` bigint NOT NULL);" +
"CREATE TABLE `{{users}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `status` integer NOT NULL, " +
"`expiration_date` bigint NOT NULL, `username` varchar(255) NOT NULL UNIQUE, `password` longtext NULL, " +
"`public_keys` longtext NULL, `home_dir` varchar(512) NOT NULL, `uid` integer NOT NULL, `gid` integer NOT NULL, " +
"`max_sessions` integer NOT NULL, `quota_size` bigint NOT NULL, `quota_files` integer NOT NULL, " +
"`permissions` longtext NOT NULL, `used_quota_size` bigint NOT NULL, `used_quota_files` integer NOT NULL, " +
"`last_quota_update` bigint NOT NULL, `upload_bandwidth` integer NOT NULL, `download_bandwidth` integer NOT NULL, " +
"`last_login` bigint NOT NULL, `filters` longtext NULL, `filesystem` longtext NULL, `additional_info` longtext NULL);" +
"CREATE TABLE `{{folders_mapping}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `virtual_path` varchar(512) NOT NULL, " +
"`quota_size` bigint NOT NULL, `quota_files` integer NOT NULL, `folder_id` integer NOT NULL, `user_id` integer NOT NULL);" +
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `unique_mapping` UNIQUE (`user_id`, `folder_id`);" +
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `folders_mapping_folder_id_fk_folders_id` FOREIGN KEY (`folder_id`) REFERENCES `{{folders}}` (`id`) ON DELETE CASCADE;" +
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `folders_mapping_user_id_fk_users_id` FOREIGN KEY (`user_id`) REFERENCES `{{users}}` (`id`) ON DELETE CASCADE;"
mysqlV6SQL = "ALTER TABLE `{{users}}` ADD COLUMN `additional_info` longtext NULL;"
mysqlV6DownSQL = "ALTER TABLE `{{users}}` DROP COLUMN `additional_info`;"
mysqlV7SQL = "CREATE TABLE `{{admins}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `username` varchar(255) NOT NULL UNIQUE, " +
"`password` varchar(255) NOT NULL, `email` varchar(255) NULL, `status` integer NOT NULL, `permissions` longtext NOT NULL, " +
"`filters` longtext NULL, `additional_info` longtext NULL);"
mysqlV7DownSQL = "DROP TABLE `{{admins}}` CASCADE;"
mysqlV8SQL = "ALTER TABLE `{{folders}}` ADD COLUMN `name` varchar(255) NULL;" +
"ALTER TABLE `{{folders}}` MODIFY `path` varchar(512) NULL;" +
"ALTER TABLE `{{folders}}` DROP INDEX `path`;" +
"UPDATE `{{folders}}` f1 SET name = CONCAT('folder',f1.id);" +
"ALTER TABLE `{{folders}}` MODIFY `name` varchar(255) NOT NULL;" +
"ALTER TABLE `{{folders}}` ADD CONSTRAINT `name` UNIQUE (`name`);"
mysqlV8DownSQL = "ALTER TABLE `{{folders}}` DROP COLUMN `name`;" +
"ALTER TABLE `{{folders}}` MODIFY `path` varchar(512) NOT NULL;" +
"ALTER TABLE `{{folders}}` ADD CONSTRAINT `path` UNIQUE (`path`);"
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `{{prefix}}unique_mapping` UNIQUE (`user_id`, `folder_id`);" +
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `{{prefix}}folders_mapping_folder_id_fk_folders_id` FOREIGN KEY (`folder_id`) REFERENCES `{{folders}}` (`id`) ON DELETE CASCADE;" +
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `{{prefix}}folders_mapping_user_id_fk_users_id` FOREIGN KEY (`user_id`) REFERENCES `{{users}}` (`id`) ON DELETE CASCADE;" +
"INSERT INTO {{schema_version}} (version) VALUES (8);"
mysqlV9SQL = "ALTER TABLE `{{admins}}` ADD COLUMN `description` varchar(512) NULL;" +
"ALTER TABLE `{{folders}}` ADD COLUMN `description` varchar(512) NULL;" +
"ALTER TABLE `{{folders}}` ADD COLUMN `filesystem` longtext NULL;" +
"ALTER TABLE `{{users}}` ADD COLUMN `description` varchar(512) NULL;"
mysqlV9DownSQL = "ALTER TABLE `{{users}}` DROP COLUMN `description`;" +
"ALTER TABLE `{{folders}}` DROP COLUMN `filesystem`;" +
"ALTER TABLE `{{folders}}` DROP COLUMN `description`;" +
"ALTER TABLE `{{admins}}` DROP COLUMN `description`;"
)
// MySQLProvider auth provider for MySQL/MariaDB database
@@ -66,7 +62,7 @@ func init() {
func initializeMySQLProvider() error {
var err error
logSender = fmt.Sprintf("dataprovider_%v", MySQLDataProviderName)
dbHandle, err := sql.Open("mysql", getMySQLConnectionString(false))
if err == nil {
providerLog(logger.LevelDebug, "mysql database handle created, connection string: %#v, pool size: %v",
@@ -108,6 +104,10 @@ func (p *MySQLProvider) validateUserAndPass(username, password, ip, protocol str
return sqlCommonValidateUserAndPass(username, password, ip, protocol, p.dbHandle)
}
func (p *MySQLProvider) validateUserAndTLSCert(username, protocol string, tlsCert *x509.Certificate) (User, error) {
return sqlCommonValidateUserAndTLSCertificate(username, protocol, tlsCert, p.dbHandle)
}
func (p *MySQLProvider) validateUserAndPubKey(username string, publicKey []byte) (User, string, error) {
return sqlCommonValidateUserAndPubKey(username, publicKey, p.dbHandle)
}
@@ -224,27 +224,14 @@ func (p *MySQLProvider) initializeDatabase() error {
if err == nil && dbVersion.Version > 0 {
return ErrNoInitRequired
}
sqlUsers := strings.Replace(mysqlUsersTableSQL, "{{users}}", sqlTableUsers, 1)
ctx, cancel := context.WithTimeout(context.Background(), longSQLQueryTimeout)
defer cancel()
initialSQL := strings.ReplaceAll(mysqlInitialSQL, "{{schema_version}}", sqlTableSchemaVersion)
initialSQL = strings.ReplaceAll(initialSQL, "{{admins}}", sqlTableAdmins)
initialSQL = strings.ReplaceAll(initialSQL, "{{folders}}", sqlTableFolders)
initialSQL = strings.ReplaceAll(initialSQL, "{{users}}", sqlTableUsers)
initialSQL = strings.ReplaceAll(initialSQL, "{{folders_mapping}}", sqlTableFoldersMapping)
initialSQL = strings.ReplaceAll(initialSQL, "{{prefix}}", config.SQLTablesPrefix)
tx, err := p.dbHandle.BeginTx(ctx, nil)
if err != nil {
return err
}
_, err = tx.Exec(sqlUsers)
if err != nil {
return err
}
_, err = tx.Exec(strings.Replace(mysqlSchemaTableSQL, "{{schema_version}}", sqlTableSchemaVersion, 1))
if err != nil {
return err
}
_, err = tx.Exec(strings.Replace(initialDBVersionSQL, "{{schema_version}}", sqlTableSchemaVersion, 1))
if err != nil {
return err
}
return tx.Commit()
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, strings.Split(initialSQL, ";"), 8)
}
func (p *MySQLProvider) migrateDatabase() error {
@@ -252,200 +239,95 @@ func (p *MySQLProvider) migrateDatabase() error {
if err != nil {
return err
}
if dbVersion.Version == sqlDatabaseVersion {
providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", dbVersion.Version)
switch version := dbVersion.Version; {
case version == sqlDatabaseVersion:
providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", version)
return ErrNoInitRequired
}
switch dbVersion.Version {
case 1:
return updateMySQLDatabaseFromV1(p.dbHandle)
case 2:
return updateMySQLDatabaseFromV2(p.dbHandle)
case 3:
return updateMySQLDatabaseFromV3(p.dbHandle)
case 4:
return updateMySQLDatabaseFromV4(p.dbHandle)
case 5:
return updateMySQLDatabaseFromV5(p.dbHandle)
case 6:
return updateMySQLDatabaseFromV6(p.dbHandle)
case 7:
return updateMySQLDatabaseFromV7(p.dbHandle)
case version < 8:
err = fmt.Errorf("database version %v is too old, please see the upgrading docs", version)
providerLog(logger.LevelError, "%v", err)
logger.ErrorToConsole("%v", err)
return err
case version == 8:
return updateMySQLDatabaseFromV8(p.dbHandle)
case version == 9:
return updateMySQLDatabaseFromV9(p.dbHandle)
default:
if dbVersion.Version > sqlDatabaseVersion {
providerLog(logger.LevelWarn, "database version %v is newer than the supported: %v", dbVersion.Version,
if version > sqlDatabaseVersion {
providerLog(logger.LevelWarn, "database version %v is newer than the supported one: %v", version,
sqlDatabaseVersion)
logger.WarnToConsole("database version %v is newer than the supported: %v", dbVersion.Version,
logger.WarnToConsole("database version %v is newer than the supported one: %v", version,
sqlDatabaseVersion)
return nil
}
return fmt.Errorf("Database version not handled: %v", dbVersion.Version)
return fmt.Errorf("database version not handled: %v", version)
}
}
//nolint:dupl
func (p *MySQLProvider) revertDatabase(targetVersion int) error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
if err != nil {
return err
}
if dbVersion.Version == targetVersion {
return fmt.Errorf("current version match target version, nothing to do")
return errors.New("current version match target version, nothing to do")
}
switch dbVersion.Version {
case 8:
err = downgradeMySQLDatabaseFrom8To7(p.dbHandle)
if err != nil {
return err
}
err = downgradeMySQLDatabaseFrom7To6(p.dbHandle)
if err != nil {
return err
}
err = downgradeMySQLDatabaseFrom6To5(p.dbHandle)
if err != nil {
return err
}
return downgradeMySQLDatabaseFrom5To4(p.dbHandle)
case 7:
err = downgradeMySQLDatabaseFrom7To6(p.dbHandle)
if err != nil {
return err
}
err = downgradeMySQLDatabaseFrom6To5(p.dbHandle)
if err != nil {
return err
}
return downgradeMySQLDatabaseFrom5To4(p.dbHandle)
case 6:
err = downgradeMySQLDatabaseFrom6To5(p.dbHandle)
if err != nil {
return err
}
return downgradeMySQLDatabaseFrom5To4(p.dbHandle)
case 5:
return downgradeMySQLDatabaseFrom5To4(p.dbHandle)
case 9:
return downgradeMySQLDatabaseFromV9(p.dbHandle)
case 10:
return downgradeMySQLDatabaseFromV10(p.dbHandle)
default:
return fmt.Errorf("Database version not handled: %v", dbVersion.Version)
return fmt.Errorf("database version not handled: %v", dbVersion.Version)
}
}
func updateMySQLDatabaseFromV1(dbHandle *sql.DB) error {
err := updateMySQLDatabaseFrom1To2(dbHandle)
if err != nil {
func updateMySQLDatabaseFromV8(dbHandle *sql.DB) error {
if err := updateMySQLDatabaseFrom8To9(dbHandle); err != nil {
return err
}
return updateMySQLDatabaseFromV2(dbHandle)
return updateMySQLDatabaseFromV9(dbHandle)
}
func updateMySQLDatabaseFromV2(dbHandle *sql.DB) error {
err := updateMySQLDatabaseFrom2To3(dbHandle)
if err != nil {
func updateMySQLDatabaseFromV9(dbHandle *sql.DB) error {
return updateMySQLDatabaseFrom9To10(dbHandle)
}
func downgradeMySQLDatabaseFromV9(dbHandle *sql.DB) error {
return downgradeMySQLDatabaseFrom9To8(dbHandle)
}
func downgradeMySQLDatabaseFromV10(dbHandle *sql.DB) error {
if err := downgradeMySQLDatabaseFrom10To9(dbHandle); err != nil {
return err
}
return updateMySQLDatabaseFromV3(dbHandle)
return downgradeMySQLDatabaseFromV9(dbHandle)
}
func updateMySQLDatabaseFromV3(dbHandle *sql.DB) error {
err := updateMySQLDatabaseFrom3To4(dbHandle)
if err != nil {
return err
}
return updateMySQLDatabaseFromV4(dbHandle)
func updateMySQLDatabaseFrom8To9(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 8 -> 9")
providerLog(logger.LevelInfo, "updating database version: 8 -> 9")
sql := strings.ReplaceAll(mysqlV9SQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 9)
}
func updateMySQLDatabaseFromV4(dbHandle *sql.DB) error {
err := updateMySQLDatabaseFrom4To5(dbHandle)
if err != nil {
return err
}
return updateMySQLDatabaseFromV5(dbHandle)
}
func updateMySQLDatabaseFromV5(dbHandle *sql.DB) error {
err := updateMySQLDatabaseFrom5To6(dbHandle)
if err != nil {
return err
}
return updateMySQLDatabaseFromV6(dbHandle)
}
func updateMySQLDatabaseFromV6(dbHandle *sql.DB) error {
err := updateMySQLDatabaseFrom6To7(dbHandle)
if err != nil {
return err
}
return updateMySQLDatabaseFromV7(dbHandle)
}
func updateMySQLDatabaseFromV7(dbHandle *sql.DB) error {
return updateMySQLDatabaseFrom7To8(dbHandle)
}
func updateMySQLDatabaseFrom1To2(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 1 -> 2")
providerLog(logger.LevelInfo, "updating database version: 1 -> 2")
sql := strings.Replace(mysqlV2SQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 2)
}
func updateMySQLDatabaseFrom2To3(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 2 -> 3")
providerLog(logger.LevelInfo, "updating database version: 2 -> 3")
sql := strings.Replace(mysqlV3SQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 3)
}
func updateMySQLDatabaseFrom3To4(dbHandle *sql.DB) error {
return sqlCommonUpdateDatabaseFrom3To4(mysqlV4SQL, dbHandle)
}
func updateMySQLDatabaseFrom4To5(dbHandle *sql.DB) error {
return sqlCommonUpdateDatabaseFrom4To5(dbHandle)
}
func updateMySQLDatabaseFrom5To6(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 5 -> 6")
providerLog(logger.LevelInfo, "updating database version: 5 -> 6")
sql := strings.Replace(mysqlV6SQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 6)
}
func updateMySQLDatabaseFrom6To7(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 6 -> 7")
providerLog(logger.LevelInfo, "updating database version: 6 -> 7")
sql := strings.Replace(mysqlV7SQL, "{{admins}}", sqlTableAdmins, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 7)
}
func updateMySQLDatabaseFrom7To8(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 7 -> 8")
providerLog(logger.LevelInfo, "updating database version: 7 -> 8")
sql := strings.ReplaceAll(mysqlV8SQL, "{{folders}}", sqlTableFolders)
func downgradeMySQLDatabaseFrom9To8(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 9 -> 8")
providerLog(logger.LevelInfo, "downgrading database version: 9 -> 8")
sql := strings.ReplaceAll(mysqlV9DownSQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 8)
}
func downgradeMySQLDatabaseFrom8To7(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 8 -> 7")
providerLog(logger.LevelInfo, "downgrading database version: 8 -> 7")
sql := strings.ReplaceAll(mysqlV8DownSQL, "{{folders}}", sqlTableFolders)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 7)
func updateMySQLDatabaseFrom9To10(dbHandle *sql.DB) error {
return sqlCommonUpdateDatabaseFrom9To10(dbHandle)
}
func downgradeMySQLDatabaseFrom7To6(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 7 -> 6")
providerLog(logger.LevelInfo, "downgrading database version: 7 -> 6")
sql := strings.Replace(mysqlV7DownSQL, "{{admins}}", sqlTableAdmins, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 6)
}
func downgradeMySQLDatabaseFrom6To5(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 6 -> 5")
providerLog(logger.LevelInfo, "downgrading database version: 6 -> 5")
sql := strings.Replace(mysqlV6DownSQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 5)
}
func downgradeMySQLDatabaseFrom5To4(dbHandle *sql.DB) error {
return sqlCommonDowngradeDatabaseFrom5To4(dbHandle)
func downgradeMySQLDatabaseFrom10To9(dbHandle *sql.DB) error {
return sqlCommonDowngradeDatabaseFrom10To9(dbHandle)
}
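The rewritten migrateDatabase collapses the long per-version switch into guards: at the target version do nothing, below the supported floor fail with an upgrade hint, above it warn and continue, otherwise step forward one schema version at a time. A generic sketch of that dispatch, with version numbers chosen to match this diff:

package main

import (
	"errors"
	"fmt"
)

const (
	minVersion     = 8  // oldest schema this release can migrate
	currentVersion = 10 // schema version this release targets
)

// migrate applies the guard-then-chain pattern used above; step performs
// a single from -> from+1 schema update.
func migrate(version int, step func(from int) error) error {
	switch {
	case version == currentVersion:
		return errors.New("no init required")
	case version < minVersion:
		return fmt.Errorf("database version %v is too old, please see the upgrading docs", version)
	case version > currentVersion:
		fmt.Printf("database version %v is newer than the supported one: %v\n", version, currentVersion)
		return nil
	default:
		for v := version; v < currentVersion; v++ {
			if err := step(v); err != nil {
				return err
			}
		}
		return nil
	}
}

func main() {
	err := migrate(8, func(from int) error {
		fmt.Printf("updating database version: %d -> %d\n", from, from+1)
		return nil
	})
	fmt.Println("result:", err)
}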


@@ -1,3 +1,4 @@
//go:build nomysql
// +build nomysql
package dataprovider


@@ -1,10 +1,13 @@
//go:build !nopgsql
// +build !nopgsql
package dataprovider
import (
"context"
"crypto/x509"
"database/sql"
"errors"
"fmt"
"strings"
"time"
@@ -18,43 +21,40 @@ import (
)
const (
pgsqlUsersTableSQL = `CREATE TABLE "{{users}}" ("id" serial NOT NULL PRIMARY KEY, "username" varchar(255) NOT NULL UNIQUE,
"password" varchar(255) NULL, "public_keys" text NULL, "home_dir" varchar(255) NOT NULL, "uid" integer NOT NULL,
"gid" integer NOT NULL, "max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL,
"permissions" text NOT NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL,
"last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL,
"expiration_date" bigint NOT NULL, "last_login" bigint NOT NULL, "status" integer NOT NULL, "filters" text NULL,
"filesystem" text NULL);`
pgsqlSchemaTableSQL = `CREATE TABLE "{{schema_version}}" ("id" serial NOT NULL PRIMARY KEY, "version" integer NOT NULL);`
pgsqlV2SQL = `ALTER TABLE "{{users}}" ADD COLUMN "virtual_folders" text NULL;`
pgsqlV3SQL = `ALTER TABLE "{{users}}" ALTER COLUMN "password" TYPE text USING "password"::text;`
pgsqlV4SQL = `CREATE TABLE "{{folders}}" ("id" serial NOT NULL PRIMARY KEY, "path" varchar(512) NOT NULL UNIQUE, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL);
ALTER TABLE "{{users}}" ALTER COLUMN "home_dir" TYPE varchar(512) USING "home_dir"::varchar(512);
ALTER TABLE "{{users}}" DROP COLUMN "virtual_folders" CASCADE;
CREATE TABLE "{{folders_mapping}}" ("id" serial NOT NULL PRIMARY KEY, "virtual_path" varchar(512) NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "folder_id" integer NOT NULL, "user_id" integer NOT NULL);
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "unique_mapping" UNIQUE ("user_id", "folder_id");
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "folders_mapping_folder_id_fk_folders_id" FOREIGN KEY ("folder_id") REFERENCES "{{folders}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "folders_mapping_user_id_fk_users_id" FOREIGN KEY ("user_id") REFERENCES "{{users}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
CREATE INDEX "folders_mapping_folder_id_idx" ON "{{folders_mapping}}" ("folder_id");
CREATE INDEX "folders_mapping_user_id_idx" ON "{{folders_mapping}}" ("user_id");
`
pgsqlV6SQL = `ALTER TABLE "{{users}}" ADD COLUMN "additional_info" text NULL;`
pgsqlV6DownSQL = `ALTER TABLE "{{users}}" DROP COLUMN "additional_info" CASCADE;`
pgsqlV7SQL = `CREATE TABLE "{{admins}}" ("id" serial NOT NULL PRIMARY KEY, "username" varchar(255) NOT NULL UNIQUE,
"password" varchar(255) NOT NULL, "email" varchar(255) NULL, "status" integer NOT NULL, "permissions" text NOT NULL,
"filters" text NULL, "additional_info" text NULL);`
pgsqlInitial = `CREATE TABLE "{{schema_version}}" ("id" serial NOT NULL PRIMARY KEY, "version" integer NOT NULL);
CREATE TABLE "{{admins}}" ("id" serial NOT NULL PRIMARY KEY, "username" varchar(255) NOT NULL UNIQUE,
"password" varchar(255) NOT NULL, "email" varchar(255) NULL, "status" integer NOT NULL, "permissions" text NOT NULL,
"filters" text NULL, "additional_info" text NULL);
CREATE TABLE "{{folders}}" ("id" serial NOT NULL PRIMARY KEY, "name" varchar(255) NOT NULL UNIQUE,
"path" varchar(512) NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL,
"last_quota_update" bigint NOT NULL);
CREATE TABLE "{{users}}" ("id" serial NOT NULL PRIMARY KEY, "status" integer NOT NULL, "expiration_date" bigint NOT NULL,
"username" varchar(255) NOT NULL UNIQUE, "password" text NULL, "public_keys" text NULL, "home_dir" varchar(512) NOT NULL,
"uid" integer NOT NULL, "gid" integer NOT NULL, "max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL,
"quota_files" integer NOT NULL, "permissions" text NOT NULL, "used_quota_size" bigint NOT NULL,
"used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL,
"download_bandwidth" integer NOT NULL, "last_login" bigint NOT NULL, "filters" text NULL, "filesystem" text NULL,
"additional_info" text NULL);
CREATE TABLE "{{folders_mapping}}" ("id" serial NOT NULL PRIMARY KEY, "virtual_path" varchar(512) NOT NULL,
"quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "folder_id" integer NOT NULL, "user_id" integer NOT NULL);
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "{{prefix}}unique_mapping" UNIQUE ("user_id", "folder_id");
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "{{prefix}}folders_mapping_folder_id_fk_folders_id"
FOREIGN KEY ("folder_id") REFERENCES "{{folders}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "{{prefix}}folders_mapping_user_id_fk_users_id"
FOREIGN KEY ("user_id") REFERENCES "{{users}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
CREATE INDEX "{{prefix}}folders_mapping_folder_id_idx" ON "{{folders_mapping}}" ("folder_id");
CREATE INDEX "{{prefix}}folders_mapping_user_id_idx" ON "{{folders_mapping}}" ("user_id");
INSERT INTO {{schema_version}} (version) VALUES (8);
`
pgsqlV7DownSQL = `DROP TABLE "{{admins}}" CASCADE;`
pgsqlV8SQL = `ALTER TABLE "{{folders}}" ADD COLUMN "name" varchar(255) NULL;
ALTER TABLE "{{folders}}" ALTER COLUMN "path" DROP NOT NULL;
ALTER TABLE "{{folders}}" DROP CONSTRAINT IF EXISTS folders_path_key;
UPDATE "{{folders}}" f1 SET name = (SELECT CONCAT('folder',f2.id) FROM "{{folders}}" f2 WHERE f2.id = f1.id);
ALTER TABLE "{{folders}}" ALTER COLUMN "name" SET NOT NULL;
ALTER TABLE "{{folders}}" ADD CONSTRAINT "folders_name_uniq" UNIQUE ("name");
`
pgsqlV9SQL = `ALTER TABLE "{{admins}}" ADD COLUMN "description" varchar(512) NULL;
ALTER TABLE "{{folders}}" ADD COLUMN "description" varchar(512) NULL;
ALTER TABLE "{{folders}}" ADD COLUMN "filesystem" text NULL;
ALTER TABLE "{{users}}" ADD COLUMN "description" varchar(512) NULL;
`
pgsqlV8DownSQL = `ALTER TABLE "{{folders}}" DROP COLUMN "name" CASCADE;
ALTER TABLE "{{folders}}" ALTER COLUMN "path" SET NOT NULL;
ALTER TABLE "{{folders}}" ADD CONSTRAINT folders_path_key UNIQUE (path);
`
pgsqlV9DownSQL = `ALTER TABLE "{{users}}" DROP COLUMN "description" CASCADE;
ALTER TABLE "{{folders}}" DROP COLUMN "filesystem" CASCADE;
ALTER TABLE "{{folders}}" DROP COLUMN "description" CASCADE;
ALTER TABLE "{{admins}}" DROP COLUMN "description" CASCADE;
`
)
@@ -69,7 +69,6 @@ func init() {
func initializePGSQLProvider() error {
var err error
logSender = fmt.Sprintf("dataprovider_%v", PGSQLDataProviderName)
dbHandle, err := sql.Open("postgres", getPGSQLConnectionString(false))
if err == nil {
providerLog(logger.LevelDebug, "postgres database handle created, connection string: %#v, pool size: %v",
@@ -112,6 +111,10 @@ func (p *PGSQLProvider) validateUserAndPass(username, password, ip, protocol str
return sqlCommonValidateUserAndPass(username, password, ip, protocol, p.dbHandle)
}
func (p *PGSQLProvider) validateUserAndTLSCert(username, protocol string, tlsCert *x509.Certificate) (User, error) {
return sqlCommonValidateUserAndTLSCertificate(username, protocol, tlsCert, p.dbHandle)
}
func (p *PGSQLProvider) validateUserAndPubKey(username string, publicKey []byte) (User, string, error) {
return sqlCommonValidateUserAndPubKey(username, publicKey, p.dbHandle)
}
@@ -228,27 +231,20 @@ func (p *PGSQLProvider) initializeDatabase() error {
if err == nil && dbVersion.Version > 0 {
return ErrNoInitRequired
}
sqlUsers := strings.Replace(pgsqlUsersTableSQL, "{{users}}", sqlTableUsers, 1)
ctx, cancel := context.WithTimeout(context.Background(), longSQLQueryTimeout)
defer cancel()
initialSQL := strings.ReplaceAll(pgsqlInitial, "{{schema_version}}", sqlTableSchemaVersion)
initialSQL = strings.ReplaceAll(initialSQL, "{{admins}}", sqlTableAdmins)
initialSQL = strings.ReplaceAll(initialSQL, "{{folders}}", sqlTableFolders)
initialSQL = strings.ReplaceAll(initialSQL, "{{users}}", sqlTableUsers)
initialSQL = strings.ReplaceAll(initialSQL, "{{folders_mapping}}", sqlTableFoldersMapping)
initialSQL = strings.ReplaceAll(initialSQL, "{{prefix}}", config.SQLTablesPrefix)
if config.Driver == CockroachDataProviderName {
// Cockroach does not support deferrable constraint validation, we don't need it,
// we keep these definitions for the PostgreSQL driver to avoid changes for users
// upgrading from old SFTPGo versions
initialSQL = strings.ReplaceAll(initialSQL, "DEFERRABLE INITIALLY DEFERRED", "")
}
tx, err := p.dbHandle.BeginTx(ctx, nil)
if err != nil {
return err
}
_, err = tx.Exec(sqlUsers)
if err != nil {
return err
}
_, err = tx.Exec(strings.Replace(pgsqlSchemaTableSQL, "{{schema_version}}", sqlTableSchemaVersion, 1))
if err != nil {
return err
}
_, err = tx.Exec(strings.Replace(initialDBVersionSQL, "{{schema_version}}", sqlTableSchemaVersion, 1))
if err != nil {
return err
}
return tx.Commit()
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, []string{initialSQL}, 8)
}
func (p *PGSQLProvider) migrateDatabase() error {
@@ -256,200 +252,95 @@ func (p *PGSQLProvider) migrateDatabase() error {
if err != nil {
return err
}
if dbVersion.Version == sqlDatabaseVersion {
providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", dbVersion.Version)
switch version := dbVersion.Version; {
case version == sqlDatabaseVersion:
providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", version)
return ErrNoInitRequired
}
switch dbVersion.Version {
case 1:
return updatePGSQLDatabaseFromV1(p.dbHandle)
case 2:
return updatePGSQLDatabaseFromV2(p.dbHandle)
case 3:
return updatePGSQLDatabaseFromV3(p.dbHandle)
case 4:
return updatePGSQLDatabaseFromV4(p.dbHandle)
case 5:
return updatePGSQLDatabaseFromV5(p.dbHandle)
case 6:
return updatePGSQLDatabaseFromV6(p.dbHandle)
case 7:
return updatePGSQLDatabaseFromV7(p.dbHandle)
case version < 8:
err = fmt.Errorf("database version %v is too old, please see the upgrading docs", version)
providerLog(logger.LevelError, "%v", err)
logger.ErrorToConsole("%v", err)
return err
case version == 8:
return updatePGSQLDatabaseFromV8(p.dbHandle)
case version == 9:
return updatePGSQLDatabaseFromV9(p.dbHandle)
default:
if dbVersion.Version > sqlDatabaseVersion {
providerLog(logger.LevelWarn, "database version %v is newer than the supported: %v", dbVersion.Version,
if version > sqlDatabaseVersion {
providerLog(logger.LevelWarn, "database version %v is newer than the supported one: %v", version,
sqlDatabaseVersion)
logger.WarnToConsole("database version %v is newer than the supported: %v", dbVersion.Version,
logger.WarnToConsole("database version %v is newer than the supported one: %v", version,
sqlDatabaseVersion)
return nil
}
return fmt.Errorf("Database version not handled: %v", dbVersion.Version)
return fmt.Errorf("database version not handled: %v", version)
}
}
//nolint:dupl
func (p *PGSQLProvider) revertDatabase(targetVersion int) error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
if err != nil {
return err
}
if dbVersion.Version == targetVersion {
return fmt.Errorf("current version match target version, nothing to do")
return errors.New("current version match target version, nothing to do")
}
switch dbVersion.Version {
case 8:
err = downgradePGSQLDatabaseFrom8To7(p.dbHandle)
if err != nil {
return err
}
err = downgradePGSQLDatabaseFrom7To6(p.dbHandle)
if err != nil {
return err
}
err = downgradePGSQLDatabaseFrom6To5(p.dbHandle)
if err != nil {
return err
}
return downgradePGSQLDatabaseFrom5To4(p.dbHandle)
case 7:
err = downgradePGSQLDatabaseFrom7To6(p.dbHandle)
if err != nil {
return err
}
err = downgradePGSQLDatabaseFrom6To5(p.dbHandle)
if err != nil {
return err
}
return downgradePGSQLDatabaseFrom5To4(p.dbHandle)
case 6:
err = downgradePGSQLDatabaseFrom6To5(p.dbHandle)
if err != nil {
return err
}
return downgradePGSQLDatabaseFrom5To4(p.dbHandle)
case 5:
return downgradePGSQLDatabaseFrom5To4(p.dbHandle)
case 9:
return downgradePGSQLDatabaseFromV9(p.dbHandle)
case 10:
return downgradePGSQLDatabaseFromV10(p.dbHandle)
default:
return fmt.Errorf("Database version not handled: %v", dbVersion.Version)
return fmt.Errorf("database version not handled: %v", dbVersion.Version)
}
}
func updatePGSQLDatabaseFromV1(dbHandle *sql.DB) error {
err := updatePGSQLDatabaseFrom1To2(dbHandle)
if err != nil {
func updatePGSQLDatabaseFromV8(dbHandle *sql.DB) error {
if err := updatePGSQLDatabaseFrom8To9(dbHandle); err != nil {
return err
}
return updatePGSQLDatabaseFromV2(dbHandle)
return updatePGSQLDatabaseFromV9(dbHandle)
}
func updatePGSQLDatabaseFromV2(dbHandle *sql.DB) error {
err := updatePGSQLDatabaseFrom2To3(dbHandle)
if err != nil {
func updatePGSQLDatabaseFromV9(dbHandle *sql.DB) error {
return updatePGSQLDatabaseFrom9To10(dbHandle)
}
func downgradePGSQLDatabaseFromV9(dbHandle *sql.DB) error {
return downgradePGSQLDatabaseFrom9To8(dbHandle)
}
func downgradePGSQLDatabaseFromV10(dbHandle *sql.DB) error {
if err := downgradePGSQLDatabaseFrom10To9(dbHandle); err != nil {
return err
}
return updatePGSQLDatabaseFromV3(dbHandle)
return downgradePGSQLDatabaseFromV9(dbHandle)
}
func updatePGSQLDatabaseFromV3(dbHandle *sql.DB) error {
err := updatePGSQLDatabaseFrom3To4(dbHandle)
if err != nil {
return err
}
return updatePGSQLDatabaseFromV4(dbHandle)
func updatePGSQLDatabaseFrom8To9(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 8 -> 9")
providerLog(logger.LevelInfo, "updating database version: 8 -> 9")
sql := strings.ReplaceAll(pgsqlV9SQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 9)
}
func updatePGSQLDatabaseFromV4(dbHandle *sql.DB) error {
err := updatePGSQLDatabaseFrom4To5(dbHandle)
if err != nil {
return err
}
return updatePGSQLDatabaseFromV5(dbHandle)
}
func updatePGSQLDatabaseFromV5(dbHandle *sql.DB) error {
err := updatePGSQLDatabaseFrom5To6(dbHandle)
if err != nil {
return err
}
return updatePGSQLDatabaseFromV6(dbHandle)
}
func updatePGSQLDatabaseFromV6(dbHandle *sql.DB) error {
err := updatePGSQLDatabaseFrom6To7(dbHandle)
if err != nil {
return err
}
return updatePGSQLDatabaseFromV7(dbHandle)
}
func updatePGSQLDatabaseFromV7(dbHandle *sql.DB) error {
return updatePGSQLDatabaseFrom7To8(dbHandle)
}
func updatePGSQLDatabaseFrom1To2(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 1 -> 2")
providerLog(logger.LevelInfo, "updating database version: 1 -> 2")
sql := strings.Replace(pgsqlV2SQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 2)
}
func updatePGSQLDatabaseFrom2To3(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 2 -> 3")
providerLog(logger.LevelInfo, "updating database version: 2 -> 3")
sql := strings.Replace(pgsqlV3SQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 3)
}
func updatePGSQLDatabaseFrom3To4(dbHandle *sql.DB) error {
return sqlCommonUpdateDatabaseFrom3To4(pgsqlV4SQL, dbHandle)
}
func updatePGSQLDatabaseFrom4To5(dbHandle *sql.DB) error {
return sqlCommonUpdateDatabaseFrom4To5(dbHandle)
}
func updatePGSQLDatabaseFrom5To6(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 5 -> 6")
providerLog(logger.LevelInfo, "updating database version: 5 -> 6")
sql := strings.Replace(pgsqlV6SQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 6)
}
func updatePGSQLDatabaseFrom6To7(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 6 -> 7")
providerLog(logger.LevelInfo, "updating database version: 6 -> 7")
sql := strings.Replace(pgsqlV7SQL, "{{admins}}", sqlTableAdmins, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 7)
}
func updatePGSQLDatabaseFrom7To8(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 7 -> 8")
providerLog(logger.LevelInfo, "updating database version: 7 -> 8")
sql := strings.ReplaceAll(pgsqlV8SQL, "{{folders}}", sqlTableFolders)
func downgradePGSQLDatabaseFrom9To8(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 9 -> 8")
providerLog(logger.LevelInfo, "downgrading database version: 9 -> 8")
sql := strings.ReplaceAll(pgsqlV9DownSQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 8)
}
func downgradePGSQLDatabaseFrom8To7(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 8 -> 7")
providerLog(logger.LevelInfo, "downgrading database version: 8 -> 7")
sql := strings.ReplaceAll(pgsqlV8DownSQL, "{{folders}}", sqlTableFolders)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 7)
}
func updatePGSQLDatabaseFrom9To10(dbHandle *sql.DB) error {
return sqlCommonUpdateDatabaseFrom9To10(dbHandle)
}
func downgradePGSQLDatabaseFrom7To6(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 7 -> 6")
providerLog(logger.LevelInfo, "downgrading database version: 7 -> 6")
sql := strings.Replace(pgsqlV7DownSQL, "{{admins}}", sqlTableAdmins, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 6)
}
func downgradePGSQLDatabaseFrom6To5(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 6 -> 5")
providerLog(logger.LevelInfo, "downgrading database version: 6 -> 5")
sql := strings.Replace(pgsqlV6DownSQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 5)
}
func downgradePGSQLDatabaseFrom5To4(dbHandle *sql.DB) error {
return sqlCommonDowngradeDatabaseFrom5To4(dbHandle)
func downgradePGSQLDatabaseFrom10To9(dbHandle *sql.DB) error {
return sqlCommonDowngradeDatabaseFrom10To9(dbHandle)
}

View File

@@ -1,3 +1,4 @@
//go:build nopgsql
// +build nopgsql
package dataprovider

View File

@@ -0,0 +1,183 @@
package dataprovider
import (
"sync"
"time"
"github.com/drakkan/sftpgo/logger"
)
var delayedQuotaUpdater quotaUpdater
func init() {
delayedQuotaUpdater = newQuotaUpdater()
}
type quotaObject struct {
size int64
files int
}
type quotaUpdater struct {
paramsMutex sync.RWMutex
waitTime time.Duration
sync.RWMutex
pendingUserQuotaUpdates map[string]quotaObject
pendingFolderQuotaUpdates map[string]quotaObject
}
func newQuotaUpdater() quotaUpdater {
return quotaUpdater{
pendingUserQuotaUpdates: make(map[string]quotaObject),
pendingFolderQuotaUpdates: make(map[string]quotaObject),
}
}
func (q *quotaUpdater) start() {
q.setWaitTime(config.DelayedQuotaUpdate)
go q.loop()
}
func (q *quotaUpdater) loop() {
waitTime := q.getWaitTime()
providerLog(logger.LevelDebug, "delayed quota update loop started, wait time: %v", waitTime)
for waitTime > 0 {
// We do this with a time.Sleep instead of a time.Ticker because we don't know
// how long each quota processing cycle will take, and we want to make
// sure we wait the configured seconds between each iteration
time.Sleep(waitTime)
providerLog(logger.LevelDebug, "delayed quota update check start")
q.storeUsersQuota()
q.storeFoldersQuota()
providerLog(logger.LevelDebug, "delayed quota update check end")
waitTime = q.getWaitTime()
}
providerLog(logger.LevelDebug, "delayed quota update loop ended, wait time: %v", waitTime)
}
func (q *quotaUpdater) setWaitTime(secs int) {
q.paramsMutex.Lock()
defer q.paramsMutex.Unlock()
q.waitTime = time.Duration(secs) * time.Second
}
func (q *quotaUpdater) getWaitTime() time.Duration {
q.paramsMutex.RLock()
defer q.paramsMutex.RUnlock()
return q.waitTime
}
func (q *quotaUpdater) resetUserQuota(username string) {
q.Lock()
defer q.Unlock()
delete(q.pendingUserQuotaUpdates, username)
}
func (q *quotaUpdater) updateUserQuota(username string, files int, size int64) {
q.Lock()
defer q.Unlock()
obj := q.pendingUserQuotaUpdates[username]
obj.size += size
obj.files += files
if obj.files == 0 && obj.size == 0 {
delete(q.pendingUserQuotaUpdates, username)
return
}
q.pendingUserQuotaUpdates[username] = obj
}
func (q *quotaUpdater) getUserPendingQuota(username string) (int, int64) {
q.RLock()
defer q.RUnlock()
obj := q.pendingUserQuotaUpdates[username]
return obj.files, obj.size
}
func (q *quotaUpdater) resetFolderQuota(name string) {
q.Lock()
defer q.Unlock()
delete(q.pendingFolderQuotaUpdates, name)
}
func (q *quotaUpdater) updateFolderQuota(name string, files int, size int64) {
q.Lock()
defer q.Unlock()
obj := q.pendingFolderQuotaUpdates[name]
obj.size += size
obj.files += files
if obj.files == 0 && obj.size == 0 {
delete(q.pendingFolderQuotaUpdates, name)
return
}
q.pendingFolderQuotaUpdates[name] = obj
}
func (q *quotaUpdater) getFolderPendingQuota(name string) (int, int64) {
q.RLock()
defer q.RUnlock()
obj := q.pendingFolderQuotaUpdates[name]
return obj.files, obj.size
}
func (q *quotaUpdater) getUsernames() []string {
q.RLock()
defer q.RUnlock()
result := make([]string, 0, len(q.pendingUserQuotaUpdates))
for username := range q.pendingUserQuotaUpdates {
result = append(result, username)
}
return result
}
func (q *quotaUpdater) getFoldernames() []string {
q.RLock()
defer q.RUnlock()
result := make([]string, 0, len(q.pendingFolderQuotaUpdates))
for name := range q.pendingFolderQuotaUpdates {
result = append(result, name)
}
return result
}
func (q *quotaUpdater) storeUsersQuota() {
for _, username := range q.getUsernames() {
files, size := q.getUserPendingQuota(username)
if size != 0 || files != 0 {
err := provider.updateQuota(username, files, size, false)
if err != nil {
providerLog(logger.LevelWarn, "unable to update quota delayed for user %#v: %v", username, err)
continue
}
q.updateUserQuota(username, -files, -size)
}
}
}
func (q *quotaUpdater) storeFoldersQuota() {
for _, name := range q.getFoldernames() {
files, size := q.getFolderPendingQuota(name)
if size != 0 || files != 0 {
err := provider.updateFolderQuota(name, files, size, false)
if err != nil {
providerLog(logger.LevelWarn, "unable to update quota delayed for folder %#v: %v", name, err)
continue
}
q.updateFolderQuota(name, -files, -size)
}
}
}
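
Callers interact with this file only through the pending maps: transfers add deltas, and the background loop flushes them to the data provider. Note how `storeUsersQuota` subtracts what it managed to persist by passing negated values back to `updateUserQuota`, so deltas accumulated during the flush aren't lost. A minimal usage sketch; the call site below is hypothetical wiring, not code from this diff:

// Hypothetical call site: record one uploaded file of `written` bytes for a
// user. The loop started by delayedQuotaUpdater.start() will persist the
// aggregated delta on its next pass.
func onUploadComplete(username string, written int64) {
	delayedQuotaUpdater.updateUserQuota(username, 1, written)
}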

File diff suppressed because it is too large

View File

@@ -1,10 +1,13 @@
//go:build !nosqlite
// +build !nosqlite
package dataprovider
import (
"context"
"crypto/x509"
"database/sql"
"errors"
"fmt"
"path/filepath"
"strings"
@@ -19,82 +22,59 @@ import (
)
const (
sqliteUsersTableSQL = `CREATE TABLE "{{users}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255)
NOT NULL UNIQUE, "password" varchar(255) NULL, "public_keys" text NULL, "home_dir" varchar(255) NOT NULL, "uid" integer NOT NULL,
"gid" integer NOT NULL, "max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL,
"permissions" text NOT NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL,
"last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL,
"expiration_date" bigint NOT NULL, "last_login" bigint NOT NULL, "status" integer NOT NULL, "filters" text NULL,
"filesystem" text NULL);`
sqliteSchemaTableSQL = `CREATE TABLE "{{schema_version}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "version" integer NOT NULL);`
sqliteV2SQL = `ALTER TABLE "{{users}}" ADD COLUMN "virtual_folders" text NULL;`
sqliteV3SQL = `CREATE TABLE "new__users" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE,
"password" text NULL, "public_keys" text NULL, "home_dir" varchar(255) NOT NULL, "uid" integer NOT NULL,
"gid" integer NOT NULL, "max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL,
"permissions" text NOT NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL,
"upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL, "expiration_date" bigint NOT NULL, "last_login" bigint NOT NULL,
"status" integer NOT NULL, "filters" text NULL, "filesystem" text NULL, "virtual_folders" text NULL);
INSERT INTO "new__users" ("id", "username", "public_keys", "home_dir", "uid", "gid", "max_sessions", "quota_size", "quota_files",
"permissions", "used_quota_size", "used_quota_files", "last_quota_update", "upload_bandwidth", "download_bandwidth", "expiration_date",
"last_login", "status", "filters", "filesystem", "virtual_folders", "password") SELECT "id", "username", "public_keys", "home_dir",
"uid", "gid", "max_sessions", "quota_size", "quota_files", "permissions", "used_quota_size", "used_quota_files", "last_quota_update",
"upload_bandwidth", "download_bandwidth", "expiration_date", "last_login", "status", "filters", "filesystem", "virtual_folders",
"password" FROM "{{users}}";
DROP TABLE "{{users}}";
ALTER TABLE "new__users" RENAME TO "{{users}}";`
sqliteV4SQL = `CREATE TABLE "{{folders}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "path" varchar(512) NOT NULL UNIQUE,
"used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL);
sqliteInitialSQL = `CREATE TABLE "{{schema_version}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "version" integer NOT NULL);
CREATE TABLE "{{admins}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE,
"password" varchar(255) NOT NULL, "email" varchar(255) NULL, "status" integer NOT NULL, "permissions" text NOT NULL,
"filters" text NULL, "additional_info" text NULL);
CREATE TABLE "{{folders}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "name" varchar(255) NOT NULL UNIQUE,
"path" varchar(512) NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL,
"last_quota_update" bigint NOT NULL);
CREATE TABLE "{{users}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE,
"password" text NULL, "public_keys" text NULL, "home_dir" varchar(512) NOT NULL, "uid" integer NOT NULL, "gid" integer NOT NULL,
"max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "permissions" text NOT NULL,
"used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL,
"upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL, "expiration_date" bigint NOT NULL,
"last_login" bigint NOT NULL, "status" integer NOT NULL, "filters" text NULL, "filesystem" text NULL,
"additional_info" text NULL);
CREATE TABLE "{{folders_mapping}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "virtual_path" varchar(512) NOT NULL,
"quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "folder_id" integer NOT NULL REFERENCES "{{folders}}" ("id")
ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED, "user_id" integer NOT NULL REFERENCES "{{users}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
CONSTRAINT "unique_mapping" UNIQUE ("user_id", "folder_id"));
CREATE TABLE "new__users" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE, "password" text NULL,
"public_keys" text NULL, "home_dir" varchar(512) NOT NULL, "uid" integer NOT NULL, "gid" integer NOT NULL, "max_sessions" integer NOT NULL,
CONSTRAINT "{{prefix}}unique_mapping" UNIQUE ("user_id", "folder_id"));
CREATE INDEX "{{prefix}}folders_mapping_folder_id_idx" ON "{{folders_mapping}}" ("folder_id");
CREATE INDEX "{{prefix}}folders_mapping_user_id_idx" ON "{{folders_mapping}}" ("user_id");
INSERT INTO {{schema_version}} (version) VALUES (8);
`
sqliteV9SQL = `ALTER TABLE "{{admins}}" ADD COLUMN "description" varchar(512) NULL;
ALTER TABLE "{{folders}}" ADD COLUMN "description" varchar(512) NULL;
ALTER TABLE "{{folders}}" ADD COLUMN "filesystem" text NULL;
ALTER TABLE "{{users}}" ADD COLUMN "description" varchar(512) NULL;
`
sqliteV9DownSQL = `CREATE TABLE "new__users" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "status" integer NOT NULL,
"expiration_date" bigint NOT NULL, "username" varchar(255) NOT NULL UNIQUE, "password" text NULL, "public_keys" text NULL,
"home_dir" varchar(512) NOT NULL, "uid" integer NOT NULL, "gid" integer NOT NULL, "max_sessions" integer NOT NULL,
"quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "permissions" text NOT NULL, "used_quota_size" bigint NOT NULL,
"used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL,
"expiration_date" bigint NOT NULL, "last_login" bigint NOT NULL, "status" integer NOT NULL, "filters" text NULL, "filesystem" text NULL);
INSERT INTO "new__users" ("id", "username", "password", "public_keys", "home_dir", "uid", "gid", "max_sessions", "quota_size", "quota_files",
"permissions", "used_quota_size", "used_quota_files", "last_quota_update", "upload_bandwidth", "download_bandwidth", "expiration_date",
"last_login", "status", "filters", "filesystem") SELECT "id", "username", "password", "public_keys", "home_dir", "uid", "gid", "max_sessions",
"quota_size", "quota_files", "permissions", "used_quota_size", "used_quota_files", "last_quota_update", "upload_bandwidth", "download_bandwidth",
"expiration_date", "last_login", "status", "filters", "filesystem" FROM "{{users}}";
"used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL,
"download_bandwidth" integer NOT NULL, "last_login" bigint NOT NULL, "filters" text NULL, "filesystem" text NULL,
"additional_info" text NULL);
INSERT INTO "new__users" ("id", "status", "expiration_date", "username", "password", "public_keys", "home_dir", "uid", "gid",
"max_sessions", "quota_size", "quota_files", "permissions", "used_quota_size", "used_quota_files", "last_quota_update",
"upload_bandwidth", "download_bandwidth", "last_login", "filters", "filesystem", "additional_info")
SELECT "id", "status", "expiration_date", "username", "password", "public_keys", "home_dir", "uid", "gid", "max_sessions",
"quota_size", "quota_files", "permissions", "used_quota_size", "used_quota_files", "last_quota_update", "upload_bandwidth",
"download_bandwidth", "last_login", "filters", "filesystem", "additional_info" FROM "{{users}}";
DROP TABLE "{{users}}";
ALTER TABLE "new__users" RENAME TO "{{users}}";
CREATE INDEX "folders_mapping_folder_id_idx" ON "{{folders_mapping}}" ("folder_id");
CREATE INDEX "folders_mapping_user_id_idx" ON "{{folders_mapping}}" ("user_id");
`
sqliteV6SQL = `ALTER TABLE "{{users}}" ADD COLUMN "additional_info" text NULL;`
sqliteV6DownSQL = `CREATE TABLE "new__users" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE,
"password" text NULL, "public_keys" text NULL, "home_dir" varchar(512) NOT NULL, "uid" integer NOT NULL, "gid" integer NOT NULL,
"max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "permissions" text NOT NULL,
"used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL,
"download_bandwidth" integer NOT NULL, "expiration_date" bigint NOT NULL, "last_login" bigint NOT NULL, "status" integer NOT NULL,
"filters" text NULL, "filesystem" text NULL);
INSERT INTO "new__users" ("id", "username", "password", "public_keys", "home_dir", "uid", "gid", "max_sessions", "quota_size", "quota_files",
"permissions", "used_quota_size", "used_quota_files", "last_quota_update", "upload_bandwidth", "download_bandwidth", "expiration_date",
"last_login", "status", "filters", "filesystem") SELECT "id", "username", "password", "public_keys", "home_dir", "uid", "gid", "max_sessions",
"quota_size", "quota_files", "permissions", "used_quota_size", "used_quota_files", "last_quota_update", "upload_bandwidth", "download_bandwidth",
"expiration_date", "last_login", "status", "filters", "filesystem" FROM "{{users}}";
DROP TABLE "{{users}}";
ALTER TABLE "new__users" RENAME TO "{{users}}";
`
sqliteV7SQL = `CREATE TABLE "{{admins}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE,
"password" varchar(255) NOT NULL, "email" varchar(255) NULL, "status" integer NOT NULL, "permissions" text NOT NULL, "filters" text NULL,
"additional_info" text NULL);`
sqliteV7DownSQL = `DROP TABLE "{{admins}}";`
sqliteV8SQL = `CREATE TABLE "new__folders" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"name" varchar(255) NOT NULL UNIQUE, "path" varchar(512) NULL, "used_quota_size" bigint NOT NULL,
"used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL);
INSERT INTO "new__folders" ("id", "path", "used_quota_size", "used_quota_files", "last_quota_update", "name")
SELECT "id", "path", "used_quota_size", "used_quota_files", "last_quota_update", ('folder' || "id") FROM "{{folders}}";
DROP TABLE "{{folders}}";
ALTER TABLE "new__folders" RENAME TO "{{folders}}";
`
sqliteV8DownSQL = `CREATE TABLE "new__folders" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"path" varchar(512) NOT NULL UNIQUE, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL,
"last_quota_update" bigint NOT NULL);
INSERT INTO "new__folders" ("id", "path", "used_quota_size", "used_quota_files", "last_quota_update")
SELECT "id", "path", "used_quota_size", "used_quota_files", "last_quota_update" FROM "{{folders}}";
CREATE TABLE "new__admins" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE,
"password" varchar(255) NOT NULL, "email" varchar(255) NULL, "status" integer NOT NULL, "permissions" text NOT NULL,
"filters" text NULL, "additional_info" text NULL);
INSERT INTO "new__admins" ("id", "username", "password", "email", "status", "permissions", "filters", "additional_info")
SELECT "id", "username", "password", "email", "status", "permissions", "filters", "additional_info" FROM "{{admins}}";
DROP TABLE "{{admins}}";
ALTER TABLE "new__admins" RENAME TO "{{admins}}";
CREATE TABLE "new__folders" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "name" varchar(255) NOT NULL UNIQUE,
"path" varchar(512) NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL);
INSERT INTO "new__folders" ("id", "name", "path", "used_quota_size", "used_quota_files", "last_quota_update")
SELECT "id", "name", "path", "used_quota_size", "used_quota_files", "last_quota_update" FROM "{{folders}}";
DROP TABLE "{{folders}}";
ALTER TABLE "new__folders" RENAME TO "{{folders}}";
`
@@ -112,11 +92,11 @@ func init() {
func initializeSQLiteProvider(basePath string) error {
var err error
var connectionString string
logSender = fmt.Sprintf("dataprovider_%v", SQLiteDataProviderName)
if config.ConnectionString == "" {
dbPath := config.Name
if !utils.IsFileInputValid(dbPath) {
return fmt.Errorf("Invalid database path: %#v", dbPath)
return fmt.Errorf("invalid database path: %#v", dbPath)
}
if !filepath.IsAbs(dbPath) {
dbPath = filepath.Join(basePath, dbPath)
@@ -145,6 +125,10 @@ func (p *SQLiteProvider) validateUserAndPass(username, password, ip, protocol st
return sqlCommonValidateUserAndPass(username, password, ip, protocol, p.dbHandle)
}
func (p *SQLiteProvider) validateUserAndTLSCert(username, protocol string, tlsCert *x509.Certificate) (User, error) {
return sqlCommonValidateUserAndTLSCertificate(username, protocol, tlsCert, p.dbHandle)
}
func (p *SQLiteProvider) validateUserAndPubKey(username string, publicKey []byte) (User, string, error) {
return sqlCommonValidateUserAndPubKey(username, publicKey, p.dbHandle)
}
@@ -261,27 +245,14 @@ func (p *SQLiteProvider) initializeDatabase() error {
if err == nil && dbVersion.Version > 0 {
return ErrNoInitRequired
}
sqlUsers := strings.Replace(sqliteUsersTableSQL, "{{users}}", sqlTableUsers, 1)
ctx, cancel := context.WithTimeout(context.Background(), longSQLQueryTimeout)
defer cancel()
initialSQL := strings.ReplaceAll(sqliteInitialSQL, "{{schema_version}}", sqlTableSchemaVersion)
initialSQL = strings.ReplaceAll(initialSQL, "{{admins}}", sqlTableAdmins)
initialSQL = strings.ReplaceAll(initialSQL, "{{folders}}", sqlTableFolders)
initialSQL = strings.ReplaceAll(initialSQL, "{{users}}", sqlTableUsers)
initialSQL = strings.ReplaceAll(initialSQL, "{{folders_mapping}}", sqlTableFoldersMapping)
initialSQL = strings.ReplaceAll(initialSQL, "{{prefix}}", config.SQLTablesPrefix)
tx, err := p.dbHandle.BeginTx(ctx, nil)
if err != nil {
return err
}
_, err = tx.Exec(sqlUsers)
if err != nil {
return err
}
_, err = tx.Exec(strings.Replace(sqliteSchemaTableSQL, "{{schema_version}}", sqlTableSchemaVersion, 1))
if err != nil {
return err
}
_, err = tx.Exec(strings.Replace(initialDBVersionSQL, "{{schema_version}}", sqlTableSchemaVersion, 1))
if err != nil {
return err
}
return tx.Commit()
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, []string{initialSQL}, 8)
}
func (p *SQLiteProvider) migrateDatabase() error {
@@ -289,185 +260,105 @@ func (p *SQLiteProvider) migrateDatabase() error {
if err != nil {
return err
}
if dbVersion.Version == sqlDatabaseVersion {
providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", dbVersion.Version)
switch version := dbVersion.Version; {
case version == sqlDatabaseVersion:
providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", version)
return ErrNoInitRequired
}
switch dbVersion.Version {
case 1:
return updateSQLiteDatabaseFromV1(p.dbHandle)
case 2:
return updateSQLiteDatabaseFromV2(p.dbHandle)
case 3:
return updateSQLiteDatabaseFromV3(p.dbHandle)
case 4:
return updateSQLiteDatabaseFromV4(p.dbHandle)
case 5:
return updateSQLiteDatabaseFromV5(p.dbHandle)
case 6:
return updateSQLiteDatabaseFromV6(p.dbHandle)
case 7:
return updateSQLiteDatabaseFromV7(p.dbHandle)
case version < 8:
err = fmt.Errorf("database version %v is too old, please see the upgrading docs", version)
providerLog(logger.LevelError, "%v", err)
logger.ErrorToConsole("%v", err)
return err
case version == 8:
return updateSQLiteDatabaseFromV8(p.dbHandle)
case version == 9:
return updateSQLiteDatabaseFromV9(p.dbHandle)
default:
if dbVersion.Version > sqlDatabaseVersion {
providerLog(logger.LevelWarn, "database version %v is newer than the supported: %v", dbVersion.Version,
if version > sqlDatabaseVersion {
providerLog(logger.LevelWarn, "database version %v is newer than the supported one: %v", version,
sqlDatabaseVersion)
logger.WarnToConsole("database version %v is newer than the supported: %v", dbVersion.Version,
logger.WarnToConsole("database version %v is newer than the supported one: %v", version,
sqlDatabaseVersion)
return nil
}
return fmt.Errorf("Database version not handled: %v", dbVersion.Version)
return fmt.Errorf("database version not handled: %v", version)
}
}
//nolint:dupl
func (p *SQLiteProvider) revertDatabase(targetVersion int) error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
if err != nil {
return err
}
if dbVersion.Version == targetVersion {
return fmt.Errorf("current version match target version, nothing to do")
return errors.New("current version match target version, nothing to do")
}
switch dbVersion.Version {
case 8:
err = downgradeSQLiteDatabaseFrom8To7(p.dbHandle)
if err != nil {
return err
}
err = downgradeSQLiteDatabaseFrom7To6(p.dbHandle)
if err != nil {
return err
}
err = downgradeSQLiteDatabaseFrom6To5(p.dbHandle)
if err != nil {
return err
}
return downgradeSQLiteDatabaseFrom5To4(p.dbHandle)
case 7:
err = downgradeSQLiteDatabaseFrom7To6(p.dbHandle)
if err != nil {
return err
}
err = downgradeSQLiteDatabaseFrom6To5(p.dbHandle)
if err != nil {
return err
}
return downgradeSQLiteDatabaseFrom5To4(p.dbHandle)
case 6:
err = downgradeSQLiteDatabaseFrom6To5(p.dbHandle)
if err != nil {
return err
}
return downgradeSQLiteDatabaseFrom5To4(p.dbHandle)
case 5:
return downgradeSQLiteDatabaseFrom5To4(p.dbHandle)
case 9:
return downgradeSQLiteDatabaseFromV9(p.dbHandle)
case 10:
return downgradeSQLiteDatabaseFromV10(p.dbHandle)
default:
return fmt.Errorf("Database version not handled: %v", dbVersion.Version)
return fmt.Errorf("database version not handled: %v", dbVersion.Version)
}
}
func updateSQLiteDatabaseFromV1(dbHandle *sql.DB) error {
err := updateSQLiteDatabaseFrom1To2(dbHandle)
if err != nil {
func updateSQLiteDatabaseFromV8(dbHandle *sql.DB) error {
if err := updateSQLiteDatabaseFrom8To9(dbHandle); err != nil {
return err
}
return updateSQLiteDatabaseFromV2(dbHandle)
return updateSQLiteDatabaseFromV9(dbHandle)
}
func updateSQLiteDatabaseFromV2(dbHandle *sql.DB) error {
err := updateSQLiteDatabaseFrom2To3(dbHandle)
if err != nil {
func updateSQLiteDatabaseFromV9(dbHandle *sql.DB) error {
return updateSQLiteDatabaseFrom9To10(dbHandle)
}
func downgradeSQLiteDatabaseFromV9(dbHandle *sql.DB) error {
return downgradeSQLiteDatabaseFrom9To8(dbHandle)
}
func downgradeSQLiteDatabaseFromV10(dbHandle *sql.DB) error {
if err := downgradeSQLiteDatabaseFrom10To9(dbHandle); err != nil {
return err
}
return updateSQLiteDatabaseFromV3(dbHandle)
return downgradeSQLiteDatabaseFromV9(dbHandle)
}
func updateSQLiteDatabaseFromV3(dbHandle *sql.DB) error {
err := updateSQLiteDatabaseFrom3To4(dbHandle)
if err != nil {
return err
}
return updateSQLiteDatabaseFromV4(dbHandle)
func updateSQLiteDatabaseFrom8To9(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 8 -> 9")
providerLog(logger.LevelInfo, "updating database version: 8 -> 9")
sql := strings.ReplaceAll(sqliteV9SQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 9)
}
func updateSQLiteDatabaseFromV4(dbHandle *sql.DB) error {
err := updateSQLiteDatabaseFrom4To5(dbHandle)
if err != nil {
return err
}
return updateSQLiteDatabaseFromV5(dbHandle)
}
func updateSQLiteDatabaseFromV5(dbHandle *sql.DB) error {
err := updateSQLiteDatabaseFrom5To6(dbHandle)
if err != nil {
return err
}
return updateSQLiteDatabaseFromV6(dbHandle)
}
func updateSQLiteDatabaseFromV6(dbHandle *sql.DB) error {
err := updateSQLiteDatabaseFrom6To7(dbHandle)
if err != nil {
return err
}
return updateSQLiteDatabaseFromV7(dbHandle)
}
func updateSQLiteDatabaseFromV7(dbHandle *sql.DB) error {
return updateSQLiteDatabaseFrom7To8(dbHandle)
}
func updateSQLiteDatabaseFrom1To2(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 1 -> 2")
providerLog(logger.LevelInfo, "updating database version: 1 -> 2")
sql := strings.Replace(sqliteV2SQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 2)
}
func updateSQLiteDatabaseFrom2To3(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 2 -> 3")
providerLog(logger.LevelInfo, "updating database version: 2 -> 3")
sql := strings.ReplaceAll(sqliteV3SQL, "{{users}}", sqlTableUsers)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 3)
}
func updateSQLiteDatabaseFrom3To4(dbHandle *sql.DB) error {
return sqlCommonUpdateDatabaseFrom3To4(sqliteV4SQL, dbHandle)
}
func updateSQLiteDatabaseFrom4To5(dbHandle *sql.DB) error {
return sqlCommonUpdateDatabaseFrom4To5(dbHandle)
}
func updateSQLiteDatabaseFrom5To6(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 5 -> 6")
providerLog(logger.LevelInfo, "updating database version: 5 -> 6")
sql := strings.Replace(sqliteV6SQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 6)
}
func updateSQLiteDatabaseFrom6To7(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 6 -> 7")
providerLog(logger.LevelInfo, "updating database version: 6 -> 7")
sql := strings.Replace(sqliteV7SQL, "{{admins}}", sqlTableAdmins, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 7)
}
func updateSQLiteDatabaseFrom7To8(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 7 -> 8")
providerLog(logger.LevelInfo, "updating database version: 7 -> 8")
func downgradeSQLiteDatabaseFrom9To8(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 9 -> 8")
providerLog(logger.LevelInfo, "downgrading database version: 9 -> 8")
if err := setPragmaFK(dbHandle, "OFF"); err != nil {
return err
}
sql := strings.ReplaceAll(sqliteV8SQL, "{{folders}}", sqlTableFolders)
sql := strings.ReplaceAll(sqliteV9DownSQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
if err := sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 8); err != nil {
return err
}
return setPragmaFK(dbHandle, "ON")
}
func updateSQLiteDatabaseFrom9To10(dbHandle *sql.DB) error {
return sqlCommonUpdateDatabaseFrom9To10(dbHandle)
}
func downgradeSQLiteDatabaseFrom10To9(dbHandle *sql.DB) error {
return sqlCommonDowngradeDatabaseFrom10To9(dbHandle)
}
func setPragmaFK(dbHandle *sql.DB, value string) error {
ctx, cancel := context.WithTimeout(context.Background(), longSQLQueryTimeout)
defer cancel()
@@ -477,34 +368,3 @@ func setPragmaFK(dbHandle *sql.DB, value string) error {
_, err := dbHandle.ExecContext(ctx, sql)
return err
}
func downgradeSQLiteDatabaseFrom8To7(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 8 -> 7")
providerLog(logger.LevelInfo, "downgrading database version: 8 -> 7")
if err := setPragmaFK(dbHandle, "OFF"); err != nil {
return err
}
sql := strings.ReplaceAll(sqliteV8DownSQL, "{{folders}}", sqlTableFolders)
if err := sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 7); err != nil {
return err
}
return setPragmaFK(dbHandle, "ON")
}
func downgradeSQLiteDatabaseFrom7To6(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 7 -> 6")
providerLog(logger.LevelInfo, "downgrading database version: 7 -> 6")
sql := strings.Replace(sqliteV7DownSQL, "{{admins}}", sqlTableAdmins, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 6)
}
func downgradeSQLiteDatabaseFrom6To5(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 6 -> 5")
providerLog(logger.LevelInfo, "downgrading database version: 6 -> 5")
sql := strings.ReplaceAll(sqliteV6DownSQL, "{{users}}", sqlTableUsers)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 5)
}
func downgradeSQLiteDatabaseFrom5To4(dbHandle *sql.DB) error {
return sqlCommonDowngradeDatabaseFrom5To4(dbHandle)
}
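
A note on the SQLite-specific shape of these downgrades: SQLite's `ALTER TABLE` cannot drop columns (before SQLite 3.35), so every column removal uses the classic rebuild idiom, and `PRAGMA foreign_keys` is toggled via `setPragmaFK` outside the rebuild because that pragma is a no-op inside a transaction. A stripped-down illustration of the idiom; the table and column names are invented:

// Rebuild idiom used by the sqliteV*DownSQL constants above, in isolation:
// create the replacement table, copy the surviving columns, drop the
// original, then rename.
const sqliteRebuildExample = `
CREATE TABLE "new__t" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "kept" text NULL);
INSERT INTO "new__t" ("id", "kept") SELECT "id", "kept" FROM "t";
DROP TABLE "t";
ALTER TABLE "new__t" RENAME TO "t";`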

View File

@@ -1,3 +1,4 @@
//go:build nosqlite
// +build nosqlite
package dataprovider

View File

@@ -10,15 +10,16 @@ import (
const (
selectUserFields = "id,username,password,public_keys,home_dir,uid,gid,max_sessions,quota_size,quota_files,permissions,used_quota_size," +
"used_quota_files,last_quota_update,upload_bandwidth,download_bandwidth,expiration_date,last_login,status,filters,filesystem,additional_info"
selectFolderFields = "id,path,used_quota_size,used_quota_files,last_quota_update,name"
selectAdminFields = "id,username,password,status,email,permissions,filters,additional_info"
"used_quota_files,last_quota_update,upload_bandwidth,download_bandwidth,expiration_date,last_login,status,filters,filesystem," +
"additional_info,description"
selectFolderFields = "id,path,used_quota_size,used_quota_files,last_quota_update,name,description,filesystem"
selectAdminFields = "id,username,password,status,email,permissions,filters,additional_info,description"
)
func getSQLPlaceholders() []string {
var placeholders []string
for i := 1; i <= 20; i++ {
if config.Driver == PGSQLDataProviderName {
if config.Driver == PGSQLDataProviderName || config.Driver == CockroachDataProviderName {
placeholders = append(placeholders, fmt.Sprintf("$%v", i))
} else {
placeholders = append(placeholders, "?")
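
The effect of the branch above: PostgreSQL and CockroachDB get numbered `$1..$20` markers while MySQL and SQLite get positional `?`. A small illustration of what a builder produces under each driver; the query itself is hypothetical:

// Assumes getSQLPlaceholders() has populated sqlPlaceholders.
// With config.Driver == PGSQLDataProviderName or CockroachDataProviderName:
//   SELECT ... FROM admins WHERE username = $1
// With any other driver:
//   SELECT ... FROM admins WHERE username = ?
func exampleAdminQuery() string {
	return fmt.Sprintf("SELECT %v FROM %v WHERE username = %v",
		selectAdminFields, sqlTableAdmins, sqlPlaceholders[0])
}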
@@ -41,15 +42,15 @@ func getDumpAdminsQuery() string {
}
func getAddAdminQuery() string {
return fmt.Sprintf(`INSERT INTO %v (username,password,status,email,permissions,filters,additional_info)
VALUES (%v,%v,%v,%v,%v,%v,%v)`, sqlTableAdmins, sqlPlaceholders[0], sqlPlaceholders[1],
sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6])
return fmt.Sprintf(`INSERT INTO %v (username,password,status,email,permissions,filters,additional_info,description)
VALUES (%v,%v,%v,%v,%v,%v,%v,%v)`, sqlTableAdmins, sqlPlaceholders[0], sqlPlaceholders[1],
sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7])
}
func getUpdateAdminQuery() string {
return fmt.Sprintf(`UPDATE %v SET password=%v,status=%v,email=%v,permissions=%v,filters=%v,additional_info=%v
return fmt.Sprintf(`UPDATE %v SET password=%v,status=%v,email=%v,permissions=%v,filters=%v,additional_info=%v,description=%v
WHERE username = %v`, sqlTableAdmins, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2],
sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6])
sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7])
}
func getDeleteAdminQuery() string {
@@ -94,20 +95,20 @@ func getQuotaQuery() string {
func getAddUserQuery() string {
return fmt.Sprintf(`INSERT INTO %v (username,password,public_keys,home_dir,uid,gid,max_sessions,quota_size,quota_files,permissions,
used_quota_size,used_quota_files,last_quota_update,upload_bandwidth,download_bandwidth,status,last_login,expiration_date,filters,
filesystem,additional_info)
VALUES (%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,0,0,0,%v,%v,%v,0,%v,%v,%v,%v)`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1],
filesystem,additional_info,description)
VALUES (%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,0,0,0,%v,%v,%v,0,%v,%v,%v,%v,%v)`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1],
sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7],
sqlPlaceholders[8], sqlPlaceholders[9], sqlPlaceholders[10], sqlPlaceholders[11], sqlPlaceholders[12], sqlPlaceholders[13],
sqlPlaceholders[14], sqlPlaceholders[15], sqlPlaceholders[16])
sqlPlaceholders[14], sqlPlaceholders[15], sqlPlaceholders[16], sqlPlaceholders[17])
}
func getUpdateUserQuery() string {
return fmt.Sprintf(`UPDATE %v SET password=%v,public_keys=%v,home_dir=%v,uid=%v,gid=%v,max_sessions=%v,quota_size=%v,
quota_files=%v,permissions=%v,upload_bandwidth=%v,download_bandwidth=%v,status=%v,expiration_date=%v,filters=%v,filesystem=%v,
additional_info=%v WHERE id = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3],
additional_info=%v,description=%v WHERE id = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3],
sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9],
sqlPlaceholders[10], sqlPlaceholders[11], sqlPlaceholders[12], sqlPlaceholders[13], sqlPlaceholders[14], sqlPlaceholders[15],
sqlPlaceholders[16])
sqlPlaceholders[16], sqlPlaceholders[17])
}
func getDeleteUserQuery() string {
@@ -118,13 +119,19 @@ func getFolderByNameQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v WHERE name = %v`, selectFolderFields, sqlTableFolders, sqlPlaceholders[0])
}
func checkFolderNameQuery() string {
return fmt.Sprintf(`SELECT name FROM %v WHERE name = %v`, sqlTableFolders, sqlPlaceholders[0])
}
func getAddFolderQuery() string {
return fmt.Sprintf(`INSERT INTO %v (path,used_quota_size,used_quota_files,last_quota_update,name) VALUES (%v,%v,%v,%v,%v)`,
sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4])
return fmt.Sprintf(`INSERT INTO %v (path,used_quota_size,used_quota_files,last_quota_update,name,description,filesystem)
VALUES (%v,%v,%v,%v,%v,%v,%v)`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2],
sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6])
}
func getUpdateFolderQuery() string {
return fmt.Sprintf(`UPDATE %v SET path = %v WHERE name = %v`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1])
return fmt.Sprintf(`UPDATE %v SET path=%v,description=%v,filesystem=%v WHERE name = %v`, sqlTableFolders, sqlPlaceholders[0],
sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
}
func getDeleteFolderQuery() string {
@@ -174,9 +181,9 @@ func getRelatedFoldersForUsersQuery(users []User) string {
if sb.Len() > 0 {
sb.WriteString(")")
}
return fmt.Sprintf(`SELECT f.id,f.name,f.path,f.used_quota_size,f.used_quota_files,f.last_quota_update,fm.virtual_path,fm.quota_size,fm.quota_files,fm.user_id
FROM %v f INNER JOIN %v fm ON f.id = fm.folder_id WHERE fm.user_id IN %v ORDER BY fm.user_id`, sqlTableFolders,
sqlTableFoldersMapping, sb.String())
return fmt.Sprintf(`SELECT f.id,f.name,f.path,f.used_quota_size,f.used_quota_files,f.last_quota_update,fm.virtual_path,
fm.quota_size,fm.quota_files,fm.user_id,f.filesystem,f.description FROM %v f INNER JOIN %v fm ON f.id = fm.folder_id WHERE
fm.user_id IN %v ORDER BY fm.user_id`, sqlTableFolders, sqlTableFoldersMapping, sb.String())
}
func getRelatedUsersForFoldersQuery(folders []vfs.BaseVirtualFolder) string {
@@ -204,14 +211,18 @@ func getUpdateDBVersionQuery() string {
return fmt.Sprintf(`UPDATE %v SET version=%v`, sqlTableSchemaVersion, sqlPlaceholders[0])
}
/*func getCompatVirtualFoldersQuery() string {
return fmt.Sprintf(`SELECT id,username,virtual_folders FROM %v`, sqlTableUsers)
}*/
func getCompatV4FsConfigQuery() string {
func getCompatUserV10FsConfigQuery() string {
return fmt.Sprintf(`SELECT id,username,filesystem FROM %v`, sqlTableUsers)
}
func updateCompatV4FsConfigQuery() string {
func updateCompatUserV10FsConfigQuery() string {
return fmt.Sprintf(`UPDATE %v SET filesystem=%v WHERE id=%v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getCompatFolderV10FsConfigQuery() string {
return fmt.Sprintf(`SELECT id,name,filesystem FROM %v`, sqlTableFolders)
}
func updateCompatFolderV10FsConfigQuery() string {
return fmt.Sprintf(`UPDATE %v SET filesystem=%v WHERE id=%v`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1])
}
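
These builders only produce SQL text; the caller binds values positionally, in the same order as the column list in the query. A usage sketch for `getAddFolderQuery`; the context, handle and folder values below are hypothetical:

func addFolderExample(ctx context.Context, dbHandle *sql.DB) error {
	// Bind order matches getAddFolderQuery:
	// path, used_quota_size, used_quota_files, last_quota_update, name, description, filesystem
	_, err := dbHandle.ExecContext(ctx, getAddFolderQuery(),
		"/srv/data", int64(0), 0, int64(0), "folder1", "", "")
	return err
}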

File diff suppressed because it is too large

View File

@@ -4,10 +4,10 @@ SFTPGo provides an official Docker image, it is available on both [Docker Hub](h
## Supported tags and respective Dockerfile links
- [v2.0.4, v2.0, v2, latest](https://github.com/drakkan/sftpgo/blob/v2.0.4/Dockerfile)
- [v2.0.4-alpine, v2.0-alpine, v2-alpine, alpine](https://github.com/drakkan/sftpgo/blob/v2.0.4/Dockerfile.alpine)
- [v2.0.4-slim, v2.0-slim, v2-slim, slim](https://github.com/drakkan/sftpgo/blob/v2.0.4/Dockerfile)
- [v2.0.4-alpine-slim, v2.0-alpine-slim, v2-alpine-slim, alpine-slim](https://github.com/drakkan/sftpgo/blob/v2.0.4/Dockerfile.alpine)
- [v2.1.2, v2.1, v2, latest](https://github.com/drakkan/sftpgo/blob/v2.1.2/Dockerfile)
- [v2.1.2-alpine, v2.1-alpine, v2-alpine, alpine](https://github.com/drakkan/sftpgo/blob/v2.1.2/Dockerfile.alpine)
- [v2.1.2-slim, v2.1-slim, v2-slim, slim](https://github.com/drakkan/sftpgo/blob/v2.1.2/Dockerfile)
- [v2.1.2-alpine-slim, v2.1-alpine-slim, v2-alpine-slim, alpine-slim](https://github.com/drakkan/sftpgo/blob/v2.1.2/Dockerfile.alpine)
- [edge](../Dockerfile)
- [edge-alpine](../Dockerfile.alpine)
- [edge-slim](../Dockerfile)
@@ -25,10 +25,54 @@ docker run --name some-sftpgo -p 127.0.0.1:8080:8080 -p 2022:2022 -d "drakkan/sf
... where `some-sftpgo` is the name you want to assign to your container, and `tag` is the tag specifying the SFTPGo version you want. See the list above for relevant tags.
Now visit [http://localhost:8080/](http://localhost:8080/) and create a new SFTPGo user. The SFTP service is available on port 2022.
Now visit [http://localhost:8080/web/admin](http://localhost:8080/web/admin), create the first admin and then log in and create a new SFTPGo user. The SFTP service is available on port 2022.
If you don't want to persist any files, for example for testing purposes, you can run an SFTPGo instance like this:
```shell
docker run --rm --name some-sftpgo -p 8080:8080 -p 2022:2022 -d "drakkan/sftpgo:tag"
```
If you prefer GitHub Container Registry to Docker Hub replace `drakkan/sftpgo:tag` with `ghcr.io/drakkan/sftpgo:tag`.
### Enable FTP service
FTP is disabled by default. You can enable the FTP service by starting the SFTPGo instance this way:
```shell
docker run --name some-sftpgo \
-p 8080:8080 \
-p 2022:2022 \
-p 2121:2121 \
-p 50000-50100:50000-50100 \
-e SFTPGO_FTPD__BINDINGS__0__PORT=2121 \
-e SFTPGO_FTPD__BINDINGS__0__FORCE_PASSIVE_IP=<your external ip here> \
-d "drakkan/sftpgo:tag"
```
The FTP service is now available on port 2121 and SFTP on port 2022.
You can change the passive ports range (`50000-50100` by default) by setting the environment variables `SFTPGO_FTPD__PASSIVE_PORT_RANGE__START` and `SFTPGO_FTPD__PASSIVE_PORT_RANGE__END`.
It is recommended that you provide a certificate and key file to expose FTP over TLS. You should prefer SFTP to FTP even if you configure TLS; please don't blindly enable the old FTP protocol.
### Enable WebDAV service
WebDAV is disabled by default. You can enable the WebDAV service by starting the SFTPGo instance this way:
```shell
docker run --name some-sftpgo \
-p 8080:8080 \
-p 2022:2022 \
-p 10080:10080 \
-e SFTPGO_WEBDAVD__BINDINGS__0__PORT=10080 \
-d "drakkan/sftpgo:tag"
```
The WebDAV service is now available on port 10080 and SFTP on port 2022.
It is recommended that you provide a certificate and key file to expose WebDAV over HTTPS.
### Container shell access and viewing SFTPGo logs
The `docker exec` command allows you to run commands inside a Docker container. The following command line will give you a shell inside your `sftpgo` container:

View File

@@ -16,7 +16,7 @@ If the hook defines an external program it can read the following environment va
- `SFTPGO_AUTHD_USERNAME`
- `SFTPGO_AUTHD_PASSWORD`
- `SFTPGO_AUTHD_IP`
- `SFTPGO_AUTHD_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`
- `SFTPGO_AUTHD_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`, `HTTP`
Pre-existing global environment variables aren't cleared when the script is called. The content of these variables is _not_ quoted. They may contain special characters. They are under the control of a possibly malicious remote user.
@@ -42,4 +42,6 @@ You can also restrict the hook scope using the `check_password_scope` configurat
You can combine the scopes. For example, 6 means FTP and WebDAV.
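A sketch of the combination as bit flags; the individual values below are assumptions inferred from the documented example (`6` = FTP + WebDAV), so check the configuration docs for the authoritative list:

```go
// Assumed scope values for check_password_scope; 0 means all protocols.
const (
	scopeSSH    = 1
	scopeFTP    = 2
	scopeWebDAV = 4
)

// 2 | 4 == 6: hook enabled for FTP and WebDAV only, as in the example above.
var checkPasswordScope = scopeFTP | scopeWebDAV
```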
You can disable the hook on a per-user basis.
An example check password program allowing 2FA using password + one-time token can be found inside the source tree [checkpwd](../examples/OTP/authy/checkpwd) directory.

View File

@@ -1,15 +1,30 @@
# Custom Actions
The `actions` struct inside the "common" configuration section allows you to configure the actions for file operations and SSH commands.
The `actions` struct inside the `common` configuration section allows you to configure the actions for file operations and SSH commands.
The `hook` can be defined as the absolute path of your program or an HTTP URL.
The following `actions` are supported:
- `download`
- `pre-download`
- `upload`
- `pre-upload`
- `delete`
- `pre-delete`
- `rename`
- `ssh_cmd`
The `upload` condition includes both uploads to new files and overwrites of existing files. If an upload is aborted due to quota limits, SFTPGo tries to remove the partial file, so if the notification reports a zero size file and a quota exceeded error, the file has been deleted. The `ssh_cmd` condition will be triggered after a command is successfully executed via SSH. `scp` will trigger the `download` and `upload` conditions and not `ssh_cmd`.
The notification will indicate if an error is detected and so, for example, a partial file is uploaded.
The `pre-delete` action, if defined, will be called just before file deletion. If the external command completes with a zero exit status or the HTTP notification response code is `200` then SFTPGo will assume that the file was already deleted/moved and so it will not try to remove the file and it will not execute the hook defined for the `delete` action.
The `pre-download` and `pre-upload` actions will be called before downloads and uploads. If the external command completes with a zero exit status or the HTTP notification response code is `200` then SFTPGo allows the operation, otherwise the client will get a permission denied error.
If the `hook` defines a path to an external program, then this program is invoked with the following arguments:
- `action`, string, possible values are: `download`, `upload`, `pre-delete`,`delete`, `rename`, `ssh_cmd`
- `action`, string, supported action
- `username`
- `path` is the full filesystem path, can be empty for some ssh commands
- `target_path`, non-empty for `rename` action and for `sftpgo-copy` SSH command
@@ -22,12 +37,13 @@ The external program can also read the following environment variables:
- `SFTPGO_ACTION_PATH`
- `SFTPGO_ACTION_TARGET`, non-empty for `rename` `SFTPGO_ACTION`
- `SFTPGO_ACTION_SSH_CMD`, non-empty for `ssh_cmd` `SFTPGO_ACTION`
- `SFTPGO_ACTION_FILE_SIZE`, non-empty for `upload`, `download` and `delete` `SFTPGO_ACTION`
- `SFTPGO_ACTION_FS_PROVIDER`, `0` for local filesystem, `1` for S3 backend, `2` for Google Cloud Storage (GCS) backend, `3` for Azure Blob Storage backend
- `SFTPGO_ACTION_FILE_SIZE`, non-zero for `pre-upload`, `upload`, `download` and `delete` actions if the file size is greater than `0`
- `SFTPGO_ACTION_FS_PROVIDER`, `0` for local filesystem, `1` for S3 backend, `2` for Google Cloud Storage (GCS) backend, `3` for Azure Blob Storage backend, `4` for local encrypted backend, `5` for SFTP backend
- `SFTPGO_ACTION_BUCKET`, non-empty for S3, GCS and Azure backends
- `SFTPGO_ACTION_ENDPOINT`, non-empty for S3 and Azure backend if configured. For Azure this is the SAS URL, if configured otherwise the endpoint
- `SFTPGO_ACTION_STATUS`, integer. 0 means a generic error occurred. 1 means no error, 2 means quota exceeded error
- `SFTPGO_ACTION_PROTOCOL`, string. Possible values are `SSH`, `SFTP`, `SCP`, `FTP`, `DAV`
- `SFTPGO_ACTION_ENDPOINT`, non-empty for S3, SFTP and Azure backend if configured. For Azure this is the endpoint, if configured
- `SFTPGO_ACTION_STATUS`, integer. Status for `upload`, `download` and `ssh_cmd` actions. 0 means a generic error occurred. 1 means no error, 2 means quota exceeded error
- `SFTPGO_ACTION_PROTOCOL`, string. Possible values are `SSH`, `SFTP`, `SCP`, `FTP`, `DAV`, `HTTP`
- `SFTPGO_ACTION_OPEN_FLAGS`, integer. File open flags, can be non-zero for `pre-upload` action. If `SFTPGO_ACTION_FILE_SIZE` is greater than zero and `SFTPGO_ACTION_OPEN_FLAGS&512 == 0` the target file will not be truncated
Previous global environment variables aren't cleared when the script is called.
The program must finish within 30 seconds.
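For example, a minimal notification hook could log completed uploads using only the environment variables described above; the log file path here is an arbitrary choice:
```shell
#!/bin/sh
# log successfully completed uploads, ignore every other action
if [ "$SFTPGO_ACTION" = "upload" ] && [ "$SFTPGO_ACTION_STATUS" = "1" ]; then
  echo "$(date -u) uploaded ${SFTPGO_ACTION_PATH} (${SFTPGO_ACTION_FILE_SIZE:-0} bytes)" >> /var/log/sftpgo-uploads.log
fi
exit 0
```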
@@ -37,18 +53,21 @@ If the `hook` defines an HTTP URL then this URL will be invoked as HTTP POST. Th
- `action`
- `username`
- `path`
- `target_path`, not null for `rename` action
- `ssh_cmd`, not null for `ssh_cmd` action
- `file_size`, not null for `upload`, `download`, `delete` actions
- `fs_provider`, `0` for local filesystem, `1` for S3 backend, `2` for Google Cloud Storage (GCS) backend, `3` for Azure Blob Storage backend
- `bucket`, not null for S3, GCS and Azure backends
- `endpoint`, not null for S3 and Azure backend if configured. For Azure this is the SAS URL, if configured otherwise the endpoint
- `status`, integer. 0 means a generic error occurred. 1 means no error, 2 means quota exceeded error
- `protocol`, string. Possible values are `SSH`, `FTP`, `DAV`
- `target_path`, included for `rename` action
- `ssh_cmd`, included for `ssh_cmd` action
- `file_size`, included for `pre-upload`, `upload`, `download`, `delete` actions if the file size is greater than `0`
- `fs_provider`, `0` for local filesystem, `1` for S3 backend, `2` for Google Cloud Storage (GCS) backend, `3` for Azure Blob Storage backend, `4` for local encrypted backend, `5` for SFTP backend
- `bucket`, included for S3, GCS and Azure backends
- `endpoint`, included for S3, SFTP and Azure backend if configured. For Azure this is the endpoint, if configured
- `status`, integer. Status for `upload`, `download` and `ssh_cmd` actions. 0 means a generic error occurred. 1 means no error, 2 means quota exceeded error
- `protocol`, string. Possible values are `SSH`, `SFTP`, `SCP`, `FTP`, `DAV`, `HTTP`
- `open_flags`, integer. File open flags, can be non-zero for `pre-upload` action. If `file_size` is greater than zero and `open_flags&512 == 0` the target file will not be truncated
The HTTP hook will use the global configuration for HTTP clients and will respect the retry configurations.
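As an illustration, the POST body for a successfully completed SFTP upload might look like the following; all field values are made up for the example:
```json
{
  "action": "upload",
  "username": "a-user",
  "path": "/srv/sftpgo/data/a-user/file.txt",
  "file_size": 4034,
  "fs_provider": 0,
  "status": 1,
  "protocol": "SFTP"
}
```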
The `actions` struct inside the "data_provider" configuration section allows you to configure actions on user add, update, delete.
The `pre-*` actions are always executed synchronously while the other ones are asynchronous. You can specify the actions to run synchronously via the `execute_sync` configuration key. Executing an action synchronously means that SFTPGo will not return a result code to the client (which is waiting for it) until your hook has completed its execution. If your hook takes a long time to complete this could cause a timeout on the client side, which wouldn't receive the server response in a timely manner and might eventually drop the connection.
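A possible `actions` configuration inside the `common` section, combining the keys described above, could look like this (the hook URL is a placeholder):
```json
"actions": {
  "execute_on": ["pre-upload", "upload", "delete"],
  "execute_sync": ["upload"],
  "hook": "http://127.0.0.1:8000/sftpgo-events"
}
```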
The `actions` struct inside the `data_provider` configuration section allows you to configure actions on user add, update, delete.
Actions will not be fired for internal updates, such as the last login or the user quota fields, or after external authentication.
@@ -74,3 +93,5 @@ The program must finish within 15 seconds.
If the `hook` defines an HTTP URL then this URL will be invoked as HTTP POST. The action is added to the query string, for example `<hook>?action=update`, and the user is sent serialized as JSON inside the POST body with sensitive fields removed.
The HTTP hook will use the global configuration for HTTP clients and will respect the retry configurations.
The structure for SFTPGo users can be found within the [OpenAPI schema](../httpd/schema/openapi.yaml).

View File

@@ -12,8 +12,7 @@ The passphrase is stored encrypted itself according to your [KMS configuration](
The encrypted filesystem has some limitations compared to the local, unencrypted, one:
- Upload resume is not supported.
- Resuming uploads is not supported.
- Opening a file for both reading and writing at the same time is not supported and so clients that require advanced filesystem-like features such as `sshfs` are not supported either.
- Truncate is not supported.
- System commands such as `git` or `rsync` are not supported: they will store data unencrypted.
- Virtual folders are not implemented for now. If you are interested in this feature, please consider submitting a well-written pull request (fully covered by test cases) or sponsoring this development. We could add a filesystem configuration to each virtual folder so we can mount encrypted or cloud backends as subfolders for local filesystems and vice versa.

View File

@@ -8,6 +8,7 @@ You can configure a score for each event type:
- `score_valid`, defines the score for valid login attempts, e.g. user accounts that exist. Default `1`.
- `score_invalid`, defines the score for invalid login attempts, e.g. non-existent user accounts or client disconnected for inactivity without authentication attempts. Default `2`.
- `score_limit_exceeded`, defines the score for hosts that exceeded the configured rate limits or the configured max connections per host. Default `3`.
And then you can configure:
@@ -25,13 +26,10 @@ The `ban_time_increment` is calculated as percentage of `ban_time`, so if `ban_t
The `defender` will keep in memory both the host scores and the banned hosts, you can limit the memory usage using the `entries_soft_limit` and `entries_hard_limit` configuration keys.
The REST API allows:
- to retrieve the score for an IP address
- to retrieve the ban time for an IP address
- to unban an IP address
We don't return the whole list of the banned IP addresses or all stored scores because we store them as a hash map and iterating over all the keys of a hash map is not a fast operation and would slow down the recording of new events.
Using the REST API you can:
- list hosts within the defender's lists
- remove hosts from the defender's lists
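For example, assuming the HTTP API listens on localhost and `$TOKEN` holds a valid admin JWT, the defender endpoints could be used like this (the paths below are a sketch; the authoritative ones are in the OpenAPI schema):
```shell
# list the hosts within the defender's lists
curl -H "Authorization: Bearer $TOKEN" "http://127.0.0.1:8080/api/v2/defender/hosts"
# remove a host, using the id returned by the list call above
curl -X DELETE -H "Authorization: Bearer $TOKEN" "http://127.0.0.1:8080/api/v2/defender/hosts/<host id>"
```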
The `defender` can also load a permanent block list and/or a safe list of IP addresses/networks from a file:

View File

@@ -6,9 +6,9 @@ To enable dynamic user modification, you must set the absolute path of your prog
The external program can read the following environment variables to get info about the user trying to login:
- `SFTPGO_LOGIND_USER`, it contains the user trying to login serialized as JSON. A JSON serialized user id equal to zero means the user does not exist inside SFTPGo
- `SFTPGO_LOGIND_METHOD`, possible values are: `password`, `publickey` and `keyboard-interactive`
- `SFTPGO_LOGIND_METHOD`, possible values are: `password`, `publickey`, `keyboard-interactive`, `TLSCertificate`
- `SFTPGO_LOGIND_IP`, IP address of the user trying to login
- `SFTPGO_LOGIND_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`
- `SFTPGO_LOGIND_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`, `HTTP`
The program must write, on its standard output:
@@ -35,6 +35,8 @@ If an error happens while executing the hook then login will be denied.
"Dynamic user creation or modification" and "External Authentication" are mutually exclusive, they are quite similar, the difference is that "External Authentication" returns an already authenticated user while using "Dynamic users modification" you simply create or update a user. The authentication will be checked inside SFTPGo.
In other words while using "External Authentication" the external program receives the credentials of the user trying to login (for example the cleartext password) and it needs to validate them. While using "Dynamic users modification" the pre-login program receives the user stored inside the dataprovider (it includes the hashed password if any) and it can modify it, after the modification SFTPGo will check the credentials of the user trying to login.
You can disable the hook on a per-user basis.
Let's see a very basic example. Our sample program will grant access to the existing user `test_user` only in the time range 10:00-18:00. Other users will not be modified since the program will terminate with no output.
```shell
@@ -53,3 +55,5 @@ fi
```
Please note that this is a demo program and it might not work in all cases. For example, the username should be obtained by parsing the JSON serialized user and not by searching the username inside the JSON as shown here.
The structure for SFTPGo users can be found within the [OpenAPI schema](../httpd/schema/openapi.yaml).

View File

@@ -5,37 +5,52 @@ To enable external authentication, you must set the absolute path of your authen
The external program can read the following environment variables to get info about the user trying to authenticate:
- `SFTPGO_AUTHD_USERNAME`
- `SFTPGO_AUTHD_USER`, SFTPGo user serialized as JSON, empty if the user does not exist within the data provider
- `SFTPGO_AUTHD_IP`
- `SFTPGO_AUTHD_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`
- `SFTPGO_AUTHD_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`, `HTTP`
- `SFTPGO_AUTHD_PASSWORD`, not empty for password authentication
- `SFTPGO_AUTHD_PUBLIC_KEY`, not empty for public key authentication
- `SFTPGO_AUTHD_KEYBOARD_INTERACTIVE`, not empty for keyboard interactive authentication
- `SFTPGO_AUTHD_TLS_CERT`, TLS client certificate PEM encoded. Not empty for TLS certificate authentication
Previous global environment variables aren't cleared when the script is called. The content of these variables is _not_ quoted. They may contain special characters. They are under the control of a possibly malicious remote user.
The program must write, on its standard output, a valid SFTPGo user serialized as JSON if the authentication succeeds or a user with an empty username if the authentication fails.
The program can inspect the SFTPGo user, if it exists, using the `SFTPGO_AUTHD_USER` environment variable.
The program must write, on its standard output:
- a valid SFTPGo user serialized as JSON if the authentication succeeds. The user will be added/updated within the defined data provider
- an empty string, or no response at all, if authentication succeeds and the existing SFTPGo user does not need to be updated. Please note that in versions 2.0.x and earlier an empty response was interpreted as an authentication error
- a user with an empty username if the authentication fails
If the hook is an HTTP URL then it will be invoked as HTTP POST. The request body will contain a JSON serialized struct with the following fields:
- `username`
- `ip`
- `protocol`, possible values are `SSH`, `FTP`, `DAV`
- `user`, SFTPGo user serialized as JSON, omitted if the user does not exist within the data provider
- `protocol`, possible values are `SSH`, `FTP`, `DAV`, `HTTP`
- `password`, not empty for password authentication
- `public_key`, not empty for public key authentication
- `keyboard_interactive`, not empty for keyboard interactive authentication
- `tls_cert`, TLS client certificate PEM encoded. Not empty for TLS certificate authentication
If authentication succeeds the HTTP response code must be 200 and the response body must be a valid SFTPGo user serialized as JSON. If the authentication fails the HTTP response code must be != 200 or the response body must be empty.
If the authentication succeeds, the user will be automatically added/updated inside the defined data provider. Actions defined for users added/updated will not be executed in this case and an already logged in user with the same username will not be disconnected; you have to handle these things yourself.
If authentication succeeds the HTTP response code must be 200 and the response body can be:
- a valid SFTPGo user serialized as JSON. The user will be added/updated within the defined data provider
- empty, the existing SFTPGo user does not need to be updated. Please note that in versions 2.0.x and earlier an empty response was interpreted as an authentication error
If the authentication fails the HTTP response code must be != 200 or the returned SFTPGo user must have an empty username.
Actions defined for users added/updated will not be executed in this case and an already logged in user with the same username will not be disconnected.
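For instance, a bare-bones successful response body could be a user such as the following; any field not set here keeps its default value:
```json
{
  "status": 1,
  "username": "test_user",
  "home_dir": "/srv/sftpgo/data/test_user",
  "permissions": {
    "/": ["*"]
  }
}
```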
The program hook must finish within 30 seconds, the HTTP hook timeout will use the global configuration for HTTP clients.
This method is slower than built-in authentication, but it's very flexible as anyone can easily write their own authentication hooks.
You can also restrict the authentication scope for the hook using the `external_auth_scope` configuration key:
- `0` means all supported authentication scopes. The external hook will be used for password, public key and keyboard interactive authentication
- `0` means all supported authentication scopes. The external hook will be used for password, public key, keyboard interactive and TLS certificate authentication
- `1` means passwords only
- `2` means public keys only
- `4` means keyboard interactive only
- `8` means TLS certificate only
You can combine the scopes. For example, 3 means password and public key, 5 means password and keyboard interactive, and so on.
@@ -51,6 +66,10 @@ else
fi
```
The structure for SFTPGo users can be found within the [OpenAPI schema](../httpd/schema/openapi.yaml).
You can disable the hook on a per-user basis so that you can mix external and internal users.
An example authentication program that authenticates against an LDAP server can be found inside the source tree [ldapauth](../examples/ldapauth) directory.
An example server that authenticates against an LDAP server, to use as HTTP authentication hook, can be found inside the source tree [ldapauthserver](../examples/ldapauthserver) directory.

View File

@@ -52,9 +52,11 @@ The configuration file contains the following sections:
- `idle_timeout`, integer. Time in minutes after which an idle client will be disconnected. 0 means disabled. Default: 15
- `upload_mode` integer. 0 means standard: the files are uploaded directly to the requested path. 1 means atomic: files are uploaded to a temporary path and renamed to the requested path when the client ends the upload. Atomic mode avoids problems such as a web server that serves partial files when the files are being uploaded. In atomic mode, if there is an upload error, the temporary file is deleted and so the requested upload path will not contain a partial file. 2 means atomic with resume support: same as atomic but if there is an upload error, the temporary file is renamed to the requested path and not deleted. This way, a client can reconnect and resume the upload.
- `actions`, struct. It contains the command to execute and/or the HTTP URL to notify and the trigger conditions. See [Custom Actions](./custom-actions.md) for more details
- `execute_on`, list of strings. Valid values are `download`, `upload`, `pre-delete`, `delete`, `rename`, `ssh_cmd`. Leave empty to disable actions.
- `execute_on`, list of strings. Valid values are `pre-download`, `download`, `pre-upload`, `upload`, `pre-delete`, `delete`, `rename`, `ssh_cmd`. Leave empty to disable actions.
- `execute_sync`, list of strings. Actions to be performed synchronously. The `pre-delete` action is always executed synchronously while the other ones are asynchronous. Executing an action synchronously means that SFTPGo will not return a result code to the client (which is waiting for it) until your hook has completed its execution. Leave empty to execute only the `pre-delete` hook synchronously
- `hook`, string. Absolute path to the command to execute or HTTP URL to notify.
- `setstat_mode`, integer. 0 means "normal mode": requests for changing permissions, owner/group and access/modification times are executed. 1 means "ignore mode": requests for changing permissions, owner/group and access/modification times are silently ignored. 2 means "ignore mode for cloud based filesystems": requests for changing permissions, owner/group and access/modification times are silently ignored for cloud filesystems and executed for local filesystem.
- `temp_path`, string. Defines the path for temporary files such as those used for atomic uploads or file pipes. If you set this option you must make sure that the defined path exists, is accessible for writing by the user running SFTPGo, and is on the same filesystem as the users home directories otherwise the renaming for atomic uploads will become a copy and therefore may take a long time. The temporary files are not namespaced. The default is generally fine. Leave empty for the default.
- `proxy_protocol`, integer. Support for [HAProxy PROXY protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt). If you are running SFTPGo behind a proxy server such as HAProxy, AWS ELB or NGINX, you can enable the proxy protocol. It provides a convenient way to safely transport connection information such as a client's address across multiple layers of NAT or TCP proxies to get the real client IP address instead of the proxy IP. Both protocol versions 1 and 2 are supported. If the proxy protocol is enabled in SFTPGo then you have to enable the protocol in your proxy configuration too. For example, for HAProxy, add `send-proxy` or `send-proxy-v2` to each server configuration line. The following modes are supported:
- 0, disabled
- 1, enabled. Proxy header will be used and requests without proxy header will be accepted
@@ -62,8 +64,10 @@ The configuration file contains the following sections:
- `proxy_allowed`, List of IP addresses and IP ranges allowed to send the proxy header:
- If `proxy_protocol` is set to 1 and we receive a proxy header from an IP that is not in the list then the connection will be accepted and the header will be ignored
- If `proxy_protocol` is set to 2 and we receive a proxy header from an IP that is not in the list then the connection will be rejected
- `startup_hook`, string. Absolute path to an external program or an HTTP URL to invoke as soon as SFTPGo starts. If you define an HTTP URL it will be invoked using a `GET` request. Please note that SFTPGo services may not yet be available when this hook is run. Leave empty to disable
- `post_connect_hook`, string. Absolute path to the command to execute or HTTP URL to notify. See [Post connect hook](./post-connect-hook.md) for more details. Leave empty to disable
- `max_total_connections`, integer. Maximum number of concurrent client connections. 0 means unlimited
- `max_total_connections`, integer. Maximum number of concurrent client connections. 0 means unlimited. Default: 0.
- `max_per_host_connections`, integer. Maximum number of concurrent client connections from the same host (IP). If the defender is enabled, exceeding this limit will generate `score_limit_exceeded` events and thus hosts that repeatedly exceed the max allowed connections can be automatically blocked. 0 means unlimited. Default: 20.
- `defender`, struct containing the defender configuration. See [Defender](./defender.md) for more details.
- `enabled`, boolean. Default `false`.
- `ban_time`, integer. Ban time in minutes.
@@ -71,11 +75,21 @@ The configuration file contains the following sections:
- `threshold`, integer. Threshold value for banning a client.
- `score_invalid`, integer. Score for invalid login attempts, e.g. non-existent user accounts or client disconnected for inactivity without authentication attempts.
- `score_valid`, integer. Score for valid login attempts, e.g. user accounts that exist.
- `score_limit_exceeded`, integer. Score for hosts that exceeded the configured rate limits or the maximum, per-host, allowed connections.
- `observation_time`, integer. Defines the time window, in minutes, for tracking client errors. A host is banned if it has exceeded the defined threshold during the last observation time minutes.
- `entries_soft_limit`, integer.
- `entries_hard_limit`, integer. The number of banned IPs and host scores kept in memory will vary between the soft and hard limit.
- `safelist_file`, string. Path to a file containing a list of IP addresses and/or networks to never ban.
- `blocklist_file`, string. Path to a file containing a list of IP addresses and/or networks to always ban. The lists can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows. A host that is already banned will not be automatically unbanned if you put it inside the safe list, you have to unban it using the REST API.
- `rate_limiters`, list of structs containing the rate limiters configuration. Take a look [here](./rate-limiting.md) for more details. Each struct has the following fields:
- `average`, integer. Average defines the maximum rate allowed. 0 means disabled. Default: 0
- `period`, integer. Period defines the period as milliseconds. The rate is actually defined by dividing average by period. Default: 1000 (1 second).
- `burst`, integer. Burst defines the maximum number of requests allowed to go through in the same arbitrarily small period of time. Default: 1
- `type`, integer. 1 means a global rate limiter, independent from the source host. 2 means a per-ip rate limiter. Default: 2
- `protocols`, list of strings. Available protocols are `SSH`, `FTP`, `DAV`, `HTTP`. By default all supported protocols are enabled
- `generate_defender_events`, boolean. If `true`, and the defender is enabled and this is not a global rate limiter, a new defender event will be generated each time the configured limit is exceeded. Default `false`
- `entries_soft_limit`, integer.
- `entries_hard_limit`, integer. The number of per-ip rate limiters kept in memory will vary between the soft and hard limit
- **"sftpd"**, the configuration for the SFTP server
- `bindings`, list of structs. Each struct has the following fields:
- `port`, integer. The port used for serving SFTP requests. 0 means disabled. Default: 2022
@@ -108,8 +122,8 @@ The configuration file contains the following sections:
- `address`, string. Leave blank to listen on all available network interfaces. Default: "".
- `apply_proxy_config`, boolean. If enabled the common proxy configuration, if any, will be applied. Default `true`.
- `tls_mode`, integer. 0 means accept both cleartext and encrypted sessions. 1 means TLS is required for both control and data connection. 2 means implicit TLS. Do not enable this blindly, please check that a proper TLS config is in place if you set `tls_mode` to a value different from 0.
- `force_passive_ip`, ip address. External IP address to expose for passive connections. Leave empty to autodetect. Default: "".
- `client_auth_type`, integer. Set to `1` to require client certificate authentication in addition to FTP authentication. You need to define at least a certificate authority for this to work. Default: 0.
- `force_passive_ip`, ip address. External IP address to expose for passive connections. Leave empty to autodetect. If not empty, it must be a valid IPv4 address. Default: "".
- `client_auth_type`, integer. Set to `1` to require a client certificate and verify it. Set to `2` to request a client certificate during the TLS handshake and verify it if given, in this mode the client is allowed not to send a certificate. At least one certification authority must be defined in order to verify client certificates. If no certification authority is defined, this setting is ignored. Default: 0.
- `tls_cipher_suites`, list of strings. List of supported cipher suites for TLS version 1.2. If empty, a default list of secure cipher suites is used, with a preference order based on hardware performance. Note that TLS 1.3 ciphersuites are not configurable. The supported ciphersuites names are defined [here](https://github.com/golang/go/blob/master/src/crypto/tls/cipher_suites.go#L52). Any invalid name will be silently ignored. The order matters, the ciphers listed first will be the preferred ones. Default: empty.
- `bind_port`, integer. Deprecated, please use `bindings`
- `bind_address`, string. Deprecated, please use `bindings`
@@ -132,8 +146,10 @@ The configuration file contains the following sections:
- `port`, integer. The port used for serving WebDAV requests. 0 means disabled. Default: 0.
- `address`, string. Leave blank to listen on all available network interfaces. Default: "".
- `enable_https`, boolean. Set to `true` and provide both a certificate and a key file to enable HTTPS connection for this binding. Default `false`.
- `client_auth_type`, integer. Set to `1` to require client certificate authentication in addition to basic authentication. You need to define at least a certificate authority for this to work. Default: 0.
- `client_auth_type`, integer. Set to `1` to require a client certificate and verify it. Set to `2` to request a client certificate during the TLS handshake and verify it if given, in this mode the client is allowed not to send a certificate. At least one certification authority must be defined in order to verify client certificates. If no certification authority is defined, this setting is ignored. Default: 0.
- `tls_cipher_suites`, list of strings. List of supported cipher suites for TLS version 1.2. If empty, a default list of secure cipher suites is used, with a preference order based on hardware performance. Note that TLS 1.3 ciphersuites are not configurable. The supported ciphersuites names are defined [here](https://github.com/golang/go/blob/master/src/crypto/tls/cipher_suites.go#L52). Any invalid name will be silently ignored. The order matters, the ciphers listed first will be the preferred ones. Default: empty.
- `prefix`, string. Prefix for WebDAV resources, if empty WebDAV resources will be available at the `/` URI. If defined it must be an absolute URI, for example `/dav`. Default: "".
- `proxy_allowed`, list of IP addresses and IP ranges allowed to set `X-Forwarded-For`, `X-Real-IP`, `CF-Connecting-IP`, `True-Client-IP` headers. Any of the indicated headers, if set on requests from a connection address not in this list, will be silently ignored. Default: empty.
- `bind_port`, integer. Deprecated, please use `bindings`.
- `bind_address`, string. Deprecated, please use `bindings`.
- `certificate_file`, string. Certificate for WebDAV over HTTPS. This can be an absolute path or a path relative to the config dir.
@@ -153,7 +169,7 @@ The configuration file contains the following sections:
- `expiration_time`, integer. Expiration time, in minutes, for the cached users. 0 means unlimited. Default: 0.
- `max_size`, integer. Maximum number of users to cache. 0 means unlimited. Default: 50.
- **"data_provider"**, the configuration for the data provider
- `driver`, string. Supported drivers are `sqlite`, `mysql`, `postgresql`, `bolt`, `memory`
- `driver`, string. Supported drivers are `sqlite`, `mysql`, `postgresql`, `cockroachdb`, `bolt`, `memory`
- `name`, string. Database name. For driver `sqlite` this can be the database name relative to the config dir or the absolute path to the SQLite database. For driver `memory` this is the (optional) path relative to the config dir or the absolute path to the provider dump, obtained using the `dumpdata` REST API, to load. This dump will be loaded at startup and can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows. The `memory` provider will not modify the provided file so quota usage and last login will not be persisted. If you plan to use a SQLite database over a `cifs` network share (this is not recommended in general) you must use the `nobrl` mount option otherwise you will get the `database is locked` error. Some users reported that the `bolt` provider works fine over `cifs` shares.
- `host`, string. Database host. Leave empty for drivers `sqlite`, `bolt` and `memory`
- `port`, integer. Database port. Leave empty for drivers `sqlite`, `bolt` and `memory`
@@ -166,6 +182,7 @@ The configuration file contains the following sections:
- 0, disable quota tracking. REST API to scan users home directories/virtual folders and update quota will do nothing
- 1, quota is updated each time a user uploads or deletes a file, even if the user has no quota restrictions
- 2, quota is updated each time a user uploads or deletes a file, but only for users with quota restrictions and for virtual folders. With this configuration, the `quota scan` and `folder_quota_scan` REST API can still be used to periodically update space usage for users without quota restrictions and for folders
- `delayed_quota_update`, integer. This configuration parameter defines the number of seconds to accumulate quota updates. If there are a lot of close uploads, accumulating quota updates can save you many queries to the data provider. If you want to track quotas, a scheduled quota update is recommended in any case, the stored quota may be incorrect for several reasons, such as an unexpected shutdown while uploading files, temporary provider failures, files copied outside of SFTPGo, and so on. You could use the [quotascan example](../examples/quotascan) as a starting point. 0 means immediate quota update.
- `pool_size`, integer. Sets the maximum number of open connections for `mysql` and `postgresql` driver. Default 0 (unlimited)
- `users_base_dir`, string. Users default base directory. If no home dir is defined while adding a new user, and this value is a valid absolute path, then the user home dir will be automatically defined as the path obtained joining the base dir and the username
- `actions`, struct. It contains the command to execute and/or the HTTP URL to notify and the trigger conditions. See [Custom Actions](./custom-actions.md) for more details
@@ -173,7 +190,7 @@ The configuration file contains the following sections:
- `hook`, string. Absolute path to the command to execute or HTTP URL to notify.
- `external_auth_program`, string. Deprecated, please use `external_auth_hook`.
- `external_auth_hook`, string. Absolute path to an external program or an HTTP URL to invoke for users authentication. See [External Authentication](./external-auth.md) for more details. Leave empty to disable.
- `external_auth_scope`, integer. 0 means all supported authentication scopes (passwords, public keys and keyboard interactive). 1 means passwords only. 2 means public keys only. 4 means keyboard interactive only. The flags can be combined, for example 6 means public keys and keyboard interactive
- `external_auth_scope`, integer. 0 means all supported authentication scopes (passwords, public keys and keyboard interactive). 1 means passwords only. 2 means public keys only. 4 means keyboard interactive only. 8 means TLS certificate. The flags can be combined, for example 6 means public keys and keyboard interactive
- `credentials_path`, string. It defines the directory for storing user provided credential files such as Google Cloud Storage credentials. This can be an absolute path or a path relative to the config dir
- `prefer_database_credentials`, boolean. When true, users' Google Cloud Storage credentials will be written to the data provider instead of disk, though pre-existing credentials on disk will be used as a fallback. When false, they will be written to the directory specified by `credentials_path`.
- `pre_login_program`, string. Deprecated, please use `pre_login_hook`.
@@ -182,26 +199,34 @@ The configuration file contains the following sections:
- `post_login_scope`, defines the scope for the post-login hook. 0 means notify both failed and successful logins. 1 means notify failed logins. 2 means notify successful logins.
- `check_password_hook`, string. Absolute path to an external program or an HTTP URL to invoke to check the user provided password. See [Check password hook](./check-password-hook.md) for more details. Leave empty to disable.
- `check_password_scope`, defines the scope for the check password hook. 0 means all protocols, 1 means SSH, 2 means FTP, 4 means WebDAV. You can combine the scopes, for example 6 means FTP and WebDAV.
- `password_hashing`, struct. It contains the configuration parameters to be used to generate the password hash. SFTPGo can verify passwords in several formats and uses the `argon2id` algorithm to hash passwords in plain-text before storing them inside the data provider. These options allow you to customize how the hash is generated.
- `argon2_options` struct containing the options for argon2id hashing algorithm. The `memory` and `iterations` parameters control the computational cost of hashing the password. The higher these figures are, the greater the cost of generating the hash and the longer the runtime. It also follows that the greater the cost will be for any attacker trying to guess the password. If the code is running on a machine with multiple cores, then you can decrease the runtime without reducing the cost by increasing the `parallelism` parameter. This controls the number of threads that the work is spread across.
- `password_hashing`, struct. It contains the configuration parameters to be used to generate the password hash. SFTPGo can verify passwords in several formats and uses, by default, the `bcrypt` algorithm to hash passwords in plain-text before storing them inside the data provider. These options allow you to customize how the hash is generated.
- `argon2_options`, struct containing the options for argon2id hashing algorithm. The `memory` and `iterations` parameters control the computational cost of hashing the password. The higher these figures are, the greater the cost of generating the hash and the longer the runtime. It also follows that the greater the cost will be for any attacker trying to guess the password. If the code is running on a machine with multiple cores, then you can decrease the runtime without reducing the cost by increasing the `parallelism` parameter. This controls the number of threads that the work is spread across.
- `memory`, unsigned integer. The amount of memory used by the algorithm (in kibibytes). Default: 65536.
- `iterations`, unsigned integer. The number of iterations over the memory. Default: 1.
- `parallelism`, unsigned 8-bit integer. The number of threads (or lanes) used by the algorithm. Default: 2.
- `bcrypt_options`, struct containing the options for bcrypt hashing algorithm
- `cost`, integer between 4 and 31. Default: 10
- `algo`, string. Algorithm to use for hashing passwords. Available algorithms: `argon2id`, `bcrypt`. For bcrypt hashing we use the `$2a$` prefix. Default: `bcrypt`
- `password_caching`, boolean. Verifying argon2id passwords has a high memory and computational cost, verifying bcrypt passwords has a high computational cost; by enabling in-memory password caching you reduce these costs. Default: `true`
- `update_mode`, integer. Defines how the database will be initialized/updated. 0 means automatically. 1 means manually using the initprovider sub-command.
- `skip_natural_keys_validation`, boolean. If `true` you can use any UTF-8 character for natural keys as username, admin name, folder name. These keys are used in URIs for REST API and Web admin. If `false` only unreserved URI characters are allowed: ALPHA / DIGIT / "-" / "." / "_" / "~". Default: `false`.
- `create_default_admin`, boolean. If enabled, a default admin user with username `admin` and password `password` will be created on first start. You can also create the first admin user by using the web interface or by loading initial data. Default `false`.
- **"httpd"**, the configuration for the HTTP server used to serve REST API and to expose the built-in web interface
- `bindings`, list of structs. Each struct has the following fields:
- `port`, integer. The port used for serving HTTP requests. Default: 8080.
- `address`, string. Leave blank to listen on all available network interfaces. On *NIX you can specify an absolute path to listen on a Unix-domain socket. Default: "127.0.0.1".
- `enable_web_admin`, boolean. Set to `false` to disable the built-in web admin for this binding. You also need to define `templates_path` and `static_files_path` to enable the built-in web admin interface. Default `true`.
- `enable_web_admin`, boolean. Set to `false` to disable the built-in web admin for this binding. You also need to define `templates_path` and `static_files_path` to use the built-in web admin interface. Default `true`.
- `enable_web_client`, boolean. Set to `false` to disable the built-in web client for this binding. You also need to define `templates_path` and `static_files_path` to use the built-in web client interface. Default `true`.
- `enable_https`, boolean. Set to `true` and provide both a certificate and a key file to enable HTTPS connection for this binding. Default `false`.
- `client_auth_type`, integer. Set to `1` to require client certificate authentication in addition to JWT/Web authentication. You need to define at least a certificate authority for this to work. Default: 0.
- `tls_cipher_suites`, list of strings. List of supported cipher suites for TLS version 1.2. If empty, a default list of secure cipher suites is used, with a preference order based on hardware performance. Note that TLS 1.3 ciphersuites are not configurable. The supported ciphersuites names are defined [here](https://github.com/golang/go/blob/master/src/crypto/tls/cipher_suites.go#L52). Any invalid name will be silently ignored. The order matters, the ciphers listed first will be the preferred ones. Default: empty.
- `proxy_allowed`, list of IP addresses and IP ranges allowed to set `X-Forwarded-For`, `X-Real-IP`, `X-Forwarded-Proto`, `CF-Connecting-IP`, `True-Client-IP` headers. Any of the indicated headers, if set on requests from a connection address not in this list, will be silently ignored. Default: empty.
- `bind_port`, integer. Deprecated, please use `bindings`.
- `bind_address`, string. Deprecated, please use `bindings`. Leave blank to listen on all available network interfaces. On \*NIX you can specify an absolute path to listen on a Unix-domain socket. Default: "127.0.0.1"
- `bind_address`, string. Deprecated, please use `bindings`. Leave blank to listen on all available network interfaces. On \*NIX you can specify an absolute path to listen on a Unix-domain socket. Default: ""
- `templates_path`, string. Path to the HTML web templates. This can be an absolute path or a path relative to the config dir
- `static_files_path`, string. Path to the static files for the web interface. This can be an absolute path or a path relative to the config dir. If both `templates_path` and `static_files_path` are empty the built-in web interface will be disabled
- `backups_path`, string. Path to the backup directory. This can be an absolute path or a path relative to the config dir. We don't allow backups in arbitrary paths for security reasons
- `web_root`, string. Defines a base URL for the web admin and client interfaces. If empty web admin and client resources will be available at the root ("/") URI. If defined it must be an absolute URI or it will be ignored
- `certificate_file`, string. Certificate for HTTPS. This can be an absolute path or a path relative to the config dir.
- `certificate_key_file`, string. Private key matching the above certificate. This can be an absolute path or a path relative to the config dir. If both the certificate and the private key are provided, the server will expect HTTPS connections. Certificate and key files can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows.
- `ca_certificates`, list of strings. Set of root certificate authorities to be used to verify client certificates.
@@ -215,7 +240,7 @@ The configuration file contains the following sections:
- `certificate_key_file`, string. Private key matching the above certificate. This can be an absolute path or a path relative to the config dir. If both the certificate and the private key are provided, the server will expect HTTPS connections. Certificate and key files can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows.
- `tls_cipher_suites`, list of strings. List of supported cipher suites for TLS version 1.2. If empty, a default list of secure cipher suites is used, with a preference order based on hardware performance. Note that TLS 1.3 ciphersuites are not configurable. The supported ciphersuites names are defined [here](https://github.com/golang/go/blob/master/src/crypto/tls/cipher_suites.go#L52). Any invalid name will be silently ignored. The order matters, the ciphers listed first will be the preferred ones. Default: empty.
- **"http"**, the configuration for HTTP clients. HTTP clients are used for executing hooks. Some hooks use a retryable HTTP client, for these hooks you can configure the time between retries and the number of retries. Please check the hook specific documentation to understand which hooks use a retryable HTTP client.
- `timeout`, integer. Timeout specifies a time limit, in seconds, for requests. For requests with retries this is the timeout for a single request
- `timeout`, float. Timeout specifies a time limit, in seconds, for requests. For requests with retries this is the timeout for a single request
- `retry_wait_min`, integer. Defines the minimum waiting time between attempts in seconds.
- `retry_wait_max`, integer. Defines the maximum waiting time between attempts in seconds. The backoff algorithm will perform exponential backoff based on the attempt number and limited by the provided minimum and maximum durations.
- `retry_max`, integer. Defines the maximum number of retries if the first request fails.
@@ -224,6 +249,10 @@ The configuration file contains the following sections:
- `cert`, string. Path to the certificate file. The path can be absolute or relative to the config dir.
- `key`, string. Path to the key file. The path can be absolute or relative to the config dir.
- `skip_tls_verify`, boolean. If enabled the HTTP client accepts any TLS certificate presented by the server and any host name in that certificate. In this mode, TLS is susceptible to man-in-the-middle attacks. This should be used only for testing.
- `headers`, list of structs. You can define a list of HTTP headers to add to each hook. Each struct has the following fields:
- `key`, string
- `value`, string. The header is silently ignored if `key` or `value` are empty
- `url`, string, optional. If not empty, the header will be added only if the request URL starts with the one specified here
- **kms**, configuration for the Key Management Service, more details can be found [here](./kms.md)
- `secrets`
- `url`
@@ -266,6 +295,15 @@ Let's see some examples:
- To set the `port` for the first sftpd binding, you need to define the env var `SFTPGO_SFTPD__BINDINGS__0__PORT`
- To set the `execute_on` actions, you need to define the env var `SFTPGO_COMMON__ACTIONS__EXECUTE_ON`. For example `SFTPGO_COMMON__ACTIONS__EXECUTE_ON=upload,download`
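For instance, the following (hypothetical values) shows how scalar keys, nested structs and list elements all map to environment variables:
```shell
# disconnect idle clients after 5 minutes, move the first SFTP binding to
# port 2200 and run the hook on uploads only
export SFTPGO_COMMON__IDLE_TIMEOUT=5
export SFTPGO_SFTPD__BINDINGS__0__PORT=2200
export SFTPGO_COMMON__ACTIONS__EXECUTE_ON=upload
sftpgo serve
```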
On some hardware you can get faster SFTP performance by replacing the Go `crypto/sha256` implementation with [sha256-simd](https://github.com/minio/sha256-simd).
The performance of SHA256 is relevant for clients using AES CTR ciphers and `hmac-sha2-256` as Message Authentication Code (MAC).
Up to 2.0.x versions SFTPGo automatically used `sha256-simd`, but over time the standard Go implementation improved a lot and is now faster than `sha256-simd` on some CPUs.
You can select `sha256-simd` by setting the environment variable `SFTPGO_MINIO_SHA256_SIMD` to `1`.
`sha256-simd` is particularly useful if you have an Intel CPU with SHA extensions or an ARM CPU with Cryptography Extensions.
## Telemetry Server
The telemetry server exposes the following endpoints:

View File

@@ -2,4 +2,7 @@
Here we collect step-by-step tutorials. SFTPGo users are encouraged to contribute!
- [Getting Started](./getting-started.md)
- [SFTPGo with PostgreSQL data provider and S3 backend](./postgresql-s3.md)
- [SFTPGo on Windows with Active Directory Integration + Caddy Static File Server](https://www.youtube.com/watch?v=M5UcJI8t4AI)
- [Securing SFTPGo with a free Let's Encrypt TLS Certificate](./lets-encrypt-certificate.md)

View File

@@ -0,0 +1,491 @@
# Getting Started
SFTPGo allows you to securely share your files over SFTP and optionally FTP/S and WebDAV too.
Several storage backends are supported and they are configurable per user, so you can serve a local directory for a user and an S3 bucket (or part of it) for another one.
SFTPGo also supports virtual folders, a virtual folder can use any of the supported storage backends. So you can have, for example, an S3 user that exposes a GCS bucket (or part of it) on a specified path and an encrypted local filesystem on another one.
Virtual folders can be private or shared among multiple users, for shared virtual folders you can define different quota limits for each user.
In this tutorial we explore the main features and concepts using the built-in web admin interface. Advanced users can also use the SFTPGo [REST API](https://sftpgo.stoplight.io/docs/sftpgo/openapi.yaml)
- [Installation](#Installation)
- [Initial configuration](#Initial-configuration)
- [Creating users](#Creating-users)
- [Creating users with a Cloud Storage backend](#Creating-users-with-a-Cloud-Storage-backend)
- [Creating users with a local encrypted backend (Data At Rest Encryption)](#Creating-users-with-a-local-encrypted-backend-Data-At-Rest-Encryption)
- [Virtual permissions](#Virtual-permissions)
- [Virtual folders](#Virtual-folders)
- [Configuration parameters](#Configuration-parameters)
- [Use PostgreSQL data provider](#Use-PostgreSQL-data-provider)
- [Use MySQL/MariaDB data provider](#Use-MySQLMariaDB-data-provider)
- [Use CockroachDB data provider](#Use-CockroachDB-data-provider)
- [Enable FTP service](#Enable-FTP-service)
- [Enable WebDAV service](#Enable-WebDAV-service)
## Installation
You can easily install SFTPGo by downloading the appropriate package for your operating system and architecture. Please visit the [releases](https://github.com/drakkan/sftpgo/releases "releases") page.
An official Docker image is available. Documentation is [here](./../../docker/README.md).
In this guide, we assume that SFTPGo is already installed and running using the default configuration.
## Initial configuration
Before you can use SFTPGo you need to create an admin account, so open [http://127.0.0.1:8080/web/admin](http://127.0.0.1:8080/web/admin) in your web browser, replacing `127.0.0.1` with the appropriate IP address if SFTPGo is not running on localhost.
![Setup](./img/setup.png)
After creating the admin account you will be automatically logged in.
![Users list](./img/initial-screen.png)
The web admin is now available at the following URL:
[http://127.0.0.1:8080/web/admin](http://127.0.0.1:8080/web/admin)
From the `Status` page you can see the active services.
![Status](./img/status.png)
The default configuration enables the SFTP service on port `2022` and uses `SQLite` as data provider.
## Creating users
Let's create our first local user:
- from the users page click the `+` icon to open the Add user page
- the only required fields are the `Username`, a `Password` or a `Public key`, and the default `Permissions`
- if you are on Windows or you installed SFTPGo manually and no `users_base_dir` is defined in your configuration file you also have to set a `Home Dir`. It must be an absolute path, for example `/srv/sftpgo/data/username` on Linux or `C:\sftpgo\data\username` on Windows. SFTPGo will try to automatically create the home directory, if missing, when the user logs in. Each user can only access files and folders inside its home directory.
- click `Submit`
![Add user](./img/add-user.png)
Now test the new user. We use the `sftp` CLI here, but you can use any SFTP client.
```shell
$ sftp -P 2022 nicola@127.0.0.1
nicola@127.0.0.1's password:
Connected to 127.0.0.1.
sftp> ls
sftp> put file.txt
Uploading file.txt to /file.txt
file.txt 100% 4034 3.9MB/s 00:00
sftp> ls
file.txt
sftp> mkdir adir
sftp> cd adir/
sftp> put file.txt
Uploading file.txt to /adir/file.txt
file.txt 100% 4034 4.0MB/s 00:00
sftp> ls
file.txt
sftp> get file.txt
Fetching /adir/file.txt to file.txt
/adir/file.txt 100% 4034 1.9MB/s 00:00
```
It worked! We can upload/download files and create directories.
Each user can browse and download their files and change their credentials using the web client interface available at the following URL:
[http://127.0.0.1:8080/web/client](http://127.0.0.1:8080/web/client)
![Web client files](./img/web-client-files.png)
![Web client credentials](./img/web-client-credentials.png)
### Creating users with a Cloud Storage backend
The procedure is similar to the one described for local users, you only have to specify the Cloud Storage backend and its credentials.
The screenshot below shows an example configuration for an S3 backend.
![S3 user](./img/s3-user.png)
The screenshot below shows an example configuration for an Azure Blob Storage backend.
![Azure Blob user](./img/az-user.png)
The screenshot below shows an example configuration for a Google Cloud Storage backend.
![Google Cloud user](./img/gcs-user.png)
The screenshot below shows an example configuration for an SFTP server as storage backend.
![User using another SFTP server as storage backend](./img/sftp-user.png)
By setting a `Key Prefix` you restrict the user to a specific "folder" in the bucket, so that the same bucket can be shared among different users by assigning to each user a specific portion of the bucket.
### Creating users with a local encrypted backend (Data At Rest Encryption)
The procedure is similar to the one described for local users, you only have to specify the encryption passphrase.
The screenshot below shows an example configuration.
![User with cryptfs backend](./img/local-encrypted.png)
You can find more details about Data At Rest Encryption [here](../dare.md).
## Virtual permissions
SFTPGo supports per-directory virtual permissions. For each user you have to specify global permissions and you can then override them on a per-directory basis.
Take a look at the following screens.
![Virtual permissions](./img/virtual-permissions.png)
![Per-directory permissions](./img/dir-permissions.png)
This user has full access as default (`*`), can only list and download from the `/read-only` path and has no permissions at all for the `/subdir` path.
Let's test it. We use the `sftp` CLI here, but you can use any SFTP client.
```shell
$ sftp -P 2022 nicola@127.0.0.1
Connected to 127.0.0.1.
sftp> ls
adir file.txt read-only subdir
sftp> put file.txt
Uploading file.txt to /file.txt
file.txt 100% 4034 19.4MB/s 00:00
sftp> rm file.txt
Removing /file.txt
sftp> ls
adir read-only subdir
sftp> cd read-only/
sftp> ls
file.txt
sftp> put file1.txt
Uploading file1.txt to /read-only/file1.txt
remote open("/read-only/file1.txt"): Permission denied
sftp> get file.txt
Fetching /read-only/file.txt to file.txt
/read-only/file.txt 100% 4034 2.2MB/s 00:00
sftp> cd ..
sftp> ls
adir read-only subdir
sftp> cd /subdir
sftp> ls
remote readdir("/subdir"): Permission denied
```
As you can see, it worked as expected.
## Virtual folders
From the web admin interface click `Folders` and then the `+` icon.
![Add folder](./img/add-folder.png)
To create a local folder you need to specify a `Name` and an `Absolute path`. For other backends you have to specify the backend type and its credentials; this is the same procedure already detailed for creating users with cloud backends.
Suppose we created two virtual folders named `localfolder` and `minio` as you can see in the following screen.
![Folders](./img/folders.png)
- `localfolder` uses the local filesystem as storage backend
- `minio` uses MinIO (S3 compatible) as storage backend
Now click `Users` on the left menu, select a user and click the `Edit` icon to update the user and associate the virtual folders.
Virtual folders must be referenced using their unique name and you can expose them on a configurable virtual path. Take a look at the following screenshot.
![Virtual Folders](./img/virtual-folders.png)
We exposed the folder named `localfolder` on the path `/vdirlocal` (this must be an absolute UNIX path on Windows too) and the folder named `minio` on the path `/vdirminio`. For `localfolder` the quota usage is included within the user quota, while for the `minio` folder we defined separate quota limits: at most 2 files and at most 100MB, whichever is reached first.
The folder `minio` can be shared with other users and we can define different quota limits on a per-user basis. The folder `localfolder` is considered private since we have included its quota limits within those of the user; if we shared it with other users we would break the quota calculation.
Let's test these virtual folders. We use the `sftp` CLI here, but you can use any SFTP client.
```shell
$ sftp -P 2022 nicola@127.0.0.1
nicola@127.0.0.1's password:
Connected to 127.0.0.1.
sftp> ls
adir read-only subdir vdirlocal vdirminio
sftp> cd vdirlocal
sftp> put file.txt
Uploading file.txt to /vdirlocal/file.txt
file.txt 100% 4034 17.3MB/s 00:00
sftp> ls
file.txt
sftp> cd ..
sftp> cd vdirminio/
sftp> put file.txt
Uploading file.txt to /vdirminio/file.txt
file.txt 100% 4034 4.8MB/s 00:00
sftp> ls
file.txt
sftp> put file.txt file1.txt
Uploading file.txt to /vdirminio/file1.txt
file.txt 100% 4034 2.8MB/s 00:00
sftp> put file.txt file2.txt
Uploading file.txt to /vdirminio/file2.txt
remote open("/vdirminio/file2.txt"): Failure
sftp> quit
```
The last upload failed since we exceeded the quota limit on the number of files.
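If you want to double-check quota usage without opening the web admin, you can query the REST API. The following is a minimal sketch assuming the default HTTP binding on `127.0.0.1:8080`, the default `admin`/`password` credentials and the `jq` tool; endpoint details may vary slightly between SFTPGo versions.
```shell
# get a JWT token using the admin credentials
TOKEN=$(curl -s -u admin:password "http://127.0.0.1:8080/api/v2/token" | jq -r .access_token)
# list the configured folders, including their used quota files/size
curl -s -H "Authorization: Bearer $TOKEN" "http://127.0.0.1:8080/api/v2/folders" | jq .
```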
## Configuration parameters
Until now we have used the default configuration. To change the global service parameters you have to edit the configuration file, or set the appropriate environment variables, and restart SFTPGo to apply the changes.
A full explanation of all configuration methods can be found [here](./../full-configuration.md); here we explore some common use cases. Please keep in mind that SFTPGo can also be configured via [environment variables](../full-configuration.md#environment-variables), this is very convenient if you are using Docker.
The default configuration file is `sftpgo.json` and it can be found within the `/etc/sftpgo` directory if you installed from Linux distro packages. On Windows the configuration file can be found within the `{commonappdata}\SFTPGo` directory where `{commonappdata}` is typically `C:\ProgramData`. SFTPGo also supports reading from TOML and YAML configuration files.
The following snippets assume you are running SFTPGo on Linux, but they can be easily adapted for other operating systems.
### Use PostgreSQL data provider
Create a PostgreSQL database named `sftpgo` and a PostgreSQL user with the correct permissions, for example using the `psql` CLI.
```shell
sudo -i -u postgres psql
CREATE DATABASE "sftpgo" WITH ENCODING='UTF8' CONNECTION LIMIT=-1;
create user "sftpgo" with encrypted password 'your password here';
grant all privileges on database "sftpgo" to "sftpgo";
\q
```
Open the SFTPGo configuration file, search for the `data_provider` section and change it as follows.
```json
"data_provider": {
"driver": "postgresql",
"name": "sftpgo",
"host": "127.0.0.1",
"port": 5432,
"username": "sftpgo",
"password": "your password here",
...
}
```
Confirm that the database connection works by initializing the data provider.
```shell
$ sudo su - sftpgo -s /bin/bash -c 'sftpgo initprovider -c /etc/sftpgo'
2021-05-19T22:21:54.000 INF Initializing provider: "postgresql" config file: "/etc/sftpgo/sftpgo.json"
2021-05-19T22:21:54.000 INF updating database version: 8 -> 9
2021-05-19T22:21:54.000 INF Data provider successfully initialized/updated
```
Ensure that SFTPGo starts after the database service.
```shell
sudo systemctl edit sftpgo.service
```
And override the unit definition with the following snippet.
```shell
[Unit]
After=postgresql.service
```
Restart SFTPGo to apply the changes.
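If you installed from the Linux distro packages, restarting typically means:
```shell
sudo systemctl restart sftpgo
```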
### Use MySQL/MariaDB data provider
Create a MySQL database named `sftpgo` and a MySQL user with the correct permissions, for example using the `mysql` CLI.
```shell
$ mysql -u root
MariaDB [(none)]> CREATE DATABASE sftpgo CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
Query OK, 1 row affected (0.000 sec)
MariaDB [(none)]> grant all privileges on sftpgo.* to sftpgo@localhost identified by 'your password here';
Query OK, 0 rows affected (0.027 sec)
MariaDB [(none)]> quit
Bye
```
Open the SFTPGo configuration file, search for the `data_provider` section and change it as follows.
```json
"data_provider": {
"driver": "mysql",
"name": "sftpgo",
"host": "127.0.0.1",
"port": 3306,
"username": "sftpgo",
"password": "your password here",
...
}
```
Confirm that the database connection works by initializing the data provider.
```shell
$ sudo su - sftpgo -s /bin/bash -c 'sftpgo initprovider -c /etc/sftpgo'
2021-05-19T22:29:30.000 INF Initializing provider: "mysql" config file: "/etc/sftpgo/sftpgo.json"
2021-05-19T22:29:30.000 INF updating database version: 8 -> 9
2021-05-19T22:29:30.000 INF Data provider successfully initialized/updated
```
Ensure that SFTPGo starts after the database service.
```shell
sudo systemctl edit sftpgo.service
```
And override the unit definition with the following snippet.
```shell
[Unit]
After=mariadb.service
```
Restart SFTPGo to apply the changes.
### Use CockroachDB data provider
We assume you have installed CockroachDB this way:
```shell
sudo su
export CRDB_VERSION=21.1.2 # set the latest available version here
wget -qO- https://binaries.cockroachdb.com/cockroach-v${CRDB_VERSION}.linux-amd64.tgz | tar xvz
cp -i cockroach-v${CRDB_VERSION}.linux-amd64/cockroach /usr/local/bin/
mkdir -p /usr/local/lib/cockroach
cp -i cockroach-v${CRDB_VERSION}.linux-amd64/lib/libgeos.so /usr/local/lib/cockroach/
cp -i cockroach-v${CRDB_VERSION}.linux-amd64/lib/libgeos_c.so /usr/local/lib/cockroach/
mkdir /var/lib/cockroach
chown sftpgo:sftpgo /var/lib/cockroach
mkdir -p /etc/cockroach/{certs,ca}
chmod 700 /etc/cockroach/ca
/usr/local/bin/cockroach cert create-ca --certs-dir=/etc/cockroach/certs --ca-key=/etc/cockroach/ca/ca.key
/usr/local/bin/cockroach cert create-node localhost $(hostname) --certs-dir=/etc/cockroach/certs --ca-key=/etc/cockroach/ca/ca.key
/usr/local/bin/cockroach cert create-client root --certs-dir=/etc/cockroach/certs --ca-key=/etc/cockroach/ca/ca.key
chown -R sftpgo:sftpgo /etc/cockroach/certs
exit
```
and you are running it using a systemd unit like this one:
```shell
[Unit]
Description=Cockroach Database single node
Requires=network.target
[Service]
Type=notify
WorkingDirectory=/var/lib/cockroach
ExecStart=/usr/local/bin/cockroach start-single-node --certs-dir=/etc/cockroach/certs --http-addr 127.0.0.1:8888 --listen-addr 127.0.0.1:26257 --cache=.25 --max-sql-memory=.25 --store=path=/var/lib/cockroach
TimeoutStopSec=60
Restart=always
RestartSec=10
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=cockroach
User=sftpgo
[Install]
WantedBy=default.target
```
Create a CockroachDB database named `sftpgo`.
```shell
$ sudo /usr/local/bin/cockroach sql --certs-dir=/etc/cockroach/certs -e 'create database "sftpgo"'
CREATE DATABASE
Time: 13ms
```
Open the SFTPGo configuration file, search for the `data_provider` section and change it as follows.
```json
"data_provider": {
"driver": "cockroachdb",
"name": "",
"host": "",
"port": 0,
"username": "",
"password": "",
"sslmode": 0,
"connection_string": "postgresql://root@localhost:26257/sftpgo?sslcert=%2Fetc%2Fcockroach%2Fcerts%2Fclient.root.crt&sslkey=%2Fetc%2Fcockroach%2Fcerts%2Fclient.root.key&sslmode=verify-full&sslrootcert=%2Fetc%2Fcockroach%2Fcerts%2Fca.crt&connect_timeout=10"
...
}
```
Confirm that the database connection works by initializing the data provider.
```shell
$ sudo su - sftpgo -s /bin/bash -c 'sftpgo initprovider -c /etc/sftpgo'
2021-05-19T22:41:53.000 INF Initializing provider: "cockroachdb" config file: "/etc/sftpgo/sftpgo.json"
2021-05-19T22:41:53.000 INF updating database version: 8 -> 9
2021-05-19T22:41:53.000 INF Data provider successfully initialized/updated
```
Ensure that SFTPGo starts after the database service.
```shell
sudo systemctl edit sftpgo.service
```
And override the unit definition with the following snippet.
```shell
[Unit]
After=cockroachdb.service
```
Restart SFTPGo to apply the changes.
### Enable FTP service
Open the SFTPGo configuration file, search for the `ftpd` section and change it as follows.
```json
"ftpd": {
"bindings": [
{
"port": 2121,
"address": "",
"apply_proxy_config": true,
"tls_mode": 0,
"force_passive_ip": "",
"client_auth_type": 0,
"tls_cipher_suites": []
}
],
"banner": "",
"banner_file": "",
"active_transfers_port_non_20": true,
"passive_port_range": {
"start": 50000,
"end": 50100
},
...
}
```
Restart SFTPGo to apply the changes. The FTP service is now available on port `2121`.
You can also configure the passive ports range (`50000-50100` by default); these ports must be reachable for passive FTP to work. If your FTP server is on the private network side of a NAT configuration you have to set `force_passive_ip` to your external IP address. You may also need to open the passive port range on your firewall.
It is recommended that you provide a certificate and key file to expose FTP over TLS. You should prefer SFTP to FTP even if you configure TLS; please don't blindly enable the old FTP protocol.
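You can quickly check the new binding with any FTP client. For example, with `curl` and an existing user (here a hypothetical `nicola` account), a directory listing should work right away.
```shell
# list the user's root directory via FTP
curl --user "nicola:your password here" ftp://127.0.0.1:2121/
```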
### Enable WebDAV service
Open the SFTPGo configuration file, search for the `webdavd` section and change it as follows.
```json
"webdavd": {
"bindings": [
{
"port": 10080,
"address": "",
"enable_https": false,
"client_auth_type": 0,
"tls_cipher_suites": [],
"prefix": "",
"proxy_allowed": []
}
],
...
}
```
Restart SFTPGo to apply the changes. The WebDAV service is now available on port `10080`. It is recommended that you provide a certificate and key file to expose WebDAV over HTTPS.
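As a quick smoke test you can issue a `PROPFIND` request with `curl` (again assuming a hypothetical `nicola` user); an XML multi-status response indicates that the service is working.
```shell
# request a depth-1 listing of the user's root directory via WebDAV
curl --user "nicola:your password here" -X PROPFIND -H "Depth: 1" "http://127.0.0.1:10080/"
```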

[Binary image files added under `docs/howto/img/` (among them add-user.png, az-user.png, folders.png, gcs-user.png, logo.png, s3-user.png, setup.png and status.png); contents not shown.]

@@ -0,0 +1,196 @@
# Securing SFTPGo with a free Let's Encrypt TLS Certificate
This tutorial shows how to create and configure a free Let's Encrypt TLS certificate for the SFTPGo Web UI and REST API, the WebDAV service and the FTP service.
Obtaining a Let's Encrypt certificate involves solving a domain validation challenge issued by an ACME (Automatic Certificate Management Environment) server. This challenge verifies your ownership of the domain(s) you're trying to obtain a certificate for. Different challenge types exist, the most commonly used being `HTTP-01`. As its name suggests, it uses the HTTP protocol. While HTTP servers can be configured to use any TCP port, this challenge will only work on port 80 due to security measures.
More info about the supported challenge types can be found [here](https://letsencrypt.org/docs/challenge-types/).
There are several tools that allow you to obtain a Let's Encrypt TLS certificate; in this tutorial we'll use the [lego](https://github.com/go-acme/lego) CLI.
The `lego` CLI supports all the Let's Encrypt challenge types. In this tutorial we'll focus on the `HTTP-01` challenge type and make the following assumptions:
- we are running SFTPGo on Linux
- we need a TLS certificate for the `sftpgo.com` domain
- we have an existing web server already running on port 80 for the `sftpgo.com` domain and the web root path is `/var/www/sftpgo.com`
## Obtaining a certificate
Download the latest [lego release](https://github.com/go-acme/lego/releases) and extract the lego binary in `/usr/local/bin`, then verify that it works.
```shell
lego -v
lego version 4.4.0 linux/amd64
```
We'll store the certificates in `/var/lib/lego` so create this directory.
```shell
sudo mkdir -p /var/lib/lego
```
Now get a certificate. The HTTP-based challenge file will be created under `/var/www/sftpgo.com/.well-known/acme-challenge`. This directory must be publicly served by your web server.
```shell
sudo lego --accept-tos --path="/var/lib/lego" --email="<your email address here>" --domains="sftpgo.com" --http.webroot="/var/www/sftpgo.com" --http run
```
You should now be able to list your certificate.
```shell
sudo lego --path="/var/lib/lego" list
Found the following certs:
Certificate Name: sftpgo.com
Domains: sftpgo.com
Expiry Date: 2021-09-09 19:41:51 +0000 UTC
Certificate Path: /var/lib/lego/certificates/sftpgo.com.crt
```
Now copy the certificate to a private path readable by the SFTPGo service.
```shell
sudo mkdir -p /etc/sftpgo/certs
sudo cp /var/lib/lego/certificates/sftpgo.com.{crt,key} /etc/sftpgo/certs
sudo chown -R sftpgo:sftpgo /etc/sftpgo/certs
```
## Enable HTTPS for SFTPGo Web UI and REST API
Open the SFTPGo configuration file, search for the `httpd` section and change it as follows.
```json
"httpd": {
"bindings": [
{
"port": 9443,
"address": "",
"enable_web_admin": true,
"enable_web_client": true,
"enable_https": true,
"client_auth_type": 0,
"tls_cipher_suites": [],
"proxy_allowed": []
}
],
"templates_path": "/usr/share/sftpgo/templates",
"static_files_path": "/usr/share/sftpgo/static",
"backups_path": "/srv/sftpgo/backups",
"web_root": "",
"certificate_file": "/etc/sftpgo/certs/sftpgo.com.crt",
"certificate_key_file": "/etc/sftpgo/certs/sftpgo.com.key",
"ca_certificates": [],
"ca_revocation_lists": []
}
```
Restart SFTPGo to apply the changes. The HTTPS service is now available on port `9443`.
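To confirm that the Web UI is now serving the Let's Encrypt certificate, you can inspect it with `openssl`, for example:
```shell
# print the subject and validity dates of the served certificate
openssl s_client -connect sftpgo.com:9443 -servername sftpgo.com </dev/null 2>/dev/null | openssl x509 -noout -subject -dates
```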
## Enable HTTPS for WebDAV service
Open the SFTPGo configuration file, search for the `webdavd` section and change it as follows.
```json
"webdavd": {
"bindings": [
{
"port": 10443,
"address": "",
"enable_https": true,
"client_auth_type": 0,
"tls_cipher_suites": [],
"prefix": "",
"proxy_allowed": []
}
],
"certificate_file": "/etc/sftpgo/certs/sftpgo.com.crt",
"certificate_key_file": "/etc/sftpgo/certs/sftpgo.com.key",
...
```
Restart SFTPGo to apply the changes. WebDAV is now available over HTTPS on port `10443`.
## Enable explicit FTP over TLS
Open the SFTPGo configuration file, search for the `ftpd` section and change it as follows.
```json
"ftpd": {
"bindings": [
{
"port": 2121,
"address": "",
"apply_proxy_config": true,
"tls_mode": 1,
"force_passive_ip": "",
"client_auth_type": 0,
"tls_cipher_suites": []
}
],
"banner": "",
"banner_file": "",
"active_transfers_port_non_20": true,
"passive_port_range": {
"start": 50000,
"end": 50100
},
"disable_active_mode": false,
"enable_site": false,
"hash_support": 0,
"combine_support": 0,
"certificate_file": "/etc/sftpgo/certs/sftpgo.com.crt",
"certificate_key_file": "/etc/sftpgo/certs/sftpgo.com.key",
"ca_certificates": [],
"ca_revocation_lists": []
}
```
Restart SFTPGo to apply the changes. The FTPES service is now available on port `2121` and TLS is required for both the control and data connections (`tls_mode` is 1).
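You can verify that the FTP service presents the new certificate using `openssl` with STARTTLS support:
```shell
# upgrade the control connection with AUTH TLS and print the certificate details
openssl s_client -connect sftpgo.com:2121 -starttls ftp </dev/null 2>/dev/null | openssl x509 -noout -subject -dates
```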
## Automatic certificate renewal
SFTPGo can reload TLS certificates without service interruption, so we'll create a small bash script that copies the certificates inside the SFTPGo private directory and instructs SFTPGo to load them. We then configure `lego` to run this script when the certificates are renewed.
Create the file `/usr/local/bin/sftpgo_lego_hook` with the following contents.
```shell
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
CERTS_DIR=/etc/sftpgo/certs
mkdir -p ${CERTS_DIR}
cp ${LEGO_CERT_PATH} ${LEGO_CERT_KEY_PATH} ${CERTS_DIR}
chown -R sftpgo:sftpgo ${CERTS_DIR}
systemctl reload sftpgo
```
Ensure that this script is executable.
```shell
sudo chmod 755 /usr/local/bin/sftpgo_lego_hook
```
Now create a daily cron job to check the certificate expiration and renew it if necessary. For example, create the file `/etc/cron.daily/lego` with the following contents.
```shell
#!/bin/bash
lego --accept-tos --path="/var/lib/lego" --email="<your email address here>" --domains="sftpgo.com" --http-timeout 60 --http.webroot="/var/www/sftpgo.com" --http renew --renew-hook="/usr/local/bin/sftpgo_lego_hook"
```
Ensure that this cron script is executable.
```shell
sudo chmod 755 /etc/cron.daily/lego
```
When the certificate is renewed you should see SFTPGo logs like the following to confirm that the new certificate was successfully loaded.
```json
{"level":"debug","time":"2021-06-14T20:05:15.785","sender":"service","message":"Received reload request"}
{"level":"debug","time":"2021-06-14T20:05:15.785","sender":"httpd","message":"TLS certificate \"/etc/sftpgo/certs/sftpgo.com.crt\" successfully loaded"}
{"level":"debug","time":"2021-06-14T20:05:15.785","sender":"ftpd","message":"TLS certificate \"/etc/sftpgo/certs/sftpgo.com.crt\" successfully loaded"}
{"level":"debug","time":"2021-06-14T20:05:15.786","sender":"webdavd","message":"TLS certificate \"/etc/sftpgo/certs/sftpgo.com.crt\" successfully loaded"}
```


@@ -188,19 +188,19 @@ sudo systemctl restart sftpgo
systemctl status sftpgo
```
## Create the first admin
To start using SFTPGo you need to create an admin user. The easiest way is to use the built-in Web admin interface: open the Web Admin URL and create the first admin user.
[http://127.0.0.1:8080/web/admin](http://127.0.0.1:8080/web/admin)
## Add virtual users
The easiest way to add virtual users is to use the built-in Web interface.
You can expose the Web Admin interface over the network by replacing `"bind_address": "127.0.0.1"` in the `httpd` configuration section with `"bind_address": ""`, then apply the change by restarting the SFTPGo service with the following command.
```shell
sudo systemctl restart sftpgo
```
Now navigate to the Web Admin URL again and log in using the credentials you just set up.
[http://127.0.0.1:8080/web/admin](http://127.0.0.1:8080/web/admin)
Click `Add` and fill in the user details; the minimum required parameters are:


@@ -13,6 +13,7 @@ The logs can be divided into the following categories:
- `sender` string. `Upload` or `Download`
- `time` string. Date/time with millisecond precision
- `level` string
- `remote_addr` string. IP and, optionally, port of the remote client. For example `127.0.0.1:1234` or `127.0.0.1`
- `elapsed_ms`, int64. Elapsed time, as milliseconds, for the upload/download
- `size_bytes`, int64. Size, as bytes, of the download/upload
- `username`, string
@@ -22,6 +23,7 @@ The logs can be divided into the following categories:
- **"command logs"**, SFTP/SCP command logs:
- `sender` string. `Rename`, `Rmdir`, `Mkdir`, `Symlink`, `Remove`, `Chmod`, `Chown`, `Chtimes`, `Truncate`, `SSHCommand`
- `level` string
- `remote_addr` string. IP and, optionally, port of the remote client. For example `127.0.0.1:1234` or `127.0.0.1`
- `username`, string
- `file_path` string
- `target_path` string
@@ -37,7 +39,7 @@ The logs can be divided into the following categories:
- **"http logs"**, REST API logs:
- `sender` string. `httpd`
- `level` string
- `remote_addr` string. IP and port of the remote client
- `remote_addr` string. IP and, optionally, port of the remote client. For example `127.0.0.1:1234` or `127.0.0.1`
- `proto` string, for example `HTTP/1.1`
- `method` string. HTTP method (`GET`, `POST`, `PUT`, `DELETE` etc.)
- `user_agent` string


@@ -91,6 +91,17 @@ Flags:
parallel (default 2)
--s3-upload-part-size int The buffer size for multipart uploads
(MB) (default 5)
--sftp-buffer-size int The size of the buffer (in MB) to use
for transfers. By enabling buffering,
the reads and writes, from/to the
remote SFTP server, are split in
multiple concurrent requests and this
allows data to be transferred at a
faster rate, over high latency networks,
by overlapping round-trip times
--sftp-disable-concurrent-reads Concurrent reads are safe to use and
disabling them will degrade performance.
Disable for read once servers
--sftp-endpoint string SFTP endpoint as host:port for SFTP
provider
--sftp-fingerprints strings SFTP fingerprints to verify remote host


@@ -10,9 +10,9 @@ If the hook defines an external program it can read the following environment v
- `SFTPGO_LOGIND_USER`, it contains the user serialized as JSON. The username is empty if the connection is closed for authentication timeout
- `SFTPGO_LOGIND_IP`
- `SFTPGO_LOGIND_METHOD`, possible values are `publickey`, `password`, `keyboard-interactive`, `publickey+password`, `publickey+keyboard-interactive` or `no_auth_tryed`
- `SFTPGO_LOGIND_METHOD`, possible values are `publickey`, `password`, `keyboard-interactive`, `publickey+password`, `publickey+keyboard-interactive`, `TLSCertificate`, `TLSCertificate+password` or `no_auth_tryed`
- `SFTPGO_LOGIND_STATUS`, 1 means login OK, 0 login KO
- `SFTPGO_LOGIND_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`
- `SFTPGO_LOGIND_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`, `HTTP`
Previous global environment variables aren't cleared when the script is called.
The program must finish within 20 seconds.
@@ -20,6 +20,8 @@ The program must finish within 20 seconds.
If the hook is an HTTP URL then it will be invoked as HTTP POST. The login method, the used protocol, the ip address and the status of the user are added to the query string, for example `<http_url>?login_method=password&ip=1.2.3.4&protocol=SSH&status=1`.
The request body will contain the user serialized as JSON.
The structure for SFTPGo users can be found within the [OpenAPI schema](../httpd/schema/openapi.yaml).
The HTTP hook will use the global configuration for HTTP clients and will respect the retry configurations.
The `post_login_scope` supports the following configuration values:

61
docs/rate-limiting.md Normal file

@@ -0,0 +1,61 @@
# Rate limiting
Rate limiting allows you to control the number of requests going to the SFTPGo services.
SFTPGo implements a [token bucket](https://en.wikipedia.org/wiki/Token_bucket), initially full and refilled at the configured rate. The `burst` configuration parameter defines the size of the bucket. The rate is obtained by dividing `average` by `period` (expressed in milliseconds), so for a rate below 1 req/s one needs to define a period larger than a second: for example, `average: 1` with `period: 10000` allows one request every 10 seconds.
Requests that exceed the configured limit will be delayed or denied if they exceed the maximum delay time.
SFTPGo allows you to define per-protocol rate limiters, so you can have different configurations for different protocols.
The supported protocols are:
- `SSH`, includes SFTP and SSH commands
- `FTP`, includes FTP, FTPES, FTPS
- `DAV`, WebDAV
- `HTTP`, REST API and web admin
You can also define two types of rate limiters:
- global, independent from the source host: it defines an aggregate limit for the configured protocol(s)
- per-host: this type of rate limiter can be connected to the built-in [defender](./defender.md) and generate `score_limit_exceeded` events, so hosts that repeatedly exceed the configured limit can be automatically blocked
If you configure a per-host rate limiter, SFTPGo will keep a rate limiter in memory for each host that connects to the service; you can limit the memory usage using the `entries_soft_limit` and `entries_hard_limit` configuration keys.
You can define as many rate limiters as you want, but keep in mind that if you define multiple rate limiters each request will be checked against all the configured limiters and so it can potentially be delayed multiple times. Let's clarify with an example: here is a configuration that defines a global rate limiter and a per-host rate limiter for the FTP protocol:
```json
"rate_limiters": [
{
"average": 100,
"period": 1000,
"burst": 1,
"type": 1,
"protocols": [
"SSH",
"FTP",
"DAV",
"HTTP"
],
"generate_defender_events": false,
"entries_soft_limit": 100,
"entries_hard_limit": 150
},
{
"average": 10,
"period": 1000,
"burst": 1,
"type": 2,
"protocols": [
"FTP"
],
"generate_defender_events": true,
"entries_soft_limit": 100,
"entries_hard_limit": 150
}
]
```
Here we have a global rate limiter that limits the aggregate rate for all the services to 100 req/s and an additional rate limiter that limits the `FTP` protocol to 10 req/s per host.
With this configuration, when a client connects via FTP it will be limited first by the global rate limiter and then by the per host rate limiter.
Clients connecting via SFTP/WebDAV will be checked only against the global rate limiter.


@@ -40,7 +40,7 @@ You can create other administrator and assign them the following permissions:
You can also restrict administrator access based on the source IP address. If you are running SFTPGo behind a reverse proxy you need to allow both the proxy IP address and the real client IP.
The OpenAPI 3 schema for the exposed API can be found inside the source tree: [openapi.yaml](../httpd/schema/openapi.yaml "OpenAPI 3 specs").
The OpenAPI 3 schema for the exposed API can be found inside the source tree: [openapi.yaml](../httpd/schema/openapi.yaml "OpenAPI 3 specs"). If you want to render the schema without importing it manually, you can explore it on [Stoplight](https://sftpgo.stoplight.io/docs/sftpgo/openapi.yaml).
You can generate your own REST client in your preferred programming language, or even bash scripts, using an OpenAPI generator such as [swagger-codegen](https://github.com/swagger-api/swagger-codegen) or [OpenAPI Generator](https://openapi-generator.tech/).
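For example, a hypothetical invocation of the OpenAPI Generator CLI to produce a Python client from the schema could look like this (the input path is relative to the source tree).
```shell
# generate a Python client from the SFTPGo OpenAPI schema
openapi-generator-cli generate -i httpd/schema/openapi.yaml -g python -o /tmp/sftpgo-client
```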


@@ -23,7 +23,7 @@ Some SFTP commands don't work over S3:
- `chtimes`, `chown` and `chmod` will fail. If you want to silently ignore these methods set `setstat_mode` to `1` or `2` in your configuration file
- `truncate`, `symlink`, `readlink` are not supported
- opening a file for both reading and writing at the same time is not supported
- upload resume is not supported
- resuming uploads is not supported
- upload mode `atomic` is ignored since S3 uploads are already atomic
Other notes:


@@ -10,6 +10,7 @@ Here are the supported configuration parameters:
- `PrivateKey`
- `Fingerprints`
- `Prefix`
- `BufferSize`
The mandatory parameters are the endpoint, the username and a password or a private key. If you define both a password and a private key the key is tried first. The provided private key should be PEM encoded, something like this:
@@ -28,3 +29,7 @@ The password and the private key are stored as ciphertext according to your [KMS
SHA256 fingerprints for remote server host keys are optional but highly recommended: if you provide one or more fingerprints the server host key will be verified against them and the connection will be denied if none of the fingerprints provided match that for the server host key.
Specifying a prefix you can restrict all operations to a given path within the remote SFTP server.
Buffering can be enabled by setting a buffer size (in MB) greater than 0. By enabling buffering, the reads and writes, from/to the remote SFTP server, are split into multiple concurrent requests: this allows data to be transferred at a faster rate over high latency networks by overlapping round-trip times. With buffering enabled, resuming uploads and truncate are not supported and a file cannot be opened for both reading and writing at the same time. 0 means disabled.
Some SFTP servers (e.g. AWS Transfer) do not support opening files read/write at the same time; you can enable buffering to work with them.


@@ -36,7 +36,7 @@ SFTPGo supports the following built-in SSH commands:
- `md5sum`, `sha1sum`, `sha256sum`, `sha384sum`, `sha512sum`. Useful to check message digests for uploaded files. These commands will work with any storage backend, but keep in mind that to calculate the hash we need to read the whole file: for remote backends this means downloading the file, for the encrypted backend this means decrypting it.
- `cd`, `pwd`. Some SFTP clients do not support the SFTP SSH_FXP_REALPATH packet type, so they use the `cd` and `pwd` SSH commands to get the initial directory. Currently `cd` does nothing and `pwd` always returns the `/` path.
- `sftpgo-copy`. This is a built-in copy implementation. It allows server side copy for files and directories. The first argument is the source file/directory and the second one is the destination file/directory, for example `sftpgo-copy <src> <dst>`. The command will fail if the destination exists. Copy for directories spanning virtual folders is not supported. Only local filesystem is supported: recursive copy for Cloud Storage filesystems requires a new request for every file in any case, so a real server side copy is not possible.
- `sftpgo-remove`. This is a built-in remove implementation. It allows to remove single files and to recursively remove directories. The first argument is the file/directory to remove, for example `sftpgo-remove <dst>`. Only local filesystem is supported: recursive remove for Cloud Storage filesystems requires a new request for every file in any case, so a server side remove is not possible.
- `sftpgo-remove`. This is a built-in remove implementation. It allows to remove single files and to recursively remove directories. The first argument is the file/directory to remove, for example `sftpgo-remove <dst>`. Only local and encrypted filesystems are supported: recursive remove for Cloud Storage filesystems requires a new request for every file in any case, so a server side remove is not possible.
The following SSH commands are enabled by default:


@@ -1,20 +1,28 @@
# Virtual Folders
A virtual folder is a mapping between a SFTPGo virtual path and a filesystem path outside the user home directory.
The specified paths must be absolute and the virtual path cannot be "/", it must be a sub directory.
The parent directory to the specified virtual path must exist. SFTPGo will try to automatically create any missing parent directory for the configured virtual folders at user login.
A virtual folder is a mapping between a SFTPGo virtual path and a filesystem path outside the user home directory or a different storage provider.
For example, you can have a local user with an S3-based virtual folder or vice versa.
The specified local paths must be absolute and the virtual path cannot be "/", it must be a sub directory.
SFTPGo will try to automatically create any missing parent directory for the configured virtual folders at user login.
For each virtual folder, the following properties can be configured:
- `folder_name`, is the ID for an existing folder. The folder structure contains the absolute filesystem path to expose as a virtual folder
- `filesystem`, this way you can map a local path or a Cloud backend to mount as virtual folders
- `virtual_path`, the SFTPGo absolute path to use to expose the mapped path
- `quota_size`, maximum size allowed as bytes. 0 means unlimited, -1 included in user quota
- `quota_files`, maximum number of files allowed. 0 means unlimited, -1 included in user quota
For example if the configure folder has configured `/tmp/mapped` or `C:\mapped` as filesystem path and you set `/vfolder` as virtual path then SFTPGo users can access `/tmp/mapped` or `C:\mapped` via the `/vfolder` virtual path.
For example if a folder is configured to use `/tmp/mapped` or `C:\mapped` as filesystem path and `/vfolder` as virtual path then SFTPGo users can access `/tmp/mapped` or `C:\mapped` via the `/vfolder` virtual path.
Nested SFTP folders using the same SFTPGo instance (identified using the host keys) are not allowed as they could cause infinite SFTP loops.
The same virtual folder can be shared among users, different folder quota limits for each user are supported.
Folder quota limits can also be included inside the user quota but in this case the folder is considered "private" and sharing it with other users will break user quota calculation.
The calculation of the quota for a given user is obtained as the sum of the files contained in his home directory and those within each defined virtual folder.
Using the REST API you can:
@@ -24,6 +32,3 @@ Using the REST API you can:
- delete a virtual folder. SFTPGo removes folders from the data provider, no files deletion will occur
If you remove a folder from the data provider, any user relationships will be cleared up. If the deleted folder is included inside the user quota you need to do a user quota scan to update its quota. An orphan virtual folder will not be automatically deleted: if you add it again later a quota scan is needed and it could be quite expensive. Anyway, you can easily list the orphan folders using the REST API and delete them if they are not needed anymore.
Overlapping virtual paths are not allowed for the same user, overlapping mapped paths are allowed only if quota tracking is globally disabled inside the configuration file (`track_quota` must be set to `0`).
Virtual folders are supported for local filesystem only.


@@ -3,11 +3,8 @@
You can easily build your own interface using the exposed [REST API](./rest-api.md). Anyway, SFTPGo also provides a basic built-in web interface that allows you to manage users, virtual folders, admins and connections.
With the default `httpd` configuration, the web admin is available at the following URL:
[http://127.0.0.1:8080/web](http://127.0.0.1:8080/web)
[http://127.0.0.1:8080/web/admin](http://127.0.0.1:8080/web/admin)
The default credentials are:
- username: `admin`
- password: `password`
If no admin user is found within the data provider, typically after the initial installation, SFTPGo will ask you to create the first admin. You can also pre-create an admin user by loading initial data or by enabling the `create_default_admin` configuration key. Please take a look [here](./full-configuration.md) for more details.
The web interface can be exposed via HTTPS and may require mutual TLS authentication in addition to administrator credentials.

11
docs/web-client.md Normal file

@@ -0,0 +1,11 @@
# Web Client
SFTPGo provides a basic front-end web interface for your users. It allows end-users to browse and download their files and change their credentials.
The web interface can be globally disabled within the `httpd` configuration via the `enable_web_client` key or on a per-user basis by adding `HTTP` to the denied protocols.
Public keys management can be disabled, per-user, using a specific permission.
The web client allows you to download multiple files or folders as a single zip file; any non-regular files (for example symlinks) will be silently ignored.
With the default `httpd` configuration, the web client is available at the following URL:
[http://127.0.0.1:8080/web/client](http://127.0.0.1:8080/web/client)


@@ -2,9 +2,9 @@
The `WebDAV` support can be enabled by configuring one or more `bindings` inside the `webdavd` configuration section.
Each user can access their home directory using the path `http/s://<SFTPGo ip>:<WevDAVPORT>/`.
Each user can access their home directory using the path `http/s://<SFTPGo ip>:<WebDAV port>/<prefix>`. By default `prefix` is empty. If you define a prefix it must be an absolute URI, for example `/dav`.
WebDAV is quite a different protocol than SCP/FTP, there is no session concept, each command is a separate HTTP request and must be authenticated, to improve performance SFTPGo caches authenticated users. This way SFTPGo don't need to do a dataprovider query and a password check for each request.
WebDAV is quite a different protocol than SFTP/FTP: there is no session concept, each command is a separate HTTP request and must be authenticated. To improve performance SFTPGo caches authenticated users, so it doesn't need to do a data provider query and a password check for each request.
The user caching configuration allows you to set:
@@ -19,12 +19,14 @@ The MIME types caching configurations allows to set the maximum number of MIME t
WebDAV should work as expected for most use cases but there are some minor issues and some missing features.
If you use WebDAV behind a reverse proxy ensure to preserve the `Host` header or `COPY`/`MOVE` operations will fail. For example for apache you have to set `ProxyPreserveHost On`.
Known issues:
- removing a directory tree on Cloud Storage backends could generate a `not found` error when removing the last (virtual) directory. This happens if the client cycles the directories tree itself and removes files and directories one by one instead of issuing a single remove command
- the used [WebDAV library](https://pkg.go.dev/golang.org/x/net/webdav?tab=doc) asks to open a file to execute a `stat` and sometimes reads some bytes to find the content type. Stat calls are executed before and after a download too, so to be able to properly list a directory you need to grant both `list` and `download` permissions, and to be able to upload files you need to grant both `list` and `upload` permissions
- the used `WebDAV library` not always returns a proper error code/message, most of the times it simply returns `Method not Allowed`. I'll try to improve the library error codes in the future
- if an object within a directory cannot be accessed, for example due to OS permissions issues or because is a missing mapped path for a virtual folder, the directory listing will fail. In SFTP/FTP the directory listing will succeed and you'll only get an error if you try to access to the problematic file/directory
- if a file or a directory cannot be accessed, for example due to OS permissions issues or because a mapped path for a virtual folder is missing, it will be omitted from the directory listing. This behavior is different from SFTP/FTP, where you will see the problematic file/directory in the directory listing and only get an error when you try to access it.
We plan to add [Dead Properties](https://tools.ietf.org/html/rfc4918#section-3) support in future releases. We need a design decision here: probably the best solution is to store dead properties inside the data provider, but this could increase its size a lot. Alternatively, we could store them on disk for the local filesystem and as metadata for Cloud Storage, but this means doing a separate `HEAD` request to retrieve dead properties for an S3 file; for big folders this would generate a lot of requests to the Cloud Provider, so I don't like this solution. Another option is to expose a hook and allow you to implement `dead properties` outside SFTPGo.


@@ -3,7 +3,7 @@ package main
import (
"encoding/json"
"fmt"
"io/ioutil"
"io"
"log"
"net/http"
"os"
@@ -85,7 +85,7 @@ func main() {
printResponse(0, "")
}
var authyResponse map[string]interface{}
respBody, err := ioutil.ReadAll(resp.Body)
respBody, err := io.ReadAll(resp.Body)
if err != nil {
printResponse(0, "")
}


@@ -3,7 +3,7 @@ package main
import (
"encoding/json"
"fmt"
"io/ioutil"
"io"
"log"
"net/http"
"os"
@@ -88,7 +88,7 @@ func main() {
printResponse("")
}
var authyResponse map[string]interface{}
respBody, err := ioutil.ReadAll(resp.Body)
respBody, err := io.ReadAll(resp.Body)
if err != nil {
printResponse("")
}


@@ -4,7 +4,7 @@ import (
"bufio"
"encoding/json"
"fmt"
"io/ioutil"
"io"
"net/http"
"os"
"time"
@@ -122,7 +122,7 @@ func main() {
printAuthResponse(-1)
}
var authyResponse map[string]interface{}
respBody, err := ioutil.ReadAll(resp.Body)
respBody, err := io.ReadAll(resp.Body)
if err != nil {
printAuthResponse(-1)
}

17
examples/backup/README.md Normal file

@@ -0,0 +1,17 @@
# Data Backup
The `backup` example script shows how to use the SFTPGo REST API to backup your data.
The script is written in Python and has the following requirements:
- python3 or python2
- python [Requests](https://requests.readthedocs.io/en/master/) module
The provided example tries to connect to an SFTPGo instance running on `127.0.0.1:8080` using the following credentials:
- username: `admin`
- password: `password`
and, if you execute it daily, it saves a different backup file for each day of the week. The backups will be saved within the configured `backups_path`.
Please edit the script according to your needs.
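For example, assuming you install the script as `/usr/local/bin/backup`, you could schedule it daily with a cron entry like the following.
```shell
# run the backup every day at 03:00 as root
echo '0 3 * * * root /usr/local/bin/backup' | sudo tee /etc/cron.d/sftpgo-backup
```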

36
examples/backup/backup Executable file

@@ -0,0 +1,36 @@
#!/usr/bin/env python
from datetime import datetime
import sys
import requests
try:
import urllib.parse as urlparse
except ImportError:
import urlparse
# change base_url to point to your SFTPGo installation
base_url = "http://127.0.0.1:8080"
# set to False if you want to skip TLS certificate validation
verify_tls_cert = True
# set the credentials for a valid admin here
admin_user = "admin"
admin_password = "password"
# get a JWT token
auth = requests.auth.HTTPBasicAuth(admin_user, admin_password)
r = requests.get(urlparse.urljoin(base_url, "api/v2/token"), auth=auth, verify=verify_tls_cert, timeout=10)
if r.status_code != 200:
print("error getting access token: {}".format(r.text))
sys.exit(1)
access_token = r.json()["access_token"]
auth_header = {"Authorization": "Bearer " + access_token}
r = requests.get(urlparse.urljoin(base_url, "api/v2/dumpdata"),
params={"output-file":"backup_{}.json".format(datetime.today().strftime('%w'))},
headers=auth_header, verify=verify_tls_cert, timeout=10)
if r.status_code == 200:
print("backup OK")
else:
print("backup error, status {}, response: {}".format(r.status_code, r.text))


@@ -24,7 +24,7 @@ fields_to_update = {"status":0, "quota_files": 1000, "additional_info":"updated
# get a JWT token
auth = requests.auth.HTTPBasicAuth(admin_user, admin_password)
r = requests.get(urlparse.urljoin(base_url, "api/v2/token"), auth=auth, verify=verify_tls_cert)
r = requests.get(urlparse.urljoin(base_url, "api/v2/token"), auth=auth, verify=verify_tls_cert, timeout=10)
if r.status_code != 200:
print("error getting access token: {}".format(r.text))
sys.exit(1)
@@ -33,14 +33,14 @@ auth_header = {"Authorization": "Bearer " + access_token}
for username in users_to_update:
r = requests.get(urlparse.urljoin(base_url, posixpath.join("api/v2/users", username)),
headers=auth_header, verify=verify_tls_cert)
headers=auth_header, verify=verify_tls_cert, timeout=10)
if r.status_code != 200:
print("error getting user {}: {}".format(username, r.text))
continue
user = r.json()
user.update(fields_to_update)
r = requests.put(urlparse.urljoin(base_url, posixpath.join("api/v2/users", username)),
headers=auth_header, verify=verify_tls_cert, json=user)
headers=auth_header, verify=verify_tls_cert, json=user, timeout=10)
if r.status_code == 200:
print("user {} updated".format(username))
else:


@@ -28,7 +28,7 @@ class ConvertUsers:
def buildUserObject(self, username, password, home_dir, uid, gid, max_sessions, quota_size, quota_files, upload_bandwidth,
download_bandwidth, status, expiration_date, allowed_ip=[], denied_ip=[]):
return {'id':0, 'username':username, 'password':password, 'home_dir':home_dir, 'uid':uid, 'gid':gid,
'max_sessions':max_sessions, 'quota_size':quota_size, 'quota_files':quota_files, 'permissions':{'/':"*"},
'max_sessions':max_sessions, 'quota_size':quota_size, 'quota_files':quota_files, 'permissions':{'/':["*"]},
'upload_bandwidth':upload_bandwidth, 'download_bandwidth':download_bandwidth,
'status':status, 'expiration_date':expiration_date,
'filters':{'allowed_ip':allowed_ip, 'denied_ip':denied_ip}}

Some files were not shown because too many files have changed in this diff.