Compare commits


284 Commits

Author SHA1 Message Date
Nicola Murino
05ae0ea5f2 config: fix bindings backward compatibility 2021-02-06 09:53:31 +01:00
Nicola Murino
8de7a81674 revertprovider: only accept the supported version 2021-02-05 13:55:19 +01:00
Nicola Murino
d32b195a57 httpd: reuse the same compressor among bindings 2021-02-04 22:32:55 +01:00
Nicola Murino
267d9f1831 web ui: allow creating folders from a template 2021-02-04 19:09:43 +01:00
Nicola Murino
17a42a0c11 webdav: add compression support
Fixes #295
2021-02-04 09:06:41 +01:00
Nicola Murino
a219d25cac webdav: update the doc
the user specific path is now gone
2021-02-04 07:46:40 +01:00
Nicola Murino
ce731020a7 webdav: remove the username path prefix
so we have the same URIs for all protocols

Fixes #293
2021-02-04 07:12:04 +01:00
Nicola Murino
fc9082c422 webdav: try to handle HEAD for collection too
The underlying golang webdav library returns Method Not Allowed for
HEAD requests on directories:

https://github.com/golang/net/blob/master/webdav/webdav.go#L210

let's see if we can work around this inside SFTPGo itself in a similar
way as we do for GET.

The HEAD response will not return a Content-Length; we cannot handle
this inside SFTPGo.

Fixes #294
2021-02-03 22:36:13 +01:00
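As a rough illustration of the workaround described above, a wrapper can answer HEAD on collections itself before delegating to the library. A minimal sketch using golang.org/x/net/webdav, not SFTPGo's actual handler:

package main

import (
	"net/http"

	"golang.org/x/net/webdav"
)

func main() {
	dav := &webdav.Handler{
		FileSystem: webdav.Dir("/srv/data"),
		LockSystem: webdav.NewMemLS(),
	}
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// the library answers HEAD on collections with Method Not Allowed,
		// so handle HEAD for directories ourselves, as we do for GET;
		// note that no Content-Length is set, as described above
		if r.Method == http.MethodHead {
			if fi, err := dav.FileSystem.Stat(r.Context(), r.URL.Path); err == nil && fi.IsDir() {
				w.WriteHeader(http.StatusOK)
				return
			}
		}
		dav.ServeHTTP(w, r)
	})
	http.ListenAndServe(":8080", nil)
}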
Nicola Murino
4872ba2ea0 README: add "Sponsors" section 2021-02-03 14:37:11 +01:00
Nicola Murino
70bb3c34ce sftpfs: improve endpoint validation
Validation will fail if the endpoint is not specified as host:port
2021-02-03 11:29:04 +01:00
Nicola Murino
1cde50f050 sftpd: improve logging if filesystem creation fails 2021-02-03 09:45:04 +01:00
Nicola Murino
e9dd4ecdf0 web admin: add CSRF 2021-02-03 08:55:28 +01:00
Nicola Murino
f863530653 JWT: only accepts tokens from the expected header or cookie 2021-02-02 13:11:47 +01:00
Nicola Murino
4f609cfa30 JWT: add token audience
a token issued for the API audience cannot be used for web pages and
vice versa
2021-02-02 09:14:10 +01:00
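Conceptually the audience check looks like the sketch below; it uses github.com/golang-jwt/jwt/v4 purely for illustration, SFTPGo's actual JWT library and audience names may differ:

package main

import (
	"fmt"
	"time"

	"github.com/golang-jwt/jwt/v4"
)

var key = []byte("secret")

func newToken(audience string) (string, error) {
	claims := jwt.RegisteredClaims{
		Audience:  jwt.ClaimStrings{audience},
		ExpiresAt: jwt.NewNumericDate(time.Now().Add(20 * time.Minute)),
	}
	return jwt.NewWithClaims(jwt.SigningMethodHS256, claims).SignedString(key)
}

func checkAudience(tokenString, want string) error {
	claims := &jwt.RegisteredClaims{}
	_, err := jwt.ParseWithClaims(tokenString, claims, func(*jwt.Token) (interface{}, error) {
		return key, nil
	})
	if err != nil {
		return err
	}
	// a token carrying a different audience claim is rejected
	if !claims.VerifyAudience(want, true) {
		return fmt.Errorf("token not valid for audience %q", want)
	}
	return nil
}

func main() {
	apiToken, _ := newToken("API")
	fmt.Println(checkAudience(apiToken, "WebAdmin")) // rejected: wrong audience
}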
Nicola Murino
78bf808322 virtual folders: change dataprovider structure
This way we no longer depend on the local file system path and so we can
add support for cloud backends in future updates
2021-02-01 19:04:15 +01:00
Nicola Murino
afe1da92c5 web UI cookie: set the Secure flag if we are over TLS 2021-01-28 13:29:16 +01:00
Nicola Murino
9985224966 examples: add a script for bulk user update
you can use this sample script as a basis if you need to update
some common parameters for multiple users while preserving the others
2021-01-27 19:18:37 +01:00
Nicola Murino
02679d6df3 web ui: save the state of the tables
the state will be saved for 1 hour
2021-01-27 08:41:21 +01:00
Nicola Murino
c2bbd468c4 REST API: add logout and store invalidated token 2021-01-26 22:35:36 +01:00
Nicola Murino
46ab8f8d78 post-login hook: add the full user JSON serialized
Fixes #284
2021-01-26 18:05:44 +01:00
Nicola Murino
54321c5240 web ui: allow creating multiple users from a template 2021-01-25 21:31:33 +01:00
Nicola Murino
5fcbf2528f html templates: minor improvements 2021-01-24 17:43:54 +01:00
Nicola Murino
ea096db8e4 sftpfs: set the correct file mode 2021-01-23 10:32:15 +01:00
Nicola Murino
0caeb68680 sftpfs: fix stat info 2021-01-23 09:42:49 +01:00
Nicola Murino
2b9ba1d520 web admin: try to uniform UI 2021-01-23 09:28:45 +01:00
Nicola Murino
80f5ccd357 web admin: add backup/restore 2021-01-22 19:42:18 +01:00
Nicola Murino
820169c5c6 windows service: simplify code
update testify to 1.7.0 too
2021-01-21 19:07:13 +01:00
Nicola Murino
aff75953e3 ssh requests: send a reply only if the client requested it 2021-01-21 09:28:41 +01:00
Nicola Murino
c0e09374a8 scp: fix wildcard uploads
Fixes #285
2021-01-20 22:37:59 +01:00
Nicola Murino
57976b4085 httpd: add mTLS and multiple bindings support 2021-01-19 18:59:41 +01:00
Nicola Murino
899f1a1844 improve windows service
ensure the service process exits in any case
2021-01-18 21:46:26 +01:00
Nicola Murino
41a1af863e OpenAPI: minor changes 2021-01-18 13:24:38 +01:00
Nicola Murino
778ec9b88f REST API v2
- add JWT authentication
- admins are now stored inside the data provider
- admin access can be restricted based on the source IP: both proxy
  header and connection IP are checked
- deprecate REST API CLI: it is not relevant anymore

Some other changes to the REST API can still happen before releasing
SFTPGo 2.0.0

Fixes #197
2021-01-17 22:29:08 +01:00
Giorgio Pellero
d42fcc3786 s3: don't paginate to find zero-byte-keyed dirs (#277)
Fixes #275
2021-01-14 12:01:25 +01:00
Nicola Murino
5d4f758c47 GCS: don't paginate to find compat "dirs" 2021-01-12 19:22:12 +01:00
Nicola Murino
a8a17a223a scp: minor improvements
document that we don't support wildcard expansion.

I should refactor scp code ...
2021-01-05 22:32:30 +01:00
Nicola Murino
aa40b04576 update deps 2021-01-05 12:40:49 +01:00
Nicola Murino
daac90c4e1 fix a potential race condition for pre-login and ext auth
hooks

doing something like this:

err = provider.updateUser(u)
...
return provider.userExists(username)

could be racy if another update happens before

provider.userExists(username)

also pass a pointer to updateUser so if the user is modified inside
"validateUser" we can just return the modified user without doing a new
query
2021-01-05 09:50:22 +01:00
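A sketch of the fixed shape; Provider and User are illustrative stand-ins, not SFTPGo's real types:

package main

type User struct {
	Username string
}

// Provider is an illustrative stand-in for the real data provider.
type Provider interface {
	updateUser(u *User) error
}

// update persists u and returns the user that was actually written:
// since updateUser gets a pointer, changes made during validation are
// visible here, and no second, racy userExists query is needed.
func update(p Provider, u *User) (User, error) {
	if err := p.updateUser(u); err != nil {
		return User{}, err
	}
	return *u, nil
}

func main() {}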
Nicola Murino
72b2c83392 defender: allow hot-reloading for safe and block lists 2021-01-04 17:52:14 +01:00
Nicola Murino
c3410a3d91 config: don't log a warning if the config file is not found
we also support configuration via env vars
2021-01-03 17:57:07 +01:00
Nicola Murino
173c1820e1 Go 1.15 is now required
VerifyConnection is not available in 1.14
2021-01-03 17:25:24 +01:00
Nicola Murino
684f4ba1a6 mutual TLS: add support for revocation lists 2021-01-03 17:03:04 +01:00
Nicola Murino
6d84c5b9e3 capture HTTP servers' error logs
otherwise they will be printed to stdout
2021-01-03 10:38:28 +01:00
Nicola Murino
4b522a2455 webdav: refactor server initialization 2021-01-03 09:51:54 +01:00
Nicola Murino
1e1c46ae1b defender: minor docs improvements 2021-01-02 20:02:05 +01:00
Nicola Murino
d6b3acdb62 add REST API for the defender 2021-01-02 19:33:24 +01:00
Nicola Murino
037d89a320 add support for a basic built-in defender
It can help prevent DoS attacks and brute-force password guessing
2021-01-02 14:05:09 +01:00
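A toy sketch of the idea; the real defender differs (observation windows, score weights, safe/block lists). Failed events raise a per-host score, and crossing a threshold bans the host for a while:

package main

import (
	"sync"
	"time"
)

// defender: every failed login raises a per-host score; when the score
// reaches the threshold the host is banned for banTime.
type defender struct {
	mu        sync.Mutex
	scores    map[string]int
	banned    map[string]time.Time
	threshold int
	banTime   time.Duration
}

func (d *defender) AddEvent(ip string) {
	d.mu.Lock()
	defer d.mu.Unlock()
	d.scores[ip]++
	if d.scores[ip] >= d.threshold {
		d.banned[ip] = time.Now().Add(d.banTime)
		delete(d.scores, ip)
	}
}

func (d *defender) IsBanned(ip string) bool {
	d.mu.Lock()
	defer d.mu.Unlock()
	return time.Now().Before(d.banned[ip])
}

func main() {
	d := &defender{
		scores:    make(map[string]int),
		banned:    make(map[string]time.Time),
		threshold: 15,
		banTime:   30 * time.Minute,
	}
	d.AddEvent("203.0.113.7")
	_ = d.IsBanned("203.0.113.7")
}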
Nicola Murino
30eb3c4a99 update OpenAPI schema 2020-12-29 19:33:04 +01:00
Nicola Murino
0966d44c0f httpd: add support for listening over a Unix-domain socket
Fixes #266
2020-12-29 19:02:56 +01:00
Nicola Murino
40e759c983 FTP: add support for client certificate authentication 2020-12-29 09:20:09 +01:00
Nicola Murino
141ca6777c webdav: add support for client certificate authentication
Fixes #263
2020-12-28 19:48:23 +01:00
Nicola Murino
3c16a19269 FTP: update ftpserverlib
fixes another sneaky bug
2020-12-28 09:22:52 +01:00
Nicola Murino
b3c6d79f51 FTP: add support for ASCII transfer mode
the default remains binary; a client has to explicitly request an
ASCII transfer
2020-12-27 09:48:56 +01:00
Nicola Murino
0c56b6d504 nfpm: update to 2.1.0 2020-12-26 19:14:12 +01:00
Nicola Murino
3d2da88da9 web ui: update js and css deps 2020-12-26 18:47:09 +01:00
Nicola Murino
80c06d6b59 clone: disable decrypt error test for memory provider
This test cannot work using the memory provider: we cannot change the provider
for a kms secret without reloading it from JSON, and the memory provider
will never reload users
2020-12-26 15:57:01 +01:00
Nicola Murino
e536a638c9 web UI: improve user cloning 2020-12-26 15:11:38 +01:00
Jochen Munz
bc397002d4 Feature: Clone existing user via web admin (#259)
UI based cloning of an existing user. The "add user" screen is prepopulated with existing user data.

Resolves drakkan/sftpgo#225
2020-12-26 14:58:59 +01:00
Nicola Murino
2a95d031ea FTP: add support for AVBL command 2020-12-25 11:14:08 +01:00
Nicola Murino
1dce1eff48 improve FTP support
- allow disabling active mode
- allow disabling SITE commands
- add optional support for calculating hash values of files
- add optional support for the non standard COMB command
2020-12-24 18:48:06 +01:00
Jochen Munz
5b1d8666b3 S3fs: Handle non-ascii filename in rename operations (#257)
SFTP is based on UTF-8 filenames, so non-ASCII filenames get transported with utf-8 escaped character sequences.
At least for the S3fs provider, if such a file is stored in a nested path it cannot be used as the source for a rename operation.

This adds the necessary escaping of the path fragments.

The patch is not required for MinIO but it doesn't hurt
2020-12-24 11:13:42 +01:00
Nicola Murino
187a5b1908 sftpd: properly handle listener accept errors
continue on temporary errors and exit from the serve loop for the
other ones
2020-12-23 19:53:07 +01:00
Nicola Murino
7ab7941ddd sftpfs: fix race condition 2020-12-23 17:15:55 +01:00
Nicola Murino
c69d63c1f8 add support for multiple bindings
Fixes #253
2020-12-23 16:12:30 +01:00
Nicola Murino
743b350fdd httpd: add support for routing undefined HEAD requests to GET handlers
HEAD responses will not include a body but the Content-Length will be
set as for the equivalent GET request

Fixes #255
2020-12-20 10:22:16 +01:00
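One way to implement such routing, sketched with the standard library only (SFTPGo's actual httpd code may differ): serve HEAD through the GET handler while discarding the body, so headers and Content-Length match the GET response:

package main

import "net/http"

// bodyDiscarder reports writes as successful without emitting them, so
// headers (including an explicit Content-Length) go out but the body does not.
type bodyDiscarder struct{ http.ResponseWriter }

func (b bodyDiscarder) Write(p []byte) (int, error) { return len(p), nil }

// headToGet serves HEAD requests through the GET handlers: the response
// carries the same headers and Content-Length as the equivalent GET,
// but no body.
func headToGet(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Method == http.MethodHead {
			r2 := r.Clone(r.Context())
			r2.Method = http.MethodGet
			next.ServeHTTP(bodyDiscarder{w}, r2)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Length", "2")
		w.Write([]byte("ok"))
	})
	http.ListenAndServe(":8080", headToGet(mux))
}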
Nicola Murino
1ac610da1a fix build on Windows 2020-12-18 16:22:52 +01:00
Nicola Murino
bcf0fa073e telemetry server: add optional https and authentication 2020-12-18 16:04:42 +01:00
Nicola Murino
140380716d remove unused constant 2020-12-18 10:05:08 +01:00
Nicola Murino
143df87fee add some docs for telemetry server
move pprof to the telemetry server only
2020-12-18 09:47:22 +01:00
Márk Sági-Kazár
6d895843dc feat: add new telemetry server (#254)
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-12-18 09:01:19 +01:00
Nicola Murino
65e6d5475f update ftpserverlib to include the latest fixes and features 2020-12-18 08:49:32 +01:00
Nicola Murino
15609cdbc7 fix build on FreeBSD
see https://github.com/otiai10/copy/pull/36
2020-12-17 14:46:31 +01:00
Nicola Murino
f876c728ad add support for the latest ftpserverlib and azblob versions 2020-12-17 13:40:36 +01:00
Nicola Murino
f34462e3c3 add support for limiting max concurrent client connections 2020-12-15 19:29:30 +01:00
Nicola Murino
ea0bf5e4c8 ensure 64 bit alignment for 64 bit struct fields access atomically 2020-12-14 14:52:36 +01:00
Nicola Murino
14d1b82f6b minor README improvements 2020-12-14 07:54:27 +01:00
Nicola Murino
ed43ddd79d enable hash commands for any supported backend 2020-12-13 15:11:55 +01:00
Nicola Murino
23192a3be7 update nfpm to 1.10.3 2020-12-13 14:29:59 +01:00
Nicola Murino
72e3d464b8 sftpfs: fix fingerprints copy for memory provider 2020-12-12 10:56:02 +01:00
Nicola Murino
a6985075b9 add sftpfs storage backend
Fixes #224
2020-12-12 10:31:09 +01:00
dharmendra kariya
4d5494912d Update README.md (#245) 2020-12-11 08:22:50 +01:00
Nicola Murino
50982229e1 REST API: add a method to get the status of the services
added a status page to the built-in web admin
2020-12-08 11:18:34 +01:00
dharmendra kariya
6977a4a18b Update full-configuration.md (#240)
just deleting a redundant line
2020-12-08 09:09:21 +01:00
Nicola Murino
ab1bf2ad44 update deps 2020-12-06 22:20:53 +01:00
Nicola Murino
c451f742aa revertprovider: the crypted provider was not supported in v4
also ensure the kms is initialized before the dataprovider; it could be
needed to downgrade secrets from cloud kms providers
2020-12-06 10:36:48 +01:00
Nicola Murino
034d89876d webdav: fix proppatch handling
also respect login delay for cached webdav users and check the home dir as
soon as the user authenticates

Fixes #239
2020-12-06 08:19:41 +01:00
Nicola Murino
4a88ea5c03 add Data At Rest Encryption support 2020-12-05 13:48:13 +01:00
Nicola Murino
95c6d41c35 config: make the config file relative to the config dir
a configuration parsing error is now fatal
2020-12-03 17:16:35 +01:00
Márk Sági-Kazár
2a9ed0abca Accept a config file path instead of a config name
Config name is a Viper concept used for searching a specific file
in various paths with various extensions.

Making it configurable is usually not a useful feature
as users mostly want to define a full or relative path
to a config file.

This change replaces config name with config file.
2020-12-03 16:23:33 +01:00
Nicola Murino
3ff6b1bf64 fix lint warnings 2020-12-02 10:02:08 +01:00
Nicola Murino
a67276ccc2 add build tags to disable kms providers 2020-12-02 09:44:18 +01:00
Nicola Murino
87b51a6fd5 kms: remember if a secret was saved without a master key
So we will be able to decrypt secrets stored without a master key if
such a key is provided later
2020-12-01 22:18:16 +01:00
Nicola Murino
940836b25b add a note about using sqlite provider over cifs shares
See #235
2020-11-30 21:59:56 +01:00
Nicola Murino
634b723b5d add KMS support
Fixes #226
2020-11-30 21:46:34 +01:00
Nicola Murino
af0c9b76c4 update nfpm to 1.10.2 2020-11-27 18:07:27 +01:00
Nicola Murino
2142ef20c5 fix some typos 2020-11-26 22:18:12 +01:00
Nicola Murino
224ce5fe81 add revertprovider subcommand
Fixes #233
2020-11-26 22:08:33 +01:00
Nicola Murino
4bb9d07dde user: add a free text field
Fixes #230
2020-11-25 22:26:34 +01:00
Nicola Murino
2054dfd83d create the credential directory when needed
The credentials dir is currently required only for GCS users if the
prefer database credentials setting is false, so defer its creation
and don't fail to start the services if this directory is missing
2020-11-25 14:18:12 +01:00
Nicola Murino
6699f5c2cc initial data loading: an error is no longer fatal
therefore it does not prevent the services from starting
2020-11-25 09:18:36 +01:00
Estel Smith
70bde8b2bc memory provider: print a log if loading the initial dump fails
therefore this error is no longer fatal and does not prevent the services
from starting

Fixes #229
2020-11-25 09:15:23 +01:00
Nicola Murino
ff73e5f53c CI Docker: don't build the full image on pull requests
it will fail since the slim tag is not pushed
2020-11-24 18:51:10 +01:00
Nicola Murino
0609188d3f allow disabling the SFTP service
Fixes #228
2020-11-24 13:44:57 +01:00
Nicola Murino
99cd1ccfe5 S3: fix empty directory detection
when listing an empty directory MinIO returns no contents while S3 returns
1 object with the key equal to the prefix. Make detection work in both
cases

Fixes #227
2020-11-23 15:36:42 +01:00
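A sketch of the detection, using github.com/aws/aws-sdk-go for illustration (bucket and prefix names are made up):

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

// hasContents reports whether a "directory" prefix holds real objects.
// MinIO lists nothing for an empty directory, while S3 returns one
// zero-byte object whose key equals the prefix itself, so both shapes
// must count as empty.
func hasContents(svc *s3.S3, bucket, prefix string) (bool, error) {
	out, err := svc.ListObjectsV2(&s3.ListObjectsV2Input{
		Bucket:  aws.String(bucket),
		Prefix:  aws.String(prefix),
		MaxKeys: aws.Int64(2),
	})
	if err != nil {
		return false, err
	}
	for _, obj := range out.Contents {
		if *obj.Key != prefix { // skip the placeholder key
			return true, nil
		}
	}
	return false, nil
}

func main() {
	svc := s3.New(session.Must(session.NewSession()))
	found, err := hasContents(svc, "mybucket", "mydir/")
	fmt.Println(found, err)
}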
Nicola Murino
dccc583b5d add a dedicated struct to store encrypted credentials
GCS credentials are now encrypted too, both on disk and inside the
provider.

The data provider is automatically migrated and loading data will accept
the old format too, but you should upgrade to the new format to avoid
future issues
2020-11-22 21:53:04 +01:00
Nicola Murino
ac435b7890 back to development 2020-11-18 21:53:23 +01:00
Nicola Murino
37fc589896 set version to 1.2.2 2020-11-18 19:24:19 +01:00
Nicola Murino
5d789a01b7 update pkg/sftp
These patches are now merged upstream:

https://github.com/pkg/sftp/pull/392
https://github.com/pkg/sftp/pull/393
2020-11-18 19:06:12 +01:00
Nicola Murino
ca0ff0d630 add a File interface so we can avoid using os.File directly 2020-11-17 19:36:39 +01:00
Nicola Murino
969b38586e update pkg/sftp to fix requests accumulation
Include this patch:

https://github.com/pkg/sftp/pull/393

to avoid request accumulation (no underlying fd) if we return an error.
Before this patch the accumulated requests were released only when the
client disconnected.

We use our fork for now to include

https://github.com/pkg/sftp/pull/392

too
2020-11-16 19:49:26 +01:00
Nicola Murino
e3eca424f1 web admin: allow both allowed and denied extensions/patterns for a dir
this fixes a regression introduced in the previous commit
2020-11-16 19:21:50 +01:00
Nicola Murino
a6355e298e add support for limiting files using shell-like patterns
Fixes #209
2020-11-15 22:04:48 +01:00
Ryan Gough
c0f47a58f2 web admin: clarify that the directories for permissions are relative
Fixes #222
2020-11-15 09:11:36 +01:00
Nicola Murino
dc845fa2f4 webdav: fix permission errors if the client tries to read multiple times 2020-11-14 19:19:41 +01:00
Nicola Murino
7e855c83b3 deb packages: changes priority to optional, extra is deprecated 2020-11-14 13:54:14 +01:00
Nicola Murino
3b8a9e0963 back to development 2020-11-14 11:01:28 +01:00
Nicola Murino
4445834fd3 set version to 1.2.1 2020-11-14 09:28:53 +01:00
Nicola Murino
19a619ff65 Linux pkgs: use python3 for API CLI inside generated deb 2020-11-14 09:10:45 +01:00
Nicola Murino
66a538dc9c CI: improve docker build action 2020-11-13 21:55:53 +01:00
Nicola Murino
1a6863f4b1 GCS uploads: check Close() error
some code simplification too
2020-11-13 18:40:18 +01:00
Nicola Murino
fbd9919afa docker: add slim image 2020-11-12 22:40:53 +01:00
Nicola Murino
eec8bc73f4 docker: remove entrypoint
remove the VOLUME instruction from the Dockerfile so you can change
the user in your own derived image like this:

FROM drakkan/sftpgo:tag
USER root
RUN chown -R 1100:1100 /etc/sftpgo && chown 1100:1100 /var/lib/sftpgo /srv/sftpgo
USER 1100:1100
2020-11-12 11:53:05 +01:00
Nicola Murino
5720d40fee add setstat_mode 2
in this mode chmod/chtimes/chown can be silently ignored only for cloud
based file systems

Fixes #223
2020-11-12 10:39:46 +01:00
Nicola Murino
38e0cba675 docker: add an entrypoint
running as an arbitrary user is now possible by setting the following
env vars too:

SFTPGO_PUID
SFTPGO_PGID

Fixes #217
2020-11-10 23:11:57 +01:00
Nicola Murino
4c5a0d663e sftpd: return the error Operation Unsupported for unexpected reads
a cloud based file cannot be opened for read and write at the same
time. Return a proper error if a client tries to do this.

It can happen only for SFTP
2020-11-09 21:01:56 +01:00
Nicola Murino
093df15fac CI: add ppc64le support 2020-11-09 18:39:36 +01:00
Nicola Murino
957430e675 back to development 2020-11-08 12:56:37 +01:00
Nicola Murino
14035f407e set version to 1.2.0 2020-11-08 06:14:03 +01:00
Nicola Murino
bf2b2525a9 CI: build deb/rpm for arm64 2020-11-07 19:29:16 +01:00
Nicola Murino
4edb9cd6b9 simplify some code 2020-11-07 18:05:47 +01:00
Nicola Murino
c38d242bea docker: allow running as an arbitrary user 2020-11-06 10:18:29 +01:00
Nicola Murino
c6ab6f94e7 azblob: a container-level SAS cannot access container properties
so return the root directory without checking if the bucket exists
2020-11-05 15:03:35 +01:00
Nicola Murino
36151d1ba9 subsystem mode: add base-home-dir flag 2020-11-05 12:12:11 +01:00
Nicola Murino
1d5d184720 webdav file: ensure the reader is closed only once 2020-11-05 09:30:38 +01:00
Nicola Murino
0119fd03a6 webdav: user caching is now mandatory
we cache the lock system with the user; without user caching we cannot
support locks for resources
2020-11-04 22:29:25 +01:00
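A minimal sketch of why caching is mandatory (types are illustrative): the lock system lives in memory and must be the same instance across requests, so it is stored alongside the cached user:

package main

import (
	"sync"

	"golang.org/x/net/webdav"
)

type User struct{ Username string }

// cachedUser couples the authenticated user with its lock system:
// WebDAV locks are held in memory, so the same LockSystem instance must
// be reused across requests for resource locks to keep working.
type cachedUser struct {
	user User
	ls   webdav.LockSystem
}

var (
	mu    sync.Mutex
	cache = make(map[string]cachedUser)
)

func getCachedUser(username string) cachedUser {
	mu.Lock()
	defer mu.Unlock()
	cu, ok := cache[username]
	if !ok {
		cu = cachedUser{user: User{Username: username}, ls: webdav.NewMemLS()}
		cache[username] = cu
	}
	return cu
}

func main() { _ = getCachedUser("alice") }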
Nicola Murino
0a14297b48 webdav: performance improvements and bug fixes
we need my custom golang/x/net/webdav fork for now

https://github.com/drakkan/net/tree/sftpgo
2020-11-04 19:11:40 +01:00
Nicola Murino
442efa0607 docker: add ppc64le support
Thanks to OSU Open Source Lab for making this possible
2020-11-03 08:47:30 +01:00
Nicola Murino
6ad4cc317c cloud backends: stat and other performance improvements 2020-11-02 19:16:12 +01:00
Nicola Murino
57bec976ae document healthz endpoint 2020-11-01 10:39:10 +01:00
Nicola Murino
641493e31a fix default config file
restore a setting changed for a local test
2020-10-31 11:34:50 +01:00
Nicola Murino
5b4e9ad982 windows setup: allow installation on older Windows versions
The REST API CLI will not be installed on versions < 10

Fixes #205
2020-10-31 11:04:24 +01:00
Nicola Murino
950a5ad9ea add a recoverer where appropriate
I have never seen this, but a malformed packet can easily crash pkg/sftp
2020-10-31 11:02:04 +01:00
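The recoverer pattern itself is simple; a sketch (parse is a made-up stand-in for the real packet handling):

package main

import "log"

// handlePacket processes one untrusted packet; the deferred recoverer
// turns a panic in the parser into a logged error instead of a crashed
// daemon.
func handlePacket(payload []byte) {
	defer func() {
		if r := recover(); r != nil {
			log.Printf("recovered from panic while handling packet: %v", r)
		}
	}()
	parse(payload)
}

func parse(payload []byte) {
	_ = payload[42] // a malformed (too short) packet panics here
}

func main() {
	handlePacket([]byte{0x01})
	log.Println("still alive")
}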
Nicola Murino
fcfdd633f6 Azure Blob: update SDK and add access tier support 2020-10-30 22:17:17 +01:00
Nicola Murino
ebb18fa57d config: manually set viper defaults
so we can override the config via env vars even without a configuration file

Fixes #208
2020-10-30 18:58:57 +01:00
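A sketch of the viper pattern; the env naming follows the SFTPGO_DATA_PROVIDER__* style seen elsewhere in this log, but the exact keys here are illustrative:

package main

import (
	"fmt"
	"strings"

	"github.com/spf13/viper"
)

func main() {
	// registering the default explicitly makes the key known to viper,
	// so the matching env var is honored even when no config file exists
	viper.SetDefault("data_provider.driver", "sqlite")
	viper.SetEnvPrefix("sftpgo")
	viper.SetEnvKeyReplacer(strings.NewReplacer(".", "__"))
	viper.AutomaticEnv()
	// SFTPGO_DATA_PROVIDER__DRIVER=memory now overrides the default
	fmt.Println(viper.GetString("data_provider.driver"))
}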
Nicola Murino
58b0ca585c docs: clarify that the config dir is the working dir by default
Fixes #211
2020-10-29 21:54:02 +01:00
Nicola Murino
5bc1c2de2d add a link to the helm chart
Fixes #210
2020-10-29 21:50:21 +01:00
Mark Sagi-Kazar
ec00613202 feat(httpd): add new healthz endpoint
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-29 21:37:30 +01:00
Mark Sagi-Kazar
02ec3a5f48 refactor(httpd): move every route under a new group
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-29 21:37:30 +01:00
Nicola Murino
ac3bae00fc add support for SFTP subsystem mode
Fixes #204
2020-10-29 19:23:33 +01:00
Nicola Murino
e54828a7b8 add metrics for Azure Blob storage 2020-10-26 19:01:17 +01:00
Nicola Murino
f2acde789d portable mode: add Azure Blob support 2020-10-25 21:42:43 +01:00
Nicola Murino
9b49f63a97 azure: implement multipart uploads using the low-level API
The high-level wrapper seems to hang if there are network issues
2020-10-25 17:41:04 +01:00
Nicola Murino
14bcc6f2fc s3, azblob: check upper limit for part size 2020-10-25 12:10:11 +01:00
Nicola Murino
975a2f3632 sftpd: fix the max upload file size check for overwrites
improved test case too
2020-10-25 08:52:31 +01:00
Nicola Murino
5ff8f75917 add Azure Blob support 2020-10-25 08:18:48 +01:00
Sean Hildebrand
db7e81e9d0 add prefer_database_credentials configuration parameter
When true, users' Google Cloud Storage credentials will be written to
the data provider instead of disk.
Pre-existing credentials on disk will be used as a fallback

Fixes #201
2020-10-22 10:42:40 +02:00
Nicola Murino
6a8039e76a sftpd: log fingerprints for used host keys 2020-10-21 14:27:58 +02:00
Mark Sagi-Kazar
56bf8364cd test: add test for InitializeActionHandler
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-21 07:23:33 +02:00
Mark Sagi-Kazar
75750e3a79 feat: add support for custom action hooks
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-21 07:23:33 +02:00
Nicola Murino
bb5207ad77 Add support for loading users/folders on startup
Fixes #161
2020-10-20 18:42:37 +02:00
Nicola Murino
b51d795e04 sftpd: auto generate an ed25519 host key too 2020-10-19 14:30:40 +02:00
Nicola Murino
d12819932a update cobra to v1.1.1
this version fixes the man page generation so we don't need to use
our branch anymore
2020-10-18 21:52:42 +02:00
Nicola Murino
d812c86812 docker: push images to GHCR too
use numeric id for user inside Dockerfile
2020-10-18 19:18:51 +02:00
Nicola Murino
1625cd5a9f back to development 2020-10-18 11:09:16 +02:00
Nicola Murino
756c3d0503 fix man page generation
other minor changes
2020-10-17 22:14:04 +02:00
Nicola Murino
f884447b26 rpm: set proper permissions for /var/lib/sftpgo and /srv/sftpgo
it seems we have to check the permissions after each update,
probably because nfpm defines these dirs as empty folders
2020-10-15 10:01:31 +02:00
Nicola Murino
555394b95e Linux pkgs: move data directory to /srv/sftpgo 2020-10-14 22:25:58 +02:00
Nicola Murino
00510a6af8 docker docs: fix image name 2020-10-14 08:13:24 +02:00
Nicola Murino
6c0839e197 Improve docker images 2020-10-14 07:46:36 +02:00
Ilias Trichopoulos
5b79379c90 Fix typo in Twilio name 2020-10-12 11:36:14 +02:00
Nicola Murino
47fed45700 Improve Linux packages 2020-10-11 16:23:50 +02:00
Nicola Murino
80d695f3a2 back to development 2020-10-11 09:29:17 +02:00
Nicola Murino
8d4f40ccd2 release workflow add initprovider again 2020-10-10 22:29:04 +02:00
Nicola Murino
765bad5edd set version to 1.1.0 2020-10-10 22:09:48 +02:00
Nicola Murino
0c0382c9b5 docker: disable scheduled build
We already have an edge version built after each commit
2020-10-10 20:15:34 +02:00
Nicola Murino
bbab6149e8 fix windows service: was broken in the latest commit 2020-10-09 22:42:13 +02:00
Nicola Murino
ce9387f1ab update dependencies and some docs 2020-10-09 20:25:42 +02:00
Nicola Murino
d126c5736a Docker: add Debian based image 2020-10-08 21:43:13 +02:00
Nicola Murino
5048d54d32 PPA: add source files used to build the packages 2020-10-08 18:20:15 +02:00
Nicola Murino
f22fe6af76 remove py extension from REST API CLI 2020-10-08 16:02:04 +02:00
Mark Sagi-Kazar
8034f289d1 Fix empty env context in nightly builds
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-08 15:48:40 +02:00
Nicola Murino
eed61ac510 Dockerfile: add a FEATURES build arg
This ARG allows disabling some optional features and might be
useful if you build the image yourself
2020-10-07 20:04:02 +02:00
Nicola Murino
412d6096c0 Linux pkgs: fix postinstall scripts 2020-10-06 18:18:43 +02:00
Nicola Murino
c289ae07d2 Docker workflow: explicitly set image labels
while waiting for https://github.com/docker/build-push-action/issues/165
to be fixed.

Some minor changes to the default configuration for Linux packages
2020-10-06 18:03:55 +02:00
Nicola Murino
87f78b07b3 docker: add some docs and build for arm64 too 2020-10-06 13:59:31 +02:00
Mark Sagi-Kazar
5e2db77ef9 refactor: add an enum for filesystem providers
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 21:40:21 +02:00
Nicola Murino
c992072286 data provider: add a setting to prevent auto-update 2020-10-05 19:42:33 +02:00
Nicola Murino
0ef826c090 docker package: fix description 2020-10-05 17:24:09 +02:00
Mark Sagi-Kazar
5da75c3915 ci: enable docker build
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 16:32:59 +02:00
Nicola Murino
8222baa7ed Dockerfile: minor changes 2020-10-05 16:31:22 +02:00
Mark Sagi-Kazar
7b76b51314 feat: configure database path using configuration
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 16:15:06 +02:00
Mark Sagi-Kazar
c96dbbd3b5 feat: save credentials to /var/lib/sftpgo
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 16:15:06 +02:00
Mark Sagi-Kazar
da6ccedf24 feat: save database to /var/lib/sftpgo
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 16:15:06 +02:00
Mark Sagi-Kazar
13b37a835f revert: boltdb, sqlite is not automatically initialized
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 16:15:06 +02:00
Mark Sagi-Kazar
863fa33309 feat: install additional packages
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 16:15:06 +02:00
Mark Sagi-Kazar
9f4c54a212 refactor: make /var/lib/sftpgo the user home
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 16:15:06 +02:00
Mark Sagi-Kazar
2a7bff4c0e feat: switch to boltdb by default to make the container work out of the box
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 16:15:06 +02:00
Mark Sagi-Kazar
17406d1aab fix: permission issue caused by root owning the volume
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 16:15:06 +02:00
Mark Sagi-Kazar
6537c53d43 feat: add host_keys under /var/lib/sftpgo
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 16:15:06 +02:00
Mark Sagi-Kazar
b4bd10521a feat: move data under /var/lib/sftpgo
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 16:15:06 +02:00
Mark Sagi-Kazar
65cbef1962 feat: move backups under /var/lib/sftpgo
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 16:15:06 +02:00
Mark Sagi-Kazar
a8d355900a fix: missing sha from docker image on GHA
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 16:15:06 +02:00
Mark Sagi-Kazar
ffd9c381ce feat: add workflow for building docker image
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 16:15:06 +02:00
Mark Sagi-Kazar
2a0bce0beb feat: add dockerfile
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 16:15:06 +02:00
Nicola Murino
f1f7b81088 logger: don't print connection_id if empty
Fixes #183
2020-10-05 15:51:17 +02:00
Nicola Murino
f9827f958b sftpd auto host keys: try to auto-create parent dir if missing 2020-10-05 14:16:57 +02:00
Nicola Murino
3e2afc35ba data provider: try to automatically initialize it if required 2020-10-05 12:55:49 +02:00
Ilias Trichopoulos
c65dd86d5e Fix typos (#181) 2020-10-05 11:29:18 +02:00
Nicola Murino
2d6c0388af update deps 2020-10-04 18:29:42 +02:00
Nicola Murino
4d19d87720 pkgs: use glob notation to include static folder 2020-10-02 18:16:49 +02:00
Nicola Murino
5eabaf98e0 gcs: remove a superfluous debug log 2020-09-29 09:17:08 +02:00
Nicola Murino
d1f0e9ae9f GCS: implement MimeTyper interface 2020-09-28 22:12:46 +02:00
Thomas Blommaert
cd56039ab7 GCS mime-type detection (#179)
Fixes #178
2020-09-28 21:52:18 +02:00
Nicola Murino
55515fee95 update deps, GCS can now finally use attribute selection
See https://github.com/googleapis/google-cloud-go/pull/2661
2020-09-28 12:51:19 +02:00
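Attribute selection in cloud.google.com/go/storage looks roughly like this sketch (bucket and prefix are made up):

package main

import (
	"context"
	"log"

	"cloud.google.com/go/storage"
	"google.golang.org/api/iterator"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	q := &storage.Query{Prefix: "mydir/"}
	// only fetch the attributes we need, cutting response size
	if err := q.SetAttrSelection([]string{"Name", "Size", "Updated"}); err != nil {
		log.Fatal(err)
	}
	it := client.Bucket("mybucket").Objects(ctx, q)
	for {
		attrs, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		log.Println(attrs.Name, attrs.Size)
	}
}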
Nicola Murino
13d43a2d31 improve some docs 2020-09-27 09:24:10 +02:00
Nicola Murino
001261433b howto postgres-s3: update to use the debian package 2020-09-26 19:28:56 +02:00
Nicola Murino
03bf595525 automatically build deb and rpm Linux packages
The packages are built after each tag/commit

Fixes #176
2020-09-26 14:07:24 +02:00
Nicola Murino
4ebedace1e systemd unit: run as "sftpgo" system user
Update the docs too

Fixes #177
2020-09-25 18:23:04 +02:00
Stephan Müller
b23276c002 Set verbosity for go commands in docker build (#174) 2020-09-21 19:33:44 +02:00
Nicola Murino
bf708cb8bc osfs: improve isSubDir check 2020-09-21 19:32:33 +02:00
Nicola Murino
a550d082a3 portable mode: advertise WebDAV service if requested 2020-09-21 16:08:32 +02:00
Nicola Murino
6c1a7449fe ssh commands: return better error messages
This improves the fix for #171 and returns better error messages for
SSH commands other than SCP too
2020-09-19 10:14:30 +02:00
Nicola Murino
f0c9b55036 dataprovider: improve user validation errors
Fixes #170
2020-09-18 19:21:24 +02:00
Nicola Murino
209badf10c scp: return better error messages
Fixes #171
2020-09-18 19:13:09 +02:00
Nicola Murino
242dde4480 sftpd: ensure idle connections are always closed
after the last commit this wasn't the case anymore

Completely fixes #169
2020-09-18 18:15:28 +02:00
Nicola Murino
2df0dd1f70 sshd: map each channel to a new connection
Fixes #169
2020-09-18 10:52:53 +02:00
Nicola Murino
98a6d138d4 sftpd: add a test case to ensure we return sftp.ErrSSHFxNoSuchFile ...
if stat/lstat fails on a missing file
2020-09-17 12:30:48 +02:00
Nicola Murino
38f06ab373 ftpd: fix TLS for active connections
See https://github.com/fclairamb/ftpserverlib/issues/177

Some minor doc improvements
2020-09-17 09:45:40 +02:00
Nicola Murino
3c1300721c add some basic how-to style documents 2020-09-13 19:43:56 +02:00
Nicola Murino
61003c8079 sftpd: add lstat support 2020-09-11 09:30:25 +02:00
Nicola Murino
01850c7399 REST API: remove status from ApiResponse
it duplicates the HTTP status header
2020-09-08 09:45:21 +02:00
Nicola Murino
b9c381e26f sftpd: update pkg/sftp
The patch to open a file in read/write mode is now merged
2020-09-06 11:40:31 +02:00
Nicola Murino
542554fb2c replace the library used to verify UNIX crypt(3) passwords 2020-09-04 21:08:09 +02:00
Nicola Murino
bdf18fa862 password hashing: expose argon2 options
So the hashing complexity can be changed depending on available
memory/CPU resources and business requirements
2020-09-04 17:09:31 +02:00
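A sketch of configurable argon2id hashing with golang.org/x/crypto/argon2; the option names and the encoded format here are illustrative and omit details such as the argon2 version field:

package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"

	"golang.org/x/crypto/argon2"
)

// tunable parameters, e.g. loaded from the configuration
type argon2Options struct {
	Memory      uint32 // KiB
	Iterations  uint32
	Parallelism uint8
}

func hashPassword(password string, o argon2Options) (string, error) {
	salt := make([]byte, 16)
	if _, err := rand.Read(salt); err != nil {
		return "", err
	}
	key := argon2.IDKey([]byte(password), salt, o.Iterations, o.Memory, o.Parallelism, 32)
	return fmt.Sprintf("$argon2id$m=%d,t=%d,p=%d$%s$%s",
		o.Memory, o.Iterations, o.Parallelism,
		base64.RawStdEncoding.EncodeToString(salt),
		base64.RawStdEncoding.EncodeToString(key)), nil
}

func main() {
	h, _ := hashPassword("secret", argon2Options{Memory: 65536, Iterations: 1, Parallelism: 2})
	fmt.Println(h)
}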
Nicola Murino
afc411c51b adjust runtime.GOMAXPROCS to match the container CPU quota, if any 2020-09-03 18:09:45 +02:00
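One common way to do this is go.uber.org/automaxprocs, which adjusts GOMAXPROCS from the cgroup CPU quota at init time; whether SFTPGo uses this exact library or its own logic is not shown here, so treat this as a sketch:

package main

import (
	"fmt"
	"runtime"

	_ "go.uber.org/automaxprocs" // sets GOMAXPROCS from the CPU quota at init
)

func main() {
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
}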
Nicola Murino
a59163e56c multi-step auth: don't advertise the password method if it is disabled
also rename the setting to password_authentication so it is more like
OpenSSH, add some test cases and improve documentation
2020-09-01 19:34:40 +02:00
Giorgio Pellero
8391b19abb Add password_disabled bool to sftpd config, disables password auth callback (#165) 2020-09-01 19:26:33 +02:00
Nicola Murino
3925c7ff95 REST API/Web admin: add a parameter to disconnect a user after an update
This way you can force the user to log in again and so use the updated
configuration.

A deleted user will be automatically disconnected.

Fixes #163

Improved some docs too.
2020-09-01 16:10:26 +02:00
Nicola Murino
dbed110d02 WebDAV: add caching for authenticated users
In this way we get a big performance boost
2020-08-31 19:25:17 +02:00
Giorgio Pellero
f978355520 Fix "compatible" typo in README.md (#162) 2020-08-31 13:43:24 +02:00
Nicola Murino
4748e6f54d sftpd: handle read and write from the same handle (#158)
Fixes #155
2020-08-31 06:45:22 +02:00
Nicola Murino
91a4c64390 fix initprovider exit code for MySQL and PostgreSQL 2020-08-30 14:00:45 +02:00
Nicola Murino
600a107699 initprovider: check if the provider is already initialized
exit with code 0 if no initialization is required
2020-08-30 13:50:43 +02:00
Nicola Murino
2746c0b0f1 move stat to base connection and differentiate between Stat and Lstat
we will use Lstat once it is exposed in pkg/sftp
2020-08-25 18:23:00 +02:00
Nicola Murino
701a6115f8 ftpd: use ftpserverlib master, the tls patch is now merged 2020-08-24 23:06:10 +02:00
Nicola Murino
56b00addc4 docker: try to improve the docs
See #159
2020-08-24 15:46:31 +02:00
Nicola Murino
02e35ee002 sftpd: add Readlink support 2020-08-22 14:52:17 +02:00
Nicola Murino
5208e4a4ca sftpd: improve truncate
quota usage and max allowed write size are now properly updated after a
truncate
2020-08-22 10:12:00 +02:00
Nicola Murino
7381a867ba fix truncate test cases on Windows 2020-08-20 14:44:38 +02:00
Nicola Murino
f41ce6619f sftpd: add SSH_FXP_FSETSTAT support
This change will fix file editing from sshfs; we need this patch

https://github.com/pkg/sftp/pull/373

for pkg/sftp to support this feature
2020-08-20 13:54:36 +02:00
Nicola Murino
933427310d fix check pwd hook when using memory provider 2020-08-19 19:47:52 +02:00
Nicola Murino
8b0a1817b3 add check password hook
its main use case is to easily support things like password+OTP for
protocols without keyboard interactive support such as FTP and WebDAV
2020-08-19 19:36:12 +02:00
Nicola Murino
04c9a5c008 add some example hooks for one time password logins
The examples use Twilio Authy since I use it for my GitHub account.

You can easily use other multi factor authentication software in a
similar way.
2020-08-18 21:21:01 +02:00
Nicola Murino
bbc8c091e6 portable mode: add WebDAV support 2020-08-17 14:08:08 +02:00
Nicola Murino
f3228713bc Allow individual protocols to be enabled per user
Fixes #154
2020-08-17 12:49:20 +02:00
Nicola Murino
fa5333784b add a maximum allowed size for a single upload 2020-08-16 20:17:02 +02:00
Nicola Murino
0dbf0cc81f WebDAV: add CORS support 2020-08-15 15:55:20 +02:00
Nicola Murino
196a56726e FTP improvements
- add a setting to require TLS
- add symlink support

require TLS 1.2 for all TLS connections
2020-08-15 13:02:25 +02:00
Nicola Murino
fe857dcb1b CI: use go 1.15 by default now that it is released 2020-08-12 16:42:38 +02:00
Nicola Murino
aa0ed5dbd0 add post-login hook
a login scope is supported too so you can get notifications for failed logins,
successful logins or both
2020-08-12 16:15:12 +02:00
Nicola Murino
a9e21c282a add WebDAV support
Fixes #147
2020-08-11 23:56:10 +02:00
Antoine Deschênes
9a15a54885 sftpd: set failed connection loglevel to debug (#152) 2020-08-06 21:20:31 +02:00
Nicola Murino
91dcc349de Add client IP address to external auth, pre-login and keyboard interactive hooks 2020-08-04 18:03:28 +02:00
Nicola Murino
fa41bfd06a Cloud backends: add support for FTP REST command
So partial downloads are now supported as for local fs
2020-08-03 18:03:09 +02:00
Nicola Murino
8839c34d53 FTP: implements ClientDriverExtensionRemoveDir
Fixes #149 for FTP too
2020-08-03 17:36:43 +02:00
Nicola Murino
11ceaa8850 docker: document how to enable FTP/S 2020-08-01 08:56:15 +02:00
Nicola Murino
2a9f7db1e2 Cloud FS: don't propagate the error if removing a folder returns not found
for Cloud FS, folders are virtual and generally disappear when the
last file is removed.

This fix doesn't work for FTP protocol for now.

Fixes #149
2020-07-31 19:24:57 +02:00
Nicola Murino
22338ed478 add post connect hook
Fixes #144
2020-07-30 22:33:49 +02:00
Nicola Murino
59a21158a6 fix FTP quota limits test case
It failed sometimes due to a bug in the ftp client library used in test
cases. The failure was more frequent on FreeBSD but it could happen on
any supported OS. It was not systematic since we use small files in
test cases.

See https://github.com/jlaffaye/ftp/pull/192
2020-07-30 19:52:29 +02:00
Nicola Murino
93ce96d011 add support for the venerable FTP protocol
Fixes #46
2020-07-29 21:56:56 +02:00
Nicola Murino
cc2f04b0e4 fix concurrency test case on go 1.13
a sleep seems required, needs investigation
2020-07-25 08:55:17 +02:00
Nicola Murino
aa5191fa1b CI: add a timeout for test cases execution 2020-07-25 00:14:44 +02:00
Nicola Murino
4e41a5583d refactoring: add common package
The common package defines the interfaces that a protocol must implement
and contains code that can be shared among supported protocols.

This way it should be easier to support new protocols
2020-07-24 23:39:38 +02:00
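The kind of contract such a package can define, sketched with illustrative names (not SFTPGo's actual definitions):

package main

import "net"

// ActiveConnection is the sort of interface the common package can
// define: each protocol (SFTP, FTP, WebDAV, ...) implements it, and
// shared logic such as idle timeouts and forced disconnects then works
// on the interface alone.
type ActiveConnection interface {
	GetID() string
	GetProtocol() string
	GetRemoteAddress() net.Addr
	Disconnect() error
}

// closeConnections works for any protocol implementing the interface.
func closeConnections(conns []ActiveConnection) {
	for _, c := range conns {
		_ = c.Disconnect()
	}
}

func main() {}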
Nicola Murino
ded8fad5e4 add sponsor button 2020-07-13 22:23:11 +02:00
Nicola Murino
3702bc8413 several doc fixes 2020-07-11 13:03:15 +02:00
Nicola Murino
7896d2eef7 improve CI/CD workflows 2020-07-10 23:31:53 +02:00
Nicola Murino
da0f470f1c document FreeBSD support
improve some test cleanup
2020-07-10 19:20:37 +02:00
Nicola Murino
8fddb742df try to improve error message if the user forgot to initialize the provider
See #138
2020-07-09 20:01:37 +02:00
Nicola Murino
95fe26f3e3 keep track of services errors
So we can exit with the correct code if an error happens inside the
service goroutines

Fixes #143
2020-07-09 19:16:52 +02:00
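The basic shape of this, as a sketch (runService stands in for a real service):

package main

import (
	"log"
	"os"
)

func runService() error { return nil }

func main() {
	errCh := make(chan error, 1)
	go func() {
		// each service goroutine reports its fatal error here
		errCh <- runService()
	}()
	if err := <-errCh; err != nil {
		log.Print(err)
		os.Exit(1) // exit with a non-zero code instead of a silent crash
	}
}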
Nicola Murino
1e10381143 improve help strings formatting
Fixes #139
2020-07-09 18:58:22 +02:00
Nicola Murino
96cbce52f9 cmd: add shell completion and man pages generators 2020-07-08 23:21:33 +02:00
Nicola Murino
0ea2ca3141 simplify data provider usage
remove the obsolete SQL scripts too. They have not been required since v0.9.6
2020-07-08 19:59:31 +02:00
Nicola Murino
42877dd915 sql providers: add a query timeout 2020-07-08 18:54:44 +02:00
Nicola Murino
790c11c453 back to development 2020-07-07 19:40:22 +02:00
286 changed files with 49530 additions and 13184 deletions

.github/FUNDING.yml (new file)

@@ -0,0 +1,12 @@
# These are supported funding model platforms
github: [drakkan] # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: # Replace with a single Ko-fi username
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']

.github/workflows/.editorconfig (new file)

@@ -0,0 +1,2 @@
[*.yml]
indent_size = 2


@@ -11,19 +11,21 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
-go: [1.14]
+go: [1.15]
os: [ubuntu-latest, macos-latest]
upload-coverage: [true]
include:
-- go: 1.13
-os: ubuntu-latest
-upload-coverage: false
-- go: 1.14
+#- go: 1.14
+# os: ubuntu-latest
+# upload-coverage: false
+- go: 1.15
os: windows-latest
upload-coverage: false
steps:
- uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Set up Go
uses: actions/setup-go@v2
@@ -32,21 +34,17 @@ jobs:
- name: Build for Linux/macOS
if: startsWith(matrix.os, 'windows-') != true
-run: go build -i -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
+run: go build -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
- name: Build for Windows
if: startsWith(matrix.os, 'windows-')
run: |
$GIT_COMMIT = (git describe --always --dirty) | Out-String
$DATE_TIME = ([datetime]::Now.ToUniversalTime().toString("yyyy-MM-ddTHH:mm:ssZ")) | Out-String
-go build -i -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/version.date=$DATE_TIME" -o sftpgo.exe
-- name: Initialize data provider
-run: ./sftpgo initprovider
-shell: bash
+go build -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/version.date=$DATE_TIME" -o sftpgo.exe
- name: Run test cases using SQLite provider
-run: go test -v ./... -coverprofile=coverage.txt -covermode=atomic
+run: go test -v -p 1 -timeout 10m ./... -coverprofile=coverage.txt -covermode=atomic
- name: Upload coverage to Codecov
if: ${{ matrix.upload-coverage }}
@@ -57,27 +55,64 @@ jobs:
- name: Run test cases using bolt provider
run: |
-go test -v ./config -covermode=atomic
-go test -v ./httpd -covermode=atomic
-go test -v ./sftpd -covermode=atomic
+go test -v -p 1 -timeout 2m ./config -covermode=atomic
+go test -v -p 1 -timeout 2m ./common -covermode=atomic
+go test -v -p 1 -timeout 3m ./httpd -covermode=atomic
+go test -v -p 1 -timeout 8m ./sftpd -covermode=atomic
+go test -v -p 1 -timeout 2m ./ftpd -covermode=atomic
+go test -v -p 1 -timeout 2m ./webdavd -covermode=atomic
+go test -v -p 1 -timeout 2m ./telemetry -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: bolt
SFTPGO_DATA_PROVIDER__NAME: 'sftpgo_bolt.db'
- name: Run test cases using memory provider
-run: go test -v ./... -covermode=atomic
+run: go test -v -p 1 -timeout 10m ./... -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: memory
SFTPGO_DATA_PROVIDER__NAME: ''
- name: Gather cross build info
id: cross_info
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
run: |
GIT_COMMIT=$(git describe --always)
BUILD_DATE=$(date -u +%FT%TZ)
echo ::set-output name=sha::${GIT_COMMIT}
echo ::set-output name=created::${BUILD_DATE}
- name: Cross build with xgo
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
uses: crazy-max/ghaction-xgo@v1
with:
dest: cross
prefix: sftpgo
targets: linux/arm64,linux/ppc64le
v: true
x: false
race: false
ldflags: -s -w -X github.com/drakkan/sftpgo/version.commit=${{ steps.cross_info.outputs.sha }} -X github.com/drakkan/sftpgo/version.date=${{ steps.cross_info.outputs.created }}
buildmode: default
- name: Prepare build artifact for Linux/macOS
if: startsWith(matrix.os, 'windows-') != true
run: |
mkdir output
mkdir -p output/{bash_completion,zsh_completion}
cp sftpgo output/
cp sftpgo.json output/
cp -r templates output/
cp -r static output/
cp -r init output/
./sftpgo gen completion bash > output/bash_completion/sftpgo
./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
./sftpgo gen man -d output/man/man1
gzip output/man/man1/*
- name: Copy cross compiled Linux binaries
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
run: |
cp cross/sftpgo-linux-arm64 output/
cp cross/sftpgo-linux-ppc64le output/
- name: Prepare build artifact for Windows
if: startsWith(matrix.os, 'windows-')
@@ -96,6 +131,71 @@ jobs:
name: sftpgo-${{ matrix.os }}-go${{ matrix.go }}
path: output
- name: Build Linux Packages
id: build_linux_pkgs
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
run: |
cp -r pkgs pkgs_arm64
cp -r pkgs pkgs_ppc64le
cd pkgs
./build.sh
cd ..
export NFPM_ARCH=arm64
export BIN_SUFFIX=-linux-arm64
cp cross/sftpgo${BIN_SUFFIX} .
cd pkgs_arm64
./build.sh
cd ..
export NFPM_ARCH=ppc64le
export BIN_SUFFIX=-linux-ppc64le
cp cross/sftpgo${BIN_SUFFIX} .
cd pkgs_ppc64le
./build.sh
PKG_VERSION=$(cat dist/version)
echo "::set-output name=pkg-version::${PKG_VERSION}"
- name: Upload Debian Package
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
uses: actions/upload-artifact@v2
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-x86_64-deb
path: pkgs/dist/deb/*
- name: Upload RPM Package
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
uses: actions/upload-artifact@v2
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-x86_64-rpm
path: pkgs/dist/rpm/*
- name: Upload Debian Package arm64
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
uses: actions/upload-artifact@v2
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-arm64-deb
path: pkgs_arm64/dist/deb/*
- name: Upload RPM Package arm64
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
uses: actions/upload-artifact@v2
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-arm64-rpm
path: pkgs_arm64/dist/rpm/*
- name: Upload Debian Package ppc64le
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
uses: actions/upload-artifact@v2
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-ppc64le-deb
path: pkgs_ppc64le/dist/deb/*
- name: Upload RPM Package ppc64le
if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
uses: actions/upload-artifact@v2
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-ppc64le-rpm
path: pkgs_ppc64le/dist/rpm/*
test-postgresql-mysql:
name: Test with PostgreSQL/MySQL
runs-on: ubuntu-latest
@@ -135,15 +235,14 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v2
with:
-go-version: 1.14
+go-version: 1.15
- name: Build
-run: go build -i -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
+run: go build -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
- name: Run tests using PostgreSQL provider
run: |
./sftpgo initprovider
-go test -v ./... -covermode=atomic
+go test -v -p 1 -timeout 10m ./... -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: postgresql
SFTPGO_DATA_PROVIDER__NAME: sftpgo
@@ -154,8 +253,7 @@ jobs:
- name: Run tests using MySQL provider
run: |
./sftpgo initprovider
-go test -v ./... -covermode=atomic
+go test -v -p 1 -timeout 10m ./... -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: mysql
SFTPGO_DATA_PROVIDER__NAME: sftpgo
@@ -170,6 +268,6 @@ jobs:
steps:
- uses: actions/checkout@v2
- name: Run golangci-lint
-uses: golangci/golangci-lint-action@v1
+uses: golangci/golangci-lint-action@v2
with:
-version: v1.27
+version: latest

.github/workflows/docker.yml (new file)

@@ -0,0 +1,177 @@
name: Docker
on:
#schedule:
# - cron: '0 4 * * *' # everyday at 4:00 AM UTC
push:
branches:
- master
tags:
- v*
pull_request:
jobs:
build:
name: Build
runs-on: ${{ matrix.os }}
strategy:
matrix:
os:
- ubuntu-latest
docker_pkg:
- debian
- alpine
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Repo metadata
id: repo
uses: actions/github-script@v3
with:
script: |
const repo = await github.repos.get(context.repo)
return repo.data
- name: Gather image information
id: info
run: |
VERSION=noop
DOCKERFILE_SLIM=Dockerfile
DOCKERFILE=Dockerfile.full
MINOR=""
MAJOR=""
if [ "${{ github.event_name }}" = "schedule" ]; then
VERSION=nightly
elif [[ $GITHUB_REF == refs/tags/* ]]; then
VERSION=${GITHUB_REF#refs/tags/}
elif [[ $GITHUB_REF == refs/heads/* ]]; then
VERSION=$(echo ${GITHUB_REF#refs/heads/} | sed -r 's#/+#-#g')
if [ "${{ github.event.repository.default_branch }}" = "$VERSION" ]; then
VERSION=edge
fi
elif [[ $GITHUB_REF == refs/pull/* ]]; then
VERSION=pr-${{ github.event.number }}
fi
if [[ $VERSION =~ ^v[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
MINOR=${VERSION%.*}
MAJOR=${MINOR%.*}
fi
VERSION_SLIM="${VERSION}-slim"
if [[ $DOCKER_PKG == alpine ]]; then
VERSION="${VERSION}-alpine"
VERSION_SLIM="${VERSION}-slim"
DOCKERFILE_SLIM=Dockerfile.alpine
DOCKERFILE=Dockerfile.full.alpine
fi
DOCKER_IMAGES=("drakkan/sftpgo" "ghcr.io/drakkan/sftpgo")
TAGS="${DOCKER_IMAGES[0]}:${VERSION}"
TAGS_SLIM="${DOCKER_IMAGES[0]}:${VERSION_SLIM}"
BASE_IMAGE="${TAGS_SLIM}"
for DOCKER_IMAGE in ${DOCKER_IMAGES[@]}; do
if [[ ${DOCKER_IMAGE} != ${DOCKER_IMAGES[0]} ]]; then
TAGS="${TAGS},${DOCKER_IMAGE}:${VERSION}"
TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:${VERSION_SLIM}"
fi
if [[ $GITHUB_REF == refs/tags/* ]]; then
if [[ $DOCKER_PKG == debian ]]; then
if [[ -n $MAJOR && -n $MINOR ]]; then
TAGS="${TAGS},${DOCKER_IMAGE}:${MINOR},${DOCKER_IMAGE}:${MAJOR}"
TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:${MINOR}-slim,${DOCKER_IMAGE}:${MAJOR}-slim"
fi
TAGS="${TAGS},${DOCKER_IMAGE}:latest"
TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:slim"
else
if [[ -n $MAJOR && -n $MINOR ]]; then
TAGS="${TAGS},${DOCKER_IMAGE}:${MINOR}-alpine,${DOCKER_IMAGE}:${MAJOR}-alpine"
TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:${MINOR}-alpine-slim,${DOCKER_IMAGE}:${MAJOR}-alpine-slim"
fi
TAGS="${TAGS},${DOCKER_IMAGE}:alpine"
TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:alpine-slim"
fi
fi
done
echo ::set-output name=dockerfile::${DOCKERFILE}
echo ::set-output name=dockerfile-slim::${DOCKERFILE_SLIM}
echo ::set-output name=version::${VERSION}
echo ::set-output name=version-slim::${VERSION_SLIM}
echo ::set-output name=tags::${TAGS}
echo ::set-output name=tags-slim::${TAGS_SLIM}
echo ::set-output name=base-image::${BASE_IMAGE}
echo ::set-output name=created::$(date -u +'%Y-%m-%dT%H:%M:%SZ')
echo ::set-output name=sha::${GITHUB_SHA::8}
env:
DOCKER_PKG: ${{ matrix.docker_pkg }}
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
- name: Set up builder slim
uses: docker/setup-buildx-action@v1
id: builder-slim
- name: Set up builder full
uses: docker/setup-buildx-action@v1
id: builder-full
- name: Login to Docker Hub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
if: ${{ github.event_name != 'pull_request' }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.CR_PAT }}
if: ${{ github.event_name != 'pull_request' }}
- name: Build and push slim
uses: docker/build-push-action@v2
with:
builder: ${{ steps.builder-slim.outputs.name }}
file: ./${{ steps.info.outputs.dockerfile-slim }}
platforms: linux/amd64,linux/arm64,linux/ppc64le
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.info.outputs.tags-slim }}
build-args: |
COMMIT_SHA=${{ steps.info.outputs.sha }}
labels: |
org.opencontainers.image.title=SFTPGo
org.opencontainers.image.description=Fully featured and highly configurable SFTP server with optional FTP/S and WebDAV support
org.opencontainers.image.url=${{ fromJson(steps.repo.outputs.result).html_url }}
org.opencontainers.image.documentation=${{ fromJson(steps.repo.outputs.result).html_url }}/blob/${{ github.sha }}/docker/README.md
org.opencontainers.image.source=${{ fromJson(steps.repo.outputs.result).html_url }}
org.opencontainers.image.version=${{ steps.info.outputs.version }}
org.opencontainers.image.created=${{ steps.info.outputs.created }}
org.opencontainers.image.revision=${{ github.sha }}
org.opencontainers.image.licenses=${{ fromJson(steps.repo.outputs.result).license.spdx_id }}
- name: Build and push full
if: ${{ github.event_name != 'pull_request' }}
uses: docker/build-push-action@v2
with:
builder: ${{ steps.builder-full.outputs.name }}
file: ./${{ steps.info.outputs.dockerfile }}
platforms: linux/amd64,linux/arm64,linux/ppc64le
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.info.outputs.tags }}
build-args: |
COMMIT_SHA=${{ steps.info.outputs.sha }}
BASE_IMAGE=${{ steps.info.outputs.base-image }}
labels: |
org.opencontainers.image.title=SFTPGo
org.opencontainers.image.description=Fully featured and highly configurable SFTP server with optional FTP/S and WebDAV support
org.opencontainers.image.url=${{ fromJson(steps.repo.outputs.result).html_url }}
org.opencontainers.image.documentation=${{ fromJson(steps.repo.outputs.result).html_url }}/blob/${{ github.sha }}/docker/README.md
org.opencontainers.image.source=${{ fromJson(steps.repo.outputs.result).html_url }}
org.opencontainers.image.version=${{ steps.info.outputs.version }}
org.opencontainers.image.created=${{ steps.info.outputs.created }}
org.opencontainers.image.revision=${{ github.sha }}
org.opencontainers.image.licenses=${{ fromJson(steps.repo.outputs.result).license.spdx_id }}


@@ -5,7 +5,7 @@ on:
tags: 'v*'
env:
-GO_VERSION: 1.14
+GO_VERSION: 1.15.8
jobs:
create-release:
@@ -19,7 +19,7 @@ jobs:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag_name: ${{ github.ref }}
-release_name: Release ${{ github.ref }}
+release_name: ${{ github.ref }}
draft: false
prerelease: false
@@ -94,22 +94,16 @@ jobs:
with:
go-version: ${{ env.GO_VERSION }}
-- name: Set up Python
-if: startsWith(matrix.os, 'windows-')
-uses: actions/setup-python@v2
-with:
-python-version: 3.x
- name: Build for Linux/macOS
if: startsWith(matrix.os, 'windows-') != true
-run: go build -i -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
+run: go build -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
- name: Build for Windows
if: startsWith(matrix.os, 'windows-')
run: |
$GIT_COMMIT = (git describe --always --dirty) | Out-String
$DATE_TIME = ([datetime]::Now.ToUniversalTime().toString("yyyy-MM-ddTHH:mm:ssZ")) | Out-String
-go build -i -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/version.date=$DATE_TIME" -o sftpgo.exe
+go build -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/version.date=$DATE_TIME" -o sftpgo.exe
- name: Initialize data provider
run: ./sftpgo initprovider
@@ -136,19 +130,32 @@ jobs:
env:
MATRIX_OS: ${{ matrix.os }}
-- name: Build REST API CLI for Windows
-if: startsWith(matrix.os, 'windows-')
+- name: Gather cross build info
+id: cross_info
+if: ${{ matrix.os == 'ubuntu-latest' }}
run: |
-python -m pip install --upgrade pip setuptools wheel
-pip install requests
-pip install pygments
-pip install pyinstaller
-pyinstaller --hidden-import="pkg_resources.py2_warn" --noupx --onefile examples\rest-api-cli\sftpgo_api_cli.py
+GIT_COMMIT=$(git describe --always)
+BUILD_DATE=$(date -u +%FT%TZ)
+echo ::set-output name=sha::${GIT_COMMIT}
+echo ::set-output name=created::${BUILD_DATE}
- name: Cross build with xgo
if: ${{ matrix.os == 'ubuntu-latest' }}
uses: crazy-max/ghaction-xgo@v1
with:
dest: cross
prefix: sftpgo
targets: linux/arm64,linux/ppc64le
v: true
x: false
race: false
ldflags: -s -w -X github.com/drakkan/sftpgo/version.commit=${{ steps.cross_info.outputs.sha }} -X github.com/drakkan/sftpgo/version.date=${{ steps.cross_info.outputs.created }}
buildmode: default
- name: Prepare Release for Linux/macOS
if: startsWith(matrix.os, 'windows-') != true
run: |
-mkdir -p output/{init,examples/rest-api-cli,sqlite}
+mkdir -p output/{init,sqlite,bash_completion,zsh_completion}
echo "For documentation please take a look here:" > output/README.txt
echo "" >> output/README.txt
echo "https://github.com/drakkan/sftpgo/blob/${SFTPGO_VERSION}/README.md" >> output/README.txt
@@ -160,18 +167,70 @@ jobs:
cp -r templates output/
if [ $OS == 'linux' ]
then
-cp -r init/sftpgo.service output/init/
+cp init/sftpgo.service output/init/
else
-cp -r init/com.github.drakkan.sftpgo.plist output/init/
+cp init/com.github.drakkan.sftpgo.plist output/init/
fi
./sftpgo gen completion bash > output/bash_completion/sftpgo
./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
./sftpgo gen man -d output/man/man1
gzip output/man/man1/*
if [ $OS == 'linux' ]
then
cp -r output output_arm64
cp -r output output_ppc64le
cp -r output output_all
fi
-cp examples/rest-api-cli/sftpgo_api_cli.py output/examples/rest-api-cli/
cd output
tar cJvf sftpgo_${SFTPGO_VERSION}_${OS}_x86_64.tar.xz *
cd ..
if [ $OS == 'linux' ]
then
cp cross/sftpgo-linux-arm64 output_arm64/sftpgo
cd output_arm64
tar cJvf sftpgo_${SFTPGO_VERSION}_${OS}_arm64.tar.xz *
cd ..
cp cross/sftpgo-linux-ppc64le output_ppc64le/sftpgo
cd output_ppc64le
tar cJvf sftpgo_${SFTPGO_VERSION}_${OS}_ppc64le.tar.xz *
cd ..
mkdir output_all/{arm64,ppc64le}
cp cross/sftpgo-linux-arm64 output_all/arm64/sftpgo
cp cross/sftpgo-linux-ppc64le output_all/ppc64le/sftpgo
cd output_all
tar cJvf sftpgo_${SFTPGO_VERSION}_${OS}_bundle.tar.xz *
cd ..
fi
env:
SFTPGO_VERSION: ${{ steps.get_version.outputs.VERSION }}
OS: ${{ steps.get_os_name.outputs.OS }}
- name: Prepare Linux Packages
id: build_linux_pkgs
if: ${{ matrix.os == 'ubuntu-latest' }}
run: |
cp -r pkgs pkgs_arm64
cp -r pkgs pkgs_ppc64le
cd pkgs
./build.sh
cd ..
export NFPM_ARCH=arm64
export BIN_SUFFIX=-linux-arm64
cp cross/sftpgo${BIN_SUFFIX} .
cd pkgs_arm64
./build.sh
cd ..
export NFPM_ARCH=ppc64le
export BIN_SUFFIX=-linux-ppc64le
cp cross/sftpgo${BIN_SUFFIX} .
cd pkgs_ppc64le
./build.sh
cd ..
PKG_VERSION=${SFTPGO_VERSION:1}
echo "::set-output name=pkg-version::${PKG_VERSION}"
env:
SFTPGO_VERSION: ${{ steps.get_version.outputs.VERSION }}
- name: Prepare Release for Windows
if: startsWith(matrix.os, 'windows-')
run: |
@@ -179,7 +238,6 @@ jobs:
copy .\sftpgo.exe .\output
copy .\sftpgo.json .\output
copy .\sftpgo.db .\output
copy .\dist\sftpgo_api_cli.exe .\output
copy .\LICENSE .\output\LICENSE.txt
mkdir output\templates
xcopy .\templates .\output\templates\ /E
@@ -190,6 +248,23 @@ jobs:
SFTPGO_ISS_VERSION: ${{ steps.get_version.outputs.VERSION }}
SFTPGO_ISS_DOC_URL: https://github.com/drakkan/sftpgo/blob/${{ steps.get_version.outputs.VERSION }}/README.md
- name: Prepare Portable Release for Windows
if: startsWith(matrix.os, 'windows-')
run: |
mkdir win-portable
copy .\sftpgo.exe .\win-portable
copy .\sftpgo.json .\win-portable
copy .\sftpgo.db .\win-portable
copy .\LICENSE .\win-portable\LICENSE.txt
mkdir win-portable\templates
xcopy .\templates .\win-portable\templates\ /E
mkdir win-portable\static
xcopy .\static .\win-portable\static\ /E
Compress-Archive .\win-portable\* sftpgo_portable_x86_64.zip
env:
SFTPGO_VERSION: ${{ steps.get_version.outputs.VERSION }}
OS: ${{ steps.get_os_name.outputs.OS }}
- name: Download release upload URL
uses: actions/download-artifact@v2
with:
@@ -213,6 +288,39 @@ jobs:
asset_name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.tar.xz
asset_content_type: application/x-xz
- name: Upload Linux/arm64 Release
if: ${{ matrix.os == 'ubuntu-latest' }}
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.upload_url.outputs.url }}
asset_path: ./output_arm64/sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_arm64.tar.xz
asset_name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_arm64.tar.xz
asset_content_type: application/x-xz
- name: Upload Linux/ppc64le Release
if: ${{ matrix.os == 'ubuntu-latest' }}
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.upload_url.outputs.url }}
asset_path: ./output_ppc64le/sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_ppc64le.tar.xz
asset_name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_ppc64le.tar.xz
asset_content_type: application/x-xz
- name: Upload Linux Bundle Release
if: ${{ matrix.os == 'ubuntu-latest' }}
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.upload_url.outputs.url }}
asset_path: ./output_all/sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_bundle.tar.xz
asset_name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_bundle.tar.xz
asset_content_type: application/x-xz
- name: Upload Windows Release
if: startsWith(matrix.os, 'windows-')
uses: actions/upload-release-asset@v1
@@ -223,3 +331,80 @@ jobs:
asset_path: ./sftpgo_windows_x86_64.exe
asset_name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.exe
asset_content_type: application/x-dosexec
- name: Upload Portable Windows Release
if: startsWith(matrix.os, 'windows-')
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.upload_url.outputs.url }}
asset_path: ./sftpgo_portable_x86_64.zip
asset_name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_portable_x86_64.zip
asset_content_type: application/zip
- name: Upload Debian Package
if: ${{ matrix.os == 'ubuntu-latest' }}
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.upload_url.outputs.url }}
asset_path: ./pkgs/dist/deb/sftpgo_${{ steps.build_linux_pkgs.outputs.pkg-version }}-1_amd64.deb
asset_name: sftpgo_${{ steps.build_linux_pkgs.outputs.pkg-version }}-1_amd64.deb
asset_content_type: application/vnd.debian.binary-package
- name: Upload RPM Package
if: ${{ matrix.os == 'ubuntu-latest' }}
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.upload_url.outputs.url }}
asset_path: ./pkgs/dist/rpm/sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-1.x86_64.rpm
asset_name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-1.x86_64.rpm
asset_content_type: application/x-rpm
- name: Upload Debian Package arm64
if: ${{ matrix.os == 'ubuntu-latest' }}
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.upload_url.outputs.url }}
asset_path: ./pkgs_arm64/dist/deb/sftpgo_${{ steps.build_linux_pkgs.outputs.pkg-version }}-1_arm64.deb
asset_name: sftpgo_${{ steps.build_linux_pkgs.outputs.pkg-version }}-1_arm64.deb
asset_content_type: application/vnd.debian.binary-package
- name: Upload RPM Package arm64
if: ${{ matrix.os == 'ubuntu-latest' }}
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.upload_url.outputs.url }}
asset_path: ./pkgs_arm64/dist/rpm/sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-1.aarch64.rpm
asset_name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-1.aarch64.rpm
asset_content_type: application/x-rpm
- name: Upload Debian Package ppc64le
if: ${{ matrix.os == 'ubuntu-latest' }}
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.upload_url.outputs.url }}
asset_path: ./pkgs_ppc64le/dist/deb/sftpgo_${{ steps.build_linux_pkgs.outputs.pkg-version }}-1_ppc64el.deb
asset_name: sftpgo_${{ steps.build_linux_pkgs.outputs.pkg-version }}-1_ppc64el.deb
asset_content_type: application/vnd.debian.binary-package
- name: Upload RPM Package ppc64le
if: ${{ matrix.os == 'ubuntu-latest' }}
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.upload_url.outputs.url }}
asset_path: ./pkgs_ppc64le/dist/rpm/sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-1.ppc64le.rpm
asset_name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-1.ppc64le.rpm
asset_content_type: application/x-rpm
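In short, the workflow stages a common `output` tree, cross-compiles arm64/ppc64le with xgo, then produces one tar.xz per architecture plus a combined bundle, Deb/RPM packages and the Windows installers. A rough local sketch of just the Linux x86_64 staging and archiving steps, assuming `./sftpgo`, `sftpgo.json` and `templates` already exist in the working directory (the step that copies the binary into `output` is elided in the hunk above and is assumed here):

```bash
# Hypothetical local replay of the Linux x86_64 packaging steps above.
SFTPGO_VERSION=v2.0.0   # example tag
OS=linux
mkdir -p output/{init,sqlite,bash_completion,zsh_completion}
cp sftpgo sftpgo.json output/   # assumed staging step, elided in the hunk above
cp -r templates output/
cp init/sftpgo.service output/init/
./sftpgo gen completion bash > output/bash_completion/sftpgo
./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
./sftpgo gen man -d output/man/man1
gzip output/man/man1/*
(cd output && tar cJvf "sftpgo_${SFTPGO_VERSION}_${OS}_x86_64.tar.xz" *)
```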

Dockerfile (new file)

@@ -0,0 +1,60 @@
FROM golang:1.15 as builder
ENV GOFLAGS="-mod=readonly"
RUN mkdir -p /workspace
WORKDIR /workspace
ARG GOPROXY
COPY go.mod go.sum ./
RUN go mod download
ARG COMMIT_SHA
# This ARG allows you to disable some optional features; it might be useful if you build the image yourself.
# For example, you can disable S3 and GCS support like this:
# --build-arg FEATURES=nos3,nogcs
ARG FEATURES
COPY . .
RUN set -xe && \
export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --dirty)} && \
go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -v -o sftpgo
FROM debian:buster-slim
RUN apt-get update && apt-get install --no-install-recommends -y ca-certificates mime-support && rm -rf /var/lib/apt/lists/*
RUN mkdir -p /etc/sftpgo /var/lib/sftpgo /usr/share/sftpgo /srv/sftpgo
RUN groupadd --system -g 1000 sftpgo && \
useradd --system --gid sftpgo --no-create-home \
--home-dir /var/lib/sftpgo --shell /usr/sbin/nologin \
--comment "SFTPGo user" --uid 1000 sftpgo
COPY --from=builder /workspace/sftpgo.json /etc/sftpgo/sftpgo.json
COPY --from=builder /workspace/templates /usr/share/sftpgo/templates
COPY --from=builder /workspace/static /usr/share/sftpgo/static
COPY --from=builder /workspace/sftpgo /usr/local/bin/
# Log to stdout so the logs are available via docker logs
ENV SFTPGO_LOG_FILE_PATH=""
# templates and static paths are inside the container
ENV SFTPGO_HTTPD__TEMPLATES_PATH=/usr/share/sftpgo/templates
ENV SFTPGO_HTTPD__STATIC_FILES_PATH=/usr/share/sftpgo/static
# Modify the default configuration file
RUN sed -i "s|\"users_base_dir\": \"\",|\"users_base_dir\": \"/srv/sftpgo/data\",|" /etc/sftpgo/sftpgo.json && \
sed -i "s|\"backups\"|\"/srv/sftpgo/backups\"|" /etc/sftpgo/sftpgo.json && \
sed -i "s|\"bind_address\": \"127.0.0.1\",|\"bind_address\": \"\",|" /etc/sftpgo/sftpgo.json
COPY ./docker/scripts/entrypoint.sh /docker-entrypoint.sh
RUN chown -R sftpgo:sftpgo /etc/sftpgo && chown sftpgo:sftpgo /var/lib/sftpgo /srv/sftpgo
WORKDIR /var/lib/sftpgo
USER 1000:1000
CMD ["sftpgo", "serve"]

Dockerfile.alpine (new file)

@@ -0,0 +1,63 @@
FROM golang:1.15-alpine AS builder
ENV GOFLAGS="-mod=readonly"
RUN apk add --update --no-cache bash ca-certificates curl git gcc g++
RUN mkdir -p /workspace
WORKDIR /workspace
ARG GOPROXY
COPY go.mod go.sum ./
RUN go mod download
ARG COMMIT_SHA
# This ARG allows you to disable some optional features; it might be useful if you build the image yourself.
# For example, you can disable S3 and GCS support like this:
# --build-arg FEATURES=nos3,nogcs
ARG FEATURES
COPY . .
RUN set -xe && \
export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --dirty)} && \
go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -v -o sftpgo
FROM alpine:3.12
RUN apk add --update --no-cache ca-certificates tzdata mailcap
# set up nsswitch.conf for Go's "netgo" implementation
# https://github.com/gliderlabs/docker-alpine/issues/367#issuecomment-424546457
RUN test ! -e /etc/nsswitch.conf && echo 'hosts: files dns' > /etc/nsswitch.conf
RUN mkdir -p /etc/sftpgo /var/lib/sftpgo /usr/share/sftpgo /srv/sftpgo
RUN addgroup -g 1000 -S sftpgo && \
adduser -u 1000 -h /var/lib/sftpgo -s /sbin/nologin -G sftpgo -S -D -H -g "SFTPGo user" sftpgo
COPY --from=builder /workspace/sftpgo.json /etc/sftpgo/sftpgo.json
COPY --from=builder /workspace/templates /usr/share/sftpgo/templates
COPY --from=builder /workspace/static /usr/share/sftpgo/static
COPY --from=builder /workspace/sftpgo /usr/local/bin/
# Log to stdout so the logs are available via docker logs
ENV SFTPGO_LOG_FILE_PATH=""
# templates and static paths are inside the container
ENV SFTPGO_HTTPD__TEMPLATES_PATH=/usr/share/sftpgo/templates
ENV SFTPGO_HTTPD__STATIC_FILES_PATH=/usr/share/sftpgo/static
# Modify the default configuration file
RUN sed -i "s|\"users_base_dir\": \"\",|\"users_base_dir\": \"/srv/sftpgo/data\",|" /etc/sftpgo/sftpgo.json && \
sed -i "s|\"backups\"|\"/srv/sftpgo/backups\"|" /etc/sftpgo/sftpgo.json && \
sed -i "s|\"bind_address\": \"127.0.0.1\",|\"bind_address\": \"\",|" /etc/sftpgo/sftpgo.json
RUN chown -R sftpgo:sftpgo /etc/sftpgo && chown sftpgo:sftpgo /var/lib/sftpgo /srv/sftpgo
WORKDIR /var/lib/sftpgo
USER 1000:1000
CMD ["sftpgo", "serve"]

Dockerfile.full (new file)

@@ -0,0 +1,10 @@
ARG BASE_IMAGE
FROM ${BASE_IMAGE}
USER root
# Install some optional packages used by SFTPGo features
RUN apt-get update && apt-get install --no-install-recommends -y git rsync && rm -rf /var/lib/apt/lists/*
USER 1000:1000

Dockerfile.full.alpine (new file)

@@ -0,0 +1,10 @@
ARG BASE_IMAGE
FROM ${BASE_IMAGE}
USER root
# Install some optional packages used by SFTPGo features
RUN apk add --update --no-cache rsync git
USER 1000:1000
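Since both `full` variants start `FROM ${BASE_IMAGE}`, they are meant to be layered on an image built from one of the Dockerfiles above. A sketch of chaining the two builds, with illustrative tags:

```bash
# Build the Alpine base first, then add git and rsync on top of it.
docker build -f Dockerfile.alpine -t sftpgo:alpine .
docker build -f Dockerfile.full.alpine --build-arg BASE_IMAGE=sftpgo:alpine -t sftpgo:alpine-full .
```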

README.md

@@ -4,21 +4,24 @@
[![Code Coverage](https://codecov.io/gh/drakkan/sftpgo/branch/master/graph/badge.svg)](https://codecov.io/gh/drakkan/sftpgo/branch/master)
[![Go Report Card](https://goreportcard.com/badge/github.com/drakkan/sftpgo)](https://goreportcard.com/report/github.com/drakkan/sftpgo)
[![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)
[![Docker Pulls](https://img.shields.io/docker/pulls/drakkan/sftpgo)](https://hub.docker.com/r/drakkan/sftpgo)
[![Mentioned in Awesome Go](https://awesome.re/mentioned-badge.svg)](https://github.com/avelino/awesome-go)
Fully featured and highly configurable SFTP server, written in Go
Fully featured and highly configurable SFTP server with optional FTP/S and WebDAV support, written in Go.
Several storage backends are supported: local filesystem, encrypted local filesystem, S3 (compatible) Object Storage, Google Cloud Storage, Azure Blob Storage, SFTP.
## Features
- Each account is chrooted to its home directory.
- SFTP accounts are virtual accounts stored in a "data provider".
- SFTPGo uses virtual accounts stored inside a "data provider".
- SQLite, MySQL, PostgreSQL, bbolt (key/value store in pure Go) and in-memory data providers are supported.
- Each local account is chrooted in its home directory; for cloud-based accounts you can restrict access to a certain base path.
- Public key and password authentication. Multiple public keys per user are supported.
- SSH user [certificate authentication](https://cvsweb.openbsd.org/src/usr.bin/ssh/PROTOCOL.certkeys?rev=1.8).
- Keyboard interactive authentication. You can easily set up customizable multi-factor authentication.
- Partial authentication. You can configure multi-step authentication requiring, for example, the user password after successful public key authentication.
- Per user authentication methods. You can configure the allowed authentication methods for each user.
- Custom authentication via external programs/HTTP API is supported.
- [Data At Rest Encryption](./docs/dare.md) is supported.
- Dynamic user modification before login via external programs/HTTP API is supported.
- Quota support: accounts can have individual quota expressed as max total size and/or max number of files.
- Bandwidth throttling is supported, with distinct settings for upload and download.
@@ -26,38 +29,45 @@ Fully featured and highly configurable SFTP server, written in Go
- Per user and per directory permission management: list directory contents, upload, overwrite, download, delete, rename, create directories, create symlinks, change owner/group and mode, change access and modification times.
- Per user files/folders ownership mapping: you can map all the users to the system account that runs SFTPGo (all platforms are supported) or you can run SFTPGo as root user and map each user or group of users to a different system account (\*NIX only).
- Per user IP filters are supported: login can be restricted to specific ranges of IP addresses or to a specific IP address.
- Per user and per directory file extensions filters are supported: files can be allowed or denied based on their extensions.
- Per user and per directory shell-like pattern filters are supported: files can be allowed or denied based on shell-like patterns.
- Virtual folders are supported: directories outside the user home directory can be exposed as virtual folders.
- Configurable custom commands and/or HTTP notifications on file upload, download, pre-delete, delete, rename, on SSH commands and on user add, update and delete.
- Automatically terminating idle connections.
- Automatic blocklist management is supported using the built-in [defender](./docs/defender.md).
- Atomic uploads are configurable.
- Support for Git repositories over SSH.
- SCP and rsync are supported.
- Support for serving local filesystem, S3 Compatible Object Storage and Google Cloud Storage over SFTP/SCP.
- FTP/S is supported. You can configure the FTP service to require TLS for both control and data connections.
- [WebDAV](./docs/webdav.md) is supported.
- Two-Way TLS authentication, aka TLS with client certificate authentication, is supported for REST API/Web Admin, FTPS and WebDAV over HTTPS.
- Support for serving local filesystem, encrypted local filesystem, S3 Compatible Object Storage, Google Cloud Storage, Azure Blob Storage or other SFTP accounts over SFTP/SCP/FTP/WebDAV.
- Per user protocols restrictions. You can configure the allowed protocols (SSH/FTP/WebDAV) for each user.
- [Prometheus metrics](./docs/metrics.md) are exposed.
- Support for HAProxy PROXY protocol: you can proxy and/or load balance the SFTP/SCP service without losing the information about the client's address.
- Support for HAProxy PROXY protocol: you can proxy and/or load balance the SFTP/SCP/FTP/WebDAV service without losing the information about the client's address.
- [REST API](./docs/rest-api.md) for users and folders management, backup, restore and real time reports of the active connections with possibility of forcibly closing a connection.
- [Web based administration interface](./docs/web-admin.md) to easily manage users, folders and connections.
- Easy [migration](./examples/rest-api-cli#convert-users-from-other-stores) from Linux system user accounts.
- Easy [migration](./examples/convertusers) from Linux system user accounts.
- [Portable mode](./docs/portable-mode.md): a convenient way to share a single directory on demand.
- [SFTP subsystem mode](./docs/sftp-subsystem.md): you can use SFTPGo as OpenSSH's SFTP subsystem.
- Performance analysis using built-in [profiler](./docs/profiling.md).
- Configuration format is at your choice: JSON, TOML, YAML, HCL, envfile are supported.
- Log files are accurate and they are saved in the easily parsable JSON format ([more information](./docs/logs.md)).
## Platforms
SFTPGo is developed and tested on Linux. After each commit, the code is automatically built and tested on Linux, macOS and Windows using a [GitHub Action](./.github/workflows/development.yml). Other UNIX variants such as \*BSD should work too.
SFTPGo is developed and tested on Linux. After each commit, the code is automatically built and tested on Linux, macOS and Windows using a [GitHub Action](./.github/workflows/development.yml). The test cases are also run manually on FreeBSD on a regular basis and pass there. Other *BSD variants should work too.
## Requirements
- Go 1.13 or higher as build only dependency.
- A suitable SQL server or key/value store to use as data provider: PostgreSQL 9.4+ or MySQL 5.6+ or SQLite 3.x or bbolt 1.3.x
- Go 1.15 or higher as build only dependency.
- A suitable SQL server to use as data provider: PostgreSQL 9.4+ or MySQL 5.6+ or SQLite 3.x.
- The SQL server is optional: you can choose to use an embedded bolt database as key/value store or an in-memory data provider.
## Installation
Binary releases for Linux, macOS, and Windows are available. Please visit the [releases](https://github.com/drakkan/sftpgo/releases "releases") page.
Sample Dockerfiles for [Debian](https://www.debian.org) and [Alpine](https://alpinelinux.org) are available inside the source tree [docker](./docker) directory.
An official Docker image is available. Documentation is [here](./docker/README.md).
Some Linux distro packages are available:
@@ -65,6 +75,8 @@ Some Linux distro packages are available:
- [sftpgo](https://aur.archlinux.org/packages/sftpgo/). This package follows stable releases. It requires `git`, `gcc` and `go` to build.
- [sftpgo-bin](https://aur.archlinux.org/packages/sftpgo-bin/). This package follows stable releases downloading the prebuilt linux binary from GitHub. It does not require `git`, `gcc` and `go` to build.
- [sftpgo-git](https://aur.archlinux.org/packages/sftpgo-git/). This package builds and installs the latest git master. It requires `git`, `gcc` and `go` to build.
- Deb and RPM packages are built after each commit and for each release.
- For Ubuntu a PPA is available [here](https://launchpad.net/~sftpgo/+archive/ubuntu/sftpgo).
You can easily test new features selecting a commit from the [Actions](https://github.com/drakkan/sftpgo/actions) page and downloading the matching build artifacts for Linux, macOS or Windows. GitHub stores artifacts for 90 days.
@@ -74,9 +86,9 @@ Alternately, you can [build from source](./docs/build-from-source.md).
A full explanation of all configuration methods can be found [here](./docs/full-configuration.md).
Please make sure to [initialize the data provider](#data-provider-initialization) before running the daemon!
Please make sure to [initialize the data provider](#data-provider-initialization-and-management) before running the daemon!
To start the SFTP server with default settings, simply run:
To start SFTPGo with the default settings, simply run:
```bash
sftpgo serve
@@ -84,15 +96,15 @@ sftpgo serve
Check out [this documentation](./docs/service.md) if you want to run SFTPGo as a service.
### Data provider initialization
### Data provider initialization and management
Before starting the SFTPGo server, please ensure that the configured data provider is properly initialized.
Before starting the SFTPGo server please ensure that the configured data provider is properly initialized/updated.
SQL based data providers (SQLite, MySQL, PostgreSQL) require the creation of a database containing the required tables. Memory and bolt data providers do not require an initialization.
For PostgreSQL and MySQL providers, you need to create the configured database. For SQLite, the configured database will be automatically created at startup. Memory and bolt data providers do not require an initialization but they could require an update to the existing data after upgrading SFTPGo.
After configuring the data provider using the configuration file, you can create the required database structure using the `initprovider` command.
For SQLite provider, the `initprovider` command will auto create the database file, if missing, and the required tables.
For PostgreSQL and MySQL providers, you need to create the configured database, and the `initprovider` command will create the required tables.
SFTPGo will try to automatically detect whether the data provider is initialized/updated and, if not, will initialize/update it on startup as needed.
Alternately, you can create/update the required data provider structures yourself using the `initprovider` command.
For example, you can simply execute the following command from the configuration directory:
@@ -106,13 +118,41 @@ Take a look at the CLI usage to learn how to specify a different configuration f
sftpgo initprovider --help
```
The `initprovider` command is enough for new installations. From now on, the database structure will be automatically checked and updated, if required, at startup.
You can disable automatic data provider checks/updates at startup by setting the `update_mode` configuration key to `1`.
#### Upgrading
If for some reason you want to downgrade SFTPGo, you may need to downgrade your data provider schema and data as well. You can use the `revertprovider` command for this task.
If you are upgrading from version 0.9.5 or before, you have to manually execute the SQL scripts to create the required database structure. These scripts can be found inside the source tree [sql](./sql "sql") directory. The SQL scripts filename is, by convention, the date as `YYYYMMDD` and the suffix `.sql`. You need to apply all the SQL scripts for your database ordered by name. For example, `20190828.sql` must be applied before `20191112.sql`, and so on.
Example for SQLite: `find sql/sqlite/ -type f -iname '*.sql' -print | sort -n | xargs cat | sqlite3 sftpgo.db`.
After applying these scripts, your database structure is the same as the one obtained using `initprovider` for new installations, so from now on, you don't have to manually upgrade your database anymore.
We support the following schema versions:
- `6`, this is the latest version
- `4`, this is the schema for v1.0.0-v1.2.x
So, if you plan to downgrade from git master to 1.2.x, you can prepare your data provider executing the following command from the configuration directory:
```shell
sftpgo revertprovider --to-version 4
```
Take a look at the CLI usage to learn how to specify a different configuration file:
```bash
sftpgo revertprovider --help
```
The `revertprovider` command is not supported for the memory provider.
## Users and folders management
After starting SFTPGo you can manage users and folders using:
- the [web based administration interface](./docs/web-admin.md)
- the [REST API](./docs/rest-api.md)
To support embedded data providers like `bolt` and `SQLite`, we can't have a CLI that writes users and folders directly to the data provider; we always have to go through the REST API.
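For scripted management you can drive the REST API directly. A hedged curl sketch follows; the `admin:password` credentials, the 8080 port and the `/api/v2` paths are assumptions for this development version, so check the REST API docs linked above:

```bash
# Obtain a JWT for the admin user, then list the configured users.
# Endpoint paths and credentials are assumptions; see the REST API docs.
TOKEN=$(curl -s -u admin:password "http://127.0.0.1:8080/api/v2/token" | jq -r .access_token)
curl -s -H "Authorization: Bearer $TOKEN" "http://127.0.0.1:8080/api/v2/users"
```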
## Tutorials
Some step-to-step tutorials can be found inside the source tree [howto](./docs/howto "How-to") directory.
## Authentication options
@@ -141,21 +181,38 @@ More information about custom actions can be found [here](./docs/custom-actions.
Directories outside the user home directory can be exposed as virtual folders, more information [here](./docs/virtual-folders.md).
## Other hooks
You can get notified as soon as a new connection is established using the [Post-connect hook](./docs/post-connect-hook.md) and after each login using the [Post-login hook](./docs/post-login-hook.md).
You can use your own hook to [check passwords](./docs/check-password-hook.md).
## Storage backends
### S3 Compabible Object Storage backends
### S3 Compatible Object Storage backends
Each user can be mapped to whole bucket or to a bucket virtual folder. This way, the mapped bucket/virtual folder is exposed over SFTP/SCP. More information about S3 integration can be found [here](./docs/s3.md).
Each user can be mapped to the whole bucket or to a bucket virtual folder. This way, the mapped bucket/virtual folder is exposed over SFTP/SCP/FTP/WebDAV. More information about S3 integration can be found [here](./docs/s3.md).
### Google Cloud Storage backend
Each user can be mapped with a Google Cloud Storage bucket or a bucket virtual folder. This way, the mapped bucket/virtual folder is exposed over SFTP/SCP. More information about Google Cloud Storage integration can be found [here](./docs/google-cloud-storage.md).
Each user can be mapped with a Google Cloud Storage bucket or a bucket virtual folder. This way, the mapped bucket/virtual folder is exposed over SFTP/SCP/FTP/WebDAV. More information about Google Cloud Storage integration can be found [here](./docs/google-cloud-storage.md).
### Azure Blob Storage backend
Each user can be mapped with an Azure Blob Storage container or a container virtual folder. This way, the mapped container/virtual folder is exposed over SFTP/SCP/FTP/WebDAV. More information about Azure Blob Storage integration can be found [here](./docs/azure-blob-storage.md).
### SFTP backend
Each user can be mapped to another SFTP server account or a subfolder of it. More information can be found [here](./docs/sftpfs.md).
### Encrypted backend
Data at-rest encryption is supported via the [cryptfs backend](./docs/dare.md).
### Other Storage backends
Adding new storage backends is quite easy:
- implement the [Fs interface](./vfs/vfs.go#L18 "interface for filesystem backends").
- implement the [Fs interface](./vfs/vfs.go#L28 "interface for filesystem backends").
- update the user method `GetFilesystem` to return the new backend
- update the web interface and the REST API CLI
- add the flags for the new storage backend to the `portable` mode
@@ -166,6 +223,8 @@ Anyway, some backends require a pay per use account (or they offer free account
The [connection failed logs](./docs/logs.md) can be used for integration in tools such as [Fail2ban](http://www.fail2ban.org/). Example of [jails](./fail2ban/jails) and [filters](./fail2ban/filters) working with `systemd`/`journald` are available in fail2ban directory.
You can also use the built-in [defender](./docs/defender.md).
## Account's configuration properties
Detailed information about account configuration properties can be found [here](./docs/account.md).
@@ -176,12 +235,24 @@ SFTPGo can easily saturate a Gigabit connection on low end hardware with no spec
More in-depth analysis of performance can be found [here](./docs/performance.md).
## Release Cadence
SFTPGo releases are feature-driven; we don't have a fixed, time-based schedule. As a rough estimate, you can expect 1 or 2 new releases per year.
## Acknowledgements
SFTPGo makes use of the third party libraries listed inside [go.mod](./go.mod).
Some code was initially taken from [Pterodactyl SFTP Server](https://github.com/pterodactyl/sftp-server).
We are very grateful to all the people who contributed with ideas and/or pull requests.
Thank you [ysura](https://www.ysura.com/) for granting me stable access to a test AWS S3 account.
## Sponsors
I'd like to make SFTPGo into a sustainable long-term project and your [sponsorship](https://github.com/sponsors/drakkan) will really help :heart:
Bronze, Silver and Gold sponsors will be listed here (if they wish).
## License
GNU GPLv3

SECURITY.md (new file)

@@ -0,0 +1,12 @@
# Security Policy
## Supported Versions
Only the current release of the software is actively supported. If you need
help backporting fixes into an older release, feel free to ask.
## Reporting a Vulnerability
Email your vulnerability information to SFTPGo's maintainer:
Nicola Murino <nicola.murino@gmail.com>

cmd/gen.go (new file)

@@ -0,0 +1,12 @@
package cmd
import "github.com/spf13/cobra"
var genCmd = &cobra.Command{
Use: "gen",
Short: "A collection of useful generators",
}
func init() {
rootCmd.AddCommand(genCmd)
}

cmd/gencompletion.go (new file)

@@ -0,0 +1,76 @@
package cmd
import (
"os"
"github.com/rs/zerolog"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/logger"
)
var genCompletionCmd = &cobra.Command{
Use: "completion [bash|zsh|fish|powershell]",
Short: "Generate shell completion script",
Long: `To load completions:
Bash:
$ source <(sftpgo gen completion bash)
To load completions for each session, execute once:
Linux:
$ sudo sftpgo gen completion bash > /usr/share/bash-completion/completions/sftpgo
MacOS:
$ sudo sftpgo gen completion bash > /usr/local/etc/bash_completion.d/sftpgo
Zsh:
If shell completion is not already enabled in your environment you will need
to enable it. You can execute the following once:
$ echo "autoload -U compinit; compinit" >> ~/.zshrc
To load completions for each session, execute once:
$ sftpgo gen completion zsh > "${fpath[1]}/_sftpgo"
Fish:
$ sftpgo gen completion fish | source
To load completions for each session, execute once:
$ sftpgo gen completion fish > ~/.config/fish/completions/sftpgo.fish
`,
DisableFlagsInUseLine: true,
ValidArgs: []string{"bash", "zsh", "fish", "powershell"},
Args: cobra.ExactValidArgs(1),
Run: func(cmd *cobra.Command, args []string) {
var err error
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
switch args[0] {
case "bash":
err = cmd.Root().GenBashCompletion(os.Stdout)
case "zsh":
err = cmd.Root().GenZshCompletion(os.Stdout)
case "fish":
err = cmd.Root().GenFishCompletion(os.Stdout, true)
case "powershell":
err = cmd.Root().GenPowerShellCompletion(os.Stdout)
}
if err != nil {
logger.WarnToConsole("Unable to generate shell completion script: %v", err)
os.Exit(1)
}
},
}
func init() {
genCmd.AddCommand(genCompletionCmd)
}

cmd/genman.go (new file)

@@ -0,0 +1,52 @@
package cmd
import (
"fmt"
"os"
"github.com/rs/zerolog"
"github.com/spf13/cobra"
"github.com/spf13/cobra/doc"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/version"
)
var (
manDir string
genManCmd = &cobra.Command{
Use: "man",
Short: "Generate man pages for SFTPGo CLI",
Long: `This command automatically generates up-to-date man pages of SFTPGo's
command-line interface. By default, it creates the man page files
in the "man" directory under the current directory.
`,
Run: func(cmd *cobra.Command, args []string) {
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
if _, err := os.Stat(manDir); os.IsNotExist(err) {
err = os.MkdirAll(manDir, os.ModePerm)
if err != nil {
logger.WarnToConsole("Unable to generate man page files: %v", err)
os.Exit(1)
}
}
header := &doc.GenManHeader{
Section: "1",
Manual: "SFTPGo Manual",
Source: fmt.Sprintf("SFTPGo %v", version.Get().Version),
}
cmd.Root().DisableAutoGenTag = true
err := doc.GenManTree(cmd.Root(), header, manDir)
if err != nil {
logger.WarnToConsole("Unable to generate man page files: %v", err)
os.Exit(1)
}
},
}
)
func init() {
genManCmd.Flags().StringVarP(&manDir, "dir", "d", "man", "The directory to write the man pages")
genCmd.AddCommand(genManCmd)
}
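This mirrors how the release workflow above invokes the command. A usage sketch; the install path assumes a conventional Linux layout:

```bash
# Generate the man pages into ./man, compress them and install them locally.
sftpgo gen man -d man
gzip man/*.1
sudo cp man/*.1.gz /usr/local/share/man/man1/
```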

cmd/initprovider.go

@@ -16,18 +16,22 @@ import (
var (
initProviderCmd = &cobra.Command{
Use: "initprovider",
Short: "Initializes the configured data provider",
Long: `This command reads the data provider connection details from the specified configuration file and creates the initial structure.
Short: "Initializes and/or updates the configured data provider",
Long: `This command reads the data provider connection details from the specified
configuration file and creates the initial structure or updates the existing one,
as needed.
Some data providers such as bolt and memory does not require an initialization.
Some data providers, such as bolt and memory, do not require an initialization
but they could require an update to the existing data after upgrading SFTPGo.
For SQLite provider the database file will be auto created if missing.
For SQLite/bolt providers the database file will be auto-created if missing.
For PostgreSQL and MySQL providers you need to create the configured database, this command will create the required tables.
For PostgreSQL and MySQL providers you need to create the configured database;
this command will create/update the required tables as needed.
To initialize the data provider from the configuration directory simply use:
To initialize/update the data provider from the configuration directory simply use:
sftpgo initprovider
$ sftpgo initprovider
Please take a look at the usage below to customize the options.`,
Run: func(cmd *cobra.Command, args []string) {
@@ -39,13 +43,21 @@ Please take a look at the usage below to customize the options.`,
logger.WarnToConsole("Unable to initialize data provider, config load error: %v", err)
return
}
kmsConfig := config.GetKMSConfig()
err = kmsConfig.Initialize()
if err != nil {
logger.ErrorToConsole("unable to initialize KMS: %v", err)
os.Exit(1)
}
providerConf := config.GetProviderConf()
logger.DebugToConsole("Initializing provider: %#v config file: %#v", providerConf.Driver, viper.ConfigFileUsed())
logger.InfoToConsole("Initializing provider: %#v config file: %#v", providerConf.Driver, viper.ConfigFileUsed())
err = dataprovider.InitializeDatabase(providerConf, configDir)
if err == nil {
logger.DebugToConsole("Data provider successfully initialized")
logger.InfoToConsole("Data provider successfully initialized/updated")
} else if err == dataprovider.ErrNoInitRequired {
logger.InfoToConsole("%v", err.Error())
} else {
logger.WarnToConsole("Unable to initialize data provider: %v", err)
logger.WarnToConsole("Unable to initialize/update the data provider: %v", err)
os.Exit(1)
}
},

cmd/install.go

@@ -15,7 +15,8 @@ var (
installCmd = &cobra.Command{
Use: "install",
Short: "Install SFTPGo as Windows Service",
Long: `To install the SFTPGo Windows Service with the default values for the command line flags simply use:
Long: `To install the SFTPGo Windows Service with the default values for the command
line flags simply use:
sftpgo service install
@@ -63,7 +64,7 @@ func getCustomServeFlags() []string {
result = append(result, "--"+configDirFlag)
result = append(result, configDir)
}
if configFile != defaultConfigName {
if configFile != defaultConfigFile {
result = append(result, "--"+configFileFlag)
result = append(result, configFile)
}
@@ -89,8 +90,5 @@ func getCustomServeFlags() []string {
if logCompress != defaultLogCompress {
result = append(result, "--"+logCompressFlag+"=true")
}
if profiler != defaultProfiler {
result = append(result, "--"+profilerFlag+"=true")
}
return result
}

cmd/portable.go

@@ -3,7 +3,6 @@
package cmd
import (
"encoding/base64"
"fmt"
"io/ioutil"
"os"
@@ -13,7 +12,9 @@ import (
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/common"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/kms"
"github.com/drakkan/sftpgo/service"
"github.com/drakkan/sftpgo/sftpd"
"github.com/drakkan/sftpgo/version"
@@ -32,8 +33,8 @@ var (
portablePublicKeys []string
portablePermissions []string
portableSSHCommands []string
portableAllowedExtensions []string
portableDeniedExtensions []string
portableAllowedPatterns []string
portableDeniedPatterns []string
portableFsProvider int
portableS3Bucket string
portableS3Region string
@@ -49,18 +50,43 @@ var (
portableGCSAutoCredentials int
portableGCSStorageClass string
portableGCSKeyPrefix string
portableFTPDPort int
portableFTPSCert string
portableFTPSKey string
portableWebDAVPort int
portableWebDAVCert string
portableWebDAVKey string
portableAzContainer string
portableAzAccountName string
portableAzAccountKey string
portableAzEndpoint string
portableAzAccessTier string
portableAzSASURL string
portableAzKeyPrefix string
portableAzULPartSize int
portableAzULConcurrency int
portableAzUseEmulator bool
portableCryptPassphrase string
portableSFTPEndpoint string
portableSFTPUsername string
portableSFTPPassword string
portableSFTPPrivateKeyPath string
portableSFTPFingerprints []string
portableSFTPPrefix string
portableCmd = &cobra.Command{
Use: "portable",
Short: "Serve a single directory",
Long: `To serve the current working directory with auto generated credentials simply use:
Long: `To serve the current working directory with auto generated credentials simply
use:
sftpgo portable
$ sftpgo portable
Please take a look at the usage below to customize the serving parameters`,
Run: func(cmd *cobra.Command, args []string) {
portableDir := directoryToServe
fsProvider := dataprovider.FilesystemProvider(portableFsProvider)
if !filepath.IsAbs(portableDir) {
if portableFsProvider == 0 {
if fsProvider == dataprovider.LocalFilesystemProvider {
portableDir, _ = filepath.Abs(portableDir)
} else {
portableDir = os.TempDir()
@@ -69,34 +95,51 @@ Please take a look at the usage below to customize the serving parameters`,
permissions := make(map[string][]string)
permissions["/"] = portablePermissions
portableGCSCredentials := ""
if portableFsProvider == 2 && len(portableGCSCredentialsFile) > 0 {
fi, err := os.Stat(portableGCSCredentialsFile)
if fsProvider == dataprovider.GCSFilesystemProvider && portableGCSCredentialsFile != "" {
contents, err := getFileContents(portableGCSCredentialsFile)
if err != nil {
fmt.Printf("Invalid GCS credentials file: %v\n", err)
fmt.Printf("Unable to get GCS credentials: %v\n", err)
os.Exit(1)
}
if fi.Size() > 1048576 {
fmt.Printf("Invalid GCS credentials file: %#v is too big %v/1048576 bytes\n", portableGCSCredentialsFile,
fi.Size())
os.Exit(1)
}
creds, err := ioutil.ReadFile(portableGCSCredentialsFile)
if err != nil {
fmt.Printf("Unable to read credentials file: %v\n", err)
}
portableGCSCredentials = base64.StdEncoding.EncodeToString(creds)
portableGCSCredentials = contents
portableGCSAutoCredentials = 0
}
portableSFTPPrivateKey := ""
if fsProvider == dataprovider.SFTPFilesystemProvider && portableSFTPPrivateKeyPath != "" {
contents, err := getFileContents(portableSFTPPrivateKeyPath)
if err != nil {
fmt.Printf("Unable to get SFTP private key: %v\n", err)
os.Exit(1)
}
portableSFTPPrivateKey = contents
}
if portableFTPDPort >= 0 && len(portableFTPSCert) > 0 && len(portableFTPSKey) > 0 {
_, err := common.NewCertManager(portableFTPSCert, portableFTPSKey, filepath.Clean(defaultConfigDir),
"FTP portable")
if err != nil {
fmt.Printf("Unable to load FTPS key pair, cert file %#v key file %#v error: %v\n",
portableFTPSCert, portableFTPSKey, err)
os.Exit(1)
}
}
if portableWebDAVPort > 0 && len(portableWebDAVCert) > 0 && len(portableWebDAVKey) > 0 {
_, err := common.NewCertManager(portableWebDAVCert, portableWebDAVKey, filepath.Clean(defaultConfigDir),
"WebDAV portable")
if err != nil {
fmt.Printf("Unable to load WebDAV key pair, cert file %#v key file %#v error: %v\n",
portableWebDAVCert, portableWebDAVKey, err)
os.Exit(1)
}
}
service := service.Service{
ConfigDir: filepath.Clean(defaultConfigDir),
ConfigFile: defaultConfigName,
ConfigFile: defaultConfigFile,
LogFilePath: portableLogFile,
LogMaxSize: defaultLogMaxSize,
LogMaxBackups: defaultLogMaxBackup,
LogMaxAge: defaultLogMaxAge,
LogCompress: defaultLogCompress,
LogVerbose: portableLogVerbose,
Profiler: defaultProfiler,
Shutdown: make(chan bool),
PortableMode: 1,
PortableUser: dataprovider.User{
@@ -107,12 +150,12 @@ Please take a look at the usage below to customize the serving parameters`,
HomeDir: portableDir,
Status: 1,
FsConfig: dataprovider.Filesystem{
Provider: portableFsProvider,
Provider: dataprovider.FilesystemProvider(portableFsProvider),
S3Config: vfs.S3FsConfig{
Bucket: portableS3Bucket,
Region: portableS3Region,
AccessKey: portableS3AccessKey,
AccessSecret: portableS3AccessSecret,
AccessSecret: kms.NewPlainSecret(portableS3AccessSecret),
Endpoint: portableS3Endpoint,
StorageClass: portableS3StorageClass,
KeyPrefix: portableS3KeyPrefix,
@@ -121,22 +164,47 @@ Please take a look at the usage below to customize the serving parameters`,
},
GCSConfig: vfs.GCSFsConfig{
Bucket: portableGCSBucket,
Credentials: portableGCSCredentials,
Credentials: kms.NewPlainSecret(portableGCSCredentials),
AutomaticCredentials: portableGCSAutoCredentials,
StorageClass: portableGCSStorageClass,
KeyPrefix: portableGCSKeyPrefix,
},
AzBlobConfig: vfs.AzBlobFsConfig{
Container: portableAzContainer,
AccountName: portableAzAccountName,
AccountKey: kms.NewPlainSecret(portableAzAccountKey),
Endpoint: portableAzEndpoint,
AccessTier: portableAzAccessTier,
SASURL: portableAzSASURL,
KeyPrefix: portableAzKeyPrefix,
UseEmulator: portableAzUseEmulator,
UploadPartSize: int64(portableAzULPartSize),
UploadConcurrency: portableAzULConcurrency,
},
CryptConfig: vfs.CryptFsConfig{
Passphrase: kms.NewPlainSecret(portableCryptPassphrase),
},
SFTPConfig: vfs.SFTPFsConfig{
Endpoint: portableSFTPEndpoint,
Username: portableSFTPUsername,
Password: kms.NewPlainSecret(portableSFTPPassword),
PrivateKey: kms.NewPlainSecret(portableSFTPPrivateKey),
Fingerprints: portableSFTPFingerprints,
Prefix: portableSFTPPrefix,
},
},
Filters: dataprovider.UserFilters{
FileExtensions: parseFileExtensionsFilters(),
FilePatterns: parsePatternsFilesFilters(),
},
},
}
if err := service.StartPortableMode(portableSFTPDPort, portableSSHCommands, portableAdvertiseService,
portableAdvertiseCredentials); err == nil {
if err := service.StartPortableMode(portableSFTPDPort, portableFTPDPort, portableWebDAVPort, portableSSHCommands, portableAdvertiseService,
portableAdvertiseCredentials, portableFTPSCert, portableFTPSKey, portableWebDAVCert, portableWebDAVKey); err == nil {
service.Wait()
if service.Error == nil {
os.Exit(0)
}
}
os.Exit(1)
},
}
@@ -145,84 +213,150 @@ Please take a look at the usage below to customize the serving parameters`,
func init() {
version.AddFeature("+portable")
portableCmd.Flags().StringVarP(&directoryToServe, "directory", "d", ".",
"Path to the directory to serve. This can be an absolute path or a path relative to the current directory")
portableCmd.Flags().IntVarP(&portableSFTPDPort, "sftpd-port", "s", 0, "0 means a random non privileged port")
portableCmd.Flags().StringVarP(&directoryToServe, "directory", "d", ".", `Path to the directory to serve.
This can be an absolute path or a path
relative to the current directory
`)
portableCmd.Flags().IntVarP(&portableSFTPDPort, "sftpd-port", "s", 0, `0 means a random unprivileged port,
< 0 disabled`)
portableCmd.Flags().IntVar(&portableFTPDPort, "ftpd-port", -1, `0 means a random unprivileged port,
< 0 disabled`)
portableCmd.Flags().IntVar(&portableWebDAVPort, "webdav-port", -1, `0 means a random unprivileged port,
< 0 disabled`)
portableCmd.Flags().StringSliceVarP(&portableSSHCommands, "ssh-commands", "c", sftpd.GetDefaultSSHCommands(),
"SSH commands to enable. \"*\" means any supported SSH command including scp")
portableCmd.Flags().StringVarP(&portableUsername, "username", "u", "", "Leave empty to use an auto generated value")
portableCmd.Flags().StringVarP(&portablePassword, "password", "p", "", "Leave empty to use an auto generated value")
`SSH commands to enable.
"*" means any supported SSH command
including scp
`)
portableCmd.Flags().StringVarP(&portableUsername, "username", "u", "", `Leave empty to use an auto generated
value`)
portableCmd.Flags().StringVarP(&portablePassword, "password", "p", "", `Leave empty to use an auto generated
value`)
portableCmd.Flags().StringVarP(&portableLogFile, logFilePathFlag, "l", "", "Leave empty to disable logging")
portableCmd.Flags().BoolVarP(&portableLogVerbose, logVerboseFlag, "v", false, "Enable verbose logs")
portableCmd.Flags().StringSliceVarP(&portablePublicKeys, "public-key", "k", []string{}, "")
portableCmd.Flags().StringSliceVarP(&portablePermissions, "permissions", "g", []string{"list", "download"},
"User's permissions. \"*\" means any permission")
portableCmd.Flags().StringArrayVar(&portableAllowedExtensions, "allowed-extensions", []string{},
"Allowed file extensions case insensitive. The format is /dir::ext1,ext2. For example: \"/somedir::.jpg,.png\"")
portableCmd.Flags().StringArrayVar(&portableDeniedExtensions, "denied-extensions", []string{},
"Denied file extensions case insensitive. The format is /dir::ext1,ext2. For example: \"/somedir::.jpg,.png\"")
`User's permissions. "*" means any
permission`)
portableCmd.Flags().StringArrayVar(&portableAllowedPatterns, "allowed-patterns", []string{},
`Allowed file patterns case insensitive.
The format is:
/dir::pattern1,pattern2.
For example: "/somedir::*.jpg,a*b?.png"`)
portableCmd.Flags().StringArrayVar(&portableDeniedPatterns, "denied-patterns", []string{},
`Denied file patterns case insensitive.
The format is:
/dir::pattern1,pattern2.
For example: "/somedir::*.jpg,a*b?.png"`)
portableCmd.Flags().BoolVarP(&portableAdvertiseService, "advertise-service", "S", false,
"Advertise SFTP service using multicast DNS")
`Advertise configured services using
multicast DNS`)
portableCmd.Flags().BoolVarP(&portableAdvertiseCredentials, "advertise-credentials", "C", false,
"If the SFTP service is advertised via multicast DNS, this flag allows to put username/password inside the advertised TXT record")
portableCmd.Flags().IntVarP(&portableFsProvider, "fs-provider", "f", 0, "0 means local filesystem, 1 Amazon S3 compatible, "+
"2 Google Cloud Storage")
`If the SFTP/FTP service is
advertised via multicast DNS, this
flag allows to put username/password
inside the advertised TXT record`)
portableCmd.Flags().IntVarP(&portableFsProvider, "fs-provider", "f", int(dataprovider.LocalFilesystemProvider), `0 => local filesystem
1 => AWS S3 compatible
2 => Google Cloud Storage
3 => Azure Blob Storage
4 => Encrypted local filesystem
5 => SFTP`)
portableCmd.Flags().StringVar(&portableS3Bucket, "s3-bucket", "", "")
portableCmd.Flags().StringVar(&portableS3Region, "s3-region", "", "")
portableCmd.Flags().StringVar(&portableS3AccessKey, "s3-access-key", "", "")
portableCmd.Flags().StringVar(&portableS3AccessSecret, "s3-access-secret", "", "")
portableCmd.Flags().StringVar(&portableS3Endpoint, "s3-endpoint", "", "")
portableCmd.Flags().StringVar(&portableS3StorageClass, "s3-storage-class", "", "")
portableCmd.Flags().StringVar(&portableS3KeyPrefix, "s3-key-prefix", "", "Allows to restrict access to the virtual folder "+
"identified by this prefix and its contents")
portableCmd.Flags().IntVar(&portableS3ULPartSize, "s3-upload-part-size", 5, "The buffer size for multipart uploads (MB)")
portableCmd.Flags().IntVar(&portableS3ULConcurrency, "s3-upload-concurrency", 2, "How many parts are uploaded in parallel")
portableCmd.Flags().StringVar(&portableS3KeyPrefix, "s3-key-prefix", "", `Allows to restrict access to the
virtual folder identified by this
prefix and its contents`)
portableCmd.Flags().IntVar(&portableS3ULPartSize, "s3-upload-part-size", 5, `The buffer size for multipart uploads
(MB)`)
portableCmd.Flags().IntVar(&portableS3ULConcurrency, "s3-upload-concurrency", 2, `How many parts are uploaded in
parallel`)
portableCmd.Flags().StringVar(&portableGCSBucket, "gcs-bucket", "", "")
portableCmd.Flags().StringVar(&portableGCSStorageClass, "gcs-storage-class", "", "")
portableCmd.Flags().StringVar(&portableGCSKeyPrefix, "gcs-key-prefix", "", "Allows to restrict access to the virtual folder "+
"identified by this prefix and its contents")
portableCmd.Flags().StringVar(&portableGCSCredentialsFile, "gcs-credentials-file", "", "Google Cloud Storage JSON credentials file")
portableCmd.Flags().IntVar(&portableGCSAutoCredentials, "gcs-automatic-credentials", 1, "0 means explicit credentials using a JSON "+
"credentials file, 1 automatic")
portableCmd.Flags().StringVar(&portableGCSKeyPrefix, "gcs-key-prefix", "", `Allows to restrict access to the
virtual folder identified by this
prefix and its contents`)
portableCmd.Flags().StringVar(&portableGCSCredentialsFile, "gcs-credentials-file", "", `Google Cloud Storage JSON credentials
file`)
portableCmd.Flags().IntVar(&portableGCSAutoCredentials, "gcs-automatic-credentials", 1, `0 means explicit credentials using
a JSON credentials file, 1 automatic
`)
portableCmd.Flags().StringVar(&portableFTPSCert, "ftpd-cert", "", "Path to the certificate file for FTPS")
portableCmd.Flags().StringVar(&portableFTPSKey, "ftpd-key", "", "Path to the key file for FTPS")
portableCmd.Flags().StringVar(&portableWebDAVCert, "webdav-cert", "", `Path to the certificate file for WebDAV
over HTTPS`)
portableCmd.Flags().StringVar(&portableWebDAVKey, "webdav-key", "", `Path to the key file for WebDAV over
HTTPS`)
portableCmd.Flags().StringVar(&portableAzContainer, "az-container", "", "")
portableCmd.Flags().StringVar(&portableAzAccountName, "az-account-name", "", "")
portableCmd.Flags().StringVar(&portableAzAccountKey, "az-account-key", "", "")
portableCmd.Flags().StringVar(&portableAzSASURL, "az-sas-url", "", `Shared access signature URL`)
portableCmd.Flags().StringVar(&portableAzEndpoint, "az-endpoint", "", `Leave empty to use the default:
"blob.core.windows.net"`)
portableCmd.Flags().StringVar(&portableAzAccessTier, "az-access-tier", "", `Leave empty to use the default
container setting`)
portableCmd.Flags().StringVar(&portableAzKeyPrefix, "az-key-prefix", "", `Allows to restrict access to the
virtual folder identified by this
prefix and its contents`)
portableCmd.Flags().IntVar(&portableAzULPartSize, "az-upload-part-size", 4, `The buffer size for multipart uploads
(MB)`)
portableCmd.Flags().IntVar(&portableAzULConcurrency, "az-upload-concurrency", 2, `How many parts are uploaded in
parallel`)
portableCmd.Flags().BoolVar(&portableAzUseEmulator, "az-use-emulator", false, "")
portableCmd.Flags().StringVar(&portableCryptPassphrase, "crypto-passphrase", "", `Passphrase for encryption/decryption`)
portableCmd.Flags().StringVar(&portableSFTPEndpoint, "sftp-endpoint", "", `SFTP endpoint as host:port for SFTP
provider`)
portableCmd.Flags().StringVar(&portableSFTPUsername, "sftp-username", "", `SFTP user for SFTP provider`)
portableCmd.Flags().StringVar(&portableSFTPPassword, "sftp-password", "", `SFTP password for SFTP provider`)
portableCmd.Flags().StringVar(&portableSFTPPrivateKeyPath, "sftp-key-path", "", `SFTP private key path for SFTP provider`)
portableCmd.Flags().StringSliceVar(&portableSFTPFingerprints, "sftp-fingerprints", []string{}, `SFTP fingerprints to verify remote host
key for SFTP provider`)
portableCmd.Flags().StringVar(&portableSFTPPrefix, "sftp-prefix", "", `SFTP prefix allows restrict all
operations to a given path within the
remote SFTP server`)
rootCmd.AddCommand(portableCmd)
}
func parseFileExtensionsFilters() []dataprovider.ExtensionsFilter {
var extensions []dataprovider.ExtensionsFilter
for _, val := range portableAllowedExtensions {
p, exts := getExtensionsFilterValues(strings.TrimSpace(val))
func parsePatternsFilesFilters() []dataprovider.PatternsFilter {
var patterns []dataprovider.PatternsFilter
for _, val := range portableAllowedPatterns {
p, exts := getPatternsFilterValues(strings.TrimSpace(val))
if len(p) > 0 {
extensions = append(extensions, dataprovider.ExtensionsFilter{
patterns = append(patterns, dataprovider.PatternsFilter{
Path: path.Clean(p),
AllowedExtensions: exts,
DeniedExtensions: []string{},
AllowedPatterns: exts,
DeniedPatterns: []string{},
})
}
}
for _, val := range portableDeniedExtensions {
p, exts := getExtensionsFilterValues(strings.TrimSpace(val))
for _, val := range portableDeniedPatterns {
p, exts := getPatternsFilterValues(strings.TrimSpace(val))
if len(p) > 0 {
found := false
for index, e := range extensions {
for index, e := range patterns {
if path.Clean(e.Path) == path.Clean(p) {
extensions[index].DeniedExtensions = append(extensions[index].DeniedExtensions, exts...)
patterns[index].DeniedPatterns = append(patterns[index].DeniedPatterns, exts...)
found = true
break
}
}
if !found {
extensions = append(extensions, dataprovider.ExtensionsFilter{
patterns = append(patterns, dataprovider.PatternsFilter{
Path: path.Clean(p),
AllowedExtensions: []string{},
DeniedExtensions: exts,
AllowedPatterns: []string{},
DeniedPatterns: exts,
})
}
}
}
return extensions
return patterns
}
func getExtensionsFilterValues(value string) (string, []string) {
func getPatternsFilterValues(value string) (string, []string) {
if strings.Contains(value, "::") {
dirExts := strings.Split(value, "::")
if len(dirExts) > 1 {
@@ -234,10 +368,25 @@ func getExtensionsFilterValues(value string) (string, []string) {
exts = append(exts, cleanedExt)
}
}
if len(dir) > 0 && len(exts) > 0 {
if dir != "" && len(exts) > 0 {
return dir, exts
}
}
}
return "", nil
}
func getFileContents(name string) (string, error) {
fi, err := os.Stat(name)
if err != nil {
return "", err
}
if fi.Size() > 1048576 {
return "", fmt.Errorf("%#v is too big %v/1048576 bytes", name, fi.Size())
}
contents, err := ioutil.ReadFile(name)
if err != nil {
return "", err
}
return string(contents), nil
}
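Putting the reworked flags together, a portable-mode usage sketch; the directory, ports and pattern are illustrative:

```bash
# Serve ./public over SFTP on port 2022 and FTP on a random unprivileged port;
# WebDAV stays disabled (its port defaults to -1).
sftpgo portable -d ./public -s 2022 --ftpd-port 0 \
  -g list,download \
  --denied-patterns "/::*.tmp"
```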

cmd/revertprovider.go (new file)

@@ -0,0 +1,64 @@
package cmd
import (
"os"
"github.com/rs/zerolog"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/drakkan/sftpgo/config"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
)
var (
revertProviderTargetVersion int
revertProviderCmd = &cobra.Command{
Use: "revertprovider",
Short: "Revert the configured data provider to a previous version",
Long: `This command reads the data provider connection details from the specified
configuration file and restores the provider schema and/or data to a previous version.
This command is not supported for the memory provider.
Please take a look at the usage below to customize the options.`,
Run: func(cmd *cobra.Command, args []string) {
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
if revertProviderTargetVersion != 4 {
logger.WarnToConsole("Unsupported target version, 4 is the only supported one")
os.Exit(1)
}
configDir = utils.CleanDirInput(configDir)
err := config.LoadConfig(configDir, configFile)
if err != nil {
logger.WarnToConsole("Unable to initialize data provider, config load error: %v", err)
os.Exit(1)
}
kmsConfig := config.GetKMSConfig()
err = kmsConfig.Initialize()
if err != nil {
logger.ErrorToConsole("unable to initialize KMS: %v", err)
os.Exit(1)
}
providerConf := config.GetProviderConf()
logger.InfoToConsole("Reverting provider: %#v config file: %#v target version %v", providerConf.Driver,
viper.ConfigFileUsed(), revertProviderTargetVersion)
err = dataprovider.RevertDatabase(providerConf, configDir, revertProviderTargetVersion)
if err != nil {
logger.WarnToConsole("Error reverting provider: %v", err)
os.Exit(1)
}
logger.InfoToConsole("Data provider successfully reverted")
},
}
)
func init() {
addConfigFlags(revertProviderCmd)
revertProviderCmd.Flags().IntVar(&revertProviderTargetVersion, "to-version", 0, `4 means the version supported in v1.0.0-v1.2.x`)
revertProviderCmd.MarkFlagRequired("to-version") //nolint:errcheck
rootCmd.AddCommand(revertProviderCmd)
}

cmd/root.go

@@ -8,7 +8,6 @@ import (
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/drakkan/sftpgo/config"
"github.com/drakkan/sftpgo/version"
)
@@ -29,17 +28,26 @@ const (
logCompressKey = "log_compress"
logVerboseFlag = "log-verbose"
logVerboseKey = "log_verbose"
profilerFlag = "profiler"
profilerKey = "profiler"
loadDataFromFlag = "loaddata-from"
loadDataFromKey = "loaddata_from"
loadDataModeFlag = "loaddata-mode"
loadDataModeKey = "loaddata_mode"
loadDataQuotaScanFlag = "loaddata-scan"
loadDataQuotaScanKey = "loaddata_scan"
loadDataCleanFlag = "loaddata-clean"
loadDataCleanKey = "loaddata_clean"
defaultConfigDir = "."
defaultConfigName = config.DefaultConfigName
defaultConfigFile = ""
defaultLogFile = "sftpgo.log"
defaultLogMaxSize = 10
defaultLogMaxBackup = 5
defaultLogMaxAge = 28
defaultLogCompress = false
defaultLogVerbose = true
defaultProfiler = false
defaultLoadDataFrom = ""
defaultLoadDataMode = 1
defaultLoadDataQuotaScan = 0
defaultLoadDataClean = false
)
var (
@@ -51,11 +59,14 @@ var (
logMaxAge int
logCompress bool
logVerbose bool
profiler bool
loadDataFrom string
loadDataMode int
loadDataQuotaScan int
loadDataClean bool
rootCmd = &cobra.Command{
Use: "sftpgo",
Short: "Full featured and highly configurable SFTP server",
Short: "Fully featured and highly configurable SFTP server",
}
)
@@ -79,18 +90,34 @@ func addConfigFlags(cmd *cobra.Command) {
viper.SetDefault(configDirKey, defaultConfigDir)
viper.BindEnv(configDirKey, "SFTPGO_CONFIG_DIR") //nolint:errcheck // err is not nil only if the key to bind is missing
cmd.Flags().StringVarP(&configDir, configDirFlag, "c", viper.GetString(configDirKey),
"Location for SFTPGo config dir. This directory should contain the \"sftpgo\" configuration file or the configured "+
"config-file and it is used as the base for files with a relative path (eg. the private keys for the SFTP server, "+
"the SQLite database if you use SQLite as data provider). This flag can be set using SFTPGO_CONFIG_DIR env var too.")
`Location for the config dir. This directory
is used as the base for files with a relative
path, eg. the private keys for the SFTP
server or the SQLite database if you use
SQLite as data provider.
The configuration file, if not explicitly set,
is looked for in this dir. We support reading
from JSON, TOML, YAML, HCL, envfile and Java
properties config files. The default config
file name is "sftpgo" and therefore
"sftpgo.json", "sftpgo.yaml" and so on are
searched.
This flag can be set using SFTPGO_CONFIG_DIR
env var too.`)
viper.BindPFlag(configDirKey, cmd.Flags().Lookup(configDirFlag)) //nolint:errcheck
viper.SetDefault(configFileKey, defaultConfigName)
viper.SetDefault(configFileKey, defaultConfigFile)
viper.BindEnv(configFileKey, "SFTPGO_CONFIG_FILE") //nolint:errcheck
cmd.Flags().StringVarP(&configFile, configFileFlag, "f", viper.GetString(configFileKey),
"Name for SFTPGo configuration file. It must be the name of a file stored in config-dir not the absolute path to the "+
"configuration file. The specified file name must have no extension we automatically load JSON, YAML, TOML, HCL and "+
"Java properties. Therefore if you set \"sftpgo\" then \"sftpgo.json\", \"sftpgo.yaml\" and so on are searched. "+
"This flag can be set using SFTPGO_CONFIG_FILE env var too.")
cmd.Flags().StringVar(&configFile, configFileFlag, viper.GetString(configFileKey),
`Path to SFTPGo configuration file.
This flag explicitly defines the path, name
and extension of the config file. It must be
an absolute path or a path relative to the
configuration directory. The specified file
name must have a supported extension (JSON,
YAML, TOML, HCL or Java properties).
This flag can be set using SFTPGO_CONFIG_FILE
env var too.`)
viper.BindPFlag(configFileKey, cmd.Flags().Lookup(configFileFlag)) //nolint:errcheck
}
@@ -100,48 +127,102 @@ func addServeFlags(cmd *cobra.Command) {
viper.SetDefault(logFilePathKey, defaultLogFile)
viper.BindEnv(logFilePathKey, "SFTPGO_LOG_FILE_PATH") //nolint:errcheck
cmd.Flags().StringVarP(&logFilePath, logFilePathFlag, "l", viper.GetString(logFilePathKey),
"Location for the log file. Leave empty to write logs to the standard output. This flag can be set using SFTPGO_LOG_FILE_PATH "+
"env var too.")
`Location for the log file. Leave empty to write
logs to the standard output. This flag can be
set using SFTPGO_LOG_FILE_PATH env var too.
`)
viper.BindPFlag(logFilePathKey, cmd.Flags().Lookup(logFilePathFlag)) //nolint:errcheck
viper.SetDefault(logMaxSizeKey, defaultLogMaxSize)
viper.BindEnv(logMaxSizeKey, "SFTPGO_LOG_MAX_SIZE") //nolint:errcheck
cmd.Flags().IntVarP(&logMaxSize, logMaxSizeFlag, "s", viper.GetInt(logMaxSizeKey),
"Maximum size in megabytes of the log file before it gets rotated. This flag can be set using SFTPGO_LOG_MAX_SIZE "+
"env var too. It is unused if log-file-path is empty.")
`Maximum size in megabytes of the log file
before it gets rotated. This flag can be set
using SFTPGO_LOG_MAX_SIZE env var too. It is
unused if log-file-path is empty.
`)
viper.BindPFlag(logMaxSizeKey, cmd.Flags().Lookup(logMaxSizeFlag)) //nolint:errcheck
viper.SetDefault(logMaxBackupKey, defaultLogMaxBackup)
viper.BindEnv(logMaxBackupKey, "SFTPGO_LOG_MAX_BACKUPS") //nolint:errcheck
cmd.Flags().IntVarP(&logMaxBackups, logMaxBackupFlag, "b", viper.GetInt(logMaxBackupKey),
"Maximum number of old log files to retain. This flag can be set using SFTPGO_LOG_MAX_BACKUPS env var too. "+
"It is unused if log-file-path is empty.")
`Maximum number of old log files to retain.
This flag can be set using SFTPGO_LOG_MAX_BACKUPS
env var too. It is unused if log-file-path is
empty.`)
viper.BindPFlag(logMaxBackupKey, cmd.Flags().Lookup(logMaxBackupFlag)) //nolint:errcheck
viper.SetDefault(logMaxAgeKey, defaultLogMaxAge)
viper.BindEnv(logMaxAgeKey, "SFTPGO_LOG_MAX_AGE") //nolint:errcheck
cmd.Flags().IntVarP(&logMaxAge, logMaxAgeFlag, "a", viper.GetInt(logMaxAgeKey),
"Maximum number of days to retain old log files. This flag can be set using SFTPGO_LOG_MAX_AGE env var too. "+
"It is unused if log-file-path is empty.")
`Maximum number of days to retain old log files.
This flag can be set using SFTPGO_LOG_MAX_AGE env
var too. It is unused if log-file-path is empty.
`)
viper.BindPFlag(logMaxAgeKey, cmd.Flags().Lookup(logMaxAgeFlag)) //nolint:errcheck
viper.SetDefault(logCompressKey, defaultLogCompress)
viper.BindEnv(logCompressKey, "SFTPGO_LOG_COMPRESS") //nolint:errcheck
cmd.Flags().BoolVarP(&logCompress, logCompressFlag, "z", viper.GetBool(logCompressKey), "Determine if the rotated "+
"log files should be compressed using gzip. This flag can be set using SFTPGO_LOG_COMPRESS env var too. "+
"It is unused if log-file-path is empty.")
cmd.Flags().BoolVarP(&logCompress, logCompressFlag, "z", viper.GetBool(logCompressKey),
`Determine if the rotated log files
should be compressed using gzip. This flag can
be set using SFTPGO_LOG_COMPRESS env var too.
It is unused if log-file-path is empty.
`)
viper.BindPFlag(logCompressKey, cmd.Flags().Lookup(logCompressFlag)) //nolint:errcheck
viper.SetDefault(logVerboseKey, defaultLogVerbose)
viper.BindEnv(logVerboseKey, "SFTPGO_LOG_VERBOSE") //nolint:errcheck
cmd.Flags().BoolVarP(&logVerbose, logVerboseFlag, "v", viper.GetBool(logVerboseKey), "Enable verbose logs. "+
"This flag can be set using SFTPGO_LOG_VERBOSE env var too.")
cmd.Flags().BoolVarP(&logVerbose, logVerboseFlag, "v", viper.GetBool(logVerboseKey),
`Enable verbose logs. This flag can be set
using SFTPGO_LOG_VERBOSE env var too.
`)
viper.BindPFlag(logVerboseKey, cmd.Flags().Lookup(logVerboseFlag)) //nolint:errcheck
viper.SetDefault(profilerKey, defaultProfiler)
viper.BindEnv(profilerKey, "SFTPGO_PROFILER") //nolint:errcheck
cmd.Flags().BoolVarP(&profiler, profilerFlag, "p", viper.GetBool(profilerKey), "Enable the built-in profiler. "+
"The profiler will be accessible via HTTP/HTTPS using the base URL \"/debug/pprof/\". "+
"This flag can be set using SFTPGO_PROFILER env var too.")
viper.BindPFlag(profilerKey, cmd.Flags().Lookup(profilerFlag)) //nolint:errcheck
viper.SetDefault(loadDataFromKey, defaultLoadDataFrom)
viper.BindEnv(loadDataFromKey, "SFTPGO_LOADDATA_FROM") //nolint:errcheck
cmd.Flags().StringVar(&loadDataFrom, loadDataFromFlag, viper.GetString(loadDataFromKey),
`Load users and folders from this file.
The file must be specified as absolute path
and it must contain a backup obtained using
the "dumpdata" REST API or compatible content.
This flag can be set using SFTPGO_LOADDATA_FROM
env var too.
`)
viper.BindPFlag(loadDataFromKey, cmd.Flags().Lookup(loadDataFromFlag)) //nolint:errcheck
viper.SetDefault(loadDataModeKey, defaultLoadDataMode)
viper.BindEnv(loadDataModeKey, "SFTPGO_LOADDATA_MODE") //nolint:errcheck
cmd.Flags().IntVar(&loadDataMode, loadDataModeFlag, viper.GetInt(loadDataModeKey),
`Restore mode for data to load:
0 - new users are added, existing users are
updated
1 - New users are added, existing users are
not modified
This flag can be set using SFTPGO_LOADDATA_MODE
env var too.
`)
viper.BindPFlag(loadDataModeKey, cmd.Flags().Lookup(loadDataModeFlag)) //nolint:errcheck
viper.SetDefault(loadDataQuotaScanKey, defaultLoadDataQuotaScan)
viper.BindEnv(loadDataQuotaScanKey, "SFTPGO_LOADDATA_QUOTA_SCAN") //nolint:errcheck
cmd.Flags().IntVar(&loadDataQuotaScan, loadDataQuotaScanFlag, viper.GetInt(loadDataQuotaScanKey),
`Quota scan mode after data load:
0 - no quota scan
1 - scan quota
2 - scan quota if the user has quota restrictions
This flag can be set using SFTPGO_LOADDATA_QUOTA_SCAN
env var too.
(default 0)`)
viper.BindPFlag(loadDataQuotaScanKey, cmd.Flags().Lookup(loadDataQuotaScanFlag)) //nolint:errcheck
viper.SetDefault(loadDataCleanKey, defaultLoadDataClean)
viper.BindEnv(loadDataCleanKey, "SFTPGO_LOADDATA_CLEAN") //nolint:errcheck
cmd.Flags().BoolVar(&loadDataClean, loadDataCleanFlag, viper.GetBool(loadDataCleanKey),
`Determine if the loaddata-from file should
be removed after a successful load. This flag
can be set using SFTPGO_LOADDATA_CLEAN env var
too. (default "false")
`)
viper.BindPFlag(loadDataCleanKey, cmd.Flags().Lookup(loadDataCleanFlag)) //nolint:errcheck
}
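
The flags above all follow the same default/env/flag layering. The following standalone sketch (not part of this changeset) shows the pattern in isolation, assuming the documented cobra/viper behavior: viper supplies the default, an SFTPGO_* environment variable overrides it, and an explicit command line flag wins over both.

package main

import (
	"fmt"

	"github.com/spf13/cobra"
	"github.com/spf13/viper"
)

func main() {
	var loadDataMode int
	cmd := &cobra.Command{
		Use: "demo",
		Run: func(cmd *cobra.Command, args []string) {
			// Precedence: explicit flag > SFTPGO_LOADDATA_MODE env var > default
			fmt.Println("loaddata_mode:", viper.GetInt("loaddata_mode"))
		},
	}
	viper.SetDefault("loaddata_mode", 1)
	viper.BindEnv("loaddata_mode", "SFTPGO_LOADDATA_MODE") //nolint:errcheck
	cmd.Flags().IntVar(&loadDataMode, "loaddata-mode", viper.GetInt("loaddata_mode"),
		"Restore mode for data to load")
	viper.BindPFlag("loaddata_mode", cmd.Flags().Lookup("loaddata-mode")) //nolint:errcheck
	_ = cmd.Execute()
}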

View File

@@ -12,7 +12,7 @@ import (
var (
rotateLogCmd = &cobra.Command{
Use: "rotatelogs",
Short: "Signal to the running service to close the existing log file and immediately create a new one",
Short: "Signal to the running service to rotate the logs",
Run: func(cmd *cobra.Command, args []string) {
s := service.WindowsService{
Service: service.Service{

View File

@@ -13,9 +13,10 @@ var (
serveCmd = &cobra.Command{
Use: "serve",
Short: "Start the SFTP Server",
Long: `To start the SFTPGo with the default values for the command line flags simply use:
Long: `To start SFTPGo with the default values for the command line flags simply
use:
sftpgo serve
$ sftpgo serve
Please take a look at the usage below to customize the startup options`,
Run: func(cmd *cobra.Command, args []string) {
@@ -28,13 +29,18 @@ Please take a look at the usage below to customize the startup options`,
LogMaxAge: logMaxAge,
LogCompress: logCompress,
LogVerbose: logVerbose,
Profiler: profiler,
LoadDataFrom: loadDataFrom,
LoadDataMode: loadDataMode,
LoadDataQuotaScan: loadDataQuotaScan,
LoadDataClean: loadDataClean,
Shutdown: make(chan bool),
}
if err := service.Start(); err == nil {
service.Wait()
if service.Error == nil {
os.Exit(0)
}
}
os.Exit(1)
},
}

View File

@@ -7,7 +7,7 @@ import (
var (
serviceCmd = &cobra.Command{
Use: "service",
Short: "Install, Uninstall, Start, Stop, Reload and retrieve status for SFTPGo Windows Service",
Short: "Manage SFTPGo Windows Service",
}
)

View File

@@ -29,7 +29,6 @@ var (
LogMaxAge: logMaxAge,
LogCompress: logCompress,
LogVerbose: logVerbose,
Profiler: profiler,
Shutdown: make(chan bool),
}
winService := service.WindowsService{

163
cmd/startsubsys.go Normal file
View File

@@ -0,0 +1,163 @@
package cmd
import (
"io"
"os"
"os/user"
"path/filepath"
"github.com/rs/xid"
"github.com/rs/zerolog"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/drakkan/sftpgo/common"
"github.com/drakkan/sftpgo/config"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/sftpd"
"github.com/drakkan/sftpgo/version"
)
var (
logJournalD = false
preserveHomeDir = false
baseHomeDir = ""
subsystemCmd = &cobra.Command{
Use: "startsubsys",
Short: "Use SFTPGo as SFTP file transfer subsystem",
Long: `In this mode SFTPGo speaks the server side of SFTP protocol to stdout and
expects client requests from stdin.
This mode is not intended to be called directly, but from sshd using the
Subsystem option.
For example, add a line like this one to "/etc/ssh/sshd_config":
Subsystem sftp sftpgo startsubsys
Command-line flags should be specified in the Subsystem declaration.
`,
Run: func(cmd *cobra.Command, args []string) {
logSender := "startsubsys"
connectionID := xid.New().String()
logLevel := zerolog.DebugLevel
if !logVerbose {
logLevel = zerolog.InfoLevel
}
if logJournalD {
logger.InitJournalDLogger(logLevel)
} else {
logger.InitStdErrLogger(logLevel)
}
osUser, err := user.Current()
if err != nil {
logger.Error(logSender, connectionID, "unable to get the current user: %v", err)
os.Exit(1)
}
username := osUser.Username
homedir := osUser.HomeDir
logger.Info(logSender, connectionID, "starting SFTPGo %v as subsystem, user %#v home dir %#v config dir %#v base home dir %#v",
version.Get(), username, homedir, configDir, baseHomeDir)
err = config.LoadConfig(configDir, configFile)
if err != nil {
logger.Error(logSender, connectionID, "unable to load configuration: %v", err)
os.Exit(1)
}
commonConfig := config.GetCommonConfig()
// idle connections are managed externally
commonConfig.IdleTimeout = 0
config.SetCommonConfig(commonConfig)
if err := common.Initialize(config.GetCommonConfig()); err != nil {
logger.Error(logSender, connectionID, "%v", err)
os.Exit(1)
}
kmsConfig := config.GetKMSConfig()
if err := kmsConfig.Initialize(); err != nil {
logger.Error(logSender, connectionID, "unable to initialize KMS: %v", err)
os.Exit(1)
}
dataProviderConf := config.GetProviderConf()
if dataProviderConf.Driver == dataprovider.SQLiteDataProviderName || dataProviderConf.Driver == dataprovider.BoltDataProviderName {
logger.Debug(logSender, connectionID, "data provider %#v not supported in subsystem mode, using %#v provider",
dataProviderConf.Driver, dataprovider.MemoryDataProviderName)
dataProviderConf.Driver = dataprovider.MemoryDataProviderName
dataProviderConf.Name = ""
dataProviderConf.PreferDatabaseCredentials = true
}
config.SetProviderConf(dataProviderConf)
err = dataprovider.Initialize(dataProviderConf, configDir, false)
if err != nil {
logger.Error(logSender, connectionID, "unable to initialize the data provider: %v", err)
os.Exit(1)
}
httpConfig := config.GetHTTPConfig()
httpConfig.Initialize(configDir)
user, err := dataprovider.UserExists(username)
if err == nil {
if user.HomeDir != filepath.Clean(homedir) && !preserveHomeDir {
// update the user
user.HomeDir = filepath.Clean(homedir)
err = dataprovider.UpdateUser(&user)
if err != nil {
logger.Error(logSender, connectionID, "unable to update user %#v: %v", username, err)
os.Exit(1)
}
}
} else {
user.Username = username
if baseHomeDir != "" && filepath.IsAbs(baseHomeDir) {
user.HomeDir = filepath.Join(baseHomeDir, username)
} else {
user.HomeDir = filepath.Clean(homedir)
}
logger.Debug(logSender, connectionID, "home dir for new user %#v", user.HomeDir)
user.Password = connectionID
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
err = dataprovider.AddUser(&user)
if err != nil {
logger.Error(logSender, connectionID, "unable to add user %#v: %v", username, err)
os.Exit(1)
}
}
err = sftpd.ServeSubSystemConnection(user, connectionID, os.Stdin, os.Stdout)
if err != nil && err != io.EOF {
logger.Warn(logSender, connectionID, "serving subsystem finished with error: %v", err)
os.Exit(1)
}
logger.Info(logSender, connectionID, "serving subsystem finished")
os.Exit(0)
},
}
)
func init() {
subsystemCmd.Flags().BoolVarP(&preserveHomeDir, "preserve-home", "p", false, `If the user already exists, the existing home
directory will not be changed`)
subsystemCmd.Flags().StringVarP(&baseHomeDir, "base-home-dir", "d", "", `If the user does not exist specify an alternate
starting directory. The home directory for a new
user will be:
[base-home-dir]/[username]
base-home-dir must be an absolute path.`)
subsystemCmd.Flags().BoolVarP(&logJournalD, "log-to-journald", "j", false, `Send logs to journald. Only available on Linux.
Use:
$ journalctl -o verbose -f
to see the full logs.
If not set, the logs will be sent to the standard
error`)
addConfigFlags(subsystemCmd)
viper.SetDefault(logVerboseKey, defaultLogVerbose)
viper.BindEnv(logVerboseKey, "SFTPGO_LOG_VERBOSE") //nolint:errcheck
subsystemCmd.Flags().BoolVarP(&logVerbose, logVerboseFlag, "v", viper.GetBool(logVerboseKey),
`Enable verbose logs. This flag can be set
using SFTPGO_LOG_VERBOSE env var too.
`)
viper.BindPFlag(logVerboseKey, subsystemCmd.Flags().Lookup(logVerboseFlag)) //nolint:errcheck
rootCmd.AddCommand(subsystemCmd)
}
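
For reference, a hypothetical sshd_config entry combining the flags registered above could look like this (the binary path and base home dir are assumptions, adjust them to your install):

Subsystem sftp /usr/local/bin/sftpgo startsubsys --base-home-dir /srv/sftpgo --log-to-journald

As the long description notes, the flags must be placed on the Subsystem line itself, since sshd spawns the subsystem directly.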

205
common/actions.go Normal file
View File

@@ -0,0 +1,205 @@
package common
import (
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"net/http"
"net/url"
"os"
"os/exec"
"path/filepath"
"strings"
"time"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/httpclient"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
)
var (
errUnconfiguredAction = errors.New("no hook is configured for this action")
errNoHook = errors.New("unable to execute action, no hook defined")
errUnexpectedHTTResponse = errors.New("unexpected HTTP response code")
)
// ProtocolActions defines the action to execute on file operations and SSH commands
type ProtocolActions struct {
// Valid values are download, upload, pre-delete, delete, rename, ssh_cmd. Empty slice to disable
ExecuteOn []string `json:"execute_on" mapstructure:"execute_on"`
// Absolute path to an external program or an HTTP URL
Hook string `json:"hook" mapstructure:"hook"`
}
var actionHandler ActionHandler = defaultActionHandler{}
// InitializeActionHandler lets the user choose an action handler implementation.
//
// Do NOT call this function after application initialization.
func InitializeActionHandler(handler ActionHandler) {
actionHandler = handler
}
// SSHCommandActionNotification executes the defined action for the specified SSH command.
func SSHCommandActionNotification(user *dataprovider.User, filePath, target, sshCmd string, err error) {
notification := newActionNotification(user, operationSSHCmd, filePath, target, sshCmd, ProtocolSSH, 0, err)
go actionHandler.Handle(notification) // nolint:errcheck
}
// ActionHandler handles a notification for a Protocol Action.
type ActionHandler interface {
Handle(notification ActionNotification) error
}
// ActionNotification defines a notification for a Protocol Action.
type ActionNotification struct {
Action string `json:"action"`
Username string `json:"username"`
Path string `json:"path"`
TargetPath string `json:"target_path,omitempty"`
SSHCmd string `json:"ssh_cmd,omitempty"`
FileSize int64 `json:"file_size,omitempty"`
FsProvider int `json:"fs_provider"`
Bucket string `json:"bucket,omitempty"`
Endpoint string `json:"endpoint,omitempty"`
Status int `json:"status"`
Protocol string `json:"protocol"`
}
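Based on the json tags above, an HTTP hook receives a POST body like the following illustrative payload; fields tagged omitempty (target_path, ssh_cmd, file_size, bucket, endpoint) are dropped when empty, and all values here are assumptions, not taken from this changeset:

{
  "action": "upload",
  "username": "user1",
  "path": "/srv/sftpgo/user1/report.txt",
  "file_size": 12345,
  "fs_provider": 0,
  "status": 1,
  "protocol": "SFTP"
}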
func newActionNotification(
user *dataprovider.User,
operation, filePath, target, sshCmd, protocol string,
fileSize int64,
err error,
) ActionNotification {
var bucket, endpoint string
status := 1
if user.FsConfig.Provider == dataprovider.S3FilesystemProvider {
bucket = user.FsConfig.S3Config.Bucket
endpoint = user.FsConfig.S3Config.Endpoint
} else if user.FsConfig.Provider == dataprovider.GCSFilesystemProvider {
bucket = user.FsConfig.GCSConfig.Bucket
} else if user.FsConfig.Provider == dataprovider.AzureBlobFilesystemProvider {
bucket = user.FsConfig.AzBlobConfig.Container
if user.FsConfig.AzBlobConfig.SASURL != "" {
endpoint = user.FsConfig.AzBlobConfig.SASURL
} else {
endpoint = user.FsConfig.AzBlobConfig.Endpoint
}
}
if err == ErrQuotaExceeded {
status = 2
} else if err != nil {
status = 0
}
return ActionNotification{
Action: operation,
Username: user.Username,
Path: filePath,
TargetPath: target,
SSHCmd: sshCmd,
FileSize: fileSize,
FsProvider: int(user.FsConfig.Provider),
Bucket: bucket,
Endpoint: endpoint,
Status: status,
Protocol: protocol,
}
}
type defaultActionHandler struct{}
func (h defaultActionHandler) Handle(notification ActionNotification) error {
if !utils.IsStringInSlice(notification.Action, Config.Actions.ExecuteOn) {
return errUnconfiguredAction
}
if Config.Actions.Hook == "" {
logger.Warn(notification.Protocol, "", "Unable to send notification, no hook is defined")
return errNoHook
}
if strings.HasPrefix(Config.Actions.Hook, "http") {
return h.handleHTTP(notification)
}
return h.handleCommand(notification)
}
func (h defaultActionHandler) handleHTTP(notification ActionNotification) error {
u, err := url.Parse(Config.Actions.Hook)
if err != nil {
logger.Warn(notification.Protocol, "", "Invalid hook %#v for operation %#v: %v", Config.Actions.Hook, notification.Action, err)
return err
}
startTime := time.Now()
respCode := 0
httpClient := httpclient.GetHTTPClient()
var b bytes.Buffer
_ = json.NewEncoder(&b).Encode(notification)
resp, err := httpClient.Post(u.String(), "application/json", &b)
if err == nil {
respCode = resp.StatusCode
resp.Body.Close()
if respCode != http.StatusOK {
err = errUnexpectedHTTResponse
}
}
logger.Debug(notification.Protocol, "", "notified operation %#v to URL: %v status code: %v, elapsed: %v err: %v", notification.Action, u.String(), respCode, time.Since(startTime), err)
return err
}
func (h defaultActionHandler) handleCommand(notification ActionNotification) error {
if !filepath.IsAbs(Config.Actions.Hook) {
err := fmt.Errorf("invalid notification command %#v", Config.Actions.Hook)
logger.Warn(notification.Protocol, "", "unable to execute notification command: %v", err)
return err
}
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
cmd := exec.CommandContext(ctx, Config.Actions.Hook, notification.Action, notification.Username, notification.Path, notification.TargetPath, notification.SSHCmd)
cmd.Env = append(os.Environ(), notificationAsEnvVars(notification)...)
startTime := time.Now()
err := cmd.Run()
logger.Debug(notification.Protocol, "", "executed command %#v with arguments: %#v, %#v, %#v, %#v, %#v, elapsed: %v, error: %v",
Config.Actions.Hook, notification.Action, notification.Username, notification.Path, notification.TargetPath, notification.SSHCmd, time.Since(startTime), err)
return err
}
func notificationAsEnvVars(notification ActionNotification) []string {
return []string{
fmt.Sprintf("SFTPGO_ACTION=%v", notification.Action),
fmt.Sprintf("SFTPGO_ACTION_USERNAME=%v", notification.Username),
fmt.Sprintf("SFTPGO_ACTION_PATH=%v", notification.Path),
fmt.Sprintf("SFTPGO_ACTION_TARGET=%v", notification.TargetPath),
fmt.Sprintf("SFTPGO_ACTION_SSH_CMD=%v", notification.SSHCmd),
fmt.Sprintf("SFTPGO_ACTION_FILE_SIZE=%v", notification.FileSize),
fmt.Sprintf("SFTPGO_ACTION_FS_PROVIDER=%v", notification.FsProvider),
fmt.Sprintf("SFTPGO_ACTION_BUCKET=%v", notification.Bucket),
fmt.Sprintf("SFTPGO_ACTION_ENDPOINT=%v", notification.Endpoint),
fmt.Sprintf("SFTPGO_ACTION_STATUS=%v", notification.Status),
fmt.Sprintf("SFTPGO_ACTION_PROTOCOL=%v", notification.Protocol),
}
}
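
Since actionHandler is swappable through InitializeActionHandler, applications embedding SFTPGo can replace the default exec/HTTP hook. A minimal sketch, assuming such an embedding (the logging destination is an illustrative choice, not part of this changeset):

package main

import (
	"log"

	"github.com/drakkan/sftpgo/common"
)

type logActionHandler struct{}

// Handle satisfies common.ActionHandler by logging instead of invoking a hook.
func (h logActionHandler) Handle(notification common.ActionNotification) error {
	log.Printf("action %q by %q on %q (status %d)",
		notification.Action, notification.Username, notification.Path, notification.Status)
	return nil
}

func main() {
	// Must be called during application initialization, before any action fires.
	common.InitializeActionHandler(logActionHandler{})
}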

222
common/actions_test.go Normal file
View File

@@ -0,0 +1,222 @@
package common
import (
"errors"
"fmt"
"io/ioutil"
"os"
"os/exec"
"path/filepath"
"runtime"
"testing"
"github.com/stretchr/testify/assert"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/vfs"
)
func TestNewActionNotification(t *testing.T) {
user := &dataprovider.User{
Username: "username",
}
user.FsConfig.Provider = dataprovider.LocalFilesystemProvider
user.FsConfig.S3Config = vfs.S3FsConfig{
Bucket: "s3bucket",
Endpoint: "endpoint",
}
user.FsConfig.GCSConfig = vfs.GCSFsConfig{
Bucket: "gcsbucket",
}
user.FsConfig.AzBlobConfig = vfs.AzBlobFsConfig{
Container: "azcontainer",
SASURL: "azsasurl",
Endpoint: "azendpoint",
}
a := newActionNotification(user, operationDownload, "path", "target", "", ProtocolSFTP, 123, errors.New("fake error"))
assert.Equal(t, user.Username, a.Username)
assert.Equal(t, 0, len(a.Bucket))
assert.Equal(t, 0, len(a.Endpoint))
assert.Equal(t, 0, a.Status)
user.FsConfig.Provider = dataprovider.S3FilesystemProvider
a = newActionNotification(user, operationDownload, "path", "target", "", ProtocolSSH, 123, nil)
assert.Equal(t, "s3bucket", a.Bucket)
assert.Equal(t, "endpoint", a.Endpoint)
assert.Equal(t, 1, a.Status)
user.FsConfig.Provider = dataprovider.GCSFilesystemProvider
a = newActionNotification(user, operationDownload, "path", "target", "", ProtocolSCP, 123, ErrQuotaExceeded)
assert.Equal(t, "gcsbucket", a.Bucket)
assert.Equal(t, 0, len(a.Endpoint))
assert.Equal(t, 2, a.Status)
user.FsConfig.Provider = dataprovider.AzureBlobFilesystemProvider
a = newActionNotification(user, operationDownload, "path", "target", "", ProtocolSCP, 123, nil)
assert.Equal(t, "azcontainer", a.Bucket)
assert.Equal(t, "azsasurl", a.Endpoint)
assert.Equal(t, 1, a.Status)
user.FsConfig.AzBlobConfig.SASURL = ""
a = newActionNotification(user, operationDownload, "path", "target", "", ProtocolSCP, 123, nil)
assert.Equal(t, "azcontainer", a.Bucket)
assert.Equal(t, "azendpoint", a.Endpoint)
assert.Equal(t, 1, a.Status)
}
func TestActionHTTP(t *testing.T) {
actionsCopy := Config.Actions
Config.Actions = ProtocolActions{
ExecuteOn: []string{operationDownload},
Hook: fmt.Sprintf("http://%v", httpAddr),
}
user := &dataprovider.User{
Username: "username",
}
a := newActionNotification(user, operationDownload, "path", "target", "", ProtocolSFTP, 123, nil)
err := actionHandler.Handle(a)
assert.NoError(t, err)
Config.Actions.Hook = "http://invalid:1234"
err = actionHandler.Handle(a)
assert.Error(t, err)
Config.Actions.Hook = fmt.Sprintf("http://%v/404", httpAddr)
err = actionHandler.Handle(a)
if assert.Error(t, err) {
assert.EqualError(t, err, errUnexpectedHTTResponse.Error())
}
Config.Actions = actionsCopy
}
func TestActionCMD(t *testing.T) {
if runtime.GOOS == osWindows {
t.Skip("this test is not available on Windows")
}
actionsCopy := Config.Actions
hookCmd, err := exec.LookPath("true")
assert.NoError(t, err)
Config.Actions = ProtocolActions{
ExecuteOn: []string{operationDownload},
Hook: hookCmd,
}
user := &dataprovider.User{
Username: "username",
}
a := newActionNotification(user, operationDownload, "path", "target", "", ProtocolSFTP, 123, nil)
err = actionHandler.Handle(a)
assert.NoError(t, err)
SSHCommandActionNotification(user, "path", "target", "sha1sum", nil)
Config.Actions = actionsCopy
}
func TestWrongActions(t *testing.T) {
actionsCopy := Config.Actions
badCommand := "/bad/command"
if runtime.GOOS == osWindows {
badCommand = "C:\\bad\\command"
}
Config.Actions = ProtocolActions{
ExecuteOn: []string{operationUpload},
Hook: badCommand,
}
user := &dataprovider.User{
Username: "username",
}
a := newActionNotification(user, operationUpload, "", "", "", ProtocolSFTP, 123, nil)
err := actionHandler.Handle(a)
assert.Error(t, err, "action with bad command must fail")
a.Action = operationDelete
err = actionHandler.Handle(a)
assert.EqualError(t, err, errUnconfiguredAction.Error())
Config.Actions.Hook = "http://foo\x7f.com/"
a.Action = operationUpload
err = actionHandler.Handle(a)
assert.Error(t, err, "action with bad url must fail")
Config.Actions.Hook = ""
err = actionHandler.Handle(a)
if assert.Error(t, err) {
assert.EqualError(t, err, errNoHook.Error())
}
Config.Actions.Hook = "relative path"
err = actionHandler.Handle(a)
if assert.Error(t, err) {
assert.EqualError(t, err, fmt.Sprintf("invalid notification command %#v", Config.Actions.Hook))
}
Config.Actions = actionsCopy
}
func TestPreDeleteAction(t *testing.T) {
if runtime.GOOS == osWindows {
t.Skip("this test is not available on Windows")
}
actionsCopy := Config.Actions
hookCmd, err := exec.LookPath("true")
assert.NoError(t, err)
Config.Actions = ProtocolActions{
ExecuteOn: []string{operationPreDelete},
Hook: hookCmd,
}
homeDir := filepath.Join(os.TempDir(), "test_user")
err = os.MkdirAll(homeDir, os.ModePerm)
assert.NoError(t, err)
user := dataprovider.User{
Username: "username",
HomeDir: homeDir,
}
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
fs := vfs.NewOsFs("id", homeDir, nil)
c := NewBaseConnection("id", ProtocolSFTP, user, fs)
testfile := filepath.Join(user.HomeDir, "testfile")
err = ioutil.WriteFile(testfile, []byte("test"), os.ModePerm)
assert.NoError(t, err)
info, err := os.Stat(testfile)
assert.NoError(t, err)
err = c.RemoveFile(testfile, "testfile", info)
assert.NoError(t, err)
assert.FileExists(t, testfile)
os.RemoveAll(homeDir)
Config.Actions = actionsCopy
}
type actionHandlerStub struct {
called bool
}
func (h *actionHandlerStub) Handle(notification ActionNotification) error {
h.called = true
return nil
}
func TestInitializeActionHandler(t *testing.T) {
handler := &actionHandlerStub{}
InitializeActionHandler(handler)
t.Cleanup(func() {
InitializeActionHandler(defaultActionHandler{})
})
err := actionHandler.Handle(ActionNotification{})
assert.NoError(t, err)
assert.True(t, handler.called)
}

834
common/common.go Normal file
View File

@@ -0,0 +1,834 @@
// Package common defines code shared among file transfer packages and protocols
package common
import (
"context"
"errors"
"fmt"
"net"
"net/http"
"net/url"
"os"
"os/exec"
"path/filepath"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/pires/go-proxyproto"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/httpclient"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/metrics"
"github.com/drakkan/sftpgo/utils"
)
// constants
const (
logSender = "common"
uploadLogSender = "Upload"
downloadLogSender = "Download"
renameLogSender = "Rename"
rmdirLogSender = "Rmdir"
mkdirLogSender = "Mkdir"
symlinkLogSender = "Symlink"
removeLogSender = "Remove"
chownLogSender = "Chown"
chmodLogSender = "Chmod"
chtimesLogSender = "Chtimes"
truncateLogSender = "Truncate"
operationDownload = "download"
operationUpload = "upload"
operationDelete = "delete"
operationPreDelete = "pre-delete"
operationRename = "rename"
operationSSHCmd = "ssh_cmd"
chtimesFormat = "2006-01-02T15:04:05" // YYYY-MM-DDTHH:MM:SS
idleTimeoutCheckInterval = 3 * time.Minute
)
// Stat flags
const (
StatAttrUIDGID = 1
StatAttrPerms = 2
StatAttrTimes = 4
StatAttrSize = 8
)
// Transfer types
const (
TransferUpload = iota
TransferDownload
)
// Supported protocols
const (
ProtocolSFTP = "SFTP"
ProtocolSCP = "SCP"
ProtocolSSH = "SSH"
ProtocolFTP = "FTP"
ProtocolWebDAV = "DAV"
)
// Upload modes
const (
UploadModeStandard = iota
UploadModeAtomic
UploadModeAtomicWithResume
)
// error definitions
var (
ErrPermissionDenied = errors.New("permission denied")
ErrNotExist = errors.New("no such file or directory")
ErrOpUnsupported = errors.New("operation unsupported")
ErrGenericFailure = errors.New("failure")
ErrQuotaExceeded = errors.New("denying write due to space limit")
ErrSkipPermissionsCheck = errors.New("permission check skipped")
ErrConnectionDenied = errors.New("you are not allowed to connect")
ErrNoBinding = errors.New("no binding configured")
ErrCrtRevoked = errors.New("your certificate has been revoked")
errNoTransfer = errors.New("requested transfer not found")
errTransferMismatch = errors.New("transfer mismatch")
)
var (
// Config is the configuration for the supported protocols
Config Configuration
// Connections is the list of active connections
Connections ActiveConnections
// QuotaScans is the list of active quota scans
QuotaScans ActiveScans
idleTimeoutTicker *time.Ticker
idleTimeoutTickerDone chan bool
supportedProtocols = []string{ProtocolSFTP, ProtocolSCP, ProtocolSSH, ProtocolFTP, ProtocolWebDAV}
)
// Initialize sets the common configuration
func Initialize(c Configuration) error {
Config = c
Config.idleLoginTimeout = 2 * time.Minute
Config.idleTimeoutAsDuration = time.Duration(Config.IdleTimeout) * time.Minute
if Config.IdleTimeout > 0 {
startIdleTimeoutTicker(idleTimeoutCheckInterval)
}
Config.defender = nil
if c.DefenderConfig.Enabled {
defender, err := newInMemoryDefender(&c.DefenderConfig)
if err != nil {
return fmt.Errorf("defender initialization error: %v", err)
}
logger.Info(logSender, "", "defender initialized with config %+v", c.DefenderConfig)
Config.defender = defender
}
return nil
}
// ReloadDefender reloads the defender's block and safe lists
func ReloadDefender() error {
if Config.defender == nil {
return nil
}
return Config.defender.Reload()
}
// IsBanned returns true if the specified IP address is banned
func IsBanned(ip string) bool {
if Config.defender == nil {
return false
}
return Config.defender.IsBanned(ip)
}
// GetDefenderBanTime returns the ban time for the given IP
// or nil if the IP is not banned or the defender is disabled
func GetDefenderBanTime(ip string) *time.Time {
if Config.defender == nil {
return nil
}
return Config.defender.GetBanTime(ip)
}
// Unban removes the specified IP address from the banned ones
func Unban(ip string) bool {
if Config.defender == nil {
return false
}
return Config.defender.Unban(ip)
}
// GetDefenderScore returns the score for the given IP
func GetDefenderScore(ip string) int {
if Config.defender == nil {
return 0
}
return Config.defender.GetScore(ip)
}
// AddDefenderEvent adds the specified defender event for the given IP
func AddDefenderEvent(ip string, event HostEvent) {
if Config.defender == nil {
return
}
Config.defender.AddEvent(ip, event)
}
// the ticker cannot be started/stopped from multiple goroutines
func startIdleTimeoutTicker(duration time.Duration) {
stopIdleTimeoutTicker()
idleTimeoutTicker = time.NewTicker(duration)
idleTimeoutTickerDone = make(chan bool)
go func() {
for {
select {
case <-idleTimeoutTickerDone:
return
case <-idleTimeoutTicker.C:
Connections.checkIdles()
}
}
}()
}
func stopIdleTimeoutTicker() {
if idleTimeoutTicker != nil {
idleTimeoutTicker.Stop()
idleTimeoutTickerDone <- true
idleTimeoutTicker = nil
}
}
// ActiveTransfer defines the interface for the current active transfers
type ActiveTransfer interface {
GetID() uint64
GetType() int
GetSize() int64
GetVirtualPath() string
GetStartTime() time.Time
SignalClose()
Truncate(fsPath string, size int64) (int64, error)
GetRealFsPath(fsPath string) string
}
// ActiveConnection defines the interface for the current active connections
type ActiveConnection interface {
GetID() string
GetUsername() string
GetRemoteAddress() string
GetClientVersion() string
GetProtocol() string
GetConnectionTime() time.Time
GetLastActivity() time.Time
GetCommand() string
Disconnect() error
AddTransfer(t ActiveTransfer)
RemoveTransfer(t ActiveTransfer)
GetTransfers() []ConnectionTransfer
CloseFS() error
}
// StatAttributes defines the attributes for set stat commands
type StatAttributes struct {
Mode os.FileMode
Atime time.Time
Mtime time.Time
UID int
GID int
Flags int
Size int64
}
// ConnectionTransfer defines the transfer details to expose
type ConnectionTransfer struct {
ID uint64 `json:"-"`
OperationType string `json:"operation_type"`
StartTime int64 `json:"start_time"`
Size int64 `json:"size"`
VirtualPath string `json:"path"`
}
func (t *ConnectionTransfer) getConnectionTransferAsString() string {
result := ""
switch t.OperationType {
case operationUpload:
result += "UL "
case operationDownload:
result += "DL "
}
result += fmt.Sprintf("%#v ", t.VirtualPath)
if t.Size > 0 {
elapsed := time.Since(utils.GetTimeFromMsecSinceEpoch(t.StartTime))
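// Size is in bytes and the time delta below is in milliseconds, so
// bytes/ms is numerically equal to the SI kB/s value displayed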
speed := float64(t.Size) / float64(utils.GetTimeAsMsSinceEpoch(time.Now())-t.StartTime)
result += fmt.Sprintf("Size: %#v Elapsed: %#v Speed: \"%.1f KB/s\"", utils.ByteCountSI(t.Size),
utils.GetDurationAsString(elapsed), speed)
}
return result
}
// Configuration defines configuration parameters common to all supported protocols
type Configuration struct {
// Maximum idle timeout in minutes. If a client is idle for a time that exceeds this setting it will be disconnected.
// 0 means disabled
IdleTimeout int `json:"idle_timeout" mapstructure:"idle_timeout"`
// UploadMode 0 means standard, the files are uploaded directly to the requested path.
// 1 means atomic: the files are uploaded to a temporary path and renamed to the requested path
// when the client ends the upload. Atomic mode avoids problems such as a web server that
// serves partial files when the files are being uploaded.
// In atomic mode if there is an upload error the temporary file is deleted and so the requested
// upload path will not contain a partial file.
// 2 means atomic with resume support: as atomic but if there is an upload error the temporary
// file is renamed to the requested path and not deleted, this way a client can reconnect and resume
// the upload.
UploadMode int `json:"upload_mode" mapstructure:"upload_mode"`
// Actions to execute for SFTP file operations and SSH commands
Actions ProtocolActions `json:"actions" mapstructure:"actions"`
// SetstatMode 0 means "normal mode": requests for changing permissions and owner/group are executed.
// 1 means "ignore mode": requests for changing permissions and owner/group are silently ignored.
// 2 means "ignore mode for cloud fs": requests for changing permissions and owner/group/time are
// silently ignored for cloud based filesystem such as S3, GCS, Azure Blob
SetstatMode int `json:"setstat_mode" mapstructure:"setstat_mode"`
// Support for HAProxy PROXY protocol.
// If you are running SFTPGo behind a proxy server such as HAProxy, AWS ELB or NGINX, you can enable
// the proxy protocol. It provides a convenient way to safely transport connection information
// such as a client's address across multiple layers of NAT or TCP proxies to get the real
// client IP address instead of the proxy IP. Both protocol versions 1 and 2 are supported.
// - 0 means disabled
// - 1 means proxy protocol enabled. Proxy header will be used and requests without proxy header will be accepted.
// - 2 means proxy protocol required. Proxy header will be used and requests without proxy header will be rejected.
// If the proxy protocol is enabled in SFTPGo then you have to enable the protocol in your proxy configuration too,
// for example for HAProxy add "send-proxy" or "send-proxy-v2" to each server configuration line.
ProxyProtocol int `json:"proxy_protocol" mapstructure:"proxy_protocol"`
// List of IP addresses and IP ranges allowed to send the proxy header.
// If proxy protocol is set to 1 and we receive a proxy header from an IP that is not in the list then the
// connection will be accepted and the header will be ignored.
// If proxy protocol is set to 2 and we receive a proxy header from an IP that is not in the list then the
// connection will be rejected.
ProxyAllowed []string `json:"proxy_allowed" mapstructure:"proxy_allowed"`
// Absolute path to an external program or an HTTP URL to invoke after a user connects
// and before they try to log in. It allows you to reject the connection based on the
// source IP address. Leave empty to disable.
PostConnectHook string `json:"post_connect_hook" mapstructure:"post_connect_hook"`
// Maximum number of concurrent client connections. 0 means unlimited
MaxTotalConnections int `json:"max_total_connections" mapstructure:"max_total_connections"`
// Defender configuration
DefenderConfig DefenderConfig `json:"defender" mapstructure:"defender"`
idleTimeoutAsDuration time.Duration
idleLoginTimeout time.Duration
defender Defender
}
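Tying the struct to the configuration file, a hedged example of the matching JSON section follows; the enclosing "common" key and every value are illustrative assumptions derived from the json tags above, not taken from this changeset:

{
  "common": {
    "idle_timeout": 15,
    "upload_mode": 1,
    "actions": {
      "execute_on": ["upload", "delete"],
      "hook": "http://127.0.0.1:8000/hook"
    },
    "setstat_mode": 0,
    "proxy_protocol": 1,
    "proxy_allowed": ["192.168.1.0/24"],
    "post_connect_hook": "",
    "max_total_connections": 0
  }
}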
// IsAtomicUploadEnabled returns true if atomic upload is enabled
func (c *Configuration) IsAtomicUploadEnabled() bool {
return c.UploadMode == UploadModeAtomic || c.UploadMode == UploadModeAtomicWithResume
}
// GetProxyListener returns a wrapper for the given listener that supports the
// HAProxy Proxy Protocol or nil if the proxy protocol is not configured
func (c *Configuration) GetProxyListener(listener net.Listener) (*proxyproto.Listener, error) {
var proxyListener *proxyproto.Listener
var err error
if c.ProxyProtocol > 0 {
var policyFunc func(upstream net.Addr) (proxyproto.Policy, error)
if c.ProxyProtocol == 1 && len(c.ProxyAllowed) > 0 {
policyFunc, err = proxyproto.LaxWhiteListPolicy(c.ProxyAllowed)
if err != nil {
return nil, err
}
}
if c.ProxyProtocol == 2 {
if len(c.ProxyAllowed) == 0 {
policyFunc = func(upstream net.Addr) (proxyproto.Policy, error) {
return proxyproto.REQUIRE, nil
}
} else {
policyFunc, err = proxyproto.StrictWhiteListPolicy(c.ProxyAllowed)
if err != nil {
return nil, err
}
}
}
proxyListener = &proxyproto.Listener{
Listener: listener,
Policy: policyFunc,
}
}
return proxyListener, nil
}
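A minimal usage sketch for GetProxyListener (not part of this changeset; the HTTP server and listen address are illustrative stand-ins for any net.Listener consumer):

package main

import (
	"net"
	"net/http"

	"github.com/drakkan/sftpgo/common"
)

// serveWithOptionalProxy wraps the listener only when the proxy protocol is enabled.
func serveWithOptionalProxy(srv *http.Server, addr string) error {
	listener, err := net.Listen("tcp", addr)
	if err != nil {
		return err
	}
	proxyListener, err := common.Config.GetProxyListener(listener)
	if err != nil {
		return err
	}
	if proxyListener != nil {
		// proxy protocol configured: accept connections through the wrapper
		return srv.Serve(proxyListener)
	}
	return srv.Serve(listener)
}

func main() {
	_ = serveWithOptionalProxy(&http.Server{}, "127.0.0.1:8080")
}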
// ExecutePostConnectHook executes the post connect hook if defined
func (c *Configuration) ExecutePostConnectHook(ipAddr, protocol string) error {
if c.PostConnectHook == "" {
return nil
}
if strings.HasPrefix(c.PostConnectHook, "http") {
url, err := url.Parse(c.PostConnectHook)
if err != nil {
logger.Warn(protocol, "", "Login from ip %#v denied, invalid post connect hook %#v: %v",
ipAddr, c.PostConnectHook, err)
return err
}
httpClient := httpclient.GetHTTPClient()
q := url.Query()
q.Add("ip", ipAddr)
q.Add("protocol", protocol)
url.RawQuery = q.Encode()
resp, err := httpClient.Get(url.String())
if err != nil {
logger.Warn(protocol, "", "Login from ip %#v denied, error executing post connect hook: %v", ipAddr, err)
return err
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
logger.Warn(protocol, "", "Login from ip %#v denied, post connect hook response code: %v", ipAddr, resp.StatusCode)
return errUnexpectedHTTResponse
}
return nil
}
if !filepath.IsAbs(c.PostConnectHook) {
err := fmt.Errorf("invalid post connect hook %#v", c.PostConnectHook)
logger.Warn(protocol, "", "Login from ip %#v denied: %v", ipAddr, err)
return err
}
ctx, cancel := context.WithTimeout(context.Background(), 20*time.Second)
defer cancel()
cmd := exec.CommandContext(ctx, c.PostConnectHook)
cmd.Env = append(os.Environ(),
fmt.Sprintf("SFTPGO_CONNECTION_IP=%v", ipAddr),
fmt.Sprintf("SFTPGO_CONNECTION_PROTOCOL=%v", protocol))
err := cmd.Run()
if err != nil {
logger.Warn(protocol, "", "Login from ip %#v denied, connect hook error: %v", ipAddr, err)
}
return err
}
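Seen from the hook side, an HTTP post connect hook receives a GET request with "ip" and "protocol" query parameters and must answer 200 to let the connection proceed. A sketch of such an endpoint, assuming an illustrative deny rule and listen address (neither is part of this changeset):

package main

import (
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		ip := r.URL.Query().Get("ip")
		protocol := r.URL.Query().Get("protocol")
		if ip == "203.0.113.99" { // example deny rule, address is illustrative
			http.Error(w, "denied", http.StatusForbidden)
			return
		}
		log.Printf("allowing %v connection from %v", protocol, ip)
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:8000", nil))
}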
// SSHConnection defines an ssh connection.
// Each SSH connection can open several channels for SFTP or SSH commands
type SSHConnection struct {
id string
conn net.Conn
lastActivity int64
}
// NewSSHConnection returns a new SSHConnection
func NewSSHConnection(id string, conn net.Conn) *SSHConnection {
return &SSHConnection{
id: id,
conn: conn,
lastActivity: time.Now().UnixNano(),
}
}
// GetID returns the ID for this SSHConnection
func (c *SSHConnection) GetID() string {
return c.id
}
// UpdateLastActivity updates last activity for this connection
func (c *SSHConnection) UpdateLastActivity() {
atomic.StoreInt64(&c.lastActivity, time.Now().UnixNano())
}
// GetLastActivity returns the last connection activity
func (c *SSHConnection) GetLastActivity() time.Time {
return time.Unix(0, atomic.LoadInt64(&c.lastActivity))
}
// Close closes the underlying network connection
func (c *SSHConnection) Close() error {
return c.conn.Close()
}
// ActiveConnections holds the current active connections with the associated transfers
type ActiveConnections struct {
sync.RWMutex
connections []ActiveConnection
sshConnections []*SSHConnection
}
// GetActiveSessions returns the number of active sessions for the given username.
// We return the open sessions for any protocol
func (conns *ActiveConnections) GetActiveSessions(username string) int {
conns.RLock()
defer conns.RUnlock()
numSessions := 0
for _, c := range conns.connections {
if c.GetUsername() == username {
numSessions++
}
}
return numSessions
}
// Add adds a new connection to the active ones
func (conns *ActiveConnections) Add(c ActiveConnection) {
conns.Lock()
defer conns.Unlock()
conns.connections = append(conns.connections, c)
metrics.UpdateActiveConnectionsSize(len(conns.connections))
logger.Debug(c.GetProtocol(), c.GetID(), "connection added, num open connections: %v", len(conns.connections))
}
// Swap replaces an existing connection with the given one.
// This method is useful if you have to change some connection details,
// for example for FTP it is used to update the connection once the user
// authenticates.
func (conns *ActiveConnections) Swap(c ActiveConnection) error {
conns.Lock()
defer conns.Unlock()
for idx, conn := range conns.connections {
if conn.GetID() == c.GetID() {
conn = nil
conns.connections[idx] = c
return nil
}
}
return errors.New("connection to swap not found")
}
// Remove removes a connection from the active ones
func (conns *ActiveConnections) Remove(connectionID string) {
conns.Lock()
defer conns.Unlock()
for idx, conn := range conns.connections {
if conn.GetID() == connectionID {
err := conn.CloseFS()
lastIdx := len(conns.connections) - 1
conns.connections[idx] = conns.connections[lastIdx]
conns.connections[lastIdx] = nil
conns.connections = conns.connections[:lastIdx]
metrics.UpdateActiveConnectionsSize(lastIdx)
logger.Debug(conn.GetProtocol(), conn.GetID(), "connection removed, close fs error: %v, num open connections: %v",
err, lastIdx)
return
}
}
logger.Warn(logSender, "", "connection id %#v to remove not found!", connectionID)
}
// Close closes an active connection.
// It returns true on success
func (conns *ActiveConnections) Close(connectionID string) bool {
conns.RLock()
result := false
for _, c := range conns.connections {
if c.GetID() == connectionID {
defer func(conn ActiveConnection) {
err := conn.Disconnect()
logger.Debug(conn.GetProtocol(), conn.GetID(), "close connection requested, close err: %v", err)
}(c)
result = true
break
}
}
conns.RUnlock()
return result
}
// AddSSHConnection adds a new ssh connection to the active ones
func (conns *ActiveConnections) AddSSHConnection(c *SSHConnection) {
conns.Lock()
defer conns.Unlock()
conns.sshConnections = append(conns.sshConnections, c)
logger.Debug(logSender, c.GetID(), "ssh connection added, num open connections: %v", len(conns.sshConnections))
}
// RemoveSSHConnection removes a connection from the active ones
func (conns *ActiveConnections) RemoveSSHConnection(connectionID string) {
conns.Lock()
defer conns.Unlock()
for idx, conn := range conns.sshConnections {
if conn.GetID() == connectionID {
lastIdx := len(conns.sshConnections) - 1
conns.sshConnections[idx] = conns.sshConnections[lastIdx]
conns.sshConnections[lastIdx] = nil
conns.sshConnections = conns.sshConnections[:lastIdx]
logger.Debug(logSender, conn.GetID(), "ssh connection removed, num open ssh connections: %v", lastIdx)
return
}
}
logger.Warn(logSender, "", "ssh connection to remove with id %#v not found!", connectionID)
}
func (conns *ActiveConnections) checkIdles() {
conns.RLock()
for _, sshConn := range conns.sshConnections {
idleTime := time.Since(sshConn.GetLastActivity())
if idleTime > Config.idleTimeoutAsDuration {
// we close an ssh connection if it has no active connections associated
idToMatch := fmt.Sprintf("_%v_", sshConn.GetID())
toClose := true
for _, conn := range conns.connections {
if strings.Contains(conn.GetID(), idToMatch) {
toClose = false
break
}
}
if toClose {
defer func(c *SSHConnection) {
err := c.Close()
logger.Debug(logSender, c.GetID(), "close idle SSH connection, idle time: %v, close err: %v",
time.Since(c.GetLastActivity()), err)
}(sshConn)
}
}
}
for _, c := range conns.connections {
idleTime := time.Since(c.GetLastActivity())
isUnauthenticatedFTPUser := (c.GetProtocol() == ProtocolFTP && c.GetUsername() == "")
if idleTime > Config.idleTimeoutAsDuration || (isUnauthenticatedFTPUser && idleTime > Config.idleLoginTimeout) {
defer func(conn ActiveConnection, isFTPNoAuth bool) {
err := conn.Disconnect()
logger.Debug(conn.GetProtocol(), conn.GetID(), "close idle connection, idle time: %v, username: %#v close err: %v",
time.Since(conn.GetLastActivity()), conn.GetUsername(), err)
if isFTPNoAuth {
ip := utils.GetIPFromRemoteAddress(c.GetRemoteAddress())
logger.ConnectionFailedLog("", ip, dataprovider.LoginMethodNoAuthTryed, c.GetProtocol(), "client idle")
metrics.AddNoAuthTryed()
AddDefenderEvent(ip, HostEventNoLoginTried)
dataprovider.ExecutePostLoginHook(&dataprovider.User{}, dataprovider.LoginMethodNoAuthTryed, ip, c.GetProtocol(),
dataprovider.ErrNoAuthTryed)
}
}(c, isUnauthenticatedFTPUser)
}
}
conns.RUnlock()
}
// IsNewConnectionAllowed returns false if the maximum number of concurrent allowed connections is exceeded
func (conns *ActiveConnections) IsNewConnectionAllowed() bool {
if Config.MaxTotalConnections == 0 {
return true
}
conns.RLock()
defer conns.RUnlock()
return len(conns.connections) < Config.MaxTotalConnections
}
// GetStats returns stats for active connections
func (conns *ActiveConnections) GetStats() []ConnectionStatus {
conns.RLock()
defer conns.RUnlock()
stats := make([]ConnectionStatus, 0, len(conns.connections))
for _, c := range conns.connections {
stat := ConnectionStatus{
Username: c.GetUsername(),
ConnectionID: c.GetID(),
ClientVersion: c.GetClientVersion(),
RemoteAddress: c.GetRemoteAddress(),
ConnectionTime: utils.GetTimeAsMsSinceEpoch(c.GetConnectionTime()),
LastActivity: utils.GetTimeAsMsSinceEpoch(c.GetLastActivity()),
Protocol: c.GetProtocol(),
Command: c.GetCommand(),
Transfers: c.GetTransfers(),
}
stats = append(stats, stat)
}
return stats
}
// ConnectionStatus returns the status for an active connection
type ConnectionStatus struct {
// Logged in username
Username string `json:"username"`
// Unique identifier for the connection
ConnectionID string `json:"connection_id"`
// client's version string
ClientVersion string `json:"client_version,omitempty"`
// Remote address for this connection
RemoteAddress string `json:"remote_address"`
// Connection time as unix timestamp in milliseconds
ConnectionTime int64 `json:"connection_time"`
// Last activity as unix timestamp in milliseconds
LastActivity int64 `json:"last_activity"`
// Protocol for this connection
Protocol string `json:"protocol"`
// active uploads/downloads
Transfers []ConnectionTransfer `json:"active_transfers,omitempty"`
// SSH command or WebDAV method
Command string `json:"command,omitempty"`
}
// GetConnectionDuration returns the connection duration as string
func (c ConnectionStatus) GetConnectionDuration() string {
elapsed := time.Since(utils.GetTimeFromMsecSinceEpoch(c.ConnectionTime))
return utils.GetDurationAsString(elapsed)
}
// GetConnectionInfo returns connection info.
// Protocol,Client Version and RemoteAddress are returned.
func (c ConnectionStatus) GetConnectionInfo() string {
var result strings.Builder
result.WriteString(fmt.Sprintf("%v. Client: %#v From: %#v", c.Protocol, c.ClientVersion, c.RemoteAddress))
if c.Command == "" {
return result.String()
}
switch c.Protocol {
case ProtocolSSH, ProtocolFTP:
result.WriteString(fmt.Sprintf(". Command: %#v", c.Command))
case ProtocolWebDAV:
result.WriteString(fmt.Sprintf(". Method: %#v", c.Command))
}
return result.String()
}
// GetTransfersAsString returns the active transfers as string
func (c ConnectionStatus) GetTransfersAsString() string {
result := ""
for _, t := range c.Transfers {
if len(result) > 0 {
result += ". "
}
result += t.getConnectionTransferAsString()
}
return result
}
// ActiveQuotaScan defines an active quota scan for a user home dir
type ActiveQuotaScan struct {
// Username to which the quota scan refers
Username string `json:"username"`
// quota scan start time as unix timestamp in milliseconds
StartTime int64 `json:"start_time"`
}
// ActiveVirtualFolderQuotaScan defines an active quota scan for a virtual folder
type ActiveVirtualFolderQuotaScan struct {
// folder name to which the quota scan refers
Name string `json:"name"`
// quota scan start time as unix timestamp in milliseconds
StartTime int64 `json:"start_time"`
}
// ActiveScans holds the active quota scans
type ActiveScans struct {
sync.RWMutex
UserHomeScans []ActiveQuotaScan
FolderScans []ActiveVirtualFolderQuotaScan
}
// GetUsersQuotaScans returns the active quota scans for users home directories
func (s *ActiveScans) GetUsersQuotaScans() []ActiveQuotaScan {
s.RLock()
defer s.RUnlock()
scans := make([]ActiveQuotaScan, len(s.UserHomeScans))
copy(scans, s.UserHomeScans)
return scans
}
// AddUserQuotaScan adds a user to the ones with active quota scans.
// Returns false if the user has a quota scan already running
func (s *ActiveScans) AddUserQuotaScan(username string) bool {
s.Lock()
defer s.Unlock()
for _, scan := range s.UserHomeScans {
if scan.Username == username {
return false
}
}
s.UserHomeScans = append(s.UserHomeScans, ActiveQuotaScan{
Username: username,
StartTime: utils.GetTimeAsMsSinceEpoch(time.Now()),
})
return true
}
// RemoveUserQuotaScan removes a user from the ones with active quota scans.
// Returns false if the user has no active quota scans
func (s *ActiveScans) RemoveUserQuotaScan(username string) bool {
s.Lock()
defer s.Unlock()
indexToRemove := -1
for i, scan := range s.UserHomeScans {
if scan.Username == username {
indexToRemove = i
break
}
}
if indexToRemove >= 0 {
s.UserHomeScans[indexToRemove] = s.UserHomeScans[len(s.UserHomeScans)-1]
s.UserHomeScans = s.UserHomeScans[:len(s.UserHomeScans)-1]
return true
}
return false
}
// GetVFoldersQuotaScans returns the active quota scans for virtual folders
func (s *ActiveScans) GetVFoldersQuotaScans() []ActiveVirtualFolderQuotaScan {
s.RLock()
defer s.RUnlock()
scans := make([]ActiveVirtualFolderQuotaScan, len(s.FolderScans))
copy(scans, s.FolderScans)
return scans
}
// AddVFolderQuotaScan adds a virtual folder to the ones with active quota scans.
// Returns false if the folder has a quota scan already running
func (s *ActiveScans) AddVFolderQuotaScan(folderName string) bool {
s.Lock()
defer s.Unlock()
for _, scan := range s.FolderScans {
if scan.Name == folderName {
return false
}
}
s.FolderScans = append(s.FolderScans, ActiveVirtualFolderQuotaScan{
Name: folderName,
StartTime: utils.GetTimeAsMsSinceEpoch(time.Now()),
})
return true
}
// RemoveVFolderQuotaScan removes a folder from the ones with active quota scans.
// Returns false if the folder has no active quota scans
func (s *ActiveScans) RemoveVFolderQuotaScan(folderName string) bool {
s.Lock()
defer s.Unlock()
indexToRemove := -1
for i, scan := range s.FolderScans {
if scan.Name == folderName {
indexToRemove = i
break
}
}
if indexToRemove >= 0 {
s.FolderScans[indexToRemove] = s.FolderScans[len(s.FolderScans)-1]
s.FolderScans = s.FolderScans[:len(s.FolderScans)-1]
return true
}
return false
}
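
The Add*/Remove* pairs above are designed to gate at most one scan per user or folder. A usage sketch (not part of this changeset; doScan stands in for the real directory walk):

package main

import (
	"errors"

	"github.com/drakkan/sftpgo/common"
)

// runUserQuotaScan runs doScan only if no other scan is active for the user.
func runUserQuotaScan(username string, doScan func(string) error) error {
	if !common.QuotaScans.AddUserQuotaScan(username) {
		return errors.New("a quota scan is already running for this user")
	}
	defer common.QuotaScans.RemoveUserQuotaScan(username)
	return doScan(username)
}

func main() {
	_ = runUserQuotaScan("user1", func(string) error { return nil })
}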

652
common/common_test.go Normal file
View File

@@ -0,0 +1,652 @@
package common
import (
"fmt"
"net"
"net/http"
"os"
"os/exec"
"path/filepath"
"runtime"
"strings"
"sync/atomic"
"testing"
"time"
"github.com/rs/zerolog"
"github.com/spf13/viper"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/httpclient"
"github.com/drakkan/sftpgo/kms"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/vfs"
)
const (
logSenderTest = "common_test"
httpAddr = "127.0.0.1:9999"
httpProxyAddr = "127.0.0.1:7777"
configDir = ".."
osWindows = "windows"
userTestUsername = "common_test_username"
userTestPwd = "common_test_pwd"
)
type providerConf struct {
Config dataprovider.Config `json:"data_provider" mapstructure:"data_provider"`
}
type fakeConnection struct {
*BaseConnection
command string
}
func (c *fakeConnection) AddUser(user dataprovider.User) error {
fs, err := user.GetFilesystem(c.GetID())
if err != nil {
return err
}
c.BaseConnection.User = user
c.BaseConnection.Fs = fs
return nil
}
func (c *fakeConnection) Disconnect() error {
Connections.Remove(c.GetID())
return nil
}
func (c *fakeConnection) GetClientVersion() string {
return ""
}
func (c *fakeConnection) GetCommand() string {
return c.command
}
func (c *fakeConnection) GetRemoteAddress() string {
return ""
}
type customNetConn struct {
net.Conn
id string
isClosed bool
}
func (c *customNetConn) Close() error {
Connections.RemoveSSHConnection(c.id)
c.isClosed = true
return c.Conn.Close()
}
func TestMain(m *testing.M) {
logfilePath := "common_test.log"
logger.InitLogger(logfilePath, 5, 1, 28, false, zerolog.DebugLevel)
viper.SetEnvPrefix("sftpgo")
replacer := strings.NewReplacer(".", "__")
viper.SetEnvKeyReplacer(replacer)
viper.SetConfigName("sftpgo")
viper.AutomaticEnv()
viper.AllowEmptyEnv(true)
driver, err := initializeDataprovider(-1)
if err != nil {
logger.WarnToConsole("error initializing data provider: %v", err)
os.Exit(1)
}
logger.InfoToConsole("Starting COMMON tests, provider: %v", driver)
err = Initialize(Configuration{})
if err != nil {
logger.WarnToConsole("error initializing common: %v", err)
os.Exit(1)
}
httpConfig := httpclient.Config{
Timeout: 5,
}
httpConfig.Initialize(configDir)
go func() {
// start a test HTTP server to receive action notifications
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
fmt.Fprintf(w, "OK\n")
})
http.HandleFunc("/404", func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusNotFound)
fmt.Fprintf(w, "Not found\n")
})
if err := http.ListenAndServe(httpAddr, nil); err != nil {
logger.ErrorToConsole("could not start HTTP notification server: %v", err)
os.Exit(1)
}
}()
go func() {
Config.ProxyProtocol = 2
listener, err := net.Listen("tcp", httpProxyAddr)
if err != nil {
logger.ErrorToConsole("error creating listener for proxy protocol server: %v", err)
os.Exit(1)
}
proxyListener, err := Config.GetProxyListener(listener)
if err != nil {
logger.ErrorToConsole("error creating proxy protocol listener: %v", err)
os.Exit(1)
}
Config.ProxyProtocol = 0
s := &http.Server{}
if err := s.Serve(proxyListener); err != nil {
logger.ErrorToConsole("could not start HTTP proxy protocol server: %v", err)
os.Exit(1)
}
}()
waitTCPListening(httpAddr)
waitTCPListening(httpProxyAddr)
exitCode := m.Run()
os.Remove(logfilePath) //nolint:errcheck
os.Exit(exitCode)
}
func waitTCPListening(address string) {
for {
conn, err := net.Dial("tcp", address)
if err != nil {
logger.WarnToConsole("tcp server %v not listening: %v\n", address, err)
time.Sleep(100 * time.Millisecond)
continue
}
logger.InfoToConsole("tcp server %v now listening\n", address)
conn.Close()
break
}
}
func initializeDataprovider(trackQuota int) (string, error) {
configDir := ".."
viper.AddConfigPath(configDir)
if err := viper.ReadInConfig(); err != nil {
return "", err
}
var cfg providerConf
if err := viper.Unmarshal(&cfg); err != nil {
return "", err
}
if trackQuota >= 0 && trackQuota <= 2 {
cfg.Config.TrackQuota = trackQuota
}
return cfg.Config.Driver, dataprovider.Initialize(cfg.Config, configDir, true)
}
func closeDataprovider() error {
return dataprovider.Close()
}
func TestSSHConnections(t *testing.T) {
conn1, conn2 := net.Pipe()
now := time.Now()
sshConn1 := NewSSHConnection("id1", conn1)
sshConn2 := NewSSHConnection("id2", conn2)
sshConn3 := NewSSHConnection("id3", conn2)
assert.Equal(t, "id1", sshConn1.GetID())
assert.Equal(t, "id2", sshConn2.GetID())
assert.Equal(t, "id3", sshConn3.GetID())
sshConn1.UpdateLastActivity()
assert.GreaterOrEqual(t, sshConn1.GetLastActivity().UnixNano(), now.UnixNano())
Connections.AddSSHConnection(sshConn1)
Connections.AddSSHConnection(sshConn2)
Connections.AddSSHConnection(sshConn3)
Connections.RLock()
assert.Len(t, Connections.sshConnections, 3)
Connections.RUnlock()
Connections.RemoveSSHConnection(sshConn1.id)
Connections.RLock()
assert.Len(t, Connections.sshConnections, 2)
assert.Equal(t, sshConn3.id, Connections.sshConnections[0].id)
assert.Equal(t, sshConn2.id, Connections.sshConnections[1].id)
Connections.RUnlock()
Connections.RemoveSSHConnection(sshConn1.id)
Connections.RLock()
assert.Len(t, Connections.sshConnections, 2)
assert.Equal(t, sshConn3.id, Connections.sshConnections[0].id)
assert.Equal(t, sshConn2.id, Connections.sshConnections[1].id)
Connections.RUnlock()
Connections.RemoveSSHConnection(sshConn2.id)
Connections.RLock()
assert.Len(t, Connections.sshConnections, 1)
assert.Equal(t, sshConn3.id, Connections.sshConnections[0].id)
Connections.RUnlock()
Connections.RemoveSSHConnection(sshConn3.id)
Connections.RLock()
assert.Len(t, Connections.sshConnections, 0)
Connections.RUnlock()
assert.NoError(t, sshConn1.Close())
assert.NoError(t, sshConn2.Close())
assert.NoError(t, sshConn3.Close())
}
func TestDefenderIntegration(t *testing.T) {
// by default defender is nil
configCopy := Config
ip := "127.1.1.1"
assert.Nil(t, ReloadDefender())
AddDefenderEvent(ip, HostEventNoLoginTried)
assert.False(t, IsBanned(ip))
assert.Nil(t, GetDefenderBanTime(ip))
assert.False(t, Unban(ip))
assert.Equal(t, 0, GetDefenderScore(ip))
Config.DefenderConfig = DefenderConfig{
Enabled: true,
BanTime: 10,
BanTimeIncrement: 50,
Threshold: 0,
ScoreInvalid: 2,
ScoreValid: 1,
ObservationTime: 15,
EntriesSoftLimit: 100,
EntriesHardLimit: 150,
}
err := Initialize(Config)
assert.Error(t, err)
Config.DefenderConfig.Threshold = 3
err = Initialize(Config)
assert.NoError(t, err)
assert.Nil(t, ReloadDefender())
AddDefenderEvent(ip, HostEventNoLoginTried)
assert.False(t, IsBanned(ip))
assert.Equal(t, 2, GetDefenderScore(ip))
assert.False(t, Unban(ip))
assert.Nil(t, GetDefenderBanTime(ip))
AddDefenderEvent(ip, HostEventLoginFailed)
assert.True(t, IsBanned(ip))
assert.Equal(t, 0, GetDefenderScore(ip))
assert.NotNil(t, GetDefenderBanTime(ip))
assert.True(t, Unban(ip))
assert.Nil(t, GetDefenderBanTime(ip))
assert.False(t, Unban(ip))
Config = configCopy
}
func TestMaxConnections(t *testing.T) {
oldValue := Config.MaxTotalConnections
Config.MaxTotalConnections = 1
assert.True(t, Connections.IsNewConnectionAllowed())
c := NewBaseConnection("id", ProtocolSFTP, dataprovider.User{}, nil)
fakeConn := &fakeConnection{
BaseConnection: c,
}
Connections.Add(fakeConn)
assert.Len(t, Connections.GetStats(), 1)
assert.False(t, Connections.IsNewConnectionAllowed())
res := Connections.Close(fakeConn.GetID())
assert.True(t, res)
assert.Eventually(t, func() bool { return len(Connections.GetStats()) == 0 }, 300*time.Millisecond, 50*time.Millisecond)
Config.MaxTotalConnections = oldValue
}
func TestIdleConnections(t *testing.T) {
configCopy := Config
Config.IdleTimeout = 1
err := Initialize(Config)
assert.NoError(t, err)
conn1, conn2 := net.Pipe()
customConn1 := &customNetConn{
Conn: conn1,
id: "id1",
}
customConn2 := &customNetConn{
Conn: conn2,
id: "id2",
}
sshConn1 := NewSSHConnection(customConn1.id, customConn1)
sshConn2 := NewSSHConnection(customConn2.id, customConn2)
username := "test_user"
user := dataprovider.User{
Username: username,
}
c := NewBaseConnection(sshConn1.id+"_1", ProtocolSFTP, user, nil)
c.lastActivity = time.Now().Add(-24 * time.Hour).UnixNano()
fakeConn := &fakeConnection{
BaseConnection: c,
}
// both SSH connections are expired but they should be removed only
// if they have no associated connection
sshConn1.lastActivity = c.lastActivity
sshConn2.lastActivity = c.lastActivity
Connections.AddSSHConnection(sshConn1)
Connections.Add(fakeConn)
assert.Equal(t, Connections.GetActiveSessions(username), 1)
c = NewBaseConnection(sshConn2.id+"_1", ProtocolSSH, user, nil)
fakeConn = &fakeConnection{
BaseConnection: c,
}
Connections.AddSSHConnection(sshConn2)
Connections.Add(fakeConn)
assert.Equal(t, Connections.GetActiveSessions(username), 2)
cFTP := NewBaseConnection("id2", ProtocolFTP, dataprovider.User{}, nil)
cFTP.lastActivity = time.Now().UnixNano()
fakeConn = &fakeConnection{
BaseConnection: cFTP,
}
Connections.Add(fakeConn)
assert.Equal(t, Connections.GetActiveSessions(username), 2)
assert.Len(t, Connections.GetStats(), 3)
Connections.RLock()
assert.Len(t, Connections.sshConnections, 2)
Connections.RUnlock()
startIdleTimeoutTicker(100 * time.Millisecond)
assert.Eventually(t, func() bool { return Connections.GetActiveSessions(username) == 1 }, 1*time.Second, 200*time.Millisecond)
assert.Eventually(t, func() bool {
Connections.RLock()
defer Connections.RUnlock()
return len(Connections.sshConnections) == 1
}, 1*time.Second, 200*time.Millisecond)
stopIdleTimeoutTicker()
assert.Len(t, Connections.GetStats(), 2)
c.lastActivity = time.Now().Add(-24 * time.Hour).UnixNano()
cFTP.lastActivity = time.Now().Add(-24 * time.Hour).UnixNano()
sshConn2.lastActivity = c.lastActivity
startIdleTimeoutTicker(100 * time.Millisecond)
assert.Eventually(t, func() bool { return len(Connections.GetStats()) == 0 }, 1*time.Second, 200*time.Millisecond)
assert.Eventually(t, func() bool {
Connections.RLock()
defer Connections.RUnlock()
return len(Connections.sshConnections) == 0
}, 1*time.Second, 200*time.Millisecond)
stopIdleTimeoutTicker()
assert.True(t, customConn1.isClosed)
assert.True(t, customConn2.isClosed)
Config = configCopy
}
func TestCloseConnection(t *testing.T) {
c := NewBaseConnection("id", ProtocolSFTP, dataprovider.User{}, nil)
fakeConn := &fakeConnection{
BaseConnection: c,
}
assert.True(t, Connections.IsNewConnectionAllowed())
Connections.Add(fakeConn)
assert.Len(t, Connections.GetStats(), 1)
res := Connections.Close(fakeConn.GetID())
assert.True(t, res)
assert.Eventually(t, func() bool { return len(Connections.GetStats()) == 0 }, 300*time.Millisecond, 50*time.Millisecond)
res = Connections.Close(fakeConn.GetID())
assert.False(t, res)
Connections.Remove(fakeConn.GetID())
}
func TestSwapConnection(t *testing.T) {
c := NewBaseConnection("id", ProtocolFTP, dataprovider.User{}, nil)
fakeConn := &fakeConnection{
BaseConnection: c,
}
Connections.Add(fakeConn)
if assert.Len(t, Connections.GetStats(), 1) {
assert.Equal(t, "", Connections.GetStats()[0].Username)
}
c = NewBaseConnection("id", ProtocolFTP, dataprovider.User{
Username: userTestUsername,
}, nil)
fakeConn = &fakeConnection{
BaseConnection: c,
}
err := Connections.Swap(fakeConn)
assert.NoError(t, err)
if assert.Len(t, Connections.GetStats(), 1) {
assert.Equal(t, userTestUsername, Connections.GetStats()[0].Username)
}
res := Connections.Close(fakeConn.GetID())
assert.True(t, res)
assert.Eventually(t, func() bool { return len(Connections.GetStats()) == 0 }, 300*time.Millisecond, 50*time.Millisecond)
err = Connections.Swap(fakeConn)
assert.Error(t, err)
}
func TestAtomicUpload(t *testing.T) {
configCopy := Config
Config.UploadMode = UploadModeStandard
assert.False(t, Config.IsAtomicUploadEnabled())
Config.UploadMode = UploadModeAtomic
assert.True(t, Config.IsAtomicUploadEnabled())
Config.UploadMode = UploadModeAtomicWithResume
assert.True(t, Config.IsAtomicUploadEnabled())
Config = configCopy
}
func TestConnectionStatus(t *testing.T) {
username := "test_user"
user := dataprovider.User{
Username: username,
}
fs := vfs.NewOsFs("", os.TempDir(), nil)
c1 := NewBaseConnection("id1", ProtocolSFTP, user, fs)
fakeConn1 := &fakeConnection{
BaseConnection: c1,
}
t1 := NewBaseTransfer(nil, c1, nil, "/p1", "/r1", TransferUpload, 0, 0, 0, true, fs)
t1.BytesReceived = 123
t2 := NewBaseTransfer(nil, c1, nil, "/p2", "/r2", TransferDownload, 0, 0, 0, true, fs)
t2.BytesSent = 456
c2 := NewBaseConnection("id2", ProtocolSSH, user, nil)
fakeConn2 := &fakeConnection{
BaseConnection: c2,
command: "md5sum",
}
c3 := NewBaseConnection("id3", ProtocolWebDAV, user, nil)
fakeConn3 := &fakeConnection{
BaseConnection: c3,
command: "PROPFIND",
}
t3 := NewBaseTransfer(nil, c3, nil, "/p2", "/r2", TransferDownload, 0, 0, 0, true, fs)
Connections.Add(fakeConn1)
Connections.Add(fakeConn2)
Connections.Add(fakeConn3)
stats := Connections.GetStats()
assert.Len(t, stats, 3)
for _, stat := range stats {
assert.Equal(t, stat.Username, username)
assert.True(t, strings.HasPrefix(stat.GetConnectionInfo(), stat.Protocol))
assert.True(t, strings.HasPrefix(stat.GetConnectionDuration(), "00:"))
if stat.ConnectionID == "SFTP_id1" {
assert.Len(t, stat.Transfers, 2)
assert.Greater(t, len(stat.GetTransfersAsString()), 0)
for _, tr := range stat.Transfers {
if tr.OperationType == operationDownload {
assert.True(t, strings.HasPrefix(tr.getConnectionTransferAsString(), "DL"))
} else if tr.OperationType == operationUpload {
assert.True(t, strings.HasPrefix(tr.getConnectionTransferAsString(), "UL"))
}
}
} else if stat.ConnectionID == "DAV_id3" {
assert.Len(t, stat.Transfers, 1)
assert.Greater(t, len(stat.GetTransfersAsString()), 0)
} else {
assert.Equal(t, 0, len(stat.GetTransfersAsString()))
}
}
err := t1.Close()
assert.NoError(t, err)
err = t2.Close()
assert.NoError(t, err)
err = fakeConn3.SignalTransfersAbort()
assert.NoError(t, err)
assert.Equal(t, int32(1), atomic.LoadInt32(&t3.AbortTransfer))
err = t3.Close()
assert.NoError(t, err)
err = fakeConn3.SignalTransfersAbort()
assert.Error(t, err)
Connections.Remove(fakeConn1.GetID())
stats = Connections.GetStats()
assert.Len(t, stats, 2)
assert.Equal(t, fakeConn3.GetID(), stats[0].ConnectionID)
assert.Equal(t, fakeConn2.GetID(), stats[1].ConnectionID)
Connections.Remove(fakeConn2.GetID())
stats = Connections.GetStats()
assert.Len(t, stats, 1)
assert.Equal(t, fakeConn3.GetID(), stats[0].ConnectionID)
Connections.Remove(fakeConn3.GetID())
stats = Connections.GetStats()
assert.Len(t, stats, 0)
}
func TestQuotaScans(t *testing.T) {
username := "username"
assert.True(t, QuotaScans.AddUserQuotaScan(username))
assert.False(t, QuotaScans.AddUserQuotaScan(username))
if assert.Len(t, QuotaScans.GetUsersQuotaScans(), 1) {
assert.Equal(t, QuotaScans.GetUsersQuotaScans()[0].Username, username)
}
assert.True(t, QuotaScans.RemoveUserQuotaScan(username))
assert.False(t, QuotaScans.RemoveUserQuotaScan(username))
assert.Len(t, QuotaScans.GetUsersQuotaScans(), 0)
folderName := "folder"
assert.True(t, QuotaScans.AddVFolderQuotaScan(folderName))
assert.False(t, QuotaScans.AddVFolderQuotaScan(folderName))
if assert.Len(t, QuotaScans.GetVFoldersQuotaScans(), 1) {
assert.Equal(t, QuotaScans.GetVFoldersQuotaScans()[0].Name, folderName)
}
assert.True(t, QuotaScans.RemoveVFolderQuotaScan(folderName))
assert.False(t, QuotaScans.RemoveVFolderQuotaScan(folderName))
assert.Len(t, QuotaScans.GetVFoldersQuotaScans(), 0)
}
func TestProxyProtocolVersion(t *testing.T) {
c := Configuration{
ProxyProtocol: 1,
}
proxyListener, err := c.GetProxyListener(nil)
assert.NoError(t, err)
assert.Nil(t, proxyListener.Policy)
c.ProxyProtocol = 2
proxyListener, err = c.GetProxyListener(nil)
assert.NoError(t, err)
assert.NotNil(t, proxyListener.Policy)
c.ProxyProtocol = 1
c.ProxyAllowed = []string{"invalid"}
_, err = c.GetProxyListener(nil)
assert.Error(t, err)
c.ProxyProtocol = 2
_, err = c.GetProxyListener(nil)
assert.Error(t, err)
}
func TestProxyProtocol(t *testing.T) {
httpClient := httpclient.GetHTTPClient()
resp, err := httpClient.Get(fmt.Sprintf("http://%v", httpProxyAddr))
if assert.NoError(t, err) {
defer resp.Body.Close()
assert.Equal(t, http.StatusBadRequest, resp.StatusCode)
}
}
func TestPostConnectHook(t *testing.T) {
Config.PostConnectHook = ""
ipAddr := "127.0.0.1"
assert.NoError(t, Config.ExecutePostConnectHook(ipAddr, ProtocolFTP))
Config.PostConnectHook = "http://foo\x7f.com/"
assert.Error(t, Config.ExecutePostConnectHook(ipAddr, ProtocolSFTP))
Config.PostConnectHook = "http://invalid:1234/"
assert.Error(t, Config.ExecutePostConnectHook(ipAddr, ProtocolSFTP))
Config.PostConnectHook = fmt.Sprintf("http://%v/404", httpAddr)
assert.Error(t, Config.ExecutePostConnectHook(ipAddr, ProtocolFTP))
Config.PostConnectHook = fmt.Sprintf("http://%v", httpAddr)
assert.NoError(t, Config.ExecutePostConnectHook(ipAddr, ProtocolFTP))
Config.PostConnectHook = "invalid"
assert.Error(t, Config.ExecutePostConnectHook(ipAddr, ProtocolFTP))
if runtime.GOOS == osWindows {
Config.PostConnectHook = "C:\\bad\\command"
assert.Error(t, Config.ExecutePostConnectHook(ipAddr, ProtocolSFTP))
} else {
Config.PostConnectHook = "/invalid/path"
assert.Error(t, Config.ExecutePostConnectHook(ipAddr, ProtocolSFTP))
hookCmd, err := exec.LookPath("true")
assert.NoError(t, err)
Config.PostConnectHook = hookCmd
assert.NoError(t, Config.ExecutePostConnectHook(ipAddr, ProtocolSFTP))
}
Config.PostConnectHook = ""
}
func TestCryptoConvertFileInfo(t *testing.T) {
name := "name"
fs, err := vfs.NewCryptFs("connID1", os.TempDir(), vfs.CryptFsConfig{Passphrase: kms.NewPlainSecret("secret")})
require.NoError(t, err)
cryptFs := fs.(*vfs.CryptFs)
info := vfs.NewFileInfo(name, true, 48, time.Now(), false)
assert.Equal(t, info, cryptFs.ConvertFileInfo(info))
info = vfs.NewFileInfo(name, false, 48, time.Now(), false)
assert.NotEqual(t, info.Size(), cryptFs.ConvertFileInfo(info).Size())
info = vfs.NewFileInfo(name, false, 33, time.Now(), false)
assert.Equal(t, int64(0), cryptFs.ConvertFileInfo(info).Size())
info = vfs.NewFileInfo(name, false, 1, time.Now(), false)
assert.Equal(t, int64(0), cryptFs.ConvertFileInfo(info).Size())
}
func TestFolderCopy(t *testing.T) {
folder := vfs.BaseVirtualFolder{
ID: 1,
Name: "name",
MappedPath: filepath.Clean(os.TempDir()),
UsedQuotaSize: 4096,
UsedQuotaFiles: 2,
LastQuotaUpdate: utils.GetTimeAsMsSinceEpoch(time.Now()),
Users: []string{"user1", "user2"},
}
folderCopy := folder.GetACopy()
folder.ID = 2
folder.Users = []string{"user3"}
require.Len(t, folderCopy.Users, 2)
require.True(t, utils.IsStringInSlice("user1", folderCopy.Users))
require.True(t, utils.IsStringInSlice("user2", folderCopy.Users))
require.Equal(t, int64(1), folderCopy.ID)
require.Equal(t, folder.Name, folderCopy.Name)
require.Equal(t, folder.MappedPath, folderCopy.MappedPath)
require.Equal(t, folder.UsedQuotaSize, folderCopy.UsedQuotaSize)
require.Equal(t, folder.UsedQuotaFiles, folderCopy.UsedQuotaFiles)
require.Equal(t, folder.LastQuotaUpdate, folderCopy.LastQuotaUpdate)
}
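TestMain above wires viper so that any configuration key can be overridden from the environment: the sftpgo prefix is prepended and dots become double underscores. A minimal sketch of the resulting mapping (the bolt value is just an example):

package main

import (
	"fmt"
	"os"
	"strings"

	"github.com/spf13/viper"
)

func main() {
	v := viper.New()
	v.SetEnvPrefix("sftpgo")
	v.SetEnvKeyReplacer(strings.NewReplacer(".", "__"))
	v.AutomaticEnv()
	// data_provider.driver resolves to the SFTPGO_DATA_PROVIDER__DRIVER env var
	os.Setenv("SFTPGO_DATA_PROVIDER__DRIVER", "bolt")
	fmt.Println(v.Get("data_provider.driver")) // prints "bolt"
}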

1005
common/connection.go Normal file

File diff suppressed because it is too large

1261
common/connection_test.go Normal file

File diff suppressed because it is too large

472
common/defender.go Normal file

@@ -0,0 +1,472 @@
package common
import (
"encoding/json"
"fmt"
"io/ioutil"
"net"
"os"
"sort"
"sync"
"time"
"github.com/yl2chen/cidranger"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
)
// HostEvent is the enumeration for the supported host events
type HostEvent int
// Supported host events
const (
HostEventLoginFailed HostEvent = iota
HostEventUserNotFound
HostEventNoLoginTried
)
// Defender defines the interface that a defender must implement
type Defender interface {
AddEvent(ip string, event HostEvent)
IsBanned(ip string) bool
GetBanTime(ip string) *time.Time
GetScore(ip string) int
Unban(ip string) bool
Reload() error
}
// DefenderConfig defines the "defender" configuration
type DefenderConfig struct {
// Set to true to enable the defender
Enabled bool `json:"enabled" mapstructure:"enabled"`
// BanTime is the number of minutes that a host is banned
BanTime int `json:"ban_time" mapstructure:"ban_time"`
// Percentage increase of the ban time if a banned host tries to connect again
BanTimeIncrement int `json:"ban_time_increment" mapstructure:"ban_time_increment"`
// Threshold value for banning a client
Threshold int `json:"threshold" mapstructure:"threshold"`
// Score for invalid login attempts, e.g. non-existent user accounts or
// clients disconnected for inactivity without any authentication attempt
ScoreInvalid int `json:"score_invalid" mapstructure:"score_invalid"`
// Score for valid login attempts, e.g. user accounts that exist
ScoreValid int `json:"score_valid" mapstructure:"score_valid"`
// Defines the time window, in minutes, for tracking client errors.
// A host is banned once it reaches the configured threshold within
// the last ObservationTime minutes
ObservationTime int `json:"observation_time" mapstructure:"observation_time"`
// The number of banned IPs and host scores kept in memory will vary between the
// soft and hard limit
EntriesSoftLimit int `json:"entries_soft_limit" mapstructure:"entries_soft_limit"`
EntriesHardLimit int `json:"entries_hard_limit" mapstructure:"entries_hard_limit"`
// Path to a file containing a list of ip addresses and/or networks to never ban
SafeListFile string `json:"safelist_file" mapstructure:"safelist_file"`
// Path to a file containing a list of ip addresses and/or networks to always ban
BlockListFile string `json:"blocklist_file" mapstructure:"blocklist_file"`
}
type memoryDefender struct {
config *DefenderConfig
sync.RWMutex
// IP addresses of the clients trying to connect are stored inside hosts;
// they are moved to banned once the threshold is reached.
// A violation from a banned host will increase the ban time
// based on the configured BanTimeIncrement
hosts map[string]hostScore // the key is the host IP
banned map[string]time.Time // the key is the host IP
safeList *HostList
blockList *HostList
}
// HostListFile defines the structure expected for safe/block list files
type HostListFile struct {
IPAddresses []string `json:"addresses"`
CIDRNetworks []string `json:"networks"`
}
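// a hypothetical file in this format could look like:
// {"addresses": ["192.0.2.1", "192.0.2.2"], "networks": ["10.8.0.0/24"]}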
// HostList defines the structure used to keep the HostListFile in memory
type HostList struct {
IPAddresses map[string]bool
Ranges cidranger.Ranger
}
func (h *HostList) isListed(ip string) bool {
if _, ok := h.IPAddresses[ip]; ok {
return true
}
ok, err := h.Ranges.Contains(net.ParseIP(ip))
if err != nil {
return false
}
return ok
}
type hostEvent struct {
dateTime time.Time
score int
}
type hostScore struct {
TotalScore int
Events []hostEvent
}
// validate returns an error if the configuration is invalid
func (c *DefenderConfig) validate() error {
if !c.Enabled {
return nil
}
if c.ScoreInvalid >= c.Threshold {
return fmt.Errorf("score_invalid %v cannot be greater than threshold %v", c.ScoreInvalid, c.Threshold)
}
if c.ScoreValid >= c.Threshold {
return fmt.Errorf("score_valid %v cannot be greater than threshold %v", c.ScoreValid, c.Threshold)
}
if c.BanTime <= 0 {
return fmt.Errorf("invalid ban_time %v", c.BanTime)
}
if c.BanTimeIncrement <= 0 {
return fmt.Errorf("invalid ban_time_increment %v", c.BanTimeIncrement)
}
if c.ObservationTime <= 0 {
return fmt.Errorf("invalid observation_time %v", c.ObservationTime)
}
if c.EntriesSoftLimit <= 0 {
return fmt.Errorf("invalid entries_soft_limit %v", c.EntriesSoftLimit)
}
if c.EntriesHardLimit <= c.EntriesSoftLimit {
return fmt.Errorf("invalid entries_hard_limit %v must be > %v", c.EntriesHardLimit, c.EntriesSoftLimit)
}
return nil
}
func newInMemoryDefender(config *DefenderConfig) (Defender, error) {
err := config.validate()
if err != nil {
return nil, err
}
defender := &memoryDefender{
config: config,
hosts: make(map[string]hostScore),
banned: make(map[string]time.Time),
}
if err := defender.Reload(); err != nil {
return nil, err
}
return defender, nil
}
// Reload reloads block and safe lists
func (d *memoryDefender) Reload() error {
blockList, err := loadHostListFromFile(d.config.BlockListFile)
if err != nil {
return err
}
d.Lock()
d.blockList = blockList
d.Unlock()
safeList, err := loadHostListFromFile(d.config.SafeListFile)
if err != nil {
return err
}
d.Lock()
d.safeList = safeList
d.Unlock()
return nil
}
// IsBanned returns true if the specified IP is banned and, if so,
// increases its ban time.
// This method must be called as soon as the client connects
func (d *memoryDefender) IsBanned(ip string) bool {
d.RLock()
if banTime, ok := d.banned[ip]; ok {
if banTime.After(time.Now()) {
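// extend the ban: the increment is BanTime * BanTimeIncrement / 100 minutes,
// e.g. 5 minutes per new attempt with BanTime 10 and BanTimeIncrement 50 (hypothetical values)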
increment := d.config.BanTime * d.config.BanTimeIncrement / 100
if increment == 0 {
increment++
}
d.RUnlock()
// a concurrent update could store an earlier ban time, but this should
// not make much difference. We prefer to hold the read lock as long as
// possible for performance reasons: this method is called every time a
// new client connects and it must be as fast as possible
d.Lock()
d.banned[ip] = banTime.Add(time.Duration(increment) * time.Minute)
d.Unlock()
return true
}
}
defer d.RUnlock()
if d.blockList != nil && d.blockList.isListed(ip) {
// permanent ban
return true
}
return false
}
// Unban removes the specified IP address from the banned ones
func (d *memoryDefender) Unban(ip string) bool {
d.Lock()
defer d.Unlock()
if _, ok := d.banned[ip]; ok {
delete(d.banned, ip)
return true
}
return false
}
// AddEvent adds an event for the given IP.
// This method must be called for clients not yet banned
func (d *memoryDefender) AddEvent(ip string, event HostEvent) {
d.Lock()
defer d.Unlock()
if d.safeList != nil && d.safeList.isListed(ip) {
return
}
var score int
switch event {
case HostEventLoginFailed:
score = d.config.ScoreValid
case HostEventUserNotFound, HostEventNoLoginTried:
score = d.config.ScoreInvalid
}
ev := hostEvent{
dateTime: time.Now(),
score: score,
}
if hs, ok := d.hosts[ip]; ok {
hs.Events = append(hs.Events, ev)
hs.TotalScore = 0
idx := 0
for _, event := range hs.Events {
if event.dateTime.Add(time.Duration(d.config.ObservationTime) * time.Minute).After(time.Now()) {
hs.Events[idx] = event
hs.TotalScore += event.score
idx++
}
}
hs.Events = hs.Events[:idx]
if hs.TotalScore >= d.config.Threshold {
d.banned[ip] = time.Now().Add(time.Duration(d.config.BanTime) * time.Minute)
delete(d.hosts, ip)
d.cleanupBanned()
} else {
d.hosts[ip] = hs
}
} else {
d.hosts[ip] = hostScore{
TotalScore: ev.score,
Events: []hostEvent{ev},
}
d.cleanupHosts()
}
}
func (d *memoryDefender) countBanned() int {
d.RLock()
defer d.RUnlock()
return len(d.banned)
}
func (d *memoryDefender) countHosts() int {
d.RLock()
defer d.RUnlock()
return len(d.hosts)
}
// GetBanTime returns the ban time for the given IP or nil if the IP is not banned
func (d *memoryDefender) GetBanTime(ip string) *time.Time {
d.RLock()
defer d.RUnlock()
if banTime, ok := d.banned[ip]; ok {
return &banTime
}
return nil
}
// GetScore returns the score for the given IP
func (d *memoryDefender) GetScore(ip string) int {
d.RLock()
defer d.RUnlock()
score := 0
if hs, ok := d.hosts[ip]; ok {
for _, event := range hs.Events {
if event.dateTime.Add(time.Duration(d.config.ObservationTime) * time.Minute).After(time.Now()) {
score += event.score
}
}
}
return score
}
func (d *memoryDefender) cleanupBanned() {
if len(d.banned) > d.config.EntriesHardLimit {
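// e.g. with a hypothetical soft limit of 100 and hard limit of 150, crossing
// 150 entries evicts expired bans first, then the soonest-expiring ones,
// back down to the soft limit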
kvList := make(kvList, 0, len(d.banned))
for k, v := range d.banned {
if v.Before(time.Now()) {
delete(d.banned, k)
}
kvList = append(kvList, kv{
Key: k,
Value: v.UnixNano(),
})
}
// we removed any expired IP addresses above; that alone could be enough
numToRemove := len(d.banned) - d.config.EntriesSoftLimit
if numToRemove <= 0 {
return
}
sort.Sort(kvList)
for idx, kv := range kvList {
if idx >= numToRemove {
break
}
delete(d.banned, kv.Key)
}
}
}
func (d *memoryDefender) cleanupHosts() {
if len(d.hosts) > d.config.EntriesHardLimit {
kvList := make(kvList, 0, len(d.hosts))
for k, v := range d.hosts {
value := int64(0)
if len(v.Events) > 0 {
value = v.Events[len(v.Events)-1].dateTime.UnixNano()
}
kvList = append(kvList, kv{
Key: k,
Value: value,
})
}
sort.Sort(kvList)
numToRemove := len(d.hosts) - d.config.EntriesSoftLimit
for idx, kv := range kvList {
if idx >= numToRemove {
break
}
delete(d.hosts, kv.Key)
}
}
}
func loadHostListFromFile(name string) (*HostList, error) {
if name == "" {
return nil, nil
}
if !utils.IsFileInputValid(name) {
return nil, fmt.Errorf("invalid host list file name %#v", name)
}
info, err := os.Stat(name)
if err != nil {
return nil, err
}
// opinionated max size, you should avoid big host lists
if info.Size() > 1048576*5 { // 5MB
return nil, fmt.Errorf("host list file %#v is too big: %v bytes", name, info.Size())
}
content, err := ioutil.ReadFile(name)
if err != nil {
return nil, fmt.Errorf("unable to read input file %#v: %v", name, err)
}
var hostList HostListFile
err = json.Unmarshal(content, &hostList)
if err != nil {
return nil, err
}
if len(hostList.CIDRNetworks) > 0 || len(hostList.IPAddresses) > 0 {
result := &HostList{
IPAddresses: make(map[string]bool),
Ranges: cidranger.NewPCTrieRanger(),
}
ipCount := 0
cdrCount := 0
for _, ip := range hostList.IPAddresses {
if net.ParseIP(ip) == nil {
logger.Warn(logSender, "", "unable to parse IP %#v", ip)
continue
}
result.IPAddresses[ip] = true
ipCount++
}
for _, cidrNet := range hostList.CIDRNetworks {
_, network, err := net.ParseCIDR(cidrNet)
if err != nil {
logger.Warn(logSender, "", "unable to parse CIDR network %#v", cidrNet)
continue
}
err = result.Ranges.Insert(cidranger.NewBasicRangerEntry(*network))
if err == nil {
cdrCount++
}
}
logger.Info(logSender, "", "list %#v loaded, ip addresses loaded: %v/%v networks loaded: %v/%v",
name, ipCount, len(hostList.IPAddresses), cdrCount, len(hostList.CIDRNetworks))
return result, nil
}
return nil, nil
}
type kv struct {
Key string
Value int64
}
type kvList []kv
func (p kvList) Len() int { return len(p) }
func (p kvList) Less(i, j int) bool { return p[i].Value < p[j].Value }
func (p kvList) Swap(i, j int) { p[i], p[j] = p[j], p[i] }
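Putting the pieces together, a minimal sketch of driving the in-memory defender from inside the common package (the IP and config values are hypothetical; the safe/block list files, when used, are JSON objects with "addresses" and "networks" keys as defined by HostListFile):

// build a valid config: scores below the threshold, hard limit above the soft one
config := &DefenderConfig{
	Enabled:          true,
	BanTime:          10,
	BanTimeIncrement: 50,
	Threshold:        5,
	ScoreInvalid:     2,
	ScoreValid:       1,
	ObservationTime:  15,
	EntriesSoftLimit: 100,
	EntriesHardLimit: 150,
}
defender, err := newInMemoryDefender(config)
if err != nil {
	panic(err)
}
ip := "203.0.113.7"
defender.AddEvent(ip, HostEventNoLoginTried) // +2, score 2
defender.AddEvent(ip, HostEventNoLoginTried) // +2, score 4
defender.AddEvent(ip, HostEventLoginFailed)  // +1, score 5 reaches the threshold
fmt.Println(defender.IsBanned(ip))           // true: banned for BanTime minutes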

523
common/defender_test.go Normal file

@@ -0,0 +1,523 @@
package common
import (
"crypto/rand"
"encoding/json"
"fmt"
"io/ioutil"
"net"
"os"
"path/filepath"
"runtime"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/yl2chen/cidranger"
)
func TestBasicDefender(t *testing.T) {
bl := HostListFile{
IPAddresses: []string{"172.16.1.1", "172.16.1.2"},
CIDRNetworks: []string{"10.8.0.0/24"},
}
sl := HostListFile{
IPAddresses: []string{"172.16.1.3", "172.16.1.4"},
CIDRNetworks: []string{"192.168.8.0/24"},
}
blFile := filepath.Join(os.TempDir(), "bl.json")
slFile := filepath.Join(os.TempDir(), "sl.json")
data, err := json.Marshal(bl)
assert.NoError(t, err)
err = ioutil.WriteFile(blFile, data, os.ModePerm)
assert.NoError(t, err)
data, err = json.Marshal(sl)
assert.NoError(t, err)
err = ioutil.WriteFile(slFile, data, os.ModePerm)
assert.NoError(t, err)
config := &DefenderConfig{
Enabled: true,
BanTime: 10,
BanTimeIncrement: 2,
Threshold: 5,
ScoreInvalid: 2,
ScoreValid: 1,
ObservationTime: 15,
EntriesSoftLimit: 1,
EntriesHardLimit: 2,
SafeListFile: "slFile",
BlockListFile: "blFile",
}
_, err = newInMemoryDefender(config)
assert.Error(t, err)
config.BlockListFile = blFile
_, err = newInMemoryDefender(config)
assert.Error(t, err)
config.SafeListFile = slFile
d, err := newInMemoryDefender(config)
assert.NoError(t, err)
defender := d.(*memoryDefender)
assert.True(t, defender.IsBanned("172.16.1.1"))
assert.False(t, defender.IsBanned("172.16.1.10"))
assert.False(t, defender.IsBanned("10.8.2.3"))
assert.True(t, defender.IsBanned("10.8.0.3"))
assert.False(t, defender.IsBanned("invalid ip"))
assert.Equal(t, 0, defender.countBanned())
assert.Equal(t, 0, defender.countHosts())
defender.AddEvent("172.16.1.4", HostEventLoginFailed)
defender.AddEvent("192.168.8.4", HostEventUserNotFound)
assert.Equal(t, 0, defender.countHosts())
testIP := "12.34.56.78"
defender.AddEvent(testIP, HostEventLoginFailed)
assert.Equal(t, 1, defender.countHosts())
assert.Equal(t, 0, defender.countBanned())
assert.Equal(t, 1, defender.GetScore(testIP))
assert.Nil(t, defender.GetBanTime(testIP))
defender.AddEvent(testIP, HostEventNoLoginTried)
assert.Equal(t, 1, defender.countHosts())
assert.Equal(t, 0, defender.countBanned())
assert.Equal(t, 3, defender.GetScore(testIP))
defender.AddEvent(testIP, HostEventNoLoginTried)
assert.Equal(t, 0, defender.countHosts())
assert.Equal(t, 1, defender.countBanned())
assert.Equal(t, 0, defender.GetScore(testIP))
assert.NotNil(t, defender.GetBanTime(testIP))
// now test cleanup, testIP is already banned
testIP1 := "12.34.56.79"
testIP2 := "12.34.56.80"
testIP3 := "12.34.56.81"
defender.AddEvent(testIP1, HostEventNoLoginTried)
defender.AddEvent(testIP2, HostEventNoLoginTried)
assert.Equal(t, 2, defender.countHosts())
time.Sleep(20 * time.Millisecond)
defender.AddEvent(testIP3, HostEventNoLoginTried)
assert.Equal(t, defender.config.EntriesSoftLimit, defender.countHosts())
// testIP1 and testIP2 should be removed
assert.Equal(t, defender.config.EntriesSoftLimit, defender.countHosts())
assert.Equal(t, 0, defender.GetScore(testIP1))
assert.Equal(t, 0, defender.GetScore(testIP2))
assert.Equal(t, 2, defender.GetScore(testIP3))
defender.AddEvent(testIP3, HostEventNoLoginTried)
defender.AddEvent(testIP3, HostEventNoLoginTried)
// IP3 is now banned
assert.NotNil(t, defender.GetBanTime(testIP3))
assert.Equal(t, 0, defender.countHosts())
time.Sleep(20 * time.Millisecond)
for i := 0; i < 3; i++ {
defender.AddEvent(testIP1, HostEventNoLoginTried)
}
assert.Equal(t, 0, defender.countHosts())
assert.Equal(t, config.EntriesSoftLimit, defender.countBanned())
assert.Nil(t, defender.GetBanTime(testIP))
assert.Nil(t, defender.GetBanTime(testIP3))
assert.NotNil(t, defender.GetBanTime(testIP1))
for i := 0; i < 3; i++ {
defender.AddEvent(testIP, HostEventNoLoginTried)
time.Sleep(10 * time.Millisecond)
defender.AddEvent(testIP3, HostEventNoLoginTried)
}
assert.Equal(t, 0, defender.countHosts())
assert.Equal(t, defender.config.EntriesSoftLimit, defender.countBanned())
banTime := defender.GetBanTime(testIP3)
if assert.NotNil(t, banTime) {
assert.True(t, defender.IsBanned(testIP3))
// ban time should increase
newBanTime := defender.GetBanTime(testIP3)
assert.True(t, newBanTime.After(*banTime))
}
assert.True(t, defender.Unban(testIP3))
assert.False(t, defender.Unban(testIP3))
err = os.Remove(slFile)
assert.NoError(t, err)
err = os.Remove(blFile)
assert.NoError(t, err)
}
func TestLoadHostListFromFile(t *testing.T) {
_, err := loadHostListFromFile(".")
assert.Error(t, err)
hostsFilePath := filepath.Join(os.TempDir(), "hostfile")
content := make([]byte, 1048576*6)
_, err = rand.Read(content)
assert.NoError(t, err)
err = ioutil.WriteFile(hostsFilePath, content, os.ModePerm)
assert.NoError(t, err)
_, err = loadHostListFromFile(hostsFilePath)
assert.Error(t, err)
hl := HostListFile{
IPAddresses: []string{},
CIDRNetworks: []string{},
}
asJSON, err := json.Marshal(hl)
assert.NoError(t, err)
err = ioutil.WriteFile(hostsFilePath, asJSON, os.ModePerm)
assert.NoError(t, err)
hostList, err := loadHostListFromFile(hostsFilePath)
assert.NoError(t, err)
assert.Nil(t, hostList)
hl.IPAddresses = append(hl.IPAddresses, "invalidip")
asJSON, err = json.Marshal(hl)
assert.NoError(t, err)
err = ioutil.WriteFile(hostsFilePath, asJSON, os.ModePerm)
assert.NoError(t, err)
hostList, err = loadHostListFromFile(hostsFilePath)
assert.NoError(t, err)
assert.Len(t, hostList.IPAddresses, 0)
hl.IPAddresses = nil
hl.CIDRNetworks = append(hl.CIDRNetworks, "invalid net")
asJSON, err = json.Marshal(hl)
assert.NoError(t, err)
err = ioutil.WriteFile(hostsFilePath, asJSON, os.ModePerm)
assert.NoError(t, err)
hostList, err = loadHostListFromFile(hostsFilePath)
assert.NoError(t, err)
assert.NotNil(t, hostList)
assert.Len(t, hostList.IPAddresses, 0)
assert.Equal(t, 0, hostList.Ranges.Len())
if runtime.GOOS != "windows" {
err = os.Chmod(hostsFilePath, 0111)
assert.NoError(t, err)
_, err = loadHostListFromFile(hostsFilePath)
assert.Error(t, err)
err = os.Chmod(hostsFilePath, 0644)
assert.NoError(t, err)
}
err = ioutil.WriteFile(hostsFilePath, []byte("non json content"), os.ModePerm)
assert.NoError(t, err)
_, err = loadHostListFromFile(hostsFilePath)
assert.Error(t, err)
err = os.Remove(hostsFilePath)
assert.NoError(t, err)
}
func TestDefenderCleanup(t *testing.T) {
d := memoryDefender{
banned: make(map[string]time.Time),
hosts: make(map[string]hostScore),
config: &DefenderConfig{
ObservationTime: 1,
EntriesSoftLimit: 2,
EntriesHardLimit: 3,
},
}
d.banned["1.1.1.1"] = time.Now().Add(-24 * time.Hour)
d.banned["1.1.1.2"] = time.Now().Add(-24 * time.Hour)
d.banned["1.1.1.3"] = time.Now().Add(-24 * time.Hour)
d.banned["1.1.1.4"] = time.Now().Add(-24 * time.Hour)
d.cleanupBanned()
assert.Equal(t, 0, d.countBanned())
d.banned["2.2.2.2"] = time.Now().Add(2 * time.Minute)
d.banned["2.2.2.3"] = time.Now().Add(1 * time.Minute)
d.banned["2.2.2.4"] = time.Now().Add(3 * time.Minute)
d.banned["2.2.2.5"] = time.Now().Add(4 * time.Minute)
d.cleanupBanned()
assert.Equal(t, d.config.EntriesSoftLimit, d.countBanned())
assert.Nil(t, d.GetBanTime("2.2.2.3"))
d.hosts["3.3.3.3"] = hostScore{
TotalScore: 0,
Events: []hostEvent{
{
dateTime: time.Now().Add(-5 * time.Minute),
score: 1,
},
{
dateTime: time.Now().Add(-3 * time.Minute),
score: 1,
},
{
dateTime: time.Now(),
score: 1,
},
},
}
d.hosts["3.3.3.4"] = hostScore{
TotalScore: 1,
Events: []hostEvent{
{
dateTime: time.Now().Add(-3 * time.Minute),
score: 1,
},
},
}
d.hosts["3.3.3.5"] = hostScore{
TotalScore: 1,
Events: []hostEvent{
{
dateTime: time.Now().Add(-2 * time.Minute),
score: 1,
},
},
}
d.hosts["3.3.3.6"] = hostScore{
TotalScore: 1,
Events: []hostEvent{
{
dateTime: time.Now().Add(-1 * time.Minute),
score: 1,
},
},
}
assert.Equal(t, 1, d.GetScore("3.3.3.3"))
d.cleanupHosts()
assert.Equal(t, d.config.EntriesSoftLimit, d.countHosts())
assert.Equal(t, 0, d.GetScore("3.3.3.4"))
}
func TestDefenderConfig(t *testing.T) {
c := DefenderConfig{}
err := c.validate()
require.NoError(t, err)
c.Enabled = true
c.Threshold = 10
c.ScoreInvalid = 10
err = c.validate()
require.Error(t, err)
c.ScoreInvalid = 2
c.ScoreValid = 10
err = c.validate()
require.Error(t, err)
c.ScoreValid = 1
c.BanTime = 0
err = c.validate()
require.Error(t, err)
c.BanTime = 30
c.BanTimeIncrement = 0
err = c.validate()
require.Error(t, err)
c.BanTimeIncrement = 50
c.ObservationTime = 0
err = c.validate()
require.Error(t, err)
c.ObservationTime = 30
err = c.validate()
require.Error(t, err)
c.EntriesSoftLimit = 10
err = c.validate()
require.Error(t, err)
c.EntriesHardLimit = 10
err = c.validate()
require.Error(t, err)
c.EntriesHardLimit = 20
err = c.validate()
require.NoError(t, err)
}
func BenchmarkDefenderBannedSearch(b *testing.B) {
d := getDefenderForBench()
ip, ipnet, err := net.ParseCIDR("10.8.0.0/12") // 1048574 ip addresses
if err != nil {
panic(err)
}
for ip := ip.Mask(ipnet.Mask); ipnet.Contains(ip); inc(ip) {
d.banned[ip.String()] = time.Now().Add(10 * time.Minute)
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
d.IsBanned("192.168.1.1")
}
}
func BenchmarkCleanup(b *testing.B) {
d := getDefenderForBench()
ip, ipnet, err := net.ParseCIDR("192.168.4.0/24")
if err != nil {
panic(err)
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
for ip := ip.Mask(ipnet.Mask); ipnet.Contains(ip); inc(ip) {
d.AddEvent(ip.String(), HostEventLoginFailed)
if d.countHosts() > d.config.EntriesHardLimit {
panic("too many hosts")
}
if d.countBanned() > d.config.EntriesSoftLimit {
panic("too many ip banned")
}
}
}
}
func BenchmarkDefenderBannedSearchWithBlockList(b *testing.B) {
d := getDefenderForBench()
d.blockList = &HostList{
IPAddresses: make(map[string]bool),
Ranges: cidranger.NewPCTrieRanger(),
}
ip, ipnet, err := net.ParseCIDR("129.8.0.0/12") // 1048574 ip addresses
if err != nil {
panic(err)
}
for ip := ip.Mask(ipnet.Mask); ipnet.Contains(ip); inc(ip) {
d.banned[ip.String()] = time.Now().Add(10 * time.Minute)
d.blockList.IPAddresses[ip.String()] = true
}
for i := 0; i < 255; i++ {
cidr := fmt.Sprintf("10.8.%v.1/24", i)
_, network, _ := net.ParseCIDR(cidr)
if err := d.blockList.Ranges.Insert(cidranger.NewBasicRangerEntry(*network)); err != nil {
panic(err)
}
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
d.IsBanned("192.168.1.1")
}
}
func BenchmarkHostListSearch(b *testing.B) {
hostlist := &HostList{
IPAddresses: make(map[string]bool),
Ranges: cidranger.NewPCTrieRanger(),
}
ip, ipnet, _ := net.ParseCIDR("172.16.0.0/16")
for ip := ip.Mask(ipnet.Mask); ipnet.Contains(ip); inc(ip) {
hostlist.IPAddresses[ip.String()] = true
}
for i := 0; i < 255; i++ {
cidr := fmt.Sprintf("10.8.%v.1/24", i)
_, network, _ := net.ParseCIDR(cidr)
if err := hostlist.Ranges.Insert(cidranger.NewBasicRangerEntry(*network)); err != nil {
panic(err)
}
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
if hostlist.isListed("192.167.1.2") {
panic("should not be listed")
}
}
}
func BenchmarkCIDRanger(b *testing.B) {
ranger := cidranger.NewPCTrieRanger()
for i := 0; i < 255; i++ {
cidr := fmt.Sprintf("192.168.%v.1/24", i)
_, network, _ := net.ParseCIDR(cidr)
if err := ranger.Insert(cidranger.NewBasicRangerEntry(*network)); err != nil {
panic(err)
}
}
ipToMatch := net.ParseIP("192.167.1.2")
b.ResetTimer()
for i := 0; i < b.N; i++ {
if _, err := ranger.Contains(ipToMatch); err != nil {
panic(err)
}
}
}
func BenchmarkNetContains(b *testing.B) {
var nets []*net.IPNet
for i := 0; i < 255; i++ {
cidr := fmt.Sprintf("192.168.%v.1/24", i)
_, network, _ := net.ParseCIDR(cidr)
nets = append(nets, network)
}
ipToMatch := net.ParseIP("192.167.1.1")
b.ResetTimer()
for i := 0; i < b.N; i++ {
for _, n := range nets {
n.Contains(ipToMatch)
}
}
}
func getDefenderForBench() *memoryDefender {
config := &DefenderConfig{
Enabled: true,
BanTime: 30,
BanTimeIncrement: 50,
Threshold: 10,
ScoreInvalid: 2,
ScoreValid: 2,
ObservationTime: 30,
EntriesSoftLimit: 50,
EntriesHardLimit: 100,
}
return &memoryDefender{
config: config,
hosts: make(map[string]hostScore),
banned: make(map[string]time.Time),
}
}
func inc(ip net.IP) {
for j := len(ip) - 1; j >= 0; j-- {
ip[j]++
if ip[j] > 0 {
break
}
}
}


@@ -1,15 +1,13 @@
package httpd
package common
import (
"encoding/csv"
"errors"
"fmt"
"net/http"
"os"
"strings"
"sync"
unixcrypt "github.com/nathanaelle/password/v2"
"github.com/GehirnInc/crypt/apr1_crypt"
"github.com/GehirnInc/crypt/md5_crypt"
"golang.org/x/crypto/bcrypt"
"github.com/drakkan/sftpgo/logger"
@@ -17,50 +15,52 @@ import (
)
const (
authenticationHeader = "WWW-Authenticate"
authenticationRealm = "SFTPGo Web"
unauthResponse = "Unauthorized"
// HTTPAuthenticationHeader defines the HTTP authentication header
HTTPAuthenticationHeader = "WWW-Authenticate"
md5CryptPwdPrefix = "$1$"
apr1CryptPwdPrefix = "$apr1$"
)
var (
md5CryptPwdPrefixes = []string{"$1$", "$apr1$"}
bcryptPwdPrefixes = []string{"$2a$", "$2$", "$2x$", "$2y$", "$2b$"}
)
type httpAuthProvider interface {
getHashedPassword(username string) (string, bool)
isEnabled() bool
// HTTPAuthProvider defines the interface for HTTP auth providers
type HTTPAuthProvider interface {
ValidateCredentials(username, password string) bool
IsEnabled() bool
}
type basicAuthProvider struct {
Path string
sync.RWMutex
Info os.FileInfo
Users map[string]string
lock *sync.RWMutex
}
func newBasicAuthProvider(authUserFile string) (httpAuthProvider, error) {
// NewBasicAuthProvider returns an HTTPAuthProvider implementing Basic Auth
func NewBasicAuthProvider(authUserFile string) (HTTPAuthProvider, error) {
basicAuthProvider := basicAuthProvider{
Path: authUserFile,
Info: nil,
Users: make(map[string]string),
lock: new(sync.RWMutex),
}
return &basicAuthProvider, basicAuthProvider.loadUsers()
}
func (p *basicAuthProvider) isEnabled() bool {
return len(p.Path) > 0
func (p *basicAuthProvider) IsEnabled() bool {
return p.Path != ""
}
func (p *basicAuthProvider) isReloadNeeded(info os.FileInfo) bool {
p.lock.RLock()
defer p.lock.RUnlock()
p.RLock()
defer p.RUnlock()
return p.Info == nil || p.Info.ModTime() != info.ModTime() || p.Info.Size() != info.Size()
}
func (p *basicAuthProvider) loadUsers() error {
if !p.isEnabled() {
if !p.IsEnabled() {
return nil
}
info, err := os.Stat(p.Path)
@@ -84,8 +84,9 @@ func (p *basicAuthProvider) loadUsers() error {
logger.Debug(logSender, "", "unable to parse basic auth users file: %v", err)
return err
}
p.lock.Lock()
defer p.lock.Unlock()
p.Lock()
defer p.Unlock()
p.Users = make(map[string]string)
for _, record := range records {
if len(record) == 2 {
@@ -103,49 +104,31 @@ func (p *basicAuthProvider) getHashedPassword(username string) (string, bool) {
if err != nil {
return "", false
}
p.lock.RLock()
defer p.lock.RUnlock()
p.RLock()
defer p.RUnlock()
pwd, ok := p.Users[username]
return pwd, ok
}
func checkAuth(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if !validateCredentials(r) {
w.Header().Set(authenticationHeader, fmt.Sprintf("Basic realm=\"%v\"", authenticationRealm))
if strings.HasPrefix(r.RequestURI, apiPrefix) {
sendAPIResponse(w, r, errors.New(unauthResponse), "", http.StatusUnauthorized)
} else {
http.Error(w, unauthResponse, http.StatusUnauthorized)
}
return
}
next.ServeHTTP(w, r)
})
}
func validateCredentials(r *http.Request) bool {
if !httpAuth.isEnabled() {
return true
}
username, password, ok := r.BasicAuth()
if !ok {
return false
}
if hashedPwd, ok := httpAuth.getHashedPassword(username); ok {
// ValidateCredentials returns true if the credentials are valid
func (p *basicAuthProvider) ValidateCredentials(username, password string) bool {
if hashedPwd, ok := p.getHashedPassword(username); ok {
if utils.IsStringPrefixInSlice(hashedPwd, bcryptPwdPrefixes) {
err := bcrypt.CompareHashAndPassword([]byte(hashedPwd), []byte(password))
return err == nil
}
if utils.IsStringPrefixInSlice(hashedPwd, md5CryptPwdPrefixes) {
crypter, ok := unixcrypt.MD5.CrypterFound(hashedPwd)
if !ok {
err := errors.New("cannot found matching MD5 crypter")
logger.Debug(logSender, "", "error comparing password with MD5 crypt hash: %v", err)
return false
}
return crypter.Verify([]byte(password))
}
}
if strings.HasPrefix(hashedPwd, md5CryptPwdPrefix) {
crypter := md5_crypt.New()
err := crypter.Verify(hashedPwd, []byte(password))
return err == nil
}
if strings.HasPrefix(hashedPwd, apr1CryptPwdPrefix) {
crypter := apr1_crypt.New()
err := crypter.Verify(hashedPwd, []byte(password))
return err == nil
}
}
return false
}
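With validation moved into the provider, a caller can rebuild the middleware that the removed checkAuth used to provide. A minimal sketch (the handler and realm names are ours, not SFTPGo's):

func basicAuthMiddleware(p HTTPAuthProvider, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if p.IsEnabled() {
			username, password, ok := r.BasicAuth()
			if !ok || !p.ValidateCredentials(username, password) {
				// challenge the client and reject the request
				w.Header().Set(HTTPAuthenticationHeader, `Basic realm="SFTPGo"`)
				http.Error(w, "Unauthorized", http.StatusUnauthorized)
				return
			}
		}
		next.ServeHTTP(w, r)
	})
}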

72
common/httpauth_test.go Normal file

@@ -0,0 +1,72 @@
package common
import (
"io/ioutil"
"os"
"path/filepath"
"runtime"
"testing"
"github.com/stretchr/testify/require"
)
func TestBasicAuth(t *testing.T) {
httpAuth, err := NewBasicAuthProvider("")
require.NoError(t, err)
require.False(t, httpAuth.IsEnabled())
_, err = NewBasicAuthProvider("missing path")
require.Error(t, err)
authUserFile := filepath.Join(os.TempDir(), "http_users.txt")
authUserData := []byte("test1:$2y$05$bcHSED7aO1cfLto6ZdDBOOKzlwftslVhtpIkRhAtSa4GuLmk5mola\n")
err = ioutil.WriteFile(authUserFile, authUserData, os.ModePerm)
require.NoError(t, err)
httpAuth, err = NewBasicAuthProvider(authUserFile)
require.NoError(t, err)
require.True(t, httpAuth.IsEnabled())
require.False(t, httpAuth.ValidateCredentials("test1", "wrong1"))
require.False(t, httpAuth.ValidateCredentials("test2", "password2"))
require.True(t, httpAuth.ValidateCredentials("test1", "password1"))
authUserData = append(authUserData, []byte("test2:$1$OtSSTL8b$bmaCqEksI1e7rnZSjsIDR1\n")...)
err = ioutil.WriteFile(authUserFile, authUserData, os.ModePerm)
require.NoError(t, err)
require.False(t, httpAuth.ValidateCredentials("test2", "wrong2"))
require.True(t, httpAuth.ValidateCredentials("test2", "password2"))
authUserData = append(authUserData, []byte("test2:$apr1$gLnIkRIf$Xr/6aJfmIrihP4b2N2tcs/\n")...)
err = ioutil.WriteFile(authUserFile, authUserData, os.ModePerm)
require.NoError(t, err)
require.False(t, httpAuth.ValidateCredentials("test2", "wrong2"))
require.True(t, httpAuth.ValidateCredentials("test2", "password2"))
authUserData = append(authUserData, []byte("test3:$apr1$gLnIkRIf$Xr/6aJfmIrihP4b2N2tcs/\n")...)
err = ioutil.WriteFile(authUserFile, authUserData, os.ModePerm)
require.NoError(t, err)
require.False(t, httpAuth.ValidateCredentials("test3", "password3"))
authUserData = append(authUserData, []byte("test4:$invalid$gLnIkRIf$Xr/6$aJfmIr$ihP4b2N2tcs/\n")...)
err = ioutil.WriteFile(authUserFile, authUserData, os.ModePerm)
require.NoError(t, err)
require.False(t, httpAuth.ValidateCredentials("test4", "password3"))
if runtime.GOOS != "windows" {
authUserData = append(authUserData, []byte("test5:$apr1$gLnIkRIf$Xr/6aJfmIrihP4b2N2tcs/\n")...)
err = ioutil.WriteFile(authUserFile, authUserData, os.ModePerm)
require.NoError(t, err)
err = os.Chmod(authUserFile, 0001)
require.NoError(t, err)
require.False(t, httpAuth.ValidateCredentials("test5", "password2"))
err = os.Chmod(authUserFile, os.ModePerm)
require.NoError(t, err)
}
authUserData = append(authUserData, []byte("\"foo\"bar\"\r\n")...)
err = ioutil.WriteFile(authUserFile, authUserData, os.ModePerm)
require.NoError(t, err)
require.False(t, httpAuth.ValidateCredentials("test2", "password2"))
err = os.Remove(authUserFile)
require.NoError(t, err)
}
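The hashes in the test file above are standard htpasswd formats (bcrypt, md5crypt, apr1). A minimal sketch for producing a bcrypt line with the same package the provider uses (the username and password are placeholders):

hash, err := bcrypt.GenerateFromPassword([]byte("password1"), bcrypt.DefaultCost)
if err != nil {
	panic(err)
}
fmt.Printf("test1:%s\n", hash) // e.g. test1:$2a$10$..., matched via bcryptPwdPrefixes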

200
common/tlsutils.go Normal file

@@ -0,0 +1,200 @@
package common
import (
"crypto/tls"
"crypto/x509"
"crypto/x509/pkix"
"fmt"
"io/ioutil"
"path/filepath"
"sync"
"time"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
)
// CertManager defines a TLS certificate manager
type CertManager struct {
certPath string
keyPath string
configDir string
logSender string
sync.RWMutex
caCertificates []string
caRevocationLists []string
cert *tls.Certificate
rootCAs *x509.CertPool
crls []*pkix.CertificateList
}
// Reload tries to reload certificate and CRLs
func (m *CertManager) Reload() error {
errCrt := m.loadCertificate()
errCRLs := m.LoadCRLs()
if errCrt != nil {
return errCrt
}
return errCRLs
}
// loadCertificate loads the configured x509 key pair
func (m *CertManager) loadCertificate() error {
newCert, err := tls.LoadX509KeyPair(m.certPath, m.keyPath)
if err != nil {
logger.Warn(m.logSender, "", "unable to load X509 key pair, cert file %#v key file %#v error: %v",
m.certPath, m.keyPath, err)
return err
}
logger.Debug(m.logSender, "", "TLS certificate %#v successfully loaded", m.certPath)
m.Lock()
defer m.Unlock()
m.cert = &newCert
return nil
}
// GetCertificateFunc returns a function that returns the loaded certificate
func (m *CertManager) GetCertificateFunc() func(*tls.ClientHelloInfo) (*tls.Certificate, error) {
return func(clientHello *tls.ClientHelloInfo) (*tls.Certificate, error) {
m.RLock()
defer m.RUnlock()
return m.cert, nil
}
}
// IsRevoked returns true if the specified certificate has been revoked
func (m *CertManager) IsRevoked(crt *x509.Certificate, caCrt *x509.Certificate) bool {
m.RLock()
defer m.RUnlock()
if crt == nil || caCrt == nil {
logger.Warn(m.logSender, "", "unable to verify crt %v ca crt %v", crt, caCrt)
return len(m.crls) > 0
}
for _, crl := range m.crls {
if !crl.HasExpired(time.Now()) && caCrt.CheckCRLSignature(crl) == nil {
for _, rc := range crl.TBSCertList.RevokedCertificates {
if rc.SerialNumber.Cmp(crt.SerialNumber) == 0 {
return true
}
}
}
}
return false
}
// LoadCRLs tries to load certificate revocation lists from the given paths
func (m *CertManager) LoadCRLs() error {
if len(m.caRevocationLists) == 0 {
return nil
}
var crls []*pkix.CertificateList
for _, revocationList := range m.caRevocationLists {
if !utils.IsFileInputValid(revocationList) {
return fmt.Errorf("invalid root CA revocation list %#v", revocationList)
}
if revocationList != "" && !filepath.IsAbs(revocationList) {
revocationList = filepath.Join(m.configDir, revocationList)
}
crlBytes, err := ioutil.ReadFile(revocationList)
if err != nil {
logger.Warn(m.logSender, "unable to read revocation list %#v", revocationList)
return err
}
crl, err := x509.ParseCRL(crlBytes)
if err != nil {
logger.Warn(m.logSender, "unable to parse revocation list %#v", revocationList)
return err
}
logger.Debug(m.logSender, "", "CRL %#v successfully loaded", revocationList)
crls = append(crls, crl)
}
m.Lock()
defer m.Unlock()
m.crls = crls
return nil
}
// GetRootCAs returns the set of root certificate authorities that servers
// use if required to verify a client certificate
func (m *CertManager) GetRootCAs() *x509.CertPool {
m.RLock()
defer m.RUnlock()
return m.rootCAs
}
// LoadRootCAs tries to load the root certificate authorities from the given paths
func (m *CertManager) LoadRootCAs() error {
if len(m.caCertificates) == 0 {
return nil
}
rootCAs := x509.NewCertPool()
for _, rootCA := range m.caCertificates {
if !utils.IsFileInputValid(rootCA) {
return fmt.Errorf("invalid root CA certificate %#v", rootCA)
}
if rootCA != "" && !filepath.IsAbs(rootCA) {
rootCA = filepath.Join(m.configDir, rootCA)
}
crt, err := ioutil.ReadFile(rootCA)
if err != nil {
return err
}
if rootCAs.AppendCertsFromPEM(crt) {
logger.Debug(m.logSender, "", "TLS certificate authority %#v successfully loaded", rootCA)
} else {
err := fmt.Errorf("unable to load TLS certificate authority %#v", rootCA)
logger.Warn(m.logSender, "", "%v", err)
return err
}
}
m.Lock()
defer m.Unlock()
m.rootCAs = rootCAs
return nil
}
// SetCACertificates sets the root CA authorities file paths.
// This should not be changed at runtime
func (m *CertManager) SetCACertificates(caCertificates []string) {
m.caCertificates = caCertificates
}
// SetCARevocationLists sets the CA revocation lists file paths.
// This should not be changed at runtime
func (m *CertManager) SetCARevocationLists(caRevocationLists []string) {
m.caRevocationLists = caRevocationLists
}
// NewCertManager creates a new certificate manager
func NewCertManager(certificateFile, certificateKeyFile, configDir, logSender string) (*CertManager, error) {
manager := &CertManager{
cert: nil,
certPath: certificateFile,
keyPath: certificateKeyFile,
configDir: configDir,
logSender: logSender,
}
err := manager.loadCertificate()
if err != nil {
return nil, err
}
return manager, nil
}
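A minimal sketch of wiring a CertManager into a TLS server config, assuming PEM files on disk (the paths and log sender are hypothetical):

manager, err := NewCertManager("server.crt", "server.key", "/etc/sftpgo", "example")
if err != nil {
	panic(err)
}
manager.SetCACertificates([]string{"ca.crt"})
if err := manager.LoadRootCAs(); err != nil {
	panic(err)
}
tlsConfig := &tls.Config{
	GetCertificate: manager.GetCertificateFunc(),
	ClientCAs:      manager.GetRootCAs(),
	ClientAuth:     tls.VerifyClientCertIfGiven, // verify a client cert only when one is sent
}
_ = tlsConfig // pass to an http.Server or tls.Listen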

387
common/tlsutils_test.go Normal file

@@ -0,0 +1,387 @@
package common
import (
"crypto/tls"
"crypto/x509"
"io/ioutil"
"os"
"path/filepath"
"testing"
"github.com/stretchr/testify/assert"
)
const (
serverCert = `-----BEGIN CERTIFICATE-----
MIIEIDCCAgigAwIBAgIRAPOR9zTkX35vSdeyGpF8Rn8wDQYJKoZIhvcNAQELBQAw
EzERMA8GA1UEAxMIQ2VydEF1dGgwHhcNMjEwMTAyMjEyMjU1WhcNMjIwNzAyMjEz
MDUxWjARMQ8wDQYDVQQDEwZzZXJ2ZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAw
ggEKAoIBAQCte0PJhCTNqTiqdwk/s4JanKIMKUVWr2u94a+JYy5gJ9xYXrQ49SeN
m+fwhTAOqctP5zNVkFqxlBytJZg3pqCKqRoOOl1qVgL3F3o7JdhZGi67aw8QMLPx
tLPpYWnnrlUQoXRJdTlqkDqO8lOZl9HO5oZeidPZ7r5BVD6ZiujAC6Zg0jIc+EPt
qhaUJ1CStoAeRf1rNWKmDsLv5hEaDWoaHF9sNVzDQg6atZ3ici00qQj+uvEZo8mL
k6egg3rqsTv9ml2qlrRgFumt99J60hTt3tuQaAruHY80O9nGy3SCXC11daa7gszH
ElCRvhUVoOxRtB54YBEtJ0gEpFnTO9J1AgMBAAGjcTBvMA4GA1UdDwEB/wQEAwID
uDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwHQYDVR0OBBYEFAgDXwPV
nhztNz+H20iNWgoIx8adMB8GA1UdIwQYMBaAFO1yCNAGr/zQTJIi8lw3w5OiuBvM
MA0GCSqGSIb3DQEBCwUAA4ICAQCR5kgIb4vAtrtsXD24n6RtU1yIXHPLNmDStVrH
uaMYNnHlLhRlQFCjHhjWvZ89FQC7FeNOITc3FpibJySyw7JfnsyEOGxEbcAS4uLB
2pdAiJPqdQtxIVcyi5vu53m1T5tm0sy8sBrGxU466aDQ8VGqjcjfTwNIyoFMd3p/
ezFRvg2BudwU9hqApgfHfLi4WCuI3hLO2tbmgDinyH0HI0YYNNweGpiBYbTLF4Tx
H6vHgD9USMZeu4+HX0IIsBiHQD7TTIe5ceREkPcNPd5qTpIvT3zKQ/KwwT90/zjP
aWmz6pLxBfjRu7MY/bDfxfRUqsrLYJCVBoaDVRWR9rhiPIFkC5JzoWD/4hdj2iis
N0+OOaJ77L+/ArFprE+7Fu3cSdYlfiNjV8R5kE29cAxKLI92CjAiTKrEuxKcQPKO
+taWNKIYYjEDZwVnzlkTIl007X0RBuzu9gh4w5NwJdt8ZOJAp0JV0Cq+UvG+FC/v
lYk82E6j1HKhf4CXmrjsrD1Fyu41mpVFOpa2ATiFGvms913MkXuyO8g99IllmDw1
D7/PN4Qe9N6Zm7yoKZM0IUw2v+SUMIdOAZ7dptO9ZjtYOfiAIYN3jM8R4JYgPiuD
DGSM9LJBJxCxI/DiO1y1Z3n9TcdDQYut8Gqdi/aYXw2YeqyHXosX5Od3vcK/O5zC
pOJTYQ==
-----END CERTIFICATE-----`
serverKey = `-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEArXtDyYQkzak4qncJP7OCWpyiDClFVq9rveGviWMuYCfcWF60
OPUnjZvn8IUwDqnLT+czVZBasZQcrSWYN6agiqkaDjpdalYC9xd6OyXYWRouu2sP
EDCz8bSz6WFp565VEKF0SXU5apA6jvJTmZfRzuaGXonT2e6+QVQ+mYrowAumYNIy
HPhD7aoWlCdQkraAHkX9azVipg7C7+YRGg1qGhxfbDVcw0IOmrWd4nItNKkI/rrx
GaPJi5OnoIN66rE7/Zpdqpa0YBbprffSetIU7d7bkGgK7h2PNDvZxst0glwtdXWm
u4LMxxJQkb4VFaDsUbQeeGARLSdIBKRZ0zvSdQIDAQABAoIBAF4sI8goq7HYwqIG
rEagM4rsrCrd3H4KC/qvoJJ7/JjGCp8OCddBfY8pquat5kCPe4aMgxlXm2P6evaj
CdZr5Ypf8Xz3we4PctyfKgMhsCfuRqAGpc6sIYJ8DY4LC2pxAExe2LlnoRtv39np
QeiGuaYPDbIUL6SGLVFZYgIHngFhbDYfL83q3Cb/PnivUGFvUVQCfRBUKO2d8KYq
TrVB5BWD2GrHor24ApQmci1OOqfbkIevkK6bk8HUfSZiZGI9LUQiPHMxi5k2x43J
nIwhZnW2N28dorKnWHg2vh7viGvinVRZ3MEyX150oCw/L6SYM4fqR6t2ZSBgNQHT
ZNoDtwECgYEA4lXMgtYqKuSlZ3TKfxAj03tJ/gbRdKcUCEGXEbdpY70tTu6KESZS
etid4Ut/sWEoPTJsgYiGbgJl571t1O8oR1UZYgh9hBGHLV6UEIt9n2PbExhE2vL3
SB7+LfO+tMvM4qKUBN+uy4GpU0NiyEEecw4x4S7MRSyHFRIDR7B6RV0CgYEAxDgS
mDaNUfSdfB5mXekLUJAwqeKRdL9RjXYaHbnoZ5kIwQ73tFikRwyTsLQwMhjE1l3z
MItTzIAyTf/BlK3dsp6bHTaT7hXIjHBsuKATN5qAuUpzTrg9+QaCawVSlQgNeF3a
iyfD4dVp66Bzn3gO757TWqmroBZ2e1owbAQvF/kCgYAKT/Jze6KMNcK7hfy78VZQ
imuCoXjlob8t6R8i9YJdwv7Pe9rakS5s3nXDEBePU2fr8eIzvK6zUHSoLF9WtlbV
eTEg4FYnsEzCam7AmjptCrWulwp8F1ng9ViLa3Gi9y4snU+1MSPbrdqzKnzTtvPW
Ni1bnzA7bp3w/dMcbxQDGQKBgB50hY5SiUS7LuZg4YqZ7UOn3aXAoMr6FvJZ7lvG
yyepPQ6aACBh0b2lWhcHIKPl7EdJdcGHHo6TJzusAqPNCKf8rh6upe9COkpx+K3/
SnxK4sffol4JgrTwKbXqsZKoGU8hYhZPKbwXn8UOtmN+AvN2N1/PDfBfDCzBJtrd
G2IhAoGBAN19976xAMDjKb2+wd/mQYA2fR7E8lodxdX3LDnblYmndTKY67nVo94M
FHPKZSN590HkFJ+wmChnOrqjtosY+N25CKMS7939EUIDrq+B+bYTWM/gcwdLXNUk
Rygw/078Z3ZDJamXmyez5WpeLFrrbmI8sLnBBmSjQvMb6vCEtQ2Z
-----END RSA PRIVATE KEY-----`
caCRT = `-----BEGIN CERTIFICATE-----
MIIE5jCCAs6gAwIBAgIBATANBgkqhkiG9w0BAQsFADATMREwDwYDVQQDEwhDZXJ0
QXV0aDAeFw0yMTAxMDIyMTIwNTVaFw0yMjA3MDIyMTMwNTJaMBMxETAPBgNVBAMT
CENlcnRBdXRoMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA4Tiho5xW
AC15JRkMwfp3/TJwI2As7MY5dele5cmdr5bHAE+sRKqC+Ti88OJWCV5saoyax/1S
CjxJlQMZMl169P1QYJskKjdG2sdv6RLWLMgwSNRRjxp/Bw9dHdiEb9MjLgu28Jro
9peQkHcRHeMf5hM9WvlIJGrdzbC4hUehmqggcqgARainBkYjf0SwuWxHeu4nMqkp
Ak5tcSTLCjHfEFHZ9Te0TIPG5YkWocQKyeLgu4lvuU+DD2W2lym+YVUtRMGs1Env
k7p+N0DcGU26qfzZ2sF5ZXkqm7dBsGQB9pIxwc2Q8T1dCIyP9OQCKVILdc5aVFf1
cryQFHYzYNNZXFlIBims5VV5Mgfp8ESHQSue+v6n6ykecLEyKt1F1Y/MWY/nWUSI
8zdq83jdBAZVjo9MSthxVn57/06s/hQca65IpcTZV2gX0a+eRlAVqaRbAhL3LaZe
bYsW3WHKoUOftwemuep3nL51TzlXZVL7Oz/ClGaEOsnGG9KFO6jh+W768qC0zLQI
CdE7v2Zex98sZteHCg9fGJHIaYoF0aJG5P3WI5oZf2fy7UIYN9ADLFZiorCXAZEh
CSU6mDoRViZ4RGR9GZxbDZ9KYn7O8M/KCR72bkQg73TlMsk1zSXEw0MKLUjtsw6c
rZ0Jt8t3sRatHO3JrYHALMt9vZfyNCZp0IsCAwEAAaNFMEMwDgYDVR0PAQH/BAQD
AgEGMBIGA1UdEwEB/wQIMAYBAf8CAQAwHQYDVR0OBBYEFO1yCNAGr/zQTJIi8lw3
w5OiuBvMMA0GCSqGSIb3DQEBCwUAA4ICAQA6gCNuM7r8mnx674dm31GxBjQy5ZwB
7CxDzYEvL/oiZ3Tv3HlPfN2LAAsJUfGnghh9DOytenL2CTZWjl/emP5eijzmlP+9
zva5I6CIMCf/eDDVsRdO244t0o4uG7+At0IgSDM3bpVaVb4RHZNjEziYChsEYY8d
HK6iwuRSvFniV6yhR/Vj1Ymi9yZ5xclqseLXiQnUB0PkfIk23+7s42cXB16653fH
O/FsPyKBLiKJArizLYQc12aP3QOrYoYD9+fAzIIzew7A5C0aanZCGzkuFpO6TRlD
Tb7ry9Gf0DfPpCgxraH8tOcmnqp/ka3hjqo/SRnnTk0IFrmmLdarJvjD46rKwBo4
MjyAIR1mQ5j8GTlSFBmSgETOQ/EYvO3FPLmra1Fh7L+DvaVzTpqI9fG3TuyyY+Ri
Fby4ycTOGSZOe5Fh8lqkX5Y47mCUJ3zHzOA1vUJy2eTlMRGpu47Eb1++Vm6EzPUP
2EF5aD+zwcssh+atZvQbwxpgVqVcyLt91RSkKkmZQslh0rnlTb68yxvUnD3zw7So
o6TAf9UvwVMEvdLT9NnFd6hwi2jcNte/h538GJwXeBb8EkfpqLKpTKyicnOdkamZ
7E9zY8SHNRYMwB9coQ/W8NvufbCgkvOoLyMXk5edbXofXl3PhNGOlraWbghBnzf5
r3rwjFsQOoZotA==
-----END CERTIFICATE-----`
caKey = `-----BEGIN RSA PRIVATE KEY-----
MIIJKQIBAAKCAgEA4Tiho5xWAC15JRkMwfp3/TJwI2As7MY5dele5cmdr5bHAE+s
RKqC+Ti88OJWCV5saoyax/1SCjxJlQMZMl169P1QYJskKjdG2sdv6RLWLMgwSNRR
jxp/Bw9dHdiEb9MjLgu28Jro9peQkHcRHeMf5hM9WvlIJGrdzbC4hUehmqggcqgA
RainBkYjf0SwuWxHeu4nMqkpAk5tcSTLCjHfEFHZ9Te0TIPG5YkWocQKyeLgu4lv
uU+DD2W2lym+YVUtRMGs1Envk7p+N0DcGU26qfzZ2sF5ZXkqm7dBsGQB9pIxwc2Q
8T1dCIyP9OQCKVILdc5aVFf1cryQFHYzYNNZXFlIBims5VV5Mgfp8ESHQSue+v6n
6ykecLEyKt1F1Y/MWY/nWUSI8zdq83jdBAZVjo9MSthxVn57/06s/hQca65IpcTZ
V2gX0a+eRlAVqaRbAhL3LaZebYsW3WHKoUOftwemuep3nL51TzlXZVL7Oz/ClGaE
OsnGG9KFO6jh+W768qC0zLQICdE7v2Zex98sZteHCg9fGJHIaYoF0aJG5P3WI5oZ
f2fy7UIYN9ADLFZiorCXAZEhCSU6mDoRViZ4RGR9GZxbDZ9KYn7O8M/KCR72bkQg
73TlMsk1zSXEw0MKLUjtsw6crZ0Jt8t3sRatHO3JrYHALMt9vZfyNCZp0IsCAwEA
AQKCAgAV+ElERYbaI5VyufvVnFJCH75ypPoc6sVGLEq2jbFVJJcq/5qlZCC8oP1F
Xj7YUR6wUiDzK1Hqb7EZ2SCHGjlZVrCVi+y+NYAy7UuMZ+r+mVSkdhmypPoJPUVv
GOTqZ6VB46Cn3eSl0WknvoWr7bD555yPmEuiSc5zNy74yWEJTidEKAFGyknowcTK
sG+w1tAuPLcUKQ44DGB+rgEkcHL7C5EAa7upzx0C3RmZFB+dTAVyJdkBMbFuOhTS
sB7DLeTplR7/4mp9da7EQw51ZXC1DlZOEZt++4/desXsqATNAbva1OuzrLG7mMKe
N/PCBh/aERQcsCvgUmaXqGQgqN1Jhw8kbXnjZnVd9iE7TAh7ki3VqNy1OMgTwOex
bBYWaCqHuDYIxCjeW0qLJcn0cKQ13FVYrxgInf4Jp82SQht5b/zLL3IRZEyKcLJF
kL6g1wlmTUTUX0z8eZzlM0ZCrqtExjgElMO/rV971nyNV5WU8Og3NmE8/slqMrmJ
DlrQr9q0WJsDKj1IMe46EUM6ix7bbxC5NIfJ96dgdxZDn6ghjca6iZYqqUACvmUj
cq08s3R4Ouw9/87kn11wwGBx2yDueCwrjKEGc0RKjweGbwu0nBxOrkJ8JXz6bAv7
1OKfYaX3afI9B8x4uaiuRs38oBQlg9uAYFfl4HNBPuQikGLmsQKCAQEA8VjFOsaz
y6NMZzKXi7WZ48uu3ed5x3Kf6RyDr1WvQ1jkBMv9b6b8Gp1CRnPqviRBto9L8QAg
bCXZTqnXzn//brskmW8IZgqjAlf89AWa53piucu9/hgidrHRZobs5gTqev28uJdc
zcuw1g8c3nCpY9WeTjHODzX5NXYRLFpkazLfYa6c8Q9jZR4KKrpdM+66fxL0JlOd
7dN0oQtEqEAugsd3cwkZgvWhY4oM7FGErrZoDLy273ZdJzi/vU+dThyVzfD8Ab8u
VxxuobVMT/S608zbe+uaiUdov5s96OkCl87403UNKJBH+6LNb3rjBBLE9NPN5ET9
JLQMrYd+zj8jQwKCAQEA7uU5I9MOufo9bIgJqjY4Ie1+Ex9DZEMUYFAvGNCJCVcS
mwOdGF8AWzIavTLACmEDJO7t/OrBdoo4L7IEsCNjgA3WiIwIMiWUVqveAGUMEXr6
TRI5EolV6FTqqIP6AS+BAeBq7G1ELgsTrWNHh11rW3+3kBMuOCn77PUQ8WHwcq/r
teZcZn4Ewcr6P7cBODgVvnBPhe/J8xHS0HFVCeS1CvaiNYgees5yA80Apo9IPjDJ
YWawLjmH5wUBI5yDFVp067wjqJnoKPSoKwWkZXqUk+zgFXx5KT0gh/c5yh1frASp
q6oaYnHEVC5qj2SpT1GFLonTcrQUXiSkiUudvNu1GQKCAQEAmko+5GFtRe0ihgLQ
4S76r6diJli6AKil1Fg3U1r6zZpBQ1PJtJxTJQyN9w5Z7q6tF/GqAesrzxevQdvQ
rCImAPtA3ZofC2UXawMnIjWHHx6diNvYnV1+gtUQ4nO1dSOFZ5VZFcUmPiZO6boF
oaryj3FcX+71JcJCjEvrlKhA9Es0hXUkvfMxfs5if4he1zlyHpTWYr4oA4egUugq
P0mwskikc3VIyvEO+NyjgFxo72yLPkFSzemkidN8uKDyFqKtnlfGM7OuA2CY1WZa
3+67lXWshx9KzyJIs92iCYkU8EoPxtdYzyrV6efdX7x27v60zTOut5TnJJS6WiF6
Do5MkwKCAQAxoR9IyP0DN/BwzqYrXU42Bi+t603F04W1KJNQNWpyrUspNwv41yus
xnD1o0hwH41Wq+h3JZIBfV+E0RfWO9Pc84MBJQ5C1LnHc7cQH+3s575+Km3+4tcd
CB8j2R8kBeloKWYtLdn/Mr/ownpGreqyvIq2/LUaZ+Z1aMgXTYB1YwS16mCBzmZQ
mEl62RsAwe4KfSyYJ6OtwqMoOJMxFfliiLBULK4gVykqjvk2oQeiG+KKQJoTUFJi
dRCyhD5bPkqR+qjxyt+HOqSBI4/uoROi05AOBqjpH1DVzk+MJKQOiX1yM0l98CKY
Vng+x+vAla/0Zh+ucajVkgk4mKPxazdpAoIBAQC17vWk4KYJpF2RC3pKPcQ0PdiX
bN35YNlvyhkYlSfDNdyH3aDrGiycUyW2mMXUgEDFsLRxHMTL+zPC6efqO6sTAJDY
cBptsW4drW/qo8NTx3dNOisLkW+mGGJOR/w157hREFr29ymCVMYu/Z7fVWIeSpCq
p3u8YX8WTljrxwSczlGjvpM7uJx3SfYRM4TUoy+8wU8bK74LywLa5f60bQY6Dye0
Gqd9O6OoPfgcQlwjC5MiAofeqwPJvU0hQOPoehZyNLAmOCWXTYWaTP7lxO1r6+NE
M3hGYqW3W8Ixua71OskCypBZg/HVlIP/lzjRzdx+VOB2hbWVth2Iup/Z1egW
-----END RSA PRIVATE KEY-----`
caCRL = `-----BEGIN X509 CRL-----
MIICpzCBkAIBATANBgkqhkiG9w0BAQsFADATMREwDwYDVQQDEwhDZXJ0QXV0aBcN
MjEwMTAyMjEzNDA1WhcNMjMwMTAyMjEzNDA1WjAkMCICEQC+l04DbHWMyC3fG09k
VXf+Fw0yMTAxMDIyMTM0MDVaoCMwITAfBgNVHSMEGDAWgBTtcgjQBq/80EySIvJc
N8OTorgbzDANBgkqhkiG9w0BAQsFAAOCAgEAEJ7z+uNc8sqtxlOhSdTGDzX/xput
E857kFQkSlMnU2whQ8c+XpYrBLA5vIZJNSSwohTpM4+zVBX/bJpmu3wqqaArRO9/
YcW5mQk9Anvb4WjQW1cHmtNapMTzoC9AiYt/OWPfy+P6JCgCr4Hy6LgQyIRL6bM9
VYTalolOm1qa4Y5cIeT7iHq/91mfaqo8/6MYRjLl8DOTROpmw8OS9bCXkzGKdCat
AbAzwkQUSauyoCQ10rpX+Y64w9ng3g4Dr20aCqPf5osaqplEJ2HTK8ljDTidlslv
9anQj8ax3Su89vI8+hK+YbfVQwrThabgdSjQsn+veyx8GlP8WwHLAQ379KjZjWg+
OlOSwBeU1vTdP0QcB8X5C2gVujAyuQekbaV86xzIBOj7vZdfHZ6ee30TZ2FKiMyg
7/N2OqW0w77ChsjB4MSHJCfuTgIeg62GzuZXLM+Q2Z9LBdtm4Byg+sm/P52adOEg
gVb2Zf4KSvsAmA0PIBlu449/QXUFcMxzLFy7mwTeZj2B4Ln0Hm0szV9f9R8MwMtB
SyLYxVH+mgqaR6Jkk22Q/yYyLPaELfafX5gp/AIXG8n0zxfVaTvK3auSgb1Q6ZLS
5QH9dSIsmZHlPq7GoSXmKpMdjUL8eaky/IMteioyXgsBiATzl5L2dsw6MTX3MDF0
QbDK+MzhmbKfDxs=
-----END X509 CRL-----`
client1Crt = `-----BEGIN CERTIFICATE-----
MIIEITCCAgmgAwIBAgIRAIppZHoj1hM80D7WzTEKLuAwDQYJKoZIhvcNAQELBQAw
EzERMA8GA1UEAxMIQ2VydEF1dGgwHhcNMjEwMTAyMjEyMzEwWhcNMjIwNzAyMjEz
MDUxWjASMRAwDgYDVQQDEwdjbGllbnQxMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEAoKbYY9MdF2kF/nhBESIiZTdVYtA8XL9xrIZyDj9EnCiTxHiVbJtH
XVwszqSl5TRrotPmnmAQcX3r8OCk+z+RQZ0QQj257P3kG6q4rNnOcWCS5xEd20jP
yhQ3m+hMGfZsotNTQze1ochuQgLUN6IPyPxZkH22ia3jX4iu1eo/QxeLYHj1UHw4
3Cii9yE+j5kPUC21xmnrGKdUrB55NYLXHx6yTIqYR5znSOVB8oJi18/hwdZmH859
DHhm0Hx1HrS+jbjI3+CMorZJ3WUyNf+CkiVLD3xYutPbxzEpwiqkG/XYzLH0habT
cDcILo18n+o3jvem2KWBrDhyairjIDscwQIDAQABo3EwbzAOBgNVHQ8BAf8EBAMC
A7gwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMB0GA1UdDgQWBBSJ5GIv
zIrE4ZSQt2+CGblKTDswizAfBgNVHSMEGDAWgBTtcgjQBq/80EySIvJcN8OTorgb
zDANBgkqhkiG9w0BAQsFAAOCAgEALh4f5GhvNYNou0Ab04iQBbLEdOu2RlbK1B5n
K9P/umYenBHMY/z6HT3+6tpcHsDuqE8UVdq3f3Gh4S2Gu9m8PRitT+cJ3gdo9Plm
3rD4ufn/s6rGg3ppydXcedm17492tbccUDWOBZw3IO/ASVq13WPgT0/Kev7cPq0k
sSdSNhVeXqx8Myc2/d+8GYyzbul2Kpfa7h9i24sK49E9ftnSmsIvngONo08eT1T0
3wAOyK2981LIsHaAWcneShKFLDB6LeXIT9oitOYhiykhFlBZ4M1GNlSNfhQ8IIQP
xbqMNXCLkW4/BtLhGEEcg0QVso6Kudl9rzgTfQknrdF7pHp6rS46wYUjoSyIY6dl
oLmnoAVJX36J3QPWelePI9e07X2wrTfiZWewwgw3KNRWjd6/zfPLe7GoqXnK1S2z
PT8qMfCaTwKTtUkzXuTFvQ8bAo2My/mS8FOcpkt2oQWeOsADHAUX7fz5BCoa2DL3
k/7Mh4gVT+JYZEoTwCFuYHgMWFWe98naqHi9lB4yR981p1QgXgxO7qBeipagKY1F
LlH1iwXUqZ3MZnkNA+4e1Fglsw3sa/rC+L98HnznJ/YbTfQbCP6aQ1qcOymrjMud
7MrFwqZjtd/SK4Qx1VpK6jGEAtPgWBTUS3p9ayg6lqjMBjsmySWfvRsDQbq6P5Ct
O/e3EH8=
-----END CERTIFICATE-----`
client1Key = `-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEAoKbYY9MdF2kF/nhBESIiZTdVYtA8XL9xrIZyDj9EnCiTxHiV
bJtHXVwszqSl5TRrotPmnmAQcX3r8OCk+z+RQZ0QQj257P3kG6q4rNnOcWCS5xEd
20jPyhQ3m+hMGfZsotNTQze1ochuQgLUN6IPyPxZkH22ia3jX4iu1eo/QxeLYHj1
UHw43Cii9yE+j5kPUC21xmnrGKdUrB55NYLXHx6yTIqYR5znSOVB8oJi18/hwdZm
H859DHhm0Hx1HrS+jbjI3+CMorZJ3WUyNf+CkiVLD3xYutPbxzEpwiqkG/XYzLH0
habTcDcILo18n+o3jvem2KWBrDhyairjIDscwQIDAQABAoIBAEBSjVFqtbsp0byR
aXvyrtLX1Ng7h++at2jca85Ihq//jyqbHTje8zPuNAKI6eNbmb0YGr5OuEa4pD9N
ssDmMsKSoG/lRwwcm7h4InkSvBWpFShvMgUaohfHAHzsBYxfnh+TfULsi0y7c2n6
t/2OZcOTRkkUDIITnXYiw93ibHHv2Mv2bBDu35kGrcK+c2dN5IL5ZjTjMRpbJTe2
44RBJbdTxHBVSgoGBnugF+s2aEma6Ehsj70oyfoVpM6Aed5kGge0A5zA1JO7WCn9
Ay/DzlULRXHjJIoRWd2NKvx5n3FNppUc9vJh2plRHalRooZ2+MjSf8HmXlvG2Hpb
ScvmWgECgYEA1G+A/2KnxWsr/7uWIJ7ClcGCiNLdk17Pv3DZ3G4qUsU2ITftfIbb
tU0Q/b19na1IY8Pjy9ptP7t74/hF5kky97cf1FA8F+nMj/k4+wO8QDI8OJfzVzh9
PwielA5vbE+xmvis5Hdp8/od1Yrc/rPSy2TKtPFhvsqXjqoUmOAjDP8CgYEAwZjH
9dt1sc2lx/rMxihlWEzQ3JPswKW9/LJAmbRBoSWF9FGNjbX7uhWtXRKJkzb8ZAwa
88azluNo2oftbDD/+jw8b2cDgaJHlLAkSD4O1D1RthW7/LKD15qZ/oFsRb13NV85
ZNKtwslXGbfVNyGKUVFm7fVA8vBAOUey+LKDFj8CgYEAg8WWstOzVdYguMTXXuyb
ruEV42FJaDyLiSirOvxq7GTAKuLSQUg1yMRBIeQEo2X1XU0JZE3dLodRVhuO4EXP
g7Dn4X7Th9HSvgvNuIacowWGLWSz4Qp9RjhGhXhezUSx2nseY6le46PmFavJYYSR
4PBofMyt4PcyA6Cknh+KHmkCgYEAnTriG7ETE0a7v4DXUpB4TpCEiMCy5Xs2o8Z5
ZNva+W+qLVUWq+MDAIyechqeFSvxK6gRM69LJ96lx+XhU58wJiFJzAhT9rK/g+jS
bsHH9WOfu0xHkuHA5hgvvV2Le9B2wqgFyva4HJy82qxMxCu/VG/SMqyfBS9OWbb7
ibQhdq0CgYAl53LUWZsFSZIth1vux2LVOsI8C3X1oiXDGpnrdlQ+K7z57hq5EsRq
GC+INxwXbvKNqp5h0z2MvmKYPDlGVTgw8f8JjM7TkN17ERLcydhdRrMONUryZpo8
1xTob+8blyJgfxZUIAKbMbMbIiU0WAF0rfD/eJJwS4htOW/Hfv4TGA==
-----END RSA PRIVATE KEY-----`
// client 2 crt is revoked
client2Crt = `-----BEGIN CERTIFICATE-----
MIIEITCCAgmgAwIBAgIRAL6XTgNsdYzILd8bT2RVd/4wDQYJKoZIhvcNAQELBQAw
EzERMA8GA1UEAxMIQ2VydEF1dGgwHhcNMjEwMTAyMjEyMzIwWhcNMjIwNzAyMjEz
MDUxWjASMRAwDgYDVQQDEwdjbGllbnQyMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEA6xjW5KQR3/OFQtV5M75WINqQ4AzXSu6DhSz/yumaaQZP/UxY+6hi
jcrFzGo9MMie/Sza8DhkXOFAl2BelUubrOeB2cl+/Gr8OCyRi2Gv6j3zCsuN/4jQ
tNaoez/IbkDvI3l/ZpzBtnuNY2RiemGgHuORXHRVf3qVlsw+npBIRW5rM2HkO/xG
oZjeBErWVu390Lyn+Gvk2TqQDnkutWnxUC60/zPlHhXZ4BwaFAekbSnjsSDB1YFM
s8HwW4oBryoxdj3/+/qLrBHt75IdLw3T7/V1UDJQM3EvSQOr12w4egpldhtsC871
nnBQZeY6qA5feffIwwg/6lJm70o6S6OX6wIDAQABo3EwbzAOBgNVHQ8BAf8EBAMC
A7gwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMB0GA1UdDgQWBBTB84v5
t9HqhLhMODbn6oYkEQt3KzAfBgNVHSMEGDAWgBTtcgjQBq/80EySIvJcN8OTorgb
zDANBgkqhkiG9w0BAQsFAAOCAgEALGtBCve5k8tToL3oLuXp/oSik6ovIB/zq4I/
4zNMYPU31+ZWz6aahysgx1JL1yqTa3Qm8o2tu52MbnV10dM7CIw7c/cYa+c+OPcG
5LF97kp13X+r2axy+CmwM86b4ILaDGs2Qyai6VB6k7oFUve+av5o7aUrNFpqGCJz
HWdtHZSVA3JMATzy0TfWanwkzreqfdw7qH0yZ9bDURlBKAVWrqnCstva9jRuv+AI
eqxr/4Ro986TFjJdoAP3Vr16CPg7/B6GA/KmsBWJrpeJdPWq4i2gpLKvYZoy89qD
mUZf34RbzcCtV4NvV1DadGnt4us0nvLrvS5rL2+2uWD09kZYq9RbLkvgzF/cY0fz
i7I1bi5XQ+alWe0uAk5ZZL/D+GTRYUX1AWwCqwJxmHrMxcskMyO9pXvLyuSWRDLo
YNBrbX9nLcfJzVCp+X+9sntTHjs4l6Cw+fLepJIgtgqdCHtbhTiv68vSM6cgb4br
6n2xrXRKuioiWFOrTSRr+oalZh8dGJ/xvwY8IbWknZAvml9mf1VvfE7Ma5P777QM
fsbYVTq0Y3R/5hIWsC3HA5z6MIM8L1oRe/YyhP3CTmrCHkVKyDOosGXpGz+JVcyo
cfYkY5A3yFKB2HaCwZSfwFmRhxkrYWGEbHv3Cd9YkZs1J3hNhGFZyVMC9Uh0S85a
6zdDidU=
-----END CERTIFICATE-----`
client2Key = `-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA6xjW5KQR3/OFQtV5M75WINqQ4AzXSu6DhSz/yumaaQZP/UxY
+6hijcrFzGo9MMie/Sza8DhkXOFAl2BelUubrOeB2cl+/Gr8OCyRi2Gv6j3zCsuN
/4jQtNaoez/IbkDvI3l/ZpzBtnuNY2RiemGgHuORXHRVf3qVlsw+npBIRW5rM2Hk
O/xGoZjeBErWVu390Lyn+Gvk2TqQDnkutWnxUC60/zPlHhXZ4BwaFAekbSnjsSDB
1YFMs8HwW4oBryoxdj3/+/qLrBHt75IdLw3T7/V1UDJQM3EvSQOr12w4egpldhts
C871nnBQZeY6qA5feffIwwg/6lJm70o6S6OX6wIDAQABAoIBAFatstVb1KdQXsq0
cFpui8zTKOUiduJOrDkWzTygAmlEhYtrccdfXu7OWz0x0lvBLDVGK3a0I/TGrAzj
4BuFY+FM/egxTVt9in6fmA3et4BS1OAfCryzUdfK6RV//8L+t+zJZ/qKQzWnugpy
QYjDo8ifuMFwtvEoXizaIyBNLAhEp9hnrv+Tyi2O2gahPvCHsD48zkyZRCHYRstD
NH5cIrwz9/RJgPO1KI+QsJE7Nh7stR0sbr+5TPU4fnsL2mNhMUF2TJrwIPrc1yp+
YIUjdnh3SO88j4TQT3CIrWi8i4pOy6N0dcVn3gpCRGaqAKyS2ZYUj+yVtLO4KwxZ
SZ1lNvECgYEA78BrF7f4ETfWSLcBQ3qxfLs7ibB6IYo2x25685FhZjD+zLXM1AKb
FJHEXUm3mUYrFJK6AFEyOQnyGKBOLs3S6oTAswMPbTkkZeD1Y9O6uv0AHASLZnK6
pC6ub0eSRF5LUyTQ55Jj8D7QsjXJueO8v+G5ihWhNSN9tB2UA+8NBmkCgYEA+weq
cvoeMIEMBQHnNNLy35bwfqrceGyPIRBcUIvzQfY1vk7KW6DYOUzC7u+WUzy/hA52
DjXVVhua2eMQ9qqtOav7djcMc2W9RbLowxvno7K5qiCss013MeWk64TCWy+WMp5A
AVAtOliC3hMkIKqvR2poqn+IBTh1449agUJQqTMCgYEAu06IHGq1GraV6g9XpGF5
wqoAlMzUTdnOfDabRilBf/YtSr+J++ThRcuwLvXFw7CnPZZ4TIEjDJ7xjj3HdxeE
fYYjineMmNd40UNUU556F1ZLvJfsVKizmkuCKhwvcMx+asGrmA+tlmds4p3VMS50
KzDtpKzLWlmU/p/RINWlRmkCgYBy0pHTn7aZZx2xWKqCDg+L2EXPGqZX6wgZDpu7
OBifzlfM4ctL2CmvI/5yPmLbVgkgBWFYpKUdiujsyyEiQvWTUKhn7UwjqKDHtcsk
G6p7xS+JswJrzX4885bZJ9Oi1AR2yM3sC9l0O7I4lDbNPmWIXBLeEhGMmcPKv/Kc
91Ff4wKBgQCF3ur+Vt0PSU0ucrPVHjCe7tqazm0LJaWbPXL1Aw0pzdM2EcNcW/MA
w0kqpr7MgJ94qhXCBcVcfPuFN9fBOadM3UBj1B45Cz3pptoK+ScI8XKno6jvVK/p
xr5cb9VBRBtB9aOKVfuRhpatAfS2Pzm2Htae9lFn7slGPUmu2hkjDw==
-----END RSA PRIVATE KEY-----`
)
func TestLoadCertificate(t *testing.T) {
caCrtPath := filepath.Join(os.TempDir(), "testca.crt")
caCrlPath := filepath.Join(os.TempDir(), "testcrl.crt")
certPath := filepath.Join(os.TempDir(), "test.crt")
keyPath := filepath.Join(os.TempDir(), "test.key")
err := ioutil.WriteFile(caCrtPath, []byte(caCRT), os.ModePerm)
assert.NoError(t, err)
err = ioutil.WriteFile(caCrlPath, []byte(caCRL), os.ModePerm)
assert.NoError(t, err)
err = ioutil.WriteFile(certPath, []byte(serverCert), os.ModePerm)
assert.NoError(t, err)
err = ioutil.WriteFile(keyPath, []byte(serverKey), os.ModePerm)
assert.NoError(t, err)
certManager, err := NewCertManager(certPath, keyPath, configDir, logSenderTest)
assert.NoError(t, err)
certFunc := certManager.GetCertificateFunc()
if assert.NotNil(t, certFunc) {
hello := &tls.ClientHelloInfo{
ServerName: "localhost",
CipherSuites: []uint16{tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305},
}
cert, err := certFunc(hello)
assert.NoError(t, err)
assert.Equal(t, certManager.cert, cert)
}
certManager.SetCACertificates(nil)
err = certManager.LoadRootCAs()
assert.NoError(t, err)
certManager.SetCACertificates([]string{""})
err = certManager.LoadRootCAs()
assert.Error(t, err)
certManager.SetCACertificates([]string{"invalid"})
err = certManager.LoadRootCAs()
assert.Error(t, err)
// loading the key as root CA must fail
certManager.SetCACertificates([]string{keyPath})
err = certManager.LoadRootCAs()
assert.Error(t, err)
certManager.SetCACertificates([]string{certPath})
err = certManager.LoadRootCAs()
assert.NoError(t, err)
rootCa := certManager.GetRootCAs()
assert.NotNil(t, rootCa)
err = certManager.Reload()
assert.NoError(t, err)
certManager.SetCARevocationLists(nil)
err = certManager.LoadCRLs()
assert.NoError(t, err)
certManager.SetCARevocationLists([]string{""})
err = certManager.LoadCRLs()
assert.Error(t, err)
certManager.SetCARevocationLists([]string{"invalid crl"})
err = certManager.LoadCRLs()
assert.Error(t, err)
// this is not a crl and must fail
certManager.SetCARevocationLists([]string{caCrtPath})
err = certManager.LoadCRLs()
assert.Error(t, err)
certManager.SetCARevocationLists([]string{caCrlPath})
err = certManager.LoadCRLs()
assert.NoError(t, err)
crt, err := tls.X509KeyPair([]byte(caCRT), []byte(caKey))
assert.NoError(t, err)
x509CAcrt, err := x509.ParseCertificate(crt.Certificate[0])
assert.NoError(t, err)
crt, err = tls.X509KeyPair([]byte(client1Crt), []byte(client1Key))
assert.NoError(t, err)
x509crt, err := x509.ParseCertificate(crt.Certificate[0])
if assert.NoError(t, err) {
assert.False(t, certManager.IsRevoked(x509crt, x509CAcrt))
}
crt, err = tls.X509KeyPair([]byte(client2Crt), []byte(client2Key))
assert.NoError(t, err)
x509crt, err = x509.ParseCertificate(crt.Certificate[0])
if assert.NoError(t, err) {
assert.True(t, certManager.IsRevoked(x509crt, x509CAcrt))
}
assert.True(t, certManager.IsRevoked(nil, nil))
err = os.Remove(caCrlPath)
assert.NoError(t, err)
err = certManager.Reload()
assert.Error(t, err)
err = os.Remove(certPath)
assert.NoError(t, err)
err = os.Remove(keyPath)
assert.NoError(t, err)
err = certManager.Reload()
assert.Error(t, err)
err = os.Remove(caCrtPath)
assert.NoError(t, err)
}
func TestLoadInvalidCert(t *testing.T) {
certManager, err := NewCertManager("test.crt", "test.key", configDir, logSenderTest)
assert.Error(t, err)
assert.Nil(t, certManager)
}
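For context, a minimal sketch of the kind of check a CRL-based IsRevoked has to perform, assuming the manager verifies each loaded CRL against the issuing CA and then matches serial numbers. isRevokedByCRL is a hypothetical name, not SFTPGo's actual implementation (the real CertManager keeps its parsed CRLs internally):

package main

import (
    "crypto/x509"
    "crypto/x509/pkix"
)

// isRevokedByCRL is a hypothetical helper, not SFTPGo's code: it treats
// nil inputs as revoked (matching IsRevoked(nil, nil) in the test above),
// accepts only CRLs signed by the CA that issued the certificate, and then
// searches the certificate serial number among the revoked entries.
func isRevokedByCRL(crt, caCrt *x509.Certificate, crl *pkix.CertificateList) bool {
    if crt == nil || caCrt == nil || crl == nil {
        return true
    }
    if err := caCrt.CheckCRLSignature(crl); err != nil {
        // this CRL was not issued by the given CA, it cannot revoke the cert
        return false
    }
    for _, revoked := range crl.TBSCertList.RevokedCertificates {
        if crt.SerialNumber.Cmp(revoked.SerialNumber) == 0 {
            return true
        }
    }
    return false
}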

common/transfer.go Normal file

@@ -0,0 +1,303 @@
package common
import (
"errors"
"path"
"sync"
"sync/atomic"
"time"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/metrics"
"github.com/drakkan/sftpgo/vfs"
)
var (
// ErrTransferClosed defines the error returned for a closed transfer
ErrTransferClosed = errors.New("transfer already closed")
)
// BaseTransfer contains the transfer details common to all protocols for an upload or a download.
type BaseTransfer struct { //nolint:maligned
ID uint64
BytesSent int64
BytesReceived int64
Fs vfs.Fs
File vfs.File
Connection *BaseConnection
cancelFn func()
fsPath string
requestPath string
start time.Time
MaxWriteSize int64
MinWriteOffset int64
InitialSize int64
isNewFile bool
transferType int
AbortTransfer int32
sync.Mutex
ErrTransfer error
}
// NewBaseTransfer returns a new BaseTransfer and adds it to the given connection
func NewBaseTransfer(file vfs.File, conn *BaseConnection, cancelFn func(), fsPath, requestPath string, transferType int,
minWriteOffset, initialSize, maxWriteSize int64, isNewFile bool, fs vfs.Fs) *BaseTransfer {
t := &BaseTransfer{
ID: conn.GetTransferID(),
File: file,
Connection: conn,
cancelFn: cancelFn,
fsPath: fsPath,
start: time.Now(),
transferType: transferType,
MinWriteOffset: minWriteOffset,
InitialSize: initialSize,
isNewFile: isNewFile,
requestPath: requestPath,
BytesSent: 0,
BytesReceived: 0,
MaxWriteSize: maxWriteSize,
AbortTransfer: 0,
Fs: fs,
}
conn.AddTransfer(t)
return t
}
// GetID returns the transfer ID
func (t *BaseTransfer) GetID() uint64 {
return t.ID
}
// GetType returns the transfer type
func (t *BaseTransfer) GetType() int {
return t.transferType
}
// GetSize returns the transferred size
func (t *BaseTransfer) GetSize() int64 {
if t.transferType == TransferDownload {
return atomic.LoadInt64(&t.BytesSent)
}
return atomic.LoadInt64(&t.BytesReceived)
}
// GetStartTime returns the start time
func (t *BaseTransfer) GetStartTime() time.Time {
return t.start
}
// SignalClose signals that the transfer should be closed.
// For some protocols, for example WebDAV, we have no
// access to the network connection, so we use this method
// to make the next read or write fail
func (t *BaseTransfer) SignalClose() {
atomic.StoreInt32(&(t.AbortTransfer), 1)
}
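As a hedged illustration of how a protocol-specific transfer can honor this flag. The type below is hypothetical, not SFTPGo's WebDAV code; it assumes the same package as BaseTransfer, that sync/atomic is imported as above, and that vfs.File exposes ReadAt as *os.File does:

// davTransfer is a hypothetical wrapper used only for illustration
type davTransfer struct {
    *BaseTransfer
}

// ReadAt fails the pending download once SignalClose has set the flag,
// since we cannot close the underlying network connection ourselves
func (t *davTransfer) ReadAt(p []byte, off int64) (int, error) {
    if atomic.LoadInt32(&t.AbortTransfer) == 1 {
        return 0, ErrTransferClosed
    }
    n, err := t.File.ReadAt(p, off)
    atomic.AddInt64(&t.BytesSent, int64(n))
    return n, err
}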
// GetVirtualPath returns the transfer virtual path
func (t *BaseTransfer) GetVirtualPath() string {
return t.requestPath
}
// GetFsPath returns the transfer filesystem path
func (t *BaseTransfer) GetFsPath() string {
return t.fsPath
}
// GetRealFsPath returns the real transfer filesystem path.
// If atomic uploads are enabled this differs from fsPath
func (t *BaseTransfer) GetRealFsPath(fsPath string) string {
if fsPath == t.GetFsPath() {
if t.File != nil {
return t.File.Name()
}
return t.fsPath
}
return ""
}
// SetCancelFn sets the cancel function for the transfer
func (t *BaseTransfer) SetCancelFn(cancelFn func()) {
t.cancelFn = cancelFn
}
// Truncate changes the size of the opened file.
// Supported for local fs only
func (t *BaseTransfer) Truncate(fsPath string, size int64) (int64, error) {
if fsPath == t.GetFsPath() {
if t.File != nil {
initialSize := t.InitialSize
err := t.File.Truncate(size)
if err == nil {
t.Lock()
t.InitialSize = size
if t.MaxWriteSize > 0 {
sizeDiff := initialSize - size
t.MaxWriteSize += sizeDiff
metrics.TransferCompleted(atomic.LoadInt64(&t.BytesSent), atomic.LoadInt64(&t.BytesReceived), t.transferType, t.ErrTransfer)
atomic.StoreInt64(&t.BytesReceived, 0)
}
t.Unlock()
}
t.Connection.Log(logger.LevelDebug, "file %#v truncated to size %v max write size %v new initial size %v err: %v",
fsPath, size, t.MaxWriteSize, t.InitialSize, err)
return initialSize, err
}
if size == 0 && atomic.LoadInt64(&t.BytesSent) == 0 {
// for cloud providers the file is always truncated to zero; we don't support append/resume for uploads
return 0, nil
}
return 0, ErrOpUnsupported
}
return 0, errTransferMismatch
}
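A worked example of the MaxWriteSize adjustment above, using the values from TestTruncate later in this diff: a file opened with InitialSize 5 and MaxWriteSize 100 that is truncated to size 2 gets MaxWriteSize += (5 - 2), i.e. 103, because shrinking the file frees write budget the client may use again; the test asserts exactly this value.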
// TransferError is called if there is an unexpected error.
// For example network or client issues
func (t *BaseTransfer) TransferError(err error) {
t.Lock()
defer t.Unlock()
if t.ErrTransfer != nil {
return
}
t.ErrTransfer = err
if t.cancelFn != nil {
t.cancelFn()
}
elapsed := time.Since(t.start).Nanoseconds() / 1000000
t.Connection.Log(logger.LevelWarn, "Unexpected error for transfer, path: %#v, error: \"%v\" bytes sent: %v, "+
"bytes received: %v transfer running since %v ms", t.fsPath, t.ErrTransfer, atomic.LoadInt64(&t.BytesSent),
atomic.LoadInt64(&t.BytesReceived), elapsed)
}
func (t *BaseTransfer) getUploadFileSize() (int64, error) {
var fileSize int64
info, err := t.Fs.Stat(t.fsPath)
if err == nil {
fileSize = info.Size()
}
if vfs.IsCryptOsFs(t.Fs) && t.ErrTransfer != nil {
errDelete := t.Connection.Fs.Remove(t.fsPath, false)
if errDelete != nil {
t.Connection.Log(logger.LevelWarn, "error removing partial crypto file %#v: %v", t.fsPath, errDelete)
}
}
return fileSize, err
}
// Close is called when the transfer is completed.
// It logs the transfer info, updates the user quota (for uploads)
// and executes any defined action.
// If there is an error no action will be executed and, in atomic mode,
// we try to delete the temporary file
func (t *BaseTransfer) Close() error {
defer t.Connection.RemoveTransfer(t)
var err error
numFiles := 0
if t.isNewFile {
numFiles = 1
}
metrics.TransferCompleted(atomic.LoadInt64(&t.BytesSent), atomic.LoadInt64(&t.BytesReceived), t.transferType, t.ErrTransfer)
if t.ErrTransfer == ErrQuotaExceeded && t.File != nil {
// if quota is exceeded we try to remove the partial file for uploads to the local filesystem
err = t.Connection.Fs.Remove(t.File.Name(), false)
if err == nil {
numFiles--
atomic.StoreInt64(&t.BytesReceived, 0)
t.MinWriteOffset = 0
}
t.Connection.Log(logger.LevelWarn, "upload denied due to space limit, delete temporary file: %#v, deletion error: %v",
t.File.Name(), err)
} else if t.transferType == TransferUpload && t.File != nil && t.File.Name() != t.fsPath {
if t.ErrTransfer == nil || Config.UploadMode == UploadModeAtomicWithResume {
err = t.Connection.Fs.Rename(t.File.Name(), t.fsPath)
t.Connection.Log(logger.LevelDebug, "atomic upload completed, rename: %#v -> %#v, error: %v",
t.File.Name(), t.fsPath, err)
} else {
err = t.Connection.Fs.Remove(t.File.Name(), false)
t.Connection.Log(logger.LevelWarn, "atomic upload completed with error: \"%v\", delete temporary file: %#v, "+
"deletion error: %v", t.ErrTransfer, t.File.Name(), err)
if err == nil {
numFiles--
atomic.StoreInt64(&t.BytesReceived, 0)
t.MinWriteOffset = 0
}
}
}
elapsed := time.Since(t.start).Nanoseconds() / 1000000
if t.transferType == TransferDownload {
logger.TransferLog(downloadLogSender, t.fsPath, elapsed, atomic.LoadInt64(&t.BytesSent), t.Connection.User.Username,
t.Connection.ID, t.Connection.protocol)
action := newActionNotification(&t.Connection.User, operationDownload, t.fsPath, "", "", t.Connection.protocol,
atomic.LoadInt64(&t.BytesSent), t.ErrTransfer)
go actionHandler.Handle(action) //nolint:errcheck
} else {
fileSize := atomic.LoadInt64(&t.BytesReceived) + t.MinWriteOffset
if statSize, err := t.getUploadFileSize(); err == nil {
fileSize = statSize
}
t.Connection.Log(logger.LevelDebug, "uploaded file size %v", fileSize)
t.updateQuota(numFiles, fileSize)
logger.TransferLog(uploadLogSender, t.fsPath, elapsed, atomic.LoadInt64(&t.BytesReceived), t.Connection.User.Username,
t.Connection.ID, t.Connection.protocol)
action := newActionNotification(&t.Connection.User, operationUpload, t.fsPath, "", "", t.Connection.protocol,
fileSize, t.ErrTransfer)
go actionHandler.Handle(action) //nolint:errcheck
}
if t.ErrTransfer != nil {
t.Connection.Log(logger.LevelWarn, "transfer error: %v, path: %#v", t.ErrTransfer, t.fsPath)
if err == nil {
err = t.ErrTransfer
}
}
return err
}
func (t *BaseTransfer) updateQuota(numFiles int, fileSize int64) bool {
// S3 uploads are atomic; if there is an error nothing is uploaded
if t.File == nil && t.ErrTransfer != nil {
return false
}
sizeDiff := fileSize - t.InitialSize
if t.transferType == TransferUpload && (numFiles != 0 || sizeDiff > 0) {
vfolder, err := t.Connection.User.GetVirtualFolderForPath(path.Dir(t.requestPath))
if err == nil {
dataprovider.UpdateVirtualFolderQuota(vfolder.BaseVirtualFolder, numFiles, //nolint:errcheck
sizeDiff, false)
if vfolder.IsIncludedInUserQuota() {
dataprovider.UpdateUserQuota(t.Connection.User, numFiles, sizeDiff, false) //nolint:errcheck
}
} else {
dataprovider.UpdateUserQuota(t.Connection.User, numFiles, sizeDiff, false) //nolint:errcheck
}
return true
}
return false
}
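In other words, for an upload into a path mapped to a virtual folder the folder quota is always updated, while the user quota is updated only when IsIncludedInUserQuota() reports true; for any other path only the user quota changes. TestTransferUpdateQuota below exercises the virtual folder branch.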
// HandleThrottle manages bandwidth throttling
func (t *BaseTransfer) HandleThrottle() {
var wantedBandwidth int64
var transferredBytes int64
if t.transferType == TransferDownload {
wantedBandwidth = t.Connection.User.DownloadBandwidth
transferredBytes = atomic.LoadInt64(&t.BytesSent)
} else {
wantedBandwidth = t.Connection.User.UploadBandwidth
transferredBytes = atomic.LoadInt64(&t.BytesReceived)
}
if wantedBandwidth > 0 {
// real and wanted elapsed as milliseconds, bytes as kilobytes
realElapsed := time.Since(t.start).Nanoseconds() / 1000000
// transferredBytes / 1024 gives KB; dividing by wantedBandwidth (KB/s) gives seconds, multiplied by 1000 for milliseconds
wantedElapsed := 1000 * (transferredBytes / 1024) / wantedBandwidth
if wantedElapsed > realElapsed {
toSleep := time.Duration(wantedElapsed - realElapsed)
time.Sleep(toSleep * time.Millisecond)
}
}
}
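A worked example with the figures used in TestTransferThrottling below: 131072 bytes uploaded with UploadBandwidth 50 KB/s give wantedElapsed = 1000 * (131072 / 1024) / 50 = 2560 ms, so if only, say, 500 ms have really elapsed the transfer sleeps for the remaining 2060 ms.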

common/transfer_test.go Normal file

@@ -0,0 +1,276 @@
package common
import (
"errors"
"io/ioutil"
"os"
"path/filepath"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/kms"
"github.com/drakkan/sftpgo/vfs"
)
func TestTransferUpdateQuota(t *testing.T) {
conn := NewBaseConnection("", ProtocolSFTP, dataprovider.User{}, nil)
transfer := BaseTransfer{
Connection: conn,
transferType: TransferUpload,
BytesReceived: 123,
Fs: vfs.NewOsFs("", os.TempDir(), nil),
}
errFake := errors.New("fake error")
transfer.TransferError(errFake)
assert.False(t, transfer.updateQuota(1, 0))
err := transfer.Close()
if assert.Error(t, err) {
assert.EqualError(t, err, errFake.Error())
}
mappedPath := filepath.Join(os.TempDir(), "vdir")
vdirPath := "/vdir"
conn.User.VirtualFolders = append(conn.User.VirtualFolders, vfs.VirtualFolder{
BaseVirtualFolder: vfs.BaseVirtualFolder{
MappedPath: mappedPath,
},
VirtualPath: vdirPath,
QuotaFiles: -1,
QuotaSize: -1,
})
transfer.ErrTransfer = nil
transfer.BytesReceived = 1
transfer.requestPath = "/vdir/file"
assert.True(t, transfer.updateQuota(1, 0))
err = transfer.Close()
assert.NoError(t, err)
}
func TestTransferThrottling(t *testing.T) {
u := dataprovider.User{
Username: "test",
UploadBandwidth: 50,
DownloadBandwidth: 40,
}
fs := vfs.NewOsFs("", os.TempDir(), nil)
testFileSize := int64(131072)
wantedUploadElapsed := 1000 * (testFileSize / 1024) / u.UploadBandwidth
wantedDownloadElapsed := 1000 * (testFileSize / 1024) / u.DownloadBandwidth
// some tolerance
wantedUploadElapsed -= wantedDownloadElapsed / 10
wantedDownloadElapsed -= wantedDownloadElapsed / 10
conn := NewBaseConnection("id", ProtocolSCP, u, nil)
transfer := NewBaseTransfer(nil, conn, nil, "", "", TransferUpload, 0, 0, 0, true, fs)
transfer.BytesReceived = testFileSize
transfer.Connection.UpdateLastActivity()
startTime := transfer.Connection.GetLastActivity()
transfer.HandleThrottle()
elapsed := time.Since(startTime).Nanoseconds() / 1000000
assert.GreaterOrEqual(t, elapsed, wantedUploadElapsed, "upload bandwidth throttling not respected")
err := transfer.Close()
assert.NoError(t, err)
transfer = NewBaseTransfer(nil, conn, nil, "", "", TransferDownload, 0, 0, 0, true, fs)
transfer.BytesSent = testFileSize
transfer.Connection.UpdateLastActivity()
startTime = transfer.Connection.GetLastActivity()
transfer.HandleThrottle()
elapsed = time.Since(startTime).Nanoseconds() / 1000000
assert.GreaterOrEqual(t, elapsed, wantedDownloadElapsed, "download bandwidth throttling not respected")
err = transfer.Close()
assert.NoError(t, err)
}
func TestRealPath(t *testing.T) {
testFile := filepath.Join(os.TempDir(), "afile.txt")
fs := vfs.NewOsFs("123", os.TempDir(), nil)
u := dataprovider.User{
Username: "user",
HomeDir: os.TempDir(),
}
u.Permissions = make(map[string][]string)
u.Permissions["/"] = []string{dataprovider.PermAny}
file, err := os.Create(testFile)
require.NoError(t, err)
conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, u, fs)
transfer := NewBaseTransfer(file, conn, nil, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
rPath := transfer.GetRealFsPath(testFile)
assert.Equal(t, testFile, rPath)
rPath = conn.getRealFsPath(testFile)
assert.Equal(t, testFile, rPath)
err = transfer.Close()
assert.NoError(t, err)
err = file.Close()
assert.NoError(t, err)
transfer.File = nil
rPath = transfer.GetRealFsPath(testFile)
assert.Equal(t, testFile, rPath)
rPath = transfer.GetRealFsPath("")
assert.Empty(t, rPath)
err = os.Remove(testFile)
assert.NoError(t, err)
assert.Len(t, conn.GetTransfers(), 0)
}
func TestTruncate(t *testing.T) {
testFile := filepath.Join(os.TempDir(), "transfer_test_file")
fs := vfs.NewOsFs("123", os.TempDir(), nil)
u := dataprovider.User{
Username: "user",
HomeDir: os.TempDir(),
}
u.Permissions = make(map[string][]string)
u.Permissions["/"] = []string{dataprovider.PermAny}
file, err := os.Create(testFile)
if !assert.NoError(t, err) {
assert.FailNow(t, "unable to open test file")
}
_, err = file.Write([]byte("hello"))
assert.NoError(t, err)
conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, u, fs)
transfer := NewBaseTransfer(file, conn, nil, testFile, "/transfer_test_file", TransferUpload, 0, 5, 100, false, fs)
err = conn.SetStat(testFile, "/transfer_test_file", &StatAttributes{
Size: 2,
Flags: StatAttrSize,
})
assert.NoError(t, err)
assert.Equal(t, int64(103), transfer.MaxWriteSize)
err = transfer.Close()
assert.NoError(t, err)
err = file.Close()
assert.NoError(t, err)
fi, err := os.Stat(testFile)
if assert.NoError(t, err) {
assert.Equal(t, int64(2), fi.Size())
}
transfer = NewBaseTransfer(file, conn, nil, testFile, "/transfer_test_file", TransferUpload, 0, 0, 100, true, fs)
// file.Stat will fail on a closed file
err = conn.SetStat(testFile, "/transfer_test_file", &StatAttributes{
Size: 2,
Flags: StatAttrSize,
})
assert.Error(t, err)
err = transfer.Close()
assert.NoError(t, err)
transfer = NewBaseTransfer(nil, conn, nil, testFile, "", TransferUpload, 0, 0, 0, true, fs)
_, err = transfer.Truncate("mismatch", 0)
assert.EqualError(t, err, errTransferMismatch.Error())
_, err = transfer.Truncate(testFile, 0)
assert.NoError(t, err)
_, err = transfer.Truncate(testFile, 1)
assert.EqualError(t, err, ErrOpUnsupported.Error())
err = transfer.Close()
assert.NoError(t, err)
err = os.Remove(testFile)
assert.NoError(t, err)
assert.Len(t, conn.GetTransfers(), 0)
}
func TestTransferErrors(t *testing.T) {
isCancelled := false
cancelFn := func() {
isCancelled = true
}
testFile := filepath.Join(os.TempDir(), "transfer_test_file")
fs := vfs.NewOsFs("id", os.TempDir(), nil)
u := dataprovider.User{
Username: "test",
HomeDir: os.TempDir(),
}
err := ioutil.WriteFile(testFile, []byte("test data"), os.ModePerm)
assert.NoError(t, err)
file, err := os.Open(testFile)
if !assert.NoError(t, err) {
assert.FailNow(t, "unable to open test file")
}
conn := NewBaseConnection("id", ProtocolSFTP, u, fs)
transfer := NewBaseTransfer(file, conn, nil, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
assert.Nil(t, transfer.cancelFn)
assert.Equal(t, testFile, transfer.GetFsPath())
transfer.SetCancelFn(cancelFn)
errFake := errors.New("err fake")
transfer.BytesReceived = 9
transfer.TransferError(ErrQuotaExceeded)
assert.True(t, isCancelled)
transfer.TransferError(errFake)
assert.Error(t, transfer.ErrTransfer, ErrQuotaExceeded.Error())
// the file is closed by the embedding struct before calling Close
err = file.Close()
assert.NoError(t, err)
err = transfer.Close()
if assert.Error(t, err) {
assert.Error(t, err, ErrQuotaExceeded.Error())
}
assert.NoFileExists(t, testFile)
err = ioutil.WriteFile(testFile, []byte("test data"), os.ModePerm)
assert.NoError(t, err)
file, err = os.Open(testFile)
if !assert.NoError(t, err) {
assert.FailNow(t, "unable to open test file")
}
fsPath := filepath.Join(os.TempDir(), "test_file")
transfer = NewBaseTransfer(file, conn, nil, fsPath, "/test_file", TransferUpload, 0, 0, 0, true, fs)
transfer.BytesReceived = 9
transfer.TransferError(errFake)
assert.Error(t, transfer.ErrTransfer, errFake.Error())
// the file is closed by the embedding struct before calling Close
err = file.Close()
assert.NoError(t, err)
err = transfer.Close()
if assert.Error(t, err) {
assert.Error(t, err, errFake.Error())
}
assert.NoFileExists(t, testFile)
err = ioutil.WriteFile(testFile, []byte("test data"), os.ModePerm)
assert.NoError(t, err)
file, err = os.Open(testFile)
if !assert.NoError(t, err) {
assert.FailNow(t, "unable to open test file")
}
transfer = NewBaseTransfer(file, conn, nil, fsPath, "/test_file", TransferUpload, 0, 0, 0, true, fs)
transfer.BytesReceived = 9
// the file is closed by the embedding struct before calling Close
err = file.Close()
assert.NoError(t, err)
err = transfer.Close()
assert.NoError(t, err)
assert.NoFileExists(t, testFile)
assert.FileExists(t, fsPath)
err = os.Remove(fsPath)
assert.NoError(t, err)
assert.Len(t, conn.GetTransfers(), 0)
}
func TestRemovePartialCryptoFile(t *testing.T) {
testFile := filepath.Join(os.TempDir(), "transfer_test_file")
fs, err := vfs.NewCryptFs("id", os.TempDir(), vfs.CryptFsConfig{Passphrase: kms.NewPlainSecret("secret")})
require.NoError(t, err)
u := dataprovider.User{
Username: "test",
HomeDir: os.TempDir(),
}
conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, u, fs)
transfer := NewBaseTransfer(nil, conn, nil, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
transfer.ErrTransfer = errors.New("test error")
_, err = transfer.getUploadFileSize()
assert.Error(t, err)
err = ioutil.WriteFile(testFile, []byte("test data"), os.ModePerm)
assert.NoError(t, err)
size, err := transfer.getUploadFileSize()
assert.NoError(t, err)
assert.Equal(t, int64(9), size)
assert.NoFileExists(t, testFile)
}


@@ -1,62 +1,124 @@
// Package config manages the configuration.
// Configuration is loaded from sftpgo.conf file.
// If sftpgo.conf is not found or cannot be read or decoded as JSON the default configuration is used.
// The default configuration can be found inside the source tree:
// https://github.com/drakkan/sftpgo/blob/master/sftpgo.conf
// Package config manages the configuration
package config
import (
"errors"
"fmt"
"os"
"path/filepath"
"strconv"
"strings"
"github.com/spf13/viper"
"github.com/drakkan/sftpgo/common"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/ftpd"
"github.com/drakkan/sftpgo/httpclient"
"github.com/drakkan/sftpgo/httpd"
"github.com/drakkan/sftpgo/kms"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/sftpd"
"github.com/drakkan/sftpgo/telemetry"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/version"
"github.com/drakkan/sftpgo/webdavd"
)
const (
logSender = "config"
// DefaultConfigName defines the name for the default config file.
// This is the file name without extension; we use viper, so all the
// config file formats supported by viper are accepted
DefaultConfigName = "sftpgo"
// ConfigEnvPrefix defines a prefix that ENVIRONMENT variables will use
// configName defines the name for config file.
// This name does not include the extension; viper will search for files
// with supported extensions such as "sftpgo.json", "sftpgo.yaml" and so on
configName = "sftpgo"
// ConfigEnvPrefix defines a prefix that environment variables will use
configEnvPrefix = "sftpgo"
)
var (
globalConf globalConfig
defaultBanner = fmt.Sprintf("SFTPGo_%v", version.Get().Version)
defaultSFTPDBanner = fmt.Sprintf("SFTPGo_%v", version.Get().Version)
defaultFTPDBanner = fmt.Sprintf("SFTPGo %v ready", version.Get().Version)
defaultSFTPDBinding = sftpd.Binding{
Address: "",
Port: 2022,
ApplyProxyConfig: true,
}
defaultFTPDBinding = ftpd.Binding{
Address: "",
Port: 0,
ApplyProxyConfig: true,
TLSMode: 0,
ForcePassiveIP: "",
ClientAuthType: 0,
}
defaultWebDAVDBinding = webdavd.Binding{
Address: "",
Port: 0,
EnableHTTPS: false,
ClientAuthType: 0,
}
defaultHTTPDBinding = httpd.Binding{
Address: "127.0.0.1",
Port: 8080,
EnableWebAdmin: true,
EnableHTTPS: false,
ClientAuthType: 0,
}
)
type globalConfig struct {
Common common.Configuration `json:"common" mapstructure:"common"`
SFTPD sftpd.Configuration `json:"sftpd" mapstructure:"sftpd"`
FTPD ftpd.Configuration `json:"ftpd" mapstructure:"ftpd"`
WebDAVD webdavd.Configuration `json:"webdavd" mapstructure:"webdavd"`
ProviderConf dataprovider.Config `json:"data_provider" mapstructure:"data_provider"`
HTTPDConfig httpd.Conf `json:"httpd" mapstructure:"httpd"`
HTTPConfig httpclient.Config `json:"http" mapstructure:"http"`
KMSConfig kms.Configuration `json:"kms" mapstructure:"kms"`
TelemetryConfig telemetry.Conf `json:"telemetry" mapstructure:"telemetry"`
}
func init() {
Init()
}
// Init initializes the global configuration.
// It is not supposed to be called outside of this package.
// It is exported to minimize refactoring efforts. Will eventually disappear.
func Init() {
// create a default configuration to use if no config file is provided
globalConf = globalConfig{
SFTPD: sftpd.Configuration{
Banner: defaultBanner,
BindPort: 2022,
BindAddress: "",
Common: common.Configuration{
IdleTimeout: 15,
MaxAuthTries: 0,
Umask: "0022",
UploadMode: 0,
Actions: sftpd.Actions{
Actions: common.ProtocolActions{
ExecuteOn: []string{},
Hook: "",
},
SetstatMode: 0,
ProxyProtocol: 0,
ProxyAllowed: []string{},
PostConnectHook: "",
MaxTotalConnections: 0,
DefenderConfig: common.DefenderConfig{
Enabled: false,
BanTime: 30,
BanTimeIncrement: 50,
Threshold: 15,
ScoreInvalid: 2,
ScoreValid: 1,
ObservationTime: 30,
EntriesSoftLimit: 100,
EntriesHardLimit: 150,
SafeListFile: "",
BlockListFile: "",
},
},
SFTPD: sftpd.Configuration{
Banner: defaultSFTPDBanner,
Bindings: []sftpd.Binding{defaultSFTPDBinding},
MaxAuthTries: 0,
HostKeys: []string{},
KexAlgorithms: []string{},
Ciphers: []string{},
@@ -65,8 +127,51 @@ func init() {
LoginBannerFile: "",
EnabledSSHCommands: sftpd.GetDefaultSSHCommands(),
KeyboardInteractiveHook: "",
ProxyProtocol: 0,
ProxyAllowed: []string{},
PasswordAuthentication: true,
},
FTPD: ftpd.Configuration{
Bindings: []ftpd.Binding{defaultFTPDBinding},
Banner: defaultFTPDBanner,
BannerFile: "",
ActiveTransfersPortNon20: true,
PassivePortRange: ftpd.PortRange{
Start: 50000,
End: 50100,
},
DisableActiveMode: false,
EnableSite: false,
HASHSupport: 0,
CombineSupport: 0,
CertificateFile: "",
CertificateKeyFile: "",
CACertificates: []string{},
CARevocationLists: []string{},
},
WebDAVD: webdavd.Configuration{
Bindings: []webdavd.Binding{defaultWebDAVDBinding},
CertificateFile: "",
CertificateKeyFile: "",
CACertificates: []string{},
CARevocationLists: []string{},
Cors: webdavd.Cors{
Enabled: false,
AllowedOrigins: []string{},
AllowedMethods: []string{},
AllowedHeaders: []string{},
ExposedHeaders: []string{},
AllowCredentials: false,
MaxAge: 0,
},
Cache: webdavd.Cache{
Users: webdavd.UsersCacheConfig{
ExpirationTime: 0,
MaxSize: 50,
},
MimeTypes: webdavd.MimeCacheConfig{
Enabled: true,
MaxSize: 1000,
},
},
},
ProviderConf: dataprovider.Config{
Driver: "sqlite",
@@ -77,12 +182,11 @@ func init() {
Password: "",
ConnectionString: "",
SQLTablesPrefix: "",
ManageUsers: 1,
SSLMode: 0,
TrackQuota: 1,
PoolSize: 0,
UsersBaseDir: "",
Actions: dataprovider.Actions{
Actions: dataprovider.UserActions{
ExecuteOn: []string{},
Hook: "",
},
@@ -90,14 +194,25 @@ func init() {
ExternalAuthScope: 0,
CredentialsPath: "credentials",
PreLoginHook: "",
PostLoginHook: "",
PostLoginScope: 0,
CheckPasswordHook: "",
CheckPasswordScope: 0,
PasswordHashing: dataprovider.PasswordHashing{
Argon2Options: dataprovider.Argon2Options{
Memory: 65536,
Iterations: 1,
Parallelism: 2,
},
},
UpdateMode: 0,
PreferDatabaseCredentials: false,
},
HTTPDConfig: httpd.Conf{
BindPort: 8080,
BindAddress: "127.0.0.1",
Bindings: []httpd.Binding{defaultHTTPDBinding},
TemplatesPath: "templates",
StaticFilesPath: "static",
BackupsPath: "backups",
AuthUserFile: "",
CertificateFile: "",
CertificateKeyFile: "",
},
@@ -106,16 +221,41 @@ func init() {
CACertificates: nil,
SkipTLSVerify: false,
},
KMSConfig: kms.Configuration{
Secrets: kms.Secrets{
URL: "",
MasterKeyPath: "",
},
},
TelemetryConfig: telemetry.Conf{
BindPort: 10000,
BindAddress: "127.0.0.1",
EnableProfiler: false,
AuthUserFile: "",
CertificateFile: "",
CertificateKeyFile: "",
},
}
viper.SetEnvPrefix(configEnvPrefix)
replacer := strings.NewReplacer(".", "__")
viper.SetEnvKeyReplacer(replacer)
viper.SetConfigName(DefaultConfigName)
viper.SetConfigName(configName)
setViperDefaults()
viper.AutomaticEnv()
viper.AllowEmptyEnv(true)
}
// GetCommonConfig returns the common protocols configuration
func GetCommonConfig() common.Configuration {
return globalConf.Common
}
// SetCommonConfig sets the common protocols configuration
func SetCommonConfig(config common.Configuration) {
globalConf.Common = config
}
// GetSFTPDConfig returns the configuration for the SFTP server
func GetSFTPDConfig() sftpd.Configuration {
return globalConf.SFTPD
@@ -126,6 +266,26 @@ func SetSFTPDConfig(config sftpd.Configuration) {
globalConf.SFTPD = config
}
// GetFTPDConfig returns the configuration for the FTP server
func GetFTPDConfig() ftpd.Configuration {
return globalConf.FTPD
}
// SetFTPDConfig sets the configuration for the FTP server
func SetFTPDConfig(config ftpd.Configuration) {
globalConf.FTPD = config
}
// GetWebDAVDConfig returns the configuration for the WebDAV server
func GetWebDAVDConfig() webdavd.Configuration {
return globalConf.WebDAVD
}
// SetWebDAVDConfig sets the configuration for the WebDAV server
func SetWebDAVDConfig(config webdavd.Configuration) {
globalConf.WebDAVD = config
}
// GetHTTPDConfig returns the configuration for the HTTP server
func GetHTTPDConfig() httpd.Conf {
return globalConf.HTTPDConfig
@@ -151,38 +311,94 @@ func GetHTTPConfig() httpclient.Config {
return globalConf.HTTPConfig
}
// GetKMSConfig returns the KMS configuration
func GetKMSConfig() kms.Configuration {
return globalConf.KMSConfig
}
// SetKMSConfig sets the kms configuration
func SetKMSConfig(config kms.Configuration) {
globalConf.KMSConfig = config
}
// GetTelemetryConfig returns the telemetry configuration
func GetTelemetryConfig() telemetry.Conf {
return globalConf.TelemetryConfig
}
// SetTelemetryConfig sets the telemetry configuration
func SetTelemetryConfig(config telemetry.Conf) {
globalConf.TelemetryConfig = config
}
// HasServicesToStart returns true if the config defines at least one service to start.
// Supported services are SFTP, FTP and WebDAV
func HasServicesToStart() bool {
if globalConf.SFTPD.ShouldBind() {
return true
}
if globalConf.FTPD.ShouldBind() {
return true
}
if globalConf.WebDAVD.ShouldBind() {
return true
}
return false
}
func getRedactedGlobalConf() globalConfig {
conf := globalConf
conf.ProviderConf.Password = "[redacted]"
return conf
}
func setConfigFile(configDir, configFile string) {
if configFile == "" {
return
}
if !filepath.IsAbs(configFile) && utils.IsFileInputValid(configFile) {
configFile = filepath.Join(configDir, configFile)
}
viper.SetConfigFile(configFile)
}
// LoadConfig loads the configuration
// configDir will be added to the configuration search paths.
// By default the search path contains the current directory and, on Linux,
// $HOME/.config/sftpgo and /etc/sftpgo too.
// configName is the name of the configuration to search without extension
func LoadConfig(configDir, configName string) error {
// configFile is an absolute or relative path (to the config dir) to the configuration file.
func LoadConfig(configDir, configFile string) error {
var err error
viper.AddConfigPath(configDir)
setViperAdditionalConfigPaths()
viper.AddConfigPath(".")
viper.SetConfigName(configName)
setConfigFile(configDir, configFile)
if err = viper.ReadInConfig(); err != nil {
logger.Warn(logSender, "", "error loading configuration file: %v. Default configuration will be used: %+v",
err, getRedactedGlobalConf())
logger.WarnToConsole("error loading configuration file: %v. Default configuration will be used.", err)
return err
// if the user specifies a configuration file we get os.ErrNotExist.
// viper.ConfigFileNotFoundError is returned if viper is unable
// to find sftpgo.{json, yaml, ...} in any of the search paths
if errors.As(err, &viper.ConfigFileNotFoundError{}) {
logger.Debug(logSender, "", "no configuration file found")
} else {
// should we return the error and not start here?
logger.Warn(logSender, "", "error loading configuration file: %v", err)
logger.WarnToConsole("error loading configuration file: %v", err)
}
}
err = viper.Unmarshal(&globalConf)
if err != nil {
logger.Warn(logSender, "", "error parsing configuration file: %v. Default configuration will be used: %+v",
err, getRedactedGlobalConf())
logger.WarnToConsole("error parsing configuration file: %v. Default configuration will be used.", err)
logger.Warn(logSender, "", "error parsing configuration file: %v", err)
logger.WarnToConsole("error parsing configuration file: %v", err)
return err
}
// viper only supports slices of strings from env vars, so we use our custom method
loadBindingsFromEnv()
checkCommonParamsCompatibility()
if strings.TrimSpace(globalConf.SFTPD.Banner) == "" {
globalConf.SFTPD.Banner = defaultBanner
globalConf.SFTPD.Banner = defaultSFTPDBanner
}
if strings.TrimSpace(globalConf.FTPD.Banner) == "" {
globalConf.FTPD.Banner = defaultFTPDBanner
}
if len(globalConf.ProviderConf.UsersBaseDir) > 0 && !utils.IsFileInputValid(globalConf.ProviderConf.UsersBaseDir) {
err = fmt.Errorf("invalid users base dir %#v will be ignored", globalConf.ProviderConf.UsersBaseDir)
@@ -190,77 +406,35 @@ func LoadConfig(configDir, configName string) error {
logger.Warn(logSender, "", "Configuration error: %v", err)
logger.WarnToConsole("Configuration error: %v", err)
}
if globalConf.SFTPD.UploadMode < 0 || globalConf.SFTPD.UploadMode > 2 {
err = fmt.Errorf("invalid upload_mode 0, 1 and 2 are supported, configured: %v reset upload_mode to 0",
globalConf.SFTPD.UploadMode)
globalConf.SFTPD.UploadMode = 0
logger.Warn(logSender, "", "Configuration error: %v", err)
logger.WarnToConsole("Configuration error: %v", err)
if globalConf.Common.UploadMode < 0 || globalConf.Common.UploadMode > 2 {
warn := fmt.Sprintf("invalid upload_mode 0, 1 and 2 are supported, configured: %v reset upload_mode to 0",
globalConf.Common.UploadMode)
globalConf.Common.UploadMode = 0
logger.Warn(logSender, "", "Configuration error: %v", warn)
logger.WarnToConsole("Configuration error: %v", warn)
}
if globalConf.SFTPD.ProxyProtocol < 0 || globalConf.SFTPD.ProxyProtocol > 2 {
err = fmt.Errorf("invalid proxy_protocol 0, 1 and 2 are supported, configured: %v reset proxy_protocol to 0",
globalConf.SFTPD.ProxyProtocol)
globalConf.SFTPD.ProxyProtocol = 0
logger.Warn(logSender, "", "Configuration error: %v", err)
logger.WarnToConsole("Configuration error: %v", err)
if globalConf.Common.ProxyProtocol < 0 || globalConf.Common.ProxyProtocol > 2 {
warn := fmt.Sprintf("invalid proxy_protocol 0, 1 and 2 are supported, configured: %v reset proxy_protocol to 0",
globalConf.Common.ProxyProtocol)
globalConf.Common.ProxyProtocol = 0
logger.Warn(logSender, "", "Configuration error: %v", warn)
logger.WarnToConsole("Configuration error: %v", warn)
}
if globalConf.ProviderConf.ExternalAuthScope < 0 || globalConf.ProviderConf.ExternalAuthScope > 7 {
err = fmt.Errorf("invalid external_auth_scope: %v reset to 0", globalConf.ProviderConf.ExternalAuthScope)
warn := fmt.Sprintf("invalid external_auth_scope: %v reset to 0", globalConf.ProviderConf.ExternalAuthScope)
globalConf.ProviderConf.ExternalAuthScope = 0
logger.Warn(logSender, "", "Configuration error: %v", err)
logger.WarnToConsole("Configuration error: %v", err)
logger.Warn(logSender, "", "Configuration error: %v", warn)
logger.WarnToConsole("Configuration error: %v", warn)
}
if len(globalConf.ProviderConf.CredentialsPath) == 0 {
err = fmt.Errorf("invalid credentials path, reset to \"credentials\"")
if globalConf.ProviderConf.CredentialsPath == "" {
warn := "invalid credentials path, reset to \"credentials\""
globalConf.ProviderConf.CredentialsPath = "credentials"
logger.Warn(logSender, "", "Configuration error: %v", err)
logger.WarnToConsole("Configuration error: %v", err)
logger.Warn(logSender, "", "Configuration error: %v", warn)
logger.WarnToConsole("Configuration error: %v", warn)
}
checkHooksCompatibility()
checkHostKeyCompatibility()
logger.Debug(logSender, "", "config file used: '%#v', config loaded: %+v", viper.ConfigFileUsed(), getRedactedGlobalConf())
return err
}
func checkHooksCompatibility() {
// we copy deprecated fields to new ones to keep backward compatibility so lint is disabled
if len(globalConf.ProviderConf.ExternalAuthProgram) > 0 && len(globalConf.ProviderConf.ExternalAuthHook) == 0 { //nolint:staticcheck
logger.Warn(logSender, "", "external_auth_program is deprecated, please use external_auth_hook")
logger.WarnToConsole("external_auth_program is deprecated, please use external_auth_hook")
globalConf.ProviderConf.ExternalAuthHook = globalConf.ProviderConf.ExternalAuthProgram //nolint:staticcheck
}
if len(globalConf.ProviderConf.PreLoginProgram) > 0 && len(globalConf.ProviderConf.PreLoginHook) == 0 { //nolint:staticcheck
logger.Warn(logSender, "", "pre_login_program is deprecated, please use pre_login_hook")
logger.WarnToConsole("pre_login_program is deprecated, please use pre_login_hook")
globalConf.ProviderConf.PreLoginHook = globalConf.ProviderConf.PreLoginProgram //nolint:staticcheck
}
if len(globalConf.SFTPD.KeyboardInteractiveProgram) > 0 && len(globalConf.SFTPD.KeyboardInteractiveHook) == 0 { //nolint:staticcheck
logger.Warn(logSender, "", "keyboard_interactive_auth_program is deprecated, please use keyboard_interactive_auth_hook")
logger.WarnToConsole("keyboard_interactive_auth_program is deprecated, please use keyboard_interactive_auth_hook")
globalConf.SFTPD.KeyboardInteractiveHook = globalConf.SFTPD.KeyboardInteractiveProgram //nolint:staticcheck
}
if len(globalConf.SFTPD.Actions.Hook) == 0 {
if len(globalConf.SFTPD.Actions.HTTPNotificationURL) > 0 { //nolint:staticcheck
logger.Warn(logSender, "", "http_notification_url is deprecated, please use hook")
logger.WarnToConsole("http_notification_url is deprecated, please use hook")
globalConf.SFTPD.Actions.Hook = globalConf.SFTPD.Actions.HTTPNotificationURL //nolint:staticcheck
} else if len(globalConf.SFTPD.Actions.Command) > 0 { //nolint:staticcheck
logger.Warn(logSender, "", "command is deprecated, please use hook")
logger.WarnToConsole("command is deprecated, please use hook")
globalConf.SFTPD.Actions.Hook = globalConf.SFTPD.Actions.Command //nolint:staticcheck
}
}
if len(globalConf.ProviderConf.Actions.Hook) == 0 {
if len(globalConf.ProviderConf.Actions.HTTPNotificationURL) > 0 { //nolint:staticcheck
logger.Warn(logSender, "", "http_notification_url is deprecated, please use hook")
logger.WarnToConsole("http_notification_url is deprecated, please use hook")
globalConf.ProviderConf.Actions.Hook = globalConf.ProviderConf.Actions.HTTPNotificationURL //nolint:staticcheck
} else if len(globalConf.ProviderConf.Actions.Command) > 0 { //nolint:staticcheck
logger.Warn(logSender, "", "command is deprecated, please use hook")
logger.WarnToConsole("command is deprecated, please use hook")
globalConf.ProviderConf.Actions.Hook = globalConf.ProviderConf.Actions.Command //nolint:staticcheck
}
}
return nil
}
func checkHostKeyCompatibility() {
@@ -273,3 +447,439 @@ func checkHostKeyCompatibility() {
}
}
}
func checkCommonParamsCompatibility() {
// we copy deprecated fields to new ones to keep backward compatibility so lint is disabled
if globalConf.SFTPD.IdleTimeout > 0 { //nolint:staticcheck
logger.Warn(logSender, "", "sftpd.idle_timeout is deprecated, please use common.idle_timeout")
logger.WarnToConsole("sftpd.idle_timeout is deprecated, please use common.idle_timeout")
globalConf.Common.IdleTimeout = globalConf.SFTPD.IdleTimeout //nolint:staticcheck
}
if len(globalConf.SFTPD.Actions.Hook) > 0 && len(globalConf.Common.Actions.Hook) == 0 { //nolint:staticcheck
logger.Warn(logSender, "", "sftpd.actions is deprecated, please use common.actions")
logger.WarnToConsole("sftpd.actions is deprecated, please use common.actions")
globalConf.Common.Actions.ExecuteOn = globalConf.SFTPD.Actions.ExecuteOn //nolint:staticcheck
globalConf.Common.Actions.Hook = globalConf.SFTPD.Actions.Hook //nolint:staticcheck
}
if globalConf.SFTPD.SetstatMode > 0 && globalConf.Common.SetstatMode == 0 { //nolint:staticcheck
logger.Warn(logSender, "", "sftpd.setstat_mode is deprecated, please use common.setstat_mode")
logger.WarnToConsole("sftpd.setstat_mode is deprecated, please use common.setstat_mode")
globalConf.Common.SetstatMode = globalConf.SFTPD.SetstatMode //nolint:staticcheck
}
if globalConf.SFTPD.UploadMode > 0 && globalConf.Common.UploadMode == 0 { //nolint:staticcheck
logger.Warn(logSender, "", "sftpd.upload_mode is deprecated, please use common.upload_mode")
logger.WarnToConsole("sftpd.upload_mode is deprecated, please use common.upload_mode")
globalConf.Common.UploadMode = globalConf.SFTPD.UploadMode //nolint:staticcheck
}
if globalConf.SFTPD.ProxyProtocol > 0 && globalConf.Common.ProxyProtocol == 0 { //nolint:staticcheck
logger.Warn(logSender, "", "sftpd.proxy_protocol is deprecated, please use common.proxy_protocol")
logger.WarnToConsole("sftpd.proxy_protocol is deprecated, please use common.proxy_protocol")
globalConf.Common.ProxyProtocol = globalConf.SFTPD.ProxyProtocol //nolint:staticcheck
globalConf.Common.ProxyAllowed = globalConf.SFTPD.ProxyAllowed //nolint:staticcheck
}
}
func checkSFTPDBindingsCompatibility() {
if globalConf.SFTPD.BindPort == 0 { //nolint:staticcheck
return
}
// we copy deprecated fields to new ones to keep backward compatibility so lint is disabled
binding := sftpd.Binding{
ApplyProxyConfig: true,
}
if globalConf.SFTPD.BindPort > 0 { //nolint:staticcheck
binding.Port = globalConf.SFTPD.BindPort //nolint:staticcheck
}
if globalConf.SFTPD.BindAddress != "" { //nolint:staticcheck
binding.Address = globalConf.SFTPD.BindAddress //nolint:staticcheck
}
globalConf.SFTPD.Bindings = []sftpd.Binding{binding}
}
func checkFTPDBindingCompatibility() {
if globalConf.FTPD.BindPort == 0 { //nolint:staticcheck
return
}
binding := ftpd.Binding{
ApplyProxyConfig: true,
}
if globalConf.FTPD.BindPort > 0 { //nolint:staticcheck
binding.Port = globalConf.FTPD.BindPort //nolint:staticcheck
}
if globalConf.FTPD.BindAddress != "" { //nolint:staticcheck
binding.Address = globalConf.FTPD.BindAddress //nolint:staticcheck
}
if globalConf.FTPD.TLSMode > 0 { //nolint:staticcheck
binding.TLSMode = globalConf.FTPD.TLSMode //nolint:staticcheck
}
if globalConf.FTPD.ForcePassiveIP != "" { //nolint:staticcheck
binding.ForcePassiveIP = globalConf.FTPD.ForcePassiveIP //nolint:staticcheck
}
globalConf.FTPD.Bindings = []ftpd.Binding{binding}
}
func checkWebDAVDBindingCompatibility() {
if globalConf.WebDAVD.BindPort == 0 { //nolint:staticcheck
return
}
binding := webdavd.Binding{
EnableHTTPS: globalConf.WebDAVD.CertificateFile != "" && globalConf.WebDAVD.CertificateKeyFile != "",
}
if globalConf.WebDAVD.BindPort > 0 { //nolint:staticcheck
binding.Port = globalConf.WebDAVD.BindPort //nolint:staticcheck
}
if globalConf.WebDAVD.BindAddress != "" { //nolint:staticcheck
binding.Address = globalConf.WebDAVD.BindAddress //nolint:staticcheck
}
globalConf.WebDAVD.Bindings = []webdavd.Binding{binding}
}
func checkHTTPDBindingCompatibility() {
if globalConf.HTTPDConfig.BindPort == 0 { //nolint:staticcheck
return
}
binding := httpd.Binding{
EnableWebAdmin: globalConf.HTTPDConfig.StaticFilesPath != "" && globalConf.HTTPDConfig.TemplatesPath != "",
EnableHTTPS: globalConf.HTTPDConfig.CertificateFile != "" && globalConf.HTTPDConfig.CertificateKeyFile != "",
}
if globalConf.HTTPDConfig.BindPort > 0 { //nolint:staticcheck
binding.Port = globalConf.HTTPDConfig.BindPort //nolint:staticcheck
}
if globalConf.HTTPDConfig.BindAddress != "" { //nolint:staticcheck
binding.Address = globalConf.HTTPDConfig.BindAddress //nolint:staticcheck
}
globalConf.HTTPDConfig.Bindings = []httpd.Binding{binding}
}
func loadBindingsFromEnv() {
checkSFTPDBindingsCompatibility()
checkFTPDBindingCompatibility()
checkWebDAVDBindingCompatibility()
checkHTTPDBindingCompatibility()
maxBindings := make([]int, 10)
for idx := range maxBindings {
getSFTPDBindindFromEnv(idx)
getFTPDBindingFromEnv(idx)
getWebDAVDBindingFromEnv(idx)
getHTTPDBindingFromEnv(idx)
}
}
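A hedged usage sketch of the scheme above: since the per-binding fields are read directly with os.LookupEnv during LoadConfig, a second SFTPD binding can be defined entirely through environment variables (the port and address values here are just examples):

package main

import (
    "os"

    "github.com/drakkan/sftpgo/config"
)

func main() {
    // index 0 is the default binding, index 1 adds a second one
    os.Setenv("SFTPGO_SFTPD__BINDINGS__1__PORT", "2222")
    os.Setenv("SFTPGO_SFTPD__BINDINGS__1__ADDRESS", "127.0.0.1")
    os.Setenv("SFTPGO_SFTPD__BINDINGS__1__APPLY_PROXY_CONFIG", "false")
    if err := config.LoadConfig(".", ""); err != nil {
        panic(err) // a missing config file is tolerated, only parsing errors are returned
    }
    // Bindings now contains the default binding plus the env-defined one
    _ = config.GetSFTPDConfig().Bindings
}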
func getSFTPDBindindFromEnv(idx int) {
binding := sftpd.Binding{}
if len(globalConf.SFTPD.Bindings) > idx {
binding = globalConf.SFTPD.Bindings[idx]
}
isSet := false
port, ok := lookupIntFromEnv(fmt.Sprintf("SFTPGO_SFTPD__BINDINGS__%v__PORT", idx))
if ok {
binding.Port = port
isSet = true
}
address, ok := os.LookupEnv(fmt.Sprintf("SFTPGO_SFTPD__BINDINGS__%v__ADDRESS", idx))
if ok {
binding.Address = address
isSet = true
}
applyProxyConfig, ok := lookupBoolFromEnv(fmt.Sprintf("SFTPGO_SFTPD__BINDINGS__%v__APPLY_PROXY_CONFIG", idx))
if ok {
binding.ApplyProxyConfig = applyProxyConfig
isSet = true
}
if isSet {
if len(globalConf.SFTPD.Bindings) > idx {
globalConf.SFTPD.Bindings[idx] = binding
} else {
globalConf.SFTPD.Bindings = append(globalConf.SFTPD.Bindings, binding)
}
}
}
func getFTPDBindingFromEnv(idx int) {
binding := ftpd.Binding{}
if len(globalConf.FTPD.Bindings) > idx {
binding = globalConf.FTPD.Bindings[idx]
}
isSet := false
port, ok := lookupIntFromEnv(fmt.Sprintf("SFTPGO_FTPD__BINDINGS__%v__PORT", idx))
if ok {
binding.Port = port
isSet = true
}
address, ok := os.LookupEnv(fmt.Sprintf("SFTPGO_FTPD__BINDINGS__%v__ADDRESS", idx))
if ok {
binding.Address = address
isSet = true
}
applyProxyConfig, ok := lookupBoolFromEnv(fmt.Sprintf("SFTPGO_FTPD__BINDINGS__%v__APPLY_PROXY_CONFIG", idx))
if ok {
binding.ApplyProxyConfig = applyProxyConfig
isSet = true
}
tlsMode, ok := lookupIntFromEnv(fmt.Sprintf("SFTPGO_FTPD__BINDINGS__%v__TLS_MODE", idx))
if ok {
binding.TLSMode = tlsMode
isSet = true
}
passiveIP, ok := os.LookupEnv(fmt.Sprintf("SFTPGO_FTPD__BINDINGS__%v__FORCE_PASSIVE_IP", idx))
if ok {
binding.ForcePassiveIP = passiveIP
isSet = true
}
clientAuthType, ok := lookupIntFromEnv(fmt.Sprintf("SFTPGO_FTPD__BINDINGS__%v__CLIENT_AUTH_TYPE", idx))
if ok {
binding.ClientAuthType = clientAuthType
isSet = true
}
if isSet {
if len(globalConf.FTPD.Bindings) > idx {
globalConf.FTPD.Bindings[idx] = binding
} else {
globalConf.FTPD.Bindings = append(globalConf.FTPD.Bindings, binding)
}
}
}
func getWebDAVDBindingFromEnv(idx int) {
binding := webdavd.Binding{}
if len(globalConf.WebDAVD.Bindings) > idx {
binding = globalConf.WebDAVD.Bindings[idx]
}
isSet := false
port, ok := lookupIntFromEnv(fmt.Sprintf("SFTPGO_WEBDAVD__BINDINGS__%v__PORT", idx))
if ok {
binding.Port = port
isSet = true
}
address, ok := os.LookupEnv(fmt.Sprintf("SFTPGO_WEBDAVD__BINDINGS__%v__ADDRESS", idx))
if ok {
binding.Address = address
isSet = true
}
enableHTTPS, ok := lookupBoolFromEnv(fmt.Sprintf("SFTPGO_WEBDAVD__BINDINGS__%v__ENABLE_HTTPS", idx))
if ok {
binding.EnableHTTPS = enableHTTPS
isSet = true
}
clientAuthType, ok := lookupIntFromEnv(fmt.Sprintf("SFTPGO_WEBDAVD__BINDINGS__%v__CLIENT_AUTH_TYPE", idx))
if ok {
binding.ClientAuthType = clientAuthType
isSet = true
}
if isSet {
if len(globalConf.WebDAVD.Bindings) > idx {
globalConf.WebDAVD.Bindings[idx] = binding
} else {
globalConf.WebDAVD.Bindings = append(globalConf.WebDAVD.Bindings, binding)
}
}
}
func getHTTPDBindingFromEnv(idx int) {
binding := httpd.Binding{}
if len(globalConf.HTTPDConfig.Bindings) > idx {
binding = globalConf.HTTPDConfig.Bindings[idx]
}
isSet := false
port, ok := lookupIntFromEnv(fmt.Sprintf("SFTPGO_HTTPD__BINDINGS__%v__PORT", idx))
if ok {
binding.Port = port
isSet = true
}
address, ok := os.LookupEnv(fmt.Sprintf("SFTPGO_HTTPD__BINDINGS__%v__ADDRESS", idx))
if ok {
binding.Address = address
isSet = true
}
enableWebAdmin, ok := lookupBoolFromEnv(fmt.Sprintf("SFTPGO_HTTPD__BINDINGS__%v__ENABLE_WEB_ADMIN", idx))
if ok {
binding.EnableWebAdmin = enableWebAdmin
isSet = true
}
enableHTTPS, ok := lookupBoolFromEnv(fmt.Sprintf("SFTPGO_HTTPD__BINDINGS__%v__ENABLE_HTTPS", idx))
if ok {
binding.EnableHTTPS = enableHTTPS
isSet = true
}
clientAuthType, ok := lookupIntFromEnv(fmt.Sprintf("SFTPGO_HTTPD__BINDINGS__%v__CLIENT_AUTH_TYPE", idx))
if ok {
binding.ClientAuthType = clientAuthType
isSet = true
}
if isSet {
if len(globalConf.HTTPDConfig.Bindings) > idx {
globalConf.HTTPDConfig.Bindings[idx] = binding
} else {
globalConf.HTTPDConfig.Bindings = append(globalConf.HTTPDConfig.Bindings, binding)
}
}
}
func setViperDefaults() {
viper.SetDefault("common.idle_timeout", globalConf.Common.IdleTimeout)
viper.SetDefault("common.upload_mode", globalConf.Common.UploadMode)
viper.SetDefault("common.actions.execute_on", globalConf.Common.Actions.ExecuteOn)
viper.SetDefault("common.actions.hook", globalConf.Common.Actions.Hook)
viper.SetDefault("common.setstat_mode", globalConf.Common.SetstatMode)
viper.SetDefault("common.proxy_protocol", globalConf.Common.ProxyProtocol)
viper.SetDefault("common.proxy_allowed", globalConf.Common.ProxyAllowed)
viper.SetDefault("common.post_connect_hook", globalConf.Common.PostConnectHook)
viper.SetDefault("common.max_total_connections", globalConf.Common.MaxTotalConnections)
viper.SetDefault("common.defender.enabled", globalConf.Common.DefenderConfig.Enabled)
viper.SetDefault("common.defender.ban_time", globalConf.Common.DefenderConfig.BanTime)
viper.SetDefault("common.defender.ban_time_increment", globalConf.Common.DefenderConfig.BanTimeIncrement)
viper.SetDefault("common.defender.threshold", globalConf.Common.DefenderConfig.Threshold)
viper.SetDefault("common.defender.score_invalid", globalConf.Common.DefenderConfig.ScoreInvalid)
viper.SetDefault("common.defender.score_valid", globalConf.Common.DefenderConfig.ScoreValid)
viper.SetDefault("common.defender.observation_time", globalConf.Common.DefenderConfig.ObservationTime)
viper.SetDefault("common.defender.entries_soft_limit", globalConf.Common.DefenderConfig.EntriesSoftLimit)
viper.SetDefault("common.defender.entries_hard_limit", globalConf.Common.DefenderConfig.EntriesHardLimit)
viper.SetDefault("common.defender.safelist_file", globalConf.Common.DefenderConfig.SafeListFile)
viper.SetDefault("common.defender.blocklist_file", globalConf.Common.DefenderConfig.BlockListFile)
viper.SetDefault("sftpd.max_auth_tries", globalConf.SFTPD.MaxAuthTries)
viper.SetDefault("sftpd.banner", globalConf.SFTPD.Banner)
viper.SetDefault("sftpd.host_keys", globalConf.SFTPD.HostKeys)
viper.SetDefault("sftpd.kex_algorithms", globalConf.SFTPD.KexAlgorithms)
viper.SetDefault("sftpd.ciphers", globalConf.SFTPD.Ciphers)
viper.SetDefault("sftpd.macs", globalConf.SFTPD.MACs)
viper.SetDefault("sftpd.trusted_user_ca_keys", globalConf.SFTPD.TrustedUserCAKeys)
viper.SetDefault("sftpd.login_banner_file", globalConf.SFTPD.LoginBannerFile)
viper.SetDefault("sftpd.enabled_ssh_commands", globalConf.SFTPD.EnabledSSHCommands)
viper.SetDefault("sftpd.keyboard_interactive_auth_hook", globalConf.SFTPD.KeyboardInteractiveHook)
viper.SetDefault("sftpd.password_authentication", globalConf.SFTPD.PasswordAuthentication)
viper.SetDefault("ftpd.banner", globalConf.FTPD.Banner)
viper.SetDefault("ftpd.banner_file", globalConf.FTPD.BannerFile)
viper.SetDefault("ftpd.active_transfers_port_non_20", globalConf.FTPD.ActiveTransfersPortNon20)
viper.SetDefault("ftpd.passive_port_range.start", globalConf.FTPD.PassivePortRange.Start)
viper.SetDefault("ftpd.passive_port_range.end", globalConf.FTPD.PassivePortRange.End)
viper.SetDefault("ftpd.disable_active_mode", globalConf.FTPD.DisableActiveMode)
viper.SetDefault("ftpd.enable_site", globalConf.FTPD.EnableSite)
viper.SetDefault("ftpd.hash_support", globalConf.FTPD.HASHSupport)
viper.SetDefault("ftpd.combine_support", globalConf.FTPD.CombineSupport)
viper.SetDefault("ftpd.certificate_file", globalConf.FTPD.CertificateFile)
viper.SetDefault("ftpd.certificate_key_file", globalConf.FTPD.CertificateKeyFile)
viper.SetDefault("ftpd.ca_certificates", globalConf.FTPD.CACertificates)
viper.SetDefault("ftpd.ca_revocation_lists", globalConf.FTPD.CARevocationLists)
viper.SetDefault("webdavd.certificate_file", globalConf.WebDAVD.CertificateFile)
viper.SetDefault("webdavd.certificate_key_file", globalConf.WebDAVD.CertificateKeyFile)
viper.SetDefault("webdavd.ca_certificates", globalConf.WebDAVD.CACertificates)
viper.SetDefault("webdavd.ca_revocation_lists", globalConf.WebDAVD.CARevocationLists)
viper.SetDefault("webdavd.cors.enabled", globalConf.WebDAVD.Cors.Enabled)
viper.SetDefault("webdavd.cors.allowed_origins", globalConf.WebDAVD.Cors.AllowedOrigins)
viper.SetDefault("webdavd.cors.allowed_methods", globalConf.WebDAVD.Cors.AllowedMethods)
viper.SetDefault("webdavd.cors.allowed_headers", globalConf.WebDAVD.Cors.AllowedHeaders)
viper.SetDefault("webdavd.cors.exposed_headers", globalConf.WebDAVD.Cors.ExposedHeaders)
viper.SetDefault("webdavd.cors.allow_credentials", globalConf.WebDAVD.Cors.AllowCredentials)
viper.SetDefault("webdavd.cors.max_age", globalConf.WebDAVD.Cors.MaxAge)
viper.SetDefault("webdavd.cache.users.expiration_time", globalConf.WebDAVD.Cache.Users.ExpirationTime)
viper.SetDefault("webdavd.cache.users.max_size", globalConf.WebDAVD.Cache.Users.MaxSize)
viper.SetDefault("webdavd.cache.mime_types.enabled", globalConf.WebDAVD.Cache.MimeTypes.Enabled)
viper.SetDefault("webdavd.cache.mime_types.max_size", globalConf.WebDAVD.Cache.MimeTypes.MaxSize)
viper.SetDefault("data_provider.driver", globalConf.ProviderConf.Driver)
viper.SetDefault("data_provider.name", globalConf.ProviderConf.Name)
viper.SetDefault("data_provider.host", globalConf.ProviderConf.Host)
viper.SetDefault("data_provider.port", globalConf.ProviderConf.Port)
viper.SetDefault("data_provider.username", globalConf.ProviderConf.Username)
viper.SetDefault("data_provider.password", globalConf.ProviderConf.Password)
viper.SetDefault("data_provider.sslmode", globalConf.ProviderConf.SSLMode)
viper.SetDefault("data_provider.connection_string", globalConf.ProviderConf.ConnectionString)
viper.SetDefault("data_provider.sql_tables_prefix", globalConf.ProviderConf.SQLTablesPrefix)
viper.SetDefault("data_provider.track_quota", globalConf.ProviderConf.TrackQuota)
viper.SetDefault("data_provider.pool_size", globalConf.ProviderConf.PoolSize)
viper.SetDefault("data_provider.users_base_dir", globalConf.ProviderConf.UsersBaseDir)
viper.SetDefault("data_provider.actions.execute_on", globalConf.ProviderConf.Actions.ExecuteOn)
viper.SetDefault("data_provider.actions.hook", globalConf.ProviderConf.Actions.Hook)
viper.SetDefault("data_provider.external_auth_hook", globalConf.ProviderConf.ExternalAuthHook)
viper.SetDefault("data_provider.external_auth_scope", globalConf.ProviderConf.ExternalAuthScope)
viper.SetDefault("data_provider.credentials_path", globalConf.ProviderConf.CredentialsPath)
viper.SetDefault("data_provider.prefer_database_credentials", globalConf.ProviderConf.PreferDatabaseCredentials)
viper.SetDefault("data_provider.pre_login_hook", globalConf.ProviderConf.PreLoginHook)
viper.SetDefault("data_provider.post_login_hook", globalConf.ProviderConf.PostLoginHook)
viper.SetDefault("data_provider.post_login_scope", globalConf.ProviderConf.PostLoginScope)
viper.SetDefault("data_provider.check_password_hook", globalConf.ProviderConf.CheckPasswordHook)
viper.SetDefault("data_provider.check_password_scope", globalConf.ProviderConf.CheckPasswordScope)
viper.SetDefault("data_provider.password_hashing.argon2_options.memory", globalConf.ProviderConf.PasswordHashing.Argon2Options.Memory)
viper.SetDefault("data_provider.password_hashing.argon2_options.iterations", globalConf.ProviderConf.PasswordHashing.Argon2Options.Iterations)
viper.SetDefault("data_provider.password_hashing.argon2_options.parallelism", globalConf.ProviderConf.PasswordHashing.Argon2Options.Parallelism)
viper.SetDefault("data_provider.update_mode", globalConf.ProviderConf.UpdateMode)
viper.SetDefault("httpd.templates_path", globalConf.HTTPDConfig.TemplatesPath)
viper.SetDefault("httpd.static_files_path", globalConf.HTTPDConfig.StaticFilesPath)
viper.SetDefault("httpd.backups_path", globalConf.HTTPDConfig.BackupsPath)
viper.SetDefault("httpd.certificate_file", globalConf.HTTPDConfig.CertificateFile)
viper.SetDefault("httpd.certificate_key_file", globalConf.HTTPDConfig.CertificateKeyFile)
viper.SetDefault("httpd.ca_certificates", globalConf.HTTPDConfig.CACertificates)
viper.SetDefault("httpd.ca_revocation_lists", globalConf.HTTPDConfig.CARevocationLists)
viper.SetDefault("http.timeout", globalConf.HTTPConfig.Timeout)
viper.SetDefault("http.ca_certificates", globalConf.HTTPConfig.CACertificates)
viper.SetDefault("http.skip_tls_verify", globalConf.HTTPConfig.SkipTLSVerify)
viper.SetDefault("kms.secrets.url", globalConf.KMSConfig.Secrets.URL)
viper.SetDefault("kms.secrets.master_key_path", globalConf.KMSConfig.Secrets.MasterKeyPath)
viper.SetDefault("telemetry.bind_port", globalConf.TelemetryConfig.BindPort)
viper.SetDefault("telemetry.bind_address", globalConf.TelemetryConfig.BindAddress)
viper.SetDefault("telemetry.enable_profiler", globalConf.TelemetryConfig.EnableProfiler)
viper.SetDefault("telemetry.auth_user_file", globalConf.TelemetryConfig.AuthUserFile)
viper.SetDefault("telemetry.certificate_file", globalConf.TelemetryConfig.CertificateFile)
viper.SetDefault("telemetry.certificate_key_file", globalConf.TelemetryConfig.CertificateKeyFile)
}
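// Note (illustrative, not part of the original file): registering every key
// here is what lets viper's AutomaticEnv surface the matching SFTPGO_*
// variable when the configuration is unmarshalled; without a registered
// default the key is unknown to viper and the env value is ignored, e.g.:
//
//	viper.SetDefault("common.idle_timeout", 15)
//	// only now does SFTPGO_COMMON__IDLE_TIMEOUT override the default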
func lookupBoolFromEnv(envName string) (bool, bool) {
value, ok := os.LookupEnv(envName)
if ok {
converted, err := strconv.ParseBool(value)
if err == nil {
return converted, ok
}
}
return false, false
}
func lookupIntFromEnv(envName string) (int, bool) {
value, ok := os.LookupEnv(envName)
if ok {
// parse as 32-bit: ports above 32767 would overflow a 16-bit parse
converted, err := strconv.ParseInt(value, 10, 32)
if err == nil {
return int(converted), ok
}
}
return 0, false
}
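For reference, a minimal sketch of how the SFTPGO_* names map to viper keys; it assumes the standard viper setup (env prefix plus a "." to "__" replacer) that SFTPGo's config package uses:

    viper.SetEnvPrefix("sftpgo")
    viper.SetEnvKeyReplacer(strings.NewReplacer(".", "__"))
    viper.AutomaticEnv()
    // "sftpd.max_auth_tries" is overridden by SFTPGO_SFTPD__MAX_AUTH_TRIES.
    // viper cannot address slice elements from the environment, which is why
    // the per-index binding helpers above parse variables such as
    // SFTPGO_SFTPD__BINDINGS__0__PORT by hand.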


@@ -8,24 +8,35 @@ import (
"strings"
"testing"
"github.com/spf13/viper"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/drakkan/sftpgo/common"
"github.com/drakkan/sftpgo/config"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/ftpd"
"github.com/drakkan/sftpgo/httpclient"
"github.com/drakkan/sftpgo/httpd"
"github.com/drakkan/sftpgo/sftpd"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/webdavd"
)
const (
tempConfigName = "temp"
configName = "sftpgo"
)
func reset() {
viper.Reset()
config.Init()
}
func TestLoadConfigTest(t *testing.T) {
reset()
configDir := ".."
err := config.LoadConfig(configDir, configName)
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
assert.NotEqual(t, httpd.Conf{}, config.GetHTTPConfig())
assert.NotEqual(t, dataprovider.Config{}, config.GetProviderConf())
@@ -33,66 +44,95 @@ func TestLoadConfigTest(t *testing.T) {
assert.NotEqual(t, httpclient.Config{}, config.GetHTTPConfig())
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err = config.LoadConfig(configDir, tempConfigName)
assert.NotNil(t, err)
err = ioutil.WriteFile(configFilePath, []byte("{invalid json}"), 0666)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
err = config.LoadConfig(configDir, tempConfigName)
assert.NotNil(t, err)
err = ioutil.WriteFile(configFilePath, []byte("{\"sftpd\": {\"bind_port\": \"a\"}}"), 0666)
err = ioutil.WriteFile(configFilePath, []byte("{invalid json}"), os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, tempConfigName)
assert.NotNil(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
err = ioutil.WriteFile(configFilePath, []byte("{\"sftpd\": {\"bind_port\": \"a\"}}"), os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.Error(t, err)
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
func TestLoadConfigFileNotFound(t *testing.T) {
reset()
viper.SetConfigName("configfile")
err := config.LoadConfig(os.TempDir(), "")
assert.NoError(t, err)
}
func TestEmptyBanner(t *testing.T) {
reset()
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, configName)
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
sftpdConf := config.GetSFTPDConfig()
sftpdConf.Banner = " "
c := make(map[string]sftpd.Configuration)
c["sftpd"] = sftpdConf
jsonConf, _ := json.Marshal(c)
err = ioutil.WriteFile(configFilePath, jsonConf, 0666)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, tempConfigName)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
sftpdConf = config.GetSFTPDConfig()
assert.NotEmpty(t, strings.TrimSpace(sftpdConf.Banner))
err = os.Remove(configFilePath)
assert.NoError(t, err)
ftpdConf := config.GetFTPDConfig()
ftpdConf.Banner = " "
c1 := make(map[string]ftpd.Configuration)
c1["ftpd"] = ftpdConf
jsonConf, _ = json.Marshal(c1)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
ftpdConf = config.GetFTPDConfig()
assert.NotEmpty(t, strings.TrimSpace(ftpdConf.Banner))
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
func TestInvalidUploadMode(t *testing.T) {
reset()
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, configName)
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
sftpdConf := config.GetSFTPDConfig()
sftpdConf.UploadMode = 10
c := make(map[string]sftpd.Configuration)
c["sftpd"] = sftpdConf
commonConf := config.GetCommonConfig()
commonConf.UploadMode = 10
c := make(map[string]common.Configuration)
c["common"] = commonConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = ioutil.WriteFile(configFilePath, jsonConf, 0666)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, tempConfigName)
assert.NotNil(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
assert.Equal(t, 0, config.GetCommonConfig().UploadMode)
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
func TestInvalidExternalAuthScope(t *testing.T) {
reset()
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, configName)
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
providerConf := config.GetProviderConf()
providerConf.ExternalAuthScope = 10
@@ -100,19 +140,22 @@ func TestInvalidExternalAuthScope(t *testing.T) {
c["data_provider"] = providerConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = ioutil.WriteFile(configFilePath, jsonConf, 0666)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, tempConfigName)
assert.NotNil(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
assert.Equal(t, 0, config.GetProviderConf().ExternalAuthScope)
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
func TestInvalidCredentialsPath(t *testing.T) {
reset()
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, configName)
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
providerConf := config.GetProviderConf()
providerConf.CredentialsPath = ""
@@ -120,39 +163,45 @@ func TestInvalidCredentialsPath(t *testing.T) {
c["data_provider"] = providerConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = ioutil.WriteFile(configFilePath, jsonConf, 0666)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, tempConfigName)
assert.NotNil(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
assert.Equal(t, "credentials", config.GetProviderConf().CredentialsPath)
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
func TestInvalidProxyProtocol(t *testing.T) {
reset()
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, configName)
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
sftpdConf := config.GetSFTPDConfig()
sftpdConf.ProxyProtocol = 10
c := make(map[string]sftpd.Configuration)
c["sftpd"] = sftpdConf
commonConf := config.GetCommonConfig()
commonConf.ProxyProtocol = 10
c := make(map[string]common.Configuration)
c["common"] = commonConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = ioutil.WriteFile(configFilePath, jsonConf, 0666)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, tempConfigName)
assert.NotNil(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
assert.Equal(t, 0, config.GetCommonConfig().ProxyProtocol)
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
func TestInvalidUsersBaseDir(t *testing.T) {
reset()
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, configName)
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
providerConf := config.GetProviderConf()
providerConf.UsersBaseDir = "."
@@ -160,89 +209,59 @@ func TestInvalidUsersBaseDir(t *testing.T) {
c["data_provider"] = providerConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = ioutil.WriteFile(configFilePath, jsonConf, 0666)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, tempConfigName)
assert.NotNil(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
assert.Empty(t, config.GetProviderConf().UsersBaseDir)
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
func TestHookCompatibity(t *testing.T) {
func TestCommonParamsCompatibility(t *testing.T) {
reset()
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, configName)
assert.NoError(t, err)
providerConf := config.GetProviderConf()
providerConf.ExternalAuthProgram = "ext_auth_program" //nolint:staticcheck
providerConf.PreLoginProgram = "pre_login_program" //nolint:staticcheck
providerConf.Actions.Command = "/tmp/test_cmd" //nolint:staticcheck
c := make(map[string]dataprovider.Config)
c["data_provider"] = providerConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = ioutil.WriteFile(configFilePath, jsonConf, 0666)
assert.NoError(t, err)
err = config.LoadConfig(configDir, tempConfigName)
assert.NoError(t, err)
providerConf = config.GetProviderConf()
assert.Equal(t, "ext_auth_program", providerConf.ExternalAuthHook)
assert.Equal(t, "pre_login_program", providerConf.PreLoginHook)
assert.Equal(t, "/tmp/test_cmd", providerConf.Actions.Hook)
err = os.Remove(configFilePath)
assert.NoError(t, err)
providerConf.Actions.Hook = ""
providerConf.Actions.HTTPNotificationURL = "http://example.com/notify" //nolint:staticcheck
c = make(map[string]dataprovider.Config)
c["data_provider"] = providerConf
jsonConf, err = json.Marshal(c)
assert.NoError(t, err)
err = ioutil.WriteFile(configFilePath, jsonConf, 0666)
assert.NoError(t, err)
err = config.LoadConfig(configDir, tempConfigName)
assert.NoError(t, err)
providerConf = config.GetProviderConf()
assert.Equal(t, "http://example.com/notify", providerConf.Actions.Hook)
err = os.Remove(configFilePath)
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
sftpdConf := config.GetSFTPDConfig()
sftpdConf.KeyboardInteractiveProgram = "key_int_program" //nolint:staticcheck
sftpdConf.Actions.Command = "/tmp/sftp_cmd" //nolint:staticcheck
cnf := make(map[string]sftpd.Configuration)
cnf["sftpd"] = sftpdConf
jsonConf, err = json.Marshal(cnf)
sftpdConf.IdleTimeout = 21 //nolint:staticcheck
sftpdConf.Actions.Hook = "http://hook"
sftpdConf.Actions.ExecuteOn = []string{"upload"}
sftpdConf.SetstatMode = 1 //nolint:staticcheck
sftpdConf.UploadMode = common.UploadModeAtomicWithResume //nolint:staticcheck
sftpdConf.ProxyProtocol = 1 //nolint:staticcheck
sftpdConf.ProxyAllowed = []string{"192.168.1.1"} //nolint:staticcheck
c := make(map[string]sftpd.Configuration)
c["sftpd"] = sftpdConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = ioutil.WriteFile(configFilePath, jsonConf, 0666)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, tempConfigName)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
sftpdConf = config.GetSFTPDConfig()
assert.Equal(t, "key_int_program", sftpdConf.KeyboardInteractiveHook)
assert.Equal(t, "/tmp/sftp_cmd", sftpdConf.Actions.Hook)
err = os.Remove(configFilePath)
assert.NoError(t, err)
sftpdConf.Actions.Hook = ""
sftpdConf.Actions.HTTPNotificationURL = "http://example.com/sftp" //nolint:staticcheck
cnf = make(map[string]sftpd.Configuration)
cnf["sftpd"] = sftpdConf
jsonConf, err = json.Marshal(cnf)
assert.NoError(t, err)
err = ioutil.WriteFile(configFilePath, jsonConf, 0666)
assert.NoError(t, err)
err = config.LoadConfig(configDir, tempConfigName)
assert.NoError(t, err)
sftpdConf = config.GetSFTPDConfig()
assert.Equal(t, "http://example.com/sftp", sftpdConf.Actions.Hook)
commonConf := config.GetCommonConfig()
assert.Equal(t, 21, commonConf.IdleTimeout)
assert.Equal(t, "http://hook", commonConf.Actions.Hook)
assert.Len(t, commonConf.Actions.ExecuteOn, 1)
assert.True(t, utils.IsStringInSlice("upload", commonConf.Actions.ExecuteOn))
assert.Equal(t, 1, commonConf.SetstatMode)
assert.Equal(t, 1, commonConf.ProxyProtocol)
assert.Len(t, commonConf.ProxyAllowed, 1)
assert.True(t, utils.IsStringInSlice("192.168.1.1", commonConf.ProxyAllowed))
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
func TestHostKeyCompatibility(t *testing.T) {
reset()
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, configName)
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
sftpdConf := config.GetSFTPDConfig()
sftpdConf.Keys = []sftpd.Key{ //nolint:staticcheck
@@ -257,9 +276,9 @@ func TestHostKeyCompatibility(t *testing.T) {
c["sftpd"] = sftpdConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = ioutil.WriteFile(configFilePath, jsonConf, 0666)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, tempConfigName)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
sftpdConf = config.GetSFTPDConfig()
assert.Equal(t, 2, len(sftpdConf.HostKeys))
@@ -270,16 +289,414 @@ func TestHostKeyCompatibility(t *testing.T) {
}
func TestSetGetConfig(t *testing.T) {
reset()
sftpdConf := config.GetSFTPDConfig()
sftpdConf.IdleTimeout = 3
sftpdConf.MaxAuthTries = 10
config.SetSFTPDConfig(sftpdConf)
assert.Equal(t, sftpdConf.IdleTimeout, config.GetSFTPDConfig().IdleTimeout)
assert.Equal(t, sftpdConf.MaxAuthTries, config.GetSFTPDConfig().MaxAuthTries)
dataProviderConf := config.GetProviderConf()
dataProviderConf.Host = "test host"
config.SetProviderConf(dataProviderConf)
assert.Equal(t, dataProviderConf.Host, config.GetProviderConf().Host)
httpdConf := config.GetHTTPDConfig()
httpdConf.BindAddress = "0.0.0.0"
httpdConf.Bindings = append(httpdConf.Bindings, httpd.Binding{Address: "0.0.0.0"})
config.SetHTTPDConfig(httpdConf)
assert.Equal(t, httpdConf.BindAddress, config.GetHTTPDConfig().BindAddress)
assert.Equal(t, httpdConf.Bindings[0].Address, config.GetHTTPDConfig().Bindings[0].Address)
commonConf := config.GetCommonConfig()
commonConf.IdleTimeout = 10
config.SetCommonConfig(commonConf)
assert.Equal(t, commonConf.IdleTimeout, config.GetCommonConfig().IdleTimeout)
ftpdConf := config.GetFTPDConfig()
ftpdConf.CertificateFile = "cert"
ftpdConf.CertificateKeyFile = "key"
config.SetFTPDConfig(ftpdConf)
assert.Equal(t, ftpdConf.CertificateFile, config.GetFTPDConfig().CertificateFile)
assert.Equal(t, ftpdConf.CertificateKeyFile, config.GetFTPDConfig().CertificateKeyFile)
webDavConf := config.GetWebDAVDConfig()
webDavConf.CertificateFile = "dav_cert"
webDavConf.CertificateKeyFile = "dav_key"
config.SetWebDAVDConfig(webDavConf)
assert.Equal(t, webDavConf.CertificateFile, config.GetWebDAVDConfig().CertificateFile)
assert.Equal(t, webDavConf.CertificateKeyFile, config.GetWebDAVDConfig().CertificateKeyFile)
kmsConf := config.GetKMSConfig()
kmsConf.Secrets.MasterKeyPath = "apath"
kmsConf.Secrets.URL = "aurl"
config.SetKMSConfig(kmsConf)
assert.Equal(t, kmsConf.Secrets.MasterKeyPath, config.GetKMSConfig().Secrets.MasterKeyPath)
assert.Equal(t, kmsConf.Secrets.URL, config.GetKMSConfig().Secrets.URL)
telemetryConf := config.GetTelemetryConfig()
telemetryConf.BindPort = 10001
telemetryConf.BindAddress = "0.0.0.0"
config.SetTelemetryConfig(telemetryConf)
assert.Equal(t, telemetryConf.BindPort, config.GetTelemetryConfig().BindPort)
assert.Equal(t, telemetryConf.BindAddress, config.GetTelemetryConfig().BindAddress)
}
func TestServiceToStart(t *testing.T) {
reset()
configDir := ".."
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
assert.True(t, config.HasServicesToStart())
sftpdConf := config.GetSFTPDConfig()
sftpdConf.Bindings[0].Port = 0
config.SetSFTPDConfig(sftpdConf)
assert.False(t, config.HasServicesToStart())
ftpdConf := config.GetFTPDConfig()
ftpdConf.Bindings[0].Port = 2121
config.SetFTPDConfig(ftpdConf)
assert.True(t, config.HasServicesToStart())
ftpdConf.Bindings[0].Port = 0
config.SetFTPDConfig(ftpdConf)
webdavdConf := config.GetWebDAVDConfig()
webdavdConf.Bindings[0].Port = 9000
config.SetWebDAVDConfig(webdavdConf)
assert.True(t, config.HasServicesToStart())
webdavdConf.Bindings[0].Port = 0
config.SetWebDAVDConfig(webdavdConf)
assert.False(t, config.HasServicesToStart())
sftpdConf.Bindings[0].Port = 2022
config.SetSFTPDConfig(sftpdConf)
assert.True(t, config.HasServicesToStart())
}
func TestSFTPDBindingsCompatibility(t *testing.T) {
reset()
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
sftpdConf := config.GetSFTPDConfig()
require.Len(t, sftpdConf.Bindings, 1)
sftpdConf.Bindings = nil
sftpdConf.BindPort = 9022 //nolint:staticcheck
sftpdConf.BindAddress = "127.0.0.1" //nolint:staticcheck
c := make(map[string]sftpd.Configuration)
c["sftpd"] = sftpdConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
sftpdConf = config.GetSFTPDConfig()
// the default binding should be replaced with the deprecated configuration
require.Len(t, sftpdConf.Bindings, 1)
require.Equal(t, 9022, sftpdConf.Bindings[0].Port)
require.Equal(t, "127.0.0.1", sftpdConf.Bindings[0].Address)
require.True(t, sftpdConf.Bindings[0].ApplyProxyConfig)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
sftpdConf = config.GetSFTPDConfig()
require.Len(t, sftpdConf.Bindings, 1)
require.Equal(t, 9022, sftpdConf.Bindings[0].Port)
require.Equal(t, "127.0.0.1", sftpdConf.Bindings[0].Address)
require.True(t, sftpdConf.Bindings[0].ApplyProxyConfig)
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
func TestFTPDBindingsCompatibility(t *testing.T) {
reset()
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
ftpdConf := config.GetFTPDConfig()
require.Len(t, ftpdConf.Bindings, 1)
ftpdConf.Bindings = nil
ftpdConf.BindPort = 9022 //nolint:staticcheck
ftpdConf.BindAddress = "127.1.0.1" //nolint:staticcheck
ftpdConf.ForcePassiveIP = "127.1.1.1" //nolint:staticcheck
ftpdConf.TLSMode = 2 //nolint:staticcheck
c := make(map[string]ftpd.Configuration)
c["ftpd"] = ftpdConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
ftpdConf = config.GetFTPDConfig()
// the default binding should be replaced with the deprecated configuration
require.Len(t, ftpdConf.Bindings, 1)
require.Equal(t, 9022, ftpdConf.Bindings[0].Port)
require.Equal(t, "127.1.0.1", ftpdConf.Bindings[0].Address)
require.True(t, ftpdConf.Bindings[0].ApplyProxyConfig)
require.Equal(t, 2, ftpdConf.Bindings[0].TLSMode)
require.Equal(t, "127.1.1.1", ftpdConf.Bindings[0].ForcePassiveIP)
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
func TestWebDAVDBindingsCompatibility(t *testing.T) {
reset()
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
webdavConf := config.GetWebDAVDConfig()
require.Len(t, webdavConf.Bindings, 1)
webdavConf.Bindings = nil
webdavConf.BindPort = 9080 //nolint:staticcheck
webdavConf.BindAddress = "127.0.0.1" //nolint:staticcheck
c := make(map[string]webdavd.Configuration)
c["webdavd"] = webdavConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
webdavConf = config.GetWebDAVDConfig()
// the default binding should be replaced with the deprecated configuration
require.Len(t, webdavConf.Bindings, 1)
require.Equal(t, 9080, webdavConf.Bindings[0].Port)
require.Equal(t, "127.0.0.1", webdavConf.Bindings[0].Address)
require.False(t, webdavConf.Bindings[0].EnableHTTPS)
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
func TestHTTPDBindingsCompatibility(t *testing.T) {
reset()
configDir := ".."
confName := tempConfigName + ".json"
configFilePath := filepath.Join(configDir, confName)
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
httpdConf := config.GetHTTPDConfig()
require.Len(t, httpdConf.Bindings, 1)
httpdConf.Bindings = nil
httpdConf.BindPort = 9080 //nolint:staticcheck
httpdConf.BindAddress = "127.1.1.1" //nolint:staticcheck
c := make(map[string]httpd.Conf)
c["httpd"] = httpdConf
jsonConf, err := json.Marshal(c)
assert.NoError(t, err)
err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
assert.NoError(t, err)
err = config.LoadConfig(configDir, confName)
assert.NoError(t, err)
httpdConf = config.GetHTTPDConfig()
// the default binding should be replaced with the deprecated configuration
require.Len(t, httpdConf.Bindings, 1)
require.Equal(t, 9080, httpdConf.Bindings[0].Port)
require.Equal(t, "127.1.1.1", httpdConf.Bindings[0].Address)
require.False(t, httpdConf.Bindings[0].EnableHTTPS)
require.True(t, httpdConf.Bindings[0].EnableWebAdmin)
err = os.Remove(configFilePath)
assert.NoError(t, err)
}
func TestSFTPDBindingsFromEnv(t *testing.T) {
reset()
os.Setenv("SFTPGO_SFTPD__BINDINGS__0__ADDRESS", "127.0.0.1")
os.Setenv("SFTPGO_SFTPD__BINDINGS__0__PORT", "2200")
os.Setenv("SFTPGO_SFTPD__BINDINGS__0__APPLY_PROXY_CONFIG", "false")
os.Setenv("SFTPGO_SFTPD__BINDINGS__3__ADDRESS", "127.0.1.1")
os.Setenv("SFTPGO_SFTPD__BINDINGS__3__PORT", "2203")
os.Setenv("SFTPGO_SFTPD__BINDINGS__3__APPLY_PROXY_CONFIG", "1")
t.Cleanup(func() {
os.Unsetenv("SFTPGO_SFTPD__BINDINGS__0__ADDRESS")
os.Unsetenv("SFTPGO_SFTPD__BINDINGS__0__PORT")
os.Unsetenv("SFTPGO_SFTPD__BINDINGS__0__APPLY_PROXY_CONFIG")
os.Unsetenv("SFTPGO_SFTPD__BINDINGS__3__ADDRESS")
os.Unsetenv("SFTPGO_SFTPD__BINDINGS__3__PORT")
os.Unsetenv("SFTPGO_SFTPD__BINDINGS__3__APPLY_PROXY_CONFIG")
})
configDir := ".."
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
bindings := config.GetSFTPDConfig().Bindings
require.Len(t, bindings, 2)
require.Equal(t, 2200, bindings[0].Port)
require.Equal(t, "127.0.0.1", bindings[0].Address)
require.False(t, bindings[0].ApplyProxyConfig)
require.Equal(t, 2203, bindings[1].Port)
require.Equal(t, "127.0.1.1", bindings[1].Address)
require.True(t, bindings[1].ApplyProxyConfig)
}
func TestFTPDBindingsFromEnv(t *testing.T) {
reset()
os.Setenv("SFTPGO_FTPD__BINDINGS__0__ADDRESS", "127.0.0.1")
os.Setenv("SFTPGO_FTPD__BINDINGS__0__PORT", "2200")
os.Setenv("SFTPGO_FTPD__BINDINGS__0__APPLY_PROXY_CONFIG", "f")
os.Setenv("SFTPGO_FTPD__BINDINGS__0__TLS_MODE", "2")
os.Setenv("SFTPGO_FTPD__BINDINGS__0__FORCE_PASSIVE_IP", "127.0.1.2")
os.Setenv("SFTPGO_FTPD__BINDINGS__9__ADDRESS", "127.0.1.1")
os.Setenv("SFTPGO_FTPD__BINDINGS__9__PORT", "2203")
os.Setenv("SFTPGO_FTPD__BINDINGS__9__APPLY_PROXY_CONFIG", "t")
os.Setenv("SFTPGO_FTPD__BINDINGS__9__TLS_MODE", "1")
os.Setenv("SFTPGO_FTPD__BINDINGS__9__FORCE_PASSIVE_IP", "127.0.1.1")
os.Setenv("SFTPGO_FTPD__BINDINGS__9__CLIENT_AUTH_TYPE", "1")
t.Cleanup(func() {
os.Unsetenv("SFTPGO_FTPD__BINDINGS__0__ADDRESS")
os.Unsetenv("SFTPGO_FTPD__BINDINGS__0__PORT")
os.Unsetenv("SFTPGO_FTPD__BINDINGS__0__APPLY_PROXY_CONFIG")
os.Unsetenv("SFTPGO_FTPD__BINDINGS__0__TLS_MODE")
os.Unsetenv("SFTPGO_FTPD__BINDINGS__0__FORCE_PASSIVE_IP")
os.Unsetenv("SFTPGO_FTPD__BINDINGS__9__ADDRESS")
os.Unsetenv("SFTPGO_FTPD__BINDINGS__9__PORT")
os.Unsetenv("SFTPGO_FTPD__BINDINGS__9__APPLY_PROXY_CONFIG")
os.Unsetenv("SFTPGO_FTPD__BINDINGS__9__TLS_MODE")
os.Unsetenv("SFTPGO_FTPD__BINDINGS__9__FORCE_PASSIVE_IP")
os.Unsetenv("SFTPGO_FTPD__BINDINGS__9__CLIENT_AUTH_TYPE")
})
configDir := ".."
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
bindings := config.GetFTPDConfig().Bindings
require.Len(t, bindings, 2)
require.Equal(t, 2200, bindings[0].Port)
require.Equal(t, "127.0.0.1", bindings[0].Address)
require.False(t, bindings[0].ApplyProxyConfig)
require.Equal(t, 2, bindings[0].TLSMode)
require.Equal(t, "127.0.1.2", bindings[0].ForcePassiveIP)
require.Equal(t, 0, bindings[0].ClientAuthType)
require.Equal(t, 2203, bindings[1].Port)
require.Equal(t, "127.0.1.1", bindings[1].Address)
require.True(t, bindings[1].ApplyProxyConfig)
require.Equal(t, 1, bindings[1].TLSMode)
require.Equal(t, "127.0.1.1", bindings[1].ForcePassiveIP)
require.Equal(t, 1, bindings[1].ClientAuthType)
}
func TestWebDAVBindingsFromEnv(t *testing.T) {
reset()
os.Setenv("SFTPGO_WEBDAVD__BINDINGS__1__ADDRESS", "127.0.0.1")
os.Setenv("SFTPGO_WEBDAVD__BINDINGS__1__PORT", "8000")
os.Setenv("SFTPGO_WEBDAVD__BINDINGS__1__ENABLE_HTTPS", "0")
os.Setenv("SFTPGO_WEBDAVD__BINDINGS__2__ADDRESS", "127.0.1.1")
os.Setenv("SFTPGO_WEBDAVD__BINDINGS__2__PORT", "9000")
os.Setenv("SFTPGO_WEBDAVD__BINDINGS__2__ENABLE_HTTPS", "1")
os.Setenv("SFTPGO_WEBDAVD__BINDINGS__2__CLIENT_AUTH_TYPE", "1")
t.Cleanup(func() {
os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__1__ADDRESS")
os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__1__PORT")
os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__1__ENABLE_HTTPS")
os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__2__ADDRESS")
os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__2__PORT")
os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__2__ENABLE_HTTPS")
os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__2__CLIENT_AUTH_TYPE")
})
configDir := ".."
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
bindings := config.GetWebDAVDConfig().Bindings
require.Len(t, bindings, 3)
require.Equal(t, 0, bindings[0].Port)
require.Empty(t, bindings[0].Address)
require.False(t, bindings[0].EnableHTTPS)
require.Equal(t, 8000, bindings[1].Port)
require.Equal(t, "127.0.0.1", bindings[1].Address)
require.False(t, bindings[1].EnableHTTPS)
require.Equal(t, 0, bindings[1].ClientAuthType)
require.Equal(t, 9000, bindings[2].Port)
require.Equal(t, "127.0.1.1", bindings[2].Address)
require.True(t, bindings[2].EnableHTTPS)
require.Equal(t, 1, bindings[2].ClientAuthType)
}
func TestHTTPDBindingsFromEnv(t *testing.T) {
reset()
sockPath := filepath.Clean(os.TempDir())
os.Setenv("SFTPGO_HTTPD__BINDINGS__0__ADDRESS", sockPath)
os.Setenv("SFTPGO_HTTPD__BINDINGS__0__PORT", "0")
os.Setenv("SFTPGO_HTTPD__BINDINGS__1__ADDRESS", "127.0.0.1")
os.Setenv("SFTPGO_HTTPD__BINDINGS__1__PORT", "8000")
os.Setenv("SFTPGO_HTTPD__BINDINGS__1__ENABLE_HTTPS", "0")
os.Setenv("SFTPGO_HTTPD__BINDINGS__1__ENABLE_WEB_ADMIN", "1")
os.Setenv("SFTPGO_HTTPD__BINDINGS__2__ADDRESS", "127.0.1.1")
os.Setenv("SFTPGO_HTTPD__BINDINGS__2__PORT", "9000")
os.Setenv("SFTPGO_HTTPD__BINDINGS__2__ENABLE_WEB_ADMIN", "0")
os.Setenv("SFTPGO_HTTPD__BINDINGS__2__ENABLE_HTTPS", "1")
os.Setenv("SFTPGO_HTTPD__BINDINGS__2__CLIENT_AUTH_TYPE", "1")
t.Cleanup(func() {
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__0__ADDRESS")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__0__PORT")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__1__ADDRESS")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__1__PORT")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__1__ENABLE_HTTPS")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__1__ENABLE_WEB_ADMIN")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__2__ADDRESS")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__2__PORT")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__2__ENABLE_HTTPS")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__2__ENABLE_WEB_ADMIN")
os.Unsetenv("SFTPGO_HTTPD__BINDINGS__2__CLIENT_AUTH_TYPE")
})
configDir := ".."
err := config.LoadConfig(configDir, "")
assert.NoError(t, err)
bindings := config.GetHTTPDConfig().Bindings
require.Len(t, bindings, 3)
require.Equal(t, 0, bindings[0].Port)
require.Equal(t, sockPath, bindings[0].Address)
require.False(t, bindings[0].EnableHTTPS)
require.True(t, bindings[0].EnableWebAdmin)
require.Equal(t, 8000, bindings[1].Port)
require.Equal(t, "127.0.0.1", bindings[1].Address)
require.False(t, bindings[1].EnableHTTPS)
require.True(t, bindings[1].EnableWebAdmin)
require.Equal(t, 9000, bindings[2].Port)
require.Equal(t, "127.0.1.1", bindings[2].Address)
require.True(t, bindings[2].EnableHTTPS)
require.False(t, bindings[2].EnableWebAdmin)
require.Equal(t, 1, bindings[2].ClientAuthType)
}
func TestConfigFromEnv(t *testing.T) {
reset()
os.Setenv("SFTPGO_SFTPD__BINDINGS__0__ADDRESS", "127.0.0.1")
os.Setenv("SFTPGO_WEBDAVD__BINDINGS__0__PORT", "12000")
os.Setenv("SFTPGO_DATA_PROVIDER__PASSWORD_HASHING__ARGON2_OPTIONS__ITERATIONS", "41")
os.Setenv("SFTPGO_DATA_PROVIDER__POOL_SIZE", "10")
os.Setenv("SFTPGO_DATA_PROVIDER__ACTIONS__EXECUTE_ON", "add")
os.Setenv("SFTPGO_KMS__SECRETS__URL", "local")
os.Setenv("SFTPGO_KMS__SECRETS__MASTER_KEY_PATH", "path")
t.Cleanup(func() {
os.Unsetenv("SFTPGO_SFTPD__BINDINGS__0__ADDRESS")
os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__0__PORT")
os.Unsetenv("SFTPGO_DATA_PROVIDER__PASSWORD_HASHING__ARGON2_OPTIONS__ITERATIONS")
os.Unsetenv("SFTPGO_DATA_PROVIDER__POOL_SIZE")
os.Unsetenv("SFTPGO_DATA_PROVIDER__ACTIONS__EXECUTE_ON")
os.Unsetenv("SFTPGO_KMS__SECRETS__URL")
os.Unsetenv("SFTPGO_KMS__SECRETS__MASTER_KEY_PATH")
})
err := config.LoadConfig(".", "invalid config")
assert.NoError(t, err)
sftpdConfig := config.GetSFTPDConfig()
assert.Equal(t, "127.0.0.1", sftpdConfig.Bindings[0].Address)
assert.Equal(t, 12000, config.GetWebDAVDConfig().Bindings[0].Port)
dataProviderConf := config.GetProviderConf()
assert.Equal(t, uint32(41), dataProviderConf.PasswordHashing.Argon2Options.Iterations)
assert.Equal(t, 10, dataProviderConf.PoolSize)
assert.Len(t, dataProviderConf.Actions.ExecuteOn, 1)
assert.Contains(t, dataProviderConf.Actions.ExecuteOn, "add")
kmsConfig := config.GetKMSConfig()
assert.Equal(t, "local", kmsConfig.Secrets.URL)
assert.Equal(t, "path", kmsConfig.Secrets.MasterKeyPath)
}
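These env-driven cases can be run in isolation from the repository root with something like the following (assuming the tests live in the config package, as the ".." configDir suggests):

    go test ./config -run FromEnv -v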

dataprovider/admin.go Normal file

@@ -0,0 +1,228 @@
package dataprovider
import (
"encoding/base64"
"errors"
"fmt"
"net"
"regexp"
"strings"
"github.com/alexedwards/argon2id"
"github.com/minio/sha256-simd"
"github.com/drakkan/sftpgo/utils"
)
// Available permissions for SFTPGo admins
const (
PermAdminAny = "*"
PermAdminAddUsers = "add_users"
PermAdminChangeUsers = "edit_users"
PermAdminDeleteUsers = "del_users"
PermAdminViewUsers = "view_users"
PermAdminViewConnections = "view_conns"
PermAdminCloseConnections = "close_conns"
PermAdminViewServerStatus = "view_status"
PermAdminManageAdmins = "manage_admins"
PermAdminQuotaScans = "quota_scans"
PermAdminManageSystem = "manage_system"
PermAdminManageDefender = "manage_defender"
PermAdminViewDefender = "view_defender"
)
var (
emailRegex = regexp.MustCompile("^(?:(?:(?:(?:[a-zA-Z]|\\d|[!#\\$%&'\\*\\+\\-\\/=\\?\\^_`{\\|}~]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])+(?:\\.([a-zA-Z]|\\d|[!#\\$%&'\\*\\+\\-\\/=\\?\\^_`{\\|}~]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])+)*)|(?:(?:\\x22)(?:(?:(?:(?:\\x20|\\x09)*(?:\\x0d\\x0a))?(?:\\x20|\\x09)+)?(?:(?:[\\x01-\\x08\\x0b\\x0c\\x0e-\\x1f\\x7f]|\\x21|[\\x23-\\x5b]|[\\x5d-\\x7e]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(?:(?:[\\x01-\\x09\\x0b\\x0c\\x0d-\\x7f]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}]))))*(?:(?:(?:\\x20|\\x09)*(?:\\x0d\\x0a))?(\\x20|\\x09)+)?(?:\\x22))))@(?:(?:(?:[a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(?:(?:[a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])(?:[a-zA-Z]|\\d|-|\\.|~|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])*(?:[a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])))\\.)+(?:(?:[a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(?:(?:[a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])(?:[a-zA-Z]|\\d|-|\\.|~|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])*(?:[a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])))\\.?$")
validAdminPerms = []string{PermAdminAny, PermAdminAddUsers, PermAdminChangeUsers, PermAdminDeleteUsers,
PermAdminViewUsers, PermAdminViewConnections, PermAdminCloseConnections, PermAdminViewServerStatus,
PermAdminManageAdmins, PermAdminQuotaScans, PermAdminManageSystem, PermAdminManageDefender,
PermAdminViewDefender}
)
// AdminFilters defines additional restrictions for SFTPGo admins
type AdminFilters struct {
// only clients connecting from these IP/Mask are allowed.
// IP/Mask must be in CIDR notation as defined in RFC 4632 and RFC 4291
// for example "192.0.2.0/24" or "2001:db8::/32"
AllowList []string `json:"allow_list,omitempty"`
}
// Admin defines a SFTPGo admin
type Admin struct {
// Database unique identifier
ID int64 `json:"id"`
// 1 enabled, 0 disabled (login is not allowed)
Status int `json:"status"`
// Username
Username string `json:"username"`
Password string `json:"password,omitempty"`
Email string `json:"email"`
Permissions []string `json:"permissions"`
Filters AdminFilters `json:"filters,omitempty"`
AdditionalInfo string `json:"additional_info,omitempty"`
}
func (a *Admin) validate() error {
if a.Username == "" {
return &ValidationError{err: "username is mandatory"}
}
if a.Password == "" {
return &ValidationError{err: "please set a password"}
}
if !usernameRegex.MatchString(a.Username) {
return &ValidationError{err: fmt.Sprintf("username %#v is not valid", a.Username)}
}
if a.Password != "" && !strings.HasPrefix(a.Password, argonPwdPrefix) {
pwd, err := argon2id.CreateHash(a.Password, argon2Params)
if err != nil {
return err
}
a.Password = pwd
}
a.Permissions = utils.RemoveDuplicates(a.Permissions)
if len(a.Permissions) == 0 {
return &ValidationError{err: "please grant some permissions to this admin"}
}
if utils.IsStringInSlice(PermAdminAny, a.Permissions) {
a.Permissions = []string{PermAdminAny}
}
for _, perm := range a.Permissions {
if !utils.IsStringInSlice(perm, validAdminPerms) {
return &ValidationError{err: fmt.Sprintf("invalid permission: %#v", perm)}
}
}
if a.Email != "" && !emailRegex.MatchString(a.Email) {
return &ValidationError{err: fmt.Sprintf("email %#v is not valid", a.Email)}
}
for _, IPMask := range a.Filters.AllowList {
_, _, err := net.ParseCIDR(IPMask)
if err != nil {
return &ValidationError{err: fmt.Sprintf("could not parse allow list entry %#v : %v", IPMask, err)}
}
}
return nil
}
// CheckPassword verifies the admin password
func (a *Admin) CheckPassword(password string) (bool, error) {
return argon2id.ComparePasswordAndHash(password, a.Password)
}
// CanLoginFromIP returns true if login from the given IP is allowed
func (a *Admin) CanLoginFromIP(ip string) bool {
if len(a.Filters.AllowList) == 0 {
return true
}
parsedIP := net.ParseIP(ip)
if parsedIP == nil {
return len(a.Filters.AllowList) == 0
}
for _, ipMask := range a.Filters.AllowList {
_, network, err := net.ParseCIDR(ipMask)
if err != nil {
continue
}
if network.Contains(parsedIP) {
return true
}
}
return false
}
func (a *Admin) checkUserAndPass(password, ip string) error {
if a.Status != 1 {
return fmt.Errorf("admin %#v is disabled", a.Username)
}
if a.Password == "" || password == "" {
return errors.New("credentials cannot be null or empty")
}
match, err := a.CheckPassword(password)
if err != nil {
return err
}
if !match {
return ErrInvalidCredentials
}
if !a.CanLoginFromIP(ip) {
return fmt.Errorf("login from IP %v not allowed", ip)
}
return nil
}
// HideConfidentialData hides admin confidential data
func (a *Admin) HideConfidentialData() {
a.Password = ""
}
// HasPermission returns true if the admin has the specified permission
func (a *Admin) HasPermission(perm string) bool {
if utils.IsStringInSlice(PermAdminAny, a.Permissions) {
return true
}
return utils.IsStringInSlice(perm, a.Permissions)
}
// GetPermissionsAsString returns permission as string
func (a *Admin) GetPermissionsAsString() string {
return strings.Join(a.Permissions, ", ")
}
// GetAllowedIPAsString returns the allowed IP as comma separated string
func (a *Admin) GetAllowedIPAsString() string {
return strings.Join(a.Filters.AllowList, ",")
}
// GetValidPerms returns the allowed admin permissions
func (a *Admin) GetValidPerms() []string {
return validAdminPerms
}
// GetInfoString returns admin's info as string.
func (a *Admin) GetInfoString() string {
var result string
if a.Email != "" {
result = fmt.Sprintf("Email: %v. ", a.Email)
}
if len(a.Filters.AllowList) > 0 {
result += fmt.Sprintf("Allowed IP/Mask: %v. ", len(a.Filters.AllowList))
}
return result
}
// GetSignature returns a signature for this admin.
// It could change after an update
func (a *Admin) GetSignature() string {
data := []byte(a.Username)
data = append(data, []byte(a.Password)...)
signature := sha256.Sum256(data)
return base64.StdEncoding.EncodeToString(signature[:])
}
func (a *Admin) getACopy() Admin {
permissions := make([]string, len(a.Permissions))
copy(permissions, a.Permissions)
filters := AdminFilters{}
filters.AllowList = make([]string, len(a.Filters.AllowList))
copy(filters.AllowList, a.Filters.AllowList)
return Admin{
ID: a.ID,
Status: a.Status,
Username: a.Username,
Password: a.Password,
Email: a.Email,
Permissions: permissions,
Filters: filters,
AdditionalInfo: a.AdditionalInfo,
}
}
// setDefaults sets the appropriate values for the default admin
func (a *Admin) setDefaults() {
a.Username = "admin"
a.Password = "password"
a.Status = 1
a.Permissions = []string{PermAdminAny}
}
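As context for validate() and CheckPassword above, a minimal standalone sketch of the argon2id round trip; it uses the library's DefaultParams, whereas SFTPGo passes its own argon2Params defined elsewhere in the package:

    package main

    import (
        "fmt"

        "github.com/alexedwards/argon2id"
    )

    func main() {
        // validate() hashes any plain text password before it is stored
        hash, err := argon2id.CreateHash("secret", argon2id.DefaultParams)
        if err != nil {
            panic(err)
        }
        // Admin.CheckPassword verifies a login attempt against the stored hash
        match, err := argon2id.ComparePasswordAndHash("secret", hash)
        fmt.Println(match, err) // true <nil>
    }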

File diff suppressed because it is too large

dataprovider/compat.go Normal file

@@ -0,0 +1,358 @@
package dataprovider
import (
"encoding/json"
"fmt"
"io/ioutil"
"path/filepath"
"github.com/drakkan/sftpgo/kms"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/vfs"
)
type compatUserV2 struct {
ID int64 `json:"id"`
Username string `json:"username"`
Password string `json:"password,omitempty"`
PublicKeys []string `json:"public_keys,omitempty"`
HomeDir string `json:"home_dir"`
UID int `json:"uid"`
GID int `json:"gid"`
MaxSessions int `json:"max_sessions"`
QuotaSize int64 `json:"quota_size"`
QuotaFiles int `json:"quota_files"`
Permissions []string `json:"permissions"`
UsedQuotaSize int64 `json:"used_quota_size"`
UsedQuotaFiles int `json:"used_quota_files"`
LastQuotaUpdate int64 `json:"last_quota_update"`
UploadBandwidth int64 `json:"upload_bandwidth"`
DownloadBandwidth int64 `json:"download_bandwidth"`
ExpirationDate int64 `json:"expiration_date"`
LastLogin int64 `json:"last_login"`
Status int `json:"status"`
}
type compatS3FsConfigV4 struct {
Bucket string `json:"bucket,omitempty"`
KeyPrefix string `json:"key_prefix,omitempty"`
Region string `json:"region,omitempty"`
AccessKey string `json:"access_key,omitempty"`
AccessSecret string `json:"access_secret,omitempty"`
Endpoint string `json:"endpoint,omitempty"`
StorageClass string `json:"storage_class,omitempty"`
UploadPartSize int64 `json:"upload_part_size,omitempty"`
UploadConcurrency int `json:"upload_concurrency,omitempty"`
}
type compatGCSFsConfigV4 struct {
Bucket string `json:"bucket,omitempty"`
KeyPrefix string `json:"key_prefix,omitempty"`
CredentialFile string `json:"-"`
Credentials []byte `json:"credentials,omitempty"`
AutomaticCredentials int `json:"automatic_credentials,omitempty"`
StorageClass string `json:"storage_class,omitempty"`
}
type compatAzBlobFsConfigV4 struct {
Container string `json:"container,omitempty"`
AccountName string `json:"account_name,omitempty"`
AccountKey string `json:"account_key,omitempty"`
Endpoint string `json:"endpoint,omitempty"`
SASURL string `json:"sas_url,omitempty"`
KeyPrefix string `json:"key_prefix,omitempty"`
UploadPartSize int64 `json:"upload_part_size,omitempty"`
UploadConcurrency int `json:"upload_concurrency,omitempty"`
UseEmulator bool `json:"use_emulator,omitempty"`
AccessTier string `json:"access_tier,omitempty"`
}
type compatFilesystemV4 struct {
Provider FilesystemProvider `json:"provider"`
S3Config compatS3FsConfigV4 `json:"s3config,omitempty"`
GCSConfig compatGCSFsConfigV4 `json:"gcsconfig,omitempty"`
AzBlobConfig compatAzBlobFsConfigV4 `json:"azblobconfig,omitempty"`
}
type compatUserV4 struct {
ID int64 `json:"id"`
Status int `json:"status"`
Username string `json:"username"`
ExpirationDate int64 `json:"expiration_date"`
Password string `json:"password,omitempty"`
PublicKeys []string `json:"public_keys,omitempty"`
HomeDir string `json:"home_dir"`
VirtualFolders []vfs.VirtualFolder `json:"virtual_folders,omitempty"`
UID int `json:"uid"`
GID int `json:"gid"`
MaxSessions int `json:"max_sessions"`
QuotaSize int64 `json:"quota_size"`
QuotaFiles int `json:"quota_files"`
Permissions map[string][]string `json:"permissions"`
UsedQuotaSize int64 `json:"used_quota_size"`
UsedQuotaFiles int `json:"used_quota_files"`
LastQuotaUpdate int64 `json:"last_quota_update"`
UploadBandwidth int64 `json:"upload_bandwidth"`
DownloadBandwidth int64 `json:"download_bandwidth"`
LastLogin int64 `json:"last_login"`
Filters UserFilters `json:"filters"`
FsConfig compatFilesystemV4 `json:"filesystem"`
}
type backupDataV4Compat struct {
Users []compatUserV4 `json:"users"`
Folders []vfs.BaseVirtualFolder `json:"folders"`
}
func createUserFromV4(u compatUserV4, fsConfig Filesystem) User {
user := User{
ID: u.ID,
Status: u.Status,
Username: u.Username,
ExpirationDate: u.ExpirationDate,
Password: u.Password,
PublicKeys: u.PublicKeys,
HomeDir: u.HomeDir,
VirtualFolders: u.VirtualFolders,
UID: u.UID,
GID: u.GID,
MaxSessions: u.MaxSessions,
QuotaSize: u.QuotaSize,
QuotaFiles: u.QuotaFiles,
Permissions: u.Permissions,
UsedQuotaSize: u.UsedQuotaSize,
UsedQuotaFiles: u.UsedQuotaFiles,
LastQuotaUpdate: u.LastQuotaUpdate,
UploadBandwidth: u.UploadBandwidth,
DownloadBandwidth: u.DownloadBandwidth,
LastLogin: u.LastLogin,
Filters: u.Filters,
}
user.FsConfig = fsConfig
user.SetEmptySecretsIfNil()
return user
}
func convertUserToV4(u User, fsConfig compatFilesystemV4) compatUserV4 {
user := compatUserV4{
ID: u.ID,
Status: u.Status,
Username: u.Username,
ExpirationDate: u.ExpirationDate,
Password: u.Password,
PublicKeys: u.PublicKeys,
HomeDir: u.HomeDir,
VirtualFolders: u.VirtualFolders,
UID: u.UID,
GID: u.GID,
MaxSessions: u.MaxSessions,
QuotaSize: u.QuotaSize,
QuotaFiles: u.QuotaFiles,
Permissions: u.Permissions,
UsedQuotaSize: u.UsedQuotaSize,
UsedQuotaFiles: u.UsedQuotaFiles,
LastQuotaUpdate: u.LastQuotaUpdate,
UploadBandwidth: u.UploadBandwidth,
DownloadBandwidth: u.DownloadBandwidth,
LastLogin: u.LastLogin,
Filters: u.Filters,
}
user.FsConfig = fsConfig
return user
}
func getGCSCredentialsFromV4(config compatGCSFsConfigV4) (*kms.Secret, error) {
secret := kms.NewEmptySecret()
var err error
if len(config.Credentials) > 0 {
secret = kms.NewPlainSecret(string(config.Credentials))
return secret, nil
}
if config.CredentialFile != "" {
creds, err := ioutil.ReadFile(config.CredentialFile)
if err != nil {
return secret, err
}
secret = kms.NewPlainSecret(string(creds))
return secret, nil
}
return secret, err
}
func getGCSCredentialsFromV6(config vfs.GCSFsConfig, username string) (string, error) {
if config.Credentials == nil {
config.Credentials = kms.NewEmptySecret()
}
if config.Credentials.IsEmpty() {
config.CredentialFile = filepath.Join(credentialsDirPath, fmt.Sprintf("%v_gcs_credentials.json",
username))
creds, err := ioutil.ReadFile(config.CredentialFile)
if err != nil {
return "", err
}
err = json.Unmarshal(creds, &config.Credentials)
if err != nil {
return "", err
}
}
if config.Credentials.IsEncrypted() {
err := config.Credentials.Decrypt()
if err != nil {
return "", err
}
// v4 stored GCS credentials in plain text, so return the decrypted payload
return config.Credentials.GetPayload(), nil
}
return "", nil
}
func convertFsConfigToV4(fs Filesystem, username string) (compatFilesystemV4, error) {
fsV4 := compatFilesystemV4{
Provider: fs.Provider,
S3Config: compatS3FsConfigV4{},
AzBlobConfig: compatAzBlobFsConfigV4{},
GCSConfig: compatGCSFsConfigV4{},
}
switch fs.Provider {
case S3FilesystemProvider:
fsV4.S3Config = compatS3FsConfigV4{
Bucket: fs.S3Config.Bucket,
KeyPrefix: fs.S3Config.KeyPrefix,
Region: fs.S3Config.Region,
AccessKey: fs.S3Config.AccessKey,
AccessSecret: "",
Endpoint: fs.S3Config.Endpoint,
StorageClass: fs.S3Config.StorageClass,
UploadPartSize: fs.S3Config.UploadPartSize,
UploadConcurrency: fs.S3Config.UploadConcurrency,
}
if fs.S3Config.AccessSecret.IsEncrypted() {
err := fs.S3Config.AccessSecret.Decrypt()
if err != nil {
return fsV4, err
}
secretV4, err := utils.EncryptData(fs.S3Config.AccessSecret.GetPayload())
if err != nil {
return fsV4, err
}
fsV4.S3Config.AccessSecret = secretV4
}
case AzureBlobFilesystemProvider:
fsV4.AzBlobConfig = compatAzBlobFsConfigV4{
Container: fs.AzBlobConfig.Container,
AccountName: fs.AzBlobConfig.AccountName,
AccountKey: "",
Endpoint: fs.AzBlobConfig.Endpoint,
SASURL: fs.AzBlobConfig.SASURL,
KeyPrefix: fs.AzBlobConfig.KeyPrefix,
UploadPartSize: fs.AzBlobConfig.UploadPartSize,
UploadConcurrency: fs.AzBlobConfig.UploadConcurrency,
UseEmulator: fs.AzBlobConfig.UseEmulator,
AccessTier: fs.AzBlobConfig.AccessTier,
}
if fs.AzBlobConfig.AccountKey.IsEncrypted() {
err := fs.AzBlobConfig.AccountKey.Decrypt()
if err != nil {
return fsV4, err
}
secretV4, err := utils.EncryptData(fs.AzBlobConfig.AccountKey.GetPayload())
if err != nil {
return fsV4, err
}
fsV4.AzBlobConfig.AccountKey = secretV4
}
case GCSFilesystemProvider:
fsV4.GCSConfig = compatGCSFsConfigV4{
Bucket: fs.GCSConfig.Bucket,
KeyPrefix: fs.GCSConfig.KeyPrefix,
CredentialFile: fs.GCSConfig.CredentialFile,
AutomaticCredentials: fs.GCSConfig.AutomaticCredentials,
StorageClass: fs.GCSConfig.StorageClass,
}
if fs.GCSConfig.AutomaticCredentials == 0 {
creds, err := getGCSCredentialsFromV6(fs.GCSConfig, username)
if err != nil {
return fsV4, err
}
fsV4.GCSConfig.Credentials = []byte(creds)
}
default:
// a provider not supported in v4, the configuration will be lost
providerLog(logger.LevelWarn, "provider %v was not supported in v4, the configuration for the user %#v will be lost",
fs.Provider, username)
fsV4.Provider = 0
}
return fsV4, nil
}
func convertFsConfigFromV4(compatFs compatFilesystemV4, username string) (Filesystem, error) {
fsConfig := Filesystem{
Provider: compatFs.Provider,
S3Config: vfs.S3FsConfig{},
AzBlobConfig: vfs.AzBlobFsConfig{},
GCSConfig: vfs.GCSFsConfig{},
}
switch compatFs.Provider {
case S3FilesystemProvider:
fsConfig.S3Config = vfs.S3FsConfig{
Bucket: compatFs.S3Config.Bucket,
KeyPrefix: compatFs.S3Config.KeyPrefix,
Region: compatFs.S3Config.Region,
AccessKey: compatFs.S3Config.AccessKey,
AccessSecret: kms.NewEmptySecret(),
Endpoint: compatFs.S3Config.Endpoint,
StorageClass: compatFs.S3Config.StorageClass,
UploadPartSize: compatFs.S3Config.UploadPartSize,
UploadConcurrency: compatFs.S3Config.UploadConcurrency,
}
if compatFs.S3Config.AccessSecret != "" {
secret, err := kms.GetSecretFromCompatString(compatFs.S3Config.AccessSecret)
if err != nil {
providerLog(logger.LevelError, "unable to convert v4 filesystem for user %#v: %v", username, err)
return fsConfig, err
}
fsConfig.S3Config.AccessSecret = secret
}
case AzureBlobFilesystemProvider:
fsConfig.AzBlobConfig = vfs.AzBlobFsConfig{
Container: compatFs.AzBlobConfig.Container,
AccountName: compatFs.AzBlobConfig.AccountName,
AccountKey: kms.NewEmptySecret(),
Endpoint: compatFs.AzBlobConfig.Endpoint,
SASURL: compatFs.AzBlobConfig.SASURL,
KeyPrefix: compatFs.AzBlobConfig.KeyPrefix,
UploadPartSize: compatFs.AzBlobConfig.UploadPartSize,
UploadConcurrency: compatFs.AzBlobConfig.UploadConcurrency,
UseEmulator: compatFs.AzBlobConfig.UseEmulator,
AccessTier: compatFs.AzBlobConfig.AccessTier,
}
if compatFs.AzBlobConfig.AccountKey != "" {
secret, err := kms.GetSecretFromCompatString(compatFs.AzBlobConfig.AccountKey)
if err != nil {
providerLog(logger.LevelError, "unable to convert v4 filesystem for user %#v: %v", username, err)
return fsConfig, err
}
fsConfig.AzBlobConfig.AccountKey = secret
}
case GCSFilesystemProvider:
fsConfig.GCSConfig = vfs.GCSFsConfig{
Bucket: compatFs.GCSConfig.Bucket,
KeyPrefix: compatFs.GCSConfig.KeyPrefix,
CredentialFile: compatFs.GCSConfig.CredentialFile,
AutomaticCredentials: compatFs.GCSConfig.AutomaticCredentials,
StorageClass: compatFs.GCSConfig.StorageClass,
}
if compatFs.GCSConfig.AutomaticCredentials == 0 {
compatFs.GCSConfig.CredentialFile = filepath.Join(credentialsDirPath, fmt.Sprintf("%v_gcs_credentials.json",
username))
}
secret, err := getCGSCredentialsFromV4(compatFs.GCSConfig)
if err != nil {
providerLog(logger.LevelError, "unable to convert v4 filesystem for user %#v: %v", username, err)
return fsConfig, err
}
fsConfig.GCSConfig.Credentials = secret
}
return fsConfig, nil
}
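Note: the two converters above are inverses around the secret format change: going to v4 re-encrypts the decrypted payload with utils.EncryptData, while coming from v4 parses the opaque compat string back into a kms secret. A minimal sketch of that round trip, assuming the repository import paths of this era (github.com/drakkan/sftpgo/kms and .../utils) and only the calls visible above; it is an illustration, not part of the commit:

package main

import (
	"fmt"

	"github.com/drakkan/sftpgo/kms"
	"github.com/drakkan/sftpgo/utils"
)

func main() {
	// downgrade direction: store the plain payload as a v4 compat string
	compat, err := utils.EncryptData("my-access-secret")
	if err != nil {
		panic(err)
	}
	// upgrade direction: parse the compat string back into a kms secret,
	// as convertFsConfigFromV4 does for S3 and Azure Blob secrets
	secret, err := kms.GetSecretFromCompatString(compat)
	if err != nil {
		panic(err)
	}
	if secret.IsEncrypted() {
		if err := secret.Decrypt(); err != nil {
			panic(err)
		}
	}
	fmt.Println(secret.GetPayload()) // my-access-secret
}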

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -3,6 +3,7 @@
package dataprovider
import (
"context"
"database/sql"
"fmt"
"strings"
@@ -37,6 +38,21 @@ const (
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `unique_mapping` UNIQUE (`user_id`, `folder_id`);" +
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `folders_mapping_folder_id_fk_folders_id` FOREIGN KEY (`folder_id`) REFERENCES `{{folders}}` (`id`) ON DELETE CASCADE;" +
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `folders_mapping_user_id_fk_users_id` FOREIGN KEY (`user_id`) REFERENCES `{{users}}` (`id`) ON DELETE CASCADE;"
mysqlV6SQL = "ALTER TABLE `{{users}}` ADD COLUMN `additional_info` longtext NULL;"
mysqlV6DownSQL = "ALTER TABLE `{{users}}` DROP COLUMN `additional_info`;"
mysqlV7SQL = "CREATE TABLE `{{admins}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `username` varchar(255) NOT NULL UNIQUE, " +
"`password` varchar(255) NOT NULL, `email` varchar(255) NULL, `status` integer NOT NULL, `permissions` longtext NOT NULL, " +
"`filters` longtext NULL, `additional_info` longtext NULL);"
mysqlV7DownSQL = "DROP TABLE `{{admins}}` CASCADE;"
mysqlV8SQL = "ALTER TABLE `{{folders}}` ADD COLUMN `name` varchar(255) NULL;" +
"ALTER TABLE `{{folders}}` MODIFY `path` varchar(512) NULL;" +
"ALTER TABLE `{{folders}}` DROP INDEX `path`;" +
"UPDATE `{{folders}}` f1 SET name = (SELECT CONCAT('folder',f2.id) FROM `{{folders}}` f2 WHERE f2.id = f1.id);" +
"ALTER TABLE `{{folders}}` MODIFY `name` varchar(255) NOT NULL;" +
"ALTER TABLE `folders` ADD CONSTRAINT `name` UNIQUE (`name`);"
mysqlV8DownSQL = "ALTER TABLE `{{folders}}` DROP COLUMN `name`;" +
"ALTER TABLE `{{folders}}` MODIFY `path` varchar(512) NOT NULL;" +
"ALTER TABLE `{{folders}}` ADD CONSTRAINT `path` UNIQUE (`path`);"
)
// MySQLProvider auth provider for MySQL/MariaDB database
@@ -56,8 +72,13 @@ func initializeMySQLProvider() error {
providerLog(logger.LevelDebug, "mysql database handle created, connection string: %#v, pool size: %v",
getMySQLConnectionString(true), config.PoolSize)
dbHandle.SetMaxOpenConns(config.PoolSize)
dbHandle.SetConnMaxLifetime(1800 * time.Second)
provider = MySQLProvider{dbHandle: dbHandle}
if config.PoolSize > 0 {
dbHandle.SetMaxIdleConns(config.PoolSize)
} else {
dbHandle.SetMaxIdleConns(2)
}
dbHandle.SetConnMaxLifetime(240 * time.Second)
provider = &MySQLProvider{dbHandle: dbHandle}
} else {
providerLog(logger.LevelWarn, "error creating mysql database handler, connection string: %#v, error: %v",
getMySQLConnectionString(true), err)
@@ -66,7 +87,7 @@ func initializeMySQLProvider() error {
}
func getMySQLConnectionString(redactedPwd bool) string {
var connectionString string
if len(config.ConnectionString) == 0 {
if config.ConnectionString == "" {
password := config.Password
if redactedPwd {
password = "[redacted]"
@@ -79,96 +100,130 @@ func getMySQLConnectionString(redactedPwd bool) string {
return connectionString
}
func (p MySQLProvider) checkAvailability() error {
func (p *MySQLProvider) checkAvailability() error {
return sqlCommonCheckAvailability(p.dbHandle)
}
func (p MySQLProvider) validateUserAndPass(username string, password string) (User, error) {
return sqlCommonValidateUserAndPass(username, password, p.dbHandle)
func (p *MySQLProvider) validateUserAndPass(username, password, ip, protocol string) (User, error) {
return sqlCommonValidateUserAndPass(username, password, ip, protocol, p.dbHandle)
}
func (p MySQLProvider) validateUserAndPubKey(username string, publicKey []byte) (User, string, error) {
func (p *MySQLProvider) validateUserAndPubKey(username string, publicKey []byte) (User, string, error) {
return sqlCommonValidateUserAndPubKey(username, publicKey, p.dbHandle)
}
func (p MySQLProvider) getUserByID(ID int64) (User, error) {
return sqlCommonGetUserByID(ID, p.dbHandle)
}
func (p MySQLProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
func (p *MySQLProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
return sqlCommonUpdateQuota(username, filesAdd, sizeAdd, reset, p.dbHandle)
}
func (p MySQLProvider) getUsedQuota(username string) (int, int64, error) {
func (p *MySQLProvider) getUsedQuota(username string) (int, int64, error) {
return sqlCommonGetUsedQuota(username, p.dbHandle)
}
func (p MySQLProvider) updateLastLogin(username string) error {
func (p *MySQLProvider) updateLastLogin(username string) error {
return sqlCommonUpdateLastLogin(username, p.dbHandle)
}
func (p MySQLProvider) userExists(username string) (User, error) {
return sqlCommonCheckUserExists(username, p.dbHandle)
func (p *MySQLProvider) userExists(username string) (User, error) {
return sqlCommonGetUserByUsername(username, p.dbHandle)
}
func (p MySQLProvider) addUser(user User) error {
func (p *MySQLProvider) addUser(user *User) error {
return sqlCommonAddUser(user, p.dbHandle)
}
func (p MySQLProvider) updateUser(user User) error {
func (p *MySQLProvider) updateUser(user *User) error {
return sqlCommonUpdateUser(user, p.dbHandle)
}
func (p MySQLProvider) deleteUser(user User) error {
func (p *MySQLProvider) deleteUser(user *User) error {
return sqlCommonDeleteUser(user, p.dbHandle)
}
func (p MySQLProvider) dumpUsers() ([]User, error) {
func (p *MySQLProvider) dumpUsers() ([]User, error) {
return sqlCommonDumpUsers(p.dbHandle)
}
func (p MySQLProvider) getUsers(limit int, offset int, order string, username string) ([]User, error) {
return sqlCommonGetUsers(limit, offset, order, username, p.dbHandle)
func (p *MySQLProvider) getUsers(limit int, offset int, order string) ([]User, error) {
return sqlCommonGetUsers(limit, offset, order, p.dbHandle)
}
func (p MySQLProvider) dumpFolders() ([]vfs.BaseVirtualFolder, error) {
func (p *MySQLProvider) dumpFolders() ([]vfs.BaseVirtualFolder, error) {
return sqlCommonDumpFolders(p.dbHandle)
}
func (p MySQLProvider) getFolders(limit, offset int, order, folderPath string) ([]vfs.BaseVirtualFolder, error) {
return sqlCommonGetFolders(limit, offset, order, folderPath, p.dbHandle)
func (p *MySQLProvider) getFolders(limit, offset int, order string) ([]vfs.BaseVirtualFolder, error) {
return sqlCommonGetFolders(limit, offset, order, p.dbHandle)
}
func (p MySQLProvider) getFolderByPath(mappedPath string) (vfs.BaseVirtualFolder, error) {
return sqlCommonCheckFolderExists(mappedPath, p.dbHandle)
func (p *MySQLProvider) getFolderByName(name string) (vfs.BaseVirtualFolder, error) {
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
return sqlCommonGetFolderByName(ctx, name, p.dbHandle)
}
func (p MySQLProvider) addFolder(folder vfs.BaseVirtualFolder) error {
func (p *MySQLProvider) addFolder(folder *vfs.BaseVirtualFolder) error {
return sqlCommonAddFolder(folder, p.dbHandle)
}
func (p MySQLProvider) deleteFolder(folder vfs.BaseVirtualFolder) error {
func (p *MySQLProvider) updateFolder(folder *vfs.BaseVirtualFolder) error {
return sqlCommonUpdateFolder(folder, p.dbHandle)
}
func (p *MySQLProvider) deleteFolder(folder *vfs.BaseVirtualFolder) error {
return sqlCommonDeleteFolder(folder, p.dbHandle)
}
func (p MySQLProvider) updateFolderQuota(mappedPath string, filesAdd int, sizeAdd int64, reset bool) error {
return sqlCommonUpdateFolderQuota(mappedPath, filesAdd, sizeAdd, reset, p.dbHandle)
func (p *MySQLProvider) updateFolderQuota(name string, filesAdd int, sizeAdd int64, reset bool) error {
return sqlCommonUpdateFolderQuota(name, filesAdd, sizeAdd, reset, p.dbHandle)
}
func (p MySQLProvider) getUsedFolderQuota(mappedPath string) (int, int64, error) {
return sqlCommonGetFolderUsedQuota(mappedPath, p.dbHandle)
func (p *MySQLProvider) getUsedFolderQuota(name string) (int, int64, error) {
return sqlCommonGetFolderUsedQuota(name, p.dbHandle)
}
func (p MySQLProvider) close() error {
func (p *MySQLProvider) adminExists(username string) (Admin, error) {
return sqlCommonGetAdminByUsername(username, p.dbHandle)
}
func (p *MySQLProvider) addAdmin(admin *Admin) error {
return sqlCommonAddAdmin(admin, p.dbHandle)
}
func (p *MySQLProvider) updateAdmin(admin *Admin) error {
return sqlCommonUpdateAdmin(admin, p.dbHandle)
}
func (p *MySQLProvider) deleteAdmin(admin *Admin) error {
return sqlCommonDeleteAdmin(admin, p.dbHandle)
}
func (p *MySQLProvider) getAdmins(limit int, offset int, order string) ([]Admin, error) {
return sqlCommonGetAdmins(limit, offset, order, p.dbHandle)
}
func (p *MySQLProvider) dumpAdmins() ([]Admin, error) {
return sqlCommonDumpAdmins(p.dbHandle)
}
func (p *MySQLProvider) validateAdminAndPass(username, password, ip string) (Admin, error) {
return sqlCommonValidateAdminAndPass(username, password, ip, p.dbHandle)
}
func (p *MySQLProvider) close() error {
return p.dbHandle.Close()
}
func (p MySQLProvider) reloadConfig() error {
func (p *MySQLProvider) reloadConfig() error {
return nil
}
// initializeDatabase creates the initial database structure
func (p MySQLProvider) initializeDatabase() error {
func (p *MySQLProvider) initializeDatabase() error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, false)
if err == nil && dbVersion.Version > 0 {
return ErrNoInitRequired
}
sqlUsers := strings.Replace(mysqlUsersTableSQL, "{{users}}", sqlTableUsers, 1)
tx, err := p.dbHandle.Begin()
if err != nil {
@@ -192,46 +247,150 @@ func (p MySQLProvider) initializeDatabase() error {
return tx.Commit()
}
func (p MySQLProvider) migrateDatabase() error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle)
func (p *MySQLProvider) migrateDatabase() error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
if err != nil {
return err
}
if dbVersion.Version == sqlDatabaseVersion {
providerLog(logger.LevelDebug, "sql database is updated, current version: %v", dbVersion.Version)
return nil
providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", dbVersion.Version)
return ErrNoInitRequired
}
switch dbVersion.Version {
case 1:
err = updateMySQLDatabaseFrom1To2(p.dbHandle)
if err != nil {
return err
}
err = updateMySQLDatabaseFrom2To3(p.dbHandle)
if err != nil {
return err
}
return updateMySQLDatabaseFrom3To4(p.dbHandle)
return updateMySQLDatabaseFromV1(p.dbHandle)
case 2:
err = updateMySQLDatabaseFrom2To3(p.dbHandle)
return updateMySQLDatabaseFromV2(p.dbHandle)
case 3:
return updateMySQLDatabaseFromV3(p.dbHandle)
case 4:
return updateMySQLDatabaseFromV4(p.dbHandle)
case 5:
return updateMySQLDatabaseFromV5(p.dbHandle)
case 6:
return updateMySQLDatabaseFromV6(p.dbHandle)
case 7:
return updateMySQLDatabaseFromV7(p.dbHandle)
default:
if dbVersion.Version > sqlDatabaseVersion {
providerLog(logger.LevelWarn, "database version %v is newer than the supported: %v", dbVersion.Version,
sqlDatabaseVersion)
logger.WarnToConsole("database version %v is newer than the supported: %v", dbVersion.Version,
sqlDatabaseVersion)
return nil
}
return fmt.Errorf("Database version not handled: %v", dbVersion.Version)
}
}
//nolint:dupl
func (p *MySQLProvider) revertDatabase(targetVersion int) error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
if err != nil {
return err
}
return updateMySQLDatabaseFrom3To4(p.dbHandle)
case 3:
return updateMySQLDatabaseFrom3To4(p.dbHandle)
if dbVersion.Version == targetVersion {
return fmt.Errorf("current version match target version, nothing to do")
}
switch dbVersion.Version {
case 8:
err = downgradeMySQLDatabaseFrom8To7(p.dbHandle)
if err != nil {
return err
}
err = downgradeMySQLDatabaseFrom7To6(p.dbHandle)
if err != nil {
return err
}
err = downgradeMySQLDatabaseFrom6To5(p.dbHandle)
if err != nil {
return err
}
return downgradeMySQLDatabaseFrom5To4(p.dbHandle)
case 7:
err = downgradeMySQLDatabaseFrom7To6(p.dbHandle)
if err != nil {
return err
}
err = downgradeMySQLDatabaseFrom6To5(p.dbHandle)
if err != nil {
return err
}
return downgradeMySQLDatabaseFrom5To4(p.dbHandle)
case 6:
err = downgradeMySQLDatabaseFrom6To5(p.dbHandle)
if err != nil {
return err
}
return downgradeMySQLDatabaseFrom5To4(p.dbHandle)
case 5:
return downgradeMySQLDatabaseFrom5To4(p.dbHandle)
default:
return fmt.Errorf("Database version not handled: %v", dbVersion.Version)
}
}
func updateMySQLDatabaseFromV1(dbHandle *sql.DB) error {
err := updateMySQLDatabaseFrom1To2(dbHandle)
if err != nil {
return err
}
return updateMySQLDatabaseFromV2(dbHandle)
}
func updateMySQLDatabaseFromV2(dbHandle *sql.DB) error {
err := updateMySQLDatabaseFrom2To3(dbHandle)
if err != nil {
return err
}
return updateMySQLDatabaseFromV3(dbHandle)
}
func updateMySQLDatabaseFromV3(dbHandle *sql.DB) error {
err := updateMySQLDatabaseFrom3To4(dbHandle)
if err != nil {
return err
}
return updateMySQLDatabaseFromV4(dbHandle)
}
func updateMySQLDatabaseFromV4(dbHandle *sql.DB) error {
err := updateMySQLDatabaseFrom4To5(dbHandle)
if err != nil {
return err
}
return updateMySQLDatabaseFromV5(dbHandle)
}
func updateMySQLDatabaseFromV5(dbHandle *sql.DB) error {
err := updateMySQLDatabaseFrom5To6(dbHandle)
if err != nil {
return err
}
return updateMySQLDatabaseFromV6(dbHandle)
}
func updateMySQLDatabaseFromV6(dbHandle *sql.DB) error {
err := updateMySQLDatabaseFrom6To7(dbHandle)
if err != nil {
return err
}
return updateMySQLDatabaseFromV7(dbHandle)
}
func updateMySQLDatabaseFromV7(dbHandle *sql.DB) error {
return updateMySQLDatabaseFrom7To8(dbHandle)
}
func updateMySQLDatabaseFrom1To2(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 1 -> 2")
providerLog(logger.LevelInfo, "updating database version: 1 -> 2")
sql := strings.Replace(mysqlV2SQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 2)
}
func updateMySQLDatabaseFrom2To3(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 2 -> 3")
providerLog(logger.LevelInfo, "updating database version: 2 -> 3")
sql := strings.Replace(mysqlV3SQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 3)
@@ -240,3 +399,53 @@ func updateMySQLDatabaseFrom2To3(dbHandle *sql.DB) error {
func updateMySQLDatabaseFrom3To4(dbHandle *sql.DB) error {
return sqlCommonUpdateDatabaseFrom3To4(mysqlV4SQL, dbHandle)
}
func updateMySQLDatabaseFrom4To5(dbHandle *sql.DB) error {
return sqlCommonUpdateDatabaseFrom4To5(dbHandle)
}
func updateMySQLDatabaseFrom5To6(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 5 -> 6")
providerLog(logger.LevelInfo, "updating database version: 5 -> 6")
sql := strings.Replace(mysqlV6SQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 6)
}
func updateMySQLDatabaseFrom6To7(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 6 -> 7")
providerLog(logger.LevelInfo, "updating database version: 6 -> 7")
sql := strings.Replace(mysqlV7SQL, "{{admins}}", sqlTableAdmins, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 7)
}
func updateMySQLDatabaseFrom7To8(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 7 -> 8")
providerLog(logger.LevelInfo, "updating database version: 7 -> 8")
sql := strings.ReplaceAll(mysqlV8SQL, "{{folders}}", sqlTableFolders)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 8)
}
func downgradeMySQLDatabaseFrom8To7(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 8 -> 7")
providerLog(logger.LevelInfo, "downgrading database version: 8 -> 7")
sql := strings.ReplaceAll(mysqlV8DownSQL, "{{folders}}", sqlTableFolders)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 7)
}
func downgradeMySQLDatabaseFrom7To6(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 7 -> 6")
providerLog(logger.LevelInfo, "downgrading database version: 7 -> 6")
sql := strings.Replace(mysqlV7DownSQL, "{{admins}}", sqlTableAdmins, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 6)
}
func downgradeMySQLDatabaseFrom6To5(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 6 -> 5")
providerLog(logger.LevelInfo, "downgrading database version: 6 -> 5")
sql := strings.Replace(mysqlV6DownSQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 5)
}
func downgradeMySQLDatabaseFrom5To4(dbHandle *sql.DB) error {
return sqlCommonDowngradeDatabaseFrom5To4(dbHandle)
}
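Note: each updateMySQLDatabaseFromVn helper performs one step and then delegates to the next, so any starting version walks forward to the latest schema one version at a time. A generic, illustrative sketch of the same chained pattern (the names and step table here are hypothetical, not the repository's API):

package main

import (
	"database/sql"
	"fmt"
)

// one function per source version; steps[n] migrates version n to n+1
var steps = map[int]func(*sql.DB) error{
	// 5: updateDatabaseFrom5To6,
	// 6: updateDatabaseFrom6To7,
	// 7: updateDatabaseFrom7To8,
}

func migrate(db *sql.DB, current, target int) error {
	for v := current; v < target; v++ {
		step, ok := steps[v]
		if !ok {
			return fmt.Errorf("database version not handled: %v", v)
		}
		if err := step(db); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// already at the target version: the loop body never runs
	fmt.Println(migrate(nil, 8, 8)) // <nil>
}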


@@ -3,9 +3,11 @@
package dataprovider
import (
"context"
"database/sql"
"fmt"
"strings"
"time"
// we import lib/pq here to be able to disable PostgreSQL support using a build tag
_ "github.com/lib/pq"
@@ -35,6 +37,24 @@ ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "folders_mapping_folder_id_fk_f
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "folders_mapping_user_id_fk_users_id" FOREIGN KEY ("user_id") REFERENCES "{{users}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
CREATE INDEX "folders_mapping_folder_id_idx" ON "{{folders_mapping}}" ("folder_id");
CREATE INDEX "folders_mapping_user_id_idx" ON "{{folders_mapping}}" ("user_id");
`
pgsqlV6SQL = `ALTER TABLE "{{users}}" ADD COLUMN "additional_info" text NULL;`
pgsqlV6DownSQL = `ALTER TABLE "{{users}}" DROP COLUMN "additional_info" CASCADE;`
pgsqlV7SQL = `CREATE TABLE "{{admins}}" ("id" serial NOT NULL PRIMARY KEY, "username" varchar(255) NOT NULL UNIQUE,
"password" varchar(255) NOT NULL, "email" varchar(255) NULL, "status" integer NOT NULL, "permissions" text NOT NULL,
"filters" text NULL, "additional_info" text NULL);
`
pgsqlV7DownSQL = `DROP TABLE "{{admins}}" CASCADE;`
pgsqlV8SQL = `ALTER TABLE "{{folders}}" ADD COLUMN "name" varchar(255) NULL;
ALTER TABLE "folders" ALTER COLUMN "path" DROP NOT NULL;
ALTER TABLE "{{folders}}" DROP CONSTRAINT IF EXISTS folders_path_key;
UPDATE "{{folders}}" f1 SET name = (SELECT CONCAT('folder',f2.id) FROM "{{folders}}" f2 WHERE f2.id = f1.id);
ALTER TABLE "{{folders}}" ALTER COLUMN "name" SET NOT NULL;
ALTER TABLE "{{folders}}" ADD CONSTRAINT "folders_name_uniq" UNIQUE ("name");
`
pgsqlV8DownSQL = `ALTER TABLE "{{folders}}" DROP COLUMN "name" CASCADE;
ALTER TABLE "{{folders}}" ALTER COLUMN "path" SET NOT NULL;
ALTER TABLE "{{folders}}" ADD CONSTRAINT folders_path_key UNIQUE (path);
`
)
@@ -55,7 +75,13 @@ func initializePGSQLProvider() error {
providerLog(logger.LevelDebug, "postgres database handle created, connection string: %#v, pool size: %v",
getPGSQLConnectionString(true), config.PoolSize)
dbHandle.SetMaxOpenConns(config.PoolSize)
provider = PGSQLProvider{dbHandle: dbHandle}
if config.PoolSize > 0 {
dbHandle.SetMaxIdleConns(config.PoolSize)
} else {
dbHandle.SetMaxIdleConns(2)
}
dbHandle.SetConnMaxLifetime(240 * time.Second)
provider = &PGSQLProvider{dbHandle: dbHandle}
} else {
providerLog(logger.LevelWarn, "error creating postgres database handler, connection string: %#v, error: %v",
getPGSQLConnectionString(true), err)
@@ -65,7 +91,7 @@ func initializePGSQLProvider() error {
func getPGSQLConnectionString(redactedPwd bool) string {
var connectionString string
if len(config.ConnectionString) == 0 {
if config.ConnectionString == "" {
password := config.Password
if redactedPwd {
password = "[redacted]"
@@ -78,96 +104,130 @@ func getPGSQLConnectionString(redactedPwd bool) string {
return connectionString
}
func (p PGSQLProvider) checkAvailability() error {
func (p *PGSQLProvider) checkAvailability() error {
return sqlCommonCheckAvailability(p.dbHandle)
}
func (p PGSQLProvider) validateUserAndPass(username string, password string) (User, error) {
return sqlCommonValidateUserAndPass(username, password, p.dbHandle)
func (p *PGSQLProvider) validateUserAndPass(username, password, ip, protocol string) (User, error) {
return sqlCommonValidateUserAndPass(username, password, ip, protocol, p.dbHandle)
}
func (p PGSQLProvider) validateUserAndPubKey(username string, publicKey []byte) (User, string, error) {
func (p *PGSQLProvider) validateUserAndPubKey(username string, publicKey []byte) (User, string, error) {
return sqlCommonValidateUserAndPubKey(username, publicKey, p.dbHandle)
}
func (p PGSQLProvider) getUserByID(ID int64) (User, error) {
return sqlCommonGetUserByID(ID, p.dbHandle)
}
func (p PGSQLProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
func (p *PGSQLProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
return sqlCommonUpdateQuota(username, filesAdd, sizeAdd, reset, p.dbHandle)
}
func (p PGSQLProvider) getUsedQuota(username string) (int, int64, error) {
func (p *PGSQLProvider) getUsedQuota(username string) (int, int64, error) {
return sqlCommonGetUsedQuota(username, p.dbHandle)
}
func (p PGSQLProvider) updateLastLogin(username string) error {
func (p *PGSQLProvider) updateLastLogin(username string) error {
return sqlCommonUpdateLastLogin(username, p.dbHandle)
}
func (p PGSQLProvider) userExists(username string) (User, error) {
return sqlCommonCheckUserExists(username, p.dbHandle)
func (p *PGSQLProvider) userExists(username string) (User, error) {
return sqlCommonGetUserByUsername(username, p.dbHandle)
}
func (p PGSQLProvider) addUser(user User) error {
func (p *PGSQLProvider) addUser(user *User) error {
return sqlCommonAddUser(user, p.dbHandle)
}
func (p PGSQLProvider) updateUser(user User) error {
func (p *PGSQLProvider) updateUser(user *User) error {
return sqlCommonUpdateUser(user, p.dbHandle)
}
func (p PGSQLProvider) deleteUser(user User) error {
func (p *PGSQLProvider) deleteUser(user *User) error {
return sqlCommonDeleteUser(user, p.dbHandle)
}
func (p PGSQLProvider) dumpUsers() ([]User, error) {
func (p *PGSQLProvider) dumpUsers() ([]User, error) {
return sqlCommonDumpUsers(p.dbHandle)
}
func (p PGSQLProvider) getUsers(limit int, offset int, order string, username string) ([]User, error) {
return sqlCommonGetUsers(limit, offset, order, username, p.dbHandle)
func (p *PGSQLProvider) getUsers(limit int, offset int, order string) ([]User, error) {
return sqlCommonGetUsers(limit, offset, order, p.dbHandle)
}
func (p PGSQLProvider) dumpFolders() ([]vfs.BaseVirtualFolder, error) {
func (p *PGSQLProvider) dumpFolders() ([]vfs.BaseVirtualFolder, error) {
return sqlCommonDumpFolders(p.dbHandle)
}
func (p PGSQLProvider) getFolders(limit, offset int, order, folderPath string) ([]vfs.BaseVirtualFolder, error) {
return sqlCommonGetFolders(limit, offset, order, folderPath, p.dbHandle)
func (p *PGSQLProvider) getFolders(limit, offset int, order string) ([]vfs.BaseVirtualFolder, error) {
return sqlCommonGetFolders(limit, offset, order, p.dbHandle)
}
func (p PGSQLProvider) getFolderByPath(mappedPath string) (vfs.BaseVirtualFolder, error) {
return sqlCommonCheckFolderExists(mappedPath, p.dbHandle)
func (p *PGSQLProvider) getFolderByName(name string) (vfs.BaseVirtualFolder, error) {
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
return sqlCommonGetFolderByName(ctx, name, p.dbHandle)
}
func (p PGSQLProvider) addFolder(folder vfs.BaseVirtualFolder) error {
func (p *PGSQLProvider) addFolder(folder *vfs.BaseVirtualFolder) error {
return sqlCommonAddFolder(folder, p.dbHandle)
}
func (p PGSQLProvider) deleteFolder(folder vfs.BaseVirtualFolder) error {
func (p *PGSQLProvider) updateFolder(folder *vfs.BaseVirtualFolder) error {
return sqlCommonUpdateFolder(folder, p.dbHandle)
}
func (p *PGSQLProvider) deleteFolder(folder *vfs.BaseVirtualFolder) error {
return sqlCommonDeleteFolder(folder, p.dbHandle)
}
func (p PGSQLProvider) updateFolderQuota(mappedPath string, filesAdd int, sizeAdd int64, reset bool) error {
return sqlCommonUpdateFolderQuota(mappedPath, filesAdd, sizeAdd, reset, p.dbHandle)
func (p *PGSQLProvider) updateFolderQuota(name string, filesAdd int, sizeAdd int64, reset bool) error {
return sqlCommonUpdateFolderQuota(name, filesAdd, sizeAdd, reset, p.dbHandle)
}
func (p PGSQLProvider) getUsedFolderQuota(mappedPath string) (int, int64, error) {
return sqlCommonGetFolderUsedQuota(mappedPath, p.dbHandle)
func (p *PGSQLProvider) getUsedFolderQuota(name string) (int, int64, error) {
return sqlCommonGetFolderUsedQuota(name, p.dbHandle)
}
func (p PGSQLProvider) close() error {
func (p *PGSQLProvider) adminExists(username string) (Admin, error) {
return sqlCommonGetAdminByUsername(username, p.dbHandle)
}
func (p *PGSQLProvider) addAdmin(admin *Admin) error {
return sqlCommonAddAdmin(admin, p.dbHandle)
}
func (p *PGSQLProvider) updateAdmin(admin *Admin) error {
return sqlCommonUpdateAdmin(admin, p.dbHandle)
}
func (p *PGSQLProvider) deleteAdmin(admin *Admin) error {
return sqlCommonDeleteAdmin(admin, p.dbHandle)
}
func (p *PGSQLProvider) getAdmins(limit int, offset int, order string) ([]Admin, error) {
return sqlCommonGetAdmins(limit, offset, order, p.dbHandle)
}
func (p *PGSQLProvider) dumpAdmins() ([]Admin, error) {
return sqlCommonDumpAdmins(p.dbHandle)
}
func (p *PGSQLProvider) validateAdminAndPass(username, password, ip string) (Admin, error) {
return sqlCommonValidateAdminAndPass(username, password, ip, p.dbHandle)
}
func (p *PGSQLProvider) close() error {
return p.dbHandle.Close()
}
func (p PGSQLProvider) reloadConfig() error {
func (p *PGSQLProvider) reloadConfig() error {
return nil
}
// initializeDatabase creates the initial database structure
func (p PGSQLProvider) initializeDatabase() error {
func (p *PGSQLProvider) initializeDatabase() error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, false)
if err == nil && dbVersion.Version > 0 {
return ErrNoInitRequired
}
sqlUsers := strings.Replace(pgsqlUsersTableSQL, "{{users}}", sqlTableUsers, 1)
tx, err := p.dbHandle.Begin()
if err != nil {
@@ -191,46 +251,150 @@ func (p PGSQLProvider) initializeDatabase() error {
return tx.Commit()
}
func (p PGSQLProvider) migrateDatabase() error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle)
func (p *PGSQLProvider) migrateDatabase() error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
if err != nil {
return err
}
if dbVersion.Version == sqlDatabaseVersion {
providerLog(logger.LevelDebug, "sql database is updated, current version: %v", dbVersion.Version)
return nil
providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", dbVersion.Version)
return ErrNoInitRequired
}
switch dbVersion.Version {
case 1:
err = updatePGSQLDatabaseFrom1To2(p.dbHandle)
if err != nil {
return err
}
err = updatePGSQLDatabaseFrom2To3(p.dbHandle)
if err != nil {
return err
}
return updatePGSQLDatabaseFrom3To4(p.dbHandle)
return updatePGSQLDatabaseFromV1(p.dbHandle)
case 2:
err = updatePGSQLDatabaseFrom2To3(p.dbHandle)
return updatePGSQLDatabaseFromV2(p.dbHandle)
case 3:
return updatePGSQLDatabaseFromV3(p.dbHandle)
case 4:
return updatePGSQLDatabaseFromV4(p.dbHandle)
case 5:
return updatePGSQLDatabaseFromV5(p.dbHandle)
case 6:
return updatePGSQLDatabaseFromV6(p.dbHandle)
case 7:
return updatePGSQLDatabaseFromV7(p.dbHandle)
default:
if dbVersion.Version > sqlDatabaseVersion {
providerLog(logger.LevelWarn, "database version %v is newer than the supported: %v", dbVersion.Version,
sqlDatabaseVersion)
logger.WarnToConsole("database version %v is newer than the supported: %v", dbVersion.Version,
sqlDatabaseVersion)
return nil
}
return fmt.Errorf("Database version not handled: %v", dbVersion.Version)
}
}
//nolint:dupl
func (p *PGSQLProvider) revertDatabase(targetVersion int) error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
if err != nil {
return err
}
return updatePGSQLDatabaseFrom3To4(p.dbHandle)
case 3:
return updatePGSQLDatabaseFrom3To4(p.dbHandle)
if dbVersion.Version == targetVersion {
return fmt.Errorf("current version match target version, nothing to do")
}
switch dbVersion.Version {
case 8:
err = downgradePGSQLDatabaseFrom8To7(p.dbHandle)
if err != nil {
return err
}
err = downgradePGSQLDatabaseFrom7To6(p.dbHandle)
if err != nil {
return err
}
err = downgradePGSQLDatabaseFrom6To5(p.dbHandle)
if err != nil {
return err
}
return downgradePGSQLDatabaseFrom5To4(p.dbHandle)
case 7:
err = downgradePGSQLDatabaseFrom7To6(p.dbHandle)
if err != nil {
return err
}
err = downgradePGSQLDatabaseFrom6To5(p.dbHandle)
if err != nil {
return err
}
return downgradePGSQLDatabaseFrom5To4(p.dbHandle)
case 6:
err = downgradePGSQLDatabaseFrom6To5(p.dbHandle)
if err != nil {
return err
}
return downgradePGSQLDatabaseFrom5To4(p.dbHandle)
case 5:
return downgradePGSQLDatabaseFrom5To4(p.dbHandle)
default:
return fmt.Errorf("Database version not handled: %v", dbVersion.Version)
}
}
func updatePGSQLDatabaseFromV1(dbHandle *sql.DB) error {
err := updatePGSQLDatabaseFrom1To2(dbHandle)
if err != nil {
return err
}
return updatePGSQLDatabaseFromV2(dbHandle)
}
func updatePGSQLDatabaseFromV2(dbHandle *sql.DB) error {
err := updatePGSQLDatabaseFrom2To3(dbHandle)
if err != nil {
return err
}
return updatePGSQLDatabaseFromV3(dbHandle)
}
func updatePGSQLDatabaseFromV3(dbHandle *sql.DB) error {
err := updatePGSQLDatabaseFrom3To4(dbHandle)
if err != nil {
return err
}
return updatePGSQLDatabaseFromV4(dbHandle)
}
func updatePGSQLDatabaseFromV4(dbHandle *sql.DB) error {
err := updatePGSQLDatabaseFrom4To5(dbHandle)
if err != nil {
return err
}
return updatePGSQLDatabaseFromV5(dbHandle)
}
func updatePGSQLDatabaseFromV5(dbHandle *sql.DB) error {
err := updatePGSQLDatabaseFrom5To6(dbHandle)
if err != nil {
return err
}
return updatePGSQLDatabaseFromV6(dbHandle)
}
func updatePGSQLDatabaseFromV6(dbHandle *sql.DB) error {
err := updatePGSQLDatabaseFrom6To7(dbHandle)
if err != nil {
return err
}
return updatePGSQLDatabaseFromV7(dbHandle)
}
func updatePGSQLDatabaseFromV7(dbHandle *sql.DB) error {
return updatePGSQLDatabaseFrom7To8(dbHandle)
}
func updatePGSQLDatabaseFrom1To2(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 1 -> 2")
providerLog(logger.LevelInfo, "updating database version: 1 -> 2")
sql := strings.Replace(pgsqlV2SQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 2)
}
func updatePGSQLDatabaseFrom2To3(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 2 -> 3")
providerLog(logger.LevelInfo, "updating database version: 2 -> 3")
sql := strings.Replace(pgsqlV3SQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 3)
@@ -239,3 +403,53 @@ func updatePGSQLDatabaseFrom2To3(dbHandle *sql.DB) error {
func updatePGSQLDatabaseFrom3To4(dbHandle *sql.DB) error {
return sqlCommonUpdateDatabaseFrom3To4(pgsqlV4SQL, dbHandle)
}
func updatePGSQLDatabaseFrom4To5(dbHandle *sql.DB) error {
return sqlCommonUpdateDatabaseFrom4To5(dbHandle)
}
func updatePGSQLDatabaseFrom5To6(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 5 -> 6")
providerLog(logger.LevelInfo, "updating database version: 5 -> 6")
sql := strings.Replace(pgsqlV6SQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 6)
}
func updatePGSQLDatabaseFrom6To7(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 6 -> 7")
providerLog(logger.LevelInfo, "updating database version: 6 -> 7")
sql := strings.Replace(pgsqlV7SQL, "{{admins}}", sqlTableAdmins, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 7)
}
func updatePGSQLDatabaseFrom7To8(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 7 -> 8")
providerLog(logger.LevelInfo, "updating database version: 7 -> 8")
sql := strings.ReplaceAll(pgsqlV8SQL, "{{folders}}", sqlTableFolders)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 8)
}
func downgradePGSQLDatabaseFrom8To7(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 8 -> 7")
providerLog(logger.LevelInfo, "downgrading database version: 8 -> 7")
sql := strings.ReplaceAll(pgsqlV8DownSQL, "{{folders}}", sqlTableFolders)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 7)
}
func downgradePGSQLDatabaseFrom7To6(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 7 -> 6")
providerLog(logger.LevelInfo, "downgrading database version: 7 -> 6")
sql := strings.Replace(pgsqlV7DownSQL, "{{admins}}", sqlTableAdmins, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 6)
}
func downgradePGSQLDatabaseFrom6To5(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 6 -> 5")
providerLog(logger.LevelInfo, "downgrading database version: 6 -> 5")
sql := strings.Replace(pgsqlV6DownSQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 5)
}
func downgradePGSQLDatabaseFrom5To4(dbHandle *sql.DB) error {
return sqlCommonDowngradeDatabaseFrom5To4(dbHandle)
}
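Note: the SQL constants above are templates; tokens such as {{users}}, {{folders}} and {{admins}} are replaced with the configured table names before execution. A self-contained illustration using only the standard library (the table name is an example default):

package main

import (
	"fmt"
	"strings"
)

func main() {
	const tpl = `ALTER TABLE "{{folders}}" ADD COLUMN "name" varchar(255) NULL;`
	// substitute every occurrence, as the v8 migrations do with ReplaceAll
	sql := strings.ReplaceAll(tpl, "{{folders}}", "folders")
	fmt.Println(sql)
}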

File diff suppressed because it is too large


@@ -3,6 +3,7 @@
package dataprovider
import (
"context"
"database/sql"
"fmt"
"path/filepath"
@@ -61,6 +62,41 @@ DROP TABLE "{{users}}";
ALTER TABLE "new__users" RENAME TO "{{users}}";
CREATE INDEX "folders_mapping_folder_id_idx" ON "{{folders_mapping}}" ("folder_id");
CREATE INDEX "folders_mapping_user_id_idx" ON "{{folders_mapping}}" ("user_id");
`
sqliteV6SQL = `ALTER TABLE "{{users}}" ADD COLUMN "additional_info" text NULL;`
sqliteV6DownSQL = `CREATE TABLE "new__users" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE,
"password" text NULL, "public_keys" text NULL, "home_dir" varchar(512) NOT NULL, "uid" integer NOT NULL, "gid" integer NOT NULL,
"max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "permissions" text NOT NULL,
"used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL,
"download_bandwidth" integer NOT NULL, "expiration_date" bigint NOT NULL, "last_login" bigint NOT NULL, "status" integer NOT NULL,
"filters" text NULL, "filesystem" text NULL);
INSERT INTO "new__users" ("id", "username", "password", "public_keys", "home_dir", "uid", "gid", "max_sessions", "quota_size", "quota_files",
"permissions", "used_quota_size", "used_quota_files", "last_quota_update", "upload_bandwidth", "download_bandwidth", "expiration_date",
"last_login", "status", "filters", "filesystem") SELECT "id", "username", "password", "public_keys", "home_dir", "uid", "gid", "max_sessions",
"quota_size", "quota_files", "permissions", "used_quota_size", "used_quota_files", "last_quota_update", "upload_bandwidth", "download_bandwidth",
"expiration_date", "last_login", "status", "filters", "filesystem" FROM "{{users}}";
DROP TABLE "{{users}}";
ALTER TABLE "new__users" RENAME TO "{{users}}";
`
sqliteV7SQL = `CREATE TABLE "{{admins}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE,
"password" varchar(255) NOT NULL, "email" varchar(255) NULL, "status" integer NOT NULL, "permissions" text NOT NULL, "filters" text NULL,
"additional_info" text NULL);`
sqliteV7DownSQL = `DROP TABLE "{{admins}}";`
sqliteV8SQL = `CREATE TABLE "new__folders" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"name" varchar(255) NOT NULL UNIQUE, "path" varchar(512) NULL, "used_quota_size" bigint NOT NULL,
"used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL);
INSERT INTO "new__folders" ("id", "path", "used_quota_size", "used_quota_files", "last_quota_update", "name")
SELECT "id", "path", "used_quota_size", "used_quota_files", "last_quota_update", ('folder' || "id") FROM "{{folders}}";
DROP TABLE "{{folders}}";
ALTER TABLE "new__folders" RENAME TO "{{folders}}";
`
sqliteV8DownSQL = `CREATE TABLE "new__folders" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"path" varchar(512) NOT NULL UNIQUE, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL,
"last_quota_update" bigint NOT NULL);
INSERT INTO "new__folders" ("id", "path", "used_quota_size", "used_quota_files", "last_quota_update")
SELECT "id", "path", "used_quota_size", "used_quota_files", "last_quota_update" FROM "{{folders}}";
DROP TABLE "{{folders}}";
ALTER TABLE "new__folders" RENAME TO "{{folders}}";
`
)
@@ -77,7 +113,7 @@ func initializeSQLiteProvider(basePath string) error {
var err error
var connectionString string
logSender = fmt.Sprintf("dataprovider_%v", SQLiteDataProviderName)
if len(config.ConnectionString) == 0 {
if config.ConnectionString == "" {
dbPath := config.Name
if !utils.IsFileInputValid(dbPath) {
return fmt.Errorf("Invalid database path: %#v", dbPath)
@@ -93,7 +129,7 @@ func initializeSQLiteProvider(basePath string) error {
if err == nil {
providerLog(logger.LevelDebug, "sqlite database handle created, connection string: %#v", connectionString)
dbHandle.SetMaxOpenConns(1)
provider = SQLiteProvider{dbHandle: dbHandle}
provider = &SQLiteProvider{dbHandle: dbHandle}
} else {
providerLog(logger.LevelWarn, "error creating sqlite database handler, connection string: %#v, error: %v",
connectionString, err)
@@ -101,96 +137,130 @@ func initializeSQLiteProvider(basePath string) error {
return err
}
func (p SQLiteProvider) checkAvailability() error {
func (p *SQLiteProvider) checkAvailability() error {
return sqlCommonCheckAvailability(p.dbHandle)
}
func (p SQLiteProvider) validateUserAndPass(username string, password string) (User, error) {
return sqlCommonValidateUserAndPass(username, password, p.dbHandle)
func (p *SQLiteProvider) validateUserAndPass(username, password, ip, protocol string) (User, error) {
return sqlCommonValidateUserAndPass(username, password, ip, protocol, p.dbHandle)
}
func (p SQLiteProvider) validateUserAndPubKey(username string, publicKey []byte) (User, string, error) {
func (p *SQLiteProvider) validateUserAndPubKey(username string, publicKey []byte) (User, string, error) {
return sqlCommonValidateUserAndPubKey(username, publicKey, p.dbHandle)
}
func (p SQLiteProvider) getUserByID(ID int64) (User, error) {
return sqlCommonGetUserByID(ID, p.dbHandle)
}
func (p SQLiteProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
func (p *SQLiteProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
return sqlCommonUpdateQuota(username, filesAdd, sizeAdd, reset, p.dbHandle)
}
func (p SQLiteProvider) getUsedQuota(username string) (int, int64, error) {
func (p *SQLiteProvider) getUsedQuota(username string) (int, int64, error) {
return sqlCommonGetUsedQuota(username, p.dbHandle)
}
func (p SQLiteProvider) updateLastLogin(username string) error {
func (p *SQLiteProvider) updateLastLogin(username string) error {
return sqlCommonUpdateLastLogin(username, p.dbHandle)
}
func (p SQLiteProvider) userExists(username string) (User, error) {
return sqlCommonCheckUserExists(username, p.dbHandle)
func (p *SQLiteProvider) userExists(username string) (User, error) {
return sqlCommonGetUserByUsername(username, p.dbHandle)
}
func (p SQLiteProvider) addUser(user User) error {
func (p *SQLiteProvider) addUser(user *User) error {
return sqlCommonAddUser(user, p.dbHandle)
}
func (p SQLiteProvider) updateUser(user User) error {
func (p *SQLiteProvider) updateUser(user *User) error {
return sqlCommonUpdateUser(user, p.dbHandle)
}
func (p SQLiteProvider) deleteUser(user User) error {
func (p *SQLiteProvider) deleteUser(user *User) error {
return sqlCommonDeleteUser(user, p.dbHandle)
}
func (p SQLiteProvider) dumpUsers() ([]User, error) {
func (p *SQLiteProvider) dumpUsers() ([]User, error) {
return sqlCommonDumpUsers(p.dbHandle)
}
func (p SQLiteProvider) getUsers(limit int, offset int, order string, username string) ([]User, error) {
return sqlCommonGetUsers(limit, offset, order, username, p.dbHandle)
func (p *SQLiteProvider) getUsers(limit int, offset int, order string) ([]User, error) {
return sqlCommonGetUsers(limit, offset, order, p.dbHandle)
}
func (p SQLiteProvider) dumpFolders() ([]vfs.BaseVirtualFolder, error) {
func (p *SQLiteProvider) dumpFolders() ([]vfs.BaseVirtualFolder, error) {
return sqlCommonDumpFolders(p.dbHandle)
}
func (p SQLiteProvider) getFolders(limit, offset int, order, folderPath string) ([]vfs.BaseVirtualFolder, error) {
return sqlCommonGetFolders(limit, offset, order, folderPath, p.dbHandle)
func (p *SQLiteProvider) getFolders(limit, offset int, order string) ([]vfs.BaseVirtualFolder, error) {
return sqlCommonGetFolders(limit, offset, order, p.dbHandle)
}
func (p SQLiteProvider) getFolderByPath(mappedPath string) (vfs.BaseVirtualFolder, error) {
return sqlCommonCheckFolderExists(mappedPath, p.dbHandle)
func (p *SQLiteProvider) getFolderByName(name string) (vfs.BaseVirtualFolder, error) {
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
return sqlCommonGetFolderByName(ctx, name, p.dbHandle)
}
func (p SQLiteProvider) addFolder(folder vfs.BaseVirtualFolder) error {
func (p *SQLiteProvider) addFolder(folder *vfs.BaseVirtualFolder) error {
return sqlCommonAddFolder(folder, p.dbHandle)
}
func (p SQLiteProvider) deleteFolder(folder vfs.BaseVirtualFolder) error {
func (p *SQLiteProvider) updateFolder(folder *vfs.BaseVirtualFolder) error {
return sqlCommonUpdateFolder(folder, p.dbHandle)
}
func (p *SQLiteProvider) deleteFolder(folder *vfs.BaseVirtualFolder) error {
return sqlCommonDeleteFolder(folder, p.dbHandle)
}
func (p SQLiteProvider) updateFolderQuota(mappedPath string, filesAdd int, sizeAdd int64, reset bool) error {
return sqlCommonUpdateFolderQuota(mappedPath, filesAdd, sizeAdd, reset, p.dbHandle)
func (p *SQLiteProvider) updateFolderQuota(name string, filesAdd int, sizeAdd int64, reset bool) error {
return sqlCommonUpdateFolderQuota(name, filesAdd, sizeAdd, reset, p.dbHandle)
}
func (p SQLiteProvider) getUsedFolderQuota(mappedPath string) (int, int64, error) {
return sqlCommonGetFolderUsedQuota(mappedPath, p.dbHandle)
func (p *SQLiteProvider) getUsedFolderQuota(name string) (int, int64, error) {
return sqlCommonGetFolderUsedQuota(name, p.dbHandle)
}
func (p SQLiteProvider) close() error {
func (p *SQLiteProvider) adminExists(username string) (Admin, error) {
return sqlCommonGetAdminByUsername(username, p.dbHandle)
}
func (p *SQLiteProvider) addAdmin(admin *Admin) error {
return sqlCommonAddAdmin(admin, p.dbHandle)
}
func (p *SQLiteProvider) updateAdmin(admin *Admin) error {
return sqlCommonUpdateAdmin(admin, p.dbHandle)
}
func (p *SQLiteProvider) deleteAdmin(admin *Admin) error {
return sqlCommonDeleteAdmin(admin, p.dbHandle)
}
func (p *SQLiteProvider) getAdmins(limit int, offset int, order string) ([]Admin, error) {
return sqlCommonGetAdmins(limit, offset, order, p.dbHandle)
}
func (p *SQLiteProvider) dumpAdmins() ([]Admin, error) {
return sqlCommonDumpAdmins(p.dbHandle)
}
func (p *SQLiteProvider) validateAdminAndPass(username, password, ip string) (Admin, error) {
return sqlCommonValidateAdminAndPass(username, password, ip, p.dbHandle)
}
func (p *SQLiteProvider) close() error {
return p.dbHandle.Close()
}
func (p SQLiteProvider) reloadConfig() error {
func (p *SQLiteProvider) reloadConfig() error {
return nil
}
// initializeDatabase creates the initial database structure
func (p SQLiteProvider) initializeDatabase() error {
func (p *SQLiteProvider) initializeDatabase() error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, false)
if err == nil && dbVersion.Version > 0 {
return ErrNoInitRequired
}
sqlUsers := strings.Replace(sqliteUsersTableSQL, "{{users}}", sqlTableUsers, 1)
tx, err := p.dbHandle.Begin()
if err != nil {
@@ -214,46 +284,150 @@ func (p SQLiteProvider) initializeDatabase() error {
return tx.Commit()
}
func (p SQLiteProvider) migrateDatabase() error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle)
func (p *SQLiteProvider) migrateDatabase() error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
if err != nil {
return err
}
if dbVersion.Version == sqlDatabaseVersion {
providerLog(logger.LevelDebug, "sql database is updated, current version: %v", dbVersion.Version)
return nil
providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", dbVersion.Version)
return ErrNoInitRequired
}
switch dbVersion.Version {
case 1:
err = updateSQLiteDatabaseFrom1To2(p.dbHandle)
if err != nil {
return err
}
err = updateSQLiteDatabaseFrom2To3(p.dbHandle)
if err != nil {
return err
}
return updateSQLiteDatabaseFrom3To4(p.dbHandle)
return updateSQLiteDatabaseFromV1(p.dbHandle)
case 2:
err = updateSQLiteDatabaseFrom2To3(p.dbHandle)
return updateSQLiteDatabaseFromV2(p.dbHandle)
case 3:
return updateSQLiteDatabaseFromV3(p.dbHandle)
case 4:
return updateSQLiteDatabaseFromV4(p.dbHandle)
case 5:
return updateSQLiteDatabaseFromV5(p.dbHandle)
case 6:
return updateSQLiteDatabaseFromV6(p.dbHandle)
case 7:
return updateSQLiteDatabaseFromV7(p.dbHandle)
default:
if dbVersion.Version > sqlDatabaseVersion {
providerLog(logger.LevelWarn, "database version %v is newer than the supported: %v", dbVersion.Version,
sqlDatabaseVersion)
logger.WarnToConsole("database version %v is newer than the supported: %v", dbVersion.Version,
sqlDatabaseVersion)
return nil
}
return fmt.Errorf("Database version not handled: %v", dbVersion.Version)
}
}
//nolint:dupl
func (p *SQLiteProvider) revertDatabase(targetVersion int) error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
if err != nil {
return err
}
return updateSQLiteDatabaseFrom3To4(p.dbHandle)
case 3:
return updateSQLiteDatabaseFrom3To4(p.dbHandle)
if dbVersion.Version == targetVersion {
return fmt.Errorf("current version match target version, nothing to do")
}
switch dbVersion.Version {
case 8:
err = downgradeSQLiteDatabaseFrom8To7(p.dbHandle)
if err != nil {
return err
}
err = downgradeSQLiteDatabaseFrom7To6(p.dbHandle)
if err != nil {
return err
}
err = downgradeSQLiteDatabaseFrom6To5(p.dbHandle)
if err != nil {
return err
}
return downgradeSQLiteDatabaseFrom5To4(p.dbHandle)
case 7:
err = downgradeSQLiteDatabaseFrom7To6(p.dbHandle)
if err != nil {
return err
}
err = downgradeSQLiteDatabaseFrom6To5(p.dbHandle)
if err != nil {
return err
}
return downgradeSQLiteDatabaseFrom5To4(p.dbHandle)
case 6:
err = downgradeSQLiteDatabaseFrom6To5(p.dbHandle)
if err != nil {
return err
}
return downgradeSQLiteDatabaseFrom5To4(p.dbHandle)
case 5:
return downgradeSQLiteDatabaseFrom5To4(p.dbHandle)
default:
return fmt.Errorf("Database version not handled: %v", dbVersion.Version)
}
}
func updateSQLiteDatabaseFromV1(dbHandle *sql.DB) error {
err := updateSQLiteDatabaseFrom1To2(dbHandle)
if err != nil {
return err
}
return updateSQLiteDatabaseFromV2(dbHandle)
}
func updateSQLiteDatabaseFromV2(dbHandle *sql.DB) error {
err := updateSQLiteDatabaseFrom2To3(dbHandle)
if err != nil {
return err
}
return updateSQLiteDatabaseFromV3(dbHandle)
}
func updateSQLiteDatabaseFromV3(dbHandle *sql.DB) error {
err := updateSQLiteDatabaseFrom3To4(dbHandle)
if err != nil {
return err
}
return updateSQLiteDatabaseFromV4(dbHandle)
}
func updateSQLiteDatabaseFromV4(dbHandle *sql.DB) error {
err := updateSQLiteDatabaseFrom4To5(dbHandle)
if err != nil {
return err
}
return updateSQLiteDatabaseFromV5(dbHandle)
}
func updateSQLiteDatabaseFromV5(dbHandle *sql.DB) error {
err := updateSQLiteDatabaseFrom5To6(dbHandle)
if err != nil {
return err
}
return updateSQLiteDatabaseFromV6(dbHandle)
}
func updateSQLiteDatabaseFromV6(dbHandle *sql.DB) error {
err := updateSQLiteDatabaseFrom6To7(dbHandle)
if err != nil {
return err
}
return updateSQLiteDatabaseFromV7(dbHandle)
}
func updateSQLiteDatabaseFromV7(dbHandle *sql.DB) error {
return updateSQLiteDatabaseFrom7To8(dbHandle)
}
func updateSQLiteDatabaseFrom1To2(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 1 -> 2")
providerLog(logger.LevelInfo, "updating database version: 1 -> 2")
sql := strings.Replace(sqliteV2SQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 2)
}
func updateSQLiteDatabaseFrom2To3(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 2 -> 3")
providerLog(logger.LevelInfo, "updating database version: 2 -> 3")
sql := strings.ReplaceAll(sqliteV3SQL, "{{users}}", sqlTableUsers)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 3)
@@ -262,3 +436,75 @@ func updateSQLiteDatabaseFrom2To3(dbHandle *sql.DB) error {
func updateSQLiteDatabaseFrom3To4(dbHandle *sql.DB) error {
return sqlCommonUpdateDatabaseFrom3To4(sqliteV4SQL, dbHandle)
}
func updateSQLiteDatabaseFrom4To5(dbHandle *sql.DB) error {
return sqlCommonUpdateDatabaseFrom4To5(dbHandle)
}
func updateSQLiteDatabaseFrom5To6(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 5 -> 6")
providerLog(logger.LevelInfo, "updating database version: 5 -> 6")
sql := strings.Replace(sqliteV6SQL, "{{users}}", sqlTableUsers, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 6)
}
func updateSQLiteDatabaseFrom6To7(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 6 -> 7")
providerLog(logger.LevelInfo, "updating database version: 6 -> 7")
sql := strings.Replace(sqliteV7SQL, "{{admins}}", sqlTableAdmins, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 7)
}
func updateSQLiteDatabaseFrom7To8(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 7 -> 8")
providerLog(logger.LevelInfo, "updating database version: 7 -> 8")
if err := setPragmaFK(dbHandle, "OFF"); err != nil {
return err
}
sql := strings.ReplaceAll(sqliteV8SQL, "{{folders}}", sqlTableFolders)
if err := sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 8); err != nil {
return err
}
return setPragmaFK(dbHandle, "ON")
}
func setPragmaFK(dbHandle *sql.DB, value string) error {
ctx, cancel := context.WithTimeout(context.Background(), longSQLQueryTimeout)
defer cancel()
sql := fmt.Sprintf("PRAGMA foreign_keys=%v;", value)
_, err := dbHandle.ExecContext(ctx, sql)
return err
}
func downgradeSQLiteDatabaseFrom8To7(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 8 -> 7")
providerLog(logger.LevelInfo, "downgrading database version: 8 -> 7")
if err := setPragmaFK(dbHandle, "OFF"); err != nil {
return err
}
sql := strings.ReplaceAll(sqliteV8DownSQL, "{{folders}}", sqlTableFolders)
if err := sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 7); err != nil {
return err
}
return setPragmaFK(dbHandle, "ON")
}
func downgradeSQLiteDatabaseFrom7To6(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 7 -> 6")
providerLog(logger.LevelInfo, "downgrading database version: 7 -> 6")
sql := strings.Replace(sqliteV7DownSQL, "{{admins}}", sqlTableAdmins, 1)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 6)
}
func downgradeSQLiteDatabaseFrom6To5(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 6 -> 5")
providerLog(logger.LevelInfo, "downgrading database version: 6 -> 5")
sql := strings.ReplaceAll(sqliteV6DownSQL, "{{users}}", sqlTableUsers)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 5)
}
func downgradeSQLiteDatabaseFrom5To4(dbHandle *sql.DB) error {
return sqlCommonDowngradeDatabaseFrom5To4(dbHandle)
}
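Note: SQLite cannot drop or alter columns in place, so the v8 folders migration rebuilds the table: foreign keys are switched off via setPragmaFK, a new table is created and filled, and the old one is dropped and renamed over. A condensed, hypothetical sketch of that recipe with a reduced schema (assuming the mattn/go-sqlite3 driver used by the SQLite provider):

package main

import (
	"database/sql"

	_ "github.com/mattn/go-sqlite3"
)

func rebuildFolders(db *sql.DB) error {
	// disable FK enforcement while the table is swapped out
	if _, err := db.Exec("PRAGMA foreign_keys=OFF;"); err != nil {
		return err
	}
	stmts := []string{
		`CREATE TABLE "new__folders" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"name" varchar(255) NOT NULL UNIQUE, "path" varchar(512) NULL);`,
		`INSERT INTO "new__folders" ("id", "path", "name")
SELECT "id", "path", ('folder' || "id") FROM "folders";`,
		`DROP TABLE "folders";`,
		`ALTER TABLE "new__folders" RENAME TO "folders";`,
	}
	for _, s := range stmts {
		if _, err := db.Exec(s); err != nil {
			return err
		}
	}
	_, err := db.Exec("PRAGMA foreign_keys=ON;")
	return err
}

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		panic(err)
	}
	defer db.Close()
	if _, err := db.Exec(`CREATE TABLE "folders" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"path" varchar(512) NOT NULL UNIQUE);`); err != nil {
		panic(err)
	}
	if err := rebuildFolders(db); err != nil {
		panic(err)
	}
}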


@@ -10,8 +10,9 @@ import (
const (
selectUserFields = "id,username,password,public_keys,home_dir,uid,gid,max_sessions,quota_size,quota_files,permissions,used_quota_size," +
"used_quota_files,last_quota_update,upload_bandwidth,download_bandwidth,expiration_date,last_login,status,filters,filesystem"
selectFolderFields = "id,path,used_quota_size,used_quota_files,last_quota_update"
"used_quota_files,last_quota_update,upload_bandwidth,download_bandwidth,expiration_date,last_login,status,filters,filesystem,additional_info"
selectFolderFields = "id,path,used_quota_size,used_quota_files,last_quota_update,name"
selectAdminFields = "id,username,password,status,email,permissions,filters,additional_info"
)
func getSQLPlaceholders() []string {
@@ -26,19 +27,40 @@ func getSQLPlaceholders() []string {
return placeholders
}
func getAdminByUsernameQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v WHERE username = %v`, selectAdminFields, sqlTableAdmins, sqlPlaceholders[0])
}
func getAdminsQuery(order string) string {
return fmt.Sprintf(`SELECT %v FROM %v ORDER BY username %v LIMIT %v OFFSET %v`, selectAdminFields, sqlTableAdmins,
order, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getDumpAdminsQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v`, selectAdminFields, sqlTableAdmins)
}
func getAddAdminQuery() string {
return fmt.Sprintf(`INSERT INTO %v (username,password,status,email,permissions,filters,additional_info)
VALUES (%v,%v,%v,%v,%v,%v,%v)`, sqlTableAdmins, sqlPlaceholders[0], sqlPlaceholders[1],
sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6])
}
func getUpdateAdminQuery() string {
return fmt.Sprintf(`UPDATE %v SET password=%v,status=%v,email=%v,permissions=%v,filters=%v,additional_info=%v
WHERE username = %v`, sqlTableAdmins, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2],
sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6])
}
func getDeleteAdminQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE username = %v`, sqlTableAdmins, sqlPlaceholders[0])
}
func getUserByUsernameQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v WHERE username = %v`, selectUserFields, sqlTableUsers, sqlPlaceholders[0])
}
func getUserByIDQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v WHERE id = %v`, selectUserFields, sqlTableUsers, sqlPlaceholders[0])
}
func getUsersQuery(order string, username string) string {
if len(username) > 0 {
return fmt.Sprintf(`SELECT %v FROM %v WHERE username = %v ORDER BY username %v LIMIT %v OFFSET %v`,
selectUserFields, sqlTableUsers, sqlPlaceholders[0], order, sqlPlaceholders[1], sqlPlaceholders[2])
}
func getUsersQuery(order string) string {
return fmt.Sprintf(`SELECT %v FROM %v ORDER BY username %v LIMIT %v OFFSET %v`, selectUserFields, sqlTableUsers,
order, sqlPlaceholders[0], sqlPlaceholders[1])
}
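Note: the sqlPlaceholders slice indexed throughout these builders is precomputed once per driver; PostgreSQL expects positional $1..$n markers while MySQL and SQLite use ?. A hypothetical sketch of how such a table can be built (the function signature and driver-name check are illustrative, not the repository's actual getSQLPlaceholders):

package main

import "fmt"

func getSQLPlaceholders(driver string, n int) []string {
	placeholders := make([]string, 0, n)
	for i := 1; i <= n; i++ {
		if driver == "postgresql" {
			// PostgreSQL: positional markers
			placeholders = append(placeholders, fmt.Sprintf("$%d", i))
		} else {
			// MySQL and SQLite: anonymous markers
			placeholders = append(placeholders, "?")
		}
	}
	return placeholders
}

func main() {
	fmt.Println(getSQLPlaceholders("postgresql", 3)) // [$1 $2 $3]
	fmt.Println(getSQLPlaceholders("mysql", 3))      // [? ? ?]
}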
@@ -72,32 +94,37 @@ func getQuotaQuery() string {
func getAddUserQuery() string {
return fmt.Sprintf(`INSERT INTO %v (username,password,public_keys,home_dir,uid,gid,max_sessions,quota_size,quota_files,permissions,
used_quota_size,used_quota_files,last_quota_update,upload_bandwidth,download_bandwidth,status,last_login,expiration_date,filters,
filesystem)
VALUES (%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,0,0,0,%v,%v,%v,0,%v,%v,%v)`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1],
filesystem,additional_info)
VALUES (%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,0,0,0,%v,%v,%v,0,%v,%v,%v,%v)`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1],
sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7],
sqlPlaceholders[8], sqlPlaceholders[9], sqlPlaceholders[10], sqlPlaceholders[11], sqlPlaceholders[12], sqlPlaceholders[13],
sqlPlaceholders[14], sqlPlaceholders[15])
sqlPlaceholders[14], sqlPlaceholders[15], sqlPlaceholders[16])
}
func getUpdateUserQuery() string {
return fmt.Sprintf(`UPDATE %v SET password=%v,public_keys=%v,home_dir=%v,uid=%v,gid=%v,max_sessions=%v,quota_size=%v,
quota_files=%v,permissions=%v,upload_bandwidth=%v,download_bandwidth=%v,status=%v,expiration_date=%v,filters=%v,filesystem=%v
WHERE id = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3],
quota_files=%v,permissions=%v,upload_bandwidth=%v,download_bandwidth=%v,status=%v,expiration_date=%v,filters=%v,filesystem=%v,
additional_info=%v WHERE id = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3],
sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9],
sqlPlaceholders[10], sqlPlaceholders[11], sqlPlaceholders[12], sqlPlaceholders[13], sqlPlaceholders[14], sqlPlaceholders[15])
sqlPlaceholders[10], sqlPlaceholders[11], sqlPlaceholders[12], sqlPlaceholders[13], sqlPlaceholders[14], sqlPlaceholders[15],
sqlPlaceholders[16])
}
func getDeleteUserQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE id = %v`, sqlTableUsers, sqlPlaceholders[0])
}
func getFolderByPathQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v WHERE path = %v`, selectFolderFields, sqlTableFolders, sqlPlaceholders[0])
func getFolderByNameQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v WHERE name = %v`, selectFolderFields, sqlTableFolders, sqlPlaceholders[0])
}
func getAddFolderQuery() string {
return fmt.Sprintf(`INSERT INTO %v (path,used_quota_size,used_quota_files,last_quota_update) VALUES (%v,%v,%v,%v)`,
sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
return fmt.Sprintf(`INSERT INTO %v (path,used_quota_size,used_quota_files,last_quota_update,name) VALUES (%v,%v,%v,%v,%v)`,
sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4])
}
func getUpdateFolderQuery() string {
return fmt.Sprintf(`UPDATE %v SET path = %v WHERE name = %v`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getDeleteFolderQuery() string {
@@ -115,26 +142,22 @@ func getAddFolderMappingQuery() string {
sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3], sqlTableUsers, sqlPlaceholders[4])
}
func getFoldersQuery(order, folderPath string) string {
if len(folderPath) > 0 {
return fmt.Sprintf(`SELECT %v FROM %v WHERE path = %v ORDER BY path %v LIMIT %v OFFSET %v`,
selectFolderFields, sqlTableFolders, sqlPlaceholders[0], order, sqlPlaceholders[1], sqlPlaceholders[2])
}
return fmt.Sprintf(`SELECT %v FROM %v ORDER BY path %v LIMIT %v OFFSET %v`, selectFolderFields, sqlTableFolders,
func getFoldersQuery(order string) string {
return fmt.Sprintf(`SELECT %v FROM %v ORDER BY name %v LIMIT %v OFFSET %v`, selectFolderFields, sqlTableFolders,
order, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getUpdateFolderQuotaQuery(reset bool) string {
if reset {
return fmt.Sprintf(`UPDATE %v SET used_quota_size = %v,used_quota_files = %v,last_quota_update = %v
WHERE path = %v`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
WHERE name = %v`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
}
return fmt.Sprintf(`UPDATE %v SET used_quota_size = used_quota_size + %v,used_quota_files = used_quota_files + %v,last_quota_update = %v
WHERE path = %v`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
WHERE name = %v`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
}
func getQuotaFolderQuery() string {
return fmt.Sprintf(`SELECT used_quota_size,used_quota_files FROM %v WHERE path = %v`, sqlTableFolders,
return fmt.Sprintf(`SELECT used_quota_size,used_quota_files FROM %v WHERE name = %v`, sqlTableFolders,
sqlPlaceholders[0])
}
@@ -151,7 +174,7 @@ func getRelatedFoldersForUsersQuery(users []User) string {
if sb.Len() > 0 {
sb.WriteString(")")
}
return fmt.Sprintf(`SELECT f.id,f.path,f.used_quota_size,f.used_quota_files,f.last_quota_update,fm.virtual_path,fm.quota_size,fm.quota_files,fm.user_id
return fmt.Sprintf(`SELECT f.id,f.name,f.path,f.used_quota_size,f.used_quota_files,f.last_quota_update,fm.virtual_path,fm.quota_size,fm.quota_files,fm.user_id
FROM %v f INNER JOIN %v fm ON f.id = fm.folder_id WHERE fm.user_id IN %v ORDER BY fm.user_id`, sqlTableFolders,
sqlTableFoldersMapping, sb.String())
}
@@ -181,6 +204,14 @@ func getUpdateDBVersionQuery() string {
return fmt.Sprintf(`UPDATE %v SET version=%v`, sqlTableSchemaVersion, sqlPlaceholders[0])
}
func getCompatVirtualFoldersQuery() string {
/*func getCompatVirtualFoldersQuery() string {
return fmt.Sprintf(`SELECT id,username,virtual_folders FROM %v`, sqlTableUsers)
}*/
func getCompatV4FsConfigQuery() string {
return fmt.Sprintf(`SELECT id,username,filesystem FROM %v`, sqlTableUsers)
}
func updateCompatV4FsConfigQuery() string {
return fmt.Sprintf(`UPDATE %v SET filesystem=%v WHERE id=%v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1])
}

View File

@@ -12,12 +12,15 @@ import (
"strings"
"time"
"golang.org/x/net/webdav"
"github.com/drakkan/sftpgo/kms"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
"github.com/drakkan/sftpgo/vfs"
)
// Available permissions for SFTP users
// Available permissions for SFTPGo users
const (
// All permissions are granted
PermAny = "*"
@@ -46,10 +49,11 @@ const (
PermChtimes = "chtimes"
)
// Available SSH login methods
// Available login methods
const (
LoginMethodNoAuthTryed = "no_auth_tryed"
LoginMethodPassword = "password"
SSHLoginMethodPublicKey = "publickey"
SSHLoginMethodPassword = "password"
SSHLoginMethodKeyboardInteractive = "keyboard-interactive"
SSHLoginMethodKeyAndPassword = "publickey+password"
SSHLoginMethodKeyAndKeyboardInt = "publickey+keyboard-interactive"
@@ -59,28 +63,65 @@ var (
errNoMatchingVirtualFolder = errors.New("no matching virtual folder found")
)
// CachedUser adds fields useful for caching to an SFTPGo user
type CachedUser struct {
User User
Expiration time.Time
Password string
LockSystem webdav.LockSystem
}
// IsExpired returns true if the cached user is expired
func (c *CachedUser) IsExpired() bool {
if c.Expiration.IsZero() {
return false
}
return c.Expiration.Before(time.Now())
}
// ExtensionsFilter defines filters based on file extensions.
// These restrictions do not apply to files listing for performance reasons, so
// a denied file cannot be downloaded/overwritten/renamed but will still be
// in the list of files.
// System commands such as Git and rsync interact with the filesystem directly
// and are not aware of these restrictions, so they are not allowed
// inside paths with extensions filters
type ExtensionsFilter struct {
// SFTP/SCP path, if no other specific filter is defined, the filter apply for
// Virtual path, if no other specific filter is defined, the filter apply for
// sub directories too.
// For example if filters are defined for the paths "/" and "/sub" then the
// filters for "/" are applied for any file outside the "/sub" directory
Path string `json:"path"`
// only files with these, case insensitive, extensions are allowed.
// Shell like expansion is not supported so you have to specify ".jpg" and
// not "*.jpg"
// not "*.jpg". If you want shell like patterns use pattern filters
AllowedExtensions []string `json:"allowed_extensions,omitempty"`
// files with these, case insensitive, extensions are not allowed.
// Denied file extensions are evaluated before the allowed ones
DeniedExtensions []string `json:"denied_extensions,omitempty"`
}
// PatternsFilter defines filters based on shell like patterns.
// These restrictions do not apply to files listing for performance reasons, so
// a denied file cannot be downloaded/overwritten/renamed but will still be
// in the list of files.
// System commands such as Git and rsync interact with the filesystem directly
// and are not aware of these restrictions, so they are not allowed
// inside paths with pattern filters
type PatternsFilter struct {
// Virtual path, if no other specific filter is defined, the filter apply for
// sub directories too.
// For example if filters are defined for the paths "/" and "/sub" then the
// filters for "/" are applied for any file outside the "/sub" directory
Path string `json:"path"`
// files with these, case insensitive, patterns are allowed.
// Denied file patterns are evaluated before the allowed ones
AllowedPatterns []string `json:"allowed_patterns,omitempty"`
// files with these, case insensitive, patterns are not allowed.
// Denied file patterns are evaluated before the allowed ones
DeniedPatterns []string `json:"denied_patterns,omitempty"`
}
// UserFilters defines additional restrictions for a user
type UserFilters struct {
// only clients connecting from these IP/Mask are allowed.
@@ -93,20 +134,42 @@ type UserFilters struct {
// these login methods are not allowed.
// If null or empty any available login method is allowed
DeniedLoginMethods []string `json:"denied_login_methods,omitempty"`
// these protocols are not allowed.
// If null or empty any available protocol is allowed
DeniedProtocols []string `json:"denied_protocols,omitempty"`
// filters based on file extensions.
// Please note that these restrictions can be easily bypassed.
FileExtensions []ExtensionsFilter `json:"file_extensions,omitempty"`
// filter based on shell patterns
FilePatterns []PatternsFilter `json:"file_patterns,omitempty"`
// max size allowed for a single upload, 0 means unlimited
MaxUploadFileSize int64 `json:"max_upload_file_size,omitempty"`
}
// FilesystemProvider defines the supported storages
type FilesystemProvider int
// supported values for FilesystemProvider
const (
LocalFilesystemProvider FilesystemProvider = iota // Local
S3FilesystemProvider // AWS S3 compatible
GCSFilesystemProvider // Google Cloud Storage
AzureBlobFilesystemProvider // Azure Blob Storage
CryptedFilesystemProvider // Local encrypted
SFTPFilesystemProvider // SFTP
)
// Filesystem defines cloud storage filesystem details
type Filesystem struct {
// 0 local filesystem, 1 Amazon S3 compatible, 2 Google Cloud Storage
Provider int `json:"provider"`
Provider FilesystemProvider `json:"provider"`
S3Config vfs.S3FsConfig `json:"s3config,omitempty"`
GCSConfig vfs.GCSFsConfig `json:"gcsconfig,omitempty"`
AzBlobConfig vfs.AzBlobFsConfig `json:"azblobconfig,omitempty"`
CryptConfig vfs.CryptFsConfig `json:"cryptconfig,omitempty"`
SFTPConfig vfs.SFTPFsConfig `json:"sftpconfig,omitempty"`
}
// User defines an SFTP user
// User defines an SFTPGo user
type User struct {
// Database unique identifier
ID int64 `json:"id"`
@@ -156,19 +219,92 @@ type User struct {
Filters UserFilters `json:"filters"`
// Filesystem configuration details
FsConfig Filesystem `json:"filesystem"`
// free form text field for external systems
AdditionalInfo string `json:"additional_info,omitempty"`
}
// GetFilesystem returns the filesystem for this user
func (u *User) GetFilesystem(connectionID string) (vfs.Fs, error) {
if u.FsConfig.Provider == 1 {
switch u.FsConfig.Provider {
case S3FilesystemProvider:
return vfs.NewS3Fs(connectionID, u.GetHomeDir(), u.FsConfig.S3Config)
} else if u.FsConfig.Provider == 2 {
case GCSFilesystemProvider:
config := u.FsConfig.GCSConfig
config.CredentialFile = u.getGCSCredentialsFilePath()
return vfs.NewGCSFs(connectionID, u.GetHomeDir(), config)
}
case AzureBlobFilesystemProvider:
return vfs.NewAzBlobFs(connectionID, u.GetHomeDir(), u.FsConfig.AzBlobConfig)
case CryptedFilesystemProvider:
return vfs.NewCryptFs(connectionID, u.GetHomeDir(), u.FsConfig.CryptConfig)
case SFTPFilesystemProvider:
return vfs.NewSFTPFs(connectionID, u.FsConfig.SFTPConfig)
default:
return vfs.NewOsFs(connectionID, u.GetHomeDir(), u.VirtualFolders), nil
}
}
// HideConfidentialData hides user confidential data
func (u *User) HideConfidentialData() {
u.Password = ""
switch u.FsConfig.Provider {
case S3FilesystemProvider:
u.FsConfig.S3Config.AccessSecret.Hide()
case GCSFilesystemProvider:
u.FsConfig.GCSConfig.Credentials.Hide()
case AzureBlobFilesystemProvider:
u.FsConfig.AzBlobConfig.AccountKey.Hide()
case CryptedFilesystemProvider:
u.FsConfig.CryptConfig.Passphrase.Hide()
case SFTPFilesystemProvider:
u.FsConfig.SFTPConfig.Password.Hide()
u.FsConfig.SFTPConfig.PrivateKey.Hide()
}
}
// SetEmptySecrets sets to empty any user secret
func (u *User) SetEmptySecrets() {
u.FsConfig.S3Config.AccessSecret = kms.NewEmptySecret()
u.FsConfig.GCSConfig.Credentials = kms.NewEmptySecret()
u.FsConfig.AzBlobConfig.AccountKey = kms.NewEmptySecret()
u.FsConfig.CryptConfig.Passphrase = kms.NewEmptySecret()
u.FsConfig.SFTPConfig.Password = kms.NewEmptySecret()
u.FsConfig.SFTPConfig.PrivateKey = kms.NewEmptySecret()
}
// DecryptSecrets tries to decrypt kms secrets
func (u *User) DecryptSecrets() error {
switch u.FsConfig.Provider {
case S3FilesystemProvider:
if u.FsConfig.S3Config.AccessSecret.IsEncrypted() {
return u.FsConfig.S3Config.AccessSecret.Decrypt()
}
case GCSFilesystemProvider:
if u.FsConfig.GCSConfig.Credentials.IsEncrypted() {
return u.FsConfig.GCSConfig.Credentials.Decrypt()
}
case AzureBlobFilesystemProvider:
if u.FsConfig.AzBlobConfig.AccountKey.IsEncrypted() {
return u.FsConfig.AzBlobConfig.AccountKey.Decrypt()
}
case CryptedFilesystemProvider:
if u.FsConfig.CryptConfig.Passphrase.IsEncrypted() {
return u.FsConfig.CryptConfig.Passphrase.Decrypt()
}
case SFTPFilesystemProvider:
if u.FsConfig.SFTPConfig.Password.IsEncrypted() {
if err := u.FsConfig.SFTPConfig.Password.Decrypt(); err != nil {
return err
}
}
if u.FsConfig.SFTPConfig.PrivateKey.IsEncrypted() {
if err := u.FsConfig.SFTPConfig.PrivateKey.Decrypt(); err != nil {
return err
}
}
}
return nil
}
// GetPermissionsForPath returns the permissions for the given path.
// The path must be an SFTP path
@@ -200,7 +336,7 @@ func (u *User) GetPermissionsForPath(p string) []string {
// If the path is not inside a virtual folder an error is returned
func (u *User) GetVirtualFolderForPath(sftpPath string) (vfs.VirtualFolder, error) {
var folder vfs.VirtualFolder
if len(u.VirtualFolders) == 0 || u.FsConfig.Provider != 0 {
if len(u.VirtualFolders) == 0 || u.FsConfig.Provider != LocalFilesystemProvider {
return folder, errNoMatchingVirtualFolder
}
dirsForPath := utils.GetDirsForSFTPPath(sftpPath)
@@ -221,7 +357,7 @@ func (u *User) AddVirtualDirs(list []os.FileInfo, sftpPath string) []os.FileInfo
}
for _, v := range u.VirtualFolders {
if path.Dir(v.VirtualPath) == sftpPath {
fi := vfs.NewFileInfo(path.Base(v.VirtualPath), true, 0, time.Time{})
fi := vfs.NewFileInfo(v.VirtualPath, true, 0, time.Now(), false)
found := false
for index, f := range list {
if f.Name() == fi.Name() {
@@ -345,7 +481,7 @@ func (u *User) IsLoginMethodAllowed(loginMethod string, partialSuccessMethods []
return true
}
if len(partialSuccessMethods) == 1 {
for _, method := range u.GetNextAuthMethods(partialSuccessMethods) {
for _, method := range u.GetNextAuthMethods(partialSuccessMethods, true) {
if method == loginMethod {
return true
}
@@ -359,7 +495,7 @@ func (u *User) IsLoginMethodAllowed(loginMethod string, partialSuccessMethods []
// GetNextAuthMethods returns the list of authentication methods that
// can continue for multi-step authentication
func (u *User) GetNextAuthMethods(partialSuccessMethods []string) []string {
func (u *User) GetNextAuthMethods(partialSuccessMethods []string, isPasswordAuthEnabled bool) []string {
var methods []string
if len(partialSuccessMethods) != 1 {
return methods
@@ -368,8 +504,8 @@ func (u *User) GetNextAuthMethods(partialSuccessMethods []string) []string {
return methods
}
for _, method := range u.GetAllowedLoginMethods() {
if method == SSHLoginMethodKeyAndPassword {
methods = append(methods, SSHLoginMethodPassword)
if method == SSHLoginMethodKeyAndPassword && isPasswordAuthEnabled {
methods = append(methods, LoginMethodPassword)
}
if method == SSHLoginMethodKeyAndKeyboardInt {
methods = append(methods, SSHLoginMethodKeyboardInteractive)
@@ -407,11 +543,15 @@ func (u *User) GetAllowedLoginMethods() []string {
}
// IsFileAllowed returns true if the specified file is allowed by the file restrictions filters
func (u *User) IsFileAllowed(sftpPath string) bool {
func (u *User) IsFileAllowed(virtualPath string) bool {
return u.isFilePatternAllowed(virtualPath) && u.isFileExtensionAllowed(virtualPath)
}
func (u *User) isFileExtensionAllowed(virtualPath string) bool {
if len(u.Filters.FileExtensions) == 0 {
return true
}
dirsForPath := utils.GetDirsForSFTPPath(path.Dir(sftpPath))
dirsForPath := utils.GetDirsForSFTPPath(path.Dir(virtualPath))
var filter ExtensionsFilter
for _, dir := range dirsForPath {
for _, f := range u.Filters.FileExtensions {
@@ -420,12 +560,12 @@ func (u *User) IsFileAllowed(sftpPath string) bool {
break
}
}
if len(filter.Path) > 0 {
if filter.Path != "" {
break
}
}
if len(filter.Path) > 0 {
toMatch := strings.ToLower(sftpPath)
if filter.Path != "" {
toMatch := strings.ToLower(virtualPath)
for _, denied := range filter.DeniedExtensions {
if strings.HasSuffix(toMatch, denied) {
return false
@@ -441,6 +581,42 @@ func (u *User) IsFileAllowed(sftpPath string) bool {
return true
}
func (u *User) isFilePatternAllowed(virtualPath string) bool {
if len(u.Filters.FilePatterns) == 0 {
return true
}
dirsForPath := utils.GetDirsForSFTPPath(path.Dir(virtualPath))
var filter PatternsFilter
for _, dir := range dirsForPath {
for _, f := range u.Filters.FilePatterns {
if f.Path == dir {
filter = f
break
}
}
if filter.Path != "" {
break
}
}
if filter.Path != "" {
toMatch := strings.ToLower(path.Base(virtualPath))
for _, denied := range filter.DeniedPatterns {
matched, err := path.Match(denied, toMatch)
if err != nil || matched {
return false
}
}
for _, allowed := range filter.AllowedPatterns {
matched, err := path.Match(allowed, toMatch)
if err == nil && matched {
return true
}
}
return len(filter.AllowedPatterns) == 0
}
return true
}
// IsLoginFromAddrAllowed returns true if the login is allowed from the specified remoteAddr.
// If AllowedIP is defined only the specified IP/Mask can login.
// If DeniedIP is defined the specified IP/Mask cannot login.
@@ -592,10 +768,17 @@ func (u *User) GetInfoString() string {
t := utils.GetTimeFromMsecSinceEpoch(u.LastLogin)
result += fmt.Sprintf("Last login: %v ", t.Format("2006-01-02 15:04:05")) // YYYY-MM-DD HH:MM:SS
}
if u.FsConfig.Provider == 1 {
switch u.FsConfig.Provider {
case S3FilesystemProvider:
result += "Storage: S3 "
} else if u.FsConfig.Provider == 2 {
case GCSFilesystemProvider:
result += "Storage: GCS "
case AzureBlobFilesystemProvider:
result += "Storage: Azure "
case CryptedFilesystemProvider:
result += "Storage: Encrypted "
case SFTPFilesystemProvider:
result += "Storage: SFTP "
}
if len(u.PublicKeys) > 0 {
result += fmt.Sprintf("Public keys: %v ", len(u.PublicKeys))
@@ -628,30 +811,39 @@ func (u *User) GetExpirationDateAsString() string {
}
// GetAllowedIPAsString returns the allowed IP as comma separated string
func (u User) GetAllowedIPAsString() string {
result := ""
for _, IPMask := range u.Filters.AllowedIP {
if len(result) > 0 {
result += ","
}
result += IPMask
}
return result
func (u *User) GetAllowedIPAsString() string {
return strings.Join(u.Filters.AllowedIP, ",")
}
// GetDeniedIPAsString returns the denied IP as comma separated string
func (u User) GetDeniedIPAsString() string {
result := ""
for _, IPMask := range u.Filters.DeniedIP {
if len(result) > 0 {
result += ","
func (u *User) GetDeniedIPAsString() string {
return strings.Join(u.Filters.DeniedIP, ",")
}
result += IPMask
// SetEmptySecretsIfNil sets the secrets to empty if nil
func (u *User) SetEmptySecretsIfNil() {
if u.FsConfig.S3Config.AccessSecret == nil {
u.FsConfig.S3Config.AccessSecret = kms.NewEmptySecret()
}
if u.FsConfig.GCSConfig.Credentials == nil {
u.FsConfig.GCSConfig.Credentials = kms.NewEmptySecret()
}
if u.FsConfig.AzBlobConfig.AccountKey == nil {
u.FsConfig.AzBlobConfig.AccountKey = kms.NewEmptySecret()
}
if u.FsConfig.CryptConfig.Passphrase == nil {
u.FsConfig.CryptConfig.Passphrase = kms.NewEmptySecret()
}
if u.FsConfig.SFTPConfig.Password == nil {
u.FsConfig.SFTPConfig.Password = kms.NewEmptySecret()
}
if u.FsConfig.SFTPConfig.PrivateKey == nil {
u.FsConfig.SFTPConfig.PrivateKey = kms.NewEmptySecret()
}
return result
}
func (u *User) getACopy() User {
u.SetEmptySecretsIfNil()
pubKeys := make([]string, len(u.PublicKeys))
copy(pubKeys, u.PublicKeys)
virtualFolders := make([]vfs.VirtualFolder, len(u.VirtualFolders))
@@ -663,6 +855,7 @@ func (u *User) getACopy() User {
permissions[k] = perms
}
filters := UserFilters{}
filters.MaxUploadFileSize = u.Filters.MaxUploadFileSize
filters.AllowedIP = make([]string, len(u.Filters.AllowedIP))
copy(filters.AllowedIP, u.Filters.AllowedIP)
filters.DeniedIP = make([]string, len(u.Filters.DeniedIP))
@@ -671,13 +864,17 @@ func (u *User) getACopy() User {
copy(filters.DeniedLoginMethods, u.Filters.DeniedLoginMethods)
filters.FileExtensions = make([]ExtensionsFilter, len(u.Filters.FileExtensions))
copy(filters.FileExtensions, u.Filters.FileExtensions)
filters.FilePatterns = make([]PatternsFilter, len(u.Filters.FilePatterns))
copy(filters.FilePatterns, u.Filters.FilePatterns)
filters.DeniedProtocols = make([]string, len(u.Filters.DeniedProtocols))
copy(filters.DeniedProtocols, u.Filters.DeniedProtocols)
fsConfig := Filesystem{
Provider: u.FsConfig.Provider,
S3Config: vfs.S3FsConfig{
Bucket: u.FsConfig.S3Config.Bucket,
Region: u.FsConfig.S3Config.Region,
AccessKey: u.FsConfig.S3Config.AccessKey,
AccessSecret: u.FsConfig.S3Config.AccessSecret,
AccessSecret: u.FsConfig.S3Config.AccessSecret.Clone(),
Endpoint: u.FsConfig.S3Config.Endpoint,
StorageClass: u.FsConfig.S3Config.StorageClass,
KeyPrefix: u.FsConfig.S3Config.KeyPrefix,
@@ -687,10 +884,37 @@ func (u *User) getACopy() User {
GCSConfig: vfs.GCSFsConfig{
Bucket: u.FsConfig.GCSConfig.Bucket,
CredentialFile: u.FsConfig.GCSConfig.CredentialFile,
Credentials: u.FsConfig.GCSConfig.Credentials.Clone(),
AutomaticCredentials: u.FsConfig.GCSConfig.AutomaticCredentials,
StorageClass: u.FsConfig.GCSConfig.StorageClass,
KeyPrefix: u.FsConfig.GCSConfig.KeyPrefix,
},
AzBlobConfig: vfs.AzBlobFsConfig{
Container: u.FsConfig.AzBlobConfig.Container,
AccountName: u.FsConfig.AzBlobConfig.AccountName,
AccountKey: u.FsConfig.AzBlobConfig.AccountKey.Clone(),
Endpoint: u.FsConfig.AzBlobConfig.Endpoint,
SASURL: u.FsConfig.AzBlobConfig.SASURL,
KeyPrefix: u.FsConfig.AzBlobConfig.KeyPrefix,
UploadPartSize: u.FsConfig.AzBlobConfig.UploadPartSize,
UploadConcurrency: u.FsConfig.AzBlobConfig.UploadConcurrency,
UseEmulator: u.FsConfig.AzBlobConfig.UseEmulator,
AccessTier: u.FsConfig.AzBlobConfig.AccessTier,
},
CryptConfig: vfs.CryptFsConfig{
Passphrase: u.FsConfig.CryptConfig.Passphrase.Clone(),
},
SFTPConfig: vfs.SFTPFsConfig{
Endpoint: u.FsConfig.SFTPConfig.Endpoint,
Username: u.FsConfig.SFTPConfig.Username,
Password: u.FsConfig.SFTPConfig.Password.Clone(),
PrivateKey: u.FsConfig.SFTPConfig.PrivateKey.Clone(),
Prefix: u.FsConfig.SFTPConfig.Prefix,
},
}
if len(u.FsConfig.SFTPConfig.Fingerprints) > 0 {
fsConfig.SFTPConfig.Fingerprints = make([]string, len(u.FsConfig.SFTPConfig.Fingerprints))
copy(fsConfig.SFTPConfig.Fingerprints, u.FsConfig.SFTPConfig.Fingerprints)
}
return User{
@@ -716,6 +940,7 @@ func (u *User) getACopy() User {
LastLogin: u.LastLogin,
Filters: filters,
FsConfig: fsConfig,
AdditionalInfo: u.AdditionalInfo,
}
}
@@ -730,24 +955,6 @@ func (u *User) getNotificationFieldsAsSlice(action string) []string {
}
}
func (u *User) getNotificationFieldsAsEnvVars(action string) []string {
return []string{fmt.Sprintf("SFTPGO_USER_ACTION=%v", action),
fmt.Sprintf("SFTPGO_USER_USERNAME=%v", u.Username),
fmt.Sprintf("SFTPGO_USER_PASSWORD=%v", u.Password),
fmt.Sprintf("SFTPGO_USER_ID=%v", u.ID),
fmt.Sprintf("SFTPGO_USER_STATUS=%v", u.Status),
fmt.Sprintf("SFTPGO_USER_EXPIRATION_DATE=%v", u.ExpirationDate),
fmt.Sprintf("SFTPGO_USER_HOME_DIR=%v", u.HomeDir),
fmt.Sprintf("SFTPGO_USER_UID=%v", u.UID),
fmt.Sprintf("SFTPGO_USER_GID=%v", u.GID),
fmt.Sprintf("SFTPGO_USER_QUOTA_FILES=%v", u.QuotaFiles),
fmt.Sprintf("SFTPGO_USER_QUOTA_SIZE=%v", u.QuotaSize),
fmt.Sprintf("SFTPGO_USER_UPLOAD_BANDWIDTH=%v", u.UploadBandwidth),
fmt.Sprintf("SFTPGO_USER_DOWNLOAD_BANDWIDTH=%v", u.DownloadBandwidth),
fmt.Sprintf("SFTPGO_USER_MAX_SESSIONS=%v", u.MaxSessions),
fmt.Sprintf("SFTPGO_USER_FS_PROVIDER=%v", u.FsConfig.Provider)}
}
func (u *User) getGCSCredentialsFilePath() string {
return filepath.Join(credentialsDirPath, fmt.Sprintf("%v_gcs_credentials.json", u.Username))
}

View File

@@ -1,5 +1,143 @@
# Dockerfile examples
# Official Docker image
Sample Dockerfiles for `sftpgo` daemon and the REST API CLI.
SFTPGo provides an official Docker image, available both on [Docker Hub](https://hub.docker.com/r/drakkan/sftpgo) and on [GitHub Container Registry](https://github.com/users/drakkan/packages/container/package/sftpgo).
We don't want to add a `Dockerfile` for each single `sftpgo` configuration options or data provider. You can use the docker configurations here as starting point that you can customize to run `sftpgo` with [Docker](http://www.docker.io "Docker").
## Supported tags and respective Dockerfile links
- [v2.0.0, v2.0, v2, latest](https://github.com/drakkan/sftpgo/blob/v2.0.0/Dockerfile.full)
- [v2.0.0-alpine, v2.0-alpine, v2-alpine, alpine](https://github.com/drakkan/sftpgo/blob/v2.0.0/Dockerfile.full.alpine)
- [v2.0.0-slim, v2.0-slim, v2-slim, slim](https://github.com/drakkan/sftpgo/blob/v2.0.0/Dockerfile)
- [v2.0.0-alpine-slim, v2.0-alpine-slim, v2-alpine-slim, alpine-slim](https://github.com/drakkan/sftpgo/blob/v2.0.0/Dockerfile.alpine)
- [edge](../Dockerfile.full)
- [edge-alpine](../Dockerfile.full.alpine)
- [edge-slim](../Dockerfile)
- [edge-alpine-slim](../Dockerfile.alpine)
## How to use the SFTPGo image
### Start a `sftpgo` server instance
Starting an SFTPGo instance is simple:
```shell
docker run --name some-sftpgo -p 127.0.0.1:8080:8080 -p 2022:2022 -d "drakkan/sftpgo:tag"
```
... where `some-sftpgo` is the name you want to assign to your container, and `tag` is the tag specifying the SFTPGo version you want. See the list above for relevant tags.
Now visit [http://localhost:8080/](http://localhost:8080/) and create a new SFTPGo user. The SFTP service is available on port 2022.
If you prefer GitHub Container Registry to Docker Hub replace `drakkan/sftpgo:tag` with `ghcr.io/drakkan/sftpgo:tag`.
### Container shell access and viewing SFTPGo logs
The docker exec command allows you to run commands inside a Docker container. The following command line will give you a shell inside your `sftpgo` container:
```shell
docker exec -it some-sftpgo sh
```
The logs are available through Docker's container log:
```shell
docker logs some-sftpgo
```
### Where to Store Data
Important note: There are several ways to store data used by applications that run in Docker containers. We encourage users of the SFTPGo images to familiarize themselves with the options available, including:
- Let Docker manage the storage for SFTPGo data by [writing them to disk on the host system using its own internal volume management](https://docs.docker.com/engine/tutorials/dockervolumes/#adding-a-data-volume). This is the default and is easy and fairly transparent to the user. The downside is that the files may be hard to locate for tools and applications that run directly on the host system, i.e. outside containers.
- Create a data directory on the host system (outside the container) and [mount this to a directory visible from inside the container](https://docs.docker.com/engine/tutorials/dockervolumes/#mount-a-host-directory-as-a-data-volume). This places the SFTPGo files in a known location on the host system, and makes it easy for tools and applications on the host system to access the files. The downside is that the user needs to make sure that the directory exists, and that e.g. directory permissions and other security mechanisms on the host system are set up correctly. The SFTPGo image runs using `1000` as UID/GID by default.
The Docker documentation is a good starting point for understanding the different storage options and variations, and there are multiple blogs and forum postings that discuss and give advice in this area. We will simply show the basic procedure here for the latter option above:
1. Create a data directory on a suitable volume on your host system, e.g. `/my/own/sftpgodata`.
2. Create a home directory for the sftpgo container user on your host system e.g. `/my/own/sftpgohome`.
3. Start your SFTPGo container like this:
```shell
docker run --name some-sftpgo \
-p 127.0.0.1:8080:8090 \
-p 2022:2022 \
--mount type=bind,source=/my/own/sftpgodata,target=/srv/sftpgo \
--mount type=bind,source=/my/own/sftpgohome,target=/var/lib/sftpgo \
-e SFTPGO_HTTPD__BIND_PORT=8090 \
-d "drakkan/sftpgo:tag"
```
As you can see, SFTPGo uses two volumes:
- `/srv/sftpgo` to handle persistent data. The default home directory for SFTP/FTP/WebDAV users is `/srv/sftpgo/data/<username>`. Backups are stored in `/srv/sftpgo/backups`
- `/var/lib/sftpgo` is the home directory for the sftpgo system user defined inside the container. This is also the container working directory; host keys will be created here when using the default configuration.
### Configuration
The runtime configuration can be customized via environment variables that you can set by passing the `-e` option to the `docker run` command, or inside the `environment` section if you are using [docker stack deploy](https://docs.docker.com/engine/reference/commandline/stack_deploy/) or [docker-compose](https://github.com/docker/compose).
Please take a look [here](../docs/full-configuration.md#environment-variables) to learn how to configure SFTPGo via environment variables.
Alternatively, you can mount your custom configuration file to `/var/lib/sftpgo` or `/var/lib/sftpgo/.config/sftpgo`.
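For example, here is a minimal sketch that enables the WebDAV service by overriding its bind port via an environment variable (the variable names follow the `SFTPGO_<section>__<key>` convention described in the linked documentation, so double-check them against your SFTPGo version):
```shell
docker run --name some-sftpgo \
  -e SFTPGO_WEBDAVD__BIND_PORT=8090 \
  -p 127.0.0.1:8080:8080 \
  -p 8090:8090 \
  -p 2022:2022 \
  -d "drakkan/sftpgo:tag"
```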
### Loading initial data
Initial data can be loaded in the following ways:
- via the `--loaddata-from` flag or the `SFTPGO_LOADDATA_FROM` environment variable
- by providing a dump file to the memory provider
Please take a look [here](../docs/full-configuration.md) for more details.
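For example, here is a minimal sketch that mounts a previously exported dump into the container and loads it at startup (the dump path inside the container is arbitrary):
```shell
docker run --name some-sftpgo \
  --mount type=bind,source=/my/own/dump.json,target=/srv/sftpgo/dump.json \
  -e SFTPGO_LOADDATA_FROM=/srv/sftpgo/dump.json \
  -p 127.0.0.1:8080:8080 \
  -p 2022:2022 \
  -d "drakkan/sftpgo:tag"
```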
### Running as an arbitrary user
The SFTPGo image runs using `1000` as UID/GID by default. If you know that the permissions of your data and/or configuration directory are already set appropriately, or you need to run SFTPGo with a specific UID/GID, you can invoke this image with `--user` set to any value (other than `root/0`) to achieve the desired access/configuration:
```shell
$ ls -lnd data
drwxr-xr-x 2 1100 1100 6 7 nov 09.09 data
$ ls -lnd config
drwxr-xr-x 2 1100 1100 6 7 nov 09.19 config
```
With the above directory permissions, you can start a SFTPGo instance like this:
```shell
docker run --name some-sftpgo \
--user 1100:1100 \
-p 127.0.0.1:8080:8080 \
-p 2022:2022 \
--mount type=bind,source="${PWD}/data",target=/srv/sftpgo \
--mount type=bind,source="${PWD}/config",target=/var/lib/sftpgo \
-d "drakkan/sftpgo:tag"
```
Alternatively, build your own image using the official one as a base. Here is a sample Dockerfile:
```shell
FROM drakkan/sftpgo:tag
USER root
RUN chown -R 1100:1100 /etc/sftpgo && chown 1100:1100 /var/lib/sftpgo /srv/sftpgo
USER 1100:1100
```
## Image Variants
The `sftpgo` image comes in many flavors, each designed for a specific use case. The `edge` and `edge-alpine` tags are updated after each new commit.
### `sftpgo:<version>`
This is the de facto image, based on [Debian](https://www.debian.org/), available in [the `debian` official image](https://hub.docker.com/_/debian). If you are unsure about what your needs are, you probably want to use this one.
### `sftpgo:<version>-alpine`
This image is based on the popular [Alpine Linux project](https://alpinelinux.org/), available in [the `alpine` official image](https://hub.docker.com/_/alpine). Alpine Linux is much smaller than most distribution base images (~5MB), and thus leads to much slimmer images in general.
This variant is highly recommended when you want the final image size to be as small as possible. The main caveat to note is that it does use [musl libc](https://musl.libc.org/) instead of [glibc and friends](https://www.etalabs.net/compare_libcs.html), so certain software might run into issues depending on the depth of their libc requirements. However, most software doesn't have an issue with this, so this variant is usually a very safe choice. See [this Hacker News comment thread](https://news.ycombinator.com/item?id=10782897) for more discussion of the issues that might arise and some pro/con comparisons of using Alpine-based images.
### `sftpgo:<suite>-slim`
These tags provide a slimmer image that does not include the optional `git` and `rsync` dependencies.
## Helm Chart
A Helm chart is [available](https://artifacthub.io/packages/helm/sagikazarmark/sftpgo). You can find the source code [here](https://github.com/sagikazarmark/helm-charts/tree/master/charts/sftpgo).

View File

@@ -2,7 +2,7 @@ FROM debian:latest
LABEL maintainer="nicola.murino@gmail.com"
RUN apt-get update && apt-get install -y curl python3-requests python3-pygments
RUN curl https://raw.githubusercontent.com/drakkan/sftpgo/master/examples/rest-api-cli/sftpgo_api_cli.py --output /usr/bin/sftpgo_api_cli.py
RUN curl https://raw.githubusercontent.com/drakkan/sftpgo/master/examples/rest-api-cli/sftpgo_api_cli --output /usr/bin/sftpgo_api_cli
ENTRYPOINT ["python3", "/usr/bin/sftpgo_api_cli.py" ]
ENTRYPOINT ["python3", "/usr/bin/sftpgo_api_cli" ]
CMD []

View File

@@ -0,0 +1,28 @@
#!/usr/bin/env bash
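# When started as root, this entrypoint aligns the ownership of the SFTPGo
# directories with SFTPGO_PUID/SFTPGO_PGID and then drops privileges via
# su-exec; otherwise the given command is executed as-is.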
SFTPGO_PUID=${SFTPGO_PUID:-1000}
SFTPGO_PGID=${SFTPGO_PGID:-1000}
if [ "$1" = 'sftpgo' ]; then
if [ "$(id -u)" = '0' ]; then
for DIR in "/etc/sftpgo" "/var/lib/sftpgo" "/srv/sftpgo"
do
DIR_UID=$(stat -c %u ${DIR})
DIR_GID=$(stat -c %g ${DIR})
if [ ${DIR_UID} != ${SFTPGO_PUID} ] || [ ${DIR_GID} != ${SFTPGO_PGID} ]; then
echo '{"level":"info","time":"'`date +%Y-%m-%dT%H:%M:%S.000`'","sender":"entrypoint","message":"change owner for \"'${DIR}'\" UID: '${SFTPGO_PUID}' GID: '${SFTPGO_PGID}'"}'
if [ ${DIR} = "/etc/sftpgo" ]; then
chown -R ${SFTPGO_PUID}:${SFTPGO_PGID} ${DIR}
else
chown ${SFTPGO_PUID}:${SFTPGO_PGID} ${DIR}
fi
fi
done
echo '{"level":"info","time":"'`date +%Y-%m-%dT%H:%M:%S.000`'","sender":"entrypoint","message":"run as UID: '${SFTPGO_PUID}' GID: '${SFTPGO_PGID}'"}'
exec su-exec ${SFTPGO_PUID}:${SFTPGO_PGID} "$@"
fi
exec "$@"
fi
exec "$@"

docker/scripts/entrypoint.sh Executable file
View File

@@ -0,0 +1,32 @@
#!/usr/bin/env bash
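# When started as root, this entrypoint remaps the sftpgo user/group to
# SFTPGO_PUID/SFTPGO_PGID if they differ from the existing ones, fixes the
# ownership of the SFTPGo directories and drops privileges via gosu;
# otherwise the given command is executed as-is.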
SFTPGO_PUID=${SFTPGO_PUID:-1000}
SFTPGO_PGID=${SFTPGO_PGID:-1000}
if [ "$1" = 'sftpgo' ]; then
if [ "$(id -u)" = '0' ]; then
getent passwd ${SFTPGO_PUID} > /dev/null
HAS_PUID=$?
getent group ${SFTPGO_PGID} > /dev/null
HAS_PGID=$?
if [ ${HAS_PUID} -ne 0 ] || [ ${HAS_PGID} -ne 0 ]; then
echo '{"level":"info","time":"'`date +%Y-%m-%dT%H:%M:%S.%3N`'","sender":"entrypoint","message":"prepare to run as UID: '${SFTPGO_PUID}' GID: '${SFTPGO_PGID}'"}'
if [ ${HAS_PGID} -ne 0 ]; then
echo '{"level":"info","time":"'`date +%Y-%m-%dT%H:%M:%S.%3N`'","sender":"entrypoint","message":"set GID to: '${SFTPGO_PGID}'"}'
groupmod -g ${SFTPGO_PGID} sftpgo
fi
if [ ${HAS_PUID} -ne 0 ]; then
echo '{"level":"info","time":"'`date +%Y-%m-%dT%H:%M:%S.%3N`'","sender":"entrypoint","message":"set UID to: '${SFTPGO_PUID}'"}'
usermod -u ${SFTPGO_PUID} sftpgo
fi
chown -R ${SFTPGO_PUID}:${SFTPGO_PGID} /etc/sftpgo
chown ${SFTPGO_PUID}:${SFTPGO_PGID} /var/lib/sftpgo /srv/sftpgo
fi
echo '{"level":"info","time":"'`date +%Y-%m-%dT%H:%M:%S.%3N`'","sender":"entrypoint","message":"run as UID: '${SFTPGO_PUID}' GID: '${SFTPGO_PGID}'"}'
exec gosu ${SFTPGO_PUID}:${SFTPGO_PGID} "$@"
fi
exec "$@"
fi
exec "$@"

View File

@@ -1,13 +1,13 @@
FROM golang:alpine as builder
RUN apk add --no-cache git gcc g++ ca-certificates \
&& go get -d github.com/drakkan/sftpgo
&& go get -v -d github.com/drakkan/sftpgo
WORKDIR /go/src/github.com/drakkan/sftpgo
ARG TAG
ARG FEATURES
# Use --build-arg TAG=LATEST for latest tag. Use e.g. --build-arg TAG=0.9.6 for a specific tag/commit. Otherwise HEAD (master) is built.
# Use --build-arg TAG=LATEST for latest tag. Use e.g. --build-arg TAG=v1.0.0 for a specific tag/commit. Otherwise HEAD (master) is built.
RUN git checkout $(if [ "${TAG}" = LATEST ]; then echo `git rev-list --tags --max-count=1`; elif [ -n "${TAG}" ]; then echo "${TAG}"; else echo HEAD; fi)
RUN go build -i $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o /go/bin/sftpgo
RUN go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -v -o /go/bin/sftpgo
FROM alpine:latest
@@ -27,5 +27,24 @@ RUN chmod +x /bin/entrypoint.sh
VOLUME [ "/data", "/srv/sftpgo/config", "/srv/sftpgo/backups" ]
EXPOSE 2022 8080
# uncomment the following settings to enable FTP support
#ENV SFTPGO_FTPD__BIND_PORT=2121
#ENV SFTPGO_FTPD__FORCE_PASSIVE_IP=<your FTP visible IP here>
#EXPOSE 2121
# we need to expose the passive ports range too
#EXPOSE 50000-50100
# it is a good idea to provide certificates to enable FTPS too
#ENV SFTPGO_FTPD__CERTIFICATE_FILE=/srv/sftpgo/config/mycert.crt
#ENV SFTPGO_FTPD__CERTIFICATE_KEY_FILE=/srv/sftpgo/config/mycert.key
# uncomment the following setting to enable WebDAV support
#ENV SFTPGO_WEBDAVD__BIND_PORT=8090
# it is a good idea to provide certificates to enable WebDAV over HTTPS
#ENV SFTPGO_WEBDAVD__CERTIFICATE_FILE=${CONFIG_DIR}/mycert.crt
#ENV SFTPGO_WEBDAVD__CERTIFICATE_KEY_FILE=${CONFIG_DIR}/mycert.key
ENTRYPOINT ["/bin/entrypoint.sh"]
CMD ["serve"]

View File

@@ -1,5 +1,7 @@
# SFTPGo with Docker and Alpine
:warning: The recommended way to run SFTPGo on Docker is to use the official [images](https://hub.docker.com/r/drakkan/sftpgo). The documentation here is now obsolete.
This Dockerfile is made to build an image that hosts multiple instances of SFTPGo started with different users.
## Example
@@ -16,7 +18,7 @@ sudo groupadd -g 1003 sftpgrp && \
# Edit sftpgo.json as you need
# Get and build SFTPGo image.
# Add --build-arg TAG=LATEST to build the latest tag or e.g. TAG=0.9.6 for a specific tag/commit.
# Add --build-arg TAG=LATEST to build the latest tag or e.g. TAG=v1.0.0 for a specific tag/commit.
# Add --build-arg FEATURES=<build features comma separated> to specify the features to build.
git clone https://github.com/drakkan/sftpgo.git && \
cd sftpgo && \
@@ -46,6 +48,8 @@ sudo docker rm sftpgo && sudo docker run --name sftpgo \
sftpgo
```
If you want to enable FTP/S you also need to publish the FTP port and the FTP passive port range, defined in your `Dockerfile`, by adding, for example, the following options to the `docker run` command: `-p 2121:2121 -p 50000-50100:50000-50100`. The same goes for WebDAV: you need to publish the configured port.
The `entrypoint.sh` script corrects the directory permissions and starts the process with the right user.
Several images can be run with different parameters.

View File

@@ -1,5 +1,5 @@
[Unit]
Description=SFTPGo sftp server
Description=SFTPGo server
After=docker.service
[Service]
@@ -15,12 +15,16 @@ ExecStart=docker run --name sftpgo \
--env-file sftpgo-${PUID}.env \
-e PUID=${PUID} \
-e GUID=${GUID} \
-e SFTPGO_LOG_FILE_PATH= \
-e SFTPGO_CONFIG_DIR=/srv/sftpgo/config \
-e SFTPGO_HTTPD__TEMPLATES_PATH=/srv/sftpgo/web/templates \
-e SFTPGO_HTTPD__STATIC_FILES_PATH=/srv/sftpgo/web/static \
-e SFTPGO_HTTPD__BACKUPS_PATH=/srv/sftpgo/backups \
-p 8080:8080 \
-p 2022:2022 \
-v /home/sftpuser/conf/:/srv/sftpgo/config \
-v /home/sftpuser/data:/data \
-v /home/sftpuser/backups:/srv/sftpgo/backups \
sftpgo
ExecStop=docker stop sftpgo
SyslogIdentifier=sftpgo

View File

@@ -1,22 +1,22 @@
# we use a multi stage build to have a separate build and run env
FROM golang:latest as buildenv
LABEL maintainer="nicola.murino@gmail.com"
RUN go get -d github.com/drakkan/sftpgo
RUN go get -v -d github.com/drakkan/sftpgo
WORKDIR /go/src/github.com/drakkan/sftpgo
ARG TAG
ARG FEATURES
# Use --build-arg TAG=LATEST for latest tag. Use e.g. --build-arg TAG=0.9.6 for a specific tag/commit. Otherwise HEAD (master) is built.
# Use --build-arg TAG=LATEST for latest tag. Use e.g. --build-arg TAG=v1.0.0 for a specific tag/commit. Otherwise HEAD (master) is built.
RUN git checkout $(if [ "${TAG}" = LATEST ]; then echo `git rev-list --tags --max-count=1`; elif [ -n "${TAG}" ]; then echo "${TAG}"; else echo HEAD; fi)
RUN go build -i $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
RUN go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -v -o sftpgo
# now define the run environment
FROM debian:latest
# ca-certificates is needed for Cloud Storage Support and to expose the REST API over HTTPS.
RUN apt-get update && apt-get install -y ca-certificates
# ca-certificates is needed for Cloud Storage Support and for HTTPS/FTPS.
RUN apt-get update && apt-get install -y ca-certificates && apt-get clean
# git and rsync are optional, uncomment the next line to add support for them if needed.
#RUN apt-get update && apt-get install -y git rsync
#RUN apt-get update && apt-get install -y git rsync && apt-get clean
ARG BASE_DIR=/app
ARG DATA_REL_DIR=data
@@ -40,7 +40,7 @@ ENV WEB_DIR=${BASE_DIR}/${WEB_REL_PATH}
RUN mkdir -p ${DATA_DIR} ${CONFIG_DIR} ${WEB_DIR} ${BACKUPS_DIR}
RUN groupadd --system -g ${GID} ${GROUPNAME}
RUN useradd --system --create-home --no-log-init --home-dir ${HOME_DIR} --comment "SFTPGo user" --shell /bin/false --gid ${GID} --uid ${UID} ${USERNAME}
RUN useradd --system --create-home --no-log-init --home-dir ${HOME_DIR} --comment "SFTPGo user" --shell /usr/sbin/nologin --gid ${GID} --uid ${UID} ${USERNAME}
WORKDIR ${HOME_DIR}
RUN mkdir -p bin .config/sftpgo
@@ -71,5 +71,23 @@ ENV SFTPGO_HTTPD__STATIC_FILES_PATH=${WEB_DIR}/static
ENV SFTPGO_DATA_PROVIDER__USERS_BASE_DIR=${DATA_DIR}
ENV SFTPGO_HTTPD__BACKUPS_PATH=${BACKUPS_DIR}
# uncomment the following settings to enable FTP support
#ENV SFTPGO_FTPD__BIND_PORT=2121
#ENV SFTPGO_FTPD__FORCE_PASSIVE_IP=<your FTP visible IP here>
#EXPOSE 2121
# we need to expose the passive ports range too
#EXPOSE 50000-50100
# it is a good idea to provide certificates to enable FTPS too
#ENV SFTPGO_FTPD__CERTIFICATE_FILE=${CONFIG_DIR}/mycert.crt
#ENV SFTPGO_FTPD__CERTIFICATE_KEY_FILE=${CONFIG_DIR}/mycert.key
# uncomment the following setting to enable WebDAV support
#ENV SFTPGO_WEBDAVD__BIND_PORT=8090
# it is a good idea to provide certificates to enable WebDAV over HTTPS
#ENV SFTPGO_WEBDAVD__CERTIFICATE_FILE=${CONFIG_DIR}/mycert.crt
#ENV SFTPGO_WEBDAVD__CERTIFICATE_KEY_FILE=${CONFIG_DIR}/mycert.key
ENTRYPOINT ["sftpgo"]
CMD ["serve"]

View File

@@ -1,5 +1,7 @@
# Dockerfile based on Debian stable
:warning: The recommended way to run SFTPGo on Docker is to use the official [images](https://hub.docker.com/r/drakkan/sftpgo). The documentation here is now obsolete.
Please read the comments inside the `Dockerfile` to learn how to customize things for your setup.
You can build the container image using `docker build`, for example:
@@ -10,10 +12,10 @@ docker build -t="drakkan/sftpgo" .
This will build master of github.com/drakkan/sftpgo.
To build the latest tag you can add `--build-arg TAG=LATEST` and to build a specific tag/commit you can use for example `TAG=0.9.6`, like this:
To build the latest tag you can add `--build-arg TAG=LATEST` and to build a specific tag/commit you can use for example `TAG=v1.0.0`, like this:
```bash
docker build -t="drakkan/sftpgo" --build-arg TAG=0.9.6 .
docker build -t="drakkan/sftpgo" --build-arg TAG=v1.0.0 .
```
To specify the features to build you can add `--build-arg FEATURES=<build features comma separated>`. For example you can disable SQLite and S3 support like this:
@@ -53,3 +55,5 @@ and finally you can run the image using something like this:
```bash
docker rm sftpgo && docker run --name sftpgo -p 8080:8080 -p 2022:2022 --mount type=bind,source=/srv/sftpgo/data,target=/app/data --mount type=bind,source=/srv/sftpgo/config,target=/app/config --mount type=bind,source=/srv/sftpgo/backups,target=/app/backups drakkan/sftpgo
```
If you want to enable FTP/S you also need to publish the FTP port and the FTP passive port range, defined in your `Dockerfile`, by adding, for example, the following options to the `docker run` command: `-p 2121:2121 -p 50000-50100:50000-50100`. The same goes for WebDAV: you need to publish the configured port.

View File

@@ -1,65 +1,19 @@
# Account's configuration properties
For each account, the following properties can be configured:
Please take a look at the [OpenAPI schema](../httpd/schema/openapi.yaml) for the exact definitions of user and folder fields.
If you need an example you can export a dump using the REST API CLI client or by invoking the `dumpdata` endpoint directly, for example:
- `username`
- `password` used for password authentication. For users created using SFTPGo REST API, if the password has no known hashing algo prefix, it will be stored using argon2id. SFTPGo supports checking passwords stored with bcrypt, pbkdf2, md5crypt and sha512crypt too. For pbkdf2 the supported format is `$<algo>$<iterations>$<salt>$<hashed pwd base64 encoded>`, where algo is `pbkdf2-sha1` or `pbkdf2-sha256` or `pbkdf2-sha512` or `$pbkdf2-b64salt-sha256$`. For example the `pbkdf2-sha256` of the word `password` using 150000 iterations and `E86a9YMX3zC7` as salt must be stored as `$pbkdf2-sha256$150000$E86a9YMX3zC7$R5J62hsSq+pYw00hLLPKBbcGXmq7fj5+/M0IFoYtZbo=`. In pbkdf2 variant with `b64salt` the salt is base64 encoded. For bcrypt the format must be the one supported by golang's [crypto/bcrypt](https://godoc.org/golang.org/x/crypto/bcrypt) package, for example the password `secret` with cost `14` must be stored as `$2a$14$ajq8Q7fbtFRQvXpdCq7Jcuy.Rx1h/L4J60Otx.gyNLbAYctGMJ9tK`. For md5crypt and sha512crypt we support the format used in `/etc/shadow` with the `$1$` and `$6$` prefix, this is useful if you are migrating from Unix system user accounts. We support Apache md5crypt (`$apr1$` prefix) too. Using the REST API you can send a password hashed as bcrypt, pbkdf2, md5crypt or sha512crypt and it will be stored as is.
- `public_keys` array of public keys. At least one public key or the password is mandatory.
- `status` 1 means "active", 0 "inactive". An inactive account cannot login.
- `expiration_date` expiration date as unix timestamp in milliseconds. An expired account cannot login. 0 means no expiration.
- `home_dir` the user cannot upload or download files outside this directory. Must be an absolute path. A local home directory is required for Cloud Storage Backends too: in this case it will store temporary files.
- `virtual_folders` list of mappings between virtual SFTP/SCP paths and local filesystem paths outside the user home directory. More information can be found [here](./virtual-folders.md)
- `uid`, `gid`. If SFTPGo runs as root system user then the created files and directories will be assigned to this system uid/gid. Ignored on windows or if SFTPGo runs as non root user: in this case files and directories for all SFTP users will be owned by the system user that runs SFTPGo.
- `max_sessions` maximum concurrent sessions. 0 means unlimited.
- `quota_size` maximum size allowed as bytes. 0 means unlimited.
- `quota_files` maximum number of files allowed. 0 means unlimited.
- `permissions` for SFTP paths. The following per directory permissions are supported:
- `*` all permissions are granted
- `list` list items is allowed
- `download` download files is allowed
- `upload` upload files is allowed
- `overwrite` overwrite an existing file, while uploading, is allowed. `upload` permission is required to allow file overwrite
- `delete` delete files or directories is allowed
- `rename` rename a file or a directory is allowed if this permission is granted on source and target path. You can enable rename in a more controlled way granting `delete` permission on source directory and `upload`/`create_dirs`/`create_symlinks` permissions on target directory
- `create_dirs` create directories is allowed
- `create_symlinks` create symbolic links is allowed
- `chmod` changing file or directory permissions is allowed. On Windows, only the 0200 bit (owner writable) of mode is used; it controls whether the file's read-only attribute is set or cleared. The other bits are currently unused. Use mode 0400 for a read-only file and 0600 for a readable+writable file.
- `chown` changing file or directory owner and group is allowed. Changing owner and group is not supported on Windows.
- `chtimes` changing file or directory access and modification time is allowed
- `upload_bandwidth` maximum upload bandwidth as KB/s, 0 means unlimited.
- `download_bandwidth` maximum download bandwidth as KB/s, 0 means unlimited.
- `allowed_ip`, List of IP/Mask allowed to login. Any IP address not contained in this list cannot login. IP/Mask must be in CIDR notation as defined in RFC 4632 and RFC 4291, for example "192.0.2.0/24" or "2001:db8::/32"
- `denied_ip`, List of IP/Mask not allowed to login. If an IP address is both allowed and denied then login will be denied
- `denied_login_methods`, List of login methods not allowed. To enable multi-step authentication you have to allow only multi-step login methods. The following login methods are supported:
- `publickey`
- `password`
- `keyboard-interactive`
- `publickey+password`
- `publickey+keyboard-interactive`
- `file_extensions`, list of struct. These restrictions do not apply to files listing for performance reasons, so a denied file cannot be downloaded/overwritten/renamed but it will still be listed in the list of files. Please note that these restrictions can be easily bypassed. Each struct contains the following fields:
- `allowed_extensions`, list of, case insensitive, allowed files extension. Shell like expansion is not supported so you have to specify `.jpg` and not `*.jpg`. Any file that does not end with this suffix will be denied
- `denied_extensions`, list of, case insensitive, denied files extension. Denied file extensions are evaluated before the allowed ones
- `path`, SFTP/SCP path, if no other specific filter is defined, the filter apply for sub directories too. For example if filters are defined for the paths `/` and `/sub` then the filters for `/` are applied for any file outside the `/sub` directory
- `fs_provider`, filesystem to serve via SFTP. Local filesystem and S3 Compatible Object Storage are supported
- `s3_bucket`, required for S3 filesystem
- `s3_region`, required for S3 filesystem. Must match the region for your bucket. You can find here the list of available [AWS regions](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions). For example if your bucket is at `Frankfurt` you have to set the region to `eu-central-1`
- `s3_access_key`
- `s3_access_secret`, if provided it is stored encrypted (AES-256-GCM). You can leave access key and access secret blank to use credentials from environment
- `s3_endpoint`, specifies a S3 endpoint (server) different from AWS. It is not required if you are connecting to AWS
- `s3_storage_class`, leave blank to use the default or specify a valid AWS [storage class](https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html)
- `s3_key_prefix`, allows to restrict access to the folder identified by this prefix and its contents
- `s3_upload_part_size`, the buffer size for multipart uploads (MB). Zero means the default (5 MB). Minimum is 5
- `s3_upload_concurrency` how many parts are uploaded in parallel
- `gcs_bucket`, required for GCS filesystem
- `gcs_credentials`, Google Cloud Storage JSON credentials base64 encoded
- `gcs_automatic_credentials`, integer. Set to 1 to use Application Default Credentials strategy or set to 0 to use explicit credentials via `gcs_credentials`
- `gcs_storage_class`
- `gcs_key_prefix`, allows to restrict access to the folder identified by this prefix and its contents
```shell
curl "http://127.0.0.1:8080/api/v1/dumpdata?output_file=dump.json&indent=1"
```
These properties are stored inside the data provider.
The dump is a JSON file containing users and folders.
These properties are stored inside the configured data provider.
SFTPGo supports checking passwords stored with bcrypt, pbkdf2, md5crypt and sha512crypt too. For pbkdf2 the supported format is `$<algo>$<iterations>$<salt>$<hashed pwd base64 encoded>`, where algo is `pbkdf2-sha1` or `pbkdf2-sha256` or `pbkdf2-sha512` or `$pbkdf2-b64salt-sha256$`. For example the pbkdf2-sha256 of the word password using 150000 iterations and E86a9YMX3zC7 as salt must be stored as `$pbkdf2-sha256$150000$E86a9YMX3zC7$R5J62hsSq+pYw00hLLPKBbcGXmq7fj5+/M0IFoYtZbo=`. In pbkdf2 variant with b64salt the salt is base64 encoded. For bcrypt the format must be the one supported by golang's crypto/bcrypt package, for example the password secret with cost 14 must be stored as `$2a$14$ajq8Q7fbtFRQvXpdCq7Jcuy.Rx1h/L4J60Otx.gyNLbAYctGMJ9tK`. For md5crypt and sha512crypt we support the format used in `/etc/shadow` with the `$1$` and `$6$` prefix, this is useful if you are migrating from Unix system user accounts. We support Apache md5crypt (`$apr1$` prefix) too. Using the REST API you can send a password hashed as bcrypt, pbkdf2, md5crypt or sha512crypt and it will be stored as is.
If you want to use your existing accounts, you have these options:
- If your accounts are already stored inside a supported database, you can create a database view. Since a view is read only, you have to disable user management and quota tracking so SFTPGo will never try to write to the view
- you can import your users inside SFTPGo. Take a look at [sftpgo_api_cli.py](../examples/rest-api-cli#convert-users-from-other-stores "SFTPGo API CLI example"), it can convert and import users from Linux system users and Pure-FTPd/ProFTPD virtual users
- you can import your users inside SFTPGo. Take a look at the [convert users](../examples/convertusers) script; it can convert and import users from Linux system users and Pure-FTPd/ProFTPD virtual users
- you can use an external authentication program

View File

@@ -0,0 +1,20 @@
# Azure Blob Storage backend
To connect SFTPGo to Azure Blob Storage, you need to specify the access credentials. Azure Blob Storage has different options for credentials; we support:
1. Providing an account name and account key.
2. Providing a shared access signature (SAS).
If you authenticate using account and key you also need to specify a container. The endpoint can generally be left blank; the default is `blob.core.windows.net`.
If you provide a SAS URL the container is optional and if given it must match the one inside the shared access signature.
If you want to connect to an emulator such as [Azurite](https://github.com/Azure/Azurite) you need to provide the account name/key pair and an endpoint prefixed with the protocol, for example `http://127.0.0.1:10000`.
Specifying a different `key_prefix`, you can assign different "folders" of the same container to different users. This is similar to a chroot directory for a local filesystem. Each SFTPGo user can only access the assigned folder and its contents. The folder identified by `key_prefix` does not need to be pre-created.
For multipart uploads you can customize the parts size and the upload concurrency. Please note that if the upload bandwidth between the client and SFTPGo is greater than the upload bandwidth between SFTPGo and the Azure Blob service then the client should wait for the last parts to be uploaded to Azure after finishing uploading the file to SFTPGo, and it may time out. Keep this in mind if you customize these parameters.
The configured container must exist.
This backend is very similar to the [S3](./s3.md) backend, and it has the same limitations.

View File

@@ -1,25 +1,21 @@
# Build SFTPGo from source
You can install the package to your [\$GOPATH](https://github.com/golang/go/wiki/GOPATH "GOPATH") with the [go tool](https://golang.org/cmd/go/ "go command") from shell:
```bash
go get -u github.com/drakkan/sftpgo
```
Or you can download the sources and use `go build`.
Make sure [Git](https://git-scm.com/downloads) is installed on your machine and in your system's `PATH`.
Download the sources and use `go build`.
The following build tags are available:
- `nogcs`, disable Google Cloud Storage backend, default enabled
- `nos3`, disable S3 Compatible Object Storage backends, default enabled
- `noazblob`, disable Azure Blob Storage backend, default enabled
- `nobolt`, disable Bolt data provider, default enabled
- `nomysql`, disable MySQL data provider, default enabled
- `nopgsql`, disable PostgreSQL data provider, default enabled
- `nosqlite`, disable SQLite data provider, default enabled
- `noportable`, disable portable mode, default enabled
- `nometrics`, disable Prometheus metrics, default enabled
- `novaultkms`, disable Vault transit secret engine, default enabled
- `noawskms`, disable AWS KMS, default enabled
- `nogcpkms`, disable GCP KMS, default enabled
If no build tag is specified the build will include the default features.
@@ -36,7 +32,7 @@ Version info, such as git commit and build date, can be embedded setting the fol
For example, you can build using the following command:
```bash
go build -tags nogcs,nos3,nosqlite -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
```
You should get a version that includes git commit, build date and available features like this one:


@@ -0,0 +1,45 @@
# Check password hook
This hook allows you to externally check the provided password. Its main use case is to easily support password+OTP for protocols without keyboard interactive support, such as FTP and WebDAV. You can ask your users to login using a string consisting of a fixed password and a One Time Token; you verify the token inside the hook and ask SFTPGo to verify the fixed part.
The same thing can be achieved using [External authentication](./external-auth.md) but using this hook is simpler in some use cases.
The `check password hook` can be defined as the absolute path of your program or an HTTP URL.
The expected response is a JSON serialized struct containing the following keys:
- `status` integer. 0 means KO, 1 means OK, 2 means partial success
- `to_verify` string. For `status` = 2 SFTPGo will check this password against the one stored inside SFTPGo data provider
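For example, a partial success response asking SFTPGo to verify the fixed password part could look like this (illustrative values):

```json
{
  "status": 2,
  "to_verify": "fixed-password-part"
}
```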
If the hook defines an external program it can read the following environment variables:
- `SFTPGO_AUTHD_USERNAME`
- `SFTPGO_AUTHD_PASSWORD`
- `SFTPGO_AUTHD_IP`
- `SFTPGO_AUTHD_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`
Previous global environment variables aren't cleared when the script is called. The content of these variables is _not_ quoted. They may contain special characters. They are under the control of a possibly malicious remote user.
The program must write, on its standard output, the expected JSON serialized response described above.
If the hook is an HTTP URL then it will be invoked as HTTP POST. The request body will contain a JSON serialized struct with the following fields:
- `username`
- `password`
- `ip`
- `protocol`, possible values are `SSH`, `FTP`, `DAV`
If authentication succeeds the HTTP response code must be 200 and the response body must contain the expected JSON serialized response described above.
The program hook must finish within 30 seconds; the HTTP hook timeout uses the global configuration for HTTP clients.
You can also restrict the hook scope using the `check_password_scope` configuration key:
- `0` means all supported protocols.
- `1` means SSH only
- `2` means FTP only
- `4` means WebDAV only
You can combine the scopes. For example, 6 means FTP and WebDAV.
An example check password program allowing 2FA using password + one time token can be found inside the source tree [checkpwd](../examples/OTP/authy/checkpwd) directory.
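For orientation, here is a minimal sketch of such a program in Python. The `verify_otp` function is a hypothetical placeholder, and the assumption that the token is the last 6 characters of the provided password is arbitrary; adapt both to your OTP backend:

```python
#!/usr/bin/env python3
import json
import os


def verify_otp(username, token):
    # hypothetical placeholder: validate the token against your OTP backend here
    return token == "123456"


password = os.environ.get("SFTPGO_AUTHD_PASSWORD", "")
username = os.environ.get("SFTPGO_AUTHD_USERNAME", "")
# assume the client sends "<fixed password><6 digit OTP>"
fixed_part, otp = password[:-6], password[-6:]

if verify_otp(username, otp):
    # partial success: SFTPGo will verify the fixed part against the stored password
    print(json.dumps({"status": 2, "to_verify": fixed_part}))
else:
    print(json.dumps({"status": 0}))
```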


@@ -1,6 +1,6 @@
# Custom Actions
The `actions` struct inside the "sftpd" configuration section allows to configure the actions for file operations and SSH commands.
The `actions` struct inside the "common" configuration section allows to configure the actions for file operations and SSH commands.
The `hook` can be defined as the absolute path of your program or an HTTP URL.
The `upload` condition includes both uploads to new files and overwrite of existing files. If an upload is aborted for quota limits SFTPGo tries to remove the partial file, so if the notification reports a zero size file and a quota exceeded error the file has been deleted. The `ssh_cmd` condition will be triggered after a command is successfully executed via SSH. `scp` will trigger the `download` and `upload` conditions and not `ssh_cmd`.
@@ -23,10 +23,11 @@ The external program can also read the following environment variables:
- `SFTPGO_ACTION_TARGET`, non-empty for `rename` `SFTPGO_ACTION`
- `SFTPGO_ACTION_SSH_CMD`, non-empty for `ssh_cmd` `SFTPGO_ACTION`
- `SFTPGO_ACTION_FILE_SIZE`, non-empty for `upload`, `download` and `delete` `SFTPGO_ACTION`
- `SFTPGO_ACTION_FS_PROVIDER`, `0` for local filesystem, `1` for S3 backend, `2` for Google Cloud Storage (GCS) backend, `3` for Azure Blob Storage backend
- `SFTPGO_ACTION_BUCKET`, non-empty for S3, GCS and Azure backends
- `SFTPGO_ACTION_ENDPOINT`, non-empty for S3 and Azure backends if configured. For Azure this is the SAS URL, if configured, otherwise the endpoint
- `SFTPGO_ACTION_STATUS`, integer. 0 means a generic error occurred. 1 means no error, 2 means quota exceeded error
- `SFTPGO_ACTION_PROTOCOL`, string. Possible values are `SSH`, `SFTP`, `SCP`, `FTP`, `DAV`
Previous global environment variables aren't cleared when the script is called.
The program must finish within 30 seconds.
@@ -39,10 +40,11 @@ If the `hook` defines an HTTP URL then this URL will be invoked as HTTP POST. Th
- `target_path`, not null for `rename` action
- `ssh_cmd`, not null for `ssh_cmd` action
- `file_size`, not null for `upload`, `download`, `delete` actions
- `fs_provider`, `0` for local filesystem, `1` for S3 backend, `2` for Google Cloud Storage (GCS) backend, `3` for Azure Blob Storage backend
- `bucket`, not null for S3, GCS and Azure backends
- `endpoint`, not null for S3 and Azure backends if configured. For Azure this is the SAS URL, if configured, otherwise the endpoint
- `status`, integer. 0 means a generic error occurred. 1 means no error, 2 means quota exceeded error
- `protocol`, string. Possible values are `SSH`, `FTP`, `DAV`
The HTTP request will use the global configuration for HTTP clients.
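As an illustration, the notification body for a successful upload to the local filesystem might look like the following sketch. The `action`, `username` and `path` fields are assumptions, they are not part of the field list shown above; check the full field list in the documentation:

```json
{
  "action": "upload",
  "username": "test_user",
  "path": "/srv/sftpgo/data/test_user/file.txt",
  "fs_provider": 0,
  "file_size": 65536,
  "status": 1,
  "protocol": "FTP"
}
```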
@@ -64,20 +66,7 @@ If the `hook` defines a path to an external program, then this program is invoke
The external program can also read the following environment variables:
- `SFTPGO_USER_ACTION`
- `SFTPGO_USER_USERNAME`
- `SFTPGO_USER`, user serialized as JSON with sensitive fields removed
Previous global environment variables aren't cleared when the script is called.
The program must finish within 15 seconds.

docs/dare.md Normal file

@@ -0,0 +1,19 @@
# Data At Rest Encryption (DARE)
SFTPGo supports data at-rest encryption via its `cryptfs` virtual file system. In this mode SFTPGo transparently encrypts and decrypts data (to/from the disk) on the fly during uploads and downloads, making sure that the files at rest on the server side are always encrypted.
Because of the way this works, when you set up an encrypted filesystem for a user you need to make sure it points to an empty path/directory (that has no files in it). Otherwise, SFTPGo would try to decrypt existing files that were never encrypted in the first place and fail.
SFTPGo's `cryptfs` is a tiny wrapper around [sio](https://github.com/minio/sio), therefore data is encrypted and authenticated using `AES-256-GCM` or `ChaCha20-Poly1305`. AES-GCM will be used if the CPU provides hardware support for it.
The only required configuration parameter is a `passphrase`: each file will be encrypted using a unique, randomly generated secret key derived from the given passphrase using the HMAC-based Extract-and-Expand Key Derivation Function (HKDF) as defined in [RFC 5869](http://tools.ietf.org/html/rfc5869). It is important to note that the per-object encryption key is never stored anywhere: it is derived from your `passphrase` and a randomly generated initialization vector just before encryption/decryption. The initialization vector is stored with the file.
The passphrase is stored encrypted itself according to your [KMS configuration](./kms.md) and is required to decrypt any file encrypted using an encryption key derived from it.
The encrypted filesystem has some limitations compared to the local, unencrypted, one:
- Upload resume is not supported.
- Opening a file for both reading and writing at the same time is not supported, so clients that require advanced filesystem-like features such as `sshfs` are not supported either.
- Truncate is not supported.
- System commands such as `git` or `rsync` are not supported: they will store data unencrypted.
- Virtual folders are not implemented for now, if you are interested in this feature, please consider submitting a well written pull request (fully covered by test cases) or sponsoring this development. We could add a filesystem configuration to each virtual folder so we can mount encrypted or cloud backends as subfolders for local filesystems and vice versa.
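For illustration, a user's filesystem configuration selecting the encrypted backend might look like this sketch. The provider number and field names are assumptions to be checked against the OpenAPI schema; the passphrase is shown in plain text for brevity, it will be stored encrypted according to your KMS configuration:

```json
"filesystem": {
  "provider": 4,
  "cryptconfig": {
    "passphrase": {
      "status": "Plain",
      "payload": "your strong passphrase here"
    }
  }
}
```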

docs/defender.md Normal file

@@ -0,0 +1,63 @@
# Defender
The built-in `defender` allows you to configure an auto-blocking policy for SFTPGo and thus helps to prevent DoS (Denial of Service) and brute force password guessing.
If enabled it will protect SFTP, FTP and WebDAV services and it will automatically block hosts (IP addresses) that continually fail to log in or attempt to connect.
You can configure a score for each event type:
- `score_valid`, defines the score for valid login attempts, eg. user accounts that exist. Default `1`.
- `score_invalid`, defines the score for invalid login attempts, eg. non-existent user accounts or client disconnected for inactivity without authentication attempts. Default `2`.
And then you can configure:
- `observation_time`, defines the time window, in minutes, for tracking client errors.
- `threshold`, defines the threshold value before banning a host.
- `ban_time`, defines the time to ban a client, in minutes
So a host is banned, for `ban_time` minutes, if it has exceeded the defined threshold during the last `observation_time` minutes.
A banned IP has no score, it makes no sense to accumulate host events in memory for an already banned IP address.
If an already banned client tries to log in again, its ban time will be incremented according to the `ban_time_increment` configuration.
The `ban_time_increment` is calculated as a percentage of `ban_time`, so if `ban_time` is 30 minutes and `ban_time_increment` is 50 the host will be banned for an additional 15 minutes. You can also specify values greater than 100 for `ban_time_increment` if you want to increase the penalty for already banned hosts.
The `defender` will keep in memory both the host scores and the banned hosts, you can limit the memory usage using the `entries_soft_limit` and `entries_hard_limit` configuration keys.
The REST API allows:
- to retrieve the score for an IP address
- to retrieve the ban time for an IP address
- to unban an IP address
We don't return the whole list of the banned IP addresses or all stored scores because we store them as a hash map, and iterating over all the keys of a hash map is not a fast operation and would slow down the recording of new events.
The `defender` can also load a permanent block list and/or a safe list of ip addresses/networks from a file:
- `safelist_file`, defines the path to a file containing a list of ip addresses and/or networks to never ban.
- `blocklist_file`, defines the path to a file containing a list of ip addresses and/or networks to always ban.
These lists must be stored as JSON conforming to the following schema:
- `addresses`, list of strings. Each string must be a valid IPv4/IPv6 address.
- `networks`, list of strings. Each string must be a valid IPv4/IPv6 CIDR address.
Here is a small example:
```json
{
"addresses":[
"192.0.2.1",
"2001:db8::68"
],
"networks":[
"192.0.2.0/24",
"2001:db8:1234::/48"
]
}
```
These lists will be loaded in memory for faster lookups. The REST API queries "live" data and not these lists.
The `defender` is optimized for fast, constant-time lookups; however, as it keeps all the lists and entries in memory, you should carefully measure the memory requirements for your use case.
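Putting it all together, a defender configuration, as it would appear inside the "common" section of the main configuration file, might look like this sketch (values are illustrative):

```json
"defender": {
  "enabled": true,
  "ban_time": 30,
  "ban_time_increment": 50,
  "threshold": 15,
  "score_invalid": 2,
  "score_valid": 1,
  "observation_time": 30,
  "entries_soft_limit": 100,
  "entries_hard_limit": 150,
  "safelist_file": "",
  "blocklist_file": ""
}
```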


@@ -5,18 +5,20 @@ To enable dynamic user modification, you must set the absolute path of your prog
The external program can read the following environment variables to get info about the user trying to login:
- `SFTPGO_LOGIND_USER`, it contains the user trying to login serialized as JSON. A JSON serialized user id equal to zero means the user does not exist inside SFTPGo
- `SFTPGO_LOGIND_METHOD`, possible values are: `password`, `publickey` and `keyboard-interactive`
- `SFTPGO_LOGIND_IP`, ip address of the user trying to login
- `SFTPGO_LOGIND_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`
The program must write, on its standard output:
- an empty string (or no response at all) if the user should not be created/updated
- or the SFTPGo user, JSON serialized, if you want to create or update the given user
If the hook is an HTTP URL then it will be invoked as HTTP POST. The login method, the used protocol and the ip address of the user trying to login are added to the query string, for example `<http_url>?login_method=password&ip=1.2.3.4&protocol=SSH`.
The request body will contain the user trying to login serialized as JSON. If no modification is needed the HTTP response code must be 204, otherwise the response code must be 200 and the response body a valid SFTPGo user serialized as JSON.
Actions defined for user's updates will not be executed in this case and an already logged in user with the same username will not be disconnected, you have to handle these things yourself.
The JSON response can include only the fields to update instead of the full user. For example, if you want to disable the user, you can return a response like this:
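An illustrative response, assuming that setting `status` to `0` disables the user:

```json
{
  "status": 0
}
```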
@@ -30,8 +32,8 @@ The program hook must finish within 30 seconds, the HTTP hook will use the globa
If an error happens while executing the hook then login will be denied.
"Dynamic user creation or modification" and "External Authentication" are mutally exclusive, they are quite similar, the difference is that "External Authentication" returns an already authenticated user while using "Dynamic users modification" you simply create or update a user. The authentication will be checked inside SFTPGo.
In other words while using "External Authentication" the external program receives the credentials of the user trying to login (for example the clear text password) and it need to validate them. While using "Dynamic users modification" the pre-login program receives the user stored inside the dataprovider (it includes the hashed password if any) and it can modify it, after the modification SFTPGo will check the credentials of the user trying to login.
"Dynamic user creation or modification" and "External Authentication" are mutually exclusive, they are quite similar, the difference is that "External Authentication" returns an already authenticated user while using "Dynamic users modification" you simply create or update a user. The authentication will be checked inside SFTPGo.
In other words while using "External Authentication" the external program receives the credentials of the user trying to login (for example the cleartext password) and it needs to validate them. While using "Dynamic users modification" the pre-login program receives the user stored inside the dataprovider (it includes the hashed password if any) and it can modify it, after the modification SFTPGo will check the credentials of the user trying to login.
Let's see a very basic example. Our sample program will grant access to the existing user `test_user` only in the time range 10:00-18:00. Other users will not be modified since the program will terminate with no output.


@@ -5,33 +5,37 @@ To enable external authentication, you must set the absolute path of your authen
The external program can read the following environment variables to get info about the user trying to authenticate:
- `SFTPGO_AUTHD_USERNAME`
- `SFTPGO_AUTHD_IP`
- `SFTPGO_AUTHD_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`
- `SFTPGO_AUTHD_PASSWORD`, not empty for password authentication
- `SFTPGO_AUTHD_PUBLIC_KEY`, not empty for public key authentication
- `SFTPGO_AUTHD_KEYBOARD_INTERACTIVE`, not empty for keyboard interactive authentication
Previous global environment variables aren't cleared when the script is called. The content of these variables is _not_ quoted. They may contain special characters. They are under the control of a possibly malicious remote user.
The program must write, on its standard output, a valid SFTPGo user serialized as JSON if the authentication succeeds or a user with an empty username if the authentication fails.
If the hook is an HTTP URL then it will be invoked as HTTP POST. The request body will contain a JSON serialized struct with the following fields:
- `username`
- `ip`
- `protocol`, possible values are `SSH`, `FTP`, `DAV`
- `password`, not empty for password authentication
- `public_key`, not empty for public key authentication
- `keyboard_interactive`, not empty for keyboard interactive authentication
If authentication succeeds the HTTP response code must be 200 and the response body a valid SFTPGo user serialized as JSON. If the authentication fails the HTTP response code must be != 200 or the response body must be empty.
If the authentication succeeds, the user will be automatically added/updated inside the defined data provider. Actions defined for users added/updated will not be executed in this case.
The external hook should check authentication only. If there are login restrictions such as user disabled, expired, or login allowed only from specific IP addresses, it is enough to populate the matching user fields, and these conditions will be checked in the same way as for built-in users.
The program hook must finish within 30 seconds; the HTTP hook timeout uses the global configuration for HTTP clients.
This method is slower than built-in authentication, but it's very flexible as anyone can easily write their own authentication hooks.
You can also restrict the authentication scope for the hook using the `external_auth_scope` configuration key:
- `0` means all supported authentication scopes. The external hook will be used for password, public key and keyboard interactive authentication
- `1` means passwords only
- `2` means public keys only
- `4` means keyboard interactive only
You can combine the scopes. For example, 3 means password and public key, 5 means password and keyboard interactive, and so on.
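For orientation, here is a minimal sketch of an external authentication program in Python, handling the password scope only. The credential check and the returned user fields are illustrative; validate the user JSON against the OpenAPI schema:

```python
#!/usr/bin/env python3
import json
import os

username = os.environ.get("SFTPGO_AUTHD_USERNAME", "")
password = os.environ.get("SFTPGO_AUTHD_PASSWORD", "")

# hypothetical credential check, replace with a lookup against your backend
if username == "test_user" and password == "secret":
    user = {
        "status": 1,
        "username": username,
        "home_dir": "/srv/sftpgo/data/test_user",
        "permissions": {"/": ["*"]},
    }
    print(json.dumps(user))
else:
    # an empty username means authentication failed
    print(json.dumps({"username": ""}))
```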


@@ -9,8 +9,9 @@ Usage:
sftpgo [command]
Available Commands:
gen A collection of useful generators
help Help about any command
initprovider Initializes and/or updates the configured data provider
portable Serve a single directory
serve Start the SFTP Server
@@ -23,48 +24,37 @@ Flags:
The `serve` command supports the following flags:
- `--config-dir` string. Location of the config dir. This directory is used as the base for files with a relative path, eg. the private keys for the SFTP server or the SQLite database if you use SQLite as data provider. The configuration file, if not explicitly set, is looked for in this dir. We support reading from JSON, TOML, YAML, HCL, envfile and Java properties config files. The default config file name is `sftpgo` and therefore `sftpgo.json`, `sftpgo.yaml` and so on are searched. The default value is the working directory (".") or the value of `SFTPGO_CONFIG_DIR` environment variable.
- `--config-file` string. This flag explicitly defines the path, name and extension of the config file. It must be an absolute path or a path relative to the configuration directory. The specified file name must have a supported extension (JSON, YAML, TOML, HCL or Java properties). The default value is empty or the value of `SFTPGO_CONFIG_FILE` environment variable.
- `--loaddata-from` string. Load users and folders from this file. The file must be specified as absolute path and it must contain a backup obtained using the `dumpdata` REST API or compatible content. The default value is empty or the value of `SFTPGO_LOADDATA_FROM` environment variable.
- `--loaddata-clean` boolean. Determine if the loaddata-from file should be removed after a successful load. Default `false` or the value of `SFTPGO_LOADDATA_CLEAN` environment variable (1 or `true`, 0 or `false`).
- `--loaddata-mode`, integer. Restore mode for data to load. 0 means new users are added, existing users are updated. 1 means new users are added, existing users are not modified. Default 1 or the value of `SFTPGO_LOADDATA_MODE` environment variable.
- `--loaddata-scan`, integer. Quota scan mode after data load. 0 means no quota scan. 1 means quota scan. 2 means scan quota if the user has quota restrictions. Default 0 or the value of `SFTPGO_LOADDATA_QUOTA_SCAN` environment variable.
- `--log-compress` boolean. Determine if the rotated log files should be compressed using gzip. Default `false` or the value of `SFTPGO_LOG_COMPRESS` environment variable (1 or `true`, 0 or `false`). It is unused if `log-file-path` is empty.
- `--log-file-path` string. Location for the log file, default "sftpgo.log" or the value of `SFTPGO_LOG_FILE_PATH` environment variable. Leave empty to write logs to the standard error.
- `--log-max-age` int. Maximum number of days to retain old log files. Default 28 or the value of `SFTPGO_LOG_MAX_AGE` environment variable. It is unused if `log-file-path` is empty.
- `--log-max-backups` int. Maximum number of old log files to retain. Default 5 or the value of `SFTPGO_LOG_MAX_BACKUPS` environment variable. It is unused if `log-file-path` is empty.
- `--log-max-size` int. Maximum size in megabytes of the log file before it gets rotated. Default 10 or the value of `SFTPGO_LOG_MAX_SIZE` environment variable. It is unused if `log-file-path` is empty.
- `--log-verbose` boolean. Enable verbose logs. Default `true` or the value of `SFTPGO_LOG_VERBOSE` environment variable (1 or `true`, 0 or `false`).
- `--profiler` boolean. Enable the built-in profiler. The profiler will be accessible via HTTP/HTTPS using the base URL "/debug/pprof/". Default `false` or the value of `SFTPGO_PROFILER` environment variable (1 or `true`, 0 or `false`).
Log file can be rotated on demand sending a `SIGUSR1` signal on Unix based systems and using the command `sftpgo service rotatelogs` on Windows.
If you don't configure any private host key, the daemon will use `id_rsa`, `id_ecdsa` and `id_ed25519` in the configuration directory. If these files don't exist, the daemon will attempt to autogenerate them. The server supports any private key format supported by [`crypto/ssh`](https://github.com/golang/crypto/blob/master/ssh/keys.go#L33).
The `gen` command allows to generate completion scripts for your shell and man pages.
## Configuration file
The configuration file contains the following sections:
- **"sftpd"**, the configuration for the SFTP server
- `bind_port`, integer. The port used for serving SFTP requests. Default: 2022
- `bind_address`, string. Leave blank to listen on all available network interfaces. Default: ""
- **"common"**, configuration parameters shared among all the supported protocols
- `idle_timeout`, integer. Time in minutes after which an idle client will be disconnected. 0 means disabled. Default: 15
- `upload_mode` integer. 0 means standard: the files are uploaded directly to the requested path. 1 means atomic: files are uploaded to a temporary path and renamed to the requested path when the client ends the upload. Atomic mode avoids problems such as a web server that serves partial files when the files are being uploaded. In atomic mode, if there is an upload error, the temporary file is deleted and so the requested upload path will not contain a partial file. 2 means atomic with resume support: same as atomic but if there is an upload error, the temporary file is renamed to the requested path and not deleted. This way, a client can reconnect and resume the upload.
- `actions`, struct. It contains the command to execute and/or the HTTP URL to notify and the trigger conditions. See the "Custom Actions" paragraph for more details
- `actions`, struct. It contains the command to execute and/or the HTTP URL to notify and the trigger conditions. See [Custom Actions](./custom-actions.md) for more details
- `execute_on`, list of strings. Valid values are `download`, `upload`, `pre-delete`, `delete`, `rename`, `ssh_cmd`. Leave empty to disable actions.
- `command`, string. Deprecated please use `hook`.
- `http_notification_url`, a valid URL. Deprecated please use `hook`.
- `hook`, string. Absolute path to the command to execute or HTTP URL to notify.
- `setstat_mode`, integer. 0 means "normal mode": requests for changing permissions, owner/group and access/modification times are executed. 1 means "ignore mode": requests for changing permissions, owner/group and access/modification times are silently ignored. 2 means "ignore mode for cloud based filesystems": requests for changing permissions, owner/group and access/modification times are silently ignored for cloud filesystems and executed for local filesystem.
- `proxy_protocol`, integer. Support for [HAProxy PROXY protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt). If you are running SFTPGo behind a proxy server such as HAProxy, AWS ELB or NGINX, you can enable the proxy protocol. It provides a convenient way to safely transport connection information such as a client's address across multiple layers of NAT or TCP proxies to get the real client IP address instead of the proxy IP. Both protocol versions 1 and 2 are supported. If the proxy protocol is enabled in SFTPGo then you have to enable the protocol in your proxy configuration too. For example, for HAProxy, add `send-proxy` or `send-proxy-v2` to each server configuration line. The following modes are supported:
- 0, disabled
- 1, enabled. Proxy header will be used and requests without proxy header will be accepted
@@ -72,9 +62,97 @@ The configuration file contains the following sections:
- `proxy_allowed`, List of IP addresses and IP ranges allowed to send the proxy header:
- If `proxy_protocol` is set to 1 and we receive a proxy header from an IP that is not in the list then the connection will be accepted and the header will be ignored
- If `proxy_protocol` is set to 2 and we receive a proxy header from an IP that is not in the list then the connection will be rejected
- `post_connect_hook`, string. Absolute path to the command to execute or HTTP URL to notify. See [Post connect hook](./post-connect-hook.md) for more details. Leave empty to disable
- `max_total_connections`, integer. Maximum number of concurrent client connections. 0 means unlimited
- `defender`, struct containing the defender configuration. See [Defender](./defender.md) for more details.
- `enabled`, boolean. Default `false`.
- `ban_time`, integer. Ban time in minutes.
- `ban_time_increment`, integer. Ban time increment, as a percentage, if a banned host tries to connect again.
- `threshold`, integer. Threshold value for banning a client.
- `score_invalid`, integer. Score for invalid login attempts, eg. non-existent user accounts or client disconnected for inactivity without authentication attempts.
- `score_valid`, integer. Score for valid login attempts, eg. user accounts that exist.
- `observation_time`, integer. Defines the time window, in minutes, for tracking client errors. A host is banned if it has exceeded the defined threshold during the last observation time minutes.
- `entries_soft_limit`, integer.
- `entries_hard_limit`, integer. The number of banned IPs and host scores kept in memory will vary between the soft and hard limit.
- `safelist_file`, string. Path to a file containing a list of ip addresses and/or networks to never ban.
- `blocklist_file`, string. Path to a file containing a list of ip addresses and/or networks to always ban. The lists can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows. A host that is already banned will not be automatically unbanned if you add it to the safe list, you have to unban it using the REST API.
- **"sftpd"**, the configuration for the SFTP server
- `bindings`, list of structs. Each struct has the following fields:
- `port`, integer. The port used for serving SFTP requests. 0 means disabled. Default: 2022
- `address`, string. Leave blank to listen on all available network interfaces. Default: ""
- `apply_proxy_config`, boolean. If enabled the common proxy configuration, if any, will be applied. Default `true`
- `bind_port`, integer. Deprecated, please use `bindings`
- `bind_address`, string. Deprecated, please use `bindings`
- `idle_timeout`, integer. Deprecated, please use the same key in `common` section.
- `max_auth_tries` integer. Maximum number of authentication attempts permitted per connection. If set to a negative number, the number of attempts is unlimited. If set to zero, the number of attempts is limited to 6.
- `banner`, string. Identification string used by the server. Leave empty to use the default banner. Default `SFTPGo_<version>`, for example `SSH-2.0-SFTPGo_0.9.5`
- `upload_mode` integer. Deprecated, please use the same key in `common` section.
- `actions`, struct. Deprecated, please use the same key in `common` section.
- `keys`, struct array. Deprecated, please use `host_keys`.
- `private_key`, path to the private key file. It can be a path relative to the config dir or an absolute one.
- `host_keys`, list of strings. It contains the daemon's private host keys. Each host key can be defined as a path relative to the configuration directory or an absolute one. If empty, the daemon will search or try to generate `id_rsa`, `id_ecdsa` and `id_ed25519` keys inside the configuration directory. If you configure absolute paths to files named `id_rsa`, `id_ecdsa` and/or `id_ed25519` then SFTPGo will try to generate these keys using the default settings.
- `kex_algorithms`, list of strings. Available KEX (Key Exchange) algorithms in preference order. Leave empty to use default values. The supported values can be found here: [`crypto/ssh`](https://github.com/golang/crypto/blob/master/ssh/common.go#L46 "Supported kex algos")
- `ciphers`, list of strings. Allowed ciphers. Leave empty to use default values. The supported values can be found here: [crypto/ssh](https://github.com/golang/crypto/blob/master/ssh/common.go#L28 "Supported ciphers")
- `macs`, list of strings. Available MAC (message authentication code) algorithms in preference order. Leave empty to use default values. The supported values can be found here: [crypto/ssh](https://github.com/golang/crypto/blob/master/ssh/common.go#L84 "Supported MACs")
- `trusted_user_ca_keys`, list of public keys paths of certificate authorities that are trusted to sign user certificates for authentication. The paths can be absolute or relative to the configuration directory.
- `login_banner_file`, path to the login banner file. The contents of the specified file, if any, are sent to the remote user before authentication is allowed. It can be a path relative to the config dir or an absolute one. Leave empty to disable login banner.
- `setstat_mode`, integer. Deprecated, please use the same key in `common` section.
- `enabled_ssh_commands`, list of enabled SSH commands. `*` enables all supported commands. More information can be found [here](./ssh-commands.md).
- `keyboard_interactive_auth_hook`, string. Absolute path to an external program or an HTTP URL to invoke for keyboard interactive authentication. See [Keyboard Interactive Authentication](./keyboard-interactive.md) for more details.
- `password_authentication`, boolean. Set to false to disable password authentication. This setting will disable multi-step authentication method using public key + password too. It is useful for public key only configurations if you need to manage old clients that will not attempt to authenticate with public keys if the password login method is advertised. Default: true.
- `proxy_protocol`, integer. Deprecated, please use the same key in `common` section.
- `proxy_allowed`, list of strings. Deprecated, please use the same key in `common` section.
- **"ftpd"**, the configuration for the FTP server
- `bindings`, list of structs. Each struct has the following fields:
- `port`, integer. The port used for serving FTP requests. 0 means disabled. Default: 0.
- `address`, string. Leave blank to listen on all available network interfaces. Default: "".
- `apply_proxy_config`, boolean. If enabled the common proxy configuration, if any, will be applied. Default `true`.
- `tls_mode`, integer. 0 means accept both cleartext and encrypted sessions. 1 means TLS is required for both control and data connection. 2 means implicit TLS. Do not enable this blindly, please check that a proper TLS config is in place if you set `tls_mode` to a value different from 0.
- `force_passive_ip`, ip address. External IP address to expose for passive connections. Leave empty to autodetect. Default: "".
- `client_auth_type`, integer. Set to `1` to require client certificate authentication in addition to FTP authentication. You need to define at least a certificate authority for this to work. Default: 0.
- `bind_port`, integer. Deprecated, please use `bindings`
- `bind_address`, string. Deprecated, please use `bindings`
- `banner`, string. Greeting banner displayed when a connection first comes in. Leave empty to use the default banner. Default `SFTPGo <version> ready`, for example `SFTPGo 1.0.0-dev ready`.
- `banner_file`, path to the banner file. The contents of the specified file, if any, are displayed when someone connects to the server. It can be a path relative to the config dir or an absolute one. If set, it overrides the banner string provided by the `banner` option. Leave empty to disable.
- `active_transfers_port_non_20`, boolean. Do not impose the port 20 for active data transfers. Enabling this option allows to run SFTPGo with fewer privileges. Default: false.
- `force_passive_ip`, ip address. Deprecated, please use `bindings`
- `passive_port_range`, struct containing the key `start` and `end`. Port Range for data connections. Random if not specified. Default range is 50000-50100.
- `disable_active_mode`, boolean. Set to `true` to disable active FTP, default `false`.
- `enable_site`, boolean. Set to true to enable the FTP SITE command. We support `chmod` and `symlink` if SITE support is enabled. Default `false`
- `hash_support`, integer. Set to `1` to enable FTP commands that allow to calculate the hash value of files. These FTP commands will be enabled: `HASH`, `XCRC`, `MD5/XMD5`, `XSHA/XSHA1`, `XSHA256`, `XSHA512`. Please keep in mind that to calculate the hash we need to read the whole file, for remote backends this means downloading the file, for the encrypted backend this means decrypting the file. Default `0`.
- `combine_support`, integer. Set to 1 to enable support for the non standard `COMB` FTP command. Combine is only supported for local filesystem, for cloud backends it has no advantage as it will download the partial files and will upload the combined one. Cloud backends natively support multipart uploads. Default `0`.
- `certificate_file`, string. Certificate for FTPS. This can be an absolute path or a path relative to the config dir.
- `certificate_key_file`, string. Private key matching the above certificate. This can be an absolute path or a path relative to the config dir. A certificate and the private key are required to enable explicit and implicit TLS. Certificate and key files can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows.
- `ca_certificates`, list of strings. Set of root certificate authorities to be used to verify client certificates.
- `ca_revocation_lists`, list of strings. Set of revocation lists, one for each root CA, to be used to check if a client certificate has been revoked. The revocation lists can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows.
- `tls_mode`, integer. Deprecated, please use `bindings`
- **"webdavd"**, the configuration for the WebDAV server, more info [here](./webdav.md)
- `bindings`, list of structs. Each struct has the following fields:
- `port`, integer. The port used for serving WebDAV requests. 0 means disabled. Default: 0.
- `address`, string. Leave blank to listen on all available network interfaces. Default: "".
- `enable_https`, boolean. Set to `true` and provide both a certificate and a key file to enable HTTPS connection for this binding. Default `false`.
- `client_auth_type`, integer. Set to `1` to require client certificate authentication in addition to basic authentication. You need to define at least a certificate authority for this to work. Default: 0.
- `bind_port`, integer. Deprecated, please use `bindings`.
- `bind_address`, string. Deprecated, please use `bindings`.
- `certificate_file`, string. Certificate for WebDAV over HTTPS. This can be an absolute path or a path relative to the config dir.
- `certificate_key_file`, string. Private key matching the above certificate. This can be an absolute path or a path relative to the config dir. A certificate and a private key are required to enable HTTPS connections. Certificate and key files can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows.
- `ca_certificates`, list of strings. Set of root certificate authorities to be used to verify client certificates.
- `ca_revocation_lists`, list of strings. Set of revocation lists, one for each root CA, to be used to check if a client certificate has been revoked. The revocation lists can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows.
- `cors` struct containing CORS configuration. SFTPGo uses [Go CORS handler](https://github.com/rs/cors), please refer to upstream documentation for fields meaning and their default values.
- `enabled`, boolean, set to true to enable CORS.
- `allowed_origins`, list of strings.
- `allowed_methods`, list of strings.
- `allowed_headers`, list of strings.
- `exposed_headers`, list of strings.
- `allow_credentials` boolean.
- `max_age`, integer.
- `cache` struct containing cache configuration for the authenticated users.
- `enabled`, boolean, set to true to enable user caching. Default: true.
- `expiration_time`, integer. Expiration time, in minutes, for the cached users. 0 means unlimited. Default: 0.
- `max_size`, integer. Maximum number of users to cache. 0 means unlimited. Default: 50.
- **"data_provider"**, the configuration for the data provider
- `driver`, string. Supported drivers are `sqlite`, `mysql`, `postgresql`, `bolt`, `memory`
- `name`, string. Database name. For driver `sqlite` this can be the database name relative to the config dir or the absolute path to the SQLite database. For driver `memory` this is the (optional) path relative to the config dir or the absolute path to the provider dump, obtained using the `dumpdata` REST API, to load. This dump will be loaded at startup and can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows. The `memory` provider will not modify the provided file so quota usage and last login will not be persisted. If you plan to use a SQLite database over a `cifs` network share (this is not recommended in general) you must use the `nobrl` mount option otherwise you will get the `database is locked` error. Some users reported that the `bolt` provider works fine over `cifs` shares.
- `host`, string. Database host. Leave empty for drivers `sqlite`, `bolt` and `memory`
- `port`, integer. Database port. Leave empty for drivers `sqlite`, `bolt` and `memory`
- `username`, string. Database user. Leave empty for drivers `sqlite`, `bolt` and `memory`
@@ -82,41 +160,67 @@ The configuration file contains the following sections:
- `sslmode`, integer. Used for drivers `mysql` and `postgresql`. 0 disable SSL/TLS connections, 1 require ssl, 2 set ssl mode to `verify-ca` for driver `postgresql` and `skip-verify` for driver `mysql`, 3 set ssl mode to `verify-full` for driver `postgresql` and `preferred` for driver `mysql`
- `connectionstring`, string. Provide a custom database connection string. If not empty, this connection string will be used instead of building one using the previous parameters. Leave empty for drivers `bolt` and `memory`
- `sql_tables_prefix`, string. Prefix for SQL tables
- `manage_users`, integer. Set to 0 to disable users management, 1 to enable
- `track_quota`, integer. Set the preferred mode to track users quota between the following choices:
- 0, disable quota tracking. REST API to scan users home directories/virtual folders and update quota will do nothing
- 1, quota is updated each time a user uploads or deletes a file, even if the user has no quota restrictions
- 2, quota is updated each time a user uploads or deletes a file, but only for users with quota restrictions and for virtual folders. With this configuration, the `quota scan` and `folder_quota_scan` REST API can still be used to periodically update space usage for users without quota restrictions and for folders
- `pool_size`, integer. Sets the maximum number of open connections for `mysql` and `postgresql` driver. Default 0 (unlimited)
- `users_base_dir`, string. Users default base directory. If no home dir is defined while adding a new user, and this value is a valid absolute path, then the user home dir will be automatically defined as the path obtained joining the base dir and the username
- `actions`, struct. It contains the command to execute and/or the HTTP URL to notify and the trigger conditions. See [Custom Actions](./custom-actions.md) for more details
- `execute_on`, list of strings. Valid values are `add`, `update`, `delete`. `update` action will not be fired for internal updates such as the last login or the user quota fields.
- `command`, string. Deprecated please use `hook`.
- `http_notification_url`, a valid URL. Deprecated please use `hook`.
- `hook`, string. Absolute path to the command to execute or HTTP URL to notify.
- `external_auth_program`, string. Deprecated, please use `external_auth_hook`.
- `external_auth_hook`, string. Absolute path to an external program or an HTTP URL to invoke for users authentication. See the "External Authentication" paragraph for more details. Leave empty to disable.
- `external_auth_scope`, integer. 0 means all supported authetication scopes (passwords, public keys and keyboard interactive). 1 means passwords only. 2 means public keys only. 4 means key keyboard interactive only. The flags can be combined, for example 6 means public keys and keyboard interactive
- `external_auth_hook`, string. Absolute path to an external program or an HTTP URL to invoke for users authentication. See [External Authentication](./external-auth.md) for more details. Leave empty to disable.
- `external_auth_scope`, integer. 0 means all supported authentication scopes (passwords, public keys and keyboard interactive). 1 means passwords only. 2 means public keys only. 4 means keyboard interactive only. The flags can be combined, for example 6 means public keys and keyboard interactive
- `credentials_path`, string. It defines the directory for storing user provided credential files such as Google Cloud Storage credentials. This can be an absolute path or a path relative to the config dir
- `prefer_database_credentials`, boolean. When true, users' Google Cloud Storage credentials will be written to the data provider instead of disk, though pre-existing credentials on disk will be used as a fallback. When false, they will be written to the directory specified by `credentials_path`.
- `pre_login_program`, string. Deprecated, please use `pre_login_hook`.
- `pre_login_hook`, string. Absolute path to an external program or an HTTP URL to invoke to modify user details just before the login. See the "Dynamic user modification" paragraph for more details. Leave empty to disable.
- `pre_login_hook`, string. Absolute path to an external program or an HTTP URL to invoke to modify user details just before the login. See [Dynamic user modification](./dynamic-user-mod.md) for more details. Leave empty to disable.
- `post_login_hook`, string. Absolute path to an external program or an HTTP URL to invoke to notify a successful or failed login. See [Post-login hook](./post-login-hook.md) for more details. Leave empty to disable.
- `post_login_scope`, defines the scope for the post-login hook. 0 means notify both failed and successful logins. 1 means notify failed logins. 2 means notify successful logins.
- `check_password_hook`, string. Absolute path to an external program or an HTTP URL to invoke to check the user provided password. See [Check password hook](./check-password-hook.md) for more details. Leave empty to disable.
- `check_password_scope`, defines the scope for the check password hook. 0 means all protocols, 1 means SSH, 2 means FTP, 4 means WebDAV. You can combine the scopes, for example 6 means FTP and WebDAV.
- `password_hashing`, struct. It contains the configuration parameters to be used to generate the password hash. SFTPGo can verify passwords in several formats and uses the `argon2id` algorithm to hash passwords in plain-text before storing them inside the data provider. These options allow you to customize how the hash is generated.
- `argon2_options` struct containing the options for argon2id hashing algorithm. The `memory` and `iterations` parameters control the computational cost of hashing the password. The higher these figures are, the greater the cost of generating the hash and the longer the runtime. It also follows that the greater the cost will be for any attacker trying to guess the password. If the code is running on a machine with multiple cores, then you can decrease the runtime without reducing the cost by increasing the `parallelism` parameter. This controls the number of threads that the work is spread across.
- `memory`, unsigned integer. The amount of memory used by the algorithm (in kibibytes). Default: 65536.
- `iterations`, unsigned integer. The number of iterations over the memory. Default: 1.
- `parallelism`, unsigned 8 bit integer. The number of threads (or lanes) used by the algorithm. Default: 2.
- `update_mode`, integer. Defines how the database will be initialized/updated. 0 means automatically. 1 means manually using the initprovider sub-command.
- **"httpd"**, the configuration for the HTTP server used to serve REST API and to expose the built-in web interface
- `bindings`, list of structs. Each struct has the following fields:
- `port`, integer. The port used for serving HTTP requests. Default: 8080.
- `address`, string. Leave blank to listen on all available network interfaces. On \*NIX you can specify an absolute path to listen on a Unix-domain socket. Default: "127.0.0.1".
- `enable_web_admin`, boolean. Set to `false` to disable the built-in web admin for this binding. You also need to define `templates_path` and `static_files_path` to enable the built-in web admin interface. Default `true`.
- `enable_https`, boolean. Set to `true` and provide both a certificate and a key file to enable HTTPS connection for this binding. Default `false`.
- `client_auth_type`, integer. Set to `1` to require client certificate authentication in addition to JWT/Web authentication. You need to define at least a certificate authority for this to work. Default: 0.
- `bind_port`, integer. Deprecated, please use `bindings`.
- `bind_address`, string. Deprecated, please use `bindings`. Leave blank to listen on all available network interfaces. On \*NIX you can specify an absolute path to listen on a Unix-domain socket. Default: "127.0.0.1"
- `templates_path`, string. Path to the HTML web templates. This can be an absolute path or a path relative to the config dir
- `static_files_path`, string. Path to the static files for the web interface. This can be an absolute path or a path relative to the config dir. If both `templates_path` and `static_files_path` are empty the built-in web interface will be disabled
- `backups_path`, string. Path to the backup directory. This can be an absolute path or a path relative to the config dir. We don't allow backups in arbitrary paths for security reasons
- `auth_user_file`, string. Path to a file used to store usernames and passwords for basic authentication. This can be an absolute path or a path relative to the config dir. We support HTTP basic authentication, and the file format must conform to the one generated using the Apache `htpasswd` tool. The supported password formats are bcrypt (`$2y$` prefix) and md5 crypt (`$apr1$` prefix). If empty, HTTP authentication is disabled.
- `certificate_file`, string. Certificate for HTTPS. This can be an absolute path or a path relative to the config dir.
- `certificate_key_file`, string. Private key matching the above certificate. This can be an absolute path or a path relative to the config dir. If both the certificate and the private key are provided, the server will expect HTTPS connections. Certificate and key files can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows.
- `ca_certificates`, list of strings. Set of root certificate authorities to be used to verify client certificates.
- `ca_revocation_lists`, list of strings. Set of revocation lists, one for each root CA, to be used to check if a client certificate has been revoked. The revocation lists can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows.
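As a sketch, the first `bindings` entry could be configured via environment variables, following the mapping described later in this document (the variable names are assumed from that convention; the certificate paths are just examples):

```shell
export SFTPGO_HTTPD__BINDINGS__0__PORT=8080
export SFTPGO_HTTPD__BINDINGS__0__ADDRESS=""
export SFTPGO_HTTPD__BINDINGS__0__ENABLE_HTTPS=true
export SFTPGO_HTTPD__CERTIFICATE_FILE=/etc/sftpgo/ssl/sftpgo.crt
export SFTPGO_HTTPD__CERTIFICATE_KEY_FILE=/etc/sftpgo/ssl/sftpgo.key
```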
- **"telemetry"**, the configuration for the telemetry server, more details [below](#telemetry-server)
- `bind_port`, integer. The port used for serving HTTP requests. Set to 0 to disable HTTP server. Default: 10000
- `bind_address`, string. Leave blank to listen on all available network interfaces. On \*NIX you can specify an absolute path to listen on a Unix-domain socket. Default: "127.0.0.1"
- `enable_profiler`, boolean. Enable the built-in profiler. Default `false`
- `auth_user_file`, string. Path to a file used to store usernames and passwords for basic authentication. This can be an absolute path or a path relative to the config dir. We support HTTP basic authentication, and the file format must conform to the one generated using the Apache `htpasswd` tool. The supported password formats are bcrypt (`$2y$` prefix) and md5 crypt (`$apr1$` prefix). If empty, HTTP authentication is disabled. Authentication will be always disabled for the `/healthz` endpoint.
- `certificate_file`, string. Certificate for HTTPS. This can be an absolute path or a path relative to the config dir.
- `certificate_key_file`, string. Private key matching the above certificate. This can be an absolute path or a path relative to the config dir. If both the certificate and the private key are provided, the server will expect HTTPS connections. Certificate and key files can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows.
- **"http"**, the configuration for HTTP clients. HTTP clients are used for executing hooks such as the ones used for custom actions, external authentication and pre-login user modifications
- `timeout`, integer. Timeout specifies a time limit, in seconds, for requests.
- `ca_certificates`, list of strings. List of paths to extra CA certificates to trust. The paths can be absolute or relative to the config dir. Adding trusted CA certificates is a convenient way to use self-signed certificates without defeating the purpose of using TLS.
- `skip_tls_verify`, boolean. If enabled, the HTTP client accepts any TLS certificate presented by the server and any host name in that certificate. In this mode, TLS is susceptible to man-in-the-middle attacks. This should be used only for testing.
- **kms**, configuration for the Key Management Service, more details can be found [here](./kms.md)
- `secrets`
- `url`
- `master_key_path`
A full example showing the default config (in JSON format) can be found [here](../sftpgo.json).
If you want to use a private host key that uses an algorithm/setting different from the auto generated RSA/ECDSA keys, or more than two private keys, you can generate your own keys and replace the empty `keys` array with something like this:
```json
"host_keys": [
  "id_rsa",
  "id_ecdsa",
  "id_ed25519"
]
```
where `id_rsa`, `id_ecdsa` and `id_ed25519`, in this example, are files containing your generated keys. You can use absolute paths or paths relative to the configuration directory specified via the `--config-dir` serve flag. By default the configuration directory is the working directory.
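If you prefer to generate the host keys yourself, the standard OpenSSH tooling is one option; a minimal sketch (the empty passphrase is an assumption, adjust it to your own policy):

```shell
# generate RSA, ECDSA and Ed25519 host keys in the current directory
ssh-keygen -t rsa -b 4096 -f id_rsa -N ""
ssh-keygen -t ecdsa -b 256 -f id_ecdsa -N ""
ssh-keygen -t ed25519 -f id_ed25519 -N ""
```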
If you want the default host keys generation in a directory different from the config dir, please specify absolute paths to files named `id_rsa`, `id_ecdsa` or `id_ed25519` like this:
```json
"host_keys": [
"/etc/sftpgo/keys/id_rsa",
"/etc/sftpgo/keys/id_ecdsa"
"/etc/sftpgo/keys/id_ecdsa",
"/etc/sftpgo/keys/id_ed25519"
]
```
then SFTPGo will try to create `id_rsa`, `id_ecdsa` and `id_ed25519`, if they are missing, inside the directory `/etc/sftpgo/keys`.
The configuration can be read from JSON, TOML, YAML, HCL, envfile and Java properties config files. If your `config-file` flag is set to `sftpgo` (default value), you need to create a configuration file called `sftpgo.json` or `sftpgo.yaml` and so on inside `config-dir`.
You can also override all the available configuration options using environment variables.
Let's see some examples:
- To set sftpd `bind_port`, you need to define the env var `SFTPGO_SFTPD__BIND_PORT`
- To set the `port` for the first sftpd binding, you need to define the env var `SFTPGO_SFTPD__BINDINGS__0__PORT`
- To set the `execute_on` actions, you need to define the env var `SFTPGO_COMMON__ACTIONS__EXECUTE_ON`. For example `SFTPGO_COMMON__ACTIONS__EXECUTE_ON=upload,download`
Please note that, to override configuration options with environment variables, a configuration file containing the options to override is required. You can, for example, deploy the default configuration file and then override the options you need to customize using environment variables.
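For instance, a minimal sketch combining the examples above could look like this:

```shell
# override the first SFTP binding port and the common actions, then start the service
export SFTPGO_SFTPD__BINDINGS__0__PORT=2022
export SFTPGO_COMMON__ACTIONS__EXECUTE_ON=upload,download
sftpgo serve
```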
## Telemetry Server
The telemetry server exposes the following endpoints:
- `/healthz`, health information (for health checks)
- `/metrics`, Prometheus metrics
- `/debug/pprof`, if enabled via the `enable_profiler` configuration key, for profiling, more details [here](./profiling.md)
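For example, assuming the default telemetry `bind_port` (10000), you can query the health endpoint like this:

```shell
curl http://127.0.0.1:10000/healthz
```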

You can optionally specify a storage class.
The configured bucket must exist.
Google Cloud Storage is exposed over HTTPS so if you are running SFTPGo as a docker image please be sure to uncomment the line that installs `ca-certificates`, inside your `Dockerfile`, to be able to properly verify certificate authorities.
This backend is very similar to the [S3](./s3.md) backend, and it has the same limitations.

docs/howto/README.md
# Tutorials
Here we collect step-by-step tutorials. SFTPGo users are encouraged to contribute!
- [SFTPGo with PostgreSQL data provider and S3 backend](./postgresql-s3.md)
- [Expose Web Admin and REST API over HTTPS and password protected](./rest-api-https-auth.md)

docs/howto/postgresql-s3.md
# SFTPGo with PostgreSQL data provider and S3 backend
This tutorial shows the installation of SFTPGo on Ubuntu 20.04 (Focal Fossa) with PostgreSQL data provider and S3 backend. SFTPGo will run as an unprivileged (non-root) user. We assume that you want to serve a single S3 bucket and you want to assign different "virtual folders" of this bucket to different SFTPGo virtual users.
## Preliminary Note
Before proceeding further you need to have a basic minimal installation of Ubuntu 20.04.
## Install PostgreSQL
Before installing any packages on the Ubuntu system, update and upgrade all packages using the `apt` commands below.
```shell
sudo apt update
sudo apt upgrade
```
Install PostgreSQL with this `apt` command.
```shell
sudo apt -y install postgresql
```
Once the installation is complete, start the PostgreSQL service and enable it to start at boot.
```shell
sudo systemctl start postgresql
sudo systemctl enable postgresql
```
Next, check the PostgreSQL service using the following command.
```shell
systemctl status postgresql
```
## Configure PostgreSQL
PostgreSQL uses roles for user authentication and authorization, much like Unix-style permissions. By default, PostgreSQL creates a new user called `postgres` for basic authentication.
In this step, we will create a new PostgreSQL user for SFTPGo.
Login to the PostgreSQL shell using the command below.
```shell
sudo -i -u postgres psql
```
Next, create a new role `sftpgo` with the password `sftpgo_pg_pwd` using the following query.
```sql
create user "sftpgo" with encrypted password 'sftpgo_pg_pwd';
```
Next, create a new database `sftpgo.db` for the SFTPGo service using the following queries.
```sql
create database "sftpgo.db";
grant all privileges on database "sftpgo.db" to "sftpgo";
```
Exit from the PostgreSQL shell typing `\q`.
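You can optionally verify that the new role can reach the database; a quick sketch using a connection URI built from the credentials created above:

```shell
psql "postgresql://sftpgo:sftpgo_pg_pwd@127.0.0.1:5432/sftpgo.db" -c '\conninfo'
```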
## Install SFTPGo
To install SFTPGo you can use the PPA [here](https://launchpad.net/~sftpgo/+archive/ubuntu/sftpgo).
Start by adding the PPA.
```shell
sudo add-apt-repository ppa:sftpgo/sftpgo
sudo apt-get update
```
Next install SFTPGo.
```shell
sudo apt install sftpgo
```
After installation SFTPGo should already be running with default configuration and configured to start automatically at boot; check its status using the following command.
```shell
systemctl status sftpgo
```
## Configure AWS credentials
We assume that you want to serve a single S3 bucket and to assign different "virtual folders" of this bucket to different SFTPGo virtual users. In this case it is very convenient to configure a credentials file so SFTPGo will automatically use it and you don't need to specify the same AWS credentials for each user.
You can manually create the `/var/lib/sftpgo/.aws/credentials` file and write your AWS credentials like this.
```shell
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```
Alternatively, you can install the `AWS CLI` and manage the credentials using this tool.
```shell
sudo apt install awscli
```
and now set your credentials, region, and output format with the following command.
```shell
aws configure
```
Confirm that you can list your bucket contents with the following command.
```shell
aws s3 ls s3://mybucket
```
The AWS CLI will create the credential file in `~/.aws/credentials`. The SFTPGo service runs using the `sftpgo` system user whose home directory is `/var/lib/sftpgo` so you need to copy the credentials file to the sftpgo home directory and assign it the proper permissions.
```shell
sudo mkdir /var/lib/sftpgo/.aws
sudo cp ~/.aws/credentials /var/lib/sftpgo/.aws/
sudo chown -R sftpgo:sftpgo /var/lib/sftpgo/.aws
```
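As a quick check you can run the AWS CLI as the `sftpgo` system user; a sketch, where `-H` makes `sudo` set `HOME` to `/var/lib/sftpgo` so the copied credentials file is picked up, and `mybucket` is the example bucket used above:

```shell
sudo -H -u sftpgo aws s3 ls s3://mybucket
```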
## Configure SFTPGo
Now open the SFTPGo configuration.
```shell
sudo vi /etc/sftpgo/sftpgo.json
```
Search for the `data_provider` section and change it as follows.
```json
"data_provider": {
"driver": "postgresql",
"name": "sftpgo.db",
"host": "127.0.0.1",
"port": 5432,
"username": "sftpgo",
"password": "sftpgo_pg_pwd",
...
}
```
This way we set the PostgreSQL connection parameters.
If you want to connect to PostgreSQL over a Unix Domain socket you have to set the value `/var/run/postgresql` for the `host` configuration key instead of `127.0.0.1`.
You can further customize your configuration adding custom actions and other hooks. A full explanation of all configuration parameters can be found [here](../full-configuration.md).
Next, initialize the data provider with the following command.
```shell
$ sudo su - sftpgo -s /bin/bash -c 'sftpgo initprovider -c /etc/sftpgo'
2020-10-09T21:07:50.000 INF Initializing provider: "postgresql" config file: "/etc/sftpgo/sftpgo.json"
2020-10-09T21:07:50.000 INF updating database version: 1 -> 2
2020-10-09T21:07:50.000 INF updating database version: 2 -> 3
2020-10-09T21:07:50.000 INF updating database version: 3 -> 4
2020-10-09T21:07:50.000 INF Data provider successfully initialized/updated
```
The default sftpgo systemd service will start after the network target. In this setup it is more appropriate to start it after the PostgreSQL service, so edit the service using the following command.
```shell
sudo systemctl edit sftpgo.service
```
And override the unit definition with the following snippet.
```shell
[Unit]
After=postgresql.service
```
Confirm that `sftpgo.service` will start after `postgresql.service` with the next command.
```shell
$ systemctl show sftpgo.service | grep After=
After=postgresql.service systemd-journald.socket system.slice -.mount systemd-tmpfiles-setup.service network.target sysinit.target basic.target
```
Next restart the sftpgo service to use the new configuration and check that it is running.
```shell
sudo systemctl restart sftpgo
systemctl status sftpgo
```
## Add virtual users
The easiest way to add virtual users is to use the built-in Web interface.
You can expose the Web Admin interface over the network by replacing `"bind_address": "127.0.0.1"` in the `httpd` configuration section with `"bind_address": ""` and applying the change by restarting the SFTPGo service with the following command.
```shell
sudo systemctl restart sftpgo
```
Now open the Web Admin URL.
[http://127.0.0.1:8080/web](http://127.0.0.1:8080/web)
Click `Add` and fill in the user details; the minimum required parameters are:
- `Username`
- `Password` or `Public keys`
- `Permissions`
- `Home Dir` can be empty since we defined a default base dir
- Select `AWS S3 (Compatible)` as storage and then set `Bucket`, `Region` and optionally a `Key Prefix` if you want to restrict the user to a specific virtual folder in the bucket. The specified virtual folder does not need to be pre-created. You can leave `Access Key` and `Access Secret` empty since we defined global credentials for the `sftpgo` user and we use this system user to run the SFTPGo service.
You are done! Now you can connect to your SFTPGo instance using any compatible `sftp` client on port `2022`.
You can mix S3 users with local users, but please be aware that we are running the service as the unprivileged `sftpgo` system user, so if you set storage as `local` for an SFTPGo virtual user then the home directory for this user must be owned by the `sftpgo` system user. If you don't specify a home directory the default will be `/srv/sftpgo/data/<username>`, which should be appropriate.
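For example, assuming the default SFTP port `2022` and a virtual user created above (the username here is a placeholder), you can run a quick smoke test from the server itself:

```shell
sftp -P 2022 myuser@127.0.0.1
```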

# Expose Web Admin and REST API over HTTPS and password protected
This tutorial shows how to expose the SFTPGo web interface and REST API over HTTPS and password protect them.
## Preliminary Note
Before proceeding further you need to have a SFTPGo instance already configured and running.
We assume:
- you are running SFTPGo as service using the dedicated `sftpgo` system user
- the SFTPGo configuration directory is `/etc/sftpgo`
- you are running SFTPGo on Ubuntu 20.04, however these instructions can be easily adapted to other Linux variants.
## Authentication Setup
First install the `htpasswd` tool. We use this tool to create the users for the Web Admin/REST API.
```shell
sudo apt install apache2-utils
```
Create a user for web based authentication.
```shell
sudo htpasswd -B -c /etc/sftpgo/httpauth sftpgoweb
```
If you want to create additional users omit the `-c` option.
```shell
sudo htpasswd -B /etc/sftpgo/httpauth anotheruser
```
Next open the SFTPGo configuration.
```shell
sudo vi /etc/sftpgo/sftpgo.json
```
Search for the `httpd` section and change it as follows.
```json
"httpd": {
"bind_port": 8080,
"bind_address": "",
"templates_path": "templates",
"static_files_path": "static",
"backups_path": "backups",
"auth_user_file": "/etc/sftpgo/httpauth",
"certificate_file": "",
"certificate_key_file": ""
}
```
Setting an empty `bind_address` means that the service will listen on all available network interfaces and so it will be exposed over the network.
Now restart the SFTPGo service to apply the changes.
```shell
sudo systemctl restart sftpgo
```
You are done! Now login to the Web Admin interface using the username and password created above.
## Creation of a Self-Signed Certificate
For demonstration purposes we use a self-signed certificate here. These certificates are easy to make and do not cost money. However, they do not provide all of the security properties that certificates signed by a public Certificate Authority (CA) aim to provide; you are encouraged to use a certificate signed by a public CA.
When creating a new SSL certificate, you specify its validity period by changing the value `365` (days) in the command below; the certificate created here will therefore expire after one year.
```shell
sudo mkdir /etc/sftpgo/ssl
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/sftpgo/ssl/sftpgo.key -out /etc/sftpgo/ssl/sftpgo.crt
```
The above command creates both the self-signed SSL certificate and the server key that safeguards it, and places both of them into the `/etc/sftpgo/ssl` directory. Answer the questions to create the certificate and the key for HTTPS.
Assign the proper permissions to the generated certificates.
```shell
sudo chown -R sftpgo:sftpgo /etc/sftpgo/ssl
```
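You can optionally inspect the generated certificate, for example to double-check its subject and expiration date:

```shell
openssl x509 -in /etc/sftpgo/ssl/sftpgo.crt -noout -subject -dates
```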
## HTTPS Setup
Open the SFTPGo configuration.
```shell
sudo vi /etc/sftpgo/sftpgo.json
```
Search for the `httpd` section and change it as follows.
```json
"httpd": {
"bind_port": 8080,
"bind_address": "",
"templates_path": "templates",
"static_files_path": "static",
"backups_path": "backups",
"auth_user_file": "/etc/sftpgo/httpauth",
"certificate_file": "/etc/sftpgo/ssl/sftpgo.crt",
"certificate_key_file": "/etc/sftpgo/ssl/sftpgo.key"
}
```
Now restart the SFTPGo service to apply the changes.
```shell
sudo systemctl restart sftpgo
```
You are done! Now SFTPGo web admin and REST API are exposed over HTTPS and password protected.
You can easily replace the self-signed certificate used here with a properly signed certificate.
The certificate could change frequently if you use something like [Let's Encrypt](https://letsencrypt.org/). SFTPGo allows hot-reloading of certificates using the following command.
```shell
sudo systemctl reload sftpgo
```
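As a quick verification, you can query the web interface with `curl`; a sketch, where `--insecure` is needed only because the certificate is self-signed:

```shell
curl --insecure --user sftpgoweb https://127.0.0.1:8080/web
```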

To enable keyboard interactive authentication, you must set the absolute path of your keyboard interactive program or an HTTP URL in your configuration file.
The external program can read the following environment variables to get info about the user trying to authenticate:
- `SFTPGO_AUTHD_USERNAME`
- `SFTPGO_AUTHD_IP`
- `SFTPGO_AUTHD_PASSWORD`, this is the hashed password as stored inside the data provider
Previous global environment variables aren't cleared when the script is called. The content of these variables is _not_ quoted. They may contain special characters.
The request body will contain a JSON struct with the following fields:
- `request_id`, string. Unique request identifier
- `username`, string
- `ip`, string
- `password`, string. This is the hashed password as stored inside the data provider
- `answers`, list of string. It will be null for the first request
- `questions`, list of string. It will contain the previously asked questions. It will be null for the first request
The HTTP response code must be 200 and the body must contain the same JSON struct described for the program.
Let's see a basic sample, the configured hook is `http://127.0.0.1:8000/keyIntHookPwd`, as soon as the user tries to login, SFTPGo makes this HTTP POST request:
```shell
POST /keyIntHookPwd HTTP/1.1
Content-Length: 189
Content-Type: application/json
Accept-Encoding: gzip
{"request_id":"bq1r5r7cdrpd2qtn25ng","username":"a","password":"$pbkdf2-sha512$150000$ClOPkLNujMTL$XktKy0xuJsOfMYBz+f2bIyPTdbvDTSnJ1q+7+zp/HPq5Qojwp6kcpSIiVHiwvbi8P6HFXI/D3UJv9BLcnQFqPA=="}
{"request_id":"bq1r5r7cdrpd2qtn25ng","username":"a","ip":"127.0.0.1","password":"$pbkdf2-sha512$150000$ClOPkLNujMTL$XktKy0xuJsOfMYBz+f2bIyPTdbvDTSnJ1q+7+zp/HPq5Qojwp6kcpSIiVHiwvbi8P6HFXI/D3UJv9BLcnQFqPA=="}
```
as you can see, in this first request `answers` and `questions` are null.
```shell
Content-Length: 233
Content-Type: application/json
Accept-Encoding: gzip
{"request_id":"bq1r5r7cdrpd2qtn25ng","username":"a","password":"$pbkdf2-sha512$150000$ClOPkLNujMTL$XktKy0xuJsOfMYBz+f2bIyPTdbvDTSnJ1q+7+zp/HPq5Qojwp6kcpSIiVHiwvbi8P6HFXI/D3UJv9BLcnQFqPA==","answers":["OK"],"questions":["Password: "]}
{"request_id":"bq1r5r7cdrpd2qtn25ng","username":"a","ip":"127.0.0.1","password":"$pbkdf2-sha512$150000$ClOPkLNujMTL$XktKy0xuJsOfMYBz+f2bIyPTdbvDTSnJ1q+7+zp/HPq5Qojwp6kcpSIiVHiwvbi8P6HFXI/D3UJv9BLcnQFqPA==","answers":["OK"],"questions":["Password: "]}
```
Here is the HTTP response that instructs SFTPGo to ask for a new question:
```shell
HTTP/1.1 200 OK
Content-Length: 239
Content-Type: application/json
Accept-Encoding: gzip
{"request_id":"bq1r5r7cdrpd2qtn25ng","username":"a","password":"$pbkdf2-sha512$150000$ClOPkLNujMTL$XktKy0xuJsOfMYBz+f2bIyPTdbvDTSnJ1q+7+zp/HPq5Qojwp6kcpSIiVHiwvbi8P6HFXI/D3UJv9BLcnQFqPA==","answers":["answer2"],"questions":["Question2: "]}
{"request_id":"bq1r5r7cdrpd2qtn25ng","username":"a","ip":"127.0.0.1","password":"$pbkdf2-sha512$150000$ClOPkLNujMTL$XktKy0xuJsOfMYBz+f2bIyPTdbvDTSnJ1q+7+zp/HPq5Qojwp6kcpSIiVHiwvbi8P6HFXI/D3UJv9BLcnQFqPA==","answers":["answer2"],"questions":["Question2: "]}
```
Here is the final HTTP response that allows the user to log in:

```shell
HTTP/1.1 200 OK
Content-Length: 18
{"auth_result": 1}
```
An example keyboard interactive program that allows authentication using [Twilio Authy 2FA](https://www.twilio.com/docs/authy) can be found inside the source tree [authy](../examples/OTP/authy) directory.

docs/kms.md
# Key Management Services
SFTPGo stores sensitive data such as Cloud account credentials or passphrases to derive per-object encryption keys. These data are stored as ciphertext and only loaded to RAM in plaintext when needed.
## Supported Services for encryption and decryption
The `secrets` section of the `kms` configuration allows to configure how to encrypt and decrypt sensitive data. The following configuration parameters are available:
- `url` defines the URI to the KMS service
- `master_key_path` defines the absolute path to a file containing the master encryption key. This could be, for example, a Docker secret or a file protected with filesystem level permissions.
We use [Go CDK](https://gocloud.dev/howto/secrets/) to access several key management services in a portable way.
### Local provider
If the `url` is empty SFTPGo uses local encryption for keeping secrets. Internally, it uses the [NaCl secret box](https://pkg.go.dev/golang.org/x/crypto/nacl/secretbox) algorithm to perform encryption and authentication.
We first generate a random key, then the per-object encryption key is derived from this random key in the following way:
1. a master key is provided: the encryption key is derived using the HMAC-based Extract-and-Expand Key Derivation Function (HKDF) as defined in [RFC 5869](http://tools.ietf.org/html/rfc5869)
2. no master key is provided: the encryption key is derived as a simple hash of the random key. This is the default configuration.
For compatibility with SFTPGo versions 1.2.x and before we also support encryption based on `AES-256-GCM`. The data encrypted with this algorithm will never use the master key to keep backward compatibility.
### Google Cloud Key Management Service
To use keys from Google Cloud Platform's [Key Management Service](https://cloud.google.com/kms/) (GCP KMS) you have to use `gcpkms` as URL scheme like this:
```shell
gcpkms://projects/[PROJECT_ID]/locations/[LOCATION]/keyRings/[KEY_RING]/cryptoKeys/[KEY]
```
SFTPGo will use Application Default Credentials. See [here](https://cloud.google.com/docs/authentication/production) for alternatives such as environment variables.
The URL host+path are used as the key resource ID; see [here](https://cloud.google.com/kms/docs/object-hierarchy#key) for more details.
If a master key is provided we first encrypt the plaintext data using the local provider and then we encrypt the resulting payload using the Cloud provider and store this ciphertext.
### AWS Key Management Service
To use customer master keys from Amazon Web Services [Key Management Service](https://aws.amazon.com/kms/) (AWS KMS) you have to use `awskms` as URL scheme. You can use the key ID, alias, or Amazon Resource Name (ARN) to identify the key. You should specify the region query parameter to ensure your application connects to the correct region.
Here are some examples:
- By ID: `awskms://1234abcd-12ab-34cd-56ef-1234567890ab?region=us-east-1`
- By alias: `awskms://alias/ExampleAlias?region=us-east-1`
- By ARN: `arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34bc-56ef-1234567890ab?region=us-east-1`
SFTPGo will use the default AWS session. See [AWS Session](https://docs.aws.amazon.com/sdk-for-go/api/aws/session/) to learn about authentication alternatives such as environment variables.
If a master key is provided we first encrypt the plaintext data using the local provider and then we encrypt the resulting payload using the Cloud provider and store this ciphertext.
### HashiCorp Vault
To use the [transit secrets engine](https://www.vaultproject.io/docs/secrets/transit/index.html) in [Vault](https://www.vaultproject.io/) you have to use `hashivault` as URL scheme like this: `hashivault://mykey`.
The Vault server endpoint and authentication token are specified using the environment variables `VAULT_SERVER_URL` and `VAULT_SERVER_TOKEN`, respectively.
If a master key is provided we first encrypt the plaintext data using the local provider and then we encrypt the resulting payload using Vault and store this ciphertext.
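As a sketch, the Vault setup described above could be wired up like this (`SFTPGO_KMS__SECRETS__URL` is assumed from the configuration mapping, `VAULT_SERVER_URL`/`VAULT_SERVER_TOKEN` are the variables documented above, and all values are placeholders):

```shell
export SFTPGO_KMS__SECRETS__URL="hashivault://mykey"
export VAULT_SERVER_URL="https://vault.example.com:8200"
export VAULT_SERVER_TOKEN="s.example-token"
```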
### Notes
- The KMS configuration is global.
- If you set a master key you will be unable to decrypt the data without this key and the SFTPGo users that need the data as plain text will be unable to login.
- You can start using the local provider and then switch to an external one but you can't switch between external providers and still be able to decrypt the data encrypted using the previous provider.

The logs can be divided into the following categories:
- `connection_id` string. Unique connection identifier
- `protocol` string. `SFTP` or `SCP`
- **"command logs"**, SFTP/SCP command logs:
- `sender` string. `Rename`, `Rmdir`, `Mkdir`, `Symlink`, `Remove`, `Chmod`, `Chown`, `Chtimes`, `Truncate`, `SSHCommand`
- `level` string
- `username`, string
- `file_path` string
- `gid` integer. Valid for sender `Chown` otherwise -1
- `access_time` datetime as YYYY-MM-DDTHH:MM:SS. Valid for sender `Chtimes` otherwise empty
- `modification_time` datetime as YYYY-MM-DDTHH:MM:SS. Valid for sender `Chtimes` otherwise empty
- `size` int64. Valid for sender `Truncate` otherwise -1
- `ssh_command`, string. Valid for sender `SSHCommand` otherwise empty
- `connection_id` string. Unique connection identifier
- `protocol` string. `SFTP`, `SCP` or `SSH`
- `level` string
- `username`, string. Can be empty if the connection is closed before an authentication attempt
- `client_ip` string.
- `protocol` string. Possible values are `SSH`, `FTP`, `DAV`
- `login_type` string. Can be `publickey`, `password`, `keyboard-interactive`, `publickey+password`, `publickey+keyboard-interactive` or `no_auth_tryed`
- `error` string. Optional error description

# Metrics
SFTPGo exposes [Prometheus](https://prometheus.io/) metrics at the `/metrics` HTTP endpoint of the telemetry server.
Several counters and gauges are available, for example:
- Total uploads and downloads
- Process information like CPU, memory, file descriptor usage and start time
Please check the `/metrics` page for more details.
We expose the `/metrics` endpoint in both the HTTP server and the telemetry server; you should use the one from the telemetry server. The HTTP server `/metrics` endpoint is deprecated and will be removed in future releases.
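For a quick look at the exposed counters, assuming the default telemetry configuration (and assuming most metric names carry the `sftpgo_` prefix):

```shell
curl -s http://127.0.0.1:10000/metrics | grep '^sftpgo_' | head
```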

### Test configurations
- `Baseline`: SFTPGo version 0.9.6.
- `Devel`: SFTPGo commit b0ed1905918b9dcc22f9a20e89e354313f491734, compiled with Golang 1.14.2. This is basically the same as v1.0.0 as far as performance is concerned.
- `Optimized`: Various [optimizations](#Optimizations-applied) applied on top of `Devel`.
- `Balanced`: Two optimized instances, running on localhost, load balanced by HAProxy 2.1.3.
- `OpenSSH`: OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1d 10 Sep 2019

SFTPGo allows to share a single directory on demand using the `portable` subcommand:
```console
sftpgo portable --help
To serve the current working directory with auto generated credentials simply
use:
$ sftpgo portable
Please take a look at the usage below to customize the serving parameters
Usage:
sftpgo portable [flags]
Flags:
-C, --advertise-credentials If the SFTP/FTP service is
advertised via multicast DNS, this
flag allows to put username/password
inside the advertised TXT record
-S, --advertise-service Advertise configured services using
multicast DNS
--allowed-patterns stringArray Allowed file patterns case insensitive.
The format is:
/dir::pattern1,pattern2.
For example: "/somedir::*.jpg,a*b?.png"
--az-access-tier string Leave empty to use the default
container setting
--az-account-key string
--az-account-name string
--az-container string
--az-endpoint string Leave empty to use the default:
"blob.core.windows.net"
--az-key-prefix string Allows to restrict access to the
virtual folder identified by this
prefix and its contents
--az-sas-url string Shared access signature URL
--az-upload-concurrency int How many parts are uploaded in
parallel (default 2)
--az-upload-part-size int The buffer size for multipart uploads
(MB) (default 4)
--az-use-emulator
--crypto-passphrase string Passphrase for encryption/decryption
--denied-patterns stringArray Denied file patterns case insensitive.
The format is:
/dir::pattern1,pattern2.
For example: "/somedir::*.jpg,a*b?.png"
-d, --directory string Path to the directory to serve.
This can be an absolute path or a path
relative to the current directory
(default ".")
-f, --fs-provider int 0 => local filesystem
1 => AWS S3 compatible
2 => Google Cloud Storage
3 => Azure Blob Storage
4 => Encrypted local filesystem
5 => SFTP
--ftpd-cert string Path to the certificate file for FTPS
--ftpd-key string Path to the key file for FTPS
--ftpd-port int 0 means a random unprivileged port,
< 0 disabled (default -1)
--gcs-automatic-credentials int 0 means explicit credentials using
a JSON credentials file, 1 automatic
(default 1)
--gcs-bucket string
--gcs-credentials-file string Google Cloud Storage JSON credentials
file
--gcs-key-prefix string Allows to restrict access to the
virtual folder identified by this
prefix and its contents
--gcs-storage-class string
-h, --help help for portable
-l, --log-file-path string Leave empty to disable logging
-v, --log-verbose Enable verbose logs
-p, --password string Leave empty to use an auto generated
value
-g, --permissions strings User's permissions. "*" means any
permission (default [list,download])
-k, --public-key strings
--s3-access-key string
--s3-access-secret string
--s3-bucket string
--s3-endpoint string
--s3-key-prefix string Allows to restrict access to the
virtual folder identified by this
prefix and its contents
--s3-region string
--s3-storage-class string
--s3-upload-concurrency int How many parts are uploaded in
parallel (default 2)
--s3-upload-part-size int The buffer size for multipart uploads
(MB) (default 5)
--sftp-endpoint string SFTP endpoint as host:port for SFTP
provider
--sftp-fingerprints strings SFTP fingerprints to verify remote host
key for SFTP provider
--sftp-key-path string SFTP private key path for SFTP provider
--sftp-password string SFTP password for SFTP provider
--sftp-prefix string SFTP prefix allows restrict all
operations to a given path within the
remote SFTP server
--sftp-username string SFTP user for SFTP provider
-s, --sftpd-port int 0 means a random unprivileged port,
< 0 disabled
-c, --ssh-commands strings SSH commands to enable.
"*" means any supported SSH command
including scp
(default [md5sum,sha1sum,cd,pwd,scp])
-u, --username string Leave empty to use an auto generated
value
--webdav-cert string Path to the certificate file for WebDAV
over HTTPS
--webdav-key string Path to the key file for WebDAV over
HTTPS
--webdav-port int 0 means a random unprivileged port,
< 0 disabled (default -1)
```
In portable mode, SFTPGo can advertise the SFTP/FTP services and, optionally, the credentials via multicast DNS, so there is a standard way to discover the service and to automatically connect to it.
Here is an example of the advertised SFTP service including credentials as seen using `avahi-browse`:
```console
= enp0s31f6 IPv4 SFTPGo portable 53705 SFTP File Transfer local

```

docs/post-connect-hook.md
# Post-connect hook
This hook is executed as soon as a new connection is established. It notifies the connection's IP address and protocol. Based on the received response, the connection is accepted or rejected. Combining this hook with the [Post-login hook](./post-login-hook.md) you can implement your own (even per protocol) blacklist/whitelist of IP addresses.
Please keep in mind that you can easily configure a specialized program such as [Fail2ban](http://www.fail2ban.org/) for brute force protection. Executing a hook for each connection can be heavy.
The `post-connect-hook` can be defined as the absolute path of your program or an HTTP URL.
If the hook defines an external program it can read the following environment variables:
- `SFTPGO_CONNECTION_IP`
- `SFTPGO_CONNECTION_PROTOCOL`
If the external command completes with a zero exit status the connection will be accepted otherwise rejected.
Previous global environment variables aren't cleared when the script is called.
The program must finish within 20 seconds.
If the hook defines an HTTP URL then this URL will be invoked as HTTP GET with the following query parameters:
- `ip`
- `protocol`
The connection is accepted if the HTTP response code is `200` otherwise rejected.
The HTTP request will use the global configuration for HTTP clients.
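Here is a minimal sketch of an external program implementing a tiny blacklist with the environment variables and exit-status convention described above (the blocked address is just an example):

```shell
#!/bin/sh
# reject connections coming from a blacklisted address, accept everything else
if [ "$SFTPGO_CONNECTION_IP" = "192.0.2.1" ]; then
  exit 1
fi
exit 0
```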

docs/post-login-hook.md
# Post-login hook
This hook is executed after a login or after closing a connection for authentication timeout. Defining an appropriate `post_login_scope` you can get notifications for failed logins, successful logins or both.
Please keep in mind that executing a hook after each login can be heavy.
The `post-login-hook` can be defined as the absolute path of your program or an HTTP URL.
If the hook defines an external program it can read the following environment variables:
- `SFTPGO_LOGIND_USER`, it contains the user serialized as JSON. The username is empty if the connection is closed for authentication timeout
- `SFTPGO_LOGIND_IP`
- `SFTPGO_LOGIND_METHOD`, possible values are `publickey`, `password`, `keyboard-interactive`, `publickey+password`, `publickey+keyboard-interactive` or `no_auth_tryed`
- `SFTPGO_LOGIND_STATUS`, 1 means login OK, 0 login KO
- `SFTPGO_LOGIND_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`
Previous global environment variables aren't cleared when the script is called.
The program must finish within 20 seconds.
If the hook is an HTTP URL then it will be invoked as HTTP POST. The login method, the used protocol, the IP address and the status of the user are added to the query string, for example `<http_url>?login_method=password&ip=1.2.3.4&protocol=SSH&status=1`.
The request body will contain the user serialized as JSON.
The HTTP request will use the global configuration for HTTP clients.
The `post_login_scope` supports the following configuration values:
- `0` means notify both failed and successful logins
- `1` means notify failed logins. Connections closed for authentication timeout are notified as failed logins. You will get an empty username in this case
- `2` means notify successful logins
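As an example, here is a minimal sketch of an external program that records failed logins using the environment variables described above (the log file path is an arbitrary choice):

```shell
#!/bin/sh
# append a line for each failed login; set post_login_scope to 1 to receive only these
if [ "$SFTPGO_LOGIND_STATUS" = "0" ]; then
  echo "$(date) failed $SFTPGO_LOGIND_METHOD login from $SFTPGO_LOGIND_IP via $SFTPGO_LOGIND_PROTOCOL" >> /var/log/sftpgo-failed-logins.log
fi
```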

# Profiling SFTPGo
The built-in profiler lets you collect CPU profiles, traces, allocations and heap profiles that help you identify and correct specific bottlenecks.
You can enable the built-in profiler using `telemetry` configuration section inside the configuration file.
Profiling data are exposed via HTTP/HTTPS in the format expected by the [pprof](https://github.com/google/pprof/blob/master/doc/README.md) visualization tool. You can find the index page at the URL `/debug/pprof/`.

SFTPGo exposes REST API to manage, backup, and restore users and folders, and to get real time reports of the active connections with the ability to forcibly close a connection.
If quota tracking is enabled in the configuration file, then the used size and number of files are updated each time a file is added/removed. If files are added/removed not using SFTP/SCP, or if you change `track_quota` from `2` to `1`, you can rescan the users home dir and update the used quota using the REST API.
REST API are protected using JSON Web Tokens (JWT) authentication and can be exposed over HTTPS. You can also configure client certificate authentication in addition to JWT.
The default credentials are:
- username: `admin`
- password: `password`
You can get a JWT token using the `/api/v2/token` endpoint; you need to authenticate using HTTP Basic authentication and the credentials of an active administrator. Here is a sample response:
```json
{"access_token":"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2MTA4NzU5NDksImp0aSI6ImMwMjAzbGZjZHJwZDRsMGMxanZnIiwibmJmIjoxNjEwODc1MzE5LCJwZXJtaXNzaW9ucyI6WyIqIl0sInN1YiI6ImlHZ010NlZNU3AzN2tld3hMR3lUV1l2b2p1a2ttSjBodXlJZHBzSWRyOFE9IiwidXNlcm5hbWUiOiJhZG1pbiJ9.dt-UwcWdEMwoGauuiQw8BmgpBAv4YlTaXkyNK-7iRJ4","expires_at":"2021-01-17T09:32:29Z"}
```
once the access token has expired, you need to get a new one.
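For example, a sketch using `curl` and `jq` to obtain a token with the default credentials and then call the users endpoint (`/api/v2/users` is assumed from the OpenAPI schema referenced below):

```shell
TOKEN=$(curl -s -u admin:password http://127.0.0.1:8080/api/v2/token | jq -r .access_token)
curl -s -H "Authorization: Bearer $TOKEN" http://127.0.0.1:8080/api/v2/users
```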
JWT tokens are not stored and we use a randomly generated secret to sign them so if you restart SFTPGo all the previous tokens will be invalidated and you will get a 401 HTTP response code.
If you define multiple bindings, each binding will sign JWT tokens with a different secret so the token generated for a binding is not valid for the other ones.
You can create other administrators and assign them the following permissions:
- add users
- edit users
- del users
- view users
- view connections
- close connections
- view server status
- view and start quota scans
- view defender
- manage defender
- manage system
- manage admins
You can also restrict administrator access based on the source IP address. If you are running SFTPGo behind a reverse proxy you need to allow both the proxy IP address and the real client IP.
The OpenAPI 3 schema for the exposed API can be found inside the source tree: [openapi.yaml](../httpd/schema/openapi.yaml "OpenAPI 3 specs").
A sample CLI client for the REST API can be found inside the source tree [rest-api-cli](../examples/rest-api-cli) directory.
You can also generate your own REST client in your preferred programming language, or even bash scripts, using an OpenAPI generator such as [swagger-codegen](https://github.com/swagger-api/swagger-codegen) or [OpenAPI Generator](https://openapi-generator.tech/).
You can also use [Swagger UI](https://github.com/swagger-api/swagger-ui).

AWS SDK has different options for credentials.
So, you need to provide access keys to activate option 1, or leave them blank to use the other ways to specify credentials.
Most S3 backends require HTTPS connections so if you are running SFTPGo as a docker image please be sure to uncomment the line that installs `ca-certificates`, inside your `Dockerfile`, to be able to properly verify certificate authorities.
Specifying a different `key_prefix`, you can assign different "folders" of the same bucket to different users. This is similar to a chroot directory for local filesystem. Each SFTP/SCP user can only access the assigned folder and its contents. The folder identified by `key_prefix` does not need to be pre-created.
SFTPGo uses multipart uploads and parallel downloads for storing and retrieving files from S3.
For multipart uploads you can customize the parts size and the upload concurrency. Please note that if the upload bandwidth between the client and SFTPGo is greater than the upload bandwidth between SFTPGo and S3 then the client should wait for the last parts to be uploaded to S3 after finishing uploading the file to SFTPGo, and it may time out. Keep this in mind if you customize these parameters.
The configured bucket must exist.
Some SFTP commands don't work over S3:
- `chtimes`, `chown` and `chmod` will fail. If you want to silently ignore these methods set `setstat_mode` to `1` or `2` in your configuration file
- `truncate`, `symlink`, `readlink` are not supported
- opening a file for both reading and writing at the same time is not supported
- upload resume is not supported
- upload mode `atomic` is ignored since S3 uploads are already atomic
Other notes:
- `rename` is a two step operation: server-side copy and then deletion. So, it is not atomic as for local filesystem.
- We don't support renaming non empty directories since we should rename all the contents too and this could take a long time: think about directories with thousands of files: for each file we should do an AWS API call.
- For server side encryption, you have to configure the mapped bucket to automatically encrypt objects.
- A local home directory is still required to store temporary files.
- Clients that require advanced filesystem-like features such as `sshfs` are not supported.

# Running SFTPGo as a service
Download a binary SFTPGo [release](https://github.com/drakkan/sftpgo/releases) or a build artifact for the [latest commit](https://github.com/drakkan/sftpgo/actions) or build SFTPGo yourself.
Run the following instructions from the directory that contains the sftpgo binary and the accompanying files.
## Linux
The easiest way to run SFTPGo as a service is to download and install the pre-compiled deb/rpm package or use one of the Arch Linux PKGBUILDs we maintain.
This section describes the procedure to use if you prefer to build SFTPGo yourself or if you want to download and configure a pre-built release tarball.
A `systemd` sample [service](../init/sftpgo.service "systemd service") can be found inside the source tree.
Here are some basic instructions to run SFTPGo as service using a dedicated `sftpgo` system account.
Please run the following commands from the directory where you downloaded/compiled SFTPGo:
```bash
# create the sftpgo user and group
sudo groupadd --system sftpgo
sudo useradd --system \
--gid sftpgo \
--no-create-home \
--home-dir /var/lib/sftpgo \
--shell /usr/sbin/nologin \
--comment "SFTPGo user" \
sftpgo
# create the required directories
sudo mkdir -p /etc/sftpgo \
/var/lib/sftpgo \
/usr/share/sftpgo
# install the sftpgo executable
sudo install -Dm755 sftpgo /usr/bin/sftpgo
# install the default configuration file, edit it if required
sudo install -Dm644 sftpgo.json /etc/sftpgo/
# override some configuration keys using environment variables
sudo sh -c 'echo "SFTPGO_HTTPD__TEMPLATES_PATH=/var/lib/sftpgo/templates" > /etc/sftpgo/sftpgo.env'
sudo sh -c 'echo "SFTPGO_HTTPD__STATIC_FILES_PATH=/var/lib/sftpgo/static" >> /etc/sftpgo/sftpgo.env'
sudo sh -c 'echo "SFTPGO_HTTPD__TEMPLATES_PATH=/usr/share/sftpgo/templates" > /etc/sftpgo/sftpgo.env'
sudo sh -c 'echo "SFTPGO_HTTPD__STATIC_FILES_PATH=/usr/share/sftpgo/static" >> /etc/sftpgo/sftpgo.env'
sudo sh -c 'echo "SFTPGO_HTTPD__BACKUPS_PATH=/var/lib/sftpgo/backups" >> /etc/sftpgo/sftpgo.env'
sudo sh -c 'echo "SFTPGO_DATA_PROVIDER__CREDENTIALS_PATH=/var/lib/sftpgo/credentials" >> /etc/sftpgo/sftpgo.env'
# if you use a file based data provider such as sqlite or bolt consider to set the database path too, for example:
#sudo sh -c 'echo "SFTPGO_DATA_PROVIDER__NAME=/var/lib/sftpgo/sftpgo.db" >> /etc/sftpgo/sftpgo.env'
# also set the provider's PATH as env var to get initprovider to work with SQLite provider:
#export SFTPGO_DATA_PROVIDER__NAME=/var/lib/sftpgo/sftpgo.db
# install static files and templates for the web UI
sudo cp -r static templates /usr/share/sftpgo/
# set files and directory permissions
sudo chown -R sftpgo:sftpgo /etc/sftpgo /var/lib/sftpgo
sudo chmod 750 /etc/sftpgo /var/lib/sftpgo
sudo chmod 640 /etc/sftpgo/sftpgo.json /etc/sftpgo/sftpgo.env
# initialize the configured data provider
# if you want to use MySQL or PostgreSQL you need to create the configured database before running the initprovider command
sudo -E su - sftpgo -m -s /bin/bash -c 'sftpgo initprovider -c /etc/sftpgo'
# install the systemd service
sudo install -Dm644 init/sftpgo.service /etc/systemd/system
# start the service
sudo systemctl start sftpgo
sudo systemctl status sftpgo
# automatically start sftpgo on boot
sudo systemctl enable sftpgo
# optional, create shell completion script, for example for bash
sudo sh -c '/usr/bin/sftpgo gen completion bash > /usr/share/bash-completion/completions/sftpgo'
# optional, create man pages
sudo /usr/bin/sftpgo gen man -d /usr/share/man/man1
```
## macOS
Here are some basic instructions to run SFTPGo as service, please run the following commands from the directory where you downloaded SFTPGo:

```bash
# create the required directories
sudo mkdir -p /usr/local/opt/sftpgo/init \
/usr/local/opt/sftpgo/var/lib \
/usr/local/opt/sftpgo/usr/share \
/usr/local/opt/sftpgo/var/log \
/usr/local/opt/sftpgo/etc \
/usr/local/opt/sftpgo/bin
sudo chown root:wheel /usr/local/opt/sftpgo/init/com.github.drakkan.sftpgo.plist
# install the default configuration file, edit it if required
sudo cp sftpgo.json /usr/local/opt/sftpgo/etc/
# install static files and templates for the web UI
sudo cp -r static templates /usr/local/opt/sftpgo/usr/share/
# initialize the configured data provider
# if you want to use MySQL or PostgreSQL you need to create the configured database before running the initprovider command
sudo /usr/local/opt/sftpgo/bin/sftpgo initprovider -c /usr/local/opt/sftpgo/etc/
sudo ln -s /usr/local/opt/sftpgo/init/com.github.drakkan.sftpgo.plist /Library/LaunchDaemons/com.github.drakkan.sftpgo.plist
sudo launchctl load -w /Library/LaunchDaemons/com.github.drakkan.sftpgo.plist
# verify that the service is started
sudo launchctl list com.github.drakkan.sftpgo
```
## Windows
On Windows, you can register SFTPGo as Windows Service. Take a look at the CLI usage to learn how:
```powershell
PS> sftpgo.exe service --help
Manage SFTPGo Windows Service
Usage:
sftpgo service [command]
Available Commands:
install Install SFTPGo as Windows Service
reload Reload the SFTPGo Windows Service sending a "paramchange" request
rotatelogs Signal to the running service to rotate the logs
start Start SFTPGo Windows Service
status Retrieve the status for the SFTPGo Windows Service
stop Stop SFTPGo Windows Service
```

After installing as a Windows Service, please remember to allow network access to the SFTPGo executable using something like this:

```powershell
PS> netsh advfirewall firewall add rule name="SFTPGo Service" dir=in action=allow program="C:\Program Files\SFTPGo\sftpgo.exe"
```
Or through the Windows Firewall GUI.
The Windows installer will register the service and allow network access for it automatically.

docs/sftp-subsystem.md
# SFTP subsystem mode
In this mode SFTPGo speaks the server side of SFTP protocol to stdout and expects client requests from stdin.
You can use SFTPGo as subsystem via the `startsubsys` command.
This mode is not intended to be called directly, but from sshd using the `Subsystem` option.
For example adding a line like this one in `/etc/ssh/sshd_config`:
```shell
Subsystem sftp sftpgo startsubsys
```
Command-line flags should be specified in the Subsystem declaration.
```shell
Usage:
sftpgo startsubsys [flags]
Flags:
-d, --base-home-dir string If the user does not exist specify an alternate
starting directory. The home directory for a new
user will be:
<base-home-dir>/<username>
base-home-dir must be an absolute path.
-c, --config-dir string Location for SFTPGo config dir. This directory
should contain the "sftpgo" configuration file
or the configured config-file and it is used as
the base for files with a relative path (eg. the
private keys for the SFTP server, the SQLite
database if you use SQLite as data provider).
This flag can be set using SFTPGO_CONFIG_DIR
env var too. (default ".")
-f, --config-file string Name for SFTPGo configuration file. It must be
the name of a file stored in config-dir not the
absolute path to the configuration file. The
specified file name must have no extension we
automatically load JSON, YAML, TOML, HCL and
Java properties. Therefore if you set "sftpgo"
then "sftpgo.json", "sftpgo.yaml" and so on
are searched.
This flag can be set using SFTPGO_CONFIG_FILE
env var too. (default "sftpgo")
-h, --help help for startsubsys
-j, --log-to-journald Send logs to journald. Only available on Linux.
Use:
$ journalctl -o verbose -f
To see full logs.
If not set, the logs will be sent to the standard
error
-v, --log-verbose Enable verbose logs. This flag can be set
using SFTPGO_LOG_VERBOSE env var too.
(default true)
-p, --preserve-home If the user already exists, the existing home
directory will not be changed
```
In this mode `bolt` and `sqlite` providers are not usable as the same database file cannot be shared among multiple processes; if one of these providers is configured it will be automatically changed to the `memory` provider.
The username and home directory for the logged in user are determined using [user.Current()](https://golang.org/pkg/os/user/#Current).
If the user who is logging in is not found within the SFTPGo data provider, it is added automatically.
You can pre-configure the users inside the SFTPGo data provider, this way you can use a different home directory, restrict permissions and such.

docs/sftpfs.md
# SFTP as storage backend
An SFTP account on another server can be used as storage for an SFTPGo account, so the remote SFTP server can be accessed in a similar way to the local file system.
Here are the supported configuration parameters:
- `Endpoint`, ssh endpoint as `host:port`
- `Username`
- `Password`
- `PrivateKey`
- `Fingerprints`
- `Prefix`
The mandatory parameters are the endpoint, the username and a password or a private key. If you define both a password and a private key the key is tried first. The provided private key should be PEM encoded, something like this:
```shell
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
QyNTUxOQAAACA8LWc4SahqKkAr4L3rS19w1Vt8/IAf4th2FZmf+PJ/vwAAAJBvnZIJb52S
CQAAAAtzc2gtZWQyNTUxOQAAACA8LWc4SahqKkAr4L3rS19w1Vt8/IAf4th2FZmf+PJ/vw
AAAEBE6F5Az4wzNfNYLRdG8blDwvPBYFXE8BYDi4gzIhnd9zwtZzhJqGoqQCvgvetLX3DV
W3z8gB/i2HYVmZ/48n+/AAAACW5pY29sYUBwMQECAwQ=
-----END OPENSSH PRIVATE KEY-----
```
The password and the private key are stored as ciphertext according to your [KMS configuration](./kms.md).
SHA256 fingerprints for the remote server host keys are optional but highly recommended: if you provide one or more fingerprints, the server host key will be verified against them and the connection will be denied if none of the provided fingerprints match the one for the server host key.
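You can get the SHA256 fingerprint of a remote host key with `ssh-keygen`, for example:
```shell
# print the fingerprint of the remote server Ed25519 host key
ssh-keygen -l -f /etc/ssh/ssh_host_ed25519_key.pub
# example output: 256 SHA256:<base64 hash> root@host (ED25519)
```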
By specifying a prefix you can restrict all operations to a given path within the remote SFTP server.
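As a rough illustration, a user backed by a remote SFTP server could define a filesystem section like the sketch below. The field names here are assumptions, please check the REST API schema before relying on them:
```json
{
  "username": "sftpfs-user",
  "filesystem": {
    "provider": 5,
    "sftpconfig": {
      "endpoint": "sftp.example.com:22",
      "username": "remoteuser",
      "password": {"status": "Plain", "payload": "remotepassword"},
      "fingerprints": ["SHA256:<base64 hash of the remote host key>"],
      "prefix": "/data"
    }
  }
}
```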


@@ -8,21 +8,35 @@ For system commands we have no direct control on file creation/deletion and so t
- system commands work only on the local filesystem
- we cannot avoid leaking real filesystem paths
- quota check is suboptimal
- maximum size restriction on single file is not respected
- data at-rest encryption is not supported
If quota is enabled and SFTPGo receives a system command, the used size and number of files are checked at the command start and not while new files are created/deleted. While the command is running the number of files is not checked, the remaining size is calculated as the difference between the max allowed quota and the used one, and it is checked against the bytes transferred via SSH. The command is aborted if it uploads more bytes than the remaining allowed size calculated at the command start. Anyway, we only see the bytes that the remote command sends to the local one via SSH. These bytes contain both protocol commands and files, and so the size of the files is different from the size transferred via SSH: for example, a command can send compressed files, or a protocol command (few bytes) could delete a big file. To mitigate these issues, quotas are recalculated at the command end with a full scan of the directory specified for the system command. This could be heavy for big directories. If you need system commands and quotas you could consider disabling quota restrictions and periodically update quota usage yourself using the REST API.
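For example, you could trigger a periodic quota scan via the REST API from a cron job. The endpoint and payload below are assumptions based on the v2 API, verify them against your OpenAPI schema before use:
```shell
# start a quota scan for a single user (illustrative only)
curl -X POST "http://127.0.0.1:8080/api/v2/quota-scans" \
  -H "Authorization: Bearer $SFTPGO_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"username": "a-username"}'
```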
For these reasons we should limit system commands usage as much as possible; we currently support the following system commands:
- `git-receive-pack`, `git-upload-pack`, `git-upload-archive`. These commands enable support for Git repositories over SSH. They need to be installed and in your system's `PATH`.
- `rsync`. The `rsync` command needs to be installed and in your system's `PATH`.
At least the following permissions are required to be able to run system commands:
- `list`
- `download`
- `upload`
- `create_dirs`
- `overwrite`
- `delete`
For `rsync` we cannot avoid that it creates symlinks, so if the `create_symlinks` permission is granted we add the option `--safe-links`, if it is not already set, to the received `rsync` command. This should prevent creating symlinks that point outside the home directory.
If the user cannot create symlinks we add the option `--munge-links`, if it is not already set, to the received `rsync` command. This should make symlinks unusable (but manually recoverable).
SFTPGo supports the following built-in SSH commands:
- `scp`, SFTPGo implements the SCP protocol so we can support it for cloud filesystems too and we can avoid the other system commands limitations. SCP between two remote hosts is supported using the `-3` scp option. Wildcard expansion is not supported.
- `md5sum`, `sha1sum`, `sha256sum`, `sha384sum`, `sha512sum`. Useful to check message digests for uploaded files. These commands will work with any storage backend but keep in mind that to calculate the hash we need to read the whole file: for remote backends this means downloading the file, for the encrypted backend this means decrypting the file.
- `cd`, `pwd`. Some SFTP clients do not support the SFTP SSH_FXP_REALPATH packet type, so they use `cd` and `pwd` SSH commands to get the initial directory. Currently `cd` does nothing and `pwd` always returns the `/` path.
- `sftpgo-copy`. This is a built-in copy implementation. It allows server side copy for files and directories. The first argument is the source file/directory and the second one is the destination file/directory, for example `sftpgo-copy <src> <dst>`. The command will fail if the destination exists. Copy for directories spanning virtual folders is not supported. Only the local filesystem is supported: recursive copy for Cloud Storage filesystems requires a new request for every file in any case, so a real server side copy is not possible.
- `sftpgo-remove`. This is a built-in remove implementation. It allows to remove single files and to recursively remove directories. The first argument is the file/directory to remove, for example `sftpgo-remove <dst>`. Only the local filesystem is supported: recursive remove for Cloud Storage filesystems requires a new request for every file in any case, so a server side remove is not possible.
The following SSH commands are enabled by default:


@@ -1,6 +1,6 @@
# Virtual Folders
A virtual folder is a mapping between an SFTP/SCP virtual path and a filesystem path outside the user home directory.
The specified paths must be absolute and the virtual path cannot be "/"; it must be a subdirectory.
The parent directory of the specified virtual path must exist. SFTPGo will try to automatically create any missing parent directories for the configured virtual folders at user login.
@@ -16,7 +16,7 @@ For example if you configure `/tmp/mapped` or `C:\mapped` as mapped path and `/v
The same virtual folder, identified by the `mapped_path`, can be shared among users and different folder quota limits for each user are supported.
Folder quota limits can also be included inside the user quota but in this case the folder is considered "private" and sharing it with other users will break user quota calculation.
You don't need to create virtual folders, inside the data provider, to associate them to the users: any missing virtual folder will be automatically created when you add/update a user. You only have to create the folder on the filesystem.
Using the REST API you can:


@@ -1,8 +1,13 @@
# Web Admin
You can easily build your own interface using the exposed [REST API](./rest-api.md). Anyway, SFTPGo also provides a basic built-in web interface that allows you to manage users, virtual folders, admins and connections.
With the default `httpd` configuration, the web admin is available at the following URL:
[http://127.0.0.1:8080/web](http://127.0.0.1:8080/web)
The default credentials are:
- username: `admin`
- password: `password`
The web interface can be exposed over HTTPS.

docs/webdav.md Normal file

@@ -0,0 +1,31 @@
# WebDAV
The `WebDAV` support can be enabled by configuring one or more `bindings` inside the `webdavd` configuration section.
Each user can access their home directory using the path `http/s://<SFTPGo ip>:<WebDAV port>/`.
WebDAV is quite a different protocol from SCP/FTP: there is no session concept, and each command is a separate HTTP request that must be authenticated. To improve performance SFTPGo caches authenticated users, so it doesn't need to perform a data provider query and a password check for each request.
The user caching configuration allows you to set:
- `expiration_time` in minutes. If a user is cached for more than the specified minutes it will be removed from the cache and a new data provider query will be performed. Please note that the `last_login` field will not be updated and `external_auth_hook`, `pre_login_hook` and `check_password_hook` will not be executed if the user is obtained from the cache.
- `max_size`. Maximum number of users to cache. When this limit is reached the user with the oldest expiration date will be removed from the cache. 0 means no limit; however, the cache size cannot exceed the number of users, so if you have a small number of users you can set this value to 0.
Users are automatically removed from the cache after an update/delete.
WebDAV protocol requires the MIME type for each file. SFTPGo will first try to guess the MIME type by extension. If this fails it will send a `HEAD` request for Cloud backends and, as last resort, it will try to guess the MIME type reading the first 512 bytes of the file. This may slow down the directory listing, especially for Cloud based backends, if you have directories containing many files with unregistered extensions. To mitigate this problem, you can enable caching of MIME types so that the MIME type detection is done only once.
The MIME types caching configuration allows you to set the maximum number of MIME types to cache. Once the cache reaches the configured maximum size no new MIME types will be added. The MIME types cache is a non-persistent in-memory cache. If you need a persistent cache add your MIME types to `/etc/mime.types` on Linux or inside the registry on Windows.
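A caching setup inside the `webdavd` configuration section might look like this sketch (double-check the field names against the full configuration reference):
```json
{
  "webdavd": {
    "cache": {
      "users": {
        "expiration_time": 60,
        "max_size": 50
      },
      "mime_types": {
        "enabled": true,
        "max_size": 1000
      }
    }
  }
}
```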
WebDAV should work as expected for most use cases but there are some minor issues and some missing features.
Known issues:
- removing a directory tree on Cloud Storage backends could generate a `not found` error when removing the last (virtual) directory. This happens if the client walks the directory tree itself and removes files and directories one by one instead of issuing a single remove command
- the used [WebDAV library](https://pkg.go.dev/golang.org/x/net/webdav?tab=doc) asks to open a file to execute a `stat` and sometimes reads some bytes to find the content type. Stat calls are executed before and after a download too, so to be able to properly list a directory you need to grant both `list` and `download` permissions, and to be able to upload files you need to grant both `list` and `upload` permissions
- the used `WebDAV library` does not always return a proper error code/message; most of the time it simply returns `Method not Allowed`. I'll try to improve the library error codes in the future
- if an object within a directory cannot be accessed, for example due to OS permissions issues or because it is a missing mapped path for a virtual folder, the directory listing will fail. In SFTP/FTP the directory listing will succeed and you'll only get an error if you try to access the problematic file/directory
We plan to add [Dead Properties](https://tools.ietf.org/html/rfc4918#section-3) support in future releases. We need a design decision here: probably the best solution is to store dead properties inside the data provider, but this could increase its size a lot. Alternatively, we could store them on disk for the local filesystem and add them as metadata for Cloud Storage; this means that we would need to do a separate `HEAD` request to retrieve dead properties for an S3 file. For big folders this would mean a lot of requests to the Cloud Provider, so I don't like this solution. Another option is to expose a hook and allow you to implement `dead properties` outside SFTPGo.
If you find any other quirks or problems please let us know by opening a GitHub issue, thank you!


@@ -0,0 +1,58 @@
# Authy
These examples show how to integrate the [Twilio Authy API](https://www.twilio.com/docs/authy/api) for One-Time-Password logins.
The examples assume that the user has the free [Authy app](https://authy.com/) installed and uses it to generate offline [TOTP](https://en.wikipedia.org/wiki/Time-based_One-time_Password_algorithm) codes (soft tokens).
You first need to [create an Authy Application in the Twilio Console](https://twilio.com/console/authy/applications?_ga=2.205553366.451688189.1597667213-1526360003.1597667213), then you can create a new Authy user and store a reference to the matching SFTPGo account.
Verify that your Authy application is successfully registered:
```bash
export AUTHY_API_KEY=<your api key here>
curl 'https://api.authy.com/protected/json/app/details' -H "X-Authy-API-Key: $AUTHY_API_KEY"
```
Now create an Authy user:
```bash
curl -XPOST "https://api.authy.com/protected/json/users/new" \
-H "X-Authy-API-Key: $AUTHY_API_KEY" \
--data-urlencode user[email]="user@domain.com" \
--data-urlencode user[cellphone]="317-338-9302" \
--data-urlencode user[country_code]="54"
```
The response is something like this:
```json
{"message":"User created successfully.","user":{"id":xxxxxxxx},"success":true}
```
Save the user id somewhere and add a reference to the matching SFTPGo account. You could also store this ID in the `additional_info` SFTPGo user field.
After this step you can use the Authy app installed on your phone to generate TOTP codes.
Now you can verify the token using an HTTP GET request:
```bash
export TOKEN=<TOTP you read from Authy app>
export AUTHY_ID=<user id>
curl -i "https://api.authy.com/protected/json/verify/${TOKEN}/${AUTHY_ID}" \
-H "X-Authy-API-Key: $AUTHY_API_KEY"
```
So inside your hook you need to check:
- the HTTP response code for the verify request: it must be `200`
- the JSON response body: it must contain the key `success` with the value `true` (as a string)
If these conditions are met, the token is valid and you can allow the user to log in.
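A successful verification looks roughly like this; note that `success` is the string `"true"`, not a boolean:
```json
{"message":"Token is valid.","token":"is valid","success":"true"}
```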
We provide the following examples:
- [Keyboard interactive authentication](./keyint/README.md) for 2FA using password + Authy one time token.
- [External authentication](./extauth/README.md) using Authy one time tokens as passwords.
- [Check password hook](./checkpwd/README.md) for 2FA using a password consisting of a fixed string and a One Time Token.
Please note that these are sample programs not intended for production use; you should write your own hook based on them, and you should prefer HTTP based hooks if performance is a concern.


@@ -0,0 +1,3 @@
# Authy 2FA via check password hook
This example shows how to use 2FA via the check password hook, using a password consisting of a fixed part and an Authy TOTP token. The hook will check the TOTP token using the Authy API and SFTPGo will check the fixed part. Please read the [sample code](./main.go), it should be self-explanatory.


@@ -0,0 +1,3 @@
module github.com/drakkan/sftpgo/authy/checkpwd
go 1.15


@@ -0,0 +1,106 @@
package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
	"os"
	"time"
)

type userMapping struct {
	SFTPGoUsername string
	AuthyID        int64
	AuthyAPIKey    string
}

type checkPasswordResponse struct {
	// 0 KO, 1 OK, 2 partial success
	Status int `json:"status"`
	// for status == 2 this is the password that SFTPGo will check against the one stored
	// inside the data provider
	ToVerify string `json:"to_verify"`
}

var (
	mapping []userMapping
)

func init() {
	// this is for demo only, you probably want to get this mapping dynamically, for example using a database query
	mapping = append(mapping, userMapping{
		SFTPGoUsername: "<SFTPGo username>",
		AuthyID:        1234567,
		AuthyAPIKey:    "<your api key>",
	})
}

func printResponse(status int, toVerify string) {
	r := checkPasswordResponse{
		Status:   status,
		ToVerify: toVerify,
	}
	resp, _ := json.Marshal(r)
	fmt.Printf("%v\n", string(resp))
	if status > 0 {
		os.Exit(0)
	} else {
		os.Exit(1)
	}
}

func main() {
	// get credentials from env vars
	username := os.Getenv("SFTPGO_AUTHD_USERNAME")
	password := os.Getenv("SFTPGO_AUTHD_PASSWORD")
	for _, m := range mapping {
		if m.SFTPGoUsername == username {
			// Authy token len is 7, we assume that we have the password followed by the token
			pwdLen := len(password)
			if pwdLen <= 7 {
				printResponse(0, "")
			}
			pwd := password[:pwdLen-7]
			authyToken := password[pwdLen-7:]
			// now verify the authy token and instruct SFTPGo to check the password if the token is OK
			url := fmt.Sprintf("https://api.authy.com/protected/json/verify/%v/%v", authyToken, m.AuthyID)
			req, err := http.NewRequest(http.MethodGet, url, nil)
			if err != nil {
				log.Fatal(err)
			}
			req.Header.Set("X-Authy-API-Key", m.AuthyAPIKey)
			httpClient := &http.Client{
				Timeout: 10 * time.Second,
			}
			resp, err := httpClient.Do(req)
			if err != nil {
				printResponse(0, "")
			}
			defer resp.Body.Close()
			if resp.StatusCode != http.StatusOK {
				// status code 200 is expected
				printResponse(0, "")
			}
			var authyResponse map[string]interface{}
			respBody, err := ioutil.ReadAll(resp.Body)
			if err != nil {
				printResponse(0, "")
			}
			err = json.Unmarshal(respBody, &authyResponse)
			if err != nil {
				printResponse(0, "")
			}
			if authyResponse["success"].(string) == "true" {
				printResponse(2, pwd)
			}
			printResponse(0, "")
			break
		}
	}
	// no mapping found
	printResponse(0, "")
}


@@ -0,0 +1,3 @@
# Authy external authentication
This example shows how to use Authy TOTP tokens as passwords for SFTPGo users. Please read the [sample code](./main.go), it should be self-explanatory.


@@ -0,0 +1,3 @@
module github.com/drakkan/sftpgo/authy/extauth
go 1.15


@@ -0,0 +1,109 @@
package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
	"os"
	"path/filepath"
	"time"
)

type userMapping struct {
	SFTPGoUsername string
	AuthyID        int64
	AuthyAPIKey    string
}

// we assume that the SFTPGo user already exists, we only check the one time token.
// If you need to create the SFTPGo user more fields are needed here
type minimalSFTPGoUser struct {
	Status      int                 `json:"status,omitempty"`
	Username    string              `json:"username"`
	HomeDir     string              `json:"home_dir,omitempty"`
	Permissions map[string][]string `json:"permissions"`
}

var (
	mapping []userMapping
)

func init() {
	// this is for demo only, you probably want to get this mapping dynamically, for example using a database query
	mapping = append(mapping, userMapping{
		SFTPGoUsername: "<SFTPGo username>",
		AuthyID:        1234567,
		AuthyAPIKey:    "<your api key>",
	})
}

func printResponse(username string) {
	u := minimalSFTPGoUser{
		Username: username,
		Status:   1,
		HomeDir:  filepath.Join(os.TempDir(), username),
	}
	u.Permissions = make(map[string][]string)
	u.Permissions["/"] = []string{"*"}
	resp, _ := json.Marshal(u)
	fmt.Printf("%v\n", string(resp))
	if len(username) > 0 {
		os.Exit(0)
	} else {
		os.Exit(1)
	}
}

func main() {
	// get credentials from env vars
	username := os.Getenv("SFTPGO_AUTHD_USERNAME")
	password := os.Getenv("SFTPGO_AUTHD_PASSWORD")
	if len(password) == 0 {
		// login method is not password
		printResponse("")
		return
	}
	for _, m := range mapping {
		if m.SFTPGoUsername == username {
			// mapping found, we can now verify the token
			url := fmt.Sprintf("https://api.authy.com/protected/json/verify/%v/%v", password, m.AuthyID)
			req, err := http.NewRequest(http.MethodGet, url, nil)
			if err != nil {
				log.Fatal(err)
			}
			req.Header.Set("X-Authy-API-Key", m.AuthyAPIKey)
			httpClient := &http.Client{
				Timeout: 10 * time.Second,
			}
			resp, err := httpClient.Do(req)
			if err != nil {
				printResponse("")
			}
			defer resp.Body.Close()
			if resp.StatusCode != http.StatusOK {
				// status code 200 is expected
				printResponse("")
			}
			var authyResponse map[string]interface{}
			respBody, err := ioutil.ReadAll(resp.Body)
			if err != nil {
				printResponse("")
			}
			err = json.Unmarshal(respBody, &authyResponse)
			if err != nil {
				printResponse("")
			}
			if authyResponse["success"].(string) == "true" {
				printResponse(username)
			}
			printResponse("")
			break
		}
	}
	// no mapping found
	printResponse("")
}


@@ -0,0 +1,3 @@
# Authy 2FA using keyboard interactive authentication
This example shows how to authenticate SFTP users using 2FA (password + Authy token). Please read the [sample code](./main.go), it should be self-explanatory.

Some files were not shown because too many files have changed in this diff.