Compare commits


786 Commits

Author SHA1 Message Date
Nicola Murino
c40a48c6f3 sql provider: enhanced folder mapping query using an upsert
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-30 13:02:32 +02:00
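A minimal sketch of what such an upsert can look like, assuming PostgreSQL and hypothetical table/column names (the real schema is defined by SFTPGo's dataprovider package):

```go
package sketch

import "database/sql"

// hypothetical table and column names; ON CONFLICT turns the usual
// SELECT-then-INSERT-or-UPDATE dance into a single round trip
const upsertFolderMapping = `
INSERT INTO folders_mapping (folder_id, user_id, virtual_path)
VALUES ($1, $2, $3)
ON CONFLICT (folder_id, user_id) DO UPDATE SET virtual_path = EXCLUDED.virtual_path`

func upsertMapping(db *sql.DB, folderID, userID int64, virtualPath string) error {
	_, err := db.Exec(upsertFolderMapping, folderID, userID, virtualPath)
	return err
}
```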
Nicola Murino
c7073f90cb improve readlink handling
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-27 19:01:42 +02:00
Nicola Murino
80c8486d24 webclient: don't restore checkbox status
Fixes #807

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-26 09:17:27 +02:00
Nicola Murino
cf9d081495 update moment.js to v2.29.2
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-15 09:58:15 +02:00
Nicola Murino
05ed7b6aa4 sshd: disable sha1 based KEXs and MACs by default
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-04 19:21:42 +02:00
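For context, a sketch of how SHA-1 based algorithms can be dropped with golang.org/x/crypto/ssh: listing only modern algorithms implicitly excludes the sha1 variants such as diffie-hellman-group14-sha1 and hmac-sha1. The lists shown are illustrative, not SFTPGo's exact defaults.

```go
package sketch

import "golang.org/x/crypto/ssh"

func newServerConfig() *ssh.ServerConfig {
	cfg := &ssh.ServerConfig{}
	// allowed key exchanges: sha1-based KEXs are simply not listed
	cfg.KeyExchanges = []string{
		"curve25519-sha256", "curve25519-sha256@libssh.org",
		"ecdh-sha2-nistp256", "diffie-hellman-group14-sha256",
	}
	// allowed MACs: hmac-sha1 is simply not listed
	cfg.MACs = []string{"hmac-sha2-256-etm@openssh.com", "hmac-sha2-256"}
	return cfg
}
```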
Nicola Murino
68a4bbd10c be sure to close an SSH connection if all channels are idle
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-04-01 08:05:07 +02:00
Nicola Murino
1b21c19a78 add jq to full docker image variants
Fixes #767

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-23 11:37:27 +01:00
Nicola Murino
ee600c716b docker: add rsync to "full" images
there are better alternatives and rsync only works on the local
filesystem, but it can still be useful to some people

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-22 17:42:56 +01:00
Nicola Murino
6b77b55068 update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-16 19:27:31 +01:00
Nicola Murino
5a45af76f3 db defender: fix list hosts queries
ensure that banned hosts are always returned

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-16 18:27:47 +01:00
Nicola Murino
7959737442 ensure that defaults defined in code match the default config file
Fixes #754

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-14 10:40:47 +01:00
Nicola Murino
d3fee39388 sftpfs: add a dial timeout
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-11 17:12:47 +01:00
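A minimal sketch of bounding the dial for an SFTP-backed filesystem; names are illustrative:

```go
package sketch

import (
	"fmt"
	"net"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithTimeout bounds the TCP connect so an unreachable SFTP server
// cannot block the calling goroutine indefinitely
func dialWithTimeout(addr string, config *ssh.ClientConfig, timeout time.Duration) (*ssh.Client, error) {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return nil, fmt.Errorf("dial %s: %w", addr, err)
	}
	c, chans, reqs, err := ssh.NewClientConn(conn, addr, config)
	if err != nil {
		conn.Close()
		return nil, err
	}
	return ssh.NewClient(c, chans, reqs), nil
}
```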
Nicola Murino
97122ef06c backport some fixes from the main branch
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-03-04 19:14:39 +01:00
Nicola Murino
8a6c2265a4 deb/rpm packages: attempt to set the cap_net_bind_service capability
so the service can bind to privileged ports without running as root user

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-28 10:06:39 +01:00
Nicola Murino
b65dae89e8 web setup: add an optional installation code
The purpose of this code is to prevent anyone who can access
the initial setup screen from creating an admin user

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-27 14:27:53 +01:00
Nicola Murino
4ed6e96c7b sftpfs: improve rename and remove
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-16 17:08:22 +01:00
Nicola Murino
6d3ff5a8ad logger: fix UTC time func
Fixes #719

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-14 12:37:55 +01:00
Nicola Murino
a7921500f5 set version to 2.2.2
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-06 09:59:28 +01:00
Nicola Murino
c3188a2b5a share download uncompressed: don't allow symlinks
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-06 08:49:08 +01:00
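The check can be as simple as an Lstat before serving the file; a sketch:

```go
package sketch

import (
	"errors"
	"os"
)

// denySymlink rejects symlinks before serving an uncompressed download:
// following a link could expose content outside the shared path
func denySymlink(path string) error {
	info, err := os.Lstat(path) // Lstat does not follow symlinks
	if err != nil {
		return err
	}
	if info.Mode()&os.ModeSymlink != 0 {
		return errors.New("symlinks are not allowed in uncompressed downloads")
	}
	return nil
}
```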
Nicola Murino
3f38f44d42 update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-02-03 18:40:28 +01:00
Nicola Murino
0a3122f03e fix prefix for defender database tables
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-30 11:10:08 +01:00
Nicola Murino
8cd9e886f3 CI: enable docker
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 18:14:46 +01:00
Nicola Murino
016e285745 update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 17:57:39 +01:00
Nicola Murino
467708dc1c Admin UI: allow to create multiple users/folders from templates
the clone button is not needed anymore: you can select a user and
click on template to generate one or more similar users, or you can
create users/folders from an empty template

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 10:52:20 +01:00
Nicola Murino
ef626befb1 web admin: simplify user page
The page to add/edit users should be less intimidating now.
All the advanced settings are hidden by default. Permissions are set
to any, so if you also have a users base dir set, you can add a user
by simply setting a username and a password or public key, then saving

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 10:52:16 +01:00
Nicola Murino
f61456ce87 sshd: improve docs about supported ciphers, KEX and MACs
also added a check to ensure that the configured values are valid

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 10:52:09 +01:00
Nicola Murino
ba3548c2c3 make the sdk a separate module
The SFTPGo SDK now is at the following URL

https://github.com/sftpgo/sdk

Fixes #657

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 10:52:03 +01:00
Nicola Murino
0e2d673889 move kms implementation outside the sdk package
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 10:51:55 +01:00
Nicola Murino
bf03eb2a88 log at info level the service configurations
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 10:51:48 +01:00
Nicola Murino
3603493146 move plugin handling outside the sdk package
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 10:51:42 +01:00
Nicola Murino
6a20e7411b sdk: add a logger interface
we are now ready to make the sdk a separate module

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 10:51:36 +01:00
Nicola Murino
0e1d8fc4d9 move kms definitions to the sdk package
This is the first step to make the sdk a separate module

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 10:51:31 +01:00
Nicola Murino
08a7f08d6e httpd: switch back to chi Recoverer now that the required patch is merged
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 10:51:24 +01:00
Nicola Murino
2c8968b5dc eventsearcher plugin: add support to search for provider, bucket, endpoint
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 10:51:02 +01:00
Nicola Murino
f65c973c99 notifier plugins: add provider, bucket and endpoint to notifier params
Fixes #656

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 10:50:57 +01:00
Nicola Murino
85c2d474d9 notifiers plugin: replace params with a struct
Fixes #658

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 10:50:48 +01:00
Nicola Murino
6c6a6e3d16 Revert "notifier plugin: fix failed events recovery"
This reverts commit 92af6efc0c.

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-13 10:50:35 +01:00
Nicola Murino
92122bd962 sqlite: fix prefix for api_key indexes
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-09 11:56:27 +01:00
Nicola Murino
112306b9a2 CI: fix development workflow
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-02 16:38:48 +01:00
Nicola Murino
92af6efc0c notifier plugin: fix failed events recovery
the event timestamp is in nanoseconds, not milliseconds

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2022-01-02 16:35:29 +01:00
Nicola Murino
6d582a821b back to development
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2021-12-31 16:01:23 +01:00
Nicola Murino
794afbf85e update release workflow
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2021-12-31 14:17:51 +01:00
Nicola Murino
e3f3997c5e set version to 2.2.1
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2021-12-31 13:42:03 +01:00
Nicola Murino
f78090e47f update deps
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2021-12-29 18:11:00 +01:00
Nicola Murino
4d7a4aa99a check rename source and target 2021-12-28 12:03:52 +01:00
Nicola Murino
c36217c654 improve some docs 2021-12-26 14:54:29 +01:00
Nicola Murino
59bb578b89 web client: allow to move files between folders
Fixes #653
2021-12-25 17:13:23 +01:00
Nicola Murino
7d8823307f defender: add provider driver
Fixes #616
2021-12-25 12:08:07 +01:00
Nicola Murino
8174349032 console logger: enable colors on Windows too ...
... now that zerolog supports this feature
2021-12-20 18:47:18 +01:00
Nicola Murino
00a02dc14d howto: add two-factor authentication 2021-12-19 18:08:12 +01:00
Nicola Murino
ced73ed04e REST API: add an option to create missing dirs 2021-12-19 12:14:53 +01:00
Nicola Murino
cc73bb811b change log level from warn to error where appropriate
Fixes #649
2021-12-16 19:53:00 +01:00
Nicola Murino
a587228cf0 add support for metadata plugins 2021-12-16 18:18:36 +01:00
Nicola Murino
1472a0f415 hooks: preserve MFA related configs
if a user is updated using pre-login or external auth hook we need to
preserve the MFA related configs in the same way we do if the user is
updated using the REST API
2021-12-11 11:08:20 +01:00
Nicola Murino
0bb141960f add support for different bandwidth limits based on client IP 2021-12-10 18:43:26 +01:00
Nicola Murino
c153330ab8 web client: use fetch to upload files
also add a REST API to upload a single file as the POST body
2021-12-08 19:25:22 +01:00
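A sketch of a client using this single-file upload, streaming the file as the raw POST body; the endpoint and header values are assumptions, check the OpenAPI schema for the real route:

```go
package sketch

import (
	"fmt"
	"net/http"
	"os"
)

// uploadFile streams a local file as the raw request body, no multipart needed
func uploadFile(localPath, url, token string) error {
	f, err := os.Open(localPath)
	if err != nil {
		return err
	}
	defer f.Close()
	req, err := http.NewRequest(http.MethodPost, url, f)
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "Bearer "+token) // hypothetical auth scheme
	req.Header.Set("Content-Type", "application/octet-stream")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("upload failed: %s", resp.Status)
	}
	return nil
}
```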
Nicola Murino
5b4ef0ee3b windows installer: rename the sample configuration with the default values
The previous name sftpgo.json.default could create confusion for Windows
users
2021-12-05 07:58:53 +01:00
Nicola Murino
9632b6ee94 events search: improve test cases 2021-12-04 18:18:59 +01:00
Nicola Murino
78eb1c1166 update OpenAPI schema 2021-12-04 17:57:48 +01:00
Nicola Murino
a7c0b07a2a add session id to notifier plugins/hook 2021-12-04 17:27:24 +01:00
Nicola Murino
dc1cc88a46 keyboard interactive hooks: allow to validate passcode 2021-12-04 15:14:44 +01:00
Nicola Murino
3f5451eab6 web client: save/restore file list preferences 2021-12-04 07:58:49 +01:00
Nicola Murino
30d98326ca docker: update alpine image to 3.15 2021-12-03 19:33:37 +01:00
Nicola Murino
bedc8e288b web client: add support for integrating external viewers/editors 2021-12-03 18:33:08 +01:00
Nicola Murino
6092b6628e logs: use info level for login related messages
so enabling the debug level is not required just to understand, for example,
that a user exceeded the allowed sessions.

Also set the cache update frequency as documented
2021-12-02 19:36:42 +01:00
Nicola Murino
6ee51c5cc1 kms: remove support for compat secrets
also document how to activate the deprecated builtin provider
2021-12-01 17:53:19 +01:00
Nicola Murino
4df0ae82ac web client: allow downloading of single shared files without compression
Fixes #629
2021-11-30 20:32:10 +01:00
Nicola Murino
5db31f0fb3 web client: allow to upload/delete multiple files 2021-11-30 18:40:50 +01:00
Nicola Murino
0f8170c10f improve some docs and disable telemetry server by default 2021-11-29 17:58:10 +01:00
Nicola Murino
3c24cb773f SFTP: log users connections at info level
uniform SFTP and FTP logs

Fixes #626
2021-11-29 10:15:46 +01:00
Nicola Murino
bec54ac8ae CI: add windows x86
there still seem to be people using x86 on Windows ...
2021-11-28 21:30:31 +01:00
Nicola Murino
c330ac8418 CI: add windows arm64 2021-11-28 18:56:30 +01:00
Nicola Murino
3e478f42ea update lint rules and fix some warnings 2021-11-27 17:04:13 +01:00
Nicola Murino
18ab757216 back to development 2021-11-27 15:07:31 +01:00
Nicola Murino
b6bcf0cd94 set version to 2.2.0 2021-11-27 11:46:05 +01:00
Nicola Murino
015aa36c56 loaddata: improve shares restore
usage and timestamps are now preserved
2021-11-27 11:12:51 +01:00
Nicola Murino
f2480ce5c9 improve chtimes handling on open files 2021-11-26 19:00:44 +01:00
Vincent Murphy
f828c58dca Add --s3-force-path-style to portable 2021-11-26 17:40:23 +01:00
Nicola Murino
dc19921b0c web client: don't show the link for expired shares 2021-11-25 20:09:11 +01:00
Nicola Murino
3f3591bae0 web client: allow to preview images and pdf
pdf depends on browser support. It does not work on mobile devices.
2021-11-25 19:24:32 +01:00
Nicola Murino
fc048728d9 add 7digital to the sponsors section 2021-11-25 13:49:32 +01:00
Nicola Murino
aeb4675196 web admin: use a textarea for allowed/denied ip mask fields
Fixes #621
2021-11-25 13:08:12 +01:00
Nicola Murino
4652f9ede8 FTPD: allow to set different passive IPs based on the client's IP address 2021-11-25 12:45:09 +01:00
Nicola Murino
531cb5b5a1 sftpd: handle setstat requests with multiple attrs 2021-11-24 11:55:14 +01:00
Nicola Murino
9fb43b2c46 docs: clarify how multi-step auth works with external authentication
Fixes #617
2021-11-24 11:27:32 +01:00
Nicola Murino
8a8298ad46 web client: improve file upload 2021-11-22 12:25:36 +01:00
Nicola Murino
3d6b09e949 REST API: expose OpenAPI schema and render it using Swagger UI
Fixes #609
2021-11-21 09:32:51 +01:00
Nicola Murino
fb8f013ea7 web: update permissions on cookie refresh 2021-11-20 10:48:39 +01:00
Nicola Murino
c41319bb7a CI: sign windows installer and executable 2021-11-19 22:44:50 +01:00
Nicola Murino
46157ebbb6 CI docker: remove armv7 support
CI is still unreliable if we enable armv7 support
2021-11-16 09:07:10 +01:00
Nicola Murino
200b1d08c7 docker: add armv7 2021-11-15 21:58:35 +01:00
Nicola Murino
24b0352eb6 GCS: add ACL support 2021-11-15 21:57:41 +01:00
Nicola Murino
52f3a98cc8 preserve GCS credentials on update if not set
credentials were not preserved if "prefer_database_credentials" was
set to true

Fixes #613
2021-11-15 19:12:58 +01:00
Nicola Murino
e29a3efd39 add resetprovider sub-command
Fixes #608
2021-11-15 18:40:31 +01:00
Nicola Murino
ca730e77a5 add separate permissions to delete and rename files and dirs
perm_delete and perm_rename still exist for backward compatibility;
they are now aliases that assign both of the new split permissions
2021-11-14 16:23:33 +01:00
Nicola Murino
0833b4698e httpd service: add CORS support 2021-11-13 23:14:50 +01:00
Nicola Murino
ee5c5e033d S3: add ACL support
Fixes #610
2021-11-13 16:05:40 +01:00
Nicola Murino
78233ff9a3 web UI/REST API: add password reset
In order to reset the password from the admin/client user interface,
an SMTP configuration must be added and the user/admin must have an email
address.
You can prohibit the reset functionality on a per-user basis by using a
specific restriction.

Fixes #597
2021-11-13 13:25:43 +01:00
Nicola Murino
b331dc5686 web client: show share last use and used tokens 2021-11-07 09:53:35 +01:00
Nicola Murino
dfcfcee208 Windows: fix UTC time logging 2021-11-06 16:27:01 +01:00
Nicola Murino
094ee1522e logger: add a flag to use UTC time for logging 2021-11-06 15:18:16 +01:00
Nicola Murino
3bc58f5988 WebClient/REST API: add sharing support 2021-11-06 14:13:20 +01:00
Martijn Pieters
f6938e76dc Parse auth plugin information from env 2021-11-02 11:36:30 +01:00
Nicola Murino
570964deb3 add post-disconnect hook
Fixes #587
2021-10-29 19:55:18 +02:00
Nicola Murino
31984ffec1 update logo and add it to windows exe and installer
thanks to @asheroto for donating the new logo
2021-10-23 19:27:39 +02:00
Nicola Murino
74fc3aaf37 REST API: add events search 2021-10-23 15:47:21 +02:00
Nicola Murino
97d0a48557 plugins: improve notifier and searcher 2021-10-20 19:39:49 +02:00
Nicola Murino
3bbe67571f plugins: add eventsearcher 2021-10-17 16:43:05 +02:00
Nicola Murino
f131ef130b add a link to the new events store plugin 2021-10-16 17:08:34 +02:00
Nicola Murino
4a6a4ce28d sftpfs: map path resolution error to permission denied
we do the same for os fs so that the problematic directory is excluded
from the webdav listing instead of failing the whole directory listing
2021-10-16 10:32:18 +02:00
Nicola Murino
a80ac80fcd pkgs: update nfpm to 2.7 and use xz as compression for both deb and rpm 2021-10-13 09:15:04 +02:00
Nicola Murino
4aa9686e3b refactor custom actions
SFTPGo is now fully auditable, all fs and provider events that change
something are notified and can be collected using hooks/plugins.

There are some backward incompatible changes for command hooks
2021-10-10 13:08:05 +02:00
Nicola Murino
64e87d64bd web client UI: allow to edit plain text files
Fixes #567
2021-10-09 14:17:28 +02:00
Nicola Murino
9ca0b46f30 UI connections page: add a refresh button 2021-10-07 18:28:31 +02:00
Nicola Murino
6eb154bb74 webdav: add support for lock discovery 2021-10-06 09:11:56 +02:00
Nicola Murino
ea01c3a125 rate limiting: allow to exclude IP addresses/ranges
Fixes #563
2021-10-03 20:50:05 +02:00
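A sketch of the exclusion idea using net.ParseCIDR; how SFTPGo wires this into its configuration may differ:

```go
package sketch

import "net"

// parseExclusions turns configured entries like "192.168.1.0/24" into nets
func parseExclusions(cidrs []string) ([]*net.IPNet, error) {
	nets := make([]*net.IPNet, 0, len(cidrs))
	for _, c := range cidrs {
		_, n, err := net.ParseCIDR(c)
		if err != nil {
			return nil, err
		}
		nets = append(nets, n)
	}
	return nets, nil
}

// isExempt is checked before applying the rate limiter to a client IP
func isExempt(ip net.IP, exempt []*net.IPNet) bool {
	for _, n := range exempt {
		if n.Contains(ip) {
			return true
		}
	}
	return false
}
```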
Nicola Murino
1b4a1fbbe5 add data retention check hook 2021-10-03 15:17:49 +02:00
Nicola Murino
ec81a7ac29 actions: add a specific protocol for data retention 2021-10-03 10:22:47 +02:00
Nicola Murino
22d28a37b6 cmd: improve completion sub-commands 2021-10-03 08:14:57 +02:00
Nicola Murino
cc134cad9a data retention: allow to notify results via e-mail 2021-10-02 22:25:41 +02:00
Nicola Murino
1459150024 WebDAV: improve logs 2021-10-01 20:37:23 +02:00
root
87751e562e Flesh out examples/ldapauth, specifically:
Support 'virtual' users who have no homeDirectory, uidNumber or gidNumber.
Permit read-only access by a user named "anonymous", with any password.
Assume a conventional DIT with users under ou=people,dc=example,dc=com.
Read the LDAP bindPassword from a file (not baked into the code).
Log progress and problems to syslog.
2021-10-01 09:10:13 +02:00
Nicola Murino
e6f969cb04 web UI: update js and css deps 2021-09-30 10:23:25 +02:00
Nicola Murino
ba1febba73 rework user and admin profiles
users and admins can now also update their email and description
2021-09-29 18:46:15 +02:00
Nicola Murino
af8fa7ff81 Docker: remove rsync from default images
it's time to encourage people to switch to more modern alternatives like
rclone
2021-09-27 11:34:11 +02:00
Nicola Murino
4ab2e4088a CI docker: remove armv7 support
building docker images now takes too long and often fails with random
errors. I have to restart the build several times to be able to push
the images to docker hub and gcr
2021-09-27 10:25:21 +02:00
Nicola Murino
da0ccc6426 add SMTP support
it will be used in future update to add email sending capabilities
2021-09-26 20:25:37 +02:00
Maharanjan
0661876e99 Added email field for user account 2021-09-25 19:06:13 +02:00
Nicola Murino
cd72ac4fc9 CI: add armv7 support 2021-09-25 14:14:21 +02:00
Nicola Murino
da5a061b65 add basic REST APIs for data retention
Fixes #495
2021-09-25 12:20:31 +02:00
Nicola Murino
65948a47f1 systemd unit: set LimitNOFILE to 8192 2021-09-19 17:37:18 +02:00
Nicola Murino
bf4b3e6840 httpd: move the check connection middleware before the logger middleware
Fixes #543
2021-09-19 08:14:59 +02:00
Nicola Murino
6ea38188e8 minor fixes and doc improvements 2021-09-18 10:50:17 +02:00
Nicola Murino
b5639a51fd don't generate defender events for HTTP/WebDAV requests with no auth
it is quite common for HTTP clients to send a first request without
the Authorization header and then send the credentials after receiving
a 401 response. We don't want to generate defender events in this case
2021-09-11 18:23:11 +02:00
Nicola Murino
5c34d814d6 fix a possible nil pointer dereference
it can happen when upgrading from very old versions
2021-09-11 14:19:17 +02:00
Nicola Murino
0eca4f1866 update deps 2021-09-08 12:29:47 +02:00
Nicola Murino
b52f829f05 docker: replace mime-support package with media-types
This way the size of the slim image is similar to the previous buster
based images
2021-09-07 21:04:46 +02:00
Nicola Murino
90f64c9f63 distroless image: minor changes 2021-09-07 19:52:28 +02:00
Oleksandr Shvets
c106498dd8 docker: added distroless image 2021-09-06 19:10:28 +02:00
Nicola Murino
7bad65a43e user: add a permission to disable changing api key authentication
also implement the missing APIs to enable/disable api key authentication
2021-09-06 18:46:35 +02:00
Nicola Murino
101c2962ab web client UI: add a permission to disable password change
Fixes #528
2021-09-05 18:49:13 +02:00
Nicola Murino
59140a6d51 add additional data to MFA secrets and fix pointers management 2021-09-05 14:10:12 +02:00
Nicola Murino
b1d54f69d9 admin: fix possible nil pointer dereference
this possible bug was introduced in the previous commit
2021-09-04 13:56:29 +02:00
Nicola Murino
374de07c7b update deps 2021-09-04 13:30:23 +02:00
Nicola Murino
8a4c21b64a add builtin two-factor auth support
The builtin two-factor authentication is based on time-based one-time
passwords (RFC 6238), which work with Authy, Google Authenticator and
other compatible apps.
2021-09-04 12:11:04 +02:00
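A minimal sketch of RFC 6238 validation using the github.com/pquerna/otp library (that SFTPGo uses this exact library is an assumption here):

```go
package sketch

import "github.com/pquerna/otp/totp"

// validatePasscode checks a 6-digit code against the user's shared secret
// using the RFC 6238 defaults (30-second period, SHA1)
func validatePasscode(passcode, secret string) bool {
	return totp.Validate(passcode, secret)
}
```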
Nicola Murino
16ba7ddb34 CI: also runs test cases using GOARCH 386
This way we can detect unaligned 64-bit atomic operations that only happen
on 32-bit platforms
2021-08-28 12:03:23 +02:00
Nicola Murino
bd9506da42 BaseConnection struct: ensure 64 bit alignment
Fixes #516
2021-08-28 10:06:49 +02:00
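These two entries are about the same Go constraint: on 32-bit platforms the compiler only guarantees 32-bit alignment, while sync/atomic requires 64-bit operands to be 64-bit aligned. Placing the int64 first in the struct is the conventional fix; field names below are illustrative.

```go
package sketch

import (
	"sync/atomic"
	"time"
)

type BaseConnection struct {
	lastActivity int64 // must stay first for atomic access on 386/arm
	ID           string
	// ... other fields
}

func (c *BaseConnection) UpdateLastActivity() {
	atomic.StoreInt64(&c.lastActivity, time.Now().UnixNano())
}
```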
Nicola Murino
b903a6e46f data provider: remove default admin
you need to load initial data or set "create_default_admin" to true
and the appropriate env vars if you don't want to use the web admin
setup screen to create the default admin
2021-08-20 10:37:51 +02:00
Nicola Murino
bcf088f586 data provider: update internal caches if the data provider is shared 2021-08-20 09:35:06 +02:00
Nicola Murino
be3857d572 dataprovider: add timestamp fields for users and admins 2021-08-19 15:51:43 +02:00
Nicola Murino
b99d4ce82e fix folders validation
Fixes #510
2021-08-19 11:28:53 +02:00
Nicola Murino
0a558203da improve proxy documentation
Fixes #507
2021-08-18 15:27:07 +02:00
Nicola Murino
5a549a88fe update to Go 1.17 2021-08-18 14:39:56 +02:00
Nicola Murino
fe953d6b38 REST API: add support for API key authentication 2021-08-17 18:08:32 +02:00
erwiese
05c62b9f40 add documentation for defender scores (#500)
Co-authored-by: Erwin Wiesensarter <erwin.wiesensarter@bkg.bund.de>
2021-08-13 15:40:33 +02:00
Nicola Murino
555dc3b0c0 transfer logs: add FTP mode 2021-08-10 13:07:38 +02:00
Nicola Murino
0de0d3308c improve error messages for generic failures 2021-08-08 19:30:21 +02:00
Nicola Murino
a20373b613 add support for auth plugins 2021-08-08 17:09:48 +02:00
Nicola Murino
ced2e16f41 add support for password validation rules
Fixes #494
2021-08-06 18:56:07 +02:00
Nicola Murino
3ac832c8dd docker: bump Alpine to 3.14 2021-08-05 19:38:30 +02:00
Nicola Murino
a3c087456b ftpd: add some security checks 2021-08-05 18:38:15 +02:00
Nicola Murino
419774158a remove PayPal link
I'm having some issues with my PayPal account, remove it for now
2021-08-03 20:36:10 +02:00
Nicola Murino
0503215e7a web client: try to prevent browsers from caching requests
Fixes #493
2021-08-03 19:58:03 +02:00
dependabot[bot]
9541843ff7 Bump github.com/shirou/gopsutil/v3 from 3.21.6 to 3.21.7 (#491)
Bumps [github.com/shirou/gopsutil/v3](https://github.com/shirou/gopsutil) from 3.21.6 to 3.21.7.
- [Release notes](https://github.com/shirou/gopsutil/releases)
- [Commits](https://github.com/shirou/gopsutil/compare/v3.21.6...v3.21.7)

---
updated-dependencies:
- dependency-name: github.com/shirou/gopsutil/v3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-08-02 10:11:09 +02:00
dependabot[bot]
98f22ba110 Bump uraimo/run-on-arch-action from 2.1.0 to 2.1.1 (#490)
Bumps [uraimo/run-on-arch-action](https://github.com/uraimo/run-on-arch-action) from 2.1.0 to 2.1.1.
- [Release notes](https://github.com/uraimo/run-on-arch-action/releases)
- [Commits](https://github.com/uraimo/run-on-arch-action/compare/v2.1.0...v2.1.1)

---
updated-dependencies:
- dependency-name: uraimo/run-on-arch-action
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-08-02 10:10:24 +02:00
Nicola Murino
1e9a19e326 add a howto to use SFTPGo as OpenSSH's SFTP subsystem 2021-07-31 19:09:09 +02:00
mmcgeefeedo
0046c9960a add support to override default admin credentials via env vars 2021-07-31 10:39:53 +02:00
Nicola Murino
7640612a95 update deps 2021-07-31 10:22:38 +02:00
Nicola Murino
a26962f367 add dot and dot dot directories to sftp/ftp file listing 2021-07-31 09:42:23 +02:00
Nicola Murino
f778e47d22 sftpd: minor improvements and docs for the prefix middleware 2021-07-29 20:12:23 +02:00
Nicola Murino
4781921336 fix loading enabled_ssh_commands config key 2021-07-29 00:54:22 +02:00
mmcgeefeedo
3ae8abda9e sftpd: add folder prefix middleware 2021-07-29 00:32:55 +02:00
Nicola Murino
90b324d707 Add a link on the login pages to switch between admin and web client login
The links are hidden if only the web admin or only the web client is
enabled and can also be controlled using the "hide_login_url" setting

Fixes #485
2021-07-27 18:43:00 +02:00
Nicola Murino
3a22aae34f web UI: add support for upload, create dirs, rename, delete 2021-07-26 20:55:49 +02:00
dependabot[bot]
45a0473fec Bump codecov/codecov-action from 1 to 2.0.2 (#486)
* Bump codecov/codecov-action from 1 to 2

Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 1 to 2.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/master/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v1...v2.0.2)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Nicola Murino <nicola.murino@gmail.com>
2021-07-26 11:08:48 +02:00
Nicola Murino
a7313e4492 webdav: add new test cases and fix some lock related issues
Our net/webdav branch now includes the following patches:

https://github.com/golang/net/pull/92
https://github.com/golang/net/pull/93
https://github.com/golang/net/pull/94
2021-07-25 09:55:14 +02:00
Nicola Murino
c41ae116eb improve logging
Fixes #381
2021-07-24 20:11:17 +02:00
Nicola Murino
83c7453957 user API: allow to disable writes ...
... even if the user has permissions for these actions
2021-07-23 21:41:02 +02:00
Nicola Murino
85a47810ff S3: expose more properties, possible backward incompatible change
Before these changes we implicitly set S3ForcePathStyle if an endpoint
was provided.

This can cause issues with some S3 compatible object storages and must
be explicitly set now.

AWS is also deprecating this setting

https://aws.amazon.com/it/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/
2021-07-23 16:56:48 +02:00
Nicola Murino
c997ef876c S3: fix Ceph compatibility
This hack will no longer be needed once Ceph tags a new version and vendors
using it update their servers.

This code is taken from rclone, thank you!

Fixes #483
2021-07-23 11:41:31 +02:00
Nicola Murino
ae8ccadad2 users API: add API to create, delete, rename files and directories 2021-07-23 10:19:27 +02:00
Nicola Murino
5967aa1aa5 FTP: enable ftpserverlib logging and make debug mode configurable 2021-07-20 17:22:08 +02:00
Nicola Murino
c900cde8e4 notifiers plugin: add settings to retry unhandled events 2021-07-20 12:51:21 +02:00
Nicola Murino
13183a9f76 deps cleanup 2021-07-17 15:42:59 +02:00
Nicola Murino
5a568b4077 KMS: allow to provide the master encryption key as string 2021-07-17 15:34:48 +02:00
Nicola Murino
030507a2ce add some docs for the plugin system 2021-07-17 14:14:42 +02:00
Nicola Murino
338301955f move cloud KMS providers to an external plugin 2021-07-17 13:08:05 +02:00
Nicola Murino
6d313f6d8f expose KMS as plugin 2021-07-16 18:22:42 +02:00
Nicola Murino
776dffcf12 kms: improve modularity 2021-07-13 21:17:21 +02:00
Nicola Murino
e1a2451c22 s3: allow to configure the chunk download timeout 2021-07-11 18:39:45 +02:00
Nicola Murino
7344366ce8 sftpd: remove workarounds for directory listing
The underlying issue was fixed in pkg/sftp 1.13.2
2021-07-11 16:26:40 +02:00
Nicola Murino
bd5191dfc5 add experimental plugin system 2021-07-11 15:26:51 +02:00
Nicola Murino
bfa4085932 improve docs 2021-07-03 18:23:36 +02:00
Nicola Murino
302ec2558c add notifications for mkdir/rmdir 2021-07-03 18:07:55 +02:00
Nicola Murino
ff19879ffd allow to use a persistent signing key for JWT and CSRF tokens
Fixes #466
2021-07-01 20:17:40 +02:00
Nicola Murino
04001f7ad3 FTP: try to return more specific error codes/messages for some errors
We now return 552 code for quota exceeded errors and 553 in the following
cases:

- filename denied by a filter
- no upload permission
- no overwrite permission
- pre upload hook error

Fixes #442
2021-06-28 19:40:04 +02:00
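A sketch of such a mapping; the sentinel errors are hypothetical stand-ins for SFTPGo's internal ones:

```go
package sketch

import "errors"

// hypothetical sentinel errors
var (
	errQuotaExceeded    = errors.New("quota exceeded")
	errPermissionDenied = errors.New("permission denied")
)

// ftpStatusCode maps internal failures to the more specific FTP reply codes
// described above: 552 for quota, 553 for denied names/permissions
func ftpStatusCode(err error) int {
	switch {
	case errors.Is(err, errQuotaExceeded):
		return 552
	case errors.Is(err, errPermissionDenied):
		return 553
	default:
		return 550
	}
}
```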
Nicola Murino
076b2f0ee0 modules: add v2 support 2021-06-26 07:31:41 +02:00
Nicola Murino
93dfb03eaf GCS: add a trailing / to "directories"
This way SFTPGo should be compatible with Google Cloud console.

This change should be backward compatibile, testing is welcome

Fixes #464
2021-06-24 19:36:01 +02:00
Nicola Murino
e09bdd43d4 defender: fix GetHost for blocklist entries too 2021-06-20 21:57:19 +02:00
Nicola Murino
ac8d8a3da1 update portable mode docs 2021-06-19 19:40:53 +02:00
Manuel Reithuber
a4157e83e9 template fsconfig: updated form-group css classes so we can further improve onFilesystemChanged()
it doesn't reference any vfs providers at all anymore :)
2021-06-19 19:27:54 +02:00
Manuel Reithuber
13f23838a1 template fsconfig.html: using string provider name in onFilesystemChanged() 2021-06-19 19:27:54 +02:00
Manuel Reithuber
fd4c388b23 added vfs.ListProviders() and using it in template fsconfig.html (added a new ListFSProviders template function for that) 2021-06-19 19:27:54 +02:00
Manuel Reithuber
88b10da596 updated utils.LoadTemplate() to call template.ParseFiles() directly and added a way to specify a base template (will be used in the next commit) 2021-06-19 19:27:54 +02:00
Manuel Reithuber
c07dc74d48 template fsconfig.html: simplified code in onFilesystemChanged() 2021-06-19 19:27:54 +02:00
Manuel Reithuber
b48e01155c FilesystemProvider: added .Name() which reverses vfs.GetProviderByName(), and added .ShortInfo(); using .ShortInfo() in User.GetInfoString() 2021-06-19 19:27:54 +02:00
Manuel Reithuber
0ff010cc94 added vfs.GetProviderByName(), using it in for sftpgo portable and for parsing the webadmin form field 2021-06-19 19:27:54 +02:00
Nicola Murino
81aac15a6c defender: don't return expired hosts/banned ip in GetHost too 2021-06-19 18:51:33 +02:00
Nicola Murino
c1b862394d move other errors to utils package 2021-06-19 13:06:01 +02:00
Manuel Reithuber
f19937b715 move Filesystem config validation to vfs 2021-06-19 12:24:43 +02:00
Nicola Murino
f2f612b450 defender: don't return expired hosts/banned ip 2021-06-19 11:02:46 +02:00
Nicola Murino
0c2640bbab update deps 2021-06-19 09:56:49 +02:00
Nicola Murino
3bb0ca1d2b config: remove deprecated configuration keys 2021-06-19 09:47:06 +02:00
Nicola Murino
d5b42f72e2 squash database migrations, remove compat data provider code 2021-06-19 09:03:20 +02:00
Nicola Murino
62744e081b get HTTPD binding from env: respect the documented default 2021-06-17 15:57:41 +02:00
Nicola Murino
9dcaf1555f back to development 2021-06-16 19:28:25 +02:00
Nicola Murino
a09cf5c8b9 set version to 2.1.0 2021-06-16 17:45:09 +02:00
Nicola Murino
47ebe42375 FTP: fix LIST on files 2021-06-15 06:38:56 +02:00
Nicola Murino
4d97ab9eb9 Let's Encrypt tutorial: use sudo where appropriate 2021-06-14 22:35:08 +02:00
Nicola Murino
8ed13dc4a9 docs: document how to use Let's Encrypt Certificates 2021-06-14 22:05:55 +02:00
Nicola Murino
3b66dd0873 Linux packages: fix static resources copy 2021-06-14 14:18:15 +02:00
Nicola Murino
d992f0ffcc update deps 2021-06-13 08:54:22 +02:00
Nicola Murino
6c5a7e8f13 improve installation docs, add paypal link to fundings 2021-06-12 10:05:25 +02:00
Nicola Murino
9d3d7db29c azblob: store SAS URL as kms.Secret 2021-06-11 22:27:36 +02:00
Nicola Murino
8607788975 s3fs: use "application/x-directory" as folder mime type
This change improves s3fs-fuse compatibility

Fixes #451
2021-06-08 13:52:36 +02:00
Nicola Murino
4be6307d87 webadmin: add defender page 2021-06-08 13:24:28 +02:00
Nicola Murino
feec2118bb improve defender and quotas REST API 2021-06-07 21:52:43 +02:00
Nicola Murino
43182fc25e OpenAPI: add users API
These new APIs match the web client features.

I'm aware that some APIs do not follow REST best practices.

I want to avoid things like "/user/folders/<path>"

where "path" must be encoded and making it optional creates issues, so
I defined resources as query parameters instead of path parameters
2021-06-05 16:07:09 +02:00
Nicola Murino
976f588863 improve docs to enable FTP/WebDAV
Fixes #447
2021-06-02 09:49:31 +02:00
Nicola Murino
575bcf1f03 add remote address to transfer and commands logs 2021-06-01 22:28:43 +02:00
Nicola Murino
969c992bfd pre-upload: execute the hook just before opening the target file 2021-05-31 22:40:47 +02:00
Nicola Murino
c1239fbf59 pre-upload action: add file open flags
By reading the flags, the hook receiver can detect whether the client
wants to truncate the target file
2021-05-31 22:33:23 +02:00
Nicola Murino
c63b923ec3 cryptfs: add support for atomic uploads 2021-05-31 21:45:29 +02:00
dependabot[bot]
574c4029fc Bump uraimo/run-on-arch-action from 2.0.9 to 2.0.10 (#444)
Bumps [uraimo/run-on-arch-action](https://github.com/uraimo/run-on-arch-action) from 2.0.9 to 2.0.10.
- [Release notes](https://github.com/uraimo/run-on-arch-action/releases)
- [Commits](https://github.com/uraimo/run-on-arch-action/compare/v2.0.9...v2.0.10)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-31 10:05:25 +02:00
Nicola Murino
423d8306be webclient: allow to download multiple files as zip 2021-05-30 23:07:46 +02:00
Nicola Murino
fc7066a25c cross device rename: remove the source if copy succeeded 2021-05-27 22:23:14 +02:00
Nicola Murino
e1bf46c6a5 local fs rename: if it fails with a cross device error try a copy
I don't want to add a new setting for this, at least until we get the
first complaint about a slow rename :)

Fixes #440
2021-05-27 20:14:12 +02:00
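A sketch of the fallback described above, assuming a Unix-like platform where the failure surfaces as syscall.EXDEV:

```go
package sketch

import (
	"errors"
	"io"
	"os"
	"syscall"
)

// renameOrCopy falls back to copy+delete when the kernel refuses to rename
// across filesystems (EXDEV), e.g. between two mounted volumes
func renameOrCopy(src, dst string) error {
	err := os.Rename(src, dst)
	if !errors.Is(err, syscall.EXDEV) {
		return err // nil on success, or a non-cross-device error
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	if _, err = io.Copy(out, in); err != nil {
		out.Close()
		return err
	}
	if err = out.Close(); err != nil {
		return err
	}
	return os.Remove(src) // remove the source only after the copy succeeded
}
```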
Nicola Murino
3b46e6a6fb add support for a global temp path
Fixes #436
2021-05-27 15:38:27 +02:00
Nicola Murino
7a85c66ee7 webclient: defer file list rendering
combined with server side processing I can now list a directory with
about 100,000 files in less than 2 seconds without losing client-side
filtering and pagination
2021-05-27 09:40:46 +02:00
Nicola Murino
25a44030f9 actions: add pre-download and pre-upload
Downloads and uploads can be denied based on hook response
2021-05-26 07:48:37 +02:00
Nicola Murino
600268ebb8 httpclient: allow to set custom headers 2021-05-25 08:36:01 +02:00
Nicola Murino
1223957f91 webclient: use different icons based on the file extension 2021-05-24 19:09:03 +02:00
Nicola Murino
15cde2dd1a improve test coverage 2021-05-23 22:29:55 +02:00
Nicola Murino
50e441849a try to make the web admin more user friendly
removed all the textareas with fields separated using "::".
This should, hopefully, improve user experience
2021-05-23 22:02:01 +02:00
Nicola Murino
02bb09ec01 remove deprecated file extensions filters
these filters were deprecated a long time ago; everyone should use
pattern filters now
2021-05-22 12:28:05 +02:00
Nicola Murino
402947a43c update deps 2021-05-22 10:42:30 +02:00
Nicola Murino
b9bc8d722d try to improve web client credentials page
I should do the same for the admin page too
2021-05-22 09:54:27 +02:00
Nicola Murino
0cb5c49cf3 map path resolution errors to Permission errors
this way the affected paths will be ignored in WebDAV

Fixes #432
2021-05-21 13:04:22 +02:00
Nicola Murino
9fc4be6d40 minor doc fixes 2021-05-20 18:34:38 +02:00
Nicola Murino
ecfed4dc04 Add a Getting Started Guide 2021-05-20 18:16:27 +02:00
dependabot[bot]
b415e4d98f Bump github.com/lib/pq from 1.10.1 to 1.10.2 (#429)
Bumps [github.com/lib/pq](https://github.com/lib/pq) from 1.10.1 to 1.10.2.
- [Release notes](https://github.com/lib/pq/releases)
- [Commits](https://github.com/lib/pq/compare/v1.10.1...v1.10.2)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-17 09:27:16 +02:00
Nicola Murino
7d059efe06 add an example backup script 2021-05-16 22:28:08 +02:00
Nicola Murino
60cfbd2989 setup: auto login after creating the first admin 2021-05-16 21:36:57 +02:00
Nicola Murino
8ecf64f481 httpclient: accepts timeouts as float
Fixes #428
2021-05-16 12:50:06 +02:00
Nicola Murino
019b0f2fd5 http cookie: add max-age and samesite
update deps too
2021-05-16 09:13:00 +02:00
Nicola Murino
15d6cd144a another try to better understand the random webdav test case failure 2021-05-15 08:56:36 +02:00
Nicola Murino
f59f62317e sftpd: fix file upload resume detection
WinSCP does not set the APPEND flag while resuming a file upload,
so we detect a file upload resume if the TRUNCATE flag is not set.
The APPEND flag is now ignored.

Fixes #420
2021-05-15 08:39:01 +02:00
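The heuristic reduces to a flags check; a sketch, expressed with os flag constants for brevity (the real SFTP pflags differ):

```go
package sketch

import "os"

// isUploadResume mirrors the heuristic above: a write without O_TRUNC is
// treated as a resume, and O_APPEND is deliberately ignored
func isUploadResume(flags int) bool {
	writing := flags&(os.O_WRONLY|os.O_RDWR) != 0
	return writing && flags&os.O_TRUNC == 0
}
```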
Nicola Murino
f2b93c0402 add a setup screen to create the first admin user
If you prefer to auto-create the first admin you can enable the
"create_default_admin" configuration key and SFTPGo will work as before.

You can also create the first admin by loading initial data: now you can
set both the username and the password, while before you could only change the password
2021-05-14 19:21:15 +02:00
Nicola Murino
0540b8780e redact credentials within hooks
go-retryablehttp does not redact credentials, so we still log them
when we use it

https://github.com/hashicorp/go-retryablehttp/pull/133
2021-05-12 22:44:17 +02:00
Nicola Murino
fa45c9c138 allow to execute actions for file operations and SSH commands synchronously
The actions to run synchronously can be configured via the `execute_sync`
configuration key.

Executing an action synchronously means that SFTPGo will not return a result
code to the client until your hook has completed its execution.

Fixes #409
2021-05-11 12:45:14 +02:00
Nicola Murino
b67cd0d3df ensure no client is connected before running max connections test cases 2021-05-11 08:04:57 +02:00
Nicola Murino
c8f7fc9bc9 httpd/webdav: add a list of hosts allowed to send proxy headers
X-Forwarded-For, X-Real-IP and X-Forwarded-Proto headers will be ignored
for hosts not included in this list.

This is a backward incompatible change; before, the proxy headers were
always used
2021-05-11 06:54:06 +02:00
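A sketch of the guard: the forwarded headers are stripped unless the peer is in the trusted list (header names per the commit message, the rest is illustrative):

```go
package sketch

import (
	"net"
	"net/http"
)

// proxyHeaderGuard drops forwarded headers from untrusted peers so a client
// cannot spoof its source IP or protocol
func proxyHeaderGuard(trusted []*net.IPNet, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		host, _, err := net.SplitHostPort(r.RemoteAddr)
		ip := net.ParseIP(host)
		allowed := false
		if err == nil && ip != nil {
			for _, n := range trusted {
				if n.Contains(ip) {
					allowed = true
					break
				}
			}
		}
		if !allowed {
			r.Header.Del("X-Forwarded-For")
			r.Header.Del("X-Real-IP")
			r.Header.Del("X-Forwarded-Proto")
		}
		next.ServeHTTP(w, r)
	})
}
```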
dependabot[bot]
f1b998ce16 Bump github.com/otiai10/copy from 1.5.1 to 1.6.0 (#414)
Bumps [github.com/otiai10/copy](https://github.com/otiai10/copy) from 1.5.1 to 1.6.0.
- [Release notes](https://github.com/otiai10/copy/releases)
- [Commits](https://github.com/otiai10/copy/compare/v1.5.1...v1.6.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-10 14:02:09 +02:00
dependabot[bot]
aaa758e978 Bump github.com/minio/sio from 0.2.1 to 0.3.0 (#412)
Bumps [github.com/minio/sio](https://github.com/minio/sio) from 0.2.1 to 0.3.0.
- [Release notes](https://github.com/minio/sio/releases)
- [Commits](https://github.com/minio/sio/compare/v0.2.1...v0.3.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-10 11:34:01 +02:00
dependabot[bot]
716946a148 Bump github.com/aws/aws-sdk-go from 1.38.35 to 1.38.36 (#413)
Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.38.35 to 1.38.36.
- [Release notes](https://github.com/aws/aws-sdk-go/releases)
- [Changelog](https://github.com/aws/aws-sdk-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go/compare/v1.38.35...v1.38.36)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-10 11:10:58 +02:00
Nicola Murino
15934d72cc webdav test: increase log size
the last 10 lines are not enough to understand the issue, try with 20
2021-05-09 10:09:25 +02:00
Nicola Murino
8f6cdacd00 allow to limit the number of per-host connections 2021-05-08 19:45:21 +02:00
Nicola Murino
8f736da4b8 webdav test: add some more logs
QuotaLimits test case sometimes fails when running in CI, try to
understand the reason
2021-05-07 22:24:06 +02:00
Nicola Murino
4ea4202b99 httpd/webdav: use a custom listener with read and write deadlines 2021-05-07 20:41:20 +02:00
Nicola Murino
d4bfc3f6b5 fix lint configuration and a warning 2021-05-06 22:06:22 +02:00
Nicola Murino
23d9ebfc91 add a basic front-end web interface for end-users
Fixes #339 #321 #398
2021-05-06 21:35:43 +02:00
dependabot[bot]
5c99f4fb60 Bump github.com/shirou/gopsutil/v3 from 3.21.3 to 3.21.4 (#406)
Bumps [github.com/shirou/gopsutil/v3](https://github.com/shirou/gopsutil) from 3.21.3 to 3.21.4.
- [Release notes](https://github.com/shirou/gopsutil/releases)
- [Commits](https://github.com/shirou/gopsutil/compare/v3.21.3...v3.21.4)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-03 14:44:07 +02:00
dependabot[bot]
2263c7e20f Bump github.com/hashicorp/go-retryablehttp from 0.6.8 to 0.7.0 (#405)
Bumps [github.com/hashicorp/go-retryablehttp](https://github.com/hashicorp/go-retryablehttp) from 0.6.8 to 0.7.0.
- [Release notes](https://github.com/hashicorp/go-retryablehttp/releases)
- [Commits](https://github.com/hashicorp/go-retryablehttp/compare/v0.6.8...v0.7.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-03 14:43:53 +02:00
dependabot[bot]
515b2d917e Bump github.com/fclairamb/ftpserverlib from 0.13.0 to 0.13.1 (#404)
Bumps [github.com/fclairamb/ftpserverlib](https://github.com/fclairamb/ftpserverlib) from 0.13.0 to 0.13.1.
- [Release notes](https://github.com/fclairamb/ftpserverlib/releases)
- [Commits](https://github.com/fclairamb/ftpserverlib/compare/v0.13.0...v0.13.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-03 13:29:54 +02:00
dependabot[bot]
af4723356d Bump github.com/lestrrat-go/jwx from 1.1.7 to 1.2.0 (#403)
Bumps [github.com/lestrrat-go/jwx](https://github.com/lestrrat-go/jwx) from 1.1.7 to 1.2.0.
- [Release notes](https://github.com/lestrrat-go/jwx/releases)
- [Changelog](https://github.com/lestrrat-go/jwx/blob/main/Changes)
- [Commits](https://github.com/lestrrat-go/jwx/compare/v1.1.7...v1.2.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-03 13:29:16 +02:00
dependabot[bot]
068dd34a38 Bump github.com/aws/aws-sdk-go from 1.38.25 to 1.38.30 (#402)
Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.38.25 to 1.38.30.
- [Release notes](https://github.com/aws/aws-sdk-go/releases)
- [Changelog](https://github.com/aws/aws-sdk-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go/compare/v1.38.25...v1.38.30)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-03 11:41:25 +02:00
dependabot[bot]
b16a5c2caf Bump github.com/go-chi/chi/v5 from 5.0.2 to 5.0.3 (#401)
Bumps [github.com/go-chi/chi/v5](https://github.com/go-chi/chi) from 5.0.2 to 5.0.3.
- [Release notes](https://github.com/go-chi/chi/releases)
- [Changelog](https://github.com/go-chi/chi/blob/master/CHANGELOG.md)
- [Commits](https://github.com/go-chi/chi/compare/v5.0.2...v5.0.3)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-05-03 11:41:09 +02:00
Nicola Murino
a383957cfa OpenAPI: document that also folder-quota-update supports partial updates 2021-04-28 19:33:32 +02:00
Nicola Murino
00f97aabb4 OpenAPI: document that quota-update support partial updates
If the update mode is "add" and you pass only used_quota_size or only
used_quota_files the missing field will remain unchanged
2021-04-28 19:16:15 +02:00
Nicola Murino
32db0787bb add an example script for scheduled quota updates 2021-04-26 21:53:09 +02:00
Nicola Murino
1275328fdf Authentication errors: try to avoid user enumeration
Fixes #395
2021-04-26 19:48:21 +02:00
Nicola Murino
7778716fa7 update crypto and net dependencies 2021-04-25 18:12:02 +02:00
dependabot[bot]
77476d0f56 Bump github.com/aws/aws-sdk-go from 1.38.21 to 1.38.25 (#394)
Bumps [github.com/aws/aws-sdk-go](https://github.com/aws/aws-sdk-go) from 1.38.21 to 1.38.25.
- [Release notes](https://github.com/aws/aws-sdk-go/releases)
- [Changelog](https://github.com/aws/aws-sdk-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go/compare/v1.38.21...v1.38.25)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-04-25 17:07:59 +02:00
dependabot[bot]
c7a1fc2996 Bump cloud.google.com/go/storage from 1.14.0 to 1.15.0 (#392)
Bumps [cloud.google.com/go/storage](https://github.com/googleapis/google-cloud-go) from 1.14.0 to 1.15.0.
- [Release notes](https://github.com/googleapis/google-cloud-go/releases)
- [Changelog](https://github.com/googleapis/google-cloud-go/blob/master/CHANGES.md)
- [Commits](https://github.com/googleapis/google-cloud-go/compare/spanner/v1.14.0...spanner/v1.15.0)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-04-25 17:07:36 +02:00
dependabot[bot]
e7d8e73be8 Bump github.com/lib/pq from 1.10.0 to 1.10.1 (#391)
Bumps [github.com/lib/pq](https://github.com/lib/pq) from 1.10.0 to 1.10.1.
- [Release notes](https://github.com/lib/pq/releases)
- [Commits](https://github.com/lib/pq/compare/v1.10.0...v1.10.1)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-04-25 17:07:26 +02:00
dependabot[bot]
3ee27f4370 Bump golangci/golangci-lint-action from v2 to v2.5.2 (#389)
Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from v2 to v2.5.2.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v2...5c56cd6c9dc07901af25baab6f2b0d9f3b7c3018)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-04-25 16:41:17 +02:00
Nicola Murino
92424cd1c2 dependabot: limit the number of open pull requests 2021-04-25 16:39:41 +02:00
Nicola Murino
0190dad984 docker: update github script to v4 2021-04-25 15:59:29 +02:00
Nicola Murino
198258f4e7 add dependabot
Fixes #388
2021-04-25 15:54:19 +02:00
Nicola Murino
5be4b6bd44 localfs: fix subdir check if the user has the root dir as home 2021-04-25 14:36:29 +02:00
Nicola Murino
3941255733 docs: fix a typo 2021-04-25 09:42:19 +02:00
Nicola Murino
46998252e5 use bcrypt as default password hashing algo
argon2id has a high memory cost and, if not properly tuned, it can lead to
resource starvation.

Advanced users can still configure and use argon2id.
Passwords stored as argon2id will continue to work
2021-04-25 09:38:33 +02:00
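For reference, the golang.org/x/crypto/bcrypt API this relies on; the cost shown is the library default, not necessarily SFTPGo's. Stored argon2id hashes keep verifying because the scheme is encoded in the hash prefix ($2a$ vs $argon2id$).

```go
package sketch

import "golang.org/x/crypto/bcrypt"

func hashPassword(plain string) (string, error) {
	h, err := bcrypt.GenerateFromPassword([]byte(plain), bcrypt.DefaultCost)
	return string(h), err
}

func checkPassword(plain, hashed string) bool {
	return bcrypt.CompareHashAndPassword([]byte(hashed), []byte(plain)) == nil
}
```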
Nicola Murino
74b51f0ad3 update nfpm 2021-04-23 22:53:13 +02:00
Nicola Murino
b11865f971 CI: add support for darwin/arm64
I have no way to test the produced binaries on a real Apple Silicon M1
2021-04-20 23:00:27 +02:00
Nicola Murino
f4369cdbef fix max connections check
Also make sure to close the ssh client connection in test cases
2021-04-20 18:12:16 +02:00
Nicola Murino
92638ce93d add support for hashing password using bcrypt
argon2id remains the default
2021-04-20 13:55:09 +02:00
Nicola Murino
6ef85d6026 add, optional, in memory password caching
Verifying argon2 passwords has a high memory and computational cost;
by enabling in-memory password caching you reduce this cost
2021-04-20 09:39:36 +02:00
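A minimal sketch of the caching idea: remember the last successfully verified password per user so the expensive KDF runs only when credentials change. Structure and names are assumptions.

```go
package sketch

import (
	"crypto/subtle"
	"sync"
)

// passwordCache stores the last successfully verified plaintext per user
type passwordCache struct {
	mu    sync.RWMutex
	cache map[string]string
}

// Check returns (match, found); on found==false the caller falls back to the
// full argon2/bcrypt verification and calls Add on success
func (c *passwordCache) Check(username, password string) (bool, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	cached, ok := c.cache[username]
	if !ok {
		return false, false
	}
	return subtle.ConstantTimeCompare([]byte(cached), []byte(password)) == 1, true
}

func (c *passwordCache) Add(username, password string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.cache[username] = password
}
```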
Nicola Murino
bc88503f25 sql providers: reuse the same context where appropriate 2021-04-19 18:58:53 +02:00
Nicola Murino
47317bed9b make sure that Retry-After header has a value greater than zero 2021-04-19 09:16:27 +02:00
Nicola Murino
f45c89fc46 add rate limiting support for REST API/web admin too 2021-04-19 08:14:04 +02:00
Nicola Murino
112e3b2fc2 add rate limiting support 2021-04-18 12:31:06 +02:00
Nicola Murino
124c471a2b FTPD: make sure that the passive ip, if provided, is valid
The server will refuse to start if the provided passive ip is not a
valid IPv4 address.

Fixes #376
2021-04-16 15:08:10 +02:00
Nicola Murino
683ba6cd5b get binding from env: respect the documented default
Fixes #377
2021-04-16 13:35:13 +02:00
Nicola Murino
21fbcf4556 FTP: add support for TLS session resumption on the data connection
Fixes #374
2021-04-16 09:00:40 +02:00
Nicola Murino
2ffefbeb33 add sql_tables_prefix also to indexes and constraints
This allows you to reuse the same database for multiple SFTPGo instances

Fixes #372
2021-04-12 20:00:49 +02:00
Nicola Murino
c844fc7477 add support for delayed quota update
If a lot of uploads are being closed, accumulating quota updates can
save you many queries to the data provider
2021-04-11 08:38:43 +02:00
Nicola Murino
4b98f37df1 back to development 2021-04-10 09:40:02 +02:00
Nicola Murino
0bc4db9950 web admin: make base url configurable 2021-04-09 22:02:48 +02:00
Nicola Murino
5acf29dae6 CI: replace deprecated actions with gh CLI 2021-04-08 21:29:09 +02:00
Nicola Murino
e9a42cd508 release workflow: re-add build Linux bundle
it is used as source for PPA packages
2021-04-08 08:38:51 +02:00
Nicola Murino
ed26d68948 portable mode: add SFTP buffer size 2021-04-07 19:47:39 +02:00
Nicola Murino
b389f93d97 allow to select sha256-simd using an env var 2021-04-07 16:25:58 +02:00
Nicola Murino
150aebf8d2 CI: replace xgo with QEMU
currently xgo doesn't allow choosing the build OS; this could cause
unexpected issues, for example the v2.0.3 packages for arm64 and ppc64
don't run on Ubuntu 18.04
2021-04-07 15:12:09 +02:00
Nicola Murino
74e0223eb9 remove sha256-simd usage
sha256-simd is now deprecated

https://github.com/minio/sha256-simd/issues/58

This could slow down sha256 computation on some CPUs
2021-04-05 18:23:40 +02:00
Nicola Murino
0823928f98 allow to disable login filesystem checks
SFTPGo requires that the user's home directory, virtual folder root,
and intermediate paths to virtual folders exist to work properly.
If you already know that the required directories exist, disabling
these checks will speed up login.
2021-04-05 17:57:30 +02:00
Nicola Murino
f895059660 web: add responsive table style to connections too
Fixed a small issue for sftpfs too
2021-04-05 11:28:28 +02:00
Nicola Murino
acb4310c11 add a startup hook 2021-04-05 10:07:59 +02:00
Nicola Murino
fdf3f23df5 allow to disable some hooks on a per-user basis
This way you can, for example, mix external and internal users
2021-04-04 22:32:25 +02:00
Nicola Murino
d92861a8e8 sftpfs: disable buffering for downloads if concurrent reads are disabled 2021-04-04 09:53:29 +02:00
Nicola Murino
1ee843757d fix OpenAPI schema 2021-04-03 17:09:08 +02:00
Nicola Murino
ea26d7786c sftpfs: add buffering support
this way we improve performance over high latency networks
2021-04-03 16:00:55 +02:00
Nicola Murino
6eb43baf3d web: fix content type for folders form
Fixes #367
2021-04-01 19:42:18 +02:00
Nicola Murino
2f56375121 improve SFTP loop detection 2021-04-01 18:53:48 +02:00
Nicola Murino
3bfd7e4d17 sftpfs: try to detect if an SFTP user point to itself
this will cause an infinite loop on login. The check should be improved
2021-03-29 21:53:44 +02:00
Nicola Murino
e1c66d96a1 back to development 2021-03-28 22:25:24 +02:00
Nicola Murino
a43854ae9b OpenAPI: document that secrets are automatically encrypted before saving 2021-03-28 11:23:06 +02:00
Nicola Murino
183bedd6ed webui: add responsive extension 2021-03-28 11:02:11 +02:00
Nicola Murino
2a89a8f664 webui: minor improvements 2021-03-27 22:23:01 +01:00
Nicola Murino
5cd27ce529 document Cockroach driver name 2021-03-27 19:41:00 +01:00
Nicola Murino
cee2e18caf convertusers: fix permissions
Fixes #363
2021-03-27 19:18:01 +01:00
Nicola Murino
9ad750da54 WebDAV: try to preserve the lock fs as much as possible 2021-03-27 19:10:27 +01:00
Nicola Murino
5f49af1780 external auth: allow to inspect and preserve an existing user 2021-03-26 15:19:01 +01:00
Nicola Murino
d5f092284a improve signals handling 2021-03-25 19:31:21 +01:00
Nicola Murino
0e50310a66 add a test case for UID/GID limits 2021-03-25 17:30:39 +01:00
Mike Unitskyi
5939ac4801 Increase uid:gid limits (#362)
Fixes #361
2021-03-25 17:11:42 +01:00
Nicola Murino
db274f1093 crdb: fix transactions handling 2021-03-25 09:07:56 +01:00
Nicola Murino
6bc5c64a3a webdav: ignore path, perm and not exist errors in PROPFIND
Fixes #340
2021-03-24 13:32:20 +01:00
Nicola Murino
70e035315e data provider: add CockroachDB support 2021-03-23 19:14:15 +01:00
Nicola Murino
8a1249878a OpenAPI schema: remove some superfluous required definitions
Fixes #356
2021-03-22 19:22:41 +01:00
Nicola Murino
5e375f56dd kms: add a lock, secrets could be modified concurrently for cached users
also reduce the size of the JSON payload by omitting empty secrets
2021-03-22 19:03:25 +01:00
Nicola Murino
28f1d66ae5 link the Active Directory example in the howto section 2021-03-22 09:52:05 +01:00
Omar Ramos
79060d37a7 Added in a first draft of the page related to sftpgo-ldap-http-server. 2021-03-22 08:59:29 +01:00
Nicola Murino
800e64404b update deps 2021-03-22 08:55:35 +01:00
Nicola Murino
54c0c1b80d Windows: manually check if we can bind on the configured port/ports
Windows allows the coexistence of three types of sockets on the same
transport-layer service port, for example, 127.0.0.1:8080, [::1]:8080
and [::ffff:0.0.0.0]:8080

Go doesn't properly handle this, so we use an ugly hack

Fixes #350
2021-03-21 22:21:04 +01:00
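A sketch of the manual probe: actually listen and close instead of trusting a single check; the real code also has to probe the IPv4/IPv6 variants mentioned above.

```go
package sketch

import (
	"fmt"
	"net"
)

// probeBinding performs a real listen+close: on Windows a port can appear
// free to one address family while another socket type already holds it
func probeBinding(address string, port int) error {
	ln, err := net.Listen("tcp", net.JoinHostPort(address, fmt.Sprintf("%d", port)))
	if err != nil {
		return err
	}
	return ln.Close()
}
```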
Nicola Murino
f7c7e2951d initialize argon params before creating the data provider
Fixes #349
2021-03-21 19:58:57 +01:00
Nicola Murino
f249286cb1 docs: add some notes about the new virtual folders support
fixes a failing test case for the memory provider
2021-03-21 19:47:11 +01:00
Nicola Murino
d6dc3a507e extend virtual folders support to all storage backends
Fixes #241
2021-03-21 19:15:47 +01:00
Nicola Murino
0286da2356 try to auto create virtual folders if missing 2021-03-10 22:30:56 +01:00
Nicola Murino
76c08baaa0 httpclient: load CA certificates only when required
on Windows x509.SystemCertPool is not implemented and therefore we end
up with an empty certificate pool if we load the CA certificates
unconditionally
2021-03-10 21:45:48 +01:00
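A sketch of the conditional load; function name and config shape are assumptions:

```go
package sketch

import (
	"crypto/x509"
	"os"
)

// loadRootCAs touches the system pool only when extra CA files are
// configured: on Windows x509.SystemCertPool is not available, and replacing
// the pool unconditionally would leave it empty there
func loadRootCAs(caFiles []string) (*x509.CertPool, error) {
	if len(caFiles) == 0 {
		return nil, nil // nil means "use the default system verification"
	}
	pool, err := x509.SystemCertPool()
	if err != nil || pool == nil {
		pool = x509.NewCertPool()
	}
	for _, f := range caFiles {
		pem, err := os.ReadFile(f)
		if err != nil {
			return nil, err
		}
		pool.AppendCertsFromPEM(pem)
	}
	return pool, nil
}
```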
Nicola Murino
67ea75cf03 improve OpenAPI schema so it is better rendered on Stoplight 2021-03-07 18:41:56 +01:00
Nicola Murino
4c658bb6f0 webdav: add prefix support 2021-03-07 17:10:45 +01:00
Nicola Murino
1ab02d5891 OpenAPI: improve schema
Fix some lint warnings
2021-03-06 17:08:24 +01:00
Nicola Murino
055506e518 sftpfs: add an option to disable concurrent reads 2021-03-06 15:41:40 +01:00
Nicola Murino
88122ba2f8 update jwtauth to v5 2021-03-05 18:50:45 +01:00
Nicola Murino
bfe0c18976 portable mode: fix WebDAV support 2021-03-05 08:41:24 +01:00
Nicola Murino
df41f0c556 add a setting to skip natural keys validation
Enabling the "skip_natural_keys_validation" data provider setting,
the natural keys for the REST API/Web Admin, such as usernames, admin names
and folder names, are not restricted to unreserved URI chars

Fixes #334 #308
2021-03-04 09:48:53 +01:00
Nicola Murino
561c5021dd add Segmed to the sponsors section 2021-03-03 18:55:47 +01:00
Nicola Murino
ad07fc78eb update nfpm and deps 2021-03-03 18:39:58 +01:00
Nicola Murino
3243181c5f Add a link to the OpenAPI schema where relevant
Fixes #329
2021-03-01 22:22:05 +01:00
Nicola Murino
895117718e SSH system command: add os separator to the resolved path when appropriate
Fixes #327
2021-03-01 22:10:45 +01:00
Nicola Murino
534b253c20 WebDAV: improve TLS certificate authentication
For each user you can now configure:

- TLS certificate auth
- TLS certificate auth and password
- Password auth

For TLS certificate auth, the certificate common name is used as
username
2021-03-01 19:28:11 +01:00
Nicola Murino
901cafc6da metrics: reduce complexity for AddLoginResult method
fix a gocyclo warning
2021-02-28 12:23:48 +01:00
Nicola Murino
a6e36e7cad FTP: improve TLS certificate authentication
For each user you can now configure:

- TLS certificate auth
- TLS certificate auth and password
- Password auth

For TLS auth, the certificate common name must match the name provided
using the "USER" FTP command
2021-02-28 12:10:40 +01:00
Nicola Murino
b566457e12 change license to AGPL-3 2021-02-26 19:47:48 +01:00
Nicola Murino
ca3e15578e Use new methods in the io and os packages instead of ioutil ones
ioutil is deprecated in Go 1.16, and since SFTPGo is an application, not
a library, we have no reason to keep compatibility with old Go
versions.

Go 1.16 fixes some CIFS-related issues too.
2021-02-25 21:53:04 +01:00
Nicola Murino
4b2edff6dd update deps 2021-02-24 22:27:52 +01:00
Nicola Murino
2146b83343 data providers: add filesystem to folder ...
... and some descriptive fields.
The filesystem support for virtual folders will be implemented in
future commits
2021-02-24 19:40:29 +01:00
Nicola Murino
3e1b07324d GCS: remove compat code 2021-02-22 22:06:23 +01:00
Nicola Murino
8cc2dfe5c2 update pkg/sftp
we don't need my branch anymore now that all the required features for
the sftpfs are available upstream too
2021-02-22 16:27:45 +01:00
Nicola Murino
78a837e8f1 remove other compat code 2021-02-22 09:13:26 +01:00
Nicola Murino
49830516be squash database migrations and remove compat code 2021-02-22 08:37:50 +01:00
Nicola Murino
41e1d9e68a use Go 1.16 for CI and Docker images 2021-02-21 12:01:37 +01:00
Nicola Murino
5da4f931c5 TLS: allow configuring cipher suites
Fixes #316
2021-02-18 20:17:16 +01:00
Nicola Murino
552a96533e back to development 2021-02-17 09:45:20 +01:00
Nicola Murino
cebd069c77 set version to 2.0.2 2021-02-17 08:10:17 +01:00
Nicola Murino
be9230e85b micro optimizations spotted using the go-critic linter 2021-02-16 19:11:36 +01:00
Nicola Murino
b1ce6eb85b web admin: allow setting an empty password for SFTPGo users 2021-02-15 19:38:53 +01:00
Nicola Murino
46176a54b4 minor doc fixes 2021-02-14 22:08:08 +01:00
Nicola Murino
a21ccad174 web hooks: add mutual TLS support 2021-02-13 14:41:37 +01:00
Nicola Murino
1129a868a5 Improve powershell completion
cobra 1.1.3 has much better powershell support
2021-02-13 09:10:35 +01:00
Nicola Murino
1ac66d27b6 Use IEC units for byte counting everywhere 2021-02-12 22:16:35 +01:00
Nicola Murino
6a6e8fffbc web hooks: improve resilience by adding a configurable retry
the retryable http client is used for hooks that notify events
2021-02-12 21:42:49 +01:00
Nicola Murino
51f110bc7b sftpd: add statvfs@openssh.com support 2021-02-11 19:45:52 +01:00
Nicola Murino
4ddfe41f23 loaddata: restore admins too 2021-02-11 08:33:32 +01:00
Nicola Murino
ddd06fc2ac docker: add permissions to data dirs
This way data and backup dirs can be mounted as separate volumes.

Based on the proof of concept submitted by

Mark Sagi-Kazar <mark.sagikazar@gmail.com>

See #305
2021-02-10 19:04:06 +01:00
Nicola Murino
1bccb93fcb rename default branch from master to main 2021-02-09 19:53:03 +01:00
Nicola Murino
db80781716 validation: improve error message for invalid chars 2021-02-08 21:32:59 +01:00
Nicola Murino
a2a99f9b57 merge full and slim dockerfiles
Fixes #232
2021-02-07 21:49:04 +01:00
Nicola Murino
cd4a68cc96 set version to 2.0.1 2021-02-06 15:28:30 +01:00
Nicola Murino
b37eb68993 docker alpine: revert to 3.12 since we have to release 2.0.1 2021-02-06 14:58:19 +01:00
Nicola Murino
b13958a8d6 docker: fix httpd address 2021-02-06 14:51:55 +01:00
Nicola Murino
17e2b234a0 dataprovider: fix migration with old mysql versions
Fixes #298
2021-02-06 14:33:51 +01:00
Nicola Murino
4ef1775e9a docker: switch to Alpine 3.13 2021-02-06 12:54:13 +01:00
Nicola Murino
363977b474 back to development 2021-02-06 12:23:26 +01:00
Nicola Murino
05ae0ea5f2 config: fix bindings backward compatibility 2021-02-06 09:53:31 +01:00
Nicola Murino
8de7a81674 revertprovider: only accept the supported version 2021-02-05 13:55:19 +01:00
Nicola Murino
d32b195a57 httpd: reuse the same compressor among bindings 2021-02-04 22:32:55 +01:00
Nicola Murino
267d9f1831 web ui: allow creating folders from a template 2021-02-04 19:09:43 +01:00
Nicola Murino
17a42a0c11 webdav: add compression support
Fixes #295
2021-02-04 09:06:41 +01:00
Nicola Murino
a219d25cac webdav: update the doc
the user specific path is now gone
2021-02-04 07:46:40 +01:00
Nicola Murino
ce731020a7 webdav: remove the username path prefix
so we have the same URIs for all protocols

Fixes #293
2021-02-04 07:12:04 +01:00
Nicola Murino
fc9082c422 webdav: try to handle HEAD for collection too
The underlying golang webdav library returns Method Not Allowed for
HEAD requests on directories:

https://github.com/golang/net/blob/master/webdav/webdav.go#L210

let's see if we can work around this inside SFTPGo itself in a similar
way as we do for GET.

The HEAD response will not return a Content-Length; we cannot handle
this inside SFTPGo.

Fixes #294
2021-02-03 22:36:13 +01:00
Nicola Murino
4872ba2ea0 README: add "Sponsors" section 2021-02-03 14:37:11 +01:00
Nicola Murino
70bb3c34ce sftpfs: improve endpoint validation
Validation will fail if the endpoint is not specified as host:port
2021-02-03 11:29:04 +01:00
Nicola Murino
1cde50f050 sftpd: improve logging if filesystem creation fails 2021-02-03 09:45:04 +01:00
Nicola Murino
e9dd4ecdf0 web admin: add CSRF 2021-02-03 08:55:28 +01:00
Nicola Murino
f863530653 JWT: only accepts tokens from the expected header or cookie 2021-02-02 13:11:47 +01:00
Nicola Murino
4f609cfa30 JWT: add token audience
a token issued for the API audience cannot be used for web pages and
vice versa
2021-02-02 09:14:10 +01:00
Nicola Murino
78bf808322 virtual folders: change dataprovider structure
This way we no longer depend on the local file system path and so we can
add support for cloud backends in future updates
2021-02-01 19:04:15 +01:00
Nicola Murino
afe1da92c5 web UI cookie: set the Secure flags if we are over TLS 2021-01-28 13:29:16 +01:00
Nicola Murino
9985224966 examples: add a script for bulk user update
you can use this sample script as a basis if you need to update
some common parameters for multiple users while preserving the others
2021-01-27 19:18:37 +01:00
Nicola Murino
02679d6df3 web ui: save the state of the tables
the state will be saved for 1 hour
2021-01-27 08:41:21 +01:00
Nicola Murino
c2bbd468c4 REST API: add logout and store invalidated token 2021-01-26 22:35:36 +01:00
Nicola Murino
46ab8f8d78 post-login hook: add the full user JSON serialized
Fixes #284
2021-01-26 18:05:44 +01:00
Nicola Murino
54321c5240 web ui: allow creating multiple users from a template 2021-01-25 21:31:33 +01:00
Nicola Murino
5fcbf2528f html templates: minor improvements 2021-01-24 17:43:54 +01:00
Nicola Murino
ea096db8e4 sftpfs: set the correct file mode 2021-01-23 10:32:15 +01:00
Nicola Murino
0caeb68680 sftpfs: fix stat info 2021-01-23 09:42:49 +01:00
Nicola Murino
2b9ba1d520 web admin: try to make the UI more uniform 2021-01-23 09:28:45 +01:00
Nicola Murino
80f5ccd357 web admin: add backup/restore 2021-01-22 19:42:18 +01:00
Nicola Murino
820169c5c6 windows service: simplify code
update testify to 1.7.0 too
2021-01-21 19:07:13 +01:00
Nicola Murino
aff75953e3 ssh requests: send a reply only if the client requested it 2021-01-21 09:28:41 +01:00
Nicola Murino
c0e09374a8 scp: fix wildcard uploads
Fixes #285
2021-01-20 22:37:59 +01:00
Nicola Murino
57976b4085 httpd: add mTLS and multiple bindings support 2021-01-19 18:59:41 +01:00
Nicola Murino
899f1a1844 improve windows service
ensure the service process exits in any case
2021-01-18 21:46:26 +01:00
Nicola Murino
41a1af863e OpenAPI: minor changes 2021-01-18 13:24:38 +01:00
Nicola Murino
778ec9b88f REST API v2
- add JWT authentication
- admins are now stored inside the data provider
- admin access can be restricted based on the source IP: both proxy
  header and connection IP are checked
- deprecate REST API CLI: it is not relevant anymore

Some other changes to the REST API can still happen before releasing
SFTPGo 2.0.0

Fixes #197
2021-01-17 22:29:08 +01:00
Giorgio Pellero
d42fcc3786 s3: don't paginate to find zero-byte-keyed dirs (#277)
Fixes #275
2021-01-14 12:01:25 +01:00
Nicola Murino
5d4f758c47 GCS: don't paginate to find compat "dirs" 2021-01-12 19:22:12 +01:00
Nicola Murino
a8a17a223a scp: minor improvements
document that we don't support wildcard expansion.

I should refactor scp code ...
2021-01-05 22:32:30 +01:00
Nicola Murino
aa40b04576 update deps 2021-01-05 12:40:49 +01:00
Nicola Murino
daac90c4e1 fix a potential race condition for pre-login and ext auth hooks

doing something like this:

err = provider.updateUser(u)
...
return provider.userExists(username)

could be racy if another update happens before

provider.userExists(username)

also pass a pointer to updateUser so, if the user is modified inside
"validateUser", we can just return the modified user without doing a
new query
2021-01-05 09:50:22 +01:00
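A hedged sketch of the described pattern (the types are illustrative stand-ins, not SFTPGo's actual provider interface):

package dataprovider

type User struct {
    Username string
}

type provider interface {
    updateUser(user *User) error
}

// updateAndReturn returns the same object that updateUser may have
// modified inside "validateUser", instead of re-querying the provider;
// the re-query left a window in which a concurrent update could be
// observed.
func updateAndReturn(p provider, u *User) (*User, error) {
    if err := p.updateUser(u); err != nil {
        return nil, err
    }
    return u, nil
}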
Nicola Murino
72b2c83392 defender: allow hot-reloading for safe and block lists 2021-01-04 17:52:14 +01:00
Nicola Murino
c3410a3d91 config: don't log a warning if the config file is not found
we also support configuration via env vars
2021-01-03 17:57:07 +01:00
Nicola Murino
173c1820e1 Go 1.15 is now required
VerifyConnection is not available in 1.14
2021-01-03 17:25:24 +01:00
Nicola Murino
684f4ba1a6 mutual TLS: add support for revocation lists 2021-01-03 17:03:04 +01:00
Nicola Murino
6d84c5b9e3 capture HTTP server error logs
otherwise they would be printed to stdout
2021-01-03 10:38:28 +01:00
Nicola Murino
4b522a2455 webdav: refactor server initialization 2021-01-03 09:51:54 +01:00
Nicola Murino
1e1c46ae1b defender: minor docs improvements 2021-01-02 20:02:05 +01:00
Nicola Murino
d6b3acdb62 add REST API for the defender 2021-01-02 19:33:24 +01:00
Nicola Murino
037d89a320 add support for a basic built-in defender
It can help prevent DoS attacks and brute force password guessing
2021-01-02 14:05:09 +01:00
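A minimal sketch of the scoring idea (field names and policy are illustrative; the real defender also supports safe/block lists and more):

package defender

import (
    "sync"
    "time"
)

type Defender struct {
    mu        sync.Mutex
    scores    map[string]int
    banned    map[string]time.Time
    threshold int           // failed events before a ban
    banTime   time.Duration // how long a ban lasts
}

func New(threshold int, banTime time.Duration) *Defender {
    return &Defender{
        scores:    make(map[string]int),
        banned:    make(map[string]time.Time),
        threshold: threshold,
        banTime:   banTime,
    }
}

// AddEvent records a failed login for ip and bans it once the score
// reaches the threshold.
func (d *Defender) AddEvent(ip string) {
    d.mu.Lock()
    defer d.mu.Unlock()
    d.scores[ip]++
    if d.scores[ip] >= d.threshold {
        d.banned[ip] = time.Now().Add(d.banTime)
        delete(d.scores, ip)
    }
}

// IsBanned reports whether ip is currently banned, dropping expired bans.
func (d *Defender) IsBanned(ip string) bool {
    d.mu.Lock()
    defer d.mu.Unlock()
    if until, ok := d.banned[ip]; ok && time.Now().Before(until) {
        return true
    }
    delete(d.banned, ip)
    return false
}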
Nicola Murino
30eb3c4a99 update OpenAPI schema 2020-12-29 19:33:04 +01:00
Nicola Murino
0966d44c0f httpd: add support for listening over a Unix-domain socket
Fixes #266
2020-12-29 19:02:56 +01:00
Nicola Murino
40e759c983 FTP: add support for client certificate authentication 2020-12-29 09:20:09 +01:00
Nicola Murino
141ca6777c webdav: add support for client certificate authentication
Fixes #263
2020-12-28 19:48:23 +01:00
Nicola Murino
3c16a19269 FTP: update ftpserverlib
fixes another sneaky bug
2020-12-28 09:22:52 +01:00
Nicola Murino
b3c6d79f51 FTP: add support for ASCII transfer mode
the default remains binary; a client has to explicitly request an
ASCII transfer
2020-12-27 09:48:56 +01:00
Nicola Murino
0c56b6d504 nfpm: update to 2.1.0 2020-12-26 19:14:12 +01:00
Nicola Murino
3d2da88da9 web ui: update js and css deps 2020-12-26 18:47:09 +01:00
Nicola Murino
80c06d6b59 clone: disable decrypt error test for memory provider
This test cannot work with the memory provider: we cannot change the provider
for a KMS secret without reloading it from JSON, and the memory provider
will never reload users
2020-12-26 15:57:01 +01:00
Nicola Murino
e536a638c9 web UI: improve user cloning 2020-12-26 15:11:38 +01:00
Jochen Munz
bc397002d4 Feature: Clone existing user via web admin (#259)
UI based cloning of an existing user. The "add user" screen is prepopulated with existing user data.

Resolves drakkan/sftpgo#225
2020-12-26 14:58:59 +01:00
Nicola Murino
2a95d031ea FTP: add support for AVBL command 2020-12-25 11:14:08 +01:00
Nicola Murino
1dce1eff48 improve FTP support
- allow to disable active mode
- allow to disable SITE commands
- add optional support for calculating hash value of files
- add optional support for the non standard COMB command
2020-12-24 18:48:06 +01:00
Jochen Munz
5b1d8666b3 S3fs: Handle non-ascii filename in rename operations (#257)
SFTP is based on UTF-8 filenames, so non-ASCII filenames get transported with utf-8 escaped character sequences.
At least for the S3fs provider, if such a file is stored in a nested path it cannot be used as the source for a rename operation.

This adds the necessary escaping of the path fragments.

The patch is not required for MinIO but it doesn't hurt
2020-12-24 11:13:42 +01:00
Nicola Murino
187a5b1908 sftpd: properly handle listener accept errors
continue on temporary errors and exit from the serve loop for the
other ones
2020-12-23 19:53:07 +01:00
Nicola Murino
7ab7941ddd sftpfs: fix race condition 2020-12-23 17:15:55 +01:00
Nicola Murino
c69d63c1f8 add support for multiple bindings
Fixes #253
2020-12-23 16:12:30 +01:00
Nicola Murino
743b350fdd httpd: add support for routing undefined HEAD requests to GET handlers
HEAD responses will not include a body but the Content-Length will be
set as in the equivalent GET request

Fixes #255
2020-12-20 10:22:16 +01:00
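A hedged sketch of one way to achieve this with net/http (not the actual SFTPGo code): run the GET handler against a recorder and reply with status and headers only, setting Content-Length from the recorded body.

package httpd

import (
    "net/http"
    "net/http/httptest"
    "strconv"
)

// headAsGet serves HEAD by invoking the GET handler against a recorder
// and discarding the body; naive, since the whole GET response is
// buffered in memory.
func headAsGet(get http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        if r.Method != http.MethodHead {
            get.ServeHTTP(w, r)
            return
        }
        req := r.Clone(r.Context())
        req.Method = http.MethodGet
        rec := httptest.NewRecorder()
        get.ServeHTTP(rec, req)
        for k, v := range rec.Header() {
            w.Header()[k] = v
        }
        w.Header().Set("Content-Length", strconv.Itoa(rec.Body.Len()))
        w.WriteHeader(rec.Code)
    })
}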
Nicola Murino
1ac610da1a fix build on Windows 2020-12-18 16:22:52 +01:00
Nicola Murino
bcf0fa073e telemetry server: add optional https and authentication 2020-12-18 16:04:42 +01:00
Nicola Murino
140380716d remove unused constant 2020-12-18 10:05:08 +01:00
Nicola Murino
143df87fee add some docs for telemetry server
move pprof to the telemetry server only
2020-12-18 09:47:22 +01:00
Márk Sági-Kazár
6d895843dc feat: add new telemetry server (#254)
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-12-18 09:01:19 +01:00
Nicola Murino
65e6d5475f update ftpserverlib to include the latest fixes and features 2020-12-18 08:49:32 +01:00
Nicola Murino
15609cdbc7 fix build on FreeBSD
see https://github.com/otiai10/copy/pull/36
2020-12-17 14:46:31 +01:00
Nicola Murino
f876c728ad add support for the latest ftpserverlib and azblob versions 2020-12-17 13:40:36 +01:00
Nicola Murino
f34462e3c3 add support for limiting max concurrent client connections 2020-12-15 19:29:30 +01:00
Nicola Murino
ea0bf5e4c8 ensure 64 bit alignment for 64 bit struct fields access atomically 2020-12-14 14:52:36 +01:00
Nicola Murino
14d1b82f6b minor README improvements 2020-12-14 07:54:27 +01:00
Nicola Murino
ed43ddd79d enable hash commands for any supported backend 2020-12-13 15:11:55 +01:00
Nicola Murino
23192a3be7 update nfpm to 1.10.3 2020-12-13 14:29:59 +01:00
Nicola Murino
72e3d464b8 sftpfs: fix fingerprints copy for memory provider 2020-12-12 10:56:02 +01:00
Nicola Murino
a6985075b9 add sftpfs storage backend
Fixes #224
2020-12-12 10:31:09 +01:00
dharmendra kariya
4d5494912d Update README.md (#245) 2020-12-11 08:22:50 +01:00
Nicola Murino
50982229e1 REST API: add a method to get the status of the services
added a status page to the built-in web admin
2020-12-08 11:18:34 +01:00
dharmendra kariya
6977a4a18b Update full-configuration.md (#240)
just deleting redundant line
2020-12-08 09:09:21 +01:00
Nicola Murino
ab1bf2ad44 update deps 2020-12-06 22:20:53 +01:00
Nicola Murino
c451f742aa revertprovider: crypted provider was not supported in v4
also ensure the KMS is initialized before the data provider; it could be
needed to downgrade secrets from cloud KMS providers
2020-12-06 10:36:48 +01:00
Nicola Murino
034d89876d webdav: fix proppatch handling
also respect login delay for cached webdav users and check the home dir as
soon as the user authenticates

Fixes #239
2020-12-06 08:19:41 +01:00
Nicola Murino
4a88ea5c03 add Data At Rest Encryption support 2020-12-05 13:48:13 +01:00
Nicola Murino
95c6d41c35 config: make config file relative to the config dir
a configuration parsing error is now fatal
2020-12-03 17:16:35 +01:00
Márk Sági-Kazár
2a9ed0abca Accept a config file path instead of a config name
Config name is a Viper concept used for searching a specific file
in various paths with various extensions.

Making it configurable is usually not a useful feature
as users mostly want to define a full or relative path
to a config file.

This change replaces config name with config file.
2020-12-03 16:23:33 +01:00
Nicola Murino
3ff6b1bf64 fix lint warnings 2020-12-02 10:02:08 +01:00
Nicola Murino
a67276ccc2 add build tags to disable kms providers 2020-12-02 09:44:18 +01:00
Nicola Murino
87b51a6fd5 kms: remember if a secret was saved without a master key
So we will be able to decrypt secrets stored without a master key if
such a key is provided later
2020-12-01 22:18:16 +01:00
Nicola Murino
940836b25b add a note about using sqlite provider over cifs shares
See #235
2020-11-30 21:59:56 +01:00
Nicola Murino
634b723b5d add KMS support
Fixes #226
2020-11-30 21:46:34 +01:00
Nicola Murino
af0c9b76c4 update nfpm to 1.10.2 2020-11-27 18:07:27 +01:00
Nicola Murino
2142ef20c5 fix some typos 2020-11-26 22:18:12 +01:00
Nicola Murino
224ce5fe81 add revertprovider subcommand
Fixes #233
2020-11-26 22:08:33 +01:00
Nicola Murino
4bb9d07dde user: add a free text field
Fixes #230
2020-11-25 22:26:34 +01:00
Nicola Murino
2054dfd83d create the credential directory when needed
The credentials dir is currently required only for GCS users when the
prefer_database_credentials setting is false, so defer its creation
and don't fail to start the services if this directory is missing
2020-11-25 14:18:12 +01:00
Nicola Murino
6699f5c2cc initial data loading: an error is no longer fatal
therefore it does not prevent the services from starting
2020-11-25 09:18:36 +01:00
Estel Smith
70bde8b2bc memory provider: print a log if loading the initial dump fails
therefore this error is no longer fatal and does not prevent the services
from starting

Fixes #229
2020-11-25 09:15:23 +01:00
Nicola Murino
ff73e5f53c CI Docker: don't build full image on pull request
it will fail since the slim tag is not pushed
2020-11-24 18:51:10 +01:00
Nicola Murino
0609188d3f allow disabling the SFTP service
Fixes #228
2020-11-24 13:44:57 +01:00
Nicola Murino
99cd1ccfe5 S3: fix empty directory detection
when listing an empty directory MinIO returns no contents while S3 returns
1 object with the key equal to the prefix. Make detection work in both
cases

Fixes #227
2020-11-23 15:36:42 +01:00
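A hedged sketch of the resulting detection logic, independent of the AWS SDK: with MinIO an empty "directory" lists no keys at all, while with S3 it lists exactly one zero-byte key equal to the prefix.

package vfs

// isEmptyDirListing interprets the keys returned when listing prefix:
// no keys (MinIO) or only the prefix placeholder itself (S3) both mean
// the directory is empty.
func isEmptyDirListing(keys []string, prefix string) bool {
    if len(keys) == 0 {
        return true
    }
    return len(keys) == 1 && keys[0] == prefix
}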
Nicola Murino
dccc583b5d add a dedicated struct to store encrypted credentials
also gcs credentials are now encrypted, both on disk and inside the
provider.

The data provider is automatically migrated and loaddata will accept the
old format too, but you should upgrade to the new format to avoid future
issues
2020-11-22 21:53:04 +01:00
Nicola Murino
ac435b7890 back to development 2020-11-18 21:53:23 +01:00
Nicola Murino
37fc589896 set version to 1.2.2 2020-11-18 19:24:19 +01:00
Nicola Murino
5d789a01b7 update pkg/sftp
These patches are now merged upstream:

https://github.com/pkg/sftp/pull/392
https://github.com/pkg/sftp/pull/393
2020-11-18 19:06:12 +01:00
Nicola Murino
ca0ff0d630 add a File interface so we can avoid using os.File directly 2020-11-17 19:36:39 +01:00
Nicola Murino
969b38586e update pkg/sftp to fix requests accumulation
Include this patch:

https://github.com/pkg/sftp/pull/393

to avoid request accumulation (no underlying fd) if we return an error.
Before this patch the accumulated requests were released only when the
client disconnected.

We use our fork for now to include

https://github.com/pkg/sftp/pull/392

too
2020-11-16 19:49:26 +01:00
Nicola Murino
e3eca424f1 web admin: allow both allowed and denied extensions/patterns for a dir
this fixes a regression introduced in the previous commit
2020-11-16 19:21:50 +01:00
Nicola Murino
a6355e298e add support for limiting files using shell-like patterns
Fixes #209
2020-11-15 22:04:48 +01:00
Ryan Gough
c0f47a58f2 web admin: clarify that the directories for permissions are relative
Fixes #222
2020-11-15 09:11:36 +01:00
Nicola Murino
dc845fa2f4 webdav: fix permission errors if the client tries to read multiple times 2020-11-14 19:19:41 +01:00
Nicola Murino
7e855c83b3 deb packages: change priority to optional; extra is deprecated 2020-11-14 13:54:14 +01:00
Nicola Murino
3b8a9e0963 back to development 2020-11-14 11:01:28 +01:00
Nicola Murino
4445834fd3 set version to 1.2.1 2020-11-14 09:28:53 +01:00
Nicola Murino
19a619ff65 Linux pkgs: use python3 for API CLI inside generated deb 2020-11-14 09:10:45 +01:00
Nicola Murino
66a538dc9c CI: improve docker build action 2020-11-13 21:55:53 +01:00
Nicola Murino
1a6863f4b1 GCS uploads: check Close() error
some code simplification too
2020-11-13 18:40:18 +01:00
Nicola Murino
fbd9919afa docker: add slim image 2020-11-12 22:40:53 +01:00
Nicola Murino
eec8bc73f4 docker: remove entrypoint
remove the VOLUME instruction from the Dockerfile so you can change the
user using your own image like this:

FROM drakkan/sftpgo:tag
USER root
RUN chown -R 1100:1100 /etc/sftpgo && chown 1100:1100 /var/lib/sftpgo /srv/sftpgo
USER 1100:1100
2020-11-12 11:53:05 +01:00
Nicola Murino
5720d40fee add setstat_mode 2
in this mode chmod/chtimes/chown can be silently ignored only for cloud
based file systems

Fixes #223
2020-11-12 10:39:46 +01:00
Nicola Murino
38e0cba675 docker: add an entrypoint
running as an arbitrary user is now possible by setting the following
env vars too:

SFTPGO_PUID
SFTPGO_PGID

Fixes #217
2020-11-10 23:11:57 +01:00
Nicola Murino
4c5a0d663e sftpd: return the error Operation Unsupported for unexpected reads
a cloud based file cannot be opened for read and write at the same
time. Return a proper error if a client tries to do this.

This can happen only for SFTP
2020-11-09 21:01:56 +01:00
Nicola Murino
093df15fac CI: add ppc64le support 2020-11-09 18:39:36 +01:00
Nicola Murino
957430e675 back to development 2020-11-08 12:56:37 +01:00
Nicola Murino
14035f407e set version to 1.2.0 2020-11-08 06:14:03 +01:00
Nicola Murino
bf2b2525a9 CI: build deb/rpm for arm64 2020-11-07 19:29:16 +01:00
Nicola Murino
4edb9cd6b9 simplify some code 2020-11-07 18:05:47 +01:00
Nicola Murino
c38d242bea docker: allow running as an arbitrary user 2020-11-06 10:18:29 +01:00
Nicola Murino
c6ab6f94e7 azblob: container level SAS cannot access container properties
so return the root directory without checking if the bucket exists
2020-11-05 15:03:35 +01:00
Nicola Murino
36151d1ba9 subsystem mode: add base-home-dir flag 2020-11-05 12:12:11 +01:00
Nicola Murino
1d5d184720 webdav file: ensure to close the reader only once 2020-11-05 09:30:38 +01:00
Nicola Murino
0119fd03a6 webdav: user caching is now mandatory
we cache the lock system with the user; without user caching we cannot
support locks for resources
2020-11-04 22:29:25 +01:00
Nicola Murino
0a14297b48 webdav: performance improvements and bug fixes
we need my custom golang/x/net/webdav fork for now

https://github.com/drakkan/net/tree/sftpgo
2020-11-04 19:11:40 +01:00
Nicola Murino
442efa0607 docker: add ppc64le support
Thanks to OSU Open Source Lab for making this possible
2020-11-03 08:47:30 +01:00
Nicola Murino
6ad4cc317c cloud backends: stat and other performance improvements 2020-11-02 19:16:12 +01:00
Nicola Murino
57bec976ae document healthz endpoint 2020-11-01 10:39:10 +01:00
Nicola Murino
641493e31a fix default config file
restore a setting changed for a local test
2020-10-31 11:34:50 +01:00
Nicola Murino
5b4e9ad982 windows setup: allow installation on older Windows version
The REST API CLI will not be installed on versions older than 10

Fixes #205
2020-10-31 11:04:24 +01:00
Nicola Murino
950a5ad9ea add a recoverer where appropriate
I have never seen this, but a malformed packet can easily crash pkg/sftp
2020-10-31 11:02:04 +01:00
Nicola Murino
fcfdd633f6 Azure Blob: update SDK and add access tier support 2020-10-30 22:17:17 +01:00
Nicola Murino
ebb18fa57d config: manually set viper defaults
so we can override config via env var even without a configuration file

Fixes #208
2020-10-30 18:58:57 +01:00
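A minimal sketch of the viper pattern (the key name is hypothetical): AutomaticEnv only resolves keys viper already knows about, so each default has to be registered explicitly for env-only configurations to work.

package main

import (
    "fmt"
    "strings"

    "github.com/spf13/viper"
)

func main() {
    // without SetDefault the env lookup below would return nothing
    // when no configuration file is loaded
    viper.SetDefault("sftpd.max_auth_tries", 0)
    viper.SetEnvPrefix("SFTPGO")
    viper.SetEnvKeyReplacer(strings.NewReplacer(".", "__"))
    viper.AutomaticEnv()
    // e.g. SFTPGO_SFTPD__MAX_AUTH_TRIES=3 now overrides the default
    fmt.Println(viper.GetInt("sftpd.max_auth_tries"))
}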
Nicola Murino
58b0ca585c docs: clarify that the config dir is the working dir by default
Fixes #211
2020-10-29 21:54:02 +01:00
Nicola Murino
5bc1c2de2d add a link to the helm chart
Fixes #210
2020-10-29 21:50:21 +01:00
Mark Sagi-Kazar
ec00613202 feat(httpd): add new healthz endpoint
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-29 21:37:30 +01:00
Mark Sagi-Kazar
02ec3a5f48 refactor(httpd): move every route under a new group
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-29 21:37:30 +01:00
Nicola Murino
ac3bae00fc add support for SFTP subsystem mode
Fixes #204
2020-10-29 19:23:33 +01:00
Nicola Murino
e54828a7b8 add metrics for Azure Blob storage 2020-10-26 19:01:17 +01:00
Nicola Murino
f2acde789d portable mode: add Azure Blob support 2020-10-25 21:42:43 +01:00
Nicola Murino
9b49f63a97 azure: implement multipart uploads using low level API
The high level wrapper seems to hang if there are network issues
2020-10-25 17:41:04 +01:00
Nicola Murino
14bcc6f2fc s3, azblob: check upper limit for part size 2020-10-25 12:10:11 +01:00
Nicola Murino
975a2f3632 sftpd: fix the max upload file size check for overwrites
improved test case too
2020-10-25 08:52:31 +01:00
Nicola Murino
5ff8f75917 add Azure Blob support 2020-10-25 08:18:48 +01:00
Sean Hildebrand
db7e81e9d0 add prefer_database_credentials configuration parameter
When true, users' Google Cloud Storage credentials will be written to
the data provider instead of disk.
Pre-existing credentials on disk will be used as a fallback

Fixes #201
2020-10-22 10:42:40 +02:00
Nicola Murino
6a8039e76a sftpd: log fingerprints for used host keys 2020-10-21 14:27:58 +02:00
Mark Sagi-Kazar
56bf8364cd test: add test for InitializeActionHandler
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-21 07:23:33 +02:00
Mark Sagi-Kazar
75750e3a79 feat: add support for custom action hooks
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-21 07:23:33 +02:00
Nicola Murino
bb5207ad77 Add support for loading users/folders on startup
Fixes #161
2020-10-20 18:42:37 +02:00
Nicola Murino
b51d795e04 sftpd: auto generate an ed25519 host key too 2020-10-19 14:30:40 +02:00
Nicola Murino
d12819932a update cobra to v1.1.1
this version fixes the man page generation so we don't need to use
our branch anymore
2020-10-18 21:52:42 +02:00
Nicola Murino
d812c86812 docker: push images to GHCR too
use numeric id for user inside Dockerfile
2020-10-18 19:18:51 +02:00
Nicola Murino
1625cd5a9f back to development 2020-10-18 11:09:16 +02:00
Nicola Murino
756c3d0503 fix man page generation
other minor changes
2020-10-17 22:14:04 +02:00
Nicola Murino
f884447b26 rpm: set proper permissions for /var/lib/sftpgo and /srv/sftpgo
it seems we have to check the permissions after each update,
probably because nfpm defines these dirs as empty folders
2020-10-15 10:01:31 +02:00
Nicola Murino
555394b95e Linux pkgs: move data directory to /srv/sftpgo 2020-10-14 22:25:58 +02:00
Nicola Murino
00510a6af8 docker docs: fix image name 2020-10-14 08:13:24 +02:00
Nicola Murino
6c0839e197 Improve docker images 2020-10-14 07:46:36 +02:00
Ilias Trichopoulos
5b79379c90 Fix typo in Twilio name 2020-10-12 11:36:14 +02:00
Nicola Murino
47fed45700 Improve Linux packages 2020-10-11 16:23:50 +02:00
Nicola Murino
80d695f3a2 back to development 2020-10-11 09:29:17 +02:00
Nicola Murino
8d4f40ccd2 release workflow add initprovider again 2020-10-10 22:29:04 +02:00
Nicola Murino
765bad5edd set version to 1.1.0 2020-10-10 22:09:48 +02:00
Nicola Murino
0c0382c9b5 docker: disable scheduled build
We already have an edge version built after each commit
2020-10-10 20:15:34 +02:00
Nicola Murino
bbab6149e8 fix windows service: was broken in the latest commit 2020-10-09 22:42:13 +02:00
Nicola Murino
ce9387f1ab update dependencies and some docs 2020-10-09 20:25:42 +02:00
Nicola Murino
d126c5736a Docker: add Debian based image 2020-10-08 21:43:13 +02:00
Nicola Murino
5048d54d32 PPA: add source files used to build the packages 2020-10-08 18:20:15 +02:00
Nicola Murino
f22fe6af76 remove py extension from REST API CLI 2020-10-08 16:02:04 +02:00
Mark Sagi-Kazar
8034f289d1 Fix empty env context in nightly builds
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-08 15:48:40 +02:00
Nicola Murino
eed61ac510 Dockerfile: add a FEATURES build arg
This ARG allows disabling some optional features and it might be
useful if you build the image yourself
2020-10-07 20:04:02 +02:00
Nicola Murino
412d6096c0 Linux pkgs: fix postinstall scripts 2020-10-06 18:18:43 +02:00
Nicola Murino
c289ae07d2 Docker workflow: explicitly set image labels
while waiting for https://github.com/docker/build-push-action/issues/165
to be fixed.

Some minor changes to the default configuration for Linux packages
2020-10-06 18:03:55 +02:00
Nicola Murino
87f78b07b3 docker: add some docs and build for arm64 too 2020-10-06 13:59:31 +02:00
Mark Sagi-Kazar
5e2db77ef9 refactor: add an enum for filesystem providers
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 21:40:21 +02:00
Nicola Murino
c992072286 data provider: add a setting to prevent auto-update 2020-10-05 19:42:33 +02:00
Nicola Murino
0ef826c090 docker package: fix description 2020-10-05 17:24:09 +02:00
Mark Sagi-Kazar
5da75c3915 ci: enable docker build
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 16:32:59 +02:00
Nicola Murino
8222baa7ed Dockerfile: minor changes 2020-10-05 16:31:22 +02:00
Mark Sagi-Kazar
7b76b51314 feat: configure database path using configuration
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 16:15:06 +02:00
Mark Sagi-Kazar
c96dbbd3b5 feat: save credentials to /var/lib/sftpgo
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 16:15:06 +02:00
Mark Sagi-Kazar
da6ccedf24 feat: save database to /var/lib/sftpgo
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 16:15:06 +02:00
Mark Sagi-Kazar
13b37a835f revert: boltdb, sqlite is not automatically initialized
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 16:15:06 +02:00
Mark Sagi-Kazar
863fa33309 feat: install additional packages
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 16:15:06 +02:00
Mark Sagi-Kazar
9f4c54a212 refactor: make /var/lib/sftpgo the user home
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 16:15:06 +02:00
Mark Sagi-Kazar
2a7bff4c0e feat: switch to boltdb by default to make the container work out of the box
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 16:15:06 +02:00
Mark Sagi-Kazar
17406d1aab fix: permission issue caused by root owning the volume
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 16:15:06 +02:00
Mark Sagi-Kazar
6537c53d43 feat: add host_keys under /var/lib/sftpgo
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 16:15:06 +02:00
Mark Sagi-Kazar
b4bd10521a feat: move data under /var/lib/sftpgo
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 16:15:06 +02:00
Mark Sagi-Kazar
65cbef1962 feat: move backups under /var/lib/sftpgo
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 16:15:06 +02:00
Mark Sagi-Kazar
a8d355900a fix: missing sha from docker image on GHA
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 16:15:06 +02:00
Mark Sagi-Kazar
ffd9c381ce feat: add workflow for building docker image
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 16:15:06 +02:00
Mark Sagi-Kazar
2a0bce0beb feat: add dockerfile
Signed-off-by: Mark Sagi-Kazar <mark.sagikazar@gmail.com>
2020-10-05 16:15:06 +02:00
Nicola Murino
f1f7b81088 logger: don't print connection_id if empty
Fixes #183
2020-10-05 15:51:17 +02:00
Nicola Murino
f9827f958b sftpd auto host keys: try to auto-create parent dir if missing 2020-10-05 14:16:57 +02:00
Nicola Murino
3e2afc35ba data provider: try to automatically initialize it if required 2020-10-05 12:55:49 +02:00
Ilias Trichopoulos
c65dd86d5e Fix typos (#181) 2020-10-05 11:29:18 +02:00
Nicola Murino
2d6c0388af update deps 2020-10-04 18:29:42 +02:00
Nicola Murino
4d19d87720 pkgs: use glob notation to include static folder 2020-10-02 18:16:49 +02:00
Nicola Murino
5eabaf98e0 gcs: remove a superfluous debug log 2020-09-29 09:17:08 +02:00
Nicola Murino
d1f0e9ae9f GCS: implement MimeTyper interface 2020-09-28 22:12:46 +02:00
Thomas Blommaert
cd56039ab7 GCS mime-type detection (#179)
Fixes #178
2020-09-28 21:52:18 +02:00
Nicola Murino
55515fee95 update deps, GCS can now finally use attribute selection
See https://github.com/googleapis/google-cloud-go/pull/2661
2020-09-28 12:51:19 +02:00
Nicola Murino
13d43a2d31 improve some docs 2020-09-27 09:24:10 +02:00
Nicola Murino
001261433b howto postgres-s3: update to use the debian package 2020-09-26 19:28:56 +02:00
Nicola Murino
03bf595525 automatically build deb and rpm Linux packages
The packages are built after each tag/commit

Fixes #176
2020-09-26 14:07:24 +02:00
Nicola Murino
4ebedace1e systemd unit: run as "sftpgo" system user
Update the docs too

Fixes #177
2020-09-25 18:23:04 +02:00
Stephan Müller
b23276c002 Set verbosity for go commands in docker build (#174) 2020-09-21 19:33:44 +02:00
Nicola Murino
bf708cb8bc osfs: improve isSubDir check 2020-09-21 19:32:33 +02:00
Nicola Murino
a550d082a3 portable mode: advertise WebDAV service if requested 2020-09-21 16:08:32 +02:00
Nicola Murino
6c1a7449fe ssh commands: return better error messages
This improves the fix for #171 and returns better error messages for
SSH commands other than SCP too
2020-09-19 10:14:30 +02:00
Nicola Murino
f0c9b55036 dataprovider: improve user validation errors
Fixes #170
2020-09-18 19:21:24 +02:00
Nicola Murino
209badf10c scp: return better error messages
Fixes #171
2020-09-18 19:13:09 +02:00
Nicola Murino
242dde4480 sftpd: ensure to always close idle connections
after the last commit this wasn't the case anymore

Completely fixes #169
2020-09-18 18:15:28 +02:00
Nicola Murino
2df0dd1f70 sshd: map each channel with a new connection
Fixes #169
2020-09-18 10:52:53 +02:00
Nicola Murino
98a6d138d4 sftpd: add a test case to ensure we return sftp.ErrSSHFxNoSuchFile ...
if stat/lstat fails on a missing file
2020-09-17 12:30:48 +02:00
Nicola Murino
38f06ab373 ftpd: fix TLS for active connections
See https://github.com/fclairamb/ftpserverlib/issues/177

Some minor doc improvements
2020-09-17 09:45:40 +02:00
Nicola Murino
3c1300721c add some basic how-to style documents 2020-09-13 19:43:56 +02:00
Nicola Murino
61003c8079 sftpd: add lstat support 2020-09-11 09:30:25 +02:00
Nicola Murino
01850c7399 REST API: remove status from ApiResponse
it duplicates the HTTP status header
2020-09-08 09:45:21 +02:00
Nicola Murino
b9c381e26f sftpd: update pkg/sftp
The patch to open a file in read/write mode is now merged
2020-09-06 11:40:31 +02:00
Nicola Murino
542554fb2c replace the library to verify UNIX's crypt(3) passwords 2020-09-04 21:08:09 +02:00
Nicola Murino
bdf18fa862 password hashing: exposes argon2 options
So the hashing complexity can be changed depending on available
memory/CPU resources and business requirements
2020-09-04 17:09:31 +02:00
Nicola Murino
afc411c51b adjust runtime.GOMAXPROCS to match the container CPU quota, if any 2020-09-03 18:09:45 +02:00
Nicola Murino
a59163e56c multi-step auth: don't advertise password method if it is disabled
also rename the setting to password_authentication so it is more like
OpenSSH, add some test cases and improve documentation
2020-09-01 19:34:40 +02:00
Giorgio Pellero
8391b19abb Add password_disabled bool to sftpd config, disables password auth callback (#165) 2020-09-01 19:26:33 +02:00
Nicola Murino
3925c7ff95 REST API/Web admin: add a parameter to disconnect a user after an update
This way you can force the user to log in again and thus use the updated
configuration.

A deleted user will be automatically disconnected.

Fixes #163

Improved some docs too.
2020-09-01 16:10:26 +02:00
Nicola Murino
dbed110d02 WebDAV: add caching for authenticated users
In this way we get a big performance boost
2020-08-31 19:25:17 +02:00
Giorgio Pellero
f978355520 Fix "compatible" typo in README.md (#162) 2020-08-31 13:43:24 +02:00
Nicola Murino
4748e6f54d sftpd: handle read and write from the same handle (#158)
Fixes #155
2020-08-31 06:45:22 +02:00
Nicola Murino
91a4c64390 fix initprovider exit code for MySQL and PostgreSQL 2020-08-30 14:00:45 +02:00
Nicola Murino
600a107699 initprovider: check if the provider is already initialized
exit with code 0 if no initialization is required
2020-08-30 13:50:43 +02:00
Nicola Murino
2746c0b0f1 move stat to base connection and differentiate between Stat and Lstat
we will use Lstat once it is exposed in pkg/sftp
2020-08-25 18:23:00 +02:00
Nicola Murino
701a6115f8 ftpd: use ftpserverlib master, the tls patch is now merged 2020-08-24 23:06:10 +02:00
Nicola Murino
56b00addc4 docker: try to improve the docs
See #159
2020-08-24 15:46:31 +02:00
Nicola Murino
02e35ee002 sftpd: add Readlink support 2020-08-22 14:52:17 +02:00
Nicola Murino
5208e4a4ca sftpd: improve truncate
quota usage and max allowed write size are now properly updated after a
truncate
2020-08-22 10:12:00 +02:00
Nicola Murino
7381a867ba fix truncate test cases on Windows 2020-08-20 14:44:38 +02:00
Nicola Murino
f41ce6619f sftpd: add SSH_FXP_FSETSTAT support
This change will fix file editing from sshfs; we need this patch

https://github.com/pkg/sftp/pull/373

for pkg/sftp to support this feature
2020-08-20 13:54:36 +02:00
Nicola Murino
933427310d fix check pwd hook when using memory provider 2020-08-19 19:47:52 +02:00
Nicola Murino
8b0a1817b3 add check password hook
its main use case is to easily support things like password+OTP for
protocols without keyboard interactive support, such as FTP and WebDAV
2020-08-19 19:36:12 +02:00
Nicola Murino
04c9a5c008 add some examples hooks for one time password logins
The examples use Twilio Authy since I use it for my GitHub account.

You can easily use other multi factor authentication software in a
similar way.
2020-08-18 21:21:01 +02:00
Nicola Murino
bbc8c091e6 portable mode: add WebDAV support 2020-08-17 14:08:08 +02:00
Nicola Murino
f3228713bc Allow individual protocols to be enabled per user
Fixes #154
2020-08-17 12:49:20 +02:00
Nicola Murino
fa5333784b add a maximum allowed size for a single upload 2020-08-16 20:17:02 +02:00
Nicola Murino
0dbf0cc81f WebDAV: add CORS support 2020-08-15 15:55:20 +02:00
Nicola Murino
196a56726e FTP improvements
- add a setting to require TLS
- add symlink support

require TLS 1.2 for all TLS connections
2020-08-15 13:02:25 +02:00
Nicola Murino
fe857dcb1b CI: use go 1.15 by default now that it is released 2020-08-12 16:42:38 +02:00
Nicola Murino
aa0ed5dbd0 add post-login hook
a login scope is supported too so you can get notifications for failed logins,
successful logins or both
2020-08-12 16:15:12 +02:00
Nicola Murino
a9e21c282a add WebDAV support
Fixes #147
2020-08-11 23:56:10 +02:00
Antoine Deschênes
9a15a54885 sftpd: set failed connection loglevel to debug (#152) 2020-08-06 21:20:31 +02:00
Nicola Murino
91dcc349de Add client IP address to external auth, pre-login and keyboard interactive hooks 2020-08-04 18:03:28 +02:00
Nicola Murino
fa41bfd06a Cloud backends: add support for FTP REST command
So partial downloads are now supported, as for the local fs
2020-08-03 18:03:09 +02:00
Nicola Murino
8839c34d53 FTP: implements ClientDriverExtensionRemoveDir
Fixes #149 for FTP too
2020-08-03 17:36:43 +02:00
Nicola Murino
11ceaa8850 docker: document how to enable FTP/S 2020-08-01 08:56:15 +02:00
Nicola Murino
2a9f7db1e2 Cloud FS: don't propagate the error if removing a folder returns not found
for Cloud FS the folders are virtual and they generally disappear when the
last file is removed.

This fix doesn't work for the FTP protocol for now.

Fixes #149
2020-07-31 19:24:57 +02:00
Nicola Murino
22338ed478 add post connect hook
Fixes #144
2020-07-30 22:33:49 +02:00
Nicola Murino
59a21158a6 fix FTP quota limits test case
It failed sometimes due to a bug in the FTP client library used in test
cases. The failure was more frequent on FreeBSD but it could happen in
any supported OS. It was not systematic since we use small files in
test cases.

See https://github.com/jlaffaye/ftp/pull/192
2020-07-30 19:52:29 +02:00
Nicola Murino
93ce96d011 add support for the venerable FTP protocol
Fixes #46
2020-07-29 21:56:56 +02:00
Nicola Murino
cc2f04b0e4 fix concurrency test case on go 1.13
a sleep seems required, needs investigation
2020-07-25 08:55:17 +02:00
Nicola Murino
aa5191fa1b CI: add a timeout for test cases execution 2020-07-25 00:14:44 +02:00
Nicola Murino
4e41a5583d refactoring: add common package
The common package defines the interfaces that a protocol must implement
and contains code that can be shared among the supported protocols.

This way it should be easier to support new protocols
2020-07-24 23:39:38 +02:00
Nicola Murino
ded8fad5e4 add sponsor button 2020-07-13 22:23:11 +02:00
Nicola Murino
3702bc8413 several doc fixes 2020-07-11 13:03:15 +02:00
Nicola Murino
7896d2eef7 improve CI/CD workflows 2020-07-10 23:31:53 +02:00
Nicola Murino
da0f470f1c document FreeBSD support
improve some tests cleanup
2020-07-10 19:20:37 +02:00
Nicola Murino
8fddb742df try to improve error message if the user forgot to initialize the provider
See #138
2020-07-09 20:01:37 +02:00
Nicola Murino
95fe26f3e3 keep track of services errors
So we can exit with the correct code if an error happens inside the
service goroutines

Fixes #143
2020-07-09 19:16:52 +02:00
Nicola Murino
1e10381143 improve help strings formatting
Fixes #139
2020-07-09 18:58:22 +02:00
Nicola Murino
96cbce52f9 cmd: add shell completion and man pages generators 2020-07-08 23:21:33 +02:00
Nicola Murino
0ea2ca3141 simplify data provider usage
remove the obsolete SQL scripts too. They haven't been required since v0.9.6
2020-07-08 19:59:31 +02:00
Nicola Murino
42877dd915 sql providers: add a query timeout 2020-07-08 18:54:44 +02:00
Nicola Murino
790c11c453 back to development 2020-07-07 19:40:22 +02:00
Nicola Murino
1ac4baa00a set version to 1.0.0 2020-07-06 22:41:50 +02:00
Nicola Murino
fc32286045 update deps 2020-07-05 22:54:00 +02:00
Nicola Murino
ee1131f254 enable SCP test cases on Windows 2020-06-30 23:25:25 +02:00
Nicola Murino
c5dc3ee3b6 simplify CI workflow 2020-06-29 20:07:51 +02:00
Nicola Murino
dd593b1035 ssh commands: send a generic error for unexpected failures
and log the real error, since it could leak a filesystem path
2020-06-29 18:53:33 +02:00
Nicola Murino
4814786556 windows installer: fix exe name for service control
It worked before since Windows is case insensitive
2020-06-29 14:55:58 +02:00
Nicola Murino
4f0a936ca0 web admin: fix Microsoft edge compatibility
Edge does not support trimEnd
2020-06-29 11:46:02 +02:00
Nicola Murino
aec372ca31 Windows setup: require Windows 10
Windows 7 has been EOL for several months now
2020-06-29 11:15:24 +02:00
Nicola Murino
d2a739f8f6 add workflow status badge 2020-06-28 21:01:03 +02:00
Nicola Murino
165110872b add release workflow
for each tag a new release, including binaries, is automatically created
2020-06-28 15:57:33 +02:00
Nicola Murino
6ab4e9f533 add test case for concurrent logins 2020-06-27 12:36:42 +02:00
Nicola Murino
cf541d62ea recursive permissions check before renaming/copying directories 2020-06-26 23:38:29 +02:00
Nicola Murino
19fc58dd1f portable: avoid logging the user-provided password
disable DNS multicast by default

Fixes #135 and #136
2020-06-24 13:37:38 +02:00
Nicola Murino
ac9c475849 test bolt and memory provider on macOS and Windows too 2020-06-22 23:47:07 +02:00
Nicola Murino
ddf99ab706 workflow: execute test cases on MySQL too 2020-06-22 20:02:51 +02:00
Nicola Murino
0056984d4b Allow rotating logs on demand
The log file can be rotated by sending a SIGUSR1 signal on Unix based
systems and by using "sftpgo service rotatelogs" on Windows

Fixes #133
2020-06-22 19:11:53 +02:00
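For example, on Linux (assuming the process is simply named sftpgo):

pkill -USR1 sftpgo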
Nicola Murino
44fb276464 workflow: execute tests using postgresql provider too 2020-06-21 21:28:59 +02:00
Nicola Murino
558a1b4050 workflow: execute tests using memory provider too 2020-06-21 20:20:30 +02:00
Nicola Murino
8f934f2648 run test cases against bolt provider too 2020-06-20 23:49:27 +02:00
Nicola Murino
403b9a8310 replace Travis with GitHub actions 2020-06-20 21:57:51 +02:00
Nicola Murino
33436488e2 update deps 2020-06-20 16:09:55 +02:00
Nicola Murino
3c28366fed add action to build code after each commit
You can download the build artifact from the "Actions->Code Build" page

Fixes #129
2020-06-20 15:34:19 +02:00
Nicola Murino
b80abe6c05 return exit code 1 on error
Fixes #132
2020-06-20 14:30:46 +02:00
Nicola Murino
8cb47817f6 Add API endpoint to set current quota
Fixes #130
2020-06-20 12:38:04 +02:00
Nicola Murino
23a80b01b6 add build tag to disable metrics 2020-06-19 17:08:51 +02:00
Nicola Murino
b30614e9d8 httpd: make the built-in web interface optional
The built-in web admin will be disabled if both "templates_path" and
"static_files_path" are empty

Fixes #131
2020-06-18 23:53:38 +02:00
Nicola Murino
e86089a9f3 quota: improve size check
get the remaining allowed size when an upload starts and check it against the
uploaded bytes

Fixes #128
2020-06-18 22:38:03 +02:00
Nicola Murino
3ceba7a147 sftpgo-copy: add quota limits check 2020-06-16 22:49:18 +02:00
Nicola Murino
c491133aff docs: fix markdown lint warnings 2020-06-15 23:46:11 +02:00
Nicola Murino
37418a7630 SSH system commands: allow git and rsync inside virtual folders 2020-06-15 23:32:12 +02:00
Nicola Murino
73a9c002e0 permissions: improve rename
Allow enabling the rename permission in a more controlled way by granting
"delete" permission on the source directory and "upload" permission on the
target directory
2020-06-13 23:49:28 +02:00
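A hedged sketch of the check (HasPerm and the permission names are illustrative, not SFTPGo's actual API):

package dataprovider

// User holds per-directory permissions for this sketch.
type User struct {
    Permissions map[string][]string
}

// HasPerm reports whether perm (or the wildcard "*") is granted on dir.
func (u *User) HasPerm(perm, dir string) bool {
    for _, p := range u.Permissions[dir] {
        if p == perm || p == "*" {
            return true
        }
    }
    return false
}

// canRename allows a rename when the user may delete from the source
// directory and upload to the target directory.
func (u *User) canRename(srcDir, dstDir string) bool {
    return u.HasPerm("delete", srcDir) && u.HasPerm("upload", dstDir)
}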
Nicola Murino
3d48fa7382 ssh commands: add sftpgo-copy and sftpgo-remove
Fixes #122
2020-06-13 22:48:51 +02:00
Nicola Murino
8e22dd1b13 virtual folders: allow overlapped mapped paths if quota is disabled
See #95
2020-06-10 09:11:32 +02:00
Nicola Murino
7807fa7cc2 use os.ModePerm for files and directory creation 2020-06-08 19:40:17 +02:00
Nicola Murino
cd380973df allow host key auto-generation inside a user-configured directory
Fixes #124
2020-06-08 18:45:04 +02:00
Nicola Murino
01d681faa3 external auth: allow mapping multiple login usernames to a single account
some external auth users want to map multiple login usernames to a single
SFTPGo account.
For example an SFTP user logs in using "user1" or "user2" and the external auth
returns "user" in both cases, so we use the username returned from external auth
and not the one used to log in

Fixes #125
2020-06-08 13:06:02 +02:00
Nicola Murino
c231b663a3 add docs for virtual folders
fix test cases on macOS
2020-06-08 00:15:14 +02:00
Nicola Murino
8306b6bde6 refactor virtual folders
The same virtual folder can now be shared among users and different
folder quota limits for each user are supported.

Fixes #120
2020-06-07 23:30:18 +02:00
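A hedged sketch of the refactored relationship (field names are illustrative): the folder becomes a first-class object and each user mounts it with its own quota limits.

package vfs

// BaseVirtualFolder is the shared folder object, stored once in the
// data provider.
type BaseVirtualFolder struct {
    Name       string
    MappedPath string
}

// VirtualFolder is a per-user mount of a BaseVirtualFolder with its
// own quota limits.
type VirtualFolder struct {
    BaseVirtualFolder
    VirtualPath string
    QuotaSize   int64 // bytes; a negative value could mean "unlimited"
    QuotaFiles  int
}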
Nicola Murino
dc011af90d sftpd actions: add support for pre-delete action
Fixes #121
2020-05-24 23:31:14 +02:00
Nicola Murino
c27e3ef436 actions: add a generic hook to define external commands and HTTP URL
We can only define a single hook now and it can be an HTTP notification
or an external command, not both
2020-05-24 15:29:39 +02:00
Nicola Murino
760cc9ba5a partial auth: fix public key query response
more details here:

https://github.com/golang/crypto/pull/130#issuecomment-633191423
2020-05-24 12:13:14 +02:00
Nicola Murino
5665e9c0e7 improve some docs 2020-05-23 12:47:44 +02:00
Nicola Murino
ad53429cf1 add support for build tags to allow disabling some features
The following build tags are available (a usage sketch follows the list):

- "nogcs", disable Google Cloud Storage backend
- "nos3", disable S3 Compabible Object Storage backends
- "nobolt", disable Bolt data provider
- "nomysql", disable MySQL data provider
- "nopgsql", disable PostgreSQL data provider
- "nosqlite", disable SQLite data provider
- "noportable", disable portable mode
2020-05-23 11:58:05 +02:00
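For example, a build without the cloud storage backends might look like this (a usage sketch; the exact flags may differ between versions):

go build -tags nogcs,nos3 -o sftpgo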
Nicola Murino
15298b0409 sftpd: remove unused expectedSize field from Transfer struct
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2020-05-20 20:17:59 +02:00
Nicola Murino
cfa710037c cloud backends: fix SFTP error message for some write failures
Fixes #119

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2020-05-19 19:17:43 +02:00
Nicola Murino
a08dd85efd sftpd: deprecate keys and add a new host_keys config param
host_keys defines the private host keys as a plain list of strings.

Remove the other deprecated config params from the default config too.

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2020-05-16 23:26:44 +02:00
Nicola Murino
469d36d979 certificate auth: fix source address checking inside crypto/ssh
So we can avoid checking the source address ourselves

81aafe6d26

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2020-05-16 15:15:32 +02:00
Nicola Murino
7ae8b2cdeb move REST API CLI in examples directory
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2020-05-16 14:02:46 +02:00
Nicola Murino
cf148db75d add test case for expired SSH certificate
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2020-05-15 23:23:49 +02:00
Nicola Murino
738c7ab43e sftpd: add support for SSH user certificate authentication
This add support for PROTOCOL.certkeys vendor extension:

https://cvsweb.openbsd.org/src/usr.bin/ssh/PROTOCOL.certkeys?rev=1.8

Fixes #117

Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2020-05-15 20:08:53 +02:00
Nicola Murino
82fb7f8cf0 update proxyproto to v0.1.3
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2020-05-14 20:10:33 +02:00
Nicola Murino
e0f2ab9c01 test cases: minor improvements
Signed-off-by: Nicola Murino <nicola.murino@gmail.com>
2020-05-10 12:37:29 +02:00
Nicola Murino
e0183217b6 test cases: simplify TestLoginInvalidFs
we can simulate an invalid filesystem config using a GCS user without a
credentials file
2020-05-07 19:47:46 +02:00
Nicola Murino
f066b7fb9c use upstream pipeat
my patches are now merged
2020-05-07 00:05:40 +02:00
Nicola Murino
0c6e2b566b fix test cases on Windows 2020-05-06 23:16:08 +02:00
Nicola Murino
f02e24437a add more linters
the test case migration to testify is now complete.
Linters are enabled for test cases too
2020-05-06 19:36:34 +02:00
Nicola Murino
e9534be1e6 travis: exclude go 1.13 for arch arm64 2020-05-03 22:46:39 +02:00
Nicola Murino
7056997e49 travis: add arm64 2020-05-03 15:46:42 +02:00
Nicola Murino
155af19aaa tests: update httpd test to use testify 2020-05-03 15:24:26 +02:00
Nicola Murino
f369fdf6f2 httpclient: add a configuration parameter to skip TLS certificate validation
In this mode, TLS is susceptible to man-in-the-middle attacks.
This should be used only for testing.
2020-05-03 11:37:50 +02:00
Nicola Murino
510a95bd6d code quality check: set go version to 1.14 2020-05-02 15:55:27 +02:00
Nicola Murino
da90dbe645 tests: update config to use testify
we should port the other test cases to testify too
2020-05-02 15:47:23 +02:00
Nicola Murino
b006c5f914 NewOsFs: return an interface and not a pointer 2020-05-02 15:01:56 +02:00
Nicola Murino
3f75d46a16 sftpd: add support for excluding virtual folders from user quota limit
Fixes #110
2020-05-01 15:27:53 +02:00
Nicola Murino
14c2a244b7 code quality check: use setup-go@v2 and go 1.14 2020-04-30 17:57:06 +02:00
Nicola Murino
94ff9d7346 initprovider: fail if a configuration file cannot be found 2020-04-30 16:48:42 +02:00
Enes Çakır
14196167b0 add github action workflow for code quality 2020-04-30 15:06:15 +02:00
Nicola Murino
d70959c34c fix some lint issues 2020-04-30 14:23:55 +02:00
Sam Millar
67c6f27064 Tiny documentation typo fix 2020-04-29 16:13:33 +02:00
Enes Çakır
6bfbb27856 fix log level changing problem 2020-04-28 23:03:18 +02:00
Enes Çakır
baac3749b3 add verbose flag for portable mode 2020-04-28 17:03:14 +02:00
Nicola Murino
d377181b25 add a new configuration section for HTTP clients
HTTP clients are used for executing hooks such as the ones used for custom
actions, external authentication and pre-login user modifications.

This allows, for example, using self-signed certificates without defeating
the purpose of using TLS
2020-04-26 23:29:09 +02:00
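A hedged Go sketch of the underlying idea (not the actual SFTPGo configuration code): trust an extra CA bundle, such as a self-signed certificate, instead of disabling verification.

package httpclient

import (
    "crypto/tls"
    "crypto/x509"
    "net/http"
    "os"
)

// newClient returns an *http.Client that also trusts the CA bundle in
// caFile, so hooks can use self-signed certificates safely.
func newClient(caFile string) (*http.Client, error) {
    pem, err := os.ReadFile(caFile)
    if err != nil {
        return nil, err
    }
    pool := x509.NewCertPool()
    pool.AppendCertsFromPEM(pem)
    return &http.Client{
        Transport: &http.Transport{
            TLSClientConfig: &tls.Config{RootCAs: pool},
        },
    }, nil
}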
Nicola Murino
ebd6a11f3a external auth: add example HTTP server to use as authentication hook
The server authenticates against an LDAP server.
2020-04-26 14:48:32 +02:00
Nicola Murino
0a47412e8c scp, ssh commands: hide the real fs path on errors
The underlying filesystem errors for permissions and non-existing files
can contain the real storage path.
Map these errors to more generic ones to avoid leaking this info

Fixes #109
2020-04-22 12:26:18 +02:00
Nicola Murino
4f668bf558 simplify some httpd related code
and update chi, cobra and viper
2020-04-21 19:24:38 +02:00
Mengsk
9248c5a987 Update performance.md 2020-04-13 21:20:53 +02:00
Nicola Murino
b0ed190591 add an example auth program that allows authenticating against LDAP
External authentication is the way to go to authenticate against LDAP,
at least for now.

Closes #99
2020-04-11 22:30:41 +02:00
Nicola Murino
37357b2d63 add support for checking pbkdf2 passwords with base64 encoded salt
This way we can import the default passwords format used in 389ds.

See TestPasswordsHashPbkdf2Sha256_389DS test case to learn how to convert
389ds passwords
2020-04-11 12:25:21 +02:00
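A minimal sketch of the verification (parsing of the stored password format is omitted; iterations, salt and digest are assumed to come from the stored string):

package auth

import (
    "crypto/sha256"
    "crypto/subtle"
    "encoding/base64"

    "golang.org/x/crypto/pbkdf2"
)

// verifyPbkdf2SHA256 checks password against a base64 encoded salt and
// expected key; note the derived key length must match the stored key
// length.
func verifyPbkdf2SHA256(password string, iterations int, b64Salt, b64Key string) (bool, error) {
    salt, err := base64.StdEncoding.DecodeString(b64Salt)
    if err != nil {
        return false, err
    }
    expected, err := base64.StdEncoding.DecodeString(b64Key)
    if err != nil {
        return false, err
    }
    derived := pbkdf2.Key([]byte(password), salt, iterations, len(expected), sha256.New)
    return subtle.ConstantTimeCompare(derived, expected) == 1, nil
}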
Nicola Murino
9b06e0a3b7 sql providers: change password field from varchar 255 to text
some passwords can be longer than 255 characters
2020-04-11 11:17:40 +02:00
Nicola Murino
5a5912ea66 switch to my pkg/sftp branch and enable the request-server allocator
This way we have performance comparable to OpenSSH if the cipher
isn't the bottleneck
2020-04-10 23:35:57 +02:00
Nicola Murino
b1c7317cf6 add support for partial authentication
Multi-step authentication is activated by disabling all single-step
auth methods for a given user
2020-04-09 23:32:42 +02:00
Nicola Murino
a0fe4cf5e4 docker: TAG build arg can be used to build a specific commit too 2020-04-09 11:30:51 +02:00
Henrik Lundahl
7fe3c965e3 Add a version build arg to the Alpine Dockerfile. 2020-04-09 11:26:09 +02:00
Henrik Lundahl
fd9b3c2767 Add a version build arg to the debian Dockerfile. 2020-04-09 11:15:21 +02:00
Nicola Murino
fb9e188e36 systemd service: add ExecReload 2020-04-05 11:36:29 +02:00
Nicola Murino
c93d8cecfc update deps
chi 4.1.0 requires some minor code changes
2020-04-03 22:30:30 +02:00
Nicola Murino
94b46e57f1 sftpd actions: execute defined command on error too
add a new field inside the notification to indicate if an error is
detected
2020-04-03 19:25:38 +02:00
Nicola Murino
9046acbe68 add HTTP hooks
external auth, pre-login user modification and keyboard interactive
authentication are now supported via HTTP requests too
2020-04-01 23:25:23 +02:00
Nicola Murino
075bbe2aef added test case that checks quota for files inside virtual folders 2020-03-29 11:10:03 +02:00
Nicola Murino
b52d078986 pbkdf2: fix password comparison
the key length for the derived key must be equal to the length of the
expected key
2020-03-28 16:09:06 +01:00
Nicola Murino
0a9c4914aa pre-login program: allow creating a new user too
clarify the difference between dynamic user creation/update and external
authentication
2020-03-27 23:26:22 +01:00
Nicola Murino
f284008fb5 enable scp in default configuration
remove the deprecated enable_scp setting
2020-03-26 23:38:24 +01:00
Nicola Murino
4759254e10 file actions: add bucket and endpoint to notifications
The HTTP notifications are now invoked as POST requests and the
notification is a JSON object inside the POST body.

This is a backward-incompatible change, but this way the actions can be
extended more easily; sorry for the trouble

Fixes #101
2020-03-25 18:36:33 +01:00
Nicola Murino
e22d377203 docs: clarify "ca-certificates" requirement
Fixes #98
2020-03-22 20:17:36 +01:00
Nicola Murino
0787e3e595 bolt provider: fix error handling for get users with username filter 2020-03-22 15:37:08 +01:00
Nicola Murino
c1194d558c docs: minor improvements 2020-03-22 14:03:06 +01:00
Nicola Murino
952b10a9f6 update boltdb to v1.3.4
update other deps too
2020-03-21 10:12:30 +01:00
Nicola Murino
f55851bdc8 update nathanaelle password to v2
Fixes #97
2020-03-20 17:25:38 +01:00
Nicola Murino
76bb361393 docs: add built-in profiler 2020-03-15 23:33:12 +01:00
Nicola Murino
81c8e8d898 add profiler support
profiling is now available via the HTTP base URL /debug/pprof/

examples: use this URL to start and download a 30-second CPU profile:

/debug/pprof/profile?seconds=30

use this URL to profile used memory:

/debug/pprof/heap?gc=1

use this URL to profile allocated memory:

/debug/pprof/allocs?gc=1

Full docs here:

https://golang.org/pkg/net/http/pprof/
2020-03-15 15:16:35 +01:00
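For example, assuming the HTTP service listens on localhost:8080, a CPU profile can be fetched and inspected with:

go tool pprof http://localhost:8080/debug/pprof/profile?seconds=30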
Nicola Murino
f4e872c782 portable mode: add flags for s3 upload part size and concurrency 2020-03-15 11:40:06 +01:00
Nicola Murino
ddcb500c51 update pipeat
it contains my latest performance patch that removes extraneous
allocations.

This improves performance for S3 and GCS
2020-03-15 01:36:19 +01:00
Nicola Murino
e8664c0ce4 docker: update docs
update dependencies too
2020-03-14 15:27:03 +01:00
Nicola Murino
3b002ddc86 improve performance
- use latest pkg/sftp that contains my latest performance patch
- replace default crypto with my branch that uses minio sha256-simd
instead of the Go SHA256 implementation; this improves performance on
some hardware
2020-03-13 19:37:51 +01:00
Nicola Murino
1770da545d s3: upload concurrency is now configurable
Please note that if the upload bandwidth between the SFTP client and
SFTPGo is greater than the upload bandwidth between SFTPGo and S3 then
the SFTP client has to wait for the upload of the last parts to S3
after it ends the file upload to SFTPGo, and it may time out.
Keep this in mind if you customize part size and upload concurrency
2020-03-13 19:13:58 +01:00
Nicola Murino
de3e69f846 s3: add documentation and test cases for upload part size 2020-03-13 17:28:55 +01:00
Michael Bonfils
cdf1233065 s3: export PartSize parameter
By default the AWS SDK uses a part size of 5 MB. For big files
this is not ideal; for Hadoop, it is not uncommon to
use 512 MB.
2020-03-13 17:28:04 +01:00
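To make the tuning concrete, here is a sketch of how part size and concurrency are typically configured with the AWS SDK for Go v1 s3manager uploader; the bucket, key, and values are illustrative, and SFTPGo exposes these as configuration options rather than code:

package main

import (
	"log"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func main() {
	sess := session.Must(session.NewSession())
	uploader := s3manager.NewUploader(sess, func(u *s3manager.Uploader) {
		u.PartSize = 512 * 1024 * 1024 // 512 MB parts, e.g. for Hadoop-sized objects
		u.Concurrency = 4              // number of parts uploaded in parallel
	})
	f, err := os.Open("big.file")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if _, err := uploader.Upload(&s3manager.UploadInput{
		Bucket: aws.String("my-bucket"),
		Key:    aws.String("big.file"),
		Body:   f,
	}); err != nil {
		log.Fatal(err)
	}
}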
Nicola Murino
6b70f0b25f back to development 2020-03-07 18:06:46 +01:00
487 changed files with 130467 additions and 18330 deletions

12
.github/FUNDING.yml vendored Normal file

@@ -0,0 +1,12 @@
# These are supported funding model platforms
github: [drakkan] # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: # Replace with a single Ko-fi username
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']

20
.github/dependabot.yml vendored Normal file

@@ -0,0 +1,20 @@
version: 2
updates:
- package-ecosystem: "gomod"
directory: "/"
schedule:
interval: "weekly"
open-pull-requests-limit: 2
- package-ecosystem: "docker"
directory: "/"
schedule:
interval: "weekly"
open-pull-requests-limit: 2
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"
open-pull-requests-limit: 2

2
.github/workflows/.editorconfig vendored Normal file

@@ -0,0 +1,2 @@
[*.yml]
indent_size = 2

459
.github/workflows/development.yml vendored Normal file

@@ -0,0 +1,459 @@
name: CI
on:
push:
branches: [2.2.x]
pull_request:
jobs:
test-deploy:
name: Test and deploy
runs-on: ${{ matrix.os }}
strategy:
matrix:
go: [1.17]
os: [ubuntu-18.04, macos-10.15]
upload-coverage: [true]
include:
- go: 1.17
os: windows-2019
upload-coverage: false
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go }}
- name: Build for Linux/macOS x86_64
if: startsWith(matrix.os, 'windows-') != true
run: |
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
cd tests/eventsearcher
go build -trimpath -ldflags "-s -w" -o eventsearcher
cd -
- name: Build for macOS arm64
if: startsWith(matrix.os, 'macos-') == true
run: CGO_ENABLED=1 GOOS=darwin GOARCH=arm64 SDKROOT=$(xcrun --sdk macosx --show-sdk-path) go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo_arm64
- name: Build for Windows
if: startsWith(matrix.os, 'windows-')
run: |
$GIT_COMMIT = (git describe --always --dirty) | Out-String
$DATE_TIME = ([datetime]::Now.ToUniversalTime().toString("yyyy-MM-ddTHH:mm:ssZ")) | Out-String
$LATEST_TAG = ((git describe --tags $(git rev-list --tags --max-count=1)) | Out-String).Trim()
$REV_LIST=$LATEST_TAG+"..HEAD"
$COMMITS_FROM_TAG= ((git rev-list $REV_LIST --count) | Out-String).Trim()
$FILE_VERSION = $LATEST_TAG.substring(1) + "." + $COMMITS_FROM_TAG
go install github.com/tc-hib/go-winres@latest
go-winres simply --arch amd64 --product-version $LATEST_TAG-dev-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o sftpgo.exe
cd tests/eventsearcher
go build -trimpath -ldflags "-s -w" -o eventsearcher.exe
cd ../..
mkdir arm64
$Env:CGO_ENABLED='0'
$Env:GOOS='windows'
$Env:GOARCH='arm64'
go-winres simply --arch arm64 --product-version $LATEST_TAG-dev-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o .\arm64\sftpgo.exe
mkdir x86
$Env:GOARCH='386'
go-winres simply --arch 386 --product-version $LATEST_TAG-dev-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o .\x86\sftpgo.exe
Remove-Item Env:\CGO_ENABLED
Remove-Item Env:\GOOS
Remove-Item Env:\GOARCH
- name: Run test cases using SQLite provider
run: go test -v -p 1 -timeout 15m ./... -coverprofile=coverage.txt -covermode=atomic
- name: Upload coverage to Codecov
if: ${{ matrix.upload-coverage }}
uses: codecov/codecov-action@v3
with:
file: ./coverage.txt
fail_ci_if_error: false
- name: Run test cases using bolt provider
run: |
go test -v -p 1 -timeout 2m ./config -covermode=atomic
go test -v -p 1 -timeout 5m ./common -covermode=atomic
go test -v -p 1 -timeout 5m ./httpd -covermode=atomic
go test -v -p 1 -timeout 8m ./sftpd -covermode=atomic
go test -v -p 1 -timeout 5m ./ftpd -covermode=atomic
go test -v -p 1 -timeout 5m ./webdavd -covermode=atomic
go test -v -p 1 -timeout 2m ./telemetry -covermode=atomic
go test -v -p 1 -timeout 2m ./mfa -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: bolt
SFTPGO_DATA_PROVIDER__NAME: 'sftpgo_bolt.db'
- name: Run test cases using memory provider
run: go test -v -p 1 -timeout 15m ./... -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: memory
SFTPGO_DATA_PROVIDER__NAME: ''
- name: Prepare build artifact for macOS
if: startsWith(matrix.os, 'macos-') == true
run: |
mkdir -p output/{init,bash_completion,zsh_completion}
cp sftpgo output/sftpgo_x86_64
cp sftpgo_arm64 output/
cp sftpgo.json output/
cp -r templates output/
cp -r static output/
cp -r openapi output/
cp init/com.github.drakkan.sftpgo.plist output/init/
./sftpgo gen completion bash > output/bash_completion/sftpgo
./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
./sftpgo gen man -d output/man/man1
gzip output/man/man1/*
- name: Prepare Windows installer
if: ${{ startsWith(matrix.os, 'windows-') && github.event_name != 'pull_request' }}
run: |
Remove-Item -LiteralPath "output" -Force -Recurse -ErrorAction Ignore
mkdir output
copy .\sftpgo.exe .\output
copy .\sftpgo.json .\output
copy .\sftpgo.db .\output
copy .\LICENSE .\output\LICENSE.txt
mkdir output\templates
xcopy .\templates .\output\templates\ /E
mkdir output\static
xcopy .\static .\output\static\ /E
mkdir output\openapi
xcopy .\openapi .\output\openapi\ /E
$LATEST_TAG = ((git describe --tags $(git rev-list --tags --max-count=1)) | Out-String).Trim()
$REV_LIST=$LATEST_TAG+"..HEAD"
$COMMITS_FROM_TAG= ((git rev-list $REV_LIST --count) | Out-String).Trim()
$Env:SFTPGO_ISS_DEV_VERSION = $LATEST_TAG + "." + $COMMITS_FROM_TAG
$CERT_PATH=(Get-Location -PSProvider FileSystem).ProviderPath + "\cert.pfx"
[IO.File]::WriteAllBytes($CERT_PATH,[System.Convert]::FromBase64String($Env:CERT_DATA))
certutil -f -p "$Env:CERT_PASS" -importpfx MY "$CERT_PATH"
rm "$CERT_PATH"
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.17763.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\sftpgo.exe
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.17763.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\arm64\sftpgo.exe
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.17763.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\x86\sftpgo.exe
$INNO_S='/Ssigntool=$qC:/Program Files (x86)/Windows Kits/10/bin/10.0.17763.0/x86/signtool.exe$q sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n $qNicola Murino$q /d $qSFTPGo$q $f'
iscc "$INNO_S" .\windows-installer\sftpgo.iss
rm .\output\sftpgo.exe
rm .\output\sftpgo.db
copy .\arm64\sftpgo.exe .\output
(Get-Content .\output\sftpgo.json).replace('"sqlite"', '"bolt"') | Set-Content .\output\sftpgo.json
$Env:SFTPGO_DATA_PROVIDER__DRIVER='bolt'
$Env:SFTPGO_DATA_PROVIDER__NAME='.\output\sftpgo.db'
.\sftpgo.exe initprovider
Remove-Item Env:\SFTPGO_DATA_PROVIDER__DRIVER
Remove-Item Env:\SFTPGO_DATA_PROVIDER__NAME
$Env:SFTPGO_ISS_ARCH='arm64'
iscc "$INNO_S" .\windows-installer\sftpgo.iss
rm .\output\sftpgo.exe
copy .\x86\sftpgo.exe .\output
$Env:SFTPGO_ISS_ARCH='x86'
iscc "$INNO_S" .\windows-installer\sftpgo.iss
certutil -delstore MY "Nicola Murino"
env:
CERT_DATA: ${{ secrets.CERT_DATA }}
CERT_PASS: ${{ secrets.CERT_PASS }}
- name: Upload Windows installer x86_64 artifact
if: ${{ startsWith(matrix.os, 'windows-') && github.event_name != 'pull_request' }}
uses: actions/upload-artifact@v3
with:
name: sftpgo_windows_installer_x86_64
path: ./sftpgo_windows_x86_64.exe
- name: Upload Windows installer arm64 artifact
if: ${{ startsWith(matrix.os, 'windows-') && github.event_name != 'pull_request' }}
uses: actions/upload-artifact@v3
with:
name: sftpgo_windows_installer_arm64
path: ./sftpgo_windows_arm64.exe
- name: Upload Windows installer x86 artifact
if: ${{ startsWith(matrix.os, 'windows-') && github.event_name != 'pull_request' }}
uses: actions/upload-artifact@v3
with:
name: sftpgo_windows_installer_x86
path: ./sftpgo_windows_x86.exe
- name: Prepare build artifact for Windows
if: startsWith(matrix.os, 'windows-')
run: |
Remove-Item -LiteralPath "output" -Force -Recurse -ErrorAction Ignore
mkdir output
copy .\sftpgo.exe .\output
mkdir output\arm64
copy .\arm64\sftpgo.exe .\output\arm64
mkdir output\x86
copy .\x86\sftpgo.exe .\output\x86
copy .\sftpgo.json .\output
(Get-Content .\output\sftpgo.json).replace('"sqlite"', '"bolt"') | Set-Content .\output\sftpgo.json
mkdir output\templates
xcopy .\templates .\output\templates\ /E
mkdir output\static
xcopy .\static .\output\static\ /E
mkdir output\openapi
xcopy .\openapi .\output\openapi\ /E
- name: Upload build artifact
if: startsWith(matrix.os, 'ubuntu-') != true
uses: actions/upload-artifact@v3
with:
name: sftpgo-${{ matrix.os }}-go-${{ matrix.go }}
path: output
test-goarch-386:
name: Run test cases on 32-bit arch
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: 1.17
- name: Build
run: |
cd tests/eventsearcher
go build -trimpath -ldflags "-s -w" -o eventsearcher
cd -
env:
GOARCH: 386
- name: Run test cases
run: go test -v -p 1 -timeout 15m ./... -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: memory
SFTPGO_DATA_PROVIDER__NAME: ''
GOARCH: 386
test-postgresql-mysql-crdb:
name: Test with PgSQL/MySQL/Cockroach
runs-on: ubuntu-18.04
services:
postgres:
image: postgres:latest
env:
POSTGRES_PASSWORD: postgres
POSTGRES_DB: sftpgo
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
ports:
- 5432:5432
mariadb:
image: mariadb:latest
env:
MYSQL_ROOT_PASSWORD: mysql
MYSQL_DATABASE: sftpgo
MYSQL_USER: sftpgo
MYSQL_PASSWORD: sftpgo
options: >-
--health-cmd "mysqladmin status -h 127.0.0.1 -P 3306 -u root -p$MYSQL_ROOT_PASSWORD"
--health-interval 10s
--health-timeout 5s
--health-retries 6
ports:
- 3307:3306
steps:
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: 1.17
- name: Build
run: |
cd tests/eventsearcher
go build -trimpath -ldflags "-s -w" -o eventsearcher
cd -
- name: Run tests using PostgreSQL provider
run: |
go test -v -p 1 -timeout 15m ./... -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: postgresql
SFTPGO_DATA_PROVIDER__NAME: sftpgo
SFTPGO_DATA_PROVIDER__HOST: localhost
SFTPGO_DATA_PROVIDER__PORT: 5432
SFTPGO_DATA_PROVIDER__USERNAME: postgres
SFTPGO_DATA_PROVIDER__PASSWORD: postgres
- name: Run tests using MySQL provider
run: |
go test -v -p 1 -timeout 15m ./... -covermode=atomic
env:
SFTPGO_DATA_PROVIDER__DRIVER: mysql
SFTPGO_DATA_PROVIDER__NAME: sftpgo
SFTPGO_DATA_PROVIDER__HOST: localhost
SFTPGO_DATA_PROVIDER__PORT: 3307
SFTPGO_DATA_PROVIDER__USERNAME: sftpgo
SFTPGO_DATA_PROVIDER__PASSWORD: sftpgo
- name: Run tests using CockroachDB provider
run: |
docker run --rm --name crdb --health-cmd "curl -I http://127.0.0.1:8080" --health-interval 10s --health-timeout 5s --health-retries 6 -p 26257:26257 -d cockroachdb/cockroach:latest start-single-node --insecure --listen-addr 0.0.0.0:26257
docker exec crdb cockroach sql --insecure -e 'create database "sftpgo"'
go test -v -p 1 -timeout 15m ./... -covermode=atomic
docker stop crdb
env:
SFTPGO_DATA_PROVIDER__DRIVER: cockroachdb
SFTPGO_DATA_PROVIDER__NAME: sftpgo
SFTPGO_DATA_PROVIDER__HOST: localhost
SFTPGO_DATA_PROVIDER__PORT: 26257
SFTPGO_DATA_PROVIDER__USERNAME: root
SFTPGO_DATA_PROVIDER__PASSWORD:
build-linux-packages:
name: Build Linux packages
runs-on: ubuntu-18.04
strategy:
matrix:
include:
- arch: amd64
go: 1.17
go-arch: amd64
- arch: aarch64
distro: ubuntu18.04
go: go1.17.9
go-arch: arm64
- arch: ppc64le
distro: ubuntu18.04
go: go1.17.9
go-arch: ppc64le
- arch: armv7
distro: ubuntu18.04
go: go1.17.9
go-arch: arm7
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Set up Go
if: ${{ matrix.arch == 'amd64' }}
uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go }}
- name: Build on amd64
if: ${{ matrix.arch == 'amd64' }}
run: |
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
mkdir -p output/{init,bash_completion,zsh_completion}
cp sftpgo.json output/
cp -r templates output/
cp -r static output/
cp -r openapi output/
cp init/sftpgo.service output/init/
./sftpgo gen completion bash > output/bash_completion/sftpgo
./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
./sftpgo gen man -d output/man/man1
gzip output/man/man1/*
cp sftpgo output/
- uses: uraimo/run-on-arch-action@v2
if: ${{ matrix.arch != 'amd64' }}
name: Build for ${{ matrix.arch }}
id: build
with:
arch: ${{ matrix.arch }}
distro: ${{ matrix.distro }}
setup: |
mkdir -p "${PWD}/output"
dockerRunArgs: |
--volume "${PWD}/output:/output"
shell: /bin/bash
install: |
apt-get update -q -y
apt-get install -q -y curl gcc git
if [ ${{ matrix.go }} == 'latest' ]
then
GO_VERSION=$(curl -L https://go.dev/VERSION?m=text)
else
GO_VERSION=${{ matrix.go }}
fi
GO_DOWNLOAD_ARCH=${{ matrix.go-arch }}
if [ ${{ matrix.arch}} == 'armv7' ]
then
GO_DOWNLOAD_ARCH=armv6l
fi
curl --retry 5 --retry-delay 2 --connect-timeout 10 -o go.tar.gz -L https://go.dev/dl/${GO_VERSION}.linux-${GO_DOWNLOAD_ARCH}.tar.gz
tar -C /usr/local -xzf go.tar.gz
run: |
export PATH=$PATH:/usr/local/go/bin
if [ ${{ matrix.arch}} == 'armv7' ]
then
export GOARM=7
fi
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
mkdir -p output/{init,bash_completion,zsh_completion}
cp sftpgo.json output/
cp -r templates output/
cp -r static output/
cp -r openapi output/
cp init/sftpgo.service output/init/
./sftpgo gen completion bash > output/bash_completion/sftpgo
./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
./sftpgo gen man -d output/man/man1
gzip output/man/man1/*
cp sftpgo output/
- name: Upload build artifact
uses: actions/upload-artifact@v3
with:
name: sftpgo-linux-${{ matrix.arch }}-go-${{ matrix.go }}
path: output
- name: Build Packages
id: build_linux_pkgs
run: |
export NFPM_ARCH=${{ matrix.go-arch }}
cd pkgs
./build.sh
PKG_VERSION=$(cat dist/version)
echo "::set-output name=pkg-version::${PKG_VERSION}"
- name: Upload Debian Package
uses: actions/upload-artifact@v3
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-${{ matrix.go-arch }}-deb
path: pkgs/dist/deb/*
- name: Upload RPM Package
uses: actions/upload-artifact@v3
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-${{ matrix.go-arch }}-rpm
path: pkgs/dist/rpm/*
golangci-lint:
name: golangci-lint
runs-on: ubuntu-latest
steps:
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: 1.17
- uses: actions/checkout@v3
- name: Run golangci-lint
uses: golangci/golangci-lint-action@v3
with:
version: latest

162
.github/workflows/docker.yml vendored Normal file

@@ -0,0 +1,162 @@
name: Docker
on:
#schedule:
# - cron: '0 4 * * *' # everyday at 4:00 AM UTC
push:
branches:
- 2.2.x
tags:
- v*
pull_request:
jobs:
build:
name: Build
runs-on: ${{ matrix.os }}
strategy:
matrix:
os:
- ubuntu-latest
docker_pkg:
- debian
- alpine
optional_deps:
- true
- false
include:
- os: ubuntu-latest
docker_pkg: distroless
optional_deps: false
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Gather image information
id: info
run: |
VERSION=noop
DOCKERFILE=Dockerfile
MINOR=""
MAJOR=""
if [ "${{ github.event_name }}" = "schedule" ]; then
VERSION=nightly
elif [[ $GITHUB_REF == refs/tags/* ]]; then
VERSION=${GITHUB_REF#refs/tags/}
elif [[ $GITHUB_REF == refs/heads/* ]]; then
VERSION=$(echo ${GITHUB_REF#refs/heads/} | sed -r 's#/+#-#g')
if [ "${{ github.event.repository.default_branch }}" = "$VERSION" ]; then
VERSION=edge
fi
elif [[ $GITHUB_REF == refs/pull/* ]]; then
VERSION=pr-${{ github.event.number }}
fi
if [[ $VERSION =~ ^v[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
MINOR=${VERSION%.*}
MAJOR=${MINOR%.*}
fi
VERSION_SLIM="${VERSION}-slim"
if [[ $DOCKER_PKG == alpine ]]; then
VERSION="${VERSION}-alpine"
VERSION_SLIM="${VERSION}-slim"
DOCKERFILE=Dockerfile.alpine
elif [[ $DOCKER_PKG == distroless ]]; then
VERSION="${VERSION}-distroless"
VERSION_SLIM="${VERSION}-slim"
DOCKERFILE=Dockerfile.distroless
fi
DOCKER_IMAGES=("drakkan/sftpgo" "ghcr.io/drakkan/sftpgo")
TAGS="${DOCKER_IMAGES[0]}:${VERSION}"
TAGS_SLIM="${DOCKER_IMAGES[0]}:${VERSION_SLIM}"
for DOCKER_IMAGE in ${DOCKER_IMAGES[@]}; do
if [[ ${DOCKER_IMAGE} != ${DOCKER_IMAGES[0]} ]]; then
TAGS="${TAGS},${DOCKER_IMAGE}:${VERSION}"
TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:${VERSION_SLIM}"
fi
if [[ $GITHUB_REF == refs/tags/* ]]; then
if [[ $DOCKER_PKG == debian ]]; then
if [[ -n $MAJOR && -n $MINOR ]]; then
TAGS="${TAGS},${DOCKER_IMAGE}:${MINOR},${DOCKER_IMAGE}:${MAJOR}"
TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:${MINOR}-slim,${DOCKER_IMAGE}:${MAJOR}-slim"
fi
TAGS="${TAGS},${DOCKER_IMAGE}:latest"
TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:slim"
elif [[ $DOCKER_PKG == distroless ]]; then
if [[ -n $MAJOR && -n $MINOR ]]; then
TAGS="${TAGS},${DOCKER_IMAGE}:${MINOR}-distroless,${DOCKER_IMAGE}:${MAJOR}-distroless"
TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:${MINOR}-distroless-slim,${DOCKER_IMAGE}:${MAJOR}-distroless-slim"
fi
TAGS="${TAGS},${DOCKER_IMAGE}:distroless"
TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:distroless-slim"
else
if [[ -n $MAJOR && -n $MINOR ]]; then
TAGS="${TAGS},${DOCKER_IMAGE}:${MINOR}-alpine,${DOCKER_IMAGE}:${MAJOR}-alpine"
TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:${MINOR}-alpine-slim,${DOCKER_IMAGE}:${MAJOR}-alpine-slim"
fi
TAGS="${TAGS},${DOCKER_IMAGE}:alpine"
TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:alpine-slim"
fi
fi
done
if [[ $OPTIONAL_DEPS == true ]]; then
echo ::set-output name=version::${VERSION}
echo ::set-output name=tags::${TAGS}
echo ::set-output name=full::true
else
echo ::set-output name=version::${VERSION_SLIM}
echo ::set-output name=tags::${TAGS_SLIM}
echo ::set-output name=full::false
fi
echo ::set-output name=dockerfile::${DOCKERFILE}
echo ::set-output name=created::$(date -u +'%Y-%m-%dT%H:%M:%SZ')
echo ::set-output name=sha::${GITHUB_SHA::8}
env:
DOCKER_PKG: ${{ matrix.docker_pkg }}
OPTIONAL_DEPS: ${{ matrix.optional_deps }}
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
- name: Set up builder
uses: docker/setup-buildx-action@v1
id: builder
- name: Login to Docker Hub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
if: ${{ github.event_name != 'pull_request' }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
if: ${{ github.event_name != 'pull_request' }}
- name: Build and push
uses: docker/build-push-action@v2
with:
context: .
builder: ${{ steps.builder.outputs.name }}
file: ./${{ steps.info.outputs.dockerfile }}
platforms: linux/amd64,linux/arm64,linux/ppc64le
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.info.outputs.tags }}
build-args: |
COMMIT_SHA=${{ steps.info.outputs.sha }}
INSTALL_OPTIONAL_PACKAGES=${{ steps.info.outputs.full }}
labels: |
org.opencontainers.image.title=SFTPGo
org.opencontainers.image.description=Fully featured and highly configurable SFTP server with optional HTTP, FTP/S and WebDAV support
org.opencontainers.image.url=https://github.com/drakkan/sftpgo
org.opencontainers.image.documentation=https://github.com/drakkan/sftpgo/blob/${{ github.sha }}/docker/README.md
org.opencontainers.image.source=https://github.com/drakkan/sftpgo
org.opencontainers.image.version=${{ steps.info.outputs.version }}
org.opencontainers.image.created=${{ steps.info.outputs.created }}
org.opencontainers.image.revision=${{ github.sha }}
org.opencontainers.image.licenses=AGPL-3.0

595
.github/workflows/release.yml vendored Normal file

@@ -0,0 +1,595 @@
name: Release
on:
push:
tags: 'v*'
env:
GO_VERSION: 1.17.9
jobs:
prepare-sources-with-deps:
name: Prepare sources with deps
runs-on: ubuntu-18.04
steps:
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
- name: Get SFTPGo version
id: get_version
run: echo ::set-output name=VERSION::${GITHUB_REF/refs\/tags\//}
- name: Prepare release
run: |
go mod vendor
echo "${SFTPGO_VERSION}" > VERSION.txt
tar cJvf sftpgo_${SFTPGO_VERSION}_src_with_deps.tar.xz *
env:
SFTPGO_VERSION: ${{ steps.get_version.outputs.VERSION }}
- name: Upload build artifact
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_src_with_deps.tar.xz
path: ./sftpgo_${{ steps.get_version.outputs.VERSION }}_src_with_deps.tar.xz
retention-days: 1
prepare-window-mac:
name: Prepare binaries
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [macos-10.15, windows-2019]
steps:
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
- name: Get SFTPGo version
id: get_version
run: echo ::set-output name=VERSION::${GITHUB_REF/refs\/tags\//}
shell: bash
- name: Get OS name
id: get_os_name
run: |
if [[ $MATRIX_OS =~ ^macos.* ]]
then
echo ::set-output name=OS::macOS
else
echo ::set-output name=OS::windows
fi
shell: bash
env:
MATRIX_OS: ${{ matrix.os }}
- name: Build for macOS x86_64
if: startsWith(matrix.os, 'windows-') != true
run: go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
- name: Build for macOS arm64
if: startsWith(matrix.os, 'macos-') == true
run: CGO_ENABLED=1 GOOS=darwin GOARCH=arm64 SDKROOT=$(xcrun --sdk macosx --show-sdk-path) go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo_arm64
- name: Build for Windows
if: startsWith(matrix.os, 'windows-')
run: |
$GIT_COMMIT = (git describe --always --dirty) | Out-String
$DATE_TIME = ([datetime]::Now.ToUniversalTime().toString("yyyy-MM-ddTHH:mm:ssZ")) | Out-String
$FILE_VERSION = $Env:SFTPGO_VERSION.substring(1) + ".0"
go install github.com/tc-hib/go-winres@latest
go-winres simply --arch amd64 --product-version $Env:SFTPGO_VERSION-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o sftpgo.exe
mkdir arm64
$Env:CGO_ENABLED='0'
$Env:GOOS='windows'
$Env:GOARCH='arm64'
go-winres simply --arch arm64 --product-version $Env:SFTPGO_VERSION-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o .\arm64\sftpgo.exe
mkdir x86
$Env:GOARCH='386'
go-winres simply --arch 386 --product-version $Env:SFTPGO_VERSION-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o .\x86\sftpgo.exe
Remove-Item Env:\CGO_ENABLED
Remove-Item Env:\GOOS
Remove-Item Env:\GOARCH
env:
SFTPGO_VERSION: ${{ steps.get_version.outputs.VERSION }}
- name: Initialize data provider
run: ./sftpgo initprovider
shell: bash
- name: Prepare Release for macOS
if: startsWith(matrix.os, 'macos-')
run: |
mkdir -p output/{init,sqlite,bash_completion,zsh_completion}
echo "For documentation please take a look here:" > output/README.txt
echo "" >> output/README.txt
echo "https://github.com/drakkan/sftpgo/blob/${SFTPGO_VERSION}/README.md" >> output/README.txt
cp LICENSE output/
cp sftpgo output/
cp sftpgo.json output/
cp sftpgo.db output/sqlite/
cp -r static output/
cp -r openapi output/
cp -r templates output/
cp init/com.github.drakkan.sftpgo.plist output/init/
./sftpgo gen completion bash > output/bash_completion/sftpgo
./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
./sftpgo gen man -d output/man/man1
gzip output/man/man1/*
cd output
tar cJvf ../sftpgo_${SFTPGO_VERSION}_${OS}_x86_64.tar.xz *
cd ..
cp sftpgo_arm64 output/sftpgo
cd output
tar cJvf ../sftpgo_${SFTPGO_VERSION}_${OS}_arm64.tar.xz *
cd ..
env:
SFTPGO_VERSION: ${{ steps.get_version.outputs.VERSION }}
OS: ${{ steps.get_os_name.outputs.OS }}
- name: Prepare Release for Windows
if: startsWith(matrix.os, 'windows-')
run: |
mkdir output
copy .\sftpgo.exe .\output
copy .\sftpgo.json .\output
copy .\sftpgo.db .\output
copy .\LICENSE .\output\LICENSE.txt
mkdir output\templates
xcopy .\templates .\output\templates\ /E
mkdir output\static
xcopy .\static .\output\static\ /E
mkdir output\openapi
xcopy .\openapi .\output\openapi\ /E
$CERT_PATH=(Get-Location -PSProvider FileSystem).ProviderPath + "\cert.pfx"
[IO.File]::WriteAllBytes($CERT_PATH,[System.Convert]::FromBase64String($Env:CERT_DATA))
certutil -f -p "$Env:CERT_PASS" -importpfx MY "$CERT_PATH"
rm "$CERT_PATH"
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.17763.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\sftpgo.exe
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.17763.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\arm64\sftpgo.exe
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.17763.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\x86\sftpgo.exe
$INNO_S='/Ssigntool=$qC:/Program Files (x86)/Windows Kits/10/bin/10.0.17763.0/x86/signtool.exe$q sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n $qNicola Murino$q /d $qSFTPGo$q $f'
iscc "$INNO_S" .\windows-installer\sftpgo.iss
rm .\output\sftpgo.exe
rm .\output\sftpgo.db
copy .\arm64\sftpgo.exe .\output
(Get-Content .\output\sftpgo.json).replace('"sqlite"', '"bolt"') | Set-Content .\output\sftpgo.json
$Env:SFTPGO_DATA_PROVIDER__DRIVER='bolt'
$Env:SFTPGO_DATA_PROVIDER__NAME='.\output\sftpgo.db'
.\sftpgo.exe initprovider
Remove-Item Env:\SFTPGO_DATA_PROVIDER__DRIVER
Remove-Item Env:\SFTPGO_DATA_PROVIDER__NAME
$Env:SFTPGO_ISS_ARCH='arm64'
iscc "$INNO_S" .\windows-installer\sftpgo.iss
rm .\output\sftpgo.exe
copy .\x86\sftpgo.exe .\output
$Env:SFTPGO_ISS_ARCH='x86'
iscc "$INNO_S" .\windows-installer\sftpgo.iss
certutil -delstore MY "Nicola Murino"
env:
SFTPGO_ISS_VERSION: ${{ steps.get_version.outputs.VERSION }}
SFTPGO_ISS_DOC_URL: https://github.com/drakkan/sftpgo/blob/${{ steps.get_version.outputs.VERSION }}/README.md
CERT_DATA: ${{ secrets.CERT_DATA }}
CERT_PASS: ${{ secrets.CERT_PASS }}
- name: Prepare Portable Release for Windows
if: startsWith(matrix.os, 'windows-')
run: |
mkdir win-portable
copy .\sftpgo.exe .\win-portable
mkdir win-portable\arm64
copy .\arm64\sftpgo.exe .\win-portable\arm64
mkdir win-portable\x86
copy .\x86\sftpgo.exe .\win-portable\x86
copy .\sftpgo.json .\win-portable
(Get-Content .\win-portable\sftpgo.json).replace('"sqlite"', '"bolt"') | Set-Content .\win-portable\sftpgo.json
copy .\output\sftpgo.db .\win-portable
copy .\LICENSE .\win-portable\LICENSE.txt
mkdir win-portable\templates
xcopy .\templates .\win-portable\templates\ /E
mkdir win-portable\static
xcopy .\static .\win-portable\static\ /E
mkdir win-portable\openapi
xcopy .\openapi .\win-portable\openapi\ /E
Compress-Archive .\win-portable\* sftpgo_portable.zip
- name: Upload macOS x86_64 artifact
if: startsWith(matrix.os, 'macos-')
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.tar.xz
path: ./sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.tar.xz
retention-days: 1
- name: Upload macOS arm64 artifact
if: startsWith(matrix.os, 'macos-')
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_arm64.tar.xz
path: ./sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_arm64.tar.xz
retention-days: 1
- name: Upload Windows installer x86_64 artifact
if: startsWith(matrix.os, 'windows-')
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.exe
path: ./sftpgo_windows_x86_64.exe
retention-days: 1
- name: Upload Windows installer arm64 artifact
if: startsWith(matrix.os, 'windows-')
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_arm64.exe
path: ./sftpgo_windows_arm64.exe
retention-days: 1
- name: Upload Windows installer x86 artifact
if: startsWith(matrix.os, 'windows-')
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86.exe
path: ./sftpgo_windows_x86.exe
retention-days: 1
- name: Upload Windows portable artifact
if: startsWith(matrix.os, 'windows-')
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_portable.zip
path: ./sftpgo_portable.zip
retention-days: 1
prepare-linux:
name: Prepare Linux binaries
runs-on: ubuntu-18.04
strategy:
matrix:
include:
- arch: amd64
go-arch: amd64
deb-arch: amd64
rpm-arch: x86_64
tar-arch: x86_64
- arch: aarch64
distro: ubuntu18.04
go-arch: arm64
deb-arch: arm64
rpm-arch: aarch64
tar-arch: arm64
- arch: ppc64le
distro: ubuntu18.04
go-arch: ppc64le
deb-arch: ppc64el
rpm-arch: ppc64le
tar-arch: ppc64le
- arch: armv7
distro: ubuntu18.04
go-arch: arm7
deb-arch: armhf
rpm-arch: armv7hl
tar-arch: armv7
steps:
- uses: actions/checkout@v3
- name: Set up Go
if: ${{ matrix.arch == 'amd64' }}
uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
- name: Get versions
id: get_version
run: |
echo ::set-output name=SFTPGO_VERSION::${GITHUB_REF/refs\/tags\//}
echo ::set-output name=GO_VERSION::${GO_VERSION}
shell: bash
env:
GO_VERSION: ${{ env.GO_VERSION }}
- name: Build on amd64
if: ${{ matrix.arch == 'amd64' }}
run: |
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
mkdir -p output/{init,sqlite,bash_completion,zsh_completion}
echo "For documentation please take a look here:" > output/README.txt
echo "" >> output/README.txt
echo "https://github.com/drakkan/sftpgo/blob/${SFTPGO_VERSION}/README.md" >> output/README.txt
cp LICENSE output/
cp sftpgo.json output/
cp -r templates output/
cp -r static output/
cp -r openapi output/
cp init/sftpgo.service output/init/
./sftpgo initprovider
./sftpgo gen completion bash > output/bash_completion/sftpgo
./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
./sftpgo gen man -d output/man/man1
gzip output/man/man1/*
cp sftpgo output/
cp sftpgo.db output/sqlite/
cd output
tar cJvf sftpgo_${SFTPGO_VERSION}_linux_${{ matrix.tar-arch }}.tar.xz *
cd ..
env:
SFTPGO_VERSION: ${{ steps.get_version.outputs.SFTPGO_VERSION }}
- uses: uraimo/run-on-arch-action@v2
if: ${{ matrix.arch != 'amd64' }}
name: Build for ${{ matrix.arch }}
id: build
with:
arch: ${{ matrix.arch }}
distro: ${{ matrix.distro }}
setup: |
mkdir -p "${PWD}/output"
dockerRunArgs: |
--volume "${PWD}/output:/output"
shell: /bin/bash
install: |
apt-get update -q -y
apt-get install -q -y curl gcc git xz-utils
GO_DOWNLOAD_ARCH=${{ matrix.go-arch }}
if [ ${{ matrix.arch}} == 'armv7' ]
then
GO_DOWNLOAD_ARCH=armv6l
fi
curl --retry 5 --retry-delay 2 --connect-timeout 10 -o go.tar.gz -L https://go.dev/dl/go${{ steps.get_version.outputs.GO_VERSION }}.linux-${GO_DOWNLOAD_ARCH}.tar.gz
tar -C /usr/local -xzf go.tar.gz
run: |
export PATH=$PATH:/usr/local/go/bin
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
mkdir -p output/{init,sqlite,bash_completion,zsh_completion}
echo "For documentation please take a look here:" > output/README.txt
echo "" >> output/README.txt
echo "https://github.com/drakkan/sftpgo/blob/${{ steps.get_version.outputs.SFTPGO_VERSION }}/README.md" >> output/README.txt
cp LICENSE output/
cp sftpgo.json output/
cp -r templates output/
cp -r static output/
cp -r openapi output/
cp init/sftpgo.service output/init/
./sftpgo initprovider
./sftpgo gen completion bash > output/bash_completion/sftpgo
./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
./sftpgo gen man -d output/man/man1
gzip output/man/man1/*
cp sftpgo output/
cp sftpgo.db output/sqlite/
cd output
tar cJvf sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_${{ matrix.tar-arch }}.tar.xz *
cd ..
- name: Upload build artifact for ${{ matrix.arch }}
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_${{ matrix.tar-arch }}.tar.xz
path: ./output/sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_${{ matrix.tar-arch }}.tar.xz
retention-days: 1
- name: Build Packages
id: build_linux_pkgs
run: |
export NFPM_ARCH=${{ matrix.go-arch }}
cd pkgs
./build.sh
PKG_VERSION=${SFTPGO_VERSION:1}
echo "::set-output name=pkg-version::${PKG_VERSION}"
env:
SFTPGO_VERSION: ${{ steps.get_version.outputs.SFTPGO_VERSION }}
- name: Upload Deb Package
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.build_linux_pkgs.outputs.pkg-version }}-1_${{ matrix.deb-arch}}.deb
path: ./pkgs/dist/deb/sftpgo_${{ steps.build_linux_pkgs.outputs.pkg-version }}-1_${{ matrix.deb-arch}}.deb
retention-days: 1
- name: Upload RPM Package
uses: actions/upload-artifact@v3
with:
name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-1.${{ matrix.rpm-arch}}.rpm
path: ./pkgs/dist/rpm/sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-1.${{ matrix.rpm-arch}}.rpm
retention-days: 1
prepare-linux-bundle:
name: Prepare Linux bundle
needs: prepare-linux
runs-on: ubuntu-latest
steps:
- name: Get versions
id: get_version
run: |
echo ::set-output name=SFTPGO_VERSION::${GITHUB_REF/refs\/tags\//}
shell: bash
- name: Download amd64 artifact
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_x86_64.tar.xz
- name: Download arm64 artifact
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_arm64.tar.xz
- name: Download ppc64le artifact
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_ppc64le.tar.xz
- name: Download armv7 artifact
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_armv7.tar.xz
- name: Build bundle
shell: bash
run: |
mkdir -p bundle/{arm64,ppc64le,armv7}
cd bundle
tar xvf ../sftpgo_${SFTPGO_VERSION}_linux_x86_64.tar.xz
cd arm64
tar xvf ../../sftpgo_${SFTPGO_VERSION}_linux_arm64.tar.xz sftpgo
cd ../ppc64le
tar xvf ../../sftpgo_${SFTPGO_VERSION}_linux_ppc64le.tar.xz sftpgo
cd ../armv7
tar xvf ../../sftpgo_${SFTPGO_VERSION}_linux_armv7.tar.xz sftpgo
cd ..
tar cJvf sftpgo_${SFTPGO_VERSION}_linux_bundle.tar.xz *
cd ..
env:
SFTPGO_VERSION: ${{ steps.get_version.outputs.SFTPGO_VERSION }}
- name: Upload Linux bundle
uses: actions/upload-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_bundle.tar.xz
path: ./bundle/sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_bundle.tar.xz
retention-days: 1
create-release:
name: Release
needs: [prepare-linux-bundle, prepare-sources-with-deps, prepare-window-mac]
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Get versions
id: get_version
run: |
SFTPGO_VERSION=${GITHUB_REF/refs\/tags\//}
PKG_VERSION=${SFTPGO_VERSION:1}
echo ::set-output name=SFTPGO_VERSION::${SFTPGO_VERSION}
echo "::set-output name=PKG_VERSION::${PKG_VERSION}"
shell: bash
- name: Download amd64 artifact
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_x86_64.tar.xz
- name: Download arm64 artifact
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_arm64.tar.xz
- name: Download ppc64le artifact
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_ppc64le.tar.xz
- name: Download armv7 artifact
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_armv7.tar.xz
- name: Download Linux bundle artifact
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_bundle.tar.xz
- name: Download Deb amd64 artifact
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.PKG_VERSION }}-1_amd64.deb
- name: Download Deb arm64 artifact
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.PKG_VERSION }}-1_arm64.deb
- name: Download Deb ppc64le artifact
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.PKG_VERSION }}-1_ppc64el.deb
- name: Download Deb armv7 artifact
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.PKG_VERSION }}-1_armhf.deb
- name: Download RPM x86_64 artifact
uses: actions/download-artifact@v3
with:
name: sftpgo-${{ steps.get_version.outputs.PKG_VERSION }}-1.x86_64.rpm
- name: Download RPM aarch64 artifact
uses: actions/download-artifact@v3
with:
name: sftpgo-${{ steps.get_version.outputs.PKG_VERSION }}-1.aarch64.rpm
- name: Download RPM ppc64le artifact
uses: actions/download-artifact@v3
with:
name: sftpgo-${{ steps.get_version.outputs.PKG_VERSION }}-1.ppc64le.rpm
- name: Download RPM armv7 artifact
uses: actions/download-artifact@v3
with:
name: sftpgo-${{ steps.get_version.outputs.PKG_VERSION }}-1.armv7hl.rpm
- name: Download macOS x86_64 artifact
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_macOS_x86_64.tar.xz
- name: Download macOS arm64 artifact
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_macOS_arm64.tar.xz
- name: Download Windows installer x86_64 artifact
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_windows_x86_64.exe
- name: Download Windows installer arm64 artifact
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_windows_arm64.exe
- name: Download Windows installer x86 artifact
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_windows_x86.exe
- name: Download Windows portable artifact
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_windows_portable.zip
- name: Download source with deps artifact
uses: actions/download-artifact@v3
with:
name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_src_with_deps.tar.xz
- name: Create release
run: |
mv sftpgo_windows_x86_64.exe sftpgo_${SFTPGO_VERSION}_windows_x86_64.exe
mv sftpgo_windows_arm64.exe sftpgo_${SFTPGO_VERSION}_windows_arm64.exe
mv sftpgo_windows_x86.exe sftpgo_${SFTPGO_VERSION}_windows_x86.exe
mv sftpgo_portable.zip sftpgo_${SFTPGO_VERSION}_windows_portable.zip
gh release create "${SFTPGO_VERSION}" -t "${SFTPGO_VERSION}"
gh release upload "${SFTPGO_VERSION}" sftpgo_*.xz --clobber
gh release upload "${SFTPGO_VERSION}" sftpgo-*.rpm --clobber
gh release upload "${SFTPGO_VERSION}" sftpgo_*.deb --clobber
gh release upload "${SFTPGO_VERSION}" sftpgo_*.exe --clobber
gh release upload "${SFTPGO_VERSION}" sftpgo_*.zip --clobber
gh release view "${SFTPGO_VERSION}"
env:
GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}
SFTPGO_VERSION: ${{ steps.get_version.outputs.SFTPGO_VERSION }}

52
.golangci.yml Normal file

@@ -0,0 +1,52 @@
run:
timeout: 5m
issues-exit-code: 1
tests: true
linters-settings:
dupl:
threshold: 150
errcheck:
check-type-assertions: false
check-blank: false
goconst:
min-len: 3
min-occurrences: 3
gocyclo:
min-complexity: 15
gofmt:
simplify: true
goimports:
local-prefixes: github.com/drakkan/sftpgo
#govet:
# report about shadowed variables
#check-shadowing: true
#enable:
# - fieldalignment
issues:
include:
- EXC0002
- EXC0012
- EXC0013
- EXC0014
- EXC0015
linters:
enable:
- goconst
- errcheck
- gofmt
- goimports
- revive
- unconvert
- unparam
- bodyclose
- gocyclo
- misspell
- whitespace
- dupl
- rowserrcheck
- dogsled
- govet


@@ -1,24 +0,0 @@
language: go
os:
- linux
- osx
go:
- 1.13.x
- 1.14.x
env:
- GO111MODULE=on
before_script:
- sftpgo initprovider
install:
- go get -v -t ./...
script:
- go test -v ./... -coverprofile=coverage.txt -covermode=atomic
after_success:
- bash <(curl -s https://codecov.io/bash)

65
Dockerfile Normal file

@@ -0,0 +1,65 @@
FROM golang:1.17-bullseye as builder
ENV GOFLAGS="-mod=readonly"
RUN mkdir -p /workspace
WORKDIR /workspace
ARG GOPROXY
COPY go.mod go.sum ./
RUN go mod download
ARG COMMIT_SHA
# This ARG allows to disable some optional features and it might be useful if you build the image yourself.
# For example you can disable S3 and GCS support like this:
# --build-arg FEATURES=nos3,nogcs
ARG FEATURES
COPY . .
RUN set -xe && \
export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --dirty)} && \
go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -v -o sftpgo
FROM debian:bullseye-slim
# Set to "true" to install jq and the optional git and rsync dependencies
ARG INSTALL_OPTIONAL_PACKAGES=false
RUN apt-get update && apt-get install --no-install-recommends -y ca-certificates media-types && rm -rf /var/lib/apt/lists/*
RUN if [ "${INSTALL_OPTIONAL_PACKAGES}" = "true" ]; then apt-get update && apt-get install --no-install-recommends -y jq git rsync && rm -rf /var/lib/apt/lists/*; fi
RUN mkdir -p /etc/sftpgo /var/lib/sftpgo /usr/share/sftpgo /srv/sftpgo/data /srv/sftpgo/backups
RUN groupadd --system -g 1000 sftpgo && \
useradd --system --gid sftpgo --no-create-home \
--home-dir /var/lib/sftpgo --shell /usr/sbin/nologin \
--comment "SFTPGo user" --uid 1000 sftpgo
COPY --from=builder /workspace/sftpgo.json /etc/sftpgo/sftpgo.json
COPY --from=builder /workspace/templates /usr/share/sftpgo/templates
COPY --from=builder /workspace/static /usr/share/sftpgo/static
COPY --from=builder /workspace/openapi /usr/share/sftpgo/openapi
COPY --from=builder /workspace/sftpgo /usr/local/bin/
# Log to the stdout so the logs will be available using docker logs
ENV SFTPGO_LOG_FILE_PATH=""
# templates and static paths are inside the container
ENV SFTPGO_HTTPD__TEMPLATES_PATH=/usr/share/sftpgo/templates
ENV SFTPGO_SMTP__TEMPLATES_PATH=/usr/share/sftpgo/templates
ENV SFTPGO_HTTPD__STATIC_FILES_PATH=/usr/share/sftpgo/static
ENV SFTPGO_HTTPD__OPENAPI_PATH=/usr/share/sftpgo/openapi
# Modify the default configuration file
RUN sed -i "s|\"users_base_dir\": \"\",|\"users_base_dir\": \"/srv/sftpgo/data\",|" /etc/sftpgo/sftpgo.json && \
sed -i "s|\"backups\"|\"/srv/sftpgo/backups\"|" /etc/sftpgo/sftpgo.json
RUN chown -R sftpgo:sftpgo /etc/sftpgo /srv/sftpgo && chown sftpgo:sftpgo /var/lib/sftpgo && chmod 700 /srv/sftpgo/backups
WORKDIR /var/lib/sftpgo
USER 1000:1000
CMD ["sftpgo", "serve"]

70
Dockerfile.alpine Normal file

@@ -0,0 +1,70 @@
FROM golang:1.17-alpine3.15 AS builder
ENV GOFLAGS="-mod=readonly"
RUN apk add --update --no-cache bash ca-certificates curl git gcc g++
RUN mkdir -p /workspace
WORKDIR /workspace
ARG GOPROXY
COPY go.mod go.sum ./
RUN go mod download
ARG COMMIT_SHA
# This ARG allows to disable some optional features and it might be useful if you build the image yourself.
# For example you can disable S3 and GCS support like this:
# --build-arg FEATURES=nos3,nogcs
ARG FEATURES
COPY . .
RUN set -xe && \
export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --dirty)} && \
go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -v -o sftpgo
FROM alpine:3.15
# Set to "true" to install jq and the optional git and rsync dependencies
ARG INSTALL_OPTIONAL_PACKAGES=false
RUN apk add --update --no-cache ca-certificates tzdata mailcap
RUN if [ "${INSTALL_OPTIONAL_PACKAGES}" = "true" ]; then apk add --update --no-cache jq git rsync; fi
# set up nsswitch.conf for Go's "netgo" implementation
# https://github.com/gliderlabs/docker-alpine/issues/367#issuecomment-424546457
RUN test ! -e /etc/nsswitch.conf && echo 'hosts: files dns' > /etc/nsswitch.conf
RUN mkdir -p /etc/sftpgo /var/lib/sftpgo /usr/share/sftpgo /srv/sftpgo/data /srv/sftpgo/backups
RUN addgroup -g 1000 -S sftpgo && \
adduser -u 1000 -h /var/lib/sftpgo -s /sbin/nologin -G sftpgo -S -D -H -g "SFTPGo user" sftpgo
COPY --from=builder /workspace/sftpgo.json /etc/sftpgo/sftpgo.json
COPY --from=builder /workspace/templates /usr/share/sftpgo/templates
COPY --from=builder /workspace/static /usr/share/sftpgo/static
COPY --from=builder /workspace/openapi /usr/share/sftpgo/openapi
COPY --from=builder /workspace/sftpgo /usr/local/bin/
# Log to the stdout so the logs will be available using docker logs
ENV SFTPGO_LOG_FILE_PATH=""
# templates and static paths are inside the container
ENV SFTPGO_HTTPD__TEMPLATES_PATH=/usr/share/sftpgo/templates
ENV SFTPGO_SMTP__TEMPLATES_PATH=/usr/share/sftpgo/templates
ENV SFTPGO_HTTPD__STATIC_FILES_PATH=/usr/share/sftpgo/static
ENV SFTPGO_HTTPD__OPENAPI_PATH=/usr/share/sftpgo/openapi
# Modify the default configuration file
RUN sed -i "s|\"users_base_dir\": \"\",|\"users_base_dir\": \"/srv/sftpgo/data\",|" /etc/sftpgo/sftpgo.json && \
sed -i "s|\"backups\"|\"/srv/sftpgo/backups\"|" /etc/sftpgo/sftpgo.json
RUN chown -R sftpgo:sftpgo /etc/sftpgo /srv/sftpgo && chown sftpgo:sftpgo /var/lib/sftpgo && chmod 700 /srv/sftpgo/backups
WORKDIR /var/lib/sftpgo
USER 1000:1000
CMD ["sftpgo", "serve"]

62
Dockerfile.distroless Normal file

@@ -0,0 +1,62 @@
FROM golang:1.17-bullseye as builder
ENV CGO_ENABLED=0 GOFLAGS="-mod=readonly"
RUN mkdir -p /workspace
WORKDIR /workspace
ARG GOPROXY
COPY go.mod go.sum ./
RUN go mod download
ARG COMMIT_SHA
# This ARG allows to disable some optional features and it might be useful if you build the image yourself.
# For this variant we disable SQLite support since it requires CGO and so a C runtime which is not installed
# in distroless/static-* images
ARG FEATURES=nosqlite
COPY . .
RUN set -xe && \
export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --dirty)} && \
go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -v -o sftpgo
# Modify the default configuration file
RUN sed -i "s|\"users_base_dir\": \"\",|\"users_base_dir\": \"/srv/sftpgo/data\",|" sftpgo.json && \
sed -i "s|\"backups\"|\"/srv/sftpgo/backups\"|" sftpgo.json && \
sed -i "s|\"sqlite\"|\"bolt\"|" sftpgo.json
RUN apt-get update && apt-get install --no-install-recommends -y media-types && rm -rf /var/lib/apt/lists/*
RUN mkdir /etc/sftpgo /var/lib/sftpgo /srv/sftpgo
FROM gcr.io/distroless/static-debian11
COPY --from=builder --chown=1000:1000 /etc/sftpgo /etc/sftpgo
COPY --from=builder --chown=1000:1000 /srv/sftpgo /srv/sftpgo
COPY --from=builder --chown=1000:1000 /var/lib/sftpgo /var/lib/sftpgo
COPY --from=builder --chown=1000:1000 /workspace/sftpgo.json /etc/sftpgo/sftpgo.json
COPY --from=builder /workspace/templates /usr/share/sftpgo/templates
COPY --from=builder /workspace/static /usr/share/sftpgo/static
COPY --from=builder /workspace/openapi /usr/share/sftpgo/openapi
COPY --from=builder /workspace/sftpgo /usr/local/bin/
COPY --from=builder /etc/mime.types /etc/mime.types
# Log to the stdout so the logs will be available using docker logs
ENV SFTPGO_LOG_FILE_PATH=""
# templates and static paths are inside the container
ENV SFTPGO_HTTPD__TEMPLATES_PATH=/usr/share/sftpgo/templates
ENV SFTPGO_SMTP__TEMPLATES_PATH=/usr/share/sftpgo/templates
ENV SFTPGO_HTTPD__STATIC_FILES_PATH=/usr/share/sftpgo/static
ENV SFTPGO_HTTPD__OPENAPI_PATH=/usr/share/sftpgo/openapi
# These env vars are required to avoid the following error when calling user.Current():
# unable to get the current user: user: Current requires cgo or $USER set in environment
ENV USER=sftpgo
ENV HOME=/var/lib/sftpgo
WORKDIR /var/lib/sftpgo
USER 1000:1000
CMD ["sftpgo", "serve"]

145
LICENSE

@@ -1,5 +1,5 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
@@ -7,17 +7,15 @@
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
@@ -26,44 +24,34 @@ them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
@@ -72,7 +60,7 @@ modification follow.
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
@@ -549,35 +537,45 @@ to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
@@ -635,40 +633,29 @@ the "copyright" line and a pointer to where the full notice is found.
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
it under the terms of the GNU Affero General Public License as published
by the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
GNU Affero General Public License for more details.
You should have received a copy of the GNU General Public License
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.

246
README.md
View File

@@ -1,74 +1,109 @@
# SFTPGo
[![Build Status](https://travis-ci.org/drakkan/sftpgo.svg?branch=master)](https://travis-ci.org/drakkan/sftpgo) [![Code Coverage](https://codecov.io/gh/drakkan/sftpgo/branch/master/graph/badge.svg)](https://codecov.io/gh/drakkan/sftpgo/branch/master) [![Go Report Card](https://goreportcard.com/badge/github.com/drakkan/sftpgo)](https://goreportcard.com/report/github.com/drakkan/sftpgo) [![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0) [![Mentioned in Awesome Go](https://awesome.re/mentioned-badge.svg)](https://github.com/avelino/awesome-go)
![CI Status](https://github.com/drakkan/sftpgo/workflows/CI/badge.svg?branch=main&event=push)
[![Code Coverage](https://codecov.io/gh/drakkan/sftpgo/branch/main/graph/badge.svg)](https://codecov.io/gh/drakkan/sftpgo/branch/main)
[![License: AGPL v3](https://img.shields.io/badge/License-AGPLv3-blue.svg)](https://www.gnu.org/licenses/agpl-3.0)
[![Docker Pulls](https://img.shields.io/docker/pulls/drakkan/sftpgo)](https://hub.docker.com/r/drakkan/sftpgo)
[![Mentioned in Awesome Go](https://awesome.re/mentioned-badge.svg)](https://github.com/avelino/awesome-go)
Fully featured and highly configurable SFTP server, written in Go
Fully featured and highly configurable SFTP server with optional HTTP, FTP/S and WebDAV support.
Several storage backends are supported: local filesystem, encrypted local filesystem, S3 (compatible) Object Storage, Google Cloud Storage, Azure Blob Storage, SFTP.
## Features
- Each account is chrooted to its home directory.
- SFTP accounts are virtual accounts stored in a "data provider".
- SQLite, MySQL, PostgreSQL, bbolt (key/value store in pure Go) and in-memory data providers are supported.
- Support for serving local filesystem, encrypted local filesystem, S3 Compatible Object Storage, Google Cloud Storage, Azure Blob Storage or other SFTP accounts over SFTP/SCP/FTP/WebDAV.
- Virtual folders are supported: a virtual folder can use any of the supported storage backends. So you can have, for example, an S3 user that exposes a GCS bucket (or part of it) on a specified path and an encrypted local filesystem on another one. Virtual folders can be private or shared among multiple users; for shared virtual folders you can define different quota limits for each user.
- Configurable [custom commands and/or HTTP hooks](./docs/custom-actions.md) on file upload, pre-upload, download, pre-download, delete, pre-delete, rename, mkdir, rmdir, on SSH commands and on user add, update and delete.
- Virtual accounts stored within a "data provider".
- SQLite, MySQL, PostgreSQL, CockroachDB, Bolt (key/value store in pure Go) and in-memory data providers are supported.
- Chroot isolation for local accounts. Cloud-based accounts can be restricted to a certain base path.
- Per user and per directory virtual permissions, for each exposed path you can allow or deny: directory listing, upload, overwrite, download, delete, rename, create directories, create symlinks, change owner/group/file mode.
- [REST API](./docs/rest-api.md) for users and folders management, data retention, backup, restore and real-time reports of the active connections, with the possibility of forcibly closing a connection.
- [Web based administration interface](./docs/web-admin.md) to easily manage users, folders and connections.
- [Web client interface](./docs/web-client.md) so that end users can change their credentials, manage and share their files.
- Public key and password authentication. Multiple public keys per user are supported.
- SSH user [certificate authentication](https://cvsweb.openbsd.org/src/usr.bin/ssh/PROTOCOL.certkeys?rev=1.8).
- Keyboard interactive authentication. You can easily set up customizable multi-factor authentication.
- Per user authentication methods. You can, for example, deny one or more authentication methods to one or more users.
- Custom authentication via external programs is supported.
- Dynamic user modification before login via external programs is supported.
- Partial authentication. You can configure multi-step authentication requiring, for example, the user password after successful public key authentication.
- Per user authentication methods.
- [Two-factor authentication](./docs/howto/two-factor-authentication.md) based on time-based one time passwords (RFC 6238) which works with Authy, Google Authenticator and other compatible apps.
- Custom authentication via external programs/HTTP API.
- [Data At Rest Encryption](./docs/dare.md).
- Dynamic user modification before login via external programs/HTTP API.
- Quota support: accounts can have individual quota expressed as max total size and/or max number of files.
- Bandwidth throttling is supported, with distinct settings for upload and download.
- Bandwidth throttling, with distinct settings for upload and download and overrides based on the client IP address.
- Per-protocol [rate limiting](./docs/rate-limiting.md) is supported and can be optionally connected to the built-in defender to automatically block hosts that repeatedly exceed the configured limit.
- Per user maximum concurrent sessions.
- Per user and per directory permission management: list directory contents, upload, overwrite, download, delete, rename, create directories, create symlinks, change owner/group and mode, change access and modification times.
- Per user files/folders ownership mapping: you can map all the users to the system account that runs SFTPGo (all platforms are supported) or you can run SFTPGo as root user and map each user or group of users to a different system account (\*NIX only).
- Per user IP filters are supported: login can be restricted to specific ranges of IP addresses or to a specific IP address.
- Per user and per directory file extensions filters are supported: files can be allowed or denied based on their extensions.
- Virtual folders are supported: directories outside the user home directory can be exposed as virtual folders.
- Configurable custom commands and/or HTTP notifications on file upload, download, delete, rename, on SSH commands and on user add, update and delete.
- Per user and global IP filters: login can be restricted to specific ranges of IP addresses or to a specific IP address.
- Per user and per directory shell-like pattern filters: files can be allowed or denied based on shell-like patterns.
- Automatic termination of idle connections.
- Automatic blocklist management using the built-in [defender](./docs/defender.md).
- Atomic uploads are configurable.
- Per user files/folders ownership mapping: you can map all the users to the system account that runs SFTPGo (all platforms are supported) or you can run SFTPGo as root user and map each user or group of users to a different system account (\*NIX only).
- Support for Git repositories over SSH.
- SCP and rsync are supported.
- Support for serving local filesystem, S3 Compatible Object Storage and Google Cloud Storage over SFTP/SCP.
- FTP/S is supported. You can configure the FTP service to require TLS for both control and data connections.
- [WebDAV](./docs/webdav.md) is supported.
- Two-Way TLS authentication, aka TLS with client certificate authentication, is supported for REST API/Web Admin, FTPS and WebDAV over HTTPS.
- Per user protocols restrictions. You can configure the allowed protocols (SSH/FTP/WebDAV) for each user.
- [Prometheus metrics](./docs/metrics.md) are exposed.
- Support for HAProxy PROXY protocol: you can proxy and/or load balance the SFTP/SCP service without losing the information about the client's address.
- [REST API](./docs/rest-api.md) for users management, backup, restore and real-time reports of the active connections, with the possibility of forcibly closing a connection.
- [Web based administration interface](./docs/web-admin.md) to easily manage users and connections.
- Easy [migration](./scripts#convert-users-from-other-stores) from Linux system user accounts.
- Support for HAProxy PROXY protocol: you can proxy and/or load balance the SFTP/SCP/FTP/WebDAV service without losing the information about the client's address.
- Easy [migration](./examples/convertusers) from Linux system user accounts.
- [Portable mode](./docs/portable-mode.md): a convenient way to share a single directory on demand.
- [SFTP subsystem mode](./docs/sftp-subsystem.md): you can use SFTPGo as OpenSSH's SFTP subsystem.
- Performance analysis using built-in [profiler](./docs/profiling.md).
- The configuration format is your choice: JSON, TOML, YAML, HCL and envfile are supported.
- Log files are accurate and they are saved in the easily parsable JSON format ([more information](./docs/logs.md)).
- SFTPGo supports a [plugin system](./docs/plugins.md) and therefore can be extended using external plugins.
## Platforms
SFTPGo is developed and tested on Linux. After each commit, the code is automatically built and tested on Linux and macOS using Travis CI.
The test cases are regularly manually executed and passed on Windows. Other UNIX variants such as \*BSD should work too.
SFTPGo is developed and tested on Linux. After each commit, the code is automatically built and tested on Linux, macOS and Windows using a [GitHub Action](./.github/workflows/development.yml). The test cases are regularly manually executed and passed on FreeBSD. Other *BSD variants should work too.
## Requirements
- Go 1.13 or higher as build only dependency.
- A suitable SQL server or key/value store to use as data provider: PostgreSQL 9.4+ or MySQL 5.6+ or SQLite 3.x or bbolt 1.3.x
- Go as build only dependency. We support the Go version(s) used in [continuous integration workflows](./tree/main/.github/workflows).
- A suitable SQL server to use as data provider: PostgreSQL 9.4+ or MySQL 5.6+ or SQLite 3.x or CockroachDB stable.
- The SQL server is optional: you can choose to use an embedded bolt database as key/value store or an in-memory data provider.
## Installation
Binary releases for Linux, macOS, and Windows are available. Please visit the [releases](https://github.com/drakkan/sftpgo/releases "releases") page.
Sample Dockerfiles for [Debian](https://www.debian.org "Debian") and [Alpine](https://alpinelinux.org "Alpine") are available inside the source tree [docker](./docker "docker") directory.
An official Docker image is available. Documentation is [here](./docker/README.md).
Some Linux distro packages are available:
- For Arch Linux via AUR:
- [sftpgo](https://aur.archlinux.org/packages/sftpgo/). This package follows stable releases. It requires `git`, `gcc` and `go` to build.
- [sftpgo-bin](https://aur.archlinux.org/packages/sftpgo-bin/). This package follows stable releases downloading the prebuilt Linux binary from GitHub. It does not require `git`, `gcc` and `go` to build.
- [sftpgo-git](https://aur.archlinux.org/packages/sftpgo-git/). This package builds and installs the latest git master. It requires `git`, `gcc` and `go` to build.
- [sftpgo-git](https://aur.archlinux.org/packages/sftpgo-git/). This package builds and installs the latest git `main` branch. It requires `git`, `gcc` and `go` to build.
- Deb and RPM packages are built after each commit and for each release.
- For Ubuntu a PPA is available [here](https://launchpad.net/~sftpgo/+archive/ubuntu/sftpgo).
SFTPGo is also available on [AWS Marketplace](https://aws.amazon.com/marketplace/seller-profile?id=6e849ab8-70a6-47de-9a43-13c3fa849335), purchasing from there will help keep SFTPGo a long-term sustainable project.
On FreeBSD you can install from the [SFTPGo port](https://www.freshports.org/ftp/sftpgo).
On Windows you can use:
- The Windows installer to install and run SFTPGo as a Windows service.
- The portable package to start SFTPGo on demand.
- The [Chocolatey package](https://community.chocolatey.org/packages/sftpgo) to install and run SFTPGo as a Windows service.
You can easily test new features by selecting a commit from the [Actions](https://github.com/drakkan/sftpgo/actions) page and downloading the matching build artifacts for Linux, macOS or Windows. GitHub stores artifacts for 90 days.
Alternatively, you can [build from source](./docs/build-from-source.md).
[Getting Started Guide for the Impatient](./docs/howto/getting-started.md).
## Configuration
A full explanation of all configuration methods can be found [here](./docs/full-configuration.md).
Please make sure to [initialize the data provider](#data-provider-initialization) before running the daemon!
Please make sure to [initialize the data provider](#data-provider-initialization-and-management) before running the daemon.
To start the SFTP server with default settings, simply run:
To start SFTPGo with the default settings, simply run:
```bash
sftpgo serve
@@ -76,15 +111,15 @@ sftpgo serve
Check out [this documentation](./docs/service.md) if you want to run SFTPGo as a service.
### Data provider initialization
### Data provider initialization and management
Before starting the SFTPGo server, please ensure that the configured data provider is properly initialized.
Before starting the SFTPGo server please ensure that the configured data provider is properly initialized/updated.
SQL based data providers (SQLite, MySQL, PostgreSQL) require the creation of a database containing the required tables. Memory and bolt data providers do not require an initialization.
For PostgreSQL, MySQL and CockroachDB providers, you need to create the configured database. For SQLite, the configured database will be automatically created at startup. Memory and bolt data providers do not require an initialization but they could require an update to the existing data after upgrading SFTPGo.
After configuring the data provider using the configuration file, you can create the required database structure using the `initprovider` command.
For SQLite provider, the `initprovider` command will auto create the database file, if missing, and the required tables.
For PostgreSQL and MySQL providers, you need to create the configured database, and the `initprovider` command will create the required tables.
SFTPGo will attempt to automatically detect if the data provider is initialized/updated and, if not, will attempt to initialize/update it on startup as needed.
Alternatively, you can create/update the required data provider structures yourself using the `initprovider` command.
For example, you can simply execute the following command from the configuration directory:
@@ -98,13 +133,72 @@ Take a look at the CLI usage to learn how to specify a different configuration f
sftpgo initprovider --help
```
The `initprovider` command is enough for new installations. From now on, the database structure will be automatically checked and updated, if required, at startup.
You can disable automatic data provider checks/updates at startup by setting the `update_mode` configuration key to `1`.
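For reference, a minimal sketch of the relevant fragment of the JSON configuration file (the SQLite driver is just an example; key names as used by the `data_provider` section):

```json
{
  "data_provider": {
    "driver": "sqlite",
    "update_mode": 1
  }
}
```

With `update_mode` set to `1`, pending schema updates are applied only when you run `initprovider` explicitly.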
#### Upgrading
You can also reset your provider by using the `resetprovider` sub-command. Take a look at the CLI usage for more details:
If you are upgrading from version 0.9.5 or before, you have to manually execute the SQL scripts to create the required database structure. These scripts can be found inside the source tree [sql](./sql "sql") directory. The SQL script filenames are, by convention, the date in `YYYYMMDD` format with the `.sql` suffix. You need to apply all the SQL scripts for your database ordered by name. For example, `20190828.sql` must be applied before `20191112.sql`, and so on.
Example for SQLite: `find sql/sqlite/ -type f -iname '*.sql' -print | sort -n | xargs cat | sqlite3 sftpgo.db`.
After applying these scripts, your database structure is the same as the one obtained using `initprovider` for new installations, so from now on, you don't have to manually upgrade your database anymore.
```bash
sftpgo resetprovider --help
```
## Create the first admin
To start using SFTPGo you need to create an admin user. You can do this in several ways:
- by using the web admin interface. The default URL is [http://127.0.0.1:8080/web/admin](http://127.0.0.1:8080/web/admin)
- by loading initial data
- by enabling `create_default_admin` in your configuration file and setting the environment variables `SFTPGO_DEFAULT_ADMIN_USERNAME` and `SFTPGO_DEFAULT_ADMIN_PASSWORD`, as sketched below
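A minimal sketch of the environment variable approach (values are illustrative; `create_default_admin` belongs to the `data_provider` section of the configuration file):

```bash
# Illustrative values: enable "create_default_admin" in the data_provider
# section of your configuration file first, then set the default admin
# credentials and start the service.
export SFTPGO_DEFAULT_ADMIN_USERNAME=admin
export SFTPGO_DEFAULT_ADMIN_PASSWORD='a-strong-password'
sftpgo serve
```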
## Upgrading
SFTPGo supports upgrading from the previous release branch to the current one.
Some examples for supported upgrade paths are:
- from 1.2.x to 2.0.x
- from 2.0.x to 2.1.x and so on.
For supported upgrade paths, the data and schema are migrated automatically; alternatively, you can use the `initprovider` command.
So if, for example, you want to upgrade from a version before 1.2.x to 2.0.x, you must first install version 1.2.x, update the data provider and finally install version 2.0.x. It is recommended to always install the latest available minor version, i.e. do not install 1.2.0 if 1.2.2 is available.
Loading data from a provider independent JSON dump is supported from the previous release branch to the current one too. After upgrading SFTPGo it is advisable to regenerate the JSON dump from the new version.
## Downgrading
If for some reason you want to downgrade SFTPGo, you may need to downgrade your data provider schema and data as well. You can use the `revertprovider` command for this task.
As for upgrading, SFTPGo supports downgrading only from the current release branch to the previous one.
So, if you plan to downgrade from 2.0.x to 1.2.x, before uninstalling 2.0.x version, you can prepare your data provider executing the following command from the configuration directory:
```shell
sftpgo revertprovider --to-version 4
```
Take a look at the CLI usage to see the supported values for the `--to-version` argument and to learn how to specify a different configuration file:
```shell
sftpgo revertprovider --help
```
The `revertprovider` command is not supported for the memory provider.
Please note that we only support the current release branch and the current main branch; if you find a bug, it is better to report it than to downgrade to an older, unsupported version.
## Users and folders management
After starting SFTPGo you can manage users and folders using:
- the [web based administration interface](./docs/web-admin.md)
- the [REST API](./docs/rest-api.md)
To support embedded data providers like `bolt` and `SQLite`, we can't have a CLI that directly writes users and folders to the data provider; we always have to use the REST API.
Full details for users, folders, admins and other resources are documented in the [OpenAPI](/openapi/openapi.yaml) schema. If you want to render the schema without importing it manually, you can explore it on [Stoplight](https://sftpgo.stoplight.io/docs/sftpgo/openapi.yaml).
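For illustration, a minimal sketch of user creation via the REST API, assuming the default HTTP binding on `127.0.0.1:8080`, an existing admin named `admin` and the endpoint paths documented in the OpenAPI schema:

```bash
# Obtain a JWT for the admin user (the token endpoint uses HTTP Basic auth)
TOKEN=$(curl -s -u admin:password "http://127.0.0.1:8080/api/v2/token" | jq -r .access_token)

# Create a user with the obtained token; all field values are illustrative
curl -s -X POST "http://127.0.0.1:8080/api/v2/users" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"username":"user1","password":"secret","home_dir":"/srv/sftpgo/data/user1","status":1,"permissions":{"/":["*"]}}'
```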
## Tutorials
Some step-by-step tutorials can be found inside the source tree [howto](./docs/howto "How-to") directory.
## Authentication options
@@ -119,31 +213,52 @@ This authentication method is typically used for multi-factor authentication.
More information can be found [here](./docs/keyboard-interactive.md).
## Dynamic user modification
## Dynamic user creation or modification
The user configuration, retrieved from the data provider, can be modified by an external program. More information about this can be found [here](./docs/dynamic-user-mod.md).
A user can be created or modified by an external program just before the login. More information about this can be found [here](./docs/dynamic-user-mod.md).
## Custom Actions
SFTPGo allows you to configure custom commands and/or HTTP notifications on file upload, download, delete, rename, on SSH commands and on user add, update and delete.
SFTPGo allows you to configure custom commands and/or HTTP hooks to receive notifications about file uploads, deletions and several other events.
More information about custom actions can be found [here](./docs/custom-actions.md).
## Virtual folders
Directories outside the user home directory or based on a different storage provider can be exposed as virtual folders; more information [here](./docs/virtual-folders.md).
## Other hooks
You can get notified as soon as a new connection is established using the [Post-connect hook](./docs/post-connect-hook.md) and after each login using the [Post-login hook](./docs/post-login-hook.md).
You can use your own hook to [check passwords](./docs/check-password-hook.md).
## Storage backends
### S3 Compabible Object Storage backends
### S3 Compatible Object Storage backends
Each user can be mapped to whole bucket or to a bucket virtual folder. This way, the mapped bucket/virtual folder is exposed over SFTP/SCP. More information about S3 integration can be found [here](./docs/s3.md).
Each user can be mapped to the whole bucket or to a bucket virtual folder. This way, the mapped bucket/virtual folder is exposed over SFTP/SCP/FTP/WebDAV. More information about S3 integration can be found [here](./docs/s3.md).
### Google Cloud Storage backend
Each user can be mapped with a Google Cloud Storage bucket or a bucket virtual folder. This way, the mapped bucket/virtual folder is exposed over SFTP/SCP. More information about Google Cloud Storage integration can be found [here](./docs/google-cloud-storage.md).
Each user can be mapped with a Google Cloud Storage bucket or a bucket virtual folder. This way, the mapped bucket/virtual folder is exposed over SFTP/SCP/FTP/WebDAV. More information about Google Cloud Storage integration can be found [here](./docs/google-cloud-storage.md).
### Azure Blob Storage backend
Each user can be mapped with an Azure Blob Storage container or a container virtual folder. This way, the mapped container/virtual folder is exposed over SFTP/SCP/FTP/WebDAV. More information about Azure Blob Storage integration can be found [here](./docs/azure-blob-storage.md).
### SFTP backend
Each user can be mapped to another SFTP server account or a subfolder of it. More information can be found [here](./docs/sftpfs.md).
### Encrypted backend
Data at-rest encryption is supported via the [cryptfs backend](./docs/dare.md).
### Other Storage backends
Adding new storage backends is quite easy:
- implement the [Fs interface](./vfs/vfs.go#L18 "interface for filesystem backends").
- implement the [Fs interface](./vfs/vfs.go#L28 "interface for filesystem backends").
- update the user method `GetFilesystem` to return the new backend
- update the web interface and the REST API CLI
- add the flags for the new storage backend to the `portable` mode
@@ -154,6 +269,8 @@ Anyway, some backends require a pay per use account (or they offer free account
The [connection failed logs](./docs/logs.md) can be used for integration in tools such as [Fail2ban](http://www.fail2ban.org/). Example of [jails](./fail2ban/jails) and [filters](./fail2ban/filters) working with `systemd`/`journald` are available in fail2ban directory.
You can also use the built-in [defender](./docs/defender.md).
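As a rough sketch of a Fail2ban jail (illustrative only: the filter name, port and thresholds are assumptions; see the maintained examples in the fail2ban directory):

```bash
# Illustrative jail; adapt it to the filters shipped in the fail2ban/ directory
cat > /etc/fail2ban/jail.d/sftpgo.conf <<'EOF'
[sftpgo]
enabled  = true
port     = 2022
filter   = sftpgo
backend  = systemd
maxretry = 3
bantime  = 600
EOF
systemctl restart fail2ban
```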
## Account's configuration properties
Detailed information about account configuration properties can be found [here](./docs/account.md).
@@ -164,29 +281,26 @@ SFTPGo can easily saturate a Gigabit connection on low end hardware with no spec
More in-depth analysis of performance can be found [here](./docs/performance.md).
## Release Cadence
SFTPGo releases are feature-driven; we don't have a fixed time-based schedule. As a rough estimate, you can expect 1 or 2 new releases per year.
## Acknowledgements
- [pkg/sftp](https://github.com/pkg/sftp)
- [go-chi](https://github.com/go-chi/chi)
- [zerolog](https://github.com/rs/zerolog)
- [lumberjack](https://gopkg.in/natefinch/lumberjack.v2)
- [argon2id](https://github.com/alexedwards/argon2id)
- [go-sqlite3](https://github.com/mattn/go-sqlite3)
- [go-sql-driver/mysql](https://github.com/go-sql-driver/mysql)
- [bbolt](https://github.com/etcd-io/bbolt)
- [lib/pq](https://github.com/lib/pq)
- [viper](https://github.com/spf13/viper)
- [cobra](https://github.com/spf13/cobra)
- [xid](https://github.com/rs/xid)
- [nathanaelle/password](https://github.com/nathanaelle/password)
- [PipeAt](https://github.com/eikenb/pipeat)
- [ZeroConf](https://github.com/grandcat/zeroconf)
- [SB Admin 2](https://github.com/BlackrockDigital/startbootstrap-sb-admin-2)
- [shlex](https://github.com/google/shlex)
- [go-proxyproto](https://github.com/pires/go-proxyproto)
SFTPGo makes use of the third party libraries listed inside [go.mod](./go.mod).
Some code was initially taken from [Pterodactyl sftp server](https://github.com/pterodactyl/sftp-server)
We are very grateful to all the people who contributed with ideas and/or pull requests.
Thank you [ysura](https://www.ysura.com/) for granting me stable access to a test AWS S3 account.
## Sponsors
I'd like to make SFTPGo into a sustainable long-term project and your [sponsorship](https://github.com/sponsors/drakkan) will really help :heart:
Thank you to our sponsors!
[<img src="https://www.7digital.com/wp-content/themes/sevendigital/images/top_logo.png" alt="7digital logo">](https://www.7digital.com/)
## License
GNU GPLv3
GNU AGPLv3

12
SECURITY.md Normal file
View File

@@ -0,0 +1,12 @@
# Security Policy
## Supported Versions
Only the current release of the software is actively supported. If you need
help backporting fixes into an older release, feel free to ask.
## Reporting a Vulnerability
Email your vulnerability information to SFTPGo's maintainer:
Nicola Murino <nicola.murino@gmail.com>

12
cmd/gen.go Normal file
View File

@@ -0,0 +1,12 @@
package cmd
import "github.com/spf13/cobra"
var genCmd = &cobra.Command{
Use: "gen",
Short: "A collection of useful generators",
}
func init() {
rootCmd.AddCommand(genCmd)
}

119
cmd/gencompletion.go Normal file
View File

@@ -0,0 +1,119 @@
package cmd
import (
"os"
"github.com/spf13/cobra"
)
var genCompletionCmd = &cobra.Command{
Use: "completion [bash|zsh|fish|powershell]",
Short: "Generate the autocompletion script for the specified shell",
Long: `Generate the autocompletion script for sftpgo for the specified shell.
See each sub-command's help for details on how to use the generated script.
`,
}
var genCompletionBashCmd = &cobra.Command{
Use: "bash",
Short: "Generate the autocompletion script for bash",
Long: `Generate the autocompletion script for the bash shell.
This script depends on the 'bash-completion' package.
If it is not installed already, you can install it via your OS's package
manager.
To load completions in your current shell session:
$ source <(sftpgo gen completion bash)
To load completions for every new session, execute once:
Linux:
$ sudo sftpgo gen completion bash > /usr/share/bash-completion/completions/sftpgo
MacOS:
$ sudo sftpgo gen completion bash > /usr/local/etc/bash_completion.d/sftpgo
You will need to start a new shell for this setup to take effect.
`,
DisableFlagsInUseLine: true,
RunE: func(cmd *cobra.Command, args []string) error {
return cmd.Root().GenBashCompletionV2(os.Stdout, true)
},
}
var genCompletionZshCmd = &cobra.Command{
Use: "zsh",
Short: "Generate the autocompletion script for zsh",
Long: `Generate the autocompletion script for the zsh shell.
If shell completion is not already enabled in your environment you will need
to enable it. You can execute the following once:
$ echo "autoload -U compinit; compinit" >> ~/.zshrc
To load completions for every new session, execute once:
Linux:
$ sftpgo gen completion zsh > "${fpath[1]}/_sftpgo"
macOS:
$ sudo sftpgo gen completion zsh > /usr/local/share/zsh/site-functions/_sftpgo
You will need to start a new shell for this setup to take effect.
`,
DisableFlagsInUseLine: true,
RunE: func(cmd *cobra.Command, args []string) error {
return cmd.Root().GenZshCompletion(os.Stdout)
},
}
var genCompletionFishCmd = &cobra.Command{
Use: "fish",
Short: "Generate the autocompletion script for fish",
Long: `Generate the autocompletion script for the fish shell.
To load completions in your current shell session:
$ sftpgo gen completion fish | source
To load completions for every new session, execute once:
$ sftpgo gen completion fish > ~/.config/fish/completions/sftpgo.fish
You will need to start a new shell for this setup to take effect.
`,
DisableFlagsInUseLine: true,
RunE: func(cmd *cobra.Command, args []string) error {
return cmd.Root().GenFishCompletion(os.Stdout, true)
},
}
var genCompletionPowerShellCmd = &cobra.Command{
Use: "powershell",
Short: "Generate the autocompletion script for powershell",
Long: `Generate the autocompletion script for powershell.
To load completions in your current shell session:
PS C:\> sftpgo gen completion powershell | Out-String | Invoke-Expression
To load completions for every new session, add the output of the above command
to your powershell profile.
`,
DisableFlagsInUseLine: true,
RunE: func(cmd *cobra.Command, args []string) error {
return cmd.Root().GenPowerShellCompletionWithDesc(os.Stdout)
},
}
func init() {
genCompletionCmd.AddCommand(genCompletionBashCmd)
genCompletionCmd.AddCommand(genCompletionZshCmd)
genCompletionCmd.AddCommand(genCompletionFishCmd)
genCompletionCmd.AddCommand(genCompletionPowerShellCmd)
genCmd.AddCommand(genCompletionCmd)
}

53
cmd/genman.go Normal file
View File

@@ -0,0 +1,53 @@
package cmd
import (
"fmt"
"os"
"github.com/rs/zerolog"
"github.com/spf13/cobra"
"github.com/spf13/cobra/doc"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/version"
)
var (
manDir string
genManCmd = &cobra.Command{
Use: "man",
Short: "Generate man pages for sftpgo",
Long: `This command automatically generates up-to-date man pages of SFTPGo's
command-line interface.
By default, it creates the man page files in the "man" directory under the
current directory.
`,
Run: func(cmd *cobra.Command, args []string) {
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
if _, err := os.Stat(manDir); os.IsNotExist(err) {
err = os.MkdirAll(manDir, os.ModePerm)
if err != nil {
logger.WarnToConsole("Unable to generate man page files: %v", err)
os.Exit(1)
}
}
header := &doc.GenManHeader{
Section: "1",
Manual: "SFTPGo Manual",
Source: fmt.Sprintf("SFTPGo %v", version.Get().Version),
}
cmd.Root().DisableAutoGenTag = true
err := doc.GenManTree(cmd.Root(), header, manDir)
if err != nil {
logger.WarnToConsole("Unable to generate man page files: %v", err)
os.Exit(1)
}
},
}
)
func init() {
genManCmd.Flags().StringVarP(&manDir, "dir", "d", "man", "The directory to write the man pages")
genCmd.AddCommand(genManCmd)
}
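A usage sketch for the command above (the output directory is created if missing; `man` under the current directory is the default):

```bash
sftpgo gen man -d /tmp/sftpgo-man
man /tmp/sftpgo-man/sftpgo.1
```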

View File

@@ -1,44 +1,64 @@
package cmd
import (
"github.com/drakkan/sftpgo/config"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
"os"
"github.com/rs/zerolog"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/drakkan/sftpgo/v2/config"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
var (
initProviderCmd = &cobra.Command{
Use: "initprovider",
Short: "Initializes the configured data provider",
Long: `This command reads the data provider connection details from the specified configuration file and creates the initial structure.
Short: "Initialize and/or updates the configured data provider",
Long: `This command reads the data provider connection details from the specified
configuration file and creates the initial structure or updates the existing one,
as needed.
Some data providers such as bolt and memory do not require an initialization.
Some data providers such as bolt and memory do not require an initialization
but they could require an update to the existing data after upgrading SFTPGo.
For SQLite provider the database file will be auto created if missing.
For SQLite/bolt providers the database file will be auto-created if missing.
For PostgreSQL and MySQL providers you need to create the configured database; this command will create the required tables.
For PostgreSQL and MySQL providers you need to create the configured database;
this command will create/update the required tables as needed.
To initialize the data provider from the configuration directory simply use:
To initialize/update the data provider from the configuration directory simply use:
sftpgo initprovider
$ sftpgo initprovider
Please take a look at the usage below to customize the options.`,
Run: func(cmd *cobra.Command, args []string) {
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
configDir = utils.CleanDirInput(configDir)
config.LoadConfig(configDir, configFile)
configDir = util.CleanDirInput(configDir)
err := config.LoadConfig(configDir, configFile)
if err != nil {
logger.WarnToConsole("Unable to initialize data provider, config load error: %v", err)
return
}
kmsConfig := config.GetKMSConfig()
err = kmsConfig.Initialize()
if err != nil {
logger.ErrorToConsole("unable to initialize KMS: %v", err)
os.Exit(1)
}
providerConf := config.GetProviderConf()
logger.DebugToConsole("Initializing provider: %#v config file: %#v", providerConf.Driver, viper.ConfigFileUsed())
err := dataprovider.InitializeDatabase(providerConf, configDir)
logger.InfoToConsole("Initializing provider: %#v config file: %#v", providerConf.Driver, viper.ConfigFileUsed())
err = dataprovider.InitializeDatabase(providerConf, configDir)
if err == nil {
logger.DebugToConsole("Data provider successfully initialized")
logger.InfoToConsole("Data provider successfully initialized/updated")
} else if err == dataprovider.ErrNoInitRequired {
logger.InfoToConsole("%v", err.Error())
} else {
logger.WarnToConsole("Unable to initialize data provider: %v", err)
logger.WarnToConsole("Unable to initialize/update the data provider: %v", err)
os.Exit(1)
}
},
}

View File

@@ -2,24 +2,28 @@ package cmd
import (
"fmt"
"os"
"strconv"
"github.com/drakkan/sftpgo/service"
"github.com/drakkan/sftpgo/utils"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/service"
"github.com/drakkan/sftpgo/v2/util"
)
var (
installCmd = &cobra.Command{
Use: "install",
Short: "Install SFTPGo as Windows Service",
Long: `To install the SFTPGo Windows Service with the default values for the command line flags simply use:
Long: `To install the SFTPGo Windows Service with the default values for the command
line flags simply use:
sftpgo service install
Please take a look at the usage below to customize the startup options`,
Run: func(cmd *cobra.Command, args []string) {
s := service.Service{
ConfigDir: utils.CleanDirInput(configDir),
ConfigDir: util.CleanDirInput(configDir),
ConfigFile: configFile,
LogFilePath: logFilePath,
LogMaxSize: logMaxSize,
@@ -27,6 +31,7 @@ Please take a look at the usage below to customize the startup options`,
LogMaxAge: logMaxAge,
LogCompress: logCompress,
LogVerbose: logVerbose,
LogUTCTime: logUTCTime,
Shutdown: make(chan bool),
}
winService := service.WindowsService{
@@ -40,6 +45,7 @@ Please take a look at the usage below to customize the startup options`,
err := winService.Install(serviceArgs...)
if err != nil {
fmt.Printf("Error installing service: %v\r\n", err)
os.Exit(1)
} else {
fmt.Printf("Service installed!\r\n")
}
@@ -51,3 +57,42 @@ func init() {
serviceCmd.AddCommand(installCmd)
addServeFlags(installCmd)
}
// getCustomServeFlags returns, in command line form, the serve flags whose
// values differ from their defaults, so the installed Windows service starts
// with the same options passed to the install command.
func getCustomServeFlags() []string {
result := []string{}
if configDir != defaultConfigDir {
configDir = util.CleanDirInput(configDir)
result = append(result, "--"+configDirFlag)
result = append(result, configDir)
}
if configFile != defaultConfigFile {
result = append(result, "--"+configFileFlag)
result = append(result, configFile)
}
if logFilePath != defaultLogFile {
result = append(result, "--"+logFilePathFlag)
result = append(result, logFilePath)
}
if logMaxSize != defaultLogMaxSize {
result = append(result, "--"+logMaxSizeFlag)
result = append(result, strconv.Itoa(logMaxSize))
}
if logMaxBackups != defaultLogMaxBackup {
result = append(result, "--"+logMaxBackupFlag)
result = append(result, strconv.Itoa(logMaxBackups))
}
if logMaxAge != defaultLogMaxAge {
result = append(result, "--"+logMaxAgeFlag)
result = append(result, strconv.Itoa(logMaxAge))
}
if logVerbose != defaultLogVerbose {
result = append(result, "--"+logVerboseFlag+"=false")
}
if logUTCTime != defaultLogUTCTime {
result = append(result, "--"+logUTCTimeFlag+"=true")
}
if logCompress != defaultLogCompress {
result = append(result, "--"+logCompressFlag+"=true")
}
return result
}
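For illustration, installing the service with a few non-default options (flag names as registered by the serve flags; the log path is illustrative) results in a service that starts with exactly these arguments:

```bash
sftpgo.exe service install --log-file-path "C:\ProgramData\SFTPGo\logs\sftpgo.log" --log-verbose=false
```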

View File

@@ -1,59 +1,98 @@
//go:build !noportable
// +build !noportable
package cmd
import (
"encoding/base64"
"fmt"
"io/ioutil"
"os"
"path"
"path/filepath"
"strings"
"github.com/drakkan/sftpgo/dataprovider"
"github.com/drakkan/sftpgo/service"
"github.com/drakkan/sftpgo/sftpd"
"github.com/drakkan/sftpgo/vfs"
"github.com/sftpgo/sdk"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/common"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/service"
"github.com/drakkan/sftpgo/v2/sftpd"
"github.com/drakkan/sftpgo/v2/version"
"github.com/drakkan/sftpgo/v2/vfs"
)
var (
directoryToServe string
portableSFTPDPort int
portableAdvertiseService bool
portableAdvertiseCredentials bool
portableUsername string
portablePassword string
portableLogFile string
portablePublicKeys []string
portablePermissions []string
portableSSHCommands []string
portableAllowedExtensions []string
portableDeniedExtensions []string
portableFsProvider int
portableS3Bucket string
portableS3Region string
portableS3AccessKey string
portableS3AccessSecret string
portableS3Endpoint string
portableS3StorageClass string
portableS3KeyPrefix string
portableGCSBucket string
portableGCSCredentialsFile string
portableGCSAutoCredentials int
portableGCSStorageClass string
portableGCSKeyPrefix string
portableCmd = &cobra.Command{
directoryToServe string
portableSFTPDPort int
portableAdvertiseService bool
portableAdvertiseCredentials bool
portableUsername string
portablePassword string
portableLogFile string
portableLogVerbose bool
portableLogUTCTime bool
portablePublicKeys []string
portablePermissions []string
portableSSHCommands []string
portableAllowedPatterns []string
portableDeniedPatterns []string
portableFsProvider string
portableS3Bucket string
portableS3Region string
portableS3AccessKey string
portableS3AccessSecret string
portableS3Endpoint string
portableS3StorageClass string
portableS3ACL string
portableS3KeyPrefix string
portableS3ULPartSize int
portableS3ULConcurrency int
portableS3ForcePathStyle bool
portableGCSBucket string
portableGCSCredentialsFile string
portableGCSAutoCredentials int
portableGCSStorageClass string
portableGCSKeyPrefix string
portableFTPDPort int
portableFTPSCert string
portableFTPSKey string
portableWebDAVPort int
portableWebDAVCert string
portableWebDAVKey string
portableAzContainer string
portableAzAccountName string
portableAzAccountKey string
portableAzEndpoint string
portableAzAccessTier string
portableAzSASURL string
portableAzKeyPrefix string
portableAzULPartSize int
portableAzULConcurrency int
portableAzUseEmulator bool
portableCryptPassphrase string
portableSFTPEndpoint string
portableSFTPUsername string
portableSFTPPassword string
portableSFTPPrivateKeyPath string
portableSFTPFingerprints []string
portableSFTPPrefix string
portableSFTPDisableConcurrentReads bool
portableSFTPDBufferSize int64
portableCmd = &cobra.Command{
Use: "portable",
Short: "Serve a single directory",
Long: `To serve the current working directory with auto generated credentials simply use:
Short: "Serve a single directory/account",
Long: `To serve the current working directory with auto generated credentials simply
use:
sftpgo portable
$ sftpgo portable
Please take a look at the usage below to customize the serving parameters`,
Run: func(cmd *cobra.Command, args []string) {
portableDir := directoryToServe
fsProvider := sdk.GetProviderByName(portableFsProvider)
if !filepath.IsAbs(portableDir) {
if portableFsProvider == 0 {
if fsProvider == sdk.LocalFilesystemProvider {
portableDir, _ = filepath.Abs(portableDir)
} else {
portableDir = os.TempDir()
@@ -62,150 +101,299 @@ Please take a look at the usage below to customize the serving parameters`,
permissions := make(map[string][]string)
permissions["/"] = portablePermissions
portableGCSCredentials := ""
if portableFsProvider == 2 && len(portableGCSCredentialsFile) > 0 {
fi, err := os.Stat(portableGCSCredentialsFile)
if fsProvider == sdk.GCSFilesystemProvider && portableGCSCredentialsFile != "" {
contents, err := getFileContents(portableGCSCredentialsFile)
if err != nil {
fmt.Printf("Invalid GCS credentials file: %v\n", err)
return
fmt.Printf("Unable to get GCS credentials: %v\n", err)
os.Exit(1)
}
if fi.Size() > 1048576 {
fmt.Printf("Invalid GCS credentials file: %#v is too big %v/1048576 bytes\n", portableGCSCredentialsFile,
fi.Size())
return
}
creds, err := ioutil.ReadFile(portableGCSCredentialsFile)
if err != nil {
fmt.Printf("Unable to read credentials file: %v\n", err)
}
portableGCSCredentials = base64.StdEncoding.EncodeToString(creds)
portableGCSCredentials = contents
portableGCSAutoCredentials = 0
}
portableSFTPPrivateKey := ""
if fsProvider == sdk.SFTPFilesystemProvider && portableSFTPPrivateKeyPath != "" {
contents, err := getFileContents(portableSFTPPrivateKeyPath)
if err != nil {
fmt.Printf("Unable to get SFTP private key: %v\n", err)
os.Exit(1)
}
portableSFTPPrivateKey = contents
}
if portableFTPDPort >= 0 && len(portableFTPSCert) > 0 && len(portableFTPSKey) > 0 {
_, err := common.NewCertManager(portableFTPSCert, portableFTPSKey, filepath.Clean(defaultConfigDir),
"FTP portable")
if err != nil {
fmt.Printf("Unable to load FTPS key pair, cert file %#v key file %#v error: %v\n",
portableFTPSCert, portableFTPSKey, err)
os.Exit(1)
}
}
if portableWebDAVPort > 0 && len(portableWebDAVCert) > 0 && len(portableWebDAVKey) > 0 {
_, err := common.NewCertManager(portableWebDAVCert, portableWebDAVKey, filepath.Clean(defaultConfigDir),
"WebDAV portable")
if err != nil {
fmt.Printf("Unable to load WebDAV key pair, cert file %#v key file %#v error: %v\n",
portableWebDAVCert, portableWebDAVKey, err)
os.Exit(1)
}
}
service := service.Service{
ConfigDir: filepath.Clean(defaultConfigDir),
ConfigFile: defaultConfigName,
ConfigFile: defaultConfigFile,
LogFilePath: portableLogFile,
LogMaxSize: defaultLogMaxSize,
LogMaxBackups: defaultLogMaxBackup,
LogMaxAge: defaultLogMaxAge,
LogCompress: defaultLogCompress,
LogVerbose: defaultLogVerbose,
LogVerbose: portableLogVerbose,
LogUTCTime: portableLogUTCTime,
Shutdown: make(chan bool),
PortableMode: 1,
PortableUser: dataprovider.User{
Username: portableUsername,
Password: portablePassword,
PublicKeys: portablePublicKeys,
Permissions: permissions,
HomeDir: portableDir,
Status: 1,
FsConfig: dataprovider.Filesystem{
Provider: portableFsProvider,
S3Config: vfs.S3FsConfig{
Bucket: portableS3Bucket,
Region: portableS3Region,
AccessKey: portableS3AccessKey,
AccessSecret: portableS3AccessSecret,
Endpoint: portableS3Endpoint,
StorageClass: portableS3StorageClass,
KeyPrefix: portableS3KeyPrefix,
},
GCSConfig: vfs.GCSFsConfig{
Bucket: portableGCSBucket,
Credentials: portableGCSCredentials,
AutomaticCredentials: portableGCSAutoCredentials,
StorageClass: portableGCSStorageClass,
KeyPrefix: portableGCSKeyPrefix,
},
BaseUser: sdk.BaseUser{
Username: portableUsername,
Password: portablePassword,
PublicKeys: portablePublicKeys,
Permissions: permissions,
HomeDir: portableDir,
Status: 1,
},
Filters: dataprovider.UserFilters{
FileExtensions: parseFileExtensionsFilters(),
BaseUserFilters: sdk.BaseUserFilters{
FilePatterns: parsePatternsFilesFilters(),
},
},
FsConfig: vfs.Filesystem{
Provider: sdk.GetProviderByName(portableFsProvider),
S3Config: vfs.S3FsConfig{
BaseS3FsConfig: sdk.BaseS3FsConfig{
Bucket: portableS3Bucket,
Region: portableS3Region,
AccessKey: portableS3AccessKey,
Endpoint: portableS3Endpoint,
StorageClass: portableS3StorageClass,
ACL: portableS3ACL,
KeyPrefix: portableS3KeyPrefix,
UploadPartSize: int64(portableS3ULPartSize),
UploadConcurrency: portableS3ULConcurrency,
ForcePathStyle: portableS3ForcePathStyle,
},
AccessSecret: kms.NewPlainSecret(portableS3AccessSecret),
},
GCSConfig: vfs.GCSFsConfig{
BaseGCSFsConfig: sdk.BaseGCSFsConfig{
Bucket: portableGCSBucket,
AutomaticCredentials: portableGCSAutoCredentials,
StorageClass: portableGCSStorageClass,
KeyPrefix: portableGCSKeyPrefix,
},
Credentials: kms.NewPlainSecret(portableGCSCredentials),
},
AzBlobConfig: vfs.AzBlobFsConfig{
BaseAzBlobFsConfig: sdk.BaseAzBlobFsConfig{
Container: portableAzContainer,
AccountName: portableAzAccountName,
Endpoint: portableAzEndpoint,
AccessTier: portableAzAccessTier,
KeyPrefix: portableAzKeyPrefix,
UseEmulator: portableAzUseEmulator,
UploadPartSize: int64(portableAzULPartSize),
UploadConcurrency: portableAzULConcurrency,
},
AccountKey: kms.NewPlainSecret(portableAzAccountKey),
SASURL: kms.NewPlainSecret(portableAzSASURL),
},
CryptConfig: vfs.CryptFsConfig{
Passphrase: kms.NewPlainSecret(portableCryptPassphrase),
},
SFTPConfig: vfs.SFTPFsConfig{
BaseSFTPFsConfig: sdk.BaseSFTPFsConfig{
Endpoint: portableSFTPEndpoint,
Username: portableSFTPUsername,
Fingerprints: portableSFTPFingerprints,
Prefix: portableSFTPPrefix,
DisableCouncurrentReads: portableSFTPDisableConcurrentReads,
BufferSize: portableSFTPDBufferSize,
},
Password: kms.NewPlainSecret(portableSFTPPassword),
PrivateKey: kms.NewPlainSecret(portableSFTPPrivateKey),
},
},
},
}
if err := service.StartPortableMode(portableSFTPDPort, portableSSHCommands, portableAdvertiseService,
portableAdvertiseCredentials); err == nil {
if err := service.StartPortableMode(portableSFTPDPort, portableFTPDPort, portableWebDAVPort, portableSSHCommands, portableAdvertiseService,
portableAdvertiseCredentials, portableFTPSCert, portableFTPSKey, portableWebDAVCert, portableWebDAVKey); err == nil {
service.Wait()
if service.Error == nil {
os.Exit(0)
}
}
os.Exit(1)
},
}
)
func init() {
portableCmd.Flags().StringVarP(&directoryToServe, "directory", "d", ".",
"Path to the directory to serve. This can be an absolute path or a path relative to the current directory")
portableCmd.Flags().IntVarP(&portableSFTPDPort, "sftpd-port", "s", 0, "0 means a random non privileged port")
version.AddFeature("+portable")
portableCmd.Flags().StringVarP(&directoryToServe, "directory", "d", ".", `Path to the directory to serve.
This can be an absolute path or a path
relative to the current directory
`)
portableCmd.Flags().IntVarP(&portableSFTPDPort, "sftpd-port", "s", 0, `0 means a random unprivileged port,
< 0 disabled`)
portableCmd.Flags().IntVar(&portableFTPDPort, "ftpd-port", -1, `0 means a random unprivileged port,
< 0 disabled`)
portableCmd.Flags().IntVar(&portableWebDAVPort, "webdav-port", -1, `0 means a random unprivileged port,
< 0 disabled`)
portableCmd.Flags().StringSliceVarP(&portableSSHCommands, "ssh-commands", "c", sftpd.GetDefaultSSHCommands(),
"SSH commands to enable. \"*\" means any supported SSH command including scp")
portableCmd.Flags().StringVarP(&portableUsername, "username", "u", "", "Leave empty to use an auto generated value")
portableCmd.Flags().StringVarP(&portablePassword, "password", "p", "", "Leave empty to use an auto generated value")
`SSH commands to enable.
"*" means any supported SSH command
including scp
`)
portableCmd.Flags().StringVarP(&portableUsername, "username", "u", "", `Leave empty to use an auto generated
value`)
portableCmd.Flags().StringVarP(&portablePassword, "password", "p", "", `Leave empty to use an auto generated
value`)
portableCmd.Flags().StringVarP(&portableLogFile, logFilePathFlag, "l", "", "Leave empty to disable logging")
portableCmd.Flags().BoolVarP(&portableLogVerbose, logVerboseFlag, "v", false, "Enable verbose logs")
portableCmd.Flags().BoolVar(&portableLogUTCTime, logUTCTimeFlag, false, "Use UTC time for logging")
portableCmd.Flags().StringSliceVarP(&portablePublicKeys, "public-key", "k", []string{}, "")
portableCmd.Flags().StringSliceVarP(&portablePermissions, "permissions", "g", []string{"list", "download"},
"User's permissions. \"*\" means any permission")
portableCmd.Flags().StringArrayVar(&portableAllowedExtensions, "allowed-extensions", []string{},
"Allowed file extensions case insensitive. The format is /dir::ext1,ext2. For example: \"/somedir::.jpg,.png\"")
portableCmd.Flags().StringArrayVar(&portableDeniedExtensions, "denied-extensions", []string{},
"Denied file extensions case insensitive. The format is /dir::ext1,ext2. For example: \"/somedir::.jpg,.png\"")
portableCmd.Flags().BoolVarP(&portableAdvertiseService, "advertise-service", "S", true,
"Advertise SFTP service using multicast DNS")
`User's permissions. "*" means any
permission`)
portableCmd.Flags().StringArrayVar(&portableAllowedPatterns, "allowed-patterns", []string{},
`Allowed file patterns case insensitive.
The format is:
/dir::pattern1,pattern2.
For example: "/somedir::*.jpg,a*b?.png"`)
portableCmd.Flags().StringArrayVar(&portableDeniedPatterns, "denied-patterns", []string{},
`Denied file patterns case insensitive.
The format is:
/dir::pattern1,pattern2.
For example: "/somedir::*.jpg,a*b?.png"`)
portableCmd.Flags().BoolVarP(&portableAdvertiseService, "advertise-service", "S", false,
`Advertise configured services using
multicast DNS`)
portableCmd.Flags().BoolVarP(&portableAdvertiseCredentials, "advertise-credentials", "C", false,
"If the SFTP service is advertised via multicast DNS, this flag allows to put username/password inside the advertised TXT record")
portableCmd.Flags().IntVarP(&portableFsProvider, "fs-provider", "f", 0, "0 means local filesystem, 1 Amazon S3 compatible, "+
"2 Google Cloud Storage")
`If the SFTP/FTP service is
advertised via multicast DNS, this
flag allows to put username/password
inside the advertised TXT record`)
portableCmd.Flags().StringVarP(&portableFsProvider, "fs-provider", "f", "osfs", `osfs => local filesystem (legacy value: 0)
s3fs => AWS S3 compatible (legacy: 1)
gcsfs => Google Cloud Storage (legacy: 2)
azblobfs => Azure Blob Storage (legacy: 3)
cryptfs => Encrypted local filesystem (legacy: 4)
sftpfs => SFTP (legacy: 5)`)
portableCmd.Flags().StringVar(&portableS3Bucket, "s3-bucket", "", "")
portableCmd.Flags().StringVar(&portableS3Region, "s3-region", "", "")
portableCmd.Flags().StringVar(&portableS3AccessKey, "s3-access-key", "", "")
portableCmd.Flags().StringVar(&portableS3AccessSecret, "s3-access-secret", "", "")
portableCmd.Flags().StringVar(&portableS3Endpoint, "s3-endpoint", "", "")
portableCmd.Flags().StringVar(&portableS3StorageClass, "s3-storage-class", "", "")
portableCmd.Flags().StringVar(&portableS3KeyPrefix, "s3-key-prefix", "", "Allows to restrict access to the virtual folder "+
"identified by this prefix and its contents")
portableCmd.Flags().StringVar(&portableS3ACL, "s3-acl", "", "")
portableCmd.Flags().StringVar(&portableS3KeyPrefix, "s3-key-prefix", "", `Allows to restrict access to the
virtual folder identified by this
prefix and its contents`)
portableCmd.Flags().IntVar(&portableS3ULPartSize, "s3-upload-part-size", 5, `The buffer size for multipart uploads
(MB)`)
portableCmd.Flags().IntVar(&portableS3ULConcurrency, "s3-upload-concurrency", 2, `How many parts are uploaded in
parallel`)
portableCmd.Flags().BoolVar(&portableS3ForcePathStyle, "s3-force-path-style", false, `Force path style bucket URL`)
portableCmd.Flags().StringVar(&portableGCSBucket, "gcs-bucket", "", "")
portableCmd.Flags().StringVar(&portableGCSStorageClass, "gcs-storage-class", "", "")
portableCmd.Flags().StringVar(&portableGCSKeyPrefix, "gcs-key-prefix", "", "Allows to restrict access to the virtual folder "+
"identified by this prefix and its contents")
portableCmd.Flags().StringVar(&portableGCSCredentialsFile, "gcs-credentials-file", "", "Google Cloud Storage JSON credentials file")
portableCmd.Flags().IntVar(&portableGCSAutoCredentials, "gcs-automatic-credentials", 1, "0 means explicit credentials using a JSON "+
"credentials file, 1 automatic")
portableCmd.Flags().StringVar(&portableGCSKeyPrefix, "gcs-key-prefix", "", `Allows to restrict access to the
virtual folder identified by this
prefix and its contents`)
portableCmd.Flags().StringVar(&portableGCSCredentialsFile, "gcs-credentials-file", "", `Google Cloud Storage JSON credentials
file`)
portableCmd.Flags().IntVar(&portableGCSAutoCredentials, "gcs-automatic-credentials", 1, `0 means explicit credentials using
a JSON credentials file, 1 automatic
`)
portableCmd.Flags().StringVar(&portableFTPSCert, "ftpd-cert", "", "Path to the certificate file for FTPS")
portableCmd.Flags().StringVar(&portableFTPSKey, "ftpd-key", "", "Path to the key file for FTPS")
portableCmd.Flags().StringVar(&portableWebDAVCert, "webdav-cert", "", `Path to the certificate file for WebDAV
over HTTPS`)
portableCmd.Flags().StringVar(&portableWebDAVKey, "webdav-key", "", `Path to the key file for WebDAV over
HTTPS`)
portableCmd.Flags().StringVar(&portableAzContainer, "az-container", "", "")
portableCmd.Flags().StringVar(&portableAzAccountName, "az-account-name", "", "")
portableCmd.Flags().StringVar(&portableAzAccountKey, "az-account-key", "", "")
portableCmd.Flags().StringVar(&portableAzSASURL, "az-sas-url", "", `Shared access signature URL`)
portableCmd.Flags().StringVar(&portableAzEndpoint, "az-endpoint", "", `Leave empty to use the default:
"blob.core.windows.net"`)
portableCmd.Flags().StringVar(&portableAzAccessTier, "az-access-tier", "", `Leave empty to use the default
container setting`)
portableCmd.Flags().StringVar(&portableAzKeyPrefix, "az-key-prefix", "", `Allows to restrict access to the
virtual folder identified by this
prefix and its contents`)
portableCmd.Flags().IntVar(&portableAzULPartSize, "az-upload-part-size", 4, `The buffer size for multipart uploads
(MB)`)
portableCmd.Flags().IntVar(&portableAzULConcurrency, "az-upload-concurrency", 2, `How many parts are uploaded in
parallel`)
portableCmd.Flags().BoolVar(&portableAzUseEmulator, "az-use-emulator", false, "")
portableCmd.Flags().StringVar(&portableCryptPassphrase, "crypto-passphrase", "", `Passphrase for encryption/decryption`)
portableCmd.Flags().StringVar(&portableSFTPEndpoint, "sftp-endpoint", "", `SFTP endpoint as host:port for SFTP
provider`)
portableCmd.Flags().StringVar(&portableSFTPUsername, "sftp-username", "", `SFTP user for SFTP provider`)
portableCmd.Flags().StringVar(&portableSFTPPassword, "sftp-password", "", `SFTP password for SFTP provider`)
portableCmd.Flags().StringVar(&portableSFTPPrivateKeyPath, "sftp-key-path", "", `SFTP private key path for SFTP provider`)
portableCmd.Flags().StringSliceVar(&portableSFTPFingerprints, "sftp-fingerprints", []string{}, `SFTP fingerprints to verify remote host
key for SFTP provider`)
portableCmd.Flags().StringVar(&portableSFTPPrefix, "sftp-prefix", "", `SFTP prefix allows restrict all
operations to a given path within the
remote SFTP server`)
portableCmd.Flags().BoolVar(&portableSFTPDisableConcurrentReads, "sftp-disable-concurrent-reads", false, `Concurrent reads are safe to use and
disabling them will degrade performance.
Disable them for read-once servers`)
portableCmd.Flags().Int64Var(&portableSFTPDBufferSize, "sftp-buffer-size", 0, `The size of the buffer (in MB) to use
for transfers. By enabling buffering,
reads and writes from/to the remote
SFTP server are split into multiple
concurrent requests, allowing data to
be transferred at a faster rate over
high-latency networks by overlapping
round-trip times`)
rootCmd.AddCommand(portableCmd)
}
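As a quick usage sketch of the new string-based --fs-provider values registered above, a portable instance backed by S3 could be started like this (bucket, region and credentials are placeholders, not taken from this diff):

$ sftpgo portable -f s3fs \
    --s3-bucket my-bucket \
    --s3-region us-east-1 \
    --s3-access-key KEY \
    --s3-access-secret SECRET \
    --denied-patterns "/::*.exe"

The help text above suggests the legacy numeric values (e.g. -f 1) are still accepted.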
func parseFileExtensionsFilters() []dataprovider.ExtensionsFilter {
var extensions []dataprovider.ExtensionsFilter
for _, val := range portableAllowedExtensions {
p, exts := getExtensionsFilterValues(strings.TrimSpace(val))
if len(p) > 0 {
extensions = append(extensions, dataprovider.ExtensionsFilter{
Path: path.Clean(p),
AllowedExtensions: exts,
DeniedExtensions: []string{},
func parsePatternsFilesFilters() []sdk.PatternsFilter {
var patterns []sdk.PatternsFilter
for _, val := range portableAllowedPatterns {
p, exts := getPatternsFilterValues(strings.TrimSpace(val))
if p != "" {
patterns = append(patterns, sdk.PatternsFilter{
Path: path.Clean(p),
AllowedPatterns: exts,
DeniedPatterns: []string{},
})
}
}
for _, val := range portableDeniedExtensions {
p, exts := getExtensionsFilterValues(strings.TrimSpace(val))
if len(p) > 0 {
for _, val := range portableDeniedPatterns {
p, exts := getPatternsFilterValues(strings.TrimSpace(val))
if p != "" {
found := false
for index, e := range extensions {
for index, e := range patterns {
if path.Clean(e.Path) == path.Clean(p) {
extensions[index].DeniedExtensions = append(extensions[index].DeniedExtensions, exts...)
patterns[index].DeniedPatterns = append(patterns[index].DeniedPatterns, exts...)
found = true
break
}
}
if !found {
extensions = append(extensions, dataprovider.ExtensionsFilter{
Path: path.Clean(p),
AllowedExtensions: []string{},
DeniedExtensions: exts,
patterns = append(patterns, sdk.PatternsFilter{
Path: path.Clean(p),
AllowedPatterns: []string{},
DeniedPatterns: exts,
})
}
}
}
return extensions
return patterns
}
func getExtensionsFilterValues(value string) (string, []string) {
func getPatternsFilterValues(value string) (string, []string) {
if strings.Contains(value, "::") {
dirExts := strings.Split(value, "::")
if len(dirExts) > 1 {
@@ -213,14 +401,29 @@ func getExtensionsFilterValues(value string) (string, []string) {
exts := []string{}
for _, e := range strings.Split(dirExts[1], ",") {
cleanedExt := strings.TrimSpace(e)
if len(cleanedExt) > 0 {
if cleanedExt != "" {
exts = append(exts, cleanedExt)
}
}
if len(dir) > 0 && len(exts) > 0 {
if dir != "" && len(exts) > 0 {
return dir, exts
}
}
}
return "", nil
}
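To make the /dir::patterns syntax concrete, here is a hypothetical call to getPatternsFilterValues from inside the cmd package (illustration only, not part of the diff):

dir, patterns := getPatternsFilterValues("/somedir::*.jpg,a*b?.png")
// dir == "/somedir"
// patterns == []string{"*.jpg", "a*b?.png"}
// malformed input without "::" yields ("", nil) and is skipped by the callers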
func getFileContents(name string) (string, error) {
fi, err := os.Stat(name)
if err != nil {
return "", err
}
if fi.Size() > 1048576 {
return "", fmt.Errorf("%#v is too big %v/1048576 bytes", name, fi.Size())
}
contents, err := os.ReadFile(name)
if err != nil {
return "", err
}
return string(contents), nil
}
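getFileContents guards against reading oversized files into memory; a hedged sketch of a use, loading the --sftp-key-path private key (the call site and error handling are illustrative assumptions):

key, err := getFileContents(portableSFTPPrivateKeyPath)
if err != nil {
	fmt.Printf("Unable to read the SFTP private key: %v\n", err)
	os.Exit(1)
}
_ = key // presumably stored in the portable user's SFTP fs config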

10
cmd/portable_disabled.go Normal file
View File

@@ -0,0 +1,10 @@
//go:build noportable
// +build noportable
package cmd
import "github.com/drakkan/sftpgo/v2/version"
func init() {
version.AddFeature("-portable")
}
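The noportable build tag above lets packagers compile SFTPGo without the portable command; version.AddFeature then reports "-portable" in the feature list. A build would look like:

$ go build -tags noportable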

View File

@@ -2,9 +2,11 @@ package cmd
import (
"fmt"
"os"
"github.com/drakkan/sftpgo/service"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/service"
)
var (
@@ -19,9 +21,10 @@ var (
}
err := s.Reload()
if err != nil {
fmt.Printf("Error reloading service: %v\r\n", err)
fmt.Printf("Error sending reload signal: %v\r\n", err)
os.Exit(1)
} else {
fmt.Printf("Service reloaded!\r\n")
fmt.Printf("Reload signal sent!\r\n")
}
},
}

75
cmd/resetprovider.go Normal file
View File

@@ -0,0 +1,75 @@
package cmd
import (
"bufio"
"os"
"strings"
"github.com/rs/zerolog"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/drakkan/sftpgo/v2/config"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
var (
resetProviderForce bool
resetProviderCmd = &cobra.Command{
Use: "resetprovider",
Short: "Reset the configured provider, any data will be lost",
Long: `This command reads the data provider connection details from the specified
configuration file and resets the provider by deleting all data and schemas.
This command is not supported for the memory provider.
Please take a look at the usage below to customize the options.`,
Run: func(cmd *cobra.Command, args []string) {
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
configDir = util.CleanDirInput(configDir)
err := config.LoadConfig(configDir, configFile)
if err != nil {
logger.WarnToConsole("Unable to initialize data provider, config load error: %v", err)
os.Exit(1)
}
kmsConfig := config.GetKMSConfig()
err = kmsConfig.Initialize()
if err != nil {
logger.ErrorToConsole("unable to initialize KMS: %v", err)
os.Exit(1)
}
providerConf := config.GetProviderConf()
if !resetProviderForce {
logger.WarnToConsole("You are about to delete all the SFTPGo data for provider %#v, config file: %#v",
providerConf.Driver, viper.ConfigFileUsed())
logger.WarnToConsole("Are you sure? (Y/n)")
reader := bufio.NewReader(os.Stdin)
answer, err := reader.ReadString('\n')
if err != nil {
logger.ErrorToConsole("unable to read your answer: %v", err)
os.Exit(1)
}
if strings.ToUpper(strings.TrimSpace(answer)) != "Y" {
logger.InfoToConsole("command aborted")
os.Exit(1)
}
}
logger.InfoToConsole("Resetting provider: %#v, config file: %#v", providerConf.Driver, viper.ConfigFileUsed())
err = dataprovider.ResetDatabase(providerConf, configDir)
if err != nil {
logger.WarnToConsole("Error resetting provider: %v", err)
os.Exit(1)
}
logger.InfoToConsole("Tha data provider was successfully reset")
},
}
)
func init() {
addConfigFlags(resetProviderCmd)
resetProviderCmd.Flags().BoolVar(&resetProviderForce, "force", false, `reset the provider without asking for confirmation`)
rootCmd.AddCommand(resetProviderCmd)
}
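A sketch of a non-interactive reset using the flags defined above (the configuration path is a placeholder):

$ sftpgo resetprovider --config-dir /etc/sftpgo --force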

63
cmd/revertprovider.go Normal file
View File

@@ -0,0 +1,63 @@
package cmd
import (
"os"
"github.com/rs/zerolog"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/drakkan/sftpgo/v2/config"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
var (
revertProviderTargetVersion int
revertProviderCmd = &cobra.Command{
Use: "revertprovider",
Short: "Revert the configured data provider to a previous version",
Long: `This command reads the data provider connection details from the specified
configuration file and restores the provider schema and/or data to a previous version.
This command is not supported for the memory provider.
Please take a look at the usage below to customize the options.`,
Run: func(cmd *cobra.Command, args []string) {
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
if revertProviderTargetVersion != 10 {
logger.WarnToConsole("Unsupported target version, 10 is the only supported one")
os.Exit(1)
}
configDir = util.CleanDirInput(configDir)
err := config.LoadConfig(configDir, configFile)
if err != nil {
logger.WarnToConsole("Unable to initialize data provider, config load error: %v", err)
os.Exit(1)
}
kmsConfig := config.GetKMSConfig()
err = kmsConfig.Initialize()
if err != nil {
logger.ErrorToConsole("unable to initialize KMS: %v", err)
os.Exit(1)
}
providerConf := config.GetProviderConf()
logger.InfoToConsole("Reverting provider: %#v config file: %#v target version %v", providerConf.Driver,
viper.ConfigFileUsed(), revertProviderTargetVersion)
err = dataprovider.RevertDatabase(providerConf, configDir, revertProviderTargetVersion)
if err != nil {
logger.WarnToConsole("Error reverting provider: %v", err)
os.Exit(1)
}
logger.InfoToConsole("Data provider successfully reverted")
},
}
)
func init() {
addConfigFlags(revertProviderCmd)
revertProviderCmd.Flags().IntVar(&revertProviderTargetVersion, "to-version", 10, `10 means the version supported in v2.1.x`)
rootCmd.AddCommand(revertProviderCmd)
}
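For example, to downgrade the schema before rolling back to a v2.1.x binary (the configuration path is a placeholder):

$ sftpgo revertprovider --to-version 10 --config-dir /etc/sftpgo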

View File

@@ -4,63 +4,81 @@ package cmd
import (
"fmt"
"os"
"strconv"
"github.com/drakkan/sftpgo/config"
"github.com/drakkan/sftpgo/utils"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/drakkan/sftpgo/v2/version"
)
const (
logSender = "cmd"
configDirFlag = "config-dir"
configDirKey = "config_dir"
configFileFlag = "config-file"
configFileKey = "config_file"
logFilePathFlag = "log-file-path"
logFilePathKey = "log_file_path"
logMaxSizeFlag = "log-max-size"
logMaxSizeKey = "log_max_size"
logMaxBackupFlag = "log-max-backups"
logMaxBackupKey = "log_max_backups"
logMaxAgeFlag = "log-max-age"
logMaxAgeKey = "log_max_age"
logCompressFlag = "log-compress"
logCompressKey = "log_compress"
logVerboseFlag = "log-verbose"
logVerboseKey = "log_verbose"
defaultConfigDir = "."
defaultConfigName = config.DefaultConfigName
defaultLogFile = "sftpgo.log"
defaultLogMaxSize = 10
defaultLogMaxBackup = 5
defaultLogMaxAge = 28
defaultLogCompress = false
defaultLogVerbose = true
configDirFlag = "config-dir"
configDirKey = "config_dir"
configFileFlag = "config-file"
configFileKey = "config_file"
logFilePathFlag = "log-file-path"
logFilePathKey = "log_file_path"
logMaxSizeFlag = "log-max-size"
logMaxSizeKey = "log_max_size"
logMaxBackupFlag = "log-max-backups"
logMaxBackupKey = "log_max_backups"
logMaxAgeFlag = "log-max-age"
logMaxAgeKey = "log_max_age"
logCompressFlag = "log-compress"
logCompressKey = "log_compress"
logVerboseFlag = "log-verbose"
logVerboseKey = "log_verbose"
logUTCTimeFlag = "log-utc-time"
logUTCTimeKey = "log_utc_time"
loadDataFromFlag = "loaddata-from"
loadDataFromKey = "loaddata_from"
loadDataModeFlag = "loaddata-mode"
loadDataModeKey = "loaddata_mode"
loadDataQuotaScanFlag = "loaddata-scan"
loadDataQuotaScanKey = "loaddata_scan"
loadDataCleanFlag = "loaddata-clean"
loadDataCleanKey = "loaddata_clean"
defaultConfigDir = "."
defaultConfigFile = ""
defaultLogFile = "sftpgo.log"
defaultLogMaxSize = 10
defaultLogMaxBackup = 5
defaultLogMaxAge = 28
defaultLogCompress = false
defaultLogVerbose = true
defaultLogUTCTime = false
defaultLoadDataFrom = ""
defaultLoadDataMode = 1
defaultLoadDataQuotaScan = 0
defaultLoadDataClean = false
)
var (
configDir string
configFile string
logFilePath string
logMaxSize int
logMaxBackups int
logMaxAge int
logCompress bool
logVerbose bool
configDir string
configFile string
logFilePath string
logMaxSize int
logMaxBackups int
logMaxAge int
logCompress bool
logVerbose bool
logUTCTime bool
loadDataFrom string
loadDataMode int
loadDataQuotaScan int
loadDataClean bool
rootCmd = &cobra.Command{
Use: "sftpgo",
Short: "Full featured and highly configurable SFTP server",
Short: "Fully featured and highly configurable SFTP server",
}
)
func init() {
version := utils.GetAppVersion()
rootCmd.CompletionOptions.DisableDefaultCmd = true
rootCmd.Flags().BoolP("version", "v", false, "")
rootCmd.Version = version.GetVersionAsString()
rootCmd.SetVersionTemplate(`{{printf "SFTPGo version: "}}{{printf "%s" .Version}}
rootCmd.Version = version.GetAsString()
rootCmd.SetVersionTemplate(`{{printf "SFTPGo "}}{{printf "%s" .Version}}
`)
}
@@ -75,100 +93,149 @@ func Execute() {
func addConfigFlags(cmd *cobra.Command) {
viper.SetDefault(configDirKey, defaultConfigDir)
viper.BindEnv(configDirKey, "SFTPGO_CONFIG_DIR")
viper.BindEnv(configDirKey, "SFTPGO_CONFIG_DIR") //nolint:errcheck // err is not nil only if the key to bind is missing
cmd.Flags().StringVarP(&configDir, configDirFlag, "c", viper.GetString(configDirKey),
"Location for SFTPGo config dir. This directory should contain the \"sftpgo\" configuration file or the configured "+
"config-file and it is used as the base for files with a relative path (eg. the private keys for the SFTP server, "+
"the SQLite database if you use SQLite as data provider). This flag can be set using SFTPGO_CONFIG_DIR env var too.")
viper.BindPFlag(configDirKey, cmd.Flags().Lookup(configDirFlag))
`Location for the config dir. This directory
is used as the base for files with a relative
path, e.g. the private keys for the SFTP
server or the SQLite database if you use
SQLite as data provider.
The configuration file, if not explicitly set,
is looked for in this dir. We support reading
from JSON, TOML, YAML, HCL, envfile and Java
properties config files. The default config
file name is "sftpgo" and therefore
"sftpgo.json", "sftpgo.yaml" and so on are
searched.
This flag can be set using SFTPGO_CONFIG_DIR
env var too.`)
viper.BindPFlag(configDirKey, cmd.Flags().Lookup(configDirFlag)) //nolint:errcheck
viper.SetDefault(configFileKey, defaultConfigName)
viper.BindEnv(configFileKey, "SFTPGO_CONFIG_FILE")
cmd.Flags().StringVarP(&configFile, configFileFlag, "f", viper.GetString(configFileKey),
"Name for SFTPGo configuration file. It must be the name of a file stored in config-dir not the absolute path to the "+
"configuration file. The specified file name must have no extension we automatically load JSON, YAML, TOML, HCL and "+
"Java properties. Therefore if you set \"sftpgo\" then \"sftpgo.json\", \"sftpgo.yaml\" and so on are searched. "+
"This flag can be set using SFTPGO_CONFIG_FILE env var too.")
viper.BindPFlag(configFileKey, cmd.Flags().Lookup(configFileFlag))
viper.SetDefault(configFileKey, defaultConfigFile)
viper.BindEnv(configFileKey, "SFTPGO_CONFIG_FILE") //nolint:errcheck
cmd.Flags().StringVar(&configFile, configFileFlag, viper.GetString(configFileKey),
`Path to SFTPGo configuration file.
This flag explicitly defines the path, name
and extension of the config file. It must be
an absolute path or a path relative to the
configuration directory. The specified file
name must have a supported extension (JSON,
YAML, TOML, HCL or Java properties).
This flag can be set using SFTPGO_CONFIG_FILE
env var too.`)
viper.BindPFlag(configFileKey, cmd.Flags().Lookup(configFileFlag)) //nolint:errcheck
}
func addServeFlags(cmd *cobra.Command) {
addConfigFlags(cmd)
viper.SetDefault(logFilePathKey, defaultLogFile)
viper.BindEnv(logFilePathKey, "SFTPGO_LOG_FILE_PATH")
viper.BindEnv(logFilePathKey, "SFTPGO_LOG_FILE_PATH") //nolint:errcheck
cmd.Flags().StringVarP(&logFilePath, logFilePathFlag, "l", viper.GetString(logFilePathKey),
"Location for the log file. Leave empty to write logs to the standard output. This flag can be set using SFTPGO_LOG_FILE_PATH "+
"env var too.")
viper.BindPFlag(logFilePathKey, cmd.Flags().Lookup(logFilePathFlag))
`Location for the log file. Leave empty to write
logs to the standard output. This flag can be
set using SFTPGO_LOG_FILE_PATH env var too.
`)
viper.BindPFlag(logFilePathKey, cmd.Flags().Lookup(logFilePathFlag)) //nolint:errcheck
viper.SetDefault(logMaxSizeKey, defaultLogMaxSize)
viper.BindEnv(logMaxSizeKey, "SFTPGO_LOG_MAX_SIZE")
viper.BindEnv(logMaxSizeKey, "SFTPGO_LOG_MAX_SIZE") //nolint:errcheck
cmd.Flags().IntVarP(&logMaxSize, logMaxSizeFlag, "s", viper.GetInt(logMaxSizeKey),
"Maximum size in megabytes of the log file before it gets rotated. This flag can be set using SFTPGO_LOG_MAX_SIZE "+
"env var too. It is unused if log-file-path is empty.")
viper.BindPFlag(logMaxSizeKey, cmd.Flags().Lookup(logMaxSizeFlag))
`Maximum size in megabytes of the log file
before it gets rotated. This flag can be set
using SFTPGO_LOG_MAX_SIZE env var too. It is
unused if log-file-path is empty.
`)
viper.BindPFlag(logMaxSizeKey, cmd.Flags().Lookup(logMaxSizeFlag)) //nolint:errcheck
viper.SetDefault(logMaxBackupKey, defaultLogMaxBackup)
viper.BindEnv(logMaxBackupKey, "SFTPGO_LOG_MAX_BACKUPS")
viper.BindEnv(logMaxBackupKey, "SFTPGO_LOG_MAX_BACKUPS") //nolint:errcheck
cmd.Flags().IntVarP(&logMaxBackups, "log-max-backups", "b", viper.GetInt(logMaxBackupKey),
"Maximum number of old log files to retain. This flag can be set using SFTPGO_LOG_MAX_BACKUPS env var too. "+
"It is unused if log-file-path is empty.")
viper.BindPFlag(logMaxBackupKey, cmd.Flags().Lookup(logMaxBackupFlag))
`Maximum number of old log files to retain.
This flag can be set using SFTPGO_LOG_MAX_BACKUPS
env var too. It is unused if log-file-path is
empty.`)
viper.BindPFlag(logMaxBackupKey, cmd.Flags().Lookup(logMaxBackupFlag)) //nolint:errcheck
viper.SetDefault(logMaxAgeKey, defaultLogMaxAge)
viper.BindEnv(logMaxAgeKey, "SFTPGO_LOG_MAX_AGE")
viper.BindEnv(logMaxAgeKey, "SFTPGO_LOG_MAX_AGE") //nolint:errcheck
cmd.Flags().IntVarP(&logMaxAge, "log-max-age", "a", viper.GetInt(logMaxAgeKey),
"Maximum number of days to retain old log files. This flag can be set using SFTPGO_LOG_MAX_AGE env var too. "+
"It is unused if log-file-path is empty.")
viper.BindPFlag(logMaxAgeKey, cmd.Flags().Lookup(logMaxAgeFlag))
`Maximum number of days to retain old log files.
This flag can be set using SFTPGO_LOG_MAX_AGE env
var too. It is unused if log-file-path is empty.
`)
viper.BindPFlag(logMaxAgeKey, cmd.Flags().Lookup(logMaxAgeFlag)) //nolint:errcheck
viper.SetDefault(logCompressKey, defaultLogCompress)
viper.BindEnv(logCompressKey, "SFTPGO_LOG_COMPRESS")
cmd.Flags().BoolVarP(&logCompress, logCompressFlag, "z", viper.GetBool(logCompressKey), "Determine if the rotated "+
"log files should be compressed using gzip. This flag can be set using SFTPGO_LOG_COMPRESS env var too. "+
"It is unused if log-file-path is empty.")
viper.BindPFlag(logCompressKey, cmd.Flags().Lookup(logCompressFlag))
viper.BindEnv(logCompressKey, "SFTPGO_LOG_COMPRESS") //nolint:errcheck
cmd.Flags().BoolVarP(&logCompress, logCompressFlag, "z", viper.GetBool(logCompressKey),
`Determine if the rotated log files
should be compressed using gzip. This flag can
be set using SFTPGO_LOG_COMPRESS env var too.
It is unused if log-file-path is empty.
`)
viper.BindPFlag(logCompressKey, cmd.Flags().Lookup(logCompressFlag)) //nolint:errcheck
viper.SetDefault(logVerboseKey, defaultLogVerbose)
viper.BindEnv(logVerboseKey, "SFTPGO_LOG_VERBOSE")
cmd.Flags().BoolVarP(&logVerbose, logVerboseFlag, "v", viper.GetBool(logVerboseKey), "Enable verbose logs. "+
"This flag can be set using SFTPGO_LOG_VERBOSE env var too.")
viper.BindPFlag(logVerboseKey, cmd.Flags().Lookup(logVerboseFlag))
}
viper.BindEnv(logVerboseKey, "SFTPGO_LOG_VERBOSE") //nolint:errcheck
cmd.Flags().BoolVarP(&logVerbose, logVerboseFlag, "v", viper.GetBool(logVerboseKey),
`Enable verbose logs. This flag can be set
using SFTPGO_LOG_VERBOSE env var too.
`)
viper.BindPFlag(logVerboseKey, cmd.Flags().Lookup(logVerboseFlag)) //nolint:errcheck
func getCustomServeFlags() []string {
result := []string{}
if configDir != defaultConfigDir {
configDir = utils.CleanDirInput(configDir)
result = append(result, "--"+configDirFlag)
result = append(result, configDir)
}
if configFile != defaultConfigName {
result = append(result, "--"+configFileFlag)
result = append(result, configFile)
}
if logFilePath != defaultLogFile {
result = append(result, "--"+logFilePathFlag)
result = append(result, logFilePath)
}
if logMaxSize != defaultLogMaxSize {
result = append(result, "--"+logMaxSizeFlag)
result = append(result, strconv.Itoa(logMaxSize))
}
if logMaxBackups != defaultLogMaxBackup {
result = append(result, "--"+logMaxBackupFlag)
result = append(result, strconv.Itoa(logMaxBackups))
}
if logMaxAge != defaultLogMaxAge {
result = append(result, "--"+logMaxAgeFlag)
result = append(result, strconv.Itoa(logMaxAge))
}
if logVerbose != defaultLogVerbose {
result = append(result, "--"+logVerboseFlag+"=false")
}
if logCompress != defaultLogCompress {
result = append(result, "--"+logCompressFlag+"=true")
}
return result
viper.SetDefault(logUTCTimeKey, defaultLogUTCTime)
viper.BindEnv(logUTCTimeKey, "SFTPGO_LOG_UTC_TIME") //nolint:errcheck
cmd.Flags().BoolVar(&logUTCTime, logUTCTimeFlag, viper.GetBool(logUTCTimeKey),
`Use UTC time for logging. This flag can be set
using SFTPGO_LOG_UTC_TIME env var too.
`)
viper.BindPFlag(logUTCTimeKey, cmd.Flags().Lookup(logUTCTimeFlag)) //nolint:errcheck
viper.SetDefault(loadDataFromKey, defaultLoadDataFrom)
viper.BindEnv(loadDataFromKey, "SFTPGO_LOADDATA_FROM") //nolint:errcheck
cmd.Flags().StringVar(&loadDataFrom, loadDataFromFlag, viper.GetString(loadDataFromKey),
`Load users and folders from this file.
The file must be specified as absolute path
and it must contain a backup obtained using
the "dumpdata" REST API or compatible content.
This flag can be set using SFTPGO_LOADDATA_FROM
env var too.
`)
viper.BindPFlag(loadDataFromKey, cmd.Flags().Lookup(loadDataFromFlag)) //nolint:errcheck
viper.SetDefault(loadDataModeKey, defaultLoadDataMode)
viper.BindEnv(loadDataModeKey, "SFTPGO_LOADDATA_MODE") //nolint:errcheck
cmd.Flags().IntVar(&loadDataMode, loadDataModeFlag, viper.GetInt(loadDataModeKey),
`Restore mode for data to load:
0 - new users are added, existing users are
updated
1 - new users are added, existing users are
not modified
This flag can be set using SFTPGO_LOADDATA_MODE
env var too.
`)
viper.BindPFlag(loadDataModeKey, cmd.Flags().Lookup(loadDataModeFlag)) //nolint:errcheck
viper.SetDefault(loadDataQuotaScanKey, defaultLoadDataQuotaScan)
viper.BindEnv(loadDataQuotaScanKey, "SFTPGO_LOADDATA_QUOTA_SCAN") //nolint:errcheck
cmd.Flags().IntVar(&loadDataQuotaScan, loadDataQuotaScanFlag, viper.GetInt(loadDataQuotaScanKey),
`Quota scan mode after data load:
0 - no quota scan
1 - scan quota
2 - scan quota if the user has quota restrictions
This flag can be set using SFTPGO_LOADDATA_QUOTA_SCAN
env var too.
(default 0)`)
viper.BindPFlag(loadDataQuotaScanKey, cmd.Flags().Lookup(loadDataQuotaScanFlag)) //nolint:errcheck
viper.SetDefault(loadDataCleanKey, defaultLoadDataClean)
viper.BindEnv(loadDataCleanKey, "SFTPGO_LOADDATA_CLEAN") //nolint:errcheck
cmd.Flags().BoolVar(&loadDataClean, loadDataCleanFlag, viper.GetBool(loadDataCleanKey),
`Determine if the loaddata-from file should
be removed after a successful load. This flag
can be set using SFTPGO_LOADDATA_CLEAN env var
too. (default "false")
`)
viper.BindPFlag(loadDataCleanKey, cmd.Flags().Lookup(loadDataCleanFlag)) //nolint:errcheck
}
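Every flag above follows the same four-step pattern: seed a viper default, bind an SFTPGO_* environment variable, declare the cobra flag seeded from viper, then bind the flag back to viper so that the command line overrides the environment, which overrides the default. A minimal self-contained sketch of that pattern (names are illustrative, not from this diff):

package main

import (
	"fmt"

	"github.com/spf13/cobra"
	"github.com/spf13/viper"
)

func main() {
	var logMaxSize int
	cmd := &cobra.Command{
		Use: "demo",
		Run: func(_ *cobra.Command, _ []string) {
			// flag > env var > default
			fmt.Println("effective value:", viper.GetInt("log_max_size"))
		},
	}
	viper.SetDefault("log_max_size", 10)               // 1) default
	viper.BindEnv("log_max_size", "DEMO_LOG_MAX_SIZE") //nolint:errcheck // 2) env var
	cmd.Flags().IntVar(&logMaxSize, "log-max-size", viper.GetInt("log_max_size"),
		"rotate size (MB)") // 3) flag seeded from viper
	viper.BindPFlag("log_max_size", cmd.Flags().Lookup("log-max-size")) //nolint:errcheck // 4) bind back
	cmd.Execute() //nolint:errcheck
}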

35
cmd/rotatelogs_windows.go Normal file
View File

@@ -0,0 +1,35 @@
package cmd
import (
"fmt"
"os"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/service"
)
var (
rotateLogCmd = &cobra.Command{
Use: "rotatelogs",
Short: "Signal to the running service to rotate the logs",
Run: func(cmd *cobra.Command, args []string) {
s := service.WindowsService{
Service: service.Service{
Shutdown: make(chan bool),
},
}
err := s.RotateLogFile()
if err != nil {
fmt.Printf("Error sending rotate log file signal to the service: %v\r\n", err)
os.Exit(1)
} else {
fmt.Printf("Rotate log file signal sent!\r\n")
}
},
}
)
func init() {
serviceCmd.AddCommand(rotateLogCmd)
}
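With the service installed, log rotation can then be requested from a Windows shell (this subcommand is Windows-only, as the _windows.go suffix indicates):

$ sftpgo service rotatelogs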

View File

@@ -1,35 +1,48 @@
package cmd
import (
"github.com/drakkan/sftpgo/service"
"github.com/drakkan/sftpgo/utils"
"os"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/service"
"github.com/drakkan/sftpgo/v2/util"
)
var (
serveCmd = &cobra.Command{
Use: "serve",
Short: "Start the SFTP Server",
Long: `To start SFTPGo with the default values for the command line flags simply use:
Short: "Start the SFTPGo service",
Long: `To start SFTPGo with the default values for the command line flags simply
use:
sftpgo serve
$ sftpgo serve
Please take a look at the usage below to customize the startup options`,
Run: func(cmd *cobra.Command, args []string) {
service := service.Service{
ConfigDir: utils.CleanDirInput(configDir),
ConfigFile: configFile,
LogFilePath: logFilePath,
LogMaxSize: logMaxSize,
LogMaxBackups: logMaxBackups,
LogMaxAge: logMaxAge,
LogCompress: logCompress,
LogVerbose: logVerbose,
Shutdown: make(chan bool),
ConfigDir: util.CleanDirInput(configDir),
ConfigFile: configFile,
LogFilePath: logFilePath,
LogMaxSize: logMaxSize,
LogMaxBackups: logMaxBackups,
LogMaxAge: logMaxAge,
LogCompress: logCompress,
LogVerbose: logVerbose,
LogUTCTime: logUTCTime,
LoadDataFrom: loadDataFrom,
LoadDataMode: loadDataMode,
LoadDataQuotaScan: loadDataQuotaScan,
LoadDataClean: loadDataClean,
Shutdown: make(chan bool),
}
if err := service.Start(); err == nil {
service.Wait()
if service.Error == nil {
os.Exit(0)
}
}
os.Exit(1)
},
}
)

View File

@@ -7,7 +7,7 @@ import (
var (
serviceCmd = &cobra.Command{
Use: "service",
Short: "Install, Uninstall, Start, Stop, Reload and retrieve status for SFTPGo Windows Service",
Short: "Manage the SFTPGo Windows Service",
}
)

54
cmd/smtptest.go Normal file
View File

@@ -0,0 +1,54 @@
package cmd
import (
"os"
"github.com/rs/zerolog"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/config"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/smtp"
"github.com/drakkan/sftpgo/v2/util"
)
var (
smtpTestRecipient string
smtpTestCmd = &cobra.Command{
Use: "smtptest",
Short: "Test the SMTP configuration",
Long: `SFTPGo will try to send a test email to the specified recipient.
If the SMTP configuration is correct, you should receive this email.`,
Run: func(cmd *cobra.Command, args []string) {
logger.DisableLogger()
logger.EnableConsoleLogger(zerolog.DebugLevel)
configDir = util.CleanDirInput(configDir)
err := config.LoadConfig(configDir, configFile)
if err != nil {
logger.WarnToConsole("Unable to initialize data provider, config load error: %v", err)
os.Exit(1)
}
smtpConfig := config.GetSMTPConfig()
err = smtpConfig.Initialize(configDir)
if err != nil {
logger.ErrorToConsole("unable to initialize SMTP configuration: %v", err)
os.Exit(1)
}
err = smtp.SendEmail(smtpTestRecipient, "SFTPGo - Testing Email Settings", "It appears your SFTPGo email is set up correctly!",
smtp.EmailContentTypeTextPlain)
if err != nil {
logger.WarnToConsole("Error sending email: %v", err)
os.Exit(1)
}
logger.InfoToConsole("No errors were reported while sending an email. Please check your inbox to make sure.")
},
}
)
func init() {
addConfigFlags(smtpTestCmd)
smtpTestCmd.Flags().StringVar(&smtpTestRecipient, "recipient", "", `email address to send the test e-mail to`)
smtpTestCmd.MarkFlagRequired("recipient") //nolint:errcheck
rootCmd.AddCommand(smtpTestCmd)
}
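A sample invocation, assuming a configured SMTP section; the recipient address and config path are placeholders:

$ sftpgo smtptest --recipient admin@example.com --config-dir /etc/sftpgo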

View File

@@ -2,20 +2,22 @@ package cmd
import (
"fmt"
"os"
"path/filepath"
"github.com/drakkan/sftpgo/service"
"github.com/drakkan/sftpgo/utils"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/service"
"github.com/drakkan/sftpgo/v2/util"
)
var (
startCmd = &cobra.Command{
Use: "start",
Short: "Start SFTPGo Windows Service",
Short: "Start the SFTPGo Windows Service",
Run: func(cmd *cobra.Command, args []string) {
configDir = utils.CleanDirInput(configDir)
if !filepath.IsAbs(logFilePath) && utils.IsFileInputValid(logFilePath) {
configDir = util.CleanDirInput(configDir)
if !filepath.IsAbs(logFilePath) && util.IsFileInputValid(logFilePath) {
logFilePath = filepath.Join(configDir, logFilePath)
}
s := service.Service{
@@ -27,6 +29,7 @@ var (
LogMaxAge: logMaxAge,
LogCompress: logCompress,
LogVerbose: logVerbose,
LogUTCTime: logUTCTime,
Shutdown: make(chan bool),
}
winService := service.WindowsService{
@@ -35,6 +38,7 @@ var (
err := winService.RunService()
if err != nil {
fmt.Printf("Error starting service: %v\r\n", err)
os.Exit(1)
} else {
fmt.Printf("Service started!\r\n")
}

193
cmd/startsubsys.go Normal file
View File

@@ -0,0 +1,193 @@
package cmd
import (
"io"
"os"
"os/user"
"path/filepath"
"github.com/rs/xid"
"github.com/rs/zerolog"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"github.com/drakkan/sftpgo/v2/common"
"github.com/drakkan/sftpgo/v2/config"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/plugin"
"github.com/drakkan/sftpgo/v2/sftpd"
"github.com/drakkan/sftpgo/v2/version"
)
var (
logJournalD = false
preserveHomeDir = false
baseHomeDir = ""
subsystemCmd = &cobra.Command{
Use: "startsubsys",
Short: "Use sftpgo as SFTP file transfer subsystem",
Long: `In this mode SFTPGo speaks the server side of the SFTP protocol to stdout and
expects client requests from stdin.
This mode is not intended to be called directly, but from sshd using the
Subsystem option.
For example, add a line like this one to "/etc/ssh/sshd_config":
Subsystem sftp sftpgo startsubsys
Command-line flags should be specified in the Subsystem declaration.
`,
Run: func(cmd *cobra.Command, args []string) {
logSender := "startsubsys"
connectionID := xid.New().String()
logLevel := zerolog.DebugLevel
if !logVerbose {
logLevel = zerolog.InfoLevel
}
logger.SetLogTime(logUTCTime)
if logJournalD {
logger.InitJournalDLogger(logLevel)
} else {
logger.InitStdErrLogger(logLevel)
}
osUser, err := user.Current()
if err != nil {
logger.Error(logSender, connectionID, "unable to get the current user: %v", err)
os.Exit(1)
}
username := osUser.Username
homedir := osUser.HomeDir
logger.Info(logSender, connectionID, "starting SFTPGo %v as subsystem, user %#v home dir %#v config dir %#v base home dir %#v",
version.Get(), username, homedir, configDir, baseHomeDir)
err = config.LoadConfig(configDir, configFile)
if err != nil {
logger.Error(logSender, connectionID, "unable to load configuration: %v", err)
os.Exit(1)
}
commonConfig := config.GetCommonConfig()
// idle connections are managed externally
commonConfig.IdleTimeout = 0
config.SetCommonConfig(commonConfig)
if err := common.Initialize(config.GetCommonConfig()); err != nil {
logger.Error(logSender, connectionID, "%v", err)
os.Exit(1)
}
kmsConfig := config.GetKMSConfig()
if err := kmsConfig.Initialize(); err != nil {
logger.Error(logSender, connectionID, "unable to initialize KMS: %v", err)
os.Exit(1)
}
mfaConfig := config.GetMFAConfig()
err = mfaConfig.Initialize()
if err != nil {
logger.Error(logSender, "", "unable to initialize MFA: %v", err)
os.Exit(1)
}
if err := plugin.Initialize(config.GetPluginsConfig(), logVerbose); err != nil {
logger.Error(logSender, connectionID, "unable to initialize plugin system: %v", err)
os.Exit(1)
}
smtpConfig := config.GetSMTPConfig()
err = smtpConfig.Initialize(configDir)
if err != nil {
logger.Error(logSender, connectionID, "unable to initialize SMTP configuration: %v", err)
os.Exit(1)
}
dataProviderConf := config.GetProviderConf()
if dataProviderConf.Driver == dataprovider.SQLiteDataProviderName || dataProviderConf.Driver == dataprovider.BoltDataProviderName {
logger.Debug(logSender, connectionID, "data provider %#v not supported in subsystem mode, using %#v provider",
dataProviderConf.Driver, dataprovider.MemoryDataProviderName)
dataProviderConf.Driver = dataprovider.MemoryDataProviderName
dataProviderConf.Name = ""
dataProviderConf.PreferDatabaseCredentials = true
}
config.SetProviderConf(dataProviderConf)
err = dataprovider.Initialize(dataProviderConf, configDir, false)
if err != nil {
logger.Error(logSender, connectionID, "unable to initialize the data provider: %v", err)
os.Exit(1)
}
httpConfig := config.GetHTTPConfig()
if err := httpConfig.Initialize(configDir); err != nil {
logger.Error(logSender, connectionID, "unable to initialize http client: %v", err)
os.Exit(1)
}
user, err := dataprovider.UserExists(username)
if err == nil {
if user.HomeDir != filepath.Clean(homedir) && !preserveHomeDir {
// update the user
user.HomeDir = filepath.Clean(homedir)
err = dataprovider.UpdateUser(&user, dataprovider.ActionExecutorSystem, "")
if err != nil {
logger.Error(logSender, connectionID, "unable to update user %#v: %v", username, err)
os.Exit(1)
}
}
} else {
user.Username = username
if baseHomeDir != "" && filepath.IsAbs(baseHomeDir) {
user.HomeDir = filepath.Join(baseHomeDir, username)
} else {
user.HomeDir = filepath.Clean(homedir)
}
logger.Debug(logSender, connectionID, "home dir for new user %#v", user.HomeDir)
user.Password = connectionID
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
err = dataprovider.AddUser(&user, dataprovider.ActionExecutorSystem, "")
if err != nil {
logger.Error(logSender, connectionID, "unable to add user %#v: %v", username, err)
os.Exit(1)
}
}
err = sftpd.ServeSubSystemConnection(&user, connectionID, os.Stdin, os.Stdout)
if err != nil && err != io.EOF {
logger.Warn(logSender, connectionID, "serving subsystem finished with error: %v", err)
os.Exit(1)
}
logger.Info(logSender, connectionID, "serving subsystem finished")
plugin.Handler.Cleanup()
os.Exit(0)
},
}
)
func init() {
subsystemCmd.Flags().BoolVarP(&preserveHomeDir, "preserve-home", "p", false, `If the user already exists, the existing home
directory will not be changed`)
subsystemCmd.Flags().StringVarP(&baseHomeDir, "base-home-dir", "d", "", `If the user does not exist specify an alternate
starting directory. The home directory for a new
user will be:
[base-home-dir]/[username]
base-home-dir must be an absolute path.`)
subsystemCmd.Flags().BoolVarP(&logJournalD, "log-to-journald", "j", false, `Send logs to journald. Only available on Linux.
Use:
$ journalctl -o verbose -f
to see full logs.
If not set, the logs will be sent to the standard
error`)
addConfigFlags(subsystemCmd)
viper.SetDefault(logVerboseKey, defaultLogVerbose)
viper.BindEnv(logVerboseKey, "SFTPGO_LOG_VERBOSE") //nolint:errcheck
subsystemCmd.Flags().BoolVarP(&logVerbose, logVerboseFlag, "v", viper.GetBool(logVerboseKey),
`Enable verbose logs. This flag can be set
using SFTPGO_LOG_VERBOSE env var too.
`)
viper.BindPFlag(logVerboseKey, subsystemCmd.Flags().Lookup(logVerboseFlag)) //nolint:errcheck
viper.SetDefault(logUTCTimeKey, defaultLogUTCTime)
viper.BindEnv(logUTCTimeKey, "SFTPGO_LOG_UTC_TIME") //nolint:errcheck
subsystemCmd.Flags().BoolVar(&logUTCTime, logUTCTimeFlag, viper.GetBool(logUTCTimeKey),
`Use UTC time for logging. This flag can be set
using SFTPGO_LOG_UTC_TIME env var too.
`)
viper.BindPFlag(logUTCTimeKey, subsystemCmd.Flags().Lookup(logUTCTimeFlag)) //nolint:errcheck
rootCmd.AddCommand(subsystemCmd)
}
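Putting the flags above together, an sshd_config entry might look like this (the binary and directory paths are placeholders):

Subsystem sftp /usr/local/bin/sftpgo startsubsys -p -d /srv/sftp --config-dir /etc/sftpgo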

View File

@@ -2,9 +2,11 @@ package cmd
import (
"fmt"
"os"
"github.com/drakkan/sftpgo/service"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/service"
)
var (
@@ -20,6 +22,7 @@ var (
status, err := s.Status()
if err != nil {
fmt.Printf("Error querying service status: %v\r\n", err)
os.Exit(1)
} else {
fmt.Printf("Service status: %#v\r\n", status.String())
}

View File

@@ -2,15 +2,17 @@ package cmd
import (
"fmt"
"os"
"github.com/drakkan/sftpgo/service"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/service"
)
var (
stopCmd = &cobra.Command{
Use: "stop",
Short: "Stop SFTPGo Windows Service",
Short: "Stop the SFTPGo Windows Service",
Run: func(cmd *cobra.Command, args []string) {
s := service.WindowsService{
Service: service.Service{
@@ -20,6 +22,7 @@ var (
err := s.Stop()
if err != nil {
fmt.Printf("Error stopping service: %v\r\n", err)
os.Exit(1)
} else {
fmt.Printf("Service stopped!\r\n")
}

View File

@@ -2,15 +2,17 @@ package cmd
import (
"fmt"
"os"
"github.com/drakkan/sftpgo/service"
"github.com/spf13/cobra"
"github.com/drakkan/sftpgo/v2/service"
)
var (
uninstallCmd = &cobra.Command{
Use: "uninstall",
Short: "Uninstall SFTPGo Windows Service",
Short: "Uninstall the SFTPGo Windows Service",
Run: func(cmd *cobra.Command, args []string) {
s := service.WindowsService{
Service: service.Service{
@@ -20,6 +22,7 @@ var (
err := s.Uninstall()
if err != nil {
fmt.Printf("Error removing service: %v\r\n", err)
os.Exit(1)
} else {
fmt.Printf("Service uninstalled\r\n")
}

261
common/actions.go Normal file
View File

@@ -0,0 +1,261 @@
package common
import (
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"net/http"
"net/url"
"os"
"os/exec"
"path/filepath"
"strings"
"time"
"github.com/sftpgo/sdk"
"github.com/sftpgo/sdk/plugin/notifier"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/httpclient"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/plugin"
"github.com/drakkan/sftpgo/v2/util"
)
var (
errUnconfiguredAction = errors.New("no hook is configured for this action")
errNoHook = errors.New("unable to execute action, no hook defined")
errUnexpectedHTTResponse = errors.New("unexpected HTTP response code")
)
// ProtocolActions defines the action to execute on file operations and SSH commands
type ProtocolActions struct {
// Valid values are download, upload, pre-delete, delete, rename, ssh_cmd. Empty slice to disable
ExecuteOn []string `json:"execute_on" mapstructure:"execute_on"`
// Actions to be performed synchronously.
// The pre-delete action is always executed synchronously while the other ones are asynchronous.
// Executing an action synchronously means that SFTPGo will not return a result code to the client
// (which is waiting for it) until your hook has completed its execution.
ExecuteSync []string `json:"execute_sync" mapstructure:"execute_sync"`
// Absolute path to an external program or an HTTP URL
Hook string `json:"hook" mapstructure:"hook"`
}
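A hedged sketch of configuring this struct in code, mirroring what the tests later in this diff do (the hook path is a placeholder; the operation names follow the comment above):

Config.Actions = ProtocolActions{
	ExecuteOn:   []string{"upload", "download"},
	ExecuteSync: []string{"upload"}, // uploads block until the hook returns
	Hook:        "/usr/local/bin/sftpgo-hook",
}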
var actionHandler ActionHandler = &defaultActionHandler{}
// InitializeActionHandler lets the user choose an action handler implementation.
//
// Do NOT call this function after application initialization.
func InitializeActionHandler(handler ActionHandler) {
actionHandler = handler
}
func handleUnconfiguredPreAction(operation string) error {
// for pre-delete we execute the internal handling on error, so we must return errUnconfiguredAction.
// Other pre action will deny the operation on error so if we have no configuration we must return
// a nil error
if operation == operationPreDelete {
return errUnconfiguredAction
}
return nil
}
// ExecutePreAction executes a pre-* action and returns the result
func ExecutePreAction(conn *BaseConnection, operation, filePath, virtualPath string, fileSize int64, openFlags int) error {
var event *notifier.FsEvent
hasNotifiersPlugin := plugin.Handler.HasNotifiers()
hasHook := util.IsStringInSlice(operation, Config.Actions.ExecuteOn)
if !hasHook && !hasNotifiersPlugin {
return handleUnconfiguredPreAction(operation)
}
event = newActionNotification(&conn.User, operation, filePath, virtualPath, "", "", "",
conn.protocol, conn.GetRemoteIP(), conn.ID, fileSize, openFlags, nil)
if hasNotifiersPlugin {
plugin.Handler.NotifyFsEvent(event)
}
if !hasHook {
return handleUnconfiguredPreAction(operation)
}
return actionHandler.Handle(event)
}
// ExecuteActionNotification executes the defined hook, if any, for the specified action
func ExecuteActionNotification(conn *BaseConnection, operation, filePath, virtualPath, target, virtualTarget, sshCmd string,
fileSize int64, err error,
) {
hasNotifiersPlugin := plugin.Handler.HasNotifiers()
hasHook := util.IsStringInSlice(operation, Config.Actions.ExecuteOn)
if !hasHook && !hasNotifiersPlugin {
return
}
notification := newActionNotification(&conn.User, operation, filePath, virtualPath, target, virtualTarget, sshCmd,
conn.protocol, conn.GetRemoteIP(), conn.ID, fileSize, 0, err)
if hasNotifiersPlugin {
plugin.Handler.NotifyFsEvent(notification)
}
if hasHook {
if util.IsStringInSlice(operation, Config.Actions.ExecuteSync) {
actionHandler.Handle(notification) //nolint:errcheck
return
}
go actionHandler.Handle(notification) //nolint:errcheck
}
}
// ActionHandler handles a notification for a Protocol Action.
type ActionHandler interface {
Handle(notification *notifier.FsEvent) error
}
func newActionNotification(
user *dataprovider.User,
operation, filePath, virtualPath, target, virtualTarget, sshCmd, protocol, ip, sessionID string,
fileSize int64,
openFlags int,
err error,
) *notifier.FsEvent {
var bucket, endpoint string
status := 1
fsConfig := user.GetFsConfigForPath(virtualPath)
switch fsConfig.Provider {
case sdk.S3FilesystemProvider:
bucket = fsConfig.S3Config.Bucket
endpoint = fsConfig.S3Config.Endpoint
case sdk.GCSFilesystemProvider:
bucket = fsConfig.GCSConfig.Bucket
case sdk.AzureBlobFilesystemProvider:
bucket = fsConfig.AzBlobConfig.Container
if fsConfig.AzBlobConfig.Endpoint != "" {
endpoint = fsConfig.AzBlobConfig.Endpoint
}
case sdk.SFTPFilesystemProvider:
endpoint = fsConfig.SFTPConfig.Endpoint
}
if err == ErrQuotaExceeded {
status = 3
} else if err != nil {
status = 2
}
return &notifier.FsEvent{
Action: operation,
Username: user.Username,
Path: filePath,
TargetPath: target,
VirtualPath: virtualPath,
VirtualTargetPath: virtualTarget,
SSHCmd: sshCmd,
FileSize: fileSize,
FsProvider: int(fsConfig.Provider),
Bucket: bucket,
Endpoint: endpoint,
Status: status,
Protocol: protocol,
IP: ip,
SessionID: sessionID,
OpenFlags: openFlags,
Timestamp: time.Now().UnixNano(),
}
}
type defaultActionHandler struct{}
func (h *defaultActionHandler) Handle(event *notifier.FsEvent) error {
if !util.IsStringInSlice(event.Action, Config.Actions.ExecuteOn) {
return errUnconfiguredAction
}
if Config.Actions.Hook == "" {
logger.Warn(event.Protocol, "", "Unable to send notification, no hook is defined")
return errNoHook
}
if strings.HasPrefix(Config.Actions.Hook, "http") {
return h.handleHTTP(event)
}
return h.handleCommand(event)
}
func (h *defaultActionHandler) handleHTTP(event *notifier.FsEvent) error {
u, err := url.Parse(Config.Actions.Hook)
if err != nil {
logger.Error(event.Protocol, "", "Invalid hook %#v for operation %#v: %v",
Config.Actions.Hook, event.Action, err)
return err
}
startTime := time.Now()
respCode := 0
var b bytes.Buffer
_ = json.NewEncoder(&b).Encode(event)
resp, err := httpclient.RetryablePost(Config.Actions.Hook, "application/json", &b)
if err == nil {
respCode = resp.StatusCode
resp.Body.Close()
if respCode != http.StatusOK {
err = errUnexpectedHTTResponse
}
}
logger.Debug(event.Protocol, "", "notified operation %#v to URL: %v status code: %v, elapsed: %v err: %v",
event.Action, u.Redacted(), respCode, time.Since(startTime), err)
return err
}
func (h *defaultActionHandler) handleCommand(event *notifier.FsEvent) error {
if !filepath.IsAbs(Config.Actions.Hook) {
err := fmt.Errorf("invalid notification command %#v", Config.Actions.Hook)
logger.Warn(event.Protocol, "", "unable to execute notification command: %v", err)
return err
}
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
cmd := exec.CommandContext(ctx, Config.Actions.Hook)
cmd.Env = append(os.Environ(), notificationAsEnvVars(event)...)
startTime := time.Now()
err := cmd.Run()
logger.Debug(event.Protocol, "", "executed command %#v, elapsed: %v, error: %v",
Config.Actions.Hook, time.Since(startTime), err)
return err
}
func notificationAsEnvVars(event *notifier.FsEvent) []string {
return []string{
fmt.Sprintf("SFTPGO_ACTION=%v", event.Action),
fmt.Sprintf("SFTPGO_ACTION_USERNAME=%v", event.Username),
fmt.Sprintf("SFTPGO_ACTION_PATH=%v", event.Path),
fmt.Sprintf("SFTPGO_ACTION_TARGET=%v", event.TargetPath),
fmt.Sprintf("SFTPGO_ACTION_VIRTUAL_PATH=%v", event.VirtualPath),
fmt.Sprintf("SFTPGO_ACTION_VIRTUAL_TARGET=%v", event.VirtualTargetPath),
fmt.Sprintf("SFTPGO_ACTION_SSH_CMD=%v", event.SSHCmd),
fmt.Sprintf("SFTPGO_ACTION_FILE_SIZE=%v", event.FileSize),
fmt.Sprintf("SFTPGO_ACTION_FS_PROVIDER=%v", event.FsProvider),
fmt.Sprintf("SFTPGO_ACTION_BUCKET=%v", event.Bucket),
fmt.Sprintf("SFTPGO_ACTION_ENDPOINT=%v", event.Endpoint),
fmt.Sprintf("SFTPGO_ACTION_STATUS=%v", event.Status),
fmt.Sprintf("SFTPGO_ACTION_PROTOCOL=%v", event.Protocol),
fmt.Sprintf("SFTPGO_ACTION_IP=%v", event.IP),
fmt.Sprintf("SFTPGO_ACTION_SESSION_ID=%v", event.SessionID),
fmt.Sprintf("SFTPGO_ACTION_OPEN_FLAGS=%v", event.OpenFlags),
fmt.Sprintf("SFTPGO_ACTION_TIMESTAMP=%v", event.Timestamp),
}
}
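When the hook is an executable, the event is delivered through the SFTPGO_ACTION_* environment variables listed above, and the hook must finish within the 30-second timeout set in handleCommand. A minimal standalone hook, written in Go purely for illustration:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Log the essentials of the notification passed by SFTPGo.
	fmt.Printf("action=%s user=%s path=%s status=%s\n",
		os.Getenv("SFTPGO_ACTION"),
		os.Getenv("SFTPGO_ACTION_USERNAME"),
		os.Getenv("SFTPGO_ACTION_PATH"),
		os.Getenv("SFTPGO_ACTION_STATUS"))
	// Exit 0: for synchronous actions a non-zero exit is reported as a hook error.
	os.Exit(0)
}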

295
common/actions_test.go Normal file
View File

@@ -0,0 +1,295 @@
package common
import (
"errors"
"fmt"
"os"
"os/exec"
"path/filepath"
"runtime"
"testing"
"github.com/lithammer/shortuuid/v3"
"github.com/rs/xid"
"github.com/sftpgo/sdk"
"github.com/sftpgo/sdk/plugin/notifier"
"github.com/stretchr/testify/assert"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/plugin"
"github.com/drakkan/sftpgo/v2/vfs"
)
func TestNewActionNotification(t *testing.T) {
user := &dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "username",
},
}
user.FsConfig.Provider = sdk.LocalFilesystemProvider
user.FsConfig.S3Config = vfs.S3FsConfig{
BaseS3FsConfig: sdk.BaseS3FsConfig{
Bucket: "s3bucket",
Endpoint: "endpoint",
},
}
user.FsConfig.GCSConfig = vfs.GCSFsConfig{
BaseGCSFsConfig: sdk.BaseGCSFsConfig{
Bucket: "gcsbucket",
},
}
user.FsConfig.AzBlobConfig = vfs.AzBlobFsConfig{
BaseAzBlobFsConfig: sdk.BaseAzBlobFsConfig{
Container: "azcontainer",
Endpoint: "azendpoint",
},
}
user.FsConfig.SFTPConfig = vfs.SFTPFsConfig{
BaseSFTPFsConfig: sdk.BaseSFTPFsConfig{
Endpoint: "sftpendpoint",
},
}
sessionID := xid.New().String()
a := newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSFTP, "", sessionID,
123, 0, errors.New("fake error"))
assert.Equal(t, user.Username, a.Username)
assert.Equal(t, 0, len(a.Bucket))
assert.Equal(t, 0, len(a.Endpoint))
assert.Equal(t, 2, a.Status)
user.FsConfig.Provider = sdk.S3FilesystemProvider
a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSSH, "", sessionID,
123, 0, nil)
assert.Equal(t, "s3bucket", a.Bucket)
assert.Equal(t, "endpoint", a.Endpoint)
assert.Equal(t, 1, a.Status)
user.FsConfig.Provider = sdk.GCSFilesystemProvider
a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSCP, "", sessionID,
123, 0, ErrQuotaExceeded)
assert.Equal(t, "gcsbucket", a.Bucket)
assert.Equal(t, 0, len(a.Endpoint))
assert.Equal(t, 3, a.Status)
user.FsConfig.Provider = sdk.AzureBlobFilesystemProvider
a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSCP, "", sessionID,
123, 0, nil)
assert.Equal(t, "azcontainer", a.Bucket)
assert.Equal(t, "azendpoint", a.Endpoint)
assert.Equal(t, 1, a.Status)
a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSCP, "", sessionID,
123, os.O_APPEND, nil)
assert.Equal(t, "azcontainer", a.Bucket)
assert.Equal(t, "azendpoint", a.Endpoint)
assert.Equal(t, 1, a.Status)
assert.Equal(t, os.O_APPEND, a.OpenFlags)
user.FsConfig.Provider = sdk.SFTPFilesystemProvider
a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSFTP, "", sessionID,
123, 0, nil)
assert.Equal(t, "sftpendpoint", a.Endpoint)
}
func TestActionHTTP(t *testing.T) {
actionsCopy := Config.Actions
Config.Actions = ProtocolActions{
ExecuteOn: []string{operationDownload},
Hook: fmt.Sprintf("http://%v", httpAddr),
}
user := &dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "username",
},
}
a := newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSFTP, "",
xid.New().String(), 123, 0, nil)
err := actionHandler.Handle(a)
assert.NoError(t, err)
Config.Actions.Hook = "http://invalid:1234"
err = actionHandler.Handle(a)
assert.Error(t, err)
Config.Actions.Hook = fmt.Sprintf("http://%v/404", httpAddr)
err = actionHandler.Handle(a)
if assert.Error(t, err) {
assert.EqualError(t, err, errUnexpectedHTTResponse.Error())
}
Config.Actions = actionsCopy
}
func TestActionCMD(t *testing.T) {
if runtime.GOOS == osWindows {
t.Skip("this test is not available on Windows")
}
actionsCopy := Config.Actions
hookCmd, err := exec.LookPath("true")
assert.NoError(t, err)
Config.Actions = ProtocolActions{
ExecuteOn: []string{operationDownload},
Hook: hookCmd,
}
user := &dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "username",
},
}
sessionID := shortuuid.New()
a := newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSFTP, "", sessionID,
123, 0, nil)
err = actionHandler.Handle(a)
assert.NoError(t, err)
c := NewBaseConnection("id", ProtocolSFTP, "", "", *user)
ExecuteActionNotification(c, OperationSSHCmd, "path", "vpath", "target", "vtarget", "sha1sum", 0, nil)
ExecuteActionNotification(c, operationDownload, "path", "vpath", "", "", "", 0, nil)
Config.Actions = actionsCopy
}
func TestWrongActions(t *testing.T) {
actionsCopy := Config.Actions
badCommand := "/bad/command"
if runtime.GOOS == osWindows {
badCommand = "C:\\bad\\command"
}
Config.Actions = ProtocolActions{
ExecuteOn: []string{operationUpload},
Hook: badCommand,
}
user := &dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "username",
},
}
a := newActionNotification(user, operationUpload, "", "", "", "", "", ProtocolSFTP, "", xid.New().String(),
123, 0, nil)
err := actionHandler.Handle(a)
assert.Error(t, err, "action with bad command must fail")
a.Action = operationDelete
err = actionHandler.Handle(a)
assert.EqualError(t, err, errUnconfiguredAction.Error())
Config.Actions.Hook = "http://foo\x7f.com/"
a.Action = operationUpload
err = actionHandler.Handle(a)
assert.Error(t, err, "action with bad url must fail")
Config.Actions.Hook = ""
err = actionHandler.Handle(a)
if assert.Error(t, err) {
assert.EqualError(t, err, errNoHook.Error())
}
Config.Actions.Hook = "relative path"
err = actionHandler.Handle(a)
if assert.Error(t, err) {
assert.EqualError(t, err, fmt.Sprintf("invalid notification command %#v", Config.Actions.Hook))
}
Config.Actions = actionsCopy
}
func TestPreDeleteAction(t *testing.T) {
if runtime.GOOS == osWindows {
t.Skip("this test is not available on Windows")
}
actionsCopy := Config.Actions
hookCmd, err := exec.LookPath("true")
assert.NoError(t, err)
Config.Actions = ProtocolActions{
ExecuteOn: []string{operationPreDelete},
Hook: hookCmd,
}
homeDir := filepath.Join(os.TempDir(), "test_user")
err = os.MkdirAll(homeDir, os.ModePerm)
assert.NoError(t, err)
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "username",
HomeDir: homeDir,
},
}
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
fs := vfs.NewOsFs("id", homeDir, "")
c := NewBaseConnection("id", ProtocolSFTP, "", "", user)
testfile := filepath.Join(user.HomeDir, "testfile")
err = os.WriteFile(testfile, []byte("test"), os.ModePerm)
assert.NoError(t, err)
info, err := os.Stat(testfile)
assert.NoError(t, err)
err = c.RemoveFile(fs, testfile, "testfile", info)
assert.NoError(t, err)
assert.FileExists(t, testfile)
os.RemoveAll(homeDir)
Config.Actions = actionsCopy
}
func TestUnconfiguredHook(t *testing.T) {
actionsCopy := Config.Actions
Config.Actions = ProtocolActions{
ExecuteOn: []string{operationDownload},
Hook: "",
}
pluginsConfig := []plugin.Config{
{
Type: "notifier",
},
}
err := plugin.Initialize(pluginsConfig, true)
assert.Error(t, err)
assert.True(t, plugin.Handler.HasNotifiers())
c := NewBaseConnection("id", ProtocolSFTP, "", "", dataprovider.User{})
err = ExecutePreAction(c, OperationPreDownload, "", "", 0, 0)
assert.NoError(t, err)
err = ExecutePreAction(c, operationPreDelete, "", "", 0, 0)
assert.ErrorIs(t, err, errUnconfiguredAction)
ExecuteActionNotification(c, operationDownload, "", "", "", "", "", 0, nil)
err = plugin.Initialize(nil, true)
assert.NoError(t, err)
assert.False(t, plugin.Handler.HasNotifiers())
Config.Actions = actionsCopy
}
type actionHandlerStub struct {
called bool
}
func (h *actionHandlerStub) Handle(event *notifier.FsEvent) error {
h.called = true
return nil
}
func TestInitializeActionHandler(t *testing.T) {
handler := &actionHandlerStub{}
InitializeActionHandler(handler)
t.Cleanup(func() {
InitializeActionHandler(&defaultActionHandler{})
})
err := actionHandler.Handle(&notifier.FsEvent{})
assert.NoError(t, err)
assert.True(t, handler.called)
}

51
common/clientsmap.go Normal file
View File

@@ -0,0 +1,51 @@
package common
import (
"sync"
"sync/atomic"
"github.com/drakkan/sftpgo/v2/logger"
)
// clientsMap is a struct containing the map of the connected clients
type clientsMap struct {
totalConnections int32
mu sync.RWMutex
clients map[string]int
}
func (c *clientsMap) add(source string) {
atomic.AddInt32(&c.totalConnections, 1)
c.mu.Lock()
defer c.mu.Unlock()
c.clients[source]++
}
func (c *clientsMap) remove(source string) {
c.mu.Lock()
defer c.mu.Unlock()
if val, ok := c.clients[source]; ok {
atomic.AddInt32(&c.totalConnections, -1)
c.clients[source]--
if val > 1 {
return
}
delete(c.clients, source)
} else {
logger.Warn(logSender, "", "cannot remove client %v it is not mapped", source)
}
}
func (c *clientsMap) getTotal() int32 {
return atomic.LoadInt32(&c.totalConnections)
}
func (c *clientsMap) getTotalFrom(source string) int {
c.mu.RLock()
defer c.mu.RUnlock()
return c.clients[source]
}

59
common/clientsmap_test.go Normal file
View File

@@ -0,0 +1,59 @@
package common
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestClientsMap(t *testing.T) {
m := clientsMap{
clients: make(map[string]int),
}
ip1 := "192.168.1.1"
ip2 := "192.168.1.2"
m.add(ip1)
assert.Equal(t, int32(1), m.getTotal())
assert.Equal(t, 1, m.getTotalFrom(ip1))
assert.Equal(t, 0, m.getTotalFrom(ip2))
m.add(ip1)
m.add(ip2)
assert.Equal(t, int32(3), m.getTotal())
assert.Equal(t, 2, m.getTotalFrom(ip1))
assert.Equal(t, 1, m.getTotalFrom(ip2))
m.add(ip1)
m.add(ip1)
m.add(ip2)
assert.Equal(t, int32(6), m.getTotal())
assert.Equal(t, 4, m.getTotalFrom(ip1))
assert.Equal(t, 2, m.getTotalFrom(ip2))
m.remove(ip2)
assert.Equal(t, int32(5), m.getTotal())
assert.Equal(t, 4, m.getTotalFrom(ip1))
assert.Equal(t, 1, m.getTotalFrom(ip2))
m.remove("unknown")
assert.Equal(t, int32(5), m.getTotal())
assert.Equal(t, 4, m.getTotalFrom(ip1))
assert.Equal(t, 1, m.getTotalFrom(ip2))
m.remove(ip2)
assert.Equal(t, int32(4), m.getTotal())
assert.Equal(t, 4, m.getTotalFrom(ip1))
assert.Equal(t, 0, m.getTotalFrom(ip2))
m.remove(ip1)
m.remove(ip1)
m.remove(ip1)
assert.Equal(t, int32(1), m.getTotal())
assert.Equal(t, 1, m.getTotalFrom(ip1))
assert.Equal(t, 0, m.getTotalFrom(ip2))
m.remove(ip1)
assert.Equal(t, int32(0), m.getTotal())
assert.Equal(t, 0, m.getTotalFrom(ip1))
assert.Equal(t, 0, m.getTotalFrom(ip2))
}

1073
common/common.go Normal file

File diff suppressed because it is too large

910
common/common_test.go Normal file
View File

@@ -0,0 +1,910 @@
package common
import (
"encoding/json"
"fmt"
"net"
"os"
"os/exec"
"path/filepath"
"runtime"
"strings"
"sync/atomic"
"testing"
"time"
"github.com/alexedwards/argon2id"
"github.com/sftpgo/sdk"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/crypto/bcrypt"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/vfs"
)
const (
logSenderTest = "common_test"
httpAddr = "127.0.0.1:9999"
configDir = ".."
osWindows = "windows"
userTestUsername = "common_test_username"
)
type fakeConnection struct {
*BaseConnection
command string
}
func (c *fakeConnection) AddUser(user dataprovider.User) error {
_, err := user.GetFilesystem(c.GetID())
if err != nil {
return err
}
c.BaseConnection.User = user
return nil
}
func (c *fakeConnection) Disconnect() error {
Connections.Remove(c.GetID())
return nil
}
func (c *fakeConnection) GetClientVersion() string {
return ""
}
func (c *fakeConnection) GetCommand() string {
return c.command
}
func (c *fakeConnection) GetLocalAddress() string {
return ""
}
func (c *fakeConnection) GetRemoteAddress() string {
return ""
}
type customNetConn struct {
net.Conn
id string
isClosed bool
}
func (c *customNetConn) Close() error {
Connections.RemoveSSHConnection(c.id)
c.isClosed = true
return c.Conn.Close()
}
func TestSSHConnections(t *testing.T) {
conn1, conn2 := net.Pipe()
now := time.Now()
sshConn1 := NewSSHConnection("id1", conn1)
sshConn2 := NewSSHConnection("id2", conn2)
sshConn3 := NewSSHConnection("id3", conn2)
assert.Equal(t, "id1", sshConn1.GetID())
assert.Equal(t, "id2", sshConn2.GetID())
assert.Equal(t, "id3", sshConn3.GetID())
sshConn1.UpdateLastActivity()
assert.GreaterOrEqual(t, sshConn1.GetLastActivity().UnixNano(), now.UnixNano())
Connections.AddSSHConnection(sshConn1)
Connections.AddSSHConnection(sshConn2)
Connections.AddSSHConnection(sshConn3)
Connections.RLock()
assert.Len(t, Connections.sshConnections, 3)
Connections.RUnlock()
Connections.RemoveSSHConnection(sshConn1.id)
Connections.RLock()
assert.Len(t, Connections.sshConnections, 2)
assert.Equal(t, sshConn3.id, Connections.sshConnections[0].id)
assert.Equal(t, sshConn2.id, Connections.sshConnections[1].id)
Connections.RUnlock()
Connections.RemoveSSHConnection(sshConn1.id)
Connections.RLock()
assert.Len(t, Connections.sshConnections, 2)
assert.Equal(t, sshConn3.id, Connections.sshConnections[0].id)
assert.Equal(t, sshConn2.id, Connections.sshConnections[1].id)
Connections.RUnlock()
Connections.RemoveSSHConnection(sshConn2.id)
Connections.RLock()
assert.Len(t, Connections.sshConnections, 1)
assert.Equal(t, sshConn3.id, Connections.sshConnections[0].id)
Connections.RUnlock()
Connections.RemoveSSHConnection(sshConn3.id)
Connections.RLock()
assert.Len(t, Connections.sshConnections, 0)
Connections.RUnlock()
assert.NoError(t, sshConn1.Close())
assert.NoError(t, sshConn2.Close())
assert.NoError(t, sshConn3.Close())
}
func TestDefenderIntegration(t *testing.T) {
// by default defender is nil
configCopy := Config
ip := "127.1.1.1"
assert.Nil(t, ReloadDefender())
AddDefenderEvent(ip, HostEventNoLoginTried)
assert.False(t, IsBanned(ip))
banTime, err := GetDefenderBanTime(ip)
assert.NoError(t, err)
assert.Nil(t, banTime)
assert.False(t, DeleteDefenderHost(ip))
score, err := GetDefenderScore(ip)
assert.NoError(t, err)
assert.Equal(t, 0, score)
_, err = GetDefenderHost(ip)
assert.Error(t, err)
hosts, err := GetDefenderHosts()
assert.NoError(t, err)
assert.Nil(t, hosts)
Config.DefenderConfig = DefenderConfig{
Enabled: true,
Driver: DefenderDriverProvider,
BanTime: 10,
BanTimeIncrement: 50,
Threshold: 0,
ScoreInvalid: 2,
ScoreValid: 1,
ObservationTime: 15,
EntriesSoftLimit: 100,
EntriesHardLimit: 150,
}
err = Initialize(Config)
// ScoreInvalid cannot be greater than threshold
assert.Error(t, err)
Config.DefenderConfig.Driver = "unsupported"
err = Initialize(Config)
if assert.Error(t, err) {
assert.Contains(t, err.Error(), "unsupported defender driver")
}
Config.DefenderConfig.Driver = DefenderDriverMemory
err = Initialize(Config)
// ScoreInvalid cannot be greater than threshold
assert.Error(t, err)
Config.DefenderConfig.Threshold = 3
err = Initialize(Config)
assert.NoError(t, err)
assert.Nil(t, ReloadDefender())
AddDefenderEvent(ip, HostEventNoLoginTried)
assert.False(t, IsBanned(ip))
score, err = GetDefenderScore(ip)
assert.NoError(t, err)
assert.Equal(t, 2, score)
entry, err := GetDefenderHost(ip)
assert.NoError(t, err)
asJSON, err := json.Marshal(&entry)
assert.NoError(t, err)
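// the defender host id is the hex encoding of the IP address: "127.1.1.1" -> "3132372e312e312e31"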
assert.Equal(t, `{"id":"3132372e312e312e31","ip":"127.1.1.1","score":2}`, string(asJSON), "entry %v", entry)
assert.True(t, DeleteDefenderHost(ip))
banTime, err = GetDefenderBanTime(ip)
assert.NoError(t, err)
assert.Nil(t, banTime)
AddDefenderEvent(ip, HostEventLoginFailed)
AddDefenderEvent(ip, HostEventNoLoginTried)
assert.True(t, IsBanned(ip))
score, err = GetDefenderScore(ip)
assert.NoError(t, err)
assert.Equal(t, 0, score)
banTime, err = GetDefenderBanTime(ip)
assert.NoError(t, err)
assert.NotNil(t, banTime)
hosts, err = GetDefenderHosts()
assert.NoError(t, err)
assert.Len(t, hosts, 1)
entry, err = GetDefenderHost(ip)
assert.NoError(t, err)
assert.False(t, entry.BanTime.IsZero())
assert.True(t, DeleteDefenderHost(ip))
hosts, err = GetDefenderHosts()
assert.NoError(t, err)
assert.Len(t, hosts, 0)
banTime, err = GetDefenderBanTime(ip)
assert.NoError(t, err)
assert.Nil(t, banTime)
assert.False(t, DeleteDefenderHost(ip))
Config = configCopy
}
func TestRateLimitersIntegration(t *testing.T) {
// by default no rate limiters are configured
configCopy := Config
Config.RateLimitersConfig = []RateLimiterConfig{
{
Average: 100,
Period: 10,
Burst: 5,
Type: int(rateLimiterTypeGlobal),
Protocols: rateLimiterProtocolValues,
},
{
Average: 1,
Period: 1000,
Burst: 1,
Type: int(rateLimiterTypeSource),
Protocols: []string{ProtocolWebDAV, ProtocolWebDAV, ProtocolFTP},
GenerateDefenderEvents: true,
EntriesSoftLimit: 100,
EntriesHardLimit: 150,
},
}
err := Initialize(Config)
assert.Error(t, err)
Config.RateLimitersConfig[0].Period = 1000
Config.RateLimitersConfig[0].AllowList = []string{"1.1.1", "1.1.1.2"}
err = Initialize(Config)
if assert.Error(t, err) {
assert.Contains(t, err.Error(), "unable to parse rate limiter allow list")
}
Config.RateLimitersConfig[0].AllowList = []string{"172.16.24.7"}
Config.RateLimitersConfig[1].AllowList = []string{"172.16.0.0/16"}
err = Initialize(Config)
assert.NoError(t, err)
assert.Len(t, rateLimiters, 4)
assert.Len(t, rateLimiters[ProtocolSSH], 1)
assert.Len(t, rateLimiters[ProtocolFTP], 2)
assert.Len(t, rateLimiters[ProtocolWebDAV], 2)
assert.Len(t, rateLimiters[ProtocolHTTP], 1)
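// the global limiter covers every protocol, the per-source one only WebDAV and FTP,
// hence two limiters for those protocols and a single one for SSH and HTTP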
source1 := "127.1.1.1"
source2 := "127.1.1.2"
source3 := "172.16.24.7" // whitelisted
_, err = LimitRate(ProtocolSSH, source1)
assert.NoError(t, err)
_, err = LimitRate(ProtocolFTP, source1)
assert.NoError(t, err)
// sleep to allow the configured burst to be added back to the token bucket.
// This sleep is not enough to replenish the per-source burst
time.Sleep(20 * time.Millisecond)
_, err = LimitRate(ProtocolWebDAV, source2)
assert.NoError(t, err)
_, err = LimitRate(ProtocolFTP, source1)
assert.Error(t, err)
_, err = LimitRate(ProtocolWebDAV, source2)
assert.Error(t, err)
_, err = LimitRate(ProtocolSSH, source1)
assert.NoError(t, err)
_, err = LimitRate(ProtocolSSH, source2)
assert.NoError(t, err)
for i := 0; i < 10; i++ {
_, err = LimitRate(ProtocolWebDAV, source3)
assert.NoError(t, err)
}
Config = configCopy
}
func TestMaxConnections(t *testing.T) {
oldValue := Config.MaxTotalConnections
perHost := Config.MaxPerHostConnections
Config.MaxPerHostConnections = 0
ipAddr := "192.168.7.8"
assert.True(t, Connections.IsNewConnectionAllowed(ipAddr))
Config.MaxTotalConnections = 1
Config.MaxPerHostConnections = perHost
assert.True(t, Connections.IsNewConnectionAllowed(ipAddr))
c := NewBaseConnection("id", ProtocolSFTP, "", "", dataprovider.User{})
fakeConn := &fakeConnection{
BaseConnection: c,
}
Connections.Add(fakeConn)
assert.Len(t, Connections.GetStats(), 1)
assert.False(t, Connections.IsNewConnectionAllowed(ipAddr))
res := Connections.Close(fakeConn.GetID())
assert.True(t, res)
assert.Eventually(t, func() bool { return len(Connections.GetStats()) == 0 }, 300*time.Millisecond, 50*time.Millisecond)
assert.True(t, Connections.IsNewConnectionAllowed(ipAddr))
Connections.AddClientConnection(ipAddr)
Connections.AddClientConnection(ipAddr)
assert.False(t, Connections.IsNewConnectionAllowed(ipAddr))
Connections.RemoveClientConnection(ipAddr)
assert.True(t, Connections.IsNewConnectionAllowed(ipAddr))
Connections.RemoveClientConnection(ipAddr)
Config.MaxTotalConnections = oldValue
}
func TestMaxConnectionPerHost(t *testing.T) {
oldValue := Config.MaxPerHostConnections
Config.MaxPerHostConnections = 2
ipAddr := "192.168.9.9"
Connections.AddClientConnection(ipAddr)
assert.True(t, Connections.IsNewConnectionAllowed(ipAddr))
Connections.AddClientConnection(ipAddr)
assert.True(t, Connections.IsNewConnectionAllowed(ipAddr))
Connections.AddClientConnection(ipAddr)
assert.False(t, Connections.IsNewConnectionAllowed(ipAddr))
assert.Equal(t, int32(3), Connections.GetClientConnections())
Connections.RemoveClientConnection(ipAddr)
Connections.RemoveClientConnection(ipAddr)
Connections.RemoveClientConnection(ipAddr)
assert.Equal(t, int32(0), Connections.GetClientConnections())
Config.MaxPerHostConnections = oldValue
}
func TestIdleConnections(t *testing.T) {
configCopy := Config
Config.IdleTimeout = 1
err := Initialize(Config)
assert.NoError(t, err)
conn1, conn2 := net.Pipe()
customConn1 := &customNetConn{
Conn: conn1,
id: "id1",
}
customConn2 := &customNetConn{
Conn: conn2,
id: "id2",
}
sshConn1 := NewSSHConnection(customConn1.id, customConn1)
sshConn2 := NewSSHConnection(customConn2.id, customConn2)
username := "test_user"
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: username,
},
}
c := NewBaseConnection(sshConn1.id+"_1", ProtocolSFTP, "", "", user)
c.lastActivity = time.Now().Add(-24 * time.Hour).UnixNano()
fakeConn := &fakeConnection{
BaseConnection: c,
}
// both ssh connections are expired but they should get removed only
// if there is no associated connection
sshConn1.lastActivity = c.lastActivity
sshConn2.lastActivity = c.lastActivity
Connections.AddSSHConnection(sshConn1)
Connections.Add(fakeConn)
assert.Equal(t, Connections.GetActiveSessions(username), 1)
c = NewBaseConnection(sshConn2.id+"_1", ProtocolSSH, "", "", user)
fakeConn = &fakeConnection{
BaseConnection: c,
}
Connections.AddSSHConnection(sshConn2)
Connections.Add(fakeConn)
assert.Equal(t, Connections.GetActiveSessions(username), 2)
cFTP := NewBaseConnection("id2", ProtocolFTP, "", "", dataprovider.User{})
cFTP.lastActivity = time.Now().UnixNano()
fakeConn = &fakeConnection{
BaseConnection: cFTP,
}
Connections.Add(fakeConn)
assert.Equal(t, Connections.GetActiveSessions(username), 2)
assert.Len(t, Connections.GetStats(), 3)
Connections.RLock()
assert.Len(t, Connections.sshConnections, 2)
Connections.RUnlock()
startIdleTimeoutTicker(100 * time.Millisecond)
assert.Eventually(t, func() bool { return Connections.GetActiveSessions(username) == 1 }, 1*time.Second, 200*time.Millisecond)
assert.Eventually(t, func() bool {
Connections.RLock()
defer Connections.RUnlock()
return len(Connections.sshConnections) == 1
}, 1*time.Second, 200*time.Millisecond)
stopIdleTimeoutTicker()
assert.Len(t, Connections.GetStats(), 2)
c.lastActivity = time.Now().Add(-24 * time.Hour).UnixNano()
cFTP.lastActivity = time.Now().Add(-24 * time.Hour).UnixNano()
sshConn2.lastActivity = c.lastActivity
startIdleTimeoutTicker(100 * time.Millisecond)
assert.Eventually(t, func() bool { return len(Connections.GetStats()) == 0 }, 1*time.Second, 200*time.Millisecond)
assert.Eventually(t, func() bool {
Connections.RLock()
defer Connections.RUnlock()
return len(Connections.sshConnections) == 0
}, 1*time.Second, 200*time.Millisecond)
assert.Equal(t, int32(0), Connections.GetClientConnections())
stopIdleTimeoutTicker()
assert.True(t, customConn1.isClosed)
assert.True(t, customConn2.isClosed)
Config = configCopy
}
func TestCloseConnection(t *testing.T) {
c := NewBaseConnection("id", ProtocolSFTP, "", "", dataprovider.User{})
fakeConn := &fakeConnection{
BaseConnection: c,
}
assert.True(t, Connections.IsNewConnectionAllowed("127.0.0.1"))
Connections.Add(fakeConn)
assert.Len(t, Connections.GetStats(), 1)
res := Connections.Close(fakeConn.GetID())
assert.True(t, res)
assert.Eventually(t, func() bool { return len(Connections.GetStats()) == 0 }, 300*time.Millisecond, 50*time.Millisecond)
res = Connections.Close(fakeConn.GetID())
assert.False(t, res)
Connections.Remove(fakeConn.GetID())
}
func TestSwapConnection(t *testing.T) {
c := NewBaseConnection("id", ProtocolFTP, "", "", dataprovider.User{})
fakeConn := &fakeConnection{
BaseConnection: c,
}
Connections.Add(fakeConn)
if assert.Len(t, Connections.GetStats(), 1) {
assert.Equal(t, "", Connections.GetStats()[0].Username)
}
c = NewBaseConnection("id", ProtocolFTP, "", "", dataprovider.User{
BaseUser: sdk.BaseUser{
Username: userTestUsername,
},
})
fakeConn = &fakeConnection{
BaseConnection: c,
}
err := Connections.Swap(fakeConn)
assert.NoError(t, err)
if assert.Len(t, Connections.GetStats(), 1) {
assert.Equal(t, userTestUsername, Connections.GetStats()[0].Username)
}
res := Connections.Close(fakeConn.GetID())
assert.True(t, res)
assert.Eventually(t, func() bool { return len(Connections.GetStats()) == 0 }, 300*time.Millisecond, 50*time.Millisecond)
err = Connections.Swap(fakeConn)
assert.Error(t, err)
}
func TestAtomicUpload(t *testing.T) {
configCopy := Config
Config.UploadMode = UploadModeStandard
assert.False(t, Config.IsAtomicUploadEnabled())
Config.UploadMode = UploadModeAtomic
assert.True(t, Config.IsAtomicUploadEnabled())
Config.UploadMode = UploadModeAtomicWithResume
assert.True(t, Config.IsAtomicUploadEnabled())
Config = configCopy
}
func TestConnectionStatus(t *testing.T) {
username := "test_user"
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: username,
},
}
fs := vfs.NewOsFs("", os.TempDir(), "")
c1 := NewBaseConnection("id1", ProtocolSFTP, "", "", user)
fakeConn1 := &fakeConnection{
BaseConnection: c1,
}
t1 := NewBaseTransfer(nil, c1, nil, "/p1", "/p1", "/r1", TransferUpload, 0, 0, 0, true, fs)
t1.BytesReceived = 123
t2 := NewBaseTransfer(nil, c1, nil, "/p2", "/p2", "/r2", TransferDownload, 0, 0, 0, true, fs)
t2.BytesSent = 456
c2 := NewBaseConnection("id2", ProtocolSSH, "", "", user)
fakeConn2 := &fakeConnection{
BaseConnection: c2,
command: "md5sum",
}
c3 := NewBaseConnection("id3", ProtocolWebDAV, "", "", user)
fakeConn3 := &fakeConnection{
BaseConnection: c3,
command: "PROPFIND",
}
t3 := NewBaseTransfer(nil, c3, nil, "/p2", "/p2", "/r2", TransferDownload, 0, 0, 0, true, fs)
Connections.Add(fakeConn1)
Connections.Add(fakeConn2)
Connections.Add(fakeConn3)
stats := Connections.GetStats()
assert.Len(t, stats, 3)
for _, stat := range stats {
assert.Equal(t, stat.Username, username)
assert.True(t, strings.HasPrefix(stat.GetConnectionInfo(), stat.Protocol))
assert.True(t, strings.HasPrefix(stat.GetConnectionDuration(), "00:"))
if stat.ConnectionID == "SFTP_id1" {
assert.Len(t, stat.Transfers, 2)
assert.Greater(t, len(stat.GetTransfersAsString()), 0)
for _, tr := range stat.Transfers {
if tr.OperationType == operationDownload {
assert.True(t, strings.HasPrefix(tr.getConnectionTransferAsString(), "DL"))
} else if tr.OperationType == operationUpload {
assert.True(t, strings.HasPrefix(tr.getConnectionTransferAsString(), "UL"))
}
}
} else if stat.ConnectionID == "DAV_id3" {
assert.Len(t, stat.Transfers, 1)
assert.Greater(t, len(stat.GetTransfersAsString()), 0)
} else {
assert.Equal(t, 0, len(stat.GetTransfersAsString()))
}
}
err := t1.Close()
assert.NoError(t, err)
err = t2.Close()
assert.NoError(t, err)
err = fakeConn3.SignalTransfersAbort()
assert.NoError(t, err)
assert.Equal(t, int32(1), atomic.LoadInt32(&t3.AbortTransfer))
err = t3.Close()
assert.NoError(t, err)
err = fakeConn3.SignalTransfersAbort()
assert.Error(t, err)
Connections.Remove(fakeConn1.GetID())
stats = Connections.GetStats()
assert.Len(t, stats, 2)
assert.Equal(t, fakeConn3.GetID(), stats[0].ConnectionID)
assert.Equal(t, fakeConn2.GetID(), stats[1].ConnectionID)
Connections.Remove(fakeConn2.GetID())
stats = Connections.GetStats()
assert.Len(t, stats, 1)
assert.Equal(t, fakeConn3.GetID(), stats[0].ConnectionID)
Connections.Remove(fakeConn3.GetID())
stats = Connections.GetStats()
assert.Len(t, stats, 0)
}
func TestQuotaScans(t *testing.T) {
username := "username"
assert.True(t, QuotaScans.AddUserQuotaScan(username))
assert.False(t, QuotaScans.AddUserQuotaScan(username))
usersScans := QuotaScans.GetUsersQuotaScans()
if assert.Len(t, usersScans, 1) {
assert.Equal(t, usersScans[0].Username, username)
assert.Equal(t, QuotaScans.UserScans[0].StartTime, usersScans[0].StartTime)
QuotaScans.UserScans[0].StartTime = 0
assert.NotEqual(t, QuotaScans.UserScans[0].StartTime, usersScans[0].StartTime)
}
assert.True(t, QuotaScans.RemoveUserQuotaScan(username))
assert.False(t, QuotaScans.RemoveUserQuotaScan(username))
assert.Len(t, QuotaScans.GetUsersQuotaScans(), 0)
assert.Len(t, usersScans, 1)
folderName := "folder"
assert.True(t, QuotaScans.AddVFolderQuotaScan(folderName))
assert.False(t, QuotaScans.AddVFolderQuotaScan(folderName))
if assert.Len(t, QuotaScans.GetVFoldersQuotaScans(), 1) {
assert.Equal(t, QuotaScans.GetVFoldersQuotaScans()[0].Name, folderName)
}
assert.True(t, QuotaScans.RemoveVFolderQuotaScan(folderName))
assert.False(t, QuotaScans.RemoveVFolderQuotaScan(folderName))
assert.Len(t, QuotaScans.GetVFoldersQuotaScans(), 0)
}
func TestProxyProtocolVersion(t *testing.T) {
c := Configuration{
ProxyProtocol: 0,
}
_, err := c.GetProxyListener(nil)
if assert.Error(t, err) {
assert.Contains(t, err.Error(), "proxy protocol not configured")
}
c.ProxyProtocol = 1
proxyListener, err := c.GetProxyListener(nil)
assert.NoError(t, err)
assert.Nil(t, proxyListener.Policy)
c.ProxyProtocol = 2
proxyListener, err = c.GetProxyListener(nil)
assert.NoError(t, err)
assert.NotNil(t, proxyListener.Policy)
c.ProxyProtocol = 1
c.ProxyAllowed = []string{"invalid"}
_, err = c.GetProxyListener(nil)
assert.Error(t, err)
c.ProxyProtocol = 2
_, err = c.GetProxyListener(nil)
assert.Error(t, err)
}
func TestStartupHook(t *testing.T) {
Config.StartupHook = ""
assert.NoError(t, Config.ExecuteStartupHook())
Config.StartupHook = "http://foo\x7f.com/startup"
assert.Error(t, Config.ExecuteStartupHook())
Config.StartupHook = "http://invalid:5678/"
assert.Error(t, Config.ExecuteStartupHook())
Config.StartupHook = fmt.Sprintf("http://%v", httpAddr)
assert.NoError(t, Config.ExecuteStartupHook())
Config.StartupHook = "invalidhook"
assert.Error(t, Config.ExecuteStartupHook())
if runtime.GOOS != osWindows {
hookCmd, err := exec.LookPath("true")
assert.NoError(t, err)
Config.StartupHook = hookCmd
assert.NoError(t, Config.ExecuteStartupHook())
}
Config.StartupHook = ""
}
func TestPostDisconnectHook(t *testing.T) {
Config.PostDisconnectHook = "http://127.0.0.1/"
remoteAddr := "127.0.0.1:80"
Config.checkPostDisconnectHook(remoteAddr, ProtocolHTTP, "", "", time.Now())
Config.checkPostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())
Config.PostDisconnectHook = "http://bar\x7f.com/"
Config.executePostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())
Config.PostDisconnectHook = fmt.Sprintf("http://%v", httpAddr)
Config.executePostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())
Config.PostDisconnectHook = "relativePath"
Config.executePostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())
if runtime.GOOS == osWindows {
Config.PostDisconnectHook = "C:\\a\\bad\\command"
Config.executePostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())
} else {
Config.PostDisconnectHook = "/invalid/path"
Config.executePostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())
hookCmd, err := exec.LookPath("true")
assert.NoError(t, err)
Config.PostDisconnectHook = hookCmd
Config.executePostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())
}
Config.PostDisconnectHook = ""
}
func TestPostConnectHook(t *testing.T) {
Config.PostConnectHook = ""
ipAddr := "127.0.0.1"
assert.NoError(t, Config.ExecutePostConnectHook(ipAddr, ProtocolFTP))
Config.PostConnectHook = "http://foo\x7f.com/"
assert.Error(t, Config.ExecutePostConnectHook(ipAddr, ProtocolSFTP))
Config.PostConnectHook = "http://invalid:1234/"
assert.Error(t, Config.ExecutePostConnectHook(ipAddr, ProtocolSFTP))
Config.PostConnectHook = fmt.Sprintf("http://%v/404", httpAddr)
assert.Error(t, Config.ExecutePostConnectHook(ipAddr, ProtocolFTP))
Config.PostConnectHook = fmt.Sprintf("http://%v", httpAddr)
assert.NoError(t, Config.ExecutePostConnectHook(ipAddr, ProtocolFTP))
Config.PostConnectHook = "invalid"
assert.Error(t, Config.ExecutePostConnectHook(ipAddr, ProtocolFTP))
if runtime.GOOS == osWindows {
Config.PostConnectHook = "C:\\bad\\command"
assert.Error(t, Config.ExecutePostConnectHook(ipAddr, ProtocolSFTP))
} else {
Config.PostConnectHook = "/invalid/path"
assert.Error(t, Config.ExecutePostConnectHook(ipAddr, ProtocolSFTP))
hookCmd, err := exec.LookPath("true")
assert.NoError(t, err)
Config.PostConnectHook = hookCmd
assert.NoError(t, Config.ExecutePostConnectHook(ipAddr, ProtocolSFTP))
}
Config.PostConnectHook = ""
}
func TestCryptoConvertFileInfo(t *testing.T) {
name := "name"
fs, err := vfs.NewCryptFs("connID1", os.TempDir(), "", vfs.CryptFsConfig{
Passphrase: kms.NewPlainSecret("secret"),
})
require.NoError(t, err)
cryptFs := fs.(*vfs.CryptFs)
info := vfs.NewFileInfo(name, true, 48, time.Now(), false)
assert.Equal(t, info, cryptFs.ConvertFileInfo(info))
info = vfs.NewFileInfo(name, false, 48, time.Now(), false)
assert.NotEqual(t, info.Size(), cryptFs.ConvertFileInfo(info).Size())
info = vfs.NewFileInfo(name, false, 33, time.Now(), false)
assert.Equal(t, int64(0), cryptFs.ConvertFileInfo(info).Size())
info = vfs.NewFileInfo(name, false, 1, time.Now(), false)
assert.Equal(t, int64(0), cryptFs.ConvertFileInfo(info).Size())
}
func TestFolderCopy(t *testing.T) {
folder := vfs.BaseVirtualFolder{
ID: 1,
Name: "name",
MappedPath: filepath.Clean(os.TempDir()),
UsedQuotaSize: 4096,
UsedQuotaFiles: 2,
LastQuotaUpdate: util.GetTimeAsMsSinceEpoch(time.Now()),
Users: []string{"user1", "user2"},
}
folderCopy := folder.GetACopy()
folder.ID = 2
folder.Users = []string{"user3"}
require.Len(t, folderCopy.Users, 2)
require.True(t, util.IsStringInSlice("user1", folderCopy.Users))
require.True(t, util.IsStringInSlice("user2", folderCopy.Users))
require.Equal(t, int64(1), folderCopy.ID)
require.Equal(t, folder.Name, folderCopy.Name)
require.Equal(t, folder.MappedPath, folderCopy.MappedPath)
require.Equal(t, folder.UsedQuotaSize, folderCopy.UsedQuotaSize)
require.Equal(t, folder.UsedQuotaFiles, folderCopy.UsedQuotaFiles)
require.Equal(t, folder.LastQuotaUpdate, folderCopy.LastQuotaUpdate)
folder.FsConfig = vfs.Filesystem{
CryptConfig: vfs.CryptFsConfig{
Passphrase: kms.NewPlainSecret("crypto secret"),
},
}
folderCopy = folder.GetACopy()
folder.FsConfig.CryptConfig.Passphrase = kms.NewEmptySecret()
require.Len(t, folderCopy.Users, 1)
require.True(t, util.IsStringInSlice("user3", folderCopy.Users))
require.Equal(t, int64(2), folderCopy.ID)
require.Equal(t, folder.Name, folderCopy.Name)
require.Equal(t, folder.MappedPath, folderCopy.MappedPath)
require.Equal(t, folder.UsedQuotaSize, folderCopy.UsedQuotaSize)
require.Equal(t, folder.UsedQuotaFiles, folderCopy.UsedQuotaFiles)
require.Equal(t, folder.LastQuotaUpdate, folderCopy.LastQuotaUpdate)
require.Equal(t, "crypto secret", folderCopy.FsConfig.CryptConfig.Passphrase.GetPayload())
}
func TestCachedFs(t *testing.T) {
user := dataprovider.User{
BaseUser: sdk.BaseUser{
HomeDir: filepath.Clean(os.TempDir()),
},
}
conn := NewBaseConnection("id", ProtocolSFTP, "", "", user)
// changing the user should not affect the connection
user.HomeDir = filepath.Join(os.TempDir(), "temp")
err := os.Mkdir(user.HomeDir, os.ModePerm)
assert.NoError(t, err)
fs, err := user.GetFilesystem("")
assert.NoError(t, err)
p, err := fs.ResolvePath("/")
assert.NoError(t, err)
assert.Equal(t, user.GetHomeDir(), p)
_, p, err = conn.GetFsAndResolvedPath("/")
assert.NoError(t, err)
assert.Equal(t, filepath.Clean(os.TempDir()), p)
user.FsConfig.Provider = sdk.S3FilesystemProvider
_, err = user.GetFilesystem("")
assert.Error(t, err)
conn.User.FsConfig.Provider = sdk.S3FilesystemProvider
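// the connection keeps using its cached filesystem, so the invalid provider change has no effect here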
_, p, err = conn.GetFsAndResolvedPath("/")
assert.NoError(t, err)
assert.Equal(t, filepath.Clean(os.TempDir()), p)
err = os.Remove(user.HomeDir)
assert.NoError(t, err)
}
func TestParseAllowedIPAndRanges(t *testing.T) {
_, err := util.ParseAllowedIPAndRanges([]string{"1.1.1.1", "not an ip"})
assert.Error(t, err)
_, err = util.ParseAllowedIPAndRanges([]string{"1.1.1.5", "192.168.1.0/240"})
assert.Error(t, err)
allow, err := util.ParseAllowedIPAndRanges([]string{"192.168.1.2", "172.16.0.0/24"})
assert.NoError(t, err)
assert.True(t, allow[0](net.ParseIP("192.168.1.2")))
assert.False(t, allow[0](net.ParseIP("192.168.2.2")))
assert.True(t, allow[1](net.ParseIP("172.16.0.1")))
assert.False(t, allow[1](net.ParseIP("172.16.1.1")))
}
func TestHideConfidentialData(t *testing.T) {
for _, provider := range []sdk.FilesystemProvider{sdk.LocalFilesystemProvider,
sdk.CryptedFilesystemProvider, sdk.S3FilesystemProvider, sdk.GCSFilesystemProvider,
sdk.AzureBlobFilesystemProvider, sdk.SFTPFilesystemProvider,
} {
u := dataprovider.User{
FsConfig: vfs.Filesystem{
Provider: provider,
},
}
u.PrepareForRendering()
f := vfs.BaseVirtualFolder{
FsConfig: vfs.Filesystem{
Provider: provider,
},
}
f.PrepareForRendering()
}
a := dataprovider.Admin{}
a.HideConfidentialData()
}
func TestUserPerms(t *testing.T) {
u := dataprovider.User{}
u.Permissions = make(map[string][]string)
u.Permissions["/"] = []string{dataprovider.PermUpload, dataprovider.PermDelete}
assert.True(t, u.HasAnyPerm([]string{dataprovider.PermRename, dataprovider.PermDelete}, "/"))
assert.False(t, u.HasAnyPerm([]string{dataprovider.PermRename, dataprovider.PermCreateDirs}, "/"))
u.Permissions["/"] = []string{dataprovider.PermDelete, dataprovider.PermCreateDirs}
assert.True(t, u.HasPermsDeleteAll("/"))
assert.False(t, u.HasPermsRenameAll("/"))
u.Permissions["/"] = []string{dataprovider.PermDeleteDirs, dataprovider.PermDeleteFiles, dataprovider.PermRenameDirs}
assert.True(t, u.HasPermsDeleteAll("/"))
assert.False(t, u.HasPermsRenameAll("/"))
u.Permissions["/"] = []string{dataprovider.PermDeleteDirs, dataprovider.PermRenameFiles, dataprovider.PermRenameDirs}
assert.False(t, u.HasPermsDeleteAll("/"))
assert.True(t, u.HasPermsRenameAll("/"))
}
func BenchmarkBcryptHashing(b *testing.B) {
bcryptPassword := "bcryptpassword"
for i := 0; i < b.N; i++ {
_, err := bcrypt.GenerateFromPassword([]byte(bcryptPassword), 10)
if err != nil {
panic(err)
}
}
}
func BenchmarkCompareBcryptPassword(b *testing.B) {
bcryptPassword := "$2a$10$lPDdnDimJZ7d5/GwL6xDuOqoZVRXok6OHHhivCnanWUtcgN0Zafki"
for i := 0; i < b.N; i++ {
err := bcrypt.CompareHashAndPassword([]byte(bcryptPassword), []byte("password"))
if err != nil {
panic(err)
}
}
}
func BenchmarkArgon2Hashing(b *testing.B) {
argonPassword := "argon2password"
for i := 0; i < b.N; i++ {
_, err := argon2id.CreateHash(argonPassword, argon2id.DefaultParams)
if err != nil {
panic(err)
}
}
}
func BenchmarkCompareArgon2Password(b *testing.B) {
argon2Password := "$argon2id$v=19$m=65536,t=1,p=2$aOoAOdAwvzhOgi7wUFjXlw$wn/y37dBWdKHtPXHR03nNaKHWKPXyNuVXOknaU+YZ+s"
for i := 0; i < b.N; i++ {
_, err := argon2id.ComparePasswordAndHash("password", argon2Password)
if err != nil {
panic(err)
}
}
}
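
As a side note, the hashing benchmarks above can be run in isolation with the standard Go tooling, for example:

	go test -run '^$' -bench 'Bcrypt|Argon2' ./common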

1263
common/connection.go Normal file

File diff suppressed because it is too large

447
common/connection_test.go Normal file

@@ -0,0 +1,447 @@
package common
import (
"os"
"path"
"path/filepath"
"runtime"
"testing"
"time"
"github.com/pkg/sftp"
"github.com/rs/xid"
"github.com/sftpgo/sdk"
"github.com/stretchr/testify/assert"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/vfs"
)
// MockOsFs is a mockable OsFs implementation
type MockOsFs struct {
vfs.Fs
hasVirtualFolders bool
}
// Name returns the name for the Fs implementation
func (fs *MockOsFs) Name() string {
return "mockOsFs"
}
// HasVirtualFolders returns true if folders are emulated
func (fs *MockOsFs) HasVirtualFolders() bool {
return fs.hasVirtualFolders
}
func (fs *MockOsFs) IsUploadResumeSupported() bool {
return !fs.hasVirtualFolders
}
func (fs *MockOsFs) Chtimes(name string, atime, mtime time.Time, isUploading bool) error {
return vfs.ErrVfsUnsupported
}
func newMockOsFs(hasVirtualFolders bool, connectionID, rootDir string) vfs.Fs {
return &MockOsFs{
Fs: vfs.NewOsFs(connectionID, rootDir, ""),
hasVirtualFolders: hasVirtualFolders,
}
}
func TestRemoveErrors(t *testing.T) {
mappedPath := filepath.Join(os.TempDir(), "map")
homePath := filepath.Join(os.TempDir(), "home")
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "remove_errors_user",
HomeDir: homePath,
},
VirtualFolders: []vfs.VirtualFolder{
{
BaseVirtualFolder: vfs.BaseVirtualFolder{
Name: filepath.Base(mappedPath),
MappedPath: mappedPath,
},
VirtualPath: "/virtualpath",
},
},
}
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
fs := vfs.NewOsFs("", os.TempDir(), "")
conn := NewBaseConnection("", ProtocolFTP, "", "", user)
err := conn.IsRemoveDirAllowed(fs, mappedPath, "/virtualpath1")
if assert.Error(t, err) {
assert.Contains(t, err.Error(), "permission denied")
}
err = conn.RemoveFile(fs, filepath.Join(homePath, "missing_file"), "/missing_file",
vfs.NewFileInfo("info", false, 100, time.Now(), false))
assert.Error(t, err)
}
func TestSetStatMode(t *testing.T) {
oldSetStatMode := Config.SetstatMode
Config.SetstatMode = 1
fakePath := "fake path"
user := dataprovider.User{
BaseUser: sdk.BaseUser{
HomeDir: os.TempDir(),
},
}
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
fs := newMockOsFs(true, "", user.GetHomeDir())
conn := NewBaseConnection("", ProtocolWebDAV, "", "", user)
err := conn.handleChmod(fs, fakePath, fakePath, nil)
assert.NoError(t, err)
err = conn.handleChown(fs, fakePath, fakePath, nil)
assert.NoError(t, err)
err = conn.handleChtimes(fs, fakePath, fakePath, nil)
assert.NoError(t, err)
Config.SetstatMode = 2
err = conn.handleChmod(fs, fakePath, fakePath, nil)
assert.NoError(t, err)
err = conn.handleChtimes(fs, fakePath, fakePath, &StatAttributes{
Atime: time.Now(),
Mtime: time.Now(),
})
assert.NoError(t, err)
Config.SetstatMode = oldSetStatMode
}
func TestRecursiveRenameWalkError(t *testing.T) {
fs := vfs.NewOsFs("", os.TempDir(), "")
conn := NewBaseConnection("", ProtocolWebDAV, "", "", dataprovider.User{})
err := conn.checkRecursiveRenameDirPermissions(fs, fs, "/source", "/target")
assert.ErrorIs(t, err, os.ErrNotExist)
}
func TestCrossRenameFsErrors(t *testing.T) {
fs := vfs.NewOsFs("", os.TempDir(), "")
conn := NewBaseConnection("", ProtocolWebDAV, "", "", dataprovider.User{})
res := conn.hasSpaceForCrossRename(fs, vfs.QuotaCheckResult{}, 1, "missingsource")
assert.False(t, res)
if runtime.GOOS != osWindows {
dirPath := filepath.Join(os.TempDir(), "d")
err := os.Mkdir(dirPath, os.ModePerm)
assert.NoError(t, err)
err = os.Chmod(dirPath, 0001)
assert.NoError(t, err)
res = conn.hasSpaceForCrossRename(fs, vfs.QuotaCheckResult{}, 1, dirPath)
assert.False(t, res)
err = os.Chmod(dirPath, os.ModePerm)
assert.NoError(t, err)
err = os.Remove(dirPath)
assert.NoError(t, err)
}
}
func TestRenameVirtualFolders(t *testing.T) {
vdir := "/avdir"
u := dataprovider.User{}
u.VirtualFolders = append(u.VirtualFolders, vfs.VirtualFolder{
BaseVirtualFolder: vfs.BaseVirtualFolder{
Name: "name",
MappedPath: "mappedPath",
},
VirtualPath: vdir,
})
fs := vfs.NewOsFs("", os.TempDir(), "")
conn := NewBaseConnection("", ProtocolFTP, "", "", u)
res := conn.isRenamePermitted(fs, fs, "source", "target", vdir, "vdirtarget", nil)
assert.False(t, res)
}
func TestRenamePerms(t *testing.T) {
src := "source"
target := "target"
u := dataprovider.User{}
u.Permissions = map[string][]string{}
u.Permissions["/"] = []string{dataprovider.PermCreateDirs, dataprovider.PermUpload, dataprovider.PermCreateSymlinks,
dataprovider.PermDeleteFiles}
conn := NewBaseConnection("", ProtocolSFTP, "", "", u)
assert.False(t, conn.hasRenamePerms(src, target, nil))
u.Permissions["/"] = []string{dataprovider.PermCreateDirs, dataprovider.PermUpload, dataprovider.PermCreateSymlinks,
dataprovider.PermDeleteFiles, dataprovider.PermDeleteDirs}
assert.True(t, conn.hasRenamePerms(src, target, nil))
u.Permissions["/"] = []string{dataprovider.PermCreateDirs, dataprovider.PermUpload, dataprovider.PermDeleteFiles,
dataprovider.PermDeleteDirs}
assert.False(t, conn.hasRenamePerms(src, target, nil))
info := vfs.NewFileInfo(src, true, 0, time.Now(), false)
u.Permissions["/"] = []string{dataprovider.PermCreateDirs, dataprovider.PermUpload, dataprovider.PermDeleteFiles}
assert.False(t, conn.hasRenamePerms(src, target, info))
u.Permissions["/"] = []string{dataprovider.PermCreateDirs, dataprovider.PermUpload, dataprovider.PermDeleteDirs}
assert.True(t, conn.hasRenamePerms(src, target, info))
u.Permissions["/"] = []string{dataprovider.PermDownload, dataprovider.PermUpload, dataprovider.PermDeleteDirs}
assert.False(t, conn.hasRenamePerms(src, target, info))
}
func TestUpdateQuotaAfterRename(t *testing.T) {
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: userTestUsername,
HomeDir: filepath.Join(os.TempDir(), "home"),
},
}
mappedPath := filepath.Join(os.TempDir(), "vdir")
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
user.VirtualFolders = append(user.VirtualFolders, vfs.VirtualFolder{
BaseVirtualFolder: vfs.BaseVirtualFolder{
MappedPath: mappedPath,
},
VirtualPath: "/vdir",
QuotaFiles: -1,
QuotaSize: -1,
})
user.VirtualFolders = append(user.VirtualFolders, vfs.VirtualFolder{
BaseVirtualFolder: vfs.BaseVirtualFolder{
MappedPath: mappedPath,
},
VirtualPath: "/vdir1",
QuotaFiles: -1,
QuotaSize: -1,
})
err := os.MkdirAll(user.GetHomeDir(), os.ModePerm)
assert.NoError(t, err)
err = os.MkdirAll(mappedPath, os.ModePerm)
assert.NoError(t, err)
fs, err := user.GetFilesystem("id")
assert.NoError(t, err)
c := NewBaseConnection("", ProtocolSFTP, "", "", user)
request := sftp.NewRequest("Rename", "/testfile")
if runtime.GOOS != osWindows {
request.Filepath = "/dir"
request.Target = path.Join("/vdir", "dir")
testDirPath := filepath.Join(mappedPath, "dir")
err := os.MkdirAll(testDirPath, os.ModePerm)
assert.NoError(t, err)
err = os.Chmod(testDirPath, 0001)
assert.NoError(t, err)
err = c.updateQuotaAfterRename(fs, request.Filepath, request.Target, testDirPath, 0)
assert.Error(t, err)
err = os.Chmod(testDirPath, os.ModePerm)
assert.NoError(t, err)
}
testFile1 := "/testfile1"
request.Target = testFile1
request.Filepath = path.Join("/vdir", "file")
err = c.updateQuotaAfterRename(fs, request.Filepath, request.Target, filepath.Join(mappedPath, "file"), 0)
assert.Error(t, err)
err = os.WriteFile(filepath.Join(mappedPath, "file"), []byte("test content"), os.ModePerm)
assert.NoError(t, err)
request.Filepath = testFile1
request.Target = path.Join("/vdir", "file")
err = c.updateQuotaAfterRename(fs, request.Filepath, request.Target, filepath.Join(mappedPath, "file"), 12)
assert.NoError(t, err)
err = os.WriteFile(filepath.Join(user.GetHomeDir(), "testfile1"), []byte("test content"), os.ModePerm)
assert.NoError(t, err)
request.Target = testFile1
request.Filepath = path.Join("/vdir", "file")
err = c.updateQuotaAfterRename(fs, request.Filepath, request.Target, filepath.Join(mappedPath, "file"), 12)
assert.NoError(t, err)
request.Target = path.Join("/vdir1", "file")
request.Filepath = path.Join("/vdir", "file")
err = c.updateQuotaAfterRename(fs, request.Filepath, request.Target, filepath.Join(mappedPath, "file"), 12)
assert.NoError(t, err)
err = os.RemoveAll(mappedPath)
assert.NoError(t, err)
err = os.RemoveAll(user.GetHomeDir())
assert.NoError(t, err)
}
func TestErrorsMapping(t *testing.T) {
fs := vfs.NewOsFs("", os.TempDir(), "")
conn := NewBaseConnection("", ProtocolSFTP, "", "", dataprovider.User{BaseUser: sdk.BaseUser{HomeDir: os.TempDir()}})
for _, protocol := range supportedProtocols {
conn.SetProtocol(protocol)
err := conn.GetFsError(fs, os.ErrNotExist)
if protocol == ProtocolSFTP {
assert.ErrorIs(t, err, sftp.ErrSSHFxNoSuchFile)
} else if protocol == ProtocolWebDAV || protocol == ProtocolFTP || protocol == ProtocolHTTP ||
protocol == ProtocolHTTPShare || protocol == ProtocolDataRetention {
assert.EqualError(t, err, os.ErrNotExist.Error())
} else {
assert.EqualError(t, err, ErrNotExist.Error())
}
err = conn.GetFsError(fs, os.ErrPermission)
if protocol == ProtocolSFTP {
assert.EqualError(t, err, sftp.ErrSSHFxPermissionDenied.Error())
} else {
assert.EqualError(t, err, ErrPermissionDenied.Error())
}
err = conn.GetFsError(fs, os.ErrClosed)
if protocol == ProtocolSFTP {
assert.ErrorIs(t, err, sftp.ErrSSHFxFailure)
assert.Contains(t, err.Error(), os.ErrClosed.Error())
} else {
assert.EqualError(t, err, ErrGenericFailure.Error())
}
err = conn.GetFsError(fs, ErrPermissionDenied)
if protocol == ProtocolSFTP {
assert.ErrorIs(t, err, sftp.ErrSSHFxFailure)
assert.Contains(t, err.Error(), ErrPermissionDenied.Error())
} else {
assert.EqualError(t, err, ErrPermissionDenied.Error())
}
err = conn.GetFsError(fs, vfs.ErrVfsUnsupported)
if protocol == ProtocolSFTP {
assert.EqualError(t, err, sftp.ErrSSHFxOpUnsupported.Error())
} else {
assert.EqualError(t, err, ErrOpUnsupported.Error())
}
err = conn.GetFsError(fs, vfs.ErrStorageSizeUnavailable)
if protocol == ProtocolSFTP {
assert.ErrorIs(t, err, sftp.ErrSSHFxOpUnsupported)
assert.Contains(t, err.Error(), vfs.ErrStorageSizeUnavailable.Error())
} else {
assert.EqualError(t, err, vfs.ErrStorageSizeUnavailable.Error())
}
err = conn.GetQuotaExceededError()
assert.True(t, conn.IsQuotaExceededError(err))
err = conn.GetNotExistError()
assert.True(t, conn.IsNotExistError(err))
err = conn.GetFsError(fs, nil)
assert.NoError(t, err)
err = conn.GetOpUnsupportedError()
if protocol == ProtocolSFTP {
assert.EqualError(t, err, sftp.ErrSSHFxOpUnsupported.Error())
} else {
assert.EqualError(t, err, ErrOpUnsupported.Error())
}
}
}
func TestMaxWriteSize(t *testing.T) {
permissions := make(map[string][]string)
permissions["/"] = []string{dataprovider.PermAny}
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: userTestUsername,
Permissions: permissions,
HomeDir: filepath.Clean(os.TempDir()),
},
}
fs, err := user.GetFilesystem("123")
assert.NoError(t, err)
conn := NewBaseConnection("", ProtocolFTP, "", "", user)
quotaResult := vfs.QuotaCheckResult{
HasSpace: true,
}
size, err := conn.GetMaxWriteSize(quotaResult, false, 0, fs.IsUploadResumeSupported())
assert.NoError(t, err)
assert.Equal(t, int64(0), size)
conn.User.Filters.MaxUploadFileSize = 100
size, err = conn.GetMaxWriteSize(quotaResult, false, 0, fs.IsUploadResumeSupported())
assert.NoError(t, err)
assert.Equal(t, int64(100), size)
quotaResult.QuotaSize = 1000
size, err = conn.GetMaxWriteSize(quotaResult, false, 50, fs.IsUploadResumeSupported())
assert.NoError(t, err)
assert.Equal(t, int64(100), size)
quotaResult.QuotaSize = 1000
quotaResult.UsedSize = 990
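// 10 bytes of quota remain; adding the 50 bytes of the file being overwritten gives 60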
size, err = conn.GetMaxWriteSize(quotaResult, false, 50, fs.IsUploadResumeSupported())
assert.NoError(t, err)
assert.Equal(t, int64(60), size)
quotaResult.QuotaSize = 0
quotaResult.UsedSize = 0
size, err = conn.GetMaxWriteSize(quotaResult, true, 100, fs.IsUploadResumeSupported())
assert.True(t, conn.IsQuotaExceededError(err))
assert.Equal(t, int64(0), size)
size, err = conn.GetMaxWriteSize(quotaResult, true, 10, fs.IsUploadResumeSupported())
assert.NoError(t, err)
assert.Equal(t, int64(90), size)
fs = newMockOsFs(true, fs.ConnectionID(), user.GetHomeDir())
size, err = conn.GetMaxWriteSize(quotaResult, true, 100, fs.IsUploadResumeSupported())
assert.EqualError(t, err, ErrOpUnsupported.Error())
assert.Equal(t, int64(0), size)
}
func TestCheckParentDirsErrors(t *testing.T) {
permissions := make(map[string][]string)
permissions["/"] = []string{dataprovider.PermAny}
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: userTestUsername,
Permissions: permissions,
HomeDir: filepath.Clean(os.TempDir()),
},
FsConfig: vfs.Filesystem{
Provider: sdk.CryptedFilesystemProvider,
},
}
c := NewBaseConnection(xid.New().String(), ProtocolSFTP, "", "", user)
err := c.CheckParentDirs("/a/dir")
assert.Error(t, err)
user.FsConfig.Provider = sdk.LocalFilesystemProvider
user.VirtualFolders = nil
user.VirtualFolders = append(user.VirtualFolders, vfs.VirtualFolder{
BaseVirtualFolder: vfs.BaseVirtualFolder{
FsConfig: vfs.Filesystem{
Provider: sdk.CryptedFilesystemProvider,
},
},
VirtualPath: "/vdir",
})
user.VirtualFolders = append(user.VirtualFolders, vfs.VirtualFolder{
BaseVirtualFolder: vfs.BaseVirtualFolder{
MappedPath: filepath.Clean(os.TempDir()),
},
VirtualPath: "/vdir/sub",
})
c = NewBaseConnection(xid.New().String(), ProtocolSFTP, "", "", user)
err = c.CheckParentDirs("/vdir/sub/dir")
assert.Error(t, err)
user = dataprovider.User{
BaseUser: sdk.BaseUser{
Username: userTestUsername,
Permissions: permissions,
HomeDir: filepath.Clean(os.TempDir()),
},
FsConfig: vfs.Filesystem{
Provider: sdk.S3FilesystemProvider,
S3Config: vfs.S3FsConfig{
BaseS3FsConfig: sdk.BaseS3FsConfig{
Bucket: "buck",
Region: "us-east-1",
AccessKey: "key",
},
AccessSecret: kms.NewPlainSecret("s3secret"),
},
},
}
c = NewBaseConnection(xid.New().String(), ProtocolSFTP, "", "", user)
err = c.CheckParentDirs("/a/dir")
assert.NoError(t, err)
user.VirtualFolders = append(user.VirtualFolders, vfs.VirtualFolder{
BaseVirtualFolder: vfs.BaseVirtualFolder{
MappedPath: filepath.Clean(os.TempDir()),
},
VirtualPath: "/local/dir",
})
c = NewBaseConnection(xid.New().String(), ProtocolSFTP, "", "", user)
err = c.CheckParentDirs("/local/dir/sub-dir")
assert.NoError(t, err)
err = os.RemoveAll(filepath.Join(os.TempDir(), "sub-dir"))
assert.NoError(t, err)
}

464
common/dataretention.go Normal file

@@ -0,0 +1,464 @@
package common
import (
"bytes"
"context"
"encoding/json"
"fmt"
"net/http"
"net/url"
"os"
"os/exec"
"path"
"path/filepath"
"strings"
"sync"
"time"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/httpclient"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/smtp"
"github.com/drakkan/sftpgo/v2/util"
)
// RetentionCheckNotification defines the supported notification methods for a retention check result
type RetentionCheckNotification = string
// Supported notification methods
const (
// notify results using the defined "data_retention_hook"
RetentionCheckNotificationHook = "Hook"
// notify results by email
RetentionCheckNotificationEmail = "Email"
)
var (
// RetentionChecks is the list of active retention checks
RetentionChecks ActiveRetentionChecks
)
// ActiveRetentionChecks holds the active retention checks
type ActiveRetentionChecks struct {
sync.RWMutex
Checks []RetentionCheck
}
// Get returns the active retention checks
func (c *ActiveRetentionChecks) Get() []RetentionCheck {
c.RLock()
defer c.RUnlock()
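// return deep copies so that callers cannot mutate the active checks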
checks := make([]RetentionCheck, 0, len(c.Checks))
for _, check := range c.Checks {
foldersCopy := make([]FolderRetention, len(check.Folders))
copy(foldersCopy, check.Folders)
notificationsCopy := make([]string, len(check.Notifications))
copy(notificationsCopy, check.Notifications)
checks = append(checks, RetentionCheck{
Username: check.Username,
StartTime: check.StartTime,
Notifications: notificationsCopy,
Email: check.Email,
Folders: foldersCopy,
})
}
return checks
}
// Add adds a new retention check and returns nil if a retention check for the
// given username is already active. The returned result can be used to start the check
func (c *ActiveRetentionChecks) Add(check RetentionCheck, user *dataprovider.User) *RetentionCheck {
c.Lock()
defer c.Unlock()
for _, val := range c.Checks {
if val.Username == user.Username {
return nil
}
}
// we silently ignore file patterns
user.Filters.FilePatterns = nil
conn := NewBaseConnection("", "", "", "", *user)
conn.SetProtocol(ProtocolDataRetention)
conn.ID = fmt.Sprintf("data_retention_%v", user.Username)
check.Username = user.Username
check.StartTime = util.GetTimeAsMsSinceEpoch(time.Now())
check.conn = conn
check.updateUserPermissions()
c.Checks = append(c.Checks, check)
return &check
}
// remove removes a user from the ones with active retention checks
// and returns true if the user was removed
func (c *ActiveRetentionChecks) remove(username string) bool {
c.Lock()
defer c.Unlock()
for idx, check := range c.Checks {
if check.Username == username {
lastIdx := len(c.Checks) - 1
c.Checks[idx] = c.Checks[lastIdx]
c.Checks = c.Checks[:lastIdx]
return true
}
}
return false
}
// FolderRetention defines the retention policy for the specified directory path
type FolderRetention struct {
// Path is the exposed virtual directory path. If no more specific retention is defined,
// the retention applies to subdirectories too. For example, if retention is defined
// for the paths "/" and "/sub", then the retention for "/" is applied to any file outside
// the "/sub" directory
Path string `json:"path"`
// Retention time in hours. 0 means exclude this path
Retention int `json:"retention"`
// DeleteEmptyDirs defines whether empty directories will be deleted.
// The user needs the delete permission
DeleteEmptyDirs bool `json:"delete_empty_dirs,omitempty"`
// IgnoreUserPermissions defines whether files are deleted even if the user does not have the delete permission.
// The default is "false", which means that files will be skipped if the user does not have the permission
// to delete them. This applies to sub directories too.
IgnoreUserPermissions bool `json:"ignore_user_permissions,omitempty"`
}
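// An illustrative retention definition (example values, not part of the diff):
// delete files under "/logs" older than one week (168 hours), remove empty
// directories, and exclude "/logs/keep" by setting its retention to 0:
//
//	[
//	  {"path": "/logs", "retention": 168, "delete_empty_dirs": true},
//	  {"path": "/logs/keep", "retention": 0}
//	]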
func (f *FolderRetention) isValid() error {
f.Path = path.Clean(f.Path)
if !path.IsAbs(f.Path) {
return util.NewValidationError(fmt.Sprintf("folder retention: invalid path %#v, please specify an absolute POSIX path",
f.Path))
}
if f.Retention < 0 {
return util.NewValidationError(fmt.Sprintf("invalid folder retention %v, it must be greater or equal to zero",
f.Retention))
}
return nil
}
type folderRetentionCheckResult struct {
Path string `json:"path"`
Retention int `json:"retention"`
DeletedFiles int `json:"deleted_files"`
DeletedSize int64 `json:"deleted_size"`
Elapsed time.Duration `json:"-"`
Info string `json:"info,omitempty"`
Error string `json:"error,omitempty"`
}
// RetentionCheck defines an active retention check
type RetentionCheck struct {
// Username to which the retention check refers
Username string `json:"username"`
// retention check start time as unix timestamp in milliseconds
StartTime int64 `json:"start_time"`
// affected folders
Folders []FolderRetention `json:"folders"`
// how cleanup results will be notified
Notifications []RetentionCheckNotification `json:"notifications,omitempty"`
// email to use if the notification method is set to email
Email string `json:"email,omitempty"`
// Cleanup results
results []*folderRetentionCheckResult `json:"-"`
conn *BaseConnection
}
// Validate returns an error if the specified folders are not valid
func (c *RetentionCheck) Validate() error {
folderPaths := make(map[string]bool)
nothingToDo := true
for idx := range c.Folders {
f := &c.Folders[idx]
if err := f.isValid(); err != nil {
return err
}
if f.Retention > 0 {
nothingToDo = false
}
if _, ok := folderPaths[f.Path]; ok {
return util.NewValidationError(fmt.Sprintf("duplicated folder path %#v", f.Path))
}
folderPaths[f.Path] = true
}
if nothingToDo {
return util.NewValidationError("nothing to delete!")
}
for _, notification := range c.Notifications {
switch notification {
case RetentionCheckNotificationEmail:
if !smtp.IsEnabled() {
return util.NewValidationError("in order to notify results via email you must configure an SMTP server")
}
if c.Email == "" {
return util.NewValidationError("in order to notify results via email you must add a valid email address to your profile")
}
case RetentionCheckNotificationHook:
if Config.DataRetentionHook == "" {
return util.NewValidationError("in order to notify results via hook you must define a data_retention_hook")
}
default:
return util.NewValidationError(fmt.Sprintf("invalid notification %#v", notification))
}
}
return nil
}
func (c *RetentionCheck) updateUserPermissions() {
for _, folder := range c.Folders {
if folder.IgnoreUserPermissions {
c.conn.User.Permissions[folder.Path] = []string{dataprovider.PermAny}
}
}
}
func (c *RetentionCheck) getFolderRetention(folderPath string) (FolderRetention, error) {
dirsForPath := util.GetDirsForVirtualPath(folderPath)
for _, dirPath := range dirsForPath {
for _, folder := range c.Folders {
if folder.Path == dirPath {
return folder, nil
}
}
}
return FolderRetention{}, fmt.Errorf("unable to find folder retention for %#v", folderPath)
}
func (c *RetentionCheck) removeFile(virtualPath string, info os.FileInfo) error {
fs, fsPath, err := c.conn.GetFsAndResolvedPath(virtualPath)
if err != nil {
return err
}
return c.conn.RemoveFile(fs, fsPath, virtualPath, info)
}
func (c *RetentionCheck) cleanupFolder(folderPath string) error {
deleteFilesPerms := []string{dataprovider.PermDelete, dataprovider.PermDeleteFiles}
startTime := time.Now()
result := &folderRetentionCheckResult{
Path: folderPath,
}
c.results = append(c.results, result)
if !c.conn.User.HasPerm(dataprovider.PermListItems, folderPath) || !c.conn.User.HasAnyPerm(deleteFilesPerms, folderPath) {
result.Elapsed = time.Since(startTime)
result.Info = "data retention check skipped: no permissions"
c.conn.Log(logger.LevelInfo, "user %#v does not have permissions to check retention on %#v, retention check skipped",
c.conn.User.Username, folderPath)
return nil
}
folderRetention, err := c.getFolderRetention(folderPath)
if err != nil {
result.Elapsed = time.Since(startTime)
result.Error = "unable to get folder retention"
c.conn.Log(logger.LevelError, "unable to get folder retention for path %#v", folderPath)
return err
}
result.Retention = folderRetention.Retention
if folderRetention.Retention == 0 {
result.Elapsed = time.Since(startTime)
result.Info = "data retention check skipped: retention is set to 0"
c.conn.Log(logger.LevelDebug, "retention check skipped for folder %#v, retention is set to 0", folderPath)
return nil
}
c.conn.Log(logger.LevelDebug, "start retention check for folder %#v, retention: %v hours, delete empty dirs? %v, ignore user perms? %v",
folderPath, folderRetention.Retention, folderRetention.DeleteEmptyDirs, folderRetention.IgnoreUserPermissions)
files, err := c.conn.ListDir(folderPath)
if err != nil {
result.Elapsed = time.Since(startTime)
if err == c.conn.GetNotExistError() {
result.Info = "data retention check skipped, folder does not exist"
c.conn.Log(logger.LevelDebug, "folder %#v does not exist, retention check skipped", folderPath)
return nil
}
result.Error = fmt.Sprintf("unable to list directory %#v", folderPath)
c.conn.Log(logger.LevelError, result.Error)
return err
}
for _, info := range files {
virtualPath := path.Join(folderPath, info.Name())
if info.IsDir() {
if err := c.cleanupFolder(virtualPath); err != nil {
result.Elapsed = time.Since(startTime)
result.Error = fmt.Sprintf("unable to check folder: %v", err)
c.conn.Log(logger.LevelError, "unable to cleanup folder %#v: %v", virtualPath, err)
return err
}
} else {
retentionTime := info.ModTime().Add(time.Duration(folderRetention.Retention) * time.Hour)
if retentionTime.Before(time.Now()) {
if err := c.removeFile(virtualPath, info); err != nil {
result.Elapsed = time.Since(startTime)
result.Error = fmt.Sprintf("unable to remove file %#v: %v", virtualPath, err)
c.conn.Log(logger.LevelError, "unable to remove file %#v, retention %v: %v",
virtualPath, retentionTime, err)
return err
}
c.conn.Log(logger.LevelDebug, "removed file %#v, modification time: %v, retention: %v hours, retention time: %v",
virtualPath, info.ModTime(), folderRetention.Retention, retentionTime)
result.DeletedFiles++
result.DeletedSize += info.Size()
}
}
}
if folderRetention.DeleteEmptyDirs {
c.checkEmptyDirRemoval(folderPath)
}
result.Elapsed = time.Since(startTime)
c.conn.Log(logger.LevelDebug, "retention check completed for folder %#v, deleted files: %v, deleted size: %v bytes",
folderPath, result.DeletedFiles, result.DeletedSize)
return nil
}
func (c *RetentionCheck) checkEmptyDirRemoval(folderPath string) {
if folderPath != "/" && c.conn.User.HasAnyPerm([]string{
dataprovider.PermDelete,
dataprovider.PermDeleteDirs,
}, path.Dir(folderPath),
) {
files, err := c.conn.ListDir(folderPath)
if err == nil && len(files) == 0 {
err = c.conn.RemoveDir(folderPath)
c.conn.Log(logger.LevelDebug, "tryed to remove empty dir %#v, error: %v", folderPath, err)
}
}
}
// Start starts the retention check
func (c *RetentionCheck) Start() {
c.conn.Log(logger.LevelInfo, "retention check started")
defer RetentionChecks.remove(c.conn.User.Username)
defer c.conn.CloseFS() //nolint:errcheck
startTime := time.Now()
for _, folder := range c.Folders {
if folder.Retention > 0 {
if err := c.cleanupFolder(folder.Path); err != nil {
c.conn.Log(logger.LevelError, "retention check failed, unable to cleanup folder %#v", folder.Path)
c.sendNotifications(time.Since(startTime), err)
return
}
}
}
c.conn.Log(logger.LevelInfo, "retention check completed")
c.sendNotifications(time.Since(startTime), nil)
}
func (c *RetentionCheck) sendNotifications(elapsed time.Duration, err error) {
for _, notification := range c.Notifications {
switch notification {
case RetentionCheckNotificationEmail:
c.sendEmailNotification(elapsed, err) //nolint:errcheck
case RetentionCheckNotificationHook:
c.sendHookNotification(elapsed, err) //nolint:errcheck
}
}
}
func (c *RetentionCheck) sendEmailNotification(elapsed time.Duration, errCheck error) error {
body := new(bytes.Buffer)
data := make(map[string]interface{})
data["Results"] = c.results
totalDeletedFiles := 0
totalDeletedSize := int64(0)
for _, result := range c.results {
totalDeletedFiles += result.DeletedFiles
totalDeletedSize += result.DeletedSize
}
data["HumanizeSize"] = util.ByteCountIEC
data["TotalFiles"] = totalDeletedFiles
data["TotalSize"] = totalDeletedSize
data["Elapsed"] = elapsed
data["Username"] = c.conn.User.Username
data["StartTime"] = util.GetTimeFromMsecSinceEpoch(c.StartTime)
if errCheck == nil {
data["Status"] = "Succeeded"
} else {
data["Status"] = "Failed"
}
if err := smtp.RenderRetentionReportTemplate(body, data); err != nil {
c.conn.Log(logger.LevelError, "unable to render retention check template: %v", err)
return err
}
startTime := time.Now()
subject := fmt.Sprintf("Retention check completed for user %#v", c.conn.User.Username)
if err := smtp.SendEmail(c.Email, subject, body.String(), smtp.EmailContentTypeTextHTML); err != nil {
c.conn.Log(logger.LevelError, "unable to notify retention check result via email: %v, elapsed: %v", err,
time.Since(startTime))
return err
}
c.conn.Log(logger.LevelInfo, "retention check result successfully notified via email, elapsed: %v", time.Since(startTime))
return nil
}
func (c *RetentionCheck) sendHookNotification(elapsed time.Duration, errCheck error) error {
data := make(map[string]interface{})
totalDeletedFiles := 0
totalDeletedSize := int64(0)
for _, result := range c.results {
totalDeletedFiles += result.DeletedFiles
totalDeletedSize += result.DeletedSize
}
data["username"] = c.conn.User.Username
data["start_time"] = c.StartTime
data["elapsed"] = elapsed.Milliseconds()
if errCheck == nil {
data["status"] = 1
} else {
data["status"] = 0
}
data["total_deleted_files"] = totalDeletedFiles
data["total_deleted_size"] = totalDeletedSize
data["details"] = c.results
jsonData, _ := json.Marshal(data)
startTime := time.Now()
if strings.HasPrefix(Config.DataRetentionHook, "http") {
var url *url.URL
url, err := url.Parse(Config.DataRetentionHook)
if err != nil {
c.conn.Log(logger.LevelError, "invalid data retention hook %#v: %v", Config.DataRetentionHook, err)
return err
}
respCode := 0
resp, err := httpclient.RetryablePost(url.String(), "application/json", bytes.NewBuffer(jsonData))
if err == nil {
respCode = resp.StatusCode
resp.Body.Close()
if respCode != http.StatusOK {
err = errUnexpectedHTTResponse
}
}
c.conn.Log(logger.LevelDebug, "notified result to URL: %#v, status code: %v, elapsed: %v err: %v",
url.Redacted(), respCode, time.Since(startTime), err)
return err
}
if !filepath.IsAbs(Config.DataRetentionHook) {
err := fmt.Errorf("invalid data retention hook %#v", Config.DataRetentionHook)
c.conn.Log(logger.LevelError, "%v", err)
return err
}
ctx, cancel := context.WithTimeout(context.Background(), 20*time.Second)
defer cancel()
cmd := exec.CommandContext(ctx, Config.DataRetentionHook)
cmd.Env = append(os.Environ(),
fmt.Sprintf("SFTPGO_DATA_RETENTION_RESULT=%v", string(jsonData)))
err := cmd.Run()
c.conn.Log(logger.LevelDebug, "notified result using command: %v, elapsed: %v err: %v",
Config.DataRetentionHook, time.Since(startTime), err)
return err
}
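For reference, here is a minimal sketch of an HTTP endpoint able to receive the hook notification built in sendHookNotification above. The JSON keys mirror the ones set in the code; the struct and handler names, the port, and the /retention path are hypothetical, and the "details" array is omitted for brevity.

package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// retentionNotification mirrors the JSON payload assembled in sendHookNotification;
// the "details" field is omitted here for brevity
type retentionNotification struct {
	Username          string `json:"username"`
	StartTime         int64  `json:"start_time"`
	Elapsed           int64  `json:"elapsed"`
	Status            int    `json:"status"` // 1 = succeeded, 0 = failed
	TotalDeletedFiles int    `json:"total_deleted_files"`
	TotalDeletedSize  int64  `json:"total_deleted_size"`
}

func main() {
	http.HandleFunc("/retention", func(w http.ResponseWriter, r *http.Request) {
		var n retentionNotification
		if err := json.NewDecoder(r.Body).Decode(&n); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		log.Printf("retention check for %q: status %d, deleted %d files (%d bytes)",
			n.Username, n.Status, n.TotalDeletedFiles, n.TotalDeletedSize)
		// any status code other than 200 is treated as a failed notification
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":8000", nil))
}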


@@ -0,0 +1,340 @@
package common
import (
"errors"
"fmt"
"os/exec"
"runtime"
"testing"
"time"
"github.com/sftpgo/sdk"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/smtp"
)
func TestRetentionValidation(t *testing.T) {
check := RetentionCheck{}
check.Folders = append(check.Folders, FolderRetention{
Path: "relative",
Retention: 10,
})
err := check.Validate()
require.Error(t, err)
assert.Contains(t, err.Error(), "please specify an absolute POSIX path")
check.Folders = []FolderRetention{
{
Path: "/",
Retention: -1,
},
}
err = check.Validate()
require.Error(t, err)
assert.Contains(t, err.Error(), "invalid folder retention")
check.Folders = []FolderRetention{
{
Path: "/ab/..",
Retention: 0,
},
}
err = check.Validate()
require.Error(t, err)
assert.Contains(t, err.Error(), "nothing to delete")
assert.Equal(t, "/", check.Folders[0].Path)
check.Folders = append(check.Folders, FolderRetention{
Path: "/../..",
Retention: 24,
})
err = check.Validate()
require.Error(t, err)
assert.Contains(t, err.Error(), `duplicated folder path "/"`)
check.Folders = []FolderRetention{
{
Path: "/dir1",
Retention: 48,
},
{
Path: "/dir2",
Retention: 96,
},
}
err = check.Validate()
assert.NoError(t, err)
assert.Len(t, check.Notifications, 0)
assert.Empty(t, check.Email)
check.Notifications = []RetentionCheckNotification{RetentionCheckNotificationEmail}
err = check.Validate()
require.Error(t, err)
assert.Contains(t, err.Error(), "you must configure an SMTP server")
smtpCfg := smtp.Config{
Host: "mail.example.com",
Port: 25,
TemplatesPath: "templates",
}
err = smtpCfg.Initialize("..")
require.NoError(t, err)
err = check.Validate()
require.Error(t, err)
assert.Contains(t, err.Error(), "you must add a valid email address")
check.Email = "admin@example.com"
err = check.Validate()
assert.NoError(t, err)
smtpCfg = smtp.Config{}
err = smtpCfg.Initialize("..")
require.NoError(t, err)
check.Notifications = []RetentionCheckNotification{RetentionCheckNotificationHook}
err = check.Validate()
require.Error(t, err)
assert.Contains(t, err.Error(), "data_retention_hook")
check.Notifications = []RetentionCheckNotification{"not valid"}
err = check.Validate()
require.Error(t, err)
assert.Contains(t, err.Error(), "invalid notification")
}
func TestRetentionEmailNotifications(t *testing.T) {
smtpCfg := smtp.Config{
Host: "127.0.0.1",
Port: 2525,
TemplatesPath: "templates",
}
err := smtpCfg.Initialize("..")
require.NoError(t, err)
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "user1",
},
}
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
check := RetentionCheck{
Notifications: []RetentionCheckNotification{RetentionCheckNotificationEmail},
Email: "notification@example.com",
results: []*folderRetentionCheckResult{
{
Path: "/",
Retention: 24,
DeletedFiles: 10,
DeletedSize: 32657,
Elapsed: 10 * time.Second,
},
},
}
conn := NewBaseConnection("", "", "", "", user)
conn.SetProtocol(ProtocolDataRetention)
conn.ID = fmt.Sprintf("data_retention_%v", user.Username)
check.conn = conn
check.sendNotifications(1*time.Second, nil)
err = check.sendEmailNotification(1*time.Second, nil)
assert.NoError(t, err)
err = check.sendEmailNotification(1*time.Second, errors.New("test error"))
assert.NoError(t, err)
smtpCfg.Port = 2626
err = smtpCfg.Initialize("..")
require.NoError(t, err)
err = check.sendEmailNotification(1*time.Second, nil)
assert.Error(t, err)
smtpCfg = smtp.Config{}
err = smtpCfg.Initialize("..")
require.NoError(t, err)
err = check.sendEmailNotification(1*time.Second, nil)
assert.Error(t, err)
}
func TestRetentionHookNotifications(t *testing.T) {
dataRetentionHook := Config.DataRetentionHook
Config.DataRetentionHook = fmt.Sprintf("http://%v", httpAddr)
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "user2",
},
}
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
check := RetentionCheck{
Notifications: []RetentionCheckNotification{RetentionCheckNotificationHook},
results: []*folderRetentionCheckResult{
{
Path: "/",
Retention: 24,
DeletedFiles: 10,
DeletedSize: 32657,
Elapsed: 10 * time.Second,
},
},
}
conn := NewBaseConnection("", "", "", "", user)
conn.SetProtocol(ProtocolDataRetention)
conn.ID = fmt.Sprintf("data_retention_%v", user.Username)
check.conn = conn
check.sendNotifications(1*time.Second, nil)
err := check.sendHookNotification(1*time.Second, nil)
assert.NoError(t, err)
Config.DataRetentionHook = fmt.Sprintf("http://%v/404", httpAddr)
err = check.sendHookNotification(1*time.Second, nil)
assert.ErrorIs(t, err, errUnexpectedHTTResponse)
Config.DataRetentionHook = "http://foo\x7f.com/retention"
err = check.sendHookNotification(1*time.Second, err)
assert.Error(t, err)
Config.DataRetentionHook = "relativepath"
err = check.sendHookNotification(1*time.Second, err)
assert.Error(t, err)
if runtime.GOOS != osWindows {
hookCmd, err := exec.LookPath("true")
assert.NoError(t, err)
Config.DataRetentionHook = hookCmd
err = check.sendHookNotification(1*time.Second, err)
assert.NoError(t, err)
}
Config.DataRetentionHook = dataRetentionHook
}
func TestRetentionPermissionsAndGetFolder(t *testing.T) {
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "user1",
},
}
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermListItems, dataprovider.PermDelete}
user.Permissions["/dir1"] = []string{dataprovider.PermListItems}
user.Permissions["/dir2/sub1"] = []string{dataprovider.PermCreateDirs}
user.Permissions["/dir2/sub2"] = []string{dataprovider.PermDelete}
check := RetentionCheck{
Folders: []FolderRetention{
{
Path: "/dir2",
Retention: 24 * 7,
IgnoreUserPermissions: true,
},
{
Path: "/dir3",
Retention: 24 * 7,
IgnoreUserPermissions: false,
},
{
Path: "/dir2/sub1/sub",
Retention: 24,
IgnoreUserPermissions: true,
},
},
}
conn := NewBaseConnection("", "", "", "", user)
conn.SetProtocol(ProtocolDataRetention)
conn.ID = fmt.Sprintf("data_retention_%v", user.Username)
check.conn = conn
check.updateUserPermissions()
assert.Equal(t, []string{dataprovider.PermListItems, dataprovider.PermDelete}, conn.User.Permissions["/"])
assert.Equal(t, []string{dataprovider.PermListItems}, conn.User.Permissions["/dir1"])
assert.Equal(t, []string{dataprovider.PermAny}, conn.User.Permissions["/dir2"])
assert.Equal(t, []string{dataprovider.PermAny}, conn.User.Permissions["/dir2/sub1/sub"])
assert.Equal(t, []string{dataprovider.PermCreateDirs}, conn.User.Permissions["/dir2/sub1"])
assert.Equal(t, []string{dataprovider.PermDelete}, conn.User.Permissions["/dir2/sub2"])
_, err := check.getFolderRetention("/")
assert.Error(t, err)
folder, err := check.getFolderRetention("/dir3")
assert.NoError(t, err)
assert.Equal(t, "/dir3", folder.Path)
folder, err = check.getFolderRetention("/dir2/sub3")
assert.NoError(t, err)
assert.Equal(t, "/dir2", folder.Path)
folder, err = check.getFolderRetention("/dir2/sub2")
assert.NoError(t, err)
assert.Equal(t, "/dir2", folder.Path)
folder, err = check.getFolderRetention("/dir2/sub1")
assert.NoError(t, err)
assert.Equal(t, "/dir2", folder.Path)
folder, err = check.getFolderRetention("/dir2/sub1/sub/sub")
assert.NoError(t, err)
assert.Equal(t, "/dir2/sub1/sub", folder.Path)
}
func TestRetentionCheckAddRemove(t *testing.T) {
username := "username"
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: username,
},
}
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
check := RetentionCheck{
Folders: []FolderRetention{
{
Path: "/",
Retention: 48,
},
},
Notifications: []RetentionCheckNotification{RetentionCheckNotificationHook},
}
assert.NotNil(t, RetentionChecks.Add(check, &user))
checks := RetentionChecks.Get()
require.Len(t, checks, 1)
assert.Equal(t, username, checks[0].Username)
assert.Greater(t, checks[0].StartTime, int64(0))
require.Len(t, checks[0].Folders, 1)
assert.Equal(t, check.Folders[0].Path, checks[0].Folders[0].Path)
assert.Equal(t, check.Folders[0].Retention, checks[0].Folders[0].Retention)
require.Len(t, checks[0].Notifications, 1)
assert.Equal(t, RetentionCheckNotificationHook, checks[0].Notifications[0])
assert.Nil(t, RetentionChecks.Add(check, &user))
assert.True(t, RetentionChecks.remove(username))
require.Len(t, RetentionChecks.Get(), 0)
assert.False(t, RetentionChecks.remove(username))
}
func TestCleanupErrors(t *testing.T) {
user := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "u",
},
}
user.Permissions = make(map[string][]string)
user.Permissions["/"] = []string{dataprovider.PermAny}
check := &RetentionCheck{
Folders: []FolderRetention{
{
Path: "/path",
Retention: 48,
},
},
}
check = RetentionChecks.Add(*check, &user)
require.NotNil(t, check)
err := check.removeFile("missing file", nil)
assert.Error(t, err)
err = check.cleanupFolder("/")
assert.Error(t, err)
assert.True(t, RetentionChecks.remove(user.Username))
}

274
common/defender.go Normal file

@@ -0,0 +1,274 @@
package common
import (
"encoding/json"
"fmt"
"net"
"os"
"sync"
"time"
"github.com/yl2chen/cidranger"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
// HostEvent is the enumerable for the supported host events
type HostEvent int
// Supported host events
const (
HostEventLoginFailed HostEvent = iota
HostEventUserNotFound
HostEventNoLoginTried
HostEventLimitExceeded
)
// Supported defender drivers
const (
DefenderDriverMemory = "memory"
DefenderDriverProvider = "provider"
)
var (
supportedDefenderDrivers = []string{DefenderDriverMemory, DefenderDriverProvider}
)
// Defender defines the interface that a defender must implement
type Defender interface {
GetHosts() ([]*dataprovider.DefenderEntry, error)
GetHost(ip string) (*dataprovider.DefenderEntry, error)
AddEvent(ip string, event HostEvent)
IsBanned(ip string) bool
GetBanTime(ip string) (*time.Time, error)
GetScore(ip string) (int, error)
DeleteHost(ip string) bool
Reload() error
}
// DefenderConfig defines the "defender" configuration
type DefenderConfig struct {
// Set to true to enable the defender
Enabled bool `json:"enabled" mapstructure:"enabled"`
// Defender implementation to use, we support "memory" and "provider".
// Using the "provider" driver you can share defender events among
// multiple SFTPGo instances. For a single instance the "memory" driver
// is much faster
Driver string `json:"driver" mapstructure:"driver"`
// BanTime is the number of minutes that a host is banned
BanTime int `json:"ban_time" mapstructure:"ban_time"`
// Percentage increase of the ban time if a banned host tries to connect again
BanTimeIncrement int `json:"ban_time_increment" mapstructure:"ban_time_increment"`
// Threshold value for banning a client
Threshold int `json:"threshold" mapstructure:"threshold"`
// Score for invalid login attempts, e.g. non-existent user accounts or
// client disconnected for inactivity without authentication attempts
ScoreInvalid int `json:"score_invalid" mapstructure:"score_invalid"`
// Score for valid login attempts, e.g. user accounts that exist
ScoreValid int `json:"score_valid" mapstructure:"score_valid"`
// Score for limit exceeded events, generated from the rate limiters or for max connections
// per-host exceeded
ScoreLimitExceeded int `json:"score_limit_exceeded" mapstructure:"score_limit_exceeded"`
// Defines the time window, in minutes, for tracking client errors.
// A host is banned if it has exceeded the defined threshold during
// the last ObservationTime minutes
ObservationTime int `json:"observation_time" mapstructure:"observation_time"`
// The number of banned IPs and host scores kept in memory will vary between the
// soft and hard limit for the "memory" driver. For the "provider" driver the
// soft limit is ignored and the hard limit is used to limit the number of entries
// to return when you request for the entire host list from the defender
EntriesSoftLimit int `json:"entries_soft_limit" mapstructure:"entries_soft_limit"`
EntriesHardLimit int `json:"entries_hard_limit" mapstructure:"entries_hard_limit"`
// Path to a file containing a list of ip addresses and/or networks to never ban
SafeListFile string `json:"safelist_file" mapstructure:"safelist_file"`
// Path to a file containing a list of ip addresses and/or networks to always ban
BlockListFile string `json:"blocklist_file" mapstructure:"blocklist_file"`
}
type baseDefender struct {
config *DefenderConfig
sync.RWMutex
safeList *HostList
blockList *HostList
}
// Reload reloads block and safe lists
func (d *baseDefender) Reload() error {
blockList, err := loadHostListFromFile(d.config.BlockListFile)
if err != nil {
return err
}
d.Lock()
d.blockList = blockList
d.Unlock()
safeList, err := loadHostListFromFile(d.config.SafeListFile)
if err != nil {
return err
}
d.Lock()
d.safeList = safeList
d.Unlock()
return nil
}
func (d *baseDefender) isBanned(ip string) bool {
if d.blockList != nil && d.blockList.isListed(ip) {
// permanent ban
return true
}
return false
}
func (d *baseDefender) getScore(event HostEvent) int {
var score int
switch event {
case HostEventLoginFailed:
score = d.config.ScoreValid
case HostEventLimitExceeded:
score = d.config.ScoreLimitExceeded
case HostEventUserNotFound, HostEventNoLoginTried:
score = d.config.ScoreInvalid
}
return score
}
// HostListFile defines the structure expected for safe/block list files
type HostListFile struct {
IPAddresses []string `json:"addresses"`
CIDRNetworks []string `json:"networks"`
}
// HostList defines the structure used to keep the HostListFile in memory
type HostList struct {
IPAddresses map[string]bool
Ranges cidranger.Ranger
}
func (h *HostList) isListed(ip string) bool {
if _, ok := h.IPAddresses[ip]; ok {
return true
}
ok, err := h.Ranges.Contains(net.ParseIP(ip))
if err != nil {
return false
}
return ok
}
type hostEvent struct {
dateTime time.Time
score int
}
type hostScore struct {
TotalScore int
Events []hostEvent
}
// validate returns an error if the configuration is invalid
func (c *DefenderConfig) validate() error {
if !c.Enabled {
return nil
}
if c.ScoreInvalid >= c.Threshold {
return fmt.Errorf("score_invalid %v cannot be greater than threshold %v", c.ScoreInvalid, c.Threshold)
}
if c.ScoreValid >= c.Threshold {
return fmt.Errorf("score_valid %v cannot be greater than threshold %v", c.ScoreValid, c.Threshold)
}
if c.ScoreLimitExceeded >= c.Threshold {
return fmt.Errorf("score_limit_exceeded %v cannot be greater than threshold %v", c.ScoreLimitExceeded, c.Threshold)
}
if c.BanTime <= 0 {
return fmt.Errorf("invalid ban_time %v", c.BanTime)
}
if c.BanTimeIncrement <= 0 {
return fmt.Errorf("invalid ban_time_increment %v", c.BanTimeIncrement)
}
if c.ObservationTime <= 0 {
return fmt.Errorf("invalid observation_time %v", c.ObservationTime)
}
if c.EntriesSoftLimit <= 0 {
return fmt.Errorf("invalid entries_soft_limit %v", c.EntriesSoftLimit)
}
if c.EntriesHardLimit <= c.EntriesSoftLimit {
return fmt.Errorf("invalid entries_hard_limit %v must be > %v", c.EntriesHardLimit, c.EntriesSoftLimit)
}
return nil
}
func loadHostListFromFile(name string) (*HostList, error) {
if name == "" {
return nil, nil
}
if !util.IsFileInputValid(name) {
return nil, fmt.Errorf("invalid host list file name %#v", name)
}
info, err := os.Stat(name)
if err != nil {
return nil, err
}
// opinionated max size, you should avoid big host lists
if info.Size() > 1048576*5 { // 5MB
return nil, fmt.Errorf("host list file %#v is too big: %v bytes", name, info.Size())
}
content, err := os.ReadFile(name)
if err != nil {
return nil, fmt.Errorf("unable to read input file %#v: %v", name, err)
}
var hostList HostListFile
err = json.Unmarshal(content, &hostList)
if err != nil {
return nil, err
}
if len(hostList.CIDRNetworks) > 0 || len(hostList.IPAddresses) > 0 {
result := &HostList{
IPAddresses: make(map[string]bool),
Ranges: cidranger.NewPCTrieRanger(),
}
ipCount := 0
cdrCount := 0
for _, ip := range hostList.IPAddresses {
if net.ParseIP(ip) == nil {
logger.Warn(logSender, "", "unable to parse IP %#v", ip)
continue
}
result.IPAddresses[ip] = true
ipCount++
}
for _, cidrNet := range hostList.CIDRNetworks {
_, network, err := net.ParseCIDR(cidrNet)
if err != nil {
logger.Warn(logSender, "", "unable to parse CIDR network %#v", cidrNet)
continue
}
err = result.Ranges.Insert(cidranger.NewBasicRangerEntry(*network))
if err == nil {
cdrCount++
}
}
logger.Info(logSender, "", "list %#v loaded, ip addresses loaded: %v/%v networks loaded: %v/%v",
name, ipCount, len(hostList.IPAddresses), cdrCount, len(hostList.CIDRNetworks))
return result, nil
}
return nil, nil
}
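As a usage sketch, the safe and block list files loaded above are plain JSON with "addresses" and "networks" keys. The standalone program below is hypothetical: the struct is duplicated for self-containment and the file name is arbitrary.

package main

import (
	"encoding/json"
	"log"
	"os"
)

// hostListFile duplicates the HostListFile layout parsed by loadHostListFromFile
type hostListFile struct {
	IPAddresses  []string `json:"addresses"`
	CIDRNetworks []string `json:"networks"`
}

func main() {
	list := hostListFile{
		IPAddresses:  []string{"192.0.2.10", "192.0.2.11"},
		CIDRNetworks: []string{"198.51.100.0/24"},
	}
	data, err := json.MarshalIndent(list, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	// keep the file well below the 5MB limit enforced by loadHostListFromFile
	if err := os.WriteFile("safelist.json", data, 0o644); err != nil {
		log.Fatal(err)
	}
}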

678
common/defender_test.go Normal file

@@ -0,0 +1,678 @@
package common
import (
"crypto/rand"
"encoding/hex"
"encoding/json"
"fmt"
"net"
"os"
"path/filepath"
"runtime"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/yl2chen/cidranger"
)
func TestBasicDefender(t *testing.T) {
bl := HostListFile{
IPAddresses: []string{"172.16.1.1", "172.16.1.2"},
CIDRNetworks: []string{"10.8.0.0/24"},
}
sl := HostListFile{
IPAddresses: []string{"172.16.1.3", "172.16.1.4"},
CIDRNetworks: []string{"192.168.8.0/24"},
}
blFile := filepath.Join(os.TempDir(), "bl.json")
slFile := filepath.Join(os.TempDir(), "sl.json")
data, err := json.Marshal(bl)
assert.NoError(t, err)
err = os.WriteFile(blFile, data, os.ModePerm)
assert.NoError(t, err)
data, err = json.Marshal(sl)
assert.NoError(t, err)
err = os.WriteFile(slFile, data, os.ModePerm)
assert.NoError(t, err)
config := &DefenderConfig{
Enabled: true,
BanTime: 10,
BanTimeIncrement: 2,
Threshold: 5,
ScoreInvalid: 2,
ScoreValid: 1,
ScoreLimitExceeded: 3,
ObservationTime: 15,
EntriesSoftLimit: 1,
EntriesHardLimit: 2,
SafeListFile: "slFile",
BlockListFile: "blFile",
}
_, err = newInMemoryDefender(config)
assert.Error(t, err)
config.BlockListFile = blFile
_, err = newInMemoryDefender(config)
assert.Error(t, err)
config.SafeListFile = slFile
d, err := newInMemoryDefender(config)
assert.NoError(t, err)
defender := d.(*memoryDefender)
assert.True(t, defender.IsBanned("172.16.1.1"))
assert.False(t, defender.IsBanned("172.16.1.10"))
assert.False(t, defender.IsBanned("10.8.2.3"))
assert.True(t, defender.IsBanned("10.8.0.3"))
assert.False(t, defender.IsBanned("invalid ip"))
assert.Equal(t, 0, defender.countBanned())
assert.Equal(t, 0, defender.countHosts())
hosts, err := defender.GetHosts()
assert.NoError(t, err)
assert.Len(t, hosts, 0)
_, err = defender.GetHost("10.8.0.4")
assert.Error(t, err)
defender.AddEvent("172.16.1.4", HostEventLoginFailed)
defender.AddEvent("192.168.8.4", HostEventUserNotFound)
defender.AddEvent("172.16.1.3", HostEventLimitExceeded)
assert.Equal(t, 0, defender.countHosts())
testIP := "12.34.56.78"
defender.AddEvent(testIP, HostEventLoginFailed)
assert.Equal(t, 1, defender.countHosts())
assert.Equal(t, 0, defender.countBanned())
score, err := defender.GetScore(testIP)
assert.NoError(t, err)
assert.Equal(t, 1, score)
hosts, err = defender.GetHosts()
assert.NoError(t, err)
if assert.Len(t, hosts, 1) {
assert.Equal(t, 1, hosts[0].Score)
assert.True(t, hosts[0].BanTime.IsZero())
assert.Empty(t, hosts[0].GetBanTime())
}
host, err := defender.GetHost(testIP)
assert.NoError(t, err)
assert.Equal(t, 1, host.Score)
assert.Empty(t, host.GetBanTime())
banTime, err := defender.GetBanTime(testIP)
assert.NoError(t, err)
assert.Nil(t, banTime)
defender.AddEvent(testIP, HostEventLimitExceeded)
assert.Equal(t, 1, defender.countHosts())
assert.Equal(t, 0, defender.countBanned())
score, err = defender.GetScore(testIP)
assert.NoError(t, err)
assert.Equal(t, 4, score)
hosts, err = defender.GetHosts()
assert.NoError(t, err)
if assert.Len(t, hosts, 1) {
assert.Equal(t, 4, hosts[0].Score)
assert.True(t, hosts[0].BanTime.IsZero())
assert.Empty(t, hosts[0].GetBanTime())
}
defender.AddEvent(testIP, HostEventNoLoginTried)
defender.AddEvent(testIP, HostEventNoLoginTried)
assert.Equal(t, 0, defender.countHosts())
assert.Equal(t, 1, defender.countBanned())
score, err = defender.GetScore(testIP)
assert.NoError(t, err)
assert.Equal(t, 0, score)
banTime, err = defender.GetBanTime(testIP)
assert.NoError(t, err)
assert.NotNil(t, banTime)
hosts, err = defender.GetHosts()
assert.NoError(t, err)
if assert.Len(t, hosts, 1) {
assert.Equal(t, 0, hosts[0].Score)
assert.False(t, hosts[0].BanTime.IsZero())
assert.NotEmpty(t, hosts[0].GetBanTime())
assert.Equal(t, hex.EncodeToString([]byte(testIP)), hosts[0].GetID())
}
host, err = defender.GetHost(testIP)
assert.NoError(t, err)
assert.Equal(t, 0, host.Score)
assert.NotEmpty(t, host.GetBanTime())
// now test cleanup, testIP is already banned
testIP1 := "12.34.56.79"
testIP2 := "12.34.56.80"
testIP3 := "12.34.56.81"
defender.AddEvent(testIP1, HostEventNoLoginTried)
defender.AddEvent(testIP2, HostEventNoLoginTried)
assert.Equal(t, 2, defender.countHosts())
time.Sleep(20 * time.Millisecond)
defender.AddEvent(testIP3, HostEventNoLoginTried)
assert.Equal(t, defender.config.EntriesSoftLimit, defender.countHosts())
// testIP1 and testIP2 should be removed
assert.Equal(t, defender.config.EntriesSoftLimit, defender.countHosts())
score, err = defender.GetScore(testIP1)
assert.NoError(t, err)
assert.Equal(t, 0, score)
score, err = defender.GetScore(testIP2)
assert.NoError(t, err)
assert.Equal(t, 0, score)
score, err = defender.GetScore(testIP3)
assert.NoError(t, err)
assert.Equal(t, 2, score)
defender.AddEvent(testIP3, HostEventNoLoginTried)
defender.AddEvent(testIP3, HostEventNoLoginTried)
// IP3 is now banned
banTime, err = defender.GetBanTime(testIP3)
assert.NoError(t, err)
assert.NotNil(t, banTime)
assert.Equal(t, 0, defender.countHosts())
time.Sleep(20 * time.Millisecond)
for i := 0; i < 3; i++ {
defender.AddEvent(testIP1, HostEventNoLoginTried)
}
assert.Equal(t, 0, defender.countHosts())
assert.Equal(t, config.EntriesSoftLimit, defender.countBanned())
banTime, err = defender.GetBanTime(testIP)
assert.NoError(t, err)
assert.Nil(t, banTime)
banTime, err = defender.GetBanTime(testIP3)
assert.NoError(t, err)
assert.Nil(t, banTime)
banTime, err = defender.GetBanTime(testIP1)
assert.NoError(t, err)
assert.NotNil(t, banTime)
for i := 0; i < 3; i++ {
defender.AddEvent(testIP, HostEventNoLoginTried)
time.Sleep(10 * time.Millisecond)
defender.AddEvent(testIP3, HostEventNoLoginTried)
}
assert.Equal(t, 0, defender.countHosts())
assert.Equal(t, defender.config.EntriesSoftLimit, defender.countBanned())
banTime, err = defender.GetBanTime(testIP3)
assert.NoError(t, err)
if assert.NotNil(t, banTime) {
assert.True(t, defender.IsBanned(testIP3))
// ban time should increase
newBanTime, err := defender.GetBanTime(testIP3)
assert.NoError(t, err)
assert.True(t, newBanTime.After(*banTime))
}
assert.True(t, defender.DeleteHost(testIP3))
assert.False(t, defender.DeleteHost(testIP3))
err = os.Remove(slFile)
assert.NoError(t, err)
err = os.Remove(blFile)
assert.NoError(t, err)
}
func TestExpiredHostBans(t *testing.T) {
config := &DefenderConfig{
Enabled: true,
BanTime: 10,
BanTimeIncrement: 2,
Threshold: 5,
ScoreInvalid: 2,
ScoreValid: 1,
ScoreLimitExceeded: 3,
ObservationTime: 15,
EntriesSoftLimit: 1,
EntriesHardLimit: 2,
}
d, err := newInMemoryDefender(config)
assert.NoError(t, err)
defender := d.(*memoryDefender)
testIP := "1.2.3.4"
defender.banned[testIP] = time.Now().Add(-24 * time.Hour)
// the ban is expired, testIP should not be listed
res, err := defender.GetHosts()
assert.NoError(t, err)
assert.Len(t, res, 0)
assert.False(t, defender.IsBanned(testIP))
_, err = defender.GetHost(testIP)
assert.Error(t, err)
_, ok := defender.banned[testIP]
assert.True(t, ok)
// now add an event for an expired banned ip, it should be removed
defender.AddEvent(testIP, HostEventLoginFailed)
assert.False(t, defender.IsBanned(testIP))
entry, err := defender.GetHost(testIP)
assert.NoError(t, err)
assert.Equal(t, testIP, entry.IP)
assert.Empty(t, entry.GetBanTime())
assert.Equal(t, 1, entry.Score)
res, err = defender.GetHosts()
assert.NoError(t, err)
if assert.Len(t, res, 1) {
assert.Equal(t, testIP, res[0].IP)
assert.Empty(t, res[0].GetBanTime())
assert.Equal(t, 1, res[0].Score)
}
events := []hostEvent{
{
dateTime: time.Now().Add(-24 * time.Hour),
score: 2,
},
{
dateTime: time.Now().Add(-24 * time.Hour),
score: 3,
},
}
hs := hostScore{
Events: events,
TotalScore: 5,
}
defender.hosts[testIP] = hs
// the recorded scores are too old
res, err = defender.GetHosts()
assert.NoError(t, err)
assert.Len(t, res, 0)
_, err = defender.GetHost(testIP)
assert.Error(t, err)
_, ok = defender.hosts[testIP]
assert.True(t, ok)
}
func TestLoadHostListFromFile(t *testing.T) {
_, err := loadHostListFromFile(".")
assert.Error(t, err)
hostsFilePath := filepath.Join(os.TempDir(), "hostfile")
content := make([]byte, 1048576*6)
_, err = rand.Read(content)
assert.NoError(t, err)
err = os.WriteFile(hostsFilePath, content, os.ModePerm)
assert.NoError(t, err)
_, err = loadHostListFromFile(hostsFilePath)
assert.Error(t, err)
hl := HostListFile{
IPAddresses: []string{},
CIDRNetworks: []string{},
}
asJSON, err := json.Marshal(hl)
assert.NoError(t, err)
err = os.WriteFile(hostsFilePath, asJSON, os.ModePerm)
assert.NoError(t, err)
hostList, err := loadHostListFromFile(hostsFilePath)
assert.NoError(t, err)
assert.Nil(t, hostList)
hl.IPAddresses = append(hl.IPAddresses, "invalidip")
asJSON, err = json.Marshal(hl)
assert.NoError(t, err)
err = os.WriteFile(hostsFilePath, asJSON, os.ModePerm)
assert.NoError(t, err)
hostList, err = loadHostListFromFile(hostsFilePath)
assert.NoError(t, err)
assert.Len(t, hostList.IPAddresses, 0)
hl.IPAddresses = nil
hl.CIDRNetworks = append(hl.CIDRNetworks, "invalid net")
asJSON, err = json.Marshal(hl)
assert.NoError(t, err)
err = os.WriteFile(hostsFilePath, asJSON, os.ModePerm)
assert.NoError(t, err)
hostList, err = loadHostListFromFile(hostsFilePath)
assert.NoError(t, err)
assert.NotNil(t, hostList)
assert.Len(t, hostList.IPAddresses, 0)
assert.Equal(t, 0, hostList.Ranges.Len())
if runtime.GOOS != "windows" {
err = os.Chmod(hostsFilePath, 0111)
assert.NoError(t, err)
_, err = loadHostListFromFile(hostsFilePath)
assert.Error(t, err)
err = os.Chmod(hostsFilePath, 0644)
assert.NoError(t, err)
}
err = os.WriteFile(hostsFilePath, []byte("non json content"), os.ModePerm)
assert.NoError(t, err)
_, err = loadHostListFromFile(hostsFilePath)
assert.Error(t, err)
err = os.Remove(hostsFilePath)
assert.NoError(t, err)
}
func TestDefenderCleanup(t *testing.T) {
d := memoryDefender{
baseDefender: baseDefender{
config: &DefenderConfig{
ObservationTime: 1,
EntriesSoftLimit: 2,
EntriesHardLimit: 3,
},
},
banned: make(map[string]time.Time),
hosts: make(map[string]hostScore),
}
d.banned["1.1.1.1"] = time.Now().Add(-24 * time.Hour)
d.banned["1.1.1.2"] = time.Now().Add(-24 * time.Hour)
d.banned["1.1.1.3"] = time.Now().Add(-24 * time.Hour)
d.banned["1.1.1.4"] = time.Now().Add(-24 * time.Hour)
d.cleanupBanned()
assert.Equal(t, 0, d.countBanned())
d.banned["2.2.2.2"] = time.Now().Add(2 * time.Minute)
d.banned["2.2.2.3"] = time.Now().Add(1 * time.Minute)
d.banned["2.2.2.4"] = time.Now().Add(3 * time.Minute)
d.banned["2.2.2.5"] = time.Now().Add(4 * time.Minute)
d.cleanupBanned()
assert.Equal(t, d.config.EntriesSoftLimit, d.countBanned())
banTime, err := d.GetBanTime("2.2.2.3")
assert.NoError(t, err)
assert.Nil(t, banTime)
d.hosts["3.3.3.3"] = hostScore{
TotalScore: 0,
Events: []hostEvent{
{
dateTime: time.Now().Add(-5 * time.Minute),
score: 1,
},
{
dateTime: time.Now().Add(-3 * time.Minute),
score: 1,
},
{
dateTime: time.Now(),
score: 1,
},
},
}
d.hosts["3.3.3.4"] = hostScore{
TotalScore: 1,
Events: []hostEvent{
{
dateTime: time.Now().Add(-3 * time.Minute),
score: 1,
},
},
}
d.hosts["3.3.3.5"] = hostScore{
TotalScore: 1,
Events: []hostEvent{
{
dateTime: time.Now().Add(-2 * time.Minute),
score: 1,
},
},
}
d.hosts["3.3.3.6"] = hostScore{
TotalScore: 1,
Events: []hostEvent{
{
dateTime: time.Now().Add(-1 * time.Minute),
score: 1,
},
},
}
score, err := d.GetScore("3.3.3.3")
assert.NoError(t, err)
assert.Equal(t, 1, score)
d.cleanupHosts()
assert.Equal(t, d.config.EntriesSoftLimit, d.countHosts())
score, err = d.GetScore("3.3.3.4")
assert.NoError(t, err)
assert.Equal(t, 0, score)
}
func TestDefenderConfig(t *testing.T) {
c := DefenderConfig{}
err := c.validate()
require.NoError(t, err)
c.Enabled = true
c.Threshold = 10
c.ScoreInvalid = 10
err = c.validate()
require.Error(t, err)
c.ScoreInvalid = 2
c.ScoreLimitExceeded = 10
err = c.validate()
require.Error(t, err)
c.ScoreLimitExceeded = 2
c.ScoreValid = 10
err = c.validate()
require.Error(t, err)
c.ScoreValid = 1
c.BanTime = 0
err = c.validate()
require.Error(t, err)
c.BanTime = 30
c.BanTimeIncrement = 0
err = c.validate()
require.Error(t, err)
c.BanTimeIncrement = 50
c.ObservationTime = 0
err = c.validate()
require.Error(t, err)
c.ObservationTime = 30
err = c.validate()
require.Error(t, err)
c.EntriesSoftLimit = 10
err = c.validate()
require.Error(t, err)
c.EntriesHardLimit = 10
err = c.validate()
require.Error(t, err)
c.EntriesHardLimit = 20
err = c.validate()
require.NoError(t, err)
}
func BenchmarkDefenderBannedSearch(b *testing.B) {
d := getDefenderForBench()
ip, ipnet, err := net.ParseCIDR("10.8.0.0/12") // 1048574 ip addresses
if err != nil {
panic(err)
}
for ip := ip.Mask(ipnet.Mask); ipnet.Contains(ip); inc(ip) {
d.banned[ip.String()] = time.Now().Add(10 * time.Minute)
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
d.IsBanned("192.168.1.1")
}
}
func BenchmarkCleanup(b *testing.B) {
d := getDefenderForBench()
ip, ipnet, err := net.ParseCIDR("192.168.4.0/24")
if err != nil {
panic(err)
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
for ip := ip.Mask(ipnet.Mask); ipnet.Contains(ip); inc(ip) {
d.AddEvent(ip.String(), HostEventLoginFailed)
if d.countHosts() > d.config.EntriesHardLimit {
panic("too many hosts")
}
if d.countBanned() > d.config.EntriesSoftLimit {
panic("too many ip banned")
}
}
}
}
func BenchmarkDefenderBannedSearchWithBlockList(b *testing.B) {
d := getDefenderForBench()
d.blockList = &HostList{
IPAddresses: make(map[string]bool),
Ranges: cidranger.NewPCTrieRanger(),
}
ip, ipnet, err := net.ParseCIDR("129.8.0.0/12") // 1048574 ip addresses
if err != nil {
panic(err)
}
for ip := ip.Mask(ipnet.Mask); ipnet.Contains(ip); inc(ip) {
d.banned[ip.String()] = time.Now().Add(10 * time.Minute)
d.blockList.IPAddresses[ip.String()] = true
}
for i := 0; i < 255; i++ {
cidr := fmt.Sprintf("10.8.%v.1/24", i)
_, network, _ := net.ParseCIDR(cidr)
if err := d.blockList.Ranges.Insert(cidranger.NewBasicRangerEntry(*network)); err != nil {
panic(err)
}
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
d.IsBanned("192.168.1.1")
}
}
func BenchmarkHostListSearch(b *testing.B) {
hostlist := &HostList{
IPAddresses: make(map[string]bool),
Ranges: cidranger.NewPCTrieRanger(),
}
ip, ipnet, _ := net.ParseCIDR("172.16.0.0/16")
for ip := ip.Mask(ipnet.Mask); ipnet.Contains(ip); inc(ip) {
hostlist.IPAddresses[ip.String()] = true
}
for i := 0; i < 255; i++ {
cidr := fmt.Sprintf("10.8.%v.1/24", i)
_, network, _ := net.ParseCIDR(cidr)
if err := hostlist.Ranges.Insert(cidranger.NewBasicRangerEntry(*network)); err != nil {
panic(err)
}
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
if hostlist.isListed("192.167.1.2") {
panic("should not be listed")
}
}
}
func BenchmarkCIDRanger(b *testing.B) {
ranger := cidranger.NewPCTrieRanger()
for i := 0; i < 255; i++ {
cidr := fmt.Sprintf("192.168.%v.1/24", i)
_, network, _ := net.ParseCIDR(cidr)
if err := ranger.Insert(cidranger.NewBasicRangerEntry(*network)); err != nil {
panic(err)
}
}
ipToMatch := net.ParseIP("192.167.1.2")
b.ResetTimer()
for i := 0; i < b.N; i++ {
if _, err := ranger.Contains(ipToMatch); err != nil {
panic(err)
}
}
}
func BenchmarkNetContains(b *testing.B) {
var nets []*net.IPNet
for i := 0; i < 255; i++ {
cidr := fmt.Sprintf("192.168.%v.1/24", i)
_, network, _ := net.ParseCIDR(cidr)
nets = append(nets, network)
}
ipToMatch := net.ParseIP("192.167.1.1")
b.ResetTimer()
for i := 0; i < b.N; i++ {
for _, n := range nets {
n.Contains(ipToMatch)
}
}
}
func getDefenderForBench() *memoryDefender {
config := &DefenderConfig{
Enabled: true,
BanTime: 30,
BanTimeIncrement: 50,
Threshold: 10,
ScoreInvalid: 2,
ScoreValid: 2,
ObservationTime: 30,
EntriesSoftLimit: 50,
EntriesHardLimit: 100,
}
return &memoryDefender{
baseDefender: baseDefender{
config: config,
},
hosts: make(map[string]hostScore),
banned: make(map[string]time.Time),
}
}
func inc(ip net.IP) {
for j := len(ip) - 1; j >= 0; j-- {
ip[j]++
if ip[j] > 0 {
break
}
}
}

157
common/defenderdb.go Normal file

@@ -0,0 +1,157 @@
package common
import (
"time"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
type dbDefender struct {
baseDefender
lastCleanup time.Time
}
func newDBDefender(config *DefenderConfig) (Defender, error) {
err := config.validate()
if err != nil {
return nil, err
}
defender := &dbDefender{
baseDefender: baseDefender{
config: config,
},
lastCleanup: time.Time{},
}
if err := defender.Reload(); err != nil {
return nil, err
}
return defender, nil
}
// GetHosts returns hosts that are banned or for which some violations have been detected
func (d *dbDefender) GetHosts() ([]*dataprovider.DefenderEntry, error) {
return dataprovider.GetDefenderHosts(d.getStartObservationTime(), d.config.EntriesHardLimit)
}
// GetHost returns a defender host by ip, if any
func (d *dbDefender) GetHost(ip string) (*dataprovider.DefenderEntry, error) {
return dataprovider.GetDefenderHostByIP(ip, d.getStartObservationTime())
}
// IsBanned returns true if the specified IP is banned
// and increase ban time if the IP is found.
// This method must be called as soon as the client connects
func (d *dbDefender) IsBanned(ip string) bool {
d.RLock()
if d.baseDefender.isBanned(ip) {
d.RUnlock()
return true
}
d.RUnlock()
_, err := dataprovider.IsDefenderHostBanned(ip)
if err != nil {
// not found or another error, we allow this host
return false
}
increment := d.config.BanTime * d.config.BanTimeIncrement / 100
if increment == 0 {
increment++
}
dataprovider.UpdateDefenderBanTime(ip, increment) //nolint:errcheck
return true
}
// DeleteHost removes the specified IP from the defender lists
func (d *dbDefender) DeleteHost(ip string) bool {
if _, err := d.GetHost(ip); err != nil {
return false
}
return dataprovider.DeleteDefenderHost(ip) == nil
}
// AddEvent adds an event for the given IP.
// This method must be called for clients not yet banned
func (d *dbDefender) AddEvent(ip string, event HostEvent) {
d.RLock()
if d.safeList != nil && d.safeList.isListed(ip) {
d.RUnlock()
return
}
d.RUnlock()
score := d.baseDefender.getScore(event)
host, err := dataprovider.AddDefenderEvent(ip, score, d.getStartObservationTime())
if err != nil {
return
}
if host.Score > d.config.Threshold {
banTime := time.Now().Add(time.Duration(d.config.BanTime) * time.Minute)
err = dataprovider.SetDefenderBanTime(ip, util.GetTimeAsMsSinceEpoch(banTime))
}
if err == nil {
d.cleanup()
}
}
// GetBanTime returns the ban time for the given IP or nil if the IP is not banned
func (d *dbDefender) GetBanTime(ip string) (*time.Time, error) {
host, err := d.GetHost(ip)
if err != nil {
return nil, err
}
if host.BanTime.IsZero() {
return nil, nil
}
return &host.BanTime, nil
}
// GetScore returns the score for the given IP
func (d *dbDefender) GetScore(ip string) (int, error) {
host, err := d.GetHost(ip)
if err != nil {
return 0, err
}
return host.Score, nil
}
func (d *dbDefender) cleanup() {
lastCleanup := d.getLastCleanup()
if lastCleanup.IsZero() || lastCleanup.Add(time.Duration(d.config.ObservationTime)*time.Minute*3).Before(time.Now()) {
// FIXME: this could be racy in rare cases but it is better than acquiring the lock for the cleanup duration
// or always acquiring a read/write lock.
// Concurrent cleanups could happen anyway from multiple SFTPGo instances and should not cause any issues
d.setLastCleanup(time.Now())
expireTime := time.Now().Add(-time.Duration(d.config.ObservationTime+1) * time.Minute)
logger.Debug(logSender, "", "cleanup defender hosts before %v, last cleanup %v", expireTime, lastCleanup)
if err := dataprovider.CleanupDefender(util.GetTimeAsMsSinceEpoch(expireTime)); err != nil {
logger.Error(logSender, "", "defender cleanup error, reset last cleanup to %v", lastCleanup)
d.setLastCleanup(lastCleanup)
}
}
}
func (d *dbDefender) getStartObservationTime() int64 {
t := time.Now().Add(-time.Duration(d.config.ObservationTime) * time.Minute)
return util.GetTimeAsMsSinceEpoch(t)
}
func (d *dbDefender) getLastCleanup() time.Time {
d.RLock()
defer d.RUnlock()
return d.lastCleanup
}
func (d *dbDefender) setLastCleanup(when time.Time) {
d.Lock()
defer d.Unlock()
d.lastCleanup = when
}
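Both the database-backed driver above and the in-memory one below are consumed through the same Defender interface. A minimal sketch of the expected call pattern, assuming the code lives alongside package common (the function names are hypothetical):

// onConnect should run as soon as a client connects: IsBanned also
// increases the ban time for hosts that are already banned
func onConnect(def Defender, ip string) bool {
	return !def.IsBanned(ip)
}

// onLoginFailed records a violation for a client that is not yet banned,
// choosing the event type based on whether the account exists
func onLoginFailed(def Defender, ip string, userExists bool) {
	if userExists {
		def.AddEvent(ip, HostEventLoginFailed)
	} else {
		def.AddEvent(ip, HostEventUserNotFound)
	}
}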

297
common/defenderdb_test.go Normal file

@@ -0,0 +1,297 @@
package common
import (
"encoding/hex"
"encoding/json"
"os"
"path/filepath"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/util"
)
func TestBasicDbDefender(t *testing.T) {
if !isDbDefenderSupported() {
t.Skip("this test is not supported with the current database provider")
}
config := &DefenderConfig{
Enabled: true,
BanTime: 10,
BanTimeIncrement: 2,
Threshold: 5,
ScoreInvalid: 2,
ScoreValid: 1,
ScoreLimitExceeded: 3,
ObservationTime: 15,
EntriesSoftLimit: 1,
EntriesHardLimit: 10,
SafeListFile: "slFile",
BlockListFile: "blFile",
}
_, err := newDBDefender(config)
assert.Error(t, err)
bl := HostListFile{
IPAddresses: []string{"172.16.1.1", "172.16.1.2"},
CIDRNetworks: []string{"10.8.0.0/24"},
}
sl := HostListFile{
IPAddresses: []string{"172.16.1.3", "172.16.1.4"},
CIDRNetworks: []string{"192.168.8.0/24"},
}
blFile := filepath.Join(os.TempDir(), "bl.json")
slFile := filepath.Join(os.TempDir(), "sl.json")
data, err := json.Marshal(bl)
assert.NoError(t, err)
err = os.WriteFile(blFile, data, os.ModePerm)
assert.NoError(t, err)
data, err = json.Marshal(sl)
assert.NoError(t, err)
err = os.WriteFile(slFile, data, os.ModePerm)
assert.NoError(t, err)
config.BlockListFile = blFile
_, err = newDBDefender(config)
assert.Error(t, err)
config.SafeListFile = slFile
d, err := newDBDefender(config)
assert.NoError(t, err)
defender := d.(*dbDefender)
assert.True(t, defender.IsBanned("172.16.1.1"))
assert.False(t, defender.IsBanned("172.16.1.10"))
assert.False(t, defender.IsBanned("10.8.1.3"))
assert.True(t, defender.IsBanned("10.8.0.4"))
assert.False(t, defender.IsBanned("invalid ip"))
hosts, err := defender.GetHosts()
assert.NoError(t, err)
assert.Len(t, hosts, 0)
_, err = defender.GetHost("10.8.0.3")
assert.Error(t, err)
defender.AddEvent("172.16.1.4", HostEventLoginFailed)
defender.AddEvent("192.168.8.4", HostEventUserNotFound)
defender.AddEvent("172.16.1.3", HostEventLimitExceeded)
hosts, err = defender.GetHosts()
assert.NoError(t, err)
assert.Len(t, hosts, 0)
assert.True(t, defender.getLastCleanup().IsZero())
testIP := "123.45.67.89"
defender.AddEvent(testIP, HostEventLoginFailed)
lastCleanup := defender.getLastCleanup()
assert.False(t, lastCleanup.IsZero())
score, err := defender.GetScore(testIP)
assert.NoError(t, err)
assert.Equal(t, 1, score)
hosts, err = defender.GetHosts()
assert.NoError(t, err)
if assert.Len(t, hosts, 1) {
assert.Equal(t, 1, hosts[0].Score)
assert.True(t, hosts[0].BanTime.IsZero())
assert.Empty(t, hosts[0].GetBanTime())
}
host, err := defender.GetHost(testIP)
assert.NoError(t, err)
assert.Equal(t, 1, host.Score)
assert.Empty(t, host.GetBanTime())
banTime, err := defender.GetBanTime(testIP)
assert.NoError(t, err)
assert.Nil(t, banTime)
defender.AddEvent(testIP, HostEventLimitExceeded)
score, err = defender.GetScore(testIP)
assert.NoError(t, err)
assert.Equal(t, 4, score)
hosts, err = defender.GetHosts()
assert.NoError(t, err)
if assert.Len(t, hosts, 1) {
assert.Equal(t, 4, hosts[0].Score)
assert.True(t, hosts[0].BanTime.IsZero())
assert.Empty(t, hosts[0].GetBanTime())
}
defender.AddEvent(testIP, HostEventNoLoginTried)
defender.AddEvent(testIP, HostEventNoLoginTried)
score, err = defender.GetScore(testIP)
assert.NoError(t, err)
assert.Equal(t, 0, score)
banTime, err = defender.GetBanTime(testIP)
assert.NoError(t, err)
assert.NotNil(t, banTime)
hosts, err = defender.GetHosts()
assert.NoError(t, err)
if assert.Len(t, hosts, 1) {
assert.Equal(t, 0, hosts[0].Score)
assert.False(t, hosts[0].BanTime.IsZero())
assert.NotEmpty(t, hosts[0].GetBanTime())
assert.Equal(t, hex.EncodeToString([]byte(testIP)), hosts[0].GetID())
}
host, err = defender.GetHost(testIP)
assert.NoError(t, err)
assert.Equal(t, 0, host.Score)
assert.NotEmpty(t, host.GetBanTime())
// ban time should increase
assert.True(t, defender.IsBanned(testIP))
newBanTime, err := defender.GetBanTime(testIP)
assert.NoError(t, err)
assert.True(t, newBanTime.After(*banTime))
assert.True(t, defender.DeleteHost(testIP))
assert.False(t, defender.DeleteHost(testIP))
// test cleanup
testIP1 := "123.45.67.90"
testIP2 := "123.45.67.91"
testIP3 := "123.45.67.92"
for i := 0; i < 3; i++ {
defender.AddEvent(testIP, HostEventNoLoginTried)
defender.AddEvent(testIP1, HostEventNoLoginTried)
defender.AddEvent(testIP2, HostEventNoLoginTried)
}
hosts, err = defender.GetHosts()
assert.NoError(t, err)
assert.Len(t, hosts, 3)
for _, host := range hosts {
assert.Equal(t, 0, host.Score)
assert.False(t, host.BanTime.IsZero())
assert.NotEmpty(t, host.GetBanTime())
}
defender.AddEvent(testIP3, HostEventLoginFailed)
hosts, err = defender.GetHosts()
assert.NoError(t, err)
assert.Len(t, hosts, 4)
// now set a ban time in the past, so the host will be cleaned up
for _, ip := range []string{testIP1, testIP2} {
err = dataprovider.SetDefenderBanTime(ip, util.GetTimeAsMsSinceEpoch(time.Now().Add(-1*time.Minute)))
assert.NoError(t, err)
}
hosts, err = defender.GetHosts()
assert.NoError(t, err)
assert.Len(t, hosts, 4)
for _, host := range hosts {
switch host.IP {
case testIP:
assert.Equal(t, 0, host.Score)
assert.False(t, host.BanTime.IsZero())
assert.NotEmpty(t, host.GetBanTime())
case testIP3:
assert.Equal(t, 1, host.Score)
assert.True(t, host.BanTime.IsZero())
assert.Empty(t, host.GetBanTime())
default:
assert.Equal(t, 6, host.Score)
assert.True(t, host.BanTime.IsZero())
assert.Empty(t, host.GetBanTime())
}
}
host, err = defender.GetHost(testIP)
assert.NoError(t, err)
assert.Equal(t, 0, host.Score)
assert.False(t, host.BanTime.IsZero())
assert.NotEmpty(t, host.GetBanTime())
host, err = defender.GetHost(testIP3)
assert.NoError(t, err)
assert.Equal(t, 1, host.Score)
assert.True(t, host.BanTime.IsZero())
assert.Empty(t, host.GetBanTime())
// set a negative observation time so the from field in the queries will be in the future
// we should still get the banned hosts
defender.config.ObservationTime = -2
assert.Greater(t, defender.getStartObservationTime(), time.Now().UnixMilli())
hosts, err = defender.GetHosts()
assert.NoError(t, err)
if assert.Len(t, hosts, 1) {
assert.Equal(t, testIP, hosts[0].IP)
assert.Equal(t, 0, hosts[0].Score)
assert.False(t, hosts[0].BanTime.IsZero())
assert.NotEmpty(t, hosts[0].GetBanTime())
}
_, err = defender.GetHost(testIP)
assert.NoError(t, err)
// cleanup db
err = dataprovider.CleanupDefender(util.GetTimeAsMsSinceEpoch(time.Now().Add(10 * time.Minute)))
assert.NoError(t, err)
// the banned host must still be there
hosts, err = defender.GetHosts()
assert.NoError(t, err)
if assert.Len(t, hosts, 1) {
assert.Equal(t, testIP, hosts[0].IP)
assert.Equal(t, 0, hosts[0].Score)
assert.False(t, hosts[0].BanTime.IsZero())
assert.NotEmpty(t, hosts[0].GetBanTime())
}
_, err = defender.GetHost(testIP)
assert.NoError(t, err)
err = dataprovider.SetDefenderBanTime(testIP, util.GetTimeAsMsSinceEpoch(time.Now().Add(-1*time.Minute)))
assert.NoError(t, err)
err = dataprovider.CleanupDefender(util.GetTimeAsMsSinceEpoch(time.Now().Add(10 * time.Minute)))
assert.NoError(t, err)
hosts, err = defender.GetHosts()
assert.NoError(t, err)
assert.Len(t, hosts, 0)
err = os.Remove(slFile)
assert.NoError(t, err)
err = os.Remove(blFile)
assert.NoError(t, err)
}
func TestDbDefenderCleanup(t *testing.T) {
if !isDbDefenderSupported() {
t.Skip("this test is not supported with the current database provider")
}
config := &DefenderConfig{
Enabled: true,
BanTime: 10,
BanTimeIncrement: 2,
Threshold: 5,
ScoreInvalid: 2,
ScoreValid: 1,
ScoreLimitExceeded: 3,
ObservationTime: 15,
EntriesSoftLimit: 1,
EntriesHardLimit: 10,
}
d, err := newDBDefender(config)
assert.NoError(t, err)
defender := d.(*dbDefender)
lastCleanup := defender.getLastCleanup()
assert.True(t, lastCleanup.IsZero())
defender.cleanup()
lastCleanup = defender.getLastCleanup()
assert.False(t, lastCleanup.IsZero())
defender.cleanup()
assert.Equal(t, lastCleanup, defender.getLastCleanup())
defender.setLastCleanup(time.Now().Add(-time.Duration(config.ObservationTime) * time.Minute * 4))
time.Sleep(20 * time.Millisecond)
defender.cleanup()
assert.True(t, lastCleanup.Before(defender.getLastCleanup()))
providerConf := dataprovider.GetProviderConfig()
err = dataprovider.Close()
assert.NoError(t, err)
lastCleanup = time.Now().Add(-time.Duration(config.ObservationTime) * time.Minute * 4)
defender.setLastCleanup(lastCleanup)
defender.cleanup()
// cleanup will fail and so last cleanup should be reset to the previous value
assert.Equal(t, lastCleanup, defender.getLastCleanup())
err = dataprovider.Initialize(providerConf, configDir, true)
assert.NoError(t, err)
}
func isDbDefenderSupported() bool {
// SQLite shares the implementation with other SQL-based provider but it makes no sense
// to use it outside test cases
switch dataprovider.GetProviderStatus().Driver {
case dataprovider.MySQLDataProviderName, dataprovider.PGSQLDataProviderName,
dataprovider.CockroachDataProviderName, dataprovider.SQLiteDataProviderName:
return true
default:
return false
}
}

326
common/defendermem.go Normal file

@@ -0,0 +1,326 @@
package common
import (
"sort"
"time"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/util"
)
type memoryDefender struct {
baseDefender
// IP addresses of the clients trying to connect are stored inside hosts;
// they are moved to banned once the threshold is reached.
// A violation from a banned host will increase the ban time
// based on the configured BanTimeIncrement
hosts map[string]hostScore // the key is the host IP
banned map[string]time.Time // the key is the host IP
}
func newInMemoryDefender(config *DefenderConfig) (Defender, error) {
err := config.validate()
if err != nil {
return nil, err
}
defender := &memoryDefender{
baseDefender: baseDefender{
config: config,
},
hosts: make(map[string]hostScore),
banned: make(map[string]time.Time),
}
if err := defender.Reload(); err != nil {
return nil, err
}
return defender, nil
}
// GetHosts returns hosts that are banned or for which some violations have been detected
func (d *memoryDefender) GetHosts() ([]*dataprovider.DefenderEntry, error) {
d.RLock()
defer d.RUnlock()
var result []*dataprovider.DefenderEntry
for k, v := range d.banned {
if v.After(time.Now()) {
result = append(result, &dataprovider.DefenderEntry{
IP: k,
BanTime: v,
})
}
}
for k, v := range d.hosts {
score := 0
for _, event := range v.Events {
if event.dateTime.Add(time.Duration(d.config.ObservationTime) * time.Minute).After(time.Now()) {
score += event.score
}
}
if score > 0 {
result = append(result, &dataprovider.DefenderEntry{
IP: k,
Score: score,
})
}
}
return result, nil
}
// GetHost returns a defender host by ip, if any
func (d *memoryDefender) GetHost(ip string) (*dataprovider.DefenderEntry, error) {
d.RLock()
defer d.RUnlock()
if banTime, ok := d.banned[ip]; ok {
if banTime.After(time.Now()) {
return &dataprovider.DefenderEntry{
IP: ip,
BanTime: banTime,
}, nil
}
}
if hs, ok := d.hosts[ip]; ok {
score := 0
for _, event := range hs.Events {
if event.dateTime.Add(time.Duration(d.config.ObservationTime) * time.Minute).After(time.Now()) {
score += event.score
}
}
if score > 0 {
return &dataprovider.DefenderEntry{
IP: ip,
Score: score,
}, nil
}
}
return nil, util.NewRecordNotFoundError("host not found")
}
// IsBanned returns true if the specified IP is banned
// and increase ban time if the IP is found.
// This method must be called as soon as the client connects
func (d *memoryDefender) IsBanned(ip string) bool {
d.RLock()
if banTime, ok := d.banned[ip]; ok {
if banTime.After(time.Now()) {
increment := d.config.BanTime * d.config.BanTimeIncrement / 100
if increment == 0 {
increment++
}
d.RUnlock()
// we could save an earlier ban time if there are concurrent updates
// but this should not make much difference. I prefer to hold a read lock
// as long as possible for performance reasons: this method is called each
// time a new client connects and it must be as fast as possible
d.Lock()
d.banned[ip] = banTime.Add(time.Duration(increment) * time.Minute)
d.Unlock()
return true
}
}
defer d.RUnlock()
return d.baseDefender.isBanned(ip)
}
// DeleteHost removes the specified IP from the defender lists
func (d *memoryDefender) DeleteHost(ip string) bool {
d.Lock()
defer d.Unlock()
if _, ok := d.banned[ip]; ok {
delete(d.banned, ip)
return true
}
if _, ok := d.hosts[ip]; ok {
delete(d.hosts, ip)
return true
}
return false
}
// AddEvent adds an event for the given IP.
// This method must be called for clients not yet banned
func (d *memoryDefender) AddEvent(ip string, event HostEvent) {
d.Lock()
defer d.Unlock()
if d.safeList != nil && d.safeList.isListed(ip) {
return
}
// ignore events for already banned hosts
if v, ok := d.banned[ip]; ok {
if v.After(time.Now()) {
return
}
delete(d.banned, ip)
}
score := d.baseDefender.getScore(event)
ev := hostEvent{
dateTime: time.Now(),
score: score,
}
if hs, ok := d.hosts[ip]; ok {
hs.Events = append(hs.Events, ev)
hs.TotalScore = 0
idx := 0
for _, event := range hs.Events {
if event.dateTime.Add(time.Duration(d.config.ObservationTime) * time.Minute).After(time.Now()) {
hs.Events[idx] = event
hs.TotalScore += event.score
idx++
}
}
hs.Events = hs.Events[:idx]
if hs.TotalScore >= d.config.Threshold {
d.banned[ip] = time.Now().Add(time.Duration(d.config.BanTime) * time.Minute)
delete(d.hosts, ip)
d.cleanupBanned()
} else {
d.hosts[ip] = hs
}
} else {
d.hosts[ip] = hostScore{
TotalScore: ev.score,
Events: []hostEvent{ev},
}
d.cleanupHosts()
}
}
func (d *memoryDefender) countBanned() int {
d.RLock()
defer d.RUnlock()
return len(d.banned)
}
func (d *memoryDefender) countHosts() int {
d.RLock()
defer d.RUnlock()
return len(d.hosts)
}
// GetBanTime returns the ban time for the given IP or nil if the IP is not banned
func (d *memoryDefender) GetBanTime(ip string) (*time.Time, error) {
d.RLock()
defer d.RUnlock()
if banTime, ok := d.banned[ip]; ok {
return &banTime, nil
}
return nil, nil
}
// GetScore returns the score for the given IP
func (d *memoryDefender) GetScore(ip string) (int, error) {
d.RLock()
defer d.RUnlock()
score := 0
if hs, ok := d.hosts[ip]; ok {
for _, event := range hs.Events {
if event.dateTime.Add(time.Duration(d.config.ObservationTime) * time.Minute).After(time.Now()) {
score += event.score
}
}
}
return score, nil
}
func (d *memoryDefender) cleanupBanned() {
if len(d.banned) > d.config.EntriesHardLimit {
kvList := make(kvList, 0, len(d.banned))
for k, v := range d.banned {
if v.Before(time.Now()) {
delete(d.banned, k)
}
kvList = append(kvList, kv{
Key: k,
Value: v.UnixNano(),
})
}
// we removed expired IP addresses, if any, above; this could be enough
numToRemove := len(d.banned) - d.config.EntriesSoftLimit
if numToRemove <= 0 {
return
}
sort.Sort(kvList)
for idx, kv := range kvList {
if idx >= numToRemove {
break
}
delete(d.banned, kv.Key)
}
}
}
func (d *memoryDefender) cleanupHosts() {
if len(d.hosts) > d.config.EntriesHardLimit {
kvList := make(kvList, 0, len(d.hosts))
for k, v := range d.hosts {
value := int64(0)
if len(v.Events) > 0 {
value = v.Events[len(v.Events)-1].dateTime.UnixNano()
}
kvList = append(kvList, kv{
Key: k,
Value: value,
})
}
sort.Sort(kvList)
numToRemove := len(d.hosts) - d.config.EntriesSoftLimit
for idx, kv := range kvList {
if idx >= numToRemove {
break
}
delete(d.hosts, kv.Key)
}
}
}
type kv struct {
Key string
Value int64
}
type kvList []kv
func (p kvList) Len() int { return len(p) }
func (p kvList) Less(i, j int) bool { return p[i].Value < p[j].Value }
func (p kvList) Swap(i, j int) { p[i], p[j] = p[j], p[i] }
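The kv and kvList helpers above exist only so cleanupBanned and cleanupHosts can sort entries by timestamp and evict the oldest ones once the hard limit is crossed, shrinking the maps back to the soft limit. A minimal construction sketch, assuming it runs inside package common (the config values are illustrative, not recommendations):

func exampleMemoryDefender() (Defender, error) {
	cfg := &DefenderConfig{
		Enabled:            true,
		BanTime:            30,
		BanTimeIncrement:   50,
		Threshold:          5,
		ScoreInvalid:       2,
		ScoreValid:         1,
		ScoreLimitExceeded: 3,
		ObservationTime:    30,
		// once more than 150 hosts (or bans) are tracked, the oldest
		// entries are evicted until only the 100 most recent remain
		EntriesSoftLimit: 100,
		EntriesHardLimit: 150,
	}
	return newInMemoryDefender(cfg)
}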

134
common/httpauth.go Normal file

@@ -0,0 +1,134 @@
package common
import (
"encoding/csv"
"os"
"strings"
"sync"
"github.com/GehirnInc/crypt/apr1_crypt"
"github.com/GehirnInc/crypt/md5_crypt"
"golang.org/x/crypto/bcrypt"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
const (
// HTTPAuthenticationHeader defines the HTTP authentication header
HTTPAuthenticationHeader = "WWW-Authenticate"
md5CryptPwdPrefix = "$1$"
apr1CryptPwdPrefix = "$apr1$"
)
var (
bcryptPwdPrefixes = []string{"$2a$", "$2$", "$2x$", "$2y$", "$2b$"}
)
// HTTPAuthProvider defines the interface for HTTP auth providers
type HTTPAuthProvider interface {
ValidateCredentials(username, password string) bool
IsEnabled() bool
}
type basicAuthProvider struct {
Path string
sync.RWMutex
Info os.FileInfo
Users map[string]string
}
// NewBasicAuthProvider returns an HTTPAuthProvider implementing Basic Auth
func NewBasicAuthProvider(authUserFile string) (HTTPAuthProvider, error) {
basicAuthProvider := basicAuthProvider{
Path: authUserFile,
Info: nil,
Users: make(map[string]string),
}
return &basicAuthProvider, basicAuthProvider.loadUsers()
}
func (p *basicAuthProvider) IsEnabled() bool {
return p.Path != ""
}
func (p *basicAuthProvider) isReloadNeeded(info os.FileInfo) bool {
p.RLock()
defer p.RUnlock()
return p.Info == nil || p.Info.ModTime() != info.ModTime() || p.Info.Size() != info.Size()
}
func (p *basicAuthProvider) loadUsers() error {
if !p.IsEnabled() {
return nil
}
info, err := os.Stat(p.Path)
if err != nil {
logger.Debug(logSender, "", "unable to stat basic auth users file: %v", err)
return err
}
if p.isReloadNeeded(info) {
r, err := os.Open(p.Path)
if err != nil {
logger.Debug(logSender, "", "unable to open basic auth users file: %v", err)
return err
}
defer r.Close()
reader := csv.NewReader(r)
reader.Comma = ':'
reader.Comment = '#'
reader.TrimLeadingSpace = true
records, err := reader.ReadAll()
if err != nil {
logger.Debug(logSender, "", "unable to parse basic auth users file: %v", err)
return err
}
p.Lock()
defer p.Unlock()
p.Users = make(map[string]string)
for _, record := range records {
if len(record) == 2 {
p.Users[record[0]] = record[1]
}
}
logger.Debug(logSender, "", "number of users loaded for httpd basic auth: %v", len(p.Users))
p.Info = info
}
return nil
}
func (p *basicAuthProvider) getHashedPassword(username string) (string, bool) {
err := p.loadUsers()
if err != nil {
return "", false
}
p.RLock()
defer p.RUnlock()
pwd, ok := p.Users[username]
return pwd, ok
}
// ValidateCredentials returns true if the credentials are valid
func (p *basicAuthProvider) ValidateCredentials(username, password string) bool {
if hashedPwd, ok := p.getHashedPassword(username); ok {
if util.IsStringPrefixInSlice(hashedPwd, bcryptPwdPrefixes) {
err := bcrypt.CompareHashAndPassword([]byte(hashedPwd), []byte(password))
return err == nil
}
if strings.HasPrefix(hashedPwd, md5CryptPwdPrefix) {
crypter := md5_crypt.New()
err := crypter.Verify(hashedPwd, []byte(password))
return err == nil
}
if strings.HasPrefix(hashedPwd, apr1CryptPwdPrefix) {
crypter := apr1_crypt.New()
err := crypter.Verify(hashedPwd, []byte(password))
return err == nil
}
}
return false
}
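A quick usage sketch for the provider above: generate a colon-separated user line with a bcrypt hash (one of the prefixes accepted by ValidateCredentials) and load it. The program is hypothetical and standalone; the file name and credentials are arbitrary.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/bcrypt"

	"github.com/drakkan/sftpgo/v2/common"
)

func main() {
	// build a single "username:hash" line, as parsed by loadUsers
	hash, err := bcrypt.GenerateFromPassword([]byte("secret"), bcrypt.DefaultCost)
	if err != nil {
		log.Fatal(err)
	}
	line := fmt.Sprintf("admin:%s\n", hash)
	if err := os.WriteFile("http_users.txt", []byte(line), 0o600); err != nil {
		log.Fatal(err)
	}
	provider, err := common.NewBasicAuthProvider("http_users.txt")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(provider.ValidateCredentials("admin", "secret")) // true
	fmt.Println(provider.ValidateCredentials("admin", "wrong"))  // false
}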

71
common/httpauth_test.go Normal file

@@ -0,0 +1,71 @@
package common
import (
"os"
"path/filepath"
"runtime"
"testing"
"github.com/stretchr/testify/require"
)
func TestBasicAuth(t *testing.T) {
httpAuth, err := NewBasicAuthProvider("")
require.NoError(t, err)
require.False(t, httpAuth.IsEnabled())
_, err = NewBasicAuthProvider("missing path")
require.Error(t, err)
authUserFile := filepath.Join(os.TempDir(), "http_users.txt")
authUserData := []byte("test1:$2y$05$bcHSED7aO1cfLto6ZdDBOOKzlwftslVhtpIkRhAtSa4GuLmk5mola\n")
err = os.WriteFile(authUserFile, authUserData, os.ModePerm)
require.NoError(t, err)
httpAuth, err = NewBasicAuthProvider(authUserFile)
require.NoError(t, err)
require.True(t, httpAuth.IsEnabled())
require.False(t, httpAuth.ValidateCredentials("test1", "wrong1"))
require.False(t, httpAuth.ValidateCredentials("test2", "password2"))
require.True(t, httpAuth.ValidateCredentials("test1", "password1"))
authUserData = append(authUserData, []byte("test2:$1$OtSSTL8b$bmaCqEksI1e7rnZSjsIDR1\n")...)
err = os.WriteFile(authUserFile, authUserData, os.ModePerm)
require.NoError(t, err)
require.False(t, httpAuth.ValidateCredentials("test2", "wrong2"))
require.True(t, httpAuth.ValidateCredentials("test2", "password2"))
authUserData = append(authUserData, []byte("test2:$apr1$gLnIkRIf$Xr/6aJfmIrihP4b2N2tcs/\n")...)
err = os.WriteFile(authUserFile, authUserData, os.ModePerm)
require.NoError(t, err)
require.False(t, httpAuth.ValidateCredentials("test2", "wrong2"))
require.True(t, httpAuth.ValidateCredentials("test2", "password2"))
authUserData = append(authUserData, []byte("test3:$apr1$gLnIkRIf$Xr/6aJfmIrihP4b2N2tcs/\n")...)
err = os.WriteFile(authUserFile, authUserData, os.ModePerm)
require.NoError(t, err)
require.False(t, httpAuth.ValidateCredentials("test3", "password3"))
authUserData = append(authUserData, []byte("test4:$invalid$gLnIkRIf$Xr/6$aJfmIr$ihP4b2N2tcs/\n")...)
err = os.WriteFile(authUserFile, authUserData, os.ModePerm)
require.NoError(t, err)
require.False(t, httpAuth.ValidateCredentials("test4", "password3"))
if runtime.GOOS != "windows" {
authUserData = append(authUserData, []byte("test5:$apr1$gLnIkRIf$Xr/6aJfmIrihP4b2N2tcs/\n")...)
err = os.WriteFile(authUserFile, authUserData, os.ModePerm)
require.NoError(t, err)
err = os.Chmod(authUserFile, 0001)
require.NoError(t, err)
require.False(t, httpAuth.ValidateCredentials("test5", "password2"))
err = os.Chmod(authUserFile, os.ModePerm)
require.NoError(t, err)
}
authUserData = append(authUserData, []byte("\"foo\"bar\"\r\n")...)
err = os.WriteFile(authUserFile, authUserData, os.ModePerm)
require.NoError(t, err)
require.False(t, httpAuth.ValidateCredentials("test2", "password2"))
err = os.Remove(authUserFile)
require.NoError(t, err)
}

3373
common/protocol_test.go Normal file

File diff suppressed because it is too large

243
common/ratelimiter.go Normal file

@@ -0,0 +1,243 @@
package common
import (
"errors"
"fmt"
"net"
"sort"
"sync"
"sync/atomic"
"time"
"golang.org/x/time/rate"
"github.com/drakkan/sftpgo/v2/util"
)
var (
errNoBucket = errors.New("no bucket found")
errReserve = errors.New("unable to reserve token")
rateLimiterProtocolValues = []string{ProtocolSSH, ProtocolFTP, ProtocolWebDAV, ProtocolHTTP}
)
// RateLimiterType defines the supported rate limiters types
type RateLimiterType int
// Supported rate limiter types
const (
rateLimiterTypeGlobal RateLimiterType = iota + 1
rateLimiterTypeSource
)
// RateLimiterConfig defines the configuration for a rate limiter
type RateLimiterConfig struct {
// Average defines the maximum rate allowed. 0 means disabled
Average int64 `json:"average" mapstructure:"average"`
// Period defines the period as milliseconds. Default: 1000 (1 second).
// The rate is actually defined by dividing average by period.
// So for a rate below 1 req/s, one needs to define a period larger than a second.
Period int64 `json:"period" mapstructure:"period"`
// Burst is the maximum number of requests allowed to go through in the
// same arbitrarily small period of time. Default: 1.
Burst int `json:"burst" mapstructure:"burst"`
// Type defines the rate limiter type:
// - rateLimiterTypeGlobal is a global rate limiter independent from the source
// - rateLimiterTypeSource is a per-source rate limiter
Type int `json:"type" mapstructure:"type"`
// Protocols defines the protocols for this rate limiter.
// Available protocols are: "SSH", "FTP", "DAV", "HTTP".
// A rate limiter with no protocols defined is disabled.
Protocols []string `json:"protocols" mapstructure:"protocols"`
// AllowList defines a list of IP addresses and IP ranges excluded from rate limiting
AllowList []string `json:"allow_list" mapstructure:"allow_list"`
// If the rate limit is exceeded, the defender is enabled, and this is a per-source limiter,
// a new defender event will be generated
GenerateDefenderEvents bool `json:"generate_defender_events" mapstructure:"generate_defender_events"`
// The number of per-ip rate limiters kept in memory will vary between the
// soft and hard limit
EntriesSoftLimit int `json:"entries_soft_limit" mapstructure:"entries_soft_limit"`
EntriesHardLimit int `json:"entries_hard_limit" mapstructure:"entries_hard_limit"`
}
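// An illustrative JSON rendering of this struct (example values, not
// defaults): a per-source limiter (type 2) allowing on average 100
// requests per second with burst 10 for SSH and FTP, exempting a private
// network from limiting:
//
//	{
//	  "average": 100, "period": 1000, "burst": 10, "type": 2,
//	  "protocols": ["SSH", "FTP"], "allow_list": ["192.168.0.0/16"],
//	  "generate_defender_events": true,
//	  "entries_soft_limit": 100, "entries_hard_limit": 150
//	}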
func (r *RateLimiterConfig) isEnabled() bool {
return r.Average > 0 && len(r.Protocols) > 0
}
func (r *RateLimiterConfig) validate() error {
if r.Burst < 1 {
return fmt.Errorf("invalid burst %v. It must be >= 1", r.Burst)
}
if r.Period < 100 {
return fmt.Errorf("invalid period %v. It must be >= 100", r.Period)
}
if r.Type != int(rateLimiterTypeGlobal) && r.Type != int(rateLimiterTypeSource) {
return fmt.Errorf("invalid type %v", r.Type)
}
if r.Type != int(rateLimiterTypeGlobal) {
if r.EntriesSoftLimit <= 0 {
return fmt.Errorf("invalid entries_soft_limit %v", r.EntriesSoftLimit)
}
if r.EntriesHardLimit <= r.EntriesSoftLimit {
return fmt.Errorf("invalid entries_hard_limit %v must be > %v", r.EntriesHardLimit, r.EntriesSoftLimit)
}
}
r.Protocols = util.RemoveDuplicates(r.Protocols)
for _, protocol := range r.Protocols {
if !util.IsStringInSlice(protocol, rateLimiterProtocolValues) {
return fmt.Errorf("invalid protocol %#v", protocol)
}
}
return nil
}
func (r *RateLimiterConfig) getLimiter() *rateLimiter {
limiter := &rateLimiter{
burst: r.Burst,
globalBucket: nil,
generateDefenderEvents: r.GenerateDefenderEvents,
}
var maxDelay time.Duration
period := time.Duration(r.Period) * time.Millisecond
rtl := float64(r.Average*int64(time.Second)) / float64(period)
limiter.rate = rate.Limit(rtl)
if rtl < 1 {
maxDelay = period / 2
} else {
maxDelay = time.Second / (time.Duration(rtl) * 2)
}
if maxDelay > 10*time.Second {
maxDelay = 10 * time.Second
}
limiter.maxDelay = maxDelay
limiter.buckets = sourceBuckets{
buckets: make(map[string]sourceRateLimiter),
hardLimit: r.EntriesHardLimit,
softLimit: r.EntriesSoftLimit,
}
if r.Type != int(rateLimiterTypeSource) {
limiter.globalBucket = rate.NewLimiter(limiter.rate, limiter.burst)
}
return limiter
}
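// A worked example of the computation above, matching the expectations in
// ratelimiter_test.go below: with Average=1 and Period=10000 the rate is
// 1e9/1e10 = 0.1 req/s, which is below 1, so maxDelay = period/2 = 5s;
// with Average=1 and Period=500 the rate is 2 req/s, so
// maxDelay = 1s/(2*2) = 250ms.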
// rateLimiter defines a rate limiter
type rateLimiter struct {
rate rate.Limit
burst int
maxDelay time.Duration
globalBucket *rate.Limiter
buckets sourceBuckets
generateDefenderEvents bool
allowList []func(net.IP) bool
}
// Wait blocks until the limit allows one event to happen
// or returns an error if the time to wait exceeds the max
// allowed delay
func (rl *rateLimiter) Wait(source string) (time.Duration, error) {
if len(rl.allowList) > 0 {
ip := net.ParseIP(source)
if ip != nil {
for idx := range rl.allowList {
if rl.allowList[idx](ip) {
return 0, nil
}
}
}
}
var res *rate.Reservation
if rl.globalBucket != nil {
res = rl.globalBucket.Reserve()
} else {
var err error
res, err = rl.buckets.reserve(source)
if err != nil {
rateLimiter := rate.NewLimiter(rl.rate, rl.burst)
res = rl.buckets.addAndReserve(rateLimiter, source)
}
}
if !res.OK() {
return 0, errReserve
}
delay := res.Delay()
if delay > rl.maxDelay {
res.Cancel()
if rl.generateDefenderEvents && rl.globalBucket == nil {
AddDefenderEvent(source, HostEventLimitExceeded)
}
return delay, fmt.Errorf("rate limit exceed, wait time to respect rate %v, max wait time allowed %v", delay, rl.maxDelay)
}
time.Sleep(delay)
return 0, nil
}
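// A minimal usage sketch (hypothetical caller, not part of the original):
// a protocol handler calls Wait with the client IP before serving a
// request and rejects the connection on error:
//
//	if _, err := limiter.Wait(clientIP); err != nil {
//		// rate exceeded for this source (or globally): drop the connection
//		conn.Close()
//		return
//	}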
type sourceRateLimiter struct {
lastActivity int64
bucket *rate.Limiter
}
func (s *sourceRateLimiter) updateLastActivity() {
atomic.StoreInt64(&s.lastActivity, time.Now().UnixNano())
}
func (s *sourceRateLimiter) getLastActivity() int64 {
return atomic.LoadInt64(&s.lastActivity)
}
type sourceBuckets struct {
sync.RWMutex
buckets map[string]sourceRateLimiter
hardLimit int
softLimit int
}
func (b *sourceBuckets) reserve(source string) (*rate.Reservation, error) {
b.RLock()
defer b.RUnlock()
if src, ok := b.buckets[source]; ok {
src.updateLastActivity()
return src.bucket.Reserve(), nil
}
return nil, errNoBucket
}
func (b *sourceBuckets) addAndReserve(r *rate.Limiter, source string) *rate.Reservation {
b.Lock()
defer b.Unlock()
b.cleanup()
src := sourceRateLimiter{
bucket: r,
}
src.updateLastActivity()
b.buckets[source] = src
return src.bucket.Reserve()
}
func (b *sourceBuckets) cleanup() {
if len(b.buckets) >= b.hardLimit {
numToRemove := len(b.buckets) - b.softLimit
kvList := make(kvList, 0, len(b.buckets))
for k, v := range b.buckets {
kvList = append(kvList, kv{
Key: k,
Value: v.getLastActivity(),
})
}
sort.Sort(kvList)
for idx, kv := range kvList {
if idx >= numToRemove {
break
}
delete(b.buckets, kv.Key)
}
}
}
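// A worked example of the eviction above, mirroring TestLimiterCleanup
// below: with EntriesSoftLimit=1 and EntriesHardLimit=3, the arrival of a
// fourth source triggers cleanup (3 >= hardLimit), which deletes the
// 3-1 = 2 least recently active buckets before the new one is added,
// leaving two buckets in memory.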

148
common/ratelimiter_test.go Normal file

@@ -0,0 +1,148 @@
package common
import (
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/drakkan/sftpgo/v2/util"
)
func TestRateLimiterConfig(t *testing.T) {
config := RateLimiterConfig{}
err := config.validate()
require.Error(t, err)
config.Burst = 1
config.Period = 10
err = config.validate()
require.Error(t, err)
config.Period = 1000
config.Type = 100
err = config.validate()
require.Error(t, err)
config.Type = int(rateLimiterTypeSource)
config.EntriesSoftLimit = 0
err = config.validate()
require.Error(t, err)
config.EntriesSoftLimit = 150
config.EntriesHardLimit = 0
err = config.validate()
require.Error(t, err)
config.EntriesHardLimit = 200
config.Protocols = []string{"unsupported protocol"}
err = config.validate()
require.Error(t, err)
config.Protocols = rateLimiterProtocolValues
err = config.validate()
require.NoError(t, err)
limiter := config.getLimiter()
require.Equal(t, 500*time.Millisecond, limiter.maxDelay)
require.Nil(t, limiter.globalBucket)
config.Type = int(rateLimiterTypeGlobal)
config.Average = 1
config.Period = 10000
limiter = config.getLimiter()
require.Equal(t, 5*time.Second, limiter.maxDelay)
require.NotNil(t, limiter.globalBucket)
config.Period = 100000
limiter = config.getLimiter()
require.Equal(t, 10*time.Second, limiter.maxDelay)
config.Period = 500
config.Average = 1
limiter = config.getLimiter()
require.Equal(t, 250*time.Millisecond, limiter.maxDelay)
}
func TestRateLimiter(t *testing.T) {
config := RateLimiterConfig{
Average: 1,
Period: 1000,
Burst: 1,
Type: int(rateLimiterTypeGlobal),
Protocols: rateLimiterProtocolValues,
}
limiter := config.getLimiter()
_, err := limiter.Wait("")
require.NoError(t, err)
_, err = limiter.Wait("")
require.Error(t, err)
config.Type = int(rateLimiterTypeSource)
config.GenerateDefenderEvents = true
config.EntriesSoftLimit = 5
config.EntriesHardLimit = 10
limiter = config.getLimiter()
source := "192.168.1.2"
_, err = limiter.Wait(source)
require.NoError(t, err)
_, err = limiter.Wait(source)
require.Error(t, err)
// a different source should work
_, err = limiter.Wait(source + "1")
require.NoError(t, err)
allowList := []string{"192.168.1.0/24"}
allowFuncs, err := util.ParseAllowedIPAndRanges(allowList)
assert.NoError(t, err)
limiter.allowList = allowFuncs
for i := 0; i < 5; i++ {
_, err = limiter.Wait(source)
require.NoError(t, err)
}
_, err = limiter.Wait("not an ip")
require.NoError(t, err)
config.Burst = 0
limiter = config.getLimiter()
_, err = limiter.Wait(source)
require.ErrorIs(t, err, errReserve)
}
func TestLimiterCleanup(t *testing.T) {
config := RateLimiterConfig{
Average: 100,
Period: 1000,
Burst: 1,
Type: int(rateLimiterTypeSource),
Protocols: rateLimiterProtocolValues,
EntriesSoftLimit: 1,
EntriesHardLimit: 3,
}
limiter := config.getLimiter()
source1 := "10.8.0.1"
source2 := "10.8.0.2"
source3 := "10.8.0.3"
source4 := "10.8.0.4"
_, err := limiter.Wait(source1)
assert.NoError(t, err)
time.Sleep(20 * time.Millisecond)
_, err = limiter.Wait(source2)
assert.NoError(t, err)
time.Sleep(20 * time.Millisecond)
assert.Len(t, limiter.buckets.buckets, 2)
_, ok := limiter.buckets.buckets[source1]
assert.True(t, ok)
_, ok = limiter.buckets.buckets[source2]
assert.True(t, ok)
_, err = limiter.Wait(source3)
assert.NoError(t, err)
assert.Len(t, limiter.buckets.buckets, 3)
_, ok = limiter.buckets.buckets[source1]
assert.True(t, ok)
_, ok = limiter.buckets.buckets[source2]
assert.True(t, ok)
_, ok = limiter.buckets.buckets[source3]
assert.True(t, ok)
time.Sleep(20 * time.Millisecond)
_, err = limiter.Wait(source4)
assert.NoError(t, err)
assert.Len(t, limiter.buckets.buckets, 2)
_, ok = limiter.buckets.buckets[source3]
assert.True(t, ok)
_, ok = limiter.buckets.buckets[source4]
assert.True(t, ok)
}

200
common/tlsutils.go Normal file

@@ -0,0 +1,200 @@
package common
import (
"crypto/tls"
"crypto/x509"
"crypto/x509/pkix"
"fmt"
"os"
"path/filepath"
"sync"
"time"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
// CertManager defines a TLS certificate manager
type CertManager struct {
certPath string
keyPath string
configDir string
logSender string
sync.RWMutex
caCertificates []string
caRevocationLists []string
cert *tls.Certificate
rootCAs *x509.CertPool
crls []*pkix.CertificateList
}
// Reload tries to reload certificate and CRLs
func (m *CertManager) Reload() error {
errCrt := m.loadCertificate()
errCRLs := m.LoadCRLs()
if errCrt != nil {
return errCrt
}
return errCRLs
}
// loadCertificate loads the configured x509 key pair
func (m *CertManager) loadCertificate() error {
newCert, err := tls.LoadX509KeyPair(m.certPath, m.keyPath)
if err != nil {
logger.Warn(m.logSender, "", "unable to load X509 key pair, cert file %#v key file %#v error: %v",
m.certPath, m.keyPath, err)
return err
}
logger.Debug(m.logSender, "", "TLS certificate %#v successfully loaded", m.certPath)
m.Lock()
defer m.Unlock()
m.cert = &newCert
return nil
}
// GetCertificateFunc returns the loaded certificate
func (m *CertManager) GetCertificateFunc() func(*tls.ClientHelloInfo) (*tls.Certificate, error) {
return func(clientHello *tls.ClientHelloInfo) (*tls.Certificate, error) {
m.RLock()
defer m.RUnlock()
return m.cert, nil
}
}
// IsRevoked returns true if the specified certificate has been revoked
func (m *CertManager) IsRevoked(crt *x509.Certificate, caCrt *x509.Certificate) bool {
m.RLock()
defer m.RUnlock()
if crt == nil || caCrt == nil {
logger.Warn(m.logSender, "", "unable to verify crt %v ca crt %v", crt, caCrt)
return len(m.crls) > 0
}
for _, crl := range m.crls {
if !crl.HasExpired(time.Now()) && caCrt.CheckCRLSignature(crl) == nil {
for _, rc := range crl.TBSCertList.RevokedCertificates {
if rc.SerialNumber.Cmp(crt.SerialNumber) == 0 {
return true
}
}
}
}
return false
}
// LoadCRLs tries to load certificate revocation lists from the given paths
func (m *CertManager) LoadCRLs() error {
if len(m.caRevocationLists) == 0 {
return nil
}
var crls []*pkix.CertificateList
for _, revocationList := range m.caRevocationLists {
if !util.IsFileInputValid(revocationList) {
return fmt.Errorf("invalid root CA revocation list %#v", revocationList)
}
if revocationList != "" && !filepath.IsAbs(revocationList) {
revocationList = filepath.Join(m.configDir, revocationList)
}
crlBytes, err := os.ReadFile(revocationList)
if err != nil {
logger.Warn(m.logSender, "unable to read revocation list %#v", revocationList)
return err
}
crl, err := x509.ParseCRL(crlBytes)
if err != nil {
logger.Warn(m.logSender, "unable to parse revocation list %#v", revocationList)
return err
}
logger.Debug(m.logSender, "", "CRL %#v successfully loaded", revocationList)
crls = append(crls, crl)
}
m.Lock()
defer m.Unlock()
m.crls = crls
return nil
}
// GetRootCAs returns the set of root certificate authorities that servers
// use if required to verify a client certificate
func (m *CertManager) GetRootCAs() *x509.CertPool {
m.RLock()
defer m.RUnlock()
return m.rootCAs
}
// LoadRootCAs tries to load root certificate authorities from the given paths
func (m *CertManager) LoadRootCAs() error {
if len(m.caCertificates) == 0 {
return nil
}
rootCAs := x509.NewCertPool()
for _, rootCA := range m.caCertificates {
if !util.IsFileInputValid(rootCA) {
return fmt.Errorf("invalid root CA certificate %#v", rootCA)
}
if rootCA != "" && !filepath.IsAbs(rootCA) {
rootCA = filepath.Join(m.configDir, rootCA)
}
crt, err := os.ReadFile(rootCA)
if err != nil {
return err
}
if rootCAs.AppendCertsFromPEM(crt) {
logger.Debug(m.logSender, "", "TLS certificate authority %#v successfully loaded", rootCA)
} else {
err := fmt.Errorf("unable to load TLS certificate authority %#v", rootCA)
logger.Warn(m.logSender, "", "%v", err)
return err
}
}
m.Lock()
defer m.Unlock()
m.rootCAs = rootCAs
return nil
}
// SetCACertificates sets the root CA authorities file paths.
// This should not be changed at runtime
func (m *CertManager) SetCACertificates(caCertificates []string) {
m.caCertificates = caCertificates
}
// SetCARevocationLists sets the CA revocation lists file paths.
// This should not be changed at runtime
func (m *CertManager) SetCARevocationLists(caRevocationLists []string) {
m.caRevocationLists = caRevocationLists
}
// NewCertManager creates a new certificate manager
func NewCertManager(certificateFile, certificateKeyFile, configDir, logSender string) (*CertManager, error) {
manager := &CertManager{
cert: nil,
certPath: certificateFile,
keyPath: certificateKeyFile,
configDir: configDir,
logSender: logSender,
}
err := manager.loadCertificate()
if err != nil {
return nil, err
}
return manager, nil
}
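// A minimal wiring sketch (hypothetical, not part of the original): the
// manager integrates with crypto/tls through GetCertificateFunc, so
// Reload can swap certificates without restarting listeners, and
// GetRootCAs feeds ClientCAs when mutual TLS is required:
//
//	m, err := NewCertManager("server.crt", "server.key", "/etc/sftpgo", "sftpd")
//	if err != nil {
//		return err
//	}
//	cfg := &tls.Config{
//		GetCertificate: m.GetCertificateFunc(),
//		ClientCAs:      m.GetRootCAs(),
//		ClientAuth:     tls.VerifyClientCertIfGiven,
//	}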

386
common/tlsutils_test.go Normal file

@@ -0,0 +1,386 @@
package common
import (
"crypto/tls"
"crypto/x509"
"os"
"path/filepath"
"testing"
"github.com/stretchr/testify/assert"
)
const (
serverCert = `-----BEGIN CERTIFICATE-----
MIIEIDCCAgigAwIBAgIRAPOR9zTkX35vSdeyGpF8Rn8wDQYJKoZIhvcNAQELBQAw
EzERMA8GA1UEAxMIQ2VydEF1dGgwHhcNMjEwMTAyMjEyMjU1WhcNMjIwNzAyMjEz
MDUxWjARMQ8wDQYDVQQDEwZzZXJ2ZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAw
ggEKAoIBAQCte0PJhCTNqTiqdwk/s4JanKIMKUVWr2u94a+JYy5gJ9xYXrQ49SeN
m+fwhTAOqctP5zNVkFqxlBytJZg3pqCKqRoOOl1qVgL3F3o7JdhZGi67aw8QMLPx
tLPpYWnnrlUQoXRJdTlqkDqO8lOZl9HO5oZeidPZ7r5BVD6ZiujAC6Zg0jIc+EPt
qhaUJ1CStoAeRf1rNWKmDsLv5hEaDWoaHF9sNVzDQg6atZ3ici00qQj+uvEZo8mL
k6egg3rqsTv9ml2qlrRgFumt99J60hTt3tuQaAruHY80O9nGy3SCXC11daa7gszH
ElCRvhUVoOxRtB54YBEtJ0gEpFnTO9J1AgMBAAGjcTBvMA4GA1UdDwEB/wQEAwID
uDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwHQYDVR0OBBYEFAgDXwPV
nhztNz+H20iNWgoIx8adMB8GA1UdIwQYMBaAFO1yCNAGr/zQTJIi8lw3w5OiuBvM
MA0GCSqGSIb3DQEBCwUAA4ICAQCR5kgIb4vAtrtsXD24n6RtU1yIXHPLNmDStVrH
uaMYNnHlLhRlQFCjHhjWvZ89FQC7FeNOITc3FpibJySyw7JfnsyEOGxEbcAS4uLB
2pdAiJPqdQtxIVcyi5vu53m1T5tm0sy8sBrGxU466aDQ8VGqjcjfTwNIyoFMd3p/
ezFRvg2BudwU9hqApgfHfLi4WCuI3hLO2tbmgDinyH0HI0YYNNweGpiBYbTLF4Tx
H6vHgD9USMZeu4+HX0IIsBiHQD7TTIe5ceREkPcNPd5qTpIvT3zKQ/KwwT90/zjP
aWmz6pLxBfjRu7MY/bDfxfRUqsrLYJCVBoaDVRWR9rhiPIFkC5JzoWD/4hdj2iis
N0+OOaJ77L+/ArFprE+7Fu3cSdYlfiNjV8R5kE29cAxKLI92CjAiTKrEuxKcQPKO
+taWNKIYYjEDZwVnzlkTIl007X0RBuzu9gh4w5NwJdt8ZOJAp0JV0Cq+UvG+FC/v
lYk82E6j1HKhf4CXmrjsrD1Fyu41mpVFOpa2ATiFGvms913MkXuyO8g99IllmDw1
D7/PN4Qe9N6Zm7yoKZM0IUw2v+SUMIdOAZ7dptO9ZjtYOfiAIYN3jM8R4JYgPiuD
DGSM9LJBJxCxI/DiO1y1Z3n9TcdDQYut8Gqdi/aYXw2YeqyHXosX5Od3vcK/O5zC
pOJTYQ==
-----END CERTIFICATE-----`
serverKey = `-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEArXtDyYQkzak4qncJP7OCWpyiDClFVq9rveGviWMuYCfcWF60
OPUnjZvn8IUwDqnLT+czVZBasZQcrSWYN6agiqkaDjpdalYC9xd6OyXYWRouu2sP
EDCz8bSz6WFp565VEKF0SXU5apA6jvJTmZfRzuaGXonT2e6+QVQ+mYrowAumYNIy
HPhD7aoWlCdQkraAHkX9azVipg7C7+YRGg1qGhxfbDVcw0IOmrWd4nItNKkI/rrx
GaPJi5OnoIN66rE7/Zpdqpa0YBbprffSetIU7d7bkGgK7h2PNDvZxst0glwtdXWm
u4LMxxJQkb4VFaDsUbQeeGARLSdIBKRZ0zvSdQIDAQABAoIBAF4sI8goq7HYwqIG
rEagM4rsrCrd3H4KC/qvoJJ7/JjGCp8OCddBfY8pquat5kCPe4aMgxlXm2P6evaj
CdZr5Ypf8Xz3we4PctyfKgMhsCfuRqAGpc6sIYJ8DY4LC2pxAExe2LlnoRtv39np
QeiGuaYPDbIUL6SGLVFZYgIHngFhbDYfL83q3Cb/PnivUGFvUVQCfRBUKO2d8KYq
TrVB5BWD2GrHor24ApQmci1OOqfbkIevkK6bk8HUfSZiZGI9LUQiPHMxi5k2x43J
nIwhZnW2N28dorKnWHg2vh7viGvinVRZ3MEyX150oCw/L6SYM4fqR6t2ZSBgNQHT
ZNoDtwECgYEA4lXMgtYqKuSlZ3TKfxAj03tJ/gbRdKcUCEGXEbdpY70tTu6KESZS
etid4Ut/sWEoPTJsgYiGbgJl571t1O8oR1UZYgh9hBGHLV6UEIt9n2PbExhE2vL3
SB7+LfO+tMvM4qKUBN+uy4GpU0NiyEEecw4x4S7MRSyHFRIDR7B6RV0CgYEAxDgS
mDaNUfSdfB5mXekLUJAwqeKRdL9RjXYaHbnoZ5kIwQ73tFikRwyTsLQwMhjE1l3z
MItTzIAyTf/BlK3dsp6bHTaT7hXIjHBsuKATN5qAuUpzTrg9+QaCawVSlQgNeF3a
iyfD4dVp66Bzn3gO757TWqmroBZ2e1owbAQvF/kCgYAKT/Jze6KMNcK7hfy78VZQ
imuCoXjlob8t6R8i9YJdwv7Pe9rakS5s3nXDEBePU2fr8eIzvK6zUHSoLF9WtlbV
eTEg4FYnsEzCam7AmjptCrWulwp8F1ng9ViLa3Gi9y4snU+1MSPbrdqzKnzTtvPW
Ni1bnzA7bp3w/dMcbxQDGQKBgB50hY5SiUS7LuZg4YqZ7UOn3aXAoMr6FvJZ7lvG
yyepPQ6aACBh0b2lWhcHIKPl7EdJdcGHHo6TJzusAqPNCKf8rh6upe9COkpx+K3/
SnxK4sffol4JgrTwKbXqsZKoGU8hYhZPKbwXn8UOtmN+AvN2N1/PDfBfDCzBJtrd
G2IhAoGBAN19976xAMDjKb2+wd/mQYA2fR7E8lodxdX3LDnblYmndTKY67nVo94M
FHPKZSN590HkFJ+wmChnOrqjtosY+N25CKMS7939EUIDrq+B+bYTWM/gcwdLXNUk
Rygw/078Z3ZDJamXmyez5WpeLFrrbmI8sLnBBmSjQvMb6vCEtQ2Z
-----END RSA PRIVATE KEY-----`
caCRT = `-----BEGIN CERTIFICATE-----
MIIE5jCCAs6gAwIBAgIBATANBgkqhkiG9w0BAQsFADATMREwDwYDVQQDEwhDZXJ0
QXV0aDAeFw0yMTAxMDIyMTIwNTVaFw0yMjA3MDIyMTMwNTJaMBMxETAPBgNVBAMT
CENlcnRBdXRoMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA4Tiho5xW
AC15JRkMwfp3/TJwI2As7MY5dele5cmdr5bHAE+sRKqC+Ti88OJWCV5saoyax/1S
CjxJlQMZMl169P1QYJskKjdG2sdv6RLWLMgwSNRRjxp/Bw9dHdiEb9MjLgu28Jro
9peQkHcRHeMf5hM9WvlIJGrdzbC4hUehmqggcqgARainBkYjf0SwuWxHeu4nMqkp
Ak5tcSTLCjHfEFHZ9Te0TIPG5YkWocQKyeLgu4lvuU+DD2W2lym+YVUtRMGs1Env
k7p+N0DcGU26qfzZ2sF5ZXkqm7dBsGQB9pIxwc2Q8T1dCIyP9OQCKVILdc5aVFf1
cryQFHYzYNNZXFlIBims5VV5Mgfp8ESHQSue+v6n6ykecLEyKt1F1Y/MWY/nWUSI
8zdq83jdBAZVjo9MSthxVn57/06s/hQca65IpcTZV2gX0a+eRlAVqaRbAhL3LaZe
bYsW3WHKoUOftwemuep3nL51TzlXZVL7Oz/ClGaEOsnGG9KFO6jh+W768qC0zLQI
CdE7v2Zex98sZteHCg9fGJHIaYoF0aJG5P3WI5oZf2fy7UIYN9ADLFZiorCXAZEh
CSU6mDoRViZ4RGR9GZxbDZ9KYn7O8M/KCR72bkQg73TlMsk1zSXEw0MKLUjtsw6c
rZ0Jt8t3sRatHO3JrYHALMt9vZfyNCZp0IsCAwEAAaNFMEMwDgYDVR0PAQH/BAQD
AgEGMBIGA1UdEwEB/wQIMAYBAf8CAQAwHQYDVR0OBBYEFO1yCNAGr/zQTJIi8lw3
w5OiuBvMMA0GCSqGSIb3DQEBCwUAA4ICAQA6gCNuM7r8mnx674dm31GxBjQy5ZwB
7CxDzYEvL/oiZ3Tv3HlPfN2LAAsJUfGnghh9DOytenL2CTZWjl/emP5eijzmlP+9
zva5I6CIMCf/eDDVsRdO244t0o4uG7+At0IgSDM3bpVaVb4RHZNjEziYChsEYY8d
HK6iwuRSvFniV6yhR/Vj1Ymi9yZ5xclqseLXiQnUB0PkfIk23+7s42cXB16653fH
O/FsPyKBLiKJArizLYQc12aP3QOrYoYD9+fAzIIzew7A5C0aanZCGzkuFpO6TRlD
Tb7ry9Gf0DfPpCgxraH8tOcmnqp/ka3hjqo/SRnnTk0IFrmmLdarJvjD46rKwBo4
MjyAIR1mQ5j8GTlSFBmSgETOQ/EYvO3FPLmra1Fh7L+DvaVzTpqI9fG3TuyyY+Ri
Fby4ycTOGSZOe5Fh8lqkX5Y47mCUJ3zHzOA1vUJy2eTlMRGpu47Eb1++Vm6EzPUP
2EF5aD+zwcssh+atZvQbwxpgVqVcyLt91RSkKkmZQslh0rnlTb68yxvUnD3zw7So
o6TAf9UvwVMEvdLT9NnFd6hwi2jcNte/h538GJwXeBb8EkfpqLKpTKyicnOdkamZ
7E9zY8SHNRYMwB9coQ/W8NvufbCgkvOoLyMXk5edbXofXl3PhNGOlraWbghBnzf5
r3rwjFsQOoZotA==
-----END CERTIFICATE-----`
caKey = `-----BEGIN RSA PRIVATE KEY-----
MIIJKQIBAAKCAgEA4Tiho5xWAC15JRkMwfp3/TJwI2As7MY5dele5cmdr5bHAE+s
RKqC+Ti88OJWCV5saoyax/1SCjxJlQMZMl169P1QYJskKjdG2sdv6RLWLMgwSNRR
jxp/Bw9dHdiEb9MjLgu28Jro9peQkHcRHeMf5hM9WvlIJGrdzbC4hUehmqggcqgA
RainBkYjf0SwuWxHeu4nMqkpAk5tcSTLCjHfEFHZ9Te0TIPG5YkWocQKyeLgu4lv
uU+DD2W2lym+YVUtRMGs1Envk7p+N0DcGU26qfzZ2sF5ZXkqm7dBsGQB9pIxwc2Q
8T1dCIyP9OQCKVILdc5aVFf1cryQFHYzYNNZXFlIBims5VV5Mgfp8ESHQSue+v6n
6ykecLEyKt1F1Y/MWY/nWUSI8zdq83jdBAZVjo9MSthxVn57/06s/hQca65IpcTZ
V2gX0a+eRlAVqaRbAhL3LaZebYsW3WHKoUOftwemuep3nL51TzlXZVL7Oz/ClGaE
OsnGG9KFO6jh+W768qC0zLQICdE7v2Zex98sZteHCg9fGJHIaYoF0aJG5P3WI5oZ
f2fy7UIYN9ADLFZiorCXAZEhCSU6mDoRViZ4RGR9GZxbDZ9KYn7O8M/KCR72bkQg
73TlMsk1zSXEw0MKLUjtsw6crZ0Jt8t3sRatHO3JrYHALMt9vZfyNCZp0IsCAwEA
AQKCAgAV+ElERYbaI5VyufvVnFJCH75ypPoc6sVGLEq2jbFVJJcq/5qlZCC8oP1F
Xj7YUR6wUiDzK1Hqb7EZ2SCHGjlZVrCVi+y+NYAy7UuMZ+r+mVSkdhmypPoJPUVv
GOTqZ6VB46Cn3eSl0WknvoWr7bD555yPmEuiSc5zNy74yWEJTidEKAFGyknowcTK
sG+w1tAuPLcUKQ44DGB+rgEkcHL7C5EAa7upzx0C3RmZFB+dTAVyJdkBMbFuOhTS
sB7DLeTplR7/4mp9da7EQw51ZXC1DlZOEZt++4/desXsqATNAbva1OuzrLG7mMKe
N/PCBh/aERQcsCvgUmaXqGQgqN1Jhw8kbXnjZnVd9iE7TAh7ki3VqNy1OMgTwOex
bBYWaCqHuDYIxCjeW0qLJcn0cKQ13FVYrxgInf4Jp82SQht5b/zLL3IRZEyKcLJF
kL6g1wlmTUTUX0z8eZzlM0ZCrqtExjgElMO/rV971nyNV5WU8Og3NmE8/slqMrmJ
DlrQr9q0WJsDKj1IMe46EUM6ix7bbxC5NIfJ96dgdxZDn6ghjca6iZYqqUACvmUj
cq08s3R4Ouw9/87kn11wwGBx2yDueCwrjKEGc0RKjweGbwu0nBxOrkJ8JXz6bAv7
1OKfYaX3afI9B8x4uaiuRs38oBQlg9uAYFfl4HNBPuQikGLmsQKCAQEA8VjFOsaz
y6NMZzKXi7WZ48uu3ed5x3Kf6RyDr1WvQ1jkBMv9b6b8Gp1CRnPqviRBto9L8QAg
bCXZTqnXzn//brskmW8IZgqjAlf89AWa53piucu9/hgidrHRZobs5gTqev28uJdc
zcuw1g8c3nCpY9WeTjHODzX5NXYRLFpkazLfYa6c8Q9jZR4KKrpdM+66fxL0JlOd
7dN0oQtEqEAugsd3cwkZgvWhY4oM7FGErrZoDLy273ZdJzi/vU+dThyVzfD8Ab8u
VxxuobVMT/S608zbe+uaiUdov5s96OkCl87403UNKJBH+6LNb3rjBBLE9NPN5ET9
JLQMrYd+zj8jQwKCAQEA7uU5I9MOufo9bIgJqjY4Ie1+Ex9DZEMUYFAvGNCJCVcS
mwOdGF8AWzIavTLACmEDJO7t/OrBdoo4L7IEsCNjgA3WiIwIMiWUVqveAGUMEXr6
TRI5EolV6FTqqIP6AS+BAeBq7G1ELgsTrWNHh11rW3+3kBMuOCn77PUQ8WHwcq/r
teZcZn4Ewcr6P7cBODgVvnBPhe/J8xHS0HFVCeS1CvaiNYgees5yA80Apo9IPjDJ
YWawLjmH5wUBI5yDFVp067wjqJnoKPSoKwWkZXqUk+zgFXx5KT0gh/c5yh1frASp
q6oaYnHEVC5qj2SpT1GFLonTcrQUXiSkiUudvNu1GQKCAQEAmko+5GFtRe0ihgLQ
4S76r6diJli6AKil1Fg3U1r6zZpBQ1PJtJxTJQyN9w5Z7q6tF/GqAesrzxevQdvQ
rCImAPtA3ZofC2UXawMnIjWHHx6diNvYnV1+gtUQ4nO1dSOFZ5VZFcUmPiZO6boF
oaryj3FcX+71JcJCjEvrlKhA9Es0hXUkvfMxfs5if4he1zlyHpTWYr4oA4egUugq
P0mwskikc3VIyvEO+NyjgFxo72yLPkFSzemkidN8uKDyFqKtnlfGM7OuA2CY1WZa
3+67lXWshx9KzyJIs92iCYkU8EoPxtdYzyrV6efdX7x27v60zTOut5TnJJS6WiF6
Do5MkwKCAQAxoR9IyP0DN/BwzqYrXU42Bi+t603F04W1KJNQNWpyrUspNwv41yus
xnD1o0hwH41Wq+h3JZIBfV+E0RfWO9Pc84MBJQ5C1LnHc7cQH+3s575+Km3+4tcd
CB8j2R8kBeloKWYtLdn/Mr/ownpGreqyvIq2/LUaZ+Z1aMgXTYB1YwS16mCBzmZQ
mEl62RsAwe4KfSyYJ6OtwqMoOJMxFfliiLBULK4gVykqjvk2oQeiG+KKQJoTUFJi
dRCyhD5bPkqR+qjxyt+HOqSBI4/uoROi05AOBqjpH1DVzk+MJKQOiX1yM0l98CKY
Vng+x+vAla/0Zh+ucajVkgk4mKPxazdpAoIBAQC17vWk4KYJpF2RC3pKPcQ0PdiX
bN35YNlvyhkYlSfDNdyH3aDrGiycUyW2mMXUgEDFsLRxHMTL+zPC6efqO6sTAJDY
cBptsW4drW/qo8NTx3dNOisLkW+mGGJOR/w157hREFr29ymCVMYu/Z7fVWIeSpCq
p3u8YX8WTljrxwSczlGjvpM7uJx3SfYRM4TUoy+8wU8bK74LywLa5f60bQY6Dye0
Gqd9O6OoPfgcQlwjC5MiAofeqwPJvU0hQOPoehZyNLAmOCWXTYWaTP7lxO1r6+NE
M3hGYqW3W8Ixua71OskCypBZg/HVlIP/lzjRzdx+VOB2hbWVth2Iup/Z1egW
-----END RSA PRIVATE KEY-----`
caCRL = `-----BEGIN X509 CRL-----
MIICpzCBkAIBATANBgkqhkiG9w0BAQsFADATMREwDwYDVQQDEwhDZXJ0QXV0aBcN
MjEwMTAyMjEzNDA1WhcNMjMwMTAyMjEzNDA1WjAkMCICEQC+l04DbHWMyC3fG09k
VXf+Fw0yMTAxMDIyMTM0MDVaoCMwITAfBgNVHSMEGDAWgBTtcgjQBq/80EySIvJc
N8OTorgbzDANBgkqhkiG9w0BAQsFAAOCAgEAEJ7z+uNc8sqtxlOhSdTGDzX/xput
E857kFQkSlMnU2whQ8c+XpYrBLA5vIZJNSSwohTpM4+zVBX/bJpmu3wqqaArRO9/
YcW5mQk9Anvb4WjQW1cHmtNapMTzoC9AiYt/OWPfy+P6JCgCr4Hy6LgQyIRL6bM9
VYTalolOm1qa4Y5cIeT7iHq/91mfaqo8/6MYRjLl8DOTROpmw8OS9bCXkzGKdCat
AbAzwkQUSauyoCQ10rpX+Y64w9ng3g4Dr20aCqPf5osaqplEJ2HTK8ljDTidlslv
9anQj8ax3Su89vI8+hK+YbfVQwrThabgdSjQsn+veyx8GlP8WwHLAQ379KjZjWg+
OlOSwBeU1vTdP0QcB8X5C2gVujAyuQekbaV86xzIBOj7vZdfHZ6ee30TZ2FKiMyg
7/N2OqW0w77ChsjB4MSHJCfuTgIeg62GzuZXLM+Q2Z9LBdtm4Byg+sm/P52adOEg
gVb2Zf4KSvsAmA0PIBlu449/QXUFcMxzLFy7mwTeZj2B4Ln0Hm0szV9f9R8MwMtB
SyLYxVH+mgqaR6Jkk22Q/yYyLPaELfafX5gp/AIXG8n0zxfVaTvK3auSgb1Q6ZLS
5QH9dSIsmZHlPq7GoSXmKpMdjUL8eaky/IMteioyXgsBiATzl5L2dsw6MTX3MDF0
QbDK+MzhmbKfDxs=
-----END X509 CRL-----`
client1Crt = `-----BEGIN CERTIFICATE-----
MIIEITCCAgmgAwIBAgIRAIppZHoj1hM80D7WzTEKLuAwDQYJKoZIhvcNAQELBQAw
EzERMA8GA1UEAxMIQ2VydEF1dGgwHhcNMjEwMTAyMjEyMzEwWhcNMjIwNzAyMjEz
MDUxWjASMRAwDgYDVQQDEwdjbGllbnQxMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEAoKbYY9MdF2kF/nhBESIiZTdVYtA8XL9xrIZyDj9EnCiTxHiVbJtH
XVwszqSl5TRrotPmnmAQcX3r8OCk+z+RQZ0QQj257P3kG6q4rNnOcWCS5xEd20jP
yhQ3m+hMGfZsotNTQze1ochuQgLUN6IPyPxZkH22ia3jX4iu1eo/QxeLYHj1UHw4
3Cii9yE+j5kPUC21xmnrGKdUrB55NYLXHx6yTIqYR5znSOVB8oJi18/hwdZmH859
DHhm0Hx1HrS+jbjI3+CMorZJ3WUyNf+CkiVLD3xYutPbxzEpwiqkG/XYzLH0habT
cDcILo18n+o3jvem2KWBrDhyairjIDscwQIDAQABo3EwbzAOBgNVHQ8BAf8EBAMC
A7gwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMB0GA1UdDgQWBBSJ5GIv
zIrE4ZSQt2+CGblKTDswizAfBgNVHSMEGDAWgBTtcgjQBq/80EySIvJcN8OTorgb
zDANBgkqhkiG9w0BAQsFAAOCAgEALh4f5GhvNYNou0Ab04iQBbLEdOu2RlbK1B5n
K9P/umYenBHMY/z6HT3+6tpcHsDuqE8UVdq3f3Gh4S2Gu9m8PRitT+cJ3gdo9Plm
3rD4ufn/s6rGg3ppydXcedm17492tbccUDWOBZw3IO/ASVq13WPgT0/Kev7cPq0k
sSdSNhVeXqx8Myc2/d+8GYyzbul2Kpfa7h9i24sK49E9ftnSmsIvngONo08eT1T0
3wAOyK2981LIsHaAWcneShKFLDB6LeXIT9oitOYhiykhFlBZ4M1GNlSNfhQ8IIQP
xbqMNXCLkW4/BtLhGEEcg0QVso6Kudl9rzgTfQknrdF7pHp6rS46wYUjoSyIY6dl
oLmnoAVJX36J3QPWelePI9e07X2wrTfiZWewwgw3KNRWjd6/zfPLe7GoqXnK1S2z
PT8qMfCaTwKTtUkzXuTFvQ8bAo2My/mS8FOcpkt2oQWeOsADHAUX7fz5BCoa2DL3
k/7Mh4gVT+JYZEoTwCFuYHgMWFWe98naqHi9lB4yR981p1QgXgxO7qBeipagKY1F
LlH1iwXUqZ3MZnkNA+4e1Fglsw3sa/rC+L98HnznJ/YbTfQbCP6aQ1qcOymrjMud
7MrFwqZjtd/SK4Qx1VpK6jGEAtPgWBTUS3p9ayg6lqjMBjsmySWfvRsDQbq6P5Ct
O/e3EH8=
-----END CERTIFICATE-----`
client1Key = `-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEAoKbYY9MdF2kF/nhBESIiZTdVYtA8XL9xrIZyDj9EnCiTxHiV
bJtHXVwszqSl5TRrotPmnmAQcX3r8OCk+z+RQZ0QQj257P3kG6q4rNnOcWCS5xEd
20jPyhQ3m+hMGfZsotNTQze1ochuQgLUN6IPyPxZkH22ia3jX4iu1eo/QxeLYHj1
UHw43Cii9yE+j5kPUC21xmnrGKdUrB55NYLXHx6yTIqYR5znSOVB8oJi18/hwdZm
H859DHhm0Hx1HrS+jbjI3+CMorZJ3WUyNf+CkiVLD3xYutPbxzEpwiqkG/XYzLH0
habTcDcILo18n+o3jvem2KWBrDhyairjIDscwQIDAQABAoIBAEBSjVFqtbsp0byR
aXvyrtLX1Ng7h++at2jca85Ihq//jyqbHTje8zPuNAKI6eNbmb0YGr5OuEa4pD9N
ssDmMsKSoG/lRwwcm7h4InkSvBWpFShvMgUaohfHAHzsBYxfnh+TfULsi0y7c2n6
t/2OZcOTRkkUDIITnXYiw93ibHHv2Mv2bBDu35kGrcK+c2dN5IL5ZjTjMRpbJTe2
44RBJbdTxHBVSgoGBnugF+s2aEma6Ehsj70oyfoVpM6Aed5kGge0A5zA1JO7WCn9
Ay/DzlULRXHjJIoRWd2NKvx5n3FNppUc9vJh2plRHalRooZ2+MjSf8HmXlvG2Hpb
ScvmWgECgYEA1G+A/2KnxWsr/7uWIJ7ClcGCiNLdk17Pv3DZ3G4qUsU2ITftfIbb
tU0Q/b19na1IY8Pjy9ptP7t74/hF5kky97cf1FA8F+nMj/k4+wO8QDI8OJfzVzh9
PwielA5vbE+xmvis5Hdp8/od1Yrc/rPSy2TKtPFhvsqXjqoUmOAjDP8CgYEAwZjH
9dt1sc2lx/rMxihlWEzQ3JPswKW9/LJAmbRBoSWF9FGNjbX7uhWtXRKJkzb8ZAwa
88azluNo2oftbDD/+jw8b2cDgaJHlLAkSD4O1D1RthW7/LKD15qZ/oFsRb13NV85
ZNKtwslXGbfVNyGKUVFm7fVA8vBAOUey+LKDFj8CgYEAg8WWstOzVdYguMTXXuyb
ruEV42FJaDyLiSirOvxq7GTAKuLSQUg1yMRBIeQEo2X1XU0JZE3dLodRVhuO4EXP
g7Dn4X7Th9HSvgvNuIacowWGLWSz4Qp9RjhGhXhezUSx2nseY6le46PmFavJYYSR
4PBofMyt4PcyA6Cknh+KHmkCgYEAnTriG7ETE0a7v4DXUpB4TpCEiMCy5Xs2o8Z5
ZNva+W+qLVUWq+MDAIyechqeFSvxK6gRM69LJ96lx+XhU58wJiFJzAhT9rK/g+jS
bsHH9WOfu0xHkuHA5hgvvV2Le9B2wqgFyva4HJy82qxMxCu/VG/SMqyfBS9OWbb7
ibQhdq0CgYAl53LUWZsFSZIth1vux2LVOsI8C3X1oiXDGpnrdlQ+K7z57hq5EsRq
GC+INxwXbvKNqp5h0z2MvmKYPDlGVTgw8f8JjM7TkN17ERLcydhdRrMONUryZpo8
1xTob+8blyJgfxZUIAKbMbMbIiU0WAF0rfD/eJJwS4htOW/Hfv4TGA==
-----END RSA PRIVATE KEY-----`
// client 2 crt is revoked
client2Crt = `-----BEGIN CERTIFICATE-----
MIIEITCCAgmgAwIBAgIRAL6XTgNsdYzILd8bT2RVd/4wDQYJKoZIhvcNAQELBQAw
EzERMA8GA1UEAxMIQ2VydEF1dGgwHhcNMjEwMTAyMjEyMzIwWhcNMjIwNzAyMjEz
MDUxWjASMRAwDgYDVQQDEwdjbGllbnQyMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEA6xjW5KQR3/OFQtV5M75WINqQ4AzXSu6DhSz/yumaaQZP/UxY+6hi
jcrFzGo9MMie/Sza8DhkXOFAl2BelUubrOeB2cl+/Gr8OCyRi2Gv6j3zCsuN/4jQ
tNaoez/IbkDvI3l/ZpzBtnuNY2RiemGgHuORXHRVf3qVlsw+npBIRW5rM2HkO/xG
oZjeBErWVu390Lyn+Gvk2TqQDnkutWnxUC60/zPlHhXZ4BwaFAekbSnjsSDB1YFM
s8HwW4oBryoxdj3/+/qLrBHt75IdLw3T7/V1UDJQM3EvSQOr12w4egpldhtsC871
nnBQZeY6qA5feffIwwg/6lJm70o6S6OX6wIDAQABo3EwbzAOBgNVHQ8BAf8EBAMC
A7gwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMB0GA1UdDgQWBBTB84v5
t9HqhLhMODbn6oYkEQt3KzAfBgNVHSMEGDAWgBTtcgjQBq/80EySIvJcN8OTorgb
zDANBgkqhkiG9w0BAQsFAAOCAgEALGtBCve5k8tToL3oLuXp/oSik6ovIB/zq4I/
4zNMYPU31+ZWz6aahysgx1JL1yqTa3Qm8o2tu52MbnV10dM7CIw7c/cYa+c+OPcG
5LF97kp13X+r2axy+CmwM86b4ILaDGs2Qyai6VB6k7oFUve+av5o7aUrNFpqGCJz
HWdtHZSVA3JMATzy0TfWanwkzreqfdw7qH0yZ9bDURlBKAVWrqnCstva9jRuv+AI
eqxr/4Ro986TFjJdoAP3Vr16CPg7/B6GA/KmsBWJrpeJdPWq4i2gpLKvYZoy89qD
mUZf34RbzcCtV4NvV1DadGnt4us0nvLrvS5rL2+2uWD09kZYq9RbLkvgzF/cY0fz
i7I1bi5XQ+alWe0uAk5ZZL/D+GTRYUX1AWwCqwJxmHrMxcskMyO9pXvLyuSWRDLo
YNBrbX9nLcfJzVCp+X+9sntTHjs4l6Cw+fLepJIgtgqdCHtbhTiv68vSM6cgb4br
6n2xrXRKuioiWFOrTSRr+oalZh8dGJ/xvwY8IbWknZAvml9mf1VvfE7Ma5P777QM
fsbYVTq0Y3R/5hIWsC3HA5z6MIM8L1oRe/YyhP3CTmrCHkVKyDOosGXpGz+JVcyo
cfYkY5A3yFKB2HaCwZSfwFmRhxkrYWGEbHv3Cd9YkZs1J3hNhGFZyVMC9Uh0S85a
6zdDidU=
-----END CERTIFICATE-----`
client2Key = `-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA6xjW5KQR3/OFQtV5M75WINqQ4AzXSu6DhSz/yumaaQZP/UxY
+6hijcrFzGo9MMie/Sza8DhkXOFAl2BelUubrOeB2cl+/Gr8OCyRi2Gv6j3zCsuN
/4jQtNaoez/IbkDvI3l/ZpzBtnuNY2RiemGgHuORXHRVf3qVlsw+npBIRW5rM2Hk
O/xGoZjeBErWVu390Lyn+Gvk2TqQDnkutWnxUC60/zPlHhXZ4BwaFAekbSnjsSDB
1YFMs8HwW4oBryoxdj3/+/qLrBHt75IdLw3T7/V1UDJQM3EvSQOr12w4egpldhts
C871nnBQZeY6qA5feffIwwg/6lJm70o6S6OX6wIDAQABAoIBAFatstVb1KdQXsq0
cFpui8zTKOUiduJOrDkWzTygAmlEhYtrccdfXu7OWz0x0lvBLDVGK3a0I/TGrAzj
4BuFY+FM/egxTVt9in6fmA3et4BS1OAfCryzUdfK6RV//8L+t+zJZ/qKQzWnugpy
QYjDo8ifuMFwtvEoXizaIyBNLAhEp9hnrv+Tyi2O2gahPvCHsD48zkyZRCHYRstD
NH5cIrwz9/RJgPO1KI+QsJE7Nh7stR0sbr+5TPU4fnsL2mNhMUF2TJrwIPrc1yp+
YIUjdnh3SO88j4TQT3CIrWi8i4pOy6N0dcVn3gpCRGaqAKyS2ZYUj+yVtLO4KwxZ
SZ1lNvECgYEA78BrF7f4ETfWSLcBQ3qxfLs7ibB6IYo2x25685FhZjD+zLXM1AKb
FJHEXUm3mUYrFJK6AFEyOQnyGKBOLs3S6oTAswMPbTkkZeD1Y9O6uv0AHASLZnK6
pC6ub0eSRF5LUyTQ55Jj8D7QsjXJueO8v+G5ihWhNSN9tB2UA+8NBmkCgYEA+weq
cvoeMIEMBQHnNNLy35bwfqrceGyPIRBcUIvzQfY1vk7KW6DYOUzC7u+WUzy/hA52
DjXVVhua2eMQ9qqtOav7djcMc2W9RbLowxvno7K5qiCss013MeWk64TCWy+WMp5A
AVAtOliC3hMkIKqvR2poqn+IBTh1449agUJQqTMCgYEAu06IHGq1GraV6g9XpGF5
wqoAlMzUTdnOfDabRilBf/YtSr+J++ThRcuwLvXFw7CnPZZ4TIEjDJ7xjj3HdxeE
fYYjineMmNd40UNUU556F1ZLvJfsVKizmkuCKhwvcMx+asGrmA+tlmds4p3VMS50
KzDtpKzLWlmU/p/RINWlRmkCgYBy0pHTn7aZZx2xWKqCDg+L2EXPGqZX6wgZDpu7
OBifzlfM4ctL2CmvI/5yPmLbVgkgBWFYpKUdiujsyyEiQvWTUKhn7UwjqKDHtcsk
G6p7xS+JswJrzX4885bZJ9Oi1AR2yM3sC9l0O7I4lDbNPmWIXBLeEhGMmcPKv/Kc
91Ff4wKBgQCF3ur+Vt0PSU0ucrPVHjCe7tqazm0LJaWbPXL1Aw0pzdM2EcNcW/MA
w0kqpr7MgJ94qhXCBcVcfPuFN9fBOadM3UBj1B45Cz3pptoK+ScI8XKno6jvVK/p
xr5cb9VBRBtB9aOKVfuRhpatAfS2Pzm2Htae9lFn7slGPUmu2hkjDw==
-----END RSA PRIVATE KEY-----`
)
func TestLoadCertificate(t *testing.T) {
caCrtPath := filepath.Join(os.TempDir(), "testca.crt")
caCrlPath := filepath.Join(os.TempDir(), "testcrl.crt")
certPath := filepath.Join(os.TempDir(), "test.crt")
keyPath := filepath.Join(os.TempDir(), "test.key")
err := os.WriteFile(caCrtPath, []byte(caCRT), os.ModePerm)
assert.NoError(t, err)
err = os.WriteFile(caCrlPath, []byte(caCRL), os.ModePerm)
assert.NoError(t, err)
err = os.WriteFile(certPath, []byte(serverCert), os.ModePerm)
assert.NoError(t, err)
err = os.WriteFile(keyPath, []byte(serverKey), os.ModePerm)
assert.NoError(t, err)
certManager, err := NewCertManager(certPath, keyPath, configDir, logSenderTest)
assert.NoError(t, err)
certFunc := certManager.GetCertificateFunc()
if assert.NotNil(t, certFunc) {
hello := &tls.ClientHelloInfo{
ServerName: "localhost",
CipherSuites: []uint16{tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305},
}
cert, err := certFunc(hello)
assert.NoError(t, err)
assert.Equal(t, certManager.cert, cert)
}
certManager.SetCACertificates(nil)
err = certManager.LoadRootCAs()
assert.NoError(t, err)
certManager.SetCACertificates([]string{""})
err = certManager.LoadRootCAs()
assert.Error(t, err)
certManager.SetCACertificates([]string{"invalid"})
err = certManager.LoadRootCAs()
assert.Error(t, err)
// loading the key as root CA must fail
certManager.SetCACertificates([]string{keyPath})
err = certManager.LoadRootCAs()
assert.Error(t, err)
certManager.SetCACertificates([]string{certPath})
err = certManager.LoadRootCAs()
assert.NoError(t, err)
rootCa := certManager.GetRootCAs()
assert.NotNil(t, rootCa)
err = certManager.Reload()
assert.NoError(t, err)
certManager.SetCARevocationLists(nil)
err = certManager.LoadCRLs()
assert.NoError(t, err)
certManager.SetCARevocationLists([]string{""})
err = certManager.LoadCRLs()
assert.Error(t, err)
certManager.SetCARevocationLists([]string{"invalid crl"})
err = certManager.LoadCRLs()
assert.Error(t, err)
// this is not a CRL and must fail
certManager.SetCARevocationLists([]string{caCrtPath})
err = certManager.LoadCRLs()
assert.Error(t, err)
certManager.SetCARevocationLists([]string{caCrlPath})
err = certManager.LoadCRLs()
assert.NoError(t, err)
crt, err := tls.X509KeyPair([]byte(caCRT), []byte(caKey))
assert.NoError(t, err)
x509CAcrt, err := x509.ParseCertificate(crt.Certificate[0])
assert.NoError(t, err)
crt, err = tls.X509KeyPair([]byte(client1Crt), []byte(client1Key))
assert.NoError(t, err)
x509crt, err := x509.ParseCertificate(crt.Certificate[0])
if assert.NoError(t, err) {
assert.False(t, certManager.IsRevoked(x509crt, x509CAcrt))
}
crt, err = tls.X509KeyPair([]byte(client2Crt), []byte(client2Key))
assert.NoError(t, err)
x509crt, err = x509.ParseCertificate(crt.Certificate[0])
if assert.NoError(t, err) {
assert.True(t, certManager.IsRevoked(x509crt, x509CAcrt))
}
assert.True(t, certManager.IsRevoked(nil, nil))
err = os.Remove(caCrlPath)
assert.NoError(t, err)
err = certManager.Reload()
assert.Error(t, err)
err = os.Remove(certPath)
assert.NoError(t, err)
err = os.Remove(keyPath)
assert.NoError(t, err)
err = certManager.Reload()
assert.Error(t, err)
err = os.Remove(caCrtPath)
assert.NoError(t, err)
}
func TestLoadInvalidCert(t *testing.T) {
certManager, err := NewCertManager("test.crt", "test.key", configDir, logSenderTest)
assert.Error(t, err)
assert.Nil(t, certManager)
}

332
common/transfer.go Normal file

@@ -0,0 +1,332 @@
package common
import (
"errors"
"path"
"sync"
"sync/atomic"
"time"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/metric"
"github.com/drakkan/sftpgo/v2/vfs"
)
var (
// ErrTransferClosed defines the error returned for a closed transfer
ErrTransferClosed = errors.New("transfer already closed")
)
// BaseTransfer contains transfer details common to all protocols for an upload or a download.
type BaseTransfer struct { //nolint:maligned
ID uint64
BytesSent int64
BytesReceived int64
Fs vfs.Fs
File vfs.File
Connection *BaseConnection
cancelFn func()
fsPath string
effectiveFsPath string
requestPath string
ftpMode string
start time.Time
MaxWriteSize int64
MinWriteOffset int64
InitialSize int64
isNewFile bool
transferType int
AbortTransfer int32
aTime time.Time
mTime time.Time
sync.Mutex
ErrTransfer error
}
// NewBaseTransfer returns a new BaseTransfer and adds it to the given connection
func NewBaseTransfer(file vfs.File, conn *BaseConnection, cancelFn func(), fsPath, effectiveFsPath, requestPath string,
transferType int, minWriteOffset, initialSize, maxWriteSize int64, isNewFile bool, fs vfs.Fs) *BaseTransfer {
t := &BaseTransfer{
ID: conn.GetTransferID(),
File: file,
Connection: conn,
cancelFn: cancelFn,
fsPath: fsPath,
effectiveFsPath: effectiveFsPath,
start: time.Now(),
transferType: transferType,
MinWriteOffset: minWriteOffset,
InitialSize: initialSize,
isNewFile: isNewFile,
requestPath: requestPath,
BytesSent: 0,
BytesReceived: 0,
MaxWriteSize: maxWriteSize,
AbortTransfer: 0,
Fs: fs,
}
conn.AddTransfer(t)
return t
}
// SetFtpMode sets the FTP mode for the current transfer
func (t *BaseTransfer) SetFtpMode(mode string) {
t.ftpMode = mode
}
// GetID returns the transfer ID
func (t *BaseTransfer) GetID() uint64 {
return t.ID
}
// GetType returns the transfer type
func (t *BaseTransfer) GetType() int {
return t.transferType
}
// GetSize returns the transferred size
func (t *BaseTransfer) GetSize() int64 {
if t.transferType == TransferDownload {
return atomic.LoadInt64(&t.BytesSent)
}
return atomic.LoadInt64(&t.BytesReceived)
}
// GetStartTime returns the start time
func (t *BaseTransfer) GetStartTime() time.Time {
return t.start
}
// SignalClose signals that the transfer should be closed.
// For some protocols, for example WebDAV, we have no
// access to the network connection, so we use this method
// to make the next read or write fail
func (t *BaseTransfer) SignalClose() {
atomic.StoreInt32(&(t.AbortTransfer), 1)
}
// GetVirtualPath returns the transfer virtual path
func (t *BaseTransfer) GetVirtualPath() string {
return t.requestPath
}
// GetFsPath returns the transfer filesystem path
func (t *BaseTransfer) GetFsPath() string {
return t.fsPath
}
// SetTimes stores access and modification times if fsPath matches the current file
func (t *BaseTransfer) SetTimes(fsPath string, atime time.Time, mtime time.Time) bool {
if fsPath == t.GetFsPath() {
t.aTime = atime
t.mTime = mtime
return true
}
return false
}
// GetRealFsPath returns the real transfer filesystem path.
// If atomic uploads are enabled this differs from fsPath
func (t *BaseTransfer) GetRealFsPath(fsPath string) string {
if fsPath == t.GetFsPath() {
if t.File != nil {
return t.File.Name()
}
return t.fsPath
}
return ""
}
// SetCancelFn sets the cancel function for the transfer
func (t *BaseTransfer) SetCancelFn(cancelFn func()) {
t.cancelFn = cancelFn
}
// Truncate changes the size of the opened file.
// Supported for local fs only
func (t *BaseTransfer) Truncate(fsPath string, size int64) (int64, error) {
if fsPath == t.GetFsPath() {
if t.File != nil {
initialSize := t.InitialSize
err := t.File.Truncate(size)
if err == nil {
t.Lock()
t.InitialSize = size
if t.MaxWriteSize > 0 {
sizeDiff := initialSize - size
t.MaxWriteSize += sizeDiff
metric.TransferCompleted(atomic.LoadInt64(&t.BytesSent), atomic.LoadInt64(&t.BytesReceived), t.transferType, t.ErrTransfer)
atomic.StoreInt64(&t.BytesReceived, 0)
}
t.Unlock()
}
t.Connection.Log(logger.LevelDebug, "file %#v truncated to size %v max write size %v new initial size %v err: %v",
fsPath, size, t.MaxWriteSize, t.InitialSize, err)
return initialSize, err
}
if size == 0 && atomic.LoadInt64(&t.BytesSent) == 0 {
// for cloud providers the file is always truncated to zero; we don't support append/resume for uploads
// for buffered SFTP we can have buffered bytes so we return an error
if !vfs.IsBufferedSFTPFs(t.Fs) {
return 0, nil
}
}
return 0, vfs.ErrVfsUnsupported
}
return 0, errTransferMismatch
}
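// A worked example of the local filesystem branch above, matching
// TestTruncate below: a transfer opened with InitialSize=5 and
// MaxWriteSize=100 that is truncated to size 2 gets
// sizeDiff = 5-2 = 3 and so MaxWriteSize = 100+3 = 103.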
// TransferError is called if there is an unexpected error,
// for example network or client issues
func (t *BaseTransfer) TransferError(err error) {
t.Lock()
defer t.Unlock()
if t.ErrTransfer != nil {
return
}
t.ErrTransfer = err
if t.cancelFn != nil {
t.cancelFn()
}
elapsed := time.Since(t.start).Nanoseconds() / 1000000
t.Connection.Log(logger.LevelError, "Unexpected error for transfer, path: %#v, error: \"%v\" bytes sent: %v, "+
"bytes received: %v transfer running since %v ms", t.fsPath, t.ErrTransfer, atomic.LoadInt64(&t.BytesSent),
atomic.LoadInt64(&t.BytesReceived), elapsed)
}
func (t *BaseTransfer) getUploadFileSize() (int64, error) {
var fileSize int64
info, err := t.Fs.Stat(t.fsPath)
if err == nil {
fileSize = info.Size()
}
if vfs.IsCryptOsFs(t.Fs) && t.ErrTransfer != nil {
errDelete := t.Fs.Remove(t.fsPath, false)
if errDelete != nil {
t.Connection.Log(logger.LevelWarn, "error removing partial crypto file %#v: %v", t.fsPath, errDelete)
}
}
return fileSize, err
}
// Close is called when the transfer is completed.
// It logs the transfer info, updates the user quota (for uploads)
// and executes any defined action.
// If there is an error no action will be executed and, in atomic mode,
// we try to delete the temporary file
func (t *BaseTransfer) Close() error {
defer t.Connection.RemoveTransfer(t)
var err error
numFiles := 0
if t.isNewFile {
numFiles = 1
}
metric.TransferCompleted(atomic.LoadInt64(&t.BytesSent), atomic.LoadInt64(&t.BytesReceived), t.transferType, t.ErrTransfer)
if t.File != nil && t.Connection.IsQuotaExceededError(t.ErrTransfer) {
// if quota is exceeded we try to remove the partial file for uploads to local filesystem
err = t.Fs.Remove(t.File.Name(), false)
if err == nil {
numFiles--
atomic.StoreInt64(&t.BytesReceived, 0)
t.MinWriteOffset = 0
}
t.Connection.Log(logger.LevelWarn, "upload denied due to space limit, delete temporary file: %#v, deletion error: %v",
t.File.Name(), err)
} else if t.transferType == TransferUpload && t.effectiveFsPath != t.fsPath {
if t.ErrTransfer == nil || Config.UploadMode == UploadModeAtomicWithResume {
err = t.Fs.Rename(t.effectiveFsPath, t.fsPath)
t.Connection.Log(logger.LevelDebug, "atomic upload completed, rename: %#v -> %#v, error: %v",
t.effectiveFsPath, t.fsPath, err)
} else {
err = t.Fs.Remove(t.effectiveFsPath, false)
t.Connection.Log(logger.LevelWarn, "atomic upload completed with error: \"%v\", delete temporary file: %#v, "+
"deletion error: %v", t.ErrTransfer, t.effectiveFsPath, err)
if err == nil {
numFiles--
atomic.StoreInt64(&t.BytesReceived, 0)
t.MinWriteOffset = 0
}
}
}
elapsed := time.Since(t.start).Nanoseconds() / 1000000
if t.transferType == TransferDownload {
logger.TransferLog(downloadLogSender, t.fsPath, elapsed, atomic.LoadInt64(&t.BytesSent), t.Connection.User.Username,
t.Connection.ID, t.Connection.protocol, t.Connection.localAddr, t.Connection.remoteAddr, t.ftpMode)
ExecuteActionNotification(t.Connection, operationDownload, t.fsPath, t.requestPath, "", "", "",
atomic.LoadInt64(&t.BytesSent), t.ErrTransfer)
} else {
fileSize := atomic.LoadInt64(&t.BytesReceived) + t.MinWriteOffset
if statSize, err := t.getUploadFileSize(); err == nil {
fileSize = statSize
}
t.Connection.Log(logger.LevelDebug, "uploaded file size %v", fileSize)
t.updateQuota(numFiles, fileSize)
t.updateTimes()
logger.TransferLog(uploadLogSender, t.fsPath, elapsed, atomic.LoadInt64(&t.BytesReceived), t.Connection.User.Username,
t.Connection.ID, t.Connection.protocol, t.Connection.localAddr, t.Connection.remoteAddr, t.ftpMode)
ExecuteActionNotification(t.Connection, operationUpload, t.fsPath, t.requestPath, "", "", "", fileSize, t.ErrTransfer)
}
if t.ErrTransfer != nil {
t.Connection.Log(logger.LevelError, "transfer error: %v, path: %#v", t.ErrTransfer, t.fsPath)
if err == nil {
err = t.ErrTransfer
}
}
return err
}
func (t *BaseTransfer) updateTimes() {
if !t.aTime.IsZero() && !t.mTime.IsZero() {
err := t.Fs.Chtimes(t.fsPath, t.aTime, t.mTime, true)
t.Connection.Log(logger.LevelDebug, "set times for file %#v, atime: %v, mtime: %v, err: %v",
t.fsPath, t.aTime, t.mTime, err)
}
}
func (t *BaseTransfer) updateQuota(numFiles int, fileSize int64) bool {
// S3 uploads are atomic: if there is an error nothing is uploaded
if t.File == nil && t.ErrTransfer != nil && !t.Connection.User.HasBufferedSFTP(t.GetVirtualPath()) {
return false
}
sizeDiff := fileSize - t.InitialSize
if t.transferType == TransferUpload && (numFiles != 0 || sizeDiff > 0) {
vfolder, err := t.Connection.User.GetVirtualFolderForPath(path.Dir(t.requestPath))
if err == nil {
dataprovider.UpdateVirtualFolderQuota(&vfolder.BaseVirtualFolder, numFiles, //nolint:errcheck
sizeDiff, false)
if vfolder.IsIncludedInUserQuota() {
dataprovider.UpdateUserQuota(&t.Connection.User, numFiles, sizeDiff, false) //nolint:errcheck
}
} else {
dataprovider.UpdateUserQuota(&t.Connection.User, numFiles, sizeDiff, false) //nolint:errcheck
}
return true
}
return false
}
// HandleThrottle manages bandwidth throttling
func (t *BaseTransfer) HandleThrottle() {
var wantedBandwidth int64
var transferredBytes int64
if t.transferType == TransferDownload {
wantedBandwidth = t.Connection.User.DownloadBandwidth
transferredBytes = atomic.LoadInt64(&t.BytesSent)
} else {
wantedBandwidth = t.Connection.User.UploadBandwidth
transferredBytes = atomic.LoadInt64(&t.BytesReceived)
}
if wantedBandwidth > 0 {
// real and wanted elapsed as milliseconds, bytes as kilobytes
realElapsed := time.Since(t.start).Nanoseconds() / 1000000
// transferredBytes / 1024 = KB, we multiply by 1000 to get milliseconds
wantedElapsed := 1000 * (transferredBytes / 1024) / wantedBandwidth
if wantedElapsed > realElapsed {
toSleep := time.Duration(wantedElapsed - realElapsed)
time.Sleep(toSleep * time.Millisecond)
}
}
}
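// A worked example of the sleep computation above, using the numbers from
// TestTransferThrottling below: uploading 131072 bytes with
// UploadBandwidth=50 KB/s gives wantedElapsed = 1000*(131072/1024)/50 =
// 2560 ms; if the transfer has only been running for 500 ms, the
// goroutine sleeps for the remaining 2060 ms.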

299
common/transfer_test.go Normal file

@@ -0,0 +1,299 @@
package common
import (
"errors"
"os"
"path/filepath"
"testing"
"time"
"github.com/sftpgo/sdk"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/drakkan/sftpgo/v2/dataprovider"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/vfs"
)
func TestTransferUpdateQuota(t *testing.T) {
conn := NewBaseConnection("", ProtocolSFTP, "", "", dataprovider.User{})
transfer := BaseTransfer{
Connection: conn,
transferType: TransferUpload,
BytesReceived: 123,
Fs: vfs.NewOsFs("", os.TempDir(), ""),
}
errFake := errors.New("fake error")
transfer.TransferError(errFake)
assert.False(t, transfer.updateQuota(1, 0))
err := transfer.Close()
if assert.Error(t, err) {
assert.EqualError(t, err, errFake.Error())
}
mappedPath := filepath.Join(os.TempDir(), "vdir")
vdirPath := "/vdir"
conn.User.VirtualFolders = append(conn.User.VirtualFolders, vfs.VirtualFolder{
BaseVirtualFolder: vfs.BaseVirtualFolder{
MappedPath: mappedPath,
},
VirtualPath: vdirPath,
QuotaFiles: -1,
QuotaSize: -1,
})
transfer.ErrTransfer = nil
transfer.BytesReceived = 1
transfer.requestPath = "/vdir/file"
assert.True(t, transfer.updateQuota(1, 0))
err = transfer.Close()
assert.NoError(t, err)
}
func TestTransferThrottling(t *testing.T) {
u := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "test",
UploadBandwidth: 50,
DownloadBandwidth: 40,
},
}
fs := vfs.NewOsFs("", os.TempDir(), "")
testFileSize := int64(131072)
wantedUploadElapsed := 1000 * (testFileSize / 1024) / u.UploadBandwidth
wantedDownloadElapsed := 1000 * (testFileSize / 1024) / u.DownloadBandwidth
// some tolerance
wantedUploadElapsed -= wantedUploadElapsed / 10
wantedDownloadElapsed -= wantedDownloadElapsed / 10
conn := NewBaseConnection("id", ProtocolSCP, "", "", u)
transfer := NewBaseTransfer(nil, conn, nil, "", "", "", TransferUpload, 0, 0, 0, true, fs)
transfer.BytesReceived = testFileSize
transfer.Connection.UpdateLastActivity()
startTime := transfer.Connection.GetLastActivity()
transfer.HandleThrottle()
elapsed := time.Since(startTime).Nanoseconds() / 1000000
assert.GreaterOrEqual(t, elapsed, wantedUploadElapsed, "upload bandwidth throttling not respected")
err := transfer.Close()
assert.NoError(t, err)
transfer = NewBaseTransfer(nil, conn, nil, "", "", "", TransferDownload, 0, 0, 0, true, fs)
transfer.BytesSent = testFileSize
transfer.Connection.UpdateLastActivity()
startTime = transfer.Connection.GetLastActivity()
transfer.HandleThrottle()
elapsed = time.Since(startTime).Nanoseconds() / 1000000
assert.GreaterOrEqual(t, elapsed, wantedDownloadElapsed, "download bandwidth throttling not respected")
err = transfer.Close()
assert.NoError(t, err)
}
func TestRealPath(t *testing.T) {
testFile := filepath.Join(os.TempDir(), "afile.txt")
fs := vfs.NewOsFs("123", os.TempDir(), "")
u := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "user",
HomeDir: os.TempDir(),
},
}
u.Permissions = make(map[string][]string)
u.Permissions["/"] = []string{dataprovider.PermAny}
file, err := os.Create(testFile)
require.NoError(t, err)
conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, "", "", u)
transfer := NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
rPath := transfer.GetRealFsPath(testFile)
assert.Equal(t, testFile, rPath)
rPath = conn.getRealFsPath(testFile)
assert.Equal(t, testFile, rPath)
err = transfer.Close()
assert.NoError(t, err)
err = file.Close()
assert.NoError(t, err)
transfer.File = nil
rPath = transfer.GetRealFsPath(testFile)
assert.Equal(t, testFile, rPath)
rPath = transfer.GetRealFsPath("")
assert.Empty(t, rPath)
err = os.Remove(testFile)
assert.NoError(t, err)
assert.Len(t, conn.GetTransfers(), 0)
}
func TestTruncate(t *testing.T) {
testFile := filepath.Join(os.TempDir(), "transfer_test_file")
fs := vfs.NewOsFs("123", os.TempDir(), "")
u := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "user",
HomeDir: os.TempDir(),
},
}
u.Permissions = make(map[string][]string)
u.Permissions["/"] = []string{dataprovider.PermAny}
file, err := os.Create(testFile)
if !assert.NoError(t, err) {
assert.FailNow(t, "unable to open test file")
}
_, err = file.Write([]byte("hello"))
assert.NoError(t, err)
conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, "", "", u)
transfer := NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 5, 100, false, fs)
err = conn.SetStat("/transfer_test_file", &StatAttributes{
Size: 2,
Flags: StatAttrSize,
})
assert.NoError(t, err)
assert.Equal(t, int64(103), transfer.MaxWriteSize)
err = transfer.Close()
assert.NoError(t, err)
err = file.Close()
assert.NoError(t, err)
fi, err := os.Stat(testFile)
if assert.NoError(t, err) {
assert.Equal(t, int64(2), fi.Size())
}
transfer = NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 0, 100, true, fs)
// file.Stat will fail on a closed file
err = conn.SetStat("/transfer_test_file", &StatAttributes{
Size: 2,
Flags: StatAttrSize,
})
assert.Error(t, err)
err = transfer.Close()
assert.NoError(t, err)
transfer = NewBaseTransfer(nil, conn, nil, testFile, testFile, "", TransferUpload, 0, 0, 0, true, fs)
_, err = transfer.Truncate("mismatch", 0)
assert.EqualError(t, err, errTransferMismatch.Error())
_, err = transfer.Truncate(testFile, 0)
assert.NoError(t, err)
_, err = transfer.Truncate(testFile, 1)
assert.EqualError(t, err, vfs.ErrVfsUnsupported.Error())
err = transfer.Close()
assert.NoError(t, err)
err = os.Remove(testFile)
assert.NoError(t, err)
assert.Len(t, conn.GetTransfers(), 0)
}
func TestTransferErrors(t *testing.T) {
isCancelled := false
cancelFn := func() {
isCancelled = true
}
testFile := filepath.Join(os.TempDir(), "transfer_test_file")
fs := vfs.NewOsFs("id", os.TempDir(), "")
u := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "test",
HomeDir: os.TempDir(),
},
}
err := os.WriteFile(testFile, []byte("test data"), os.ModePerm)
assert.NoError(t, err)
file, err := os.Open(testFile)
if !assert.NoError(t, err) {
assert.FailNow(t, "unable to open test file")
}
conn := NewBaseConnection("id", ProtocolSFTP, "", "", u)
transfer := NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
assert.Nil(t, transfer.cancelFn)
assert.Equal(t, testFile, transfer.GetFsPath())
transfer.SetCancelFn(cancelFn)
errFake := errors.New("err fake")
transfer.BytesReceived = 9
transfer.TransferError(ErrQuotaExceeded)
assert.True(t, isCancelled)
transfer.TransferError(errFake)
assert.Error(t, transfer.ErrTransfer, ErrQuotaExceeded.Error())
// the file is closed by the embedding struct before calling Close
err = file.Close()
assert.NoError(t, err)
err = transfer.Close()
if assert.Error(t, err) {
assert.Error(t, err, ErrQuotaExceeded.Error())
}
assert.NoFileExists(t, testFile)
err = os.WriteFile(testFile, []byte("test data"), os.ModePerm)
assert.NoError(t, err)
file, err = os.Open(testFile)
if !assert.NoError(t, err) {
assert.FailNow(t, "unable to open test file")
}
fsPath := filepath.Join(os.TempDir(), "test_file")
transfer = NewBaseTransfer(file, conn, nil, fsPath, file.Name(), "/test_file", TransferUpload, 0, 0, 0, true, fs)
transfer.BytesReceived = 9
transfer.TransferError(errFake)
assert.Error(t, transfer.ErrTransfer, errFake.Error())
// the file is closed by the embedding struct before calling Close
err = file.Close()
assert.NoError(t, err)
err = transfer.Close()
if assert.Error(t, err) {
assert.Error(t, err, errFake.Error())
}
assert.NoFileExists(t, testFile)
err = os.WriteFile(testFile, []byte("test data"), os.ModePerm)
assert.NoError(t, err)
file, err = os.Open(testFile)
if !assert.NoError(t, err) {
assert.FailNow(t, "unable to open test file")
}
transfer = NewBaseTransfer(file, conn, nil, fsPath, file.Name(), "/test_file", TransferUpload, 0, 0, 0, true, fs)
transfer.BytesReceived = 9
// the file is closed by the embedding struct before calling Close
err = file.Close()
assert.NoError(t, err)
err = transfer.Close()
assert.NoError(t, err)
assert.NoFileExists(t, testFile)
assert.FileExists(t, fsPath)
err = os.Remove(fsPath)
assert.NoError(t, err)
assert.Len(t, conn.GetTransfers(), 0)
}
func TestRemovePartialCryptoFile(t *testing.T) {
testFile := filepath.Join(os.TempDir(), "transfer_test_file")
fs, err := vfs.NewCryptFs("id", os.TempDir(), "", vfs.CryptFsConfig{Passphrase: kms.NewPlainSecret("secret")})
require.NoError(t, err)
u := dataprovider.User{
BaseUser: sdk.BaseUser{
Username: "test",
HomeDir: os.TempDir(),
},
}
conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, "", "", u)
transfer := NewBaseTransfer(nil, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
transfer.ErrTransfer = errors.New("test error")
_, err = transfer.getUploadFileSize()
assert.Error(t, err)
err = os.WriteFile(testFile, []byte("test data"), os.ModePerm)
assert.NoError(t, err)
size, err := transfer.getUploadFileSize()
assert.NoError(t, err)
assert.Equal(t, int64(9), size)
assert.NoFileExists(t, testFile)
}
func TestFTPMode(t *testing.T) {
conn := NewBaseConnection("", ProtocolFTP, "", "", dataprovider.User{})
transfer := BaseTransfer{
Connection: conn,
transferType: TransferUpload,
BytesReceived: 123,
Fs: vfs.NewOsFs("", os.TempDir(), ""),
}
assert.Empty(t, transfer.ftpMode)
transfer.SetFtpMode("active")
assert.Equal(t, "active", transfer.ftpMode)
}

File diff suppressed because it is too large


@@ -1,3 +1,4 @@
//go:build linux
// +build linux
package config


@@ -1,7 +1,6 @@
//go:build !linux
// +build !linux
package config
func setViperAdditionalConfigPaths() {
}
func setViperAdditionalConfigPaths() {}

File diff suppressed because it is too large

118
dataprovider/actions.go Normal file

@@ -0,0 +1,118 @@
package dataprovider
import (
"bytes"
"context"
"fmt"
"net/url"
"os"
"os/exec"
"path/filepath"
"strings"
"time"
"github.com/sftpgo/sdk/plugin/notifier"
"github.com/drakkan/sftpgo/v2/httpclient"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/plugin"
"github.com/drakkan/sftpgo/v2/util"
)
const (
// ActionExecutorSelf is used as username for self actions, for example a user/admin that updates itself
ActionExecutorSelf = "__self__"
// ActionExecutorSystem is used as username for actions with no explicit executor associated, for example
// adding/updating a user/admin by loading initial data
ActionExecutorSystem = "__system__"
)
const (
actionObjectUser = "user"
actionObjectAdmin = "admin"
actionObjectAPIKey = "api_key"
actionObjectShare = "share"
)
func executeAction(operation, executor, ip, objectType, objectName string, object plugin.Renderer) {
if plugin.Handler.HasNotifiers() {
plugin.Handler.NotifyProviderEvent(&notifier.ProviderEvent{
Action: operation,
Username: executor,
ObjectType: objectType,
ObjectName: objectName,
IP: ip,
Timestamp: time.Now().UnixNano(),
}, object)
}
if config.Actions.Hook == "" {
return
}
if !util.IsStringInSlice(operation, config.Actions.ExecuteOn) ||
!util.IsStringInSlice(objectType, config.Actions.ExecuteFor) {
return
}
go func() {
dataAsJSON, err := object.RenderAsJSON(operation != operationDelete)
if err != nil {
providerLog(logger.LevelError, "unable to serialize user as JSON for operation %#v: %v", operation, err)
return
}
if strings.HasPrefix(config.Actions.Hook, "http") {
url, err := url.Parse(config.Actions.Hook) // note: url shadows the net/url package for the rest of this block
if err != nil {
providerLog(logger.LevelError, "Invalid http_notification_url %#v for operation %#v: %v",
config.Actions.Hook, operation, err)
return
}
q := url.Query()
q.Add("action", operation)
q.Add("username", executor)
q.Add("ip", ip)
q.Add("object_type", objectType)
q.Add("object_name", objectName)
q.Add("timestamp", fmt.Sprintf("%v", time.Now().UnixNano()))
url.RawQuery = q.Encode()
startTime := time.Now()
resp, err := httpclient.RetryablePost(url.String(), "application/json", bytes.NewBuffer(dataAsJSON))
respCode := 0
if err == nil {
respCode = resp.StatusCode
resp.Body.Close()
}
providerLog(logger.LevelDebug, "notified operation %#v to URL: %v status code: %v, elapsed: %v err: %v",
operation, url.Redacted(), respCode, time.Since(startTime), err)
} else {
executeNotificationCommand(operation, executor, ip, objectType, objectName, dataAsJSON) //nolint:errcheck // the error is used in test cases only
}
}()
}
func executeNotificationCommand(operation, executor, ip, objectType, objectName string, objectAsJSON []byte) error {
if !filepath.IsAbs(config.Actions.Hook) {
err := fmt.Errorf("invalid notification command %#v", config.Actions.Hook)
logger.Warn(logSender, "", "unable to execute notification command: %v", err)
return err
}
ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
defer cancel()
cmd := exec.CommandContext(ctx, config.Actions.Hook)
cmd.Env = append(os.Environ(),
fmt.Sprintf("SFTPGO_PROVIDER_ACTION=%v", operation),
fmt.Sprintf("SFTPGO_PROVIDER_OBJECT_TYPE=%v", objectType),
fmt.Sprintf("SFTPGO_PROVIDER_OBJECT_NAME=%v", objectName),
fmt.Sprintf("SFTPGO_PROVIDER_USERNAME=%v", executor),
fmt.Sprintf("SFTPGO_PROVIDER_IP=%v", ip),
fmt.Sprintf("SFTPGO_PROVIDER_TIMESTAMP=%v", util.GetTimeAsMsSinceEpoch(time.Now())),
fmt.Sprintf("SFTPGO_PROVIDER_OBJECT=%v", string(objectAsJSON)))
startTime := time.Now()
err := cmd.Run()
providerLog(logger.LevelDebug, "executed command %#v, elapsed: %v, error: %v", config.Actions.Hook,
time.Since(startTime), err)
return err
}
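
For reference, executeNotificationCommand passes the event to the hook exclusively through the environment variables set above, so the hook can be any executable. A minimal sketch of such a hook follows; the log file path is illustrative and not part of SFTPGo.

// providerhook.go: a minimal SFTPGo provider-actions hook (sketch).
// It reads the SFTPGO_PROVIDER_* variables set by executeNotificationCommand
// and appends a one-line summary to a log file.
package main

import (
	"fmt"
	"os"
)

func main() {
	action := os.Getenv("SFTPGO_PROVIDER_ACTION")
	objectType := os.Getenv("SFTPGO_PROVIDER_OBJECT_TYPE")
	objectName := os.Getenv("SFTPGO_PROVIDER_OBJECT_NAME")
	executor := os.Getenv("SFTPGO_PROVIDER_USERNAME")
	ip := os.Getenv("SFTPGO_PROVIDER_IP")
	ts := os.Getenv("SFTPGO_PROVIDER_TIMESTAMP")

	f, err := os.OpenFile("/tmp/sftpgo-provider-events.log", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
	if err != nil {
		os.Exit(1) // a non-zero exit is surfaced as the hook error by cmd.Run()
	}
	defer f.Close()
	fmt.Fprintf(f, "%s %s %s %q by %q from %q\n", ts, action, objectType, objectName, executor, ip)
}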

dataprovider/admin.go Normal file (443 lines)

@@ -0,0 +1,443 @@
package dataprovider
import (
"crypto/sha256"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"net"
"os"
"regexp"
"strings"
"github.com/alexedwards/argon2id"
passwordvalidator "github.com/wagslane/go-password-validator"
"golang.org/x/crypto/bcrypt"
"github.com/drakkan/sftpgo/v2/kms"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/mfa"
"github.com/drakkan/sftpgo/v2/util"
)
// Available permissions for SFTPGo admins
const (
PermAdminAny = "*"
PermAdminAddUsers = "add_users"
PermAdminChangeUsers = "edit_users"
PermAdminDeleteUsers = "del_users"
PermAdminViewUsers = "view_users"
PermAdminViewConnections = "view_conns"
PermAdminCloseConnections = "close_conns"
PermAdminViewServerStatus = "view_status"
PermAdminManageAdmins = "manage_admins"
PermAdminManageAPIKeys = "manage_apikeys"
PermAdminQuotaScans = "quota_scans"
PermAdminManageSystem = "manage_system"
PermAdminManageDefender = "manage_defender"
PermAdminViewDefender = "view_defender"
PermAdminRetentionChecks = "retention_checks"
PermAdminMetadataChecks = "metadata_checks"
PermAdminViewEvents = "view_events"
)
var (
emailRegex = regexp.MustCompile("^(?:(?:(?:(?:[a-zA-Z]|\\d|[!#\\$%&'\\*\\+\\-\\/=\\?\\^_`{\\|}~]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])+(?:\\.([a-zA-Z]|\\d|[!#\\$%&'\\*\\+\\-\\/=\\?\\^_`{\\|}~]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])+)*)|(?:(?:\\x22)(?:(?:(?:(?:\\x20|\\x09)*(?:\\x0d\\x0a))?(?:\\x20|\\x09)+)?(?:(?:[\\x01-\\x08\\x0b\\x0c\\x0e-\\x1f\\x7f]|\\x21|[\\x23-\\x5b]|[\\x5d-\\x7e]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(?:(?:[\\x01-\\x09\\x0b\\x0c\\x0d-\\x7f]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}]))))*(?:(?:(?:\\x20|\\x09)*(?:\\x0d\\x0a))?(\\x20|\\x09)+)?(?:\\x22))))@(?:(?:(?:[a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(?:(?:[a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])(?:[a-zA-Z]|\\d|-|\\.|~|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])*(?:[a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])))\\.)+(?:(?:[a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(?:(?:[a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])(?:[a-zA-Z]|\\d|-|\\.|~|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])*(?:[a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])))\\.?$")
validAdminPerms = []string{PermAdminAny, PermAdminAddUsers, PermAdminChangeUsers, PermAdminDeleteUsers,
PermAdminViewUsers, PermAdminViewConnections, PermAdminCloseConnections, PermAdminViewServerStatus,
PermAdminManageAdmins, PermAdminManageAPIKeys, PermAdminQuotaScans, PermAdminManageSystem,
PermAdminManageDefender, PermAdminViewDefender, PermAdminRetentionChecks, PermAdminMetadataChecks,
PermAdminViewEvents}
)
// AdminTOTPConfig defines the time-based one time password configuration
type AdminTOTPConfig struct {
Enabled bool `json:"enabled,omitempty"`
ConfigName string `json:"config_name,omitempty"`
Secret *kms.Secret `json:"secret,omitempty"`
}
func (c *AdminTOTPConfig) validate(username string) error {
if !c.Enabled {
c.ConfigName = ""
c.Secret = kms.NewEmptySecret()
return nil
}
if c.ConfigName == "" {
return util.NewValidationError("totp: config name is mandatory")
}
if !util.IsStringInSlice(c.ConfigName, mfa.GetAvailableTOTPConfigNames()) {
return util.NewValidationError(fmt.Sprintf("totp: config name %#v not found", c.ConfigName))
}
if c.Secret.IsEmpty() {
return util.NewValidationError("totp: secret is mandatory")
}
if c.Secret.IsPlain() {
c.Secret.SetAdditionalData(username)
if err := c.Secret.Encrypt(); err != nil {
return util.NewValidationError(fmt.Sprintf("totp: unable to encrypt secret: %v", err))
}
}
return nil
}
// AdminFilters defines additional restrictions for SFTPGo admins
// TODO: rename to AdminOptions in v3
type AdminFilters struct {
// Only clients connecting from these IP/Mask entries are allowed.
// IP/Mask must be in CIDR notation as defined in RFC 4632 and RFC 4291
// for example "192.0.2.0/24" or "2001:db8::/32"
AllowList []string `json:"allow_list,omitempty"`
// API key auth allows impersonating this administrator with an API key
AllowAPIKeyAuth bool `json:"allow_api_key_auth,omitempty"`
// Time-based one time passwords configuration
TOTPConfig AdminTOTPConfig `json:"totp_config,omitempty"`
// Recovery codes to use if the user loses access to their second factor auth device.
// Each code can only be used once; use these codes to log in and then disable or
// reset 2FA for your account
RecoveryCodes []RecoveryCode `json:"recovery_codes,omitempty"`
}
// Admin defines a SFTPGo admin
type Admin struct {
// Database unique identifier
ID int64 `json:"id"`
// 1 enabled, 0 disabled (login is not allowed)
Status int `json:"status"`
// Username
Username string `json:"username"`
Password string `json:"password,omitempty"`
Email string `json:"email,omitempty"`
Permissions []string `json:"permissions"`
Filters AdminFilters `json:"filters,omitempty"`
Description string `json:"description,omitempty"`
AdditionalInfo string `json:"additional_info,omitempty"`
// Creation time as unix timestamp in milliseconds. It will be 0 for admins created before v2.2.0
CreatedAt int64 `json:"created_at"`
// last update time as unix timestamp in milliseconds
UpdatedAt int64 `json:"updated_at"`
// Last login as unix timestamp in milliseconds
LastLogin int64 `json:"last_login"`
}
// CountUnusedRecoveryCodes returns the number of unused recovery codes
func (a *Admin) CountUnusedRecoveryCodes() int {
unused := 0
for _, code := range a.Filters.RecoveryCodes {
if !code.Used {
unused++
}
}
return unused
}
func (a *Admin) hashPassword() error {
if a.Password != "" && !util.IsStringPrefixInSlice(a.Password, internalHashPwdPrefixes) {
if config.PasswordValidation.Admins.MinEntropy > 0 {
if err := passwordvalidator.Validate(a.Password, config.PasswordValidation.Admins.MinEntropy); err != nil {
return util.NewValidationError(err.Error())
}
}
if config.PasswordHashing.Algo == HashingAlgoBcrypt {
pwd, err := bcrypt.GenerateFromPassword([]byte(a.Password), config.PasswordHashing.BcryptOptions.Cost)
if err != nil {
return err
}
a.Password = string(pwd)
} else {
pwd, err := argon2id.CreateHash(a.Password, argon2Params)
if err != nil {
return err
}
a.Password = pwd
}
}
return nil
}
func (a *Admin) hasRedactedSecret() bool {
return a.Filters.TOTPConfig.Secret.IsRedacted()
}
func (a *Admin) validateRecoveryCodes() error {
for i := 0; i < len(a.Filters.RecoveryCodes); i++ {
code := &a.Filters.RecoveryCodes[i]
if code.Secret.IsEmpty() {
return util.NewValidationError("mfa: recovery code cannot be empty")
}
if code.Secret.IsPlain() {
code.Secret.SetAdditionalData(a.Username)
if err := code.Secret.Encrypt(); err != nil {
return util.NewValidationError(fmt.Sprintf("mfa: unable to encrypt recovery code: %v", err))
}
}
}
return nil
}
func (a *Admin) validatePermissions() error {
a.Permissions = util.RemoveDuplicates(a.Permissions)
if len(a.Permissions) == 0 {
return util.NewValidationError("please grant some permissions to this admin")
}
if util.IsStringInSlice(PermAdminAny, a.Permissions) {
a.Permissions = []string{PermAdminAny}
}
for _, perm := range a.Permissions {
if !util.IsStringInSlice(perm, validAdminPerms) {
return util.NewValidationError(fmt.Sprintf("invalid permission: %#v", perm))
}
}
return nil
}
func (a *Admin) validate() error {
a.SetEmptySecretsIfNil()
if a.Username == "" {
return util.NewValidationError("username is mandatory")
}
if a.Password == "" {
return util.NewValidationError("please set a password")
}
if a.hasRedactedSecret() {
return util.NewValidationError("cannot save an admin with a redacted secret")
}
if err := a.Filters.TOTPConfig.validate(a.Username); err != nil {
return err
}
if err := a.validateRecoveryCodes(); err != nil {
return err
}
if !config.SkipNaturalKeysValidation && !usernameRegex.MatchString(a.Username) {
return util.NewValidationError(fmt.Sprintf("username %#v is not valid, the following characters are allowed: a-zA-Z0-9-_.~", a.Username))
}
if err := a.hashPassword(); err != nil {
return err
}
if err := a.validatePermissions(); err != nil {
return err
}
if a.Email != "" && !emailRegex.MatchString(a.Email) {
return util.NewValidationError(fmt.Sprintf("email %#v is not valid", a.Email))
}
a.Filters.AllowList = util.RemoveDuplicates(a.Filters.AllowList)
for _, IPMask := range a.Filters.AllowList {
_, _, err := net.ParseCIDR(IPMask)
if err != nil {
return util.NewValidationError(fmt.Sprintf("could not parse allow list entry %#v : %v", IPMask, err))
}
}
return nil
}
// CheckPassword verifies the admin password
func (a *Admin) CheckPassword(password string) (bool, error) {
if strings.HasPrefix(a.Password, bcryptPwdPrefix) {
if err := bcrypt.CompareHashAndPassword([]byte(a.Password), []byte(password)); err != nil {
return false, ErrInvalidCredentials
}
return true, nil
}
match, err := argon2id.ComparePasswordAndHash(password, a.Password)
if !match || err != nil {
return false, ErrInvalidCredentials
}
return match, err
}
// CanLoginFromIP returns true if login from the given IP is allowed
func (a *Admin) CanLoginFromIP(ip string) bool {
if len(a.Filters.AllowList) == 0 {
return true
}
parsedIP := net.ParseIP(ip)
if parsedIP == nil {
return len(a.Filters.AllowList) == 0
}
for _, ipMask := range a.Filters.AllowList {
_, network, err := net.ParseCIDR(ipMask)
if err != nil {
continue
}
if network.Contains(parsedIP) {
return true
}
}
return false
}
// CanLogin returns an error if the login is not allowed
func (a *Admin) CanLogin(ip string) error {
if a.Status != 1 {
return fmt.Errorf("admin %#v is disabled", a.Username)
}
if !a.CanLoginFromIP(ip) {
return fmt.Errorf("login from IP %v not allowed", ip)
}
return nil
}
func (a *Admin) checkUserAndPass(password, ip string) error {
if err := a.CanLogin(ip); err != nil {
return err
}
if a.Password == "" || password == "" {
return errors.New("credentials cannot be null or empty")
}
match, err := a.CheckPassword(password)
if err != nil {
return err
}
if !match {
return ErrInvalidCredentials
}
return nil
}
// RenderAsJSON implements the renderer interface used within plugins
func (a *Admin) RenderAsJSON(reload bool) ([]byte, error) {
if reload {
admin, err := provider.adminExists(a.Username)
if err != nil {
providerLog(logger.LevelError, "unable to reload admin before rendering as json: %v", err)
return nil, err
}
admin.HideConfidentialData()
return json.Marshal(admin)
}
a.HideConfidentialData()
return json.Marshal(a)
}
// HideConfidentialData hides admin confidential data
func (a *Admin) HideConfidentialData() {
a.Password = ""
if a.Filters.TOTPConfig.Secret != nil {
a.Filters.TOTPConfig.Secret.Hide()
}
for _, code := range a.Filters.RecoveryCodes {
if code.Secret != nil {
code.Secret.Hide()
}
}
a.SetNilSecretsIfEmpty()
}
// SetEmptySecretsIfNil sets the secrets to empty if nil
func (a *Admin) SetEmptySecretsIfNil() {
if a.Filters.TOTPConfig.Secret == nil {
a.Filters.TOTPConfig.Secret = kms.NewEmptySecret()
}
}
// SetNilSecretsIfEmpty sets the secrets to nil if empty.
// This is useful before rendering as JSON so the empty fields
// will not be serialized.
func (a *Admin) SetNilSecretsIfEmpty() {
if a.Filters.TOTPConfig.Secret != nil && a.Filters.TOTPConfig.Secret.IsEmpty() {
a.Filters.TOTPConfig.Secret = nil
}
}
// HasPermission returns true if the admin has the specified permission
func (a *Admin) HasPermission(perm string) bool {
if util.IsStringInSlice(PermAdminAny, a.Permissions) {
return true
}
return util.IsStringInSlice(perm, a.Permissions)
}
// GetPermissionsAsString returns permission as string
func (a *Admin) GetPermissionsAsString() string {
return strings.Join(a.Permissions, ", ")
}
// GetAllowedIPAsString returns the allowed IP as comma separated string
func (a *Admin) GetAllowedIPAsString() string {
return strings.Join(a.Filters.AllowList, ",")
}
// GetValidPerms returns the allowed admin permissions
func (a *Admin) GetValidPerms() []string {
return validAdminPerms
}
// GetInfoString returns admin's info as string.
func (a *Admin) GetInfoString() string {
var result strings.Builder
if a.Email != "" {
result.WriteString(fmt.Sprintf("Email: %v. ", a.Email))
}
if len(a.Filters.AllowList) > 0 {
result.WriteString(fmt.Sprintf("Allowed IP/Mask: %v. ", len(a.Filters.AllowList)))
}
return result.String()
}
// CanManageMFA returns true if the admin can add a multi-factor authentication configuration
func (a *Admin) CanManageMFA() bool {
return len(mfa.GetAvailableTOTPConfigs()) > 0
}
// GetSignature returns a signature for this admin.
// It could change after an update
func (a *Admin) GetSignature() string {
data := []byte(a.Username)
data = append(data, []byte(a.Password)...)
signature := sha256.Sum256(data)
return base64.StdEncoding.EncodeToString(signature[:])
}
func (a *Admin) getACopy() Admin {
a.SetEmptySecretsIfNil()
permissions := make([]string, len(a.Permissions))
copy(permissions, a.Permissions)
filters := AdminFilters{}
filters.AllowList = make([]string, len(a.Filters.AllowList))
filters.AllowAPIKeyAuth = a.Filters.AllowAPIKeyAuth
filters.TOTPConfig.Enabled = a.Filters.TOTPConfig.Enabled
filters.TOTPConfig.ConfigName = a.Filters.TOTPConfig.ConfigName
filters.TOTPConfig.Secret = a.Filters.TOTPConfig.Secret.Clone()
copy(filters.AllowList, a.Filters.AllowList)
filters.RecoveryCodes = make([]RecoveryCode, 0)
for _, code := range a.Filters.RecoveryCodes {
if code.Secret == nil {
code.Secret = kms.NewEmptySecret()
}
filters.RecoveryCodes = append(filters.RecoveryCodes, RecoveryCode{
Secret: code.Secret.Clone(),
Used: code.Used,
})
}
return Admin{
ID: a.ID,
Status: a.Status,
Username: a.Username,
Password: a.Password,
Email: a.Email,
Permissions: permissions,
Filters: filters,
AdditionalInfo: a.AdditionalInfo,
Description: a.Description,
LastLogin: a.LastLogin,
CreatedAt: a.CreatedAt,
UpdatedAt: a.UpdatedAt,
}
}
func (a *Admin) setFromEnv() error {
envUsername := strings.TrimSpace(os.Getenv("SFTPGO_DEFAULT_ADMIN_USERNAME"))
envPassword := strings.TrimSpace(os.Getenv("SFTPGO_DEFAULT_ADMIN_PASSWORD"))
if envUsername == "" || envPassword == "" {
return errors.New(`to create the default admin you need to set the env vars "SFTPGO_DEFAULT_ADMIN_USERNAME" and "SFTPGO_DEFAULT_ADMIN_PASSWORD"`)
}
a.Username = envUsername
a.Password = envPassword
a.Status = 1
a.Permissions = []string{PermAdminAny}
return nil
}
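
CanLoginFromIP above is plain net.ParseCIDR matching; here is a self-contained sketch of the same rules, assuming nothing beyond the standard library (the function name is illustrative):

package main

import (
	"fmt"
	"net"
)

// allowedFrom mirrors Admin.CanLoginFromIP: an empty allow list permits any
// address, unparsable list entries are skipped, an unparsable client IP is
// rejected, and the first matching CIDR wins.
func allowedFrom(ip string, allowList []string) bool {
	if len(allowList) == 0 {
		return true
	}
	parsedIP := net.ParseIP(ip)
	if parsedIP == nil {
		return false
	}
	for _, ipMask := range allowList {
		_, network, err := net.ParseCIDR(ipMask)
		if err != nil {
			continue
		}
		if network.Contains(parsedIP) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(allowedFrom("192.0.2.10", []string{"192.0.2.0/24"}))   // true
	fmt.Println(allowedFrom("198.51.100.7", []string{"192.0.2.0/24"})) // false
}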

dataprovider/apikey.go Normal file (186 lines)

@@ -0,0 +1,186 @@
package dataprovider
import (
"encoding/json"
"fmt"
"strings"
"time"
"github.com/alexedwards/argon2id"
"golang.org/x/crypto/bcrypt"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
// APIKeyScope defines the supported API key scopes
type APIKeyScope int
// Supported API key scopes
const (
// the API key will be used for an admin
APIKeyScopeAdmin APIKeyScope = iota + 1
// the API key will be used for a user
APIKeyScopeUser
)
// APIKey defines a SFTPGo API key.
// API keys can be used as an authentication alternative to short-lived tokens
// for the REST API
type APIKey struct {
// Database unique identifier
ID int64 `json:"-"`
// Unique key identifier, used for key lookups.
// The generated key is in the format `KeyID.hash(Key)` so we can split
// and look it up by KeyID, then verify that the key matches the recorded hash
KeyID string `json:"id"`
// User friendly key name
Name string `json:"name"`
// we store the hash of the key, this is just like a password
Key string `json:"key,omitempty"`
Scope APIKeyScope `json:"scope"`
CreatedAt int64 `json:"created_at"`
UpdatedAt int64 `json:"updated_at"`
// 0 means never used
LastUseAt int64 `json:"last_use_at,omitempty"`
// 0 means never expire
ExpiresAt int64 `json:"expires_at,omitempty"`
Description string `json:"description,omitempty"`
// Username associated with this API key.
// If empty and the scope is APIKeyScopeUser the key is valid for any user
User string `json:"user,omitempty"`
// Admin username associated with this API key.
// If empty and the scope is APIKeyScopeAdmin the key is valid for any admin
Admin string `json:"admin,omitempty"`
// these fields are for internal use
userID int64
adminID int64
plainKey string
}
func (k *APIKey) getACopy() APIKey {
return APIKey{
ID: k.ID,
KeyID: k.KeyID,
Name: k.Name,
Key: k.Key,
Scope: k.Scope,
CreatedAt: k.CreatedAt,
UpdatedAt: k.UpdatedAt,
LastUseAt: k.LastUseAt,
ExpiresAt: k.ExpiresAt,
Description: k.Description,
User: k.User,
Admin: k.Admin,
userID: k.userID,
adminID: k.adminID,
}
}
// RenderAsJSON implements the renderer interface used within plugins
func (k *APIKey) RenderAsJSON(reload bool) ([]byte, error) {
if reload {
apiKey, err := provider.apiKeyExists(k.KeyID)
if err != nil {
providerLog(logger.LevelError, "unable to reload api key before rendering as json: %v", err)
return nil, err
}
apiKey.HideConfidentialData()
return json.Marshal(apiKey)
}
k.HideConfidentialData()
return json.Marshal(k)
}
// HideConfidentialData hides API key confidential data
func (k *APIKey) HideConfidentialData() {
k.Key = ""
}
func (k *APIKey) hashKey() error {
if k.Key != "" && !util.IsStringPrefixInSlice(k.Key, internalHashPwdPrefixes) {
if config.PasswordHashing.Algo == HashingAlgoBcrypt {
hashed, err := bcrypt.GenerateFromPassword([]byte(k.Key), config.PasswordHashing.BcryptOptions.Cost)
if err != nil {
return err
}
k.Key = string(hashed)
} else {
hashed, err := argon2id.CreateHash(k.Key, argon2Params)
if err != nil {
return err
}
k.Key = hashed
}
}
return nil
}
func (k *APIKey) generateKey() {
if k.KeyID != "" || k.Key != "" {
return
}
k.KeyID = util.GenerateUniqueID()
k.Key = util.GenerateUniqueID()
k.plainKey = k.Key
}
// DisplayKey returns the key to show to the user
func (k *APIKey) DisplayKey() string {
return fmt.Sprintf("%v.%v", k.KeyID, k.plainKey)
}
func (k *APIKey) validate() error {
if k.Name == "" {
return util.NewValidationError("name is mandatory")
}
if k.Scope != APIKeyScopeAdmin && k.Scope != APIKeyScopeUser {
return util.NewValidationError(fmt.Sprintf("invalid scope: %v", k.Scope))
}
k.generateKey()
if err := k.hashKey(); err != nil {
return err
}
if k.User != "" && k.Admin != "" {
return util.NewValidationError("an API key can be related to a user or an admin, not both")
}
if k.Scope == APIKeyScopeAdmin {
k.User = ""
}
if k.Scope == APIKeyScopeUser {
k.Admin = ""
}
if k.User != "" {
_, err := provider.userExists(k.User)
if err != nil {
return util.NewValidationError(fmt.Sprintf("unable to check API key user %v: %v", k.User, err))
}
}
if k.Admin != "" {
_, err := provider.adminExists(k.Admin)
if err != nil {
return util.NewValidationError(fmt.Sprintf("unable to check API key admin %v: %v", k.Admin, err))
}
}
return nil
}
// Authenticate tries to authenticate the provided plain key
func (k *APIKey) Authenticate(plainKey string) error {
if k.ExpiresAt > 0 && k.ExpiresAt < util.GetTimeAsMsSinceEpoch(time.Now()) {
return fmt.Errorf("API key %#v is expired, expiration timestamp: %v current timestamp: %v", k.KeyID,
k.ExpiresAt, util.GetTimeAsMsSinceEpoch(time.Now()))
}
if strings.HasPrefix(k.Key, bcryptPwdPrefix) {
if err := bcrypt.CompareHashAndPassword([]byte(k.Key), []byte(plainKey)); err != nil {
return ErrInvalidCredentials
}
} else if strings.HasPrefix(k.Key, argonPwdPrefix) {
match, err := argon2id.ComparePasswordAndHash(plainKey, k.Key)
if err != nil || !match {
return ErrInvalidCredentials
}
}
return nil
}
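
As the KeyID field comment notes, the value handed to the user is KeyID.plainKey, so an authenticating caller splits on the first dot, loads the key by KeyID, then verifies the plain part against the stored hash. A hedged sketch of that flow; lookup and verify are stand-ins for provider.apiKeyExists and APIKey.Authenticate:

package main

import (
	"errors"
	"fmt"
	"strings"
)

// authenticateDisplayedKey splits a "KeyID.plainKey" value as produced by
// DisplayKey, then delegates lookup and hash verification to the callbacks.
func authenticateDisplayedKey(displayed string,
	lookup func(keyID string) (hash string, err error),
	verify func(hash, plain string) error) error {
	keyID, plain, found := strings.Cut(displayed, ".")
	if !found || keyID == "" || plain == "" {
		return errors.New("malformed API key")
	}
	hash, err := lookup(keyID)
	if err != nil {
		return err
	}
	return verify(hash, plain)
}

func main() {
	// toy stand-ins: the stored "hash" is the plain key itself
	lookup := func(string) (string, error) { return "secret", nil }
	verify := func(hash, plain string) error {
		if hash != plain {
			return errors.New("invalid credentials")
		}
		return nil
	}
	fmt.Println(authenticateDisplayedKey("abc123.secret", lookup, verify)) // <nil>
}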

File diff suppressed because it is too large


@@ -0,0 +1,18 @@
//go:build nobolt
// +build nobolt
package dataprovider
import (
"errors"
"github.com/drakkan/sftpgo/v2/version"
)
func init() {
version.AddFeature("-bolt")
}
func initializeBoltProvider(basePath string) error {
return errors.New("bolt disabled at build time")
}


@@ -0,0 +1,62 @@
package dataprovider
import (
"sync"
)
var cachedPasswords passwordsCache
func init() {
cachedPasswords = passwordsCache{
cache: make(map[string]string),
}
}
type passwordsCache struct {
sync.RWMutex
cache map[string]string
}
func (c *passwordsCache) Add(username, password string) {
if !config.PasswordCaching || username == "" || password == "" {
return
}
c.Lock()
defer c.Unlock()
c.cache[username] = password
}
func (c *passwordsCache) Remove(username string) {
if !config.PasswordCaching {
return
}
c.Lock()
defer c.Unlock()
delete(c.cache, username)
}
// Check reports whether the user is found and whether the password matches
func (c *passwordsCache) Check(username, password string) (bool, bool) {
if username == "" || password == "" {
return false, false
}
c.RLock()
defer c.RUnlock()
pwd, ok := c.cache[username]
if !ok {
return false, false
}
return true, pwd == password
}
// CheckCachedPassword is a utility method used only in test cases
func CheckCachedPassword(username, password string) (bool, bool) {
return cachedPasswords.Check(username, password)
}
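
Check's two booleans keep "user present in the cache" separate from "password matches"; a sketch of how a caller might branch on the pair (the fallback wording is an assumption about the intended use, not SFTPGo code):

package main

import "fmt"

// classify maps the (found, match) pair returned by passwordsCache.Check
// to the three possible outcomes.
func classify(found, match bool) string {
	switch {
	case !found:
		return "not cached: verify against the provider"
	case !match:
		return "cached but stale: verify against the provider again"
	default:
		return "cached and matching: the expensive hash check can be skipped"
	}
}

func main() {
	fmt.Println(classify(false, false))
	fmt.Println(classify(true, false))
	fmt.Println(classify(true, true))
}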

dataprovider/cacheduser.go Normal file (149 lines)

@@ -0,0 +1,149 @@
package dataprovider
import (
"sync"
"time"
"golang.org/x/net/webdav"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
var (
webDAVUsersCache *usersCache
)
func init() {
webDAVUsersCache = &usersCache{
users: map[string]CachedUser{},
}
}
// InitializeWebDAVUserCache initializes the cache for webdav users
func InitializeWebDAVUserCache(maxSize int) {
webDAVUsersCache = &usersCache{
users: map[string]CachedUser{},
maxSize: maxSize,
}
}
// CachedUser adds fields useful for caching to a SFTPGo user
type CachedUser struct {
User User
Expiration time.Time
Password string
LockSystem webdav.LockSystem
}
// IsExpired returns true if the cached user is expired
func (c *CachedUser) IsExpired() bool {
if c.Expiration.IsZero() {
return false
}
return c.Expiration.Before(time.Now())
}
type usersCache struct {
sync.RWMutex
users map[string]CachedUser
maxSize int
}
func (cache *usersCache) updateLastLogin(username string) {
cache.Lock()
defer cache.Unlock()
if cachedUser, ok := cache.users[username]; ok {
cachedUser.User.LastLogin = util.GetTimeAsMsSinceEpoch(time.Now())
cache.users[username] = cachedUser
}
}
// swap updates an existing cached user with the specified one,
// preserving the lock filesystem if possible
func (cache *usersCache) swap(user *User) {
cache.Lock()
defer cache.Unlock()
if cachedUser, ok := cache.users[user.Username]; ok {
if cachedUser.User.Password != user.Password {
providerLog(logger.LevelDebug, "current password different from the cached one for user %#v, removing from cache",
user.Username)
// the password changed, the cached user is no longer valid
delete(cache.users, user.Username)
return
}
if cachedUser.User.isFsEqual(user) {
// the updated user has the same fs as the cached one, we can preserve the lock filesystem
providerLog(logger.LevelDebug, "current password and fs unchanged for for user %#v, swap cached one",
user.Username)
cachedUser.User = *user
cache.users[user.Username] = cachedUser
} else {
// filesystem changed, the cached user is no longer valid
providerLog(logger.LevelDebug, "current fs different from the cached one for user %#v, removing from cache",
user.Username)
delete(cache.users, user.Username)
}
}
}
func (cache *usersCache) add(cachedUser *CachedUser) {
cache.Lock()
defer cache.Unlock()
if cache.maxSize > 0 && len(cache.users) >= cache.maxSize {
var userToRemove string
var expirationTime time.Time
for k, v := range cache.users {
if userToRemove == "" {
userToRemove = k
expirationTime = v.Expiration
continue
}
expireTime := v.Expiration
if !expireTime.IsZero() && expireTime.Before(expirationTime) {
userToRemove = k
expirationTime = expireTime
}
}
delete(cache.users, userToRemove)
}
if cachedUser.User.Username != "" {
cache.users[cachedUser.User.Username] = *cachedUser
}
}
func (cache *usersCache) remove(username string) {
cache.Lock()
defer cache.Unlock()
delete(cache.users, username)
}
func (cache *usersCache) get(username string) (*CachedUser, bool) {
cache.RLock()
defer cache.RUnlock()
cachedUser, ok := cache.users[username]
return &cachedUser, ok
}
// CacheWebDAVUser adds a user to the WebDAV cache
func CacheWebDAVUser(cachedUser *CachedUser) {
webDAVUsersCache.add(cachedUser)
}
// GetCachedWebDAVUser returns a previously cached WebDAV user
func GetCachedWebDAVUser(username string) (*CachedUser, bool) {
return webDAVUsersCache.get(username)
}
// RemoveCachedWebDAVUser removes a cached WebDAV user
func RemoveCachedWebDAVUser(username string) {
webDAVUsersCache.remove(username)
}
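
A caller fills the cache with an Expiration derived from its own TTL; a zero Expiration never expires, per IsExpired. A minimal usage sketch, which compiles only inside the sftpgo module; the username and the 2-hour TTL are illustrative:

package main

import (
	"fmt"
	"time"

	"github.com/drakkan/sftpgo/v2/dataprovider"
)

func main() {
	cached := &dataprovider.CachedUser{
		// a zero Expiration would make IsExpired always return false
		Expiration: time.Now().Add(2 * time.Hour),
	}
	cached.User.Username = "davuser"
	dataprovider.CacheWebDAVUser(cached)

	if cu, ok := dataprovider.GetCachedWebDAVUser("davuser"); ok && !cu.IsExpired() {
		fmt.Println("cache hit for", cu.User.Username)
	}
}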

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,25 +1,100 @@
//go:build !nomysql
// +build !nomysql
package dataprovider
import (
"context"
"crypto/x509"
"database/sql"
"errors"
"fmt"
"strings"
"time"
"github.com/drakkan/sftpgo/logger"
// we import go-sql-driver/mysql here to be able to disable MySQL support using a build tag
_ "github.com/go-sql-driver/mysql"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/version"
"github.com/drakkan/sftpgo/v2/vfs"
)
const (
mysqlUsersTableSQL = "CREATE TABLE `{{users}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
"`username` varchar(255) NOT NULL UNIQUE, `password` varchar(255) NULL, `public_keys` longtext NULL, " +
"`home_dir` varchar(255) NOT NULL, `uid` integer NOT NULL, `gid` integer NOT NULL, `max_sessions` integer NOT NULL, " +
" `quota_size` bigint NOT NULL, `quota_files` integer NOT NULL, `permissions` longtext NOT NULL, " +
"`used_quota_size` bigint NOT NULL, `used_quota_files` integer NOT NULL, `last_quota_update` bigint NOT NULL, " +
"`upload_bandwidth` integer NOT NULL, `download_bandwidth` integer NOT NULL, `expiration_date` bigint(20) NOT NULL, " +
"`last_login` bigint(20) NOT NULL, `status` int(11) NOT NULL, `filters` longtext DEFAULT NULL, " +
"`filesystem` longtext DEFAULT NULL);"
mysqlSchemaTableSQL = "CREATE TABLE `schema_version` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `version` integer NOT NULL);"
mysqlUsersV2SQL = "ALTER TABLE `{{users}}` ADD COLUMN `virtual_folders` longtext NULL;"
mysqlResetSQL = "DROP TABLE IF EXISTS `{{api_keys}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{folders_mapping}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{admins}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{folders}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{shares}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{users}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{defender_events}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{defender_hosts}}` CASCADE;" +
"DROP TABLE IF EXISTS `{{schema_version}}` CASCADE;"
mysqlInitialSQL = "CREATE TABLE `{{schema_version}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `version` integer NOT NULL);" +
"CREATE TABLE `{{admins}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `username` varchar(255) NOT NULL UNIQUE, " +
"`description` varchar(512) NULL, `password` varchar(255) NOT NULL, `email` varchar(255) NULL, `status` integer NOT NULL, " +
"`permissions` longtext NOT NULL, `filters` longtext NULL, `additional_info` longtext NULL);" +
"CREATE TABLE `{{folders}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `name` varchar(255) NOT NULL UNIQUE, " +
"`description` varchar(512) NULL, `path` varchar(512) NULL, `used_quota_size` bigint NOT NULL, " +
"`used_quota_files` integer NOT NULL, `last_quota_update` bigint NOT NULL, `filesystem` longtext NULL);" +
"CREATE TABLE `{{users}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `username` varchar(255) NOT NULL UNIQUE, " +
"`status` integer NOT NULL, `expiration_date` bigint NOT NULL, `description` varchar(512) NULL, `password` longtext NULL, " +
"`public_keys` longtext NULL, `home_dir` varchar(512) NOT NULL, `uid` integer NOT NULL, `gid` integer NOT NULL, " +
"`max_sessions` integer NOT NULL, `quota_size` bigint NOT NULL, `quota_files` integer NOT NULL, " +
"`permissions` longtext NOT NULL, `used_quota_size` bigint NOT NULL, `used_quota_files` integer NOT NULL, " +
"`last_quota_update` bigint NOT NULL, `upload_bandwidth` integer NOT NULL, `download_bandwidth` integer NOT NULL, " +
"`last_login` bigint NOT NULL, `filters` longtext NULL, `filesystem` longtext NULL, `additional_info` longtext NULL);" +
"CREATE TABLE `{{folders_mapping}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `virtual_path` varchar(512) NOT NULL, " +
"`quota_size` bigint NOT NULL, `quota_files` integer NOT NULL, `folder_id` integer NOT NULL, `user_id` integer NOT NULL);" +
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `{{prefix}}unique_mapping` UNIQUE (`user_id`, `folder_id`);" +
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `{{prefix}}folders_mapping_folder_id_fk_folders_id` FOREIGN KEY (`folder_id`) REFERENCES `{{folders}}` (`id`) ON DELETE CASCADE;" +
"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `{{prefix}}folders_mapping_user_id_fk_users_id` FOREIGN KEY (`user_id`) REFERENCES `{{users}}` (`id`) ON DELETE CASCADE;" +
"INSERT INTO {{schema_version}} (version) VALUES (10);"
mysqlV11SQL = "CREATE TABLE `{{api_keys}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `name` varchar(255) NOT NULL, `key_id` varchar(50) NOT NULL UNIQUE," +
"`api_key` varchar(255) NOT NULL UNIQUE, `scope` integer NOT NULL, `created_at` bigint NOT NULL, `updated_at` bigint NOT NULL, `last_use_at` bigint NOT NULL, " +
"`expires_at` bigint NOT NULL, `description` longtext NULL, `admin_id` integer NULL, `user_id` integer NULL);" +
"ALTER TABLE `{{api_keys}}` ADD CONSTRAINT `{{prefix}}api_keys_admin_id_fk_admins_id` FOREIGN KEY (`admin_id`) REFERENCES `{{admins}}` (`id`) ON DELETE CASCADE;" +
"ALTER TABLE `{{api_keys}}` ADD CONSTRAINT `{{prefix}}api_keys_user_id_fk_users_id` FOREIGN KEY (`user_id`) REFERENCES `{{users}}` (`id`) ON DELETE CASCADE;"
mysqlV11DownSQL = "DROP TABLE `{{api_keys}}` CASCADE;"
mysqlV12SQL = "ALTER TABLE `{{admins}}` ADD COLUMN `created_at` bigint DEFAULT 0 NOT NULL;" +
"ALTER TABLE `{{admins}}` ALTER COLUMN `created_at` DROP DEFAULT;" +
"ALTER TABLE `{{admins}}` ADD COLUMN `updated_at` bigint DEFAULT 0 NOT NULL;" +
"ALTER TABLE `{{admins}}` ALTER COLUMN `updated_at` DROP DEFAULT;" +
"ALTER TABLE `{{admins}}` ADD COLUMN `last_login` bigint DEFAULT 0 NOT NULL;" +
"ALTER TABLE `{{admins}}` ALTER COLUMN `last_login` DROP DEFAULT;" +
"ALTER TABLE `{{users}}` ADD COLUMN `created_at` bigint DEFAULT 0 NOT NULL;" +
"ALTER TABLE `{{users}}` ALTER COLUMN `created_at` DROP DEFAULT;" +
"ALTER TABLE `{{users}}` ADD COLUMN `updated_at` bigint DEFAULT 0 NOT NULL;" +
"ALTER TABLE `{{users}}` ALTER COLUMN `updated_at` DROP DEFAULT;" +
"CREATE INDEX `{{prefix}}users_updated_at_idx` ON `{{users}}` (`updated_at`);"
mysqlV12DownSQL = "ALTER TABLE `{{admins}}` DROP COLUMN `updated_at`;" +
"ALTER TABLE `{{admins}}` DROP COLUMN `created_at`;" +
"ALTER TABLE `{{admins}}` DROP COLUMN `last_login`;" +
"ALTER TABLE `{{users}}` DROP COLUMN `created_at`;" +
"ALTER TABLE `{{users}}` DROP COLUMN `updated_at`;"
mysqlV13SQL = "ALTER TABLE `{{users}}` ADD COLUMN `email` varchar(255) NULL;"
mysqlV13DownSQL = "ALTER TABLE `{{users}}` DROP COLUMN `email`;"
mysqlV14SQL = "CREATE TABLE `{{shares}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
"`share_id` varchar(60) NOT NULL UNIQUE, `name` varchar(255) NOT NULL, `description` varchar(512) NULL, " +
"`scope` integer NOT NULL, `paths` longtext NOT NULL, `created_at` bigint NOT NULL, " +
"`updated_at` bigint NOT NULL, `last_use_at` bigint NOT NULL, `expires_at` bigint NOT NULL, " +
"`password` longtext NULL, `max_tokens` integer NOT NULL, `used_tokens` integer NOT NULL, " +
"`allow_from` longtext NULL, `user_id` integer NOT NULL);" +
"ALTER TABLE `{{shares}}` ADD CONSTRAINT `{{prefix}}shares_user_id_fk_users_id` " +
"FOREIGN KEY (`user_id`) REFERENCES `{{users}}` (`id`) ON DELETE CASCADE;"
mysqlV14DownSQL = "DROP TABLE `{{shares}}` CASCADE;"
mysqlV15SQL = "CREATE TABLE `{{defender_hosts}}` (`id` bigint AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
"`ip` varchar(50) NOT NULL UNIQUE, `ban_time` bigint NOT NULL, `updated_at` bigint NOT NULL);" +
"CREATE TABLE `{{defender_events}}` (`id` bigint AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
"`date_time` bigint NOT NULL, `score` integer NOT NULL, `host_id` bigint NOT NULL);" +
"ALTER TABLE `{{defender_events}}` ADD CONSTRAINT `{{prefix}}defender_events_host_id_fk_defender_hosts_id` " +
"FOREIGN KEY (`host_id`) REFERENCES `{{defender_hosts}}` (`id`) ON DELETE CASCADE;" +
"CREATE INDEX `{{prefix}}defender_hosts_updated_at_idx` ON `{{defender_hosts}}` (`updated_at`);" +
"CREATE INDEX `{{prefix}}defender_hosts_ban_time_idx` ON `{{defender_hosts}}` (`ban_time`);" +
"CREATE INDEX `{{prefix}}defender_events_date_time_idx` ON `{{defender_events}}` (`date_time`);"
mysqlV15DownSQL = "DROP TABLE `{{defender_events}}` CASCADE;" +
"DROP TABLE `{{defender_hosts}}` CASCADE;"
)
// MySQLProvider auth provider for MySQL/MariaDB database
@@ -27,30 +102,39 @@ type MySQLProvider struct {
dbHandle *sql.DB
}
func init() {
version.AddFeature("+mysql")
}
func initializeMySQLProvider() error {
var err error
logSender = MySQLDataProviderName
dbHandle, err := sql.Open("mysql", getMySQLConnectionString(false))
if err == nil {
providerLog(logger.LevelDebug, "mysql database handle created, connection string: %#v, pool size: %v",
getMySQLConnectionString(true), config.PoolSize)
dbHandle.SetMaxOpenConns(config.PoolSize)
dbHandle.SetConnMaxLifetime(1800 * time.Second)
provider = MySQLProvider{dbHandle: dbHandle}
if config.PoolSize > 0 {
dbHandle.SetMaxIdleConns(config.PoolSize)
} else {
dbHandle.SetMaxIdleConns(2)
}
dbHandle.SetConnMaxLifetime(240 * time.Second)
provider = &MySQLProvider{dbHandle: dbHandle}
} else {
providerLog(logger.LevelWarn, "error creating mysql database handler, connection string: %#v, error: %v",
providerLog(logger.LevelError, "error creating mysql database handler, connection string: %#v, error: %v",
getMySQLConnectionString(true), err)
}
return err
}
func getMySQLConnectionString(redactedPwd bool) string {
var connectionString string
if len(config.ConnectionString) == 0 {
if config.ConnectionString == "" {
password := config.Password
if redactedPwd {
password = "[redacted]"
}
connectionString = fmt.Sprintf("%v:%v@tcp([%v]:%v)/%v?charset=utf8&interpolateParams=true&timeout=10s&tls=%v&writeTimeout=10s&readTimeout=10s",
connectionString = fmt.Sprintf("%v:%v@tcp([%v]:%v)/%v?charset=utf8mb4&interpolateParams=true&timeout=10s&parseTime=true&tls=%v&writeTimeout=10s&readTimeout=10s",
config.Username, password, config.Host, config.Port, config.Name, getSSLMode())
} else {
connectionString = config.ConnectionString
@@ -58,122 +142,465 @@ func getMySQLConnectionString(redactedPwd bool) string {
return connectionString
}
func (p MySQLProvider) checkAvailability() error {
func (p *MySQLProvider) checkAvailability() error {
return sqlCommonCheckAvailability(p.dbHandle)
}
func (p MySQLProvider) validateUserAndPass(username string, password string) (User, error) {
return sqlCommonValidateUserAndPass(username, password, p.dbHandle)
func (p *MySQLProvider) validateUserAndPass(username, password, ip, protocol string) (User, error) {
return sqlCommonValidateUserAndPass(username, password, ip, protocol, p.dbHandle)
}
func (p MySQLProvider) validateUserAndPubKey(username string, publicKey string) (User, string, error) {
func (p *MySQLProvider) validateUserAndTLSCert(username, protocol string, tlsCert *x509.Certificate) (User, error) {
return sqlCommonValidateUserAndTLSCertificate(username, protocol, tlsCert, p.dbHandle)
}
func (p *MySQLProvider) validateUserAndPubKey(username string, publicKey []byte) (User, string, error) {
return sqlCommonValidateUserAndPubKey(username, publicKey, p.dbHandle)
}
func (p MySQLProvider) getUserByID(ID int64) (User, error) {
return sqlCommonGetUserByID(ID, p.dbHandle)
}
func (p MySQLProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
func (p *MySQLProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
return sqlCommonUpdateQuota(username, filesAdd, sizeAdd, reset, p.dbHandle)
}
func (p MySQLProvider) updateLastLogin(username string) error {
return sqlCommonUpdateLastLogin(username, p.dbHandle)
}
func (p MySQLProvider) getUsedQuota(username string) (int, int64, error) {
func (p *MySQLProvider) getUsedQuota(username string) (int, int64, error) {
return sqlCommonGetUsedQuota(username, p.dbHandle)
}
func (p MySQLProvider) userExists(username string) (User, error) {
return sqlCommonCheckUserExists(username, p.dbHandle)
func (p *MySQLProvider) setUpdatedAt(username string) {
sqlCommonSetUpdatedAt(username, p.dbHandle)
}
func (p MySQLProvider) addUser(user User) error {
func (p *MySQLProvider) updateLastLogin(username string) error {
return sqlCommonUpdateLastLogin(username, p.dbHandle)
}
func (p *MySQLProvider) updateAdminLastLogin(username string) error {
return sqlCommonUpdateAdminLastLogin(username, p.dbHandle)
}
func (p *MySQLProvider) userExists(username string) (User, error) {
return sqlCommonGetUserByUsername(username, p.dbHandle)
}
func (p *MySQLProvider) addUser(user *User) error {
return sqlCommonAddUser(user, p.dbHandle)
}
func (p MySQLProvider) updateUser(user User) error {
func (p *MySQLProvider) updateUser(user *User) error {
return sqlCommonUpdateUser(user, p.dbHandle)
}
func (p MySQLProvider) deleteUser(user User) error {
func (p *MySQLProvider) deleteUser(user *User) error {
return sqlCommonDeleteUser(user, p.dbHandle)
}
func (p MySQLProvider) dumpUsers() ([]User, error) {
func (p *MySQLProvider) dumpUsers() ([]User, error) {
return sqlCommonDumpUsers(p.dbHandle)
}
func (p MySQLProvider) getUsers(limit int, offset int, order string, username string) ([]User, error) {
return sqlCommonGetUsers(limit, offset, order, username, p.dbHandle)
func (p *MySQLProvider) getRecentlyUpdatedUsers(after int64) ([]User, error) {
return sqlCommonGetRecentlyUpdatedUsers(after, p.dbHandle)
}
func (p MySQLProvider) close() error {
func (p *MySQLProvider) getUsers(limit int, offset int, order string) ([]User, error) {
return sqlCommonGetUsers(limit, offset, order, p.dbHandle)
}
func (p *MySQLProvider) dumpFolders() ([]vfs.BaseVirtualFolder, error) {
return sqlCommonDumpFolders(p.dbHandle)
}
func (p *MySQLProvider) getFolders(limit, offset int, order string) ([]vfs.BaseVirtualFolder, error) {
return sqlCommonGetFolders(limit, offset, order, p.dbHandle)
}
func (p *MySQLProvider) getFolderByName(name string) (vfs.BaseVirtualFolder, error) {
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
return sqlCommonGetFolderByName(ctx, name, p.dbHandle)
}
func (p *MySQLProvider) addFolder(folder *vfs.BaseVirtualFolder) error {
return sqlCommonAddFolder(folder, p.dbHandle)
}
func (p *MySQLProvider) updateFolder(folder *vfs.BaseVirtualFolder) error {
return sqlCommonUpdateFolder(folder, p.dbHandle)
}
func (p *MySQLProvider) deleteFolder(folder *vfs.BaseVirtualFolder) error {
return sqlCommonDeleteFolder(folder, p.dbHandle)
}
func (p *MySQLProvider) updateFolderQuota(name string, filesAdd int, sizeAdd int64, reset bool) error {
return sqlCommonUpdateFolderQuota(name, filesAdd, sizeAdd, reset, p.dbHandle)
}
func (p *MySQLProvider) getUsedFolderQuota(name string) (int, int64, error) {
return sqlCommonGetFolderUsedQuota(name, p.dbHandle)
}
func (p *MySQLProvider) adminExists(username string) (Admin, error) {
return sqlCommonGetAdminByUsername(username, p.dbHandle)
}
func (p *MySQLProvider) addAdmin(admin *Admin) error {
return sqlCommonAddAdmin(admin, p.dbHandle)
}
func (p *MySQLProvider) updateAdmin(admin *Admin) error {
return sqlCommonUpdateAdmin(admin, p.dbHandle)
}
func (p *MySQLProvider) deleteAdmin(admin *Admin) error {
return sqlCommonDeleteAdmin(admin, p.dbHandle)
}
func (p *MySQLProvider) getAdmins(limit int, offset int, order string) ([]Admin, error) {
return sqlCommonGetAdmins(limit, offset, order, p.dbHandle)
}
func (p *MySQLProvider) dumpAdmins() ([]Admin, error) {
return sqlCommonDumpAdmins(p.dbHandle)
}
func (p *MySQLProvider) validateAdminAndPass(username, password, ip string) (Admin, error) {
return sqlCommonValidateAdminAndPass(username, password, ip, p.dbHandle)
}
func (p *MySQLProvider) apiKeyExists(keyID string) (APIKey, error) {
return sqlCommonGetAPIKeyByID(keyID, p.dbHandle)
}
func (p *MySQLProvider) addAPIKey(apiKey *APIKey) error {
return sqlCommonAddAPIKey(apiKey, p.dbHandle)
}
func (p *MySQLProvider) updateAPIKey(apiKey *APIKey) error {
return sqlCommonUpdateAPIKey(apiKey, p.dbHandle)
}
func (p *MySQLProvider) deleteAPIKey(apiKey *APIKey) error {
return sqlCommonDeleteAPIKey(apiKey, p.dbHandle)
}
func (p *MySQLProvider) getAPIKeys(limit int, offset int, order string) ([]APIKey, error) {
return sqlCommonGetAPIKeys(limit, offset, order, p.dbHandle)
}
func (p *MySQLProvider) dumpAPIKeys() ([]APIKey, error) {
return sqlCommonDumpAPIKeys(p.dbHandle)
}
func (p *MySQLProvider) updateAPIKeyLastUse(keyID string) error {
return sqlCommonUpdateAPIKeyLastUse(keyID, p.dbHandle)
}
func (p *MySQLProvider) shareExists(shareID, username string) (Share, error) {
return sqlCommonGetShareByID(shareID, username, p.dbHandle)
}
func (p *MySQLProvider) addShare(share *Share) error {
return sqlCommonAddShare(share, p.dbHandle)
}
func (p *MySQLProvider) updateShare(share *Share) error {
return sqlCommonUpdateShare(share, p.dbHandle)
}
func (p *MySQLProvider) deleteShare(share *Share) error {
return sqlCommonDeleteShare(share, p.dbHandle)
}
func (p *MySQLProvider) getShares(limit int, offset int, order, username string) ([]Share, error) {
return sqlCommonGetShares(limit, offset, order, username, p.dbHandle)
}
func (p *MySQLProvider) dumpShares() ([]Share, error) {
return sqlCommonDumpShares(p.dbHandle)
}
func (p *MySQLProvider) updateShareLastUse(shareID string, numTokens int) error {
return sqlCommonUpdateShareLastUse(shareID, numTokens, p.dbHandle)
}
func (p *MySQLProvider) getDefenderHosts(from int64, limit int) ([]*DefenderEntry, error) {
return sqlCommonGetDefenderHosts(from, limit, p.dbHandle)
}
func (p *MySQLProvider) getDefenderHostByIP(ip string, from int64) (*DefenderEntry, error) {
return sqlCommonGetDefenderHostByIP(ip, from, p.dbHandle)
}
func (p *MySQLProvider) isDefenderHostBanned(ip string) (*DefenderEntry, error) {
return sqlCommonIsDefenderHostBanned(ip, p.dbHandle)
}
func (p *MySQLProvider) updateDefenderBanTime(ip string, minutes int) error {
return sqlCommonDefenderIncrementBanTime(ip, minutes, p.dbHandle)
}
func (p *MySQLProvider) deleteDefenderHost(ip string) error {
return sqlCommonDeleteDefenderHost(ip, p.dbHandle)
}
func (p *MySQLProvider) addDefenderEvent(ip string, score int) error {
return sqlCommonAddDefenderHostAndEvent(ip, score, p.dbHandle)
}
func (p *MySQLProvider) setDefenderBanTime(ip string, banTime int64) error {
return sqlCommonSetDefenderBanTime(ip, banTime, p.dbHandle)
}
func (p *MySQLProvider) cleanupDefender(from int64) error {
return sqlCommonDefenderCleanup(from, p.dbHandle)
}
func (p *MySQLProvider) close() error {
return p.dbHandle.Close()
}
func (p MySQLProvider) reloadConfig() error {
func (p *MySQLProvider) reloadConfig() error {
return nil
}
// initializeDatabase creates the initial database structure
func (p MySQLProvider) initializeDatabase() error {
sqlUsers := strings.Replace(mysqlUsersTableSQL, "{{users}}", config.UsersTable, 1)
tx, err := p.dbHandle.Begin()
if err != nil {
return err
func (p *MySQLProvider) initializeDatabase() error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, false)
if err == nil && dbVersion.Version > 0 {
return ErrNoInitRequired
}
_, err = tx.Exec(sqlUsers)
if err != nil {
tx.Rollback()
return err
if errors.Is(err, sql.ErrNoRows) {
return errSchemaVersionEmpty
}
_, err = tx.Exec(mysqlSchemaTableSQL)
if err != nil {
tx.Rollback()
return err
}
_, err = tx.Exec(initialDBVersionSQL)
if err != nil {
tx.Rollback()
return err
}
return tx.Commit()
initialSQL := strings.ReplaceAll(mysqlInitialSQL, "{{schema_version}}", sqlTableSchemaVersion)
initialSQL = strings.ReplaceAll(initialSQL, "{{admins}}", sqlTableAdmins)
initialSQL = strings.ReplaceAll(initialSQL, "{{folders}}", sqlTableFolders)
initialSQL = strings.ReplaceAll(initialSQL, "{{users}}", sqlTableUsers)
initialSQL = strings.ReplaceAll(initialSQL, "{{folders_mapping}}", sqlTableFoldersMapping)
initialSQL = strings.ReplaceAll(initialSQL, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, strings.Split(initialSQL, ";"), 10)
}
func (p MySQLProvider) migrateDatabase() error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle)
//nolint:dupl
func (p *MySQLProvider) migrateDatabase() error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
if err != nil {
return err
}
if dbVersion.Version == sqlDatabaseVersion {
providerLog(logger.LevelDebug, "sql database is updated, current version: %v", dbVersion.Version)
return nil
switch version := dbVersion.Version; {
case version == sqlDatabaseVersion:
providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", version)
return ErrNoInitRequired
case version < 10:
err = fmt.Errorf("database version %v is too old, please see the upgrading docs", version)
providerLog(logger.LevelError, "%v", err)
logger.ErrorToConsole("%v", err)
return err
case version == 10:
return updateMySQLDatabaseFromV10(p.dbHandle)
case version == 11:
return updateMySQLDatabaseFromV11(p.dbHandle)
case version == 12:
return updateMySQLDatabaseFromV12(p.dbHandle)
case version == 13:
return updateMySQLDatabaseFromV13(p.dbHandle)
case version == 14:
return updateMySQLDatabaseFromV14(p.dbHandle)
default:
if version > sqlDatabaseVersion {
providerLog(logger.LevelError, "database version %v is newer than the supported one: %v", version,
sqlDatabaseVersion)
logger.WarnToConsole("database version %v is newer than the supported one: %v", version,
sqlDatabaseVersion)
return nil
}
return fmt.Errorf("database version not handled: %v", version)
}
if dbVersion.Version == 1 {
return updateMySQLDatabaseFrom1To2(p.dbHandle)
}
return nil
}
func updateMySQLDatabaseFrom1To2(dbHandle *sql.DB) error {
providerLog(logger.LevelInfo, "updating database version: 1 -> 2")
sql := strings.Replace(mysqlUsersV2SQL, "{{users}}", config.UsersTable, 1)
tx, err := dbHandle.Begin()
func (p *MySQLProvider) revertDatabase(targetVersion int) error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
if err != nil {
return err
}
_, err = tx.Exec(sql)
if err != nil {
tx.Rollback()
return err
if dbVersion.Version == targetVersion {
return errors.New("current version match target version, nothing to do")
}
err = sqlCommonUpdateDatabaseVersionWithTX(tx, 2)
if err != nil {
tx.Rollback()
return err
switch dbVersion.Version {
case 15:
return downgradeMySQLDatabaseFromV15(p.dbHandle)
case 14:
return downgradeMySQLDatabaseFromV14(p.dbHandle)
case 13:
return downgradeMySQLDatabaseFromV13(p.dbHandle)
case 12:
return downgradeMySQLDatabaseFromV12(p.dbHandle)
case 11:
return downgradeMySQLDatabaseFromV11(p.dbHandle)
default:
return fmt.Errorf("database version not handled: %v", dbVersion.Version)
}
return tx.Commit()
}
func (p *MySQLProvider) resetDatabase() error {
sql := strings.ReplaceAll(mysqlResetSQL, "{{schema_version}}", sqlTableSchemaVersion)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{folders_mapping}}", sqlTableFoldersMapping)
sql = strings.ReplaceAll(sql, "{{api_keys}}", sqlTableAPIKeys)
sql = strings.ReplaceAll(sql, "{{shares}}", sqlTableShares)
sql = strings.ReplaceAll(sql, "{{defender_events}}", sqlTableDefenderEvents)
sql = strings.ReplaceAll(sql, "{{defender_hosts}}", sqlTableDefenderHosts)
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, strings.Split(sql, ";"), 0)
}
func updateMySQLDatabaseFromV10(dbHandle *sql.DB) error {
if err := updateMySQLDatabaseFrom10To11(dbHandle); err != nil {
return err
}
return updateMySQLDatabaseFromV11(dbHandle)
}
func updateMySQLDatabaseFromV11(dbHandle *sql.DB) error {
if err := updateMySQLDatabaseFrom11To12(dbHandle); err != nil {
return err
}
return updateMySQLDatabaseFromV12(dbHandle)
}
func updateMySQLDatabaseFromV12(dbHandle *sql.DB) error {
if err := updateMySQLDatabaseFrom12To13(dbHandle); err != nil {
return err
}
return updateMySQLDatabaseFromV13(dbHandle)
}
func updateMySQLDatabaseFromV13(dbHandle *sql.DB) error {
if err := updateMySQLDatabaseFrom13To14(dbHandle); err != nil {
return err
}
return updateMySQLDatabaseFromV14(dbHandle)
}
func updateMySQLDatabaseFromV14(dbHandle *sql.DB) error {
return updateMySQLDatabaseFrom14To15(dbHandle)
}
func downgradeMySQLDatabaseFromV15(dbHandle *sql.DB) error {
if err := downgradeMySQLDatabaseFrom15To14(dbHandle); err != nil {
return err
}
return downgradeMySQLDatabaseFromV14(dbHandle)
}
func downgradeMySQLDatabaseFromV14(dbHandle *sql.DB) error {
if err := downgradeMySQLDatabaseFrom14To13(dbHandle); err != nil {
return err
}
return downgradeMySQLDatabaseFromV13(dbHandle)
}
func downgradeMySQLDatabaseFromV13(dbHandle *sql.DB) error {
if err := downgradeMySQLDatabaseFrom13To12(dbHandle); err != nil {
return err
}
return downgradeMySQLDatabaseFromV12(dbHandle)
}
func downgradeMySQLDatabaseFromV12(dbHandle *sql.DB) error {
if err := downgradeMySQLDatabaseFrom12To11(dbHandle); err != nil {
return err
}
return downgradeMySQLDatabaseFromV11(dbHandle)
}
func downgradeMySQLDatabaseFromV11(dbHandle *sql.DB) error {
return downgradeMySQLDatabaseFrom11To10(dbHandle)
}
func updateMySQLDatabaseFrom13To14(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 13 -> 14")
providerLog(logger.LevelInfo, "updating database version: 13 -> 14")
sql := strings.ReplaceAll(mysqlV14SQL, "{{shares}}", sqlTableShares)
sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 14)
}
func updateMySQLDatabaseFrom14To15(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 14 -> 15")
providerLog(logger.LevelInfo, "updating database version: 14 -> 15")
sql := strings.ReplaceAll(mysqlV15SQL, "{{defender_events}}", sqlTableDefenderEvents)
sql = strings.ReplaceAll(sql, "{{defender_hosts}}", sqlTableDefenderHosts)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 15)
}
func downgradeMySQLDatabaseFrom15To14(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 15 -> 14")
providerLog(logger.LevelInfo, "downgrading database version: 15 -> 14")
sql := strings.ReplaceAll(mysqlV15DownSQL, "{{defender_events}}", sqlTableDefenderEvents)
sql = strings.ReplaceAll(sql, "{{defender_hosts}}", sqlTableDefenderHosts)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 14)
}
func downgradeMySQLDatabaseFrom14To13(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 14 -> 13")
providerLog(logger.LevelInfo, "downgrading database version: 14 -> 13")
sql := strings.ReplaceAll(mysqlV14DownSQL, "{{shares}}", sqlTableShares)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 13)
}
func updateMySQLDatabaseFrom12To13(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 12 -> 13")
providerLog(logger.LevelInfo, "updating database version: 12 -> 13")
sql := strings.ReplaceAll(mysqlV13SQL, "{{users}}", sqlTableUsers)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 13)
}
func downgradeMySQLDatabaseFrom13To12(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 13 -> 12")
providerLog(logger.LevelInfo, "downgrading database version: 13 -> 12")
sql := strings.ReplaceAll(mysqlV13DownSQL, "{{users}}", sqlTableUsers)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 12)
}
func updateMySQLDatabaseFrom11To12(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 11 -> 12")
providerLog(logger.LevelInfo, "updating database version: 11 -> 12")
sql := strings.ReplaceAll(mysqlV12SQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 12)
}
func downgradeMySQLDatabaseFrom12To11(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 12 -> 11")
providerLog(logger.LevelInfo, "downgrading database version: 12 -> 11")
sql := strings.ReplaceAll(mysqlV12DownSQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 11)
}
func updateMySQLDatabaseFrom10To11(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 10 -> 11")
providerLog(logger.LevelInfo, "updating database version: 10 -> 11")
sql := strings.ReplaceAll(mysqlV11SQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql = strings.ReplaceAll(sql, "{{api_keys}}", sqlTableAPIKeys)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 11)
}
func downgradeMySQLDatabaseFrom11To10(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 11 -> 10")
providerLog(logger.LevelInfo, "downgrading database version: 11 -> 10")
sql := strings.ReplaceAll(mysqlV11DownSQL, "{{api_keys}}", sqlTableAPIKeys)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 10)
}
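
Every updateMySQLDatabaseFromXToY and downgrade helper above repeats the same pattern: expand the {{table}} placeholders, split the script on ";", and pass the statements with the target schema version to sqlCommonExecSQLAndUpdateDBVersion. A standalone sketch of the expansion step; the table names are illustrative:

package main

import (
	"fmt"
	"strings"
)

// expandPlaceholders mirrors the strings.ReplaceAll chains used by the
// migration helpers: each {{name}} placeholder becomes its configured
// table name, then the script is split into individual statements.
func expandPlaceholders(script string, tables map[string]string) []string {
	for placeholder, table := range tables {
		script = strings.ReplaceAll(script, "{{"+placeholder+"}}", table)
	}
	return strings.Split(script, ";")
}

func main() {
	script := "ALTER TABLE `{{users}}` ADD COLUMN `email` varchar(255) NULL;"
	for _, stmt := range expandPlaceholders(script, map[string]string{"users": "sftpgo_users"}) {
		if strings.TrimSpace(stmt) != "" {
			fmt.Println(stmt)
		}
	}
}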


@@ -0,0 +1,18 @@
//go:build nomysql
// +build nomysql
package dataprovider
import (
"errors"
"github.com/drakkan/sftpgo/v2/version"
)
func init() {
version.AddFeature("-mysql")
}
func initializeMySQLProvider() error {
return errors.New("MySQL disabled at build time")
}


@@ -1,23 +1,118 @@
//go:build !nopgsql
// +build !nopgsql
package dataprovider
import (
"context"
"crypto/x509"
"database/sql"
"errors"
"fmt"
"strings"
"time"
"github.com/drakkan/sftpgo/logger"
// we import lib/pq here to be able to disable PostgreSQL support using a build tag
_ "github.com/lib/pq"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/version"
"github.com/drakkan/sftpgo/v2/vfs"
)
const (
pgsqlUsersTableSQL = `CREATE TABLE "{{users}}" ("id" serial NOT NULL PRIMARY KEY, "username" varchar(255) NOT NULL UNIQUE,
"password" varchar(255) NULL, "public_keys" text NULL, "home_dir" varchar(255) NOT NULL, "uid" integer NOT NULL,
"gid" integer NOT NULL, "max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL,
"permissions" text NOT NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL,
"last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL,
"expiration_date" bigint NOT NULL, "last_login" bigint NOT NULL, "status" integer NOT NULL, "filters" text NULL,
"filesystem" text NULL);`
pgsqlSchemaTableSQL = `CREATE TABLE "schema_version" ("id" serial NOT NULL PRIMARY KEY, "version" integer NOT NULL);`
pgsqlUsersV2SQL = `ALTER TABLE "{{users}}" ADD COLUMN "virtual_folders" text NULL;`
pgsqlResetSQL = `DROP TABLE IF EXISTS "{{api_keys}}" CASCADE;
DROP TABLE IF EXISTS "{{folders_mapping}}" CASCADE;
DROP TABLE IF EXISTS "{{admins}}" CASCADE;
DROP TABLE IF EXISTS "{{folders}}" CASCADE;
DROP TABLE IF EXISTS "{{shares}}" CASCADE;
DROP TABLE IF EXISTS "{{users}}" CASCADE;
DROP TABLE IF EXISTS "{{defender_events}}" CASCADE;
DROP TABLE IF EXISTS "{{defender_hosts}}" CASCADE;
DROP TABLE IF EXISTS "{{schema_version}}" CASCADE;
`
pgsqlInitial = `CREATE TABLE "{{schema_version}}" ("id" serial NOT NULL PRIMARY KEY, "version" integer NOT NULL);
CREATE TABLE "{{admins}}" ("id" serial NOT NULL PRIMARY KEY, "username" varchar(255) NOT NULL UNIQUE,
"description" varchar(512) NULL, "password" varchar(255) NOT NULL, "email" varchar(255) NULL, "status" integer NOT NULL,
"permissions" text NOT NULL, "filters" text NULL, "additional_info" text NULL);
CREATE TABLE "{{folders}}" ("id" serial NOT NULL PRIMARY KEY, "name" varchar(255) NOT NULL UNIQUE, "description" varchar(512) NULL,
"path" varchar(512) NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL,
"filesystem" text NULL);
CREATE TABLE "{{users}}" ("id" serial NOT NULL PRIMARY KEY, "username" varchar(255) NOT NULL UNIQUE, "status" integer NOT NULL,
"expiration_date" bigint NOT NULL, "description" varchar(512) NULL, "password" text NULL, "public_keys" text NULL,
"home_dir" varchar(512) NOT NULL, "uid" integer NOT NULL, "gid" integer NOT NULL, "max_sessions" integer NOT NULL,
"quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "permissions" text NOT NULL, "used_quota_size" bigint NOT NULL,
"used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL,
"download_bandwidth" integer NOT NULL, "last_login" bigint NOT NULL, "filters" text NULL, "filesystem" text NULL,
"additional_info" text NULL);
CREATE TABLE "{{folders_mapping}}" ("id" serial NOT NULL PRIMARY KEY, "virtual_path" varchar(512) NOT NULL,
"quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "folder_id" integer NOT NULL, "user_id" integer NOT NULL);
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "{{prefix}}unique_mapping" UNIQUE ("user_id", "folder_id");
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "{{prefix}}folders_mapping_folder_id_fk_folders_id"
FOREIGN KEY ("folder_id") REFERENCES "{{folders}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "{{prefix}}folders_mapping_user_id_fk_users_id"
FOREIGN KEY ("user_id") REFERENCES "{{users}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
CREATE INDEX "{{prefix}}folders_mapping_folder_id_idx" ON "{{folders_mapping}}" ("folder_id");
CREATE INDEX "{{prefix}}folders_mapping_user_id_idx" ON "{{folders_mapping}}" ("user_id");
INSERT INTO {{schema_version}} (version) VALUES (10);
`
pgsqlV11SQL = `CREATE TABLE "{{api_keys}}" ("id" serial NOT NULL PRIMARY KEY, "name" varchar(255) NOT NULL,
"key_id" varchar(50) NOT NULL UNIQUE, "api_key" varchar(255) NOT NULL UNIQUE, "scope" integer NOT NULL,
"created_at" bigint NOT NULL, "updated_at" bigint NOT NULL, "last_use_at" bigint NOT NULL,"expires_at" bigint NOT NULL,
"description" text NULL, "admin_id" integer NULL, "user_id" integer NULL);
ALTER TABLE "{{api_keys}}" ADD CONSTRAINT "{{prefix}}api_keys_admin_id_fk_admins_id" FOREIGN KEY ("admin_id")
REFERENCES "{{admins}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
ALTER TABLE "{{api_keys}}" ADD CONSTRAINT "{{prefix}}api_keys_user_id_fk_users_id" FOREIGN KEY ("user_id")
REFERENCES "{{users}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
CREATE INDEX "{{prefix}}api_keys_admin_id_idx" ON "{{api_keys}}" ("admin_id");
CREATE INDEX "{{prefix}}api_keys_user_id_idx" ON "{{api_keys}}" ("user_id");
`
pgsqlV11DownSQL = `DROP TABLE "{{api_keys}}" CASCADE;`
pgsqlV12SQL = `ALTER TABLE "{{admins}}" ADD COLUMN "created_at" bigint DEFAULT 0 NOT NULL;
ALTER TABLE "{{admins}}" ALTER COLUMN "created_at" DROP DEFAULT;
ALTER TABLE "{{admins}}" ADD COLUMN "updated_at" bigint DEFAULT 0 NOT NULL;
ALTER TABLE "{{admins}}" ALTER COLUMN "updated_at" DROP DEFAULT;
ALTER TABLE "{{admins}}" ADD COLUMN "last_login" bigint DEFAULT 0 NOT NULL;
ALTER TABLE "{{admins}}" ALTER COLUMN "last_login" DROP DEFAULT;
ALTER TABLE "{{users}}" ADD COLUMN "created_at" bigint DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ALTER COLUMN "created_at" DROP DEFAULT;
ALTER TABLE "{{users}}" ADD COLUMN "updated_at" bigint DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ALTER COLUMN "updated_at" DROP DEFAULT;
CREATE INDEX "{{prefix}}users_updated_at_idx" ON "{{users}}" ("updated_at");
`
pgsqlV12DownSQL = `ALTER TABLE "{{users}}" DROP COLUMN "updated_at" CASCADE;
ALTER TABLE "{{users}}" DROP COLUMN "created_at" CASCADE;
ALTER TABLE "{{admins}}" DROP COLUMN "created_at" CASCADE;
ALTER TABLE "{{admins}}" DROP COLUMN "updated_at" CASCADE;
ALTER TABLE "{{admins}}" DROP COLUMN "last_login" CASCADE;
`
pgsqlV13SQL = `ALTER TABLE "{{users}}" ADD COLUMN "email" varchar(255) NULL;`
pgsqlV13DownSQL = `ALTER TABLE "{{users}}" DROP COLUMN "email" CASCADE;`
pgsqlV14SQL = `CREATE TABLE "{{shares}}" ("id" serial NOT NULL PRIMARY KEY,
"share_id" varchar(60) NOT NULL UNIQUE, "name" varchar(255) NOT NULL, "description" varchar(512) NULL,
"scope" integer NOT NULL, "paths" text NOT NULL, "created_at" bigint NOT NULL, "updated_at" bigint NOT NULL,
"last_use_at" bigint NOT NULL, "expires_at" bigint NOT NULL, "password" text NULL,
"max_tokens" integer NOT NULL, "used_tokens" integer NOT NULL, "allow_from" text NULL,
"user_id" integer NOT NULL);
ALTER TABLE "{{shares}}" ADD CONSTRAINT "{{prefix}}shares_user_id_fk_users_id" FOREIGN KEY ("user_id")
REFERENCES "{{users}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
CREATE INDEX "{{prefix}}shares_user_id_idx" ON "{{shares}}" ("user_id");
`
pgsqlV14DownSQL = `DROP TABLE "{{shares}}" CASCADE;`
pgsqlV15SQL = `CREATE TABLE "{{defender_hosts}}" ("id" bigserial NOT NULL PRIMARY KEY, "ip" varchar(50) NOT NULL UNIQUE,
"ban_time" bigint NOT NULL, "updated_at" bigint NOT NULL);
CREATE TABLE "{{defender_events}}" ("id" bigserial NOT NULL PRIMARY KEY, "date_time" bigint NOT NULL, "score" integer NOT NULL,
"host_id" bigint NOT NULL);
ALTER TABLE "{{defender_events}}" ADD CONSTRAINT "{{prefix}}defender_events_host_id_fk_defender_hosts_id" FOREIGN KEY
("host_id") REFERENCES "{{defender_hosts}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
CREATE INDEX "{{prefix}}defender_hosts_updated_at_idx" ON "{{defender_hosts}}" ("updated_at");
CREATE INDEX "{{prefix}}defender_hosts_ban_time_idx" ON "{{defender_hosts}}" ("ban_time");
CREATE INDEX "{{prefix}}defender_events_date_time_idx" ON "{{defender_events}}" ("date_time");
CREATE INDEX "{{prefix}}defender_events_host_id_idx" ON "{{defender_events}}" ("host_id");
`
pgsqlV15DownSQL = `DROP TABLE "{{defender_events}}" CASCADE;
DROP TABLE "{{defender_hosts}}" CASCADE;
`
)
// PGSQLProvider is the auth provider for PostgreSQL databases
@@ -25,17 +120,26 @@ type PGSQLProvider struct {
dbHandle *sql.DB
}
func init() {
version.AddFeature("+pgsql")
}
func initializePGSQLProvider() error {
var err error
logSender = PGSQLDataProviderName
dbHandle, err := sql.Open("postgres", getPGSQLConnectionString(false))
if err == nil {
providerLog(logger.LevelDebug, "postgres database handle created, connection string: %#v, pool size: %v",
getPGSQLConnectionString(true), config.PoolSize)
dbHandle.SetMaxOpenConns(config.PoolSize)
provider = PGSQLProvider{dbHandle: dbHandle}
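// note: when no pool size is configured, 2 mirrors database/sql's default MaxIdleConns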
if config.PoolSize > 0 {
dbHandle.SetMaxIdleConns(config.PoolSize)
} else {
dbHandle.SetMaxIdleConns(2)
}
dbHandle.SetConnMaxLifetime(240 * time.Second)
provider = &PGSQLProvider{dbHandle: dbHandle}
} else {
providerLog(logger.LevelWarn, "error creating postgres database handler, connection string: %#v, error: %v",
providerLog(logger.LevelError, "error creating postgres database handler, connection string: %#v, error: %v",
getPGSQLConnectionString(true), err)
}
return err
@@ -43,7 +147,7 @@ func initializePGSQLProvider() error {
func getPGSQLConnectionString(redactedPwd bool) string {
var connectionString string
if len(config.ConnectionString) == 0 {
if config.ConnectionString == "" {
password := config.Password
if redactedPwd {
password = "[redacted]"
@@ -56,122 +160,471 @@ func getPGSQLConnectionString(redactedPwd bool) string {
return connectionString
}
func (p PGSQLProvider) checkAvailability() error {
func (p *PGSQLProvider) checkAvailability() error {
return sqlCommonCheckAvailability(p.dbHandle)
}
func (p PGSQLProvider) validateUserAndPass(username string, password string) (User, error) {
return sqlCommonValidateUserAndPass(username, password, p.dbHandle)
func (p *PGSQLProvider) validateUserAndPass(username, password, ip, protocol string) (User, error) {
return sqlCommonValidateUserAndPass(username, password, ip, protocol, p.dbHandle)
}
func (p PGSQLProvider) validateUserAndPubKey(username string, publicKey string) (User, string, error) {
func (p *PGSQLProvider) validateUserAndTLSCert(username, protocol string, tlsCert *x509.Certificate) (User, error) {
return sqlCommonValidateUserAndTLSCertificate(username, protocol, tlsCert, p.dbHandle)
}
func (p *PGSQLProvider) validateUserAndPubKey(username string, publicKey []byte) (User, string, error) {
return sqlCommonValidateUserAndPubKey(username, publicKey, p.dbHandle)
}
func (p PGSQLProvider) getUserByID(ID int64) (User, error) {
return sqlCommonGetUserByID(ID, p.dbHandle)
}
func (p PGSQLProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
func (p *PGSQLProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
return sqlCommonUpdateQuota(username, filesAdd, sizeAdd, reset, p.dbHandle)
}
func (p PGSQLProvider) updateLastLogin(username string) error {
return sqlCommonUpdateLastLogin(username, p.dbHandle)
}
func (p PGSQLProvider) getUsedQuota(username string) (int, int64, error) {
func (p *PGSQLProvider) getUsedQuota(username string) (int, int64, error) {
return sqlCommonGetUsedQuota(username, p.dbHandle)
}
func (p PGSQLProvider) userExists(username string) (User, error) {
return sqlCommonCheckUserExists(username, p.dbHandle)
func (p *PGSQLProvider) setUpdatedAt(username string) {
sqlCommonSetUpdatedAt(username, p.dbHandle)
}
func (p PGSQLProvider) addUser(user User) error {
func (p *PGSQLProvider) updateLastLogin(username string) error {
return sqlCommonUpdateLastLogin(username, p.dbHandle)
}
func (p *PGSQLProvider) updateAdminLastLogin(username string) error {
return sqlCommonUpdateAdminLastLogin(username, p.dbHandle)
}
func (p *PGSQLProvider) userExists(username string) (User, error) {
return sqlCommonGetUserByUsername(username, p.dbHandle)
}
func (p *PGSQLProvider) addUser(user *User) error {
return sqlCommonAddUser(user, p.dbHandle)
}
func (p PGSQLProvider) updateUser(user User) error {
func (p *PGSQLProvider) updateUser(user *User) error {
return sqlCommonUpdateUser(user, p.dbHandle)
}
func (p PGSQLProvider) deleteUser(user User) error {
func (p *PGSQLProvider) deleteUser(user *User) error {
return sqlCommonDeleteUser(user, p.dbHandle)
}
func (p PGSQLProvider) dumpUsers() ([]User, error) {
func (p *PGSQLProvider) dumpUsers() ([]User, error) {
return sqlCommonDumpUsers(p.dbHandle)
}
func (p PGSQLProvider) getUsers(limit int, offset int, order string, username string) ([]User, error) {
return sqlCommonGetUsers(limit, offset, order, username, p.dbHandle)
func (p *PGSQLProvider) getRecentlyUpdatedUsers(after int64) ([]User, error) {
return sqlCommonGetRecentlyUpdatedUsers(after, p.dbHandle)
}
func (p PGSQLProvider) close() error {
func (p *PGSQLProvider) getUsers(limit int, offset int, order string) ([]User, error) {
return sqlCommonGetUsers(limit, offset, order, p.dbHandle)
}
func (p *PGSQLProvider) dumpFolders() ([]vfs.BaseVirtualFolder, error) {
return sqlCommonDumpFolders(p.dbHandle)
}
func (p *PGSQLProvider) getFolders(limit, offset int, order string) ([]vfs.BaseVirtualFolder, error) {
return sqlCommonGetFolders(limit, offset, order, p.dbHandle)
}
func (p *PGSQLProvider) getFolderByName(name string) (vfs.BaseVirtualFolder, error) {
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
return sqlCommonGetFolderByName(ctx, name, p.dbHandle)
}
func (p *PGSQLProvider) addFolder(folder *vfs.BaseVirtualFolder) error {
return sqlCommonAddFolder(folder, p.dbHandle)
}
func (p *PGSQLProvider) updateFolder(folder *vfs.BaseVirtualFolder) error {
return sqlCommonUpdateFolder(folder, p.dbHandle)
}
func (p *PGSQLProvider) deleteFolder(folder *vfs.BaseVirtualFolder) error {
return sqlCommonDeleteFolder(folder, p.dbHandle)
}
func (p *PGSQLProvider) updateFolderQuota(name string, filesAdd int, sizeAdd int64, reset bool) error {
return sqlCommonUpdateFolderQuota(name, filesAdd, sizeAdd, reset, p.dbHandle)
}
func (p *PGSQLProvider) getUsedFolderQuota(name string) (int, int64, error) {
return sqlCommonGetFolderUsedQuota(name, p.dbHandle)
}
func (p *PGSQLProvider) adminExists(username string) (Admin, error) {
return sqlCommonGetAdminByUsername(username, p.dbHandle)
}
func (p *PGSQLProvider) addAdmin(admin *Admin) error {
return sqlCommonAddAdmin(admin, p.dbHandle)
}
func (p *PGSQLProvider) updateAdmin(admin *Admin) error {
return sqlCommonUpdateAdmin(admin, p.dbHandle)
}
func (p *PGSQLProvider) deleteAdmin(admin *Admin) error {
return sqlCommonDeleteAdmin(admin, p.dbHandle)
}
func (p *PGSQLProvider) getAdmins(limit int, offset int, order string) ([]Admin, error) {
return sqlCommonGetAdmins(limit, offset, order, p.dbHandle)
}
func (p *PGSQLProvider) dumpAdmins() ([]Admin, error) {
return sqlCommonDumpAdmins(p.dbHandle)
}
func (p *PGSQLProvider) validateAdminAndPass(username, password, ip string) (Admin, error) {
return sqlCommonValidateAdminAndPass(username, password, ip, p.dbHandle)
}
func (p *PGSQLProvider) apiKeyExists(keyID string) (APIKey, error) {
return sqlCommonGetAPIKeyByID(keyID, p.dbHandle)
}
func (p *PGSQLProvider) addAPIKey(apiKey *APIKey) error {
return sqlCommonAddAPIKey(apiKey, p.dbHandle)
}
func (p *PGSQLProvider) updateAPIKey(apiKey *APIKey) error {
return sqlCommonUpdateAPIKey(apiKey, p.dbHandle)
}
func (p *PGSQLProvider) deleteAPIKey(apiKey *APIKey) error {
return sqlCommonDeleteAPIKey(apiKey, p.dbHandle)
}
func (p *PGSQLProvider) getAPIKeys(limit int, offset int, order string) ([]APIKey, error) {
return sqlCommonGetAPIKeys(limit, offset, order, p.dbHandle)
}
func (p *PGSQLProvider) dumpAPIKeys() ([]APIKey, error) {
return sqlCommonDumpAPIKeys(p.dbHandle)
}
func (p *PGSQLProvider) updateAPIKeyLastUse(keyID string) error {
return sqlCommonUpdateAPIKeyLastUse(keyID, p.dbHandle)
}
func (p *PGSQLProvider) shareExists(shareID, username string) (Share, error) {
return sqlCommonGetShareByID(shareID, username, p.dbHandle)
}
func (p *PGSQLProvider) addShare(share *Share) error {
return sqlCommonAddShare(share, p.dbHandle)
}
func (p *PGSQLProvider) updateShare(share *Share) error {
return sqlCommonUpdateShare(share, p.dbHandle)
}
func (p *PGSQLProvider) deleteShare(share *Share) error {
return sqlCommonDeleteShare(share, p.dbHandle)
}
func (p *PGSQLProvider) getShares(limit int, offset int, order, username string) ([]Share, error) {
return sqlCommonGetShares(limit, offset, order, username, p.dbHandle)
}
func (p *PGSQLProvider) dumpShares() ([]Share, error) {
return sqlCommonDumpShares(p.dbHandle)
}
func (p *PGSQLProvider) updateShareLastUse(shareID string, numTokens int) error {
return sqlCommonUpdateShareLastUse(shareID, numTokens, p.dbHandle)
}
func (p *PGSQLProvider) getDefenderHosts(from int64, limit int) ([]*DefenderEntry, error) {
return sqlCommonGetDefenderHosts(from, limit, p.dbHandle)
}
func (p *PGSQLProvider) getDefenderHostByIP(ip string, from int64) (*DefenderEntry, error) {
return sqlCommonGetDefenderHostByIP(ip, from, p.dbHandle)
}
func (p *PGSQLProvider) isDefenderHostBanned(ip string) (*DefenderEntry, error) {
return sqlCommonIsDefenderHostBanned(ip, p.dbHandle)
}
func (p *PGSQLProvider) updateDefenderBanTime(ip string, minutes int) error {
return sqlCommonDefenderIncrementBanTime(ip, minutes, p.dbHandle)
}
func (p *PGSQLProvider) deleteDefenderHost(ip string) error {
return sqlCommonDeleteDefenderHost(ip, p.dbHandle)
}
func (p *PGSQLProvider) addDefenderEvent(ip string, score int) error {
return sqlCommonAddDefenderHostAndEvent(ip, score, p.dbHandle)
}
func (p *PGSQLProvider) setDefenderBanTime(ip string, banTime int64) error {
return sqlCommonSetDefenderBanTime(ip, banTime, p.dbHandle)
}
func (p *PGSQLProvider) cleanupDefender(from int64) error {
return sqlCommonDefenderCleanup(from, p.dbHandle)
}
func (p *PGSQLProvider) close() error {
return p.dbHandle.Close()
}
func (p PGSQLProvider) reloadConfig() error {
func (p *PGSQLProvider) reloadConfig() error {
return nil
}
// initializeDatabase creates the initial database structure
func (p PGSQLProvider) initializeDatabase() error {
sqlUsers := strings.Replace(pgsqlUsersTableSQL, "{{users}}", config.UsersTable, 1)
tx, err := p.dbHandle.Begin()
if err != nil {
return err
func (p *PGSQLProvider) initializeDatabase() error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, false)
if err == nil && dbVersion.Version > 0 {
return ErrNoInitRequired
}
_, err = tx.Exec(sqlUsers)
if err != nil {
tx.Rollback()
return err
if errors.Is(err, sql.ErrNoRows) {
return errSchemaVersionEmpty
}
_, err = tx.Exec(pgsqlSchemaTableSQL)
if err != nil {
tx.Rollback()
return err
initialSQL := strings.ReplaceAll(pgsqlInitial, "{{schema_version}}", sqlTableSchemaVersion)
initialSQL = strings.ReplaceAll(initialSQL, "{{admins}}", sqlTableAdmins)
initialSQL = strings.ReplaceAll(initialSQL, "{{folders}}", sqlTableFolders)
initialSQL = strings.ReplaceAll(initialSQL, "{{users}}", sqlTableUsers)
initialSQL = strings.ReplaceAll(initialSQL, "{{folders_mapping}}", sqlTableFoldersMapping)
initialSQL = strings.ReplaceAll(initialSQL, "{{prefix}}", config.SQLTablesPrefix)
if config.Driver == CockroachDataProviderName {
// CockroachDB does not support deferrable constraint validation; we don't
// need it, but we keep these definitions for the PostgreSQL driver to avoid
// schema changes for users upgrading from old SFTPGo versions
initialSQL = strings.ReplaceAll(initialSQL, "DEFERRABLE INITIALLY DEFERRED", "")
}
_, err = tx.Exec(initialDBVersionSQL)
if err != nil {
tx.Rollback()
return err
}
return tx.Commit()
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, []string{initialSQL}, 10)
}
func (p PGSQLProvider) migrateDatabase() error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle)
//nolint:dupl
func (p *PGSQLProvider) migrateDatabase() error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
if err != nil {
return err
}
if dbVersion.Version == sqlDatabaseVersion {
providerLog(logger.LevelDebug, "sql database is updated, current version: %v", dbVersion.Version)
return nil
switch version := dbVersion.Version; {
case version == sqlDatabaseVersion:
providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", version)
return ErrNoInitRequired
case version < 10:
err = fmt.Errorf("database version %v is too old, please see the upgrading docs", version)
providerLog(logger.LevelError, "%v", err)
logger.ErrorToConsole("%v", err)
return err
case version == 10:
return updatePGSQLDatabaseFromV10(p.dbHandle)
case version == 11:
return updatePGSQLDatabaseFromV11(p.dbHandle)
case version == 12:
return updatePGSQLDatabaseFromV12(p.dbHandle)
case version == 13:
return updatePGSQLDatabaseFromV13(p.dbHandle)
case version == 14:
return updatePGSQLDatabaseFromV14(p.dbHandle)
default:
if version > sqlDatabaseVersion {
providerLog(logger.LevelError, "database version %v is newer than the supported one: %v", version,
sqlDatabaseVersion)
logger.WarnToConsole("database version %v is newer than the supported one: %v", version,
sqlDatabaseVersion)
return nil
}
return fmt.Errorf("database version not handled: %v", version)
}
if dbVersion.Version == 1 {
return updatePGSQLDatabaseFrom1To2(p.dbHandle)
}
return nil
}
func updatePGSQLDatabaseFrom1To2(dbHandle *sql.DB) error {
providerLog(logger.LevelInfo, "updating database version: 1 -> 2")
sql := strings.Replace(pgsqlUsersV2SQL, "{{users}}", config.UsersTable, 1)
tx, err := dbHandle.Begin()
func (p *PGSQLProvider) revertDatabase(targetVersion int) error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
if err != nil {
return err
}
_, err = tx.Exec(sql)
if err != nil {
tx.Rollback()
return err
if dbVersion.Version == targetVersion {
return errors.New("current version match target version, nothing to do")
}
err = sqlCommonUpdateDatabaseVersionWithTX(tx, 2)
if err != nil {
tx.Rollback()
return err
switch dbVersion.Version {
case 15:
return downgradePGSQLDatabaseFromV15(p.dbHandle)
case 14:
return downgradePGSQLDatabaseFromV14(p.dbHandle)
case 13:
return downgradePGSQLDatabaseFromV13(p.dbHandle)
case 12:
return downgradePGSQLDatabaseFromV12(p.dbHandle)
case 11:
return downgradePGSQLDatabaseFromV11(p.dbHandle)
default:
return fmt.Errorf("database version not handled: %v", dbVersion.Version)
}
return tx.Commit()
}
func (p *PGSQLProvider) resetDatabase() error {
sql := strings.ReplaceAll(pgsqlResetSQL, "{{schema_version}}", sqlTableSchemaVersion)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{folders_mapping}}", sqlTableFoldersMapping)
sql = strings.ReplaceAll(sql, "{{api_keys}}", sqlTableAPIKeys)
sql = strings.ReplaceAll(sql, "{{shares}}", sqlTableShares)
sql = strings.ReplaceAll(sql, "{{defender_events}}", sqlTableDefenderEvents)
sql = strings.ReplaceAll(sql, "{{defender_hosts}}", sqlTableDefenderHosts)
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, []string{sql}, 0)
}
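// The helpers below chain single-step migrations: updatePGSQLDatabaseFromVn
// applies the n -> n+1 step and then delegates to updatePGSQLDatabaseFromV(n+1),
// so a database at any supported version is walked forward one step at a time;
// the downgrade helpers mirror the same pattern in reverse.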
func updatePGSQLDatabaseFromV10(dbHandle *sql.DB) error {
if err := updatePGSQLDatabaseFrom10To11(dbHandle); err != nil {
return err
}
return updatePGSQLDatabaseFromV11(dbHandle)
}
func updatePGSQLDatabaseFromV11(dbHandle *sql.DB) error {
if err := updatePGSQLDatabaseFrom11To12(dbHandle); err != nil {
return err
}
return updatePGSQLDatabaseFromV12(dbHandle)
}
func updatePGSQLDatabaseFromV12(dbHandle *sql.DB) error {
if err := updatePGSQLDatabaseFrom12To13(dbHandle); err != nil {
return err
}
return updatePGSQLDatabaseFromV13(dbHandle)
}
func updatePGSQLDatabaseFromV13(dbHandle *sql.DB) error {
if err := updatePGSQLDatabaseFrom13To14(dbHandle); err != nil {
return err
}
return updatePGSQLDatabaseFromV14(dbHandle)
}
func updatePGSQLDatabaseFromV14(dbHandle *sql.DB) error {
return updatePGSQLDatabaseFrom14To15(dbHandle)
}
func downgradePGSQLDatabaseFromV15(dbHandle *sql.DB) error {
if err := downgradePGSQLDatabaseFrom15To14(dbHandle); err != nil {
return err
}
return downgradePGSQLDatabaseFromV14(dbHandle)
}
func downgradePGSQLDatabaseFromV14(dbHandle *sql.DB) error {
if err := downgradePGSQLDatabaseFrom14To13(dbHandle); err != nil {
return err
}
return downgradePGSQLDatabaseFromV13(dbHandle)
}
func downgradePGSQLDatabaseFromV13(dbHandle *sql.DB) error {
if err := downgradePGSQLDatabaseFrom13To12(dbHandle); err != nil {
return err
}
return downgradePGSQLDatabaseFromV12(dbHandle)
}
func downgradePGSQLDatabaseFromV12(dbHandle *sql.DB) error {
if err := downgradePGSQLDatabaseFrom12To11(dbHandle); err != nil {
return err
}
return downgradePGSQLDatabaseFromV11(dbHandle)
}
func downgradePGSQLDatabaseFromV11(dbHandle *sql.DB) error {
return downgradePGSQLDatabaseFrom11To10(dbHandle)
}
func updatePGSQLDatabaseFrom13To14(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 13 -> 14")
providerLog(logger.LevelInfo, "updating database version: 13 -> 14")
sql := strings.ReplaceAll(pgsqlV14SQL, "{{shares}}", sqlTableShares)
sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 14)
}
func updatePGSQLDatabaseFrom14To15(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 14 -> 15")
providerLog(logger.LevelInfo, "updating database version: 14 -> 15")
sql := strings.ReplaceAll(pgsqlV15SQL, "{{defender_events}}", sqlTableDefenderEvents)
sql = strings.ReplaceAll(sql, "{{defender_hosts}}", sqlTableDefenderHosts)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 15)
}
func downgradePGSQLDatabaseFrom15To14(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 15 -> 14")
providerLog(logger.LevelInfo, "downgrading database version: 15 -> 14")
sql := strings.ReplaceAll(pgsqlV15DownSQL, "{{defender_events}}", sqlTableDefenderEvents)
sql = strings.ReplaceAll(sql, "{{defender_hosts}}", sqlTableDefenderHosts)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 14)
}
func downgradePGSQLDatabaseFrom14To13(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 14 -> 13")
providerLog(logger.LevelInfo, "downgrading database version: 14 -> 13")
sql := strings.ReplaceAll(pgsqlV14DownSQL, "{{shares}}", sqlTableShares)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 13)
}
func updatePGSQLDatabaseFrom12To13(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 12 -> 13")
providerLog(logger.LevelInfo, "updating database version: 12 -> 13")
sql := strings.ReplaceAll(pgsqlV13SQL, "{{users}}", sqlTableUsers)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 13)
}
func downgradePGSQLDatabaseFrom13To12(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 13 -> 12")
providerLog(logger.LevelInfo, "downgrading database version: 13 -> 12")
sql := strings.ReplaceAll(pgsqlV13DownSQL, "{{users}}", sqlTableUsers)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 12)
}
func updatePGSQLDatabaseFrom11To12(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 11 -> 12")
providerLog(logger.LevelInfo, "updating database version: 11 -> 12")
sql := strings.ReplaceAll(pgsqlV12SQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 12)
}
func downgradePGSQLDatabaseFrom12To11(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 12 -> 11")
providerLog(logger.LevelInfo, "downgrading database version: 12 -> 11")
sql := strings.ReplaceAll(pgsqlV12DownSQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 11)
}
func updatePGSQLDatabaseFrom10To11(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 10 -> 11")
providerLog(logger.LevelInfo, "updating database version: 10 -> 11")
sql := strings.ReplaceAll(pgsqlV11SQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql = strings.ReplaceAll(sql, "{{api_keys}}", sqlTableAPIKeys)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 11)
}
func downgradePGSQLDatabaseFrom11To10(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 11 -> 10")
providerLog(logger.LevelInfo, "downgrading database version: 11 -> 10")
sql := strings.ReplaceAll(pgsqlV11DownSQL, "{{api_keys}}", sqlTableAPIKeys)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 10)
}


@@ -0,0 +1,18 @@
//go:build nopgsql
// +build nopgsql
package dataprovider
import (
"errors"
"github.com/drakkan/sftpgo/v2/version"
)
func init() {
version.AddFeature("-pgsql")
}
func initializePGSQLProvider() error {
return errors.New("PostgreSQL disabled at build time")
}


@@ -0,0 +1,183 @@
package dataprovider
import (
"sync"
"time"
"github.com/drakkan/sftpgo/v2/logger"
)
var delayedQuotaUpdater quotaUpdater
func init() {
delayedQuotaUpdater = newQuotaUpdater()
}
type quotaObject struct {
size int64
files int
}
type quotaUpdater struct {
paramsMutex sync.RWMutex
waitTime time.Duration
sync.RWMutex
pendingUserQuotaUpdates map[string]quotaObject
pendingFolderQuotaUpdates map[string]quotaObject
}
func newQuotaUpdater() quotaUpdater {
return quotaUpdater{
pendingUserQuotaUpdates: make(map[string]quotaObject),
pendingFolderQuotaUpdates: make(map[string]quotaObject),
}
}
func (q *quotaUpdater) start() {
q.setWaitTime(config.DelayedQuotaUpdate)
go q.loop()
}
func (q *quotaUpdater) loop() {
waitTime := q.getWaitTime()
providerLog(logger.LevelDebug, "delayed quota update loop started, wait time: %v", waitTime)
for waitTime > 0 {
// We do this with a time.Sleep instead of a time.Ticker because we don't know
// how long each quota processing cycle will take, and we want to make sure
// we wait the configured number of seconds between iterations
time.Sleep(waitTime)
providerLog(logger.LevelDebug, "delayed quota update check start")
q.storeUsersQuota()
q.storeFoldersQuota()
providerLog(logger.LevelDebug, "delayed quota update check end")
waitTime = q.getWaitTime()
}
providerLog(logger.LevelDebug, "delayed quota update loop ended, wait time: %v", waitTime)
}
func (q *quotaUpdater) setWaitTime(secs int) {
q.paramsMutex.Lock()
defer q.paramsMutex.Unlock()
q.waitTime = time.Duration(secs) * time.Second
}
func (q *quotaUpdater) getWaitTime() time.Duration {
q.paramsMutex.RLock()
defer q.paramsMutex.RUnlock()
return q.waitTime
}
func (q *quotaUpdater) resetUserQuota(username string) {
q.Lock()
defer q.Unlock()
delete(q.pendingUserQuotaUpdates, username)
}
func (q *quotaUpdater) updateUserQuota(username string, files int, size int64) {
q.Lock()
defer q.Unlock()
obj := q.pendingUserQuotaUpdates[username]
obj.size += size
obj.files += files
if obj.files == 0 && obj.size == 0 {
delete(q.pendingUserQuotaUpdates, username)
return
}
q.pendingUserQuotaUpdates[username] = obj
}
func (q *quotaUpdater) getUserPendingQuota(username string) (int, int64) {
q.RLock()
defer q.RUnlock()
obj := q.pendingUserQuotaUpdates[username]
return obj.files, obj.size
}
func (q *quotaUpdater) resetFolderQuota(name string) {
q.Lock()
defer q.Unlock()
delete(q.pendingFolderQuotaUpdates, name)
}
func (q *quotaUpdater) updateFolderQuota(name string, files int, size int64) {
q.Lock()
defer q.Unlock()
obj := q.pendingFolderQuotaUpdates[name]
obj.size += size
obj.files += files
if obj.files == 0 && obj.size == 0 {
delete(q.pendingFolderQuotaUpdates, name)
return
}
q.pendingFolderQuotaUpdates[name] = obj
}
func (q *quotaUpdater) getFolderPendingQuota(name string) (int, int64) {
q.RLock()
defer q.RUnlock()
obj := q.pendingFolderQuotaUpdates[name]
return obj.files, obj.size
}
func (q *quotaUpdater) getUsernames() []string {
q.RLock()
defer q.RUnlock()
result := make([]string, 0, len(q.pendingUserQuotaUpdates))
for username := range q.pendingUserQuotaUpdates {
result = append(result, username)
}
return result
}
func (q *quotaUpdater) getFoldernames() []string {
q.RLock()
defer q.RUnlock()
result := make([]string, 0, len(q.pendingFolderQuotaUpdates))
for name := range q.pendingFolderQuotaUpdates {
result = append(result, name)
}
return result
}
func (q *quotaUpdater) storeUsersQuota() {
for _, username := range q.getUsernames() {
files, size := q.getUserPendingQuota(username)
if size != 0 || files != 0 {
err := provider.updateQuota(username, files, size, false)
if err != nil {
providerLog(logger.LevelWarn, "unable to update quota delayed for user %#v: %v", username, err)
continue
}
q.updateUserQuota(username, -files, -size)
}
}
}
func (q *quotaUpdater) storeFoldersQuota() {
for _, name := range q.getFoldernames() {
files, size := q.getFolderPendingQuota(name)
if size != 0 || files != 0 {
err := provider.updateFolderQuota(name, files, size, false)
if err != nil {
providerLog(logger.LevelWarn, "unable to update quota delayed for folder %#v: %v", name, err)
continue
}
q.updateFolderQuota(name, -files, -size)
}
}
}
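// Hypothetical usage sketch, assuming package dataprovider: quota deltas are
// accumulated in memory and flushed by the background loop; opposing deltas
// cancel out before they ever reach the database.
func exampleDelayedQuotaUpdate() {
	// an upload queues +1 file / +1024 bytes for "alice"
	delayedQuotaUpdater.updateUserQuota("alice", 1, 1024)
	// deleting the same file cancels the pending entry entirely
	delayedQuotaUpdater.updateUserQuota("alice", -1, -1024)
	files, size := delayedQuotaUpdater.getUserPendingQuota("alice")
	_, _ = files, size // both are now 0
}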

dataprovider/share.go

@@ -0,0 +1,306 @@
package dataprovider
import (
"encoding/json"
"fmt"
"net"
"strings"
"time"
"github.com/alexedwards/argon2id"
"golang.org/x/crypto/bcrypt"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
)
// ShareScope defines the supported share scopes
type ShareScope int
// Supported share scopes
const (
ShareScopeRead ShareScope = iota + 1
ShareScopeWrite
)
const (
redactedPassword = "[**redacted**]"
)
// Share defines files and/or directories shared with external users
type Share struct {
// Database unique identifier
ID int64 `json:"-"`
// Unique ID used to access this object
ShareID string `json:"id"`
Name string `json:"name"`
Description string `json:"description,omitempty"`
Scope ShareScope `json:"scope"`
// Paths to files or directories; for ShareScopeWrite it must be exactly one directory
Paths []string `json:"paths"`
// Username who shared this object
Username string `json:"username"`
CreatedAt int64 `json:"created_at"`
UpdatedAt int64 `json:"updated_at"`
// 0 means never used
LastUseAt int64 `json:"last_use_at,omitempty"`
// ExpiresAt is the expiration date/time as a Unix timestamp in milliseconds; 0 means no expiration
ExpiresAt int64 `json:"expires_at,omitempty"`
// Optional password to protect the share
Password string `json:"password"`
// Limit the available access tokens; 0 means no limit
MaxTokens int `json:"max_tokens,omitempty"`
// Used tokens
UsedTokens int `json:"used_tokens,omitempty"`
// Limit the share availability to these IPs/CIDR networks
AllowFrom []string `json:"allow_from,omitempty"`
// Set for restores: we skip expiration date validation, otherwise we would
// fail to restore existing shares, and we insert all the previous values
// unmodified
IsRestore bool `json:"-"`
}
// GetScopeAsString returns the share's scope as a string.
// Used in web pages
func (s *Share) GetScopeAsString() string {
switch s.Scope {
case ShareScopeRead:
return "Read"
default:
return "Write"
}
}
// IsExpired returns true if the share is expired
func (s *Share) IsExpired() bool {
if s.ExpiresAt > 0 {
return s.ExpiresAt < util.GetTimeAsMsSinceEpoch(time.Now())
}
return false
}
// GetInfoString returns share's info as string.
func (s *Share) GetInfoString() string {
var result strings.Builder
if s.ExpiresAt > 0 {
t := util.GetTimeFromMsecSinceEpoch(s.ExpiresAt)
result.WriteString(fmt.Sprintf("Expiration: %v. ", t.Format("2006-01-02 15:04"))) // YYYY-MM-DD HH:MM
}
if s.LastUseAt > 0 {
t := util.GetTimeFromMsecSinceEpoch(s.LastUseAt)
result.WriteString(fmt.Sprintf("Last use: %v. ", t.Format("2006-01-02 15:04")))
}
if s.MaxTokens > 0 {
result.WriteString(fmt.Sprintf("Usage: %v/%v. ", s.UsedTokens, s.MaxTokens))
} else {
result.WriteString(fmt.Sprintf("Used tokens: %v. ", s.UsedTokens))
}
if len(s.AllowFrom) > 0 {
result.WriteString(fmt.Sprintf("Allowed IP/Mask: %v. ", len(s.AllowFrom)))
}
if s.Password != "" {
result.WriteString("Password protected.")
}
return result.String()
}
// GetAllowedFromAsString returns the allowed IPs/networks as a comma-separated string
func (s *Share) GetAllowedFromAsString() string {
return strings.Join(s.AllowFrom, ",")
}
func (s *Share) getACopy() Share {
allowFrom := make([]string, len(s.AllowFrom))
copy(allowFrom, s.AllowFrom)
return Share{
ID: s.ID,
ShareID: s.ShareID,
Name: s.Name,
Description: s.Description,
Scope: s.Scope,
Paths: s.Paths,
Username: s.Username,
CreatedAt: s.CreatedAt,
UpdatedAt: s.UpdatedAt,
LastUseAt: s.LastUseAt,
ExpiresAt: s.ExpiresAt,
Password: s.Password,
MaxTokens: s.MaxTokens,
UsedTokens: s.UsedTokens,
AllowFrom: allowFrom,
}
}
// RenderAsJSON implements the renderer interface used within plugins
func (s *Share) RenderAsJSON(reload bool) ([]byte, error) {
if reload {
share, err := provider.shareExists(s.ShareID, s.Username)
if err != nil {
providerLog(logger.LevelError, "unable to reload share before rendering as json: %v", err)
return nil, err
}
share.HideConfidentialData()
return json.Marshal(share)
}
s.HideConfidentialData()
return json.Marshal(s)
}
// HideConfidentialData hides share confidential data
func (s *Share) HideConfidentialData() {
if s.Password != "" {
s.Password = redactedPassword
}
}
// HasRedactedPassword returns true if this share has a redacted password
func (s *Share) HasRedactedPassword() bool {
return s.Password == redactedPassword
}
func (s *Share) hashPassword() error {
if s.Password != "" && !util.IsStringPrefixInSlice(s.Password, internalHashPwdPrefixes) {
if config.PasswordHashing.Algo == HashingAlgoBcrypt {
hashed, err := bcrypt.GenerateFromPassword([]byte(s.Password), config.PasswordHashing.BcryptOptions.Cost)
if err != nil {
return err
}
s.Password = string(hashed)
} else {
hashed, err := argon2id.CreateHash(s.Password, argon2Params)
if err != nil {
return err
}
s.Password = hashed
}
}
return nil
}
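// Note: CheckPassword below tells the two hash formats apart by prefix
// (bcrypt hashes start with bcryptPwdPrefix, anything else is treated as
// argon2id), so no separate algorithm marker has to be stored.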
func (s *Share) validatePaths() error {
var paths []string
for _, p := range s.Paths {
p = strings.TrimSpace(p)
if p != "" {
paths = append(paths, p)
}
}
s.Paths = paths
if len(s.Paths) == 0 {
return util.NewValidationError("at least a shared path is required")
}
for idx := range s.Paths {
s.Paths[idx] = util.CleanPath(s.Paths[idx])
}
s.Paths = util.RemoveDuplicates(s.Paths)
if s.Scope == ShareScopeWrite && len(s.Paths) != 1 {
return util.NewValidationError("the write share scope requires exactly one path")
}
// check nested paths
if len(s.Paths) > 1 {
for idx := range s.Paths {
for innerIdx := range s.Paths {
if idx == innerIdx {
continue
}
if isVirtualDirOverlapped(s.Paths[idx], s.Paths[innerIdx], true) {
return util.NewGenericError("shared paths cannot be nested")
}
}
}
}
return nil
}
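// For example (illustrative): Paths ["/docs", "/docs/reports"] are rejected
// as nested, while ["/docs", "/photos"] pass validation.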
func (s *Share) validate() error {
if s.ShareID == "" {
return util.NewValidationError("share_id is mandatory")
}
if s.Name == "" {
return util.NewValidationError("name is mandatory")
}
if s.Scope != ShareScopeRead && s.Scope != ShareScopeWrite {
return util.NewValidationError(fmt.Sprintf("invalid scope: %v", s.Scope))
}
if err := s.validatePaths(); err != nil {
return err
}
if s.ExpiresAt > 0 {
if !s.IsRestore && s.ExpiresAt < util.GetTimeAsMsSinceEpoch(time.Now()) {
return util.NewValidationError("expiration must be in the future")
}
} else {
s.ExpiresAt = 0
}
if s.MaxTokens < 0 {
return util.NewValidationError("invalid max tokens")
}
if s.Username == "" {
return util.NewValidationError("username is mandatory")
}
if s.HasRedactedPassword() {
return util.NewValidationError("cannot save a share with a redacted password")
}
if err := s.hashPassword(); err != nil {
return err
}
s.AllowFrom = util.RemoveDuplicates(s.AllowFrom)
for _, IPMask := range s.AllowFrom {
_, _, err := net.ParseCIDR(IPMask)
if err != nil {
return util.NewValidationError(fmt.Sprintf("could not parse allow from entry %#v : %v", IPMask, err))
}
}
return nil
}
// CheckPassword verifies the share password if set
func (s *Share) CheckPassword(password string) (bool, error) {
if s.Password == "" {
return true, nil
}
if password == "" {
return false, ErrInvalidCredentials
}
if strings.HasPrefix(s.Password, bcryptPwdPrefix) {
if err := bcrypt.CompareHashAndPassword([]byte(s.Password), []byte(password)); err != nil {
return false, ErrInvalidCredentials
}
return true, nil
}
match, err := argon2id.ComparePasswordAndHash(password, s.Password)
if !match || err != nil {
return false, ErrInvalidCredentials
}
return match, err
}
// IsUsable checks if the share is usable from the specified IP
func (s *Share) IsUsable(ip string) (bool, error) {
if s.MaxTokens > 0 && s.UsedTokens >= s.MaxTokens {
return false, util.NewRecordNotFoundError("max share usage exceeded")
}
if s.ExpiresAt > 0 {
if s.ExpiresAt < util.GetTimeAsMsSinceEpoch(time.Now()) {
return false, util.NewRecordNotFoundError("share expired")
}
}
if len(s.AllowFrom) == 0 {
return true, nil
}
parsedIP := net.ParseIP(ip)
if parsedIP == nil {
return false, ErrLoginNotAllowedFromIP
}
for _, ipMask := range s.AllowFrom {
_, network, err := net.ParseCIDR(ipMask)
if err != nil {
continue
}
if network.Contains(parsedIP) {
return true, nil
}
}
return false, ErrLoginNotAllowedFromIP
}
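// Hypothetical usage sketch, assuming package dataprovider with an
// initialized provider/config: build a read share, validate it (which also
// hashes the password), then gate access by IP and by password.
func exampleShareUsage() error {
	share := Share{
		ShareID:  "abc123",
		Name:     "docs",
		Scope:    ShareScopeRead,
		Paths:    []string{"/docs"},
		Username: "alice",
		Password: "secret",
	}
	if err := share.validate(); err != nil {
		return err
	}
	if _, err := share.IsUsable("192.168.1.10"); err != nil {
		return err
	}
	_, err := share.CheckPassword("secret")
	return err
}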

File diff suppressed because it is too large


@@ -1,25 +1,106 @@
//go:build !nosqlite
// +build !nosqlite
package dataprovider
import (
"context"
"crypto/x509"
"database/sql"
"errors"
"fmt"
"path/filepath"
"strings"
"github.com/drakkan/sftpgo/logger"
"github.com/drakkan/sftpgo/utils"
// we import go-sqlite3 here to be able to disable SQLite support using a build tag
_ "github.com/mattn/go-sqlite3"
"github.com/drakkan/sftpgo/v2/logger"
"github.com/drakkan/sftpgo/v2/util"
"github.com/drakkan/sftpgo/v2/version"
"github.com/drakkan/sftpgo/v2/vfs"
)
const (
sqliteUsersTableSQL = `CREATE TABLE "{{users}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255)
NOT NULL UNIQUE, "password" varchar(255) NULL, "public_keys" text NULL, "home_dir" varchar(255) NOT NULL, "uid" integer NOT NULL,
"gid" integer NOT NULL, "max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL,
"permissions" text NOT NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL,
"last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL,
"expiration_date" bigint NOT NULL, "last_login" bigint NOT NULL, "status" integer NOT NULL, "filters" text NULL,
"filesystem" text NULL);`
sqliteSchemaTableSQL = `CREATE TABLE "schema_version" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "version" integer NOT NULL);`
sqliteUsersV2SQL = `ALTER TABLE "{{users}}" ADD COLUMN "virtual_folders" text NULL;`
sqliteResetSQL = `DROP TABLE IF EXISTS "{{api_keys}}";
DROP TABLE IF EXISTS "{{folders_mapping}}";
DROP TABLE IF EXISTS "{{admins}}";
DROP TABLE IF EXISTS "{{folders}}";
DROP TABLE IF EXISTS "{{shares}}";
DROP TABLE IF EXISTS "{{users}}";
DROP TABLE IF EXISTS "{{defender_events}}";
DROP TABLE IF EXISTS "{{defender_hosts}}";
DROP TABLE IF EXISTS "{{schema_version}}";
`
sqliteInitialSQL = `CREATE TABLE "{{schema_version}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "version" integer NOT NULL);
CREATE TABLE "{{admins}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE,
"description" varchar(512) NULL, "password" varchar(255) NOT NULL, "email" varchar(255) NULL, "status" integer NOT NULL,
"permissions" text NOT NULL, "filters" text NULL, "additional_info" text NULL);
CREATE TABLE "{{folders}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "name" varchar(255) NOT NULL UNIQUE,
"description" varchar(512) NULL, "path" varchar(512) NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL,
"last_quota_update" bigint NOT NULL, "filesystem" text NULL);
CREATE TABLE "{{users}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE,
"status" integer NOT NULL, "expiration_date" bigint NOT NULL, "description" varchar(512) NULL, "password" text NULL,
"public_keys" text NULL, "home_dir" varchar(512) NOT NULL, "uid" integer NOT NULL, "gid" integer NOT NULL,
"max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "permissions" text NOT NULL,
"used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL,
"upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL, "last_login" bigint NOT NULL, "filters" text NULL,
"filesystem" text NULL, "additional_info" text NULL);
CREATE TABLE "{{folders_mapping}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "virtual_path" varchar(512) NOT NULL,
"quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "folder_id" integer NOT NULL REFERENCES "{{folders}}" ("id")
ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED, "user_id" integer NOT NULL REFERENCES "{{users}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
CONSTRAINT "{{prefix}}unique_mapping" UNIQUE ("user_id", "folder_id"));
CREATE INDEX "{{prefix}}folders_mapping_folder_id_idx" ON "{{folders_mapping}}" ("folder_id");
CREATE INDEX "{{prefix}}folders_mapping_user_id_idx" ON "{{folders_mapping}}" ("user_id");
INSERT INTO {{schema_version}} (version) VALUES (10);
`
sqliteV11SQL = `CREATE TABLE "{{api_keys}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "name" varchar(255) NOT NULL,
"key_id" varchar(50) NOT NULL UNIQUE, "api_key" varchar(255) NOT NULL UNIQUE, "scope" integer NOT NULL, "created_at" bigint NOT NULL,
"updated_at" bigint NOT NULL, "last_use_at" bigint NOT NULL, "expires_at" bigint NOT NULL, "description" text NULL,
"admin_id" integer NULL REFERENCES "{{admins}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
"user_id" integer NULL REFERENCES "{{users}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED);
CREATE INDEX "{{prefix}}api_keys_admin_id_idx" ON "{{api_keys}}" ("admin_id");
CREATE INDEX "{{prefix}}api_keys_user_id_idx" ON "{{api_keys}}" ("user_id");
`
sqliteV11DownSQL = `DROP TABLE "{{api_keys}}";`
sqliteV12SQL = `ALTER TABLE "{{admins}}" ADD COLUMN "created_at" bigint DEFAULT 0 NOT NULL;
ALTER TABLE "{{admins}}" ADD COLUMN "updated_at" bigint DEFAULT 0 NOT NULL;
ALTER TABLE "{{admins}}" ADD COLUMN "last_login" bigint DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ADD COLUMN "created_at" bigint DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ADD COLUMN "updated_at" bigint DEFAULT 0 NOT NULL;
CREATE INDEX "{{prefix}}users_updated_at_idx" ON "{{users}}" ("updated_at");
`
sqliteV12DownSQL = `DROP INDEX "{{prefix}}users_updated_at_idx";
ALTER TABLE "{{users}}" DROP COLUMN "updated_at";
ALTER TABLE "{{users}}" DROP COLUMN "created_at";
ALTER TABLE "{{admins}}" DROP COLUMN "created_at";
ALTER TABLE "{{admins}}" DROP COLUMN "updated_at";
ALTER TABLE "{{admins}}" DROP COLUMN "last_login";
`
sqliteV13SQL = `ALTER TABLE "{{users}}" ADD COLUMN "email" varchar(255) NULL;`
sqliteV13DownSQL = `ALTER TABLE "{{users}}" DROP COLUMN "email";`
sqliteV14SQL = `CREATE TABLE "{{shares}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"share_id" varchar(60) NOT NULL UNIQUE, "name" varchar(255) NOT NULL, "description" varchar(512) NULL,
"scope" integer NOT NULL, "paths" text NOT NULL, "created_at" bigint NOT NULL, "updated_at" bigint NOT NULL,
"last_use_at" bigint NOT NULL, "expires_at" bigint NOT NULL, "password" text NULL, "max_tokens" integer NOT NULL,
"used_tokens" integer NOT NULL, "allow_from" text NULL,
"user_id" integer NOT NULL REFERENCES "{{users}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED);
CREATE INDEX "{{prefix}}shares_user_id_idx" ON "{{shares}}" ("user_id");
`
sqliteV14DownSQL = `DROP TABLE "{{shares}}";`
sqliteV15SQL = `CREATE TABLE "{{defender_hosts}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"ip" varchar(50) NOT NULL UNIQUE, "ban_time" bigint NOT NULL, "updated_at" bigint NOT NULL);
CREATE TABLE "{{defender_events}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "date_time" bigint NOT NULL,
"score" integer NOT NULL, "host_id" integer NOT NULL REFERENCES "{{defender_hosts}}" ("id") ON DELETE CASCADE
DEFERRABLE INITIALLY DEFERRED);
CREATE INDEX "{{prefix}}defender_hosts_updated_at_idx" ON "{{defender_hosts}}" ("updated_at");
CREATE INDEX "{{prefix}}defender_hosts_ban_time_idx" ON "{{defender_hosts}}" ("ban_time");
CREATE INDEX "{{prefix}}defender_events_date_time_idx" ON "{{defender_events}}" ("date_time");
CREATE INDEX "{{prefix}}defender_events_host_id_idx" ON "{{defender_events}}" ("host_id");
`
sqliteV15DownSQL = `DROP TABLE "{{defender_events}}";
DROP TABLE "{{defender_hosts}}";
`
)
// SQLiteProvider is the auth provider for SQLite databases
@@ -27,19 +108,23 @@ type SQLiteProvider struct {
dbHandle *sql.DB
}
func init() {
version.AddFeature("+sqlite")
}
func initializeSQLiteProvider(basePath string) error {
var err error
var connectionString string
logSender = SQLiteDataProviderName
if len(config.ConnectionString) == 0 {
if config.ConnectionString == "" {
dbPath := config.Name
if !utils.IsFileInputValid(dbPath) {
return fmt.Errorf("Invalid database path: %#v", dbPath)
if !util.IsFileInputValid(dbPath) {
return fmt.Errorf("invalid database path: %#v", dbPath)
}
if !filepath.IsAbs(dbPath) {
dbPath = filepath.Join(basePath, dbPath)
}
connectionString = fmt.Sprintf("file:%v?cache=shared", dbPath)
connectionString = fmt.Sprintf("file:%v?cache=shared&_foreign_keys=1", dbPath)
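// _foreign_keys=1 is the go-sqlite3 DSN parameter that enables SQLite
// foreign key enforcement on each connection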
} else {
connectionString = config.ConnectionString
}
@@ -47,103 +132,484 @@ func initializeSQLiteProvider(basePath string) error {
if err == nil {
providerLog(logger.LevelDebug, "sqlite database handle created, connection string: %#v", connectionString)
dbHandle.SetMaxOpenConns(1)
provider = SQLiteProvider{dbHandle: dbHandle}
provider = &SQLiteProvider{dbHandle: dbHandle}
} else {
providerLog(logger.LevelWarn, "error creating sqlite database handler, connection string: %#v, error: %v",
providerLog(logger.LevelError, "error creating sqlite database handler, connection string: %#v, error: %v",
connectionString, err)
}
return err
}
func (p SQLiteProvider) checkAvailability() error {
func (p *SQLiteProvider) checkAvailability() error {
return sqlCommonCheckAvailability(p.dbHandle)
}
func (p SQLiteProvider) validateUserAndPass(username string, password string) (User, error) {
return sqlCommonValidateUserAndPass(username, password, p.dbHandle)
func (p *SQLiteProvider) validateUserAndPass(username, password, ip, protocol string) (User, error) {
return sqlCommonValidateUserAndPass(username, password, ip, protocol, p.dbHandle)
}
func (p SQLiteProvider) validateUserAndPubKey(username string, publicKey string) (User, string, error) {
func (p *SQLiteProvider) validateUserAndTLSCert(username, protocol string, tlsCert *x509.Certificate) (User, error) {
return sqlCommonValidateUserAndTLSCertificate(username, protocol, tlsCert, p.dbHandle)
}
func (p *SQLiteProvider) validateUserAndPubKey(username string, publicKey []byte) (User, string, error) {
return sqlCommonValidateUserAndPubKey(username, publicKey, p.dbHandle)
}
func (p SQLiteProvider) getUserByID(ID int64) (User, error) {
return sqlCommonGetUserByID(ID, p.dbHandle)
}
func (p SQLiteProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
func (p *SQLiteProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
return sqlCommonUpdateQuota(username, filesAdd, sizeAdd, reset, p.dbHandle)
}
func (p SQLiteProvider) updateLastLogin(username string) error {
return sqlCommonUpdateLastLogin(username, p.dbHandle)
}
func (p SQLiteProvider) getUsedQuota(username string) (int, int64, error) {
func (p *SQLiteProvider) getUsedQuota(username string) (int, int64, error) {
return sqlCommonGetUsedQuota(username, p.dbHandle)
}
func (p SQLiteProvider) userExists(username string) (User, error) {
return sqlCommonCheckUserExists(username, p.dbHandle)
func (p *SQLiteProvider) setUpdatedAt(username string) {
sqlCommonSetUpdatedAt(username, p.dbHandle)
}
func (p SQLiteProvider) addUser(user User) error {
func (p *SQLiteProvider) updateLastLogin(username string) error {
return sqlCommonUpdateLastLogin(username, p.dbHandle)
}
func (p *SQLiteProvider) updateAdminLastLogin(username string) error {
return sqlCommonUpdateAdminLastLogin(username, p.dbHandle)
}
func (p *SQLiteProvider) userExists(username string) (User, error) {
return sqlCommonGetUserByUsername(username, p.dbHandle)
}
func (p *SQLiteProvider) addUser(user *User) error {
return sqlCommonAddUser(user, p.dbHandle)
}
func (p SQLiteProvider) updateUser(user User) error {
func (p *SQLiteProvider) updateUser(user *User) error {
return sqlCommonUpdateUser(user, p.dbHandle)
}
func (p SQLiteProvider) deleteUser(user User) error {
func (p *SQLiteProvider) deleteUser(user *User) error {
return sqlCommonDeleteUser(user, p.dbHandle)
}
func (p SQLiteProvider) dumpUsers() ([]User, error) {
func (p *SQLiteProvider) dumpUsers() ([]User, error) {
return sqlCommonDumpUsers(p.dbHandle)
}
func (p SQLiteProvider) getUsers(limit int, offset int, order string, username string) ([]User, error) {
return sqlCommonGetUsers(limit, offset, order, username, p.dbHandle)
// the SQLite provider cannot be shared among multiple instances, so we always return no recently updated users
func (p *SQLiteProvider) getRecentlyUpdatedUsers(after int64) ([]User, error) {
return nil, nil
}
func (p SQLiteProvider) close() error {
func (p *SQLiteProvider) getUsers(limit int, offset int, order string) ([]User, error) {
return sqlCommonGetUsers(limit, offset, order, p.dbHandle)
}
func (p *SQLiteProvider) dumpFolders() ([]vfs.BaseVirtualFolder, error) {
return sqlCommonDumpFolders(p.dbHandle)
}
func (p *SQLiteProvider) getFolders(limit, offset int, order string) ([]vfs.BaseVirtualFolder, error) {
return sqlCommonGetFolders(limit, offset, order, p.dbHandle)
}
func (p *SQLiteProvider) getFolderByName(name string) (vfs.BaseVirtualFolder, error) {
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
defer cancel()
return sqlCommonGetFolderByName(ctx, name, p.dbHandle)
}
func (p *SQLiteProvider) addFolder(folder *vfs.BaseVirtualFolder) error {
return sqlCommonAddFolder(folder, p.dbHandle)
}
func (p *SQLiteProvider) updateFolder(folder *vfs.BaseVirtualFolder) error {
return sqlCommonUpdateFolder(folder, p.dbHandle)
}
func (p *SQLiteProvider) deleteFolder(folder *vfs.BaseVirtualFolder) error {
return sqlCommonDeleteFolder(folder, p.dbHandle)
}
func (p *SQLiteProvider) updateFolderQuota(name string, filesAdd int, sizeAdd int64, reset bool) error {
return sqlCommonUpdateFolderQuota(name, filesAdd, sizeAdd, reset, p.dbHandle)
}
func (p *SQLiteProvider) getUsedFolderQuota(name string) (int, int64, error) {
return sqlCommonGetFolderUsedQuota(name, p.dbHandle)
}
func (p *SQLiteProvider) adminExists(username string) (Admin, error) {
return sqlCommonGetAdminByUsername(username, p.dbHandle)
}
func (p *SQLiteProvider) addAdmin(admin *Admin) error {
return sqlCommonAddAdmin(admin, p.dbHandle)
}
func (p *SQLiteProvider) updateAdmin(admin *Admin) error {
return sqlCommonUpdateAdmin(admin, p.dbHandle)
}
func (p *SQLiteProvider) deleteAdmin(admin *Admin) error {
return sqlCommonDeleteAdmin(admin, p.dbHandle)
}
func (p *SQLiteProvider) getAdmins(limit int, offset int, order string) ([]Admin, error) {
return sqlCommonGetAdmins(limit, offset, order, p.dbHandle)
}
func (p *SQLiteProvider) dumpAdmins() ([]Admin, error) {
return sqlCommonDumpAdmins(p.dbHandle)
}
func (p *SQLiteProvider) validateAdminAndPass(username, password, ip string) (Admin, error) {
return sqlCommonValidateAdminAndPass(username, password, ip, p.dbHandle)
}
func (p *SQLiteProvider) apiKeyExists(keyID string) (APIKey, error) {
return sqlCommonGetAPIKeyByID(keyID, p.dbHandle)
}
func (p *SQLiteProvider) addAPIKey(apiKey *APIKey) error {
return sqlCommonAddAPIKey(apiKey, p.dbHandle)
}
func (p *SQLiteProvider) updateAPIKey(apiKey *APIKey) error {
return sqlCommonUpdateAPIKey(apiKey, p.dbHandle)
}
func (p *SQLiteProvider) deleteAPIKey(apiKey *APIKey) error {
return sqlCommonDeleteAPIKey(apiKey, p.dbHandle)
}
func (p *SQLiteProvider) getAPIKeys(limit int, offset int, order string) ([]APIKey, error) {
return sqlCommonGetAPIKeys(limit, offset, order, p.dbHandle)
}
func (p *SQLiteProvider) dumpAPIKeys() ([]APIKey, error) {
return sqlCommonDumpAPIKeys(p.dbHandle)
}
func (p *SQLiteProvider) updateAPIKeyLastUse(keyID string) error {
return sqlCommonUpdateAPIKeyLastUse(keyID, p.dbHandle)
}
func (p *SQLiteProvider) shareExists(shareID, username string) (Share, error) {
return sqlCommonGetShareByID(shareID, username, p.dbHandle)
}
func (p *SQLiteProvider) addShare(share *Share) error {
return sqlCommonAddShare(share, p.dbHandle)
}
func (p *SQLiteProvider) updateShare(share *Share) error {
return sqlCommonUpdateShare(share, p.dbHandle)
}
func (p *SQLiteProvider) deleteShare(share *Share) error {
return sqlCommonDeleteShare(share, p.dbHandle)
}
func (p *SQLiteProvider) getShares(limit int, offset int, order, username string) ([]Share, error) {
return sqlCommonGetShares(limit, offset, order, username, p.dbHandle)
}
func (p *SQLiteProvider) dumpShares() ([]Share, error) {
return sqlCommonDumpShares(p.dbHandle)
}
func (p *SQLiteProvider) updateShareLastUse(shareID string, numTokens int) error {
return sqlCommonUpdateShareLastUse(shareID, numTokens, p.dbHandle)
}
func (p *SQLiteProvider) getDefenderHosts(from int64, limit int) ([]*DefenderEntry, error) {
return sqlCommonGetDefenderHosts(from, limit, p.dbHandle)
}
func (p *SQLiteProvider) getDefenderHostByIP(ip string, from int64) (*DefenderEntry, error) {
return sqlCommonGetDefenderHostByIP(ip, from, p.dbHandle)
}
func (p *SQLiteProvider) isDefenderHostBanned(ip string) (*DefenderEntry, error) {
return sqlCommonIsDefenderHostBanned(ip, p.dbHandle)
}
func (p *SQLiteProvider) updateDefenderBanTime(ip string, minutes int) error {
return sqlCommonDefenderIncrementBanTime(ip, minutes, p.dbHandle)
}
func (p *SQLiteProvider) deleteDefenderHost(ip string) error {
return sqlCommonDeleteDefenderHost(ip, p.dbHandle)
}
func (p *SQLiteProvider) addDefenderEvent(ip string, score int) error {
return sqlCommonAddDefenderHostAndEvent(ip, score, p.dbHandle)
}
func (p *SQLiteProvider) setDefenderBanTime(ip string, banTime int64) error {
return sqlCommonSetDefenderBanTime(ip, banTime, p.dbHandle)
}
func (p *SQLiteProvider) cleanupDefender(from int64) error {
return sqlCommonDefenderCleanup(from, p.dbHandle)
}
func (p *SQLiteProvider) close() error {
return p.dbHandle.Close()
}
func (p *SQLiteProvider) reloadConfig() error {
return nil
}
// initializeDatabase creates the initial database structure
func (p *SQLiteProvider) initializeDatabase() error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, false)
if err == nil && dbVersion.Version > 0 {
return ErrNoInitRequired
}
if errors.Is(err, sql.ErrNoRows) {
return errSchemaVersionEmpty
}
initialSQL := strings.ReplaceAll(sqliteInitialSQL, "{{schema_version}}", sqlTableSchemaVersion)
initialSQL = strings.ReplaceAll(initialSQL, "{{admins}}", sqlTableAdmins)
initialSQL = strings.ReplaceAll(initialSQL, "{{folders}}", sqlTableFolders)
initialSQL = strings.ReplaceAll(initialSQL, "{{users}}", sqlTableUsers)
initialSQL = strings.ReplaceAll(initialSQL, "{{folders_mapping}}", sqlTableFoldersMapping)
initialSQL = strings.ReplaceAll(initialSQL, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, []string{initialSQL}, 10)
}
//nolint:dupl
func (p *SQLiteProvider) migrateDatabase() error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
if err != nil {
return err
}
switch version := dbVersion.Version; {
case version == sqlDatabaseVersion:
providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", version)
return ErrNoInitRequired
case version < 10:
err = fmt.Errorf("database version %v is too old, please see the upgrading docs", version)
providerLog(logger.LevelError, "%v", err)
logger.ErrorToConsole("%v", err)
return err
case version == 10:
return updateSQLiteDatabaseFromV10(p.dbHandle)
case version == 11:
return updateSQLiteDatabaseFromV11(p.dbHandle)
case version == 12:
return updateSQLiteDatabaseFromV12(p.dbHandle)
case version == 13:
return updateSQLiteDatabaseFromV13(p.dbHandle)
case version == 14:
return updateSQLiteDatabaseFromV14(p.dbHandle)
default:
if version > sqlDatabaseVersion {
providerLog(logger.LevelError, "database version %v is newer than the supported one: %v", version,
sqlDatabaseVersion)
logger.WarnToConsole("database version %v is newer than the supported one: %v", version,
sqlDatabaseVersion)
return nil
}
return fmt.Errorf("database version not handled: %v", version)
}
}
func (p *SQLiteProvider) revertDatabase(targetVersion int) error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
if err != nil {
return err
}
if dbVersion.Version == targetVersion {
return errors.New("current version match target version, nothing to do")
}
switch dbVersion.Version {
case 15:
return downgradeSQLiteDatabaseFromV15(p.dbHandle)
case 14:
return downgradeSQLiteDatabaseFromV14(p.dbHandle)
case 13:
return downgradeSQLiteDatabaseFromV13(p.dbHandle)
case 12:
return downgradeSQLiteDatabaseFromV12(p.dbHandle)
case 11:
return downgradeSQLiteDatabaseFromV11(p.dbHandle)
default:
return fmt.Errorf("database version not handled: %v", dbVersion.Version)
}
}
func (p *SQLiteProvider) resetDatabase() error {
sql := strings.ReplaceAll(sqliteResetSQL, "{{schema_version}}", sqlTableSchemaVersion)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{folders_mapping}}", sqlTableFoldersMapping)
sql = strings.ReplaceAll(sql, "{{api_keys}}", sqlTableAPIKeys)
sql = strings.ReplaceAll(sql, "{{shares}}", sqlTableShares)
sql = strings.ReplaceAll(sql, "{{defender_events}}", sqlTableDefenderEvents)
sql = strings.ReplaceAll(sql, "{{defender_hosts}}", sqlTableDefenderHosts)
return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, []string{sql}, 0)
}
func updateSQLiteDatabaseFromV10(dbHandle *sql.DB) error {
if err := updateSQLiteDatabaseFrom10To11(dbHandle); err != nil {
return err
}
return updateSQLiteDatabaseFromV11(dbHandle)
}
func updateSQLiteDatabaseFromV11(dbHandle *sql.DB) error {
if err := updateSQLiteDatabaseFrom11To12(dbHandle); err != nil {
return err
}
return updateSQLiteDatabaseFromV12(dbHandle)
}
func updateSQLiteDatabaseFromV12(dbHandle *sql.DB) error {
if err := updateSQLiteDatabaseFrom12To13(dbHandle); err != nil {
return err
}
return updateSQLiteDatabaseFromV13(dbHandle)
}
func updateSQLiteDatabaseFromV13(dbHandle *sql.DB) error {
if err := updateSQLiteDatabaseFrom13To14(dbHandle); err != nil {
return err
}
return updateSQLiteDatabaseFromV14(dbHandle)
}
func updateSQLiteDatabaseFromV14(dbHandle *sql.DB) error {
return updateSQLiteDatabaseFrom14To15(dbHandle)
}
func downgradeSQLiteDatabaseFromV15(dbHandle *sql.DB) error {
if err := downgradeSQLiteDatabaseFrom15To14(dbHandle); err != nil {
return err
}
return downgradeSQLiteDatabaseFromV14(dbHandle)
}
func downgradeSQLiteDatabaseFromV14(dbHandle *sql.DB) error {
if err := downgradeSQLiteDatabaseFrom14To13(dbHandle); err != nil {
return err
}
return downgradeSQLiteDatabaseFromV13(dbHandle)
}
func downgradeSQLiteDatabaseFromV13(dbHandle *sql.DB) error {
if err := downgradeSQLiteDatabaseFrom13To12(dbHandle); err != nil {
return err
}
return downgradeSQLiteDatabaseFromV12(dbHandle)
}
func downgradeSQLiteDatabaseFromV12(dbHandle *sql.DB) error {
if err := downgradeSQLiteDatabaseFrom12To11(dbHandle); err != nil {
return err
}
return downgradeSQLiteDatabaseFromV11(dbHandle)
}
func downgradeSQLiteDatabaseFromV11(dbHandle *sql.DB) error {
return downgradeSQLiteDatabaseFrom11To10(dbHandle)
}
func updateSQLiteDatabaseFrom13To14(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 13 -> 14")
providerLog(logger.LevelInfo, "updating database version: 13 -> 14")
sql := strings.ReplaceAll(sqliteV14SQL, "{{shares}}", sqlTableShares)
sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 14)
}
func updateSQLiteDatabaseFrom14To15(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 14 -> 15")
providerLog(logger.LevelInfo, "updating database version: 14 -> 15")
sql := strings.ReplaceAll(sqliteV15SQL, "{{defender_events}}", sqlTableDefenderEvents)
sql = strings.ReplaceAll(sql, "{{defender_hosts}}", sqlTableDefenderHosts)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 15)
}
func downgradeSQLiteDatabaseFrom15To14(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 15 -> 14")
providerLog(logger.LevelInfo, "downgrading database version: 15 -> 14")
sql := strings.ReplaceAll(sqliteV15DownSQL, "{{defender_events}}", sqlTableDefenderEvents)
sql = strings.ReplaceAll(sql, "{{defender_hosts}}", sqlTableDefenderHosts)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 14)
}
func downgradeSQLiteDatabaseFrom14To13(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 14 -> 13")
providerLog(logger.LevelInfo, "downgrading database version: 14 -> 13")
sql := strings.ReplaceAll(sqliteV14DownSQL, "{{shares}}", sqlTableShares)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 13)
}
func updateSQLiteDatabaseFrom12To13(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 12 -> 13")
providerLog(logger.LevelInfo, "updating database version: 12 -> 13")
sql := strings.ReplaceAll(sqliteV13SQL, "{{users}}", sqlTableUsers)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 13)
}
func downgradeSQLiteDatabaseFrom13To12(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 13 -> 12")
providerLog(logger.LevelInfo, "downgrading database version: 13 -> 12")
sql := strings.ReplaceAll(sqliteV13DownSQL, "{{users}}", sqlTableUsers)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 12)
}
func updateSQLiteDatabaseFrom11To12(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 11 -> 12")
providerLog(logger.LevelInfo, "updating database version: 11 -> 12")
sql := strings.ReplaceAll(sqliteV12SQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 12)
}
func downgradeSQLiteDatabaseFrom12To11(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 12 -> 11")
providerLog(logger.LevelInfo, "downgrading database version: 12 -> 11")
sql := strings.ReplaceAll(sqliteV12DownSQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 11)
}
func updateSQLiteDatabaseFrom10To11(dbHandle *sql.DB) error {
logger.InfoToConsole("updating database version: 10 -> 11")
providerLog(logger.LevelInfo, "updating database version: 10 -> 11")
sql := strings.ReplaceAll(sqliteV11SQL, "{{users}}", sqlTableUsers)
sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
sql = strings.ReplaceAll(sql, "{{api_keys}}", sqlTableAPIKeys)
sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 11)
}
func downgradeSQLiteDatabaseFrom11To10(dbHandle *sql.DB) error {
logger.InfoToConsole("downgrading database version: 11 -> 10")
providerLog(logger.LevelInfo, "downgrading database version: 11 -> 10")
sql := strings.ReplaceAll(sqliteV11DownSQL, "{{api_keys}}", sqlTableAPIKeys)
return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 10)
}
/*func setPragmaFK(dbHandle *sql.DB, value string) error {
ctx, cancel := context.WithTimeout(context.Background(), longSQLQueryTimeout)
defer cancel()
sql := fmt.Sprintf("PRAGMA foreign_keys=%v;", value)
_, err := dbHandle.ExecContext(ctx, sql)
return err
}
func (p SQLiteProvider) migrateDatabase() error {
dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle)
if err != nil {
return err
}
if dbVersion.Version == sqlDatabaseVersion {
providerLog(logger.LevelDebug, "sql database is updated, current version: %v", dbVersion.Version)
return nil
}
if dbVersion.Version == 1 {
return updateSQLiteDatabaseFrom1To2(p.dbHandle)
}
return nil
}
func updateSQLiteDatabaseFrom1To2(dbHandle *sql.DB) error {
providerLog(logger.LevelInfo, "updating database version: 1 -> 2")
sql := strings.Replace(sqliteUsersV2SQL, "{{users}}", config.UsersTable, 1)
_, err := dbHandle.Exec(sql)
if err != nil {
return err
}
return sqlCommonUpdateDatabaseVersion(dbHandle, 2)
}
}*/

View File

@@ -0,0 +1,18 @@
//go:build nosqlite
// +build nosqlite
package dataprovider
import (
"errors"
"github.com/drakkan/sftpgo/v2/version"
)
func init() {
version.AddFeature("-sqlite")
}
func initializeSQLiteProvider(basePath string) error {
return errors.New("SQLite disabled at build time")
}

View File

@@ -1,17 +1,28 @@
package dataprovider
import "fmt"
import (
"fmt"
"strconv"
"strings"
"github.com/drakkan/sftpgo/v2/vfs"
)
const (
selectUserFields = "id,username,password,public_keys,home_dir,uid,gid,max_sessions,quota_size,quota_files,permissions,used_quota_size," +
"used_quota_files,last_quota_update,upload_bandwidth,download_bandwidth,expiration_date,last_login,status,filters,filesystem," +
"virtual_folders"
"additional_info,description,email,created_at,updated_at"
selectFolderFields = "id,path,used_quota_size,used_quota_files,last_quota_update,name,description,filesystem"
selectAdminFields = "id,username,password,status,email,permissions,filters,additional_info,description,created_at,updated_at,last_login"
selectAPIKeyFields = "key_id,name,api_key,scope,created_at,updated_at,last_use_at,expires_at,description,user_id,admin_id"
selectShareFields = "s.share_id,s.name,s.description,s.scope,s.paths,u.username,s.created_at,s.updated_at,s.last_use_at," +
"s.expires_at,s.password,s.max_tokens,s.used_tokens,s.allow_from"
)
func getSQLPlaceholders() []string {
var placeholders []string
for i := 1; i <= 20; i++ {
if config.Driver == PGSQLDataProviderName {
for i := 1; i <= 30; i++ {
if config.Driver == PGSQLDataProviderName || config.Driver == CockroachDataProviderName {
placeholders = append(placeholders, fmt.Sprintf("$%v", i))
} else {
placeholders = append(placeholders, "?")
@@ -20,72 +31,412 @@ func getSQLPlaceholders() []string {
return placeholders
}
func getAddDefenderHostQuery() string {
if config.Driver == MySQLDataProviderName {
return fmt.Sprintf("INSERT INTO %v (`ip`,`updated_at`,`ban_time`) VALUES (%v,%v,0) ON DUPLICATE KEY UPDATE `updated_at`=VALUES(`updated_at`)",
sqlTableDefenderHosts, sqlPlaceholders[0], sqlPlaceholders[1])
}
return fmt.Sprintf(`INSERT INTO %v (ip,updated_at,ban_time) VALUES (%v,%v,0) ON CONFLICT (ip) DO UPDATE SET updated_at = EXCLUDED.updated_at RETURNING id`,
sqlTableDefenderHosts, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getAddDefenderEventQuery() string {
return fmt.Sprintf(`INSERT INTO %v (date_time,score,host_id) VALUES (%v,%v,(SELECT id from %v WHERE ip = %v))`,
sqlTableDefenderEvents, sqlPlaceholders[0], sqlPlaceholders[1], sqlTableDefenderHosts, sqlPlaceholders[2])
}
func getDefenderHostsQuery() string {
return fmt.Sprintf(`SELECT id,ip,ban_time FROM %v WHERE updated_at >= %v OR ban_time > 0 ORDER BY updated_at DESC LIMIT %v`,
sqlTableDefenderHosts, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getDefenderHostQuery() string {
return fmt.Sprintf(`SELECT id,ip,ban_time FROM %v WHERE ip = %v AND (updated_at >= %v OR ban_time > 0)`,
sqlTableDefenderHosts, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getDefenderEventsQuery(hostIDS []int64) string {
var sb strings.Builder
for _, hID := range hostIDS {
if sb.Len() == 0 {
sb.WriteString("(")
} else {
sb.WriteString(",")
}
sb.WriteString(strconv.FormatInt(hID, 10))
}
if sb.Len() > 0 {
sb.WriteString(")")
} else {
sb.WriteString("(0)")
}
return fmt.Sprintf(`SELECT host_id,SUM(score) FROM %v WHERE date_time >= %v AND host_id IN %v GROUP BY host_id`,
sqlTableDefenderEvents, sqlPlaceholders[0], sb.String())
}
func getDefenderIsHostBannedQuery() string {
return fmt.Sprintf(`SELECT id FROM %v WHERE ip = %v AND ban_time >= %v`,
sqlTableDefenderHosts, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getDefenderIncrementBanTimeQuery() string {
return fmt.Sprintf(`UPDATE %v SET ban_time = ban_time + %v WHERE ip = %v`,
sqlTableDefenderHosts, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getDefenderSetBanTimeQuery() string {
return fmt.Sprintf(`UPDATE %v SET ban_time = %v WHERE ip = %v`,
sqlTableDefenderHosts, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getDeleteDefenderHostQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE ip = %v`, sqlTableDefenderHosts, sqlPlaceholders[0])
}
func getDefenderHostsCleanupQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE ban_time < %v AND NOT EXISTS (
SELECT id FROM %v WHERE %v.host_id = %v.id AND %v.date_time > %v)`,
sqlTableDefenderHosts, sqlPlaceholders[0], sqlTableDefenderEvents, sqlTableDefenderEvents, sqlTableDefenderHosts,
sqlTableDefenderEvents, sqlPlaceholders[1])
}
func getDefenderEventsCleanupQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE date_time < %v`, sqlTableDefenderEvents, sqlPlaceholders[0])
}
func getAdminByUsernameQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v WHERE username = %v`, selectAdminFields, sqlTableAdmins, sqlPlaceholders[0])
}
func getAdminsQuery(order string) string {
return fmt.Sprintf(`SELECT %v FROM %v ORDER BY username %v LIMIT %v OFFSET %v`, selectAdminFields, sqlTableAdmins,
order, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getDumpAdminsQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v`, selectAdminFields, sqlTableAdmins)
}
func getAddAdminQuery() string {
return fmt.Sprintf(`INSERT INTO %v (username,password,status,email,permissions,filters,additional_info,description,created_at,updated_at,last_login)
VALUES (%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,0)`, sqlTableAdmins, sqlPlaceholders[0], sqlPlaceholders[1],
sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7],
sqlPlaceholders[8], sqlPlaceholders[9])
}
func getUpdateAdminQuery() string {
return fmt.Sprintf(`UPDATE %v SET password=%v,status=%v,email=%v,permissions=%v,filters=%v,additional_info=%v,description=%v,updated_at=%v
WHERE username = %v`, sqlTableAdmins, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2],
sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7], sqlPlaceholders[8])
}
func getDeleteAdminQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE username = %v`, sqlTableAdmins, sqlPlaceholders[0])
}
func getShareByIDQuery(filterUser bool) string {
if filterUser {
return fmt.Sprintf(`SELECT %v FROM %v s INNER JOIN %v u ON s.user_id = u.id WHERE s.share_id = %v AND u.username = %v`,
selectShareFields, sqlTableShares, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1])
}
return fmt.Sprintf(`SELECT %v FROM %v s INNER JOIN %v u ON s.user_id = u.id WHERE s.share_id = %v`,
selectShareFields, sqlTableShares, sqlTableUsers, sqlPlaceholders[0])
}
func getSharesQuery(order string) string {
return fmt.Sprintf(`SELECT %v FROM %v s INNER JOIN %v u ON s.user_id = u.id WHERE u.username = %v ORDER BY s.share_id %v LIMIT %v OFFSET %v`,
selectShareFields, sqlTableShares, sqlTableUsers, sqlPlaceholders[0], order, sqlPlaceholders[1], sqlPlaceholders[2])
}
func getDumpSharesQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v s INNER JOIN %v u ON s.user_id = u.id`,
selectShareFields, sqlTableShares, sqlTableUsers)
}
func getAddShareQuery() string {
return fmt.Sprintf(`INSERT INTO %v (share_id,name,description,scope,paths,created_at,updated_at,last_use_at,
expires_at,password,max_tokens,used_tokens,allow_from,user_id) VALUES (%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,%v)`,
sqlTableShares, sqlPlaceholders[0], sqlPlaceholders[1],
sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6],
sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9], sqlPlaceholders[10], sqlPlaceholders[11],
sqlPlaceholders[12], sqlPlaceholders[13])
}
func getUpdateShareRestoreQuery() string {
return fmt.Sprintf(`UPDATE %v SET name=%v,description=%v,scope=%v,paths=%v,created_at=%v,updated_at=%v,
last_use_at=%v,expires_at=%v,password=%v,max_tokens=%v,used_tokens=%v,allow_from=%v,user_id=%v WHERE share_id = %v`, sqlTableShares,
sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4],
sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9],
sqlPlaceholders[10], sqlPlaceholders[11], sqlPlaceholders[12], sqlPlaceholders[13])
}
func getUpdateShareQuery() string {
return fmt.Sprintf(`UPDATE %v SET name=%v,description=%v,scope=%v,paths=%v,updated_at=%v,expires_at=%v,
password=%v,max_tokens=%v,allow_from=%v,user_id=%v WHERE share_id = %v`, sqlTableShares,
sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4],
sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9],
sqlPlaceholders[10])
}
func getDeleteShareQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE share_id = %v`, sqlTableShares, sqlPlaceholders[0])
}
func getAPIKeyByIDQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v WHERE key_id = %v`, selectAPIKeyFields, sqlTableAPIKeys, sqlPlaceholders[0])
}
func getAPIKeysQuery(order string) string {
return fmt.Sprintf(`SELECT %v FROM %v ORDER BY key_id %v LIMIT %v OFFSET %v`, selectAPIKeyFields, sqlTableAPIKeys,
order, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getDumpAPIKeysQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v`, selectAPIKeyFields, sqlTableAPIKeys)
}
func getAddAPIKeyQuery() string {
return fmt.Sprintf(`INSERT INTO %v (key_id,name,api_key,scope,created_at,updated_at,last_use_at,expires_at,description,user_id,admin_id)
VALUES (%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,%v)`, sqlTableAPIKeys, sqlPlaceholders[0], sqlPlaceholders[1],
sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6],
sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9], sqlPlaceholders[10])
}
func getUpdateAPIKeyQuery() string {
return fmt.Sprintf(`UPDATE %v SET name=%v,scope=%v,expires_at=%v,user_id=%v,admin_id=%v,description=%v,updated_at=%v
WHERE key_id = %v`, sqlTableAPIKeys, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2],
sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7])
}
func getDeleteAPIKeyQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE key_id = %v`, sqlTableAPIKeys, sqlPlaceholders[0])
}
func getRelatedUsersForAPIKeysQuery(apiKeys []APIKey) string {
var sb strings.Builder
for _, k := range apiKeys {
if k.userID == 0 {
continue
}
if sb.Len() == 0 {
sb.WriteString("(")
} else {
sb.WriteString(",")
}
sb.WriteString(strconv.FormatInt(k.userID, 10))
}
if sb.Len() > 0 {
sb.WriteString(")")
} else {
sb.WriteString("(0)")
}
return fmt.Sprintf(`SELECT id,username FROM %v WHERE id IN %v`, sqlTableUsers, sb.String())
}
func getRelatedAdminsForAPIKeysQuery(apiKeys []APIKey) string {
var sb strings.Builder
for _, k := range apiKeys {
if k.adminID == 0 {
continue
}
if sb.Len() == 0 {
sb.WriteString("(")
} else {
sb.WriteString(",")
}
sb.WriteString(strconv.FormatInt(k.adminID, 10))
}
if sb.Len() > 0 {
sb.WriteString(")")
} else {
sb.WriteString("(0)")
}
return fmt.Sprintf(`SELECT id,username FROM %v WHERE id IN %v`, sqlTableAdmins, sb.String())
}
func getUserByUsernameQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v WHERE username = %v`, selectUserFields, sqlTableUsers, sqlPlaceholders[0])
}
func getUsersQuery(order string) string {
return fmt.Sprintf(`SELECT %v FROM %v ORDER BY username %v LIMIT %v OFFSET %v`, selectUserFields, sqlTableUsers,
order, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getRecentlyUpdatedUsersQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v WHERE updated_at >= %v`, selectUserFields, sqlTableUsers, sqlPlaceholders[0])
}
func getDumpUsersQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v`, selectUserFields, sqlTableUsers)
}
func getDumpFoldersQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v`, selectFolderFields, sqlTableFolders)
}
func getUpdateQuotaQuery(reset bool) string {
if reset {
return fmt.Sprintf(`UPDATE %v SET used_quota_size = %v,used_quota_files = %v,last_quota_update = %v
WHERE username = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
}
return fmt.Sprintf(`UPDATE %v SET used_quota_size = used_quota_size + %v,used_quota_files = used_quota_files + %v,last_quota_update = %v
WHERE username = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
}
func getSetUpdateAtQuery() string {
return fmt.Sprintf(`UPDATE %v SET updated_at = %v WHERE username = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getUpdateLastLoginQuery() string {
return fmt.Sprintf(`UPDATE %v SET last_login = %v WHERE username = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getUpdateAdminLastLoginQuery() string {
return fmt.Sprintf(`UPDATE %v SET last_login = %v WHERE username = %v`, sqlTableAdmins, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getUpdateAPIKeyLastUseQuery() string {
return fmt.Sprintf(`UPDATE %v SET last_use_at = %v WHERE key_id = %v`, sqlTableAPIKeys, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getUpdateShareLastUseQuery() string {
return fmt.Sprintf(`UPDATE %v SET last_use_at = %v, used_tokens = used_tokens +%v WHERE share_id = %v`,
sqlTableShares, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2])
}
func getQuotaQuery() string {
return fmt.Sprintf(`SELECT used_quota_size,used_quota_files FROM %v WHERE username = %v`, sqlTableUsers,
sqlPlaceholders[0])
}
func getAddUserQuery() string {
return fmt.Sprintf(`INSERT INTO %v (username,password,public_keys,home_dir,uid,gid,max_sessions,quota_size,quota_files,permissions,
used_quota_size,used_quota_files,last_quota_update,upload_bandwidth,download_bandwidth,status,last_login,expiration_date,filters,
filesystem,additional_info,description,email,created_at,updated_at)
VALUES (%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,0,0,0,%v,%v,%v,0,%v,%v,%v,%v,%v,%v,%v,%v)`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1],
sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7],
sqlPlaceholders[8], sqlPlaceholders[9], sqlPlaceholders[10], sqlPlaceholders[11], sqlPlaceholders[12], sqlPlaceholders[13],
sqlPlaceholders[14], sqlPlaceholders[15], sqlPlaceholders[16], sqlPlaceholders[17], sqlPlaceholders[18], sqlPlaceholders[19],
sqlPlaceholders[20])
}
func getUpdateUserQuery() string {
return fmt.Sprintf(`UPDATE %v SET password=%v,public_keys=%v,home_dir=%v,uid=%v,gid=%v,max_sessions=%v,quota_size=%v,
quota_files=%v,permissions=%v,upload_bandwidth=%v,download_bandwidth=%v,status=%v,expiration_date=%v,filters=%v,filesystem=%v,
additional_info=%v,description=%v,email=%v,updated_at=%v WHERE id = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3],
sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9],
sqlPlaceholders[10], sqlPlaceholders[11], sqlPlaceholders[12], sqlPlaceholders[13], sqlPlaceholders[14], sqlPlaceholders[15],
sqlPlaceholders[16], sqlPlaceholders[17], sqlPlaceholders[18], sqlPlaceholders[19])
}
func getDeleteUserQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE id = %v`, sqlTableUsers, sqlPlaceholders[0])
}
func getFolderByNameQuery() string {
return fmt.Sprintf(`SELECT %v FROM %v WHERE name = %v`, selectFolderFields, sqlTableFolders, sqlPlaceholders[0])
}
func getAddFolderQuery() string {
return fmt.Sprintf(`INSERT INTO %v (path,used_quota_size,used_quota_files,last_quota_update,name,description,filesystem)
VALUES (%v,%v,%v,%v,%v,%v,%v)`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2],
sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6])
}
func getUpdateFolderQuery() string {
return fmt.Sprintf(`UPDATE %v SET path=%v,description=%v,filesystem=%v WHERE name = %v`, sqlTableFolders, sqlPlaceholders[0],
sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
}
func getDeleteFolderQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE id = %v`, sqlTableFolders, sqlPlaceholders[0])
}
func getUpsertFolderQuery() string {
if config.Driver == MySQLDataProviderName {
return fmt.Sprintf("INSERT INTO %v (`path`,`used_quota_size`,`used_quota_files`,`last_quota_update`,`name`,"+
"`description`,`filesystem`) VALUES (%v,%v,%v,%v,%v,%v,%v) ON DUPLICATE KEY UPDATE "+
"`path`=VALUES(`path`),`description`=VALUES(`description`),`filesystem`=VALUES(`filesystem`)",
sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4],
sqlPlaceholders[5], sqlPlaceholders[6])
}
return fmt.Sprintf(`INSERT INTO %v (path,used_quota_size,used_quota_files,last_quota_update,name,description,filesystem)
VALUES (%v,%v,%v,%v,%v,%v,%v) ON CONFLICT (name) DO UPDATE SET path = EXCLUDED.path,description=EXCLUDED.description,
filesystem=EXCLUDED.filesystem`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2],
sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6])
}
func getClearFolderMappingQuery() string {
return fmt.Sprintf(`DELETE FROM %v WHERE user_id = (SELECT id FROM %v WHERE username = %v)`, sqlTableFoldersMapping,
sqlTableUsers, sqlPlaceholders[0])
}
func getAddFolderMappingQuery() string {
return fmt.Sprintf(`INSERT INTO %v (virtual_path,quota_size,quota_files,folder_id,user_id)
VALUES (%v,%v,%v,(SELECT id FROM %v WHERE name = %v),(SELECT id FROM %v WHERE username = %v))`,
sqlTableFoldersMapping, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlTableFolders,
sqlPlaceholders[3], sqlTableUsers, sqlPlaceholders[4])
}
func getFoldersQuery(order string) string {
return fmt.Sprintf(`SELECT %v FROM %v ORDER BY name %v LIMIT %v OFFSET %v`, selectFolderFields, sqlTableFolders,
order, sqlPlaceholders[0], sqlPlaceholders[1])
}
func getUpdateFolderQuotaQuery(reset bool) string {
if reset {
return fmt.Sprintf(`UPDATE %v SET used_quota_size = %v,used_quota_files = %v,last_quota_update = %v
WHERE name = %v`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
}
return fmt.Sprintf(`UPDATE %v SET used_quota_size = used_quota_size + %v,used_quota_files = used_quota_files + %v,last_quota_update = %v
WHERE name = %v`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
}
func getQuotaFolderQuery() string {
return fmt.Sprintf(`SELECT used_quota_size,used_quota_files FROM %v WHERE name = %v`, sqlTableFolders,
sqlPlaceholders[0])
}
func getRelatedFoldersForUsersQuery(users []User) string {
var sb strings.Builder
for _, u := range users {
if sb.Len() == 0 {
sb.WriteString("(")
} else {
sb.WriteString(",")
}
sb.WriteString(strconv.FormatInt(u.ID, 10))
}
if sb.Len() > 0 {
sb.WriteString(")")
}
return fmt.Sprintf(`SELECT f.id,f.name,f.path,f.used_quota_size,f.used_quota_files,f.last_quota_update,fm.virtual_path,
fm.quota_size,fm.quota_files,fm.user_id,f.filesystem,f.description FROM %v f INNER JOIN %v fm ON f.id = fm.folder_id WHERE
fm.user_id IN %v ORDER BY fm.user_id`, sqlTableFolders, sqlTableFoldersMapping, sb.String())
}
func getRelatedUsersForFoldersQuery(folders []vfs.BaseVirtualFolder) string {
var sb strings.Builder
for _, f := range folders {
if sb.Len() == 0 {
sb.WriteString("(")
} else {
sb.WriteString(",")
}
sb.WriteString(strconv.FormatInt(f.ID, 10))
}
if sb.Len() > 0 {
sb.WriteString(")")
}
return fmt.Sprintf(`SELECT fm.folder_id,u.username FROM %v fm INNER JOIN %v u ON fm.user_id = u.id
WHERE fm.folder_id IN %v ORDER BY fm.folder_id`, sqlTableFoldersMapping, sqlTableUsers, sb.String())
}
func getDatabaseVersionQuery() string {
return "SELECT version from schema_version LIMIT 1"
return fmt.Sprintf("SELECT version from %v LIMIT 1", sqlTableSchemaVersion)
}
func getUpdateDBVersionQuery() string {
return fmt.Sprintf(`UPDATE %v SET version=%v`, sqlTableSchemaVersion, sqlPlaceholders[0])
}

File diff suppressed because it is too large

View File

@@ -1,5 +1,204 @@
# Official Docker image
SFTPGo provides an official Docker image; it is available on both [Docker Hub](https://hub.docker.com/r/drakkan/sftpgo) and on [GitHub Container Registry](https://github.com/users/drakkan/packages/container/package/sftpgo).
## Supported tags and respective Dockerfile links
- [v2.2.3, v2.2, v2, latest](https://github.com/drakkan/sftpgo/blob/v2.2.3/Dockerfile)
- [v2.2.3-alpine, v2.2-alpine, v2-alpine, alpine](https://github.com/drakkan/sftpgo/blob/v2.2.3/Dockerfile.alpine)
- [v2.2.3-slim, v2.2-slim, v2-slim, slim](https://github.com/drakkan/sftpgo/blob/v2.2.3/Dockerfile)
- [v2.2.3-alpine-slim, v2.2-alpine-slim, v2-alpine-slim, alpine-slim](https://github.com/drakkan/sftpgo/blob/v2.2.3/Dockerfile.alpine)
- [v2.2.3-distroless-slim, v2.2-distroless-slim, v2-distroless-slim, distroless-slim](https://github.com/drakkan/sftpgo/blob/v2.2.3/Dockerfile.distroless)
- [edge](../Dockerfile)
- [edge-alpine](../Dockerfile.alpine)
- [edge-slim](../Dockerfile)
- [edge-alpine-slim](../Dockerfile.alpine)
- [edge-distroless-slim](../Dockerfile.distroless)
## How to use the SFTPGo image
### Start a `sftpgo` server instance
Starting a SFTPGo instance is simple:
```shell
docker run --name some-sftpgo -p 8080:8080 -p 2022:2022 -d "drakkan/sftpgo:tag"
```
... where `some-sftpgo` is the name you want to assign to your container, and `tag` is the tag specifying the SFTPGo version you want. See the list above for relevant tags.
Now visit [http://localhost:8080/web/admin](http://localhost:8080/web/admin), replacing `localhost` with the appropriate IP address if SFTPGo is not reachable on localhost, create the first admin and a new SFTPGo user. The SFTP service is available on port 2022.
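To quickly verify the SFTP service you can connect with any SFTP client; a minimal sketch, assuming you created a user named `myuser` (a hypothetical account):
```shell
# connect to the SFTP service published on port 2022
sftp -P 2022 myuser@localhost
```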
If you don't want to persist any files, for example for testing purposes, you can run an SFTPGo instance like this:
```shell
docker run --rm --name some-sftpgo -p 8080:8080 -p 2022:2022 -d "drakkan/sftpgo:tag"
```
If you prefer GitHub Container Registry to Docker Hub replace `drakkan/sftpgo:tag` with `ghcr.io/drakkan/sftpgo:tag`.
### Enable FTP service
FTP is disabled by default; you can enable the FTP service by starting the SFTPGo instance like this:
```shell
docker run --name some-sftpgo \
-p 8080:8080 \
-p 2022:2022 \
-p 2121:2121 \
-p 50000-50100:50000-50100 \
-e SFTPGO_FTPD__BINDINGS__0__PORT=2121 \
-e SFTPGO_FTPD__BINDINGS__0__FORCE_PASSIVE_IP=<your external ip here> \
-d "drakkan/sftpgo:tag"
```
The FTP service is now available on port 2121 and SFTP on port 2022.
You can change the passive ports range (`50000-50100` by default) by setting the environment variables `SFTPGO_FTPD__PASSIVE_PORT_RANGE__START` and `SFTPGO_FTPD__PASSIVE_PORT_RANGE__END`.
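For example, a minimal sketch that moves the passive range to `50200-50250` (an arbitrary range chosen for illustration; the published ports must match the configured range):
```shell
docker run --name some-sftpgo \
  -p 2121:2121 \
  -p 50200-50250:50200-50250 \
  -e SFTPGO_FTPD__BINDINGS__0__PORT=2121 \
  -e SFTPGO_FTPD__PASSIVE_PORT_RANGE__START=50200 \
  -e SFTPGO_FTPD__PASSIVE_PORT_RANGE__END=50250 \
  -d "drakkan/sftpgo:tag"
```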
It is recommended that you provide a certificate and key file to expose FTP over TLS. You should prefer SFTP to FTP even if you configure TLS; please don't blindly enable the old FTP protocol.
### Enable WebDAV service
WebDAV is disabled by default; you can enable the WebDAV service by starting the SFTPGo instance like this:
```shell
docker run --name some-sftpgo \
-p 8080:8080 \
-p 2022:2022 \
-p 10080:10080 \
-e SFTPGO_WEBDAVD__BINDINGS__0__PORT=10080 \
-d "drakkan/sftpgo:tag"
```
The WebDAV service is now available on port 10080 and SFTP on port 2022.
It is recommended that you provide a certificate and key file to expose WebDAV over HTTPS.
### Container shell access and viewing SFTPGo logs
The docker exec command allows you to run commands inside a Docker container. The following command line will give you a shell inside your `sftpgo` container:
```shell
docker exec -it some-sftpgo sh
```
The logs are available through Docker's container log:
```shell
docker logs some-sftpgo
```
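To follow the log output in real time you can add the `-f` flag:
```shell
docker logs -f some-sftpgo
```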
**Note:** the [distroless](../Dockerfile.distroless) image contains only a statically linked sftpgo binary and its minimal runtime dependencies. A shell is not available in this image.
### Where to Store Data
Important note: There are several ways to store data used by applications that run in Docker containers. We encourage users of the SFTPGo images to familiarize themselves with the options available, including:
- Let Docker manage the storage for SFTPGo data by [writing them to disk on the host system using its own internal volume management](https://docs.docker.com/engine/tutorials/dockervolumes/#adding-a-data-volume). This is the default and is easy and fairly transparent to the user. The downside is that the files may be hard to locate for tools and applications that run directly on the host system, i.e. outside containers.
- Create a data directory on the host system (outside the container) and [mount this to a directory visible from inside the container](https://docs.docker.com/engine/tutorials/dockervolumes/#mount-a-host-directory-as-a-data-volume). This places the SFTPGo files in a known location on the host system, and makes it easy for tools and applications on the host system to access the files. The downside is that the user needs to make sure that the directory exists, and that e.g. directory permissions and other security mechanisms on the host system are set up correctly. The SFTPGo image runs using `1000` as UID/GID by default.
The Docker documentation is a good starting point for understanding the different storage options and variations, and there are multiple blogs and forum postings that discuss and give advice in this area. We will simply show the basic procedure here for the latter option above:
1. Create a data directory on a suitable volume on your host system, e.g. `/my/own/sftpgodata`.
2. Create a home directory for the sftpgo container user on your host system e.g. `/my/own/sftpgohome`.
3. Start your SFTPGo container like this:
```shell
docker run --name some-sftpgo \
-p 8080:8090 \
-p 2022:2022 \
--mount type=bind,source=/my/own/sftpgodata,target=/srv/sftpgo \
--mount type=bind,source=/my/own/sftpgohome,target=/var/lib/sftpgo \
-e SFTPGO_HTTPD__BINDINGS__0__PORT=8090 \
-d "drakkan/sftpgo:tag"
```
As you can see SFTPGo uses two main volumes:
- `/srv/sftpgo` to handle persistent data. The default home directory for SFTP/FTP/WebDAV users is `/srv/sftpgo/data/<username>`. Backups are stored in `/srv/sftpgo/backups`
- `/var/lib/sftpgo` is the home directory for the sftpgo system user defined inside the container. This is the container working directory too, host keys will be created here when using the default configuration.
If you want to get fine grained control, you can also mount `/srv/sftpgo/data` and `/srv/sftpgo/backups` as separate volumes instead of mounting `/srv/sftpgo`.
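A minimal sketch of this fine-grained variant, assuming separate host directories `/my/own/sftpgodata`, `/my/own/sftpgobackups` and `/my/own/sftpgohome` (illustrative paths):
```shell
docker run --name some-sftpgo \
  -p 8080:8080 \
  -p 2022:2022 \
  --mount type=bind,source=/my/own/sftpgodata,target=/srv/sftpgo/data \
  --mount type=bind,source=/my/own/sftpgobackups,target=/srv/sftpgo/backups \
  --mount type=bind,source=/my/own/sftpgohome,target=/var/lib/sftpgo \
  -d "drakkan/sftpgo:tag"
```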
### Configuration
The runtime configuration can be customized via environment variables that you can set passing the `-e` option to the `docker run` command or inside the `environment` section if you are using [docker stack deploy](https://docs.docker.com/engine/reference/commandline/stack_deploy/) or [docker-compose](https://github.com/docker/compose).
Please take a look [here](../docs/full-configuration.md#environment-variables) to learn how to configure SFTPGo via environment variables.
Alternatively, you can mount your custom configuration file to `/var/lib/sftpgo` or `/var/lib/sftpgo/.config/sftpgo`.
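As a minimal sketch, this is how you could override a single option via an environment variable (the idle timeout here is just one example; any other documented option works the same way):
```shell
docker run --name some-sftpgo \
  -p 8080:8080 \
  -p 2022:2022 \
  -e SFTPGO_COMMON__IDLE_TIMEOUT=30 \
  -d "drakkan/sftpgo:tag"
```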
### Loading initial data
Initial data can be loaded in the following ways:
- via the `--loaddata-from` flag or the `SFTPGO_LOADDATA_FROM` environment variable
- by providing a dump file to the memory provider
Please take a look [here](../docs/full-configuration.md) for more details.
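For example, a minimal sketch that loads a previously exported dump at startup; `/my/own/sftpgodata/backup.json` is a hypothetical path:
```shell
docker run --name some-sftpgo \
  -p 8080:8080 \
  -p 2022:2022 \
  --mount type=bind,source=/my/own/sftpgodata,target=/srv/sftpgo \
  -e SFTPGO_LOADDATA_FROM=/srv/sftpgo/backup.json \
  -d "drakkan/sftpgo:tag"
```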
### Running as an arbitrary user
The SFTPGo image runs using `1000` as UID/GID by default. If you know the permissions of your data and/or configuration directory are already set appropriately or you have need of running SFTPGo with a specific UID/GID, it is possible to invoke this image with `--user` set to any value (other than `root/0`) in order to achieve the desired access/configuration:
```shell
$ ls -lnd data
drwxr-xr-x 2 1100 1100 6 7 nov 09.09 data
$ ls -lnd config
drwxr-xr-x 2 1100 1100 6 7 nov 09.19 config
```
With the above directory permissions, you can start a SFTPGo instance like this:
```shell
docker run --name some-sftpgo \
--user 1100:1100 \
-p 8080:8080 \
-p 2022:2022 \
--mount type=bind,source="${PWD}/data",target=/srv/sftpgo \
--mount type=bind,source="${PWD}/config",target=/var/lib/sftpgo \
-d "drakkan/sftpgo:tag"
```
Alternatively, build your own image using the official one as a base; here is a sample Dockerfile:
```dockerfile
FROM drakkan/sftpgo:tag
USER root
RUN chown -R 1100:1100 /etc/sftpgo && chown 1100:1100 /var/lib/sftpgo /srv/sftpgo
USER 1100:1100
```
**Note:** the above Dockerfile will not work if you use the [distroless](../Dockerfile.distroless) image as base since the `chown` command is not available there.
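To build and run the customized image, a minimal sketch (the image name `my-sftpgo` is arbitrary):
```shell
docker build -t my-sftpgo .
docker run --name some-sftpgo --user 1100:1100 -p 8080:8080 -p 2022:2022 -d my-sftpgo
```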
## Image Variants
The `sftpgo` image comes in many flavors, each designed for a specific use case. The `edge`, `edge-slim`, `edge-alpine`, `edge-alpine-slim` and `edge-distroless-slim` tags are updated after each new commit.
### `sftpgo:<version>`
This is the de facto image, based on [Debian](https://www.debian.org/), available in [the `debian` official image](https://hub.docker.com/_/debian). If you are unsure about what your needs are, you probably want to use this one.
### `sftpgo:<version>-alpine`
This image is based on the popular [Alpine Linux project](https://alpinelinux.org/), available in [the `alpine` official image](https://hub.docker.com/_/alpine). Alpine Linux is much smaller than most distribution base images (~5MB), and thus leads to much slimmer images in general.
This variant is highly recommended when final image size being as small as possible is desired. The main caveat to note is that it does use [musl libc](https://musl.libc.org/) instead of [glibc and friends](https://www.etalabs.net/compare_libcs.html), so certain software might run into issues depending on the depth of their libc requirements. However, most software doesn't have an issue with this, so this variant is usually a very safe choice. See [this Hacker News comment thread](https://news.ycombinator.com/item?id=10782897) for more discussion of the issues that might arise and some pro/con comparisons of using Alpine-based images.
### `sftpgo:<version>-distroless`
This image is based on the popular [Distroless project](https://github.com/GoogleContainerTools/distroless). We use the latest Debian based distroless image as base.
The distroless variant contains only a statically linked sftpgo binary and its minimal runtime dependencies, so it doesn't allow shell access (no shell is installed).
SQLite support is disabled since it requires CGO, and therefore a C runtime, which is not installed.
The default data provider is `bolt`; all the supported data providers except `sqlite` work.
We only provide the slim variant, so the optional `git` dependency is not available.
### `sftpgo:<suite>-slim`
These tags provide a slimmer image that does not include the optional `git` dependency.
## Helm Chart
A Helm chart is [available](https://artifacthub.io/packages/helm/sagikazarmark/sftpgo). You can find the source code [here](https://github.com/sagikazarmark/helm-charts/tree/master/charts/sftpgo).
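A minimal installation sketch; the repository URL below is taken from the chart's published documentation and may change, so treat it as an assumption:
```shell
helm repo add sagikazarmark https://charts.sagikazarmark.dev
helm install sftpgo sagikazarmark/sftpgo
```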

View File

@@ -1,8 +0,0 @@
FROM debian:latest
LABEL maintainer="nicola.murino@gmail.com"
RUN apt-get update && apt-get install -y curl python3-requests python3-pygments
RUN curl https://raw.githubusercontent.com/drakkan/sftpgo/master/scripts/sftpgo_api_cli.py --output /usr/bin/sftpgo_api_cli.py
ENTRYPOINT ["python3", "/usr/bin/sftpgo_api_cli.py" ]
CMD []

View File

@@ -0,0 +1,28 @@
#!/usr/bin/env bash
SFTPGO_PUID=${SFTPGO_PUID:-1000}
SFTPGO_PGID=${SFTPGO_PGID:-1000}
if [ "$1" = 'sftpgo' ]; then
if [ "$(id -u)" = '0' ]; then
for DIR in "/etc/sftpgo" "/var/lib/sftpgo" "/srv/sftpgo"
do
DIR_UID=$(stat -c %u ${DIR})
DIR_GID=$(stat -c %g ${DIR})
if [ ${DIR_UID} != ${SFTPGO_PUID} ] || [ ${DIR_GID} != ${SFTPGO_PGID} ]; then
echo '{"level":"info","time":"'`date +%Y-%m-%dT%H:%M:%S.000`'","sender":"entrypoint","message":"change owner for \"'${DIR}'\" UID: '${SFTPGO_PUID}' GID: '${SFTPGO_PGID}'"}'
if [ ${DIR} = "/etc/sftpgo" ]; then
chown -R ${SFTPGO_PUID}:${SFTPGO_PGID} ${DIR}
else
chown ${SFTPGO_PUID}:${SFTPGO_PGID} ${DIR}
fi
fi
done
echo '{"level":"info","time":"'`date +%Y-%m-%dT%H:%M:%S.000`'","sender":"entrypoint","message":"run as UID: '${SFTPGO_PUID}' GID: '${SFTPGO_PGID}'"}'
exec su-exec ${SFTPGO_PUID}:${SFTPGO_PGID} "$@"
fi
exec "$@"
fi
exec "$@"

docker/scripts/entrypoint.sh Executable file
View File

@@ -0,0 +1,32 @@
#!/usr/bin/env bash
SFTPGO_PUID=${SFTPGO_PUID:-1000}
SFTPGO_PGID=${SFTPGO_PGID:-1000}
if [ "$1" = 'sftpgo' ]; then
if [ "$(id -u)" = '0' ]; then
getent passwd ${SFTPGO_PUID} > /dev/null
HAS_PUID=$?
getent group ${SFTPGO_PGID} > /dev/null
HAS_PGID=$?
if [ ${HAS_PUID} -ne 0 ] || [ ${HAS_PGID} -ne 0 ]; then
echo '{"level":"info","time":"'`date +%Y-%m-%dT%H:%M:%S.%3N`'","sender":"entrypoint","message":"prepare to run as UID: '${SFTPGO_PUID}' GID: '${SFTPGO_PGID}'"}'
if [ ${HAS_PGID} -ne 0 ]; then
echo '{"level":"info","time":"'`date +%Y-%m-%dT%H:%M:%S.%3N`'","sender":"entrypoint","message":"set GID to: '${SFTPGO_PGID}'"}'
groupmod -g ${SFTPGO_PGID} sftpgo
fi
if [ ${HAS_PUID} -ne 0 ]; then
echo '{"level":"info","time":"'`date +%Y-%m-%dT%H:%M:%S.%3N`'","sender":"entrypoint","message":"set UID to: '${SFTPGO_PUID}'"}'
usermod -u ${SFTPGO_PUID} sftpgo
fi
chown -R ${SFTPGO_PUID}:${SFTPGO_PGID} /etc/sftpgo
chown ${SFTPGO_PUID}:${SFTPGO_PGID} /var/lib/sftpgo /srv/sftpgo
fi
echo '{"level":"info","time":"'`date +%Y-%m-%dT%H:%M:%S.%3N`'","sender":"entrypoint","message":"run as UID: '${SFTPGO_PUID}' GID: '${SFTPGO_PGID}'"}'
exec gosu ${SFTPGO_PUID}:${SFTPGO_PGID} "$@"
fi
exec "$@"
fi
exec "$@"

View File

@@ -1,21 +1,21 @@
FROM golang:alpine as builder
RUN apk add --no-cache git gcc g++ ca-certificates \
&& go get -d github.com/drakkan/sftpgo
&& go get -v -d github.com/drakkan/sftpgo
WORKDIR /go/src/github.com/drakkan/sftpgo
ARG TAG
ARG FEATURES
# Use --build-arg TAG=LATEST for latest tag. Use e.g. --build-arg TAG=v1.0.0 for a specific tag/commit. Otherwise HEAD (master) is built.
RUN git checkout $(if [ "${TAG}" = LATEST ]; then echo `git rev-list --tags --max-count=1`; elif [ -n "${TAG}" ]; then echo "${TAG}"; else echo HEAD; fi)
RUN go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -v -o /go/bin/sftpgo
FROM alpine:latest
RUN apk add --no-cache ca-certificates su-exec \
&& mkdir -p /data /etc/sftpgo /srv/sftpgo/config /srv/sftpgo/web /srv/sftpgo/backups
# git and rsync are optional, uncomment the next line to add support for them if needed.
#RUN apk add --no-cache git rsync
COPY --from=builder /go/bin/sftpgo /bin/
COPY --from=builder /go/src/github.com/drakkan/sftpgo/sftpgo.json /etc/sftpgo/sftpgo.json
@@ -27,5 +27,24 @@ RUN chmod +x /bin/entrypoint.sh
VOLUME [ "/data", "/srv/sftpgo/config", "/srv/sftpgo/backups" ]
EXPOSE 2022 8080
# uncomment the following settings to enable FTP support
#ENV SFTPGO_FTPD__BIND_PORT=2121
#ENV SFTPGO_FTPD__FORCE_PASSIVE_IP=<your FTP visibile IP here>
#EXPOSE 2121
# we need to expose the passive ports range too
#EXPOSE 50000-50100
# it is a good idea to provide certificates to enable FTPS too
#ENV SFTPGO_FTPD__CERTIFICATE_FILE=/srv/sftpgo/config/mycert.crt
#ENV SFTPGO_FTPD__CERTIFICATE_KEY_FILE=/srv/sftpgo/config/mycert.key
# uncomment the following setting to enable WebDAV support
#ENV SFTPGO_WEBDAVD__BIND_PORT=8090
# it is a good idea to provide certificates to enable WebDAV over HTTPS
#ENV SFTPGO_WEBDAVD__CERTIFICATE_FILE=${CONFIG_DIR}/mycert.crt
#ENV SFTPGO_WEBDAVD__CERTIFICATE_KEY_FILE=${CONFIG_DIR}/mycert.key
ENTRYPOINT ["/bin/entrypoint.sh"]
CMD ["serve"]
CMD ["serve"]

View File

@@ -1,27 +1,38 @@
# SFTPGo with Docker and Alpine
:warning: The recommended way to run SFTPGo on Docker is to use the official [images](https://hub.docker.com/r/drakkan/sftpgo). The documentation here is now obsolete.
This Dockerfile builds an image to host multiple instances of SFTPGo started with different users.
## Example
> 1003 is a custom uid:gid for this instance of SFTPGo
```bash
# Prereq on docker host
sudo groupadd -g 1003 sftpgrp && \
sudo useradd -u 1003 -g 1003 sftpuser -d /home/sftpuser/ && \
sudo -u sftpuser mkdir /home/sftpuser/{conf,data} && \
curl https://raw.githubusercontent.com/drakkan/sftpgo/master/sql/sqlite/20190828.sql | sqlite3 /home/sftpuser/conf/sftpgo.db && \
curl https://raw.githubusercontent.com/drakkan/sftpgo/master/sql/sqlite/20191112.sql | sqlite3 /home/sftpuser/conf/sftpgo.db && \
curl https://raw.githubusercontent.com/drakkan/sftpgo/master/sql/sqlite/20191230.sql | sqlite3 /home/sftpuser/conf/sftpgo.db && \
curl https://raw.githubusercontent.com/drakkan/sftpgo/master/sql/sqlite/20200116.sql | sqlite3 /home/sftpuser/conf/sftpgo.db && \
curl https://raw.githubusercontent.com/drakkan/sftpgo/master/sftpgo.json -o /home/sftpuser/conf/sftpgo.json
# Edit sftpgo.json as you need
# Get and build SFTPGo image.
# Add --build-arg TAG=LATEST to build the latest tag or e.g. TAG=v1.0.0 for a specific tag/commit.
# Add --build-arg FEATURES=<build features comma separated> to specify the features to build.
git clone https://github.com/drakkan/sftpgo.git && \
cd sftpgo && \
sudo docker build -t sftpgo docker/sftpgo/alpine/
# Initialize the configured provider. For PostgreSQL and MySQL providers you need to create the configured database and the "initprovider" command will create the required tables.
sudo docker run --name sftpgo \
-e PUID=1003 \
-e GUID=1003 \
-v /home/sftpuser/conf/:/srv/sftpgo/config \
sftpgo initprovider -c /srv/sftpgo/config
# Start the image
sudo docker rm sftpgo && sudo docker run --name sftpgo \
-e SFTPGO_LOG_FILE_PATH= \
-e SFTPGO_CONFIG_DIR=/srv/sftpgo/config \
-e SFTPGO_HTTPD__TEMPLATES_PATH=/srv/sftpgo/web/templates \
@@ -36,11 +47,15 @@ sudo docker run --name sftpgo \
-v /home/sftpuser/backups:/srv/sftpgo/backups \
sftpgo
```
If you want to enable FTP/S you also need to publish the FTP port and the FTP passive port range, defined in your `Dockerfile`, by adding, for example, the following options to the `docker run` command: `-p 2121:2121 -p 50000-50100:50000-50100`. The same goes for WebDAV: you need to publish the configured port.
The script `entrypoint.sh` makes sure to correct the permissions of directories and start the process with the right user.
Several images can be run with different parameters.
## Custom systemd script
An example systemd script is available [here](sftpgo.service), with the `Environment` parameter used to set `PUID` and `GUID`.
The `WorkingDirectory` parameter must exist and contain one file in this directory, like `sftpgo-${PUID}.env`, corresponding to the variable file for the SFTPGo instance.

View File

@@ -1,5 +1,5 @@
[Unit]
Description=SFTPGo server
After=docker.service
[Service]
@@ -8,19 +8,23 @@ Group=root
WorkingDirectory=/etc/sftpgo
Environment=PUID=1003
Environment=GUID=1003
EnvironmentFile=-/etc/sysconfig/sftpgo.env
ExecStartPre=-docker kill sftpgo
ExecStartPre=-docker rm sftpgo
ExecStart=docker run --name sftpgo \
--env-file sftpgo-${PUID}.env \
-e PUID=${PUID} \
-e GUID=${GUID} \
-e SFTPGO_LOG_FILE_PATH= \
-e SFTPGO_CONFIG_DIR=/srv/sftpgo/config \
-e SFTPGO_HTTPD__TEMPLATES_PATH=/srv/sftpgo/web/templates \
-e SFTPGO_HTTPD__STATIC_FILES_PATH=/srv/sftpgo/web/static \
-e SFTPGO_HTTPD__BACKUPS_PATH=/srv/sftpgo/backups \
-p 8080:8080 \
-p 2022:2022 \
-v /home/sftpuser/conf/:/srv/sftpgo/config \
-v /home/sftpuser/data:/data \
-v /home/sftpuser/backups:/srv/sftpgo/backups \
sftpgo
ExecStop=docker stop sftpgo
SyslogIdentifier=sftpgo

View File

@@ -1,19 +1,22 @@
# we use a multi stage build to have a separate build and run env
FROM golang:latest as buildenv
LABEL maintainer="nicola.murino@gmail.com"
RUN go get -v -d github.com/drakkan/sftpgo
WORKDIR /go/src/github.com/drakkan/sftpgo
ARG TAG
ARG FEATURES
# Use --build-arg TAG=LATEST for latest tag. Use e.g. --build-arg TAG=v1.0.0 for a specific tag/commit. Otherwise HEAD (master) is built.
RUN git checkout $(if [ "${TAG}" = LATEST ]; then echo `git rev-list --tags --max-count=1`; elif [ -n "${TAG}" ]; then echo "${TAG}"; else echo HEAD; fi)
RUN go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -v -o sftpgo
# now define the run environment
FROM debian:latest
# ca-certificates is needed for Cloud Storage Support and for HTTPS/FTPS.
RUN apt-get update && apt-get install -y ca-certificates && apt-get clean
# git and rsync are optional, uncomment the next line to add support for them if needed.
#RUN apt-get update && apt-get install -y git rsync && apt-get clean
ARG BASE_DIR=/app
ARG DATA_REL_DIR=data
@@ -37,7 +40,7 @@ ENV WEB_DIR=${BASE_DIR}/${WEB_REL_PATH}
RUN mkdir -p ${DATA_DIR} ${CONFIG_DIR} ${WEB_DIR} ${BACKUPS_DIR}
RUN groupadd --system -g ${GID} ${GROUPNAME}
RUN useradd --system --create-home --no-log-init --home-dir ${HOME_DIR} --comment "SFTPGo user" --shell /usr/sbin/nologin --gid ${GID} --uid ${UID} ${USERNAME}
WORKDIR ${HOME_DIR}
RUN mkdir -p bin .config/sftpgo
@@ -68,5 +71,23 @@ ENV SFTPGO_HTTPD__STATIC_FILES_PATH=${WEB_DIR}/static
ENV SFTPGO_DATA_PROVIDER__USERS_BASE_DIR=${DATA_DIR}
ENV SFTPGO_HTTPD__BACKUPS_PATH=${BACKUPS_DIR}
# uncomment the following settings to enable FTP support
#ENV SFTPGO_FTPD__BIND_PORT=2121
#ENV SFTPGO_FTPD__FORCE_PASSIVE_IP=<your FTP visible IP here>
#EXPOSE 2121
# we need to expose the passive ports range too
#EXPOSE 50000-50100
# it is a good idea to provide certificates to enable FTPS too
#ENV SFTPGO_FTPD__CERTIFICATE_FILE=${CONFIG_DIR}/mycert.crt
#ENV SFTPGO_FTPD__CERTIFICATE_KEY_FILE=${CONFIG_DIR}/mycert.key
# uncomment the following setting to enable WebDAV support
#ENV SFTPGO_WEBDAVD__BIND_PORT=8090
# it is a good idea to provide certificates to enable WebDAV over HTTPS
#ENV SFTPGO_WEBDAVD__CERTIFICATE_FILE=${CONFIG_DIR}/mycert.crt
#ENV SFTPGO_WEBDAVD__CERTIFICATE_KEY_FILE=${CONFIG_DIR}/mycert.key
ENTRYPOINT ["sftpgo"]
CMD ["serve"]
CMD ["serve"]

View File

@@ -1,4 +1,6 @@
# Dockerfile based on Debian stable
:warning: The recommended way to run SFTPGo on Docker is to use the official [images](https://hub.docker.com/r/drakkan/sftpgo). The documentation here is now obsolete.
Please read the comments inside the `Dockerfile` to learn how to customize things for your setup.
@@ -8,15 +10,50 @@ You can build the container image using `docker build`, for example:
docker build -t="drakkan/sftpgo" .
```
This will build master of github.com/drakkan/sftpgo.
To build the latest tag you can add `--build-arg TAG=LATEST` and to build a specific tag/commit you can use for example `TAG=v1.0.0`, like this:
```bash
docker build -t="drakkan/sftpgo" --build-arg TAG=v1.0.0 .
```
To specify the features to build you can add `--build-arg FEATURES=<build features comma separated>`. For example you can disable SQLite and S3 support like this:
```bash
docker build -t="drakkan/sftpgo" --build-arg FEATURES=nosqlite,nos3 .
```
Please take a look at the [build from source](./../../../docs/build-from-source.md) documentation for the complete list of the features that can be disabled.
Now create the required folders on the host system, for example:
```bash
sudo mkdir -p /srv/sftpgo/data /srv/sftpgo/config /srv/sftpgo/backups
```
and give write access to them to the UID/GID defined inside the `Dockerfile`. You can choose to create a new user, on the host system, with a matching UID/GID pair, or simply do something like this:
```bash
sudo chown -R <UID>:<GID> /srv/sftpgo/data /srv/sftpgo/config /srv/sftpgo/backups
```
Download the default configuration file and edit it as you need:
```bash
sudo curl https://raw.githubusercontent.com/drakkan/sftpgo/master/sftpgo.json -o /srv/sftpgo/config/sftpgo.json
```
Initialize the configured provider. For PostgreSQL and MySQL providers you need to create the configured database and the `initprovider` command will create the required tables:
```bash
docker run --name sftpgo --mount type=bind,source=/srv/sftpgo/config,target=/app/config drakkan/sftpgo initprovider -c /app/config
```
and finally you can run the image using something like this:
```bash
docker rm sftpgo && docker run --name sftpgo -p 8080:8080 -p 2022:2022 --mount type=bind,source=/srv/sftpgo/data,target=/app/data --mount type=bind,source=/srv/sftpgo/config,target=/app/config --mount type=bind,source=/srv/sftpgo/backups,target=/app/backups drakkan/sftpgo
```
If you want to enable FTP/S you also need to publish the FTP port and the FTP passive port range defined in your `Dockerfile`, by adding, for example, the following options to the `docker run` command: `-p 2121:2121 -p 50000-50100:50000-50100`. The same goes for WebDAV: you need to publish the configured port.

View File

@@ -1,61 +1,22 @@
# Account's configuration properties
Please take a look at the [OpenAPI schema](../openapi/openapi.yaml) for the exact definitions of user, folder and admin fields.
If you need an example you can export a dump using the Web Admin or by invoking the `dumpdata` endpoint directly. You need to obtain an access token first, for example:
```shell
$ curl "http://admin:password@127.0.0.1:8080/api/v2/token"
{"access_token":"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOlsiQVBJIl0sImV4cCI6MTYxMzMzNTI2MSwianRpIjoiYzBrb2gxZmNkcnBjaHNzMGZwZmciLCJuYmYiOjE2MTMzMzQ2MzEsInBlcm1pc3Npb25zIjpbIioiXSwic3ViIjoiYUJ0SHUwMHNBUmxzZ29yeEtLQ1pZZWVqSTRKVTlXbThHSGNiVWtWVmc1TT0iLCJ1c2VybmFtZSI6ImFkbWluIn0.WiyqvUF-92zCr--y4Q_sxn-tPnISFzGZd_exsG-K7ME","expires_at":"2021-02-14T20:41:01Z"}
curl -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOlsiQVBJIl0sImV4cCI6MTYxMzMzNTI2MSwianRpIjoiYzBrb2gxZmNkcnBjaHNzMGZwZmciLCJuYmYiOjE2MTMzMzQ2MzEsInBlcm1pc3Npb25zIjpbIioiXSwic3ViIjoiYUJ0SHUwMHNBUmxzZ29yeEtLQ1pZZWVqSTRKVTlXbThHSGNiVWtWVmc1TT0iLCJ1c2VybmFtZSI6ImFkbWluIn0.WiyqvUF-92zCr--y4Q_sxn-tPnISFzGZd_exsG-K7ME" "http://127.0.0.1:8080/api/v2/dumpdata?output-data=1"
```
the dump is a JSON with all SFTPGo data including users, folders, admins.
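Since the dump is plain JSON, you can inspect it with standard tools. A hedged sketch using `jq` (it assumes the dump object is returned directly in the response body and that a valid token is stored in `$TOKEN`):
```bash
# list the usernames contained in the dump
curl -s -H "Authorization: Bearer $TOKEN" \
  "http://127.0.0.1:8080/api/v2/dumpdata?output-data=1" | jq -r '.users[].username'
```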
These properties are stored inside the configured data provider.
SFTPGo supports checking passwords stored with bcrypt, pbkdf2, md5crypt and sha512crypt too. For pbkdf2 the supported format is `$<algo>$<iterations>$<salt>$<hashed pwd base64 encoded>`, where algo is `pbkdf2-sha1`, `pbkdf2-sha256`, `pbkdf2-sha512` or `pbkdf2-b64salt-sha256`. For example the pbkdf2-sha256 of the word `password` using 150000 iterations and `E86a9YMX3zC7` as salt must be stored as `$pbkdf2-sha256$150000$E86a9YMX3zC7$R5J62hsSq+pYw00hLLPKBbcGXmq7fj5+/M0IFoYtZbo=`. In the pbkdf2 variant with b64salt the salt is base64 encoded. For bcrypt the format must be the one supported by golang's [crypto/bcrypt](https://godoc.org/golang.org/x/crypto/bcrypt) package, for example the password `secret` with cost `14` must be stored as `$2a$14$ajq8Q7fbtFRQvXpdCq7Jcuy.Rx1h/L4J60Otx.gyNLbAYctGMJ9tK`. For md5crypt and sha512crypt we support the format used in `/etc/shadow` with the `$1$` and `$6$` prefix; this is useful if you are migrating from Unix system user accounts. We support Apache md5crypt (`$apr1$` prefix) too. Using the REST API you can send a password hashed as bcrypt, pbkdf2, md5crypt or sha512crypt and it will be stored as is.
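For example, since `/etc/shadow`-style hashes are accepted as is, you can generate one with `openssl` and set it as the user's password via the REST API. A sketch (`openssl passwd -6` requires OpenSSL 1.1.1 or later):
```bash
# sha512crypt hash of the word "secret", stored by SFTPGo as is ($6$ prefix)
openssl passwd -6 -salt E86a9YMX3zC7 secret
# Apache md5crypt variant ($apr1$ prefix)
openssl passwd -apr1 -salt E86a9YMX secret
```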
If you want to use your existing accounts, you have these options:
- If your accounts are already stored inside a supported database, you can create a database view. Since a view is read only, you have to disable user management and quota tracking so SFTPGo will never try to write to the view
- you can import your users inside SFTPGo. Take a look at the [convert users](../examples/convertusers) script, it can convert and import users from Linux system users and Pure-FTPd/ProFTPD virtual users
- you can use an external authentication program

View File

@@ -0,0 +1,20 @@
# Azure Blob Storage backend
To connect SFTPGo to Azure Blob Storage, you need to specify the access credentials. Azure Blob Storage has different options for credentials; we support:
1. Providing an account name and account key.
2. Providing a shared access signature (SAS).
If you authenticate using account and key you also need to specify a container. The endpoint can generally be left blank, the default is `blob.core.windows.net`.
If you provide a SAS URL the container is optional and if given it must match the one inside the shared access signature.
If you want to connect to an emulator such as [Azurite](https://github.com/Azure/Azurite) you need to provide the account name/key pair and an endpoint prefixed with the protocol, for example `http://127.0.0.1:10000`.
Specifying a different `key_prefix`, you can assign different "folders" of the same container to different users. This is similar to a chroot directory for local filesystem. Each SFTPGo user can only access the assigned folder and its contents. The folder identified by `key_prefix` does not need to be pre-created.
For multipart uploads you can customize the parts size and the upload concurrency. Please note that if the upload bandwidth between the client and SFTPGo is greater than the upload bandwidth between SFTPGo and the Azure Blob service, the client will have to wait for the last parts to be uploaded to Azure after it finishes uploading the file to SFTPGo, and it may time out. Keep this in mind if you customize these parameters.
The configured container must exist.
This backend is very similar to the [S3](./s3.md) backend and it has the same limitations. As with S3, `chtime` will fail with the default configuration; you can install the [metadata plugin](https://github.com/sftpgo/sftpgo-plugin-metadata) to make it work and thus be able to preserve/change file modification times.

View File

@@ -1,34 +1,40 @@
# Build SFTPGo from source
Download the sources and use `go build`.
Make sure [Git](https://git-scm.com/downloads) is installed on your machine and in your system's `PATH`.
The following build tags are available:
- `nogcs`, disable Google Cloud Storage backend, default enabled
- `nos3`, disable S3 Compatible Object Storage backends, default enabled
- `noazblob`, disable Azure Blob Storage backend, default enabled
- `nobolt`, disable Bolt data provider, default enabled
- `nomysql`, disable MySQL data provider, default enabled
- `nopgsql`, disable PostgreSQL data provider, default enabled
- `nosqlite`, disable SQLite data provider, default enabled
- `noportable`, disable portable mode, default enabled
- `nometrics`, disable Prometheus metrics, default enabled
If no build tag is specified the build will include the default features.
The optional [SQLite driver](https://github.com/mattn/go-sqlite3 "go-sqlite3") is a `CGO` package and so it requires a `C` compiler at build time.
On Linux and macOS, a compiler is easy to install or already installed. On Windows, you need to download [MinGW-w64](https://sourceforge.net/projects/mingw-w64/files/) and build SFTPGo from its command prompt.
The compiler is a build time only dependency. It is not required at runtime.
If you don't need SQLite, you can also get/build SFTPGo with the environment variable `CGO_ENABLED` set to 0. This way SQLite support is disabled while the PostgreSQL, MySQL, bbolt and memory data providers keep working, and you don't need a `C` compiler for building.
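For example, a sketch of a pure-Go build:
```bash
# no C compiler required; SQLite support is disabled automatically
CGO_ENABLED=0 go build -ldflags "-s -w" -o sftpgo
```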
Version info, such as git commit and build date, can be embedded setting the following string variables at build time:
- `github.com/drakkan/sftpgo/v2/version.commit`
- `github.com/drakkan/sftpgo/v2/version.date`
For example, you can build using the following command:
```bash
go build -tags nogcs,nos3,nosqlite -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
```
You should get a version that includes git commit, build date and available features like this one:
```bash
$ ./sftpgo -v
SFTPGo 0.9.6-dev-b30614e-dirty-2020-06-19T11:04:56Z +metrics -gcs -s3 +bolt +mysql +pgsql -sqlite +portable
```

View File

@@ -0,0 +1,47 @@
# Check password hook
This hook allows you to externally check the provided password. Its main use case is to easily support things like password+OTP for protocols without keyboard interactive support, such as FTP and WebDAV. You can ask your users to login using a string consisting of a fixed password and a One Time Token; you verify the token inside the hook and ask SFTPGo to verify the fixed part.
The same thing can be achieved using [External authentication](./external-auth.md) but using this hook is simpler in some use cases.
The `check password hook` can be defined as the absolute path of your program or an HTTP URL.
The expected response is a JSON serialized struct containing the following keys:
- `status` integer. 0 means KO, 1 means OK, 2 means partial success
- `to_verify` string. For `status` = 2 SFTPGo will check this password against the one stored inside the SFTPGo data provider
If the hook defines an external program it can read the following environment variables:
- `SFTPGO_AUTHD_USERNAME`
- `SFTPGO_AUTHD_PASSWORD`
- `SFTPGO_AUTHD_IP`
- `SFTPGO_AUTHD_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`, `HTTP`
Previous global environment variables aren't cleared when the script is called. The content of these variables is _not_ quoted. They may contain special characters. They are under the control of a possibly malicious remote user.
The program must write, on its standard output, the expected JSON serialized response described above.
If the hook is an HTTP URL then it will be invoked as HTTP POST. The request body will contain a JSON serialized struct with the following fields:
- `username`
- `password`
- `ip`
- `protocol`, possible values are `SSH`, `FTP`, `DAV`
If authentication succeeds the HTTP response code must be 200 and the response body must contain the expected JSON serialized response described above.
The program hook must finish within 30 seconds; the HTTP hook uses the timeout from the global configuration for HTTP clients.
You can also restrict the hook scope using the `check_password_scope` configuration key:
- `0` means all supported protocols.
- `1` means SSH only
- `2` means FTP only
- `4` means WebDAV only
You can combine the scopes. For example, 6 means FTP and WebDAV.
You can disable the hook on a per-user basis.
An example check password program allowing 2FA using password + one time token can be found inside the source tree [checkpwd](../examples/OTP/authy/checkpwd) directory.
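Here is a minimal sketch of such a hook in bash. The trailing 6-digit token and the stubbed OTP check are illustrative assumptions; a real hook would validate the token against your OTP service and JSON-encode the password properly (for example with `jq`):
```bash
#!/bin/bash
# assumption: the client sends <fixed password><6 digit OTP> as the password
FIXED_PART="${SFTPGO_AUTHD_PASSWORD%??????}"
OTP="${SFTPGO_AUTHD_PASSWORD: -6}"

# stub: replace with a real OTP verification for $SFTPGO_AUTHD_USERNAME
if [[ "$OTP" == "123456" ]]; then
  # partial success: SFTPGo will verify the fixed part against the stored password
  echo "{\"status\":2,\"to_verify\":\"$FIXED_PART\"}"
else
  # KO: authentication fails
  echo '{"status":0}'
fi
```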

View File

@@ -1,78 +1,116 @@
# Custom Actions
The `actions` struct inside the "sftpd" configuration section allows to configure the actions for file operations and SSH commands.
SFTPGo can notify filesystem and provider events using custom actions. A custom action can be an external program or an HTTP URL.
## Filesystem events
The `actions` struct inside the `common` configuration section allows you to configure the actions for file operations and SSH commands.
The `hook` can be defined as the absolute path of your program or an HTTP URL.
The following `actions` are supported:
- `download`
- `pre-download`
- `upload`
- `pre-upload`
- `delete`
- `pre-delete`
- `rename`
- `mkdir`
- `rmdir`
- `ssh_cmd`
The `upload` condition includes both uploads to new files and overwrite of existing ones. If an upload is aborted for quota limits SFTPGo tries to remove the partial file, so if the notification reports a zero size file and a quota exceeded error the file has been deleted. The `ssh_cmd` condition will be triggered after a command is successfully executed via SSH. `scp` will trigger the `download` and `upload` conditions and not `ssh_cmd`.
For cloud backends directories are virtual, they are created implicitly when you upload a file and are implicitly removed when the last file within a directory is removed. The `mkdir` and `rmdir` notifications are sent only when a directory is explicitly created or removed.
The notification will indicate if an error is detected and so, for example, a partial file is uploaded.
The `pre-delete` action, if defined, will be called just before files deletion. If the external command completes with a zero exit status or the HTTP notification response code is `200` then SFTPGo will assume that the file was already deleted/moved and so it will not try to remove the file and it will not execute the hook defined for the `delete` action.
The `pre-download` and `pre-upload` actions will be called before downloads and uploads. If the external command completes with a zero exit status or the HTTP notification response code is `200` then SFTPGo allows the operation, otherwise the client will get a permission denied error.
If the `hook` defines a path to an external program, then this program can read the following environment variables:
- `SFTPGO_ACTION`, supported action
- `SFTPGO_ACTION_USERNAME`
- `SFTPGO_ACTION_PATH`, is the full filesystem path, can be empty for some ssh commands
- `SFTPGO_ACTION_TARGET`, full filesystem path, non-empty for `rename` `SFTPGO_ACTION` and for some SSH commands
- `SFTPGO_ACTION_VIRTUAL_PATH`, virtual path, seen by SFTPGo users
- `SFTPGO_ACTION_VIRTUAL_TARGET`, virtual target path, seen by SFTPGo users
- `SFTPGO_ACTION_SSH_CMD`, non-empty for `ssh_cmd` `SFTPGO_ACTION`
- `SFTPGO_ACTION_FILE_SIZE`, non-zero for `pre-upload`,`upload`, `download` and `delete` actions if the file size is greater than `0`
- `SFTPGO_ACTION_FS_PROVIDER`, `0` for local filesystem, `1` for S3 backend, `2` for Google Cloud Storage (GCS) backend, `3` for Azure Blob Storage backend, `4` for local encrypted backend, `5` for SFTP backend
- `SFTPGO_ACTION_BUCKET`, non-empty for S3, GCS and Azure backends
- `SFTPGO_ACTION_ENDPOINT`, non-empty for S3, SFTP and Azure backend if configured
- `SFTPGO_ACTION_STATUS`, integer. Status for `upload`, `download` and `ssh_cmd` actions. 1 means no error, 2 means a generic error occurred, 3 means quota exceeded error
- `SFTPGO_ACTION_PROTOCOL`, string. Possible values are `SSH`, `SFTP`, `SCP`, `FTP`, `DAV`, `HTTP`, `HTTPShare`, `DataRetention`
- `SFTPGO_ACTION_IP`, the action was executed from this IP address
- `SFTPGO_ACTION_SESSION_ID`, string. Unique protocol session identifier. For stateless protocols such as HTTP the session id will change for each request
- `SFTPGO_ACTION_OPEN_FLAGS`, integer. File open flags, can be non-zero for `pre-upload` action. If `SFTPGO_ACTION_FILE_SIZE` is greater than zero and `SFTPGO_ACTION_OPEN_FLAGS&512 == 0` the target file will not be truncated
- `SFTPGO_ACTION_TIMESTAMP`, int64. Event timestamp as nanoseconds since epoch
Previous global environment variables aren't cleared when the script is called.
The program must finish within 30 seconds.
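As a sketch, here is a program hook that logs completed uploads using the environment variables above (the log path is an arbitrary example):
```bash
#!/bin/bash
# only act on successfully completed uploads
if [[ "$SFTPGO_ACTION" == "upload" && "$SFTPGO_ACTION_STATUS" == "1" ]]; then
  echo "$(date -u +%FT%TZ) $SFTPGO_ACTION_USERNAME uploaded $SFTPGO_ACTION_VIRTUAL_PATH ($SFTPGO_ACTION_FILE_SIZE bytes) via $SFTPGO_ACTION_PROTOCOL" \
    >> /var/log/sftpgo-uploads.log
fi
```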
If the `hook` defines an HTTP URL then this URL will be invoked as HTTP POST. The request body will contain a JSON serialized struct with the following fields:
- `action`, string
- `username`, string
- `path`, string
- `target_path`, string, included for `rename` action and `sftpgo-copy` SSH command
- `virtual_path`, string, virtual path, seen by SFTPGo users
- `virtual_target_path`, string, virtual target path, seen by SFTPGo users
- `ssh_cmd`, string, included for `ssh_cmd` action
- `file_size`, int64, included for `pre-upload`, `upload`, `download`, `delete` actions if the file size is greater than `0`
- `fs_provider`, integer, `0` for local filesystem, `1` for S3 backend, `2` for Google Cloud Storage (GCS) backend, `3` for Azure Blob Storage backend, `4` for local encrypted backend, `5` for SFTP backend
- `bucket`, string, included for S3, GCS and Azure backends
- `endpoint`, string, included for S3, SFTP and Azure backend if configured
- `status`, integer. Status for `upload`, `download` and `ssh_cmd` actions. 1 means no error, 2 means a generic error occurred, 3 means quota exceeded error
- `protocol`, string. Possible values are `SSH`, `SFTP`, `SCP`, `FTP`, `DAV`, `HTTP`, `HTTPShare`, `DataRetention`
- `ip`, string. The action was executed from this IP address
- `session_id`, string. Unique protocol session identifier. For stateless protocols such as HTTP the session id will change for each request
- `open_flags`, integer. File open flags, can be non-zero for `pre-upload` action. If `file_size` is greater than zero and `open_flags&512 == 0` the target file will not be truncated
- `timestamp`, int64. Event timestamp as nanoseconds since epoch
The HTTP hook will use the global configuration for HTTP clients and will respect the retry configurations.
The `actions` struct inside the "data_provider" configuration section allows you to configure actions on user add, update, delete.
The `pre-*` actions are always executed synchronously while the other ones are asynchronous. You can specify the actions to run synchronously via the `execute_sync` configuration key. Executing an action synchronously means that SFTPGo will not return a result code to the client (which is waiting for it) until your hook has completed its execution. If your hook takes a long time to complete this could cause a timeout on the client side, which wouldn't receive the server response in a timely manner and would eventually drop the connection.
## Provider events
The `actions` struct inside the `data_provider` configuration section allows you to configure actions on data provider objects add, update, delete.
The supported object types are:
- `user`
- `admin`
- `api_key`
Actions will not be fired for internal updates, such as the last login or the user quota fields, or after external authentication.
If the `hook` defines a path to an external program, then this program can read the following environment variables:
- `SFTPGO_PROVIDER_ACTION`, supported values are `add`, `update`, `delete`
- `SFTPGO_PROVIDER_OBJECT_TYPE`, affected object type
- `SFTPGO_PROVIDER_OBJECT_NAME`, unique identifier for the affected object, for example username or key id
- `SFTPGO_PROVIDER_USERNAME`, the username that executed the action. There are two special usernames: `__self__` identifies a user/admin that updates itself and `__system__` identifies an action that does not have an explicit executor associated with it, for example users/admins can be added/updated by loading them from initial data
- `SFTPGO_PROVIDER_IP`, the action was executed from this IP address
- `SFTPGO_PROVIDER_TIMESTAMP`, event timestamp as nanoseconds since epoch
- `SFTPGO_PROVIDER_OBJECT`, object serialized as JSON with sensitive fields removed
Previous global environment variables aren't cleared when the script is called.
The program must finish within 15 seconds.
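For example, a sketch of a program hook that records provider changes (the log path is an arbitrary example):
```bash
#!/bin/bash
# e.g. "update user user1 by admin from 127.0.0.1"
echo "$SFTPGO_PROVIDER_ACTION $SFTPGO_PROVIDER_OBJECT_TYPE $SFTPGO_PROVIDER_OBJECT_NAME by $SFTPGO_PROVIDER_USERNAME from $SFTPGO_PROVIDER_IP" \
  >> /var/log/sftpgo-provider.log
```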
If the `hook` defines an HTTP URL then this URL will be invoked as HTTP POST. The action, username, ip, object_type, object_name and timestamp are added to the query string, for example `<hook>?action=update&username=admin&ip=127.0.0.1&object_type=user&object_name=user1&timestamp=1633860803249`, and the full object is sent serialized as JSON inside the POST body with sensitive fields removed.
The HTTP hook will use the global configuration for HTTP clients and will respect the retry configurations.
The structure for SFTPGo objects can be found within the [OpenAPI schema](../openapi/openapi.yaml).
## Pub/Sub services
You can forward SFTPGo events to several publish/subscribe systems using the [sftpgo-plugin-pubsub](https://github.com/sftpgo/sftpgo-plugin-pubsub). The notifiers SFTPGo plugins are not suitable for interactive actions such as `pre-*` events. Their scope is to simply forward events to external services. A custom hook is a better choice if you need to react to `pre-*` events.
## Database services
You can store SFTPGo events in database systems using the [sftpgo-plugin-eventstore](https://github.com/sftpgo/sftpgo-plugin-eventstore) and you can search the stored events using the [sftpgo-plugin-eventsearch](https://github.com/sftpgo/sftpgo-plugin-eventsearch).

docs/dare.md Normal file
View File

@@ -0,0 +1,20 @@
# Data At Rest Encryption (DARE)
SFTPGo supports data at-rest encryption via its `cryptfs` virtual file system. In this mode SFTPGo transparently encrypts and decrypts data (to/from the local disk) on-the-fly during uploads and/or downloads, making sure that the files at rest on the server side are always encrypted.
Data At Rest Encryption is supported for local filesystem, for cloud storage backends you can use their server side encryption feature.
Because of the way it works, when you set up an encrypted filesystem for a user you need to make sure it points to an empty path/directory (one that has no files in it). Otherwise SFTPGo would try to decrypt existing files that are not encrypted in the first place, and fail.
The SFTPGo's `cryptfs` is a tiny wrapper around [sio](https://github.com/minio/sio) therefore data is encrypted and authenticated using `AES-256-GCM` or `ChaCha20-Poly1305`. AES-GCM will be used if the CPU provides hardware support for it.
The only required configuration parameter is a `passphrase`: each file will be encrypted using a unique, randomly generated secret key derived from the given passphrase using the HMAC-based Extract-and-Expand Key Derivation Function (HKDF) as defined in [RFC 5869](http://tools.ietf.org/html/rfc5869). It is important to note that the per-object encryption key is never stored anywhere: it is derived from your `passphrase` and a randomly generated initialization vector just before encryption/decryption. The initialization vector is stored with the file.
The passphrase is stored encrypted itself according to your [KMS configuration](./kms.md) and is required to decrypt any file encrypted using an encryption key derived from it.
The encrypted filesystem has some limitations compared to the local, unencrypted, one:
- Resuming uploads is not supported.
- Opening a file for both reading and writing at the same time is not supported, so clients that require advanced filesystem-like features, such as `sshfs`, are not supported either.
- Truncate is not supported.
- System commands such as `git` or `rsync` are not supported: they will store data unencrypted.

View File

@@ -0,0 +1,32 @@
# Data retention hook
This hook runs after a data retention check completes if you specify `Hook` among the notification methods when you start the check.
The `data_retention_hook` can be defined as the absolute path of your program or an HTTP URL.
If the hook defines an external program it can read the following environment variable:
- `SFTPGO_DATA_RETENTION_RESULT`, it contains the data retention check result JSON serialized.
Previous global environment variables aren't cleared when the script is called.
The program must finish within 20 seconds.
If the hook defines an HTTP URL then this URL will be invoked as HTTP POST and the POST body contains the data retention check result JSON serialized.
The HTTP hook will use the global configuration for HTTP clients and will respect the retry configurations.
Here is the schema for the data retention check result:
- `username`, string
- `status`, int. 1 means success, 0 error
- `start_time`, int64. Start time as UNIX timestamp in milliseconds
- `total_deleted_files`, int. Total number of files deleted
- `total_deleted_size`, int64. Total size deleted in bytes
- `elapsed`, int64. Elapsed time in milliseconds
- `details`, list of struct with details for each checked path, each struct contains the following fields:
- `path`, string
- `retention`, int. Retention time in hours
- `deleted_files`, int. Number of files deleted
- `deleted_size`, int64. Size deleted in bytes
- `info`, string. Informative, non fatal, message if any. For example it can indicate that the check was skipped because the user doesn't have the required permissions on this path
- `error`, string. Error message if any
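For example, a sketch of a program hook that summarizes the result with `jq` (the log path is an arbitrary example):
```bash
#!/bin/bash
# build a one line summary from the JSON serialized check result
echo "$SFTPGO_DATA_RETENTION_RESULT" | \
  jq -r '"user \(.username): status \(.status), deleted \(.total_deleted_files) files (\(.total_deleted_size) bytes) in \(.elapsed) ms"' \
  >> /var/log/sftpgo-retention.log
```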

docs/defender.md Normal file
View File

@@ -0,0 +1,67 @@
# Defender
The built-in `defender` allows you to configure an auto-blocking policy for SFTPGo and thus helps to prevent DoS (Denial of Service) and brute force password guessing.
If enabled it will protect SFTP, FTP and WebDAV services and it will automatically block hosts (IP addresses) that continually fail to log in or attempt to connect.
You can configure a score for the following events:
- `score_valid`, defines the score for valid login attempts, e.g. user accounts that exist. Default `1`.
- `score_invalid`, defines the score for invalid login attempts, e.g. non-existent user accounts or clients disconnected for inactivity without authentication attempts. Default `2`.
- `score_limit_exceeded`, defines the score for hosts that exceeded the configured rate limits or the configured max connections per host. Default `3`.
And then you can configure:
- `observation_time`, defines the time window, in minutes, for tracking client errors.
- `threshold`, defines the threshold value before banning a host.
- `ban_time`, defines the time to ban a client, in minutes
So a host is banned, for `ban_time` minutes, if the sum of its scores exceeds the defined threshold during the last `observation_time` minutes.
By defining the scores, each type of event can be weighted. Let's see an example: if `score_invalid` is 3 and `threshold` is 8, a host will be banned after 3 login attempts with a non-existent user within the configured `observation_time`.
A banned IP has no score, it makes no sense to accumulate host events in memory for an already banned IP address.
If an already banned client tries to log in again, its ban time will be incremented according to the `ban_time_increment` configuration.
The `ban_time_increment` is calculated as a percentage of `ban_time`, so if `ban_time` is 30 minutes and `ban_time_increment` is 50, the host will be banned for an additional 15 minutes. You can also specify values greater than 100 for `ban_time_increment` if you want to increase the penalty for already banned hosts.
SFTPGo can store host scores and banned hosts in memory or within the configured data provider according to the `driver` set in the `defender` configuration section. The available drivers are `memory` and `provider`.
The `provider` driver is useful if you want to share the defender data across multiple SFTPGo instances and it requires a shared or distributed data provider: `MySQL`, `PostgreSQL` and `CockroachDB` are supported.
If you set the `provider` driver, the defender implementation may do many database queries (at least one query every time a new client connects, to check if it is banned); if you have a single SFTPGo instance the `memory` driver is recommended.
For the `memory` driver, you can limit the memory usage using the `entries_soft_limit` and `entries_hard_limit` configuration keys.
The `provider` driver will periodically clean up expired hosts and events.
Using the REST API you can:
- list hosts within the defender's lists
- remove hosts from the defender's lists
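For example, hedged sketches using `curl` (the `/api/v2/defender/hosts` endpoints are an assumption based on recent releases; check the OpenAPI schema for the exact paths, and store a valid token in `$TOKEN`):
```bash
# list hosts within the defender's lists
curl -s -H "Authorization: Bearer $TOKEN" "http://127.0.0.1:8080/api/v2/defender/hosts"
# remove a host from the defender's lists; <host_id> is returned by the list call
curl -s -X DELETE -H "Authorization: Bearer $TOKEN" "http://127.0.0.1:8080/api/v2/defender/hosts/<host_id>"
```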
The `defender` can also load a permanent block list and/or a safe list of ip addresses/networks from a file:
- `safelist_file`, defines the path to a file containing a list of ip addresses and/or networks to never ban.
- `blocklist_file`, defines the path to a file containing a list of ip addresses and/or networks to always ban.
These lists must be stored as JSON conforming to the following schema:
- `addresses`, list of strings. Each string must be a valid IPv4/IPv6 address.
- `networks`, list of strings. Each string must be a valid IPv4/IPv6 CIDR address.
Here is a small example:
```json
{
"addresses":[
"192.0.2.1",
"2001:db8::68"
],
"networks":[
"192.0.3.0/24",
"2001:db8:1234::/48"
]
}
```
These lists are always loaded in memory (even if you use the `provider` driver) for faster lookups. The REST API queries "live" data and not these lists.

View File

@@ -1,27 +1,45 @@
# Dynamic user creation or modification
Dynamic user creation or modification is supported via an external program or an HTTP URL that can be invoked just before the user login.
To enable dynamic user modification, you must set the absolute path of your program or an HTTP URL using the `pre_login_hook` key in your configuration file.
The external program can read the following environment variables to get info about the user trying to login:
- `SFTPGO_LOGIND_USER`, it contains the user trying to login serialized as JSON. A JSON serialized user id equal to zero means the user does not exist inside SFTPGo
- `SFTPGO_LOGIND_METHOD`, possible values are: `password`, `publickey`, `keyboard-interactive`, `TLSCertificate`
- `SFTPGO_LOGIND_IP`, ip address of the user trying to login
- `SFTPGO_LOGIND_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`, `HTTP`
The program must write, on its standard output:
- an empty string (or no response at all) if the user should not be created/updated
- or the SFTPGo user, JSON serialized, if you want to create or update the given user
If the hook is an HTTP URL then it will be invoked as HTTP POST. The login method, the used protocol and the ip address of the user trying to login are added to the query string, for example `<http_url>?login_method=password&ip=1.2.3.4&protocol=SSH`.
The request body will contain the user trying to login serialized as JSON. If no modification is needed the HTTP response code must be 204, otherwise the response code must be 200 and the response body a valid SFTPGo user serialized as JSON.
Actions defined for user updates will not be executed in this case and an already logged in user with the same username will not be disconnected; you have to handle these things yourself.
The JSON response can include only the fields to update instead of the full user. For example, if you want to disable the user, you can return a response like this:
```json
{"status": 0}
```
Please note that if you want to create a new user, the pre-login hook response must include all the mandatory user fields.
The program hook must finish within 30 seconds, the HTTP hook will use the global configuration for HTTP clients.
If an error happens while executing the hook then login will be denied.
"Dynamic user creation or modification" and "External Authentication" are mutually exclusive, they are quite similar, the difference is that "External Authentication" returns an already authenticated user while using "Dynamic users modification" you simply create or update a user. The authentication will be checked inside SFTPGo.
In other words while using "External Authentication" the external program receives the credentials of the user trying to login (for example the cleartext password) and it needs to validate them. While using "Dynamic users modification" the pre-login program receives the user stored inside the dataprovider (it includes the hashed password if any) and it can modify it, after the modification SFTPGo will check the credentials of the user trying to login.
You can disable the hook on a per-user basis.
Let's see a very basic example. Our sample program will grant access to the existing user `test_user` only in the time range 10:00-18:00. Other users will not be modified since the program will terminate with no output.
```shell
#!/bin/bash
CURRENT_TIME=`date +%H:%M`
# demo only: the username is found by searching the JSON, see the note below
if [[ "$SFTPGO_LOGIND_USER" =~ "\"test_user\"" ]]
then
  if [[ "$CURRENT_TIME" > "10:00" && "$CURRENT_TIME" < "18:00" ]]
  then
    # inside the allowed time range: leave the user enabled
    echo '{"status":1}'
  else
    # outside the allowed time range: disable the user, login will be denied
    echo '{"status":0}'
  fi
fi
```
Please note that this is a demo program and it might not work in all cases. For example, the username should be obtained by parsing the JSON serialized user and not by searching the username inside the JSON as shown here.
The structure for SFTPGo users can be found within the [OpenAPI schema](../openapi/openapi.yaml).
