Mirror of https://github.com/drakkan/sftpgo.git, synced 2025-12-07 14:50:55 +03:00.
Compare commits: 566 commits. The Author and Date columns of the commit table are empty; only the SHA1 values survive, listed here in page order.

```text
7dd795e6ed a3d4f5e800 30ce6ef736 2e6497ea17 cc9db96257 dd485509f6 df2e490680 7b0ea8f731 591bebef0c bdb6f585c7
db354e838c 1e785077f2 700ca7550c 791846adee af6f1f6026 5d3288c37d 82b26f81d6 e0bbed1260 13e81530e9 c10b236f5d
552a96533e cebd069c77 be9230e85b b1ce6eb85b 46176a54b4 a21ccad174 1129a868a5 1ac66d27b6 6a6e8fffbc 51f110bc7b
4ddfe41f23 ddd06fc2ac 1bccb93fcb db80781716 a2a99f9b57 cd4a68cc96 b37eb68993 b13958a8d6 17e2b234a0 4ef1775e9a
363977b474 05ae0ea5f2 8de7a81674 d32b195a57 267d9f1831 17a42a0c11 a219d25cac ce731020a7 fc9082c422 4872ba2ea0
70bb3c34ce 1cde50f050 e9dd4ecdf0 f863530653 4f609cfa30 78bf808322 afe1da92c5 9985224966 02679d6df3 c2bbd468c4
46ab8f8d78 54321c5240 5fcbf2528f ea096db8e4 0caeb68680 2b9ba1d520 80f5ccd357 820169c5c6 aff75953e3 c0e09374a8
57976b4085 899f1a1844 41a1af863e 778ec9b88f d42fcc3786 5d4f758c47 a8a17a223a aa40b04576 daac90c4e1 72b2c83392
c3410a3d91 173c1820e1 684f4ba1a6 6d84c5b9e3 4b522a2455 1e1c46ae1b d6b3acdb62 037d89a320 30eb3c4a99 0966d44c0f
40e759c983 141ca6777c 3c16a19269 b3c6d79f51 0c56b6d504 3d2da88da9 80c06d6b59 e536a638c9 bc397002d4 2a95d031ea
1dce1eff48 5b1d8666b3 187a5b1908 7ab7941ddd c69d63c1f8 743b350fdd 1ac610da1a bcf0fa073e 140380716d 143df87fee
6d895843dc 65e6d5475f 15609cdbc7 f876c728ad f34462e3c3 ea0bf5e4c8 14d1b82f6b ed43ddd79d 23192a3be7 72e3d464b8
a6985075b9 4d5494912d 50982229e1 6977a4a18b ab1bf2ad44 c451f742aa 034d89876d 4a88ea5c03 95c6d41c35 2a9ed0abca
3ff6b1bf64 a67276ccc2 87b51a6fd5 940836b25b 634b723b5d af0c9b76c4 2142ef20c5 224ce5fe81 4bb9d07dde 2054dfd83d
6699f5c2cc 70bde8b2bc ff73e5f53c 0609188d3f 99cd1ccfe5 dccc583b5d ac435b7890 37fc589896 5d789a01b7 ca0ff0d630
969b38586e e3eca424f1 a6355e298e c0f47a58f2 dc845fa2f4 7e855c83b3 3b8a9e0963 4445834fd3 19a619ff65 66a538dc9c
1a6863f4b1 fbd9919afa eec8bc73f4 5720d40fee 38e0cba675 4c5a0d663e 093df15fac 957430e675 14035f407e bf2b2525a9
4edb9cd6b9 c38d242bea c6ab6f94e7 36151d1ba9 1d5d184720 0119fd03a6 0a14297b48 442efa0607 6ad4cc317c 57bec976ae
641493e31a 5b4e9ad982 950a5ad9ea fcfdd633f6 ebb18fa57d 58b0ca585c 5bc1c2de2d ec00613202 02ec3a5f48 ac3bae00fc
e54828a7b8 f2acde789d 9b49f63a97 14bcc6f2fc 975a2f3632 5ff8f75917 db7e81e9d0 6a8039e76a 56bf8364cd 75750e3a79
bb5207ad77 b51d795e04 d12819932a d812c86812 1625cd5a9f 756c3d0503 f884447b26 555394b95e 00510a6af8 6c0839e197
5b79379c90 47fed45700 80d695f3a2 8d4f40ccd2 765bad5edd 0c0382c9b5 bbab6149e8 ce9387f1ab d126c5736a 5048d54d32
f22fe6af76 8034f289d1 eed61ac510 412d6096c0 c289ae07d2 87f78b07b3 5e2db77ef9 c992072286 0ef826c090 5da75c3915
8222baa7ed 7b76b51314 c96dbbd3b5 da6ccedf24 13b37a835f 863fa33309 9f4c54a212 2a7bff4c0e 17406d1aab 6537c53d43
b4bd10521a 65cbef1962 a8d355900a ffd9c381ce 2a0bce0beb f1f7b81088 f9827f958b 3e2afc35ba c65dd86d5e 2d6c0388af
4d19d87720 5eabaf98e0 d1f0e9ae9f cd56039ab7 55515fee95 13d43a2d31 001261433b 03bf595525 4ebedace1e b23276c002
bf708cb8bc a550d082a3 6c1a7449fe f0c9b55036 209badf10c 242dde4480 2df0dd1f70 98a6d138d4 38f06ab373 3c1300721c
61003c8079 01850c7399 b9c381e26f 542554fb2c bdf18fa862 afc411c51b a59163e56c 8391b19abb 3925c7ff95 dbed110d02
f978355520 4748e6f54d 91a4c64390 600a107699 2746c0b0f1 701a6115f8 56b00addc4 02e35ee002 5208e4a4ca 7381a867ba
f41ce6619f 933427310d 8b0a1817b3 04c9a5c008 bbc8c091e6 f3228713bc fa5333784b 0dbf0cc81f 196a56726e fe857dcb1b
aa0ed5dbd0 a9e21c282a 9a15a54885 91dcc349de fa41bfd06a 8839c34d53 11ceaa8850 2a9f7db1e2 22338ed478 59a21158a6
93ce96d011 cc2f04b0e4 aa5191fa1b 4e41a5583d ded8fad5e4 3702bc8413 7896d2eef7 da0f470f1c 8fddb742df 95fe26f3e3
1e10381143 96cbce52f9 0ea2ca3141 42877dd915 790c11c453 1ac4baa00a fc32286045 ee1131f254 c5dc3ee3b6 dd593b1035
4814786556 4f0a936ca0 aec372ca31 d2a739f8f6 165110872b 6ab4e9f533 cf541d62ea 19fc58dd1f ac9c475849 ddf99ab706
0056984d4b 44fb276464 558a1b4050 8f934f2648 403b9a8310 33436488e2 3c28366fed b80abe6c05 8cb47817f6 23a80b01b6
b30614e9d8 e86089a9f3 3ceba7a147 c491133aff 37418a7630 73a9c002e0 3d48fa7382 8e22dd1b13 7807fa7cc2 cd380973df
01d681faa3 c231b663a3 8306b6bde6 dc011af90d c27e3ef436 760cc9ba5a 5665e9c0e7 ad53429cf1 15298b0409 cfa710037c
a08dd85efd 469d36d979 7ae8b2cdeb cf148db75d 738c7ab43e 82fb7f8cf0 e0f2ab9c01 e0183217b6 f066b7fb9c 0c6e2b566b
f02e24437a e9534be1e6 7056997e49 155af19aaa f369fdf6f2 510a95bd6d da90dbe645 b006c5f914 3f75d46a16 14c2a244b7
94ff9d7346 14196167b0 d70959c34c 67c6f27064 6bfbb27856 baac3749b3 d377181b25 ebd6a11f3a 0a47412e8c 4f668bf558
9248c5a987 b0ed190591 37357b2d63 9b06e0a3b7 5a5912ea66 b1c7317cf6 a0fe4cf5e4 7fe3c965e3 fd9b3c2767 fb9e188e36
c93d8cecfc 94b46e57f1 9046acbe68 075bbe2aef b52d078986 0a9c4914aa f284008fb5 4759254e10 e22d377203 0787e3e595
c1194d558c 952b10a9f6 f55851bdc8 76bb361393 81c8e8d898 f4e872c782 ddcb500c51 e8664c0ce4 3b002ddc86 1770da545d
de3e69f846 cdf1233065 6b70f0b25f 4fe51f7cce 7221bf9b25 61f20f5449 5dafbb54de ec8ab28a22 aaa6d0c71f ea74aca165
9b119765fc df02496145 31d285813e f9fc5792fd 6ad9c5ae64 016abda6d7 2eea6c95b9 7f1946de34 d0a81cabab df67f4ef34
ed11e1128a ed1c7cac17 7c115aa9c8 3ffddcba92 833b702b90 b885d453a2 7163fde724 830e3d1f64 637463a068 e69536f540
c516780289 eb1b869b73 703ccc8d91 45b9366dd0 382c6fda89 0f80de86b2 bc11cdd8d5 62b20cd884 ae8ed75ae5 c8cc81cf4a
79c8b6cbc2 58253968fc dbd75209df da01848855 0b7be1175d 3479a7e438 4f5c67e7df b99495ebbb 0061978db8 e011f793ec
5b47292366 8eff2df39c 7bfe0ddf80 d6fa853a37 553cceab42 5bfaae9202 9359669cd4 8b039e0447 c64c080159 bcaf283c35
31a433cda2 e647f3626e 3491717c26 45a13f5f4e 6884ce3f3e 5f4efc9148 d481294519 7ebbbe5c29 9ff303b8c0 4463421028
d75f56b914 a4834f4a83 0b42dbc3c3 2013ba497c 2be37217cf 55a03c2e2b 27dbcf0066 ec194d73d2 1d9bb54073 5cf4a47b48
eec60d6309 37c602a477 8e604f888a 531091906d e046b35b97 eb2ddc4798 aee9312cea 6a99a5cb9f 8e0ca88421 c7e55db4e0
1b1c740b29 ad5436e3f6 20606a0043 80e9902324 741e65a3a1 6aff8c2f5e e5770af2fa ae094d3479 f49c280a7f ae812e55af
489101668c f8fd5c067c 39fc9b73e9 80a5138115 bc844105b2 7de0fe467a 0a025aabfd 7a8b1645ef b3729e4666 9c4dbbc3f8
ca6cb34d98 fc442d7862 3ac5af47f2 bb37a1c1ce 206799ff1c f3de83707f 5be1d1be69 08e85f6be9 acdf351047 c2ff50c917
363b9ccc7f 74367a65cc 2221d3307a 4ff34b3e53 191da1ecaf 77db2bd3d1 758f2ee834 c5a6ca5650 b409523d5c 8cd0aec417
a4cddf4f7f d970e757eb 083d9f76c6 2003d08c59 9cf4653425 4f6bb00996
```
.github/FUNDING.yml (vendored, new file, 12 lines added)

```yaml
# These are supported funding model platforms

github: [drakkan] # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: # Replace with a single Ko-fi username
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']
```
.github/workflows/.editorconfig (vendored, new file, 2 lines added)

```ini
[*.yml]
indent_size = 2
```
.github/workflows/development.yml (vendored, new file, 267 lines added)

```yaml
name: CI

on:
  push:
    branches: [2.0.x]
  pull_request:

jobs:
  test-deploy:
    name: Test and deploy
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        go: [1.16]
        os: [ubuntu-18.04, macos-10.15, windows-2019]
        upload-coverage: [false]

    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0

      - name: Set up Go
        uses: actions/setup-go@v2
        with:
          go-version: ${{ matrix.go }}

      - name: Build for Linux/macOS
        if: startsWith(matrix.os, 'windows-') != true
        run: go build -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo

      - name: Build for Windows
        if: startsWith(matrix.os, 'windows-')
        run: |
          $GIT_COMMIT = (git describe --always --dirty) | Out-String
          $DATE_TIME = ([datetime]::Now.ToUniversalTime().toString("yyyy-MM-ddTHH:mm:ssZ")) | Out-String
          go build -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/version.date=$DATE_TIME" -o sftpgo.exe

      - name: Run test cases using SQLite provider
        run: go test -v -p 1 -timeout 10m ./... -coverprofile=coverage.txt -covermode=atomic

      - name: Upload coverage to Codecov
        if: ${{ matrix.upload-coverage }}
        uses: codecov/codecov-action@v1
        with:
          file: ./coverage.txt
          fail_ci_if_error: false

      - name: Run test cases using bolt provider
        run: |
          go test -v -p 1 -timeout 2m ./config -covermode=atomic
          go test -v -p 1 -timeout 2m ./common -covermode=atomic
          go test -v -p 1 -timeout 3m ./httpd -covermode=atomic
          go test -v -p 1 -timeout 8m ./sftpd -covermode=atomic
          go test -v -p 1 -timeout 2m ./ftpd -covermode=atomic
          go test -v -p 1 -timeout 2m ./webdavd -covermode=atomic
          go test -v -p 1 -timeout 2m ./telemetry -covermode=atomic
        env:
          SFTPGO_DATA_PROVIDER__DRIVER: bolt
          SFTPGO_DATA_PROVIDER__NAME: 'sftpgo_bolt.db'

      - name: Run test cases using memory provider
        run: go test -v -p 1 -timeout 10m ./... -covermode=atomic
        env:
          SFTPGO_DATA_PROVIDER__DRIVER: memory
          SFTPGO_DATA_PROVIDER__NAME: ''

      - name: Gather cross build info
        id: cross_info
        if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
        run: |
          GIT_COMMIT=$(git describe --always)
          BUILD_DATE=$(date -u +%FT%TZ)
          echo ::set-output name=sha::${GIT_COMMIT}
          echo ::set-output name=created::${BUILD_DATE}

      - name: Cross build with xgo
        if: ${{ startsWith(matrix.os, 'ubuntu-') }}
        uses: crazy-max/ghaction-xgo@v1
        with:
          go_version: 1.16.x
          dest: cross
          prefix: sftpgo
          targets: linux/arm64,linux/ppc64le
          v: true
          x: false
          race: false
          ldflags: -s -w -X github.com/drakkan/sftpgo/version.commit=${{ steps.cross_info.outputs.sha }} -X github.com/drakkan/sftpgo/version.date=${{ steps.cross_info.outputs.created }}
          buildmode: default

      - name: Prepare build artifact for Linux/macOS
        if: startsWith(matrix.os, 'windows-') != true
        run: |
          mkdir -p output/{bash_completion,zsh_completion}
          cp sftpgo output/
          cp sftpgo.json output/
          cp -r templates output/
          cp -r static output/
          cp -r init output/
          ./sftpgo gen completion bash > output/bash_completion/sftpgo
          ./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
          ./sftpgo gen man -d output/man/man1
          gzip output/man/man1/*

      - name: Copy cross compiled Linux binaries
        if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
        run: |
          cp cross/sftpgo-linux-arm64 output/
          cp cross/sftpgo-linux-ppc64le output/

      - name: Prepare build artifact for Windows
        if: startsWith(matrix.os, 'windows-')
        run: |
          mkdir output
          copy .\sftpgo.exe .\output
          copy .\sftpgo.json .\output
          mkdir output\templates
          xcopy .\templates .\output\templates\ /E
          mkdir output\static
          xcopy .\static .\output\static\ /E

      - name: Upload build artifact
        uses: actions/upload-artifact@v2
        with:
          name: sftpgo-${{ matrix.os }}-go${{ matrix.go }}
          path: output

      - name: Build Linux Packages
        id: build_linux_pkgs
        if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
        run: |
          cp -r pkgs pkgs_arm64
          cp -r pkgs pkgs_ppc64le
          cd pkgs
          ./build.sh
          cd ..
          export NFPM_ARCH=arm64
          export BIN_SUFFIX=-linux-arm64
          cp cross/sftpgo${BIN_SUFFIX} .
          cd pkgs_arm64
          ./build.sh
          cd ..
          export NFPM_ARCH=ppc64le
          export BIN_SUFFIX=-linux-ppc64le
          cp cross/sftpgo${BIN_SUFFIX} .
          cd pkgs_ppc64le
          ./build.sh
          PKG_VERSION=$(cat dist/version)
          echo "::set-output name=pkg-version::${PKG_VERSION}"

      - name: Upload Debian Package
        if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
        uses: actions/upload-artifact@v2
        with:
          name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-x86_64-deb
          path: pkgs/dist/deb/*

      - name: Upload RPM Package
        if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
        uses: actions/upload-artifact@v2
        with:
          name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-x86_64-rpm
          path: pkgs/dist/rpm/*

      - name: Upload Debian Package arm64
        if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
        uses: actions/upload-artifact@v2
        with:
          name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-arm64-deb
          path: pkgs_arm64/dist/deb/*

      - name: Upload RPM Package arm64
        if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
        uses: actions/upload-artifact@v2
        with:
          name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-arm64-rpm
          path: pkgs_arm64/dist/rpm/*

      - name: Upload Debian Package ppc64le
        if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
        uses: actions/upload-artifact@v2
        with:
          name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-ppc64le-deb
          path: pkgs_ppc64le/dist/deb/*

      - name: Upload RPM Package ppc64le
        if: ${{ matrix.upload-coverage && startsWith(matrix.os, 'ubuntu-') }}
        uses: actions/upload-artifact@v2
        with:
          name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-ppc64le-rpm
          path: pkgs_ppc64le/dist/rpm/*

  test-postgresql-mysql:
    name: Test with PostgreSQL/MySQL
    runs-on: ubuntu-18.04

    services:
      postgres:
        image: postgres:latest
        env:
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: sftpgo
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432

      mariadb:
        image: mariadb:latest
        env:
          MYSQL_ROOT_PASSWORD: mysql
          MYSQL_DATABASE: sftpgo
          MYSQL_USER: sftpgo
          MYSQL_PASSWORD: sftpgo
        options: >-
          --health-cmd "mysqladmin status -h 127.0.0.1 -P 3306 -u root -p$MYSQL_ROOT_PASSWORD"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 6
        ports:
          - 3307:3306

    steps:
      - uses: actions/checkout@v2

      - name: Set up Go
        uses: actions/setup-go@v2
        with:
          go-version: 1.16

      - name: Build
        run: go build -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo

      - name: Run tests using PostgreSQL provider
        run: |
          go test -v -p 1 -timeout 10m ./... -covermode=atomic
        env:
          SFTPGO_DATA_PROVIDER__DRIVER: postgresql
          SFTPGO_DATA_PROVIDER__NAME: sftpgo
          SFTPGO_DATA_PROVIDER__HOST: localhost
          SFTPGO_DATA_PROVIDER__PORT: 5432
          SFTPGO_DATA_PROVIDER__USERNAME: postgres
          SFTPGO_DATA_PROVIDER__PASSWORD: postgres

      - name: Run tests using MySQL provider
        run: |
          go test -v -p 1 -timeout 10m ./... -covermode=atomic
        env:
          SFTPGO_DATA_PROVIDER__DRIVER: mysql
          SFTPGO_DATA_PROVIDER__NAME: sftpgo
          SFTPGO_DATA_PROVIDER__HOST: localhost
          SFTPGO_DATA_PROVIDER__PORT: 3307
          SFTPGO_DATA_PROVIDER__USERNAME: sftpgo
          SFTPGO_DATA_PROVIDER__PASSWORD: sftpgo

  golangci-lint:
    name: golangci-lint
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v2
      - name: Run golangci-lint
        uses: golangci/golangci-lint-action@v2
        with:
          version: v1.37.1
```
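The workflow stamps the binary with the commit and build date via Go's `-X` linker flag. A minimal sketch of reproducing the CI build step outside of Actions (it assumes a git checkout with history, so that `git describe` can produce an identifier):

```bash
# Reproduce the CI build step locally; this is the same command the
# "Build for Linux/macOS" step runs, split into named variables.
GIT_COMMIT=$(git describe --always --dirty)
BUILD_DATE=$(date -u +%FT%TZ)
go build -ldflags "-s -w \
  -X github.com/drakkan/sftpgo/version.commit=${GIT_COMMIT} \
  -X github.com/drakkan/sftpgo/version.date=${BUILD_DATE}" -o sftpgo
```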
.github/workflows/docker.yml (vendored, new file, 151 lines added)

```yaml
name: Docker

on:
  push:
    tags:
      - v*
  pull_request:

jobs:
  build:
    name: Build
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os:
          - ubuntu-18.04
        docker_pkg:
          - debian
          - alpine
        optional_deps:
          - true
          - false
    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Repo metadata
        id: repo
        uses: actions/github-script@v3
        with:
          script: |
            const repo = await github.repos.get(context.repo)
            return repo.data

      - name: Gather image information
        id: info
        run: |
          VERSION=noop
          DOCKERFILE=Dockerfile
          MINOR=""
          MAJOR=""
          if [ "${{ github.event_name }}" = "schedule" ]; then
            VERSION=nightly
          elif [[ $GITHUB_REF == refs/tags/* ]]; then
            VERSION=${GITHUB_REF#refs/tags/}
          elif [[ $GITHUB_REF == refs/heads/* ]]; then
            VERSION=$(echo ${GITHUB_REF#refs/heads/} | sed -r 's#/+#-#g')
            if [ "${{ github.event.repository.default_branch }}" = "$VERSION" ]; then
              VERSION=edge
            fi
          elif [[ $GITHUB_REF == refs/pull/* ]]; then
            VERSION=pr-${{ github.event.number }}
          fi
          if [[ $VERSION =~ ^v[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
            MINOR=${VERSION%.*}
            MAJOR=${MINOR%.*}
          fi
          VERSION_SLIM="${VERSION}-slim"
          if [[ $DOCKER_PKG == alpine ]]; then
            VERSION="${VERSION}-alpine"
            VERSION_SLIM="${VERSION}-slim"
            DOCKERFILE=Dockerfile.alpine
          fi

          DOCKER_IMAGES=("drakkan/sftpgo" "ghcr.io/drakkan/sftpgo")
          TAGS="${DOCKER_IMAGES[0]}:${VERSION}"
          TAGS_SLIM="${DOCKER_IMAGES[0]}:${VERSION_SLIM}"

          for DOCKER_IMAGE in ${DOCKER_IMAGES[@]}; do
            if [[ ${DOCKER_IMAGE} != ${DOCKER_IMAGES[0]} ]]; then
              TAGS="${TAGS},${DOCKER_IMAGE}:${VERSION}"
              TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:${VERSION_SLIM}"
            fi
            if [[ $GITHUB_REF == refs/tags/* ]]; then
              if [[ $DOCKER_PKG == debian ]]; then
                if [[ -n $MAJOR && -n $MINOR ]]; then
                  TAGS="${TAGS},${DOCKER_IMAGE}:${MINOR},${DOCKER_IMAGE}:${MAJOR}"
                  TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:${MINOR}-slim,${DOCKER_IMAGE}:${MAJOR}-slim"
                fi
                TAGS="${TAGS},${DOCKER_IMAGE}:latest"
                TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:slim"
              else
                if [[ -n $MAJOR && -n $MINOR ]]; then
                  TAGS="${TAGS},${DOCKER_IMAGE}:${MINOR}-alpine,${DOCKER_IMAGE}:${MAJOR}-alpine"
                  TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:${MINOR}-alpine-slim,${DOCKER_IMAGE}:${MAJOR}-alpine-slim"
                fi
                TAGS="${TAGS},${DOCKER_IMAGE}:alpine"
                TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:alpine-slim"
              fi
            fi
          done

          if [[ $OPTIONAL_DEPS == true ]]; then
            echo ::set-output name=version::${VERSION}
            echo ::set-output name=tags::${TAGS}
            echo ::set-output name=full::true
          else
            echo ::set-output name=version::${VERSION_SLIM}
            echo ::set-output name=tags::${TAGS_SLIM}
            echo ::set-output name=full::false
          fi
          echo ::set-output name=dockerfile::${DOCKERFILE}
          echo ::set-output name=created::$(date -u +'%Y-%m-%dT%H:%M:%SZ')
          echo ::set-output name=sha::${GITHUB_SHA::8}
        env:
          DOCKER_PKG: ${{ matrix.docker_pkg }}
          OPTIONAL_DEPS: ${{ matrix.optional_deps }}

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v1

      - name: Set up builder
        uses: docker/setup-buildx-action@v1
        id: builder

      - name: Login to Docker Hub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
        if: ${{ github.event_name != 'pull_request' }}

      - name: Login to GitHub Container Registry
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.CR_PAT }}
        if: ${{ github.event_name != 'pull_request' }}

      - name: Build and push
        uses: docker/build-push-action@v2
        with:
          builder: ${{ steps.builder.outputs.name }}
          file: ./${{ steps.info.outputs.dockerfile }}
          platforms: linux/amd64,linux/arm64,linux/ppc64le
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.info.outputs.tags }}
          build-args: |
            COMMIT_SHA=${{ steps.info.outputs.sha }}
            INSTALL_OPTIONAL_PACKAGES=${{ steps.info.outputs.full }}
          labels: |
            org.opencontainers.image.title=SFTPGo
            org.opencontainers.image.description=Fully featured and highly configurable SFTP server with optional FTP/S and WebDAV support
            org.opencontainers.image.url=${{ fromJson(steps.repo.outputs.result).html_url }}
            org.opencontainers.image.documentation=${{ fromJson(steps.repo.outputs.result).html_url }}/blob/${{ github.sha }}/docker/README.md
            org.opencontainers.image.source=${{ fromJson(steps.repo.outputs.result).html_url }}
            org.opencontainers.image.version=${{ steps.info.outputs.version }}
            org.opencontainers.image.created=${{ steps.info.outputs.created }}
            org.opencontainers.image.revision=${{ github.sha }}
            org.opencontainers.image.licenses=${{ fromJson(steps.repo.outputs.result).license.spdx_id }}
```
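The "Gather image information" step derives the minor and major tags from a semver tag with plain shell suffix stripping. A worked example of that derivation (the tag `v2.0.1` is hypothetical):

```bash
# Worked example of the tag derivation above.
VERSION=v2.0.1
MINOR=${VERSION%.*}   # strips ".1"  -> v2.0
MAJOR=${MINOR%.*}     # strips ".0"  -> v2
# For the debian/full variant each image therefore gets:
#   drakkan/sftpgo:v2.0.1, drakkan/sftpgo:v2.0, drakkan/sftpgo:v2, drakkan/sftpgo:latest
```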
.github/workflows/release.yml (vendored, new file, 411 lines added)

```yaml
name: Release

on:
  push:
    tags: 'v*'

env:
  GO_VERSION: 1.16.2

jobs:
  create-release:
    name: Create
    runs-on: ubuntu-18.04
    steps:
      - name: Create Release
        id: create_release
        uses: actions/create-release@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          tag_name: ${{ github.ref }}
          release_name: ${{ github.ref }}
          draft: false
          prerelease: false

      - name: Save release upload URL
        run: echo "${{ steps.create_release.outputs.upload_url }}" > ./upload_url.txt
        shell: bash

      - name: Store release upload URL
        uses: actions/upload-artifact@v2
        with:
          name: upload_url
          path: ./upload_url.txt

  release-sources-with-deps:
    name: Publish sources
    needs: create-release
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v2
      - name: Set up Go
        uses: actions/setup-go@v2
        with:
          go-version: ${{ env.GO_VERSION }}

      - name: Get SFTPGo version
        id: get_version
        run: echo ::set-output name=VERSION::${GITHUB_REF/refs\/tags\//}

      - name: Prepare release
        run: |
          go mod vendor
          echo "${SFTPGO_VERSION}" > VERSION.txt
          tar cJvf sftpgo_${SFTPGO_VERSION}_src_with_deps.tar.xz *
        env:
          SFTPGO_VERSION: ${{ steps.get_version.outputs.VERSION }}

      - name: Download release upload URL
        uses: actions/download-artifact@v2
        with:
          name: upload_url

      - name: Get release upload URL
        id: upload_url
        run: |
          URL=$(cat upload_url.txt)
          echo "::set-output name=url::${URL}"
        shell: bash

      - name: Upload Release
        id: upload-release-asset
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ steps.upload_url.outputs.url }}
          asset_path: ./sftpgo_${{ steps.get_version.outputs.VERSION }}_src_with_deps.tar.xz
          asset_name: sftpgo_${{ steps.get_version.outputs.VERSION }}_src_with_deps.tar.xz
          asset_content_type: application/x-xz

  publish:
    name: Publish binary
    needs: create-release
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-18.04, macos-10.15, windows-2019]

    steps:
      - uses: actions/checkout@v2
      - name: Set up Go
        uses: actions/setup-go@v2
        with:
          go-version: ${{ env.GO_VERSION }}

      - name: Build for Linux/macOS
        if: startsWith(matrix.os, 'windows-') != true
        run: go build -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo

      - name: Build for Windows
        if: startsWith(matrix.os, 'windows-')
        run: |
          $GIT_COMMIT = (git describe --always --dirty) | Out-String
          $DATE_TIME = ([datetime]::Now.ToUniversalTime().toString("yyyy-MM-ddTHH:mm:ssZ")) | Out-String
          go build -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/version.date=$DATE_TIME" -o sftpgo.exe

      - name: Initialize data provider
        run: ./sftpgo initprovider
        shell: bash

      - name: Get SFTPGo version
        id: get_version
        run: echo ::set-output name=VERSION::${GITHUB_REF/refs\/tags\//}
        shell: bash

      - name: Get OS name
        id: get_os_name
        run: |
          if [ $MATRIX_OS == 'ubuntu-18.04' ]
          then
            echo ::set-output name=OS::linux
          elif [ $MATRIX_OS == 'macos-10.15' ]
          then
            echo ::set-output name=OS::macOS
          else
            echo ::set-output name=OS::windows
          fi
        shell: bash
        env:
          MATRIX_OS: ${{ matrix.os }}

      - name: Gather cross build info
        id: cross_info
        if: ${{ matrix.os == 'ubuntu-18.04' }}
        run: |
          GIT_COMMIT=$(git describe --always)
          BUILD_DATE=$(date -u +%FT%TZ)
          echo ::set-output name=sha::${GIT_COMMIT}
          echo ::set-output name=created::${BUILD_DATE}

      - name: Cross build with xgo
        if: ${{ matrix.os == 'ubuntu-18.04' }}
        uses: crazy-max/ghaction-xgo@v1
        with:
          go_version: ${{ env.GO_VERSION }}
          dest: cross
          prefix: sftpgo
          targets: linux/arm64,linux/ppc64le
          v: true
          x: false
          race: false
          ldflags: -s -w -X github.com/drakkan/sftpgo/version.commit=${{ steps.cross_info.outputs.sha }} -X github.com/drakkan/sftpgo/version.date=${{ steps.cross_info.outputs.created }}
          buildmode: default

      - name: Prepare Release for Linux/macOS
        if: startsWith(matrix.os, 'windows-') != true
        run: |
          mkdir -p output/{init,sqlite,bash_completion,zsh_completion}
          echo "For documentation please take a look here:" > output/README.txt
          echo "" >> output/README.txt
          echo "https://github.com/drakkan/sftpgo/blob/${SFTPGO_VERSION}/README.md" >> output/README.txt
          cp LICENSE output/
          cp sftpgo output/
          cp sftpgo.json output/
          cp sftpgo.db output/sqlite/
          cp -r static output/
          cp -r templates output/
          if [ $OS == 'linux' ]
          then
            cp init/sftpgo.service output/init/
          else
            cp init/com.github.drakkan.sftpgo.plist output/init/
          fi
          ./sftpgo gen completion bash > output/bash_completion/sftpgo
          ./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
          ./sftpgo gen man -d output/man/man1
          gzip output/man/man1/*
          if [ $OS == 'linux' ]
          then
            cp -r output output_arm64
            cp -r output output_ppc64le
            cp -r output output_all
          fi
          cd output
          tar cJvf sftpgo_${SFTPGO_VERSION}_${OS}_x86_64.tar.xz *
          cd ..
          if [ $OS == 'linux' ]
          then
            cp cross/sftpgo-linux-arm64 output_arm64/sftpgo
            cd output_arm64
            tar cJvf sftpgo_${SFTPGO_VERSION}_${OS}_arm64.tar.xz *
            cd ..
            cp cross/sftpgo-linux-ppc64le output_ppc64le/sftpgo
            cd output_ppc64le
            tar cJvf sftpgo_${SFTPGO_VERSION}_${OS}_ppc64le.tar.xz *
            cd ..
            mkdir output_all/{arm64,ppc64le}
            cp cross/sftpgo-linux-arm64 output_all/arm64/sftpgo
            cp cross/sftpgo-linux-ppc64le output_all/ppc64le/sftpgo
            cd output_all
            tar cJvf sftpgo_${SFTPGO_VERSION}_${OS}_bundle.tar.xz *
            cd ..
          fi
        env:
          SFTPGO_VERSION: ${{ steps.get_version.outputs.VERSION }}
          OS: ${{ steps.get_os_name.outputs.OS }}

      - name: Prepare Linux Packages
        id: build_linux_pkgs
        if: ${{ matrix.os == 'ubuntu-18.04' }}
        run: |
          cp -r pkgs pkgs_arm64
          cp -r pkgs pkgs_ppc64le
          cd pkgs
          ./build.sh
          cd ..
          export NFPM_ARCH=arm64
          export BIN_SUFFIX=-linux-arm64
          cp cross/sftpgo${BIN_SUFFIX} .
          cd pkgs_arm64
          ./build.sh
          cd ..
          export NFPM_ARCH=ppc64le
          export BIN_SUFFIX=-linux-ppc64le
          cp cross/sftpgo${BIN_SUFFIX} .
          cd pkgs_ppc64le
          ./build.sh
          cd ..
          PKG_VERSION=${SFTPGO_VERSION:1}
          echo "::set-output name=pkg-version::${PKG_VERSION}"
        env:
          SFTPGO_VERSION: ${{ steps.get_version.outputs.VERSION }}

      - name: Prepare Release for Windows
        if: startsWith(matrix.os, 'windows-')
        run: |
          mkdir output
          copy .\sftpgo.exe .\output
          copy .\sftpgo.json .\output
          copy .\sftpgo.db .\output
          copy .\LICENSE .\output\LICENSE.txt
          mkdir output\templates
          xcopy .\templates .\output\templates\ /E
          mkdir output\static
          xcopy .\static .\output\static\ /E
          iscc windows-installer\sftpgo.iss
        env:
          SFTPGO_ISS_VERSION: ${{ steps.get_version.outputs.VERSION }}
          SFTPGO_ISS_DOC_URL: https://github.com/drakkan/sftpgo/blob/${{ steps.get_version.outputs.VERSION }}/README.md

      - name: Prepare Portable Release for Windows
        if: startsWith(matrix.os, 'windows-')
        run: |
          mkdir win-portable
          copy .\sftpgo.exe .\win-portable
          copy .\sftpgo.json .\win-portable
          copy .\sftpgo.db .\win-portable
          copy .\LICENSE .\win-portable\LICENSE.txt
          mkdir win-portable\templates
          xcopy .\templates .\win-portable\templates\ /E
          mkdir win-portable\static
          xcopy .\static .\win-portable\static\ /E
          Compress-Archive .\win-portable\* sftpgo_portable_x86_64.zip
        env:
          SFTPGO_VERSION: ${{ steps.get_version.outputs.VERSION }}
          OS: ${{ steps.get_os_name.outputs.OS }}

      - name: Download release upload URL
        uses: actions/download-artifact@v2
        with:
          name: upload_url

      - name: Get release upload URL
        id: upload_url
        run: |
          URL=$(cat upload_url.txt)
          echo "::set-output name=url::${URL}"
        shell: bash

      - name: Upload Linux/macOS Release
        if: startsWith(matrix.os, 'windows-') != true
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ steps.upload_url.outputs.url }}
          asset_path: ./output/sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.tar.xz
          asset_name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.tar.xz
          asset_content_type: application/x-xz

      - name: Upload Linux/arm64 Release
        if: ${{ matrix.os == 'ubuntu-18.04' }}
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ steps.upload_url.outputs.url }}
          asset_path: ./output_arm64/sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_arm64.tar.xz
          asset_name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_arm64.tar.xz
          asset_content_type: application/x-xz

      - name: Upload Linux/ppc64le Release
        if: ${{ matrix.os == 'ubuntu-18.04' }}
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ steps.upload_url.outputs.url }}
          asset_path: ./output_ppc64le/sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_ppc64le.tar.xz
          asset_name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_ppc64le.tar.xz
          asset_content_type: application/x-xz

      - name: Upload Linux Bundle Release
        if: ${{ matrix.os == 'ubuntu-18.04' }}
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ steps.upload_url.outputs.url }}
          asset_path: ./output_all/sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_bundle.tar.xz
          asset_name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_bundle.tar.xz
          asset_content_type: application/x-xz

      - name: Upload Windows Release
        if: startsWith(matrix.os, 'windows-')
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ steps.upload_url.outputs.url }}
          asset_path: ./sftpgo_windows_x86_64.exe
          asset_name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.exe
          asset_content_type: application/x-dosexec

      - name: Upload Portable Windows Release
        if: startsWith(matrix.os, 'windows-')
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ steps.upload_url.outputs.url }}
          asset_path: ./sftpgo_portable_x86_64.zip
          asset_name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_portable_x86_64.zip
          asset_content_type: application/zip

      - name: Upload Debian Package
        if: ${{ matrix.os == 'ubuntu-18.04' }}
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ steps.upload_url.outputs.url }}
          asset_path: ./pkgs/dist/deb/sftpgo_${{ steps.build_linux_pkgs.outputs.pkg-version }}-1_amd64.deb
          asset_name: sftpgo_${{ steps.build_linux_pkgs.outputs.pkg-version }}-1_amd64.deb
          asset_content_type: application/vnd.debian.binary-package

      - name: Upload RPM Package
        if: ${{ matrix.os == 'ubuntu-18.04' }}
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ steps.upload_url.outputs.url }}
          asset_path: ./pkgs/dist/rpm/sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-1.x86_64.rpm
          asset_name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-1.x86_64.rpm
          asset_content_type: application/x-rpm

      - name: Upload Debian Package arm64
        if: ${{ matrix.os == 'ubuntu-18.04' }}
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ steps.upload_url.outputs.url }}
          asset_path: ./pkgs_arm64/dist/deb/sftpgo_${{ steps.build_linux_pkgs.outputs.pkg-version }}-1_arm64.deb
          asset_name: sftpgo_${{ steps.build_linux_pkgs.outputs.pkg-version }}-1_arm64.deb
          asset_content_type: application/vnd.debian.binary-package

      - name: Upload RPM Package arm64
        if: ${{ matrix.os == 'ubuntu-18.04' }}
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ steps.upload_url.outputs.url }}
          asset_path: ./pkgs_arm64/dist/rpm/sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-1.aarch64.rpm
          asset_name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-1.aarch64.rpm
          asset_content_type: application/x-rpm

      - name: Upload Debian Package ppc64le
        if: ${{ matrix.os == 'ubuntu-18.04' }}
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ steps.upload_url.outputs.url }}
          asset_path: ./pkgs_ppc64le/dist/deb/sftpgo_${{ steps.build_linux_pkgs.outputs.pkg-version }}-1_ppc64el.deb
          asset_name: sftpgo_${{ steps.build_linux_pkgs.outputs.pkg-version }}-1_ppc64el.deb
          asset_content_type: application/vnd.debian.binary-package

      - name: Upload RPM Package ppc64le
        if: ${{ matrix.os == 'ubuntu-18.04' }}
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ steps.upload_url.outputs.url }}
          asset_path: ./pkgs_ppc64le/dist/rpm/sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-1.ppc64le.rpm
          asset_name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-1.ppc64le.rpm
          asset_content_type: application/x-rpm
```
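The whole release pipeline is triggered only by pushed tags matching `v*`. A sketch of how a maintainer would cut a release (the tag name `v2.0.3` is an example, not a reference to a real release):

```bash
# Cutting a release: pushing any tag matching "v*" triggers the
# Release workflow above, which creates the GitHub release and
# attaches the source, binary, and package assets to it.
git tag -a v2.0.3 -m "Release v2.0.3"
git push origin v2.0.3
```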
.gitignore (vendored, new file, 3 lines added)

```
# compilation output
sftpgo
sftpgo.exe
```
.golangci.yml (new file, 42 lines added)

```yaml
run:
  timeout: 5m
  issues-exit-code: 1
  tests: true

linters-settings:
  dupl:
    threshold: 150
  errcheck:
    check-type-assertions: false
    check-blank: false
  goconst:
    min-len: 3
    min-occurrences: 3
  gocyclo:
    min-complexity: 15
  gofmt:
    simplify: true
  goimports:
    local-prefixes: github.com/drakkan/sftpgo
  maligned:
    suggest-new: true

linters:
  enable:
    - goconst
    - errcheck
    - gofmt
    - goimports
    - golint
    - unconvert
    - unparam
    - bodyclose
    - gocyclo
    - misspell
    - maligned
    - whitespace
    - dupl
    - scopelint
    - rowserrcheck
    - dogsled
```
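CI pins golangci-lint v1.37.1 via the official action; the same configuration can be exercised locally. A minimal sketch, assuming `golangci-lint` is installed and on the PATH:

```bash
# Run the lint suite locally; .golangci.yml is picked up automatically
# from the repository root, so no extra flags are needed.
golangci-lint run ./...
```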
.travis.yml (deleted, 24 lines removed)

```diff
@@ -1,24 +0,0 @@
-language: go
-
-os:
-  - linux
-  - osx
-
-go:
-  - "1.12.x"
-  - "1.13.x"
-
-env:
-  - GO111MODULE=on
-
-before_script:
-  - sqlite3 sftpgo.db 'CREATE TABLE "users" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE, "password" varchar(255) NULL, "public_keys" text NULL, "home_dir" varchar(255) NOT NULL, "uid" integer NOT NULL, "gid" integer NOT NULL, "max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "permissions" text NOT NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL);'
-
-install:
-  - go get -v -t ./...
-
-script:
-  - go test -v ./... -coverprofile=coverage.txt -covermode=atomic
-
-after_success:
-  - bash <(curl -s https://codecov.io/bash)
```
Dockerfile (new file, 63 lines added)

```dockerfile
FROM golang:1.16-buster as builder

ENV GOFLAGS="-mod=readonly"

RUN mkdir -p /workspace
WORKDIR /workspace

ARG GOPROXY

COPY go.mod go.sum ./
RUN go mod download

ARG COMMIT_SHA

# This ARG allows to disable some optional features and it might be useful if you build the image yourself.
# For example you can disable S3 and GCS support like this:
# --build-arg FEATURES=nos3,nogcs
ARG FEATURES

COPY . .

RUN set -xe && \
    export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --dirty)} && \
    go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -v -o sftpgo

FROM debian:buster-slim

# Set to "true" to install the optional git and rsync dependencies
ARG INSTALL_OPTIONAL_PACKAGES=false

RUN apt-get update && apt-get install --no-install-recommends -y ca-certificates mime-support && rm -rf /var/lib/apt/lists/*

RUN if [ "${INSTALL_OPTIONAL_PACKAGES}" = "true" ]; then apt-get update && apt-get install --no-install-recommends -y git rsync && rm -rf /var/lib/apt/lists/*; fi

RUN mkdir -p /etc/sftpgo /var/lib/sftpgo /usr/share/sftpgo /srv/sftpgo/data /srv/sftpgo/backups

RUN groupadd --system -g 1000 sftpgo && \
    useradd --system --gid sftpgo --no-create-home \
    --home-dir /var/lib/sftpgo --shell /usr/sbin/nologin \
    --comment "SFTPGo user" --uid 1000 sftpgo

COPY --from=builder /workspace/sftpgo.json /etc/sftpgo/sftpgo.json
COPY --from=builder /workspace/templates /usr/share/sftpgo/templates
COPY --from=builder /workspace/static /usr/share/sftpgo/static
COPY --from=builder /workspace/sftpgo /usr/local/bin/

# Log to the stdout so the logs will be available using docker logs
ENV SFTPGO_LOG_FILE_PATH=""
# templates and static paths are inside the container
ENV SFTPGO_HTTPD__TEMPLATES_PATH=/usr/share/sftpgo/templates
ENV SFTPGO_HTTPD__STATIC_FILES_PATH=/usr/share/sftpgo/static

# Modify the default configuration file
RUN sed -i "s|\"users_base_dir\": \"\",|\"users_base_dir\": \"/srv/sftpgo/data\",|" /etc/sftpgo/sftpgo.json && \
    sed -i "s|\"backups\"|\"/srv/sftpgo/backups\"|" /etc/sftpgo/sftpgo.json && \
    sed -i "s|\"address\": \"127.0.0.1\",|\"address\": \"\",|" /etc/sftpgo/sftpgo.json

RUN chown -R sftpgo:sftpgo /etc/sftpgo /srv/sftpgo && chown sftpgo:sftpgo /var/lib/sftpgo && chmod 700 /srv/sftpgo/backups

WORKDIR /var/lib/sftpgo
USER 1000:1000

CMD ["sftpgo", "serve"]
```
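The FEATURES and INSTALL_OPTIONAL_PACKAGES build arguments documented above can be combined. A sketch of a local build with S3/GCS support compiled out (the tag `sftpgo:custom` is an arbitrary example name):

```bash
# Build the Debian-based image without S3/GCS support and without the
# optional git/rsync packages, per the Dockerfile's own comments.
docker build \
  --build-arg FEATURES=nos3,nogcs \
  --build-arg INSTALL_OPTIONAL_PACKAGES=false \
  -t sftpgo:custom .
```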
Dockerfile.alpine (new file, 68 lines added)

```dockerfile
FROM golang:1.16-alpine3.12 AS builder

ENV GOFLAGS="-mod=readonly"

RUN apk add --update --no-cache bash ca-certificates curl git gcc g++

RUN mkdir -p /workspace
WORKDIR /workspace

ARG GOPROXY

COPY go.mod go.sum ./
RUN go mod download

ARG COMMIT_SHA

# This ARG allows to disable some optional features and it might be useful if you build the image yourself.
# For example you can disable S3 and GCS support like this:
# --build-arg FEATURES=nos3,nogcs
ARG FEATURES

COPY . .

RUN set -xe && \
    export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --dirty)} && \
    go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -v -o sftpgo


FROM alpine:3.12

# Set to "true" to install the optional git and rsync dependencies
ARG INSTALL_OPTIONAL_PACKAGES=false

RUN apk add --update --no-cache ca-certificates tzdata mailcap

RUN if [ "${INSTALL_OPTIONAL_PACKAGES}" = "true" ]; then apk add --update --no-cache rsync git; fi

# set up nsswitch.conf for Go's "netgo" implementation
# https://github.com/gliderlabs/docker-alpine/issues/367#issuecomment-424546457
RUN test ! -e /etc/nsswitch.conf && echo 'hosts: files dns' > /etc/nsswitch.conf

RUN mkdir -p /etc/sftpgo /var/lib/sftpgo /usr/share/sftpgo /srv/sftpgo/data /srv/sftpgo/backups

RUN addgroup -g 1000 -S sftpgo && \
    adduser -u 1000 -h /var/lib/sftpgo -s /sbin/nologin -G sftpgo -S -D -H -g "SFTPGo user" sftpgo

COPY --from=builder /workspace/sftpgo.json /etc/sftpgo/sftpgo.json
COPY --from=builder /workspace/templates /usr/share/sftpgo/templates
COPY --from=builder /workspace/static /usr/share/sftpgo/static
COPY --from=builder /workspace/sftpgo /usr/local/bin/

# Log to the stdout so the logs will be available using docker logs
ENV SFTPGO_LOG_FILE_PATH=""
# templates and static paths are inside the container
ENV SFTPGO_HTTPD__TEMPLATES_PATH=/usr/share/sftpgo/templates
ENV SFTPGO_HTTPD__STATIC_FILES_PATH=/usr/share/sftpgo/static

# Modify the default configuration file
RUN sed -i "s|\"users_base_dir\": \"\",|\"users_base_dir\": \"/srv/sftpgo/data\",|" /etc/sftpgo/sftpgo.json && \
    sed -i "s|\"backups\"|\"/srv/sftpgo/backups\"|" /etc/sftpgo/sftpgo.json && \
    sed -i "s|\"address\": \"127.0.0.1\",|\"address\": \"\",|" /etc/sftpgo/sftpgo.json

RUN chown -R sftpgo:sftpgo /etc/sftpgo /srv/sftpgo && chown sftpgo:sftpgo /var/lib/sftpgo && chmod 700 /srv/sftpgo/backups

WORKDIR /var/lib/sftpgo
USER 1000:1000

CMD ["sftpgo", "serve"]
```
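Both images drop privileges to UID/GID 1000 and keep user data and backups under /srv/sftpgo. A sketch of running the Alpine variant with persistent host volumes; the host paths are illustrative and the 2022 port mapping assumes SFTPGo's default SFTP bind port:

```bash
# Run the Alpine image, persisting data and backups on the host.
# Host-side paths and the port number are example values.
docker run --name sftpgo \
  -p 2022:2022 \
  -v /srv/sftpgo/data:/srv/sftpgo/data \
  -v /srv/sftpgo/backups:/srv/sftpgo/backups \
  drakkan/sftpgo:alpine
```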
145
LICENSE
145
LICENSE
@@ -1,5 +1,5 @@
|
||||
GNU GENERAL PUBLIC LICENSE
|
||||
Version 3, 29 June 2007
|
||||
GNU AFFERO GENERAL PUBLIC LICENSE
|
||||
Version 3, 19 November 2007
|
||||
|
||||
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
|
||||
Everyone is permitted to copy and distribute verbatim copies
|
||||
@@ -7,17 +7,15 @@
|
||||
|
||||
Preamble
|
||||
|
||||
The GNU General Public License is a free, copyleft license for
|
||||
software and other kinds of works.
|
||||
The GNU Affero General Public License is a free, copyleft license for
|
||||
software and other kinds of works, specifically designed to ensure
|
||||
cooperation with the community in the case of network server software.
|
||||
|
||||
The licenses for most software and other practical works are designed
|
||||
to take away your freedom to share and change the works. By contrast,
|
||||
the GNU General Public License is intended to guarantee your freedom to
|
||||
our General Public Licenses are intended to guarantee your freedom to
|
||||
share and change all versions of a program--to make sure it remains free
|
||||
software for all its users. We, the Free Software Foundation, use the
|
||||
GNU General Public License for most of our software; it applies also to
|
||||
any other work released this way by its authors. You can apply it to
|
||||
your programs, too.
|
||||
software for all its users.
|
||||
|
||||
When we speak of free software, we are referring to freedom, not
|
||||
price. Our General Public Licenses are designed to make sure that you
|
||||
@@ -26,44 +24,34 @@ them if you wish), that you receive source code or can get it if you
|
||||
want it, that you can change the software or use pieces of it in new
|
||||
free programs, and that you know you can do these things.
|
||||
|
||||
To protect your rights, we need to prevent others from denying you
|
||||
these rights or asking you to surrender the rights. Therefore, you have
|
||||
certain responsibilities if you distribute copies of the software, or if
|
you modify it: responsibilities to respect the freedom of others.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.

For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.

Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.

For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.

Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.

Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.

The precise terms and conditions for copying, distribution and
modification follow.
@@ -72,7 +60,7 @@ modification follow.

0. Definitions.

"This License" refers to version 3 of the GNU General Public License.
"This License" refers to version 3 of the GNU Affero General Public License.

"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
@@ -549,35 +537,45 @@ to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.

13. Use with the GNU Affero General Public License.
13. Remote Network Interaction; Use with the GNU General Public License.

Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.

Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.

14. Revised Versions of this License.

The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.

Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.

If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.

@@ -635,40 +633,29 @@ the "copyright" line and a pointer to where the full notice is found.
Copyright (C) <year> <name of author>

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
it under the terms of the GNU Affero General Public License as published
by the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
GNU Affero General Public License for more details.

You should have received a copy of the GNU General Public License
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.

Also add information on how to contact you by electronic and paper mail.

If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:

<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.

You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.

The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.

README.md

@@ -1,428 +1,260 @@
# SFTPGo

[Build Status](https://travis-ci.org/drakkan/sftpgo) [Code Coverage](https://codecov.io/gh/drakkan/sftpgo/branch/master) [Go Report Card](https://goreportcard.com/report/github.com/drakkan/sftpgo) [License: GPL v3](https://www.gnu.org/licenses/gpl-3.0) [Mentioned in Awesome Go](https://github.com/avelino/awesome-go)

Full featured and highly configurable SFTP server
[Code Coverage](https://codecov.io/gh/drakkan/sftpgo/branch/main)
[Go Report Card](https://goreportcard.com/report/github.com/drakkan/sftpgo)
[License: AGPL v3](https://www.gnu.org/licenses/agpl-3.0)
[Docker Pulls](https://hub.docker.com/r/drakkan/sftpgo)
[Mentioned in Awesome Go](https://github.com/avelino/awesome-go)

Fully featured and highly configurable SFTP server with optional FTP/S and WebDAV support, written in Go.
Several storage backends are supported: local filesystem, encrypted local filesystem, S3 (compatible) Object Storage, Google Cloud Storage, Azure Blob Storage, SFTP.

## Features

- Each account is chrooted to its home directory.
- SFTP accounts are virtual accounts stored in a "data provider".
- SQLite, MySQL, PostgreSQL and bbolt (key/value store in pure Go) data providers are supported.
- SFTPGo uses virtual accounts stored inside a "data provider".
- SQLite, MySQL, PostgreSQL, bbolt (key/value store in pure Go) and in-memory data providers are supported.
- Each local account is chrooted in its home directory; for cloud-based accounts you can restrict access to a certain base path.
- Public key and password authentication. Multiple public keys per user are supported.
- SSH user [certificate authentication](https://cvsweb.openbsd.org/src/usr.bin/ssh/PROTOCOL.certkeys?rev=1.8).
- Keyboard interactive authentication. You can easily set up customizable multi-factor authentication.
- Partial authentication. You can configure multi-step authentication requiring, for example, the user password after successful public key authentication.
- Per user authentication methods. You can configure the allowed authentication methods for each user.
- Custom authentication via external programs/HTTP API is supported.
- [Data At Rest Encryption](./docs/dare.md) is supported.
- Dynamic user modification before login via external programs/HTTP API is supported.
- Quota support: accounts can have individual quota expressed as max total size and/or max number of files.
- Bandwidth throttling is supported, with distinct settings for upload and download.
- Per user maximum concurrent sessions.
- Per user permissions: list directory contents, upload, overwrite, download, delete, rename, create directories, create symlinks can be enabled or disabled.
- Per user files/folders ownership: you can map all the users to the system account that runs SFTPGo (all platforms are supported) or you can run SFTPGo as root user and map each user or group of users to a different system account (*NIX only).
- Configurable custom commands and/or HTTP notifications on upload, download, delete or rename.
- Per user and per directory permission management: list directory contents, upload, overwrite, download, delete, rename, create directories, create symlinks, change owner/group and mode, change access and modification times.
- Per user files/folders ownership mapping: you can map all the users to the system account that runs SFTPGo (all platforms are supported) or you can run SFTPGo as root user and map each user or group of users to a different system account (\*NIX only).
- Per user IP filters are supported: login can be restricted to specific ranges of IP addresses or to a specific IP address.
- Per user and per directory shell-like pattern filters are supported: files can be allowed or denied based on shell-like patterns.
- Virtual folders are supported: directories outside the user home directory can be exposed as virtual folders.
- Configurable custom commands and/or HTTP notifications on file upload, download, pre-delete, delete, rename, on SSH commands and on user add, update and delete.
- Automatic termination of idle connections.
- Automatic blocklist management is supported using the built-in [defender](./docs/defender.md).
- Atomic uploads are configurable.
- Optional SCP support.
- Prometheus metrics are exposed.
- REST API for users and quota management and real time reports for the active connections with the possibility of forcibly closing a connection.
- Web based interface to manage users and connections.
- Easy migration from Unix system user accounts.
- Configuration is at your choice: JSON, TOML, YAML, HCL, envfile are supported.
- Log files are accurate and they are saved in the easily parsable JSON format.
- Support for Git repositories over SSH.
- SCP and rsync are supported.
- FTP/S is supported. You can configure the FTP service to require TLS for both control and data connections.
- [WebDAV](./docs/webdav.md) is supported.
- Two-Way TLS authentication, aka TLS with client certificate authentication, is supported for REST API/Web Admin, FTPS and WebDAV over HTTPS.
- Support for serving local filesystem, encrypted local filesystem, S3 Compatible Object Storage, Google Cloud Storage, Azure Blob Storage or other SFTP accounts over SFTP/SCP/FTP/WebDAV.
- Per user protocol restrictions. You can configure the allowed protocols (SSH/FTP/WebDAV) for each user.
- [Prometheus metrics](./docs/metrics.md) are exposed.
- Support for HAProxy PROXY protocol: you can proxy and/or load balance the SFTP/SCP/FTP/WebDAV service without losing the information about the client's address.
- [REST API](./docs/rest-api.md) for users and folders management, backup, restore and real time reports of the active connections with the possibility of forcibly closing a connection.
- [Web based administration interface](./docs/web-admin.md) to easily manage users, folders and connections.
- Easy [migration](./examples/convertusers) from Linux system user accounts.
- [Portable mode](./docs/portable-mode.md): a convenient way to share a single directory on demand.
- [SFTP subsystem mode](./docs/sftp-subsystem.md): you can use SFTPGo as OpenSSH's SFTP subsystem.
- Performance analysis using built-in [profiler](./docs/profiling.md).
- Configuration format is at your choice: JSON, TOML, YAML, HCL, envfile are supported.
- Log files are accurate and they are saved in the easily parsable JSON format ([more information](./docs/logs.md)).

## Platforms

SFTPGo is developed and tested on Linux. After each commit, the code is automatically built and tested on Linux and macOS using Travis CI.
The test cases are regularly executed manually and pass on Windows. Other UNIX variants such as *BSD should work too.
SFTPGo is developed and tested on Linux. After each commit, the code is automatically built and tested on Linux, macOS and Windows using a [GitHub Action](./.github/workflows/development.yml). The test cases are regularly executed manually and pass on FreeBSD. Other *BSD variants should work too.

## Requirements

- Go 1.12 or higher.
- A suitable SQL server or key/value store to use as data provider: PostgreSQL 9.4+ or MySQL 5.6+ or SQLite 3.x or bbolt 1.3.x
- Go 1.15 or higher as a build-only dependency.
- A suitable SQL server to use as data provider: PostgreSQL 9.4+ or MySQL 5.6+ or SQLite 3.x.
- The SQL server is optional: you can choose to use an embedded bolt database as key/value store or an in-memory data provider.

## Installation

Simply install the package to your [$GOPATH](https://github.com/golang/go/wiki/GOPATH "GOPATH") with the [go tool](https://golang.org/cmd/go/ "go command") from shell:
Binary releases for Linux, macOS, and Windows are available. Please visit the [releases](https://github.com/drakkan/sftpgo/releases "releases") page.

```
$ go get -u github.com/drakkan/sftpgo
```

An official Docker image is available. Documentation is [here](./docker/README.md).

Make sure [Git is installed](https://git-scm.com/downloads) on your machine and in your system's `PATH`.
Some Linux distro packages are available:

SFTPGo depends on [go-sqlite3](https://github.com/mattn/go-sqlite3), which is a cgo package and so requires a `C` compiler at build time.
On Linux and macOS a compiler is easy to install or already installed; on Windows you need to download [MinGW-w64](https://sourceforge.net/projects/mingw-w64/files/) and build SFTPGo from its command prompt.

- For Arch Linux via AUR:
  - [sftpgo](https://aur.archlinux.org/packages/sftpgo/). This package follows stable releases. It requires `git`, `gcc` and `go` to build.
  - [sftpgo-bin](https://aur.archlinux.org/packages/sftpgo-bin/). This package follows stable releases downloading the prebuilt Linux binary from GitHub. It does not require `git`, `gcc` and `go` to build.
  - [sftpgo-git](https://aur.archlinux.org/packages/sftpgo-git/). This package builds and installs the latest git `main` branch. It requires `git`, `gcc` and `go` to build.
- Deb and RPM packages are built after each commit and for each release.
- For Ubuntu a PPA is available [here](https://launchpad.net/~sftpgo/+archive/ubuntu/sftpgo).

The compiler is a build-time-only dependency; it is not required at runtime.
You can easily test new features by selecting a commit from the [Actions](https://github.com/drakkan/sftpgo/actions) page and downloading the matching build artifacts for Linux, macOS or Windows. GitHub stores artifacts for 90 days.

If you don't need SQLite, you can also get/build SFTPGo by setting the environment variable `CGO_ENABLED` to 0; this way SQLite support will be disabled, but PostgreSQL, MySQL and bbolt will work and you don't need a `C` compiler for building.

Version info, such as git commit and build date, can be embedded by setting the following string variables at build time:

- `github.com/drakkan/sftpgo/utils.commit`
- `github.com/drakkan/sftpgo/utils.date`

For example, you can build using the following command:

```
go build -i -ldflags "-s -w -X github.com/drakkan/sftpgo/utils.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/utils.date=`date -u +%FT%TZ`" -o sftpgo
```

and you will get a version that includes git commit and build date like this one:

```
sftpgo -v
SFTPGo version: 0.9.0-dev-90607d4-dirty-2019-08-08T19:28:36Z
```
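
The `-X` flags above simply populate package-level string variables at link time. For reference, here is a minimal sketch of how such settable variables are typically declared; the helper function and exact layout of the real `utils` package are assumptions for illustration, not the actual SFTPGo code:

```go
// Package utils: build-time metadata settable via
// -ldflags "-X github.com/drakkan/sftpgo/utils.commit=... -X github.com/drakkan/sftpgo/utils.date=...".
package utils

var (
	commit = "" // populated at link time with the git commit
	date   = "" // populated at link time with the build date
)

// versionString is a hypothetical helper showing how the embedded
// values could be composed into the string printed by `sftpgo -v`.
func versionString(version string) string {
	return version + "-" + commit + "-" + date
}
```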

For Linux, a `systemd` sample [service](https://github.com/drakkan/sftpgo/tree/master/init/sftpgo.service "systemd service") can be found inside the source tree.

Alternately, you can use distro packages:

- Arch Linux PKGBUILD is available on [AUR](https://aur.archlinux.org/packages/sftpgo/ "SFTPGo")

For macOS a `launchd` sample [service](https://github.com/drakkan/sftpgo/tree/master/init/com.github.drakkan.sftpgo.plist "launchd plist") can be found inside the source tree. The `launchd` plist assumes that `sftpgo` has `/usr/local/opt/sftpgo` as base directory.

On Windows you can run `SFTPGo` as a Windows Service; please read the "Configuration" section below for more details.
Alternately, you can [build from source](./docs/build-from-source.md).

## Configuration

The `sftpgo` executable can be used this way:
A full explanation of all configuration methods can be found [here](./docs/full-configuration.md).

```
Usage:
  sftpgo [command]

Available Commands:
  help        Help about any command
  serve       Start the SFTP Server

Flags:
  -h, --help      help for sftpgo
  -v, --version

Use "sftpgo [command] --help" for more information about a command
```

Please make sure to [initialize the data provider](#data-provider-initialization-and-management) before running the daemon.

To start SFTPGo with the default settings, simply run `sftpgo serve`.

The `serve` subcommand supports the following flags:

- `--config-dir` string. Location of the config dir. This directory should contain the `sftpgo` configuration file and is used as the base for files with a relative path (e.g. the private keys for the SFTP server, the SQLite or bbolt database if you use SQLite or bbolt as data provider). The default value is "." or the value of the `SFTPGO_CONFIG_DIR` environment variable.
- `--config-file` string. Name of the configuration file. It must be the name of a file stored in config-dir, not the absolute path to the configuration file. The specified file name must have no extension; we automatically load JSON, YAML, TOML, HCL and Java properties. The default value is "sftpgo" (and therefore `sftpgo.json`, `sftpgo.yaml` and so on are searched) or the value of the `SFTPGO_CONFIG_FILE` environment variable.
- `--log-compress` boolean. Determines whether the rotated log files should be compressed using gzip. Default `false` or the value of the `SFTPGO_LOG_COMPRESS` environment variable (1 or `true`, 0 or `false`). It is unused if `log-file-path` is empty.
- `--log-file-path` string. Location for the log file, default "sftpgo.log" or the value of the `SFTPGO_LOG_FILE_PATH` environment variable. Leave empty to write logs to the standard error.
- `--log-max-age` int. Maximum number of days to retain old log files. Default 28 or the value of the `SFTPGO_LOG_MAX_AGE` environment variable. It is unused if `log-file-path` is empty.
- `--log-max-backups` int. Maximum number of old log files to retain. Default 5 or the value of the `SFTPGO_LOG_MAX_BACKUPS` environment variable. It is unused if `log-file-path` is empty.
- `--log-max-size` int. Maximum size in megabytes of the log file before it gets rotated. Default 10 or the value of the `SFTPGO_LOG_MAX_SIZE` environment variable. It is unused if `log-file-path` is empty.
- `--log-verbose` boolean. Enable verbose logs. Default `true` or the value of the `SFTPGO_LOG_VERBOSE` environment variable (1 or `true`, 0 or `false`).

If you don't configure any private host keys, the daemon will use `id_rsa` in the configuration directory. If that file doesn't exist, the daemon will attempt to autogenerate it (if the user that executes SFTPGo has write access to the config-dir). The server supports any private key format supported by [`crypto/ssh`](https://github.com/golang/crypto/blob/master/ssh/keys.go#L32).

Before starting `sftpgo` a data provider must be configured.

Sample SQL scripts to create the required database structure can be found inside the source tree [sql](https://github.com/drakkan/sftpgo/tree/master/sql "sql") directory. The SQL script file names are, by convention, the date in `YYYYMMDD` format with the suffix `.sql`. You need to apply all the SQL scripts for your database ordered by name, for example `20190706.sql` must be applied before `20190728.sql` and so on.

The `sftpgo` configuration file contains the following sections:

- **"sftpd"**, the configuration for the SFTP server
  - `bind_port`, integer. The port used for serving SFTP requests. Default: 2022
  - `bind_address`, string. Leave blank to listen on all available network interfaces. Default: ""
  - `idle_timeout`, integer. Time in minutes after which an idle client will be disconnected. 0 means disabled. Default: 15
  - `max_auth_tries` integer. Maximum number of authentication attempts permitted per connection. If set to a negative number, the number of attempts is unlimited. If set to zero, the number of attempts is limited to 6.
  - `umask`, string. Umask for the new files and directories. This setting has no effect on Windows. Default: "0022"
  - `banner`, string. Identification string used by the server. Leave empty to use the default banner. Default "SFTPGo_version"
  - `upload_mode` integer. 0 means standard: the files are uploaded directly to the requested path. 1 means atomic: files are uploaded to a temporary path and renamed to the requested path when the client ends the upload. Atomic mode avoids problems such as a web server that serves partial files when the files are being uploaded. In atomic mode, if there is an upload error, the temporary file is deleted and so the requested upload path will not contain a partial file. 2 means atomic with resume support: as atomic, but if there is an upload error the temporary file is renamed to the requested path and not deleted; this way a client can reconnect and resume the upload.
  - `actions`, struct. It contains the command to execute and/or the HTTP URL to notify and the trigger conditions
    - `execute_on`, list of strings. Valid values are `download`, `upload`, `delete`, `rename`. On folder deletion a `delete` notification will be sent for each deleted file. Actions will not be executed if an error is detected, and so a partial file is uploaded or downloaded. Leave empty to disable actions. The `upload` condition includes both uploads to new files and overwrites of existing files
    - `command`, string. Absolute path to the command to execute. Leave empty to disable. The command is invoked with the following arguments:
      - `action`, any valid `execute_on` string
      - `username`, user who performed the action
      - `path` to the affected file. For the `rename` action this is the old file name
      - `target_path`, non-empty for the `rename` action, this is the new file name
    - `http_notification_url`, a valid URL. An HTTP GET request will be executed to this URL. Leave empty to disable. The query string will contain the following parameters, which have the same meaning as the command's arguments:
      - `action`
      - `username`
      - `path`
      - `target_path`, added for the `rename` action only
  - `keys`, struct array. It contains the daemon's private keys. If empty or missing, the daemon will search or try to generate `id_rsa` in the configuration directory.
    - `private_key`, path to the private key file. It can be a path relative to the config dir or an absolute one.
  - `enable_scp`, boolean. Default disabled. Set to `true` to enable SCP support. SCP is an experimental feature; we have our own SCP implementation since we can't rely on the `scp` system command to properly handle permissions, quota and the user's home dir restrictions. The SCP protocol is quite simple, but there are no official docs about it, so we need more testing and feedback before enabling it by default. We may not handle some borderline cases or have sneaky bugs. Please do accurate tests yourself before enabling SCP and let us know if something does not work as expected for your use cases. SCP between two remote hosts is supported using the `-3` scp option.
  - `kex_algorithms`, list of strings. Available KEX (Key Exchange) algorithms in preference order. Leave empty to use default values. The supported values can be found here: [`crypto/ssh`](https://github.com/golang/crypto/blob/master/ssh/common.go#L46 "Supported kex algos")
  - `ciphers`, list of strings. Allowed ciphers. Leave empty to use default values. The supported values can be found here: [`crypto/ssh`](https://github.com/golang/crypto/blob/master/ssh/common.go#L28 "Supported ciphers")
  - `macs`, list of strings. Available MAC (message authentication code) algorithms in preference order. Leave empty to use default values. The supported values can be found here: [`crypto/ssh`](https://github.com/golang/crypto/blob/master/ssh/common.go#L84 "Supported MACs")
  - `login_banner_file`, path to the login banner file. The contents of the specified file, if any, are sent to the remote user before authentication is allowed. It can be a path relative to the config dir or an absolute one. Leave empty to send no login banner
- **"data_provider"**, the configuration for the data provider
  - `driver`, string. Supported drivers are `sqlite`, `mysql`, `postgresql`, `bolt`
  - `name`, string. Database name. For driver `sqlite` this can be the database name relative to the config dir or the absolute path to the SQLite database.
  - `host`, string. Database host. Leave empty for drivers `sqlite` and `bolt`
  - `port`, integer. Database port. Leave empty for drivers `sqlite` and `bolt`
  - `username`, string. Database user. Leave empty for drivers `sqlite` and `bolt`
  - `password`, string. Database password. Leave empty for drivers `sqlite` and `bolt`
  - `sslmode`, integer. Used for drivers `mysql` and `postgresql`. 0 disables SSL/TLS connections, 1 requires SSL, 2 sets the SSL mode to `verify-ca` for driver `postgresql` and `skip-verify` for driver `mysql`, 3 sets the SSL mode to `verify-full` for driver `postgresql` and `preferred` for driver `mysql`
  - `connectionstring`, string. Provide a custom database connection string. If not empty, this connection string will be used instead of building one using the previous parameters. Leave empty for driver `bolt`
  - `users_table`, string. Database table for SFTP users
  - `manage_users`, integer. Set to 0 to disable users management, 1 to enable
  - `track_quota`, integer. Set the preferred way to track users quota between the following choices:
    - 0, disable quota tracking. The REST API to scan the user dir and update the quota will do nothing
    - 1, quota is updated each time a user uploads or deletes a file, even if the user has no quota restrictions
    - 2, quota is updated each time a user uploads or deletes a file, but only for users with quota restrictions. With this configuration the "quota scan" REST API can still be used to periodically update space usage for users without quota restrictions
  - `pool_size`, integer. Sets the maximum number of open connections for the `mysql` and `postgresql` drivers. Default 0 (unlimited)
  - `users_base_dir`, string. Users' default base directory. If no home dir is defined while adding a new user, and this value is a valid absolute path, then the user home dir will be automatically defined as the path obtained by joining the base dir and the username
- **"httpd"**, the configuration for the HTTP server used to serve the REST API
  - `bind_port`, integer. The port used for serving HTTP requests. Set to 0 to disable the HTTP server. Default: 8080
  - `bind_address`, string. Leave blank to listen on all available network interfaces. Default: "127.0.0.1"
  - `templates_path`, string. Path to the HTML web templates. This can be an absolute path or a path relative to the config dir
  - `static_files_path`, string. Path to the static files for the web interface. This can be an absolute path or a path relative to the config dir

Here is a full example showing the default config in JSON format:

```json
{
  "sftpd": {
    "bind_port": 2022,
    "bind_address": "",
    "idle_timeout": 15,
    "max_auth_tries": 0,
    "umask": "0022",
    "banner": "SFTPGo",
    "actions": {
      "execute_on": [],
      "command": "",
      "http_notification_url": ""
    },
    "keys": [],
    "enable_scp": false
  },
  "data_provider": {
    "driver": "sqlite",
    "name": "sftpgo.db",
    "host": "",
    "port": 5432,
    "username": "",
    "password": "",
    "sslmode": 0,
    "connection_string": "",
    "users_table": "users",
    "manage_users": 1,
    "track_quota": 2,
    "pool_size": 0,
    "users_base_dir": ""
  },
  "httpd": {
    "bind_port": 8080,
    "bind_address": "127.0.0.1",
    "templates_path": "templates",
    "static_files_path": "static"
  }
}
```

If you want to use a private key that uses an algorithm other than RSA, or more than one private key, then replace the empty `keys` array with something like this:

```json
"keys": [
  {
    "private_key": "id_rsa"
  },
  {
    "private_key": "id_ecdsa"
  }
]
```

The configuration can be read from JSON, TOML, YAML, HCL, envfile and Java properties config files. If your `config-file` flag is set to `sftpgo` (default value), you need to create a configuration file called `sftpgo.json` or `sftpgo.yaml` and so on inside `config-dir`.

You can also override all the available configuration options using environment variables. SFTPGo will check for environment variables with a name matching the configuration key uppercased and prefixed with `SFTPGO_`. You need to use `__` to traverse a struct.

Let's see some examples:

- To set sftpd `bind_port` you need to define the env var `SFTPGO_SFTPD__BIND_PORT`
- To set the `execute_on` actions you need to define the env var `SFTPGO_SFTPD__ACTIONS__EXECUTE_ON`, for example `SFTPGO_SFTPD__ACTIONS__EXECUTE_ON=upload,download`

Please note that to override configuration options with environment variables, a configuration file containing the options to override is required. You can, for example, deploy the default configuration file and then override the options you need to customize using environment variables.
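
As an illustration of this naming scheme, here is a minimal sketch of how such a mapping can be wired up with [viper](https://github.com/spf13/viper), the configuration library the project uses; the exact wiring inside SFTPGo may differ:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/spf13/viper"
)

func main() {
	// SFTPGO_SFTPD__BIND_PORT overrides the "sftpd.bind_port" key:
	// keys are uppercased, prefixed with SFTPGO_ and "." becomes "__".
	viper.SetEnvPrefix("sftpgo")
	viper.SetEnvKeyReplacer(strings.NewReplacer(".", "__"))
	viper.AutomaticEnv()

	viper.SetDefault("sftpd.bind_port", 2022)
	// Prints 2222 if SFTPGO_SFTPD__BIND_PORT=2222 is set, 2022 otherwise.
	fmt.Println(viper.GetInt("sftpd.bind_port"))
}
```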

To start the SFTP server with the default values for the command line flags simply use:

```bash
sftpgo serve
```

On Windows you can register `SFTPGo` as a Windows Service; take a look at the CLI usage to learn how:

```
sftpgo.exe service --help
Install, Uninstall, Start, Stop and retrieve status for SFTPGo Windows Service

Usage:
  sftpgo service [command]

Available Commands:
  install     Install SFTPGo as Windows Service
  start       Start SFTPGo Windows Service
  status      Retrieve the status for the SFTPGo Windows Service
  stop        Stop SFTPGo Windows Service
  uninstall   Uninstall SFTPGo Windows Service

Flags:
  -h, --help   help for service

Use "sftpgo service [command] --help" for more information about a command.
```

The `install` subcommand accepts the same flags valid for `serve`.

After installing as a Windows Service, please remember to allow network access to the SFTPGo executable using something like this:

```
netsh advfirewall firewall add rule name="SFTPGo Service" dir=in action=allow program="C:\Program Files\SFTPGo\sftpgo.exe"
```

or through the Windows Firewall GUI.

Check out [this documentation](./docs/service.md) if you want to run SFTPGo as a service.

### Data provider initialization and management

Before starting the SFTPGo server please ensure that the configured data provider is properly initialized/updated.

For PostgreSQL and MySQL providers, you need to create the configured database. For SQLite, the configured database will be automatically created at startup. Memory and bolt data providers do not require initialization, but they could require an update to the existing data after upgrading SFTPGo.

SFTPGo will attempt to automatically detect if the data provider is initialized/updated and, if not, will attempt to initialize/update it on startup as needed.

Alternately, you can create/update the required data provider structures yourself using the `initprovider` command.

For example, you can simply execute the following command from the configuration directory:

```bash
sftpgo initprovider
```

Take a look at the CLI usage to learn how to specify a different configuration file:

```bash
sftpgo initprovider --help
```

You can disable automatic data provider checks/updates at startup by setting the `update_mode` configuration key to `1`.

If for some reason you want to downgrade SFTPGo, you may need to downgrade your data provider schema and data as well. You can use the `revertprovider` command for this task.

We support the following schema versions:

- `8`, this is the latest version
- `4`, this is the schema for v1.0.0-v1.2.x

So, if you plan to downgrade from 2.0.x to 1.2.x, you can prepare your data provider by executing the following command from the configuration directory:

```shell
sftpgo revertprovider --to-version 4
```

Take a look at the CLI usage to learn how to specify a different configuration file:

```bash
sftpgo revertprovider --help
```

The `revertprovider` command is not supported for the memory provider.

## Users and folders management

After starting SFTPGo you can manage users and folders using:

- the [web based administration interface](./docs/web-admin.md)
- the [REST API](./docs/rest-api.md)

To support embedded data providers like `bolt` and `SQLite` we can't have a CLI that directly writes users and folders to the data provider; we always have to use the REST API.

## Tutorials

Some step-by-step tutorials can be found inside the source tree's [howto](./docs/howto "How-to") directory.

## Authentication options

### External Authentication

Custom authentication methods can easily be added. SFTPGo supports external authentication modules, and writing a new backend can be as simple as a few lines of shell script. More information can be found [here](./docs/external-auth.md).
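
To give an idea of how small such a module can be, here is a hedged Go sketch of an external authentication program. It assumes the hook contract described in [external-auth.md](./docs/external-auth.md): credentials arrive via `SFTPGO_AUTHD_*` environment variables and the program prints a JSON-serialized user on standard output, with an empty username meaning authentication failure. The credentials and paths below are placeholders.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// minimalUser mirrors just the user fields this sketch needs.
type minimalUser struct {
	Status      int                 `json:"status"`
	Username    string              `json:"username"`
	HomeDir     string              `json:"home_dir"`
	Permissions map[string][]string `json:"permissions"`
}

func main() {
	username := os.Getenv("SFTPGO_AUTHD_USERNAME")
	password := os.Getenv("SFTPGO_AUTHD_PASSWORD")
	user := minimalUser{} // empty username: authentication refused
	if username == "test_user" && password == "secret" {
		user = minimalUser{
			Status:      1,
			Username:    username,
			HomeDir:     "/srv/sftpgo/data/test_user",
			Permissions: map[string][]string{"/": {"*"}},
		}
	}
	resp, err := json.Marshal(user)
	if err != nil {
		os.Exit(1)
	}
	fmt.Println(string(resp))
}
```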

### Keyboard Interactive Authentication

Keyboard interactive authentication is, in general, a series of questions asked by the server with responses provided by the client.
This authentication method is typically used for multi-factor authentication.

More information can be found [here](./docs/keyboard-interactive.md).

## Dynamic user creation or modification

A user can be created or modified by an external program just before the login. More information about this can be found [here](./docs/dynamic-user-mod.md).

## Custom Actions

SFTPGo allows you to configure custom commands and/or HTTP notifications on file upload, download, delete, rename, on SSH commands and on user add, update and delete.

More information about custom actions can be found [here](./docs/custom-actions.md).
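
If you use the HTTP notification variant, the receiving side can be trivial. The sketch below assumes the GET-with-query-string format documented for `http_notification_url` earlier in this README; the listen address and path are placeholders:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Log every notification sent by SFTPGo as a GET request.
	http.HandleFunc("/notify", func(w http.ResponseWriter, r *http.Request) {
		q := r.URL.Query()
		fmt.Printf("action=%s username=%s path=%s target_path=%s\n",
			q.Get("action"), q.Get("username"), q.Get("path"), q.Get("target_path"))
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:8000", nil))
}
```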

## Virtual folders

Directories outside the user home directory can be exposed as virtual folders; more information [here](./docs/virtual-folders.md).

## Other hooks

You can get notified as soon as a new connection is established using the [Post-connect hook](./docs/post-connect-hook.md) and after each login using the [Post-login hook](./docs/post-login-hook.md).
You can use your own hook to [check passwords](./docs/check-password-hook.md).

## Storage backends

### S3 Compatible Object Storage backends

Each user can be mapped to the whole bucket or to a bucket virtual folder. This way, the mapped bucket/virtual folder is exposed over SFTP/SCP/FTP/WebDAV. More information about S3 integration can be found [here](./docs/s3.md).

### Google Cloud Storage backend

Each user can be mapped with a Google Cloud Storage bucket or a bucket virtual folder. This way, the mapped bucket/virtual folder is exposed over SFTP/SCP/FTP/WebDAV. More information about Google Cloud Storage integration can be found [here](./docs/google-cloud-storage.md).

### Azure Blob Storage backend

Each user can be mapped with an Azure Blob Storage container or a container virtual folder. This way, the mapped container/virtual folder is exposed over SFTP/SCP/FTP/WebDAV. More information about Azure Blob Storage integration can be found [here](./docs/azure-blob-storage.md).

### SFTP backend

Each user can be mapped to another SFTP server account or a subfolder of it. More information can be found [here](./docs/sftpfs.md).

### Encrypted backend

Data at-rest encryption is supported via the [cryptfs backend](./docs/dare.md).

### Other Storage backends

Adding new storage backends is quite easy:

- implement the [Fs interface](./vfs/vfs.go#L28 "interface for filesystem backends"); a skeleton sketch follows this list
- update the user method `GetFilesystem` to return the new backend
- update the web interface and the REST API CLI
- add the flags for the new storage backend to the `portable` mode
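
A starting point for the first step might look like the sketch below. The method names shown are illustrative placeholders only, not the real interface: check `vfs/vfs.go` for the actual `Fs` method set before implementing.

```go
package vfs

// MyFs is a hypothetical new storage backend. Add one method per
// entry in the real Fs interface defined in vfs/vfs.go; the two
// methods below are illustrative placeholders, not the full set.
type MyFs struct {
	connectionID string
}

// Name would identify the backend in logs and in the web interface.
func (fs *MyFs) Name() string {
	return "myfs"
}

// ConnectionID would tie filesystem operations to an SFTPGo connection.
func (fs *MyFs) ConnectionID() string {
	return fs.connectionID
}
```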

Anyway, some backends require a pay-per-use account (or they offer a free account for a limited time period only). To be able to add support for such backends or to review pull requests, please provide a test account. The test account must be available for enough time to be able to maintain the backend and do basic tests before each new release.

## Brute force protection

The [connection failed logs](./docs/logs.md) can be used for integration in tools such as [Fail2ban](http://www.fail2ban.org/). Examples of [jails](./fail2ban/jails) and [filters](./fail2ban/filters) working with `systemd`/`journald` are available in the fail2ban directory.

You can also use the built-in [defender](./docs/defender.md).

## Account's configuration properties

For each account the following properties can be configured:
Detailed information about account configuration properties can be found [here](./docs/account.md).

- `username`
- `password` used for password authentication. For users created using the SFTPGo REST API, if the password has no known hashing algo prefix, it will be stored using argon2id. SFTPGo supports checking passwords stored with bcrypt, pbkdf2 and sha512crypt too. For pbkdf2 the supported format is `$<algo>$<iterations>$<salt>$<hashed pwd base64 encoded>`, where algo is `pbkdf2-sha1` or `pbkdf2-sha256` or `pbkdf2-sha512`. For example the `pbkdf2-sha256` of the word `password` using 150000 iterations and `E86a9YMX3zC7` as salt must be stored as `$pbkdf2-sha256$150000$E86a9YMX3zC7$R5J62hsSq+pYw00hLLPKBbcGXmq7fj5+/M0IFoYtZbo=`. For bcrypt the format must be the one supported by golang's [crypto/bcrypt](https://godoc.org/golang.org/x/crypto/bcrypt) package, for example the password `secret` with cost `14` must be stored as `$2a$14$ajq8Q7fbtFRQvXpdCq7Jcuy.Rx1h/L4J60Otx.gyNLbAYctGMJ9tK`. For sha512crypt we support the format used in `/etc/shadow` with the `$6$` prefix; this is useful if you are migrating from Unix system user accounts. Using the REST API you can send a password hashed as bcrypt, pbkdf2 or sha512crypt and it will be stored as is. A verification sketch for the pbkdf2 format follows this list.
- `public_keys` array of public keys. At least one public key or the password is mandatory.
- `home_dir` The user cannot upload or download files outside this directory. Must be an absolute path
- `uid`, `gid`. If sftpgo runs as the root system user then the created files and directories will be assigned to this system uid/gid. Ignored on Windows and if sftpgo runs as a non-root user: in this case files and directories for all SFTP users will be owned by the system user that runs sftpgo.
- `max_sessions` maximum concurrent sessions. 0 means unlimited
- `quota_size` maximum size allowed as bytes. 0 means unlimited
- `quota_files` maximum number of files allowed. 0 means unlimited
- `permissions` the following permissions are supported:
  - `*` all permissions are granted
  - `list` listing items is allowed
  - `download` downloading files is allowed
  - `upload` uploading files is allowed
  - `overwrite` overwriting an existing file, while uploading, is allowed. The `upload` permission is required to allow file overwrite
  - `delete` deleting files or directories is allowed
  - `rename` renaming files or directories is allowed
  - `create_dirs` creating directories is allowed
  - `create_symlinks` creating symbolic links is allowed
- `upload_bandwidth` maximum upload bandwidth as KB/s, 0 means unlimited
- `download_bandwidth` maximum download bandwidth as KB/s, 0 means unlimited
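
As a concrete illustration of the pbkdf2 format above, here is a minimal verification sketch using `golang.org/x/crypto/pbkdf2`. It assumes the salt is used as raw bytes, which matches the example hash; it is not SFTPGo's actual verification code:

```go
package main

import (
	"crypto/sha256"
	"crypto/subtle"
	"encoding/base64"
	"fmt"
	"strconv"
	"strings"

	"golang.org/x/crypto/pbkdf2"
)

// verifyPbkdf2Sha256 checks a password against a hash stored in the
// "$pbkdf2-sha256$<iterations>$<salt>$<base64 key>" format.
func verifyPbkdf2Sha256(password, stored string) (bool, error) {
	parts := strings.Split(stored, "$")
	if len(parts) != 5 || parts[1] != "pbkdf2-sha256" {
		return false, fmt.Errorf("unsupported hash format")
	}
	iterations, err := strconv.Atoi(parts[2])
	if err != nil {
		return false, err
	}
	expected, err := base64.StdEncoding.DecodeString(parts[4])
	if err != nil {
		return false, err
	}
	// Assumption: the salt is the raw string between the third and fourth "$".
	key := pbkdf2.Key([]byte(password), []byte(parts[3]), iterations, len(expected), sha256.New)
	return subtle.ConstantTimeCompare(key, expected) == 1, nil
}

func main() {
	ok, err := verifyPbkdf2Sha256("password",
		"$pbkdf2-sha256$150000$E86a9YMX3zC7$R5J62hsSq+pYw00hLLPKBbcGXmq7fj5+/M0IFoYtZbo=")
	fmt.Println(ok, err)
}
```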

These properties are stored inside the data provider. If you want to use your existing accounts, you can create a database view. Since a view is read-only, you have to disable user management and quota tracking so SFTPGo will never try to write to the view.

## Performance

SFTPGo can easily saturate a Gigabit connection on low-end hardware with no special configuration; this is generally enough for most use cases.
A more in-depth analysis of performance can be found [here](./docs/performance.md).

## REST API

SFTPGo exposes a REST API to manage users and quota and to get real time reports for the active connections with the possibility of forcibly closing a connection.

If quota tracking is enabled in the `sftpgo` configuration file, then the used size and number of files are updated each time a file is added/removed. If files are added/removed without using SFTP, or if you change `track_quota` from `2` to `1`, you can rescan the user home dir and update the used quota using the REST API.

The REST API is designed to run on localhost or on a trusted network. If you need HTTPS and/or authentication, you can set up a reverse proxy using an HTTP server such as Apache or NGINX.

For example, you can keep SFTPGo listening on localhost and expose it externally by configuring a reverse proxy using Apache HTTP Server this way:

```
ProxyPass /api/v1 http://127.0.0.1:8080/api/v1
ProxyPassReverse /api/v1 http://127.0.0.1:8080/api/v1
```

and you can add authentication with something like this:

```
<Location /api/v1>
  AuthType Digest
  AuthName "Private"
  AuthDigestDomain "/api/v1"
  AuthDigestProvider file
  AuthUserFile "/etc/httpd/conf/auth_digest"
  Require valid-user
</Location>
```

and, of course, you can configure the web server to use HTTPS.

The OpenAPI 3 schema for the exposed API can be found inside the source tree: [openapi.yaml](https://github.com/drakkan/sftpgo/tree/master/api/schema/openapi.yaml "OpenAPI 3 specs").

A sample CLI client for the REST API can be found inside the source tree's [scripts](https://github.com/drakkan/sftpgo/tree/master/scripts "scripts") directory.

You can also generate your own REST client, in your preferred programming language or even bash scripts, using an OpenAPI generator such as [swagger-codegen](https://github.com/swagger-api/swagger-codegen) or [OpenAPI Generator](https://openapi-generator.tech/).

## Metrics

SFTPGo exposes [Prometheus](https://prometheus.io/) metrics at the `/metrics` HTTP endpoint.
Several counters and gauges are available, for example:

- Total uploads and downloads
- Total uploads and downloads size
- Total uploads and downloads errors
- Number of active connections
- Data provider availability
- Total successful and failed logins using a password or a public key
- Total HTTP requests served and totals for response code
- Go runtime details about GC, number of goroutines and OS threads
- Process information like CPU, memory, file descriptor usage and start time

Please check the `/metrics` page for more details.
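
For a quick look at what is exported, you can fetch the endpoint with any HTTP client; the tiny sketch below assumes a locally running instance with the default `httpd` settings:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// Dump the Prometheus exposition format served by SFTPGo.
	resp, err := http.Get("http://127.0.0.1:8080/metrics")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(body))
}
```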

## Web Admin

You can easily build your own interface using the exposed REST API. Anyway, SFTPGo also provides a very basic built-in web interface that allows you to manage users and connections.
With the default `httpd` configuration, the web admin is available at the following URL:

[http://127.0.0.1:8080/web](http://127.0.0.1:8080/web)

If you need HTTPS and/or authentication you can set up a reverse proxy as explained for the REST API.

## Logs

Inside the log file each line is a JSON struct. Each struct has a `sender` field that identifies the log type.

The logs can be divided into the following categories (a parsing sketch follows the list):

- **"app logs"**, internal logs used to debug `sftpgo`:
  - `sender` string. This is generally the package name that emits the log
  - `time` string. Date/time with millisecond precision
  - `level` string
  - `message` string
- **"transfer logs"**, SFTP/SCP transfer logs:
  - `sender` string. `Upload` or `Download`
  - `time` string. Date/time with millisecond precision
  - `level` string
  - `elapsed_ms`, int64. Elapsed time, as milliseconds, for the upload/download
  - `size_bytes`, int64. Size, as bytes, of the download/upload
  - `username`, string
  - `file_path` string
  - `connection_id` string. Unique connection identifier
  - `protocol` string. `SFTP` or `SCP`
- **"command logs"**, SFTP/SCP command logs:
  - `sender` string. `Rename`, `Rmdir`, `Mkdir`, `Symlink`, `Remove`
  - `level` string
  - `username`, string
  - `file_path` string
  - `target_path` string
  - `connection_id` string. Unique connection identifier
  - `protocol` string. `SFTP` or `SCP`
- **"http logs"**, REST API logs:
  - `sender` string. `httpd`
  - `level` string
  - `remote_addr` string. IP and port of the remote client
  - `proto` string, for example `HTTP/1.1`
  - `method` string. HTTP method (`GET`, `POST`, `PUT`, `DELETE` etc.)
  - `user_agent` string
  - `uri` string. Full URI
  - `resp_status` integer. HTTP response status code
  - `resp_size` integer. Size in bytes of the HTTP response
  - `elapsed_ms` int64. Elapsed time, as milliseconds, to complete the request
  - `request_id` string. Unique request identifier
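
Since every line is self-contained JSON, the records are easy to consume programmatically. The sketch below decodes a transfer log line into a Go struct mirroring the fields listed above; the sample line is fabricated for illustration:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// transferLog mirrors the "transfer logs" fields documented above.
type transferLog struct {
	Sender       string `json:"sender"`
	Time         string `json:"time"`
	Level        string `json:"level"`
	ElapsedMS    int64  `json:"elapsed_ms"`
	SizeBytes    int64  `json:"size_bytes"`
	Username     string `json:"username"`
	FilePath     string `json:"file_path"`
	ConnectionID string `json:"connection_id"`
	Protocol     string `json:"protocol"`
}

func main() {
	line := `{"sender":"Upload","time":"2021-01-01T10:00:00.000","level":"info",` +
		`"elapsed_ms":120,"size_bytes":4096,"username":"test_user",` +
		`"file_path":"/srv/sftpgo/test_user/file.txt","connection_id":"abc123","protocol":"SFTP"}`
	var entry transferLog
	if err := json.Unmarshal([]byte(line), &entry); err != nil {
		panic(err)
	}
	fmt.Printf("%s of %s by %s: %d bytes in %d ms\n",
		entry.Sender, entry.FilePath, entry.Username, entry.SizeBytes, entry.ElapsedMS)
}
```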

## Release Cadence

SFTPGo releases are feature-driven; we don't have a fixed time-based schedule. As a rough estimate, you can expect 1 or 2 new releases per year.

## Acknowledgements

- [pkg/sftp](https://github.com/pkg/sftp)
- [go-chi](https://github.com/go-chi/chi)
- [zerolog](https://github.com/rs/zerolog)
- [lumberjack](https://gopkg.in/natefinch/lumberjack.v2)
- [argon2id](https://github.com/alexedwards/argon2id)
- [go-sqlite3](https://github.com/mattn/go-sqlite3)
- [go-sql-driver/mysql](https://github.com/go-sql-driver/mysql)
- [bbolt](https://github.com/etcd-io/bbolt)
- [lib/pq](https://github.com/lib/pq)
- [viper](https://github.com/spf13/viper)
- [cobra](https://github.com/spf13/cobra)
- [xid](https://github.com/rs/xid)
- [nathanaelle/password](https://github.com/nathanaelle/password)
- [SB Admin 2](https://github.com/BlackrockDigital/startbootstrap-sb-admin-2)

SFTPGo makes use of the third party libraries listed inside [go.mod](./go.mod).

Some code was initially taken from [Pterodactyl sftp server](https://github.com/pterodactyl/sftp-server).
We are very grateful to all the people who contributed with ideas and/or pull requests.

Thank you [ysura](https://www.ysura.com/) for granting me stable access to a test AWS S3 account.

## Sponsors

I'd like to make SFTPGo into a sustainable long-term project and your [sponsorship](https://github.com/sponsors/drakkan) will really help :heart:

Thank you to our sponsors!

[<img src="https://images.squarespace-cdn.com/content/5e5db7f1ded5fc06a4e9628b/1583608099266-T5NW2WNQL7PC15LPRB16/logo+black.png?format=1500w&content-type=image%2Fpng" width="33%" alt="segmed logo">](https://www.segmed.ai/)

## License

GNU GPLv3
GNU AGPLv3

SECURITY.md (new file)

@@ -0,0 +1,12 @@

# Security Policy

## Supported Versions

Only the current release of the software is actively supported. If you need
help backporting fixes into an older release, feel free to ask.

## Reporting a Vulnerability

Email your vulnerability information to SFTPGo's maintainer:

Nicola Murino <nicola.murino@gmail.com>

cmd/gen.go (new file)

@@ -0,0 +1,12 @@

package cmd

import "github.com/spf13/cobra"

var genCmd = &cobra.Command{
	Use:   "gen",
	Short: "A collection of useful generators",
}

func init() {
	rootCmd.AddCommand(genCmd)
}

cmd/gencompletion.go (new file)

@@ -0,0 +1,86 @@

package cmd

import (
	"os"

	"github.com/rs/zerolog"
	"github.com/spf13/cobra"

	"github.com/drakkan/sftpgo/logger"
)

var genCompletionCmd = &cobra.Command{
	Use:   "completion [bash|zsh|fish|powershell]",
	Short: "Generate shell completion script",
	Long: `To load completions:

Bash:

$ source <(sftpgo gen completion bash)

To load completions for each session, execute once:

Linux:

$ sudo sftpgo gen completion bash > /usr/share/bash-completion/completions/sftpgo

MacOS:

$ sudo sftpgo gen completion bash > /usr/local/etc/bash_completion.d/sftpgo

Zsh:

If shell completion is not already enabled in your environment you will need
to enable it. You can execute the following once:

$ echo "autoload -U compinit; compinit" >> ~/.zshrc

To load completions for each session, execute once:

$ sftpgo gen completion zsh > "${fpath[1]}/_sftpgo"

Fish:

$ sftpgo gen completion fish | source

To load completions for each session, execute once:

$ sftpgo gen completion fish > ~/.config/fish/completions/sftpgo.fish

Powershell:

PS> sftpgo gen completion powershell | Out-String | Invoke-Expression

To load completions for every new session, run:

PS> sftpgo gen completion powershell > sftpgo.ps1

and source this file from your powershell profile.
`,
	DisableFlagsInUseLine: true,
	ValidArgs:             []string{"bash", "zsh", "fish", "powershell"},
	Args:                  cobra.ExactValidArgs(1),
	Run: func(cmd *cobra.Command, args []string) {
		var err error
		logger.DisableLogger()
		logger.EnableConsoleLogger(zerolog.DebugLevel)
		switch args[0] {
		case "bash":
			err = cmd.Root().GenBashCompletion(os.Stdout)
		case "zsh":
			err = cmd.Root().GenZshCompletion(os.Stdout)
		case "fish":
			err = cmd.Root().GenFishCompletion(os.Stdout, true)
		case "powershell":
			err = cmd.Root().GenPowerShellCompletionWithDesc(os.Stdout)
		}
		if err != nil {
			logger.WarnToConsole("Unable to generate shell completion script: %v", err)
			os.Exit(1)
		}
	},
}

func init() {
	genCmd.AddCommand(genCompletionCmd)
}
52
cmd/genman.go
Normal file
52
cmd/genman.go
Normal file
@@ -0,0 +1,52 @@
|
||||
package cmd

import (
	"fmt"
	"os"

	"github.com/rs/zerolog"
	"github.com/spf13/cobra"
	"github.com/spf13/cobra/doc"

	"github.com/drakkan/sftpgo/logger"
	"github.com/drakkan/sftpgo/version"
)

var (
	manDir    string
	genManCmd = &cobra.Command{
		Use:   "man",
		Short: "Generate man pages for SFTPGo CLI",
		Long: `This command automatically generates up-to-date man pages of SFTPGo's
command-line interface. By default, it creates the man page files
in the "man" directory under the current directory.
`,
		Run: func(cmd *cobra.Command, args []string) {
			logger.DisableLogger()
			logger.EnableConsoleLogger(zerolog.DebugLevel)
			if _, err := os.Stat(manDir); os.IsNotExist(err) {
				err = os.MkdirAll(manDir, os.ModePerm)
				if err != nil {
					logger.WarnToConsole("Unable to generate man page files: %v", err)
					os.Exit(1)
				}
			}
			header := &doc.GenManHeader{
				Section: "1",
				Manual:  "SFTPGo Manual",
				Source:  fmt.Sprintf("SFTPGo %v", version.Get().Version),
			}
			cmd.Root().DisableAutoGenTag = true
			err := doc.GenManTree(cmd.Root(), header, manDir)
			if err != nil {
				logger.WarnToConsole("Unable to generate man page files: %v", err)
				os.Exit(1)
			}
		},
	}
)

func init() {
	genManCmd.Flags().StringVarP(&manDir, "dir", "d", "man", "The directory to write the man pages")
	genCmd.AddCommand(genManCmd)
}
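doc.GenManTree walks the whole command tree and writes one troff page per command into the target directory. A self-contained sketch of the same call with a toy command (the command name and paths are illustrative):

package main

import (
	"log"
	"os"

	"github.com/spf13/cobra"
	"github.com/spf13/cobra/doc"
)

func main() {
	// hypothetical one-command CLI, for illustration only
	root := &cobra.Command{Use: "demo", Short: "Demo CLI"}
	// the target directory must exist before generation
	if err := os.MkdirAll("./man", os.ModePerm); err != nil {
		log.Fatal(err)
	}
	header := &doc.GenManHeader{Section: "1", Manual: "Demo Manual"}
	// writes one page per command, e.g. ./man/demo.1
	if err := doc.GenManTree(root, header, "./man"); err != nil {
		log.Fatalf("unable to generate man pages: %v", err)
	}
}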
70 cmd/initprovider.go Normal file
@@ -0,0 +1,70 @@
package cmd

import (
	"os"

	"github.com/rs/zerolog"
	"github.com/spf13/cobra"
	"github.com/spf13/viper"

	"github.com/drakkan/sftpgo/config"
	"github.com/drakkan/sftpgo/dataprovider"
	"github.com/drakkan/sftpgo/logger"
	"github.com/drakkan/sftpgo/utils"
)

var (
	initProviderCmd = &cobra.Command{
		Use:   "initprovider",
		Short: "Initializes and/or updates the configured data provider",
		Long: `This command reads the data provider connection details from the specified
configuration file and creates the initial structure or updates the existing one,
as needed.

Some data providers, such as bolt and memory, do not require an initialization
but they could require an update to the existing data after upgrading SFTPGo.

For SQLite/bolt providers the database file will be auto-created if missing.

For PostgreSQL and MySQL providers you need to create the configured database,
this command will create/update the required tables as needed.

To initialize/update the data provider from the configuration directory simply use:

$ sftpgo initprovider

Please take a look at the usage below to customize the options.`,
		Run: func(cmd *cobra.Command, args []string) {
			logger.DisableLogger()
			logger.EnableConsoleLogger(zerolog.DebugLevel)
			configDir = utils.CleanDirInput(configDir)
			err := config.LoadConfig(configDir, configFile)
			if err != nil {
				logger.WarnToConsole("Unable to initialize data provider, config load error: %v", err)
				return
			}
			kmsConfig := config.GetKMSConfig()
			err = kmsConfig.Initialize()
			if err != nil {
				logger.ErrorToConsole("unable to initialize KMS: %v", err)
				os.Exit(1)
			}
			providerConf := config.GetProviderConf()
			logger.InfoToConsole("Initializing provider: %#v config file: %#v", providerConf.Driver, viper.ConfigFileUsed())
			err = dataprovider.InitializeDatabase(providerConf, configDir)
			if err == nil {
				logger.InfoToConsole("Data provider successfully initialized/updated")
			} else if err == dataprovider.ErrNoInitRequired {
				logger.InfoToConsole("%v", err.Error())
			} else {
				logger.WarnToConsole("Unable to initialize/update the data provider: %v", err)
				os.Exit(1)
			}
		},
	}
)

func init() {
	rootCmd.AddCommand(initProviderCmd)
	addConfigFlags(initProviderCmd)
}
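The ErrNoInitRequired branch above is the sentinel-error idiom: "already up to date" is reported through a dedicated error value and treated as success rather than failure. A self-contained sketch of the same pattern (all names here are illustrative, not the dataprovider API):

package main

import (
	"errors"
	"fmt"
)

// ErrNoInitRequired mirrors the sentinel used above: callers compare against
// it to distinguish "already up to date" from a real failure.
var ErrNoInitRequired = errors.New("the data provider is up to date")

func initialize(upToDate bool) error {
	if upToDate {
		return ErrNoInitRequired
	}
	return nil // run migrations here
}

func main() {
	err := initialize(true)
	switch {
	case err == nil:
		fmt.Println("initialized")
	case errors.Is(err, ErrNoInitRequired):
		fmt.Println(err) // informational, not fatal
	default:
		fmt.Println("fatal:", err)
	}
}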
@@ -2,23 +2,28 @@ package cmd

import (
	"fmt"
	"os"
	"strconv"

	"github.com/spf13/cobra"

	"github.com/drakkan/sftpgo/service"
	"github.com/spf13/cobra"
	"github.com/drakkan/sftpgo/utils"
)

var (
	installCmd = &cobra.Command{
		Use:   "install",
		Short: "Install SFTPGo as Windows Service",
		Long: `To install the SFTPGo Windows Service with the default values for the command line flags simply use:
		Long: `To install the SFTPGo Windows Service with the default values for the command
line flags simply use:

sftpgo service install

Please take a look at the usage below to customize the startup options`,
		Run: func(cmd *cobra.Command, args []string) {
			s := service.Service{
				ConfigDir:   configDir,
				ConfigDir:   utils.CleanDirInput(configDir),
				ConfigFile:  configFile,
				LogFilePath: logFilePath,
				LogMaxSize:  logMaxSize,
@@ -39,6 +44,7 @@ Please take a look at the usage below to customize the startup options`,
			err := winService.Install(serviceArgs...)
			if err != nil {
				fmt.Printf("Error installing service: %v\r\n", err)
				os.Exit(1)
			} else {
				fmt.Printf("Service installed!\r\n")
			}
@@ -50,3 +56,39 @@ func init() {
	serviceCmd.AddCommand(installCmd)
	addServeFlags(installCmd)
}

func getCustomServeFlags() []string {
	result := []string{}
	if configDir != defaultConfigDir {
		configDir = utils.CleanDirInput(configDir)
		result = append(result, "--"+configDirFlag)
		result = append(result, configDir)
	}
	if configFile != defaultConfigFile {
		result = append(result, "--"+configFileFlag)
		result = append(result, configFile)
	}
	if logFilePath != defaultLogFile {
		result = append(result, "--"+logFilePathFlag)
		result = append(result, logFilePath)
	}
	if logMaxSize != defaultLogMaxSize {
		result = append(result, "--"+logMaxSizeFlag)
		result = append(result, strconv.Itoa(logMaxSize))
	}
	if logMaxBackups != defaultLogMaxBackup {
		result = append(result, "--"+logMaxBackupFlag)
		result = append(result, strconv.Itoa(logMaxBackups))
	}
	if logMaxAge != defaultLogMaxAge {
		result = append(result, "--"+logMaxAgeFlag)
		result = append(result, strconv.Itoa(logMaxAge))
	}
	if logVerbose != defaultLogVerbose {
		result = append(result, "--"+logVerboseFlag+"=false")
	}
	if logCompress != defaultLogCompress {
		result = append(result, "--"+logCompressFlag+"=true")
	}
	return result
}
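getCustomServeFlags rebuilds the command line from whatever differs from the defaults, so the installed Windows service is started with exactly the operator's overrides and nothing more. A hedged test sketch of that contract, runnable inside the cmd package (the overridden value is hypothetical):

package cmd

import (
	"reflect"
	"testing"
)

// A minimal sketch, assuming the package-level vars and defaults shown above.
func TestGetCustomServeFlags(t *testing.T) {
	configDir = defaultConfigDir
	configFile = defaultConfigFile
	logFilePath = defaultLogFile
	logMaxSize = 20 // the only non-default value
	logMaxBackups = defaultLogMaxBackup
	logMaxAge = defaultLogMaxAge
	logVerbose = defaultLogVerbose
	logCompress = defaultLogCompress

	got := getCustomServeFlags()
	// only the overridden flag should be emitted
	want := []string{"--log-max-size", "20"}
	if !reflect.DeepEqual(got, want) {
		t.Fatalf("got %v, want %v", got, want)
	}
}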
392 cmd/portable.go Normal file
@@ -0,0 +1,392 @@
// +build !noportable

package cmd

import (
	"fmt"
	"io/ioutil"
	"os"
	"path"
	"path/filepath"
	"strings"

	"github.com/spf13/cobra"

	"github.com/drakkan/sftpgo/common"
	"github.com/drakkan/sftpgo/dataprovider"
	"github.com/drakkan/sftpgo/kms"
	"github.com/drakkan/sftpgo/service"
	"github.com/drakkan/sftpgo/sftpd"
	"github.com/drakkan/sftpgo/version"
	"github.com/drakkan/sftpgo/vfs"
)

var (
	directoryToServe             string
	portableSFTPDPort            int
	portableAdvertiseService     bool
	portableAdvertiseCredentials bool
	portableUsername             string
	portablePassword             string
	portableLogFile              string
	portableLogVerbose           bool
	portablePublicKeys           []string
	portablePermissions          []string
	portableSSHCommands          []string
	portableAllowedPatterns      []string
	portableDeniedPatterns       []string
	portableFsProvider           int
	portableS3Bucket             string
	portableS3Region             string
	portableS3AccessKey          string
	portableS3AccessSecret       string
	portableS3Endpoint           string
	portableS3StorageClass       string
	portableS3KeyPrefix          string
	portableS3ULPartSize         int
	portableS3ULConcurrency      int
	portableGCSBucket            string
	portableGCSCredentialsFile   string
	portableGCSAutoCredentials   int
	portableGCSStorageClass      string
	portableGCSKeyPrefix         string
	portableFTPDPort             int
	portableFTPSCert             string
	portableFTPSKey              string
	portableWebDAVPort           int
	portableWebDAVCert           string
	portableWebDAVKey            string
	portableAzContainer          string
	portableAzAccountName        string
	portableAzAccountKey         string
	portableAzEndpoint           string
	portableAzAccessTier         string
	portableAzSASURL             string
	portableAzKeyPrefix          string
	portableAzULPartSize         int
	portableAzULConcurrency      int
	portableAzUseEmulator        bool
	portableCryptPassphrase      string
	portableSFTPEndpoint         string
	portableSFTPUsername         string
	portableSFTPPassword         string
	portableSFTPPrivateKeyPath   string
	portableSFTPFingerprints     []string
	portableSFTPPrefix           string
	portableCmd                  = &cobra.Command{
		Use:   "portable",
		Short: "Serve a single directory",
		Long: `To serve the current working directory with auto generated credentials simply
use:

$ sftpgo portable

Please take a look at the usage below to customize the serving parameters`,
		Run: func(cmd *cobra.Command, args []string) {
			portableDir := directoryToServe
			fsProvider := dataprovider.FilesystemProvider(portableFsProvider)
			if !filepath.IsAbs(portableDir) {
				if fsProvider == dataprovider.LocalFilesystemProvider {
					portableDir, _ = filepath.Abs(portableDir)
				} else {
					portableDir = os.TempDir()
				}
			}
			permissions := make(map[string][]string)
			permissions["/"] = portablePermissions
			portableGCSCredentials := ""
			if fsProvider == dataprovider.GCSFilesystemProvider && portableGCSCredentialsFile != "" {
				contents, err := getFileContents(portableGCSCredentialsFile)
				if err != nil {
					fmt.Printf("Unable to get GCS credentials: %v\n", err)
					os.Exit(1)
				}
				portableGCSCredentials = contents
				portableGCSAutoCredentials = 0
			}
			portableSFTPPrivateKey := ""
			if fsProvider == dataprovider.SFTPFilesystemProvider && portableSFTPPrivateKeyPath != "" {
				contents, err := getFileContents(portableSFTPPrivateKeyPath)
				if err != nil {
					fmt.Printf("Unable to get SFTP private key: %v\n", err)
					os.Exit(1)
				}
				portableSFTPPrivateKey = contents
			}
			if portableFTPDPort >= 0 && len(portableFTPSCert) > 0 && len(portableFTPSKey) > 0 {
				_, err := common.NewCertManager(portableFTPSCert, portableFTPSKey, filepath.Clean(defaultConfigDir),
					"FTP portable")
				if err != nil {
					fmt.Printf("Unable to load FTPS key pair, cert file %#v key file %#v error: %v\n",
						portableFTPSCert, portableFTPSKey, err)
					os.Exit(1)
				}
			}
			if portableWebDAVPort > 0 && len(portableWebDAVCert) > 0 && len(portableWebDAVKey) > 0 {
				_, err := common.NewCertManager(portableWebDAVCert, portableWebDAVKey, filepath.Clean(defaultConfigDir),
					"WebDAV portable")
				if err != nil {
					fmt.Printf("Unable to load WebDAV key pair, cert file %#v key file %#v error: %v\n",
						portableWebDAVCert, portableWebDAVKey, err)
					os.Exit(1)
				}
			}
			service := service.Service{
				ConfigDir:     filepath.Clean(defaultConfigDir),
				ConfigFile:    defaultConfigFile,
				LogFilePath:   portableLogFile,
				LogMaxSize:    defaultLogMaxSize,
				LogMaxBackups: defaultLogMaxBackup,
				LogMaxAge:     defaultLogMaxAge,
				LogCompress:   defaultLogCompress,
				LogVerbose:    portableLogVerbose,
				Shutdown:      make(chan bool),
				PortableMode:  1,
				PortableUser: dataprovider.User{
					Username:    portableUsername,
					Password:    portablePassword,
					PublicKeys:  portablePublicKeys,
					Permissions: permissions,
					HomeDir:     portableDir,
					Status:      1,
					FsConfig: dataprovider.Filesystem{
						Provider: dataprovider.FilesystemProvider(portableFsProvider),
						S3Config: vfs.S3FsConfig{
							Bucket:            portableS3Bucket,
							Region:            portableS3Region,
							AccessKey:         portableS3AccessKey,
							AccessSecret:      kms.NewPlainSecret(portableS3AccessSecret),
							Endpoint:          portableS3Endpoint,
							StorageClass:      portableS3StorageClass,
							KeyPrefix:         portableS3KeyPrefix,
							UploadPartSize:    int64(portableS3ULPartSize),
							UploadConcurrency: portableS3ULConcurrency,
						},
						GCSConfig: vfs.GCSFsConfig{
							Bucket:               portableGCSBucket,
							Credentials:          kms.NewPlainSecret(portableGCSCredentials),
							AutomaticCredentials: portableGCSAutoCredentials,
							StorageClass:         portableGCSStorageClass,
							KeyPrefix:            portableGCSKeyPrefix,
						},
						AzBlobConfig: vfs.AzBlobFsConfig{
							Container:         portableAzContainer,
							AccountName:       portableAzAccountName,
							AccountKey:        kms.NewPlainSecret(portableAzAccountKey),
							Endpoint:          portableAzEndpoint,
							AccessTier:        portableAzAccessTier,
							SASURL:            portableAzSASURL,
							KeyPrefix:         portableAzKeyPrefix,
							UseEmulator:       portableAzUseEmulator,
							UploadPartSize:    int64(portableAzULPartSize),
							UploadConcurrency: portableAzULConcurrency,
						},
						CryptConfig: vfs.CryptFsConfig{
							Passphrase: kms.NewPlainSecret(portableCryptPassphrase),
						},
						SFTPConfig: vfs.SFTPFsConfig{
							Endpoint:     portableSFTPEndpoint,
							Username:     portableSFTPUsername,
							Password:     kms.NewPlainSecret(portableSFTPPassword),
							PrivateKey:   kms.NewPlainSecret(portableSFTPPrivateKey),
							Fingerprints: portableSFTPFingerprints,
							Prefix:       portableSFTPPrefix,
						},
					},
					Filters: dataprovider.UserFilters{
						FilePatterns: parsePatternsFilesFilters(),
					},
				},
			}
			if err := service.StartPortableMode(portableSFTPDPort, portableFTPDPort, portableWebDAVPort, portableSSHCommands, portableAdvertiseService,
				portableAdvertiseCredentials, portableFTPSCert, portableFTPSKey, portableWebDAVCert, portableWebDAVKey); err == nil {
				service.Wait()
				if service.Error == nil {
					os.Exit(0)
				}
			}
			os.Exit(1)
		},
	}
)

func init() {
	version.AddFeature("+portable")

	portableCmd.Flags().StringVarP(&directoryToServe, "directory", "d", ".", `Path to the directory to serve.
This can be an absolute path or a path
relative to the current directory
`)
	portableCmd.Flags().IntVarP(&portableSFTPDPort, "sftpd-port", "s", 0, `0 means a random unprivileged port,
< 0 disabled`)
	portableCmd.Flags().IntVar(&portableFTPDPort, "ftpd-port", -1, `0 means a random unprivileged port,
< 0 disabled`)
	portableCmd.Flags().IntVar(&portableWebDAVPort, "webdav-port", -1, `0 means a random unprivileged port,
< 0 disabled`)
	portableCmd.Flags().StringSliceVarP(&portableSSHCommands, "ssh-commands", "c", sftpd.GetDefaultSSHCommands(),
		`SSH commands to enable.
"*" means any supported SSH command
including scp
`)
	portableCmd.Flags().StringVarP(&portableUsername, "username", "u", "", `Leave empty to use an auto generated
value`)
	portableCmd.Flags().StringVarP(&portablePassword, "password", "p", "", `Leave empty to use an auto generated
value`)
	portableCmd.Flags().StringVarP(&portableLogFile, logFilePathFlag, "l", "", "Leave empty to disable logging")
	portableCmd.Flags().BoolVarP(&portableLogVerbose, logVerboseFlag, "v", false, "Enable verbose logs")
	portableCmd.Flags().StringSliceVarP(&portablePublicKeys, "public-key", "k", []string{}, "")
	portableCmd.Flags().StringSliceVarP(&portablePermissions, "permissions", "g", []string{"list", "download"},
		`User's permissions. "*" means any
permission`)
	portableCmd.Flags().StringArrayVar(&portableAllowedPatterns, "allowed-patterns", []string{},
		`Allowed file patterns case insensitive.
The format is:
/dir::pattern1,pattern2.
For example: "/somedir::*.jpg,a*b?.png"`)
	portableCmd.Flags().StringArrayVar(&portableDeniedPatterns, "denied-patterns", []string{},
		`Denied file patterns case insensitive.
The format is:
/dir::pattern1,pattern2.
For example: "/somedir::*.jpg,a*b?.png"`)
	portableCmd.Flags().BoolVarP(&portableAdvertiseService, "advertise-service", "S", false,
		`Advertise configured services using
multicast DNS`)
	portableCmd.Flags().BoolVarP(&portableAdvertiseCredentials, "advertise-credentials", "C", false,
		`If the SFTP/FTP service is
advertised via multicast DNS, this
flag allows to put username/password
inside the advertised TXT record`)
	portableCmd.Flags().IntVarP(&portableFsProvider, "fs-provider", "f", int(dataprovider.LocalFilesystemProvider), `0 => local filesystem
1 => AWS S3 compatible
2 => Google Cloud Storage
3 => Azure Blob Storage
4 => Encrypted local filesystem
5 => SFTP`)
	portableCmd.Flags().StringVar(&portableS3Bucket, "s3-bucket", "", "")
	portableCmd.Flags().StringVar(&portableS3Region, "s3-region", "", "")
	portableCmd.Flags().StringVar(&portableS3AccessKey, "s3-access-key", "", "")
	portableCmd.Flags().StringVar(&portableS3AccessSecret, "s3-access-secret", "", "")
	portableCmd.Flags().StringVar(&portableS3Endpoint, "s3-endpoint", "", "")
	portableCmd.Flags().StringVar(&portableS3StorageClass, "s3-storage-class", "", "")
	portableCmd.Flags().StringVar(&portableS3KeyPrefix, "s3-key-prefix", "", `Allows to restrict access to the
virtual folder identified by this
prefix and its contents`)
	portableCmd.Flags().IntVar(&portableS3ULPartSize, "s3-upload-part-size", 5, `The buffer size for multipart uploads
(MB)`)
	portableCmd.Flags().IntVar(&portableS3ULConcurrency, "s3-upload-concurrency", 2, `How many parts are uploaded in
parallel`)
	portableCmd.Flags().StringVar(&portableGCSBucket, "gcs-bucket", "", "")
	portableCmd.Flags().StringVar(&portableGCSStorageClass, "gcs-storage-class", "", "")
	portableCmd.Flags().StringVar(&portableGCSKeyPrefix, "gcs-key-prefix", "", `Allows to restrict access to the
virtual folder identified by this
prefix and its contents`)
	portableCmd.Flags().StringVar(&portableGCSCredentialsFile, "gcs-credentials-file", "", `Google Cloud Storage JSON credentials
file`)
	portableCmd.Flags().IntVar(&portableGCSAutoCredentials, "gcs-automatic-credentials", 1, `0 means explicit credentials using
a JSON credentials file, 1 automatic
`)
	portableCmd.Flags().StringVar(&portableFTPSCert, "ftpd-cert", "", "Path to the certificate file for FTPS")
	portableCmd.Flags().StringVar(&portableFTPSKey, "ftpd-key", "", "Path to the key file for FTPS")
	portableCmd.Flags().StringVar(&portableWebDAVCert, "webdav-cert", "", `Path to the certificate file for WebDAV
over HTTPS`)
	portableCmd.Flags().StringVar(&portableWebDAVKey, "webdav-key", "", `Path to the key file for WebDAV over
HTTPS`)
	portableCmd.Flags().StringVar(&portableAzContainer, "az-container", "", "")
	portableCmd.Flags().StringVar(&portableAzAccountName, "az-account-name", "", "")
	portableCmd.Flags().StringVar(&portableAzAccountKey, "az-account-key", "", "")
	portableCmd.Flags().StringVar(&portableAzSASURL, "az-sas-url", "", `Shared access signature URL`)
	portableCmd.Flags().StringVar(&portableAzEndpoint, "az-endpoint", "", `Leave empty to use the default:
"blob.core.windows.net"`)
	portableCmd.Flags().StringVar(&portableAzAccessTier, "az-access-tier", "", `Leave empty to use the default
container setting`)
	portableCmd.Flags().StringVar(&portableAzKeyPrefix, "az-key-prefix", "", `Allows to restrict access to the
virtual folder identified by this
prefix and its contents`)
	portableCmd.Flags().IntVar(&portableAzULPartSize, "az-upload-part-size", 4, `The buffer size for multipart uploads
(MB)`)
	portableCmd.Flags().IntVar(&portableAzULConcurrency, "az-upload-concurrency", 2, `How many parts are uploaded in
parallel`)
	portableCmd.Flags().BoolVar(&portableAzUseEmulator, "az-use-emulator", false, "")
	portableCmd.Flags().StringVar(&portableCryptPassphrase, "crypto-passphrase", "", `Passphrase for encryption/decryption`)
	portableCmd.Flags().StringVar(&portableSFTPEndpoint, "sftp-endpoint", "", `SFTP endpoint as host:port for SFTP
provider`)
	portableCmd.Flags().StringVar(&portableSFTPUsername, "sftp-username", "", `SFTP user for SFTP provider`)
	portableCmd.Flags().StringVar(&portableSFTPPassword, "sftp-password", "", `SFTP password for SFTP provider`)
	portableCmd.Flags().StringVar(&portableSFTPPrivateKeyPath, "sftp-key-path", "", `SFTP private key path for SFTP provider`)
	portableCmd.Flags().StringSliceVar(&portableSFTPFingerprints, "sftp-fingerprints", []string{}, `SFTP fingerprints to verify remote host
key for SFTP provider`)
	portableCmd.Flags().StringVar(&portableSFTPPrefix, "sftp-prefix", "", `SFTP prefix allows restrict all
operations to a given path within the
remote SFTP server`)
	rootCmd.AddCommand(portableCmd)
}

func parsePatternsFilesFilters() []dataprovider.PatternsFilter {
	var patterns []dataprovider.PatternsFilter
	for _, val := range portableAllowedPatterns {
		p, exts := getPatternsFilterValues(strings.TrimSpace(val))
		if p != "" {
			patterns = append(patterns, dataprovider.PatternsFilter{
				Path:            path.Clean(p),
				AllowedPatterns: exts,
				DeniedPatterns:  []string{},
			})
		}
	}
	for _, val := range portableDeniedPatterns {
		p, exts := getPatternsFilterValues(strings.TrimSpace(val))
		if p != "" {
			found := false
			for index, e := range patterns {
				if path.Clean(e.Path) == path.Clean(p) {
					patterns[index].DeniedPatterns = append(patterns[index].DeniedPatterns, exts...)
					found = true
					break
				}
			}
			if !found {
				patterns = append(patterns, dataprovider.PatternsFilter{
					Path:            path.Clean(p),
					AllowedPatterns: []string{},
					DeniedPatterns:  exts,
				})
			}
		}
	}
	return patterns
}

func getPatternsFilterValues(value string) (string, []string) {
	if strings.Contains(value, "::") {
		dirExts := strings.Split(value, "::")
		if len(dirExts) > 1 {
			dir := strings.TrimSpace(dirExts[0])
			exts := []string{}
			for _, e := range strings.Split(dirExts[1], ",") {
				cleanedExt := strings.TrimSpace(e)
				if cleanedExt != "" {
					exts = append(exts, cleanedExt)
				}
			}
			if dir != "" && len(exts) > 0 {
				return dir, exts
			}
		}
	}
	return "", nil
}

func getFileContents(name string) (string, error) {
	fi, err := os.Stat(name)
	if err != nil {
		return "", err
	}
	if fi.Size() > 1048576 {
		return "", fmt.Errorf("%#v is too big %v/1048576 bytes", name, fi.Size())
	}
	contents, err := ioutil.ReadFile(name)
	if err != nil {
		return "", err
	}
	return string(contents), nil
}
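parsePatternsFilesFilters and getPatternsFilterValues implement the "/dir::pattern1,pattern2" syntax mentioned in the flag help, merging allowed and denied entries that target the same directory. A self-contained sketch of the parsing step (simplified for illustration, not the exact production code):

package main

import (
	"fmt"
	"strings"
)

// parseFilter splits a "/dir::pattern1,pattern2" value into its directory
// and pattern list, returning zero values when the input is malformed.
func parseFilter(value string) (string, []string) {
	parts := strings.SplitN(value, "::", 2)
	if len(parts) != 2 {
		return "", nil
	}
	dir := strings.TrimSpace(parts[0])
	var patterns []string
	for _, p := range strings.Split(parts[1], ",") {
		if p = strings.TrimSpace(p); p != "" {
			patterns = append(patterns, p)
		}
	}
	if dir == "" || len(patterns) == 0 {
		return "", nil
	}
	return dir, patterns
}

func main() {
	dir, pats := parseFilter("/somedir::*.jpg,a*b?.png")
	fmt.Println(dir, pats) // /somedir [*.jpg a*b?.png]
}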
9 cmd/portable_disabled.go Normal file
@@ -0,0 +1,9 @@
// +build noportable

package cmd

import "github.com/drakkan/sftpgo/version"

func init() {
	version.AddFeature("-portable")
}
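The pair of files portable.go and portable_disabled.go are mutually exclusive through the build tag: a default build compiles the first and registers "+portable", while "go build -tags noportable" compiles only the stub above and registers "-portable". A minimal sketch of the feature-marker registry this pattern relies on (an illustration of the idea, not the actual sftpgo version package):

package version

// features collects "+name"/"-name" markers registered at init time by
// build-tag variants; the markers end up in the reported version string.
var features []string

// AddFeature records a feature marker such as "+portable" or "-portable".
func AddFeature(feature string) {
	features = append(features, feature)
}

// GetFeatures returns the registered markers, e.g. for the version output.
func GetFeatures() []string {
	return features
}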
35 cmd/reload_windows.go Normal file
@@ -0,0 +1,35 @@
package cmd

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"

	"github.com/drakkan/sftpgo/service"
)

var (
	reloadCmd = &cobra.Command{
		Use:   "reload",
		Short: "Reload the SFTPGo Windows Service sending a \"paramchange\" request",
		Run: func(cmd *cobra.Command, args []string) {
			s := service.WindowsService{
				Service: service.Service{
					Shutdown: make(chan bool),
				},
			}
			err := s.Reload()
			if err != nil {
				fmt.Printf("Error sending reload signal: %v\r\n", err)
				os.Exit(1)
			} else {
				fmt.Printf("Reload signal sent!\r\n")
			}
		},
	}
)

func init() {
	serviceCmd.AddCommand(reloadCmd)
}
64 cmd/revertprovider.go Normal file
@@ -0,0 +1,64 @@
package cmd

import (
	"os"

	"github.com/rs/zerolog"
	"github.com/spf13/cobra"
	"github.com/spf13/viper"

	"github.com/drakkan/sftpgo/config"
	"github.com/drakkan/sftpgo/dataprovider"
	"github.com/drakkan/sftpgo/logger"
	"github.com/drakkan/sftpgo/utils"
)

var (
	revertProviderTargetVersion int
	revertProviderCmd           = &cobra.Command{
		Use:   "revertprovider",
		Short: "Revert the configured data provider to a previous version",
		Long: `This command reads the data provider connection details from the specified
configuration file and restores the provider schema and/or data to a previous version.
This command is not supported for the memory provider.

Please take a look at the usage below to customize the options.`,
		Run: func(cmd *cobra.Command, args []string) {
			logger.DisableLogger()
			logger.EnableConsoleLogger(zerolog.DebugLevel)
			if revertProviderTargetVersion != 4 {
				logger.WarnToConsole("Unsupported target version, 4 is the only supported one")
				os.Exit(1)
			}
			configDir = utils.CleanDirInput(configDir)
			err := config.LoadConfig(configDir, configFile)
			if err != nil {
				logger.WarnToConsole("Unable to initialize data provider, config load error: %v", err)
				os.Exit(1)
			}
			kmsConfig := config.GetKMSConfig()
			err = kmsConfig.Initialize()
			if err != nil {
				logger.ErrorToConsole("unable to initialize KMS: %v", err)
				os.Exit(1)
			}
			providerConf := config.GetProviderConf()
			logger.InfoToConsole("Reverting provider: %#v config file: %#v target version %v", providerConf.Driver,
				viper.ConfigFileUsed(), revertProviderTargetVersion)
			err = dataprovider.RevertDatabase(providerConf, configDir, revertProviderTargetVersion)
			if err != nil {
				logger.WarnToConsole("Error reverting provider: %v", err)
				os.Exit(1)
			}
			logger.InfoToConsole("Data provider successfully reverted")
		},
	}
)

func init() {
	addConfigFlags(revertProviderCmd)
	revertProviderCmd.Flags().IntVar(&revertProviderTargetVersion, "to-version", 0, `4 means the version supported in v1.0.0-v1.2.x`)
	revertProviderCmd.MarkFlagRequired("to-version") //nolint:errcheck

	rootCmd.AddCommand(revertProviderCmd)
}
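Since --to-version is marked required and 4 is currently the only value the command accepts, the complete invocation is effectively fixed; from the configuration directory it is simply:

$ sftpgo revertprovider --to-version 4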
285 cmd/root.go
@@ -4,63 +4,76 @@ package cmd
import (
	"fmt"
	"os"
	"strconv"

	"github.com/drakkan/sftpgo/config"
	"github.com/drakkan/sftpgo/utils"
	"github.com/spf13/cobra"
	"github.com/spf13/viper"

	"github.com/drakkan/sftpgo/version"
)

const (
	logSender = "cmd"
	configDirFlag = "config-dir"
	configDirKey = "config_dir"
	configFileFlag = "config-file"
	configFileKey = "config_file"
	logFilePathFlag = "log-file-path"
	logFilePathKey = "log_file_path"
	logMaxSizeFlag = "log-max-size"
	logMaxSizeKey = "log_max_size"
	logMaxBackupFlag = "log-max-backups"
	logMaxBackupKey = "log_max_backups"
	logMaxAgeFlag = "log-max-age"
	logMaxAgeKey = "log_max_age"
	logCompressFlag = "log-compress"
	logCompressKey = "log_compress"
	logVerboseFlag = "log-verbose"
	logVerboseKey = "log_verbose"
	defaultConfigDir = "."
	defaultConfigName = config.DefaultConfigName
	defaultLogFile = "sftpgo.log"
	defaultLogMaxSize = 10
	defaultLogMaxBackup = 5
	defaultLogMaxAge = 28
	defaultLogCompress = false
	defaultLogVerbose = true
	configDirFlag = "config-dir"
	configDirKey = "config_dir"
	configFileFlag = "config-file"
	configFileKey = "config_file"
	logFilePathFlag = "log-file-path"
	logFilePathKey = "log_file_path"
	logMaxSizeFlag = "log-max-size"
	logMaxSizeKey = "log_max_size"
	logMaxBackupFlag = "log-max-backups"
	logMaxBackupKey = "log_max_backups"
	logMaxAgeFlag = "log-max-age"
	logMaxAgeKey = "log_max_age"
	logCompressFlag = "log-compress"
	logCompressKey = "log_compress"
	logVerboseFlag = "log-verbose"
	logVerboseKey = "log_verbose"
	loadDataFromFlag = "loaddata-from"
	loadDataFromKey = "loaddata_from"
	loadDataModeFlag = "loaddata-mode"
	loadDataModeKey = "loaddata_mode"
	loadDataQuotaScanFlag = "loaddata-scan"
	loadDataQuotaScanKey = "loaddata_scan"
	loadDataCleanFlag = "loaddata-clean"
	loadDataCleanKey = "loaddata_clean"
	defaultConfigDir = "."
	defaultConfigFile = ""
	defaultLogFile = "sftpgo.log"
	defaultLogMaxSize = 10
	defaultLogMaxBackup = 5
	defaultLogMaxAge = 28
	defaultLogCompress = false
	defaultLogVerbose = true
	defaultLoadDataFrom = ""
	defaultLoadDataMode = 1
	defaultLoadDataQuotaScan = 0
	defaultLoadDataClean = false
)

var (
	configDir string
	configFile string
	logFilePath string
	logMaxSize int
	logMaxBackups int
	logMaxAge int
	logCompress bool
	logVerbose bool
	configDir string
	configFile string
	logFilePath string
	logMaxSize int
	logMaxBackups int
	logMaxAge int
	logCompress bool
	logVerbose bool
	loadDataFrom string
	loadDataMode int
	loadDataQuotaScan int
	loadDataClean bool

	rootCmd = &cobra.Command{
		Use: "sftpgo",
		Short: "Full featured and highly configurable SFTP server",
		Short: "Fully featured and highly configurable SFTP server",
	}
)

func init() {
	version := utils.GetAppVersion()
	rootCmd.Flags().BoolP("version", "v", false, "")
	rootCmd.Version = version.GetVersionAsString()
	rootCmd.SetVersionTemplate(`{{printf "SFTPGo version: "}}{{printf "%s" .Version}}
	rootCmd.Version = version.GetAsString()
	rootCmd.SetVersionTemplate(`{{printf "SFTPGo "}}{{printf "%s" .Version}}
`)
}

@@ -73,97 +86,143 @@ func Execute() {
	}
}

func addServeFlags(cmd *cobra.Command) {
func addConfigFlags(cmd *cobra.Command) {
	viper.SetDefault(configDirKey, defaultConfigDir)
	viper.BindEnv(configDirKey, "SFTPGO_CONFIG_DIR")
	viper.BindEnv(configDirKey, "SFTPGO_CONFIG_DIR") //nolint:errcheck // err is not nil only if the key to bind is missing
	cmd.Flags().StringVarP(&configDir, configDirFlag, "c", viper.GetString(configDirKey),
		"Location for SFTPGo config dir. This directory should contain the \"sftpgo\" configuration file or the configured "+
			"config-file and it is used as the base for files with a relative path (eg. the private keys for the SFTP server, "+
			"the SQLite database if you use SQLite as data provider). This flag can be set using SFTPGO_CONFIG_DIR env var too.")
	viper.BindPFlag(configDirKey, cmd.Flags().Lookup(configDirFlag))
		`Location for the config dir. This directory
is used as the base for files with a relative
path, eg. the private keys for the SFTP
server or the SQLite database if you use
SQLite as data provider.
The configuration file, if not explicitly set,
is looked for in this dir. We support reading
from JSON, TOML, YAML, HCL, envfile and Java
properties config files. The default config
file name is "sftpgo" and therefore
"sftpgo.json", "sftpgo.yaml" and so on are
searched.
This flag can be set using SFTPGO_CONFIG_DIR
env var too.`)
	viper.BindPFlag(configDirKey, cmd.Flags().Lookup(configDirFlag)) //nolint:errcheck

	viper.SetDefault(configFileKey, defaultConfigName)
	viper.BindEnv(configFileKey, "SFTPGO_CONFIG_FILE")
	cmd.Flags().StringVarP(&configFile, configFileFlag, "f", viper.GetString(configFileKey),
		"Name for SFTPGo configuration file. It must be the name of a file stored in config-dir not the absolute path to the "+
			"configuration file. The specified file name must have no extension we automatically load JSON, YAML, TOML, HCL and "+
			"Java properties. Therefore if you set \"sftpgo\" then \"sftpgo.json\", \"sftpgo.yaml\" and so on are searched. "+
			"This flag can be set using SFTPGO_CONFIG_FILE env var too.")
	viper.BindPFlag(configFileKey, cmd.Flags().Lookup(configFileFlag))
	viper.SetDefault(configFileKey, defaultConfigFile)
	viper.BindEnv(configFileKey, "SFTPGO_CONFIG_FILE") //nolint:errcheck
	cmd.Flags().StringVar(&configFile, configFileFlag, viper.GetString(configFileKey),
		`Path to SFTPGo configuration file.
This flag explicitly defines the path, name
and extension of the config file. It must be
an absolute path or a path relative to the
configuration directory. The specified file
name must have a supported extension (JSON,
YAML, TOML, HCL or Java properties).
This flag can be set using SFTPGO_CONFIG_FILE
env var too.`)
	viper.BindPFlag(configFileKey, cmd.Flags().Lookup(configFileFlag)) //nolint:errcheck
}

func addServeFlags(cmd *cobra.Command) {
	addConfigFlags(cmd)

	viper.SetDefault(logFilePathKey, defaultLogFile)
	viper.BindEnv(logFilePathKey, "SFTPGO_LOG_FILE_PATH")
	viper.BindEnv(logFilePathKey, "SFTPGO_LOG_FILE_PATH") //nolint:errcheck
	cmd.Flags().StringVarP(&logFilePath, logFilePathFlag, "l", viper.GetString(logFilePathKey),
		"Location for the log file. Leave empty to write logs to the standard output. This flag can be set using SFTPGO_LOG_FILE_PATH "+
			"env var too.")
	viper.BindPFlag(logFilePathKey, cmd.Flags().Lookup(logFilePathFlag))
		`Location for the log file. Leave empty to write
logs to the standard output. This flag can be
set using SFTPGO_LOG_FILE_PATH env var too.
`)
	viper.BindPFlag(logFilePathKey, cmd.Flags().Lookup(logFilePathFlag)) //nolint:errcheck

	viper.SetDefault(logMaxSizeKey, defaultLogMaxSize)
	viper.BindEnv(logMaxSizeKey, "SFTPGO_LOG_MAX_SIZE")
	viper.BindEnv(logMaxSizeKey, "SFTPGO_LOG_MAX_SIZE") //nolint:errcheck
	cmd.Flags().IntVarP(&logMaxSize, logMaxSizeFlag, "s", viper.GetInt(logMaxSizeKey),
		"Maximum size in megabytes of the log file before it gets rotated. This flag can be set using SFTPGO_LOG_MAX_SIZE "+
			"env var too. It is unused if log-file-path is empty.")
	viper.BindPFlag(logMaxSizeKey, cmd.Flags().Lookup(logMaxSizeFlag))
		`Maximum size in megabytes of the log file
before it gets rotated. This flag can be set
using SFTPGO_LOG_MAX_SIZE env var too. It is
unused if log-file-path is empty.
`)
	viper.BindPFlag(logMaxSizeKey, cmd.Flags().Lookup(logMaxSizeFlag)) //nolint:errcheck

	viper.SetDefault(logMaxBackupKey, defaultLogMaxBackup)
	viper.BindEnv(logMaxBackupKey, "SFTPGO_LOG_MAX_BACKUPS")
	viper.BindEnv(logMaxBackupKey, "SFTPGO_LOG_MAX_BACKUPS") //nolint:errcheck
	cmd.Flags().IntVarP(&logMaxBackups, "log-max-backups", "b", viper.GetInt(logMaxBackupKey),
		"Maximum number of old log files to retain. This flag can be set using SFTPGO_LOG_MAX_BACKUPS env var too. "+
			"It is unused if log-file-path is empty.")
	viper.BindPFlag(logMaxBackupKey, cmd.Flags().Lookup(logMaxBackupFlag))
		`Maximum number of old log files to retain.
This flag can be set using SFTPGO_LOG_MAX_BACKUPS
env var too. It is unused if log-file-path is
empty.`)
	viper.BindPFlag(logMaxBackupKey, cmd.Flags().Lookup(logMaxBackupFlag)) //nolint:errcheck

	viper.SetDefault(logMaxAgeKey, defaultLogMaxAge)
	viper.BindEnv(logMaxAgeKey, "SFTPGO_LOG_MAX_AGE")
	viper.BindEnv(logMaxAgeKey, "SFTPGO_LOG_MAX_AGE") //nolint:errcheck
	cmd.Flags().IntVarP(&logMaxAge, "log-max-age", "a", viper.GetInt(logMaxAgeKey),
		"Maximum number of days to retain old log files. This flag can be set using SFTPGO_LOG_MAX_AGE env var too. "+
			"It is unused if log-file-path is empty.")
	viper.BindPFlag(logMaxAgeKey, cmd.Flags().Lookup(logMaxAgeFlag))
		`Maximum number of days to retain old log files.
This flag can be set using SFTPGO_LOG_MAX_AGE env
var too. It is unused if log-file-path is empty.
`)
	viper.BindPFlag(logMaxAgeKey, cmd.Flags().Lookup(logMaxAgeFlag)) //nolint:errcheck

	viper.SetDefault(logCompressKey, defaultLogCompress)
	viper.BindEnv(logCompressKey, "SFTPGO_LOG_COMPRESS")
	cmd.Flags().BoolVarP(&logCompress, logCompressFlag, "z", viper.GetBool(logCompressKey), "Determine if the rotated "+
		"log files should be compressed using gzip. This flag can be set using SFTPGO_LOG_COMPRESS env var too. "+
		"It is unused if log-file-path is empty.")
	viper.BindPFlag(logCompressKey, cmd.Flags().Lookup(logCompressFlag))
	viper.BindEnv(logCompressKey, "SFTPGO_LOG_COMPRESS") //nolint:errcheck
	cmd.Flags().BoolVarP(&logCompress, logCompressFlag, "z", viper.GetBool(logCompressKey),
		`Determine if the rotated log files
should be compressed using gzip. This flag can
be set using SFTPGO_LOG_COMPRESS env var too.
It is unused if log-file-path is empty.
`)
	viper.BindPFlag(logCompressKey, cmd.Flags().Lookup(logCompressFlag)) //nolint:errcheck

	viper.SetDefault(logVerboseKey, defaultLogVerbose)
	viper.BindEnv(logVerboseKey, "SFTPGO_LOG_VERBOSE")
	cmd.Flags().BoolVarP(&logVerbose, logVerboseFlag, "v", viper.GetBool(logVerboseKey), "Enable verbose logs. "+
		"This flag can be set using SFTPGO_LOG_VERBOSE env var too.")
	viper.BindPFlag(logVerboseKey, cmd.Flags().Lookup(logVerboseFlag))
}
	viper.BindEnv(logVerboseKey, "SFTPGO_LOG_VERBOSE") //nolint:errcheck
	cmd.Flags().BoolVarP(&logVerbose, logVerboseFlag, "v", viper.GetBool(logVerboseKey),
		`Enable verbose logs. This flag can be set
using SFTPGO_LOG_VERBOSE env var too.
`)
	viper.BindPFlag(logVerboseKey, cmd.Flags().Lookup(logVerboseFlag)) //nolint:errcheck

func getCustomServeFlags() []string {
	result := []string{}
	if configDir != defaultConfigDir {
		result = append(result, "--"+configDirFlag)
		result = append(result, configDir)
	}
	if configFile != defaultConfigName {
		result = append(result, "--"+configFileFlag)
		result = append(result, configFile)
	}
	if logFilePath != defaultLogFile {
		result = append(result, "--"+logFilePathFlag)
		result = append(result, logFilePath)
	}
	if logMaxSize != defaultLogMaxSize {
		result = append(result, "--"+logMaxSizeFlag)
		result = append(result, strconv.Itoa(logMaxSize))
	}
	if logMaxBackups != defaultLogMaxBackup {
		result = append(result, "--"+logMaxBackupFlag)
		result = append(result, strconv.Itoa(logMaxBackups))
	}
	if logMaxAge != defaultLogMaxAge {
		result = append(result, "--"+logMaxAgeFlag)
		result = append(result, strconv.Itoa(logMaxAge))
	}
	if logVerbose != defaultLogVerbose {
		result = append(result, "--"+logVerboseFlag+"=false")
	}
	if logCompress != defaultLogCompress {
		result = append(result, "--"+logCompressFlag+"=true")
	}
	return result
	viper.SetDefault(loadDataFromKey, defaultLoadDataFrom)
	viper.BindEnv(loadDataFromKey, "SFTPGO_LOADDATA_FROM") //nolint:errcheck
	cmd.Flags().StringVar(&loadDataFrom, loadDataFromFlag, viper.GetString(loadDataFromKey),
		`Load users and folders from this file.
The file must be specified as absolute path
and it must contain a backup obtained using
the "dumpdata" REST API or compatible content.
This flag can be set using SFTPGO_LOADDATA_FROM
env var too.
`)
	viper.BindPFlag(loadDataFromKey, cmd.Flags().Lookup(loadDataFromFlag)) //nolint:errcheck

	viper.SetDefault(loadDataModeKey, defaultLoadDataMode)
	viper.BindEnv(loadDataModeKey, "SFTPGO_LOADDATA_MODE") //nolint:errcheck
	cmd.Flags().IntVar(&loadDataMode, loadDataModeFlag, viper.GetInt(loadDataModeKey),
		`Restore mode for data to load:
0 - new users are added, existing users are
updated
1 - New users are added, existing users are
not modified
This flag can be set using SFTPGO_LOADDATA_MODE
env var too.
`)
	viper.BindPFlag(loadDataModeKey, cmd.Flags().Lookup(loadDataModeFlag)) //nolint:errcheck

	viper.SetDefault(loadDataQuotaScanKey, defaultLoadDataQuotaScan)
	viper.BindEnv(loadDataQuotaScanKey, "SFTPGO_LOADDATA_QUOTA_SCAN") //nolint:errcheck
	cmd.Flags().IntVar(&loadDataQuotaScan, loadDataQuotaScanFlag, viper.GetInt(loadDataQuotaScanKey),
		`Quota scan mode after data load:
0 - no quota scan
1 - scan quota
2 - scan quota if the user has quota restrictions
This flag can be set using SFTPGO_LOADDATA_QUOTA_SCAN
env var too.
(default 0)`)
	viper.BindPFlag(loadDataQuotaScanKey, cmd.Flags().Lookup(loadDataQuotaScanFlag)) //nolint:errcheck

	viper.SetDefault(loadDataCleanKey, defaultLoadDataClean)
	viper.BindEnv(loadDataCleanKey, "SFTPGO_LOADDATA_CLEAN") //nolint:errcheck
	cmd.Flags().BoolVar(&loadDataClean, loadDataCleanFlag, viper.GetBool(loadDataCleanKey),
		`Determine if the loaddata-from file should
be removed after a successful load. This flag
can be set using SFTPGO_LOADDATA_CLEAN env var
too. (default "false")
`)
	viper.BindPFlag(loadDataCleanKey, cmd.Flags().Lookup(loadDataCleanFlag)) //nolint:errcheck
}
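Every flag above follows the same viper wiring: SetDefault establishes the fallback, BindEnv lets an environment variable override it, and BindPFlag gives an explicitly set command-line flag the final word. A self-contained sketch of that precedence chain (demo key and env var names, not SFTPGo's):

package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
	"github.com/spf13/viper"
)

func main() {
	cmd := &cobra.Command{Use: "demo"}
	viper.SetDefault("log_max_size", 10)               // lowest precedence
	viper.BindEnv("log_max_size", "DEMO_LOG_MAX_SIZE") //nolint:errcheck // env overrides the default
	cmd.Flags().Int("log-max-size", viper.GetInt("log_max_size"), "rotate size (MB)")
	viper.BindPFlag("log_max_size", cmd.Flags().Lookup("log-max-size")) //nolint:errcheck // a set flag overrides env

	os.Setenv("DEMO_LOG_MAX_SIZE", "25")
	fmt.Println(viper.GetInt("log_max_size")) // 25: env wins while the flag stays unset
}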
35 cmd/rotatelogs_windows.go Normal file
@@ -0,0 +1,35 @@
package cmd

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"

	"github.com/drakkan/sftpgo/service"
)

var (
	rotateLogCmd = &cobra.Command{
		Use:   "rotatelogs",
		Short: "Signal to the running service to rotate the logs",
		Run: func(cmd *cobra.Command, args []string) {
			s := service.WindowsService{
				Service: service.Service{
					Shutdown: make(chan bool),
				},
			}
			err := s.RotateLogFile()
			if err != nil {
				fmt.Printf("Error sending rotate log file signal to the service: %v\r\n", err)
				os.Exit(1)
			} else {
				fmt.Printf("Rotate log file signal sent!\r\n")
			}
		},
	}
)

func init() {
	serviceCmd.AddCommand(rotateLogCmd)
}
37 cmd/serve.go
@@ -1,34 +1,47 @@
package cmd

import (
	"github.com/drakkan/sftpgo/service"
	"os"

	"github.com/spf13/cobra"

	"github.com/drakkan/sftpgo/service"
	"github.com/drakkan/sftpgo/utils"
)

var (
	serveCmd = &cobra.Command{
		Use:   "serve",
		Short: "Start the SFTP Server",
		Long: `To start the SFTPGo with the default values for the command line flags simply use:
		Long: `To start the SFTPGo with the default values for the command line flags simply
use:

sftpgo serve
$ sftpgo serve

Please take a look at the usage below to customize the startup options`,
		Run: func(cmd *cobra.Command, args []string) {
			service := service.Service{
				ConfigDir:     configDir,
				ConfigFile:    configFile,
				LogFilePath:   logFilePath,
				LogMaxSize:    logMaxSize,
				LogMaxBackups: logMaxBackups,
				LogMaxAge:     logMaxAge,
				LogCompress:   logCompress,
				LogVerbose:    logVerbose,
				Shutdown:      make(chan bool),
				ConfigDir:         utils.CleanDirInput(configDir),
				ConfigFile:        configFile,
				LogFilePath:       logFilePath,
				LogMaxSize:        logMaxSize,
				LogMaxBackups:     logMaxBackups,
				LogMaxAge:         logMaxAge,
				LogCompress:       logCompress,
				LogVerbose:        logVerbose,
				LoadDataFrom:      loadDataFrom,
				LoadDataMode:      loadDataMode,
				LoadDataQuotaScan: loadDataQuotaScan,
				LoadDataClean:     loadDataClean,
				Shutdown:          make(chan bool),
			}
			if err := service.Start(); err == nil {
				service.Wait()
				if service.Error == nil {
					os.Exit(0)
				}
			}
			os.Exit(1)
		},
	}
)
@@ -7,7 +7,7 @@ import (
var (
	serviceCmd = &cobra.Command{
		Use:   "service",
		Short: "Install, Uninstall, Start, Stop and retrieve status for SFTPGo Windows Service",
		Short: "Manage SFTPGo Windows Service",
	}
)
@@ -2,9 +2,13 @@ package cmd

import (
	"fmt"
	"os"
	"path/filepath"

	"github.com/spf13/cobra"

	"github.com/drakkan/sftpgo/service"
	"github.com/spf13/cobra"
	"github.com/drakkan/sftpgo/utils"
)

var (
@@ -12,6 +16,10 @@ var (
		Use:   "start",
		Short: "Start SFTPGo Windows Service",
		Run: func(cmd *cobra.Command, args []string) {
			configDir = utils.CleanDirInput(configDir)
			if !filepath.IsAbs(logFilePath) && utils.IsFileInputValid(logFilePath) {
				logFilePath = filepath.Join(configDir, logFilePath)
			}
			s := service.Service{
				ConfigDir:  configDir,
				ConfigFile: configFile,
@@ -29,6 +37,7 @@ var (
			err := winService.RunService()
			if err != nil {
				fmt.Printf("Error starting service: %v\r\n", err)
				os.Exit(1)
			} else {
				fmt.Printf("Service started!\r\n")
			}
166 cmd/startsubsys.go Normal file
@@ -0,0 +1,166 @@
package cmd

import (
	"io"
	"os"
	"os/user"
	"path/filepath"

	"github.com/rs/xid"
	"github.com/rs/zerolog"
	"github.com/spf13/cobra"
	"github.com/spf13/viper"

	"github.com/drakkan/sftpgo/common"
	"github.com/drakkan/sftpgo/config"
	"github.com/drakkan/sftpgo/dataprovider"
	"github.com/drakkan/sftpgo/logger"
	"github.com/drakkan/sftpgo/sftpd"
	"github.com/drakkan/sftpgo/version"
)

var (
	logJournalD     = false
	preserveHomeDir = false
	baseHomeDir     = ""
	subsystemCmd    = &cobra.Command{
		Use:   "startsubsys",
		Short: "Use SFTPGo as SFTP file transfer subsystem",
		Long: `In this mode SFTPGo speaks the server side of SFTP protocol to stdout and
expects client requests from stdin.
This mode is not intended to be called directly, but from sshd using the
Subsystem option.
For example adding a line like this one in "/etc/ssh/sshd_config":

Subsystem sftp sftpgo startsubsys

Command-line flags should be specified in the Subsystem declaration.
`,
		Run: func(cmd *cobra.Command, args []string) {
			logSender := "startsubsys"
			connectionID := xid.New().String()
			logLevel := zerolog.DebugLevel
			if !logVerbose {
				logLevel = zerolog.InfoLevel
			}
			if logJournalD {
				logger.InitJournalDLogger(logLevel)
			} else {
				logger.InitStdErrLogger(logLevel)
			}
			osUser, err := user.Current()
			if err != nil {
				logger.Error(logSender, connectionID, "unable to get the current user: %v", err)
				os.Exit(1)
			}
			username := osUser.Username
			homedir := osUser.HomeDir
			logger.Info(logSender, connectionID, "starting SFTPGo %v as subsystem, user %#v home dir %#v config dir %#v base home dir %#v",
				version.Get(), username, homedir, configDir, baseHomeDir)
			err = config.LoadConfig(configDir, configFile)
			if err != nil {
				logger.Error(logSender, connectionID, "unable to load configuration: %v", err)
				os.Exit(1)
			}
			commonConfig := config.GetCommonConfig()
			// idle connections are managed externally
			commonConfig.IdleTimeout = 0
			config.SetCommonConfig(commonConfig)
			if err := common.Initialize(config.GetCommonConfig()); err != nil {
				logger.Error(logSender, connectionID, "%v", err)
				os.Exit(1)
			}
			kmsConfig := config.GetKMSConfig()
			if err := kmsConfig.Initialize(); err != nil {
				logger.Error(logSender, connectionID, "unable to initialize KMS: %v", err)
				os.Exit(1)
			}
			dataProviderConf := config.GetProviderConf()
			if dataProviderConf.Driver == dataprovider.SQLiteDataProviderName || dataProviderConf.Driver == dataprovider.BoltDataProviderName {
				logger.Debug(logSender, connectionID, "data provider %#v not supported in subsystem mode, using %#v provider",
					dataProviderConf.Driver, dataprovider.MemoryDataProviderName)
				dataProviderConf.Driver = dataprovider.MemoryDataProviderName
				dataProviderConf.Name = ""
				dataProviderConf.PreferDatabaseCredentials = true
			}
			config.SetProviderConf(dataProviderConf)
			err = dataprovider.Initialize(dataProviderConf, configDir, false)
			if err != nil {
				logger.Error(logSender, connectionID, "unable to initialize the data provider: %v", err)
				os.Exit(1)
			}
			httpConfig := config.GetHTTPConfig()
			if err := httpConfig.Initialize(configDir); err != nil {
				logger.Error(logSender, connectionID, "unable to initialize http client: %v", err)
				os.Exit(1)
			}
			user, err := dataprovider.UserExists(username)
			if err == nil {
				if user.HomeDir != filepath.Clean(homedir) && !preserveHomeDir {
					// update the user
					user.HomeDir = filepath.Clean(homedir)
					err = dataprovider.UpdateUser(&user)
					if err != nil {
						logger.Error(logSender, connectionID, "unable to update user %#v: %v", username, err)
						os.Exit(1)
					}
				}
			} else {
				user.Username = username
				if baseHomeDir != "" && filepath.IsAbs(baseHomeDir) {
					user.HomeDir = filepath.Join(baseHomeDir, username)
				} else {
					user.HomeDir = filepath.Clean(homedir)
				}
				logger.Debug(logSender, connectionID, "home dir for new user %#v", user.HomeDir)
				user.Password = connectionID
				user.Permissions = make(map[string][]string)
				user.Permissions["/"] = []string{dataprovider.PermAny}
				err = dataprovider.AddUser(&user)
				if err != nil {
					logger.Error(logSender, connectionID, "unable to add user %#v: %v", username, err)
					os.Exit(1)
				}
			}
			err = sftpd.ServeSubSystemConnection(&user, connectionID, os.Stdin, os.Stdout)
			if err != nil && err != io.EOF {
				logger.Warn(logSender, connectionID, "serving subsystem finished with error: %v", err)
				os.Exit(1)
			}
			logger.Info(logSender, connectionID, "serving subsystem finished")
			os.Exit(0)
		},
	}
)

func init() {
	subsystemCmd.Flags().BoolVarP(&preserveHomeDir, "preserve-home", "p", false, `If the user already exists, the existing home
directory will not be changed`)
	subsystemCmd.Flags().StringVarP(&baseHomeDir, "base-home-dir", "d", "", `If the user does not exist specify an alternate
starting directory. The home directory for a new
user will be:

[base-home-dir]/[username]

base-home-dir must be an absolute path.`)
	subsystemCmd.Flags().BoolVarP(&logJournalD, "log-to-journald", "j", false, `Send logs to journald. Only available on Linux.
Use:

$ journalctl -o verbose -f

To see full logs.
If not set, the logs will be sent to the standard
error`)

	addConfigFlags(subsystemCmd)

	viper.SetDefault(logVerboseKey, defaultLogVerbose)
	viper.BindEnv(logVerboseKey, "SFTPGO_LOG_VERBOSE") //nolint:errcheck
	subsystemCmd.Flags().BoolVarP(&logVerbose, logVerboseFlag, "v", viper.GetBool(logVerboseKey),
		`Enable verbose logs. This flag can be set
using SFTPGO_LOG_VERBOSE env var too.
`)
	viper.BindPFlag(logVerboseKey, subsystemCmd.Flags().Lookup(logVerboseFlag)) //nolint:errcheck

	rootCmd.AddCommand(subsystemCmd)
}
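On the sshd side all of this is driven by a single global directive; a sketch for "/etc/ssh/sshd_config" (the absolute binary path is an assumption, the flags are the ones defined above):

Subsystem sftp /usr/local/bin/sftpgo startsubsys -j --base-home-dir /srv/sftp

sshd starts the subsystem process as the already-authenticated system user, which is why the Run function looks up user.Current() and auto-provisions a matching SFTPGo user instead of performing its own authentication.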
@@ -2,9 +2,11 @@ package cmd

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"

	"github.com/drakkan/sftpgo/service"
	"github.com/spf13/cobra"
)

var (
@@ -20,6 +22,7 @@ var (
			status, err := s.Status()
			if err != nil {
				fmt.Printf("Error querying service status: %v\r\n", err)
				os.Exit(1)
			} else {
				fmt.Printf("Service status: %#v\r\n", status.String())
			}
@@ -2,9 +2,11 @@ package cmd

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"

	"github.com/drakkan/sftpgo/service"
	"github.com/spf13/cobra"
)

var (
@@ -20,6 +22,7 @@ var (
			err := s.Stop()
			if err != nil {
				fmt.Printf("Error stopping service: %v\r\n", err)
				os.Exit(1)
			} else {
				fmt.Printf("Service stopped!\r\n")
			}
@@ -2,9 +2,11 @@ package cmd

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"

	"github.com/drakkan/sftpgo/service"
	"github.com/spf13/cobra"
)

var (
@@ -20,6 +22,7 @@ var (
			err := s.Uninstall()
			if err != nil {
				fmt.Printf("Error removing service: %v\r\n", err)
				os.Exit(1)
			} else {
				fmt.Printf("Service uninstalled\r\n")
			}
205 common/actions.go Normal file
@@ -0,0 +1,205 @@
package common

import (
	"bytes"
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"net/http"
	"net/url"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
	"time"

	"github.com/drakkan/sftpgo/dataprovider"
	"github.com/drakkan/sftpgo/httpclient"
	"github.com/drakkan/sftpgo/logger"
	"github.com/drakkan/sftpgo/utils"
)

var (
	errUnconfiguredAction    = errors.New("no hook is configured for this action")
	errNoHook                = errors.New("unable to execute action, no hook defined")
	errUnexpectedHTTResponse = errors.New("unexpected HTTP response code")
)

// ProtocolActions defines the action to execute on file operations and SSH commands
type ProtocolActions struct {
	// Valid values are download, upload, pre-delete, delete, rename, ssh_cmd. Empty slice to disable
	ExecuteOn []string `json:"execute_on" mapstructure:"execute_on"`
	// Absolute path to an external program or an HTTP URL
	Hook string `json:"hook" mapstructure:"hook"`
}

var actionHandler ActionHandler = &defaultActionHandler{}

// InitializeActionHandler lets the user choose an action handler implementation.
//
// Do NOT call this function after application initialization.
func InitializeActionHandler(handler ActionHandler) {
	actionHandler = handler
}

// SSHCommandActionNotification executes the defined action for the specified SSH command.
func SSHCommandActionNotification(user *dataprovider.User, filePath, target, sshCmd string, err error) {
	notification := newActionNotification(user, operationSSHCmd, filePath, target, sshCmd, ProtocolSSH, 0, err)

	go actionHandler.Handle(notification) // nolint:errcheck
}

// ActionHandler handles a notification for a Protocol Action.
type ActionHandler interface {
	Handle(notification *ActionNotification) error
}

// ActionNotification defines a notification for a Protocol Action.
type ActionNotification struct {
	Action     string `json:"action"`
	Username   string `json:"username"`
	Path       string `json:"path"`
	TargetPath string `json:"target_path,omitempty"`
	SSHCmd     string `json:"ssh_cmd,omitempty"`
	FileSize   int64  `json:"file_size,omitempty"`
	FsProvider int    `json:"fs_provider"`
	Bucket     string `json:"bucket,omitempty"`
	Endpoint   string `json:"endpoint,omitempty"`
	Status     int    `json:"status"`
	Protocol   string `json:"protocol"`
}

func newActionNotification(
	user *dataprovider.User,
	operation, filePath, target, sshCmd, protocol string,
	fileSize int64,
	err error,
) *ActionNotification {
	var bucket, endpoint string
	status := 1

	if user.FsConfig.Provider == dataprovider.S3FilesystemProvider {
		bucket = user.FsConfig.S3Config.Bucket
		endpoint = user.FsConfig.S3Config.Endpoint
	} else if user.FsConfig.Provider == dataprovider.GCSFilesystemProvider {
		bucket = user.FsConfig.GCSConfig.Bucket
	} else if user.FsConfig.Provider == dataprovider.AzureBlobFilesystemProvider {
		bucket = user.FsConfig.AzBlobConfig.Container
		if user.FsConfig.AzBlobConfig.SASURL != "" {
			endpoint = user.FsConfig.AzBlobConfig.SASURL
		} else {
			endpoint = user.FsConfig.AzBlobConfig.Endpoint
		}
	}

	if err == ErrQuotaExceeded {
		status = 2
	} else if err != nil {
		status = 0
	}

	return &ActionNotification{
		Action:     operation,
		Username:   user.Username,
		Path:       filePath,
		TargetPath: target,
		SSHCmd:     sshCmd,
		FileSize:   fileSize,
		FsProvider: int(user.FsConfig.Provider),
		Bucket:     bucket,
		Endpoint:   endpoint,
		Status:     status,
		Protocol:   protocol,
	}
}
type defaultActionHandler struct{}
|
||||
|
||||
func (h *defaultActionHandler) Handle(notification *ActionNotification) error {
|
||||
if !utils.IsStringInSlice(notification.Action, Config.Actions.ExecuteOn) {
|
||||
return errUnconfiguredAction
|
||||
}
|
||||
|
||||
if Config.Actions.Hook == "" {
|
||||
logger.Warn(notification.Protocol, "", "Unable to send notification, no hook is defined")
|
||||
|
||||
return errNoHook
|
||||
}
|
||||
|
||||
if strings.HasPrefix(Config.Actions.Hook, "http") {
|
||||
return h.handleHTTP(notification)
|
||||
}
|
||||
|
||||
return h.handleCommand(notification)
|
||||
}
|
||||
|
||||
func (h *defaultActionHandler) handleHTTP(notification *ActionNotification) error {
|
||||
u, err := url.Parse(Config.Actions.Hook)
|
||||
if err != nil {
|
||||
logger.Warn(notification.Protocol, "", "Invalid hook %#v for operation %#v: %v", Config.Actions.Hook, notification.Action, err)
|
||||
|
||||
return err
|
||||
}
|
||||
|
||||
startTime := time.Now()
|
||||
respCode := 0
|
||||
|
||||
httpClient := httpclient.GetRetraybleHTTPClient()
|
||||
|
||||
var b bytes.Buffer
|
||||
_ = json.NewEncoder(&b).Encode(notification)
|
||||
|
||||
resp, err := httpClient.Post(u.String(), "application/json", &b)
|
||||
if err == nil {
|
||||
respCode = resp.StatusCode
|
||||
resp.Body.Close()
|
||||
|
||||
if respCode != http.StatusOK {
|
||||
err = errUnexpectedHTTResponse
|
||||
}
|
||||
}
|
||||
|
||||
logger.Debug(notification.Protocol, "", "notified operation %#v to URL: %v status code: %v, elapsed: %v err: %v", notification.Action, u.String(), respCode, time.Since(startTime), err)
|
||||
|
||||
return err
|
||||
}
|
||||
|
||||
func (h *defaultActionHandler) handleCommand(notification *ActionNotification) error {
|
||||
if !filepath.IsAbs(Config.Actions.Hook) {
|
||||
err := fmt.Errorf("invalid notification command %#v", Config.Actions.Hook)
|
||||
logger.Warn(notification.Protocol, "", "unable to execute notification command: %v", err)
|
||||
|
||||
return err
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
|
||||
defer cancel()
|
||||
|
||||
cmd := exec.CommandContext(ctx, Config.Actions.Hook, notification.Action, notification.Username, notification.Path, notification.TargetPath, notification.SSHCmd)
|
||||
cmd.Env = append(os.Environ(), notificationAsEnvVars(notification)...)
|
||||
|
||||
startTime := time.Now()
|
||||
err := cmd.Run()
|
||||
|
||||
logger.Debug(notification.Protocol, "", "executed command %#v with arguments: %#v, %#v, %#v, %#v, %#v, elapsed: %v, error: %v",
|
||||
Config.Actions.Hook, notification.Action, notification.Username, notification.Path, notification.TargetPath, notification.SSHCmd, time.Since(startTime), err)
|
||||
|
||||
return err
|
||||
}
|
||||
|
||||
func notificationAsEnvVars(notification *ActionNotification) []string {
|
||||
return []string{
|
||||
fmt.Sprintf("SFTPGO_ACTION=%v", notification.Action),
|
||||
fmt.Sprintf("SFTPGO_ACTION_USERNAME=%v", notification.Username),
|
||||
fmt.Sprintf("SFTPGO_ACTION_PATH=%v", notification.Path),
|
||||
fmt.Sprintf("SFTPGO_ACTION_TARGET=%v", notification.TargetPath),
|
||||
fmt.Sprintf("SFTPGO_ACTION_SSH_CMD=%v", notification.SSHCmd),
|
||||
fmt.Sprintf("SFTPGO_ACTION_FILE_SIZE=%v", notification.FileSize),
|
||||
fmt.Sprintf("SFTPGO_ACTION_FS_PROVIDER=%v", notification.FsProvider),
|
||||
fmt.Sprintf("SFTPGO_ACTION_BUCKET=%v", notification.Bucket),
|
||||
fmt.Sprintf("SFTPGO_ACTION_ENDPOINT=%v", notification.Endpoint),
|
||||
fmt.Sprintf("SFTPGO_ACTION_STATUS=%v", notification.Status),
|
||||
fmt.Sprintf("SFTPGO_ACTION_PROTOCOL=%v", notification.Protocol),
|
||||
}
|
||||
}
|
||||
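handleCommand above passes the notification to the hook both as positional arguments and as SFTPGO_ACTION_* environment variables (see notificationAsEnvVars). A minimal sketch of an external hook program that consumes the environment form; the variable names come from the code above, everything else is illustrative:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Set by notificationAsEnvVars; a missing variable reads as "".
	fmt.Printf("action=%s user=%s path=%s status=%s\n",
		os.Getenv("SFTPGO_ACTION"),
		os.Getenv("SFTPGO_ACTION_USERNAME"),
		os.Getenv("SFTPGO_ACTION_PATH"),
		os.Getenv("SFTPGO_ACTION_STATUS"))
	// Exit code 0: handleCommand treats the hook as successful.
	os.Exit(0)
}

Note that handleCommand only accepts an absolute path, so the compiled binary must be referenced by its full path in the hook setting.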
222 common/actions_test.go Normal file
@@ -0,0 +1,222 @@
package common

import (
	"errors"
	"fmt"
	"io/ioutil"
	"os"
	"os/exec"
	"path/filepath"
	"runtime"
	"testing"

	"github.com/stretchr/testify/assert"

	"github.com/drakkan/sftpgo/dataprovider"
	"github.com/drakkan/sftpgo/vfs"
)

func TestNewActionNotification(t *testing.T) {
	user := &dataprovider.User{
		Username: "username",
	}
	user.FsConfig.Provider = dataprovider.LocalFilesystemProvider
	user.FsConfig.S3Config = vfs.S3FsConfig{
		Bucket:   "s3bucket",
		Endpoint: "endpoint",
	}
	user.FsConfig.GCSConfig = vfs.GCSFsConfig{
		Bucket: "gcsbucket",
	}
	user.FsConfig.AzBlobConfig = vfs.AzBlobFsConfig{
		Container: "azcontainer",
		SASURL:    "azsasurl",
		Endpoint:  "azendpoint",
	}
	a := newActionNotification(user, operationDownload, "path", "target", "", ProtocolSFTP, 123, errors.New("fake error"))
	assert.Equal(t, user.Username, a.Username)
	assert.Equal(t, 0, len(a.Bucket))
	assert.Equal(t, 0, len(a.Endpoint))
	assert.Equal(t, 0, a.Status)

	user.FsConfig.Provider = dataprovider.S3FilesystemProvider
	a = newActionNotification(user, operationDownload, "path", "target", "", ProtocolSSH, 123, nil)
	assert.Equal(t, "s3bucket", a.Bucket)
	assert.Equal(t, "endpoint", a.Endpoint)
	assert.Equal(t, 1, a.Status)

	user.FsConfig.Provider = dataprovider.GCSFilesystemProvider
	a = newActionNotification(user, operationDownload, "path", "target", "", ProtocolSCP, 123, ErrQuotaExceeded)
	assert.Equal(t, "gcsbucket", a.Bucket)
	assert.Equal(t, 0, len(a.Endpoint))
	assert.Equal(t, 2, a.Status)

	user.FsConfig.Provider = dataprovider.AzureBlobFilesystemProvider
	a = newActionNotification(user, operationDownload, "path", "target", "", ProtocolSCP, 123, nil)
	assert.Equal(t, "azcontainer", a.Bucket)
	assert.Equal(t, "azsasurl", a.Endpoint)
	assert.Equal(t, 1, a.Status)

	user.FsConfig.AzBlobConfig.SASURL = ""
	a = newActionNotification(user, operationDownload, "path", "target", "", ProtocolSCP, 123, nil)
	assert.Equal(t, "azcontainer", a.Bucket)
	assert.Equal(t, "azendpoint", a.Endpoint)
	assert.Equal(t, 1, a.Status)
}

func TestActionHTTP(t *testing.T) {
	actionsCopy := Config.Actions

	Config.Actions = ProtocolActions{
		ExecuteOn: []string{operationDownload},
		Hook:      fmt.Sprintf("http://%v", httpAddr),
	}
	user := &dataprovider.User{
		Username: "username",
	}
	a := newActionNotification(user, operationDownload, "path", "target", "", ProtocolSFTP, 123, nil)
	err := actionHandler.Handle(a)
	assert.NoError(t, err)

	Config.Actions.Hook = "http://invalid:1234"
	err = actionHandler.Handle(a)
	assert.Error(t, err)

	Config.Actions.Hook = fmt.Sprintf("http://%v/404", httpAddr)
	err = actionHandler.Handle(a)
	if assert.Error(t, err) {
		assert.EqualError(t, err, errUnexpectedHTTResponse.Error())
	}

	Config.Actions = actionsCopy
}

func TestActionCMD(t *testing.T) {
	if runtime.GOOS == osWindows {
		t.Skip("this test is not available on Windows")
	}
	actionsCopy := Config.Actions

	hookCmd, err := exec.LookPath("true")
	assert.NoError(t, err)

	Config.Actions = ProtocolActions{
		ExecuteOn: []string{operationDownload},
		Hook:      hookCmd,
	}
	user := &dataprovider.User{
		Username: "username",
	}
	a := newActionNotification(user, operationDownload, "path", "target", "", ProtocolSFTP, 123, nil)
	err = actionHandler.Handle(a)
	assert.NoError(t, err)

	SSHCommandActionNotification(user, "path", "target", "sha1sum", nil)

	Config.Actions = actionsCopy
}

func TestWrongActions(t *testing.T) {
	actionsCopy := Config.Actions

	badCommand := "/bad/command"
	if runtime.GOOS == osWindows {
		badCommand = "C:\\bad\\command"
	}
	Config.Actions = ProtocolActions{
		ExecuteOn: []string{operationUpload},
		Hook:      badCommand,
	}
	user := &dataprovider.User{
		Username: "username",
	}

	a := newActionNotification(user, operationUpload, "", "", "", ProtocolSFTP, 123, nil)
	err := actionHandler.Handle(a)
	assert.Error(t, err, "action with bad command must fail")

	a.Action = operationDelete
	err = actionHandler.Handle(a)
	assert.EqualError(t, err, errUnconfiguredAction.Error())

	Config.Actions.Hook = "http://foo\x7f.com/"
	a.Action = operationUpload
	err = actionHandler.Handle(a)
	assert.Error(t, err, "action with bad url must fail")

	Config.Actions.Hook = ""
	err = actionHandler.Handle(a)
	if assert.Error(t, err) {
		assert.EqualError(t, err, errNoHook.Error())
	}

	Config.Actions.Hook = "relative path"
	err = actionHandler.Handle(a)
	if assert.Error(t, err) {
		assert.EqualError(t, err, fmt.Sprintf("invalid notification command %#v", Config.Actions.Hook))
	}

	Config.Actions = actionsCopy
}

func TestPreDeleteAction(t *testing.T) {
	if runtime.GOOS == osWindows {
		t.Skip("this test is not available on Windows")
	}
	actionsCopy := Config.Actions

	hookCmd, err := exec.LookPath("true")
	assert.NoError(t, err)
	Config.Actions = ProtocolActions{
		ExecuteOn: []string{operationPreDelete},
		Hook:      hookCmd,
	}
	homeDir := filepath.Join(os.TempDir(), "test_user")
	err = os.MkdirAll(homeDir, os.ModePerm)
	assert.NoError(t, err)
	user := dataprovider.User{
		Username: "username",
		HomeDir:  homeDir,
	}
	user.Permissions = make(map[string][]string)
	user.Permissions["/"] = []string{dataprovider.PermAny}
	fs := vfs.NewOsFs("id", homeDir, nil)
	c := NewBaseConnection("id", ProtocolSFTP, user, fs)

	testfile := filepath.Join(user.HomeDir, "testfile")
	err = ioutil.WriteFile(testfile, []byte("test"), os.ModePerm)
	assert.NoError(t, err)
	info, err := os.Stat(testfile)
	assert.NoError(t, err)
	err = c.RemoveFile(testfile, "testfile", info)
	assert.NoError(t, err)
	assert.FileExists(t, testfile)

	os.RemoveAll(homeDir)

	Config.Actions = actionsCopy
}

type actionHandlerStub struct {
	called bool
}

func (h *actionHandlerStub) Handle(notification *ActionNotification) error {
	h.called = true

	return nil
}

func TestInitializeActionHandler(t *testing.T) {
	handler := &actionHandlerStub{}

	InitializeActionHandler(handler)
	t.Cleanup(func() {
		InitializeActionHandler(&defaultActionHandler{})
	})

	err := actionHandler.Handle(&ActionNotification{})

	assert.NoError(t, err)
	assert.True(t, handler.called)
}
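TestActionHTTP posts to the throwaway HTTP server started in TestMain (common_test.go, further below). A standalone receiver for the same payload might look like the following sketch; the struct mirrors a subset of the ActionNotification JSON tags defined above, while the listen address is an arbitrary choice:

package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// notification decodes a subset of common.ActionNotification.
type notification struct {
	Action   string `json:"action"`
	Username string `json:"username"`
	Path     string `json:"path"`
	Status   int    `json:"status"`
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		var n notification
		if err := json.NewDecoder(r.Body).Decode(&n); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		log.Printf("action %q from %q on %q, status %d", n.Action, n.Username, n.Path, n.Status)
		// handleHTTP treats anything other than 200 as a failed hook.
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:9999", nil))
}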
834 common/common.go Normal file
@@ -0,0 +1,834 @@
// Package common defines code shared among file transfer packages and protocols
package common

import (
	"context"
	"errors"
	"fmt"
	"net"
	"net/http"
	"net/url"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
	"sync"
	"sync/atomic"
	"time"

	"github.com/pires/go-proxyproto"

	"github.com/drakkan/sftpgo/dataprovider"
	"github.com/drakkan/sftpgo/httpclient"
	"github.com/drakkan/sftpgo/logger"
	"github.com/drakkan/sftpgo/metrics"
	"github.com/drakkan/sftpgo/utils"
)

// constants
const (
	logSender                = "common"
	uploadLogSender          = "Upload"
	downloadLogSender        = "Download"
	renameLogSender          = "Rename"
	rmdirLogSender           = "Rmdir"
	mkdirLogSender           = "Mkdir"
	symlinkLogSender         = "Symlink"
	removeLogSender          = "Remove"
	chownLogSender           = "Chown"
	chmodLogSender           = "Chmod"
	chtimesLogSender         = "Chtimes"
	truncateLogSender        = "Truncate"
	operationDownload        = "download"
	operationUpload          = "upload"
	operationDelete          = "delete"
	operationPreDelete       = "pre-delete"
	operationRename          = "rename"
	operationSSHCmd          = "ssh_cmd"
	chtimesFormat            = "2006-01-02T15:04:05" // YYYY-MM-DDTHH:MM:SS
	idleTimeoutCheckInterval = 3 * time.Minute
)

// Stat flags
const (
	StatAttrUIDGID = 1
	StatAttrPerms  = 2
	StatAttrTimes  = 4
	StatAttrSize   = 8
)

// Transfer types
const (
	TransferUpload = iota
	TransferDownload
)

// Supported protocols
const (
	ProtocolSFTP   = "SFTP"
	ProtocolSCP    = "SCP"
	ProtocolSSH    = "SSH"
	ProtocolFTP    = "FTP"
	ProtocolWebDAV = "DAV"
)

// Upload modes
const (
	UploadModeStandard = iota
	UploadModeAtomic
	UploadModeAtomicWithResume
)

// errors definitions
var (
	ErrPermissionDenied     = errors.New("permission denied")
	ErrNotExist             = errors.New("no such file or directory")
	ErrOpUnsupported        = errors.New("operation unsupported")
	ErrGenericFailure       = errors.New("failure")
	ErrQuotaExceeded        = errors.New("denying write due to space limit")
	ErrSkipPermissionsCheck = errors.New("permission check skipped")
	ErrConnectionDenied     = errors.New("you are not allowed to connect")
	ErrNoBinding            = errors.New("no binding configured")
	ErrCrtRevoked           = errors.New("your certificate has been revoked")
	errNoTransfer           = errors.New("requested transfer not found")
	errTransferMismatch     = errors.New("transfer mismatch")
)

var (
	// Config is the configuration for the supported protocols
	Config Configuration
	// Connections is the list of active connections
	Connections ActiveConnections
	// QuotaScans is the list of active quota scans
	QuotaScans            ActiveScans
	idleTimeoutTicker     *time.Ticker
	idleTimeoutTickerDone chan bool
	supportedProtocols    = []string{ProtocolSFTP, ProtocolSCP, ProtocolSSH, ProtocolFTP, ProtocolWebDAV}
)

// Initialize sets the common configuration
func Initialize(c Configuration) error {
	Config = c
	Config.idleLoginTimeout = 2 * time.Minute
	Config.idleTimeoutAsDuration = time.Duration(Config.IdleTimeout) * time.Minute
	if Config.IdleTimeout > 0 {
		startIdleTimeoutTicker(idleTimeoutCheckInterval)
	}
	Config.defender = nil
	if c.DefenderConfig.Enabled {
		defender, err := newInMemoryDefender(&c.DefenderConfig)
		if err != nil {
			return fmt.Errorf("defender initialization error: %v", err)
		}
		logger.Info(logSender, "", "defender initialized with config %+v", c.DefenderConfig)
		Config.defender = defender
	}
	return nil
}
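Initialize wires the idle-timeout ticker and the optional in-memory defender in one call. A caller-side sketch; the DefenderConfig field values mirror the ones exercised by TestDefenderIntegration in common_test.go below, the rest is illustrative:

package main

import (
	"log"

	"github.com/drakkan/sftpgo/common"
)

func main() {
	cfg := common.Configuration{
		IdleTimeout: 5, // minutes, 0 disables the idle checker
		DefenderConfig: common.DefenderConfig{
			Enabled:          true,
			BanTime:          10,
			BanTimeIncrement: 50,
			Threshold:        3,
			ScoreInvalid:     2,
			ScoreValid:       1,
			ObservationTime:  15,
			EntriesSoftLimit: 100,
			EntriesHardLimit: 150,
		},
	}
	if err := common.Initialize(cfg); err != nil {
		log.Fatalf("common initialization failed: %v", err)
	}
}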
// ReloadDefender reloads the defender's block and safe lists
func ReloadDefender() error {
	if Config.defender == nil {
		return nil
	}

	return Config.defender.Reload()
}

// IsBanned returns true if the specified IP address is banned
func IsBanned(ip string) bool {
	if Config.defender == nil {
		return false
	}

	return Config.defender.IsBanned(ip)
}

// GetDefenderBanTime returns the ban time for the given IP
// or nil if the IP is not banned or the defender is disabled
func GetDefenderBanTime(ip string) *time.Time {
	if Config.defender == nil {
		return nil
	}

	return Config.defender.GetBanTime(ip)
}

// Unban removes the specified IP address from the banned ones
func Unban(ip string) bool {
	if Config.defender == nil {
		return false
	}

	return Config.defender.Unban(ip)
}

// GetDefenderScore returns the score for the given IP
func GetDefenderScore(ip string) int {
	if Config.defender == nil {
		return 0
	}

	return Config.defender.GetScore(ip)
}

// AddDefenderEvent adds the specified defender event for the given IP
func AddDefenderEvent(ip string, event HostEvent) {
	if Config.defender == nil {
		return
	}

	Config.defender.AddEvent(ip, event)
}

// the ticker cannot be started/stopped from multiple goroutines
func startIdleTimeoutTicker(duration time.Duration) {
	stopIdleTimeoutTicker()
	idleTimeoutTicker = time.NewTicker(duration)
	idleTimeoutTickerDone = make(chan bool)
	go func() {
		for {
			select {
			case <-idleTimeoutTickerDone:
				return
			case <-idleTimeoutTicker.C:
				Connections.checkIdles()
			}
		}
	}()
}

func stopIdleTimeoutTicker() {
	if idleTimeoutTicker != nil {
		idleTimeoutTicker.Stop()
		idleTimeoutTickerDone <- true
		idleTimeoutTicker = nil
	}
}

// ActiveTransfer defines the interface for the current active transfers
type ActiveTransfer interface {
	GetID() uint64
	GetType() int
	GetSize() int64
	GetVirtualPath() string
	GetStartTime() time.Time
	SignalClose()
	Truncate(fsPath string, size int64) (int64, error)
	GetRealFsPath(fsPath string) string
}

// ActiveConnection defines the interface for the current active connections
type ActiveConnection interface {
	GetID() string
	GetUsername() string
	GetRemoteAddress() string
	GetClientVersion() string
	GetProtocol() string
	GetConnectionTime() time.Time
	GetLastActivity() time.Time
	GetCommand() string
	Disconnect() error
	AddTransfer(t ActiveTransfer)
	RemoveTransfer(t ActiveTransfer)
	GetTransfers() []ConnectionTransfer
	CloseFS() error
}

// StatAttributes defines the attributes for set stat commands
type StatAttributes struct {
	Mode  os.FileMode
	Atime time.Time
	Mtime time.Time
	UID   int
	GID   int
	Flags int
	Size  int64
}

// ConnectionTransfer defines the transfer details to expose
type ConnectionTransfer struct {
	ID            uint64 `json:"-"`
	OperationType string `json:"operation_type"`
	StartTime     int64  `json:"start_time"`
	Size          int64  `json:"size"`
	VirtualPath   string `json:"path"`
}

func (t *ConnectionTransfer) getConnectionTransferAsString() string {
	result := ""
	switch t.OperationType {
	case operationUpload:
		result += "UL "
	case operationDownload:
		result += "DL "
	}
	result += fmt.Sprintf("%#v ", t.VirtualPath)
	if t.Size > 0 {
		elapsed := time.Since(utils.GetTimeFromMsecSinceEpoch(t.StartTime))
		speed := float64(t.Size) / float64(utils.GetTimeAsMsSinceEpoch(time.Now())-t.StartTime)
		result += fmt.Sprintf("Size: %#v Elapsed: %#v Speed: \"%.1f KB/s\"", utils.ByteCountIEC(t.Size),
			utils.GetDurationAsString(elapsed), speed)
	}
	return result
}

// Configuration defines configuration parameters common to all supported protocols
type Configuration struct {
	// Maximum idle timeout as minutes. If a client is idle for a time that exceeds this setting it will be disconnected.
	// 0 means disabled
	IdleTimeout int `json:"idle_timeout" mapstructure:"idle_timeout"`
	// UploadMode 0 means standard: the files are uploaded directly to the requested path.
	// 1 means atomic: the files are uploaded to a temporary path and renamed to the requested path
	// when the client ends the upload. Atomic mode avoids problems such as a web server that
	// serves partial files while they are still being uploaded.
	// In atomic mode, if there is an upload error the temporary file is deleted, so the requested
	// upload path will not contain a partial file.
	// 2 means atomic with resume support: as atomic, but if there is an upload error the temporary
	// file is renamed to the requested path and not deleted; this way a client can reconnect and resume
	// the upload.
	UploadMode int `json:"upload_mode" mapstructure:"upload_mode"`
	// Actions to execute for SFTP file operations and SSH commands
	Actions ProtocolActions `json:"actions" mapstructure:"actions"`
	// SetstatMode 0 means "normal mode": requests for changing permissions and owner/group are executed.
	// 1 means "ignore mode": requests for changing permissions and owner/group are silently ignored.
	// 2 means "ignore mode for cloud fs": requests for changing permissions and owner/group/time are
	// silently ignored for cloud based filesystems such as S3, GCS, Azure Blob
	SetstatMode int `json:"setstat_mode" mapstructure:"setstat_mode"`
	// Support for HAProxy PROXY protocol.
	// If you are running SFTPGo behind a proxy server such as HAProxy, AWS ELB or NGINX, you can enable
	// the proxy protocol. It provides a convenient way to safely transport connection information
	// such as a client's address across multiple layers of NAT or TCP proxies to get the real
	// client IP address instead of the proxy IP. Both protocol versions 1 and 2 are supported.
	// - 0 means disabled
	// - 1 means proxy protocol enabled. Proxy header will be used and requests without proxy header will be accepted.
	// - 2 means proxy protocol required. Proxy header will be used and requests without proxy header will be rejected.
	// If the proxy protocol is enabled in SFTPGo then you have to enable the protocol in your proxy configuration too,
	// for example for HAProxy add "send-proxy" or "send-proxy-v2" to each server configuration line.
	ProxyProtocol int `json:"proxy_protocol" mapstructure:"proxy_protocol"`
	// List of IP addresses and IP ranges allowed to send the proxy header.
	// If proxy protocol is set to 1 and we receive a proxy header from an IP that is not in the list then the
	// connection will be accepted and the header will be ignored.
	// If proxy protocol is set to 2 and we receive a proxy header from an IP that is not in the list then the
	// connection will be rejected.
	ProxyAllowed []string `json:"proxy_allowed" mapstructure:"proxy_allowed"`
	// Absolute path to an external program or an HTTP URL to invoke after a user connects
	// and before he tries to login. It allows you to reject the connection based on the source
	// ip address. Leave empty to disable.
	PostConnectHook string `json:"post_connect_hook" mapstructure:"post_connect_hook"`
	// Maximum number of concurrent client connections. 0 means unlimited
	MaxTotalConnections int `json:"max_total_connections" mapstructure:"max_total_connections"`
	// Defender configuration
	DefenderConfig        DefenderConfig `json:"defender" mapstructure:"defender"`
	idleTimeoutAsDuration time.Duration
	idleLoginTimeout      time.Duration
	defender              Defender
}

// IsAtomicUploadEnabled returns true if atomic upload is enabled
func (c *Configuration) IsAtomicUploadEnabled() bool {
	return c.UploadMode == UploadModeAtomic || c.UploadMode == UploadModeAtomicWithResume
}

// GetProxyListener returns a wrapper for the given listener that supports the
// HAProxy Proxy Protocol or nil if the proxy protocol is not configured
func (c *Configuration) GetProxyListener(listener net.Listener) (*proxyproto.Listener, error) {
	var proxyListener *proxyproto.Listener
	var err error
	if c.ProxyProtocol > 0 {
		var policyFunc func(upstream net.Addr) (proxyproto.Policy, error)
		if c.ProxyProtocol == 1 && len(c.ProxyAllowed) > 0 {
			policyFunc, err = proxyproto.LaxWhiteListPolicy(c.ProxyAllowed)
			if err != nil {
				return nil, err
			}
		}
		if c.ProxyProtocol == 2 {
			if len(c.ProxyAllowed) == 0 {
				policyFunc = func(upstream net.Addr) (proxyproto.Policy, error) {
					return proxyproto.REQUIRE, nil
				}
			} else {
				policyFunc, err = proxyproto.StrictWhiteListPolicy(c.ProxyAllowed)
				if err != nil {
					return nil, err
				}
			}
		}
		proxyListener = &proxyproto.Listener{
			Listener: listener,
			Policy:   policyFunc,
		}
	}
	return proxyListener, nil
}
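GetProxyListener returns nil when the proxy protocol is disabled, so callers pick which listener to serve on. A sketch mirroring the proxy-protocol server in TestMain (common_test.go below); the address is an arbitrary example:

package main

import (
	"log"
	"net"
	"net/http"

	"github.com/drakkan/sftpgo/common"
)

func main() {
	listener, err := net.Listen("tcp", "127.0.0.1:7777")
	if err != nil {
		log.Fatal(err)
	}
	proxyListener, err := common.Config.GetProxyListener(listener)
	if err != nil {
		log.Fatal(err)
	}
	s := &http.Server{}
	if proxyListener != nil {
		// proxy protocol 1 or 2 configured: accept through the wrapper
		log.Fatal(s.Serve(proxyListener))
	}
	log.Fatal(s.Serve(listener))
}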
// ExecutePostConnectHook executes the post connect hook if defined
func (c *Configuration) ExecutePostConnectHook(ipAddr, protocol string) error {
	if c.PostConnectHook == "" {
		return nil
	}
	if strings.HasPrefix(c.PostConnectHook, "http") {
		var url *url.URL
		url, err := url.Parse(c.PostConnectHook)
		if err != nil {
			logger.Warn(protocol, "", "Login from ip %#v denied, invalid post connect hook %#v: %v",
				ipAddr, c.PostConnectHook, err)
			return err
		}
		httpClient := httpclient.GetRetraybleHTTPClient()
		q := url.Query()
		q.Add("ip", ipAddr)
		q.Add("protocol", protocol)
		url.RawQuery = q.Encode()

		resp, err := httpClient.Get(url.String())
		if err != nil {
			logger.Warn(protocol, "", "Login from ip %#v denied, error executing post connect hook: %v", ipAddr, err)
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			logger.Warn(protocol, "", "Login from ip %#v denied, post connect hook response code: %v", ipAddr, resp.StatusCode)
			return errUnexpectedHTTResponse
		}
		return nil
	}
	if !filepath.IsAbs(c.PostConnectHook) {
		err := fmt.Errorf("invalid post connect hook %#v", c.PostConnectHook)
		logger.Warn(protocol, "", "Login from ip %#v denied: %v", ipAddr, err)
		return err
	}
	ctx, cancel := context.WithTimeout(context.Background(), 20*time.Second)
	defer cancel()
	cmd := exec.CommandContext(ctx, c.PostConnectHook)
	cmd.Env = append(os.Environ(),
		fmt.Sprintf("SFTPGO_CONNECTION_IP=%v", ipAddr),
		fmt.Sprintf("SFTPGO_CONNECTION_PROTOCOL=%v", protocol))
	err := cmd.Run()
	if err != nil {
		logger.Warn(protocol, "", "Login from ip %#v denied, connect hook error: %v", ipAddr, err)
	}
	return err
}
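ExecutePostConnectHook denies the connection when the external program exits non-zero, or when the HTTP endpoint answers with anything other than 200. A sketch of a hook executable; the rejected address is a hypothetical example:

package main

import "os"

func main() {
	// SFTPGO_CONNECTION_IP and SFTPGO_CONNECTION_PROTOCOL are set by
	// ExecutePostConnectHook above.
	if os.Getenv("SFTPGO_CONNECTION_IP") == "192.0.2.1" {
		os.Exit(1) // non-zero exit: the connection is denied
	}
	// exit code 0: the connection is allowed
}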
// SSHConnection defines an ssh connection.
// Each SSH connection can open several channels for SFTP or SSH commands
type SSHConnection struct {
	id           string
	conn         net.Conn
	lastActivity int64
}

// NewSSHConnection returns a new SSHConnection
func NewSSHConnection(id string, conn net.Conn) *SSHConnection {
	return &SSHConnection{
		id:           id,
		conn:         conn,
		lastActivity: time.Now().UnixNano(),
	}
}

// GetID returns the ID for this SSHConnection
func (c *SSHConnection) GetID() string {
	return c.id
}

// UpdateLastActivity updates last activity for this connection
func (c *SSHConnection) UpdateLastActivity() {
	atomic.StoreInt64(&c.lastActivity, time.Now().UnixNano())
}

// GetLastActivity returns the last connection activity
func (c *SSHConnection) GetLastActivity() time.Time {
	return time.Unix(0, atomic.LoadInt64(&c.lastActivity))
}

// Close closes the underlying network connection
func (c *SSHConnection) Close() error {
	return c.conn.Close()
}

// ActiveConnections holds the current active connections with the associated transfers
type ActiveConnections struct {
	sync.RWMutex
	connections    []ActiveConnection
	sshConnections []*SSHConnection
}

// GetActiveSessions returns the number of active sessions for the given username.
// We return the open sessions for any protocol
func (conns *ActiveConnections) GetActiveSessions(username string) int {
	conns.RLock()
	defer conns.RUnlock()

	numSessions := 0
	for _, c := range conns.connections {
		if c.GetUsername() == username {
			numSessions++
		}
	}
	return numSessions
}

// Add adds a new connection to the active ones
func (conns *ActiveConnections) Add(c ActiveConnection) {
	conns.Lock()
	defer conns.Unlock()

	conns.connections = append(conns.connections, c)
	metrics.UpdateActiveConnectionsSize(len(conns.connections))
	logger.Debug(c.GetProtocol(), c.GetID(), "connection added, num open connections: %v", len(conns.connections))
}

// Swap replaces an existing connection with the given one.
// This method is useful if you have to change some connection details:
// for example, for FTP it is used to update the connection once the user
// authenticates
func (conns *ActiveConnections) Swap(c ActiveConnection) error {
	conns.Lock()
	defer conns.Unlock()

	for idx, conn := range conns.connections {
		if conn.GetID() == c.GetID() {
			conn = nil
			conns.connections[idx] = c
			return nil
		}
	}
	return errors.New("connection to swap not found")
}

// Remove removes a connection from the active ones
func (conns *ActiveConnections) Remove(connectionID string) {
	conns.Lock()
	defer conns.Unlock()

	for idx, conn := range conns.connections {
		if conn.GetID() == connectionID {
			err := conn.CloseFS()
			lastIdx := len(conns.connections) - 1
			conns.connections[idx] = conns.connections[lastIdx]
			conns.connections[lastIdx] = nil
			conns.connections = conns.connections[:lastIdx]
			metrics.UpdateActiveConnectionsSize(lastIdx)
			logger.Debug(conn.GetProtocol(), conn.GetID(), "connection removed, close fs error: %v, num open connections: %v",
				err, lastIdx)
			return
		}
	}
	logger.Warn(logSender, "", "connection id %#v to remove not found!", connectionID)
}

// Close closes an active connection.
// It returns true on success
func (conns *ActiveConnections) Close(connectionID string) bool {
	conns.RLock()
	result := false

	for _, c := range conns.connections {
		if c.GetID() == connectionID {
			defer func(conn ActiveConnection) {
				err := conn.Disconnect()
				logger.Debug(conn.GetProtocol(), conn.GetID(), "close connection requested, close err: %v", err)
			}(c)
			result = true
			break
		}
	}

	conns.RUnlock()
	return result
}

// AddSSHConnection adds a new ssh connection to the active ones
func (conns *ActiveConnections) AddSSHConnection(c *SSHConnection) {
	conns.Lock()
	defer conns.Unlock()

	conns.sshConnections = append(conns.sshConnections, c)
	logger.Debug(logSender, c.GetID(), "ssh connection added, num open connections: %v", len(conns.sshConnections))
}

// RemoveSSHConnection removes a connection from the active ones
func (conns *ActiveConnections) RemoveSSHConnection(connectionID string) {
	conns.Lock()
	defer conns.Unlock()

	for idx, conn := range conns.sshConnections {
		if conn.GetID() == connectionID {
			lastIdx := len(conns.sshConnections) - 1
			conns.sshConnections[idx] = conns.sshConnections[lastIdx]
			conns.sshConnections[lastIdx] = nil
			conns.sshConnections = conns.sshConnections[:lastIdx]
			logger.Debug(logSender, conn.GetID(), "ssh connection removed, num open ssh connections: %v", lastIdx)
			return
		}
	}
	logger.Warn(logSender, "", "ssh connection to remove with id %#v not found!", connectionID)
}

func (conns *ActiveConnections) checkIdles() {
	conns.RLock()

	for _, sshConn := range conns.sshConnections {
		idleTime := time.Since(sshConn.GetLastActivity())
		if idleTime > Config.idleTimeoutAsDuration {
			// we close an ssh connection only if it has no active connections associated
			idToMatch := fmt.Sprintf("_%v_", sshConn.GetID())
			toClose := true
			for _, conn := range conns.connections {
				if strings.Contains(conn.GetID(), idToMatch) {
					toClose = false
					break
				}
			}
			if toClose {
				defer func(c *SSHConnection) {
					err := c.Close()
					logger.Debug(logSender, c.GetID(), "close idle SSH connection, idle time: %v, close err: %v",
						time.Since(c.GetLastActivity()), err)
				}(sshConn)
			}
		}
	}

	for _, c := range conns.connections {
		idleTime := time.Since(c.GetLastActivity())
		isUnauthenticatedFTPUser := (c.GetProtocol() == ProtocolFTP && c.GetUsername() == "")

		if idleTime > Config.idleTimeoutAsDuration || (isUnauthenticatedFTPUser && idleTime > Config.idleLoginTimeout) {
			defer func(conn ActiveConnection, isFTPNoAuth bool) {
				err := conn.Disconnect()
				logger.Debug(conn.GetProtocol(), conn.GetID(), "close idle connection, idle time: %v, username: %#v close err: %v",
					time.Since(conn.GetLastActivity()), conn.GetUsername(), err)
				if isFTPNoAuth {
					ip := utils.GetIPFromRemoteAddress(c.GetRemoteAddress())
					logger.ConnectionFailedLog("", ip, dataprovider.LoginMethodNoAuthTryed, c.GetProtocol(), "client idle")
					metrics.AddNoAuthTryed()
					AddDefenderEvent(ip, HostEventNoLoginTried)
					dataprovider.ExecutePostLoginHook(&dataprovider.User{}, dataprovider.LoginMethodNoAuthTryed, ip, c.GetProtocol(),
						dataprovider.ErrNoAuthTryed)
				}
			}(c, isUnauthenticatedFTPUser)
		}
	}

	conns.RUnlock()
}

// IsNewConnectionAllowed returns false if the maximum number of concurrent allowed connections is exceeded
func (conns *ActiveConnections) IsNewConnectionAllowed() bool {
	if Config.MaxTotalConnections == 0 {
		return true
	}

	conns.RLock()
	defer conns.RUnlock()

	return len(conns.connections) < Config.MaxTotalConnections
}

// GetStats returns stats for active connections
func (conns *ActiveConnections) GetStats() []*ConnectionStatus {
	conns.RLock()
	defer conns.RUnlock()

	stats := make([]*ConnectionStatus, 0, len(conns.connections))
	for _, c := range conns.connections {
		stat := &ConnectionStatus{
			Username:       c.GetUsername(),
			ConnectionID:   c.GetID(),
			ClientVersion:  c.GetClientVersion(),
			RemoteAddress:  c.GetRemoteAddress(),
			ConnectionTime: utils.GetTimeAsMsSinceEpoch(c.GetConnectionTime()),
			LastActivity:   utils.GetTimeAsMsSinceEpoch(c.GetLastActivity()),
			Protocol:       c.GetProtocol(),
			Command:        c.GetCommand(),
			Transfers:      c.GetTransfers(),
		}
		stats = append(stats, stat)
	}
	return stats
}

// ConnectionStatus defines the status for an active connection
type ConnectionStatus struct {
	// Logged in username
	Username string `json:"username"`
	// Unique identifier for the connection
	ConnectionID string `json:"connection_id"`
	// client's version string
	ClientVersion string `json:"client_version,omitempty"`
	// Remote address for this connection
	RemoteAddress string `json:"remote_address"`
	// Connection time as unix timestamp in milliseconds
	ConnectionTime int64 `json:"connection_time"`
	// Last activity as unix timestamp in milliseconds
	LastActivity int64 `json:"last_activity"`
	// Protocol for this connection
	Protocol string `json:"protocol"`
	// active uploads/downloads
	Transfers []ConnectionTransfer `json:"active_transfers,omitempty"`
	// SSH command or WebDAV method
	Command string `json:"command,omitempty"`
}

// GetConnectionDuration returns the connection duration as string
func (c *ConnectionStatus) GetConnectionDuration() string {
	elapsed := time.Since(utils.GetTimeFromMsecSinceEpoch(c.ConnectionTime))
	return utils.GetDurationAsString(elapsed)
}

// GetConnectionInfo returns connection info.
// Protocol, client version and remote address are returned.
func (c *ConnectionStatus) GetConnectionInfo() string {
	var result strings.Builder

	result.WriteString(fmt.Sprintf("%v. Client: %#v From: %#v", c.Protocol, c.ClientVersion, c.RemoteAddress))

	if c.Command == "" {
		return result.String()
	}

	switch c.Protocol {
	case ProtocolSSH, ProtocolFTP:
		result.WriteString(fmt.Sprintf(". Command: %#v", c.Command))
	case ProtocolWebDAV:
		result.WriteString(fmt.Sprintf(". Method: %#v", c.Command))
	}

	return result.String()
}

// GetTransfersAsString returns the active transfers as string
func (c *ConnectionStatus) GetTransfersAsString() string {
	result := ""
	for _, t := range c.Transfers {
		if result != "" {
			result += ". "
		}
		result += t.getConnectionTransferAsString()
	}
	return result
}

// ActiveQuotaScan defines an active quota scan for a user home dir
type ActiveQuotaScan struct {
	// Username to which the quota scan refers
	Username string `json:"username"`
	// quota scan start time as unix timestamp in milliseconds
	StartTime int64 `json:"start_time"`
}

// ActiveVirtualFolderQuotaScan defines an active quota scan for a virtual folder
type ActiveVirtualFolderQuotaScan struct {
	// folder name to which the quota scan refers
	Name string `json:"name"`
	// quota scan start time as unix timestamp in milliseconds
	StartTime int64 `json:"start_time"`
}

// ActiveScans holds the active quota scans
type ActiveScans struct {
	sync.RWMutex
	UserHomeScans []ActiveQuotaScan
	FolderScans   []ActiveVirtualFolderQuotaScan
}

// GetUsersQuotaScans returns the active quota scans for users home directories
func (s *ActiveScans) GetUsersQuotaScans() []ActiveQuotaScan {
	s.RLock()
	defer s.RUnlock()

	scans := make([]ActiveQuotaScan, len(s.UserHomeScans))
	copy(scans, s.UserHomeScans)
	return scans
}

// AddUserQuotaScan adds a user to the ones with active quota scans.
// Returns false if the user has a quota scan already running
func (s *ActiveScans) AddUserQuotaScan(username string) bool {
	s.Lock()
	defer s.Unlock()

	for _, scan := range s.UserHomeScans {
		if scan.Username == username {
			return false
		}
	}
	s.UserHomeScans = append(s.UserHomeScans, ActiveQuotaScan{
		Username:  username,
		StartTime: utils.GetTimeAsMsSinceEpoch(time.Now()),
	})
	return true
}

// RemoveUserQuotaScan removes a user from the ones with active quota scans.
// Returns false if the user has no active quota scans
func (s *ActiveScans) RemoveUserQuotaScan(username string) bool {
	s.Lock()
	defer s.Unlock()

	indexToRemove := -1
	for i, scan := range s.UserHomeScans {
		if scan.Username == username {
			indexToRemove = i
			break
		}
	}
	if indexToRemove >= 0 {
		s.UserHomeScans[indexToRemove] = s.UserHomeScans[len(s.UserHomeScans)-1]
		s.UserHomeScans = s.UserHomeScans[:len(s.UserHomeScans)-1]
		return true
	}
	return false
}

// GetVFoldersQuotaScans returns the active quota scans for virtual folders
func (s *ActiveScans) GetVFoldersQuotaScans() []ActiveVirtualFolderQuotaScan {
	s.RLock()
	defer s.RUnlock()
	scans := make([]ActiveVirtualFolderQuotaScan, len(s.FolderScans))
	copy(scans, s.FolderScans)
	return scans
}

// AddVFolderQuotaScan adds a virtual folder to the ones with active quota scans.
// Returns false if the folder has a quota scan already running
func (s *ActiveScans) AddVFolderQuotaScan(folderName string) bool {
	s.Lock()
	defer s.Unlock()

	for _, scan := range s.FolderScans {
		if scan.Name == folderName {
			return false
		}
	}
	s.FolderScans = append(s.FolderScans, ActiveVirtualFolderQuotaScan{
		Name:      folderName,
		StartTime: utils.GetTimeAsMsSinceEpoch(time.Now()),
	})
	return true
}

// RemoveVFolderQuotaScan removes a folder from the ones with active quota scans.
// Returns false if the folder has no active quota scans
func (s *ActiveScans) RemoveVFolderQuotaScan(folderName string) bool {
	s.Lock()
	defer s.Unlock()

	indexToRemove := -1
	for i, scan := range s.FolderScans {
		if scan.Name == folderName {
			indexToRemove = i
			break
		}
	}
	if indexToRemove >= 0 {
		s.FolderScans[indexToRemove] = s.FolderScans[len(s.FolderScans)-1]
		s.FolderScans = s.FolderScans[:len(s.FolderScans)-1]
		return true
	}
	return false
}
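The Add*/Remove* pairs on ActiveScans act as a try-lock around a scan: Add returns false when a scan is already in flight, and the caller runs the scan only after a successful Add. A usage sketch; the scanUserQuota function and its body are hypothetical:

package main

import (
	"fmt"

	"github.com/drakkan/sftpgo/common"
)

// scanUserQuota shows the intended AddUserQuotaScan/RemoveUserQuotaScan pairing.
func scanUserQuota(username string) error {
	if !common.QuotaScans.AddUserQuotaScan(username) {
		return fmt.Errorf("a quota scan is already running for %v", username)
	}
	defer common.QuotaScans.RemoveUserQuotaScan(username)
	// walk the user's home directory and update the used quota here (omitted)
	return nil
}

func main() {
	if err := scanUserQuota("someuser"); err != nil {
		fmt.Println(err)
	}
}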
652 common/common_test.go Normal file
@@ -0,0 +1,652 @@
package common

import (
	"fmt"
	"net"
	"net/http"
	"os"
	"os/exec"
	"path/filepath"
	"runtime"
	"strings"
	"sync/atomic"
	"testing"
	"time"

	"github.com/rs/zerolog"
	"github.com/spf13/viper"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"

	"github.com/drakkan/sftpgo/dataprovider"
	"github.com/drakkan/sftpgo/httpclient"
	"github.com/drakkan/sftpgo/kms"
	"github.com/drakkan/sftpgo/logger"
	"github.com/drakkan/sftpgo/utils"
	"github.com/drakkan/sftpgo/vfs"
)

const (
	logSenderTest    = "common_test"
	httpAddr         = "127.0.0.1:9999"
	httpProxyAddr    = "127.0.0.1:7777"
	configDir        = ".."
	osWindows        = "windows"
	userTestUsername = "common_test_username"
	userTestPwd      = "common_test_pwd"
)

type providerConf struct {
	Config dataprovider.Config `json:"data_provider" mapstructure:"data_provider"`
}

type fakeConnection struct {
	*BaseConnection
	command string
}

func (c *fakeConnection) AddUser(user dataprovider.User) error {
	fs, err := user.GetFilesystem(c.GetID())
	if err != nil {
		return err
	}
	c.BaseConnection.User = user
	c.BaseConnection.Fs = fs
	return nil
}

func (c *fakeConnection) Disconnect() error {
	Connections.Remove(c.GetID())
	return nil
}

func (c *fakeConnection) GetClientVersion() string {
	return ""
}

func (c *fakeConnection) GetCommand() string {
	return c.command
}

func (c *fakeConnection) GetRemoteAddress() string {
	return ""
}

type customNetConn struct {
	net.Conn
	id       string
	isClosed bool
}

func (c *customNetConn) Close() error {
	Connections.RemoveSSHConnection(c.id)
	c.isClosed = true
	return c.Conn.Close()
}

func TestMain(m *testing.M) {
	logfilePath := "common_test.log"
	logger.InitLogger(logfilePath, 5, 1, 28, false, zerolog.DebugLevel)

	viper.SetEnvPrefix("sftpgo")
	replacer := strings.NewReplacer(".", "__")
	viper.SetEnvKeyReplacer(replacer)
	viper.SetConfigName("sftpgo")
	viper.AutomaticEnv()
	viper.AllowEmptyEnv(true)

	driver, err := initializeDataprovider(-1)
	if err != nil {
		logger.WarnToConsole("error initializing data provider: %v", err)
		os.Exit(1)
	}
	logger.InfoToConsole("Starting COMMON tests, provider: %v", driver)
	err = Initialize(Configuration{})
	if err != nil {
		logger.WarnToConsole("error initializing common: %v", err)
		os.Exit(1)
	}
	httpConfig := httpclient.Config{
		Timeout: 5,
	}
	httpConfig.Initialize(configDir) //nolint:errcheck

	go func() {
		// start a test HTTP server to receive action notifications
		http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprintf(w, "OK\n")
		})
		http.HandleFunc("/404", func(w http.ResponseWriter, r *http.Request) {
			w.WriteHeader(http.StatusNotFound)
			fmt.Fprintf(w, "Not found\n")
		})
		if err := http.ListenAndServe(httpAddr, nil); err != nil {
			logger.ErrorToConsole("could not start HTTP notification server: %v", err)
			os.Exit(1)
		}
	}()

	go func() {
		Config.ProxyProtocol = 2
		listener, err := net.Listen("tcp", httpProxyAddr)
		if err != nil {
			logger.ErrorToConsole("error creating listener for proxy protocol server: %v", err)
			os.Exit(1)
		}
		proxyListener, err := Config.GetProxyListener(listener)
		if err != nil {
			logger.ErrorToConsole("error creating proxy protocol listener: %v", err)
			os.Exit(1)
		}
		Config.ProxyProtocol = 0

		s := &http.Server{}
		if err := s.Serve(proxyListener); err != nil {
			logger.ErrorToConsole("could not start HTTP proxy protocol server: %v", err)
			os.Exit(1)
		}
	}()

	waitTCPListening(httpAddr)
	waitTCPListening(httpProxyAddr)
	exitCode := m.Run()
	os.Remove(logfilePath) //nolint:errcheck
	os.Exit(exitCode)
}

func waitTCPListening(address string) {
	for {
		conn, err := net.Dial("tcp", address)
		if err != nil {
			logger.WarnToConsole("tcp server %v not listening: %v", address, err)
			time.Sleep(100 * time.Millisecond)
			continue
		}
		logger.InfoToConsole("tcp server %v now listening", address)
		conn.Close()
		break
	}
}

func initializeDataprovider(trackQuota int) (string, error) {
	configDir := ".."
	viper.AddConfigPath(configDir)
	if err := viper.ReadInConfig(); err != nil {
		return "", err
	}
	var cfg providerConf
	if err := viper.Unmarshal(&cfg); err != nil {
		return "", err
	}
	if trackQuota >= 0 && trackQuota <= 2 {
		cfg.Config.TrackQuota = trackQuota
	}
	return cfg.Config.Driver, dataprovider.Initialize(cfg.Config, configDir, true)
}

func closeDataprovider() error {
	return dataprovider.Close()
}

func TestSSHConnections(t *testing.T) {
	conn1, conn2 := net.Pipe()
	now := time.Now()
	sshConn1 := NewSSHConnection("id1", conn1)
	sshConn2 := NewSSHConnection("id2", conn2)
	sshConn3 := NewSSHConnection("id3", conn2)
	assert.Equal(t, "id1", sshConn1.GetID())
	assert.Equal(t, "id2", sshConn2.GetID())
	assert.Equal(t, "id3", sshConn3.GetID())
	sshConn1.UpdateLastActivity()
	assert.GreaterOrEqual(t, sshConn1.GetLastActivity().UnixNano(), now.UnixNano())
	Connections.AddSSHConnection(sshConn1)
	Connections.AddSSHConnection(sshConn2)
	Connections.AddSSHConnection(sshConn3)
	Connections.RLock()
	assert.Len(t, Connections.sshConnections, 3)
	Connections.RUnlock()
	Connections.RemoveSSHConnection(sshConn1.id)
	Connections.RLock()
	assert.Len(t, Connections.sshConnections, 2)
	assert.Equal(t, sshConn3.id, Connections.sshConnections[0].id)
	assert.Equal(t, sshConn2.id, Connections.sshConnections[1].id)
	Connections.RUnlock()
	Connections.RemoveSSHConnection(sshConn1.id)
	Connections.RLock()
	assert.Len(t, Connections.sshConnections, 2)
	assert.Equal(t, sshConn3.id, Connections.sshConnections[0].id)
	assert.Equal(t, sshConn2.id, Connections.sshConnections[1].id)
	Connections.RUnlock()
	Connections.RemoveSSHConnection(sshConn2.id)
	Connections.RLock()
	assert.Len(t, Connections.sshConnections, 1)
	assert.Equal(t, sshConn3.id, Connections.sshConnections[0].id)
	Connections.RUnlock()
	Connections.RemoveSSHConnection(sshConn3.id)
	Connections.RLock()
	assert.Len(t, Connections.sshConnections, 0)
	Connections.RUnlock()
	assert.NoError(t, sshConn1.Close())
	assert.NoError(t, sshConn2.Close())
	assert.NoError(t, sshConn3.Close())
}

func TestDefenderIntegration(t *testing.T) {
	// by default defender is nil
	configCopy := Config

	ip := "127.1.1.1"

	assert.Nil(t, ReloadDefender())

	AddDefenderEvent(ip, HostEventNoLoginTried)
	assert.False(t, IsBanned(ip))

	assert.Nil(t, GetDefenderBanTime(ip))
	assert.False(t, Unban(ip))
	assert.Equal(t, 0, GetDefenderScore(ip))

	Config.DefenderConfig = DefenderConfig{
		Enabled:          true,
		BanTime:          10,
		BanTimeIncrement: 50,
		Threshold:        0,
		ScoreInvalid:     2,
		ScoreValid:       1,
		ObservationTime:  15,
		EntriesSoftLimit: 100,
		EntriesHardLimit: 150,
	}
	err := Initialize(Config)
	assert.Error(t, err)
	Config.DefenderConfig.Threshold = 3
	err = Initialize(Config)
	assert.NoError(t, err)
	assert.Nil(t, ReloadDefender())

	AddDefenderEvent(ip, HostEventNoLoginTried)
	assert.False(t, IsBanned(ip))
	assert.Equal(t, 2, GetDefenderScore(ip))
	assert.False(t, Unban(ip))
	assert.Nil(t, GetDefenderBanTime(ip))

	AddDefenderEvent(ip, HostEventLoginFailed)
	assert.True(t, IsBanned(ip))
	assert.Equal(t, 0, GetDefenderScore(ip))
	assert.NotNil(t, GetDefenderBanTime(ip))
	assert.True(t, Unban(ip))
	assert.Nil(t, GetDefenderBanTime(ip))
	assert.False(t, Unban(ip))

	Config = configCopy
}

func TestMaxConnections(t *testing.T) {
	oldValue := Config.MaxTotalConnections
	Config.MaxTotalConnections = 1

	assert.True(t, Connections.IsNewConnectionAllowed())
	c := NewBaseConnection("id", ProtocolSFTP, dataprovider.User{}, nil)
	fakeConn := &fakeConnection{
		BaseConnection: c,
	}
	Connections.Add(fakeConn)
	assert.Len(t, Connections.GetStats(), 1)
	assert.False(t, Connections.IsNewConnectionAllowed())

	res := Connections.Close(fakeConn.GetID())
	assert.True(t, res)
	assert.Eventually(t, func() bool { return len(Connections.GetStats()) == 0 }, 300*time.Millisecond, 50*time.Millisecond)

	Config.MaxTotalConnections = oldValue
}

func TestIdleConnections(t *testing.T) {
	configCopy := Config

	Config.IdleTimeout = 1
	err := Initialize(Config)
	assert.NoError(t, err)

	conn1, conn2 := net.Pipe()
	customConn1 := &customNetConn{
		Conn: conn1,
		id:   "id1",
	}
	customConn2 := &customNetConn{
		Conn: conn2,
		id:   "id2",
	}
	sshConn1 := NewSSHConnection(customConn1.id, customConn1)
	sshConn2 := NewSSHConnection(customConn2.id, customConn2)

	username := "test_user"
	user := dataprovider.User{
		Username: username,
	}
	c := NewBaseConnection(sshConn1.id+"_1", ProtocolSFTP, user, nil)
	c.lastActivity = time.Now().Add(-24 * time.Hour).UnixNano()
	fakeConn := &fakeConnection{
		BaseConnection: c,
	}
	// both ssh connections are expired but they should get removed only
	// if there is no associated connection
	sshConn1.lastActivity = c.lastActivity
	sshConn2.lastActivity = c.lastActivity
	Connections.AddSSHConnection(sshConn1)
	Connections.Add(fakeConn)
	assert.Equal(t, Connections.GetActiveSessions(username), 1)
	c = NewBaseConnection(sshConn2.id+"_1", ProtocolSSH, user, nil)
	fakeConn = &fakeConnection{
		BaseConnection: c,
	}
	Connections.AddSSHConnection(sshConn2)
	Connections.Add(fakeConn)
	assert.Equal(t, Connections.GetActiveSessions(username), 2)

	cFTP := NewBaseConnection("id2", ProtocolFTP, dataprovider.User{}, nil)
	cFTP.lastActivity = time.Now().UnixNano()
	fakeConn = &fakeConnection{
		BaseConnection: cFTP,
	}
	Connections.Add(fakeConn)
	assert.Equal(t, Connections.GetActiveSessions(username), 2)
	assert.Len(t, Connections.GetStats(), 3)
	Connections.RLock()
	assert.Len(t, Connections.sshConnections, 2)
	Connections.RUnlock()

	startIdleTimeoutTicker(100 * time.Millisecond)
	assert.Eventually(t, func() bool { return Connections.GetActiveSessions(username) == 1 }, 1*time.Second, 200*time.Millisecond)
	assert.Eventually(t, func() bool {
		Connections.RLock()
		defer Connections.RUnlock()
		return len(Connections.sshConnections) == 1
	}, 1*time.Second, 200*time.Millisecond)
	stopIdleTimeoutTicker()
	assert.Len(t, Connections.GetStats(), 2)
	c.lastActivity = time.Now().Add(-24 * time.Hour).UnixNano()
	cFTP.lastActivity = time.Now().Add(-24 * time.Hour).UnixNano()
	sshConn2.lastActivity = c.lastActivity
	startIdleTimeoutTicker(100 * time.Millisecond)
	assert.Eventually(t, func() bool { return len(Connections.GetStats()) == 0 }, 1*time.Second, 200*time.Millisecond)
	assert.Eventually(t, func() bool {
		Connections.RLock()
		defer Connections.RUnlock()
		return len(Connections.sshConnections) == 0
	}, 1*time.Second, 200*time.Millisecond)
	stopIdleTimeoutTicker()
	assert.True(t, customConn1.isClosed)
	assert.True(t, customConn2.isClosed)

	Config = configCopy
}

func TestCloseConnection(t *testing.T) {
	c := NewBaseConnection("id", ProtocolSFTP, dataprovider.User{}, nil)
	fakeConn := &fakeConnection{
		BaseConnection: c,
	}
	assert.True(t, Connections.IsNewConnectionAllowed())
	Connections.Add(fakeConn)
	assert.Len(t, Connections.GetStats(), 1)
	res := Connections.Close(fakeConn.GetID())
	assert.True(t, res)
	assert.Eventually(t, func() bool { return len(Connections.GetStats()) == 0 }, 300*time.Millisecond, 50*time.Millisecond)
	res = Connections.Close(fakeConn.GetID())
	assert.False(t, res)
	Connections.Remove(fakeConn.GetID())
}

func TestSwapConnection(t *testing.T) {
	c := NewBaseConnection("id", ProtocolFTP, dataprovider.User{}, nil)
	fakeConn := &fakeConnection{
		BaseConnection: c,
	}
	Connections.Add(fakeConn)
	if assert.Len(t, Connections.GetStats(), 1) {
		assert.Equal(t, "", Connections.GetStats()[0].Username)
	}
	c = NewBaseConnection("id", ProtocolFTP, dataprovider.User{
		Username: userTestUsername,
	}, nil)
	fakeConn = &fakeConnection{
		BaseConnection: c,
	}
	err := Connections.Swap(fakeConn)
	assert.NoError(t, err)
	if assert.Len(t, Connections.GetStats(), 1) {
		assert.Equal(t, userTestUsername, Connections.GetStats()[0].Username)
	}
	res := Connections.Close(fakeConn.GetID())
	assert.True(t, res)
	assert.Eventually(t, func() bool { return len(Connections.GetStats()) == 0 }, 300*time.Millisecond, 50*time.Millisecond)
	err = Connections.Swap(fakeConn)
	assert.Error(t, err)
}

func TestAtomicUpload(t *testing.T) {
	configCopy := Config

	Config.UploadMode = UploadModeStandard
	assert.False(t, Config.IsAtomicUploadEnabled())
	Config.UploadMode = UploadModeAtomic
	assert.True(t, Config.IsAtomicUploadEnabled())
	Config.UploadMode = UploadModeAtomicWithResume
	assert.True(t, Config.IsAtomicUploadEnabled())

	Config = configCopy
}

func TestConnectionStatus(t *testing.T) {
	username := "test_user"
	user := dataprovider.User{
		Username: username,
	}
	fs := vfs.NewOsFs("", os.TempDir(), nil)
	c1 := NewBaseConnection("id1", ProtocolSFTP, user, fs)
	fakeConn1 := &fakeConnection{
		BaseConnection: c1,
	}
	t1 := NewBaseTransfer(nil, c1, nil, "/p1", "/r1", TransferUpload, 0, 0, 0, true, fs)
	t1.BytesReceived = 123
	t2 := NewBaseTransfer(nil, c1, nil, "/p2", "/r2", TransferDownload, 0, 0, 0, true, fs)
	t2.BytesSent = 456
	c2 := NewBaseConnection("id2", ProtocolSSH, user, nil)
	fakeConn2 := &fakeConnection{
		BaseConnection: c2,
		command:        "md5sum",
	}
	c3 := NewBaseConnection("id3", ProtocolWebDAV, user, nil)
	fakeConn3 := &fakeConnection{
		BaseConnection: c3,
		command:        "PROPFIND",
	}
	t3 := NewBaseTransfer(nil, c3, nil, "/p2", "/r2", TransferDownload, 0, 0, 0, true, fs)
	Connections.Add(fakeConn1)
	Connections.Add(fakeConn2)
	Connections.Add(fakeConn3)

	stats := Connections.GetStats()
|
||||
assert.Len(t, stats, 3)
|
||||
for _, stat := range stats {
|
||||
assert.Equal(t, stat.Username, username)
|
||||
assert.True(t, strings.HasPrefix(stat.GetConnectionInfo(), stat.Protocol))
|
||||
assert.True(t, strings.HasPrefix(stat.GetConnectionDuration(), "00:"))
|
||||
if stat.ConnectionID == "SFTP_id1" {
|
||||
assert.Len(t, stat.Transfers, 2)
|
||||
assert.Greater(t, len(stat.GetTransfersAsString()), 0)
|
||||
for _, tr := range stat.Transfers {
|
||||
if tr.OperationType == operationDownload {
|
||||
assert.True(t, strings.HasPrefix(tr.getConnectionTransferAsString(), "DL"))
|
||||
} else if tr.OperationType == operationUpload {
|
||||
assert.True(t, strings.HasPrefix(tr.getConnectionTransferAsString(), "UL"))
|
||||
}
|
||||
}
|
||||
} else if stat.ConnectionID == "DAV_id3" {
|
||||
assert.Len(t, stat.Transfers, 1)
|
||||
assert.Greater(t, len(stat.GetTransfersAsString()), 0)
|
||||
} else {
|
||||
assert.Equal(t, 0, len(stat.GetTransfersAsString()))
|
||||
}
|
||||
}
|
||||
|
||||
err := t1.Close()
|
||||
assert.NoError(t, err)
|
||||
err = t2.Close()
|
||||
assert.NoError(t, err)
|
||||
|
||||
err = fakeConn3.SignalTransfersAbort()
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, int32(1), atomic.LoadInt32(&t3.AbortTransfer))
|
||||
err = t3.Close()
|
||||
assert.NoError(t, err)
|
||||
err = fakeConn3.SignalTransfersAbort()
|
||||
assert.Error(t, err)
|
||||
|
||||
Connections.Remove(fakeConn1.GetID())
|
||||
stats = Connections.GetStats()
|
||||
assert.Len(t, stats, 2)
|
||||
assert.Equal(t, fakeConn3.GetID(), stats[0].ConnectionID)
|
||||
assert.Equal(t, fakeConn2.GetID(), stats[1].ConnectionID)
|
||||
Connections.Remove(fakeConn2.GetID())
|
||||
stats = Connections.GetStats()
|
||||
assert.Len(t, stats, 1)
|
||||
assert.Equal(t, fakeConn3.GetID(), stats[0].ConnectionID)
|
||||
Connections.Remove(fakeConn3.GetID())
|
||||
stats = Connections.GetStats()
|
||||
assert.Len(t, stats, 0)
|
||||
}
|
||||
|
||||
func TestQuotaScans(t *testing.T) {
|
||||
username := "username"
|
||||
assert.True(t, QuotaScans.AddUserQuotaScan(username))
|
||||
assert.False(t, QuotaScans.AddUserQuotaScan(username))
|
||||
if assert.Len(t, QuotaScans.GetUsersQuotaScans(), 1) {
|
||||
assert.Equal(t, QuotaScans.GetUsersQuotaScans()[0].Username, username)
|
||||
}
|
||||
|
||||
assert.True(t, QuotaScans.RemoveUserQuotaScan(username))
|
||||
assert.False(t, QuotaScans.RemoveUserQuotaScan(username))
|
||||
assert.Len(t, QuotaScans.GetUsersQuotaScans(), 0)
|
||||
|
||||
folderName := "folder"
|
||||
assert.True(t, QuotaScans.AddVFolderQuotaScan(folderName))
|
||||
assert.False(t, QuotaScans.AddVFolderQuotaScan(folderName))
|
||||
if assert.Len(t, QuotaScans.GetVFoldersQuotaScans(), 1) {
|
||||
assert.Equal(t, QuotaScans.GetVFoldersQuotaScans()[0].Name, folderName)
|
||||
}
|
||||
|
||||
assert.True(t, QuotaScans.RemoveVFolderQuotaScan(folderName))
|
||||
assert.False(t, QuotaScans.RemoveVFolderQuotaScan(folderName))
|
||||
assert.Len(t, QuotaScans.GetVFoldersQuotaScans(), 0)
|
||||
}
|
||||
|
||||
func TestProxyProtocolVersion(t *testing.T) {
|
||||
c := Configuration{
|
||||
ProxyProtocol: 1,
|
||||
}
|
||||
proxyListener, err := c.GetProxyListener(nil)
|
||||
assert.NoError(t, err)
|
||||
assert.Nil(t, proxyListener.Policy)
|
||||
|
||||
c.ProxyProtocol = 2
|
||||
proxyListener, err = c.GetProxyListener(nil)
|
||||
assert.NoError(t, err)
|
||||
assert.NotNil(t, proxyListener.Policy)
|
||||
|
||||
c.ProxyProtocol = 1
|
||||
c.ProxyAllowed = []string{"invalid"}
|
||||
_, err = c.GetProxyListener(nil)
|
||||
assert.Error(t, err)
|
||||
|
||||
c.ProxyProtocol = 2
|
||||
_, err = c.GetProxyListener(nil)
|
||||
assert.Error(t, err)
|
||||
}
|
||||
|
||||
func TestProxyProtocol(t *testing.T) {
|
||||
httpClient := httpclient.GetHTTPClient()
|
||||
resp, err := httpClient.Get(fmt.Sprintf("http://%v", httpProxyAddr))
|
||||
if assert.NoError(t, err) {
|
||||
defer resp.Body.Close()
|
||||
assert.Equal(t, http.StatusBadRequest, resp.StatusCode)
|
||||
}
|
||||
}
|
||||
|
||||
func TestPostConnectHook(t *testing.T) {
|
||||
Config.PostConnectHook = ""
|
||||
|
||||
ipAddr := "127.0.0.1"
|
||||
|
||||
assert.NoError(t, Config.ExecutePostConnectHook(ipAddr, ProtocolFTP))
|
||||
|
||||
Config.PostConnectHook = "http://foo\x7f.com/"
|
||||
assert.Error(t, Config.ExecutePostConnectHook(ipAddr, ProtocolSFTP))
|
||||
|
||||
Config.PostConnectHook = "http://invalid:1234/"
|
||||
assert.Error(t, Config.ExecutePostConnectHook(ipAddr, ProtocolSFTP))
|
||||
|
||||
Config.PostConnectHook = fmt.Sprintf("http://%v/404", httpAddr)
|
||||
assert.Error(t, Config.ExecutePostConnectHook(ipAddr, ProtocolFTP))
|
||||
|
||||
Config.PostConnectHook = fmt.Sprintf("http://%v", httpAddr)
|
||||
assert.NoError(t, Config.ExecutePostConnectHook(ipAddr, ProtocolFTP))
|
||||
|
||||
Config.PostConnectHook = "invalid"
|
||||
assert.Error(t, Config.ExecutePostConnectHook(ipAddr, ProtocolFTP))
|
||||
|
||||
if runtime.GOOS == osWindows {
|
||||
Config.PostConnectHook = "C:\\bad\\command"
|
||||
assert.Error(t, Config.ExecutePostConnectHook(ipAddr, ProtocolSFTP))
|
||||
} else {
|
||||
Config.PostConnectHook = "/invalid/path"
|
||||
assert.Error(t, Config.ExecutePostConnectHook(ipAddr, ProtocolSFTP))
|
||||
|
||||
hookCmd, err := exec.LookPath("true")
|
||||
assert.NoError(t, err)
|
||||
Config.PostConnectHook = hookCmd
|
||||
assert.NoError(t, Config.ExecutePostConnectHook(ipAddr, ProtocolSFTP))
|
||||
}
|
||||
|
||||
Config.PostConnectHook = ""
|
||||
}
|
||||
|
||||
func TestCryptoConvertFileInfo(t *testing.T) {
|
||||
name := "name"
|
||||
fs, err := vfs.NewCryptFs("connID1", os.TempDir(), vfs.CryptFsConfig{Passphrase: kms.NewPlainSecret("secret")})
|
||||
require.NoError(t, err)
|
||||
cryptFs := fs.(*vfs.CryptFs)
|
||||
info := vfs.NewFileInfo(name, true, 48, time.Now(), false)
|
||||
assert.Equal(t, info, cryptFs.ConvertFileInfo(info))
|
||||
info = vfs.NewFileInfo(name, false, 48, time.Now(), false)
|
||||
assert.NotEqual(t, info.Size(), cryptFs.ConvertFileInfo(info).Size())
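    // inferred from the assertions in this test: the encrypted file stores a
    // fixed header, so the size reported to clients never matches the size on
    // disk, and stored sizes of 33 bytes or less map to an empty plaintext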
    info = vfs.NewFileInfo(name, false, 33, time.Now(), false)
    assert.Equal(t, int64(0), cryptFs.ConvertFileInfo(info).Size())
    info = vfs.NewFileInfo(name, false, 1, time.Now(), false)
    assert.Equal(t, int64(0), cryptFs.ConvertFileInfo(info).Size())
}

func TestFolderCopy(t *testing.T) {
    folder := vfs.BaseVirtualFolder{
        ID:              1,
        Name:            "name",
        MappedPath:      filepath.Clean(os.TempDir()),
        UsedQuotaSize:   4096,
        UsedQuotaFiles:  2,
        LastQuotaUpdate: utils.GetTimeAsMsSinceEpoch(time.Now()),
        Users:           []string{"user1", "user2"},
    }
    folderCopy := folder.GetACopy()
    folder.ID = 2
    folder.Users = []string{"user3"}
    require.Len(t, folderCopy.Users, 2)
    require.True(t, utils.IsStringInSlice("user1", folderCopy.Users))
    require.True(t, utils.IsStringInSlice("user2", folderCopy.Users))
    require.Equal(t, int64(1), folderCopy.ID)
    require.Equal(t, folder.Name, folderCopy.Name)
    require.Equal(t, folder.MappedPath, folderCopy.MappedPath)
    require.Equal(t, folder.UsedQuotaSize, folderCopy.UsedQuotaSize)
    require.Equal(t, folder.UsedQuotaFiles, folderCopy.UsedQuotaFiles)
    require.Equal(t, folder.LastQuotaUpdate, folderCopy.LastQuotaUpdate)
}

common/connection.go (new file, 1009 lines; diff suppressed because it is too large)
common/connection_test.go (new file, 1267 lines; diff suppressed because it is too large)

common/defender.go (new file, 472 lines)
@@ -0,0 +1,472 @@
package common

import (
    "encoding/json"
    "fmt"
    "io/ioutil"
    "net"
    "os"
    "sort"
    "sync"
    "time"

    "github.com/yl2chen/cidranger"

    "github.com/drakkan/sftpgo/logger"
    "github.com/drakkan/sftpgo/utils"
)

// HostEvent is the enumerable for the supported host events
type HostEvent int

// Supported host events
const (
    HostEventLoginFailed HostEvent = iota
    HostEventUserNotFound
    HostEventNoLoginTried
)

// Defender defines the interface that a defender must implement
type Defender interface {
    AddEvent(ip string, event HostEvent)
    IsBanned(ip string) bool
    GetBanTime(ip string) *time.Time
    GetScore(ip string) int
    Unban(ip string) bool
    Reload() error
}

// DefenderConfig defines the "defender" configuration
type DefenderConfig struct {
    // Set to true to enable the defender
    Enabled bool `json:"enabled" mapstructure:"enabled"`
    // BanTime is the number of minutes that a host is banned
    BanTime int `json:"ban_time" mapstructure:"ban_time"`
    // Percentage increase of the ban time if a banned host tries to connect again
    BanTimeIncrement int `json:"ban_time_increment" mapstructure:"ban_time_increment"`
    // Threshold value for banning a client
    Threshold int `json:"threshold" mapstructure:"threshold"`
    // Score for invalid login attempts, e.g. non-existent user accounts or
    // clients disconnected for inactivity without authentication attempts
    ScoreInvalid int `json:"score_invalid" mapstructure:"score_invalid"`
    // Score for valid login attempts, e.g. user accounts that exist
    ScoreValid int `json:"score_valid" mapstructure:"score_valid"`
    // Defines the time window, in minutes, for tracking client errors.
    // A host is banned if it has exceeded the defined threshold during
    // the last observation time minutes
    ObservationTime int `json:"observation_time" mapstructure:"observation_time"`
    // The number of banned IPs and host scores kept in memory will vary between the
    // soft and hard limit
    EntriesSoftLimit int `json:"entries_soft_limit" mapstructure:"entries_soft_limit"`
    EntriesHardLimit int `json:"entries_hard_limit" mapstructure:"entries_hard_limit"`
    // Path to a file containing a list of IP addresses and/or networks to never ban
    SafeListFile string `json:"safelist_file" mapstructure:"safelist_file"`
    // Path to a file containing a list of IP addresses and/or networks to always ban
    BlockListFile string `json:"blocklist_file" mapstructure:"blocklist_file"`
}
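// An illustrative configuration (example values, not project defaults): with
// these settings a host that accumulates a score of 15 within a 30 minute
// window is banned for 30 minutes, and each attempt while banned extends the
// ban by 30*50/100 = 15 minutes.
//
//    config := &DefenderConfig{
//        Enabled:          true,
//        BanTime:          30,
//        BanTimeIncrement: 50,
//        Threshold:        15,
//        ScoreInvalid:     2,
//        ScoreValid:       1,
//        ObservationTime:  30,
//        EntriesSoftLimit: 100,
//        EntriesHardLimit: 150,
//    }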

type memoryDefender struct {
    config *DefenderConfig
    sync.RWMutex
    // IP addresses of the clients trying to connect are stored inside hosts;
    // they are moved to banned once the threshold is reached.
    // A violation from a banned host will increase the ban time
    // based on the configured BanTimeIncrement
    hosts     map[string]hostScore // the key is the host IP
    banned    map[string]time.Time // the key is the host IP
    safeList  *HostList
    blockList *HostList
}

// HostListFile defines the structure expected for safe/block list files
type HostListFile struct {
    IPAddresses  []string `json:"addresses"`
    CIDRNetworks []string `json:"networks"`
}
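// An illustrative list file matching this structure (addresses from the
// documentation ranges, for example purposes only):
//
//    {
//        "addresses": ["192.0.2.10", "192.0.2.11"],
//        "networks": ["198.51.100.0/24"]
//    }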

// HostList defines the structure used to keep the HostListFile in memory
type HostList struct {
    IPAddresses map[string]bool
    Ranges      cidranger.Ranger
}

func (h *HostList) isListed(ip string) bool {
    if _, ok := h.IPAddresses[ip]; ok {
        return true
    }

    ok, err := h.Ranges.Contains(net.ParseIP(ip))
    if err != nil {
        return false
    }

    return ok
}

type hostEvent struct {
    dateTime time.Time
    score    int
}

type hostScore struct {
    TotalScore int
    Events     []hostEvent
}

// validate returns an error if the configuration is invalid
func (c *DefenderConfig) validate() error {
    if !c.Enabled {
        return nil
    }
    if c.ScoreInvalid >= c.Threshold {
        return fmt.Errorf("score_invalid %v must be lower than threshold %v", c.ScoreInvalid, c.Threshold)
    }
    if c.ScoreValid >= c.Threshold {
        return fmt.Errorf("score_valid %v must be lower than threshold %v", c.ScoreValid, c.Threshold)
    }
    if c.BanTime <= 0 {
        return fmt.Errorf("invalid ban_time %v", c.BanTime)
    }
    if c.BanTimeIncrement <= 0 {
        return fmt.Errorf("invalid ban_time_increment %v", c.BanTimeIncrement)
    }
    if c.ObservationTime <= 0 {
        return fmt.Errorf("invalid observation_time %v", c.ObservationTime)
    }
    if c.EntriesSoftLimit <= 0 {
        return fmt.Errorf("invalid entries_soft_limit %v", c.EntriesSoftLimit)
    }
    if c.EntriesHardLimit <= c.EntriesSoftLimit {
        return fmt.Errorf("invalid entries_hard_limit %v, must be > %v", c.EntriesHardLimit, c.EntriesSoftLimit)
    }

    return nil
}

func newInMemoryDefender(config *DefenderConfig) (Defender, error) {
    err := config.validate()
    if err != nil {
        return nil, err
    }
    defender := &memoryDefender{
        config: config,
        hosts:  make(map[string]hostScore),
        banned: make(map[string]time.Time),
    }

    if err := defender.Reload(); err != nil {
        return nil, err
    }

    return defender, nil
}

// Reload reloads block and safe lists
func (d *memoryDefender) Reload() error {
    blockList, err := loadHostListFromFile(d.config.BlockListFile)
    if err != nil {
        return err
    }

    d.Lock()
    d.blockList = blockList
    d.Unlock()

    safeList, err := loadHostListFromFile(d.config.SafeListFile)
    if err != nil {
        return err
    }

    d.Lock()
    d.safeList = safeList
    d.Unlock()

    return nil
}

// IsBanned returns true if the specified IP is banned
// and increases the ban time if the IP is found.
// This method must be called as soon as the client connects
func (d *memoryDefender) IsBanned(ip string) bool {
    d.RLock()

    if banTime, ok := d.banned[ip]; ok {
        if banTime.After(time.Now()) {
            increment := d.config.BanTime * d.config.BanTimeIncrement / 100
            if increment == 0 {
                increment++
            }
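            // worked example: with BanTime = 10 and BanTimeIncrement = 50 the
            // increment is 10*50/100 = 5, so each connection attempt from a
            // banned host extends the ban by 5 minutes; the guard above keeps
            // the increment from rounding down to zero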

            d.RUnlock()

            // we may store an earlier ban time if there are concurrent updates,
            // but this should not make much difference. We prefer to hold a read
            // lock for as long as possible for performance reasons: this method
            // is called each time a new client connects and it must be as fast
            // as possible
            d.Lock()
            d.banned[ip] = banTime.Add(time.Duration(increment) * time.Minute)
            d.Unlock()

            return true
        }
    }

    defer d.RUnlock()

    if d.blockList != nil && d.blockList.isListed(ip) {
        // permanent ban
        return true
    }

    return false
}

// Unban removes the specified IP address from the banned ones
func (d *memoryDefender) Unban(ip string) bool {
    d.Lock()
    defer d.Unlock()

    if _, ok := d.banned[ip]; ok {
        delete(d.banned, ip)
        return true
    }

    return false
}

// AddEvent adds an event for the given IP.
// This method must be called for clients not yet banned
func (d *memoryDefender) AddEvent(ip string, event HostEvent) {
    d.Lock()
    defer d.Unlock()

    if d.safeList != nil && d.safeList.isListed(ip) {
        return
    }

    var score int

    switch event {
    case HostEventLoginFailed:
        score = d.config.ScoreValid
    case HostEventUserNotFound, HostEventNoLoginTried:
        score = d.config.ScoreInvalid
    }
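    // worked example: with ScoreInvalid = 2 and Threshold = 5, three
    // HostEventNoLoginTried events within the observation window accumulate
    // a score of 6 and trigger a ban below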

    ev := hostEvent{
        dateTime: time.Now(),
        score:    score,
    }

    if hs, ok := d.hosts[ip]; ok {
        hs.Events = append(hs.Events, ev)
        hs.TotalScore = 0

        idx := 0
        for _, event := range hs.Events {
            if event.dateTime.Add(time.Duration(d.config.ObservationTime) * time.Minute).After(time.Now()) {
                hs.Events[idx] = event
                hs.TotalScore += event.score
                idx++
            }
        }

        hs.Events = hs.Events[:idx]
        if hs.TotalScore >= d.config.Threshold {
            d.banned[ip] = time.Now().Add(time.Duration(d.config.BanTime) * time.Minute)
            delete(d.hosts, ip)
            d.cleanupBanned()
        } else {
            d.hosts[ip] = hs
        }
    } else {
        d.hosts[ip] = hostScore{
            TotalScore: ev.score,
            Events:     []hostEvent{ev},
        }
        d.cleanupHosts()
    }
}

func (d *memoryDefender) countBanned() int {
    d.RLock()
    defer d.RUnlock()

    return len(d.banned)
}

func (d *memoryDefender) countHosts() int {
    d.RLock()
    defer d.RUnlock()

    return len(d.hosts)
}

// GetBanTime returns the ban time for the given IP or nil if the IP is not banned
func (d *memoryDefender) GetBanTime(ip string) *time.Time {
    d.RLock()
    defer d.RUnlock()

    if banTime, ok := d.banned[ip]; ok {
        return &banTime
    }

    return nil
}

// GetScore returns the score for the given IP
func (d *memoryDefender) GetScore(ip string) int {
    d.RLock()
    defer d.RUnlock()

    score := 0

    if hs, ok := d.hosts[ip]; ok {
        for _, event := range hs.Events {
            if event.dateTime.Add(time.Duration(d.config.ObservationTime) * time.Minute).After(time.Now()) {
                score += event.score
            }
        }
    }

    return score
}

func (d *memoryDefender) cleanupBanned() {
    if len(d.banned) > d.config.EntriesHardLimit {
        kvList := make(kvList, 0, len(d.banned))

        for k, v := range d.banned {
            if v.Before(time.Now()) {
                delete(d.banned, k)
            }

            kvList = append(kvList, kv{
                Key:   k,
                Value: v.UnixNano(),
            })
        }

        // we removed the expired IP addresses, if any, above; this could be enough
        numToRemove := len(d.banned) - d.config.EntriesSoftLimit

        if numToRemove <= 0 {
            return
        }

        sort.Sort(kvList)

        for idx, kv := range kvList {
            if idx >= numToRemove {
                break
            }

            delete(d.banned, kv.Key)
        }
    }
}
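// Eviction example for cleanupBanned: with EntriesSoftLimit = 2 and
// EntriesHardLimit = 3, a fourth ban triggers the cleanup above; expired bans
// are dropped first and then, if needed, the bans expiring soonest, until only
// 2 entries remain.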

func (d *memoryDefender) cleanupHosts() {
    if len(d.hosts) > d.config.EntriesHardLimit {
        kvList := make(kvList, 0, len(d.hosts))

        for k, v := range d.hosts {
            value := int64(0)
            if len(v.Events) > 0 {
                value = v.Events[len(v.Events)-1].dateTime.UnixNano()
            }
            kvList = append(kvList, kv{
                Key:   k,
                Value: value,
            })
        }

        sort.Sort(kvList)

        numToRemove := len(d.hosts) - d.config.EntriesSoftLimit

        for idx, kv := range kvList {
            if idx >= numToRemove {
                break
            }

            delete(d.hosts, kv.Key)
        }
    }
}

func loadHostListFromFile(name string) (*HostList, error) {
    if name == "" {
        return nil, nil
    }
    if !utils.IsFileInputValid(name) {
        return nil, fmt.Errorf("invalid host list file name %#v", name)
    }

    info, err := os.Stat(name)
    if err != nil {
        return nil, err
    }

    // opinionated max size, you should avoid big host lists
    if info.Size() > 1048576*5 { // 5MB
        return nil, fmt.Errorf("host list file %#v is too big: %v bytes", name, info.Size())
    }

    content, err := ioutil.ReadFile(name)
    if err != nil {
        return nil, fmt.Errorf("unable to read input file %#v: %v", name, err)
    }

    var hostList HostListFile

    err = json.Unmarshal(content, &hostList)
    if err != nil {
        return nil, err
    }

    if len(hostList.CIDRNetworks) > 0 || len(hostList.IPAddresses) > 0 {
        result := &HostList{
            IPAddresses: make(map[string]bool),
            Ranges:      cidranger.NewPCTrieRanger(),
        }
        ipCount := 0
        cdrCount := 0
        for _, ip := range hostList.IPAddresses {
            if net.ParseIP(ip) == nil {
                logger.Warn(logSender, "", "unable to parse IP %#v", ip)
                continue
            }
            result.IPAddresses[ip] = true
            ipCount++
        }
        for _, cidrNet := range hostList.CIDRNetworks {
            _, network, err := net.ParseCIDR(cidrNet)
            if err != nil {
                logger.Warn(logSender, "", "unable to parse CIDR network %#v", cidrNet)
                continue
            }
            err = result.Ranges.Insert(cidranger.NewBasicRangerEntry(*network))
            if err == nil {
                cdrCount++
            }
        }

        logger.Info(logSender, "", "list %#v loaded, ip addresses loaded: %v/%v networks loaded: %v/%v",
            name, ipCount, len(hostList.IPAddresses), cdrCount, len(hostList.CIDRNetworks))
        return result, nil
    }

    return nil, nil
}

type kv struct {
    Key   string
    Value int64
}

type kvList []kv

func (p kvList) Len() int           { return len(p) }
func (p kvList) Less(i, j int) bool { return p[i].Value < p[j].Value }
func (p kvList) Swap(i, j int)      { p[i], p[j] = p[j], p[i] }
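
A minimal usage sketch for the defender (hypothetical wiring, not part of this diff): check IsBanned as soon as a client connects, and record an event after each authentication attempt. The config values here are illustrative and pass validate().

    config := &DefenderConfig{
        Enabled:          true,
        BanTime:          10,
        BanTimeIncrement: 50,
        Threshold:        5,
        ScoreInvalid:     2,
        ScoreValid:       1,
        ObservationTime:  15,
        EntriesSoftLimit: 100,
        EntriesHardLimit: 150,
    }
    defender, err := newInMemoryDefender(config)
    if err != nil {
        panic(err)
    }
    ip := "203.0.113.7" // example client address
    if defender.IsBanned(ip) {
        return // drop the connection immediately
    }
    // record a failed login for an existing account
    defender.AddEvent(ip, HostEventLoginFailed)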

common/defender_test.go (new file, 523 lines)
@@ -0,0 +1,523 @@
package common

import (
    "crypto/rand"
    "encoding/json"
    "fmt"
    "io/ioutil"
    "net"
    "os"
    "path/filepath"
    "runtime"
    "testing"
    "time"

    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"
    "github.com/yl2chen/cidranger"
)

func TestBasicDefender(t *testing.T) {
    bl := HostListFile{
        IPAddresses:  []string{"172.16.1.1", "172.16.1.2"},
        CIDRNetworks: []string{"10.8.0.0/24"},
    }
    sl := HostListFile{
        IPAddresses:  []string{"172.16.1.3", "172.16.1.4"},
        CIDRNetworks: []string{"192.168.8.0/24"},
    }
    blFile := filepath.Join(os.TempDir(), "bl.json")
    slFile := filepath.Join(os.TempDir(), "sl.json")

    data, err := json.Marshal(bl)
    assert.NoError(t, err)

    err = ioutil.WriteFile(blFile, data, os.ModePerm)
    assert.NoError(t, err)

    data, err = json.Marshal(sl)
    assert.NoError(t, err)

    err = ioutil.WriteFile(slFile, data, os.ModePerm)
    assert.NoError(t, err)

    config := &DefenderConfig{
        Enabled:          true,
        BanTime:          10,
        BanTimeIncrement: 2,
        Threshold:        5,
        ScoreInvalid:     2,
        ScoreValid:       1,
        ObservationTime:  15,
        EntriesSoftLimit: 1,
        EntriesHardLimit: 2,
        SafeListFile:     "slFile",
        BlockListFile:    "blFile",
    }

    _, err = newInMemoryDefender(config)
    assert.Error(t, err)
    config.BlockListFile = blFile
    _, err = newInMemoryDefender(config)
    assert.Error(t, err)
    config.SafeListFile = slFile
    d, err := newInMemoryDefender(config)
    assert.NoError(t, err)

    defender := d.(*memoryDefender)
    assert.True(t, defender.IsBanned("172.16.1.1"))
    assert.False(t, defender.IsBanned("172.16.1.10"))
    assert.False(t, defender.IsBanned("10.8.2.3"))
    assert.True(t, defender.IsBanned("10.8.0.3"))
    assert.False(t, defender.IsBanned("invalid ip"))
    assert.Equal(t, 0, defender.countBanned())
    assert.Equal(t, 0, defender.countHosts())

    defender.AddEvent("172.16.1.4", HostEventLoginFailed)
    defender.AddEvent("192.168.8.4", HostEventUserNotFound)
    assert.Equal(t, 0, defender.countHosts())

    testIP := "12.34.56.78"
    defender.AddEvent(testIP, HostEventLoginFailed)
    assert.Equal(t, 1, defender.countHosts())
    assert.Equal(t, 0, defender.countBanned())
    assert.Equal(t, 1, defender.GetScore(testIP))
    assert.Nil(t, defender.GetBanTime(testIP))
    defender.AddEvent(testIP, HostEventNoLoginTried)
    assert.Equal(t, 1, defender.countHosts())
    assert.Equal(t, 0, defender.countBanned())
    assert.Equal(t, 3, defender.GetScore(testIP))
    defender.AddEvent(testIP, HostEventNoLoginTried)
    assert.Equal(t, 0, defender.countHosts())
    assert.Equal(t, 1, defender.countBanned())
    assert.Equal(t, 0, defender.GetScore(testIP))
    assert.NotNil(t, defender.GetBanTime(testIP))

    // now test the cleanup; testIP is already banned
    testIP1 := "12.34.56.79"
    testIP2 := "12.34.56.80"
    testIP3 := "12.34.56.81"

    defender.AddEvent(testIP1, HostEventNoLoginTried)
    defender.AddEvent(testIP2, HostEventNoLoginTried)
    assert.Equal(t, 2, defender.countHosts())
    time.Sleep(20 * time.Millisecond)
    defender.AddEvent(testIP3, HostEventNoLoginTried)
    // testIP1 and testIP2 should be removed
    assert.Equal(t, defender.config.EntriesSoftLimit, defender.countHosts())
    assert.Equal(t, 0, defender.GetScore(testIP1))
    assert.Equal(t, 0, defender.GetScore(testIP2))
    assert.Equal(t, 2, defender.GetScore(testIP3))

    defender.AddEvent(testIP3, HostEventNoLoginTried)
    defender.AddEvent(testIP3, HostEventNoLoginTried)
    // testIP3 is now banned
    assert.NotNil(t, defender.GetBanTime(testIP3))
    assert.Equal(t, 0, defender.countHosts())

    time.Sleep(20 * time.Millisecond)
    for i := 0; i < 3; i++ {
        defender.AddEvent(testIP1, HostEventNoLoginTried)
    }
    assert.Equal(t, 0, defender.countHosts())
    assert.Equal(t, config.EntriesSoftLimit, defender.countBanned())
    assert.Nil(t, defender.GetBanTime(testIP))
    assert.Nil(t, defender.GetBanTime(testIP3))
    assert.NotNil(t, defender.GetBanTime(testIP1))

    for i := 0; i < 3; i++ {
        defender.AddEvent(testIP, HostEventNoLoginTried)
        time.Sleep(10 * time.Millisecond)
        defender.AddEvent(testIP3, HostEventNoLoginTried)
    }
    assert.Equal(t, 0, defender.countHosts())
    assert.Equal(t, defender.config.EntriesSoftLimit, defender.countBanned())

    banTime := defender.GetBanTime(testIP3)
    if assert.NotNil(t, banTime) {
        assert.True(t, defender.IsBanned(testIP3))
        // the ban time should increase
        newBanTime := defender.GetBanTime(testIP3)
        assert.True(t, newBanTime.After(*banTime))
    }

    assert.True(t, defender.Unban(testIP3))
    assert.False(t, defender.Unban(testIP3))

    err = os.Remove(slFile)
    assert.NoError(t, err)
    err = os.Remove(blFile)
    assert.NoError(t, err)
}

func TestLoadHostListFromFile(t *testing.T) {
    _, err := loadHostListFromFile(".")
    assert.Error(t, err)

    hostsFilePath := filepath.Join(os.TempDir(), "hostfile")
    content := make([]byte, 1048576*6)
    _, err = rand.Read(content)
    assert.NoError(t, err)

    err = ioutil.WriteFile(hostsFilePath, content, os.ModePerm)
    assert.NoError(t, err)

    _, err = loadHostListFromFile(hostsFilePath)
    assert.Error(t, err)

    hl := HostListFile{
        IPAddresses:  []string{},
        CIDRNetworks: []string{},
    }

    asJSON, err := json.Marshal(hl)
    assert.NoError(t, err)
    err = ioutil.WriteFile(hostsFilePath, asJSON, os.ModePerm)
    assert.NoError(t, err)

    hostList, err := loadHostListFromFile(hostsFilePath)
    assert.NoError(t, err)
    assert.Nil(t, hostList)

    hl.IPAddresses = append(hl.IPAddresses, "invalidip")
    asJSON, err = json.Marshal(hl)
    assert.NoError(t, err)
    err = ioutil.WriteFile(hostsFilePath, asJSON, os.ModePerm)
    assert.NoError(t, err)

    hostList, err = loadHostListFromFile(hostsFilePath)
    assert.NoError(t, err)
    assert.Len(t, hostList.IPAddresses, 0)

    hl.IPAddresses = nil
    hl.CIDRNetworks = append(hl.CIDRNetworks, "invalid net")

    asJSON, err = json.Marshal(hl)
    assert.NoError(t, err)
    err = ioutil.WriteFile(hostsFilePath, asJSON, os.ModePerm)
    assert.NoError(t, err)

    hostList, err = loadHostListFromFile(hostsFilePath)
    assert.NoError(t, err)
    assert.NotNil(t, hostList)
    assert.Len(t, hostList.IPAddresses, 0)
    assert.Equal(t, 0, hostList.Ranges.Len())

    if runtime.GOOS != "windows" {
        err = os.Chmod(hostsFilePath, 0111)
        assert.NoError(t, err)

        _, err = loadHostListFromFile(hostsFilePath)
        assert.Error(t, err)

        err = os.Chmod(hostsFilePath, 0644)
        assert.NoError(t, err)
    }

    err = ioutil.WriteFile(hostsFilePath, []byte("non json content"), os.ModePerm)
    assert.NoError(t, err)
    _, err = loadHostListFromFile(hostsFilePath)
    assert.Error(t, err)

    err = os.Remove(hostsFilePath)
    assert.NoError(t, err)
}

func TestDefenderCleanup(t *testing.T) {
    d := memoryDefender{
        banned: make(map[string]time.Time),
        hosts:  make(map[string]hostScore),
        config: &DefenderConfig{
            ObservationTime:  1,
            EntriesSoftLimit: 2,
            EntriesHardLimit: 3,
        },
    }

    d.banned["1.1.1.1"] = time.Now().Add(-24 * time.Hour)
    d.banned["1.1.1.2"] = time.Now().Add(-24 * time.Hour)
    d.banned["1.1.1.3"] = time.Now().Add(-24 * time.Hour)
    d.banned["1.1.1.4"] = time.Now().Add(-24 * time.Hour)

    d.cleanupBanned()
    assert.Equal(t, 0, d.countBanned())

    d.banned["2.2.2.2"] = time.Now().Add(2 * time.Minute)
    d.banned["2.2.2.3"] = time.Now().Add(1 * time.Minute)
    d.banned["2.2.2.4"] = time.Now().Add(3 * time.Minute)
    d.banned["2.2.2.5"] = time.Now().Add(4 * time.Minute)

    d.cleanupBanned()
    assert.Equal(t, d.config.EntriesSoftLimit, d.countBanned())
    assert.Nil(t, d.GetBanTime("2.2.2.3"))

    d.hosts["3.3.3.3"] = hostScore{
        TotalScore: 0,
        Events: []hostEvent{
            {
                dateTime: time.Now().Add(-5 * time.Minute),
                score:    1,
            },
            {
                dateTime: time.Now().Add(-3 * time.Minute),
                score:    1,
            },
            {
                dateTime: time.Now(),
                score:    1,
            },
        },
    }
    d.hosts["3.3.3.4"] = hostScore{
        TotalScore: 1,
        Events: []hostEvent{
            {
                dateTime: time.Now().Add(-3 * time.Minute),
                score:    1,
            },
        },
    }
    d.hosts["3.3.3.5"] = hostScore{
        TotalScore: 1,
        Events: []hostEvent{
            {
                dateTime: time.Now().Add(-2 * time.Minute),
                score:    1,
            },
        },
    }
    d.hosts["3.3.3.6"] = hostScore{
        TotalScore: 1,
        Events: []hostEvent{
            {
                dateTime: time.Now().Add(-1 * time.Minute),
                score:    1,
            },
        },
    }

    assert.Equal(t, 1, d.GetScore("3.3.3.3"))

    d.cleanupHosts()
    assert.Equal(t, d.config.EntriesSoftLimit, d.countHosts())
    assert.Equal(t, 0, d.GetScore("3.3.3.4"))
}

func TestDefenderConfig(t *testing.T) {
    c := DefenderConfig{}
    err := c.validate()
    require.NoError(t, err)

    c.Enabled = true
    c.Threshold = 10
    c.ScoreInvalid = 10
    err = c.validate()
    require.Error(t, err)

    c.ScoreInvalid = 2
    c.ScoreValid = 10
    err = c.validate()
    require.Error(t, err)

    c.ScoreValid = 1
    c.BanTime = 0
    err = c.validate()
    require.Error(t, err)

    c.BanTime = 30
    c.BanTimeIncrement = 0
    err = c.validate()
    require.Error(t, err)

    c.BanTimeIncrement = 50
    c.ObservationTime = 0
    err = c.validate()
    require.Error(t, err)

    c.ObservationTime = 30
    err = c.validate()
    require.Error(t, err)

    c.EntriesSoftLimit = 10
    err = c.validate()
    require.Error(t, err)

    c.EntriesHardLimit = 10
    err = c.validate()
    require.Error(t, err)

    c.EntriesHardLimit = 20
    err = c.validate()
    require.NoError(t, err)
}

func BenchmarkDefenderBannedSearch(b *testing.B) {
    d := getDefenderForBench()

    ip, ipnet, err := net.ParseCIDR("10.8.0.0/12") // 1048574 ip addresses
    if err != nil {
        panic(err)
    }

    for ip := ip.Mask(ipnet.Mask); ipnet.Contains(ip); inc(ip) {
        d.banned[ip.String()] = time.Now().Add(10 * time.Minute)
    }

    b.ResetTimer()

    for i := 0; i < b.N; i++ {
        d.IsBanned("192.168.1.1")
    }
}

func BenchmarkCleanup(b *testing.B) {
    d := getDefenderForBench()

    ip, ipnet, err := net.ParseCIDR("192.168.4.0/24")
    if err != nil {
        panic(err)
    }

    b.ResetTimer()

    for i := 0; i < b.N; i++ {
        for ip := ip.Mask(ipnet.Mask); ipnet.Contains(ip); inc(ip) {
            d.AddEvent(ip.String(), HostEventLoginFailed)
            if d.countHosts() > d.config.EntriesHardLimit {
                panic("too many hosts")
            }
            if d.countBanned() > d.config.EntriesSoftLimit {
                panic("too many banned IPs")
            }
        }
    }
}

func BenchmarkDefenderBannedSearchWithBlockList(b *testing.B) {
    d := getDefenderForBench()

    d.blockList = &HostList{
        IPAddresses: make(map[string]bool),
        Ranges:      cidranger.NewPCTrieRanger(),
    }

    ip, ipnet, err := net.ParseCIDR("129.8.0.0/12") // 1048574 ip addresses
    if err != nil {
        panic(err)
    }

    for ip := ip.Mask(ipnet.Mask); ipnet.Contains(ip); inc(ip) {
        d.banned[ip.String()] = time.Now().Add(10 * time.Minute)
        d.blockList.IPAddresses[ip.String()] = true
    }

    for i := 0; i < 255; i++ {
        cidr := fmt.Sprintf("10.8.%v.1/24", i)
        _, network, _ := net.ParseCIDR(cidr)
        if err := d.blockList.Ranges.Insert(cidranger.NewBasicRangerEntry(*network)); err != nil {
            panic(err)
        }
    }

    b.ResetTimer()

    for i := 0; i < b.N; i++ {
        d.IsBanned("192.168.1.1")
    }
}

func BenchmarkHostListSearch(b *testing.B) {
    hostlist := &HostList{
        IPAddresses: make(map[string]bool),
        Ranges:      cidranger.NewPCTrieRanger(),
    }

    ip, ipnet, _ := net.ParseCIDR("172.16.0.0/16")

    for ip := ip.Mask(ipnet.Mask); ipnet.Contains(ip); inc(ip) {
        hostlist.IPAddresses[ip.String()] = true
    }

    for i := 0; i < 255; i++ {
        cidr := fmt.Sprintf("10.8.%v.1/24", i)
        _, network, _ := net.ParseCIDR(cidr)
        if err := hostlist.Ranges.Insert(cidranger.NewBasicRangerEntry(*network)); err != nil {
            panic(err)
        }
    }

    b.ResetTimer()

    for i := 0; i < b.N; i++ {
        if hostlist.isListed("192.167.1.2") {
            panic("should not be listed")
        }
    }
}

func BenchmarkCIDRanger(b *testing.B) {
    ranger := cidranger.NewPCTrieRanger()
    for i := 0; i < 255; i++ {
        cidr := fmt.Sprintf("192.168.%v.1/24", i)
        _, network, _ := net.ParseCIDR(cidr)
        if err := ranger.Insert(cidranger.NewBasicRangerEntry(*network)); err != nil {
            panic(err)
        }
    }

    ipToMatch := net.ParseIP("192.167.1.2")

    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        if _, err := ranger.Contains(ipToMatch); err != nil {
            panic(err)
        }
    }
}

func BenchmarkNetContains(b *testing.B) {
    var nets []*net.IPNet
    for i := 0; i < 255; i++ {
        cidr := fmt.Sprintf("192.168.%v.1/24", i)
        _, network, _ := net.ParseCIDR(cidr)
        nets = append(nets, network)
    }

    ipToMatch := net.ParseIP("192.167.1.1")

    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        for _, n := range nets {
            n.Contains(ipToMatch)
        }
    }
}

func getDefenderForBench() *memoryDefender {
    config := &DefenderConfig{
        Enabled:          true,
        BanTime:          30,
        BanTimeIncrement: 50,
        Threshold:        10,
        ScoreInvalid:     2,
        ScoreValid:       2,
        ObservationTime:  30,
        EntriesSoftLimit: 50,
        EntriesHardLimit: 100,
    }
    return &memoryDefender{
        config: config,
        hosts:  make(map[string]hostScore),
        banned: make(map[string]time.Time),
    }
}

func inc(ip net.IP) {
    for j := len(ip) - 1; j >= 0; j-- {
        ip[j]++
        if ip[j] > 0 {
            break
        }
    }
}
common/httpauth.go (new file, 134 lines)
@@ -0,0 +1,134 @@
package common

import (
    "encoding/csv"
    "os"
    "strings"
    "sync"

    "github.com/GehirnInc/crypt/apr1_crypt"
    "github.com/GehirnInc/crypt/md5_crypt"
    "golang.org/x/crypto/bcrypt"

    "github.com/drakkan/sftpgo/logger"
    "github.com/drakkan/sftpgo/utils"
)

const (
    // HTTPAuthenticationHeader defines the HTTP authentication header
    HTTPAuthenticationHeader = "WWW-Authenticate"
    md5CryptPwdPrefix        = "$1$"
    apr1CryptPwdPrefix       = "$apr1$"
)

var (
    bcryptPwdPrefixes = []string{"$2a$", "$2$", "$2x$", "$2y$", "$2b$"}
)

// HTTPAuthProvider defines the interface for HTTP auth providers
type HTTPAuthProvider interface {
    ValidateCredentials(username, password string) bool
    IsEnabled() bool
}

type basicAuthProvider struct {
    Path string
    sync.RWMutex
    Info  os.FileInfo
    Users map[string]string
}

// NewBasicAuthProvider returns an HTTPAuthProvider implementing Basic Auth
func NewBasicAuthProvider(authUserFile string) (HTTPAuthProvider, error) {
    basicAuthProvider := basicAuthProvider{
        Path:  authUserFile,
        Info:  nil,
        Users: make(map[string]string),
    }
    return &basicAuthProvider, basicAuthProvider.loadUsers()
}

func (p *basicAuthProvider) IsEnabled() bool {
    return p.Path != ""
}

func (p *basicAuthProvider) isReloadNeeded(info os.FileInfo) bool {
    p.RLock()
    defer p.RUnlock()

    return p.Info == nil || p.Info.ModTime() != info.ModTime() || p.Info.Size() != info.Size()
}

func (p *basicAuthProvider) loadUsers() error {
    if !p.IsEnabled() {
        return nil
    }
    info, err := os.Stat(p.Path)
    if err != nil {
        logger.Debug(logSender, "", "unable to stat basic auth users file: %v", err)
        return err
    }
    if p.isReloadNeeded(info) {
        r, err := os.Open(p.Path)
        if err != nil {
            logger.Debug(logSender, "", "unable to open basic auth users file: %v", err)
            return err
        }
        defer r.Close()
        reader := csv.NewReader(r)
        reader.Comma = ':'
        reader.Comment = '#'
        reader.TrimLeadingSpace = true
        records, err := reader.ReadAll()
        if err != nil {
            logger.Debug(logSender, "", "unable to parse basic auth users file: %v", err)
            return err
        }
        p.Lock()
        defer p.Unlock()

        p.Users = make(map[string]string)
        for _, record := range records {
            if len(record) == 2 {
                p.Users[record[0]] = record[1]
            }
        }
        logger.Debug(logSender, "", "number of users loaded for httpd basic auth: %v", len(p.Users))
        p.Info = info
    }
    return nil
}

func (p *basicAuthProvider) getHashedPassword(username string) (string, bool) {
    err := p.loadUsers()
    if err != nil {
        return "", false
    }
    p.RLock()
    defer p.RUnlock()

    pwd, ok := p.Users[username]
    return pwd, ok
}

// ValidateCredentials returns true if the credentials are valid
func (p *basicAuthProvider) ValidateCredentials(username, password string) bool {
    if hashedPwd, ok := p.getHashedPassword(username); ok {
        if utils.IsStringPrefixInSlice(hashedPwd, bcryptPwdPrefixes) {
            err := bcrypt.CompareHashAndPassword([]byte(hashedPwd), []byte(password))
            return err == nil
        }
        if strings.HasPrefix(hashedPwd, md5CryptPwdPrefix) {
            crypter := md5_crypt.New()
            err := crypter.Verify(hashedPwd, []byte(password))
            return err == nil
        }
        if strings.HasPrefix(hashedPwd, apr1CryptPwdPrefix) {
            crypter := apr1_crypt.New()
            err := crypter.Verify(hashedPwd, []byte(password))
            return err == nil
        }
    }

    return false
}
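
For reference, the users file parsed above is an htpasswd-style, colon-separated list; '#' starts a comment line, and bcrypt ($2y$... and the other prefixes listed in bcryptPwdPrefixes), md5-crypt ($1$...) and apr1-crypt ($apr1$...) hashes are accepted. An illustrative file (usernames are placeholders; the hash values reuse the fixtures from the test below):

    # one "username:hashed_password" record per line
    alice:$2y$05$bcHSED7aO1cfLto6ZdDBOOKzlwftslVhtpIkRhAtSa4GuLmk5mola
    bob:$apr1$gLnIkRIf$Xr/6aJfmIrihP4b2N2tcs/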

common/httpauth_test.go (new file, 72 lines)
@@ -0,0 +1,72 @@
package common

import (
    "io/ioutil"
    "os"
    "path/filepath"
    "runtime"
    "testing"

    "github.com/stretchr/testify/require"
)

func TestBasicAuth(t *testing.T) {
    httpAuth, err := NewBasicAuthProvider("")
    require.NoError(t, err)
    require.False(t, httpAuth.IsEnabled())

    _, err = NewBasicAuthProvider("missing path")
    require.Error(t, err)

    authUserFile := filepath.Join(os.TempDir(), "http_users.txt")
    authUserData := []byte("test1:$2y$05$bcHSED7aO1cfLto6ZdDBOOKzlwftslVhtpIkRhAtSa4GuLmk5mola\n")
    err = ioutil.WriteFile(authUserFile, authUserData, os.ModePerm)
    require.NoError(t, err)

    httpAuth, err = NewBasicAuthProvider(authUserFile)
    require.NoError(t, err)
    require.True(t, httpAuth.IsEnabled())
    require.False(t, httpAuth.ValidateCredentials("test1", "wrong1"))
    require.False(t, httpAuth.ValidateCredentials("test2", "password2"))
    require.True(t, httpAuth.ValidateCredentials("test1", "password1"))

    authUserData = append(authUserData, []byte("test2:$1$OtSSTL8b$bmaCqEksI1e7rnZSjsIDR1\n")...)
    err = ioutil.WriteFile(authUserFile, authUserData, os.ModePerm)
    require.NoError(t, err)
    require.False(t, httpAuth.ValidateCredentials("test2", "wrong2"))
    require.True(t, httpAuth.ValidateCredentials("test2", "password2"))

    authUserData = append(authUserData, []byte("test2:$apr1$gLnIkRIf$Xr/6aJfmIrihP4b2N2tcs/\n")...)
    err = ioutil.WriteFile(authUserFile, authUserData, os.ModePerm)
    require.NoError(t, err)
    require.False(t, httpAuth.ValidateCredentials("test2", "wrong2"))
    require.True(t, httpAuth.ValidateCredentials("test2", "password2"))

    authUserData = append(authUserData, []byte("test3:$apr1$gLnIkRIf$Xr/6aJfmIrihP4b2N2tcs/\n")...)
    err = ioutil.WriteFile(authUserFile, authUserData, os.ModePerm)
    require.NoError(t, err)
    require.False(t, httpAuth.ValidateCredentials("test3", "password3"))

    authUserData = append(authUserData, []byte("test4:$invalid$gLnIkRIf$Xr/6$aJfmIr$ihP4b2N2tcs/\n")...)
    err = ioutil.WriteFile(authUserFile, authUserData, os.ModePerm)
    require.NoError(t, err)
    require.False(t, httpAuth.ValidateCredentials("test4", "password3"))

    if runtime.GOOS != "windows" {
        authUserData = append(authUserData, []byte("test5:$apr1$gLnIkRIf$Xr/6aJfmIrihP4b2N2tcs/\n")...)
        err = ioutil.WriteFile(authUserFile, authUserData, os.ModePerm)
        require.NoError(t, err)
        err = os.Chmod(authUserFile, 0001)
        require.NoError(t, err)
        require.False(t, httpAuth.ValidateCredentials("test5", "password2"))
        err = os.Chmod(authUserFile, os.ModePerm)
        require.NoError(t, err)
    }
    authUserData = append(authUserData, []byte("\"foo\"bar\"\r\n")...)
    err = ioutil.WriteFile(authUserFile, authUserData, os.ModePerm)
    require.NoError(t, err)
    require.False(t, httpAuth.ValidateCredentials("test2", "password2"))

    err = os.Remove(authUserFile)
    require.NoError(t, err)
}
common/tlsutils.go (new file, 200 lines)
@@ -0,0 +1,200 @@
package common

import (
    "crypto/tls"
    "crypto/x509"
    "crypto/x509/pkix"
    "fmt"
    "io/ioutil"
    "path/filepath"
    "sync"
    "time"

    "github.com/drakkan/sftpgo/logger"
    "github.com/drakkan/sftpgo/utils"
)

// CertManager defines a TLS certificate manager
type CertManager struct {
    certPath  string
    keyPath   string
    configDir string
    logSender string
    sync.RWMutex
    caCertificates    []string
    caRevocationLists []string
    cert              *tls.Certificate
    rootCAs           *x509.CertPool
    crls              []*pkix.CertificateList
}

// Reload tries to reload certificate and CRLs
func (m *CertManager) Reload() error {
    errCrt := m.loadCertificate()
    errCRLs := m.LoadCRLs()

    if errCrt != nil {
        return errCrt
    }
    return errCRLs
}

// loadCertificate loads the configured x509 key pair
func (m *CertManager) loadCertificate() error {
    newCert, err := tls.LoadX509KeyPair(m.certPath, m.keyPath)
    if err != nil {
        logger.Warn(m.logSender, "", "unable to load X509 key pair, cert file %#v key file %#v error: %v",
            m.certPath, m.keyPath, err)
        return err
    }
    logger.Debug(m.logSender, "", "TLS certificate %#v successfully loaded", m.certPath)

    m.Lock()
    defer m.Unlock()

    m.cert = &newCert
    return nil
}

// GetCertificateFunc returns the loaded certificate
func (m *CertManager) GetCertificateFunc() func(*tls.ClientHelloInfo) (*tls.Certificate, error) {
    return func(clientHello *tls.ClientHelloInfo) (*tls.Certificate, error) {
        m.RLock()
        defer m.RUnlock()

        return m.cert, nil
    }
}

// IsRevoked returns true if the specified certificate has been revoked
func (m *CertManager) IsRevoked(crt *x509.Certificate, caCrt *x509.Certificate) bool {
    m.RLock()
    defer m.RUnlock()

    if crt == nil || caCrt == nil {
        logger.Warn(m.logSender, "", "unable to verify crt %v ca crt %v", crt, caCrt)
        return len(m.crls) > 0
    }

    for _, crl := range m.crls {
        if !crl.HasExpired(time.Now()) && caCrt.CheckCRLSignature(crl) == nil {
            for _, rc := range crl.TBSCertList.RevokedCertificates {
                if rc.SerialNumber.Cmp(crt.SerialNumber) == 0 {
                    return true
                }
            }
        }
    }

    return false
}

// LoadCRLs tries to load certificate revocation lists from the given paths
func (m *CertManager) LoadCRLs() error {
    if len(m.caRevocationLists) == 0 {
        return nil
    }

    var crls []*pkix.CertificateList

    for _, revocationList := range m.caRevocationLists {
        if !utils.IsFileInputValid(revocationList) {
            return fmt.Errorf("invalid root CA revocation list %#v", revocationList)
        }
        if revocationList != "" && !filepath.IsAbs(revocationList) {
            revocationList = filepath.Join(m.configDir, revocationList)
        }
        crlBytes, err := ioutil.ReadFile(revocationList)
        if err != nil {
            logger.Warn(m.logSender, "", "unable to read revocation list %#v", revocationList)
            return err
        }
        crl, err := x509.ParseCRL(crlBytes)
        if err != nil {
            logger.Warn(m.logSender, "", "unable to parse revocation list %#v", revocationList)
            return err
        }

        logger.Debug(m.logSender, "", "CRL %#v successfully loaded", revocationList)
        crls = append(crls, crl)
    }

    m.Lock()
    defer m.Unlock()

    m.crls = crls

    return nil
}

// GetRootCAs returns the set of root certificate authorities that servers
// use if required to verify a client certificate
func (m *CertManager) GetRootCAs() *x509.CertPool {
    m.RLock()
    defer m.RUnlock()

    return m.rootCAs
}

// LoadRootCAs tries to load root CA certificate authorities from the given paths
func (m *CertManager) LoadRootCAs() error {
    if len(m.caCertificates) == 0 {
        return nil
    }

    rootCAs := x509.NewCertPool()

    for _, rootCA := range m.caCertificates {
        if !utils.IsFileInputValid(rootCA) {
            return fmt.Errorf("invalid root CA certificate %#v", rootCA)
        }
        if rootCA != "" && !filepath.IsAbs(rootCA) {
            rootCA = filepath.Join(m.configDir, rootCA)
        }
        crt, err := ioutil.ReadFile(rootCA)
        if err != nil {
            return err
        }
        if rootCAs.AppendCertsFromPEM(crt) {
            logger.Debug(m.logSender, "", "TLS certificate authority %#v successfully loaded", rootCA)
        } else {
            err := fmt.Errorf("unable to load TLS certificate authority %#v", rootCA)
            logger.Warn(m.logSender, "", "%v", err)
            return err
        }
    }

    m.Lock()
    defer m.Unlock()

    m.rootCAs = rootCAs
    return nil
}

// SetCACertificates sets the root CA authorities file paths.
// This should not be changed at runtime
func (m *CertManager) SetCACertificates(caCertificates []string) {
    m.caCertificates = caCertificates
}

// SetCARevocationLists sets the CA revocation lists file paths.
// This should not be changed at runtime
func (m *CertManager) SetCARevocationLists(caRevocationLists []string) {
    m.caRevocationLists = caRevocationLists
}

// NewCertManager creates a new certificate manager
func NewCertManager(certificateFile, certificateKeyFile, configDir, logSender string) (*CertManager, error) {
    manager := &CertManager{
        cert:      nil,
        certPath:  certificateFile,
        keyPath:   certificateKeyFile,
        configDir: configDir,
        logSender: logSender,
    }
    err := manager.loadCertificate()
    if err != nil {
        return nil, err
    }
    return manager, nil
}
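
A plausible way to wire the manager into a TLS listener (a sketch using only the standard library; the file paths are placeholders, not taken from this diff):

    certMgr, err := NewCertManager("server.crt", "server.key", "/etc/sftpgo", "sftpd")
    if err != nil {
        panic(err)
    }
    tlsConfig := &tls.Config{
        GetCertificate: certMgr.GetCertificateFunc(),
        ClientCAs:      certMgr.GetRootCAs(),
    }
    // call certMgr.Reload() from a signal handler to pick up renewed certificates
    _ = tlsConfig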

common/tlsutils_test.go (new file, 387 lines)
@@ -0,0 +1,387 @@
package common

import (
    "crypto/tls"
    "crypto/x509"
    "io/ioutil"
    "os"
    "path/filepath"
    "testing"

    "github.com/stretchr/testify/assert"
)

const (
    serverCert = `-----BEGIN CERTIFICATE-----
MIIEIDCCAgigAwIBAgIRAPOR9zTkX35vSdeyGpF8Rn8wDQYJKoZIhvcNAQELBQAw
EzERMA8GA1UEAxMIQ2VydEF1dGgwHhcNMjEwMTAyMjEyMjU1WhcNMjIwNzAyMjEz
MDUxWjARMQ8wDQYDVQQDEwZzZXJ2ZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAw
ggEKAoIBAQCte0PJhCTNqTiqdwk/s4JanKIMKUVWr2u94a+JYy5gJ9xYXrQ49SeN
m+fwhTAOqctP5zNVkFqxlBytJZg3pqCKqRoOOl1qVgL3F3o7JdhZGi67aw8QMLPx
tLPpYWnnrlUQoXRJdTlqkDqO8lOZl9HO5oZeidPZ7r5BVD6ZiujAC6Zg0jIc+EPt
qhaUJ1CStoAeRf1rNWKmDsLv5hEaDWoaHF9sNVzDQg6atZ3ici00qQj+uvEZo8mL
k6egg3rqsTv9ml2qlrRgFumt99J60hTt3tuQaAruHY80O9nGy3SCXC11daa7gszH
ElCRvhUVoOxRtB54YBEtJ0gEpFnTO9J1AgMBAAGjcTBvMA4GA1UdDwEB/wQEAwID
uDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwHQYDVR0OBBYEFAgDXwPV
nhztNz+H20iNWgoIx8adMB8GA1UdIwQYMBaAFO1yCNAGr/zQTJIi8lw3w5OiuBvM
MA0GCSqGSIb3DQEBCwUAA4ICAQCR5kgIb4vAtrtsXD24n6RtU1yIXHPLNmDStVrH
uaMYNnHlLhRlQFCjHhjWvZ89FQC7FeNOITc3FpibJySyw7JfnsyEOGxEbcAS4uLB
2pdAiJPqdQtxIVcyi5vu53m1T5tm0sy8sBrGxU466aDQ8VGqjcjfTwNIyoFMd3p/
ezFRvg2BudwU9hqApgfHfLi4WCuI3hLO2tbmgDinyH0HI0YYNNweGpiBYbTLF4Tx
H6vHgD9USMZeu4+HX0IIsBiHQD7TTIe5ceREkPcNPd5qTpIvT3zKQ/KwwT90/zjP
aWmz6pLxBfjRu7MY/bDfxfRUqsrLYJCVBoaDVRWR9rhiPIFkC5JzoWD/4hdj2iis
N0+OOaJ77L+/ArFprE+7Fu3cSdYlfiNjV8R5kE29cAxKLI92CjAiTKrEuxKcQPKO
+taWNKIYYjEDZwVnzlkTIl007X0RBuzu9gh4w5NwJdt8ZOJAp0JV0Cq+UvG+FC/v
lYk82E6j1HKhf4CXmrjsrD1Fyu41mpVFOpa2ATiFGvms913MkXuyO8g99IllmDw1
D7/PN4Qe9N6Zm7yoKZM0IUw2v+SUMIdOAZ7dptO9ZjtYOfiAIYN3jM8R4JYgPiuD
DGSM9LJBJxCxI/DiO1y1Z3n9TcdDQYut8Gqdi/aYXw2YeqyHXosX5Od3vcK/O5zC
pOJTYQ==
-----END CERTIFICATE-----`
    serverKey = `-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEArXtDyYQkzak4qncJP7OCWpyiDClFVq9rveGviWMuYCfcWF60
OPUnjZvn8IUwDqnLT+czVZBasZQcrSWYN6agiqkaDjpdalYC9xd6OyXYWRouu2sP
EDCz8bSz6WFp565VEKF0SXU5apA6jvJTmZfRzuaGXonT2e6+QVQ+mYrowAumYNIy
HPhD7aoWlCdQkraAHkX9azVipg7C7+YRGg1qGhxfbDVcw0IOmrWd4nItNKkI/rrx
GaPJi5OnoIN66rE7/Zpdqpa0YBbprffSetIU7d7bkGgK7h2PNDvZxst0glwtdXWm
u4LMxxJQkb4VFaDsUbQeeGARLSdIBKRZ0zvSdQIDAQABAoIBAF4sI8goq7HYwqIG
rEagM4rsrCrd3H4KC/qvoJJ7/JjGCp8OCddBfY8pquat5kCPe4aMgxlXm2P6evaj
CdZr5Ypf8Xz3we4PctyfKgMhsCfuRqAGpc6sIYJ8DY4LC2pxAExe2LlnoRtv39np
QeiGuaYPDbIUL6SGLVFZYgIHngFhbDYfL83q3Cb/PnivUGFvUVQCfRBUKO2d8KYq
TrVB5BWD2GrHor24ApQmci1OOqfbkIevkK6bk8HUfSZiZGI9LUQiPHMxi5k2x43J
nIwhZnW2N28dorKnWHg2vh7viGvinVRZ3MEyX150oCw/L6SYM4fqR6t2ZSBgNQHT
ZNoDtwECgYEA4lXMgtYqKuSlZ3TKfxAj03tJ/gbRdKcUCEGXEbdpY70tTu6KESZS
etid4Ut/sWEoPTJsgYiGbgJl571t1O8oR1UZYgh9hBGHLV6UEIt9n2PbExhE2vL3
SB7+LfO+tMvM4qKUBN+uy4GpU0NiyEEecw4x4S7MRSyHFRIDR7B6RV0CgYEAxDgS
mDaNUfSdfB5mXekLUJAwqeKRdL9RjXYaHbnoZ5kIwQ73tFikRwyTsLQwMhjE1l3z
MItTzIAyTf/BlK3dsp6bHTaT7hXIjHBsuKATN5qAuUpzTrg9+QaCawVSlQgNeF3a
iyfD4dVp66Bzn3gO757TWqmroBZ2e1owbAQvF/kCgYAKT/Jze6KMNcK7hfy78VZQ
imuCoXjlob8t6R8i9YJdwv7Pe9rakS5s3nXDEBePU2fr8eIzvK6zUHSoLF9WtlbV
eTEg4FYnsEzCam7AmjptCrWulwp8F1ng9ViLa3Gi9y4snU+1MSPbrdqzKnzTtvPW
Ni1bnzA7bp3w/dMcbxQDGQKBgB50hY5SiUS7LuZg4YqZ7UOn3aXAoMr6FvJZ7lvG
yyepPQ6aACBh0b2lWhcHIKPl7EdJdcGHHo6TJzusAqPNCKf8rh6upe9COkpx+K3/
SnxK4sffol4JgrTwKbXqsZKoGU8hYhZPKbwXn8UOtmN+AvN2N1/PDfBfDCzBJtrd
G2IhAoGBAN19976xAMDjKb2+wd/mQYA2fR7E8lodxdX3LDnblYmndTKY67nVo94M
FHPKZSN590HkFJ+wmChnOrqjtosY+N25CKMS7939EUIDrq+B+bYTWM/gcwdLXNUk
Rygw/078Z3ZDJamXmyez5WpeLFrrbmI8sLnBBmSjQvMb6vCEtQ2Z
-----END RSA PRIVATE KEY-----`
    caCRT = `-----BEGIN CERTIFICATE-----
MIIE5jCCAs6gAwIBAgIBATANBgkqhkiG9w0BAQsFADATMREwDwYDVQQDEwhDZXJ0
QXV0aDAeFw0yMTAxMDIyMTIwNTVaFw0yMjA3MDIyMTMwNTJaMBMxETAPBgNVBAMT
CENlcnRBdXRoMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA4Tiho5xW
AC15JRkMwfp3/TJwI2As7MY5dele5cmdr5bHAE+sRKqC+Ti88OJWCV5saoyax/1S
CjxJlQMZMl169P1QYJskKjdG2sdv6RLWLMgwSNRRjxp/Bw9dHdiEb9MjLgu28Jro
9peQkHcRHeMf5hM9WvlIJGrdzbC4hUehmqggcqgARainBkYjf0SwuWxHeu4nMqkp
Ak5tcSTLCjHfEFHZ9Te0TIPG5YkWocQKyeLgu4lvuU+DD2W2lym+YVUtRMGs1Env
k7p+N0DcGU26qfzZ2sF5ZXkqm7dBsGQB9pIxwc2Q8T1dCIyP9OQCKVILdc5aVFf1
cryQFHYzYNNZXFlIBims5VV5Mgfp8ESHQSue+v6n6ykecLEyKt1F1Y/MWY/nWUSI
8zdq83jdBAZVjo9MSthxVn57/06s/hQca65IpcTZV2gX0a+eRlAVqaRbAhL3LaZe
bYsW3WHKoUOftwemuep3nL51TzlXZVL7Oz/ClGaEOsnGG9KFO6jh+W768qC0zLQI
CdE7v2Zex98sZteHCg9fGJHIaYoF0aJG5P3WI5oZf2fy7UIYN9ADLFZiorCXAZEh
CSU6mDoRViZ4RGR9GZxbDZ9KYn7O8M/KCR72bkQg73TlMsk1zSXEw0MKLUjtsw6c
rZ0Jt8t3sRatHO3JrYHALMt9vZfyNCZp0IsCAwEAAaNFMEMwDgYDVR0PAQH/BAQD
AgEGMBIGA1UdEwEB/wQIMAYBAf8CAQAwHQYDVR0OBBYEFO1yCNAGr/zQTJIi8lw3
w5OiuBvMMA0GCSqGSIb3DQEBCwUAA4ICAQA6gCNuM7r8mnx674dm31GxBjQy5ZwB
7CxDzYEvL/oiZ3Tv3HlPfN2LAAsJUfGnghh9DOytenL2CTZWjl/emP5eijzmlP+9
zva5I6CIMCf/eDDVsRdO244t0o4uG7+At0IgSDM3bpVaVb4RHZNjEziYChsEYY8d
HK6iwuRSvFniV6yhR/Vj1Ymi9yZ5xclqseLXiQnUB0PkfIk23+7s42cXB16653fH
O/FsPyKBLiKJArizLYQc12aP3QOrYoYD9+fAzIIzew7A5C0aanZCGzkuFpO6TRlD
Tb7ry9Gf0DfPpCgxraH8tOcmnqp/ka3hjqo/SRnnTk0IFrmmLdarJvjD46rKwBo4
MjyAIR1mQ5j8GTlSFBmSgETOQ/EYvO3FPLmra1Fh7L+DvaVzTpqI9fG3TuyyY+Ri
Fby4ycTOGSZOe5Fh8lqkX5Y47mCUJ3zHzOA1vUJy2eTlMRGpu47Eb1++Vm6EzPUP
2EF5aD+zwcssh+atZvQbwxpgVqVcyLt91RSkKkmZQslh0rnlTb68yxvUnD3zw7So
o6TAf9UvwVMEvdLT9NnFd6hwi2jcNte/h538GJwXeBb8EkfpqLKpTKyicnOdkamZ
7E9zY8SHNRYMwB9coQ/W8NvufbCgkvOoLyMXk5edbXofXl3PhNGOlraWbghBnzf5
r3rwjFsQOoZotA==
-----END CERTIFICATE-----`
    caKey = `-----BEGIN RSA PRIVATE KEY-----
MIIJKQIBAAKCAgEA4Tiho5xWAC15JRkMwfp3/TJwI2As7MY5dele5cmdr5bHAE+s
RKqC+Ti88OJWCV5saoyax/1SCjxJlQMZMl169P1QYJskKjdG2sdv6RLWLMgwSNRR
jxp/Bw9dHdiEb9MjLgu28Jro9peQkHcRHeMf5hM9WvlIJGrdzbC4hUehmqggcqgA
RainBkYjf0SwuWxHeu4nMqkpAk5tcSTLCjHfEFHZ9Te0TIPG5YkWocQKyeLgu4lv
uU+DD2W2lym+YVUtRMGs1Envk7p+N0DcGU26qfzZ2sF5ZXkqm7dBsGQB9pIxwc2Q
8T1dCIyP9OQCKVILdc5aVFf1cryQFHYzYNNZXFlIBims5VV5Mgfp8ESHQSue+v6n
6ykecLEyKt1F1Y/MWY/nWUSI8zdq83jdBAZVjo9MSthxVn57/06s/hQca65IpcTZ
V2gX0a+eRlAVqaRbAhL3LaZebYsW3WHKoUOftwemuep3nL51TzlXZVL7Oz/ClGaE
OsnGG9KFO6jh+W768qC0zLQICdE7v2Zex98sZteHCg9fGJHIaYoF0aJG5P3WI5oZ
f2fy7UIYN9ADLFZiorCXAZEhCSU6mDoRViZ4RGR9GZxbDZ9KYn7O8M/KCR72bkQg
73TlMsk1zSXEw0MKLUjtsw6crZ0Jt8t3sRatHO3JrYHALMt9vZfyNCZp0IsCAwEA
AQKCAgAV+ElERYbaI5VyufvVnFJCH75ypPoc6sVGLEq2jbFVJJcq/5qlZCC8oP1F
Xj7YUR6wUiDzK1Hqb7EZ2SCHGjlZVrCVi+y+NYAy7UuMZ+r+mVSkdhmypPoJPUVv
GOTqZ6VB46Cn3eSl0WknvoWr7bD555yPmEuiSc5zNy74yWEJTidEKAFGyknowcTK
sG+w1tAuPLcUKQ44DGB+rgEkcHL7C5EAa7upzx0C3RmZFB+dTAVyJdkBMbFuOhTS
sB7DLeTplR7/4mp9da7EQw51ZXC1DlZOEZt++4/desXsqATNAbva1OuzrLG7mMKe
N/PCBh/aERQcsCvgUmaXqGQgqN1Jhw8kbXnjZnVd9iE7TAh7ki3VqNy1OMgTwOex
bBYWaCqHuDYIxCjeW0qLJcn0cKQ13FVYrxgInf4Jp82SQht5b/zLL3IRZEyKcLJF
kL6g1wlmTUTUX0z8eZzlM0ZCrqtExjgElMO/rV971nyNV5WU8Og3NmE8/slqMrmJ
DlrQr9q0WJsDKj1IMe46EUM6ix7bbxC5NIfJ96dgdxZDn6ghjca6iZYqqUACvmUj
cq08s3R4Ouw9/87kn11wwGBx2yDueCwrjKEGc0RKjweGbwu0nBxOrkJ8JXz6bAv7
1OKfYaX3afI9B8x4uaiuRs38oBQlg9uAYFfl4HNBPuQikGLmsQKCAQEA8VjFOsaz
y6NMZzKXi7WZ48uu3ed5x3Kf6RyDr1WvQ1jkBMv9b6b8Gp1CRnPqviRBto9L8QAg
bCXZTqnXzn//brskmW8IZgqjAlf89AWa53piucu9/hgidrHRZobs5gTqev28uJdc
zcuw1g8c3nCpY9WeTjHODzX5NXYRLFpkazLfYa6c8Q9jZR4KKrpdM+66fxL0JlOd
7dN0oQtEqEAugsd3cwkZgvWhY4oM7FGErrZoDLy273ZdJzi/vU+dThyVzfD8Ab8u
VxxuobVMT/S608zbe+uaiUdov5s96OkCl87403UNKJBH+6LNb3rjBBLE9NPN5ET9
JLQMrYd+zj8jQwKCAQEA7uU5I9MOufo9bIgJqjY4Ie1+Ex9DZEMUYFAvGNCJCVcS
mwOdGF8AWzIavTLACmEDJO7t/OrBdoo4L7IEsCNjgA3WiIwIMiWUVqveAGUMEXr6
TRI5EolV6FTqqIP6AS+BAeBq7G1ELgsTrWNHh11rW3+3kBMuOCn77PUQ8WHwcq/r
teZcZn4Ewcr6P7cBODgVvnBPhe/J8xHS0HFVCeS1CvaiNYgees5yA80Apo9IPjDJ
YWawLjmH5wUBI5yDFVp067wjqJnoKPSoKwWkZXqUk+zgFXx5KT0gh/c5yh1frASp
q6oaYnHEVC5qj2SpT1GFLonTcrQUXiSkiUudvNu1GQKCAQEAmko+5GFtRe0ihgLQ
4S76r6diJli6AKil1Fg3U1r6zZpBQ1PJtJxTJQyN9w5Z7q6tF/GqAesrzxevQdvQ
rCImAPtA3ZofC2UXawMnIjWHHx6diNvYnV1+gtUQ4nO1dSOFZ5VZFcUmPiZO6boF
oaryj3FcX+71JcJCjEvrlKhA9Es0hXUkvfMxfs5if4he1zlyHpTWYr4oA4egUugq
P0mwskikc3VIyvEO+NyjgFxo72yLPkFSzemkidN8uKDyFqKtnlfGM7OuA2CY1WZa
3+67lXWshx9KzyJIs92iCYkU8EoPxtdYzyrV6efdX7x27v60zTOut5TnJJS6WiF6
Do5MkwKCAQAxoR9IyP0DN/BwzqYrXU42Bi+t603F04W1KJNQNWpyrUspNwv41yus
xnD1o0hwH41Wq+h3JZIBfV+E0RfWO9Pc84MBJQ5C1LnHc7cQH+3s575+Km3+4tcd
CB8j2R8kBeloKWYtLdn/Mr/ownpGreqyvIq2/LUaZ+Z1aMgXTYB1YwS16mCBzmZQ
mEl62RsAwe4KfSyYJ6OtwqMoOJMxFfliiLBULK4gVykqjvk2oQeiG+KKQJoTUFJi
dRCyhD5bPkqR+qjxyt+HOqSBI4/uoROi05AOBqjpH1DVzk+MJKQOiX1yM0l98CKY
Vng+x+vAla/0Zh+ucajVkgk4mKPxazdpAoIBAQC17vWk4KYJpF2RC3pKPcQ0PdiX
bN35YNlvyhkYlSfDNdyH3aDrGiycUyW2mMXUgEDFsLRxHMTL+zPC6efqO6sTAJDY
cBptsW4drW/qo8NTx3dNOisLkW+mGGJOR/w157hREFr29ymCVMYu/Z7fVWIeSpCq
p3u8YX8WTljrxwSczlGjvpM7uJx3SfYRM4TUoy+8wU8bK74LywLa5f60bQY6Dye0
Gqd9O6OoPfgcQlwjC5MiAofeqwPJvU0hQOPoehZyNLAmOCWXTYWaTP7lxO1r6+NE
M3hGYqW3W8Ixua71OskCypBZg/HVlIP/lzjRzdx+VOB2hbWVth2Iup/Z1egW
-----END RSA PRIVATE KEY-----`
    caCRL = `-----BEGIN X509 CRL-----
MIICpzCBkAIBATANBgkqhkiG9w0BAQsFADATMREwDwYDVQQDEwhDZXJ0QXV0aBcN
MjEwMTAyMjEzNDA1WhcNMjMwMTAyMjEzNDA1WjAkMCICEQC+l04DbHWMyC3fG09k
VXf+Fw0yMTAxMDIyMTM0MDVaoCMwITAfBgNVHSMEGDAWgBTtcgjQBq/80EySIvJc
N8OTorgbzDANBgkqhkiG9w0BAQsFAAOCAgEAEJ7z+uNc8sqtxlOhSdTGDzX/xput
E857kFQkSlMnU2whQ8c+XpYrBLA5vIZJNSSwohTpM4+zVBX/bJpmu3wqqaArRO9/
YcW5mQk9Anvb4WjQW1cHmtNapMTzoC9AiYt/OWPfy+P6JCgCr4Hy6LgQyIRL6bM9
VYTalolOm1qa4Y5cIeT7iHq/91mfaqo8/6MYRjLl8DOTROpmw8OS9bCXkzGKdCat
AbAzwkQUSauyoCQ10rpX+Y64w9ng3g4Dr20aCqPf5osaqplEJ2HTK8ljDTidlslv
9anQj8ax3Su89vI8+hK+YbfVQwrThabgdSjQsn+veyx8GlP8WwHLAQ379KjZjWg+
OlOSwBeU1vTdP0QcB8X5C2gVujAyuQekbaV86xzIBOj7vZdfHZ6ee30TZ2FKiMyg
7/N2OqW0w77ChsjB4MSHJCfuTgIeg62GzuZXLM+Q2Z9LBdtm4Byg+sm/P52adOEg
gVb2Zf4KSvsAmA0PIBlu449/QXUFcMxzLFy7mwTeZj2B4Ln0Hm0szV9f9R8MwMtB
SyLYxVH+mgqaR6Jkk22Q/yYyLPaELfafX5gp/AIXG8n0zxfVaTvK3auSgb1Q6ZLS
5QH9dSIsmZHlPq7GoSXmKpMdjUL8eaky/IMteioyXgsBiATzl5L2dsw6MTX3MDF0
QbDK+MzhmbKfDxs=
-----END X509 CRL-----`
    client1Crt = `-----BEGIN CERTIFICATE-----
MIIEITCCAgmgAwIBAgIRAIppZHoj1hM80D7WzTEKLuAwDQYJKoZIhvcNAQELBQAw
EzERMA8GA1UEAxMIQ2VydEF1dGgwHhcNMjEwMTAyMjEyMzEwWhcNMjIwNzAyMjEz
MDUxWjASMRAwDgYDVQQDEwdjbGllbnQxMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEAoKbYY9MdF2kF/nhBESIiZTdVYtA8XL9xrIZyDj9EnCiTxHiVbJtH
XVwszqSl5TRrotPmnmAQcX3r8OCk+z+RQZ0QQj257P3kG6q4rNnOcWCS5xEd20jP
yhQ3m+hMGfZsotNTQze1ochuQgLUN6IPyPxZkH22ia3jX4iu1eo/QxeLYHj1UHw4
3Cii9yE+j5kPUC21xmnrGKdUrB55NYLXHx6yTIqYR5znSOVB8oJi18/hwdZmH859
DHhm0Hx1HrS+jbjI3+CMorZJ3WUyNf+CkiVLD3xYutPbxzEpwiqkG/XYzLH0habT
cDcILo18n+o3jvem2KWBrDhyairjIDscwQIDAQABo3EwbzAOBgNVHQ8BAf8EBAMC
A7gwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMB0GA1UdDgQWBBSJ5GIv
zIrE4ZSQt2+CGblKTDswizAfBgNVHSMEGDAWgBTtcgjQBq/80EySIvJcN8OTorgb
zDANBgkqhkiG9w0BAQsFAAOCAgEALh4f5GhvNYNou0Ab04iQBbLEdOu2RlbK1B5n
K9P/umYenBHMY/z6HT3+6tpcHsDuqE8UVdq3f3Gh4S2Gu9m8PRitT+cJ3gdo9Plm
3rD4ufn/s6rGg3ppydXcedm17492tbccUDWOBZw3IO/ASVq13WPgT0/Kev7cPq0k
sSdSNhVeXqx8Myc2/d+8GYyzbul2Kpfa7h9i24sK49E9ftnSmsIvngONo08eT1T0
3wAOyK2981LIsHaAWcneShKFLDB6LeXIT9oitOYhiykhFlBZ4M1GNlSNfhQ8IIQP
xbqMNXCLkW4/BtLhGEEcg0QVso6Kudl9rzgTfQknrdF7pHp6rS46wYUjoSyIY6dl
oLmnoAVJX36J3QPWelePI9e07X2wrTfiZWewwgw3KNRWjd6/zfPLe7GoqXnK1S2z
PT8qMfCaTwKTtUkzXuTFvQ8bAo2My/mS8FOcpkt2oQWeOsADHAUX7fz5BCoa2DL3
k/7Mh4gVT+JYZEoTwCFuYHgMWFWe98naqHi9lB4yR981p1QgXgxO7qBeipagKY1F
LlH1iwXUqZ3MZnkNA+4e1Fglsw3sa/rC+L98HnznJ/YbTfQbCP6aQ1qcOymrjMud
7MrFwqZjtd/SK4Qx1VpK6jGEAtPgWBTUS3p9ayg6lqjMBjsmySWfvRsDQbq6P5Ct
O/e3EH8=
-----END CERTIFICATE-----`
    client1Key = `-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEAoKbYY9MdF2kF/nhBESIiZTdVYtA8XL9xrIZyDj9EnCiTxHiV
bJtHXVwszqSl5TRrotPmnmAQcX3r8OCk+z+RQZ0QQj257P3kG6q4rNnOcWCS5xEd
20jPyhQ3m+hMGfZsotNTQze1ochuQgLUN6IPyPxZkH22ia3jX4iu1eo/QxeLYHj1
UHw43Cii9yE+j5kPUC21xmnrGKdUrB55NYLXHx6yTIqYR5znSOVB8oJi18/hwdZm
H859DHhm0Hx1HrS+jbjI3+CMorZJ3WUyNf+CkiVLD3xYutPbxzEpwiqkG/XYzLH0
habTcDcILo18n+o3jvem2KWBrDhyairjIDscwQIDAQABAoIBAEBSjVFqtbsp0byR
aXvyrtLX1Ng7h++at2jca85Ihq//jyqbHTje8zPuNAKI6eNbmb0YGr5OuEa4pD9N
ssDmMsKSoG/lRwwcm7h4InkSvBWpFShvMgUaohfHAHzsBYxfnh+TfULsi0y7c2n6
t/2OZcOTRkkUDIITnXYiw93ibHHv2Mv2bBDu35kGrcK+c2dN5IL5ZjTjMRpbJTe2
44RBJbdTxHBVSgoGBnugF+s2aEma6Ehsj70oyfoVpM6Aed5kGge0A5zA1JO7WCn9
Ay/DzlULRXHjJIoRWd2NKvx5n3FNppUc9vJh2plRHalRooZ2+MjSf8HmXlvG2Hpb
ScvmWgECgYEA1G+A/2KnxWsr/7uWIJ7ClcGCiNLdk17Pv3DZ3G4qUsU2ITftfIbb
tU0Q/b19na1IY8Pjy9ptP7t74/hF5kky97cf1FA8F+nMj/k4+wO8QDI8OJfzVzh9
PwielA5vbE+xmvis5Hdp8/od1Yrc/rPSy2TKtPFhvsqXjqoUmOAjDP8CgYEAwZjH
9dt1sc2lx/rMxihlWEzQ3JPswKW9/LJAmbRBoSWF9FGNjbX7uhWtXRKJkzb8ZAwa
88azluNo2oftbDD/+jw8b2cDgaJHlLAkSD4O1D1RthW7/LKD15qZ/oFsRb13NV85
ZNKtwslXGbfVNyGKUVFm7fVA8vBAOUey+LKDFj8CgYEAg8WWstOzVdYguMTXXuyb
ruEV42FJaDyLiSirOvxq7GTAKuLSQUg1yMRBIeQEo2X1XU0JZE3dLodRVhuO4EXP
g7Dn4X7Th9HSvgvNuIacowWGLWSz4Qp9RjhGhXhezUSx2nseY6le46PmFavJYYSR
4PBofMyt4PcyA6Cknh+KHmkCgYEAnTriG7ETE0a7v4DXUpB4TpCEiMCy5Xs2o8Z5
ZNva+W+qLVUWq+MDAIyechqeFSvxK6gRM69LJ96lx+XhU58wJiFJzAhT9rK/g+jS
bsHH9WOfu0xHkuHA5hgvvV2Le9B2wqgFyva4HJy82qxMxCu/VG/SMqyfBS9OWbb7
ibQhdq0CgYAl53LUWZsFSZIth1vux2LVOsI8C3X1oiXDGpnrdlQ+K7z57hq5EsRq
GC+INxwXbvKNqp5h0z2MvmKYPDlGVTgw8f8JjM7TkN17ERLcydhdRrMONUryZpo8
1xTob+8blyJgfxZUIAKbMbMbIiU0WAF0rfD/eJJwS4htOW/Hfv4TGA==
-----END RSA PRIVATE KEY-----`
    // client 2 crt is revoked
    client2Crt = `-----BEGIN CERTIFICATE-----
MIIEITCCAgmgAwIBAgIRAL6XTgNsdYzILd8bT2RVd/4wDQYJKoZIhvcNAQELBQAw
EzERMA8GA1UEAxMIQ2VydEF1dGgwHhcNMjEwMTAyMjEyMzIwWhcNMjIwNzAyMjEz
MDUxWjASMRAwDgYDVQQDEwdjbGllbnQyMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEA6xjW5KQR3/OFQtV5M75WINqQ4AzXSu6DhSz/yumaaQZP/UxY+6hi
jcrFzGo9MMie/Sza8DhkXOFAl2BelUubrOeB2cl+/Gr8OCyRi2Gv6j3zCsuN/4jQ
tNaoez/IbkDvI3l/ZpzBtnuNY2RiemGgHuORXHRVf3qVlsw+npBIRW5rM2HkO/xG
oZjeBErWVu390Lyn+Gvk2TqQDnkutWnxUC60/zPlHhXZ4BwaFAekbSnjsSDB1YFM
s8HwW4oBryoxdj3/+/qLrBHt75IdLw3T7/V1UDJQM3EvSQOr12w4egpldhtsC871
nnBQZeY6qA5feffIwwg/6lJm70o6S6OX6wIDAQABo3EwbzAOBgNVHQ8BAf8EBAMC
A7gwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMB0GA1UdDgQWBBTB84v5
t9HqhLhMODbn6oYkEQt3KzAfBgNVHSMEGDAWgBTtcgjQBq/80EySIvJcN8OTorgb
zDANBgkqhkiG9w0BAQsFAAOCAgEALGtBCve5k8tToL3oLuXp/oSik6ovIB/zq4I/
4zNMYPU31+ZWz6aahysgx1JL1yqTa3Qm8o2tu52MbnV10dM7CIw7c/cYa+c+OPcG
5LF97kp13X+r2axy+CmwM86b4ILaDGs2Qyai6VB6k7oFUve+av5o7aUrNFpqGCJz
HWdtHZSVA3JMATzy0TfWanwkzreqfdw7qH0yZ9bDURlBKAVWrqnCstva9jRuv+AI
eqxr/4Ro986TFjJdoAP3Vr16CPg7/B6GA/KmsBWJrpeJdPWq4i2gpLKvYZoy89qD
mUZf34RbzcCtV4NvV1DadGnt4us0nvLrvS5rL2+2uWD09kZYq9RbLkvgzF/cY0fz
i7I1bi5XQ+alWe0uAk5ZZL/D+GTRYUX1AWwCqwJxmHrMxcskMyO9pXvLyuSWRDLo
YNBrbX9nLcfJzVCp+X+9sntTHjs4l6Cw+fLepJIgtgqdCHtbhTiv68vSM6cgb4br
6n2xrXRKuioiWFOrTSRr+oalZh8dGJ/xvwY8IbWknZAvml9mf1VvfE7Ma5P777QM
fsbYVTq0Y3R/5hIWsC3HA5z6MIM8L1oRe/YyhP3CTmrCHkVKyDOosGXpGz+JVcyo
cfYkY5A3yFKB2HaCwZSfwFmRhxkrYWGEbHv3Cd9YkZs1J3hNhGFZyVMC9Uh0S85a
6zdDidU=
-----END CERTIFICATE-----`
    client2Key = `-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA6xjW5KQR3/OFQtV5M75WINqQ4AzXSu6DhSz/yumaaQZP/UxY
+6hijcrFzGo9MMie/Sza8DhkXOFAl2BelUubrOeB2cl+/Gr8OCyRi2Gv6j3zCsuN
/4jQtNaoez/IbkDvI3l/ZpzBtnuNY2RiemGgHuORXHRVf3qVlsw+npBIRW5rM2Hk
O/xGoZjeBErWVu390Lyn+Gvk2TqQDnkutWnxUC60/zPlHhXZ4BwaFAekbSnjsSDB
1YFMs8HwW4oBryoxdj3/+/qLrBHt75IdLw3T7/V1UDJQM3EvSQOr12w4egpldhts
C871nnBQZeY6qA5feffIwwg/6lJm70o6S6OX6wIDAQABAoIBAFatstVb1KdQXsq0
cFpui8zTKOUiduJOrDkWzTygAmlEhYtrccdfXu7OWz0x0lvBLDVGK3a0I/TGrAzj
4BuFY+FM/egxTVt9in6fmA3et4BS1OAfCryzUdfK6RV//8L+t+zJZ/qKQzWnugpy
QYjDo8ifuMFwtvEoXizaIyBNLAhEp9hnrv+Tyi2O2gahPvCHsD48zkyZRCHYRstD
NH5cIrwz9/RJgPO1KI+QsJE7Nh7stR0sbr+5TPU4fnsL2mNhMUF2TJrwIPrc1yp+
YIUjdnh3SO88j4TQT3CIrWi8i4pOy6N0dcVn3gpCRGaqAKyS2ZYUj+yVtLO4KwxZ
SZ1lNvECgYEA78BrF7f4ETfWSLcBQ3qxfLs7ibB6IYo2x25685FhZjD+zLXM1AKb
FJHEXUm3mUYrFJK6AFEyOQnyGKBOLs3S6oTAswMPbTkkZeD1Y9O6uv0AHASLZnK6
pC6ub0eSRF5LUyTQ55Jj8D7QsjXJueO8v+G5ihWhNSN9tB2UA+8NBmkCgYEA+weq
cvoeMIEMBQHnNNLy35bwfqrceGyPIRBcUIvzQfY1vk7KW6DYOUzC7u+WUzy/hA52
DjXVVhua2eMQ9qqtOav7djcMc2W9RbLowxvno7K5qiCss013MeWk64TCWy+WMp5A
AVAtOliC3hMkIKqvR2poqn+IBTh1449agUJQqTMCgYEAu06IHGq1GraV6g9XpGF5
wqoAlMzUTdnOfDabRilBf/YtSr+J++ThRcuwLvXFw7CnPZZ4TIEjDJ7xjj3HdxeE
fYYjineMmNd40UNUU556F1ZLvJfsVKizmkuCKhwvcMx+asGrmA+tlmds4p3VMS50
KzDtpKzLWlmU/p/RINWlRmkCgYBy0pHTn7aZZx2xWKqCDg+L2EXPGqZX6wgZDpu7
OBifzlfM4ctL2CmvI/5yPmLbVgkgBWFYpKUdiujsyyEiQvWTUKhn7UwjqKDHtcsk
G6p7xS+JswJrzX4885bZJ9Oi1AR2yM3sC9l0O7I4lDbNPmWIXBLeEhGMmcPKv/Kc
91Ff4wKBgQCF3ur+Vt0PSU0ucrPVHjCe7tqazm0LJaWbPXL1Aw0pzdM2EcNcW/MA
w0kqpr7MgJ94qhXCBcVcfPuFN9fBOadM3UBj1B45Cz3pptoK+ScI8XKno6jvVK/p
xr5cb9VBRBtB9aOKVfuRhpatAfS2Pzm2Htae9lFn7slGPUmu2hkjDw==
-----END RSA PRIVATE KEY-----`
)

func TestLoadCertificate(t *testing.T) {
    caCrtPath := filepath.Join(os.TempDir(), "testca.crt")
    caCrlPath := filepath.Join(os.TempDir(), "testcrl.crt")
    certPath := filepath.Join(os.TempDir(), "test.crt")
    keyPath := filepath.Join(os.TempDir(), "test.key")
    err := ioutil.WriteFile(caCrtPath, []byte(caCRT), os.ModePerm)
    assert.NoError(t, err)
    err = ioutil.WriteFile(caCrlPath, []byte(caCRL), os.ModePerm)
    assert.NoError(t, err)
    err = ioutil.WriteFile(certPath, []byte(serverCert), os.ModePerm)
    assert.NoError(t, err)
    err = ioutil.WriteFile(keyPath, []byte(serverKey), os.ModePerm)
    assert.NoError(t, err)
    certManager, err := NewCertManager(certPath, keyPath, configDir, logSenderTest)
    assert.NoError(t, err)
    certFunc := certManager.GetCertificateFunc()
    if assert.NotNil(t, certFunc) {
        hello := &tls.ClientHelloInfo{
            ServerName:   "localhost",
            CipherSuites: []uint16{tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305},
        }
        cert, err := certFunc(hello)
        assert.NoError(t, err)
        assert.Equal(t, certManager.cert, cert)
    }

    certManager.SetCACertificates(nil)
    err = certManager.LoadRootCAs()
    assert.NoError(t, err)

    certManager.SetCACertificates([]string{""})
    err = certManager.LoadRootCAs()
    assert.Error(t, err)

    certManager.SetCACertificates([]string{"invalid"})
    err = certManager.LoadRootCAs()
    assert.Error(t, err)

    // loading the key as root CA must fail
    certManager.SetCACertificates([]string{keyPath})
    err = certManager.LoadRootCAs()
    assert.Error(t, err)

    certManager.SetCACertificates([]string{certPath})
    err = certManager.LoadRootCAs()
    assert.NoError(t, err)

    rootCa := certManager.GetRootCAs()
    assert.NotNil(t, rootCa)

    err = certManager.Reload()
    assert.NoError(t, err)

    certManager.SetCARevocationLists(nil)
    err = certManager.LoadCRLs()
    assert.NoError(t, err)

    certManager.SetCARevocationLists([]string{""})
    err = certManager.LoadCRLs()
    assert.Error(t, err)

    certManager.SetCARevocationLists([]string{"invalid crl"})
    err = certManager.LoadCRLs()
    assert.Error(t, err)

    // this is not a crl and must fail
    certManager.SetCARevocationLists([]string{caCrtPath})
    err = certManager.LoadCRLs()
    assert.Error(t, err)

    certManager.SetCARevocationLists([]string{caCrlPath})
    err = certManager.LoadCRLs()
    assert.NoError(t, err)

    crt, err := tls.X509KeyPair([]byte(caCRT), []byte(caKey))
    assert.NoError(t, err)

    x509CAcrt, err := x509.ParseCertificate(crt.Certificate[0])
    assert.NoError(t, err)

    crt, err = tls.X509KeyPair([]byte(client1Crt), []byte(client1Key))
    assert.NoError(t, err)
    x509crt, err := x509.ParseCertificate(crt.Certificate[0])
    if assert.NoError(t, err) {
        assert.False(t, certManager.IsRevoked(x509crt, x509CAcrt))
    }

    crt, err = tls.X509KeyPair([]byte(client2Crt), []byte(client2Key))
    assert.NoError(t, err)
    x509crt, err = x509.ParseCertificate(crt.Certificate[0])
    if assert.NoError(t, err) {
        assert.True(t, certManager.IsRevoked(x509crt, x509CAcrt))
    }

    assert.True(t, certManager.IsRevoked(nil, nil))

    err = os.Remove(caCrlPath)
    assert.NoError(t, err)
    err = certManager.Reload()
    assert.Error(t, err)

    err = os.Remove(certPath)
    assert.NoError(t, err)
    err = os.Remove(keyPath)
    assert.NoError(t, err)
    err = certManager.Reload()
    assert.Error(t, err)

    err = os.Remove(caCrtPath)
    assert.NoError(t, err)
}

func TestLoadInvalidCert(t *testing.T) {
    certManager, err := NewCertManager("test.crt", "test.key", configDir, logSenderTest)
    assert.Error(t, err)
    assert.Nil(t, certManager)
}
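
The revocation assertions above suggest the intended server-side wiring. A hedged sketch, not taken from this diff: certMgr is an initialized *common.CertManager and tlsConfig the config from the earlier sketch; the callback plumbing and error message are assumptions.

// Reject revoked client certificates during the TLS handshake.
// In each verified chain, chain[0] is the leaf and chain[1] the CA
// that issued it, matching IsRevoked's (crt, caCrt) arguments above.
tlsConfig.ClientAuth = tls.RequireAndVerifyClientCert
tlsConfig.VerifyPeerCertificate = func(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error {
    for _, chain := range verifiedChains {
        if len(chain) > 1 && certMgr.IsRevoked(chain[0], chain[1]) {
            return errors.New("client certificate is revoked")
        }
    }
    return nil
}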

303 common/transfer.go Normal file
@@ -0,0 +1,303 @@
package common

import (
    "errors"
    "path"
    "sync"
    "sync/atomic"
    "time"

    "github.com/drakkan/sftpgo/dataprovider"
    "github.com/drakkan/sftpgo/logger"
    "github.com/drakkan/sftpgo/metrics"
    "github.com/drakkan/sftpgo/vfs"
)

var (
    // ErrTransferClosed defines the error returned for a closed transfer
    ErrTransferClosed = errors.New("transfer already closed")
)

// BaseTransfer contains the transfer details shared across protocols,
// for an upload or a download.
type BaseTransfer struct { //nolint:maligned
    ID             uint64
    BytesSent      int64
    BytesReceived  int64
    Fs             vfs.Fs
    File           vfs.File
    Connection     *BaseConnection
    cancelFn       func()
    fsPath         string
    requestPath    string
    start          time.Time
    MaxWriteSize   int64
    MinWriteOffset int64
    InitialSize    int64
    isNewFile      bool
    transferType   int
    AbortTransfer  int32
    sync.Mutex
    ErrTransfer error
}

// NewBaseTransfer returns a new BaseTransfer and adds it to the given connection
func NewBaseTransfer(file vfs.File, conn *BaseConnection, cancelFn func(), fsPath, requestPath string, transferType int,
    minWriteOffset, initialSize, maxWriteSize int64, isNewFile bool, fs vfs.Fs) *BaseTransfer {
    t := &BaseTransfer{
        ID:             conn.GetTransferID(),
        File:           file,
        Connection:     conn,
        cancelFn:       cancelFn,
        fsPath:         fsPath,
        start:          time.Now(),
        transferType:   transferType,
        MinWriteOffset: minWriteOffset,
        InitialSize:    initialSize,
        isNewFile:      isNewFile,
        requestPath:    requestPath,
        BytesSent:      0,
        BytesReceived:  0,
        MaxWriteSize:   maxWriteSize,
        AbortTransfer:  0,
        Fs:             fs,
    }

    conn.AddTransfer(t)
    return t
}

// GetID returns the transfer ID
func (t *BaseTransfer) GetID() uint64 {
    return t.ID
}

// GetType returns the transfer type
func (t *BaseTransfer) GetType() int {
    return t.transferType
}

// GetSize returns the transferred size
func (t *BaseTransfer) GetSize() int64 {
    if t.transferType == TransferDownload {
        return atomic.LoadInt64(&t.BytesSent)
    }
    return atomic.LoadInt64(&t.BytesReceived)
}

// GetStartTime returns the start time
func (t *BaseTransfer) GetStartTime() time.Time {
    return t.start
}

// SignalClose signals that the transfer should be closed.
// For some protocols, for example WebDAV, we have no
// access to the network connection, so we use this method
// to make the next read or write fail
func (t *BaseTransfer) SignalClose() {
    atomic.StoreInt32(&(t.AbortTransfer), 1)
}
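
How a protocol-specific transfer honors this flag is outside this diff. The following is a hedged sketch only: the embedding type, the Read signature on vfs.File, and the error choice are assumptions.

type exampleTransfer struct {
    *BaseTransfer
}

// Read fails fast once SignalClose has set the flag, so an in-flight
// WebDAV/FTP read errors out instead of waiting on the network.
func (t *exampleTransfer) Read(p []byte) (int, error) {
    if atomic.LoadInt32(&t.AbortTransfer) == 1 {
        return 0, ErrTransferClosed
    }
    n, err := t.File.Read(p)
    atomic.AddInt64(&t.BytesSent, int64(n))
    return n, err
}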

// GetVirtualPath returns the transfer virtual path
func (t *BaseTransfer) GetVirtualPath() string {
    return t.requestPath
}

// GetFsPath returns the transfer filesystem path
func (t *BaseTransfer) GetFsPath() string {
    return t.fsPath
}

// GetRealFsPath returns the real transfer filesystem path.
// If atomic uploads are enabled this differs from fsPath
func (t *BaseTransfer) GetRealFsPath(fsPath string) string {
    if fsPath == t.GetFsPath() {
        if t.File != nil {
            return t.File.Name()
        }
        return t.fsPath
    }
    return ""
}

// SetCancelFn sets the cancel function for the transfer
func (t *BaseTransfer) SetCancelFn(cancelFn func()) {
    t.cancelFn = cancelFn
}

// Truncate changes the size of the opened file.
// Supported for local fs only
func (t *BaseTransfer) Truncate(fsPath string, size int64) (int64, error) {
    if fsPath == t.GetFsPath() {
        if t.File != nil {
            initialSize := t.InitialSize
            err := t.File.Truncate(size)
            if err == nil {
                t.Lock()
                t.InitialSize = size
                if t.MaxWriteSize > 0 {
                    sizeDiff := initialSize - size
                    t.MaxWriteSize += sizeDiff
                    metrics.TransferCompleted(atomic.LoadInt64(&t.BytesSent), atomic.LoadInt64(&t.BytesReceived), t.transferType, t.ErrTransfer)
                    atomic.StoreInt64(&t.BytesReceived, 0)
                }
                t.Unlock()
            }
            t.Connection.Log(logger.LevelDebug, "file %#v truncated to size %v max write size %v new initial size %v err: %v",
                fsPath, size, t.MaxWriteSize, t.InitialSize, err)
            return initialSize, err
        }
        if size == 0 && atomic.LoadInt64(&t.BytesSent) == 0 {
            // for cloud providers the file is always truncated to zero, we don't support append/resume for uploads
            return 0, nil
        }
        return 0, ErrOpUnsupported
    }
    return 0, errTransferMismatch
}
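
The MaxWriteSize adjustment is easiest to see with the numbers TestTruncate uses later in this diff:

// Write budget 100, file already holds 5 bytes, truncate to 2:
initialSize, size, maxWriteSize := int64(5), int64(2), int64(100)
maxWriteSize += initialSize - size // 100 + (5 - 2) = 103
fmt.Println(maxWriteSize)          // 103, the value TestTruncate asserts below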

// TransferError is called if there is an unexpected error,
// for example a network or client issue
func (t *BaseTransfer) TransferError(err error) {
    t.Lock()
    defer t.Unlock()
    if t.ErrTransfer != nil {
        return
    }
    t.ErrTransfer = err
    if t.cancelFn != nil {
        t.cancelFn()
    }
    elapsed := time.Since(t.start).Nanoseconds() / 1000000
    t.Connection.Log(logger.LevelWarn, "Unexpected error for transfer, path: %#v, error: \"%v\" bytes sent: %v, "+
        "bytes received: %v transfer running since %v ms", t.fsPath, t.ErrTransfer, atomic.LoadInt64(&t.BytesSent),
        atomic.LoadInt64(&t.BytesReceived), elapsed)
}

func (t *BaseTransfer) getUploadFileSize() (int64, error) {
    var fileSize int64
    info, err := t.Fs.Stat(t.fsPath)
    if err == nil {
        fileSize = info.Size()
    }
    if vfs.IsCryptOsFs(t.Fs) && t.ErrTransfer != nil {
        errDelete := t.Connection.Fs.Remove(t.fsPath, false)
        if errDelete != nil {
            t.Connection.Log(logger.LevelWarn, "error removing partial crypto file %#v: %v", t.fsPath, errDelete)
        }
    }
    return fileSize, err
}

// Close is called when the transfer is completed.
// It logs the transfer info, updates the user quota (for uploads)
// and executes any defined action.
// If there is an error no action will be executed and, in atomic mode,
// we try to delete the temporary file
func (t *BaseTransfer) Close() error {
    defer t.Connection.RemoveTransfer(t)

    var err error
    numFiles := 0
    if t.isNewFile {
        numFiles = 1
    }
    metrics.TransferCompleted(atomic.LoadInt64(&t.BytesSent), atomic.LoadInt64(&t.BytesReceived), t.transferType, t.ErrTransfer)
    if t.ErrTransfer == ErrQuotaExceeded && t.File != nil {
        // if quota is exceeded we try to remove the partial file for uploads to the local filesystem
        err = t.Connection.Fs.Remove(t.File.Name(), false)
        if err == nil {
            numFiles--
            atomic.StoreInt64(&t.BytesReceived, 0)
            t.MinWriteOffset = 0
        }
        t.Connection.Log(logger.LevelWarn, "upload denied due to space limit, delete temporary file: %#v, deletion error: %v",
            t.File.Name(), err)
    } else if t.transferType == TransferUpload && t.File != nil && t.File.Name() != t.fsPath {
        if t.ErrTransfer == nil || Config.UploadMode == UploadModeAtomicWithResume {
            err = t.Connection.Fs.Rename(t.File.Name(), t.fsPath)
            t.Connection.Log(logger.LevelDebug, "atomic upload completed, rename: %#v -> %#v, error: %v",
                t.File.Name(), t.fsPath, err)
        } else {
            err = t.Connection.Fs.Remove(t.File.Name(), false)
            t.Connection.Log(logger.LevelWarn, "atomic upload completed with error: \"%v\", delete temporary file: %#v, "+
                "deletion error: %v", t.ErrTransfer, t.File.Name(), err)
            if err == nil {
                numFiles--
                atomic.StoreInt64(&t.BytesReceived, 0)
                t.MinWriteOffset = 0
            }
        }
    }
    elapsed := time.Since(t.start).Nanoseconds() / 1000000
    if t.transferType == TransferDownload {
        logger.TransferLog(downloadLogSender, t.fsPath, elapsed, atomic.LoadInt64(&t.BytesSent), t.Connection.User.Username,
            t.Connection.ID, t.Connection.protocol)
        action := newActionNotification(&t.Connection.User, operationDownload, t.fsPath, "", "", t.Connection.protocol,
            atomic.LoadInt64(&t.BytesSent), t.ErrTransfer)
        go actionHandler.Handle(action) //nolint:errcheck
    } else {
        fileSize := atomic.LoadInt64(&t.BytesReceived) + t.MinWriteOffset
        if statSize, err := t.getUploadFileSize(); err == nil {
            fileSize = statSize
        }
        t.Connection.Log(logger.LevelDebug, "uploaded file size %v", fileSize)
        t.updateQuota(numFiles, fileSize)
        logger.TransferLog(uploadLogSender, t.fsPath, elapsed, atomic.LoadInt64(&t.BytesReceived), t.Connection.User.Username,
            t.Connection.ID, t.Connection.protocol)
        action := newActionNotification(&t.Connection.User, operationUpload, t.fsPath, "", "", t.Connection.protocol,
            fileSize, t.ErrTransfer)
        go actionHandler.Handle(action) //nolint:errcheck
    }
    if t.ErrTransfer != nil {
        t.Connection.Log(logger.LevelWarn, "transfer error: %v, path: %#v", t.ErrTransfer, t.fsPath)
        if err == nil {
            err = t.ErrTransfer
        }
    }
    return err
}

func (t *BaseTransfer) updateQuota(numFiles int, fileSize int64) bool {
    // S3 uploads are atomic, if there is an error nothing is uploaded
    if t.File == nil && t.ErrTransfer != nil {
        return false
    }
    sizeDiff := fileSize - t.InitialSize
    if t.transferType == TransferUpload && (numFiles != 0 || sizeDiff > 0) {
        vfolder, err := t.Connection.User.GetVirtualFolderForPath(path.Dir(t.requestPath))
        if err == nil {
            dataprovider.UpdateVirtualFolderQuota(&vfolder.BaseVirtualFolder, numFiles, //nolint:errcheck
                sizeDiff, false)
            if vfolder.IsIncludedInUserQuota() {
                dataprovider.UpdateUserQuota(&t.Connection.User, numFiles, sizeDiff, false) //nolint:errcheck
            }
        } else {
            dataprovider.UpdateUserQuota(&t.Connection.User, numFiles, sizeDiff, false) //nolint:errcheck
        }
        return true
    }
    return false
}
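
A quick worked example of the delta above, with illustrative numbers not taken from the diff: overwriting an existing 5-byte file with 9 bytes charges sizeDiff = 4 with numFiles = 0, while a brand new 9-byte upload charges 9 bytes and one file; either way the charge goes to the containing virtual folder or directly to the user.

// Overwrite: fileSize 9, InitialSize 5 -> numFiles 0, sizeDiff 4
// New file:  fileSize 9, InitialSize 0 -> numFiles 1, sizeDiff 9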

// HandleThrottle manages bandwidth throttling
func (t *BaseTransfer) HandleThrottle() {
    var wantedBandwidth int64
    var transferredBytes int64
    if t.transferType == TransferDownload {
        wantedBandwidth = t.Connection.User.DownloadBandwidth
        transferredBytes = atomic.LoadInt64(&t.BytesSent)
    } else {
        wantedBandwidth = t.Connection.User.UploadBandwidth
        transferredBytes = atomic.LoadInt64(&t.BytesReceived)
    }
    if wantedBandwidth > 0 {
        // real and wanted elapsed as milliseconds, bytes as kilobytes
        realElapsed := time.Since(t.start).Nanoseconds() / 1000000
        // transferredBytes / 1024 gives KB; multiply by 1000 to express the wanted elapsed time in milliseconds
        wantedElapsed := 1000 * (transferredBytes / 1024) / wantedBandwidth
        if wantedElapsed > realElapsed {
            toSleep := time.Duration(wantedElapsed - realElapsed)
            time.Sleep(toSleep * time.Millisecond)
        }
    }
}
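
The formula is easy to check against the numbers TestTransferThrottling uses below (131072 bytes at an allowed 50 KB/s):

transferredBytes, wantedBandwidth := int64(131072), int64(50)
wantedElapsed := 1000 * (transferredBytes / 1024) / wantedBandwidth
fmt.Println(wantedElapsed) // 2560 ms; a transfer only 100 ms in sleeps for the remaining 2460 ms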

276 common/transfer_test.go Normal file
@@ -0,0 +1,276 @@
package common

import (
    "errors"
    "io/ioutil"
    "os"
    "path/filepath"
    "testing"
    "time"

    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"

    "github.com/drakkan/sftpgo/dataprovider"
    "github.com/drakkan/sftpgo/kms"
    "github.com/drakkan/sftpgo/vfs"
)

func TestTransferUpdateQuota(t *testing.T) {
    conn := NewBaseConnection("", ProtocolSFTP, dataprovider.User{}, nil)
    transfer := BaseTransfer{
        Connection:    conn,
        transferType:  TransferUpload,
        BytesReceived: 123,
        Fs:            vfs.NewOsFs("", os.TempDir(), nil),
    }
    errFake := errors.New("fake error")
    transfer.TransferError(errFake)
    assert.False(t, transfer.updateQuota(1, 0))
    err := transfer.Close()
    if assert.Error(t, err) {
        assert.EqualError(t, err, errFake.Error())
    }
    mappedPath := filepath.Join(os.TempDir(), "vdir")
    vdirPath := "/vdir"
    conn.User.VirtualFolders = append(conn.User.VirtualFolders, vfs.VirtualFolder{
        BaseVirtualFolder: vfs.BaseVirtualFolder{
            MappedPath: mappedPath,
        },
        VirtualPath: vdirPath,
        QuotaFiles:  -1,
        QuotaSize:   -1,
    })
    transfer.ErrTransfer = nil
    transfer.BytesReceived = 1
    transfer.requestPath = "/vdir/file"
    assert.True(t, transfer.updateQuota(1, 0))
    err = transfer.Close()
    assert.NoError(t, err)
}

func TestTransferThrottling(t *testing.T) {
    u := dataprovider.User{
        Username:          "test",
        UploadBandwidth:   50,
        DownloadBandwidth: 40,
    }
    fs := vfs.NewOsFs("", os.TempDir(), nil)
    testFileSize := int64(131072)
    wantedUploadElapsed := 1000 * (testFileSize / 1024) / u.UploadBandwidth
    wantedDownloadElapsed := 1000 * (testFileSize / 1024) / u.DownloadBandwidth
    // some tolerance
    wantedUploadElapsed -= wantedDownloadElapsed / 10
    wantedDownloadElapsed -= wantedDownloadElapsed / 10
    conn := NewBaseConnection("id", ProtocolSCP, u, nil)
    transfer := NewBaseTransfer(nil, conn, nil, "", "", TransferUpload, 0, 0, 0, true, fs)
    transfer.BytesReceived = testFileSize
    transfer.Connection.UpdateLastActivity()
    startTime := transfer.Connection.GetLastActivity()
    transfer.HandleThrottle()
    elapsed := time.Since(startTime).Nanoseconds() / 1000000
    assert.GreaterOrEqual(t, elapsed, wantedUploadElapsed, "upload bandwidth throttling not respected")
    err := transfer.Close()
    assert.NoError(t, err)

    transfer = NewBaseTransfer(nil, conn, nil, "", "", TransferDownload, 0, 0, 0, true, fs)
    transfer.BytesSent = testFileSize
    transfer.Connection.UpdateLastActivity()
    startTime = transfer.Connection.GetLastActivity()

    transfer.HandleThrottle()
    elapsed = time.Since(startTime).Nanoseconds() / 1000000
    assert.GreaterOrEqual(t, elapsed, wantedDownloadElapsed, "download bandwidth throttling not respected")
    err = transfer.Close()
    assert.NoError(t, err)
}

func TestRealPath(t *testing.T) {
    testFile := filepath.Join(os.TempDir(), "afile.txt")
    fs := vfs.NewOsFs("123", os.TempDir(), nil)
    u := dataprovider.User{
        Username: "user",
        HomeDir:  os.TempDir(),
    }
    u.Permissions = make(map[string][]string)
    u.Permissions["/"] = []string{dataprovider.PermAny}
    file, err := os.Create(testFile)
    require.NoError(t, err)
    conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, u, fs)
    transfer := NewBaseTransfer(file, conn, nil, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
    rPath := transfer.GetRealFsPath(testFile)
    assert.Equal(t, testFile, rPath)
    rPath = conn.getRealFsPath(testFile)
    assert.Equal(t, testFile, rPath)
    err = transfer.Close()
    assert.NoError(t, err)
    err = file.Close()
    assert.NoError(t, err)
    transfer.File = nil
    rPath = transfer.GetRealFsPath(testFile)
    assert.Equal(t, testFile, rPath)
    rPath = transfer.GetRealFsPath("")
    assert.Empty(t, rPath)
    err = os.Remove(testFile)
    assert.NoError(t, err)
    assert.Len(t, conn.GetTransfers(), 0)
}

func TestTruncate(t *testing.T) {
    testFile := filepath.Join(os.TempDir(), "transfer_test_file")
    fs := vfs.NewOsFs("123", os.TempDir(), nil)
    u := dataprovider.User{
        Username: "user",
        HomeDir:  os.TempDir(),
    }
    u.Permissions = make(map[string][]string)
    u.Permissions["/"] = []string{dataprovider.PermAny}
    file, err := os.Create(testFile)
    if !assert.NoError(t, err) {
        assert.FailNow(t, "unable to open test file")
    }
    _, err = file.Write([]byte("hello"))
    assert.NoError(t, err)
    conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, u, fs)
    transfer := NewBaseTransfer(file, conn, nil, testFile, "/transfer_test_file", TransferUpload, 0, 5, 100, false, fs)

    err = conn.SetStat(testFile, "/transfer_test_file", &StatAttributes{
        Size:  2,
        Flags: StatAttrSize,
    })
    assert.NoError(t, err)
    assert.Equal(t, int64(103), transfer.MaxWriteSize)
    err = transfer.Close()
    assert.NoError(t, err)
    err = file.Close()
    assert.NoError(t, err)
    fi, err := os.Stat(testFile)
    if assert.NoError(t, err) {
        assert.Equal(t, int64(2), fi.Size())
    }

    transfer = NewBaseTransfer(file, conn, nil, testFile, "/transfer_test_file", TransferUpload, 0, 0, 100, true, fs)
    // file.Stat will fail on a closed file
    err = conn.SetStat(testFile, "/transfer_test_file", &StatAttributes{
        Size:  2,
        Flags: StatAttrSize,
    })
    assert.Error(t, err)
    err = transfer.Close()
    assert.NoError(t, err)

    transfer = NewBaseTransfer(nil, conn, nil, testFile, "", TransferUpload, 0, 0, 0, true, fs)
    _, err = transfer.Truncate("mismatch", 0)
    assert.EqualError(t, err, errTransferMismatch.Error())
    _, err = transfer.Truncate(testFile, 0)
    assert.NoError(t, err)
    _, err = transfer.Truncate(testFile, 1)
    assert.EqualError(t, err, ErrOpUnsupported.Error())

    err = transfer.Close()
    assert.NoError(t, err)

    err = os.Remove(testFile)
    assert.NoError(t, err)

    assert.Len(t, conn.GetTransfers(), 0)
}

func TestTransferErrors(t *testing.T) {
    isCancelled := false
    cancelFn := func() {
        isCancelled = true
    }
    testFile := filepath.Join(os.TempDir(), "transfer_test_file")
    fs := vfs.NewOsFs("id", os.TempDir(), nil)
    u := dataprovider.User{
        Username: "test",
        HomeDir:  os.TempDir(),
    }
    err := ioutil.WriteFile(testFile, []byte("test data"), os.ModePerm)
    assert.NoError(t, err)
    file, err := os.Open(testFile)
    if !assert.NoError(t, err) {
        assert.FailNow(t, "unable to open test file")
    }
    conn := NewBaseConnection("id", ProtocolSFTP, u, fs)
    transfer := NewBaseTransfer(file, conn, nil, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
    assert.Nil(t, transfer.cancelFn)
    assert.Equal(t, testFile, transfer.GetFsPath())
    transfer.SetCancelFn(cancelFn)
    errFake := errors.New("err fake")
    transfer.BytesReceived = 9
    transfer.TransferError(ErrQuotaExceeded)
    assert.True(t, isCancelled)
    transfer.TransferError(errFake)
    assert.Error(t, transfer.ErrTransfer, ErrQuotaExceeded.Error())
    // the file is closed from the embedding struct before calling Close
    err = file.Close()
    assert.NoError(t, err)
    err = transfer.Close()
    if assert.Error(t, err) {
        assert.Error(t, err, ErrQuotaExceeded.Error())
    }
    assert.NoFileExists(t, testFile)

    err = ioutil.WriteFile(testFile, []byte("test data"), os.ModePerm)
    assert.NoError(t, err)
    file, err = os.Open(testFile)
    if !assert.NoError(t, err) {
        assert.FailNow(t, "unable to open test file")
    }
    fsPath := filepath.Join(os.TempDir(), "test_file")
    transfer = NewBaseTransfer(file, conn, nil, fsPath, "/test_file", TransferUpload, 0, 0, 0, true, fs)
    transfer.BytesReceived = 9
    transfer.TransferError(errFake)
    assert.Error(t, transfer.ErrTransfer, errFake.Error())
    // the file is closed from the embedding struct before calling Close
    err = file.Close()
    assert.NoError(t, err)
    err = transfer.Close()
    if assert.Error(t, err) {
        assert.Error(t, err, errFake.Error())
    }
    assert.NoFileExists(t, testFile)

    err = ioutil.WriteFile(testFile, []byte("test data"), os.ModePerm)
    assert.NoError(t, err)
    file, err = os.Open(testFile)
    if !assert.NoError(t, err) {
        assert.FailNow(t, "unable to open test file")
    }
    transfer = NewBaseTransfer(file, conn, nil, fsPath, "/test_file", TransferUpload, 0, 0, 0, true, fs)
    transfer.BytesReceived = 9
    // the file is closed from the embedding struct before calling Close
    err = file.Close()
    assert.NoError(t, err)
    err = transfer.Close()
    assert.NoError(t, err)
    assert.NoFileExists(t, testFile)
    assert.FileExists(t, fsPath)
    err = os.Remove(fsPath)
    assert.NoError(t, err)

    assert.Len(t, conn.GetTransfers(), 0)
}

func TestRemovePartialCryptoFile(t *testing.T) {
    testFile := filepath.Join(os.TempDir(), "transfer_test_file")
    fs, err := vfs.NewCryptFs("id", os.TempDir(), vfs.CryptFsConfig{Passphrase: kms.NewPlainSecret("secret")})
    require.NoError(t, err)
    u := dataprovider.User{
        Username: "test",
        HomeDir:  os.TempDir(),
    }
    conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, u, fs)
    transfer := NewBaseTransfer(nil, conn, nil, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
    transfer.ErrTransfer = errors.New("test error")
    _, err = transfer.getUploadFileSize()
    assert.Error(t, err)
    err = ioutil.WriteFile(testFile, []byte("test data"), os.ModePerm)
    assert.NoError(t, err)
    size, err := transfer.getUploadFileSize()
    assert.NoError(t, err)
    assert.Equal(t, int64(9), size)
    assert.NoFileExists(t, testFile)
}

918 config/config.go (File diff suppressed because it is too large)

config/config_test.go
@@ -8,92 +8,784 @@ import (
    "strings"
    "testing"

    "github.com/spf13/viper"
    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"

    "github.com/drakkan/sftpgo/common"
    "github.com/drakkan/sftpgo/config"
    "github.com/drakkan/sftpgo/dataprovider"
    "github.com/drakkan/sftpgo/ftpd"
    "github.com/drakkan/sftpgo/httpclient"
    "github.com/drakkan/sftpgo/httpd"
    "github.com/drakkan/sftpgo/sftpd"
    "github.com/drakkan/sftpgo/utils"
    "github.com/drakkan/sftpgo/webdavd"
)

const (
    tempConfigName = "temp"
)

func reset() {
    viper.Reset()
    config.Init()
}

func TestLoadConfigTest(t *testing.T) {
    reset()

    configDir := ".."
    err := config.LoadConfig(configDir, "")
    if err != nil {
        t.Errorf("error loading config")
    }
    emptyHTTPDConf := httpd.Conf{}
    if config.GetHTTPDConfig() == emptyHTTPDConf {
        t.Errorf("error loading httpd conf")
    }
    emptyProviderConf := dataprovider.Config{}
    if config.GetProviderConf() == emptyProviderConf {
        t.Errorf("error loading provider conf")
    }
    emptySFTPDConf := sftpd.Configuration{}
    if config.GetSFTPDConfig().BindPort == emptySFTPDConf.BindPort {
        t.Errorf("error loading SFTPD conf")
    }
    assert.NoError(t, err)
    assert.NotEqual(t, httpd.Conf{}, config.GetHTTPDConfig())
    assert.NotEqual(t, dataprovider.Config{}, config.GetProviderConf())
    assert.NotEqual(t, sftpd.Configuration{}, config.GetSFTPDConfig())
    assert.NotEqual(t, httpclient.Config{}, config.GetHTTPConfig())
    confName := tempConfigName + ".json"
    configFilePath := filepath.Join(configDir, confName)
    err = config.LoadConfig(configDir, tempConfigName)
    if err == nil {
        t.Errorf("loading a non existent config file must fail")
    }
    ioutil.WriteFile(configFilePath, []byte("{invalid json}"), 0666)
    err = config.LoadConfig(configDir, tempConfigName)
    if err == nil {
        t.Errorf("loading an invalid config file must fail")
    }
    ioutil.WriteFile(configFilePath, []byte("{\"sftpd\": {\"bind_port\": \"a\"}}"), 0666)
    err = config.LoadConfig(configDir, tempConfigName)
    if err == nil {
        t.Errorf("loading a config with an invalid bind_port must fail")
    }
    os.Remove(configFilePath)
    err = config.LoadConfig(configDir, confName)
    assert.NoError(t, err)
    err = ioutil.WriteFile(configFilePath, []byte("{invalid json}"), os.ModePerm)
    assert.NoError(t, err)
    err = config.LoadConfig(configDir, confName)
    assert.NoError(t, err)
    err = ioutil.WriteFile(configFilePath, []byte("{\"sftpd\": {\"bind_port\": \"a\"}}"), os.ModePerm)
    assert.NoError(t, err)
    err = config.LoadConfig(configDir, confName)
    assert.Error(t, err)
    err = os.Remove(configFilePath)
    assert.NoError(t, err)
}

func TestLoadConfigFileNotFound(t *testing.T) {
    reset()

    viper.SetConfigName("configfile")
    err := config.LoadConfig(os.TempDir(), "")
    assert.NoError(t, err)
}

func TestEmptyBanner(t *testing.T) {
    reset()

    configDir := ".."
    confName := tempConfigName + ".json"
    configFilePath := filepath.Join(configDir, confName)
    config.LoadConfig(configDir, "")
    err := config.LoadConfig(configDir, "")
    assert.NoError(t, err)
    sftpdConf := config.GetSFTPDConfig()
    sftpdConf.Banner = " "
    c := make(map[string]sftpd.Configuration)
    c["sftpd"] = sftpdConf
    jsonConf, _ := json.Marshal(c)
    err := ioutil.WriteFile(configFilePath, jsonConf, 0666)
    if err != nil {
        t.Errorf("error saving temporary configuration")
    }
    config.LoadConfig(configDir, tempConfigName)
    err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
    assert.NoError(t, err)
    err = config.LoadConfig(configDir, confName)
    assert.NoError(t, err)
    sftpdConf = config.GetSFTPDConfig()
    if strings.TrimSpace(sftpdConf.Banner) == "" {
        t.Errorf("SFTPD banner cannot be empty")
    }
    os.Remove(configFilePath)
    assert.NotEmpty(t, strings.TrimSpace(sftpdConf.Banner))
    err = os.Remove(configFilePath)
    assert.NoError(t, err)

    ftpdConf := config.GetFTPDConfig()
    ftpdConf.Banner = " "
    c1 := make(map[string]ftpd.Configuration)
    c1["ftpd"] = ftpdConf
    jsonConf, _ = json.Marshal(c1)
    err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
    assert.NoError(t, err)
    err = config.LoadConfig(configDir, confName)
    assert.NoError(t, err)
    ftpdConf = config.GetFTPDConfig()
    assert.NotEmpty(t, strings.TrimSpace(ftpdConf.Banner))
    err = os.Remove(configFilePath)
    assert.NoError(t, err)
}

func TestInvalidUploadMode(t *testing.T) {
    reset()

    configDir := ".."
    confName := tempConfigName + ".json"
    configFilePath := filepath.Join(configDir, confName)
    config.LoadConfig(configDir, "")
    err := config.LoadConfig(configDir, "")
    assert.NoError(t, err)
    commonConf := config.GetCommonConfig()
    commonConf.UploadMode = 10
    c := make(map[string]common.Configuration)
    c["common"] = commonConf
    jsonConf, err := json.Marshal(c)
    assert.NoError(t, err)
    err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
    assert.NoError(t, err)
    err = config.LoadConfig(configDir, confName)
    assert.NoError(t, err)
    assert.Equal(t, 0, config.GetCommonConfig().UploadMode)
    err = os.Remove(configFilePath)
    assert.NoError(t, err)
}

func TestInvalidExternalAuthScope(t *testing.T) {
    reset()

    configDir := ".."
    confName := tempConfigName + ".json"
    configFilePath := filepath.Join(configDir, confName)
    err := config.LoadConfig(configDir, "")
    assert.NoError(t, err)
    providerConf := config.GetProviderConf()
    providerConf.ExternalAuthScope = 10
    c := make(map[string]dataprovider.Config)
    c["data_provider"] = providerConf
    jsonConf, err := json.Marshal(c)
    assert.NoError(t, err)
    err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
    assert.NoError(t, err)
    err = config.LoadConfig(configDir, confName)
    assert.NoError(t, err)
    assert.Equal(t, 0, config.GetProviderConf().ExternalAuthScope)
    err = os.Remove(configFilePath)
    assert.NoError(t, err)
}

func TestInvalidCredentialsPath(t *testing.T) {
    reset()

    configDir := ".."
    confName := tempConfigName + ".json"
    configFilePath := filepath.Join(configDir, confName)
    err := config.LoadConfig(configDir, "")
    assert.NoError(t, err)
    providerConf := config.GetProviderConf()
    providerConf.CredentialsPath = ""
    c := make(map[string]dataprovider.Config)
    c["data_provider"] = providerConf
    jsonConf, err := json.Marshal(c)
    assert.NoError(t, err)
    err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
    assert.NoError(t, err)
    err = config.LoadConfig(configDir, confName)
    assert.NoError(t, err)
    assert.Equal(t, "credentials", config.GetProviderConf().CredentialsPath)
    err = os.Remove(configFilePath)
    assert.NoError(t, err)
}

func TestInvalidProxyProtocol(t *testing.T) {
    reset()

    configDir := ".."
    confName := tempConfigName + ".json"
    configFilePath := filepath.Join(configDir, confName)
    err := config.LoadConfig(configDir, "")
    assert.NoError(t, err)
    commonConf := config.GetCommonConfig()
    commonConf.ProxyProtocol = 10
    c := make(map[string]common.Configuration)
    c["common"] = commonConf
    jsonConf, err := json.Marshal(c)
    assert.NoError(t, err)
    err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
    assert.NoError(t, err)
    err = config.LoadConfig(configDir, confName)
    assert.NoError(t, err)
    assert.Equal(t, 0, config.GetCommonConfig().ProxyProtocol)
    err = os.Remove(configFilePath)
    assert.NoError(t, err)
}

func TestInvalidUsersBaseDir(t *testing.T) {
    reset()

    configDir := ".."
    confName := tempConfigName + ".json"
    configFilePath := filepath.Join(configDir, confName)
    err := config.LoadConfig(configDir, "")
    assert.NoError(t, err)
    providerConf := config.GetProviderConf()
    providerConf.UsersBaseDir = "."
    c := make(map[string]dataprovider.Config)
    c["data_provider"] = providerConf
    jsonConf, err := json.Marshal(c)
    assert.NoError(t, err)
    err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
    assert.NoError(t, err)
    err = config.LoadConfig(configDir, confName)
    assert.NoError(t, err)
    assert.Empty(t, config.GetProviderConf().UsersBaseDir)
    err = os.Remove(configFilePath)
    assert.NoError(t, err)
}

func TestCommonParamsCompatibility(t *testing.T) {
    reset()

    configDir := ".."
    confName := tempConfigName + ".json"
    configFilePath := filepath.Join(configDir, confName)
    err := config.LoadConfig(configDir, "")
    assert.NoError(t, err)
    sftpdConf := config.GetSFTPDConfig()
    sftpdConf.UploadMode = 10
    sftpdConf.IdleTimeout = 21 //nolint:staticcheck
    sftpdConf.Actions.Hook = "http://hook"
    sftpdConf.Actions.ExecuteOn = []string{"upload"}
    sftpdConf.SetstatMode = 1 //nolint:staticcheck
    sftpdConf.UploadMode = common.UploadModeAtomicWithResume //nolint:staticcheck
    sftpdConf.ProxyProtocol = 1 //nolint:staticcheck
    sftpdConf.ProxyAllowed = []string{"192.168.1.1"} //nolint:staticcheck
    c := make(map[string]sftpd.Configuration)
    c["sftpd"] = sftpdConf
    jsonConf, _ := json.Marshal(c)
    err := ioutil.WriteFile(configFilePath, jsonConf, 0666)
    if err != nil {
        t.Errorf("error saving temporary configuration")
    }
    err = config.LoadConfig(configDir, tempConfigName)
    if err == nil {
        t.Errorf("Loading configuration with invalid upload_mode must fail")
    }
    os.Remove(configFilePath)
    jsonConf, err := json.Marshal(c)
    assert.NoError(t, err)
    err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
    assert.NoError(t, err)
    err = config.LoadConfig(configDir, confName)
    assert.NoError(t, err)
    commonConf := config.GetCommonConfig()
    assert.Equal(t, 21, commonConf.IdleTimeout)
    assert.Equal(t, "http://hook", commonConf.Actions.Hook)
    assert.Len(t, commonConf.Actions.ExecuteOn, 1)
    assert.True(t, utils.IsStringInSlice("upload", commonConf.Actions.ExecuteOn))
    assert.Equal(t, 1, commonConf.SetstatMode)
    assert.Equal(t, 1, commonConf.ProxyProtocol)
    assert.Len(t, commonConf.ProxyAllowed, 1)
    assert.True(t, utils.IsStringInSlice("192.168.1.1", commonConf.ProxyAllowed))
    err = os.Remove(configFilePath)
    assert.NoError(t, err)
}

func TestHostKeyCompatibility(t *testing.T) {
    reset()

    configDir := ".."
    confName := tempConfigName + ".json"
    configFilePath := filepath.Join(configDir, confName)
    err := config.LoadConfig(configDir, "")
    assert.NoError(t, err)
    sftpdConf := config.GetSFTPDConfig()
    sftpdConf.Keys = []sftpd.Key{ //nolint:staticcheck
        {
            PrivateKey: "rsa",
        },
        {
            PrivateKey: "ecdsa",
        },
    }
    c := make(map[string]sftpd.Configuration)
    c["sftpd"] = sftpdConf
    jsonConf, err := json.Marshal(c)
    assert.NoError(t, err)
    err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
    assert.NoError(t, err)
    err = config.LoadConfig(configDir, confName)
    assert.NoError(t, err)
    sftpdConf = config.GetSFTPDConfig()
    assert.Equal(t, 2, len(sftpdConf.HostKeys))
    assert.True(t, utils.IsStringInSlice("rsa", sftpdConf.HostKeys))
    assert.True(t, utils.IsStringInSlice("ecdsa", sftpdConf.HostKeys))
    err = os.Remove(configFilePath)
    assert.NoError(t, err)
}

func TestSetGetConfig(t *testing.T) {
    reset()

    sftpdConf := config.GetSFTPDConfig()
    sftpdConf.MaxAuthTries = 10
    config.SetSFTPDConfig(sftpdConf)
    assert.Equal(t, sftpdConf.MaxAuthTries, config.GetSFTPDConfig().MaxAuthTries)
    dataProviderConf := config.GetProviderConf()
    dataProviderConf.Host = "test host"
    config.SetProviderConf(dataProviderConf)
    assert.Equal(t, dataProviderConf.Host, config.GetProviderConf().Host)
    httpdConf := config.GetHTTPDConfig()
    httpdConf.Bindings = append(httpdConf.Bindings, httpd.Binding{Address: "0.0.0.0"})
    config.SetHTTPDConfig(httpdConf)
    assert.Equal(t, httpdConf.Bindings[0].Address, config.GetHTTPDConfig().Bindings[0].Address)
    commonConf := config.GetCommonConfig()
    commonConf.IdleTimeout = 10
    config.SetCommonConfig(commonConf)
    assert.Equal(t, commonConf.IdleTimeout, config.GetCommonConfig().IdleTimeout)
    ftpdConf := config.GetFTPDConfig()
    ftpdConf.CertificateFile = "cert"
    ftpdConf.CertificateKeyFile = "key"
    config.SetFTPDConfig(ftpdConf)
    assert.Equal(t, ftpdConf.CertificateFile, config.GetFTPDConfig().CertificateFile)
    assert.Equal(t, ftpdConf.CertificateKeyFile, config.GetFTPDConfig().CertificateKeyFile)
    webDavConf := config.GetWebDAVDConfig()
    webDavConf.CertificateFile = "dav_cert"
    webDavConf.CertificateKeyFile = "dav_key"
    config.SetWebDAVDConfig(webDavConf)
    assert.Equal(t, webDavConf.CertificateFile, config.GetWebDAVDConfig().CertificateFile)
    assert.Equal(t, webDavConf.CertificateKeyFile, config.GetWebDAVDConfig().CertificateKeyFile)
    kmsConf := config.GetKMSConfig()
    kmsConf.Secrets.MasterKeyPath = "apath"
    kmsConf.Secrets.URL = "aurl"
    config.SetKMSConfig(kmsConf)
    assert.Equal(t, kmsConf.Secrets.MasterKeyPath, config.GetKMSConfig().Secrets.MasterKeyPath)
    assert.Equal(t, kmsConf.Secrets.URL, config.GetKMSConfig().Secrets.URL)
    telemetryConf := config.GetTelemetryConfig()
    telemetryConf.BindPort = 10001
    telemetryConf.BindAddress = "0.0.0.0"
    config.SetTelemetryConfig(telemetryConf)
    assert.Equal(t, telemetryConf.BindPort, config.GetTelemetryConfig().BindPort)
    assert.Equal(t, telemetryConf.BindAddress, config.GetTelemetryConfig().BindAddress)
}

func TestServiceToStart(t *testing.T) {
    reset()

    configDir := ".."
    err := config.LoadConfig(configDir, "")
    assert.NoError(t, err)
    assert.True(t, config.HasServicesToStart())
    sftpdConf := config.GetSFTPDConfig()
    sftpdConf.Bindings[0].Port = 0
    config.SetSFTPDConfig(sftpdConf)
    assert.False(t, config.HasServicesToStart())
    ftpdConf := config.GetFTPDConfig()
    ftpdConf.Bindings[0].Port = 2121
    config.SetFTPDConfig(ftpdConf)
    assert.True(t, config.HasServicesToStart())
    ftpdConf.Bindings[0].Port = 0
    config.SetFTPDConfig(ftpdConf)
    webdavdConf := config.GetWebDAVDConfig()
    webdavdConf.Bindings[0].Port = 9000
    config.SetWebDAVDConfig(webdavdConf)
    assert.True(t, config.HasServicesToStart())
    webdavdConf.Bindings[0].Port = 0
    config.SetWebDAVDConfig(webdavdConf)
    assert.False(t, config.HasServicesToStart())
    sftpdConf.Bindings[0].Port = 2022
    config.SetSFTPDConfig(sftpdConf)
    assert.True(t, config.HasServicesToStart())
}

func TestSFTPDBindingsCompatibility(t *testing.T) {
    reset()

    configDir := ".."
    confName := tempConfigName + ".json"
    configFilePath := filepath.Join(configDir, confName)
    err := config.LoadConfig(configDir, "")
    assert.NoError(t, err)
    sftpdConf := config.GetSFTPDConfig()
    require.Len(t, sftpdConf.Bindings, 1)
    sftpdConf.Bindings = nil
    sftpdConf.BindPort = 9022 //nolint:staticcheck
    sftpdConf.BindAddress = "127.0.0.1" //nolint:staticcheck
    c := make(map[string]sftpd.Configuration)
    c["sftpd"] = sftpdConf
    jsonConf, err := json.Marshal(c)
    assert.NoError(t, err)
    err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
    assert.NoError(t, err)
    err = config.LoadConfig(configDir, confName)
    assert.NoError(t, err)
    sftpdConf = config.GetSFTPDConfig()
    // the default binding should be replaced with the deprecated configuration
    require.Len(t, sftpdConf.Bindings, 1)
    require.Equal(t, 9022, sftpdConf.Bindings[0].Port)
    require.Equal(t, "127.0.0.1", sftpdConf.Bindings[0].Address)
    require.True(t, sftpdConf.Bindings[0].ApplyProxyConfig)

    err = config.LoadConfig(configDir, confName)
    assert.NoError(t, err)
    sftpdConf = config.GetSFTPDConfig()
    require.Len(t, sftpdConf.Bindings, 1)
    require.Equal(t, 9022, sftpdConf.Bindings[0].Port)
    require.Equal(t, "127.0.0.1", sftpdConf.Bindings[0].Address)
    require.True(t, sftpdConf.Bindings[0].ApplyProxyConfig)
    err = os.Remove(configFilePath)
    assert.NoError(t, err)
}
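The compatibility tests that follow (FTPD, WebDAVD, HTTPD) assert the same migration just verified for SFTPD: when a saved configuration carries only the deprecated flat settings, the loader promotes them to the first entry of the new bindings slice. Below is a minimal, self-contained sketch of that promotion using stand-in types; only the field names exercised by the tests come from the source, everything else is an assumption about how such a shim could look, not the actual loader code.

package main

import "fmt"

// Minimal stand-ins for the configuration types used by the tests above.
type Binding struct {
    Address          string
    Port             int
    ApplyProxyConfig bool
}

type Configuration struct {
    Bindings    []Binding
    BindPort    int    // deprecated flat setting
    BindAddress string // deprecated flat setting
}

// promoteDeprecatedBinding sketches the migration the tests assert: when no
// binding survives loading, the deprecated flat settings become binding 0 and
// proxy support stays enabled, matching require.True(..., ApplyProxyConfig).
func promoteDeprecatedBinding(c *Configuration) {
    if len(c.Bindings) > 0 {
        return // explicit bindings win over the deprecated settings
    }
    if c.BindPort > 0 {
        c.Bindings = []Binding{{Address: c.BindAddress, Port: c.BindPort, ApplyProxyConfig: true}}
    }
}

func main() {
    c := Configuration{BindPort: 9022, BindAddress: "127.0.0.1"}
    promoteDeprecatedBinding(&c)
    fmt.Printf("%+v\n", c.Bindings[0]) // {Address:127.0.0.1 Port:9022 ApplyProxyConfig:true}
}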

func TestFTPDBindingsCompatibility(t *testing.T) {
    reset()

    configDir := ".."
    confName := tempConfigName + ".json"
    configFilePath := filepath.Join(configDir, confName)
    err := config.LoadConfig(configDir, "")
    assert.NoError(t, err)
    ftpdConf := config.GetFTPDConfig()
    require.Len(t, ftpdConf.Bindings, 1)
    ftpdConf.Bindings = nil
    ftpdConf.BindPort = 9022              //nolint:staticcheck
    ftpdConf.BindAddress = "127.1.0.1"    //nolint:staticcheck
    ftpdConf.ForcePassiveIP = "127.1.1.1" //nolint:staticcheck
    ftpdConf.TLSMode = 2                  //nolint:staticcheck
    c := make(map[string]ftpd.Configuration)
    c["ftpd"] = ftpdConf
    jsonConf, err := json.Marshal(c)
    assert.NoError(t, err)
    err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
    assert.NoError(t, err)
    err = config.LoadConfig(configDir, confName)
    assert.NoError(t, err)
    ftpdConf = config.GetFTPDConfig()
    // the default binding should be replaced with the deprecated configuration
    require.Len(t, ftpdConf.Bindings, 1)
    require.Equal(t, 9022, ftpdConf.Bindings[0].Port)
    require.Equal(t, "127.1.0.1", ftpdConf.Bindings[0].Address)
    require.True(t, ftpdConf.Bindings[0].ApplyProxyConfig)
    require.Equal(t, 2, ftpdConf.Bindings[0].TLSMode)
    require.Equal(t, "127.1.1.1", ftpdConf.Bindings[0].ForcePassiveIP)
    err = os.Remove(configFilePath)
    assert.NoError(t, err)
}

func TestWebDAVDBindingsCompatibility(t *testing.T) {
    reset()

    configDir := ".."
    confName := tempConfigName + ".json"
    configFilePath := filepath.Join(configDir, confName)
    err := config.LoadConfig(configDir, "")
    assert.NoError(t, err)
    webdavConf := config.GetWebDAVDConfig()
    require.Len(t, webdavConf.Bindings, 1)
    webdavConf.Bindings = nil
    webdavConf.BindPort = 9080           //nolint:staticcheck
    webdavConf.BindAddress = "127.0.0.1" //nolint:staticcheck
    c := make(map[string]webdavd.Configuration)
    c["webdavd"] = webdavConf
    jsonConf, err := json.Marshal(c)
    assert.NoError(t, err)
    err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
    assert.NoError(t, err)
    err = config.LoadConfig(configDir, confName)
    assert.NoError(t, err)
    webdavConf = config.GetWebDAVDConfig()
    // the default binding should be replaced with the deprecated configuration
    require.Len(t, webdavConf.Bindings, 1)
    require.Equal(t, 9080, webdavConf.Bindings[0].Port)
    require.Equal(t, "127.0.0.1", webdavConf.Bindings[0].Address)
    require.False(t, webdavConf.Bindings[0].EnableHTTPS)
    err = os.Remove(configFilePath)
    assert.NoError(t, err)
}

func TestHTTPDBindingsCompatibility(t *testing.T) {
    reset()

    configDir := ".."
    confName := tempConfigName + ".json"
    configFilePath := filepath.Join(configDir, confName)
    err := config.LoadConfig(configDir, "")
    assert.NoError(t, err)
    httpdConf := config.GetHTTPDConfig()
    require.Len(t, httpdConf.Bindings, 1)
    httpdConf.Bindings = nil
    httpdConf.BindPort = 9080           //nolint:staticcheck
    httpdConf.BindAddress = "127.1.1.1" //nolint:staticcheck
    c := make(map[string]httpd.Conf)
    c["httpd"] = httpdConf
    jsonConf, err := json.Marshal(c)
    assert.NoError(t, err)
    err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
    assert.NoError(t, err)
    err = config.LoadConfig(configDir, confName)
    assert.NoError(t, err)
    httpdConf = config.GetHTTPDConfig()
    // the default binding should be replaced with the deprecated configuration
    require.Len(t, httpdConf.Bindings, 1)
    require.Equal(t, 9080, httpdConf.Bindings[0].Port)
    require.Equal(t, "127.1.1.1", httpdConf.Bindings[0].Address)
    require.False(t, httpdConf.Bindings[0].EnableHTTPS)
    require.True(t, httpdConf.Bindings[0].EnableWebAdmin)
    err = os.Remove(configFilePath)
    assert.NoError(t, err)
}

func TestSFTPDBindingsFromEnv(t *testing.T) {
    reset()

    os.Setenv("SFTPGO_SFTPD__BINDINGS__0__ADDRESS", "127.0.0.1")
    os.Setenv("SFTPGO_SFTPD__BINDINGS__0__PORT", "2200")
    os.Setenv("SFTPGO_SFTPD__BINDINGS__0__APPLY_PROXY_CONFIG", "false")
    os.Setenv("SFTPGO_SFTPD__BINDINGS__3__ADDRESS", "127.0.1.1")
    os.Setenv("SFTPGO_SFTPD__BINDINGS__3__PORT", "2203")
    os.Setenv("SFTPGO_SFTPD__BINDINGS__3__APPLY_PROXY_CONFIG", "1")
    t.Cleanup(func() {
        os.Unsetenv("SFTPGO_SFTPD__BINDINGS__0__ADDRESS")
        os.Unsetenv("SFTPGO_SFTPD__BINDINGS__0__PORT")
        os.Unsetenv("SFTPGO_SFTPD__BINDINGS__0__APPLY_PROXY_CONFIG")
        os.Unsetenv("SFTPGO_SFTPD__BINDINGS__3__ADDRESS")
        os.Unsetenv("SFTPGO_SFTPD__BINDINGS__3__PORT")
        os.Unsetenv("SFTPGO_SFTPD__BINDINGS__3__APPLY_PROXY_CONFIG")
    })

    configDir := ".."
    err := config.LoadConfig(configDir, "")
    assert.NoError(t, err)
    bindings := config.GetSFTPDConfig().Bindings
    require.Len(t, bindings, 2)
    require.Equal(t, 2200, bindings[0].Port)
    require.Equal(t, "127.0.0.1", bindings[0].Address)
    require.False(t, bindings[0].ApplyProxyConfig)
    require.Equal(t, 2203, bindings[1].Port)
    require.Equal(t, "127.0.1.1", bindings[1].Address)
    require.True(t, bindings[1].ApplyProxyConfig)
}

func TestFTPDBindingsFromEnv(t *testing.T) {
    reset()

    os.Setenv("SFTPGO_FTPD__BINDINGS__0__ADDRESS", "127.0.0.1")
    os.Setenv("SFTPGO_FTPD__BINDINGS__0__PORT", "2200")
    os.Setenv("SFTPGO_FTPD__BINDINGS__0__APPLY_PROXY_CONFIG", "f")
    os.Setenv("SFTPGO_FTPD__BINDINGS__0__TLS_MODE", "2")
    os.Setenv("SFTPGO_FTPD__BINDINGS__0__FORCE_PASSIVE_IP", "127.0.1.2")
    os.Setenv("SFTPGO_FTPD__BINDINGS__0__TLS_CIPHER_SUITES", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256")
    os.Setenv("SFTPGO_FTPD__BINDINGS__9__ADDRESS", "127.0.1.1")
    os.Setenv("SFTPGO_FTPD__BINDINGS__9__PORT", "2203")
    os.Setenv("SFTPGO_FTPD__BINDINGS__9__APPLY_PROXY_CONFIG", "t")
    os.Setenv("SFTPGO_FTPD__BINDINGS__9__TLS_MODE", "1")
    os.Setenv("SFTPGO_FTPD__BINDINGS__9__FORCE_PASSIVE_IP", "127.0.1.1")
    os.Setenv("SFTPGO_FTPD__BINDINGS__9__CLIENT_AUTH_TYPE", "1")

    t.Cleanup(func() {
        os.Unsetenv("SFTPGO_FTPD__BINDINGS__0__ADDRESS")
        os.Unsetenv("SFTPGO_FTPD__BINDINGS__0__PORT")
        os.Unsetenv("SFTPGO_FTPD__BINDINGS__0__APPLY_PROXY_CONFIG")
        os.Unsetenv("SFTPGO_FTPD__BINDINGS__0__TLS_MODE")
        os.Unsetenv("SFTPGO_FTPD__BINDINGS__0__FORCE_PASSIVE_IP")
        os.Unsetenv("SFTPGO_FTPD__BINDINGS__0__TLS_CIPHER_SUITES")
        os.Unsetenv("SFTPGO_FTPD__BINDINGS__9__ADDRESS")
        os.Unsetenv("SFTPGO_FTPD__BINDINGS__9__PORT")
        os.Unsetenv("SFTPGO_FTPD__BINDINGS__9__APPLY_PROXY_CONFIG")
        os.Unsetenv("SFTPGO_FTPD__BINDINGS__9__TLS_MODE")
        os.Unsetenv("SFTPGO_FTPD__BINDINGS__9__FORCE_PASSIVE_IP")
        os.Unsetenv("SFTPGO_FTPD__BINDINGS__9__CLIENT_AUTH_TYPE")
    })

    configDir := ".."
    err := config.LoadConfig(configDir, "")
    assert.NoError(t, err)
    bindings := config.GetFTPDConfig().Bindings
    require.Len(t, bindings, 2)
    require.Equal(t, 2200, bindings[0].Port)
    require.Equal(t, "127.0.0.1", bindings[0].Address)
    require.False(t, bindings[0].ApplyProxyConfig)
    require.Equal(t, 2, bindings[0].TLSMode)
    require.Equal(t, "127.0.1.2", bindings[0].ForcePassiveIP)
    require.Equal(t, 0, bindings[0].ClientAuthType)
    require.Len(t, bindings[0].TLSCipherSuites, 2)
    require.Equal(t, "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256", bindings[0].TLSCipherSuites[0])
    require.Equal(t, "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", bindings[0].TLSCipherSuites[1])
    require.Equal(t, 2203, bindings[1].Port)
    require.Equal(t, "127.0.1.1", bindings[1].Address)
    require.True(t, bindings[1].ApplyProxyConfig)
    require.Equal(t, 1, bindings[1].TLSMode)
    require.Equal(t, "127.0.1.1", bindings[1].ForcePassiveIP)
    require.Equal(t, 1, bindings[1].ClientAuthType)
    require.Nil(t, bindings[1].TLSCipherSuites)
}

func TestWebDAVBindingsFromEnv(t *testing.T) {
    reset()

    os.Setenv("SFTPGO_WEBDAVD__BINDINGS__1__ADDRESS", "127.0.0.1")
    os.Setenv("SFTPGO_WEBDAVD__BINDINGS__1__PORT", "8000")
    os.Setenv("SFTPGO_WEBDAVD__BINDINGS__1__ENABLE_HTTPS", "0")
    os.Setenv("SFTPGO_WEBDAVD__BINDINGS__1__TLS_CIPHER_SUITES", "TLS_RSA_WITH_AES_128_CBC_SHA ")
    os.Setenv("SFTPGO_WEBDAVD__BINDINGS__2__ADDRESS", "127.0.1.1")
    os.Setenv("SFTPGO_WEBDAVD__BINDINGS__2__PORT", "9000")
    os.Setenv("SFTPGO_WEBDAVD__BINDINGS__2__ENABLE_HTTPS", "1")
    os.Setenv("SFTPGO_WEBDAVD__BINDINGS__2__CLIENT_AUTH_TYPE", "1")
    t.Cleanup(func() {
        os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__1__ADDRESS")
        os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__1__PORT")
        os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__1__ENABLE_HTTPS")
        os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__1__TLS_CIPHER_SUITES")
        os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__2__ADDRESS")
        os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__2__PORT")
        os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__2__ENABLE_HTTPS")
        os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__2__CLIENT_AUTH_TYPE")
    })

    configDir := ".."
    err := config.LoadConfig(configDir, "")
    assert.NoError(t, err)
    bindings := config.GetWebDAVDConfig().Bindings
    require.Len(t, bindings, 3)
    require.Equal(t, 0, bindings[0].Port)
    require.Empty(t, bindings[0].Address)
    require.False(t, bindings[0].EnableHTTPS)
    require.Len(t, bindings[0].TLSCipherSuites, 0)
    require.Equal(t, 8000, bindings[1].Port)
    require.Equal(t, "127.0.0.1", bindings[1].Address)
    require.False(t, bindings[1].EnableHTTPS)
    require.Equal(t, 0, bindings[1].ClientAuthType)
    require.Len(t, bindings[1].TLSCipherSuites, 1)
    require.Equal(t, "TLS_RSA_WITH_AES_128_CBC_SHA", bindings[1].TLSCipherSuites[0])
    require.Equal(t, 9000, bindings[2].Port)
    require.Equal(t, "127.0.1.1", bindings[2].Address)
    require.True(t, bindings[2].EnableHTTPS)
    require.Equal(t, 1, bindings[2].ClientAuthType)
    require.Nil(t, bindings[2].TLSCipherSuites)
}

func TestHTTPDBindingsFromEnv(t *testing.T) {
    reset()

    sockPath := filepath.Clean(os.TempDir())

    os.Setenv("SFTPGO_HTTPD__BINDINGS__0__ADDRESS", sockPath)
    os.Setenv("SFTPGO_HTTPD__BINDINGS__0__PORT", "0")
    os.Setenv("SFTPGO_HTTPD__BINDINGS__0__TLS_CIPHER_SUITES", " TLS_AES_128_GCM_SHA256")
    os.Setenv("SFTPGO_HTTPD__BINDINGS__1__ADDRESS", "127.0.0.1")
    os.Setenv("SFTPGO_HTTPD__BINDINGS__1__PORT", "8000")
    os.Setenv("SFTPGO_HTTPD__BINDINGS__1__ENABLE_HTTPS", "0")
    os.Setenv("SFTPGO_HTTPD__BINDINGS__1__ENABLE_WEB_ADMIN", "1")
    os.Setenv("SFTPGO_HTTPD__BINDINGS__2__ADDRESS", "127.0.1.1")
    os.Setenv("SFTPGO_HTTPD__BINDINGS__2__PORT", "9000")
    os.Setenv("SFTPGO_HTTPD__BINDINGS__2__ENABLE_WEB_ADMIN", "0")
    os.Setenv("SFTPGO_HTTPD__BINDINGS__2__ENABLE_HTTPS", "1")
    os.Setenv("SFTPGO_HTTPD__BINDINGS__2__CLIENT_AUTH_TYPE", "1")
    os.Setenv("SFTPGO_HTTPD__BINDINGS__2__TLS_CIPHER_SUITES", " TLS_AES_256_GCM_SHA384 , TLS_CHACHA20_POLY1305_SHA256")
    t.Cleanup(func() {
        os.Unsetenv("SFTPGO_HTTPD__BINDINGS__0__ADDRESS")
        os.Unsetenv("SFTPGO_HTTPD__BINDINGS__0__PORT")
        os.Unsetenv("SFTPGO_HTTPD__BINDINGS__0__TLS_CIPHER_SUITES")
        os.Unsetenv("SFTPGO_HTTPD__BINDINGS__1__ADDRESS")
        os.Unsetenv("SFTPGO_HTTPD__BINDINGS__1__PORT")
        os.Unsetenv("SFTPGO_HTTPD__BINDINGS__1__ENABLE_HTTPS")
        os.Unsetenv("SFTPGO_HTTPD__BINDINGS__1__ENABLE_WEB_ADMIN")
        os.Unsetenv("SFTPGO_HTTPD__BINDINGS__2__ADDRESS")
        os.Unsetenv("SFTPGO_HTTPD__BINDINGS__2__PORT")
        os.Unsetenv("SFTPGO_HTTPD__BINDINGS__2__ENABLE_HTTPS")
        os.Unsetenv("SFTPGO_HTTPD__BINDINGS__2__ENABLE_WEB_ADMIN")
        os.Unsetenv("SFTPGO_HTTPD__BINDINGS__2__CLIENT_AUTH_TYPE")
        os.Unsetenv("SFTPGO_HTTPD__BINDINGS__2__TLS_CIPHER_SUITES")
    })

    configDir := ".."
    err := config.LoadConfig(configDir, "")
    assert.NoError(t, err)
    bindings := config.GetHTTPDConfig().Bindings
    require.Len(t, bindings, 3)
    require.Equal(t, 0, bindings[0].Port)
    require.Equal(t, sockPath, bindings[0].Address)
    require.False(t, bindings[0].EnableHTTPS)
    require.True(t, bindings[0].EnableWebAdmin)
    require.Len(t, bindings[0].TLSCipherSuites, 1)
    require.Equal(t, "TLS_AES_128_GCM_SHA256", bindings[0].TLSCipherSuites[0])
    require.Equal(t, 8000, bindings[1].Port)
    require.Equal(t, "127.0.0.1", bindings[1].Address)
    require.False(t, bindings[1].EnableHTTPS)
    require.True(t, bindings[1].EnableWebAdmin)
    require.Nil(t, bindings[1].TLSCipherSuites)

    require.Equal(t, 9000, bindings[2].Port)
    require.Equal(t, "127.0.1.1", bindings[2].Address)
    require.True(t, bindings[2].EnableHTTPS)
    require.False(t, bindings[2].EnableWebAdmin)
    require.Equal(t, 1, bindings[2].ClientAuthType)
    require.Len(t, bindings[2].TLSCipherSuites, 2)
    require.Equal(t, "TLS_AES_256_GCM_SHA384", bindings[2].TLSCipherSuites[0])
    require.Equal(t, "TLS_CHACHA20_POLY1305_SHA256", bindings[2].TLSCipherSuites[1])
}
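Several of the env tests above deliberately feed cipher suite lists with stray whitespace (" TLS_AES_256_GCM_SHA384 , TLS_CHACHA20_POLY1305_SHA256") and assert trimmed entries, which implies a split-and-trim step somewhere in config loading. A minimal sketch of such a step; the helper name is hypothetical and not taken from the source:

package main

import (
    "fmt"
    "strings"
)

// splitCipherSuites splits a comma separated cipher suite list and trims the
// whitespace around each entry, dropping empty items.
func splitCipherSuites(v string) []string {
    var suites []string
    for _, s := range strings.Split(v, ",") {
        if s = strings.TrimSpace(s); s != "" {
            suites = append(suites, s)
        }
    }
    return suites
}

func main() {
    fmt.Println(splitCipherSuites(" TLS_AES_256_GCM_SHA384 , TLS_CHACHA20_POLY1305_SHA256"))
    // [TLS_AES_256_GCM_SHA384 TLS_CHACHA20_POLY1305_SHA256]
}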

func TestHTTPClientCertificatesFromEnv(t *testing.T) {
    reset()

    configDir := ".."
    confName := tempConfigName + ".json"
    configFilePath := filepath.Join(configDir, confName)
    err := config.LoadConfig(configDir, "")
    assert.NoError(t, err)
    httpConf := config.GetHTTPConfig()
    httpConf.Certificates = append(httpConf.Certificates, httpclient.TLSKeyPair{
        Cert: "cert",
        Key:  "key",
    })
    c := make(map[string]httpclient.Config)
    c["http"] = httpConf
    jsonConf, err := json.Marshal(c)
    require.NoError(t, err)
    err = ioutil.WriteFile(configFilePath, jsonConf, os.ModePerm)
    require.NoError(t, err)
    err = config.LoadConfig(configDir, confName)
    require.NoError(t, err)
    require.Len(t, config.GetHTTPConfig().Certificates, 1)
    require.Equal(t, "cert", config.GetHTTPConfig().Certificates[0].Cert)
    require.Equal(t, "key", config.GetHTTPConfig().Certificates[0].Key)

    os.Setenv("SFTPGO_HTTP__CERTIFICATES__0__CERT", "cert0")
    os.Setenv("SFTPGO_HTTP__CERTIFICATES__0__KEY", "key0")
    os.Setenv("SFTPGO_HTTP__CERTIFICATES__8__CERT", "cert8")
    os.Setenv("SFTPGO_HTTP__CERTIFICATES__9__CERT", "cert9")
    os.Setenv("SFTPGO_HTTP__CERTIFICATES__9__KEY", "key9")

    t.Cleanup(func() {
        os.Unsetenv("SFTPGO_HTTP__CERTIFICATES__0__CERT")
        os.Unsetenv("SFTPGO_HTTP__CERTIFICATES__0__KEY")
        os.Unsetenv("SFTPGO_HTTP__CERTIFICATES__8__CERT")
        os.Unsetenv("SFTPGO_HTTP__CERTIFICATES__9__CERT")
        os.Unsetenv("SFTPGO_HTTP__CERTIFICATES__9__KEY")
    })

    err = config.LoadConfig(configDir, confName)
    require.NoError(t, err)
    require.Len(t, config.GetHTTPConfig().Certificates, 2)
    require.Equal(t, "cert0", config.GetHTTPConfig().Certificates[0].Cert)
    require.Equal(t, "key0", config.GetHTTPConfig().Certificates[0].Key)
    require.Equal(t, "cert9", config.GetHTTPConfig().Certificates[1].Cert)
    require.Equal(t, "key9", config.GetHTTPConfig().Certificates[1].Key)

    err = os.Remove(configFilePath)
    assert.NoError(t, err)

    config.Init()

    err = config.LoadConfig(configDir, "")
    require.NoError(t, err)
    require.Len(t, config.GetHTTPConfig().Certificates, 2)
    require.Equal(t, "cert0", config.GetHTTPConfig().Certificates[0].Cert)
    require.Equal(t, "key0", config.GetHTTPConfig().Certificates[0].Key)
    require.Equal(t, "cert9", config.GetHTTPConfig().Certificates[1].Cert)
    require.Equal(t, "key9", config.GetHTTPConfig().Certificates[1].Key)
}

func TestConfigFromEnv(t *testing.T) {
    reset()

    os.Setenv("SFTPGO_SFTPD__BINDINGS__0__ADDRESS", "127.0.0.1")
    os.Setenv("SFTPGO_WEBDAVD__BINDINGS__0__PORT", "12000")
    os.Setenv("SFTPGO_DATA_PROVIDER__PASSWORD_HASHING__ARGON2_OPTIONS__ITERATIONS", "41")
    os.Setenv("SFTPGO_DATA_PROVIDER__POOL_SIZE", "10")
    os.Setenv("SFTPGO_DATA_PROVIDER__ACTIONS__EXECUTE_ON", "add")
    os.Setenv("SFTPGO_KMS__SECRETS__URL", "local")
    os.Setenv("SFTPGO_KMS__SECRETS__MASTER_KEY_PATH", "path")
    os.Setenv("SFTPGO_TELEMETRY__TLS_CIPHER_SUITES", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA")
    t.Cleanup(func() {
        os.Unsetenv("SFTPGO_SFTPD__BINDINGS__0__ADDRESS")
        os.Unsetenv("SFTPGO_WEBDAVD__BINDINGS__0__PORT")
        os.Unsetenv("SFTPGO_DATA_PROVIDER__PASSWORD_HASHING__ARGON2_OPTIONS__ITERATIONS")
        os.Unsetenv("SFTPGO_DATA_PROVIDER__POOL_SIZE")
        os.Unsetenv("SFTPGO_DATA_PROVIDER__ACTIONS__EXECUTE_ON")
        os.Unsetenv("SFTPGO_KMS__SECRETS__URL")
        os.Unsetenv("SFTPGO_KMS__SECRETS__MASTER_KEY_PATH")
        os.Unsetenv("SFTPGO_TELEMETRY__TLS_CIPHER_SUITES")
    })
    err := config.LoadConfig(".", "invalid config")
    assert.NoError(t, err)
    sftpdConfig := config.GetSFTPDConfig()
    assert.Equal(t, "127.0.0.1", sftpdConfig.Bindings[0].Address)
    assert.Equal(t, 12000, config.GetWebDAVDConfig().Bindings[0].Port)
    dataProviderConf := config.GetProviderConf()
    assert.Equal(t, uint32(41), dataProviderConf.PasswordHashing.Argon2Options.Iterations)
    assert.Equal(t, 10, dataProviderConf.PoolSize)
    assert.Len(t, dataProviderConf.Actions.ExecuteOn, 1)
    assert.Contains(t, dataProviderConf.Actions.ExecuteOn, "add")
    kmsConfig := config.GetKMSConfig()
    assert.Equal(t, "local", kmsConfig.Secrets.URL)
    assert.Equal(t, "path", kmsConfig.Secrets.MasterKeyPath)
    telemetryConfig := config.GetTelemetryConfig()
    assert.Len(t, telemetryConfig.TLSCipherSuites, 2)
    assert.Equal(t, "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA", telemetryConfig.TLSCipherSuites[0])
    assert.Equal(t, "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA", telemetryConfig.TLSCipherSuites[1])
}
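All of these tests rely on the same env override convention: the "SFTPGO_" prefix, the configuration path upper-cased with "__" as the nesting delimiter, and a numeric segment addressing a slice element (so sftpd.bindings[0].port becomes SFTPGO_SFTPD__BINDINGS__0__PORT). A tiny self-contained sketch of that naming rule; the envKey helper is hypothetical, only the resulting variable names come from the tests:

package main

import (
    "fmt"
    "os"
    "strings"
)

// envKey builds an SFTPGo-style override name from config path segments,
// joining them with the double-underscore nesting delimiter.
func envKey(parts ...string) string {
    return "SFTPGO_" + strings.Join(parts, "__")
}

func main() {
    // Equivalent to sftpd.bindings[0].port in the JSON configuration.
    key := envKey("SFTPD", "BINDINGS", "0", "PORT")
    os.Setenv(key, "2200")
    fmt.Println(key) // SFTPGO_SFTPD__BINDINGS__0__PORT
}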
dataprovider/admin.go (new file, 235 lines)
@@ -0,0 +1,235 @@
package dataprovider

import (
    "encoding/base64"
    "errors"
    "fmt"
    "net"
    "regexp"
    "strings"

    "github.com/alexedwards/argon2id"
    "github.com/minio/sha256-simd"

    "github.com/drakkan/sftpgo/utils"
)

// Available permissions for SFTPGo admins
const (
    PermAdminAny              = "*"
    PermAdminAddUsers         = "add_users"
    PermAdminChangeUsers      = "edit_users"
    PermAdminDeleteUsers      = "del_users"
    PermAdminViewUsers        = "view_users"
    PermAdminViewConnections  = "view_conns"
    PermAdminCloseConnections = "close_conns"
    PermAdminViewServerStatus = "view_status"
    PermAdminManageAdmins     = "manage_admins"
    PermAdminQuotaScans       = "quota_scans"
    PermAdminManageSystem     = "manage_system"
    PermAdminManageDefender   = "manage_defender"
    PermAdminViewDefender     = "view_defender"
)

var (
    emailRegex = regexp.MustCompile("^(?:(?:(?:(?:[a-zA-Z]|\\d|[!#\\$%&'\\*\\+\\-\\/=\\?\\^_`{\\|}~]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])+(?:\\.([a-zA-Z]|\\d|[!#\\$%&'\\*\\+\\-\\/=\\?\\^_`{\\|}~]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])+)*)|(?:(?:\\x22)(?:(?:(?:(?:\\x20|\\x09)*(?:\\x0d\\x0a))?(?:\\x20|\\x09)+)?(?:(?:[\\x01-\\x08\\x0b\\x0c\\x0e-\\x1f\\x7f]|\\x21|[\\x23-\\x5b]|[\\x5d-\\x7e]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(?:(?:[\\x01-\\x09\\x0b\\x0c\\x0d-\\x7f]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}]))))*(?:(?:(?:\\x20|\\x09)*(?:\\x0d\\x0a))?(\\x20|\\x09)+)?(?:\\x22))))@(?:(?:(?:[a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(?:(?:[a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])(?:[a-zA-Z]|\\d|-|\\.|~|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])*(?:[a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])))\\.)+(?:(?:[a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(?:(?:[a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])(?:[a-zA-Z]|\\d|-|\\.|~|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])*(?:[a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])))\\.?$")
    validAdminPerms = []string{PermAdminAny, PermAdminAddUsers, PermAdminChangeUsers, PermAdminDeleteUsers,
        PermAdminViewUsers, PermAdminViewConnections, PermAdminCloseConnections, PermAdminViewServerStatus,
        PermAdminManageAdmins, PermAdminQuotaScans, PermAdminManageSystem, PermAdminManageDefender,
        PermAdminViewDefender}
)

// AdminFilters defines additional restrictions for SFTPGo admins
type AdminFilters struct {
    // only clients connecting from these IP/Mask are allowed.
    // IP/Mask must be in CIDR notation as defined in RFC 4632 and RFC 4291
    // for example "192.0.2.0/24" or "2001:db8::/32"
    AllowList []string `json:"allow_list,omitempty"`
}

// Admin defines a SFTPGo admin
type Admin struct {
    // Database unique identifier
    ID int64 `json:"id"`
    // 1 enabled, 0 disabled (login is not allowed)
    Status int `json:"status"`
    // Username
    Username       string       `json:"username"`
    Password       string       `json:"password,omitempty"`
    Email          string       `json:"email"`
    Permissions    []string     `json:"permissions"`
    Filters        AdminFilters `json:"filters,omitempty"`
    AdditionalInfo string       `json:"additional_info,omitempty"`
}

func (a *Admin) checkPassword() error {
    if a.Password != "" && !strings.HasPrefix(a.Password, argonPwdPrefix) {
        pwd, err := argon2id.CreateHash(a.Password, argon2Params)
        if err != nil {
            return err
        }
        a.Password = pwd
    }
    return nil
}

func (a *Admin) validate() error {
    if a.Username == "" {
        return &ValidationError{err: "username is mandatory"}
    }
    if a.Password == "" {
        return &ValidationError{err: "please set a password"}
    }
    if !config.SkipNaturalKeysValidation && !usernameRegex.MatchString(a.Username) {
        return &ValidationError{err: fmt.Sprintf("username %#v is not valid, the following characters are allowed: a-zA-Z0-9-_.~", a.Username)}
    }
    if err := a.checkPassword(); err != nil {
        return err
    }
    a.Permissions = utils.RemoveDuplicates(a.Permissions)
    if len(a.Permissions) == 0 {
        return &ValidationError{err: "please grant some permissions to this admin"}
    }
    if utils.IsStringInSlice(PermAdminAny, a.Permissions) {
        a.Permissions = []string{PermAdminAny}
    }
    for _, perm := range a.Permissions {
        if !utils.IsStringInSlice(perm, validAdminPerms) {
            return &ValidationError{err: fmt.Sprintf("invalid permission: %#v", perm)}
        }
    }
    if a.Email != "" && !emailRegex.MatchString(a.Email) {
        return &ValidationError{err: fmt.Sprintf("email %#v is not valid", a.Email)}
    }
    for _, IPMask := range a.Filters.AllowList {
        _, _, err := net.ParseCIDR(IPMask)
        if err != nil {
            return &ValidationError{err: fmt.Sprintf("could not parse allow list entry %#v : %v", IPMask, err)}
        }
    }

    return nil
}

// CheckPassword verifies the admin password
func (a *Admin) CheckPassword(password string) (bool, error) {
    return argon2id.ComparePasswordAndHash(password, a.Password)
}

// CanLoginFromIP returns true if login from the given IP is allowed
func (a *Admin) CanLoginFromIP(ip string) bool {
    if len(a.Filters.AllowList) == 0 {
        return true
    }
    parsedIP := net.ParseIP(ip)
    if parsedIP == nil {
        return len(a.Filters.AllowList) == 0
    }

    for _, ipMask := range a.Filters.AllowList {
        _, network, err := net.ParseCIDR(ipMask)
        if err != nil {
            continue
        }
        if network.Contains(parsedIP) {
            return true
        }
    }
    return false
}

func (a *Admin) checkUserAndPass(password, ip string) error {
    if a.Status != 1 {
        return fmt.Errorf("admin %#v is disabled", a.Username)
    }
    if a.Password == "" || password == "" {
        return errors.New("credentials cannot be null or empty")
    }
    match, err := a.CheckPassword(password)
    if err != nil {
        return err
    }
    if !match {
        return ErrInvalidCredentials
    }
    if !a.CanLoginFromIP(ip) {
        return fmt.Errorf("login from IP %v not allowed", ip)
    }
    return nil
}

// HideConfidentialData hides admin confidential data
func (a *Admin) HideConfidentialData() {
    a.Password = ""
}

// HasPermission returns true if the admin has the specified permission
func (a *Admin) HasPermission(perm string) bool {
    if utils.IsStringInSlice(PermAdminAny, a.Permissions) {
        return true
    }
    return utils.IsStringInSlice(perm, a.Permissions)
}

// GetPermissionsAsString returns permission as string
func (a *Admin) GetPermissionsAsString() string {
    return strings.Join(a.Permissions, ", ")
}

// GetAllowedIPAsString returns the allowed IP as comma separated string
func (a *Admin) GetAllowedIPAsString() string {
    return strings.Join(a.Filters.AllowList, ",")
}

// GetValidPerms returns the allowed admin permissions
func (a *Admin) GetValidPerms() []string {
    return validAdminPerms
}

// GetInfoString returns admin's info as string.
func (a *Admin) GetInfoString() string {
    var result string
    if a.Email != "" {
        result = fmt.Sprintf("Email: %v. ", a.Email)
    }
    if len(a.Filters.AllowList) > 0 {
        result += fmt.Sprintf("Allowed IP/Mask: %v. ", len(a.Filters.AllowList))
    }
    return result
}

// GetSignature returns a signature for this admin.
// It could change after an update
func (a *Admin) GetSignature() string {
    data := []byte(a.Username)
    data = append(data, []byte(a.Password)...)
    signature := sha256.Sum256(data)
    return base64.StdEncoding.EncodeToString(signature[:])
}

func (a *Admin) getACopy() Admin {
    permissions := make([]string, len(a.Permissions))
    copy(permissions, a.Permissions)
    filters := AdminFilters{}
    filters.AllowList = make([]string, len(a.Filters.AllowList))
    copy(filters.AllowList, a.Filters.AllowList)

    return Admin{
        ID:             a.ID,
        Status:         a.Status,
        Username:       a.Username,
        Password:       a.Password,
        Email:          a.Email,
        Permissions:    permissions,
        Filters:        filters,
        AdditionalInfo: a.AdditionalInfo,
    }
}

// setDefaults sets the appropriate value for the default admin
func (a *Admin) setDefaults() {
    a.Username = "admin"
    a.Password = "password"
    a.Status = 1
    a.Permissions = []string{PermAdminAny}
}
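A minimal sketch of how the exported Admin API above is meant to be consumed: permission checks go through HasPermission, the optional CIDR allow list through CanLoginFromIP, and password verification through CheckPassword (argon2id, so Password must hold a hash). All identifiers come from the file above; only the example values are made up.

package main

import (
    "fmt"

    "github.com/drakkan/sftpgo/dataprovider"
)

func main() {
    admin := dataprovider.Admin{
        Status:      1,
        Username:    "admin",
        Permissions: []string{dataprovider.PermAdminAddUsers, dataprovider.PermAdminViewUsers},
        Filters: dataprovider.AdminFilters{
            AllowList: []string{"192.0.2.0/24"},
        },
    }
    fmt.Println(admin.HasPermission(dataprovider.PermAdminAddUsers)) // true
    fmt.Println(admin.CanLoginFromIP("192.0.2.10"))                  // true: inside the allow list
    fmt.Println(admin.CanLoginFromIP("203.0.113.5"))                 // false: not in the allow list
}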
dataprovider/bolt.go (1452 lines): file diff suppressed because it is too large
dataprovider/bolt_disabled.go (new file, 17 lines)
@@ -0,0 +1,17 @@
// +build nobolt

package dataprovider

import (
    "errors"

    "github.com/drakkan/sftpgo/version"
)

func init() {
    version.AddFeature("-bolt")
}

func initializeBoltProvider(basePath string) error {
    return errors.New("bolt disabled at build time")
}
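The stub above follows SFTPGo's build-tag pattern for optional backends: each variant registers itself via version.AddFeature with a "-" or "+" prefix, and building with "go build -tags nobolt" selects this stub. A sketch of the enabled-side counterpart; this is an assumption about how bolt.go (whose diff is suppressed above) presumably starts, not a copy of it:

// +build !nobolt

package dataprovider

import "github.com/drakkan/sftpgo/version"

// Without the nobolt tag the real provider is compiled and bolt support is
// advertised with a "+" prefix, mirroring the "-bolt" registration above.
func init() {
    version.AddFeature("+bolt")
}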
dataprovider/compat.go (new file, 358 lines)
@@ -0,0 +1,358 @@
package dataprovider

import (
    "encoding/json"
    "fmt"
    "io/ioutil"
    "path/filepath"

    "github.com/drakkan/sftpgo/kms"
    "github.com/drakkan/sftpgo/logger"
    "github.com/drakkan/sftpgo/utils"
    "github.com/drakkan/sftpgo/vfs"
)

type compatUserV2 struct {
    ID                int64    `json:"id"`
    Username          string   `json:"username"`
    Password          string   `json:"password,omitempty"`
    PublicKeys        []string `json:"public_keys,omitempty"`
    HomeDir           string   `json:"home_dir"`
    UID               int      `json:"uid"`
    GID               int      `json:"gid"`
    MaxSessions       int      `json:"max_sessions"`
    QuotaSize         int64    `json:"quota_size"`
    QuotaFiles        int      `json:"quota_files"`
    Permissions       []string `json:"permissions"`
    UsedQuotaSize     int64    `json:"used_quota_size"`
    UsedQuotaFiles    int      `json:"used_quota_files"`
    LastQuotaUpdate   int64    `json:"last_quota_update"`
    UploadBandwidth   int64    `json:"upload_bandwidth"`
    DownloadBandwidth int64    `json:"download_bandwidth"`
    ExpirationDate    int64    `json:"expiration_date"`
    LastLogin         int64    `json:"last_login"`
    Status            int      `json:"status"`
}

type compatS3FsConfigV4 struct {
    Bucket            string `json:"bucket,omitempty"`
    KeyPrefix         string `json:"key_prefix,omitempty"`
    Region            string `json:"region,omitempty"`
    AccessKey         string `json:"access_key,omitempty"`
    AccessSecret      string `json:"access_secret,omitempty"`
    Endpoint          string `json:"endpoint,omitempty"`
    StorageClass      string `json:"storage_class,omitempty"`
    UploadPartSize    int64  `json:"upload_part_size,omitempty"`
    UploadConcurrency int    `json:"upload_concurrency,omitempty"`
}

type compatGCSFsConfigV4 struct {
    Bucket               string `json:"bucket,omitempty"`
    KeyPrefix            string `json:"key_prefix,omitempty"`
    CredentialFile       string `json:"-"`
    Credentials          []byte `json:"credentials,omitempty"`
    AutomaticCredentials int    `json:"automatic_credentials,omitempty"`
    StorageClass         string `json:"storage_class,omitempty"`
}

type compatAzBlobFsConfigV4 struct {
    Container         string `json:"container,omitempty"`
    AccountName       string `json:"account_name,omitempty"`
    AccountKey        string `json:"account_key,omitempty"`
    Endpoint          string `json:"endpoint,omitempty"`
    SASURL            string `json:"sas_url,omitempty"`
    KeyPrefix         string `json:"key_prefix,omitempty"`
    UploadPartSize    int64  `json:"upload_part_size,omitempty"`
    UploadConcurrency int    `json:"upload_concurrency,omitempty"`
    UseEmulator       bool   `json:"use_emulator,omitempty"`
    AccessTier        string `json:"access_tier,omitempty"`
}

type compatFilesystemV4 struct {
    Provider     FilesystemProvider     `json:"provider"`
    S3Config     compatS3FsConfigV4     `json:"s3config,omitempty"`
    GCSConfig    compatGCSFsConfigV4    `json:"gcsconfig,omitempty"`
    AzBlobConfig compatAzBlobFsConfigV4 `json:"azblobconfig,omitempty"`
}

type compatUserV4 struct {
    ID                int64               `json:"id"`
    Status            int                 `json:"status"`
    Username          string              `json:"username"`
    ExpirationDate    int64               `json:"expiration_date"`
    Password          string              `json:"password,omitempty"`
    PublicKeys        []string            `json:"public_keys,omitempty"`
    HomeDir           string              `json:"home_dir"`
    VirtualFolders    []vfs.VirtualFolder `json:"virtual_folders,omitempty"`
    UID               int                 `json:"uid"`
    GID               int                 `json:"gid"`
    MaxSessions       int                 `json:"max_sessions"`
    QuotaSize         int64               `json:"quota_size"`
    QuotaFiles        int                 `json:"quota_files"`
    Permissions       map[string][]string `json:"permissions"`
    UsedQuotaSize     int64               `json:"used_quota_size"`
    UsedQuotaFiles    int                 `json:"used_quota_files"`
    LastQuotaUpdate   int64               `json:"last_quota_update"`
    UploadBandwidth   int64               `json:"upload_bandwidth"`
    DownloadBandwidth int64               `json:"download_bandwidth"`
    LastLogin         int64               `json:"last_login"`
    Filters           UserFilters         `json:"filters"`
    FsConfig          compatFilesystemV4  `json:"filesystem"`
}

type backupDataV4Compat struct {
    Users   []compatUserV4          `json:"users"`
    Folders []vfs.BaseVirtualFolder `json:"folders"`
}

func createUserFromV4(u compatUserV4, fsConfig Filesystem) User {
    user := User{
        ID:                u.ID,
        Status:            u.Status,
        Username:          u.Username,
        ExpirationDate:    u.ExpirationDate,
        Password:          u.Password,
        PublicKeys:        u.PublicKeys,
        HomeDir:           u.HomeDir,
        VirtualFolders:    u.VirtualFolders,
        UID:               u.UID,
        GID:               u.GID,
        MaxSessions:       u.MaxSessions,
        QuotaSize:         u.QuotaSize,
        QuotaFiles:        u.QuotaFiles,
        Permissions:       u.Permissions,
        UsedQuotaSize:     u.UsedQuotaSize,
        UsedQuotaFiles:    u.UsedQuotaFiles,
        LastQuotaUpdate:   u.LastQuotaUpdate,
        UploadBandwidth:   u.UploadBandwidth,
        DownloadBandwidth: u.DownloadBandwidth,
        LastLogin:         u.LastLogin,
        Filters:           u.Filters,
    }
    user.FsConfig = fsConfig
    user.SetEmptySecretsIfNil()
    return user
}

func convertUserToV4(u User, fsConfig compatFilesystemV4) compatUserV4 {
    user := compatUserV4{
        ID:                u.ID,
        Status:            u.Status,
        Username:          u.Username,
        ExpirationDate:    u.ExpirationDate,
        Password:          u.Password,
        PublicKeys:        u.PublicKeys,
        HomeDir:           u.HomeDir,
        VirtualFolders:    u.VirtualFolders,
        UID:               u.UID,
        GID:               u.GID,
        MaxSessions:       u.MaxSessions,
        QuotaSize:         u.QuotaSize,
        QuotaFiles:        u.QuotaFiles,
        Permissions:       u.Permissions,
        UsedQuotaSize:     u.UsedQuotaSize,
        UsedQuotaFiles:    u.UsedQuotaFiles,
        LastQuotaUpdate:   u.LastQuotaUpdate,
        UploadBandwidth:   u.UploadBandwidth,
        DownloadBandwidth: u.DownloadBandwidth,
        LastLogin:         u.LastLogin,
        Filters:           u.Filters,
    }
    user.FsConfig = fsConfig
    return user
}

func getCGSCredentialsFromV4(config compatGCSFsConfigV4) (*kms.Secret, error) {
    secret := kms.NewEmptySecret()
    var err error
    if len(config.Credentials) > 0 {
        secret = kms.NewPlainSecret(string(config.Credentials))
        return secret, nil
    }
    if config.CredentialFile != "" {
        creds, err := ioutil.ReadFile(config.CredentialFile)
        if err != nil {
            return secret, err
        }
        secret = kms.NewPlainSecret(string(creds))
        return secret, nil
    }
    return secret, err
}

func getCGSCredentialsFromV6(config vfs.GCSFsConfig, username string) (string, error) {
    if config.Credentials == nil {
        config.Credentials = kms.NewEmptySecret()
    }
    if config.Credentials.IsEmpty() {
        config.CredentialFile = filepath.Join(credentialsDirPath, fmt.Sprintf("%v_gcs_credentials.json",
            username))
        creds, err := ioutil.ReadFile(config.CredentialFile)
        if err != nil {
            return "", err
        }
        err = json.Unmarshal(creds, &config.Credentials)
        if err != nil {
            return "", err
        }
    }
    if config.Credentials.IsEncrypted() {
        err := config.Credentials.Decrypt()
        if err != nil {
            return "", err
        }
        // in V4 GCS credentials were not encrypted
        return config.Credentials.GetPayload(), nil
    }
    return "", nil
}

func convertFsConfigToV4(fs Filesystem, username string) (compatFilesystemV4, error) {
    fsV4 := compatFilesystemV4{
        Provider:     fs.Provider,
        S3Config:     compatS3FsConfigV4{},
        AzBlobConfig: compatAzBlobFsConfigV4{},
        GCSConfig:    compatGCSFsConfigV4{},
    }
    switch fs.Provider {
    case S3FilesystemProvider:
        fsV4.S3Config = compatS3FsConfigV4{
            Bucket:            fs.S3Config.Bucket,
            KeyPrefix:         fs.S3Config.KeyPrefix,
            Region:            fs.S3Config.Region,
            AccessKey:         fs.S3Config.AccessKey,
            AccessSecret:      "",
            Endpoint:          fs.S3Config.Endpoint,
            StorageClass:      fs.S3Config.StorageClass,
            UploadPartSize:    fs.S3Config.UploadPartSize,
            UploadConcurrency: fs.S3Config.UploadConcurrency,
        }
        if fs.S3Config.AccessSecret.IsEncrypted() {
            err := fs.S3Config.AccessSecret.Decrypt()
            if err != nil {
                return fsV4, err
            }
            secretV4, err := utils.EncryptData(fs.S3Config.AccessSecret.GetPayload())
            if err != nil {
                return fsV4, err
            }
            fsV4.S3Config.AccessSecret = secretV4
        }
    case AzureBlobFilesystemProvider:
        fsV4.AzBlobConfig = compatAzBlobFsConfigV4{
            Container:         fs.AzBlobConfig.Container,
            AccountName:       fs.AzBlobConfig.AccountName,
            AccountKey:        "",
            Endpoint:          fs.AzBlobConfig.Endpoint,
            SASURL:            fs.AzBlobConfig.SASURL,
            KeyPrefix:         fs.AzBlobConfig.KeyPrefix,
            UploadPartSize:    fs.AzBlobConfig.UploadPartSize,
            UploadConcurrency: fs.AzBlobConfig.UploadConcurrency,
            UseEmulator:       fs.AzBlobConfig.UseEmulator,
            AccessTier:        fs.AzBlobConfig.AccessTier,
        }
        if fs.AzBlobConfig.AccountKey.IsEncrypted() {
            err := fs.AzBlobConfig.AccountKey.Decrypt()
            if err != nil {
                return fsV4, err
            }
            secretV4, err := utils.EncryptData(fs.AzBlobConfig.AccountKey.GetPayload())
            if err != nil {
                return fsV4, err
            }
            fsV4.AzBlobConfig.AccountKey = secretV4
        }
    case GCSFilesystemProvider:
        fsV4.GCSConfig = compatGCSFsConfigV4{
            Bucket:               fs.GCSConfig.Bucket,
            KeyPrefix:            fs.GCSConfig.KeyPrefix,
            CredentialFile:       fs.GCSConfig.CredentialFile,
            AutomaticCredentials: fs.GCSConfig.AutomaticCredentials,
            StorageClass:         fs.GCSConfig.StorageClass,
        }
        if fs.GCSConfig.AutomaticCredentials == 0 {
            creds, err := getCGSCredentialsFromV6(fs.GCSConfig, username)
            if err != nil {
                return fsV4, err
            }
            fsV4.GCSConfig.Credentials = []byte(creds)
        }
    default:
        // a provider not supported in v4, the configuration will be lost
        providerLog(logger.LevelWarn, "provider %v was not supported in v4, the configuration for the user %#v will be lost",
            fs.Provider, username)
        fsV4.Provider = 0
    }
    return fsV4, nil
}

func convertFsConfigFromV4(compatFs compatFilesystemV4, username string) (Filesystem, error) {
    fsConfig := Filesystem{
        Provider:     compatFs.Provider,
        S3Config:     vfs.S3FsConfig{},
        AzBlobConfig: vfs.AzBlobFsConfig{},
        GCSConfig:    vfs.GCSFsConfig{},
    }
    switch compatFs.Provider {
    case S3FilesystemProvider:
        fsConfig.S3Config = vfs.S3FsConfig{
            Bucket:            compatFs.S3Config.Bucket,
            KeyPrefix:         compatFs.S3Config.KeyPrefix,
            Region:            compatFs.S3Config.Region,
            AccessKey:         compatFs.S3Config.AccessKey,
            AccessSecret:      kms.NewEmptySecret(),
            Endpoint:          compatFs.S3Config.Endpoint,
            StorageClass:      compatFs.S3Config.StorageClass,
            UploadPartSize:    compatFs.S3Config.UploadPartSize,
            UploadConcurrency: compatFs.S3Config.UploadConcurrency,
        }
        if compatFs.S3Config.AccessSecret != "" {
            secret, err := kms.GetSecretFromCompatString(compatFs.S3Config.AccessSecret)
            if err != nil {
                providerLog(logger.LevelError, "unable to convert v4 filesystem for user %#v: %v", username, err)
                return fsConfig, err
            }
            fsConfig.S3Config.AccessSecret = secret
        }
    case AzureBlobFilesystemProvider:
        fsConfig.AzBlobConfig = vfs.AzBlobFsConfig{
            Container:         compatFs.AzBlobConfig.Container,
            AccountName:       compatFs.AzBlobConfig.AccountName,
            AccountKey:        kms.NewEmptySecret(),
            Endpoint:          compatFs.AzBlobConfig.Endpoint,
            SASURL:            compatFs.AzBlobConfig.SASURL,
            KeyPrefix:         compatFs.AzBlobConfig.KeyPrefix,
            UploadPartSize:    compatFs.AzBlobConfig.UploadPartSize,
            UploadConcurrency: compatFs.AzBlobConfig.UploadConcurrency,
            UseEmulator:       compatFs.AzBlobConfig.UseEmulator,
            AccessTier:        compatFs.AzBlobConfig.AccessTier,
        }
        if compatFs.AzBlobConfig.AccountKey != "" {
            secret, err := kms.GetSecretFromCompatString(compatFs.AzBlobConfig.AccountKey)
            if err != nil {
                providerLog(logger.LevelError, "unable to convert v4 filesystem for user %#v: %v", username, err)
                return fsConfig, err
            }
            fsConfig.AzBlobConfig.AccountKey = secret
        }
    case GCSFilesystemProvider:
        fsConfig.GCSConfig = vfs.GCSFsConfig{
            Bucket:               compatFs.GCSConfig.Bucket,
            KeyPrefix:            compatFs.GCSConfig.KeyPrefix,
            CredentialFile:       compatFs.GCSConfig.CredentialFile,
            AutomaticCredentials: compatFs.GCSConfig.AutomaticCredentials,
            StorageClass:         compatFs.GCSConfig.StorageClass,
        }
        if compatFs.GCSConfig.AutomaticCredentials == 0 {
            compatFs.GCSConfig.CredentialFile = filepath.Join(credentialsDirPath, fmt.Sprintf("%v_gcs_credentials.json",
                username))
        }
        secret, err := getCGSCredentialsFromV4(compatFs.GCSConfig)
        if err != nil {
            providerLog(logger.LevelError, "unable to convert v4 filesystem for user %#v: %v", username, err)
            return fsConfig, err
        }
        fsConfig.GCSConfig.Credentials = secret
    }
    return fsConfig, nil
}
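A minimal in-package sketch of how the V4 helpers above compose when restoring an old backup; the restoreUserV4 name is hypothetical and not part of the source. A user read from a V4 dump first has its filesystem configuration upgraded via convertFsConfigFromV4, then becomes a current User via createUserFromV4:

// restoreUserV4 upgrades a user loaded from a V4 backup to the current format.
// It assumes it lives in package dataprovider next to the helpers above.
func restoreUserV4(u compatUserV4) (User, error) {
    fsConfig, err := convertFsConfigFromV4(u.FsConfig, u.Username)
    if err != nil {
        return User{}, err
    }
    return createUserFromV4(u, fsConfig), nil
}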
(file diff suppressed because it is too large)
dataprovider/memory.go (new file, 902 lines)
@@ -0,0 +1,902 @@
package dataprovider

import (
    "errors"
    "fmt"
    "io/ioutil"
    "os"
    "path/filepath"
    "sort"
    "sync"
    "time"

    "github.com/drakkan/sftpgo/logger"
    "github.com/drakkan/sftpgo/utils"
    "github.com/drakkan/sftpgo/vfs"
)

var (
    errMemoryProviderClosed = errors.New("memory provider is closed")
)

type memoryProviderHandle struct {
    // configuration file to use for loading users
    configFile string
    sync.Mutex
    isClosed bool
    // slice with ordered usernames
    usernames []string
    // map for users, username is the key
    users map[string]User
    // map for virtual folders, folder name is the key
    vfolders map[string]vfs.BaseVirtualFolder
    // slice with ordered folder names
    vfoldersNames []string
    // map for admins, username is the key
    admins map[string]Admin
    // slice with ordered admins
    adminsUsernames []string
}

// MemoryProvider auth provider for a memory store
type MemoryProvider struct {
    dbHandle *memoryProviderHandle
}

func initializeMemoryProvider(basePath string) {
    logSender = fmt.Sprintf("dataprovider_%v", MemoryDataProviderName)
    configFile := ""
    if utils.IsFileInputValid(config.Name) {
        configFile = config.Name
        if !filepath.IsAbs(configFile) {
            configFile = filepath.Join(basePath, configFile)
        }
    }
    provider = &MemoryProvider{
        dbHandle: &memoryProviderHandle{
            isClosed:        false,
            usernames:       []string{},
            users:           make(map[string]User),
            vfolders:        make(map[string]vfs.BaseVirtualFolder),
            vfoldersNames:   []string{},
            admins:          make(map[string]Admin),
            adminsUsernames: []string{},
            configFile:      configFile,
        },
    }
    if err := provider.reloadConfig(); err != nil {
        logger.Error(logSender, "", "unable to load initial data: %v", err)
        logger.ErrorToConsole("unable to load initial data: %v", err)
    }
}

func (p *MemoryProvider) checkAvailability() error {
    p.dbHandle.Lock()
    defer p.dbHandle.Unlock()
    if p.dbHandle.isClosed {
        return errMemoryProviderClosed
    }
    return nil
}

func (p *MemoryProvider) close() error {
    p.dbHandle.Lock()
    defer p.dbHandle.Unlock()
    if p.dbHandle.isClosed {
        return errMemoryProviderClosed
    }
    p.dbHandle.isClosed = true
    return nil
}

func (p *MemoryProvider) validateUserAndPass(username, password, ip, protocol string) (User, error) {
    var user User
    if password == "" {
        return user, errors.New("credentials cannot be null or empty")
    }
    user, err := p.userExists(username)
    if err != nil {
        providerLog(logger.LevelWarn, "error authenticating user %#v: %v", username, err)
        return user, err
    }
    return checkUserAndPass(&user, password, ip, protocol)
}

func (p *MemoryProvider) validateUserAndPubKey(username string, pubKey []byte) (User, string, error) {
    var user User
    if len(pubKey) == 0 {
        return user, "", errors.New("credentials cannot be null or empty")
    }
    user, err := p.userExists(username)
    if err != nil {
        providerLog(logger.LevelWarn, "error authenticating user %#v: %v", username, err)
        return user, "", err
    }
    return checkUserAndPubKey(&user, pubKey)
}

func (p *MemoryProvider) validateAdminAndPass(username, password, ip string) (Admin, error) {
    admin, err := p.adminExists(username)
    if err != nil {
        providerLog(logger.LevelWarn, "error authenticating admin %#v: %v", username, err)
        return admin, err
    }
    err = admin.checkUserAndPass(password, ip)
    return admin, err
}

func (p *MemoryProvider) updateLastLogin(username string) error {
    p.dbHandle.Lock()
    defer p.dbHandle.Unlock()
    if p.dbHandle.isClosed {
        return errMemoryProviderClosed
    }
    user, err := p.userExistsInternal(username)
    if err != nil {
        return err
    }
    user.LastLogin = utils.GetTimeAsMsSinceEpoch(time.Now())
    p.dbHandle.users[user.Username] = user
    return nil
}

func (p *MemoryProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
    p.dbHandle.Lock()
    defer p.dbHandle.Unlock()
    if p.dbHandle.isClosed {
        return errMemoryProviderClosed
    }
    user, err := p.userExistsInternal(username)
    if err != nil {
        providerLog(logger.LevelWarn, "unable to update quota for user %#v error: %v", username, err)
        return err
    }
    if reset {
        user.UsedQuotaSize = sizeAdd
        user.UsedQuotaFiles = filesAdd
    } else {
        user.UsedQuotaSize += sizeAdd
        user.UsedQuotaFiles += filesAdd
    }
    user.LastQuotaUpdate = utils.GetTimeAsMsSinceEpoch(time.Now())
    providerLog(logger.LevelDebug, "quota updated for user %#v, files increment: %v size increment: %v is reset? %v",
        username, filesAdd, sizeAdd, reset)
    p.dbHandle.users[user.Username] = user
    return nil
}

func (p *MemoryProvider) getUsedQuota(username string) (int, int64, error) {
    p.dbHandle.Lock()
    defer p.dbHandle.Unlock()
    if p.dbHandle.isClosed {
        return 0, 0, errMemoryProviderClosed
    }
    user, err := p.userExistsInternal(username)
    if err != nil {
        providerLog(logger.LevelWarn, "unable to get quota for user %#v error: %v", username, err)
        return 0, 0, err
    }
    return user.UsedQuotaFiles, user.UsedQuotaSize, err
}

func (p *MemoryProvider) addUser(user *User) error {
    // we can query virtual folder while validating a user
    // so we have to check without holding the lock
    err := ValidateUser(user)
    if err != nil {
        return err
    }

    p.dbHandle.Lock()
    defer p.dbHandle.Unlock()
    if p.dbHandle.isClosed {
        return errMemoryProviderClosed
    }

    _, err = p.userExistsInternal(user.Username)
    if err == nil {
        return fmt.Errorf("username %#v already exists", user.Username)
    }
    user.ID = p.getNextID()
    user.LastQuotaUpdate = 0
    user.UsedQuotaSize = 0
    user.UsedQuotaFiles = 0
    user.LastLogin = 0
    user.VirtualFolders = p.joinVirtualFoldersFields(user)
    p.dbHandle.users[user.Username] = user.getACopy()
    p.dbHandle.usernames = append(p.dbHandle.usernames, user.Username)
    sort.Strings(p.dbHandle.usernames)
    return nil
}

func (p *MemoryProvider) updateUser(user *User) error {
    // we can query virtual folder while validating a user
    // so we have to check without holding the lock
    err := ValidateUser(user)
    if err != nil {
        return err
    }

    p.dbHandle.Lock()
    defer p.dbHandle.Unlock()
    if p.dbHandle.isClosed {
        return errMemoryProviderClosed
    }

    u, err := p.userExistsInternal(user.Username)
    if err != nil {
        return err
    }
    for _, oldFolder := range u.VirtualFolders {
        p.removeUserFromFolderMapping(oldFolder.Name, u.Username)
    }
    user.VirtualFolders = p.joinVirtualFoldersFields(user)
    user.LastQuotaUpdate = u.LastQuotaUpdate
    user.UsedQuotaSize = u.UsedQuotaSize
    user.UsedQuotaFiles = u.UsedQuotaFiles
    user.LastLogin = u.LastLogin
    user.ID = u.ID
    // pre-login and external auth hook will use the passed *user so save a copy
    p.dbHandle.users[user.Username] = user.getACopy()
    return nil
}

func (p *MemoryProvider) deleteUser(user *User) error {
    p.dbHandle.Lock()
    defer p.dbHandle.Unlock()
    if p.dbHandle.isClosed {
        return errMemoryProviderClosed
    }
    u, err := p.userExistsInternal(user.Username)
    if err != nil {
        return err
    }
    for _, oldFolder := range u.VirtualFolders {
        p.removeUserFromFolderMapping(oldFolder.Name, u.Username)
    }
    delete(p.dbHandle.users, user.Username)
    // this could be more efficient
    p.dbHandle.usernames = make([]string, 0, len(p.dbHandle.users))
    for username := range p.dbHandle.users {
        p.dbHandle.usernames = append(p.dbHandle.usernames, username)
    }
    sort.Strings(p.dbHandle.usernames)
    return nil
}
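The memory provider keeps a sorted slice of usernames next to the map so that listings are deterministic; deleteUser above rebuilds that slice from the whole map, and its own comment notes "this could be more efficient". A sketch of the in-place alternative that comment hints at, using a binary search on the already sorted slice (assuming "sort" is imported; this is an alternative, not what the code above does):

// removeSorted removes name from a sorted slice in O(log n + n) without
// rebuilding it from the map.
func removeSorted(names []string, name string) []string {
    i := sort.SearchStrings(names, name)
    if i < len(names) && names[i] == name {
        return append(names[:i], names[i+1:]...)
    }
    return names
}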
|
||||
|
||||
func (p *MemoryProvider) dumpUsers() ([]User, error) {
|
||||
p.dbHandle.Lock()
|
||||
defer p.dbHandle.Unlock()
|
||||
users := make([]User, 0, len(p.dbHandle.usernames))
|
||||
var err error
|
||||
if p.dbHandle.isClosed {
|
||||
return users, errMemoryProviderClosed
|
||||
}
|
||||
for _, username := range p.dbHandle.usernames {
|
||||
u := p.dbHandle.users[username]
|
||||
user := u.getACopy()
|
||||
err = addCredentialsToUser(&user)
|
||||
if err != nil {
|
||||
return users, err
|
||||
}
|
||||
users = append(users, user)
|
||||
}
|
||||
return users, err
|
||||
}
|
||||
|
func (p *MemoryProvider) dumpFolders() ([]vfs.BaseVirtualFolder, error) {
    p.dbHandle.Lock()
    defer p.dbHandle.Unlock()
    folders := make([]vfs.BaseVirtualFolder, 0, len(p.dbHandle.vfoldersNames))
    if p.dbHandle.isClosed {
        return folders, errMemoryProviderClosed
    }
    for _, f := range p.dbHandle.vfolders {
        folders = append(folders, f)
    }
    return folders, nil
}

func (p *MemoryProvider) getUsers(limit int, offset int, order string) ([]User, error) {
    users := make([]User, 0, limit)
    var err error
    p.dbHandle.Lock()
    defer p.dbHandle.Unlock()
    if p.dbHandle.isClosed {
        return users, errMemoryProviderClosed
    }
    if limit <= 0 {
        return users, err
    }
    itNum := 0
    if order == OrderASC {
        for _, username := range p.dbHandle.usernames {
            itNum++
            if itNum <= offset {
                continue
            }
            u := p.dbHandle.users[username]
            user := u.getACopy()
            user.HideConfidentialData()
            users = append(users, user)
            if len(users) >= limit {
                break
            }
        }
    } else {
        for i := len(p.dbHandle.usernames) - 1; i >= 0; i-- {
            itNum++
            if itNum <= offset {
                continue
            }
            username := p.dbHandle.usernames[i]
            u := p.dbHandle.users[username]
            user := u.getACopy()
            user.HideConfidentialData()
            users = append(users, user)
            if len(users) >= limit {
                break
            }
        }
    }
    return users, err
}

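// Pagination walks the sorted usernames slice and skips the first `offset`
// entries, so offset/limit behave like SQL OFFSET/LIMIT over a stable order.
// Illustrative (hypothetical) caller code:
//
//   page1, _ := p.getUsers(100, 0, OrderASC)   // users 1..100
//   page2, _ := p.getUsers(100, 100, OrderASC) // users 101..200
//
// Each returned user is a copy with HideConfidentialData applied, so callers
// cannot mutate the stored records, and secrets are blanked before they
// leave the provider.
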
func (p *MemoryProvider) userExists(username string) (User, error) {
    p.dbHandle.Lock()
    defer p.dbHandle.Unlock()
    if p.dbHandle.isClosed {
        return User{}, errMemoryProviderClosed
    }
    return p.userExistsInternal(username)
}

func (p *MemoryProvider) userExistsInternal(username string) (User, error) {
    if val, ok := p.dbHandle.users[username]; ok {
        return val.getACopy(), nil
    }
    return User{}, &RecordNotFoundError{err: fmt.Sprintf("username %#v does not exist", username)}
}

func (p *MemoryProvider) addAdmin(admin *Admin) error {
    p.dbHandle.Lock()
    defer p.dbHandle.Unlock()
    if p.dbHandle.isClosed {
        return errMemoryProviderClosed
    }
    err := admin.validate()
    if err != nil {
        return err
    }
    _, err = p.adminExistsInternal(admin.Username)
    if err == nil {
        return fmt.Errorf("admin %#v already exists", admin.Username)
    }
    admin.ID = p.getNextAdminID()
    p.dbHandle.admins[admin.Username] = admin.getACopy()
    p.dbHandle.adminsUsernames = append(p.dbHandle.adminsUsernames, admin.Username)
    sort.Strings(p.dbHandle.adminsUsernames)
    return nil
}

func (p *MemoryProvider) updateAdmin(admin *Admin) error {
    p.dbHandle.Lock()
    defer p.dbHandle.Unlock()
    if p.dbHandle.isClosed {
        return errMemoryProviderClosed
    }
    err := admin.validate()
    if err != nil {
        return err
    }
    a, err := p.adminExistsInternal(admin.Username)
    if err != nil {
        return err
    }
    admin.ID = a.ID
    p.dbHandle.admins[admin.Username] = admin.getACopy()
    return nil
}

func (p *MemoryProvider) deleteAdmin(admin *Admin) error {
    p.dbHandle.Lock()
    defer p.dbHandle.Unlock()
    if p.dbHandle.isClosed {
        return errMemoryProviderClosed
    }
    _, err := p.adminExistsInternal(admin.Username)
    if err != nil {
        return err
    }

    delete(p.dbHandle.admins, admin.Username)
    // this could be more efficient
    p.dbHandle.adminsUsernames = make([]string, 0, len(p.dbHandle.admins))
    for username := range p.dbHandle.admins {
        p.dbHandle.adminsUsernames = append(p.dbHandle.adminsUsernames, username)
    }
    sort.Strings(p.dbHandle.adminsUsernames)
    return nil
}

func (p *MemoryProvider) adminExists(username string) (Admin, error) {
    p.dbHandle.Lock()
    defer p.dbHandle.Unlock()
    if p.dbHandle.isClosed {
        return Admin{}, errMemoryProviderClosed
    }
    return p.adminExistsInternal(username)
}

func (p *MemoryProvider) adminExistsInternal(username string) (Admin, error) {
    if val, ok := p.dbHandle.admins[username]; ok {
        return val.getACopy(), nil
    }
    return Admin{}, &RecordNotFoundError{err: fmt.Sprintf("admin %#v does not exist", username)}
}

func (p *MemoryProvider) dumpAdmins() ([]Admin, error) {
    p.dbHandle.Lock()
    defer p.dbHandle.Unlock()

    admins := make([]Admin, 0, len(p.dbHandle.admins))
    if p.dbHandle.isClosed {
        return admins, errMemoryProviderClosed
    }
    for _, admin := range p.dbHandle.admins {
        admins = append(admins, admin)
    }
    return admins, nil
}

func (p *MemoryProvider) getAdmins(limit int, offset int, order string) ([]Admin, error) {
    admins := make([]Admin, 0, limit)

    p.dbHandle.Lock()
    defer p.dbHandle.Unlock()

    if p.dbHandle.isClosed {
        return admins, errMemoryProviderClosed
    }
    if limit <= 0 {
        return admins, nil
    }
    itNum := 0
    if order == OrderASC {
        for _, username := range p.dbHandle.adminsUsernames {
            itNum++
            if itNum <= offset {
                continue
            }
            a := p.dbHandle.admins[username]
            admin := a.getACopy()
            admin.HideConfidentialData()
            admins = append(admins, admin)
            if len(admins) >= limit {
                break
            }
        }
    } else {
        for i := len(p.dbHandle.adminsUsernames) - 1; i >= 0; i-- {
            itNum++
            if itNum <= offset {
                continue
            }
            username := p.dbHandle.adminsUsernames[i]
            a := p.dbHandle.admins[username]
            admin := a.getACopy()
            admin.HideConfidentialData()
            admins = append(admins, admin)
            if len(admins) >= limit {
                break
            }
        }
    }

    return admins, nil
}

func (p *MemoryProvider) updateFolderQuota(name string, filesAdd int, sizeAdd int64, reset bool) error {
    p.dbHandle.Lock()
    defer p.dbHandle.Unlock()
    if p.dbHandle.isClosed {
        return errMemoryProviderClosed
    }
    folder, err := p.folderExistsInternal(name)
    if err != nil {
        providerLog(logger.LevelWarn, "unable to update quota for folder %#v error: %v", name, err)
        return err
    }
    if reset {
        folder.UsedQuotaSize = sizeAdd
        folder.UsedQuotaFiles = filesAdd
    } else {
        folder.UsedQuotaSize += sizeAdd
        folder.UsedQuotaFiles += filesAdd
    }
    folder.LastQuotaUpdate = utils.GetTimeAsMsSinceEpoch(time.Now())
    p.dbHandle.vfolders[name] = folder
    return nil
}

func (p *MemoryProvider) getUsedFolderQuota(name string) (int, int64, error) {
    p.dbHandle.Lock()
    defer p.dbHandle.Unlock()
    if p.dbHandle.isClosed {
        return 0, 0, errMemoryProviderClosed
    }
    folder, err := p.folderExistsInternal(name)
    if err != nil {
        providerLog(logger.LevelWarn, "unable to get quota for folder %#v error: %v", name, err)
        return 0, 0, err
    }
    return folder.UsedQuotaFiles, folder.UsedQuotaSize, err
}

func (p *MemoryProvider) joinVirtualFoldersFields(user *User) []vfs.VirtualFolder {
    var folders []vfs.VirtualFolder
    for _, folder := range user.VirtualFolders {
        f, err := p.addOrGetFolderInternal(folder.Name, folder.MappedPath, user.Username)
        if err == nil {
            folder.UsedQuotaFiles = f.UsedQuotaFiles
            folder.UsedQuotaSize = f.UsedQuotaSize
            folder.LastQuotaUpdate = f.LastQuotaUpdate
            folder.ID = f.ID
            folder.MappedPath = f.MappedPath
            folders = append(folders, folder)
        }
    }
    return folders
}

func (p *MemoryProvider) removeUserFromFolderMapping(folderName, username string) {
    folder, err := p.folderExistsInternal(folderName)
    if err == nil {
        var usernames []string
        for _, user := range folder.Users {
            if user != username {
                usernames = append(usernames, user)
            }
        }
        folder.Users = usernames
        p.dbHandle.vfolders[folder.Name] = folder
    }
}

func (p *MemoryProvider) updateFoldersMappingInternal(folder vfs.BaseVirtualFolder) {
    p.dbHandle.vfolders[folder.Name] = folder
    if !utils.IsStringInSlice(folder.Name, p.dbHandle.vfoldersNames) {
        p.dbHandle.vfoldersNames = append(p.dbHandle.vfoldersNames, folder.Name)
        sort.Strings(p.dbHandle.vfoldersNames)
    }
}

func (p *MemoryProvider) addOrGetFolderInternal(folderName, folderMappedPath, username string) (vfs.BaseVirtualFolder, error) {
    folder, err := p.folderExistsInternal(folderName)
    if _, ok := err.(*RecordNotFoundError); ok {
        folder := vfs.BaseVirtualFolder{
            ID:              p.getNextFolderID(),
            Name:            folderName,
            MappedPath:      folderMappedPath,
            UsedQuotaSize:   0,
            UsedQuotaFiles:  0,
            LastQuotaUpdate: 0,
            Users:           []string{username},
        }
        p.updateFoldersMappingInternal(folder)
        return folder, nil
    }
    if err == nil && !utils.IsStringInSlice(username, folder.Users) {
        folder.Users = append(folder.Users, username)
        p.updateFoldersMappingInternal(folder)
    }
    return folder, err
}

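// addOrGetFolderInternal gives folder mappings get-or-create semantics: an
// unknown folder name is registered on the fly with the supplied mapped path
// and the referencing user, while a known folder simply gains the user in
// its Users list. This is why saving a user may implicitly create folders.
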
func (p *MemoryProvider) folderExistsInternal(name string) (vfs.BaseVirtualFolder, error) {
    if val, ok := p.dbHandle.vfolders[name]; ok {
        return val, nil
    }
    return vfs.BaseVirtualFolder{}, &RecordNotFoundError{err: fmt.Sprintf("folder %#v does not exist", name)}
}

func (p *MemoryProvider) getFolders(limit, offset int, order string) ([]vfs.BaseVirtualFolder, error) {
    folders := make([]vfs.BaseVirtualFolder, 0, limit)
    var err error
    p.dbHandle.Lock()
    defer p.dbHandle.Unlock()
    if p.dbHandle.isClosed {
        return folders, errMemoryProviderClosed
    }
    if limit <= 0 {
        return folders, err
    }
    itNum := 0
    if order == OrderASC {
        for _, name := range p.dbHandle.vfoldersNames {
            itNum++
            if itNum <= offset {
                continue
            }
            folder := p.dbHandle.vfolders[name]
            folders = append(folders, folder)
            if len(folders) >= limit {
                break
            }
        }
    } else {
        for i := len(p.dbHandle.vfoldersNames) - 1; i >= 0; i-- {
            itNum++
            if itNum <= offset {
                continue
            }
            name := p.dbHandle.vfoldersNames[i]
            folder := p.dbHandle.vfolders[name]
            folders = append(folders, folder)
            if len(folders) >= limit {
                break
            }
        }
    }
    return folders, err
}

func (p *MemoryProvider) getFolderByName(name string) (vfs.BaseVirtualFolder, error) {
    p.dbHandle.Lock()
    defer p.dbHandle.Unlock()
    if p.dbHandle.isClosed {
        return vfs.BaseVirtualFolder{}, errMemoryProviderClosed
    }
    return p.folderExistsInternal(name)
}

func (p *MemoryProvider) addFolder(folder *vfs.BaseVirtualFolder) error {
    err := ValidateFolder(folder)
    if err != nil {
        return err
    }

    p.dbHandle.Lock()
    defer p.dbHandle.Unlock()
    if p.dbHandle.isClosed {
        return errMemoryProviderClosed
    }

    _, err = p.folderExistsInternal(folder.Name)
    if err == nil {
        return fmt.Errorf("folder %#v already exists", folder.Name)
    }
    folder.ID = p.getNextFolderID()
    folder.Users = nil
    p.dbHandle.vfolders[folder.Name] = folder.GetACopy()
    p.dbHandle.vfoldersNames = append(p.dbHandle.vfoldersNames, folder.Name)
    sort.Strings(p.dbHandle.vfoldersNames)
    return nil
}

func (p *MemoryProvider) updateFolder(folder *vfs.BaseVirtualFolder) error {
    err := ValidateFolder(folder)
    if err != nil {
        return err
    }

    p.dbHandle.Lock()
    defer p.dbHandle.Unlock()
    if p.dbHandle.isClosed {
        return errMemoryProviderClosed
    }
    f, err := p.folderExistsInternal(folder.Name)
    if err != nil {
        return err
    }
    folder.ID = f.ID
    folder.LastQuotaUpdate = f.LastQuotaUpdate
    folder.UsedQuotaFiles = f.UsedQuotaFiles
    folder.UsedQuotaSize = f.UsedQuotaSize
    folder.Users = f.Users
    p.dbHandle.vfolders[folder.Name] = folder.GetACopy()
    return nil
}

func (p *MemoryProvider) deleteFolder(folder *vfs.BaseVirtualFolder) error {
    p.dbHandle.Lock()
    defer p.dbHandle.Unlock()
    if p.dbHandle.isClosed {
        return errMemoryProviderClosed
    }

    _, err := p.folderExistsInternal(folder.Name)
    if err != nil {
        return err
    }
    for _, username := range folder.Users {
        user, err := p.userExistsInternal(username)
        if err == nil {
            var folders []vfs.VirtualFolder
            for _, userFolder := range user.VirtualFolders {
                if folder.Name != userFolder.Name {
                    folders = append(folders, userFolder)
                }
            }
            user.VirtualFolders = folders
            p.dbHandle.users[user.Username] = user
        }
    }
    delete(p.dbHandle.vfolders, folder.Name)
    p.dbHandle.vfoldersNames = []string{}
    for name := range p.dbHandle.vfolders {
        p.dbHandle.vfoldersNames = append(p.dbHandle.vfoldersNames, name)
    }
    sort.Strings(p.dbHandle.vfoldersNames)
    return nil
}

func (p *MemoryProvider) getNextID() int64 {
    nextID := int64(1)
    for _, v := range p.dbHandle.users {
        if v.ID >= nextID {
            nextID = v.ID + 1
        }
    }
    return nextID
}

func (p *MemoryProvider) getNextFolderID() int64 {
    nextID := int64(1)
    for _, v := range p.dbHandle.vfolders {
        if v.ID >= nextID {
            nextID = v.ID + 1
        }
    }
    return nextID
}

func (p *MemoryProvider) getNextAdminID() int64 {
    nextID := int64(1)
    for _, a := range p.dbHandle.admins {
        if a.ID >= nextID {
            nextID = a.ID + 1
        }
    }
    return nextID
}

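// The getNext*ID helpers emulate an AUTO_INCREMENT column with a linear
// max+1 scan over the relevant map. That is O(n) per insert, which is fine
// for an in-memory store; note they rely on the caller already holding
// p.dbHandle's lock, since they are only invoked from locked sections.
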
func (p *MemoryProvider) clear() {
    p.dbHandle.Lock()
    defer p.dbHandle.Unlock()
    p.dbHandle.usernames = []string{}
    p.dbHandle.users = make(map[string]User)
    p.dbHandle.vfoldersNames = []string{}
    p.dbHandle.vfolders = make(map[string]vfs.BaseVirtualFolder)
    p.dbHandle.admins = make(map[string]Admin)
    p.dbHandle.adminsUsernames = []string{}
}

func (p *MemoryProvider) reloadConfig() error {
    if p.dbHandle.configFile == "" {
        providerLog(logger.LevelDebug, "no dump configuration file defined")
        return nil
    }
    providerLog(logger.LevelDebug, "loading dump from file: %#v", p.dbHandle.configFile)
    fi, err := os.Stat(p.dbHandle.configFile)
    if err != nil {
        providerLog(logger.LevelWarn, "error loading dump: %v", err)
        return err
    }
    if fi.Size() == 0 {
        err = errors.New("dump configuration file is invalid, its size must be > 0")
        providerLog(logger.LevelWarn, "error loading dump: %v", err)
        return err
    }
    if fi.Size() > 10485760 {
        err = errors.New("dump configuration file is invalid, its size must be <= 10485760 bytes")
        providerLog(logger.LevelWarn, "error loading dump: %v", err)
        return err
    }
    content, err := ioutil.ReadFile(p.dbHandle.configFile)
    if err != nil {
        providerLog(logger.LevelWarn, "error loading dump: %v", err)
        return err
    }
    dump, err := ParseDumpData(content)
    if err != nil {
        providerLog(logger.LevelWarn, "error loading dump: %v", err)
        return err
    }
    p.clear()

    if err := p.restoreFolders(&dump); err != nil {
        return err
    }

    if err := p.restoreUsers(&dump); err != nil {
        return err
    }

    if err := p.restoreAdmins(&dump); err != nil {
        return err
    }

    providerLog(logger.LevelDebug, "config loaded from file: %#v", p.dbHandle.configFile)
    return nil
}

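// Restore order matters here: folders are loaded before users so that the
// per-user virtual folder references resolve against the dumped folder
// records (quota usage included) rather than freshly auto-created ones;
// admins depend on neither, so they are restored last. The 10485760-byte
// cap above is a sanity limit on the dump file, not a provider constraint.
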
func (p *MemoryProvider) restoreAdmins(dump *BackupData) error {
    for _, admin := range dump.Admins {
        admin := admin // pin
        a, err := p.adminExists(admin.Username)
        if err == nil {
            admin.ID = a.ID
            err = p.updateAdmin(&admin)
            if err != nil {
                providerLog(logger.LevelWarn, "error updating admin %#v: %v", admin.Username, err)
                return err
            }
        } else {
            err = p.addAdmin(&admin)
            if err != nil {
                providerLog(logger.LevelWarn, "error adding admin %#v: %v", admin.Username, err)
                return err
            }
        }
    }
    return nil
}

func (p *MemoryProvider) restoreFolders(dump *BackupData) error {
    for _, folder := range dump.Folders {
        folder := folder // pin
        f, err := p.getFolderByName(folder.Name)
        if err == nil {
            folder.ID = f.ID
            err = p.updateFolder(&folder)
            if err != nil {
                providerLog(logger.LevelWarn, "error updating folder %#v: %v", folder.Name, err)
                return err
            }
        } else {
            folder.Users = nil
            err = p.addFolder(&folder)
            if err != nil {
                providerLog(logger.LevelWarn, "error adding folder %#v: %v", folder.Name, err)
                return err
            }
        }
    }
    return nil
}

func (p *MemoryProvider) restoreUsers(dump *BackupData) error {
    for _, user := range dump.Users {
        user := user // pin
        u, err := p.userExists(user.Username)
        if err == nil {
            user.ID = u.ID
            err = p.updateUser(&user)
            if err != nil {
                providerLog(logger.LevelWarn, "error updating user %#v: %v", user.Username, err)
                return err
            }
        } else {
            err = p.addUser(&user)
            if err != nil {
                providerLog(logger.LevelWarn, "error adding user %#v: %v", user.Username, err)
                return err
            }
        }
    }
    return nil
}

// initializeDatabase does nothing, no initialization is needed for the memory provider
func (p *MemoryProvider) initializeDatabase() error {
    return ErrNoInitRequired
}

func (p *MemoryProvider) migrateDatabase() error {
    return ErrNoInitRequired
}

func (p *MemoryProvider) revertDatabase(targetVersion int) error {
    return errors.New("memory provider does not store data, revert not possible")
}

dataprovider/mysql.go
@@ -1,11 +1,58 @@
// +build !nomysql

package dataprovider

import (
    "context"
    "database/sql"
    "fmt"
    "strings"
    "time"

    // we import go-sql-driver/mysql here to be able to disable MySQL support using a build tag
    _ "github.com/go-sql-driver/mysql"

    "github.com/drakkan/sftpgo/logger"
    "github.com/drakkan/sftpgo/version"
    "github.com/drakkan/sftpgo/vfs"
)

const (
    mysqlUsersTableSQL = "CREATE TABLE `{{users}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
        "`username` varchar(255) NOT NULL UNIQUE, `password` varchar(255) NULL, `public_keys` longtext NULL, " +
        "`home_dir` varchar(255) NOT NULL, `uid` integer NOT NULL, `gid` integer NOT NULL, `max_sessions` integer NOT NULL, " +
        " `quota_size` bigint NOT NULL, `quota_files` integer NOT NULL, `permissions` longtext NOT NULL, " +
        "`used_quota_size` bigint NOT NULL, `used_quota_files` integer NOT NULL, `last_quota_update` bigint NOT NULL, " +
        "`upload_bandwidth` integer NOT NULL, `download_bandwidth` integer NOT NULL, `expiration_date` bigint(20) NOT NULL, " +
        "`last_login` bigint(20) NOT NULL, `status` int(11) NOT NULL, `filters` longtext DEFAULT NULL, " +
        "`filesystem` longtext DEFAULT NULL);"
    mysqlSchemaTableSQL = "CREATE TABLE `{{schema_version}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `version` integer NOT NULL);"
    mysqlV2SQL          = "ALTER TABLE `{{users}}` ADD COLUMN `virtual_folders` longtext NULL;"
    mysqlV3SQL          = "ALTER TABLE `{{users}}` MODIFY `password` longtext NULL;"
    mysqlV4SQL          = "CREATE TABLE `{{folders}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `path` varchar(512) NOT NULL UNIQUE," +
        "`used_quota_size` bigint NOT NULL, `used_quota_files` integer NOT NULL, `last_quota_update` bigint NOT NULL);" +
        "ALTER TABLE `{{users}}` MODIFY `home_dir` varchar(512) NOT NULL;" +
        "ALTER TABLE `{{users}}` DROP COLUMN `virtual_folders`;" +
        "CREATE TABLE `{{folders_mapping}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `virtual_path` varchar(512) NOT NULL, " +
        "`quota_size` bigint NOT NULL, `quota_files` integer NOT NULL, `folder_id` integer NOT NULL, `user_id` integer NOT NULL);" +
        "ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `unique_mapping` UNIQUE (`user_id`, `folder_id`);" +
        "ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `folders_mapping_folder_id_fk_folders_id` FOREIGN KEY (`folder_id`) REFERENCES `{{folders}}` (`id`) ON DELETE CASCADE;" +
        "ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `folders_mapping_user_id_fk_users_id` FOREIGN KEY (`user_id`) REFERENCES `{{users}}` (`id`) ON DELETE CASCADE;"
    mysqlV6SQL     = "ALTER TABLE `{{users}}` ADD COLUMN `additional_info` longtext NULL;"
    mysqlV6DownSQL = "ALTER TABLE `{{users}}` DROP COLUMN `additional_info`;"
    mysqlV7SQL     = "CREATE TABLE `{{admins}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `username` varchar(255) NOT NULL UNIQUE, " +
        "`password` varchar(255) NOT NULL, `email` varchar(255) NULL, `status` integer NOT NULL, `permissions` longtext NOT NULL, " +
        "`filters` longtext NULL, `additional_info` longtext NULL);"
    mysqlV7DownSQL = "DROP TABLE `{{admins}}` CASCADE;"
    mysqlV8SQL     = "ALTER TABLE `{{folders}}` ADD COLUMN `name` varchar(255) NULL;" +
        "ALTER TABLE `{{folders}}` MODIFY `path` varchar(512) NULL;" +
        "ALTER TABLE `{{folders}}` DROP INDEX `path`;" +
        "UPDATE `{{folders}}` f1 SET name = CONCAT('folder',f1.id);" +
        "ALTER TABLE `{{folders}}` MODIFY `name` varchar(255) NOT NULL;" +
        "ALTER TABLE `{{folders}}` ADD CONSTRAINT `name` UNIQUE (`name`);"
    mysqlV8DownSQL = "ALTER TABLE `{{folders}}` DROP COLUMN `name`;" +
        "ALTER TABLE `{{folders}}` MODIFY `path` varchar(512) NOT NULL;" +
        "ALTER TABLE `{{folders}}` ADD CONSTRAINT `path` UNIQUE (`path`);"
)

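// The `{{users}}`, `{{folders}}`, `{{admins}}` and `{{schema_version}}`
// tokens above are placeholders, substituted with the configured table names
// before execution. A minimal sketch of the substitution, as done later in
// this file (sqlTableUsers is the configured value, "users" by default):
//
//   sqlUsers := strings.Replace(mysqlUsersTableSQL, "{{users}}", sqlTableUsers, 1)
//
// Keeping the names templated lets several SFTPGo instances share one
// database with different table prefixes.
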
// MySQLProvider auth provider for MySQL/MariaDB database
@@ -13,16 +60,25 @@ type MySQLProvider struct {
    dbHandle *sql.DB
}

func init() {
    version.AddFeature("+mysql")
}

func initializeMySQLProvider() error {
    var err error
-   logSender = MySQLDataProviderName
+   logSender = fmt.Sprintf("dataprovider_%v", MySQLDataProviderName)
    dbHandle, err := sql.Open("mysql", getMySQLConnectionString(false))
    if err == nil {
        providerLog(logger.LevelDebug, "mysql database handle created, connection string: %#v, pool size: %v",
            getMySQLConnectionString(true), config.PoolSize)
        dbHandle.SetMaxOpenConns(config.PoolSize)
-       dbHandle.SetConnMaxLifetime(1800 * time.Second)
-       provider = MySQLProvider{dbHandle: dbHandle}
+       if config.PoolSize > 0 {
+           dbHandle.SetMaxIdleConns(config.PoolSize)
+       } else {
+           dbHandle.SetMaxIdleConns(2)
+       }
+       dbHandle.SetConnMaxLifetime(240 * time.Second)
+       provider = &MySQLProvider{dbHandle: dbHandle}
    } else {
        providerLog(logger.LevelWarn, "error creating mysql database handler, connection string: %#v, error: %v",
            getMySQLConnectionString(true), err)
@@ -31,7 +87,7 @@ func initializeMySQLProvider() error {
}
func getMySQLConnectionString(redactedPwd bool) string {
    var connectionString string
-   if len(config.ConnectionString) == 0 {
+   if config.ConnectionString == "" {
        password := config.Password
        if redactedPwd {
            password = "[redacted]"
@@ -44,50 +100,352 @@ func getMySQLConnectionString(redactedPwd bool) string {
    return connectionString
}

-func (p MySQLProvider) checkAvailability() error {
+func (p *MySQLProvider) checkAvailability() error {
    return sqlCommonCheckAvailability(p.dbHandle)
}

-func (p MySQLProvider) validateUserAndPass(username string, password string) (User, error) {
-   return sqlCommonValidateUserAndPass(username, password, p.dbHandle)
+func (p *MySQLProvider) validateUserAndPass(username, password, ip, protocol string) (User, error) {
+   return sqlCommonValidateUserAndPass(username, password, ip, protocol, p.dbHandle)
}

-func (p MySQLProvider) validateUserAndPubKey(username string, publicKey string) (User, string, error) {
+func (p *MySQLProvider) validateUserAndPubKey(username string, publicKey []byte) (User, string, error) {
    return sqlCommonValidateUserAndPubKey(username, publicKey, p.dbHandle)
}

-func (p MySQLProvider) getUserByID(ID int64) (User, error) {
-   return sqlCommonGetUserByID(ID, p.dbHandle)
-}

-func (p MySQLProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
+func (p *MySQLProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
    return sqlCommonUpdateQuota(username, filesAdd, sizeAdd, reset, p.dbHandle)
}

-func (p MySQLProvider) getUsedQuota(username string) (int, int64, error) {
+func (p *MySQLProvider) getUsedQuota(username string) (int, int64, error) {
    return sqlCommonGetUsedQuota(username, p.dbHandle)
}

-func (p MySQLProvider) userExists(username string) (User, error) {
-   return sqlCommonCheckUserExists(username, p.dbHandle)
+func (p *MySQLProvider) updateLastLogin(username string) error {
+   return sqlCommonUpdateLastLogin(username, p.dbHandle)
}

-func (p MySQLProvider) addUser(user User) error {
+func (p *MySQLProvider) userExists(username string) (User, error) {
+   return sqlCommonGetUserByUsername(username, p.dbHandle)
+}
+
+func (p *MySQLProvider) addUser(user *User) error {
    return sqlCommonAddUser(user, p.dbHandle)
}

-func (p MySQLProvider) updateUser(user User) error {
+func (p *MySQLProvider) updateUser(user *User) error {
    return sqlCommonUpdateUser(user, p.dbHandle)
}

-func (p MySQLProvider) deleteUser(user User) error {
+func (p *MySQLProvider) deleteUser(user *User) error {
    return sqlCommonDeleteUser(user, p.dbHandle)
}

-func (p MySQLProvider) getUsers(limit int, offset int, order string, username string) ([]User, error) {
-   return sqlCommonGetUsers(limit, offset, order, username, p.dbHandle)
+func (p *MySQLProvider) dumpUsers() ([]User, error) {
+   return sqlCommonDumpUsers(p.dbHandle)
}

-func (p MySQLProvider) close() error {
+func (p *MySQLProvider) getUsers(limit int, offset int, order string) ([]User, error) {
+   return sqlCommonGetUsers(limit, offset, order, p.dbHandle)
+}

func (p *MySQLProvider) dumpFolders() ([]vfs.BaseVirtualFolder, error) {
    return sqlCommonDumpFolders(p.dbHandle)
}

func (p *MySQLProvider) getFolders(limit, offset int, order string) ([]vfs.BaseVirtualFolder, error) {
    return sqlCommonGetFolders(limit, offset, order, p.dbHandle)
}

func (p *MySQLProvider) getFolderByName(name string) (vfs.BaseVirtualFolder, error) {
    ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
    defer cancel()
    return sqlCommonGetFolderByName(ctx, name, p.dbHandle)
}

func (p *MySQLProvider) addFolder(folder *vfs.BaseVirtualFolder) error {
    return sqlCommonAddFolder(folder, p.dbHandle)
}

func (p *MySQLProvider) updateFolder(folder *vfs.BaseVirtualFolder) error {
    return sqlCommonUpdateFolder(folder, p.dbHandle)
}

func (p *MySQLProvider) deleteFolder(folder *vfs.BaseVirtualFolder) error {
    return sqlCommonDeleteFolder(folder, p.dbHandle)
}

func (p *MySQLProvider) updateFolderQuota(name string, filesAdd int, sizeAdd int64, reset bool) error {
    return sqlCommonUpdateFolderQuota(name, filesAdd, sizeAdd, reset, p.dbHandle)
}

func (p *MySQLProvider) getUsedFolderQuota(name string) (int, int64, error) {
    return sqlCommonGetFolderUsedQuota(name, p.dbHandle)
}

func (p *MySQLProvider) adminExists(username string) (Admin, error) {
    return sqlCommonGetAdminByUsername(username, p.dbHandle)
}

func (p *MySQLProvider) addAdmin(admin *Admin) error {
    return sqlCommonAddAdmin(admin, p.dbHandle)
}

func (p *MySQLProvider) updateAdmin(admin *Admin) error {
    return sqlCommonUpdateAdmin(admin, p.dbHandle)
}

func (p *MySQLProvider) deleteAdmin(admin *Admin) error {
    return sqlCommonDeleteAdmin(admin, p.dbHandle)
}

func (p *MySQLProvider) getAdmins(limit int, offset int, order string) ([]Admin, error) {
    return sqlCommonGetAdmins(limit, offset, order, p.dbHandle)
}

func (p *MySQLProvider) dumpAdmins() ([]Admin, error) {
    return sqlCommonDumpAdmins(p.dbHandle)
}

func (p *MySQLProvider) validateAdminAndPass(username, password, ip string) (Admin, error) {
    return sqlCommonValidateAdminAndPass(username, password, ip, p.dbHandle)
}

func (p *MySQLProvider) close() error {
    return p.dbHandle.Close()
}

func (p *MySQLProvider) reloadConfig() error {
    return nil
}

// initializeDatabase creates the initial database structure
func (p *MySQLProvider) initializeDatabase() error {
    dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, false)
    if err == nil && dbVersion.Version > 0 {
        return ErrNoInitRequired
    }
    sqlUsers := strings.Replace(mysqlUsersTableSQL, "{{users}}", sqlTableUsers, 1)
    ctx, cancel := context.WithTimeout(context.Background(), longSQLQueryTimeout)
    defer cancel()

    tx, err := p.dbHandle.BeginTx(ctx, nil)
    if err != nil {
        return err
    }
    _, err = tx.Exec(sqlUsers)
    if err != nil {
        return err
    }
    _, err = tx.Exec(strings.Replace(mysqlSchemaTableSQL, "{{schema_version}}", sqlTableSchemaVersion, 1))
    if err != nil {
        return err
    }
    _, err = tx.Exec(strings.Replace(initialDBVersionSQL, "{{schema_version}}", sqlTableSchemaVersion, 1))
    if err != nil {
        return err
    }
    return tx.Commit()
}

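// A caveat worth noting (not stated in the code): MySQL DDL statements cause
// an implicit commit, so the BeginTx/Commit pair above cannot actually roll
// back a partially created schema on MySQL; it mainly keeps this code path
// uniform with providers whose DDL is transactional, such as PostgreSQL.
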
func (p *MySQLProvider) migrateDatabase() error {
    dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
    if err != nil {
        return err
    }
    if dbVersion.Version == sqlDatabaseVersion {
        providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", dbVersion.Version)
        return ErrNoInitRequired
    }
    switch dbVersion.Version {
    case 1:
        return updateMySQLDatabaseFromV1(p.dbHandle)
    case 2:
        return updateMySQLDatabaseFromV2(p.dbHandle)
    case 3:
        return updateMySQLDatabaseFromV3(p.dbHandle)
    case 4:
        return updateMySQLDatabaseFromV4(p.dbHandle)
    case 5:
        return updateMySQLDatabaseFromV5(p.dbHandle)
    case 6:
        return updateMySQLDatabaseFromV6(p.dbHandle)
    case 7:
        return updateMySQLDatabaseFromV7(p.dbHandle)
    default:
        if dbVersion.Version > sqlDatabaseVersion {
            providerLog(logger.LevelWarn, "database version %v is newer than the supported: %v", dbVersion.Version,
                sqlDatabaseVersion)
            logger.WarnToConsole("database version %v is newer than the supported: %v", dbVersion.Version,
                sqlDatabaseVersion)
            return nil
        }
        return fmt.Errorf("Database version not handled: %v", dbVersion.Version)
    }
}

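// Each updateMySQLDatabaseFromVn helper below applies one schema step and
// then chains into the next, so migrating from version 1 effectively runs
// 1->2->3->...->8. Starting the switch at the detected version therefore
// upgrades any old schema to the current one in a single call.
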
//nolint:dupl
func (p *MySQLProvider) revertDatabase(targetVersion int) error {
    dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
    if err != nil {
        return err
    }
    if dbVersion.Version == targetVersion {
        return fmt.Errorf("current version matches target version, nothing to do")
    }
    switch dbVersion.Version {
    case 8:
        err = downgradeMySQLDatabaseFrom8To7(p.dbHandle)
        if err != nil {
            return err
        }
        err = downgradeMySQLDatabaseFrom7To6(p.dbHandle)
        if err != nil {
            return err
        }
        err = downgradeMySQLDatabaseFrom6To5(p.dbHandle)
        if err != nil {
            return err
        }
        return downgradeMySQLDatabaseFrom5To4(p.dbHandle)
    case 7:
        err = downgradeMySQLDatabaseFrom7To6(p.dbHandle)
        if err != nil {
            return err
        }
        err = downgradeMySQLDatabaseFrom6To5(p.dbHandle)
        if err != nil {
            return err
        }
        return downgradeMySQLDatabaseFrom5To4(p.dbHandle)
    case 6:
        err = downgradeMySQLDatabaseFrom6To5(p.dbHandle)
        if err != nil {
            return err
        }
        return downgradeMySQLDatabaseFrom5To4(p.dbHandle)
    case 5:
        return downgradeMySQLDatabaseFrom5To4(p.dbHandle)
    default:
        return fmt.Errorf("Database version not handled: %v", dbVersion.Version)
    }
}

func updateMySQLDatabaseFromV1(dbHandle *sql.DB) error {
    err := updateMySQLDatabaseFrom1To2(dbHandle)
    if err != nil {
        return err
    }
    return updateMySQLDatabaseFromV2(dbHandle)
}

func updateMySQLDatabaseFromV2(dbHandle *sql.DB) error {
    err := updateMySQLDatabaseFrom2To3(dbHandle)
    if err != nil {
        return err
    }
    return updateMySQLDatabaseFromV3(dbHandle)
}

func updateMySQLDatabaseFromV3(dbHandle *sql.DB) error {
    err := updateMySQLDatabaseFrom3To4(dbHandle)
    if err != nil {
        return err
    }
    return updateMySQLDatabaseFromV4(dbHandle)
}

func updateMySQLDatabaseFromV4(dbHandle *sql.DB) error {
    err := updateMySQLDatabaseFrom4To5(dbHandle)
    if err != nil {
        return err
    }
    return updateMySQLDatabaseFromV5(dbHandle)
}

func updateMySQLDatabaseFromV5(dbHandle *sql.DB) error {
    err := updateMySQLDatabaseFrom5To6(dbHandle)
    if err != nil {
        return err
    }
    return updateMySQLDatabaseFromV6(dbHandle)
}

func updateMySQLDatabaseFromV6(dbHandle *sql.DB) error {
    err := updateMySQLDatabaseFrom6To7(dbHandle)
    if err != nil {
        return err
    }
    return updateMySQLDatabaseFromV7(dbHandle)
}

func updateMySQLDatabaseFromV7(dbHandle *sql.DB) error {
    return updateMySQLDatabaseFrom7To8(dbHandle)
}

func updateMySQLDatabaseFrom1To2(dbHandle *sql.DB) error {
    logger.InfoToConsole("updating database version: 1 -> 2")
    providerLog(logger.LevelInfo, "updating database version: 1 -> 2")
    sql := strings.Replace(mysqlV2SQL, "{{users}}", sqlTableUsers, 1)
    return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 2)
}

func updateMySQLDatabaseFrom2To3(dbHandle *sql.DB) error {
    logger.InfoToConsole("updating database version: 2 -> 3")
    providerLog(logger.LevelInfo, "updating database version: 2 -> 3")
    sql := strings.Replace(mysqlV3SQL, "{{users}}", sqlTableUsers, 1)
    return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 3)
}

func updateMySQLDatabaseFrom3To4(dbHandle *sql.DB) error {
    return sqlCommonUpdateDatabaseFrom3To4(mysqlV4SQL, dbHandle)
}

func updateMySQLDatabaseFrom4To5(dbHandle *sql.DB) error {
    return sqlCommonUpdateDatabaseFrom4To5(dbHandle)
}

func updateMySQLDatabaseFrom5To6(dbHandle *sql.DB) error {
    logger.InfoToConsole("updating database version: 5 -> 6")
    providerLog(logger.LevelInfo, "updating database version: 5 -> 6")
    sql := strings.Replace(mysqlV6SQL, "{{users}}", sqlTableUsers, 1)
    return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 6)
}

func updateMySQLDatabaseFrom6To7(dbHandle *sql.DB) error {
    logger.InfoToConsole("updating database version: 6 -> 7")
    providerLog(logger.LevelInfo, "updating database version: 6 -> 7")
    sql := strings.Replace(mysqlV7SQL, "{{admins}}", sqlTableAdmins, 1)
    return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 7)
}

func updateMySQLDatabaseFrom7To8(dbHandle *sql.DB) error {
    logger.InfoToConsole("updating database version: 7 -> 8")
    providerLog(logger.LevelInfo, "updating database version: 7 -> 8")
    sql := strings.ReplaceAll(mysqlV8SQL, "{{folders}}", sqlTableFolders)
    return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 8)
}

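// Note the strings.Split(sql, ";") above: mysqlV8SQL bundles several
// statements, and a go-sql-driver/mysql connection rejects multi-statement
// strings unless the multiStatements parameter is enabled, so each statement
// is executed separately. The PostgreSQL equivalent later in this diff
// passes the whole script as a single string instead, since lib/pq accepts
// that form.
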
func downgradeMySQLDatabaseFrom8To7(dbHandle *sql.DB) error {
    logger.InfoToConsole("downgrading database version: 8 -> 7")
    providerLog(logger.LevelInfo, "downgrading database version: 8 -> 7")
    sql := strings.ReplaceAll(mysqlV8DownSQL, "{{folders}}", sqlTableFolders)
    return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 7)
}

func downgradeMySQLDatabaseFrom7To6(dbHandle *sql.DB) error {
    logger.InfoToConsole("downgrading database version: 7 -> 6")
    providerLog(logger.LevelInfo, "downgrading database version: 7 -> 6")
    sql := strings.Replace(mysqlV7DownSQL, "{{admins}}", sqlTableAdmins, 1)
    return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 6)
}

func downgradeMySQLDatabaseFrom6To5(dbHandle *sql.DB) error {
    logger.InfoToConsole("downgrading database version: 6 -> 5")
    providerLog(logger.LevelInfo, "downgrading database version: 6 -> 5")
    sql := strings.Replace(mysqlV6DownSQL, "{{users}}", sqlTableUsers, 1)
    return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 5)
}

func downgradeMySQLDatabaseFrom5To4(dbHandle *sql.DB) error {
    return sqlCommonDowngradeDatabaseFrom5To4(dbHandle)
}

dataprovider/mysql_disabled.go (new file, 17 lines)
@@ -0,0 +1,17 @@
// +build nomysql

package dataprovider

import (
    "errors"

    "github.com/drakkan/sftpgo/version"
)

func init() {
    version.AddFeature("-mysql")
}

func initializeMySQLProvider() error {
    return errors.New("MySQL disabled at build time")
}

dataprovider/pgsql.go
@@ -1,10 +1,61 @@
// +build !nopgsql

package dataprovider

import (
    "context"
    "database/sql"
    "fmt"
    "strings"
    "time"

    // we import lib/pq here to be able to disable PostgreSQL support using a build tag
    _ "github.com/lib/pq"

    "github.com/drakkan/sftpgo/logger"
    "github.com/drakkan/sftpgo/version"
    "github.com/drakkan/sftpgo/vfs"
)

const (
    pgsqlUsersTableSQL = `CREATE TABLE "{{users}}" ("id" serial NOT NULL PRIMARY KEY, "username" varchar(255) NOT NULL UNIQUE,
"password" varchar(255) NULL, "public_keys" text NULL, "home_dir" varchar(255) NOT NULL, "uid" integer NOT NULL,
"gid" integer NOT NULL, "max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL,
"permissions" text NOT NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL,
"last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL,
"expiration_date" bigint NOT NULL, "last_login" bigint NOT NULL, "status" integer NOT NULL, "filters" text NULL,
"filesystem" text NULL);`
    pgsqlSchemaTableSQL = `CREATE TABLE "{{schema_version}}" ("id" serial NOT NULL PRIMARY KEY, "version" integer NOT NULL);`
    pgsqlV2SQL          = `ALTER TABLE "{{users}}" ADD COLUMN "virtual_folders" text NULL;`
    pgsqlV3SQL          = `ALTER TABLE "{{users}}" ALTER COLUMN "password" TYPE text USING "password"::text;`
    pgsqlV4SQL          = `CREATE TABLE "{{folders}}" ("id" serial NOT NULL PRIMARY KEY, "path" varchar(512) NOT NULL UNIQUE, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL);
ALTER TABLE "{{users}}" ALTER COLUMN "home_dir" TYPE varchar(512) USING "home_dir"::varchar(512);
ALTER TABLE "{{users}}" DROP COLUMN "virtual_folders" CASCADE;
CREATE TABLE "{{folders_mapping}}" ("id" serial NOT NULL PRIMARY KEY, "virtual_path" varchar(512) NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "folder_id" integer NOT NULL, "user_id" integer NOT NULL);
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "unique_mapping" UNIQUE ("user_id", "folder_id");
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "folders_mapping_folder_id_fk_folders_id" FOREIGN KEY ("folder_id") REFERENCES "{{folders}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "folders_mapping_user_id_fk_users_id" FOREIGN KEY ("user_id") REFERENCES "{{users}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
CREATE INDEX "folders_mapping_folder_id_idx" ON "{{folders_mapping}}" ("folder_id");
CREATE INDEX "folders_mapping_user_id_idx" ON "{{folders_mapping}}" ("user_id");
`
    pgsqlV6SQL     = `ALTER TABLE "{{users}}" ADD COLUMN "additional_info" text NULL;`
    pgsqlV6DownSQL = `ALTER TABLE "{{users}}" DROP COLUMN "additional_info" CASCADE;`
    pgsqlV7SQL     = `CREATE TABLE "{{admins}}" ("id" serial NOT NULL PRIMARY KEY, "username" varchar(255) NOT NULL UNIQUE,
"password" varchar(255) NOT NULL, "email" varchar(255) NULL, "status" integer NOT NULL, "permissions" text NOT NULL,
"filters" text NULL, "additional_info" text NULL);
`
    pgsqlV7DownSQL = `DROP TABLE "{{admins}}" CASCADE;`
    pgsqlV8SQL     = `ALTER TABLE "{{folders}}" ADD COLUMN "name" varchar(255) NULL;
ALTER TABLE "{{folders}}" ALTER COLUMN "path" DROP NOT NULL;
ALTER TABLE "{{folders}}" DROP CONSTRAINT IF EXISTS folders_path_key;
UPDATE "{{folders}}" f1 SET name = (SELECT CONCAT('folder',f2.id) FROM "{{folders}}" f2 WHERE f2.id = f1.id);
ALTER TABLE "{{folders}}" ALTER COLUMN "name" SET NOT NULL;
ALTER TABLE "{{folders}}" ADD CONSTRAINT "folders_name_uniq" UNIQUE ("name");
`
    pgsqlV8DownSQL = `ALTER TABLE "{{folders}}" DROP COLUMN "name" CASCADE;
ALTER TABLE "{{folders}}" ALTER COLUMN "path" SET NOT NULL;
ALTER TABLE "{{folders}}" ADD CONSTRAINT folders_path_key UNIQUE (path);
`
)

// PGSQLProvider auth provider for PostgreSQL database
@@ -12,15 +63,25 @@ type PGSQLProvider struct {
    dbHandle *sql.DB
}

func init() {
    version.AddFeature("+pgsql")
}

func initializePGSQLProvider() error {
    var err error
-   logSender = PGSQLDataProviderName
+   logSender = fmt.Sprintf("dataprovider_%v", PGSQLDataProviderName)
    dbHandle, err := sql.Open("postgres", getPGSQLConnectionString(false))
    if err == nil {
        providerLog(logger.LevelDebug, "postgres database handle created, connection string: %#v, pool size: %v",
            getPGSQLConnectionString(true), config.PoolSize)
        dbHandle.SetMaxOpenConns(config.PoolSize)
-       provider = PGSQLProvider{dbHandle: dbHandle}
+       if config.PoolSize > 0 {
+           dbHandle.SetMaxIdleConns(config.PoolSize)
+       } else {
+           dbHandle.SetMaxIdleConns(2)
+       }
+       dbHandle.SetConnMaxLifetime(240 * time.Second)
+       provider = &PGSQLProvider{dbHandle: dbHandle}
    } else {
        providerLog(logger.LevelWarn, "error creating postgres database handler, connection string: %#v, error: %v",
            getPGSQLConnectionString(true), err)
@@ -30,7 +91,7 @@ func initializePGSQLProvider() error {

func getPGSQLConnectionString(redactedPwd bool) string {
    var connectionString string
-   if len(config.ConnectionString) == 0 {
+   if config.ConnectionString == "" {
        password := config.Password
        if redactedPwd {
            password = "[redacted]"
@@ -43,50 +104,352 @@ func getPGSQLConnectionString(redactedPwd bool) string {
    return connectionString
}

-func (p PGSQLProvider) checkAvailability() error {
+func (p *PGSQLProvider) checkAvailability() error {
    return sqlCommonCheckAvailability(p.dbHandle)
}

-func (p PGSQLProvider) validateUserAndPass(username string, password string) (User, error) {
-   return sqlCommonValidateUserAndPass(username, password, p.dbHandle)
+func (p *PGSQLProvider) validateUserAndPass(username, password, ip, protocol string) (User, error) {
+   return sqlCommonValidateUserAndPass(username, password, ip, protocol, p.dbHandle)
}

-func (p PGSQLProvider) validateUserAndPubKey(username string, publicKey string) (User, string, error) {
+func (p *PGSQLProvider) validateUserAndPubKey(username string, publicKey []byte) (User, string, error) {
    return sqlCommonValidateUserAndPubKey(username, publicKey, p.dbHandle)
}

-func (p PGSQLProvider) getUserByID(ID int64) (User, error) {
-   return sqlCommonGetUserByID(ID, p.dbHandle)
-}

-func (p PGSQLProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
+func (p *PGSQLProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
    return sqlCommonUpdateQuota(username, filesAdd, sizeAdd, reset, p.dbHandle)
}

-func (p PGSQLProvider) getUsedQuota(username string) (int, int64, error) {
+func (p *PGSQLProvider) getUsedQuota(username string) (int, int64, error) {
    return sqlCommonGetUsedQuota(username, p.dbHandle)
}

-func (p PGSQLProvider) userExists(username string) (User, error) {
-   return sqlCommonCheckUserExists(username, p.dbHandle)
+func (p *PGSQLProvider) updateLastLogin(username string) error {
+   return sqlCommonUpdateLastLogin(username, p.dbHandle)
}

-func (p PGSQLProvider) addUser(user User) error {
+func (p *PGSQLProvider) userExists(username string) (User, error) {
+   return sqlCommonGetUserByUsername(username, p.dbHandle)
+}
+
+func (p *PGSQLProvider) addUser(user *User) error {
    return sqlCommonAddUser(user, p.dbHandle)
}

-func (p PGSQLProvider) updateUser(user User) error {
+func (p *PGSQLProvider) updateUser(user *User) error {
    return sqlCommonUpdateUser(user, p.dbHandle)
}

-func (p PGSQLProvider) deleteUser(user User) error {
+func (p *PGSQLProvider) deleteUser(user *User) error {
    return sqlCommonDeleteUser(user, p.dbHandle)
}

-func (p PGSQLProvider) getUsers(limit int, offset int, order string, username string) ([]User, error) {
-   return sqlCommonGetUsers(limit, offset, order, username, p.dbHandle)
+func (p *PGSQLProvider) dumpUsers() ([]User, error) {
+   return sqlCommonDumpUsers(p.dbHandle)
}

-func (p PGSQLProvider) close() error {
+func (p *PGSQLProvider) getUsers(limit int, offset int, order string) ([]User, error) {
+   return sqlCommonGetUsers(limit, offset, order, p.dbHandle)
+}

func (p *PGSQLProvider) dumpFolders() ([]vfs.BaseVirtualFolder, error) {
    return sqlCommonDumpFolders(p.dbHandle)
}

func (p *PGSQLProvider) getFolders(limit, offset int, order string) ([]vfs.BaseVirtualFolder, error) {
    return sqlCommonGetFolders(limit, offset, order, p.dbHandle)
}

func (p *PGSQLProvider) getFolderByName(name string) (vfs.BaseVirtualFolder, error) {
    ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
    defer cancel()
    return sqlCommonGetFolderByName(ctx, name, p.dbHandle)
}

func (p *PGSQLProvider) addFolder(folder *vfs.BaseVirtualFolder) error {
    return sqlCommonAddFolder(folder, p.dbHandle)
}

func (p *PGSQLProvider) updateFolder(folder *vfs.BaseVirtualFolder) error {
    return sqlCommonUpdateFolder(folder, p.dbHandle)
}

func (p *PGSQLProvider) deleteFolder(folder *vfs.BaseVirtualFolder) error {
    return sqlCommonDeleteFolder(folder, p.dbHandle)
}

func (p *PGSQLProvider) updateFolderQuota(name string, filesAdd int, sizeAdd int64, reset bool) error {
    return sqlCommonUpdateFolderQuota(name, filesAdd, sizeAdd, reset, p.dbHandle)
}

func (p *PGSQLProvider) getUsedFolderQuota(name string) (int, int64, error) {
    return sqlCommonGetFolderUsedQuota(name, p.dbHandle)
}

func (p *PGSQLProvider) adminExists(username string) (Admin, error) {
    return sqlCommonGetAdminByUsername(username, p.dbHandle)
}

func (p *PGSQLProvider) addAdmin(admin *Admin) error {
    return sqlCommonAddAdmin(admin, p.dbHandle)
}

func (p *PGSQLProvider) updateAdmin(admin *Admin) error {
    return sqlCommonUpdateAdmin(admin, p.dbHandle)
}

func (p *PGSQLProvider) deleteAdmin(admin *Admin) error {
    return sqlCommonDeleteAdmin(admin, p.dbHandle)
}

func (p *PGSQLProvider) getAdmins(limit int, offset int, order string) ([]Admin, error) {
    return sqlCommonGetAdmins(limit, offset, order, p.dbHandle)
}

func (p *PGSQLProvider) dumpAdmins() ([]Admin, error) {
    return sqlCommonDumpAdmins(p.dbHandle)
}

func (p *PGSQLProvider) validateAdminAndPass(username, password, ip string) (Admin, error) {
    return sqlCommonValidateAdminAndPass(username, password, ip, p.dbHandle)
}

func (p *PGSQLProvider) close() error {
    return p.dbHandle.Close()
}

func (p *PGSQLProvider) reloadConfig() error {
    return nil
}

// initializeDatabase creates the initial database structure
func (p *PGSQLProvider) initializeDatabase() error {
    dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, false)
    if err == nil && dbVersion.Version > 0 {
        return ErrNoInitRequired
    }
    sqlUsers := strings.Replace(pgsqlUsersTableSQL, "{{users}}", sqlTableUsers, 1)
    ctx, cancel := context.WithTimeout(context.Background(), longSQLQueryTimeout)
    defer cancel()

    tx, err := p.dbHandle.BeginTx(ctx, nil)
    if err != nil {
        return err
    }
    _, err = tx.Exec(sqlUsers)
    if err != nil {
        return err
    }
    _, err = tx.Exec(strings.Replace(pgsqlSchemaTableSQL, "{{schema_version}}", sqlTableSchemaVersion, 1))
    if err != nil {
        return err
    }
    _, err = tx.Exec(strings.Replace(initialDBVersionSQL, "{{schema_version}}", sqlTableSchemaVersion, 1))
    if err != nil {
        return err
    }
    return tx.Commit()
}

func (p *PGSQLProvider) migrateDatabase() error {
    dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
    if err != nil {
        return err
    }
    if dbVersion.Version == sqlDatabaseVersion {
        providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", dbVersion.Version)
        return ErrNoInitRequired
    }
    switch dbVersion.Version {
    case 1:
        return updatePGSQLDatabaseFromV1(p.dbHandle)
    case 2:
        return updatePGSQLDatabaseFromV2(p.dbHandle)
    case 3:
        return updatePGSQLDatabaseFromV3(p.dbHandle)
    case 4:
        return updatePGSQLDatabaseFromV4(p.dbHandle)
    case 5:
        return updatePGSQLDatabaseFromV5(p.dbHandle)
    case 6:
        return updatePGSQLDatabaseFromV6(p.dbHandle)
    case 7:
        return updatePGSQLDatabaseFromV7(p.dbHandle)
    default:
        if dbVersion.Version > sqlDatabaseVersion {
            providerLog(logger.LevelWarn, "database version %v is newer than the supported: %v", dbVersion.Version,
                sqlDatabaseVersion)
            logger.WarnToConsole("database version %v is newer than the supported: %v", dbVersion.Version,
                sqlDatabaseVersion)
            return nil
        }
        return fmt.Errorf("Database version not handled: %v", dbVersion.Version)
    }
}

//nolint:dupl
func (p *PGSQLProvider) revertDatabase(targetVersion int) error {
    dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
    if err != nil {
        return err
    }
    if dbVersion.Version == targetVersion {
        return fmt.Errorf("current version matches target version, nothing to do")
    }
    switch dbVersion.Version {
    case 8:
        err = downgradePGSQLDatabaseFrom8To7(p.dbHandle)
        if err != nil {
            return err
        }
        err = downgradePGSQLDatabaseFrom7To6(p.dbHandle)
        if err != nil {
            return err
        }
        err = downgradePGSQLDatabaseFrom6To5(p.dbHandle)
        if err != nil {
            return err
        }
        return downgradePGSQLDatabaseFrom5To4(p.dbHandle)
    case 7:
        err = downgradePGSQLDatabaseFrom7To6(p.dbHandle)
        if err != nil {
            return err
        }
        err = downgradePGSQLDatabaseFrom6To5(p.dbHandle)
        if err != nil {
            return err
        }
        return downgradePGSQLDatabaseFrom5To4(p.dbHandle)
    case 6:
        err = downgradePGSQLDatabaseFrom6To5(p.dbHandle)
        if err != nil {
            return err
        }
        return downgradePGSQLDatabaseFrom5To4(p.dbHandle)
    case 5:
        return downgradePGSQLDatabaseFrom5To4(p.dbHandle)
    default:
        return fmt.Errorf("Database version not handled: %v", dbVersion.Version)
    }
}

func updatePGSQLDatabaseFromV1(dbHandle *sql.DB) error {
    err := updatePGSQLDatabaseFrom1To2(dbHandle)
    if err != nil {
        return err
    }
    return updatePGSQLDatabaseFromV2(dbHandle)
}

func updatePGSQLDatabaseFromV2(dbHandle *sql.DB) error {
    err := updatePGSQLDatabaseFrom2To3(dbHandle)
    if err != nil {
        return err
    }
    return updatePGSQLDatabaseFromV3(dbHandle)
}

func updatePGSQLDatabaseFromV3(dbHandle *sql.DB) error {
    err := updatePGSQLDatabaseFrom3To4(dbHandle)
    if err != nil {
        return err
    }
    return updatePGSQLDatabaseFromV4(dbHandle)
}

func updatePGSQLDatabaseFromV4(dbHandle *sql.DB) error {
    err := updatePGSQLDatabaseFrom4To5(dbHandle)
    if err != nil {
        return err
    }
    return updatePGSQLDatabaseFromV5(dbHandle)
}

func updatePGSQLDatabaseFromV5(dbHandle *sql.DB) error {
    err := updatePGSQLDatabaseFrom5To6(dbHandle)
    if err != nil {
        return err
    }
    return updatePGSQLDatabaseFromV6(dbHandle)
}

func updatePGSQLDatabaseFromV6(dbHandle *sql.DB) error {
    err := updatePGSQLDatabaseFrom6To7(dbHandle)
    if err != nil {
        return err
    }
    return updatePGSQLDatabaseFromV7(dbHandle)
}

func updatePGSQLDatabaseFromV7(dbHandle *sql.DB) error {
    return updatePGSQLDatabaseFrom7To8(dbHandle)
}

func updatePGSQLDatabaseFrom1To2(dbHandle *sql.DB) error {
    logger.InfoToConsole("updating database version: 1 -> 2")
    providerLog(logger.LevelInfo, "updating database version: 1 -> 2")
    sql := strings.Replace(pgsqlV2SQL, "{{users}}", sqlTableUsers, 1)
    return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 2)
}

func updatePGSQLDatabaseFrom2To3(dbHandle *sql.DB) error {
    logger.InfoToConsole("updating database version: 2 -> 3")
    providerLog(logger.LevelInfo, "updating database version: 2 -> 3")
    sql := strings.Replace(pgsqlV3SQL, "{{users}}", sqlTableUsers, 1)
    return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 3)
}

func updatePGSQLDatabaseFrom3To4(dbHandle *sql.DB) error {
    return sqlCommonUpdateDatabaseFrom3To4(pgsqlV4SQL, dbHandle)
}

func updatePGSQLDatabaseFrom4To5(dbHandle *sql.DB) error {
    return sqlCommonUpdateDatabaseFrom4To5(dbHandle)
}

func updatePGSQLDatabaseFrom5To6(dbHandle *sql.DB) error {
    logger.InfoToConsole("updating database version: 5 -> 6")
    providerLog(logger.LevelInfo, "updating database version: 5 -> 6")
    sql := strings.Replace(pgsqlV6SQL, "{{users}}", sqlTableUsers, 1)
    return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 6)
}

func updatePGSQLDatabaseFrom6To7(dbHandle *sql.DB) error {
    logger.InfoToConsole("updating database version: 6 -> 7")
    providerLog(logger.LevelInfo, "updating database version: 6 -> 7")
    sql := strings.Replace(pgsqlV7SQL, "{{admins}}", sqlTableAdmins, 1)
    return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 7)
}

func updatePGSQLDatabaseFrom7To8(dbHandle *sql.DB) error {
    logger.InfoToConsole("updating database version: 7 -> 8")
    providerLog(logger.LevelInfo, "updating database version: 7 -> 8")
    sql := strings.ReplaceAll(pgsqlV8SQL, "{{folders}}", sqlTableFolders)
    return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 8)
}

||||

func downgradePGSQLDatabaseFrom8To7(dbHandle *sql.DB) error {
    logger.InfoToConsole("downgrading database version: 8 -> 7")
    providerLog(logger.LevelInfo, "downgrading database version: 8 -> 7")
    sql := strings.ReplaceAll(pgsqlV8DownSQL, "{{folders}}", sqlTableFolders)
    return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 7)
}

func downgradePGSQLDatabaseFrom7To6(dbHandle *sql.DB) error {
    logger.InfoToConsole("downgrading database version: 7 -> 6")
    providerLog(logger.LevelInfo, "downgrading database version: 7 -> 6")
    sql := strings.Replace(pgsqlV7DownSQL, "{{admins}}", sqlTableAdmins, 1)
    return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 6)
}

func downgradePGSQLDatabaseFrom6To5(dbHandle *sql.DB) error {
    logger.InfoToConsole("downgrading database version: 6 -> 5")
    providerLog(logger.LevelInfo, "downgrading database version: 6 -> 5")
    sql := strings.Replace(pgsqlV6DownSQL, "{{users}}", sqlTableUsers, 1)
    return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 5)
}

func downgradePGSQLDatabaseFrom5To4(dbHandle *sql.DB) error {
    return sqlCommonDowngradeDatabaseFrom5To4(dbHandle)
}
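
The SQL constants used by these helpers are templates: tokens such as "{{users}}", "{{admins}}" and "{{folders}}" are swapped for the configured table names right before execution, with strings.Replace(..., 1) when a token occurs once and strings.ReplaceAll when it repeats. A generic helper could centralize the substitution; a sketch under that assumption (expandPlaceholders is a hypothetical name, not part of the diff):

package migrations

import "strings"

// expandPlaceholders replaces every "{{token}}" in a SQL template with the
// mapped table name. Hypothetical helper, shown for illustration only.
func expandPlaceholders(tpl string, tables map[string]string) string {
    for token, name := range tables {
        tpl = strings.ReplaceAll(tpl, "{{"+token+"}}", name)
    }
    return tpl
}

// Example: expandPlaceholders(pgsqlV8DownSQL, map[string]string{"folders": sqlTableFolders})
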
dataprovider/pgsql_disabled.go (new file, 17 lines)
@@ -0,0 +1,17 @@
// +build nopgsql

package dataprovider

import (
    "errors"

    "github.com/drakkan/sftpgo/version"
)

func init() {
    version.AddFeature("-pgsql")
}

func initializePGSQLProvider() error {
    return errors.New("PostgreSQL disabled at build time")
}

File diff suppressed because it is too large
@@ -1,13 +1,103 @@
// +build !nosqlite

package dataprovider

import (
    "context"
    "database/sql"
    "errors"
    "fmt"
    "os"
    "path/filepath"
    "strings"

    // we import go-sqlite3 here to be able to disable SQLite support using a build tag
    _ "github.com/mattn/go-sqlite3"

    "github.com/drakkan/sftpgo/logger"
    "github.com/drakkan/sftpgo/utils"
    "github.com/drakkan/sftpgo/version"
    "github.com/drakkan/sftpgo/vfs"
)

const (
    sqliteUsersTableSQL = `CREATE TABLE "{{users}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255)
NOT NULL UNIQUE, "password" varchar(255) NULL, "public_keys" text NULL, "home_dir" varchar(255) NOT NULL, "uid" integer NOT NULL,
"gid" integer NOT NULL, "max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL,
"permissions" text NOT NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL,
"last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL,
"expiration_date" bigint NOT NULL, "last_login" bigint NOT NULL, "status" integer NOT NULL, "filters" text NULL,
"filesystem" text NULL);`
    sqliteSchemaTableSQL = `CREATE TABLE "{{schema_version}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "version" integer NOT NULL);`
    sqliteV2SQL = `ALTER TABLE "{{users}}" ADD COLUMN "virtual_folders" text NULL;`
    sqliteV3SQL = `CREATE TABLE "new__users" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE,
"password" text NULL, "public_keys" text NULL, "home_dir" varchar(255) NOT NULL, "uid" integer NOT NULL,
"gid" integer NOT NULL, "max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL,
"permissions" text NOT NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL,
"upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL, "expiration_date" bigint NOT NULL, "last_login" bigint NOT NULL,
"status" integer NOT NULL, "filters" text NULL, "filesystem" text NULL, "virtual_folders" text NULL);
INSERT INTO "new__users" ("id", "username", "public_keys", "home_dir", "uid", "gid", "max_sessions", "quota_size", "quota_files",
"permissions", "used_quota_size", "used_quota_files", "last_quota_update", "upload_bandwidth", "download_bandwidth", "expiration_date",
"last_login", "status", "filters", "filesystem", "virtual_folders", "password") SELECT "id", "username", "public_keys", "home_dir",
"uid", "gid", "max_sessions", "quota_size", "quota_files", "permissions", "used_quota_size", "used_quota_files", "last_quota_update",
"upload_bandwidth", "download_bandwidth", "expiration_date", "last_login", "status", "filters", "filesystem", "virtual_folders",
"password" FROM "{{users}}";
DROP TABLE "{{users}}";
ALTER TABLE "new__users" RENAME TO "{{users}}";`
    sqliteV4SQL = `CREATE TABLE "{{folders}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "path" varchar(512) NOT NULL UNIQUE,
"used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL);
CREATE TABLE "{{folders_mapping}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "virtual_path" varchar(512) NOT NULL,
"quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "folder_id" integer NOT NULL REFERENCES "{{folders}}" ("id")
ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED, "user_id" integer NOT NULL REFERENCES "{{users}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
CONSTRAINT "unique_mapping" UNIQUE ("user_id", "folder_id"));
CREATE TABLE "new__users" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE, "password" text NULL,
"public_keys" text NULL, "home_dir" varchar(512) NOT NULL, "uid" integer NOT NULL, "gid" integer NOT NULL, "max_sessions" integer NOT NULL,
"quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "permissions" text NOT NULL, "used_quota_size" bigint NOT NULL,
"used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL,
"expiration_date" bigint NOT NULL, "last_login" bigint NOT NULL, "status" integer NOT NULL, "filters" text NULL, "filesystem" text NULL);
INSERT INTO "new__users" ("id", "username", "password", "public_keys", "home_dir", "uid", "gid", "max_sessions", "quota_size", "quota_files",
"permissions", "used_quota_size", "used_quota_files", "last_quota_update", "upload_bandwidth", "download_bandwidth", "expiration_date",
"last_login", "status", "filters", "filesystem") SELECT "id", "username", "password", "public_keys", "home_dir", "uid", "gid", "max_sessions",
"quota_size", "quota_files", "permissions", "used_quota_size", "used_quota_files", "last_quota_update", "upload_bandwidth", "download_bandwidth",
"expiration_date", "last_login", "status", "filters", "filesystem" FROM "{{users}}";
DROP TABLE "{{users}}";
ALTER TABLE "new__users" RENAME TO "{{users}}";
CREATE INDEX "folders_mapping_folder_id_idx" ON "{{folders_mapping}}" ("folder_id");
CREATE INDEX "folders_mapping_user_id_idx" ON "{{folders_mapping}}" ("user_id");
`
    sqliteV6SQL = `ALTER TABLE "{{users}}" ADD COLUMN "additional_info" text NULL;`
    sqliteV6DownSQL = `CREATE TABLE "new__users" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE,
"password" text NULL, "public_keys" text NULL, "home_dir" varchar(512) NOT NULL, "uid" integer NOT NULL, "gid" integer NOT NULL,
"max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "permissions" text NOT NULL,
"used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL,
"download_bandwidth" integer NOT NULL, "expiration_date" bigint NOT NULL, "last_login" bigint NOT NULL, "status" integer NOT NULL,
"filters" text NULL, "filesystem" text NULL);
INSERT INTO "new__users" ("id", "username", "password", "public_keys", "home_dir", "uid", "gid", "max_sessions", "quota_size", "quota_files",
"permissions", "used_quota_size", "used_quota_files", "last_quota_update", "upload_bandwidth", "download_bandwidth", "expiration_date",
"last_login", "status", "filters", "filesystem") SELECT "id", "username", "password", "public_keys", "home_dir", "uid", "gid", "max_sessions",
"quota_size", "quota_files", "permissions", "used_quota_size", "used_quota_files", "last_quota_update", "upload_bandwidth", "download_bandwidth",
"expiration_date", "last_login", "status", "filters", "filesystem" FROM "{{users}}";
DROP TABLE "{{users}}";
ALTER TABLE "new__users" RENAME TO "{{users}}";
`
    sqliteV7SQL = `CREATE TABLE "{{admins}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE,
"password" varchar(255) NOT NULL, "email" varchar(255) NULL, "status" integer NOT NULL, "permissions" text NOT NULL, "filters" text NULL,
"additional_info" text NULL);`
    sqliteV7DownSQL = `DROP TABLE "{{admins}}";`
    sqliteV8SQL = `CREATE TABLE "new__folders" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"name" varchar(255) NOT NULL UNIQUE, "path" varchar(512) NULL, "used_quota_size" bigint NOT NULL,
"used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL);
INSERT INTO "new__folders" ("id", "path", "used_quota_size", "used_quota_files", "last_quota_update", "name")
SELECT "id", "path", "used_quota_size", "used_quota_files", "last_quota_update", ('folder' || "id") FROM "{{folders}}";
DROP TABLE "{{folders}}";
ALTER TABLE "new__folders" RENAME TO "{{folders}}";
`
    sqliteV8DownSQL = `CREATE TABLE "new__folders" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"path" varchar(512) NOT NULL UNIQUE, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL,
"last_quota_update" bigint NOT NULL);
INSERT INTO "new__folders" ("id", "path", "used_quota_size", "used_quota_files", "last_quota_update")
SELECT "id", "path", "used_quota_size", "used_quota_files", "last_quota_update" FROM "{{folders}}";
DROP TABLE "{{folders}}";
ALTER TABLE "new__folders" RENAME TO "{{folders}}";
`
)

// SQLiteProvider auth provider for SQLite database
@@ -15,26 +105,23 @@ type SQLiteProvider struct {
    dbHandle *sql.DB
}

func init() {
    version.AddFeature("+sqlite")
}

func initializeSQLiteProvider(basePath string) error {
    var err error
    var connectionString string
-   logSender = SQLiteDataProviderName
-   if len(config.ConnectionString) == 0 {
+   logSender = fmt.Sprintf("dataprovider_%v", SQLiteDataProviderName)
+   if config.ConnectionString == "" {
        dbPath := config.Name
        if !utils.IsFileInputValid(dbPath) {
            return fmt.Errorf("Invalid database path: %#v", dbPath)
        }
        if !filepath.IsAbs(dbPath) {
            dbPath = filepath.Join(basePath, dbPath)
        }
        fi, err := os.Stat(dbPath)
        if err != nil {
            providerLog(logger.LevelWarn, "sqlite database file does not exist, please be sure to create and initialize"+
                " a database before starting sftpgo")
            return err
        }
        if fi.Size() == 0 {
            return errors.New("sqlite database file is invalid, please be sure to create and initialize" +
                " a database before starting sftpgo")
        }
-       connectionString = fmt.Sprintf("file:%v?cache=shared", dbPath)
+       connectionString = fmt.Sprintf("file:%v?cache=shared&_foreign_keys=1", dbPath)
    } else {
        connectionString = config.ConnectionString
    }
@@ -42,7 +129,7 @@ func initializeSQLiteProvider(basePath string) error {
    if err == nil {
        providerLog(logger.LevelDebug, "sqlite database handle created, connection string: %#v", connectionString)
        dbHandle.SetMaxOpenConns(1)
-       provider = SQLiteProvider{dbHandle: dbHandle}
+       provider = &SQLiteProvider{dbHandle: dbHandle}
    } else {
        providerLog(logger.LevelWarn, "error creating sqlite database handler, connection string: %#v, error: %v",
            connectionString, err)
@@ -50,50 +137,374 @@ func initializeSQLiteProvider(basePath string) error {
    return err
}

-func (p SQLiteProvider) checkAvailability() error {
+func (p *SQLiteProvider) checkAvailability() error {
    return sqlCommonCheckAvailability(p.dbHandle)
}

-func (p SQLiteProvider) validateUserAndPass(username string, password string) (User, error) {
-   return sqlCommonValidateUserAndPass(username, password, p.dbHandle)
+func (p *SQLiteProvider) validateUserAndPass(username, password, ip, protocol string) (User, error) {
+   return sqlCommonValidateUserAndPass(username, password, ip, protocol, p.dbHandle)
}

-func (p SQLiteProvider) validateUserAndPubKey(username string, publicKey string) (User, string, error) {
+func (p *SQLiteProvider) validateUserAndPubKey(username string, publicKey []byte) (User, string, error) {
    return sqlCommonValidateUserAndPubKey(username, publicKey, p.dbHandle)
}

-func (p SQLiteProvider) getUserByID(ID int64) (User, error) {
-   return sqlCommonGetUserByID(ID, p.dbHandle)
-}

-func (p SQLiteProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
+func (p *SQLiteProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
    return sqlCommonUpdateQuota(username, filesAdd, sizeAdd, reset, p.dbHandle)
}

-func (p SQLiteProvider) getUsedQuota(username string) (int, int64, error) {
+func (p *SQLiteProvider) getUsedQuota(username string) (int, int64, error) {
    return sqlCommonGetUsedQuota(username, p.dbHandle)
}

-func (p SQLiteProvider) userExists(username string) (User, error) {
-   return sqlCommonCheckUserExists(username, p.dbHandle)
+func (p *SQLiteProvider) updateLastLogin(username string) error {
+   return sqlCommonUpdateLastLogin(username, p.dbHandle)
}

-func (p SQLiteProvider) addUser(user User) error {
+func (p *SQLiteProvider) userExists(username string) (User, error) {
+   return sqlCommonGetUserByUsername(username, p.dbHandle)
+}
+
+func (p *SQLiteProvider) addUser(user *User) error {
    return sqlCommonAddUser(user, p.dbHandle)
}

-func (p SQLiteProvider) updateUser(user User) error {
+func (p *SQLiteProvider) updateUser(user *User) error {
    return sqlCommonUpdateUser(user, p.dbHandle)
}

-func (p SQLiteProvider) deleteUser(user User) error {
+func (p *SQLiteProvider) deleteUser(user *User) error {
    return sqlCommonDeleteUser(user, p.dbHandle)
}

-func (p SQLiteProvider) getUsers(limit int, offset int, order string, username string) ([]User, error) {
-   return sqlCommonGetUsers(limit, offset, order, username, p.dbHandle)
+func (p *SQLiteProvider) dumpUsers() ([]User, error) {
+   return sqlCommonDumpUsers(p.dbHandle)
}

-func (p SQLiteProvider) close() error {
+func (p *SQLiteProvider) getUsers(limit int, offset int, order string) ([]User, error) {
+   return sqlCommonGetUsers(limit, offset, order, p.dbHandle)
+}

func (p *SQLiteProvider) dumpFolders() ([]vfs.BaseVirtualFolder, error) {
    return sqlCommonDumpFolders(p.dbHandle)
}

func (p *SQLiteProvider) getFolders(limit, offset int, order string) ([]vfs.BaseVirtualFolder, error) {
    return sqlCommonGetFolders(limit, offset, order, p.dbHandle)
}

func (p *SQLiteProvider) getFolderByName(name string) (vfs.BaseVirtualFolder, error) {
    ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
    defer cancel()
    return sqlCommonGetFolderByName(ctx, name, p.dbHandle)
}

func (p *SQLiteProvider) addFolder(folder *vfs.BaseVirtualFolder) error {
    return sqlCommonAddFolder(folder, p.dbHandle)
}

func (p *SQLiteProvider) updateFolder(folder *vfs.BaseVirtualFolder) error {
    return sqlCommonUpdateFolder(folder, p.dbHandle)
}

func (p *SQLiteProvider) deleteFolder(folder *vfs.BaseVirtualFolder) error {
    return sqlCommonDeleteFolder(folder, p.dbHandle)
}

func (p *SQLiteProvider) updateFolderQuota(name string, filesAdd int, sizeAdd int64, reset bool) error {
    return sqlCommonUpdateFolderQuota(name, filesAdd, sizeAdd, reset, p.dbHandle)
}

func (p *SQLiteProvider) getUsedFolderQuota(name string) (int, int64, error) {
    return sqlCommonGetFolderUsedQuota(name, p.dbHandle)
}

func (p *SQLiteProvider) adminExists(username string) (Admin, error) {
    return sqlCommonGetAdminByUsername(username, p.dbHandle)
}

func (p *SQLiteProvider) addAdmin(admin *Admin) error {
    return sqlCommonAddAdmin(admin, p.dbHandle)
}

func (p *SQLiteProvider) updateAdmin(admin *Admin) error {
    return sqlCommonUpdateAdmin(admin, p.dbHandle)
}

func (p *SQLiteProvider) deleteAdmin(admin *Admin) error {
    return sqlCommonDeleteAdmin(admin, p.dbHandle)
}

func (p *SQLiteProvider) getAdmins(limit int, offset int, order string) ([]Admin, error) {
    return sqlCommonGetAdmins(limit, offset, order, p.dbHandle)
}

func (p *SQLiteProvider) dumpAdmins() ([]Admin, error) {
    return sqlCommonDumpAdmins(p.dbHandle)
}

func (p *SQLiteProvider) validateAdminAndPass(username, password, ip string) (Admin, error) {
    return sqlCommonValidateAdminAndPass(username, password, ip, p.dbHandle)
}

func (p *SQLiteProvider) close() error {
    return p.dbHandle.Close()
}

func (p *SQLiteProvider) reloadConfig() error {
    return nil
}

// initializeDatabase creates the initial database structure
func (p *SQLiteProvider) initializeDatabase() error {
    dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, false)
    if err == nil && dbVersion.Version > 0 {
        return ErrNoInitRequired
    }
    sqlUsers := strings.Replace(sqliteUsersTableSQL, "{{users}}", sqlTableUsers, 1)
    ctx, cancel := context.WithTimeout(context.Background(), longSQLQueryTimeout)
    defer cancel()

    tx, err := p.dbHandle.BeginTx(ctx, nil)
    if err != nil {
        return err
    }
    _, err = tx.Exec(sqlUsers)
    if err != nil {
        return err
    }
    _, err = tx.Exec(strings.Replace(sqliteSchemaTableSQL, "{{schema_version}}", sqlTableSchemaVersion, 1))
    if err != nil {
        return err
    }
    _, err = tx.Exec(strings.Replace(initialDBVersionSQL, "{{schema_version}}", sqlTableSchemaVersion, 1))
    if err != nil {
        return err
    }
    return tx.Commit()
}
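
initializeDatabase runs the three statements in a single transaction and commits only if every statement succeeds. Note that the early returns leave the open transaction to be discarded along with its connection; a common hardening (an assumption here, not something this commit does) is to defer a rollback, which becomes a harmless no-op once Commit succeeds:

package migrations

import (
    "context"
    "database/sql"
)

// execInTx sketches the transactional pattern used by initializeDatabase:
// commit only when every statement succeeds, otherwise roll back.
func execInTx(ctx context.Context, dbHandle *sql.DB, statements []string) error {
    tx, err := dbHandle.BeginTx(ctx, nil)
    if err != nil {
        return err
    }
    // Rollback after a successful Commit returns sql.ErrTxDone and is harmless,
    // so deferring it guarantees the transaction never leaks on an error path.
    defer tx.Rollback() //nolint:errcheck
    for _, q := range statements {
        if _, err := tx.ExecContext(ctx, q); err != nil {
            return err
        }
    }
    return tx.Commit()
}
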
func (p *SQLiteProvider) migrateDatabase() error {
    dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
    if err != nil {
        return err
    }
    if dbVersion.Version == sqlDatabaseVersion {
        providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", dbVersion.Version)
        return ErrNoInitRequired
    }
    switch dbVersion.Version {
    case 1:
        return updateSQLiteDatabaseFromV1(p.dbHandle)
    case 2:
        return updateSQLiteDatabaseFromV2(p.dbHandle)
    case 3:
        return updateSQLiteDatabaseFromV3(p.dbHandle)
    case 4:
        return updateSQLiteDatabaseFromV4(p.dbHandle)
    case 5:
        return updateSQLiteDatabaseFromV5(p.dbHandle)
    case 6:
        return updateSQLiteDatabaseFromV6(p.dbHandle)
    case 7:
        return updateSQLiteDatabaseFromV7(p.dbHandle)
    default:
        if dbVersion.Version > sqlDatabaseVersion {
            providerLog(logger.LevelWarn, "database version %v is newer than the supported: %v", dbVersion.Version,
                sqlDatabaseVersion)
            logger.WarnToConsole("database version %v is newer than the supported: %v", dbVersion.Version,
                sqlDatabaseVersion)
            return nil
        }
        return fmt.Errorf("Database version not handled: %v", dbVersion.Version)
    }
}

//nolint:dupl
func (p *SQLiteProvider) revertDatabase(targetVersion int) error {
    dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
    if err != nil {
        return err
    }
    if dbVersion.Version == targetVersion {
        return fmt.Errorf("current version matches the target version, nothing to do")
    }
    switch dbVersion.Version {
    case 8:
        err = downgradeSQLiteDatabaseFrom8To7(p.dbHandle)
        if err != nil {
            return err
        }
        err = downgradeSQLiteDatabaseFrom7To6(p.dbHandle)
        if err != nil {
            return err
        }
        err = downgradeSQLiteDatabaseFrom6To5(p.dbHandle)
        if err != nil {
            return err
        }
        return downgradeSQLiteDatabaseFrom5To4(p.dbHandle)
    case 7:
        err = downgradeSQLiteDatabaseFrom7To6(p.dbHandle)
        if err != nil {
            return err
        }
        err = downgradeSQLiteDatabaseFrom6To5(p.dbHandle)
        if err != nil {
            return err
        }
        return downgradeSQLiteDatabaseFrom5To4(p.dbHandle)
    case 6:
        err = downgradeSQLiteDatabaseFrom6To5(p.dbHandle)
        if err != nil {
            return err
        }
        return downgradeSQLiteDatabaseFrom5To4(p.dbHandle)
    case 5:
        return downgradeSQLiteDatabaseFrom5To4(p.dbHandle)
    default:
        return fmt.Errorf("Database version not handled: %v", dbVersion.Version)
    }
}

func updateSQLiteDatabaseFromV1(dbHandle *sql.DB) error {
    err := updateSQLiteDatabaseFrom1To2(dbHandle)
    if err != nil {
        return err
    }
    return updateSQLiteDatabaseFromV2(dbHandle)
}

func updateSQLiteDatabaseFromV2(dbHandle *sql.DB) error {
    err := updateSQLiteDatabaseFrom2To3(dbHandle)
    if err != nil {
        return err
    }
    return updateSQLiteDatabaseFromV3(dbHandle)
}

func updateSQLiteDatabaseFromV3(dbHandle *sql.DB) error {
    err := updateSQLiteDatabaseFrom3To4(dbHandle)
    if err != nil {
        return err
    }
    return updateSQLiteDatabaseFromV4(dbHandle)
}

func updateSQLiteDatabaseFromV4(dbHandle *sql.DB) error {
    err := updateSQLiteDatabaseFrom4To5(dbHandle)
    if err != nil {
        return err
    }
    return updateSQLiteDatabaseFromV5(dbHandle)
}

func updateSQLiteDatabaseFromV5(dbHandle *sql.DB) error {
    err := updateSQLiteDatabaseFrom5To6(dbHandle)
    if err != nil {
        return err
    }
    return updateSQLiteDatabaseFromV6(dbHandle)
}

func updateSQLiteDatabaseFromV6(dbHandle *sql.DB) error {
    err := updateSQLiteDatabaseFrom6To7(dbHandle)
    if err != nil {
        return err
    }
    return updateSQLiteDatabaseFromV7(dbHandle)
}

func updateSQLiteDatabaseFromV7(dbHandle *sql.DB) error {
    return updateSQLiteDatabaseFrom7To8(dbHandle)
}

func updateSQLiteDatabaseFrom1To2(dbHandle *sql.DB) error {
    logger.InfoToConsole("updating database version: 1 -> 2")
    providerLog(logger.LevelInfo, "updating database version: 1 -> 2")
    sql := strings.Replace(sqliteV2SQL, "{{users}}", sqlTableUsers, 1)
    return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 2)
}

func updateSQLiteDatabaseFrom2To3(dbHandle *sql.DB) error {
    logger.InfoToConsole("updating database version: 2 -> 3")
    providerLog(logger.LevelInfo, "updating database version: 2 -> 3")
    sql := strings.ReplaceAll(sqliteV3SQL, "{{users}}", sqlTableUsers)
    return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 3)
}

func updateSQLiteDatabaseFrom3To4(dbHandle *sql.DB) error {
    return sqlCommonUpdateDatabaseFrom3To4(sqliteV4SQL, dbHandle)
}

func updateSQLiteDatabaseFrom4To5(dbHandle *sql.DB) error {
    return sqlCommonUpdateDatabaseFrom4To5(dbHandle)
}

func updateSQLiteDatabaseFrom5To6(dbHandle *sql.DB) error {
    logger.InfoToConsole("updating database version: 5 -> 6")
    providerLog(logger.LevelInfo, "updating database version: 5 -> 6")
    sql := strings.Replace(sqliteV6SQL, "{{users}}", sqlTableUsers, 1)
    return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 6)
}

func updateSQLiteDatabaseFrom6To7(dbHandle *sql.DB) error {
    logger.InfoToConsole("updating database version: 6 -> 7")
    providerLog(logger.LevelInfo, "updating database version: 6 -> 7")
    sql := strings.Replace(sqliteV7SQL, "{{admins}}", sqlTableAdmins, 1)
    return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 7)
}

func updateSQLiteDatabaseFrom7To8(dbHandle *sql.DB) error {
    logger.InfoToConsole("updating database version: 7 -> 8")
    providerLog(logger.LevelInfo, "updating database version: 7 -> 8")
    if err := setPragmaFK(dbHandle, "OFF"); err != nil {
        return err
    }
    sql := strings.ReplaceAll(sqliteV8SQL, "{{folders}}", sqlTableFolders)
    if err := sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 8); err != nil {
        return err
    }
    return setPragmaFK(dbHandle, "ON")
}

func setPragmaFK(dbHandle *sql.DB, value string) error {
    ctx, cancel := context.WithTimeout(context.Background(), longSQLQueryTimeout)
    defer cancel()

    sql := fmt.Sprintf("PRAGMA foreign_keys=%v;", value)

    _, err := dbHandle.ExecContext(ctx, sql)
    return err
}
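
SQLite cannot alter or rename columns in place, so the 7 -> 8 step rebuilds the folders table (create new__folders, copy the rows, drop the old table, rename). Foreign key enforcement has to be switched off around the rebuild, which is what setPragmaFK does; since PRAGMA foreign_keys is per connection, the SetMaxOpenConns(1) call made when the handle is created ensures the pragma hits the same connection that runs the migration. The toggle pattern as a standalone sketch (withForeignKeysDisabled is an illustrative wrapper, not code from this commit):

package migrations

import (
    "context"
    "database/sql"
    "fmt"
    "time"
)

// withForeignKeysDisabled disables SQLite FK enforcement, runs the table
// rebuild, then re-enables enforcement. The one minute timeout is illustrative.
func withForeignKeysDisabled(dbHandle *sql.DB, rebuild func() error) error {
    setPragma := func(value string) error {
        ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
        defer cancel()
        _, err := dbHandle.ExecContext(ctx, fmt.Sprintf("PRAGMA foreign_keys=%v;", value))
        return err
    }
    if err := setPragma("OFF"); err != nil {
        return err
    }
    if err := rebuild(); err != nil {
        return err
    }
    return setPragma("ON")
}
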
func downgradeSQLiteDatabaseFrom8To7(dbHandle *sql.DB) error {
    logger.InfoToConsole("downgrading database version: 8 -> 7")
    providerLog(logger.LevelInfo, "downgrading database version: 8 -> 7")
    if err := setPragmaFK(dbHandle, "OFF"); err != nil {
        return err
    }
    sql := strings.ReplaceAll(sqliteV8DownSQL, "{{folders}}", sqlTableFolders)
    if err := sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 7); err != nil {
        return err
    }
    return setPragmaFK(dbHandle, "ON")
}

func downgradeSQLiteDatabaseFrom7To6(dbHandle *sql.DB) error {
    logger.InfoToConsole("downgrading database version: 7 -> 6")
    providerLog(logger.LevelInfo, "downgrading database version: 7 -> 6")
    sql := strings.Replace(sqliteV7DownSQL, "{{admins}}", sqlTableAdmins, 1)
    return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 6)
}

func downgradeSQLiteDatabaseFrom6To5(dbHandle *sql.DB) error {
    logger.InfoToConsole("downgrading database version: 6 -> 5")
    providerLog(logger.LevelInfo, "downgrading database version: 6 -> 5")
    sql := strings.ReplaceAll(sqliteV6DownSQL, "{{users}}", sqlTableUsers)
    return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 5)
}

func downgradeSQLiteDatabaseFrom5To4(dbHandle *sql.DB) error {
    return sqlCommonDowngradeDatabaseFrom5To4(dbHandle)
}

dataprovider/sqlite_disabled.go (new file, 17 lines)
@@ -0,0 +1,17 @@
// +build nosqlite

package dataprovider

import (
    "errors"

    "github.com/drakkan/sftpgo/version"
)

func init() {
    version.AddFeature("-sqlite")
}

func initializeSQLiteProvider(basePath string) error {
    return errors.New("SQLite disabled at build time")
}

@@ -1,10 +1,18 @@
package dataprovider

-import "fmt"
+import (
+   "fmt"
+   "strconv"
+   "strings"
+
+   "github.com/drakkan/sftpgo/vfs"
+)

const (
-   selectUserFields = "id,username,password,public_keys,home_dir,uid,gid,max_sessions,quota_size,quota_files,permissions," +
-       "used_quota_size,used_quota_files,last_quota_update,upload_bandwidth,download_bandwidth"
+   selectUserFields = "id,username,password,public_keys,home_dir,uid,gid,max_sessions,quota_size,quota_files,permissions,used_quota_size," +
+       "used_quota_files,last_quota_update,upload_bandwidth,download_bandwidth,expiration_date,last_login,status,filters,filesystem,additional_info"
+   selectFolderFields = "id,path,used_quota_size,used_quota_files,last_quota_update,name"
+   selectAdminFields = "id,username,password,status,email,permissions,filters,additional_info"
)

func getSQLPlaceholders() []string {
@@ -19,52 +27,191 @@ func getSQLPlaceholders() []string {
    return placeholders
}

-func getUserByUsernameQuery() string {
-   return fmt.Sprintf(`SELECT %v FROM %v WHERE username = %v`, selectUserFields, config.UsersTable, sqlPlaceholders[0])
+func getAdminByUsernameQuery() string {
+   return fmt.Sprintf(`SELECT %v FROM %v WHERE username = %v`, selectAdminFields, sqlTableAdmins, sqlPlaceholders[0])
}

-func getUserByIDQuery() string {
-   return fmt.Sprintf(`SELECT %v FROM %v WHERE id = %v`, selectUserFields, config.UsersTable, sqlPlaceholders[0])
-}
-
-func getUsersQuery(order string, username string) string {
-   if len(username) > 0 {
-       return fmt.Sprintf(`SELECT %v FROM %v WHERE username = %v ORDER BY username %v LIMIT %v OFFSET %v`,
-           selectUserFields, config.UsersTable, sqlPlaceholders[0], order, sqlPlaceholders[1], sqlPlaceholders[2])
-   }
-   return fmt.Sprintf(`SELECT %v FROM %v ORDER BY username %v LIMIT %v OFFSET %v`, selectUserFields, config.UsersTable,
+func getAdminsQuery(order string) string {
+   return fmt.Sprintf(`SELECT %v FROM %v ORDER BY username %v LIMIT %v OFFSET %v`, selectAdminFields, sqlTableAdmins,
        order, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getDumpAdminsQuery() string {
    return fmt.Sprintf(`SELECT %v FROM %v`, selectAdminFields, sqlTableAdmins)
}

func getAddAdminQuery() string {
    return fmt.Sprintf(`INSERT INTO %v (username,password,status,email,permissions,filters,additional_info)
        VALUES (%v,%v,%v,%v,%v,%v,%v)`, sqlTableAdmins, sqlPlaceholders[0], sqlPlaceholders[1],
        sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6])
}

func getUpdateAdminQuery() string {
    return fmt.Sprintf(`UPDATE %v SET password=%v,status=%v,email=%v,permissions=%v,filters=%v,additional_info=%v
        WHERE username = %v`, sqlTableAdmins, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2],
        sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6])
}

func getDeleteAdminQuery() string {
    return fmt.Sprintf(`DELETE FROM %v WHERE username = %v`, sqlTableAdmins, sqlPlaceholders[0])
}

func getUserByUsernameQuery() string {
    return fmt.Sprintf(`SELECT %v FROM %v WHERE username = %v`, selectUserFields, sqlTableUsers, sqlPlaceholders[0])
}

func getUsersQuery(order string) string {
    return fmt.Sprintf(`SELECT %v FROM %v ORDER BY username %v LIMIT %v OFFSET %v`, selectUserFields, sqlTableUsers,
        order, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getDumpUsersQuery() string {
    return fmt.Sprintf(`SELECT %v FROM %v`, selectUserFields, sqlTableUsers)
}

func getDumpFoldersQuery() string {
    return fmt.Sprintf(`SELECT %v FROM %v`, selectFolderFields, sqlTableFolders)
}

func getUpdateQuotaQuery(reset bool) string {
    if reset {
-       return fmt.Sprintf(`UPDATE %v SET used_quota_size = %v,used_quota_files = %v,last_quota_update = %v
-           WHERE username = %v`, config.UsersTable, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
+       return fmt.Sprintf(`UPDATE %v SET used_quota_size = %v,used_quota_files = %v,last_quota_update = %v
+           WHERE username = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
    }
-   return fmt.Sprintf(`UPDATE %v SET used_quota_size = used_quota_size + %v,used_quota_files = used_quota_files + %v,last_quota_update = %v
-       WHERE username = %v`, config.UsersTable, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
+   return fmt.Sprintf(`UPDATE %v SET used_quota_size = used_quota_size + %v,used_quota_files = used_quota_files + %v,last_quota_update = %v
+       WHERE username = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
}

func getUpdateLastLoginQuery() string {
    return fmt.Sprintf(`UPDATE %v SET last_login = %v WHERE username = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getQuotaQuery() string {
-   return fmt.Sprintf(`SELECT used_quota_size,used_quota_files FROM %v WHERE username = %v`, config.UsersTable,
+   return fmt.Sprintf(`SELECT used_quota_size,used_quota_files FROM %v WHERE username = %v`, sqlTableUsers,
        sqlPlaceholders[0])
}

func getAddUserQuery() string {
    return fmt.Sprintf(`INSERT INTO %v (username,password,public_keys,home_dir,uid,gid,max_sessions,quota_size,quota_files,permissions,
-       used_quota_size,used_quota_files,last_quota_update,upload_bandwidth,download_bandwidth)
-       VALUES (%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,0,0,0,%v,%v)`, config.UsersTable, sqlPlaceholders[0], sqlPlaceholders[1],
+       used_quota_size,used_quota_files,last_quota_update,upload_bandwidth,download_bandwidth,status,last_login,expiration_date,filters,
+       filesystem,additional_info)
+       VALUES (%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,0,0,0,%v,%v,%v,0,%v,%v,%v,%v)`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1],
        sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7],
-       sqlPlaceholders[8], sqlPlaceholders[9], sqlPlaceholders[10], sqlPlaceholders[11])
+       sqlPlaceholders[8], sqlPlaceholders[9], sqlPlaceholders[10], sqlPlaceholders[11], sqlPlaceholders[12], sqlPlaceholders[13],
+       sqlPlaceholders[14], sqlPlaceholders[15], sqlPlaceholders[16])
}

func getUpdateUserQuery() string {
    return fmt.Sprintf(`UPDATE %v SET password=%v,public_keys=%v,home_dir=%v,uid=%v,gid=%v,max_sessions=%v,quota_size=%v,
-       quota_files=%v,permissions=%v,upload_bandwidth=%v,download_bandwidth=%v WHERE id = %v`, config.UsersTable,
-       sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5],
-       sqlPlaceholders[6], sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9], sqlPlaceholders[10], sqlPlaceholders[11])
+       quota_files=%v,permissions=%v,upload_bandwidth=%v,download_bandwidth=%v,status=%v,expiration_date=%v,filters=%v,filesystem=%v,
+       additional_info=%v WHERE id = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3],
+       sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9],
+       sqlPlaceholders[10], sqlPlaceholders[11], sqlPlaceholders[12], sqlPlaceholders[13], sqlPlaceholders[14], sqlPlaceholders[15],
+       sqlPlaceholders[16])
}

func getDeleteUserQuery() string {
-   return fmt.Sprintf(`DELETE FROM %v WHERE id = %v`, config.UsersTable, sqlPlaceholders[0])
+   return fmt.Sprintf(`DELETE FROM %v WHERE id = %v`, sqlTableUsers, sqlPlaceholders[0])
}

func getFolderByNameQuery() string {
    return fmt.Sprintf(`SELECT %v FROM %v WHERE name = %v`, selectFolderFields, sqlTableFolders, sqlPlaceholders[0])
}

func getAddFolderQuery() string {
    return fmt.Sprintf(`INSERT INTO %v (path,used_quota_size,used_quota_files,last_quota_update,name) VALUES (%v,%v,%v,%v,%v)`,
        sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4])
}

func getUpdateFolderQuery() string {
    return fmt.Sprintf(`UPDATE %v SET path = %v WHERE name = %v`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getDeleteFolderQuery() string {
    return fmt.Sprintf(`DELETE FROM %v WHERE id = %v`, sqlTableFolders, sqlPlaceholders[0])
}

func getClearFolderMappingQuery() string {
    return fmt.Sprintf(`DELETE FROM %v WHERE user_id = (SELECT id FROM %v WHERE username = %v)`, sqlTableFoldersMapping,
        sqlTableUsers, sqlPlaceholders[0])
}

func getAddFolderMappingQuery() string {
    return fmt.Sprintf(`INSERT INTO %v (virtual_path,quota_size,quota_files,folder_id,user_id)
        VALUES (%v,%v,%v,%v,(SELECT id FROM %v WHERE username = %v))`, sqlTableFoldersMapping, sqlPlaceholders[0],
        sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3], sqlTableUsers, sqlPlaceholders[4])
}

func getFoldersQuery(order string) string {
    return fmt.Sprintf(`SELECT %v FROM %v ORDER BY name %v LIMIT %v OFFSET %v`, selectFolderFields, sqlTableFolders,
        order, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getUpdateFolderQuotaQuery(reset bool) string {
    if reset {
        return fmt.Sprintf(`UPDATE %v SET used_quota_size = %v,used_quota_files = %v,last_quota_update = %v
            WHERE name = %v`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
    }
    return fmt.Sprintf(`UPDATE %v SET used_quota_size = used_quota_size + %v,used_quota_files = used_quota_files + %v,last_quota_update = %v
        WHERE name = %v`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
}

func getQuotaFolderQuery() string {
    return fmt.Sprintf(`SELECT used_quota_size,used_quota_files FROM %v WHERE name = %v`, sqlTableFolders,
        sqlPlaceholders[0])
}

func getRelatedFoldersForUsersQuery(users []User) string {
    var sb strings.Builder
    for _, u := range users {
        if sb.Len() == 0 {
            sb.WriteString("(")
        } else {
            sb.WriteString(",")
        }
        sb.WriteString(strconv.FormatInt(u.ID, 10))
    }
    if sb.Len() > 0 {
        sb.WriteString(")")
    }
    return fmt.Sprintf(`SELECT f.id,f.name,f.path,f.used_quota_size,f.used_quota_files,f.last_quota_update,fm.virtual_path,fm.quota_size,fm.quota_files,fm.user_id
        FROM %v f INNER JOIN %v fm ON f.id = fm.folder_id WHERE fm.user_id IN %v ORDER BY fm.user_id`, sqlTableFolders,
        sqlTableFoldersMapping, sb.String())
}
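
getRelatedFoldersForUsersQuery inlines the user IDs into a SQL IN clause instead of using placeholders; that is safe here because the IDs are int64 database keys, never user-supplied strings. The builder produces, for example, "(1,5,9)" for three users, as this standalone snippet shows:

package main

import (
    "fmt"
    "strconv"
    "strings"
)

func main() {
    // Mirrors the strings.Builder logic above for the IDs 1, 5 and 9.
    ids := []int64{1, 5, 9}
    var sb strings.Builder
    for _, id := range ids {
        if sb.Len() == 0 {
            sb.WriteString("(")
        } else {
            sb.WriteString(",")
        }
        sb.WriteString(strconv.FormatInt(id, 10))
    }
    if sb.Len() > 0 {
        sb.WriteString(")")
    }
    fmt.Println(sb.String()) // prints: (1,5,9)
}
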
func getRelatedUsersForFoldersQuery(folders []vfs.BaseVirtualFolder) string {
    var sb strings.Builder
    for _, f := range folders {
        if sb.Len() == 0 {
            sb.WriteString("(")
        } else {
            sb.WriteString(",")
        }
        sb.WriteString(strconv.FormatInt(f.ID, 10))
    }
    if sb.Len() > 0 {
        sb.WriteString(")")
    }
    return fmt.Sprintf(`SELECT fm.folder_id,u.username FROM %v fm INNER JOIN %v u ON fm.user_id = u.id
        WHERE fm.folder_id IN %v ORDER BY fm.folder_id`, sqlTableFoldersMapping, sqlTableUsers, sb.String())
}

func getDatabaseVersionQuery() string {
    return fmt.Sprintf("SELECT version from %v LIMIT 1", sqlTableSchemaVersion)
}

func getUpdateDBVersionQuery() string {
    return fmt.Sprintf(`UPDATE %v SET version=%v`, sqlTableSchemaVersion, sqlPlaceholders[0])
}

/*func getCompatVirtualFoldersQuery() string {
    return fmt.Sprintf(`SELECT id,username,virtual_folders FROM %v`, sqlTableUsers)
}*/

func getCompatV4FsConfigQuery() string {
    return fmt.Sprintf(`SELECT id,username,filesystem FROM %v`, sqlTableUsers)
}

func updateCompatV4FsConfigQuery() string {
    return fmt.Sprintf(`UPDATE %v SET filesystem=%v WHERE id=%v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1])
}

@@ -2,14 +2,26 @@ package dataprovider

import (
    "encoding/json"
    "errors"
    "fmt"
    "math"
    "net"
    "os"
    "path"
    "path/filepath"
    "strconv"
    "strings"
    "time"

    "golang.org/x/net/webdav"

    "github.com/drakkan/sftpgo/kms"
    "github.com/drakkan/sftpgo/logger"
    "github.com/drakkan/sftpgo/utils"
    "github.com/drakkan/sftpgo/vfs"
)

-// Available permissions for SFTP users
+// Available permissions for SFTPGo users
const (
    // All permissions are granted
    PermAny = "*"
@@ -30,24 +42,156 @@ const (
    PermCreateDirs = "create_dirs"
    // create symbolic links is allowed
    PermCreateSymlinks = "create_symlinks"
    // changing file or directory permissions is allowed
    PermChmod = "chmod"
    // changing file or directory owner and group is allowed
    PermChown = "chown"
    // changing file or directory access and modification time is allowed
    PermChtimes = "chtimes"
)

-// User defines an SFTP user
+// Available login methods
const (
    LoginMethodNoAuthTryed            = "no_auth_tryed"
    LoginMethodPassword               = "password"
    SSHLoginMethodPublicKey           = "publickey"
    SSHLoginMethodKeyboardInteractive = "keyboard-interactive"
    SSHLoginMethodKeyAndPassword      = "publickey+password"
    SSHLoginMethodKeyAndKeyboardInt   = "publickey+keyboard-interactive"
)

var (
    errNoMatchingVirtualFolder = errors.New("no matching virtual folder found")
)

// CachedUser adds fields useful for caching to an SFTPGo user
type CachedUser struct {
    User       User
    Expiration time.Time
    Password   string
    LockSystem webdav.LockSystem
}

// IsExpired returns true if the cached user is expired
func (c *CachedUser) IsExpired() bool {
    if c.Expiration.IsZero() {
        return false
    }
    return c.Expiration.Before(time.Now())
}
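
IsExpired treats a zero Expiration as "never expires"; any other value marks the entry stale once the deadline passes. A tiny standalone illustration of the two cases (the cachedUser type here is a stripped-down stand-in for the struct above):

package main

import (
    "fmt"
    "time"
)

type cachedUser struct {
    Expiration time.Time
}

func (c *cachedUser) IsExpired() bool {
    if c.Expiration.IsZero() {
        return false
    }
    return c.Expiration.Before(time.Now())
}

func main() {
    never := cachedUser{}                                       // zero time: never expires
    stale := cachedUser{Expiration: time.Now().Add(-time.Hour)} // deadline one hour in the past
    fmt.Println(never.IsExpired(), stale.IsExpired())           // false true
}
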
// ExtensionsFilter defines filters based on file extensions.
// These restrictions do not apply to file listings for performance reasons, so
// a denied file cannot be downloaded/overwritten/renamed but will still be
// in the list of files.
// System commands such as Git and rsync interact with the filesystem directly
// and are not aware of these restrictions, so they are not allowed
// inside paths with extension filters
type ExtensionsFilter struct {
    // Virtual path; if no more specific filter is defined, the filter applies to
    // subdirectories too.
    // For example if filters are defined for the paths "/" and "/sub" then the
    // filters for "/" are applied for any file outside the "/sub" directory
    Path string `json:"path"`
    // only files with these, case insensitive, extensions are allowed.
    // Shell like expansion is not supported so you have to specify ".jpg" and
    // not "*.jpg". If you want shell like patterns use pattern filters
    AllowedExtensions []string `json:"allowed_extensions,omitempty"`
    // files with these, case insensitive, extensions are not allowed.
    // Denied file extensions are evaluated before the allowed ones
    DeniedExtensions []string `json:"denied_extensions,omitempty"`
}

// PatternsFilter defines filters based on shell like patterns.
// These restrictions do not apply to file listings for performance reasons, so
// a denied file cannot be downloaded/overwritten/renamed but will still be
// in the list of files.
// System commands such as Git and rsync interact with the filesystem directly
// and are not aware of these restrictions, so they are not allowed
// inside paths with pattern filters
type PatternsFilter struct {
    // Virtual path; if no more specific filter is defined, the filter applies to
    // subdirectories too.
    // For example if filters are defined for the paths "/" and "/sub" then the
    // filters for "/" are applied for any file outside the "/sub" directory
    Path string `json:"path"`
    // files with these, case insensitive, patterns are allowed.
    // Denied file patterns are evaluated before the allowed ones
    AllowedPatterns []string `json:"allowed_patterns,omitempty"`
    // files with these, case insensitive, patterns are not allowed.
    // Denied file patterns are evaluated before the allowed ones
    DeniedPatterns []string `json:"denied_patterns,omitempty"`
}

// UserFilters defines additional restrictions for a user
type UserFilters struct {
    // only clients connecting from these IP/Mask are allowed.
    // IP/Mask must be in CIDR notation as defined in RFC 4632 and RFC 4291
    // for example "192.0.2.0/24" or "2001:db8::/32"
    AllowedIP []string `json:"allowed_ip,omitempty"`
    // clients connecting from these IP/Mask are not allowed.
    // Denied rules will be evaluated before allowed ones
    DeniedIP []string `json:"denied_ip,omitempty"`
    // these login methods are not allowed.
    // If null or empty any available login method is allowed
    DeniedLoginMethods []string `json:"denied_login_methods,omitempty"`
    // these protocols are not allowed.
    // If null or empty any available protocol is allowed
    DeniedProtocols []string `json:"denied_protocols,omitempty"`
    // filters based on file extensions.
    // Please note that these restrictions can be easily bypassed.
    FileExtensions []ExtensionsFilter `json:"file_extensions,omitempty"`
    // filters based on shell patterns
    FilePatterns []PatternsFilter `json:"file_patterns,omitempty"`
    // max size allowed for a single upload, 0 means unlimited
    MaxUploadFileSize int64 `json:"max_upload_file_size,omitempty"`
}

// FilesystemProvider defines the supported storages
type FilesystemProvider int

// supported values for FilesystemProvider
const (
    LocalFilesystemProvider     FilesystemProvider = iota // Local
    S3FilesystemProvider                                  // AWS S3 compatible
    GCSFilesystemProvider                                 // Google Cloud Storage
    AzureBlobFilesystemProvider                           // Azure Blob Storage
    CryptedFilesystemProvider                             // Local encrypted
    SFTPFilesystemProvider                                // SFTP
)

// Filesystem defines cloud storage filesystem details
type Filesystem struct {
    Provider     FilesystemProvider `json:"provider"`
    S3Config     vfs.S3FsConfig     `json:"s3config,omitempty"`
    GCSConfig    vfs.GCSFsConfig    `json:"gcsconfig,omitempty"`
    AzBlobConfig vfs.AzBlobFsConfig `json:"azblobconfig,omitempty"`
    CryptConfig  vfs.CryptFsConfig  `json:"cryptconfig,omitempty"`
    SFTPConfig   vfs.SFTPFsConfig   `json:"sftpconfig,omitempty"`
}

// User defines an SFTPGo user
type User struct {
    // Database unique identifier
    ID int64 `json:"id"`
    // 1 enabled, 0 disabled (login is not allowed)
    Status int `json:"status"`
    // Username
    Username string `json:"username"`
    // Account expiration date as unix timestamp in milliseconds. An expired account cannot login.
    // 0 means no expiration
    ExpirationDate int64 `json:"expiration_date"`
    // Password used for password authentication.
    // For users created using the SFTPGo REST API the password will be stored using the argon2id hashing algo.
-   // Checking passwords stored with bcrypt is supported too.
    // Currently, as fallback, there is a clear text password checking but you should not store passwords
    // as clear text and this support could be removed at any time, so please don't depend on it.
+   // Checking passwords stored with bcrypt, pbkdf2, md5crypt and sha512crypt is supported too.
    Password string `json:"password,omitempty"`
    // PublicKeys used for public key authentication. At least one between password and a public key is mandatory
    PublicKeys []string `json:"public_keys,omitempty"`
    // The user cannot upload or download files outside this directory. Must be an absolute path
    HomeDir string `json:"home_dir"`
    // Mapping between virtual paths and filesystem paths outside the home directory.
    // Supported for local filesystem only
    VirtualFolders []vfs.VirtualFolder `json:"virtual_folders,omitempty"`
    // If sftpgo runs as root system user then the created files and directories will be assigned to this system UID
    UID int `json:"uid"`
    // If sftpgo runs as root system user then the created files and directories will be assigned to this system GID
@@ -59,7 +203,7 @@ type User struct {
    // Maximum number of files allowed. 0 means unlimited
    QuotaFiles int `json:"quota_files"`
    // List of the granted permissions
-   Permissions []string `json:"permissions"`
+   Permissions map[string][]string `json:"permissions"`
    // Used quota as bytes
    UsedQuotaSize int64 `json:"used_quota_size"`
    // Used quota as number of files
@@ -70,14 +214,448 @@ type User struct {
    UploadBandwidth int64 `json:"upload_bandwidth"`
    // Maximum download bandwidth as KB/s, 0 means unlimited
    DownloadBandwidth int64 `json:"download_bandwidth"`
    // Last login as unix timestamp in milliseconds
    LastLogin int64 `json:"last_login"`
    // Additional restrictions
    Filters UserFilters `json:"filters"`
    // Filesystem configuration details
    FsConfig Filesystem `json:"filesystem"`
    // free form text field for external systems
    AdditionalInfo string `json:"additional_info,omitempty"`
}
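
The User struct doubles as the JSON document exchanged with the REST API; note that Permissions is now a per-directory map rather than a flat list. A minimal illustrative document follows (the concrete values are made-up examples for this sketch, not taken from the diff):

package main

import (
    "encoding/json"
    "fmt"
)

func main() {
    // Field names match the json struct tags above; the values are illustrative.
    doc := []byte(`{
        "id": 1,
        "status": 1,
        "username": "alice",
        "expiration_date": 0,
        "home_dir": "/srv/sftpgo/data/alice",
        "permissions": {"/": ["*"]},
        "quota_size": 0,
        "quota_files": 0
    }`)
    var user map[string]interface{}
    if err := json.Unmarshal(doc, &user); err != nil {
        panic(err)
    }
    fmt.Println(user["username"], user["permissions"])
}
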
// GetFilesystem returns the filesystem for this user
func (u *User) GetFilesystem(connectionID string) (vfs.Fs, error) {
    switch u.FsConfig.Provider {
    case S3FilesystemProvider:
        return vfs.NewS3Fs(connectionID, u.GetHomeDir(), u.FsConfig.S3Config)
    case GCSFilesystemProvider:
        config := u.FsConfig.GCSConfig
        config.CredentialFile = u.getGCSCredentialsFilePath()
        return vfs.NewGCSFs(connectionID, u.GetHomeDir(), config)
    case AzureBlobFilesystemProvider:
        return vfs.NewAzBlobFs(connectionID, u.GetHomeDir(), u.FsConfig.AzBlobConfig)
    case CryptedFilesystemProvider:
        return vfs.NewCryptFs(connectionID, u.GetHomeDir(), u.FsConfig.CryptConfig)
    case SFTPFilesystemProvider:
        return vfs.NewSFTPFs(connectionID, u.FsConfig.SFTPConfig)
    default:
        return vfs.NewOsFs(connectionID, u.GetHomeDir(), u.VirtualFolders), nil
    }
}

// HideConfidentialData hides user confidential data
func (u *User) HideConfidentialData() {
    u.Password = ""
    switch u.FsConfig.Provider {
    case S3FilesystemProvider:
        u.FsConfig.S3Config.AccessSecret.Hide()
    case GCSFilesystemProvider:
        u.FsConfig.GCSConfig.Credentials.Hide()
    case AzureBlobFilesystemProvider:
        u.FsConfig.AzBlobConfig.AccountKey.Hide()
    case CryptedFilesystemProvider:
        u.FsConfig.CryptConfig.Passphrase.Hide()
    case SFTPFilesystemProvider:
        u.FsConfig.SFTPConfig.Password.Hide()
        u.FsConfig.SFTPConfig.PrivateKey.Hide()
    }
}

// IsPasswordHashed returns true if the password is hashed
func (u *User) IsPasswordHashed() bool {
    return utils.IsStringPrefixInSlice(u.Password, hashPwdPrefixes)
}

// SetEmptySecrets sets to empty any user secret
func (u *User) SetEmptySecrets() {
    u.FsConfig.S3Config.AccessSecret = kms.NewEmptySecret()
    u.FsConfig.GCSConfig.Credentials = kms.NewEmptySecret()
    u.FsConfig.AzBlobConfig.AccountKey = kms.NewEmptySecret()
    u.FsConfig.CryptConfig.Passphrase = kms.NewEmptySecret()
    u.FsConfig.SFTPConfig.Password = kms.NewEmptySecret()
    u.FsConfig.SFTPConfig.PrivateKey = kms.NewEmptySecret()
}

// DecryptSecrets tries to decrypt kms secrets
func (u *User) DecryptSecrets() error {
    switch u.FsConfig.Provider {
    case S3FilesystemProvider:
        if u.FsConfig.S3Config.AccessSecret.IsEncrypted() {
            return u.FsConfig.S3Config.AccessSecret.Decrypt()
        }
    case GCSFilesystemProvider:
        if u.FsConfig.GCSConfig.Credentials.IsEncrypted() {
            return u.FsConfig.GCSConfig.Credentials.Decrypt()
        }
    case AzureBlobFilesystemProvider:
        if u.FsConfig.AzBlobConfig.AccountKey.IsEncrypted() {
            return u.FsConfig.AzBlobConfig.AccountKey.Decrypt()
        }
    case CryptedFilesystemProvider:
        if u.FsConfig.CryptConfig.Passphrase.IsEncrypted() {
            return u.FsConfig.CryptConfig.Passphrase.Decrypt()
        }
    case SFTPFilesystemProvider:
        if u.FsConfig.SFTPConfig.Password.IsEncrypted() {
            if err := u.FsConfig.SFTPConfig.Password.Decrypt(); err != nil {
                return err
            }
        }
        if u.FsConfig.SFTPConfig.PrivateKey.IsEncrypted() {
            if err := u.FsConfig.SFTPConfig.PrivateKey.Decrypt(); err != nil {
                return err
            }
        }
    }

    return nil
}

// GetPermissionsForPath returns the permissions for the given path.
// The path must be an SFTPGo exposed path
func (u *User) GetPermissionsForPath(p string) []string {
    permissions := []string{}
    if perms, ok := u.Permissions["/"]; ok {
        // if only root permissions are defined returns them unconditionally
        if len(u.Permissions) == 1 {
            return perms
        }
        // fallback permissions
        permissions = perms
    }
    dirsForPath := utils.GetDirsForSFTPPath(p)
    // dirsForPath contains all the dirs for a given path in reverse order
    // for example if the path is: /1/2/3/4 it contains:
    // [ "/1/2/3/4", "/1/2/3", "/1/2", "/1", "/" ]
    // so the first match is the one we are interested in
    for _, val := range dirsForPath {
        if perms, ok := u.Permissions[val]; ok {
            permissions = perms
            break
        }
    }
    return permissions
}
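
In other words, GetPermissionsForPath resolves the deepest configured directory containing the requested path and falls back to the "/" entry when nothing more specific matches. A standalone worked example (dirsForPath mimics utils.GetDirsForSFTPPath using path.Dir; the permission map values are illustrative):

package main

import (
    "fmt"
    "path"
)

// dirsForPath mimics utils.GetDirsForSFTPPath: the path itself, then each
// parent up to "/", in that order.
func dirsForPath(p string) []string {
    dirs := []string{p}
    for p != "/" {
        p = path.Dir(p)
        dirs = append(dirs, p)
    }
    return dirs
}

func main() {
    perms := map[string][]string{
        "/":     {"list", "download"},
        "/docs": {"*"},
    }
    for _, p := range []string{"/docs/reports/2021", "/music"} {
        granted := perms["/"] // fallback permissions
        for _, dir := range dirsForPath(p) {
            if v, ok := perms[dir]; ok {
                granted = v
                break
            }
        }
        fmt.Println(p, "->", granted)
    }
    // /docs/reports/2021 -> [*]
    // /music -> [list download]
}
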
// GetVirtualFolderForPath returns the virtual folder containing the specified sftp path.
// If the path is not inside a virtual folder an error is returned
func (u *User) GetVirtualFolderForPath(sftpPath string) (vfs.VirtualFolder, error) {
    var folder vfs.VirtualFolder
    if len(u.VirtualFolders) == 0 || u.FsConfig.Provider != LocalFilesystemProvider {
        return folder, errNoMatchingVirtualFolder
    }
    dirsForPath := utils.GetDirsForSFTPPath(sftpPath)
    for _, val := range dirsForPath {
        for _, v := range u.VirtualFolders {
            if v.VirtualPath == val {
                return v, nil
            }
        }
    }
    return folder, errNoMatchingVirtualFolder
}
// AddVirtualDirs adds virtual folders, if defined, to the given files list
|
||||
func (u *User) AddVirtualDirs(list []os.FileInfo, sftpPath string) []os.FileInfo {
|
||||
if len(u.VirtualFolders) == 0 {
|
||||
return list
|
||||
}
|
||||
for _, v := range u.VirtualFolders {
|
||||
if path.Dir(v.VirtualPath) == sftpPath {
|
||||
fi := vfs.NewFileInfo(v.VirtualPath, true, 0, time.Now(), false)
|
||||
found := false
|
||||
for index, f := range list {
|
||||
if f.Name() == fi.Name() {
|
||||
list[index] = fi
|
||||
found = true
|
||||
break
|
||||
}
|
||||
}
|
||||
if !found {
|
||||
list = append(list, fi)
|
||||
}
|
||||
}
|
||||
}
|
||||
return list
|
||||
}
|
||||
|
||||
// IsMappedPath returns true if the specified filesystem path has a virtual folder mapping.
|
||||
// The filesystem path must be cleaned before calling this method
|
||||
func (u *User) IsMappedPath(fsPath string) bool {
|
||||
for _, v := range u.VirtualFolders {
|
||||
if fsPath == v.MappedPath {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// IsVirtualFolder returns true if the specified sftp path is a virtual folder
|
||||
func (u *User) IsVirtualFolder(sftpPath string) bool {
|
||||
for _, v := range u.VirtualFolders {
|
||||
if sftpPath == v.VirtualPath {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// HasVirtualFoldersInside returns true if there are virtual folders inside the
|
||||
// specified SFTP path. We assume that path are cleaned
|
||||
func (u *User) HasVirtualFoldersInside(sftpPath string) bool {
|
||||
if sftpPath == "/" && len(u.VirtualFolders) > 0 {
|
||||
return true
|
||||
}
|
||||
for _, v := range u.VirtualFolders {
|
||||
if len(v.VirtualPath) > len(sftpPath) {
|
||||
if strings.HasPrefix(v.VirtualPath, sftpPath+"/") {
|
||||
return true
|
||||
}
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// HasPermissionsInside returns true if the specified sftpPath has no permissions itself and
|
||||
// no subdirs with defined permissions
|
||||
func (u *User) HasPermissionsInside(sftpPath string) bool {
|
||||
for dir := range u.Permissions {
|
||||
if dir == sftpPath {
|
||||
return true
|
||||
} else if len(dir) > len(sftpPath) {
|
||||
if strings.HasPrefix(dir, sftpPath+"/") {
|
||||
return true
|
||||
}
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// HasOverlappedMappedPaths returns true if this user has virtual folders with overlapped mapped paths
|
||||
func (u *User) HasOverlappedMappedPaths() bool {
|
||||
if len(u.VirtualFolders) <= 1 {
|
||||
return false
|
||||
}
|
||||
for _, v1 := range u.VirtualFolders {
|
||||
for _, v2 := range u.VirtualFolders {
|
||||
if v1.VirtualPath == v2.VirtualPath {
|
||||
continue
|
||||
}
|
||||
if isMappedDirOverlapped(v1.MappedPath, v2.MappedPath) {
|
||||
return true
|
||||
}
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// HasPerm returns true if the user has the given permission or any permission
|
||||
func (u *User) HasPerm(permission string) bool {
|
||||
if utils.IsStringInSlice(PermAny, u.Permissions) {
|
||||
func (u *User) HasPerm(permission, path string) bool {
|
||||
perms := u.GetPermissionsForPath(path)
|
||||
if utils.IsStringInSlice(PermAny, perms) {
|
||||
return true
|
||||
}
|
||||
return utils.IsStringInSlice(permission, u.Permissions)
|
||||
return utils.IsStringInSlice(permission, perms)
|
||||
}
|
||||
|
||||
// HasPerms return true if the user has all the given permissions
|
||||
func (u *User) HasPerms(permissions []string, path string) bool {
|
||||
perms := u.GetPermissionsForPath(path)
|
||||
if utils.IsStringInSlice(PermAny, perms) {
|
||||
return true
|
||||
}
|
||||
for _, permission := range permissions {
|
||||
if !utils.IsStringInSlice(permission, perms) {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
// HasNoQuotaRestrictions returns true if no quota restrictions need to be applyed
|
||||
func (u *User) HasNoQuotaRestrictions(checkFiles bool) bool {
|
||||
if u.QuotaSize == 0 && (!checkFiles || u.QuotaFiles == 0) {
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// IsLoginMethodAllowed returns true if the specified login method is allowed
|
||||
func (u *User) IsLoginMethodAllowed(loginMethod string, partialSuccessMethods []string) bool {
|
||||
if len(u.Filters.DeniedLoginMethods) == 0 {
|
||||
return true
|
||||
}
|
||||
if len(partialSuccessMethods) == 1 {
|
||||
for _, method := range u.GetNextAuthMethods(partialSuccessMethods, true) {
|
||||
if method == loginMethod {
|
||||
return true
|
||||
}
|
||||
}
|
||||
}
|
||||
if utils.IsStringInSlice(loginMethod, u.Filters.DeniedLoginMethods) {
|
||||
return false
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
// GetNextAuthMethods returns the list of authentications methods that
|
||||
// can continue for multi-step authentication
|
||||
func (u *User) GetNextAuthMethods(partialSuccessMethods []string, isPasswordAuthEnabled bool) []string {
|
||||
var methods []string
|
||||
if len(partialSuccessMethods) != 1 {
|
||||
return methods
|
||||
}
|
||||
if partialSuccessMethods[0] != SSHLoginMethodPublicKey {
|
||||
return methods
|
||||
}
|
||||
for _, method := range u.GetAllowedLoginMethods() {
|
||||
if method == SSHLoginMethodKeyAndPassword && isPasswordAuthEnabled {
|
||||
methods = append(methods, LoginMethodPassword)
|
||||
}
|
||||
if method == SSHLoginMethodKeyAndKeyboardInt {
|
||||
methods = append(methods, SSHLoginMethodKeyboardInteractive)
|
||||
}
|
||||
}
|
||||
return methods
|
||||
}
|
||||
|
||||
// IsPartialAuth returns true if the specified login method is a step for
|
||||
// a multi-step Authentication.
|
||||
// We support publickey+password and publickey+keyboard-interactive, so
|
||||
// only publickey can returns partial success.
|
||||
// We can have partial success if only multi-step Auth methods are enabled
|
||||
func (u *User) IsPartialAuth(loginMethod string) bool {
|
||||
if loginMethod != SSHLoginMethodPublicKey {
|
||||
return false
|
||||
}
|
||||
for _, method := range u.GetAllowedLoginMethods() {
|
||||
if !utils.IsStringInSlice(method, SSHMultiStepsLoginMethods) {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
// GetAllowedLoginMethods returns the allowed login methods
|
||||
func (u *User) GetAllowedLoginMethods() []string {
|
||||
var allowedMethods []string
|
||||
for _, method := range ValidSSHLoginMethods {
|
||||
if !utils.IsStringInSlice(method, u.Filters.DeniedLoginMethods) {
|
||||
allowedMethods = append(allowedMethods, method)
|
||||
}
|
||||
}
|
||||
return allowedMethods
|
||||
}
|
||||
|
||||
// IsFileAllowed returns true if the specified file is allowed by the file restrictions filters
|
||||
func (u *User) IsFileAllowed(virtualPath string) bool {
|
||||
return u.isFilePatternAllowed(virtualPath) && u.isFileExtensionAllowed(virtualPath)
|
||||
}
|
||||
|
||||
func (u *User) isFileExtensionAllowed(virtualPath string) bool {
|
||||
if len(u.Filters.FileExtensions) == 0 {
|
||||
return true
|
||||
}
|
||||
dirsForPath := utils.GetDirsForSFTPPath(path.Dir(virtualPath))
|
||||
var filter ExtensionsFilter
|
||||
for _, dir := range dirsForPath {
|
||||
for _, f := range u.Filters.FileExtensions {
|
||||
if f.Path == dir {
|
||||
filter = f
|
||||
break
|
||||
}
|
||||
}
|
||||
if filter.Path != "" {
|
||||
break
|
||||
}
|
||||
}
|
||||
if filter.Path != "" {
|
||||
toMatch := strings.ToLower(virtualPath)
|
||||
for _, denied := range filter.DeniedExtensions {
|
||||
if strings.HasSuffix(toMatch, denied) {
|
||||
return false
|
||||
}
|
||||
}
|
||||
for _, allowed := range filter.AllowedExtensions {
|
||||
if strings.HasSuffix(toMatch, allowed) {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return len(filter.AllowedExtensions) == 0
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
func (u *User) isFilePatternAllowed(virtualPath string) bool {
|
||||
if len(u.Filters.FilePatterns) == 0 {
|
||||
return true
|
||||
}
|
||||
dirsForPath := utils.GetDirsForSFTPPath(path.Dir(virtualPath))
|
||||
var filter PatternsFilter
|
||||
for _, dir := range dirsForPath {
|
||||
for _, f := range u.Filters.FilePatterns {
|
||||
if f.Path == dir {
|
||||
filter = f
|
||||
break
|
||||
}
|
||||
}
|
||||
if filter.Path != "" {
|
||||
break
|
||||
}
|
||||
}
|
||||
if filter.Path != "" {
|
||||
toMatch := strings.ToLower(path.Base(virtualPath))
|
||||
for _, denied := range filter.DeniedPatterns {
|
||||
matched, err := path.Match(denied, toMatch)
|
||||
if err != nil || matched {
|
||||
return false
|
||||
}
|
||||
}
|
||||
for _, allowed := range filter.AllowedPatterns {
|
||||
matched, err := path.Match(allowed, toMatch)
|
||||
if err == nil && matched {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return len(filter.AllowedPatterns) == 0
|
||||
}
|
||||
return true
|
||||
}
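
For reference, the pattern matching above relies on Go's standard `path.Match`, which matches shell-style patterns against a single path element. A quick standalone check, with file names invented for the example:

```go
package main

import (
    "fmt"
    "path"
)

func main() {
    // path.Match reports whether the name matches the shell pattern.
    // Matching is case-sensitive, which is why the filter code above
    // lowercases the base name before comparing.
    for _, name := range []string{"report.pdf", "photo.jpg", "archive.tar.gz"} {
        matched, err := path.Match("*.jpg", name)
        fmt.Println(name, matched, err) // only photo.jpg matches
    }
}
```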

// IsLoginFromAddrAllowed returns true if the login is allowed from the specified remoteAddr.
// If AllowedIP is defined only the specified IP/Mask can login.
// If DeniedIP is defined the specified IP/Mask cannot login.
// If an IP is both allowed and denied then login will be denied
func (u *User) IsLoginFromAddrAllowed(remoteAddr string) bool {
    if len(u.Filters.AllowedIP) == 0 && len(u.Filters.DeniedIP) == 0 {
        return true
    }
    remoteIP := net.ParseIP(utils.GetIPFromRemoteAddress(remoteAddr))
    // if remoteIP is invalid we allow login, this should never happen
    if remoteIP == nil {
        logger.Warn(logSender, "", "login allowed for invalid IP. remote address: %#v", remoteAddr)
        return true
    }
    for _, IPMask := range u.Filters.DeniedIP {
        _, IPNet, err := net.ParseCIDR(IPMask)
        if err != nil {
            return false
        }
        if IPNet.Contains(remoteIP) {
            return false
        }
    }
    for _, IPMask := range u.Filters.AllowedIP {
        _, IPNet, err := net.ParseCIDR(IPMask)
        if err != nil {
            return false
        }
        if IPNet.Contains(remoteIP) {
            return true
        }
    }
    return len(u.Filters.AllowedIP) == 0
}
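
The allow/deny lists are plain CIDR strings evaluated with the standard library. A minimal sketch of the same containment check, with addresses chosen arbitrarily:

```go
package main

import (
    "fmt"
    "net"
)

func main() {
    // Parse a CIDR mask once, then test candidate client addresses
    // against it, exactly as the filter loops above do.
    _, ipNet, err := net.ParseCIDR("192.168.1.0/24")
    if err != nil {
        panic(err)
    }
    fmt.Println(ipNet.Contains(net.ParseIP("192.168.1.50"))) // true
    fmt.Println(ipNet.Contains(net.ParseIP("10.0.0.1")))     // false
}
```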

// GetPermissionsAsJSON returns the permissions as json byte array
@@ -90,9 +668,19 @@ func (u *User) GetPublicKeysAsJSON() ([]byte, error) {
    return json.Marshal(u.PublicKeys)
}

// GetFiltersAsJSON returns the filters as json byte array
func (u *User) GetFiltersAsJSON() ([]byte, error) {
    return json.Marshal(u.Filters)
}

// GetFsConfigAsJSON returns the filesystem config as json byte array
func (u *User) GetFsConfigAsJSON() ([]byte, error) {
    return json.Marshal(u.FsConfig)
}

// GetUID returns a validated uid, suitable for use with os.Chown
func (u *User) GetUID() int {
-    if u.UID <= 0 || u.UID > 65535 {
+    if u.UID <= 0 || u.UID > math.MaxInt32 {
        return -1
    }
    return u.UID
@@ -100,7 +688,7 @@ func (u *User) GetUID() int {

// GetGID returns a validated gid, suitable for use with os.Chown
func (u *User) GetGID() int {
-    if u.GID <= 0 || u.GID > 65535 {
+    if u.GID <= 0 || u.GID > math.MaxInt32 {
        return -1
    }
    return u.GID
@@ -116,16 +704,6 @@ func (u *User) HasQuotaRestrictions() bool {
    return u.QuotaFiles > 0 || u.QuotaSize > 0
}

// GetRelativePath returns the path for a file relative to the user's home dir.
// This is the path as seen by SFTP users
func (u *User) GetRelativePath(path string) string {
    rel, err := filepath.Rel(u.GetHomeDir(), path)
    if err != nil {
        return ""
    }
    return "/" + filepath.ToSlash(rel)
}
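
The relative-path mapping boils down to two standard-library calls. A quick sketch with an invented home directory:

```go
package main

import (
    "fmt"
    "path/filepath"
)

func main() {
    // Map a filesystem path under the user's home dir to the
    // slash-separated path the SFTP client sees. ToSlash matters on
    // Windows, where filepath.Rel returns backslash-separated paths.
    home := "/srv/sftpgo/data/someuser" // hypothetical home dir
    fsPath := filepath.Join(home, "docs", "report.txt")
    rel, err := filepath.Rel(home, fsPath)
    if err != nil {
        panic(err)
    }
    fmt.Println("/" + filepath.ToSlash(rel)) // /docs/report.txt
}
```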

// GetQuotaSummary returns used quota and limits if defined
func (u *User) GetQuotaSummary() string {
    var result string
@@ -134,9 +712,9 @@ func (u *User) GetQuotaSummary() string {
        result += "/" + strconv.Itoa(u.QuotaFiles)
    }
    if u.UsedQuotaSize > 0 || u.QuotaSize > 0 {
-        result += ". Size: " + utils.ByteCountSI(u.UsedQuotaSize)
+        result += ". Size: " + utils.ByteCountIEC(u.UsedQuotaSize)
        if u.QuotaSize > 0 {
-            result += "/" + utils.ByteCountSI(u.QuotaSize)
+            result += "/" + utils.ByteCountIEC(u.QuotaSize)
        }
    }
    return result
@@ -144,12 +722,22 @@ func (u *User) GetQuotaSummary() string {

// GetPermissionsAsString returns the user's permissions as comma separated string
func (u *User) GetPermissionsAsString() string {
-    var result string
-    for _, p := range u.Permissions {
-        if len(result) > 0 {
-            result += ", "
-        }
-        result += p
+    result := ""
+    for dir, perms := range u.Permissions {
+        dirPerms := strings.Join(perms, ", ")
+        dp := fmt.Sprintf("%#v: %#v", dir, dirPerms)
+        if dir == "/" {
+            if result != "" {
+                result = dp + ", " + result
+            } else {
+                result = dp
+            }
+        } else {
+            if result != "" {
+                result += ", "
+            }
+            result += dp
+        }
    }
    return result
}
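
A side note on the `%#v` verb used above: for strings it renders a Go-syntax quoted literal, so each map entry prints as an unambiguously delimited `"dir": "perm, perm"` pair. For example:

```go
package main

import (
    "fmt"
    "strings"
)

func main() {
    perms := []string{"list", "download", "upload"}
    // %#v quotes strings using Go syntax, so directories and their
    // permission lists come out clearly delimited.
    fmt.Printf("%#v: %#v\n", "/", strings.Join(perms, ", "))
    // Output: "/": "list, download, upload"
}
```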

@@ -158,23 +746,40 @@ func (u *User) GetPermissionsAsString() string {
func (u *User) GetBandwidthAsString() string {
    result := "Download: "
    if u.DownloadBandwidth > 0 {
-        result += utils.ByteCountSI(u.DownloadBandwidth*1000) + "/s."
+        result += utils.ByteCountIEC(u.DownloadBandwidth*1000) + "/s."
    } else {
-        result += "ulimited."
+        result += "unlimited."
    }
    result += " Upload: "
    if u.UploadBandwidth > 0 {
-        result += utils.ByteCountSI(u.UploadBandwidth*1000) + "/s."
+        result += utils.ByteCountIEC(u.UploadBandwidth*1000) + "/s."
    } else {
-        result += "ulimited."
+        result += "unlimited."
    }
    return result
}

// GetInfoString returns user's info as string.
-// Number of public keys, max sessions, uid and gid are returned
+// Storage provider, number of public keys, max sessions, uid,
+// gid, denied and allowed IP/Mask are returned
func (u *User) GetInfoString() string {
    var result string
    if u.LastLogin > 0 {
        t := utils.GetTimeFromMsecSinceEpoch(u.LastLogin)
        result += fmt.Sprintf("Last login: %v ", t.Format("2006-01-02 15:04:05")) // YYYY-MM-DD HH:MM:SS
    }
    switch u.FsConfig.Provider {
    case S3FilesystemProvider:
        result += "Storage: S3 "
    case GCSFilesystemProvider:
        result += "Storage: GCS "
    case AzureBlobFilesystemProvider:
        result += "Storage: Azure "
    case CryptedFilesystemProvider:
        result += "Storage: Encrypted "
    case SFTPFilesystemProvider:
        result += "Storage: SFTP "
    }
    if len(u.PublicKeys) > 0 {
        result += fmt.Sprintf("Public keys: %v ", len(u.PublicKeys))
    }
@@ -187,5 +792,169 @@ func (u *User) GetInfoString() string {
    if u.GID > 0 {
        result += fmt.Sprintf("GID: %v ", u.GID)
    }
    if len(u.Filters.DeniedIP) > 0 {
        result += fmt.Sprintf("Denied IP/Mask: %v ", len(u.Filters.DeniedIP))
    }
    if len(u.Filters.AllowedIP) > 0 {
        result += fmt.Sprintf("Allowed IP/Mask: %v ", len(u.Filters.AllowedIP))
    }
    return result
}

// GetExpirationDateAsString returns expiration date formatted as YYYY-MM-DD
func (u *User) GetExpirationDateAsString() string {
    if u.ExpirationDate > 0 {
        t := utils.GetTimeFromMsecSinceEpoch(u.ExpirationDate)
        return t.Format("2006-01-02")
    }
    return ""
}
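
Both formatters rely on Go's reference-time layouts, where the layout string is itself the reference moment `Mon Jan 2 15:04:05 MST 2006`. For example:

```go
package main

import (
    "fmt"
    "time"
)

func main() {
    // Go layouts spell out the reference time 2006-01-02 15:04:05,
    // so these are the YYYY-MM-DD patterns used above.
    t := time.Date(2021, time.March, 14, 9, 30, 0, 0, time.UTC)
    fmt.Println(t.Format("2006-01-02 15:04:05")) // 2021-03-14 09:30:00
    fmt.Println(t.Format("2006-01-02"))          // 2021-03-14
}
```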

// GetAllowedIPAsString returns the allowed IP as comma separated string
func (u *User) GetAllowedIPAsString() string {
    return strings.Join(u.Filters.AllowedIP, ",")
}

// GetDeniedIPAsString returns the denied IP as comma separated string
func (u *User) GetDeniedIPAsString() string {
    return strings.Join(u.Filters.DeniedIP, ",")
}

// SetEmptySecretsIfNil sets the secrets to empty if nil
func (u *User) SetEmptySecretsIfNil() {
    if u.FsConfig.S3Config.AccessSecret == nil {
        u.FsConfig.S3Config.AccessSecret = kms.NewEmptySecret()
    }
    if u.FsConfig.GCSConfig.Credentials == nil {
        u.FsConfig.GCSConfig.Credentials = kms.NewEmptySecret()
    }
    if u.FsConfig.AzBlobConfig.AccountKey == nil {
        u.FsConfig.AzBlobConfig.AccountKey = kms.NewEmptySecret()
    }
    if u.FsConfig.CryptConfig.Passphrase == nil {
        u.FsConfig.CryptConfig.Passphrase = kms.NewEmptySecret()
    }
    if u.FsConfig.SFTPConfig.Password == nil {
        u.FsConfig.SFTPConfig.Password = kms.NewEmptySecret()
    }
    if u.FsConfig.SFTPConfig.PrivateKey == nil {
        u.FsConfig.SFTPConfig.PrivateKey = kms.NewEmptySecret()
    }
}

func (u *User) getACopy() User {
    u.SetEmptySecretsIfNil()
    pubKeys := make([]string, len(u.PublicKeys))
    copy(pubKeys, u.PublicKeys)
    virtualFolders := make([]vfs.VirtualFolder, len(u.VirtualFolders))
    copy(virtualFolders, u.VirtualFolders)
    permissions := make(map[string][]string)
    for k, v := range u.Permissions {
        perms := make([]string, len(v))
        copy(perms, v)
        permissions[k] = perms
    }
    filters := UserFilters{}
    filters.MaxUploadFileSize = u.Filters.MaxUploadFileSize
    filters.AllowedIP = make([]string, len(u.Filters.AllowedIP))
    copy(filters.AllowedIP, u.Filters.AllowedIP)
    filters.DeniedIP = make([]string, len(u.Filters.DeniedIP))
    copy(filters.DeniedIP, u.Filters.DeniedIP)
    filters.DeniedLoginMethods = make([]string, len(u.Filters.DeniedLoginMethods))
    copy(filters.DeniedLoginMethods, u.Filters.DeniedLoginMethods)
    filters.FileExtensions = make([]ExtensionsFilter, len(u.Filters.FileExtensions))
    copy(filters.FileExtensions, u.Filters.FileExtensions)
    filters.FilePatterns = make([]PatternsFilter, len(u.Filters.FilePatterns))
    copy(filters.FilePatterns, u.Filters.FilePatterns)
    filters.DeniedProtocols = make([]string, len(u.Filters.DeniedProtocols))
    copy(filters.DeniedProtocols, u.Filters.DeniedProtocols)
    fsConfig := Filesystem{
        Provider: u.FsConfig.Provider,
        S3Config: vfs.S3FsConfig{
            Bucket:            u.FsConfig.S3Config.Bucket,
            Region:            u.FsConfig.S3Config.Region,
            AccessKey:         u.FsConfig.S3Config.AccessKey,
            AccessSecret:      u.FsConfig.S3Config.AccessSecret.Clone(),
            Endpoint:          u.FsConfig.S3Config.Endpoint,
            StorageClass:      u.FsConfig.S3Config.StorageClass,
            KeyPrefix:         u.FsConfig.S3Config.KeyPrefix,
            UploadPartSize:    u.FsConfig.S3Config.UploadPartSize,
            UploadConcurrency: u.FsConfig.S3Config.UploadConcurrency,
        },
        GCSConfig: vfs.GCSFsConfig{
            Bucket:               u.FsConfig.GCSConfig.Bucket,
            CredentialFile:       u.FsConfig.GCSConfig.CredentialFile,
            Credentials:          u.FsConfig.GCSConfig.Credentials.Clone(),
            AutomaticCredentials: u.FsConfig.GCSConfig.AutomaticCredentials,
            StorageClass:         u.FsConfig.GCSConfig.StorageClass,
            KeyPrefix:            u.FsConfig.GCSConfig.KeyPrefix,
        },
        AzBlobConfig: vfs.AzBlobFsConfig{
            Container:         u.FsConfig.AzBlobConfig.Container,
            AccountName:       u.FsConfig.AzBlobConfig.AccountName,
            AccountKey:        u.FsConfig.AzBlobConfig.AccountKey.Clone(),
            Endpoint:          u.FsConfig.AzBlobConfig.Endpoint,
            SASURL:            u.FsConfig.AzBlobConfig.SASURL,
            KeyPrefix:         u.FsConfig.AzBlobConfig.KeyPrefix,
            UploadPartSize:    u.FsConfig.AzBlobConfig.UploadPartSize,
            UploadConcurrency: u.FsConfig.AzBlobConfig.UploadConcurrency,
            UseEmulator:       u.FsConfig.AzBlobConfig.UseEmulator,
            AccessTier:        u.FsConfig.AzBlobConfig.AccessTier,
        },
        CryptConfig: vfs.CryptFsConfig{
            Passphrase: u.FsConfig.CryptConfig.Passphrase.Clone(),
        },
        SFTPConfig: vfs.SFTPFsConfig{
            Endpoint:   u.FsConfig.SFTPConfig.Endpoint,
            Username:   u.FsConfig.SFTPConfig.Username,
            Password:   u.FsConfig.SFTPConfig.Password.Clone(),
            PrivateKey: u.FsConfig.SFTPConfig.PrivateKey.Clone(),
            Prefix:     u.FsConfig.SFTPConfig.Prefix,
        },
    }
    if len(u.FsConfig.SFTPConfig.Fingerprints) > 0 {
        fsConfig.SFTPConfig.Fingerprints = make([]string, len(u.FsConfig.SFTPConfig.Fingerprints))
        copy(fsConfig.SFTPConfig.Fingerprints, u.FsConfig.SFTPConfig.Fingerprints)
    }

    return User{
        ID:                u.ID,
        Username:          u.Username,
        Password:          u.Password,
        PublicKeys:        pubKeys,
        HomeDir:           u.HomeDir,
        VirtualFolders:    virtualFolders,
        UID:               u.UID,
        GID:               u.GID,
        MaxSessions:       u.MaxSessions,
        QuotaSize:         u.QuotaSize,
        QuotaFiles:        u.QuotaFiles,
        Permissions:       permissions,
        UsedQuotaSize:     u.UsedQuotaSize,
        UsedQuotaFiles:    u.UsedQuotaFiles,
        LastQuotaUpdate:   u.LastQuotaUpdate,
        UploadBandwidth:   u.UploadBandwidth,
        DownloadBandwidth: u.DownloadBandwidth,
        Status:            u.Status,
        ExpirationDate:    u.ExpirationDate,
        LastLogin:         u.LastLogin,
        Filters:           filters,
        FsConfig:          fsConfig,
        AdditionalInfo:    u.AdditionalInfo,
    }
}

func (u *User) getNotificationFieldsAsSlice(action string) []string {
    return []string{action, u.Username,
        strconv.FormatInt(u.ID, 10),
        strconv.FormatInt(int64(u.Status), 10),
        strconv.FormatInt(u.ExpirationDate, 10),
        u.HomeDir,
        strconv.FormatInt(int64(u.UID), 10),
        strconv.FormatInt(int64(u.GID), 10),
    }
}

func (u *User) getGCSCredentialsFilePath() string {
    return filepath.Join(credentialsDirPath, fmt.Sprintf("%v_gcs_credentials.json", u.Username))
}
docker/README.md
@@ -1,5 +1,145 @@
-## Dockerfile examples
+# Official Docker image

-Sample Dockerfiles for `sftpgo` daemon and the REST API CLI.
+SFTPGo provides an official Docker image; it is available on both [Docker Hub](https://hub.docker.com/r/drakkan/sftpgo) and on [GitHub Container Registry](https://github.com/users/drakkan/packages/container/package/sftpgo).

-We don't want to add a `Dockerfile` for each single `sftpgo` configuration option or data provider. You can use the Docker configurations here as a starting point that you can customize to run `sftpgo` with [Docker](http://www.docker.io "Docker").
+## Supported tags and respective Dockerfile links

- [v2.0.3, v2.0, v2, latest](https://github.com/drakkan/sftpgo/blob/v2.0.3/Dockerfile)
- [v2.0.3-alpine, v2.0-alpine, v2-alpine, alpine](https://github.com/drakkan/sftpgo/blob/v2.0.3/Dockerfile.alpine)
- [v2.0.3-slim, v2.0-slim, v2-slim, slim](https://github.com/drakkan/sftpgo/blob/v2.0.3/Dockerfile)
- [v2.0.3-alpine-slim, v2.0-alpine-slim, v2-alpine-slim, alpine-slim](https://github.com/drakkan/sftpgo/blob/v2.0.3/Dockerfile.alpine)
- [edge](../Dockerfile)
- [edge-alpine](../Dockerfile.alpine)
- [edge-slim](../Dockerfile)
- [edge-alpine-slim](../Dockerfile.alpine)

## How to use the SFTPGo image

### Start a `sftpgo` server instance

Starting a SFTPGo instance is simple:

```shell
docker run --name some-sftpgo -p 127.0.0.1:8080:8080 -p 2022:2022 -d "drakkan/sftpgo:tag"
```

... where `some-sftpgo` is the name you want to assign to your container, and `tag` is the tag specifying the SFTPGo version you want. See the list above for relevant tags.

Now visit [http://localhost:8080/](http://localhost:8080/) and create a new SFTPGo user. The SFTP service is available on port 2022.

If you prefer GitHub Container Registry to Docker Hub, replace `drakkan/sftpgo:tag` with `ghcr.io/drakkan/sftpgo:tag`.

### Container shell access and viewing SFTPGo logs

The `docker exec` command allows you to run commands inside a Docker container. The following command line will give you a shell inside your `sftpgo` container:

```shell
docker exec -it some-sftpgo sh
```

The logs are available through Docker's container log:

```shell
docker logs some-sftpgo
```

### Where to Store Data

Important note: there are several ways to store data used by applications that run in Docker containers. We encourage users of the SFTPGo images to familiarize themselves with the options available, including:

- Let Docker manage the storage for SFTPGo data by [writing them to disk on the host system using its own internal volume management](https://docs.docker.com/engine/tutorials/dockervolumes/#adding-a-data-volume). This is the default and is easy and fairly transparent to the user. The downside is that the files may be hard to locate for tools and applications that run directly on the host system, i.e. outside containers.
- Create a data directory on the host system (outside the container) and [mount this to a directory visible from inside the container](https://docs.docker.com/engine/tutorials/dockervolumes/#mount-a-host-directory-as-a-data-volume). This places the SFTPGo files in a known location on the host system, and makes it easy for tools and applications on the host system to access the files. The downside is that the user needs to make sure that the directory exists, and that e.g. directory permissions and other security mechanisms on the host system are set up correctly. The SFTPGo image runs using `1000` as UID/GID by default.

The Docker documentation is a good starting point for understanding the different storage options and variations, and there are multiple blogs and forum postings that discuss and give advice in this area. We will simply show the basic procedure here for the latter option above:

1. Create a data directory on a suitable volume on your host system, e.g. `/my/own/sftpgodata`.
2. Create a home directory for the sftpgo container user on your host system, e.g. `/my/own/sftpgohome`.
3. Start your SFTPGo container like this:

```shell
docker run --name some-sftpgo \
  -p 127.0.0.1:8080:8090 \
  -p 2022:2022 \
  --mount type=bind,source=/my/own/sftpgodata,target=/srv/sftpgo \
  --mount type=bind,source=/my/own/sftpgohome,target=/var/lib/sftpgo \
  -e SFTPGO_HTTPD__BINDINGS__0__PORT=8090 \
  -d "drakkan/sftpgo:tag"
```

As you can see, SFTPGo uses two main volumes:

- `/srv/sftpgo` to handle persistent data. The default home directory for SFTP/FTP/WebDAV users is `/srv/sftpgo/data/<username>`. Backups are stored in `/srv/sftpgo/backups`.
- `/var/lib/sftpgo` is the home directory for the sftpgo system user defined inside the container. This is the container working directory too; host keys will be created here when using the default configuration.

If you want fine-grained control, you can also mount `/srv/sftpgo/data` and `/srv/sftpgo/backups` as separate volumes instead of mounting `/srv/sftpgo`.

### Configuration

The runtime configuration can be customized via environment variables that you can set by passing the `-e` option to the `docker run` command or inside the `environment` section if you are using [docker stack deploy](https://docs.docker.com/engine/reference/commandline/stack_deploy/) or [docker-compose](https://github.com/docker/compose).

Please take a look [here](../docs/full-configuration.md#environment-variables) to learn how to configure SFTPGo via environment variables.

Alternatively you can mount your custom configuration file to `/var/lib/sftpgo` or `/var/lib/sftpgo/.config/sftpgo`.

### Loading initial data

Initial data can be loaded in the following ways:

- via the `--loaddata-from` flag or the `SFTPGO_LOADDATA_FROM` environment variable
- by providing a dump file to the memory provider

Please take a look [here](../docs/full-configuration.md) for more details.

### Running as an arbitrary user

The SFTPGo image runs using `1000` as UID/GID by default. If you know the permissions of your data and/or configuration directory are already set appropriately, or you need to run SFTPGo with a specific UID/GID, it is possible to invoke this image with `--user` set to any value (other than `root/0`) in order to achieve the desired access/configuration:

```shell
$ ls -lnd data
drwxr-xr-x 2 1100 1100 6  7 nov 09.09 data
$ ls -lnd config
drwxr-xr-x 2 1100 1100 6  7 nov 09.19 config
```

With the above directory permissions, you can start a SFTPGo instance like this:

```shell
docker run --name some-sftpgo \
  --user 1100:1100 \
  -p 127.0.0.1:8080:8080 \
  -p 2022:2022 \
  --mount type=bind,source="${PWD}/data",target=/srv/sftpgo \
  --mount type=bind,source="${PWD}/config",target=/var/lib/sftpgo \
  -d "drakkan/sftpgo:tag"
```

Alternatively, build your own image using the official one as a base; here is a sample Dockerfile:

```shell
FROM drakkan/sftpgo:tag
USER root
RUN chown -R 1100:1100 /etc/sftpgo && chown 1100:1100 /var/lib/sftpgo /srv/sftpgo
USER 1100:1100
```

## Image Variants

The `sftpgo` images come in many flavors, each designed for a specific use case. The `edge` and `edge-alpine` tags are updated after each new commit.

### `sftpgo:<version>`

This is the de facto image, based on [Debian](https://www.debian.org/), available in [the `debian` official image](https://hub.docker.com/_/debian). If you are unsure about what your needs are, you probably want to use this one.

### `sftpgo:<version>-alpine`

This image is based on the popular [Alpine Linux project](https://alpinelinux.org/), available in [the `alpine` official image](https://hub.docker.com/_/alpine). Alpine Linux is much smaller than most distribution base images (~5MB), and thus leads to much slimmer images in general.

This variant is highly recommended when the final image size needs to be as small as possible. The main caveat to note is that it does use [musl libc](https://musl.libc.org/) instead of [glibc and friends](https://www.etalabs.net/compare_libcs.html), so certain software might run into issues depending on the depth of their libc requirements. However, most software doesn't have an issue with this, so this variant is usually a very safe choice. See [this Hacker News comment thread](https://news.ycombinator.com/item?id=10782897) for more discussion of the issues that might arise and some pro/con comparisons of using Alpine-based images.

### `sftpgo:<suite>-slim`

These tags provide a slimmer image that does not include the optional `git` and `rsync` dependencies.

## Helm Chart

A Helm chart is [available](https://artifacthub.io/packages/helm/sagikazarmark/sftpgo). You can find the source code [here](https://github.com/sagikazarmark/helm-charts/tree/master/charts/sftpgo).

@@ -1,8 +0,0 @@
-FROM debian:latest
-LABEL maintainer="nicola.murino@gmail.com"
-RUN apt-get update && apt-get install -y curl python3-requests python3-pygments
-
-RUN curl https://raw.githubusercontent.com/drakkan/sftpgo/master/scripts/sftpgo_api_cli.py --output /usr/bin/sftpgo_api_cli.py
-
-ENTRYPOINT ["python3", "/usr/bin/sftpgo_api_cli.py" ]
-CMD []
docker/scripts/entrypoint-alpine.sh (new executable file)
@@ -0,0 +1,28 @@
#!/usr/bin/env bash

SFTPGO_PUID=${SFTPGO_PUID:-1000}
SFTPGO_PGID=${SFTPGO_PGID:-1000}

if [ "$1" = 'sftpgo' ]; then
    if [ "$(id -u)" = '0' ]; then
        for DIR in "/etc/sftpgo" "/var/lib/sftpgo" "/srv/sftpgo"
        do
            DIR_UID=$(stat -c %u ${DIR})
            DIR_GID=$(stat -c %g ${DIR})
            if [ ${DIR_UID} != ${SFTPGO_PUID} ] || [ ${DIR_GID} != ${SFTPGO_PGID} ]; then
                echo '{"level":"info","time":"'`date +%Y-%m-%dT%H:%M:%S.000`'","sender":"entrypoint","message":"change owner for \"'${DIR}'\" UID: '${SFTPGO_PUID}' GID: '${SFTPGO_PGID}'"}'
                if [ ${DIR} = "/etc/sftpgo" ]; then
                    chown -R ${SFTPGO_PUID}:${SFTPGO_PGID} ${DIR}
                else
                    chown ${SFTPGO_PUID}:${SFTPGO_PGID} ${DIR}
                fi
            fi
        done
        echo '{"level":"info","time":"'`date +%Y-%m-%dT%H:%M:%S.000`'","sender":"entrypoint","message":"run as UID: '${SFTPGO_PUID}' GID: '${SFTPGO_PGID}'"}'
        exec su-exec ${SFTPGO_PUID}:${SFTPGO_PGID} "$@"
    fi

    exec "$@"
fi

exec "$@"

docker/scripts/entrypoint.sh (new executable file)
@@ -0,0 +1,32 @@
#!/usr/bin/env bash

SFTPGO_PUID=${SFTPGO_PUID:-1000}
SFTPGO_PGID=${SFTPGO_PGID:-1000}

if [ "$1" = 'sftpgo' ]; then
    if [ "$(id -u)" = '0' ]; then
        getent passwd ${SFTPGO_PUID} > /dev/null
        HAS_PUID=$?
        getent group ${SFTPGO_PGID} > /dev/null
        HAS_PGID=$?
        if [ ${HAS_PUID} -ne 0 ] || [ ${HAS_PGID} -ne 0 ]; then
            echo '{"level":"info","time":"'`date +%Y-%m-%dT%H:%M:%S.%3N`'","sender":"entrypoint","message":"prepare to run as UID: '${SFTPGO_PUID}' GID: '${SFTPGO_PGID}'"}'
            if [ ${HAS_PGID} -ne 0 ]; then
                echo '{"level":"info","time":"'`date +%Y-%m-%dT%H:%M:%S.%3N`'","sender":"entrypoint","message":"set GID to: '${SFTPGO_PGID}'"}'
                groupmod -g ${SFTPGO_PGID} sftpgo
            fi
            if [ ${HAS_PUID} -ne 0 ]; then
                echo '{"level":"info","time":"'`date +%Y-%m-%dT%H:%M:%S.%3N`'","sender":"entrypoint","message":"set UID to: '${SFTPGO_PUID}'"}'
                usermod -u ${SFTPGO_PUID} sftpgo
            fi
            chown -R ${SFTPGO_PUID}:${SFTPGO_PGID} /etc/sftpgo
            chown ${SFTPGO_PUID}:${SFTPGO_PGID} /var/lib/sftpgo /srv/sftpgo
        fi
        echo '{"level":"info","time":"'`date +%Y-%m-%dT%H:%M:%S.%3N`'","sender":"entrypoint","message":"run as UID: '${SFTPGO_PUID}' GID: '${SFTPGO_PGID}'"}'
        exec gosu ${SFTPGO_PUID}:${SFTPGO_PGID} "$@"
    fi

    exec "$@"
fi

exec "$@"
@@ -1,16 +1,21 @@
FROM golang:alpine as builder

RUN apk add --no-cache git gcc g++ ca-certificates \
-    && go get -d github.com/drakkan/sftpgo
+    && go get -v -d github.com/drakkan/sftpgo
WORKDIR /go/src/github.com/drakkan/sftpgo
-# uncomment the next line to get the latest stable version instead of the latest git
-#RUN git checkout `git rev-list --tags --max-count=1`
-RUN go build -i -ldflags "-s -w -X github.com/drakkan/sftpgo/utils.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/utils.date=`date -u +%FT%TZ`" -o /go/bin/sftpgo
+ARG TAG
+ARG FEATURES
+# Use --build-arg TAG=LATEST for the latest tag. Use e.g. --build-arg TAG=v1.0.0 for a specific tag/commit. Otherwise HEAD (master) is built.
+RUN git checkout $(if [ "${TAG}" = LATEST ]; then echo `git rev-list --tags --max-count=1`; elif [ -n "${TAG}" ]; then echo "${TAG}"; else echo HEAD; fi)
+RUN go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -v -o /go/bin/sftpgo

FROM alpine:latest

RUN apk add --no-cache ca-certificates su-exec \
-    && mkdir -p /data /etc/sftpgo /srv/sftpgo/config /srv/sftpgo/web
+    && mkdir -p /data /etc/sftpgo /srv/sftpgo/config /srv/sftpgo/web /srv/sftpgo/backups

# git and rsync are optional, uncomment the next line to add support for them if needed.
#RUN apk add --no-cache git rsync

COPY --from=builder /go/bin/sftpgo /bin/
COPY --from=builder /go/src/github.com/drakkan/sftpgo/sftpgo.json /etc/sftpgo/sftpgo.json
@@ -19,8 +24,27 @@ COPY --from=builder /go/src/github.com/drakkan/sftpgo/static /srv/sftpgo/web/sta
COPY docker-entrypoint.sh /bin/entrypoint.sh
RUN chmod +x /bin/entrypoint.sh

-VOLUME [ "/data", "/srv/sftpgo/config" ]
+VOLUME [ "/data", "/srv/sftpgo/config", "/srv/sftpgo/backups" ]
EXPOSE 2022 8080

# uncomment the following settings to enable FTP support
#ENV SFTPGO_FTPD__BIND_PORT=2121
#ENV SFTPGO_FTPD__FORCE_PASSIVE_IP=<your FTP visible IP here>
#EXPOSE 2121

# we need to expose the passive ports range too
#EXPOSE 50000-50100

# it is a good idea to provide certificates to enable FTPS too
#ENV SFTPGO_FTPD__CERTIFICATE_FILE=/srv/sftpgo/config/mycert.crt
#ENV SFTPGO_FTPD__CERTIFICATE_KEY_FILE=/srv/sftpgo/config/mycert.key

# uncomment the following setting to enable WebDAV support
#ENV SFTPGO_WEBDAVD__BIND_PORT=8090

# it is a good idea to provide certificates to enable WebDAV over HTTPS
#ENV SFTPGO_WEBDAVD__CERTIFICATE_FILE=${CONFIG_DIR}/mycert.crt
#ENV SFTPGO_WEBDAVD__CERTIFICATE_KEY_FILE=${CONFIG_DIR}/mycert.key

ENTRYPOINT ["/bin/entrypoint.sh"]
-CMD []
+CMD ["serve"]
@@ -1,41 +1,61 @@
# SFTPGo with Docker and Alpine

+:warning: The recommended way to run SFTPGo on Docker is to use the official [images](https://hub.docker.com/r/drakkan/sftpgo). The documentation here is now obsolete.

This Dockerfile is made to build an image to host multiple instances of SFTPGo started with different users.

-### Example
+## Example

> 1003 is a custom uid:gid for this instance of SFTPGo

```bash
# Prereq on docker host
sudo groupadd -g 1003 sftpgrp && \
  sudo useradd -u 1003 -g 1003 sftpuser -d /home/sftpuser/ && \
  sudo -u sftpuser mkdir /home/sftpuser/{conf,data} && \
  curl https://raw.githubusercontent.com/drakkan/sftpgo/master/sql/sqlite/20190828.sql | sqlite3 /home/sftpuser/conf/sftpgo.db && \
  curl https://raw.githubusercontent.com/drakkan/sftpgo/master/sftpgo.json -o /home/sftpuser/conf/sftpgo.json

-# Get and build SFTPGo image
+# Edit sftpgo.json as you need
+
+# Get and build SFTPGo image.
+# Add --build-arg TAG=LATEST to build the latest tag or e.g. TAG=v1.0.0 for a specific tag/commit.
+# Add --build-arg FEATURES=<build features comma separated> to specify the features to build.
git clone https://github.com/drakkan/sftpgo.git && \
  cd sftpgo && \
  sudo docker build -t sftpgo docker/sftpgo/alpine/

-# Starting image
+# Initialize the configured provider. For PostgreSQL and MySQL providers you need to create the configured database and the "initprovider" command will create the required tables.
sudo docker run --name sftpgo \
  -e PUID=1003 \
  -e GUID=1003 \
  -v /home/sftpuser/conf/:/srv/sftpgo/config \
  sftpgo initprovider -c /srv/sftpgo/config

# Start the image
sudo docker rm sftpgo && sudo docker run --name sftpgo \
  -e SFTPGO_LOG_FILE_PATH= \
  -e SFTPGO_CONFIG_DIR=/srv/sftpgo/config \
  -e SFTPGO_HTTPD__TEMPLATES_PATH=/srv/sftpgo/web/templates \
  -e SFTPGO_HTTPD__STATIC_FILES_PATH=/srv/sftpgo/web/static \
  -e SFTPGO_HTTPD__BACKUPS_PATH=/srv/sftpgo/backups \
  -p 8080:8080 \
  -p 2022:2022 \
  -e PUID=1003 \
  -e GUID=1003 \
  -v /home/sftpuser/conf/:/srv/sftpgo/config \
  -v /home/sftpuser/data:/data \
  -v /home/sftpuser/backups:/srv/sftpgo/backups \
  sftpgo
```

-The script `entrypoint.sh` makes sure to correct the permissions of directories and start the process with the right user

If you want to enable FTP/S you also need to publish the FTP port and the FTP passive port range, defined in your `Dockerfile`, by adding, for example, the following options to the `docker run` command: `-p 2121:2121 -p 50000-50100:50000-50100`. The same goes for WebDAV: you need to publish the configured port.

The script `entrypoint.sh` makes sure to correct the permissions of directories and start the process with the right user.

Several images can be run with different parameters.

-### Custom systemd script
+## Custom systemd script

An example systemd script is available [here](sftpgo.service), with `Environment` parameters to set `PUID` and `GUID`.

The `WorkingDirectory` must exist and contain a file like `sftpgo-${PUID}.env`, the environment variable file for the SFTPGo instance.
@@ -2,6 +2,6 @@

set -eu

-chown -R "${PUID}:${GUID}" /data /etc/sftpgo /srv/sftpgo/config \
+chown -R "${PUID}:${GUID}" /data /etc/sftpgo /srv/sftpgo/config /srv/sftpgo/backups \
  && exec su-exec "${PUID}:${GUID}" \
-  /bin/sftpgo serve "$@"
+  /bin/sftpgo "$@"

@@ -1,5 +1,5 @@
[Unit]
-Description=SFTPGo sftp server
+Description=SFTPGo server
After=docker.service

[Service]
@@ -8,19 +8,23 @@ Group=root
WorkingDirectory=/etc/sftpgo
Environment=PUID=1003
Environment=GUID=1003
-EnvironmentFile=-/etc/sysconfig/sftpgo.conf
+EnvironmentFile=-/etc/sysconfig/sftpgo.env
ExecStartPre=-docker kill sftpgo
ExecStartPre=-docker rm sftpgo
ExecStart=docker run --name sftpgo \
  --env-file sftpgo-${PUID}.env \
  -e PUID=${PUID} \
  -e GUID=${GUID} \
  -e SFTPGO_LOG_FILE_PATH= \
  -e SFTPGO_CONFIG_DIR=/srv/sftpgo/config \
  -e SFTPGO_HTTPD__TEMPLATES_PATH=/srv/sftpgo/web/templates \
  -e SFTPGO_HTTPD__STATIC_FILES_PATH=/srv/sftpgo/web/static \
  -e SFTPGO_HTTPD__BACKUPS_PATH=/srv/sftpgo/backups \
  -p 8080:8080 \
  -p 2022:2022 \
  -v /home/sftpuser/conf/:/srv/sftpgo/config \
  -v /home/sftpuser/data:/data \
  -v /home/sftpuser/backups:/srv/sftpgo/backups \
  sftpgo
ExecStop=docker stop sftpgo
SyslogIdentifier=sftpgo
@@ -1,18 +1,27 @@
# we use a multi stage build to have a separate build and run env
FROM golang:latest as buildenv
LABEL maintainer="nicola.murino@gmail.com"
-RUN go get -d github.com/drakkan/sftpgo
+RUN go get -v -d github.com/drakkan/sftpgo
WORKDIR /go/src/github.com/drakkan/sftpgo
-# uncomment the next line to get the latest stable version instead of the latest git
-#RUN git checkout `git rev-list --tags --max-count=1`
-RUN go build -i -ldflags "-s -w -X github.com/drakkan/sftpgo/utils.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/utils.date=`date -u +%FT%TZ`" -o sftpgo
+ARG TAG
+ARG FEATURES
+# Use --build-arg TAG=LATEST for the latest tag. Use e.g. --build-arg TAG=v1.0.0 for a specific tag/commit. Otherwise HEAD (master) is built.
+RUN git checkout $(if [ "${TAG}" = LATEST ]; then echo `git rev-list --tags --max-count=1`; elif [ -n "${TAG}" ]; then echo "${TAG}"; else echo HEAD; fi)
+RUN go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -v -o sftpgo

# now define the run environment
FROM debian:latest

# ca-certificates is needed for Cloud Storage support and for HTTPS/FTPS.
RUN apt-get update && apt-get install -y ca-certificates && apt-get clean

# git and rsync are optional, uncomment the next line to add support for them if needed.
#RUN apt-get update && apt-get install -y git rsync && apt-get clean

ARG BASE_DIR=/app
ARG DATA_REL_DIR=data
ARG CONFIG_REL_DIR=config
ARG BACKUP_REL_DIR=backups
ARG USERNAME=sftpgo
ARG GROUPNAME=sftpgo
ARG UID=515
@@ -25,11 +34,13 @@ ENV HOME_DIR=${BASE_DIR}/${USERNAME}
ENV DATA_DIR=${BASE_DIR}/${DATA_REL_DIR}
# CONFIG_DIR, this is a volume to persist the daemon private keys, configuration file, etc.
ENV CONFIG_DIR=${BASE_DIR}/${CONFIG_REL_DIR}
# BACKUPS_DIR, this is a volume to store backups created using the "dumpdata" REST API
ENV BACKUPS_DIR=${BASE_DIR}/${BACKUP_REL_DIR}
ENV WEB_DIR=${BASE_DIR}/${WEB_REL_PATH}

-RUN mkdir -p ${DATA_DIR} ${CONFIG_DIR} ${WEB_DIR}
+RUN mkdir -p ${DATA_DIR} ${CONFIG_DIR} ${WEB_DIR} ${BACKUPS_DIR}
RUN groupadd --system -g ${GID} ${GROUPNAME}
-RUN useradd --system --create-home --no-log-init --home-dir ${HOME_DIR} --comment "SFTPGo user" --shell /bin/false --gid ${GID} --uid ${UID} ${USERNAME}
+RUN useradd --system --create-home --no-log-init --home-dir ${HOME_DIR} --comment "SFTPGo user" --shell /usr/sbin/nologin --gid ${GID} --uid ${UID} ${USERNAME}

WORKDIR ${HOME_DIR}
RUN mkdir -p bin .config/sftpgo
@@ -40,7 +51,7 @@ COPY --from=buildenv /go/src/github.com/drakkan/sftpgo/sftpgo bin/sftpgo
COPY --from=buildenv /go/src/github.com/drakkan/sftpgo/sftpgo.json .config/sftpgo/
COPY --from=buildenv /go/src/github.com/drakkan/sftpgo/templates ${WEB_DIR}/templates
COPY --from=buildenv /go/src/github.com/drakkan/sftpgo/static ${WEB_DIR}/static
-RUN chown -R ${UID}:${GID} ${DATA_DIR}
+RUN chown -R ${UID}:${GID} ${DATA_DIR} ${BACKUPS_DIR}

# run as non root user
USER ${USERNAME}
@@ -48,16 +59,35 @@ USER ${USERNAME}
EXPOSE 2022 8080

# the defined volumes must have write access for the UID and GID defined above
-VOLUME [ "$DATA_DIR", "$CONFIG_DIR" ]
+VOLUME [ "$DATA_DIR", "$CONFIG_DIR", "$BACKUPS_DIR" ]

# override some default configuration options using env vars
ENV SFTPGO_CONFIG_DIR=${CONFIG_DIR}
# setting SFTPGO_LOG_FILE_PATH to an empty string will log to stdout
-ENV SFTPGO_LOG_FILE_PATH=${CONFIG_DIR}/sftpgo.log
+ENV SFTPGO_LOG_FILE_PATH=""
ENV SFTPGO_HTTPD__BIND_ADDRESS=""
ENV SFTPGO_HTTPD__TEMPLATES_PATH=${WEB_DIR}/templates
ENV SFTPGO_HTTPD__STATIC_FILES_PATH=${WEB_DIR}/static
ENV SFTPGO_DATA_PROVIDER__USERS_BASE_DIR=${DATA_DIR}
ENV SFTPGO_HTTPD__BACKUPS_PATH=${BACKUPS_DIR}

# uncomment the following settings to enable FTP support
#ENV SFTPGO_FTPD__BIND_PORT=2121
#ENV SFTPGO_FTPD__FORCE_PASSIVE_IP=<your FTP visible IP here>
#EXPOSE 2121
# we need to expose the passive ports range too
#EXPOSE 50000-50100

# it is a good idea to provide certificates to enable FTPS too
#ENV SFTPGO_FTPD__CERTIFICATE_FILE=${CONFIG_DIR}/mycert.crt
#ENV SFTPGO_FTPD__CERTIFICATE_KEY_FILE=${CONFIG_DIR}/mycert.key

# uncomment the following setting to enable WebDAV support
#ENV SFTPGO_WEBDAVD__BIND_PORT=8090

# it is a good idea to provide certificates to enable WebDAV over HTTPS
#ENV SFTPGO_WEBDAVD__CERTIFICATE_FILE=${CONFIG_DIR}/mycert.crt
#ENV SFTPGO_WEBDAVD__CERTIFICATE_KEY_FILE=${CONFIG_DIR}/mycert.key

ENTRYPOINT ["sftpgo"]
CMD ["serve"]
@@ -1,4 +1,6 @@
-## Dockerfile based on Debian stable
+# Dockerfile based on Debian stable

+:warning: The recommended way to run SFTPGo on Docker is to use the official [images](https://hub.docker.com/r/drakkan/sftpgo). The documentation here is now obsolete.

Please read the comments inside the `Dockerfile` to learn how to customize things for your setup.

@@ -8,15 +10,50 @@ You can build the container image using `docker build`, for example:
docker build -t="drakkan/sftpgo" .
```

-and you can run the Dockerfile using something like this:
+This will build master of github.com/drakkan/sftpgo.
+
+To build the latest tag you can add `--build-arg TAG=LATEST`, and to build a specific tag/commit you can use for example `TAG=v1.0.0`, like this:

```bash
-docker run --name sftpgo -p 8080:8080 -p 2022:2022 --mount type=bind,source=/srv/sftpgo/data,target=/app/data --mount type=bind,source=/srv/sftpgo/config,target=/app/config drakkan/sftpgo
+docker build -t="drakkan/sftpgo" --build-arg TAG=v1.0.0 .
```

-where `/srv/sftpgo/data` and `/srv/sftpgo/config` are two folders on the host system with write access for the UID/GID defined inside the `Dockerfile`. You can choose to create a new user, on the host system, with a matching UID/GID pair or simply do something like:
+To specify the features to build you can add `--build-arg FEATURES=<build features comma separated>`. For example you can disable SQLite and S3 support like this:

```bash
-chown -R <UID>:<GID> /srv/sftpgo/data /srv/sftpgo/config
+docker build -t="drakkan/sftpgo" --build-arg FEATURES=nosqlite,nos3 .
```

Please take a look at the [build from source](./../../../docs/build-from-source.md) documentation for the complete list of the features that can be disabled.

Now create the required folders on the host system, for example:

```bash
sudo mkdir -p /srv/sftpgo/data /srv/sftpgo/config /srv/sftpgo/backups
```

and give write access to them to the UID/GID defined inside the `Dockerfile`. You can choose to create a new user, on the host system, with a matching UID/GID pair, or simply do something like this:

```bash
sudo chown -R <UID>:<GID> /srv/sftpgo/data /srv/sftpgo/config /srv/sftpgo/backups
```

Download the default configuration file and edit it as you need:

```bash
sudo curl https://raw.githubusercontent.com/drakkan/sftpgo/master/sftpgo.json -o /srv/sftpgo/config/sftpgo.json
```

Initialize the configured provider. For PostgreSQL and MySQL providers you need to create the configured database and the `initprovider` command will create the required tables:

```bash
docker run --name sftpgo --mount type=bind,source=/srv/sftpgo/config,target=/app/config drakkan/sftpgo initprovider -c /app/config
```

and finally you can run the image using something like this:

```bash
docker rm sftpgo && docker run --name sftpgo -p 8080:8080 -p 2022:2022 --mount type=bind,source=/srv/sftpgo/data,target=/app/data --mount type=bind,source=/srv/sftpgo/config,target=/app/config --mount type=bind,source=/srv/sftpgo/backups,target=/app/backups drakkan/sftpgo
```

If you want to enable FTP/S you also need to publish the FTP port and the FTP passive port range, defined in your `Dockerfile`, by adding, for example, the following options to the `docker run` command: `-p 2121:2121 -p 50000-50100:50000-50100`. The same goes for WebDAV: you need to publish the configured port.
docs/account.md (new file)
@@ -0,0 +1,22 @@
# Account's configuration properties

Please take a look at the [OpenAPI schema](../httpd/schema/openapi.yaml) for the exact definitions of user, folder and admin fields.
If you need an example you can export a dump using the Web Admin or by invoking the `dumpdata` endpoint directly; you need to obtain an access token first, for example:

```shell
$ curl "http://admin:password@127.0.0.1:8080/api/v2/token"
{"access_token":"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOlsiQVBJIl0sImV4cCI6MTYxMzMzNTI2MSwianRpIjoiYzBrb2gxZmNkcnBjaHNzMGZwZmciLCJuYmYiOjE2MTMzMzQ2MzEsInBlcm1pc3Npb25zIjpbIioiXSwic3ViIjoiYUJ0SHUwMHNBUmxzZ29yeEtLQ1pZZWVqSTRKVTlXbThHSGNiVWtWVmc1TT0iLCJ1c2VybmFtZSI6ImFkbWluIn0.WiyqvUF-92zCr--y4Q_sxn-tPnISFzGZd_exsG-K7ME","expires_at":"2021-02-14T20:41:01Z"}

curl -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOlsiQVBJIl0sImV4cCI6MTYxMzMzNTI2MSwianRpIjoiYzBrb2gxZmNkcnBjaHNzMGZwZmciLCJuYmYiOjE2MTMzMzQ2MzEsInBlcm1pc3Npb25zIjpbIioiXSwic3ViIjoiYUJ0SHUwMHNBUmxzZ29yeEtLQ1pZZWVqSTRKVTlXbThHSGNiVWtWVmc1TT0iLCJ1c2VybmFtZSI6ImFkbWluIn0.WiyqvUF-92zCr--y4Q_sxn-tPnISFzGZd_exsG-K7ME" "http://127.0.0.1:8080/api/v2/dumpdata?output-data=1"
```

The dump is a JSON file with users, folders and admins.

These properties are stored inside the configured data provider.

SFTPGo supports checking passwords stored with bcrypt, pbkdf2, md5crypt and sha512crypt too. For pbkdf2 the supported format is `$<algo>$<iterations>$<salt>$<hashed pwd base64 encoded>`, where algo is `pbkdf2-sha1`, `pbkdf2-sha256`, `pbkdf2-sha512` or `$pbkdf2-b64salt-sha256$`. For example the pbkdf2-sha256 of the word `password` using 150000 iterations and `E86a9YMX3zC7` as salt must be stored as `$pbkdf2-sha256$150000$E86a9YMX3zC7$R5J62hsSq+pYw00hLLPKBbcGXmq7fj5+/M0IFoYtZbo=`. In the pbkdf2 variant with b64salt, the salt is base64 encoded. For bcrypt the format must be the one supported by golang's crypto/bcrypt package; for example the password `secret` with cost 14 must be stored as `$2a$14$ajq8Q7fbtFRQvXpdCq7Jcuy.Rx1h/L4J60Otx.gyNLbAYctGMJ9tK`. For md5crypt and sha512crypt we support the format used in `/etc/shadow` with the `$1$` and `$6$` prefix; this is useful if you are migrating from Unix system user accounts. We support Apache md5crypt (`$apr1$` prefix) too. Using the REST API you can send a password hashed as bcrypt, pbkdf2, md5crypt or sha512crypt and it will be stored as is.
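
To make the pbkdf2 format concrete, here is a minimal Go sketch that derives the hash and assembles the string using the `golang.org/x/crypto/pbkdf2` package. It assumes the salt is used as raw ASCII bytes and that a 32-byte key is derived for SHA-256; under those assumptions it should reproduce the example hash above:

```go
package main

import (
    "crypto/sha256"
    "encoding/base64"
    "fmt"

    "golang.org/x/crypto/pbkdf2"
)

func main() {
    // Derive a 32-byte key with PBKDF2-SHA256 and assemble the
    // $pbkdf2-sha256$<iterations>$<salt>$<base64 key> string.
    password, salt, iterations := "password", "E86a9YMX3zC7", 150000
    key := pbkdf2.Key([]byte(password), []byte(salt), iterations, 32, sha256.New)
    fmt.Printf("$pbkdf2-sha256$%d$%s$%s\n", iterations, salt, base64.StdEncoding.EncodeToString(key))
}
```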
|
||||
|
||||
If you want to use your existing accounts, you have these options:
|
||||
|
||||
- you can import your users inside SFTPGo. Take a look at [convert users](.../examples/convertusers) script, it can convert and import users from Linux system users and Pure-FTPd/ProFTPD virtual users
|
||||
- you can use an external authentication program
|
||||
20
docs/azure-blob-storage.md
Normal file
20
docs/azure-blob-storage.md
Normal file
@@ -0,0 +1,20 @@
|
||||
# Azure Blob Storage backend
|
||||
|
||||
To connect SFTPGo to Azure Blob Storage, you need to specify the access credentials. Azure Blob Storage has different options for credentials, we support:
|
||||
|
||||
1. Providing an account name and account key.
|
||||
2. Providing a shared access signature (SAS).
|
||||
|
||||
If you authenticate using account and key you also need to specify a container. The endpoint can generally be left blank, the default is `blob.core.windows.net`.
|
||||
|
||||
If you provide a SAS URL the container is optional and if given it must match the one inside the shared access signature.
|
||||
|
||||
If you want to connect to an emulator such as [Azurite](https://github.com/Azure/Azurite) you need to provide the account name/key pair and an endpoint prefixed with the protocol, for example `http://127.0.0.1:10000`.

Specifying a different `key_prefix`, you can assign different "folders" of the same container to different users. This is similar to a chroot directory for a local filesystem. Each SFTPGo user can only access the assigned folder and its contents. The folder identified by `key_prefix` does not need to be pre-created.

For multipart uploads you can customize the part size and the upload concurrency. Please note that if the upload bandwidth between the client and SFTPGo is greater than the upload bandwidth between SFTPGo and the Azure Blob service, then the client has to wait for the last parts to be uploaded to Azure after it finishes uploading the file to SFTPGo, and it may time out. Keep this in mind if you customize these parameters.

The configured container must exist.

This backend is very similar to the [S3](./s3.md) backend, and it has the same limitations.
43
docs/build-from-source.md
Normal file
@@ -0,0 +1,43 @@
# Build SFTPGo from source

Download the sources and use `go build`.

The following build tags are available:

- `nogcs`, disable Google Cloud Storage backend, default enabled
- `nos3`, disable S3 Compatible Object Storage backends, default enabled
- `noazblob`, disable Azure Blob Storage backend, default enabled
- `nobolt`, disable Bolt data provider, default enabled
- `nomysql`, disable MySQL data provider, default enabled
- `nopgsql`, disable PostgreSQL data provider, default enabled
- `nosqlite`, disable SQLite data provider, default enabled
- `noportable`, disable portable mode, default enabled
- `nometrics`, disable Prometheus metrics, default enabled
- `novaultkms`, disable Vault transit secret engine, default enabled
- `noawskms`, disable AWS KMS, default enabled
- `nogcpkms`, disable GCP KMS, default enabled

If no build tag is specified, the build will include the default features.

The optional [SQLite driver](https://github.com/mattn/go-sqlite3 "go-sqlite3") is a `CGO` package and so it requires a `C` compiler at build time.
On Linux and macOS, a compiler is easy to install or already installed. On Windows, you need to download [MinGW-w64](https://sourceforge.net/projects/mingw-w64/files/) and build SFTPGo from its command prompt.

The compiler is a build time only dependency. It is not required at runtime.

Version info, such as git commit and build date, can be embedded by setting the following string variables at build time:

- `github.com/drakkan/sftpgo/version.commit`
- `github.com/drakkan/sftpgo/version.date`

For example, you can build using the following command:

```bash
go build -tags nogcs,nos3,nosqlite -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -o sftpgo
```

You should get a version that includes git commit, build date and available features like this one:

```bash
$ ./sftpgo -v
SFTPGo 0.9.6-dev-b30614e-dirty-2020-06-19T11:04:56Z +metrics -gcs -s3 +bolt +mysql +pgsql -sqlite +portable
```
45
docs/check-password-hook.md
Normal file
@@ -0,0 +1,45 @@
# Check password hook

This hook allows you to externally check the provided password. Its main use case is to easily support things like password+OTP for protocols without keyboard interactive support, such as FTP and WebDAV. You can ask your users to log in using a string consisting of a fixed password and a one-time token; you verify the token inside the hook and ask SFTPGo to verify the fixed part.

The same thing can be achieved using [External authentication](./external-auth.md), but using this hook is simpler in some use cases.

The `check password hook` can be defined as the absolute path of your program or an HTTP URL.

The expected response is a JSON serialized struct containing the following keys:

- `status`, integer. 0 means KO (authentication failed), 1 means OK, 2 means partial success
- `to_verify`, string. For `status` = 2 SFTPGo will check this password against the one stored inside the SFTPGo data provider

If the hook defines an external program, it can read the following environment variables:

- `SFTPGO_AUTHD_USERNAME`
- `SFTPGO_AUTHD_PASSWORD`
- `SFTPGO_AUTHD_IP`
- `SFTPGO_AUTHD_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`

Previous global environment variables aren't cleared when the script is called. The content of these variables is _not_ quoted. They may contain special characters. They are under the control of a possibly malicious remote user.

The program must write, on its standard output, the expected JSON serialized response described above.
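For illustration, here is a minimal sketch of such a program. It assumes, purely as a convention, that users append a 6-digit token to their fixed password; the token check is stubbed out and JSON escaping of the password is omitted for brevity:

```shell
#!/bin/bash

PASSWORD="$SFTPGO_AUTHD_PASSWORD"

# Assume the last 6 characters are the one-time token (illustrative convention).
TOKEN="${PASSWORD: -6}"
FIXED="${PASSWORD:0:${#PASSWORD}-6}"

# Hypothetical token check: replace with a real OTP validation.
if [ "$TOKEN" = "123456" ]; then
  # Partial success: ask SFTPGo to verify the fixed part against the stored password.
  printf '{"status":2,"to_verify":"%s"}' "$FIXED"
else
  # KO: reject the login.
  printf '{"status":0}'
fi
```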

If the hook is an HTTP URL, then it will be invoked as HTTP POST. The request body will contain a JSON serialized struct with the following fields:

- `username`
- `password`
- `ip`
- `protocol`, possible values are `SSH`, `FTP`, `DAV`

If authentication succeeds, the HTTP response code must be 200 and the response body must contain the expected JSON serialized response described above.

The program hook must finish within 30 seconds; the HTTP hook timeout will use the global configuration for HTTP clients.

You can also restrict the hook scope using the `check_password_scope` configuration key:

- `0` means all supported protocols
- `1` means SSH only
- `2` means FTP only
- `4` means WebDAV only

You can combine the scopes. For example, 6 means FTP and WebDAV.

An example check password program allowing 2FA using password + one time token can be found inside the source tree [checkpwd](../examples/OTP/authy/checkpwd) directory.
76
docs/custom-actions.md
Normal file
@@ -0,0 +1,76 @@
# Custom Actions

The `actions` struct inside the "common" configuration section allows you to configure the actions for file operations and SSH commands.
The `hook` can be defined as the absolute path of your program or an HTTP URL.

The `upload` condition includes both uploads to new files and overwrites of existing files. If an upload is aborted due to quota limits, SFTPGo tries to remove the partial file, so if the notification reports a zero-size file and a quota exceeded error, the file has been deleted. The `ssh_cmd` condition will be triggered after a command is successfully executed via SSH. `scp` will trigger the `download` and `upload` conditions and not `ssh_cmd`.
The notification will indicate if an error is detected and so, for example, whether a partial file was uploaded.
The `pre-delete` action, if defined, will be called just before file deletion. If the external command completes with a zero exit status or the HTTP notification response code is `200`, then SFTPGo will assume that the file was already deleted/moved, so it will not try to remove the file and it will not execute the hook defined for the `delete` action.

If the `hook` defines a path to an external program, then this program is invoked with the following arguments:

- `action`, string, possible values are: `download`, `upload`, `pre-delete`, `delete`, `rename`, `ssh_cmd`
- `username`
- `path` is the full filesystem path, can be empty for some SSH commands
- `target_path`, non-empty for the `rename` action and for the `sftpgo-copy` SSH command
- `ssh_cmd`, non-empty for the `ssh_cmd` action

The external program can also read the following environment variables:

- `SFTPGO_ACTION`
- `SFTPGO_ACTION_USERNAME`
- `SFTPGO_ACTION_PATH`
- `SFTPGO_ACTION_TARGET`, non-empty for the `rename` `SFTPGO_ACTION`
- `SFTPGO_ACTION_SSH_CMD`, non-empty for the `ssh_cmd` `SFTPGO_ACTION`
- `SFTPGO_ACTION_FILE_SIZE`, non-empty for the `upload`, `download` and `delete` `SFTPGO_ACTION`
- `SFTPGO_ACTION_FS_PROVIDER`, `0` for local filesystem, `1` for S3 backend, `2` for Google Cloud Storage (GCS) backend, `3` for Azure Blob Storage backend
- `SFTPGO_ACTION_BUCKET`, non-empty for S3, GCS and Azure backends
- `SFTPGO_ACTION_ENDPOINT`, non-empty for S3 and Azure backends if configured. For Azure this is the SAS URL, if configured, otherwise the endpoint
- `SFTPGO_ACTION_STATUS`, integer. 0 means a generic error occurred, 1 means no error, 2 means quota exceeded error
- `SFTPGO_ACTION_PROTOCOL`, string. Possible values are `SSH`, `SFTP`, `SCP`, `FTP`, `DAV`

Previous global environment variables aren't cleared when the script is called.
The program must finish within 30 seconds.
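As a minimal sketch, a program hook could simply log each notification using the documented environment variables; the log path here is an arbitrary example:

```shell
#!/bin/bash

# Append one line per notification to a log file.
printf '%s action=%s user=%s path=%s status=%s\n' \
  "$(date -u +%FT%TZ)" \
  "$SFTPGO_ACTION" \
  "$SFTPGO_ACTION_USERNAME" \
  "$SFTPGO_ACTION_PATH" \
  "$SFTPGO_ACTION_STATUS" >> /var/log/sftpgo-actions.log
```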

If the `hook` defines an HTTP URL, then this URL will be invoked as HTTP POST. The request body will contain a JSON serialized struct with the following fields:

- `action`
- `username`
- `path`
- `target_path`, not null for the `rename` action
- `ssh_cmd`, not null for the `ssh_cmd` action
- `file_size`, not null for `upload`, `download`, `delete` actions
- `fs_provider`, `0` for local filesystem, `1` for S3 backend, `2` for Google Cloud Storage (GCS) backend, `3` for Azure Blob Storage backend
- `bucket`, not null for S3, GCS and Azure backends
- `endpoint`, not null for S3 and Azure backends if configured. For Azure this is the SAS URL, if configured, otherwise the endpoint
- `status`, integer. 0 means a generic error occurred, 1 means no error, 2 means quota exceeded error
- `protocol`, string. Possible values are `SSH`, `FTP`, `DAV`

The HTTP hook will use the global configuration for HTTP clients and will respect the retry configuration.

The `actions` struct inside the "data_provider" configuration section allows you to configure actions on user add, update and delete.

Actions will not be fired for internal updates, such as the last login or the user quota fields, or after external authentication.

If the `hook` defines a path to an external program, then this program is invoked with the following arguments:

- `action`, string, possible values are: `add`, `update`, `delete`
- `username`
- `ID`
- `status`
- `expiration_date`
- `home_dir`
- `uid`
- `gid`

The external program can also read the following environment variables:

- `SFTPGO_USER_ACTION`
- `SFTPGO_USER`, user serialized as JSON with sensitive fields removed

Previous global environment variables aren't cleared when the script is called.
The program must finish within 15 seconds.
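For example, here is a sketch of a program hook that records which account changed; it assumes `jq` is installed to parse the serialized user, and the log path is arbitrary:

```shell
#!/bin/bash

# Extract the username from the JSON serialized user (requires jq).
USERNAME=$(printf '%s' "$SFTPGO_USER" | jq -r '.username')
printf '%s action=%s username=%s\n' "$(date -u +%FT%TZ)" \
  "$SFTPGO_USER_ACTION" "$USERNAME" >> /var/log/sftpgo-user-actions.log
```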

If the `hook` defines an HTTP URL, then this URL will be invoked as HTTP POST. The action is added to the query string, for example `<hook>?action=update`, and the user is sent serialized as JSON inside the POST body with sensitive fields removed.

The HTTP hook will use the global configuration for HTTP clients and will respect the retry configuration.
19
docs/dare.md
Normal file
@@ -0,0 +1,19 @@
# Data At Rest Encryption (DARE)

SFTPGo supports data at-rest encryption via its `cryptfs` virtual file system. In this mode, SFTPGo transparently encrypts and decrypts data (to/from the disk) on the fly during uploads and/or downloads, making sure that the files at rest on the server side are always encrypted.

Because of the way it works, when you set up an encrypted filesystem for a user, you need to make sure it points to an empty path/directory (one that has no files in it). Otherwise, SFTPGo would try to decrypt existing files that are not encrypted in the first place, and fail.

SFTPGo's `cryptfs` is a tiny wrapper around [sio](https://github.com/minio/sio), therefore data is encrypted and authenticated using `AES-256-GCM` or `ChaCha20-Poly1305`. AES-GCM is used if the CPU provides hardware support for it.

The only required configuration parameter is a `passphrase`. Each file will be encrypted using a unique, randomly generated secret key derived from the given passphrase using the HMAC-based Extract-and-Expand Key Derivation Function (HKDF) as defined in [RFC 5869](http://tools.ietf.org/html/rfc5869). It is important to note that the per-object encryption key is never stored anywhere: it is derived from your `passphrase` and a randomly generated initialization vector just before encryption/decryption. The initialization vector is stored with the file.

The passphrase is itself stored encrypted according to your [KMS configuration](./kms.md) and is required to decrypt any file encrypted using an encryption key derived from it.

The encrypted filesystem has some limitations compared to the local, unencrypted one:

- Upload resume is not supported.
- Opening a file for both reading and writing at the same time is not supported, so clients that require advanced filesystem-like features such as `sshfs` are not supported either.
- Truncate is not supported.
- System commands such as `git` or `rsync` are not supported: they will store data unencrypted.
- Virtual folders are not implemented for now; if you are interested in this feature, please consider submitting a well written pull request (fully covered by test cases) or sponsoring this development. We could add a filesystem configuration to each virtual folder so we can mount encrypted or cloud backends as subfolders for local filesystems and vice versa.
63
docs/defender.md
Normal file
@@ -0,0 +1,63 @@
# Defender

The built-in `defender` allows you to configure an auto-blocking policy for SFTPGo and thus helps to prevent DoS (Denial of Service) and brute force password guessing.

If enabled, it will protect SFTP, FTP and WebDAV services and it will automatically block hosts (IP addresses) that continually fail to log in or attempt to connect.

You can configure a score for each event type:

- `score_valid`, defines the score for valid login attempts, e.g. user accounts that exist. Default `1`.
- `score_invalid`, defines the score for invalid login attempts, e.g. non-existent user accounts or clients disconnected for inactivity without authentication attempts. Default `2`.

And then you can configure:

- `observation_time`, defines the time window, in minutes, for tracking client errors.
- `threshold`, defines the threshold value before banning a host.
- `ban_time`, defines the time to ban a client, in minutes.

So a host is banned, for `ban_time` minutes, if it has exceeded the defined threshold during the last `observation_time` minutes.

A banned IP has no score; it makes no sense to accumulate host events in memory for an already banned IP address.

If an already banned client tries to log in again, its ban time will be incremented according to the `ban_time_increment` configuration.

The `ban_time_increment` is calculated as a percentage of `ban_time`, so if `ban_time` is 30 minutes and `ban_time_increment` is 50, the host will be banned for an additional 15 minutes. You can also specify values greater than 100 for `ban_time_increment` if you want to increase the penalty for already banned hosts.

The `defender` will keep in memory both the host scores and the banned hosts; you can limit the memory usage using the `entries_soft_limit` and `entries_hard_limit` configuration keys.

The REST API allows:

- to retrieve the score for an IP address
- to retrieve the ban time for an IP address
- to unban an IP address

We don't return the whole list of banned IP addresses or all stored scores because we store them as a hash map, and iterating over all the keys of a hash map is not a fast operation and would slow down the recording of new events.
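A sketch of what these calls could look like with `curl`; the exact endpoint paths and parameters are defined by the REST API schema, so treat the paths below as illustrative assumptions and check the OpenAPI definition shipped with your version:

```shell
TOKEN="..." # JWT access token obtained as shown in the REST API examples

# Retrieve the score for an IP address (hypothetical path/query)
curl -H "Authorization: Bearer $TOKEN" "http://127.0.0.1:8080/api/v2/defender/score?ip=192.0.2.7"

# Retrieve the ban time for an IP address (hypothetical path/query)
curl -H "Authorization: Bearer $TOKEN" "http://127.0.0.1:8080/api/v2/defender/bantime?ip=192.0.2.7"

# Unban an IP address (hypothetical path/body)
curl -X POST -H "Authorization: Bearer $TOKEN" -d '{"ip":"192.0.2.7"}' "http://127.0.0.1:8080/api/v2/defender/unban"
```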

The `defender` can also load a permanent block list and/or a safe list of IP addresses/networks from a file:

- `safelist_file`, defines the path to a file containing a list of IP addresses and/or networks to never ban.
- `blocklist_file`, defines the path to a file containing a list of IP addresses and/or networks to always ban.

These lists must be stored as JSON conforming to the following schema:

- `addresses`, list of strings. Each string must be a valid IPv4/IPv6 address.
- `networks`, list of strings. Each string must be a valid IPv4/IPv6 CIDR address.

Here is a small example:

```json
{
  "addresses":[
    "192.0.2.1",
    "2001:db8::68"
  ],
  "networks":[
    "192.0.2.0/24",
    "2001:db8:1234::/48"
  ]
}
```

These lists will be loaded in memory for faster lookups. The REST API queries "live" data, not these lists.

The `defender` is optimized for fast, constant-time lookups; however, as it keeps all the lists and the entries in memory, you should carefully measure the memory requirements for your use case.
55
docs/dynamic-user-mod.md
Normal file
@@ -0,0 +1,55 @@
# Dynamic user creation or modification

Dynamic user creation or modification is supported via an external program or an HTTP URL that can be invoked just before the user login.
To enable dynamic user modification, you must set the absolute path of your program or an HTTP URL using the `pre_login_hook` key in your configuration file.

The external program can read the following environment variables to get info about the user trying to login:

- `SFTPGO_LOGIND_USER`, it contains the user trying to login serialized as JSON. A JSON serialized user id equal to zero means the user does not exist inside SFTPGo
- `SFTPGO_LOGIND_METHOD`, possible values are: `password`, `publickey` and `keyboard-interactive`
- `SFTPGO_LOGIND_IP`, IP address of the user trying to login
- `SFTPGO_LOGIND_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`

The program must write, on its standard output:

- an empty string (or no response at all) if the user should not be created/updated
- or the SFTPGo user, JSON serialized, if you want to create or update the given user

If the hook is an HTTP URL, then it will be invoked as HTTP POST. The login method, the protocol used and the IP address of the user trying to login are added to the query string, for example `<http_url>?login_method=password&ip=1.2.3.4&protocol=SSH`.
The request body will contain the user trying to login serialized as JSON. If no modification is needed, the HTTP response code must be 204; otherwise the response code must be 200 and the response body must be a valid SFTPGo user serialized as JSON.

Actions defined for user updates will not be executed in this case, and an already logged in user with the same username will not be disconnected; you have to handle these things yourself.

The JSON response can include only the fields to update instead of the full user. For example, if you want to disable the user, you can return a response like this:

```json
{"status": 0}
```

Please note that if you want to create a new user, the pre-login hook response must include all the mandatory user fields.

The program hook must finish within 30 seconds; the HTTP hook will use the global configuration for HTTP clients.

If an error happens while executing the hook, then login will be denied.

"Dynamic user creation or modification" and "External Authentication" are mutually exclusive. They are quite similar; the difference is that "External Authentication" returns an already authenticated user, while with "Dynamic user modification" you simply create or update a user and the authentication is then checked inside SFTPGo.
In other words, with "External Authentication" the external program receives the credentials of the user trying to login (for example the cleartext password) and it needs to validate them. With "Dynamic user modification" the pre-login program receives the user stored inside the data provider (including the hashed password, if any) and it can modify it; after the modification, SFTPGo will check the credentials of the user trying to login.

Let's see a very basic example. Our sample program will grant access to the existing user `test_user` only in the time range 10:00-18:00. Other users will not be modified since the program will terminate with no output.

```shell
#!/bin/bash

CURRENT_TIME=`date +%H:%M`
if [[ "$SFTPGO_LOGIND_USER" =~ "\"test_user\"" ]]
then
  if [[ $CURRENT_TIME > "18:00" || $CURRENT_TIME < "10:00" ]]
  then
    echo '{"status":0}'
  else
    echo '{"status":1}'
  fi
fi
```

Please note that this is a demo program and it might not work in all cases. For example, the username should be obtained by parsing the JSON serialized user and not by searching for the username inside the JSON as shown here.
58
docs/external-auth.md
Normal file
@@ -0,0 +1,58 @@
# External Authentication

To enable external authentication, you must set the absolute path of your authentication program or an HTTP URL using the `external_auth_hook` key in your configuration file.

The external program can read the following environment variables to get info about the user trying to authenticate:

- `SFTPGO_AUTHD_USERNAME`
- `SFTPGO_AUTHD_IP`
- `SFTPGO_AUTHD_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`
- `SFTPGO_AUTHD_PASSWORD`, not empty for password authentication
- `SFTPGO_AUTHD_PUBLIC_KEY`, not empty for public key authentication
- `SFTPGO_AUTHD_KEYBOARD_INTERACTIVE`, not empty for keyboard interactive authentication

Previous global environment variables aren't cleared when the script is called. The content of these variables is _not_ quoted. They may contain special characters. They are under the control of a possibly malicious remote user.
The program must write, on its standard output, a valid SFTPGo user serialized as JSON if the authentication succeeds, or a user with an empty username if the authentication fails.

If the hook is an HTTP URL, then it will be invoked as HTTP POST. The request body will contain a JSON serialized struct with the following fields:

- `username`
- `ip`
- `protocol`, possible values are `SSH`, `FTP`, `DAV`
- `password`, not empty for password authentication
- `public_key`, not empty for public key authentication
- `keyboard_interactive`, not empty for keyboard interactive authentication

If authentication succeeds, the HTTP response code must be 200 and the response body must contain a valid SFTPGo user serialized as JSON. If the authentication fails, the HTTP response code must be != 200 or the response body must be empty.
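To exercise an HTTP hook during development, you can simulate the request SFTPGo sends; this sketch assumes your hook listens at `http://127.0.0.1:8000/auth`, which is an arbitrary example URL:

```shell
# Simulate SFTPGo's POST to a custom external auth endpoint (example URL).
curl -s -X POST "http://127.0.0.1:8000/auth" \
  -H "Content-Type: application/json" \
  -d '{"username":"test_user","ip":"192.0.2.7","protocol":"SSH","password":"secret"}'
# A 200 response with a JSON serialized SFTPGo user means success.
```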

If the authentication succeeds, the user will be automatically added/updated inside the defined data provider. Actions defined for users added/updated will not be executed in this case, and an already logged in user with the same username will not be disconnected; you have to handle these things yourself.

The program hook must finish within 30 seconds; the HTTP hook timeout will use the global configuration for HTTP clients.

This method is slower than built-in authentication, but it's very flexible as anyone can easily write their own authentication hooks.
You can also restrict the authentication scope for the hook using the `external_auth_scope` configuration key:

- `0` means all supported authentication scopes. The external hook will be used for password, public key and keyboard interactive authentication
- `1` means passwords only
- `2` means public keys only
- `4` means keyboard interactive only

You can combine the scopes. For example, 3 means password and public key, 5 means password and keyboard interactive, and so on.

Let's see a very basic example. Our sample authentication program will only accept the user `test_user` with any password or public key.

```shell
#!/bin/sh

if test "$SFTPGO_AUTHD_USERNAME" = "test_user"; then
  echo '{"status":1,"username":"test_user","expiration_date":0,"home_dir":"/tmp/test_user","uid":0,"gid":0,"max_sessions":0,"quota_size":0,"quota_files":100000,"permissions":{"/":["*"],"/somedir":["list","download"]},"upload_bandwidth":0,"download_bandwidth":0,"filters":{"allowed_ip":[],"denied_ip":[]},"public_keys":[]}'
else
  echo '{"username":""}'
fi
```

An example authentication program that authenticates against an LDAP server can be found inside the source tree [ldapauth](../examples/ldapauth) directory.

An example server to use as an HTTP authentication hook, authenticating against an LDAP server, can be found inside the source tree [ldapauthserver](../examples/ldapauthserver) directory.

If you have an external authentication hook that could be useful to others too, please let us know and/or please send a pull request.
275
docs/full-configuration.md
Normal file
@@ -0,0 +1,275 @@
# Configuring SFTPGo

## Command line options

The SFTPGo executable can be used this way:

```console
Usage:
  sftpgo [command]

Available Commands:
  gen          A collection of useful generators
  help         Help about any command
  initprovider Initializes and/or updates the configured data provider
  portable     Serve a single directory
  serve        Start the SFTP Server

Flags:
  -h, --help      help for sftpgo
  -v, --version

Use "sftpgo [command] --help" for more information about a command
```

The `serve` command supports the following flags:

- `--config-dir` string. Location of the config dir. This directory is used as the base for files with a relative path, e.g. the private keys for the SFTP server or the SQLite database if you use SQLite as data provider. The configuration file, if not explicitly set, is looked for in this dir. We support reading from JSON, TOML, YAML, HCL, envfile and Java properties config files. The default config file name is `sftpgo` and therefore `sftpgo.json`, `sftpgo.yaml` and so on are searched. The default value is the working directory (".") or the value of the `SFTPGO_CONFIG_DIR` environment variable.
- `--config-file` string. This flag explicitly defines the path, name and extension of the config file. It must be an absolute path or a path relative to the configuration directory. The specified file name must have a supported extension (JSON, YAML, TOML, HCL or Java properties). The default value is empty or the value of the `SFTPGO_CONFIG_FILE` environment variable.
- `--loaddata-from` string. Load users and folders from this file. The file must be specified as an absolute path and it must contain a backup obtained using the `dumpdata` REST API or compatible content. The default value is empty or the value of the `SFTPGO_LOADDATA_FROM` environment variable.
- `--loaddata-clean` boolean. Determines if the loaddata-from file should be removed after a successful load. Default `false` or the value of the `SFTPGO_LOADDATA_CLEAN` environment variable (1 or `true`, 0 or `false`).
- `--loaddata-mode`, integer. Restore mode for data to load. 0 means new users are added, existing users are updated. 1 means new users are added, existing users are not modified. Default 1 or the value of the `SFTPGO_LOADDATA_MODE` environment variable.
- `--loaddata-scan`, integer. Quota scan mode after data load. 0 means no quota scan. 1 means quota scan. 2 means scan quota if the user has quota restrictions. Default 0 or the value of the `SFTPGO_LOADDATA_QUOTA_SCAN` environment variable.
- `--log-compress` boolean. Determines if the rotated log files should be compressed using gzip. Default `false` or the value of the `SFTPGO_LOG_COMPRESS` environment variable (1 or `true`, 0 or `false`). It is unused if `log-file-path` is empty.
- `--log-file-path` string. Location for the log file, default "sftpgo.log" or the value of the `SFTPGO_LOG_FILE_PATH` environment variable. Leave empty to write logs to the standard error.
- `--log-max-age` int. Maximum number of days to retain old log files. Default 28 or the value of the `SFTPGO_LOG_MAX_AGE` environment variable. It is unused if `log-file-path` is empty.
- `--log-max-backups` int. Maximum number of old log files to retain. Default 5 or the value of the `SFTPGO_LOG_MAX_BACKUPS` environment variable. It is unused if `log-file-path` is empty.
- `--log-max-size` int. Maximum size in megabytes of the log file before it gets rotated. Default 10 or the value of the `SFTPGO_LOG_MAX_SIZE` environment variable. It is unused if `log-file-path` is empty.
- `--log-verbose` boolean. Enable verbose logs. Default `true` or the value of the `SFTPGO_LOG_VERBOSE` environment variable (1 or `true`, 0 or `false`).
- `--profiler` boolean. Enable the built-in profiler. The profiler will be accessible via HTTP/HTTPS using the base URL "/debug/pprof/". Default `false` or the value of the `SFTPGO_PROFILER` environment variable (1 or `true`, 0 or `false`).

The log file can be rotated on demand by sending a `SIGUSR1` signal on Unix based systems or by using the command `sftpgo service rotatelogs` on Windows.
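For example, on Linux you might trigger a rotation like this (a sketch; `pidof` availability depends on your distribution):

```shell
# Ask a running SFTPGo instance to rotate its log file (Unix only).
kill -USR1 "$(pidof sftpgo)"
```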

If you don't configure any private host key, the daemon will use `id_rsa`, `id_ecdsa` and `id_ed25519` in the configuration directory. If these files don't exist, the daemon will attempt to autogenerate them. The server supports any private key format supported by [`crypto/ssh`](https://github.com/golang/crypto/blob/master/ssh/keys.go#L33).

The `gen` command allows you to generate completion scripts for your shell and man pages.
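For example, when the data provider is configured for manual updates (see `update_mode` in the data provider section below), you might initialize or migrate it before starting the service. This is a sketch; it assumes `/etc/sftpgo` is your configuration directory and that `initprovider` accepts the same `--config-dir` flag as `serve`:

```shell
# Initialize/update the configured data provider, then start the service.
sftpgo initprovider --config-dir /etc/sftpgo
sftpgo serve --config-dir /etc/sftpgo
```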

## Configuration file

The configuration file contains the following sections:

- **"common"**, configuration parameters shared among all the supported protocols
  - `idle_timeout`, integer. Time in minutes after which an idle client will be disconnected. 0 means disabled. Default: 15
  - `upload_mode` integer. 0 means standard: the files are uploaded directly to the requested path. 1 means atomic: files are uploaded to a temporary path and renamed to the requested path when the client ends the upload. Atomic mode avoids problems such as a web server that serves partial files when the files are being uploaded. In atomic mode, if there is an upload error, the temporary file is deleted and so the requested upload path will not contain a partial file. 2 means atomic with resume support: same as atomic but if there is an upload error, the temporary file is renamed to the requested path and not deleted. This way, a client can reconnect and resume the upload.
  - `actions`, struct. It contains the command to execute and/or the HTTP URL to notify and the trigger conditions. See [Custom Actions](./custom-actions.md) for more details
    - `execute_on`, list of strings. Valid values are `download`, `upload`, `pre-delete`, `delete`, `rename`, `ssh_cmd`. Leave empty to disable actions.
    - `hook`, string. Absolute path to the command to execute or HTTP URL to notify.
  - `setstat_mode`, integer. 0 means "normal mode": requests for changing permissions, owner/group and access/modification times are executed. 1 means "ignore mode": requests for changing permissions, owner/group and access/modification times are silently ignored. 2 means "ignore mode for cloud based filesystems": requests for changing permissions, owner/group and access/modification times are silently ignored for cloud filesystems and executed for the local filesystem.
  - `proxy_protocol`, integer. Support for [HAProxy PROXY protocol](https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt). If you are running SFTPGo behind a proxy server such as HAProxy, AWS ELB or NGINX, you can enable the proxy protocol. It provides a convenient way to safely transport connection information such as a client's address across multiple layers of NAT or TCP proxies to get the real client IP address instead of the proxy IP. Both protocol versions 1 and 2 are supported. If the proxy protocol is enabled in SFTPGo then you have to enable the protocol in your proxy configuration too. For example, for HAProxy, add `send-proxy` or `send-proxy-v2` to each server configuration line. The following modes are supported:
    - 0, disabled
    - 1, enabled. Proxy header will be used and requests without proxy header will be accepted
    - 2, required. Proxy header will be used and requests without proxy header will be rejected
  - `proxy_allowed`, list of IP addresses and IP ranges allowed to send the proxy header:
    - If `proxy_protocol` is set to 1 and we receive a proxy header from an IP that is not in the list then the connection will be accepted and the header will be ignored
    - If `proxy_protocol` is set to 2 and we receive a proxy header from an IP that is not in the list then the connection will be rejected
  - `post_connect_hook`, string. Absolute path to the command to execute or HTTP URL to notify. See [Post connect hook](./post-connect-hook.md) for more details. Leave empty to disable
  - `max_total_connections`, integer. Maximum number of concurrent client connections. 0 means unlimited
  - `defender`, struct containing the defender configuration. See [Defender](./defender.md) for more details.
    - `enabled`, boolean. Default `false`.
    - `ban_time`, integer. Ban time in minutes.
    - `ban_time_increment`, integer. Ban time increment, as a percentage, if a banned host tries to connect again.
    - `threshold`, integer. Threshold value for banning a client.
    - `score_invalid`, integer. Score for invalid login attempts, e.g. non-existent user accounts or clients disconnected for inactivity without authentication attempts.
    - `score_valid`, integer. Score for valid login attempts, e.g. user accounts that exist.
    - `observation_time`, integer. Defines the time window, in minutes, for tracking client errors. A host is banned if it has exceeded the defined threshold during the last `observation_time` minutes.
    - `entries_soft_limit`, integer.
    - `entries_hard_limit`, integer. The number of banned IPs and host scores kept in memory will vary between the soft and hard limit.
    - `safelist_file`, string. Path to a file containing a list of IP addresses and/or networks to never ban.
    - `blocklist_file`, string. Path to a file containing a list of IP addresses and/or networks to always ban. The lists can be reloaded on demand by sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows. A host that is already banned will not be automatically unbanned if you put it inside the safe list; you have to unban it using the REST API.
- **"sftpd"**, the configuration for the SFTP server
  - `bindings`, list of structs. Each struct has the following fields:
    - `port`, integer. The port used for serving SFTP requests. 0 means disabled. Default: 2022
    - `address`, string. Leave blank to listen on all available network interfaces. Default: ""
    - `apply_proxy_config`, boolean. If enabled the common proxy configuration, if any, will be applied. Default `true`
  - `bind_port`, integer. Deprecated, please use `bindings`
  - `bind_address`, string. Deprecated, please use `bindings`
  - `idle_timeout`, integer. Deprecated, please use the same key in the `common` section.
  - `max_auth_tries` integer. Maximum number of authentication attempts permitted per connection. If set to a negative number, the number of attempts is unlimited. If set to zero, the number of attempts is limited to 6.
  - `banner`, string. Identification string used by the server. Leave empty to use the default banner. Default `SFTPGo_<version>`, for example `SSH-2.0-SFTPGo_0.9.5`
  - `upload_mode` integer. Deprecated, please use the same key in the `common` section.
  - `actions`, struct. Deprecated, please use the same key in the `common` section.
  - `keys`, struct array. Deprecated, please use `host_keys`.
    - `private_key`, path to the private key file. It can be a path relative to the config dir or an absolute one.
  - `host_keys`, list of strings. It contains the daemon's private host keys. Each host key can be defined as a path relative to the configuration directory or an absolute one. If empty, the daemon will search or try to generate `id_rsa`, `id_ecdsa` and `id_ed25519` keys inside the configuration directory. If you configure absolute paths to files named `id_rsa`, `id_ecdsa` and/or `id_ed25519` then SFTPGo will try to generate these keys using the default settings.
  - `kex_algorithms`, list of strings. Available KEX (Key Exchange) algorithms in preference order. Leave empty to use default values. The supported values can be found here: [`crypto/ssh`](https://github.com/golang/crypto/blob/master/ssh/common.go#L46 "Supported kex algos")
  - `ciphers`, list of strings. Allowed ciphers. Leave empty to use default values. The supported values can be found here: [crypto/ssh](https://github.com/golang/crypto/blob/master/ssh/common.go#L28 "Supported ciphers")
  - `macs`, list of strings. Available MAC (message authentication code) algorithms in preference order. Leave empty to use default values. The supported values can be found here: [crypto/ssh](https://github.com/golang/crypto/blob/master/ssh/common.go#L84 "Supported MACs")
  - `trusted_user_ca_keys`, list of public key paths of certificate authorities that are trusted to sign user certificates for authentication. The paths can be absolute or relative to the configuration directory.
  - `login_banner_file`, path to the login banner file. The contents of the specified file, if any, are sent to the remote user before authentication is allowed. It can be a path relative to the config dir or an absolute one. Leave empty to disable the login banner.
  - `setstat_mode`, integer. Deprecated, please use the same key in the `common` section.
  - `enabled_ssh_commands`, list of enabled SSH commands. `*` enables all supported commands. More information can be found [here](./ssh-commands.md).
  - `keyboard_interactive_auth_hook`, string. Absolute path to an external program or an HTTP URL to invoke for keyboard interactive authentication. See [Keyboard Interactive Authentication](./keyboard-interactive.md) for more details.
  - `password_authentication`, boolean. Set to false to disable password authentication. This setting will disable the multi-step authentication method using public key + password too. It is useful for public key only configurations if you need to manage old clients that will not attempt to authenticate with public keys if the password login method is advertised. Default: true.
  - `proxy_protocol`, integer. Deprecated, please use the same key in the `common` section.
  - `proxy_allowed`, list of strings. Deprecated, please use the same key in the `common` section.
- **"ftpd"**, the configuration for the FTP server
  - `bindings`, list of structs. Each struct has the following fields:
    - `port`, integer. The port used for serving FTP requests. 0 means disabled. Default: 0.
    - `address`, string. Leave blank to listen on all available network interfaces. Default: "".
    - `apply_proxy_config`, boolean. If enabled the common proxy configuration, if any, will be applied. Default `true`.
    - `tls_mode`, integer. 0 means accept both cleartext and encrypted sessions. 1 means TLS is required for both control and data connection. 2 means implicit TLS. Do not enable this blindly; please check that a proper TLS config is in place if you set `tls_mode` to a value different from 0.
    - `force_passive_ip`, ip address. External IP address to expose for passive connections. Leave empty to autodetect. Default: "".
    - `client_auth_type`, integer. Set to `1` to require client certificate authentication in addition to FTP authentication. You need to define at least a certificate authority for this to work. Default: 0.
    - `tls_cipher_suites`, list of strings. List of supported cipher suites for TLS version 1.2. If empty, a default list of secure cipher suites is used, with a preference order based on hardware performance. Note that TLS 1.3 ciphersuites are not configurable. The supported ciphersuite names are defined [here](https://github.com/golang/go/blob/master/src/crypto/tls/cipher_suites.go#L52). Any invalid name will be silently ignored. The order matters: the ciphers listed first will be the preferred ones. Default: empty.
  - `bind_port`, integer. Deprecated, please use `bindings`
  - `bind_address`, string. Deprecated, please use `bindings`
  - `banner`, string. Greeting banner displayed when a connection first comes in. Leave empty to use the default banner. Default `SFTPGo <version> ready`, for example `SFTPGo 1.0.0-dev ready`.
  - `banner_file`, path to the banner file. The contents of the specified file, if any, are displayed when someone connects to the server. It can be a path relative to the config dir or an absolute one. If set, it overrides the banner string provided by the `banner` option. Leave empty to disable.
  - `active_transfers_port_non_20`, boolean. Do not impose port 20 for active data transfers. Enabling this option allows running SFTPGo with fewer privileges. Default: false.
  - `force_passive_ip`, ip address. Deprecated, please use `bindings`
  - `passive_port_range`, struct containing the keys `start` and `end`. Port range for data connections. Random if not specified. Default range is 50000-50100.
  - `disable_active_mode`, boolean. Set to `true` to disable active FTP, default `false`.
  - `enable_site`, boolean. Set to true to enable the FTP SITE command. We support `chmod` and `symlink` if SITE support is enabled. Default `false`
  - `hash_support`, integer. Set to `1` to enable FTP commands that allow clients to calculate the hash value of files. These FTP commands will be enabled: `HASH`, `XCRC`, `MD5/XMD5`, `XSHA/XSHA1`, `XSHA256`, `XSHA512`. Please keep in mind that to calculate the hash we need to read the whole file: for remote backends this means downloading the file, for the encrypted backend this means decrypting the file. Default `0`.
  - `combine_support`, integer. Set to 1 to enable support for the non-standard `COMB` FTP command. Combine is only supported for the local filesystem; for cloud backends it has no advantage as it will download the partial files and will upload the combined one. Cloud backends natively support multipart uploads. Default `0`.
  - `certificate_file`, string. Certificate for FTPS. This can be an absolute path or a path relative to the config dir.
  - `certificate_key_file`, string. Private key matching the above certificate. This can be an absolute path or a path relative to the config dir. A certificate and the private key are required to enable explicit and implicit TLS. Certificate and key files can be reloaded on demand by sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows.
  - `ca_certificates`, list of strings. Set of root certificate authorities to be used to verify client certificates.
  - `ca_revocation_lists`, list of strings. Set of revocation lists, one for each root CA, to be used to check if a client certificate has been revoked. The revocation lists can be reloaded on demand by sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows.
  - `tls_mode`, integer. Deprecated, please use `bindings`
- **"webdavd"**, the configuration for the WebDAV server, more info [here](./webdav.md)
|
||||
- `bindings`, list of structs. Each struct has the following fields:
|
||||
- `port`, integer. The port used for serving WebDAV requests. 0 means disabled. Default: 0.
|
||||
- `address`, string. Leave blank to listen on all available network interfaces. Default: "".
|
||||
- `enable_https`, boolean. Set to `true` and provide both a certificate and a key file to enable HTTPS connection for this binding. Default `false`.
|
||||
- `client_auth_type`, integer. Set to `1` to require client certificate authentication in addition to basic authentication. You need to define at least a certificate authority for this to work. Default: 0.
|
||||
- `tls_cipher_suites`, list of strings. List of supported cipher suites for TLS version 1.2. If empty, a default list of secure cipher suites is used, with a preference order based on hardware performance. Note that TLS 1.3 ciphersuites are not configurable. The supported ciphersuites names are defined [here](https://github.com/golang/go/blob/master/src/crypto/tls/cipher_suites.go#L52). Any invalid name will be silently ignored. The order matters, the ciphers listed first will be the preferred ones. Default: empty.
|
||||
- `bind_port`, integer. Deprecated, please use `bindings`.
|
||||
- `bind_address`, string. Deprecated, please use `bindings`.
|
||||
- `certificate_file`, string. Certificate for WebDAV over HTTPS. This can be an absolute path or a path relative to the config dir.
|
||||
- `certificate_key_file`, string. Private key matching the above certificate. This can be an absolute path or a path relative to the config dir. A certificate and a private key are required to enable HTTPS connections. Certificate and key files can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows.
|
||||
- `ca_certificates`, list of strings. Set of root certificate authorities to be used to verify client certificates.
|
||||
- `ca_revocation_lists`, list of strings. Set a revocation lists, one for each root CA, to be used to check if a client certificate has been revoked. The revocation lists can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows.
|
||||
- `cors` struct containing CORS configuration. SFTPGo uses [Go CORS handler](https://github.com/rs/cors), please refer to upstream documentation for fields meaning and their default values.
|
||||
- `enabled`, boolean, set to true to enable CORS.
|
||||
- `allowed_origins`, list of strings.
|
||||
- `allowed_methods`, list of strings.
|
||||
- `allowed_headers`, list of strings.
|
||||
- `exposed_headers`, list of strings.
|
||||
- `allow_credentials` boolean.
|
||||
- `max_age`, integer.
|
||||
- `cache` struct containing cache configuration for the authenticated users.
|
||||
- `enabled`, boolean, set to true to enable user caching. Default: true.
|
||||
- `expiration_time`, integer. Expiration time, in minutes, for the cached users. 0 means unlimited. Default: 0.
|
||||
- `max_size`, integer. Maximum number of users to cache. 0 means unlimited. Default: 50.
|
||||
- **"data_provider"**, the configuration for the data provider
|
||||
- `driver`, string. Supported drivers are `sqlite`, `mysql`, `postgresql`, `bolt`, `memory`
|
||||
- `name`, string. Database name. For driver `sqlite` this can be the database name relative to the config dir or the absolute path to the SQLite database. For driver `memory` this is the (optional) path relative to the config dir or the absolute path to the provider dump, obtained using the `dumpdata` REST API, to load. This dump will be loaded at startup and can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows. The `memory` provider will not modify the provided file so quota usage and last login will not be persisted. If you plan to use a SQLite database over a `cifs` network share (this is not recommended in general) you must use the `nobrl` mount option otherwise you will get the `database is locked` error. Some users reported that the `bolt` provider works fine over `cifs` shares.
|
||||
- `host`, string. Database host. Leave empty for drivers `sqlite`, `bolt` and `memory`
|
||||
- `port`, integer. Database port. Leave empty for drivers `sqlite`, `bolt` and `memory`
|
||||
- `username`, string. Database user. Leave empty for drivers `sqlite`, `bolt` and `memory`
|
||||
- `password`, string. Database password. Leave empty for drivers `sqlite`, `bolt` and `memory`
|
||||
- `sslmode`, integer. Used for drivers `mysql` and `postgresql`. 0 disable SSL/TLS connections, 1 require ssl, 2 set ssl mode to `verify-ca` for driver `postgresql` and `skip-verify` for driver `mysql`, 3 set ssl mode to `verify-full` for driver `postgresql` and `preferred` for driver `mysql`
|
||||
- `connection_string`, string. Provide a custom database connection string. If not empty, this connection string will be used instead of building one using the previous parameters. Leave empty for drivers `bolt` and `memory`
|
||||
- `sql_tables_prefix`, string. Prefix for SQL tables
|
||||
- `track_quota`, integer. Set the preferred mode to track users quota between the following choices:
|
||||
- 0, disable quota tracking. REST API to scan users home directories/virtual folders and update quota will do nothing
|
||||
- 1, quota is updated each time a user uploads or deletes a file, even if the user has no quota restrictions
|
||||
- 2, quota is updated each time a user uploads or deletes a file, but only for users with quota restrictions and for virtual folders. With this configuration, the `quota scan` and `folder_quota_scan` REST API can still be used to periodically update space usage for users without quota restrictions and for folders
|
||||
- `pool_size`, integer. Sets the maximum number of open connections for `mysql` and `postgresql` driver. Default 0 (unlimited)
|
||||
- `users_base_dir`, string. Users default base directory. If no home dir is defined while adding a new user, and this value is a valid absolute path, then the user home dir will be automatically defined as the path obtained joining the base dir and the username
|
||||
- `actions`, struct. It contains the command to execute and/or the HTTP URL to notify and the trigger conditions. See [Custom Actions](./custom-actions.md) for more details
|
||||
- `execute_on`, list of strings. Valid values are `add`, `update`, `delete`. `update` action will not be fired for internal updates such as the last login or the user quota fields.
|
||||
- `hook`, string. Absolute path to the command to execute or HTTP URL to notify.
|
||||
- `external_auth_program`, string. Deprecated, please use `external_auth_hook`.
|
||||
- `external_auth_hook`, string. Absolute path to an external program or an HTTP URL to invoke for users authentication. See [External Authentication](./external-auth.md) for more details. Leave empty to disable.
|
||||
- `external_auth_scope`, integer. 0 means all supported authentication scopes (passwords, public keys and keyboard interactive). 1 means passwords only. 2 means public keys only. 4 means key keyboard interactive only. The flags can be combined, for example 6 means public keys and keyboard interactive
|
||||
- `credentials_path`, string. It defines the directory for storing user provided credential files such as Google Cloud Storage credentials. This can be an absolute path or a path relative to the config dir
|
||||
- `prefer_database_credentials`, boolean. When true, users' Google Cloud Storage credentials will be written to the data provider instead of disk, though pre-existing credentials on disk will be used as a fallback. When false, they will be written to the directory specified by `credentials_path`.
|
||||
- `pre_login_program`, string. Deprecated, please use `pre_login_hook`.
|
||||
- `pre_login_hook`, string. Absolute path to an external program or an HTTP URL to invoke to modify user details just before the login. See [Dynamic user modification](./dynamic-user-mod.md) for more details. Leave empty to disable.
|
||||
- `post_login_hook`, string. Absolute path to an external program or an HTTP URL to invoke to notify a successful or failed login. See [Post-login hook](./post-login-hook.md) for more details. Leave empty to disable.
|
||||
- `post_login_scope`, defines the scope for the post-login hook. 0 means notify both failed and successful logins. 1 means notify failed logins. 2 means notify successful logins.
|
||||
- `check_password_hook`, string. Absolute path to an external program or an HTTP URL to invoke to check the user provided password. See [Check password hook](./check-password-hook.md) for more details. Leave empty to disable.
|
||||
- `check_password_scope`, defines the scope for the check password hook. 0 means all protocols, 1 means SSH, 2 means FTP, 4 means WebDAV. You can combine the scopes, for example 6 means FTP and WebDAV.
|
||||
- `password_hashing`, struct. It contains the configuration parameters to be used to generate the password hash. SFTPGo can verify passwords in several formats and uses the `argon2id` algorithm to hash passwords in plain-text before storing them inside the data provider. These options allow you to customize how the hash is generated.
|
||||
- `argon2_options` struct containing the options for argon2id hashing algorithm. The `memory` and `iterations` parameters control the computational cost of hashing the password. The higher these figures are, the greater the cost of generating the hash and the longer the runtime. It also follows that the greater the cost will be for any attacker trying to guess the password. If the code is running on a machine with multiple cores, then you can decrease the runtime without reducing the cost by increasing the `parallelism` parameter. This controls the number of threads that the work is spread across.
|
||||
- `memory`, unsigned integer. The amount of memory used by the algorithm (in kibibytes). Default: 65536.
|
||||
- `iterations`, unsigned integer. The number of iterations over the memory. Default: 1.
|
||||
- `parallelism`. unsigned 8 bit integer. The number of threads (or lanes) used by the algorithm. Default: 2.
|
||||
- `update_mode`, integer. Defines how the database will be initialized/updated. 0 means automatically. 1 means manually using the initprovider sub-command.
  - `skip_natural_keys_validation`, boolean. If `true` you can use any UTF-8 character for natural keys such as username, admin name and folder name. These keys are used in URIs for the REST API and the Web admin. If `false` only unreserved URI characters are allowed: ALPHA / DIGIT / "-" / "." / "_" / "~". Default: `false`.
- **"httpd"**, the configuration for the HTTP server used to serve the REST API and to expose the built-in web interface
  - `bindings`, list of structs. Each struct has the following fields:
    - `port`, integer. The port used for serving HTTP requests. Default: 8080.
    - `address`, string. Leave blank to listen on all available network interfaces. On \*NIX you can specify an absolute path to listen on a Unix-domain socket. Default: "127.0.0.1".
    - `enable_web_admin`, boolean. Set to `false` to disable the built-in web admin for this binding. You also need to define `templates_path` and `static_files_path` to enable the built-in web admin interface. Default `true`.
    - `enable_https`, boolean. Set to `true` and provide both a certificate and a key file to enable HTTPS connections for this binding. Default `false`.
    - `client_auth_type`, integer. Set to `1` to require client certificate authentication in addition to JWT/Web authentication. You need to define at least a certificate authority for this to work. Default: 0.
    - `tls_cipher_suites`, list of strings. List of supported cipher suites for TLS version 1.2. If empty, a default list of secure cipher suites is used, with a preference order based on hardware performance. Note that TLS 1.3 ciphersuites are not configurable. The supported ciphersuite names are defined [here](https://github.com/golang/go/blob/master/src/crypto/tls/cipher_suites.go#L52). Any invalid name will be silently ignored. The order matters: the ciphers listed first will be the preferred ones. Default: empty.
  - `bind_port`, integer. Deprecated, please use `bindings`.
  - `bind_address`, string. Deprecated, please use `bindings`. Leave blank to listen on all available network interfaces. On \*NIX you can specify an absolute path to listen on a Unix-domain socket. Default: "127.0.0.1"
  - `templates_path`, string. Path to the HTML web templates. This can be an absolute path or a path relative to the config dir
  - `static_files_path`, string. Path to the static files for the web interface. This can be an absolute path or a path relative to the config dir. If both `templates_path` and `static_files_path` are empty the built-in web interface will be disabled
  - `backups_path`, string. Path to the backup directory. This can be an absolute path or a path relative to the config dir. We don't allow backups in arbitrary paths for security reasons
  - `certificate_file`, string. Certificate for HTTPS. This can be an absolute path or a path relative to the config dir.
  - `certificate_key_file`, string. Private key matching the above certificate. This can be an absolute path or a path relative to the config dir. If both the certificate and the private key are provided, the server will expect HTTPS connections. Certificate and key files can be reloaded on demand by sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows.
  - `ca_certificates`, list of strings. Set of root certificate authorities to be used to verify client certificates.
  - `ca_revocation_lists`, list of strings. Set of revocation lists, one for each root CA, to be used to check if a client certificate has been revoked. The revocation lists can be reloaded on demand by sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows.
- **"telemetry"**, the configuration for the telemetry server, more details [below](#telemetry-server)
  - `bind_port`, integer. The port used for serving HTTP requests. Set to 0 to disable the HTTP server. Default: 10000
  - `bind_address`, string. Leave blank to listen on all available network interfaces. On \*NIX you can specify an absolute path to listen on a Unix-domain socket. Default: "127.0.0.1"
  - `enable_profiler`, boolean. Enable the built-in profiler. Default `false`
  - `auth_user_file`, string. Path to a file used to store usernames and passwords for basic authentication. This can be an absolute path or a path relative to the config dir. We support HTTP basic authentication, and the file format must conform to the one generated using the Apache `htpasswd` tool. The supported password formats are bcrypt (`$2y$` prefix) and md5 crypt (`$apr1$` prefix). If empty, HTTP authentication is disabled. Authentication will always be disabled for the `/healthz` endpoint. A small `htpasswd` sketch follows after this list.
|
||||
- `certificate_file`, string. Certificate for HTTPS. This can be an absolute path or a path relative to the config dir.
|
||||
- `certificate_key_file`, string. Private key matching the above certificate. This can be an absolute path or a path relative to the config dir. If both the certificate and the private key are provided, the server will expect HTTPS connections. Certificate and key files can be reloaded on demand sending a `SIGHUP` signal on Unix based systems and a `paramchange` request to the running service on Windows.
|
||||
- `tls_cipher_suites`, list of strings. List of supported cipher suites for TLS version 1.2. If empty, a default list of secure cipher suites is used, with a preference order based on hardware performance. Note that TLS 1.3 ciphersuites are not configurable. The supported ciphersuites names are defined [here](https://github.com/golang/go/blob/master/src/crypto/tls/cipher_suites.go#L52). Any invalid name will be silently ignored. The order matters, the ciphers listed first will be the preferred ones. Default: empty.
|
||||
- **"http"**, the configuration for HTTP clients. HTTP clients are used for executing hooks. Some hooks use a retryable HTTP client, for these hooks you can configure the time between retries and the number of retries. Please check the hook specific documentation to understand which hooks use a retryable HTTP client.
|
||||
- `timeout`, integer. Timeout specifies a time limit, in seconds, for requests. For requests with retries this is the timeout for a single request
|
||||
- `retry_wait_min`, integer. Defines the minimum waiting time between attempts in seconds.
|
||||
- `retry_wait_max`, integer. Defines the maximum waiting time between attempts in seconds. The backoff algorithm will perform exponential backoff based on the attempt number and limited by the provided minimum and maximum durations.
|
||||
- `retry_max`, integer. Defines the maximum number of retries if the first request fails.
|
||||
- `ca_certificates`, list of strings. List of paths to extra CA certificates to trust. The paths can be absolute or relative to the config dir. Adding trusted CA certificates is a convenient way to use self-signed certificates without defeating the purpose of using TLS.
|
||||
- `certificates`, list of certificate for mutual TLS. Each certificate is a struct with the following fields:
|
||||
- `cert`, string. Path to the certificate file. The path can be absolute or relative to the config dir.
|
||||
- `key`, string. Path to the key file. The path can be absolute or relative to the config dir.
|
||||
- `skip_tls_verify`, boolean. if enabled the HTTP client accepts any TLS certificate presented by the server and any host name in that certificate. In this mode, TLS is susceptible to man-in-the-middle attacks. This should be used only for testing.
|
||||
- **kms**, configuration for the Key Management Service, more details can be found [here](./kms.md)
|
||||
- `secrets`
|
||||
- `url`
|
||||
- `master_key_path`
|
||||
|
||||
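For reference, here is a minimal sketch of the `password_hashing` section in JSON, using only the default values documented above (surrounding sections omitted):

```json
"password_hashing": {
  "argon2_options": {
    "memory": 65536,
    "iterations": 1,
    "parallelism": 2
  }
}
```
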
A full example showing the default config (in JSON format) can be found [here](../sftpgo.json).

If you want to use a private host key that uses an algorithm/setting different from the auto-generated RSA/ECDSA keys, or more than two private keys, you can generate your own keys and replace the empty `keys` array with something like this:

```json
"host_keys": [
  "id_rsa",
  "id_ecdsa",
  "id_ed25519"
]
```

where `id_rsa`, `id_ecdsa` and `id_ed25519`, in this example, are files containing your generated keys. You can use absolute paths or paths relative to the configuration directory specified via the `--config-dir` serve flag. By default the configuration directory is the working directory.

If you want the default host keys to be generated in a directory different from the config dir, please specify absolute paths to files named `id_rsa`, `id_ecdsa` or `id_ed25519` like this:

```json
"host_keys": [
  "/etc/sftpgo/keys/id_rsa",
  "/etc/sftpgo/keys/id_ecdsa",
  "/etc/sftpgo/keys/id_ed25519"
]
```

then SFTPGo will try to create `id_rsa`, `id_ecdsa` and `id_ed25519`, if they are missing, inside the directory `/etc/sftpgo/keys`.

The configuration can be read from JSON, TOML, YAML, HCL, envfile and Java properties config files. If your `config-file` flag is set to `sftpgo` (default value), you need to create a configuration file called `sftpgo.json` or `sftpgo.yaml` and so on inside `config-dir`.

## Environment variables

You can also override all the available configuration options using environment variables. SFTPGo will check for environment variables with a name matching the key uppercased and prefixed with `SFTPGO_`. You need to use `__` to traverse a struct.

Let's see some examples:

- To set the `port` for the first sftpd binding, you need to define the env var `SFTPGO_SFTPD__BINDINGS__0__PORT`
- To set the `execute_on` actions, you need to define the env var `SFTPGO_COMMON__ACTIONS__EXECUTE_ON`. For example `SFTPGO_COMMON__ACTIONS__EXECUTE_ON=upload,download`

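For example, on a *NIX shell you could override these settings before starting the service (the port value here is illustrative):

```shell
export SFTPGO_SFTPD__BINDINGS__0__PORT=2222
export SFTPGO_COMMON__ACTIONS__EXECUTE_ON=upload,download
sftpgo serve
```
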
## Telemetry Server

The telemetry server exposes the following endpoints:

- `/healthz`, health information (for health checks)
- `/metrics`, Prometheus metrics
- `/debug/pprof`, if enabled via the `enable_profiler` configuration key, for profiling, more details [here](./profiling.md)

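For example, assuming the default telemetry binding (`127.0.0.1:10000`) documented above, you can query these endpoints with `curl`:

```shell
curl http://127.0.0.1:10000/healthz
curl http://127.0.0.1:10000/metrics
```
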

**docs/google-cloud-storage.md** (new file)

# Google Cloud Storage backend

To connect SFTPGo to Google Cloud Storage you can use the Application Default Credentials (ADC) strategy to try to find your application's credentials automatically, or you can explicitly provide a JSON credentials file that you can obtain from the Google Cloud Console. Take a look [here](https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application) for details.

Specifying a different `key_prefix`, you can assign different "folders" of the same bucket to different users. This is similar to a chroot directory for a local filesystem. Each SFTP/SCP user can only access the assigned folder and its contents. The folder identified by `key_prefix` does not need to be pre-created.

You can optionally specify a [storage class](https://cloud.google.com/storage/docs/storage-classes) too. Leave it blank to use the default storage class.

The configured bucket must exist.

This backend is very similar to the [S3](./s3.md) backend, and it has the same limitations.

**docs/howto/README.md** (new file)

# Tutorials

Here we collect step-by-step tutorials. SFTPGo users are encouraged to contribute!

- [SFTPGo with PostgreSQL data provider and S3 backend](./postgresql-s3.md)

**docs/howto/postgresql-s3.md** (new file)

# SFTPGo with PostgreSQL data provider and S3 backend

This tutorial shows the installation of SFTPGo on Ubuntu 20.04 (Focal Fossa) with the PostgreSQL data provider and an S3 backend. SFTPGo will run as an unprivileged (non-root) user. We assume that you want to serve a single S3 bucket and assign different "virtual folders" of this bucket to different SFTPGo virtual users.

## Preliminary Note

Before proceeding further you need to have a basic minimal installation of Ubuntu 20.04.

## Install PostgreSQL

Before installing any packages on the Ubuntu system, update and upgrade all packages using the `apt` commands below.

```shell
sudo apt update
sudo apt upgrade
```

Install PostgreSQL with this `apt` command.

```shell
sudo apt -y install postgresql
```

Once the installation is completed, start the PostgreSQL service and add it to the system boot.

```shell
sudo systemctl start postgresql
sudo systemctl enable postgresql
```

Next, check the PostgreSQL service using the following command.

```shell
systemctl status postgresql
```

## Configure PostgreSQL

PostgreSQL uses roles for user authentication and authorization, much like Unix-style permissions. By default, PostgreSQL creates a new user called `postgres` for basic authentication.

In this step, we will create a new PostgreSQL user for SFTPGo.

Log in to the PostgreSQL shell using the command below.

```shell
sudo -i -u postgres psql
```

Next, create a new role `sftpgo` with the password `sftpgo_pg_pwd` using the following query.

```sql
create user "sftpgo" with encrypted password 'sftpgo_pg_pwd';
```

Next, create a new database `sftpgo.db` for the SFTPGo service using the following queries.

```sql
create database "sftpgo.db";
grant all privileges on database "sftpgo.db" to "sftpgo";
```

Exit from the PostgreSQL shell by typing `\q`.

## Install SFTPGo

To install SFTPGo you can use the PPA [here](https://launchpad.net/~sftpgo/+archive/ubuntu/sftpgo).

Start by adding the PPA.

```shell
sudo add-apt-repository ppa:sftpgo/sftpgo
sudo apt-get update
```

Next install SFTPGo.

```shell
sudo apt install sftpgo
```

After installation SFTPGo should already be running with the default configuration and configured to start automatically at boot; check its status using the following command.

```shell
systemctl status sftpgo
```

## Configure AWS credentials

We assume that you want to serve a single S3 bucket and assign different "virtual folders" of this bucket to different SFTPGo virtual users. In this case it is very convenient to configure a credentials file, so SFTPGo will use it automatically and you don't need to specify the same AWS credentials for each user.

You can manually create the `/var/lib/sftpgo/.aws/credentials` file and write your AWS credentials like this.

```ini
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```

Alternatively, you can install the `AWS CLI` and manage the credentials using this tool.

```shell
sudo apt install awscli
```

and now set your credentials, region, and output format with the following command.

```shell
aws configure
```

Confirm that you can list your bucket contents with the following command.

```shell
aws s3 ls s3://mybucket
```

The AWS CLI will create the credentials file in `~/.aws/credentials`. The SFTPGo service runs as the `sftpgo` system user whose home directory is `/var/lib/sftpgo`, so you need to copy the credentials file to the sftpgo home directory and assign it the proper permissions.

```shell
sudo mkdir /var/lib/sftpgo/.aws
sudo cp ~/.aws/credentials /var/lib/sftpgo/.aws/
sudo chown -R sftpgo:sftpgo /var/lib/sftpgo/.aws
```

## Configure SFTPGo

Now open the SFTPGo configuration.

```shell
sudo vi /etc/sftpgo/sftpgo.json
```

Search for the `data_provider` section and change it as follows.

```json
"data_provider": {
  "driver": "postgresql",
  "name": "sftpgo.db",
  "host": "127.0.0.1",
  "port": 5432,
  "username": "sftpgo",
  "password": "sftpgo_pg_pwd",
  ...
}
```

This way we set the PostgreSQL connection parameters.

If you want to connect to PostgreSQL over a Unix domain socket, you have to set the value `/var/run/postgresql` for the `host` configuration key instead of `127.0.0.1`.

You can further customize your configuration by adding custom actions and other hooks. A full explanation of all configuration parameters can be found [here](../full-configuration.md).

Next, initialize the data provider with the following command.

```console
$ sudo su - sftpgo -s /bin/bash -c 'sftpgo initprovider -c /etc/sftpgo'
2020-10-09T21:07:50.000 INF Initializing provider: "postgresql" config file: "/etc/sftpgo/sftpgo.json"
2020-10-09T21:07:50.000 INF updating database version: 1 -> 2
2020-10-09T21:07:50.000 INF updating database version: 2 -> 3
2020-10-09T21:07:50.000 INF updating database version: 3 -> 4
2020-10-09T21:07:50.000 INF Data provider successfully initialized/updated
```

The default sftpgo systemd service starts after the network target. In this setup it is more appropriate to start it after the PostgreSQL service, so edit the unit using the following command.

```shell
sudo systemctl edit sftpgo.service
```

And override the unit definition with the following snippet.

```ini
[Unit]
After=postgresql.service
```

Confirm that `sftpgo.service` will start after `postgresql.service` with the next command.

```console
$ systemctl show sftpgo.service | grep After=
After=postgresql.service systemd-journald.socket system.slice -.mount systemd-tmpfiles-setup.service network.target sysinit.target basic.target
```

Next restart the sftpgo service to use the new configuration and check that it is running.

```shell
sudo systemctl restart sftpgo
systemctl status sftpgo
```

## Add virtual users

The easiest way to add virtual users is to use the built-in Web interface.

You can expose the Web Admin interface over the network by replacing `"bind_address": "127.0.0.1"` in the `httpd` configuration section with `"bind_address": ""`, then apply the change by restarting the SFTPGo service with the following command.

```shell
sudo systemctl restart sftpgo
```

Now open the Web Admin URL.

[http://127.0.0.1:8080/web](http://127.0.0.1:8080/web)

Click `Add` and fill in the user details; the minimum required parameters are:

- `Username`
- `Password` or `Public keys`
- `Permissions`
- `Home Dir` can be empty since we defined a default base dir
- Select `AWS S3 (Compatible)` as storage and then set `Bucket`, `Region` and optionally a `Key Prefix` if you want to restrict the user to a specific virtual folder in the bucket. The specified virtual folder does not need to be pre-created. You can leave `Access Key` and `Access Secret` empty since we defined global credentials for the `sftpgo` user and we use this system user to run the SFTPGo service.

You are done! Now you can connect to your SFTPGo instance using any compatible `sftp` client on port `2022`.

You can mix S3 users with local users, but please be aware that we are running the service as the unprivileged `sftpgo` system user, so if you set storage to `local` for an SFTPGo virtual user then the home directory for this user must be owned by the `sftpgo` system user. If you don't specify a home directory the default will be `/srv/sftpgo/data/<username>`, which should be appropriate.

**docs/keyboard-interactive.md** (new file)

# Keyboard Interactive Authentication

Keyboard interactive authentication is, in general, a series of questions asked by the server with responses provided by the client.
This authentication method is typically used for multi-factor authentication.
There are no restrictions on the number of questions asked on a particular authentication stage; there are also no restrictions on the number of stages involving different sets of questions.

To enable keyboard interactive authentication, you must set the absolute path of your authentication program or an HTTP URL using the `keyboard_interactive_auth_hook` key in your configuration file.

The external program can read the following environment variables to get info about the user trying to authenticate:

- `SFTPGO_AUTHD_USERNAME`
- `SFTPGO_AUTHD_IP`
- `SFTPGO_AUTHD_PASSWORD`, this is the hashed password as stored inside the data provider

Previous global environment variables aren't cleared when the script is called. The content of these variables is _not_ quoted. They may contain special characters.

The program must write the questions to its standard output, in a single line, using the following JSON-serialized struct:

- `instruction`, string. A short description to show to the user that is trying to authenticate. Can be empty or omitted
- `questions`, list of questions to be asked to the user
- `echos`, list of boolean flags corresponding to the questions (so the lengths of both lists must be the same), indicating whether the user's reply to a particular question should be echoed on the screen while they are typing: true if it should be echoed, or false if it should be hidden.
- `check_password`, optional integer. Ask exactly one question and set this field to 1 if the expected answer is the user password and you want SFTPGo to check it for you. If the password is correct, the response returned to the program is `OK`. If the password is wrong, the program will be terminated and an authentication error will be returned to the user that is trying to authenticate.
- `auth_result`, integer. Set this field to 1 to indicate successful authentication. 0 is ignored. Any other value means authentication error. If this field is found and it is different from 0 then SFTPGo will not read any other questions from the external program, and it will finalize the authentication.

SFTPGo writes the user answers to the program standard input, one per line, in the same order as the questions.
Please be sure that your program receives the answers for all the issued questions before asking for the next ones.

Keyboard interactive authentication can be chained to the external authentication.
The authentication must finish within 60 seconds.

Let's see a very basic example. Our sample keyboard interactive authentication program will ask for 2 sets of questions and accept the user if the answer to the last question is `answer3`.

```shell
#!/bin/sh

echo '{"questions":["Question1: ","Question2: "],"instruction":"This is a sample for keyboard interactive authentication","echos":[true,false]}'

read ANSWER1
read ANSWER2

echo '{"questions":["Question3: "],"instruction":"","echos":[true]}'

read ANSWER3

if test "$ANSWER3" = "answer3"; then
  echo '{"auth_result":1}'
else
  echo '{"auth_result":-1}'
fi
```

and here is an example where SFTPGo checks the user password for you:

```shell
#!/bin/sh

echo '{"questions":["Password: "],"instruction":"This is a sample for keyboard interactive authentication","echos":[false],"check_password":1}'

read ANSWER1

if test "$ANSWER1" != "OK"; then
  exit 1
fi

echo '{"questions":["One time token: "],"instruction":"","echos":[false]}'

read ANSWER2

if test "$ANSWER2" = "token"; then
  echo '{"auth_result":1}'
else
  echo '{"auth_result":-1}'
fi
```

If the hook is an HTTP URL then it will be invoked as an HTTP POST multiple times for each login request.
The request body will contain a JSON struct with the following fields:

- `request_id`, string. Unique request identifier
- `username`, string
- `ip`, string
- `password`, string. This is the hashed password as stored inside the data provider
- `answers`, list of strings. It will be null for the first request
- `questions`, list of strings. It will contain the previously asked questions. It will be null for the first request

The HTTP response code must be 200 and the body must contain the same JSON struct described for the program.

Let's see a basic sample. The configured hook is `http://127.0.0.1:8000/keyIntHookPwd`; as soon as the user tries to log in, SFTPGo makes this HTTP POST request:

```shell
POST /keyIntHookPwd HTTP/1.1
Host: 127.0.0.1:8000
User-Agent: Go-http-client/1.1
Content-Length: 189
Content-Type: application/json
Accept-Encoding: gzip

{"request_id":"bq1r5r7cdrpd2qtn25ng","username":"a","ip":"127.0.0.1","password":"$pbkdf2-sha512$150000$ClOPkLNujMTL$XktKy0xuJsOfMYBz+f2bIyPTdbvDTSnJ1q+7+zp/HPq5Qojwp6kcpSIiVHiwvbi8P6HFXI/D3UJv9BLcnQFqPA=="}
```

As you can see, in this first request `answers` and `questions` are null.

Here is the response that instructs SFTPGo to ask for the user password and to check it:

```shell
HTTP/1.1 200 OK
Date: Tue, 31 Mar 2020 21:15:24 GMT
Server: WSGIServer/0.2 CPython/3.8.2
Content-Type: application/json
X-Frame-Options: SAMEORIGIN
Content-Length: 143

{"questions": ["Password: "], "check_password": 1, "instruction": "This is a sample for keyboard interactive authentication", "echos": [false]}
```

The user enters the correct password and so SFTPGo makes a new HTTP POST. Please note that the `request_id` is the same as in the previous request; this time the asked `questions` and the user's `answers` are not null:

```shell
POST /keyIntHookPwd HTTP/1.1
Host: 127.0.0.1:8000
User-Agent: Go-http-client/1.1
Content-Length: 233
Content-Type: application/json
Accept-Encoding: gzip

{"request_id":"bq1r5r7cdrpd2qtn25ng","username":"a","ip":"127.0.0.1","password":"$pbkdf2-sha512$150000$ClOPkLNujMTL$XktKy0xuJsOfMYBz+f2bIyPTdbvDTSnJ1q+7+zp/HPq5Qojwp6kcpSIiVHiwvbi8P6HFXI/D3UJv9BLcnQFqPA==","answers":["OK"],"questions":["Password: "]}
```

Here is the HTTP response that instructs SFTPGo to ask for a new question:

```shell
HTTP/1.1 200 OK
Date: Tue, 31 Mar 2020 21:15:27 GMT
Server: WSGIServer/0.2 CPython/3.8.2
Content-Type: application/json
X-Frame-Options: SAMEORIGIN
Content-Length: 66

{"questions": ["Question2: "], "instruction": "", "echos": [true]}
```

As soon as the user answers this question, SFTPGo makes a new HTTP POST request with the user's answers:

```shell
POST /keyIntHookPwd HTTP/1.1
Host: 127.0.0.1:8000
User-Agent: Go-http-client/1.1
Content-Length: 239
Content-Type: application/json
Accept-Encoding: gzip

{"request_id":"bq1r5r7cdrpd2qtn25ng","username":"a","ip":"127.0.0.1","password":"$pbkdf2-sha512$150000$ClOPkLNujMTL$XktKy0xuJsOfMYBz+f2bIyPTdbvDTSnJ1q+7+zp/HPq5Qojwp6kcpSIiVHiwvbi8P6HFXI/D3UJv9BLcnQFqPA==","answers":["answer2"],"questions":["Question2: "]}
```

Here is the final HTTP response that allows the user login:

```shell
HTTP/1.1 200 OK
Date: Tue, 31 Mar 2020 21:15:29 GMT
Server: WSGIServer/0.2 CPython/3.8.2
Content-Type: application/json
X-Frame-Options: SAMEORIGIN
Content-Length: 18

{"auth_result": 1}
```

An example keyboard interactive program that allows authentication using [Twilio Authy 2FA](https://www.twilio.com/docs/authy) can be found inside the source tree in the [authy](../examples/OTP/authy) directory.

**docs/kms.md** (new file)

# Key Management Services

SFTPGo stores sensitive data such as Cloud account credentials or passphrases to derive per-object encryption keys. This data is stored as ciphertext and only loaded to RAM in plaintext when needed.

## Supported Services for encryption and decryption

The `secrets` section of the `kms` configuration allows you to configure how to encrypt and decrypt sensitive data. The following configuration parameters are available:

- `url` defines the URI to the KMS service
- `master_key_path` defines the absolute path to a file containing the master encryption key. This could be, for example, a Docker secret or a file protected with filesystem level permissions.

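As a minimal sketch, the corresponding section of the configuration file could look like this (the master key path shown is a hypothetical example; an empty `url` selects the local provider described below):

```json
"kms": {
  "secrets": {
    "url": "",
    "master_key_path": "/run/secrets/sftpgo_master_key"
  }
}
```
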
We use [Go CDK](https://gocloud.dev/howto/secrets/) to access several key management services in a portable way.

### Local provider

If the `url` is empty SFTPGo uses local encryption for keeping secrets. Internally, it uses the [NaCl secret box](https://pkg.go.dev/golang.org/x/crypto/nacl/secretbox) algorithm to perform encryption and authentication.

We first generate a random key, then the per-object encryption key is derived from this random key in the following way:

1. a master key is provided: the encryption key is derived using the HMAC-based Extract-and-Expand Key Derivation Function (HKDF) as defined in [RFC 5869](http://tools.ietf.org/html/rfc5869)
2. no master key is provided: the encryption key is derived as a simple hash of the random key. This is the default configuration.

For compatibility with SFTPGo versions 1.2.x and before, we also support encryption based on `AES-256-GCM`. The data encrypted with this algorithm will never use the master key, to keep backward compatibility.

### Google Cloud Key Management Service

To use keys from Google Cloud Platform’s [Key Management Service](https://cloud.google.com/kms/) (GCP KMS) you have to use `gcpkms` as the URL scheme like this:

```shell
gcpkms://projects/[PROJECT_ID]/locations/[LOCATION]/keyRings/[KEY_RING]/cryptoKeys/[KEY]
```

SFTPGo will use Application Default Credentials. See [here](https://cloud.google.com/docs/authentication/production) for alternatives such as environment variables.

The URL host+path are used as the key resource ID; see [here](https://cloud.google.com/kms/docs/object-hierarchy#key) for more details.

If a master key is provided, we first encrypt the plaintext data using the local provider, then we encrypt the resulting payload using the Cloud provider and store this ciphertext.

### AWS Key Management Service

To use customer master keys from Amazon Web Service’s [Key Management Service](https://aws.amazon.com/kms/) (AWS KMS) you have to use `awskms` as the URL scheme. You can use the key’s ID, alias, or Amazon Resource Name (ARN) to identify the key. You should specify the region query parameter to ensure your application connects to the correct region.

Here are some examples:

- By ID: `awskms://1234abcd-12ab-34cd-56ef-1234567890ab?region=us-east-1`
- By alias: `awskms://alias/ExampleAlias?region=us-east-1`
- By ARN: `arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34bc-56ef-1234567890ab?region=us-east-1`

SFTPGo will use the default AWS session. See [AWS Session](https://docs.aws.amazon.com/sdk-for-go/api/aws/session/) to learn about authentication alternatives such as environment variables.

If a master key is provided, we first encrypt the plaintext data using the local provider, then we encrypt the resulting payload using the Cloud provider and store this ciphertext.

### HashiCorp Vault

To use the [transit secrets engine](https://www.vaultproject.io/docs/secrets/transit/index.html) in [Vault](https://www.vaultproject.io/) you have to use `hashivault` as the URL scheme like this: `hashivault://mykey`.

The Vault server endpoint and authentication token are specified using the environment variables `VAULT_SERVER_URL` and `VAULT_SERVER_TOKEN`, respectively.

If a master key is provided, we first encrypt the plaintext data using the local provider, then we encrypt the resulting payload using Vault and store this ciphertext.

### Notes

- The KMS configuration is global.
- If you set a master key you will be unable to decrypt the data without this key, and the SFTPGo users that need the data as plain text will be unable to log in.
- You can start using the local provider and then switch to an external one, but you can't switch between external providers and still be able to decrypt the data encrypted using the previous provider.

**docs/logs.md** (new file)

# Logs

The log file is a stream of JSON structs. Each struct has a `sender` field that identifies the log type.

The logs can be divided into the following categories:

- **"app logs"**, internal logs used to debug SFTPGo:
  - `sender` string. This is generally the package name that emits the log
  - `time` string. Date/time with millisecond precision
  - `level` string
  - `message` string
- **"transfer logs"**, SFTP/SCP transfer logs:
  - `sender` string. `Upload` or `Download`
  - `time` string. Date/time with millisecond precision
  - `level` string
  - `elapsed_ms`, int64. Elapsed time, as milliseconds, for the upload/download
  - `size_bytes`, int64. Size, as bytes, of the download/upload
  - `username`, string
  - `file_path` string
  - `connection_id` string. Unique connection identifier
  - `protocol` string. `SFTP` or `SCP`
- **"command logs"**, SFTP/SCP command logs:
  - `sender` string. `Rename`, `Rmdir`, `Mkdir`, `Symlink`, `Remove`, `Chmod`, `Chown`, `Chtimes`, `Truncate`, `SSHCommand`
  - `level` string
  - `username`, string
  - `file_path` string
  - `target_path` string
  - `filemode` string. Valid for sender `Chmod`, otherwise empty
  - `uid` integer. Valid for sender `Chown`, otherwise -1
  - `gid` integer. Valid for sender `Chown`, otherwise -1
  - `access_time` datetime as YYYY-MM-DDTHH:MM:SS. Valid for sender `Chtimes`, otherwise empty
  - `modification_time` datetime as YYYY-MM-DDTHH:MM:SS. Valid for sender `Chtimes`, otherwise empty
  - `size` int64. Valid for sender `Truncate`, otherwise -1
  - `ssh_command`, string. Valid for sender `SSHCommand`, otherwise empty
  - `connection_id` string. Unique connection identifier
  - `protocol` string. `SFTP`, `SCP` or `SSH`
- **"http logs"**, REST API logs:
  - `sender` string. `httpd`
  - `level` string
  - `remote_addr` string. IP and port of the remote client
  - `proto` string, for example `HTTP/1.1`
  - `method` string. HTTP method (`GET`, `POST`, `PUT`, `DELETE` etc.)
  - `user_agent` string
  - `uri` string. Full URI
  - `resp_status` integer. HTTP response status code
  - `resp_size` integer. Size in bytes of the HTTP response
  - `elapsed_ms` int64. Elapsed time, as milliseconds, to complete the request
  - `request_id` string. Unique request identifier
- **"connection failed logs"**, logs for failed attempts to initialize a connection. A connection can fail for an authentication error or other errors such as a client abort or a timeout if the login does not happen within two minutes
  - `sender` string. `connection_failed`
  - `level` string
  - `username`, string. Can be empty if the connection is closed before an authentication attempt
  - `client_ip` string
  - `protocol` string. Possible values are `SSH`, `FTP`, `DAV`
  - `login_type` string. Can be `publickey`, `password`, `keyboard-interactive`, `publickey+password`, `publickey+keyboard-interactive` or `no_auth_tryed`
  - `error` string. Optional error description

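For example, a transfer log struct with the fields described above might look like this (all values are illustrative):

```json
{"sender":"Upload","time":"2021-01-17 10:00:00.123","level":"info","elapsed_ms":451,"size_bytes":1048576,"username":"user1","file_path":"/srv/sftpgo/data/user1/file.bin","connection_id":"SFTP_abc123","protocol":"SFTP"}
```
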

**docs/metrics.md** (new file)

# Metrics

SFTPGo exposes [Prometheus](https://prometheus.io/) metrics at the `/metrics` HTTP endpoint of the telemetry server.
Several counters and gauges are available, for example:

- Total uploads and downloads
- Total upload and download size
- Total upload and download errors
- Total executed SSH commands
- Total SSH command errors
- Number of active connections
- Data provider availability
- Total successful and failed logins using password, public key, keyboard interactive authentication or supported multi-step authentications
- Total HTTP requests served and totals per response code
- Go's runtime details about GC, number of goroutines and OS threads
- Process information like CPU, memory, file descriptor usage and start time

Please check the `/metrics` page for more details.

We expose the `/metrics` endpoint in both the HTTP server and the telemetry server; you should use the one from the telemetry server. The HTTP server `/metrics` endpoint is deprecated and will be removed in future releases.

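As a sketch, assuming the telemetry server listens on the default `127.0.0.1:10000`, a Prometheus scrape job for SFTPGo could look like this:

```yaml
scrape_configs:
  - job_name: "sftpgo"
    metrics_path: /metrics
    static_configs:
      - targets: ["127.0.0.1:10000"]
```
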

**docs/performance.md** (new file)

# Performance

SFTPGo can easily saturate a Gigabit connection on low-end hardware with no special configuration; this is generally enough for most use cases.

For Multi-Gig connections, some performance improvements and comparisons with OpenSSH have been discussed [here](https://github.com/drakkan/sftpgo/issues/69); most of them have been included in the main branch. To summarize:

- In the current state, with all performance improvements applied, SFTP performance is very close to OpenSSH's; however, CPU usage is higher. SCP performance matches OpenSSH.
- The main bottlenecks are the encryption and the message authentication, so if you can use a fast cipher with implicit message authentication, such as `aes128-gcm@openssh.com`, you will get a big performance boost.
- The SCP protocol is much simpler than SFTP, so SFTPGo's multi-platform SCP implementation performs better than its SFTP one.
- Load balancing with HAProxy can greatly improve performance if the CPU does not become the bottleneck.

## Benchmark

### Hardware specification

**Server** ||
--- | --- |
OS| Debian 10.2 x64 |
CPU| Ryzen5 3600 |
RAM| 64GB 2400MHz ECC |
Disk| Ramdisk |
Ethernet| Mellanox ConnectX-3 40GbE|

**Client** ||
--- | --- |
OS| Ubuntu 19.10 x64 |
CPU| Threadripper 1920X |
RAM| 64GB 2400MHz ECC |
Disk| Ramdisk |
Ethernet| Mellanox ConnectX-3 40GbE|

### Test configurations

- `Baseline`: SFTPGo version 0.9.6.
- `Devel`: SFTPGo commit b0ed1905918b9dcc22f9a20e89e354313f491734, compiled with Golang 1.14.2. This is basically the same as v1.0.0 as far as performance is concerned.
- `Optimized`: Various [optimizations](#optimizations-applied) applied on top of `Devel`.
- `Balanced`: Two optimized instances, running on localhost, load balanced by HAProxy 2.1.3.
- `OpenSSH`: OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1d 10 Sep 2019

The server's CPU is in Eco mode; you can expect better results in certain cases with a stronger CPU, especially for multi-stream HAProxy-balanced loads.

#### Cipher aes128-ctr

The Message Authentication Code (MAC) used is `hmac-sha2-256`.

##### SFTP

Download:

Stream|Baseline MB/s|Devel MB/s|Optimized MB/s|Balanced MB/s|OpenSSH MB/s|
---|---|---|---|---|---|
1|150|243|319|412|452|
2|267|452|600|740|735|
3|351|637|802|991|1045|
4|414|811|1002|1192|1265|
8|536|1451|1742|1552|1798|

Upload:

Stream|Baseline MB/s|Devel MB/s|Optimized MB/s|Balanced MB/s|OpenSSH MB/s|
---|---|---|---|---|---|
1|172|273|343|407|426|
2|284|469|595|673|738|
3|368|644|820|881|1090|
4|446|851|1041|1026|1244|
8|605|1210|1368|1273|1820|

##### SCP

Download:

Stream|Baseline MB/s|Devel MB/s|Optimized MB/s|Balanced MB/s|OpenSSH MB/s|
---|---|---|---|---|---|
1|220|369|525|611|558|
2|437|659|941|1048|856|
3|635|1000|1365|1363|1201|
4|787|1272|1664|1610|1415|
8|1297|2129|2690|2100|1959|

Upload:

Stream|Baseline MB/s|Devel MB/s|Optimized MB/s|Balanced MB/s|OpenSSH MB/s|
---|---|---|---|---|---|
1|208|312|400|458|508|
2|360|516|647|745|926|
3|476|678|861|935|1254|
4|576|836|1080|1099|1569|
8|857|1161|1416|1433|2271|

#### Cipher aes128-gcm@openssh.com

With this cipher, message authentication is implicit: no SHA256 computation is needed. In the tables below, `<--` in the `Optimized` column indicates the same figures as `Devel`, since the only optimization applied (see [Optimizations applied](#optimizations-applied)) affects AES-CTR.

##### SFTP

Download:

Stream|Baseline MB/s|Devel MB/s|Optimized MB/s|Balanced MB/s|OpenSSH MB/s|
---|---|---|---|---|---|
1|332|423|<--|583|443|
2|533|755|<--|970|809|
3|666|1045|<--|1249|1098|
4|762|1276|<--|1461|1351|
8|886|2064|<--|1825|1933|

Upload:

Stream|Baseline MB/s|Devel MB/s|Optimized MB/s|Balanced MB/s|OpenSSH MB/s|
---|---|---|---|---|---|
1|348|410|<--|527|469|
2|596|729|<--|842|930|
3|778|974|<--|1088|1341|
4|886|1192|<--|1232|1494|
8|1042|1578|<--|1433|1893|

##### SCP

Download:

Stream|Baseline MB/s|Devel MB/s|Optimized MB/s|Balanced MB/s|OpenSSH MB/s|
---|---|---|---|---|---|
1|776|793|<--|832|578|
2|1343|1415|<--|1435|938|
3|1815|1878|<--|1877|1279|
4|2192|2205|<--|2056|1567|
8|3237|3287|<--|2493|2036|

Upload:

Stream|Baseline MB/s|Devel MB/s|Optimized MB/s|Balanced MB/s|OpenSSH MB/s|
---|---|---|---|---|---|
1|528|545|<--|608|584|
2|872|849|<--|975|1019|
3|1121|1138|<--|1217|1412|
4|1367|1387|<--|1368|1755|
8|1733|1744|<--|1664|2510|

### Optimizations applied

- AES-CTR optimization of the Go compiler for x86_64; there is a [patch](https://go-review.googlesource.com/c/go/+/51670) that hasn't been merged yet, but you can apply it yourself.

### HAProxy configuration

Here is the relevant HAProxy configuration used for the `Balanced` test configuration:

```console
frontend sftp
    bind            :2222
    mode            tcp
    timeout client  600s
    default_backend sftpgo

backend sftpgo
    mode            tcp
    balance         roundrobin
    timeout connect 10s
    timeout server  600s
    timeout queue   30s
    option          tcp-check
    tcp-check expect string SSH-2.0-

    server sftpgo1 127.0.0.1:2022 check send-proxy-v2 weight 10 inter 10s rise 2 fall 3
    server sftpgo2 127.0.0.1:2024 check send-proxy-v2 weight 10 inter 10s rise 2 fall 3
```

**docs/portable-mode.md** (new file)

# Portable mode

SFTPGo allows you to share a single directory on demand using the `portable` subcommand:

```console
sftpgo portable --help
To serve the current working directory with auto generated credentials simply
use:

$ sftpgo portable

Please take a look at the usage below to customize the serving parameters

Usage:
  sftpgo portable [flags]

Flags:
  -C, --advertise-credentials           If the SFTP/FTP service is
                                        advertised via multicast DNS, this
                                        flag allows to put username/password
                                        inside the advertised TXT record
  -S, --advertise-service               Advertise configured services using
                                        multicast DNS
      --allowed-patterns stringArray    Allowed file patterns case insensitive.
                                        The format is:
                                        /dir::pattern1,pattern2.
                                        For example: "/somedir::*.jpg,a*b?.png"
      --az-access-tier string           Leave empty to use the default
                                        container setting
      --az-account-key string
      --az-account-name string
      --az-container string
      --az-endpoint string              Leave empty to use the default:
                                        "blob.core.windows.net"
      --az-key-prefix string            Allows to restrict access to the
                                        virtual folder identified by this
                                        prefix and its contents
      --az-sas-url string               Shared access signature URL
      --az-upload-concurrency int       How many parts are uploaded in
                                        parallel (default 2)
      --az-upload-part-size int         The buffer size for multipart uploads
                                        (MB) (default 4)
      --az-use-emulator
      --crypto-passphrase string        Passphrase for encryption/decryption
      --denied-patterns stringArray     Denied file patterns case insensitive.
                                        The format is:
                                        /dir::pattern1,pattern2.
                                        For example: "/somedir::*.jpg,a*b?.png"
  -d, --directory string                Path to the directory to serve.
                                        This can be an absolute path or a path
                                        relative to the current directory
                                        (default ".")
  -f, --fs-provider int                 0 => local filesystem
                                        1 => AWS S3 compatible
                                        2 => Google Cloud Storage
                                        3 => Azure Blob Storage
                                        4 => Encrypted local filesystem
                                        5 => SFTP
      --ftpd-cert string                Path to the certificate file for FTPS
      --ftpd-key string                 Path to the key file for FTPS
      --ftpd-port int                   0 means a random unprivileged port,
                                        < 0 disabled (default -1)
      --gcs-automatic-credentials int   0 means explicit credentials using
                                        a JSON credentials file, 1 automatic
                                        (default 1)
      --gcs-bucket string
      --gcs-credentials-file string     Google Cloud Storage JSON credentials
                                        file
      --gcs-key-prefix string           Allows to restrict access to the
                                        virtual folder identified by this
                                        prefix and its contents
      --gcs-storage-class string
  -h, --help                            help for portable
  -l, --log-file-path string            Leave empty to disable logging
  -v, --log-verbose                     Enable verbose logs
  -p, --password string                 Leave empty to use an auto generated
                                        value
  -g, --permissions strings             User's permissions. "*" means any
                                        permission (default [list,download])
  -k, --public-key strings
      --s3-access-key string
      --s3-access-secret string
      --s3-bucket string
      --s3-endpoint string
      --s3-key-prefix string            Allows to restrict access to the
                                        virtual folder identified by this
                                        prefix and its contents
      --s3-region string
      --s3-storage-class string
      --s3-upload-concurrency int       How many parts are uploaded in
                                        parallel (default 2)
      --s3-upload-part-size int         The buffer size for multipart uploads
                                        (MB) (default 5)
      --sftp-endpoint string            SFTP endpoint as host:port for SFTP
                                        provider
      --sftp-fingerprints strings       SFTP fingerprints to verify remote host
                                        key for SFTP provider
      --sftp-key-path string            SFTP private key path for SFTP provider
      --sftp-password string            SFTP password for SFTP provider
      --sftp-prefix string              SFTP prefix allows restrict all
                                        operations to a given path within the
                                        remote SFTP server
      --sftp-username string            SFTP user for SFTP provider
  -s, --sftpd-port int                  0 means a random unprivileged port,
                                        < 0 disabled
  -c, --ssh-commands strings            SSH commands to enable.
                                        "*" means any supported SSH command
                                        including scp
                                        (default [md5sum,sha1sum,cd,pwd,scp])
  -u, --username string                 Leave empty to use an auto generated
                                        value
      --webdav-cert string              Path to the certificate file for WebDAV
                                        over HTTPS
      --webdav-key string               Path to the key file for WebDAV over
                                        HTTPS
      --webdav-port int                 0 means a random unprivileged port,
                                        < 0 disabled (default -1)
```

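For example, a hypothetical invocation that serves `/srv/shared` over SFTP on port 2022 with explicit credentials and full permissions (all values here are illustrative) could be:

```shell
sftpgo portable -d /srv/shared -s 2022 -u user -p secret -g "*"
```
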
In portable mode, SFTPGo can advertise the SFTP/FTP services and, optionally, the credentials via multicast DNS, so there is a standard way to discover the service and to automatically connect to it.

Here is an example of the advertised SFTP service, including credentials, as seen using `avahi-browse`:

```console
= enp0s31f6 IPv4 SFTPGo portable 53705 SFTP File Transfer local
   hostname = [p1.local]
   address = [192.168.1.230]
   port = [53705]
   txt = ["password=EWOo6pJe" "user=user" "version=0.9.3-dev-b409523-dirty-2019-10-26T13:43:32Z"]
```

**docs/post-connect-hook.md** (new file)

# Post-connect hook

This hook is executed as soon as a new connection is established. It is notified with the connection's IP address and protocol. Based on the received response, the connection is accepted or rejected. Combining this hook with the [Post-login hook](./post-login-hook.md) you can implement your own blacklist/whitelist of IP addresses, even per protocol.

Please keep in mind that you can easily configure a specialized program such as [Fail2ban](http://www.fail2ban.org/) for brute force protection. Executing a hook for each connection can be heavy.

The `post-connect-hook` can be defined as the absolute path of your program or an HTTP URL.

If the hook defines an external program it can read the following environment variables:

- `SFTPGO_CONNECTION_IP`
- `SFTPGO_CONNECTION_PROTOCOL`

If the external command completes with a zero exit status the connection will be accepted, otherwise rejected.

Previous global environment variables aren't cleared when the script is called.
The program must finish within 20 seconds.

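As a minimal sketch, a hook program that rejects connections from one hypothetical blacklisted address and accepts everything else could look like this:

```shell
#!/bin/sh
# Reject connections from a specific (hypothetical) IP address.
if test "$SFTPGO_CONNECTION_IP" = "192.0.2.10"; then
  exit 1 # non-zero exit status: connection rejected
fi
exit 0 # zero exit status: connection accepted
```
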
If the hook defines an HTTP URL then this URL will be invoked as an HTTP GET with the following query parameters:

- `ip`
- `protocol`

The connection is accepted if the HTTP response code is `200`, otherwise rejected.

The HTTP hook will use the global configuration for HTTP clients and will respect the retry configurations.

**docs/post-login-hook.md** (new file)

# Post-login hook

This hook is executed after a login, or after closing a connection for authentication timeout. By defining an appropriate `post_login_scope` you can get notifications for failed logins, successful logins or both.

Please keep in mind that executing a hook after each login can be heavy.

The `post-login-hook` can be defined as the absolute path of your program or an HTTP URL.

If the hook defines an external program it can read the following environment variables:

- `SFTPGO_LOGIND_USER`, it contains the user serialized as JSON. The username is empty if the connection is closed for authentication timeout
- `SFTPGO_LOGIND_IP`
- `SFTPGO_LOGIND_METHOD`, possible values are `publickey`, `password`, `keyboard-interactive`, `publickey+password`, `publickey+keyboard-interactive` or `no_auth_tryed`
- `SFTPGO_LOGIND_STATUS`, 1 means login OK, 0 means login KO
- `SFTPGO_LOGIND_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`

Previous global environment variables aren't cleared when the script is called.
The program must finish within 20 seconds.

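As a minimal sketch, a hook program that appends failed logins to a log file (the log path is a hypothetical example) could look like this:

```shell
#!/bin/sh
# SFTPGO_LOGIND_STATUS is 0 for a failed login, 1 for a successful one.
if test "$SFTPGO_LOGIND_STATUS" = "0"; then
  echo "failed $SFTPGO_LOGIND_METHOD login from $SFTPGO_LOGIND_IP ($SFTPGO_LOGIND_PROTOCOL)" >> /var/log/sftpgo-failed-logins.log
fi
```
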
If the hook is an HTTP URL then it will be invoked as an HTTP POST. The login method, the used protocol, the IP address and the status of the user are added to the query string, for example `<http_url>?login_method=password&ip=1.2.3.4&protocol=SSH&status=1`.
The request body will contain the user serialized as JSON.

The HTTP hook will use the global configuration for HTTP clients and will respect the retry configurations.

The `post_login_scope` supports the following configuration values:

- `0` means notify both failed and successful logins
- `1` means notify failed logins. Connections closed for authentication timeout are notified as failed logins. You will get an empty username in this case
- `2` means notify successful logins

**docs/profiling.md** (new file)

# Profiling SFTPGo

The built-in profiler lets you collect CPU profiles, traces, allocations and heap profiles that help you identify and correct specific bottlenecks.
You can enable the built-in profiler using the `telemetry` configuration section inside the configuration file.

Profiling data is exposed via HTTP/HTTPS in the format expected by the [pprof](https://github.com/google/pprof/blob/main/doc/README.md) visualization tool. You can find the index page at the URL `/debug/pprof/`.

The following profiles are available; you can obtain them via HTTP GET requests:

- `allocs`, a sampling of all past memory allocations
- `block`, stack traces that led to blocking on synchronization primitives
- `goroutine`, stack traces of all current goroutines
- `heap`, a sampling of memory allocations of live objects. You can specify the `gc` GET parameter to run GC before taking the heap sample
- `mutex`, stack traces of holders of contended mutexes
- `profile`, CPU profile. You can specify the duration in the `seconds` GET parameter. After you get the profile file, use the `go tool pprof` command to investigate the profile
- `threadcreate`, stack traces that led to the creation of new OS threads
- `trace`, a trace of execution of the current program. You can specify the duration in the `seconds` GET parameter. After you get the trace file, use the `go tool trace` command to investigate the trace

For example you can:

- download a 30 second CPU profile from the URL `/debug/pprof/profile?seconds=30`
- download a sampling of memory allocations of live objects from the URL `/debug/pprof/heap?gc=1`
- download a sampling of all past memory allocations from the URL `/debug/pprof/allocs`

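Putting it together, assuming the telemetry server listens on the default `127.0.0.1:10000`, you could collect and inspect a CPU profile like this:

```shell
# collect a 30 second CPU profile from the telemetry server
curl -o cpu.pprof "http://127.0.0.1:10000/debug/pprof/profile?seconds=30"
# open an interactive pprof session to investigate it
go tool pprof cpu.pprof
```
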

**docs/rest-api.md** (new file)

# REST API

SFTPGo exposes a REST API to manage, backup, and restore users and folders, and to get real time reports of the active connections, with the ability to forcibly close a connection.

If quota tracking is enabled in the configuration file, then the used size and number of files are updated each time a file is added/removed. If files are added/removed without using SFTP/SCP, or if you change `track_quota` from `2` to `1`, you can rescan the users' home directories and update the used quota using the REST API.

The REST API is protected using JSON Web Token (JWT) authentication and can be exposed over HTTPS. You can also configure client certificate authentication in addition to JWT.

The default credentials are:

- username: `admin`
- password: `password`

You can get a JWT token using the `/api/v2/token` endpoint; you need to authenticate using HTTP Basic authentication and the credentials of an active administrator.

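For example, assuming a local instance with the default `httpd` binding and the default credentials, the token request could look like this (illustrative `curl` invocation):

```shell
curl -u admin:password http://127.0.0.1:8080/api/v2/token
```

Here is a sample response:
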
```json
|
||||
{"access_token":"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2MTA4NzU5NDksImp0aSI6ImMwMjAzbGZjZHJwZDRsMGMxanZnIiwibmJmIjoxNjEwODc1MzE5LCJwZXJtaXNzaW9ucyI6WyIqIl0sInN1YiI6ImlHZ010NlZNU3AzN2tld3hMR3lUV1l2b2p1a2ttSjBodXlJZHBzSWRyOFE9IiwidXNlcm5hbWUiOiJhZG1pbiJ9.dt-UwcWdEMwoGauuiQw8BmgpBAv4YlTaXkyNK-7iRJ4","expires_at":"2021-01-17T09:32:29Z"}
|
||||
```

Once the access token has expired, you need to get a new one.

JWT tokens are not stored, and we use a randomly generated secret to sign them, so if you restart SFTPGo all previous tokens are invalidated and you will get a 401 HTTP response code.

If you define multiple bindings, each binding signs JWT tokens with a different secret, so a token generated for one binding is not valid for the others.

You can create other administrators and assign them the following permissions:

- add users
- edit users
- del users
- view users
- view connections
- close connections
- view server status
- view and start quota scans
- view defender
- manage defender
- manage system
- manage admins

You can also restrict administrator access based on the source IP address. If you are running SFTPGo behind a reverse proxy, you need to allow both the proxy IP address and the real client IP.

The OpenAPI 3 schema for the exposed API can be found inside the source tree: [openapi.yaml](../httpd/schema/openapi.yaml "OpenAPI 3 specs").

You can generate your own REST client in your preferred programming language, or even bash scripts, using an OpenAPI generator such as [swagger-codegen](https://github.com/swagger-api/swagger-codegen) or [OpenAPI Generator](https://openapi-generator.tech/).
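
For example, with the OpenAPI Generator CLI installed, a client generation command might look like the following sketch. The generator name and output path are illustrative, and depending on how you installed it the binary may be called `openapi-generator` instead:

```shell
# generate a Python client from the bundled schema
openapi-generator-cli generate -i httpd/schema/openapi.yaml -g python -o ./sftpgo-api-client
```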

You can also use [Swagger UI](https://github.com/swagger-api/swagger-ui).
35 docs/s3.md Normal file
@@ -0,0 +1,35 @@

# S3 Compatible Object Storage backends

To connect SFTPGo to AWS, you need to specify credentials, a `bucket` and a `region`. Here is the list of available [AWS regions](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions). For example, if your bucket is in Frankfurt, you have to set the region to `eu-central-1`. You can specify an AWS [storage class](https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html) too; leave it blank to use the default AWS storage class. An endpoint is required if you are connecting to an S3 compatible storage such as [MinIO](https://min.io/).

The AWS SDK has different options for credentials ([more details](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html)). We support:

1. Providing [Access Keys](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys)
2. Using IAM roles for Amazon EC2
3. Using IAM roles for tasks, if your application uses an ECS task definition

So, you need to provide access keys to activate option 1, or leave them blank to use the other ways to specify credentials.

By specifying a different `key_prefix`, you can assign different "folders" of the same bucket to different users. This is similar to a chroot directory for a local filesystem. Each SFTP/SCP user can only access the assigned folder and its contents. The folder identified by `key_prefix` does not need to be pre-created.

SFTPGo uses multipart uploads and parallel downloads for storing and retrieving files from S3.

For multipart uploads you can customize the part size and the upload concurrency. Please note that if the upload bandwidth between the client and SFTPGo is greater than the upload bandwidth between SFTPGo and S3, the client will have to wait for the last parts to be uploaded to S3 after it finishes uploading the file to SFTPGo, and it may time out. Keep this in mind if you customize these parameters.
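
As a rough illustration, a user's filesystem configuration for an S3 backend might contain a fragment like the one below. The field names and the numeric provider value are assumptions mapped from the parameters discussed above, and the access secret is omitted here; check the OpenAPI schema for the authoritative definitions:

```json
{
  "filesystem": {
    "provider": 1,
    "s3config": {
      "bucket": "my-bucket",
      "region": "eu-central-1",
      "access_key": "AKIAIOSFODNN7EXAMPLE",
      "key_prefix": "users/user1/",
      "upload_part_size": 5,
      "upload_concurrency": 2
    }
  }
}
```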

The configured bucket must exist.

Some SFTP commands don't work over S3:

- `chtimes`, `chown` and `chmod` will fail. If you want to silently ignore these methods, set `setstat_mode` to `1` or `2` in your configuration file
- `truncate`, `symlink`, `readlink` are not supported
- opening a file for both reading and writing at the same time is not supported
- upload resume is not supported
- upload mode `atomic` is ignored since S3 uploads are already atomic

Other notes:

- `rename` is a two-step operation: a server-side copy followed by a deletion. So it is not atomic, unlike on a local filesystem.
- We don't support renaming non-empty directories, since we would have to rename all the contents too, and this could take a long time for directories with thousands of files: each file would require an AWS API call.
- For server-side encryption, you have to configure the mapped bucket to automatically encrypt objects.
- A local home directory is still required to store temporary files.
- Clients that require advanced filesystem-like features, such as `sshfs`, are not supported.
141 docs/service.md Normal file
@@ -0,0 +1,141 @@

# Running SFTPGo as a service

Download a binary SFTPGo [release](https://github.com/drakkan/sftpgo/releases) or a build artifact for the [latest commit](https://github.com/drakkan/sftpgo/actions), or build SFTPGo yourself.

Run the following instructions from the directory that contains the sftpgo binary and the accompanying files.

## Linux

The easiest way to run SFTPGo as a service is to download and install the pre-compiled deb/rpm package, or to use one of the Arch Linux PKGBUILDs we maintain.

This section describes the procedure to use if you prefer to build SFTPGo yourself, or if you want to download and configure a pre-built release tarball.

A `systemd` sample [service](../init/sftpgo.service "systemd service") can be found inside the source tree.

Here are some basic instructions to run SFTPGo as a service using a dedicated `sftpgo` system account.

Please run the following commands from the directory where you downloaded/compiled SFTPGo:

```bash
# create the sftpgo user and group
sudo groupadd --system sftpgo
sudo useradd --system \
  --gid sftpgo \
  --no-create-home \
  --home-dir /var/lib/sftpgo \
  --shell /usr/sbin/nologin \
  --comment "SFTPGo user" \
  sftpgo
# create the required directories
sudo mkdir -p /etc/sftpgo \
  /var/lib/sftpgo \
  /usr/share/sftpgo

# install the sftpgo executable
sudo install -Dm755 sftpgo /usr/bin/sftpgo
# install the default configuration file, edit it if required
sudo install -Dm644 sftpgo.json /etc/sftpgo/
# override some configuration keys using environment variables
sudo sh -c 'echo "SFTPGO_HTTPD__TEMPLATES_PATH=/usr/share/sftpgo/templates" > /etc/sftpgo/sftpgo.env'
sudo sh -c 'echo "SFTPGO_HTTPD__STATIC_FILES_PATH=/usr/share/sftpgo/static" >> /etc/sftpgo/sftpgo.env'
sudo sh -c 'echo "SFTPGO_HTTPD__BACKUPS_PATH=/var/lib/sftpgo/backups" >> /etc/sftpgo/sftpgo.env'
sudo sh -c 'echo "SFTPGO_DATA_PROVIDER__CREDENTIALS_PATH=/var/lib/sftpgo/credentials" >> /etc/sftpgo/sftpgo.env'
# if you use a file based data provider such as sqlite or bolt, consider setting the database path too, for example:
#sudo sh -c 'echo "SFTPGO_DATA_PROVIDER__NAME=/var/lib/sftpgo/sftpgo.db" >> /etc/sftpgo/sftpgo.env'
# also set the provider's path as an env var to get initprovider to work with the SQLite provider:
#export SFTPGO_DATA_PROVIDER__NAME=/var/lib/sftpgo/sftpgo.db
# install static files and templates for the web UI
sudo cp -r static templates /usr/share/sftpgo/
# set files and directory permissions
sudo chown -R sftpgo:sftpgo /etc/sftpgo /var/lib/sftpgo
sudo chmod 750 /etc/sftpgo /var/lib/sftpgo
sudo chmod 640 /etc/sftpgo/sftpgo.json /etc/sftpgo/sftpgo.env
# initialize the configured data provider
# if you want to use MySQL or PostgreSQL you need to create the configured database before running the initprovider command
sudo -E su - sftpgo -m -s /bin/bash -c 'sftpgo initprovider -c /etc/sftpgo'
# install the systemd service
sudo install -Dm644 init/sftpgo.service /etc/systemd/system
# start the service
sudo systemctl start sftpgo
# verify that the service is started
sudo systemctl status sftpgo
# automatically start sftpgo on boot
sudo systemctl enable sftpgo
# optional, create shell completion script, for example for bash
sudo sh -c '/usr/bin/sftpgo gen completion bash > /usr/share/bash-completion/completions/sftpgo'
# optional, create man pages
sudo /usr/bin/sftpgo gen man -d /usr/share/man/man1
```

## macOS

For macOS, a `launchd` sample [service](../init/com.github.drakkan.sftpgo.plist "launchd plist") can be found inside the source tree. The `launchd` plist assumes that SFTPGo has `/usr/local/opt/sftpgo` as its base directory.

Here are some basic instructions to run SFTPGo as a service; please run the following commands from the directory where you downloaded SFTPGo:

```bash
# create the required directories
sudo mkdir -p /usr/local/opt/sftpgo/init \
  /usr/local/opt/sftpgo/var/lib \
  /usr/local/opt/sftpgo/usr/share \
  /usr/local/opt/sftpgo/var/log \
  /usr/local/opt/sftpgo/etc \
  /usr/local/opt/sftpgo/bin

# install the sftpgo executable
sudo cp sftpgo /usr/local/opt/sftpgo/bin/
# install the launchd service
sudo cp init/com.github.drakkan.sftpgo.plist /usr/local/opt/sftpgo/init/
sudo chown root:wheel /usr/local/opt/sftpgo/init/com.github.drakkan.sftpgo.plist
# install the default configuration file, edit it if required
sudo cp sftpgo.json /usr/local/opt/sftpgo/etc/
# install static files and templates for the web UI
sudo cp -r static templates /usr/local/opt/sftpgo/usr/share/
# initialize the configured data provider
# if you want to use MySQL or PostgreSQL you need to create the configured database before running the initprovider command
sudo /usr/local/opt/sftpgo/bin/sftpgo initprovider -c /usr/local/opt/sftpgo/etc/
# add sftpgo to the launch daemons
sudo ln -s /usr/local/opt/sftpgo/init/com.github.drakkan.sftpgo.plist /Library/LaunchDaemons/com.github.drakkan.sftpgo.plist
# start the service and enable it to start on boot
sudo launchctl load -w /Library/LaunchDaemons/com.github.drakkan.sftpgo.plist
# verify that the service is started
sudo launchctl list com.github.drakkan.sftpgo
```

## Windows

On Windows, you can register SFTPGo as a Windows Service. Take a look at the CLI usage to learn how to do this:

```powershell
PS> sftpgo.exe service --help
Manage SFTPGo Windows Service

Usage:
  sftpgo service [command]

Available Commands:
  install     Install SFTPGo as Windows Service
  reload      Reload the SFTPGo Windows Service sending a "paramchange" request
  rotatelogs  Signal to the running service to rotate the logs
  start       Start SFTPGo Windows Service
  status      Retrieve the status for the SFTPGo Windows Service
  stop        Stop SFTPGo Windows Service
  uninstall   Uninstall SFTPGo Windows Service

Flags:
  -h, --help   help for service

Use "sftpgo service [command] --help" for more information about a command.
```

The `install` subcommand accepts the same flags that are valid for `serve`.
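
For example, an installation might look like the following sketch; the log file path is illustrative, and any flag accepted by `serve` can be passed to `install` in the same way:

```powershell
PS> sftpgo.exe service install --log-file-path "C:\Program Files\SFTPGo\logs\sftpgo.log"
PS> sftpgo.exe service start
PS> sftpgo.exe service status
```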

After installing SFTPGo as a Windows Service, please remember to allow network access to the SFTPGo executable using something like this:

```powershell
PS> netsh advfirewall firewall add rule name="SFTPGo Service" dir=in action=allow program="C:\Program Files\SFTPGo\sftpgo.exe"
```

or through the Windows Firewall GUI.

The Windows installer will register the service and allow network access for it automatically.
64 docs/sftp-subsystem.md Normal file
@@ -0,0 +1,64 @@

# SFTP subsystem mode

In this mode, SFTPGo speaks the server side of the SFTP protocol to stdout and expects client requests from stdin.
You can use SFTPGo as a subsystem via the `startsubsys` command.
This mode is not intended to be invoked directly, but from sshd using the `Subsystem` option.
For example, add a line like this one to `/etc/ssh/sshd_config`:

```shell
Subsystem    sftp    sftpgo startsubsys
```

Command-line flags should be specified in the Subsystem declaration.
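For instance, flags documented in the usage below, such as `--base-home-dir`, can be passed directly in the declaration (the path here is illustrative):

```shell
Subsystem    sftp    sftpgo startsubsys --base-home-dir /srv/sftp --log-to-journald
```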

```shell
Usage:
  sftpgo startsubsys [flags]

Flags:
  -d, --base-home-dir string   If the user does not exist specify an alternate
                               starting directory. The home directory for a new
                               user will be:

                               <base-home-dir>/<username>

                               base-home-dir must be an absolute path.
  -c, --config-dir string      Location for SFTPGo config dir. This directory
                               should contain the "sftpgo" configuration file
                               or the configured config-file and it is used as
                               the base for files with a relative path (eg. the
                               private keys for the SFTP server, the SQLite
                               database if you use SQLite as data provider).
                               This flag can be set using SFTPGO_CONFIG_DIR
                               env var too. (default ".")
  -f, --config-file string     Name for SFTPGo configuration file. It must be
                               the name of a file stored in config-dir not the
                               absolute path to the configuration file. The
                               specified file name must have no extension we
                               automatically load JSON, YAML, TOML, HCL and
                               Java properties. Therefore if you set "sftpgo"
                               then "sftpgo.json", "sftpgo.yaml" and so on
                               are searched.
                               This flag can be set using SFTPGO_CONFIG_FILE
                               env var too. (default "sftpgo")
  -h, --help                   help for startsubsys
  -j, --log-to-journald        Send logs to journald. Only available on Linux.
                               Use:

                               $ journalctl -o verbose -f

                               To see full logs.
                               If not set, the logs will be sent to the standard
                               error
  -v, --log-verbose            Enable verbose logs. This flag can be set
                               using SFTPGO_LOG_VERBOSE env var too.
                               (default true)
  -p, --preserve-home          If the user already exists, the existing home
                               directory will not be changed
```

In this mode the `bolt` and `sqlite` providers are not usable, as the same database file cannot be shared among multiple processes; if one of these providers is configured, it will be automatically changed to the `memory` provider.

The username and home directory for the logged-in user are determined using [user.Current()](https://golang.org/pkg/os/user/#Current).
If the user who is logging in is not found within the SFTPGo data provider, it is added automatically.
You can pre-configure the users inside the SFTPGo data provider; this way you can use a different home directory, restrict permissions, and so on.
30 docs/sftpfs.md Normal file
@@ -0,0 +1,30 @@

# SFTP as storage backend

An SFTP account on another server can be used as storage for an SFTPGo account, so the remote SFTP server can be accessed in a way similar to the local file system.

Here are the supported configuration parameters:

- `Endpoint`, SSH endpoint as `host:port`
- `Username`
- `Password`
- `PrivateKey`
- `Fingerprints`
- `Prefix`

The mandatory parameters are the endpoint, the username, and a password or a private key. If you define both a password and a private key, the key is tried first. The provided private key should be PEM encoded, something like this:

```shell
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
QyNTUxOQAAACA8LWc4SahqKkAr4L3rS19w1Vt8/IAf4th2FZmf+PJ/vwAAAJBvnZIJb52S
CQAAAAtzc2gtZWQyNTUxOQAAACA8LWc4SahqKkAr4L3rS19w1Vt8/IAf4th2FZmf+PJ/vw
AAAEBE6F5Az4wzNfNYLRdG8blDwvPBYFXE8BYDi4gzIhnd9zwtZzhJqGoqQCvgvetLX3DV
W3z8gB/i2HYVmZ/48n+/AAAACW5pY29sYUBwMQECAwQ=
-----END OPENSSH PRIVATE KEY-----
```

The password and the private key are stored as ciphertext according to your [KMS configuration](./kms.md).

SHA256 fingerprints for the remote server's host keys are optional but highly recommended: if you provide one or more fingerprints, the server host key will be verified against them, and the connection will be denied if none of the provided fingerprints matches the one for the server host key.
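
You can compute the SHA256 fingerprint of a remote host key with standard OpenSSH tools, for example (hostname and key type are illustrative):

```shell
# fetch the remote host key and print its SHA256 fingerprint
ssh-keyscan -t ed25519 remote.example.com 2>/dev/null | ssh-keygen -lf -
# sample output: 256 SHA256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx remote.example.com (ED25519)
```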

By specifying a prefix, you can restrict all operations to a given path within the remote SFTP server.
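
As a rough sketch, the corresponding filesystem configuration for a user might look like the fragment below. The JSON field names, the numeric provider value, and the plain-text secret wrapper are assumptions mapped from the parameters above; check the OpenAPI schema for the authoritative definitions:

```json
{
  "filesystem": {
    "provider": 5,
    "sftpconfig": {
      "endpoint": "remote.example.com:22",
      "username": "remoteuser",
      "password": {"status": "Plain", "payload": "remotepassword"},
      "fingerprints": ["SHA256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"],
      "prefix": "/data"
    }
  }
}
```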

47 docs/ssh-commands.md Normal file
@@ -0,0 +1,47 @@

# SSH commands

Some SSH commands are implemented directly inside SFTPGo, while for others we use system commands that need to be installed and in your system's `PATH`.

For system commands we have no direct control over file creation/deletion, so there are some limitations:

- we cannot allow them if the target directory contains virtual folders or file extension filters
- system commands work only on the local filesystem
- we cannot avoid leaking real filesystem paths
- quota checking is suboptimal
- the maximum size restriction on a single file is not respected
- data at-rest encryption is not supported

If quota is enabled and SFTPGo receives a system command, the used size and number of files are checked at the command start, not while new files are created/deleted. While the command is running, the number of files is not checked; the remaining size is calculated as the difference between the maximum allowed quota and the used one, and it is checked against the bytes transferred via SSH. The command is aborted if it uploads more bytes than the remaining allowed size calculated at the command start. Anyway, we only see the bytes that the remote command sends to the local one via SSH. These bytes contain both protocol commands and files, so the size of the files differs from the size transferred via SSH: for example, a command can send compressed files, or a protocol command (a few bytes) could delete a big file. To mitigate these issues, quotas are recalculated at the command end with a full scan of the directory specified for the system command. This could be heavy for big directories. If you need system commands and quotas, you could consider disabling quota restrictions and periodically updating quota usage yourself using the REST API.

For these reasons we should limit system command usage as much as possible. We currently support the following system commands:

- `git-receive-pack`, `git-upload-pack`, `git-upload-archive`. These commands enable support for Git repositories over SSH. They need to be installed and in your system's `PATH` (see the example after this list).
- `rsync`. The `rsync` command needs to be installed and in your system's `PATH`.
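
For example, with the Git commands available on the server, a repository inside a user's home directory could be cloned like this; the hostname is illustrative and the port shown is SFTPGo's default SFTP binding, so adjust it to your configuration:

```shell
git clone ssh://user@sftpgo.example.com:2022/repo.git
```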

At least the following permissions are required to be able to run system commands:

- `list`
- `download`
- `upload`
- `create_dirs`
- `overwrite`
- `delete`

For `rsync` we cannot prevent it from creating symlinks, so if the `create_symlinks` permission is granted we add the `--safe-links` option, if it is not already set, to the received `rsync` command. This should prevent creating symlinks that point outside the home directory.
If the user cannot create symlinks, we add the `--munge-links` option, if it is not already set, to the received `rsync` command. This should make symlinks unusable (but manually recoverable).

SFTPGo supports the following built-in SSH commands (a usage sketch follows the list):

- `scp`, SFTPGo implements the SCP protocol, so we can support it for cloud filesystems too and avoid the limitations of the other system commands. SCP between two remote hosts is supported using the `-3` scp option. Wildcard expansion is not supported.
- `md5sum`, `sha1sum`, `sha256sum`, `sha384sum`, `sha512sum`. Useful to check message digests for uploaded files. These commands work with any storage backend, but keep in mind that to calculate the hash we need to read the whole file: for remote backends this means downloading the file, and for the encrypted backend this means decrypting the file.
- `cd`, `pwd`. Some SFTP clients do not support the SFTP SSH_FXP_REALPATH packet type, so they use the `cd` and `pwd` SSH commands to get the initial directory. Currently `cd` does nothing and `pwd` always returns the `/` path.
- `sftpgo-copy`. This is a built-in copy implementation. It allows server-side copy for files and directories. The first argument is the source file/directory and the second one is the destination file/directory, for example `sftpgo-copy <src> <dst>`. The command will fail if the destination exists. Copy for directories spanning virtual folders is not supported. Only the local filesystem is supported: recursive copy for Cloud Storage filesystems requires a new request for every file in any case, so a real server-side copy is not possible.
- `sftpgo-remove`. This is a built-in remove implementation. It allows removing single files and recursively removing directories. The first argument is the file/directory to remove, for example `sftpgo-remove <dst>`. Only the local filesystem is supported: recursive remove for Cloud Storage filesystems requires a new request for every file in any case, so a server-side remove is not possible.
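
These built-in commands can be invoked through a regular SSH client, for instance (hostname, port, and paths are illustrative):

```shell
# verify the digest of an uploaded file
ssh -p 2022 user@sftpgo.example.com sha256sum /docs/report.pdf
# server-side copy of a directory
ssh -p 2022 user@sftpgo.example.com sftpgo-copy /docs /docs-backup
```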

The following SSH commands are enabled by default:

- `md5sum`
- `sha1sum`
- `cd`
- `pwd`
- `scp`