mirror of https://github.com/drakkan/sftpgo.git
synced 2025-12-07 14:50:55 +03:00
Compare commits: 786 commits
.github/FUNDING.yml (vendored, new file, 12 lines)
@@ -0,0 +1,12 @@
# These are supported funding model platforms

github: [drakkan] # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: # Replace with a single Ko-fi username
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']
.github/dependabot.yml (vendored, new file, 20 lines)
@@ -0,0 +1,20 @@
version: 2

updates:
  - package-ecosystem: "gomod"
    directory: "/"
    schedule:
      interval: "weekly"
    open-pull-requests-limit: 2

  - package-ecosystem: "docker"
    directory: "/"
    schedule:
      interval: "weekly"
    open-pull-requests-limit: 2

  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
    open-pull-requests-limit: 2
.github/workflows/.editorconfig (vendored, new file, 2 lines)
@@ -0,0 +1,2 @@
[*.yml]
indent_size = 2
.github/workflows/development.yml (vendored, new file, 459 lines)
@@ -0,0 +1,459 @@
name: CI

on:
  push:
    branches: [2.2.x]
  pull_request:

jobs:
  test-deploy:
    name: Test and deploy
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        go: [1.17]
        os: [ubuntu-18.04, macos-10.15]
        upload-coverage: [true]
        include:
          - go: 1.17
            os: windows-2019
            upload-coverage: false

    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0

      - name: Set up Go
        uses: actions/setup-go@v3
        with:
          go-version: ${{ matrix.go }}

      - name: Build for Linux/macOS x86_64
        if: startsWith(matrix.os, 'windows-') != true
        run: |
          go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
          cd tests/eventsearcher
          go build -trimpath -ldflags "-s -w" -o eventsearcher
          cd -

      - name: Build for macOS arm64
        if: startsWith(matrix.os, 'macos-') == true
        run: CGO_ENABLED=1 GOOS=darwin GOARCH=arm64 SDKROOT=$(xcrun --sdk macosx --show-sdk-path) go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo_arm64

      - name: Build for Windows
        if: startsWith(matrix.os, 'windows-')
        run: |
          $GIT_COMMIT = (git describe --always --dirty) | Out-String
          $DATE_TIME = ([datetime]::Now.ToUniversalTime().toString("yyyy-MM-ddTHH:mm:ssZ")) | Out-String
          $LATEST_TAG = ((git describe --tags $(git rev-list --tags --max-count=1)) | Out-String).Trim()
          $REV_LIST=$LATEST_TAG+"..HEAD"
          $COMMITS_FROM_TAG= ((git rev-list $REV_LIST --count) | Out-String).Trim()
          $FILE_VERSION = $LATEST_TAG.substring(1) + "." + $COMMITS_FROM_TAG
          go install github.com/tc-hib/go-winres@latest
          go-winres simply --arch amd64 --product-version $LATEST_TAG-dev-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
          go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o sftpgo.exe
          cd tests/eventsearcher
          go build -trimpath -ldflags "-s -w" -o eventsearcher.exe
          cd ../..
          mkdir arm64
          $Env:CGO_ENABLED='0'
          $Env:GOOS='windows'
          $Env:GOARCH='arm64'
          go-winres simply --arch arm64 --product-version $LATEST_TAG-dev-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
          go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o .\arm64\sftpgo.exe
          mkdir x86
          $Env:GOARCH='386'
          go-winres simply --arch 386 --product-version $LATEST_TAG-dev-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
          go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o .\x86\sftpgo.exe
          Remove-Item Env:\CGO_ENABLED
          Remove-Item Env:\GOOS
          Remove-Item Env:\GOARCH

      - name: Run test cases using SQLite provider
        run: go test -v -p 1 -timeout 15m ./... -coverprofile=coverage.txt -covermode=atomic

      - name: Upload coverage to Codecov
        if: ${{ matrix.upload-coverage }}
        uses: codecov/codecov-action@v3
        with:
          file: ./coverage.txt
          fail_ci_if_error: false

      - name: Run test cases using bolt provider
        run: |
          go test -v -p 1 -timeout 2m ./config -covermode=atomic
          go test -v -p 1 -timeout 5m ./common -covermode=atomic
          go test -v -p 1 -timeout 5m ./httpd -covermode=atomic
          go test -v -p 1 -timeout 8m ./sftpd -covermode=atomic
          go test -v -p 1 -timeout 5m ./ftpd -covermode=atomic
          go test -v -p 1 -timeout 5m ./webdavd -covermode=atomic
          go test -v -p 1 -timeout 2m ./telemetry -covermode=atomic
          go test -v -p 1 -timeout 2m ./mfa -covermode=atomic
        env:
          SFTPGO_DATA_PROVIDER__DRIVER: bolt
          SFTPGO_DATA_PROVIDER__NAME: 'sftpgo_bolt.db'

      - name: Run test cases using memory provider
        run: go test -v -p 1 -timeout 15m ./... -covermode=atomic
        env:
          SFTPGO_DATA_PROVIDER__DRIVER: memory
          SFTPGO_DATA_PROVIDER__NAME: ''

      - name: Prepare build artifact for macOS
        if: startsWith(matrix.os, 'macos-') == true
        run: |
          mkdir -p output/{init,bash_completion,zsh_completion}
          cp sftpgo output/sftpgo_x86_64
          cp sftpgo_arm64 output/
          cp sftpgo.json output/
          cp -r templates output/
          cp -r static output/
          cp -r openapi output/
          cp init/com.github.drakkan.sftpgo.plist output/init/
          ./sftpgo gen completion bash > output/bash_completion/sftpgo
          ./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
          ./sftpgo gen man -d output/man/man1
          gzip output/man/man1/*

      - name: Prepare Windows installer
        if: ${{ startsWith(matrix.os, 'windows-') && github.event_name != 'pull_request' }}
        run: |
          Remove-Item -LiteralPath "output" -Force -Recurse -ErrorAction Ignore
          mkdir output
          copy .\sftpgo.exe .\output
          copy .\sftpgo.json .\output
          copy .\sftpgo.db .\output
          copy .\LICENSE .\output\LICENSE.txt
          mkdir output\templates
          xcopy .\templates .\output\templates\ /E
          mkdir output\static
          xcopy .\static .\output\static\ /E
          mkdir output\openapi
          xcopy .\openapi .\output\openapi\ /E
          $LATEST_TAG = ((git describe --tags $(git rev-list --tags --max-count=1)) | Out-String).Trim()
          $REV_LIST=$LATEST_TAG+"..HEAD"
          $COMMITS_FROM_TAG= ((git rev-list $REV_LIST --count) | Out-String).Trim()
          $Env:SFTPGO_ISS_DEV_VERSION = $LATEST_TAG + "." + $COMMITS_FROM_TAG
          $CERT_PATH=(Get-Location -PSProvider FileSystem).ProviderPath + "\cert.pfx"
          [IO.File]::WriteAllBytes($CERT_PATH,[System.Convert]::FromBase64String($Env:CERT_DATA))
          certutil -f -p "$Env:CERT_PASS" -importpfx MY "$CERT_PATH"
          rm "$CERT_PATH"
          & 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.17763.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\sftpgo.exe
          & 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.17763.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\arm64\sftpgo.exe
          & 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.17763.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\x86\sftpgo.exe
          $INNO_S='/Ssigntool=$qC:/Program Files (x86)/Windows Kits/10/bin/10.0.17763.0/x86/signtool.exe$q sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n $qNicola Murino$q /d $qSFTPGo$q $f'
          iscc "$INNO_S" .\windows-installer\sftpgo.iss

          rm .\output\sftpgo.exe
          rm .\output\sftpgo.db
          copy .\arm64\sftpgo.exe .\output
          (Get-Content .\output\sftpgo.json).replace('"sqlite"', '"bolt"') | Set-Content .\output\sftpgo.json
          $Env:SFTPGO_DATA_PROVIDER__DRIVER='bolt'
          $Env:SFTPGO_DATA_PROVIDER__NAME='.\output\sftpgo.db'
          .\sftpgo.exe initprovider
          Remove-Item Env:\SFTPGO_DATA_PROVIDER__DRIVER
          Remove-Item Env:\SFTPGO_DATA_PROVIDER__NAME
          $Env:SFTPGO_ISS_ARCH='arm64'
          iscc "$INNO_S" .\windows-installer\sftpgo.iss

          rm .\output\sftpgo.exe
          copy .\x86\sftpgo.exe .\output
          $Env:SFTPGO_ISS_ARCH='x86'
          iscc "$INNO_S" .\windows-installer\sftpgo.iss
          certutil -delstore MY "Nicola Murino"
        env:
          CERT_DATA: ${{ secrets.CERT_DATA }}
          CERT_PASS: ${{ secrets.CERT_PASS }}

      - name: Upload Windows installer x86_64 artifact
        if: ${{ startsWith(matrix.os, 'windows-') && github.event_name != 'pull_request' }}
        uses: actions/upload-artifact@v3
        with:
          name: sftpgo_windows_installer_x86_64
          path: ./sftpgo_windows_x86_64.exe

      - name: Upload Windows installer arm64 artifact
        if: ${{ startsWith(matrix.os, 'windows-') && github.event_name != 'pull_request' }}
        uses: actions/upload-artifact@v3
        with:
          name: sftpgo_windows_installer_arm64
          path: ./sftpgo_windows_arm64.exe

      - name: Upload Windows installer x86 artifact
        if: ${{ startsWith(matrix.os, 'windows-') && github.event_name != 'pull_request' }}
        uses: actions/upload-artifact@v3
        with:
          name: sftpgo_windows_installer_x86
          path: ./sftpgo_windows_x86.exe

      - name: Prepare build artifact for Windows
        if: startsWith(matrix.os, 'windows-')
        run: |
          Remove-Item -LiteralPath "output" -Force -Recurse -ErrorAction Ignore
          mkdir output
          copy .\sftpgo.exe .\output
          mkdir output\arm64
          copy .\arm64\sftpgo.exe .\output\arm64
          mkdir output\x86
          copy .\x86\sftpgo.exe .\output\x86
          copy .\sftpgo.json .\output
          (Get-Content .\output\sftpgo.json).replace('"sqlite"', '"bolt"') | Set-Content .\output\sftpgo.json
          mkdir output\templates
          xcopy .\templates .\output\templates\ /E
          mkdir output\static
          xcopy .\static .\output\static\ /E
          mkdir output\openapi
          xcopy .\openapi .\output\openapi\ /E

      - name: Upload build artifact
        if: startsWith(matrix.os, 'ubuntu-') != true
        uses: actions/upload-artifact@v3
        with:
          name: sftpgo-${{ matrix.os }}-go-${{ matrix.go }}
          path: output

  test-goarch-386:
    name: Run test cases on 32-bit arch
    runs-on: ubuntu-18.04

    steps:
      - uses: actions/checkout@v3

      - name: Set up Go
        uses: actions/setup-go@v3
        with:
          go-version: 1.17

      - name: Build
        run: |
          cd tests/eventsearcher
          go build -trimpath -ldflags "-s -w" -o eventsearcher
          cd -
        env:
          GOARCH: 386

      - name: Run test cases
        run: go test -v -p 1 -timeout 15m ./... -covermode=atomic
        env:
          SFTPGO_DATA_PROVIDER__DRIVER: memory
          SFTPGO_DATA_PROVIDER__NAME: ''
          GOARCH: 386

  test-postgresql-mysql-crdb:
    name: Test with PgSQL/MySQL/Cockroach
    runs-on: ubuntu-18.04

    services:
      postgres:
        image: postgres:latest
        env:
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: sftpgo
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432

      mariadb:
        image: mariadb:latest
        env:
          MYSQL_ROOT_PASSWORD: mysql
          MYSQL_DATABASE: sftpgo
          MYSQL_USER: sftpgo
          MYSQL_PASSWORD: sftpgo
        options: >-
          --health-cmd "mysqladmin status -h 127.0.0.1 -P 3306 -u root -p$MYSQL_ROOT_PASSWORD"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 6
        ports:
          - 3307:3306

    steps:
      - uses: actions/checkout@v3

      - name: Set up Go
        uses: actions/setup-go@v3
        with:
          go-version: 1.17

      - name: Build
        run: |
          cd tests/eventsearcher
          go build -trimpath -ldflags "-s -w" -o eventsearcher
          cd -

      - name: Run tests using PostgreSQL provider
        run: |
          go test -v -p 1 -timeout 15m ./... -covermode=atomic
        env:
          SFTPGO_DATA_PROVIDER__DRIVER: postgresql
          SFTPGO_DATA_PROVIDER__NAME: sftpgo
          SFTPGO_DATA_PROVIDER__HOST: localhost
          SFTPGO_DATA_PROVIDER__PORT: 5432
          SFTPGO_DATA_PROVIDER__USERNAME: postgres
          SFTPGO_DATA_PROVIDER__PASSWORD: postgres

      - name: Run tests using MySQL provider
        run: |
          go test -v -p 1 -timeout 15m ./... -covermode=atomic
        env:
          SFTPGO_DATA_PROVIDER__DRIVER: mysql
          SFTPGO_DATA_PROVIDER__NAME: sftpgo
          SFTPGO_DATA_PROVIDER__HOST: localhost
          SFTPGO_DATA_PROVIDER__PORT: 3307
          SFTPGO_DATA_PROVIDER__USERNAME: sftpgo
          SFTPGO_DATA_PROVIDER__PASSWORD: sftpgo

      - name: Run tests using CockroachDB provider
        run: |
          docker run --rm --name crdb --health-cmd "curl -I http://127.0.0.1:8080" --health-interval 10s --health-timeout 5s --health-retries 6 -p 26257:26257 -d cockroachdb/cockroach:latest start-single-node --insecure --listen-addr 0.0.0.0:26257
          docker exec crdb cockroach sql --insecure -e 'create database "sftpgo"'
          go test -v -p 1 -timeout 15m ./... -covermode=atomic
          docker stop crdb
        env:
          SFTPGO_DATA_PROVIDER__DRIVER: cockroachdb
          SFTPGO_DATA_PROVIDER__NAME: sftpgo
          SFTPGO_DATA_PROVIDER__HOST: localhost
          SFTPGO_DATA_PROVIDER__PORT: 26257
          SFTPGO_DATA_PROVIDER__USERNAME: root
          SFTPGO_DATA_PROVIDER__PASSWORD:

  build-linux-packages:
    name: Build Linux packages
    runs-on: ubuntu-18.04
    strategy:
      matrix:
        include:
          - arch: amd64
            go: 1.17
            go-arch: amd64
          - arch: aarch64
            distro: ubuntu18.04
            go: go1.17.9
            go-arch: arm64
          - arch: ppc64le
            distro: ubuntu18.04
            go: go1.17.9
            go-arch: ppc64le
          - arch: armv7
            distro: ubuntu18.04
            go: go1.17.9
            go-arch: arm7
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
      - name: Set up Go
        if: ${{ matrix.arch == 'amd64' }}
        uses: actions/setup-go@v3
        with:
          go-version: ${{ matrix.go }}

      - name: Build on amd64
        if: ${{ matrix.arch == 'amd64' }}
        run: |
          go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
          mkdir -p output/{init,bash_completion,zsh_completion}
          cp sftpgo.json output/
          cp -r templates output/
          cp -r static output/
          cp -r openapi output/
          cp init/sftpgo.service output/init/
          ./sftpgo gen completion bash > output/bash_completion/sftpgo
          ./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
          ./sftpgo gen man -d output/man/man1
          gzip output/man/man1/*
          cp sftpgo output/

      - uses: uraimo/run-on-arch-action@v2
        if: ${{ matrix.arch != 'amd64' }}
        name: Build for ${{ matrix.arch }}
        id: build
        with:
          arch: ${{ matrix.arch }}
          distro: ${{ matrix.distro }}
          setup: |
            mkdir -p "${PWD}/output"
          dockerRunArgs: |
            --volume "${PWD}/output:/output"
          shell: /bin/bash
          install: |
            apt-get update -q -y
            apt-get install -q -y curl gcc git
            if [ ${{ matrix.go }} == 'latest' ]
            then
              GO_VERSION=$(curl -L https://go.dev/VERSION?m=text)
            else
              GO_VERSION=${{ matrix.go }}
            fi
            GO_DOWNLOAD_ARCH=${{ matrix.go-arch }}
            if [ ${{ matrix.arch}} == 'armv7' ]
            then
              GO_DOWNLOAD_ARCH=armv6l
            fi
            curl --retry 5 --retry-delay 2 --connect-timeout 10 -o go.tar.gz -L https://go.dev/dl/${GO_VERSION}.linux-${GO_DOWNLOAD_ARCH}.tar.gz
            tar -C /usr/local -xzf go.tar.gz
          run: |
            export PATH=$PATH:/usr/local/go/bin
            if [ ${{ matrix.arch}} == 'armv7' ]
            then
              export GOARM=7
            fi
            go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
            mkdir -p output/{init,bash_completion,zsh_completion}
            cp sftpgo.json output/
            cp -r templates output/
            cp -r static output/
            cp -r openapi output/
            cp init/sftpgo.service output/init/
            ./sftpgo gen completion bash > output/bash_completion/sftpgo
            ./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
            ./sftpgo gen man -d output/man/man1
            gzip output/man/man1/*
            cp sftpgo output/

      - name: Upload build artifact
        uses: actions/upload-artifact@v3
        with:
          name: sftpgo-linux-${{ matrix.arch }}-go-${{ matrix.go }}
          path: output

      - name: Build Packages
        id: build_linux_pkgs
        run: |
          export NFPM_ARCH=${{ matrix.go-arch }}
          cd pkgs
          ./build.sh
          PKG_VERSION=$(cat dist/version)
          echo "::set-output name=pkg-version::${PKG_VERSION}"

      - name: Upload Debian Package
        uses: actions/upload-artifact@v3
        with:
          name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-${{ matrix.go-arch }}-deb
          path: pkgs/dist/deb/*

      - name: Upload RPM Package
        uses: actions/upload-artifact@v3
        with:
          name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-${{ matrix.go-arch }}-rpm
          path: pkgs/dist/rpm/*

  golangci-lint:
    name: golangci-lint
    runs-on: ubuntu-latest
    steps:
      - name: Set up Go
        uses: actions/setup-go@v3
        with:
          go-version: 1.17
      - uses: actions/checkout@v3
      - name: Run golangci-lint
        uses: golangci/golangci-lint-action@v3
        with:
          version: latest
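The test steps above switch data providers purely through environment variables: SFTPGo's configuration loader maps variables of the form SFTPGO_<SECTION>__<KEY> (double underscore as the nesting separator) onto the corresponding sftpgo.json keys, so the same test suite can target a different backend without editing the config file. A minimal local sketch of the same override, assuming a checkout of the repository:

```sh
# Point the test suite at the bolt provider instead of the default sqlite,
# exactly as the "Run test cases using bolt provider" step does.
export SFTPGO_DATA_PROVIDER__DRIVER=bolt
export SFTPGO_DATA_PROVIDER__NAME=sftpgo_bolt.db
go test -v -p 1 -timeout 15m ./... -covermode=atomic
```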
.github/workflows/docker.yml (vendored, new file, 162 lines)
@@ -0,0 +1,162 @@
name: Docker

on:
  #schedule:
  #  - cron: '0 4 * * *' # everyday at 4:00 AM UTC
  push:
    branches:
      - 2.2.x
    tags:
      - v*
  pull_request:

jobs:
  build:
    name: Build
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os:
          - ubuntu-latest
        docker_pkg:
          - debian
          - alpine
        optional_deps:
          - true
          - false
        include:
          - os: ubuntu-latest
            docker_pkg: distroless
            optional_deps: false
    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Gather image information
        id: info
        run: |
          VERSION=noop
          DOCKERFILE=Dockerfile
          MINOR=""
          MAJOR=""
          if [ "${{ github.event_name }}" = "schedule" ]; then
            VERSION=nightly
          elif [[ $GITHUB_REF == refs/tags/* ]]; then
            VERSION=${GITHUB_REF#refs/tags/}
          elif [[ $GITHUB_REF == refs/heads/* ]]; then
            VERSION=$(echo ${GITHUB_REF#refs/heads/} | sed -r 's#/+#-#g')
            if [ "${{ github.event.repository.default_branch }}" = "$VERSION" ]; then
              VERSION=edge
            fi
          elif [[ $GITHUB_REF == refs/pull/* ]]; then
            VERSION=pr-${{ github.event.number }}
          fi
          if [[ $VERSION =~ ^v[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
            MINOR=${VERSION%.*}
            MAJOR=${MINOR%.*}
          fi
          VERSION_SLIM="${VERSION}-slim"
          if [[ $DOCKER_PKG == alpine ]]; then
            VERSION="${VERSION}-alpine"
            VERSION_SLIM="${VERSION}-slim"
            DOCKERFILE=Dockerfile.alpine
          elif [[ $DOCKER_PKG == distroless ]]; then
            VERSION="${VERSION}-distroless"
            VERSION_SLIM="${VERSION}-slim"
            DOCKERFILE=Dockerfile.distroless
          fi
          DOCKER_IMAGES=("drakkan/sftpgo" "ghcr.io/drakkan/sftpgo")
          TAGS="${DOCKER_IMAGES[0]}:${VERSION}"
          TAGS_SLIM="${DOCKER_IMAGES[0]}:${VERSION_SLIM}"

          for DOCKER_IMAGE in ${DOCKER_IMAGES[@]}; do
            if [[ ${DOCKER_IMAGE} != ${DOCKER_IMAGES[0]} ]]; then
              TAGS="${TAGS},${DOCKER_IMAGE}:${VERSION}"
              TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:${VERSION_SLIM}"
            fi
            if [[ $GITHUB_REF == refs/tags/* ]]; then
              if [[ $DOCKER_PKG == debian ]]; then
                if [[ -n $MAJOR && -n $MINOR ]]; then
                  TAGS="${TAGS},${DOCKER_IMAGE}:${MINOR},${DOCKER_IMAGE}:${MAJOR}"
                  TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:${MINOR}-slim,${DOCKER_IMAGE}:${MAJOR}-slim"
                fi
                TAGS="${TAGS},${DOCKER_IMAGE}:latest"
                TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:slim"
              elif [[ $DOCKER_PKG == distroless ]]; then
                if [[ -n $MAJOR && -n $MINOR ]]; then
                  TAGS="${TAGS},${DOCKER_IMAGE}:${MINOR}-distroless,${DOCKER_IMAGE}:${MAJOR}-distroless"
                  TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:${MINOR}-distroless-slim,${DOCKER_IMAGE}:${MAJOR}-distroless-slim"
                fi
                TAGS="${TAGS},${DOCKER_IMAGE}:distroless"
                TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:distroless-slim"
              else
                if [[ -n $MAJOR && -n $MINOR ]]; then
                  TAGS="${TAGS},${DOCKER_IMAGE}:${MINOR}-alpine,${DOCKER_IMAGE}:${MAJOR}-alpine"
                  TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:${MINOR}-alpine-slim,${DOCKER_IMAGE}:${MAJOR}-alpine-slim"
                fi
                TAGS="${TAGS},${DOCKER_IMAGE}:alpine"
                TAGS_SLIM="${TAGS_SLIM},${DOCKER_IMAGE}:alpine-slim"
              fi
            fi
          done

          if [[ $OPTIONAL_DEPS == true ]]; then
            echo ::set-output name=version::${VERSION}
            echo ::set-output name=tags::${TAGS}
            echo ::set-output name=full::true
          else
            echo ::set-output name=version::${VERSION_SLIM}
            echo ::set-output name=tags::${TAGS_SLIM}
            echo ::set-output name=full::false
          fi
          echo ::set-output name=dockerfile::${DOCKERFILE}
          echo ::set-output name=created::$(date -u +'%Y-%m-%dT%H:%M:%SZ')
          echo ::set-output name=sha::${GITHUB_SHA::8}
        env:
          DOCKER_PKG: ${{ matrix.docker_pkg }}
          OPTIONAL_DEPS: ${{ matrix.optional_deps }}

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v1

      - name: Set up builder
        uses: docker/setup-buildx-action@v1
        id: builder

      - name: Login to Docker Hub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
        if: ${{ github.event_name != 'pull_request' }}

      - name: Login to GitHub Container Registry
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}
        if: ${{ github.event_name != 'pull_request' }}

      - name: Build and push
        uses: docker/build-push-action@v2
        with:
          context: .
          builder: ${{ steps.builder.outputs.name }}
          file: ./${{ steps.info.outputs.dockerfile }}
          platforms: linux/amd64,linux/arm64,linux/ppc64le
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.info.outputs.tags }}
          build-args: |
            COMMIT_SHA=${{ steps.info.outputs.sha }}
            INSTALL_OPTIONAL_PACKAGES=${{ steps.info.outputs.full }}
          labels: |
            org.opencontainers.image.title=SFTPGo
            org.opencontainers.image.description=Fully featured and highly configurable SFTP server with optional HTTP, FTP/S and WebDAV support
            org.opencontainers.image.url=https://github.com/drakkan/sftpgo
            org.opencontainers.image.documentation=https://github.com/drakkan/sftpgo/blob/${{ github.sha }}/docker/README.md
            org.opencontainers.image.source=https://github.com/drakkan/sftpgo
            org.opencontainers.image.version=${{ steps.info.outputs.version }}
            org.opencontainers.image.created=${{ steps.info.outputs.created }}
            org.opencontainers.image.revision=${{ github.sha }}
            org.opencontainers.image.licenses=AGPL-3.0
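The tag computation in the "Gather image information" step is easiest to follow with a concrete trace. A sketch, assuming a push of tag v2.2.0 with docker_pkg=debian and optional_deps=true (values taken from the script above):

```sh
VERSION=v2.2.0
MINOR=${VERSION%.*}   # strips the last dot-component: v2.2
MAJOR=${MINOR%.*}     # strips again: v2
# After the loop over both registries, TAGS expands to:
#   drakkan/sftpgo:v2.2.0,drakkan/sftpgo:v2.2,drakkan/sftpgo:v2,drakkan/sftpgo:latest,
#   ghcr.io/drakkan/sftpgo:v2.2.0,ghcr.io/drakkan/sftpgo:v2.2,ghcr.io/drakkan/sftpgo:v2,ghcr.io/drakkan/sftpgo:latest
echo "$MINOR $MAJOR"
```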
595
.github/workflows/release.yml
vendored
Normal file
595
.github/workflows/release.yml
vendored
Normal file
@@ -0,0 +1,595 @@
|
||||
name: Release
|
||||
|
||||
on:
|
||||
push:
|
||||
tags: 'v*'
|
||||
|
||||
env:
|
||||
GO_VERSION: 1.17.9
|
||||
|
||||
jobs:
|
||||
prepare-sources-with-deps:
|
||||
name: Prepare sources with deps
|
||||
runs-on: ubuntu-18.04
|
||||
steps:
|
||||
- uses: actions/checkout@v3
|
||||
- name: Set up Go
|
||||
uses: actions/setup-go@v3
|
||||
with:
|
||||
go-version: ${{ env.GO_VERSION }}
|
||||
|
||||
- name: Get SFTPGo version
|
||||
id: get_version
|
||||
run: echo ::set-output name=VERSION::${GITHUB_REF/refs\/tags\//}
|
||||
|
||||
- name: Prepare release
|
||||
run: |
|
||||
go mod vendor
|
||||
echo "${SFTPGO_VERSION}" > VERSION.txt
|
||||
tar cJvf sftpgo_${SFTPGO_VERSION}_src_with_deps.tar.xz *
|
||||
env:
|
||||
SFTPGO_VERSION: ${{ steps.get_version.outputs.VERSION }}
|
||||
|
||||
- name: Upload build artifact
|
||||
uses: actions/upload-artifact@v3
|
||||
with:
|
||||
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_src_with_deps.tar.xz
|
||||
path: ./sftpgo_${{ steps.get_version.outputs.VERSION }}_src_with_deps.tar.xz
|
||||
retention-days: 1
|
||||
|
||||
prepare-window-mac:
|
||||
name: Prepare binaries
|
||||
runs-on: ${{ matrix.os }}
|
||||
strategy:
|
||||
matrix:
|
||||
os: [macos-10.15, windows-2019]
|
||||
|
||||
steps:
|
||||
- uses: actions/checkout@v3
|
||||
- name: Set up Go
|
||||
uses: actions/setup-go@v3
|
||||
with:
|
||||
go-version: ${{ env.GO_VERSION }}
|
||||
|
||||
- name: Get SFTPGo version
|
||||
id: get_version
|
||||
run: echo ::set-output name=VERSION::${GITHUB_REF/refs\/tags\//}
|
||||
shell: bash
|
||||
|
||||
- name: Get OS name
|
||||
id: get_os_name
|
||||
run: |
|
||||
if [[ $MATRIX_OS =~ ^macos.* ]]
|
||||
then
|
||||
echo ::set-output name=OS::macOS
|
||||
else
|
||||
echo ::set-output name=OS::windows
|
||||
fi
|
||||
shell: bash
|
||||
env:
|
||||
MATRIX_OS: ${{ matrix.os }}
|
||||
|
||||
- name: Build for macOS x86_64
|
||||
if: startsWith(matrix.os, 'windows-') != true
|
||||
run: go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
|
||||
|
||||
- name: Build for macOS arm64
|
||||
if: startsWith(matrix.os, 'macos-') == true
|
||||
run: CGO_ENABLED=1 GOOS=darwin GOARCH=arm64 SDKROOT=$(xcrun --sdk macosx --show-sdk-path) go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo_arm64
|
||||
|
||||
- name: Build for Windows
|
||||
if: startsWith(matrix.os, 'windows-')
|
||||
run: |
|
||||
$GIT_COMMIT = (git describe --always --dirty) | Out-String
|
||||
$DATE_TIME = ([datetime]::Now.ToUniversalTime().toString("yyyy-MM-ddTHH:mm:ssZ")) | Out-String
|
||||
$FILE_VERSION = $Env:SFTPGO_VERSION.substring(1) + ".0"
|
||||
go install github.com/tc-hib/go-winres@latest
|
||||
go-winres simply --arch amd64 --product-version $Env:SFTPGO_VERSION-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
|
||||
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o sftpgo.exe
|
||||
mkdir arm64
|
||||
$Env:CGO_ENABLED='0'
|
||||
$Env:GOOS='windows'
|
||||
$Env:GOARCH='arm64'
|
||||
go-winres simply --arch arm64 --product-version $Env:SFTPGO_VERSION-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
|
||||
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o .\arm64\sftpgo.exe
|
||||
mkdir x86
|
||||
$Env:GOARCH='386'
|
||||
go-winres simply --arch 386 --product-version $Env:SFTPGO_VERSION-$GIT_COMMIT --file-version $FILE_VERSION --file-description "SFTPGo server" --product-name SFTPGo --copyright "AGPL-3.0" --original-filename sftpgo.exe --icon .\windows-installer\icon.ico
|
||||
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=$GIT_COMMIT -X github.com/drakkan/sftpgo/v2/version.date=$DATE_TIME" -o .\x86\sftpgo.exe
|
||||
Remove-Item Env:\CGO_ENABLED
|
||||
Remove-Item Env:\GOOS
|
||||
Remove-Item Env:\GOARCH
|
||||
env:
|
||||
SFTPGO_VERSION: ${{ steps.get_version.outputs.VERSION }}
|
||||
|
||||
- name: Initialize data provider
|
||||
run: ./sftpgo initprovider
|
||||
shell: bash
|
||||
|
||||
- name: Prepare Release for macOS
|
||||
if: startsWith(matrix.os, 'macos-')
|
||||
run: |
|
||||
mkdir -p output/{init,sqlite,bash_completion,zsh_completion}
|
||||
echo "For documentation please take a look here:" > output/README.txt
|
||||
echo "" >> output/README.txt
|
||||
echo "https://github.com/drakkan/sftpgo/blob/${SFTPGO_VERSION}/README.md" >> output/README.txt
|
||||
cp LICENSE output/
|
||||
cp sftpgo output/
|
||||
cp sftpgo.json output/
|
||||
cp sftpgo.db output/sqlite/
|
||||
cp -r static output/
|
||||
cp -r openapi output/
|
||||
cp -r templates output/
|
||||
cp init/com.github.drakkan.sftpgo.plist output/init/
|
||||
./sftpgo gen completion bash > output/bash_completion/sftpgo
|
||||
./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
|
||||
./sftpgo gen man -d output/man/man1
|
||||
gzip output/man/man1/*
|
||||
cd output
|
||||
tar cJvf ../sftpgo_${SFTPGO_VERSION}_${OS}_x86_64.tar.xz *
|
||||
cd ..
|
||||
cp sftpgo_arm64 output/sftpgo
|
||||
cd output
|
||||
tar cJvf ../sftpgo_${SFTPGO_VERSION}_${OS}_arm64.tar.xz *
|
||||
cd ..
|
||||
env:
|
||||
SFTPGO_VERSION: ${{ steps.get_version.outputs.VERSION }}
|
||||
OS: ${{ steps.get_os_name.outputs.OS }}
|
||||
|
||||
- name: Prepare Release for Windows
|
||||
if: startsWith(matrix.os, 'windows-')
|
||||
run: |
|
||||
mkdir output
|
||||
copy .\sftpgo.exe .\output
|
||||
copy .\sftpgo.json .\output
|
||||
copy .\sftpgo.db .\output
|
||||
copy .\LICENSE .\output\LICENSE.txt
|
||||
mkdir output\templates
|
||||
xcopy .\templates .\output\templates\ /E
|
||||
mkdir output\static
|
||||
xcopy .\static .\output\static\ /E
|
||||
mkdir output\openapi
|
||||
xcopy .\openapi .\output\openapi\ /E
|
||||
$CERT_PATH=(Get-Location -PSProvider FileSystem).ProviderPath + "\cert.pfx"
|
||||
[IO.File]::WriteAllBytes($CERT_PATH,[System.Convert]::FromBase64String($Env:CERT_DATA))
|
||||
certutil -f -p "$Env:CERT_PASS" -importpfx MY "$CERT_PATH"
|
||||
rm "$CERT_PATH"
|
||||
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.17763.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\sftpgo.exe
|
||||
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.17763.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\arm64\sftpgo.exe
|
||||
& 'C:/Program Files (x86)/Windows Kits/10/bin/10.0.17763.0/x86/signtool.exe' sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n "Nicola Murino" /d "SFTPGo" .\x86\sftpgo.exe
|
||||
$INNO_S='/Ssigntool=$qC:/Program Files (x86)/Windows Kits/10/bin/10.0.17763.0/x86/signtool.exe$q sign /sm /tr http://timestamp.sectigo.com /td sha256 /fd sha256 /n $qNicola Murino$q /d $qSFTPGo$q $f'
|
||||
iscc "$INNO_S" .\windows-installer\sftpgo.iss
|
||||
|
||||
rm .\output\sftpgo.exe
|
||||
rm .\output\sftpgo.db
|
||||
copy .\arm64\sftpgo.exe .\output
|
||||
(Get-Content .\output\sftpgo.json).replace('"sqlite"', '"bolt"') | Set-Content .\output\sftpgo.json
|
||||
$Env:SFTPGO_DATA_PROVIDER__DRIVER='bolt'
|
||||
$Env:SFTPGO_DATA_PROVIDER__NAME='.\output\sftpgo.db'
|
||||
.\sftpgo.exe initprovider
|
||||
Remove-Item Env:\SFTPGO_DATA_PROVIDER__DRIVER
|
||||
Remove-Item Env:\SFTPGO_DATA_PROVIDER__NAME
|
||||
$Env:SFTPGO_ISS_ARCH='arm64'
|
||||
iscc "$INNO_S" .\windows-installer\sftpgo.iss
|
||||
|
||||
rm .\output\sftpgo.exe
|
||||
copy .\x86\sftpgo.exe .\output
|
||||
$Env:SFTPGO_ISS_ARCH='x86'
|
||||
iscc "$INNO_S" .\windows-installer\sftpgo.iss
|
||||
certutil -delstore MY "Nicola Murino"
|
||||
env:
|
||||
SFTPGO_ISS_VERSION: ${{ steps.get_version.outputs.VERSION }}
|
||||
SFTPGO_ISS_DOC_URL: https://github.com/drakkan/sftpgo/blob/${{ steps.get_version.outputs.VERSION }}/README.md
|
||||
CERT_DATA: ${{ secrets.CERT_DATA }}
|
||||
CERT_PASS: ${{ secrets.CERT_PASS }}
|
||||
|
||||
- name: Prepare Portable Release for Windows
|
||||
if: startsWith(matrix.os, 'windows-')
|
||||
run: |
|
||||
mkdir win-portable
|
||||
copy .\sftpgo.exe .\win-portable
|
||||
mkdir win-portable\arm64
|
||||
copy .\arm64\sftpgo.exe .\win-portable\arm64
|
||||
mkdir win-portable\x86
|
||||
copy .\x86\sftpgo.exe .\win-portable\x86
|
||||
copy .\sftpgo.json .\win-portable
|
||||
(Get-Content .\win-portable\sftpgo.json).replace('"sqlite"', '"bolt"') | Set-Content .\win-portable\sftpgo.json
|
||||
copy .\output\sftpgo.db .\win-portable
|
||||
copy .\LICENSE .\win-portable\LICENSE.txt
|
||||
mkdir win-portable\templates
|
||||
xcopy .\templates .\win-portable\templates\ /E
|
||||
mkdir win-portable\static
|
||||
xcopy .\static .\win-portable\static\ /E
|
||||
mkdir win-portable\openapi
|
||||
xcopy .\openapi .\win-portable\openapi\ /E
|
||||
Compress-Archive .\win-portable\* sftpgo_portable.zip
|
||||
|
||||
- name: Upload macOS x86_64 artifact
|
||||
if: startsWith(matrix.os, 'macos-')
|
||||
uses: actions/upload-artifact@v3
|
||||
with:
|
||||
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.tar.xz
|
||||
path: ./sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.tar.xz
|
||||
retention-days: 1
|
||||
|
||||
- name: Upload macOS arm64 artifact
|
||||
if: startsWith(matrix.os, 'macos-')
|
||||
uses: actions/upload-artifact@v3
|
||||
with:
|
||||
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_arm64.tar.xz
|
||||
path: ./sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_arm64.tar.xz
|
||||
retention-days: 1
|
||||
|
||||
- name: Upload Windows installer x86_64 artifact
|
||||
if: startsWith(matrix.os, 'windows-')
|
||||
uses: actions/upload-artifact@v3
|
||||
with:
|
||||
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86_64.exe
|
||||
path: ./sftpgo_windows_x86_64.exe
|
||||
retention-days: 1
|
||||
|
||||
- name: Upload Windows installer arm64 artifact
|
||||
if: startsWith(matrix.os, 'windows-')
|
||||
uses: actions/upload-artifact@v3
|
||||
with:
|
||||
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_arm64.exe
|
||||
path: ./sftpgo_windows_arm64.exe
|
||||
retention-days: 1
|
||||
|
||||
- name: Upload Windows installer x86 artifact
|
||||
if: startsWith(matrix.os, 'windows-')
|
||||
uses: actions/upload-artifact@v3
|
||||
with:
|
||||
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_x86.exe
|
||||
path: ./sftpgo_windows_x86.exe
|
||||
retention-days: 1
|
||||
|
||||
- name: Upload Windows portable artifact
|
||||
if: startsWith(matrix.os, 'windows-')
|
||||
uses: actions/upload-artifact@v3
|
||||
with:
|
||||
name: sftpgo_${{ steps.get_version.outputs.VERSION }}_${{ steps.get_os_name.outputs.OS }}_portable.zip
|
||||
path: ./sftpgo_portable.zip
|
||||
retention-days: 1
|
||||
|
||||
prepare-linux:
|
||||
name: Prepare Linux binaries
|
||||
runs-on: ubuntu-18.04
|
||||
strategy:
|
||||
matrix:
|
||||
include:
|
||||
- arch: amd64
|
||||
go-arch: amd64
|
||||
deb-arch: amd64
|
||||
rpm-arch: x86_64
|
||||
tar-arch: x86_64
|
||||
- arch: aarch64
|
||||
distro: ubuntu18.04
|
||||
go-arch: arm64
|
||||
deb-arch: arm64
|
||||
rpm-arch: aarch64
|
||||
tar-arch: arm64
|
||||
- arch: ppc64le
|
||||
distro: ubuntu18.04
|
||||
go-arch: ppc64le
|
||||
deb-arch: ppc64el
|
||||
rpm-arch: ppc64le
|
||||
tar-arch: ppc64le
|
||||
- arch: armv7
|
||||
distro: ubuntu18.04
|
||||
go-arch: arm7
|
||||
deb-arch: armhf
|
||||
rpm-arch: armv7hl
|
||||
tar-arch: armv7
|
||||
|
||||
steps:
|
||||
- uses: actions/checkout@v3
|
||||
- name: Set up Go
|
||||
if: ${{ matrix.arch == 'amd64' }}
|
||||
uses: actions/setup-go@v3
|
||||
with:
|
||||
go-version: ${{ env.GO_VERSION }}
|
||||
|
||||
- name: Get versions
|
||||
id: get_version
|
||||
run: |
|
||||
echo ::set-output name=SFTPGO_VERSION::${GITHUB_REF/refs\/tags\//}
|
||||
echo ::set-output name=GO_VERSION::${GO_VERSION}
|
||||
shell: bash
|
||||
env:
|
||||
GO_VERSION: ${{ env.GO_VERSION }}
|
||||
|
||||
- name: Build on amd64
|
||||
if: ${{ matrix.arch == 'amd64' }}
|
||||
run: |
|
||||
go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
|
||||
mkdir -p output/{init,sqlite,bash_completion,zsh_completion}
|
||||
echo "For documentation please take a look here:" > output/README.txt
|
||||
echo "" >> output/README.txt
|
||||
echo "https://github.com/drakkan/sftpgo/blob/${SFTPGO_VERSION}/README.md" >> output/README.txt
|
||||
cp LICENSE output/
|
||||
cp sftpgo.json output/
|
||||
cp -r templates output/
|
||||
cp -r static output/
|
||||
cp -r openapi output/
|
||||
cp init/sftpgo.service output/init/
|
||||
./sftpgo initprovider
|
||||
./sftpgo gen completion bash > output/bash_completion/sftpgo
|
||||
./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
|
||||
./sftpgo gen man -d output/man/man1
|
||||
gzip output/man/man1/*
|
||||
cp sftpgo output/
|
||||
cp sftpgo.db output/sqlite/
|
||||
cd output
|
||||
tar cJvf sftpgo_${SFTPGO_VERSION}_linux_${{ matrix.tar-arch }}.tar.xz *
|
||||
cd ..
|
||||
env:
|
||||
SFTPGO_VERSION: ${{ steps.get_version.outputs.SFTPGO_VERSION }}
|
||||
|
||||
      - uses: uraimo/run-on-arch-action@v2
        if: ${{ matrix.arch != 'amd64' }}
        name: Build for ${{ matrix.arch }}
        id: build
        with:
          arch: ${{ matrix.arch }}
          distro: ${{ matrix.distro }}
          setup: |
            mkdir -p "${PWD}/output"
          dockerRunArgs: |
            --volume "${PWD}/output:/output"
          shell: /bin/bash
          install: |
            apt-get update -q -y
            apt-get install -q -y curl gcc git xz-utils
            GO_DOWNLOAD_ARCH=${{ matrix.go-arch }}
            if [ ${{ matrix.arch }} == 'armv7' ]
            then
              GO_DOWNLOAD_ARCH=armv6l
            fi
            curl --retry 5 --retry-delay 2 --connect-timeout 10 -o go.tar.gz -L https://go.dev/dl/go${{ steps.get_version.outputs.GO_VERSION }}.linux-${GO_DOWNLOAD_ARCH}.tar.gz
            tar -C /usr/local -xzf go.tar.gz
          run: |
            export PATH=$PATH:/usr/local/go/bin
            go build -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
            mkdir -p output/{init,sqlite,bash_completion,zsh_completion}
            echo "For documentation please take a look here:" > output/README.txt
            echo "" >> output/README.txt
            echo "https://github.com/drakkan/sftpgo/blob/${{ steps.get_version.outputs.SFTPGO_VERSION }}/README.md" >> output/README.txt
            cp LICENSE output/
            cp sftpgo.json output/
            cp -r templates output/
            cp -r static output/
            cp -r openapi output/
            cp init/sftpgo.service output/init/
            ./sftpgo initprovider
            ./sftpgo gen completion bash > output/bash_completion/sftpgo
            ./sftpgo gen completion zsh > output/zsh_completion/_sftpgo
            ./sftpgo gen man -d output/man/man1
            gzip output/man/man1/*
            cp sftpgo output/
            cp sftpgo.db output/sqlite/
            cd output
            tar cJvf sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_${{ matrix.tar-arch }}.tar.xz *
            cd ..

      - name: Upload build artifact for ${{ matrix.arch }}
        uses: actions/upload-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_${{ matrix.tar-arch }}.tar.xz
          path: ./output/sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_${{ matrix.tar-arch }}.tar.xz
          retention-days: 1

      - name: Build Packages
        id: build_linux_pkgs
        run: |
          export NFPM_ARCH=${{ matrix.go-arch }}
          cd pkgs
          ./build.sh
          PKG_VERSION=${SFTPGO_VERSION:1}
          echo "::set-output name=pkg-version::${PKG_VERSION}"
        env:
          SFTPGO_VERSION: ${{ steps.get_version.outputs.SFTPGO_VERSION }}

      - name: Upload Deb Package
        uses: actions/upload-artifact@v3
        with:
          name: sftpgo_${{ steps.build_linux_pkgs.outputs.pkg-version }}-1_${{ matrix.deb-arch }}.deb
          path: ./pkgs/dist/deb/sftpgo_${{ steps.build_linux_pkgs.outputs.pkg-version }}-1_${{ matrix.deb-arch }}.deb
          retention-days: 1

      - name: Upload RPM Package
        uses: actions/upload-artifact@v3
        with:
          name: sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-1.${{ matrix.rpm-arch }}.rpm
          path: ./pkgs/dist/rpm/sftpgo-${{ steps.build_linux_pkgs.outputs.pkg-version }}-1.${{ matrix.rpm-arch }}.rpm
          retention-days: 1

  prepare-linux-bundle:
    name: Prepare Linux bundle
    needs: prepare-linux
    runs-on: ubuntu-latest

    steps:
      - name: Get versions
        id: get_version
        run: |
          echo ::set-output name=SFTPGO_VERSION::${GITHUB_REF/refs\/tags\//}
        shell: bash

      - name: Download amd64 artifact
        uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_x86_64.tar.xz

      - name: Download arm64 artifact
        uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_arm64.tar.xz

      - name: Download ppc64le artifact
        uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_ppc64le.tar.xz

      - name: Download armv7 artifact
        uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_armv7.tar.xz

      - name: Build bundle
        shell: bash
        run: |
          mkdir -p bundle/{arm64,ppc64le,armv7}
          cd bundle
          tar xvf ../sftpgo_${SFTPGO_VERSION}_linux_x86_64.tar.xz
          cd arm64
          tar xvf ../../sftpgo_${SFTPGO_VERSION}_linux_arm64.tar.xz sftpgo
          cd ../ppc64le
          tar xvf ../../sftpgo_${SFTPGO_VERSION}_linux_ppc64le.tar.xz sftpgo
          cd ../armv7
          tar xvf ../../sftpgo_${SFTPGO_VERSION}_linux_armv7.tar.xz sftpgo
          cd ..
          tar cJvf sftpgo_${SFTPGO_VERSION}_linux_bundle.tar.xz *
          cd ..
        env:
          SFTPGO_VERSION: ${{ steps.get_version.outputs.SFTPGO_VERSION }}

      - name: Upload Linux bundle
        uses: actions/upload-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_bundle.tar.xz
          path: ./bundle/sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_bundle.tar.xz
          retention-days: 1

  create-release:
    name: Release
    needs: [prepare-linux-bundle, prepare-sources-with-deps, prepare-window-mac]
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3
      - name: Get versions
        id: get_version
        run: |
          SFTPGO_VERSION=${GITHUB_REF/refs\/tags\//}
          PKG_VERSION=${SFTPGO_VERSION:1}
          echo ::set-output name=SFTPGO_VERSION::${SFTPGO_VERSION}
          echo "::set-output name=PKG_VERSION::${PKG_VERSION}"
        shell: bash

      - name: Download amd64 artifact
        uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_x86_64.tar.xz

      - name: Download arm64 artifact
        uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_arm64.tar.xz

      - name: Download ppc64le artifact
        uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_ppc64le.tar.xz

      - name: Download armv7 artifact
        uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_armv7.tar.xz

      - name: Download Linux bundle artifact
        uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_linux_bundle.tar.xz

      - name: Download Deb amd64 artifact
        uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.PKG_VERSION }}-1_amd64.deb

      - name: Download Deb arm64 artifact
        uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.PKG_VERSION }}-1_arm64.deb

      - name: Download Deb ppc64le artifact
        uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.PKG_VERSION }}-1_ppc64el.deb

      - name: Download Deb armv7 artifact
        uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.PKG_VERSION }}-1_armhf.deb

      - name: Download RPM x86_64 artifact
        uses: actions/download-artifact@v3
        with:
          name: sftpgo-${{ steps.get_version.outputs.PKG_VERSION }}-1.x86_64.rpm

      - name: Download RPM aarch64 artifact
        uses: actions/download-artifact@v3
        with:
          name: sftpgo-${{ steps.get_version.outputs.PKG_VERSION }}-1.aarch64.rpm

      - name: Download RPM ppc64le artifact
        uses: actions/download-artifact@v3
        with:
          name: sftpgo-${{ steps.get_version.outputs.PKG_VERSION }}-1.ppc64le.rpm

      - name: Download RPM armv7 artifact
        uses: actions/download-artifact@v3
        with:
          name: sftpgo-${{ steps.get_version.outputs.PKG_VERSION }}-1.armv7hl.rpm

      - name: Download macOS x86_64 artifact
        uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_macOS_x86_64.tar.xz

      - name: Download macOS arm64 artifact
        uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_macOS_arm64.tar.xz

      - name: Download Windows installer x86_64 artifact
        uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_windows_x86_64.exe

      - name: Download Windows installer arm64 artifact
        uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_windows_arm64.exe

      - name: Download Windows installer x86 artifact
        uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_windows_x86.exe

      - name: Download Windows portable artifact
        uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_windows_portable.zip

      - name: Download source with deps artifact
        uses: actions/download-artifact@v3
        with:
          name: sftpgo_${{ steps.get_version.outputs.SFTPGO_VERSION }}_src_with_deps.tar.xz

      - name: Create release
        run: |
          mv sftpgo_windows_x86_64.exe sftpgo_${SFTPGO_VERSION}_windows_x86_64.exe
          mv sftpgo_windows_arm64.exe sftpgo_${SFTPGO_VERSION}_windows_arm64.exe
          mv sftpgo_windows_x86.exe sftpgo_${SFTPGO_VERSION}_windows_x86.exe
          mv sftpgo_portable.zip sftpgo_${SFTPGO_VERSION}_windows_portable.zip
          gh release create "${SFTPGO_VERSION}" -t "${SFTPGO_VERSION}"
          gh release upload "${SFTPGO_VERSION}" sftpgo_*.xz --clobber
          gh release upload "${SFTPGO_VERSION}" sftpgo-*.rpm --clobber
          gh release upload "${SFTPGO_VERSION}" sftpgo_*.deb --clobber
          gh release upload "${SFTPGO_VERSION}" sftpgo_*.exe --clobber
          gh release upload "${SFTPGO_VERSION}" sftpgo_*.zip --clobber
          gh release view "${SFTPGO_VERSION}"
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SFTPGO_VERSION: ${{ steps.get_version.outputs.SFTPGO_VERSION }}

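For reference, the `${SFTPGO_VERSION:1}` expansion used in the `Get versions` and `Build Packages` steps above simply strips the leading `v` from the git tag to derive the Deb/RPM package version. A minimal sketch, assuming a `vX.Y.Z` tag name:

```bash
# Simulate the version derivation performed in the workflow above.
GITHUB_REF="refs/tags/v2.2.0"              # hypothetical tag ref
SFTPGO_VERSION=${GITHUB_REF/refs\/tags\//} # -> "v2.2.0"
PKG_VERSION=${SFTPGO_VERSION:1}            # drop the leading "v" -> "2.2.0"
echo "release: ${SFTPGO_VERSION}, package: ${PKG_VERSION}"
```
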
52  .golangci.yml  Normal file
@@ -0,0 +1,52 @@
run:
  timeout: 5m
  issues-exit-code: 1
  tests: true


linters-settings:
  dupl:
    threshold: 150
  errcheck:
    check-type-assertions: false
    check-blank: false
  goconst:
    min-len: 3
    min-occurrences: 3
  gocyclo:
    min-complexity: 15
  gofmt:
    simplify: true
  goimports:
    local-prefixes: github.com/drakkan/sftpgo
  #govet:
    # report about shadowed variables
    #check-shadowing: true
    #enable:
    #  - fieldalignment

issues:
  include:
    - EXC0002
    - EXC0012
    - EXC0013
    - EXC0014
    - EXC0015

linters:
  enable:
    - goconst
    - errcheck
    - gofmt
    - goimports
    - revive
    - unconvert
    - unparam
    - bodyclose
    - gocyclo
    - misspell
    - whitespace
    - dupl
    - rowserrcheck
    - dogsled
    - govet
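The config above is picked up automatically when the linter runs from the repository root. A minimal usage sketch, assuming `golangci-lint` is installed locally:

```bash
# Run the linters enabled in .golangci.yml against the whole module;
# golangci-lint discovers the config file in the working directory.
golangci-lint run ./...
```
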
24  .travis.yml
@@ -1,24 +0,0 @@
language: go

os:
  - linux
  - osx

go:
  - 1.13.x
  - 1.14.x

env:
  - GO111MODULE=on

before_script:
  - sftpgo initprovider

install:
  - go get -v -t ./...

script:
  - go test -v ./... -coverprofile=coverage.txt -covermode=atomic

after_success:
  - bash <(curl -s https://codecov.io/bash)
65  Dockerfile  Normal file
@@ -0,0 +1,65 @@
FROM golang:1.17-bullseye as builder

ENV GOFLAGS="-mod=readonly"

RUN mkdir -p /workspace
WORKDIR /workspace

ARG GOPROXY

COPY go.mod go.sum ./
RUN go mod download

ARG COMMIT_SHA

# This ARG allows you to disable some optional features and it might be useful if you build the image yourself.
# For example you can disable S3 and GCS support like this:
# --build-arg FEATURES=nos3,nogcs
ARG FEATURES

COPY . .

RUN set -xe && \
    export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --dirty)} && \
    go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -v -o sftpgo

FROM debian:bullseye-slim

# Set to "true" to install jq and the optional git and rsync dependencies
ARG INSTALL_OPTIONAL_PACKAGES=false

RUN apt-get update && apt-get install --no-install-recommends -y ca-certificates media-types && rm -rf /var/lib/apt/lists/*

RUN if [ "${INSTALL_OPTIONAL_PACKAGES}" = "true" ]; then apt-get update && apt-get install --no-install-recommends -y jq git rsync && rm -rf /var/lib/apt/lists/*; fi

RUN mkdir -p /etc/sftpgo /var/lib/sftpgo /usr/share/sftpgo /srv/sftpgo/data /srv/sftpgo/backups

RUN groupadd --system -g 1000 sftpgo && \
    useradd --system --gid sftpgo --no-create-home \
    --home-dir /var/lib/sftpgo --shell /usr/sbin/nologin \
    --comment "SFTPGo user" --uid 1000 sftpgo

COPY --from=builder /workspace/sftpgo.json /etc/sftpgo/sftpgo.json
COPY --from=builder /workspace/templates /usr/share/sftpgo/templates
COPY --from=builder /workspace/static /usr/share/sftpgo/static
COPY --from=builder /workspace/openapi /usr/share/sftpgo/openapi
COPY --from=builder /workspace/sftpgo /usr/local/bin/

# Log to stdout so the logs will be available using docker logs
ENV SFTPGO_LOG_FILE_PATH=""
# templates and static paths are inside the container
ENV SFTPGO_HTTPD__TEMPLATES_PATH=/usr/share/sftpgo/templates
ENV SFTPGO_SMTP__TEMPLATES_PATH=/usr/share/sftpgo/templates
ENV SFTPGO_HTTPD__STATIC_FILES_PATH=/usr/share/sftpgo/static
ENV SFTPGO_HTTPD__OPENAPI_PATH=/usr/share/sftpgo/openapi

# Modify the default configuration file
RUN sed -i "s|\"users_base_dir\": \"\",|\"users_base_dir\": \"/srv/sftpgo/data\",|" /etc/sftpgo/sftpgo.json && \
    sed -i "s|\"backups\"|\"/srv/sftpgo/backups\"|" /etc/sftpgo/sftpgo.json

RUN chown -R sftpgo:sftpgo /etc/sftpgo /srv/sftpgo && chown sftpgo:sftpgo /var/lib/sftpgo && chmod 700 /srv/sftpgo/backups

WORKDIR /var/lib/sftpgo
USER 1000:1000

CMD ["sftpgo", "serve"]
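As the comment on the `FEATURES` build argument above explains, optional backends can be compiled out when building the image yourself. A minimal sketch (the `sftpgo:local` tag is an illustrative name, not an official one):

```bash
# Build the Debian-based image from the repository root,
# disabling S3 and GCS support as the Dockerfile comment suggests.
docker build -t sftpgo:local --build-arg FEATURES=nos3,nogcs .
```
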
70  Dockerfile.alpine  Normal file
@@ -0,0 +1,70 @@
FROM golang:1.17-alpine3.15 AS builder

ENV GOFLAGS="-mod=readonly"

RUN apk add --update --no-cache bash ca-certificates curl git gcc g++

RUN mkdir -p /workspace
WORKDIR /workspace

ARG GOPROXY

COPY go.mod go.sum ./
RUN go mod download

ARG COMMIT_SHA

# This ARG allows you to disable some optional features and it might be useful if you build the image yourself.
# For example you can disable S3 and GCS support like this:
# --build-arg FEATURES=nos3,nogcs
ARG FEATURES

COPY . .

RUN set -xe && \
    export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --dirty)} && \
    go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -v -o sftpgo


FROM alpine:3.15

# Set to "true" to install jq and the optional git and rsync dependencies
ARG INSTALL_OPTIONAL_PACKAGES=false

RUN apk add --update --no-cache ca-certificates tzdata mailcap

RUN if [ "${INSTALL_OPTIONAL_PACKAGES}" = "true" ]; then apk add --update --no-cache jq git rsync; fi

# set up nsswitch.conf for Go's "netgo" implementation
# https://github.com/gliderlabs/docker-alpine/issues/367#issuecomment-424546457
RUN test ! -e /etc/nsswitch.conf && echo 'hosts: files dns' > /etc/nsswitch.conf

RUN mkdir -p /etc/sftpgo /var/lib/sftpgo /usr/share/sftpgo /srv/sftpgo/data /srv/sftpgo/backups

RUN addgroup -g 1000 -S sftpgo && \
    adduser -u 1000 -h /var/lib/sftpgo -s /sbin/nologin -G sftpgo -S -D -H -g "SFTPGo user" sftpgo

COPY --from=builder /workspace/sftpgo.json /etc/sftpgo/sftpgo.json
COPY --from=builder /workspace/templates /usr/share/sftpgo/templates
COPY --from=builder /workspace/static /usr/share/sftpgo/static
COPY --from=builder /workspace/openapi /usr/share/sftpgo/openapi
COPY --from=builder /workspace/sftpgo /usr/local/bin/

# Log to stdout so the logs will be available using docker logs
ENV SFTPGO_LOG_FILE_PATH=""
# templates and static paths are inside the container
ENV SFTPGO_HTTPD__TEMPLATES_PATH=/usr/share/sftpgo/templates
ENV SFTPGO_SMTP__TEMPLATES_PATH=/usr/share/sftpgo/templates
ENV SFTPGO_HTTPD__STATIC_FILES_PATH=/usr/share/sftpgo/static
ENV SFTPGO_HTTPD__OPENAPI_PATH=/usr/share/sftpgo/openapi

# Modify the default configuration file
RUN sed -i "s|\"users_base_dir\": \"\",|\"users_base_dir\": \"/srv/sftpgo/data\",|" /etc/sftpgo/sftpgo.json && \
    sed -i "s|\"backups\"|\"/srv/sftpgo/backups\"|" /etc/sftpgo/sftpgo.json

RUN chown -R sftpgo:sftpgo /etc/sftpgo /srv/sftpgo && chown sftpgo:sftpgo /var/lib/sftpgo && chmod 700 /srv/sftpgo/backups

WORKDIR /var/lib/sftpgo
USER 1000:1000

CMD ["sftpgo", "serve"]
62  Dockerfile.distroless  Normal file
@@ -0,0 +1,62 @@
FROM golang:1.17-bullseye as builder

ENV CGO_ENABLED=0 GOFLAGS="-mod=readonly"

RUN mkdir -p /workspace
WORKDIR /workspace

ARG GOPROXY

COPY go.mod go.sum ./
RUN go mod download

ARG COMMIT_SHA

# This ARG allows you to disable some optional features and it might be useful if you build the image yourself.
# For this variant we disable SQLite support since it requires CGO and so a C runtime which is not installed
# in distroless/static-* images
ARG FEATURES=nosqlite

COPY . .

RUN set -xe && \
    export COMMIT_SHA=${COMMIT_SHA:-$(git describe --always --dirty)} && \
    go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -trimpath -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=${COMMIT_SHA} -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -v -o sftpgo

# Modify the default configuration file
RUN sed -i "s|\"users_base_dir\": \"\",|\"users_base_dir\": \"/srv/sftpgo/data\",|" sftpgo.json && \
    sed -i "s|\"backups\"|\"/srv/sftpgo/backups\"|" sftpgo.json && \
    sed -i "s|\"sqlite\"|\"bolt\"|" sftpgo.json

RUN apt-get update && apt-get install --no-install-recommends -y media-types && rm -rf /var/lib/apt/lists/*

RUN mkdir /etc/sftpgo /var/lib/sftpgo /srv/sftpgo

FROM gcr.io/distroless/static-debian11

COPY --from=builder --chown=1000:1000 /etc/sftpgo /etc/sftpgo
COPY --from=builder --chown=1000:1000 /srv/sftpgo /srv/sftpgo
COPY --from=builder --chown=1000:1000 /var/lib/sftpgo /var/lib/sftpgo
COPY --from=builder --chown=1000:1000 /workspace/sftpgo.json /etc/sftpgo/sftpgo.json
COPY --from=builder /workspace/templates /usr/share/sftpgo/templates
COPY --from=builder /workspace/static /usr/share/sftpgo/static
COPY --from=builder /workspace/openapi /usr/share/sftpgo/openapi
COPY --from=builder /workspace/sftpgo /usr/local/bin/
COPY --from=builder /etc/mime.types /etc/mime.types

# Log to stdout so the logs will be available using docker logs
ENV SFTPGO_LOG_FILE_PATH=""
# templates and static paths are inside the container
ENV SFTPGO_HTTPD__TEMPLATES_PATH=/usr/share/sftpgo/templates
ENV SFTPGO_SMTP__TEMPLATES_PATH=/usr/share/sftpgo/templates
ENV SFTPGO_HTTPD__STATIC_FILES_PATH=/usr/share/sftpgo/static
ENV SFTPGO_HTTPD__OPENAPI_PATH=/usr/share/sftpgo/openapi
# These env vars are required to avoid the following error when calling user.Current():
# unable to get the current user: user: Current requires cgo or $USER set in environment
ENV USER=sftpgo
ENV HOME=/var/lib/sftpgo

WORKDIR /var/lib/sftpgo
USER 1000:1000

CMD ["sftpgo", "serve"]
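A quick usage sketch for the distroless variant; since SQLite support is compiled out, the image relies on the bolt provider as patched above. The image tag, port numbers (SFTPGo's defaults are 2022 for SFTP and 8080 for the web UI/API) and volume name are illustrative assumptions:

```bash
# Build the distroless image (hypothetical tag "sftpgo:distroless").
docker build -f Dockerfile.distroless -t sftpgo:distroless .
# Run it, publishing the assumed default SFTP (2022) and web UI (8080) ports
# and persisting the user data and backups stored under /srv/sftpgo.
docker run -d --name sftpgo \
  -p 2022:2022 -p 8080:8080 \
  -v sftpgo-data:/srv/sftpgo \
  sftpgo:distroless
```
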
145  LICENSE
@@ -1,5 +1,5 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007

Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
@@ -7,17 +7,15 @@

Preamble

The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.

The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
software for all its users.

When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
@@ -26,44 +24,34 @@ them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.

To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.

For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.

Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.

For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.

Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.

Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.

The precise terms and conditions for copying, distribution and
modification follow.
@@ -72,7 +60,7 @@ modification follow.

0. Definitions.

"This License" refers to version 3 of the GNU General Public License.
"This License" refers to version 3 of the GNU Affero General Public License.

"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
@@ -549,35 +537,45 @@ to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.

13. Use with the GNU Affero General Public License.
13. Remote Network Interaction; Use with the GNU General Public License.

Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.

Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.

14. Revised Versions of this License.

The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.

Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.

If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.

@@ -635,40 +633,29 @@ the "copyright" line and a pointer to where the full notice is found.
Copyright (C) <year> <name of author>

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
it under the terms of the GNU Affero General Public License as published
by the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
GNU Affero General Public License for more details.

You should have received a copy of the GNU General Public License
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.

Also add information on how to contact you by electronic and paper mail.

If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:

<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.

You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.

The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.
246  README.md
@@ -1,74 +1,109 @@
# SFTPGo

[](https://travis-ci.org/drakkan/sftpgo) [](https://codecov.io/gh/drakkan/sftpgo/branch/master) [](https://goreportcard.com/report/github.com/drakkan/sftpgo) [](https://www.gnu.org/licenses/gpl-3.0) [](https://github.com/avelino/awesome-go)

[](https://codecov.io/gh/drakkan/sftpgo/branch/main)
[](https://www.gnu.org/licenses/agpl-3.0)
[](https://hub.docker.com/r/drakkan/sftpgo)
[](https://github.com/avelino/awesome-go)

Fully featured and highly configurable SFTP server, written in Go
Fully featured and highly configurable SFTP server with optional HTTP, FTP/S and WebDAV support.
Several storage backends are supported: local filesystem, encrypted local filesystem, S3 (compatible) Object Storage, Google Cloud Storage, Azure Blob Storage, SFTP.

## Features

- Each account is chrooted to its home directory.
- SFTP accounts are virtual accounts stored in a "data provider".
- SQLite, MySQL, PostgreSQL, bbolt (key/value store in pure Go) and in-memory data providers are supported.
- Support for serving local filesystem, encrypted local filesystem, S3 Compatible Object Storage, Google Cloud Storage, Azure Blob Storage or other SFTP accounts over SFTP/SCP/FTP/WebDAV.
- Virtual folders are supported: a virtual folder can use any of the supported storage backends. So you can have, for example, an S3 user that exposes a GCS bucket (or part of it) on a specified path and an encrypted local filesystem on another one. Virtual folders can be private or shared among multiple users; for shared virtual folders you can define different quota limits for each user.
- Configurable [custom commands and/or HTTP hooks](./docs/custom-actions.md) on file upload, pre-upload, download, pre-download, delete, pre-delete, rename, mkdir, rmdir, on SSH commands and on user add, update and delete.
- Virtual accounts stored within a "data provider".
- SQLite, MySQL, PostgreSQL, CockroachDB, Bolt (key/value store in pure Go) and in-memory data providers are supported.
- Chroot isolation for local accounts. Cloud-based accounts can be restricted to a certain base path.
- Per user and per directory virtual permissions, for each exposed path you can allow or deny: directory listing, upload, overwrite, download, delete, rename, create directories, create symlinks, change owner/group/file mode.
- [REST API](./docs/rest-api.md) for users and folders management, data retention, backup, restore and real time reports of the active connections with possibility of forcibly closing a connection.
- [Web based administration interface](./docs/web-admin.md) to easily manage users, folders and connections.
- [Web client interface](./docs/web-client.md) so that end users can change their credentials, manage and share their files.
- Public key and password authentication. Multiple public keys per user are supported.
- SSH user [certificate authentication](https://cvsweb.openbsd.org/src/usr.bin/ssh/PROTOCOL.certkeys?rev=1.8).
- Keyboard interactive authentication. You can easily set up customizable multi-factor authentication.
- Per user authentication methods. You can, for example, deny one or more authentication methods to one or more users.
- Custom authentication via external programs is supported.
- Dynamic user modification before login via external programs is supported.
- Partial authentication. You can configure multi-step authentication requiring, for example, the user password after successful public key authentication.
- Per user authentication methods.
- [Two-factor authentication](./docs/howto/two-factor-authentication.md) based on time-based one time passwords (RFC 6238) which works with Authy, Google Authenticator and other compatible apps.
- Custom authentication via external programs/HTTP API.
- [Data At Rest Encryption](./docs/dare.md).
- Dynamic user modification before login via external programs/HTTP API.
- Quota support: accounts can have individual quota expressed as max total size and/or max number of files.
- Bandwidth throttling is supported, with distinct settings for upload and download.
- Bandwidth throttling, with distinct settings for upload and download and overrides based on the client IP address.
- Per-protocol [rate limiting](./docs/rate-limiting.md) is supported and can be optionally connected to the built-in defender to automatically block hosts that repeatedly exceed the configured limit.
- Per user maximum concurrent sessions.
- Per user and per directory permission management: list directory contents, upload, overwrite, download, delete, rename, create directories, create symlinks, change owner/group and mode, change access and modification times.
- Per user files/folders ownership mapping: you can map all the users to the system account that runs SFTPGo (all platforms are supported) or you can run SFTPGo as root user and map each user or group of users to a different system account (\*NIX only).
- Per user IP filters are supported: login can be restricted to specific ranges of IP addresses or to a specific IP address.
- Per user and per directory file extensions filters are supported: files can be allowed or denied based on their extensions.
- Virtual folders are supported: directories outside the user home directory can be exposed as virtual folders.
- Configurable custom commands and/or HTTP notifications on file upload, download, delete, rename, on SSH commands and on user add, update and delete.
- Per user and global IP filters: login can be restricted to specific ranges of IP addresses or to a specific IP address.
- Per user and per directory shell like patterns filters: files can be allowed or denied based on shell like patterns.
- Automatic termination of idle connections.
- Automatic blocklist management using the built-in [defender](./docs/defender.md).
- Atomic uploads are configurable.
- Per user files/folders ownership mapping: you can map all the users to the system account that runs SFTPGo (all platforms are supported) or you can run SFTPGo as root user and map each user or group of users to a different system account (\*NIX only).
- Support for Git repositories over SSH.
- SCP and rsync are supported.
- Support for serving local filesystem, S3 Compatible Object Storage and Google Cloud Storage over SFTP/SCP.
- FTP/S is supported. You can configure the FTP service to require TLS for both control and data connections.
- [WebDAV](./docs/webdav.md) is supported.
- Two-Way TLS authentication, aka TLS with client certificate authentication, is supported for REST API/Web Admin, FTPS and WebDAV over HTTPS.
- Per user protocols restrictions. You can configure the allowed protocols (SSH/FTP/WebDAV) for each user.
- [Prometheus metrics](./docs/metrics.md) are exposed.
- Support for HAProxy PROXY protocol: you can proxy and/or load balance the SFTP/SCP service without losing the information about the client's address.
- [REST API](./docs/rest-api.md) for users management, backup, restore and real time reports of the active connections with possibility of forcibly closing a connection.
- [Web based administration interface](./docs/web-admin.md) to easily manage users and connections.
- Easy [migration](./scripts#convert-users-from-other-stores) from Linux system user accounts.
- Support for HAProxy PROXY protocol: you can proxy and/or load balance the SFTP/SCP/FTP/WebDAV service without losing the information about the client's address.
- Easy [migration](./examples/convertusers) from Linux system user accounts.
- [Portable mode](./docs/portable-mode.md): a convenient way to share a single directory on demand.
- [SFTP subsystem mode](./docs/sftp-subsystem.md): you can use SFTPGo as OpenSSH's SFTP subsystem.
- Performance analysis using built-in [profiler](./docs/profiling.md).
- Configuration format is at your choice: JSON, TOML, YAML, HCL, envfile are supported.
- Log files are accurate and they are saved in the easily parsable JSON format ([more information](./docs/logs.md)).
- SFTPGo supports a [plugin system](./docs/plugins.md) and therefore can be extended using external plugins.

## Platforms

SFTPGo is developed and tested on Linux. After each commit, the code is automatically built and tested on Linux and macOS using Travis CI.
The test cases are regularly manually executed and passed on Windows. Other UNIX variants such as \*BSD should work too.
SFTPGo is developed and tested on Linux. After each commit, the code is automatically built and tested on Linux, macOS and Windows using a [GitHub Action](./.github/workflows/development.yml). The test cases are regularly manually executed and passed on FreeBSD. Other *BSD variants should work too.

## Requirements

- Go 1.13 or higher as build only dependency.
- A suitable SQL server or key/value store to use as data provider: PostgreSQL 9.4+ or MySQL 5.6+ or SQLite 3.x or bbolt 1.3.x
- Go as build only dependency. We support the Go version(s) used in [continuous integration workflows](./tree/main/.github/workflows).
- A suitable SQL server to use as data provider: PostgreSQL 9.4+ or MySQL 5.6+ or SQLite 3.x or CockroachDB stable.
- The SQL server is optional: you can choose to use an embedded bolt database as key/value store or an in memory data provider.

## Installation

Binary releases for Linux, macOS, and Windows are available. Please visit the [releases](https://github.com/drakkan/sftpgo/releases "releases") page.

Sample Dockerfiles for [Debian](https://www.debian.org "Debian") and [Alpine](https://alpinelinux.org "Alpine") are available inside the source tree [docker](./docker "docker") directory.
An official Docker image is available. Documentation is [here](./docker/README.md).

Some Linux distro packages are available:

- For Arch Linux via AUR:
  - [sftpgo](https://aur.archlinux.org/packages/sftpgo/). This package follows stable releases. It requires `git`, `gcc` and `go` to build.
  - [sftpgo-bin](https://aur.archlinux.org/packages/sftpgo-bin/). This package follows stable releases downloading the prebuilt linux binary from GitHub. It does not require `git`, `gcc` and `go` to build.
  - [sftpgo-git](https://aur.archlinux.org/packages/sftpgo-git/). This package builds and installs the latest git master. It requires `git`, `gcc` and `go` to build.
  - [sftpgo-git](https://aur.archlinux.org/packages/sftpgo-git/). This package builds and installs the latest git `main` branch. It requires `git`, `gcc` and `go` to build.
- Deb and RPM packages are built after each commit and for each release.
- For Ubuntu a PPA is available [here](https://launchpad.net/~sftpgo/+archive/ubuntu/sftpgo).

SFTPGo is also available on [AWS Marketplace](https://aws.amazon.com/marketplace/seller-profile?id=6e849ab8-70a6-47de-9a43-13c3fa849335); purchasing from there will help keep SFTPGo a long-term sustainable project.

On FreeBSD you can install from the [SFTPGo port](https://www.freshports.org/ftp/sftpgo).

On Windows you can use:

- The Windows installer to install and run SFTPGo as a Windows service.
- The portable package to start SFTPGo on demand.
- The [Chocolatey package](https://community.chocolatey.org/packages/sftpgo) to install and run SFTPGo as a Windows service.

You can easily test new features by selecting a commit from the [Actions](https://github.com/drakkan/sftpgo/actions) page and downloading the matching build artifacts for Linux, macOS or Windows. GitHub stores artifacts for 90 days.

Alternately, you can [build from source](./docs/build-from-source.md).

[Getting Started Guide for the Impatient](./docs/howto/getting-started.md).

## Configuration

A full explanation of all configuration methods can be found [here](./docs/full-configuration.md).

Please make sure to [initialize the data provider](#data-provider-initialization) before running the daemon!
Please make sure to [initialize the data provider](#data-provider-initialization-and-management) before running the daemon.

To start the SFTP server with default settings, simply run:
To start SFTPGo with the default settings, simply run:

```bash
sftpgo serve
@@ -76,15 +111,15 @@ sftpgo serve

Check out [this documentation](./docs/service.md) if you want to run SFTPGo as a service.

### Data provider initialization
### Data provider initialization and management

Before starting the SFTPGo server, please ensure that the configured data provider is properly initialized.
Before starting the SFTPGo server please ensure that the configured data provider is properly initialized/updated.

SQL based data providers (SQLite, MySQL, PostgreSQL) require the creation of a database containing the required tables. Memory and bolt data providers do not require an initialization.
For PostgreSQL, MySQL and CockroachDB providers, you need to create the configured database. For SQLite, the configured database will be automatically created at startup. Memory and bolt data providers do not require an initialization but they could require an update to the existing data after upgrading SFTPGo.

After configuring the data provider using the configuration file, you can create the required database structure using the `initprovider` command.
For SQLite provider, the `initprovider` command will auto create the database file, if missing, and the required tables.
For PostgreSQL and MySQL providers, you need to create the configured database, and the `initprovider` command will create the required tables.
SFTPGo will attempt to automatically detect if the data provider is initialized/updated and if not, will attempt to initialize/update it on startup as needed.

Alternately, you can create/update the required data provider structures yourself using the `initprovider` command.

For example, you can simply execute the following command from the configuration directory:

@@ -98,13 +133,72 @@ Take a look at the CLI usage to learn how to specify a different configuration f
sftpgo initprovider --help
```

The `initprovider` command is enough for new installations. From now on, the database structure will be automatically checked and updated, if required, at startup.
You can disable automatic data provider checks/updates at startup by setting the `update_mode` configuration key to `1`.

#### Upgrading
You can also reset your provider by using the `resetprovider` sub-command. Take a look at the CLI usage for more details:

If you are upgrading from version 0.9.5 or before, you have to manually execute the SQL scripts to create the required database structure. These scripts can be found inside the source tree [sql](./sql "sql") directory. The SQL scripts filename is, by convention, the date as `YYYYMMDD` and the suffix `.sql`. You need to apply all the SQL scripts for your database ordered by name. For example, `20190828.sql` must be applied before `20191112.sql`, and so on.
Example for SQLite: `find sql/sqlite/ -type f -iname '*.sql' -print | sort -n | xargs cat | sqlite3 sftpgo.db`.
After applying these scripts, your database structure is the same as the one obtained using `initprovider` for new installations, so from now on, you don't have to manually upgrade your database anymore.
```bash
sftpgo resetprovider --help
```

## Create the first admin

To start using SFTPGo you need to create an admin user. You can do it in several ways (a minimal sketch follows this list):

- by using the web admin interface. The default URL is [http://127.0.0.1:8080/web/admin](http://127.0.0.1:8080/web/admin)
- by loading initial data
- by enabling `create_default_admin` in your configuration file and setting the environment variables `SFTPGO_DEFAULT_ADMIN_USERNAME` and `SFTPGO_DEFAULT_ADMIN_PASSWORD`

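A minimal sketch of the last option; the `SFTPGO_DEFAULT_ADMIN_USERNAME` and `SFTPGO_DEFAULT_ADMIN_PASSWORD` variables are documented above, while the exact env var mapping for `create_default_admin` (`SFTPGO_DATA_PROVIDER__CREATE_DEFAULT_ADMIN`) is an assumption based on SFTPGo's `SFTPGO_` environment override convention:

```bash
# Enable creation of the default admin and supply its credentials.
export SFTPGO_DATA_PROVIDER__CREATE_DEFAULT_ADMIN=1  # assumed mapping for create_default_admin
export SFTPGO_DEFAULT_ADMIN_USERNAME=admin
export SFTPGO_DEFAULT_ADMIN_PASSWORD='a-strong-password'
sftpgo serve
```
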
## Upgrading

SFTPGo supports upgrading from the previous release branch to the current one.
Some examples for supported upgrade paths are:

- from 1.2.x to 2.0.x
- from 2.0.x to 2.1.x and so on.

For supported upgrade paths, the data and schema are migrated automatically; alternately, you can use the `initprovider` command.

So if, for example, you want to upgrade from a version before 1.2.x to 2.0.x, you must first install version 1.2.x, update the data provider and finally install version 2.0.x. It is recommended to always install the latest available minor version, i.e. do not install 1.2.0 if 1.2.2 is available.

Loading data from a provider independent JSON dump is supported from the previous release branch to the current one too. After upgrading SFTPGo it is advisable to regenerate the JSON dump from the new version.

## Downgrading

If for some reason you want to downgrade SFTPGo, you may need to downgrade your data provider schema and data as well. You can use the `revertprovider` command for this task.

As for upgrading, SFTPGo supports downgrading from the previous release branch to the current one.

So, if you plan to downgrade from 2.0.x to 1.2.x, before uninstalling the 2.0.x version, you can prepare your data provider by executing the following command from the configuration directory:

```shell
sftpgo revertprovider --to-version 4
```

Take a look at the CLI usage to see the supported parameter for the `--to-version` argument and to learn how to specify a different configuration file:

```shell
sftpgo revertprovider --help
```

The `revertprovider` command is not supported for the memory provider.

Please note that we only support the current release branch and the current main branch; if you find a bug it is better to report it rather than downgrading to an older unsupported version.

## Users and folders management

After starting SFTPGo you can manage users and folders using:

- the [web based administration interface](./docs/web-admin.md)
- the [REST API](./docs/rest-api.md)

To support embedded data providers like `bolt` and `SQLite` we can't have a CLI that directly writes users and folders to the data provider; we always have to use the REST API.

Full details for users, folders, admins and other resources are documented in the [OpenAPI](/openapi/openapi.yaml) schema. If you want to render the schema without importing it manually, you can explore it on [Stoplight](https://sftpgo.stoplight.io/docs/sftpgo/openapi.yaml).
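A minimal sketch of driving the REST API from the shell; the endpoints follow the OpenAPI schema linked above, while the admin credentials and the user fields are placeholder assumptions:

```bash
# Get a JWT for the REST API using admin credentials (placeholders),
# then create a user; endpoints as documented in the OpenAPI schema.
TOKEN=$(curl -s -u admin:password http://127.0.0.1:8080/api/v2/token | jq -r .access_token)
curl -s -X POST http://127.0.0.1:8080/api/v2/users \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"username":"alice","password":"secret","home_dir":"/srv/sftpgo/data/alice","permissions":{"/":["*"]},"status":1}'
```
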
|
||||
## Tutorials
|
||||
|
||||
Some step-by-step tutorials can be found in the [howto](./docs/howto "How-to") directory inside the source tree.

## Authentication options

@@ -119,31 +213,52 @@ This authentication method is typically used for multi-factor authentication.

More information can be found [here](./docs/keyboard-interactive.md).

## Dynamic user modification
## Dynamic user creation or modification

The user configuration, retrieved from the data provider, can be modified by an external program. More information about this can be found [here](./docs/dynamic-user-mod.md).
A user can be created or modified by an external program just before the login. More information about this can be found [here](./docs/dynamic-user-mod.md).

## Custom Actions

SFTPGo allows you to configure custom commands and/or HTTP notifications on file upload, download, delete, rename, on SSH commands and on user add, update and delete.
SFTPGo allows you to configure custom commands and/or HTTP hooks to receive notifications about file uploads, deletions and several other events.

More information about custom actions can be found [here](./docs/custom-actions.md).

## Virtual folders

Directories outside the user home directory, or based on a different storage provider, can be exposed as virtual folders; more information [here](./docs/virtual-folders.md).

## Other hooks

You can get notified as soon as a new connection is established using the [Post-connect hook](./docs/post-connect-hook.md) and after each login using the [Post-login hook](./docs/post-login-hook.md).
You can use your own hook to [check passwords](./docs/check-password-hook.md).

## Storage backends

### S3 Compabible Object Storage backends
### S3 Compatible Object Storage backends

Each user can be mapped to whole bucket or to a bucket virtual folder. This way, the mapped bucket/virtual folder is exposed over SFTP/SCP. More information about S3 integration can be found [here](./docs/s3.md).
Each user can be mapped to the whole bucket or to a bucket virtual folder. This way, the mapped bucket/virtual folder is exposed over SFTP/SCP/FTP/WebDAV. More information about S3 integration can be found [here](./docs/s3.md).

### Google Cloud Storage backend

Each user can be mapped with a Google Cloud Storage bucket or a bucket virtual folder. This way, the mapped bucket/virtual folder is exposed over SFTP/SCP. More information about Google Cloud Storage integration can be found [here](./docs/google-cloud-storage.md).
Each user can be mapped with a Google Cloud Storage bucket or a bucket virtual folder. This way, the mapped bucket/virtual folder is exposed over SFTP/SCP/FTP/WebDAV. More information about Google Cloud Storage integration can be found [here](./docs/google-cloud-storage.md).

### Azure Blob Storage backend

Each user can be mapped with an Azure Blob Storage container or a container virtual folder. This way, the mapped container/virtual folder is exposed over SFTP/SCP/FTP/WebDAV. More information about Azure Blob Storage integration can be found [here](./docs/azure-blob-storage.md).

### SFTP backend

Each user can be mapped to another SFTP server account or a subfolder of it. More information can be found [here](./docs/sftpfs.md).

### Encrypted backend

Data at-rest encryption is supported via the [cryptfs backend](./docs/dare.md).

### Other Storage backends

Adding new storage backends is quite easy:

- implement the [Fs interface](./vfs/vfs.go#L18 "interface for filesystem backends").
- implement the [Fs interface](./vfs/vfs.go#L28 "interface for filesystem backends").
- update the user method `GetFilesystem` to return the new backend
- update the web interface and the REST API CLI
- add the flags for the new storage backend to the `portable` mode
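To make the first step concrete, here is a minimal, hypothetical sketch of a new backend stub. The `MemFs` name and the three methods shown are illustrative assumptions only, not SFTPGo's actual API; the authoritative method set is the `Fs` interface in `vfs/vfs.go` linked above, which must be implemented in full.

```go
package vfs

import (
	"errors"
	"os"
)

// MemFs is a hypothetical skeleton for a new storage backend.
// Only a small, illustrative subset of methods is shown here; the
// real contract is the Fs interface defined in vfs/vfs.go.
type MemFs struct {
	connectionID string
	rootDir      string
}

// Name returns the name of the filesystem backend.
func (fs *MemFs) Name() string {
	return "memfs"
}

// ConnectionID returns the connection ID this backend is bound to.
func (fs *MemFs) ConnectionID() string {
	return fs.connectionID
}

// Stat returns a FileInfo describing the named file; a real backend
// would resolve the path against its storage instead of erroring out.
func (fs *MemFs) Stat(name string) (os.FileInfo, error) {
	return nil, errors.New("memfs: not implemented")
}
```

Once the full interface is implemented, `GetFilesystem` would return the new type when the corresponding provider is configured, and the web interface, REST API CLI and `portable` flags complete the integration.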
@@ -154,6 +269,8 @@ Anyway, some backends require a pay per use account (or they offer free account

The [connection failed logs](./docs/logs.md) can be used for integration in tools such as [Fail2ban](http://www.fail2ban.org/). Examples of [jails](./fail2ban/jails) and [filters](./fail2ban/filters) working with `systemd`/`journald` are available in the fail2ban directory.

You can also use the built-in [defender](./docs/defender.md).

## Account's configuration properties

Detailed information about account configuration properties can be found [here](./docs/account.md).

@@ -164,29 +281,26 @@ SFTPGo can easily saturate a Gigabit connection on low end hardware with no spec

More in-depth analysis of performance can be found [here](./docs/performance.md).

## Release Cadence

SFTPGo releases are feature-driven; we don't have a fixed time-based schedule. As a rough estimate, you can expect one or two new releases per year.

## Acknowledgements

- [pkg/sftp](https://github.com/pkg/sftp)
- [go-chi](https://github.com/go-chi/chi)
- [zerolog](https://github.com/rs/zerolog)
- [lumberjack](https://gopkg.in/natefinch/lumberjack.v2)
- [argon2id](https://github.com/alexedwards/argon2id)
- [go-sqlite3](https://github.com/mattn/go-sqlite3)
- [go-sql-driver/mysql](https://github.com/go-sql-driver/mysql)
- [bbolt](https://github.com/etcd-io/bbolt)
- [lib/pq](https://github.com/lib/pq)
- [viper](https://github.com/spf13/viper)
- [cobra](https://github.com/spf13/cobra)
- [xid](https://github.com/rs/xid)
- [nathanaelle/password](https://github.com/nathanaelle/password)
- [PipeAt](https://github.com/eikenb/pipeat)
- [ZeroConf](https://github.com/grandcat/zeroconf)
- [SB Admin 2](https://github.com/BlackrockDigital/startbootstrap-sb-admin-2)
- [shlex](https://github.com/google/shlex)
- [go-proxyproto](https://github.com/pires/go-proxyproto)
SFTPGo makes use of the third party libraries listed inside [go.mod](./go.mod).

Some code was initially taken from [Pterodactyl sftp server](https://github.com/pterodactyl/sftp-server)
We are very grateful to all the people who contributed with ideas and/or pull requests.

Thank you [ysura](https://www.ysura.com/) for granting me stable access to a test AWS S3 account.

## Sponsors

I'd like to make SFTPGo into a sustainable long term project and your [sponsorship](https://github.com/sponsors/drakkan) will really help :heart:

Thank you to our sponsors!

[<img src="https://www.7digital.com/wp-content/themes/sevendigital/images/top_logo.png" alt="7digital logo">](https://www.7digital.com/)

## License

GNU GPLv3
GNU AGPLv3
SECURITY.md (new file, 12 lines)
@@ -0,0 +1,12 @@

# Security Policy

## Supported Versions

Only the current release of the software is actively supported. If you need
help backporting fixes into an older release, feel free to ask.

## Reporting a Vulnerability

Email your vulnerability information to SFTPGo's maintainer:

Nicola Murino <nicola.murino@gmail.com>
cmd/gen.go (new file, 12 lines)
@@ -0,0 +1,12 @@

package cmd

import "github.com/spf13/cobra"

var genCmd = &cobra.Command{
    Use:   "gen",
    Short: "A collection of useful generators",
}

func init() {
    rootCmd.AddCommand(genCmd)
}
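`gen` itself does nothing: it is only a mount point, and each generator attaches itself from its own `init` function, as `gencompletion.go` and `genman.go` below do. A hypothetical extra generator would follow the same pattern (the `docs` subcommand here is an illustration, not something SFTPGo ships):

```go
package cmd

import (
	"fmt"

	"github.com/spf13/cobra"
)

// genDocsCmd is a hypothetical generator, shown only to illustrate
// the registration pattern; SFTPGo does not ship a "gen docs" command.
var genDocsCmd = &cobra.Command{
	Use:   "docs",
	Short: "Generate example documentation",
	RunE: func(cmd *cobra.Command, args []string) error {
		_, err := fmt.Println("generating docs...")
		return err
	},
}

func init() {
	// attach the new generator under "sftpgo gen", like completion and man
	genCmd.AddCommand(genDocsCmd)
}
```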
cmd/gencompletion.go (new file, 119 lines)
@@ -0,0 +1,119 @@

package cmd

import (
    "os"

    "github.com/spf13/cobra"
)

var genCompletionCmd = &cobra.Command{
    Use:   "completion [bash|zsh|fish|powershell]",
    Short: "Generate the autocompletion script for the specified shell",
    Long: `Generate the autocompletion script for sftpgo for the specified shell.

See each sub-command's help for details on how to use the generated script.
`,
}

var genCompletionBashCmd = &cobra.Command{
    Use:   "bash",
    Short: "Generate the autocompletion script for bash",
    Long: `Generate the autocompletion script for the bash shell.

This script depends on the 'bash-completion' package.
If it is not installed already, you can install it via your OS's package
manager.

To load completions in your current shell session:

$ source <(sftpgo gen completion bash)

To load completions for every new session, execute once:

Linux:
$ sudo sftpgo gen completion bash > /usr/share/bash-completion/completions/sftpgo

MacOS:
$ sudo sftpgo gen completion bash > /usr/local/etc/bash_completion.d/sftpgo

You will need to start a new shell for this setup to take effect.
`,
    DisableFlagsInUseLine: true,
    RunE: func(cmd *cobra.Command, args []string) error {
        return cmd.Root().GenBashCompletionV2(os.Stdout, true)
    },
}

var genCompletionZshCmd = &cobra.Command{
    Use:   "zsh",
    Short: "Generate the autocompletion script for zsh",
    Long: `Generate the autocompletion script for the zsh shell.

If shell completion is not already enabled in your environment you will need
to enable it. You can execute the following once:

$ echo "autoload -U compinit; compinit" >> ~/.zshrc

To load completions for every new session, execute once:

Linux:
$ sftpgo gen completion zsh > "${fpath[1]}/_sftpgo"

macOS:
$ sudo sftpgo gen completion zsh > /usr/local/share/zsh/site-functions/_sftpgo

You will need to start a new shell for this setup to take effect.
`,
    DisableFlagsInUseLine: true,
    RunE: func(cmd *cobra.Command, args []string) error {
        return cmd.Root().GenZshCompletion(os.Stdout)
    },
}

var genCompletionFishCmd = &cobra.Command{
    Use:   "fish",
    Short: "Generate the autocompletion script for fish",
    Long: `Generate the autocompletion script for the fish shell.

To load completions in your current shell session:

$ sftpgo gen completion fish | source

To load completions for every new session, execute once:

$ sftpgo gen completion fish > ~/.config/fish/completions/sftpgo.fish

You will need to start a new shell for this setup to take effect.
`,
    DisableFlagsInUseLine: true,
    RunE: func(cmd *cobra.Command, args []string) error {
        return cmd.Root().GenFishCompletion(os.Stdout, true)
    },
}

var genCompletionPowerShellCmd = &cobra.Command{
    Use:   "powershell",
    Short: "Generate the autocompletion script for powershell",
    Long: `Generate the autocompletion script for powershell.

To load completions in your current shell session:

PS C:\> sftpgo gen completion powershell | Out-String | Invoke-Expression

To load completions for every new session, add the output of the above command
to your powershell profile.
`,
    DisableFlagsInUseLine: true,
    RunE: func(cmd *cobra.Command, args []string) error {
        return cmd.Root().GenPowerShellCompletionWithDesc(os.Stdout)
    },
}

func init() {
    genCompletionCmd.AddCommand(genCompletionBashCmd)
    genCompletionCmd.AddCommand(genCompletionZshCmd)
    genCompletionCmd.AddCommand(genCompletionFishCmd)
    genCompletionCmd.AddCommand(genCompletionPowerShellCmd)

    genCmd.AddCommand(genCompletionCmd)
}
cmd/genman.go (new file, 53 lines)
@@ -0,0 +1,53 @@

package cmd

import (
    "fmt"
    "os"

    "github.com/rs/zerolog"
    "github.com/spf13/cobra"
    "github.com/spf13/cobra/doc"

    "github.com/drakkan/sftpgo/v2/logger"
    "github.com/drakkan/sftpgo/v2/version"
)

var (
    manDir    string
    genManCmd = &cobra.Command{
        Use:   "man",
        Short: "Generate man pages for sftpgo",
        Long: `This command automatically generates up-to-date man pages of SFTPGo's
command-line interface.
By default, it creates the man page files in the "man" directory under the
current directory.
`,
        Run: func(cmd *cobra.Command, args []string) {
            logger.DisableLogger()
            logger.EnableConsoleLogger(zerolog.DebugLevel)
            if _, err := os.Stat(manDir); os.IsNotExist(err) {
                err = os.MkdirAll(manDir, os.ModePerm)
                if err != nil {
                    logger.WarnToConsole("Unable to generate man page files: %v", err)
                    os.Exit(1)
                }
            }
            header := &doc.GenManHeader{
                Section: "1",
                Manual:  "SFTPGo Manual",
                Source:  fmt.Sprintf("SFTPGo %v", version.Get().Version),
            }
            cmd.Root().DisableAutoGenTag = true
            err := doc.GenManTree(cmd.Root(), header, manDir)
            if err != nil {
                logger.WarnToConsole("Unable to generate man page files: %v", err)
                os.Exit(1)
            }
        },
    }
)

func init() {
    genManCmd.Flags().StringVarP(&manDir, "dir", "d", "man", "The directory to write the man pages")
    genCmd.AddCommand(genManCmd)
}
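As a usage note, `sftpgo gen man` writes the pages into the `man` directory under the current directory unless `-d` overrides it; a generated page such as `man/sftpgo.1` can then typically be viewed with `man ./man/sftpgo.1`.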
cmd/initprovider.go
@@ -1,44 +1,64 @@

package cmd

import (
    "github.com/drakkan/sftpgo/config"
    "github.com/drakkan/sftpgo/dataprovider"
    "github.com/drakkan/sftpgo/logger"
    "github.com/drakkan/sftpgo/utils"
    "os"

    "github.com/rs/zerolog"
    "github.com/spf13/cobra"
    "github.com/spf13/viper"

    "github.com/drakkan/sftpgo/v2/config"
    "github.com/drakkan/sftpgo/v2/dataprovider"
    "github.com/drakkan/sftpgo/v2/logger"
    "github.com/drakkan/sftpgo/v2/util"
)

var (
    initProviderCmd = &cobra.Command{
        Use:   "initprovider",
        Short: "Initializes the configured data provider",
        Long: `This command reads the data provider connection details from the specified configuration file and creates the initial structure.
        Short: "Initialize and/or updates the configured data provider",
        Long: `This command reads the data provider connection details from the specified
configuration file and creates the initial structure or update the existing one,
as needed.

Some data providers such as bolt and memory does not require an initialization.
Some data providers such as bolt and memory does not require an initialization
but they could require an update to the existing data after upgrading SFTPGo.

For SQLite provider the database file will be auto created if missing.
For SQLite/bolt providers the database file will be auto-created if missing.

For PostgreSQL and MySQL providers you need to create the configured database, this command will create the required tables.
For PostgreSQL and MySQL providers you need to create the configured database,
this command will create/update the required tables as needed.

To initialize the data provider from the configuration directory simply use:
To initialize/update the data provider from the configuration directory simply use:

sftpgo initprovider
$ sftpgo initprovider

Please take a look at the usage below to customize the options.`,
        Run: func(cmd *cobra.Command, args []string) {
            logger.DisableLogger()
            logger.EnableConsoleLogger(zerolog.DebugLevel)
            configDir = utils.CleanDirInput(configDir)
            config.LoadConfig(configDir, configFile)
            configDir = util.CleanDirInput(configDir)
            err := config.LoadConfig(configDir, configFile)
            if err != nil {
                logger.WarnToConsole("Unable to initialize data provider, config load error: %v", err)
                return
            }
            kmsConfig := config.GetKMSConfig()
            err = kmsConfig.Initialize()
            if err != nil {
                logger.ErrorToConsole("unable to initialize KMS: %v", err)
                os.Exit(1)
            }
            providerConf := config.GetProviderConf()
            logger.DebugToConsole("Initializing provider: %#v config file: %#v", providerConf.Driver, viper.ConfigFileUsed())
            err := dataprovider.InitializeDatabase(providerConf, configDir)
            logger.InfoToConsole("Initializing provider: %#v config file: %#v", providerConf.Driver, viper.ConfigFileUsed())
            err = dataprovider.InitializeDatabase(providerConf, configDir)
            if err == nil {
                logger.DebugToConsole("Data provider successfully initialized")
                logger.InfoToConsole("Data provider successfully initialized/updated")
            } else if err == dataprovider.ErrNoInitRequired {
                logger.InfoToConsole("%v", err.Error())
            } else {
                logger.WarnToConsole("Unable to initialize data provider: %v", err)
                logger.WarnToConsole("Unable to initialize/update the data provider: %v", err)
                os.Exit(1)
            }
        },
    }
@@ -2,24 +2,28 @@ package cmd

import (
    "fmt"
    "os"
    "strconv"

    "github.com/drakkan/sftpgo/service"
    "github.com/drakkan/sftpgo/utils"
    "github.com/spf13/cobra"

    "github.com/drakkan/sftpgo/v2/service"
    "github.com/drakkan/sftpgo/v2/util"
)

var (
    installCmd = &cobra.Command{
        Use:   "install",
        Short: "Install SFTPGo as Windows Service",
        Long: `To install the SFTPGo Windows Service with the default values for the command line flags simply use:
        Long: `To install the SFTPGo Windows Service with the default values for the command
line flags simply use:

sftpgo service install

Please take a look at the usage below to customize the startup options`,
        Run: func(cmd *cobra.Command, args []string) {
            s := service.Service{
                ConfigDir: utils.CleanDirInput(configDir),
                ConfigDir: util.CleanDirInput(configDir),
                ConfigFile: configFile,
                LogFilePath: logFilePath,
                LogMaxSize: logMaxSize,
@@ -27,6 +31,7 @@ Please take a look at the usage below to customize the startup options`,
                LogMaxAge: logMaxAge,
                LogCompress: logCompress,
                LogVerbose: logVerbose,
                LogUTCTime: logUTCTime,
                Shutdown: make(chan bool),
            }
            winService := service.WindowsService{
@@ -40,6 +45,7 @@ Please take a look at the usage below to customize the startup options`,
            err := winService.Install(serviceArgs...)
            if err != nil {
                fmt.Printf("Error installing service: %v\r\n", err)
                os.Exit(1)
            } else {
                fmt.Printf("Service installed!\r\n")
            }
@@ -51,3 +57,42 @@ func init() {
    serviceCmd.AddCommand(installCmd)
    addServeFlags(installCmd)
}

func getCustomServeFlags() []string {
    result := []string{}
    if configDir != defaultConfigDir {
        configDir = util.CleanDirInput(configDir)
        result = append(result, "--"+configDirFlag)
        result = append(result, configDir)
    }
    if configFile != defaultConfigFile {
        result = append(result, "--"+configFileFlag)
        result = append(result, configFile)
    }
    if logFilePath != defaultLogFile {
        result = append(result, "--"+logFilePathFlag)
        result = append(result, logFilePath)
    }
    if logMaxSize != defaultLogMaxSize {
        result = append(result, "--"+logMaxSizeFlag)
        result = append(result, strconv.Itoa(logMaxSize))
    }
    if logMaxBackups != defaultLogMaxBackup {
        result = append(result, "--"+logMaxBackupFlag)
        result = append(result, strconv.Itoa(logMaxBackups))
    }
    if logMaxAge != defaultLogMaxAge {
        result = append(result, "--"+logMaxAgeFlag)
        result = append(result, strconv.Itoa(logMaxAge))
    }
    if logVerbose != defaultLogVerbose {
        result = append(result, "--"+logVerboseFlag+"=false")
    }
    if logUTCTime != defaultLogUTCTime {
        result = append(result, "--"+logUTCTimeFlag+"=true")
    }
    if logCompress != defaultLogCompress {
        result = append(result, "--"+logCompressFlag+"=true")
    }
    return result
}
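To make the flag-forwarding idea in `getCustomServeFlags` concrete: only values that differ from the defaults are serialized back into argv form, so the installed service is started with exactly the overrides given at install time. The snippet below is a self-contained illustration of that logic with made-up helper names, not SFTPGo's actual code:

```go
package main

import (
	"fmt"
	"strconv"
)

// buildServeArgs mirrors the idea behind getCustomServeFlags: only
// flags whose values differ from the defaults are forwarded.
func buildServeArgs(configDir string, logMaxSize int) []string {
	const (
		defaultConfigDir  = "."
		defaultLogMaxSize = 10
	)
	var result []string
	if configDir != defaultConfigDir {
		result = append(result, "--config-dir", configDir)
	}
	if logMaxSize != defaultLogMaxSize {
		result = append(result, "--log-max-size", strconv.Itoa(logMaxSize))
	}
	return result
}

func main() {
	fmt.Println(buildServeArgs(".", 10))           // [] (all defaults)
	fmt.Println(buildServeArgs("/etc/sftpgo", 20)) // [--config-dir /etc/sftpgo --log-max-size 20]
}
```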
cmd/portable.go (451 lines)
@@ -1,59 +1,98 @@

//go:build !noportable
// +build !noportable

package cmd

import (
    "encoding/base64"
    "fmt"
    "io/ioutil"
    "os"
    "path"
    "path/filepath"
    "strings"

    "github.com/drakkan/sftpgo/dataprovider"
    "github.com/drakkan/sftpgo/service"
    "github.com/drakkan/sftpgo/sftpd"
    "github.com/drakkan/sftpgo/vfs"
    "github.com/sftpgo/sdk"
    "github.com/spf13/cobra"

    "github.com/drakkan/sftpgo/v2/common"
    "github.com/drakkan/sftpgo/v2/dataprovider"
    "github.com/drakkan/sftpgo/v2/kms"
    "github.com/drakkan/sftpgo/v2/service"
    "github.com/drakkan/sftpgo/v2/sftpd"
    "github.com/drakkan/sftpgo/v2/version"
    "github.com/drakkan/sftpgo/v2/vfs"
)

var (
    directoryToServe string
    portableSFTPDPort int
    portableAdvertiseService bool
    portableAdvertiseCredentials bool
    portableUsername string
    portablePassword string
    portableLogFile string
    portablePublicKeys []string
    portablePermissions []string
    portableSSHCommands []string
    portableAllowedExtensions []string
    portableDeniedExtensions []string
    portableFsProvider int
    portableS3Bucket string
    portableS3Region string
    portableS3AccessKey string
    portableS3AccessSecret string
    portableS3Endpoint string
    portableS3StorageClass string
    portableS3KeyPrefix string
    portableGCSBucket string
    portableGCSCredentialsFile string
    portableGCSAutoCredentials int
    portableGCSStorageClass string
    portableGCSKeyPrefix string
    portableCmd = &cobra.Command{
    directoryToServe string
    portableSFTPDPort int
    portableAdvertiseService bool
    portableAdvertiseCredentials bool
    portableUsername string
    portablePassword string
    portableLogFile string
    portableLogVerbose bool
    portableLogUTCTime bool
    portablePublicKeys []string
    portablePermissions []string
    portableSSHCommands []string
    portableAllowedPatterns []string
    portableDeniedPatterns []string
    portableFsProvider string
    portableS3Bucket string
    portableS3Region string
    portableS3AccessKey string
    portableS3AccessSecret string
    portableS3Endpoint string
    portableS3StorageClass string
    portableS3ACL string
    portableS3KeyPrefix string
    portableS3ULPartSize int
    portableS3ULConcurrency int
    portableS3ForcePathStyle bool
    portableGCSBucket string
    portableGCSCredentialsFile string
    portableGCSAutoCredentials int
    portableGCSStorageClass string
    portableGCSKeyPrefix string
    portableFTPDPort int
    portableFTPSCert string
    portableFTPSKey string
    portableWebDAVPort int
    portableWebDAVCert string
    portableWebDAVKey string
    portableAzContainer string
    portableAzAccountName string
    portableAzAccountKey string
    portableAzEndpoint string
    portableAzAccessTier string
    portableAzSASURL string
    portableAzKeyPrefix string
    portableAzULPartSize int
    portableAzULConcurrency int
    portableAzUseEmulator bool
    portableCryptPassphrase string
    portableSFTPEndpoint string
    portableSFTPUsername string
    portableSFTPPassword string
    portableSFTPPrivateKeyPath string
    portableSFTPFingerprints []string
    portableSFTPPrefix string
    portableSFTPDisableConcurrentReads bool
    portableSFTPDBufferSize int64
    portableCmd = &cobra.Command{
        Use:   "portable",
        Short: "Serve a single directory",
        Long: `To serve the current working directory with auto generated credentials simply use:
        Short: "Serve a single directory/account",
        Long: `To serve the current working directory with auto generated credentials simply
use:

sftpgo portable
$ sftpgo portable

Please take a look at the usage below to customize the serving parameters`,
        Run: func(cmd *cobra.Command, args []string) {
            portableDir := directoryToServe
            fsProvider := sdk.GetProviderByName(portableFsProvider)
            if !filepath.IsAbs(portableDir) {
                if portableFsProvider == 0 {
                if fsProvider == sdk.LocalFilesystemProvider {
                    portableDir, _ = filepath.Abs(portableDir)
                } else {
                    portableDir = os.TempDir()
@@ -62,150 +101,299 @@ Please take a look at the usage below to customize the serving parameters`,
            permissions := make(map[string][]string)
            permissions["/"] = portablePermissions
            portableGCSCredentials := ""
            if portableFsProvider == 2 && len(portableGCSCredentialsFile) > 0 {
                fi, err := os.Stat(portableGCSCredentialsFile)
            if fsProvider == sdk.GCSFilesystemProvider && portableGCSCredentialsFile != "" {
                contents, err := getFileContents(portableGCSCredentialsFile)
                if err != nil {
                    fmt.Printf("Invalid GCS credentials file: %v\n", err)
                    return
                    fmt.Printf("Unable to get GCS credentials: %v\n", err)
                    os.Exit(1)
                }
                if fi.Size() > 1048576 {
                    fmt.Printf("Invalid GCS credentials file: %#v is too big %v/1048576 bytes\n", portableGCSCredentialsFile,
                        fi.Size())
                    return
                }
                creds, err := ioutil.ReadFile(portableGCSCredentialsFile)
                if err != nil {
                    fmt.Printf("Unable to read credentials file: %v\n", err)
                }
                portableGCSCredentials = base64.StdEncoding.EncodeToString(creds)
                portableGCSCredentials = contents
                portableGCSAutoCredentials = 0
            }
            portableSFTPPrivateKey := ""
            if fsProvider == sdk.SFTPFilesystemProvider && portableSFTPPrivateKeyPath != "" {
                contents, err := getFileContents(portableSFTPPrivateKeyPath)
                if err != nil {
                    fmt.Printf("Unable to get SFTP private key: %v\n", err)
                    os.Exit(1)
                }
                portableSFTPPrivateKey = contents
            }
            if portableFTPDPort >= 0 && len(portableFTPSCert) > 0 && len(portableFTPSKey) > 0 {
                _, err := common.NewCertManager(portableFTPSCert, portableFTPSKey, filepath.Clean(defaultConfigDir),
                    "FTP portable")
                if err != nil {
                    fmt.Printf("Unable to load FTPS key pair, cert file %#v key file %#v error: %v\n",
                        portableFTPSCert, portableFTPSKey, err)
                    os.Exit(1)
                }
            }
            if portableWebDAVPort > 0 && len(portableWebDAVCert) > 0 && len(portableWebDAVKey) > 0 {
                _, err := common.NewCertManager(portableWebDAVCert, portableWebDAVKey, filepath.Clean(defaultConfigDir),
                    "WebDAV portable")
                if err != nil {
                    fmt.Printf("Unable to load WebDAV key pair, cert file %#v key file %#v error: %v\n",
                        portableWebDAVCert, portableWebDAVKey, err)
                    os.Exit(1)
                }
            }
            service := service.Service{
                ConfigDir: filepath.Clean(defaultConfigDir),
                ConfigFile: defaultConfigName,
                ConfigFile: defaultConfigFile,
                LogFilePath: portableLogFile,
                LogMaxSize: defaultLogMaxSize,
                LogMaxBackups: defaultLogMaxBackup,
                LogMaxAge: defaultLogMaxAge,
                LogCompress: defaultLogCompress,
                LogVerbose: defaultLogVerbose,
                LogVerbose: portableLogVerbose,
                LogUTCTime: portableLogUTCTime,
                Shutdown: make(chan bool),
                PortableMode: 1,
                PortableUser: dataprovider.User{
                    Username: portableUsername,
                    Password: portablePassword,
                    PublicKeys: portablePublicKeys,
                    Permissions: permissions,
                    HomeDir: portableDir,
                    Status: 1,
                    FsConfig: dataprovider.Filesystem{
                        Provider: portableFsProvider,
                        S3Config: vfs.S3FsConfig{
                            Bucket: portableS3Bucket,
                            Region: portableS3Region,
                            AccessKey: portableS3AccessKey,
                            AccessSecret: portableS3AccessSecret,
                            Endpoint: portableS3Endpoint,
                            StorageClass: portableS3StorageClass,
                            KeyPrefix: portableS3KeyPrefix,
                        },
                        GCSConfig: vfs.GCSFsConfig{
                            Bucket: portableGCSBucket,
                            Credentials: portableGCSCredentials,
                            AutomaticCredentials: portableGCSAutoCredentials,
                            StorageClass: portableGCSStorageClass,
                            KeyPrefix: portableGCSKeyPrefix,
                        },
                    BaseUser: sdk.BaseUser{
                        Username: portableUsername,
                        Password: portablePassword,
                        PublicKeys: portablePublicKeys,
                        Permissions: permissions,
                        HomeDir: portableDir,
                        Status: 1,
                    },
                    Filters: dataprovider.UserFilters{
                        FileExtensions: parseFileExtensionsFilters(),
                        BaseUserFilters: sdk.BaseUserFilters{
                            FilePatterns: parsePatternsFilesFilters(),
                        },
                    },
                    FsConfig: vfs.Filesystem{
                        Provider: sdk.GetProviderByName(portableFsProvider),
                        S3Config: vfs.S3FsConfig{
                            BaseS3FsConfig: sdk.BaseS3FsConfig{
                                Bucket: portableS3Bucket,
                                Region: portableS3Region,
                                AccessKey: portableS3AccessKey,
                                Endpoint: portableS3Endpoint,
                                StorageClass: portableS3StorageClass,
                                ACL: portableS3ACL,
                                KeyPrefix: portableS3KeyPrefix,
                                UploadPartSize: int64(portableS3ULPartSize),
                                UploadConcurrency: portableS3ULConcurrency,
                                ForcePathStyle: portableS3ForcePathStyle,
                            },
                            AccessSecret: kms.NewPlainSecret(portableS3AccessSecret),
                        },
                        GCSConfig: vfs.GCSFsConfig{
                            BaseGCSFsConfig: sdk.BaseGCSFsConfig{
                                Bucket: portableGCSBucket,
                                AutomaticCredentials: portableGCSAutoCredentials,
                                StorageClass: portableGCSStorageClass,
                                KeyPrefix: portableGCSKeyPrefix,
                            },
                            Credentials: kms.NewPlainSecret(portableGCSCredentials),
                        },
                        AzBlobConfig: vfs.AzBlobFsConfig{
                            BaseAzBlobFsConfig: sdk.BaseAzBlobFsConfig{
                                Container: portableAzContainer,
                                AccountName: portableAzAccountName,
                                Endpoint: portableAzEndpoint,
                                AccessTier: portableAzAccessTier,
                                KeyPrefix: portableAzKeyPrefix,
                                UseEmulator: portableAzUseEmulator,
                                UploadPartSize: int64(portableAzULPartSize),
                                UploadConcurrency: portableAzULConcurrency,
                            },
                            AccountKey: kms.NewPlainSecret(portableAzAccountKey),
                            SASURL: kms.NewPlainSecret(portableAzSASURL),
                        },
                        CryptConfig: vfs.CryptFsConfig{
                            Passphrase: kms.NewPlainSecret(portableCryptPassphrase),
                        },
                        SFTPConfig: vfs.SFTPFsConfig{
                            BaseSFTPFsConfig: sdk.BaseSFTPFsConfig{
                                Endpoint: portableSFTPEndpoint,
                                Username: portableSFTPUsername,
                                Fingerprints: portableSFTPFingerprints,
                                Prefix: portableSFTPPrefix,
                                DisableCouncurrentReads: portableSFTPDisableConcurrentReads,
                                BufferSize: portableSFTPDBufferSize,
                            },
                            Password: kms.NewPlainSecret(portableSFTPPassword),
                            PrivateKey: kms.NewPlainSecret(portableSFTPPrivateKey),
                        },
                    },
                },
            }
            if err := service.StartPortableMode(portableSFTPDPort, portableSSHCommands, portableAdvertiseService,
                portableAdvertiseCredentials); err == nil {
            if err := service.StartPortableMode(portableSFTPDPort, portableFTPDPort, portableWebDAVPort, portableSSHCommands, portableAdvertiseService,
                portableAdvertiseCredentials, portableFTPSCert, portableFTPSKey, portableWebDAVCert, portableWebDAVKey); err == nil {
                service.Wait()
                if service.Error == nil {
                    os.Exit(0)
                }
            }
            os.Exit(1)
        },
    }
)

func init() {
    portableCmd.Flags().StringVarP(&directoryToServe, "directory", "d", ".",
        "Path to the directory to serve. This can be an absolute path or a path relative to the current directory")
    portableCmd.Flags().IntVarP(&portableSFTPDPort, "sftpd-port", "s", 0, "0 means a random non privileged port")
    version.AddFeature("+portable")

    portableCmd.Flags().StringVarP(&directoryToServe, "directory", "d", ".", `Path to the directory to serve.
This can be an absolute path or a path
relative to the current directory
`)
    portableCmd.Flags().IntVarP(&portableSFTPDPort, "sftpd-port", "s", 0, `0 means a random unprivileged port,
< 0 disabled`)
    portableCmd.Flags().IntVar(&portableFTPDPort, "ftpd-port", -1, `0 means a random unprivileged port,
< 0 disabled`)
    portableCmd.Flags().IntVar(&portableWebDAVPort, "webdav-port", -1, `0 means a random unprivileged port,
< 0 disabled`)
    portableCmd.Flags().StringSliceVarP(&portableSSHCommands, "ssh-commands", "c", sftpd.GetDefaultSSHCommands(),
        "SSH commands to enable. \"*\" means any supported SSH command including scp")
    portableCmd.Flags().StringVarP(&portableUsername, "username", "u", "", "Leave empty to use an auto generated value")
    portableCmd.Flags().StringVarP(&portablePassword, "password", "p", "", "Leave empty to use an auto generated value")
        `SSH commands to enable.
"*" means any supported SSH command
including scp
`)
    portableCmd.Flags().StringVarP(&portableUsername, "username", "u", "", `Leave empty to use an auto generated
value`)
    portableCmd.Flags().StringVarP(&portablePassword, "password", "p", "", `Leave empty to use an auto generated
value`)
    portableCmd.Flags().StringVarP(&portableLogFile, logFilePathFlag, "l", "", "Leave empty to disable logging")
    portableCmd.Flags().BoolVarP(&portableLogVerbose, logVerboseFlag, "v", false, "Enable verbose logs")
    portableCmd.Flags().BoolVar(&portableLogUTCTime, logUTCTimeFlag, false, "Use UTC time for logging")
    portableCmd.Flags().StringSliceVarP(&portablePublicKeys, "public-key", "k", []string{}, "")
    portableCmd.Flags().StringSliceVarP(&portablePermissions, "permissions", "g", []string{"list", "download"},
        "User's permissions. \"*\" means any permission")
    portableCmd.Flags().StringArrayVar(&portableAllowedExtensions, "allowed-extensions", []string{},
        "Allowed file extensions case insensitive. The format is /dir::ext1,ext2. For example: \"/somedir::.jpg,.png\"")
    portableCmd.Flags().StringArrayVar(&portableDeniedExtensions, "denied-extensions", []string{},
        "Denied file extensions case insensitive. The format is /dir::ext1,ext2. For example: \"/somedir::.jpg,.png\"")
    portableCmd.Flags().BoolVarP(&portableAdvertiseService, "advertise-service", "S", true,
        "Advertise SFTP service using multicast DNS")
        `User's permissions. "*" means any
permission`)
    portableCmd.Flags().StringArrayVar(&portableAllowedPatterns, "allowed-patterns", []string{},
        `Allowed file patterns case insensitive.
The format is:
/dir::pattern1,pattern2.
For example: "/somedir::*.jpg,a*b?.png"`)
    portableCmd.Flags().StringArrayVar(&portableDeniedPatterns, "denied-patterns", []string{},
        `Denied file patterns case insensitive.
The format is:
/dir::pattern1,pattern2.
For example: "/somedir::*.jpg,a*b?.png"`)
    portableCmd.Flags().BoolVarP(&portableAdvertiseService, "advertise-service", "S", false,
        `Advertise configured services using
multicast DNS`)
    portableCmd.Flags().BoolVarP(&portableAdvertiseCredentials, "advertise-credentials", "C", false,
        "If the SFTP service is advertised via multicast DNS, this flag allows to put username/password inside the advertised TXT record")
    portableCmd.Flags().IntVarP(&portableFsProvider, "fs-provider", "f", 0, "0 means local filesystem, 1 Amazon S3 compatible, "+
        "2 Google Cloud Storage")
        `If the SFTP/FTP service is
advertised via multicast DNS, this
flag allows to put username/password
inside the advertised TXT record`)
    portableCmd.Flags().StringVarP(&portableFsProvider, "fs-provider", "f", "osfs", `osfs => local filesystem (legacy value: 0)
s3fs => AWS S3 compatible (legacy: 1)
gcsfs => Google Cloud Storage (legacy: 2)
azblobfs => Azure Blob Storage (legacy: 3)
cryptfs => Encrypted local filesystem (legacy: 4)
sftpfs => SFTP (legacy: 5)`)
    portableCmd.Flags().StringVar(&portableS3Bucket, "s3-bucket", "", "")
    portableCmd.Flags().StringVar(&portableS3Region, "s3-region", "", "")
    portableCmd.Flags().StringVar(&portableS3AccessKey, "s3-access-key", "", "")
    portableCmd.Flags().StringVar(&portableS3AccessSecret, "s3-access-secret", "", "")
    portableCmd.Flags().StringVar(&portableS3Endpoint, "s3-endpoint", "", "")
    portableCmd.Flags().StringVar(&portableS3StorageClass, "s3-storage-class", "", "")
    portableCmd.Flags().StringVar(&portableS3KeyPrefix, "s3-key-prefix", "", "Allows to restrict access to the virtual folder "+
        "identified by this prefix and its contents")
    portableCmd.Flags().StringVar(&portableS3ACL, "s3-acl", "", "")
    portableCmd.Flags().StringVar(&portableS3KeyPrefix, "s3-key-prefix", "", `Allows to restrict access to the
virtual folder identified by this
prefix and its contents`)
    portableCmd.Flags().IntVar(&portableS3ULPartSize, "s3-upload-part-size", 5, `The buffer size for multipart uploads
(MB)`)
    portableCmd.Flags().IntVar(&portableS3ULConcurrency, "s3-upload-concurrency", 2, `How many parts are uploaded in
parallel`)
    portableCmd.Flags().BoolVar(&portableS3ForcePathStyle, "s3-force-path-style", false, `Force path style bucket URL`)
    portableCmd.Flags().StringVar(&portableGCSBucket, "gcs-bucket", "", "")
    portableCmd.Flags().StringVar(&portableGCSStorageClass, "gcs-storage-class", "", "")
    portableCmd.Flags().StringVar(&portableGCSKeyPrefix, "gcs-key-prefix", "", "Allows to restrict access to the virtual folder "+
        "identified by this prefix and its contents")
    portableCmd.Flags().StringVar(&portableGCSCredentialsFile, "gcs-credentials-file", "", "Google Cloud Storage JSON credentials file")
    portableCmd.Flags().IntVar(&portableGCSAutoCredentials, "gcs-automatic-credentials", 1, "0 means explicit credentials using a JSON "+
        "credentials file, 1 automatic")
    portableCmd.Flags().StringVar(&portableGCSKeyPrefix, "gcs-key-prefix", "", `Allows to restrict access to the
virtual folder identified by this
prefix and its contents`)
    portableCmd.Flags().StringVar(&portableGCSCredentialsFile, "gcs-credentials-file", "", `Google Cloud Storage JSON credentials
file`)
    portableCmd.Flags().IntVar(&portableGCSAutoCredentials, "gcs-automatic-credentials", 1, `0 means explicit credentials using
a JSON credentials file, 1 automatic
`)
    portableCmd.Flags().StringVar(&portableFTPSCert, "ftpd-cert", "", "Path to the certificate file for FTPS")
    portableCmd.Flags().StringVar(&portableFTPSKey, "ftpd-key", "", "Path to the key file for FTPS")
    portableCmd.Flags().StringVar(&portableWebDAVCert, "webdav-cert", "", `Path to the certificate file for WebDAV
over HTTPS`)
    portableCmd.Flags().StringVar(&portableWebDAVKey, "webdav-key", "", `Path to the key file for WebDAV over
HTTPS`)
    portableCmd.Flags().StringVar(&portableAzContainer, "az-container", "", "")
    portableCmd.Flags().StringVar(&portableAzAccountName, "az-account-name", "", "")
    portableCmd.Flags().StringVar(&portableAzAccountKey, "az-account-key", "", "")
    portableCmd.Flags().StringVar(&portableAzSASURL, "az-sas-url", "", `Shared access signature URL`)
    portableCmd.Flags().StringVar(&portableAzEndpoint, "az-endpoint", "", `Leave empty to use the default:
"blob.core.windows.net"`)
    portableCmd.Flags().StringVar(&portableAzAccessTier, "az-access-tier", "", `Leave empty to use the default
container setting`)
    portableCmd.Flags().StringVar(&portableAzKeyPrefix, "az-key-prefix", "", `Allows to restrict access to the
virtual folder identified by this
prefix and its contents`)
    portableCmd.Flags().IntVar(&portableAzULPartSize, "az-upload-part-size", 4, `The buffer size for multipart uploads
(MB)`)
    portableCmd.Flags().IntVar(&portableAzULConcurrency, "az-upload-concurrency", 2, `How many parts are uploaded in
parallel`)
    portableCmd.Flags().BoolVar(&portableAzUseEmulator, "az-use-emulator", false, "")
    portableCmd.Flags().StringVar(&portableCryptPassphrase, "crypto-passphrase", "", `Passphrase for encryption/decryption`)
    portableCmd.Flags().StringVar(&portableSFTPEndpoint, "sftp-endpoint", "", `SFTP endpoint as host:port for SFTP
provider`)
    portableCmd.Flags().StringVar(&portableSFTPUsername, "sftp-username", "", `SFTP user for SFTP provider`)
    portableCmd.Flags().StringVar(&portableSFTPPassword, "sftp-password", "", `SFTP password for SFTP provider`)
    portableCmd.Flags().StringVar(&portableSFTPPrivateKeyPath, "sftp-key-path", "", `SFTP private key path for SFTP provider`)
    portableCmd.Flags().StringSliceVar(&portableSFTPFingerprints, "sftp-fingerprints", []string{}, `SFTP fingerprints to verify remote host
key for SFTP provider`)
    portableCmd.Flags().StringVar(&portableSFTPPrefix, "sftp-prefix", "", `SFTP prefix allows restrict all
operations to a given path within the
remote SFTP server`)
    portableCmd.Flags().BoolVar(&portableSFTPDisableConcurrentReads, "sftp-disable-concurrent-reads", false, `Concurrent reads are safe to use and
disabling them will degrade performance.
Disable for read once servers`)
    portableCmd.Flags().Int64Var(&portableSFTPDBufferSize, "sftp-buffer-size", 0, `The size of the buffer (in MB) to use
for transfers. By enabling buffering,
the reads and writes, from/to the
remote SFTP server, are split in
multiple concurrent requests and this
allows data to be transferred at a
faster rate, over high latency networks,
by overlapping round-trip times`)
    rootCmd.AddCommand(portableCmd)
}

func parseFileExtensionsFilters() []dataprovider.ExtensionsFilter {
    var extensions []dataprovider.ExtensionsFilter
    for _, val := range portableAllowedExtensions {
        p, exts := getExtensionsFilterValues(strings.TrimSpace(val))
        if len(p) > 0 {
            extensions = append(extensions, dataprovider.ExtensionsFilter{
                Path: path.Clean(p),
                AllowedExtensions: exts,
                DeniedExtensions: []string{},
func parsePatternsFilesFilters() []sdk.PatternsFilter {
    var patterns []sdk.PatternsFilter
    for _, val := range portableAllowedPatterns {
        p, exts := getPatternsFilterValues(strings.TrimSpace(val))
        if p != "" {
            patterns = append(patterns, sdk.PatternsFilter{
                Path: path.Clean(p),
                AllowedPatterns: exts,
                DeniedPatterns: []string{},
            })
        }
    }
    for _, val := range portableDeniedExtensions {
        p, exts := getExtensionsFilterValues(strings.TrimSpace(val))
        if len(p) > 0 {
    for _, val := range portableDeniedPatterns {
        p, exts := getPatternsFilterValues(strings.TrimSpace(val))
        if p != "" {
            found := false
            for index, e := range extensions {
            for index, e := range patterns {
                if path.Clean(e.Path) == path.Clean(p) {
                    extensions[index].DeniedExtensions = append(extensions[index].DeniedExtensions, exts...)
                    patterns[index].DeniedPatterns = append(patterns[index].DeniedPatterns, exts...)
                    found = true
                    break
                }
            }
            if !found {
                extensions = append(extensions, dataprovider.ExtensionsFilter{
                    Path: path.Clean(p),
                    AllowedExtensions: []string{},
                    DeniedExtensions: exts,
                patterns = append(patterns, sdk.PatternsFilter{
                    Path: path.Clean(p),
                    AllowedPatterns: []string{},
                    DeniedPatterns: exts,
                })
            }
        }
    }
    return extensions
    return patterns
}

func getExtensionsFilterValues(value string) (string, []string) {
func getPatternsFilterValues(value string) (string, []string) {
    if strings.Contains(value, "::") {
        dirExts := strings.Split(value, "::")
        if len(dirExts) > 1 {
@@ -213,14 +401,29 @@ func getExtensionsFilterValues(value string) (string, []string) {
            exts := []string{}
            for _, e := range strings.Split(dirExts[1], ",") {
                cleanedExt := strings.TrimSpace(e)
                if len(cleanedExt) > 0 {
                if cleanedExt != "" {
                    exts = append(exts, cleanedExt)
                }
            }
            if len(dir) > 0 && len(exts) > 0 {
            if dir != "" && len(exts) > 0 {
                return dir, exts
            }
        }
    }
    return "", nil
}

func getFileContents(name string) (string, error) {
    fi, err := os.Stat(name)
    if err != nil {
        return "", err
    }
    if fi.Size() > 1048576 {
        return "", fmt.Errorf("%#v is too big %v/1048576 bytes", name, fi.Size())
    }
    contents, err := os.ReadFile(name)
    if err != nil {
        return "", err
    }
    return string(contents), nil
}
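A self-contained illustration of the pattern parsing used by `getPatternsFilterValues` above: a flag value such as `/somedir::*.jpg,a*b?.png` splits on `::` into a directory and a comma-separated pattern list, with empty entries dropped. The function below is a hypothetical standalone sketch, not the actual SFTPGo helper:

```go
package main

import (
	"fmt"
	"strings"
)

// parseFilterValue mirrors the idea behind getPatternsFilterValues:
// "/dir::pat1,pat2" => ("/dir", ["pat1", "pat2"]).
func parseFilterValue(value string) (string, []string) {
	if !strings.Contains(value, "::") {
		return "", nil
	}
	parts := strings.SplitN(value, "::", 2)
	dir := strings.TrimSpace(parts[0])
	var patterns []string
	for _, p := range strings.Split(parts[1], ",") {
		if cleaned := strings.TrimSpace(p); cleaned != "" {
			patterns = append(patterns, cleaned)
		}
	}
	if dir == "" || len(patterns) == 0 {
		return "", nil
	}
	return dir, patterns
}

func main() {
	dir, patterns := parseFilterValue("/somedir::*.jpg,a*b?.png")
	fmt.Println(dir, patterns) // /somedir [*.jpg a*b?.png]
}
```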
cmd/portable_disabled.go (new file, 10 lines)
@@ -0,0 +1,10 @@

//go:build noportable
// +build noportable

package cmd

import "github.com/drakkan/sftpgo/v2/version"

func init() {
    version.AddFeature("-portable")
}
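The paired `//go:build noportable` / `// +build noportable` constraints make this file the mirror image of cmd/portable.go: building with `go build -tags noportable` compiles this stub instead, so the portable command is left out of the binary and the version string reports `-portable` rather than `+portable`.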
@@ -2,9 +2,11 @@ package cmd

import (
    "fmt"
    "os"

    "github.com/drakkan/sftpgo/service"
    "github.com/spf13/cobra"

    "github.com/drakkan/sftpgo/v2/service"
)

var (
@@ -19,9 +21,10 @@ var (
            }
            err := s.Reload()
            if err != nil {
                fmt.Printf("Error reloading service: %v\r\n", err)
                fmt.Printf("Error sending reload signal: %v\r\n", err)
                os.Exit(1)
            } else {
                fmt.Printf("Service reloaded!\r\n")
                fmt.Printf("Reload signal sent!\r\n")
            }
        },
    }
cmd/resetprovider.go (new file, 75 lines)
@@ -0,0 +1,75 @@

package cmd

import (
    "bufio"
    "os"
    "strings"

    "github.com/rs/zerolog"
    "github.com/spf13/cobra"
    "github.com/spf13/viper"

    "github.com/drakkan/sftpgo/v2/config"
    "github.com/drakkan/sftpgo/v2/dataprovider"
    "github.com/drakkan/sftpgo/v2/logger"
    "github.com/drakkan/sftpgo/v2/util"
)

var (
    resetProviderForce bool
    resetProviderCmd   = &cobra.Command{
        Use:   "resetprovider",
        Short: "Reset the configured provider, any data will be lost",
        Long: `This command reads the data provider connection details from the specified
configuration file and resets the provider by deleting all data and schemas.
This command is not supported for the memory provider.

Please take a look at the usage below to customize the options.`,
        Run: func(cmd *cobra.Command, args []string) {
            logger.DisableLogger()
            logger.EnableConsoleLogger(zerolog.DebugLevel)
            configDir = util.CleanDirInput(configDir)
            err := config.LoadConfig(configDir, configFile)
            if err != nil {
                logger.WarnToConsole("Unable to initialize data provider, config load error: %v", err)
                os.Exit(1)
            }
            kmsConfig := config.GetKMSConfig()
            err = kmsConfig.Initialize()
            if err != nil {
                logger.ErrorToConsole("unable to initialize KMS: %v", err)
                os.Exit(1)
            }
            providerConf := config.GetProviderConf()
            if !resetProviderForce {
                logger.WarnToConsole("You are about to delete all the SFTPGo data for provider %#v, config file: %#v",
                    providerConf.Driver, viper.ConfigFileUsed())
                logger.WarnToConsole("Are you sure? (Y/n)")
                reader := bufio.NewReader(os.Stdin)
                answer, err := reader.ReadString('\n')
                if err != nil {
                    logger.ErrorToConsole("unable to read your answer: %v", err)
                    os.Exit(1)
                }
                if strings.ToUpper(strings.TrimSpace(answer)) != "Y" {
                    logger.InfoToConsole("command aborted")
                    os.Exit(1)
                }
            }
            logger.InfoToConsole("Resetting provider: %#v, config file: %#v", providerConf.Driver, viper.ConfigFileUsed())
            err = dataprovider.ResetDatabase(providerConf, configDir)
            if err != nil {
                logger.WarnToConsole("Error resetting provider: %v", err)
                os.Exit(1)
            }
            logger.InfoToConsole("The data provider was successfully reset")
        },
    }
)

func init() {
    addConfigFlags(resetProviderCmd)
    resetProviderCmd.Flags().BoolVar(&resetProviderForce, "force", false, `reset the provider without asking for confirmation`)

    rootCmd.AddCommand(resetProviderCmd)
}
cmd/revertprovider.go (new file, 63 lines)
@@ -0,0 +1,63 @@

package cmd

import (
    "os"

    "github.com/rs/zerolog"
    "github.com/spf13/cobra"
    "github.com/spf13/viper"

    "github.com/drakkan/sftpgo/v2/config"
    "github.com/drakkan/sftpgo/v2/dataprovider"
    "github.com/drakkan/sftpgo/v2/logger"
    "github.com/drakkan/sftpgo/v2/util"
)

var (
    revertProviderTargetVersion int
    revertProviderCmd           = &cobra.Command{
        Use:   "revertprovider",
        Short: "Revert the configured data provider to a previous version",
        Long: `This command reads the data provider connection details from the specified
configuration file and restore the provider schema and/or data to a previous version.
This command is not supported for the memory provider.

Please take a look at the usage below to customize the options.`,
        Run: func(cmd *cobra.Command, args []string) {
            logger.DisableLogger()
            logger.EnableConsoleLogger(zerolog.DebugLevel)
            if revertProviderTargetVersion != 10 {
                logger.WarnToConsole("Unsupported target version, 10 is the only supported one")
                os.Exit(1)
            }
            configDir = util.CleanDirInput(configDir)
            err := config.LoadConfig(configDir, configFile)
            if err != nil {
                logger.WarnToConsole("Unable to initialize data provider, config load error: %v", err)
                os.Exit(1)
            }
            kmsConfig := config.GetKMSConfig()
            err = kmsConfig.Initialize()
            if err != nil {
                logger.ErrorToConsole("unable to initialize KMS: %v", err)
                os.Exit(1)
            }
            providerConf := config.GetProviderConf()
            logger.InfoToConsole("Reverting provider: %#v config file: %#v target version %v", providerConf.Driver,
                viper.ConfigFileUsed(), revertProviderTargetVersion)
            err = dataprovider.RevertDatabase(providerConf, configDir, revertProviderTargetVersion)
            if err != nil {
                logger.WarnToConsole("Error reverting provider: %v", err)
                os.Exit(1)
            }
            logger.InfoToConsole("Data provider successfully reverted")
        },
    }
)

func init() {
    addConfigFlags(revertProviderCmd)
    revertProviderCmd.Flags().IntVar(&revertProviderTargetVersion, "to-version", 10, `10 means the version supported in v2.1.x`)

    rootCmd.AddCommand(revertProviderCmd)
}
cmd/root.go (293 lines)
@@ -4,63 +4,81 @@ package cmd

import (
    "fmt"
    "os"
    "strconv"

    "github.com/drakkan/sftpgo/config"
    "github.com/drakkan/sftpgo/utils"
    "github.com/spf13/cobra"
    "github.com/spf13/viper"

    "github.com/drakkan/sftpgo/v2/version"
)

const (
    logSender = "cmd"
    configDirFlag = "config-dir"
    configDirKey = "config_dir"
    configFileFlag = "config-file"
    configFileKey = "config_file"
    logFilePathFlag = "log-file-path"
    logFilePathKey = "log_file_path"
    logMaxSizeFlag = "log-max-size"
    logMaxSizeKey = "log_max_size"
    logMaxBackupFlag = "log-max-backups"
    logMaxBackupKey = "log_max_backups"
    logMaxAgeFlag = "log-max-age"
    logMaxAgeKey = "log_max_age"
    logCompressFlag = "log-compress"
    logCompressKey = "log_compress"
    logVerboseFlag = "log-verbose"
    logVerboseKey = "log_verbose"
    defaultConfigDir = "."
    defaultConfigName = config.DefaultConfigName
    defaultLogFile = "sftpgo.log"
    defaultLogMaxSize = 10
    defaultLogMaxBackup = 5
    defaultLogMaxAge = 28
    defaultLogCompress = false
    defaultLogVerbose = true
    configDirFlag = "config-dir"
    configDirKey = "config_dir"
    configFileFlag = "config-file"
    configFileKey = "config_file"
    logFilePathFlag = "log-file-path"
    logFilePathKey = "log_file_path"
    logMaxSizeFlag = "log-max-size"
    logMaxSizeKey = "log_max_size"
    logMaxBackupFlag = "log-max-backups"
    logMaxBackupKey = "log_max_backups"
    logMaxAgeFlag = "log-max-age"
    logMaxAgeKey = "log_max_age"
    logCompressFlag = "log-compress"
    logCompressKey = "log_compress"
    logVerboseFlag = "log-verbose"
    logVerboseKey = "log_verbose"
    logUTCTimeFlag = "log-utc-time"
    logUTCTimeKey = "log_utc_time"
    loadDataFromFlag = "loaddata-from"
    loadDataFromKey = "loaddata_from"
    loadDataModeFlag = "loaddata-mode"
    loadDataModeKey = "loaddata_mode"
    loadDataQuotaScanFlag = "loaddata-scan"
    loadDataQuotaScanKey = "loaddata_scan"
    loadDataCleanFlag = "loaddata-clean"
    loadDataCleanKey = "loaddata_clean"
    defaultConfigDir = "."
    defaultConfigFile = ""
    defaultLogFile = "sftpgo.log"
    defaultLogMaxSize = 10
    defaultLogMaxBackup = 5
    defaultLogMaxAge = 28
    defaultLogCompress = false
    defaultLogVerbose = true
    defaultLogUTCTime = false
    defaultLoadDataFrom = ""
    defaultLoadDataMode = 1
    defaultLoadDataQuotaScan = 0
    defaultLoadDataClean = false
)

var (
    configDir string
    configFile string
    logFilePath string
    logMaxSize int
    logMaxBackups int
    logMaxAge int
    logCompress bool
    logVerbose bool
    configDir string
    configFile string
    logFilePath string
    logMaxSize int
    logMaxBackups int
    logMaxAge int
    logCompress bool
    logVerbose bool
    logUTCTime bool
    loadDataFrom string
    loadDataMode int
    loadDataQuotaScan int
    loadDataClean bool

    rootCmd = &cobra.Command{
        Use: "sftpgo",
        Short: "Full featured and highly configurable SFTP server",
        Short: "Fully featured and highly configurable SFTP server",
    }
)

func init() {
    version := utils.GetAppVersion()
    rootCmd.CompletionOptions.DisableDefaultCmd = true
    rootCmd.Flags().BoolP("version", "v", false, "")
    rootCmd.Version = version.GetVersionAsString()
    rootCmd.SetVersionTemplate(`{{printf "SFTPGo version: "}}{{printf "%s" .Version}}
    rootCmd.Version = version.GetAsString()
    rootCmd.SetVersionTemplate(`{{printf "SFTPGo "}}{{printf "%s" .Version}}
`)
}

@@ -75,100 +93,149 @@ func Execute() {

func addConfigFlags(cmd *cobra.Command) {
    viper.SetDefault(configDirKey, defaultConfigDir)
    viper.BindEnv(configDirKey, "SFTPGO_CONFIG_DIR")
    viper.BindEnv(configDirKey, "SFTPGO_CONFIG_DIR") //nolint:errcheck // err is not nil only if the key to bind is missing
    cmd.Flags().StringVarP(&configDir, configDirFlag, "c", viper.GetString(configDirKey),
        "Location for SFTPGo config dir. This directory should contain the \"sftpgo\" configuration file or the configured "+
            "config-file and it is used as the base for files with a relative path (eg. the private keys for the SFTP server, "+
            "the SQLite database if you use SQLite as data provider). This flag can be set using SFTPGO_CONFIG_DIR env var too.")
    viper.BindPFlag(configDirKey, cmd.Flags().Lookup(configDirFlag))
        `Location for the config dir. This directory
is used as the base for files with a relative
path, eg. the private keys for the SFTP
server or the SQLite database if you use
SQLite as data provider.
The configuration file, if not explicitly set,
is looked for in this dir. We support reading
from JSON, TOML, YAML, HCL, envfile and Java
properties config files. The default config
file name is "sftpgo" and therefore
"sftpgo.json", "sftpgo.yaml" and so on are
searched.
This flag can be set using SFTPGO_CONFIG_DIR
env var too.`)
    viper.BindPFlag(configDirKey, cmd.Flags().Lookup(configDirFlag)) //nolint:errcheck

    viper.SetDefault(configFileKey, defaultConfigName)
    viper.BindEnv(configFileKey, "SFTPGO_CONFIG_FILE")
    cmd.Flags().StringVarP(&configFile, configFileFlag, "f", viper.GetString(configFileKey),
        "Name for SFTPGo configuration file. It must be the name of a file stored in config-dir not the absolute path to the "+
            "configuration file. The specified file name must have no extension we automatically load JSON, YAML, TOML, HCL and "+
            "Java properties. Therefore if you set \"sftpgo\" then \"sftpgo.json\", \"sftpgo.yaml\" and so on are searched. "+
            "This flag can be set using SFTPGO_CONFIG_FILE env var too.")
    viper.BindPFlag(configFileKey, cmd.Flags().Lookup(configFileFlag))
    viper.SetDefault(configFileKey, defaultConfigFile)
    viper.BindEnv(configFileKey, "SFTPGO_CONFIG_FILE") //nolint:errcheck
    cmd.Flags().StringVar(&configFile, configFileFlag, viper.GetString(configFileKey),
        `Path to SFTPGo configuration file.
This flag explicitly defines the path, name
and extension of the config file. It must be
an absolute path or a path relative to the
configuration directory. The specified file
name must have a supported extension (JSON,
YAML, TOML, HCL or Java properties).
This flag can be set using SFTPGO_CONFIG_FILE
env var too.`)
    viper.BindPFlag(configFileKey, cmd.Flags().Lookup(configFileFlag)) //nolint:errcheck
}

func addServeFlags(cmd *cobra.Command) {
    addConfigFlags(cmd)

    viper.SetDefault(logFilePathKey, defaultLogFile)
    viper.BindEnv(logFilePathKey, "SFTPGO_LOG_FILE_PATH")
    viper.BindEnv(logFilePathKey, "SFTPGO_LOG_FILE_PATH") //nolint:errcheck
    cmd.Flags().StringVarP(&logFilePath, logFilePathFlag, "l", viper.GetString(logFilePathKey),
        "Location for the log file. Leave empty to write logs to the standard output. This flag can be set using SFTPGO_LOG_FILE_PATH "+
            "env var too.")
    viper.BindPFlag(logFilePathKey, cmd.Flags().Lookup(logFilePathFlag))
        `Location for the log file. Leave empty to write
logs to the standard output. This flag can be
set using SFTPGO_LOG_FILE_PATH env var too.
`)
    viper.BindPFlag(logFilePathKey, cmd.Flags().Lookup(logFilePathFlag)) //nolint:errcheck

    viper.SetDefault(logMaxSizeKey, defaultLogMaxSize)
    viper.BindEnv(logMaxSizeKey, "SFTPGO_LOG_MAX_SIZE")
    viper.BindEnv(logMaxSizeKey, "SFTPGO_LOG_MAX_SIZE") //nolint:errcheck
    cmd.Flags().IntVarP(&logMaxSize, logMaxSizeFlag, "s", viper.GetInt(logMaxSizeKey),
        "Maximum size in megabytes of the log file before it gets rotated. This flag can be set using SFTPGO_LOG_MAX_SIZE "+
            "env var too. It is unused if log-file-path is empty.")
    viper.BindPFlag(logMaxSizeKey, cmd.Flags().Lookup(logMaxSizeFlag))
        `Maximum size in megabytes of the log file
before it gets rotated. This flag can be set
using SFTPGO_LOG_MAX_SIZE env var too. It is
unused if log-file-path is empty.
`)
    viper.BindPFlag(logMaxSizeKey, cmd.Flags().Lookup(logMaxSizeFlag)) //nolint:errcheck

    viper.SetDefault(logMaxBackupKey, defaultLogMaxBackup)
    viper.BindEnv(logMaxBackupKey, "SFTPGO_LOG_MAX_BACKUPS")
    viper.BindEnv(logMaxBackupKey, "SFTPGO_LOG_MAX_BACKUPS") //nolint:errcheck
    cmd.Flags().IntVarP(&logMaxBackups, "log-max-backups", "b", viper.GetInt(logMaxBackupKey),
        "Maximum number of old log files to retain. This flag can be set using SFTPGO_LOG_MAX_BACKUPS env var too. "+
            "It is unused if log-file-path is empty.")
    viper.BindPFlag(logMaxBackupKey, cmd.Flags().Lookup(logMaxBackupFlag))
        `Maximum number of old log files to retain.
This flag can be set using SFTPGO_LOG_MAX_BACKUPS
env var too. It is unused if log-file-path is
empty.`)
    viper.BindPFlag(logMaxBackupKey, cmd.Flags().Lookup(logMaxBackupFlag)) //nolint:errcheck

    viper.SetDefault(logMaxAgeKey, defaultLogMaxAge)
    viper.BindEnv(logMaxAgeKey, "SFTPGO_LOG_MAX_AGE")
    viper.BindEnv(logMaxAgeKey, "SFTPGO_LOG_MAX_AGE") //nolint:errcheck
    cmd.Flags().IntVarP(&logMaxAge, "log-max-age", "a", viper.GetInt(logMaxAgeKey),
        "Maximum number of days to retain old log files. This flag can be set using SFTPGO_LOG_MAX_AGE env var too. "+
            "It is unused if log-file-path is empty.")
    viper.BindPFlag(logMaxAgeKey, cmd.Flags().Lookup(logMaxAgeFlag))
        `Maximum number of days to retain old log files.
This flag can be set using SFTPGO_LOG_MAX_AGE env
var too. It is unused if log-file-path is empty.
`)
    viper.BindPFlag(logMaxAgeKey, cmd.Flags().Lookup(logMaxAgeFlag)) //nolint:errcheck

    viper.SetDefault(logCompressKey, defaultLogCompress)
    viper.BindEnv(logCompressKey, "SFTPGO_LOG_COMPRESS")
    cmd.Flags().BoolVarP(&logCompress, logCompressFlag, "z", viper.GetBool(logCompressKey), "Determine if the rotated "+
        "log files should be compressed using gzip. This flag can be set using SFTPGO_LOG_COMPRESS env var too. "+
        "It is unused if log-file-path is empty.")
    viper.BindPFlag(logCompressKey, cmd.Flags().Lookup(logCompressFlag))
    viper.BindEnv(logCompressKey, "SFTPGO_LOG_COMPRESS") //nolint:errcheck
    cmd.Flags().BoolVarP(&logCompress, logCompressFlag, "z", viper.GetBool(logCompressKey),
        `Determine if the rotated log files
should be compressed using gzip. This flag can
be set using SFTPGO_LOG_COMPRESS env var too.
It is unused if log-file-path is empty.
`)
    viper.BindPFlag(logCompressKey, cmd.Flags().Lookup(logCompressFlag)) //nolint:errcheck

    viper.SetDefault(logVerboseKey, defaultLogVerbose)
|
||||
viper.BindEnv(logVerboseKey, "SFTPGO_LOG_VERBOSE")
|
||||
cmd.Flags().BoolVarP(&logVerbose, logVerboseFlag, "v", viper.GetBool(logVerboseKey), "Enable verbose logs. "+
|
||||
"This flag can be set using SFTPGO_LOG_VERBOSE env var too.")
|
||||
viper.BindPFlag(logVerboseKey, cmd.Flags().Lookup(logVerboseFlag))
|
||||
}
|
||||
viper.BindEnv(logVerboseKey, "SFTPGO_LOG_VERBOSE") //nolint:errcheck
|
||||
cmd.Flags().BoolVarP(&logVerbose, logVerboseFlag, "v", viper.GetBool(logVerboseKey),
|
||||
`Enable verbose logs. This flag can be set
|
||||
using SFTPGO_LOG_VERBOSE env var too.
|
||||
`)
|
||||
viper.BindPFlag(logVerboseKey, cmd.Flags().Lookup(logVerboseFlag)) //nolint:errcheck
|
||||
|
||||
func getCustomServeFlags() []string {
|
||||
result := []string{}
|
||||
if configDir != defaultConfigDir {
|
||||
configDir = utils.CleanDirInput(configDir)
|
||||
result = append(result, "--"+configDirFlag)
|
||||
result = append(result, configDir)
|
||||
}
|
||||
if configFile != defaultConfigName {
|
||||
result = append(result, "--"+configFileFlag)
|
||||
result = append(result, configFile)
|
||||
}
|
||||
if logFilePath != defaultLogFile {
|
||||
result = append(result, "--"+logFilePathFlag)
|
||||
result = append(result, logFilePath)
|
||||
}
|
||||
if logMaxSize != defaultLogMaxSize {
|
||||
result = append(result, "--"+logMaxSizeFlag)
|
||||
result = append(result, strconv.Itoa(logMaxSize))
|
||||
}
|
||||
if logMaxBackups != defaultLogMaxBackup {
|
||||
result = append(result, "--"+logMaxBackupFlag)
|
||||
result = append(result, strconv.Itoa(logMaxBackups))
|
||||
}
|
||||
if logMaxAge != defaultLogMaxAge {
|
||||
result = append(result, "--"+logMaxAgeFlag)
|
||||
result = append(result, strconv.Itoa(logMaxAge))
|
||||
}
|
||||
if logVerbose != defaultLogVerbose {
|
||||
result = append(result, "--"+logVerboseFlag+"=false")
|
||||
}
|
||||
if logCompress != defaultLogCompress {
|
||||
result = append(result, "--"+logCompressFlag+"=true")
|
||||
}
|
||||
return result
|
||||
viper.SetDefault(logUTCTimeKey, defaultLogUTCTime)
|
||||
viper.BindEnv(logUTCTimeKey, "SFTPGO_LOG_UTC_TIME") //nolint:errcheck
|
||||
cmd.Flags().BoolVar(&logUTCTime, logUTCTimeFlag, viper.GetBool(logUTCTimeKey),
|
||||
`Use UTC time for logging. This flag can be set
|
||||
using SFTPGO_LOG_UTC_TIME env var too.
|
||||
`)
|
||||
viper.BindPFlag(logUTCTimeKey, cmd.Flags().Lookup(logUTCTimeFlag)) //nolint:errcheck
|
||||
|
||||
viper.SetDefault(loadDataFromKey, defaultLoadDataFrom)
|
||||
viper.BindEnv(loadDataFromKey, "SFTPGO_LOADDATA_FROM") //nolint:errcheck
|
||||
cmd.Flags().StringVar(&loadDataFrom, loadDataFromFlag, viper.GetString(loadDataFromKey),
|
||||
`Load users and folders from this file.
|
||||
The file must be specified as absolute path
|
||||
and it must contain a backup obtained using
|
||||
the "dumpdata" REST API or compatible content.
|
||||
This flag can be set using SFTPGO_LOADDATA_FROM
|
||||
env var too.
|
||||
`)
|
||||
viper.BindPFlag(loadDataFromKey, cmd.Flags().Lookup(loadDataFromFlag)) //nolint:errcheck
|
||||
|
||||
viper.SetDefault(loadDataModeKey, defaultLoadDataMode)
|
||||
viper.BindEnv(loadDataModeKey, "SFTPGO_LOADDATA_MODE") //nolint:errcheck
|
||||
cmd.Flags().IntVar(&loadDataMode, loadDataModeFlag, viper.GetInt(loadDataModeKey),
|
||||
`Restore mode for data to load:
|
||||
0 - new users are added, existing users are
|
||||
updated
|
||||
1 - New users are added, existing users are
|
||||
not modified
|
||||
This flag can be set using SFTPGO_LOADDATA_MODE
|
||||
env var too.
|
||||
`)
|
||||
viper.BindPFlag(loadDataModeKey, cmd.Flags().Lookup(loadDataModeFlag)) //nolint:errcheck
|
||||
|
||||
viper.SetDefault(loadDataQuotaScanKey, defaultLoadDataQuotaScan)
|
||||
viper.BindEnv(loadDataQuotaScanKey, "SFTPGO_LOADDATA_QUOTA_SCAN") //nolint:errcheck
|
||||
cmd.Flags().IntVar(&loadDataQuotaScan, loadDataQuotaScanFlag, viper.GetInt(loadDataQuotaScanKey),
|
||||
`Quota scan mode after data load:
|
||||
0 - no quota scan
|
||||
1 - scan quota
|
||||
2 - scan quota if the user has quota restrictions
|
||||
This flag can be set using SFTPGO_LOADDATA_QUOTA_SCAN
|
||||
env var too.
|
||||
(default 0)`)
|
||||
viper.BindPFlag(loadDataQuotaScanKey, cmd.Flags().Lookup(loadDataQuotaScanFlag)) //nolint:errcheck
|
||||
|
||||
viper.SetDefault(loadDataCleanKey, defaultLoadDataClean)
|
||||
viper.BindEnv(loadDataCleanKey, "SFTPGO_LOADDATA_CLEAN") //nolint:errcheck
|
||||
cmd.Flags().BoolVar(&loadDataClean, loadDataCleanFlag, viper.GetBool(loadDataCleanKey),
|
||||
`Determine if the loaddata-from file should
|
||||
be removed after a successful load. This flag
|
||||
can be set using SFTPGO_LOADDATA_CLEAN env var
|
||||
too. (default "false")
|
||||
`)
|
||||
viper.BindPFlag(logCompressKey, cmd.Flags().Lookup(logCompressFlag)) //nolint:errcheck
|
||||
}
|
||||
|
||||
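Every flag above follows the same four-step viper pattern: seed a default, bind an environment variable, declare the cobra flag using the resolved viper value as its default, then bind the flag back so command-line values take precedence. A minimal self-contained sketch of the pattern, with illustrative key and env var names that are not taken from the diff:

package main

import (
    "fmt"

    "github.com/spf13/cobra"
    "github.com/spf13/viper"
)

var logFilePath string

func main() {
    cmd := &cobra.Command{
        Use: "demo",
        Run: func(cmd *cobra.Command, args []string) {
            // viper resolves precedence: flag > env var > default
            fmt.Println("log file:", viper.GetString("log_file_path"))
        },
    }
    viper.SetDefault("log_file_path", "demo.log")
    viper.BindEnv("log_file_path", "DEMO_LOG_FILE_PATH") //nolint:errcheck
    cmd.Flags().StringVarP(&logFilePath, "log-file-path", "l",
        viper.GetString("log_file_path"), "Location for the log file")
    viper.BindPFlag("log_file_path", cmd.Flags().Lookup("log-file-path")) //nolint:errcheck
    cmd.Execute() //nolint:errcheck
}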
35
cmd/rotatelogs_windows.go
Normal file
@@ -0,0 +1,35 @@
package cmd

import (
    "fmt"
    "os"

    "github.com/spf13/cobra"

    "github.com/drakkan/sftpgo/v2/service"
)

var (
    rotateLogCmd = &cobra.Command{
        Use:   "rotatelogs",
        Short: "Signal to the running service to rotate the logs",
        Run: func(cmd *cobra.Command, args []string) {
            s := service.WindowsService{
                Service: service.Service{
                    Shutdown: make(chan bool),
                },
            }
            err := s.RotateLogFile()
            if err != nil {
                fmt.Printf("Error sending rotate log file signal to the service: %v\r\n", err)
                os.Exit(1)
            } else {
                fmt.Printf("Rotate log file signal sent!\r\n")
            }
        },
    }
)

func init() {
    serviceCmd.AddCommand(rotateLogCmd)
}
41
cmd/serve.go
@@ -1,35 +1,48 @@
package cmd

import (
    "os"

    "github.com/spf13/cobra"

    "github.com/drakkan/sftpgo/v2/service"
    "github.com/drakkan/sftpgo/v2/util"
)

var (
    serveCmd = &cobra.Command{
        Use:   "serve",
        Short: "Start the SFTPGo service",
        Long: `To start SFTPGo with the default values for the command line flags simply
use:

$ sftpgo serve

Please take a look at the usage below to customize the startup options`,
        Run: func(cmd *cobra.Command, args []string) {
            service := service.Service{
                ConfigDir:         util.CleanDirInput(configDir),
                ConfigFile:        configFile,
                LogFilePath:       logFilePath,
                LogMaxSize:        logMaxSize,
                LogMaxBackups:     logMaxBackups,
                LogMaxAge:         logMaxAge,
                LogCompress:       logCompress,
                LogVerbose:        logVerbose,
                LogUTCTime:        logUTCTime,
                LoadDataFrom:      loadDataFrom,
                LoadDataMode:      loadDataMode,
                LoadDataQuotaScan: loadDataQuotaScan,
                LoadDataClean:     loadDataClean,
                Shutdown:          make(chan bool),
            }
            if err := service.Start(); err == nil {
                service.Wait()
                if service.Error == nil {
                    os.Exit(0)
                }
            }
            os.Exit(1)
        },
    }
)
@@ -7,7 +7,7 @@ import (
var (
    serviceCmd = &cobra.Command{
        Use:   "service",
        Short: "Manage the SFTPGo Windows Service",
    }
)
54
cmd/smtptest.go
Normal file
@@ -0,0 +1,54 @@
package cmd

import (
    "os"

    "github.com/rs/zerolog"
    "github.com/spf13/cobra"

    "github.com/drakkan/sftpgo/v2/config"
    "github.com/drakkan/sftpgo/v2/logger"
    "github.com/drakkan/sftpgo/v2/smtp"
    "github.com/drakkan/sftpgo/v2/util"
)

var (
    smtpTestRecipient string
    smtpTestCmd       = &cobra.Command{
        Use:   "smtptest",
        Short: "Test the SMTP configuration",
        Long: `SFTPGo will try to send a test email to the specified recipient.
If the SMTP configuration is correct you should receive this email.`,
        Run: func(cmd *cobra.Command, args []string) {
            logger.DisableLogger()
            logger.EnableConsoleLogger(zerolog.DebugLevel)
            configDir = util.CleanDirInput(configDir)
            err := config.LoadConfig(configDir, configFile)
            if err != nil {
                logger.WarnToConsole("Unable to initialize data provider, config load error: %v", err)
                os.Exit(1)
            }
            smtpConfig := config.GetSMTPConfig()
            err = smtpConfig.Initialize(configDir)
            if err != nil {
                logger.ErrorToConsole("unable to initialize SMTP configuration: %v", err)
                os.Exit(1)
            }
            err = smtp.SendEmail(smtpTestRecipient, "SFTPGo - Testing Email Settings", "It appears your SFTPGo email is setup correctly!",
                smtp.EmailContentTypeTextPlain)
            if err != nil {
                logger.WarnToConsole("Error sending email: %v", err)
                os.Exit(1)
            }
            logger.InfoToConsole("No errors were reported while sending an email. Please check your inbox to make sure.")
        },
    }
)

func init() {
    addConfigFlags(smtpTestCmd)
    smtpTestCmd.Flags().StringVar(&smtpTestRecipient, "recipient", "", `email address to send the test e-mail to`)
    smtpTestCmd.MarkFlagRequired("recipient") //nolint:errcheck

    rootCmd.AddCommand(smtpTestCmd)
}
@@ -2,20 +2,22 @@ package cmd

import (
    "fmt"
    "os"
    "path/filepath"

    "github.com/spf13/cobra"

    "github.com/drakkan/sftpgo/v2/service"
    "github.com/drakkan/sftpgo/v2/util"
)

var (
    startCmd = &cobra.Command{
        Use:   "start",
        Short: "Start the SFTPGo Windows Service",
        Run: func(cmd *cobra.Command, args []string) {
            configDir = util.CleanDirInput(configDir)
            if !filepath.IsAbs(logFilePath) && util.IsFileInputValid(logFilePath) {
                logFilePath = filepath.Join(configDir, logFilePath)
            }
            s := service.Service{
@@ -27,6 +29,7 @@ var (
                LogMaxAge:   logMaxAge,
                LogCompress: logCompress,
                LogVerbose:  logVerbose,
                LogUTCTime:  logUTCTime,
                Shutdown:    make(chan bool),
            }
            winService := service.WindowsService{
@@ -35,6 +38,7 @@ var (
            err := winService.RunService()
            if err != nil {
                fmt.Printf("Error starting service: %v\r\n", err)
                os.Exit(1)
            } else {
                fmt.Printf("Service started!\r\n")
            }
193
cmd/startsubsys.go
Normal file
@@ -0,0 +1,193 @@
package cmd

import (
    "io"
    "os"
    "os/user"
    "path/filepath"

    "github.com/rs/xid"
    "github.com/rs/zerolog"
    "github.com/spf13/cobra"
    "github.com/spf13/viper"

    "github.com/drakkan/sftpgo/v2/common"
    "github.com/drakkan/sftpgo/v2/config"
    "github.com/drakkan/sftpgo/v2/dataprovider"
    "github.com/drakkan/sftpgo/v2/logger"
    "github.com/drakkan/sftpgo/v2/plugin"
    "github.com/drakkan/sftpgo/v2/sftpd"
    "github.com/drakkan/sftpgo/v2/version"
)

var (
    logJournalD     = false
    preserveHomeDir = false
    baseHomeDir     = ""
    subsystemCmd    = &cobra.Command{
        Use:   "startsubsys",
        Short: "Use sftpgo as SFTP file transfer subsystem",
        Long: `In this mode SFTPGo speaks the server side of SFTP protocol to stdout and
expects client requests from stdin.
This mode is not intended to be called directly, but from sshd using the
Subsystem option.
For example adding a line like this one in "/etc/ssh/sshd_config":

Subsystem    sftp    sftpgo startsubsys

Command-line flags should be specified in the Subsystem declaration.
`,
        Run: func(cmd *cobra.Command, args []string) {
            logSender := "startsubsys"
            connectionID := xid.New().String()
            logLevel := zerolog.DebugLevel
            if !logVerbose {
                logLevel = zerolog.InfoLevel
            }
            logger.SetLogTime(logUTCTime)
            if logJournalD {
                logger.InitJournalDLogger(logLevel)
            } else {
                logger.InitStdErrLogger(logLevel)
            }
            osUser, err := user.Current()
            if err != nil {
                logger.Error(logSender, connectionID, "unable to get the current user: %v", err)
                os.Exit(1)
            }
            username := osUser.Username
            homedir := osUser.HomeDir
            logger.Info(logSender, connectionID, "starting SFTPGo %v as subsystem, user %#v home dir %#v config dir %#v base home dir %#v",
                version.Get(), username, homedir, configDir, baseHomeDir)
            err = config.LoadConfig(configDir, configFile)
            if err != nil {
                logger.Error(logSender, connectionID, "unable to load configuration: %v", err)
                os.Exit(1)
            }
            commonConfig := config.GetCommonConfig()
            // idle connections are managed externally
            commonConfig.IdleTimeout = 0
            config.SetCommonConfig(commonConfig)
            if err := common.Initialize(config.GetCommonConfig()); err != nil {
                logger.Error(logSender, connectionID, "%v", err)
                os.Exit(1)
            }
            kmsConfig := config.GetKMSConfig()
            if err := kmsConfig.Initialize(); err != nil {
                logger.Error(logSender, connectionID, "unable to initialize KMS: %v", err)
                os.Exit(1)
            }
            mfaConfig := config.GetMFAConfig()
            err = mfaConfig.Initialize()
            if err != nil {
                logger.Error(logSender, "", "unable to initialize MFA: %v", err)
                os.Exit(1)
            }
            if err := plugin.Initialize(config.GetPluginsConfig(), logVerbose); err != nil {
                logger.Error(logSender, connectionID, "unable to initialize plugin system: %v", err)
                os.Exit(1)
            }
            smtpConfig := config.GetSMTPConfig()
            err = smtpConfig.Initialize(configDir)
            if err != nil {
                logger.Error(logSender, connectionID, "unable to initialize SMTP configuration: %v", err)
                os.Exit(1)
            }
            dataProviderConf := config.GetProviderConf()
            if dataProviderConf.Driver == dataprovider.SQLiteDataProviderName || dataProviderConf.Driver == dataprovider.BoltDataProviderName {
                logger.Debug(logSender, connectionID, "data provider %#v not supported in subsystem mode, using %#v provider",
                    dataProviderConf.Driver, dataprovider.MemoryDataProviderName)
                dataProviderConf.Driver = dataprovider.MemoryDataProviderName
                dataProviderConf.Name = ""
                dataProviderConf.PreferDatabaseCredentials = true
            }
            config.SetProviderConf(dataProviderConf)
            err = dataprovider.Initialize(dataProviderConf, configDir, false)
            if err != nil {
                logger.Error(logSender, connectionID, "unable to initialize the data provider: %v", err)
                os.Exit(1)
            }
            httpConfig := config.GetHTTPConfig()
            if err := httpConfig.Initialize(configDir); err != nil {
                logger.Error(logSender, connectionID, "unable to initialize http client: %v", err)
                os.Exit(1)
            }
            user, err := dataprovider.UserExists(username)
            if err == nil {
                if user.HomeDir != filepath.Clean(homedir) && !preserveHomeDir {
                    // update the user
                    user.HomeDir = filepath.Clean(homedir)
                    err = dataprovider.UpdateUser(&user, dataprovider.ActionExecutorSystem, "")
                    if err != nil {
                        logger.Error(logSender, connectionID, "unable to update user %#v: %v", username, err)
                        os.Exit(1)
                    }
                }
            } else {
                user.Username = username
                if baseHomeDir != "" && filepath.IsAbs(baseHomeDir) {
                    user.HomeDir = filepath.Join(baseHomeDir, username)
                } else {
                    user.HomeDir = filepath.Clean(homedir)
                }
                logger.Debug(logSender, connectionID, "home dir for new user %#v", user.HomeDir)
                user.Password = connectionID
                user.Permissions = make(map[string][]string)
                user.Permissions["/"] = []string{dataprovider.PermAny}
                err = dataprovider.AddUser(&user, dataprovider.ActionExecutorSystem, "")
                if err != nil {
                    logger.Error(logSender, connectionID, "unable to add user %#v: %v", username, err)
                    os.Exit(1)
                }
            }
            err = sftpd.ServeSubSystemConnection(&user, connectionID, os.Stdin, os.Stdout)
            if err != nil && err != io.EOF {
                logger.Warn(logSender, connectionID, "serving subsystem finished with error: %v", err)
                os.Exit(1)
            }
            logger.Info(logSender, connectionID, "serving subsystem finished")
            plugin.Handler.Cleanup()
            os.Exit(0)
        },
    }
)

func init() {
    subsystemCmd.Flags().BoolVarP(&preserveHomeDir, "preserve-home", "p", false, `If the user already exists, the existing home
directory will not be changed`)
    subsystemCmd.Flags().StringVarP(&baseHomeDir, "base-home-dir", "d", "", `If the user does not exist specify an alternate
starting directory. The home directory for a new
user will be:

[base-home-dir]/[username]

base-home-dir must be an absolute path.`)
    subsystemCmd.Flags().BoolVarP(&logJournalD, "log-to-journald", "j", false, `Send logs to journald. Only available on Linux.
Use:

$ journalctl -o verbose -f

To see full logs.
If not set, the logs will be sent to the standard
error`)

    addConfigFlags(subsystemCmd)

    viper.SetDefault(logVerboseKey, defaultLogVerbose)
    viper.BindEnv(logVerboseKey, "SFTPGO_LOG_VERBOSE") //nolint:errcheck
    subsystemCmd.Flags().BoolVarP(&logVerbose, logVerboseFlag, "v", viper.GetBool(logVerboseKey),
        `Enable verbose logs. This flag can be set
using SFTPGO_LOG_VERBOSE env var too.
`)
    viper.BindPFlag(logVerboseKey, subsystemCmd.Flags().Lookup(logVerboseFlag)) //nolint:errcheck

    viper.SetDefault(logUTCTimeKey, defaultLogUTCTime)
    viper.BindEnv(logUTCTimeKey, "SFTPGO_LOG_UTC_TIME") //nolint:errcheck
    subsystemCmd.Flags().BoolVar(&logUTCTime, logUTCTimeFlag, viper.GetBool(logUTCTimeKey),
        `Use UTC time for logging. This flag can be set
using SFTPGO_LOG_UTC_TIME env var too.
`)
    viper.BindPFlag(logUTCTimeKey, subsystemCmd.Flags().Lookup(logUTCTimeFlag)) //nolint:errcheck

    rootCmd.AddCommand(subsystemCmd)
}
@@ -2,9 +2,11 @@ package cmd

import (
    "fmt"
    "os"

    "github.com/spf13/cobra"

    "github.com/drakkan/sftpgo/v2/service"
)

var (
@@ -20,6 +22,7 @@ var (
            status, err := s.Status()
            if err != nil {
                fmt.Printf("Error querying service status: %v\r\n", err)
                os.Exit(1)
            } else {
                fmt.Printf("Service status: %#v\r\n", status.String())
            }
@@ -2,15 +2,17 @@ package cmd

import (
    "fmt"
    "os"

    "github.com/spf13/cobra"

    "github.com/drakkan/sftpgo/v2/service"
)

var (
    stopCmd = &cobra.Command{
        Use:   "stop",
        Short: "Stop the SFTPGo Windows Service",
        Run: func(cmd *cobra.Command, args []string) {
            s := service.WindowsService{
                Service: service.Service{
@@ -20,6 +22,7 @@ var (
            err := s.Stop()
            if err != nil {
                fmt.Printf("Error stopping service: %v\r\n", err)
                os.Exit(1)
            } else {
                fmt.Printf("Service stopped!\r\n")
            }
@@ -2,15 +2,17 @@ package cmd

import (
    "fmt"
    "os"

    "github.com/spf13/cobra"

    "github.com/drakkan/sftpgo/v2/service"
)

var (
    uninstallCmd = &cobra.Command{
        Use:   "uninstall",
        Short: "Uninstall the SFTPGo Windows Service",
        Run: func(cmd *cobra.Command, args []string) {
            s := service.WindowsService{
                Service: service.Service{
@@ -20,6 +22,7 @@ var (
            err := s.Uninstall()
            if err != nil {
                fmt.Printf("Error removing service: %v\r\n", err)
                os.Exit(1)
            } else {
                fmt.Printf("Service uninstalled\r\n")
            }
261
common/actions.go
Normal file
@@ -0,0 +1,261 @@
package common

import (
    "bytes"
    "context"
    "encoding/json"
    "errors"
    "fmt"
    "net/http"
    "net/url"
    "os"
    "os/exec"
    "path/filepath"
    "strings"
    "time"

    "github.com/sftpgo/sdk"
    "github.com/sftpgo/sdk/plugin/notifier"

    "github.com/drakkan/sftpgo/v2/dataprovider"
    "github.com/drakkan/sftpgo/v2/httpclient"
    "github.com/drakkan/sftpgo/v2/logger"
    "github.com/drakkan/sftpgo/v2/plugin"
    "github.com/drakkan/sftpgo/v2/util"
)

var (
    errUnconfiguredAction    = errors.New("no hook is configured for this action")
    errNoHook                = errors.New("unable to execute action, no hook defined")
    errUnexpectedHTTResponse = errors.New("unexpected HTTP response code")
)

// ProtocolActions defines the actions to execute on file operations and SSH commands
type ProtocolActions struct {
    // Valid values are download, upload, pre-delete, delete, rename, ssh_cmd. Empty slice to disable
    ExecuteOn []string `json:"execute_on" mapstructure:"execute_on"`
    // Actions to be performed synchronously.
    // The pre-delete action is always executed synchronously while the other ones are asynchronous.
    // Executing an action synchronously means that SFTPGo will not return a result code to the client
    // (which is waiting for it) until your hook has completed its execution.
    ExecuteSync []string `json:"execute_sync" mapstructure:"execute_sync"`
    // Absolute path to an external program or an HTTP URL
    Hook string `json:"hook" mapstructure:"hook"`
}
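To make the struct concrete, this is roughly how it could be populated in code; a sketch only, with a hypothetical hook URL that is not taken from the diff (the tests further down use the same pattern):

// Enable notifications for uploads and renames, run the upload hook
// synchronously, and post events to a hypothetical endpoint.
actions := ProtocolActions{
    ExecuteOn:   []string{"upload", "rename"},
    ExecuteSync: []string{"upload"},
    Hook:        "https://hooks.example.com/sftpgo",
}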

var actionHandler ActionHandler = &defaultActionHandler{}

// InitializeActionHandler lets the user choose an action handler implementation.
//
// Do NOT call this function after application initialization.
func InitializeActionHandler(handler ActionHandler) {
    actionHandler = handler
}

func handleUnconfiguredPreAction(operation string) error {
    // for pre-delete we execute the internal handling on error, so we must return errUnconfiguredAction.
    // Other pre actions will deny the operation on error, so if we have no configuration we must return
    // a nil error
    if operation == operationPreDelete {
        return errUnconfiguredAction
    }
    return nil
}

// ExecutePreAction executes a pre-* action and returns the result
func ExecutePreAction(conn *BaseConnection, operation, filePath, virtualPath string, fileSize int64, openFlags int) error {
    var event *notifier.FsEvent
    hasNotifiersPlugin := plugin.Handler.HasNotifiers()
    hasHook := util.IsStringInSlice(operation, Config.Actions.ExecuteOn)
    if !hasHook && !hasNotifiersPlugin {
        return handleUnconfiguredPreAction(operation)
    }
    event = newActionNotification(&conn.User, operation, filePath, virtualPath, "", "", "",
        conn.protocol, conn.GetRemoteIP(), conn.ID, fileSize, openFlags, nil)
    if hasNotifiersPlugin {
        plugin.Handler.NotifyFsEvent(event)
    }
    if !hasHook {
        return handleUnconfiguredPreAction(operation)
    }
    return actionHandler.Handle(event)
}

// ExecuteActionNotification executes the defined hook, if any, for the specified action
func ExecuteActionNotification(conn *BaseConnection, operation, filePath, virtualPath, target, virtualTarget, sshCmd string,
    fileSize int64, err error,
) {
    hasNotifiersPlugin := plugin.Handler.HasNotifiers()
    hasHook := util.IsStringInSlice(operation, Config.Actions.ExecuteOn)
    if !hasHook && !hasNotifiersPlugin {
        return
    }
    notification := newActionNotification(&conn.User, operation, filePath, virtualPath, target, virtualTarget, sshCmd,
        conn.protocol, conn.GetRemoteIP(), conn.ID, fileSize, 0, err)
    if hasNotifiersPlugin {
        plugin.Handler.NotifyFsEvent(notification)
    }

    if hasHook {
        if util.IsStringInSlice(operation, Config.Actions.ExecuteSync) {
            actionHandler.Handle(notification) //nolint:errcheck
            return
        }

        go actionHandler.Handle(notification) //nolint:errcheck
    }
}

// ActionHandler handles a notification for a Protocol Action.
type ActionHandler interface {
    Handle(notification *notifier.FsEvent) error
}
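The interface is small, so swapping the default handler is a one-type job; the sketch below mirrors the actionHandlerStub used in the tests further down. The type name is hypothetical, and it assumes a sync/atomic import:

// countingActionHandler is a sketch of a custom ActionHandler: it only
// counts events instead of invoking an external hook.
type countingActionHandler struct {
    events int64
}

func (h *countingActionHandler) Handle(event *notifier.FsEvent) error {
    atomic.AddInt64(&h.events, 1)
    return nil
}

// Registered during startup, before serving connections:
//   common.InitializeActionHandler(&countingActionHandler{})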

func newActionNotification(
    user *dataprovider.User,
    operation, filePath, virtualPath, target, virtualTarget, sshCmd, protocol, ip, sessionID string,
    fileSize int64,
    openFlags int,
    err error,
) *notifier.FsEvent {
    var bucket, endpoint string
    status := 1

    fsConfig := user.GetFsConfigForPath(virtualPath)

    switch fsConfig.Provider {
    case sdk.S3FilesystemProvider:
        bucket = fsConfig.S3Config.Bucket
        endpoint = fsConfig.S3Config.Endpoint
    case sdk.GCSFilesystemProvider:
        bucket = fsConfig.GCSConfig.Bucket
    case sdk.AzureBlobFilesystemProvider:
        bucket = fsConfig.AzBlobConfig.Container
        if fsConfig.AzBlobConfig.Endpoint != "" {
            endpoint = fsConfig.AzBlobConfig.Endpoint
        }
    case sdk.SFTPFilesystemProvider:
        endpoint = fsConfig.SFTPConfig.Endpoint
    }

    if err == ErrQuotaExceeded {
        status = 3
    } else if err != nil {
        status = 2
    }

    return &notifier.FsEvent{
        Action:            operation,
        Username:          user.Username,
        Path:              filePath,
        TargetPath:        target,
        VirtualPath:       virtualPath,
        VirtualTargetPath: virtualTarget,
        SSHCmd:            sshCmd,
        FileSize:          fileSize,
        FsProvider:        int(fsConfig.Provider),
        Bucket:            bucket,
        Endpoint:          endpoint,
        Status:            status,
        Protocol:          protocol,
        IP:                ip,
        SessionID:         sessionID,
        OpenFlags:         openFlags,
        Timestamp:         time.Now().UnixNano(),
    }
}

type defaultActionHandler struct{}

func (h *defaultActionHandler) Handle(event *notifier.FsEvent) error {
    if !util.IsStringInSlice(event.Action, Config.Actions.ExecuteOn) {
        return errUnconfiguredAction
    }

    if Config.Actions.Hook == "" {
        logger.Warn(event.Protocol, "", "Unable to send notification, no hook is defined")

        return errNoHook
    }

    if strings.HasPrefix(Config.Actions.Hook, "http") {
        return h.handleHTTP(event)
    }

    return h.handleCommand(event)
}

func (h *defaultActionHandler) handleHTTP(event *notifier.FsEvent) error {
    u, err := url.Parse(Config.Actions.Hook)
    if err != nil {
        logger.Error(event.Protocol, "", "Invalid hook %#v for operation %#v: %v",
            Config.Actions.Hook, event.Action, err)
        return err
    }

    startTime := time.Now()
    respCode := 0

    var b bytes.Buffer
    _ = json.NewEncoder(&b).Encode(event)

    resp, err := httpclient.RetryablePost(Config.Actions.Hook, "application/json", &b)
    if err == nil {
        respCode = resp.StatusCode
        resp.Body.Close()

        if respCode != http.StatusOK {
            err = errUnexpectedHTTResponse
        }
    }

    logger.Debug(event.Protocol, "", "notified operation %#v to URL: %v status code: %v, elapsed: %v err: %v",
        event.Action, u.Redacted(), respCode, time.Since(startTime), err)

    return err
}

func (h *defaultActionHandler) handleCommand(event *notifier.FsEvent) error {
    if !filepath.IsAbs(Config.Actions.Hook) {
        err := fmt.Errorf("invalid notification command %#v", Config.Actions.Hook)
        logger.Warn(event.Protocol, "", "unable to execute notification command: %v", err)

        return err
    }

    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    defer cancel()

    cmd := exec.CommandContext(ctx, Config.Actions.Hook)
    cmd.Env = append(os.Environ(), notificationAsEnvVars(event)...)

    startTime := time.Now()
    err := cmd.Run()

    logger.Debug(event.Protocol, "", "executed command %#v, elapsed: %v, error: %v",
        Config.Actions.Hook, time.Since(startTime), err)

    return err
}

func notificationAsEnvVars(event *notifier.FsEvent) []string {
    return []string{
        fmt.Sprintf("SFTPGO_ACTION=%v", event.Action),
        fmt.Sprintf("SFTPGO_ACTION_USERNAME=%v", event.Username),
        fmt.Sprintf("SFTPGO_ACTION_PATH=%v", event.Path),
        fmt.Sprintf("SFTPGO_ACTION_TARGET=%v", event.TargetPath),
        fmt.Sprintf("SFTPGO_ACTION_VIRTUAL_PATH=%v", event.VirtualPath),
        fmt.Sprintf("SFTPGO_ACTION_VIRTUAL_TARGET=%v", event.VirtualTargetPath),
        fmt.Sprintf("SFTPGO_ACTION_SSH_CMD=%v", event.SSHCmd),
        fmt.Sprintf("SFTPGO_ACTION_FILE_SIZE=%v", event.FileSize),
        fmt.Sprintf("SFTPGO_ACTION_FS_PROVIDER=%v", event.FsProvider),
        fmt.Sprintf("SFTPGO_ACTION_BUCKET=%v", event.Bucket),
        fmt.Sprintf("SFTPGO_ACTION_ENDPOINT=%v", event.Endpoint),
        fmt.Sprintf("SFTPGO_ACTION_STATUS=%v", event.Status),
        fmt.Sprintf("SFTPGO_ACTION_PROTOCOL=%v", event.Protocol),
        fmt.Sprintf("SFTPGO_ACTION_IP=%v", event.IP),
        fmt.Sprintf("SFTPGO_ACTION_SESSION_ID=%v", event.SessionID),
        fmt.Sprintf("SFTPGO_ACTION_OPEN_FLAGS=%v", event.OpenFlags),
        fmt.Sprintf("SFTPGO_ACTION_TIMESTAMP=%v", event.Timestamp),
    }
}
295
common/actions_test.go
Normal file
@@ -0,0 +1,295 @@
package common

import (
    "errors"
    "fmt"
    "os"
    "os/exec"
    "path/filepath"
    "runtime"
    "testing"

    "github.com/lithammer/shortuuid/v3"
    "github.com/rs/xid"
    "github.com/sftpgo/sdk"
    "github.com/sftpgo/sdk/plugin/notifier"
    "github.com/stretchr/testify/assert"

    "github.com/drakkan/sftpgo/v2/dataprovider"
    "github.com/drakkan/sftpgo/v2/plugin"
    "github.com/drakkan/sftpgo/v2/vfs"
)

func TestNewActionNotification(t *testing.T) {
    user := &dataprovider.User{
        BaseUser: sdk.BaseUser{
            Username: "username",
        },
    }
    user.FsConfig.Provider = sdk.LocalFilesystemProvider
    user.FsConfig.S3Config = vfs.S3FsConfig{
        BaseS3FsConfig: sdk.BaseS3FsConfig{
            Bucket:   "s3bucket",
            Endpoint: "endpoint",
        },
    }
    user.FsConfig.GCSConfig = vfs.GCSFsConfig{
        BaseGCSFsConfig: sdk.BaseGCSFsConfig{
            Bucket: "gcsbucket",
        },
    }
    user.FsConfig.AzBlobConfig = vfs.AzBlobFsConfig{
        BaseAzBlobFsConfig: sdk.BaseAzBlobFsConfig{
            Container: "azcontainer",
            Endpoint:  "azendpoint",
        },
    }
    user.FsConfig.SFTPConfig = vfs.SFTPFsConfig{
        BaseSFTPFsConfig: sdk.BaseSFTPFsConfig{
            Endpoint: "sftpendpoint",
        },
    }
    sessionID := xid.New().String()
    a := newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSFTP, "", sessionID,
        123, 0, errors.New("fake error"))
    assert.Equal(t, user.Username, a.Username)
    assert.Equal(t, 0, len(a.Bucket))
    assert.Equal(t, 0, len(a.Endpoint))
    assert.Equal(t, 2, a.Status)

    user.FsConfig.Provider = sdk.S3FilesystemProvider
    a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSSH, "", sessionID,
        123, 0, nil)
    assert.Equal(t, "s3bucket", a.Bucket)
    assert.Equal(t, "endpoint", a.Endpoint)
    assert.Equal(t, 1, a.Status)

    user.FsConfig.Provider = sdk.GCSFilesystemProvider
    a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSCP, "", sessionID,
        123, 0, ErrQuotaExceeded)
    assert.Equal(t, "gcsbucket", a.Bucket)
    assert.Equal(t, 0, len(a.Endpoint))
    assert.Equal(t, 3, a.Status)

    user.FsConfig.Provider = sdk.AzureBlobFilesystemProvider
    a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSCP, "", sessionID,
        123, 0, nil)
    assert.Equal(t, "azcontainer", a.Bucket)
    assert.Equal(t, "azendpoint", a.Endpoint)
    assert.Equal(t, 1, a.Status)

    a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSCP, "", sessionID,
        123, os.O_APPEND, nil)
    assert.Equal(t, "azcontainer", a.Bucket)
    assert.Equal(t, "azendpoint", a.Endpoint)
    assert.Equal(t, 1, a.Status)
    assert.Equal(t, os.O_APPEND, a.OpenFlags)

    user.FsConfig.Provider = sdk.SFTPFilesystemProvider
    a = newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSFTP, "", sessionID,
        123, 0, nil)
    assert.Equal(t, "sftpendpoint", a.Endpoint)
}

func TestActionHTTP(t *testing.T) {
    actionsCopy := Config.Actions

    Config.Actions = ProtocolActions{
        ExecuteOn: []string{operationDownload},
        Hook:      fmt.Sprintf("http://%v", httpAddr),
    }
    user := &dataprovider.User{
        BaseUser: sdk.BaseUser{
            Username: "username",
        },
    }
    a := newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSFTP, "",
        xid.New().String(), 123, 0, nil)
    err := actionHandler.Handle(a)
    assert.NoError(t, err)

    Config.Actions.Hook = "http://invalid:1234"
    err = actionHandler.Handle(a)
    assert.Error(t, err)

    Config.Actions.Hook = fmt.Sprintf("http://%v/404", httpAddr)
    err = actionHandler.Handle(a)
    if assert.Error(t, err) {
        assert.EqualError(t, err, errUnexpectedHTTResponse.Error())
    }

    Config.Actions = actionsCopy
}

func TestActionCMD(t *testing.T) {
    if runtime.GOOS == osWindows {
        t.Skip("this test is not available on Windows")
    }
    actionsCopy := Config.Actions

    hookCmd, err := exec.LookPath("true")
    assert.NoError(t, err)

    Config.Actions = ProtocolActions{
        ExecuteOn: []string{operationDownload},
        Hook:      hookCmd,
    }
    user := &dataprovider.User{
        BaseUser: sdk.BaseUser{
            Username: "username",
        },
    }
    sessionID := shortuuid.New()
    a := newActionNotification(user, operationDownload, "path", "vpath", "target", "", "", ProtocolSFTP, "", sessionID,
        123, 0, nil)
    err = actionHandler.Handle(a)
    assert.NoError(t, err)

    c := NewBaseConnection("id", ProtocolSFTP, "", "", *user)
    ExecuteActionNotification(c, OperationSSHCmd, "path", "vpath", "target", "vtarget", "sha1sum", 0, nil)

    ExecuteActionNotification(c, operationDownload, "path", "vpath", "", "", "", 0, nil)

    Config.Actions = actionsCopy
}

func TestWrongActions(t *testing.T) {
    actionsCopy := Config.Actions

    badCommand := "/bad/command"
    if runtime.GOOS == osWindows {
        badCommand = "C:\\bad\\command"
    }
    Config.Actions = ProtocolActions{
        ExecuteOn: []string{operationUpload},
        Hook:      badCommand,
    }
    user := &dataprovider.User{
        BaseUser: sdk.BaseUser{
            Username: "username",
        },
    }

    a := newActionNotification(user, operationUpload, "", "", "", "", "", ProtocolSFTP, "", xid.New().String(),
        123, 0, nil)
    err := actionHandler.Handle(a)
    assert.Error(t, err, "action with bad command must fail")

    a.Action = operationDelete
    err = actionHandler.Handle(a)
    assert.EqualError(t, err, errUnconfiguredAction.Error())

    Config.Actions.Hook = "http://foo\x7f.com/"
    a.Action = operationUpload
    err = actionHandler.Handle(a)
    assert.Error(t, err, "action with bad url must fail")

    Config.Actions.Hook = ""
    err = actionHandler.Handle(a)
    if assert.Error(t, err) {
        assert.EqualError(t, err, errNoHook.Error())
    }

    Config.Actions.Hook = "relative path"
    err = actionHandler.Handle(a)
    if assert.Error(t, err) {
        assert.EqualError(t, err, fmt.Sprintf("invalid notification command %#v", Config.Actions.Hook))
    }

    Config.Actions = actionsCopy
}

func TestPreDeleteAction(t *testing.T) {
    if runtime.GOOS == osWindows {
        t.Skip("this test is not available on Windows")
    }
    actionsCopy := Config.Actions

    hookCmd, err := exec.LookPath("true")
    assert.NoError(t, err)
    Config.Actions = ProtocolActions{
        ExecuteOn: []string{operationPreDelete},
        Hook:      hookCmd,
    }
    homeDir := filepath.Join(os.TempDir(), "test_user")
    err = os.MkdirAll(homeDir, os.ModePerm)
    assert.NoError(t, err)
    user := dataprovider.User{
        BaseUser: sdk.BaseUser{
            Username: "username",
            HomeDir:  homeDir,
        },
    }
    user.Permissions = make(map[string][]string)
    user.Permissions["/"] = []string{dataprovider.PermAny}
    fs := vfs.NewOsFs("id", homeDir, "")
    c := NewBaseConnection("id", ProtocolSFTP, "", "", user)

    testfile := filepath.Join(user.HomeDir, "testfile")
    err = os.WriteFile(testfile, []byte("test"), os.ModePerm)
    assert.NoError(t, err)
    info, err := os.Stat(testfile)
    assert.NoError(t, err)
    err = c.RemoveFile(fs, testfile, "testfile", info)
    assert.NoError(t, err)
    assert.FileExists(t, testfile)

    os.RemoveAll(homeDir)

    Config.Actions = actionsCopy
}

func TestUnconfiguredHook(t *testing.T) {
    actionsCopy := Config.Actions

    Config.Actions = ProtocolActions{
        ExecuteOn: []string{operationDownload},
        Hook:      "",
    }
    pluginsConfig := []plugin.Config{
        {
            Type: "notifier",
        },
    }
    err := plugin.Initialize(pluginsConfig, true)
    assert.Error(t, err)
    assert.True(t, plugin.Handler.HasNotifiers())

    c := NewBaseConnection("id", ProtocolSFTP, "", "", dataprovider.User{})
    err = ExecutePreAction(c, OperationPreDownload, "", "", 0, 0)
    assert.NoError(t, err)
    err = ExecutePreAction(c, operationPreDelete, "", "", 0, 0)
    assert.ErrorIs(t, err, errUnconfiguredAction)

    ExecuteActionNotification(c, operationDownload, "", "", "", "", "", 0, nil)

    err = plugin.Initialize(nil, true)
    assert.NoError(t, err)
    assert.False(t, plugin.Handler.HasNotifiers())

    Config.Actions = actionsCopy
}

type actionHandlerStub struct {
    called bool
}

func (h *actionHandlerStub) Handle(event *notifier.FsEvent) error {
    h.called = true

    return nil
}

func TestInitializeActionHandler(t *testing.T) {
    handler := &actionHandlerStub{}

    InitializeActionHandler(handler)
    t.Cleanup(func() {
        InitializeActionHandler(&defaultActionHandler{})
    })

    err := actionHandler.Handle(&notifier.FsEvent{})

    assert.NoError(t, err)
    assert.True(t, handler.called)
}
51
common/clientsmap.go
Normal file
@@ -0,0 +1,51 @@
package common

import (
    "sync"
    "sync/atomic"

    "github.com/drakkan/sftpgo/v2/logger"
)

// clientsMap is a struct containing the map of the connected clients
type clientsMap struct {
    totalConnections int32
    mu               sync.RWMutex
    clients          map[string]int
}

func (c *clientsMap) add(source string) {
    atomic.AddInt32(&c.totalConnections, 1)

    c.mu.Lock()
    defer c.mu.Unlock()

    c.clients[source]++
}

func (c *clientsMap) remove(source string) {
    c.mu.Lock()
    defer c.mu.Unlock()

    if val, ok := c.clients[source]; ok {
        atomic.AddInt32(&c.totalConnections, -1)
        c.clients[source]--
        if val > 1 {
            return
        }
        delete(c.clients, source)
    } else {
        logger.Warn(logSender, "", "cannot remove client %v it is not mapped", source)
    }
}

func (c *clientsMap) getTotal() int32 {
    return atomic.LoadInt32(&c.totalConnections)
}

func (c *clientsMap) getTotalFrom(source string) int {
    c.mu.RLock()
    defer c.mu.RUnlock()

    return c.clients[source]
}
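The design keeps the grand total in a separate atomic counter so getTotal never takes the mutex, while the per-source counts stay consistent under the lock. A hypothetical connection lifecycle using it might look like this (a sketch only; the function name is not from the diff):

// Assumes a clientsMap initialized as in the tests below:
// clients := clientsMap{clients: make(map[string]int)}
func trackConnection(clients *clientsMap, remoteIP string, serve func()) {
    clients.add(remoteIP)
    defer clients.remove(remoteIP)
    // getTotal and getTotalFrom can back global and per-host limits here.
    serve()
}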

59
common/clientsmap_test.go
Normal file
@@ -0,0 +1,59 @@
package common

import (
    "testing"

    "github.com/stretchr/testify/assert"
)

func TestClientsMap(t *testing.T) {
    m := clientsMap{
        clients: make(map[string]int),
    }
    ip1 := "192.168.1.1"
    ip2 := "192.168.1.2"
    m.add(ip1)
    assert.Equal(t, int32(1), m.getTotal())
    assert.Equal(t, 1, m.getTotalFrom(ip1))
    assert.Equal(t, 0, m.getTotalFrom(ip2))

    m.add(ip1)
    m.add(ip2)
    assert.Equal(t, int32(3), m.getTotal())
    assert.Equal(t, 2, m.getTotalFrom(ip1))
    assert.Equal(t, 1, m.getTotalFrom(ip2))

    m.add(ip1)
    m.add(ip1)
    m.add(ip2)
    assert.Equal(t, int32(6), m.getTotal())
    assert.Equal(t, 4, m.getTotalFrom(ip1))
    assert.Equal(t, 2, m.getTotalFrom(ip2))

    m.remove(ip2)
    assert.Equal(t, int32(5), m.getTotal())
    assert.Equal(t, 4, m.getTotalFrom(ip1))
    assert.Equal(t, 1, m.getTotalFrom(ip2))

    m.remove("unknown")
    assert.Equal(t, int32(5), m.getTotal())
    assert.Equal(t, 4, m.getTotalFrom(ip1))
    assert.Equal(t, 1, m.getTotalFrom(ip2))

    m.remove(ip2)
    assert.Equal(t, int32(4), m.getTotal())
    assert.Equal(t, 4, m.getTotalFrom(ip1))
    assert.Equal(t, 0, m.getTotalFrom(ip2))

    m.remove(ip1)
    m.remove(ip1)
    m.remove(ip1)
    assert.Equal(t, int32(1), m.getTotal())
    assert.Equal(t, 1, m.getTotalFrom(ip1))
    assert.Equal(t, 0, m.getTotalFrom(ip2))

    m.remove(ip1)
    assert.Equal(t, int32(0), m.getTotal())
    assert.Equal(t, 0, m.getTotalFrom(ip1))
    assert.Equal(t, 0, m.getTotalFrom(ip2))
}
1073
common/common.go
Normal file
File diff suppressed because it is too large
910
common/common_test.go
Normal file
@@ -0,0 +1,910 @@
package common
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"net"
|
||||
"os"
|
||||
"os/exec"
|
||||
"path/filepath"
|
||||
"runtime"
|
||||
"strings"
|
||||
"sync/atomic"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/alexedwards/argon2id"
|
||||
"github.com/sftpgo/sdk"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
"golang.org/x/crypto/bcrypt"
|
||||
|
||||
"github.com/drakkan/sftpgo/v2/dataprovider"
|
||||
"github.com/drakkan/sftpgo/v2/kms"
|
||||
"github.com/drakkan/sftpgo/v2/util"
|
||||
"github.com/drakkan/sftpgo/v2/vfs"
|
||||
)
|
||||
|
||||
const (
|
||||
logSenderTest = "common_test"
|
||||
httpAddr = "127.0.0.1:9999"
|
||||
configDir = ".."
|
||||
osWindows = "windows"
|
||||
userTestUsername = "common_test_username"
|
||||
)
|
||||
|
||||
type fakeConnection struct {
|
||||
*BaseConnection
|
||||
command string
|
||||
}
|
||||
|
||||
func (c *fakeConnection) AddUser(user dataprovider.User) error {
|
||||
_, err := user.GetFilesystem(c.GetID())
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
c.BaseConnection.User = user
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *fakeConnection) Disconnect() error {
|
||||
Connections.Remove(c.GetID())
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *fakeConnection) GetClientVersion() string {
|
||||
return ""
|
||||
}
|
||||
|
||||
func (c *fakeConnection) GetCommand() string {
|
||||
return c.command
|
||||
}
|
||||
|
||||
func (c *fakeConnection) GetLocalAddress() string {
|
||||
return ""
|
||||
}
|
||||
|
||||
func (c *fakeConnection) GetRemoteAddress() string {
|
||||
return ""
|
||||
}
|
||||
|
||||
type customNetConn struct {
|
||||
net.Conn
|
||||
id string
|
||||
isClosed bool
|
||||
}
|
||||
|
||||
func (c *customNetConn) Close() error {
|
||||
Connections.RemoveSSHConnection(c.id)
|
||||
c.isClosed = true
|
||||
return c.Conn.Close()
|
||||
}
|
||||
|
||||
func TestSSHConnections(t *testing.T) {
|
||||
conn1, conn2 := net.Pipe()
|
||||
now := time.Now()
|
||||
sshConn1 := NewSSHConnection("id1", conn1)
|
||||
sshConn2 := NewSSHConnection("id2", conn2)
|
||||
sshConn3 := NewSSHConnection("id3", conn2)
|
||||
assert.Equal(t, "id1", sshConn1.GetID())
|
||||
assert.Equal(t, "id2", sshConn2.GetID())
|
||||
assert.Equal(t, "id3", sshConn3.GetID())
|
||||
sshConn1.UpdateLastActivity()
|
||||
assert.GreaterOrEqual(t, sshConn1.GetLastActivity().UnixNano(), now.UnixNano())
|
||||
Connections.AddSSHConnection(sshConn1)
|
||||
Connections.AddSSHConnection(sshConn2)
|
||||
Connections.AddSSHConnection(sshConn3)
|
||||
Connections.RLock()
|
||||
assert.Len(t, Connections.sshConnections, 3)
|
||||
Connections.RUnlock()
|
||||
Connections.RemoveSSHConnection(sshConn1.id)
|
||||
Connections.RLock()
|
||||
assert.Len(t, Connections.sshConnections, 2)
|
||||
assert.Equal(t, sshConn3.id, Connections.sshConnections[0].id)
|
||||
assert.Equal(t, sshConn2.id, Connections.sshConnections[1].id)
|
||||
Connections.RUnlock()
|
||||
Connections.RemoveSSHConnection(sshConn1.id)
|
||||
Connections.RLock()
|
||||
assert.Len(t, Connections.sshConnections, 2)
|
||||
assert.Equal(t, sshConn3.id, Connections.sshConnections[0].id)
|
||||
assert.Equal(t, sshConn2.id, Connections.sshConnections[1].id)
|
||||
Connections.RUnlock()
|
||||
Connections.RemoveSSHConnection(sshConn2.id)
|
||||
Connections.RLock()
|
||||
assert.Len(t, Connections.sshConnections, 1)
|
||||
assert.Equal(t, sshConn3.id, Connections.sshConnections[0].id)
|
||||
Connections.RUnlock()
|
||||
Connections.RemoveSSHConnection(sshConn3.id)
|
||||
Connections.RLock()
|
||||
assert.Len(t, Connections.sshConnections, 0)
|
||||
Connections.RUnlock()
|
||||
assert.NoError(t, sshConn1.Close())
|
||||
assert.NoError(t, sshConn2.Close())
|
||||
assert.NoError(t, sshConn3.Close())
|
||||
}
|
||||
|
||||
func TestDefenderIntegration(t *testing.T) {
|
||||
// by default defender is nil
|
||||
configCopy := Config
|
||||
|
||||
ip := "127.1.1.1"
|
||||
|
||||
assert.Nil(t, ReloadDefender())
|
||||
|
||||
AddDefenderEvent(ip, HostEventNoLoginTried)
|
||||
assert.False(t, IsBanned(ip))
|
||||
|
||||
banTime, err := GetDefenderBanTime(ip)
|
||||
assert.NoError(t, err)
|
||||
assert.Nil(t, banTime)
|
||||
assert.False(t, DeleteDefenderHost(ip))
|
||||
score, err := GetDefenderScore(ip)
|
||||
assert.NoError(t, err)
|
||||
assert.Equal(t, 0, score)
|
||||
_, err = GetDefenderHost(ip)
|
||||
assert.Error(t, err)
|
||||
hosts, err := GetDefenderHosts()
|
||||
assert.NoError(t, err)
|
||||
assert.Nil(t, hosts)
|
||||
|
||||
Config.DefenderConfig = DefenderConfig{
|
||||
Enabled: true,
|
||||
Driver: DefenderDriverProvider,
|
||||
BanTime: 10,
|
||||
BanTimeIncrement: 50,
|
||||
Threshold: 0,
|
||||
ScoreInvalid: 2,
|
||||
ScoreValid: 1,
|
||||
ObservationTime: 15,
|
||||
EntriesSoftLimit: 100,
|
||||
EntriesHardLimit: 150,
|
||||
}
|
||||
err = Initialize(Config)
|
||||
// ScoreInvalid cannot be greater than threshold
|
||||
assert.Error(t, err)
|
||||
	Config.DefenderConfig.Driver = "unsupported"
	err = Initialize(Config)
	if assert.Error(t, err) {
		assert.Contains(t, err.Error(), "unsupported defender driver")
	}
	Config.DefenderConfig.Driver = DefenderDriverMemory
	err = Initialize(Config)
	// ScoreInvalid cannot be greater than threshold
	assert.Error(t, err)
	Config.DefenderConfig.Threshold = 3
	err = Initialize(Config)
	assert.NoError(t, err)
	assert.Nil(t, ReloadDefender())

	AddDefenderEvent(ip, HostEventNoLoginTried)
	assert.False(t, IsBanned(ip))
	score, err = GetDefenderScore(ip)
	assert.NoError(t, err)
	assert.Equal(t, 2, score)
	entry, err := GetDefenderHost(ip)
	assert.NoError(t, err)
	asJSON, err := json.Marshal(&entry)
	assert.NoError(t, err)
	assert.Equal(t, `{"id":"3132372e312e312e31","ip":"127.1.1.1","score":2}`, string(asJSON), "entry %v", entry)
	assert.True(t, DeleteDefenderHost(ip))
	banTime, err = GetDefenderBanTime(ip)
	assert.NoError(t, err)
	assert.Nil(t, banTime)

	AddDefenderEvent(ip, HostEventLoginFailed)
	AddDefenderEvent(ip, HostEventNoLoginTried)
	assert.True(t, IsBanned(ip))
	score, err = GetDefenderScore(ip)
	assert.NoError(t, err)
	assert.Equal(t, 0, score)
	banTime, err = GetDefenderBanTime(ip)
	assert.NoError(t, err)
	assert.NotNil(t, banTime)
	hosts, err = GetDefenderHosts()
	assert.NoError(t, err)
	assert.Len(t, hosts, 1)
	entry, err = GetDefenderHost(ip)
	assert.NoError(t, err)
	assert.False(t, entry.BanTime.IsZero())
	assert.True(t, DeleteDefenderHost(ip))
	hosts, err = GetDefenderHosts()
	assert.NoError(t, err)
	assert.Len(t, hosts, 0)
	banTime, err = GetDefenderBanTime(ip)
	assert.NoError(t, err)
	assert.Nil(t, banTime)
	assert.False(t, DeleteDefenderHost(ip))

	Config = configCopy
}

func TestRateLimitersIntegration(t *testing.T) {
	// by default defender is nil
	configCopy := Config

	Config.RateLimitersConfig = []RateLimiterConfig{
		{
			Average:   100,
			Period:    10,
			Burst:     5,
			Type:      int(rateLimiterTypeGlobal),
			Protocols: rateLimiterProtocolValues,
		},
		{
			Average:                1,
			Period:                 1000,
			Burst:                  1,
			Type:                   int(rateLimiterTypeSource),
			Protocols:              []string{ProtocolWebDAV, ProtocolWebDAV, ProtocolFTP},
			GenerateDefenderEvents: true,
			EntriesSoftLimit:       100,
			EntriesHardLimit:       150,
		},
	}
	err := Initialize(Config)
	assert.Error(t, err)
	Config.RateLimitersConfig[0].Period = 1000
	Config.RateLimitersConfig[0].AllowList = []string{"1.1.1", "1.1.1.2"}
	err = Initialize(Config)
	if assert.Error(t, err) {
		assert.Contains(t, err.Error(), "unable to parse rate limiter allow list")
	}
	Config.RateLimitersConfig[0].AllowList = []string{"172.16.24.7"}
	Config.RateLimitersConfig[1].AllowList = []string{"172.16.0.0/16"}

	err = Initialize(Config)
	assert.NoError(t, err)

	assert.Len(t, rateLimiters, 4)
	assert.Len(t, rateLimiters[ProtocolSSH], 1)
	assert.Len(t, rateLimiters[ProtocolFTP], 2)
	assert.Len(t, rateLimiters[ProtocolWebDAV], 2)
	assert.Len(t, rateLimiters[ProtocolHTTP], 1)

	source1 := "127.1.1.1"
	source2 := "127.1.1.2"
	source3 := "172.16.24.7" // whitelisted

	_, err = LimitRate(ProtocolSSH, source1)
	assert.NoError(t, err)
	_, err = LimitRate(ProtocolFTP, source1)
	assert.NoError(t, err)
	// sleep to allow the configured burst to be added to the token bucket.
	// This sleep is not long enough to add the per-source burst
	time.Sleep(20 * time.Millisecond)
	_, err = LimitRate(ProtocolWebDAV, source2)
	assert.NoError(t, err)
	_, err = LimitRate(ProtocolFTP, source1)
	assert.Error(t, err)
	_, err = LimitRate(ProtocolWebDAV, source2)
	assert.Error(t, err)
	_, err = LimitRate(ProtocolSSH, source1)
	assert.NoError(t, err)
	_, err = LimitRate(ProtocolSSH, source2)
	assert.NoError(t, err)
	for i := 0; i < 10; i++ {
		_, err = LimitRate(ProtocolWebDAV, source3)
		assert.NoError(t, err)
	}

	Config = configCopy
}
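
// The timing in the test above follows from token-bucket arithmetic (my
// reading of the configuration, not code from the original file): with
// Average=100 and Period=1000 (milliseconds) the global bucket regains one
// token every 10ms, so a 20ms sleep is meaningful, while the per-source
// bucket (Average=1, Period=1000) needs a full second per token. A minimal
// sketch of the refill interval:
func exampleTokenRefillInterval(average, periodMillis int64) time.Duration {
	// one token every period/average milliseconds, e.g. 1000/100 = 10ms
	return time.Duration(periodMillis/average) * time.Millisecond
}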

func TestMaxConnections(t *testing.T) {
	oldValue := Config.MaxTotalConnections
	perHost := Config.MaxPerHostConnections

	Config.MaxPerHostConnections = 0

	ipAddr := "192.168.7.8"
	assert.True(t, Connections.IsNewConnectionAllowed(ipAddr))

	Config.MaxTotalConnections = 1
	Config.MaxPerHostConnections = perHost

	assert.True(t, Connections.IsNewConnectionAllowed(ipAddr))
	c := NewBaseConnection("id", ProtocolSFTP, "", "", dataprovider.User{})
	fakeConn := &fakeConnection{
		BaseConnection: c,
	}
	Connections.Add(fakeConn)
	assert.Len(t, Connections.GetStats(), 1)
	assert.False(t, Connections.IsNewConnectionAllowed(ipAddr))

	res := Connections.Close(fakeConn.GetID())
	assert.True(t, res)
	assert.Eventually(t, func() bool { return len(Connections.GetStats()) == 0 }, 300*time.Millisecond, 50*time.Millisecond)

	assert.True(t, Connections.IsNewConnectionAllowed(ipAddr))
	Connections.AddClientConnection(ipAddr)
	Connections.AddClientConnection(ipAddr)
	assert.False(t, Connections.IsNewConnectionAllowed(ipAddr))
	Connections.RemoveClientConnection(ipAddr)
	assert.True(t, Connections.IsNewConnectionAllowed(ipAddr))
	Connections.RemoveClientConnection(ipAddr)

	Config.MaxTotalConnections = oldValue
}

func TestMaxConnectionPerHost(t *testing.T) {
	oldValue := Config.MaxPerHostConnections

	Config.MaxPerHostConnections = 2

	ipAddr := "192.168.9.9"
	Connections.AddClientConnection(ipAddr)
	assert.True(t, Connections.IsNewConnectionAllowed(ipAddr))

	Connections.AddClientConnection(ipAddr)
	assert.True(t, Connections.IsNewConnectionAllowed(ipAddr))

	Connections.AddClientConnection(ipAddr)
	assert.False(t, Connections.IsNewConnectionAllowed(ipAddr))
	assert.Equal(t, int32(3), Connections.GetClientConnections())

	Connections.RemoveClientConnection(ipAddr)
	Connections.RemoveClientConnection(ipAddr)
	Connections.RemoveClientConnection(ipAddr)

	assert.Equal(t, int32(0), Connections.GetClientConnections())

	Config.MaxPerHostConnections = oldValue
}

func TestIdleConnections(t *testing.T) {
	configCopy := Config

	Config.IdleTimeout = 1
	err := Initialize(Config)
	assert.NoError(t, err)

	conn1, conn2 := net.Pipe()
	customConn1 := &customNetConn{
		Conn: conn1,
		id:   "id1",
	}
	customConn2 := &customNetConn{
		Conn: conn2,
		id:   "id2",
	}
	sshConn1 := NewSSHConnection(customConn1.id, customConn1)
	sshConn2 := NewSSHConnection(customConn2.id, customConn2)

	username := "test_user"
	user := dataprovider.User{
		BaseUser: sdk.BaseUser{
			Username: username,
		},
	}
	c := NewBaseConnection(sshConn1.id+"_1", ProtocolSFTP, "", "", user)
	c.lastActivity = time.Now().Add(-24 * time.Hour).UnixNano()
	fakeConn := &fakeConnection{
		BaseConnection: c,
	}
	// both ssh connections are expired but they should get removed only
	// if there is no associated connection
	sshConn1.lastActivity = c.lastActivity
	sshConn2.lastActivity = c.lastActivity
	Connections.AddSSHConnection(sshConn1)
	Connections.Add(fakeConn)
	assert.Equal(t, Connections.GetActiveSessions(username), 1)
	c = NewBaseConnection(sshConn2.id+"_1", ProtocolSSH, "", "", user)
	fakeConn = &fakeConnection{
		BaseConnection: c,
	}
	Connections.AddSSHConnection(sshConn2)
	Connections.Add(fakeConn)
	assert.Equal(t, Connections.GetActiveSessions(username), 2)

	cFTP := NewBaseConnection("id2", ProtocolFTP, "", "", dataprovider.User{})
	cFTP.lastActivity = time.Now().UnixNano()
	fakeConn = &fakeConnection{
		BaseConnection: cFTP,
	}
	Connections.Add(fakeConn)
	assert.Equal(t, Connections.GetActiveSessions(username), 2)
	assert.Len(t, Connections.GetStats(), 3)
	Connections.RLock()
	assert.Len(t, Connections.sshConnections, 2)
	Connections.RUnlock()

	startIdleTimeoutTicker(100 * time.Millisecond)
	assert.Eventually(t, func() bool { return Connections.GetActiveSessions(username) == 1 }, 1*time.Second, 200*time.Millisecond)
	assert.Eventually(t, func() bool {
		Connections.RLock()
		defer Connections.RUnlock()
		return len(Connections.sshConnections) == 1
	}, 1*time.Second, 200*time.Millisecond)
	stopIdleTimeoutTicker()
	assert.Len(t, Connections.GetStats(), 2)
	c.lastActivity = time.Now().Add(-24 * time.Hour).UnixNano()
	cFTP.lastActivity = time.Now().Add(-24 * time.Hour).UnixNano()
	sshConn2.lastActivity = c.lastActivity
	startIdleTimeoutTicker(100 * time.Millisecond)
	assert.Eventually(t, func() bool { return len(Connections.GetStats()) == 0 }, 1*time.Second, 200*time.Millisecond)
	assert.Eventually(t, func() bool {
		Connections.RLock()
		defer Connections.RUnlock()
		return len(Connections.sshConnections) == 0
	}, 1*time.Second, 200*time.Millisecond)
	assert.Equal(t, int32(0), Connections.GetClientConnections())
	stopIdleTimeoutTicker()
	assert.True(t, customConn1.isClosed)
	assert.True(t, customConn2.isClosed)

	Config = configCopy
}

func TestCloseConnection(t *testing.T) {
	c := NewBaseConnection("id", ProtocolSFTP, "", "", dataprovider.User{})
	fakeConn := &fakeConnection{
		BaseConnection: c,
	}
	assert.True(t, Connections.IsNewConnectionAllowed("127.0.0.1"))
	Connections.Add(fakeConn)
	assert.Len(t, Connections.GetStats(), 1)
	res := Connections.Close(fakeConn.GetID())
	assert.True(t, res)
	assert.Eventually(t, func() bool { return len(Connections.GetStats()) == 0 }, 300*time.Millisecond, 50*time.Millisecond)
	res = Connections.Close(fakeConn.GetID())
	assert.False(t, res)
	Connections.Remove(fakeConn.GetID())
}

func TestSwapConnection(t *testing.T) {
	c := NewBaseConnection("id", ProtocolFTP, "", "", dataprovider.User{})
	fakeConn := &fakeConnection{
		BaseConnection: c,
	}
	Connections.Add(fakeConn)
	if assert.Len(t, Connections.GetStats(), 1) {
		assert.Equal(t, "", Connections.GetStats()[0].Username)
	}
	c = NewBaseConnection("id", ProtocolFTP, "", "", dataprovider.User{
		BaseUser: sdk.BaseUser{
			Username: userTestUsername,
		},
	})
	fakeConn = &fakeConnection{
		BaseConnection: c,
	}
	err := Connections.Swap(fakeConn)
	assert.NoError(t, err)
	if assert.Len(t, Connections.GetStats(), 1) {
		assert.Equal(t, userTestUsername, Connections.GetStats()[0].Username)
	}
	res := Connections.Close(fakeConn.GetID())
	assert.True(t, res)
	assert.Eventually(t, func() bool { return len(Connections.GetStats()) == 0 }, 300*time.Millisecond, 50*time.Millisecond)
	err = Connections.Swap(fakeConn)
	assert.Error(t, err)
}

func TestAtomicUpload(t *testing.T) {
	configCopy := Config

	Config.UploadMode = UploadModeStandard
	assert.False(t, Config.IsAtomicUploadEnabled())
	Config.UploadMode = UploadModeAtomic
	assert.True(t, Config.IsAtomicUploadEnabled())
	Config.UploadMode = UploadModeAtomicWithResume
	assert.True(t, Config.IsAtomicUploadEnabled())

	Config = configCopy
}

func TestConnectionStatus(t *testing.T) {
	username := "test_user"
	user := dataprovider.User{
		BaseUser: sdk.BaseUser{
			Username: username,
		},
	}
	fs := vfs.NewOsFs("", os.TempDir(), "")
	c1 := NewBaseConnection("id1", ProtocolSFTP, "", "", user)
	fakeConn1 := &fakeConnection{
		BaseConnection: c1,
	}
	t1 := NewBaseTransfer(nil, c1, nil, "/p1", "/p1", "/r1", TransferUpload, 0, 0, 0, true, fs)
	t1.BytesReceived = 123
	t2 := NewBaseTransfer(nil, c1, nil, "/p2", "/p2", "/r2", TransferDownload, 0, 0, 0, true, fs)
	t2.BytesSent = 456
	c2 := NewBaseConnection("id2", ProtocolSSH, "", "", user)
	fakeConn2 := &fakeConnection{
		BaseConnection: c2,
		command:        "md5sum",
	}
	c3 := NewBaseConnection("id3", ProtocolWebDAV, "", "", user)
	fakeConn3 := &fakeConnection{
		BaseConnection: c3,
		command:        "PROPFIND",
	}
	t3 := NewBaseTransfer(nil, c3, nil, "/p2", "/p2", "/r2", TransferDownload, 0, 0, 0, true, fs)
	Connections.Add(fakeConn1)
	Connections.Add(fakeConn2)
	Connections.Add(fakeConn3)

	stats := Connections.GetStats()
	assert.Len(t, stats, 3)
	for _, stat := range stats {
		assert.Equal(t, stat.Username, username)
		assert.True(t, strings.HasPrefix(stat.GetConnectionInfo(), stat.Protocol))
		assert.True(t, strings.HasPrefix(stat.GetConnectionDuration(), "00:"))
		if stat.ConnectionID == "SFTP_id1" {
			assert.Len(t, stat.Transfers, 2)
			assert.Greater(t, len(stat.GetTransfersAsString()), 0)
			for _, tr := range stat.Transfers {
				if tr.OperationType == operationDownload {
					assert.True(t, strings.HasPrefix(tr.getConnectionTransferAsString(), "DL"))
				} else if tr.OperationType == operationUpload {
					assert.True(t, strings.HasPrefix(tr.getConnectionTransferAsString(), "UL"))
				}
			}
		} else if stat.ConnectionID == "DAV_id3" {
			assert.Len(t, stat.Transfers, 1)
			assert.Greater(t, len(stat.GetTransfersAsString()), 0)
		} else {
			assert.Equal(t, 0, len(stat.GetTransfersAsString()))
		}
	}

	err := t1.Close()
	assert.NoError(t, err)
	err = t2.Close()
	assert.NoError(t, err)

	err = fakeConn3.SignalTransfersAbort()
	assert.NoError(t, err)
	assert.Equal(t, int32(1), atomic.LoadInt32(&t3.AbortTransfer))
	err = t3.Close()
	assert.NoError(t, err)
	err = fakeConn3.SignalTransfersAbort()
	assert.Error(t, err)

	Connections.Remove(fakeConn1.GetID())
	stats = Connections.GetStats()
	assert.Len(t, stats, 2)
	assert.Equal(t, fakeConn3.GetID(), stats[0].ConnectionID)
	assert.Equal(t, fakeConn2.GetID(), stats[1].ConnectionID)
	Connections.Remove(fakeConn2.GetID())
	stats = Connections.GetStats()
	assert.Len(t, stats, 1)
	assert.Equal(t, fakeConn3.GetID(), stats[0].ConnectionID)
	Connections.Remove(fakeConn3.GetID())
	stats = Connections.GetStats()
	assert.Len(t, stats, 0)
}

func TestQuotaScans(t *testing.T) {
	username := "username"
	assert.True(t, QuotaScans.AddUserQuotaScan(username))
	assert.False(t, QuotaScans.AddUserQuotaScan(username))
	usersScans := QuotaScans.GetUsersQuotaScans()
	if assert.Len(t, usersScans, 1) {
		assert.Equal(t, usersScans[0].Username, username)
		assert.Equal(t, QuotaScans.UserScans[0].StartTime, usersScans[0].StartTime)
		QuotaScans.UserScans[0].StartTime = 0
		assert.NotEqual(t, QuotaScans.UserScans[0].StartTime, usersScans[0].StartTime)
	}

	assert.True(t, QuotaScans.RemoveUserQuotaScan(username))
	assert.False(t, QuotaScans.RemoveUserQuotaScan(username))
	assert.Len(t, QuotaScans.GetUsersQuotaScans(), 0)
	assert.Len(t, usersScans, 1)

	folderName := "folder"
	assert.True(t, QuotaScans.AddVFolderQuotaScan(folderName))
	assert.False(t, QuotaScans.AddVFolderQuotaScan(folderName))
	if assert.Len(t, QuotaScans.GetVFoldersQuotaScans(), 1) {
		assert.Equal(t, QuotaScans.GetVFoldersQuotaScans()[0].Name, folderName)
	}

	assert.True(t, QuotaScans.RemoveVFolderQuotaScan(folderName))
	assert.False(t, QuotaScans.RemoveVFolderQuotaScan(folderName))
	assert.Len(t, QuotaScans.GetVFoldersQuotaScans(), 0)
}

func TestProxyProtocolVersion(t *testing.T) {
	c := Configuration{
		ProxyProtocol: 0,
	}
	_, err := c.GetProxyListener(nil)
	if assert.Error(t, err) {
		assert.Contains(t, err.Error(), "proxy protocol not configured")
	}
	c.ProxyProtocol = 1
	proxyListener, err := c.GetProxyListener(nil)
	assert.NoError(t, err)
	assert.Nil(t, proxyListener.Policy)

	c.ProxyProtocol = 2
	proxyListener, err = c.GetProxyListener(nil)
	assert.NoError(t, err)
	assert.NotNil(t, proxyListener.Policy)

	c.ProxyProtocol = 1
	c.ProxyAllowed = []string{"invalid"}
	_, err = c.GetProxyListener(nil)
	assert.Error(t, err)

	c.ProxyProtocol = 2
	_, err = c.GetProxyListener(nil)
	assert.Error(t, err)
}

func TestStartupHook(t *testing.T) {
	Config.StartupHook = ""

	assert.NoError(t, Config.ExecuteStartupHook())

	Config.StartupHook = "http://foo\x7f.com/startup"
	assert.Error(t, Config.ExecuteStartupHook())

	Config.StartupHook = "http://invalid:5678/"
	assert.Error(t, Config.ExecuteStartupHook())

	Config.StartupHook = fmt.Sprintf("http://%v", httpAddr)
	assert.NoError(t, Config.ExecuteStartupHook())

	Config.StartupHook = "invalidhook"
	assert.Error(t, Config.ExecuteStartupHook())

	if runtime.GOOS != osWindows {
		hookCmd, err := exec.LookPath("true")
		assert.NoError(t, err)
		Config.StartupHook = hookCmd
		assert.NoError(t, Config.ExecuteStartupHook())
	}

	Config.StartupHook = ""
}

func TestPostDisconnectHook(t *testing.T) {
	Config.PostDisconnectHook = "http://127.0.0.1/"

	remoteAddr := "127.0.0.1:80"
	Config.checkPostDisconnectHook(remoteAddr, ProtocolHTTP, "", "", time.Now())
	Config.checkPostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())

	Config.PostDisconnectHook = "http://bar\x7f.com/"
	Config.executePostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())

	Config.PostDisconnectHook = fmt.Sprintf("http://%v", httpAddr)
	Config.executePostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())

	Config.PostDisconnectHook = "relativePath"
	Config.executePostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())

	if runtime.GOOS == osWindows {
		Config.PostDisconnectHook = "C:\\a\\bad\\command"
		Config.executePostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())
	} else {
		Config.PostDisconnectHook = "/invalid/path"
		Config.executePostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())

		hookCmd, err := exec.LookPath("true")
		assert.NoError(t, err)
		Config.PostDisconnectHook = hookCmd
		Config.executePostDisconnectHook(remoteAddr, ProtocolSFTP, "", "", time.Now())
	}
	Config.PostDisconnectHook = ""
}

func TestPostConnectHook(t *testing.T) {
	Config.PostConnectHook = ""

	ipAddr := "127.0.0.1"

	assert.NoError(t, Config.ExecutePostConnectHook(ipAddr, ProtocolFTP))

	Config.PostConnectHook = "http://foo\x7f.com/"
	assert.Error(t, Config.ExecutePostConnectHook(ipAddr, ProtocolSFTP))

	Config.PostConnectHook = "http://invalid:1234/"
	assert.Error(t, Config.ExecutePostConnectHook(ipAddr, ProtocolSFTP))

	Config.PostConnectHook = fmt.Sprintf("http://%v/404", httpAddr)
	assert.Error(t, Config.ExecutePostConnectHook(ipAddr, ProtocolFTP))

	Config.PostConnectHook = fmt.Sprintf("http://%v", httpAddr)
	assert.NoError(t, Config.ExecutePostConnectHook(ipAddr, ProtocolFTP))

	Config.PostConnectHook = "invalid"
	assert.Error(t, Config.ExecutePostConnectHook(ipAddr, ProtocolFTP))

	if runtime.GOOS == osWindows {
		Config.PostConnectHook = "C:\\bad\\command"
		assert.Error(t, Config.ExecutePostConnectHook(ipAddr, ProtocolSFTP))
	} else {
		Config.PostConnectHook = "/invalid/path"
		assert.Error(t, Config.ExecutePostConnectHook(ipAddr, ProtocolSFTP))

		hookCmd, err := exec.LookPath("true")
		assert.NoError(t, err)
		Config.PostConnectHook = hookCmd
		assert.NoError(t, Config.ExecutePostConnectHook(ipAddr, ProtocolSFTP))
	}

	Config.PostConnectHook = ""
}

func TestCryptoConvertFileInfo(t *testing.T) {
	name := "name"
	fs, err := vfs.NewCryptFs("connID1", os.TempDir(), "", vfs.CryptFsConfig{
		Passphrase: kms.NewPlainSecret("secret"),
	})
	require.NoError(t, err)
	cryptFs := fs.(*vfs.CryptFs)
	info := vfs.NewFileInfo(name, true, 48, time.Now(), false)
	assert.Equal(t, info, cryptFs.ConvertFileInfo(info))
	info = vfs.NewFileInfo(name, false, 48, time.Now(), false)
	assert.NotEqual(t, info.Size(), cryptFs.ConvertFileInfo(info).Size())
	info = vfs.NewFileInfo(name, false, 33, time.Now(), false)
	assert.Equal(t, int64(0), cryptFs.ConvertFileInfo(info).Size())
	info = vfs.NewFileInfo(name, false, 1, time.Now(), false)
	assert.Equal(t, int64(0), cryptFs.ConvertFileInfo(info).Size())
}

func TestFolderCopy(t *testing.T) {
	folder := vfs.BaseVirtualFolder{
		ID:              1,
		Name:            "name",
		MappedPath:      filepath.Clean(os.TempDir()),
		UsedQuotaSize:   4096,
		UsedQuotaFiles:  2,
		LastQuotaUpdate: util.GetTimeAsMsSinceEpoch(time.Now()),
		Users:           []string{"user1", "user2"},
	}
	folderCopy := folder.GetACopy()
	folder.ID = 2
	folder.Users = []string{"user3"}
	require.Len(t, folderCopy.Users, 2)
	require.True(t, util.IsStringInSlice("user1", folderCopy.Users))
	require.True(t, util.IsStringInSlice("user2", folderCopy.Users))
	require.Equal(t, int64(1), folderCopy.ID)
	require.Equal(t, folder.Name, folderCopy.Name)
	require.Equal(t, folder.MappedPath, folderCopy.MappedPath)
	require.Equal(t, folder.UsedQuotaSize, folderCopy.UsedQuotaSize)
	require.Equal(t, folder.UsedQuotaFiles, folderCopy.UsedQuotaFiles)
	require.Equal(t, folder.LastQuotaUpdate, folderCopy.LastQuotaUpdate)

	folder.FsConfig = vfs.Filesystem{
		CryptConfig: vfs.CryptFsConfig{
			Passphrase: kms.NewPlainSecret("crypto secret"),
		},
	}
	folderCopy = folder.GetACopy()
	folder.FsConfig.CryptConfig.Passphrase = kms.NewEmptySecret()
	require.Len(t, folderCopy.Users, 1)
	require.True(t, util.IsStringInSlice("user3", folderCopy.Users))
	require.Equal(t, int64(2), folderCopy.ID)
	require.Equal(t, folder.Name, folderCopy.Name)
	require.Equal(t, folder.MappedPath, folderCopy.MappedPath)
	require.Equal(t, folder.UsedQuotaSize, folderCopy.UsedQuotaSize)
	require.Equal(t, folder.UsedQuotaFiles, folderCopy.UsedQuotaFiles)
	require.Equal(t, folder.LastQuotaUpdate, folderCopy.LastQuotaUpdate)
	require.Equal(t, "crypto secret", folderCopy.FsConfig.CryptConfig.Passphrase.GetPayload())
}

func TestCachedFs(t *testing.T) {
	user := dataprovider.User{
		BaseUser: sdk.BaseUser{
			HomeDir: filepath.Clean(os.TempDir()),
		},
	}
	conn := NewBaseConnection("id", ProtocolSFTP, "", "", user)
	// changing the user should not affect the connection
	user.HomeDir = filepath.Join(os.TempDir(), "temp")
	err := os.Mkdir(user.HomeDir, os.ModePerm)
	assert.NoError(t, err)
	fs, err := user.GetFilesystem("")
	assert.NoError(t, err)
	p, err := fs.ResolvePath("/")
	assert.NoError(t, err)
	assert.Equal(t, user.GetHomeDir(), p)

	_, p, err = conn.GetFsAndResolvedPath("/")
	assert.NoError(t, err)
	assert.Equal(t, filepath.Clean(os.TempDir()), p)
	user.FsConfig.Provider = sdk.S3FilesystemProvider
	_, err = user.GetFilesystem("")
	assert.Error(t, err)
	conn.User.FsConfig.Provider = sdk.S3FilesystemProvider
	_, p, err = conn.GetFsAndResolvedPath("/")
	assert.NoError(t, err)
	assert.Equal(t, filepath.Clean(os.TempDir()), p)
	err = os.Remove(user.HomeDir)
	assert.NoError(t, err)
}

func TestParseAllowedIPAndRanges(t *testing.T) {
	_, err := util.ParseAllowedIPAndRanges([]string{"1.1.1.1", "not an ip"})
	assert.Error(t, err)
	_, err = util.ParseAllowedIPAndRanges([]string{"1.1.1.5", "192.168.1.0/240"})
	assert.Error(t, err)
	allow, err := util.ParseAllowedIPAndRanges([]string{"192.168.1.2", "172.16.0.0/24"})
	assert.NoError(t, err)
	assert.True(t, allow[0](net.ParseIP("192.168.1.2")))
	assert.False(t, allow[0](net.ParseIP("192.168.2.2")))
	assert.True(t, allow[1](net.ParseIP("172.16.0.1")))
	assert.False(t, allow[1](net.ParseIP("172.16.1.1")))
}

func TestHideConfidentialData(t *testing.T) {
	for _, provider := range []sdk.FilesystemProvider{sdk.LocalFilesystemProvider,
		sdk.CryptedFilesystemProvider, sdk.S3FilesystemProvider, sdk.GCSFilesystemProvider,
		sdk.AzureBlobFilesystemProvider, sdk.SFTPFilesystemProvider,
	} {
		u := dataprovider.User{
			FsConfig: vfs.Filesystem{
				Provider: provider,
			},
		}
		u.PrepareForRendering()
		f := vfs.BaseVirtualFolder{
			FsConfig: vfs.Filesystem{
				Provider: provider,
			},
		}
		f.PrepareForRendering()
	}
	a := dataprovider.Admin{}
	a.HideConfidentialData()
}

func TestUserPerms(t *testing.T) {
	u := dataprovider.User{}
	u.Permissions = make(map[string][]string)
	u.Permissions["/"] = []string{dataprovider.PermUpload, dataprovider.PermDelete}
	assert.True(t, u.HasAnyPerm([]string{dataprovider.PermRename, dataprovider.PermDelete}, "/"))
	assert.False(t, u.HasAnyPerm([]string{dataprovider.PermRename, dataprovider.PermCreateDirs}, "/"))
	u.Permissions["/"] = []string{dataprovider.PermDelete, dataprovider.PermCreateDirs}
	assert.True(t, u.HasPermsDeleteAll("/"))
	assert.False(t, u.HasPermsRenameAll("/"))
	u.Permissions["/"] = []string{dataprovider.PermDeleteDirs, dataprovider.PermDeleteFiles, dataprovider.PermRenameDirs}
	assert.True(t, u.HasPermsDeleteAll("/"))
	assert.False(t, u.HasPermsRenameAll("/"))
	u.Permissions["/"] = []string{dataprovider.PermDeleteDirs, dataprovider.PermRenameFiles, dataprovider.PermRenameDirs}
	assert.False(t, u.HasPermsDeleteAll("/"))
	assert.True(t, u.HasPermsRenameAll("/"))
}

func BenchmarkBcryptHashing(b *testing.B) {
	bcryptPassword := "bcryptpassword"
	for i := 0; i < b.N; i++ {
		_, err := bcrypt.GenerateFromPassword([]byte(bcryptPassword), 10)
		if err != nil {
			panic(err)
		}
	}
}

func BenchmarkCompareBcryptPassword(b *testing.B) {
	bcryptPassword := "$2a$10$lPDdnDimJZ7d5/GwL6xDuOqoZVRXok6OHHhivCnanWUtcgN0Zafki"
	for i := 0; i < b.N; i++ {
		err := bcrypt.CompareHashAndPassword([]byte(bcryptPassword), []byte("password"))
		if err != nil {
			panic(err)
		}
	}
}

func BenchmarkArgon2Hashing(b *testing.B) {
	argonPassword := "argon2password"
	for i := 0; i < b.N; i++ {
		_, err := argon2id.CreateHash(argonPassword, argon2id.DefaultParams)
		if err != nil {
			panic(err)
		}
	}
}

func BenchmarkCompareArgon2Password(b *testing.B) {
	argon2Password := "$argon2id$v=19$m=65536,t=1,p=2$aOoAOdAwvzhOgi7wUFjXlw$wn/y37dBWdKHtPXHR03nNaKHWKPXyNuVXOknaU+YZ+s"
	for i := 0; i < b.N; i++ {
		_, err := argon2id.ComparePasswordAndHash("password", argon2Password)
		if err != nil {
			panic(err)
		}
	}
}
1263
common/connection.go
Normal file
File diff suppressed because it is too large

447
common/connection_test.go
Normal file
@@ -0,0 +1,447 @@
package common

import (
	"os"
	"path"
	"path/filepath"
	"runtime"
	"testing"
	"time"

	"github.com/pkg/sftp"
	"github.com/rs/xid"
	"github.com/sftpgo/sdk"
	"github.com/stretchr/testify/assert"

	"github.com/drakkan/sftpgo/v2/dataprovider"
	"github.com/drakkan/sftpgo/v2/kms"
	"github.com/drakkan/sftpgo/v2/vfs"
)

// MockOsFs mockable OsFs
type MockOsFs struct {
	vfs.Fs
	hasVirtualFolders bool
}

// Name returns the name for the Fs implementation
func (fs *MockOsFs) Name() string {
	return "mockOsFs"
}

// HasVirtualFolders returns true if folders are emulated
func (fs *MockOsFs) HasVirtualFolders() bool {
	return fs.hasVirtualFolders
}

func (fs *MockOsFs) IsUploadResumeSupported() bool {
	return !fs.hasVirtualFolders
}

func (fs *MockOsFs) Chtimes(name string, atime, mtime time.Time, isUploading bool) error {
	return vfs.ErrVfsUnsupported
}

func newMockOsFs(hasVirtualFolders bool, connectionID, rootDir string) vfs.Fs {
	return &MockOsFs{
		Fs:                vfs.NewOsFs(connectionID, rootDir, ""),
		hasVirtualFolders: hasVirtualFolders,
	}
}
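
// The tests below use the mock to force code paths that depend on the
// filesystem capabilities, for example (illustrative call, mirroring
// TestSetStatMode and TestMaxWriteSize):
//
//	fs := newMockOsFs(true, "", os.TempDir()) // emulated folders, no resume support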

func TestRemoveErrors(t *testing.T) {
	mappedPath := filepath.Join(os.TempDir(), "map")
	homePath := filepath.Join(os.TempDir(), "home")

	user := dataprovider.User{
		BaseUser: sdk.BaseUser{
			Username: "remove_errors_user",
			HomeDir:  homePath,
		},
		VirtualFolders: []vfs.VirtualFolder{
			{
				BaseVirtualFolder: vfs.BaseVirtualFolder{
					Name:       filepath.Base(mappedPath),
					MappedPath: mappedPath,
				},
				VirtualPath: "/virtualpath",
			},
		},
	}
	user.Permissions = make(map[string][]string)
	user.Permissions["/"] = []string{dataprovider.PermAny}
	fs := vfs.NewOsFs("", os.TempDir(), "")
	conn := NewBaseConnection("", ProtocolFTP, "", "", user)
	err := conn.IsRemoveDirAllowed(fs, mappedPath, "/virtualpath1")
	if assert.Error(t, err) {
		assert.Contains(t, err.Error(), "permission denied")
	}
	err = conn.RemoveFile(fs, filepath.Join(homePath, "missing_file"), "/missing_file",
		vfs.NewFileInfo("info", false, 100, time.Now(), false))
	assert.Error(t, err)
}

func TestSetStatMode(t *testing.T) {
	oldSetStatMode := Config.SetstatMode
	Config.SetstatMode = 1

	fakePath := "fake path"
	user := dataprovider.User{
		BaseUser: sdk.BaseUser{
			HomeDir: os.TempDir(),
		},
	}
	user.Permissions = make(map[string][]string)
	user.Permissions["/"] = []string{dataprovider.PermAny}
	fs := newMockOsFs(true, "", user.GetHomeDir())
	conn := NewBaseConnection("", ProtocolWebDAV, "", "", user)
	err := conn.handleChmod(fs, fakePath, fakePath, nil)
	assert.NoError(t, err)
	err = conn.handleChown(fs, fakePath, fakePath, nil)
	assert.NoError(t, err)
	err = conn.handleChtimes(fs, fakePath, fakePath, nil)
	assert.NoError(t, err)

	Config.SetstatMode = 2
	err = conn.handleChmod(fs, fakePath, fakePath, nil)
	assert.NoError(t, err)
	err = conn.handleChtimes(fs, fakePath, fakePath, &StatAttributes{
		Atime: time.Now(),
		Mtime: time.Now(),
	})
	assert.NoError(t, err)

	Config.SetstatMode = oldSetStatMode
}

func TestRecursiveRenameWalkError(t *testing.T) {
	fs := vfs.NewOsFs("", os.TempDir(), "")
	conn := NewBaseConnection("", ProtocolWebDAV, "", "", dataprovider.User{})
	err := conn.checkRecursiveRenameDirPermissions(fs, fs, "/source", "/target")
	assert.ErrorIs(t, err, os.ErrNotExist)
}

func TestCrossRenameFsErrors(t *testing.T) {
	fs := vfs.NewOsFs("", os.TempDir(), "")
	conn := NewBaseConnection("", ProtocolWebDAV, "", "", dataprovider.User{})
	res := conn.hasSpaceForCrossRename(fs, vfs.QuotaCheckResult{}, 1, "missingsource")
	assert.False(t, res)
	if runtime.GOOS != osWindows {
		dirPath := filepath.Join(os.TempDir(), "d")
		err := os.Mkdir(dirPath, os.ModePerm)
		assert.NoError(t, err)
		err = os.Chmod(dirPath, 0001)
		assert.NoError(t, err)

		res = conn.hasSpaceForCrossRename(fs, vfs.QuotaCheckResult{}, 1, dirPath)
		assert.False(t, res)

		err = os.Chmod(dirPath, os.ModePerm)
		assert.NoError(t, err)
		err = os.Remove(dirPath)
		assert.NoError(t, err)
	}
}

func TestRenameVirtualFolders(t *testing.T) {
	vdir := "/avdir"
	u := dataprovider.User{}
	u.VirtualFolders = append(u.VirtualFolders, vfs.VirtualFolder{
		BaseVirtualFolder: vfs.BaseVirtualFolder{
			Name:       "name",
			MappedPath: "mappedPath",
		},
		VirtualPath: vdir,
	})
	fs := vfs.NewOsFs("", os.TempDir(), "")
	conn := NewBaseConnection("", ProtocolFTP, "", "", u)
	res := conn.isRenamePermitted(fs, fs, "source", "target", vdir, "vdirtarget", nil)
	assert.False(t, res)
}

func TestRenamePerms(t *testing.T) {
	src := "source"
	target := "target"
	u := dataprovider.User{}
	u.Permissions = map[string][]string{}
	u.Permissions["/"] = []string{dataprovider.PermCreateDirs, dataprovider.PermUpload, dataprovider.PermCreateSymlinks,
		dataprovider.PermDeleteFiles}
	conn := NewBaseConnection("", ProtocolSFTP, "", "", u)
	assert.False(t, conn.hasRenamePerms(src, target, nil))
	u.Permissions["/"] = []string{dataprovider.PermCreateDirs, dataprovider.PermUpload, dataprovider.PermCreateSymlinks,
		dataprovider.PermDeleteFiles, dataprovider.PermDeleteDirs}
	assert.True(t, conn.hasRenamePerms(src, target, nil))
	u.Permissions["/"] = []string{dataprovider.PermCreateDirs, dataprovider.PermUpload, dataprovider.PermDeleteFiles,
		dataprovider.PermDeleteDirs}
	assert.False(t, conn.hasRenamePerms(src, target, nil))

	info := vfs.NewFileInfo(src, true, 0, time.Now(), false)
	u.Permissions["/"] = []string{dataprovider.PermCreateDirs, dataprovider.PermUpload, dataprovider.PermDeleteFiles}
	assert.False(t, conn.hasRenamePerms(src, target, info))
	u.Permissions["/"] = []string{dataprovider.PermCreateDirs, dataprovider.PermUpload, dataprovider.PermDeleteDirs}
	assert.True(t, conn.hasRenamePerms(src, target, info))
	u.Permissions["/"] = []string{dataprovider.PermDownload, dataprovider.PermUpload, dataprovider.PermDeleteDirs}
	assert.False(t, conn.hasRenamePerms(src, target, info))
}

func TestUpdateQuotaAfterRename(t *testing.T) {
	user := dataprovider.User{
		BaseUser: sdk.BaseUser{
			Username: userTestUsername,
			HomeDir:  filepath.Join(os.TempDir(), "home"),
		},
	}
	mappedPath := filepath.Join(os.TempDir(), "vdir")
	user.Permissions = make(map[string][]string)
	user.Permissions["/"] = []string{dataprovider.PermAny}
	user.VirtualFolders = append(user.VirtualFolders, vfs.VirtualFolder{
		BaseVirtualFolder: vfs.BaseVirtualFolder{
			MappedPath: mappedPath,
		},
		VirtualPath: "/vdir",
		QuotaFiles:  -1,
		QuotaSize:   -1,
	})
	user.VirtualFolders = append(user.VirtualFolders, vfs.VirtualFolder{
		BaseVirtualFolder: vfs.BaseVirtualFolder{
			MappedPath: mappedPath,
		},
		VirtualPath: "/vdir1",
		QuotaFiles:  -1,
		QuotaSize:   -1,
	})
	err := os.MkdirAll(user.GetHomeDir(), os.ModePerm)
	assert.NoError(t, err)
	err = os.MkdirAll(mappedPath, os.ModePerm)
	assert.NoError(t, err)
	fs, err := user.GetFilesystem("id")
	assert.NoError(t, err)
	c := NewBaseConnection("", ProtocolSFTP, "", "", user)
	request := sftp.NewRequest("Rename", "/testfile")
	if runtime.GOOS != osWindows {
		request.Filepath = "/dir"
		request.Target = path.Join("/vdir", "dir")
		testDirPath := filepath.Join(mappedPath, "dir")
		err := os.MkdirAll(testDirPath, os.ModePerm)
		assert.NoError(t, err)
		err = os.Chmod(testDirPath, 0001)
		assert.NoError(t, err)
		err = c.updateQuotaAfterRename(fs, request.Filepath, request.Target, testDirPath, 0)
		assert.Error(t, err)
		err = os.Chmod(testDirPath, os.ModePerm)
		assert.NoError(t, err)
	}
	testFile1 := "/testfile1"
	request.Target = testFile1
	request.Filepath = path.Join("/vdir", "file")
	err = c.updateQuotaAfterRename(fs, request.Filepath, request.Target, filepath.Join(mappedPath, "file"), 0)
	assert.Error(t, err)
	err = os.WriteFile(filepath.Join(mappedPath, "file"), []byte("test content"), os.ModePerm)
	assert.NoError(t, err)
	request.Filepath = testFile1
	request.Target = path.Join("/vdir", "file")
	err = c.updateQuotaAfterRename(fs, request.Filepath, request.Target, filepath.Join(mappedPath, "file"), 12)
	assert.NoError(t, err)
	err = os.WriteFile(filepath.Join(user.GetHomeDir(), "testfile1"), []byte("test content"), os.ModePerm)
	assert.NoError(t, err)
	request.Target = testFile1
	request.Filepath = path.Join("/vdir", "file")
	err = c.updateQuotaAfterRename(fs, request.Filepath, request.Target, filepath.Join(mappedPath, "file"), 12)
	assert.NoError(t, err)
	request.Target = path.Join("/vdir1", "file")
	request.Filepath = path.Join("/vdir", "file")
	err = c.updateQuotaAfterRename(fs, request.Filepath, request.Target, filepath.Join(mappedPath, "file"), 12)
	assert.NoError(t, err)

	err = os.RemoveAll(mappedPath)
	assert.NoError(t, err)
	err = os.RemoveAll(user.GetHomeDir())
	assert.NoError(t, err)
}

func TestErrorsMapping(t *testing.T) {
	fs := vfs.NewOsFs("", os.TempDir(), "")
	conn := NewBaseConnection("", ProtocolSFTP, "", "", dataprovider.User{BaseUser: sdk.BaseUser{HomeDir: os.TempDir()}})
	for _, protocol := range supportedProtocols {
		conn.SetProtocol(protocol)
		err := conn.GetFsError(fs, os.ErrNotExist)
		if protocol == ProtocolSFTP {
			assert.ErrorIs(t, err, sftp.ErrSSHFxNoSuchFile)
		} else if protocol == ProtocolWebDAV || protocol == ProtocolFTP || protocol == ProtocolHTTP ||
			protocol == ProtocolHTTPShare || protocol == ProtocolDataRetention {
			assert.EqualError(t, err, os.ErrNotExist.Error())
		} else {
			assert.EqualError(t, err, ErrNotExist.Error())
		}
		err = conn.GetFsError(fs, os.ErrPermission)
		if protocol == ProtocolSFTP {
			assert.EqualError(t, err, sftp.ErrSSHFxPermissionDenied.Error())
		} else {
			assert.EqualError(t, err, ErrPermissionDenied.Error())
		}
		err = conn.GetFsError(fs, os.ErrClosed)
		if protocol == ProtocolSFTP {
			assert.ErrorIs(t, err, sftp.ErrSSHFxFailure)
			assert.Contains(t, err.Error(), os.ErrClosed.Error())
		} else {
			assert.EqualError(t, err, ErrGenericFailure.Error())
		}
		err = conn.GetFsError(fs, ErrPermissionDenied)
		if protocol == ProtocolSFTP {
			assert.ErrorIs(t, err, sftp.ErrSSHFxFailure)
			assert.Contains(t, err.Error(), ErrPermissionDenied.Error())
		} else {
			assert.EqualError(t, err, ErrPermissionDenied.Error())
		}
		err = conn.GetFsError(fs, vfs.ErrVfsUnsupported)
		if protocol == ProtocolSFTP {
			assert.EqualError(t, err, sftp.ErrSSHFxOpUnsupported.Error())
		} else {
			assert.EqualError(t, err, ErrOpUnsupported.Error())
		}
		err = conn.GetFsError(fs, vfs.ErrStorageSizeUnavailable)
		if protocol == ProtocolSFTP {
			assert.ErrorIs(t, err, sftp.ErrSSHFxOpUnsupported)
			assert.Contains(t, err.Error(), vfs.ErrStorageSizeUnavailable.Error())
		} else {
			assert.EqualError(t, err, vfs.ErrStorageSizeUnavailable.Error())
		}
		err = conn.GetQuotaExceededError()
		assert.True(t, conn.IsQuotaExceededError(err))
		err = conn.GetNotExistError()
		assert.True(t, conn.IsNotExistError(err))
		err = conn.GetFsError(fs, nil)
		assert.NoError(t, err)
		err = conn.GetOpUnsupportedError()
		if protocol == ProtocolSFTP {
			assert.EqualError(t, err, sftp.ErrSSHFxOpUnsupported.Error())
		} else {
			assert.EqualError(t, err, ErrOpUnsupported.Error())
		}
	}
}

func TestMaxWriteSize(t *testing.T) {
	permissions := make(map[string][]string)
	permissions["/"] = []string{dataprovider.PermAny}
	user := dataprovider.User{
		BaseUser: sdk.BaseUser{
			Username:    userTestUsername,
			Permissions: permissions,
			HomeDir:     filepath.Clean(os.TempDir()),
		},
	}
	fs, err := user.GetFilesystem("123")
	assert.NoError(t, err)
	conn := NewBaseConnection("", ProtocolFTP, "", "", user)
	quotaResult := vfs.QuotaCheckResult{
		HasSpace: true,
	}
	size, err := conn.GetMaxWriteSize(quotaResult, false, 0, fs.IsUploadResumeSupported())
	assert.NoError(t, err)
	assert.Equal(t, int64(0), size)

	conn.User.Filters.MaxUploadFileSize = 100
	size, err = conn.GetMaxWriteSize(quotaResult, false, 0, fs.IsUploadResumeSupported())
	assert.NoError(t, err)
	assert.Equal(t, int64(100), size)

	quotaResult.QuotaSize = 1000
	size, err = conn.GetMaxWriteSize(quotaResult, false, 50, fs.IsUploadResumeSupported())
	assert.NoError(t, err)
	assert.Equal(t, int64(100), size)

	quotaResult.QuotaSize = 1000
	quotaResult.UsedSize = 990
	size, err = conn.GetMaxWriteSize(quotaResult, false, 50, fs.IsUploadResumeSupported())
	assert.NoError(t, err)
	assert.Equal(t, int64(60), size)

	quotaResult.QuotaSize = 0
	quotaResult.UsedSize = 0
	size, err = conn.GetMaxWriteSize(quotaResult, true, 100, fs.IsUploadResumeSupported())
	assert.True(t, conn.IsQuotaExceededError(err))
	assert.Equal(t, int64(0), size)

	size, err = conn.GetMaxWriteSize(quotaResult, true, 10, fs.IsUploadResumeSupported())
	assert.NoError(t, err)
	assert.Equal(t, int64(90), size)

	fs = newMockOsFs(true, fs.ConnectionID(), user.GetHomeDir())
	size, err = conn.GetMaxWriteSize(quotaResult, true, 100, fs.IsUploadResumeSupported())
	assert.EqualError(t, err, ErrOpUnsupported.Error())
	assert.Equal(t, int64(0), size)
}
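
// A minimal sketch of the quota arithmetic the assertions above suggest
// (my reading of the expected values, not code from the original file):
// the writable size is the remaining quota plus the size of the file being
// overwritten, further capped by MaxUploadFileSize; on resume the bytes
// already written count against the per-file limit.
func exampleMaxWriteSize(quotaRemaining, fileSize, maxUploadFileSize int64, isResume bool) (size int64, quotaExceeded bool) {
	if quotaRemaining > 0 {
		size = quotaRemaining + fileSize // e.g. (1000-990) remaining + 50 overwritten = 60
	}
	if maxUploadFileSize > 0 {
		allowed := maxUploadFileSize
		if isResume {
			allowed = maxUploadFileSize - fileSize // e.g. 100 - 10 = 90
		}
		if allowed <= 0 {
			return 0, true // the file already reached the per-file limit
		}
		if size == 0 || allowed < size {
			size = allowed
		}
	}
	return size, false
}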

func TestCheckParentDirsErrors(t *testing.T) {
	permissions := make(map[string][]string)
	permissions["/"] = []string{dataprovider.PermAny}
	user := dataprovider.User{
		BaseUser: sdk.BaseUser{
			Username:    userTestUsername,
			Permissions: permissions,
			HomeDir:     filepath.Clean(os.TempDir()),
		},
		FsConfig: vfs.Filesystem{
			Provider: sdk.CryptedFilesystemProvider,
		},
	}
	c := NewBaseConnection(xid.New().String(), ProtocolSFTP, "", "", user)
	err := c.CheckParentDirs("/a/dir")
	assert.Error(t, err)

	user.FsConfig.Provider = sdk.LocalFilesystemProvider
	user.VirtualFolders = nil
	user.VirtualFolders = append(user.VirtualFolders, vfs.VirtualFolder{
		BaseVirtualFolder: vfs.BaseVirtualFolder{
			FsConfig: vfs.Filesystem{
				Provider: sdk.CryptedFilesystemProvider,
			},
		},
		VirtualPath: "/vdir",
	})
	user.VirtualFolders = append(user.VirtualFolders, vfs.VirtualFolder{
		BaseVirtualFolder: vfs.BaseVirtualFolder{
			MappedPath: filepath.Clean(os.TempDir()),
		},
		VirtualPath: "/vdir/sub",
	})
	c = NewBaseConnection(xid.New().String(), ProtocolSFTP, "", "", user)
	err = c.CheckParentDirs("/vdir/sub/dir")
	assert.Error(t, err)

	user = dataprovider.User{
		BaseUser: sdk.BaseUser{
			Username:    userTestUsername,
			Permissions: permissions,
			HomeDir:     filepath.Clean(os.TempDir()),
		},
		FsConfig: vfs.Filesystem{
			Provider: sdk.S3FilesystemProvider,
			S3Config: vfs.S3FsConfig{
				BaseS3FsConfig: sdk.BaseS3FsConfig{
					Bucket:    "buck",
					Region:    "us-east-1",
					AccessKey: "key",
				},
				AccessSecret: kms.NewPlainSecret("s3secret"),
			},
		},
	}
	c = NewBaseConnection(xid.New().String(), ProtocolSFTP, "", "", user)
	err = c.CheckParentDirs("/a/dir")
	assert.NoError(t, err)

	user.VirtualFolders = append(user.VirtualFolders, vfs.VirtualFolder{
		BaseVirtualFolder: vfs.BaseVirtualFolder{
			MappedPath: filepath.Clean(os.TempDir()),
		},
		VirtualPath: "/local/dir",
	})

	c = NewBaseConnection(xid.New().String(), ProtocolSFTP, "", "", user)
	err = c.CheckParentDirs("/local/dir/sub-dir")
	assert.NoError(t, err)
	err = os.RemoveAll(filepath.Join(os.TempDir(), "sub-dir"))
	assert.NoError(t, err)
}
464
common/dataretention.go
Normal file
@@ -0,0 +1,464 @@
package common

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"os"
	"os/exec"
	"path"
	"path/filepath"
	"strings"
	"sync"
	"time"

	"github.com/drakkan/sftpgo/v2/dataprovider"
	"github.com/drakkan/sftpgo/v2/httpclient"
	"github.com/drakkan/sftpgo/v2/logger"
	"github.com/drakkan/sftpgo/v2/smtp"
	"github.com/drakkan/sftpgo/v2/util"
)

// RetentionCheckNotification defines the supported notification methods for a retention check result
type RetentionCheckNotification = string

// Supported notification methods
const (
	// notify results using the defined "data_retention_hook"
	RetentionCheckNotificationHook = "Hook"
	// notify results by email
	RetentionCheckNotificationEmail = "Email"
)

var (
	// RetentionChecks is the list of active retention checks
	RetentionChecks ActiveRetentionChecks
)

// ActiveRetentionChecks holds the active retention checks
type ActiveRetentionChecks struct {
	sync.RWMutex
	Checks []RetentionCheck
}

// Get returns the active retention checks
func (c *ActiveRetentionChecks) Get() []RetentionCheck {
	c.RLock()
	defer c.RUnlock()

	checks := make([]RetentionCheck, 0, len(c.Checks))
	for _, check := range c.Checks {
		foldersCopy := make([]FolderRetention, len(check.Folders))
		copy(foldersCopy, check.Folders)
		notificationsCopy := make([]string, len(check.Notifications))
		copy(notificationsCopy, check.Notifications)
		checks = append(checks, RetentionCheck{
			Username:      check.Username,
			StartTime:     check.StartTime,
			Notifications: notificationsCopy,
			Email:         check.Email,
			Folders:       foldersCopy,
		})
	}
	return checks
}

// Add a new retention check, returns nil if a retention check for the given
// username is already active. The returned result can be used to start the check
func (c *ActiveRetentionChecks) Add(check RetentionCheck, user *dataprovider.User) *RetentionCheck {
	c.Lock()
	defer c.Unlock()

	for _, val := range c.Checks {
		if val.Username == user.Username {
			return nil
		}
	}
	// we silently ignore file patterns
	user.Filters.FilePatterns = nil
	conn := NewBaseConnection("", "", "", "", *user)
	conn.SetProtocol(ProtocolDataRetention)
	conn.ID = fmt.Sprintf("data_retention_%v", user.Username)
	check.Username = user.Username
	check.StartTime = util.GetTimeAsMsSinceEpoch(time.Now())
	check.conn = conn
	check.updateUserPermissions()
	c.Checks = append(c.Checks, check)

	return &check
}
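
// A minimal usage sketch, assumed from the comment on Add (the variable
// names are illustrative, not code from the original file): callers start
// the returned check only when Add did not report an already active check
// for the user.
//
//	if activeCheck := RetentionChecks.Add(check, &user); activeCheck != nil {
//		go activeCheck.Start()
//	}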

// remove a user from the ones with active retention checks
// and returns true if the user is removed
func (c *ActiveRetentionChecks) remove(username string) bool {
	c.Lock()
	defer c.Unlock()

	for idx, check := range c.Checks {
		if check.Username == username {
			lastIdx := len(c.Checks) - 1
			c.Checks[idx] = c.Checks[lastIdx]
			c.Checks = c.Checks[:lastIdx]
			return true
		}
	}

	return false
}

// FolderRetention defines the retention policy for the specified directory path
type FolderRetention struct {
	// Path is the exposed virtual directory path, if no other specific retention is defined,
	// the retention applies to sub directories too. For example if retention is defined
	// for the paths "/" and "/sub" then the retention for "/" is applied for any file outside
	// the "/sub" directory
	Path string `json:"path"`
	// Retention time in hours. 0 means exclude this path
	Retention int `json:"retention"`
	// DeleteEmptyDirs defines if empty directories will be deleted.
	// The user needs the delete permission
	DeleteEmptyDirs bool `json:"delete_empty_dirs,omitempty"`
	// IgnoreUserPermissions defines whether files are deleted even if the user does not have
	// the delete permission. The default is "false", which means that files are skipped if
	// the user does not have the permission to delete them. This applies to sub directories too.
	IgnoreUserPermissions bool `json:"ignore_user_permissions,omitempty"`
}

func (f *FolderRetention) isValid() error {
	f.Path = path.Clean(f.Path)
	if !path.IsAbs(f.Path) {
		return util.NewValidationError(fmt.Sprintf("folder retention: invalid path %#v, please specify an absolute POSIX path",
			f.Path))
	}
	if f.Retention < 0 {
		return util.NewValidationError(fmt.Sprintf("invalid folder retention %v, it must be greater or equal to zero",
			f.Retention))
	}
	return nil
}
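
// An illustrative configuration for the path semantics documented on
// FolderRetention (the values are made up): files under "/sub" follow the
// 24 hour policy, everything else under "/" follows the 48 hour policy,
// and "/logs" is excluded because its retention is 0.
//
//	folders := []FolderRetention{
//		{Path: "/", Retention: 48},
//		{Path: "/sub", Retention: 24},
//		{Path: "/logs", Retention: 0},
//	}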

type folderRetentionCheckResult struct {
	Path         string        `json:"path"`
	Retention    int           `json:"retention"`
	DeletedFiles int           `json:"deleted_files"`
	DeletedSize  int64         `json:"deleted_size"`
	Elapsed      time.Duration `json:"-"`
	Info         string        `json:"info,omitempty"`
	Error        string        `json:"error,omitempty"`
}

// RetentionCheck defines an active retention check
type RetentionCheck struct {
	// Username to which the retention check refers
	Username string `json:"username"`
	// retention check start time as unix timestamp in milliseconds
	StartTime int64 `json:"start_time"`
	// affected folders
	Folders []FolderRetention `json:"folders"`
	// how cleanup results will be notified
	Notifications []RetentionCheckNotification `json:"notifications,omitempty"`
	// email to use if the notification method is set to email
	Email string `json:"email,omitempty"`
	// Cleanup results
	results []*folderRetentionCheckResult `json:"-"`
	conn    *BaseConnection
}

// Validate returns an error if the specified folders are not valid
func (c *RetentionCheck) Validate() error {
	folderPaths := make(map[string]bool)
	nothingToDo := true
	for idx := range c.Folders {
		f := &c.Folders[idx]
		if err := f.isValid(); err != nil {
			return err
		}
		if f.Retention > 0 {
			nothingToDo = false
		}
		if _, ok := folderPaths[f.Path]; ok {
			return util.NewValidationError(fmt.Sprintf("duplicated folder path %#v", f.Path))
		}
		folderPaths[f.Path] = true
	}
	if nothingToDo {
		return util.NewValidationError("nothing to delete!")
	}
	for _, notification := range c.Notifications {
		switch notification {
		case RetentionCheckNotificationEmail:
			if !smtp.IsEnabled() {
				return util.NewValidationError("in order to notify results via email you must configure an SMTP server")
			}
			if c.Email == "" {
				return util.NewValidationError("in order to notify results via email you must add a valid email address to your profile")
			}
		case RetentionCheckNotificationHook:
			if Config.DataRetentionHook == "" {
				return util.NewValidationError("in order to notify results via hook you must define a data_retention_hook")
			}
		default:
			return util.NewValidationError(fmt.Sprintf("invalid notification %#v", notification))
		}
	}
	return nil
}

func (c *RetentionCheck) updateUserPermissions() {
	for _, folder := range c.Folders {
		if folder.IgnoreUserPermissions {
			c.conn.User.Permissions[folder.Path] = []string{dataprovider.PermAny}
		}
	}
}

func (c *RetentionCheck) getFolderRetention(folderPath string) (FolderRetention, error) {
	dirsForPath := util.GetDirsForVirtualPath(folderPath)
	for _, dirPath := range dirsForPath {
		for _, folder := range c.Folders {
			if folder.Path == dirPath {
				return folder, nil
			}
		}
	}

	return FolderRetention{}, fmt.Errorf("unable to find folder retention for %#v", folderPath)
}
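
// getFolderRetention resolves the most specific policy first: assuming
// util.GetDirsForVirtualPath walks from the deepest directory upwards, a
// file at "/sub/a/file.txt" matches a policy for "/sub" before one for "/",
// which is the precedence documented on FolderRetention.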

func (c *RetentionCheck) removeFile(virtualPath string, info os.FileInfo) error {
	fs, fsPath, err := c.conn.GetFsAndResolvedPath(virtualPath)
	if err != nil {
		return err
	}
	return c.conn.RemoveFile(fs, fsPath, virtualPath, info)
}

func (c *RetentionCheck) cleanupFolder(folderPath string) error {
	deleteFilesPerms := []string{dataprovider.PermDelete, dataprovider.PermDeleteFiles}
	startTime := time.Now()
	result := &folderRetentionCheckResult{
		Path: folderPath,
	}
	c.results = append(c.results, result)
	if !c.conn.User.HasPerm(dataprovider.PermListItems, folderPath) || !c.conn.User.HasAnyPerm(deleteFilesPerms, folderPath) {
		result.Elapsed = time.Since(startTime)
		result.Info = "data retention check skipped: no permissions"
		c.conn.Log(logger.LevelInfo, "user %#v does not have permissions to check retention on %#v, retention check skipped",
			c.conn.User, folderPath)
		return nil
	}

	folderRetention, err := c.getFolderRetention(folderPath)
	if err != nil {
		result.Elapsed = time.Since(startTime)
		result.Error = "unable to get folder retention"
		c.conn.Log(logger.LevelError, "unable to get folder retention for path %#v", folderPath)
		return err
	}
	result.Retention = folderRetention.Retention
	if folderRetention.Retention == 0 {
		result.Elapsed = time.Since(startTime)
		result.Info = "data retention check skipped: retention is set to 0"
		c.conn.Log(logger.LevelDebug, "retention check skipped for folder %#v, retention is set to 0", folderPath)
		return nil
	}
	c.conn.Log(logger.LevelDebug, "start retention check for folder %#v, retention: %v hours, delete empty dirs? %v, ignore user perms? %v",
		folderPath, folderRetention.Retention, folderRetention.DeleteEmptyDirs, folderRetention.IgnoreUserPermissions)
	files, err := c.conn.ListDir(folderPath)
	if err != nil {
		result.Elapsed = time.Since(startTime)
		if err == c.conn.GetNotExistError() {
			result.Info = "data retention check skipped, folder does not exist"
			c.conn.Log(logger.LevelDebug, "folder %#v does not exist, retention check skipped", folderPath)
			return nil
		}
		result.Error = fmt.Sprintf("unable to list directory %#v", folderPath)
		c.conn.Log(logger.LevelError, result.Error)
		return err
	}
	for _, info := range files {
		virtualPath := path.Join(folderPath, info.Name())
		if info.IsDir() {
			if err := c.cleanupFolder(virtualPath); err != nil {
				result.Elapsed = time.Since(startTime)
				result.Error = fmt.Sprintf("unable to check folder: %v", err)
				c.conn.Log(logger.LevelError, "unable to cleanup folder %#v: %v", virtualPath, err)
				return err
			}
		} else {
			retentionTime := info.ModTime().Add(time.Duration(folderRetention.Retention) * time.Hour)
			if retentionTime.Before(time.Now()) {
				if err := c.removeFile(virtualPath, info); err != nil {
					result.Elapsed = time.Since(startTime)
					result.Error = fmt.Sprintf("unable to remove file %#v: %v", virtualPath, err)
					c.conn.Log(logger.LevelError, "unable to remove file %#v, retention %v: %v",
						virtualPath, retentionTime, err)
					return err
				}
				c.conn.Log(logger.LevelDebug, "removed file %#v, modification time: %v, retention: %v hours, retention time: %v",
					virtualPath, info.ModTime(), folderRetention.Retention, retentionTime)
				result.DeletedFiles++
				result.DeletedSize += info.Size()
			}
		}
	}

	if folderRetention.DeleteEmptyDirs {
		c.checkEmptyDirRemoval(folderPath)
	}
	result.Elapsed = time.Since(startTime)
	c.conn.Log(logger.LevelDebug, "retention check completed for folder %#v, deleted files: %v, deleted size: %v bytes",
		folderPath, result.DeletedFiles, result.DeletedSize)

	return nil
}

func (c *RetentionCheck) checkEmptyDirRemoval(folderPath string) {
	if folderPath != "/" && c.conn.User.HasAnyPerm([]string{
		dataprovider.PermDelete,
		dataprovider.PermDeleteDirs,
	}, path.Dir(folderPath),
	) {
		files, err := c.conn.ListDir(folderPath)
		if err == nil && len(files) == 0 {
			err = c.conn.RemoveDir(folderPath)
			c.conn.Log(logger.LevelDebug, "tried to remove empty dir %#v, error: %v", folderPath, err)
}
|
||||
}
|
||||
}
|
||||
|
||||
// Start starts the retention check
|
||||
func (c *RetentionCheck) Start() {
|
||||
c.conn.Log(logger.LevelInfo, "retention check started")
|
||||
defer RetentionChecks.remove(c.conn.User.Username)
|
||||
defer c.conn.CloseFS() //nolint:errcheck
|
||||
|
||||
startTime := time.Now()
|
||||
for _, folder := range c.Folders {
|
||||
if folder.Retention > 0 {
|
||||
if err := c.cleanupFolder(folder.Path); err != nil {
|
||||
c.conn.Log(logger.LevelError, "retention check failed, unable to cleanup folder %#v", folder.Path)
|
||||
c.sendNotifications(time.Since(startTime), err)
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
c.conn.Log(logger.LevelInfo, "retention check completed")
|
||||
c.sendNotifications(time.Since(startTime), nil)
|
||||
}
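
// Usage sketch (illustrative, inferred from the tests below, not part of the
// original file): a check is registered in the active list and then run in
// its own goroutine, e.g.
//
//	if c := RetentionChecks.Add(check, &user); c != nil {
//		go c.Start()
//	}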

func (c *RetentionCheck) sendNotifications(elapsed time.Duration, err error) {
	for _, notification := range c.Notifications {
		switch notification {
		case RetentionCheckNotificationEmail:
			c.sendEmailNotification(elapsed, err) //nolint:errcheck
		case RetentionCheckNotificationHook:
			c.sendHookNotification(elapsed, err) //nolint:errcheck
		}
	}
}

func (c *RetentionCheck) sendEmailNotification(elapsed time.Duration, errCheck error) error {
	body := new(bytes.Buffer)
	data := make(map[string]interface{})
	data["Results"] = c.results
	totalDeletedFiles := 0
	totalDeletedSize := int64(0)
	for _, result := range c.results {
		totalDeletedFiles += result.DeletedFiles
		totalDeletedSize += result.DeletedSize
	}
	data["HumanizeSize"] = util.ByteCountIEC
	data["TotalFiles"] = totalDeletedFiles
	data["TotalSize"] = totalDeletedSize
	data["Elapsed"] = elapsed
	data["Username"] = c.conn.User.Username
	data["StartTime"] = util.GetTimeFromMsecSinceEpoch(c.StartTime)
	if errCheck == nil {
		data["Status"] = "Succeeded"
	} else {
		data["Status"] = "Failed"
	}
	if err := smtp.RenderRetentionReportTemplate(body, data); err != nil {
		c.conn.Log(logger.LevelError, "unable to render retention check template: %v", err)
		return err
	}
	startTime := time.Now()
	subject := fmt.Sprintf("Retention check completed for user %#v", c.conn.User.Username)
	if err := smtp.SendEmail(c.Email, subject, body.String(), smtp.EmailContentTypeTextHTML); err != nil {
		c.conn.Log(logger.LevelError, "unable to notify retention check result via email: %v, elapsed: %v", err,
			time.Since(startTime))
		return err
	}
	c.conn.Log(logger.LevelInfo, "retention check result successfully notified via email, elapsed: %v", time.Since(startTime))
	return nil
}

func (c *RetentionCheck) sendHookNotification(elapsed time.Duration, errCheck error) error {
	data := make(map[string]interface{})
	totalDeletedFiles := 0
	totalDeletedSize := int64(0)
	for _, result := range c.results {
		totalDeletedFiles += result.DeletedFiles
		totalDeletedSize += result.DeletedSize
	}
	data["username"] = c.conn.User.Username
	data["start_time"] = c.StartTime
	data["elapsed"] = elapsed.Milliseconds()
	if errCheck == nil {
		data["status"] = 1
	} else {
		data["status"] = 0
	}
	data["total_deleted_files"] = totalDeletedFiles
	data["total_deleted_size"] = totalDeletedSize
	data["details"] = c.results
	jsonData, _ := json.Marshal(data)

	startTime := time.Now()

	if strings.HasPrefix(Config.DataRetentionHook, "http") {
		var url *url.URL
		url, err := url.Parse(Config.DataRetentionHook)
		if err != nil {
			c.conn.Log(logger.LevelError, "invalid data retention hook %#v: %v", Config.DataRetentionHook, err)
			return err
		}
		respCode := 0

		resp, err := httpclient.RetryablePost(url.String(), "application/json", bytes.NewBuffer(jsonData))
		if err == nil {
			respCode = resp.StatusCode
			resp.Body.Close()

			if respCode != http.StatusOK {
				err = errUnexpectedHTTResponse
			}
		}

		c.conn.Log(logger.LevelDebug, "notified result to URL: %#v, status code: %v, elapsed: %v err: %v",
			url.Redacted(), respCode, time.Since(startTime), err)

		return err
	}
	if !filepath.IsAbs(Config.DataRetentionHook) {
		err := fmt.Errorf("invalid data retention hook %#v", Config.DataRetentionHook)
		c.conn.Log(logger.LevelError, "%v", err)
		return err
	}
	ctx, cancel := context.WithTimeout(context.Background(), 20*time.Second)
	defer cancel()

	cmd := exec.CommandContext(ctx, Config.DataRetentionHook)
	cmd.Env = append(os.Environ(),
		fmt.Sprintf("SFTPGO_DATA_RETENTION_RESULT=%v", string(jsonData)))
	err := cmd.Run()

	c.conn.Log(logger.LevelDebug, "notified result using command: %v, elapsed: %v err: %v",
		Config.DataRetentionHook, time.Since(startTime), err)
	return err
}
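
// Hook payload (illustrative): both the HTTP hook (as the POST body) and the
// command hook (via the SFTPGO_DATA_RETENTION_RESULT environment variable)
// receive the JSON object built above. The top-level keys are fixed by the
// code; the values and the entries under "details" are examples only:
//
//	{
//	  "username": "user1",
//	  "start_time": 1639824000000,
//	  "elapsed": 12000,
//	  "status": 1,
//	  "total_deleted_files": 10,
//	  "total_deleted_size": 32657,
//	  "details": [ /* serialized folderRetentionCheckResult values */ ]
//	}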

common/dataretention_test.go (new file, 340 lines)
@@ -0,0 +1,340 @@
package common

import (
	"errors"
	"fmt"
	"os/exec"
	"runtime"
	"testing"
	"time"

	"github.com/sftpgo/sdk"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"

	"github.com/drakkan/sftpgo/v2/dataprovider"
	"github.com/drakkan/sftpgo/v2/smtp"
)

func TestRetentionValidation(t *testing.T) {
	check := RetentionCheck{}
	check.Folders = append(check.Folders, FolderRetention{
		Path:      "relative",
		Retention: 10,
	})
	err := check.Validate()
	require.Error(t, err)
	assert.Contains(t, err.Error(), "please specify an absolute POSIX path")

	check.Folders = []FolderRetention{
		{
			Path:      "/",
			Retention: -1,
		},
	}
	err = check.Validate()
	require.Error(t, err)
	assert.Contains(t, err.Error(), "invalid folder retention")

	check.Folders = []FolderRetention{
		{
			Path:      "/ab/..",
			Retention: 0,
		},
	}
	err = check.Validate()
	require.Error(t, err)
	assert.Contains(t, err.Error(), "nothing to delete")
	assert.Equal(t, "/", check.Folders[0].Path)

	check.Folders = append(check.Folders, FolderRetention{
		Path:      "/../..",
		Retention: 24,
	})
	err = check.Validate()
	require.Error(t, err)
	assert.Contains(t, err.Error(), `duplicated folder path "/"`)

	check.Folders = []FolderRetention{
		{
			Path:      "/dir1",
			Retention: 48,
		},
		{
			Path:      "/dir2",
			Retention: 96,
		},
	}
	err = check.Validate()
	assert.NoError(t, err)
	assert.Len(t, check.Notifications, 0)
	assert.Empty(t, check.Email)

	check.Notifications = []RetentionCheckNotification{RetentionCheckNotificationEmail}
	err = check.Validate()
	require.Error(t, err)
	assert.Contains(t, err.Error(), "you must configure an SMTP server")

	smtpCfg := smtp.Config{
		Host:          "mail.example.com",
		Port:          25,
		TemplatesPath: "templates",
	}
	err = smtpCfg.Initialize("..")
	require.NoError(t, err)

	err = check.Validate()
	require.Error(t, err)
	assert.Contains(t, err.Error(), "you must add a valid email address")

	check.Email = "admin@example.com"
	err = check.Validate()
	assert.NoError(t, err)

	smtpCfg = smtp.Config{}
	err = smtpCfg.Initialize("..")
	require.NoError(t, err)

	check.Notifications = []RetentionCheckNotification{RetentionCheckNotificationHook}
	err = check.Validate()
	require.Error(t, err)
	assert.Contains(t, err.Error(), "data_retention_hook")

	check.Notifications = []RetentionCheckNotification{"not valid"}
	err = check.Validate()
	require.Error(t, err)
	assert.Contains(t, err.Error(), "invalid notification")
}

func TestRetentionEmailNotifications(t *testing.T) {
	smtpCfg := smtp.Config{
		Host:          "127.0.0.1",
		Port:          2525,
		TemplatesPath: "templates",
	}
	err := smtpCfg.Initialize("..")
	require.NoError(t, err)

	user := dataprovider.User{
		BaseUser: sdk.BaseUser{
			Username: "user1",
		},
	}
	user.Permissions = make(map[string][]string)
	user.Permissions["/"] = []string{dataprovider.PermAny}
	check := RetentionCheck{
		Notifications: []RetentionCheckNotification{RetentionCheckNotificationEmail},
		Email:         "notification@example.com",
		results: []*folderRetentionCheckResult{
			{
				Path:         "/",
				Retention:    24,
				DeletedFiles: 10,
				DeletedSize:  32657,
				Elapsed:      10 * time.Second,
			},
		},
	}
	conn := NewBaseConnection("", "", "", "", user)
	conn.SetProtocol(ProtocolDataRetention)
	conn.ID = fmt.Sprintf("data_retention_%v", user.Username)
	check.conn = conn
	check.sendNotifications(1*time.Second, nil)
	err = check.sendEmailNotification(1*time.Second, nil)
	assert.NoError(t, err)
	err = check.sendEmailNotification(1*time.Second, errors.New("test error"))
	assert.NoError(t, err)

	smtpCfg.Port = 2626
	err = smtpCfg.Initialize("..")
	require.NoError(t, err)
	err = check.sendEmailNotification(1*time.Second, nil)
	assert.Error(t, err)

	smtpCfg = smtp.Config{}
	err = smtpCfg.Initialize("..")
	require.NoError(t, err)
	err = check.sendEmailNotification(1*time.Second, nil)
	assert.Error(t, err)
}

func TestRetentionHookNotifications(t *testing.T) {
	dataRetentionHook := Config.DataRetentionHook

	Config.DataRetentionHook = fmt.Sprintf("http://%v", httpAddr)
	user := dataprovider.User{
		BaseUser: sdk.BaseUser{
			Username: "user2",
		},
	}
	user.Permissions = make(map[string][]string)
	user.Permissions["/"] = []string{dataprovider.PermAny}
	check := RetentionCheck{
		Notifications: []RetentionCheckNotification{RetentionCheckNotificationHook},
		results: []*folderRetentionCheckResult{
			{
				Path:         "/",
				Retention:    24,
				DeletedFiles: 10,
				DeletedSize:  32657,
				Elapsed:      10 * time.Second,
			},
		},
	}
	conn := NewBaseConnection("", "", "", "", user)
	conn.SetProtocol(ProtocolDataRetention)
	conn.ID = fmt.Sprintf("data_retention_%v", user.Username)
	check.conn = conn
	check.sendNotifications(1*time.Second, nil)
	err := check.sendHookNotification(1*time.Second, nil)
	assert.NoError(t, err)

	Config.DataRetentionHook = fmt.Sprintf("http://%v/404", httpAddr)
	err = check.sendHookNotification(1*time.Second, nil)
	assert.ErrorIs(t, err, errUnexpectedHTTResponse)

	Config.DataRetentionHook = "http://foo\x7f.com/retention"
	err = check.sendHookNotification(1*time.Second, err)
	assert.Error(t, err)

	Config.DataRetentionHook = "relativepath"
	err = check.sendHookNotification(1*time.Second, err)
	assert.Error(t, err)

	if runtime.GOOS != osWindows {
		hookCmd, err := exec.LookPath("true")
		assert.NoError(t, err)

		Config.DataRetentionHook = hookCmd
		err = check.sendHookNotification(1*time.Second, err)
		assert.NoError(t, err)
	}

	Config.DataRetentionHook = dataRetentionHook
}

func TestRetentionPermissionsAndGetFolder(t *testing.T) {
	user := dataprovider.User{
		BaseUser: sdk.BaseUser{
			Username: "user1",
		},
	}
	user.Permissions = make(map[string][]string)
	user.Permissions["/"] = []string{dataprovider.PermListItems, dataprovider.PermDelete}
	user.Permissions["/dir1"] = []string{dataprovider.PermListItems}
	user.Permissions["/dir2/sub1"] = []string{dataprovider.PermCreateDirs}
	user.Permissions["/dir2/sub2"] = []string{dataprovider.PermDelete}

	check := RetentionCheck{
		Folders: []FolderRetention{
			{
				Path:                  "/dir2",
				Retention:             24 * 7,
				IgnoreUserPermissions: true,
			},
			{
				Path:                  "/dir3",
				Retention:             24 * 7,
				IgnoreUserPermissions: false,
			},
			{
				Path:                  "/dir2/sub1/sub",
				Retention:             24,
				IgnoreUserPermissions: true,
			},
		},
	}

	conn := NewBaseConnection("", "", "", "", user)
	conn.SetProtocol(ProtocolDataRetention)
	conn.ID = fmt.Sprintf("data_retention_%v", user.Username)
	check.conn = conn
	check.updateUserPermissions()
	assert.Equal(t, []string{dataprovider.PermListItems, dataprovider.PermDelete}, conn.User.Permissions["/"])
	assert.Equal(t, []string{dataprovider.PermListItems}, conn.User.Permissions["/dir1"])
	assert.Equal(t, []string{dataprovider.PermAny}, conn.User.Permissions["/dir2"])
	assert.Equal(t, []string{dataprovider.PermAny}, conn.User.Permissions["/dir2/sub1/sub"])
	assert.Equal(t, []string{dataprovider.PermCreateDirs}, conn.User.Permissions["/dir2/sub1"])
	assert.Equal(t, []string{dataprovider.PermDelete}, conn.User.Permissions["/dir2/sub2"])

	_, err := check.getFolderRetention("/")
	assert.Error(t, err)
	folder, err := check.getFolderRetention("/dir3")
	assert.NoError(t, err)
	assert.Equal(t, "/dir3", folder.Path)
	folder, err = check.getFolderRetention("/dir2/sub3")
	assert.NoError(t, err)
	assert.Equal(t, "/dir2", folder.Path)
	folder, err = check.getFolderRetention("/dir2/sub2")
	assert.NoError(t, err)
	assert.Equal(t, "/dir2", folder.Path)
	folder, err = check.getFolderRetention("/dir2/sub1")
	assert.NoError(t, err)
	assert.Equal(t, "/dir2", folder.Path)
	folder, err = check.getFolderRetention("/dir2/sub1/sub/sub")
	assert.NoError(t, err)
	assert.Equal(t, "/dir2/sub1/sub", folder.Path)
}

func TestRetentionCheckAddRemove(t *testing.T) {
	username := "username"
	user := dataprovider.User{
		BaseUser: sdk.BaseUser{
			Username: username,
		},
	}
	user.Permissions = make(map[string][]string)
	user.Permissions["/"] = []string{dataprovider.PermAny}
	check := RetentionCheck{
		Folders: []FolderRetention{
			{
				Path:      "/",
				Retention: 48,
			},
		},
		Notifications: []RetentionCheckNotification{RetentionCheckNotificationHook},
	}
	assert.NotNil(t, RetentionChecks.Add(check, &user))
	checks := RetentionChecks.Get()
	require.Len(t, checks, 1)
	assert.Equal(t, username, checks[0].Username)
	assert.Greater(t, checks[0].StartTime, int64(0))
	require.Len(t, checks[0].Folders, 1)
	assert.Equal(t, check.Folders[0].Path, checks[0].Folders[0].Path)
	assert.Equal(t, check.Folders[0].Retention, checks[0].Folders[0].Retention)
	require.Len(t, checks[0].Notifications, 1)
	assert.Equal(t, RetentionCheckNotificationHook, checks[0].Notifications[0])

	assert.Nil(t, RetentionChecks.Add(check, &user))
	assert.True(t, RetentionChecks.remove(username))
	require.Len(t, RetentionChecks.Get(), 0)
	assert.False(t, RetentionChecks.remove(username))
}

func TestCleanupErrors(t *testing.T) {
	user := dataprovider.User{
		BaseUser: sdk.BaseUser{
			Username: "u",
		},
	}
	user.Permissions = make(map[string][]string)
	user.Permissions["/"] = []string{dataprovider.PermAny}
	check := &RetentionCheck{
		Folders: []FolderRetention{
			{
				Path:      "/path",
				Retention: 48,
			},
		},
	}
	check = RetentionChecks.Add(*check, &user)
	require.NotNil(t, check)

	err := check.removeFile("missing file", nil)
	assert.Error(t, err)

	err = check.cleanupFolder("/")
	assert.Error(t, err)

	assert.True(t, RetentionChecks.remove(user.Username))
}

common/defender.go (new file, 274 lines)
@@ -0,0 +1,274 @@
package common

import (
	"encoding/json"
	"fmt"
	"net"
	"os"
	"sync"
	"time"

	"github.com/yl2chen/cidranger"

	"github.com/drakkan/sftpgo/v2/dataprovider"
	"github.com/drakkan/sftpgo/v2/logger"
	"github.com/drakkan/sftpgo/v2/util"
)

// HostEvent is the enumerable for the supported host events
type HostEvent int

// Supported host events
const (
	HostEventLoginFailed HostEvent = iota
	HostEventUserNotFound
	HostEventNoLoginTried
	HostEventLimitExceeded
)

// Supported defender drivers
const (
	DefenderDriverMemory   = "memory"
	DefenderDriverProvider = "provider"
)

var (
	supportedDefenderDrivers = []string{DefenderDriverMemory, DefenderDriverProvider}
)

// Defender defines the interface that a defender must implement
type Defender interface {
	GetHosts() ([]*dataprovider.DefenderEntry, error)
	GetHost(ip string) (*dataprovider.DefenderEntry, error)
	AddEvent(ip string, event HostEvent)
	IsBanned(ip string) bool
	GetBanTime(ip string) (*time.Time, error)
	GetScore(ip string) (int, error)
	DeleteHost(ip string) bool
	Reload() error
}
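
// Minimal usage sketch (illustrative, not part of this file): a protocol
// server would consult the defender before accepting a connection and feed
// it events afterwards. handleConnAttempt is a hypothetical caller; defender
// is any value implementing the interface above.
//
//	func handleConnAttempt(defender Defender, ip string) bool {
//		if defender.IsBanned(ip) {
//			return false // drop the connection
//		}
//		// ... authentication happens here; on failure record an event:
//		defender.AddEvent(ip, HostEventLoginFailed)
//		return true
//	}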

// DefenderConfig defines the "defender" configuration
type DefenderConfig struct {
	// Set to true to enable the defender
	Enabled bool `json:"enabled" mapstructure:"enabled"`
	// Defender implementation to use, we support "memory" and "provider".
	// Using "provider" as driver you can share the defender events among
	// multiple SFTPGo instances. For a single instance the "memory" driver
	// will be much faster
	Driver string `json:"driver" mapstructure:"driver"`
	// BanTime is the number of minutes that a host is banned
	BanTime int `json:"ban_time" mapstructure:"ban_time"`
	// Percentage increase of the ban time if a banned host tries to connect again
	BanTimeIncrement int `json:"ban_time_increment" mapstructure:"ban_time_increment"`
	// Threshold value for banning a client
	Threshold int `json:"threshold" mapstructure:"threshold"`
	// Score for invalid login attempts, e.g. non-existent user accounts or
	// clients disconnected for inactivity without authentication attempts
	ScoreInvalid int `json:"score_invalid" mapstructure:"score_invalid"`
	// Score for valid login attempts, e.g. user accounts that exist
	ScoreValid int `json:"score_valid" mapstructure:"score_valid"`
	// Score for limit exceeded events, generated from the rate limiters or for
	// max connections per-host exceeded
	ScoreLimitExceeded int `json:"score_limit_exceeded" mapstructure:"score_limit_exceeded"`
	// Defines the time window, in minutes, for tracking client errors.
	// A host is banned if it has exceeded the defined threshold during
	// the last observation time minutes
	ObservationTime int `json:"observation_time" mapstructure:"observation_time"`
	// The number of banned IPs and host scores kept in memory will vary between the
	// soft and hard limit for the "memory" driver. For the "provider" driver the
	// soft limit is ignored and the hard limit is used to limit the number of entries
	// returned when you request the entire host list from the defender
	EntriesSoftLimit int `json:"entries_soft_limit" mapstructure:"entries_soft_limit"`
	EntriesHardLimit int `json:"entries_hard_limit" mapstructure:"entries_hard_limit"`
	// Path to a file containing a list of IP addresses and/or networks to never ban
	SafeListFile string `json:"safelist_file" mapstructure:"safelist_file"`
	// Path to a file containing a list of IP addresses and/or networks to always ban
	BlockListFile string `json:"blocklist_file" mapstructure:"blocklist_file"`
}
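
// A sample "defender" configuration section (JSON; the key names come from
// the struct tags above, the values are illustrative only):
//
//	"defender": {
//	  "enabled": true,
//	  "driver": "memory",
//	  "ban_time": 30,
//	  "ban_time_increment": 50,
//	  "threshold": 15,
//	  "score_invalid": 2,
//	  "score_valid": 1,
//	  "score_limit_exceeded": 3,
//	  "observation_time": 30,
//	  "entries_soft_limit": 100,
//	  "entries_hard_limit": 150,
//	  "safelist_file": "",
//	  "blocklist_file": ""
//	}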

type baseDefender struct {
	config *DefenderConfig
	sync.RWMutex
	safeList  *HostList
	blockList *HostList
}

// Reload reloads block and safe lists
func (d *baseDefender) Reload() error {
	blockList, err := loadHostListFromFile(d.config.BlockListFile)
	if err != nil {
		return err
	}

	d.Lock()
	d.blockList = blockList
	d.Unlock()

	safeList, err := loadHostListFromFile(d.config.SafeListFile)
	if err != nil {
		return err
	}

	d.Lock()
	d.safeList = safeList
	d.Unlock()

	return nil
}

func (d *baseDefender) isBanned(ip string) bool {
	if d.blockList != nil && d.blockList.isListed(ip) {
		// permanent ban
		return true
	}

	return false
}

func (d *baseDefender) getScore(event HostEvent) int {
	var score int

	switch event {
	case HostEventLoginFailed:
		score = d.config.ScoreValid
	case HostEventLimitExceeded:
		score = d.config.ScoreLimitExceeded
	case HostEventUserNotFound, HostEventNoLoginTried:
		score = d.config.ScoreInvalid
	}
	return score
}

// HostListFile defines the structure expected for safe/block list files
type HostListFile struct {
	IPAddresses  []string `json:"addresses"`
	CIDRNetworks []string `json:"networks"`
}
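
// A sample list file matching the structure above (the JSON keys "addresses"
// and "networks" come from the struct tags; the IPs and CIDR ranges are
// illustrative only):
//
//	{
//	  "addresses": ["192.0.2.10", "198.51.100.7"],
//	  "networks": ["203.0.113.0/24"]
//	}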

// HostList defines the structure used to keep the HostListFile in memory
type HostList struct {
	IPAddresses map[string]bool
	Ranges      cidranger.Ranger
}

func (h *HostList) isListed(ip string) bool {
	if _, ok := h.IPAddresses[ip]; ok {
		return true
	}

	ok, err := h.Ranges.Contains(net.ParseIP(ip))
	if err != nil {
		return false
	}

	return ok
}

type hostEvent struct {
	dateTime time.Time
	score    int
}

type hostScore struct {
	TotalScore int
	Events     []hostEvent
}

// validate returns an error if the configuration is invalid
func (c *DefenderConfig) validate() error {
	if !c.Enabled {
		return nil
	}
	if c.ScoreInvalid >= c.Threshold {
		return fmt.Errorf("score_invalid %v cannot be greater than or equal to threshold %v", c.ScoreInvalid, c.Threshold)
	}
	if c.ScoreValid >= c.Threshold {
		return fmt.Errorf("score_valid %v cannot be greater than or equal to threshold %v", c.ScoreValid, c.Threshold)
	}
	if c.ScoreLimitExceeded >= c.Threshold {
		return fmt.Errorf("score_limit_exceeded %v cannot be greater than or equal to threshold %v", c.ScoreLimitExceeded, c.Threshold)
	}
	if c.BanTime <= 0 {
		return fmt.Errorf("invalid ban_time %v", c.BanTime)
	}
	if c.BanTimeIncrement <= 0 {
		return fmt.Errorf("invalid ban_time_increment %v", c.BanTimeIncrement)
	}
	if c.ObservationTime <= 0 {
		return fmt.Errorf("invalid observation_time %v", c.ObservationTime)
	}
	if c.EntriesSoftLimit <= 0 {
		return fmt.Errorf("invalid entries_soft_limit %v", c.EntriesSoftLimit)
	}
	if c.EntriesHardLimit <= c.EntriesSoftLimit {
		return fmt.Errorf("invalid entries_hard_limit %v, must be > %v", c.EntriesHardLimit, c.EntriesSoftLimit)
	}

	return nil
}

func loadHostListFromFile(name string) (*HostList, error) {
	if name == "" {
		return nil, nil
	}
	if !util.IsFileInputValid(name) {
		return nil, fmt.Errorf("invalid host list file name %#v", name)
	}

	info, err := os.Stat(name)
	if err != nil {
		return nil, err
	}

	// opinionated max size, you should avoid big host lists
	if info.Size() > 1048576*5 { // 5MB
		return nil, fmt.Errorf("host list file %#v is too big: %v bytes", name, info.Size())
	}

	content, err := os.ReadFile(name)
	if err != nil {
		return nil, fmt.Errorf("unable to read input file %#v: %v", name, err)
	}

	var hostList HostListFile

	err = json.Unmarshal(content, &hostList)
	if err != nil {
		return nil, err
	}

	if len(hostList.CIDRNetworks) > 0 || len(hostList.IPAddresses) > 0 {
		result := &HostList{
			IPAddresses: make(map[string]bool),
			Ranges:      cidranger.NewPCTrieRanger(),
		}
		ipCount := 0
		cdrCount := 0
		for _, ip := range hostList.IPAddresses {
			if net.ParseIP(ip) == nil {
				logger.Warn(logSender, "", "unable to parse IP %#v", ip)
				continue
			}
			result.IPAddresses[ip] = true
			ipCount++
		}
		for _, cidrNet := range hostList.CIDRNetworks {
			_, network, err := net.ParseCIDR(cidrNet)
			if err != nil {
				logger.Warn(logSender, "", "unable to parse CIDR network %#v", cidrNet)
				continue
			}
			err = result.Ranges.Insert(cidranger.NewBasicRangerEntry(*network))
			if err == nil {
				cdrCount++
			}
		}

		logger.Info(logSender, "", "list %#v loaded, ip addresses loaded: %v/%v, networks loaded: %v/%v",
			name, ipCount, len(hostList.IPAddresses), cdrCount, len(hostList.CIDRNetworks))
		return result, nil
	}

	return nil, nil
}

common/defender_test.go (new file, 678 lines)
@@ -0,0 +1,678 @@
package common

import (
	"crypto/rand"
	"encoding/hex"
	"encoding/json"
	"fmt"
	"net"
	"os"
	"path/filepath"
	"runtime"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
	"github.com/yl2chen/cidranger"
)

func TestBasicDefender(t *testing.T) {
	bl := HostListFile{
		IPAddresses:  []string{"172.16.1.1", "172.16.1.2"},
		CIDRNetworks: []string{"10.8.0.0/24"},
	}
	sl := HostListFile{
		IPAddresses:  []string{"172.16.1.3", "172.16.1.4"},
		CIDRNetworks: []string{"192.168.8.0/24"},
	}
	blFile := filepath.Join(os.TempDir(), "bl.json")
	slFile := filepath.Join(os.TempDir(), "sl.json")

	data, err := json.Marshal(bl)
	assert.NoError(t, err)

	err = os.WriteFile(blFile, data, os.ModePerm)
	assert.NoError(t, err)

	data, err = json.Marshal(sl)
	assert.NoError(t, err)

	err = os.WriteFile(slFile, data, os.ModePerm)
	assert.NoError(t, err)

	config := &DefenderConfig{
		Enabled:            true,
		BanTime:            10,
		BanTimeIncrement:   2,
		Threshold:          5,
		ScoreInvalid:       2,
		ScoreValid:         1,
		ScoreLimitExceeded: 3,
		ObservationTime:    15,
		EntriesSoftLimit:   1,
		EntriesHardLimit:   2,
		SafeListFile:       "slFile",
		BlockListFile:      "blFile",
	}

	_, err = newInMemoryDefender(config)
	assert.Error(t, err)
	config.BlockListFile = blFile
	_, err = newInMemoryDefender(config)
	assert.Error(t, err)
	config.SafeListFile = slFile
	d, err := newInMemoryDefender(config)
	assert.NoError(t, err)

	defender := d.(*memoryDefender)
	assert.True(t, defender.IsBanned("172.16.1.1"))
	assert.False(t, defender.IsBanned("172.16.1.10"))
	assert.False(t, defender.IsBanned("10.8.2.3"))
	assert.True(t, defender.IsBanned("10.8.0.3"))
	assert.False(t, defender.IsBanned("invalid ip"))
	assert.Equal(t, 0, defender.countBanned())
	assert.Equal(t, 0, defender.countHosts())
	hosts, err := defender.GetHosts()
	assert.NoError(t, err)
	assert.Len(t, hosts, 0)
	_, err = defender.GetHost("10.8.0.4")
	assert.Error(t, err)

	defender.AddEvent("172.16.1.4", HostEventLoginFailed)
	defender.AddEvent("192.168.8.4", HostEventUserNotFound)
	defender.AddEvent("172.16.1.3", HostEventLimitExceeded)
	assert.Equal(t, 0, defender.countHosts())

	testIP := "12.34.56.78"
	defender.AddEvent(testIP, HostEventLoginFailed)
	assert.Equal(t, 1, defender.countHosts())
	assert.Equal(t, 0, defender.countBanned())
	score, err := defender.GetScore(testIP)
	assert.NoError(t, err)
	assert.Equal(t, 1, score)
	hosts, err = defender.GetHosts()
	assert.NoError(t, err)
	if assert.Len(t, hosts, 1) {
		assert.Equal(t, 1, hosts[0].Score)
		assert.True(t, hosts[0].BanTime.IsZero())
		assert.Empty(t, hosts[0].GetBanTime())
	}
	host, err := defender.GetHost(testIP)
	assert.NoError(t, err)
	assert.Equal(t, 1, host.Score)
	assert.Empty(t, host.GetBanTime())
	banTime, err := defender.GetBanTime(testIP)
	assert.NoError(t, err)
	assert.Nil(t, banTime)
	defender.AddEvent(testIP, HostEventLimitExceeded)
	assert.Equal(t, 1, defender.countHosts())
	assert.Equal(t, 0, defender.countBanned())
	score, err = defender.GetScore(testIP)
	assert.NoError(t, err)
	assert.Equal(t, 4, score)
	hosts, err = defender.GetHosts()
	assert.NoError(t, err)
	if assert.Len(t, hosts, 1) {
		assert.Equal(t, 4, hosts[0].Score)
		assert.True(t, hosts[0].BanTime.IsZero())
		assert.Empty(t, hosts[0].GetBanTime())
	}
	defender.AddEvent(testIP, HostEventNoLoginTried)
	defender.AddEvent(testIP, HostEventNoLoginTried)
	assert.Equal(t, 0, defender.countHosts())
	assert.Equal(t, 1, defender.countBanned())
	score, err = defender.GetScore(testIP)
	assert.NoError(t, err)
	assert.Equal(t, 0, score)
	banTime, err = defender.GetBanTime(testIP)
	assert.NoError(t, err)
	assert.NotNil(t, banTime)
	hosts, err = defender.GetHosts()
	assert.NoError(t, err)
	if assert.Len(t, hosts, 1) {
		assert.Equal(t, 0, hosts[0].Score)
		assert.False(t, hosts[0].BanTime.IsZero())
		assert.NotEmpty(t, hosts[0].GetBanTime())
		assert.Equal(t, hex.EncodeToString([]byte(testIP)), hosts[0].GetID())
	}
	host, err = defender.GetHost(testIP)
	assert.NoError(t, err)
	assert.Equal(t, 0, host.Score)
	assert.NotEmpty(t, host.GetBanTime())

	// now test cleanup, testIP is already banned
	testIP1 := "12.34.56.79"
	testIP2 := "12.34.56.80"
	testIP3 := "12.34.56.81"

	defender.AddEvent(testIP1, HostEventNoLoginTried)
	defender.AddEvent(testIP2, HostEventNoLoginTried)
	assert.Equal(t, 2, defender.countHosts())
	time.Sleep(20 * time.Millisecond)
	defender.AddEvent(testIP3, HostEventNoLoginTried)
	// testIP1 and testIP2 should be removed
	assert.Equal(t, defender.config.EntriesSoftLimit, defender.countHosts())
	score, err = defender.GetScore(testIP1)
	assert.NoError(t, err)
	assert.Equal(t, 0, score)
	score, err = defender.GetScore(testIP2)
	assert.NoError(t, err)
	assert.Equal(t, 0, score)
	score, err = defender.GetScore(testIP3)
	assert.NoError(t, err)
	assert.Equal(t, 2, score)

	defender.AddEvent(testIP3, HostEventNoLoginTried)
	defender.AddEvent(testIP3, HostEventNoLoginTried)
	// IP3 is now banned
	banTime, err = defender.GetBanTime(testIP3)
	assert.NoError(t, err)
	assert.NotNil(t, banTime)
	assert.Equal(t, 0, defender.countHosts())

	time.Sleep(20 * time.Millisecond)
	for i := 0; i < 3; i++ {
		defender.AddEvent(testIP1, HostEventNoLoginTried)
	}
	assert.Equal(t, 0, defender.countHosts())
	assert.Equal(t, config.EntriesSoftLimit, defender.countBanned())
	banTime, err = defender.GetBanTime(testIP)
	assert.NoError(t, err)
	assert.Nil(t, banTime)
	banTime, err = defender.GetBanTime(testIP3)
	assert.NoError(t, err)
	assert.Nil(t, banTime)
	banTime, err = defender.GetBanTime(testIP1)
	assert.NoError(t, err)
	assert.NotNil(t, banTime)

	for i := 0; i < 3; i++ {
		defender.AddEvent(testIP, HostEventNoLoginTried)
		time.Sleep(10 * time.Millisecond)
		defender.AddEvent(testIP3, HostEventNoLoginTried)
	}
	assert.Equal(t, 0, defender.countHosts())
	assert.Equal(t, defender.config.EntriesSoftLimit, defender.countBanned())

	banTime, err = defender.GetBanTime(testIP3)
	assert.NoError(t, err)
	if assert.NotNil(t, banTime) {
		assert.True(t, defender.IsBanned(testIP3))
		// ban time should increase
		newBanTime, err := defender.GetBanTime(testIP3)
		assert.NoError(t, err)
		assert.True(t, newBanTime.After(*banTime))
	}

	assert.True(t, defender.DeleteHost(testIP3))
	assert.False(t, defender.DeleteHost(testIP3))

	err = os.Remove(slFile)
	assert.NoError(t, err)
	err = os.Remove(blFile)
	assert.NoError(t, err)
}

func TestExpiredHostBans(t *testing.T) {
	config := &DefenderConfig{
		Enabled:            true,
		BanTime:            10,
		BanTimeIncrement:   2,
		Threshold:          5,
		ScoreInvalid:       2,
		ScoreValid:         1,
		ScoreLimitExceeded: 3,
		ObservationTime:    15,
		EntriesSoftLimit:   1,
		EntriesHardLimit:   2,
	}

	d, err := newInMemoryDefender(config)
	assert.NoError(t, err)

	defender := d.(*memoryDefender)

	testIP := "1.2.3.4"
	defender.banned[testIP] = time.Now().Add(-24 * time.Hour)

	// the ban is expired, testIP should not be listed
	res, err := defender.GetHosts()
	assert.NoError(t, err)
	assert.Len(t, res, 0)

	assert.False(t, defender.IsBanned(testIP))
	_, err = defender.GetHost(testIP)
	assert.Error(t, err)
	_, ok := defender.banned[testIP]
	assert.True(t, ok)
	// now add an event for an expired banned ip, it should be removed
	defender.AddEvent(testIP, HostEventLoginFailed)
	assert.False(t, defender.IsBanned(testIP))
	entry, err := defender.GetHost(testIP)
	assert.NoError(t, err)
	assert.Equal(t, testIP, entry.IP)
	assert.Empty(t, entry.GetBanTime())
	assert.Equal(t, 1, entry.Score)

	res, err = defender.GetHosts()
	assert.NoError(t, err)
	if assert.Len(t, res, 1) {
		assert.Equal(t, testIP, res[0].IP)
		assert.Empty(t, res[0].GetBanTime())
		assert.Equal(t, 1, res[0].Score)
	}

	events := []hostEvent{
		{
			dateTime: time.Now().Add(-24 * time.Hour),
			score:    2,
		},
		{
			dateTime: time.Now().Add(-24 * time.Hour),
			score:    3,
		},
	}

	hs := hostScore{
		Events:     events,
		TotalScore: 5,
	}

	defender.hosts[testIP] = hs
	// the recorded scores are too old
	res, err = defender.GetHosts()
	assert.NoError(t, err)
	assert.Len(t, res, 0)
	_, err = defender.GetHost(testIP)
	assert.Error(t, err)
	_, ok = defender.hosts[testIP]
	assert.True(t, ok)
}

func TestLoadHostListFromFile(t *testing.T) {
	_, err := loadHostListFromFile(".")
	assert.Error(t, err)

	hostsFilePath := filepath.Join(os.TempDir(), "hostfile")
	content := make([]byte, 1048576*6)
	_, err = rand.Read(content)
	assert.NoError(t, err)

	err = os.WriteFile(hostsFilePath, content, os.ModePerm)
	assert.NoError(t, err)

	_, err = loadHostListFromFile(hostsFilePath)
	assert.Error(t, err)

	hl := HostListFile{
		IPAddresses:  []string{},
		CIDRNetworks: []string{},
	}

	asJSON, err := json.Marshal(hl)
	assert.NoError(t, err)
	err = os.WriteFile(hostsFilePath, asJSON, os.ModePerm)
	assert.NoError(t, err)

	hostList, err := loadHostListFromFile(hostsFilePath)
	assert.NoError(t, err)
	assert.Nil(t, hostList)

	hl.IPAddresses = append(hl.IPAddresses, "invalidip")
	asJSON, err = json.Marshal(hl)
	assert.NoError(t, err)
	err = os.WriteFile(hostsFilePath, asJSON, os.ModePerm)
	assert.NoError(t, err)

	hostList, err = loadHostListFromFile(hostsFilePath)
	assert.NoError(t, err)
	assert.Len(t, hostList.IPAddresses, 0)

	hl.IPAddresses = nil
	hl.CIDRNetworks = append(hl.CIDRNetworks, "invalid net")

	asJSON, err = json.Marshal(hl)
	assert.NoError(t, err)
	err = os.WriteFile(hostsFilePath, asJSON, os.ModePerm)
	assert.NoError(t, err)

	hostList, err = loadHostListFromFile(hostsFilePath)
	assert.NoError(t, err)
	assert.NotNil(t, hostList)
	assert.Len(t, hostList.IPAddresses, 0)
	assert.Equal(t, 0, hostList.Ranges.Len())

	if runtime.GOOS != "windows" {
		err = os.Chmod(hostsFilePath, 0111)
		assert.NoError(t, err)

		_, err = loadHostListFromFile(hostsFilePath)
		assert.Error(t, err)

		err = os.Chmod(hostsFilePath, 0644)
		assert.NoError(t, err)
	}

	err = os.WriteFile(hostsFilePath, []byte("non json content"), os.ModePerm)
	assert.NoError(t, err)
	_, err = loadHostListFromFile(hostsFilePath)
	assert.Error(t, err)

	err = os.Remove(hostsFilePath)
	assert.NoError(t, err)
}

func TestDefenderCleanup(t *testing.T) {
	d := memoryDefender{
		baseDefender: baseDefender{
			config: &DefenderConfig{
				ObservationTime:  1,
				EntriesSoftLimit: 2,
				EntriesHardLimit: 3,
			},
		},
		banned: make(map[string]time.Time),
		hosts:  make(map[string]hostScore),
	}

	d.banned["1.1.1.1"] = time.Now().Add(-24 * time.Hour)
	d.banned["1.1.1.2"] = time.Now().Add(-24 * time.Hour)
	d.banned["1.1.1.3"] = time.Now().Add(-24 * time.Hour)
	d.banned["1.1.1.4"] = time.Now().Add(-24 * time.Hour)

	d.cleanupBanned()
	assert.Equal(t, 0, d.countBanned())

	d.banned["2.2.2.2"] = time.Now().Add(2 * time.Minute)
	d.banned["2.2.2.3"] = time.Now().Add(1 * time.Minute)
	d.banned["2.2.2.4"] = time.Now().Add(3 * time.Minute)
	d.banned["2.2.2.5"] = time.Now().Add(4 * time.Minute)

	d.cleanupBanned()
	assert.Equal(t, d.config.EntriesSoftLimit, d.countBanned())
	banTime, err := d.GetBanTime("2.2.2.3")
	assert.NoError(t, err)
	assert.Nil(t, banTime)

	d.hosts["3.3.3.3"] = hostScore{
		TotalScore: 0,
		Events: []hostEvent{
			{
				dateTime: time.Now().Add(-5 * time.Minute),
				score:    1,
			},
			{
				dateTime: time.Now().Add(-3 * time.Minute),
				score:    1,
			},
			{
				dateTime: time.Now(),
				score:    1,
			},
		},
	}
	d.hosts["3.3.3.4"] = hostScore{
		TotalScore: 1,
		Events: []hostEvent{
			{
				dateTime: time.Now().Add(-3 * time.Minute),
				score:    1,
			},
		},
	}
	d.hosts["3.3.3.5"] = hostScore{
		TotalScore: 1,
		Events: []hostEvent{
			{
				dateTime: time.Now().Add(-2 * time.Minute),
				score:    1,
			},
		},
	}
	d.hosts["3.3.3.6"] = hostScore{
		TotalScore: 1,
		Events: []hostEvent{
			{
				dateTime: time.Now().Add(-1 * time.Minute),
				score:    1,
			},
		},
	}

	score, err := d.GetScore("3.3.3.3")
	assert.NoError(t, err)
	assert.Equal(t, 1, score)

	d.cleanupHosts()
	assert.Equal(t, d.config.EntriesSoftLimit, d.countHosts())
	score, err = d.GetScore("3.3.3.4")
	assert.NoError(t, err)
	assert.Equal(t, 0, score)
}

func TestDefenderConfig(t *testing.T) {
	c := DefenderConfig{}
	err := c.validate()
	require.NoError(t, err)

	c.Enabled = true
	c.Threshold = 10
	c.ScoreInvalid = 10
	err = c.validate()
	require.Error(t, err)

	c.ScoreInvalid = 2
	c.ScoreLimitExceeded = 10
	err = c.validate()
	require.Error(t, err)

	c.ScoreLimitExceeded = 2
	c.ScoreValid = 10
	err = c.validate()
	require.Error(t, err)

	c.ScoreValid = 1
	c.BanTime = 0
	err = c.validate()
	require.Error(t, err)

	c.BanTime = 30
	c.BanTimeIncrement = 0
	err = c.validate()
	require.Error(t, err)

	c.BanTimeIncrement = 50
	c.ObservationTime = 0
	err = c.validate()
	require.Error(t, err)

	c.ObservationTime = 30
	err = c.validate()
	require.Error(t, err)

	c.EntriesSoftLimit = 10
	err = c.validate()
	require.Error(t, err)

	c.EntriesHardLimit = 10
	err = c.validate()
	require.Error(t, err)

	c.EntriesHardLimit = 20
	err = c.validate()
	require.NoError(t, err)
}

func BenchmarkDefenderBannedSearch(b *testing.B) {
	d := getDefenderForBench()

	ip, ipnet, err := net.ParseCIDR("10.8.0.0/12") // 1048574 ip addresses
	if err != nil {
		panic(err)
	}

	for ip := ip.Mask(ipnet.Mask); ipnet.Contains(ip); inc(ip) {
		d.banned[ip.String()] = time.Now().Add(10 * time.Minute)
	}

	b.ResetTimer()

	for i := 0; i < b.N; i++ {
		d.IsBanned("192.168.1.1")
	}
}

func BenchmarkCleanup(b *testing.B) {
	d := getDefenderForBench()

	ip, ipnet, err := net.ParseCIDR("192.168.4.0/24")
	if err != nil {
		panic(err)
	}

	b.ResetTimer()

	for i := 0; i < b.N; i++ {
		for ip := ip.Mask(ipnet.Mask); ipnet.Contains(ip); inc(ip) {
			d.AddEvent(ip.String(), HostEventLoginFailed)
			if d.countHosts() > d.config.EntriesHardLimit {
				panic("too many hosts")
			}
			if d.countBanned() > d.config.EntriesSoftLimit {
				panic("too many ip banned")
			}
		}
	}
}

func BenchmarkDefenderBannedSearchWithBlockList(b *testing.B) {
	d := getDefenderForBench()

	d.blockList = &HostList{
		IPAddresses: make(map[string]bool),
		Ranges:      cidranger.NewPCTrieRanger(),
	}

	ip, ipnet, err := net.ParseCIDR("129.8.0.0/12") // 1048574 ip addresses
	if err != nil {
		panic(err)
	}

	for ip := ip.Mask(ipnet.Mask); ipnet.Contains(ip); inc(ip) {
		d.banned[ip.String()] = time.Now().Add(10 * time.Minute)
		d.blockList.IPAddresses[ip.String()] = true
	}

	for i := 0; i < 255; i++ {
		cidr := fmt.Sprintf("10.8.%v.1/24", i)
		_, network, _ := net.ParseCIDR(cidr)
		if err := d.blockList.Ranges.Insert(cidranger.NewBasicRangerEntry(*network)); err != nil {
			panic(err)
		}
	}

	b.ResetTimer()

	for i := 0; i < b.N; i++ {
		d.IsBanned("192.168.1.1")
	}
}

func BenchmarkHostListSearch(b *testing.B) {
	hostlist := &HostList{
		IPAddresses: make(map[string]bool),
		Ranges:      cidranger.NewPCTrieRanger(),
	}

	ip, ipnet, _ := net.ParseCIDR("172.16.0.0/16")

	for ip := ip.Mask(ipnet.Mask); ipnet.Contains(ip); inc(ip) {
		hostlist.IPAddresses[ip.String()] = true
	}

	for i := 0; i < 255; i++ {
		cidr := fmt.Sprintf("10.8.%v.1/24", i)
		_, network, _ := net.ParseCIDR(cidr)
		if err := hostlist.Ranges.Insert(cidranger.NewBasicRangerEntry(*network)); err != nil {
			panic(err)
		}
	}

	b.ResetTimer()

	for i := 0; i < b.N; i++ {
		if hostlist.isListed("192.167.1.2") {
			panic("should not be listed")
		}
	}
}

func BenchmarkCIDRanger(b *testing.B) {
	ranger := cidranger.NewPCTrieRanger()
	for i := 0; i < 255; i++ {
		cidr := fmt.Sprintf("192.168.%v.1/24", i)
		_, network, _ := net.ParseCIDR(cidr)
		if err := ranger.Insert(cidranger.NewBasicRangerEntry(*network)); err != nil {
			panic(err)
		}
	}

	ipToMatch := net.ParseIP("192.167.1.2")

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if _, err := ranger.Contains(ipToMatch); err != nil {
			panic(err)
		}
	}
}

func BenchmarkNetContains(b *testing.B) {
	var nets []*net.IPNet
	for i := 0; i < 255; i++ {
		cidr := fmt.Sprintf("192.168.%v.1/24", i)
		_, network, _ := net.ParseCIDR(cidr)
		nets = append(nets, network)
	}

	ipToMatch := net.ParseIP("192.167.1.1")

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		for _, n := range nets {
			n.Contains(ipToMatch)
		}
	}
}

func getDefenderForBench() *memoryDefender {
	config := &DefenderConfig{
		Enabled:          true,
		BanTime:          30,
		BanTimeIncrement: 50,
		Threshold:        10,
		ScoreInvalid:     2,
		ScoreValid:       2,
		ObservationTime:  30,
		EntriesSoftLimit: 50,
		EntriesHardLimit: 100,
	}
	return &memoryDefender{
		baseDefender: baseDefender{
			config: config,
		},
		hosts:  make(map[string]hostScore),
		banned: make(map[string]time.Time),
	}
}

func inc(ip net.IP) {
	for j := len(ip) - 1; j >= 0; j-- {
		ip[j]++
		if ip[j] > 0 {
			break
		}
	}
}

common/defenderdb.go (new file, 157 lines)
@@ -0,0 +1,157 @@
package common

import (
	"time"

	"github.com/drakkan/sftpgo/v2/dataprovider"
	"github.com/drakkan/sftpgo/v2/logger"
	"github.com/drakkan/sftpgo/v2/util"
)

type dbDefender struct {
	baseDefender
	lastCleanup time.Time
}

func newDBDefender(config *DefenderConfig) (Defender, error) {
	err := config.validate()
	if err != nil {
		return nil, err
	}
	defender := &dbDefender{
		baseDefender: baseDefender{
			config: config,
		},
		lastCleanup: time.Time{},
	}

	if err := defender.Reload(); err != nil {
		return nil, err
	}

	return defender, nil
}

// GetHosts returns hosts that are banned or for which some violations have been detected
func (d *dbDefender) GetHosts() ([]*dataprovider.DefenderEntry, error) {
	return dataprovider.GetDefenderHosts(d.getStartObservationTime(), d.config.EntriesHardLimit)
}

// GetHost returns a defender host by ip, if any
func (d *dbDefender) GetHost(ip string) (*dataprovider.DefenderEntry, error) {
	return dataprovider.GetDefenderHostByIP(ip, d.getStartObservationTime())
}

// IsBanned returns true if the specified IP is banned
// and increases the ban time if the IP is found.
// This method must be called as soon as the client connects
func (d *dbDefender) IsBanned(ip string) bool {
	d.RLock()
	if d.baseDefender.isBanned(ip) {
		d.RUnlock()
		return true
	}
	d.RUnlock()

	_, err := dataprovider.IsDefenderHostBanned(ip)
	if err != nil {
		// not found or another error, we allow this host
		return false
	}
	increment := d.config.BanTime * d.config.BanTimeIncrement / 100
	if increment == 0 {
		increment++
	}
	dataprovider.UpdateDefenderBanTime(ip, increment) //nolint:errcheck
	return true
}
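
// Worked example: with ban_time = 30 (minutes) and ban_time_increment = 50
// (percent), each connection attempt from a banned IP extends the ban by
// 30 * 50 / 100 = 15 (presumably minutes, matching BanTime's unit); the
// increment is never less than 1 thanks to the check above.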

// DeleteHost removes the specified IP from the defender lists
func (d *dbDefender) DeleteHost(ip string) bool {
	if _, err := d.GetHost(ip); err != nil {
		return false
	}
	return dataprovider.DeleteDefenderHost(ip) == nil
}

// AddEvent adds an event for the given IP.
// This method must be called for clients not yet banned
func (d *dbDefender) AddEvent(ip string, event HostEvent) {
	d.RLock()
	if d.safeList != nil && d.safeList.isListed(ip) {
		d.RUnlock()
		return
	}
	d.RUnlock()

	score := d.baseDefender.getScore(event)

	host, err := dataprovider.AddDefenderEvent(ip, score, d.getStartObservationTime())
	if err != nil {
		return
	}
	if host.Score > d.config.Threshold {
		banTime := time.Now().Add(time.Duration(d.config.BanTime) * time.Minute)
		err = dataprovider.SetDefenderBanTime(ip, util.GetTimeAsMsSinceEpoch(banTime))
	}

	if err == nil {
		d.cleanup()
	}
}

// GetBanTime returns the ban time for the given IP or nil if the IP is not banned
func (d *dbDefender) GetBanTime(ip string) (*time.Time, error) {
	host, err := d.GetHost(ip)
	if err != nil {
		return nil, err
	}
	if host.BanTime.IsZero() {
		return nil, nil
	}
	return &host.BanTime, nil
}

// GetScore returns the score for the given IP
func (d *dbDefender) GetScore(ip string) (int, error) {
	host, err := d.GetHost(ip)
	if err != nil {
		return 0, err
	}
	return host.Score, nil
}

func (d *dbDefender) cleanup() {
	lastCleanup := d.getLastCleanup()
	if lastCleanup.IsZero() || lastCleanup.Add(time.Duration(d.config.ObservationTime)*time.Minute*3).Before(time.Now()) {
		// FIXME: this could be racy in rare cases, but it is better than holding the lock
		// for the whole cleanup or always acquiring a read/write lock.
		// Concurrent cleanups could happen anyway from multiple SFTPGo instances and should not cause any issues
		d.setLastCleanup(time.Now())
		expireTime := time.Now().Add(-time.Duration(d.config.ObservationTime+1) * time.Minute)
		logger.Debug(logSender, "", "cleanup defender hosts before %v, last cleanup %v", expireTime, lastCleanup)
		if err := dataprovider.CleanupDefender(util.GetTimeAsMsSinceEpoch(expireTime)); err != nil {
			logger.Error(logSender, "", "defender cleanup error, reset last cleanup to %v", lastCleanup)
			d.setLastCleanup(lastCleanup)
		}
	}
}
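
// Example of the cleanup cadence implied above: with observation_time = 15,
// a cleanup runs at most once every 15 * 3 = 45 minutes and removes defender
// events older than observation_time + 1 = 16 minutes.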

func (d *dbDefender) getStartObservationTime() int64 {
	t := time.Now().Add(-time.Duration(d.config.ObservationTime) * time.Minute)
	return util.GetTimeAsMsSinceEpoch(t)
}

func (d *dbDefender) getLastCleanup() time.Time {
	d.RLock()
	defer d.RUnlock()

	return d.lastCleanup
}

func (d *dbDefender) setLastCleanup(when time.Time) {
	d.Lock()
	defer d.Unlock()

	d.lastCleanup = when
}
common/defenderdb_test.go (new file, 297 lines)
@@ -0,0 +1,297 @@
|
||||
package common
|
||||
|
||||
import (
|
||||
"encoding/hex"
|
||||
"encoding/json"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
|
||||
"github.com/drakkan/sftpgo/v2/dataprovider"
|
||||
"github.com/drakkan/sftpgo/v2/util"
|
||||
)
|
||||
|
||||
func TestBasicDbDefender(t *testing.T) {
|
||||
if !isDbDefenderSupported() {
|
||||
t.Skip("this test is not supported with the current database provider")
|
||||
}
|
||||
config := &DefenderConfig{
|
||||
Enabled: true,
|
||||
BanTime: 10,
|
||||
BanTimeIncrement: 2,
|
||||
Threshold: 5,
|
||||
ScoreInvalid: 2,
|
||||
ScoreValid: 1,
|
||||
ScoreLimitExceeded: 3,
|
||||
ObservationTime: 15,
|
||||
EntriesSoftLimit: 1,
|
||||
EntriesHardLimit: 10,
|
||||
SafeListFile: "slFile",
|
||||
BlockListFile: "blFile",
|
||||
}
|
||||
_, err := newDBDefender(config)
|
||||
assert.Error(t, err)
|
||||
|
||||
bl := HostListFile{
|
||||
IPAddresses: []string{"172.16.1.1", "172.16.1.2"},
|
||||
CIDRNetworks: []string{"10.8.0.0/24"},
|
||||
}
|
||||
sl := HostListFile{
|
||||
IPAddresses: []string{"172.16.1.3", "172.16.1.4"},
|
||||
CIDRNetworks: []string{"192.168.8.0/24"},
|
||||
}
|
||||
blFile := filepath.Join(os.TempDir(), "bl.json")
|
||||
slFile := filepath.Join(os.TempDir(), "sl.json")
|
||||
|
||||
data, err := json.Marshal(bl)
|
||||
assert.NoError(t, err)
|
||||
err = os.WriteFile(blFile, data, os.ModePerm)
|
||||
assert.NoError(t, err)
|
||||
|
||||
data, err = json.Marshal(sl)
|
||||
assert.NoError(t, err)
|
||||
err = os.WriteFile(slFile, data, os.ModePerm)
|
||||
assert.NoError(t, err)
|
||||
|
||||
config.BlockListFile = blFile
|
||||
_, err = newDBDefender(config)
|
||||
assert.Error(t, err)
|
||||
config.SafeListFile = slFile
|
	d, err := newDBDefender(config)
	assert.NoError(t, err)
	defender := d.(*dbDefender)
	assert.True(t, defender.IsBanned("172.16.1.1"))
	assert.False(t, defender.IsBanned("172.16.1.10"))
	assert.False(t, defender.IsBanned("10.8.1.3"))
	assert.True(t, defender.IsBanned("10.8.0.4"))
	assert.False(t, defender.IsBanned("invalid ip"))
	hosts, err := defender.GetHosts()
	assert.NoError(t, err)
	assert.Len(t, hosts, 0)
	_, err = defender.GetHost("10.8.0.3")
	assert.Error(t, err)

	defender.AddEvent("172.16.1.4", HostEventLoginFailed)
	defender.AddEvent("192.168.8.4", HostEventUserNotFound)
	defender.AddEvent("172.16.1.3", HostEventLimitExceeded)
	hosts, err = defender.GetHosts()
	assert.NoError(t, err)
	assert.Len(t, hosts, 0)
	assert.True(t, defender.getLastCleanup().IsZero())

	testIP := "123.45.67.89"
	defender.AddEvent(testIP, HostEventLoginFailed)
	lastCleanup := defender.getLastCleanup()
	assert.False(t, lastCleanup.IsZero())
	score, err := defender.GetScore(testIP)
	assert.NoError(t, err)
	assert.Equal(t, 1, score)
	hosts, err = defender.GetHosts()
	assert.NoError(t, err)
	if assert.Len(t, hosts, 1) {
		assert.Equal(t, 1, hosts[0].Score)
		assert.True(t, hosts[0].BanTime.IsZero())
		assert.Empty(t, hosts[0].GetBanTime())
	}
	host, err := defender.GetHost(testIP)
	assert.NoError(t, err)
	assert.Equal(t, 1, host.Score)
	assert.Empty(t, host.GetBanTime())
	banTime, err := defender.GetBanTime(testIP)
	assert.NoError(t, err)
	assert.Nil(t, banTime)
	defender.AddEvent(testIP, HostEventLimitExceeded)
	score, err = defender.GetScore(testIP)
	assert.NoError(t, err)
	assert.Equal(t, 4, score)
	hosts, err = defender.GetHosts()
	assert.NoError(t, err)
	if assert.Len(t, hosts, 1) {
		assert.Equal(t, 4, hosts[0].Score)
		assert.True(t, hosts[0].BanTime.IsZero())
		assert.Empty(t, hosts[0].GetBanTime())
	}
	defender.AddEvent(testIP, HostEventNoLoginTried)
	defender.AddEvent(testIP, HostEventNoLoginTried)
	score, err = defender.GetScore(testIP)
	assert.NoError(t, err)
	assert.Equal(t, 0, score)
	banTime, err = defender.GetBanTime(testIP)
	assert.NoError(t, err)
	assert.NotNil(t, banTime)
	hosts, err = defender.GetHosts()
	assert.NoError(t, err)
	if assert.Len(t, hosts, 1) {
		assert.Equal(t, 0, hosts[0].Score)
		assert.False(t, hosts[0].BanTime.IsZero())
		assert.NotEmpty(t, hosts[0].GetBanTime())
		assert.Equal(t, hex.EncodeToString([]byte(testIP)), hosts[0].GetID())
	}
	host, err = defender.GetHost(testIP)
	assert.NoError(t, err)
	assert.Equal(t, 0, host.Score)
	assert.NotEmpty(t, host.GetBanTime())
	// the ban time should increase
	assert.True(t, defender.IsBanned(testIP))
	newBanTime, err := defender.GetBanTime(testIP)
	assert.NoError(t, err)
	assert.True(t, newBanTime.After(*banTime))

	assert.True(t, defender.DeleteHost(testIP))
	assert.False(t, defender.DeleteHost(testIP))
	// test cleanup
	testIP1 := "123.45.67.90"
	testIP2 := "123.45.67.91"
	testIP3 := "123.45.67.92"
	for i := 0; i < 3; i++ {
		defender.AddEvent(testIP, HostEventNoLoginTried)
		defender.AddEvent(testIP1, HostEventNoLoginTried)
		defender.AddEvent(testIP2, HostEventNoLoginTried)
	}
	hosts, err = defender.GetHosts()
	assert.NoError(t, err)
	assert.Len(t, hosts, 3)
	for _, host := range hosts {
		assert.Equal(t, 0, host.Score)
		assert.False(t, host.BanTime.IsZero())
		assert.NotEmpty(t, host.GetBanTime())
	}
	defender.AddEvent(testIP3, HostEventLoginFailed)
	hosts, err = defender.GetHosts()
	assert.NoError(t, err)
	assert.Len(t, hosts, 4)
	// now set a ban time in the past, so the host will be cleaned up
	for _, ip := range []string{testIP1, testIP2} {
		err = dataprovider.SetDefenderBanTime(ip, util.GetTimeAsMsSinceEpoch(time.Now().Add(-1*time.Minute)))
		assert.NoError(t, err)
	}
	hosts, err = defender.GetHosts()
	assert.NoError(t, err)
	assert.Len(t, hosts, 4)
	for _, host := range hosts {
		switch host.IP {
		case testIP:
			assert.Equal(t, 0, host.Score)
			assert.False(t, host.BanTime.IsZero())
			assert.NotEmpty(t, host.GetBanTime())
		case testIP3:
			assert.Equal(t, 1, host.Score)
			assert.True(t, host.BanTime.IsZero())
			assert.Empty(t, host.GetBanTime())
		default:
			assert.Equal(t, 6, host.Score)
			assert.True(t, host.BanTime.IsZero())
			assert.Empty(t, host.GetBanTime())
		}
	}
	host, err = defender.GetHost(testIP)
	assert.NoError(t, err)
	assert.Equal(t, 0, host.Score)
	assert.False(t, host.BanTime.IsZero())
	assert.NotEmpty(t, host.GetBanTime())
	host, err = defender.GetHost(testIP3)
	assert.NoError(t, err)
	assert.Equal(t, 1, host.Score)
	assert.True(t, host.BanTime.IsZero())
	assert.Empty(t, host.GetBanTime())
	// set a negative observation time so the "from" field in the queries will be in
	// the future; we should still get the banned hosts
	defender.config.ObservationTime = -2
	assert.Greater(t, defender.getStartObservationTime(), time.Now().UnixMilli())
	hosts, err = defender.GetHosts()
	assert.NoError(t, err)
	if assert.Len(t, hosts, 1) {
		assert.Equal(t, testIP, hosts[0].IP)
		assert.Equal(t, 0, hosts[0].Score)
		assert.False(t, hosts[0].BanTime.IsZero())
		assert.NotEmpty(t, hosts[0].GetBanTime())
	}
	_, err = defender.GetHost(testIP)
	assert.NoError(t, err)
	// cleanup db
	err = dataprovider.CleanupDefender(util.GetTimeAsMsSinceEpoch(time.Now().Add(10 * time.Minute)))
	assert.NoError(t, err)
	// the banned host must still be there
	hosts, err = defender.GetHosts()
	assert.NoError(t, err)
	if assert.Len(t, hosts, 1) {
		assert.Equal(t, testIP, hosts[0].IP)
		assert.Equal(t, 0, hosts[0].Score)
		assert.False(t, hosts[0].BanTime.IsZero())
		assert.NotEmpty(t, hosts[0].GetBanTime())
	}
	_, err = defender.GetHost(testIP)
	assert.NoError(t, err)
	err = dataprovider.SetDefenderBanTime(testIP, util.GetTimeAsMsSinceEpoch(time.Now().Add(-1*time.Minute)))
	assert.NoError(t, err)
	err = dataprovider.CleanupDefender(util.GetTimeAsMsSinceEpoch(time.Now().Add(10 * time.Minute)))
	assert.NoError(t, err)
	hosts, err = defender.GetHosts()
	assert.NoError(t, err)
	assert.Len(t, hosts, 0)

	err = os.Remove(slFile)
	assert.NoError(t, err)
	err = os.Remove(blFile)
	assert.NoError(t, err)
}

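The score progression asserted above follows from the event weights in the DefenderConfig used by these tests (the same weights appear in TestDbDefenderCleanup below): a failed login for an existing user adds ScoreValid, a connection that never attempts a login adds ScoreInvalid, and a rate-limit violation adds ScoreLimitExceeded; once the running total reaches Threshold the host is banned and its score resets to zero. A minimal sketch of that bookkeeping, not sftpgo API, assuming ScoreValid=1, ScoreInvalid=2, ScoreLimitExceeded=3 and Threshold=5:

package main

import "fmt"

func main() {
	const scoreValid, scoreInvalid, scoreLimitExceeded, threshold = 1, 2, 3, 5

	score := scoreValid // HostEventLoginFailed
	fmt.Println(score)  // 1
	score += scoreLimitExceeded // HostEventLimitExceeded
	fmt.Println(score) // 4, still below the threshold
	score += scoreInvalid // first HostEventNoLoginTried
	score += scoreInvalid // second HostEventNoLoginTried
	fmt.Println(score >= threshold) // true: the host is banned, so GetScore reports 0
}
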
func TestDbDefenderCleanup(t *testing.T) {
	if !isDbDefenderSupported() {
		t.Skip("this test is not supported with the current database provider")
	}
	config := &DefenderConfig{
		Enabled:            true,
		BanTime:            10,
		BanTimeIncrement:   2,
		Threshold:          5,
		ScoreInvalid:       2,
		ScoreValid:         1,
		ScoreLimitExceeded: 3,
		ObservationTime:    15,
		EntriesSoftLimit:   1,
		EntriesHardLimit:   10,
	}
	d, err := newDBDefender(config)
	assert.NoError(t, err)
	defender := d.(*dbDefender)
	lastCleanup := defender.getLastCleanup()
	assert.True(t, lastCleanup.IsZero())
	defender.cleanup()
	lastCleanup = defender.getLastCleanup()
	assert.False(t, lastCleanup.IsZero())
	defender.cleanup()
	assert.Equal(t, lastCleanup, defender.getLastCleanup())
	defender.setLastCleanup(time.Now().Add(-time.Duration(config.ObservationTime) * time.Minute * 4))
	time.Sleep(20 * time.Millisecond)
	defender.cleanup()
	assert.True(t, lastCleanup.Before(defender.getLastCleanup()))

	providerConf := dataprovider.GetProviderConfig()
	err = dataprovider.Close()
	assert.NoError(t, err)

	lastCleanup = time.Now().Add(-time.Duration(config.ObservationTime) * time.Minute * 4)
	defender.setLastCleanup(lastCleanup)
	defender.cleanup()
	// the cleanup will fail and so the last cleanup should be reset to the previous value
	assert.Equal(t, lastCleanup, defender.getLastCleanup())

	err = dataprovider.Initialize(providerConf, configDir, true)
	assert.NoError(t, err)
}

func isDbDefenderSupported() bool {
	// SQLite shares the implementation with the other SQL-based providers, but it
	// makes no sense to use it outside test cases
	switch dataprovider.GetProviderStatus().Driver {
	case dataprovider.MySQLDataProviderName, dataprovider.PGSQLDataProviderName,
		dataprovider.CockroachDataProviderName, dataprovider.SQLiteDataProviderName:
		return true
	default:
		return false
	}
}

common/defendermem.go (new file, 326 lines)
@@ -0,0 +1,326 @@
package common

import (
	"sort"
	"time"

	"github.com/drakkan/sftpgo/v2/dataprovider"
	"github.com/drakkan/sftpgo/v2/util"
)

type memoryDefender struct {
	baseDefender
	// IP addresses of the clients trying to connect are stored inside hosts;
	// they are added to banned once the threshold is reached.
	// A violation from a banned host will increase the ban time
	// based on the configured BanTimeIncrement
	hosts  map[string]hostScore // the key is the host IP
	banned map[string]time.Time // the key is the host IP
}

func newInMemoryDefender(config *DefenderConfig) (Defender, error) {
	err := config.validate()
	if err != nil {
		return nil, err
	}
	defender := &memoryDefender{
		baseDefender: baseDefender{
			config: config,
		},
		hosts:  make(map[string]hostScore),
		banned: make(map[string]time.Time),
	}

	if err := defender.Reload(); err != nil {
		return nil, err
	}

	return defender, nil
}

// GetHosts returns hosts that are banned or for which some violations have been detected
func (d *memoryDefender) GetHosts() ([]*dataprovider.DefenderEntry, error) {
	d.RLock()
	defer d.RUnlock()

	var result []*dataprovider.DefenderEntry
	for k, v := range d.banned {
		if v.After(time.Now()) {
			result = append(result, &dataprovider.DefenderEntry{
				IP:      k,
				BanTime: v,
			})
		}
	}
	for k, v := range d.hosts {
		score := 0
		for _, event := range v.Events {
			if event.dateTime.Add(time.Duration(d.config.ObservationTime) * time.Minute).After(time.Now()) {
				score += event.score
			}
		}
		if score > 0 {
			result = append(result, &dataprovider.DefenderEntry{
				IP:    k,
				Score: score,
			})
		}
	}

	return result, nil
}

// GetHost returns a defender host by IP, if any
func (d *memoryDefender) GetHost(ip string) (*dataprovider.DefenderEntry, error) {
	d.RLock()
	defer d.RUnlock()

	if banTime, ok := d.banned[ip]; ok {
		if banTime.After(time.Now()) {
			return &dataprovider.DefenderEntry{
				IP:      ip,
				BanTime: banTime,
			}, nil
		}
	}

	if hs, ok := d.hosts[ip]; ok {
		score := 0
		for _, event := range hs.Events {
			if event.dateTime.Add(time.Duration(d.config.ObservationTime) * time.Minute).After(time.Now()) {
				score += event.score
			}
		}
		if score > 0 {
			return &dataprovider.DefenderEntry{
				IP:    ip,
				Score: score,
			}, nil
		}
	}

	return nil, util.NewRecordNotFoundError("host not found")
}

// IsBanned returns true if the specified IP is banned
// and increases the ban time if the IP is found.
// This method must be called as soon as the client connects
func (d *memoryDefender) IsBanned(ip string) bool {
	d.RLock()

	if banTime, ok := d.banned[ip]; ok {
		if banTime.After(time.Now()) {
			increment := d.config.BanTime * d.config.BanTimeIncrement / 100
			if increment == 0 {
				increment++
			}

			d.RUnlock()

			// we could save an earlier ban time if there are concurrent updates,
			// but this should not make much difference. We prefer to hold a read
			// lock for as long as possible for performance reasons: this method
			// is called each time a new client connects and it must be as fast
			// as possible
			d.Lock()
			d.banned[ip] = banTime.Add(time.Duration(increment) * time.Minute)
			d.Unlock()

			return true
		}
	}

	defer d.RUnlock()

	return d.baseDefender.isBanned(ip)
}

// DeleteHost removes the specified IP from the defender lists
func (d *memoryDefender) DeleteHost(ip string) bool {
	d.Lock()
	defer d.Unlock()

	if _, ok := d.banned[ip]; ok {
		delete(d.banned, ip)
		return true
	}

	if _, ok := d.hosts[ip]; ok {
		delete(d.hosts, ip)
		return true
	}

	return false
}

// AddEvent adds an event for the given IP.
// This method must be called for clients not yet banned
func (d *memoryDefender) AddEvent(ip string, event HostEvent) {
	d.Lock()
	defer d.Unlock()

	if d.safeList != nil && d.safeList.isListed(ip) {
		return
	}

	// ignore events for already banned hosts
	if v, ok := d.banned[ip]; ok {
		if v.After(time.Now()) {
			return
		}
		delete(d.banned, ip)
	}

	score := d.baseDefender.getScore(event)

	ev := hostEvent{
		dateTime: time.Now(),
		score:    score,
	}

	if hs, ok := d.hosts[ip]; ok {
		hs.Events = append(hs.Events, ev)
		hs.TotalScore = 0

		idx := 0
		for _, event := range hs.Events {
			if event.dateTime.Add(time.Duration(d.config.ObservationTime) * time.Minute).After(time.Now()) {
				hs.Events[idx] = event
				hs.TotalScore += event.score
				idx++
			}
		}

		hs.Events = hs.Events[:idx]
		if hs.TotalScore >= d.config.Threshold {
			d.banned[ip] = time.Now().Add(time.Duration(d.config.BanTime) * time.Minute)
			delete(d.hosts, ip)
			d.cleanupBanned()
		} else {
			d.hosts[ip] = hs
		}
	} else {
		d.hosts[ip] = hostScore{
			TotalScore: ev.score,
			Events:     []hostEvent{ev},
		}
		d.cleanupHosts()
	}
}

func (d *memoryDefender) countBanned() int {
	d.RLock()
	defer d.RUnlock()

	return len(d.banned)
}

func (d *memoryDefender) countHosts() int {
	d.RLock()
	defer d.RUnlock()

	return len(d.hosts)
}

// GetBanTime returns the ban time for the given IP or nil if the IP is not banned
func (d *memoryDefender) GetBanTime(ip string) (*time.Time, error) {
	d.RLock()
	defer d.RUnlock()

	if banTime, ok := d.banned[ip]; ok {
		return &banTime, nil
	}

	return nil, nil
}

// GetScore returns the score for the given IP
func (d *memoryDefender) GetScore(ip string) (int, error) {
	d.RLock()
	defer d.RUnlock()

	score := 0

	if hs, ok := d.hosts[ip]; ok {
		for _, event := range hs.Events {
			if event.dateTime.Add(time.Duration(d.config.ObservationTime) * time.Minute).After(time.Now()) {
				score += event.score
			}
		}
	}

	return score, nil
}

func (d *memoryDefender) cleanupBanned() {
	if len(d.banned) > d.config.EntriesHardLimit {
		kvList := make(kvList, 0, len(d.banned))

		for k, v := range d.banned {
			if v.Before(time.Now()) {
				delete(d.banned, k)
			}

			kvList = append(kvList, kv{
				Key:   k,
				Value: v.UnixNano(),
			})
		}

		// we removed the expired IP addresses, if any, above: this could be enough
		numToRemove := len(d.banned) - d.config.EntriesSoftLimit

		if numToRemove <= 0 {
			return
		}

		sort.Sort(kvList)

		for idx, kv := range kvList {
			if idx >= numToRemove {
				break
			}

			delete(d.banned, kv.Key)
		}
	}
}

func (d *memoryDefender) cleanupHosts() {
	if len(d.hosts) > d.config.EntriesHardLimit {
		kvList := make(kvList, 0, len(d.hosts))

		for k, v := range d.hosts {
			value := int64(0)
			if len(v.Events) > 0 {
				value = v.Events[len(v.Events)-1].dateTime.UnixNano()
			}
			kvList = append(kvList, kv{
				Key:   k,
				Value: value,
			})
		}

		sort.Sort(kvList)

		numToRemove := len(d.hosts) - d.config.EntriesSoftLimit

		for idx, kv := range kvList {
			if idx >= numToRemove {
				break
			}

			delete(d.hosts, kv.Key)
		}
	}
}

type kv struct {
	Key   string
	Value int64
}

type kvList []kv

func (p kvList) Len() int           { return len(p) }
func (p kvList) Less(i, j int) bool { return p[i].Value < p[j].Value }
func (p kvList) Swap(i, j int)      { p[i], p[j] = p[j], p[i] }

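Together, IsBanned and AddEvent above give the whole in-memory flow: check the ban list as soon as a client connects, then record violations while it is active. A minimal usage sketch, assuming a DefenderConfig like the ones in the tests above (the IP address, ban time and limits are illustrative):

	config := &DefenderConfig{
		Enabled:            true,
		BanTime:            30,
		BanTimeIncrement:   50,
		Threshold:          5,
		ScoreInvalid:       2,
		ScoreValid:         1,
		ScoreLimitExceeded: 3,
		ObservationTime:    15,
		EntriesSoftLimit:   100,
		EntriesHardLimit:   150,
	}
	defender, err := newInMemoryDefender(config)
	if err != nil {
		return err // the config did not validate or the lists could not be loaded
	}
	if defender.IsBanned("203.0.113.7") { // as soon as the client connects
		// drop the connection
	}
	defender.AddEvent("203.0.113.7", HostEventLoginFailed) // on each violation
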
common/httpauth.go (new file, 134 lines)
@@ -0,0 +1,134 @@
package common

import (
	"encoding/csv"
	"os"
	"strings"
	"sync"

	"github.com/GehirnInc/crypt/apr1_crypt"
	"github.com/GehirnInc/crypt/md5_crypt"
	"golang.org/x/crypto/bcrypt"

	"github.com/drakkan/sftpgo/v2/logger"
	"github.com/drakkan/sftpgo/v2/util"
)

const (
	// HTTPAuthenticationHeader defines the HTTP authentication challenge header
	HTTPAuthenticationHeader = "WWW-Authenticate"
	md5CryptPwdPrefix        = "$1$"
	apr1CryptPwdPrefix       = "$apr1$"
)

var (
	bcryptPwdPrefixes = []string{"$2a$", "$2$", "$2x$", "$2y$", "$2b$"}
)

// HTTPAuthProvider defines the interface for HTTP auth providers
type HTTPAuthProvider interface {
	ValidateCredentials(username, password string) bool
	IsEnabled() bool
}

type basicAuthProvider struct {
	Path string
	sync.RWMutex
	Info  os.FileInfo
	Users map[string]string
}

// NewBasicAuthProvider returns an HTTPAuthProvider implementing Basic Auth
func NewBasicAuthProvider(authUserFile string) (HTTPAuthProvider, error) {
	basicAuthProvider := basicAuthProvider{
		Path:  authUserFile,
		Info:  nil,
		Users: make(map[string]string),
	}
	return &basicAuthProvider, basicAuthProvider.loadUsers()
}

func (p *basicAuthProvider) IsEnabled() bool {
	return p.Path != ""
}

func (p *basicAuthProvider) isReloadNeeded(info os.FileInfo) bool {
	p.RLock()
	defer p.RUnlock()

	return p.Info == nil || p.Info.ModTime() != info.ModTime() || p.Info.Size() != info.Size()
}

func (p *basicAuthProvider) loadUsers() error {
	if !p.IsEnabled() {
		return nil
	}
	info, err := os.Stat(p.Path)
	if err != nil {
		logger.Debug(logSender, "", "unable to stat basic auth users file: %v", err)
		return err
	}
	if p.isReloadNeeded(info) {
		r, err := os.Open(p.Path)
		if err != nil {
			logger.Debug(logSender, "", "unable to open basic auth users file: %v", err)
			return err
		}
		defer r.Close()
		reader := csv.NewReader(r)
		reader.Comma = ':'
		reader.Comment = '#'
		reader.TrimLeadingSpace = true
		records, err := reader.ReadAll()
		if err != nil {
			logger.Debug(logSender, "", "unable to parse basic auth users file: %v", err)
			return err
		}
		p.Lock()
		defer p.Unlock()

		p.Users = make(map[string]string)
		for _, record := range records {
			if len(record) == 2 {
				p.Users[record[0]] = record[1]
			}
		}
		logger.Debug(logSender, "", "number of users loaded for httpd basic auth: %v", len(p.Users))
		p.Info = info
	}
	return nil
}

func (p *basicAuthProvider) getHashedPassword(username string) (string, bool) {
	err := p.loadUsers()
	if err != nil {
		return "", false
	}
	p.RLock()
	defer p.RUnlock()

	pwd, ok := p.Users[username]
	return pwd, ok
}

// ValidateCredentials returns true if the credentials are valid
func (p *basicAuthProvider) ValidateCredentials(username, password string) bool {
	if hashedPwd, ok := p.getHashedPassword(username); ok {
		if util.IsStringPrefixInSlice(hashedPwd, bcryptPwdPrefixes) {
			err := bcrypt.CompareHashAndPassword([]byte(hashedPwd), []byte(password))
			return err == nil
		}
		if strings.HasPrefix(hashedPwd, md5CryptPwdPrefix) {
			crypter := md5_crypt.New()
			err := crypter.Verify(hashedPwd, []byte(password))
			return err == nil
		}
		if strings.HasPrefix(hashedPwd, apr1CryptPwdPrefix) {
			crypter := apr1_crypt.New()
			err := crypter.Verify(hashedPwd, []byte(password))
			return err == nil
		}
	}

	return false
}

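The loader above reads an htpasswd-style file: one user per line, colon-separated, '#' for comments, with bcrypt ($2y$...), md5-crypt ($1$...) or apr1-crypt ($apr1$...) password hashes. A minimal wiring sketch; the path is illustrative and the bcrypt hash is the one the tests below use for "password1":

	// /tmp/http_users.txt contains, htpasswd style:
	//
	//	test1:$2y$05$bcHSED7aO1cfLto6ZdDBOOKzlwftslVhtpIkRhAtSa4GuLmk5mola
	//
	provider, err := NewBasicAuthProvider("/tmp/http_users.txt")
	if err != nil {
		return err
	}
	if provider.IsEnabled() && provider.ValidateCredentials("test1", "password1") {
		// credentials are valid, serve the request
	}
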
common/httpauth_test.go (new file, 71 lines)
@@ -0,0 +1,71 @@
package common

import (
	"os"
	"path/filepath"
	"runtime"
	"testing"

	"github.com/stretchr/testify/require"
)

func TestBasicAuth(t *testing.T) {
	httpAuth, err := NewBasicAuthProvider("")
	require.NoError(t, err)
	require.False(t, httpAuth.IsEnabled())

	_, err = NewBasicAuthProvider("missing path")
	require.Error(t, err)

	authUserFile := filepath.Join(os.TempDir(), "http_users.txt")
	authUserData := []byte("test1:$2y$05$bcHSED7aO1cfLto6ZdDBOOKzlwftslVhtpIkRhAtSa4GuLmk5mola\n")
	err = os.WriteFile(authUserFile, authUserData, os.ModePerm)
	require.NoError(t, err)

	httpAuth, err = NewBasicAuthProvider(authUserFile)
	require.NoError(t, err)
	require.True(t, httpAuth.IsEnabled())
	require.False(t, httpAuth.ValidateCredentials("test1", "wrong1"))
	require.False(t, httpAuth.ValidateCredentials("test2", "password2"))
	require.True(t, httpAuth.ValidateCredentials("test1", "password1"))

	authUserData = append(authUserData, []byte("test2:$1$OtSSTL8b$bmaCqEksI1e7rnZSjsIDR1\n")...)
	err = os.WriteFile(authUserFile, authUserData, os.ModePerm)
	require.NoError(t, err)
	require.False(t, httpAuth.ValidateCredentials("test2", "wrong2"))
	require.True(t, httpAuth.ValidateCredentials("test2", "password2"))

	authUserData = append(authUserData, []byte("test2:$apr1$gLnIkRIf$Xr/6aJfmIrihP4b2N2tcs/\n")...)
	err = os.WriteFile(authUserFile, authUserData, os.ModePerm)
	require.NoError(t, err)
	require.False(t, httpAuth.ValidateCredentials("test2", "wrong2"))
	require.True(t, httpAuth.ValidateCredentials("test2", "password2"))

	authUserData = append(authUserData, []byte("test3:$apr1$gLnIkRIf$Xr/6aJfmIrihP4b2N2tcs/\n")...)
	err = os.WriteFile(authUserFile, authUserData, os.ModePerm)
	require.NoError(t, err)
	require.False(t, httpAuth.ValidateCredentials("test3", "password3"))

	authUserData = append(authUserData, []byte("test4:$invalid$gLnIkRIf$Xr/6$aJfmIr$ihP4b2N2tcs/\n")...)
	err = os.WriteFile(authUserFile, authUserData, os.ModePerm)
	require.NoError(t, err)
	require.False(t, httpAuth.ValidateCredentials("test4", "password3"))

	if runtime.GOOS != "windows" {
		authUserData = append(authUserData, []byte("test5:$apr1$gLnIkRIf$Xr/6aJfmIrihP4b2N2tcs/\n")...)
		err = os.WriteFile(authUserFile, authUserData, os.ModePerm)
		require.NoError(t, err)
		err = os.Chmod(authUserFile, 0001)
		require.NoError(t, err)
		require.False(t, httpAuth.ValidateCredentials("test5", "password2"))
		err = os.Chmod(authUserFile, os.ModePerm)
		require.NoError(t, err)
	}
	authUserData = append(authUserData, []byte("\"foo\"bar\"\r\n")...)
	err = os.WriteFile(authUserFile, authUserData, os.ModePerm)
	require.NoError(t, err)
	require.False(t, httpAuth.ValidateCredentials("test2", "password2"))

	err = os.Remove(authUserFile)
	require.NoError(t, err)
}

common/protocol_test.go (new file, 3373 lines)
File diff suppressed because it is too large
common/ratelimiter.go (new file, 243 lines)
@@ -0,0 +1,243 @@
package common

import (
	"errors"
	"fmt"
	"net"
	"sort"
	"sync"
	"sync/atomic"
	"time"

	"golang.org/x/time/rate"

	"github.com/drakkan/sftpgo/v2/util"
)

var (
	errNoBucket               = errors.New("no bucket found")
	errReserve                = errors.New("unable to reserve token")
	rateLimiterProtocolValues = []string{ProtocolSSH, ProtocolFTP, ProtocolWebDAV, ProtocolHTTP}
)

// RateLimiterType defines the supported rate limiter types
type RateLimiterType int

// Supported rate limiter types
const (
	rateLimiterTypeGlobal RateLimiterType = iota + 1
	rateLimiterTypeSource
)

// RateLimiterConfig defines the configuration for a rate limiter
type RateLimiterConfig struct {
	// Average defines the maximum rate allowed. 0 means disabled
	Average int64 `json:"average" mapstructure:"average"`
	// Period defines the period as milliseconds. Default: 1000 (1 second).
	// The rate is actually defined by dividing average by period.
	// So for a rate below 1 req/s, one needs to define a period larger than a second.
	Period int64 `json:"period" mapstructure:"period"`
	// Burst is the maximum number of requests allowed to go through in the
	// same arbitrarily small period of time. Default: 1.
	Burst int `json:"burst" mapstructure:"burst"`
	// Type defines the rate limiter type:
	// - rateLimiterTypeGlobal is a global rate limiter independent from the source
	// - rateLimiterTypeSource is a per-source rate limiter
	Type int `json:"type" mapstructure:"type"`
	// Protocols defines the protocols for this rate limiter.
	// Available protocols are: "SSH", "FTP", "DAV", "HTTP".
	// A rate limiter with no protocols defined is disabled
	Protocols []string `json:"protocols" mapstructure:"protocols"`
	// AllowList defines a list of IP addresses and IP ranges excluded from rate limiting
	AllowList []string `json:"allow_list" mapstructure:"allow_list"`
	// If the rate limit is exceeded, the defender is enabled, and this is a per-source limiter,
	// a new defender event will be generated
	GenerateDefenderEvents bool `json:"generate_defender_events" mapstructure:"generate_defender_events"`
	// The number of per-IP rate limiters kept in memory will vary between the
	// soft and hard limit
	EntriesSoftLimit int `json:"entries_soft_limit" mapstructure:"entries_soft_limit"`
	EntriesHardLimit int `json:"entries_hard_limit" mapstructure:"entries_hard_limit"`
}

func (r *RateLimiterConfig) isEnabled() bool {
	return r.Average > 0 && len(r.Protocols) > 0
}

func (r *RateLimiterConfig) validate() error {
	if r.Burst < 1 {
		return fmt.Errorf("invalid burst %v. It must be >= 1", r.Burst)
	}
	if r.Period < 100 {
		return fmt.Errorf("invalid period %v. It must be >= 100", r.Period)
	}
	if r.Type != int(rateLimiterTypeGlobal) && r.Type != int(rateLimiterTypeSource) {
		return fmt.Errorf("invalid type %v", r.Type)
	}
	if r.Type != int(rateLimiterTypeGlobal) {
		if r.EntriesSoftLimit <= 0 {
			return fmt.Errorf("invalid entries_soft_limit %v", r.EntriesSoftLimit)
		}
		if r.EntriesHardLimit <= r.EntriesSoftLimit {
			return fmt.Errorf("invalid entries_hard_limit %v, it must be > %v", r.EntriesHardLimit, r.EntriesSoftLimit)
		}
	}
	r.Protocols = util.RemoveDuplicates(r.Protocols)
	for _, protocol := range r.Protocols {
		if !util.IsStringInSlice(protocol, rateLimiterProtocolValues) {
			return fmt.Errorf("invalid protocol %#v", protocol)
		}
	}
	return nil
}

func (r *RateLimiterConfig) getLimiter() *rateLimiter {
	limiter := &rateLimiter{
		burst:                  r.Burst,
		globalBucket:           nil,
		generateDefenderEvents: r.GenerateDefenderEvents,
	}
	var maxDelay time.Duration
	period := time.Duration(r.Period) * time.Millisecond
	rtl := float64(r.Average*int64(time.Second)) / float64(period)
	limiter.rate = rate.Limit(rtl)
	if rtl < 1 {
		maxDelay = period / 2
	} else {
		maxDelay = time.Second / (time.Duration(rtl) * 2)
	}
	if maxDelay > 10*time.Second {
		maxDelay = 10 * time.Second
	}
	limiter.maxDelay = maxDelay
	limiter.buckets = sourceBuckets{
		buckets:   make(map[string]sourceRateLimiter),
		hardLimit: r.EntriesHardLimit,
		softLimit: r.EntriesSoftLimit,
	}
	if r.Type != int(rateLimiterTypeSource) {
		limiter.globalBucket = rate.NewLimiter(limiter.rate, limiter.burst)
	}
	return limiter
}

// rateLimiter defines a rate limiter
type rateLimiter struct {
	rate                   rate.Limit
	burst                  int
	maxDelay               time.Duration
	globalBucket           *rate.Limiter
	buckets                sourceBuckets
	generateDefenderEvents bool
	allowList              []func(net.IP) bool
}

// Wait blocks until the limit allows one event to happen
// or returns an error if the time to wait exceeds the max
// allowed delay
func (rl *rateLimiter) Wait(source string) (time.Duration, error) {
	if len(rl.allowList) > 0 {
		ip := net.ParseIP(source)
		if ip != nil {
			for idx := range rl.allowList {
				if rl.allowList[idx](ip) {
					return 0, nil
				}
			}
		}
	}
	var res *rate.Reservation
	if rl.globalBucket != nil {
		res = rl.globalBucket.Reserve()
	} else {
		var err error
		res, err = rl.buckets.reserve(source)
		if err != nil {
			rateLimiter := rate.NewLimiter(rl.rate, rl.burst)
			res = rl.buckets.addAndReserve(rateLimiter, source)
		}
	}
	if !res.OK() {
		return 0, errReserve
	}
	delay := res.Delay()
	if delay > rl.maxDelay {
		res.Cancel()
		if rl.generateDefenderEvents && rl.globalBucket == nil {
			AddDefenderEvent(source, HostEventLimitExceeded)
		}
		return delay, fmt.Errorf("rate limit exceeded, wait time to respect rate %v, max wait time allowed %v", delay, rl.maxDelay)
	}
	time.Sleep(delay)
	return 0, nil
}

type sourceRateLimiter struct {
	lastActivity int64
	bucket       *rate.Limiter
}

func (s *sourceRateLimiter) updateLastActivity() {
	atomic.StoreInt64(&s.lastActivity, time.Now().UnixNano())
}

func (s *sourceRateLimiter) getLastActivity() int64 {
	return atomic.LoadInt64(&s.lastActivity)
}

type sourceBuckets struct {
	sync.RWMutex
	buckets   map[string]sourceRateLimiter
	hardLimit int
	softLimit int
}

func (b *sourceBuckets) reserve(source string) (*rate.Reservation, error) {
	b.RLock()
	defer b.RUnlock()

	if src, ok := b.buckets[source]; ok {
		src.updateLastActivity()
		return src.bucket.Reserve(), nil
	}

	return nil, errNoBucket
}

func (b *sourceBuckets) addAndReserve(r *rate.Limiter, source string) *rate.Reservation {
	b.Lock()
	defer b.Unlock()

	b.cleanup()

	src := sourceRateLimiter{
		bucket: r,
	}
	src.updateLastActivity()
	b.buckets[source] = src
	return src.bucket.Reserve()
}

func (b *sourceBuckets) cleanup() {
	if len(b.buckets) >= b.hardLimit {
		numToRemove := len(b.buckets) - b.softLimit

		kvList := make(kvList, 0, len(b.buckets))

		for k, v := range b.buckets {
			kvList = append(kvList, kv{
				Key:   k,
				Value: v.getLastActivity(),
			})
		}

		sort.Sort(kvList)

		for idx, kv := range kvList {
			if idx >= numToRemove {
				break
			}

			delete(b.buckets, kv.Key)
		}
	}
}

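As the RateLimiterConfig comments note, the effective rate is average divided by period, and getLimiter derives the maximum queueing delay from it. A standalone re-derivation of that rule (not the sftpgo API), whose outputs match the maxDelay values asserted in the tests below:

package main

import (
	"fmt"
	"time"
)

// maxDelayFor mirrors the arithmetic in getLimiter above.
func maxDelayFor(average, periodMillis int64) time.Duration {
	period := time.Duration(periodMillis) * time.Millisecond
	rtl := float64(average*int64(time.Second)) / float64(period)
	var maxDelay time.Duration
	if rtl < 1 {
		maxDelay = period / 2
	} else {
		maxDelay = time.Second / (time.Duration(rtl) * 2)
	}
	if maxDelay > 10*time.Second {
		maxDelay = 10 * time.Second
	}
	return maxDelay
}

func main() {
	fmt.Println(maxDelayFor(1, 10000))  // 5s: 0.1 req/s, rtl < 1, so period/2
	fmt.Println(maxDelayFor(1, 100000)) // 10s: 50s is capped to the 10s maximum
	fmt.Println(maxDelayFor(1, 500))    // 250ms: 2 req/s, so 1s/(2*2)
}
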
common/ratelimiter_test.go (new file, 148 lines)
@@ -0,0 +1,148 @@
package common

import (
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"

	"github.com/drakkan/sftpgo/v2/util"
)

func TestRateLimiterConfig(t *testing.T) {
	config := RateLimiterConfig{}
	err := config.validate()
	require.Error(t, err)
	config.Burst = 1
	config.Period = 10
	err = config.validate()
	require.Error(t, err)
	config.Period = 1000
	config.Type = 100
	err = config.validate()
	require.Error(t, err)
	config.Type = int(rateLimiterTypeSource)
	config.EntriesSoftLimit = 0
	err = config.validate()
	require.Error(t, err)
	config.EntriesSoftLimit = 150
	config.EntriesHardLimit = 0
	err = config.validate()
	require.Error(t, err)
	config.EntriesHardLimit = 200
	config.Protocols = []string{"unsupported protocol"}
	err = config.validate()
	require.Error(t, err)
	config.Protocols = rateLimiterProtocolValues
	err = config.validate()
	require.NoError(t, err)

	limiter := config.getLimiter()
	require.Equal(t, 500*time.Millisecond, limiter.maxDelay)
	require.Nil(t, limiter.globalBucket)
	config.Type = int(rateLimiterTypeGlobal)
	config.Average = 1
	config.Period = 10000
	limiter = config.getLimiter()
	require.Equal(t, 5*time.Second, limiter.maxDelay)
	require.NotNil(t, limiter.globalBucket)
	config.Period = 100000
	limiter = config.getLimiter()
	require.Equal(t, 10*time.Second, limiter.maxDelay)
	config.Period = 500
	config.Average = 1
	limiter = config.getLimiter()
	require.Equal(t, 250*time.Millisecond, limiter.maxDelay)
}

func TestRateLimiter(t *testing.T) {
	config := RateLimiterConfig{
		Average:   1,
		Period:    1000,
		Burst:     1,
		Type:      int(rateLimiterTypeGlobal),
		Protocols: rateLimiterProtocolValues,
	}
	limiter := config.getLimiter()
	_, err := limiter.Wait("")
	require.NoError(t, err)
	_, err = limiter.Wait("")
	require.Error(t, err)

	config.Type = int(rateLimiterTypeSource)
	config.GenerateDefenderEvents = true
	config.EntriesSoftLimit = 5
	config.EntriesHardLimit = 10
	limiter = config.getLimiter()

	source := "192.168.1.2"
	_, err = limiter.Wait(source)
	require.NoError(t, err)
	_, err = limiter.Wait(source)
	require.Error(t, err)
	// a different source should work
	_, err = limiter.Wait(source + "1")
	require.NoError(t, err)

	allowList := []string{"192.168.1.0/24"}
	allowFuncs, err := util.ParseAllowedIPAndRanges(allowList)
	assert.NoError(t, err)
	limiter.allowList = allowFuncs
	for i := 0; i < 5; i++ {
		_, err = limiter.Wait(source)
		require.NoError(t, err)
	}
	_, err = limiter.Wait("not an ip")
	require.NoError(t, err)

	config.Burst = 0
	limiter = config.getLimiter()
	_, err = limiter.Wait(source)
	require.ErrorIs(t, err, errReserve)
}

func TestLimiterCleanup(t *testing.T) {
	config := RateLimiterConfig{
		Average:          100,
		Period:           1000,
		Burst:            1,
		Type:             int(rateLimiterTypeSource),
		Protocols:        rateLimiterProtocolValues,
		EntriesSoftLimit: 1,
		EntriesHardLimit: 3,
	}
	limiter := config.getLimiter()
	source1 := "10.8.0.1"
	source2 := "10.8.0.2"
	source3 := "10.8.0.3"
	source4 := "10.8.0.4"
	_, err := limiter.Wait(source1)
	assert.NoError(t, err)
	time.Sleep(20 * time.Millisecond)
	_, err = limiter.Wait(source2)
	assert.NoError(t, err)
	time.Sleep(20 * time.Millisecond)
	assert.Len(t, limiter.buckets.buckets, 2)
	_, ok := limiter.buckets.buckets[source1]
	assert.True(t, ok)
	_, ok = limiter.buckets.buckets[source2]
	assert.True(t, ok)
	_, err = limiter.Wait(source3)
	assert.NoError(t, err)
	assert.Len(t, limiter.buckets.buckets, 3)
	_, ok = limiter.buckets.buckets[source1]
	assert.True(t, ok)
	_, ok = limiter.buckets.buckets[source2]
	assert.True(t, ok)
	_, ok = limiter.buckets.buckets[source3]
	assert.True(t, ok)
	time.Sleep(20 * time.Millisecond)
	_, err = limiter.Wait(source4)
	assert.NoError(t, err)
	assert.Len(t, limiter.buckets.buckets, 2)
	_, ok = limiter.buckets.buckets[source3]
	assert.True(t, ok)
	_, ok = limiter.buckets.buckets[source4]
	assert.True(t, ok)
}

common/tlsutils.go (new file, 200 lines)
@@ -0,0 +1,200 @@
package common

import (
	"crypto/tls"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"os"
	"path/filepath"
	"sync"
	"time"

	"github.com/drakkan/sftpgo/v2/logger"
	"github.com/drakkan/sftpgo/v2/util"
)

// CertManager defines a TLS certificate manager
type CertManager struct {
	certPath  string
	keyPath   string
	configDir string
	logSender string
	sync.RWMutex
	caCertificates    []string
	caRevocationLists []string
	cert              *tls.Certificate
	rootCAs           *x509.CertPool
	crls              []*pkix.CertificateList
}

// Reload tries to reload the certificate and the CRLs
func (m *CertManager) Reload() error {
	errCrt := m.loadCertificate()
	errCRLs := m.LoadCRLs()

	if errCrt != nil {
		return errCrt
	}
	return errCRLs
}

// loadCertificate loads the configured x509 key pair
func (m *CertManager) loadCertificate() error {
	newCert, err := tls.LoadX509KeyPair(m.certPath, m.keyPath)
	if err != nil {
		logger.Warn(m.logSender, "", "unable to load X509 key pair, cert file %#v key file %#v error: %v",
			m.certPath, m.keyPath, err)
		return err
	}
	logger.Debug(m.logSender, "", "TLS certificate %#v successfully loaded", m.certPath)

	m.Lock()
	defer m.Unlock()

	m.cert = &newCert
	return nil
}

// GetCertificateFunc returns the loaded certificate
func (m *CertManager) GetCertificateFunc() func(*tls.ClientHelloInfo) (*tls.Certificate, error) {
	return func(clientHello *tls.ClientHelloInfo) (*tls.Certificate, error) {
		m.RLock()
		defer m.RUnlock()

		return m.cert, nil
	}
}

// IsRevoked returns true if the specified certificate has been revoked
func (m *CertManager) IsRevoked(crt *x509.Certificate, caCrt *x509.Certificate) bool {
	m.RLock()
	defer m.RUnlock()

	if crt == nil || caCrt == nil {
		logger.Warn(m.logSender, "", "unable to verify crt %v ca crt %v", crt, caCrt)
		return len(m.crls) > 0
	}

	for _, crl := range m.crls {
		if !crl.HasExpired(time.Now()) && caCrt.CheckCRLSignature(crl) == nil {
			for _, rc := range crl.TBSCertList.RevokedCertificates {
				if rc.SerialNumber.Cmp(crt.SerialNumber) == 0 {
					return true
				}
			}
		}
	}

	return false
}

// LoadCRLs tries to load certificate revocation lists from the given paths
func (m *CertManager) LoadCRLs() error {
	if len(m.caRevocationLists) == 0 {
		return nil
	}

	var crls []*pkix.CertificateList

	for _, revocationList := range m.caRevocationLists {
		if !util.IsFileInputValid(revocationList) {
			return fmt.Errorf("invalid root CA revocation list %#v", revocationList)
		}
		if revocationList != "" && !filepath.IsAbs(revocationList) {
			revocationList = filepath.Join(m.configDir, revocationList)
		}
		crlBytes, err := os.ReadFile(revocationList)
		if err != nil {
			logger.Warn(m.logSender, "", "unable to read revocation list %#v", revocationList)
			return err
		}
		crl, err := x509.ParseCRL(crlBytes)
		if err != nil {
			logger.Warn(m.logSender, "", "unable to parse revocation list %#v", revocationList)
			return err
		}

		logger.Debug(m.logSender, "", "CRL %#v successfully loaded", revocationList)
		crls = append(crls, crl)
	}

	m.Lock()
	defer m.Unlock()

	m.crls = crls

	return nil
}

// GetRootCAs returns the set of root certificate authorities that servers
// use if required to verify a client certificate
func (m *CertManager) GetRootCAs() *x509.CertPool {
	m.RLock()
	defer m.RUnlock()

	return m.rootCAs
}

// LoadRootCAs tries to load root CA certificate authorities from the given paths
func (m *CertManager) LoadRootCAs() error {
	if len(m.caCertificates) == 0 {
		return nil
	}

	rootCAs := x509.NewCertPool()

	for _, rootCA := range m.caCertificates {
		if !util.IsFileInputValid(rootCA) {
			return fmt.Errorf("invalid root CA certificate %#v", rootCA)
		}
		if rootCA != "" && !filepath.IsAbs(rootCA) {
			rootCA = filepath.Join(m.configDir, rootCA)
		}
		crt, err := os.ReadFile(rootCA)
		if err != nil {
			return err
		}
		if rootCAs.AppendCertsFromPEM(crt) {
			logger.Debug(m.logSender, "", "TLS certificate authority %#v successfully loaded", rootCA)
		} else {
			err := fmt.Errorf("unable to load TLS certificate authority %#v", rootCA)
			logger.Warn(m.logSender, "", "%v", err)
			return err
		}
	}

	m.Lock()
	defer m.Unlock()

	m.rootCAs = rootCAs
	return nil
}

// SetCACertificates sets the root CA authorities file paths.
// This should not be changed at runtime
func (m *CertManager) SetCACertificates(caCertificates []string) {
	m.caCertificates = caCertificates
}

// SetCARevocationLists sets the CA revocation lists file paths.
// This should not be changed at runtime
func (m *CertManager) SetCARevocationLists(caRevocationLists []string) {
	m.caRevocationLists = caRevocationLists
}

// NewCertManager creates a new certificate manager
func NewCertManager(certificateFile, certificateKeyFile, configDir, logSender string) (*CertManager, error) {
	manager := &CertManager{
		cert:      nil,
		certPath:  certificateFile,
		keyPath:   certificateKeyFile,
		configDir: configDir,
		logSender: logSender,
	}
	err := manager.loadCertificate()
	if err != nil {
		return nil, err
	}
	return manager, nil
}

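A minimal sketch of plugging CertManager into a TLS listener (the paths, address and log sender are illustrative). Serving the certificate through GetCertificateFunc is what lets Reload swap the key pair at runtime without restarting the listener:

	certMgr, err := NewCertManager("/etc/sftpgo/https.crt", "/etc/sftpgo/https.key",
		"/etc/sftpgo", "httpd")
	if err != nil {
		return err
	}
	tlsConfig := &tls.Config{
		GetCertificate: certMgr.GetCertificateFunc(),
		MinVersion:     tls.VersionTLS12,
	}
	listener, err := tls.Listen("tcp", ":8443", tlsConfig)
	if err != nil {
		return err
	}
	defer listener.Close()
	// ... accept connections; call certMgr.Reload() after the certificate is renewed
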
common/tlsutils_test.go (new file, 386 lines)
@@ -0,0 +1,386 @@
package common
|
||||
|
||||
import (
|
||||
"crypto/tls"
|
||||
"crypto/x509"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
)
|
||||
|
||||
const (
|
||||
serverCert = `-----BEGIN CERTIFICATE-----
|
||||
MIIEIDCCAgigAwIBAgIRAPOR9zTkX35vSdeyGpF8Rn8wDQYJKoZIhvcNAQELBQAw
|
||||
EzERMA8GA1UEAxMIQ2VydEF1dGgwHhcNMjEwMTAyMjEyMjU1WhcNMjIwNzAyMjEz
|
||||
MDUxWjARMQ8wDQYDVQQDEwZzZXJ2ZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAw
|
||||
ggEKAoIBAQCte0PJhCTNqTiqdwk/s4JanKIMKUVWr2u94a+JYy5gJ9xYXrQ49SeN
|
||||
m+fwhTAOqctP5zNVkFqxlBytJZg3pqCKqRoOOl1qVgL3F3o7JdhZGi67aw8QMLPx
|
||||
tLPpYWnnrlUQoXRJdTlqkDqO8lOZl9HO5oZeidPZ7r5BVD6ZiujAC6Zg0jIc+EPt
|
||||
qhaUJ1CStoAeRf1rNWKmDsLv5hEaDWoaHF9sNVzDQg6atZ3ici00qQj+uvEZo8mL
|
||||
k6egg3rqsTv9ml2qlrRgFumt99J60hTt3tuQaAruHY80O9nGy3SCXC11daa7gszH
|
||||
ElCRvhUVoOxRtB54YBEtJ0gEpFnTO9J1AgMBAAGjcTBvMA4GA1UdDwEB/wQEAwID
|
||||
uDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwHQYDVR0OBBYEFAgDXwPV
|
||||
nhztNz+H20iNWgoIx8adMB8GA1UdIwQYMBaAFO1yCNAGr/zQTJIi8lw3w5OiuBvM
|
||||
MA0GCSqGSIb3DQEBCwUAA4ICAQCR5kgIb4vAtrtsXD24n6RtU1yIXHPLNmDStVrH
|
||||
uaMYNnHlLhRlQFCjHhjWvZ89FQC7FeNOITc3FpibJySyw7JfnsyEOGxEbcAS4uLB
|
||||
2pdAiJPqdQtxIVcyi5vu53m1T5tm0sy8sBrGxU466aDQ8VGqjcjfTwNIyoFMd3p/
|
||||
ezFRvg2BudwU9hqApgfHfLi4WCuI3hLO2tbmgDinyH0HI0YYNNweGpiBYbTLF4Tx
|
||||
H6vHgD9USMZeu4+HX0IIsBiHQD7TTIe5ceREkPcNPd5qTpIvT3zKQ/KwwT90/zjP
|
||||
aWmz6pLxBfjRu7MY/bDfxfRUqsrLYJCVBoaDVRWR9rhiPIFkC5JzoWD/4hdj2iis
|
||||
N0+OOaJ77L+/ArFprE+7Fu3cSdYlfiNjV8R5kE29cAxKLI92CjAiTKrEuxKcQPKO
|
||||
+taWNKIYYjEDZwVnzlkTIl007X0RBuzu9gh4w5NwJdt8ZOJAp0JV0Cq+UvG+FC/v
|
||||
lYk82E6j1HKhf4CXmrjsrD1Fyu41mpVFOpa2ATiFGvms913MkXuyO8g99IllmDw1
|
||||
D7/PN4Qe9N6Zm7yoKZM0IUw2v+SUMIdOAZ7dptO9ZjtYOfiAIYN3jM8R4JYgPiuD
|
||||
DGSM9LJBJxCxI/DiO1y1Z3n9TcdDQYut8Gqdi/aYXw2YeqyHXosX5Od3vcK/O5zC
|
||||
pOJTYQ==
|
||||
-----END CERTIFICATE-----`
|
||||
serverKey = `-----BEGIN RSA PRIVATE KEY-----
|
||||
MIIEowIBAAKCAQEArXtDyYQkzak4qncJP7OCWpyiDClFVq9rveGviWMuYCfcWF60
|
||||
OPUnjZvn8IUwDqnLT+czVZBasZQcrSWYN6agiqkaDjpdalYC9xd6OyXYWRouu2sP
|
||||
EDCz8bSz6WFp565VEKF0SXU5apA6jvJTmZfRzuaGXonT2e6+QVQ+mYrowAumYNIy
|
||||
HPhD7aoWlCdQkraAHkX9azVipg7C7+YRGg1qGhxfbDVcw0IOmrWd4nItNKkI/rrx
|
||||
GaPJi5OnoIN66rE7/Zpdqpa0YBbprffSetIU7d7bkGgK7h2PNDvZxst0glwtdXWm
|
||||
u4LMxxJQkb4VFaDsUbQeeGARLSdIBKRZ0zvSdQIDAQABAoIBAF4sI8goq7HYwqIG
|
||||
rEagM4rsrCrd3H4KC/qvoJJ7/JjGCp8OCddBfY8pquat5kCPe4aMgxlXm2P6evaj
|
||||
CdZr5Ypf8Xz3we4PctyfKgMhsCfuRqAGpc6sIYJ8DY4LC2pxAExe2LlnoRtv39np
|
||||
QeiGuaYPDbIUL6SGLVFZYgIHngFhbDYfL83q3Cb/PnivUGFvUVQCfRBUKO2d8KYq
|
||||
TrVB5BWD2GrHor24ApQmci1OOqfbkIevkK6bk8HUfSZiZGI9LUQiPHMxi5k2x43J
|
||||
nIwhZnW2N28dorKnWHg2vh7viGvinVRZ3MEyX150oCw/L6SYM4fqR6t2ZSBgNQHT
|
||||
ZNoDtwECgYEA4lXMgtYqKuSlZ3TKfxAj03tJ/gbRdKcUCEGXEbdpY70tTu6KESZS
|
||||
etid4Ut/sWEoPTJsgYiGbgJl571t1O8oR1UZYgh9hBGHLV6UEIt9n2PbExhE2vL3
|
||||
SB7+LfO+tMvM4qKUBN+uy4GpU0NiyEEecw4x4S7MRSyHFRIDR7B6RV0CgYEAxDgS
|
||||
mDaNUfSdfB5mXekLUJAwqeKRdL9RjXYaHbnoZ5kIwQ73tFikRwyTsLQwMhjE1l3z
|
||||
MItTzIAyTf/BlK3dsp6bHTaT7hXIjHBsuKATN5qAuUpzTrg9+QaCawVSlQgNeF3a
|
||||
iyfD4dVp66Bzn3gO757TWqmroBZ2e1owbAQvF/kCgYAKT/Jze6KMNcK7hfy78VZQ
|
||||
imuCoXjlob8t6R8i9YJdwv7Pe9rakS5s3nXDEBePU2fr8eIzvK6zUHSoLF9WtlbV
|
||||
eTEg4FYnsEzCam7AmjptCrWulwp8F1ng9ViLa3Gi9y4snU+1MSPbrdqzKnzTtvPW
|
||||
Ni1bnzA7bp3w/dMcbxQDGQKBgB50hY5SiUS7LuZg4YqZ7UOn3aXAoMr6FvJZ7lvG
|
||||
yyepPQ6aACBh0b2lWhcHIKPl7EdJdcGHHo6TJzusAqPNCKf8rh6upe9COkpx+K3/
|
||||
SnxK4sffol4JgrTwKbXqsZKoGU8hYhZPKbwXn8UOtmN+AvN2N1/PDfBfDCzBJtrd
|
||||
G2IhAoGBAN19976xAMDjKb2+wd/mQYA2fR7E8lodxdX3LDnblYmndTKY67nVo94M
|
||||
FHPKZSN590HkFJ+wmChnOrqjtosY+N25CKMS7939EUIDrq+B+bYTWM/gcwdLXNUk
|
||||
Rygw/078Z3ZDJamXmyez5WpeLFrrbmI8sLnBBmSjQvMb6vCEtQ2Z
|
||||
-----END RSA PRIVATE KEY-----`
|
||||
caCRT = `-----BEGIN CERTIFICATE-----
|
||||
MIIE5jCCAs6gAwIBAgIBATANBgkqhkiG9w0BAQsFADATMREwDwYDVQQDEwhDZXJ0
|
||||
QXV0aDAeFw0yMTAxMDIyMTIwNTVaFw0yMjA3MDIyMTMwNTJaMBMxETAPBgNVBAMT
|
||||
CENlcnRBdXRoMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA4Tiho5xW
|
||||
AC15JRkMwfp3/TJwI2As7MY5dele5cmdr5bHAE+sRKqC+Ti88OJWCV5saoyax/1S
|
||||
CjxJlQMZMl169P1QYJskKjdG2sdv6RLWLMgwSNRRjxp/Bw9dHdiEb9MjLgu28Jro
|
||||
9peQkHcRHeMf5hM9WvlIJGrdzbC4hUehmqggcqgARainBkYjf0SwuWxHeu4nMqkp
|
||||
Ak5tcSTLCjHfEFHZ9Te0TIPG5YkWocQKyeLgu4lvuU+DD2W2lym+YVUtRMGs1Env
|
||||
k7p+N0DcGU26qfzZ2sF5ZXkqm7dBsGQB9pIxwc2Q8T1dCIyP9OQCKVILdc5aVFf1
|
||||
cryQFHYzYNNZXFlIBims5VV5Mgfp8ESHQSue+v6n6ykecLEyKt1F1Y/MWY/nWUSI
|
||||
8zdq83jdBAZVjo9MSthxVn57/06s/hQca65IpcTZV2gX0a+eRlAVqaRbAhL3LaZe
|
||||
bYsW3WHKoUOftwemuep3nL51TzlXZVL7Oz/ClGaEOsnGG9KFO6jh+W768qC0zLQI
|
||||
CdE7v2Zex98sZteHCg9fGJHIaYoF0aJG5P3WI5oZf2fy7UIYN9ADLFZiorCXAZEh
|
||||
CSU6mDoRViZ4RGR9GZxbDZ9KYn7O8M/KCR72bkQg73TlMsk1zSXEw0MKLUjtsw6c
|
||||
rZ0Jt8t3sRatHO3JrYHALMt9vZfyNCZp0IsCAwEAAaNFMEMwDgYDVR0PAQH/BAQD
|
||||
AgEGMBIGA1UdEwEB/wQIMAYBAf8CAQAwHQYDVR0OBBYEFO1yCNAGr/zQTJIi8lw3
|
||||
w5OiuBvMMA0GCSqGSIb3DQEBCwUAA4ICAQA6gCNuM7r8mnx674dm31GxBjQy5ZwB
|
||||
7CxDzYEvL/oiZ3Tv3HlPfN2LAAsJUfGnghh9DOytenL2CTZWjl/emP5eijzmlP+9
|
||||
zva5I6CIMCf/eDDVsRdO244t0o4uG7+At0IgSDM3bpVaVb4RHZNjEziYChsEYY8d
|
||||
HK6iwuRSvFniV6yhR/Vj1Ymi9yZ5xclqseLXiQnUB0PkfIk23+7s42cXB16653fH
|
||||
O/FsPyKBLiKJArizLYQc12aP3QOrYoYD9+fAzIIzew7A5C0aanZCGzkuFpO6TRlD
|
||||
Tb7ry9Gf0DfPpCgxraH8tOcmnqp/ka3hjqo/SRnnTk0IFrmmLdarJvjD46rKwBo4
|
||||
MjyAIR1mQ5j8GTlSFBmSgETOQ/EYvO3FPLmra1Fh7L+DvaVzTpqI9fG3TuyyY+Ri
|
||||
Fby4ycTOGSZOe5Fh8lqkX5Y47mCUJ3zHzOA1vUJy2eTlMRGpu47Eb1++Vm6EzPUP
|
||||
2EF5aD+zwcssh+atZvQbwxpgVqVcyLt91RSkKkmZQslh0rnlTb68yxvUnD3zw7So
|
||||
o6TAf9UvwVMEvdLT9NnFd6hwi2jcNte/h538GJwXeBb8EkfpqLKpTKyicnOdkamZ
|
||||
7E9zY8SHNRYMwB9coQ/W8NvufbCgkvOoLyMXk5edbXofXl3PhNGOlraWbghBnzf5
|
||||
r3rwjFsQOoZotA==
|
||||
-----END CERTIFICATE-----`
|
||||
caKey = `-----BEGIN RSA PRIVATE KEY-----
|
||||
MIIJKQIBAAKCAgEA4Tiho5xWAC15JRkMwfp3/TJwI2As7MY5dele5cmdr5bHAE+s
|
||||
RKqC+Ti88OJWCV5saoyax/1SCjxJlQMZMl169P1QYJskKjdG2sdv6RLWLMgwSNRR
|
||||
jxp/Bw9dHdiEb9MjLgu28Jro9peQkHcRHeMf5hM9WvlIJGrdzbC4hUehmqggcqgA
|
||||
RainBkYjf0SwuWxHeu4nMqkpAk5tcSTLCjHfEFHZ9Te0TIPG5YkWocQKyeLgu4lv
|
||||
uU+DD2W2lym+YVUtRMGs1Envk7p+N0DcGU26qfzZ2sF5ZXkqm7dBsGQB9pIxwc2Q
|
||||
8T1dCIyP9OQCKVILdc5aVFf1cryQFHYzYNNZXFlIBims5VV5Mgfp8ESHQSue+v6n
|
||||
6ykecLEyKt1F1Y/MWY/nWUSI8zdq83jdBAZVjo9MSthxVn57/06s/hQca65IpcTZ
|
||||
V2gX0a+eRlAVqaRbAhL3LaZebYsW3WHKoUOftwemuep3nL51TzlXZVL7Oz/ClGaE
|
||||
OsnGG9KFO6jh+W768qC0zLQICdE7v2Zex98sZteHCg9fGJHIaYoF0aJG5P3WI5oZ
|
||||
f2fy7UIYN9ADLFZiorCXAZEhCSU6mDoRViZ4RGR9GZxbDZ9KYn7O8M/KCR72bkQg
|
||||
73TlMsk1zSXEw0MKLUjtsw6crZ0Jt8t3sRatHO3JrYHALMt9vZfyNCZp0IsCAwEA
|
||||
AQKCAgAV+ElERYbaI5VyufvVnFJCH75ypPoc6sVGLEq2jbFVJJcq/5qlZCC8oP1F
|
||||
Xj7YUR6wUiDzK1Hqb7EZ2SCHGjlZVrCVi+y+NYAy7UuMZ+r+mVSkdhmypPoJPUVv
|
||||
GOTqZ6VB46Cn3eSl0WknvoWr7bD555yPmEuiSc5zNy74yWEJTidEKAFGyknowcTK
|
||||
sG+w1tAuPLcUKQ44DGB+rgEkcHL7C5EAa7upzx0C3RmZFB+dTAVyJdkBMbFuOhTS
|
||||
sB7DLeTplR7/4mp9da7EQw51ZXC1DlZOEZt++4/desXsqATNAbva1OuzrLG7mMKe
|
||||
N/PCBh/aERQcsCvgUmaXqGQgqN1Jhw8kbXnjZnVd9iE7TAh7ki3VqNy1OMgTwOex
|
||||
bBYWaCqHuDYIxCjeW0qLJcn0cKQ13FVYrxgInf4Jp82SQht5b/zLL3IRZEyKcLJF
|
||||
kL6g1wlmTUTUX0z8eZzlM0ZCrqtExjgElMO/rV971nyNV5WU8Og3NmE8/slqMrmJ
|
||||
DlrQr9q0WJsDKj1IMe46EUM6ix7bbxC5NIfJ96dgdxZDn6ghjca6iZYqqUACvmUj
|
||||
cq08s3R4Ouw9/87kn11wwGBx2yDueCwrjKEGc0RKjweGbwu0nBxOrkJ8JXz6bAv7
|
||||
1OKfYaX3afI9B8x4uaiuRs38oBQlg9uAYFfl4HNBPuQikGLmsQKCAQEA8VjFOsaz
|
||||
y6NMZzKXi7WZ48uu3ed5x3Kf6RyDr1WvQ1jkBMv9b6b8Gp1CRnPqviRBto9L8QAg
|
||||
bCXZTqnXzn//brskmW8IZgqjAlf89AWa53piucu9/hgidrHRZobs5gTqev28uJdc
|
||||
zcuw1g8c3nCpY9WeTjHODzX5NXYRLFpkazLfYa6c8Q9jZR4KKrpdM+66fxL0JlOd
|
||||
7dN0oQtEqEAugsd3cwkZgvWhY4oM7FGErrZoDLy273ZdJzi/vU+dThyVzfD8Ab8u
|
||||
VxxuobVMT/S608zbe+uaiUdov5s96OkCl87403UNKJBH+6LNb3rjBBLE9NPN5ET9
|
||||
JLQMrYd+zj8jQwKCAQEA7uU5I9MOufo9bIgJqjY4Ie1+Ex9DZEMUYFAvGNCJCVcS
|
||||
mwOdGF8AWzIavTLACmEDJO7t/OrBdoo4L7IEsCNjgA3WiIwIMiWUVqveAGUMEXr6
|
||||
TRI5EolV6FTqqIP6AS+BAeBq7G1ELgsTrWNHh11rW3+3kBMuOCn77PUQ8WHwcq/r
|
||||
teZcZn4Ewcr6P7cBODgVvnBPhe/J8xHS0HFVCeS1CvaiNYgees5yA80Apo9IPjDJ
|
||||
YWawLjmH5wUBI5yDFVp067wjqJnoKPSoKwWkZXqUk+zgFXx5KT0gh/c5yh1frASp
|
||||
q6oaYnHEVC5qj2SpT1GFLonTcrQUXiSkiUudvNu1GQKCAQEAmko+5GFtRe0ihgLQ
|
||||
4S76r6diJli6AKil1Fg3U1r6zZpBQ1PJtJxTJQyN9w5Z7q6tF/GqAesrzxevQdvQ
|
||||
rCImAPtA3ZofC2UXawMnIjWHHx6diNvYnV1+gtUQ4nO1dSOFZ5VZFcUmPiZO6boF
|
||||
oaryj3FcX+71JcJCjEvrlKhA9Es0hXUkvfMxfs5if4he1zlyHpTWYr4oA4egUugq
|
||||
P0mwskikc3VIyvEO+NyjgFxo72yLPkFSzemkidN8uKDyFqKtnlfGM7OuA2CY1WZa
|
||||
3+67lXWshx9KzyJIs92iCYkU8EoPxtdYzyrV6efdX7x27v60zTOut5TnJJS6WiF6
|
||||
Do5MkwKCAQAxoR9IyP0DN/BwzqYrXU42Bi+t603F04W1KJNQNWpyrUspNwv41yus
|
||||
xnD1o0hwH41Wq+h3JZIBfV+E0RfWO9Pc84MBJQ5C1LnHc7cQH+3s575+Km3+4tcd
|
||||
CB8j2R8kBeloKWYtLdn/Mr/ownpGreqyvIq2/LUaZ+Z1aMgXTYB1YwS16mCBzmZQ
|
||||
mEl62RsAwe4KfSyYJ6OtwqMoOJMxFfliiLBULK4gVykqjvk2oQeiG+KKQJoTUFJi
|
||||
dRCyhD5bPkqR+qjxyt+HOqSBI4/uoROi05AOBqjpH1DVzk+MJKQOiX1yM0l98CKY
|
||||
Vng+x+vAla/0Zh+ucajVkgk4mKPxazdpAoIBAQC17vWk4KYJpF2RC3pKPcQ0PdiX
|
||||
bN35YNlvyhkYlSfDNdyH3aDrGiycUyW2mMXUgEDFsLRxHMTL+zPC6efqO6sTAJDY
|
||||
cBptsW4drW/qo8NTx3dNOisLkW+mGGJOR/w157hREFr29ymCVMYu/Z7fVWIeSpCq
|
||||
p3u8YX8WTljrxwSczlGjvpM7uJx3SfYRM4TUoy+8wU8bK74LywLa5f60bQY6Dye0
|
||||
Gqd9O6OoPfgcQlwjC5MiAofeqwPJvU0hQOPoehZyNLAmOCWXTYWaTP7lxO1r6+NE
|
||||
M3hGYqW3W8Ixua71OskCypBZg/HVlIP/lzjRzdx+VOB2hbWVth2Iup/Z1egW
|
||||
-----END RSA PRIVATE KEY-----`
|
||||
caCRL = `-----BEGIN X509 CRL-----
|
||||
MIICpzCBkAIBATANBgkqhkiG9w0BAQsFADATMREwDwYDVQQDEwhDZXJ0QXV0aBcN
|
||||
MjEwMTAyMjEzNDA1WhcNMjMwMTAyMjEzNDA1WjAkMCICEQC+l04DbHWMyC3fG09k
|
||||
VXf+Fw0yMTAxMDIyMTM0MDVaoCMwITAfBgNVHSMEGDAWgBTtcgjQBq/80EySIvJc
|
||||
N8OTorgbzDANBgkqhkiG9w0BAQsFAAOCAgEAEJ7z+uNc8sqtxlOhSdTGDzX/xput
|
||||
E857kFQkSlMnU2whQ8c+XpYrBLA5vIZJNSSwohTpM4+zVBX/bJpmu3wqqaArRO9/
|
||||
YcW5mQk9Anvb4WjQW1cHmtNapMTzoC9AiYt/OWPfy+P6JCgCr4Hy6LgQyIRL6bM9
|
||||
VYTalolOm1qa4Y5cIeT7iHq/91mfaqo8/6MYRjLl8DOTROpmw8OS9bCXkzGKdCat
|
||||
AbAzwkQUSauyoCQ10rpX+Y64w9ng3g4Dr20aCqPf5osaqplEJ2HTK8ljDTidlslv
|
||||
9anQj8ax3Su89vI8+hK+YbfVQwrThabgdSjQsn+veyx8GlP8WwHLAQ379KjZjWg+
|
||||
OlOSwBeU1vTdP0QcB8X5C2gVujAyuQekbaV86xzIBOj7vZdfHZ6ee30TZ2FKiMyg
|
||||
7/N2OqW0w77ChsjB4MSHJCfuTgIeg62GzuZXLM+Q2Z9LBdtm4Byg+sm/P52adOEg
|
||||
gVb2Zf4KSvsAmA0PIBlu449/QXUFcMxzLFy7mwTeZj2B4Ln0Hm0szV9f9R8MwMtB
|
||||
SyLYxVH+mgqaR6Jkk22Q/yYyLPaELfafX5gp/AIXG8n0zxfVaTvK3auSgb1Q6ZLS
|
||||
5QH9dSIsmZHlPq7GoSXmKpMdjUL8eaky/IMteioyXgsBiATzl5L2dsw6MTX3MDF0
|
||||
QbDK+MzhmbKfDxs=
|
||||
-----END X509 CRL-----`
|
||||
client1Crt = `-----BEGIN CERTIFICATE-----
|
||||
MIIEITCCAgmgAwIBAgIRAIppZHoj1hM80D7WzTEKLuAwDQYJKoZIhvcNAQELBQAw
|
||||
EzERMA8GA1UEAxMIQ2VydEF1dGgwHhcNMjEwMTAyMjEyMzEwWhcNMjIwNzAyMjEz
|
||||
MDUxWjASMRAwDgYDVQQDEwdjbGllbnQxMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
|
||||
MIIBCgKCAQEAoKbYY9MdF2kF/nhBESIiZTdVYtA8XL9xrIZyDj9EnCiTxHiVbJtH
|
||||
XVwszqSl5TRrotPmnmAQcX3r8OCk+z+RQZ0QQj257P3kG6q4rNnOcWCS5xEd20jP
|
||||
yhQ3m+hMGfZsotNTQze1ochuQgLUN6IPyPxZkH22ia3jX4iu1eo/QxeLYHj1UHw4
|
||||
3Cii9yE+j5kPUC21xmnrGKdUrB55NYLXHx6yTIqYR5znSOVB8oJi18/hwdZmH859
|
||||
DHhm0Hx1HrS+jbjI3+CMorZJ3WUyNf+CkiVLD3xYutPbxzEpwiqkG/XYzLH0habT
|
||||
cDcILo18n+o3jvem2KWBrDhyairjIDscwQIDAQABo3EwbzAOBgNVHQ8BAf8EBAMC
|
||||
A7gwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMB0GA1UdDgQWBBSJ5GIv
|
||||
zIrE4ZSQt2+CGblKTDswizAfBgNVHSMEGDAWgBTtcgjQBq/80EySIvJcN8OTorgb
|
||||
zDANBgkqhkiG9w0BAQsFAAOCAgEALh4f5GhvNYNou0Ab04iQBbLEdOu2RlbK1B5n
|
||||
K9P/umYenBHMY/z6HT3+6tpcHsDuqE8UVdq3f3Gh4S2Gu9m8PRitT+cJ3gdo9Plm
|
||||
3rD4ufn/s6rGg3ppydXcedm17492tbccUDWOBZw3IO/ASVq13WPgT0/Kev7cPq0k
|
||||
sSdSNhVeXqx8Myc2/d+8GYyzbul2Kpfa7h9i24sK49E9ftnSmsIvngONo08eT1T0
|
||||
3wAOyK2981LIsHaAWcneShKFLDB6LeXIT9oitOYhiykhFlBZ4M1GNlSNfhQ8IIQP
|
||||
xbqMNXCLkW4/BtLhGEEcg0QVso6Kudl9rzgTfQknrdF7pHp6rS46wYUjoSyIY6dl
oLmnoAVJX36J3QPWelePI9e07X2wrTfiZWewwgw3KNRWjd6/zfPLe7GoqXnK1S2z
PT8qMfCaTwKTtUkzXuTFvQ8bAo2My/mS8FOcpkt2oQWeOsADHAUX7fz5BCoa2DL3
k/7Mh4gVT+JYZEoTwCFuYHgMWFWe98naqHi9lB4yR981p1QgXgxO7qBeipagKY1F
LlH1iwXUqZ3MZnkNA+4e1Fglsw3sa/rC+L98HnznJ/YbTfQbCP6aQ1qcOymrjMud
7MrFwqZjtd/SK4Qx1VpK6jGEAtPgWBTUS3p9ayg6lqjMBjsmySWfvRsDQbq6P5Ct
O/e3EH8=
-----END CERTIFICATE-----`
	client1Key = `-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEAoKbYY9MdF2kF/nhBESIiZTdVYtA8XL9xrIZyDj9EnCiTxHiV
bJtHXVwszqSl5TRrotPmnmAQcX3r8OCk+z+RQZ0QQj257P3kG6q4rNnOcWCS5xEd
20jPyhQ3m+hMGfZsotNTQze1ochuQgLUN6IPyPxZkH22ia3jX4iu1eo/QxeLYHj1
UHw43Cii9yE+j5kPUC21xmnrGKdUrB55NYLXHx6yTIqYR5znSOVB8oJi18/hwdZm
H859DHhm0Hx1HrS+jbjI3+CMorZJ3WUyNf+CkiVLD3xYutPbxzEpwiqkG/XYzLH0
habTcDcILo18n+o3jvem2KWBrDhyairjIDscwQIDAQABAoIBAEBSjVFqtbsp0byR
aXvyrtLX1Ng7h++at2jca85Ihq//jyqbHTje8zPuNAKI6eNbmb0YGr5OuEa4pD9N
ssDmMsKSoG/lRwwcm7h4InkSvBWpFShvMgUaohfHAHzsBYxfnh+TfULsi0y7c2n6
t/2OZcOTRkkUDIITnXYiw93ibHHv2Mv2bBDu35kGrcK+c2dN5IL5ZjTjMRpbJTe2
44RBJbdTxHBVSgoGBnugF+s2aEma6Ehsj70oyfoVpM6Aed5kGge0A5zA1JO7WCn9
Ay/DzlULRXHjJIoRWd2NKvx5n3FNppUc9vJh2plRHalRooZ2+MjSf8HmXlvG2Hpb
ScvmWgECgYEA1G+A/2KnxWsr/7uWIJ7ClcGCiNLdk17Pv3DZ3G4qUsU2ITftfIbb
tU0Q/b19na1IY8Pjy9ptP7t74/hF5kky97cf1FA8F+nMj/k4+wO8QDI8OJfzVzh9
PwielA5vbE+xmvis5Hdp8/od1Yrc/rPSy2TKtPFhvsqXjqoUmOAjDP8CgYEAwZjH
9dt1sc2lx/rMxihlWEzQ3JPswKW9/LJAmbRBoSWF9FGNjbX7uhWtXRKJkzb8ZAwa
88azluNo2oftbDD/+jw8b2cDgaJHlLAkSD4O1D1RthW7/LKD15qZ/oFsRb13NV85
ZNKtwslXGbfVNyGKUVFm7fVA8vBAOUey+LKDFj8CgYEAg8WWstOzVdYguMTXXuyb
ruEV42FJaDyLiSirOvxq7GTAKuLSQUg1yMRBIeQEo2X1XU0JZE3dLodRVhuO4EXP
g7Dn4X7Th9HSvgvNuIacowWGLWSz4Qp9RjhGhXhezUSx2nseY6le46PmFavJYYSR
4PBofMyt4PcyA6Cknh+KHmkCgYEAnTriG7ETE0a7v4DXUpB4TpCEiMCy5Xs2o8Z5
ZNva+W+qLVUWq+MDAIyechqeFSvxK6gRM69LJ96lx+XhU58wJiFJzAhT9rK/g+jS
bsHH9WOfu0xHkuHA5hgvvV2Le9B2wqgFyva4HJy82qxMxCu/VG/SMqyfBS9OWbb7
ibQhdq0CgYAl53LUWZsFSZIth1vux2LVOsI8C3X1oiXDGpnrdlQ+K7z57hq5EsRq
GC+INxwXbvKNqp5h0z2MvmKYPDlGVTgw8f8JjM7TkN17ERLcydhdRrMONUryZpo8
1xTob+8blyJgfxZUIAKbMbMbIiU0WAF0rfD/eJJwS4htOW/Hfv4TGA==
-----END RSA PRIVATE KEY-----`
	// client 2 crt is revoked
	client2Crt = `-----BEGIN CERTIFICATE-----
MIIEITCCAgmgAwIBAgIRAL6XTgNsdYzILd8bT2RVd/4wDQYJKoZIhvcNAQELBQAw
EzERMA8GA1UEAxMIQ2VydEF1dGgwHhcNMjEwMTAyMjEyMzIwWhcNMjIwNzAyMjEz
MDUxWjASMRAwDgYDVQQDEwdjbGllbnQyMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEA6xjW5KQR3/OFQtV5M75WINqQ4AzXSu6DhSz/yumaaQZP/UxY+6hi
jcrFzGo9MMie/Sza8DhkXOFAl2BelUubrOeB2cl+/Gr8OCyRi2Gv6j3zCsuN/4jQ
tNaoez/IbkDvI3l/ZpzBtnuNY2RiemGgHuORXHRVf3qVlsw+npBIRW5rM2HkO/xG
oZjeBErWVu390Lyn+Gvk2TqQDnkutWnxUC60/zPlHhXZ4BwaFAekbSnjsSDB1YFM
s8HwW4oBryoxdj3/+/qLrBHt75IdLw3T7/V1UDJQM3EvSQOr12w4egpldhtsC871
nnBQZeY6qA5feffIwwg/6lJm70o6S6OX6wIDAQABo3EwbzAOBgNVHQ8BAf8EBAMC
A7gwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMB0GA1UdDgQWBBTB84v5
t9HqhLhMODbn6oYkEQt3KzAfBgNVHSMEGDAWgBTtcgjQBq/80EySIvJcN8OTorgb
zDANBgkqhkiG9w0BAQsFAAOCAgEALGtBCve5k8tToL3oLuXp/oSik6ovIB/zq4I/
4zNMYPU31+ZWz6aahysgx1JL1yqTa3Qm8o2tu52MbnV10dM7CIw7c/cYa+c+OPcG
5LF97kp13X+r2axy+CmwM86b4ILaDGs2Qyai6VB6k7oFUve+av5o7aUrNFpqGCJz
HWdtHZSVA3JMATzy0TfWanwkzreqfdw7qH0yZ9bDURlBKAVWrqnCstva9jRuv+AI
eqxr/4Ro986TFjJdoAP3Vr16CPg7/B6GA/KmsBWJrpeJdPWq4i2gpLKvYZoy89qD
mUZf34RbzcCtV4NvV1DadGnt4us0nvLrvS5rL2+2uWD09kZYq9RbLkvgzF/cY0fz
i7I1bi5XQ+alWe0uAk5ZZL/D+GTRYUX1AWwCqwJxmHrMxcskMyO9pXvLyuSWRDLo
YNBrbX9nLcfJzVCp+X+9sntTHjs4l6Cw+fLepJIgtgqdCHtbhTiv68vSM6cgb4br
6n2xrXRKuioiWFOrTSRr+oalZh8dGJ/xvwY8IbWknZAvml9mf1VvfE7Ma5P777QM
fsbYVTq0Y3R/5hIWsC3HA5z6MIM8L1oRe/YyhP3CTmrCHkVKyDOosGXpGz+JVcyo
cfYkY5A3yFKB2HaCwZSfwFmRhxkrYWGEbHv3Cd9YkZs1J3hNhGFZyVMC9Uh0S85a
6zdDidU=
-----END CERTIFICATE-----`
	client2Key = `-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA6xjW5KQR3/OFQtV5M75WINqQ4AzXSu6DhSz/yumaaQZP/UxY
+6hijcrFzGo9MMie/Sza8DhkXOFAl2BelUubrOeB2cl+/Gr8OCyRi2Gv6j3zCsuN
/4jQtNaoez/IbkDvI3l/ZpzBtnuNY2RiemGgHuORXHRVf3qVlsw+npBIRW5rM2Hk
O/xGoZjeBErWVu390Lyn+Gvk2TqQDnkutWnxUC60/zPlHhXZ4BwaFAekbSnjsSDB
1YFMs8HwW4oBryoxdj3/+/qLrBHt75IdLw3T7/V1UDJQM3EvSQOr12w4egpldhts
C871nnBQZeY6qA5feffIwwg/6lJm70o6S6OX6wIDAQABAoIBAFatstVb1KdQXsq0
cFpui8zTKOUiduJOrDkWzTygAmlEhYtrccdfXu7OWz0x0lvBLDVGK3a0I/TGrAzj
4BuFY+FM/egxTVt9in6fmA3et4BS1OAfCryzUdfK6RV//8L+t+zJZ/qKQzWnugpy
QYjDo8ifuMFwtvEoXizaIyBNLAhEp9hnrv+Tyi2O2gahPvCHsD48zkyZRCHYRstD
NH5cIrwz9/RJgPO1KI+QsJE7Nh7stR0sbr+5TPU4fnsL2mNhMUF2TJrwIPrc1yp+
YIUjdnh3SO88j4TQT3CIrWi8i4pOy6N0dcVn3gpCRGaqAKyS2ZYUj+yVtLO4KwxZ
SZ1lNvECgYEA78BrF7f4ETfWSLcBQ3qxfLs7ibB6IYo2x25685FhZjD+zLXM1AKb
FJHEXUm3mUYrFJK6AFEyOQnyGKBOLs3S6oTAswMPbTkkZeD1Y9O6uv0AHASLZnK6
pC6ub0eSRF5LUyTQ55Jj8D7QsjXJueO8v+G5ihWhNSN9tB2UA+8NBmkCgYEA+weq
cvoeMIEMBQHnNNLy35bwfqrceGyPIRBcUIvzQfY1vk7KW6DYOUzC7u+WUzy/hA52
DjXVVhua2eMQ9qqtOav7djcMc2W9RbLowxvno7K5qiCss013MeWk64TCWy+WMp5A
AVAtOliC3hMkIKqvR2poqn+IBTh1449agUJQqTMCgYEAu06IHGq1GraV6g9XpGF5
wqoAlMzUTdnOfDabRilBf/YtSr+J++ThRcuwLvXFw7CnPZZ4TIEjDJ7xjj3HdxeE
fYYjineMmNd40UNUU556F1ZLvJfsVKizmkuCKhwvcMx+asGrmA+tlmds4p3VMS50
KzDtpKzLWlmU/p/RINWlRmkCgYBy0pHTn7aZZx2xWKqCDg+L2EXPGqZX6wgZDpu7
OBifzlfM4ctL2CmvI/5yPmLbVgkgBWFYpKUdiujsyyEiQvWTUKhn7UwjqKDHtcsk
G6p7xS+JswJrzX4885bZJ9Oi1AR2yM3sC9l0O7I4lDbNPmWIXBLeEhGMmcPKv/Kc
91Ff4wKBgQCF3ur+Vt0PSU0ucrPVHjCe7tqazm0LJaWbPXL1Aw0pzdM2EcNcW/MA
w0kqpr7MgJ94qhXCBcVcfPuFN9fBOadM3UBj1B45Cz3pptoK+ScI8XKno6jvVK/p
xr5cb9VBRBtB9aOKVfuRhpatAfS2Pzm2Htae9lFn7slGPUmu2hkjDw==
-----END RSA PRIVATE KEY-----`
)

func TestLoadCertificate(t *testing.T) {
	caCrtPath := filepath.Join(os.TempDir(), "testca.crt")
	caCrlPath := filepath.Join(os.TempDir(), "testcrl.crt")
	certPath := filepath.Join(os.TempDir(), "test.crt")
	keyPath := filepath.Join(os.TempDir(), "test.key")
	err := os.WriteFile(caCrtPath, []byte(caCRT), os.ModePerm)
	assert.NoError(t, err)
	err = os.WriteFile(caCrlPath, []byte(caCRL), os.ModePerm)
	assert.NoError(t, err)
	err = os.WriteFile(certPath, []byte(serverCert), os.ModePerm)
	assert.NoError(t, err)
	err = os.WriteFile(keyPath, []byte(serverKey), os.ModePerm)
	assert.NoError(t, err)
	certManager, err := NewCertManager(certPath, keyPath, configDir, logSenderTest)
	assert.NoError(t, err)
	certFunc := certManager.GetCertificateFunc()
	if assert.NotNil(t, certFunc) {
		hello := &tls.ClientHelloInfo{
			ServerName:   "localhost",
			CipherSuites: []uint16{tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305},
		}
		cert, err := certFunc(hello)
		assert.NoError(t, err)
		assert.Equal(t, certManager.cert, cert)
	}

	certManager.SetCACertificates(nil)
	err = certManager.LoadRootCAs()
	assert.NoError(t, err)

	certManager.SetCACertificates([]string{""})
	err = certManager.LoadRootCAs()
	assert.Error(t, err)

	certManager.SetCACertificates([]string{"invalid"})
	err = certManager.LoadRootCAs()
	assert.Error(t, err)

	// loading the key as root CA must fail
	certManager.SetCACertificates([]string{keyPath})
	err = certManager.LoadRootCAs()
	assert.Error(t, err)

	certManager.SetCACertificates([]string{certPath})
	err = certManager.LoadRootCAs()
	assert.NoError(t, err)

	rootCa := certManager.GetRootCAs()
	assert.NotNil(t, rootCa)

	err = certManager.Reload()
	assert.NoError(t, err)

	certManager.SetCARevocationLists(nil)
	err = certManager.LoadCRLs()
	assert.NoError(t, err)

	certManager.SetCARevocationLists([]string{""})
	err = certManager.LoadCRLs()
	assert.Error(t, err)

	certManager.SetCARevocationLists([]string{"invalid crl"})
	err = certManager.LoadCRLs()
	assert.Error(t, err)

	// this is not a crl and must fail
	certManager.SetCARevocationLists([]string{caCrtPath})
	err = certManager.LoadCRLs()
	assert.Error(t, err)

	certManager.SetCARevocationLists([]string{caCrlPath})
	err = certManager.LoadCRLs()
	assert.NoError(t, err)

	crt, err := tls.X509KeyPair([]byte(caCRT), []byte(caKey))
	assert.NoError(t, err)

	x509CAcrt, err := x509.ParseCertificate(crt.Certificate[0])
	assert.NoError(t, err)

	crt, err = tls.X509KeyPair([]byte(client1Crt), []byte(client1Key))
	assert.NoError(t, err)
	x509crt, err := x509.ParseCertificate(crt.Certificate[0])
	if assert.NoError(t, err) {
		assert.False(t, certManager.IsRevoked(x509crt, x509CAcrt))
	}

	crt, err = tls.X509KeyPair([]byte(client2Crt), []byte(client2Key))
	assert.NoError(t, err)
	x509crt, err = x509.ParseCertificate(crt.Certificate[0])
	if assert.NoError(t, err) {
		assert.True(t, certManager.IsRevoked(x509crt, x509CAcrt))
	}

	assert.True(t, certManager.IsRevoked(nil, nil))

	err = os.Remove(caCrlPath)
	assert.NoError(t, err)
	err = certManager.Reload()
	assert.Error(t, err)

	err = os.Remove(certPath)
	assert.NoError(t, err)
	err = os.Remove(keyPath)
	assert.NoError(t, err)
	err = certManager.Reload()
	assert.Error(t, err)

	err = os.Remove(caCrtPath)
	assert.NoError(t, err)
}

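// A minimal sketch of how the manager exercised above can feed a TLS
// listener: GetCertificateFunc is handed to the standard library so every
// handshake picks up the certificate most recently loaded by Reload
// (the ":8443" address here is just an example, not from the original code).
//
//	tlsConfig := &tls.Config{
//		GetCertificate: certManager.GetCertificateFunc(),
//	}
//	listener, err := tls.Listen("tcp", ":8443", tlsConfig)
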
func TestLoadInvalidCert(t *testing.T) {
	certManager, err := NewCertManager("test.crt", "test.key", configDir, logSenderTest)
	assert.Error(t, err)
	assert.Nil(t, certManager)
}
332 common/transfer.go Normal file
@@ -0,0 +1,332 @@
package common

import (
	"errors"
	"path"
	"sync"
	"sync/atomic"
	"time"

	"github.com/drakkan/sftpgo/v2/dataprovider"
	"github.com/drakkan/sftpgo/v2/logger"
	"github.com/drakkan/sftpgo/v2/metric"
	"github.com/drakkan/sftpgo/v2/vfs"
)

var (
	// ErrTransferClosed defines the error returned for a closed transfer
	ErrTransferClosed = errors.New("transfer already closed")
)

// BaseTransfer contains the transfer details common to all protocols, for an upload or a download.
type BaseTransfer struct { //nolint:maligned
	ID              uint64
	BytesSent       int64
	BytesReceived   int64
	Fs              vfs.Fs
	File            vfs.File
	Connection      *BaseConnection
	cancelFn        func()
	fsPath          string
	effectiveFsPath string
	requestPath     string
	ftpMode         string
	start           time.Time
	MaxWriteSize    int64
	MinWriteOffset  int64
	InitialSize     int64
	isNewFile       bool
	transferType    int
	AbortTransfer   int32
	aTime           time.Time
	mTime           time.Time
	sync.Mutex
	ErrTransfer error
}

// NewBaseTransfer returns a new BaseTransfer and adds it to the given connection
func NewBaseTransfer(file vfs.File, conn *BaseConnection, cancelFn func(), fsPath, effectiveFsPath, requestPath string,
	transferType int, minWriteOffset, initialSize, maxWriteSize int64, isNewFile bool, fs vfs.Fs) *BaseTransfer {
	t := &BaseTransfer{
		ID:              conn.GetTransferID(),
		File:            file,
		Connection:      conn,
		cancelFn:        cancelFn,
		fsPath:          fsPath,
		effectiveFsPath: effectiveFsPath,
		start:           time.Now(),
		transferType:    transferType,
		MinWriteOffset:  minWriteOffset,
		InitialSize:     initialSize,
		isNewFile:       isNewFile,
		requestPath:     requestPath,
		BytesSent:       0,
		BytesReceived:   0,
		MaxWriteSize:    maxWriteSize,
		AbortTransfer:   0,
		Fs:              fs,
	}

	conn.AddTransfer(t)
	return t
}

// SetFtpMode sets the FTP mode for the current transfer
func (t *BaseTransfer) SetFtpMode(mode string) {
	t.ftpMode = mode
}

// GetID returns the transfer ID
func (t *BaseTransfer) GetID() uint64 {
	return t.ID
}

// GetType returns the transfer type
func (t *BaseTransfer) GetType() int {
	return t.transferType
}

// GetSize returns the transferred size
func (t *BaseTransfer) GetSize() int64 {
	if t.transferType == TransferDownload {
		return atomic.LoadInt64(&t.BytesSent)
	}
	return atomic.LoadInt64(&t.BytesReceived)
}

// GetStartTime returns the start time
func (t *BaseTransfer) GetStartTime() time.Time {
	return t.start
}

// SignalClose signals that the transfer should be closed.
// For some protocols, for example WebDAV, we have no
// access to the network connection, so we use this method
// to make the next read or write fail
func (t *BaseTransfer) SignalClose() {
	atomic.StoreInt32(&(t.AbortTransfer), 1)
}

// GetVirtualPath returns the transfer virtual path
func (t *BaseTransfer) GetVirtualPath() string {
	return t.requestPath
}

// GetFsPath returns the transfer filesystem path
func (t *BaseTransfer) GetFsPath() string {
	return t.fsPath
}

// SetTimes stores access and modification times if fsPath matches the current file
func (t *BaseTransfer) SetTimes(fsPath string, atime time.Time, mtime time.Time) bool {
	if fsPath == t.GetFsPath() {
		t.aTime = atime
		t.mTime = mtime
		return true
	}
	return false
}

// GetRealFsPath returns the real transfer filesystem path.
// If atomic uploads are enabled this differs from fsPath
func (t *BaseTransfer) GetRealFsPath(fsPath string) string {
	if fsPath == t.GetFsPath() {
		if t.File != nil {
			return t.File.Name()
		}
		return t.fsPath
	}
	return ""
}

// SetCancelFn sets the cancel function for the transfer
func (t *BaseTransfer) SetCancelFn(cancelFn func()) {
	t.cancelFn = cancelFn
}

// Truncate changes the size of the opened file.
// Supported for local fs only
func (t *BaseTransfer) Truncate(fsPath string, size int64) (int64, error) {
	if fsPath == t.GetFsPath() {
		if t.File != nil {
			initialSize := t.InitialSize
			err := t.File.Truncate(size)
			if err == nil {
				t.Lock()
				t.InitialSize = size
				if t.MaxWriteSize > 0 {
					sizeDiff := initialSize - size
					t.MaxWriteSize += sizeDiff
					metric.TransferCompleted(atomic.LoadInt64(&t.BytesSent), atomic.LoadInt64(&t.BytesReceived), t.transferType, t.ErrTransfer)
					atomic.StoreInt64(&t.BytesReceived, 0)
				}
				t.Unlock()
			}
			t.Connection.Log(logger.LevelDebug, "file %#v truncated to size %v max write size %v new initial size %v err: %v",
				fsPath, size, t.MaxWriteSize, t.InitialSize, err)
			return initialSize, err
		}
		if size == 0 && atomic.LoadInt64(&t.BytesSent) == 0 {
			// for cloud providers the file is always truncated to zero, we don't support append/resume for uploads
			// for buffered SFTP we can have buffered bytes so we return an error
			if !vfs.IsBufferedSFTPFs(t.Fs) {
				return 0, nil
			}
		}
		return 0, vfs.ErrVfsUnsupported
	}
	return 0, errTransferMismatch
}

// TransferError is called if there is an unexpected error.
// For example network or client issues
func (t *BaseTransfer) TransferError(err error) {
	t.Lock()
	defer t.Unlock()
	if t.ErrTransfer != nil {
		return
	}
	t.ErrTransfer = err
	if t.cancelFn != nil {
		t.cancelFn()
	}
	elapsed := time.Since(t.start).Nanoseconds() / 1000000
	t.Connection.Log(logger.LevelError, "Unexpected error for transfer, path: %#v, error: \"%v\" bytes sent: %v, "+
		"bytes received: %v transfer running since %v ms", t.fsPath, t.ErrTransfer, atomic.LoadInt64(&t.BytesSent),
		atomic.LoadInt64(&t.BytesReceived), elapsed)
}

func (t *BaseTransfer) getUploadFileSize() (int64, error) {
	var fileSize int64
	info, err := t.Fs.Stat(t.fsPath)
	if err == nil {
		fileSize = info.Size()
	}
	if vfs.IsCryptOsFs(t.Fs) && t.ErrTransfer != nil {
		errDelete := t.Fs.Remove(t.fsPath, false)
		if errDelete != nil {
			t.Connection.Log(logger.LevelWarn, "error removing partial crypto file %#v: %v", t.fsPath, errDelete)
		}
	}
	return fileSize, err
}

// Close is called when the transfer is completed.
// It logs the transfer info, updates the user quota (for uploads)
// and executes any defined action.
// If there is an error no action will be executed and, in atomic mode,
// we try to delete the temporary file
func (t *BaseTransfer) Close() error {
	defer t.Connection.RemoveTransfer(t)

	var err error
	numFiles := 0
	if t.isNewFile {
		numFiles = 1
	}
	metric.TransferCompleted(atomic.LoadInt64(&t.BytesSent), atomic.LoadInt64(&t.BytesReceived), t.transferType, t.ErrTransfer)
	if t.File != nil && t.Connection.IsQuotaExceededError(t.ErrTransfer) {
		// if quota is exceeded we try to remove the partial file for uploads to local filesystem
		err = t.Fs.Remove(t.File.Name(), false)
		if err == nil {
			numFiles--
			atomic.StoreInt64(&t.BytesReceived, 0)
			t.MinWriteOffset = 0
		}
		t.Connection.Log(logger.LevelWarn, "upload denied due to space limit, delete temporary file: %#v, deletion error: %v",
			t.File.Name(), err)
	} else if t.transferType == TransferUpload && t.effectiveFsPath != t.fsPath {
		if t.ErrTransfer == nil || Config.UploadMode == UploadModeAtomicWithResume {
			err = t.Fs.Rename(t.effectiveFsPath, t.fsPath)
			t.Connection.Log(logger.LevelDebug, "atomic upload completed, rename: %#v -> %#v, error: %v",
				t.effectiveFsPath, t.fsPath, err)
		} else {
			err = t.Fs.Remove(t.effectiveFsPath, false)
			t.Connection.Log(logger.LevelWarn, "atomic upload completed with error: \"%v\", delete temporary file: %#v, "+
				"deletion error: %v", t.ErrTransfer, t.effectiveFsPath, err)
			if err == nil {
				numFiles--
				atomic.StoreInt64(&t.BytesReceived, 0)
				t.MinWriteOffset = 0
			}
		}
	}
	elapsed := time.Since(t.start).Nanoseconds() / 1000000
	if t.transferType == TransferDownload {
		logger.TransferLog(downloadLogSender, t.fsPath, elapsed, atomic.LoadInt64(&t.BytesSent), t.Connection.User.Username,
			t.Connection.ID, t.Connection.protocol, t.Connection.localAddr, t.Connection.remoteAddr, t.ftpMode)
		ExecuteActionNotification(t.Connection, operationDownload, t.fsPath, t.requestPath, "", "", "",
			atomic.LoadInt64(&t.BytesSent), t.ErrTransfer)
	} else {
		fileSize := atomic.LoadInt64(&t.BytesReceived) + t.MinWriteOffset
		if statSize, err := t.getUploadFileSize(); err == nil {
			fileSize = statSize
		}
		t.Connection.Log(logger.LevelDebug, "uploaded file size %v", fileSize)
		t.updateQuota(numFiles, fileSize)
		t.updateTimes()
		logger.TransferLog(uploadLogSender, t.fsPath, elapsed, atomic.LoadInt64(&t.BytesReceived), t.Connection.User.Username,
			t.Connection.ID, t.Connection.protocol, t.Connection.localAddr, t.Connection.remoteAddr, t.ftpMode)
		ExecuteActionNotification(t.Connection, operationUpload, t.fsPath, t.requestPath, "", "", "", fileSize, t.ErrTransfer)
	}
	if t.ErrTransfer != nil {
		t.Connection.Log(logger.LevelError, "transfer error: %v, path: %#v", t.ErrTransfer, t.fsPath)
		if err == nil {
			err = t.ErrTransfer
		}
	}
	return err
}

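// To summarize the atomic-upload branch in Close above: when effectiveFsPath
// differs from fsPath the data was written to a temporary file; on success
// (or in UploadModeAtomicWithResume) the temporary file is renamed onto
// fsPath, otherwise it is removed, the received byte counter is reset and
// the file is not counted as a new one for quota purposes.
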
func (t *BaseTransfer) updateTimes() {
	if !t.aTime.IsZero() && !t.mTime.IsZero() {
		err := t.Fs.Chtimes(t.fsPath, t.aTime, t.mTime, true)
		t.Connection.Log(logger.LevelDebug, "set times for file %#v, atime: %v, mtime: %v, err: %v",
			t.fsPath, t.aTime, t.mTime, err)
	}
}

func (t *BaseTransfer) updateQuota(numFiles int, fileSize int64) bool {
	// S3 uploads are atomic, if there is an error nothing is uploaded
	if t.File == nil && t.ErrTransfer != nil && !t.Connection.User.HasBufferedSFTP(t.GetVirtualPath()) {
		return false
	}
	sizeDiff := fileSize - t.InitialSize
	if t.transferType == TransferUpload && (numFiles != 0 || sizeDiff > 0) {
		vfolder, err := t.Connection.User.GetVirtualFolderForPath(path.Dir(t.requestPath))
		if err == nil {
			dataprovider.UpdateVirtualFolderQuota(&vfolder.BaseVirtualFolder, numFiles, //nolint:errcheck
				sizeDiff, false)
			if vfolder.IsIncludedInUserQuota() {
				dataprovider.UpdateUserQuota(&t.Connection.User, numFiles, sizeDiff, false) //nolint:errcheck
			}
		} else {
			dataprovider.UpdateUserQuota(&t.Connection.User, numFiles, sizeDiff, false) //nolint:errcheck
		}
		return true
	}
	return false
}

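// For example, a brand new 100 byte upload gives updateQuota numFiles=1 and
// sizeDiff=100-0=100: if the target path is inside a virtual folder the
// folder quota is updated first and the user quota is updated too only when
// the folder is included in the user quota, otherwise only the user quota
// is updated.
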
// HandleThrottle manages bandwidth throttling
func (t *BaseTransfer) HandleThrottle() {
	var wantedBandwidth int64
	var transferredBytes int64
	if t.transferType == TransferDownload {
		wantedBandwidth = t.Connection.User.DownloadBandwidth
		transferredBytes = atomic.LoadInt64(&t.BytesSent)
	} else {
		wantedBandwidth = t.Connection.User.UploadBandwidth
		transferredBytes = atomic.LoadInt64(&t.BytesReceived)
	}
	if wantedBandwidth > 0 {
		// real and wanted elapsed as milliseconds, bytes as kilobytes
		realElapsed := time.Since(t.start).Nanoseconds() / 1000000
		// transferredBytes / 1024 gives KB, we multiply by 1000 to get the wanted elapsed milliseconds
		wantedElapsed := 1000 * (transferredBytes / 1024) / wantedBandwidth
		if wantedElapsed > realElapsed {
			toSleep := time.Duration(wantedElapsed - realElapsed)
			time.Sleep(toSleep * time.Millisecond)
		}
	}
}
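
// exampleThrottleDelay is a minimal sketch of the arithmetic HandleThrottle
// uses above: with 131072 bytes moved and a 50 KB/s limit, wantedElapsed is
// 1000*(131072/1024)/50 = 2560 ms, so after 1000 real ms the transfer still
// sleeps for the remaining 1560 ms.
func exampleThrottleDelay(transferredBytes, wantedBandwidth, realElapsed int64) time.Duration {
	wantedElapsed := 1000 * (transferredBytes / 1024) / wantedBandwidth
	if wantedElapsed > realElapsed {
		return time.Duration(wantedElapsed-realElapsed) * time.Millisecond
	}
	return 0
}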
299 common/transfer_test.go Normal file
@@ -0,0 +1,299 @@
package common

import (
	"errors"
	"os"
	"path/filepath"
	"testing"
	"time"

	"github.com/sftpgo/sdk"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"

	"github.com/drakkan/sftpgo/v2/dataprovider"
	"github.com/drakkan/sftpgo/v2/kms"
	"github.com/drakkan/sftpgo/v2/vfs"
)

func TestTransferUpdateQuota(t *testing.T) {
	conn := NewBaseConnection("", ProtocolSFTP, "", "", dataprovider.User{})
	transfer := BaseTransfer{
		Connection:    conn,
		transferType:  TransferUpload,
		BytesReceived: 123,
		Fs:            vfs.NewOsFs("", os.TempDir(), ""),
	}
	errFake := errors.New("fake error")
	transfer.TransferError(errFake)
	assert.False(t, transfer.updateQuota(1, 0))
	err := transfer.Close()
	if assert.Error(t, err) {
		assert.EqualError(t, err, errFake.Error())
	}
	mappedPath := filepath.Join(os.TempDir(), "vdir")
	vdirPath := "/vdir"
	conn.User.VirtualFolders = append(conn.User.VirtualFolders, vfs.VirtualFolder{
		BaseVirtualFolder: vfs.BaseVirtualFolder{
			MappedPath: mappedPath,
		},
		VirtualPath: vdirPath,
		QuotaFiles:  -1,
		QuotaSize:   -1,
	})
	transfer.ErrTransfer = nil
	transfer.BytesReceived = 1
	transfer.requestPath = "/vdir/file"
	assert.True(t, transfer.updateQuota(1, 0))
	err = transfer.Close()
	assert.NoError(t, err)
}

func TestTransferThrottling(t *testing.T) {
	u := dataprovider.User{
		BaseUser: sdk.BaseUser{
			Username:          "test",
			UploadBandwidth:   50,
			DownloadBandwidth: 40,
		},
	}
	fs := vfs.NewOsFs("", os.TempDir(), "")
	testFileSize := int64(131072)
	wantedUploadElapsed := 1000 * (testFileSize / 1024) / u.UploadBandwidth
	wantedDownloadElapsed := 1000 * (testFileSize / 1024) / u.DownloadBandwidth
	// some tolerance
	wantedUploadElapsed -= wantedDownloadElapsed / 10
	wantedDownloadElapsed -= wantedDownloadElapsed / 10
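	// with testFileSize = 131072 bytes the expectations are
	// 1000*(131072/1024)/50 = 2560 ms for the upload and
	// 1000*(131072/1024)/40 = 3200 ms for the download; the 10% tolerance
	// above lowers them to 2240 ms and 2880 ms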
conn := NewBaseConnection("id", ProtocolSCP, "", "", u)
|
||||
transfer := NewBaseTransfer(nil, conn, nil, "", "", "", TransferUpload, 0, 0, 0, true, fs)
|
||||
transfer.BytesReceived = testFileSize
|
||||
transfer.Connection.UpdateLastActivity()
|
||||
startTime := transfer.Connection.GetLastActivity()
|
||||
transfer.HandleThrottle()
|
||||
elapsed := time.Since(startTime).Nanoseconds() / 1000000
|
||||
assert.GreaterOrEqual(t, elapsed, wantedUploadElapsed, "upload bandwidth throttling not respected")
|
||||
err := transfer.Close()
|
||||
assert.NoError(t, err)
|
||||
|
||||
transfer = NewBaseTransfer(nil, conn, nil, "", "", "", TransferDownload, 0, 0, 0, true, fs)
|
||||
transfer.BytesSent = testFileSize
|
||||
transfer.Connection.UpdateLastActivity()
|
||||
startTime = transfer.Connection.GetLastActivity()
|
||||
|
||||
transfer.HandleThrottle()
|
||||
elapsed = time.Since(startTime).Nanoseconds() / 1000000
|
||||
assert.GreaterOrEqual(t, elapsed, wantedDownloadElapsed, "download bandwidth throttling not respected")
|
||||
err = transfer.Close()
|
||||
assert.NoError(t, err)
|
||||
}
|
||||
|
||||
func TestRealPath(t *testing.T) {
|
||||
testFile := filepath.Join(os.TempDir(), "afile.txt")
|
||||
fs := vfs.NewOsFs("123", os.TempDir(), "")
|
||||
u := dataprovider.User{
|
||||
BaseUser: sdk.BaseUser{
|
||||
Username: "user",
|
||||
HomeDir: os.TempDir(),
|
||||
},
|
||||
}
|
||||
u.Permissions = make(map[string][]string)
|
||||
u.Permissions["/"] = []string{dataprovider.PermAny}
|
||||
file, err := os.Create(testFile)
|
||||
require.NoError(t, err)
|
||||
conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, "", "", u)
|
||||
transfer := NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
|
||||
rPath := transfer.GetRealFsPath(testFile)
|
||||
assert.Equal(t, testFile, rPath)
|
||||
rPath = conn.getRealFsPath(testFile)
|
||||
assert.Equal(t, testFile, rPath)
|
||||
err = transfer.Close()
|
||||
assert.NoError(t, err)
|
||||
err = file.Close()
|
||||
assert.NoError(t, err)
|
||||
transfer.File = nil
|
||||
rPath = transfer.GetRealFsPath(testFile)
|
||||
assert.Equal(t, testFile, rPath)
|
||||
rPath = transfer.GetRealFsPath("")
|
||||
assert.Empty(t, rPath)
|
||||
err = os.Remove(testFile)
|
||||
assert.NoError(t, err)
|
||||
assert.Len(t, conn.GetTransfers(), 0)
|
||||
}
|
||||
|
||||
func TestTruncate(t *testing.T) {
|
||||
testFile := filepath.Join(os.TempDir(), "transfer_test_file")
|
||||
fs := vfs.NewOsFs("123", os.TempDir(), "")
|
||||
u := dataprovider.User{
|
||||
BaseUser: sdk.BaseUser{
|
||||
Username: "user",
|
||||
HomeDir: os.TempDir(),
|
||||
},
|
||||
}
|
||||
u.Permissions = make(map[string][]string)
|
||||
u.Permissions["/"] = []string{dataprovider.PermAny}
|
||||
file, err := os.Create(testFile)
|
||||
if !assert.NoError(t, err) {
|
||||
assert.FailNow(t, "unable to open test file")
|
||||
}
|
||||
_, err = file.Write([]byte("hello"))
|
||||
assert.NoError(t, err)
|
||||
conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, "", "", u)
|
||||
transfer := NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 5, 100, false, fs)
|
||||
|
||||
err = conn.SetStat("/transfer_test_file", &StatAttributes{
|
||||
Size: 2,
|
||||
Flags: StatAttrSize,
|
||||
})
|
||||
assert.NoError(t, err)
|
||||
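	// the file initially contains 5 bytes ("hello") and MaxWriteSize is 100:
	// truncating to 2 bytes frees 3 bytes, so MaxWriteSize becomes 100+(5-2)=103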
	assert.Equal(t, int64(103), transfer.MaxWriteSize)
	err = transfer.Close()
	assert.NoError(t, err)
	err = file.Close()
	assert.NoError(t, err)
	fi, err := os.Stat(testFile)
	if assert.NoError(t, err) {
		assert.Equal(t, int64(2), fi.Size())
	}

	transfer = NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 0, 100, true, fs)
	// file.Stat will fail on a closed file
	err = conn.SetStat("/transfer_test_file", &StatAttributes{
		Size:  2,
		Flags: StatAttrSize,
	})
	assert.Error(t, err)
	err = transfer.Close()
	assert.NoError(t, err)

	transfer = NewBaseTransfer(nil, conn, nil, testFile, testFile, "", TransferUpload, 0, 0, 0, true, fs)
	_, err = transfer.Truncate("mismatch", 0)
	assert.EqualError(t, err, errTransferMismatch.Error())
	_, err = transfer.Truncate(testFile, 0)
	assert.NoError(t, err)
	_, err = transfer.Truncate(testFile, 1)
	assert.EqualError(t, err, vfs.ErrVfsUnsupported.Error())

	err = transfer.Close()
	assert.NoError(t, err)

	err = os.Remove(testFile)
	assert.NoError(t, err)

	assert.Len(t, conn.GetTransfers(), 0)
}

func TestTransferErrors(t *testing.T) {
	isCancelled := false
	cancelFn := func() {
		isCancelled = true
	}
	testFile := filepath.Join(os.TempDir(), "transfer_test_file")
	fs := vfs.NewOsFs("id", os.TempDir(), "")
	u := dataprovider.User{
		BaseUser: sdk.BaseUser{
			Username: "test",
			HomeDir:  os.TempDir(),
		},
	}
	err := os.WriteFile(testFile, []byte("test data"), os.ModePerm)
	assert.NoError(t, err)
	file, err := os.Open(testFile)
	if !assert.NoError(t, err) {
		assert.FailNow(t, "unable to open test file")
	}
	conn := NewBaseConnection("id", ProtocolSFTP, "", "", u)
	transfer := NewBaseTransfer(file, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
	assert.Nil(t, transfer.cancelFn)
	assert.Equal(t, testFile, transfer.GetFsPath())
	transfer.SetCancelFn(cancelFn)
	errFake := errors.New("err fake")
	transfer.BytesReceived = 9
	transfer.TransferError(ErrQuotaExceeded)
	assert.True(t, isCancelled)
	transfer.TransferError(errFake)
	assert.Error(t, transfer.ErrTransfer, ErrQuotaExceeded.Error())
	// the file is closed by the embedding struct before calling Close
	err = file.Close()
	assert.NoError(t, err)
	err = transfer.Close()
	if assert.Error(t, err) {
		assert.Error(t, err, ErrQuotaExceeded.Error())
	}
	assert.NoFileExists(t, testFile)

	err = os.WriteFile(testFile, []byte("test data"), os.ModePerm)
	assert.NoError(t, err)
	file, err = os.Open(testFile)
	if !assert.NoError(t, err) {
		assert.FailNow(t, "unable to open test file")
	}
	fsPath := filepath.Join(os.TempDir(), "test_file")
	transfer = NewBaseTransfer(file, conn, nil, fsPath, file.Name(), "/test_file", TransferUpload, 0, 0, 0, true, fs)
	transfer.BytesReceived = 9
	transfer.TransferError(errFake)
	assert.Error(t, transfer.ErrTransfer, errFake.Error())
	// the file is closed by the embedding struct before calling Close
	err = file.Close()
	assert.NoError(t, err)
	err = transfer.Close()
	if assert.Error(t, err) {
		assert.Error(t, err, errFake.Error())
	}
	assert.NoFileExists(t, testFile)

	err = os.WriteFile(testFile, []byte("test data"), os.ModePerm)
	assert.NoError(t, err)
	file, err = os.Open(testFile)
	if !assert.NoError(t, err) {
		assert.FailNow(t, "unable to open test file")
	}
	transfer = NewBaseTransfer(file, conn, nil, fsPath, file.Name(), "/test_file", TransferUpload, 0, 0, 0, true, fs)
	transfer.BytesReceived = 9
	// the file is closed by the embedding struct before calling Close
	err = file.Close()
	assert.NoError(t, err)
	err = transfer.Close()
	assert.NoError(t, err)
	assert.NoFileExists(t, testFile)
	assert.FileExists(t, fsPath)
	err = os.Remove(fsPath)
	assert.NoError(t, err)

	assert.Len(t, conn.GetTransfers(), 0)
}

func TestRemovePartialCryptoFile(t *testing.T) {
	testFile := filepath.Join(os.TempDir(), "transfer_test_file")
	fs, err := vfs.NewCryptFs("id", os.TempDir(), "", vfs.CryptFsConfig{Passphrase: kms.NewPlainSecret("secret")})
	require.NoError(t, err)
	u := dataprovider.User{
		BaseUser: sdk.BaseUser{
			Username: "test",
			HomeDir:  os.TempDir(),
		},
	}
	conn := NewBaseConnection(fs.ConnectionID(), ProtocolSFTP, "", "", u)
	transfer := NewBaseTransfer(nil, conn, nil, testFile, testFile, "/transfer_test_file", TransferUpload, 0, 0, 0, true, fs)
	transfer.ErrTransfer = errors.New("test error")
	_, err = transfer.getUploadFileSize()
	assert.Error(t, err)
	err = os.WriteFile(testFile, []byte("test data"), os.ModePerm)
	assert.NoError(t, err)
	size, err := transfer.getUploadFileSize()
	assert.NoError(t, err)
	assert.Equal(t, int64(9), size)
	assert.NoFileExists(t, testFile)
}

func TestFTPMode(t *testing.T) {
	conn := NewBaseConnection("", ProtocolFTP, "", "", dataprovider.User{})
	transfer := BaseTransfer{
		Connection:    conn,
		transferType:  TransferUpload,
		BytesReceived: 123,
		Fs:            vfs.NewOsFs("", os.TempDir(), ""),
	}
	assert.Empty(t, transfer.ftpMode)
	transfer.SetFtpMode("active")
	assert.Equal(t, "active", transfer.ftpMode)
}
1407 config/config.go File diff suppressed because it is too large
@@ -1,3 +1,4 @@
//go:build linux
// +build linux

package config

@@ -1,7 +1,6 @@
//go:build !linux
// +build !linux

package config

func setViperAdditionalConfigPaths() {

}
func setViperAdditionalConfigPaths() {}

File diff suppressed because it is too large
118 dataprovider/actions.go Normal file
@@ -0,0 +1,118 @@
package dataprovider

import (
	"bytes"
	"context"
	"fmt"
	"net/url"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
	"time"

	"github.com/sftpgo/sdk/plugin/notifier"

	"github.com/drakkan/sftpgo/v2/httpclient"
	"github.com/drakkan/sftpgo/v2/logger"
	"github.com/drakkan/sftpgo/v2/plugin"
	"github.com/drakkan/sftpgo/v2/util"
)

const (
	// ActionExecutorSelf is used as username for self action, for example a user/admin that updates itself
	ActionExecutorSelf = "__self__"
	// ActionExecutorSystem is used as username for actions with no explicit executor associated, for example
	// adding/updating a user/admin by loading initial data
	ActionExecutorSystem = "__system__"
)

const (
	actionObjectUser   = "user"
	actionObjectAdmin  = "admin"
	actionObjectAPIKey = "api_key"
	actionObjectShare  = "share"
)

func executeAction(operation, executor, ip, objectType, objectName string, object plugin.Renderer) {
	if plugin.Handler.HasNotifiers() {
		plugin.Handler.NotifyProviderEvent(&notifier.ProviderEvent{
			Action:     operation,
			Username:   executor,
			ObjectType: objectType,
			ObjectName: objectName,
			IP:         ip,
			Timestamp:  time.Now().UnixNano(),
		}, object)
	}
	if config.Actions.Hook == "" {
		return
	}
	if !util.IsStringInSlice(operation, config.Actions.ExecuteOn) ||
		!util.IsStringInSlice(objectType, config.Actions.ExecuteFor) {
		return
	}

	go func() {
		dataAsJSON, err := object.RenderAsJSON(operation != operationDelete)
		if err != nil {
			providerLog(logger.LevelError, "unable to serialize user as JSON for operation %#v: %v", operation, err)
			return
		}
		if strings.HasPrefix(config.Actions.Hook, "http") {
			hookURL, err := url.Parse(config.Actions.Hook)
			if err != nil {
				providerLog(logger.LevelError, "Invalid http_notification_url %#v for operation %#v: %v",
					config.Actions.Hook, operation, err)
				return
			}
			q := hookURL.Query()
			q.Add("action", operation)
			q.Add("username", executor)
			q.Add("ip", ip)
			q.Add("object_type", objectType)
			q.Add("object_name", objectName)
			q.Add("timestamp", fmt.Sprintf("%v", time.Now().UnixNano()))
			hookURL.RawQuery = q.Encode()
			startTime := time.Now()
			resp, err := httpclient.RetryablePost(hookURL.String(), "application/json", bytes.NewBuffer(dataAsJSON))
			respCode := 0
			if err == nil {
				respCode = resp.StatusCode
				resp.Body.Close()
			}
			providerLog(logger.LevelDebug, "notified operation %#v to URL: %v status code: %v, elapsed: %v err: %v",
				operation, hookURL.Redacted(), respCode, time.Since(startTime), err)
		} else {
			executeNotificationCommand(operation, executor, ip, objectType, objectName, dataAsJSON) //nolint:errcheck // the error is used in test cases only
		}
	}()
}

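// A minimal sketch of an HTTP receiver for the hook above, assuming a plain
// net/http server: the event metadata travels in the query string and the
// rendered object in the JSON request body.
//
//	http.HandleFunc("/sftpgo-hook", func(w http.ResponseWriter, r *http.Request) {
//		action := r.URL.Query().Get("action")
//		objectType := r.URL.Query().Get("object_type")
//		body, _ := io.ReadAll(r.Body) // JSON rendering of the user/admin/api_key/share
//		log.Printf("action %q on %q: %s", action, objectType, body)
//		w.WriteHeader(http.StatusOK)
//	})
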
func executeNotificationCommand(operation, executor, ip, objectType, objectName string, objectAsJSON []byte) error {
	if !filepath.IsAbs(config.Actions.Hook) {
		err := fmt.Errorf("invalid notification command %#v", config.Actions.Hook)
		logger.Warn(logSender, "", "unable to execute notification command: %v", err)
		return err
	}

	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()

	cmd := exec.CommandContext(ctx, config.Actions.Hook)
	cmd.Env = append(os.Environ(),
		fmt.Sprintf("SFTPGO_PROVIDER_ACTION=%v", operation),
		fmt.Sprintf("SFTPGO_PROVIDER_OBJECT_TYPE=%v", objectType),
		fmt.Sprintf("SFTPGO_PROVIDER_OBJECT_NAME=%v", objectName),
		fmt.Sprintf("SFTPGO_PROVIDER_USERNAME=%v", executor),
		fmt.Sprintf("SFTPGO_PROVIDER_IP=%v", ip),
		fmt.Sprintf("SFTPGO_PROVIDER_TIMESTAMP=%v", util.GetTimeAsMsSinceEpoch(time.Now())),
		fmt.Sprintf("SFTPGO_PROVIDER_OBJECT=%v", string(objectAsJSON)))

	startTime := time.Now()
	err := cmd.Run()
	providerLog(logger.LevelDebug, "executed command %#v, elapsed: %v, error: %v", config.Actions.Hook,
		time.Since(startTime), err)
	return err
}
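
// A minimal sketch of a notification command, assuming it is built as a
// standalone program and configured with its absolute path: the event
// details arrive in the SFTPGO_PROVIDER_* environment variables set above.
//
//	func main() {
//		action := os.Getenv("SFTPGO_PROVIDER_ACTION")
//		objectType := os.Getenv("SFTPGO_PROVIDER_OBJECT_TYPE")
//		objectName := os.Getenv("SFTPGO_PROVIDER_OBJECT_NAME")
//		fmt.Printf("%s %s %s\n", action, objectType, objectName)
//	}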
443 dataprovider/admin.go Normal file
@@ -0,0 +1,443 @@
package dataprovider

import (
	"crypto/sha256"
	"encoding/base64"
	"encoding/json"
	"errors"
	"fmt"
	"net"
	"os"
	"regexp"
	"strings"

	"github.com/alexedwards/argon2id"
	passwordvalidator "github.com/wagslane/go-password-validator"
	"golang.org/x/crypto/bcrypt"

	"github.com/drakkan/sftpgo/v2/kms"
	"github.com/drakkan/sftpgo/v2/logger"
	"github.com/drakkan/sftpgo/v2/mfa"
	"github.com/drakkan/sftpgo/v2/util"
)

// Available permissions for SFTPGo admins
const (
	PermAdminAny              = "*"
	PermAdminAddUsers         = "add_users"
	PermAdminChangeUsers      = "edit_users"
	PermAdminDeleteUsers      = "del_users"
	PermAdminViewUsers        = "view_users"
	PermAdminViewConnections  = "view_conns"
	PermAdminCloseConnections = "close_conns"
	PermAdminViewServerStatus = "view_status"
	PermAdminManageAdmins     = "manage_admins"
	PermAdminManageAPIKeys    = "manage_apikeys"
	PermAdminQuotaScans       = "quota_scans"
	PermAdminManageSystem     = "manage_system"
	PermAdminManageDefender   = "manage_defender"
	PermAdminViewDefender     = "view_defender"
	PermAdminRetentionChecks  = "retention_checks"
	PermAdminMetadataChecks   = "metadata_checks"
	PermAdminViewEvents       = "view_events"
)

var (
	emailRegex = regexp.MustCompile("^(?:(?:(?:(?:[a-zA-Z]|\\d|[!#\\$%&'\\*\\+\\-\\/=\\?\\^_`{\\|}~]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])+(?:\\.([a-zA-Z]|\\d|[!#\\$%&'\\*\\+\\-\\/=\\?\\^_`{\\|}~]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])+)*)|(?:(?:\\x22)(?:(?:(?:(?:\\x20|\\x09)*(?:\\x0d\\x0a))?(?:\\x20|\\x09)+)?(?:(?:[\\x01-\\x08\\x0b\\x0c\\x0e-\\x1f\\x7f]|\\x21|[\\x23-\\x5b]|[\\x5d-\\x7e]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(?:(?:[\\x01-\\x09\\x0b\\x0c\\x0d-\\x7f]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}]))))*(?:(?:(?:\\x20|\\x09)*(?:\\x0d\\x0a))?(\\x20|\\x09)+)?(?:\\x22))))@(?:(?:(?:[a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(?:(?:[a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])(?:[a-zA-Z]|\\d|-|\\.|~|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])*(?:[a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])))\\.)+(?:(?:[a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(?:(?:[a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])(?:[a-zA-Z]|\\d|-|\\.|~|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])*(?:[a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])))\\.?$")
	validAdminPerms = []string{PermAdminAny, PermAdminAddUsers, PermAdminChangeUsers, PermAdminDeleteUsers,
		PermAdminViewUsers, PermAdminViewConnections, PermAdminCloseConnections, PermAdminViewServerStatus,
		PermAdminManageAdmins, PermAdminManageAPIKeys, PermAdminQuotaScans, PermAdminManageSystem,
		PermAdminManageDefender, PermAdminViewDefender, PermAdminRetentionChecks, PermAdminMetadataChecks,
		PermAdminViewEvents}
)

// AdminTOTPConfig defines the time-based one time password configuration
type AdminTOTPConfig struct {
	Enabled    bool        `json:"enabled,omitempty"`
	ConfigName string      `json:"config_name,omitempty"`
	Secret     *kms.Secret `json:"secret,omitempty"`
}

func (c *AdminTOTPConfig) validate(username string) error {
	if !c.Enabled {
		c.ConfigName = ""
		c.Secret = kms.NewEmptySecret()
		return nil
	}
	if c.ConfigName == "" {
		return util.NewValidationError("totp: config name is mandatory")
	}
	if !util.IsStringInSlice(c.ConfigName, mfa.GetAvailableTOTPConfigNames()) {
		return util.NewValidationError(fmt.Sprintf("totp: config name %#v not found", c.ConfigName))
	}
	if c.Secret.IsEmpty() {
		return util.NewValidationError("totp: secret is mandatory")
	}
	if c.Secret.IsPlain() {
		c.Secret.SetAdditionalData(username)
		if err := c.Secret.Encrypt(); err != nil {
			return util.NewValidationError(fmt.Sprintf("totp: unable to encrypt secret: %v", err))
		}
	}
	return nil
}

// AdminFilters defines additional restrictions for SFTPGo admins
// TODO: rename to AdminOptions in v3
type AdminFilters struct {
	// only clients connecting from these IP/Mask are allowed.
	// IP/Mask must be in CIDR notation as defined in RFC 4632 and RFC 4291
	// for example "192.0.2.0/24" or "2001:db8::/32"
	AllowList []string `json:"allow_list,omitempty"`
	// API key auth allows impersonating this administrator with an API key
	AllowAPIKeyAuth bool `json:"allow_api_key_auth,omitempty"`
	// Time-based one time passwords configuration
	TOTPConfig AdminTOTPConfig `json:"totp_config,omitempty"`
	// Recovery codes to use if the user loses access to their second factor auth device.
	// Each code can only be used once; you should use these codes to log in and disable or
	// reset 2FA for your account
	RecoveryCodes []RecoveryCode `json:"recovery_codes,omitempty"`
}

// Admin defines a SFTPGo admin
type Admin struct {
	// Database unique identifier
	ID int64 `json:"id"`
	// 1 enabled, 0 disabled (login is not allowed)
	Status int `json:"status"`
	// Username
	Username       string       `json:"username"`
	Password       string       `json:"password,omitempty"`
	Email          string       `json:"email,omitempty"`
	Permissions    []string     `json:"permissions"`
	Filters        AdminFilters `json:"filters,omitempty"`
	Description    string       `json:"description,omitempty"`
	AdditionalInfo string       `json:"additional_info,omitempty"`
	// Creation time as unix timestamp in milliseconds. It will be 0 for admins created before v2.2.0
	CreatedAt int64 `json:"created_at"`
	// last update time as unix timestamp in milliseconds
	UpdatedAt int64 `json:"updated_at"`
	// Last login as unix timestamp in milliseconds
	LastLogin int64 `json:"last_login"`
}

// CountUnusedRecoveryCodes returns the number of unused recovery codes
func (a *Admin) CountUnusedRecoveryCodes() int {
	unused := 0
	for _, code := range a.Filters.RecoveryCodes {
		if !code.Used {
			unused++
		}
	}
	return unused
}

func (a *Admin) hashPassword() error {
	if a.Password != "" && !util.IsStringPrefixInSlice(a.Password, internalHashPwdPrefixes) {
		if config.PasswordValidation.Admins.MinEntropy > 0 {
			if err := passwordvalidator.Validate(a.Password, config.PasswordValidation.Admins.MinEntropy); err != nil {
				return util.NewValidationError(err.Error())
			}
		}
		if config.PasswordHashing.Algo == HashingAlgoBcrypt {
			pwd, err := bcrypt.GenerateFromPassword([]byte(a.Password), config.PasswordHashing.BcryptOptions.Cost)
			if err != nil {
				return err
			}
			a.Password = string(pwd)
		} else {
			pwd, err := argon2id.CreateHash(a.Password, argon2Params)
			if err != nil {
				return err
			}
			a.Password = pwd
		}
	}
	return nil
}

func (a *Admin) hasRedactedSecret() bool {
	return a.Filters.TOTPConfig.Secret.IsRedacted()
}

func (a *Admin) validateRecoveryCodes() error {
	for i := 0; i < len(a.Filters.RecoveryCodes); i++ {
		code := &a.Filters.RecoveryCodes[i]
		if code.Secret.IsEmpty() {
			return util.NewValidationError("mfa: recovery code cannot be empty")
		}
		if code.Secret.IsPlain() {
			code.Secret.SetAdditionalData(a.Username)
			if err := code.Secret.Encrypt(); err != nil {
				return util.NewValidationError(fmt.Sprintf("mfa: unable to encrypt recovery code: %v", err))
			}
		}
	}
	return nil
}

func (a *Admin) validatePermissions() error {
	a.Permissions = util.RemoveDuplicates(a.Permissions)
	if len(a.Permissions) == 0 {
		return util.NewValidationError("please grant some permissions to this admin")
	}
	if util.IsStringInSlice(PermAdminAny, a.Permissions) {
		a.Permissions = []string{PermAdminAny}
	}
	for _, perm := range a.Permissions {
		if !util.IsStringInSlice(perm, validAdminPerms) {
			return util.NewValidationError(fmt.Sprintf("invalid permission: %#v", perm))
		}
	}
	return nil
}

func (a *Admin) validate() error {
	a.SetEmptySecretsIfNil()
	if a.Username == "" {
		return util.NewValidationError("username is mandatory")
	}
	if a.Password == "" {
		return util.NewValidationError("please set a password")
	}
	if a.hasRedactedSecret() {
		return util.NewValidationError("cannot save an admin with a redacted secret")
	}
	if err := a.Filters.TOTPConfig.validate(a.Username); err != nil {
		return err
	}
	if err := a.validateRecoveryCodes(); err != nil {
		return err
	}
	if !config.SkipNaturalKeysValidation && !usernameRegex.MatchString(a.Username) {
		return util.NewValidationError(fmt.Sprintf("username %#v is not valid, the following characters are allowed: a-zA-Z0-9-_.~", a.Username))
	}
	if err := a.hashPassword(); err != nil {
		return err
	}
	if err := a.validatePermissions(); err != nil {
		return err
	}
	if a.Email != "" && !emailRegex.MatchString(a.Email) {
		return util.NewValidationError(fmt.Sprintf("email %#v is not valid", a.Email))
	}
	a.Filters.AllowList = util.RemoveDuplicates(a.Filters.AllowList)
	for _, IPMask := range a.Filters.AllowList {
		_, _, err := net.ParseCIDR(IPMask)
		if err != nil {
			return util.NewValidationError(fmt.Sprintf("could not parse allow list entry %#v : %v", IPMask, err))
		}
	}

	return nil
}

// CheckPassword verifies the admin password
func (a *Admin) CheckPassword(password string) (bool, error) {
	if strings.HasPrefix(a.Password, bcryptPwdPrefix) {
		if err := bcrypt.CompareHashAndPassword([]byte(a.Password), []byte(password)); err != nil {
			return false, ErrInvalidCredentials
		}
		return true, nil
	}
	match, err := argon2id.ComparePasswordAndHash(password, a.Password)
	if !match || err != nil {
		return false, ErrInvalidCredentials
	}
	return match, err
}

// CanLoginFromIP returns true if login from the given IP is allowed
func (a *Admin) CanLoginFromIP(ip string) bool {
	if len(a.Filters.AllowList) == 0 {
		return true
	}
	parsedIP := net.ParseIP(ip)
	if parsedIP == nil {
		return len(a.Filters.AllowList) == 0
	}

	for _, ipMask := range a.Filters.AllowList {
		_, network, err := net.ParseCIDR(ipMask)
		if err != nil {
			continue
		}
		if network.Contains(parsedIP) {
			return true
		}
	}
	return false
}

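// For example, with AllowList = []string{"192.0.2.0/24"} the check above
// accepts a login from 192.0.2.10 and rejects one from 198.51.100.1:
//
//	a := Admin{Filters: AdminFilters{AllowList: []string{"192.0.2.0/24"}}}
//	a.CanLoginFromIP("192.0.2.10")   // true
//	a.CanLoginFromIP("198.51.100.1") // false
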
// CanLogin returns an error if the login is not allowed
func (a *Admin) CanLogin(ip string) error {
	if a.Status != 1 {
		return fmt.Errorf("admin %#v is disabled", a.Username)
	}
	if !a.CanLoginFromIP(ip) {
		return fmt.Errorf("login from IP %v not allowed", ip)
	}
	return nil
}

func (a *Admin) checkUserAndPass(password, ip string) error {
	if err := a.CanLogin(ip); err != nil {
		return err
	}
	if a.Password == "" || password == "" {
		return errors.New("credentials cannot be null or empty")
	}
	match, err := a.CheckPassword(password)
	if err != nil {
		return err
	}
	if !match {
		return ErrInvalidCredentials
	}
	return nil
}

// RenderAsJSON implements the renderer interface used within plugins
func (a *Admin) RenderAsJSON(reload bool) ([]byte, error) {
	if reload {
		admin, err := provider.adminExists(a.Username)
		if err != nil {
			providerLog(logger.LevelError, "unable to reload admin before rendering as json: %v", err)
			return nil, err
		}
		admin.HideConfidentialData()
		return json.Marshal(admin)
	}
	a.HideConfidentialData()
	return json.Marshal(a)
}

// HideConfidentialData hides admin confidential data
func (a *Admin) HideConfidentialData() {
	a.Password = ""
	if a.Filters.TOTPConfig.Secret != nil {
		a.Filters.TOTPConfig.Secret.Hide()
	}
	for _, code := range a.Filters.RecoveryCodes {
		if code.Secret != nil {
			code.Secret.Hide()
		}
	}
	a.SetNilSecretsIfEmpty()
}

// SetEmptySecretsIfNil sets the secrets to empty if nil
func (a *Admin) SetEmptySecretsIfNil() {
	if a.Filters.TOTPConfig.Secret == nil {
		a.Filters.TOTPConfig.Secret = kms.NewEmptySecret()
	}
}

// SetNilSecretsIfEmpty sets the secrets to nil if empty.
// This is useful before rendering as JSON so the empty fields
// will not be serialized.
func (a *Admin) SetNilSecretsIfEmpty() {
	if a.Filters.TOTPConfig.Secret != nil && a.Filters.TOTPConfig.Secret.IsEmpty() {
		a.Filters.TOTPConfig.Secret = nil
	}
}

// HasPermission returns true if the admin has the specified permission
func (a *Admin) HasPermission(perm string) bool {
	if util.IsStringInSlice(PermAdminAny, a.Permissions) {
		return true
	}
	return util.IsStringInSlice(perm, a.Permissions)
}

// GetPermissionsAsString returns the admin permissions as a comma separated string
func (a *Admin) GetPermissionsAsString() string {
	return strings.Join(a.Permissions, ", ")
}

// GetAllowedIPAsString returns the allowed IP as comma separated string
func (a *Admin) GetAllowedIPAsString() string {
	return strings.Join(a.Filters.AllowList, ",")
}

// GetValidPerms returns the allowed admin permissions
func (a *Admin) GetValidPerms() []string {
	return validAdminPerms
}

// GetInfoString returns admin's info as string.
func (a *Admin) GetInfoString() string {
	var result strings.Builder
	if a.Email != "" {
		result.WriteString(fmt.Sprintf("Email: %v. ", a.Email))
	}
	if len(a.Filters.AllowList) > 0 {
		result.WriteString(fmt.Sprintf("Allowed IP/Mask: %v. ", len(a.Filters.AllowList)))
	}
	return result.String()
}

// CanManageMFA returns true if the admin can add a multi-factor authentication configuration
func (a *Admin) CanManageMFA() bool {
	return len(mfa.GetAvailableTOTPConfigs()) > 0
}

// GetSignature returns a signature for this admin.
// It could change after an update
func (a *Admin) GetSignature() string {
	data := []byte(a.Username)
	data = append(data, []byte(a.Password)...)
	signature := sha256.Sum256(data)
	return base64.StdEncoding.EncodeToString(signature[:])
}

func (a *Admin) getACopy() Admin {
	a.SetEmptySecretsIfNil()
	permissions := make([]string, len(a.Permissions))
	copy(permissions, a.Permissions)
	filters := AdminFilters{}
	filters.AllowList = make([]string, len(a.Filters.AllowList))
	filters.AllowAPIKeyAuth = a.Filters.AllowAPIKeyAuth
	filters.TOTPConfig.Enabled = a.Filters.TOTPConfig.Enabled
	filters.TOTPConfig.ConfigName = a.Filters.TOTPConfig.ConfigName
	filters.TOTPConfig.Secret = a.Filters.TOTPConfig.Secret.Clone()
	copy(filters.AllowList, a.Filters.AllowList)
	filters.RecoveryCodes = make([]RecoveryCode, 0)
	for _, code := range a.Filters.RecoveryCodes {
		if code.Secret == nil {
			code.Secret = kms.NewEmptySecret()
		}
		filters.RecoveryCodes = append(filters.RecoveryCodes, RecoveryCode{
			Secret: code.Secret.Clone(),
			Used:   code.Used,
		})
	}

	return Admin{
		ID:             a.ID,
		Status:         a.Status,
		Username:       a.Username,
		Password:       a.Password,
		Email:          a.Email,
		Permissions:    permissions,
		Filters:        filters,
		AdditionalInfo: a.AdditionalInfo,
		Description:    a.Description,
		LastLogin:      a.LastLogin,
		CreatedAt:      a.CreatedAt,
		UpdatedAt:      a.UpdatedAt,
	}
}

func (a *Admin) setFromEnv() error {
	envUsername := strings.TrimSpace(os.Getenv("SFTPGO_DEFAULT_ADMIN_USERNAME"))
	envPassword := strings.TrimSpace(os.Getenv("SFTPGO_DEFAULT_ADMIN_PASSWORD"))
	if envUsername == "" || envPassword == "" {
		return errors.New(`to create the default admin you need to set the env vars "SFTPGO_DEFAULT_ADMIN_USERNAME" and "SFTPGO_DEFAULT_ADMIN_PASSWORD"`)
	}
	a.Username = envUsername
	a.Password = envPassword
	a.Status = 1
	a.Permissions = []string{PermAdminAny}
	return nil
}
186 dataprovider/apikey.go Normal file
@@ -0,0 +1,186 @@
package dataprovider

import (
	"encoding/json"
	"fmt"
	"strings"
	"time"

	"github.com/alexedwards/argon2id"
	"golang.org/x/crypto/bcrypt"

	"github.com/drakkan/sftpgo/v2/logger"
	"github.com/drakkan/sftpgo/v2/util"
)

// APIKeyScope defines the supported API key scopes
type APIKeyScope int

// Supported API key scopes
const (
	// the API key will be used for an admin
	APIKeyScopeAdmin APIKeyScope = iota + 1
	// the API key will be used for a user
	APIKeyScopeUser
)

// APIKey defines a SFTPGo API key.
// API keys can be used as an authentication alternative to short-lived tokens
// for the REST API
type APIKey struct {
	// Database unique identifier
	ID int64 `json:"-"`
	// Unique key identifier, used for key lookups.
	// The generated key is in the format `KeyID.hash(Key)` so we can split
	// and lookup by KeyID and then verify if the key matches the recorded hash
	KeyID string `json:"id"`
	// User friendly key name
	Name string `json:"name"`
	// we store the hash of the key, this is just like a password
	Key       string      `json:"key,omitempty"`
	Scope     APIKeyScope `json:"scope"`
	CreatedAt int64       `json:"created_at"`
	UpdatedAt int64       `json:"updated_at"`
	// 0 means never used
	LastUseAt int64 `json:"last_use_at,omitempty"`
	// 0 means never expire
	ExpiresAt   int64  `json:"expires_at,omitempty"`
	Description string `json:"description,omitempty"`
	// Username associated with this API key.
	// If empty and the scope is APIKeyScopeUser the key is valid for any user
	User string `json:"user,omitempty"`
	// Admin username associated with this API key.
	// If empty and the scope is APIKeyScopeAdmin the key is valid for any admin
	Admin string `json:"admin,omitempty"`
	// these fields are for internal use
	userID   int64
	adminID  int64
	plainKey string
}

func (k *APIKey) getACopy() APIKey {
	return APIKey{
		ID:          k.ID,
		KeyID:       k.KeyID,
		Name:        k.Name,
		Key:         k.Key,
		Scope:       k.Scope,
		CreatedAt:   k.CreatedAt,
		UpdatedAt:   k.UpdatedAt,
		LastUseAt:   k.LastUseAt,
		ExpiresAt:   k.ExpiresAt,
		Description: k.Description,
		User:        k.User,
		Admin:       k.Admin,
		userID:      k.userID,
		adminID:     k.adminID,
	}
}

// RenderAsJSON implements the renderer interface used within plugins
func (k *APIKey) RenderAsJSON(reload bool) ([]byte, error) {
	if reload {
		apiKey, err := provider.apiKeyExists(k.KeyID)
		if err != nil {
			providerLog(logger.LevelError, "unable to reload api key before rendering as json: %v", err)
			return nil, err
		}
		apiKey.HideConfidentialData()
		return json.Marshal(apiKey)
	}
	k.HideConfidentialData()
	return json.Marshal(k)
}

// HideConfidentialData hides API key confidential data
func (k *APIKey) HideConfidentialData() {
	k.Key = ""
}

func (k *APIKey) hashKey() error {
	if k.Key != "" && !util.IsStringPrefixInSlice(k.Key, internalHashPwdPrefixes) {
		if config.PasswordHashing.Algo == HashingAlgoBcrypt {
			hashed, err := bcrypt.GenerateFromPassword([]byte(k.Key), config.PasswordHashing.BcryptOptions.Cost)
			if err != nil {
				return err
			}
			k.Key = string(hashed)
		} else {
			hashed, err := argon2id.CreateHash(k.Key, argon2Params)
			if err != nil {
				return err
			}
			k.Key = hashed
		}
	}
	return nil
}

func (k *APIKey) generateKey() {
	if k.KeyID != "" || k.Key != "" {
		return
	}
	k.KeyID = util.GenerateUniqueID()
	k.Key = util.GenerateUniqueID()
	k.plainKey = k.Key
}

// DisplayKey returns the key to show to the user
func (k *APIKey) DisplayKey() string {
	return fmt.Sprintf("%v.%v", k.KeyID, k.plainKey)
}

func (k *APIKey) validate() error {
|
||||
if k.Name == "" {
|
||||
return util.NewValidationError("name is mandatory")
|
||||
}
|
||||
if k.Scope != APIKeyScopeAdmin && k.Scope != APIKeyScopeUser {
|
||||
return util.NewValidationError(fmt.Sprintf("invalid scope: %v", k.Scope))
|
||||
}
|
||||
k.generateKey()
|
||||
if err := k.hashKey(); err != nil {
|
||||
return err
|
||||
}
|
||||
if k.User != "" && k.Admin != "" {
|
||||
return util.NewValidationError("an API key can be related to a user or an admin, not both")
|
||||
}
|
||||
if k.Scope == APIKeyScopeAdmin {
|
||||
k.User = ""
|
||||
}
|
||||
if k.Scope == APIKeyScopeUser {
|
||||
k.Admin = ""
|
||||
}
|
||||
if k.User != "" {
|
||||
_, err := provider.userExists(k.User)
|
||||
if err != nil {
|
||||
return util.NewValidationError(fmt.Sprintf("unable to check API key user %v: %v", k.User, err))
|
||||
}
|
||||
}
|
||||
if k.Admin != "" {
|
||||
_, err := provider.adminExists(k.Admin)
|
||||
if err != nil {
|
||||
return util.NewValidationError(fmt.Sprintf("unable to check API key admin %v: %v", k.Admin, err))
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Authenticate tries to authenticate the provided plain key
|
||||
func (k *APIKey) Authenticate(plainKey string) error {
|
||||
if k.ExpiresAt > 0 && k.ExpiresAt < util.GetTimeAsMsSinceEpoch(time.Now()) {
|
||||
return fmt.Errorf("API key %#v is expired, expiration timestamp: %v current timestamp: %v", k.KeyID,
|
||||
k.ExpiresAt, util.GetTimeAsMsSinceEpoch(time.Now()))
|
||||
}
|
||||
if strings.HasPrefix(k.Key, bcryptPwdPrefix) {
|
||||
if err := bcrypt.CompareHashAndPassword([]byte(k.Key), []byte(plainKey)); err != nil {
|
||||
return ErrInvalidCredentials
|
||||
}
|
||||
} else if strings.HasPrefix(k.Key, argonPwdPrefix) {
|
||||
match, err := argon2id.ComparePasswordAndHash(plainKey, k.Key)
|
||||
if err != nil || !match {
|
||||
return ErrInvalidCredentials
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
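
The full key is shown to the user once, in the `KeyID.plainKey` form produced by DisplayKey; only the hash of the plain part is persisted. A minimal sketch, not part of this diff, of how a caller in this package could split a presented key and verify it; the lookup callback is a stand-in for provider.apiKeyExists:

// Hypothetical helper, shown for illustration only.
func authenticatePresentedKey(presented string, lookup func(keyID string) (APIKey, error)) error {
	parts := strings.SplitN(presented, ".", 2) // "KeyID.plainKey"
	if len(parts) != 2 {
		return ErrInvalidCredentials
	}
	apiKey, err := lookup(parts[0]) // find the record by KeyID
	if err != nil {
		return ErrInvalidCredentials
	}
	// compare the plain part against the stored bcrypt/argon2id hash
	return apiKey.Authenticate(parts[1])
}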

1743
dataprovider/bolt.go
File diff suppressed because it is too large
18
dataprovider/bolt_disabled.go
Normal file
@@ -0,0 +1,18 @@
//go:build nobolt
// +build nobolt

package dataprovider

import (
	"errors"

	"github.com/drakkan/sftpgo/v2/version"
)

func init() {
	version.AddFeature("-bolt")
}

func initializeBoltProvider(basePath string) error {
	return errors.New("bolt disabled at build time")
}
62
dataprovider/cachedpassword.go
Normal file
@@ -0,0 +1,62 @@
package dataprovider

import (
	"sync"
)

var cachedPasswords passwordsCache

func init() {
	cachedPasswords = passwordsCache{
		cache: make(map[string]string),
	}
}

type passwordsCache struct {
	sync.RWMutex
	cache map[string]string
}

func (c *passwordsCache) Add(username, password string) {
	if !config.PasswordCaching || username == "" || password == "" {
		return
	}

	c.Lock()
	defer c.Unlock()

	c.cache[username] = password
}

func (c *passwordsCache) Remove(username string) {
	if !config.PasswordCaching {
		return
	}

	c.Lock()
	defer c.Unlock()

	delete(c.cache, username)
}

// Check returns whether the user is found and whether the password matches
func (c *passwordsCache) Check(username, password string) (bool, bool) {
	if username == "" || password == "" {
		return false, false
	}

	c.RLock()
	defer c.RUnlock()

	pwd, ok := c.cache[username]
	if !ok {
		return false, false
	}

	return true, pwd == password
}

// CheckCachedPassword is a utility method used only in test cases
func CheckCachedPassword(username, password string) (bool, bool) {
	return cachedPasswords.Check(username, password)
}
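
Check deliberately returns two booleans (found, match) so callers can distinguish a cache miss from a stale password. A hypothetical usage sketch, assuming only the exported CheckCachedPassword helper above; the surrounding function name is illustrative:

// cachedAuth classifies the outcome of a cached-password lookup.
func cachedAuth(username, password string) string {
	found, match := dataprovider.CheckCachedPassword(username, password)
	switch {
	case !found:
		return "cache miss: run full authentication"
	case match:
		return "cache hit: credentials confirmed"
	default:
		return "stale entry: re-authenticate and refresh the cache"
	}
}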

149
dataprovider/cacheduser.go
Normal file
@@ -0,0 +1,149 @@
package dataprovider

import (
	"sync"
	"time"

	"golang.org/x/net/webdav"

	"github.com/drakkan/sftpgo/v2/logger"
	"github.com/drakkan/sftpgo/v2/util"
)

var (
	webDAVUsersCache *usersCache
)

func init() {
	webDAVUsersCache = &usersCache{
		users: map[string]CachedUser{},
	}
}

// InitializeWebDAVUserCache initializes the cache for webdav users
func InitializeWebDAVUserCache(maxSize int) {
	webDAVUsersCache = &usersCache{
		users:   map[string]CachedUser{},
		maxSize: maxSize,
	}
}

// CachedUser adds fields useful for caching to a SFTPGo user
type CachedUser struct {
	User       User
	Expiration time.Time
	Password   string
	LockSystem webdav.LockSystem
}

// IsExpired returns true if the cached user is expired
func (c *CachedUser) IsExpired() bool {
	if c.Expiration.IsZero() {
		return false
	}
	return c.Expiration.Before(time.Now())
}

type usersCache struct {
	sync.RWMutex
	users   map[string]CachedUser
	maxSize int
}

func (cache *usersCache) updateLastLogin(username string) {
	cache.Lock()
	defer cache.Unlock()

	if cachedUser, ok := cache.users[username]; ok {
		cachedUser.User.LastLogin = util.GetTimeAsMsSinceEpoch(time.Now())
		cache.users[username] = cachedUser
	}
}

// swap updates an existing cached user with the specified one,
// preserving the lock fs if possible
func (cache *usersCache) swap(user *User) {
	cache.Lock()
	defer cache.Unlock()

	if cachedUser, ok := cache.users[user.Username]; ok {
		if cachedUser.User.Password != user.Password {
			providerLog(logger.LevelDebug, "current password different from the cached one for user %#v, removing from cache",
				user.Username)
			// the password changed, the cached user is no longer valid
			delete(cache.users, user.Username)
			return
		}
		if cachedUser.User.isFsEqual(user) {
			// the updated user has the same fs as the cached one, we can preserve the lock filesystem
			providerLog(logger.LevelDebug, "current password and fs unchanged for user %#v, swapping cached one",
				user.Username)
			cachedUser.User = *user
			cache.users[user.Username] = cachedUser
		} else {
			// filesystem changed, the cached user is no longer valid
			providerLog(logger.LevelDebug, "current fs different from the cached one for user %#v, removing from cache",
				user.Username)
			delete(cache.users, user.Username)
		}
	}
}

func (cache *usersCache) add(cachedUser *CachedUser) {
	cache.Lock()
	defer cache.Unlock()

	if cache.maxSize > 0 && len(cache.users) >= cache.maxSize {
		var userToRemove string
		var expirationTime time.Time

		for k, v := range cache.users {
			if userToRemove == "" {
				userToRemove = k
				expirationTime = v.Expiration
				continue
			}
			expireTime := v.Expiration
			if !expireTime.IsZero() && expireTime.Before(expirationTime) {
				userToRemove = k
				expirationTime = expireTime
			}
		}

		delete(cache.users, userToRemove)
	}

	if cachedUser.User.Username != "" {
		cache.users[cachedUser.User.Username] = *cachedUser
	}
}

func (cache *usersCache) remove(username string) {
	cache.Lock()
	defer cache.Unlock()

	delete(cache.users, username)
}

func (cache *usersCache) get(username string) (*CachedUser, bool) {
	cache.RLock()
	defer cache.RUnlock()

	cachedUser, ok := cache.users[username]
	return &cachedUser, ok
}

// CacheWebDAVUser adds a user to the WebDAV cache
func CacheWebDAVUser(cachedUser *CachedUser) {
	webDAVUsersCache.add(cachedUser)
}

// GetCachedWebDAVUser returns a previously cached WebDAV user
func GetCachedWebDAVUser(username string) (*CachedUser, bool) {
	return webDAVUsersCache.get(username)
}

// RemoveCachedWebDAVUser removes a cached WebDAV user
func RemoveCachedWebDAVUser(username string) {
	webDAVUsersCache.remove(username)
}
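
A hedged usage sketch for the WebDAV cache above: entries carry an optional expiration (the zero time means never expire) and it is up to callers to evict expired entries on lookup. Only the exported cache API is assumed; the surrounding function names are illustrative:

// lookupUser returns a cached user, evicting it first if it has expired.
func lookupUser(username string) (*dataprovider.CachedUser, bool) {
	cachedUser, ok := dataprovider.GetCachedWebDAVUser(username)
	if !ok {
		return nil, false
	}
	if cachedUser.IsExpired() {
		// expired entries must be removed and re-fetched from the provider
		dataprovider.RemoveCachedWebDAVUser(username)
		return nil, false
	}
	return cachedUser, true
}

// cacheUser stores a user with a five minute lifetime.
func cacheUser(user dataprovider.User) {
	dataprovider.CacheWebDAVUser(&dataprovider.CachedUser{
		User:       user,
		Expiration: time.Now().Add(5 * time.Minute), // zero time would mean never expire
	})
}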

File diff suppressed because it is too large

File diff suppressed because it is too large
@@ -1,25 +1,100 @@
//go:build !nomysql
// +build !nomysql

package dataprovider

import (
	"context"
	"crypto/x509"
	"database/sql"
	"errors"
	"fmt"
	"strings"
	"time"

	"github.com/drakkan/sftpgo/logger"
	// we import go-sql-driver/mysql here to be able to disable MySQL support using a build tag
	_ "github.com/go-sql-driver/mysql"

	"github.com/drakkan/sftpgo/v2/logger"
	"github.com/drakkan/sftpgo/v2/version"
	"github.com/drakkan/sftpgo/v2/vfs"
)

const (
	mysqlUsersTableSQL = "CREATE TABLE `{{users}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
		"`username` varchar(255) NOT NULL UNIQUE, `password` varchar(255) NULL, `public_keys` longtext NULL, " +
		"`home_dir` varchar(255) NOT NULL, `uid` integer NOT NULL, `gid` integer NOT NULL, `max_sessions` integer NOT NULL, " +
		" `quota_size` bigint NOT NULL, `quota_files` integer NOT NULL, `permissions` longtext NOT NULL, " +
		"`used_quota_size` bigint NOT NULL, `used_quota_files` integer NOT NULL, `last_quota_update` bigint NOT NULL, " +
		"`upload_bandwidth` integer NOT NULL, `download_bandwidth` integer NOT NULL, `expiration_date` bigint(20) NOT NULL, " +
		"`last_login` bigint(20) NOT NULL, `status` int(11) NOT NULL, `filters` longtext DEFAULT NULL, " +
		"`filesystem` longtext DEFAULT NULL);"
	mysqlSchemaTableSQL = "CREATE TABLE `schema_version` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `version` integer NOT NULL);"
	mysqlUsersV2SQL     = "ALTER TABLE `{{users}}` ADD COLUMN `virtual_folders` longtext NULL;"
	mysqlResetSQL       = "DROP TABLE IF EXISTS `{{api_keys}}` CASCADE;" +
		"DROP TABLE IF EXISTS `{{folders_mapping}}` CASCADE;" +
		"DROP TABLE IF EXISTS `{{admins}}` CASCADE;" +
		"DROP TABLE IF EXISTS `{{folders}}` CASCADE;" +
		"DROP TABLE IF EXISTS `{{shares}}` CASCADE;" +
		"DROP TABLE IF EXISTS `{{users}}` CASCADE;" +
		"DROP TABLE IF EXISTS `{{defender_events}}` CASCADE;" +
		"DROP TABLE IF EXISTS `{{defender_hosts}}` CASCADE;" +
		"DROP TABLE IF EXISTS `{{schema_version}}` CASCADE;"
	mysqlInitialSQL = "CREATE TABLE `{{schema_version}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `version` integer NOT NULL);" +
		"CREATE TABLE `{{admins}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `username` varchar(255) NOT NULL UNIQUE, " +
		"`description` varchar(512) NULL, `password` varchar(255) NOT NULL, `email` varchar(255) NULL, `status` integer NOT NULL, " +
		"`permissions` longtext NOT NULL, `filters` longtext NULL, `additional_info` longtext NULL);" +
		"CREATE TABLE `{{folders}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `name` varchar(255) NOT NULL UNIQUE, " +
		"`description` varchar(512) NULL, `path` varchar(512) NULL, `used_quota_size` bigint NOT NULL, " +
		"`used_quota_files` integer NOT NULL, `last_quota_update` bigint NOT NULL, `filesystem` longtext NULL);" +
		"CREATE TABLE `{{users}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `username` varchar(255) NOT NULL UNIQUE, " +
		"`status` integer NOT NULL, `expiration_date` bigint NOT NULL, `description` varchar(512) NULL, `password` longtext NULL, " +
		"`public_keys` longtext NULL, `home_dir` varchar(512) NOT NULL, `uid` integer NOT NULL, `gid` integer NOT NULL, " +
		"`max_sessions` integer NOT NULL, `quota_size` bigint NOT NULL, `quota_files` integer NOT NULL, " +
		"`permissions` longtext NOT NULL, `used_quota_size` bigint NOT NULL, `used_quota_files` integer NOT NULL, " +
		"`last_quota_update` bigint NOT NULL, `upload_bandwidth` integer NOT NULL, `download_bandwidth` integer NOT NULL, " +
		"`last_login` bigint NOT NULL, `filters` longtext NULL, `filesystem` longtext NULL, `additional_info` longtext NULL);" +
		"CREATE TABLE `{{folders_mapping}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `virtual_path` varchar(512) NOT NULL, " +
		"`quota_size` bigint NOT NULL, `quota_files` integer NOT NULL, `folder_id` integer NOT NULL, `user_id` integer NOT NULL);" +
		"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `{{prefix}}unique_mapping` UNIQUE (`user_id`, `folder_id`);" +
		"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `{{prefix}}folders_mapping_folder_id_fk_folders_id` FOREIGN KEY (`folder_id`) REFERENCES `{{folders}}` (`id`) ON DELETE CASCADE;" +
		"ALTER TABLE `{{folders_mapping}}` ADD CONSTRAINT `{{prefix}}folders_mapping_user_id_fk_users_id` FOREIGN KEY (`user_id`) REFERENCES `{{users}}` (`id`) ON DELETE CASCADE;" +
		"INSERT INTO {{schema_version}} (version) VALUES (10);"
	mysqlV11SQL = "CREATE TABLE `{{api_keys}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `name` varchar(255) NOT NULL, `key_id` varchar(50) NOT NULL UNIQUE," +
		"`api_key` varchar(255) NOT NULL UNIQUE, `scope` integer NOT NULL, `created_at` bigint NOT NULL, `updated_at` bigint NOT NULL, `last_use_at` bigint NOT NULL, " +
		"`expires_at` bigint NOT NULL, `description` longtext NULL, `admin_id` integer NULL, `user_id` integer NULL);" +
		"ALTER TABLE `{{api_keys}}` ADD CONSTRAINT `{{prefix}}api_keys_admin_id_fk_admins_id` FOREIGN KEY (`admin_id`) REFERENCES `{{admins}}` (`id`) ON DELETE CASCADE;" +
		"ALTER TABLE `{{api_keys}}` ADD CONSTRAINT `{{prefix}}api_keys_user_id_fk_users_id` FOREIGN KEY (`user_id`) REFERENCES `{{users}}` (`id`) ON DELETE CASCADE;"
	mysqlV11DownSQL = "DROP TABLE `{{api_keys}}` CASCADE;"
	mysqlV12SQL     = "ALTER TABLE `{{admins}}` ADD COLUMN `created_at` bigint DEFAULT 0 NOT NULL;" +
		"ALTER TABLE `{{admins}}` ALTER COLUMN `created_at` DROP DEFAULT;" +
		"ALTER TABLE `{{admins}}` ADD COLUMN `updated_at` bigint DEFAULT 0 NOT NULL;" +
		"ALTER TABLE `{{admins}}` ALTER COLUMN `updated_at` DROP DEFAULT;" +
		"ALTER TABLE `{{admins}}` ADD COLUMN `last_login` bigint DEFAULT 0 NOT NULL;" +
		"ALTER TABLE `{{admins}}` ALTER COLUMN `last_login` DROP DEFAULT;" +
		"ALTER TABLE `{{users}}` ADD COLUMN `created_at` bigint DEFAULT 0 NOT NULL;" +
		"ALTER TABLE `{{users}}` ALTER COLUMN `created_at` DROP DEFAULT;" +
		"ALTER TABLE `{{users}}` ADD COLUMN `updated_at` bigint DEFAULT 0 NOT NULL;" +
		"ALTER TABLE `{{users}}` ALTER COLUMN `updated_at` DROP DEFAULT;" +
		"CREATE INDEX `{{prefix}}users_updated_at_idx` ON `{{users}}` (`updated_at`);"
	mysqlV12DownSQL = "ALTER TABLE `{{admins}}` DROP COLUMN `updated_at`;" +
		"ALTER TABLE `{{admins}}` DROP COLUMN `created_at`;" +
		"ALTER TABLE `{{admins}}` DROP COLUMN `last_login`;" +
		"ALTER TABLE `{{users}}` DROP COLUMN `created_at`;" +
		"ALTER TABLE `{{users}}` DROP COLUMN `updated_at`;"

	mysqlV13SQL     = "ALTER TABLE `{{users}}` ADD COLUMN `email` varchar(255) NULL;"
	mysqlV13DownSQL = "ALTER TABLE `{{users}}` DROP COLUMN `email`;"
	mysqlV14SQL     = "CREATE TABLE `{{shares}}` (`id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
		"`share_id` varchar(60) NOT NULL UNIQUE, `name` varchar(255) NOT NULL, `description` varchar(512) NULL, " +
		"`scope` integer NOT NULL, `paths` longtext NOT NULL, `created_at` bigint NOT NULL, " +
		"`updated_at` bigint NOT NULL, `last_use_at` bigint NOT NULL, `expires_at` bigint NOT NULL, " +
		"`password` longtext NULL, `max_tokens` integer NOT NULL, `used_tokens` integer NOT NULL, " +
		"`allow_from` longtext NULL, `user_id` integer NOT NULL);" +
		"ALTER TABLE `{{shares}}` ADD CONSTRAINT `{{prefix}}shares_user_id_fk_users_id` " +
		"FOREIGN KEY (`user_id`) REFERENCES `{{users}}` (`id`) ON DELETE CASCADE;"
	mysqlV14DownSQL = "DROP TABLE `{{shares}}` CASCADE;"
	mysqlV15SQL     = "CREATE TABLE `{{defender_hosts}}` (`id` bigint AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
		"`ip` varchar(50) NOT NULL UNIQUE, `ban_time` bigint NOT NULL, `updated_at` bigint NOT NULL);" +
		"CREATE TABLE `{{defender_events}}` (`id` bigint AUTO_INCREMENT NOT NULL PRIMARY KEY, " +
		"`date_time` bigint NOT NULL, `score` integer NOT NULL, `host_id` bigint NOT NULL);" +
		"ALTER TABLE `{{defender_events}}` ADD CONSTRAINT `{{prefix}}defender_events_host_id_fk_defender_hosts_id` " +
		"FOREIGN KEY (`host_id`) REFERENCES `{{defender_hosts}}` (`id`) ON DELETE CASCADE;" +
		"CREATE INDEX `{{prefix}}defender_hosts_updated_at_idx` ON `{{defender_hosts}}` (`updated_at`);" +
		"CREATE INDEX `{{prefix}}defender_hosts_ban_time_idx` ON `{{defender_hosts}}` (`ban_time`);" +
		"CREATE INDEX `{{prefix}}defender_events_date_time_idx` ON `{{defender_events}}` (`date_time`);"
	mysqlV15DownSQL = "DROP TABLE `{{defender_events}}` CASCADE;" +
		"DROP TABLE `{{defender_hosts}}` CASCADE;"
)

// MySQLProvider auth provider for MySQL/MariaDB database
@@ -27,30 +102,39 @@ type MySQLProvider struct {
	dbHandle *sql.DB
}

func init() {
	version.AddFeature("+mysql")
}

func initializeMySQLProvider() error {
	var err error
	logSender = MySQLDataProviderName

	dbHandle, err := sql.Open("mysql", getMySQLConnectionString(false))
	if err == nil {
		providerLog(logger.LevelDebug, "mysql database handle created, connection string: %#v, pool size: %v",
			getMySQLConnectionString(true), config.PoolSize)
		dbHandle.SetMaxOpenConns(config.PoolSize)
		dbHandle.SetConnMaxLifetime(1800 * time.Second)
		provider = MySQLProvider{dbHandle: dbHandle}
		if config.PoolSize > 0 {
			dbHandle.SetMaxIdleConns(config.PoolSize)
		} else {
			dbHandle.SetMaxIdleConns(2)
		}
		dbHandle.SetConnMaxLifetime(240 * time.Second)
		provider = &MySQLProvider{dbHandle: dbHandle}
	} else {
		providerLog(logger.LevelWarn, "error creating mysql database handler, connection string: %#v, error: %v",
		providerLog(logger.LevelError, "error creating mysql database handler, connection string: %#v, error: %v",
			getMySQLConnectionString(true), err)
	}
	return err
}
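
The new initializer tunes the connection pool: idle connections now track the configured pool size (falling back to the database/sql default of two) and the connection lifetime drops from 1800 to 240 seconds. A standalone sketch of the same tuning using only the standard database/sql API; openPool is an illustrative name, not part of this diff:

// openPool caps open and idle connections at poolSize and recycles
// connections every four minutes, mirroring the hunk above.
func openPool(dsn string, poolSize int) (*sql.DB, error) {
	db, err := sql.Open("mysql", dsn)
	if err != nil {
		return nil, err
	}
	db.SetMaxOpenConns(poolSize)
	if poolSize > 0 {
		db.SetMaxIdleConns(poolSize)
	} else {
		db.SetMaxIdleConns(2) // database/sql default
	}
	db.SetConnMaxLifetime(240 * time.Second)
	return db, nil
}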
func getMySQLConnectionString(redactedPwd bool) string {
	var connectionString string
	if len(config.ConnectionString) == 0 {
	if config.ConnectionString == "" {
		password := config.Password
		if redactedPwd {
			password = "[redacted]"
		}
		connectionString = fmt.Sprintf("%v:%v@tcp([%v]:%v)/%v?charset=utf8&interpolateParams=true&timeout=10s&tls=%v&writeTimeout=10s&readTimeout=10s",
		connectionString = fmt.Sprintf("%v:%v@tcp([%v]:%v)/%v?charset=utf8mb4&interpolateParams=true&timeout=10s&parseTime=true&tls=%v&writeTimeout=10s&readTimeout=10s",
			config.Username, password, config.Host, config.Port, config.Name, getSSLMode())
	} else {
		connectionString = config.ConnectionString
@@ -58,122 +142,465 @@ func getMySQLConnectionString(redactedPwd bool) string {
	return connectionString
}

func (p MySQLProvider) checkAvailability() error {
func (p *MySQLProvider) checkAvailability() error {
	return sqlCommonCheckAvailability(p.dbHandle)
}

func (p MySQLProvider) validateUserAndPass(username string, password string) (User, error) {
	return sqlCommonValidateUserAndPass(username, password, p.dbHandle)
func (p *MySQLProvider) validateUserAndPass(username, password, ip, protocol string) (User, error) {
	return sqlCommonValidateUserAndPass(username, password, ip, protocol, p.dbHandle)
}

func (p MySQLProvider) validateUserAndPubKey(username string, publicKey string) (User, string, error) {
func (p *MySQLProvider) validateUserAndTLSCert(username, protocol string, tlsCert *x509.Certificate) (User, error) {
	return sqlCommonValidateUserAndTLSCertificate(username, protocol, tlsCert, p.dbHandle)
}

func (p *MySQLProvider) validateUserAndPubKey(username string, publicKey []byte) (User, string, error) {
	return sqlCommonValidateUserAndPubKey(username, publicKey, p.dbHandle)
}

func (p MySQLProvider) getUserByID(ID int64) (User, error) {
	return sqlCommonGetUserByID(ID, p.dbHandle)
}

func (p MySQLProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
func (p *MySQLProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
	return sqlCommonUpdateQuota(username, filesAdd, sizeAdd, reset, p.dbHandle)
}

func (p MySQLProvider) updateLastLogin(username string) error {
	return sqlCommonUpdateLastLogin(username, p.dbHandle)
}

func (p MySQLProvider) getUsedQuota(username string) (int, int64, error) {
func (p *MySQLProvider) getUsedQuota(username string) (int, int64, error) {
	return sqlCommonGetUsedQuota(username, p.dbHandle)
}

func (p MySQLProvider) userExists(username string) (User, error) {
	return sqlCommonCheckUserExists(username, p.dbHandle)
func (p *MySQLProvider) setUpdatedAt(username string) {
	sqlCommonSetUpdatedAt(username, p.dbHandle)
}

func (p MySQLProvider) addUser(user User) error {
func (p *MySQLProvider) updateLastLogin(username string) error {
	return sqlCommonUpdateLastLogin(username, p.dbHandle)
}

func (p *MySQLProvider) updateAdminLastLogin(username string) error {
	return sqlCommonUpdateAdminLastLogin(username, p.dbHandle)
}

func (p *MySQLProvider) userExists(username string) (User, error) {
	return sqlCommonGetUserByUsername(username, p.dbHandle)
}

func (p *MySQLProvider) addUser(user *User) error {
	return sqlCommonAddUser(user, p.dbHandle)
}

func (p MySQLProvider) updateUser(user User) error {
func (p *MySQLProvider) updateUser(user *User) error {
	return sqlCommonUpdateUser(user, p.dbHandle)
}

func (p MySQLProvider) deleteUser(user User) error {
func (p *MySQLProvider) deleteUser(user *User) error {
	return sqlCommonDeleteUser(user, p.dbHandle)
}

func (p MySQLProvider) dumpUsers() ([]User, error) {
func (p *MySQLProvider) dumpUsers() ([]User, error) {
	return sqlCommonDumpUsers(p.dbHandle)
}

func (p MySQLProvider) getUsers(limit int, offset int, order string, username string) ([]User, error) {
	return sqlCommonGetUsers(limit, offset, order, username, p.dbHandle)
func (p *MySQLProvider) getRecentlyUpdatedUsers(after int64) ([]User, error) {
	return sqlCommonGetRecentlyUpdatedUsers(after, p.dbHandle)
}

func (p MySQLProvider) close() error {
func (p *MySQLProvider) getUsers(limit int, offset int, order string) ([]User, error) {
	return sqlCommonGetUsers(limit, offset, order, p.dbHandle)
}

func (p *MySQLProvider) dumpFolders() ([]vfs.BaseVirtualFolder, error) {
	return sqlCommonDumpFolders(p.dbHandle)
}

func (p *MySQLProvider) getFolders(limit, offset int, order string) ([]vfs.BaseVirtualFolder, error) {
	return sqlCommonGetFolders(limit, offset, order, p.dbHandle)
}

func (p *MySQLProvider) getFolderByName(name string) (vfs.BaseVirtualFolder, error) {
	ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
	defer cancel()
	return sqlCommonGetFolderByName(ctx, name, p.dbHandle)
}

func (p *MySQLProvider) addFolder(folder *vfs.BaseVirtualFolder) error {
	return sqlCommonAddFolder(folder, p.dbHandle)
}

func (p *MySQLProvider) updateFolder(folder *vfs.BaseVirtualFolder) error {
	return sqlCommonUpdateFolder(folder, p.dbHandle)
}

func (p *MySQLProvider) deleteFolder(folder *vfs.BaseVirtualFolder) error {
	return sqlCommonDeleteFolder(folder, p.dbHandle)
}

func (p *MySQLProvider) updateFolderQuota(name string, filesAdd int, sizeAdd int64, reset bool) error {
	return sqlCommonUpdateFolderQuota(name, filesAdd, sizeAdd, reset, p.dbHandle)
}

func (p *MySQLProvider) getUsedFolderQuota(name string) (int, int64, error) {
	return sqlCommonGetFolderUsedQuota(name, p.dbHandle)
}

func (p *MySQLProvider) adminExists(username string) (Admin, error) {
	return sqlCommonGetAdminByUsername(username, p.dbHandle)
}

func (p *MySQLProvider) addAdmin(admin *Admin) error {
	return sqlCommonAddAdmin(admin, p.dbHandle)
}

func (p *MySQLProvider) updateAdmin(admin *Admin) error {
	return sqlCommonUpdateAdmin(admin, p.dbHandle)
}

func (p *MySQLProvider) deleteAdmin(admin *Admin) error {
	return sqlCommonDeleteAdmin(admin, p.dbHandle)
}

func (p *MySQLProvider) getAdmins(limit int, offset int, order string) ([]Admin, error) {
	return sqlCommonGetAdmins(limit, offset, order, p.dbHandle)
}

func (p *MySQLProvider) dumpAdmins() ([]Admin, error) {
	return sqlCommonDumpAdmins(p.dbHandle)
}

func (p *MySQLProvider) validateAdminAndPass(username, password, ip string) (Admin, error) {
	return sqlCommonValidateAdminAndPass(username, password, ip, p.dbHandle)
}

func (p *MySQLProvider) apiKeyExists(keyID string) (APIKey, error) {
	return sqlCommonGetAPIKeyByID(keyID, p.dbHandle)
}

func (p *MySQLProvider) addAPIKey(apiKey *APIKey) error {
	return sqlCommonAddAPIKey(apiKey, p.dbHandle)
}

func (p *MySQLProvider) updateAPIKey(apiKey *APIKey) error {
	return sqlCommonUpdateAPIKey(apiKey, p.dbHandle)
}

func (p *MySQLProvider) deleteAPIKey(apiKey *APIKey) error {
	return sqlCommonDeleteAPIKey(apiKey, p.dbHandle)
}

func (p *MySQLProvider) getAPIKeys(limit int, offset int, order string) ([]APIKey, error) {
	return sqlCommonGetAPIKeys(limit, offset, order, p.dbHandle)
}

func (p *MySQLProvider) dumpAPIKeys() ([]APIKey, error) {
	return sqlCommonDumpAPIKeys(p.dbHandle)
}

func (p *MySQLProvider) updateAPIKeyLastUse(keyID string) error {
	return sqlCommonUpdateAPIKeyLastUse(keyID, p.dbHandle)
}

func (p *MySQLProvider) shareExists(shareID, username string) (Share, error) {
	return sqlCommonGetShareByID(shareID, username, p.dbHandle)
}

func (p *MySQLProvider) addShare(share *Share) error {
	return sqlCommonAddShare(share, p.dbHandle)
}

func (p *MySQLProvider) updateShare(share *Share) error {
	return sqlCommonUpdateShare(share, p.dbHandle)
}

func (p *MySQLProvider) deleteShare(share *Share) error {
	return sqlCommonDeleteShare(share, p.dbHandle)
}

func (p *MySQLProvider) getShares(limit int, offset int, order, username string) ([]Share, error) {
	return sqlCommonGetShares(limit, offset, order, username, p.dbHandle)
}

func (p *MySQLProvider) dumpShares() ([]Share, error) {
	return sqlCommonDumpShares(p.dbHandle)
}

func (p *MySQLProvider) updateShareLastUse(shareID string, numTokens int) error {
	return sqlCommonUpdateShareLastUse(shareID, numTokens, p.dbHandle)
}

func (p *MySQLProvider) getDefenderHosts(from int64, limit int) ([]*DefenderEntry, error) {
	return sqlCommonGetDefenderHosts(from, limit, p.dbHandle)
}

func (p *MySQLProvider) getDefenderHostByIP(ip string, from int64) (*DefenderEntry, error) {
	return sqlCommonGetDefenderHostByIP(ip, from, p.dbHandle)
}

func (p *MySQLProvider) isDefenderHostBanned(ip string) (*DefenderEntry, error) {
	return sqlCommonIsDefenderHostBanned(ip, p.dbHandle)
}

func (p *MySQLProvider) updateDefenderBanTime(ip string, minutes int) error {
	return sqlCommonDefenderIncrementBanTime(ip, minutes, p.dbHandle)
}

func (p *MySQLProvider) deleteDefenderHost(ip string) error {
	return sqlCommonDeleteDefenderHost(ip, p.dbHandle)
}

func (p *MySQLProvider) addDefenderEvent(ip string, score int) error {
	return sqlCommonAddDefenderHostAndEvent(ip, score, p.dbHandle)
}

func (p *MySQLProvider) setDefenderBanTime(ip string, banTime int64) error {
	return sqlCommonSetDefenderBanTime(ip, banTime, p.dbHandle)
}

func (p *MySQLProvider) cleanupDefender(from int64) error {
	return sqlCommonDefenderCleanup(from, p.dbHandle)
}

func (p *MySQLProvider) close() error {
	return p.dbHandle.Close()
}

func (p MySQLProvider) reloadConfig() error {
func (p *MySQLProvider) reloadConfig() error {
	return nil
}

// initializeDatabase creates the initial database structure
func (p MySQLProvider) initializeDatabase() error {
	sqlUsers := strings.Replace(mysqlUsersTableSQL, "{{users}}", config.UsersTable, 1)
	tx, err := p.dbHandle.Begin()
	if err != nil {
		return err
func (p *MySQLProvider) initializeDatabase() error {
	dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, false)
	if err == nil && dbVersion.Version > 0 {
		return ErrNoInitRequired
	}
	_, err = tx.Exec(sqlUsers)
	if err != nil {
		tx.Rollback()
		return err
	if errors.Is(err, sql.ErrNoRows) {
		return errSchemaVersionEmpty
	}
	_, err = tx.Exec(mysqlSchemaTableSQL)
	if err != nil {
		tx.Rollback()
		return err
	}
	_, err = tx.Exec(initialDBVersionSQL)
	if err != nil {
		tx.Rollback()
		return err
	}
	return tx.Commit()
	initialSQL := strings.ReplaceAll(mysqlInitialSQL, "{{schema_version}}", sqlTableSchemaVersion)
	initialSQL = strings.ReplaceAll(initialSQL, "{{admins}}", sqlTableAdmins)
	initialSQL = strings.ReplaceAll(initialSQL, "{{folders}}", sqlTableFolders)
	initialSQL = strings.ReplaceAll(initialSQL, "{{users}}", sqlTableUsers)
	initialSQL = strings.ReplaceAll(initialSQL, "{{folders_mapping}}", sqlTableFoldersMapping)
	initialSQL = strings.ReplaceAll(initialSQL, "{{prefix}}", config.SQLTablesPrefix)

	return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, strings.Split(initialSQL, ";"), 10)
}
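
The schema statements above are stored with {{placeholder}} markers; the concrete, optionally prefixed table names are substituted at run time, and the result is split on ";" and executed statement by statement. A minimal standalone sketch of that pattern; renderSchemaSQL and the table names are illustrative, not part of this diff:

// renderSchemaSQL expands {{placeholder}} markers the same way
// initializeDatabase does above, then splits into single statements.
func renderSchemaSQL(template, prefix string, tables map[string]string) []string {
	for placeholder, table := range tables {
		template = strings.ReplaceAll(template, "{{"+placeholder+"}}", table)
	}
	template = strings.ReplaceAll(template, "{{prefix}}", prefix)
	return strings.Split(template, ";")
}

// Example: renderSchemaSQL("CREATE INDEX `{{prefix}}users_updated_at_idx` ON `{{users}}` (`updated_at`);",
//	"sftpgo_", map[string]string{"users": "sftpgo_users"})
// yields one CREATE INDEX statement against the sftpgo_users table.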

func (p MySQLProvider) migrateDatabase() error {
	dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle)
//nolint:dupl
func (p *MySQLProvider) migrateDatabase() error {
	dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
	if err != nil {
		return err
	}
	if dbVersion.Version == sqlDatabaseVersion {
		providerLog(logger.LevelDebug, "sql database is updated, current version: %v", dbVersion.Version)
		return nil

	switch version := dbVersion.Version; {
	case version == sqlDatabaseVersion:
		providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", version)
		return ErrNoInitRequired
	case version < 10:
		err = fmt.Errorf("database version %v is too old, please see the upgrading docs", version)
		providerLog(logger.LevelError, "%v", err)
		logger.ErrorToConsole("%v", err)
		return err
	case version == 10:
		return updateMySQLDatabaseFromV10(p.dbHandle)
	case version == 11:
		return updateMySQLDatabaseFromV11(p.dbHandle)
	case version == 12:
		return updateMySQLDatabaseFromV12(p.dbHandle)
	case version == 13:
		return updateMySQLDatabaseFromV13(p.dbHandle)
	case version == 14:
		return updateMySQLDatabaseFromV14(p.dbHandle)
	default:
		if version > sqlDatabaseVersion {
			providerLog(logger.LevelError, "database version %v is newer than the supported one: %v", version,
				sqlDatabaseVersion)
			logger.WarnToConsole("database version %v is newer than the supported one: %v", version,
				sqlDatabaseVersion)
			return nil
		}
		return fmt.Errorf("database version not handled: %v", version)
	}
	if dbVersion.Version == 1 {
		return updateMySQLDatabaseFrom1To2(p.dbHandle)
	}
	return nil
}

func updateMySQLDatabaseFrom1To2(dbHandle *sql.DB) error {
	providerLog(logger.LevelInfo, "updating database version: 1 -> 2")
	sql := strings.Replace(mysqlUsersV2SQL, "{{users}}", config.UsersTable, 1)
	tx, err := dbHandle.Begin()
func (p *MySQLProvider) revertDatabase(targetVersion int) error {
	dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
	if err != nil {
		return err
	}
	_, err = tx.Exec(sql)
	if err != nil {
		tx.Rollback()
		return err
	if dbVersion.Version == targetVersion {
		return errors.New("current version match target version, nothing to do")
	}
	err = sqlCommonUpdateDatabaseVersionWithTX(tx, 2)
	if err != nil {
		tx.Rollback()
		return err

	switch dbVersion.Version {
	case 15:
		return downgradeMySQLDatabaseFromV15(p.dbHandle)
	case 14:
		return downgradeMySQLDatabaseFromV14(p.dbHandle)
	case 13:
		return downgradeMySQLDatabaseFromV13(p.dbHandle)
	case 12:
		return downgradeMySQLDatabaseFromV12(p.dbHandle)
	case 11:
		return downgradeMySQLDatabaseFromV11(p.dbHandle)
	default:
		return fmt.Errorf("database version not handled: %v", dbVersion.Version)
	}
	return tx.Commit()
}

func (p *MySQLProvider) resetDatabase() error {
	sql := strings.ReplaceAll(mysqlResetSQL, "{{schema_version}}", sqlTableSchemaVersion)
	sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
	sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
	sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
	sql = strings.ReplaceAll(sql, "{{folders_mapping}}", sqlTableFoldersMapping)
	sql = strings.ReplaceAll(sql, "{{api_keys}}", sqlTableAPIKeys)
	sql = strings.ReplaceAll(sql, "{{shares}}", sqlTableShares)
	sql = strings.ReplaceAll(sql, "{{defender_events}}", sqlTableDefenderEvents)
	sql = strings.ReplaceAll(sql, "{{defender_hosts}}", sqlTableDefenderHosts)
	return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, strings.Split(sql, ";"), 0)
}

func updateMySQLDatabaseFromV10(dbHandle *sql.DB) error {
	if err := updateMySQLDatabaseFrom10To11(dbHandle); err != nil {
		return err
	}
	return updateMySQLDatabaseFromV11(dbHandle)
}

func updateMySQLDatabaseFromV11(dbHandle *sql.DB) error {
	if err := updateMySQLDatabaseFrom11To12(dbHandle); err != nil {
		return err
	}
	return updateMySQLDatabaseFromV12(dbHandle)
}

func updateMySQLDatabaseFromV12(dbHandle *sql.DB) error {
	if err := updateMySQLDatabaseFrom12To13(dbHandle); err != nil {
		return err
	}
	return updateMySQLDatabaseFromV13(dbHandle)
}

func updateMySQLDatabaseFromV13(dbHandle *sql.DB) error {
	if err := updateMySQLDatabaseFrom13To14(dbHandle); err != nil {
		return err
	}
	return updateMySQLDatabaseFromV14(dbHandle)
}

func updateMySQLDatabaseFromV14(dbHandle *sql.DB) error {
	return updateMySQLDatabaseFrom14To15(dbHandle)
}

func downgradeMySQLDatabaseFromV15(dbHandle *sql.DB) error {
	if err := downgradeMySQLDatabaseFrom15To14(dbHandle); err != nil {
		return err
	}
	return downgradeMySQLDatabaseFromV14(dbHandle)
}

func downgradeMySQLDatabaseFromV14(dbHandle *sql.DB) error {
	if err := downgradeMySQLDatabaseFrom14To13(dbHandle); err != nil {
		return err
	}
	return downgradeMySQLDatabaseFromV13(dbHandle)
}

func downgradeMySQLDatabaseFromV13(dbHandle *sql.DB) error {
	if err := downgradeMySQLDatabaseFrom13To12(dbHandle); err != nil {
		return err
	}
	return downgradeMySQLDatabaseFromV12(dbHandle)
}

func downgradeMySQLDatabaseFromV12(dbHandle *sql.DB) error {
	if err := downgradeMySQLDatabaseFrom12To11(dbHandle); err != nil {
		return err
	}
	return downgradeMySQLDatabaseFromV11(dbHandle)
}

func downgradeMySQLDatabaseFromV11(dbHandle *sql.DB) error {
	return downgradeMySQLDatabaseFrom11To10(dbHandle)
}

func updateMySQLDatabaseFrom13To14(dbHandle *sql.DB) error {
	logger.InfoToConsole("updating database version: 13 -> 14")
	providerLog(logger.LevelInfo, "updating database version: 13 -> 14")
	sql := strings.ReplaceAll(mysqlV14SQL, "{{shares}}", sqlTableShares)
	sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
	sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 14)
}

func updateMySQLDatabaseFrom14To15(dbHandle *sql.DB) error {
	logger.InfoToConsole("updating database version: 14 -> 15")
	providerLog(logger.LevelInfo, "updating database version: 14 -> 15")
	sql := strings.ReplaceAll(mysqlV15SQL, "{{defender_events}}", sqlTableDefenderEvents)
	sql = strings.ReplaceAll(sql, "{{defender_hosts}}", sqlTableDefenderHosts)
	sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 15)
}

func downgradeMySQLDatabaseFrom15To14(dbHandle *sql.DB) error {
	logger.InfoToConsole("downgrading database version: 15 -> 14")
	providerLog(logger.LevelInfo, "downgrading database version: 15 -> 14")
	sql := strings.ReplaceAll(mysqlV15DownSQL, "{{defender_events}}", sqlTableDefenderEvents)
	sql = strings.ReplaceAll(sql, "{{defender_hosts}}", sqlTableDefenderHosts)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 14)
}

func downgradeMySQLDatabaseFrom14To13(dbHandle *sql.DB) error {
	logger.InfoToConsole("downgrading database version: 14 -> 13")
	providerLog(logger.LevelInfo, "downgrading database version: 14 -> 13")
	sql := strings.ReplaceAll(mysqlV14DownSQL, "{{shares}}", sqlTableShares)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 13)
}

func updateMySQLDatabaseFrom12To13(dbHandle *sql.DB) error {
	logger.InfoToConsole("updating database version: 12 -> 13")
	providerLog(logger.LevelInfo, "updating database version: 12 -> 13")
	sql := strings.ReplaceAll(mysqlV13SQL, "{{users}}", sqlTableUsers)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 13)
}

func downgradeMySQLDatabaseFrom13To12(dbHandle *sql.DB) error {
	logger.InfoToConsole("downgrading database version: 13 -> 12")
	providerLog(logger.LevelInfo, "downgrading database version: 13 -> 12")
	sql := strings.ReplaceAll(mysqlV13DownSQL, "{{users}}", sqlTableUsers)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 12)
}

func updateMySQLDatabaseFrom11To12(dbHandle *sql.DB) error {
	logger.InfoToConsole("updating database version: 11 -> 12")
	providerLog(logger.LevelInfo, "updating database version: 11 -> 12")
	sql := strings.ReplaceAll(mysqlV12SQL, "{{users}}", sqlTableUsers)
	sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
	sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 12)
}

func downgradeMySQLDatabaseFrom12To11(dbHandle *sql.DB) error {
	logger.InfoToConsole("downgrading database version: 12 -> 11")
	providerLog(logger.LevelInfo, "downgrading database version: 12 -> 11")
	sql := strings.ReplaceAll(mysqlV12DownSQL, "{{users}}", sqlTableUsers)
	sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
	sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 11)
}

func updateMySQLDatabaseFrom10To11(dbHandle *sql.DB) error {
	logger.InfoToConsole("updating database version: 10 -> 11")
	providerLog(logger.LevelInfo, "updating database version: 10 -> 11")
	sql := strings.ReplaceAll(mysqlV11SQL, "{{users}}", sqlTableUsers)
	sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
	sql = strings.ReplaceAll(sql, "{{api_keys}}", sqlTableAPIKeys)
	sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 11)
}

func downgradeMySQLDatabaseFrom11To10(dbHandle *sql.DB) error {
	logger.InfoToConsole("downgrading database version: 11 -> 10")
	providerLog(logger.LevelInfo, "downgrading database version: 11 -> 10")
	sql := strings.ReplaceAll(mysqlV11DownSQL, "{{api_keys}}", sqlTableAPIKeys)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, strings.Split(sql, ";"), 10)
}
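
The update helpers above form a chain: each updateMySQLDatabaseFromVN applies one step and then delegates to the next, so any supported starting version walks forward to the latest schema. A standalone sketch of the same idea; the step functions and names are illustrative, not part of this diff:

// migrate applies one-step migrations in order until target is reached,
// mirroring the chained updateMySQLDatabaseFromVN helpers.
func migrate(current, target int, steps map[int]func() error) error {
	for v := current; v < target; v++ {
		step, ok := steps[v]
		if !ok {
			return fmt.Errorf("database version not handled: %v", v)
		}
		if err := step(); err != nil {
			return err
		}
	}
	return nil
}

// Example: migrate(10, 12, steps) runs the 10->11 step, then the 11->12 step,
// stopping at the first error so the recorded schema version stays consistent.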

18
dataprovider/mysql_disabled.go
Normal file
@@ -0,0 +1,18 @@
//go:build nomysql
// +build nomysql

package dataprovider

import (
	"errors"

	"github.com/drakkan/sftpgo/v2/version"
)

func init() {
	version.AddFeature("-mysql")
}

func initializeMySQLProvider() error {
	return errors.New("MySQL disabled at build time")
}
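As with bolt_disabled.go above, this stub is compiled only when the matching build tag is set (for example, building with the nomysql tag), so version.AddFeature records the backend as missing and any attempt to initialize the provider fails fast with a clear error; the same pattern appears for each data provider that can be excluded at build time.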
@@ -1,23 +1,118 @@
|
||||
//go:build !nopgsql
|
||||
// +build !nopgsql
|
||||
|
||||
package dataprovider
|
||||
|
||||
import (
|
||||
"context"
|
||||
"crypto/x509"
|
||||
"database/sql"
|
||||
"errors"
|
||||
"fmt"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/drakkan/sftpgo/logger"
|
||||
// we import lib/pq here to be able to disable PostgreSQL support using a build tag
|
||||
_ "github.com/lib/pq"
|
||||
|
||||
"github.com/drakkan/sftpgo/v2/logger"
|
||||
"github.com/drakkan/sftpgo/v2/version"
|
||||
"github.com/drakkan/sftpgo/v2/vfs"
|
||||
)
|
||||
|
||||
const (
|
||||
pgsqlUsersTableSQL = `CREATE TABLE "{{users}}" ("id" serial NOT NULL PRIMARY KEY, "username" varchar(255) NOT NULL UNIQUE,
|
||||
"password" varchar(255) NULL, "public_keys" text NULL, "home_dir" varchar(255) NOT NULL, "uid" integer NOT NULL,
|
||||
"gid" integer NOT NULL, "max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL,
|
||||
"permissions" text NOT NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL,
|
||||
"last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL,
|
||||
"expiration_date" bigint NOT NULL, "last_login" bigint NOT NULL, "status" integer NOT NULL, "filters" text NULL,
|
||||
"filesystem" text NULL);`
|
||||
pgsqlSchemaTableSQL = `CREATE TABLE "schema_version" ("id" serial NOT NULL PRIMARY KEY, "version" integer NOT NULL);`
|
||||
pgsqlUsersV2SQL = `ALTER TABLE "{{users}}" ADD COLUMN "virtual_folders" text NULL;`
|
||||
pgsqlResetSQL = `DROP TABLE IF EXISTS "{{api_keys}}" CASCADE;
|
||||
DROP TABLE IF EXISTS "{{folders_mapping}}" CASCADE;
|
||||
DROP TABLE IF EXISTS "{{admins}}" CASCADE;
|
||||
DROP TABLE IF EXISTS "{{folders}}" CASCADE;
|
||||
DROP TABLE IF EXISTS "{{shares}}" CASCADE;
|
||||
DROP TABLE IF EXISTS "{{users}}" CASCADE;
|
||||
DROP TABLE IF EXISTS "{{defender_events}}" CASCADE;
|
||||
DROP TABLE IF EXISTS "{{defender_hosts}}" CASCADE;
|
||||
DROP TABLE IF EXISTS "{{schema_version}}" CASCADE;
|
||||
`
|
||||
pgsqlInitial = `CREATE TABLE "{{schema_version}}" ("id" serial NOT NULL PRIMARY KEY, "version" integer NOT NULL);
|
||||
CREATE TABLE "{{admins}}" ("id" serial NOT NULL PRIMARY KEY, "username" varchar(255) NOT NULL UNIQUE,
|
||||
"description" varchar(512) NULL, "password" varchar(255) NOT NULL, "email" varchar(255) NULL, "status" integer NOT NULL,
|
||||
"permissions" text NOT NULL, "filters" text NULL, "additional_info" text NULL);
|
||||
CREATE TABLE "{{folders}}" ("id" serial NOT NULL PRIMARY KEY, "name" varchar(255) NOT NULL UNIQUE, "description" varchar(512) NULL,
|
||||
"path" varchar(512) NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL,
|
||||
"filesystem" text NULL);
|
||||
CREATE TABLE "{{users}}" ("id" serial NOT NULL PRIMARY KEY, "username" varchar(255) NOT NULL UNIQUE, "status" integer NOT NULL,
|
||||
"expiration_date" bigint NOT NULL, "description" varchar(512) NULL, "password" text NULL, "public_keys" text NULL,
|
||||
"home_dir" varchar(512) NOT NULL, "uid" integer NOT NULL, "gid" integer NOT NULL, "max_sessions" integer NOT NULL,
|
||||
"quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "permissions" text NOT NULL, "used_quota_size" bigint NOT NULL,
|
||||
"used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL,
|
||||
"download_bandwidth" integer NOT NULL, "last_login" bigint NOT NULL, "filters" text NULL, "filesystem" text NULL,
|
||||
"additional_info" text NULL);
|
||||
CREATE TABLE "{{folders_mapping}}" ("id" serial NOT NULL PRIMARY KEY, "virtual_path" varchar(512) NOT NULL,
|
||||
"quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "folder_id" integer NOT NULL, "user_id" integer NOT NULL);
|
||||
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "{{prefix}}unique_mapping" UNIQUE ("user_id", "folder_id");
|
||||
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "{{prefix}}folders_mapping_folder_id_fk_folders_id"
|
||||
FOREIGN KEY ("folder_id") REFERENCES "{{folders}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
|
||||
ALTER TABLE "{{folders_mapping}}" ADD CONSTRAINT "{{prefix}}folders_mapping_user_id_fk_users_id"
|
||||
FOREIGN KEY ("user_id") REFERENCES "{{users}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
|
||||
CREATE INDEX "{{prefix}}folders_mapping_folder_id_idx" ON "{{folders_mapping}}" ("folder_id");
|
||||
CREATE INDEX "{{prefix}}folders_mapping_user_id_idx" ON "{{folders_mapping}}" ("user_id");
|
||||
INSERT INTO {{schema_version}} (version) VALUES (10);
|
||||
`
|
||||
pgsqlV11SQL = `CREATE TABLE "{{api_keys}}" ("id" serial NOT NULL PRIMARY KEY, "name" varchar(255) NOT NULL,
|
||||
"key_id" varchar(50) NOT NULL UNIQUE, "api_key" varchar(255) NOT NULL UNIQUE, "scope" integer NOT NULL,
|
||||
"created_at" bigint NOT NULL, "updated_at" bigint NOT NULL, "last_use_at" bigint NOT NULL,"expires_at" bigint NOT NULL,
|
||||
"description" text NULL, "admin_id" integer NULL, "user_id" integer NULL);
|
||||
ALTER TABLE "{{api_keys}}" ADD CONSTRAINT "{{prefix}}api_keys_admin_id_fk_admins_id" FOREIGN KEY ("admin_id")
|
||||
REFERENCES "{{admins}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
|
||||
ALTER TABLE "{{api_keys}}" ADD CONSTRAINT "{{prefix}}api_keys_user_id_fk_users_id" FOREIGN KEY ("user_id")
|
||||
REFERENCES "{{users}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
|
||||
CREATE INDEX "{{prefix}}api_keys_admin_id_idx" ON "{{api_keys}}" ("admin_id");
|
||||
CREATE INDEX "{{prefix}}api_keys_user_id_idx" ON "{{api_keys}}" ("user_id");
|
||||
`
|
||||
pgsqlV11DownSQL = `DROP TABLE "{{api_keys}}" CASCADE;`
|
||||
pgsqlV12SQL = `ALTER TABLE "{{admins}}" ADD COLUMN "created_at" bigint DEFAULT 0 NOT NULL;
|
||||
ALTER TABLE "{{admins}}" ALTER COLUMN "created_at" DROP DEFAULT;
|
||||
ALTER TABLE "{{admins}}" ADD COLUMN "updated_at" bigint DEFAULT 0 NOT NULL;
|
||||
ALTER TABLE "{{admins}}" ALTER COLUMN "updated_at" DROP DEFAULT;
|
||||
ALTER TABLE "{{admins}}" ADD COLUMN "last_login" bigint DEFAULT 0 NOT NULL;
|
||||
ALTER TABLE "{{admins}}" ALTER COLUMN "last_login" DROP DEFAULT;
|
||||
ALTER TABLE "{{users}}" ADD COLUMN "created_at" bigint DEFAULT 0 NOT NULL;
|
||||
ALTER TABLE "{{users}}" ALTER COLUMN "created_at" DROP DEFAULT;
|
||||
ALTER TABLE "{{users}}" ADD COLUMN "updated_at" bigint DEFAULT 0 NOT NULL;
|
||||
ALTER TABLE "{{users}}" ALTER COLUMN "updated_at" DROP DEFAULT;
|
||||
CREATE INDEX "{{prefix}}users_updated_at_idx" ON "{{users}}" ("updated_at");
|
||||
`
|
||||
pgsqlV12DownSQL = `ALTER TABLE "{{users}}" DROP COLUMN "updated_at" CASCADE;
|
||||
ALTER TABLE "{{users}}" DROP COLUMN "created_at" CASCADE;
|
||||
ALTER TABLE "{{admins}}" DROP COLUMN "created_at" CASCADE;
|
||||
ALTER TABLE "{{admins}}" DROP COLUMN "updated_at" CASCADE;
|
||||
ALTER TABLE "{{admins}}" DROP COLUMN "last_login" CASCADE;
|
||||
`
|
||||
pgsqlV13SQL = `ALTER TABLE "{{users}}" ADD COLUMN "email" varchar(255) NULL;`
|
||||
pgsqlV13DownSQL = `ALTER TABLE "{{users}}" DROP COLUMN "email" CASCADE;`
|
||||
pgsqlV14SQL = `CREATE TABLE "{{shares}}" ("id" serial NOT NULL PRIMARY KEY,
|
||||
"share_id" varchar(60) NOT NULL UNIQUE, "name" varchar(255) NOT NULL, "description" varchar(512) NULL,
|
||||
"scope" integer NOT NULL, "paths" text NOT NULL, "created_at" bigint NOT NULL, "updated_at" bigint NOT NULL,
|
||||
"last_use_at" bigint NOT NULL, "expires_at" bigint NOT NULL, "password" text NULL,
|
||||
"max_tokens" integer NOT NULL, "used_tokens" integer NOT NULL, "allow_from" text NULL,
|
||||
"user_id" integer NOT NULL);
|
||||
ALTER TABLE "{{shares}}" ADD CONSTRAINT "{{prefix}}shares_user_id_fk_users_id" FOREIGN KEY ("user_id")
|
||||
REFERENCES "{{users}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
|
||||
CREATE INDEX "{{prefix}}shares_user_id_idx" ON "{{shares}}" ("user_id");
|
||||
`
|
||||
pgsqlV14DownSQL = `DROP TABLE "{{shares}}" CASCADE;`
|
||||
pgsqlV15SQL = `CREATE TABLE "{{defender_hosts}}" ("id" bigserial NOT NULL PRIMARY KEY, "ip" varchar(50) NOT NULL UNIQUE,
|
||||
"ban_time" bigint NOT NULL, "updated_at" bigint NOT NULL);
|
||||
CREATE TABLE "{{defender_events}}" ("id" bigserial NOT NULL PRIMARY KEY, "date_time" bigint NOT NULL, "score" integer NOT NULL,
|
||||
"host_id" bigint NOT NULL);
|
||||
ALTER TABLE "{{defender_events}}" ADD CONSTRAINT "{{prefix}}defender_events_host_id_fk_defender_hosts_id" FOREIGN KEY
|
||||
("host_id") REFERENCES "{{defender_hosts}}" ("id") MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE;
|
||||
CREATE INDEX "{{prefix}}defender_hosts_updated_at_idx" ON "{{defender_hosts}}" ("updated_at");
|
||||
CREATE INDEX "{{prefix}}defender_hosts_ban_time_idx" ON "{{defender_hosts}}" ("ban_time");
|
||||
CREATE INDEX "{{prefix}}defender_events_date_time_idx" ON "{{defender_events}}" ("date_time");
|
||||
CREATE INDEX "{{prefix}}defender_events_host_id_idx" ON "{{defender_events}}" ("host_id");
|
||||
`
|
||||
pgsqlV15DownSQL = `DROP TABLE "{{defender_events}}" CASCADE;
|
||||
DROP TABLE "{{defender_hosts}}" CASCADE;
|
||||
`
|
||||
)
|
||||
|
||||
// PGSQLProvider auth provider for PostgreSQL database
|
||||
@@ -25,17 +120,26 @@ type PGSQLProvider struct {
|
||||
dbHandle *sql.DB
|
||||
}
|
||||
|
||||
func init() {
|
||||
version.AddFeature("+pgsql")
|
||||
}
|
||||
|
||||
func initializePGSQLProvider() error {
|
||||
var err error
|
||||
logSender = PGSQLDataProviderName
|
||||
dbHandle, err := sql.Open("postgres", getPGSQLConnectionString(false))
|
||||
if err == nil {
|
||||
providerLog(logger.LevelDebug, "postgres database handle created, connection string: %#v, pool size: %v",
|
||||
getPGSQLConnectionString(true), config.PoolSize)
|
||||
dbHandle.SetMaxOpenConns(config.PoolSize)
|
||||
provider = PGSQLProvider{dbHandle: dbHandle}
|
||||
if config.PoolSize > 0 {
|
||||
dbHandle.SetMaxIdleConns(config.PoolSize)
|
||||
} else {
|
||||
dbHandle.SetMaxIdleConns(2)
|
||||
}
|
||||
dbHandle.SetConnMaxLifetime(240 * time.Second)
|
||||
provider = &PGSQLProvider{dbHandle: dbHandle}
|
||||
} else {
|
||||
providerLog(logger.LevelWarn, "error creating postgres database handler, connection string: %#v, error: %v",
|
||||
providerLog(logger.LevelError, "error creating postgres database handler, connection string: %#v, error: %v",
|
||||
getPGSQLConnectionString(true), err)
|
||||
}
|
||||
return err
|
||||
@@ -43,7 +147,7 @@ func initializePGSQLProvider() error {

func getPGSQLConnectionString(redactedPwd bool) string {
	var connectionString string
	if len(config.ConnectionString) == 0 {
	if config.ConnectionString == "" {
		password := config.Password
		if redactedPwd {
			password = "[redacted]"
@@ -56,122 +160,471 @@ func getPGSQLConnectionString(redactedPwd bool) string {
	return connectionString
}

func (p PGSQLProvider) checkAvailability() error {
func (p *PGSQLProvider) checkAvailability() error {
	return sqlCommonCheckAvailability(p.dbHandle)
}

func (p PGSQLProvider) validateUserAndPass(username string, password string) (User, error) {
	return sqlCommonValidateUserAndPass(username, password, p.dbHandle)
func (p *PGSQLProvider) validateUserAndPass(username, password, ip, protocol string) (User, error) {
	return sqlCommonValidateUserAndPass(username, password, ip, protocol, p.dbHandle)
}

func (p PGSQLProvider) validateUserAndPubKey(username string, publicKey string) (User, string, error) {
func (p *PGSQLProvider) validateUserAndTLSCert(username, protocol string, tlsCert *x509.Certificate) (User, error) {
	return sqlCommonValidateUserAndTLSCertificate(username, protocol, tlsCert, p.dbHandle)
}

func (p *PGSQLProvider) validateUserAndPubKey(username string, publicKey []byte) (User, string, error) {
	return sqlCommonValidateUserAndPubKey(username, publicKey, p.dbHandle)
}

func (p PGSQLProvider) getUserByID(ID int64) (User, error) {
	return sqlCommonGetUserByID(ID, p.dbHandle)
}

func (p PGSQLProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
func (p *PGSQLProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
	return sqlCommonUpdateQuota(username, filesAdd, sizeAdd, reset, p.dbHandle)
}

func (p PGSQLProvider) updateLastLogin(username string) error {
	return sqlCommonUpdateLastLogin(username, p.dbHandle)
}

func (p PGSQLProvider) getUsedQuota(username string) (int, int64, error) {
func (p *PGSQLProvider) getUsedQuota(username string) (int, int64, error) {
	return sqlCommonGetUsedQuota(username, p.dbHandle)
}

func (p PGSQLProvider) userExists(username string) (User, error) {
	return sqlCommonCheckUserExists(username, p.dbHandle)
func (p *PGSQLProvider) setUpdatedAt(username string) {
	sqlCommonSetUpdatedAt(username, p.dbHandle)
}

func (p PGSQLProvider) addUser(user User) error {
func (p *PGSQLProvider) updateLastLogin(username string) error {
	return sqlCommonUpdateLastLogin(username, p.dbHandle)
}

func (p *PGSQLProvider) updateAdminLastLogin(username string) error {
	return sqlCommonUpdateAdminLastLogin(username, p.dbHandle)
}

func (p *PGSQLProvider) userExists(username string) (User, error) {
	return sqlCommonGetUserByUsername(username, p.dbHandle)
}

func (p *PGSQLProvider) addUser(user *User) error {
	return sqlCommonAddUser(user, p.dbHandle)
}

func (p PGSQLProvider) updateUser(user User) error {
func (p *PGSQLProvider) updateUser(user *User) error {
	return sqlCommonUpdateUser(user, p.dbHandle)
}

func (p PGSQLProvider) deleteUser(user User) error {
func (p *PGSQLProvider) deleteUser(user *User) error {
	return sqlCommonDeleteUser(user, p.dbHandle)
}

func (p PGSQLProvider) dumpUsers() ([]User, error) {
func (p *PGSQLProvider) dumpUsers() ([]User, error) {
	return sqlCommonDumpUsers(p.dbHandle)
}

func (p PGSQLProvider) getUsers(limit int, offset int, order string, username string) ([]User, error) {
	return sqlCommonGetUsers(limit, offset, order, username, p.dbHandle)
func (p *PGSQLProvider) getRecentlyUpdatedUsers(after int64) ([]User, error) {
	return sqlCommonGetRecentlyUpdatedUsers(after, p.dbHandle)
}

func (p PGSQLProvider) close() error {
func (p *PGSQLProvider) getUsers(limit int, offset int, order string) ([]User, error) {
	return sqlCommonGetUsers(limit, offset, order, p.dbHandle)
}

func (p *PGSQLProvider) dumpFolders() ([]vfs.BaseVirtualFolder, error) {
	return sqlCommonDumpFolders(p.dbHandle)
}

func (p *PGSQLProvider) getFolders(limit, offset int, order string) ([]vfs.BaseVirtualFolder, error) {
	return sqlCommonGetFolders(limit, offset, order, p.dbHandle)
}

func (p *PGSQLProvider) getFolderByName(name string) (vfs.BaseVirtualFolder, error) {
	ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
	defer cancel()
	return sqlCommonGetFolderByName(ctx, name, p.dbHandle)
}

func (p *PGSQLProvider) addFolder(folder *vfs.BaseVirtualFolder) error {
	return sqlCommonAddFolder(folder, p.dbHandle)
}

func (p *PGSQLProvider) updateFolder(folder *vfs.BaseVirtualFolder) error {
	return sqlCommonUpdateFolder(folder, p.dbHandle)
}

func (p *PGSQLProvider) deleteFolder(folder *vfs.BaseVirtualFolder) error {
	return sqlCommonDeleteFolder(folder, p.dbHandle)
}

func (p *PGSQLProvider) updateFolderQuota(name string, filesAdd int, sizeAdd int64, reset bool) error {
	return sqlCommonUpdateFolderQuota(name, filesAdd, sizeAdd, reset, p.dbHandle)
}

func (p *PGSQLProvider) getUsedFolderQuota(name string) (int, int64, error) {
	return sqlCommonGetFolderUsedQuota(name, p.dbHandle)
}

func (p *PGSQLProvider) adminExists(username string) (Admin, error) {
	return sqlCommonGetAdminByUsername(username, p.dbHandle)
}

func (p *PGSQLProvider) addAdmin(admin *Admin) error {
	return sqlCommonAddAdmin(admin, p.dbHandle)
}

func (p *PGSQLProvider) updateAdmin(admin *Admin) error {
	return sqlCommonUpdateAdmin(admin, p.dbHandle)
}

func (p *PGSQLProvider) deleteAdmin(admin *Admin) error {
	return sqlCommonDeleteAdmin(admin, p.dbHandle)
}

func (p *PGSQLProvider) getAdmins(limit int, offset int, order string) ([]Admin, error) {
	return sqlCommonGetAdmins(limit, offset, order, p.dbHandle)
}

func (p *PGSQLProvider) dumpAdmins() ([]Admin, error) {
	return sqlCommonDumpAdmins(p.dbHandle)
}

func (p *PGSQLProvider) validateAdminAndPass(username, password, ip string) (Admin, error) {
	return sqlCommonValidateAdminAndPass(username, password, ip, p.dbHandle)
}

func (p *PGSQLProvider) apiKeyExists(keyID string) (APIKey, error) {
	return sqlCommonGetAPIKeyByID(keyID, p.dbHandle)
}

func (p *PGSQLProvider) addAPIKey(apiKey *APIKey) error {
	return sqlCommonAddAPIKey(apiKey, p.dbHandle)
}

func (p *PGSQLProvider) updateAPIKey(apiKey *APIKey) error {
	return sqlCommonUpdateAPIKey(apiKey, p.dbHandle)
}

func (p *PGSQLProvider) deleteAPIKey(apiKey *APIKey) error {
	return sqlCommonDeleteAPIKey(apiKey, p.dbHandle)
}

func (p *PGSQLProvider) getAPIKeys(limit int, offset int, order string) ([]APIKey, error) {
	return sqlCommonGetAPIKeys(limit, offset, order, p.dbHandle)
}

func (p *PGSQLProvider) dumpAPIKeys() ([]APIKey, error) {
	return sqlCommonDumpAPIKeys(p.dbHandle)
}

func (p *PGSQLProvider) updateAPIKeyLastUse(keyID string) error {
	return sqlCommonUpdateAPIKeyLastUse(keyID, p.dbHandle)
}

func (p *PGSQLProvider) shareExists(shareID, username string) (Share, error) {
	return sqlCommonGetShareByID(shareID, username, p.dbHandle)
}

func (p *PGSQLProvider) addShare(share *Share) error {
	return sqlCommonAddShare(share, p.dbHandle)
}

func (p *PGSQLProvider) updateShare(share *Share) error {
	return sqlCommonUpdateShare(share, p.dbHandle)
}

func (p *PGSQLProvider) deleteShare(share *Share) error {
	return sqlCommonDeleteShare(share, p.dbHandle)
}

func (p *PGSQLProvider) getShares(limit int, offset int, order, username string) ([]Share, error) {
	return sqlCommonGetShares(limit, offset, order, username, p.dbHandle)
}

func (p *PGSQLProvider) dumpShares() ([]Share, error) {
	return sqlCommonDumpShares(p.dbHandle)
}

func (p *PGSQLProvider) updateShareLastUse(shareID string, numTokens int) error {
	return sqlCommonUpdateShareLastUse(shareID, numTokens, p.dbHandle)
}

func (p *PGSQLProvider) getDefenderHosts(from int64, limit int) ([]*DefenderEntry, error) {
	return sqlCommonGetDefenderHosts(from, limit, p.dbHandle)
}

func (p *PGSQLProvider) getDefenderHostByIP(ip string, from int64) (*DefenderEntry, error) {
	return sqlCommonGetDefenderHostByIP(ip, from, p.dbHandle)
}

func (p *PGSQLProvider) isDefenderHostBanned(ip string) (*DefenderEntry, error) {
	return sqlCommonIsDefenderHostBanned(ip, p.dbHandle)
}

func (p *PGSQLProvider) updateDefenderBanTime(ip string, minutes int) error {
	return sqlCommonDefenderIncrementBanTime(ip, minutes, p.dbHandle)
}

func (p *PGSQLProvider) deleteDefenderHost(ip string) error {
	return sqlCommonDeleteDefenderHost(ip, p.dbHandle)
}

func (p *PGSQLProvider) addDefenderEvent(ip string, score int) error {
	return sqlCommonAddDefenderHostAndEvent(ip, score, p.dbHandle)
}

func (p *PGSQLProvider) setDefenderBanTime(ip string, banTime int64) error {
	return sqlCommonSetDefenderBanTime(ip, banTime, p.dbHandle)
}

func (p *PGSQLProvider) cleanupDefender(from int64) error {
	return sqlCommonDefenderCleanup(from, p.dbHandle)
}

func (p *PGSQLProvider) close() error {
	return p.dbHandle.Close()
}

func (p PGSQLProvider) reloadConfig() error {
func (p *PGSQLProvider) reloadConfig() error {
	return nil
}

// initializeDatabase creates the initial database structure
func (p PGSQLProvider) initializeDatabase() error {
	sqlUsers := strings.Replace(pgsqlUsersTableSQL, "{{users}}", config.UsersTable, 1)
	tx, err := p.dbHandle.Begin()
	if err != nil {
		return err
func (p *PGSQLProvider) initializeDatabase() error {
	dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, false)
	if err == nil && dbVersion.Version > 0 {
		return ErrNoInitRequired
	}
	_, err = tx.Exec(sqlUsers)
	if err != nil {
		tx.Rollback()
		return err
	if errors.Is(err, sql.ErrNoRows) {
		return errSchemaVersionEmpty
	}
	_, err = tx.Exec(pgsqlSchemaTableSQL)
	if err != nil {
		tx.Rollback()
		return err
	initialSQL := strings.ReplaceAll(pgsqlInitial, "{{schema_version}}", sqlTableSchemaVersion)
	initialSQL = strings.ReplaceAll(initialSQL, "{{admins}}", sqlTableAdmins)
	initialSQL = strings.ReplaceAll(initialSQL, "{{folders}}", sqlTableFolders)
	initialSQL = strings.ReplaceAll(initialSQL, "{{users}}", sqlTableUsers)
	initialSQL = strings.ReplaceAll(initialSQL, "{{folders_mapping}}", sqlTableFoldersMapping)
	initialSQL = strings.ReplaceAll(initialSQL, "{{prefix}}", config.SQLTablesPrefix)
	if config.Driver == CockroachDataProviderName {
		// Cockroach does not support deferrable constraint validation, we don't need them,
		// we keep these definitions for the PostgreSQL driver to avoid changes for users
		// upgrading from old SFTPGo versions
		initialSQL = strings.ReplaceAll(initialSQL, "DEFERRABLE INITIALLY DEFERRED", "")
	}
	_, err = tx.Exec(initialDBVersionSQL)
	if err != nil {
		tx.Rollback()
		return err
	}
	return tx.Commit()

	return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, []string{initialSQL}, 10)
}
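The initialization above expands {{placeholder}} tokens into the configured table names before executing the SQL, with a shared {{prefix}} applied to index and constraint names. A small, hypothetical sketch of that templating pattern (the table names and prefix here are invented for illustration):

package main

import (
	"fmt"
	"strings"
)

// renderSQL mirrors how the provider expands its SQL templates: each
// {{name}} placeholder becomes a concrete table name, and {{prefix}}
// lets every object carry the same configurable prefix.
func renderSQL(tmpl, prefix string) string {
	tables := map[string]string{
		"{{schema_version}}": prefix + "schema_version",
		"{{admins}}":         prefix + "admins",
		"{{folders}}":        prefix + "folders",
		"{{users}}":          prefix + "users",
		"{{prefix}}":         prefix,
	}
	for placeholder, name := range tables {
		tmpl = strings.ReplaceAll(tmpl, placeholder, name)
	}
	return tmpl
}

func main() {
	const tmpl = `CREATE INDEX "{{prefix}}users_updated_at_idx" ON "{{users}}" ("updated_at");`
	fmt.Println(renderSQL(tmpl, "sftpgo_"))
	// CREATE INDEX "sftpgo_users_updated_at_idx" ON "sftpgo_users" ("updated_at");
}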
func (p PGSQLProvider) migrateDatabase() error {
	dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle)
//nolint:dupl
func (p *PGSQLProvider) migrateDatabase() error {
	dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
	if err != nil {
		return err
	}
	if dbVersion.Version == sqlDatabaseVersion {
		providerLog(logger.LevelDebug, "sql database is updated, current version: %v", dbVersion.Version)
		return nil

	switch version := dbVersion.Version; {
	case version == sqlDatabaseVersion:
		providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", version)
		return ErrNoInitRequired
	case version < 10:
		err = fmt.Errorf("database version %v is too old, please see the upgrading docs", version)
		providerLog(logger.LevelError, "%v", err)
		logger.ErrorToConsole("%v", err)
		return err
	case version == 10:
		return updatePGSQLDatabaseFromV10(p.dbHandle)
	case version == 11:
		return updatePGSQLDatabaseFromV11(p.dbHandle)
	case version == 12:
		return updatePGSQLDatabaseFromV12(p.dbHandle)
	case version == 13:
		return updatePGSQLDatabaseFromV13(p.dbHandle)
	case version == 14:
		return updatePGSQLDatabaseFromV14(p.dbHandle)
	default:
		if version > sqlDatabaseVersion {
			providerLog(logger.LevelError, "database version %v is newer than the supported one: %v", version,
				sqlDatabaseVersion)
			logger.WarnToConsole("database version %v is newer than the supported one: %v", version,
				sqlDatabaseVersion)
			return nil
		}
		return fmt.Errorf("database version not handled: %v", version)
	}
	if dbVersion.Version == 1 {
		return updatePGSQLDatabaseFrom1To2(p.dbHandle)
	}
	return nil
}

func updatePGSQLDatabaseFrom1To2(dbHandle *sql.DB) error {
	providerLog(logger.LevelInfo, "updating database version: 1 -> 2")
	sql := strings.Replace(pgsqlUsersV2SQL, "{{users}}", config.UsersTable, 1)
	tx, err := dbHandle.Begin()
func (p *PGSQLProvider) revertDatabase(targetVersion int) error {
	dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
	if err != nil {
		return err
	}
	_, err = tx.Exec(sql)
	if err != nil {
		tx.Rollback()
		return err
	if dbVersion.Version == targetVersion {
		return errors.New("current version match target version, nothing to do")
	}
	err = sqlCommonUpdateDatabaseVersionWithTX(tx, 2)
	if err != nil {
		tx.Rollback()
		return err

	switch dbVersion.Version {
	case 15:
		return downgradePGSQLDatabaseFromV15(p.dbHandle)
	case 14:
		return downgradePGSQLDatabaseFromV14(p.dbHandle)
	case 13:
		return downgradePGSQLDatabaseFromV13(p.dbHandle)
	case 12:
		return downgradePGSQLDatabaseFromV12(p.dbHandle)
	case 11:
		return downgradePGSQLDatabaseFromV11(p.dbHandle)
	default:
		return fmt.Errorf("database version not handled: %v", dbVersion.Version)
	}
	return tx.Commit()
}

func (p *PGSQLProvider) resetDatabase() error {
	sql := strings.ReplaceAll(pgsqlResetSQL, "{{schema_version}}", sqlTableSchemaVersion)
	sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
	sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
	sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
	sql = strings.ReplaceAll(sql, "{{folders_mapping}}", sqlTableFoldersMapping)
	sql = strings.ReplaceAll(sql, "{{api_keys}}", sqlTableAPIKeys)
	sql = strings.ReplaceAll(sql, "{{shares}}", sqlTableShares)
	sql = strings.ReplaceAll(sql, "{{defender_events}}", sqlTableDefenderEvents)
	sql = strings.ReplaceAll(sql, "{{defender_hosts}}", sqlTableDefenderHosts)
	return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, []string{sql}, 0)
}

func updatePGSQLDatabaseFromV10(dbHandle *sql.DB) error {
	if err := updatePGSQLDatabaseFrom10To11(dbHandle); err != nil {
		return err
	}
	return updatePGSQLDatabaseFromV11(dbHandle)
}

func updatePGSQLDatabaseFromV11(dbHandle *sql.DB) error {
	if err := updatePGSQLDatabaseFrom11To12(dbHandle); err != nil {
		return err
	}
	return updatePGSQLDatabaseFromV12(dbHandle)
}

func updatePGSQLDatabaseFromV12(dbHandle *sql.DB) error {
	if err := updatePGSQLDatabaseFrom12To13(dbHandle); err != nil {
		return err
	}
	return updatePGSQLDatabaseFromV13(dbHandle)
}

func updatePGSQLDatabaseFromV13(dbHandle *sql.DB) error {
	if err := updatePGSQLDatabaseFrom13To14(dbHandle); err != nil {
		return err
	}
	return updatePGSQLDatabaseFromV14(dbHandle)
}

func updatePGSQLDatabaseFromV14(dbHandle *sql.DB) error {
	return updatePGSQLDatabaseFrom14To15(dbHandle)
}

func downgradePGSQLDatabaseFromV15(dbHandle *sql.DB) error {
	if err := downgradePGSQLDatabaseFrom15To14(dbHandle); err != nil {
		return err
	}
	return downgradePGSQLDatabaseFromV14(dbHandle)
}

func downgradePGSQLDatabaseFromV14(dbHandle *sql.DB) error {
	if err := downgradePGSQLDatabaseFrom14To13(dbHandle); err != nil {
		return err
	}
	return downgradePGSQLDatabaseFromV13(dbHandle)
}

func downgradePGSQLDatabaseFromV13(dbHandle *sql.DB) error {
	if err := downgradePGSQLDatabaseFrom13To12(dbHandle); err != nil {
		return err
	}
	return downgradePGSQLDatabaseFromV12(dbHandle)
}

func downgradePGSQLDatabaseFromV12(dbHandle *sql.DB) error {
	if err := downgradePGSQLDatabaseFrom12To11(dbHandle); err != nil {
		return err
	}
	return downgradePGSQLDatabaseFromV11(dbHandle)
}

func downgradePGSQLDatabaseFromV11(dbHandle *sql.DB) error {
	return downgradePGSQLDatabaseFrom11To10(dbHandle)
}
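These one-step helpers chain: an upgrade from any supported schema version walks through every intermediate version in order, and each step persists the new version number before the next one runs. A generic, hypothetical sketch of the same chaining idea (the step functions below are stand-ins, not SFTPGo APIs):

package main

import "fmt"

// step applies one schema upgrade; the real helpers run SQL and bump the
// stored schema version via sqlCommonExecSQLAndUpdateDBVersion.
type step func() error

// migrate chains the remaining steps so that starting from any supported
// version always lands on the target, mirroring how
// updatePGSQLDatabaseFromV10 falls through to V11, V12, and so on.
func migrate(current, target int, steps map[int]step) error {
	for v := current; v < target; v++ {
		s, ok := steps[v]
		if !ok {
			return fmt.Errorf("database version not handled: %v", v)
		}
		if err := s(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	steps := map[int]step{
		10: func() error { fmt.Println("10 -> 11"); return nil },
		11: func() error { fmt.Println("11 -> 12"); return nil },
		12: func() error { fmt.Println("12 -> 13"); return nil },
		13: func() error { fmt.Println("13 -> 14"); return nil },
		14: func() error { fmt.Println("14 -> 15"); return nil },
	}
	if err := migrate(12, 15, steps); err != nil { // prints 12 -> 13, 13 -> 14, 14 -> 15
		fmt.Println("migration failed:", err)
	}
}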
func updatePGSQLDatabaseFrom13To14(dbHandle *sql.DB) error {
	logger.InfoToConsole("updating database version: 13 -> 14")
	providerLog(logger.LevelInfo, "updating database version: 13 -> 14")
	sql := strings.ReplaceAll(pgsqlV14SQL, "{{shares}}", sqlTableShares)
	sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
	sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 14)
}

func updatePGSQLDatabaseFrom14To15(dbHandle *sql.DB) error {
	logger.InfoToConsole("updating database version: 14 -> 15")
	providerLog(logger.LevelInfo, "updating database version: 14 -> 15")
	sql := strings.ReplaceAll(pgsqlV15SQL, "{{defender_events}}", sqlTableDefenderEvents)
	sql = strings.ReplaceAll(sql, "{{defender_hosts}}", sqlTableDefenderHosts)
	sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 15)
}

func downgradePGSQLDatabaseFrom15To14(dbHandle *sql.DB) error {
	logger.InfoToConsole("downgrading database version: 15 -> 14")
	providerLog(logger.LevelInfo, "downgrading database version: 15 -> 14")
	sql := strings.ReplaceAll(pgsqlV15DownSQL, "{{defender_events}}", sqlTableDefenderEvents)
	sql = strings.ReplaceAll(sql, "{{defender_hosts}}", sqlTableDefenderHosts)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 14)
}

func downgradePGSQLDatabaseFrom14To13(dbHandle *sql.DB) error {
	logger.InfoToConsole("downgrading database version: 14 -> 13")
	providerLog(logger.LevelInfo, "downgrading database version: 14 -> 13")
	sql := strings.ReplaceAll(pgsqlV14DownSQL, "{{shares}}", sqlTableShares)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 13)
}

func updatePGSQLDatabaseFrom12To13(dbHandle *sql.DB) error {
	logger.InfoToConsole("updating database version: 12 -> 13")
	providerLog(logger.LevelInfo, "updating database version: 12 -> 13")
	sql := strings.ReplaceAll(pgsqlV13SQL, "{{users}}", sqlTableUsers)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 13)
}

func downgradePGSQLDatabaseFrom13To12(dbHandle *sql.DB) error {
	logger.InfoToConsole("downgrading database version: 13 -> 12")
	providerLog(logger.LevelInfo, "downgrading database version: 13 -> 12")
	sql := strings.ReplaceAll(pgsqlV13DownSQL, "{{users}}", sqlTableUsers)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 12)
}

func updatePGSQLDatabaseFrom11To12(dbHandle *sql.DB) error {
	logger.InfoToConsole("updating database version: 11 -> 12")
	providerLog(logger.LevelInfo, "updating database version: 11 -> 12")
	sql := strings.ReplaceAll(pgsqlV12SQL, "{{users}}", sqlTableUsers)
	sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
	sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 12)
}

func downgradePGSQLDatabaseFrom12To11(dbHandle *sql.DB) error {
	logger.InfoToConsole("downgrading database version: 12 -> 11")
	providerLog(logger.LevelInfo, "downgrading database version: 12 -> 11")
	sql := strings.ReplaceAll(pgsqlV12DownSQL, "{{users}}", sqlTableUsers)
	sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
	sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 11)
}

func updatePGSQLDatabaseFrom10To11(dbHandle *sql.DB) error {
	logger.InfoToConsole("updating database version: 10 -> 11")
	providerLog(logger.LevelInfo, "updating database version: 10 -> 11")
	sql := strings.ReplaceAll(pgsqlV11SQL, "{{users}}", sqlTableUsers)
	sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
	sql = strings.ReplaceAll(sql, "{{api_keys}}", sqlTableAPIKeys)
	sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 11)
}

func downgradePGSQLDatabaseFrom11To10(dbHandle *sql.DB) error {
	logger.InfoToConsole("downgrading database version: 11 -> 10")
	providerLog(logger.LevelInfo, "downgrading database version: 11 -> 10")
	sql := strings.ReplaceAll(pgsqlV11DownSQL, "{{api_keys}}", sqlTableAPIKeys)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 10)
}
18
dataprovider/pgsql_disabled.go
Normal file
@@ -0,0 +1,18 @@
//go:build nopgsql
// +build nopgsql

package dataprovider

import (
	"errors"

	"github.com/drakkan/sftpgo/v2/version"
)

func init() {
	version.AddFeature("-pgsql")
}

func initializePGSQLProvider() error {
	return errors.New("PostgreSQL disabled at build time")
}
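This stub compiles in place of the real provider when the nopgsql build tag is set, registering "-pgsql" instead of "+pgsql" so the binary can report which drivers it was built with. A toy sketch of that feature-registry idea (the registry below is invented for illustration; SFTPGo's own lives in its version package):

package main

import (
	"fmt"
	"sort"
	"sync"
)

// A tiny stand-in for version.AddFeature: each driver's init() records
// "+name" when compiled in, and the build-tag stub records "-name" instead.
var (
	mu       sync.Mutex
	features []string
)

func addFeature(f string) {
	mu.Lock()
	defer mu.Unlock()
	features = append(features, f)
}

func main() {
	addFeature("+sqlite") // default build
	addFeature("-pgsql")  // e.g. a build made with -tags nopgsql
	sort.Strings(features)
	fmt.Println(features) // [+sqlite -pgsql]
}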
183
dataprovider/quotaupdater.go
Normal file
@@ -0,0 +1,183 @@
package dataprovider

import (
	"sync"
	"time"

	"github.com/drakkan/sftpgo/v2/logger"
)

var delayedQuotaUpdater quotaUpdater

func init() {
	delayedQuotaUpdater = newQuotaUpdater()
}

type quotaObject struct {
	size  int64
	files int
}

type quotaUpdater struct {
	paramsMutex sync.RWMutex
	waitTime    time.Duration
	sync.RWMutex
	pendingUserQuotaUpdates   map[string]quotaObject
	pendingFolderQuotaUpdates map[string]quotaObject
}

func newQuotaUpdater() quotaUpdater {
	return quotaUpdater{
		pendingUserQuotaUpdates:   make(map[string]quotaObject),
		pendingFolderQuotaUpdates: make(map[string]quotaObject),
	}
}

func (q *quotaUpdater) start() {
	q.setWaitTime(config.DelayedQuotaUpdate)

	go q.loop()
}

func (q *quotaUpdater) loop() {
	waitTime := q.getWaitTime()
	providerLog(logger.LevelDebug, "delayed quota update loop started, wait time: %v", waitTime)
	for waitTime > 0 {
		// We do this with a time.Sleep instead of a time.Ticker because we don't know
		// how long each quota processing cycle will take, and we want to make
		// sure we wait the configured seconds between each iteration
		time.Sleep(waitTime)
		providerLog(logger.LevelDebug, "delayed quota update check start")
		q.storeUsersQuota()
		q.storeFoldersQuota()
		providerLog(logger.LevelDebug, "delayed quota update check end")
		waitTime = q.getWaitTime()
	}
	providerLog(logger.LevelDebug, "delayed quota update loop ended, wait time: %v", waitTime)
}

func (q *quotaUpdater) setWaitTime(secs int) {
	q.paramsMutex.Lock()
	defer q.paramsMutex.Unlock()

	q.waitTime = time.Duration(secs) * time.Second
}

func (q *quotaUpdater) getWaitTime() time.Duration {
	q.paramsMutex.RLock()
	defer q.paramsMutex.RUnlock()

	return q.waitTime
}

func (q *quotaUpdater) resetUserQuota(username string) {
	q.Lock()
	defer q.Unlock()

	delete(q.pendingUserQuotaUpdates, username)
}

func (q *quotaUpdater) updateUserQuota(username string, files int, size int64) {
	q.Lock()
	defer q.Unlock()

	obj := q.pendingUserQuotaUpdates[username]
	obj.size += size
	obj.files += files
	if obj.files == 0 && obj.size == 0 {
		delete(q.pendingUserQuotaUpdates, username)
		return
	}
	q.pendingUserQuotaUpdates[username] = obj
}

func (q *quotaUpdater) getUserPendingQuota(username string) (int, int64) {
	q.RLock()
	defer q.RUnlock()

	obj := q.pendingUserQuotaUpdates[username]

	return obj.files, obj.size
}

func (q *quotaUpdater) resetFolderQuota(name string) {
	q.Lock()
	defer q.Unlock()

	delete(q.pendingFolderQuotaUpdates, name)
}

func (q *quotaUpdater) updateFolderQuota(name string, files int, size int64) {
	q.Lock()
	defer q.Unlock()

	obj := q.pendingFolderQuotaUpdates[name]
	obj.size += size
	obj.files += files
	if obj.files == 0 && obj.size == 0 {
		delete(q.pendingFolderQuotaUpdates, name)
		return
	}
	q.pendingFolderQuotaUpdates[name] = obj
}

func (q *quotaUpdater) getFolderPendingQuota(name string) (int, int64) {
	q.RLock()
	defer q.RUnlock()

	obj := q.pendingFolderQuotaUpdates[name]

	return obj.files, obj.size
}

func (q *quotaUpdater) getUsernames() []string {
	q.RLock()
	defer q.RUnlock()

	result := make([]string, 0, len(q.pendingUserQuotaUpdates))
	for username := range q.pendingUserQuotaUpdates {
		result = append(result, username)
	}

	return result
}

func (q *quotaUpdater) getFoldernames() []string {
	q.RLock()
	defer q.RUnlock()

	result := make([]string, 0, len(q.pendingFolderQuotaUpdates))
	for name := range q.pendingFolderQuotaUpdates {
		result = append(result, name)
	}

	return result
}

func (q *quotaUpdater) storeUsersQuota() {
	for _, username := range q.getUsernames() {
		files, size := q.getUserPendingQuota(username)
		if size != 0 || files != 0 {
			err := provider.updateQuota(username, files, size, false)
			if err != nil {
				providerLog(logger.LevelWarn, "unable to update quota delayed for user %#v: %v", username, err)
				continue
			}
			q.updateUserQuota(username, -files, -size)
		}
	}
}

func (q *quotaUpdater) storeFoldersQuota() {
	for _, name := range q.getFoldernames() {
		files, size := q.getFolderPendingQuota(name)
		if size != 0 || files != 0 {
			err := provider.updateFolderQuota(name, files, size, false)
			if err != nil {
				providerLog(logger.LevelWarn, "unable to update quota delayed for folder %#v: %v", name, err)
				continue
			}
			q.updateFolderQuota(name, -files, -size)
		}
	}
}
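The updater accumulates per-user and per-folder deltas and flushes them on a timer; a delta that nets to zero is dropped before it is ever written, and a successful flush is recorded by applying the negative delta back. A minimal, standalone sketch of that accumulate-and-cancel behavior (the provider and locking stripped out for brevity):

package main

import "fmt"

type quotaObject struct {
	size  int64
	files int
}

// update mirrors quotaUpdater.updateUserQuota: deltas accumulate per key, and
// an entry whose pending files and size both reach zero is dropped, so the
// flush loop has nothing to store for it.
func update(pending map[string]quotaObject, key string, files int, size int64) {
	obj := pending[key]
	obj.files += files
	obj.size += size
	if obj.files == 0 && obj.size == 0 {
		delete(pending, key)
		return
	}
	pending[key] = obj
}

func main() {
	pending := make(map[string]quotaObject)
	update(pending, "alice", 1, 1024)   // upload recorded
	update(pending, "alice", -1, -1024) // matching delete cancels it
	fmt.Println(len(pending))           // 0: nothing left to flush
}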
306
dataprovider/share.go
Normal file
@@ -0,0 +1,306 @@
package dataprovider

import (
	"encoding/json"
	"fmt"
	"net"
	"strings"
	"time"

	"github.com/alexedwards/argon2id"
	"golang.org/x/crypto/bcrypt"

	"github.com/drakkan/sftpgo/v2/logger"
	"github.com/drakkan/sftpgo/v2/util"
)

// ShareScope defines the supported share scopes
type ShareScope int

// Supported share scopes
const (
	ShareScopeRead ShareScope = iota + 1
	ShareScopeWrite
)

const (
	redactedPassword = "[**redacted**]"
)

// Share defines files and or directories shared with external users
type Share struct {
	// Database unique identifier
	ID int64 `json:"-"`
	// Unique ID used to access this object
	ShareID     string     `json:"id"`
	Name        string     `json:"name"`
	Description string     `json:"description,omitempty"`
	Scope       ShareScope `json:"scope"`
	// Paths to files or directories, for ShareScopeWrite it must be exactly one directory
	Paths []string `json:"paths"`
	// Username who shared this object
	Username  string `json:"username"`
	CreatedAt int64  `json:"created_at"`
	UpdatedAt int64  `json:"updated_at"`
	// 0 means never used
	LastUseAt int64 `json:"last_use_at,omitempty"`
	// ExpiresAt expiration date/time as unix timestamp in milliseconds, 0 means no expiration
	ExpiresAt int64 `json:"expires_at,omitempty"`
	// Optional password to protect the share
	Password string `json:"password"`
	// Limit the available access tokens, 0 means no limit
	MaxTokens int `json:"max_tokens,omitempty"`
	// Used tokens
	UsedTokens int `json:"used_tokens,omitempty"`
	// Limit the share availability to these IPs/CIDR networks
	AllowFrom []string `json:"allow_from,omitempty"`
	// set for restores, we don't have to validate the expiration date
	// otherwise we fail to restore existing shares and we have to insert
	// all the previous values with no modifications
	IsRestore bool `json:"-"`
}

// GetScopeAsString returns the share's scope as string.
// Used in web pages
func (s *Share) GetScopeAsString() string {
	switch s.Scope {
	case ShareScopeRead:
		return "Read"
	default:
		return "Write"
	}
}

// IsExpired returns true if the share is expired
func (s *Share) IsExpired() bool {
	if s.ExpiresAt > 0 {
		return s.ExpiresAt < util.GetTimeAsMsSinceEpoch(time.Now())
	}
	return false
}

// GetInfoString returns share's info as string.
func (s *Share) GetInfoString() string {
	var result strings.Builder
	if s.ExpiresAt > 0 {
		t := util.GetTimeFromMsecSinceEpoch(s.ExpiresAt)
		result.WriteString(fmt.Sprintf("Expiration: %v. ", t.Format("2006-01-02 15:04"))) // YYYY-MM-DD HH:MM
	}
	if s.LastUseAt > 0 {
		t := util.GetTimeFromMsecSinceEpoch(s.LastUseAt)
		result.WriteString(fmt.Sprintf("Last use: %v. ", t.Format("2006-01-02 15:04")))
	}
	if s.MaxTokens > 0 {
		result.WriteString(fmt.Sprintf("Usage: %v/%v. ", s.UsedTokens, s.MaxTokens))
	} else {
		result.WriteString(fmt.Sprintf("Used tokens: %v. ", s.UsedTokens))
	}
	if len(s.AllowFrom) > 0 {
		result.WriteString(fmt.Sprintf("Allowed IP/Mask: %v. ", len(s.AllowFrom)))
	}
	if s.Password != "" {
		result.WriteString("Password protected.")
	}
	return result.String()
}

// GetAllowedFromAsString returns the allowed IP as comma separated string
func (s *Share) GetAllowedFromAsString() string {
	return strings.Join(s.AllowFrom, ",")
}

func (s *Share) getACopy() Share {
	allowFrom := make([]string, len(s.AllowFrom))
	copy(allowFrom, s.AllowFrom)

	return Share{
		ID:          s.ID,
		ShareID:     s.ShareID,
		Name:        s.Name,
		Description: s.Description,
		Scope:       s.Scope,
		Paths:       s.Paths,
		Username:    s.Username,
		CreatedAt:   s.CreatedAt,
		UpdatedAt:   s.UpdatedAt,
		LastUseAt:   s.LastUseAt,
		ExpiresAt:   s.ExpiresAt,
		Password:    s.Password,
		MaxTokens:   s.MaxTokens,
		UsedTokens:  s.UsedTokens,
		AllowFrom:   allowFrom,
	}
}

// RenderAsJSON implements the renderer interface used within plugins
func (s *Share) RenderAsJSON(reload bool) ([]byte, error) {
	if reload {
		share, err := provider.shareExists(s.ShareID, s.Username)
		if err != nil {
			providerLog(logger.LevelError, "unable to reload share before rendering as json: %v", err)
			return nil, err
		}
		share.HideConfidentialData()
		return json.Marshal(share)
	}
	s.HideConfidentialData()
	return json.Marshal(s)
}

// HideConfidentialData hides share confidential data
func (s *Share) HideConfidentialData() {
	if s.Password != "" {
		s.Password = redactedPassword
	}
}

// HasRedactedPassword returns true if this share has a redacted password
func (s *Share) HasRedactedPassword() bool {
	return s.Password == redactedPassword
}

func (s *Share) hashPassword() error {
	if s.Password != "" && !util.IsStringPrefixInSlice(s.Password, internalHashPwdPrefixes) {
		if config.PasswordHashing.Algo == HashingAlgoBcrypt {
			hashed, err := bcrypt.GenerateFromPassword([]byte(s.Password), config.PasswordHashing.BcryptOptions.Cost)
			if err != nil {
				return err
			}
			s.Password = string(hashed)
		} else {
			hashed, err := argon2id.CreateHash(s.Password, argon2Params)
			if err != nil {
				return err
			}
			s.Password = hashed
		}
	}
	return nil
}

func (s *Share) validatePaths() error {
	var paths []string
	for _, p := range s.Paths {
		p = strings.TrimSpace(p)
		if p != "" {
			paths = append(paths, p)
		}
	}
	s.Paths = paths
	if len(s.Paths) == 0 {
		return util.NewValidationError("at least a shared path is required")
	}
	for idx := range s.Paths {
		s.Paths[idx] = util.CleanPath(s.Paths[idx])
	}
	s.Paths = util.RemoveDuplicates(s.Paths)
	if s.Scope == ShareScopeWrite && len(s.Paths) != 1 {
		return util.NewValidationError("the write share scope requires exactly one path")
	}
	// check nested paths
	if len(s.Paths) > 1 {
		for idx := range s.Paths {
			for innerIdx := range s.Paths {
				if idx == innerIdx {
					continue
				}
				if isVirtualDirOverlapped(s.Paths[idx], s.Paths[innerIdx], true) {
					return util.NewGenericError("shared paths cannot be nested")
				}
			}
		}
	}
	return nil
}
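validatePaths rejects share paths that nest inside each other. isVirtualDirOverlapped is internal to SFTPGo; a simplified stand-in shows the idea on cleaned virtual paths:

package main

import (
	"fmt"
	"strings"
)

// overlapped is a simplified stand-in for isVirtualDirOverlapped: one cleaned
// virtual path nests inside another when it equals it or extends it past a "/".
func overlapped(a, b string) bool {
	if a == b {
		return true
	}
	return strings.HasPrefix(a, b+"/") || strings.HasPrefix(b, a+"/")
}

func main() {
	fmt.Println(overlapped("/docs", "/docs/reports")) // true: nested, rejected
	fmt.Println(overlapped("/docs", "/media"))        // false: allowed together
}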
func (s *Share) validate() error {
	if s.ShareID == "" {
		return util.NewValidationError("share_id is mandatory")
	}
	if s.Name == "" {
		return util.NewValidationError("name is mandatory")
	}
	if s.Scope != ShareScopeRead && s.Scope != ShareScopeWrite {
		return util.NewValidationError(fmt.Sprintf("invalid scope: %v", s.Scope))
	}
	if err := s.validatePaths(); err != nil {
		return err
	}
	if s.ExpiresAt > 0 {
		if !s.IsRestore && s.ExpiresAt < util.GetTimeAsMsSinceEpoch(time.Now()) {
			return util.NewValidationError("expiration must be in the future")
		}
	} else {
		s.ExpiresAt = 0
	}
	if s.MaxTokens < 0 {
		return util.NewValidationError("invalid max tokens")
	}
	if s.Username == "" {
		return util.NewValidationError("username is mandatory")
	}
	if s.HasRedactedPassword() {
		return util.NewValidationError("cannot save a share with a redacted password")
	}
	if err := s.hashPassword(); err != nil {
		return err
	}
	s.AllowFrom = util.RemoveDuplicates(s.AllowFrom)
	for _, IPMask := range s.AllowFrom {
		_, _, err := net.ParseCIDR(IPMask)
		if err != nil {
			return util.NewValidationError(fmt.Sprintf("could not parse allow from entry %#v : %v", IPMask, err))
		}
	}
	return nil
}

// CheckPassword verifies the share password if set
func (s *Share) CheckPassword(password string) (bool, error) {
	if s.Password == "" {
		return true, nil
	}
	if password == "" {
		return false, ErrInvalidCredentials
	}
	if strings.HasPrefix(s.Password, bcryptPwdPrefix) {
		if err := bcrypt.CompareHashAndPassword([]byte(s.Password), []byte(password)); err != nil {
			return false, ErrInvalidCredentials
		}
		return true, nil
	}
	match, err := argon2id.ComparePasswordAndHash(password, s.Password)
	if !match || err != nil {
		return false, ErrInvalidCredentials
	}
	return match, err
}
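CheckPassword dispatches on the stored hash's prefix: bcrypt hashes are verified with bcrypt, everything else is treated as argon2id. A runnable sketch of the same dispatch, using the two libraries the file already imports (the "$2" prefix check below stands in for SFTPGo's bcryptPwdPrefix constant):

package main

import (
	"fmt"
	"strings"

	"github.com/alexedwards/argon2id"
	"golang.org/x/crypto/bcrypt"
)

// checkPassword mirrors Share.CheckPassword: the stored hash's prefix decides
// which verifier runs; "$2..."-style prefixes mean bcrypt, anything else is
// assumed to be argon2id.
func checkPassword(stored, clear string) bool {
	if strings.HasPrefix(stored, "$2") {
		return bcrypt.CompareHashAndPassword([]byte(stored), []byte(clear)) == nil
	}
	match, err := argon2id.ComparePasswordAndHash(clear, stored)
	return err == nil && match
}

func main() {
	hash, _ := argon2id.CreateHash("secret", argon2id.DefaultParams)
	fmt.Println(checkPassword(hash, "secret")) // true
	fmt.Println(checkPassword(hash, "nope"))   // false
}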
// IsUsable checks if the share is usable from the specified IP
func (s *Share) IsUsable(ip string) (bool, error) {
	if s.MaxTokens > 0 && s.UsedTokens >= s.MaxTokens {
		return false, util.NewRecordNotFoundError("max share usage exceeded")
	}
	if s.ExpiresAt > 0 {
		if s.ExpiresAt < util.GetTimeAsMsSinceEpoch(time.Now()) {
			return false, util.NewRecordNotFoundError("share expired")
		}
	}
	if len(s.AllowFrom) == 0 {
		return true, nil
	}
	parsedIP := net.ParseIP(ip)
	if parsedIP == nil {
		return false, ErrLoginNotAllowedFromIP
	}
	for _, ipMask := range s.AllowFrom {
		_, network, err := net.ParseCIDR(ipMask)
		if err != nil {
			continue
		}
		if network.Contains(parsedIP) {
			return true, nil
		}
	}
	return false, ErrLoginNotAllowedFromIP
}
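The allow-list check in IsUsable parses each entry as a CIDR network and admits the client only if its IP falls inside one of them; unparsable entries are skipped. A standalone sketch:

package main

import (
	"fmt"
	"net"
)

// allowed mirrors Share.IsUsable's address filter: an empty allow list means
// everyone, otherwise the client IP must fall inside one of the CIDR networks.
func allowed(ip string, allowFrom []string) bool {
	if len(allowFrom) == 0 {
		return true
	}
	parsed := net.ParseIP(ip)
	if parsed == nil {
		return false
	}
	for _, mask := range allowFrom {
		_, network, err := net.ParseCIDR(mask)
		if err != nil {
			continue // invalid entries are skipped, as in the original
		}
		if network.Contains(parsed) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(allowed("192.168.1.10", []string{"192.168.1.0/24"})) // true
	fmt.Println(allowed("10.0.0.1", []string{"192.168.1.0/24"}))     // false
}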
File diff suppressed because it is too large
@@ -1,25 +1,106 @@
//go:build !nosqlite
// +build !nosqlite

package dataprovider

import (
	"context"
	"crypto/x509"
	"database/sql"
	"errors"
	"fmt"
	"path/filepath"
	"strings"

	"github.com/drakkan/sftpgo/logger"
	"github.com/drakkan/sftpgo/utils"
	// we import go-sqlite3 here to be able to disable SQLite support using a build tag
	_ "github.com/mattn/go-sqlite3"

	"github.com/drakkan/sftpgo/v2/logger"
	"github.com/drakkan/sftpgo/v2/util"
	"github.com/drakkan/sftpgo/v2/version"
	"github.com/drakkan/sftpgo/v2/vfs"
)

const (
	sqliteUsersTableSQL = `CREATE TABLE "{{users}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255)
NOT NULL UNIQUE, "password" varchar(255) NULL, "public_keys" text NULL, "home_dir" varchar(255) NOT NULL, "uid" integer NOT NULL,
"gid" integer NOT NULL, "max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL,
"permissions" text NOT NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL,
"last_quota_update" bigint NOT NULL, "upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL,
"expiration_date" bigint NOT NULL, "last_login" bigint NOT NULL, "status" integer NOT NULL, "filters" text NULL,
"filesystem" text NULL);`
	sqliteSchemaTableSQL = `CREATE TABLE "schema_version" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "version" integer NOT NULL);`
	sqliteUsersV2SQL     = `ALTER TABLE "{{users}}" ADD COLUMN "virtual_folders" text NULL;`
	sqliteResetSQL       = `DROP TABLE IF EXISTS "{{api_keys}}";
DROP TABLE IF EXISTS "{{folders_mapping}}";
DROP TABLE IF EXISTS "{{admins}}";
DROP TABLE IF EXISTS "{{folders}}";
DROP TABLE IF EXISTS "{{shares}}";
DROP TABLE IF EXISTS "{{users}}";
DROP TABLE IF EXISTS "{{defender_events}}";
DROP TABLE IF EXISTS "{{defender_hosts}}";
DROP TABLE IF EXISTS "{{schema_version}}";
`
	sqliteInitialSQL = `CREATE TABLE "{{schema_version}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "version" integer NOT NULL);
CREATE TABLE "{{admins}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE,
"description" varchar(512) NULL, "password" varchar(255) NOT NULL, "email" varchar(255) NULL, "status" integer NOT NULL,
"permissions" text NOT NULL, "filters" text NULL, "additional_info" text NULL);
CREATE TABLE "{{folders}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "name" varchar(255) NOT NULL UNIQUE,
"description" varchar(512) NULL, "path" varchar(512) NULL, "used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL,
"last_quota_update" bigint NOT NULL, "filesystem" text NULL);
CREATE TABLE "{{users}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "username" varchar(255) NOT NULL UNIQUE,
"status" integer NOT NULL, "expiration_date" bigint NOT NULL, "description" varchar(512) NULL, "password" text NULL,
"public_keys" text NULL, "home_dir" varchar(512) NOT NULL, "uid" integer NOT NULL, "gid" integer NOT NULL,
"max_sessions" integer NOT NULL, "quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "permissions" text NOT NULL,
"used_quota_size" bigint NOT NULL, "used_quota_files" integer NOT NULL, "last_quota_update" bigint NOT NULL,
"upload_bandwidth" integer NOT NULL, "download_bandwidth" integer NOT NULL, "last_login" bigint NOT NULL, "filters" text NULL,
"filesystem" text NULL, "additional_info" text NULL);
CREATE TABLE "{{folders_mapping}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "virtual_path" varchar(512) NOT NULL,
"quota_size" bigint NOT NULL, "quota_files" integer NOT NULL, "folder_id" integer NOT NULL REFERENCES "{{folders}}" ("id")
ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED, "user_id" integer NOT NULL REFERENCES "{{users}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
CONSTRAINT "{{prefix}}unique_mapping" UNIQUE ("user_id", "folder_id"));
CREATE INDEX "{{prefix}}folders_mapping_folder_id_idx" ON "{{folders_mapping}}" ("folder_id");
CREATE INDEX "{{prefix}}folders_mapping_user_id_idx" ON "{{folders_mapping}}" ("user_id");
INSERT INTO {{schema_version}} (version) VALUES (10);
`
	sqliteV11SQL = `CREATE TABLE "{{api_keys}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "name" varchar(255) NOT NULL,
"key_id" varchar(50) NOT NULL UNIQUE, "api_key" varchar(255) NOT NULL UNIQUE, "scope" integer NOT NULL, "created_at" bigint NOT NULL,
"updated_at" bigint NOT NULL, "last_use_at" bigint NOT NULL, "expires_at" bigint NOT NULL, "description" text NULL,
"admin_id" integer NULL REFERENCES "{{admins}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED,
"user_id" integer NULL REFERENCES "{{users}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED);
CREATE INDEX "{{prefix}}api_keys_admin_id_idx" ON "{{api_keys}}" ("admin_id");
CREATE INDEX "{{prefix}}api_keys_user_id_idx" ON "{{api_keys}}" ("user_id");
`
	sqliteV11DownSQL = `DROP TABLE "{{api_keys}}";`
	sqliteV12SQL     = `ALTER TABLE "{{admins}}" ADD COLUMN "created_at" bigint DEFAULT 0 NOT NULL;
ALTER TABLE "{{admins}}" ADD COLUMN "updated_at" bigint DEFAULT 0 NOT NULL;
ALTER TABLE "{{admins}}" ADD COLUMN "last_login" bigint DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ADD COLUMN "created_at" bigint DEFAULT 0 NOT NULL;
ALTER TABLE "{{users}}" ADD COLUMN "updated_at" bigint DEFAULT 0 NOT NULL;
CREATE INDEX "{{prefix}}users_updated_at_idx" ON "{{users}}" ("updated_at");
`
	sqliteV12DownSQL = `DROP INDEX "{{prefix}}users_updated_at_idx";
ALTER TABLE "{{users}}" DROP COLUMN "updated_at";
ALTER TABLE "{{users}}" DROP COLUMN "created_at";
ALTER TABLE "{{admins}}" DROP COLUMN "created_at";
ALTER TABLE "{{admins}}" DROP COLUMN "updated_at";
ALTER TABLE "{{admins}}" DROP COLUMN "last_login";
`
	sqliteV13SQL     = `ALTER TABLE "{{users}}" ADD COLUMN "email" varchar(255) NULL;`
	sqliteV13DownSQL = `ALTER TABLE "{{users}}" DROP COLUMN "email";`
	sqliteV14SQL     = `CREATE TABLE "{{shares}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"share_id" varchar(60) NOT NULL UNIQUE, "name" varchar(255) NOT NULL, "description" varchar(512) NULL,
"scope" integer NOT NULL, "paths" text NOT NULL, "created_at" bigint NOT NULL, "updated_at" bigint NOT NULL,
"last_use_at" bigint NOT NULL, "expires_at" bigint NOT NULL, "password" text NULL, "max_tokens" integer NOT NULL,
"used_tokens" integer NOT NULL, "allow_from" text NULL,
"user_id" integer NOT NULL REFERENCES "{{users}}" ("id") ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED);
CREATE INDEX "{{prefix}}shares_user_id_idx" ON "{{shares}}" ("user_id");
`
	sqliteV14DownSQL = `DROP TABLE "{{shares}}";`
	sqliteV15SQL     = `CREATE TABLE "{{defender_hosts}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
"ip" varchar(50) NOT NULL UNIQUE, "ban_time" bigint NOT NULL, "updated_at" bigint NOT NULL);
CREATE TABLE "{{defender_events}}" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "date_time" bigint NOT NULL,
"score" integer NOT NULL, "host_id" integer NOT NULL REFERENCES "{{defender_hosts}}" ("id") ON DELETE CASCADE
DEFERRABLE INITIALLY DEFERRED);
CREATE INDEX "{{prefix}}defender_hosts_updated_at_idx" ON "{{defender_hosts}}" ("updated_at");
CREATE INDEX "{{prefix}}defender_hosts_ban_time_idx" ON "{{defender_hosts}}" ("ban_time");
CREATE INDEX "{{prefix}}defender_events_date_time_idx" ON "{{defender_events}}" ("date_time");
CREATE INDEX "{{prefix}}defender_events_host_id_idx" ON "{{defender_events}}" ("host_id");
`
	sqliteV15DownSQL = `DROP TABLE "{{defender_events}}";
DROP TABLE "{{defender_hosts}}";
`
)

// SQLiteProvider auth provider for SQLite database
@@ -27,19 +108,23 @@ type SQLiteProvider struct {
	dbHandle *sql.DB
}

func init() {
	version.AddFeature("+sqlite")
}

func initializeSQLiteProvider(basePath string) error {
	var err error
	var connectionString string
	logSender = SQLiteDataProviderName
	if len(config.ConnectionString) == 0 {

	if config.ConnectionString == "" {
		dbPath := config.Name
		if !utils.IsFileInputValid(dbPath) {
			return fmt.Errorf("Invalid database path: %#v", dbPath)
		if !util.IsFileInputValid(dbPath) {
			return fmt.Errorf("invalid database path: %#v", dbPath)
		}
		if !filepath.IsAbs(dbPath) {
			dbPath = filepath.Join(basePath, dbPath)
		}
		connectionString = fmt.Sprintf("file:%v?cache=shared", dbPath)
		connectionString = fmt.Sprintf("file:%v?cache=shared&_foreign_keys=1", dbPath)
	} else {
		connectionString = config.ConnectionString
	}
@@ -47,103 +132,484 @@ func initializeSQLiteProvider(basePath string) error {
	if err == nil {
		providerLog(logger.LevelDebug, "sqlite database handle created, connection string: %#v", connectionString)
		dbHandle.SetMaxOpenConns(1)
		provider = SQLiteProvider{dbHandle: dbHandle}
		provider = &SQLiteProvider{dbHandle: dbHandle}
	} else {
		providerLog(logger.LevelWarn, "error creating sqlite database handler, connection string: %#v, error: %v",
		providerLog(logger.LevelError, "error creating sqlite database handler, connection string: %#v, error: %v",
			connectionString, err)
	}
	return err
}
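The connection string now appends _foreign_keys=1, a go-sqlite3 DSN option that turns on SQLite's foreign-key enforcement for every connection (SQLite parses FOREIGN KEY clauses but ignores them by default). A minimal sketch, with an illustrative database path:

package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3" // registers the "sqlite3" driver
)

func main() {
	// The DSN mirrors the connection string built above: shared cache plus
	// _foreign_keys=1 so the FOREIGN KEY constraints in the schema are
	// actually enforced.
	db, err := sql.Open("sqlite3", "file:sftpgo.db?cache=shared&_foreign_keys=1")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var on int
	if err := db.QueryRow("PRAGMA foreign_keys").Scan(&on); err != nil {
		log.Fatal(err)
	}
	fmt.Println("foreign_keys enabled:", on == 1) // true with the option above
}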
func (p SQLiteProvider) checkAvailability() error {
|
||||
func (p *SQLiteProvider) checkAvailability() error {
|
||||
return sqlCommonCheckAvailability(p.dbHandle)
|
||||
}
|
||||
|
||||
func (p SQLiteProvider) validateUserAndPass(username string, password string) (User, error) {
|
||||
return sqlCommonValidateUserAndPass(username, password, p.dbHandle)
|
||||
func (p *SQLiteProvider) validateUserAndPass(username, password, ip, protocol string) (User, error) {
|
||||
return sqlCommonValidateUserAndPass(username, password, ip, protocol, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p SQLiteProvider) validateUserAndPubKey(username string, publicKey string) (User, string, error) {
|
||||
func (p *SQLiteProvider) validateUserAndTLSCert(username, protocol string, tlsCert *x509.Certificate) (User, error) {
|
||||
return sqlCommonValidateUserAndTLSCertificate(username, protocol, tlsCert, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p *SQLiteProvider) validateUserAndPubKey(username string, publicKey []byte) (User, string, error) {
|
||||
return sqlCommonValidateUserAndPubKey(username, publicKey, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p SQLiteProvider) getUserByID(ID int64) (User, error) {
|
||||
return sqlCommonGetUserByID(ID, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p SQLiteProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
|
||||
func (p *SQLiteProvider) updateQuota(username string, filesAdd int, sizeAdd int64, reset bool) error {
|
||||
return sqlCommonUpdateQuota(username, filesAdd, sizeAdd, reset, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p SQLiteProvider) updateLastLogin(username string) error {
|
||||
return sqlCommonUpdateLastLogin(username, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p SQLiteProvider) getUsedQuota(username string) (int, int64, error) {
|
||||
func (p *SQLiteProvider) getUsedQuota(username string) (int, int64, error) {
|
||||
return sqlCommonGetUsedQuota(username, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p SQLiteProvider) userExists(username string) (User, error) {
|
||||
return sqlCommonCheckUserExists(username, p.dbHandle)
|
||||
func (p *SQLiteProvider) setUpdatedAt(username string) {
|
||||
sqlCommonSetUpdatedAt(username, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p SQLiteProvider) addUser(user User) error {
|
||||
func (p *SQLiteProvider) updateLastLogin(username string) error {
|
||||
return sqlCommonUpdateLastLogin(username, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p *SQLiteProvider) updateAdminLastLogin(username string) error {
|
||||
return sqlCommonUpdateAdminLastLogin(username, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p *SQLiteProvider) userExists(username string) (User, error) {
|
||||
return sqlCommonGetUserByUsername(username, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p *SQLiteProvider) addUser(user *User) error {
|
||||
return sqlCommonAddUser(user, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p SQLiteProvider) updateUser(user User) error {
|
||||
func (p *SQLiteProvider) updateUser(user *User) error {
|
||||
return sqlCommonUpdateUser(user, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p SQLiteProvider) deleteUser(user User) error {
|
||||
func (p *SQLiteProvider) deleteUser(user *User) error {
|
||||
return sqlCommonDeleteUser(user, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p SQLiteProvider) dumpUsers() ([]User, error) {
|
||||
func (p *SQLiteProvider) dumpUsers() ([]User, error) {
|
||||
return sqlCommonDumpUsers(p.dbHandle)
|
||||
}
|
||||
|
||||
func (p SQLiteProvider) getUsers(limit int, offset int, order string, username string) ([]User, error) {
|
||||
return sqlCommonGetUsers(limit, offset, order, username, p.dbHandle)
|
||||
// SQLite provider cannot be shared, so we always return no recently updated users
|
||||
func (p *SQLiteProvider) getRecentlyUpdatedUsers(after int64) ([]User, error) {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
func (p SQLiteProvider) close() error {
|
||||
func (p *SQLiteProvider) getUsers(limit int, offset int, order string) ([]User, error) {
|
||||
return sqlCommonGetUsers(limit, offset, order, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p *SQLiteProvider) dumpFolders() ([]vfs.BaseVirtualFolder, error) {
|
||||
return sqlCommonDumpFolders(p.dbHandle)
|
||||
}
|
||||
|
||||
func (p *SQLiteProvider) getFolders(limit, offset int, order string) ([]vfs.BaseVirtualFolder, error) {
|
||||
return sqlCommonGetFolders(limit, offset, order, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p *SQLiteProvider) getFolderByName(name string) (vfs.BaseVirtualFolder, error) {
|
||||
ctx, cancel := context.WithTimeout(context.Background(), defaultSQLQueryTimeout)
|
||||
defer cancel()
|
||||
return sqlCommonGetFolderByName(ctx, name, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p *SQLiteProvider) addFolder(folder *vfs.BaseVirtualFolder) error {
|
||||
return sqlCommonAddFolder(folder, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p *SQLiteProvider) updateFolder(folder *vfs.BaseVirtualFolder) error {
|
||||
return sqlCommonUpdateFolder(folder, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p *SQLiteProvider) deleteFolder(folder *vfs.BaseVirtualFolder) error {
|
||||
return sqlCommonDeleteFolder(folder, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p *SQLiteProvider) updateFolderQuota(name string, filesAdd int, sizeAdd int64, reset bool) error {
|
||||
return sqlCommonUpdateFolderQuota(name, filesAdd, sizeAdd, reset, p.dbHandle)
|
||||
}
|
||||
|
||||
func (p *SQLiteProvider) getUsedFolderQuota(name string) (int, int64, error) {
|
||||
	return sqlCommonGetFolderUsedQuota(name, p.dbHandle)
}

func (p *SQLiteProvider) adminExists(username string) (Admin, error) {
	return sqlCommonGetAdminByUsername(username, p.dbHandle)
}

func (p *SQLiteProvider) addAdmin(admin *Admin) error {
	return sqlCommonAddAdmin(admin, p.dbHandle)
}

func (p *SQLiteProvider) updateAdmin(admin *Admin) error {
	return sqlCommonUpdateAdmin(admin, p.dbHandle)
}

func (p *SQLiteProvider) deleteAdmin(admin *Admin) error {
	return sqlCommonDeleteAdmin(admin, p.dbHandle)
}

func (p *SQLiteProvider) getAdmins(limit int, offset int, order string) ([]Admin, error) {
	return sqlCommonGetAdmins(limit, offset, order, p.dbHandle)
}

func (p *SQLiteProvider) dumpAdmins() ([]Admin, error) {
	return sqlCommonDumpAdmins(p.dbHandle)
}

func (p *SQLiteProvider) validateAdminAndPass(username, password, ip string) (Admin, error) {
	return sqlCommonValidateAdminAndPass(username, password, ip, p.dbHandle)
}

func (p *SQLiteProvider) apiKeyExists(keyID string) (APIKey, error) {
	return sqlCommonGetAPIKeyByID(keyID, p.dbHandle)
}

func (p *SQLiteProvider) addAPIKey(apiKey *APIKey) error {
	return sqlCommonAddAPIKey(apiKey, p.dbHandle)
}

func (p *SQLiteProvider) updateAPIKey(apiKey *APIKey) error {
	return sqlCommonUpdateAPIKey(apiKey, p.dbHandle)
}

func (p *SQLiteProvider) deleteAPIKey(apiKey *APIKey) error {
	return sqlCommonDeleteAPIKey(apiKey, p.dbHandle)
}

func (p *SQLiteProvider) getAPIKeys(limit int, offset int, order string) ([]APIKey, error) {
	return sqlCommonGetAPIKeys(limit, offset, order, p.dbHandle)
}

func (p *SQLiteProvider) dumpAPIKeys() ([]APIKey, error) {
	return sqlCommonDumpAPIKeys(p.dbHandle)
}

func (p *SQLiteProvider) updateAPIKeyLastUse(keyID string) error {
	return sqlCommonUpdateAPIKeyLastUse(keyID, p.dbHandle)
}

func (p *SQLiteProvider) shareExists(shareID, username string) (Share, error) {
	return sqlCommonGetShareByID(shareID, username, p.dbHandle)
}

func (p *SQLiteProvider) addShare(share *Share) error {
	return sqlCommonAddShare(share, p.dbHandle)
}

func (p *SQLiteProvider) updateShare(share *Share) error {
	return sqlCommonUpdateShare(share, p.dbHandle)
}

func (p *SQLiteProvider) deleteShare(share *Share) error {
	return sqlCommonDeleteShare(share, p.dbHandle)
}

func (p *SQLiteProvider) getShares(limit int, offset int, order, username string) ([]Share, error) {
	return sqlCommonGetShares(limit, offset, order, username, p.dbHandle)
}

func (p *SQLiteProvider) dumpShares() ([]Share, error) {
	return sqlCommonDumpShares(p.dbHandle)
}

func (p *SQLiteProvider) updateShareLastUse(shareID string, numTokens int) error {
	return sqlCommonUpdateShareLastUse(shareID, numTokens, p.dbHandle)
}

func (p *SQLiteProvider) getDefenderHosts(from int64, limit int) ([]*DefenderEntry, error) {
	return sqlCommonGetDefenderHosts(from, limit, p.dbHandle)
}

func (p *SQLiteProvider) getDefenderHostByIP(ip string, from int64) (*DefenderEntry, error) {
	return sqlCommonGetDefenderHostByIP(ip, from, p.dbHandle)
}

func (p *SQLiteProvider) isDefenderHostBanned(ip string) (*DefenderEntry, error) {
	return sqlCommonIsDefenderHostBanned(ip, p.dbHandle)
}

func (p *SQLiteProvider) updateDefenderBanTime(ip string, minutes int) error {
	return sqlCommonDefenderIncrementBanTime(ip, minutes, p.dbHandle)
}

func (p *SQLiteProvider) deleteDefenderHost(ip string) error {
	return sqlCommonDeleteDefenderHost(ip, p.dbHandle)
}

func (p *SQLiteProvider) addDefenderEvent(ip string, score int) error {
	return sqlCommonAddDefenderHostAndEvent(ip, score, p.dbHandle)
}

func (p *SQLiteProvider) setDefenderBanTime(ip string, banTime int64) error {
	return sqlCommonSetDefenderBanTime(ip, banTime, p.dbHandle)
}

func (p *SQLiteProvider) cleanupDefender(from int64) error {
	return sqlCommonDefenderCleanup(from, p.dbHandle)
}

func (p *SQLiteProvider) close() error {
	return p.dbHandle.Close()
}

-func (p SQLiteProvider) reloadConfig() error {
+func (p *SQLiteProvider) reloadConfig() error {
	return nil
}

// initializeDatabase creates the initial database structure
-func (p SQLiteProvider) initializeDatabase() error {
-	sqlUsers := strings.Replace(sqliteUsersTableSQL, "{{users}}", config.UsersTable, 1)
-	sql := sqlUsers + " " + sqliteSchemaTableSQL + " " + initialDBVersionSQL
-	_, err := p.dbHandle.Exec(sql)
+func (p *SQLiteProvider) initializeDatabase() error {
+	dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, false)
+	if err == nil && dbVersion.Version > 0 {
+		return ErrNoInitRequired
+	}
+	if errors.Is(err, sql.ErrNoRows) {
+		return errSchemaVersionEmpty
+	}
+	initialSQL := strings.ReplaceAll(sqliteInitialSQL, "{{schema_version}}", sqlTableSchemaVersion)
+	initialSQL = strings.ReplaceAll(initialSQL, "{{admins}}", sqlTableAdmins)
+	initialSQL = strings.ReplaceAll(initialSQL, "{{folders}}", sqlTableFolders)
+	initialSQL = strings.ReplaceAll(initialSQL, "{{users}}", sqlTableUsers)
+	initialSQL = strings.ReplaceAll(initialSQL, "{{folders_mapping}}", sqlTableFoldersMapping)
+	initialSQL = strings.ReplaceAll(initialSQL, "{{prefix}}", config.SQLTablesPrefix)
+
+	return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, []string{initialSQL}, 10)
+}

//nolint:dupl
func (p *SQLiteProvider) migrateDatabase() error {
	dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
	if err != nil {
		return err
	}

	switch version := dbVersion.Version; {
	case version == sqlDatabaseVersion:
		providerLog(logger.LevelDebug, "sql database is up to date, current version: %v", version)
		return ErrNoInitRequired
	case version < 10:
		err = fmt.Errorf("database version %v is too old, please see the upgrading docs", version)
		providerLog(logger.LevelError, "%v", err)
		logger.ErrorToConsole("%v", err)
		return err
	case version == 10:
		return updateSQLiteDatabaseFromV10(p.dbHandle)
	case version == 11:
		return updateSQLiteDatabaseFromV11(p.dbHandle)
	case version == 12:
		return updateSQLiteDatabaseFromV12(p.dbHandle)
	case version == 13:
		return updateSQLiteDatabaseFromV13(p.dbHandle)
	case version == 14:
		return updateSQLiteDatabaseFromV14(p.dbHandle)
	default:
		if version > sqlDatabaseVersion {
			providerLog(logger.LevelError, "database version %v is newer than the supported one: %v", version,
				sqlDatabaseVersion)
			logger.WarnToConsole("database version %v is newer than the supported one: %v", version,
				sqlDatabaseVersion)
			return nil
		}
		return fmt.Errorf("database version not handled: %v", version)
	}
}

func (p *SQLiteProvider) revertDatabase(targetVersion int) error {
	dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle, true)
	if err != nil {
		return err
	}
	if dbVersion.Version == targetVersion {
		return errors.New("current version match target version, nothing to do")
	}

	switch dbVersion.Version {
	case 15:
		return downgradeSQLiteDatabaseFromV15(p.dbHandle)
	case 14:
		return downgradeSQLiteDatabaseFromV14(p.dbHandle)
	case 13:
		return downgradeSQLiteDatabaseFromV13(p.dbHandle)
	case 12:
		return downgradeSQLiteDatabaseFromV12(p.dbHandle)
	case 11:
		return downgradeSQLiteDatabaseFromV11(p.dbHandle)
	default:
		return fmt.Errorf("database version not handled: %v", dbVersion.Version)
	}
}

func (p *SQLiteProvider) resetDatabase() error {
	sql := strings.ReplaceAll(sqliteResetSQL, "{{schema_version}}", sqlTableSchemaVersion)
	sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
	sql = strings.ReplaceAll(sql, "{{folders}}", sqlTableFolders)
	sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
	sql = strings.ReplaceAll(sql, "{{folders_mapping}}", sqlTableFoldersMapping)
	sql = strings.ReplaceAll(sql, "{{api_keys}}", sqlTableAPIKeys)
	sql = strings.ReplaceAll(sql, "{{shares}}", sqlTableShares)
	sql = strings.ReplaceAll(sql, "{{defender_events}}", sqlTableDefenderEvents)
	sql = strings.ReplaceAll(sql, "{{defender_hosts}}", sqlTableDefenderHosts)
	return sqlCommonExecSQLAndUpdateDBVersion(p.dbHandle, []string{sql}, 0)
}

func updateSQLiteDatabaseFromV10(dbHandle *sql.DB) error {
	if err := updateSQLiteDatabaseFrom10To11(dbHandle); err != nil {
		return err
	}
	return updateSQLiteDatabaseFromV11(dbHandle)
}

func updateSQLiteDatabaseFromV11(dbHandle *sql.DB) error {
	if err := updateSQLiteDatabaseFrom11To12(dbHandle); err != nil {
		return err
	}
	return updateSQLiteDatabaseFromV12(dbHandle)
}

func updateSQLiteDatabaseFromV12(dbHandle *sql.DB) error {
	if err := updateSQLiteDatabaseFrom12To13(dbHandle); err != nil {
		return err
	}
	return updateSQLiteDatabaseFromV13(dbHandle)
}

func updateSQLiteDatabaseFromV13(dbHandle *sql.DB) error {
	if err := updateSQLiteDatabaseFrom13To14(dbHandle); err != nil {
		return err
	}
	return updateSQLiteDatabaseFromV14(dbHandle)
}

func updateSQLiteDatabaseFromV14(dbHandle *sql.DB) error {
	return updateSQLiteDatabaseFrom14To15(dbHandle)
}

func downgradeSQLiteDatabaseFromV15(dbHandle *sql.DB) error {
	if err := downgradeSQLiteDatabaseFrom15To14(dbHandle); err != nil {
		return err
	}
	return downgradeSQLiteDatabaseFromV14(dbHandle)
}

func downgradeSQLiteDatabaseFromV14(dbHandle *sql.DB) error {
	if err := downgradeSQLiteDatabaseFrom14To13(dbHandle); err != nil {
		return err
	}
	return downgradeSQLiteDatabaseFromV13(dbHandle)
}

func downgradeSQLiteDatabaseFromV13(dbHandle *sql.DB) error {
	if err := downgradeSQLiteDatabaseFrom13To12(dbHandle); err != nil {
		return err
	}
	return downgradeSQLiteDatabaseFromV12(dbHandle)
}

func downgradeSQLiteDatabaseFromV12(dbHandle *sql.DB) error {
	if err := downgradeSQLiteDatabaseFrom12To11(dbHandle); err != nil {
		return err
	}
	return downgradeSQLiteDatabaseFromV11(dbHandle)
}

func downgradeSQLiteDatabaseFromV11(dbHandle *sql.DB) error {
	return downgradeSQLiteDatabaseFrom11To10(dbHandle)
}

func updateSQLiteDatabaseFrom13To14(dbHandle *sql.DB) error {
	logger.InfoToConsole("updating database version: 13 -> 14")
	providerLog(logger.LevelInfo, "updating database version: 13 -> 14")
	sql := strings.ReplaceAll(sqliteV14SQL, "{{shares}}", sqlTableShares)
	sql = strings.ReplaceAll(sql, "{{users}}", sqlTableUsers)
	sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 14)
}

func updateSQLiteDatabaseFrom14To15(dbHandle *sql.DB) error {
	logger.InfoToConsole("updating database version: 14 -> 15")
	providerLog(logger.LevelInfo, "updating database version: 14 -> 15")
	sql := strings.ReplaceAll(sqliteV15SQL, "{{defender_events}}", sqlTableDefenderEvents)
	sql = strings.ReplaceAll(sql, "{{defender_hosts}}", sqlTableDefenderHosts)
	sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 15)
}

func downgradeSQLiteDatabaseFrom15To14(dbHandle *sql.DB) error {
	logger.InfoToConsole("downgrading database version: 15 -> 14")
	providerLog(logger.LevelInfo, "downgrading database version: 15 -> 14")
	sql := strings.ReplaceAll(sqliteV15DownSQL, "{{defender_events}}", sqlTableDefenderEvents)
	sql = strings.ReplaceAll(sql, "{{defender_hosts}}", sqlTableDefenderHosts)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 14)
}

func downgradeSQLiteDatabaseFrom14To13(dbHandle *sql.DB) error {
	logger.InfoToConsole("downgrading database version: 14 -> 13")
	providerLog(logger.LevelInfo, "downgrading database version: 14 -> 13")
	sql := strings.ReplaceAll(sqliteV14DownSQL, "{{shares}}", sqlTableShares)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 13)
}

func updateSQLiteDatabaseFrom12To13(dbHandle *sql.DB) error {
	logger.InfoToConsole("updating database version: 12 -> 13")
	providerLog(logger.LevelInfo, "updating database version: 12 -> 13")
	sql := strings.ReplaceAll(sqliteV13SQL, "{{users}}", sqlTableUsers)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 13)
}

func downgradeSQLiteDatabaseFrom13To12(dbHandle *sql.DB) error {
	logger.InfoToConsole("downgrading database version: 13 -> 12")
	providerLog(logger.LevelInfo, "downgrading database version: 13 -> 12")
	sql := strings.ReplaceAll(sqliteV13DownSQL, "{{users}}", sqlTableUsers)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 12)
}

func updateSQLiteDatabaseFrom11To12(dbHandle *sql.DB) error {
	logger.InfoToConsole("updating database version: 11 -> 12")
	providerLog(logger.LevelInfo, "updating database version: 11 -> 12")
	sql := strings.ReplaceAll(sqliteV12SQL, "{{users}}", sqlTableUsers)
	sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
	sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 12)
}

func downgradeSQLiteDatabaseFrom12To11(dbHandle *sql.DB) error {
	logger.InfoToConsole("downgrading database version: 12 -> 11")
	providerLog(logger.LevelInfo, "downgrading database version: 12 -> 11")
	sql := strings.ReplaceAll(sqliteV12DownSQL, "{{users}}", sqlTableUsers)
	sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
	sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 11)
}

func updateSQLiteDatabaseFrom10To11(dbHandle *sql.DB) error {
	logger.InfoToConsole("updating database version: 10 -> 11")
	providerLog(logger.LevelInfo, "updating database version: 10 -> 11")
	sql := strings.ReplaceAll(sqliteV11SQL, "{{users}}", sqlTableUsers)
	sql = strings.ReplaceAll(sql, "{{admins}}", sqlTableAdmins)
	sql = strings.ReplaceAll(sql, "{{api_keys}}", sqlTableAPIKeys)
	sql = strings.ReplaceAll(sql, "{{prefix}}", config.SQLTablesPrefix)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 11)
}

func downgradeSQLiteDatabaseFrom11To10(dbHandle *sql.DB) error {
	logger.InfoToConsole("downgrading database version: 11 -> 10")
	providerLog(logger.LevelInfo, "downgrading database version: 11 -> 10")
	sql := strings.ReplaceAll(sqliteV11DownSQL, "{{api_keys}}", sqlTableAPIKeys)
	return sqlCommonExecSQLAndUpdateDBVersion(dbHandle, []string{sql}, 10)
}

/*func setPragmaFK(dbHandle *sql.DB, value string) error {
	ctx, cancel := context.WithTimeout(context.Background(), longSQLQueryTimeout)
	defer cancel()

	sql := fmt.Sprintf("PRAGMA foreign_keys=%v;", value)

	_, err := dbHandle.ExecContext(ctx, sql)
	return err
}

func (p SQLiteProvider) migrateDatabase() error {
	dbVersion, err := sqlCommonGetDatabaseVersion(p.dbHandle)
	if err != nil {
		return err
	}
	if dbVersion.Version == sqlDatabaseVersion {
		providerLog(logger.LevelDebug, "sql database is updated, current version: %v", dbVersion.Version)
		return nil
	}
	if dbVersion.Version == 1 {
		return updateSQLiteDatabaseFrom1To2(p.dbHandle)
	}
	return nil
}

func updateSQLiteDatabaseFrom1To2(dbHandle *sql.DB) error {
	providerLog(logger.LevelInfo, "updating database version: 1 -> 2")
	sql := strings.Replace(sqliteUsersV2SQL, "{{users}}", config.UsersTable, 1)
	_, err := dbHandle.Exec(sql)
	if err != nil {
		return err
	}
	return sqlCommonUpdateDatabaseVersion(dbHandle, 2)
}*/

18	dataprovider/sqlite_disabled.go	Normal file
@@ -0,0 +1,18 @@
//go:build nosqlite
// +build nosqlite

package dataprovider

import (
	"errors"

	"github.com/drakkan/sftpgo/v2/version"
)

func init() {
	version.AddFeature("-sqlite")
}

func initializeSQLiteProvider(basePath string) error {
	return errors.New("SQLite disabled at build time")
}

dataprovider/sqlqueries.go
@@ -1,17 +1,28 @@
package dataprovider

-import "fmt"
+import (
+	"fmt"
+	"strconv"
+	"strings"
+
+	"github.com/drakkan/sftpgo/v2/vfs"
+)

const (
	selectUserFields = "id,username,password,public_keys,home_dir,uid,gid,max_sessions,quota_size,quota_files,permissions,used_quota_size," +
		"used_quota_files,last_quota_update,upload_bandwidth,download_bandwidth,expiration_date,last_login,status,filters,filesystem," +
-		"virtual_folders"
+		"additional_info,description,email,created_at,updated_at"
	selectFolderFields = "id,path,used_quota_size,used_quota_files,last_quota_update,name,description,filesystem"
	selectAdminFields = "id,username,password,status,email,permissions,filters,additional_info,description,created_at,updated_at,last_login"
	selectAPIKeyFields = "key_id,name,api_key,scope,created_at,updated_at,last_use_at,expires_at,description,user_id,admin_id"
	selectShareFields = "s.share_id,s.name,s.description,s.scope,s.paths,u.username,s.created_at,s.updated_at,s.last_use_at," +
		"s.expires_at,s.password,s.max_tokens,s.used_tokens,s.allow_from"
)

func getSQLPlaceholders() []string {
	var placeholders []string
-	for i := 1; i <= 20; i++ {
-		if config.Driver == PGSQLDataProviderName {
+	for i := 1; i <= 30; i++ {
+		if config.Driver == PGSQLDataProviderName || config.Driver == CockroachDataProviderName {
			placeholders = append(placeholders, fmt.Sprintf("$%v", i))
		} else {
			placeholders = append(placeholders, "?")
@@ -20,72 +31,412 @@ func getSQLPlaceholders() []string {
	return placeholders
}

-func getUserByUsernameQuery() string {
-	return fmt.Sprintf(`SELECT %v FROM %v WHERE username = %v`, selectUserFields, config.UsersTable, sqlPlaceholders[0])
-}
-
-func getUserByIDQuery() string {
-	return fmt.Sprintf(`SELECT %v FROM %v WHERE id = %v`, selectUserFields, config.UsersTable, sqlPlaceholders[0])
-}
-
-func getUsersQuery(order string, username string) string {
-	if len(username) > 0 {
-		return fmt.Sprintf(`SELECT %v FROM %v WHERE username = %v ORDER BY username %v LIMIT %v OFFSET %v`,
-			selectUserFields, config.UsersTable, sqlPlaceholders[0], order, sqlPlaceholders[1], sqlPlaceholders[2])
+func getAddDefenderHostQuery() string {
+	if config.Driver == MySQLDataProviderName {
+		return fmt.Sprintf("INSERT INTO %v (`ip`,`updated_at`,`ban_time`) VALUES (%v,%v,0) ON DUPLICATE KEY UPDATE `updated_at`=VALUES(`updated_at`)",
+			sqlTableDefenderHosts, sqlPlaceholders[0], sqlPlaceholders[1])
	}
-	return fmt.Sprintf(`SELECT %v FROM %v ORDER BY username %v LIMIT %v OFFSET %v`, selectUserFields, config.UsersTable,
+	return fmt.Sprintf(`INSERT INTO %v (ip,updated_at,ban_time) VALUES (%v,%v,0) ON CONFLICT (ip) DO UPDATE SET updated_at = EXCLUDED.updated_at RETURNING id`,
+		sqlTableDefenderHosts, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getAddDefenderEventQuery() string {
	return fmt.Sprintf(`INSERT INTO %v (date_time,score,host_id) VALUES (%v,%v,(SELECT id from %v WHERE ip = %v))`,
		sqlTableDefenderEvents, sqlPlaceholders[0], sqlPlaceholders[1], sqlTableDefenderHosts, sqlPlaceholders[2])
}

func getDefenderHostsQuery() string {
	return fmt.Sprintf(`SELECT id,ip,ban_time FROM %v WHERE updated_at >= %v OR ban_time > 0 ORDER BY updated_at DESC LIMIT %v`,
		sqlTableDefenderHosts, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getDefenderHostQuery() string {
	return fmt.Sprintf(`SELECT id,ip,ban_time FROM %v WHERE ip = %v AND (updated_at >= %v OR ban_time > 0)`,
		sqlTableDefenderHosts, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getDefenderEventsQuery(hostIDS []int64) string {
	var sb strings.Builder
	for _, hID := range hostIDS {
		if sb.Len() == 0 {
			sb.WriteString("(")
		} else {
			sb.WriteString(",")
		}
		sb.WriteString(strconv.FormatInt(hID, 10))
	}
	if sb.Len() > 0 {
		sb.WriteString(")")
	} else {
		sb.WriteString("(0)")
	}
	return fmt.Sprintf(`SELECT host_id,SUM(score) FROM %v WHERE date_time >= %v AND host_id IN %v GROUP BY host_id`,
		sqlTableDefenderEvents, sqlPlaceholders[0], sb.String())
}

func getDefenderIsHostBannedQuery() string {
	return fmt.Sprintf(`SELECT id FROM %v WHERE ip = %v AND ban_time >= %v`,
		sqlTableDefenderHosts, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getDefenderIncrementBanTimeQuery() string {
	return fmt.Sprintf(`UPDATE %v SET ban_time = ban_time + %v WHERE ip = %v`,
		sqlTableDefenderHosts, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getDefenderSetBanTimeQuery() string {
	return fmt.Sprintf(`UPDATE %v SET ban_time = %v WHERE ip = %v`,
		sqlTableDefenderHosts, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getDeleteDefenderHostQuery() string {
	return fmt.Sprintf(`DELETE FROM %v WHERE ip = %v`, sqlTableDefenderHosts, sqlPlaceholders[0])
}

func getDefenderHostsCleanupQuery() string {
	return fmt.Sprintf(`DELETE FROM %v WHERE ban_time < %v AND NOT EXISTS (
		SELECT id FROM %v WHERE %v.host_id = %v.id AND %v.date_time > %v)`,
		sqlTableDefenderHosts, sqlPlaceholders[0], sqlTableDefenderEvents, sqlTableDefenderEvents, sqlTableDefenderHosts,
		sqlTableDefenderEvents, sqlPlaceholders[1])
}

func getDefenderEventsCleanupQuery() string {
	return fmt.Sprintf(`DELETE FROM %v WHERE date_time < %v`, sqlTableDefenderEvents, sqlPlaceholders[0])
}

func getAdminByUsernameQuery() string {
	return fmt.Sprintf(`SELECT %v FROM %v WHERE username = %v`, selectAdminFields, sqlTableAdmins, sqlPlaceholders[0])
}

func getAdminsQuery(order string) string {
	return fmt.Sprintf(`SELECT %v FROM %v ORDER BY username %v LIMIT %v OFFSET %v`, selectAdminFields, sqlTableAdmins,
		order, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getDumpAdminsQuery() string {
	return fmt.Sprintf(`SELECT %v FROM %v`, selectAdminFields, sqlTableAdmins)
}

func getAddAdminQuery() string {
	return fmt.Sprintf(`INSERT INTO %v (username,password,status,email,permissions,filters,additional_info,description,created_at,updated_at,last_login)
		VALUES (%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,0)`, sqlTableAdmins, sqlPlaceholders[0], sqlPlaceholders[1],
		sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7],
		sqlPlaceholders[8], sqlPlaceholders[9])
}

func getUpdateAdminQuery() string {
	return fmt.Sprintf(`UPDATE %v SET password=%v,status=%v,email=%v,permissions=%v,filters=%v,additional_info=%v,description=%v,updated_at=%v
		WHERE username = %v`, sqlTableAdmins, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2],
		sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7], sqlPlaceholders[8])
}

func getDeleteAdminQuery() string {
	return fmt.Sprintf(`DELETE FROM %v WHERE username = %v`, sqlTableAdmins, sqlPlaceholders[0])
}

func getShareByIDQuery(filterUser bool) string {
	if filterUser {
		return fmt.Sprintf(`SELECT %v FROM %v s INNER JOIN %v u ON s.user_id = u.id WHERE s.share_id = %v AND u.username = %v`,
			selectShareFields, sqlTableShares, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1])
	}
	return fmt.Sprintf(`SELECT %v FROM %v s INNER JOIN %v u ON s.user_id = u.id WHERE s.share_id = %v`,
		selectShareFields, sqlTableShares, sqlTableUsers, sqlPlaceholders[0])
}

func getSharesQuery(order string) string {
	return fmt.Sprintf(`SELECT %v FROM %v s INNER JOIN %v u ON s.user_id = u.id WHERE u.username = %v ORDER BY s.share_id %v LIMIT %v OFFSET %v`,
		selectShareFields, sqlTableShares, sqlTableUsers, sqlPlaceholders[0], order, sqlPlaceholders[1], sqlPlaceholders[2])
}

func getDumpSharesQuery() string {
	return fmt.Sprintf(`SELECT %v FROM %v s INNER JOIN %v u ON s.user_id = u.id`,
		selectShareFields, sqlTableShares, sqlTableUsers)
}

func getAddShareQuery() string {
	return fmt.Sprintf(`INSERT INTO %v (share_id,name,description,scope,paths,created_at,updated_at,last_use_at,
		expires_at,password,max_tokens,used_tokens,allow_from,user_id) VALUES (%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,%v)`,
		sqlTableShares, sqlPlaceholders[0], sqlPlaceholders[1],
		sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6],
		sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9], sqlPlaceholders[10], sqlPlaceholders[11],
		sqlPlaceholders[12], sqlPlaceholders[13])
}

func getUpdateShareRestoreQuery() string {
	return fmt.Sprintf(`UPDATE %v SET name=%v,description=%v,scope=%v,paths=%v,created_at=%v,updated_at=%v,
		last_use_at=%v,expires_at=%v,password=%v,max_tokens=%v,used_tokens=%v,allow_from=%v,user_id=%v WHERE share_id = %v`, sqlTableShares,
		sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4],
		sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9],
		sqlPlaceholders[10], sqlPlaceholders[11], sqlPlaceholders[12], sqlPlaceholders[13])
}

func getUpdateShareQuery() string {
	return fmt.Sprintf(`UPDATE %v SET name=%v,description=%v,scope=%v,paths=%v,updated_at=%v,expires_at=%v,
		password=%v,max_tokens=%v,allow_from=%v,user_id=%v WHERE share_id = %v`, sqlTableShares,
		sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4],
		sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9],
		sqlPlaceholders[10])
}

func getDeleteShareQuery() string {
	return fmt.Sprintf(`DELETE FROM %v WHERE share_id = %v`, sqlTableShares, sqlPlaceholders[0])
}

func getAPIKeyByIDQuery() string {
	return fmt.Sprintf(`SELECT %v FROM %v WHERE key_id = %v`, selectAPIKeyFields, sqlTableAPIKeys, sqlPlaceholders[0])
}

func getAPIKeysQuery(order string) string {
	return fmt.Sprintf(`SELECT %v FROM %v ORDER BY key_id %v LIMIT %v OFFSET %v`, selectAPIKeyFields, sqlTableAPIKeys,
		order, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getDumpAPIKeysQuery() string {
	return fmt.Sprintf(`SELECT %v FROM %v`, selectAPIKeyFields, sqlTableAPIKeys)
}

func getAddAPIKeyQuery() string {
	return fmt.Sprintf(`INSERT INTO %v (key_id,name,api_key,scope,created_at,updated_at,last_use_at,expires_at,description,user_id,admin_id)
		VALUES (%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,%v)`, sqlTableAPIKeys, sqlPlaceholders[0], sqlPlaceholders[1],
		sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6],
		sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9], sqlPlaceholders[10])
}

func getUpdateAPIKeyQuery() string {
	return fmt.Sprintf(`UPDATE %v SET name=%v,scope=%v,expires_at=%v,user_id=%v,admin_id=%v,description=%v,updated_at=%v
		WHERE key_id = %v`, sqlTableAPIKeys, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2],
		sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7])
}

func getDeleteAPIKeyQuery() string {
	return fmt.Sprintf(`DELETE FROM %v WHERE key_id = %v`, sqlTableAPIKeys, sqlPlaceholders[0])
}

func getRelatedUsersForAPIKeysQuery(apiKeys []APIKey) string {
	var sb strings.Builder
	for _, k := range apiKeys {
		if k.userID == 0 {
			continue
		}
		if sb.Len() == 0 {
			sb.WriteString("(")
		} else {
			sb.WriteString(",")
		}
		sb.WriteString(strconv.FormatInt(k.userID, 10))
	}
	if sb.Len() > 0 {
		sb.WriteString(")")
	} else {
		sb.WriteString("(0)")
	}
	return fmt.Sprintf(`SELECT id,username FROM %v WHERE id IN %v`, sqlTableUsers, sb.String())
}

func getRelatedAdminsForAPIKeysQuery(apiKeys []APIKey) string {
	var sb strings.Builder
	for _, k := range apiKeys {
		if k.adminID == 0 {
			continue
		}
		if sb.Len() == 0 {
			sb.WriteString("(")
		} else {
			sb.WriteString(",")
		}
		sb.WriteString(strconv.FormatInt(k.adminID, 10))
	}
	if sb.Len() > 0 {
		sb.WriteString(")")
	} else {
		sb.WriteString("(0)")
	}
	return fmt.Sprintf(`SELECT id,username FROM %v WHERE id IN %v`, sqlTableAdmins, sb.String())
}

func getUserByUsernameQuery() string {
	return fmt.Sprintf(`SELECT %v FROM %v WHERE username = %v`, selectUserFields, sqlTableUsers, sqlPlaceholders[0])
}

func getUsersQuery(order string) string {
	return fmt.Sprintf(`SELECT %v FROM %v ORDER BY username %v LIMIT %v OFFSET %v`, selectUserFields, sqlTableUsers,
		order, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getRecentlyUpdatedUsersQuery() string {
	return fmt.Sprintf(`SELECT %v FROM %v WHERE updated_at >= %v`, selectUserFields, sqlTableUsers, sqlPlaceholders[0])
}

func getDumpUsersQuery() string {
-	return fmt.Sprintf(`SELECT %v FROM %v`, selectUserFields, config.UsersTable)
+	return fmt.Sprintf(`SELECT %v FROM %v`, selectUserFields, sqlTableUsers)
}

func getDumpFoldersQuery() string {
	return fmt.Sprintf(`SELECT %v FROM %v`, selectFolderFields, sqlTableFolders)
}

func getUpdateQuotaQuery(reset bool) string {
	if reset {
		return fmt.Sprintf(`UPDATE %v SET used_quota_size = %v,used_quota_files = %v,last_quota_update = %v
-			WHERE username = %v`, config.UsersTable, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
+			WHERE username = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
	}
	return fmt.Sprintf(`UPDATE %v SET used_quota_size = used_quota_size + %v,used_quota_files = used_quota_files + %v,last_quota_update = %v
-		WHERE username = %v`, config.UsersTable, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
+		WHERE username = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
}

func getSetUpdateAtQuery() string {
	return fmt.Sprintf(`UPDATE %v SET updated_at = %v WHERE username = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getUpdateLastLoginQuery() string {
-	return fmt.Sprintf(`UPDATE %v SET last_login = %v WHERE username = %v`, config.UsersTable, sqlPlaceholders[0], sqlPlaceholders[1])
+	return fmt.Sprintf(`UPDATE %v SET last_login = %v WHERE username = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getUpdateAdminLastLoginQuery() string {
	return fmt.Sprintf(`UPDATE %v SET last_login = %v WHERE username = %v`, sqlTableAdmins, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getUpdateAPIKeyLastUseQuery() string {
	return fmt.Sprintf(`UPDATE %v SET last_use_at = %v WHERE key_id = %v`, sqlTableAPIKeys, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getUpdateShareLastUseQuery() string {
	return fmt.Sprintf(`UPDATE %v SET last_use_at = %v, used_tokens = used_tokens +%v WHERE share_id = %v`,
		sqlTableShares, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2])
}

func getQuotaQuery() string {
-	return fmt.Sprintf(`SELECT used_quota_size,used_quota_files FROM %v WHERE username = %v`, config.UsersTable,
+	return fmt.Sprintf(`SELECT used_quota_size,used_quota_files FROM %v WHERE username = %v`, sqlTableUsers,
		sqlPlaceholders[0])
}

func getAddUserQuery() string {
	return fmt.Sprintf(`INSERT INTO %v (username,password,public_keys,home_dir,uid,gid,max_sessions,quota_size,quota_files,permissions,
		used_quota_size,used_quota_files,last_quota_update,upload_bandwidth,download_bandwidth,status,last_login,expiration_date,filters,
-		filesystem,virtual_folders)
-		VALUES (%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,0,0,0,%v,%v,%v,0,%v,%v,%v,%v)`, config.UsersTable, sqlPlaceholders[0], sqlPlaceholders[1],
+		filesystem,additional_info,description,email,created_at,updated_at)
+		VALUES (%v,%v,%v,%v,%v,%v,%v,%v,%v,%v,0,0,0,%v,%v,%v,0,%v,%v,%v,%v,%v,%v,%v,%v)`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1],
		sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7],
		sqlPlaceholders[8], sqlPlaceholders[9], sqlPlaceholders[10], sqlPlaceholders[11], sqlPlaceholders[12], sqlPlaceholders[13],
-		sqlPlaceholders[14], sqlPlaceholders[15], sqlPlaceholders[16])
+		sqlPlaceholders[14], sqlPlaceholders[15], sqlPlaceholders[16], sqlPlaceholders[17], sqlPlaceholders[18], sqlPlaceholders[19],
+		sqlPlaceholders[20])
}

func getUpdateUserQuery() string {
	return fmt.Sprintf(`UPDATE %v SET password=%v,public_keys=%v,home_dir=%v,uid=%v,gid=%v,max_sessions=%v,quota_size=%v,
		quota_files=%v,permissions=%v,upload_bandwidth=%v,download_bandwidth=%v,status=%v,expiration_date=%v,filters=%v,filesystem=%v,
-		virtual_folders=%v WHERE id = %v`, config.UsersTable, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3],
+		additional_info=%v,description=%v,email=%v,updated_at=%v WHERE id = %v`, sqlTableUsers, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3],
		sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6], sqlPlaceholders[7], sqlPlaceholders[8], sqlPlaceholders[9],
		sqlPlaceholders[10], sqlPlaceholders[11], sqlPlaceholders[12], sqlPlaceholders[13], sqlPlaceholders[14], sqlPlaceholders[15],
-		sqlPlaceholders[16])
+		sqlPlaceholders[16], sqlPlaceholders[17], sqlPlaceholders[18], sqlPlaceholders[19])
}

func getDeleteUserQuery() string {
-	return fmt.Sprintf(`DELETE FROM %v WHERE id = %v`, config.UsersTable, sqlPlaceholders[0])
+	return fmt.Sprintf(`DELETE FROM %v WHERE id = %v`, sqlTableUsers, sqlPlaceholders[0])
}

func getFolderByNameQuery() string {
	return fmt.Sprintf(`SELECT %v FROM %v WHERE name = %v`, selectFolderFields, sqlTableFolders, sqlPlaceholders[0])
}

func getAddFolderQuery() string {
	return fmt.Sprintf(`INSERT INTO %v (path,used_quota_size,used_quota_files,last_quota_update,name,description,filesystem)
		VALUES (%v,%v,%v,%v,%v,%v,%v)`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2],
		sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6])
}

func getUpdateFolderQuery() string {
	return fmt.Sprintf(`UPDATE %v SET path=%v,description=%v,filesystem=%v WHERE name = %v`, sqlTableFolders, sqlPlaceholders[0],
		sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
}

func getDeleteFolderQuery() string {
	return fmt.Sprintf(`DELETE FROM %v WHERE id = %v`, sqlTableFolders, sqlPlaceholders[0])
}

func getUpsertFolderQuery() string {
	if config.Driver == MySQLDataProviderName {
		return fmt.Sprintf("INSERT INTO %v (`path`,`used_quota_size`,`used_quota_files`,`last_quota_update`,`name`,"+
			"`description`,`filesystem`) VALUES (%v,%v,%v,%v,%v,%v,%v) ON DUPLICATE KEY UPDATE "+
			"`path`=VALUES(`path`),`description`=VALUES(`description`),`filesystem`=VALUES(`filesystem`)",
			sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3], sqlPlaceholders[4],
			sqlPlaceholders[5], sqlPlaceholders[6])
	}
	return fmt.Sprintf(`INSERT INTO %v (path,used_quota_size,used_quota_files,last_quota_update,name,description,filesystem)
		VALUES (%v,%v,%v,%v,%v,%v,%v) ON CONFLICT (name) DO UPDATE SET path = EXCLUDED.path,description=EXCLUDED.description,
		filesystem=EXCLUDED.filesystem`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2],
		sqlPlaceholders[3], sqlPlaceholders[4], sqlPlaceholders[5], sqlPlaceholders[6])
}

func getClearFolderMappingQuery() string {
	return fmt.Sprintf(`DELETE FROM %v WHERE user_id = (SELECT id FROM %v WHERE username = %v)`, sqlTableFoldersMapping,
		sqlTableUsers, sqlPlaceholders[0])
}

func getAddFolderMappingQuery() string {
	return fmt.Sprintf(`INSERT INTO %v (virtual_path,quota_size,quota_files,folder_id,user_id)
		VALUES (%v,%v,%v,(SELECT id FROM %v WHERE name = %v),(SELECT id FROM %v WHERE username = %v))`,
		sqlTableFoldersMapping, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlTableFolders,
		sqlPlaceholders[3], sqlTableUsers, sqlPlaceholders[4])
}

func getFoldersQuery(order string) string {
	return fmt.Sprintf(`SELECT %v FROM %v ORDER BY name %v LIMIT %v OFFSET %v`, selectFolderFields, sqlTableFolders,
		order, sqlPlaceholders[0], sqlPlaceholders[1])
}

func getUpdateFolderQuotaQuery(reset bool) string {
	if reset {
		return fmt.Sprintf(`UPDATE %v SET used_quota_size = %v,used_quota_files = %v,last_quota_update = %v
			WHERE name = %v`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
	}
	return fmt.Sprintf(`UPDATE %v SET used_quota_size = used_quota_size + %v,used_quota_files = used_quota_files + %v,last_quota_update = %v
		WHERE name = %v`, sqlTableFolders, sqlPlaceholders[0], sqlPlaceholders[1], sqlPlaceholders[2], sqlPlaceholders[3])
}

func getQuotaFolderQuery() string {
	return fmt.Sprintf(`SELECT used_quota_size,used_quota_files FROM %v WHERE name = %v`, sqlTableFolders,
		sqlPlaceholders[0])
}

func getRelatedFoldersForUsersQuery(users []User) string {
	var sb strings.Builder
	for _, u := range users {
		if sb.Len() == 0 {
			sb.WriteString("(")
		} else {
			sb.WriteString(",")
		}
		sb.WriteString(strconv.FormatInt(u.ID, 10))
	}
	if sb.Len() > 0 {
		sb.WriteString(")")
	}
	return fmt.Sprintf(`SELECT f.id,f.name,f.path,f.used_quota_size,f.used_quota_files,f.last_quota_update,fm.virtual_path,
		fm.quota_size,fm.quota_files,fm.user_id,f.filesystem,f.description FROM %v f INNER JOIN %v fm ON f.id = fm.folder_id WHERE
		fm.user_id IN %v ORDER BY fm.user_id`, sqlTableFolders, sqlTableFoldersMapping, sb.String())
}

func getRelatedUsersForFoldersQuery(folders []vfs.BaseVirtualFolder) string {
	var sb strings.Builder
	for _, f := range folders {
		if sb.Len() == 0 {
			sb.WriteString("(")
		} else {
			sb.WriteString(",")
		}
		sb.WriteString(strconv.FormatInt(f.ID, 10))
	}
	if sb.Len() > 0 {
		sb.WriteString(")")
	}
	return fmt.Sprintf(`SELECT fm.folder_id,u.username FROM %v fm INNER JOIN %v u ON fm.user_id = u.id
		WHERE fm.folder_id IN %v ORDER BY fm.folder_id`, sqlTableFoldersMapping, sqlTableUsers, sb.String())
}

func getDatabaseVersionQuery() string {
-	return "SELECT version from schema_version LIMIT 1"
+	return fmt.Sprintf("SELECT version from %v LIMIT 1", sqlTableSchemaVersion)
}

func getUpdateDBVersionQuery() string {
-	return fmt.Sprintf(`UPDATE schema_version SET version=%v`, sqlPlaceholders[0])
+	return fmt.Sprintf(`UPDATE %v SET version=%v`, sqlTableSchemaVersion, sqlPlaceholders[0])
}

1234	dataprovider/user.go
File diff suppressed because it is too large

205	docker/README.md
@@ -1,5 +1,204 @@
-## Dockerfile examples
+# Official Docker image

-Sample Dockerfiles for `sftpgo` daemon and the REST API CLI.
+SFTPGo provides an official Docker image; it is available on both [Docker Hub](https://hub.docker.com/r/drakkan/sftpgo) and on [GitHub Container Registry](https://github.com/users/drakkan/packages/container/package/sftpgo).

-We don't want to add a `Dockerfile` for each single `sftpgo` configuration options or data provider. You can use the docker configurations here as starting point that you can customize to run `sftpgo` with [Docker](http://www.docker.io "Docker").
+## Supported tags and respective Dockerfile links

- [v2.2.3, v2.2, v2, latest](https://github.com/drakkan/sftpgo/blob/v2.2.3/Dockerfile)
- [v2.2.3-alpine, v2.2-alpine, v2-alpine, alpine](https://github.com/drakkan/sftpgo/blob/v2.2.3/Dockerfile.alpine)
- [v2.2.3-slim, v2.2-slim, v2-slim, slim](https://github.com/drakkan/sftpgo/blob/v2.2.3/Dockerfile)
- [v2.2.3-alpine-slim, v2.2-alpine-slim, v2-alpine-slim, alpine-slim](https://github.com/drakkan/sftpgo/blob/v2.2.3/Dockerfile.alpine)
- [v2.2.3-distroless-slim, v2.2-distroless-slim, v2-distroless-slim, distroless-slim](https://github.com/drakkan/sftpgo/blob/v2.2.3/Dockerfile.distroless)
- [edge](../Dockerfile)
- [edge-alpine](../Dockerfile.alpine)
- [edge-slim](../Dockerfile)
- [edge-alpine-slim](../Dockerfile.alpine)
- [edge-distroless-slim](../Dockerfile.distroless)

## How to use the SFTPGo image

### Start a `sftpgo` server instance

Starting an SFTPGo instance is simple:

```shell
docker run --name some-sftpgo -p 8080:8080 -p 2022:2022 -d "drakkan/sftpgo:tag"
```

... where `some-sftpgo` is the name you want to assign to your container, and `tag` is the tag specifying the SFTPGo version you want. See the list above for relevant tags.

Now visit [http://localhost:8080/web/admin](http://localhost:8080/web/admin), replacing `localhost` with the appropriate IP address if SFTPGo is not reachable on localhost, create the first admin and a new SFTPGo user. The SFTP service is available on port 2022.
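
If you want to quickly verify the SFTP service from the host, a minimal check, assuming you created a user named `someuser` from the web admin:

```shell
sftp -P 2022 someuser@localhost
```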

If you don't want to persist any files, for example for testing purposes, you can run an SFTPGo instance like this:

```shell
docker run --rm --name some-sftpgo -p 8080:8080 -p 2022:2022 -d "drakkan/sftpgo:tag"
```

If you prefer GitHub Container Registry to Docker Hub, replace `drakkan/sftpgo:tag` with `ghcr.io/drakkan/sftpgo:tag`.

### Enable FTP service

FTP is disabled by default; you can enable the FTP service by starting the SFTPGo instance in this way:

```shell
docker run --name some-sftpgo \
  -p 8080:8080 \
  -p 2022:2022 \
  -p 2121:2121 \
  -p 50000-50100:50000-50100 \
  -e SFTPGO_FTPD__BINDINGS__0__PORT=2121 \
  -e SFTPGO_FTPD__BINDINGS__0__FORCE_PASSIVE_IP=<your external ip here> \
  -d "drakkan/sftpgo:tag"
```

The FTP service is now available on port 2121 and SFTP on port 2022.

You can change the passive ports range (`50000-50100` by default) by setting the environment variables `SFTPGO_FTPD__PASSIVE_PORT_RANGE__START` and `SFTPGO_FTPD__PASSIVE_PORT_RANGE__END`.
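
For example, a minimal sketch that narrows the passive range to ten ports; the published port range must match the configured one:

```shell
docker run --name some-sftpgo \
  -p 8080:8080 \
  -p 2022:2022 \
  -p 2121:2121 \
  -p 50000-50009:50000-50009 \
  -e SFTPGO_FTPD__BINDINGS__0__PORT=2121 \
  -e SFTPGO_FTPD__PASSIVE_PORT_RANGE__START=50000 \
  -e SFTPGO_FTPD__PASSIVE_PORT_RANGE__END=50009 \
  -d "drakkan/sftpgo:tag"
```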

It is recommended that you provide a certificate and key file to expose FTP over TLS. You should prefer SFTP to FTP even if you configure TLS; please don't blindly enable the old FTP protocol.
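
A possible sketch for FTPS, assuming you mount a certificate pair into the data volume; the certificate paths are placeholders, and the `SFTPGO_FTPD__CERTIFICATE_FILE`/`SFTPGO_FTPD__CERTIFICATE_KEY_FILE` settings are the ones shown in the legacy Dockerfile comments later on this page:

```shell
docker run --name some-sftpgo \
  -p 8080:8080 -p 2022:2022 -p 2121:2121 \
  -p 50000-50100:50000-50100 \
  -e SFTPGO_FTPD__BINDINGS__0__PORT=2121 \
  -e SFTPGO_FTPD__CERTIFICATE_FILE=/srv/sftpgo/mycert.crt \
  -e SFTPGO_FTPD__CERTIFICATE_KEY_FILE=/srv/sftpgo/mycert.key \
  --mount type=bind,source=/my/own/sftpgodata,target=/srv/sftpgo \
  -d "drakkan/sftpgo:tag"
```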

### Enable WebDAV service

WebDAV is disabled by default; you can enable the WebDAV service by starting the SFTPGo instance in this way:

```shell
docker run --name some-sftpgo \
  -p 8080:8080 \
  -p 2022:2022 \
  -p 10080:10080 \
  -e SFTPGO_WEBDAVD__BINDINGS__0__PORT=10080 \
  -d "drakkan/sftpgo:tag"
```

The WebDAV service is now available on port 10080 and SFTP on port 2022.

It is recommended that you provide a certificate and key file to expose WebDAV over HTTPS.
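
A possible sketch, assuming a binding-level `ENABLE_HTTPS` switch and `SFTPGO_WEBDAVD__CERTIFICATE_FILE`/`SFTPGO_WEBDAVD__CERTIFICATE_KEY_FILE` settings as described in the full configuration docs linked below; the certificate paths are placeholders:

```shell
docker run --name some-sftpgo \
  -p 8080:8080 -p 2022:2022 -p 10080:10080 \
  -e SFTPGO_WEBDAVD__BINDINGS__0__PORT=10080 \
  -e SFTPGO_WEBDAVD__BINDINGS__0__ENABLE_HTTPS=1 \
  -e SFTPGO_WEBDAVD__CERTIFICATE_FILE=/srv/sftpgo/mycert.crt \
  -e SFTPGO_WEBDAVD__CERTIFICATE_KEY_FILE=/srv/sftpgo/mycert.key \
  --mount type=bind,source=/my/own/sftpgodata,target=/srv/sftpgo \
  -d "drakkan/sftpgo:tag"
```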

### Container shell access and viewing SFTPGo logs

The `docker exec` command allows you to run commands inside a Docker container. The following command line will give you a shell inside your `sftpgo` container:

```shell
docker exec -it some-sftpgo sh
```

The logs are available through Docker's container log:

```shell
docker logs some-sftpgo
```

**Note:** the [distroless](../Dockerfile.distroless) image contains only a statically linked sftpgo binary and its minimal runtime dependencies. A shell is not available in this image.

### Where to Store Data

Important note: There are several ways to store data used by applications that run in Docker containers. We encourage users of the SFTPGo images to familiarize themselves with the options available, including:

- Let Docker manage the storage for SFTPGo data by [writing them to disk on the host system using its own internal volume management](https://docs.docker.com/engine/tutorials/dockervolumes/#adding-a-data-volume). This is the default and is easy and fairly transparent to the user. The downside is that the files may be hard to locate for tools and applications that run directly on the host system, i.e. outside containers.
- Create a data directory on the host system (outside the container) and [mount this to a directory visible from inside the container](https://docs.docker.com/engine/tutorials/dockervolumes/#mount-a-host-directory-as-a-data-volume). This places the SFTPGo files in a known location on the host system, and makes it easy for tools and applications on the host system to access the files. The downside is that the user needs to make sure that the directory exists, and that e.g. directory permissions and other security mechanisms on the host system are set up correctly. The SFTPGo image runs using `1000` as UID/GID by default.

The Docker documentation is a good starting point for understanding the different storage options and variations, and there are multiple blogs and forum postings that discuss and give advice in this area. We will simply show the basic procedure here for the latter option above:

1. Create a data directory on a suitable volume on your host system, e.g. `/my/own/sftpgodata`.
2. Create a home directory for the sftpgo container user on your host system, e.g. `/my/own/sftpgohome`.
3. Start your SFTPGo container like this:

```shell
docker run --name some-sftpgo \
  -p 8080:8090 \
  -p 2022:2022 \
  --mount type=bind,source=/my/own/sftpgodata,target=/srv/sftpgo \
  --mount type=bind,source=/my/own/sftpgohome,target=/var/lib/sftpgo \
  -e SFTPGO_HTTPD__BINDINGS__0__PORT=8090 \
  -d "drakkan/sftpgo:tag"
```

As you can see, SFTPGo uses two main volumes:

- `/srv/sftpgo` to handle persistent data. The default home directory for SFTP/FTP/WebDAV users is `/srv/sftpgo/data/<username>`. Backups are stored in `/srv/sftpgo/backups`.
- `/var/lib/sftpgo` is the home directory for the sftpgo system user defined inside the container. This is also the container working directory; host keys will be created here when using the default configuration.

If you want fine-grained control, you can also mount `/srv/sftpgo/data` and `/srv/sftpgo/backups` as separate volumes instead of mounting `/srv/sftpgo`, as sketched below.
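
A sketch of that layout, reusing the host directories from the previous steps plus a separate backups directory (the host paths are just examples):

```shell
docker run --name some-sftpgo \
  -p 8080:8080 -p 2022:2022 \
  --mount type=bind,source=/my/own/sftpgodata,target=/srv/sftpgo/data \
  --mount type=bind,source=/my/own/sftpgobackups,target=/srv/sftpgo/backups \
  --mount type=bind,source=/my/own/sftpgohome,target=/var/lib/sftpgo \
  -d "drakkan/sftpgo:tag"
```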

### Configuration

The runtime configuration can be customized via environment variables that you can set by passing the `-e` option to the `docker run` command, or inside the `environment` section if you are using [docker stack deploy](https://docs.docker.com/engine/reference/commandline/stack_deploy/) or [docker-compose](https://github.com/docker/compose).

Please take a look [here](../docs/full-configuration.md#environment-variables) to learn how to configure SFTPGo via environment variables.

Alternatively, you can mount your custom configuration file to `/var/lib/sftpgo` or `/var/lib/sftpgo/.config/sftpgo`.
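
For example, a minimal sketch that bind mounts a host-side `sftpgo.json` (the host path is a placeholder):

```shell
docker run --name some-sftpgo \
  -p 8080:8080 -p 2022:2022 \
  --mount type=bind,source=/my/own/sftpgo.json,target=/var/lib/sftpgo/sftpgo.json \
  -d "drakkan/sftpgo:tag"
```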

### Loading initial data

Initial data can be loaded in the following ways:

- via the `--loaddata-from` flag or the `SFTPGO_LOADDATA_FROM` environment variable
- by providing a dump file to the memory provider

Please take a look [here](../docs/full-configuration.md) for more details.
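
A sketch of the environment-variable approach, assuming `backup.json` is a dump previously produced by SFTPGo and available on the host:

```shell
docker run --name some-sftpgo \
  -p 8080:8080 -p 2022:2022 \
  --mount type=bind,source=/my/own/backup.json,target=/var/lib/sftpgo/backup.json \
  -e SFTPGO_LOADDATA_FROM=/var/lib/sftpgo/backup.json \
  -d "drakkan/sftpgo:tag"
```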

### Running as an arbitrary user

The SFTPGo image runs using `1000` as UID/GID by default. If you know the permissions of your data and/or configuration directory are already set appropriately, or you need to run SFTPGo with a specific UID/GID, it is possible to invoke this image with `--user` set to any value (other than `root/0`) in order to achieve the desired access/configuration:

```shell
$ ls -lnd data
drwxr-xr-x 2 1100 1100 6 7 nov 09.09 data
$ ls -lnd config
drwxr-xr-x 2 1100 1100 6 7 nov 09.19 config
```

With the above directory permissions, you can start an SFTPGo instance like this:

```shell
docker run --name some-sftpgo \
  --user 1100:1100 \
  -p 8080:8080 \
  -p 2022:2022 \
  --mount type=bind,source="${PWD}/data",target=/srv/sftpgo \
  --mount type=bind,source="${PWD}/config",target=/var/lib/sftpgo \
  -d "drakkan/sftpgo:tag"
```

Alternatively, build your own image using the official one as a base; here is a sample Dockerfile:

```shell
FROM drakkan/sftpgo:tag
USER root
RUN chown -R 1100:1100 /etc/sftpgo && chown 1100:1100 /var/lib/sftpgo /srv/sftpgo
USER 1100:1100
```

**Note:** the above Dockerfile will not work if you use the [distroless](../Dockerfile.distroless) image as base, since the `chown` command is not available there.

## Image Variants

The `sftpgo` images come in many flavors, each designed for a specific use case. The `edge`, `edge-slim`, `edge-alpine`, `edge-alpine-slim` and `edge-distroless-slim` tags are updated after each new commit.

### `sftpgo:<version>`

This is the de facto image, based on [Debian](https://www.debian.org/), available in [the `debian` official image](https://hub.docker.com/_/debian). If you are unsure about what your needs are, you probably want to use this one.

### `sftpgo:<version>-alpine`

This image is based on the popular [Alpine Linux project](https://alpinelinux.org/), available in [the `alpine` official image](https://hub.docker.com/_/alpine). Alpine Linux is much smaller than most distribution base images (~5MB), and thus leads to much slimmer images in general.

This variant is highly recommended when you want the final image size to be as small as possible. The main caveat to note is that it does use [musl libc](https://musl.libc.org/) instead of [glibc and friends](https://www.etalabs.net/compare_libcs.html), so certain software might run into issues depending on the depth of their libc requirements. However, most software doesn't have an issue with this, so this variant is usually a very safe choice. See [this Hacker News comment thread](https://news.ycombinator.com/item?id=10782897) for more discussion of the issues that might arise and some pro/con comparisons of using Alpine-based images.

### `sftpgo:<version>-distroless`

This image is based on the popular [Distroless project](https://github.com/GoogleContainerTools/distroless). We use the latest Debian-based distroless image as the base.

The distroless variant contains only a statically linked sftpgo binary and its minimal runtime dependencies, so it doesn't allow shell access (no shell is installed).
SQLite support is disabled since it requires CGO, and therefore a C runtime, which is not installed.
The default data provider is `bolt`; all the supported data providers except `sqlite` work.
We only provide the slim variant, so the optional `git` dependency is not available.
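
For instance, a sketch that starts the distroless image with the default `bolt` provider made explicit; `SFTPGO_DATA_PROVIDER__DRIVER` follows the usual environment-variable mapping of the configuration keys, so double-check it against the configuration docs:

```shell
docker run --name some-sftpgo \
  -p 8080:8080 -p 2022:2022 \
  -e SFTPGO_DATA_PROVIDER__DRIVER=bolt \
  -d "drakkan/sftpgo:distroless-slim"
```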

### `sftpgo:<suite>-slim`

These tags provide a slimmer image that does not include the optional `git` dependency.

## Helm Chart

A Helm chart is [available](https://artifacthub.io/packages/helm/sagikazarmark/sftpgo). You can find the source code [here](https://github.com/sagikazarmark/helm-charts/tree/master/charts/sftpgo).

@@ -1,8 +0,0 @@
-FROM debian:latest
-LABEL maintainer="nicola.murino@gmail.com"
-RUN apt-get update && apt-get install -y curl python3-requests python3-pygments
-
-RUN curl https://raw.githubusercontent.com/drakkan/sftpgo/master/scripts/sftpgo_api_cli.py --output /usr/bin/sftpgo_api_cli.py
-
-ENTRYPOINT ["python3", "/usr/bin/sftpgo_api_cli.py" ]
-CMD []
28	docker/scripts/entrypoint-alpine.sh	Executable file
@@ -0,0 +1,28 @@
#!/usr/bin/env bash

SFTPGO_PUID=${SFTPGO_PUID:-1000}
SFTPGO_PGID=${SFTPGO_PGID:-1000}

if [ "$1" = 'sftpgo' ]; then
	if [ "$(id -u)" = '0' ]; then
		for DIR in "/etc/sftpgo" "/var/lib/sftpgo" "/srv/sftpgo"
		do
			DIR_UID=$(stat -c %u ${DIR})
			DIR_GID=$(stat -c %g ${DIR})
			if [ ${DIR_UID} != ${SFTPGO_PUID} ] || [ ${DIR_GID} != ${SFTPGO_PGID} ]; then
				echo '{"level":"info","time":"'`date +%Y-%m-%dT%H:%M:%S.000`'","sender":"entrypoint","message":"change owner for \"'${DIR}'\" UID: '${SFTPGO_PUID}' GID: '${SFTPGO_PGID}'"}'
				if [ ${DIR} = "/etc/sftpgo" ]; then
					chown -R ${SFTPGO_PUID}:${SFTPGO_PGID} ${DIR}
				else
					chown ${SFTPGO_PUID}:${SFTPGO_PGID} ${DIR}
				fi
			fi
		done
		echo '{"level":"info","time":"'`date +%Y-%m-%dT%H:%M:%S.000`'","sender":"entrypoint","message":"run as UID: '${SFTPGO_PUID}' GID: '${SFTPGO_PGID}'"}'
		exec su-exec ${SFTPGO_PUID}:${SFTPGO_PGID} "$@"
	fi

	exec "$@"
fi

exec "$@"
32	docker/scripts/entrypoint.sh	Executable file
@@ -0,0 +1,32 @@
#!/usr/bin/env bash

SFTPGO_PUID=${SFTPGO_PUID:-1000}
SFTPGO_PGID=${SFTPGO_PGID:-1000}

if [ "$1" = 'sftpgo' ]; then
	if [ "$(id -u)" = '0' ]; then
		getent passwd ${SFTPGO_PUID} > /dev/null
		HAS_PUID=$?
		getent group ${SFTPGO_PGID} > /dev/null
		HAS_PGID=$?
		if [ ${HAS_PUID} -ne 0 ] || [ ${HAS_PGID} -ne 0 ]; then
			echo '{"level":"info","time":"'`date +%Y-%m-%dT%H:%M:%S.%3N`'","sender":"entrypoint","message":"prepare to run as UID: '${SFTPGO_PUID}' GID: '${SFTPGO_PGID}'"}'
			if [ ${HAS_PGID} -ne 0 ]; then
				echo '{"level":"info","time":"'`date +%Y-%m-%dT%H:%M:%S.%3N`'","sender":"entrypoint","message":"set GID to: '${SFTPGO_PGID}'"}'
				groupmod -g ${SFTPGO_PGID} sftpgo
			fi
			if [ ${HAS_PUID} -ne 0 ]; then
				echo '{"level":"info","time":"'`date +%Y-%m-%dT%H:%M:%S.%3N`'","sender":"entrypoint","message":"set UID to: '${SFTPGO_PUID}'"}'
				usermod -u ${SFTPGO_PUID} sftpgo
			fi
			chown -R ${SFTPGO_PUID}:${SFTPGO_PGID} /etc/sftpgo
			chown ${SFTPGO_PUID}:${SFTPGO_PGID} /var/lib/sftpgo /srv/sftpgo
		fi
		echo '{"level":"info","time":"'`date +%Y-%m-%dT%H:%M:%S.%3N`'","sender":"entrypoint","message":"run as UID: '${SFTPGO_PUID}' GID: '${SFTPGO_PGID}'"}'
		exec gosu ${SFTPGO_PUID}:${SFTPGO_PGID} "$@"
	fi

	exec "$@"
fi

exec "$@"
@@ -1,21 +1,21 @@
FROM golang:alpine as builder

RUN apk add --no-cache git gcc g++ ca-certificates \
&& go get -d github.com/drakkan/sftpgo
&& go get -v -d github.com/drakkan/sftpgo
WORKDIR /go/src/github.com/drakkan/sftpgo
# uncomment the next line to get the latest stable version instead of the latest git
#RUN git checkout `git rev-list --tags --max-count=1`
RUN go build -i -ldflags "-s -w -X github.com/drakkan/sftpgo/utils.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/utils.date=`date -u +%FT%TZ`" -o /go/bin/sftpgo
ARG TAG
ARG FEATURES
# Use --build-arg TAG=LATEST for latest tag. Use e.g. --build-arg TAG=v1.0.0 for a specific tag/commit. Otherwise HEAD (master) is built.
RUN git checkout $(if [ "${TAG}" = LATEST ]; then echo `git rev-list --tags --max-count=1`; elif [ -n "${TAG}" ]; then echo "${TAG}"; else echo HEAD; fi)
RUN go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -v -o /go/bin/sftpgo

FROM alpine:latest

RUN apk add --no-cache ca-certificates su-exec \
&& mkdir -p /data /etc/sftpgo /srv/sftpgo/config /srv/sftpgo/web /srv/sftpgo/backups

# ca-certificates is needed for Cloud Storage Support and to expose the REST API over HTTPS.
# If you install git then ca-certificates will be automatically installed as dependency.
# git, rsync and ca-certificates are optional, uncomment the next line to add support for them if needed.
#RUN apk add --no-cache git rsync ca-certificates
# git and rsync are optional, uncomment the next line to add support for them if needed.
#RUN apk add --no-cache git rsync

COPY --from=builder /go/bin/sftpgo /bin/
COPY --from=builder /go/src/github.com/drakkan/sftpgo/sftpgo.json /etc/sftpgo/sftpgo.json
@@ -27,5 +27,24 @@ RUN chmod +x /bin/entrypoint.sh
VOLUME [ "/data", "/srv/sftpgo/config", "/srv/sftpgo/backups" ]
EXPOSE 2022 8080

# uncomment the following settings to enable FTP support
#ENV SFTPGO_FTPD__BIND_PORT=2121
#ENV SFTPGO_FTPD__FORCE_PASSIVE_IP=<your FTP visible IP here>
#EXPOSE 2121

# we need to expose the passive ports range too
#EXPOSE 50000-50100

# it is a good idea to provide certificates to enable FTPS too
#ENV SFTPGO_FTPD__CERTIFICATE_FILE=/srv/sftpgo/config/mycert.crt
#ENV SFTPGO_FTPD__CERTIFICATE_KEY_FILE=/srv/sftpgo/config/mycert.key

# uncomment the following setting to enable WebDAV support
#ENV SFTPGO_WEBDAVD__BIND_PORT=8090

# it is a good idea to provide certificates to enable WebDAV over HTTPS
#ENV SFTPGO_WEBDAVD__CERTIFICATE_FILE=${CONFIG_DIR}/mycert.crt
#ENV SFTPGO_WEBDAVD__CERTIFICATE_KEY_FILE=${CONFIG_DIR}/mycert.key

ENTRYPOINT ["/bin/entrypoint.sh"]
CMD ["serve"]
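If the FTP settings above are uncommented, the corresponding ports must also be published at run time; a sketch with placeholder values:

```bash
# Illustrative only: publish the FTP control port and the passive port range
# configured in the Dockerfile alongside the SFTP and HTTP ports.
docker run --name sftpgo \
  -p 2022:2022 -p 8080:8080 \
  -p 2121:2121 -p 50000-50100:50000-50100 \
  sftpgo
```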
@@ -1,27 +1,38 @@
# SFTPGo with Docker and Alpine

:warning: The recommended way to run SFTPGo on Docker is to use the official [images](https://hub.docker.com/r/drakkan/sftpgo). The documentation here is now obsolete.

This Dockerfile can be used to build an image that hosts multiple SFTPGo instances, each started as a different user.

## Example

> 1003 is a custom uid:gid for this instance of SFTPGo

```bash
# Prereq on docker host
sudo groupadd -g 1003 sftpgrp && \
sudo useradd -u 1003 -g 1003 sftpuser -d /home/sftpuser/ && \
sudo -u sftpuser mkdir /home/sftpuser/{conf,data} && \
curl https://raw.githubusercontent.com/drakkan/sftpgo/master/sql/sqlite/20190828.sql | sqlite3 /home/sftpuser/conf/sftpgo.db && \
curl https://raw.githubusercontent.com/drakkan/sftpgo/master/sql/sqlite/20191112.sql | sqlite3 /home/sftpuser/conf/sftpgo.db && \
curl https://raw.githubusercontent.com/drakkan/sftpgo/master/sql/sqlite/20191230.sql | sqlite3 /home/sftpuser/conf/sftpgo.db && \
curl https://raw.githubusercontent.com/drakkan/sftpgo/master/sql/sqlite/20200116.sql | sqlite3 /home/sftpuser/conf/sftpgo.db && \
curl https://raw.githubusercontent.com/drakkan/sftpgo/master/sftpgo.json -o /home/sftpuser/conf/sftpgo.json

# Edit sftpgo.json as you need

# Get and build SFTPGo image.
# Add --build-arg TAG=LATEST to build the latest tag or e.g. TAG=v1.0.0 for a specific tag/commit.
# Add --build-arg FEATURES=<build features comma separated> to specify the features to build.
git clone https://github.com/drakkan/sftpgo.git && \
cd sftpgo && \
sudo docker build -t sftpgo docker/sftpgo/alpine/

# Initialize the configured provider. For PostgreSQL and MySQL providers you need to create the configured database and the "initprovider" command will create the required tables.
sudo docker run --name sftpgo \
  -e PUID=1003 \
  -e GUID=1003 \
  -v /home/sftpuser/conf/:/srv/sftpgo/config \
  sftpgo initprovider -c /srv/sftpgo/config

# Start the image
sudo docker rm sftpgo && sudo docker run --name sftpgo \
  -e SFTPGO_LOG_FILE_PATH= \
  -e SFTPGO_CONFIG_DIR=/srv/sftpgo/config \
  -e SFTPGO_HTTPD__TEMPLATES_PATH=/srv/sftpgo/web/templates \
@@ -36,11 +47,15 @@ sudo docker run --name sftpgo \
  -v /home/sftpuser/backups:/srv/sftpgo/backups \
  sftpgo
```

If you want to enable FTP/S you also need to publish the FTP port and the FTP passive port range, defined in your `Dockerfile`, by adding, for example, the following options to the `docker run` command: `-p 2121:2121 -p 50000-50100:50000-50100`. The same goes for WebDAV: you need to publish the configured port.

The script `entrypoint.sh` makes sure to correct the permissions of directories and start the process with the right user.

Several images can be run with different parameters.

## Custom systemd script

An example systemd script is available [here](sftpgo.service); it uses the `Environment` parameter to set `PUID` and `GUID`.

The directory set in the `WorkingDirectory` parameter must exist and must contain a file named like `sftpgo-${PUID}.env`, the environment variable file for the SFTPGo instance.
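A sketch of what such an environment file could look like; the file name pattern comes from the text above, while the variables chosen are illustrative assumptions:

```bash
# /etc/sftpgo/sftpgo-1003.env - hypothetical per-instance environment file,
# read by docker run via --env-file (plain VAR=value lines, # for comments).
SFTPGO_SFTPD__BIND_PORT=2022
SFTPGO_HTTPD__BIND_PORT=8080
SFTPGO_LOG_LEVEL=debug
```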
@@ -1,5 +1,5 @@
[Unit]
Description=SFTPGo sftp server
Description=SFTPGo server
After=docker.service

[Service]
@@ -8,19 +8,23 @@ Group=root
WorkingDirectory=/etc/sftpgo
Environment=PUID=1003
Environment=GUID=1003
EnvironmentFile=-/etc/sysconfig/sftpgo.conf
EnvironmentFile=-/etc/sysconfig/sftpgo.env
ExecStartPre=-docker kill sftpgo
ExecStartPre=-docker rm sftpgo
ExecStart=docker run --name sftpgo \
  --env-file sftpgo-${PUID}.env \
  -e PUID=${PUID} \
  -e GUID=${GUID} \
  -e SFTPGO_LOG_FILE_PATH= \
  -e SFTPGO_CONFIG_DIR=/srv/sftpgo/config \
  -e SFTPGO_HTTPD__TEMPLATES_PATH=/srv/sftpgo/web/templates \
  -e SFTPGO_HTTPD__STATIC_FILES_PATH=/srv/sftpgo/web/static \
  -e SFTPGO_HTTPD__BACKUPS_PATH=/srv/sftpgo/backups \
  -p 8080:8080 \
  -p 2022:2022 \
  -v /home/sftpuser/conf/:/srv/sftpgo/config \
  -v /home/sftpuser/data:/data \
  -v /home/sftpuser/backups:/srv/sftpgo/backups \
  sftpgo
ExecStop=docker stop sftpgo
SyslogIdentifier=sftpgo
@@ -1,19 +1,22 @@
# we use a multi stage build to have a separate build and run env
FROM golang:latest as buildenv
LABEL maintainer="nicola.murino@gmail.com"
RUN go get -d github.com/drakkan/sftpgo
RUN go get -v -d github.com/drakkan/sftpgo
WORKDIR /go/src/github.com/drakkan/sftpgo
# uncomment the next line to get the latest stable version instead of the latest git
#RUN git checkout `git rev-list --tags --max-count=1`
RUN go build -i -ldflags "-s -w -X github.com/drakkan/sftpgo/utils.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/utils.date=`date -u +%FT%TZ`" -o sftpgo
ARG TAG
ARG FEATURES
# Use --build-arg TAG=LATEST for latest tag. Use e.g. --build-arg TAG=v1.0.0 for a specific tag/commit. Otherwise HEAD (master) is built.
RUN git checkout $(if [ "${TAG}" = LATEST ]; then echo `git rev-list --tags --max-count=1`; elif [ -n "${TAG}" ]; then echo "${TAG}"; else echo HEAD; fi)
RUN go build $(if [ -n "${FEATURES}" ]; then echo "-tags ${FEATURES}"; fi) -ldflags "-s -w -X github.com/drakkan/sftpgo/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/version.date=`date -u +%FT%TZ`" -v -o sftpgo

# now define the run environment
FROM debian:latest

# ca-certificates is needed for Cloud Storage Support and to expose the REST API over HTTPS.
# If you install git then ca-certificates will be automatically installed as dependency.
# git, rsync and ca-certificates are optional, uncomment the next line to add support for them if needed.
#RUN apt-get update && apt-get install -y git rsync ca-certificates
# ca-certificates is needed for Cloud Storage Support and for HTTPS/FTPS.
RUN apt-get update && apt-get install -y ca-certificates && apt-get clean

# git and rsync are optional, uncomment the next line to add support for them if needed.
#RUN apt-get update && apt-get install -y git rsync && apt-get clean

ARG BASE_DIR=/app
ARG DATA_REL_DIR=data
@@ -37,7 +40,7 @@ ENV WEB_DIR=${BASE_DIR}/${WEB_REL_PATH}

RUN mkdir -p ${DATA_DIR} ${CONFIG_DIR} ${WEB_DIR} ${BACKUPS_DIR}
RUN groupadd --system -g ${GID} ${GROUPNAME}
RUN useradd --system --create-home --no-log-init --home-dir ${HOME_DIR} --comment "SFTPGo user" --shell /bin/false --gid ${GID} --uid ${UID} ${USERNAME}
RUN useradd --system --create-home --no-log-init --home-dir ${HOME_DIR} --comment "SFTPGo user" --shell /usr/sbin/nologin --gid ${GID} --uid ${UID} ${USERNAME}

WORKDIR ${HOME_DIR}
RUN mkdir -p bin .config/sftpgo
@@ -68,5 +71,23 @@ ENV SFTPGO_HTTPD__STATIC_FILES_PATH=${WEB_DIR}/static
ENV SFTPGO_DATA_PROVIDER__USERS_BASE_DIR=${DATA_DIR}
ENV SFTPGO_HTTPD__BACKUPS_PATH=${BACKUPS_DIR}

# uncomment the following settings to enable FTP support
#ENV SFTPGO_FTPD__BIND_PORT=2121
#ENV SFTPGO_FTPD__FORCE_PASSIVE_IP=<your FTP visible IP here>
#EXPOSE 2121
# we need to expose the passive ports range too
#EXPOSE 50000-50100

# it is a good idea to provide certificates to enable FTPS too
#ENV SFTPGO_FTPD__CERTIFICATE_FILE=${CONFIG_DIR}/mycert.crt
#ENV SFTPGO_FTPD__CERTIFICATE_KEY_FILE=${CONFIG_DIR}/mycert.key

# uncomment the following setting to enable WebDAV support
#ENV SFTPGO_WEBDAVD__BIND_PORT=8090

# it is a good idea to provide certificates to enable WebDAV over HTTPS
#ENV SFTPGO_WEBDAVD__CERTIFICATE_FILE=${CONFIG_DIR}/mycert.crt
#ENV SFTPGO_WEBDAVD__CERTIFICATE_KEY_FILE=${CONFIG_DIR}/mycert.key

ENTRYPOINT ["sftpgo"]
CMD ["serve"]
@@ -1,4 +1,6 @@
# Dockerfile based on Debian stable

:warning: The recommended way to run SFTPGo on Docker is to use the official [images](https://hub.docker.com/r/drakkan/sftpgo). The documentation here is now obsolete.

Please read the comments inside the `Dockerfile` to learn how to customize things for your setup.

@@ -8,15 +10,50 @@ You can build the container image using `docker build`, for example:
docker build -t="drakkan/sftpgo" .
```

This will build master of github.com/drakkan/sftpgo.

To build the latest tag you can add `--build-arg TAG=LATEST` and to build a specific tag/commit you can use for example `TAG=v1.0.0`, like this:

```bash
docker build -t="drakkan/sftpgo" --build-arg TAG=v1.0.0 .
```

To specify the features to build you can add `--build-arg FEATURES=<build features comma separated>`. For example you can disable SQLite and S3 support like this:

```bash
docker build -t="drakkan/sftpgo" --build-arg FEATURES=nosqlite,nos3 .
```

Please take a look at the [build from source](./../../../docs/build-from-source.md) documentation for the complete list of the features that can be disabled.

Now create the required folders on the host system, for example:

```bash
sudo mkdir -p /srv/sftpgo/data /srv/sftpgo/config /srv/sftpgo/backups
```

and give write access to them to the UID/GID defined inside the `Dockerfile`. You can choose to create a new user, on the host system, with a matching UID/GID pair, or simply do something like this:

```bash
sudo chown -R <UID>:<GID> /srv/sftpgo/data /srv/sftpgo/config /srv/sftpgo/backups
```

Download the default configuration file and edit it as you need:

```bash
sudo curl https://raw.githubusercontent.com/drakkan/sftpgo/master/sftpgo.json -o /srv/sftpgo/config/sftpgo.json
```

Initialize the configured provider. For PostgreSQL and MySQL providers you need to create the configured database and the `initprovider` command will create the required tables:

```bash
docker run --name sftpgo --mount type=bind,source=/srv/sftpgo/config,target=/app/config drakkan/sftpgo initprovider -c /app/config
```

and finally you can run the image using something like this:

```bash
docker rm sftpgo && docker run --name sftpgo -p 8080:8080 -p 2022:2022 --mount type=bind,source=/srv/sftpgo/data,target=/app/data --mount type=bind,source=/srv/sftpgo/config,target=/app/config --mount type=bind,source=/srv/sftpgo/backups,target=/app/backups drakkan/sftpgo
```

If you want to enable FTP/S you also need to publish the FTP port and the FTP passive port range, defined in your `Dockerfile`, by adding, for example, the following options to the `docker run` command: `-p 2121:2121 -p 50000-50100:50000-50100`. The same goes for WebDAV: you need to publish the configured port.
@@ -1,61 +1,22 @@
# Account's configuration properties

For each account, the following properties can be configured:

- `username`
- `password` used for password authentication. For users created using the SFTPGo REST API, if the password has no known hashing algo prefix, it will be stored using argon2id. SFTPGo supports checking passwords stored with bcrypt, pbkdf2, md5crypt and sha512crypt too. For pbkdf2 the supported format is `$<algo>$<iterations>$<salt>$<hashed pwd base64 encoded>`, where algo is `pbkdf2-sha1`, `pbkdf2-sha256`, `pbkdf2-sha512` or `pbkdf2-b64salt-sha256`; in the `b64salt` variant the salt is base64 encoded. For example the `pbkdf2-sha256` of the word `password` using 150000 iterations and `E86a9YMX3zC7` as salt must be stored as `$pbkdf2-sha256$150000$E86a9YMX3zC7$R5J62hsSq+pYw00hLLPKBbcGXmq7fj5+/M0IFoYtZbo=`. For bcrypt the format must be the one supported by golang's [crypto/bcrypt](https://godoc.org/golang.org/x/crypto/bcrypt) package, for example the password `secret` with cost `14` must be stored as `$2a$14$ajq8Q7fbtFRQvXpdCq7Jcuy.Rx1h/L4J60Otx.gyNLbAYctGMJ9tK`. For md5crypt and sha512crypt we support the format used in `/etc/shadow` with the `$1$` and `$6$` prefix, this is useful if you are migrating from Unix system user accounts. We support Apache md5crypt (`$apr1$` prefix) too. Using the REST API you can send a password hashed as bcrypt, pbkdf2, md5crypt or sha512crypt and it will be stored as is.
- `public_keys` array of public keys. At least one public key or the password is mandatory.
- `status` 1 means "active", 0 "inactive". An inactive account cannot login.
- `expiration_date` expiration date as unix timestamp in milliseconds. An expired account cannot login. 0 means no expiration.
- `home_dir` the user cannot upload or download files outside this directory. Must be an absolute path.
- `virtual_folders` list of mappings between virtual SFTP/SCP paths and local filesystem paths outside the user home directory. The specified paths must be absolute and the virtual path cannot be "/", it must be a sub directory. The parent directory for the specified virtual path must exist. SFTPGo will try to automatically create any missing parent directory for the configured virtual folders at user login.
- `uid`, `gid`. If SFTPGo runs as root system user then the created files and directories will be assigned to this system uid/gid. Ignored on Windows or if SFTPGo runs as non root user: in this case files and directories for all SFTP users will be owned by the system user that runs SFTPGo.
- `max_sessions` maximum concurrent sessions. 0 means unlimited.
- `quota_size` maximum size allowed as bytes. 0 means unlimited.
- `quota_files` maximum number of files allowed. 0 means unlimited.
- `permissions` the following per directory permissions are supported:
  - `*` all permissions are granted
  - `list` listing items is allowed
  - `download` downloading files is allowed
  - `upload` uploading files is allowed
  - `overwrite` overwriting an existing file, while uploading, is allowed. The `upload` permission is required to allow file overwrite
  - `delete` deleting files or directories is allowed
  - `rename` renaming files or directories is allowed
  - `create_dirs` creating directories is allowed
  - `create_symlinks` creating symbolic links is allowed
  - `chmod` changing file or directory permissions is allowed. On Windows, only the 0200 bit (owner writable) of mode is used; it controls whether the file's read-only attribute is set or cleared. The other bits are currently unused. Use mode 0400 for a read-only file and 0600 for a readable+writable file.
  - `chown` changing file or directory owner and group is allowed. Changing owner and group is not supported on Windows.
  - `chtimes` changing file or directory access and modification time is allowed
- `upload_bandwidth` maximum upload bandwidth as KB/s, 0 means unlimited.
- `download_bandwidth` maximum download bandwidth as KB/s, 0 means unlimited.
- `allowed_ip`, list of IP/Mask allowed to login. Any IP address not contained in this list cannot login. IP/Mask must be in CIDR notation as defined in RFC 4632 and RFC 4291, for example "192.0.2.0/24" or "2001:db8::/32"
- `denied_ip`, list of IP/Mask not allowed to login. If an IP address is both allowed and denied then login will be denied
- `denied_login_methods`, list of login methods not allowed. The following login methods are supported:
  - `publickey`
  - `password`
  - `keyboard-interactive`
- `file_extensions`, list of structs. These restrictions do not apply to files listing for performance reasons, so a denied file cannot be downloaded/overwritten/renamed but it will still be listed in the list of files. Please note that these restrictions can be easily bypassed. Each struct contains the following fields:
  - `allowed_extensions`, case insensitive list of allowed file extensions. Shell like expansion is not supported so you have to specify `.jpg` and not `*.jpg`. Any file that does not end with one of these suffixes will be denied
  - `denied_extensions`, case insensitive list of denied file extensions. Denied file extensions are evaluated before the allowed ones
  - `path`, SFTP/SCP path; if no other specific filter is defined, the filter applies to subdirectories too. For example if filters are defined for the paths `/` and `/sub` then the filters for `/` are applied for any file outside the `/sub` directory
- `fs_provider`, filesystem to serve via SFTP. Local filesystem and S3 Compatible Object Storage are supported
- `s3_bucket`, required for S3 filesystem
- `s3_region`, required for S3 filesystem. Must match the region for your bucket. You can find here the list of available [AWS regions](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions). For example if your bucket is at `Frankfurt` you have to set the region to `eu-central-1`
- `s3_access_key`
- `s3_access_secret`, if provided it is stored encrypted (AES-256-GCM)
- `s3_endpoint`, specifies a S3 endpoint (server) different from AWS. It is not required if you are connecting to AWS
- `s3_storage_class`, leave blank to use the default or specify a valid AWS [storage class](https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html)
- `s3_key_prefix`, restricts access to the virtual folder identified by this prefix and its contents
- `gcs_bucket`, required for GCS filesystem
- `gcs_credentials`, Google Cloud Storage JSON credentials base64 encoded
- `gcs_automatic_credentials`, integer. Set to 1 to use Application Default Credentials strategy or set to 0 to use explicit credentials via `gcs_credentials`
- `gcs_storage_class`
- `gcs_key_prefix`, restricts access to the virtual folder identified by this prefix and its contents

These properties are stored inside the configured data provider.

Please take a look at the [OpenAPI schema](../openapi/openapi.yaml) for the exact definitions of user, folder and admin fields.
If you need an example you can export a dump using the Web Admin or by invoking the `dumpdata` endpoint directly; you need to obtain an access token first, for example:

```shell
$ curl "http://admin:password@127.0.0.1:8080/api/v2/token"
{"access_token":"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOlsiQVBJIl0sImV4cCI6MTYxMzMzNTI2MSwianRpIjoiYzBrb2gxZmNkcnBjaHNzMGZwZmciLCJuYmYiOjE2MTMzMzQ2MzEsInBlcm1pc3Npb25zIjpbIioiXSwic3ViIjoiYUJ0SHUwMHNBUmxzZ29yeEtLQ1pZZWVqSTRKVTlXbThHSGNiVWtWVmc1TT0iLCJ1c2VybmFtZSI6ImFkbWluIn0.WiyqvUF-92zCr--y4Q_sxn-tPnISFzGZd_exsG-K7ME","expires_at":"2021-02-14T20:41:01Z"}

curl -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOlsiQVBJIl0sImV4cCI6MTYxMzMzNTI2MSwianRpIjoiYzBrb2gxZmNkcnBjaHNzMGZwZmciLCJuYmYiOjE2MTMzMzQ2MzEsInBlcm1pc3Npb25zIjpbIioiXSwic3ViIjoiYUJ0SHUwMHNBUmxzZ29yeEtLQ1pZZWVqSTRKVTlXbThHSGNiVWtWVmc1TT0iLCJ1c2VybmFtZSI6ImFkbWluIn0.WiyqvUF-92zCr--y4Q_sxn-tPnISFzGZd_exsG-K7ME" "http://127.0.0.1:8080/api/v2/dumpdata?output-data=1"
```

The dump is a JSON with all SFTPGo data including users, folders, admins.

If you want to use your existing accounts, you have these options:

- If your accounts are already stored inside a supported database, you can create a database view. Since a view is read only, you have to disable user management and quota tracking so SFTPGo will never try to write to the view
- you can import your users inside SFTPGo. Take a look at the [convert users](../examples/convertusers) script, it can convert and import users from Linux system users and Pure-FTPd/ProFTPD virtual users
- you can use an external authentication program
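As a sketch of the note in the `password` property above, a bcrypt hash can be submitted as-is when creating a user; the `/api/v2/users` path and the exact JSON fields shown are assumptions here, check the OpenAPI schema:

```bash
# Hypothetical sketch: create a user whose password is the documented bcrypt
# hash of the word "secret" with cost 14; SFTPGo will store it as is.
curl -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{"username":"user1","status":1,"home_dir":"/srv/sftpgo/data/user1","password":"$2a$14$ajq8Q7fbtFRQvXpdCq7Jcuy.Rx1h/L4J60Otx.gyNLbAYctGMJ9tK","permissions":{"/":["*"]}}' \
  "http://127.0.0.1:8080/api/v2/users"
```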
20
docs/azure-blob-storage.md
Normal file
@@ -0,0 +1,20 @@
# Azure Blob Storage backend

To connect SFTPGo to Azure Blob Storage, you need to specify the access credentials. Azure Blob Storage has different options for credentials; we support:

1. Providing an account name and account key.
2. Providing a shared access signature (SAS).

If you authenticate using account and key you also need to specify a container. The endpoint can generally be left blank; the default is `blob.core.windows.net`.

If you provide a SAS URL the container is optional and, if given, it must match the one inside the shared access signature.

If you want to connect to an emulator such as [Azurite](https://github.com/Azure/Azurite) you need to provide the account name/key pair and an endpoint prefixed with the protocol, for example `http://127.0.0.1:10000`.

Specifying a different `key_prefix`, you can assign different "folders" of the same container to different users. This is similar to a chroot directory for a local filesystem. Each SFTPGo user can only access the assigned folder and its contents. The folder identified by `key_prefix` does not need to be pre-created.

For multipart uploads you can customize the parts size and the upload concurrency. Please note that if the upload bandwidth between the client and SFTPGo is greater than the upload bandwidth between SFTPGo and the Azure Blob service then the client should wait for the last parts to be uploaded to Azure after finishing uploading the file to SFTPGo, and it may time out. Keep this in mind if you customize these parameters.

The configured container must exist.

This backend is very similar to the [S3](./s3.md) backend, and it has the same limitations. As with S3, `chtime` will fail with the default configuration; you can install the [metadata plugin](https://github.com/sftpgo/sftpgo-plugin-metadata) to make it work and thus be able to preserve/change file modification times.
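To illustrate the Azurite case, a sketch of the relevant filesystem settings for a user; the field names follow the general shape of SFTPGo's filesystem config but are assumptions here (check the OpenAPI schema), and `provider` 3 is the Azure Blob backend per the custom actions documentation:

```bash
# Hypothetical sketch: point a user's Azure Blob filesystem at a local
# Azurite emulator; account credentials are Azurite's dev-storage defaults.
cat <<'EOF' > azblob-user.json
{
  "fsconfig": {
    "provider": 3,
    "azblobconfig": {
      "container": "sftpgo",
      "account_name": "devstoreaccount1",
      "account_key": "<Azurite well-known key>",
      "endpoint": "http://127.0.0.1:10000"
    }
  }
}
EOF
```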
@@ -1,34 +1,40 @@
# Build SFTPGo from source

Download the sources and use `go build`.

The following build tags are available:

- `nogcs`, disable Google Cloud Storage backend, default enabled
- `nos3`, disable S3 Compatible Object Storage backends, default enabled
- `noazblob`, disable Azure Blob Storage backend, default enabled
- `nobolt`, disable Bolt data provider, default enabled
- `nomysql`, disable MySQL data provider, default enabled
- `nopgsql`, disable PostgreSQL data provider, default enabled
- `nosqlite`, disable SQLite data provider, default enabled
- `noportable`, disable portable mode, default enabled
- `nometrics`, disable Prometheus metrics, default enabled

If no build tag is specified the build will include the default features.

The optional [SQLite driver](https://github.com/mattn/go-sqlite3 "go-sqlite3") is a `CGO` package and so it requires a `C` compiler at build time.
On Linux and macOS, a compiler is easy to install or already installed. On Windows, you need to download [MinGW-w64](https://sourceforge.net/projects/mingw-w64/files/) and build SFTPGo from its command prompt.

The compiler is a build time only dependency. It is not required at runtime.

If you don't need SQLite, you can also get/build SFTPGo setting the environment variable `CGO_ENABLED` to 0. This way, SQLite support will be disabled and the PostgreSQL, MySQL, bbolt and memory data providers will keep working. In this way, you don't need a `C` compiler for building.

Version info, such as git commit and build date, can be embedded setting the following string variables at build time:

- `github.com/drakkan/sftpgo/v2/version.commit`
- `github.com/drakkan/sftpgo/v2/version.date`

For example, you can build using the following command:

```bash
go build -tags nogcs,nos3,nosqlite -ldflags "-s -w -X github.com/drakkan/sftpgo/v2/version.commit=`git describe --always --dirty` -X github.com/drakkan/sftpgo/v2/version.date=`date -u +%FT%TZ`" -o sftpgo
```

You should get a version that includes git commit, build date and available features like this one:

```bash
$ ./sftpgo -v
SFTPGo 0.9.6-dev-b30614e-dirty-2020-06-19T11:04:56Z +metrics -gcs -s3 +bolt +mysql +pgsql -sqlite +portable
```
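A minimal sketch of the CGO-less build mentioned above, using only what the text states:

```bash
# Build without a C compiler: SQLite support is disabled, the other
# data providers keep working.
CGO_ENABLED=0 go build -o sftpgo
```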
47
docs/check-password-hook.md
Normal file
@@ -0,0 +1,47 @@
# Check password hook

This hook allows you to externally check the provided password. Its main use case is to easily support things like password+OTP for protocols without keyboard interactive support, such as FTP and WebDAV. You can ask your users to login using a string consisting of a fixed password and a One Time Token, verify the token inside the hook and ask SFTPGo to verify the fixed part.

The same thing can be achieved using [External authentication](./external-auth.md), but using this hook is simpler in some use cases.

The `check password hook` can be defined as the absolute path of your program or an HTTP URL.

The expected response is a JSON serialized struct containing the following keys:

- `status` integer. 0 means KO, 1 means OK, 2 means partial success
- `to_verify` string. For `status` = 2 SFTPGo will check this password against the one stored inside SFTPGo data provider

If the hook defines an external program it can read the following environment variables:

- `SFTPGO_AUTHD_USERNAME`
- `SFTPGO_AUTHD_PASSWORD`
- `SFTPGO_AUTHD_IP`
- `SFTPGO_AUTHD_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`, `HTTP`

Previous global environment variables aren't cleared when the script is called. The content of these variables is _not_ quoted. They may contain special characters. They are under the control of a possibly malicious remote user.

The program must write, on its standard output, the expected JSON serialized response described above.

If the hook is an HTTP URL then it will be invoked as HTTP POST. The request body will contain a JSON serialized struct with the following fields:

- `username`
- `password`
- `ip`
- `protocol`, possible values are `SSH`, `FTP`, `DAV`

If authentication succeeds the HTTP response code must be 200 and the response body must contain the expected JSON serialized response described above.

The program hook must finish within 30 seconds; the HTTP hook timeout will use the global configuration for HTTP clients.

You can also restrict the hook scope using the `check_password_scope` configuration key:

- `0` means all supported protocols.
- `1` means SSH only
- `2` means FTP only
- `4` means WebDAV only

You can combine the scopes. For example, 6 means FTP and WebDAV.

You can disable the hook on a per-user basis.

An example check password program allowing 2FA using password + one time token can be found inside the source tree [checkpwd](../examples/OTP/authy/checkpwd) directory.
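To make the response contract concrete, a sketch of a program hook implementing the partial-success flow; the OTP verification itself is stubbed out, and a real hook must also JSON-escape the password fragment it echoes back:

```bash
#!/usr/bin/env bash
# Hypothetical check password hook: treat the last 6 characters of the
# provided password as a one time token and let SFTPGo verify the fixed part.
OTP="${SFTPGO_AUTHD_PASSWORD: -6}"
FIXED="${SFTPGO_AUTHD_PASSWORD%??????}"
if my_otp_check "${SFTPGO_AUTHD_USERNAME}" "${OTP}"; then
  # my_otp_check is a placeholder for your real token verification.
  printf '{"status":2,"to_verify":"%s"}' "${FIXED}"
else
  printf '{"status":0}'
fi
```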
@@ -1,78 +1,116 @@
# Custom Actions

SFTPGo can notify filesystem and provider events using custom actions. A custom action can be an external program or an HTTP URL.

## Filesystem events

The `actions` struct inside the `common` configuration section allows to configure the actions for file operations and SSH commands.
The `hook` can be defined as the absolute path of your program or an HTTP URL.

The following `actions` are supported:

- `download`
- `pre-download`
- `upload`
- `pre-upload`
- `delete`
- `pre-delete`
- `rename`
- `mkdir`
- `rmdir`
- `ssh_cmd`

The `upload` condition includes both uploads to new files and overwrite of existing ones. If an upload is aborted for quota limits SFTPGo tries to remove the partial file, so if the notification reports a zero size file and a quota exceeded error the file has been deleted. The `ssh_cmd` condition will be triggered after a command is successfully executed via SSH. `scp` will trigger the `download` and `upload` conditions and not `ssh_cmd`.
For cloud backends directories are virtual: they are created implicitly when you upload a file and are implicitly removed when the last file within a directory is removed. The `mkdir` and `rmdir` notifications are sent only when a directory is explicitly created or removed.

The notification will indicate if an error is detected and so, for example, a partial file is uploaded.

The `pre-delete` action, if defined, will be called just before files deletion. If the external command completes with a zero exit status or the HTTP notification response code is `200` then SFTPGo will assume that the file was already deleted/moved, so it will not try to remove the file and it will not execute the hook defined for the `delete` action.

The `pre-download` and `pre-upload` actions will be called before downloads and uploads. If the external command completes with a zero exit status or the HTTP notification response code is `200` then SFTPGo allows the operation, otherwise the client will get a permission denied error.

If the `hook` defines a path to an external program, then this program can read the following environment variables:

- `SFTPGO_ACTION`, supported action
- `SFTPGO_ACTION_USERNAME`
- `SFTPGO_ACTION_PATH`, the full filesystem path, can be empty for some ssh commands
- `SFTPGO_ACTION_TARGET`, full filesystem path, non-empty for `rename` `SFTPGO_ACTION` and for some SSH commands
- `SFTPGO_ACTION_VIRTUAL_PATH`, virtual path, seen by SFTPGo users
- `SFTPGO_ACTION_VIRTUAL_TARGET`, virtual target path, seen by SFTPGo users
- `SFTPGO_ACTION_SSH_CMD`, non-empty for `ssh_cmd` `SFTPGO_ACTION`
- `SFTPGO_ACTION_FILE_SIZE`, non-zero for `pre-upload`, `upload`, `download` and `delete` actions if the file size is greater than `0`
- `SFTPGO_ACTION_FS_PROVIDER`, `0` for local filesystem, `1` for S3 backend, `2` for Google Cloud Storage (GCS) backend, `3` for Azure Blob Storage backend, `4` for local encrypted backend, `5` for SFTP backend
- `SFTPGO_ACTION_BUCKET`, non-empty for S3, GCS and Azure backends
- `SFTPGO_ACTION_ENDPOINT`, non-empty for S3, SFTP and Azure backend if configured
- `SFTPGO_ACTION_STATUS`, integer. Status for `upload`, `download` and `ssh_cmd` actions. 1 means no error, 2 means a generic error occurred, 3 means quota exceeded error
- `SFTPGO_ACTION_PROTOCOL`, string. Possible values are `SSH`, `SFTP`, `SCP`, `FTP`, `DAV`, `HTTP`, `HTTPShare`, `DataRetention`
- `SFTPGO_ACTION_IP`, the action was executed from this IP address
- `SFTPGO_ACTION_SESSION_ID`, string. Unique protocol session identifier. For stateless protocols such as HTTP the session id will change for each request
- `SFTPGO_ACTION_OPEN_FLAGS`, integer. File open flags, can be non-zero for `pre-upload` action. If `SFTPGO_ACTION_FILE_SIZE` is greater than zero and `SFTPGO_ACTION_OPEN_FLAGS&512 == 0` the target file will not be truncated
- `SFTPGO_ACTION_TIMESTAMP`, int64. Event timestamp as nanoseconds since epoch

Previous global environment variables aren't cleared when the script is called.
The program must finish within 30 seconds.

If the `hook` defines an HTTP URL then this URL will be invoked as HTTP POST. The request body will contain a JSON serialized struct with the following fields:

- `action`, string
- `username`, string
- `path`, string
- `target_path`, string, included for `rename` action and `sftpgo-copy` SSH command
- `virtual_path`, string, virtual path, seen by SFTPGo users
- `virtual_target_path`, string, virtual target path, seen by SFTPGo users
- `ssh_cmd`, string, included for `ssh_cmd` action
- `file_size`, int64, included for `pre-upload`, `upload`, `download`, `delete` actions if the file size is greater than `0`
- `fs_provider`, integer, `0` for local filesystem, `1` for S3 backend, `2` for Google Cloud Storage (GCS) backend, `3` for Azure Blob Storage backend, `4` for local encrypted backend, `5` for SFTP backend
- `bucket`, string, included for S3, GCS and Azure backends
- `endpoint`, string, included for S3, SFTP and Azure backend if configured
- `status`, integer. Status for `upload`, `download` and `ssh_cmd` actions. 1 means no error, 2 means a generic error occurred, 3 means quota exceeded error
- `protocol`, string. Possible values are `SSH`, `SFTP`, `SCP`, `FTP`, `DAV`, `HTTP`, `HTTPShare`, `DataRetention`
- `ip`, string. The action was executed from this IP address
- `session_id`, string. Unique protocol session identifier. For stateless protocols such as HTTP the session id will change for each request
- `open_flags`, integer. File open flags, can be non-zero for `pre-upload` action. If `file_size` is greater than zero and `open_flags&512 == 0` the target file will not be truncated
- `timestamp`, int64. Event timestamp as nanoseconds since epoch

The HTTP hook will use the global configuration for HTTP clients and will respect the retry configurations.

The `pre-*` actions are always executed synchronously while the other ones are asynchronous. You can specify the actions to run synchronously via the `execute_sync` configuration key. Executing an action synchronously means that SFTPGo will not return a result code to the client (which is waiting for it) until your hook has completed its execution. If your hook takes a long time to complete this could cause a timeout on the client side, which wouldn't receive the server response in a timely manner and would eventually drop the connection.

## Provider events

The `actions` struct inside the `data_provider` configuration section allows you to configure actions on data provider objects add, update, delete.

The supported object types are:

- `user`
- `admin`
- `api_key`

Actions will not be fired for internal updates, such as the last login or the user quota fields, or after external authentication.

If the `hook` defines a path to an external program, then this program can read the following environment variables:

- `SFTPGO_PROVIDER_ACTION`, supported values are `add`, `update`, `delete`
- `SFTPGO_PROVIDER_OBJECT_TYPE`, affected object type
- `SFTPGO_PROVIDER_OBJECT_NAME`, unique identifier for the affected object, for example username or key id
- `SFTPGO_PROVIDER_USERNAME`, the username that executed the action. There are two special usernames: `__self__` identifies a user/admin that updates itself and `__system__` identifies an action that does not have an explicit executor associated with it, for example users/admins can be added/updated by loading them from initial data
- `SFTPGO_PROVIDER_IP`, the action was executed from this IP address
- `SFTPGO_PROVIDER_TIMESTAMP`, event timestamp as nanoseconds since epoch
- `SFTPGO_PROVIDER_OBJECT`, object serialized as JSON with sensitive fields removed

Previous global environment variables aren't cleared when the script is called.
The program must finish within 15 seconds.

If the `hook` defines an HTTP URL then this URL will be invoked as HTTP POST. The action, username, ip, object_type, object_name and timestamp are added to the query string, for example `<hook>?action=update&username=admin&ip=127.0.0.1&object_type=user&object_name=user1&timestamp=1633860803249`, and the full object is sent serialized as JSON inside the POST body with sensitive fields removed.

The HTTP hook will use the global configuration for HTTP clients and will respect the retry configurations.

The structure for SFTPGo objects can be found within the [OpenAPI schema](../openapi/openapi.yaml).

## Pub/Sub services

You can forward SFTPGo events to several publish/subscribe systems using the [sftpgo-plugin-pubsub](https://github.com/sftpgo/sftpgo-plugin-pubsub). The notifier SFTPGo plugins are not suitable for interactive actions such as `pre-*` events. Their scope is to simply forward events to external services. A custom hook is a better choice if you need to react to `pre-*` events.

## Database services

You can store SFTPGo events in database systems using the [sftpgo-plugin-eventstore](https://github.com/sftpgo/sftpgo-plugin-eventstore) and you can search the stored events using the [sftpgo-plugin-eventsearch](https://github.com/sftpgo/sftpgo-plugin-eventsearch).
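To make the environment-variable interface above concrete, a sketch of a minimal filesystem-event program hook; the log destination is an arbitrary choice for this example:

```bash
#!/usr/bin/env bash
# Hypothetical filesystem-event hook: append a log line for every
# successfully completed upload, using the documented environment variables.
if [ "${SFTPGO_ACTION}" = "upload" ] && [ "${SFTPGO_ACTION_STATUS}" = "1" ]; then
  echo "$(date -u +%FT%TZ) ${SFTPGO_ACTION_USERNAME} uploaded ${SFTPGO_ACTION_VIRTUAL_PATH} (${SFTPGO_ACTION_FILE_SIZE} bytes)" \
    >> /var/log/sftpgo-uploads.log
fi
exit 0
```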
20
docs/dare.md
Normal file
@@ -0,0 +1,20 @@
# Data At Rest Encryption (DARE)

SFTPGo supports data at-rest encryption via its `cryptfs` virtual file system. In this mode SFTPGo transparently encrypts and decrypts data (to/from the local disk) on-the-fly during uploads and/or downloads, making sure that the files at-rest on the server-side are always encrypted.

Data At Rest Encryption is supported for the local filesystem; for cloud storage backends you can use their server side encryption feature.

Because of the way it works, when you set up an encrypted filesystem for a user you need to make sure it points to an empty path/directory (that has no files in it). Otherwise, SFTPGo would try to decrypt existing files that are not encrypted in the first place and fail.

SFTPGo's `cryptfs` is a tiny wrapper around [sio](https://github.com/minio/sio), therefore data is encrypted and authenticated using `AES-256-GCM` or `ChaCha20-Poly1305`. AES-GCM will be used if the CPU provides hardware support for it.

The only required configuration parameter is a `passphrase`; each file will be encrypted using a unique, randomly generated secret key derived from the given passphrase using the HMAC-based Extract-and-Expand Key Derivation Function (HKDF) as defined in [RFC 5869](http://tools.ietf.org/html/rfc5869). It is important to note that the per-object encryption key is never stored anywhere: it is derived from your `passphrase` and a randomly generated initialization vector just before encryption/decryption. The initialization vector is stored with the file.

The passphrase is stored encrypted itself according to your [KMS configuration](./kms.md) and is required to decrypt any file encrypted using an encryption key derived from it.

The encrypted filesystem has some limitations compared to the local, unencrypted one:

- Resuming uploads is not supported.
- Opening a file for both reading and writing at the same time is not supported, so clients that require advanced filesystem-like features such as `sshfs` are not supported either.
- Truncate is not supported.
- System commands such as `git` or `rsync` are not supported: they will store data unencrypted.
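For orientation, a sketch of the shape such a user filesystem configuration can take; the field names are assumptions here (check the OpenAPI schema), while `provider` 4 is the local encrypted backend per the custom actions documentation:

```bash
# Hypothetical sketch: fragment of a user definition enabling cryptfs.
cat <<'EOF' > cryptfs-user.json
{
  "fsconfig": {
    "provider": 4,
    "cryptconfig": {
      "passphrase": "correct horse battery staple"
    }
  }
}
EOF
```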
32
docs/data-retention-hook.md
Normal file
@@ -0,0 +1,32 @@
# Data retention hook

This hook runs after a data retention check completes if you specify `Hook` among the notification methods when you start the check.

The `data_retention_hook` can be defined as the absolute path of your program or an HTTP URL.

If the hook defines an external program it can read the following environment variable:

- `SFTPGO_DATA_RETENTION_RESULT`, it contains the data retention check result JSON serialized.

Previous global environment variables aren't cleared when the script is called.
The program must finish within 20 seconds.

If the hook defines an HTTP URL then this URL will be invoked as HTTP POST and the POST body contains the data retention check result JSON serialized.

The HTTP hook will use the global configuration for HTTP clients and will respect the retry configurations.

Here is the schema for the data retention check result:

- `username`, string
- `status`, int. 1 means success, 0 error
- `start_time`, int64. Start time as UNIX timestamp in milliseconds
- `total_deleted_files`, int. Total number of files deleted
- `total_deleted_size`, int64. Total size deleted in bytes
- `elapsed`, int64. Elapsed time in milliseconds
- `details`, list of structs with details for each checked path; each struct contains the following fields:
  - `path`, string
  - `retention`, int. Retention time in hours
  - `deleted_files`, int. Number of files deleted
  - `deleted_size`, int64. Size deleted in bytes
  - `info`, string. Informative, non fatal, message if any. For example it can indicate that the check was skipped because the user doesn't have the required permissions on this path
  - `error`, string. Error message if any
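As an illustration of the program variant, a minimal sketch; the destination directory is an arbitrary choice:

```bash
#!/usr/bin/env bash
# Hypothetical data retention hook: archive each JSON check result to disk,
# reading it from the documented environment variable.
mkdir -p /var/log/sftpgo-retention
echo "${SFTPGO_DATA_RETENTION_RESULT}" > "/var/log/sftpgo-retention/check-$(date +%s).json"
```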
67
docs/defender.md
Normal file
@@ -0,0 +1,67 @@
# Defender

The built-in `defender` allows you to configure an auto-blocking policy for SFTPGo and thus helps to prevent DoS (Denial of Service) and brute force password guessing.

If enabled it will protect SFTP, FTP and WebDAV services and it will automatically block hosts (IP addresses) that continually fail to log in or attempt to connect.

You can configure a score for the following events:

- `score_valid`, defines the score for valid login attempts, e.g. user accounts that exist. Default `1`.
- `score_invalid`, defines the score for invalid login attempts, e.g. non-existent user accounts or clients disconnected for inactivity without authentication attempts. Default `2`.
- `score_limit_exceeded`, defines the score for hosts that exceeded the configured rate limits or the configured max connections per host. Default `3`.

And then you can configure:

- `observation_time`, defines the time window, in minutes, for tracking client errors.
- `threshold`, defines the threshold value before banning a host.
- `ban_time`, defines the time to ban a client, in minutes.

So a host is banned, for `ban_time` minutes, if the sum of its scores has exceeded the defined threshold during the last `observation_time` minutes.

By defining the scores, each type of event can be weighted. Let's see an example: if `score_invalid` is 3 and `threshold` is 8, a host will be banned after 3 login attempts with a non-existent user within the configured `observation_time`.

A banned IP has no score; it makes no sense to accumulate host events in memory for an already banned IP address.

If an already banned client tries to log in again, its ban time will be incremented according to the `ban_time_increment` configuration.

The `ban_time_increment` is calculated as a percentage of `ban_time`, so if `ban_time` is 30 minutes and `ban_time_increment` is 50 the host will be banned for an additional 15 minutes. You can also specify values greater than 100 for `ban_time_increment` if you want to increase the penalty for already banned hosts.

SFTPGo can store host scores and banned hosts in memory or within the configured data provider according to the `driver` set in the `defender` configuration section. The available drivers are `memory` and `provider`.
The `provider` driver is useful if you want to share the defender data across multiple SFTPGo instances and it requires a shared or distributed data provider: `MySQL`, `PostgreSQL` and `CockroachDB` are supported.
If you set the `provider` driver, the defender implementation may do many database queries (at least one query every time a new client connects, to check if it is banned); if you have a single SFTPGo instance the `memory` driver is recommended.

For the `memory` driver, you can limit the memory usage using the `entries_soft_limit` and `entries_hard_limit` configuration keys.

The `provider` driver will periodically clean up expired hosts and events.

Using the REST API you can:

- list hosts within the defender's lists
- remove hosts from the defender's lists

The `defender` can also load a permanent block list and/or a safe list of ip addresses/networks from a file:

- `safelist_file`, defines the path to a file containing a list of ip addresses and/or networks to never ban.
- `blocklist_file`, defines the path to a file containing a list of ip addresses and/or networks to always ban.

These lists must be stored as JSON conforming to the following schema:

- `addresses`, list of strings. Each string must be a valid IPv4/IPv6 address.
- `networks`, list of strings. Each string must be a valid IPv4/IPv6 CIDR address.

Here is a small example:

```json
{
  "addresses":[
    "192.0.2.1",
    "2001:db8::68"
  ],
  "networks":[
    "192.0.3.0/24",
    "2001:db8:1234::/48"
  ]
}
```

These lists will always be loaded in memory (even if you use the `provider` driver) for faster lookups. The REST API queries "live" data and not these lists.
@@ -1,27 +1,45 @@
|
||||
# Dynamic user modification
|
||||
# Dynamic user creation or modification
|
||||
|
||||
Dynamic user modification is supported via an external program that can be executed just before the user login.
|
||||
To enable dynamic user modification, you must set the absolute path of your program using the `pre_login_program` key in your configuration file.
|
||||
Dynamic user creation or modification is supported via an external program or an HTTP URL that can be invoked just before the user login.
|
||||
To enable dynamic user modification, you must set the absolute path of your program or an HTTP URL using the `pre_login_hook` key in your configuration file.
|
||||
|
||||
The external program can read the following environment variables to get info about the user trying to login:
|
||||
|
||||
- `SFTPGO_LOGIND_USER`, it contains the user trying to login serialized as JSON
|
||||
- `SFTPGO_LOGIND_METHOD`, possible values are: `password`, `publickey` and `keyboard-interactive`
|
||||
- `SFTPGO_LOGIND_USER`, it contains the user trying to login serialized as JSON. A JSON serialized user id equal to zero means the user does not exist inside SFTPGo
|
||||
- `SFTPGO_LOGIND_METHOD`, possible values are: `password`, `publickey`, `keyboard-interactive`, `TLSCertificate`
|
||||
- `SFTPGO_LOGIND_IP`, ip address of the user trying to login
|
||||
- `SFTPGO_LOGIND_PROTOCOL`, possible values are `SSH`, `FTP`, `DAV`, `HTTP`

The program must write, on its standard output:

- an empty string (or no response at all) if the user should not be created/updated
- or the SFTPGo user, serialized as JSON, if you want to create or update the given user

If the hook is an HTTP URL then it will be invoked as an HTTP POST. The login method, the protocol used and the IP address of the user trying to log in are added to the query string, for example `<http_url>?login_method=password&ip=1.2.3.4&protocol=SSH`.
The request body will contain the user trying to log in serialized as JSON. If no modification is needed the HTTP response code must be 204, otherwise the response code must be 200 and the response body must be a valid SFTPGo user serialized as JSON.
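
To test an HTTP hook endpoint before wiring it up, you can simulate the request SFTPGo sends; the URL and the user JSON below are placeholder assumptions:

```shell
# simulate the pre-login hook POST; a 204 response means "no changes",
# a 200 response must carry the updated user as JSON
curl -s -o /dev/null -w "%{http_code}\n" -X POST \
  -H "Content-Type: application/json" \
  -d '{"id": 1, "username": "test_user", "status": 1}' \
  "http://localhost:8000/prelogin?login_method=password&ip=1.2.3.4&protocol=SSH"
```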

Actions defined for user updates will not be executed in this case, and an already logged in user with the same username will not be disconnected; you have to handle these things yourself.

The JSON response can include only the fields to update instead of the full user. For example, if you want to disable the user, you can return a response like this:

```json
{"status": 0}
```

Please note that if you want to create a new user, the pre-login hook response must include all the mandatory user fields.

The program hook must finish within 30 seconds; the HTTP hook will use the global configuration for HTTP clients.

If an error happens while executing the hook then login will be denied.

"Dynamic user creation or modification" and "External Authentication" are mutually exclusive. They are quite similar; the difference is that "External Authentication" returns an already authenticated user, while with "Dynamic user creation or modification" you simply create or update a user and the authentication is then checked inside SFTPGo.
In other words, with "External Authentication" the external program receives the credentials of the user trying to log in (for example the cleartext password) and it needs to validate them. With "Dynamic user creation or modification" the pre-login hook receives the user stored inside the data provider (including the hashed password, if any) and it can modify it; after the modification, SFTPGo will check the credentials of the user trying to log in.

You can disable the hook on a per-user basis.

Let's see a very basic example. Our sample program will grant access to the existing user `test_user` only in the time range 10:00-18:00. Other users will not be modified since the program will terminate with no output.

```shell
#!/bin/bash

CURRENT_TIME=`date +%H:%M`
# ... (the body of the sample is elided in this diff; a reconstruction follows)
```
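
Since the diff elides the body of the sample, here is a hedged reconstruction of the described logic; the substring match against `SFTPGO_LOGIND_USER` and the exact conditionals are assumptions, not the author's verbatim script:

```shell
#!/bin/bash

CURRENT_TIME=`date +%H:%M`

# naive check: search for the username inside the JSON serialized user
if [[ "$SFTPGO_LOGIND_USER" =~ "\"test_user\"" ]]
then
  if [[ "$CURRENT_TIME" > "10:00" && "$CURRENT_TIME" < "18:00" ]]
  then
    # inside the allowed time range: enable the user
    echo '{"status": 1}'
  else
    # outside the allowed time range: disable the user
    echo '{"status": 0}'
  fi
fi
```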

Please note that this is a demo program and it might not work in all cases. For example, the username should be obtained by parsing the JSON serialized user and not by searching for the username inside the JSON as shown here.

The structure for SFTPGo users can be found within the [OpenAPI schema](../openapi/openapi.yaml).