- Upgraded Bouncy Castle from 1.70 to 1.75
- Upgraded SLF4J from 1.7.36 to 2.0.7
- Upgraded Logback from 1.2.11 to 1.3.8
- Upgraded Apache MINA SSHD from 2.8.0 to 2.10.0
- Upgraded Grizzly HTTP Server from 2.4.4 to 3.0.1
- Upgraded Testcontainers from 1.16.2 to 1.18.3
- Refactored references and removed HttpClient dependency
- Upgraded GitHub Actions setup-java from 1 to 3
- Updated GitHub Actions to use Temurin JDK 11
- Added OpenSSL upgrade to RSA Key Tests
Co-authored-by: Jeroen van Erp <jeroen@hierynomus.com>
* Fix #805: Prevent CHANNEL_CLOSE from being sent between Channel.isOpen and a Transport.write call
Otherwise, a disconnect with a "packet referred to nonexistent channel" message can occur.
This particularly happens when the transport.Reader thread passes an EOF from the server to the ChannelInputStream, and the reading library-user thread returns and closes the channel at the same time as the transport.Reader thread receives the subsequent CHANNEL_CLOSE from the server.
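The idea behind the fix can be sketched in isolation (class and method names here are illustrative, not sshj's actual API): the open-check and the write must happen under one lock, so the close path cannot slip in between them.

```java
// Sketch of guarding the check-then-write sequence with a single lock so a
// concurrent close cannot interleave between isOpen() and write().
// Names are illustrative, not sshj's actual classes.
class GuardedChannel {
    private final Object writeLock = new Object();
    private boolean open = true;
    private final StringBuilder wire = new StringBuilder(); // stands in for the transport

    /** Returns true if the packet was written, false if the channel was already closed. */
    public boolean writePacket(String packet) {
        synchronized (writeLock) {          // check and write are atomic together
            if (!open) {
                return false;               // never write after CHANNEL_CLOSE
            }
            wire.append(packet).append('\n');
            return true;
        }
    }

    /** Called by the reader thread when the server sends CHANNEL_CLOSE. */
    public void close() {
        synchronized (writeLock) {          // waits for any in-flight write to finish
            open = false;
        }
    }

    public String sentPackets() {
        return wire.toString();
    }
}
```

With this shape, a writer that observed the channel as open either finishes its write before the close takes effect, or observes the closed state and writes nothing.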
* Add integration test for #805
* Added Transport.isKeyExchangeRequired() to avoid unnecessary KEXINIT
- Updated SSHClient.onConnect() to check isKeyExchangeRequired() before calling doKex()
- Added started timestamp in ThreadNameProvider for improved tracking
* Moved KeepAliveThread State check after authentication to avoid test timing issues
Previously, AuthGssApiWithMic used params.getUsername() to create the
local client credential object. However, at least when using the native
GSS libraries (sun.security.jgss.native=true), the username would need
to be something like "user@EXAMPLE.COM", not "user", or the library is
unable to find credentials. Also, your remote username might not be your
local username.
Instead, and more simply, call the GSSManager#createCredential variant
that just uses default credentials, which should handle both of these
cases.
Tested on Windows using SSPI. I haven't tested this patch on Linux but I
have confirmed that this form of call to createCredential works as I
expect when using the native GSS/Kerberos library there too.
Co-authored-by: Jeroen van Erp <jeroen@hierynomus.com>
* Replaced PKCS5 parsing with PKCS8
- Moved tests for PEM-encoded PKCS1 files to PKCS8
- Removed PKCS5 Key File implementation
* Added PKCS8 test to retry password after initial failure
Co-authored-by: Jeroen van Erp <jeroen@hierynomus.com>
* Added SFTP file transfer resume support on both PUT and GET. Internally SFTPFileTransfer has a few sanity checks to fall back to full replacement even if the resume flag is set.
SCP file transfers have not been changed to support this at this time.
* Added JUnit tests for issue-700
* Throw SCPException when attempting to resume SCP transfers.
* Licensing
* Fixed a small bug where resuming an already-completed file restarted the transfer because the local and remote byte counts were equal.
* Enhanced test cases to validate that the expected bytes transferred in each scenario match the actual bytes transferred.
* Removed author info which was pre-filled from company IDE template
* Added "fall through" comment for switch
* Changed the API for requesting a resume from a boolean flag with some internal decisions to a user-specified long byte offset. This is cleaner, but puts the onus on the caller to know exactly what they're asking for in their circumstance, which is ultimately better for a library like sshj.
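A caller will typically derive that byte offset from the partial local file. A minimal sketch of that decision, assuming the semantics described above (the helper name is hypothetical, not part of sshj's API):

```java
// Sketch: derive the byte offset to pass to a resume-capable download.
// Hypothetical helper, not sshj API.
class ResumeOffset {
    /** Offset at which to resume a download, given local and remote file lengths. */
    public static long forDownload(long localLength, long remoteLength) {
        if (localLength <= 0) {
            return 0L;                  // nothing downloaded yet: full transfer
        }
        if (localLength > remoteLength) {
            return 0L;                  // local file is larger: inconsistent, replace fully
        }
        return localLength;             // resume after the bytes we already have;
                                        // equal lengths resume at EOF, transferring nothing
    }
}
```

Note the equal-length case resolves to "resume at end of file", i.e. zero bytes transferred, rather than a restart.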
* Reverted some now-unnecessary changes to SFTPFileTransfer.Uploader.prepareFile()
* Fix gradle exclude path for test files
Co-authored-by: Jeroen van Erp <jeroen@hierynomus.com>
Due to a logic bug introduced by #769, RemoteFile.ReadAheadRemoteFileInputStream started to send new read-ahead requests for file parts that had already been requested.
Every call to read() asked the server to send parts of the file starting from the point that had already been downloaded. Instead, it should have asked for the parts after the last requested part. This commit adds exactly that logic.
The bug didn't cause content corruption. It only affected performance, both on servers and on clients.
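The gist of the fix can be sketched standalone (names are illustrative, not sshj's ReadAheadRemoteFileInputStream internals): the next read-ahead request must start after the last *requested* byte, not at the position the caller has read up to.

```java
// Sketch of read-ahead bookkeeping. requestOffset tracks how far requests
// have been sent, independently of how far the caller has actually read.
// Illustrative only.
class ReadAheadWindow {
    private long requestOffset;           // next byte offset to request from the server

    /** Issue 'count' further read-ahead requests, each 'requestLength' bytes long. */
    public long[] nextRequests(int count, int requestLength) {
        long[] offsets = new long[count];
        for (int i = 0; i < count; i++) {
            offsets[i] = requestOffset;   // continue after the last requested part
            requestOffset += requestLength;
        }
        return offsets;
    }
}
```

Because the offset counter only ever advances past requested parts, no part of the file is requested twice.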
There is a contract that FileKeyProvider.readKey throws an IOException if something goes wrong. An NPE is not expected by API users, and it is much more difficult to tell whether an NPE stems from a broken key file or from an internal bug.
As reported in https://github.com/TeamAmaze/AmazeFileManager/issues/2976, the key in question uses aes-128-cbc, which sshj did not support. This change adds support for it.
To enable this, the hardcoded byte array sizes for the key and IV produced by BCrypt.pbkdf() were also eliminated.
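The sizing change amounts to deriving the KDF output length from the cipher's own key and IV lengths rather than hardcoding them. A sketch, with an illustrative lookup table (sshj takes these sizes from its cipher factories; the table values are the standard AES parameters):

```java
import java.util.Map;

// Sketch: size the key-derivation output from the cipher's key and IV
// lengths rather than hardcoding 32 + 16 bytes, so aes128-cbc (16 + 16)
// works as well as aes256-cbc (32 + 16). Illustrative only.
class KdfSizing {
    private static final Map<String, int[]> CIPHERS = Map.of(
            "aes128-cbc", new int[]{16, 16},   // key bytes, IV bytes
            "aes192-cbc", new int[]{24, 16},
            "aes256-cbc", new int[]{32, 16},
            "aes256-ctr", new int[]{32, 16});

    /** Total bytes the KDF (e.g. BCrypt.pbkdf) must produce for the given cipher. */
    public static int kdfOutputLength(String cipherName) {
        int[] sizes = CIPHERS.get(cipherName);
        if (sizes == null) {
            throw new IllegalArgumentException("unsupported cipher: " + cipherName);
        }
        return sizes[0] + sizes[1];
    }
}
```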
If an instance of ReadAheadRemoteFileInputStream from before this change was wrapped in a BufferedInputStream with a big buffer, the SSH client requested big packets from the server. It turned out that if the server sent a response smaller than requested, the client would not adjust to the decreased window size and would read the file incorrectly.
This change detects cases where the server cannot fulfil the client's requests. The client now adjusts the maximum request length, sends new read-ahead requests, and ignores all read-ahead requests sent earlier.
Simply specifying some allegedly small constant buffer size would not have helped in all possible cases: there is no way for a client to explicitly discover the maximum request length, and these limits differ from server to server. For instance, OpenSSH defines SFTP_MAX_MSG_LENGTH as 256 * 1024, while Apache SSHD defines MAX_READDATA_PACKET_LENGTH as 63 * 1024 and allows that size to be redefined.
Interestingly, a similar issue #183 was fixed many years ago, but the bug was actually in the code introduced for that fix.
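The adjustment described above can be sketched as follows (hypothetical names, not sshj's actual code): when a response comes back shorter than requested without being EOF, treat the response length as the server's effective cap and invalidate earlier in-flight requests.

```java
// Sketch: adapt the read-ahead request size to a server-side cap that is
// not discoverable in advance. Illustrative only.
class AdaptiveRequestLength {
    private long maxRequestLength;
    private long generation;        // bumped to invalidate earlier in-flight requests

    public AdaptiveRequestLength(long initialMax) {
        this.maxRequestLength = initialMax;
    }

    /** Called when the server answered with fewer bytes than requested (and not EOF). */
    public void onShortResponse(long responseLength) {
        if (responseLength < maxRequestLength) {
            maxRequestLength = responseLength;  // treat as the server's effective cap
            generation++;                       // earlier read-ahead responses are now ignored
        }
    }

    public long maxRequestLength() { return maxRequestLength; }
    public long generation() { return generation; }
}
```

For example, against a server capped at 63 * 1024 bytes, the first short response shrinks all subsequent requests to that size; further responses of exactly that size trigger no more adjustments.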
* Removed deprecated proxy connect methods from SocketClient
- Removed custom Jdk7HttpProxySocket class
* Reverted removal of Jdk7HttpProxySocket to retain JDK 7 support for HTTP CONNECT
Co-authored-by: Jeroen van Erp <jeroen@hierynomus.com>
* Add parameter to limit read ahead to a maximum length. Allows multiple concurrent threads to read from the same file at different offsets without reading too far ahead for a single segment.
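The limit amounts to clamping each read-ahead request to the segment boundary; a sketch under that reading (names hypothetical):

```java
// Sketch: cap a read-ahead request so it never crosses the caller-supplied
// limit, letting several threads read disjoint segments of one remote file
// without over-fetching. Illustrative only.
class SegmentedReadAhead {
    /** Length to request at 'offset', given the default chunk size and the segment end. */
    public static int requestLength(long offset, int chunkSize, long readAheadLimit) {
        long remaining = readAheadLimit - offset;
        if (remaining <= 0) {
            return 0;                       // segment exhausted: stop reading ahead
        }
        return (int) Math.min(chunkSize, remaining);
    }
}
```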
* Review and add tests.
Signed-off-by: David Kocher <dkocher@iterate.ch>
Co-authored-by: Yves Langisch <yves@langisch.ch>
- Added ThreadNameProvider to set name based on Thread Class and remote socket address
- Added RemoteAddressProvider to abstract access to Remote Socket Address
- Set Reader Thread name in TransportImpl
- Set SFTP PacketReader Thread name in SFTPEngine
- Set KeepAlive Thread name in SSHClient
Co-authored-by: Jeroen van Erp <jeroen@hierynomus.com>
- Changed KeepAlive.setKeepAliveInterval() to avoid starting Thread
- Updated SSHClient.onConnect() to start KeepAlive Thread when enabled
- Updated SSHClient.disconnect() to interrupt KeepAlive Thread
- Updated KeepAliveThreadTerminationTest to verify state of KeepAlive Thread
Co-authored-by: Jeroen van Erp <jeroen@hierynomus.com>
- Adjusted test classes to work with Apache SSHD 2.8.0
- Upgraded Bouncy Castle from 1.69 to 1.70
- Upgraded Apache SSHD from 2.1.0 to 2.8.0
- Upgraded JUnit from 4.12 to 4.13.2
- Upgraded Mockito from 2.28.2 to 4.2.0
- Upgraded Logback from 1.2.6 to 1.2.9
- Upgraded Apache HTTP Client from 4.5.9 to 4.5.14
* Improve SshdContainer: log `docker build` output to stdout, and don't wait too long if the container has exited
* Fix #740: Lean on Config.keyAlgorithms when choosing between rsa-sha2-* and ssh-rsa
Previously, a heuristic chose rsa-sha2-512 after receiving a host key of type RSA. It didn't work well when a server had no RSA host key.
OpenSSH 8.8 introduced a breaking change: it removed ssh-rsa from the default list of supported public key signature algorithms. SSHJ was unable to connect to an OpenSSH 8.8 server if the server had an ECDSA or Ed25519 host key.
The new behaviour matches the OpenSSH 8.8 client. SSHJ doesn't try to determine rsa-sha2-* support on the fly. Instead, it looks only at `Config.getKeyAlgorithms()`, which may or may not contain ssh-rsa and rsa-sha2-* in any order.
Sorry, this commit mostly reverts changes from #607.
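The resulting selection rule is plain SSH negotiation: pick the first client-configured algorithm that the server also supports, with no special-casing of rsa-sha2-* versus ssh-rsa. A minimal sketch (not sshj's internals):

```java
import java.util.List;
import java.util.Optional;

// Sketch of the negotiation rule: honour the order of the client's
// configured key algorithms and pick the first one the server supports.
// Illustrative only.
class KeyAlgoNegotiation {
    public static Optional<String> negotiate(List<String> clientConfigured,
                                             List<String> serverSupported) {
        return clientConfigured.stream()
                .filter(serverSupported::contains)
                .findFirst();
    }
}
```

Under this rule, a client that lists rsa-sha2-512 before ssh-rsa will use rsa-sha2-512 against servers that offer it, fall back to ssh-rsa against older servers, and fail cleanly if there is no overlap.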
* Introduce ConfigImpl.prioritizeSshRsaKeyAlgorithm to deal with broken backward compatibility
Co-authored-by: Jeroen van Erp <jeroen@hierynomus.com>
* Fix: if the client knows the CA key, it should send a host key algorithm proposal for certificates
* Run specific SSH server in KeyWithCertificateSpec
Required to verify the case with wrong host key algorithm proposals. See #733
* Split KeyWithCertificateSpec into HostKeyWithCertificateSpec and PublicKeyAuthWithCertificateSpec
Prevents starting unnecessary SSHD containers, making the tests run a bit faster when they are launched separately.
* Replace abstract class IntegrationBaseSpec with composition through IntegrationTestUtil
* Switch to testcontainers in integration tests
It allows running different SSH servers with different configurations in tests, giving the ability to cover more bugs, like the one mentioned in #733.
* Full support for encrypted PuTTY v3 files (Argon2 library not included)
* Simplified the PuTTYKeyDerivation interface and provided an abstract PuTTYArgon2 class for easy Argon2 integration
* Use Argon2 implementation from Bouncy Castle
* Added missing license header
* Fixed license header again
* Extended unit tests to cover all Argon2 variants and non-standard Argon2 parameters; verify the loaded keys