Any chance this work can be upstreamed into mainline SSH? I'd love to have better performance for SSH, but I'm probably not going to install and remember to use this just for the few times it would be relevant.
Bender 2 hours ago [-]
I doubt this would ever be accepted upstream. That said, if one wants speed, play around with lftp [1]. It has a mirror subsystem that can replicate much of rsync's functionality against a chroot, sftp-only destination, and it can use multiple TCP/SFTP streams both across a batch upload and per file, meaning one can saturate just about any upstream. I have used this for transferring massive postgres backups, and because I am paranoid when using applications that automatically multipart-transfer files, I include a checksum file for the source and then verify the destination files.
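The checksum step described above can be sketched like this (a minimal Python sketch, not the author's actual script; file names are made up, and a local copy stands in for the lftp transfer):

```python
import hashlib
import os
import shutil
import tempfile

def sha256_file(path):
    """Stream a file through SHA-256 so large backups don't load into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Simulate a transfer: write a "backup" on the source side, copy it to the
# destination (stand-in for the lftp mirror step), then compare checksums.
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "pg_backup.dump")
    dst = os.path.join(tmp, "pg_backup.dump.transferred")
    with open(src, "wb") as f:
        f.write(b"pretend this is a massive postgres backup" * 1000)
    shutil.copy(src, dst)                        # the "multipart transfer"
    assert sha256_file(src) == sha256_file(dst)  # verify the destination copy
    print("checksum OK")
```

The same check is, of course, doable with plain `sha256sum`/`sha256sum -c` on both ends; hashing the destination independently is what catches a corrupted multipart reassembly.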
The only downside I have found using lftp is that, since there is no corresponding daemon on the destination the way rsync has, directory enumeration can be slow if there are a lot of nested sub-directories. Oh, and the syntax is a little odd, for me anyway; I always have to look at my existing scripts when setting up new automation.
Demo to play with, download only. Try different values. This will be faster on your servers, especially anything within the data-center.
ssh mirror@mirror.newsdump.org # do this once to accept key as ssh-keyscan will choke on my big banner
mkdir -p /dev/shm/test && cd /dev/shm/test
lftp -u mirror, -e "mirror --parallel=4 --use-pget=8 --no-perms --verbose /pub/big_file_test/ /dev/shm/test;bye" sftp://mirror.newsdump.org
For automation add --loop to repeat job until nothing has changed.
[1] - https://linux.die.net/man/1/lftp
The normal answer that I have heard to the performance problems in the conversion from scp to sftp is to use rsync.
The design of sftp is such that it cannot exploit "TCP sliding windows" to maximize bandwidth on high-latency connections. Thus, the migration from scp to sftp has involved a well-known performance loss.
https://daniel.haxx.se/blog/2010/12/08/making-sftp-transfers...
The rsync question is not a workable answer, as OpenBSD has reimplemented the rsync protocol in a new codebase:
https://www.openrsync.org/
An attempt to combine the BSD-licensed rsync with OpenSSH would likely see it stripped out of GPL-focused implementations, where the original GPL release has long standing.
It would be more straightforward to design a new SFTP implementation that implements sliding windows.
I understand (but have not measured) that forcibly reverting to the original scp protocol will also raise performance in high-latency conditions. That does introduce an attack surface; scp should not be the default transfer tool, and it demands thoughtful care.
https://lwn.net/Articles/835962/
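The windowing point can be made concrete: a transfer that keeps at most W bytes in flight over a link with round-trip time RTT can never exceed W/RTT, no matter how fat the pipe. A quick back-of-the-envelope illustration (the numbers are made up, not measurements of any SSH implementation):

```python
def max_throughput_mbit(window_bytes, rtt_ms):
    """Ceiling imposed by a fixed in-flight window: window / RTT, in Mbit/s."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1_000_000

# A 2 MiB window on a 1 ms LAN vs. a 100 ms long-haul link:
print(max_throughput_mbit(2 * 1024**2, 1))    # 16777.216 -- window is no bottleneck
print(max_throughput_mbit(2 * 1024**2, 100))  # 167.77216 -- latency caps the transfer
```

This is why a fixed window feels fine in the data center but crawls across an ocean: the only fixes are a bigger (or dynamically scaled) window or more parallel streams.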
Wow, I hadn't heard of this before. You're saying it can "chunk" large files when operating against a remote sftp-subsystem (OpenSSH)?
I often find myself needing to move a single large file rather than many smaller ones but TCP overhead and latency will always keep speeds down.
aidenn0 34 minutes ago [-]
I use lftp a lot because of its better UI compared to sftp. However, for large files, even with scp I can pin GigE with an old Xeon-D system acting as a server.
harvie 5 hours ago [-]
Also, upstream is extremely well audited. That's a huge benefit I don't want to lose by using a fork.
Bad_CRC 4 hours ago [-]
this, I'm not going to start using a random ssh fork with modified ciphers.
Zambyte 3 hours ago [-]
It may still be sensible if you only expose it to private networks.
bomewish 3 hours ago [-]
So could this safely be used on Tailscale, then? I'm very curious, though also a bit paranoid.
messe 2 hours ago [-]
> So could this safely be used on Tailscale, then? I'm very curious, though also a bit paranoid.
You may as well just use tailscale ssh in that case. It already disables ssh encryption because your connection is encrypted with WireGuard anyway.
gear54rus 3 hours ago [-]
It could safely be used on the public internet; all this fearmongering has no basis.
Better question is 'does it have any actual improvements in day-to-day operations'? Because it seems like it mostly changes up some ciphering which is already very fast.
yjftsjthsd-h 56 minutes ago [-]
> It could safely be used on the public internet; all this fearmongering has no basis.
On what basis are you making that claim? Because AFAICT, concern about it being less secure is entirely reasonable and is one of the big caveats to it.
Zambyte 2 hours ago [-]
I'm not fearmongering. I'm just saying:
- IF you don't trust it
- AND you want to use it
=> run it on a private network
You don't have to trust it for security to use it. Putting services on secure networks when the public doesn't need access is standard practice.
emilfihlman 54 minutes ago [-]
lose*
frantathefranta 2 hours ago [-]
I admittedly don't really know how SSH is built, but it looks to me like the patch that "makes" it HPN-SSH is already present upstream[1]; it's just not applied by default?
Nixpkgs seems to allow you to build the pkg with the patch [2].
[1] https://github.com/freebsd/freebsd-ports/blob/main/security/...
[2] https://github.com/NixOS/nixpkgs/blob/d85ef06512a3afbd6f9082...
Upstream is either OpenBSD itself or https://github.com/openssh/openssh-portable , not the FreeBSD port. I'm... not sure why nix is pulling the patch from FreeBSD, that's odd.
Almondsetat 5 hours ago [-]
OpenSSH is from the people at OpenBSD, which means performance improvements have to be carefully vetted against bugs, and, judging by the fact that they're still on FFS and the lack of TRIM in 2025, that will not happen.
wahern 2 hours ago [-]
There's nothing inherently slow about UFS2; the theoretical performance profile should be nearly identical to Ext4. For basic filesystem operations UFS2 and Ext4 will often be faster than more modern filesystems.
OpenBSD's filesystem operations are slow not because of UFS2, but because they simply haven't been optimized up and down the stack the way Ext4 has been on Linux or UFS2 on FreeBSD. And of course, OpenBSD's implementation doesn't have a journal (both UFS and Ext had journaling bolted on late in life), so filesystem checks (triggered on an unclean shutdown or after N boots) can take a long time, which often causes people to think their system has frozen or didn't come up. That user-interface problem notwithstanding, UFS2 is extremely robust. OpenBSD is very conservative about optimizations, especially when they increase code complexity, and particularly for subsystems where the project doesn't have the time available to give them the necessary attention.
suprjami 4 hours ago [-]
Unlikely. These patches have been carried out-of-tree for over a decade precisely because upstream OpenSSH won't accept them.
hsbauauvhabzb 3 hours ago [-]
Depending on your hardware architecture and security needs, fiddling with ciphers in mainline might improve speed.
nhatcher 1 hours ago [-]
If folks find this interesting, maybe mosh[1] is also for you. Different trade-offs.
[1]: https://mosh.org/
This is very cool and I think I'll give it a try, though I'm wary about using a forked SSH, so I would love to see things land upstream.
I've been using mosh now for over a decade and it is amazing. Add on rsync for file transfers and I've felt pretty set. If you haven't checked out mosh, you should definitely do so!
chrisweekly 1 hours ago [-]
wary (cautious or skeptical), not weary (tired)
actionfromafar 1 hours ago [-]
At this point, maybe both. :) Can we have a portmanteau?
freedomben 1 hours ago [-]
Indeed! I meant wary, but both kind of fit :-D
freedomben 1 hours ago [-]
Heh, thank you! Edited
tristor 35 minutes ago [-]
I don't think it comes as a surprise that you can improve performance by re-implementing ciphers, but what is the security trade-off? Many times, well audited implementations of ciphers are intentionally less performant in order to operate in constant time and avoid side channel attacks. Is it even possible to do constant time operations while being multithreaded?
The only change I see here that is probably harmless and a speed boost is using AES-NI for AES-CTR. This should probably be an upstream patch. The rest is more iffy.
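As a toy illustration of the constant-time concern raised above (unrelated to HPN-SSH's actual code): a naive comparison bails out at the first mismatching byte, so an attacker can learn a secret MAC prefix from response timing, which is why crypto code pays a speed cost for branch-free alternatives such as Python's `hmac.compare_digest`:

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    """Early-exit comparison: runtime depends on where the first mismatch is."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False  # leaks the position of the first differing byte via timing
    return True

mac = b"expected-mac-value"
print(naive_equal(mac, b"expected-mac-value"))          # True, but timing-leaky
print(hmac.compare_digest(mac, b"expected-mac-value"))  # True, constant-time
print(hmac.compare_digest(mac, b"guessed--mac-value"))  # False
```

Making such comparisons (and cipher inner loops) data-independent while also splitting work across threads is exactly the tension the parent comment is asking about.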
ollybee 1 hours ago [-]
It's not clear whether you need it on both ends to get an advantage.