2000-2003: both are prehistoric. We have neural networks now to do things like upscaling and colorization.
jandrese 2 hours ago [-]
Last time I was doing image processing in C, I was quantizing the colorspace using a technique out of a paper from 1982. Just because a source is old doesn't mean it is wrong.
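For the curious, that was most likely Heckbert's 1982 median-cut paper; a rough sketch of the core split step in C (simplified and from memory, not the paper's exact formulation):

    #include <stdlib.h>

    typedef struct { unsigned char r, g, b; } Pixel;

    static int sort_channel; /* 0 = r, 1 = g, 2 = b */

    static unsigned char chan(const Pixel *p, int c)
    {
        return c == 0 ? p->r : c == 1 ? p->g : p->b;
    }

    static int cmp_pixel(const void *a, const void *b)
    {
        return chan(a, sort_channel) - chan(b, sort_channel);
    }

    /* One median-cut split: find the channel with the widest range in
       this box of pixels, sort the box along it, and cut at the median.
       Recursing k times and averaging each box yields a 2^k-color
       palette. */
    static size_t median_cut_split(Pixel *box, size_t n)
    {
        unsigned char lo[3] = {255, 255, 255}, hi[3] = {0, 0, 0};
        for (size_t i = 0; i < n; i++)
            for (int c = 0; c < 3; c++) {
                unsigned char v = chan(&box[i], c);
                if (v < lo[c]) lo[c] = v;
                if (v > hi[c]) hi[c] = v;
            }
        sort_channel = 0;
        for (int c = 1; c < 3; c++)
            if (hi[c] - lo[c] > hi[sort_channel] - lo[sort_channel])
                sort_channel = c;
        qsort(box, n, sizeof *box, cmp_pixel);
        return n / 2; /* index where the box splits in two */
    }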
vincenthwt 9 hours ago [-]
Yes, those methods are old, but they're explainable and much easier to debug or improve than black-box neural networks. They're still useful in many cases.
earthnail 9 hours ago [-]
Only partially. The chapters on edge detection, for example, have only historical value at this point. A tiny NN can learn edges much better (which was basically AlexNet's claim to fame).
grumbelbart2 8 hours ago [-]
That absolutely depends on the application. "Classic" (i.e. non-NN) methods are still very strong in industrial machine vision applications, mostly due to their momentum, explainability / trust, and performance / costs. Why use an expensive NPU if you can do the same thing in 0.1 ms on an embedded ARM?
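To give a sense of why: a 3x3 Sobel pass is only a handful of adds and multiplies per pixel, which is the kind of thing that fits in a sub-millisecond budget on a Cortex-class core. A minimal, unoptimized grayscale sketch in C (code mine, not the parent's):

    #include <stdlib.h>

    /* 3x3 Sobel gradient magnitude on an 8-bit grayscale image,
       using |Gx| + |Gy| as the usual cheap stand-in for
       sqrt(Gx^2 + Gy^2). Borders are skipped; a real embedded
       version would also add SIMD and row tiling. */
    void sobel(const unsigned char *src, unsigned char *dst, int w, int h)
    {
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                const unsigned char *p = src + y * w + x;
                int gx = -p[-w-1] + p[-w+1]
                         - 2*p[-1] + 2*p[1]
                         - p[w-1] + p[w+1];
                int gy = -p[-w-1] - 2*p[-w] - p[-w+1]
                         + p[w-1] + 2*p[w] + p[w+1];
                int mag = abs(gx) + abs(gy);
                dst[y * w + x] = mag > 255 ? 255 : (unsigned char)mag;
            }
        }
    }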
HelloNurse 5 hours ago [-]
An NN that has been trained by someone else, on unknown data, with unknown objectives, and containing unknown defects and backdoors, can compute something fast, but why should it be trusted to do my image processing?
Even if the NN is built in-house, overcoming the trust issues, principled algorithms have general correctness proofs, while NNs have, at best, promising statistics on validation datasets.
earthnail 2 hours ago [-]
This doesn't match my experience. I spent a good portion of my life debugging SIFT, ORB, etc. The mathematical principles don't matter that much when you apply them; what matters is the performance of your system on a test set.
Turns out a small three-layer convnet autoencoder did the job much better with much less compute.
HelloNurse 1 hour ago [-]
You cannot prove that an algorithm does what you want, unless your understanding of what you want is quite formal.
But you can prove that an algorithm makes sense and that it doesn't make specific classes of mistakes: for example, a median filter has the property that all output pixel values are the value of some input pixel, ensuring that no out-of-range values are introduced.
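To make that concrete, here is a minimal 3x3 median filter in C; every output value is literally one of the nine input pixels, which is the whole proof that no out-of-range value can appear:

    #include <stdlib.h>
    #include <string.h>

    static int cmp_uchar(const void *a, const void *b)
    {
        return *(const unsigned char *)a - *(const unsigned char *)b;
    }

    /* 3x3 median filter on an 8-bit grayscale image. Each output
       pixel is the middle value of its 3x3 neighborhood, so it is
       always one of the input pixels; borders are copied as-is. */
    void median3x3(const unsigned char *src, unsigned char *dst, int w, int h)
    {
        memcpy(dst, src, (size_t)w * h);
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                unsigned char win[9];
                int k = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        win[k++] = src[(y + dy) * w + (x + dx)];
                qsort(win, 9, 1, cmp_uchar);
                dst[y * w + x] = win[4]; /* median of the window */
            }
        }
    }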
jononor 32 minutes ago [-]
Few customers care about proofs. If you can measure how well the method works for the desired task, that is in most cases sufficient, and in many cases preferred over proofs.
fsloth 8 hours ago [-]
"The chapters on edge detection, for example, only have historic value at this point"
Are there simpler, faster and better edge detection algorithms that are not using neural nets?
frankie_t 5 hours ago [-]
I wonder if doing classical processing of real-time data as a pre-phase, before you feed it into the NN, could be beneficial?
TimorousBestie 5 hours ago [-]
Yes, it’s part of the process of data augmentation, which is commonly used to avoid classifying on irrelevant aspects of the image like overall brightness or relative orientation.
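A trivial example of such a pre-phase (my own sketch, not something from the thread): mapping the image to zero mean and unit variance in a single classical pass, so the net cannot key on overall exposure:

    #include <math.h>

    /* Normalize an 8-bit grayscale image to zero mean and unit
       variance. Removes global brightness/contrast before the
       pixels ever reach the network. */
    void normalize(const unsigned char *src, float *dst, int n)
    {
        double sum = 0.0, sq = 0.0;
        for (int i = 0; i < n; i++) {
            sum += src[i];
            sq += (double)src[i] * src[i];
        }
        double mean = sum / n;
        double var = sq / n - mean * mean;
        double inv = 1.0 / sqrt(var > 1e-12 ? var : 1e-12);
        for (int i = 0; i < n; i++)
            dst[i] = (float)((src[i] - mean) * inv);
    }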
4gotunameagain 9 hours ago [-]
Classical CV algorithms are always preferred over NNs in every safety-critical application.
Except self-driving cars, and we all see how that's going.
rahen 5 hours ago [-]
I see it the same way I see 'Applied Cryptography'. It’s old C code, but it helps you understand how things work under the hood far better than a modern black box ever could. And in the end, you become better at cryptography than you would by only reading modern, abstracted code.