> This is something I never get. How can an app take a better image than the native camera when you are using the same basic hardware, lens, etc.?

Just a guess, because I've only dabbled in programming, and even that was ages ago... There's so much software processing involved in capturing a smartphone camera image that it leaves the door open for app developers to do things differently. They might:
* Create new processing tricks to run on the raw capture from the camera, possibly coming up with something better than the native iOS or Android processing chain. The current rage for anything AI/ML is an example. (A sketch of getting at the raw capture follows this list.)
* Make different decisions with the options the built-in processing chain already offers. (For example, the native iOS camera app is often accused of oversharpening, overly heavy noise reduction, or both. Depending on which hooks the platform exposes, devs can dial that processing back; see the second sketch below.)
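
To make the first point concrete on the Android side, here's a minimal Kotlin sketch using the Camera2 API. The names `camera` and `rawReader` are assumptions: stand-ins for a CameraDevice and an ImageReader the app would already have opened and configured.

```kotlin
import android.hardware.camera2.CameraDevice
import android.hardware.camera2.CaptureRequest
import android.media.ImageReader

// Ask Camera2 for the unprocessed Bayer data (ImageFormat.RAW_SENSOR) so the
// app can run its own demosaic / denoise / sharpen pipeline instead of the
// one baked into the OS. Assumes `camera` is an open CameraDevice and
// `rawReader` is an ImageReader created with ImageFormat.RAW_SENSOR at a
// size the sensor supports.
fun buildRawCaptureRequest(camera: CameraDevice, rawReader: ImageReader): CaptureRequest {
    val builder = camera.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE)
    builder.addTarget(rawReader.surface)
    // Whatever happens to this frame next (an ML denoiser, custom tone
    // mapping, frame stacking) is entirely the app's own processing chain.
    return builder.build()
}
```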
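
And for the second point, the same API lets an app tone down the device's own processing rather than replace it. Which modes actually take effect depends on what the device advertises in its CameraCharacteristics, so treat this as a sketch, not a guarantee:

```kotlin
import android.hardware.camera2.CameraMetadata
import android.hardware.camera2.CaptureRequest

// Dial back the built-in processing on a still-capture request: turn off the
// hardware edge enhancement (sharpening) and request only the noise
// reduction that cannot be disabled. `builder` is the same kind of
// CaptureRequest.Builder as in the sketch above.
fun relaxDefaultProcessing(builder: CaptureRequest.Builder) {
    builder.set(CaptureRequest.EDGE_MODE, CameraMetadata.EDGE_MODE_OFF)
    builder.set(CaptureRequest.NOISE_REDUCTION_MODE, CameraMetadata.NOISE_REDUCTION_MODE_MINIMAL)
}
```

Either way, the glass and the sensor are identical; what differs is the software sitting between the sensor and the finished JPEG.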