When software testing began, it was essentially a scientific endeavor. It should come as no surprise, then, that [the discipline] grew out of debugging as a mechanism to "prove" the correctness of a program (i.e., prove that it works). That focus then grew into finding bugs. However, I believe that as commercial software began to dominate, making it "user friendly" became a differentiator for most businesses. Suddenly "correctness" meant quality, and we hired hordes of testers to pound on software and find bugs.
Correctness can be ascertained via unit tests, data analysis, proofs, white box testing, and probably a few other techniques thrown in there. What we focus on in automation is the continued determination of a program's correctness. But this is separate from quality, which can only be proven negatively, never positively. The number of bugs found in an area is a useless metric; the number of bugs users actually experience is the one that matters.
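To make that distinction concrete, here is a minimal sketch (in Python, using a hypothetical slugify function purely as a stand-in) of the kind of automated check that asserts correctness:

    import unittest

    def slugify(title):
        # Hypothetical function under test: lowercase a title and join words with hyphens.
        return "-".join(title.lower().split())

    class SlugifyCorrectnessTests(unittest.TestCase):
        # Each assertion pins down one expected behavior; the suite can be
        # re-run continuously to confirm the program is still "correct".
        def test_basic_title(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

        def test_already_lowercase(self):
            self.assertEqual(slugify("already lower"), "already-lower")

    if __name__ == "__main__":
        unittest.main()

A passing run only tells you that these particular behaviors still hold; it says nothing about whether users will find the result pleasant, or useful, to work with.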