In many companies, quality is added to a product after it has been coded. The developers write code, then sling it into QA where it is beaten into shape. The results can be quite good: I've seen some reasonable products emerge from this strange process including, alarmingly, quite a bit of the software in aircraft systems. But it's a horribly inefficient way to work: it's impossible to predict how long the software beatings will last, it does nothing for teamwork, and it usually results in a mediocre product. Where's the professional pride of the coders and testers in such a process?
The way to go is to have most testing done as the software is being written, and the best way to accomplish that is with self-organizing teams of coders, testers and analysts. There is still usually a need for a final test before release, as part of a rapid "end-game", but that's a time for polishing, not beating.
The net result is faster progress in development and better products. In my experience, testers have valuable insights that can make products simpler and more usable - a dialogue with the coder and analyst at the time the software is being written draws this out. And removing the perception of a QA "safety-net" is good for coders and analysts - it keeps them concentrated on the product goal.
One of the indicators that all is well with this set-up is when team members prioritise the product goal over their own tasks. So if testing is falling behind, the developers and analysts roll up their sleeves and help out with the testing effort. At the end of each sprint, it's better to produce something that works than to have untested code or unused analysis, neither of which is of much value.
At its worst, QA is about cleaning up a mess that shouldn't have been made in the first place. But when properly deployed, software testing professionals add real value to products.