How Do You Know When It's Ready for Developer Hand-Off?


#1

In trying to understand the UX process, I was wondering: how do you know when your design (which has been prototyped and tested) is finally ready to go to the coders for development?

Also, after the coders give you the “finished” product, does it go back into testing again to validate your design approach?


#2

The following answer was provided by @dennislees over at the User Experience forum on Stack Exchange. I thought it was so good I am posting it here:

When all stakeholders have signed off
There isn’t really a definitive answer for this, but the catch-all answer is “when all stakeholders are satisfied with whatever part of the system you’re working on”. (I say this because, depending on your working methods, stakeholders might be signing off on whole sections of a product, or on individual components.)

That usually means something like:

  • Product managers agree that all functional requirements are met
  • Designers agree that screens and interactions are sufficiently polished
  • Developers agree that the requirements have been sufficiently specified
    (who qualifies as a stakeholder depends on the project and org structure)

In an ideal case, and in your case it seems, this list should include:

  • Screens/interactions have been user tested (and feedback implemented)

In most cases, however, this last one doesn’t really happen.

Does it go back into testing again to validate your design?

It depends on how much difference there is between what you last tested and what the developers build.

For example, if you last tested with low-fidelity wireframes (to validate major design decisions but not interactions) and from there created high-fidelity mockups and designed interactions that developers are going to build, you should test again before releasing.

If you last tested high-fidelity prototypes with working interactions, and the developers are simply going to replicate them, there’s much less pressure to test again.

Regarding testing a “finished” product, i.e. something that is in production: if you’re doing things right, everything in production should already be sufficiently tested. Assuming that is the case, once something is in the wild it’s usually not worth retesting unless required, perhaps for one of the following reasons:

  • a specific conversion process is under-performing
  • users report issues with a process (through support tickets or direct contact)
  • elements trigger a lot of error codes