In general, just doing a competitive analysis won't be enough. And personas will be helpful for understanding your research (the 'analysis' phase) but won't actually dig out any useful findings of their own.
A few factors will help you determine what research is required, and how much of it you need.
- What applicable research data is available already? Does the team or business have other data it has collected in the past? Has someone else done that same research already?
- Is the data still accurate and useful? How do you know? You want to bust any incorrect assumptions early.
- After looking at all the available inputs and considering the requirements of the project, where are the gaps in your knowledge? What important things do you know that you don't know?
- What are the risky areas for the project? What are the things you must really do your homework on? Is the organisation banking on getting something particular right?
- Knowing the range of research techniques available, which of them will most easily give you useful information? Which will address the gaps in your knowledge and give a solid understanding of any risks? (More on this below, if you don't know the range of techniques and methods.)
- Does the research have a high reliance on discovering unfamiliar behaviours? Or is it more important that the research checks previous or existing assumptions?
- How much experience do the researchers have? In this particular field of research, on this topic? There will be more and larger gaps for less experienced research teams.
Do at least some face-to-face research with users anyway. I've never seen a project where research failed to help, either for personal context or for project discovery.
You'll need to get familiar with the range of techniques: http://uxmastery.com/resources/techniques/
A good rule of thumb is to do at least three research activities. This allows you to triangulate your data and start seeing important patterns and reinforcements coming in across the methods. I would make at least one of them 'quantitative' (looking explicitly at statistical and numerical inputs that show HOW MUCH things happen) and one 'qualitative' (exploring the underlying reasons, opinions and motivations for WHY things happen). I would also conduct the qualitative one face-to-face with the users.
If you don't have a lot of experience yet, and no-one else is dictating the way you do things (and perhaps even if they are), I would recommend that these three research activities are:
- Contextual enquiry or site visit - go to the place where the users of the product/service are and watch them do their work. Ask them questions and follow your nose.
- Site analytics (for existing sites) or industry research (new projects) - dig out the numbers that are important to the project (I mean looking at things like 'search term frequency', 'exit pages', etc, not the usual pageviews and other vanity metrics).
- Competitive/opportunity analysis - set a benchmark so you know what standards to exceed, and where the best opportunities are.
I would start with about 6 tasks for each: 6 contextual enquiry participants, 6 things you're hunting for in the analytics, 6 competitor reviews.
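To make the analytics hunt a bit more concrete, here's a minimal sketch in Python of tallying exit pages and search term frequency. The session data and field names (`pages`, `search_terms`) are made up for illustration, not a real analytics API; a real export from your tool will look different, so treat this as a pattern to adapt.

```python
from collections import Counter

# Hypothetical session records, e.g. parsed from an analytics export.
# The field names here are assumptions; adapt them to whatever your
# analytics tool actually exports.
sessions = [
    {"pages": ["/home", "/pricing", "/signup"], "search_terms": ["pricing"]},
    {"pages": ["/home", "/docs", "/pricing"], "search_terms": ["install", "pricing"]},
    {"pages": ["/home", "/pricing"], "search_terms": []},
]

def exit_page_counts(sessions):
    """Count how often each page is the last one viewed in a session."""
    return Counter(s["pages"][-1] for s in sessions if s["pages"])

def search_term_frequency(sessions):
    """Count how often each on-site search term was used."""
    return Counter(term for s in sessions for term in s["search_terms"])

print(exit_page_counts(sessions).most_common(3))
print(search_term_frequency(sessions).most_common(3))
```

The point of counting these (rather than raw pageviews) is that exit pages tell you where people give up, and search terms tell you what they expected to find, both of which feed directly into your list of six things to hunt for.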
As you do the research, you'll reach a point where additional tasks with the same research focus aren't really turning up new information (a kind of Pareto Principle 80:20 thing: going further and harder, you may only get an extra 20% of findings for an extra 80% of effort). It's good to stay light and nimble, and keep your energy and focus on the things that are most useful at that point of the project. It's also a good point to re-assess where your gaps are, or to progress your thinking/design work before heading into additional rounds of testing or research.
Is that helpful? Let me know what other questions you have. =)