Lazy loading and sorting

Hi,
I am working on an enterprise app that returns quite a few results from search. The users want lazy loading on the search results but also require sorting. My devs tell me the sort will not be accurate until the user has scrolled through all the results so that they are all loaded.

I’m out of ideas on how to deal with this situation. Anyone encountered this before?

Thanks


Not sure where he’s coming from to be honest.

Example: http://rackandruinrecords.com/ - search in your browser for “Megazord” and you’ll find 4 entries; jump to each one and you’ll see the content being loaded as you get to it.

Is the page broken up with pagination or a ‘load more’?

No, I have never encountered this issue before, but I have some thoughts to share. I think your dev team is right: it is challenging to add sorting to a lazy-loading list. The main reason is that we are trying to blend two features built for different scenarios, i.e. lazy loading works well when the focus is content discovery, while sorting works well for data analysis.

Consider lazy loading: it suits applications built around content discovery, for example Facebook, 9gag, Twitter, etc. Take Facebook: it provides an endless stream of posts that keeps you discovering new things about your friends, your family, the world, and so on, and it keeps loading new posts into the timeline as the user scrolls. Each scroll expresses the user’s interest in discovering more, and Facebook identified that behaviour and built lazy loading around it, so the user only has to scroll to see new content.

Sorting, on the other hand, is a common feature that gives the user the power to arrange data in a particular order, and it is most useful when analysing a data set. One thing I have noticed about analysis workflows is that the data set being analysed should be well defined, i.e. the user gathers all of the required information before starting the analysis. A volatile data set tends to produce incorrect analysis, which in turn leads to bad decisions. Analysis is critical, so any tool used for analysis is responsible for telling the user whether the data set is ready or not.

In our case the list is the data set, and because it is lazily loaded it is volatile, so it is hard for the user to know whether it is ready for analysis. If the user doesn’t realise this and sorts anyway, at some point the list gets updated and the analysis turns out to be wrong. That conflict saps the user’s enthusiasm and adds extra work on their side, such as watching for list updates and re-sorting the updated list, instead of letting them focus on the real task. So I think it is better to use a pre-loaded data set rather than a lazily loaded one.
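To make the conflict concrete, here is a small sketch (the item shape and values are made up) of how sorting only the portion of the list that has been loaded so far can show an order that turns out to be wrong once the rest arrives:

```typescript
interface Result {
  name: string;
  price: number;
}

// Pretend these four results exist on the server...
const allResults: Result[] = [
  { name: "C", price: 30 },
  { name: "A", price: 10 },
  { name: "D", price: 40 },
  { name: "B", price: 5 }, // ...but this one has not been lazy loaded yet
];

// The user has only scrolled far enough to load the first three.
const loaded = allResults.slice(0, 3);

// Sorting just the loaded slice by price looks fine on screen...
const sortedLoaded = [...loaded].sort((a, b) => a.price - b.price);
console.log(sortedLoaded.map(r => r.name)); // ["A", "C", "D"]

// ...but the true order over the full data set is different.
const sortedAll = [...allResults].sort((a, b) => a.price - b.price);
console.log(sortedAll.map(r => r.name)); // ["B", "A", "C", "D"]
```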

This is just my take on it; I hope it helps.

Peace!

Thanks Jaison. Lazy loading was implemented to speed up the initial return of results rather than waiting for the entire data set. The current application we are replacing loads the entire data set and is extremely slow. We are working with a legacy database, and optimising that database is not within our power; politics, job protection, budgets, etc. have all come into play to prevent that happening.

The two solutions we have come up with are:

  1. Implement a sort-by drop-down in the search form (a rough sketch of this is included below).
  2. Implement a ‘load all’ button so that users who want to sort the entire result set can trigger it and then wait for the full results to be displayed.

A third option was considered that would disable the sorting until all results have been lazy loaded. Frankly I’m not fond of any of these and I don’t even think the first option would work as expected.
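For context, here is a rough sketch of what the first option would involve on the client side, assuming (and it is only an assumption) that the backend can accept a sort parameter and return each page already in that order; the endpoint and parameter names are made up:

```typescript
interface SearchPage {
  items: unknown[];
  nextOffset: number | null; // null when there are no more pages
}

// The sort choice from the drop-down is sent with the query itself, so every
// page the server returns is already in the requested order and can simply be
// appended as the user scrolls. Changing the sort means starting again from
// offset 0 with the new sort key, not re-sorting whatever is loaded.
async function fetchPage(
  query: string,
  sortBy: string,
  offset: number,
  pageSize = 50
): Promise<SearchPage> {
  const params = new URLSearchParams({
    q: query,
    sort: sortBy,
    offset: String(offset),
    limit: String(pageSize),
  });
  const response = await fetch(`/api/search?${params}`); // hypothetical endpoint
  if (!response.ok) throw new Error(`Search failed: ${response.status}`);
  return response.json();
}
```

Whether that is feasible obviously depends on whether the legacy database can serve a sorted, paged query quickly enough.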

You are right. The user stress point is evident in all three solutions, and it’s really a technical problem. If you can’t scale the search because of the legacy system and the lack of knowledge about the database, one quick solution that comes to mind is to build a separate microservice that crunches the data from the main database on a time interval and stores it in a NoSQL database, then use that microservice to retrieve data faster, like a pseudo cache. But this is just a thought; there are a lot of hidden costs involved until you dig in more.
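To make that a little more concrete, here is a very rough sketch of the refresh job (every name, the record shape, and the interval are placeholders, not a real implementation):

```typescript
interface SearchRecord {
  id: string;
  title: string;
  updatedAt: string;
}

// Stand-in for a read-only query against the legacy database.
async function readFromLegacyDb(): Promise<SearchRecord[]> {
  return []; // a real implementation would query the main database here
}

// Stand-in for a bulk upsert into the faster store that search reads from.
async function writeToCacheStore(records: SearchRecord[]): Promise<void> {
  console.log(`Refreshed cache with ${records.length} records`);
}

const REFRESH_INTERVAL_MS = 15 * 60 * 1000; // assumed 15-minute interval

// On a timer, crunch the data out of the main database and push it into the
// NoSQL store, so searches never have to hit the legacy database directly.
async function refresh(): Promise<void> {
  const records = await readFromLegacyDb();
  await writeToCacheStore(records);
}

refresh();
setInterval(refresh, REFRESH_INTERVAL_MS);
```

The obvious trade-off with a scheduled refresh is that the search results can be up to one interval stale.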