A contextual global search + in-video search

DESIGN FOR WEB APP

RESEARCH

DESIGN SPEC

About the project

Media post-production workflows rely heavily on locating and repurposing existing content to create new assets. A significant portion of the effort, estimated at around 80%, is dedicated to the meticulous process of searching through extensive data repositories to find the right audio, video, and image files. This content curation stage is critical, as the quality and relevance of the source materials directly impact the final output. Completing the workflow therefore requires a lot of back and forth between different parts of the product.

Problem statement

How might we enable users to intuitively and quickly invoke and access a comprehensive global search from any part of the product and seamlessly integrate it into their workflow?

Objectives

  1. Comprehensive and Relevant

Ensure that global search functionality maintains contextual relevance, providing users with tailored results that align precisely with what they are looking for.

  2. Accessible and Usable

Design an accessible and usable search interface that caters to users with varying abilities and preferences.

  3. Integrated

Ensure that users can integrate global search seamlessly into their workflow without disrupting it.

Understanding the problem

I interviewed five diverse participants to gain an understanding of their needs and desires within the scope of the project. These are the insights I gathered:

Participants have to rely on memory for content retrieval because content precisely matching their requirements is not directly available.

Participants in the content creation process currently heavily rely on their memory to locate specific shots amidst vast repositories of content data. This dependency introduces a significant challenge as it often leads to inefficiencies and inaccuracies in finding the required assets. Relying on memory not only consumes valuable time but also increases the likelihood of overlooking relevant materials, ultimately hindering the overall productivity of the workflow.

The workflow gets disrupted because participants have to stop what they're doing to search for things, making it harder to stay focused and get work done efficiently.

A significant portion of search and navigation activity occurs in the midst of the task at hand. This frequent interruption disrupts the flow of work and forces users to divert their attention away from their primary tasks to search for necessary assets. As a result, the continuity and momentum of the workflow are compromised, leading to decreased productivity and increased cognitive load for the participants.

Designing a contextual global and in-video smart search

I've put together a diagram that maps out where users can invoke the global search feature throughout our platform. This helps us understand what users would expect from it in various contexts, allowing us to design an experience that meets their needs and expectations effectively.

Iterations and feedback

After this, I came up with multiple different directions for a solution and got quick feedback from the stakeholders involved.

Final solution

Placement of the search bar

The search bar was placed on the top navigation bar because it is a global function: it belongs at the topmost level of the hierarchy, and users should be able to access it from any part of the product. To quote Katie Sherwin's NN Group research article, “The magnifying glass alone makes it much harder to locate the search.” Thus, even though the search in our product acts as a button that triggers a modal, I decided to maintain the familiar placement and UI so that users can easily locate it, which reduces interaction cost.

Suggestions

To avoid typos and decrease interaction cost and mental effort, we incorporated search suggestions into our global search. Different text styling was used to differentiate the typed query from the suggested term. The actions that take place after clicking on a suggestion are shown on the right-hand side of the modal, allowing users to make an informed decision and preventing errors.

The typed text was highlighted because the suggested term can extend both before and after the search query. (ref. Design Search Suggestions, NN Group)

OBJ 1: Comprehensive and relevant

OBJ 2: Accessible & usable

Dynamic scope options

Depending on where the user accesses the global search from, the scope filter option changes dynamically. By default it is set to “All”, and users can choose to apply a scope if they want one. Adding a scope is an intentional action, to avoid error and confusion. (ref. “Scoped Search: Dangerous, but Sometimes Useful” by Katie Sherwin, NN Group)

If the user chooses to scope the search, scope pills are added to the search bar to make the user explicitly aware that the scope is applied, further avoiding error or confusion.

OBJ 1: Comprehensive and relevant

OBJ 2: Accessible & usable

OBJ 3: Integrated

Scope suggestions

There are three main things a user can search in Tessact: title search for the library, title search for projects, and smart search for in-video content (where Tessact generates clips from existing content that match the typed search query). Thus, in the default view of the search modal, which appears when the user clicks the search field in the top bar, we allow users to scope their search along these three parameters.

OBJ 1: Comprehensive and relevant

OBJ 2: Accessible & usable

OBJ 3: Integrated

If scoped to “In-smart”

When a user narrows their search scope to “In-smart” from within the “Hello project” interface and enters a query, the AI analyses the in-video content to retrieve relevant results for the search term. This removes the dependency on user memory for retrieving content and enables users to gather content precisely matching their requirements, favouring recognition over recall. It greatly reduces time and increases efficiency. At the end of the list, the system also shows a suggested list based on a similar, altered query.

OBJ 1: Comprehensive and relevant

OBJ 2: Accessible & usable

OBJ 3: Integrated

Finding the right content

Users can scrub through the thumbnails for a quick preview of the content, or select a clip and play it in the preview panel. They can trim the clip to their preference and add it to the selection. Alternatively, they can directly multi-select clips.

Depending on where the user accesses search from, the primary CTA is contextual. If the user accesses search from the Library, Project page, or Reviewer, the primary CTA is “Add to project”; if they access it from the editor, the primary CTA is “Add to timeline” and the secondary is “Add to project”.

OBJ 1: Comprehensive and relevant

OBJ 2: Accessible & usable

OBJ 3: Integrated

If scoped to “In-library”

When a user narrows their search scope to “In-library”, the scope is set to “All” by default. If they wish, they can scope it down to the part of the library they accessed search from.

For example, the user in the image below has accessed search from the Series section of the library.

If the user chooses to further scope their search down to “Series”, the system populates all files and folders inside Series. The user can then choose to add more filters or search within the “Series” category.

If the user selects a folder, we open it at its source location in the library.

If the user clicks on any of the files, the preview panel opens and the user can take the relevant action to finish their flow.

OBJ 1: Comprehensive and relevant

OBJ 2: Accessible & usable

Drag and drop from search to anywhere!

To help users swiftly and seamlessly transition from search results to their intended destinations and continue their workflow, we decided to let users drag and drop content from the search modal into different parts of the product.

To let users know that a particular item is draggable, we added visual cues: a drop shadow on hover, changing the cursor from pointer to grab, and tilting the card to convey its tactile nature.

Users could drop items wherever they were in the product, with the drop zone defined in that particular part of the product. They could drop into the Project page, editor panel, editor timeline, and Teams page, and, if they had access, into folders on the Library page. Other parts of the product cannot be shown here, as they are not yet in production.

OBJ 1: Comprehensive and relevant

OBJ 2: Accessible & usable

Usability testing

I tested the product at various stages of the project.
  • Lo-fi prototypes were tested with stakeholders weekly to get feedback on the functionality, feasibility, and interactivity of the product.
  • Moderated user testing: a screener was used to identify participants to test the feature. All participants used the app to carry out hypothetical tasks.

Impact

  • Smart search reduced content retrieval time by 75% and increased content accuracy by 65%.

  • Smart search enabled us to secure POCs with 4 new clients.

Learnings

  • Iteration is the key to all your problems

  • Every UI decision has infinite possibilities and following existing UI patterns helps make things more usable.

  • Looking at the bigger picture while making smaller decisions, to see how each one impacts the whole.

Made with ❤️ in namma Blr