
Performance issue and crashes during insert #18

@talolard

Description


We currently store a by-the-book postings list in IndexedDB.

A postings list is a map from tokens to the list of documents that contain them.
The problem we are hitting is that those document lists are very large, which in turn causes two problems:

  1. It's slow to insert new postings.
  2. The list for a single term might be so big that we hit the IPC message size limit. Chrome uses IPC to send data from the page to IndexedDB, and there is a limit on how big a message can be.
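To make the second problem concrete, here is a rough sketch of the data shape and a back-of-the-envelope size estimate. The names (`PostingsList`, `estimateRecordBytes`) are illustrative, not from our codebase, and the JSON-length estimate is only a proxy for whatever serialization Chrome actually uses over IPC:

```typescript
// Illustrative shape of a by-the-book postings list:
// token -> ids of every document containing that token.
type PostingsList = Map<string, number[]>;

// Rough proxy for the serialized size of one term's record,
// to gauge how close a common term gets to an IPC message cap.
function estimateRecordBytes(term: string, docIds: number[]): number {
  return JSON.stringify({ term, docIds }).length;
}

// A stopword-like term appearing in a million documents:
const docs = Array.from({ length: 1_000_000 }, (_, i) => i);
console.log(estimateRecordBytes("the", docs)); // several megabytes for one record
```

A single hot term can therefore dominate a batch on its own, regardless of how few records the batch contains.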

For 2, I reduced the batch size we use on insertion, but that makes 1 worse.
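One alternative to shrinking the batch by record count is to split by estimated bytes, so each write payload stays under a budget regardless of how skewed the lists are. A minimal sketch, assuming a `budgetBytes` parameter that would have to be tuned against Chrome's actual limit (the function name and budget are hypothetical):

```typescript
// Split one term's doc-id list into chunks whose JSON-serialized size
// stays under a byte budget, so each write payload fits in one message.
function chunkByBudget(docIds: number[], budgetBytes: number): number[][] {
  const chunks: number[][] = [];
  let current: number[] = [];
  let size = 2; // accounts for the "[]" brackets
  for (const id of docIds) {
    const idSize = String(id).length + 1; // digits plus separator
    if (current.length > 0 && size + idSize > budgetBytes) {
      chunks.push(current);
      current = [];
      size = 2;
    }
    current.push(id);
    size += idSize;
  }
  if (current.length > 0) chunks.push(current);
  return chunks;
}
```

This keeps the common case (short lists) in one chunk while only the pathological terms pay the splitting cost.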

I don't know how to solve this. Maybe partition the postings lists themselves, so that each term maps to a set of partitions (tables?), and we search each partition separately?
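One way to read that idea, sketched below under assumptions: store each term as several records keyed `term#0`, `term#1`, ... with a bounded number of doc ids per record. The cap and the key scheme are hypothetical, not a decision:

```typescript
// Hypothetical partitioned keying: each term's postings become several
// small records instead of one huge one.
const MAX_PER_PARTITION = 10_000; // illustrative cap, would need tuning

function partitionPostings(
  term: string,
  docIds: number[],
): { key: string; docIds: number[] }[] {
  const records: { key: string; docIds: number[] }[] = [];
  for (let i = 0; i * MAX_PER_PARTITION < docIds.length; i++) {
    records.push({
      key: `${term}#${i}`,
      docIds: docIds.slice(i * MAX_PER_PARTITION, (i + 1) * MAX_PER_PARTITION),
    });
  }
  return records;
}
```

On the read path, the partitions for a term could be collected with a cursor over `IDBKeyRange.bound(term + "#", term + "#\uffff")` and merged, so lookups stay a single range scan rather than one query per partition.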

That might not solve it, as the overall size of a transaction would still be large. We need to check when data is actually sent over IPC, e.g. during each call to add or when the transaction commits.
