
Given many `words`, `words[i]` has weight `i`.

Design a class `WordFilter` that supports one function, `WordFilter.f(String prefix, String suffix)`. It will return the maximum weight of a word with the given `prefix` and `suffix`. If no such word exists, return -1.

**Examples:**

    Input: WordFilter(["apple"])
    WordFilter.f("a", "e") // returns 0
    WordFilter.f("b", "")  // returns -1

**Note:**

- `words` has length in range `[1, 15000]`.
- For each test case, up to `words.length` queries `WordFilter.f` may be made.
- `words[i]` has length in range `[1, 10]`.
- `prefix`, `suffix` have lengths in range `[0, 10]`.
- `words[i]` and the `prefix`, `suffix` queries consist of lowercase letters only.

#### Approach #1: Trie + Set Intersection [Time Limit Exceeded]

**Intuition and Algorithm**

We use two tries to separately find all words that match the prefix, plus all words that match the suffix. Then, we try to find the highest-weight element in the intersection of these sets.

Of course, these sets could still be large, so we might TLE if we aren't careful.
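A minimal Python sketch of this approach, using hash maps from each affix to the set of matching word indices in place of explicit trie nodes (the map-based shortcut is an assumption of this sketch, not necessarily the reference implementation):

```python
class WordFilter:
    def __init__(self, words):
        # Map every prefix and every suffix (including the empty
        # string) to the set of indices of words that have it.
        self.by_prefix = {}
        self.by_suffix = {}
        for index, word in enumerate(words):
            for i in range(len(word) + 1):
                self.by_prefix.setdefault(word[:i], set()).add(index)
                self.by_suffix.setdefault(word[i:], set()).add(index)

    def f(self, prefix, suffix):
        # Intersect the two candidate sets; the weight of a word is
        # its index, so the answer is the largest index in common.
        common = (self.by_prefix.get(prefix, set())
                  & self.by_suffix.get(suffix, set()))
        return max(common) if common else -1
```

The intersection can contain nearly every word (for example, when `prefix = suffix = ""`), which is why this approach exceeds the time limit on large inputs.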

**Complexity Analysis**

- Time Complexity: O(NK + Q(N+K)), where N is the number of words, K is the maximum length of a word, and Q is the number of queries. If we use memoization in our solution, we could produce tighter bounds for this complexity, as the complex queries are somewhat disjoint.

- Space Complexity: O(NK), the size of the tries.

#### Approach #2: Paired Trie [Accepted]

**Intuition and Algorithm**

Say we are inserting the word `apple`. We could insert `('a', 'e'), ('p', 'l'), ('p', 'p'), ('l', 'p'), ('e', 'a')` into our trie. Then, if we had equal-length queries like `prefix = "ap", suffix = "le"`, we could find the node `trie['a', 'e']['p', 'l']` in our trie. This seems promising.

What about queries whose prefix and suffix differ in length? We should just insert them like normal. For example, to capture a case like `prefix = "app", suffix = "e"`, we could create nodes `trie['a', 'e']['p', None]['p', None]`.

After inserting these pairs into our trie, our searches are straightforward.
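The pair insertion, including the `None`-padded branches for unequal-length queries, can be sketched in Python as follows (the nested-dict trie and the `'$'` weight-marker key are assumptions of this sketch):

```python
from itertools import zip_longest

WEIGHT = '$'  # marker key under which a node stores its best weight

class WordFilter:
    def __init__(self, words):
        self.trie = {}
        for weight, word in enumerate(words):
            cur = self.trie
            cur[WEIGHT] = weight
            for i in range(len(word)):
                # Branch for queries whose suffix is already exhausted:
                # the remaining prefix letters pair with None.
                tmp = cur
                for letter in word[i:]:
                    tmp = tmp.setdefault((letter, None), {})
                    tmp[WEIGHT] = weight
                # Branch for queries whose prefix is already exhausted.
                tmp = cur
                for letter in word[:len(word) - i][::-1]:
                    tmp = tmp.setdefault((None, letter), {})
                    tmp[WEIGHT] = weight
                # Main path: pair the i-th letter with the i-th-from-last.
                cur = cur.setdefault((word[i], word[~i]), {})
                cur[WEIGHT] = weight

    def f(self, prefix, suffix):
        # Walk pairs of (prefix letter, suffix letter from the back),
        # padding the shorter side with None.
        cur = self.trie
        for pair in zip_longest(prefix, reversed(suffix)):
            if pair not in cur:
                return -1
            cur = cur[pair]
        return cur[WEIGHT]
```

Because later words overwrite `WEIGHT` along shared paths, the stored value at any node is the maximum weight among words matching that path.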

**Complexity Analysis**

- Time Complexity: O(NK^2 + QK), where N is the number of words, K is the maximum length of a word, and Q is the number of queries.

- Space Complexity: O(NK^2), the size of the trie.

#### Approach #3: Trie of Suffix Wrapped Words [Accepted]

**Intuition and Algorithm**

Consider the word `'apple'`. For each suffix of the word, we could insert that suffix, followed by `'#'`, followed by the word, all into the trie.

For example, we will insert `'#apple', 'e#apple', 'le#apple', 'ple#apple', 'pple#apple', 'apple#apple'` into the trie. Then for a query like `prefix = "ap", suffix = "le"`, we can find it by querying our trie for `le#ap`.
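A sketch of this wrapping scheme in Python, again with a nested-dict trie and a `'$'` weight-marker key (both assumptions of this sketch):

```python
WEIGHT = '$'  # marker key under which a node stores its best weight

class WordFilter:
    def __init__(self, words):
        self.trie = {}
        for weight, word in enumerate(words):
            # Insert every "suffix#word" string, e.g. for "apple":
            # "#apple", "e#apple", ..., "apple#apple".
            for i in range(len(word) + 1):
                cur = self.trie
                cur[WEIGHT] = weight
                for ch in word[i:] + '#' + word:
                    cur = cur.setdefault(ch, {})
                    cur[WEIGHT] = weight

    def f(self, prefix, suffix):
        # A matching word was inserted as "suffix#word", so the path
        # "suffix#prefix" exists in the trie exactly when some word
        # has that suffix and that prefix.
        cur = self.trie
        for ch in suffix + '#' + prefix:
            if ch not in cur:
                return -1
            cur = cur[ch]
        return cur[WEIGHT]
```

Since words are inserted in order of increasing weight, the `WEIGHT` stored at each node ends up being the largest weight among matching words.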

**Complexity Analysis**

- Time Complexity: O(NK^2 + QK), where N is the number of words, K is the maximum length of a word, and Q is the number of queries.

- Space Complexity: O(NK^2), the size of the trie.

Analysis written by: @awice.
