If you’re given a dictionary of words and an input string, find out whether the input string can be segmented into dictionary words.
n = length of input string
for i = 0 to n - 1:
  first_word = substring of input string from index 0 to i (inclusive)
  second_word = substring of input string from index i + 1 to n - 1
  if dictionary has first_word:
    if second_word is in dictionary OR second_word has zero length, return true
    recursively call this method with second_word as input; if it can be segmented, return true
return false
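The pseudocode above can be sketched in Python as follows (here the dictionary is assumed to be a set of words):

```python
def can_segment(s, dictionary):
    n = len(s)
    for i in range(n):
        first_word = s[:i + 1]   # characters 0..i
        second_word = s[i + 1:]  # characters i+1..n-1
        if first_word in dictionary:
            # The remainder is empty, a dictionary word, or segmentable itself.
            if len(second_word) == 0 or second_word in dictionary:
                return True
            if can_segment(second_word, dictionary):
                return True
    return False
```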
The algorithm computes two substrings from scratch in every iteration of the loop. In the worst case, there is a recursive call on second_word at every step, which drives the time complexity up to O(2^n).
Notice that the same substring may be computed multiple times, even when it cannot be segmented. This can be fixed with memoization, which remembers the substrings that have already been solved.
To achieve memoization, store each second_word that fails to segment in a set, and check that set before recursing. This decreases the time complexity at the cost of some extra memory.
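A minimal sketch of this memoized version, where a `failed` set remembers suffixes already shown to be unsegmentable so each one is solved at most once:

```python
def can_segment_memo(s, dictionary, failed=None):
    # 'failed' caches suffixes that are known not to segment.
    if failed is None:
        failed = set()
    if s in failed:
        return False
    for i in range(len(s)):
        if s[:i + 1] in dictionary:
            rest = s[i + 1:]
            if len(rest) == 0 or rest in dictionary:
                return True
            if can_segment_memo(rest, dictionary, failed):
                return True
    failed.add(s)  # remember the failure so it is never re-solved
    return False
```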
Least Recently Used (LRU) is a well-known caching strategy. It defines the policy for evicting elements from the cache to make room for new elements when the cache is full: the least recently used item is removed first.
class LRUCache:
  def __init__(self, capacity):
    self.capacity = capacity
    self.cache = set()
    self.cache_vals = LinkedList()
  def set(self, value):
    node = self.get(value)
    if node is None:
      if self.cache_vals.size >= self.capacity:
        # Evict the least recently used value (the head) before inserting.
        self.cache.remove(self.cache_vals.get_head().data)
        self.cache_vals.remove_head()
      self.cache_vals.insert_at_tail(value)
      self.cache.add(value)
    else:
      # Move an existing value to the tail (most recently used position).
      self.cache_vals.remove(value)
      self.cache_vals.insert_at_tail(value)
  def get(self, value):
    if value not in self.cache:
      return None
    node = self.cache_vals.get_head()
    while node is not None:
      if node.data == value:
        return node
      node = node.next
    return None
  def printcache(self):
    node = self.cache_vals.get_head()
    while node is not None:
      print(str(node.data) + ", ", end="")
      node = node.next
To implement an LRU cache efficiently, we can use two data structures: a hashmap and a doubly linked list. The hashmap gives O(1) lookup of cached keys, and the doubly linked list maintains the eviction order.
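One way to sketch this combination in Python is with `collections.OrderedDict`, which internally pairs a hashmap with a doubly linked list, giving O(1) lookup and O(1) reordering (the class name here is illustrative):

```python
from collections import OrderedDict

class LRUCacheFast:
    """LRU cache: OrderedDict keeps keys in a doubly linked list
    (eviction order) while providing O(1) hashmap lookup."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()

    def get(self, key):
        if key not in self.cache:
            return None
        self.cache.move_to_end(key)  # mark as most recently used
        return self.cache[key]

    def set(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
```

Every operation is O(1), unlike the linear scan in the version above.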
Written test: Aptitude (objective), 50 min, 45 questions
- LR: 2 passages, 5 questions each
- DI: 1 paragraph, 5 questions each
- Mathematical
The questions were easy; time is the only concern for those who are out of touch. My suggestion would be to do the LR section at the end.
Examples: there is one FB user in India and another in the US. When they communicate, do they connect to the same server? If not, how does the communication happen? What data is being transferred? This round felt like a stress test and went very badly. I couldn't ask any good questions when he offered. (My suggestion: answer only if you know and are sure about it; otherwise, don't even try.)