@microsoft-github-policy-service agree company="Microsoft"
```python
def reset(self):
    """Reset monitor state for a new problem without reloading the model."""
    self.entropy = []
    self.ema_means = []
    self.ema_vars = []
    self.exit_point = None
    gc.collect()
    try:
        torch.cuda.empty_cache()
    except Exception as e:
        print("Error while emptying cuda cache:", e)
```
This function isn't used anymore, right? If so, we can remove it.
```python
def reset(self):
    """Reset monitor state for a new problem."""
    self.confidence = []
    gc.collect()
    try:
        torch.cuda.empty_cache()
    except Exception as e:
        print("Error while emptying cuda cache:", e)
```
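Both `reset()` methods above repeat the same cleanup tail (`gc.collect()` plus a guarded `torch.cuda.empty_cache()`). A minimal sketch of factoring that into a shared helper, assuming a hypothetical `EntropyMonitor` class and helper name `_free_cached_memory` (neither is from this PR); the helper also degrades gracefully when torch is not installed:

```python
import gc


def _free_cached_memory():
    """Shared cleanup for every monitor's reset(): collect Python garbage
    and, when torch with CUDA is available, empty the CUDA cache."""
    gc.collect()
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
    except ImportError:
        pass  # torch not installed; nothing GPU-side to free


class EntropyMonitor:
    """Hypothetical monitor mirroring the fields in the diff above."""

    def __init__(self):
        self.entropy = []
        self.ema_means = []
        self.ema_vars = []
        self.exit_point = None

    def reset(self):
        """Reset monitor state for a new problem without reloading the model."""
        self.entropy = []
        self.ema_means = []
        self.ema_vars = []
        self.exit_point = None
        _free_cached_memory()
```

Checking `torch.cuda.is_available()` first avoids relying on the broad `except Exception` in the original to swallow the no-GPU case.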
What is the latency difference after adding the lock across all the monitors?
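One way to answer that empirically is a micro-benchmark comparing a guarded and an unguarded update path. This is a generic sketch, not the PR's monitor code: `update_locked`/`update_unlocked` and the list-append workload are stand-ins for whatever the monitors do per step, so only the relative overhead of `threading.Lock` is meaningful here.

```python
import threading
import time

lock = threading.Lock()


def update_unlocked(state, value):
    """Baseline: record a value with no synchronization."""
    state.append(value)


def update_locked(state, value):
    """Same update, but serialized through a shared lock."""
    with lock:
        state.append(value)


def time_calls(fn, n=100_000):
    """Average per-call wall time of fn over n calls, in seconds."""
    state = []
    start = time.perf_counter()
    for i in range(n):
        fn(state, i)
    return (time.perf_counter() - start) / n


unlocked = time_calls(update_unlocked)
locked = time_calls(update_locked)
print(f"per-call lock overhead: {(locked - unlocked) * 1e9:.0f} ns")
```

On an uncontended lock the per-call cost is typically well under a microsecond; contention across many monitors updating concurrently is what would actually move the latency, so a follow-up run with multiple threads hammering `update_locked` would be the more telling number.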