perf(analyze): Targeted caching optimization #574
Open
Lakitna wants to merge 3 commits into robotcodedev:main
Conversation
After our discussion at Robocon I was curious what takes the most time in Robotcode. During profiling I came across 2 situations that significantly slowed down code analysis in my ~600 file project.

1. `same_file()` took significant time. After slapping a `functools.lru_cache` on it, my runtime went down by about 8 minutes!

   from: `Files: 650, Errors: 1403, Warnings: 13, Infos: 552, Hints: 2709 (in 11m 0.14s)`

   to: `Files: 650, Errors: 1403, Warnings: 13, Infos: 552, Hints: 2709 (in 2m 55.07s)`

2. `search_variable()` wasn't as big a time sink, but I knew that this project contains a lot of global variables. According to `robotunused` it contains 31102 non-local variables, so the old cache limit of 1024 seemed small to me. After upping it significantly to 100,000, it runs 37 seconds faster.

   from: `Files: 650, Errors: 1403, Warnings: 13, Infos: 552, Hints: 2709 (in 2m 55.07s)`

   to: `Files: 650, Errors: 1403, Warnings: 13, Infos: 552, Hints: 2709 (in 2m 18.07s)`

These are some nice time savings for such small changes.
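The two changes can be sketched roughly like this. Note that the function names match the PR but the bodies are illustrative stand-ins, not the actual Robotcode implementation:

```python
from functools import lru_cache
from pathlib import Path


# Hypothetical stand-in for same_file(): resolving and comparing paths is
# relatively expensive, but the answer is stable for a given pair of
# arguments, which makes it a good candidate for lru_cache.
@lru_cache(maxsize=None)
def same_file(path1: str, path2: str) -> bool:
    return Path(path1).resolve() == Path(path2).resolve()


# Hypothetical stand-in for the search_variable() cache: with ~31k
# non-local variables in the project, the old maxsize of 1024 would evict
# entries constantly, so the limit is raised well above the number of
# distinct keys.
@lru_cache(maxsize=100_000)
def search_variable(name: str) -> str:
    # ...the expensive lookup is elided; simple normalization stands in
    return name.strip("${}").lower()
```

`lru_cache` also exposes `cache_info()`, which reports hits, misses, and current size, so the effectiveness of both caches can be checked directly while profiling.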
@d-biehl I did consider memory footprint too. Do you think it makes sense to add a cache size config option? Or maybe replace `lru_cache` with something more sophisticated and more configurable?
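One way a configurable size could look (a sketch, assuming the cache can be applied at startup after configuration is read; `VARIABLE_CACHE_SIZE` and the function body are hypothetical): since `lru_cache` fixes `maxsize` at decoration time, calling it as a plain function instead of using `@`-syntax lets the limit come from config.

```python
from functools import lru_cache

# Hypothetical config value; in practice this could come from a settings
# file or an environment variable.
VARIABLE_CACHE_SIZE = 100_000


def _search_variable_uncached(name: str) -> str:
    # ...the expensive lookup is elided; simple normalization stands in
    return name.strip("${}").lower()


# Apply the cache at startup with the configured size instead of
# hard-coding maxsize in a decorator.
search_variable = lru_cache(maxsize=VARIABLE_CACHE_SIZE)(_search_variable_uncached)
```

A more sophisticated alternative would be swapping in a third-party cache with eviction policies and runtime resizing, at the cost of an extra dependency.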