Fix async peer LLM token lookup, duplicate token bug, and improve async message view #1162
Conversation
…, and fixed the LLM peer lookup so it can actually retrieve API keys at call time
fix dark mode for dropdown menu in course home and for chapter naviga…
This also fixes issue #612, where the dropdown menu on the course home and course navigation pages rendered as white text on a white background.
bnmnetp
left a comment
How are you forcing the course home page to be in dark mode?
I can see the problem you describe on RST-built book pages when you turn on dark mode, but I believe we have variables defined for the different modes. RST books are deprecated, but I'm happy to merge PRs for them from others.
I would prefer this be done as a separate PR from the LLM token lookup. It makes it easier to track and easier to back out a merge if needed.
…r navigation" This reverts commit eca2ee1.
Thanks for the fix!! I didn't expect that the custom option would be used (ever??!!), so I'm curious why you used this instead of openai?
@bnmnetp Oh, that was just from testing with the University of Michigan GPT endpoint. I also noticed the async peer LLM feature isn't working on the deployed server, though it works fine locally for me. Is there something in my local setup that wouldn't be present on the deployed server that could explain this?
The only obvious thing would be the API keys. Are you seeing any errors in the JavaScript console that might give a clue? Can you be more specific about what isn't working? Is it not even trying the LLM? Is communicating with the LLM failing? I'm not seeing any errors on the server side that indicate something is wrong.
When I force …
@bnmnetp I just tried on a separate device with a fresh Runestone build and the feature worked correctly there, so it seems like a version mismatch somewhere on the deployed server might be the culprit. Could something be out of sync between what's deployed and the latest merged code?
I rebuilt and redeployed the Runestone server from main, with your fix to peer.js, last evening. So production is 100% up to date with what is on the main branch. What course are you testing with? I can look at the DB directly to make sure the keys are there.
The course I'm testing with is …
I see one API token for that course for the openai provider. It looks like it has never been used, as the last_used column is null.
Thanks for checking. Since the token exists but the endpoint still says “missing api key,” it seems like the issue is happening during retrieval or decryption rather than the DB lookup itself. I think the tokens are Fernet-encrypted; could there be an issue with the …
Yes, I can see log messages that you are finding the key. But since you are not using the … I have a separate private … This is another reason why it would be better for you to get me a PR with this stuff moved to the new PI implementation on the FastAPI servers. Then you can use our infrastructure instead of reinventing the wheel.
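A minimal sketch of the failure mode being discussed, assuming the tokens are Fernet-encrypted with a secret read from server config (the name FERNET_SECRET comes from this thread; `tokens_table`, `decrypt`, and `lookup_api_key` are hypothetical stand-ins for the real DB table and Fernet decryption call):

```python
import os

def lookup_api_key(tokens_table, provider, decrypt):
    """Return a usable API key, or None when lookup or decryption fails.

    tokens_table: {provider: encrypted_token}, a stand-in for the DB table.
    decrypt: callable(encrypted, secret) -> plaintext, e.g. a Fernet decrypt.
    """
    encrypted = tokens_table.get(provider)    # DB lookup: the row can exist...
    secret = os.environ.get("FERNET_SECRET")  # ...while the secret is absent
    if encrypted is None or secret is None:
        # Either failure surfaces to the caller as "missing api key",
        # so a present DB row does not rule out a missing FERNET_SECRET.
        return None
    return decrypt(encrypted, secret)
```

Under this reading, the server log showing the key was found and the “missing api key” response are not contradictory: the secret check sits between the two.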
Got it, thanks for tracking that down. Whenever you get a chance, would it be possible to add FERNET_SECRET to the runestone service? That would let me test the current version while I work on migrating the async PI implementation over to FastAPI. And I really appreciate your help and support on this!
Yes, I added it to the production config. And I'll push that out and restart tonight sometime. |
OK, you should be good now. |
This PR fixes a few issues with the async peer LLM feature and cleans up the instructor view.
There was a duplicate token bug on the add-token screen: the custom provider name input happened to share the token-input CSS class with the actual key field, so the submit handler collected it as a second token every time. Renaming the class fixes this. Also fixed the async peer view so that, when there is no LLM, all of a student's messages show up in order instead of just their last one, and updated the LLM prompt.
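The duplicate-token bug can be sketched abstractly (the real handler lives in peer.js; the field values and the collect_tokens helper here are hypothetical):

```python
def collect_tokens(form_fields):
    """Gather the value of every field carrying the token CSS class.

    form_fields: list of (css_class, value) pairs standing in for DOM inputs.
    """
    return [value for css_class, value in form_fields
            if css_class == "token-input"]

# Before the fix: the custom provider-name input shared the class,
# so its value was collected as a second "token" on every submit.
buggy = [("token-input", "umich-gpt"), ("token-input", "sk-secret")]

# After the rename, only the real key field matches the selector.
fixed = [("provider-name-input", "umich-gpt"), ("token-input", "sk-secret")]
```

The same reasoning applies to any class-based selector: renaming the provider-name input's class narrows the match back to the single key field.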
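The message-ordering fix follows a common pattern; a hedged sketch, assuming rows carry a student id, a timestamp, and the message text (the row shape and function name are assumptions, not the actual implementation):

```python
from collections import defaultdict

def messages_by_student(rows):
    """Collect every message per student in timestamp order.

    rows: iterable of (student_id, timestamp, text) tuples (assumed shape).
    Assigning out[sid] = text would keep only each student's last message;
    appending to a list preserves the whole exchange.
    """
    out = defaultdict(list)
    for sid, ts, text in sorted(rows, key=lambda r: r[1]):
        out[sid].append(text)
    return dict(out)
```

Usage: for rows arriving out of order, such as [("amy", 2, "second"), ("amy", 1, "first"), ("bob", 3, "only")], this yields {"amy": ["first", "second"], "bob": ["only"]} rather than only the last message per student.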