Fix async peer LLM token lookup, duplicate token bug, and improve async message view#1162

Merged
bnmnetp merged 8 commits into RunestoneInteractive:main from sethbern:main
Feb 26, 2026

Conversation

@sethbern
Contributor

This PR fixes a few issues with the async peer LLM feature and cleans up the instructor view.

There was a duplicate token issue on the add token screen: the custom provider name input happened to share the token-input CSS class with the actual key field, so the submit handler collected it as a second token every time. Renaming the class fixes this.
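The class-sharing bug can be sketched like this (a minimal stand-in for the DOM collection; the class names, field values, and `collectTokens` helper are illustrative assumptions, not the actual peer.js code):

```javascript
// Hypothetical reconstruction of the bug -- not the actual peer.js code.
// The provider-name field shared the "token-input" class with the key
// field, so collecting values by class picked up two "tokens".
function collectTokens(inputs) {
  // inputs: plain objects standing in for DOM <input> elements
  return inputs
    .filter((el) => el.className === "token-input")
    .map((el) => el.value);
}

// Before the fix: both fields match the shared class.
const before = [
  { className: "token-input", value: "My Custom Provider" }, // provider name
  { className: "token-input", value: "sk-abc123" },          // the actual key
];
console.log(collectTokens(before).length); // 2 -- duplicate token submitted

// After the fix: the provider-name field gets its own class.
const after = [
  { className: "provider-name-input", value: "My Custom Provider" },
  { className: "token-input", value: "sk-abc123" },
];
console.log(collectTokens(after)); // only the key
```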

In async peer with no LLM, all messages from a student now show up in order instead of just their last one. The LLM prompt has also been updated.
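A minimal sketch of the ordering fix (field names and the helper are assumptions; the real view presumably reads messages from the database):

```javascript
// Hypothetical sketch of the message-view fix. Previously only each
// student's latest message survived (e.g. a per-student overwrite);
// the fix keeps every message and renders them in timestamp order.
function orderedMessages(messages) {
  return [...messages].sort((a, b) => a.ts - b.ts);
}

const msgs = [
  { student: "ada", text: "follow-up", ts: 3 },
  { student: "ada", text: "first thought", ts: 1 },
  { student: "bob", text: "reply", ts: 2 },
];
console.log(orderedMessages(msgs).map((m) => m.text));
// all three messages in order -- both of ada's, not just her last
```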

@sethbern sethbern requested a review from bnmnetp as a code owner February 25, 2026 15:52
@sethbern
Contributor Author

sethbern commented Feb 25, 2026

This also fixes issue #612

It fixes an issue where the dropdown menus on the course home and course navigation pages showed up as white text on a white background.

Member

@bnmnetp bnmnetp left a comment


How are you forcing the course home page to be in dark mode?

I can see the problem you describe on RST built book pages where you turn on dark mode, but I believe that we have variables defined to use with the different modes. RST books are deprecated but I'm happy to merge PRs for them by others.

I would prefer this be done as a separate PR from the LLM token lookup. It makes it easier to track and easier to back out a merge if needed.

@sethbern
Contributor Author

@bnmnetp the dark mode fix has been moved to its own PR, #1164. To answer your question: dark mode isn't being forced; it's user-activated via a toggle in the navbar that sets data-theme="dark" on the element. The styles just weren't covering that attribute for the dropdown menus.

@sethbern sethbern requested a review from bnmnetp February 26, 2026 14:50
@bnmnetp bnmnetp merged commit 03d0e52 into RunestoneInteractive:main Feb 26, 2026
1 check passed
@bnmnetp
Member

bnmnetp commented Mar 1, 2026

Thanks for the fix!!

I didn't expect that the custom option would be used (ever??!!) so I'm curious why you used this instead of openai?

@sethbern
Contributor Author

sethbern commented Mar 2, 2026

@bnmnetp Oh, that was just from testing with the University of Michigan GPT endpoint. Also, I noticed the async peer LLM feature isn't working on the deployed server, though it works fine for me locally. Is there something in my local setup that wouldn't be present on the deployed server that could explain this?

@bnmnetp
Member

bnmnetp commented Mar 2, 2026

The only obvious thing would be the API keys. Are you seeing any errors in the javascript console that might give a clue? Can you be more specific about what isn't working? Is it not even trying the LLM? Is communicating with the LLM failing? I'm not seeing any errors on the server side that indicate something is wrong.

@sethbern
Contributor Author

sethbern commented Mar 2, 2026

When I force window.PI_LLM_MODE = true in the console on the deployed server and trigger a chat, the get_async_llm_reflection endpoint returns {"ok": false, "error": "missing api key"}. On my local setup the same call returns the key correctly and the LLM responds fine. So the issue seems to be that the deployed server isn't finding the API key.
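For anyone reproducing this diagnosis, a tiny helper to classify that response shape (a sketch based only on the `{"ok": false, "error": "missing api key"}` body observed above; the function name is hypothetical):

```javascript
// Sketch of interpreting the endpoint's response body; the function
// name is hypothetical, the shapes are from the observed replies.
function classifyReflectionResponse(body) {
  if (body.ok) return "llm-ok";                           // key found, LLM replied
  if (body.error === "missing api key") return "no-key";  // lookup/decrypt failed
  return "other-error";
}

console.log(classifyReflectionResponse({ ok: false, error: "missing api key" })); // "no-key"
```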

@sethbern
Contributor Author

sethbern commented Mar 3, 2026

@bnmnetp I just tried on a separate device with a fresh Runestone build and the feature worked correctly there, so it seems like a version mismatch somewhere on the deployed server might be the culprit. Could something be out of sync between what's deployed and the latest merged code?

@bnmnetp
Member

bnmnetp commented Mar 3, 2026

I rebuilt and redeployed the runestone server from main, including your fix to peer.js, last evening, so it is 100% up to date with what is on the main branch in production.

What course are you testing with? I can look at the DB directly to make sure the keys are there.

@sethbern
Contributor Author

sethbern commented Mar 3, 2026

The course I'm testing with is test_py4e-int_api. Let me know what you find in the DB; I appreciate you looking into this.

@bnmnetp
Member

bnmnetp commented Mar 3, 2026

I see one api token for that course for the openai provider. It looks like it has never been used as the last_used column is null.

@sethbern
Contributor Author

sethbern commented Mar 3, 2026

Thanks for checking. Since the token exists but the endpoint still says “missing api key,” it seems like the issue might be happening during retrieval or decryption rather than the DB lookup itself.

I think the tokens are Fernet-encrypted, could there be an issue with the FERNET_SECRET config on the server?

@bnmnetp
Member

bnmnetp commented Mar 3, 2026

Yes. I can see log messages showing that you are finding the key. But since you are not using the fetch_api_token method, FERNET_SECRET needs to be defined for the runestone server.

I have a separate private docker-compose.yml file for production purposes. That will need to be updated to ensure that the runestone service can see that environment variable.
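A compose fragment along these lines could pass the variable through (a sketch only, since the actual production file is private; the service name and variable placement are assumptions beyond the FERNET_SECRET name mentioned above):

```yaml
# Hypothetical docker-compose fragment: expose FERNET_SECRET to the
# runestone service so token decryption can find it at runtime.
services:
  runestone:
    environment:
      - FERNET_SECRET=${FERNET_SECRET}
```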

This is another reason why it would be better for you to get me a PR with this stuff moved to the new PI implementation on the FastAPI servers. Then you can use our infrastructure instead of reinventing the wheel.

@sethbern
Contributor Author

sethbern commented Mar 3, 2026

Got it, thanks for tracking that down. Whenever you get a chance, would it be possible to add FERNET_SECRET to the runestone service? That would let me test the current version while I work on migrating the async PI implementation over to FastAPI. And I really appreciate your help and support on this!

@bnmnetp
Member

bnmnetp commented Mar 3, 2026

Yes, I added it to the production config. And I'll push that out and restart tonight sometime.

@bnmnetp
Member

bnmnetp commented Mar 4, 2026

OK, you should be good now.
