Really love this idea! The "distraction is contextual" part really resonates with me, and I feel it's time for an upgrade over URL-blocklist-based focus apps. Installed the extension and it works well. Wondering what your AI setup for analyzing the webpages looks like? Assuming you're using LLMs?
thanks! I'm currently using Anthropic's API with Claude Sonnet. I also tested GPT-4o, but felt Claude was better at handling edge cases with certain focus contexts. Moving forward, I'd likely switch to something cheaper, but for now this works for this phase.
i use StayFocusd, but I would definitely switch to this. I noticed you use Anthropic's API to analyze the webpage data, and the privacy part is a bit iffy for me since I often view a lot of sensitive data. Having analysis blocked on some websites helps, and good on you for addressing this, but it doesn't seem like a viable long-term solution. Have you tried making this work with local LLMs to avoid these privacy issues?
totally agree that local LLMs would make this a lot better privacy-wise. I actually started working on this with local LLMs when some models got small enough to run in the browser. I used Candle + WASM to test SmolLM and Phi, and also tested Gemini Nano in Chrome. Unfortunately, their performance on this task was abysmal. I couldn't even get them to consistently output structured JSON that I could parse.
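For what it's worth, the usual workaround for small models that wrap their JSON in prose or markdown fences is to extract and validate it defensively instead of parsing the raw output. A minimal sketch (the `FocusVerdict` shape and field names here are hypothetical, not the extension's actual schema):

```typescript
// Hypothetical response schema for a "is this page relevant to my focus?" check.
interface FocusVerdict {
  relevant: boolean;
  reason: string;
}

// Small models often emit chatter or markdown fences around their JSON.
// Pull out the outermost {...} span and validate before trusting it.
function extractJson(raw: string): FocusVerdict | null {
  // Strip any markdown code fences the model wrapped around the answer.
  const cleaned = raw.replace(/`{3}(?:json)?/g, "");
  const start = cleaned.indexOf("{");
  const end = cleaned.lastIndexOf("}");
  if (start === -1 || end <= start) return null;
  try {
    const parsed = JSON.parse(cleaned.slice(start, end + 1));
    // Only accept output that has the fields we actually rely on.
    if (typeof parsed.relevant === "boolean" && typeof parsed.reason === "string") {
      return parsed as FocusVerdict;
    }
    return null;
  } catch {
    return null; // malformed JSON → treat as "no verdict" rather than crash
  }
}
```

This doesn't fix a model that refuses to produce JSON at all, but it turns "occasionally valid output" into something you can at least fail closed on.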
one thing I haven't tested so far is setups like Ollama, where the LLM runs outside the browser but is accessible via localhost. Larger local models like Llama would probably be good enough to actually parse the webpages. That requires some setup on the user's side, though, plus minimum hardware requirements.
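If anyone wants to experiment with that route: Ollama exposes a REST endpoint on `localhost:11434`, and its `/api/generate` call accepts `format: "json"` to constrain the model to valid JSON, which sidesteps the parsing problem above. A rough sketch of how the extension could call it (model name, prompt wording, and the `relevant`/`reason` schema are all assumptions for illustration):

```typescript
// Request shape for Ollama's /api/generate endpoint.
interface OllamaRequest {
  model: string;
  prompt: string;
  format: "json"; // ask Ollama to constrain decoding to valid JSON
  stream: false;  // single complete response instead of a token stream
}

function buildRequest(pageText: string, focusContext: string): OllamaRequest {
  return {
    model: "llama3.1", // whatever model the user has pulled locally
    prompt:
      `Focus context: ${focusContext}\n` +
      `Page excerpt: ${pageText.slice(0, 4000)}\n` + // cap input length
      `Reply as JSON: {"relevant": boolean, "reason": string}`,
    format: "json",
    stream: false,
  };
}

async function classifyPage(pageText: string, focusContext: string) {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildRequest(pageText, focusContext)),
  });
  const data = await res.json();
  // Ollama puts the model's text in the "response" field.
  return JSON.parse(data.response) as { relevant: boolean; reason: string };
}
```

Note that a Chrome extension would also need host permission for `http://localhost:11434/*` in its manifest for the fetch to go through.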