Show HN: Odozi – open-source iOS journaling app https://ift.tt/0g6TSyF

Show HN: Odozi – open-source iOS journaling app Yeah, I know, I hate the name too, but I wasn't about to pay up for odyssey.app. It's an open-source project, so feel free to poke around with it / fork it. I talk about it more on the marketing website, but a few of us have been using it for the past month and it's been kind of fun. Obviously there will be a slew of issues / feedback / nits that come from this, but c'est la vie. GH is here: https://ift.tt/ABm1ych https://odozi.app April 25, 2026 at 05:52AM

Show HN: Quay – Menu-bar Git sync https://ift.tt/WoIrsLa

Show HN: Quay – Menu-bar Git sync I write Astro blog posts in a text editor; when I'm done I want them pushed to GitHub so Cloudflare deploys the site. To make that comfortable, I built Quay for the menu bar. It's also useful for Obsidian vault syncing. Point it at a folder, connect a GitHub repo, and it stages/commits/pushes/pulls. Multiple repos, editable commit messages, branch switching, merges with conflict detection. Shows open issue and PR counts per repo. But it's not a full Git client (no diffs, blame, cherry-pick, or rebase) and it doesn't create remote repos. Native macOS app (Swift/SwiftUI). Wraps the local git binary (prompts to install Xcode Command Line Tools if missing). No custom Git implementation. Sandboxed, no telemetry, GitHub-only. macOS. 7-day trial, €9 one-time on the App Store. https://ift.tt/ARmFyOz April 25, 2026 at 08:23AM
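Quay itself is a native Swift/SwiftUI app, but the sync cycle it describes (stage, commit, pull, push against the local git binary) can be sketched roughly. A minimal illustrative sketch in Python, not Quay's actual code; the function names and default message are made up:

```python
import subprocess

def git_sync_commands(repo: str, message: str = "Sync"):
    """Build the git CLI invocations for one stage/commit/pull/push cycle.

    Illustrative only: Quay wraps the local git binary like this, but its
    real (Swift) implementation also handles branches, conflict detection,
    and editable commit messages.
    """
    base = ["git", "-C", repo]
    return [
        base + ["add", "-A"],              # stage all changes
        base + ["commit", "-m", message],  # no-op if nothing is staged
        base + ["pull"],                   # merge in remote changes
        base + ["push"],                   # publish local commits
    ]

def git_sync(repo: str, message: str = "Sync") -> None:
    for cmd in git_sync_commands(repo, message):
        subprocess.run(cmd, check=False)   # a real app would inspect results
```

A menu-bar app would run this on a timer or on demand, surfacing any non-zero exit codes (e.g. merge conflicts) to the user instead of ignoring them.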

Show HN: SherifDB, a database written in Golang under 500 LOC https://ift.tt/8EozZTa

Show HN: SherifDB, a database written in Golang under 500 LOC https://emmanuel326.github.io/blogs/sheriffdb.html April 25, 2026 at 04:42AM

Show HN: WhiskeySour – A 10x faster drop-in replacement for BeautifulSoup https://ift.tt/SHioP71

Show HN: WhiskeySour – A 10x faster drop-in replacement for BeautifulSoup The Problem: I’ve been using BeautifulSoup for some time. It’s the standard for ease of use in Python scraping, but it almost always becomes the performance bottleneck when processing large-scale datasets. Parsing complex or massive HTML trees in Python typically suffers from high memory-allocation costs and the overhead of the Python object model during tree traversal. In my production scraping workloads, the parser was consuming more CPU cycles than the network I/O. lxml is fast, but it also uses a lot of memory on large documents and can have trouble with malformed HTML.

The Solution: I wanted to keep the API compatibility that makes BS4 great but eliminate the overhead that slows down high-volume pipelines. That’s why I built WhiskeySour, which uses html5ever under the hood. And yes… I *vibe coded the whole thing*. WhiskeySour is a drop-in replacement: you should be able to swap "from bs4 import BeautifulSoup" for "from whiskeysour import WhiskeySour" and see immediate speedups. Workflows that used to take more than 30 minutes might take less than 5 now.

I have shared the detailed architecture of the library here: https://the-pro.github.io/whiskeySour/architecture/ Here is the benchmark report against bs4 with html.parser: https://the-pro.github.io/whiskeySour/bench-report/ Here is the link to the repo: https://ift.tt/nCf50DS

Why I’m sharing this: I’m looking for feedback from the community on two fronts: 1. Edge cases: if you have particularly messy or malformed HTML that BS4 handles well, I’d love to know whether WhiskeySour encounters any regressions. 2. Benchmarks: if you are running high-volume parsers, I’d appreciate it if you could run a test on your own datasets and share the results. April 25, 2026 at 04:23AM
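The advertised migration is a one-line import swap. A hedged sketch of how a pipeline might prefer WhiskeySour when installed and fall back otherwise; the `whiskeysour` module name comes from the post's import line, and whether the rest of its API mirrors bs4's is exactly what the author is asking people to test:

```python
import importlib.util

def pick_parser_module() -> str:
    """Prefer whiskeysour if installed, else bs4, else stdlib html.parser.

    Only the module names are taken from the post; this fallback chain is
    an illustrative pattern, not part of WhiskeySour itself.
    """
    for name in ("whiskeysour", "bs4"):
        if importlib.util.find_spec(name) is not None:
            return name
    return "html.parser"  # stdlib last resort, with no soup-style API

# The swap the post describes, at the call site:
#   from bs4 import BeautifulSoup          # before
#   from whiskeysour import WhiskeySour    # after
```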

Show HN: Browser Harness – simplest way to give an AI control of a real browser https://ift.tt/Q5ClUYP

Show HN: Browser Harness – simplest way to give an AI control of a real browser Hey HN, We got tired of browser frameworks restricting the LLM, so we removed the framework and gave the LLM maximum freedom to do whatever it's trained on. We gave the harness the ability to self-correct and add new tools if the LLM wants (is pre-trained on) that.

Our Browser Use library is tens of thousands of lines of deterministic heuristics wrapping Chrome (CDP websocket): element extractors, click helpers, target management (SUPER painful), watchdogs (crash handling, file downloads, alerts), cross-origin iframes (if you want to click on an element you have to switch the target first, very annoying), etc. Watchdogs specifically are extremely painful but required. If Chrome triggers, for example, a native file popup, the agent is just completely stuck. So the two solutions are to: 1. code those heuristics and edge cases away one by one and prevent them, or 2. give the LLM a tool to handle the edge case. As you can imagine, there are crazy amounts of heuristics like this, so you eventually end up with A LOT of tools if you try to go for #2. So you have to make compromises and just code those heuristics away. BUT if the LLM just "knows" CDP well enough to switch targets when it encounters a cross-origin iframe, dismiss an alert when it appears, or write its own click helpers or upload function, you suddenly don't have to worry about any of those edge cases. Turns out LLMs know CDP pretty well these days. So we bitter-pilled the harness.

The concepts that should survive are: - something that holds and keeps the CDP websocket alive (daemon) - extremely basic tools (helpers.py) - a skill.md that explains how to use it. The new paradigm? SKILL.md + a few Python helpers that need to have the ability to change on the fly.

One cool example: we forgot to implement an upload_file function. Then, mid-task, the agent wanted to upload a file, so it grepped helpers.py, saw nothing, and wrote the function itself using raw DOM.setFileInputFiles (which we only noticed later in a git diff). This was a really magical moment showing how powerful LLMs have become.

Compared to other approaches (Playwright MCP, browser use CLI, agent-browser, Chrome DevTools MCP): all of them wrap Chrome in a set of predefined functions for the LLM. The worst failure mode is silent: the LLM's click() returns fine, so the LLM thinks it clicked, but on this particular site nothing actually happened. It moves on with a broken model of the world. Browser Harness gives the LLM maximum freedom and perfect context for HOW the tools actually work.

Here are a few crazy examples of what Browser Harness can do: - plays Stockfish https://ift.tt/kxg7Rmy - sets a world record in Tetris https://ift.tt/zGIoVcJ - figures out how to draw a heart with JS https://ift.tt/vxnutzy

You can super easily install it by telling Claude Code: `Set up https://ift.tt/EklFyjY for me.` Repo: https://ift.tt/EklFyjY What would you call this new paradigm? A dialect? https://ift.tt/EklFyjY April 24, 2026 at 04:31AM

Show HN: Learn conflict resolution through a 90-second interactive story https://ift.tt/lM3C1dp

Show HN: Learn conflict resolution through a 90-second interactive story https://ift.tt/hxv0feU April 23, 2026 at 08:51PM

Show HN: Implit – Catch fake AI-generated dependencies https://ift.tt/hbG3gRC

Show HN: Implit – Catch fake AI-generated dependencies https://ift.tt/iAaomb3 April 25, 2026 at 07:49PM