I run a lot of small background services on my Mac (thanks Claude/Codex), plus Homebrew services I barely look at. It was surprisingly hard to know what was actually running on my machine and to investigate and fix issues quickly, so I built Launchdeck: a fast TUI for launchd jobs and Homebrew services.
- Make the validator read-only for the agent. Mount it as read-only in the container, or hash your eval scripts at startup and verify the hashes before each run. If the agent can write to anything in its evaluation path, it can (and will) game it.
- Log the full trajectory, not just the output: every tool call, file diff, and reasoning step. Then run a second agent over the trace with no knowledge of the KPI; it only knows what honest execution looks like, so it can flag deviations without being biased toward the target metric.
- Write system prompts like job descriptions, not optimization targets. Name a reviewer. Give the agent permission to fail ("if you can't hit the target, explain why").
- Walk your own prompts: what's the metric, what can the agent write, and can it reach the metric by modifying the measurement instead of doing the work? If yes, close that path.
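The hash-and-verify idea from the first point can be sketched in a few lines. This is a minimal illustration, not a complete sandbox: the function names (`snapshot_hashes`, `verify_unchanged`) are mine, and a real setup would also lock down file permissions and the container mount.

```python
import hashlib
from pathlib import Path

def snapshot_hashes(paths):
    """Record a SHA-256 baseline for each eval script at startup."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def verify_unchanged(baseline):
    """Re-hash before each eval run; fail loudly if anything was touched."""
    for path, expected in baseline.items():
        actual = hashlib.sha256(Path(path).read_bytes()).hexdigest()
        if actual != expected:
            raise RuntimeError(f"eval script modified since startup: {path}")
```

Call `snapshot_hashes` once before the agent starts, then `verify_unchanged` immediately before every scoring run, so a validator the agent has rewritten never gets executed.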
Wanted to build a curated list of truly free, publicly accessible real-time datasets and streaming sources (WebSocket, SSE), so you can work with real-life data and quickly see its shape in the browser. It also led me to build kafka-connect-websocket (https://github.com/conduktor/kafka-connect-websocket) to publish these streams directly into Kafka.
Full disaggregation of compute and storage is the right direction. Let storage handle replication; it's getting good: global, low-latency, and cheaper (e.g. S3 Express). Kafka becomes a smart data ingester and router: it moves bytes, enforces ordering, and does minimal buffering. That's it. Do one thing well.
You get a system that is simpler to operate, easier to scale, and more flexible. Data can be consumed outside of Kafka itself (typically in batch) without duplicating it, and that's a big win.
Conduktor | Senior Java Backend Engineer & Senior Product Manager | London | Full-time | Hybrid
Conduktor is a data platform that sits on top of any data streaming technology (Kafka), ensuring companies across the world maximise the value of their data.
We’re hiring for two positions:
1— Senior Java Backend Engineer: We're assembling a team to build the most powerful Kafka proxy for enterprises. It's a critical part of data infrastructures. If you’re a Senior or Staff Engineer with a passion for low-level Java, threading wizardry, and networking in distributed systems, you have a chance to shape the future of data streaming at Conduktor. UK / London (on-site).
2— Senior Product Manager: We’re seeking a strategic, customer-centric Senior Product Manager to partner with Product & Engineering leadership. The focus will be on enhancing Conduktor’s capabilities in data security and observability. Ideal candidates will have experience in enterprise software, big data platforms, or observability products, and be excited about real-time streaming data technologies. A technical background is mandatory (or we won’t even look, sorry). This is a hybrid role based in London, with the team coming onsite 3 days a week.
Conduktor is a platform that sits on top of any data streaming technology (Kafka), ensuring companies across the world maximise the value of their data.
We’re hiring for two exciting positions.
1— Senior Java Backend Engineer: We’re looking for an experienced Java developer with deep knowledge of Kafka. Production experience is critical; we are serious about a production mindset! This role involves a deep dive into Kafka’s low-level protocol and networking stacks. We DON’T do Spring, at all. FP experience is appreciated. This is a hybrid role based in London, with the team coming onsite 3 times a week.
2— Senior Product Manager: We’re seeking a strategic, customer-centric Senior Product Manager to partner with Product & Engineering leadership. The focus will be on enhancing Conduktor’s capabilities in data security and observability. Ideal candidates will have experience in enterprise software, big data platforms, or observability products, and be excited about real-time streaming data technologies. A technical background is mandatory (or we won’t even look, sorry). This is a hybrid role based in London, with the team coming onsite 3 times a week.
We build Conduktor to improve adoption and collaborative use of Apache Kafka across teams and organizations.
We are driven to make Kafka a better and safer place, allowing the business to thrive in real-time use cases and adapt proactively to build a better customer experience.
Please share your experience with Kafka, your love and/or your hate!
Conduktor is helping companies operate their data by making their Apache Kafka journey easier. We're an international team (mostly in Europe) looking for highly motivated, tech/data-oriented people. Today we're a developer tool; tomorrow we'll be something way bigger! We're looking for people around the CEST timezone:
* Experienced Scala/ZIO & Kafka developers
* Typescript & React developers
* Product Manager to lead our products
* Developer Advocate, if you know your Apache Kafka stuff, to help us work with our community!
* Customer Engineers to help our customers work with Conduktor & Kafka!