debosmit's comments | Hacker News

one thing i have seen with testcontainers (been a user for a few years) is the ergonomic SDKs they offer, especially in languages like golang - it makes spinning containers up/down and accessing their ports (eg: a mongodb container for some e2e test flow) super trivial. it's like a nicety layer on top of vanilla docker (with the cost of including their sdk in your test build process)

yes, 100% of this can be done using docker directly or the docker REST API (and it def doesn't make sense to migrate if you've already invested in an in-house framework that doesn't require much upkeep)
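to make the "nicety layer" idea concrete, here's a hypothetical sketch of what such a wrapper over the docker CLI might look like - the helper names (`run_command`, `parse_host_port`) are invented for illustration and are not testcontainers' actual API:

```python
# Hypothetical sketch of a "nicety layer" over the docker CLI, in the spirit
# of what testcontainers' SDKs provide (names invented, not the real API).
import shlex

def run_command(image: str, container_port: int) -> list[str]:
    # -d detaches, --rm cleans up on stop, and "-p 0:<port>" publishes the
    # exposed port on a random free host port so parallel tests don't collide
    return ["docker", "run", "-d", "--rm", "-p", f"0:{container_port}", image]

def parse_host_port(docker_port_output: str) -> int:
    # `docker port <id> 27017/tcp` prints something like "0.0.0.0:49153";
    # tests only care about the host port number
    return int(docker_port_output.strip().rsplit(":", 1)[1])

print(shlex.join(run_command("mongo:6", 27017)))
print(parse_host_port("0.0.0.0:49153\n"))
```

the real SDKs go further (wait strategies, automatic cleanup, typed modules per database), but the value is the same: tests ask for "a mongodb" and get back a host/port instead of hand-rolling docker flags.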


thanks for the responses, i just wanted to cut through the marketing. taking on standardised tools is a win for me; i just wanted to hear about real-world experience and use cases. indeed, taking on deps is not something i do lightly.

> value of test pyramid

I mean more from the perspective of covering your bases - you never want just one kind of testing pattern in your project. Each codebase is different, and i agree that taking on high-value test styles/cases is a project-by-project challenge that should be tailored by many variables. The shape of your testing pyramid may be different from others'. If you're inheriting a legacy system, maybe it's top-heavy because the effort/reward ratio just isn't there. In these circumstances i usually take the approach of "add more layers when bugs are found" to hone in on places that could use more or less test coverage.

Our in-house framework is really just a wrapper around certain tools that fill different gaps (think docker/selenium etc) so that different projects can build suites compatible with our ci/cd pipelines, which do things like generate environments on demand to run test suites against. So dropping in testcontainers to replace the home-grown docker layer will be trivial. Keeping test frameworks fresh and compatible with cloud vendors that aggressively upgrade is a challenge, just like keeping up with the API bleed of other programming deps is.

Our test suites essentially have a consistent domain language. We can upgrade selenium, or swap functions for different operations, without having to change any tests. Same goes for unit or integration tests - they are exactly the same in terms of assertions, syntax etc; they may just have different environment setup logic, which CI/CD can inject and override as it needs. In some cases it's suitable to mock certain external hard deps in integration tests, for instance, so having all the unit-testing tools available is a plus. In other cases, we may take a unit test written against mocks and inject real deps into it for certain CI/CD scenarios.
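a toy sketch of the "same assertions, different environment setup" idea described above (all names here are hypothetical, not the poster's actual framework): the test body is written once against an interface, and the pipeline decides whether a mock or a real dependency gets injected.

```python
# Toy sketch: one test body, pluggable dependency (names are hypothetical).
from typing import Protocol

class UserStore(Protocol):
    def get(self, user_id: int) -> str: ...

class MockUserStore:
    # unit-test flavour: canned data, no external services required
    def __init__(self, data: dict[int, str]):
        self._data = data

    def get(self, user_id: int) -> str:
        return self._data[user_id]

def check_user_lookup(store: UserStore) -> None:
    # the assertions are identical whether `store` is a mock or a real
    # database-backed implementation injected by the CI/CD pipeline
    assert store.get(1) == "alice"

check_user_lookup(MockUserStore({1: "alice"}))
print("ok")
```

in a CI/CD scenario, the same `check_user_lookup` would be handed a database-backed `UserStore` (e.g. one pointed at a container) instead of the mock, with no change to the assertions.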


do you have some thoughts on how sdv-dev type projects can be used to start populating, say, a database (eg: mysql running in a container)? i've looked into this space a bunch (eg: Gretel, Tonic, etc) and there doesn't seem to be a good solution that works end-to-end. Privacy Dynamics is quite cool, but ideally I'd like something super lightweight that can be pointed at a source db of some sort and then write to a sink (maybe applying a transformation layer in the middle)


Curious what a good end-to-end solution would look like for you. Is it more about ease of use (import/export with minimal effort), or is there a privacy layer that's missing?

I see it in 4 steps: 1. Connect to a source db to import your data 2. Train a generative AI model using the source data 3. Use it to create synthetic data 4. Export the synthetic data into a new db

The SDV team is working on business solutions to cover the full use case. You can use the public SDV to validate steps 2 and 3.


it's not necessarily about the privacy layer per se. the workflow i was ideating over is as follows:

1. spin up a production-equivalent database (eg: mysql container instead of prod RDS)

2. point a process/binary (maybe a simple container) to:

-- source db (RDS)

-- sink db (mysql container)

-- transformation function (that may use gen AI, etc) to seed sink db with synthetic/anonymized data [there may be some parallel process to enable testing of this transformation function]

3. profit (use this for dev etc)

The key here would be speed in step (2) if the entire pipeline were to run end-to-end on demand. do you have some examples of using SDV to achieve this? highly possible that there's already something in the docs that I have missed
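for concreteness, step (2) could look something like this sketch - sqlite3 stands in for both RDS (source) and the mysql container (sink), and a trivial anonymizer stands in for the gen-AI transform; the table/column names are invented for this example:

```python
# Sketch of step (2): read from a source db, transform, write to a sink db.
# sqlite3 is a stand-in for RDS (source) and the mysql container (sink);
# transform() is a placeholder where a trained synthesizer (e.g. SDV) would go.
import sqlite3

def transform(row: tuple) -> tuple:
    # placeholder: mask PII; a real pipeline might sample from a gen-AI model
    user_id, _email = row
    return (user_id, f"user{user_id}@example.com")

source = sqlite3.connect(":memory:")
source.execute("CREATE TABLE users (id INTEGER, email TEXT)")
source.executemany("INSERT INTO users VALUES (?, ?)",
                   [(1, "alice@corp.com"), (2, "bob@corp.com")])

sink = sqlite3.connect(":memory:")
sink.execute("CREATE TABLE users (id INTEGER, email TEXT)")

# stream rows: source -> transform -> sink
for row in source.execute("SELECT id, email FROM users"):
    sink.execute("INSERT INTO users VALUES (?, ?)", transform(row))

print(sink.execute("SELECT email FROM users ORDER BY id").fetchall())
# -> [('user1@example.com',), ('user2@example.com',)]
```

the speed concern in step (2) then mostly becomes a question of batching the reads/writes and how fast the transform (or synthesizer sampling) runs per row.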


This is what I am trying to solve via building Data Catering (https://data.catering/). It lets you generate data into any database (while maintaining any relationships between the data) via metadata that can be retrieved from a source database or other metadata sources (e.g. OpenMetadata).


yea, great for small projects but no good when you're trying to expand into enterprise capabilities -- I have to get one tool for dev, watch that config diverge from CI and then from staging, and then i have to hire a large devops team to manage it all -- super inefficient


This is really not true - a poor implementation shouldn't dictate the direction of the "remote development" space. Sure, IT and security have their requirements, but the primary requirement is to make developers happy and provide them with reproducible dev environments.

I used to work at Uber and what we ended up doing with devpod (https://www.uber.com/blog/devpod-improving-developer-product...) was to enable the popular IDEs to connect to these remote environments - all the dotfiles etc etc were persisted so it literally felt like the local IDE, just way faster. Admittedly, it costs a bunch of money to build internally, but there's a path to having people be happy with dev environments.

(we collected data on what IDEs to prioritize based on surveys)


Why use a survey and not just ask the endpoints directly? Presumably the laptops are managed and are running something like Santa on them. Would remove bias to get the data this way.


yea - we had that too (good for understanding how laptop tooling worked, and what areas were starting to show latencies and therefore, needed to be worked on)

surveys were anonymized


see, this is the problem w/ all these devtools - i need to piece together 5 different things when i just want a reproducible, ephemeral environment

someone needs to bring a heroku-like experience but for cloud-native development


That's the mission we're on at Argonaut. I'd love to know more about how you think about it if you're up for a chat.


What are you trying to offer above and beyond GitHub codespaces?



got some other interesting insights from the BCG post by Akash Bhatia related to the stage of the company and their cloud adoption

see `Creating a Developer-Focused GTM Model` [here](https://www.bcg.com/publications/2022/developers-influence-i...)

taking the purchase pathway of: 1. need/demand 2. shortlisting 3. testing/evaluation 4. final decision

it seems like the dev team has (and derivatively, the developers have) a say in at least the testing/eval and final decision stages. as cloud native becomes more mainstream, i'm sure we'll see how this influence affects each stage of the purchase pathway.


We’ve thought about this quite deeply at DevZero. Yes, the iOS developer experience can be quite hampered in a local env and requires users to frequently get beefy local laptops/machines. I saw this first-hand at Uber when crowdstrike was rolled out broadly.

We have support for AWS-based Mac VMs on DevZero, but we don’t find that our customers' biggest issues relate to iOS dev yet (we also target enterprise cos that have a vast diversity of tools, mostly backend and frontend)


we’re not seeing cloud costs be too terrible at DevZero - with proper hibernation/suspension, cloud costs are ~$50-60/mo in the worst case. admittedly, we target only enterprise companies, where the cost of lost dev efficiency is much higher

Say the net cost to the company for an engineer is $100k-$200k+. Even a net 10% savings over a year means $10k-$20k+ vs a $600-1k/yr investment (in the worst case). Security posture is also significantly improved, which admittedly is harder to assign a $ value to
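the back-of-the-envelope math above, written out (illustrative only, using the figures from the comment):

```python
# Back-of-the-envelope ROI from the numbers above (illustrative only):
# savings = engineer cost * efficiency gain, minus the tool's annual cost.
def annual_net_savings(eng_cost: float, efficiency_gain: float,
                       tool_cost: float) -> float:
    return eng_cost * efficiency_gain - tool_cost

print(annual_net_savings(100_000, 0.10, 1_000))  # low end:  9000.0
print(annual_net_savings(200_000, 0.10, 600))    # high end: 19400.0
```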


fwiw, we’re solving this at DevZero by having multiple AZs - given that our platform centrally manages remote compute (hibernation etc), this works great at keeping latencies <20ms for users

