TheMrZZ's comments on Hacker News

I don't know how much SWEs get paid in your area, but I sure hope it's not $1,000/month.

Though I agree that I don't (yet) believe these "AI writes all my code for me" statements.


The point is that even with AI you still need someone to do the work: first to prompt the AI, then to fix up the output and bring it into a form you can release and use.


Feels like a good feature, with a simple explanation, real world use cases, and a scoped solution (global only, pretty simple keyword). I like it!


Agreed, they really did their homework: listed edge cases, made practical compromises, chose not to overdo it, reworked it again and again, and compared it against real-life experience.

It's really beautiful work, especially since touching the backbone (the import system) of a language as popular as Python, with such a diverse community, is super dangerous surgery.

I'm impressed.


Yeah, I think this is one of the cleanest PEPs to come around in quite a while, at least from the userspace perspective. Interested to see what happens after the traditional syntax bikeshedding ritual has been completed.


Hopefully they learned lessons from why PEP-690 was rejected. I've spent quite a while trying to build this stuff for our codebase and it's never worked well enough to use.


690 is mentioned a few times, including a FAQ note on the differences

https://pep-previews--4622.org.readthedocs.build/pep-0810/#f...

Q: How does this differ from the rejected PEP 690?

A: PEP 810 takes an explicit, opt-in approach instead of PEP 690’s implicit global approach. The key differences are:

Explicit syntax: lazy import foo clearly marks which imports are lazy.

Local scope: Laziness only affects the specific import statement, not cascading to dependencies.

Simpler implementation: Uses proxy objects instead of modifying core dictionary behavior.
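Until the PEP lands, you can approximate this kind of per-import laziness today with the stdlib's `importlib.util.LazyLoader` (this sketch follows the recipe in the importlib docs; note the PEP's proxy objects are a different mechanism):

```python
import importlib.util
import sys

def lazy_import(name):
    """Return a module whose execution is deferred until first attribute
    access -- roughly the behavior PEP 810's `lazy import name` syntax
    would give, built from today's importlib primitives."""
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)  # sets up the deferral; no real import work yet
    return module

json = lazy_import("json")       # nothing executed so far
print(json.dumps({"ok": True}))  # first attribute access triggers the real import
```

The main difference from the PEP: here laziness is a library call you have to route all imports through, whereas `lazy import foo` would mark it at the syntax level, visibly, per statement.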


My Samsung S24 Ultra shows black lines too, on Chrome and Samsung Internet.


The biggest trap of Simpson's paradox is that the results can change with every level of granularity.

If you take the example of Treatment A vs Treatment B for tumors, you can get infinite layers of seemingly contradicting statements:

- Overall, Treatment A has better average results
- But if you add tumor size, Treatment B is always better
- But if you add gender to size, Treatment B is always better
- But if you add age category to gender and size, Treatment A is always better
- etc...

It totally contradicts our instincts and shows that statistics can be profoundly misleading (intentionally or not).
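The first flip can be shown in a few lines of Python. These are made-up counts (the classic kidney-stone numbers with the arms relabeled to match the ordering above): B wins within each tumor-size group, yet A wins overall, because A's patients concentrate in the easy small-tumor group.

```python
# (successes, patients) per treatment arm, split by tumor size -- hypothetical data
groups = {
    "small": {"A": (234, 270), "B": (81, 87)},
    "large": {"A": (55, 80),   "B": (192, 263)},
}

def rate(ok, n):
    return ok / n

# Per-group success rates: B beats A in both groups
for size, arms in groups.items():
    print(size, {arm: f"{rate(*arms[arm]):.0%}" for arm in arms})

# Aggregated over sizes: A beats B overall
totals = {arm: (sum(groups[s][arm][0] for s in groups),
                sum(groups[s][arm][1] for s in groups))
          for arm in ("A", "B")}
print("overall", {arm: f"{rate(*totals[arm]):.0%}" for arm in totals})
```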


To add some proof to my answer, I actually coded a Z3 program to prove it! The 3-variable version takes too long to solve, but I got results for the 2-variable version (tumor size + gender):

Results can be found in this GSheet: https://docs.google.com/spreadsheets/d/1tsBhElTgXjVTeas8quar...

Code is here: https://gist.github.com/TheMrZZ/c33927ca2cc917997a67d7f84b82...

I'm currently running the 3-variable version; hopefully I'll get results this afternoon.

We can clearly see the same problem that arises in the 1-variable Simpson's paradox: widely different population sizes.
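For the 1-variable case you don't even need an SMT solver. A small brute-force search (my own sketch, not the linked gist's code) finds such a table quickly; comparisons are cross-multiplied so everything stays exact integer arithmetic:

```python
# Brute-force search for a 1-variable Simpson's paradox table:
# B beats A inside each tumor-size group, yet A beats B overall.
# All (successes, patients) pairs with at most 8 patients per arm and group.
pairs = [(ok, n) for n in range(1, 9) for ok in range(n + 1)]

def find_paradox():
    for aS, nAS in pairs:
        for bS, nBS in pairs:
            if bS * nAS <= aS * nBS:          # require B > A among small tumors
                continue
            for aL, nAL in pairs:
                for bL, nBL in pairs:
                    if bL * nAL <= aL * nBL:  # require B > A among large tumors
                        continue
                    # ...while A > B on the aggregated counts
                    if (aS + aL) * (nBS + nBL) > (bS + bL) * (nAS + nAL):
                        return {"A_small": (aS, nAS), "B_small": (bS, nBS),
                                "A_large": (aL, nAL), "B_large": (bL, nBL)}

print(find_paradox())
```

Whatever table it returns, the mechanism is the lopsided group sizes: each treatment's patients pile up in the group where it happens to look best.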


It's like tumors are higher-dimensional objects than what human brains are trained to perceive.


For pedagogues and practitioners alike: there is a subtle connection between Simpson’s paradox and the wild geometry of relative entropy. This might be partly why effect sizes are also contentious.

Besides Ellenberg’s mind-altering discussion of that link[1], see hints on the second page of:

https://www.qeios.com/read/XB1N2A/pdf

[1] "[the point of Simpson’s paradox] isn't really to tell us which viewpoint to take but to insist that we keep both the parts and the whole in mind at once."

Ellenberg, from Shape: The Hidden Geometry of Information, Biology, Strategy, Democracy, and Everything Else (2021)


If the previous comment is right, then this one is plainly also true in some sense. I'm disappointed to see downvotes.


> If the previous comment is right

I actually coded a Z3 program to prove it! The 3-variable version takes too long to solve, but I got results for the 2-variable version (tumor size + gender):

Results can be found in this GSheet: https://docs.google.com/spreadsheets/d/1tsBhElTgXjVTeas8quar...

Code is here: https://gist.github.com/TheMrZZ/c33927ca2cc917997a67d7f84b82...

I'm currently running the 3-variable version; hopefully I'll get results this afternoon.

We can clearly see the same problem that arises in the 1-variable Simpson's paradox: widely different population sizes.


I think the real-world resolution to this problem is straightforward though. You should look at the finest level of granularity available, and pick the best treatment in the relevant subpopulation for the patient.


Unfortunately, our level of certainty generally falls off as we increase the granularity. For example, imagine the patient is a 77yo Polish-American man, and we're lucky enough to have one historical result for 77yo Polish-American men. That man got treatment A and did better than expected. But if we go out to 70-79y white men we have 1,000 people, of which 500 got treatment A and generally did significantly worse than the 500 who got treatment B. While the more granular category gives us a little information, the sample size is so small that we would be foolish to discard the less granular information.


This is all true. I originally added a disclaimer to my post that said "assuming you have enough data to support the level of granularity", but I removed it for brevity because I thought it was implied -- small sample size isn't part of Simpson's paradox. My apologies for being unclear.


The smaller the subpopulation, the higher the variance, and the less significant the result.
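A quick way to see it: the standard error of an observed success rate shrinks like 1/sqrt(n), so tiny subgroups carry almost no signal. A sketch, using a hypothetical 50% base rate:

```python
import math

def se(p, n):
    """Standard error of an observed proportion p from n patients."""
    return math.sqrt(p * (1 - p) / n)

# One 77yo Polish-American man vs. 500 men in the coarser 70-79 group:
print(f"n=1:   +/-{se(0.5, 1):.3f}")    # the single data point tells us almost nothing
print(f"n=500: +/-{se(0.5, 500):.3f}")  # the less granular group gives a tight estimate
```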


Aside from the casino story (a high-value target that likely faces tons of attacks, and therefore an expensive customer for CF), did something happen with them? I'm not aware of any bad press around them in general.


I suspect having a population of "only" 10M, plus very bad commercial relations with their direct neighbours, makes Israeli startups think global from scratch. In Europe, most startups tend to think local-first, which hinders their scaling.


Hi! Are your models English-only, or do you plan on tackling other languages?


They will be multilingual; the tech scales to any language and we are working to add more (it is relatively easy). Here is a demo of Polish TTS: https://www.youtube.com/watch?v=ra8xFG3keSs

