Hacker News | new | past | comments | ask | show | jobs | submit | 555watch's comments | login

In my country there are obvious clones of well-known portals with clickbaity titles, visually hard to distinguish from the real ones, always at some random URL. Sometimes it's just fake news and propaganda, sometimes they sell counterfeit goods, under paid sponsorship. I've reported them multiple times, and the response is always the same: "we looked at it and found nothing wrong." All the disputes get killed too. So it's done on purpose.


Has anyone commented already about the absurdity of watching Youtube Shorts? A wide empty white space, with a narrow vertical strip of content that is often stretched and split into two smaller videos.


Can't believe I never heard about this. It's a lifesaver; I specifically installed a ton of different browsers to emulate this...


I don't know, maybe it's a bit off topic, but at least in the cases I'm imagining, I would always hire a human rather than fully rely on AI. Let the human consult AI if needed, but still finalize the decision or result themselves. The human will be thinking about the problem for months or years; even passively, on vacation, an idea will occasionally pop up. AI will think about its task for seconds, and if it missed some information or whatever, it will never wake up in the middle of the night thinking "s**, I forgot about X"


Your use of the word stochastic here negates what you are saying.

Stochastic generative models can generate new and correct data if the distribution is right. It's in the definition.


My primitive understanding is that we approximate a Markovian approach and indirectly model the transition probabilities just by working through tokens.
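As a minimal illustration of that intuition (a deliberately crude sketch, not how large models actually work): a first-order Markov model estimates transition probabilities directly from observed token pairs.

```python
from collections import Counter, defaultdict

def bigram_transitions(tokens):
    """Estimate first-order Markov transition probabilities from a token stream."""
    counts = defaultdict(Counter)
    for prev, curr in zip(tokens, tokens[1:]):
        counts[prev][curr] += 1
    return {
        prev: {curr: c / sum(nexts.values()) for curr, c in nexts.items()}
        for prev, nexts in counts.items()
    }

tokens = "the cat sat on the mat the cat ran".split()
probs = bigram_transitions(tokens)
# estimated P(next token | "the"), purely from observed pairs
print(probs["the"])  # {'cat': 0.666..., 'mat': 0.333...}
```

A neural language model replaces this explicit count table with a learned function, but the object being approximated is still a conditional next-token distribution.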


Finding rotations that maximize variance is not that far from Euclidean-distance-based clustering.
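To make the connection concrete, here is a small sketch (synthetic data, my own construction) of the variance-maximizing rotation as an eigendecomposition of the covariance matrix, i.e. an operation in the same Euclidean geometry of inner products that distance-based clustering relies on:

```python
import numpy as np

rng = np.random.default_rng(0)
# anisotropic Gaussian cloud: most variance along the first axis
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])

# PCA: eigenvectors of the covariance give the variance-maximizing rotation
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
top = eigvecs[:, -1]                    # direction of maximum variance

# the variance of the projection onto `top` equals the top eigenvalue,
# a statement entirely about Euclidean inner products
print(np.var(X @ top, ddof=1), eigvals[-1])
```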


Not discussing the trivial example (since for any model there exists a distribution and a dataset on which a model will not perform). Just a general thought. Intro to ML teaches us this: if we want to "learn" a hypothesis class reasonably well with a finite sample, the class shouldn't be too complex. Otherwise we lose precision and/or any guarantees. This implies that for any DL algorithm A(S) on sample S, there exists a data transformation g such that B(g(S)) will result in a lower* risk for some simpler algorithm B. The question is not whether linear models are good or bad, but how complex/expensive is the transformation g.
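A toy illustration of such a g (entirely made up for this comment): concentric rings are not linearly separable in raw coordinates, but the transformation g(x) = ||x||^2 reduces the problem to a one-dimensional threshold, a far simpler hypothesis class.

```python
import numpy as np

rng = np.random.default_rng(1)

# two concentric rings: not linearly separable in raw (x, y) coordinates
n = 200
theta = rng.uniform(0, 2 * np.pi, 2 * n)
radius = np.concatenate([np.full(n, 1.0), np.full(n, 3.0)])
X = np.stack([radius * np.cos(theta), radius * np.sin(theta)], axis=1)
y = np.concatenate([np.zeros(n), np.ones(n)])

# transformation g: the squared norm collapses the problem to one dimension
g = (X ** 2).sum(axis=1)

# simple algorithm B: a single threshold midway between the class means
threshold = (g[y == 0].mean() + g[y == 1].mean()) / 2
pred = (g > threshold).astype(float)
accuracy = (pred == y).mean()
print(accuracy)  # 1.0 -- the hard part moved into g, not into the model
```

Here g is cheap; the interesting question in the original comment is exactly how expensive g has to be for realistic data.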


Yep, stopped reading after that proposal. I'm not sure the OP should be advocating stuff like this without at least a warning. It's not the rubbing; the pressure is what creates the "stars". Some people already have elevated intraocular pressure, as well as elevated blood pressure.

I'm not a doctor and don't know what I'm talking about, but I would definitely advise against eye rubbing.


I think the previous poster's claim is that different measurement processes yield different measurement errors (spread). Since the correlation coefficient is a function of spread, if the measurement errors are random, then even with the same underlying relation, increasing the spread a little is enough to produce a correspondingly smaller correlation coefficient.

Confidence bounds for every correlation coefficient would add value and _might_ change some of the interpretations.

E.g.: "its average correlation with the other measurements is only 0.03, which is not just small, it is substantially smaller than the next smallest, which is ear breadth, with an average correlation of 0.13."

If the former were, say, 0.03 +- 0.05 and the latter 0.13 +- 0.12, we could no longer rule out that both are 0 (or that they are equal).
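One standard way to get such bounds is the Fisher z-transform. A sketch (the sample size n = 100 is a made-up assumption purely for illustration; the actual study's n would determine the real widths):

```python
import math

def corr_ci(r, n, z=1.96):
    """Approximate 95% confidence interval for a correlation
    coefficient via the Fisher z-transform."""
    fz = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(fz - z * se), math.tanh(fz + z * se)

# hypothetical n = 100, chosen only to show how wide the intervals get
lo1, hi1 = corr_ci(0.03, 100)
lo2, hi2 = corr_ci(0.13, 100)
print(f"r=0.03: [{lo1:.2f}, {hi1:.2f}]")  # interval contains 0
print(f"r=0.13: [{lo2:.2f}, {hi2:.2f}]")  # interval also contains 0
```

With n = 100, both intervals contain 0 and overlap heavily, so neither "smallest" correlation would be distinguishable from the other.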

