filterfiber's comments

> still run on a device like the iPhone, which weights about 170g [1]? The human brain is 8 times heavier.

Why are you comparing the weight? I'm sorry but this is a bizarre comparison. This isn't even apples to oranges, this is apples to a telephone pole.

I'll also throw in that a single Nvidia H100 is 1200 grams, unless you add the "bracket with screws" option, which adds another 20 grams (who wouldn't want an extra 20 grams of intelligence?).

Something like 70%+ of the human brain is water. The human brain also needs a massive network of systems to transport nutrients/oxygen, which is irrelevant to logical processing.

Similarly, the majority of the iPhone's weight is the battery and frame. The weight of the processing chip is _grams_.

Besides the confounding variables in the weight comparison, the way ML works at a physical level is completely different from the human brain.


In their second sentence they have the most honest response I've seen so far at least: " averaged across 4 diverse customer tasks, fine-tunes based on our new model are _slightly_ stronger than GPT-4, as measured by GPT-4 itself."


Their timezone has been wrong in the tzdb for at least 10 years apparently

https://mm.icann.org/pipermail/tz/2023-December/033339.html


Does anyone know if mmWave could differentiate between my cat and me?


Possibly at shorter ranges. Those devices are essentially ‘bag of water’ detectors. You can get some data out of most of them that pertains to distance and approximate size of the ‘bag’.

They generally struggle with exact counts of objects. Two people close to each other will read like one large person. A large cat close to the sensor could read like a large human at a greater distance.


In theory, but probably not in practice. The fancy non-metal-detector airport security scanners use mmWave to detect objects through clothing, so the light itself is capable of producing high-resolution images. However, cheap home automation sensors are probably not imaging, so you're probably going to be stuck with a single depth measurement.


HomeSeer manufactures a Z-Wave millimeter wave radar sensor that uses imaging techniques to detect motion in specified zones, and they intend to release pet differentiation as a firmware update. I'm seeing enough negative reviews of how it works in practice that I haven't dropped the money on one, but I'm hoping that the technology will improve quickly. It's only $59 too, which isn't much more than PIR Z-Wave sensors. Maybe I will buy one to try out...


Does anyone know where I should look if I want to detect specific sounds? Like a smoke alarm, a food bowl dispenser (it's very distinct), a cat meowing, a 3D printer collision, that sort of thing?



Use any model trained on the AudioSet dataset. There is one called EfficientAT, I think, that I use regularly and it's pretty reliable.
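If a full AudioSet classifier feels like overkill for a single distinctive tone (smoke alarms beep in a narrow band around ~3 kHz), a simple band-energy check over an FFT can serve as a non-ML baseline. A minimal numpy sketch; the band edges and the 0.6 threshold are made-up starting points you'd have to tune on real recordings:

```python
import numpy as np

def band_energy_ratio(samples, sample_rate, lo_hz, hi_hz):
    """Fraction of total spectral energy inside [lo_hz, hi_hz]."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    band = (freqs >= lo_hz) & (freqs <= hi_hz)
    total = spectrum.sum()
    return spectrum[band].sum() / total if total > 0 else 0.0

def looks_like_smoke_alarm(samples, sample_rate=16000):
    # Smoke alarms typically beep around 3-3.5 kHz; the 0.6 threshold
    # is an arbitrary starting point, not a tuned value.
    return band_energy_ratio(samples, sample_rate, 2800, 3600) > 0.6

# Synthetic check: a pure 3.2 kHz tone vs. broadband noise.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 3200 * t)
noise = np.random.default_rng(0).normal(size=sr)
print(looks_like_smoke_alarm(tone, sr))   # True
print(looks_like_smoke_alarm(noise, sr))  # False
```

This obviously won't tell a meow from a doorbell; for anything beyond one clean tone, the AudioSet-model route above is the right call.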


You would learn how to do this in the first & second chapters of the fast.ai course.


Fun monetization strategy for federated apps - federate with your own instance dedicated to ads.

But more seriously what is the monetization strategy for federated apps? Up front pay or subscription for using the app?


You can look at how email is monetized for some ideas. Everything from ad infested free client to pay service to freebie with the purchase of your internet connection.


I suppose you could have an 'Apollo' style app providing a nice UI for power users with a subscription cost. I could see it being a solid revenue stream for small teams/indie makers. Difficult to get to the kind of scale needed for advertising to be particularly profitable unless we get some kind of federated facebook ads platform...


Does anyone have any input on how this compares outside of benchmarks?

They don't appear to have any info on how they made it.


Discussion is taking place, although it hasn't gone into great detail yet. https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0/dis...


I don't know why the title even mentions the flipper.

The attack can be done from any device that can send crafted BLE packets including laptops/android phones, etc.

Apple just fixed a BLE DoS attack.


Right exactly what I was thinking. It’s just weird that it was worded with the vibe of:

“Apple defeated a nefarious adversary for the good of mankind. Thank you Tim Apple!”

I’m not an Apple hater, don’t get me wrong. I’m writing this from my iPhone. I guess I’m just finally starting to get a little bothered by the ring kissing stuff that has to be done to remain on the FAANG+ allow list.

I’ve started using ChatGPT to summarize the articles for me more and more lately. So this will be less and less of an issue for me once I find a way to do it more reliably.


Better title: "Apple fixes BLE DoS attack".

I'm tired of media acting like the Flipper is some kind of "super special hacking tool"; that framing is quite literally getting it banned in some places, when all of its internals are common, easy-to-get radios (not to knock the Flipper, it is conveniently well packaged).

You just need to be able to send crafted BLE packets; this attack doesn't have anything specific to the Flipper at all.

It didn't even originate on the flipper: https://github.com/ECTO-1A/AppleJuice

> To run these scripts you need a Linux machine with an internal Bluetooth card or a USB Bluetooth adapter.

Versions also exist that run on the ESP32, android, etc.


Portable devices like the Flipper make it very easy to distribute working RF-layer exploits that don’t get hung up on all the non-standard hardware in phones and laptops. And frankly, that’s a good thing! Because manufacturers have, for some reason, decided that if there’s a radio-layer protocol involved somehow they’ll be protected by the obscurity of it. The Flipper lets devs turn those vulns into push-button exploits, which is finally inspiring companies to clean up the crapware in those stacks. (And I bet there is scarier stuff in there that researchers just haven’t found yet.)


Is this really a battle worth picking? Many friends and coworkers are going to bring up the Flipper to you in passing conversation, might as well get used to it.


In my opinion, yeah. If anyone tries to deify or demonize the concept of a Flipper, you just remind them that everyone has a smartphone with even wilder SDR capabilities. The biggest difference is that the Flipper is weak hardware with wide-open software, and your smartphone is strong hardware with weak-ass software configuration.

Reminding people of that is important, even if you don't think the FCC is about to change their mind. Crucify us for being nerds if you must, but someone has to be the voice of reason and point out that the Flipper is the most optional part of the exploit.


Yes, and people shouldn't accept the media doing what should be considered gross lying. And don't forget, they do this, or something far worse, to EVERY SINGLE article. You know about it in this case because you're in this sphere, but when it's about farm animals, you may or may not know, and then people tend to eat it up.

The mainstream media as it is now, and probably has been for a very, very long time, is an enemy of the people, and should be treated accordingly.


It's like FireSheep. Session hijacking wasn't new, but FireSheep made it easy, something anyone could do.


shoot the messenger.

since you have stocks of the billion dollar message


> since you have stocks of the billion dollar message

I have no idea what you mean by this?

Are you saying I'm defending apple somehow?

Because my point is nearly every phone/laptop could pull off this attack, not just a single "special hacking device". Which I think is worse for them.


This project is a fun POC but it's not very practical for that type of application.

A 4090 can generate over 100 images a second with turbo+LCM and a few techniques; you can make two days' worth of images in about a second. You could make a year's worth in roughly 3 minutes and put them on the SD card.
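To sanity-check those figures (the half-hour display cadence is my assumption; the 100 img/s rate is the claim above):

```python
imgs_per_sec = 100          # claimed 4090 throughput with turbo+LCM
display_interval_min = 30   # assumed: one new wallpaper each half hour

images_per_day = 24 * 60 // display_interval_min       # 48
images_per_year = images_per_day * 365                 # 17,520

secs_for_two_days = 2 * images_per_day / imgs_per_sec  # ~1 s
secs_for_a_year = images_per_year / imgs_per_sec       # ~175 s, i.e. ~3 min

print(secs_for_two_days, secs_for_a_year / 60)
```

So the "two days in a second, a year in three minutes" claim checks out at that cadence.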


Do you have references for that?

I found this claiming an A100 can generate 1 image/s.

https://oneflow2020.medium.com/text-to-image-in-less-than-1-...


> I found this claiming an A100 can generate 1 image/s.

The article you linked is over a year old. Needless to say there have been a LOT of optimizations in the last year.

Back then it was common to use 50+ steps with many of the common samplers. Current methods use only a few steps, sometimes just one. This OnnxStream project is using SDXL-Turbo, and you can combine LCM and a few other methods to go very fast.

The reason it's so much faster now is that OnnxStream is only using a single step.

This repo claims 149 images/s on a 4090 https://github.com/aifartist/ArtSpew

However even if you only get 1 image/s with whatever GPU you have I stand by my original statement that unless you want to do it for the cool factor (which is very valid), pre-calculating them makes more sense.


> This repo claims 149 images/s on a 4090

I actually get around 100 imgs/s on my 3080Ti. Three things to note: 1) you gotta run the max perf code to get the high throughput, 2) the images in this setting are absolute garbage, 3) you don't save the images so you're going to have to edit the code to extract them.

Definitely agree that this project is much more about the cool factor. I suggested a GAN in another comment for similar reasoning (because it's a Pi...), but if you want quality images, well, I'm not sure why anyone would expect to get those out of a Pi. High quality images take time and skill. But it's also HN; I'm all for doing things for the cool factor (as long as we don't sell them as things they aren't. ML is cool enough that it doesn't need all the hype and garbage).


> Back then it was common to use 50+ steps for many of the common samplers. Current methods use a few steps like 1.

The "look how fast we can go" method (turbo model with 1 step and without CFG) is blindingly fast, but the quality is... nothing close to what was being done with normal 50+ steps and normal-settings gens.

Realistically, even with Turbo+LCM, you're still going to need 4+ steps (often 8+), with CFG, for reasonable one-generation quality anywhere close to the images people generated at 50+ steps without Turbo/LCM.

Which is still a big improvement in speed.
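The rough speedup math, treating per-image time as roughly linear in step count (an approximation that ignores CFG's second forward pass and fixed overheads):

```python
steps_before = 50  # typical sampler step count before Turbo/LCM
steps_after = 8    # realistic Turbo+LCM step count with CFG, per above

# Per-image time scales roughly with steps, so the speedup is
# approximately the ratio of step counts.
speedup = steps_before / steps_after
print(speedup)  # 6.25
```

A ~6x wall-clock improvement at comparable quality, versus the ~50x you'd get quoting the 1-step no-CFG numbers.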


> Realistically, even with Turbo+LCM, you're still going to need 4+ steps (often 8+), with CFG, for reasonable one-generation quality anywhere close to the images people generated at 50+ steps without Turbo/LCM.

For sure. The only reason I considered comparing it that way is that the linked repo appears to be going for a similar approach, with 1 step/image, on the Pi.

From my own experience I've had a hard time ever getting a decent image below 6-8 steps, but this repo seems more focused on getting it to run in a reasonable amount of time at all, which understandably requires the minimal "maybe passable" settings.


They might be talking about this[0], as it has been popular recently. It can definitely do >60 imgs/s on my 3080Ti, but you're not going to want any of those images; they are absolute garbage.[1] I can do a little under an image a second and some may be quite usable, but nowhere near what you're going to get from the standard model.

[0] https://github.com/aifartist/ArtSpew/

[1] but the project is still cool, just context...


But that's not the point, obviously. Sometimes, being slow is a feature. Besides, a 4090 costs more than a small car.


> But that's not the point, obviously.

If you want to say the Zero 2 W is what makes it interesting, then sure.

> Besides, a 4090 costs more than a car.

They only cost ~$0.70 for 1 hr to rent. In fact you could put this on an A100 for ~$1/hr. Renting would make the most sense for this type of thing.


It depends on what you're using the images for.

If there's a human in the loop, 100 images/s is likely too much volume, especially if prompt engineering is needed.

At the same time, 2 images/hr is way too slow.


The whole point was that you'd be getting random pictures just-in-time, at a leisurely rate suitable for background image rotation, without other interaction.


I mean, if you need a human in the loop to verify the image quality then you HAVE to pre-compute the images.

> 100 images/s is likey too much volume

You can always generate less


I think you're just missing the point, which most certainly isn't buying compute to generate zillions of images ahead-of-time and then replaying them at a rate of one every half hour or whatever. Anyone can do this. The idea of having a tiny instance of SD crammed on a tiny computer, taking its time to compute the images just-in-time (so even in theory you don't know what you're going to get next), is simply much more fun and original, never mind way more aligned with the hacker ethos.


$1600 is more than a car?

I feel like you can't even find driveable cars that will last 100 miles at that price point anymore.


You probably can, but it'll take some time. The supply of reasonably reliable $500-$1000 beaters is a lot less than it used to be.


hyperbole

/hʌɪˈpəːbəli/

noun

exaggerated statements or claims not meant to be taken literally.


It's so nice of you to offer to buy 4090 cards for people who can only otherwise afford Raspberry Pis ;)


I was just using that as a reference. Stable diffusion will run well with almost any relatively modern gpu.

You don't have to use a 4090, you'll still get double digit performance with a 3060 or whatnot.

> for people who can only otherwise afford Raspberry Pis ;)

You can rent a 4090 for ~$0.70/hr, or get an A100 for ~$1.10/hr. And if your project is a display + Raspberry Pi, then those hardware costs will dwarf the rental cost.
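Back-of-the-envelope, using the rates above and assuming a year of half-hourly wallpapers (the cadence and throughput figures are the ones claimed upthread):

```python
rate_4090 = 0.70          # USD/hr rental price quoted above
gen_rate = 100            # imgs/s on a rented 4090 (claimed upthread)
images_needed = 48 * 365  # a year of half-hourly wallpapers (assumed cadence)

hours = images_needed / gen_rate / 3600  # ~0.05 hr of GPU time
cost = hours * rate_4090
print(f"{hours * 60:.1f} GPU-minutes, ${cost:.3f}")
```

Most rental marketplaces bill in minimum increments (often an hour), so in practice call it ~$0.70, still a rounding error next to the Pi and the display.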


You can use Google Colab for free as well.


I tried to load up atomic yesterday to see how it has changed since early in the year.

It failed to load, reporting that it had been detected as being against their terms of service.

