Hacker News | mrubashkin's comments

I agree that it would be more useful to have more .wav files in the GitHub repo, but we kept them minimal to reduce the total file size when cloning the repository. You can find more of the LibriSpeech data here: http://www.openslr.org/12/

We kept the epochs at 100 to demonstrate the negative consequences of overfitting the training data when evaluating on the test or dev sets. We could probably reduce that to ~50, though, to save training time :)
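To make the overfitting point concrete, here is a minimal, generic sketch of patience-based early stopping (this is not the repo's actual training loop; `early_stop_epoch` and the loss values are illustrative): instead of always running 100 epochs, stop once the dev loss has not improved for `patience` consecutive epochs.

```python
# Generic patience-based early stopping: stop training when the dev-set
# loss hasn't improved for `patience` consecutive epochs.

def early_stop_epoch(dev_losses, patience=5):
    """Return the epoch index at which training would stop."""
    best = float("inf")
    since_best = 0
    for epoch, loss in enumerate(dev_losses):
        if loss < best:
            best = loss
            since_best = 0
        else:
            since_best += 1
        if since_best >= patience:
            return epoch
    return len(dev_losses) - 1

# Dev loss improves for a while, then the model starts overfitting.
losses = [2.0, 1.5, 1.2, 1.1, 1.15, 1.2, 1.3, 1.4, 1.5, 1.6]
print(early_stop_epoch(losses, patience=3))  # → 6
```

With a patience of 3, training stops shortly after the dev loss turns upward, well before epoch 100.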


Hey Aetherspawn, the repo does not currently have code for decoding individually fed-in .wav files that are not in the train/dev/test sets. We'll polish up the code that simplifies decoding, add it to the repo soon, and then shoot you a message.


Hey ReverseCold, I had similar experiences when I first tried TensorFlow ~1 year ago, but since then, the ease of `pip install tensorflow` in a virtualenv has made it fast and relatively straightforward to use TensorFlow with CPU-only on either Mac or Ubuntu (I haven't tried any other Linux distro myself). I agree though, getting everything to work with a GPU is a bit of a pain :)


Thanks for the comment! I haven't read much about OpenCL (https://en.wikipedia.org/wiki/OpenCL), but I'm looking forward to learning more :)


Hi Vonnik, thanks for suggesting tools for us to try out next and add to our list!


Hey Alex, thanks for the comment! Have you tried out image classification with the Tegra X1 yourself, by chance?


I haven't, but the resources on jetsonhacks.com are an amazing way to see robot vision on this board in action.


We only used the Pi's CPU, which is adequate for classifying images but not powerful enough to train a model.


My colleagues and I have a blog post discussing how to do streaming video analysis on the Raspberry Pi: https://svds.com/streaming-video-analysis-python/

On the Pi 3, our application processes 320x240 images at 10 FPS without any problems.

Let me know if you have any questions!


Hi annnd! We tried a few times to train a scaled-down model on the Pi 3, but got nowhere. We've found that the best strategy is to train on the beefiest hardware you have, then transfer the model and run it on the Pi 3 for streaming applications.
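The train-big / run-small workflow can be sketched generically (this is not the actual TensorFlow code; the "model" here is just a stand-in threshold classifier): do the expensive fitting on the workstation, serialize the result, and deserialize it on the Pi for inference only.

```python
# Train on the workstation, serialize the model, run inference on the Pi.
# The "model" is a toy threshold classifier standing in for a real network.
import pickle

def train(samples):
    """Heavy step, done on the workstation: pick a decision threshold."""
    positives = [x for x, label in samples if label == 1]
    negatives = [x for x, label in samples if label == 0]
    return {"threshold": (max(negatives) + min(positives)) / 2.0}

def predict(model, x):
    """Cheap step, done on the Pi."""
    return 1 if x > model["threshold"] else 0

model_bytes = pickle.dumps(train([(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1)]))
# ...transfer model_bytes to the Pi (scp, a mounted share, etc.)...
pi_model = pickle.loads(model_bytes)
print(predict(pi_model, 0.7))  # → 1
```

In the real setup the serialized artifact would be a trained TensorFlow graph rather than a pickled dict, but the division of labor is the same.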


Hey Sam, this is Matt. Thanks for your comment and your help a few months back! And for anyone else reading this, Sam is great at responding to filed issues about installing TensorFlow on a Pi: https://github.com/samjabrahams/tensorflow-on-raspberry-pi/i...


Yeah, I'm using Sam's TF wheel on RPi3 and it works great.

> it was not feasible to analyze every image captured from the PiCamera using TensorFlow, due to overheating of the Raspberry Pi when 100% of the CPU was being utilized

Just put a heatsink on the CPU. It's $1.50 to $1.95 on Adafruit. I glue a heatsink onto every RPi3 unit I build.

https://www.adafruit.com/products/3082

https://www.adafruit.com/products/3083

> it was taking too long to load the 85 MB model into memory, therefore I needed to load the classifier graph to memory

Yeah, one of the first things you learn with TF on the RPi is to daemonize it: load everything you can up front, then just process everything in a loop. That initialization is super slow, but after that it's fast enough. YMMV
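The daemonize pattern can be sketched like this (a minimal illustration, not real TF code; `load_model` and `classify` are hypothetical stand-ins): pay the slow initialization once at startup, then serve frames from a loop.

```python
# Pay the slow initialization once, then process frames in a fast loop.
# `load_model` and `classify` stand in for the slow TF graph load and
# the per-frame inference call.
import queue
import threading
import time

def load_model():
    time.sleep(0.1)          # stands in for the slow graph/weights load
    return {"ready": True}

def classify(model, frame):
    return "train" if frame % 2 == 0 else "no_train"

def serve(requests, results):
    model = load_model()     # slow, happens exactly once
    while True:
        frame = requests.get()
        if frame is None:    # sentinel: shut down cleanly
            break
        results.put(classify(model, frame))  # fast per-frame work

requests, results = queue.Queue(), queue.Queue()
worker = threading.Thread(target=serve, args=(requests, results))
worker.start()
for frame in (0, 1, 2):
    requests.put(frame)
requests.put(None)
worker.join()
print([results.get() for _ in range(3)])  # → ['train', 'no_train', 'train']
```

In a real deployment the worker would read frames from the camera or a socket instead of an in-process queue, but the shape is the same: init once, loop forever.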


Hi Florin, thanks for the comment!

Even with the heatsink (which we install on all of the Pis), we were still having overheating issues. We tried a few other things to mitigate the problem:

1. Reducing the sampling rate for the image recognition (but if we sampled less often than every few seconds, we could miss the express trains).

2. Using a cooling fan (https://www.amazon.com/gp/product/B013E1OW4G/ref=oh_aui_sear...), which still didn't prevent overheating if the CPU was continuously loaded at 100%.

3. Only sampling images where we detected motion (https://svds.com/streaming-video-analysis-python/).

We went with the third option: our motion detection algorithm, while sensitive to false positives, lets us run the expensive deep learning image recognition only on candidate frames, using it to eliminate those false positives.
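The gating idea can be sketched as follows (a simplified illustration, not the actual SVDS code; frames are flat lists of pixel intensities, and `MOTION_THRESHOLD` is an assumed tuning parameter): a cheap frame-differencing check runs on every frame, and the expensive classifier runs only when motion is detected.

```python
# Cheap frame-differencing motion detection gates the expensive
# classifier, so the CPU isn't pegged at 100% on every frame.

MOTION_THRESHOLD = 10.0  # mean absolute pixel change; tune per camera/scene

def has_motion(prev_frame, frame, threshold=MOTION_THRESHOLD):
    diff = sum(abs(a - b) for a, b in zip(prev_frame, frame)) / len(frame)
    return diff > threshold

def process_stream(frames, classify):
    prev = frames[0]
    labels = []
    for frame in frames[1:]:
        if has_motion(prev, frame):          # cheap check, every frame
            labels.append(classify(frame))   # expensive check, rarely run
        prev = frame
    return labels

static = [0] * 100   # empty scene
moving = [50] * 100  # something entered the frame
print(process_stream([static, static, moving, static],
                     classify=lambda f: "motion"))  # → ['motion', 'motion']
```

A real version would use OpenCV background subtraction instead of a raw mean difference, but the control flow is the same: the classifier only sees frames the motion detector flags.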

Happy to chat more about your experiences daemonizing TF applications!


When you say "overheating issues", what do you mean exactly? IME, at 100% CPU usage with the heatsink on, either it doesn't throttle the clock down at all anymore, or it throttles only after a much longer time and the clock reduction is much smaller.

Are you seeing anything happen, other than some slight throttling?

The chip cannot fry itself; it's designed to slow down so as to stay below the dangerous temperature range.
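You can watch for this directly on a Pi: `vcgencmd measure_temp` prints a line like `temp=48.3'C`. Here is a small sketch that parses it (the `SOFT_LIMIT_C` value is an assumption; the Pi 3 firmware begins reducing the clock in roughly the 80-85°C range):

```python
# Parse `vcgencmd measure_temp` output and flag throttling risk.
# Reading /sys/class/thermal/thermal_zone0/temp (millidegrees C) works too.

SOFT_LIMIT_C = 80.0  # assumed: firmware starts reducing the clock around here

def parse_vcgencmd_temp(line):
    """Extract degrees Celsius from a line like \"temp=48.3'C\"."""
    return float(line.strip().split("=")[1].rstrip("'C"))

def is_throttling_risk(line, limit=SOFT_LIMIT_C):
    return parse_vcgencmd_temp(line) >= limit

print(parse_vcgencmd_temp("temp=48.3'C"))  # → 48.3
print(is_throttling_risk("temp=84.1'C"))   # → True
```

Logging this alongside observed frame rates makes it easy to tell throttling apart from genuine crashes or lockups.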

> Happy to chat more about your experiences daemonize-ing TF applications!

Eh, that was just a fancy way of saying I do what you do: launch the program once and let it run forever. It performs initialization (which takes a long time), then drops into a processing loop: wait for input / read / process / do something / repeat. Pretty basic stuff, really.


"Just put a heatsink on the CPU."

Ever had any problems without it?


The CPU throttles when it gets too hot. Going without a heatsink won't cause damage, but it will slow things down.


Right, that chip can never run at 100% CPU load for more than a fraction of a minute; after that it starts slowing the clock. Seems to me like it was meant to run with a heatsink on.

Either that or it was meant for outdoors operation in arctic regions.

