On the broader point though, there certainly are some companies that have access to unique datasets that they can take advantage of. Meta and Google come to mind as obvious ones, probably Microsoft too. Apple I'm less sure given their privacy stance. Any others?
Had a similar experience in middle school. We started out doing net send to specific users, and after playing around a bit with batch files and the command prompt, I tried net send *.
Well the entire county was on the same network. All computers in the school district received my message. Fortunately the message was just "hi", and I think it was only sent once.
Like the OP, in retrospect I saw that the message showed the sending computer and user. Since we had student-specific logins, it didn't take long before I was tracked down and reprimanded. I think they even told my parents about it and threatened suspension, etc. But at a certain point it became kind of obvious that the network configuration was more at fault than a curious kid, so I got off with a stern warning and narrowly avoided a life of crime.
Thanks for posting. In the early 2000s my brothers, my dad, and I spent several months taking pictures and submitting each sequential minute between 8:00 and 9:00. Looking back, probably 20 years later now, it's quite amusing to still see the pictures there! It has been a long time since I heard about the Human Clock.
Voloridge Investment Management is an SEC-registered investment advisor that implements bleeding-edge machine learning techniques to solve the extremely challenging problems of modeling and predicting financial markets. At Voloridge we are passionate about expanding our knowledge and capabilities. Enthusiastic, highly analytical, and hardworking individuals make meaningful contributions to the design and implementation of our investment strategies, which are based exclusively on the predictive models developed by the research team.
Top Reasons why you want to work for Voloridge Investment Management:
• Work alongside a world-renowned data scientist and several Kaggle competitors, including two Grandmasters, one of whom held the #1 rank
• 401k retirement plan, $1 for $1 match up to 4% of compensation
• Highly Competitive Base Salary
• Profit Sharing Bonus
• Regular in-office massages, weekly lunches, stocked kitchens with snacks, fruit and drinks
• Work off the Intracoastal and 3 minutes from the beach
• Work in an office chosen by South Florida Business Journal as one of the top 10 Coolest Offices in South Florida
Hey, I'm not familiar with this way of comparing classical and quantum computation. Can you point me to some more details? I have Nielsen's book but don't remember seeing this analogy before!
I presume it's explained in Scott Aaronson's book, and it's implicitly there in Nielsen and Chuang. But the best way to understand it is by example: try to write out how you would describe classical probabilistic computation on two classical bits to mimic the quantum-circuit type of picture, and if you succeed, the generalization will be obvious.
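To make the exercise concrete, here's one way it can be written out (a sketch in NumPy; the basis ordering and the particular gates are just my choices for illustration): the state of two classical bits is a length-4 probability vector, and gates are stochastic matrices, in direct analogy with quantum amplitude vectors and unitaries.

```python
import numpy as np

# State of two classical bits under probabilistic computation: a length-4
# probability vector over the basis {00, 01, 10, 11} (first bit = most
# significant index). Gates are stochastic matrices (columns sum to 1),
# just as quantum gates are unitaries (columns have unit 2-norm).

# Start in the definite state 00: all probability on that outcome.
state = np.array([1.0, 0.0, 0.0, 0.0])

# A fair coin flip on a single bit, as a 2x2 stochastic matrix...
flip = np.array([[0.5, 0.5],
                 [0.5, 0.5]])
I2 = np.eye(2)

# ...lifted to act on bit 0 of the two-bit system by a tensor product,
# exactly as single-qubit gates are lifted in a quantum circuit.
flip_bit0 = np.kron(flip, I2)

# A classical CNOT (flip bit 1 iff bit 0 is 1) is a deterministic
# stochastic matrix, i.e. a permutation of the basis states.
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# Run the "circuit": flip bit 0, then CNOT.
state = cnot @ (flip_bit0 @ state)
print(state)  # [0.5, 0, 0, 0.5] -- a correlated distribution over 00 and 11
```

The payoff of writing it this way is that the whole quantum-circuit picture carries over with two substitutions: probabilities become amplitudes, and stochastic matrices become unitaries.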
It seems like you're using WaveNet to do speech-to-text when we have better tools for that. To transfer text from Trump to Clinton, first run speech-to-text on the Trump speech, and then give that text to a WaveNet trained on Clinton to generate speech that sounds like her but says the same thing as Trump.
> It seems like you're using WaveNet to do speech-to-text
I'm proposing reducing a vocal performance into the corresponding WaveNet input. At no point in that process is the actual "text" recovered, and doing so would defeat the whole purpose, since I don't care about the text, I care about the performance of speaking the text (whatever it was).
In your example, I can't force Trump to say something in particular. But I can force myself, so I could record myself saying something I wanted Clinton to say [Step 3] (and in a particular way, too!), and if I had a trained WaveNet for myself and Clinton, I could make it seem like Clinton actually said it.
I see. I still think it's easier to apply DeepMind's feature transform on text than to try to invert a neural network. Armed with a network trained on Trump and DeepMind's feature transform from text to network inputs, you should be able to make him say whatever you want, right?
Text -> features -> TrumpWaveNet -> Trump saying your text
> Armed with a network trained on Trump and DeepMind's feature transform from text to network inputs, you should be able to make him say whatever you want, right?
Yes, that should work, and by tweaking the WaveNet input appropriately, you could also get him to say it in a particular way.
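For what it's worth, the pipeline under discussion could be sketched like this. Everything here is a made-up stand-in (neither function is a real WaveNet or DeepMind API); the point is only to make the data flow concrete.

```python
# Hypothetical sketch of the Text -> features -> TrumpWaveNet pipeline.
# Both functions are placeholders, not real APIs.

def text_to_features(text: str) -> list[float]:
    # Placeholder for the text -> linguistic-features transform used to
    # condition WaveNet; a real system would emit phoneme/duration/pitch
    # features, not character codes.
    return [float(ord(c)) for c in text]

def trump_wavenet(features: list[float]) -> list[float]:
    # Placeholder for a WaveNet trained on one speaker's voice: maps
    # conditioning features to raw audio samples.
    return [f / 128.0 - 1.0 for f in features]  # fake "audio" in [-1, 1]

text = "whatever you want him to say"
audio = trump_wavenet(text_to_features(text))  # Text -> features -> speaker model
```

The "particular way" part of the discussion corresponds to tweaking the intermediate features (prosody, timing) before they reach the speaker model, rather than editing the text.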
My hypothesis is that all RNNs (and in general complex dynamical systems) need to be reset periodically. If run for too long without resetting, they tend to get stuck in strange states, blow up, or cease activating. You can see this effect by running a generative RNN model for a long time - eventually the output is garbage.
Under this model the next obvious question is why it takes so long to reset the brain's state. Maybe it can be done faster.
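The "blow up or get stuck" behavior is easy to see in a toy scalar recurrence (a sketch of the dynamical-systems intuition, not a claim about real RNNs or brains): with a recurrent gain above 1, a linear update diverges, while a saturating nonlinearity drives the state into a sticky fixed point.

```python
import math

def run(steps: int, gain: float, squash: bool) -> float:
    """Iterate a minimal recurrent update h <- gain * h, optionally squashed."""
    h = 0.1  # small initial hidden state
    for _ in range(steps):
        h = gain * h
        if squash:
            h = math.tanh(h)  # saturating nonlinearity, as in classic RNN cells
    return h

unstable = run(200, 1.5, squash=False)  # linear recurrence blows up
stuck = run(200, 1.5, squash=True)      # tanh traps the state near a fixed point
print(unstable, stuck)
```

In the squashed case the state converges to the fixed point of h = tanh(1.5 h) and stays there regardless of further input scale, which is one simple mechanism for "stuck" long-run behavior; a reset (reinitializing h) is the trivial escape.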