Software is shifting more and more from "heavy code" to "no code" environments. This is a cool project that seems to strike a middle ground, and a true example of how software engineers underestimate how powerful spreadsheets + APIs can be...
Does Excel run Python yet? Pull in parts of VSCode and the ability to run containers, a GitHub integration, something like Paw for request prototyping and inspection, and you’ve got an incredibly powerful development environment for non-SWEs.
Let's say I want to predict an output `C` by multiplying two distributions: `A` · `B` = `C`.
Assuming I am just guessing at the distributions of `A` and `B` (Uniform? Bernoulli? Geometric? Log-Normal?), would I get a better estimate by just multiplying `mean(A)` · `mean(B)`?
Point values suck. However, predicting the mean is often possible/realistic. And to be honest, I feel like I am taking a wild guess when describing the distribution of a data set.
TLDR:
What results in a better prediction/guesstimate: multiplying incorrect probability distributions, or multiplying more-correct means/point values?
I don't have a good answer. But I wonder if there are some realistic situations where we would have a good guess at the mean, but no clue about the distribution.
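One piece of the puzzle worth noting: if `A` and `B` are independent, then `E[A·B] = E[A]·E[B]` exactly, so for estimating the *mean* of `C` the distribution shapes don't matter at all; they only matter for the variance and tails of `C`. A quick Monte Carlo sketch (the particular distributions below are arbitrary choices for illustration, not anything from the original question):

```python
import random

random.seed(0)
n = 100_000

# Guessed, independent distributions for A and B (arbitrary choices).
a = [random.lognormvariate(0.0, 0.5) for _ in range(n)]
b = [random.uniform(1.0, 3.0) for _ in range(n)]

mean_a = sum(a) / n
mean_b = sum(b) / n

# Full distributional product: multiply sample-wise, then average.
mean_c = sum(x * y for x, y in zip(a, b)) / n

# Point-value shortcut: just multiply the means.
shortcut = mean_a * mean_b

print(mean_c, shortcut)  # nearly identical when A and B are independent
```

If `A` and `B` are correlated, the two answers diverge by exactly the covariance term, `E[A·B] = E[A]·E[B] + Cov(A, B)`, which is where knowing (or guessing) the joint distribution starts to pay off.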
It is not the actual greatness of national wealth, but its continual increase, which occasions a rise in the wages of labour. It is not, accordingly, in the richest countries, but in ... those which are growing rich the fastest, that the wages of labour are highest....
But though North America is not yet so rich as England, it is much more thriving, and advancing with much greater rapidity to the further acquisition of riches....
Though the wealth of a country should be very great, yet if it has been long stationary, we must not expect to find the wages of labour very high in it. ... There could seldom be any scarcity of hands, nor could the masters be obliged to bid against one another in order to get them. The hands, on the contrary, would, in this case, naturally multiply beyond their employment. There would be a constant scarcity of employment, and the labourers would be obliged to bid against one another in order to get it....
But it would be otherwise in a country where the funds destined for the maintenance of labour were sensibly decaying. Every year the demand for servants and labourers would, in all the different classes of employments, be less than it had been the year before. Many who had been bred in the superior classes, not being able to find employment in their own business, would be glad to seek it in the lowest. The lowest class being not only overstocked with its own workmen, but with the overflowings of all the other classes, the competition for employment would be so great in it, as to reduce the wages of labour to the most miserable and scanty subsistence of the labourer. Many would not be able to find employment even upon these hard terms, but would either starve, or be driven to seek a subsistence either by begging, or by the perpetration perhaps of the greatest enormities. Want, famine, and mortality would immediately prevail in that class, and from thence extend themselves to all the superior classes, till the number of inhabitants in the country was reduced to what could easily be maintained by the revenue and stock which remained in it, and which had escaped either the tyranny or calamity which had destroyed the rest....
The liberal reward of labour, therefore, as it is the necessary effect, so it is the natural symptom of increasing national wealth. The scanty maintenance of the labouring poor, on the other hand, is the natural symptom that things are at a stand, and their starving condition that they are going fast backwards.
There's certainly latent demand for a platform that localizes ad photography, although those customers are sensitive to weird artifacts in the generated images. Likely a non-trivial R&D investment there.
The clearest immediate opportunity for GANs is generating content where artifacts might add value or are easily ignored (e.g. art). The problem here is there's very little tech moat for these businesses given how easy it is to train a GAN. It'd come down to having a valuable, private dataset.
Lots of other potential commercial applications - we list some more on the demo.
1- Find solace in the fact that "senior" engineers often spend a significant amount of time searching the web/stack overflow just like you do.
2- Have respect for the code that came before you. Be generous when passing judgment on architecture or design decisions made in a codebase you've adopted. Approach inheriting legacy code with an "opportunity mindset".
Likewise, I think adopting a "how can I create a stack trace/error message" mindset is incredibly important.
Can you add a breakpoint to a certain piece of code? Could you add a try/catch statement somewhere to catch the error?
Far too often, good engineers do not have an "active" mindset in hunting for stack traces/error messages; instead, they wait for them to fall like manna from heaven.
Raise your hand if you've ever created an insert/update database trigger that explodes when it sees that one bad value you've been investigating, just so you'll have a traceback that points to the offending code.
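That trigger trick can be sketched with SQLite from Python; the table, column, trigger name, and "bad value" below are all hypothetical stand-ins:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")

# Explode the moment the suspicious value is written, so the Python
# traceback names whichever code path performed the insert.
conn.execute("""
    CREATE TRIGGER trap_bad_status
    BEFORE INSERT ON orders
    WHEN NEW.status = 'corrupted'
    BEGIN
        SELECT RAISE(ABORT, 'trap: bad status written');
    END
""")

conn.execute("INSERT INTO orders VALUES (1, 'ok')")  # passes quietly
caught = None
try:
    conn.execute("INSERT INTO orders VALUES (2, 'corrupted')")
except sqlite3.IntegrityError as exc:  # RAISE(ABORT, ...) surfaces here
    caught = exc
print("caught:", caught)
```

The same idea works in Postgres or MySQL with a trigger function that raises an exception; the point is to turn a silent bad write into a loud stack trace at the moment it happens.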
Can you share some insight into how you are counting mentions of coins on Reddit? For example, the coin Golem has the potential for overlap with the character Gollum in The Lord of the Rings. Are you browsing a pre-set number of subreddits?
Likewise, I would love some clarity on how you're collecting Twitter stream data and properly tagging the data to a respective coin. Are you using preset tags?
Would love to see an API endpoint for this type of thing. Kudos!
And yes, you are right! Best example is Idealab - creating amazing ventures and exits since 1996. But even before them, some large companies were creating new ventures for new products and grew them much like startups. :)