Menlo Park, CA - Full time - Frontend, Backend, DevOps, ML/AI
Blackbird is a stealth, venture-backed artificial intelligence company focused on solving some of the important challenges created by the shift from desktop to mobile. Our stack is primarily functional-style Scala (we are heavy functional programming users), with most of our AI stack in Python and C++.
We're one of the few startups that do AI research above and beyond product development. We host regular talks on disciplines ranging from systems to functional programming to deep learning.
The team was founded by Stanford CS graduates who built self-driving cars, built search at Google and Yahoo Research, co-authored the Google File System, and scaled Twitter to 200 million users. Our open source code powers Snapchat, Tumblr, and Wikipedia in production today.
We're currently looking to add some great engineers to our team. Want to write highly scalable software with the architects who scaled Twitter and Google? Want to run ops for software designed for fault tolerance? Want to design next-generation user interfaces? jobs at blackbird.am
That's not really true. Erlang's VM is excellent at GC, multiplexing thousands upon thousands of green processes onto system threads while delivering soft real-time performance. Similarly, Haskell's Parallel Strategies library works well with the parallel GC. Immutability makes this a whole lot easier.
Erlang uses actors as its only concurrency mechanism and exploits that fact by giving each actor its own heap, which can be collected independently. So Erlang's GC does not need to accommodate concurrency, even though Erlang itself does.
Regarding OCaml in particular: you're right that immutability does make it much easier, but OCaml has mutable variables, which is precisely why it's hard.
It's a huge issue on mobile. Having multiple rendering engines is what forces the world to write software to a spec, and that in turn is what prevents people from writing code dependent on a particular browser or, in this case, a particular rendering engine. You see it all the time on the mobile web: plenty of sites just plain don't work in other rendering engines.
And the fact that web developers then actually can't develop for just one rendering engine and claim that "it's enough" is a very good thing for me. I refuse to use Chrome, and I'm constantly hit by sites developed "for Chrome only" on the desktop. The web shouldn't be "whatever's on the developer's machine and nothing more."
Located right outside Stanford University in Menlo Park, CA, Blackbird is a venture-backed startup founded by a team of Stanford CS alumni who've previously built search at Google and Yahoo and scaled Twitter to 200 million users. Our product is at the intersection of Information Retrieval, Natural Language Processing, and Computer Vision, and we're currently in stealth.
Role - UX Architect, Menlo Park, CA
- Design and implement UX for our flagship product and take on a leadership role
- Opportunity to invent new interfaces on mobile for something people use everyday
Experience
- Excellent JavaScript, jQuery, CSS, and HTML skills
- Comfortable with frameworks like Backbone.js, Ember.js, etc.
- Some basic design experience with Photoshop/Illustrator is a plus
- Familiarity with iOS/Android programming is a plus
We'd love to hear from you! Please email us at stanfordfounders2014@gmail.com with a link to your LinkedIn profile or resume. Thank you for your consideration!
It's amazing how fast you can go when you don't care about safety ;)
That said, for all non-trivial workloads, the latest Postgres is quite the workhorse, though it of course requires some tuning for performance. We ended up switching to it after MySQL consistently sucked on smaller joins.
Can't argue on the replication deal: it's a work in progress.
An important realization I made a while back was that design methodologies do little to address program correctness, which is almost always the wildcard on deliverables; buggy software means missed deadlines and blown budgets. Some, such as TDD, address the rapid building of tools to a particular spec, but often fail to promote static guarantees, especially in languages and environments where such provability is largely impossible. Dynamic languages' penchant for monkey (guerrilla) patching further exacerbates the problem.
Solutions to this are tough. My first suggestion would be to use languages which facilitate correctness, although that usually comes at the expense of developer availability: the pool of engineers with experience and know-how in true FP is orders of magnitude smaller than for more pervasive languages. My second thought is to further embrace math as the building block for non-trivial applications: mathematical proofs have real, quantifiable value for correctness. I find it no surprise that the larger companies have made foundational maths, such as category theory and abstract algebra, the underlying abstraction for their general frameworks. This is an even tougher pitch than the first, since most engineers don't recognize what they're doing as math at all - a big part of the problem. So many of us are doing by feel what has already been formally codified in other disciplines.
I'm aware that both require more (not necessarily formal) education than most engineers have pursued, which makes them a difficult short-term pitch for any company, but I think if we're serious about eliminating sources of non-determinism from projects, it's important we address them directly.
I think that's one aspect of the problem, but the migration from make-do to mathematically rigorous code can be equally fraught with peril. While the code itself can be made predictable, and easy to reason around, the time estimates and project planning often cannot. There is never enough time to factor out all of the commonality, remove all of the unnecessary use of state, codify all the assumptions into data types, etc., so you have to pick your battles.
A programmer needs intuition of what the biggest, most effective improvements are on the code base, which allow them to get the most work done. They also need some ability to guess how much time it will take, so that they don't miss deadlines. No amount of static type analysis will fix that.
Ah, but that's a matter of design - except now we have strong constructs with which to consider our problem. We will never get away from developing the architecture of our system, whose cost depends on how well understood the domain is. Ideally, that's where we should aim to move: problem specifications that render implementations rote. A lofty goal, I know, but within closer reach every day, and possible in many environments already.
Part of the beauty of proof is that so long as it is correct, the individual lemmas are largely irrelevant: we don't necessarily need to remove commonality or statefulness. Of course, I'm purposefully glossing over extra-functional requirements on which, outside of big-O, we don't have a firm grasp.
I'm attempting to stay away from "effective" or "most work done", since they're ill-defined and highly subjective, and to focus instead on measurable changes. I'd argue that, amortized, the upfront cost of better understanding the problem definition results in cost savings down the line, especially as the project recedes into maintenance.
The dark side of proof is that most mathematical problems are incredibly contrived. Taking a business requirement and translating it into known mathematical problems is harder than it sounds. Once you can do that, you're 90% of the way there, and people who can do that reliably are regarded as geniuses.
But most of the time, development is done without full understanding of the problem space. Usually, the problem doesn't become fully understood until you've already spent a good amount of time coding up your solution. If you wait until you fully understand the problem before starting, then you'll never start, because your brain simply can't comprehend the entire scope.
So instead, you get code bases full of sort-of well-factored code, but with lots of unwittingly reinvented wheels. This status is occasionally improved when somebody really smart happens to notice the commonality, and remembers a classic problem that it resembles, and manages to refactor the entire thing using the general solution. However, this almost never becomes apparent during the first revision.
"I find it no surprise that the larger companies have made foundational maths, such as category theory and abstract algebra, the underlying abstraction for their general frameworks."
I would like to learn more; do you have a specific example?
Having worked with real "engineer" engineers, I've found that they have, and value, a considerable amount of mathematical education, but that education is all in continuous mathematics; abstract algebra and formal logic have about the same amount of respect as basket weaving. Unfortunately, continuous math isn't particularly useful for software.
As an electrical engineer who briefly flirted with computer engineering, I had 4 semesters of continuous math and 1 semester of discrete math[1]. I also had classes like linear systems and electromagnetics where I had to actually use continuous math heavily.
While I do not think continuous math is particularly useful for general software development, I think it is very valuable for specific problem domains. I have found my Calculus/DiffEq foundation to be very valuable for my work with radar signal processing and NLP code, more so in the former because it was basically translating electrical engineering algorithms, formulas, and concepts into C. It is also important for any type of development that makes heavy use of geometry.
As a side note, I saw some of the bias you describe out of the more "pure" EEs I worked with when I first started. There was a strong bias against software engineers, particularly those who went the CS route, because they didn't understand the math and physics behind the hardware. Admittedly, some were clueless and probably should not have been writing software for multi-million dollar hardware[2]. Most were competent, though, and able to pick up the basics they needed to know when tutored for a bit.
[1] Which was actually titled "Discrete Mathematics", and just covered basic set theory, combinatorics, and linear algebra.
[2] Like the one who added a reset routine that blindly opened power contacts on the UUT without verifying that the power supplies were off first. Fortunately, that was caught before they actually opened with hundreds of amps going through the bus.
As you say, continuous math (to my mind, calculus, diffeq, and linear algebra; anything involving reals) is necessary for some problem domains. But accounting is necessary for some problem domains as well. And molecular biology.[1]
But if I get worked up into a good froth, I can make a case that software development is applied formal logic or applied abstract algebra (or both). I don't believe you can do professional software development (in Weinberg's sense) without some serious discrete math, in the same way you can't do signal processing without calculus.
[1] If you've got something that mixes the three, let me know. It's probably something I should stay away from.
I must admit to ignorance of Weinberg's books and other writings. My interest is piqued now, though.
That said, based on your second statement, nearly all scientific and engineering programming would not qualify as "professional software development". The code I worked on had little to no discrete math or formal logic in it. There was not an integer to be found save loop counters and array indices. Do you not consider an (electrical engineer | mechanical engineer | physicist | molecular biologist) who can code and spends the vast majority of their time writing production code like this a professional software developer?
Two good examples from the Scala community would be Algebird[1] from Twitter, which uses monoids as its main abstraction, and Spire[2], which might be the best numerical library out there and is heavily rooted in abstract algebra.
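To make the monoid idea concrete, here's a minimal sketch in Python (Algebird itself is Scala, and these names are illustrative, not Algebird's API). A monoid is just an associative combine operation with an identity element, and associativity is what lets aggregations like word counts be merged in any grouping - the property map-side aggregation in MapReduce relies on:

```python
# Minimal monoid sketch: an identity `zero` plus an associative `plus`.
# Names are invented for illustration; Algebird's actual API differs.
class Monoid:
    def __init__(self, zero, plus):
        self.zero = zero
        self.plus = plus

    def sum(self, xs):
        # Fold over any sequence; associativity means the grouping
        # (sequential, tree-shaped, distributed) doesn't matter.
        acc = self.zero
        for x in xs:
            acc = self.plus(acc, x)
        return acc

int_add = Monoid(0, lambda a, b: a + b)

def merge_counts(a, b):
    # Pointwise merge of word-count maps; does not mutate its inputs.
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) + v
    return out

word_counts = Monoid({}, merge_counts)

# Partial counts from two shards can be combined safely:
shard1 = {"cat": 2, "dog": 1}
shard2 = {"cat": 1, "owl": 4}
merged = word_counts.sum([shard1, shard2])
# merged == {"cat": 3, "dog": 1, "owl": 4}
```

The payoff is that one abstraction covers ints, maps, sketches, and anything else with an associative combine, which is exactly the leverage the abstract-algebra framing buys.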
I considered editing my other comment but decided instead to break it out.
There are a couple of complexities that your comment illustrates well:
First, continuous math _is_ available and immediately applicable today. The problem is that we often reason in, and program to, the implementation, not the abstraction - a subtle difference, but an important one. Not only that, but by reasoning in a flawed representation, we often miss important derivations that would yield dramatic simplifications and reductions of the problem domain. I would also argue that we already use continuous math regularly - for example, linear algebra, combinatorics, and set theory: most of us just know them only as arrays, random, and SQL.
Secondly, not enough effort is made in formal education for applying 'pure' math to computer science. Some branches, such as linear algebra, have obvious implementations and analogies already available but others are quite a bit less clear - I fault this more on curriculum silos than an engineer's innate abilities. It's a learned skill that just isn't often taught.
I think modern software engineering really needs to start banking on provable languages with strong type systems. Instead of hoping and testing with Monte Carlo sims, let's prove an algorithm once as correct and move on from there.
More provable code is not the answer. Proving the functionality is meaningless if you can't decide on what the functionality should be. And this project has all the classic hallmarks of a requirements quagmire. The requirements are poorly specified and contradictory, and provable code and strong typing can't solve that problem.
Indeed, it makes the problem worse... the strong typing and provable code stubbornly insist that the requirements are contradictory and refuse to even compile. But managers don't want to hear that, of course, so we go back to the dumber languages that happily compile even so. This does not make the requirements any less contradictory of course... it just means that the dumber languages will let you blithely code on, and are guaranteed to do something stupid at run time instead. Progress!
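A tiny sketch of that point, with invented requirement names: a strong type system rejects the contradiction at compile time, while Python can only approximate the idea by making the contradictory state impossible to construct at runtime:

```python
# Sketch of "invalid states are unrepresentable". The requirements are
# hypothetical; in a strongly typed language this check would be a
# compile error, here it's enforced at construction time instead.
from dataclasses import dataclass

@dataclass(frozen=True)
class CheckoutSpec:
    anonymous_checkout: bool       # "users must not need an account"
    per_user_order_history: bool   # "users can browse their past orders"

    def __post_init__(self):
        # Per-user history needs an account to attach orders to, so the
        # two flags are mutually exclusive: a contradictory spec simply
        # cannot be instantiated.
        if self.anonymous_checkout and self.per_user_order_history:
            raise ValueError(
                "contradictory requirements: anonymous checkout "
                "excludes per-user order history")

ok = CheckoutSpec(anonymous_checkout=True, per_user_order_history=False)
```

A language with sum types would encode the same exclusion as one variant per consistent configuration, turning the runtime ValueError into the compile failure described above.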
Engineering is systematic approximations and best-guessing. The art of it is knowing when shortcuts are good enough.
Only very small parts of software can be proven, especially for things like flight control, where the parameter space is huge, and exhaustive search or manual formal proofs are impossible. Not even speaking of the fact that the spec against which one could build a proof is never error-free to begin with.
And what, in your opinion, is the difference between "systematic approximations and best-guessing" and "knowing when shortcuts are good enough"?
For me, engineering is using proven and tested solutions every time, even if the solution is not sexy, and art is just "swinging it" - like using JS for everything when there are better, more mature solutions.
Since you're not building the same thing every time, you don't have wholesale proven and tested solutions for everything. The engineering part is going about filling the gaps systematically, and figuring out what parts require closer inspection in the first place.
In my opinion, this last part is the art, figuring out what to do when you tread a new path. A good engineer will find the critical parts, but that relies on as much intuition as process. A really good engineer will find the meta-mistakes in the process.
The decision to "swing it" can be an engineering decision. If I am doing something I did before, and all the parts fit, I can swing it, and forego a tedious process. I have to balance the risk of implementation delays and bugs vs. spending time on simulation and paperwork.
It's also perfectly possible to do everything by the book and still fail, if the process is a bad fit, or you run out of time because you're all caught up in following procedure instead of taking common sense shortcuts.
So either swinging it or doing it by the book can be engineering or idiocy, depending entirely on the situation, and, in retrospect, the outcome.
A lot of things, like nonlinear control systems, can only be verified by Monte Carlo methods. Subsystem failure tolerance is difficult to handle by any existing theorem prover.
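For instance, a toy Monte Carlo check might look like this in Python (the plant and controller are invented placeholders, not from any real system): sample random initial states and confirm the closed-loop nonlinear system settles, yielding statistical evidence rather than a proof:

```python
# Toy Monte Carlo verification of a hypothetical nonlinear control loop.
import random

def step(x, v):
    # PD controller driving a damped plant with a cubic restoring force.
    u = -2.0 * x - 1.0 * v          # proportional-derivative control law
    a = u - 0.5 * v - x ** 3        # nonlinear closed-loop acceleration
    dt = 0.01                       # forward-Euler integration step
    return x + dt * v, v + dt * a

def settles(x0, v0, steps=5000, tol=1e-2):
    # Simulate from one initial condition and check convergence to rest.
    x, v = x0, v0
    for _ in range(steps):
        x, v = step(x, v)
    return abs(x) < tol and abs(v) < tol

random.seed(0)
trials = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(50)]
failures = [s for s in trials if not settles(*s)]
# An empty failure list is evidence of stability, not a proof of it.
```

The last comment is the whole point: with a huge parameter space you sample it, accept statistical confidence, and reserve exhaustive methods for the small subsystems where they're tractable.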
The JSF software organization uses static analyzers heavily. They are using theorem provers.
You could, you know, pay for a subscription. If you're bumping up on the limit of free articles per month, you're clearly getting some value from the paper. There are worse things than supporting good journalism!
The NYT is executing a decade-long return to payment-based reporting. The early parts include easy-to-circumvent paywalls so that they can have some price discrimination [1] while also setting the social norms that payment is expected for web content. The later stages will likely include more rigid restrictions on content -- but that will only fly once "pay to read" is something people expect from expert content on the web, rather than are surprised by.