The silence here is stunning. Most of the support I've found is on Slack or Google Groups related to Mac administration (google "macadmins") or to the munki packaging tools. Not quite macOS development, but a good place to learn how to package things.
Unemployment numbers, as with most official government numbers, are based on a combination of surveys and self-performed actions, such as filing for unemployment, the latter of which essentially amounts to self-reporting [1]. These results are then aggregated into six metrics, U-1 through U-6 [2], of which U-3 has been designated the official and widely-quoted unemployment rate.
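For concreteness, here is a back-of-the-envelope sketch of how U-3 and U-6 relate; the counts are invented, but the formulas follow the BLS definitions as I understand them:

    # Rough sketch of U-3 vs U-6; all counts are made-up illustrative
    # numbers (thousands of people), not real BLS data.
    unemployed          = 6_600    # jobless, actively searched in the last 4 weeks
    labor_force         = 161_000  # employed + unemployed
    marginally_attached = 1_500    # want work, searched in last 12 months but not 4
    part_time_economic  = 4_700    # part-time only because full-time is unavailable

    u3 = unemployed / labor_force
    u6 = (unemployed + marginally_attached + part_time_economic) / \
         (labor_force + marginally_attached)

    print(f"U-3: {u3:.1%}")  # ~4.1%
    print(f"U-6: {u6:.1%}")  # ~7.9%, always >= U-3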
The main value of U-3 isn't that it's a particularly insightful reflection of job-market health, but rather that it's easy to measure, its definition is consistent, and it captures a particular subset of the population that has demonstrated an overt desire to participate in government programs that ostensibly help them on the path towards employment. This self-selects for workers exceeding a particular desperation threshold, while not counting people who may think they'd prefer to work but haven't taken a government-defined sequence of steps towards doing so.
Seasonally adjusted U-3 is low right now, around 4.1%, so in macro terms the pool of people who have taken overt steps towards getting a job while still not having one is shrinking. This means there are fewer candidates for a particular position and, on average, less of a chance that any particular candidate is qualified for any particular position. You could argue that this means there's a labor shortage.
For low-skilled jobs, the immigration crackdown might have had some effect. It's certainly had a big effect in construction. The bigger, legitimate employers might always have been hiring people with their paperwork in order, but the smaller contractors are now competing with them for the same pool of legal workers. The ex-cons are the ones who can fill the gaps for people looking for cheaper labor.
I knew a guy a few years ago, when things were more lax, who lived in Bisbee, Arizona. He said that since it was right next to the border and there was a big Border Patrol station there, there weren't any illegal immigrants around, so if he wanted cheap construction help he had to hire ex-cons.
The irony: you can't find any illegal immigrants to hire -- which would be against the law! -- so you have to resort to hiring people who have already broken the law.
Hitting that is one way of measuring whether there is a labor shortage. At the moment it does happen to coincide with wage growth (which should be higher when it is more difficult to hire people).
But economists define shortage as a situation where an external mechanism (like government intervention) prevents price from rising. The fact that we are seeing wage growth says that there is not a shortage.
Note that the article doesn't contain the word "shortage". It's just the continuum of supply and demand: if people like the ones you hired last year can't be found this year, you have to pay more, or consider those you'd have overlooked, including felons. If they have a stronger bargaining position, they get a better deal.
Note that the article also doesn't contain the word "immigration". It deserves to be noted that the jobs being discussed here have a huge overlap with the jobs often filled by unskilled immigrants. Whatever your views on ex-cons and low-skill foreigners, I think it's important to understand that there is a trade-off between them.
We do a shitty job at educating people, so it’s hard to hire qualified people in many industries.
Try finding a master plumber or electrician. Or a mid level IT person. They don’t exist.
Companies run so lean there’s no pipeline of internal candidates either. Where I work, it’s impossible to hire competent managers with technical skills. The normal places you’d look are full of young/inexperienced and old/stagnant.
I am working at a place now with plenty of experienced, good managers who started out coding. If you call anyone who has stayed at a company more than 5 years "stagnant", you will never find a good manager.
Anyway. They've got good pay and real power in their positions. They rarely change jobs, so you won't see them much on the job market.
I can't speak to plumbing and electrical work; those aren't my industries. But:

> Or a mid level IT person. They don’t exist.
My knee-jerk reaction is to say "I absolutely refuse to believe this is true," but I will concede that the problem exists (here I define the problem not as a shortage of qualified talent, but as a shortage of qualified talent that actually progresses to getting hired) because of the symptoms you correctly highlighted: pipelines and competent managers. I will admit, however, to being biased as someone who's worked in IT for decades and later became a technical recruiter with an agency that wanted someone on staff who could have conversations with tech professionals on a meaningful, personal level.
Unrealistic or otherwise unsustainable hiring-manager expectations, and salary constraints that are offensive compared with the skills demanded, seem to have created artificial scarcities of talent in IT hiring (the pipeline problem). Candidates are expected to come in the door satisfying every one of the 120 bullet points in a job ad's list of core competencies, and once hired they're given a workstation and sent off to the grind with just enough training and grooming to know the name of the product and rattle off a list of the libraries and frameworks used. Skill gaps get resumes thrown out, even middling gaps that could be rapidly closed with time, support from senior techs, and a dose of on-the-job training, something that now seems as extinct as the pterodactyl.
Heck, just last week I talked with a recruiter who was looking for a DevOps Engineer (not strictly IT in the traditional sense of the phrase, but I bring it up to highlight my point), and we joked about one job spec that wanted a senior-level expert with five years' experience in Kubernetes (the competent-managers problem). Let that one soak in: the hiring manager demanded a five-year Kubernetes expert, and Kubernetes hasn't been around for a full three.
My recruiter friend told me they dropped this client after a few more job specs like this, because they couldn't get a single candidate past a pre-screening with the hiring manager expecting the world, and they were preparing to do the same to a few other clients for the same reason. I asked if she was worried about what this would do to their billings: "No, because we have other clients who pay us more, but have much more manageable expectations of job candidates."
I wish this were the exception rather than the norm, but if I have to hear one more time that there aren't qualified IT people, I will probably rip what's left of my greying hair right out of my head. Because if it is true, it's not the fault of IT personnel; it's the fault of everyone involved in the hiring process refusing to manage their own darn expectations.
This matches my experience from inside a company. When recruiters don't want to work with you anymore, something is going wrong on your end.
There was also the salary issue. You won't find any senior DevOps with a clue, because anyone who can ssh into a server has moved to contracting and is paid double what your permanent role is offering.
True. Optimistically, the current "shortage" might result in getting chronically unemployed people back into the job market. People who long ago gave up on finding a job.
That said, we should introduce some job training for these folks, since their skills are likely severely out of date.
There's a labor shortage of people who will do the tedious work. A good factory job in part requires a dependable worker. In many places, that dependable worker is hard to find, and factories are switching to robots (e.g. https://www.washingtonpost.com/national/rise-of-the-machines... )
> The robots were coming in not to replace humans, and not just as a way to modernize, but also because reliable humans had become so hard to find. It was part of a labor shortage spreading across America, one that economists said is stemming from so many things at once. A low unemployment rate. The retirement of baby boomers. A younger generation that doesn’t want factory jobs. And, more and more, a workforce in declining health: because of alcohol, because of despair and depression, because of a spike in the use of opioids and other drugs.
> ...
> Companies now could pick between two versions of the American worker — humans and robots. And at Tenere Inc., where 132 jobs were unfilled on the week the robots arrived, the balance was beginning to shift.
> ...
> After an hour the workers were heading back to their cars, one saying that everything “sounds okay,” another saying the “pay sucks.” Bader guessed that two of the four “wouldn’t last a week,” because often, he said, he knew within minutes who would last. People who said they couldn’t work Saturdays. People who couldn’t work early mornings. This was the mystery for him: So many people showing up, saying they were worried about rent or bills or supporting children, and yet they couldn’t hold down a job that could help them.
On the other hand, you've got ex-convicts, many of whom are people who messed up somewhere in the past and don't want to go back.
I am in Wisconsin (and work in the public sector), so this is something I'm a bit more familiar with than other states... and it also happens to be the subject of the article... http://buybsi.com/images/PDF/MapofIndustries.pdf - note that three of the facilities teach farm skills.
Tying the two articles together: the work-release workers are people you can be sure will show up and will be drug-free. They're not going to stop at the bar on the way home and come into work the next day with a massive hangover.
There's been a labor shortage in the big cities for about a year, though IME that shortage hasn't extended to smaller cities or more rural areas of the country.
No, it's a symptom of capital concentration. Big cities are growing economically because that's where the money is, thus that's where the startup and VC money goes. The rest of the country isn't seen by investors as the best place for growth, so remains relatively underdeveloped.
Usually you will get at least four weeks' notice in Germany, and this increases to up to seven months once you have worked for the company for more than 20 years. Getting fired effective immediately usually only happens if you really fucked something up.
"Layoff" and "firing" are separate concepts here. The central example of a true layoff is a trade work shortage: the layoff happens with very little notice, because it reflects the volatile nature of the demand for the trade. If there's suddenly, unexpectedly nothing for anyone to do, then there's suddenly, unexpectedly no benefit in paying anyone to come in for the day to do it.
Does that kind of situation still confer protections in Germany? If so, how do German manufacturing companies manage to not go bankrupt? Do all the companies subscribe to huge insurance pools?
As far as I know, the barriers are even higher for layoffs. You cannot, for example, arbitrarily pick which workers to lay off; when you have to choose between several employees doing the same work, you have to minimize the social impact, for example by taking age, support obligations, and things like that into consideration. But I am totally not an expert, and I am sure it is a vastly more complex topic than I can imagine.
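As a concrete illustration: companies sometimes operationalize this "Sozialauswahl" (social selection) as a point scheme. Here is a made-up sketch of one; the weights and criteria are invented for the example and carry no legal authority:

    # Illustrative point scheme for a German-style "Sozialauswahl".
    # Weights are invented for this sketch; real schemes are negotiated
    # with the works council and reviewed by labor courts.
    def social_points(tenure_years, age, dependents, severely_disabled):
        points = tenure_years          # length of service
        points += age                  # older workers have a harder time rehiring
        points += dependents * 4       # spouse/children to support
        points += 5 if severely_disabled else 0
        return points

    # Among employees doing the same work, those with the FEWEST points
    # (i.e. the least social hardship) are the ones selected for layoff.
    team = {"Mike": social_points(5, 30, 0, False),    # -> 35 points
            "Bill": social_points(20, 55, 1, False)}   # -> 79 points
    print(min(team, key=team.get))  # -> "Mike" gets laid off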
This seems backwards to me. Because my spouse has a better job than Mike's, I'm more likely to get laid off? Because I'm younger than Bill and can theoretically make up the financial impact sooner I'm more likely to get laid off?
If there are ten employees you need to cut, shouldn't they be cut based on either their recent performance, or a completely random method?
> If there are ten employees you need to cut, shouldn't they be cut based on either their recent performance
No, because layoffs are position-based. A performance-related cull is not the same as redundancies. If you want to get rid of people that way, you need ironclad documentation and/or to offer generous packages.
In hard times, it tries to do the thing with the least overall impact.
Maybe it's putting lipstick on a pig, maybe it's fairness.
Layoffs (mass dismissals) are very hard, because usually they mean the whole region has economic problems, and those who get laid off are especially vulnerable, since almost by definition they work in a dying industry or trade.
... but there isn't suddenly, unexpectedly, no need for the employees to pay their rent, mortgages, and bills.
When I was made redundant a decade ago in the UK I got:
- a week's "consultation period" during which the ranking process for who would be made redundant was explained
- a month's pay in lieu of notice
- (optional on the employer's part) three months' pay for signing away my right to sue for wrongful dismissal
The only situation in which employees can really lose out is if the company goes bankrupt in which case they are at risk of losing their last paycheque.
Traditionally, large-scale layoffs are hard and avoided, especially in industries where unions are strong. If required, they are usually negotiated with the union. Still, there are often ways to avoid a layoff: the union sometimes agrees to a temporary pay cut across the board. There are also legal instruments that help a company in a crisis, Kurzarbeit being perhaps the most German of them: under some circumstances the company can cut work time (and salaries) for a specified group of employees, and the state will cover a certain percentage of the lost wages. This allows the company to retain staff through an unexpected slowdown in business and be ready and fully staffed when orders start rolling in again. In effect, a giant insurance pool.
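To make the Kurzarbeit arithmetic concrete: from memory, the state replaces roughly 60% of the lost net pay (about 67% for workers with children), so a worker on half hours keeps most of their income. A rough sketch with invented numbers:

    # Back-of-the-envelope Kurzarbeit example. The ~60% replacement rate
    # is from memory (roughly 67% with children); all figures are invented.
    normal_net  = 2_500.0                              # EUR/month at full hours
    hours_cut   = 0.50                                 # company halves working time
    reduced_net = normal_net * (1 - hours_cut)         # paid by the employer
    kurzarbeit  = (normal_net - reduced_net) * 0.60    # paid by the state
    print(reduced_net + kurzarbeit)  # 2000.0 -> worker keeps 80% of net pay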
The US has much weaker employee protections. Aside from workers in unions and certain protected scenarios, layoffs usually occur with zero notice. (And, often, when associated with public bad financial returns, shortly after management denies any layoffs are coming.)
Yeah, but the threshold is quite high: 500+ workers at a single site, or both 50-499 workers and over a third of the site's workforce.
The Stack Overflow layoffs are 60 people and 20% of their workforce, and so wouldn't (even at a single site) be subject to the WARN Act’s 60-day notice requirement.
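If I'm reading the thresholds right, the federal test looks roughly like this (a sketch, not legal advice):

    # Sketch of the federal WARN Act mass-layoff trigger as described
    # above (single site, 30-day window); not legal advice.
    def warn_triggered(laid_off, site_workforce):
        if laid_off >= 500:
            return True
        return laid_off >= 50 and laid_off / site_workforce >= 1 / 3

    print(warn_triggered(60, 300))  # False: 60 people but only 20% of the site
    print(warn_triggered(60, 150))  # True: 60 people is 40% of the site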
> Per Chapter 4, Part 4, Sections 1400-1408 of the Labor Code, WARN protects employees, their families, and communities by requiring that employers give a 60-day notice to the affected employees and both state and local representatives prior to a plant closing or mass layoff.
The layoff that I was part of in '09 in California (Netapp, ~500 employees let go, about 6% of staff) had two different groups in the IT side of the house:
A. Out the door now.
B. Pick your brain for some time.
The "out the door now" group was let go immediately, though they stayed on payroll for 60 days. They weren't allowed in the building, but they were technically still employees. The severance package followed the 60 days.
The "pick your brain" group, which I was part of, was still allowed in, and we worked. We had 30 days to sign an "increased pay, pick your brain from February until July" agreement, or be out the door with 30 days left on the WARN notice (if I remember correctly). Come July, the 60-day window kicked in and then the severance package.
I am of the understanding that this approach isn't unusual with tech companies.
Interesting. A company I used to work for reduced its LA office to a handful of people after laying off two-thirds of its workers to move operations to Vancouver. Everyone was considered 'contract' despite being on W-2s, so I'm not sure how the rules applied. I think we did get a notice, but not months to look for new work. I had a week.
I don't think "unique" really applies here. You can do the same with the Google App Engine standard runtime, or the custom runtime if you want to use your own Dockerfile.
https://cloud.google.com/appengine/
Last time I tried using the GAE beta with custom Docker containers (± a year ago) it was an epic journey of pain and confusion. The docs are split-brain between standard and beta, and the rapidly evolving APIs paired with equally rapidly outdated (official) example projects are maddening. [edit: I should mention, though, that I loved that they have those example projects at all! It was just frustrating to constantly be battling API obsolescence in them.]
GAE is great, don't get me wrong. But it's so much more than just Dockerized app hosting, and it's not so great at doing just that. If hyper.sh has any half decent onboarding flow thought out, it should be a clear winner.
I feel that pain on the split-brain docs. That's my number one complaint about their APIs: I'm never sure if I'm on the newest. That said, there's a "familiarity" to them... once you figure out how things changed, it's easy to quickly determine the age of a code sample.
You should try the custom runtime again! It's come a long way since Beta.
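For anyone who hasn't looked since the beta: a custom-runtime deploy is now basically two small files plus `gcloud app deploy`. A minimal sketch from memory (the flask/gunicorn choice and the `main:app` module name are just placeholders, and details may have shifted since):

    # app.yaml -- tell App Engine flexible to use your own Dockerfile
    runtime: custom
    env: flex

    # Dockerfile -- anything that serves HTTP on port 8080 works
    FROM python:3-slim
    WORKDIR /app
    COPY . /app
    RUN pip install flask gunicorn
    CMD exec gunicorn --bind :8080 main:app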
Good point about the "getting it" with their docs. I suppose there's always that ethereal point with new concepts... It's just frustrating when it takes so long to get there. And, somehow, once you're there, it becomes incredibly hard to empathize with people who don't "get it" yet. Consequently, docs suffer.
As the old joke goes: Once you understand monads, you immediately lose the ability to explain them to others.
Anyway, good to hear they've improved and stabilized.
You are making an assumption that obesity is because people overeat, and that "moving around" is somehow a counter to that.
Have you considered that the amount of sugar found in virtually all Standard American Diet (SAD) food could be the problem? (Setting aside the movement debate and the false premise it creates about energy in vs. energy out.)
Too much sugar is definitely a problem, but if obesity were solely due to sugar in "virtually all SAD based food" then everybody would be obese across the board. That's not what the data shows, though.
And even if sugar were the main factor, OP's post would still be technically correct, because the underlying problem would be overeating foods with sugar, possibly in addition to overeating other foods.
This may be an unpopular view, but there's no getting around the fact that obesity in most adults is caused by poor decision making around food and exercise.
It saddens me that you are being downvoted. There's plenty of reasonable debate to be had around how different types of food make you feel, whether that leads to over- or under-eating, allergies, etc. But the premise that one's total calorie count is the most important (and perhaps only) factor in determining weight is not a radical thought.
The last time I bought 'healthy' food, I couldn't use it before its shelf life expired. Smaller portions are prohibitively expensive (it costs less for me to get something that looks better, tastes better, and has more variety by going out to eat).
It also isn't cost-effective for me to make the time to cook my own food, even ignoring the cost of the ingredients, aside from simple things that use mostly non-perishable ingredients.
However, even going out to eat, the incentive to buy in bulk /there/ exists too. Purchase of a single meal that should fill someone up for a day has a substantial discount over getting what should be a properly sized portion for a small regular meal.
For all sides, the market incentives push towards over-consumption which is why we have the outcome we do.
I recall hearing that the cost of labor was actually one of the primary factors in meal price.
This is a load of horse shit, and I am holding back from being too harsh.
Eating out is in no way cheaper than making your own meals. The only way I can even imagine someone claiming otherwise is if you haven't yet built up a pantry and have to buy things like vinegars, dressings, sauces, flour, etc., every time.
Here are some numbers. I cook an average of 4x a week, making 4 portions of each meal. My grocery bill is never more than $70 and that's if I have to restock on some pantry items or am splurging on something like shellfish. So on the high end, that's $4.38 per portion that always includes meat or fish and a vegetable. That's comparable to eating on the value menu at McDonald's but certainly cheaper than the full combo meals which can run upwards of $10.
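The arithmetic behind that per-portion figure, for the skeptics (these are my own numbers from above, not anyone else's budget):

    # Cost-per-portion arithmetic using the grocery habits described above.
    weekly_bill   = 70.00   # high end, including pantry restocks
    meals_cooked  = 4       # times per week I cook
    portions_each = 4       # portions per meal
    print(weekly_bill / (meals_cooked * portions_each))  # 4.375 -> ~$4.38/portion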
I'll admit that I shop at Trader Joe's which has some of the best prices out there. But the point is, it's not too difficult to make your own meals that cost about the same as shitty junk food. And at least for me, it's faster than any sit down place or even take out that you still have to go somewhere to get.
I don't have the knowledge, skills, or training in palate to accept self-prepared frozen meals. They just sound inherently disgusting to me. I don't expect to find anything crispy in there, and the thought of saving things in the freezer makes me think of freezer burn.
Western lifestyles differ markedly from those of our hunter-gatherer ancestors, and these differences in diet and activity level are often implicated in the global obesity pandemic. However, few physiological data for hunter-gatherer populations are available to test these models of obesity. In this study, we used the doubly-labeled water method to measure total daily energy expenditure (kcal/day) in Hadza hunter-gatherers to test whether foragers expend more energy each day than their Western counterparts. As expected, physical activity level (PAL) was greater among Hadza foragers than among Westerners. Nonetheless, average daily energy expenditure of traditional Hadza foragers was no different from that of Westerners after controlling for body size. The metabolic cost of walking (kcal·kg⁻¹·m⁻¹) and resting (kcal·kg⁻¹·s⁻¹) were also similar among Hadza and Western groups. The similarity in metabolic rates across a broad range of cultures challenges current models of obesity suggesting that Western lifestyles lead to decreased energy expenditure. We hypothesize that human daily energy expenditure may be an evolved physiological trait largely independent of cultural differences.
Increasing activity increases calories burned, which can help produce a caloric deficit which will lead to weight loss.
Obesity is literally impossible without being in a caloric surplus over the long term.
Added sugar contributes to the problem, in that it is not very satiating so it makes it easier to overeat, but you can easily lose weight on a high sugar diet so long as a caloric deficit is maintained.
The actual physiology is well-understood. The only problem is changing habits and behaviors, which can be difficult but is certainly not even close to impossible.
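Since the physiology reduces to energy balance, the arithmetic is simple. A rough sketch using the common ~3,500 kcal-per-pound rule of thumb (a first-order approximation only; real metabolisms adapt):

    # Rough weight-loss arithmetic using the ~3500 kcal/lb heuristic.
    # Treat this as a first-order estimate; metabolism adapts over time.
    daily_deficit_kcal = 500    # e.g. eating 500 kcal under maintenance
    kcal_per_pound     = 3500   # rule-of-thumb energy content of body fat
    weeks              = 10
    print(daily_deficit_kcal * 7 * weeks / kcal_per_pound)  # 10.0 -> ~10 lb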
There was a study, which I'm struggling to find at the moment, that I saw via HN, claiming childhood obesity depends more on what people eat than on when they eat it or how much.
Childhood obesity, like all obesity, is most definitely about calories in vs calories out. However, it is also true that fructose (one half of sucrose), doesn't induce any of the body's satiety responses.
A second issue is salt, which induces thirst, which induces consumption of fluids. Solving that problem with water drops the osmolarity, which decreases the satiety signals by diluting the interstitium. Solving the thirst problem with beer, soda, milk, etc., maintains the osmolarity of the interstitium, but does so by increasing blood glucose.
Overall, it's 90% eat less, 10% exercise more. But the exercise is also critically important for increasing norepinephrine, which improves focus, willpower, and concentration, or whatever amalgam of those words best represents the effect of norepi.
Having seen really horrible outcomes in kids, please, please, give up this idea that calories in doesn't need to equal calories out. It does. Always.
Apple's definitely playing the long game here. When the Retina models came out, the absence of an Ethernet port was a bit inconvenient for me, but now I don't miss it at all. It's possible that in a few years, when this model matures and comes down in price, the Touch Bar gets wider support, and everything uses USB-C, it will make a lot more sense to get one.
Yeah, I'm very convinced this is going to happen. At the same time, at least for me, I can't afford to wait for this single-connector utopia. I need to be able to plug in my daily cords, and the last time I had to deal with dongles I lost them all over the place.
I do think it was a big misstep that their flagship, brand new phone can't connect to their flagship, brand new laptop without an adapter that has to be bought separately. I also think they could have done it in a more phased approach without losing anything. But ultimately it will turn out to be a good decision.
When was the last time you plugged your iPhone into your computer? I don't think I've ever plugged my current phone into my computer, since it syncs and backs up to the cloud anyway for me.
I haven't in years but everyone who isn't technically savvy in my family still does. I don't know if that's indicative of anything larger than my anecdote but I work with people who are very involved with technology who still sync all of their stuff, with a cord, using iTunes and their iPhone 6s / 7.
No idea how common this is but I feel like it's common enough that it should have been handled better.
Two predictions from me. (1) They sell out and it is the most popular MBP model to date, (2) They don't reduce the price in the future.
Apple is awfully good at knowing just when to extract that little bit more from their customers. And I suspect that similar to the USB-C MacBook the concern over ports will just be from a vocal but tiny minority.
I have to disagree on that. Unlike the more vertically integrated iOS devices with their A-series processors, Mac pricing has always been highly volatile, and new form factors always start at high prices which quickly come down as Apple perfects the manufacturing processes. MacBook Airs, for example, started out as premium-priced devices before settling into the entry level.
I can't find hard numbers for MBPs, but here's a list of prices over years for Mac Pros; from a consumer perspective, the changes are essentially random.
They have gone too far this time. This is far beyond the usual expensive dongle inconvenience we're used to seeing from them. They just want to game us. It's basically financial domination at this point.
How hard could it really be to make a basic Linux desktop environment work sorta close enough to OS X to make enough people switch to finance further development? My quick and dirty estimate is that 50 good people could pull off a great first release in 18 months.
(I do have relevant consumer UX software design management experience and feel that I can do reasonable quick estimates based on that.)
I think that could be good as well. If I were to spearhead it, I'd have us home in on one or two laptop models, then spend the rest of the engineering time on making the install, upgrade, and user experience as smooth as possible.
I don't know what the fuss about 32GB and OS X/macOS is, but I upgraded my iMac to 32GB (from 8) and, when booting, it still prefers to swap rather than use the memory it has access to. Over the next hour, it slowly starts using more memory and will eventually use more than 16GB. I suspect the OS isn't optimized to use more than 16GB efficiently.
?? The comment is based on the fact that they're equipped with quad core Skylake processors, therefore they probably have enough thermal headroom to handle what they actually have.
It's a long play because engineering those cases (including thermal) is very difficult. So Apple typically uses the same case through several revisions of the internals, to defray the costs.
What this means is that the very first model released with a new case is usually underpowered, because the chips are not quite as powerful and power efficient as the (upcoming) chips the case was optimized for.
For this and other reasons I tend to not buy the first version of a new Apple product.
According to MacRumors, there is a compatibility issue which means Nvidia might be out of the picture for a while: unlike the Polaris GPUs, Nvidia's can only drive dual 5K displays via DisplayPort 1.3, which is not supported over Thunderbolt 3.
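The bandwidth arithmetic makes the constraint clear. A rough sketch (the DisplayPort payload rates are round figures from memory):

    # Why 5K needs DP 1.3, or two DP 1.2 streams: rough bandwidth math.
    # From memory: DP 1.2 carries ~17.28 Gbps of payload, DP 1.3 ~25.92 Gbps.
    pixels_per_frame = 5120 * 2880
    bits_per_pixel   = 24    # 8-bit RGB, ignoring blanking/encoding overhead
    needed_gbps = pixels_per_frame * 60 * bits_per_pixel / 1e9
    print(f"{needed_gbps:.1f} Gbps")  # ~21.2 -> over DP 1.2, under DP 1.3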
That makes a lot of sense, but I haven't seen any hints that it's coming soon. Keep in mind, you really need an upgradable GPU for that, since these $1000 5K displays have a long life cycle.
And it would probably require rewrites of high-performance apps and games to be something closer to client/server setups to manage communication between the CPU/memory and GPU.
There is really no difference for anything but the most demanding applications though.
5K displays won't cost $1000 for much longer. Right now they are premium products but soon they'll be the norm. By then, adding a good enough GPU to the screen will make sense (because your laptop doesn't need to carry - and feed - more brains than needed to push pixels to the built-in display) and, if the user wants more, they'll use whatever is installed in their computers.
I got a chance to talk about how it works at LISA last year: http://selfcommit.com