Operating systems, like religion or politics, are a subject best not discussed in polite society.
But why let that stop me? Anyway, sorry in advance.
· · ·
Really, it’s great.
I’d like to claim this was a serious decision and not merely bike shedding. (A popular geek term for Parkinson’s Law of Triviality, popularized by the FreeBSD community – which is itself more popular than OpenBSD.)
I’d also like to claim everything on my server is still working and not going to break.
Those claims are not verifiable.
Linux used to feel like the contrarian alternative OS for discerning computer enthusiasts, but now that it powers the Android phones half the world keeps in their pockets, using it feels less like an act of rebellion and more like choosing the OS of corporate super states. So this is probably 80% fashion, 5% post-facto rationalization, and 15% completely legitimate rationales.
· · ·
Everything about modern operating systems is basically impossible and incomprehensible, as a general rule.
That’s what makes OpenBSD so great. It really breaks that rule.
The Incoherence of Modern Linux
I run my own Unix server and web sites and programs and experiments for fun.
Fun for me is taking things from idea to software, learning how stuff works, making computers do interesting things, and creating and sharing stuff.
Depending on what I’m doing I want to get into the “guts” of things and be close to the machine.
I’ve been using Linux in various ways on desktops and servers for 20 years. (I first used Red Hat Linux in the summer of 1997, during high school, after spending a weird summer getting a taste of overpriced Sun workstations.)
But the last few years of using Ubuntu on this server have felt off.
Maybe it was the second time Ubuntu changed init systems and I had to learn yet another one. Or maybe it was when I looked at top and didn’t recognize a bunch of stuff. (Mostly systemd’s processes and dependencies and god knows what else Ubuntu has on by default.)
Linux has always seemed like a beautiful mess – never particularly coherent – but that didn’t bother me much until recently.
Now I’m just kind of annoyed with it. It feels – nonsensical.
Minimal Viable Unix
OpenBSD takes a different approach.
It’s quiet. Things are off by default. You have to figure out how to turn them on, and in the process learn enough about them to run them responsibly.
The initial set of “base” software is spartan by modern standards, but more than enough to do what I need. Installing software from ports and packages is straightforward.
When I look at the process list, there are no surprises.
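As a minimal sketch of what “off by default” looks like in practice – assuming a recent OpenBSD release where rcctl(8) manages services, and using the base-system httpd(8) as the example – turning something on is explicit and small:

```shell
# Nothing listens until you say so: enable the web server
# that ships in base, then start it.
rcctl enable httpd
rcctl start httpd

# See exactly which services the system will run at boot.
rcctl ls on
```

There is no hidden magic beyond this; rcctl just records the decision in /etc/rc.conf.local, which you can read yourself.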
It’s just enough operating system.
Things don’t happen unexpectedly.
The system follows the principle of least astonishment. Things are predictable, in a good way.
The installer is text based and runs in a few minutes.
Releases happen regularly, every six months.
It all comes together and feels solid and coherent, rather than just disparate unrelated pieces.
People mention the quality of OpenBSD documentation, but it was hard to realize how bad things were in Linux or MacOS or other places until I started to use a system with really good, well written, comprehensive man pages.
Rather than futzing around on the web with sources of questionable quality, or reading manual pages that too often are inconsistent with the actual working version of the software, OpenBSD man pages just work and are great.
It’s the first system where reading the manual seems to not just be reasonable advice to start, but most of the advice you’d need to solve the bulk of problems.
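The system even points you at where to begin; for example, OpenBSD ships a manual page that is literally a post-install checklist:

```shell
# The canonical "what to do after installing" checklist.
man afterboot

# Search manual page descriptions by keyword when you
# don't know which page you need.
apropos "packet filter"
```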
In an alternate reality I became a weird grizzled systems administrator, but in this world I’m a product manager who tinkers with this stuff on the side.
I’m tired of the operating system I use feeling like shifting sand – arbitrarily changing things and breaking and being inconsistent.
I don’t need my Unix server to break backwards compatibility every few years in random, unpredictable ways, I want it to have some stability and continuity over the years in how it works and how I maintain it.
I’m not averse to learning new things and adopting new technologies, but I want it to be for valid reasons, not just a random walk wherever the whims of some random corporate benefactor lead.
I mean, probably not? But if the software you rely on is so confusing people think that may be a reasonable explanation for its complexity, that seems worrisome. (Remember when I said operating systems were like religion or politics and apologized in advance? Now it makes sense, right?)
The biggest security flaw in any system I’m using is generally me and the software I write, not low level operating system exploits, so security is not really my focus.
I appreciate OpenBSD’s focus on security, mostly because it leads to making the system easier, more coherent, and better. A focus on minimizing risk, attack surface, and making coherent, understandable, robust systems seems to have led the project to a good place.
Hardware compatibility – some hardware appears to be harder to get working (or purposefully doesn’t work, as a result of the project not taking closed-source binary blobs). This doesn’t matter to me on servers, but it does on a desktop (I’m in too deep with MacOS, Thunderbolt 3, and this stupid 5K monitor I bought).
Performance – me and my personal projects are extremely unpopular so this is not really an issue for me. If you are working on something where scale and performance at the OS level actually matter, you probably have strong feelings about which Linux kernel you’re using and low level optimizations and file systems and things I don’t worry about. OpenBSD performance is fine for me.
Virtualization – support for OpenBSD in modern VPS hosts is a bit rarer. Vultr is probably the easiest to get it working. I managed to get it to work on Linode but that is probably more trouble than it’s worth.
Updates – things can be slightly more complicated than apt-get upgrade, though you can get pretty close if you try.
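For reference, here’s a rough sketch of what routine maintenance looks like, assuming a recent release where syspatch(8) is available for base-system patches:

```shell
# Apply binary security patches to the base system.
syspatch

# Update all installed third-party packages.
pkg_add -u
```

Upgrading between releases (every six months) is a separate, documented procedure in each release’s upgrade guide rather than a single command.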
Support – OpenBSD doesn’t seem intended to solve everyone’s problems, or to be the most accessible or easiest software to start using, and the community is a lot less interested than (for example) the ’90s Linux community in convincing anyone to use its software. (I.e., people are smart and helpful, but you probably can’t expect the same support from the software or the small community compared to some of the alternatives.) Which is fine, for me, but if you’re just trying to figure out how to use Unix for the first time, maybe just stick with Ubuntu.
Or go with OpenBSD because it’ll be cooler and more interesting and make more sense and everyone will be like “wtf” when you tell them that’s what you use.
This is a first attempt at creating a canon of books for product managers in technology companies.
This is not a value judgment that these are necessarily the “best” books or a comprehensive list – but a clear declaration that these are influential enough that being familiar with the ideas, concepts, and vocabulary represented is relevant to effective product management.
It is a value judgment in the sense that I have read and recommend them.
Pioneered or popularized concepts like “affordances” along with a highly useful framework for thinking about usability, design, and the evaluation of products. This is the one book above all others I recommend for those looking to understand and develop product, design, and usability insight.
Why is software development hard? Why is nothing ever on time? Why doesn’t adding more engineers to a stalled project accelerate it? Will we ever get better at any of it? These essays by Brooks exploring these questions have turned out to be prescient, timeless, and a fascinating time capsule as the decades have gone by. Software is still fundamentally about conceptual integrity, managing complexity, and the nonlinear increase in communication that comes with large-scale software.
Something of a cult classic in Silicon Valley now, this is a surprisingly practical guide to management. How should someone who has an infinitely large possible space of work prioritize? (Answer: understand what high leverage activities are, and do those.) There’s something oddly satisfying in the simplicity and clear guidance – I’ve found it tremendously useful as I take on more direct people management roles in my career.
If you want to understand the technological underpinnings of pretty much our entire modern computing infrastructure, you should understand UNIX and C. I firmly believe there is no better book about any programming language than K&R. The book in many ways mirrors the language it documents – simple but powerful, straightforward but opinionated, and concise.
You’ve probably heard something called “disruptive” a billion times by now. A few of those uses actually reflect the distinction between sustaining and disruptive innovations as defined here; after reading this, you’ll know the difference.
Backed by extensive case studies and quantitative research, The Innovator’s Dilemma posits that the reason great companies soar, plateau, and then decline is rarely bad management or incompetence. Instead, it is because highly qualified managers apply decision-making criteria and processes that all but guarantee the next innovation will be more likely to thrive outside the current successful business.
In many ways the follow-up, “The Innovator’s Solution”, encompasses the first book but has more practical advice on how to positively affect organizations to compensate for these issues.
Vanity metrics, learning milestones, pivots, growth engines – The Lean Startup began by applying the Toyota Way (lean manufacturing) concepts to technology products and software, but goes on to define and document a very coherent approach to rapid product and business model ideation, iteration, measurement, and growth. This is another one where you should probably read the source material since you’ve likely heard these things misapplied a dozen times in bad blog posts and offhand.
I abandoned and returned a lot more games these past few months than usual, because apparently I mostly hate video games right now.
Ron Gilbert set out to make a game that felt like a “lost” Lucasarts adventure and succeeded at that and beyond. It’s a great game. It may spend a bit too much time in self-aware nostalgia for some players, but the writing, puzzles, wit, and charm more than make up for it.
And also you can turn the in-jokes off with a menu option, along with changing toilet paper orientation and fonts.
Kickstarter nostalgia-fueled adventure game revivals tend to be heartbreaking disappointments; this is the exception.
The Apollo Justice: Ace Attorney sequel nobody asked for but turns out we all needed, and they had to brand as Phoenix Wright this time.
Available in the US only as a 3DS digital download until Capcom decides to sell it on iOS for a fraction of the price, this is probably the most difficult of the legally available Ace Attorney games to play in the US (you’ll need a 3DS – but really, if you never got a 3DS, go get a New 2DS XL now, you deserve it).
Anyway, as always, Ace Attorney games are the best things in the world, and I’m glad we exist in a piece of the multiverse with them.
Previously, I meant “Phoenix Wright” games are the best. “Ace Attorney” games featuring Miles Edgeworth are actually just pretty good, not the best in the world.
It’s clear they were hampered by trying to distinguish these from the “main” entries in the series, and so added some third-person adventure-game-style gameplay in addition to the dialog, but it doesn’t work that well.
The “combining logic” and clues in Edgeworth’s head is a neat idea, but is somewhat convoluted in practice.
But it’s still Ace Attorney – writing, wit, characters, and weirdness is all there.
It’s Star Trek VR! Finally. We as a people have accomplished that.
It’s easy to get very in-character playing, as you’re talking to the crew in multiplayer. And to get loud and animated.
Star Trek Bridge Crew actually has voice recognition even in single player so you can give verbal orders to the crew! Not just multiplayer.
I mean, I wasn’t actually using that or playing multiplayer when my wife asked me what was going on with all the noise, but I could have been.
Like all of our current “first gen” VR experiences, we will laugh at how awkward and ludicrous this is when the technology gets better, and it’s as awkward as they come. You can forgive it because it’s Star Trek, but it is sort of objectively meh to use a simulated touchscreen in VR with HTC Vive controls, and it’s J.J. Abrams Trek, not real Next Gen or DS9 Trek.
This is the spiritual successor to System Shock 2 that I thought I always wanted, but the number of jump scares from furniture and cups and inanimate objects turning into terrifying aliens in the first 2 hours made it impossible for me to continue.
At some point I think I’ll figure out a way to play this – it seems like it would be good?
Seems weird that I never completed Cave Story (I remember getting annoyed with some part of it many years ago and losing interest).
Anyway, despite buying and trying to play Cave Story+ it turns out playing the original pixelated version at 16x9 using nxengine-evo was much more pleasing.
I get that it’s pretty good, and the art, sound, and design are great – it has character, but I don’t quite see why it has achieved such a cult following.
Professionals and power users have been upset with Apple’s high end computers for some time, but the last six months it’s come to a boiling point.
I don’t think much in the last round of updates will change that.
I’ve worked in big companies enough to understand that external perception and internal reality diverge a lot more than people know, so it’s hard to know how this happened.
I’m not particularly interested in the explanations – for me the interesting point is one of strategic misalignment and the opportunity for Apple to do something really bold to address it.
Why Pros Are Angry
Basically, if you want the absolute fastest processing and graphical power – high-power, high-thermal desktop computing – you are hampered. Apple isn’t just losing from a price/performance perspective – in some cases it’s not even competing anymore. The 2013 Mac Pro going essentially un-updated for years is the most grievous offense, but the more recent MacBook Pro, without decent GPUs or keyboards and with idiotic touch UIs instead, is just offensive to those of us who actually work on computers for a living.
Exhibit 1 – The 2016 MacBook Pro
Exhibit 2 – The 2013 Mac Pro is ancient
Exhibit 4 – The Hackintosh
Basically, people are unhappy, and often the best option is to make an illegal hacked up machine from parts that has better performance. Or just use a Windows/Linux PC with better components.
What Is The Point Of The Mac
Today Apple is, from a business perspective, an iPhone company.
The iPhone is the most successful consumer product in the history of consumer products by just about any objective measure. It is unclear whether we will ever see another consumer product as successful in my lifetime.
Given what I understand of Apple’s functional internal structure (rather than business units) – one would expect all the other product lines to suffer as Apple puts more and more of their efforts into the business that makes all their other businesses seem small.
Interestingly, in my experience this is true even if benevolent management recognizes the problem and tries to adjust staffing, compensation, and priorities to invest in other things. Because the potential rewards and recognition from working on “winning” supported projects end up influencing individuals’ project decisions, this is challenging. The rich get richer, in that successful projects attract better talent. See also: The Innovator’s Dilemma.
People like me look at the Mac as a general purpose computer with which to do interesting things (write, program, create art, type in terminal windows for a few decades). Historically the Mac has been Apple’s primary product that is created and sold at high margins.
The problem is that isn’t the Mac’s purpose anymore from a macro business perspective – it’s to support the iPhone.
The purpose of the Mac is to enable the creation of software and content experiences that make iPhones better.
And since the current market scale between laptops, smartphones, and new devices and experiences is unlikely to change, this is likely the reality for the next decade. There will be more smartphone users than computer users, and they will have a faster upgrade cycle. It’s a market that makes others seem tiny.
So it may be time to embrace that reality.
(Note that this equation changes if iPhones/iPads become platforms to create iOS software, but there’s been little to indicate that is planned in the near term.)
When computer products are compared to automobiles, the trite analogy now is that desktop and laptop computers like the Mac are trucks, while smartphones and tablets are cars.
Most people just need a car. Sometimes you might need a truck for specific purposes. Businesses need trucks.
The analogous problem here is that Apple’s car business is so large it seems almost irrational to care about the trucks.
But you need the trucks to make cars – they haul in the parts and people needed.
The problem is trucks have stagnated to the point where the truck drivers who bring the parts to assemble their cars are miserable and looking to buy something else.
The weird thing is Apple only allows Apple-trucks to bring them parts for Apple-cars, so when they stop buying Apple trucks, Apple cars suffer.
Commodify Your Complement
Many of the big successes in the tech business world have come from a strategy of commodifying your complement. The classic example was Microsoft creating a standard operating system that worked on a plethora of commodity computer hardware. Anybody could assemble PCs, so fierce competition followed, which made PCs cheaper and more prevalent.
Which was great for Microsoft, because every PC sold meant another Windows license.
Windows was the product, PCs the complement that became further commodified – you could buy any IBM PC compatible system and run Windows and do what you needed.
Pundits suggested Apple follow this same course (and they briefly did in the ’90s with clone manufacturers before Steve Jobs returned), but it never really made sense, because Apple computers weren’t about commodity hardware and solving all problems – they were about charging a premium for an integrated experience that worked. (This was much harder in the ’90s; Windows “worked” on all kinds of hardware, but poorly.)
So it’s 2017 and I’m making the totally discredited suggestion that Apple sell its OS and let hardware manufacturers compete in hardware?
Understand Your Complement
My hypothesis is that Apple needs as many developers using their software as possible to maintain dominance in smartphones and the next generation of hardware (AR, VR, whatever).
Their current high-margin computers are making this somewhere between hard (programmers) and impossible (virtual reality developers), though the most recent WWDC keynote and external GPU enclosure suggest they are trying to take this from impossible down to hard.
Let’s take things to one extreme for the sake of argument.
MacOS is already “free” – Apple has stopped charging for upgrades. The cost of MacOS is just hidden in the cost of buying a Mac, and Apple wants everyone to have the latest version for ease of maintenance and market size for developers.
But what if MacOS was free and ran on commodity hardware (which it basically does, already, if you bend the law and make a Hackintosh.)
A few interesting things happen here.
The first is less direct Mac profits – via cannibalization of the existing Mac product lines.
But there’s some potentially offsetting gains that are better in the long run –
- More MacOS users – via decreased cost of hardware, increased hardware support
- Increased innovation on the platform – via (1) and more students, starving garage developers, hobbyists choosing MacOS
- Better, stickier app ecosystem on iOS and new Apple hardware – via happier, larger pool of developers
- Support for virtual reality, augmented reality, and other hardware-dependent hacking becomes easier
- The demand for Apple services (iCloud, Music, etc) goes up significantly, especially for current iPhone users who also adopt MacOS powered desktop/laptops
There’s less extreme iterations on this –
- MacOS supports more hardware but licenses are only available with an iOS device purchase
- MacOS licenses are sold to support some homebrew hardware but with limited/no customer support
Crappy, ugly, commodity hardware is fundamentally “off-brand” for Apple, and the nature of enabling MacOS to work across more hardware fundamentally leads to experiences that are sub-optimal compared to the fully integrated Mac hardware/software stack today.
There’s also a serious strategic discussion of whether the potential gains offset the revenue declines and other issues.
It’s easy to pontificate on these things externally, it’s a lot harder to make these decisions when you have hard numbers in front of you and shareholders to be accountable to.
And it’s hard to cannibalize existing business lines as an executive, people generally fight tooth and nail for short term gains over long term strategy that has risks.
Apple is a beloved company that is having trouble coming up with its next hit.
Hits take time and Apple has a cash hoard that can buy time, acquisitions, or a few small countries, any of which might help them at this point.
Getting developers on their side – getting a small army of Apple lovers tinkering to make the best tricked out, hot-rodding Macs instead of Windows and Linux boxes – may be one of the things that has immeasurable “brand lift” (imagine the ads linking Apple II homebrew computer club users and today’s garage hackers doing AR on weird looking Mac hardware) and helps cultivate a new generation of developers.
And there’s something fundamentally “Apple” about making desktop computers simple, easy, and affordable. That’s what the Apple computer was, deep down, and everything good (Apple II, Mac, iPhone) that followed.
It may be that by giving more software away, Apple will make their software and services available to more people, make them happier, and improve long term businesses.
Or it may just lose them a lot of money – if it was an obvious win, they’d probably already be doing it.
Either way, I’m typing this on a MacBook Pro with an abysmal keyboard and Touchbar, and it’s insane to me that this is the best they can do.
If they don’t start shipping better hardware or freeing their OS, Apple will lose key influencers.
Today I thought about how I wanted to change some things.
Then I opened my Emacs config and changed a couple of lines:
;; cursor
(blink-cursor-mode -1)
(setq-default cursor-type 'box)
Cursors should not blink. Cursors should be boxes, not lines.
Small victories, tiny bits of autonomy.
There’s something beautiful in this image split between Phoenix, starting alone with his mentor, next to himself years later as the mentor, surrounded by the people he’s bonded with.
· · ·
I finished Spirit of Justice yesterday.
Ace Attorney games are a precious thing in this modern world. I hope there’s another 15 years ahead.
I played surprisingly few games the last couple months.
Beneath the surface, it’s a masterpiece.
(Trust me – I’m in it.)
The original Deus Ex came out in 2000, and is now a cult classic. It was ambitious and brilliant and weird and also a bit of a mess because it tried to do so much.
But the essence of Deus Ex was that it used a first-person-shooter engine to create a first-person action adventure that wasn’t just about shooting things. Violence and shooting were one tool to solve problems, but by themselves would almost never work. Stealth, exploration, and outwitting your opponents through clever use of skills and the environment were key. (That, and the conspiracy-theory / the-Illuminati-are-real stuff.)
Despite the most recent entry, Deus Ex: Human Revolution, being a masterwork that revitalized the action RPG genre, this time the series feels like it has run out of steam and ideas. The gameplay and mechanics feel repetitive and dated rather than fresh after 5 years. The storyline is both incomprehensible (even for Deus Ex) and completely unfinished and unsatisfying. The whole thing is plodding; rather than leaving me wanting the next chapter, it just left me bored.
Where Mankind Divided fails, Dishonored 2 succeeds. As an action-stealth-play-as-you-want RPG, it enables all sorts of different, varied play styles. Lethal or non-lethal, loud or stealthy, indirect or head-on, and all manners in between.
The characters, voice acting, plot and more seem improved.
Dishonored made Dunwall feel real and interesting. The most remarkable thing is how vibrant and larger and varied yet cohesive the larger Empire of the Isles becomes in this sequel, and how exciting it is each time we see more of it. The level design and setting combines with the mechanics and story to create something spectacular.
The choices and how you play again feel like they have weight and impact the world. Choosing to sow chaos has repercussions. Seeing how Emily and Corvo have changed over the years was actually interesting. Very much enjoyed this one.
Despite more or less buying a 3DS to play this game, I never actually completed it. (I got through the first case and stopped.) Part of it was that playing on a 3DS annoyed me.
I then bought it for iOS when it came out and played through the second case and stopped. I got bored.
This time, though, for whatever reason, the love of Phoenix Wright games overtook me again as I completed the other three cases.
If you have never played Ace Attorney, the iOS re-releases are the easiest way to experience them; despite the flaws in the ports, it’s a lot easier than tracking down Nintendo GBA imports or DS versions now.
Anyway, I love these games so much, and I hope they keep making them forever.
The nice thing about building your own PC is you get exactly the parts you want.
The bad thing about building your own PC is it’s hard to know exactly what you want.
The last time I built a computer (about 2.5 years ago) was my first time building one completely from scratch in the modern era. (I’d cobbled together some tiny Linux boxes from barebones PCs, but hadn’t done the whole thing, and not for gaming.)
This is something I probably should be leaving to professionals. But I wanted the satisfaction of doing it myself.
I ended up with something that worked and ran modern games effectively on a weird 34” ultrawide monitor, but it looked sort of absurd and I’m pretty sure I never got the thermal situation right – the fans were loud and always running, and it seemed to generate what I thought was an inordinate amount of heat.
This is also the bad thing about building your own computer – how do you even know you did it right?
Anyway, I learned a lot –
- many cases are embarrassingly ugly
- many parts want to generate obnoxious lights
- if you’re not careful you will end up with a weird looking monstrosity that has branded lighted logos flashing everywhere
Clearly I did not learn “just let the pros do it next time” because I’m stubborn.
So spurred on by my need for Thunderbolt 3 support discussed yesterday, I embarked on a new PC building mission.
A beautiful but functional monolith, without obnoxious branding, windows, or colors.
Focuses on quiet computing so includes sound dampening and quiet fans.
I bought the “blackout edition” which makes even the internals and fans and everything black. It’s nice.
Chose the i5 since it seems like overkill for gaming already and wanted less power/heat than dealing with the i7.
This is a super popular cooling solution that was recommended to me.
Seems OK. It was kind of a pain to install, but it seems far quieter and more effective than using the stock CPU fan like I did last time. If I do another build I might try something different / quieter / more expensive.
Decision was driven by the need to support Thunderbolt 3 and the latest generation of Intel chips. ASUS hardware and software and BIOS etc. seems relatively inoffensive and functional.
Went with the “stock” Z270 board – seemed unclear what value most of the higher end motherboards with various add-ons actually did.
The add-on card needed to drive my LG UltraFine 5K display via a PC over Thunderbolt.
Damn RAM is fast now. Very fast.
These tiny little M2 drives are mind-boggling. Progress in size/speed/cost even over the past 2 years is significant. Definitely splurged on this because I was sick of worrying about disk space.
Has been super quiet and efficient.
See also: the PCPartPicker list for this build.
Everything actually went really smoothly this time other than I was somewhat confused on how to properly set up the CPU cooler. I think it went ok, but I did spend like an hour watching YouTube videos of people doing it first.
Also I plugged in the ATX power but forgot the separate CPU power and spent 30 minutes checking everything but that – rookie mistake, but, whatever. Helps to build character? It makes the end product feel like more of a triumph, maybe.
I now have a system that is quiet and sleek, with just a single white LED on top to indicate power – no embarrassing noise or ugly flashing lights under my desk.
Despite being officially unsupported, the LG UltraFine 5K Display can mostly work with a Windows PC that supports Thunderbolt 3.
You can even use your existing GPU to drive it with the right hardware. The USB-C ports on the monitor are recognized and work properly. The speakers work too. (They’re terrible, but they work.)
Major caveat: Only 4k as max resolution, 5K is trickier right now.
This is fine for my usage – gaming on PC, everything else on Mac. But if you’re looking for true 5K you may need to get the pricier Dell 5K monitor or try one of the few motherboards that claim to support 5K out of the box mentioned below.
I suspect most recent Intel boards with a 5-pin Thunderbolt header will work, as it did for John Griffin, who used a setup similar to mine but with a Gigabyte motherboard.
After connecting the add-on card to the motherboard, you do an external connection from the Displayport on your GPU card to a mini-displayport input on this card with the included cable.
Then connect to the LG 5K monitor with the Thunderbolt-3 cable.
The display powered up and worked at boot instantly for me, including showing the POST screens.
Support for the USB-C hub and speakers on the monitor required me to make a few BIOS changes to enable Thunderbolt-3. I guess TB3 support needs to be enabled explicitly, but somehow the Displayport passthrough on the card doesn’t require it? Which was convenient but very confusing.
On a PC with Windows 10 –
- 4K – 3840×2160 @60hz
- Speakers – (but again, why) and volume
- Hot swapping the cable between Mac/PC
- USB-C hub (probably)
What doesn’t work –
- brightness adjustments (though maybe that’s fixable)
- reliable USB device recognition on hot-swaps
- the webcam (haven’t tried to figure out why)
The USB-C hub passthrough was recognized and appeared to work, but wasn’t reliable for me on hot swaps. It was always fine on a fresh boot.
That’s hard to debug or speak definitively on as I’m only using it with legacy USB-A peripherals (keyboard, mouse, speakers) that are connected to a cheap Amazon Basics hub, which is then connected via Apple’s USB-A to USB-C adapter. (I think the USB-A hub is the unreliable part.)
Hot swapping the cable generally worked fine, except for flakiness in recognizing USB devices, and sometimes plugging individual devices in and out of the hub fixed it. It worked enough of the time that I suspect swapping out my old USB-A hub may fix it.
I bought the 2016 Macbook Pro a few months ago, and part of my excitement around it was the LG UltraFine 5K Display that Apple announced at the time.
Its native resolution is 5120×2880 at 27”, so it has the same screen real estate as a 27” 2560×1440 display but with double the pixels per inch (and four times the pixels). It looks really good!
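That “double the pixels per inch” claim is easy to verify – PPI is just the diagonal pixel count divided by the diagonal size in inches. A quick check:

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch: diagonal pixel count divided by diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_in

# Both panels are 27": the LG UltraFine 5K vs a classic 2560x1440 display
ultrafine = ppi(5120, 2880, 27)   # ~218 PPI
classic = ppi(2560, 1440, 27)     # ~109 PPI
print(round(ultrafine), round(classic), round(ultrafine / classic, 1))
# → 218 109 2.0
```

Same physical size, exactly twice the linear pixel density – which is why macOS can render it as a pixel-doubled 2560×1440 desktop.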
My challenge: I have a gaming PC running Windows so I can play real computer games and use my HTC Vive, and I didn’t really want two monitors on my desk. I also really like having my monitor act as a USB hub and KVM, since I use both a Mac laptop and a desktop PC.
Previously I was using the LG 34UM95 34” ultrawide monitor for this – it supported Thunderbolt 2 input from my Mac and Displayport from the PC, and happily swapped the USB devices between them.
So I’m losing the KVM aspect with this setup – I have to actually swap the cable between the desktop and laptop.
But the gains are worth it – 5K retina resolution on the screen I use most makes a huge difference – and while I enjoyed the 21×9 aspect ratio for gaming, it’s much less hassle and equally nice to go back to 16×9 and up the resolution to 4K. (Performance at 3440×1440 and at 4K on my GPU, a 980 Ti, is usually about the same.)
Other Compatibility Notes
The LG UltraFine 5K also works with the older Macbook Pros I tried, though at lower resolutions.
As noted in Apple’s support page, most Macs from the past 3 years will drive it at 4K – 3840×2160 at 60hz – the 2014 Macbook Pro I tested worked as expected.
Not noted on that page but tested and verified by me: my first generation 2012 Macbook Pro with Retina Display will drive the display via the Thunderbolt 3 (USB-C) to Thunderbolt 2 adapter, but only at 2560×1440. I couldn’t get USB devices to be recognized or work, just video.
While it doesn’t make sense to buy this monitor except for a recent generation Macbook Pro that can drive it at 5K, you will (in most cases) be able to use this as a secondary display for older machines if needed, which is nice.
Alternatives on the PC side – the Gigabyte GC-Alpine Ridge looks like it should also work: it has dual Thunderbolt 3 ports and (seemingly) two Displayport inputs. It claims to enable 4K video, and I suspect it supports dual stream and could drive 5K in some cases, but from what I could determine nobody seems able to actually buy this card in the US or test it.
Gigabyte also produces some motherboards that natively support 5K output over Thunderbolt 3 –
I assume these would be limited to the integrated graphics on the board, so wouldn’t be interesting for my gaming setup, but at least one person has gotten the Z170X Designare to drive 5K on the tonymacx86 boards.
The LG Ultrafine 5K is on sale (30% off) at Apple through the end of March so if what was holding you back was PC compatibility, there’s now at least a few documented examples of it working. Mostly.
It’s still early days of super-high resolution displays, so things are a little trickier to get working. I suspect in a year or two 5K and 8K monitors and supporting motherboards and GPUs will be much more prevalent. If you want to live on the bleeding edge now, you have to be picky about your parts.
Gershon, a professor of anthropology at the University of Indiana, Bloomington, spent a year interviewing and observing job seekers and employers in Silicon Valley and around the US. Her new book, Down and Out in the New Economy: How People Find (Or Don’t Find) Work Today explains that branding is largely a boondoggle advanced by inspirational speakers and job trainers. It doesn’t help people get jobs. But it does make us more accepting of an increasingly dehumanized job market that treats workers as products rather than people. […]
When people think of themselves as brands, they are speaking the language of reputation, appearance, and marketing. It’s hard to switch from that to a discussion of moral responsibility. […]
“Maybe instead of thinking about people as property or businesses, we could think of people as craftsman.”
So the conclusion was your personal brand won’t help you get a job. But it may make you more accepting of dehumanization in the postmodern economy.
I’ve been thinking about the complexity of modern technology stacks. See You probably like bad software from earlier this week.
Some of the interesting approaches to dealing with this eschew operating systems almost entirely.
Finding ways to reduce attack surfaces and create more maintainable systems is critical in a world where the cost of hardware approaches zero, because we’re going to have a lot more of them.
Nobody (no person, no company) is going to be able to effectively manage an infinite number of unix systems spread across dozens of devices indefinitely. I can barely maintain the one unix server running this site, and I have a computer science degree and two decades of experience running it.
Some interesting approaches / existing work –
UniK (pronounced you-neek) is a tool for compiling application sources into unikernels (lightweight bootable disk images) rather than binaries. UniK runs and manages instances of compiled images across a variety of cloud providers as well as locally on Virtualbox. UniK utilizes a simple docker-like command line interface, making building unikernels as easy as building containers.
Unikernels are interesting – throw out the OS and write monolithic kernels with a single application on top. Somewhere between “research concept” and “possibly a production-ready technology” but this toolkit makes testing and applying unikernels in various current forms pretty straightforward – I had some basic Go code running on a rump kernel very quickly.
Unikernels are specialised, single-address-space machine images constructed by using library operating systems. Unikernels shrink the attack surface and resource footprint of cloud services. They are built by compiling high-level languages directly into specialised machine images that run directly on a hypervisor, such as Xen, or on bare metal. Since hypervisors power most public cloud computing infrastructure such as Amazon EC2, this lets your services run more cheaply, more securely and with finer control than with a full software stack.
For a long time, we were unhappy with having to care about security issues and Linux distribution maintenance on our various Raspberry Pis. Then, we had a crazy idea: what if we got rid of memory-unsafe languages and all software we don’t strictly need?
MirageOS is a library operating system that constructs unikernels for secure, high-performance network applications across a variety of cloud computing and mobile platforms. Code can be developed on a normal OS such as Linux or MacOS X, and then compiled into a fully-standalone, specialised unikernel that runs under a Xen or KVM hypervisor.
Library OS in OCaml.
Rump kernels enable you to build the software stack you need without forcing you to reinvent the wheels. The key observation is that a software stack needs driver-like components which are conventionally tightly-knit into operating systems — even if you do not desire the limitations and infrastructure overhead of a given OS, you do need drivers.
Uses NetBSD drivers and enables a large amount of existing software to more or less “just work” as a unikernel – some example packages include mysql, nginx, leveldb, haproxy, rust, tor, zeromq.
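To make “a single application as the entire software stack” concrete, here’s a minimal sketch – ordinary Python, purely illustrative, not tied to any particular unikernel toolchain – of the kind of single-service program a unikernel image packages. The point is what’s absent: no shell, no init system, no other processes, just one service and the drivers it needs.

```python
import socket

def handle(line: bytes) -> bytes:
    """The entire 'application': echo each line back, uppercased."""
    return line.strip().upper() + b"\n"

def serve(host: str = "0.0.0.0", port: int = 8000) -> None:
    """Accept one connection at a time and echo lines back.

    In a unikernel build, this loop plus a network driver would be
    (roughly) the whole bootable image.
    """
    with socket.create_server((host, port)) as srv:
        while True:
            conn, _addr = srv.accept()
            with conn, conn.makefile("rb") as lines:
                for line in lines:
                    conn.sendall(handle(line))

if __name__ == "__main__":
    serve()
```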
I’m uninterested in the latest viral content.
I want more exposure to things that are good, even if you don’t want to share them. Or can’t share them in a moment easily.
Or things that won’t get repeatedly re-shared because they are actually complex and require thought and therefore less likely to be a meme.
Or they matter too much to you to share without thought.
Engagement optimized social media underexposes these things systematically.
It’s like we have a biological ecosystem where the most infectious virus won, and we’re slowly seeing all the complex organisms die.
This is a hard problem. All the incentives around attention and money are generally going the opposite direction.
We’re preserving the history of video games, one byte at a time.
Frank Cifaldi’s destiny is this foundation. (At least, that’s what I’ve been telling him.) Preserving video game culture is important – it’s great that Frank has a structure and team to do this full time now. First special collection is great – NES Launch Collection
I think Twitter actually has some sort of weird philosophical stance where brands, consumers, and Russian propaganda bots are all people, and they all stand on equal footing, and must be treated equally. Everybody in charge at Twitter was like “Wow, we live in a world where corporations have all the same rights as people and… it’s turned out great, we better emulate that!”
So great to see Andrew writing. Also, he’s right that foundational assumptions in networks like Twitter (all nodes are equal, anyone can contact people) have huge implications. And they’re hard to change. (See: death of Orkut.com)
Not sure how I missed Jon Glaser getting a new show until my brother sent it to me.
Well, I do. Probably because I pay no attention to anything, and it’s on truTV – and when John Hodgman plugged the show on Comedy Bang Bang and explained he was playing Gear-i, a Siri-like artificial intelligence on a phone that helps Jon Glaser choose what gear to buy, I assumed he was fucking with the audience. But that’s also true, and it’s awesome.
Last month, my coworker casually told me he still has a 2001 era DoCoMo phone, which is one of the first phones to have emoji […] I then took a 10 hour flight to Europe and, for lack of better things to do while watching every movie that came out this year, I drew every one of those emoji as a sprite. 166 emoji in total, 12x12px each, in one of six colors
Amazing hand-tuned tiny pixel modern usable font rendition of one of the earliest emoji fonts.
This website displays a collection of twelve code poems, each written in the source code of a different programming language. Every poem is also a valid program which produces a visual representation of itself when compiled and run.
This is inspiring – both in concept and execution.
The challenge with software is it gets worse over time.
It seems counter-intuitive that the more people work on something, the longer it takes to get done, but that’s a well-established principle in software (Brooks’s law).
What’s harder to fathom is that it also gets worse. But that is the default outcome. Outside of extraordinary circumstances and extreme measures taken, it’s what you should expect.
Software is a world created by thought, where real work and progress over time turn into an endless ouroboros: engineers making software that breaks other software, then having to fix the broken thing, only for it to break again in new ways.
If you’re an engineer, it’s very likely you are making software worse every day.
It’s ok, most people who work on software are.
If you are using software, you probably use bad software. You probably like bad software.
It’s ok, most people like bad software.
The alternatives are usually worse.
Software gets worse over time because what people change often isn’t related to making the software better in a coherent, measurable way.
It is not fixing bugs (boring! not fun!) or improving security (nobody cares until it’s too late!) or making things faster (who cares! computers and phones are faster every year! Moore’s law makes optimization forever unnecessary!)
What’s sexy and interesting in the world of software is adding features, redesigning a user interface, or integrating it with some other unrelated piece of software to help it (synergy!) or monetization – which these days usually means spying on users to better target ads, serving ads, delivering ads, or in rare cases selling things people don’t need more efficiently.
Often this is how individuals working in software show they did something and that’s how they are judged.
(People brag about the new software they make, nobody brags about the terrible awful bugs they had to fix.)
But if there’s a piece of software people are already using, by definition, it is useful and used.
Most of the above is likely going to get in the way of that existing usage.
If you’re not fixing bugs or improving performance – and unless you’re properly testing and measuring things (also boring!), you’re probably not – then you’re probably harming both. You’re making something that people use worse.
It’s probably attempting to solve a company’s problems, not users’ problems. And the accidental outcome is making worse software.
Again, that’s ok, most people make bad software.
Most people use bad software. Most of the software industry is predicated on selling, supporting, and monetizing bad software and making it worse over time.
Underneath It’s Even Worse
The perverse incentives of individuals who work on software are one thing – but it’s when you start moving down the levels of abstraction that things get really scary.
Let’s start with operating systems.
Now the accumulated cruft, random interface changes, inconsistent features, and whatever “me too” garbage was thrown in to remain “competitive” don’t just impact a little corner of the software world via an application – they have the potential to fuck up every process and program running on top.
Eventually, the weight of this nonsense led people to jump – leap with joy! – from their computers to phones.
Snobs/weirdos like me in linux/unix/macos/beos/amiga/whateverbsd land were somewhat insulated from this, but it’s not hard to understand how using an iPhone 4S with a 3.5” screen and a consistent touch interface would be a massive improvement over any version of Windows released after 1995 – which, we too quickly forget, was basically a wasteland of crashes (blue screens of death) and virus-filled malware.
“Getting rid of the garbage on your parents’ windows machine” is an annual ritual for many people.
2007-10 era smartphones were a clean slate – there just hadn’t been enough time for programmers, product managers, marketing hacks, sales guys, and aesthetic-obsessed designers to fuck it up by larding on complexity.
For those in the future who are baffled, let me set the scene.
It’s 2017, and the Apple iPhone 7, a device previously heralded as one of the most beautiful, usable, and understandable products, has a tentpole feature called “3D touch” – a rebranding of the disastrously named “force touch” – that performs different actions depending on the pressure applied while tapping.
Which is different than the different actions performed based on the duration of tapping.
So trying to rearrange the icons on the home screen – already an undiscoverable action, but one users learned over a 10-year period – can, depending on how hard you press, accidentally trigger a nonsense “app menu” which by default includes a single item – “Share.”
Nobody wants to “share” their apps. That is solving developer and company problems (use more apps you don’t need!) not user problems.
And the few that do want to share an app definitely don’t want to do it by pressing REALLY HARD on the icon. The only reason people press really hard on an icon is to move it – something they had to discover for themselves over 10 years, since there’s no affordance for it.
And the iPhone is probably one of the best case scenarios. Some people at Apple are really good at this stuff – they just seem to be increasingly overruled or making bad decisions.
It’s not just them. These things seem inevitable.
Laws Of Bad Software
Given enough popularity, hardware will be mediated by bad software trying to solve corporate problems, not user problems.
Given enough additional code, all software will become bloated and incomprehensible.
Now imagine these software stacks – applications built on frameworks using libraries dependent on operating systems with kernel bugs all packaged into containers deployed on hypervisors built on other operating systems running on virtual machines managed via orchestration systems that eventually somewhere runs on real CPUs and memory and disk drives.
That doesn’t make any sense because nothing makes sense anymore in software.
We used to call people who understood things end to end “full stack engineers” but that’s a bit laughable now because nobody really understands anything end to end anymore.
This Is Your Program, And It’s Ending One Millisecond at a Time
If you aren’t measuring latency, it’s probably getting worse, because every stupid feature you’re adding is slowing things down.
Most software would be improved if people just started turning features off.
Turning features off generally doesn’t sell additional units to consumers, close a sale, or make for a PR fluff piece, so people only do it in times of extreme failure or consequences.
I’ve regularly seen an inverse correlation between the amount of engineering time spent on features and their usage. I’ve seen a direct correlation between the amount of time spent on features and higher latencies pretty much constantly.
Some software cultures understand this and put tight controls in place to prevent regressions (because, you know, it turns out to be a real revenue and/or usage problem when people abandon your software because it’s too slow) but if your software is already painfully slow due to low standards and atrophy good luck convincing people to fix it.
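A minimal sketch of the kind of measurement that makes regressions visible – record durations and watch the tail percentiles, since averages hide the slow cases. (The names here are illustrative, not from any particular library.)

```python
import time
from statistics import quantiles

class LatencyTracker:
    """Record operation durations and report tail latency.

    Regressions show up in p95/p99 long before they move the average.
    """

    def __init__(self):
        self.samples_ms = []

    def record(self, duration_ms: float) -> None:
        self.samples_ms.append(duration_ms)

    def percentile(self, p: int) -> float:
        """p in 1..99; e.g. percentile(95) is the p95 latency in ms."""
        cuts = quantiles(self.samples_ms, n=100, method="inclusive")
        return cuts[p - 1]

    def timed(self, fn, *args, **kwargs):
        """Run fn, record how long it took, and return its result."""
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        self.record((time.perf_counter() - start) * 1000)
        return result
```

Wire `timed` around the hot paths, chart `percentile(95)` per release, and a slow feature can’t hide behind a healthy-looking mean.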
Software bloat is the seemingly inevitable and sad reality of nearly all software.
As the layers of complexity start to overwhelm end users, you can only imagine what it’s like for the poor programmers stuck making all this work.
It’s layers upon layers of filth nobody wants to even wade through.
Kind of like how you’d be willing to pay lawyer-like fees just to avoid dealing with legal contracts? Well, tech is like that now too – hence tech wages.
The terrible truth of software security isn’t that people are incompetent or lazy (though that probably happens sometimes.) It’s that the interactions between components, dependencies, and overall systems are now so awful that they may be impossible to secure at a reasonable cost.
That’s not a metaphor – literally, the costs of connectivity may outweigh the benefits according to insurance risk assessments –
“A future where the annual costs of being connected outweigh the benefits is not only possible, it is happening now. According to our project models, annual cybersecurity costs in high-income economies like the U.S. have already begun to outweigh the annual economic benefits arising from global connectivity.”
How To Stop Bad Software
1. Use dead software

Dead software can’t accumulate additional bugs. It can’t get new features. It can’t get any worse. It also can’t make assumptions about how fast today’s hardware is.
If you disconnect hardware from the internet and run old software (or hide it in a virtual machine), it may actually run better, as the inevitable pace of hardware improvements provides speedups without software engineers using that additional power to get in your way.
If 25-year-old dead software is doing a better job of it, then maybe stop trying to top it.
But nobody wants to actually be a neo-luddite and refuse to use any normal technology. It’s like, do you really want to never use Facebook and miss out on everything because you insist on using a god damned email mailing list? (I do, but I was always anti-social, hence the social aspects of the web were always sort of an anomaly in my life.)
2. Fight complexity
This is fighting the good fight – having taste, being smart and proactive, outsmarting and outwitting an endless array of opponents.
The problem is some of those opponents start to look like forces of nature (hostile nation states, friendly nation state three letter agencies, corporations with more money than most nation states) and actual forces of nature (entropy) and it’s just fucking tiring because you know it’s probably just a losing battle that never ends and everybody is just fucking annoyed at you the 99.9% of the time a disaster isn’t happening and the 0.1% of the time it is, people are really fucking annoyed when you say I told you so.
3. Begin Again
When Microsoft’s and Intel’s duopoly led to a certain terrible low-quality / high-boredom era in mass-market hardware and software, it provided the fertile ground for the world wide web. By adding a new magical layer of abstraction (the web!) that made the underlying garbage of Wintel a commodity, there was a whole new world of adventure.
Normal people could like, look at the source of a web page, understand what was going on, and write their own!
The clean slate of mobile applications – where limited memory, screen size, CPU, and battery actually provided enough constraints to force engineers, designers, and the software industry to shut up long enough to solve some actual problems in a comprehensible way – seems to be ending.
In my tech lifetime it seems that we only get about 5 years of “non-insane complexity” in our platforms before the “ecosystem” shifts into a swampish nightmare, and then 5–10 years of complete hell before we can move on. (My deep worry is that this pace may be accelerating.)
The current hot place to jump next (internet of things) is going to be pretty fun when it works!
But when that stuff gets too complex, and all the newly networked objects around us start speaking Portuguese we don’t understand and firing off spam emails because nobody bothered to secure the SSH and SMTP daemons on the ancient versions of linux lurking just beneath the surface, we’re going to be in for a world of pain.
That’s already happening now – we’re already in trouble.
As many predicted, hackers are starting to use your Internet of Things to launch cyberattacks.
· · ·
We’re building the future of super-intelligent robots, but we’ve put brains in them that are hardwired to think it’s the 1970’s.
When things inevitably go wrong, I hope that they’ll let me watch Star Wars.
· · ·
An earlier version of this essay was published as trenchant.org letter #9 – you probably like bad software.
· · ·
If you enjoyed these posts, please join my mailing list