A New Brush | GFX 50R 65:24

First off: what a privilege this has been over the past couple of months. When the GFX 50S was announced at Photokina in 2016, I never thought I’d be one of the lucky few to test another medium format camera less than two years later. And yet, here we are.

I won’t be doing a technical review for several reasons: I was shooting a pre-production unit with beta firmware; my friends Kevin and Jonas will do a much better job than I ever could; I’m less and less interested in talking about specs. But I will say this: I’ve grown to love my GFX 50S, as I’ve previously written about. I’ve come to appreciate the insane attention to detail of its design, where every single button, port and door is exactly where it should be. Any new camera involves a period of adjustment, but my initial reaction to the GFX 50R was actually one of puzzlement; and for several days I found myself almost fighting against the camera. Some of this was due to bugs and not-yet-working features, but I eventually figured out the real problem: I was still shooting the GFX 50S. I was expecting a left-brain camera because in my mind, medium format was about work, first and foremost.

The GFX 50R isn’t about work. This is medium-format for poetry.
It’s not perfect—I probably would’ve done a few things differently, in terms of layout for instance. But we’re suddenly having a very different sort of philosophical conversation.

I can’t begin to tell you how happy I am to finally be able to share some of these images. I’ve chosen to again share a series of visual stories/essays (as I’d done with a few other models), all shot with the pre-production camera I was provided with and various GF lenses.

I’m writing these words on the eve of leaving for Belgium, before we head to Photokina. Which means nothing has been announced yet and I have no links to share. It also means I’m about to give back the camera. All good things have to end. The shots below aren’t official product images btw: just quick, gratuitous visual gear porn for my own benefit. Figured some of you might enjoy them as well ;)

Huge thanks to everyone at Fujifilm for this incredible opportunity.


Shot with the GFX 50S and GF 120mm f/4 OIS R WR


Data Bracelets


In September 2016 I wrote the following about the state of physical media:

“It's crazy how much our reality has changed over the last few years: faster internet connections, higher or even unlimited data caps (at least in Canada) combined with most of our lives moving to the digital realm...all these factors have contributed to less and less reliance on physical media. In fact I have trouble remembering when I sent files to a client through anything other than WeTransfer, Mail Drop, Box or similar services.”

The paragraph was part of a post entitled Like Candy—and I was writing about personalized USB flash drives I’d received from a US-based company called USB Memory Direct. If anything the situation has intensified since then: last week I sent almost 30GB of raw images through WeTransfer Plus for the last job I shot. The idea of using a physical drive, wrapping it up, sending it to another country through postal or courier services, waiting for the package to reach its destination on time, hoping nothing goes wrong along the way...none of it makes sense at this point. So while I still love those walnut flash drives from 2016, when the company reached out again a couple of months ago my first instinct was to thank them, but ultimately let them know I didn’t actually need anything. I’ll never feel right about accepting products I don’t intend to use.

But then I noticed the wrist drives...

I’ve always worn bracelets—leather, metal...leather AND metal...whatever. I also own an Apple Watch. All of this to say I’m basically used to wearing stuff on my wrists. So when I saw the USB Wristbands on the company’s website I thought huh...that might be fun. The drives come in either USB 2.0 or USB 3.0, with capacities ranging from 64MB to 128GB, depending on the speed you choose. The material is described as soft rubberized plastic, which feels pretty close to the fluoroelastomer Apple uses on its Apple Watch Sports band. There’s no clasp: the USB connector just slips into the opposite end of the wristband. Tough to do the first time around but you get the hang of it after a while. And once they’re in they hold tight—I’ve had no issues at all with the wristband loosening up and coming apart.

I decided to keep things simple and go with white, but there’s a variety of colours to choose from. And of course, like all of the company’s products, these can be personalized with logo, text or whatever else strikes your fancy (I added a tag line to mine). The process is absolutely painless too: I downloaded the specs from the website, sent in a PNG and received a proof in less than an hour. Then right before shipping they sent me a picture of the actual product, just to confirm we were good to go. Class act.

I intend to keep one of these for myself and use the others for giveaways. With 8GB I’ll probably include the 1EYE series along with a PDF or ePub portfolio. Heck, maybe These Kings while I’m at it. Now, as to the question on everyone’s mind: yes, it IS a little weird to wear something with your name on it...but fortunately, it’s pretty discreet ;)

Many thanks to Taylor for making this possible. You can find more info about the product at USBMemoryDirect.com.

Machine to Companion: One year with the GFX 50S

Ever since my very first X100, I’ve made distinctions between cameras. Some quickly become a part of me, not just extensions—a threshold most fine tools eventually cross—but something more intimate. Others I consider machines, precise instruments that don’t necessarily pull at my heartstrings but are perfectly suited to the work I need to do. The X-T1 was like that. The X-Pro1 was too...at first. Because sometimes, somewhere along the way, that relationship can change.

I purchased the GFX 50S as a tool. A machine. And over the course of the past year I’ve used it constantly on various jobs, often alongside X series cameras. But just like the X-Pro1 all those years ago...it’s become more than it initially was.


The pull of medium format

Everything I’ve ever said about the X series remains true to this day. I still love the footprint, the stealth, the psychological impact of these cameras on subjects—either aware or unaware of a shot being taken. I’ll never travel with the GFX 50S and it’ll never become my 1EYE camera. But those files...they’re incredibly hard to dismiss. After all this time I’m still struggling to express the pull they have over me, but it remains impossible to brush off: I get a visceral reaction to the images I shoot with it. If the role of our tools is to inspire, then the goal of this camera has been met, tenfold. No question. The result of course, is that I’ve been willing to compromise on stealth: I’ll now reach for the GFX in situations where I usually would’ve chosen an X-Pro or X100. Which may seem like a serious shift...until you factor in the beat.

Rhythms

The GFX 50S isn’t slow—especially for a medium-format camera. But it IS slower than its APS-C siblings. It uses contrast detection, for one. The files are also much larger which, regardless of storage prices, is definitely something floating in the back of my mind as I’m shooting; in raw especially. All of this, combined with the camera itself, affects the rhythm somehow. But this is not a negative in my mind. In many ways it brings me back to my early days with the X series, the way the system made me much more aware of each moment, more deliberate in my approach to photography. It’s amazing how much evolution we’ve seen in such a short period of time—how far we’ve come from that X100. But it’s also easy to fall back into that “performance-driven” groove, to forget about slowing down when the cameras don’t force us to do so. Medium-format photography nudges me back into that softer flow. Yes, the footprint is larger...but the intent is familiar. For me, the lineage is clear and very much welcome.


Another detail I’ve mentioned in passing a couple of times: the addition of the EVF Tilt-Adapter, which was an important turning point in my relationship with the GFX. This is the small revolving plate that fits between the camera and the—brilliantly designed—removable viewfinder. I first used it on the shoot we did in Toronto for the Lexus+GFX video. Before this I’d only spent a few minutes with it and never while actually working. This small change—being able to look down into the viewfinder for instance—suddenly transformed the camera into a very different tool. Different from my other cameras that is. It gave me a new point of view and ADDED an element to my photography workflow, beyond the bigger sensor. Needless to say it’s stayed glued to the GFX ever since. The only time I remove it is for packing.

Expansion

Ok, enter the rabbit hole. With this camera taking on an ever increasing role in my work, I’ve looked at expanding my visual options. So the initial GF 63mm f/2.8 has since been joined by the GF 120mm f/4 Macro, a Pentax 50mm f/1.7 (through a Fotodiox adapter), and recently the superb GF 110mm f/2.

Family picture

The 120 and 110 may seem redundant—they are. I first chose the 120 for its macro abilities on this system which, physics being inescapable, is much less accommodating in terms of minimum focusing distance. And I’m glad I did. It’s both superb and handy. But I now believe the 110 is (so far) the GF line’s magic lens...much like the 56 f/1.2 or 35 f/1.4 on the X series. Don’t ask me to explain why, I just feel it. Yes, shallow DOF but more importantly character, imprint...something.

What I’m “missing” in this system is a super-wide zoom along the lines of the XF 10-24mm. But I’m using quotation marks because...I do have the XF 10-24mm don’t I? I know. I told you this was a rabbit hole.

Fun with the electronic shutter...

Conclusion

A few years ago I spoke of the possibility of a medium-format camera as a companion to the X series—how, in my mind at least, there was a logic to using both systems in tandem. A philosophical kinship if you will. Today I know this to be absolutely true: apart from a size and ergonomic shock when shooting systems side by side (which will be less obvious once the X-H1 arrives), both make sense as a pair. Both complete one another.

Now in some cases, I admit, the GFX 50S has added a layer of uncertainty—I need to think for a second or two before choosing which camera to pick up. But then, every new piece of gear usually has a similar effect, taking away from simplicity. It’s called the paradox of choice and, well...such is life. I consider myself very lucky to even have these choices.

And man, one year in...I don’t regret a single moment.
I've found the soul in the machine.

The Process: Toning for Mood


Colour is tough. When processing black and white images, contrast and density make up the bulk of our vocabulary; with colour the dictionary explodes. And where a monochrome process will automatically unify a series of photographs without much effort, colour can easily make an awful mess of things. Oversaturation, unnatural tones, unwanted colour casts...all these elements—unless intentional and part of the narrative—can diminish the strength of what we’re attempting to communicate.

I tend to believe in a less is more approach but this doesn’t mean we can’t experiment and push boundaries: as always, we just need to build on what already exists in the photograph. Extending as opposed to imposing. Simply put, it’s always about making sense. With this in mind, let’s take a look at colour not as the main focus but as a “mood enhancer”.

We feel colour before we see it. It’s a physical response, shaped by how we decode our environment and the spectrum of the world around us—night and day, summer and winter, hot and cold. Movie posters use this extensively to very quickly communicate a film’s “personality”: if it’s blue/green and slightly desaturated, chances are we won’t be expecting a romantic comedy. At the subconscious level we know what certain colours represent, regardless of cultural differences.

Cornucopia: Curves, Levels, Wheels and Split Toning

Boy that’s a lot of options. When it comes to manipulating colour, there’s no dearth of tools at our disposal these days. And although each requires its own approach, they’re all based on the same concept: targeting a colour channel along an exposure axis, from shadows to highlights. So let’s first look at the most common and visual one of all: the humble RGB curve. In Adobe parlance this would be the Tone Curve but set to RGB mode, not the default sliders. The great thing about this tool is its ubiquity: most apps offer a version in one shape or another. Which means all of the following tips can be applied regardless of the software we’re using—it’s the fishing rod as opposed to the free bucket of fish. Here’s how it looks in three well-known desktop apps (Lightroom, Capture One and Luminar 2018):

And mobile apps (Snapseed and Adobe Lightroom CC Mobile):

In each case the principle is the same: the top of the curve represents the highlights, the bottom represents the shadows. Points can be added anywhere along the line to manipulate all values in-between by raising or lowering each point. In full RGB mode this will affect contrast because we’re processing all colour channels at once—I’ve previously written about this here if you’re interested. But if we switch to an individual channel—red for instance—then any control point will now affect the amount of this very specific colour...in the shadows, highlights or everything in-between. The curve still represents the same thing but we’re manipulating colour instead of contrast/exposure. Let’s look at an example to make this a bit clearer. In the following image I’ve selected the blue channel but there’s no curve applied yet.

IMG_0536.JPEG

Now, let’s say we wish to affect only the deep shadows in the image. To accomplish this we add two points to our blue channel curve. Why two? Because with the top and bottom points this gives us an equal distribution across the curve. Think of each point as a lock in this case: we’re making sure our curve will remain linear everywhere except in those shadows we’re targeting.

IMG_0533.JPEG

Right, so let’s raise the lower point and see what happens.

IMG_0534.JPEG

By raising the blue channel in the lowest portion of the curve, we’ve added a blue tint to the shadows of the image.

Now, from this point on anything is possible: the same principle can be applied to any colour channel and any point on the curve.
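For anyone curious about the arithmetic behind the tool, the shadow-tinting move above can be sketched in a few lines of NumPy. This is strictly my own toy illustration—not how Lightroom or Capture One actually implement their curves—but the idea is the same: a piecewise-linear curve on one channel, with the two interior “lock” points keeping everything outside the deep shadows untouched.

```python
import numpy as np

def apply_channel_curve(img, channel, xs, ys):
    """Remap a single colour channel through a curve defined by
    control points (xs, ys) in 0-255 space -- the per-channel
    equivalent of an RGB Tone Curve adjustment."""
    lut = np.interp(np.arange(256), xs, ys)   # piecewise-linear lookup table
    out = img.astype(np.float64)
    out[..., channel] = lut[img[..., channel]]
    return np.clip(out, 0, 255).astype(np.uint8)

# A tiny 1x3 "image": a black, a mid-grey and a white pixel (RGB).
img = np.array([[[0, 0, 0], [128, 128, 128], [255, 255, 255]]],
               dtype=np.uint8)

# Blue channel: raise the bottom point from 0 to 40, and pin two
# interior points so the curve stays linear outside the deep shadows.
tinted = apply_channel_curve(img, channel=2,
                             xs=[0, 85, 170, 255],
                             ys=[40, 85, 170, 255])

print(tinted[0, 0])  # the black pixel picks up blue in its shadows
print(tinted[0, 2])  # the white pixel is untouched
```

The two pinned points play exactly the role described above: they lock the curve to the identity line, so only values below them drift toward blue.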

If we look at levels in C1 for instance (no such thing in LR), the concept is similar but instead of an infinite number of possible control points we get three main controls across the histogram: shadows, midtones and highlights.

IMG_0520.JPEG

The difficulty with curves or levels however, is that affecting only a single channel at a time (red, blue or green)—while precise—can sometimes make it awfully hard to achieve a specific overall tone: lower the red and blue gets punched up, raise green and you now have magenta...it’s a whack-a-mole three-way balancing act, with each value immediately impacting the others. It’s often more complex than what we really need. In most cases, I turn to Colour Balance (C1) or Split Toning (LR).

Subtleties

I wrote about colour wheels way back in the Aperture days so switching to Capture One and its Colour Balance tool really felt like coming home. The big difference with wheels, compared to the tools we just discussed, lies in how colour is affected: as a whole as opposed to per channel. So for instance, pushing midtones towards green doesn’t push the green channel—it actually makes midtones green. Period. The mathematics are different. It’s more like applying a tint. Capture One offers four wheels: All, Shadows, Midtones and Highlights. I use this tool enough that I’ve created a tab in my workspace where I can see each individual wheel at a glance.

IMG_0521.JPEG

With colour wheels we can very quickly add or subtract warmth, tint the shadows or highlights, all individually. Capture One also provides individual sliders for saturation and brightness, making it very easy to pinpoint the intended effect. It’s extremely powerful.
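To make the “tint, not channel” distinction concrete, here’s another small sketch—again my own approximation, not Capture One’s actual math. Instead of remapping one channel, each pixel is blended toward a tint colour, weighted by how dark it is, so the whole pixel drifts rather than a single channel moving in isolation.

```python
import numpy as np

def tint_shadows(img, tint, strength=0.2):
    """Blend a tint colour into an image, weighted toward the shadows.
    A toy 'wheel-style' adjustment: the whole pixel drifts toward the
    tint, instead of one channel being remapped on its own."""
    f = img.astype(np.float64) / 255.0
    luma = f @ np.array([0.2126, 0.7152, 0.0722])   # Rec. 709 luminance
    weight = (1.0 - luma)[..., None] * strength      # strongest in shadows
    out = f * (1.0 - weight) + np.asarray(tint, dtype=np.float64) / 255.0 * weight
    return np.clip(np.rint(out * 255.0), 0, 255).astype(np.uint8)

# Two pixels: pure black and pure white (RGB).
img = np.array([[[0, 0, 0], [255, 255, 255]]], dtype=np.uint8)

warm = tint_shadows(img, tint=(255, 160, 60))
print(warm[0, 0])  # black shadows shift toward the warm tint
print(warm[0, 1])  # white highlights stay put
```

Compare this with the channel-curve approach: there, a control point moves one channel at a time; here, a single tint choice moves all three channels at once, which is why wheels feel so much more direct for mood work.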

Lightroom doesn’t offer anything close. But Split Toning—a tool usually reserved for emulating tinting processes in monochrome—can provide surprising results when looking to simply tweak the mood of a colour image. Try experimenting with very subtle values using the shadow slider (tinted highlights tend to be less natural and more obvious to the eye).

Shaping colour this way, altering or skewing the tones, can be a very effective way of creating visual continuity across a series of pictures. The following essay from May 2017 is a good example of this: tinting was used to intensify the feeling of heat and warmth that was already present in the images but not as obvious out of camera.

Two versions: processed and unprocessed (note: I used my Soft Classic Chrome in-camera preset + warm white balance as a starting point).

The colour balance settings.

In my March 2018 KAGE essay the approach is similar but the effect is diametrically opposed: less saturation and much colder tones result in a very different mood. Yes, I know lighting and subject are part of it as well...don’t get smart with me guys. You understand the point I’m trying to make here ;)

Again, colour balance settings. Really subtle but still visible.

Conclusion

I love the power and intensity of black and white images. I love how immutable they are, how timeless. Colour photography will always be a challenge because it moves with the times in a way monochrome never will; probably because the latter is not actually part of our reality—our only frame of reference for monochromatic images is photography itself. As I said, colour is tough. But beyond purely naturalistic renditions, the prize is a second, third and fourth layer to play with.

The trick is in finding the right mix...and conveying the proper message.

P.S. If you’re interested in continuing the analysis, I touched on several of these points (using a “real-world” situation) in the following post: All the Green We Wanted | A Technical Follow-Up

Apple, glitter & joy.


This is one of those diverging posts, so feel free to skip it and grab your camera instead. I won’t mind one bit.

Apple announced another blowout quarter this week, the new iPhone X pushing revenues forward (regardless of naysayers). Cool. I guess. Truth is I’m not feeling much of a buzz these days. I’m not moving to another platform or ditching anything. I still love my iPad Pro to death and get a kick out of iOS developments. But I’m realizing a big chunk of the Apple magic is gone for me. And it’s a little like discovering Santa isn’t real: it’s not the end of the world but it’s still a small emptiness, like a layer of glitter scratched away from reality. I could point to misfires—from the Touch Bar/MacBook debacle to the lack of updated Macs for way too long. I have friends—longtime Apple users—who ended up actually moving to Windows as a result. But those missteps haven’t directly affected me. I could go back to Aperture as the very first blow to my relationship with the company—losing a tool I loved that was instrumental in my becoming a full-time photographer. But I’ve long since moved on and I don’t tend to hold grudges. I guess it’s a combination of various elements. Today I want to look at two aspects of the Apple experience that, while on the periphery, have been bugging me for some time. In my mind, they’re also symptomatic of a certain erosion: Apple Music and Siri.

THE GUESSING GAME

We’ve also been building Apple Music into a great service and bringing more intelligence to Siri to help you discover new music based on your tastes and preferences.
— Phil Schiller-Sound & Vision, January 28 2018


I subscribed to Apple Music (family plan) the day it became available. Before this, I’d been on iTunes Match—Apple’s paid service that allowed a personal iTunes library to exist in the Cloud—for several years. My existing songs were pretty much all rated. In the new Apple Music-enabled app I filled out the bubbles to let Apple know my likes and dislikes. And I’ve been painstakingly “loving” and “disliking” songs for over two years, in an effort to help the service understand my tastes which, I admit, can be a bit complex. Long story short, Apple has a boatload of musical data on me: my entire music library, everything I’ve ever purchased from iTunes and all I’ve since added through their streaming platform. And yet...the New Music Mix is still, to this day, either seriously off, wrong or downright insulting. Is it all bad? No. I’ve discovered a handful of artists along the way, thank God. But the overwhelming majority of what Apple Music proposes through the supposed power of its “knowledge” is just plain awful. I get current pop music when I’ve explicitly disabled the genre several times already; when I’ve disliked every single song remotely tied to it. I get Quebec artists that don’t even remotely fit anything I’ve ever listened to (because it’s where I happen to live?). It’s gotten to the point where listening to this mix is a chore, something I do out of contempt, in the desperate hope that it will, maybe, eventually, pay off. For a service that claims intelligence this feels a lot like a failure in my world.

In the For You section, I’ve been presented with the same rotation of playlists since the day I subscribed. Until recently, most of these were labeled as “updated two years ago”. This has now been removed, probably out of shame. No real refresh though.

I also believe there’s a fundamental problem with the binary choice of either “Love” or “Dislike”. It allows for zero nuance in a user’s tastes and I think this effectively contributes to Apple Music’s shortcomings: with all the buzz around machine learning and AI, I have to believe there’s an advantage to gathering as much micro data as possible. But even if we allow for the current dual-choice setup: the system should identify what effectively triggers loves and dislikes, beyond artist and genre. Do I favour songs in certain keys over others? A certain type of mood or instrumentation? Tempo? Intensity? Is there a link between Radiohead’s Fake Plastic Trees, The Cure’s One Hundred Years and Josefin Öhrn’s Sister Green Eyes? Between Miles Davis, Beach House and Grizzly Bear? There’s so much here that could help Apple Music tailor its recommendations to a user’s preferences. And yet from what I can tell, if I love a Public Enemy song or a Kendrick Lamar album, then I’ve sent a signal for hip-hop/rap, period. Perhaps I’m wrong but it certainly feels like a dumb flag. It feels like there’s no actual evaluation of content, of finer sonic characteristics or even a simple analysis against the rest of my library. So every time I hear a song that I happen to enjoy on Apple Music—regardless of genre—I find myself questioning the effect a “love” will have on future recommendations, often skipping the process entirely. Which ends up defeating the purpose.

This notion of “intelligence” isn’t new. When iOS 9 was announced, it was the big buzzword of the day: Siri would learn to anticipate our needs and preferences. This might be anecdotal but after updating, every time I got in the car and plugged in my iPhone I was greeted by the same Nat King Cole Christmas album. For months, all through the following summer. A Christmas album. And every time I’d look at my iPhone and think wow, you’re really dumb.

Which brings us to Siri.

THE SYBIL SYNDROME

In Apple’s world Siri is everywhere: iOS, macOS, watchOS, tvOS. So I think it’s normal to expect this ubiquitous technology to be the glue that links all devices together—a kind of common, unifying personality. But it isn’t.

Here’s another quote from Phil Schiller’s Sound & Vision interview:

From the very beginning, we engineered HomeKit to work great with Siri, so it understands all of your HomeKit accessories and how you want to control each of them.
— Phil Schiller-Sound & Vision, January 28 2018

I’m sitting at my Mac and my iOS devices are upstairs. So I ask Siri (on my Mac) to make the lights brighter:

28AB09CC-917C-46BE-9EDC-EE241772B960.full.JPG

“Mac Siri” can’t control HomeKit. Ok. But here’s another one: an Apple TV 4 or later (or iPad but that’s beside the point here) is required if you intend to do things such as automation with HomeKit devices. It says so right there in the iOS Home app. The Apple TV is an essential part of the HomeKit ecosystem. Now, Siri is also on Apple TV. So here’s a new scenario: I’m sitting on the couch and I want to dim the lights. I pick up the Siri remote and ask Siri to...you know...dim the lights. Nope. “Apple TV Siri” CANNOT CONTROL HOMEKIT.

When Apple talks about Siri as a singular entity...they’re not being honest. Siri is utterly fragmented. It’s actually up to us to decipher what Siri can or cannot do, depending on the device we’re using. And this segregation of function per device extends to other areas in ways I simply don’t understand: the HomePod is soon to be shipping, but on this device Siri only understands English (which is probably the reason why it’s not available in Canada). What’s going on here? Is this a new “HomePod Siri” that can’t understand what “iPhone Siri” already knows? Doesn’t “iPhone Siri” speak a ton of languages by now?

What’s the point of development if you can’t build on what already exists or—in AI terms—use knowledge that’s already been acquired?

AND THEN THERE’S ALEXA

I bought an Echo Dot last December. Probably no big deal for many of you but in Canada, this was our very first “official” taste of Alexa. Honestly it was an impulse buy: we’re already Amazon Prime members and the Dot was something like $45 CAD during the holidays. I was curious so I figured what the hell. If it sucks, it sucks. I won’t be losing much sleep over it.

Holy shit, what an awakening.

With Siri, no matter which device I use (Apple Watch, various iPads, iPhones, even iPods) I’m never entirely certain of the results. Sometimes it will answer immediately and do exactly what I asked. Other times I just stand there like a moron, repeating myself to no avail. Or the reply will be wrong. When Siri works it’s fine...but consistency is incredibly important in digital assistants. Because when they don’t work you feel like a fool—and you just stop using them. And then it doesn’t matter if they eventually improve because you’ve stopped trying.

When I received the Echo Dot it was already the evening, so I didn’t go much beyond the initial setup. But I was somewhat impressed when I paired it to my iPhone, only to hear Alexa explain how I could just ask her to connect next time (*1). Then I moved downstairs to read (Cynthia was out playing volleyball), turned on the aforementioned Apple TV and asked Siri to play some classical music. I got David Bowie. Ok...so Bowie...classic rock maybe? Nope. This was Blackstar. I asked again. Still no go. And btw: I could see the query typed out correctly on-screen so it’s not like Siri understood something different. On a whim I went back upstairs and asked Alexa the exact same thing: “Here’s a station called Classical Music, from Amazon Music”, she replied. Huh (*2).

Alexa works every. single. time. It’s like a bloody revelation. If I ask for Cool Jazz I get a Cool Jazz station—the specific genre. Siri gives me recent jazz songs or top jazz songs, as if “cool” is a synonym for “popular”. Alexa understands the kids, regardless of their accents and sometimes imperfect English grammar. She understands when I’m halfway down the stairs and I ask her to turn on the lights in the studio—no pause, no “hold on”, just an “ok” and the lights are on, instantly. One night during dinner Anaïs, our oldest daughter, asked Alexa to “say meow”...don’t ask me why, kids are kids. But just as I was about to tell her she was being silly...cats started meowing. Everyone’s eyes went very, very wide, our actual cat went completely berserk and we had a pretty good laugh. Useless? Oh totally. But behind the scenes Alexa heard and understood the query, found a “skill” that would do the job, enabled and launched it—all of this before I had time to utter a single word. Call me crazy but for me this is technology at its most magical: frictionless. It just works? Damn right it does. As for Siri...I tried the same thing just for kicks:

398136E7-1C7F-4B53-9C77-1D4F6D3DFF8E.JPG

I understand and appreciate Apple’s stance on privacy. I’m also well aware of the way Amazon can and probably will use some of the information it gathers. But Apple has repeatedly said their approach doesn’t hinder their capacity to compete in this space, so I should be able to compare along the same lines and not grade on a curve. Do I care that Siri can’t meow? Of course not. But what I’m witnessing with the Echo Dot is lifestyle integration. Amazon’s strategy of anything and everything results in a device that, almost invisibly, weaves itself into the fabric of our lives. In the morning I ask Alexa to “start my day”: I get the weather, driving conditions and the latest news bulletin from CBC. I then ask for a specific radio station and she obliges (through the integrated TuneIn). If I ask “what’s up” I get: the time, a weather update, a check of any upcoming events on my calendar and a random trending bit of information—usually something light and fun. And if I ask for classical music, I get bloody classical music.

It’s clear that the lack of a visual interface initially forced Amazon into providing all information in audio form. They had no choice. There’s no “Here’s what I found on the web” cop-out...because there’s no web to look at (*3). So when I was cooking chicken one night, had my hands dirty, and asked Alexa for the safe cooking temperature, I got the answer I needed. I didn’t get the following response (which would’ve been useless unless I was ok with putting down the chicken I was holding and getting grease all over my screen):

40745E94-77D7-4D2A-8070-9BACCC864DA4.JPG

So because it reacts, because it responds with actual real-world answers, the Echo feels “normal”. And this is creating a very different type of “relationship” in our household. Actually, it’s not unlike how the Mac used to feel when other computers were overly complex and intimidating. The smiling Mac on boot up was derided by those who dismissed it as a toy, but what it represented eventually changed our world: approachability, ease of use, a rejection of the arcane. A natural extension of possibilities.

Siri, in its current form, feels arcane in 2018. Of course the threshold for this isn’t the same as it was in 1984, but the friction is still too great. Siri should simply work better than it does, in small ways: it should provide the information we ask for, not a list of random web links; it should integrate with every type of application, not just a trickling subset that Apple expands once a year if we’re lucky. And most importantly: Siri should be the same Siri across all Apple devices. One of HomePod’s bullet-point features is Siri as musicologist: deep knowledge or just a new incarnation stuck in a speaker?

And while we’re on the subject: Siri should have a voice on all devices—that’s its identity. I can understand the lack of audio feedback on first-gen Apple Watches but not, for instance, on a $199 CAD Apple TV. Not when a cheap plastic puck from Amazon can speak to me just like my iPhone can.

In the above quotes Phil Schiller mentions Siri—not “Siri on iOS” or “Siri on HomePod”. Just Siri. If that’s how Apple presents the technology then that’s how it should work. The entire platform would be that much stronger. I’m at a complete loss to understand why this isn’t the case.

I type these words on an iPad Pro, with Ulysses, and the experience still makes me smile. But beyond Siri and music, it’s an accumulation of disappointments that has soured my view. The universally hated Siri Remote infuriates me daily—I still can’t understand how any company could release this with a straight face, let alone Apple (*4). The Apple News app is still unavailable in Canada three years after its introduction (it’s a news reader ffs); iBooks Author, an application that could’ve been HyperCard for books, never got past a few lacklustre releases, was never ported to iOS, and now seems dead in the water. Both were visions I enthusiastically embraced...in vain, ultimately.

The DNA is still there: the recent 10.4 update to Logic Pro X was nothing short of amazing, reminding me of the glory days of Apple’s creative software suite. But it highlighted how much I miss those days. I miss Apple The Enabler. I miss watching keynotes that sparked a thousand ideas.

I miss the glitter and joy.

––––––––––––––––––––––––––––––––

1. Unfortunately, you lose this ability as soon as you add more than one device.

2. My iPhone 8 Plus gave me the correct results last night. But this wasn’t the first time I’d been unable to get classical music on the Apple TV or older devices in the past year.

3. It’ll be interesting to see how the HomePod handles this type of information. Will it simply be as clueless as the Apple TV?

4. We finally added a cheap silicone sleeve that helps with grip and orientation. But what an awful design. Hard to imagine Jony Ive using this and not throwing it against a wall. I really don’t get it.