CocoaConf San Jose starts on Thursday. As with the other stops on the Fall tour, I’ll be doing an all-day class on App Extensions, and regular sessions on WatchKit media APIs and “Revenge of the 80s”, which is about old productivity APIs like cut/copy/paste that have been with us since the first Macs and that we now take for granted.
This is also the last speaking I’ll be doing for a while. I’m taking at least the first half of 2016 off, maybe longer.
OK, sorry, didn’t mean to sound dramatic. But hey, you have to have a hook before the fold. Let me explain where my head’s at right now.
Way, way back in 2002, after the big layoff of everybody in our Atlanta office, one thing I decided I’d try is writing. Eventually that panned out, but at first it didn’t. I pitched a book to O’Reilly that would cover all of Java Media (JavaSound, Java Media Framework, and QuickTime for Java)… and never even got a reply.
While some of the research work for this led to the QuickTime for Java book years later, I spent some of that time looking to see if anyone else was getting Java media books published, and what they were doing better. Of course there weren’t any others (niche topic!), but JMF did get covered in one of those 1,000-page omnibus books you used to find. So I looked through it at Borders to see what they knew and I didn’t.
Now I remembered that the Sun documentation for JMF left me wanting more. There was a big song and dance about a state machine for a javax.media.Player’s various states – pre-loading, ready-to-play, playing, etc. – but precious little about what formats you could actually play, whether you could grab frames and work with them, whether you could edit and export media with JMF, etc. So what was in this book?
A big song and dance about the javax.media.Player’s state machine. And not much else.
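For what it’s worth, that state machine can be caricatured in a few lines. This is a hypothetical toy, not the real javax.media.Player API – the actual class has more states (Unrealized, Realizing, Realized, Prefetching, Prefetched, Started) and asynchronous transitions via ControllerListener – but it shows how little substance there is in merely restating it:

```java
// Toy sketch of the Player lifecycle the JMF docs belabored.
// NOT the real javax.media API -- just the shape of the state machine.
public class PlayerStates {
    public enum State { UNREALIZED, REALIZED, PREFETCHED, STARTED }

    // Each call models the transition the corresponding Player method
    // would eventually complete: realize(), prefetch(), start().
    public static State next(State s) {
        switch (s) {
            case UNREALIZED: return State.REALIZED;   // realize(): discover media info
            case REALIZED:   return State.PREFETCHED; // prefetch(): buffer data
            case PREFETCHED: return State.STARTED;    // start(): begin playback
            default:         return s;                // STARTED is terminal in this toy
        }
    }

    public static void main(String[] args) {
        State s = State.UNREALIZED;
        while (s != State.STARTED) {
            System.out.println(s);
            s = next(s);
        }
        System.out.println(s);
    }
}
```

The real questions – which formats play, how to grab frames, how to edit and export – are exactly the parts this diagram-level view never answers.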
Obviously, they had just rewritten the Sun documentation. There was no evidence they had ever written any JMF code in anger, or had any insight to offer based on real experience.
That set my low bar. I said I would never do something like that.
But I’m worried my conference talks are turning into that.
Usually, the work I’ve done outside of conferences has led to my talks there. Writing the Core Audio book turned into an all-day class (when we were testing the waters of whether CocoaConf attendees would want an advanced first-day class). Working with a client forced me deeply into AV Foundation editing, which is where my “AV Film School” classes in 2013 came from. Other client gigs supporting UIPasteboard / NSPasteboard were the genesis of “Revenge of the 80s”. This is a good system for me – learn on the job, refine down to the good parts, teach those as a class or session.
Problem is, I haven’t been learning those kinds of things since moving from client work to a full-time position. That’s not to say that my work is boring, but it hasn’t involved the same kind of tasks that you seek out consultants for. Like most developers, we solve many of our problems at work by just adding another five CocoaPods to the project (and later wondering why it takes so long to build lately). There’s not a lot I’ve picked up that attendees won’t already know from their own work experience.
I’ve found that I’ve repeatedly picked new topics to talk about, and then the only work I do on them is to get the talk ready. Worse, with demands from work and finishing up the iOS 9 SDK Development book, I’m not even getting a good look at the stuff I’m talking about.
In Columbus, we were only two weeks removed from WWDC, so it’s not that big a surprise that my WatchKit media demo didn’t work. In Boston in September, I had 300 more lines of code, but it still didn’t work. Now it’s November, and I’m looking at banging away on it in my hotel room in San Jose, because it still doesn’t work.
This approach to speaking is not working. The WatchKit talk is OK because it’s as much about authoring as coding (and I did have fun in Compressor getting video clips ready), but five months after I conceived it, this talk is still not what it’s supposed to be. Meanwhile, the topic itself has shifted: nobody is actually doing media on the watch, and it’s not clear there’s any point doing so.
I was thinking about what I could do for CocoaConf’s Spring tour. That’s when I usually launch a new class, and the obvious choice for me would be the Apple TV SDK. Problem is, I have not yet written a line of tvOS code. What business do I have teaching a class on this when other people are shipping? Do I take a bet on being ready by March? Isn’t that exactly what I did with the WatchKit media talk?
Moreover, I’ve long planned to come at the Apple TV from the content side, not the coding side. Off and on for the last two years, I’ve been building a live-streaming site on AWS, built around a WordPress blog and a Wowza on-demand streaming server. My plan has been to do some livestreams there, build up a few episodes of content on the web and a regular schedule, and then write Apple TV and Roku apps to view them.
At best, I could have one of two things done by March: the streaming site, or the class. If I pick the latter, it won’t be based on real experience; it’ll be rewriting the docs.
It’s an easy choice. I need to take some time off the conference circuit, go off on my own, do some stuff that few or no other people are doing, and come back when I have something interesting to show.
There’s other stuff I should be playing with too: Core Audio is slowly turning into AVAudioEngine, though it’s not clear who (if anyone) is onboard with Audio Unit Extensions / v3 Audio Units. That’s totally something I should be working with. Maybe if AVAudioEngine ever gets an offline mode like AUGraphs have, it’ll actually be useful.
Also, I have a personal reason: I need to spend more time with my parents while I can, and that’s a better use of my weekends right now than changing planes in Minneapolis or Detroit.
Will I be back? Probably. I like speaking, catching up with friends, and helping new people. I just don’t think I’m doing a good job of it right now, and I need to fix that.