Rockin' good code, rockin' good times

Text

We had a great time at Chicago’s Windy City Rails conference. There were lots of informative and thought-provoking talks by members of the Chicago community, as well as some visitors from all over. It was fun to learn that people had flown in from as far away as Europe to attend our little local conference :)

I gave a talk on Domain Driven Rails - check out the slides here! Video should be up soon as well.

Text

I’ve been working a lot with junior iOS devs this week, and a common issue keeps coming up. They’ll add a UIView to the screen but, for whatever reason, it doesn’t display.

This is frustrating for two reasons. First, there’s nothing in programming that feels more direct than displaying something on the screen. So if that goes sideways, it can be very demoralizing: “If I can’t even get something on the screen, what’s left?”

Second, debugging this type of problem isn’t in any way hard, so more senior devs can be dismissive when asked for help. Sadly, what it does require is a great deal of experience. Experience junior developers don’t have.

So in an attempt to fill those gaps, I present:

Tips for Debugging UIView in the Blind

Implementing the following (often in combination) can help diagnose the source of common view problems.

Existence

  • Set a breakpoint. Did it hit?
  • Is the view non-nil?
  • Is it being added to a superview (-addSubview:)?
  • Is the superview non-nil?
  • Does the superview have a -window?
  • Is the superview what you think it is (try making it blue)?
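
For something concrete to run at a breakpoint or in -viewDidAppear:, the checks above boil down to a sketch like this (myView is a stand-in for whatever view has gone missing):

  NSLog(@"view: %@", myView);                              // non-nil?
  NSLog(@"superview: %@", myView.superview);               // was -addSubview: called?
  NSLog(@"window: %@", myView.superview.window);           // does the superview have a window?
  myView.superview.backgroundColor = [UIColor blueColor];  // is it the superview you expected?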

Visibility

  • Is it -hidden?
  • Is it under a sibling (use -bringSubviewToFront: to check)?
  • Does the superview -clipsToBounds?
  • Make the view bigger. Can you see it now?
  • Make it a lot bigger. How about now?
  • Set its center to the superview’s center.
  • Make it bright green.
  • Make sure its -alpha is 1-ish.
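
In code, those visibility checks look something like this sketch (again, myView is hypothetical, and these are debugging-only overrides):

  myView.hidden = NO;                               // un-hide it
  myView.alpha = 1.0;                               // make it opaque
  [myView.superview bringSubviewToFront:myView];    // above its siblings
  myView.frame = CGRectMake(0, 0, 300, 300);        // make it big
  myView.center = myView.superview.center;          // put it in the middle
  myView.backgroundColor = [UIColor greenColor];    // make it loud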

Hierarchy

  • Is the view a subclass? Does a stock UIView work instead?
  • Are any view classes extended with categories?
  • What about in your CocoaPods?
  • Do any view controller lifecycle methods (viewDidLoad, viewWillAppear, viewDidLayoutSubviews, etc.) mess with the view?
  • Does your view controller call super everywhere it should (viewDidLoad, etc.)?

Containment

  • Is your view controller contained by another view controller that draws on top of it (for example, UINavigationController, UITabBarController, etc)?
  • Does your view controller -wantsFullScreenLayout?
  • Are there edges in -edgesForExtendedLayout?
  • Can you try rendering your view controller by itself outside of containment?

Autolayout

  • Are you getting any constraint conflict errors in the console?
  • Have you set a width/height (or an intrinsic content size)?
  • Is -translatesAutoresizingMaskIntoConstraints set appropriately?
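
For example, a minimal sketch that rules out ambiguous sizing by pinning an explicit width in code (myView and superview are stand-ins):

  myView.translatesAutoresizingMaskIntoConstraints = NO;
  [superview addConstraint:
      [NSLayoutConstraint constraintWithItem:myView
                                   attribute:NSLayoutAttributeWidth
                                   relatedBy:NSLayoutRelationEqual
                                      toItem:nil
                                   attribute:NSLayoutAttributeNotAnAttribute
                                  multiplier:1.0
                                    constant:100.0]];
  // ...and likewise for NSLayoutAttributeHeight.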

Xcode

  • Have you cleaned your project and rebuilt?
  • Have you “Reset Content and Settings…” in the simulator?
  • Have you tried on both simulator and a device?
  • Have you quit Xcode and started it again?
  • Have you shut down your computer and started it again?
  • Are you on a beta? Stop that.

Minimum Viable Reproduction

  • If you create a fresh project with a fresh view and view controller, does the problem persist?
  • If you copy and paste your nibs/storyboards?
  • If you copy and paste your classes?

I’ll keep this list updated with feedback and any new thoughts that come to me. What tips do you have for debugging views?

Text

My world is that of startups.

I’ve been working at early-stage companies, or starting them, for almost fifteen years now. So my worldview is a bit colored by the needs of startups, but I submit that these needs translate to those of any organization faced with the challenge of delivering a relevant product quickly to its customers.

Eric Ries, in The Lean Startup, tells us that we can measure a startup’s runway by the number of pivots it has left.

I really like this view. Let’s extrapolate that to applications. Applications that are easy to change are easy to pivot. It’s easy to throw away this bit of code and add this other bit over here.

For a startup, or in fact for any organization that wants to stay lean and be able to quickly test ideas, good software architecture that enables many pivots (read: is easy to change) is extremely important.

Even if you have figured out what you’re building on a macro scale (here at Reverb.com, we’re building a curated marketplace for musical gear), you’re still doing lots of pivots on a micro scale - building experimental features, enhancing the ones that need help, and killing off ones that don’t work.

I often hear people talk about the idea that startups should take on lots of technical debt in order to ship quickly.

I think this is just nonsense. The technical debt you take on is reducing the number of pivots you have left and thus killing the chance that your company will make it.

Now, should you engineer everything to the n-th degree? Should you aim for zero tech debt? Should you build a 20-layer architecture fully decoupled from everything including your database, and launch a swarm of microservices with a team of five?

Of course not. Tech debt, like real debt, is a powerful instrument for growth. There are practical limits to what a small team can do while delivering business value. And usually this means you’ll be building a monolithic app to start.

Are monoliths inherently bad?

No; they enable us to build quickly and keep operational complexity to a minimum. But the way you go about it is important.

A monolithic app does not mean you have to stuff all your code into thousand-line model classes. That’s not a shortcut; that’s a surefire way to prevent your team from scaling. Large classes become larger until they’re so massive that they’re black holes, causing every bit of the system and every team member to depend on their structure.

Refactoring becomes progressively harder, and development slows to a halt. Frustrated engineers start to leave, and new ones don’t want to deal with legacy cruft; the team churns, the company fails. So what are we to do?

SRP and OCP to the rescue!

Over the last couple of years, it has become increasingly apparent to me that the Single Responsibility Principle (that is, that objects should change for only one reason) and the Open/Closed Principle (that we should not have to edit existing code in order to add functionality) are the building blocks of our salvation.

The missing piece of the puzzle hit me on the head like a brick when I watched Uncle Bob’s Ruby Midwest keynote, Architecture: The Lost Years. In this talk, Uncle Bob mentions, but does not go into detail on, the idea of the Use Case, which comes from Ivar Jacobson all the way back in ’92.

I’ll save you a 500-page read: the idea is basically just to reify your behaviors.

Instead of stuffing behaviors as methods into your objects like classical OOP seems to tell us to do, Jacobson suggests creating Use Cases like CreateAccount or ProcessOrder.

And it seems that besides Uncle Bob, other very smart people are talking about similar concepts. In 2009, James Coplien and Trygve Reenskaug (inventor of MVC) put together a paper outlining a new architecture called Data, Context, Interaction (DCI), which started getting attention in the Rails community around 2012 (at least that’s when I started seeing blogs about it). Cope and Trygve later wrote a book called Lean Architecture.

DCI’s simple beauty is obscured by the red herring of runtime behavior extension

For some reason people started talking about DCI’s novel idea of extending behavior onto objects at runtime, and Rubyists started hacking together all kinds of fun things, from extending modules onto objects at runtime, to using refinements, to frameworks that help you extend and unextend behaviors onto objects.

But in fact, the main revelation of Use Cases and DCI is that you reify your behaviors by giving them recipe-like classes of their own. For DCI specifically, the other key concept is that you wrap your domain objects (models) with additional behavior, called Roles, that is relevant only in particular contexts (use cases).

You don’t have to rely on special language tricks to do this.

A humble decorator/delegator works just fine. So you can have a use case like ProcessOrder which takes an Order object and wraps it with a SimpleDelegator called TaxableOrder so that you can calculate the tax.
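
To make that concrete, here’s a minimal sketch. ProcessOrder, Order, and TaxableOrder are the names from above; the subtotal/charge methods and the tax rate are made up for illustration:

  require 'delegate'

  # Role: tax behavior that only matters in this context.
  class TaxableOrder < SimpleDelegator
    TAX_RATE = 0.09 # illustrative only

    def tax
      subtotal * TAX_RATE # assumes Order responds to #subtotal
    end
  end

  # Use case: a recipe-like class that reifies the behavior.
  class ProcessOrder
    def initialize(order)
      @order = TaxableOrder.new(order)
    end

    def call
      @order.charge(@order.subtotal + @order.tax) # assumes Order#charge exists
    end
  end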

That’s it, in a nutshell.

I personally think it’s sad that such beautifully simple concepts are hidden in really long books that most people with today’s TLDR attention span will never read. So here’s hoping that more people will be joining the discussion on how we can keep our code small, simple, and low churn by applying these principles.

TLDR: reify your behaviors into Use Cases, wrap your domain objects with Roles to aid in the use cases they participate in. Use Cases are inherently write-once and don’t change unless the business process changes, thus they strongly support SRP and OCP, and keep your classes small and single purpose, preventing churn and growth in the underlying domain objects. Low churn code is easy to refactor and change, and thus it’s easy to keep the business moving forward with lots of pivots and experiments.

If you found this interesting, please come see my talk at Windy City Rails 2014 and chat me up in the breaks!


Yan Pritzker
CTO, Reverb.com

Text

Apple announced a new language last week: Swift. Now that everyone’s had time to read the book and start playing with sample projects, a startling deficiency of the language has been uncovered. Swift makes no provision for access modifiers such as public, private, or protected.

This isn’t altogether surprising. ObjC has no access modifiers (for methods or properties, anyway). Knowing what’s “public” and what’s “private” has largely been a matter of convention — public stuff gets put in the header file, private stuff in class extensions. Swift doesn’t have headers, and hasn’t been out long enough to establish conventions, so what’s an early adopter to do?

Enter protocol

It’s important to note that when one talks about making a “private method” in Swift or ObjC (or Ruby or Java or…), those methods aren’t really private. There’s no actual access control around them. Any language that offers even a little introspection lets developers get to those values from outside the class if they really want to.

So what we’re really talking about here isn’t access control modifiers as much as a way to define a public-facing interface that merely presents the functionality we want it to, and “hides” the rest that we consider “private”.

The Swift mechanism for declaring interfaces is the protocol, and it can be used for this purpose.

A Type-Oriented Language

At first it’s not very clear how this helps the situation. But remember that Swift is very strongly typed, and that protocols themselves are first-class types that may be used anywhere a type can be. And, crucially, when used in this manner, a protocol only exposes its own interface, not that of the implementing type.

Thus, as long as you use MyClass instead of MyClassImplementation in your parameter types and the like, it should all just work:
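
  // A sketch; MyClass and MyClassImplementation are hypothetical names.
  // The protocol is the public face, the class the "private" implementation.
  protocol MyClass {
      func publicMethod() -> String
  }

  class MyClassImplementation: MyClass {
      func publicMethod() -> String {
          return privateHelper()
      }

      func privateHelper() -> String { // "private": not in the protocol
          return "hidden details"
      }
  }

  func useIt(object: MyClass) -> String {
      return object.publicMethod() // only the protocol's interface is visible here
  }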

As you might already see, there are some cases of direct assignment where you have to be explicit with the type instead of relying on Swift to infer it, but that hardly seems a deal breaker:
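
  // Continuing the sketch above:
  let inferred = MyClassImplementation()        // inferred type exposes everything
  inferred.privateHelper()                      // compiles

  let sealed: MyClass = MyClassImplementation() // explicit protocol type
  sealed.publicMethod()                         // fine
  // sealed.privateHelper()                     // error: MyClass has no such member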

The Need for Private

And really, while I’ll need to get more Swift code under my belt before I can form a strong opinion on the matter, I suspect the need to encapsulate implementation details will be a lot less pressing in Swift than in ObjC.

In ObjC, outside of the rare case of publishing a public API or the like, the public/private divide was as much about organizing code as it was anything else. And Swift gives us many more arrows for our organizational quiver — closures, nested functions, nested classes, nested types, methods for enums and structs…

For that outlier case of publishing a public-facing interface? Using protocol is semantic, reasonably concise, and to my eyes looks a lot like the class extensions we were already using for this purpose in ObjC.

Text

Welcome back to my final post on Apprentice Challenges, this time focused on company-specific issues.

Before I began working at Reverb, I imagined that most of the obstacles I’d face would be technical in nature (in fact, my first blog post mentioned Vim as my only example of a Reverb-exclusive challenge), but after four months on the job, the biggest hurdles I’ve overcome have been non-technical. As such, the “company-specific” issues I’ll focus on will actually be soft skill challenges encountered at Reverb but with takeaways that are universal.

Soft skills are those which relate to emotional intelligence and behavior, often distilled to “empathy”. The specific challenges I faced were focused in two areas: learning/teaching styles and communication.


As an apprentice, my primary job responsibility isn’t coding itself but learning how to code better. That said, Reverb is a small startup with a small (currently 5-person) dev team, so there is a big focus on productivity. In an environment like this, the ability to learn on the job is of utmost importance. But learning on the job is a two-way street; I have to rely on my mentors (in this case, the rest of the dev team) to help me out when I get blocked so that I can learn quickly and remain productive.

It’s obvious in hindsight, but adapting to the various teaching styles of the other developers is a task that required pointed reflection, discussion, and old-fashioned trial and error. I think we’re synchronized to the point of acceptable efficiency now, but originally a series of conflicts arising from a “tough love” teaching style meant to motivate and accelerate my learning ended up having the opposite effect.

When confronted with this style of teaching, I felt pushed onto my back foot, generalized about, and judged unfairly. Feeling unjustly criticized, I spent substantial time contemplating these interactions, significantly decreasing my productivity on days when this occurred.

Ultimately this effect was easily reversed once we had an explicit face-to-face conversation about learning and teaching styles, and I now feel accepted, supported, and motivated in ways I was not originally. The experience emphasized the importance of understanding the ways in which you learn most efficiently and the teaching styles that best allow you to improve. Of course, this takeaway relied on what is probably the most universally important soft skill out there: communication.

Without openly communicating about teaching/learning styles, my co-worker and I would probably still be miscommunicating on a regular basis. It’s incredibly important to be proactive in your communication, especially when starting a new job. Having a scheduled one-on-one meeting with a mentor or boss is very helpful in creating a set time to discuss your experiences at work, but don’t rely on it if you are confused about something impacting you immediately. Speak up! Your co-workers won’t know how you’re doing unless you tell them.

Being proactive in communication means discussing not only gripes but also successes and wins. Positive feedback is important across teams and regardless of management hierarchies. Be thankful and congratulatory; fostering a supportive, positive, trusting company culture has benefits beyond the obvious social perks that come from a happy workplace.

That said, it’s not always easy to create such an environment. Kindness in the workplace, especially in tech, is not always a given. Empathy is a skill, not a gift, and it takes practice and effort to perfect. Be conscious about your interactions at work. Giving feedback is probably the biggest hurdle for most developers regarding communication. At Dev Bootcamp we practiced giving feedback that was Actionable, Specific, and Kind (ASK). Try to focus on these criteria when preparing to give feedback to someone.


As an apprentice, or in any other position that requires learning on the job, be conscious about learning and teaching styles in order to maximize your productivity. Of course, in order to do so you’ll need to communicate proactively, which is often easiest in a supportive company culture. Companies that focus on kindness, fostering trust between employees with the help of Actionable, Specific, and Kind feedback, are commonly those with the most supportive cultures.

Thanks for tuning in to my final apprentice-focused blog post! Feel free to share your experience or drop me a line to chat more in-depth at joe@reverb.com.

Text

Creating a File

Light Table’s behavioral innards are written almost exclusively in ClojureScript. So if we want to start poking at them, the first thing we need to do is create a .cljs file. I’m calling mine “understanding light table.cljs” (you can find the gist here), but feel free to name yours according to your own customs and beliefs.

Namespace and Requires

This is Clojure, and that means namespaces. We’ll need Light Table’s object library. And because macros are weird in ClojureScript, we’ll need to require those separately as well. Like so:
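
  ;; A sketch; the namespace name just has to match your file
  ;; (reverb.lighttable here matches the keywords used below).
  (ns reverb.lighttable
    (:require [lt.object])
    (:require-macros [lt.macros]))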

Feel free to alias the required namespaces as you please. But because the purpose of this post is understanding how Light Table works, I’ll avoid that bit of indirection in my examples.

Making Connections

Go ahead and evaluate that namespace expression (⌘↩︎, by default). Light Table will automatically try to connect to a client REPL. If this file were part of a Leiningen project, it would use that. But seeing as it’s not, it asks you to manually connect to a client:

No client available.
We don’t know what kind of client you want for this one. Try starting a client by choosing one of the connection types in the connect panel.

Luckily, Light Table provides a client specifically for evaluating expressions in the context of itself. It’s called “Light Table UI”. Select it from the list, and try evaluating the ns expression again.

Objects Defined (And Instantiated)

Classic OOP thinking might define an “object” as “state coupled with the behavior that mucks with it”. Light Table decouples state and behavior. In Light Table, objects are responsible for state only.

Similar to the standard OOP practice of defining a class that gets instantiated into a new object, the first step of creating an object in Light Table is to define an object template:
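
  ;; A sketch; :greeting is an arbitrary state-holding property, for illustration.
  (lt.object/object* ::my-template-name
                     :tags [:my-tag-name]
                     :greeting "Hello from my object!")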

This creates a template with a type of :reverb.lighttable/my-template-name, a tag of :my-tag-name, and then whatever state-holding properties we choose to give it.

I should probably say something about tags here. Tags are just a way of grouping objects that’s not coupled to said objects’ type or IDs. This will make more sense once we get to associating behavior with objects, which is customarily done through tags. But for now, think of a tag as a potential mount-point for future mixins.

So, in a common theme you’ll notice throughout Light Table’s object model, object templates are just data structures. After evaling the above, you can inspect it with the following:
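
  ;; object* registers templates in the lt.object/object-defs atom:
  (get @lt.object/object-defs ::my-template-name)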

As you can see, the template is just a simple map. Nice!

Once you have a template, you instantiate it with create:
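
  (def my-object (lt.object/create ::my-template-name))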

You can check it out with:
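
  @my-object ;; objects are atoms, so just deref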

So objects in Light Table are very much like the templates they’re instantiated from. create just adds a unique :lt.object/id key, some properties to handle behaviors (more on behaviors next!), and stuffs the whole thing into an Atom. No magic. Just data structures.

Doing Things

If objects just hold state, how do we get them to do anything? Why, with behaviors, of course! And for creating behaviors, lt.macros/behavior is our go-to macro. It takes a name, some triggers (we’ll get to those), and a :reaction, which is essentially what you want the behavior to do:
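
  ;; A sketch; the behavior and trigger names are arbitrary.
  (lt.macros/behavior ::say-hello
                      :triggers #{:say-hello}
                      :reaction (fn [this & args]
                                  (println "reacting!" args)))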

Inspecting behaviors is also straightforward; they, too, are just data:
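
  ;; behavior registers itself in the lt.object/behaviors atom:
  (get @lt.object/behaviors ::say-hello)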

Of course, a behavior doesn’t do you a lot of good unless you add it to an object. For the sake of having something illustrative to eval, we’ll use lt.object/add-behavior!. But be aware: this is generally not the best practice.
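
  (lt.object/add-behavior! my-object ::say-hello)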

If you re-eval your object at this point, you’ll notice it now has extra data in its :listeners and :behaviors keys.

Okay. So far, so good. But seriously. How do we freaking make an object do something?

Remember that trigger you gave your behavior? You activate the behavior of an object by raising one of the triggers of the associated behavior:
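
  (lt.object/raise my-object :say-hello)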

Note that the first argument passed to the behavior’s function is the object itself. But you can pass in others in the raise:
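
  (lt.object/raise my-object :say-hello "extra" :args)
  ;; the extras arrive after `this` in the reaction's argument list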

But Don’t Do Things Like That

Here’s the problem. Behaviors added with add-behavior! can get clobbered whenever Light Table reloads its own behaviors. So the proper way to associate behaviors with objects is the way Light Table does it: in a behavior file like “user.behaviors” or the “.behaviors” file included with plugins.
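
In our case, the entry would look something like this (a sketch of the .behaviors map format, where :+ adds behaviors to tags):

  {:+ {:my-tag-name [:reverb.lighttable/say-hello]}}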

When you reload Light Table’s behaviors, you’ll find this behavior automatically gets attached to any objects with the given tag.

You can even specify arguments in the “.behaviors” file:
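
  {:+ {:my-tag-name [(:reverb.lighttable/say-hello "bonus arg")]}}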

Which get appended to the behavior’s arguments like so:
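
  (lt.object/raise my-object :say-hello "from raise")
  ;; the reaction is invoked roughly as
  ;; (reaction my-object "from raise" "bonus arg"),
  ;; i.e. the .behaviors arguments are tacked on after the raise arguments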

So… Why?

To sum up: you create an object template that specifies some state and :tags. Then you create a behavior with some functionality and :triggers. You associate those behaviors with any instantiated objects carrying a given tag in one or more .behaviors files. And finally, behaviors attached to objects have their functionality invoked by calling raise with the appropriate trigger (or, more likely, by Light Table calling raise with a given trigger in response to user input).

That seems like a crazy amount of abstraction to wrap your head around for what could be a simple (do-thing my-stuff), right? But the benefits along the axes of customization and flexibility are pretty extreme. If you think about it, any part of the chain from tag→object→trigger→behavior→state can be redefined without disturbing its neighbors. Not only that, it could be redefined and reevaluated at run time to modify the behavior of Light Table while you’re using it.

That’s a pretty cool trick that makes developing for Light Table just as much fun as the Clojure it’s built on famously claims to be. But it also gives the open source community a lot of freedom to extend the editor in ways the original developers never anticipated.

And that, for me, is the most valuable attribute a text editor can possess.

Text

Here’s a fictional controller that loads up a few variables for a dashboard-like view.


  class MyController
    def index
      @products = Product.where(price: params[:price])
      @total_price = @products.sum(:total_price)
      @show_admin_bar = current_user.admin?
      @products_by_type = @products.count(group: "type")
    end
  end

This type of controller tends to grow in size and complexity as new features are added. Soon you’re up to your ears in instance variables that are set so the view can render various bits.

What’s wrong with instance variables?

  1. An instance variable doesn’t complain if it’s nil. You could easily misspell the variable in your view and never really know about it.
  2. If generating the values assigned to them gets complicated, you end up littering your controller with unrelated private methods. Instance variables resist refactoring.
  3. You will end up with lots of controller specs asserting on assigns. Controller tests are harder to read because there’s a lot of noise about http verbs, params, and assigns. 

The solution? Extract an object.

Some people call these Presenters or View Models.


  class MyController
    def index
      @my_view = MyView.new(price: params[:price])
    end
  end

  class MyView
    def initialize(price:)
      @price = price
    end

    def products
      @products ||= Product.where(price: @price)
    end

    def total_price
      products.sum(:total_price)
    end

     …etc…
  end
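
A nice side effect: the view model is testable as a plain old object, with none of the http/assigns noise of a controller spec. A sketch, assuming RSpec and a minimal Product:

  describe MyView do
    it "exposes products matching the given price" do
      product = Product.create!(price: 100) # assumes this satisfies validations
      view = MyView.new(price: 100)
      expect(view.products).to eq([product])
    end
  end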

Happy Friday.

Text

In late January I began an optimistically-paced series of blog posts on challenges faced by apprentice developers. Today I’d like to pick back up with a focus on improving tech-specific knowledge.

Originally I explained:

“There are going to be a lot of tools and methods with which new programmers are unfamiliar or not as capable as necessary. For instance, I’ve spent a significant portion of my first three weeks improving my understanding of test-driven development using mocks, stubs, and libraries that aid in this process (e.g. VCR for API calls).”

After three months on the job, I can say even more confidently that this is a huge challenge faced not only by apprentices but by all developers (though it is certainly more difficult for newcomers). We must be vigilant to keep up with new technological developments, whether the updates are to the language we write, the tools we use, or the new apps being launched. It is the job of a developer to determine the best tool for the job, and that requires developing an awareness of what tools are available.


Apprentices are in the unique position of having a comparatively tiny set of tools, so we need to work as quickly as possible to learn the tools for our job. To this end, I have two suggestions: practice and focus.

Practice is THE way to learn about new tools. The power of learning by doing cannot be overstated. Whether you are struggling to learn a new language or tasked with integrating a new API, the most effective way to improve will be using the tool you need to master.

There are several ways to do this. The most obvious is learning as you go. This is generally an efficient use of time, but often circumstantial pressure requires a more expedient solution. In this case, I recommend a “breakable toy” which allows for practice without the pressure of clients or management.

Note that reading documentation, watching videos, and going to meetups are all important additives, but prioritize them as supplements and not your primary method of practice.


The second recommendation I have involves focus. The limitless possibilities afforded by coding make being an apprentice extremely exciting. There is a seemingly infinite number of topics to discover and explore, all of which could help you grow into a knowledgeable software craftsman.

However, this benefit of code as a career can have the unfortunate side effect of distracting the novice programmer. It’s very easy to try investigating so many topics that you spread yourself too thin and end up impeding both your learning and your productivity. Realize that as an apprentice, any learning you do will be beneficial, and that it generally makes sense to focus on the topics that also make you a good employee. This might be difficult if you work for a consultancy, but for any other apprenticeship, become the best you can be with the tools and languages used by your employer, and curb your curiosity about other topics until you’re more proficient at what you use daily.

When you practice regularly with appropriate focus, you end up growing at a more rapid rate. Ultimately you’ll prove yourself to your employer more quickly, and you’ll be able to hasten your pace of learning by training on the topics with which your mentor can assist.


Stay tuned next time for the final post in this series, which will focus on company-specific challenges and wrap up with miscellaneous advice for apprentices.

Text

If you’re interested in becoming a better software developer, there are well-worn paths and pedagogical methodologies that can get you there. You can go to school, find a mentor, write code, get involved in open source, get a job — all of these are accepted ways to improve your skills.

I’m not here to talk about any of them. This is the first post in a series where I’m going to explore interdisciplinary paths to improving your skill as a software developer.

Why You Should Read Books Unrelated to Programming

Writing software is an act of communication, and through reading we are exposed to different modes of thinking and communicating ideas.

Through reading and expanding our vocabulary and understanding of humanity we can improve the quality of our communication in code as well as in our everyday interactions.

If you agree that naming things is one of the hardest problems in computer science, then you need tools in your arsenal to describe the abstractions we’re asked to create every day. There’s no better way to do that than to expand your vocabulary by reading more books.

What You Should Read

Read the classics of fiction, paying special attention to their narrative arc and the way they use language to bring the story to life. It will serve you well when you need to name your next class or method.

Read complex Russian literature that requires you to keep track of characters with multiple names, uncertainty in outcomes, and unclear motives. It will serve you well when you have to understand complex code interactions and architectures.

Read non-fiction about psychology, sociology, or any field that engages with humanity that you’re ignorant of. It will challenge your world view and expose you to ideas you don’t come across in your everyday life. This can help build empathy for your end users and other software developers as you appreciate the breadth of human experience and approaches to understanding the world around us.


The best software developers I’ve worked with have a broad range of interests outside of the field, and being well read is a consistent theme. Broaden your horizons and read more books unrelated to software.

Text

Let’s say we’re testing a conditional rule that has three parts. The method looks something like this:

def all_good?
    a == 1 && b == 2 && c == 3
end

There are technically 2^3 = 8 branches there, since each of the three variables can either equal its specified value or not. The naive approach may look something like this:

context "when a = 1" do
  context "when b == 2" do
    context "when c == 3" do
    context "when c != 3" do
  context "when b != 2" do
    context "when c == 3" do
    context "when c != 3" do
context "when a != 1" do
  context "when b == 2" do
    context "when c == 3" do
    ...

This is obviously absurd, but sometimes while we’re testing, we get three levels deep into a conditional and don’t recognize that we’re actually repeating this pattern. It’s much less obvious to spot with real-world code.

Here’s a shortcut: all you have to do is test the case when all the conditions are satisfied, and then provide a single context for each sub-condition to prove that when it’s not met, the entire rule fails:

context "when all conditions are met" do
    let(:a) { 1 }
    let(:b) { 2 }
    let(:c) { 3 }

    it { should be_all_good }

    context "when a != 1" do
        let(:a) { 99 }
        it { should_not be_all_good }
    end

    context "when b != 2" do
        let(:b) { 99 }
        it { should_not be_all_good }
    end

    context "when c != 3" do
        let(:c) { 99 }
        it { should_not be_all_good }
    end
end

From 8 branches down to 4. That’s pretty good! And much more readable, without all that nesting. If you had 4 pieces to the conditional, you’d go from 16 branches to only 5.

Happy testing!
