less steaks, more tapas

November 19, 2009

Gee willikers, it’s been a while since my last blog post.

Just why is that? Are they so hard to write? I’d like to be more prolific, so why am I not?

Previous posts have taken me hours to put together. Most are lovingly drafted, written, re-written, sprinkled tenderly with images, polished, cuddled, handed a packed lunch and then released into the wild. Indulging my perfectionist streak is time consuming to say the least.

Blogging in this way seems like a chore to me and this is why I’m not motivated to do it.

I’m going to try something different to see if I can improve my output.

Timeboxing

I use timeboxing all of the time in my work life. I use it to avoid spending more time on a task than it really needs. I use it to force myself to review where I’m at with the current task and whether or not I can stop.

I do this so as not to waste the hours of my time paid for by a client, but I should also be worrying about the hours that are left to me. This requires bopping my inner perfectionist with the hammer of pragmatism from time to time.

I want a publishable blog post in one hour. I use the Pomodoro Technique, so that’s roughly equivalent to two pomodoros of about twenty-five minutes each.

At the end of my first pomodoro I want a first draft. I want this draft to get at least 80% of the message across.

At the end of my second pomodoro I want something publishable. I’ll run through it again, polish it up and publish it. If I run out of time for images, there won’t be any images.

Stay on Target

I tend to ramble. Once I dig into a subject, I find myself wanting to say everything about it that comes to mind.

Although some things are relevant, this weighs down a post and detracts from the easily digestible nature of a blog post. I don’t like having to slog through a long post from other bloggers. Do unto others.

I’m going to blog less steaks and more tapas. This means keeping it lean and keeping it on target.

I’m going to drive a post with a single message. Things that pop into my head that aren’t directly related to this one message are going to be quickly deflected into potential new posts or dropped altogether.

Just Do it Already

Hopefully I’ll be able to increase my blogging output while maintaining a good level of quality.

A one hour action on my todo list looks a lot more inviting than a 2-4 hour one. I want to turn blogging from a chore into a habit.

A rough post out there is worth a million polished posts never written. Maybe I’ll take more of an iterative approach and decorate them with images later.

This was the first post I’ve written with this approach.

needs versus wants

August 21, 2009

Does this list look familiar?

Needs   Wants
Food    Lollies
Water   Atari 2600
Love    Television

I’m sure many of us would have initially placed television in the left hand column. The social studies teacher would then patiently force us to confront what really was necessary to our survival. We’d progressively pare away items from our needs list until we had distilled it into the bare essentials.

We have to complete a similar exercise in adulthood when we prioritise the features of a software system. If only it were as simple.

Survival

The definition of survival (success) in a software system is often very hard to realise. The prioritisation game we played as children was easier; we die if we do not have food so that is a need. Many wants masquerade as needs and picking them out is often difficult because the definition of survival is not as clear.

Are your needs really wants?

Like the human “needs versus wants” exercise, a software project has multiple potential levels of success. I survive if I have a roof over my head but I wouldn’t mind a swimming pool in my back yard. An online banking system could be considered a success if it allows a customer to view their transactions but it would be very useful if it allowed them to transfer money as well. The success of a software system project is like a ladder with perfection perched at the top; the aim is to climb as many rungs as possible before resources or time are exhausted.

There are multiple levels of success

Once you identify subsequent levels of success for your project you can make these the themes for following releases. Using small releases will keep steps between the rungs smaller and make them more achievable.

First Things First

Effective prioritisation can be hard for many software development projects because there is not a clear idea of what constitutes that minimum level of success. When features are played that don’t contribute to achieving that minimum level of success, they steal resources from those that do. This can weigh down the project and increase the chance of all-out failure.

Wants are not necessarily pointless or useless; they just have to be prioritised after needs. There should be a point in your prioritised feature backlog where the features cease to be needs and start to become wants. This point represents the first level of success; if everything up to this point is working as expected then the project has not failed. Any features after that point contribute to subsequent levels of success; they are the wants.

Try and shift that tipping point between needs and wants as close to the start of the backlog as possible. Find the absolute bare minimum that the system can get away with doing and make that your first goal.

Climb the first rung first

Wants are actually very important because they provide the slack that allows your project to adapt to changes. You should plan to complete a healthy proportion of wants even though more or fewer may actually be completed.

Be thorough (even brutal) about asking what really does constitute the minimum level of survival for your system. Identifying truly essential functionality and then building it first gives you a solid foundation for further successes.

Mike Cohn’s Agile Estimating and Planning is in my opinion the current bible for exactly what is mentioned in the title. It contains a lot of great stuff on slack and how to prioritise features based on real information. Check it out.

By the way, I never did get that Atari 2600.

the wombat’s raft

August 21, 2009
The wombat

The rain poured down for several days and the wombat began to worry. He decided to build a raft to escape the approaching flood.

He gathered some logs and arranged them on the ground. As he was about to lash them together a thought struck him, “I don’t want to sleep on the hard wood as I’m floating around”. He spent the rest of the night gathering grass to make a comfortable bed and then promptly fell asleep as the rain poured on.

The wombat later awoke to the water lapping at the entrance to his burrow. He scampered outside and jumped onto his raft.

The raft instantly fell apart.

Build the most important things first.

Bumps – remote features for Cucumber

July 20, 2009

Bumps is a remote feature management extension for Cucumber. It allows you to pull in feature content from another location prior to performing a Cucumber run and then post the results after. What this means is that features no longer have to live in plain text files within your project and that you have an easier way to notify another system of the results of a run.

Check it out on Github

… or read on if you’d like the lowdown on why it came into existence.

The Problem

We had a stab at BDD once. Although we learned a lot and made some good progress, there were a few points of friction. We were all editing feature content in the usual way and checking changes into the version control system. Developers were happy with this but the business analyst was not. He wanted bells, whistles, rounded corners. Sweet lullabies to soothe him as he tinkered with yet another login feature.

As somebody who messes around with text files all day, I thought “surely it isn’t that hard”. Unfortunately, something doesn’t have to be that hard to be discouraging.

Demands for more usable tools are perfectly reasonable. The tools we were using were second nature for developers but this was not the case for just about everyone else. I don’t blame any business analyst or tester for not wanting to edit plain text files and mess around with a version control system.

BDD is meant to be something that benefits and is owned by the whole team including the customer, their proxies and testers.

More accessible tools need to be created if BDD is to gain wider acceptance across the whole team and in general. Members of the team who are not developers should not have to stretch so far to use BDD tools. Given an absence of appealing tools for everyone, more teams will toss BDD into the too-hard basket.

Another problem is the visibility of feature status. I can imagine how writing features could seem like tossing scraps of paper into a black hole to some if they do not actively check (or are even aware of) feature reports. It seems that developers get to have most of the fun of seeing a feature executed and passing. Anyone editing features as text files that doesn’t actively run them probably doesn’t get this warm and fuzzy feeling.

Features are living things and should appear that way to everyone. They are constantly passing, failing or sitting there pending as the code is in flux. A nicely formatted HTML report is a great way to show the status of features but it is detached from the original feature. Having a single place to edit feature content and show feature status should make it easier to follow that status. Imagine being able to edit a feature or scenario and then a few seconds later, see its status appear. Tools like FitNesse are moving in the right direction of adding this sort of visibility.

So the two problems are:

  • how can we edit features in a more accessible way but still allow them to be executed easily?
  • how do we make the status of features more visible and tied to the features themselves?

Google Wave

Unless you’ve rejected technology and opted to live in a cave for the past few months (tempting at times), you have probably heard of Google Wave.

Wave is an ambitious new platform for collaboration and information exchange that has many dribbling in anticipation. Watch the video if you haven’t already. Wave looked to me like a pretty intuitive fit for some of the problems we had been facing earlier.

A wave (with the right extensions) is potentially capable of the following:

  • allow the entire team to edit and collaborate on feature descriptions
  • reflect the status of the last test run, showing exactly which features, scenarios and steps passed, failed or were pending
  • automatically run a feature as soon as it has been edited

So, Wave looks pretty cool. It is still pretty raw but a general release is on the horizon. How do we get it working with our current BDD tools?

Bumps

Bumps is only a small piece in the overall puzzle of providing a better way to collaborate in BDD. It is an attempt to adapt Cucumber for this purpose.

There were already a few different Cucumber extensions out there like Pickler and Remote Feature but I wanted to make something a little more generic that would work after adding just two extra lines of code. And here they are:

  require 'bumps'
  Bumps.configure { use_server 'http://myserver.com' }

All you need to do is place them inside of your env.rb or other file loaded by Cucumber and then kick off a run as normal. Providing that the feature argument you gave it was a directory and there is a server running at the address configured, Bumps will

  • pull feature files across HTTP and write them to the directory
  • run Cucumber as normal
  • push the results of the run across HTTP

At the moment Bumps is quite generic. This means that you should be able to easily use it with JIRA or other systems providing that you build a compliant server. It was designed to be used with Google Wave and will most likely evolve in line with that technology. I haven’t even started digging into Wave development in earnest yet. I’m expecting that Bumps will change significantly once I do. I’m hoping that it will also benefit from others doing the same.
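
To give a feel for what a compliant server might involve, here is a rough Sinatra sketch. The route names, directory layout and result handling are placeholders I’ve invented for illustration; they are not necessarily the paths or formats that Bumps actually expects, so check the Bumps documentation before copying any of this.

# A hypothetical feature server. Routes and formats are placeholders only.
require 'rubygems'
require 'sinatra'

FEATURE_DIR = File.join(File.dirname(__FILE__), 'features')

# Serve the current feature content so it can be pulled before a run.
get '/features' do
  Dir[File.join(FEATURE_DIR, '*.feature')].map { |path| File.read(path) }.join("\n\n")
end

# Accept the results that get pushed back after the run.
post '/results' do
  File.open('last_run_results.html', 'w') { |file| file.write(request.body.read) }
  'thanks'
end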

Next?

The next step is to try and develop some tools that make use of Wave and see just what can be achieved. Maybe it won’t be all that great, maybe it will.

With all this talk of shiny things I feel that I must also drag out the well-worn phrase, no silver bullet. Even the most awesome of tools will never guarantee your success, only help minimise the chance of failure. Nothing is a substitute for a team that knows how to communicate and collaborate effectively.

I would rather have a team chiseling features into a stone tablet if it meant that they actually talked more about what the system should do.

That aside, having nice tools can’t hurt your chances.

keeping on the tail of code quality with a ratchet

May 29, 2009

High code quality is one of my all-time favourite things, up there with beer, ice cream and when a bird sings. When I talk about quality in this sense I mean the maintainability of code. Quality is not a finite thing; instead it is a subjective little creature, a slippery invertebrate that squirms and changes over time. The subjective nature of quality is something that we have to live with; a more solvable problem is one of adapting to our changing ideas of it.

The Problem

Bash!

A number of tools already exist for measuring and monitoring the quality of code in a variety of programming languages but some of the ways in which we use them are flawed. A common approach to style checking involves encoding our current view of what constitutes quality as a series of rules. Those rules can then be engraved in stone and used to bludgeon our code and developers from that day forward.

Our idea of quality changes over time as we gain experience and understanding. Frequent reassessment of what constitutes quality is a fantastic thing but unfortunately we don’t tend to do it all that much. It’s hard to keep discussion rolling about design and standards unless we are forced to. It doesn’t help that a shift in ideas about quality can suddenly cause us to view our legacy code base in a different light.

What do we do when we have a bunch of legacy code written to a previous set of code quality standards but then the standards change? There seem to be a few options: make all of the code conform in a big bang refactoring, or relax the automated checks.

Big bang refactorings are risky, exhausting and disruptive. If your idea of what constitutes quality code has changed dramatically then you may have a lot of work ahead of you. I for one do not relish the idea of having to absorb that in one huge hit.

Relaxing the automated checks would be asking for trouble. What is to stop someone introducing additional code that completely violates your shiny new standards of quality? Nothing. You just need to survive on good will and pixie dust until you can do the above, the big bang refactoring.

As the size of a team grows, communication becomes harder. We tend to have fewer informal discussions about things like quality because of the difficulties of coordinating a bigger group. Sometimes communication gets a little neglected unless we are prompted to discuss things regularly.

How can you have the freedom to reassess your code quality standards and bring existing code in line with them over time? How can you prompt regular reassessment and discussion?

Style Violations

Yuck

We are used to seeing style violations like nasty little cockroaches, scuttling about on our precious code. They make us cry. Nobody likes them. But cockroaches get a bad rap; they are relatively clean little fellows. They just love a filthy surface.

Style violations are not the problem; they are only symptomatic of it (maybe). They are indications of a possible problem. The problem is not that the method complexity metric was violated; the problem may be that the method is too complex to understand.

As any university student will tell you, cockroaches are only a problem if your hygiene standards are sufficiently high. If you all of a sudden decide that you really hate cockroaches, you may get a nasty surprise the next time you turn the kitchen light on. You can’t treat the problem immediately though, it takes time. This involves tolerating a certain number of violations.

The violation threshold represents the number of violations that you will tolerate: the minimum level of hygiene that must be maintained. If the current number of violations exceeds this threshold, the style checks should fail. If the number drops below, the threshold should be tightened down to the new count. The aim should be to keep driving the threshold towards zero; it should only ever be increased when the style rules are changed. So:

  • threshold only goes up when style rules have changed, never because of code changes
  • threshold only goes down when code has been improved or if the style rules are relaxed (and this should never happen lightly)

Introducing a Ratchet

I love that clicking noise

I first heard about the concept of a ratchet in Chris Stevenson’s blog post. The gist of this approach is to steadily tighten accepted levels of one metric. The ratchet effect is that the levels are never allowed to slacken, only to be tightened. When used to improve code quality, old code will be tolerated until it can be cleaned up (but not allowed to degrade any further) and new code is held to the newer, stricter quality standards.

The first step in implementing a code quality ratchet is to decide what our current idea of quality is. Pick out some common metrics like complexity and class/method length and come up with some starting levels. Choose levels that are aggressive; remember that they won’t be set in stone and they should cause another discussion later on.

Once you have decided on some initial checks and levels, run them against your current code and note the number of violations. That number is your initial threshold. Most style checking tools like Checkstyle have the ability to set a maximum violations figure; use whatever means you have to set that figure to your threshold.

Once you have your ratchet in place preventing things from getting worse, you need to figure out how to tighten it. The best method is to tighten the levels automatically as soon as they drop. You can work your own build magic to do this but it’s prettier in some build languages than it is in others (I’m looking at you, Ant).
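
For what it’s worth, the tightening step itself only needs a few lines. Here is a minimal sketch in plain Ruby; it assumes the current violation count can be extracted from your style checker’s report and that the threshold lives in a file committed alongside the code, both of which are my own placeholders rather than features of any particular tool.

# A minimal ratchet sketch. The threshold file name and the way the
# violation count is obtained are assumptions, not prescriptions.
THRESHOLD_FILE = 'violation_threshold.txt'

def apply_ratchet(current_violations)
  threshold = File.read(THRESHOLD_FILE).to_i

  if current_violations > threshold
    # Hygiene has slipped, so fail the build.
    abort "Style check failed: #{current_violations} violations (threshold is #{threshold})"
  elsif current_violations < threshold
    # Things have improved, so tighten the ratchet so they cannot slip back.
    File.open(THRESHOLD_FILE, 'w') { |file| file.write(current_violations.to_s) }
    puts "Threshold tightened from #{threshold} to #{current_violations}"
  end
end

# e.g. apply_ratchet(violation_count_from_checkstyle_report)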

Having the facilities to tighten the ratchet when the number of violations drops is good, but how do you continually drive them down? Try and set targets for each iteration or release. Make the current threshold easily visible to everyone and review progress regularly. The easy pickings will soon evaporate and expose the meatier challenges.

If you do ever drive your threshold to zero violations then make sure to have more discussions about quality. Were the metrics aggressive enough? Is the system there in terms of desired quality levels? If so, pat yourself on the back. If not, reset the rules, set a new threshold and get to it.

Communication

Keep an ear out for friction

So this can be a useful technical approach for increasing the level of code quality but I think the real value lies in the discussion it encourages.

When someone is being prevented from checking in because their changes have broken the ratchet, the rest of the team will know about it (usually manifested as “arrrgh! the fucking ratchet won’t let me check in!”). These times are a prompt to have a discussion about the changes, namely which rule was violated and what it suggests about the current design of the code. Why did we set this rule? What situation is it trying to guard against?

Violations do not pop out as neat little tickets telling you what is wrong and how you need to fix it. Style violations are the prompt for the team to find out what the real problem is and how it can be fixed.

Understandably, violations will be a major cause of frustration. Legacy code will have many tissue-thin spots that teeter on the edge of breaking; it sucks to be the one holding the bomb when it goes off. People can view the style checks as a nuisance when they are too focused on their primary goal of just getting their chunk of work out of the door. A significant share of the focus needs to be awarded to maintaining and improving quality. Negative energy needs to be channeled into discussion about the real problem and how it is going to be fixed. Bigger, more painful nips from the style checking tool should also encourage people to run the checks more frequently.

Style violations have the magic effect of being a catalyst for design discussions that would not normally have taken place. The timing is not ideal (the code has already been produced) but it is better than nothing. More often than not, there is a better design that would satisfy the current notion of quality. Hopefully the problem is then resolved, a new direction is set and everyone has learned something new that they otherwise wouldn’t have.

Of course, the design isn’t always the problem. These situations can also suggest that the style rules need to be reviewed. This is why you should set the levels to be aggressive; it is better to change the level of a rule based on experience rather than taking a stab at it during initial discussions. In an ideal world the settings for each rule are the result of real experiences of what is acceptable and what is not.

Care needs to be taken to act on these prompts to communicate; otherwise the approach will fail. When violations pop up:

  • move focus away from the symptom and onto the cause
  • relate the problem back to overall quality goals
  • review the rules but only if all options for a better design have been explored and there is widespread agreement
  • treat the goal of excellent quality as primary; don’t compromise it just to get the current story out of the door

You ideally need one or more people to really champion this approach. They should be constantly listening for people being bitten by violations and should be ready to fire up the necessary discussions as soon as it happens.

You can’t force a team to continually focus on code quality, all you can do is create an environment that is more conducive to such an attitude. The mechanical aspects of using a ratchet should make the job easier but the real key is consistent communication.

Baldrick – a dogsbody

May 9, 2009

Ruby is an awesome language for hacking stuff together (amongst other things of course) but do you want something that takes care of more of the plumbing, especially where web feeds are involved? Baldrick will service your whims.

Check it out at Github.

The Problem

Where I used to work we used a few Delcom build lights to monitor our continuous integration build. The scripts used to run these things are great fun to write (probably why we had a few different ones floating around) but the code to monitor the RSS feed containing the build status was quite repetitive. What we really cared about was linking a change in status to a change in light colour (and behaviour), not how to pull apart the RSS for the stuff we needed.

We had modified our light scripts to make the lights flash when the build had been newly broken. Someone could then ‘claim’ the build and stop it flashing by hitting a particular web URL. We also tended to communicate such things over IRC or some other means of broadcast. I thought it would be cool if you only had to claim the build in your message and that something could pick that up and change the status of the light for you.

At the same time I was playing a lot with Sinatra and I was giddy as a schoolgirl at just how easy it was to knock out a simple web server in a few lines. The magic ability to just execute a script and have it run as a web server really tickled my fancy. I thought that I’d love to have something like Sinatra that took care of the plumbing and allowed me to easily glue events to actions.

I was also impressed with the syntax of Cucumber steps and the ability to join up a textual step with the implementation via regular expressions.

All of these things came together as Baldrick:

#cuppa.rb
require 'rubygems'
require 'baldrick_serve'

listen_to :feed, :at => 'http://search.twitter.com/search.atom?q=cup+of'

on_hearing /cup of (.*?)[\.,]/ do |beverage, order|
  puts "#{order[:who]} would like a cup of #{beverage}"  
end

Executing the above script will start a Baldrick server that listens to a Twitter feed for messages containing ‘cup of’. When a tweet containing a cup of something is found, the name of the tweeter and the beverage (perhaps) are spat out to the console.

How Does it Work?

Baldrick listens to a number of sources (at the moment RSS/Atom feeds and Injour statuses) for orders. The content of these sources is wrangled into a common format containing who, what, where and when.

From there it’s a case of hooking an order up to a task. When you define a task you give it a block to call when it is triggered. On receiving a new order, Baldrick will trigger all matching tasks. This also means that you can have orders from multiple sources triggering the same task if you so desire. Capturing groups within the regex will be passed as arguments to the block, followed by the order.
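
As a rough example, and going only by the syntax shown in the script above, wiring two sources to the same task might look like this. I’m assuming listen_to can simply be called once per source, which the prose implies rather than spells out, and the URLs are placeholders.

# Two feeds triggering the same task.
listen_to :feed, :at => 'http://search.twitter.com/search.atom?q=claiming+the+build'
listen_to :feed, :at => 'http://ci.example.com/broken-builds.rss'

on_hearing /claiming the (.*?) build/ do |build_name, order|
  puts "#{order[:who]} has claimed the #{build_name} build"
end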

Baldrick uses the same tricks as Sinatra to allow an arbitrary script to be run as a server (the #at_exit hook).

Writing your own listeners is a snap, check out the wiki for details.

Try it out and drop me a line to tell me what you use it for.

Numerouno – number parsing for Ruby

May 9, 2009

How do I turn a string like ‘forty two’ into something I can manipulate as a number? String has the #to_i method but that only works on numerals like ‘3’. Numerouno is an English natural language parser for numbers.

Check it out at github.

The Problem

I hit this problem a few times in the past while writing Cucumber features that contained textual descriptions of numbers. Being good little BDD elves, we had worked very hard to keep the feature language true to that used by the customer. We were already using the awesome Chronic for parsing descriptions of dates and times which went a long way to preserving the language.

Unfortunately, describing numbers still seemed a bit clunky. We had steps like:

When I hop 37 times

The above is not ugly by any means, more mildly irritating. The main thing is that this is not how I would write the sentence. Maybe you find ‘37’ more concise but to me it sticks out like a sore digit (ha ha) in an otherwise natural looking sentence. I want to write something like:

When I hop thirty seven times

And indeed now I can! Hurray hurrah!

require 'numerouno'
'thirty six billion, three hundred and ninety two'.as_number
 => 36000000392

How does it work?

The problem of parsing English number phrases was an interesting one and it took me a while to model it in a way that wasn’t totally confusing. Basically the current approach goes a little like this:

Identify individual numbers in the string

The first thing is to turn ‘thirty six billion, three hundred and ninety two’ into something we can manipulate a little easier, [30, 6, 1000000000, 300, 90, 2]. Simple regex matching is used to identify individual numbers.

Combine numbers

The English language has certain rules for interpreting numbers in a sentence. The rules most often revolve around numbers that are powers of ten, one hundred, one thousand, one million and so on. Once you hit one of these numbers you can start applying rules for the numbers either side of it to mash them into a combined figure.

The rules typically lead to you multiplying by the number to the left and then adding the number to the right. For example ‘five thousand and one’ => [5, 1000, 1] => 5 * 1000 + 1 => 5001.

Combination is done in several passes to ensure that lower powers of ten are combined properly before attempting to combine them with higher ones. Once all combination passes have been made a final step sums up the resulting list of combined numbers for the actual figure.
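
To make that concrete, here is a stripped-down sketch of the general approach: a word table, a scan for individual numbers, then combination around powers of ten. It uses a single accumulator pass instead of Numerouno’s multiple combination passes and the word table is heavily abbreviated, so treat it as an illustration of the idea rather than the gem’s actual code.

# Illustrative only, not Numerouno's implementation. Abbreviated word table.
WORD_VALUES = {
  'one' => 1, 'two' => 2, 'three' => 3, 'four' => 4, 'five' => 5,
  'six' => 6, 'seven' => 7, 'eight' => 8, 'nine' => 9,
  'twenty' => 20, 'thirty' => 30, 'forty' => 40, 'ninety' => 90,
  'hundred' => 100, 'thousand' => 1_000,
  'million' => 1_000_000, 'billion' => 1_000_000_000
}

def words_to_number(phrase)
  # Identify the individual numbers ('and', commas and spaces fall away).
  values = phrase.downcase.scan(/[a-z]+/).map { |word| WORD_VALUES[word] }.compact

  # Combine around powers of ten, then sum whatever is left over.
  total = 0
  current = 0
  values.each do |value|
    if value == 100
      current = (current.zero? ? 1 : current) * value  # 'three hundred' => 300
    elsif value >= 1_000
      total += (current.zero? ? 1 : current) * value   # 'thirty six billion' => 36000000000
      current = 0
    else
      current += value                                 # 'ninety two' => 92
    end
  end
  total + current
end

words_to_number('thirty six billion, three hundred and ninety two') # => 36000000392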

Limitations

At the moment only whole numbers up to those in the trillions are supported. The following things are not:

  • anything bigger than nine hundred and ninety nine trillion, nine hundred and ninety nine billion, nine hundred and ninety nine million, nine hundred and ninety nine thousand, nine hundred and ninety nine
  • fractions be they decimal or otherwise
  • other variations of numbers like ‘third’, ‘thirteenth’
  • slang like ‘K’, ‘grand’
  • any language except English. The rules for interpreting numbers are specific to the English language.

Yes, ironically Numerouno does not recognise ‘numero uno’.

If in doubt, try it out. Rhymes.

nothing looks worse than an ill-fitting domain model

July 10, 2008

An ill-fitting domain model is a blight upon your application. Sharing the most personal part of your application between systems can stretch it out into a floppy mess that isn’t fit for any of them.

The domain layer of your application embodies the problem being solved. In the world of software high fashion, the domain layer should fit the problem like a well tailored suit – snug but comfortable with room to move.

Quite often another problem comes along that appears to be the same as one you’ve solved already. They look to be of similar build and height so getting some more wear out of those expensive duds seems to make good sense. Unfortunately, on closer inspection things don’t fit that well; the shoulders are loose and the stomach is tight. One false move and the seat of those slacks could be rendered asunder.

The domain can be let out to accommodate both systems but it will never fit as well as if it was tailored for a single purpose. As the shape of each system changes over time (middle age spread affects software as well), subsequent adjustments will become harder because of the need to satisfy both parties. Try to stretch the domain across multiple systems and it will pretty soon begin to look like a muumuu.

So what if the domain is a little baggy in the arse? Aesthetics aside, a poorly fitting domain is a strong contributor to software entropy (disorder).

The domain code of your application should sufficiently explain the problem your application is trying to solve. The domain code records the interpretation of the problem formed by the original developer. Subsequent developers will use this recording to form their own mental model of the problem and make changes to the code in turn. The domain code becomes an essential tool for communicating the original problem to new developers.

Problems occur when the domain code does not accurately model the domain. A poor model of the domain in code will lead to a flawed mental model being formed by any new developer. Changes contributed using this incomplete or inaccurate understanding further reduce the fidelity of the domain code. If the decay is left unchecked then the model can eventually lose all shape. New additions will be tacked on awkwardly and the domain will mutate out of control. The mutation will increase exponentially and eventually the system will need a total rewrite simply because nobody can understand it any more. The sad situation is that the rewritten system is often doomed to the same fate, it just starts off in a less advanced stage of decay.

So what can you do to keep your domain layer looking snappy?

Preserve the cut of your domain. Don’t stretch it out of shape by attempting to make it an all singing all dancing representation of multiple problems. Two different systems may utilise the same data but be concerned with separate behaviours – if this is the case then consider modelling two separate domains.

Realise that some duplication is worth it or is not really duplication at all. Just because two entities have the same properties they don’t necessarily model the same thing. Don’t strive for reducing code duplication at the cost of clarity.
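
As a contrived sketch of that point, a billing system and a support desk might both store customers, yet each cares about very different behaviour. Two slim models will usually serve better than one shared lump; the class and method names below are invented purely for illustration.

# Contrived example: two systems, overlapping customer data, two separate
# domain models, each only as big as its own problem requires.
module Billing
  class Customer
    def initialize(name, outstanding_invoices)
      @name = name
      @outstanding_invoices = outstanding_invoices
    end

    # Billing only cares whether the customer owes us money.
    def in_arrears?
      @outstanding_invoices.any? { |invoice| invoice.overdue? }
    end
  end
end

module Support
  class Customer
    def initialize(name, open_tickets)
      @name = name
      @open_tickets = open_tickets
    end

    # Support only cares whether the customer is drowning in open tickets.
    def needs_attention?
      @open_tickets.size > 3
    end
  end
end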

Periodically update your application’s wardrobe. Review the problem domain and make sure it is still accurately modelled in the code. Always remember that the domain code is there to communicate the problem to others. Invest the time to improve it continually and don’t be afraid to make changes if the model is no longer accurate.

Make your domain code easy to understand by keeping it simple. Use techniques such as TDD to drive your code to be barely sufficient and easier to digest. Use an evolving domain model instead of a big up front design based on guesswork and incomplete understanding. Don’t add a bunch of crap that you think you’ll need – add it when you need it. Chime in obnoxiously with “YAGNI!” whenever anyone else suggests otherwise.

Keep to the boundaries of the problem you are trying to model. Sure there may be things attached at the perimeter but resist the temptation to try and capture everything; just focus on the view that affects your application. Limiting this view makes it easier to comprehend.

Finally, remember to just say no to the false economy of domain code reuse. Domain code should be something intimate to your application and the problem it is trying to solve so keep it that way. Compromising the quality of your domain code will lessen the ability of your application to communicate the problem domain to others and will accelerate the decay of your system.

