I've been at Bluecrew for a month now and one thing I'm actively improving is their continuous integration (CI) strategy. Every team has values around their code and systems. The existential question I ask about each value is this: does CI enforce the value? If not, the value does not exist. You could think of this as an all or nothing CI strategy.
Thoughts on Testing
I first heard about test-driven development (TDD) around 2006 and I still hear about it often. TDD encourages you to write effective tests because you always see both states: failing and passing. This is great, but it's easy to write the wrong code for a feature as you learn more about it, and duplicating that exploratory effort in tests is a waste of time.
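The two states are easy to see in a tiny example. Here's a minimal sketch using plain assertions (the slugify function and its behavior are invented for illustration):

```python
def slugify(title):
    # Green step: just enough code to turn the failing assertion below green.
    return title.lower().replace(" ", "-")

# Red step: in TDD this assertion is written first and fails, because
# slugify does not exist yet; only then is the code above written.
assert slugify("Hello World") == "hello-world"
print("test passed")
```

Seeing the assertion fail first proves the test can fail at all, which is the point the TDD camp is making.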
After many years of building software on small to medium sized teams, I prefer a leaner approach that accomplishes the same goal: something I call spike, test, break, fix.
I wrote an article for Mozilla Hacks about testing React / Redux apps. Here are the key takeaways:
- Setting up a test for a Redux-connected component is simple: just dispatch the actions needed to enter the desired state. There's no need for mock objects or API calls. If you need to assert that the component dispatches an action in response to a UI event, you can spy on the store's dispatch function.
- Shallow rendering lets you run fast, isolated unit tests. For example, breakage in an <Icon> component (one that might be used all across the app) would not break your entire test suite.
- Testing component interfaces rather than aiming for complete end-to-end coverage improves encapsulation and makes tests more maintainable. For example, if the component under test passes an onSubmit prop to another component, it's better to simulate the integration by calling otherComponent.prop('onSubmit')(). There's no need to fully mount the other component.
- I recommend using static typing with these testing strategies to ensure all components are integrated correctly.
Fudge, the Python mock tool, goes 1.0! You can grab it with pip install -U fudge or directly from PyPI. This marks the end of a long incubation period where the community and I used Fudge in real world scenarios to see what worked and what didn't. I'm sure there are many more improvements to make, but as of 1.0 I'm very satisfied with what we've accomplished. This is thanks to its small but vocal community of users, to all contributors, and to everyone who pointed out flaws...
If you're building software that will be used by hundreds of millions of people at once, it's pretty tricky to simulate that kind of load in a testing environment. And without realistic load tests, you can't be all that sure whether your infrastructure will stand up to the pressure. MySpace tried to use 800 EC2 instances to simulate one million concurrent users on their new video features, but before they could reach the limit of their own app they hit the physical limits of their Akamai datastore due to the geographic location of the EC2 nodes. D'oh!
Instead of simulating load, why not just deploy the feature to see what happens without disrupting usability? Facebook calls this a dark launch of the feature...
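A dark launch can be sketched in a few lines: the new code path runs against real traffic, but its result (and any exception) is thrown away, so users only ever see the stable path while you watch the new one under genuine load. This is a minimal illustration of the idea, not how Facebook actually wires it up; every name here is invented:

```python
import random

class DarkLaunch:
    """Shadow a fraction of live requests onto a new code path,
    discarding its output so users never see it."""

    def __init__(self, new_fn, sample_rate=0.01):
        self.new_fn = new_fn
        self.sample_rate = sample_rate
        self.calls = 0
        self.errors = 0

    def shadow(self, *args, **kwargs):
        # Sample a fraction of live traffic onto the new code path.
        if random.random() < self.sample_rate:
            self.calls += 1
            try:
                self.new_fn(*args, **kwargs)  # result is thrown away
            except Exception:
                self.errors += 1  # record failures, never surface them

def stable_render(user_id):
    return f"page for {user_id}"

def experimental_render(user_id):  # the hypothetical new feature
    return f"fancy page for {user_id}"

launch = DarkLaunch(experimental_render, sample_rate=1.0)

def handle_request(user_id):
    launch.shadow(user_id)         # exercise the new path under real load
    return stable_render(user_id)  # the user always gets the stable result

print(handle_request(42))  # prints "page for 42"
```

In production you'd watch launch.calls and launch.errors (or their equivalent in your metrics system) climb while slowly raising the sample rate.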
The fixture Python module is a utility for managing data needed for automated tests. Its new version, 1.3, adds support for the Django backend. This was a champion effort by Ben Ford, who wanted an alternative to Django's own JSON / YAML based data loading mechanism. Thanks, Ben! Here is the complete changelog. As usual, you can run easy_install fixture or pip install fixture to get it. Or you can download it from PyPI.
I've been experimenting with a new tool, recently released as open source, called JsTestDriver.
Here are some features it provides that I thought were nice ...
Nose 0.11 has just been released. Woo! This has been a long time in the making and got a nice boost from many devs sprinting during PyCon. Here are its nifty new features:
- Vastly more readable documentation, with a ton of new material added.
- Parallelize your test runs with the --processes switch (not yet supported on Windows; that support is in the works).
- Output test results in the popular xUnit XML style using --with-xunit; this was designed for Hudson and other CI tools that display or collect stats on test runs.
- Hate dealing with the logging module? So do I! :) Nose now captures log messages emitted during failing tests.
- Re-run only the last batch of failing tests with the --failed switch until you get them to pass. This one was inspired by TestNG.
- Collect tests in all modules without running them.
- Better support for IronPython. (Note that Nose has supported Jython for several versions now.)
Also, for forward-thinking types, there is a branch to support Python 3k, but it's not yet ready for production or daily use.
I really like PyCon. It's been said many times but is worth repeating: the hallway track is what makes PyCon such a fun unconference. With that said, here are some happenings:
- I'll be talking Friday after lunch about some fun I've been having trying to test Ajax web applications: Strategies For Testing Ajax. I'm pretty excited about it because there are still a lot of unsolved problems so I'm interested to hear about how other people are testing Ajax.
- There will be a Testing In Python BoF (birds of a feather). Not really sure what we'll do but a lot of people seem interested in it. There have been murmurs of a mock library shootout. Hmm ... I better bring my gun :)
- I'll be on a panel Sunday to discuss Functional Testing Tools in Python. I'll be offering the Nose perspective.
- Ian Bicking will be talking about Topics of Interest, which sounds mysterious. As it happens, I got a hot lead that there may be ... shall we say ... refreshments to aid in conversation (or instigate heckling?). Don't miss this!
- One of my colleagues, Kevin Boers, is giving an ambitious talk called Building a Simple Configuration-driven Web Testing Framework With Twill. It's pretty neat.
- Another one of my colleagues, Terry Peppers, is giving a very entertaining talk called A Configuration Comparison in Python.
- Too many great talks to mention! I'm psyched about the Windmill talk, Jesse Noller's talk on Multiprocessing, and pretty much everything tagged with testing.
See you there.
Just a quick note that there is a new version of Fudge, a mock and stub library for Python. This fixes a lot of bugs in the old release and adds some nice new features:
- You can now declare method expectations more expressively using argument inspectors.
- Declaring an expected method order is now possible with fudge.Fake.remember_order().
- Added fudge.Fake.raises() for simulating exceptions.
- Declaring and modifying call chains is easier and more readable now.
- Lots of bugs fixed, all listed in the Changelog.
Thanks for all the feedback thus far. Special thanks to June Kim for testing this release early and providing feedback on the new interfaces.
I keep getting asked why I created yet another Python mock framework. I really didn't want to, and I'll explain my motivation here. I am a huge fan of PyPI and would be lost without all the hard work from the open source community, but there is always room for more packages: they give developers more options, and oftentimes rewriting software can be hugely rewarding at a small cost. For example, since I wrote Fudge from the ground up, I was able to focus on small things like ensuring that all object representations are sane and that exception messages are as informative as possible. Little things like that can be hard to retrofit into an existing library if they weren't done right the first time.
I just released 0.9.1 of the Fudge module which is a tool for working with fake objects while testing Python code. Some call these mocks, stubs, or actors, but I just call them all fakes because that way you don't have to change the names in code if you update your tests. You can get Fudge from PyPI or by running
easy_install -U fudge. This release contains some nice new features and several contributions by Cristian Esquivias. It has more documentation and some bug fixes but note that some functions have been deprecated.
See the changelog for all new features and details on the deprecations. Big thanks to Cristian for his contributions. Also, thanks goes to Marius Gedminas whose comments on my original Fudge announcement led to better names for some commonly used functions.
I'm excited to announce the release of Fudge, a Python module for replacing real objects with fakes (mocks, stubs, etc) while testing.
Fudge started when a co-worker introduced me to Mocha, a mocking framework for Ruby and a simpler version of jMock (Java). Up to that point I had been building mocks "by hand" in Python with a post-mortem approach; I'd set up some fakes, then inspect their call stacks at the end of the test. I like the jMock approach better: you declare a fake object that expects methods and arguments, you replace the real object with the fake one, run your code, and you're done. If your code doesn't live up to the expectations then your test fails.
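Fudge predates the standard library's unittest.mock, but the declare / replace / run / verify flow described above reads much the same in today's stdlib. A rough sketch of that flow (MailClient and notify are made up for illustration; this is unittest.mock, not Fudge's API):

```python
from unittest.mock import MagicMock

class MailClient:                      # the real dependency (hypothetical)
    def send(self, to, body):
        raise RuntimeError("would hit the network")

def notify(client, user):
    # Code under test: it should call send() with these exact arguments.
    client.send(to=user, body="hello")

fake = MagicMock(spec=MailClient)      # declare the fake object
notify(fake, "alice")                  # hand it in place of the real one, run the code
fake.send.assert_called_once_with(to="alice", body="hello")  # verify expectations
print("expectation met")
```

If notify ever calls send() with different arguments (or not at all), the final assertion raises and the test fails, which is exactly the jMock-style contract.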
Jens W. Klein has just released a pretty cool doctest debugger tool called interlude. It's designed for the situation where you are writing doctests (perhaps in the docstrings of your code) and you think to yourself: hmm, what happens when I run this test? Instead of the back-and-forth run-test-edit cycle, why not just drop into an interactive session from your test suite, play with the shell until you get it right, then copy / paste the session back into your docstrings? This little tool is genius. And surprisingly simple: 11 lines long (3 of those are for the shell startup message).
Jens describes an installation process that involves invoking a custom doctest runner. This can introduce a bootstrapping problem, especially if you are using a doctest runner like Nose because it's hard to customize the doctest runner. Well, actually, this bootstrapping step isn't even necessary. Here's an example ...
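To show the shape of the idea (this is a stdlib-only approximation, not interlude's actual source): a helper that drops a running doctest into a live interpreter, plus a doctest that would call it.

```python
import code
import doctest

def interact(local):
    """Drop from a running doctest into a live interpreter.

    Call interact(locals()) inside a doctest; when you exit the shell
    (Ctrl-D), the doctest resumes right where it left off.
    """
    code.interact(banner="doctest shell -- Ctrl-D to resume", local=local)

SAMPLE = """
>>> total = 2 + 2
>>> # interact(locals())   # uncomment to poke at `total` in a live shell
>>> total
4
"""

# Run the doctest above programmatically; with the interact line commented
# out it completes non-interactively.
test = doctest.DocTestParser().get_doctest(
    SAMPLE, {"interact": interact}, "sample", None, 0)
runner = doctest.DocTestRunner()
runner.run(test)
print(runner.failures)  # prints 0
```

Because interact is just a name in the doctest's globals, no custom doctest runner is needed; whatever runner collects the test (Nose included) can execute it.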