farmdev

Thoughts on Testing

You vs. The Real World: Testing With Fixtures (Coming Soon)

I'm very pleased to announce that my proposal for a talk at pycon 2007 was accepted (#83). I'm pretty new to pycon but, wow, what a tough time this year the reviewers had! 104 submissions, only 50-60 could be accepted; ouch.

The talk is titled You vs. The Real World: Testing With Fixtures and is a way to demonstrate some practical usage for the fixtures portion of the testtools module, as well as talk about why testing with real data is highly effective and fairly easy.

Actually, before submitting the proposal I began pulling out the fixtures logic into a new module for distribution, named simply fixture. This will be a sort-of 1.0 of testtools and will allow me to address the many problems I've run into by changing the interface some.

I hope to have the 1.0 release of the new module somewhat stable before my talk (grins), along with a few new features, e.g.: an official release of the command line fixture generator; a better interface for defining rows in a fixture, like cloning a super row and handling id sequences automatically; support for the with statement (this will work like the current @with_fixtures decorator); and better docs, examples, etc.
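To illustrate what with-statement support for fixtures could look like, here is a minimal sketch built on the standard library's contextlib. Everything in it is hypothetical (the data manager, load_data/unload_data, and the toy DataSet class are made up for illustration and are not the actual fixture API):

```python
from contextlib import contextmanager

# Hypothetical stand-ins for fixture's data loading; the real
# fixture API may differ.
def load_data(*datasets):
    """Pretend to load each dataset; return a lookup of what was loaded."""
    return {ds.__name__: ds for ds in datasets}

def unload_data(loaded):
    """Pretend to tear the loaded data back down."""
    loaded.clear()

@contextmanager
def data(*datasets):
    """Load datasets on entry, tear them down on exit --
    roughly the job the @with_fixtures decorator does today."""
    loaded = load_data(*datasets)
    try:
        yield loaded
    finally:
        unload_data(loaded)

class Foo:
    """A toy stand-in for a DataSet class."""

with data(Foo) as d:
    assert 'Foo' in d  # the test body gets a handle on the loaded data
```

The appeal of the with form is that setup and teardown are guaranteed to pair up even when the test body raises, without needing a decorator per test function.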

read article

You vs. The Real World: Writing Tests With Fixtures (Sunday at Pycon!)

I've been having some trouble trying to edit my proposal summary (maybe it's not possible anymore?) so I wanted to point out that the talk will cover the fixture module, not the testtools.fixtures module. This is actually a new module, a rewrite of the testtools one from scratch, that I started in November when my proposal was accepted.

I'll post the slides up tonight but the gist of the talk is how to use fixture to load and reference test data. Here is an updated summary:

One of the biggest challenges of testing is creating an environment akin to the real world that your code lives in. This talk will focus on general strategies for tackling the problem and then will move into specific examples using the fixture module for setting up and interacting with data stored in databases and other storage media.

The goal of the talk is to promote better code coverage in tests, more maintainable test suites, and techniques for easy and painless refactoring.

read article

testing just got easier (a few nose plugins)

While procrastinating on writing documentation for fixture I managed to code up a few nose plugins. (Seriously though, the fixture docs are nearing a stage of completion, I swear it!)

If you're not familiar with nose and its nosetests command for running test files, then it's worth checking out. Titus Brown even wrote a comprehensive introduction and usage guide.

The coolest part of course is that you can write plugins very easily (installable via easy_install even). Secondly, nosetests is for programmers and ... programmers are motivated to create software to make their life easier! Thus, here are a few useful plugins that myself and others have released lately:

  • nosetty
    • A plugin to run nosetests more interactively. The crux of this is to give you some convenient ways to edit code based on the traceback ... with your favorite editor, of course. I'm getting some great feedback from users on how to use it with different editors; all this is detailed on the recipes page.
  • nosetrim
    • A nose plugin that reports only unique exceptions. This is a small little thing I wrote to reduce the "blowup" effect when you do something stupid that causes the same exception to pollute many many tests. It still needs a little work.
  • spec
    • by Michal Kwiatkowski
    • Generate test description from test class/method names. I had never used testdox so this was a new concept for me. I think it's really clever and it's already gotten me to name my tests better ;) However, I'd like to see a little more flexibility out of it and I might try to present a patch or two.
  • outputsave
    • by Titus Brown
    • Save your stdout into files. Since I have a lot of tests that deal with data processing, they spew lots of messages and this plugin is perfect for managing that output. In fact, I liked it so much that I added a command for nosetty to open stdout captures.

On a separate but related note, in writing these plugins, I came up with a fairly easy way to make functional tests for plugins themselves. It's a combination of two classes, PluginTester and NoseStream; these will probably be part of nose 0.10 but if you want a sneak peek, take a looksee at the nosetrim test suite. The nosetty test suite also makes use of it, but that one is a little more confusing to read because it automates interaction with the subprocess.

read article

documentation for fixture module

In a mad sea of too-busy-to-blink I've managed to write documentation on the fixture module for python. If this interests anyone could he/she please let me know how the docs read? Too much? Too little? Hard to navigate? Examples too complicated? Thanks.

For reference, here is the main fixture project page

read article

Going to the GTAC (Google Test Automation Conference)

UPDATE: Here were my highlights (part 1)

I just like saying GTAC "G-TACK," it rolls off the tongue nicely. It's funny when you hear techie things said out loud that you've only seen on the screen. At work, I just interviewed a Software Engineer candidate who has used scotch to set up an automated recording/playback environment for testing an AJAX app. I said "yeah this and that, blah blah, WiSGIee, blah blah" and he said, "I've never heard of WiSGIee, what is that?"

"W-S-G-I"

"ohhhhh"

Then I delivered the whole WiSGIee ... scotch ... get it?! joke and that got an even bigger "ohhhhhhhh" :) ar ar, I KILL me. I stole the joke from Titus, of course.

I'm really psyched to be fortunate enough to get an invite to attend the GTAC conference (Google Test Automation Conference) in late August in NY. All the talks look amazing but these look especially interesting:

  • CustomInk Domain Specific Language for automating an AJAX based application, Douglas Sellers
  • Automated Testing for F-Secure's Linux/UNIX Anti-Virus Products, Risto Kumpulainen
  • Building a Flexible and Extensible Automation Framework Around Selenium, Apple Chow and Santiago Etchebehere
  • Specification-based Testing, Hadar Ziv
  • User Interface Functional Testing with AFTER (Automated Functional Testing Engine in Ruby), Sergio Pinon
  • Skoll Distributed Continuous Quality Assurance System, Adam Porter and Atif Memon
  • Mobile Wireless Test Automation, Julian Harty

Since it's free and only 150 spaces for attendees they made us write an essay on why we should attend! sheesh, had me sweating.

I also like how the conference is scheduled on Thursday and Friday leaving the weekend free for roaming the city. Besides seeing some friends I will most definitely be dropping in on the best hole-in-the-wall record shop for dancehall and roots reggae on the planet, Deadly Dragon Sound

read article

context_tools, bridging the gap between test methods and test classes?

Collin Winter just released context_tools, which strikes me as a step towards bridging the gap between test methods in nose and test classes in unittest/py.test/nose/etc. Currently, it can be hard to refactor tests that start as simple test methods when later you decide to use a class for a more complex setUp() function. Specifically, context_tools lets you do:

class Test(unittest.TestCase):
    setUp, tearDown = context_tools.test_with(foo_bar=my_manager())

    def test_foo(self):
        frobnicate(self.foo_bar)
        self.assertEqual(frob_count, 1)

(an example from the docs)

As for wrapping a def so that it receives an argument for accessing the data that was set up, I also had to do this for the fixture with_data() method.

It would be nice to replace that code with context_tools since it's much simpler! However, after a glance at the code I think it would break the ability to chain together already-decorated nose functions. In other words:

def my_custom_setup():
    whatever = 'fooz'

@nose.tools.with_setup(my_custom_setup)
@dbfixture.with_data(Foo, Bar)
def test_my_data_model(sample_data):
    # here we need my_custom_setup to run
    # but also with_data()
    assert Foo.selectfirst().id == sample_data.Foo.id

Then again, the code I am using internally to accomplish this (scroll to with_data()) is very ugly :(

read article

GTAC Highlights Part 1 - Selenium is Alive and Well, Model Based Testing Is Smart, And...

I just got back from the GTAC (Google Test Automation Conference) in New York and had a great time. It spanned 2 days and had a single track — this made it very laid back (no headaches trying to decide what talk to attend) and the timing was perfect. Especially since my traveling managed to dodge one of the worst summer storm systems to hit Chicago in at least a decade!

I've put together some highlights using the notes I took at each talk. Please bear in mind that this is not a comprehensive report on the conference and may contain misinformation (feel free to comment with corrections). The Google folk did an impressive job of posting video of most talks online within hours. A youtube search for GTAC lists them all. Or ... you can watch them from a playlist

Allen Hutchison - First Principles

  • Youtube video
  • There were about 300 applicants and only 150 were accepted. We had to apply with a short essay about why we should attend. This sounds elitist but actually it made it so the conference was full of people who really wanted to be there, which is very cool. Oh, and it was free so I guess something like this was necessary.
  • Allen mentions a test framework he has been working on, google-oaf (Google Open Automation Framework). If I heard correctly, this was used for testing Google Code?
  • A little about the Google Test Engineering dept.: There are teams of dedicated "Test Mercenaries" who will refactor a newish project's tests and hand them back.
  • Assured the audience that any demo that fails simply proves it's on the cutting edge!

Patrick Copeland - Keynote

  • Youtube video
  • Now works on the Test Engineering group at Google.
  • Talked about how his team brought the build/test process for iGoogle down from 77 min to 14 min.
  • A bit about using mock objects to simulate faults - the only way.
  • Concept of "Happy Path" tests — ones that are designed to pass for the way a user should be using the application.
  • Referenced James Reason's Swiss Cheese Model to talk about how interactive components can be full of undiscovered holes since the possibilities for their interaction are endless (again, talking about mock objects).
  • Google practices "Root Cause Analysis." That is, instead of a team getting stuck on a treadmill of workarounds they are given extra time and resources to fix the actual problem. This sounds like a no-brainer but I see the treadmill happen all the time. Some problems are really, really hard to track down and the business doesn't have time to slow down while the proper probing is done to fix them. When I pressed him more in Q&A it was more like what I thought: a trade-off / balancing act. Still, it's great to hear that from a high level their business is allowing this kind of "Do The Right Thing" to happen.
  • Talked about Google's Selenium farm (huge grid of machines to run Selenium tests).
  • Mentioned Eggplant, GUI tool for running Selenium tests?

Simon Stewart - Web Driver for Java

  • Youtube video
  • WebDriver is a Java library that can drive a real browser (currently Firefox, IE) or a fast in-memory implementation of one.
  • Proves that web driver (or Locomotive?) is on the cutting edge! The demo failed due to the fussy Locomotive app (Rails on OS X)
  • The demo wasn't really necessary though since the code did the talking. This is a very well architected library for web testing.
  • Especially interesting is how it suggests creating Page objects for each HTTP response. This objectifies all the page elements and makes for a more maintainable test suite since details that change often (click twice, toggle button "foo," etc.) can be declared separate from the test / assertion code. Hmm... I might steal this idea when I next write some twill tests ;)
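The Page object idea above is easy to sketch in Python. Everything here is illustrative (the FakeDriver, locator strings, and page classes are made up; WebDriver's real API is Java and differs):

```python
class FakeDriver:
    """Stand-in for a browser driver; records actions for illustration."""
    def __init__(self):
        self.actions = []

    def type(self, locator, text):
        self.actions.append(('type', locator, text))

    def click(self, locator):
        self.actions.append(('click', locator))

class HomePage:
    """The page you land on after logging in."""
    def __init__(self, driver):
        self.driver = driver

class LoginPage:
    """A Page object: locators and interaction details live here, so
    tests don't change when the markup does."""
    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type('username_field', user)
        self.driver.type('password_field', password)
        self.driver.click('submit_button')
        # navigation returns the next page's object
        return HomePage(self.driver)

driver = FakeDriver()
home = LoginPage(driver).login('kumar', 'secret')
assert isinstance(home, HomePage)
```

The maintainability win is that when the submit button moves or a click becomes a double-click, only LoginPage changes, never the assertions in the tests.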

Ryan Gerard and Ramya Venkataramu on Test Hygiene

  • Youtube video
  • Test Hygiene (is that the name?) is an in-house application written for their company that provides an interface to rate the effectiveness of tests for various products.
  • Pretty interesting idea: you have a community of developers who run each others tests and rate them on things like documentation level, sanity, effectiveness (are they testing the right thing?), and some other subjective qualities.
  • Keeping tests maintained and in good health is way harder than doing the same for production code.
  • Most of the Q&A seemed concerned with "gaming" the system; this didn't seem worthy of such lengthy discussion, IMHO.
  • One thing pointed out by Q&A is that the system doesn't seem to integrate well with historic code revisions. I think this will be a challenging feature to add.
  • Referenced James Surowiecki's The Wisdom Of Crowds
  • Talked about "easy grading system" (i.e. eBay?) — can someone explain to me what this means?

Matt Heusser & Sean McMillan - Interaction Based Testing

  • Youtube video
  • The "balanced breakfast:" a combination of mock objects for isolation and functional tests against real objects for interaction.
  • Everyone at the conference seemed to have a love/hate relationship with mock objects.
  • This talk made me want to use mock objects a little more (not a lot more!), perhaps because I don't really use them at all ;) They are definitely useful for deducing the root cause of a failing functional test.
  • One interesting idea they suggested was creating "facades" (groups of objects) so that a mock facade could be installed for a test. This suggests the idea of "switchable" mock objects — i.e. an "on" switch to make tests run faster, an off switch to make the tests run better, but this wasn't addressed in the talk and I forgot to comment on it.
  • They also talked about one possible strategy of focusing on "negative" testing in unit tests since functional tests naturally test for positive use cases. For example, one could just unit test a login method for error cases since all other functional tests would need to login successfully to run.
  • The Mars Rover bug: two different "units" failed to operate together correctly because one operated in English units and the other in metric units. Again, why mock objects alone aren't good enough!
  • Sean and I chatted later in the hotel lobby about the McMillan Clan Scottish tartan :)
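The "negative testing" strategy above can be illustrated with the standard library's unittest: the unit tests cover only the error cases of a toy login function, on the assumption that every functional test already has to log in successfully, covering the happy path. The login function here is made up for illustration:

```python
import unittest

def login(user, password):
    """Toy login function used only for illustration."""
    if not user:
        raise ValueError("user required")
    if password != 's3cret':
        raise ValueError("bad password")
    return True

class TestLoginErrors(unittest.TestCase):
    # Only the negative cases live here; functional tests
    # exercise the successful login path constantly.
    def test_empty_user_rejected(self):
        self.assertRaises(ValueError, login, '', 's3cret')

    def test_bad_password_rejected(self):
        self.assertRaises(ValueError, login, 'kumar', 'wrong')

# run the suite programmatically
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestLoginErrors)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

This division of labor keeps the unit suite small and focused on the branches that functional tests would otherwise never reach.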

Adam Porter & Atif Memon - Skoll DCQAS

  • Youtube video
  • This was by far the most engaging talk, I highly recommend watching the video. Actually, it was really two talks in one, so I broke it down accordingly.
  • Skoll DCQAS stands for Distributed Continuous Quality Assurance System.
  • Adam posted this link but the Skoll system isn't officially released yet. That link is to a package that supports Skoll to test the MySQL database product. They are hoping to release an official Skoll package soon.
  • Adam's talk

    • This is the problem: You have a software product that can be compiled on many different OS platforms and has many different configuration options. How do you test all possible combinations!
    • Skoll is used to build a matrix of all these combinations then manage a distributed farm of servers that run the product's test suite in each configuration/platform. This runs continuously, triggered by each revision made to the code.
    • In the first project he implemented this with (some CORBA system), it would have taken something like a full year to run the test suite once per each configuration combination. Instead, Skoll makes a map of all combinations in relation to one another and randomly picks points that are far away from each other. When one point fails, it starts digging further by testing the closest points.
    • Similar thing for the MySQL product: there are 110,000 possible configuration combinations for installing MySQL.
  • Atif's talk

    • Atif was using Skoll to analyze GUI applications and automatically generate test cases based on all the possible interactions of the GUI widgets.
    • This system is called GUITAR and is available for use.
    • An interesting thing was discovered: when you look at all the possible paths a user can take through GUI widgets, often the shortest path is that which is most used by real users. This makes a generated test case for such a path very useful.
    • Atif chose 4 popular SourceForge apps and ran them through the system. This resulted in a handful of bugs discovered for each app!
    • I'm very taken by this approach and may attempt something similar for AJAX web applications where widgets interact with each other in complex ways. However, it will probably be difficult to introspect common inputs/output of Javascript objects — they may have to be declared.
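The shortest-path observation can be sketched with a plain breadth-first search over a toy widget-interaction graph (the graph and widget names here are made up for illustration; GUITAR's actual model is more sophisticated):

```python
from collections import deque

# Toy event-flow graph: which widget interactions can follow which.
widget_graph = {
    'open_menu': ['file_dialog', 'prefs'],
    'file_dialog': ['choose_file'],
    'prefs': ['choose_file'],
    'choose_file': ['ok_button'],
    'ok_button': [],
}

def shortest_interaction_path(graph, start, goal):
    """BFS: the first path to reach the goal is the shortest one,
    which (per the talk) is often the path real users take."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_interaction_path(widget_graph, 'open_menu', 'ok_button'))
```

A generated test case for that path would then drive each interaction in order and assert the application's state along the way.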

Apple Chow & Santiago Etchebehere - Building an Automated Framework Around Selenium

  • Youtube video
  • The framework is called "Ringo" because their code is "...getting better all the time..." :)
  • Interacts with Selenium RC via Java
  • Why are Java developers so enamored with XML? All the declarative data structures used to wrap up Javascript objects are done with XML. This seems to me complete overkill and would be such a pain to edit all the time.
  • Very cool idea: UI objects are represented in test code as objects too so that the implementation for testing them (click button X, wait 2 seconds, etc) can be hidden. Much like Web Driver's approach this makes it so you don't have to alter test code when UI implementation changes.
  • Developed for Top Secret Google systems but showed an example of how one could test the Google Suggest interface. Showed how the search-as-you-type object would have a custom implementation for how it can be tested — loops through the text field, sends each character to Selenium RC, waits a second, sends another.

Doug Sellers - CustomInk Domain Specific Language for automating an AJAX based application

  • Youtube video is not up yet at the time of this writing.
  • Also a very excellent presentation, keep an eye out for this video.
  • Talks about writing a DSL that runs in Ruby and Selenium on Rails (not using RC, actually compiles HTML to run in Selenium)
  • This was built to test Custom Ink, a site that has a Javascript (ok, ok, AJAX) interface for customizing a t-shirt order — colors, typeface, graphics — before submitting.
  • The DSL is built specifically for the site, e.g. "add_graphic" might be an action. There are 60-80 commands. It goes so far as to say click_link "browse gallery" instead of hard coding div IDs / xpaths, which is pretty smart.
  • Sort of makes me want to try this in twill, that is, build a higher level language that boils down to click/wait twill commands in the background.
  • Allows again a good separation of test case logic and test implementation, the click/wait interacting with the interface.
  • Empowers developers who are not good with Javascript to write functional tests for the website.
  • Designed for business users to write tests? Not so much. This came up during Q&A.
  • This was addressed a little in Q&A but not completely: I wonder how helpful test case failures are and how easy they are to pinpoint. I imagine "order page has no button named submit" would induce a lot of head scratching. But, alas, this is always a problem with functional testing.
  • Advice: Use CSS selectors, not xpath, in Selenium, as xpath is way too slow in IE.
  • Can't test file uploads with Selenium except in Firefox.
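A site-specific DSL like this boils down to mapping a small vocabulary of high-level commands onto low-level click/wait primitives. Here is a minimal Python sketch; the command names click_link and add_graphic come from the talk, but the class, locator syntax, and recorded actions are all made up for illustration:

```python
class TShirtDesigner:
    """Tiny DSL sketch: high-level commands record the low-level
    browser actions they would perform."""
    def __init__(self):
        self.actions = []

    # low-level primitives the DSL compiles down to
    def _click(self, locator):
        self.actions.append('click %s' % locator)

    def _wait(self, seconds):
        self.actions.append('wait %s' % seconds)

    # high-level DSL commands
    def click_link(self, text):
        # locate by link text instead of hard-coded div IDs / xpaths
        self._click('link=%s' % text)
        self._wait(1)

    def add_graphic(self, name):
        self.click_link('browse gallery')
        self._click('graphic=%s' % name)

designer = TShirtDesigner()
designer.add_graphic('flaming skull')
print(designer.actions)
```

The payoff is the same separation the talk stressed: test cases read as business actions while the fragile click/wait choreography stays in one place.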

Risto Kumpulainen - Automated testing for F-Secure's Linux/UNIX Anti-Virus products

  • Youtube video is not up yet at the time of this writing.
  • They built a cool LED panel for their office that summarizes any failing tests in a grid of OS/configuration.
  • "A product of evolution, not intelligent design" — this might just be the quote of the conference.
  • One module named "moosetest" after the Saab crash test that simulates what happens when a car hits a moose!
  • Typical success story for automating a continuous build/test process when many combinations of configuration exist for the product.

Jennifer Bevan & Jason Huggins - Extending Selenium With Grid Computing

  • Youtube video
  • Jennifer has worked on Canyon (sp?), an open source data mining tool. This sounded interesting. Does anyone have any info about this? Google wasn't very helpful. And she works there! Sheesh.
  • Everyone complains that Selenium tests are slow but this is a constraint of the browser itself (duh). I also find it funny that everyone complains about this.
  • Used to test the Gmail UI at Google.
  • Runs python test code via Selenium RC in parallel against Firefox/IE in multiple machines. This greatly speeds up test execution time :)
  • A live demo of using Amazon's EC2 (Elastic Computing Cloud) to run tests in parallel like this. And it worked!
  • Can run multiple instances of Firefox (via user profiles) on a single machine. Cannot run multiple instances of IE on a single machine.
  • Chrome for Firefox bypasses common security issues, IEHTA for IE does it similarly. None of these are used in the Selenium implementation (yet?).
  • Ran into many issues where Selenium RC would deadlock over time, leak memory, browsers would time out unexpectedly, etc. Workaround right now is to restart the worker machine or search and destroy while looking into a fix. What's up with that Patrick?? What about Google's Root Cause Analysis? :)
  • This talk was great news for Selenium, I remember when using RC was a painful, scary endeavor due to its instability. Also, the thought of running over 200 Selenium tests made one think of comics like this (it is very slow).

That's all I have time for at the moment. Check back for Part 2 - coming soon!

read article

WSGI Intercept Has A New Home

A while ago I started using Titus Brown's wsgi_intercept module to test some XML services I had been working on. It was a great way to set up a stub response so that the tests never hit our real services. We even created a nifty little decorator that would look for an environment variable, $USE_REAL_SERVICE, then send the stub response when False or hit a "staged" version of the service when True. This allowed us to transparently perform a slower but more comprehensive test at will (currently running once a day from buildbot).

Although I was able to whip this up pretty quick, I had trouble importing from wsgi_intercept since all its pieces were in disparate modules. No biggie, so I bundled them all together and submitted a patch back to Titus. Me and my big mouth! One thing led to another and I've since agreed to take over maintenance of the project. But seriously, I'm happy to announce that wsgi_intercept has a new home and is available for download from the cheeseshop.

This first release tries to remain true to the interface that twill can work with, although a few import paths probably need changing and I'm not sure how easy it is to use a global version of wsgi_intercept with out-of-the-box twill.

I'm not thrilled about the current interface to wsgi_intercept so the next release will probably deprecate a lot of methods in favor of a cleaner interface ;) If anyone has ideas for improvement please let me know, preferably as a feature request ticket. Stay tuned. Oh, it also needs some love for python 2.5 so the first person to submit me an issue for all the broken pieces is automatically "awesome."

read article

Leapfrog Online is looking for some Django developers (Chicago area)

Leapfrog Online is looking to hire several Python developers to work on a Django site. If you know Python but not Django, this is an excellent opportunity to learn. If you know Django but want to learn how to use Python in other contexts, you'll get to do that, too. You'll be working on a high traffic website that hooks into several web services to help customers find Broadband Internet connectivity based on geo location (just US and Canada at the moment). Surrounding that basic function are all kinds of front-end and back-end features, services and systems.

The Software Engineer position is outlined in detail here.

You can send your resume to kumar.mcmillan@gmail.com or send it through the site above. These are full-time positions but if you'd rather work with us as a contractor that may be possible.

What Do We Do?

Leapfrog Online does performance-based customer acquisition, which translates to "we don't make money unless our clients make money." Because of this our software has to work well and we need to collect lots of structured, sensical data so our analysts can build the right marketing strategies. In a more abstract sense, the interesting challenges we face are building high-availability websites, fault-tolerant web services, pushing and pulling at hundreds of gigs of data, and accounting for tight security all along the way. As for the atmosphere, we're still a small company but we're not a struggling startup.

We Care About Open Source

We use open source tools that are right for the job. Currently we use Python or Ruby for websites / web services (Django, Pylons, Paste, Ruby on Rails), Python for backend tools, PostgreSQL for databases, and Trac for our projects. We use rich web interface libraries like Ext JS and we even wrote a distributed content system in Erlang because it was a good fit.

We have contributed patches to most of the projects listed above and maintain our own projects like nose, a few nose plugins, fixture, wsgi_intercept, and sv. We give talks at conferences like Pycon (see #24, #85 and #127). Also, Jeff Cohen (one of our senior developers) runs a popular blog called Softies on Rails and teaches and writes books about Rails.

Scrum: You'll Like It

We started with Extreme Programming a few years ago and have moved towards Scrum and other Agile methods as our approach to software development. We are constantly refining our process, keeping what works, discarding what doesn't. The company is on board with Scrum all the way up to the principles and we are always working to improve how Scrum is integrated holistically (a training program is in the works).

We think you'll like Agile for development. We have several teams of no more than three developers who work in two week "sprints." The sprints are planned out by product owners, developers, and project managers with user stories estimated in "story points" so that the business gets what it needs in order of priority. A sprint is exactly what it sounds like -- you just work! At the end of each sprint the work is released and you attend a retrospective meeting to see what was good, bad, and ugly, and how much work you did. Nothing is perfect so, of course, there are emergencies and derailments here and there but for the most part Scrum keeps things moving at a productive pace. As a developer, I find this discipline empowering and highly motivational.

You Must Test It

We are nutty about automated testing (in case you didn't notice). All code must have automated regression tests so if you're not familiar with this way of writing software, you will learn! We have a fairly involved continuous integration process running in buildbot (though probably moving to Bamboo soon) that performs several builds of each app, one with stable 3rd party libs, the others with trunk versions of 3rd party libs. As well as getting immediate feedback when a bad change is checked in, this also helps us pinpoint bugs in our dependencies before they are released. Our QA department is also different than most in that it consists of developers who are writing functional and/or integration tests in code and adding these to the build process. They are essentially software engineers like the rest of us.

Your Time Is Valuable

No one has a sleeping bag under their desk here; we work until 5 or 6 (weekdays only) to achieve a "sustainable pace." Most of us have been through the "death march" routine at other companies so we know it doesn't work long term. Scrum helps us maintain this ethic.

No Pigeonholes

While we are currently looking for Python/Django programmers, we are always interested in meeting people who think in Ruby/Rails, PHP 5 and other open source web technologies, too. We're especially interested if you're feeling ecumenical and want to learn about and work with, say, both Python and Ruby. You might only work in one language most of the time, but we think it is important for developers to stretch themselves and understand what tools are best for the job.

read article

Building Flash/ActionScript sites entirely in code and using FireBug for debugging

Way back in the olden days I used to dabble in Flash for building dynamic websites. This was incredibly painful, as I recall, because I couldn't stand the Flash IDE. It was clunky and hard to navigate, there weren't enough key commands, and the code editor was a sad version of notepad at best. And why use Flash anyway? JavaScript will do most of what you need for a dynamic site nowadays and I believe things like Google AdSense read the DOM for content filtering, which would render a Flash site useless. For me, the answer was audio playback. I wanted to play some audio on a site and for this Flash seems like the only option.

Digging deep into my vault of web dabblery I remembered getting so fed up with the Flash IDE that I wrote an entire site in ActionScript alone. I was quite proud of this as it was an early foray into programming and made me realize how much I enjoyed writing software in code. By some stroke of luck, this ActionScript-only site written circa 2002 is still online! OK, the intro was made with some timeline animation (someone else did that) but everything else is pure code, I swear.

(UPDATE: Me and my big mouth: a few months after I wrote this article the site changed; it's no longer the 2002 version :( But it looks and behaves similarly, so they might have reused some of my code.)

So there I was, planning out my soon-to-blow-your-mind Flash audio player, yet without Creative Suite 3, the latest Flash IDE. In fact, I didn't want it — $699.00, ouch! Not to mention: the pain of installing another massive, bloated application on my hard drive. A little Googling around later, I discovered Motion-Twin ActionScript 2 Compiler, or mtasc for short, written in OCaml. This was exactly what I was hoping for, an open source command line tool for compiling ActionScript code into SWF (Flash) movies without the need for any clunky IDE. And it works very nicely.

Unfortunately, ActionScript seems to have no error handling whatsoever (or I haven't figured out how to handle errors yet) so doing stupid stuff like calling a method that doesn't exist will not stop program execution as it should. Great. So I quickly became familiar with hooks that mtasc provides to the trace() command. Calling trace() writes to the Flash IDE's debug log. But since you aren't in a Flash IDE when testing a freshly compiled swf, mtasc allows you to define your own implementation of trace. Cool! So naturally, I created my own trace that writes to the FireBug log. FireBug, of course, is the ultimate webapp-debugging tool.

Here it is first added to the example app (from the mtasc tutorial):

class Tuto {
    static var app : Tuto;

    function Tuto() {
        // creates a 'tf' TextField size 800x600 at pos 0,0
        _root.createTextField("tf",0,0,0,800,600);
        // write some text into it
        _root.tf.text = "Hello world !";
        trace("debugging with trace rocks!");
    }

    // entry point
    static function main(mc) {
        // note that this seems to help mtasc find the trace method
        var f = new FireTrace();
        app = new Tuto();
    }
}

...saved to Tuto.as. And here is my trace implementation, saved to FireTrace.as:

class FireTrace {
    static function trace(msg, class_, file, line) {
        getURL('javascript:console.debug("[' + file + ':' + line + ' ' + class_ + '] ' + msg + '")');
    }
}

I compiled it like so:

mtasc -swf Tuto.swf -trace FireTrace.trace -main -header 800:600:20 Tuto.as

And used a simple HTML page to run it:

<html>
<head>
<script type="text/javascript">
if (!window.console || !console.firebug)
{
var names = ["log", "debug", "info", "warn", "error", "assert", "dir", "dirxml",
"group", "groupEnd", "time", "timeEnd", "count", "trace", "profile", "profileEnd"];
window.console = {};
for (var i = 0; i < names.length; ++i)
window.console[names[i]] = function() {}
}
</script>
</head>
<body>
<object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" name="TopPlayer" id="TopPlayer"
codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=5,0,0,0"
width="800" height="600">
<param name="movie" value="Tuto.swf">
<param name="quality" value="high">
<param name="loop" value="false">
<param name="play" value="true">
<!--<param name="bgcolor" value="#ffffff">-->
<param name="swliveconnect" value="true">
<embed src="Tuto.swf" quality="high" width="800" height="600"
type="application/x-shockwave-flash" name="TopPlayer" id="TopPlayer"
pluginspage="http://www.macromedia.com/shockwave/download/index.cgi?p1_prod_version=shockwaveflash"
swliveconnect="true">
</embed>
</object>
</body>
</html>

...and here is what it prints to my FireBug console:

[Tuto.as:11 Tuto::Tuto] debugging with trace rocks!

Notice that I had to add this line of code to the Tuto class:

var f = new FireTrace();

I don't know if this is a bug in mtasc but without it the custom trace seems like it never gets loaded. Also notice that I used the FireBug Lite JavaScript snippet for gracefully degrading console methods when FireBug isn't running.

After my romantic reunion with writing ActionScript, what has changed? The first thing I've noticed is that ActionScript 3 now looks exactly like Java, but I have no idea why. Luckily, it remains compatible with ActionScript 2, so you can leave out all the type declarations and everything still seems to work. Thus, it is pretty much a clone of JavaScript with some extra craziness you probably don't need. I'm still trying to figure out a better way to handle errors; putting debug statements everywhere is pretty lame. If anyone has a suggestion, let me know. At the least, I would like to get an error in the log when I stupidly try to call a non-existent method.

Now that I have a prototype, the next step is to write automated tests, of course! This will be a lot of fun as I love watching tests pass. At first glance, AsUnit (an ActionScript unit test runner) seems like it will be useful. I have no doubt that GUI testing Flash is no easier than it is in other languages, and thus not so easy to automate ... but one always needs unit tests.

UPDATE

Ah, looks like there is an even better way to call JavaScript by way of the ExternalInterface class:

ExternalInterface.call("console.log", variable1, variable2, variableN);

I haven't tried it, but this article suggests limiting your calls to the String, Number, Array, Object, and Boolean datatypes.

read article

Testing Google App Engine sites

The Google App Engine SDK sets up a fairly restrictive Python environment on your local machine to simulate their runtime restrictions. Roughly, this consists of no access to sockets (and thus a barely working httplib, urllib, etc., replaced by urlfetch.fetch()), the inability to use Python C modules, and no access to the file system. The SDK lets you run your app very easily using the dev_appserver.py script, but I thought: how the heck would I test my app without the dev server?

It turns out this was ridiculously easy. You just run about 10 lines of code at the start of your test suite. Of course, it might change with an SDK upgrade but here's what worked in my tests/__init__.py in today's SDK (besides sys.path munging):

import os
import logging

from google.appengine.tools import dev_appserver
from google.appengine.tools.dev_appserver_main import *

option_dict = DEFAULT_ARGS.copy()
option_dict[ARG_CLEAR_DATASTORE] = True

def setup():
    # path to the app root (one directory above tests/):
    root_path = os.path.join(os.path.dirname(__file__), '..')
    logging.basicConfig(
        level=option_dict[ARG_LOG_LEVEL],
        format='%(levelname)-8s %(asctime)s %(filename)s] %(message)s')
    # (I commented out the stuff that checked for SDK updates)
    config, matcher = dev_appserver.LoadAppConfig(root_path, {})
    dev_appserver.SetupStubs(config.application, **option_dict)

The setup() is called by nose once at the beginning of all tests, but if you weren't using nose you could put it at the module level or anywhere else that gets called exactly once.

The really cool thing is running tests like this will automatically add indexes for your queries (just like the SDK dev server will) so if you had good code coverage your app would be ready to go live.
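Those generated indexes land in index.yaml at the app root. For illustration only (the kind and property names here are hypothetical, not from my app), an entry for a query that filters on one property and orders by another might look like:

```yaml
indexes:
- kind: Greeting
  properties:
  - name: author
  - name: date
    direction: desc
```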

Next, you can test your URLs with something like WebTest like so:

from YOURAPP import application
from webtest import TestApp

def test_index():
    app = TestApp(application)
    response = app.get('/')
    assert 'Hello' in response

...where application is any WSGI app, like one defined in the Hello World tutorial:

from google.appengine.ext import webapp

class MainPage(webapp.RequestHandler):
    def get(self):
        self.response.out.write('Hello, webapp World!')

application = webapp.WSGIApplication(
    [('/', MainPage)], debug=True)

I haven't tried this for Django, but I'm pretty sure it would work as advertised, making your application object this:

application = django.core.handlers.wsgi.WSGIHandler()

And there you have it. The two modules used here aren't in the SDK but that's fine because you don't need to upload them to App Engine anyway. You can run easy_install nose WebTest and be ready to test. You were planning on testing your App Engine site, weren't you? :)

If you want to poke around in the test suite I made for pypi.appspot.com, the code is all in the Pypione project, specifically, the tests directory.

UPDATE

I have since realized that the above method does not actually simulate all of App Engine's restrictions; it just inserts some stub modules. Specifically, it doesn't restrict imports of many builtin modules, nor does it remove file() and open().

Jason Pellerin is working on a nose plugin, noseGAE (mentioned below in comments), that aims to get all this simulation accurate. It is coming along nicely.

read article

Fixture Goes 1.0 (Testing With Data In Python)

I've just pushed 1.0 — the I Heart Data release — of fixture, a Python module for loading and referencing test data. It is used heavily at my work in two test suites: one for the functional tests of an ETL framework and another for a Pylons + Elixir (SQLAlchemy) + Ext JS web application.

easy_install -U fixture

or download from PyPi

Highlights of This Release

Many thanks to those who submitted issues and patches, especially Manuel Aristarán, sokann, Brian Lee Hawthorne, and Jeffrey Cousens.

What's next? The fixture command that generates DataSet classes from a real database needs some attention. It doesn't work with SQLAlchemy 0.5 yet and could work better with SQLAlchemy based data models in general. Aside from that, fixture seems to have stabilized for my apps at work so I'll be waiting to hear from the community about what other areas to improve on.

read article

Chicago's Google App Engine Hack-A-Thon Recap

Today was Chicago's Google App Engine Hack-A-Thon and I managed to get some good work done. Well, I had planned to make packages more ephemeral on the PyPi mirror since it quickly hit the 500MB limit as is. But instead I decided to add Datastore support to the fixture module so that loading sample data is easier when testing an App Engine site. It was very easy to do so I spent most of the time writing some documentation with a complete example for how to go about testing an App Engine site with fixture, WebTest, nose, and NoseGAE.

Here it is: Using Fixture To Test A Google App Engine Site.

Big thanks to Marzia Niccolai and Mano Marks for trekking out to the Windy City, and to all the folks at the Chicago Google office who helped make it happen. As for the hackers, I think we were a shy bunch; there wasn't much show and tell afterwards. I know Ian Bicking got damn close to making a Datastore version of enough builtin file I/O methods for Moin Moin to run on App Engine. Maybe you'll hear about that soon. Someone did demo a cool iPhone app (literally passed his phone around). It analyzes geo location to provide users with a local message board. I didn't catch the name or a URL — can you drop a comment if you have the info? UPDATE: The name is Puppyo. Oh yeah, and Harper Reed was hacking on http://excla.im/, an App Engine site that is complemented by a jabber bot, allowing you to simply send an IM to post to twitter.

read article

Real Test Engineers Love Dots

"The world's largest particle collider passed its first major tests by firing two beams of protons in opposite directions around a 17-mile underground ring ... After a series of trial runs, two white dots flashed on a computer screen at 10:26 a.m. indicating that the protons had traveled clockwise along the full length of the 4 billion Swiss franc (US$3.8 billion) Large Hadron Collider."

— from Massive particle collider passes first key tests

What better way to indicate a passing test than a single dot?! Simple, effective. Mmmm, dots.

read article

T'is be'a Fixture 1.1.1 fer ya!

Y'aharrr me seabound mateys! Thar be'a fine gully of'a wind shakin' ye jigger today as m'announce a new release of Fixture, a python module fer loadin' and referencing test data. O'er yonder ye find a cap'n's Changelog fer ye royal subjects.

Riches abound! Booty abaft! Me could'a n'er dunnit not be'a the help o'a few fine pirates amidst ye Python vagabonds. Me best salute go t'a Tomas Holas, Alex Marandon, and bslesinsky fer'a ya bug reports and patches. Been'a some quiet waters thus far but much treasure huntin' lies o'er th'horizon.

Plunder Fixture 1.1.1!

(And it be'a fine day fer pirates, aye)

read article

The Future of Testing (GTAC 2008)

read article

Taming The Beast: How To Test an AJAX Application (GTAC 2008)

read article

Automated Model Based Testing of Web Applications (GTAC 2008)

read article

Are you hiring web developers?

read article

Chicago JavaScript Meetup: JS.Chi()

read article

Debugging doctests interactively

read article

Fudge: Another Python Mock Framework

read article

A new version of Fudge, mock object library for Python

read article

Fudge 0.9.2 Released

read article

PyCon Happenings

read article

Nose 0.11 released (nifty new features)

read article

Unit Testing JavaScript With JsTestDriver

read article

Fixture 1.3, Now With That Tangy Django Flavor

read article

Dark-Launching or Dark-Testing New Software Features

read article

Fudge Goes 1.0

read article

Testing Strategies for React and Redux

read article

Safer Unit Testing in React with TypeScript

read article

Spike, Test, Break, Fix

read article

If It's Not In CI It Doesn't Exist

read article