Thoughts on GTAC

Automated Model Based Testing of Web Applications (GTAC 2008)

Last year at the Google Test Automation Conference (GTAC) 2007, the talk by Atif was one of my favorites. He had been working on a system of model based testing for desktop GUI applications (affectionately called GUITAR) and hinted that he would be applying the work to web applications next. Now, at GTAC 2008, here is a taste of what his department has been up to. The talk was given by Oluwaseun Akinmade and Prof. Atif M. Memon, both at the University of Maryland.

The idea of automated model based testing hints at a future where software can be used to figure out how to test itself. That is, when software is modeled in a way that exposes inputs, outputs, event handlers, and end-points, then introspection can be done to find all possible interactions within an application and test them. Yes, it is one step away from artificial intelligence. This is fascinating to me but I think it still needs a lot of work. Atif is asking for as much feedback as possible from industry professionals to find out how this can best be used in the real world.

Here are my notes from the talk ...

read article

Taming The Beast: How To Test an AJAX Application (GTAC 2008)

This was one of the talks at GTAC 2008 that I was most looking forward to before the conference. It was excellent; I was not let down. The talk was given by Markus Clermont and John Thomas, who work at Google. Since the talk was right after lunch they decided to take a Q & A approach. It sort of went off on tangents at points but overall the format seemed to work.

In my own work I've been struggling to maintain a now-bloated test suite for an AJAX website, but their approach made something click in my head. I'm already working on a refactoring plan.

Here is my abbreviated interpretation of the talk ...

read article

The Future of Testing (GTAC 2008)

Google Test Automation Conference (GTAC) is my all-time favorite conference. It's free. It's on a single track — this means you don't miss any talks and everyone experiences the same journey of thought. Also, since you have to apply for admittance with a short essay, everyone who attends is really passionate about testing. It's still sort of "underground" which keeps it small and very social.

Last year, I made an attempt to live-blog summaries of the GTAC talks but I never made it past part 1. We'll see how far I get this year; stay tuned.

The videos for 2008 aren't online yet, but check YouTube often because last year they were up in less than a day.

The Future of Testing was the first talk of the GTAC 2008 conference on Thursday Oct 23rd given by James A. Whittaker, a very entertaining speaker who works for Microsoft. His talk was excellent and I highly recommend keeping a lookout for the video. Here are my notes...

read article

GTAC Highlights Part 1 - Selenium is Alive and Well, Model Based Testing Is Smart, And...

I just got back from the GTAC (Google Test Automation Conference) in New York and had a great time. It spanned 2 days and had a single track — this made it very laid back (no headaches trying to decide what talk to attend) and the timing was perfect. Especially since my traveling managed to dodge one of the worst summer storm systems to hit Chicago in at least a decade!

I've put together some highlights using the notes I took at each talk. Please bear in mind that this is not a comprehensive report on the conference and may contain misinformation (feel free to comment with corrections). The Google folk did an impressive job of posting video of most talks online within hours. A YouTube search for GTAC lists them all. Or ... you can watch them from a playlist.

Allen Hutchison - First Principles

  • Youtube video
  • There were about 300 applicants and only 150 were accepted. We had to apply with a short essay about why we should attend. This sounds elitist, but it actually meant the conference was full of people who really wanted to be there, which is very cool. Oh, and it was free, so I guess something like this was necessary.
  • Allen mentions a test framework he has been working on, google-oaf (Google Open Automation Framework). If I heard correctly, this was used for testing Google Code?
  • A little about the Google Test Engineering dept.: There are teams of dedicated "Test Mercenaries" who will refactor a newish project's tests and hand them back.
  • Assures the audience that any demo that fails means it's on the cutting edge!

Patrick Copeland - Keynote

  • Youtube video
  • Now works in the Test Engineering group at Google.
  • Talked about how his team brought the build/test process for iGoogle down from 77 min to 14 min.
  • A bit about using mock objects to simulate faults - the only way.
  • Concept of "Happy Path" tests — ones that are designed to pass for the way a user should be using the application.
  • Referenced James Reason's Swiss Cheese Model to talk about how interactive components can be full of undiscovered holes since the possibilities for their interaction are endless (again, talking about mock objects).
  • Google practices "Root Cause Analysis." That is, instead of a team getting stuck on a treadmill of workarounds, they are given extra time and resources to fix the actual problem. This sounds like a no-brainer, but I see the treadmill happen all the time. Some problems are really, really hard to track down, and the business doesn't have time to slow down while the proper probing is done to fix them. When I pressed him more in Q&A it was more like what I thought: a trade-off / balancing act. Still, it's great to hear that from a high level their business allows this kind of "Do The Right Thing" to happen.
  • Talked about Google's Selenium farm (huge grid of machines to run Selenium tests).
  • Mentioned Eggplant, a GUI tool for running Selenium tests?

Simon Stewart - Web Driver for Java

  • Youtube video
  • WebDriver is a Java library that can drive a real browser (currently Firefox, IE) or a fast in-memory implementation of one.
  • Proves that WebDriver (or Locomotive?) is on the cutting edge! The demo failed due to the fussy Locomotive app (Rails on OS X).
  • The demo wasn't really necessary though since the code did the talking. This is a very well architected library for web testing.
  • Especially interesting is how it suggests creating Page objects for each HTTP response. This objectifies all the page elements and makes for a more maintainable test suite since details that change often (click twice, toggle button "foo," etc.) can be declared separate from the test / assertion code. Hmm... I might steal this idea when I next write some twill tests ;)
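
The Page object idea is worth a quick sketch. Here's a minimal Python version (the page classes, locators, and the FakeDriver stub are my own illustration, not WebDriver's actual API): tests only call intent-level methods, and locator details stay inside the page class.

```python
class FakeDriver:
    """Stand-in for a real browser driver; records actions for the demo."""
    def __init__(self):
        self.actions = []
    def type(self, locator, text):
        self.actions.append(("type", locator, text))
    def click(self, locator):
        self.actions.append(("click", locator))

class HomePage:
    def __init__(self, driver):
        self.driver = driver

class LoginPage:
    """Wraps one page; if the UI changes, only this class changes."""
    def __init__(self, driver):
        self.driver = driver
    def log_in(self, username, password):
        self.driver.type("css=#username", username)
        self.driver.type("css=#password", password)
        self.driver.click("css=#submit")
        return HomePage(self.driver)  # navigation hands back the next page

driver = FakeDriver()
home = LoginPage(driver).log_in("kumar", "secret")
print(type(home).__name__)   # HomePage
print(len(driver.actions))   # 3
```

If the login form later needs an extra click, only LoginPage.log_in changes; every test that logs in stays untouched.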

Ryan Gerard and Ramya Venkataramu on Test Hygiene

  • Youtube video
  • Test Hygiene (is that the name?) is an in-house application written for their company that provides an interface to rate the effectiveness of tests for various products.
  • Pretty interesting idea: you have a community of developers who run each other's tests and rate them on things like documentation level, sanity, effectiveness (are they testing the right thing?), and some other subjective qualities.
  • Keeping tests maintained and in good health is way harder than doing the same for production code.
  • Most of the Q&A seemed concerned with "gaming" the system; this didn't seem worthy of such lengthy discussion, IMHO.
  • One thing pointed out by Q&A is that the system doesn't seem to integrate well with historic code revisions. I think this will be a challenging feature to add.
  • Referenced James Surowiecki's The Wisdom Of Crowds
  • Talked about "easy grading system" (i.e. eBay?) — can someone explain to me what this means?

Matt Heusser & Sean McMillan - Interaction Based Testing

  • Youtube video
  • The "balanced breakfast:" a combination of mock objects for isolation and functional tests against real objects for interaction.
  • Everyone at the conference seemed to have a love/hate relationship with mock objects.
  • This talk made me want to use mock objects a little more (not a lot more!), perhaps because I don't really use them at all ;) They are definitely useful for deducing the root cause of a failing functional test.
  • One interesting idea they suggested was creating "facades" (groups of objects) so that a mock facade could be installed for a test. This suggests the idea of "switchable" mock objects — i.e. an "on" switch to make tests run faster, an off switch to make the tests run better, but this wasn't addressed in the talk and I forgot to comment on it.
  • They also talked about one possible strategy of focusing on "negative" testing in unit tests since functional tests naturally test for positive use cases. For example, one could just unit test a login method for error cases since all other functional tests would need to login successfully to run.
  • The Mars Rover bug: two different "units" failed to operate together correctly because one operated in English units and the other in metric units. Again, why mock objects alone aren't good enough!
  • Sean and I chatted later in the hotel lobby about the McMillan Clan Scottish tartan :)
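
Their negative-testing strategy is easy to sketch in Python. The authenticate function below is a hypothetical example of mine, not from the talk: since every functional test already has to log in successfully, the unit tests only need to cover the error cases.

```python
class AuthError(Exception):
    pass

def authenticate(username, password, user_db):
    """Raise AuthError for every failure mode; return True on success."""
    if not username or not password:
        raise AuthError("missing credentials")
    if username not in user_db:
        raise AuthError("unknown user")
    if user_db[username] != password:
        raise AuthError("bad password")
    return True

users = {"kumar": "secret"}

# Only the failure modes get unit tests -- the happy path is implicitly
# covered by every functional test that logs in first.
for bad_user, bad_pass in [("", "x"), ("ghost", "x"), ("kumar", "wrong")]:
    try:
        authenticate(bad_user, bad_pass, users)
        raised = False
    except AuthError:
        raised = True
    assert raised, (bad_user, bad_pass)
print("all negative cases raise AuthError")
```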

Adam Porter & Atif Memon - Skoll DCQAS

  • Youtube video
  • This was by far the most engaging talk; I highly recommend watching the video. Actually, it was really two talks in one, so I broke it down accordingly.
  • Skoll DCQAS stands for Distributed Continuous Quality Assurance System.
  • Adam posted this link but the Skoll system isn't officially released yet. That link is to a package that supports Skoll to test the MySQL database product. They're hoping to release an official Skoll package soon.
  • Adam's talk

    • This is the problem: you have a software product that can be compiled on many different OS platforms and has many different configuration options. How do you test all possible combinations?
    • Skoll is used to build a matrix of all these combinations then manage a distributed farm of servers that run the product's test suite in each configuration/platform. This runs continuously, triggered by each revision made to the code.
    • In the first project he implemented this with (some CORBA system), it would have taken something like a full year to run the test suite once per each configuration combination. Instead, Skoll makes a map of all combinations in relation to one another and randomly picks points that are far away from each other. When one point fails, it starts digging further by testing the closest points.
    • Similar thing for the MySQL product: there are 110,000 possible configuration combinations for installing MySQL.
  • Atif's talk

    • Atif was using Skoll to analyze GUI applications and automatically generate test cases based on all the possible interactions of the GUI widgets.
    • This system is called GUITAR and is available for use.
    • An interesting thing was discovered: when you look at all the possible paths a user can take through GUI widgets, often the shortest path is that which is most used by real users. This makes a generated test case for such a path very useful.
    • Atif chose 4 popular SourceForge apps and ran them through the system. This resulted in a handful of bugs discovered for each app!
    • I'm very taken by this approach and may attempt something similar for AJAX web applications where widgets interact with each other in complex ways. However, it will probably be difficult to introspect common inputs/output of Javascript objects — they may have to be declared.
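
Here's a rough Python sketch of the sampling idea as I understood it (this is my own reconstruction, not Skoll's code): treat each configuration as a point, greedily test points far from anything already tested, and when a point fails, probe its nearest neighbors.

```python
from itertools import product

def distance(a, b):
    """Hamming distance between two configurations."""
    return sum(x != y for x, y in zip(a, b))

def pick_farthest(candidates, tested):
    """Greedily pick the candidate farthest from everything tested so far."""
    return max(candidates, key=lambda c: min(distance(c, t) for t in tested))

def neighbors(config, all_configs):
    """Configurations one option-flip away -- probed when a point fails."""
    return [c for c in all_configs if distance(config, c) == 1]

# Four binary options gives 2**4 = 16 possible configurations.
configs = list(product([0, 1], repeat=4))
tested = [configs[0]]
while len(tested) < 4:
    remaining = [c for c in configs if c not in tested]
    tested.append(pick_farthest(remaining, tested))

print(tested[1])                            # (1, 1, 1, 1)
print(len(neighbors(configs[0], configs)))  # 4
```

With 110,000 MySQL combinations the point is the same: a handful of far-apart samples covers the space cheaply, and failures trigger a local search instead of a full sweep.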

Apple Chow & Santiago Etchebehere - Building an Automated Framework Around Selenium

  • Youtube video
  • The framework is called "Ringo" because their code is "...getting better all the time..." :)
  • Interacts with Selenium RC via Java
  • Why are Java developers so enamored with XML? All the declarative data structures used to wrap up Javascript objects are done with XML. This seems to me complete overkill and would be such a pain to edit all the time.
  • Very cool idea: UI objects are represented in test code as objects too so that the implementation for testing them (click button X, wait 2 seconds, etc) can be hidden. Much like Web Driver's approach this makes it so you don't have to alter test code when UI implementation changes.
  • Developed for Top Secret Google systems but showed an example of how one could test the Google Suggest interface. Showed how the search-as-you-type object would have a custom implementation for how it can be tested — loops through the text field, sends each character to Selenium RC, waits a second, sends another.
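
The search-as-you-type technique they showed boils down to something like this (sketched with a stub in place of a real Selenium RC session; type_slowly and the locator are my own names): send one character at a time and pause so the suggest box has time to react.

```python
import time

class FakeSelenium:
    """Stub standing in for a Selenium RC session."""
    def __init__(self):
        self.sent = []
    def type_keys(self, locator, text):
        self.sent.append(text)

def type_slowly(selenium, locator, query, delay=0.0):
    """Send one character at a time, pausing between keystrokes."""
    for ch in query:
        selenium.type_keys(locator, ch)
        time.sleep(delay)  # a real run would wait a second between keys

s = FakeSelenium()
type_slowly(s, "css=#search", "gtac")
print(s.sent)  # ['g', 't', 'a', 'c']
```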

Doug Sellers - CustomInk Domain Specific Language for automating an AJAX based application

  • Youtube video is not up yet at the time of this writing.
  • Also a very excellent presentation, keep an eye out for this video.
  • Talks about writing a DSL that runs in Ruby and Selenium on Rails (not using RC; it actually compiles HTML to run in Selenium)
  • This was built to test Custom Ink, a site that has a Javascript (ok, ok, AJAX) interface for customizing a t-shirt order — colors, typeface, graphics — before submitting.
  • The DSL is built specifically for the site, e.g. "add_graphic" might be an action. There are 60-80 commands. Goes so far as to say click_link "browse gallery" instead of hard-coding div IDs / xpaths; this is pretty smart.
  • Sort of makes me want to try this in twill, that is, build a higher level language that boils down to click/wait twill commands in the background.
  • Allows again a good separation of test case logic and test implementation, the click/wait interacting with the interface.
  • Empowers developers who are not good with Javascript to write functional tests for the website.
  • Designed for business users to write tests? Not so much. This came up during Q&A.
  • This was addressed a little in Q&A but not completely: I wonder how easy it is to pinpoint failures in a test case. I imagine "order page has no button named submit" would induce a lot of head scratching. But, alas, this is always a problem with functional testing.
  • Advice: use CSS selectors, not XPath, in Selenium; XPath is way too slow in IE.
  • Can't test file uploads with Selenium except in Firefox.
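
To make the DSL idea concrete, here's a toy version in Python (the real thing is Ruby, and ShirtDesigner / add_graphic are just illustrative names of mine): each site-specific command hides the locators and the click/wait dance.

```python
class FakeBrowser:
    """Stub recording the low-level click/wait calls."""
    def __init__(self):
        self.log = []
    def click(self, locator):
        self.log.append(("click", locator))
    def wait_for_page(self):
        self.log.append(("wait",))

class ShirtDesigner:
    """Site-specific commands; tests read like user actions."""
    def __init__(self, browser):
        self.b = browser
    def click_link(self, text):
        # locate the link by its visible text, not a hard-coded div id
        self.b.click("link=%s" % text)
        self.b.wait_for_page()
    def add_graphic(self, name):
        self.click_link("browse gallery")
        self.b.click("css=img[alt='%s']" % name)

browser = FakeBrowser()
ShirtDesigner(browser).add_graphic("flaming skull")
print(len(browser.log))  # 3: click link, wait, click graphic
```

A test written against ShirtDesigner never mentions a locator, which is exactly the separation of test logic from test implementation the talk was after.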

Risto Kumplainin - Automated testing for F-Secure's Linux/UNIX Anti-Virus products

  • Youtube video is not up yet at the time of this writing.
  • They built a cool LED panel for their office that summarizes any failing tests in a grid of OS/configuration.
  • "A product of evolution, not intelligent design" — this might just be the quote of the conference.
  • One module named "moosetest" after the Saab crash test that simulates what happens when a car hits a moose!
  • Typical success story for automating a continuous build/test process when many combinations of configuration exist for the product.

Jennifer Bevan & Jason Huggins - Extending Selenium With Grid Computing

  • Youtube video
  • Jennifer has worked on Canyon (sp?), an open source data mining tool. This sounded interesting. Does anyone have any info about this? Google wasn't very helpful. And she works there! Sheesh.
  • Everyone complains that Selenium tests are slow, but this is a constraint of the browser itself (duh). I also find it funny that everyone complains about this.
  • Used to test the Gmail UI at Google.
  • Runs Python test code via Selenium RC in parallel against Firefox/IE on multiple machines. This greatly speeds up test execution time :)
  • A live demo of using Amazon's EC2 (Elastic Computing Cloud) to run tests in parallel like this. And it worked!
  • Can run multiple instances of Firefox (via user profiles) on a single machine. Cannot run multiple instances of IE on a single machine.
  • Chrome mode for Firefox bypasses common security issues; IEHTA does the same for IE. Neither is used in the Selenium implementation (yet?).
  • Ran into many issues where Selenium RC would deadlock over time, leak memory, browsers would time out unexpectedly, etc. Workaround right now is to restart the worker machine or search and destroy while looking into a fix. What's up with that Patrick?? What about Google's Root Cause Analysis? :)
  • This talk was great news for Selenium, I remember when using RC was a painful, scary endeavor due to its instability. Also, the thought of running over 200 Selenium tests made one think of comics like this (it is very slow).
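
The parallel-execution idea can be sketched with nothing more than a work queue and threads (the run_test stub stands in for driving a real Selenium RC session, and the one-worker-per-browser mapping is my assumption):

```python
import queue
import threading

def run_test(name):
    """Stand-in for driving one full Selenium RC test."""
    return (name, "PASS")

def worker(tests, results):
    """Pull tests off the shared queue until it is empty."""
    while True:
        try:
            name = tests.get_nowait()
        except queue.Empty:
            return
        results.append(run_test(name))  # list.append is thread-safe in CPython

tests = queue.Queue()
for i in range(8):
    tests.put("test_%d" % i)

results = []
workers = [threading.Thread(target=worker, args=(tests, results))
           for _ in range(4)]  # e.g. four remote browsers
for w in workers:
    w.start()
for w in workers:
    w.join()

print(len(results))  # 8
```

Wall-clock time shrinks by roughly the number of browsers you can throw at the queue, which is why 200-plus Selenium tests become bearable on a grid.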

That's all I have time for at the moment. Check back for Part 2 - coming soon!

read article