Monday, August 26, 2013

Call for Testing: Mir with multi-monitor

As mentioned in my last post, Mir is one of the biggest changes coming in 13.10. With feature freeze happening this week, it's time to amp up our testing engines once more to test the final features and help land Mir in the archive.

The Mir team has put together both a ppa and a wiki page containing all the information you need to help with testing. The testing window closes in 2 days, on August 28th, just in time for feature freeze. The biggest change for Mir is the inclusion of multi-monitor support, so that's the focus for this testing. Here are the details you need to know.

What?
Help test Mir on your current system, using ubuntu saucy and the Mir team ppa.

When?
Now through August 28th.

How?
The full instructions for installing the ppa, running the tests, and reporting the results can be found on this wiki page. Results are reported on this page or via the package tracker testing page.

Thank you for your contributions! Good luck and Happy Testing Everyone!

Monday, August 19, 2013

Automated Testing in ubuntu

With all the automated testing buzz in the ubuntu quality world this cycle, I wanted to speak a little about why we're doing the work, and what the output of that work looks like.

The Why
So, why? Why go through the trouble of writing tests for the code you write? I'll save everyone a novel and avoid re-hashing the debate about whether testing is a proper use of development time or not. Simply put, for developers, it will prevent bugs from reaching your codebase, alleviate support and maintenance burdens, and give you more confidence and feedback during releases. If you want your application to have that extra polish and completeness, testing needs to be part of the process. For users, the positives are similar and simple. Using well tested software prevents regressions and critical bugs from affecting you. Every bug found during testing is a bug you don't have to deal with as an end user. Incidentally, if you like finding bugs in software, we'd love to have you on the team :-)

The How
In general, three technologies are being used for automated testing: Autopilot, Autopkg, and QML tests. Click through to learn more about writing tests for each.
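
To give you a feel for the shape of one of these, here's a minimal Autopilot sketch; the application and type names below are placeholders for illustration, not one of our real testcases:

    from autopilot.matchers import Eventually
    from autopilot.testcase import AutopilotTestCase
    from testtools.matchers import Equals


    class CalculatorTests(AutopilotTestCase):
        """Launch an application, interact with it, and assert on the result."""

        def setUp(self):
            super(CalculatorTests, self).setUp()
            # launch_test_application starts the app under test and returns
            # a queryable proxy for its widget tree.
            self.app = self.launch_test_application('gnome-calculator')

        def test_window_appears(self):
            # 'GtkWindow' is illustrative; the real type name depends on
            # the application's toolkit.
            window = self.app.select_single('GtkWindow')
            self.assertThat(window.visible, Eventually(Equals(True)))
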
Any color is good, as long as it's green
The Results
The QA Dashboard
The dashboard holds most of the test results, and gives you a much nicer view of them than just looking at a jenkins build screen. Take some time to explore all of the different tabs to see what's available. I wanted to highlight just a couple of areas in particular.
  • Autopkg Tests 
    • These testcases run at build time and are a great fit for low-level libraries and integral parts of ubuntu. Check out the guide for help on contributing a testcase or two. Regressions have been spotted and fixed before even hitting the ubuntu archive.
  • Smoke Tests for ubuntu touch
    • These are some of the newest tests to come online; they display the results for the ubuntu phone images and applications, including the core apps, which are written entirely by community members. Want to know how well an image is running on your device? This is the page to find out.
Ubiquity Installer Testing
Curious about how well the installer is working? Yep, we've got tests for those as well. The tests are managed via the ubiquity project on launchpad.

Ubuntu Desktop Tests
Wondering how well things like nautilus, gedit and your other favorite desktop applications are doing? Indeed, thanks to our wonderful quality community, we've got tests for those as well. The tests can be found here.

The Next Steps
We want to continue to grow and expand all areas of testing. If you've got the skills or the willingness to learn, try your hand at helping improve our automated testcases. There's a wide variety to choose from, and all contributions are most welcome!

Sorry robot, all our tests are belong to us
Don't have those skills? Don't worry, not only are machines not taking over the world, they aren't taking over testing completely either. We need your brainpower to help test other applications and to test deeper than a machine can. Join us for our cadence weeks and general calls for testing and sample new software while you help ubuntu. In fact, we're testing this week, focusing on Mir.

Finally, remember your contributions (automated or manual) help make ubuntu better for us all! Thank you!

Friday, August 16, 2013

Feature freeze coming? Let's test!

We're approaching feature freeze at a quickening pace, and thus the next few weeks are rather important to us as a testing community. 13.10 lands in October, which is coming up fast (where did the summer go?!).

What?
Let's run through some manual tests for ubuntu and flavors. I'd like to ask for a special focus to be given to Mir/xMir. We plan to have a rigorous test of the package again in about a week once all features have landed. In the interim, let's try and catch any additional bugs.

When?
This week: Saturday, August 17th through Saturday, August 24th. It's week 5 of our cadence tests.

Ok, I'm sold, what do I need to do?
Execute some testcases against the latest version of saucy; in particular the xMir test.

Got any instructions?
You bet, have a look at the Cadence Week testing walkthrough on the wiki, or watch it on youtube. If you get stuck, contact us.

Where are the tests?
You can find the Mir test in its own milestone here. Remember to read and follow the installation instructions link at the top of the page!
The rest of the applications and packages can be found here.

I don't want to run/install the unstable version of ubuntu, can I still help?
YES! Boot up the livecd on your hardware and run the live session, or use a virtual machine to test (install ubuntu or use a live session). The video demonstrates using a virtual machine booting into a live session to run through the tests. For the Mir/xMir tests, however, we'd really like results from real hardware.

But, virtual machines are scary; I don't know how to set one up!
There's a tool called testdrive that makes setting up a vm with the ubuntu development release a point-and-click operation. You can then use it to test. Seriously, check out the video and the walkthrough for more details.

Thank you for your contributions! Good luck and Happy Testing Everyone! 

Wednesday, August 7, 2013

Autopilot best practices

I've now had the pleasure of writing autopilot tests for about 9 months, and along the way I've learned or been taught some of the things that are important to remember.

Use Eventually
The Eventually matcher provided by autopilot is your best friend. Use it liberally to ensure your code doesn't fail because of a millisecond difference at runtime. Eventually will retry your assert until it's true or it times out. When combined with examining or selecting an object, Eventually will ensure a test failure is a true failure and not a timing issue. Also remember you can use a lambda if you need to make a plain function call assert-worthy.
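
For instance, here's a quick sketch of both patterns; the object names are invented, and app is the proxy returned by launch_test_application inside a test method:

    from autopilot.matchers import Eventually
    from testtools.matchers import Equals, NotEquals

    # Eventually retries the comparison until it passes or times out,
    # instead of failing on a millisecond of lag:
    status = app.select_single('Label', objectName='statusLabel')
    self.assertThat(status.text, Eventually(Equals('Saved')))

    # select_single is a plain call, so wrap it in a lambda to make it
    # assert-worthy -- Eventually will then retry the selection too:
    self.assertThat(lambda: app.select_single('Dialog', objectName='saveDialog'),
                    Eventually(NotEquals(None)))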

Assert more!
Every test can use more asserts -- even my own! Timing issues can rear their ugly head again when you fail to assert after performing an action; a sketch putting these habits together follows the list below.
  • Every time you grab an object, assert you received the object
    • You can do this by asserting the object NotEquals(None); remember to wrap it in Eventually: Eventually(NotEquals(None))!
  • Every time you interact with the screen, try an assert to confirm your action
    • Click a button, assert
    • Click a field to type, assert you have focus first
      • You can do this by using the .focus property and asserting it to be True
        • Finished typing? Assert your text matches what you typed
        • You can do this by using the .text property and asserting it to be Equal to your input
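
Putting those habits together, an interaction might look like this sketch -- the object names are invented, and I'm assuming the keyboard and mouse helpers that AutopilotTestCase provides on the desktop:

    from autopilot.matchers import Eventually
    from testtools.matchers import Equals, NotEquals

    # Grabbed an object? Assert you actually received it.
    self.assertThat(lambda: app.select_single('TextField', objectName='nameField'),
                    Eventually(NotEquals(None)))
    name_field = app.select_single('TextField', objectName='nameField')

    # Clicked a field to type into? Assert it has focus first.
    self.mouse.click_object(name_field)
    self.assertThat(name_field.focus, Eventually(Equals(True)))

    # Finished typing? Assert the text matches what you typed.
    self.keyboard.type('Hello world')
    self.assertThat(name_field.text, Eventually(Equals('Hello world')))
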
Don't use strings, use objectNames
We all get lazy and just issue selects with English label names. These will break when run under a non-English locale. They will also break when we decide to update the string to something more verbose or just different. Don't do it! That includes things like tab names, button names and label names -- all common rulebreakers.
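
For example, the difference looks something like this (names invented for illustration):

    # Fragile: breaks under a non-English locale, and again whenever the
    # label text gets reworded.
    button = app.select_single('Button', text='Save')

    # Robust: objectName is set by the developer and survives both
    # translation and copy changes.
    button = app.select_single('Button', objectName='saveButton')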

Use object properties
They will help you add more asserts about what's happening. For instance, you can use the .animating property or .moving property (if they exist) to wait out animations before you continue your actions! I already mentioned the .focus property above, and you might find things like .selected, .state, .width, .height, .text, etc. to be useful while writing your test. Check out your objects and see what might be helpful to you.
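
A quick sketch, assuming the object in question actually exposes an animating property (the object names are made up):

    from autopilot.matchers import Eventually
    from testtools.matchers import Equals

    # Wait for the page transition to settle before interacting further.
    page = app.select_single('PageStack', objectName='pageStack')
    self.assertThat(page.animating, Eventually(Equals(False)))

    # Not sure what an object exposes? get_properties() on a proxy object
    # lists everything available to assert on.
    print(page.get_properties())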

Interact with objects, not coordinates
Whenever possible, you should ensure your application interactions specify an object, not coordinates. If the UI changes or the screen size changes, your test will fail if you're using coordinates. If your interaction emulates something like a swipe, drag, or pinch, make sure you use relative coordinates based upon the current screen size.
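
Here's a sketch of a relative swipe using autopilot's Display class; I'm assuming the touch helper that AutopilotTestCase sets up:

    from autopilot.display import Display

    # Derive coordinates from the current screen size rather than
    # hard-coding pixels.
    display = Display.create()
    width = display.get_screen_width()
    height = display.get_screen_height()

    # Swipe from the bottom edge up to the middle of the screen.
    self.touch.drag(width // 2, height - 1, width // 2, height // 2)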

Use the ubuntusdk emulator if you are writing a ubuntusdk application
It will save you time, and ensure your testcase gets updated if any bugs or changes happen in the sdk, all without you having to touch your code. Check it out!
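
Here's a rough sketch of the wiring, using the ubuntuuitoolkit emulators module; the qml file path and the names are placeholders:

    from autopilot.testcase import AutopilotTestCase
    from ubuntuuitoolkit import emulators


    class MyAppTests(AutopilotTestCase):

        def setUp(self):
            super(MyAppTests, self).setUp()
            # emulator_base asks autopilot to wrap proxy objects in the
            # SDK's emulator classes.
            self.app = self.launch_test_application(
                'qmlscene', 'my-app.qml',
                emulator_base=emulators.UbuntuUIToolkitEmulatorBase)

        def test_toolbar_opens(self):
            main_view = self.app.select_single(emulators.MainView)
            # The emulator knows the right way to reveal the toolbar, and
            # it gets updated alongside the sdk itself.
            toolbar = main_view.open_toolbar()
            self.assertIsNotNone(toolbar)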

Read the documentation best practices
Yes, I know documentation is boring. But at least skim over this page on writing good tests. There are a lot of useful tidbits lurking in there. The gist is that your tests should be self-contained, repeatable, and test one thing or one idea.

Looking over this list, many of the best practices I mentioned involve avoiding bugs related to timing. You know the drill: run your testcase and it passes. Run it again, or run it in a virtual machine, on a slower device, etc., and it fails. It's likely you have already experienced this.

Why does this happen? Well, it's because your test is clicking and interacting without verifying the changes occurring in the application. Many times it doesn't matter, and the built-in delay between your actions will be enough to cover you. However, that is not always the case.

So, adopt these practices and you will find your testcases are more reliable, easier to read and run without a hitch day in and day out. That's the sign of a good automated testcase.

Got more suggestions? Leave a comment!