Thursday, September 19, 2013

Autopilot Sandboxing

Autopilot has continued to change with the times, and the pending release of 1.4 brings even more goodies, including some performance fixes. But today I wanted to cover a newly landed feature from the minds of Martin and Jean-Baptiste (thanks guys!).

If you've developed autopilot tests in the past, you will have noticed how cumbersome running them can be. If you run a test on your desktop, you lose control of your mouse and keyboard for the duration, and you might even accidentally cause a test to fail. This is especially noticeable when you are iterating on a test to get it running "just right" while keeping your introspection tree open in vis, or reviewing someone else's code while wanting to verify the tests run. Enter sandbox mode.

A new command called autopilot-sandbox-run lets you easily run a testsuite inside your choice of two sandboxes: Xvfb by default, or, if you want to see the output visually, Xephyr. Have a quick look at the command options below as of this writing:

Usage: autopilot-sandbox-run [OPTIONS...] TEST [TEST...]
Runs autopilot tests in a 'fake' Xserver with Xvfb or Xephyr. autopilot runs
in Xvfb by default.
   
    TEST: autopilot tests to run

Options:
    -h, --help           This help
    -d, --debug          Enable debug mode
    -a, --autopilot ARG  Pass arguments ARG to 'autopilot run'
    -X, --xephyr         Run in nested mode with Xephyr
    -s, --screen WxHxD   Sets screen width, height, and depth to W, H, and D respectively (default: 1024x768x24)
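
For example, here's how you might run a suite headless under Xvfb, and then again visibly in a nested Xephyr window at a custom resolution (the suite name is just a placeholder):

    autopilot-sandbox-run my_app.tests
    autopilot-sandbox-run -X -s 800x600x24 my_app.tests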


The next time you want to get your hands dirty with some autopilot tests, try out the sandbox. I'm sure you'll find a very nice use for it in your workflow; after all, wouldn't it be handy to run multiple testsuites at once?

Tuesday, September 17, 2013

Testing Ubuntu Touch: The final month before release

As of today, we are exactly one month away from the release of Saucy Salamander. As part of that release, ubuntu is committed to delivering an image of ubuntu-touch, ready to install on supported devices.

And while folks have been dogfooding the images since May, many changes have continued to land as the images mature. As such, the qa team is committing to test each of the stable images released, and to do exploratory testing against new features and specific packagesets.

If you have a device, I would encourage you to join this effort! Everything you need to know can be found on this wiki page. You'll need a nexus device and a little time to spend with the latest image. If you find a bug, report it! The wiki has links to help. Testing doesn't get any more fun than this; flash your phone and try to break it! Go wild!

And if you don't own a device? You can still help! As bugs are found and fixed, the second part of the process is to create automated tests for them so they don't occur again. Any bug you see on the list is a potential candidate, but we'll be marking those we think would be especially useful to write an autopilot test for with a "touch-needs-autopilot" tag.

Join us in testing, confirming bugs, or writing autopilot tests. We want the ubuntu touch images to be the best they can be in one month's time. Happy Testing!

Monday, September 9, 2013

Jamming Quality Style

It's that time of year again! Time to get your jam on (I like mine on a bagel).

While you are making plans for Ubuntu Global Jam, don't forget you can contribute to quality as well. There's a separate subpage of the global jam wiki dedicated to it.

We love new test contributions, and there's a collection of wiki tutorials and videos to help you contribute them. You don't have to be technical to write tests -- we also need manual testcases, which are written in plain English :-)

More interested in submitting results for tests? We've also got you covered. We have tests for ubuntu's default applications as well as for the images themselves. Download an image and run it on your machine. Try running through some default testcases for ubuntu or your favorite flavor. An image and a PC or laptop are all you need to get started. Happy Jamming!

Monday, August 26, 2013

Call for Testing: Mir with multi-monitor

As mentioned in my last post, Mir is one of the biggest changes coming in 13.10. With feature freeze now happening this week, it's time to amp up our testing engines once more to test the final features and help land Mir into the archive.

The Mir team has put together both a ppa and a wiki page that contain all the information you need to help with testing. The testing window closes in 2 days on August 28th, just in time for feature freeze. The biggest change for Mir is the inclusion of multi-monitor support, which is therefore a focus for this testing. So here are the details you need to know.

What?
Help test Mir using your current system, ubuntu saucy and the Mir team ppa.

When?
Now through August 28th.

How?
The full instructions for installing the ppa, running the tests, and reporting the results can be found on this wiki page. Results are reported on this page or via the package tracker testing page.

Thank you for your contributions! Good luck and Happy Testing Everyone!

Monday, August 19, 2013

Automated Testing in ubuntu

So with all the automated testing buzz occurring in the quality world of ubuntu this cycle, I wanted to speak a little about why we're doing the work, and what the output of the work looks like.

The Why
So, why? Why go through the trouble of writing tests for the code you write? I'll save everyone a novel and avoid re-hashing the debate about whether testing is a proper use of development time. Simply put, for developers, it will prevent bugs from reaching your codebase, alleviate support and maintenance burdens, and give you more confidence and feedback during releases. If you want your application to have that extra polish and completeness, testing needs to be part of it. For users, the positives are similar and simple. Using well-tested software prevents regressions and critical bugs from affecting you. Every bug found during testing is a bug you don't have to deal with as an end user. Incidentally, if you like finding bugs in software, we'd love to have you on the team :-)

The How
In general, three technologies are being used for automated testing: Autopilot, Autopkg, and QML tests. Click each to learn more about writing tests with it.
Any color is good, as long as it's green
The Results
The QA Dashboard
The dashboard holds most of the test results, and gives you a much nicer view of them than just looking at a jenkins build screen. Take some time to explore all of the different tabs to see what's available. I wanted to highlight just a couple of areas in particular.
  • Autopkg Tests 
    • These testcases run at build time and are great for low-level libraries and integral parts of ubuntu. Check out the guide for help on contributing a testcase or two. Regressions have been spotted and fixed before even hitting the ubuntu archives.
  • Smoke Tests for ubuntu touch
    • These are some of the newest tests to come online; they display the results from the ubuntu phone images and applications, including the core apps, which are written entirely by community members. Want to know how well an image is running on your device? This is the page to find it.
Ubiquity Installer Testing
Curious about how well the installer is working? Yep, we've got tests for those as well. The tests are managed via the ubiquity project on launchpad.

Ubuntu Desktop Tests
Wondering how well things like nautilus, gedit and your other favorite desktop applications are doing? Indeed, thanks to our wonderful quality community, we've got tests for those as well. The tests can be found here.

The Next Steps
We want to continue to grow and expand all areas of testing. If you've got the skills or the willingness to learn, try your hand at helping improve our automated testcases. There's a wide variety to choose from, and all contributions are most welcome!

Sorry robot, all our tests are belong to us
Don't have those skills? Don't worry, not only are machines not taking over the world, they aren't taking over testing completely either. We need your brainpower to help test other applications and to test deeper than a machine can. Join us for our cadence weeks and general calls for testing, and sample new software while you help ubuntu. In fact, we're testing this week, focusing on Mir.

Finally, remember your contributions (automated or manual) help make ubuntu better for us all! Thank you!

Friday, August 16, 2013

Feature freeze coming? Let's test!

We're already approaching feature freeze at a quickening pace, and thus the next few weeks are rather important to us as a testing community. 13.10 is landing in October, which is now rapidly approaching (where did the summer go?!).

What?
Let's run through some manual tests for ubuntu and flavors. I'd like to ask for a special focus to be given to Mir/xMir. We plan to have a rigorous test of the package again in about a week once all features have landed. In the interim, let's try and catch any additional bugs.

When?
This week: Saturday, August 17th through Saturday, August 24th. It's week 5 of our cadence tests.

Ok, I'm sold, what do I need to do?
Execute some testcases against the latest version of saucy, in particular the xMir test.

Got any instructions?
You bet, have a look at the Cadence Week testing walkthrough on the wiki, or watch it on youtube. If you get stuck, contact us.

Where are the tests?
You can find the Mir test in its own milestone here. Remember to read and follow the installation instructions link at the top of the page!
The rest of the applications and packages can be found here.

I don't want to run/install the unstable version of ubuntu, can I still help?
YES! Boot up the livecd on your hardware and run the live session, or use a virtual machine to test (install ubuntu or use a live session). The video demonstrates using a virtual machine booting into a live session to run through the tests. For the Mir/xMir tests, however, we'd really like results from real hardware.

But, virtual machines are scary; I don't know how to set one up!
There's a tool called testdrive that makes setting up a VM with the ubuntu development release a point-and-click operation. You can then use it to test. Seriously, check out the video and the walkthrough for more details.

Thank you for your contributions! Good luck and Happy Testing Everyone! 

Wednesday, August 7, 2013

Autopilot best practices

I've now had the pleasure of writing autopilot tests for about 9 months, and along the way I've learned or been taught some of the things that are important to remember.

Use Eventually
The eventually matcher provided by autopilot is your best friend. Use it liberally to ensure your test doesn't fail because of a millisecond difference at runtime. Eventually will retry your assert until it's true or it times out. When combined with examining an object or selecting one, eventually will ensure your test failure is a true failure and not a timing issue. Also remember you can use a lambda if you need to wrap a function call in your assert.
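
Here's a minimal sketch of what that looks like (assuming a test method with an application proxy in self.app; the object names are made up):

    from autopilot.matchers import Eventually
    from testtools.matchers import Equals, NotEquals

    # Retry until the label's text matches, or the default timeout expires.
    label = self.app.select_single(objectName='resultLabel')
    self.assertThat(label.text, Eventually(Equals('42')))

    # Wrap a function call in a lambda so Eventually can re-evaluate it.
    self.assertThat(lambda: self.app.select_single(objectName='dialog'),
                    Eventually(NotEquals(None)))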

Assert more!
Every test can use more asserts -- even my own! Timing issues can rear their ugly head again when you fail to assert after performing an action. (A short sketch follows the list below.)
  • Every time you grab an object, assert you received the object
    • You can do this by asserting the object NotEquals(None); remember to use the eventually matcher: Eventually(NotEquals(None))!
  • Every time you interact with the screen, try an assert to confirm your action
    • Click a button, assert
    • Click a field to type, assert you have focus first
      • You can do this by using the .focus property and asserting it to be True
      • Finished typing?, assert your text matches what you typed
        • You can do this by using the .text property and asserting it to be Equal to your input
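
Here's a hedged sketch tying those together (the widget names are made up, and I'm creating the input devices explicitly from autopilot.input):

    from autopilot.input import Keyboard, Mouse
    from autopilot.matchers import Eventually
    from testtools.matchers import Equals, NotEquals

    keyboard = Keyboard.create()
    mouse = Mouse.create()

    # Grab an object, then assert we actually received it.
    field = self.app.select_single(objectName='searchField')
    self.assertThat(field, NotEquals(None))

    # Click the field, then assert it has focus before typing.
    mouse.click_object(field)
    self.assertThat(field.focus, Eventually(Equals(True)))

    # Finished typing? Assert the text matches what was typed.
    keyboard.type('hello')
    self.assertThat(field.text, Eventually(Equals('hello')))
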
Don't use strings, use objectNames
We all get lazy and just issue selects with English label names. These will break when run in a non-English language. They will also break when we decide to update the string to something more verbose or just different. Don't do it! That includes things like tab names, button names and label names -- all common rulebreakers.
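
As an illustration (the names here are hypothetical):

    # Fragile: breaks under translation, or whenever the label text changes.
    save_button = self.app.select_single('Button', text='Save')

    # Robust: objectName is set in the code and never translated.
    save_button = self.app.select_single(objectName='saveButton')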

Use object properties
They will help you add more asserts about what's happening. For instance, you can use the .animating property or .moving property (if they exist) to wait out animations before you continue your actions! I already mentioned the .focus property above, and you might find things like .selected, .state, .width, .height, .text, etc. to be useful while writing your test. Check out your objects and see what might be helpful to you.
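
For example, waiting out an animation might look like this (assuming the object actually exposes an animating property):

    from autopilot.matchers import Eventually
    from testtools.matchers import Equals

    panel = self.app.select_single(objectName='slidingPanel')
    self.assertThat(panel.animating, Eventually(Equals(False)))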

Interact with objects, not coordinates
Whenever possible, you should ensure your application interactions specify an object, not coordinates. If the UI changes, the screen size changes, etc., your test will fail if you're using coordinates. If your interaction is emulating something like a swipe, drag, or pinch action, ensure you utilize relative coordinates based upon the current screen size.
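
For instance, a right-edge swipe might be sketched like this using autopilot's Display and Pointer (the half-screen drag is just an example gesture):

    from autopilot.display import Display
    from autopilot.input import Pointer, Touch

    display = Display.create()
    width = display.get_screen_width()
    height = display.get_screen_height()

    # Drag from the right edge to the centre, wherever the edge happens to be.
    pointer = Pointer(Touch.create())
    pointer.drag(width - 1, height // 2, width // 2, height // 2)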

Use the ubuntusdk emulator if you are writing a ubuntusdk application
It will save you time, and ensure your testcase gets updated if any bugs or changes happen to the sdk, all without you having to touch your code. Check it out!
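
A minimal sketch, assuming the ubuntuuitoolkit emulators package and a tab defined in your QML (the tab name is made up):

    from ubuntuuitoolkit import emulators

    main_view = self.app.select_single(emulators.MainView)
    # The emulator knows how to drive sdk widgets, e.g. switching tabs.
    main_view.switch_to_tab('settingsTab')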

Read the documentation best practices
Yes, I know documentation is boring. But at least skim over this page on writing good tests. There are a lot of useful tidbits lurking in there. The gist is that your tests should be self-contained, repeatable, and test one thing or one idea.

Looking over this list, many of the best practices I listed involve avoiding bugs related to timing. You know the drill; run your testcase and it passes. Run it again, or run it in a virtual machine, on a slower device, etc., and it fails. It's likely you have already experienced this.

Why does this happen? Well, it's because your test is clicking and interacting without verifying the changes occurring in the application. Many times it doesn't matter, and the built-in delay between your actions will be enough to cover you. However, that is not always the case.

So, adopt these practices and you will find your testcases are more reliable, easier to read and run without a hitch day in and day out. That's the sign of a good automated testcase.

Got more suggestions? Leave a comment!