Thursday, December 13, 2012

Jamming Thursdays!

Right now as I type we have two jams going on! Last week Jono posted about enhancing the ubuntu.com/community page. If you're part of the community, join in raising the banner for your specific focus area. The fun is happening now on #ubuntu-docs. For the full details, see Jono's post. For us quality folks, the pad is here: http://pad.ubuntu.com/communitywebsite-contribute-quality. Feel free to type and edit away!

In addition, as Daniel Holbach mentioned, there is a hackathon for automated testing. Come hang out with us on #ubuntu-quality, learn, ask and write some tests. Again, the full details can be found on Daniel's post.

Come join us!

Wednesday, November 28, 2012

Our first Autopilot testcase

So last time we learned some basics of autopilot testcases. We'll use the same code branch we pulled then to cover writing an actual testcase.

bzr branch lp:~nskaggs/+junk/autopilot-walkthrough

As a practical example, I'm going to convert our (rather simple and sparse) firefox manual testsuite into an automated test using autopilot. Here's a link to the testcase in question.

If you take a look at the included firefox/test_firefox.py file you should recognize its basic layout. We have a setUp step that launches firefox before each test, and then there are the three testcases corresponding to each of the manual tests. The file is commented, so please do have a look through it. We use everything we learned last time about emulating the keyboard and mouse to perform the steps described in the manual testcases. Enough code reading for a moment; let's run this thing.

autopilot run firefox

Ok, so hopefully you had firefox launch and run through all the testcases -- and they all, fingers-crossed, passed. So, how did we do it? Let's step through the code and talk about some of the challenges faced in doing this conversion.

Since we want to test firefox in each testcase, our setUp method is simple: launch firefox and set the focus to the application. Each testcase then starts with that assumption. Inside test_browse_planet_ubuntu we simply attempt to load a webpage. Our assertion for this is to check that the application title changes to "Planet Ubuntu" -- in other words, that the page loaded. The other two testcases expand upon this idea by searching wikipedia and checking for search suggestions.
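To make that concrete, here's a condensed sketch of the setUp and first test. The application name given to start_app, the keystrokes, the waits, and the get_app_win_title helper are all illustrative; the file in the branch is the authoritative version.

from time import sleep

from autopilot.testcase import AutopilotTestCase

class FirefoxTests(AutopilotTestCase):

    def setUp(self):
        super(FirefoxTests, self).setUp()
        # launch firefox before each test and give it focus
        self.app = self.start_app("Firefox Web Browser")

    def get_app_win_title(self):
        # illustrative helper: read the focused window's title from
        # the wrapper start_app() returned; the branch does the
        # equivalent, though the exact attribute names may differ
        return self.app.get_windows()[0].title

    def test_browse_planet_ubuntu(self):
        # focus the location bar, type the address, load the page
        self.keyboard.press_and_release("Ctrl+l")
        self.keyboard.type("planet.ubuntu.com")
        self.keyboard.press_and_release("Enter")
        sleep(10)  # a crude wait; we have no way yet to watch the page load
        # our only assertion: the window title changed, meaning the page loaded
        self.assertIn("Planet Ubuntu", self.get_app_win_title())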

The test_search_wikipedia method uses the keyboard shortcut to open the search bar, select wikipedia, and then search for linux. Again, our only assertion of success is that a page with both Linux and wikipedia in its title loaded. We are unable to confirm, for instance, that we properly selected wikipedia as the search engine (although the final assertion would likely fail if this were not the case).
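Sketched in the same style, this method would sit inside the FirefoxTests class above; the engine-switching chord here is purely a stand-in for whatever keystrokes the branch actually sends:

    def test_search_wikipedia(self):
        # Ctrl+k focuses firefox's search bar
        self.keyboard.press_and_release("Ctrl+k")
        # the branch selects wikipedia with its own keystrokes;
        # this chord is illustrative only
        self.keyboard.press_and_release("Alt+Down")
        self.keyboard.type("linux")
        self.keyboard.press_and_release("Enter")
        sleep(10)  # crude wait for the results page
        # assert only that the loaded page's title mentions both
        title = self.get_app_win_title()
        self.assertIn("Linux", title)
        self.assertIn("Wikipedia", title)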

Finally, the test_google_search_suggestions method attempts to test that the "search suggestions" feature of firefox is performing properly. You'll notice that we are missing the assertion that checks for search suggestions while searching. With the knowledge we've gained up till now, we don't have a way of knowing whether the suggestion list is generated or not. In actuality, this test cannot be completed, as the primary assertion cannot be verified without some way of "seeing" what's happening on the screen.
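In sketch form the gap is easy to see (again inside the same class; keystrokes illustrative):

    def test_google_search_suggestions(self):
        # type a partial query into the search bar
        self.keyboard.press_and_release("Ctrl+k")
        self.keyboard.type("ubun")
        sleep(3)
        # the primary assertion is missing: with keyboard and mouse
        # emulation alone we cannot "see" whether the suggestion
        # list actually appeared on screen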

In my next post, I'll talk about what we can do to overcome the limitations we faced in doing this conversion by using "introspection". In a nutshell, by using introspection, autopilot will allow us to "see" what's happening on the screen by interacting with the application's data. It's a much more robust way of "seeing" what we see as a user than reading individual screen pixels. With any luck, we'll be able to finish our conversion and look at accomplishing bigger tasks and tackling larger manual testsuites.

I trust you were able to follow along and run the final example. Until the next blog post, might I also recommend having a look through the documentation and trying to write and convert some tests of your own -- or simply extend and play around with what you pulled from the example branch. Do let me know about your success or failure. Happy Testing!

Monday, November 26, 2012

Getting started with Autopilot

If you caught the last post, you'll have some background on autopilot and what it can do. Start there if you haven't already read the post.

So, now that we've seen what autopilot can do, let's dig into making it work for our testing efforts. A fair warning: there is some python code ahead, but I would encourage even the non-programmers among you to have a glance at what is below. It's not exotic programming (after all, I did it!). Before we start, let's make sure you have autopilot itself installed. Note, you'll need the version from this ppa in order for things to work properly:

sudo add-apt-repository ppa:autopilot/ppa
sudo apt-get update && sudo apt-get install python-autopilot

Ok, so first things first. Let's create a basic shell that we can use for any testcase that we want to write. To make things a bit easier, there's a lovely bazaar branch you can pull from that has everything you need to follow along.

bzr branch lp:~nskaggs/+junk/autopilot-walkthrough
cd autopilot-walkthrough

You'll find two folders. Let's start with the helloworld folder. We're going to verify autopilot can see the testcases, and then run and look at the 'helloworld' tests first. (Note, in order for autopilot to see the testcases, you need to be in the root directory, not inside the helloworld directory)

$ autopilot list helloworld
Loading tests from: /home/nskaggs/projects/

    helloworld.test_example.ExampleFunctions.test_keyboard
    helloworld.test_example.ExampleFunctions.test_mouse
    helloworld.test_hello.HelloWorld.test_type_hello_world

 3 total tests.


Go ahead and execute the first helloworld test.

autopilot run helloworld.test_hello.HelloWorld.test_type_hello_world
 
A gedit window will spawn and type hello world to you ;-) Go ahead and close the window afterwards. So, let's take a look at this basic testcase and talk about how it works.

from autopilot.testcase import AutopilotTestCase

class HelloWorld(AutopilotTestCase):

    def setUp(self):
        super(HelloWorld, self).setUp()
        self.app = self.start_app("Text Editor")

    def test_type_hello_world(self):
        self.keyboard.type("Hello World")


If you've used other testing frameworks in the xUnit family, you will notice the similarities. We implement an AutopilotTestCase object (class HelloWorld(AutopilotTestCase)) and define a new method for each test (i.e., test_type_hello_world). You will also notice the setUp method. This is called by the test runner before each test is run. In this case, we're launching the "Text Editor" application before we run each test (self.start_app("Text Editor")). Finally, our test (test_type_hello_world) simply sends keystrokes to type out "Hello World".

From this basic shell we can easily add more testcases to the helloworld testsuite by adding a new method. Let's add some simple ones now to show off some of autopilot's other capabilities for controlling the mouse and keyboard. If you branched the bzr branch, there are a few more tests in the test_example.py file. These demonstrate some of the utility methods AutopilotTestCase makes available to us. Try running them now. The comments inside the file also explain briefly what each method does.

autopilot run helloworld.test_example.ExampleFunctions.test_keyboard
autopilot run helloworld.test_example.ExampleFunctions.test_mouse

Now there is more that autopilot can do, but armed with this basic knowledge we can put the final piece of the puzzle together. Let's create some assertions, or things that must be true in order for the test to pass. Here's a testcase showing some basic assertions.

autopilot run helloworld.test_example.ExampleFunctions.test_assert
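If you'd rather read before you run, here is a condensed sketch of what test_example.py contains. The method bodies here are illustrative; the file in the branch, with its fuller comments, is the real reference.

from autopilot.testcase import AutopilotTestCase

class ExampleFunctions(AutopilotTestCase):

    def setUp(self):
        super(ExampleFunctions, self).setUp()
        self.app = self.start_app("Text Editor")

    def test_keyboard(self):
        # type() sends plain keystrokes; press_and_release()
        # handles chords like Ctrl+a (select all)
        self.keyboard.type("The quick brown fox")
        self.keyboard.press_and_release("Ctrl+a")

    def test_mouse(self):
        # move the pointer to an absolute screen position and click
        self.mouse.move(100, 100)
        self.mouse.click()

    def test_assert(self):
        # assertions are inherited from the xUnit family: if any
        # of these is false, the test fails
        self.keyboard.type("Hello")
        self.assertTrue(self.app is not None)
        self.assertEqual(2 + 2, 4)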
  
Finally, there are some conventions that are important to know when using autopilot. You'll notice a few things about each testsuite (the layout sketch below puts them all together):
  • We have a folder named after the testsuite.
  • Inside the folder, we have a file named test_testsuite.py.
  • Inside the file, we have a TestSuite class, with test_testcase_name methods.
  • Finally, in order for autopilot to see our testsuite, we need to let python know there is a submodule in the directory. Ignoring the geekspeak, we need an __init__.py file (this can be blank if not otherwise needed).
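Putting those conventions together, the helloworld suite from this walkthrough lays out on disk like this:

autopilot-walkthrough/
    helloworld/
        __init__.py        (can be blank; marks the folder as a python submodule)
        test_hello.py      (class HelloWorld, with test_type_hello_world)
        test_example.py    (class ExampleFunctions, with test_keyboard, etc.)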
Given the knowledge we've just acquired, we can tackle our first testcase conversion! For those of you who like to work ahead, you can already see the conversion inside the "firefox" folder. But the details, my dear Watson, will be revealed in due time. Until the next post, cheerio!

Tuesday, November 20, 2012

A glance at Autopilot

So, as has been already mentioned, automated testing is going to come into focus this cycle. To that end, I'd like to talk about some of the tools and methods for automated testing that exist and are being utilized inside ubuntu.

I'm sure everyone has used unity at some point, and you will be happy to know that there is an automated testsuite for unity. Perhaps you've even heard the name autopilot. The unity team built autopilot as a testing tool for unity. However, autopilot has broader applications beyond unity, helping us do automated testing on a grander scale. So, to introduce you to the tool, let's check out a quick demo of autopilot in action, shall we? Run the following command to install the packages needed (you'll need quantal or raring in order for this to work):

sudo apt-get install python-autopilot unity-autopilot

Excellent, let's check this out. A word of caution here: running autopilot tests on your default desktop will cause your computer to send mouse and keyboard commands all by itself ;-) So, before we go any further, let's hop over into a 'Guest Session'. You should be able to use the system indicator in the top right to select 'Guest Session'. Once you are there, you'll be in a new desktop session, so head back over to this page. Without further ado, open a terminal and type:

autopilot run unity.tests.test_showdesktop.ShowDesktopTests.test_showdesktop_hides_apps

This is a simple test to check and see if the "Show Desktop" button works. The test will spawn a couple of applications, click the show desktop button, and verify that your applications are hidden. It'll clean up after itself as well, so no worries. Neat, eh?

You'll notice there are quite a few unity testcases, and you've now installed them all on your machine.

autopilot list unity

As of this writing, I get 461 tests returned. Feel free to try and run them. Pick one from the list and see what happens. For example,

autopilot run unity.tests.test_dash.DashRevealTests.test_alt_f4_close_dash

Just make sure you run them in a guest session -- I don't want anyone's default desktop to get hammered by the tests!

If you are feeling adventurous, you can actually run all the unity testcases like this (this will take a LONG TIME!).

autopilot run unity

As a sidenote, you are likely to find that some of the testcases fail on your machine. The testsuite is run constantly by the unity developers, and the live, commit-by-commit results of success or failure are actually available on jenkins. Check it out.

So in closing, this cycle we as a community have some goals around easing our testing burden, freeing our resources and minds for the deeper and more thorough testing that automation cannot handle. To help encourage this move of our basic testcases towards automation, the next series of blog posts will be a walkthrough on how to write Autopilot testcases. I hope to learn, explore and discover along with all of you. Autopilot tests themselves are written in python, but don't let that scare you off! If you are able to understand how to test, writing a testcase that autopilot can run is simply a matter of learning syntax -- non-programmers are welcome here!

Wednesday, October 31, 2012

UDS-R: Rise of the (quality) machines


Greetings from Copenhagen! I thought I would give a mid-UDS checkup for the quality team community. You may have already heard some of the exciting stuff being discussed at UDS. Automated testing is being pursued with full vigor, the release schedule has been changed, and cadence testing is in. In addition, ubuntu is getting into fighting shape by targeting the Nexus 7 as a reference platform for mobile.

I was honored to give a quick plenary where attendees got to see and hear about the various automated testing efforts going on. Does that mean the machines have replaced us? Hardly! The goal of bringing automated testing online is to help us be more proactive with how and why we test. We've done an amazing job of reacting to changes and bugs, but now, as a community, I would like us to focus on being proactive with our testing. The changes below are all going to help set us firmly in this direction. By proactively testing things, we eliminate bugs and repetitive, duplicated work for ourselves. This frees us to explore more focused, more interesting, and more in-depth testing. So without further ado, here's a quick rundown of the changes discussed here in Copenhagen -- hang on to your testing hats!

Release
The release schedule has dropped all alphas and the first beta, leaving only one beta milestone and then the final release. In addition, the freezes have been moved back a few weeks. The end result is that the archive will not be frozen until late in the cycle, allowing development and testing to continue unencumbered. This of course is for ubuntu only. Which brings us to flavors!


Flavors
Flavors will now have complete control over their releases. They can choose to test, freeze, and re-spin according to their own schedule and timing. Some will adopt ubuntu's schedule, others may retain the old milestones or even do something completely different.


ISOs
ISOs will now be automatically 'smoke' tested before general release. No more completely broken installers on the published images! In addition, the ISOs will be published daily as usual, but will not have the typical milestones mentioned above. Preference will be given to the daily ISO -- the current one -- throughout the cycle. Testing will occur in a cadence instead of at milestones.

Cadence
Rather than milestones, a bi-weekly cadence of testing will occur, with the goal of assuring good quality throughout the release cycle. The cadence weeks will be scheduled and will feature testing different pieces of ubuntu in a more focused manner. This includes things like unity, the installer, and new features landing in ubuntu, and the focus will also be informed by feedback from the State of ubuntu Quality.

State of ubuntu Quality
A bold effort to generate a high-level view of what needs testing and what is working well, on a per-image basis, inside of ubuntu. This is an experimental idea whose implementation will garner feedback early in the cycle; it will collect data and influence decisions about testing focus during the cycle. *fingers crossed*

AutoPilot
This tool will integrate xpresser to become a complete functional UI testing tool. One of the first focuses for testcases will be automating the installer from a UI perspective, to free our manual testing resources from basic installer testing! From the community perspective, we can join in both writing and executing automated tests, as well as in the development of the tool itself.

Hardware Testing Database
This continuing experiment will become more of a reality. The primary focus of the work this cycle will be to bring the tool, HEXR, online and to do basic integration with the qatracker for linking your hardware profiles. In addition, focused hardware testing using the profiles will be explored.

I hope this gives you a nice preview of what's coming. I would encourage you to have a look at the blueprints and pads for the sessions, and ask questions or volunteer to help in places you are interested. I am excited about the opportunities to continue bringing testing to the next level inside of ubuntu. I owe many thanks to the wonderful community that continues to grow around testing. Here's to a wonderful cycle.

Sunday, October 28, 2012

Readying for UDS

I trust everyone is readying themselves -- don't blink! Ubuntu UDS-R is already upon us. Those of you who have been watching closely may have heard about some of the planned sessions for QA, but if not feel free to take a look. Don't worry, I'll wait.

But wait, there's more! In addition, there is going to be an evening event where testing is the focus, happening Tuesday evening. The goal is to learn about some of the testing efforts going on inside ubuntu, including automated testing; and more importantly, to write some testcases! Folks will be on hand to talk you through writing both automated and manual test cases.

Looking through the sessions, I hope you have the sense that testing is continuing to play a large role in ubuntu. And further, that you can be even more involved! UI testing, automated testing, testcase writing -- all of these are focus points this cycle and have sessions. Get involved -- and if you're at UDS, please do come to a session or two, add your voice, and grab some work items :-) Let's make it happen for next cycle.

Tuesday, October 9, 2012

Community Charity-a-thon: The Aftermath

I wanted to express my heartfelt thanks to everyone who contributed. To those who gave on behalf of the debian community, thank you as well! I stated that for every five donations I would do a manpage for a package that is missing one :-) I received just under five donations marked debian, but not to worry, I'll still create a manpage for one in need. Although I did other work during the marathon, I purposefully held off on creating the manpage until I was a bit more rested -- I have enough trouble speaking English sometimes without adding in sleep deprivation. The man page readers will thank me, and I'm sure those who get my page to review will as well.

To the rest of you, thank you very much. We raised $943.34 for WaterAid. That's amazing! I'm truly touched by your generosity. Here's the complete list of donors, hats off to all of you -- I know several of you donated anonymously, thank you!

Anonymous :-)
Cormac Wilcox
Gema

Anders Jonsson
Arthur Talpaert
Sam Hewitt
Alvaro

Ólavur Gaardlykke
Joey-Elijah Sneddon
steve burdine

Thomas Martin (tenach)
Daniel Marrable
sebsebseb Mageia
Jonas Grønås Drange
Gregor Herrmann
Mark Shuttleworth
phillw
Thijs K
Alvaro
Max Brustkern
Jane Silber
Gema Gomez-Solano
Martin Pitt
Michelle Hall


Now I know no one wants to re-watch that crazy 24 hours of video, but I wanted to bring you a few highlights as well. I spent time doing some of my normal work, but I also promised to do something outside the norm. I was able to scratch an itch, and although my on-air demo failed (an uh-duh moment), I was able to record this video immediately after, demonstrating where we in QA are focusing next cycle. In addition, there were several talks from QA personnel, and I recommend watching this clip if you're interested in hearing Rick's take on where ubuntu is going, and indeed how quality will play a role. You can skip to here if you only want to hear his take on quality. Now is a great time to be involved in QA -- I'm excited to see things unfold for 14.04, and I hope you are too.

For the readers who actually made it this far, I've got the best for last. There were some gags in those 24 hours; for instance, check out my chicken dance! (*cough* this was supposed to be a group thing *cough*). Ohh, and there's always this lovely screencap. To be fair, this was about 20 hours or so in.