qapages
Learn QA, one page at a time.
8 posts
You want to learn to test better? Here are things I learnt the hard way.
qapages · 8 years ago
Building high performing teams
So you’re building a team at your company and want the best folks out there. As the hiring manager, you feel the pressure and the pain. Past hiring mistakes quickly flash before your eyes. You see flashbacks of late nights and weekends you and your team worked because you were understaffed and just could not hire fast enough. A shiver runs through your spine. 
Do you want to put a quick end to the pain and hire someone quickly, even if they are not a great match? Someone is better than no one, right? Hmmm, I do not think so. How do you build high performing teams? How do you retain the team that you built?
The first thing to do is to hire the right people.
Do not compromise. Do not settle. Hire the right people. Who are the right people? And how do you avoid waiting years for the right candidates to show up? Look for potential, look for a willingness to learn and a demonstrated ability to pick up what's needed to succeed at a job, and look for examples of initiative taken. Ask candidates about the best projects they've executed and what made them the best. Ask them about the worst projects they worked on and what made them so bad.
Ok. You've hired superstars. Can you keep them, though?
Keeping high performing employees engaged and motivated is an art. High performers will have high expectations. Do you have a clear path for them to grow through, and has that been communicated clearly to them? Do you have regular 1:1 meetings or other types of frequent feedback sessions with them? If you wait months before asking someone how they are doing or what problems they are facing, the first time you hear about those issues will be in their exit interview. Talk to your team often, get to know them, understand their motivations and expectations, and assign them to projects that match those expectations whenever possible.
But they did so well on the interview. What happened after they came on board?
Some folks do really well on interviews. It may be that they have really good skills in one area but we've been giving them projects that require other skills they do not possess. Or it may be that they just interview really well and we did not dig deep enough. The right thing to do is to find out which is the case. If it's a matter of finding the right fit for them in a different project, act quickly and do that. If it's a matter of having a tough conversation with them and helping them with a performance improvement plan, so be it; do that quickly as well. The worst thing you can do when you have a problem is nothing.
Impact based role definitions and evaluations
How do you define a role? Do you just list all the projects or types of work you want folks filling the role to do and leave it at that? Have you thought about creating impact-linked roles that help every employee understand the impact their work is expected to have on the company? I personally feel that it is highly motivating to see how my work makes the company better, helps grow our user base or in some other way moves the needle.
Reward excellence
You have some high performers on your team. You help one of them reach the next level in his career path. You help another get that key role in another department. You help yet another land that architect role everyone is eyeing and that's a great match for him. People are watching how you treat your best. If they know that they can go places by performing well on your team, guess what? There will always be people who will want to work on your team. This will also motivate the not so high performers to do better. If everyone knows that good work is noticed and rewarded, it really makes a difference.
The High potential leader
I'm reading this book called 'The High Potential Leader'. It's got some good ideas. I'll take notes via this blog and work some of those into my/my team's work routine. Per chapter 3 of this book, the formula for building successful and high performing teams is,
People Quality + Job Fit + Collaboration = Team Performance
Identify your team’s strengths
Build on their strengths
Make necessary changes quickly
Look for opportunities to collaborate across organizational boundaries
Lead the dialogue (keep it pointed towards positive results)
Make your meetings effective
Be a social architect
Windows desktop application testing
Testing a windows desktop application just got easier (at least it did for me). Here's a no-license, no-nonsense solution to testing your windows applications. And it's all in Python.
Tools we’ll use today,
PyWinAuto http://pywinauto.github.io/ 
and SWAPY (a companion tool to PyWinAuto that helps with object identification) - https://github.com/pywinauto/SWAPY
and the test runner that I like to use is, 
pytest - http://doc.pytest.org/en/latest/
Steps to tackle this task,
Install python and then pywinauto + pytest in your python environment (preferably in your own virtualenv)
Launch your desktop app
Launch SWAPY and look at the list of apps it displays, watch for your app in that list
Click on your app and you should see the different objects in your application that are available for you to interact with. Once you click on an item and select the operation you want to perform against it, you'll see python code being auto-generated for you in the tab on the right in SWAPY.
Save that code in a file as something.py; you can now run this script from the command line as python something.py, and you should see the script perform the operation you did. Basically, you recorded the operation you performed earlier and are able to replay it, with any changes you want to make to it.
Enough talk, let's try a real working example.
Let's try to automate interaction with notepad. Easy enough, right?
Launch notepad
[screenshot]
Launch SWAPY from the executable you downloaded via https://github.com/pywinauto/SWAPY
[screenshot]
If you look closely, you can see that the notepad application already shows up in the 'Objects browser' tab. Let's click on it. Let's say I just want to open notepad.exe, hit the File menu and then hit Exit.
[screenshot]
So I drill down in the object browser menu until I am at the element I want to interact with. 
[screenshot]
I right click on that element and select the operation I want to perform, which is Click in this case.
[screenshot]
After you select 'Click' from the right click menu, you can see that the tab named 'Editor' gets automatically populated with python code for you to use to perform this action. Cool, right?
Here’s the code as seen on SWAPY.
[screenshot]
You save this into something.py and run it from the command line to see Notepad.exe being launched and exited via the File > Exit menu. There's your first automated script for Windows!
To see details about how you can write a test with asserts + pytest thrown in, look at this little repo of tests I put up on github - https://github.com/dbhaskaran1/win_logon
Automated accessibility testing
Ok, so you want to test your website for compliance with Section 508. Where do you start?
Thankfully, you don't have to shell out big bucks to get a report that tells you about your website's Section 508 compliance. Open Source FTW! Enter pa11y - http://pa11y.org/
Let's do the (short) setup and get started with a quick example.
Install the pa11y command line tool (it's a Node.js package).
npm install -g pa11y
Run the pa11y command line tool against your website, eg www.bbc.com
pa11y http://www.bbc.com/
And you'll see a bunch of results! The output looks like this,
• Error: Iframe element requires a non-empty title attribute that identifies the frame.
  ├── WCAG2AA.Principle2.Guideline2_4.2_4_1.H64.1
  ├── #wwhp > iframe:nth-child(24)
  └── <iframe src="http://tpc.googlesyndication.com/safeframe/1-0-6/html/container.html" style="visibility: hidden; display: none;"></iframe>
• Error: Iframe element requires a non-empty title attribute that identifies the frame.
  ├── WCAG2AA.Principle2.Guideline2_4.2_4_1.H64.1
  ├── #kx-proxy-JA8mItOH
  └── <iframe id="kx-proxy-JA8mItOH" src="http://cdn.krxd.net/partnerjs/xdi/proxy.3d2100fd7107262ecb55ce6847f01fa5.html#!kxcid=JA8mItOH&kxt=http%3A%2F%2Fwww.bbc.com&kxcl=cdn&kxp=" style="display: none; visibility: hidden; height: 0; width: 0;">...
22 Errors 159 Warnings 415 Notices
If you want a more readable format (of course you do), you’d use the html report option or the csv/json options. Here’s a sample with the html report.
pa11y http://www.bbc.com/ --reporter html > /tmp/report.html
Then when you open up report.html, you see content that looks like this. Way better, right?
[screenshot]
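If you go the json route instead, pa11y emits a list of issue objects that you can post-process however you like. Here's a small sketch that tallies issues by severity; the json excerpt below is a made-up two-item sample, not real pa11y output, though the `type`, `code` and `message` fields follow the shape of the json reporter.

```python
import json
from collections import Counter

# Hypothetical two-item excerpt of `pa11y <url> --reporter json` output.
# A real run returns a much longer list of objects shaped like these.
raw = '''[
  {"type": "error",
   "code": "WCAG2AA.Principle2.Guideline2_4.2_4_1.H64.1",
   "message": "Iframe element requires a non-empty title attribute."},
  {"type": "warning",
   "code": "WCAG2AA.Principle1.Guideline1_1.1_1_1.H67.2",
   "message": "Img element is marked so that it is ignored by AT."}
]'''

issues = json.loads(raw)
counts = Counter(issue["type"] for issue in issues)
print(f'{counts["error"]} errors, {counts["warning"]} warnings')
```

This is handy when you want to fail a CI job on errors but merely report warnings.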
What next? Fix those issues :) And after that? Set up a dashboard that runs these reports for you against websites of your choosing and shows you the results. Wow, once again, Open Source FTW!!!
How to get the dashboard going? Start by cloning this repo,
git clone https://github.com/pa11y/dashboard
Install the dependencies, notably MongoDB and Node (run npm install inside the cloned repo). Then run the below command,
PORT=8080 node index.js
Now you should be able to hit localhost:8080 and see the dashboard. In the dashboard, you can easily add the URL you want to hit and you’re done! Here’s what it looks like on my dashboard.
[screenshots]
and finally,
[screenshot]
Is that the easiest accessibility testing you’ve ever done?
How to do different tasks on mysql
Here's a list of things that I've had to do on mysql from time to time. You don't always need these, but when you do, it's good to have them put together in one place.
Enable logging and get all sql queries hitting a mysql DB written to a log file
mysql> show variables like "general_log%";
This usually shows you two values. One (general_log) is a flag for turning general logging on and off. The other (general_log_file) is where to actually log to.
mysql> SET GLOBAL general_log = 'ON';
mysql> show variables like "general_log%";
+------------------+----------------------------+
| Variable_name    | Value                      |
+------------------+----------------------------+
| general_log      | ON                         |
| general_log_file | /var/lib/mysql/logname.log |
+------------------+----------------------------+
2 rows in set (0.00 sec)
tail -f /var/lib/mysql/logname.log
When you’ve gotten what you’re looking for, go back and turn off the logging.
mysql> SET GLOBAL general_log = 'OFF';
Enabling slow query logging
You notice a web page loading slowly and you suspect it might be because it's taking a long time to fetch results from the DB. Or you're running performance tests against your system and want to find out which of your queries are running slowly and hurting your system's response time. Well, mysql has a slow query logging facility that can help you narrow down the issue.
ssh into your mysql server as a user with superuser privileges, add the below lines to /etc/my.cnf and restart the mysql service on that box.
[mysqld]
slow_query_log=1
slow_query_log_file=/tmp/slow.log
long_query_time=2
The ‘slow_query_log’ flag turns on or off slow query logging. The ‘slow_query_log_file’ setting specifies the actual file to log to and the ‘long_query_time’ is the setting where you specify what is to be considered a slow query. Here, 2 stands for anything longer than 2 seconds. 
And you’re all set! Run your tests, your UI scripts etc and then check in on the /tmp/slow.log file and you should see the queries you’re interested in.
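Once the slow log starts filling up, each entry carries a "# Query_time:" header line before the statement itself. Here's a stdlib-only sketch that pulls the slowest queries out of a log; the excerpt is made up, but it follows the slow log's general shape.

```python
import re

# Made-up excerpt of a mysql slow query log for demonstration.
log_excerpt = """\
# Time: 2017-03-01T10:15:00
# User@Host: appuser[appuser] @ localhost []
# Query_time: 4.721  Lock_time: 0.000120 Rows_sent: 1  Rows_examined: 984532
SELECT * FROM orders WHERE customer_id = 42;
# Time: 2017-03-01T10:16:10
# Query_time: 2.310  Lock_time: 0.000090 Rows_sent: 10  Rows_examined: 50231
SELECT name FROM customers WHERE region = 'west';
"""

slow = []
current_time = None
for line in log_excerpt.splitlines():
    m = re.match(r"# Query_time: ([\d.]+)", line)
    if m:
        # Remember the query time; the statement follows on a later line.
        current_time = float(m.group(1))
    elif current_time is not None and not line.startswith("#"):
        slow.append((current_time, line.strip()))
        current_time = None

# Print slowest first
for qt, query in sorted(slow, reverse=True):
    print(f"{qt:6.3f}s  {query}")
```

Sorting by query time like this gives you an instant "worst offenders" list to hand to whoever owns those queries.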
Listing out all the currently executing queries on a mysql server
The show processlist command is really useful in that it shows you what's currently executing on your mysql server, what state each query is in and so on. Sort of like ps -ef for mysql.
https://gist.github.com/dbhaskaran1/0c5dc0e820fe5f8e482b76c7c74885b0
Getting mysql server status from the command line
After you've logged into your mysql server, issue the ```status``` command to see various bits of information about your mysql server and your current session, including the current user, current database, whether SSL is in use, mysql version, server characterset, uptime of the server, number of slow queries, number of open tables, average number of queries per second etc. Here's a sample output,
mysql> status
--------------
mysql  Ver 14.14 Distrib 5.7.17, for Linux (x86_64) using  EditLine wrapper

Connection id:          59
Current database:       test
Current user:           root@localhost
SSL:                    Not in use
Current pager:          stdout
Using outfile:          ''
Using delimiter:        ;
Server version:         5.7.17 MySQL Community Server (GPL)
Protocol version:       10
Connection:             Localhost via UNIX socket
Server characterset:    latin1
Db     characterset:    utf8
Client characterset:    utf8
Conn.  characterset:    utf8
UNIX socket:            /var/lib/mysql/mysql.sock
Uptime:                 4 hours 39 min 28 sec

Threads: 8  Questions: 7605  Slow queries: 0  Opens: 626  Flush tables: 1  Open tables: 588  Queries per second avg: 0.453
Finding simple things about the DB, like what databases are available and what tables are in the current DB
show databases;
show tables;
The easy way to get deadlock data and find deadlocks in your DB
Install innotop (yum install innotop).
Once you’ve got that installed, you can connect innotop to your DB from the command line like this
```innotop --user root --askpass```
and you will be prompted for the password on the command line. Then you will be placed in the Dashboard view of innotop. Hit ? there and you will see a list of different information screens that you can then navigate to depending on what you’re trying to find out. L for locks, U for user statistics, D for InnoDB deadlocks, M for replication status, K for InnoDB Lock Waits etc. Try it out!
This article has a whole lot more information on using innotop. link
Finding deadlocks with queries running against your DB (The harder way)
The ```show engine innodb status;``` command gives you a whole lot of data about your database. One of the things that I find useful is the data on deadlocks.
Here’s a sample output from this command, (Warning - wall of text coming your way, my apologies!)
https://gist.github.com/dbhaskaran1/38b82fab0b491032931f5e2d4cf48a46
My top linux commands
I've worked on linux/unix machines for most of my career. These are some of the linux commands that have been very useful to me in different situations on various projects, testing related or otherwise.
top
Find load on a box, get a list of different processes running (sortable by a number of fields), find number of processes running/sleeping/stopped/zombie etc.
[screenshot]
netstat
When I need to find out whether anything is listening at a certain port, I use
netstat -aln | grep <port>
For example, if I'm checking on port 8080
netstat -aln | grep 8080
netstat can also give you a lot of packet level statistics. For example, the below command gives you a summary of packet level statistics about your network interfaces. You can see if you’re having packet loss, how many packets came in, how many got sent out etc.
netstat -s | grep packet
   4324774 total packets received
   0 incoming packets discarded
   4324756 incoming packets delivered
   3369 packets received
   44 packets to unknown port received.
   0 packet receive errors
   1136526 packets sent
   4285 packets directly queued to recvmsg prequeue.
   413474 packets directly received from prequeue
   1993168 packets header predicted
   381 packets header predicted and directly queued to user
   90 DSACKs sent for old packets
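If you'd rather answer the "is anything listening on this port?" question from a script than by eyeballing netstat output, python's socket module can do it. This is a small sketch of my own, not a netstat replacement; the demo opens its own listener on an ephemeral port so it has something to check against.

```python
import socket

def port_is_listening(host, port, timeout=1.0):
    """Return True if something accepts TCP connections at host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo: open a listener on an OS-chosen free port, then check it.
server = socket.socket()
server.bind(("127.0.0.1", 0))   # port 0 asks the OS for a free port
server.listen(1)
port = server.getsockname()[1]

print(port_is_listening("127.0.0.1", port))   # → True
server.close()
print(port_is_listening("127.0.0.1", port))   # likely False once closed
```

A check like this is handy in test setup code that needs to wait for a service to come up before hammering it.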
lsof
When I need to find out what process is listening at a certain port, I use 
lsof -i :port 
For example, if I'm checking on port 80
lsof -i :80
COMMAND   PID    USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
nginx   23951    root   28u  IPv4 220407      0t0  TCP *:http (LISTEN)
nginx   24040 duo_www   28u  IPv4 220407      0t0  TCP *:http (LISTEN)
find
When I need to find a file or directory that matches a certain criteria or search string, I use the find command.
find <path you want to search in> <how do you want to search> <search criteria>
For example, if I wanted to find all the files that have the .log extension in my current directory, I would do,
find . -name '*.log'
./test.log
./anothertest.log
./onemoretest.log
df
When I need to know file system utilization, I use the df command. For example, I suspect that one of my file systems is nearing 100% and I need to go clear it. How would I find out what the disk util is?
df -k <whichever directory/file system you want to know the utilization of>
Now, if I wanted to know how /home was doing, I’d do
df -k /home
Filesystem           1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root
                      17938864 5064080  11956872  30% /
That tells me that /home (mounted on /) is 30% used.
ifconfig
When I need to find out what interfaces are defined on a box or what IP addresses are assigned to a machine I am logged in on, I use ifconfig.
ifconfig
eth0      Link encap:Ethernet  HWaddr 08:00:27:9F:0A:01
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe9f:a01/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1225086 errors:0 dropped:0 overruns:0 frame:0
          TX packets:269646 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1394615163 (1.2 GiB)  TX bytes:26194169 (24.9 MiB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:01:63:D1
          inet addr:10.50.0.2  Bcast:10.50.0.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe01:63d1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:49869 errors:0 dropped:0 overruns:0 frame:0
          TX packets:155433 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:7887126 (7.5 MiB)  TX bytes:42488260 (40.5 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:3050889 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3050889 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1496552588 (1.3 GiB)  TX bytes:1496552588 (1.3 GiB)
Based on this output, I know that there are three interfaces on my machine - eth0, eth1 and lo. I also know what IP addresses are assigned to each of them.
tee
I use the tee command when I need the output of stdout written into a file in addition to being displayed on the screen. For example, if you're monitoring the output of a tail on a log file and you also want this content written into a separate log file for later reference, then tee is the tool for you.
tail -f logfile.log | tee test.log
In this case, you’d see the output of running tail -f on the logfile.log and you’ll also have those same contents written into the file name you provided as input to tee (test.log in this case).
Data comparison between csv files
Here's the problem - you have two csv files that came from two different accounting systems. Ideally they would both have the same data, but sometimes they differ. And when they differ, that's a problem the company's accounting department cares greatly about and needs to fix. So once someone figures out that we have a QA team that could possibly handle some of this, this sort of thing becomes a regular task. What do you do? You script it the very first time, and keep adding to the script every time you see a variation of the request. That way you have a great tool, you've sharpened your programming skills and everyone is happy :)
Read on.
I've used pandas, which may be overkill for this kind of project, but then you never know. This is the starting point; who knows how complicated these requests will get one day? So pandas it is.
The algorithm is: read the two files, load them into pandas dataframes, then do an inner join on the key column (say a field named 'reference'), which gives you the rows that are present in both files. Save that file; that's one of the common asks. You may later need to add verification around whether this field matches in both files for rows that exist in both. Once you have the inner join results, it's easy to extend that to whatever else you need. Write these results into a csv file.
Next, do an outer join with a little indicator field which tells you if each individual row is in the left dataframe only, or the right dataframe only or in both. Awesome. Write this result into a csv file as well. For now you could just stop here, provide both these csv files you just generated to the accounting team and ask them to do the rest in excel. Or you can do the rest in python. I’ll stop here though. 
Here’s the actual code,
import sys

import pandas as pd


def usage():
    print('USAGE: python data_compare.py recurlyfilename.csv intuitfilename.csv')

if len(sys.argv) == 3:
    rec_file = sys.argv[1]
    int_file = sys.argv[2]
else:
    print('Expected 2 arguments. Please try again.')
    usage()
    exit(-1)

print(rec_file)
print(int_file)

rec_df = pd.read_csv(rec_file)
int_df = pd.read_csv(int_file,
                     names=['reference', 'Date', 'CardholderName', 'Card', 'CardNo',
                            'Credit/Debit', 'Type', 'BatchID', 'WithholdingStatus',
                            'Status', 'Amount', 'Fee'])

# Rows present in both files
innerjoined_df = pd.merge(int_df, rec_df, on='reference', how='inner')
innerjoined_df.to_csv('found_in_both.csv')

# Every row, with an indicator column showing which file(s) it came from
outerjoined_df = pd.merge(int_df, rec_df, on='reference', how='outer', indicator=True)
outerjoined_df.to_csv('difference.csv')
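To see the shape of the output without real accounting exports, here's the same inner/outer join run on two tiny made-up frames (the column names here are invented for the demo, not the real export columns):

```python
import pandas as pd

# Tiny made-up stand-ins for the two accounting exports
rec_df = pd.DataFrame({"reference": ["A1", "A2", "A3"], "amount_rec": [10, 20, 30]})
int_df = pd.DataFrame({"reference": ["A2", "A3", "A4"], "amount_int": [20, 31, 40]})

# Rows present in both files
both = pd.merge(int_df, rec_df, on="reference", how="inner")
print(both["reference"].tolist())   # → ['A2', 'A3']

# Every row, with a _merge column saying which side each row came from:
# 'both', 'left_only' (int_df only) or 'right_only' (rec_df only)
diff = pd.merge(int_df, rec_df, on="reference", how="outer", indicator=True)
print(diff[["reference", "_merge"]])
```

The `_merge` indicator column is what lets accounting immediately see which system is missing which rows.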
Service virtualization with hoverfly
Hoverfly from SpectoLabs is an open source tool that allows one to simulate an API and to capture, modify and play back responses from an API. It is an invaluable tool that speeds up your testing and helps you simulate those hard to reproduce situations when dealing with a real API. Say you want to simulate an API being down or sending you bad responses, without taking down the real API which may be in use by other teams. What can you do? Or you're writing tests before the actual service has been built, and all you have is a spec that tells you what the response should be for every request. What do you do then?
Service virtualization is the answer. Read on.
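Before getting to Hoverfly itself, the core idea of service virtualization - swap the real dependency for a canned stand-in - can be shown with nothing but plain python. To be clear, this is not how Hoverfly works (Hoverfly sits in front of your code as a proxy, so no code changes are needed); the function names below are invented for illustration.

```python
def real_get_user_agent():
    # Stands in for a call to the real service, e.g.
    # requests.get("http://httpbin.org/user-agent").json()
    raise RuntimeError("pretend the real API is down")

def simulated_get_user_agent():
    # A canned ("recorded") response served instead of the real thing
    return {"user-agent": "python-requests/2.12.4"}

def feature_under_test(get_user_agent):
    # The code under test doesn't know or care which implementation it gets
    return get_user_agent()["user-agent"]

# With the real API down, the simulated one keeps tests running:
print(feature_under_test(simulated_get_user_agent))  # → python-requests/2.12.4
```

Hoverfly gives you the same swap transparently, at the network level, which is what makes it so much more powerful than hand-rolled stubs.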
I’m picking the python bindings that come with Hoverfly to delve into more details. There is also a java API that you may find interesting. The install instructions are available at https://hoverpy.readthedocs.io/en/latest/installation.html.
from hoverpy import HoverPy
import requests

with HoverPy(capture=True) as hoverpy:
    print(requests.get("http://httpbin.org/user-agent").json())

    hoverpy.simulate()
    print(requests.get("http://httpbin.org/user-agent").json())
And the output?
(hovercraft) Deepaks-MacBook-Pro:hoverpy dbhaskaran$ python capt_auth.py {u'user-agent': u'python-requests/2.12.4'} {u'user-agent': u'python-requests/2.12.4'}
The first line of output is from the actual API endpoint and the second line is Hoverpy serving the recorded data. That was easy, wasn't it? Let's extend this some more. Maybe modify the response that is being sent back?
Here’s a script that shows you the output with the mocking and then without the mocking,
https://gist.github.com/dbhaskaran1/0b9348403e53f8f314b65bf134a121e7
The output with mocking, 
(hovercraft) Deepaks-MacBook-Pro:hoverpy dbhaskaran$ python modi_auth.py response successfully modified, current date is 02:19:56 PM response successfully modified, current date is 02:19:56 PM ..
.. response successfully modified, current date is 02:19:58 PM response successfully modified, current date is 02:19:58 PM response successfully modified, current date is 02:19:58 PM
The output without mocking,
(hovercraft) Deepaks-MacBook-Pro:hoverpy dbhaskaran$ python modi_auth.py response successfully modified, current date is 02:21:16 PM something went wrong - deal with it gracefully. response successfully modified, current date is 02:21:16 PM something went wrong - deal with it gracefully. ..
.. something went wrong - deal with it gracefully. response successfully modified, current date is 02:21:20 PM something went wrong - deal with it gracefully. 
What did we put in the mocking script that resulted in this behavior? Look closely at line 28 and lines 33/34. That’s where we add randomness into the output responses - returning 200/201 and empty responses at random.
https://gist.github.com/dbhaskaran1/550b7d9ebe15d20437905175cf7c6aab
Performance testing an API
There are several good tools that can help you performance test an API. I've used jmeter, locust.io and a wee bit of Gatling. locust.io has become my go-to tool for all recent performance testing work.
Why locust.io?
Ease of use
: It's really easy to get started and have a working performance test running with locust
Open source
: The code is available to you! Say you're working on a project that turns out to need more than what locust offers out of the box. What now? If you know a bit of python or are willing to dig in and learn, you can look at the code and make your changes! If it's something that might be universally useful, you just submit a pull request and there's a good chance it will get added into the product itself.
python
: Everything in locust.io is python based. Your tests get written in python and the entire codebase of locust.io is python. So if you already know python, locust is going to be a piece of cake.
lightweight
: locust is extremely lightweight and can be run on your personal machine to generate pretty high loads on the target machine. I've sometimes run into a limitation with jmeter, which is java based, where it ends up taking my personal machine down while trying to generate high enough loads to load the target machine.
Sample Code
I like to read a little intro and then right away look at code. This article is written that way as well, here’s a simple code example.
https://gist.github.com/dbhaskaran1/d4f0c6861e88a574473112fe39f09639
I've defined a class named UserBehavior that, as the name suggests, models the user's behavior. In this case, there are two tasks that the user performs:
hit the index page, represented by the index() function 
hit the login page, represented by the login() function
If you look closely, the variable tasks contains the list of tasks that are to be run as part of the performance test and also the relative frequency (or weightage) at which to run them. The index task is run twice as frequently as the login task. The power of this is immense - you can simulate user behavior where the user searches 10 times and then checks out one item, or logs in, does 10 different tasks and logs out, as part of every test.
The min_wait and max_wait are set to 5000 and 9000, which means each simulated user will wait between 5 and 9 seconds between the requests. 
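The 2:1 weighting boils down to weighted random selection. This isn't locust's actual scheduling code, just a plain-python illustration of the effect the weights have over many simulated requests:

```python
import random

random.seed(7)  # seeded so the demo is repeatable

tasks = {"index": 2, "login": 1}   # task name -> weight, as in the locustfile
names = list(tasks)
weights = [tasks[n] for n in names]

# Simulate 9000 task picks the way a weighted scheduler would make them
picks = random.choices(names, weights=weights, k=9000)
print(picks.count("index"), picks.count("login"))  # roughly 6000 vs 3000
```

Over a long run, index gets picked about twice as often as login, which is exactly the traffic mix the weights describe.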
Launching
Where to start? Right here - just follow the below steps
pip install virtualenv
virtualenv <name your environment>, e.g. virtualenv perftest
source perftest/bin/activate
pip install locustio
Copy the above script into a python file named locustfile.py
start locust with this command `locust -f locustfile.py --host <your apps url>`
profit!
Steps one, two and three create a python virtualenv for you to play around in. Step four gets locust installed.
What do the results look like?
Here's what the results look like for our example,
[screenshots]
You know now that your app supports 20 concurrent users, hitting the app at the rate of 3 new requests per second - without resulting in any failures. You can keep increasing the rate of new users and the total number of concurrent users until you see failures - if you want to find out your system’s limits.
Additional Ideas
Suppose you have a library that someone has built around your REST API. That means it's really simple to interact with the API now, but you're not able to use the requests library directly, so locust does not know if a request failed or passed, how long it took etc. What do you do now?
Luckily, the folks at locust.io thought about everything.
auth_status = testCase1.custom_check_using_wrapper_overREST(auth_res_txid)
total_time = int((time.time() - start_time) * 1000)
if auth_status == 'allow':
    # Success!
    events.request_success.fire(request_type='https', name='enrollpushapprove',
                                response_time=total_time, response_length=0)
else:
    # Failure
    events.request_failure.fire(request_type='https', name='status_not_allow',
                                exception='auth_status_not_allow')
https://gist.github.com/dbhaskaran1/5a83ce3cc70ccdac7d9f9c9d793e63a1
What did we just do? We ran a function named custom_check_using_wrapper_overREST that was a wrapper over a REST API and used the value it returned to tell locust if the request was successful or not. Locust has these powerful event hooks that allow you to write up such tests.