The Tactical Producer's Journal
How to tackle today's strategic challenges using Leadership and Best-Practices.
The Great Overlay Debate: How Greed and Vitriol Transformed An Essential Tool Into The Industry’s Punching Bag
Overlays are, without a doubt, the most controversial tool and technique in professional web accessibility today. An overlay is a simple website addition that provides visitors with free, helpful tools such as increased text size or reading text aloud. Overlays do not and cannot cover every specific disability, as medical ailments vary widely in their effects and in the assistive technology they require.
About 5 years ago, a cottage industry of self-styled “Accessibility Experts”, of varying actual qualification, collectively put overlays in their crosshairs. Rather than offering pro bono website consultation to charities, or engaging in legislative lobbying to make a difference in the community, these individuals began devoting their Sunday afternoons, and the time between Big Tech contracts, to taking down what they viewed as an existential threat to their livelihood- and publicly characterized as an existential threat to public access for the disabled. The problem (well, one of many problems with this scenario) is that hyperbole and unsubstantiated assertions became the currency of communication, and dunking on overlays to the fanfare of a tight-knit circle of self-congratulatory, back-slapping allies crowded out pro bono work, public protest, and legislative lobbying as valuable uses of one's free time. Ironically, overlays never threatened the jobs of these experts nor “damaged” accessibility in the slightest. It's the other way around.
Traditional web design colleges and the increasingly common learn-it-yourself bootstrap method barely teach Accessibility. As a direct result of the educational pipeline delivering entry-level engineers, integral, baked-in Web Accessibility is unfortunately NOT the norm. Nor is it legally required to be in most places. Recently the 11th Circuit federal appeals court not only joined several other Circuits, but took things a step further, ruling that websites aren't a place of public accommodation beholden to the ADA and don't need to be Assistive Technology-friendly (I disagree vehemently, but that is irrelevant). Those are the facts, and this is the reality we live in.
a11y is a specialty engineering position that costs $100,000-$130,000 USD per annum to hire and fill. If you are a small business that can only pay $10K one-time for website setup plus occasional hourly billings for sporadic troubleshooting, $130K isn't an option. Spending one thousand dollars on an overlay license shows you care, shows you are trying your best, and reduces some legal exposure at the same time- it's a sound small business move.
I'm moderately-to-severely disabled and an impassioned activist myself, and even though I wish things were a bit different, I get it.
Overlay use should be applauded as awareness and investment, not derided. But when people believe their honeypot is threatened they tend to act unscrupulously, and they have forever tarnished a valuable tool. That's why I want to address a handful of I.T. urban legends.
NO, overlays do not do "nothing". They increase perceived value among consumers, demonstrate corporate accessibility awareness, provide some level of usable consumer tools, and provide a massive amount of risk mitigation by reducing legal exposure in the event of a jury trial.
NO, overlays do not make your product "worse". How does a couple of accessibility tools actually de-accessorize the rest of your content? Which part of the overlay kidnapped your Product Owner/Program Manager and accessibility engineers? While an overlay alone is never the answer, the assertion that including one directly teleports your development team to the dark side of the moon and/or causes webpage WCAG compliance to regress doesn't hold water now, and it never did. Making this statement is the fastest way for an "accessibility professional" to advertise their inexperience, lack of big-picture view, and absence of critical thinking.
While the product does not fit all requirements, we've ended up in a situation akin to doctors marrying medical malpractice lawyers, and then jointly villainizing scalpels and their manufacturers. Are scalpels appropriate for a cancer patient who needs chemotherapy, not surgery? No, but the scalpel may be a useful tool for a different patient with a surgical need, or even for that same patient should chemotherapy fail. And as a result of their internal biases, that doctor is committing medical malpractice themselves by prescribing chemotherapy to patients who DO need surgery, all while supporting their ambulance-chasing lawyer-spouse in running a racket sending Demand Letters to scalpel users everywhere- even when the scalpels were used correctly.
Overlays are not a be-all, end-all solution for the multidisciplinary gray areas of accessibility metascience. They don't fit all software apps, particularly gadgets reliant on firmware. But they are an amazing tool for specific budgets and scenarios, and the only “controversy” is self-generated by compromised non-experts lobbing increasingly nonsensical and habitually unsubstantiated bombs.
Overlays are a great way for small and large businesses alike to project a sense of accessibility awareness into the user experience. Large businesses with an enterprise-style budget should also hire a permanent staff of accessibility engineers under a program manager to correct the inherent flaws in the educational pipeline that delivers our web designers and application developers. Overlays are a wonderful tool with little drawback other than a bit of uninformed and self-interested sniping.
     Anyone not televangelizing their use is simply harming their own career development, swindling their clients with novice engineering, and doing direct damage to the World Wide Web as a place of ethical, if not legal, public accommodation. THAT is the kind of thing that actually makes accessibility worse. 
Accessibility Penetration Testing is the SHIFT + LAW disruptor the Tech World needs.
If you said Accessibility was the hottest thing to hit the tech field in years, you would be half right. The false assumption that a11y is a tenet of Computer Science is the misconception that has prevented companies from seeing the bigger picture, wasting accessibility resources due to a fundamental misunderstanding of what it is and what it isn't. Accessibility is not a native technical skill wielded by the tech world- it is a legal skill, first and foremost, a derivative of the broader Civil Rights movement of the mid 20th Century, adopted by the tech world in the early 21st Century.
This chief misunderstanding blindfolds good executives from understanding how to manage an Accessibility Program; they often deploy specialists solely to educate and audit artists and programmers (the specifics of which are not the point of this article) and figure that is all there is to be done. Leadership then trusts conventional policy, bureaucrats, and PMP Theory to somehow “protect” finance, legal, and PR. But policy, particularly when enacted by those without deep experience of the nebulous world of the ADA and the WCAG, doesn't substantially protect an organization from risk in this area- it only insulates it from gaining the in-the-trenches experience needed to understand precisely how Accessibility failures enable risk. Additionally, a policy-rich approach to Accessibility wastes budgets AND chokes devs and SMEs in unnecessary red tape of management's creation- the antithesis of Agile development. Instead of coating an Accessibility Program with heavy policy on how to deal with defects and resolutions at every single step of the process, it is essential to stay lean. Instead of thinking like a bureaucrat, leadership would be much better off adopting tactics to start thinking like a lawyer- and then taking that a step further and thinking like your enemies.
Accessibility Penetration Testing is a term I coined today, a process best described as a reindeer-games-esque wargame carried out by an SME and an attorney to “attack” their own company. This new Best Practice is the single most effective method to educate executives and reveal the true risk in the courtroom and in public opinion. It exposes, and trains, how your legal department will respond and how PR handles outrage on a mock “Fwitter”, and the shakedown reveals other weaknesses, such as a lack of staff with live-case experience.
At the end of the day it doesn't matter how many consultations you provided to your developers- the only thing that matters in the end is the opposing force that intends to damage and disrupt your company, which is entirely out of your control. As with terror and crime, the plaintiff will always have the luxury of first strike and gets to choose how and where the attack will come from- which is why security risk management principles can be so much better suited to accessibility than conventional Agile solutions are. Which makes sense, given that Accessibility was never founded in any computer system- it arose from the legal system.
Living in a legal state of mind, not just an Agile one, is key. For example, just to poke a sacred cow, consider that good documentation, a principle of software development, can actually CREATE unnecessary smoking guns susceptible to discovery by the plaintiff. Additionally, think about the manner in which fundamental misunderstandings have trickled all the way down into every single a11y process- even down to terminology and policy controls. An “ADA Defect” is not a technical concept, and should not be used in any test document or Agile process, because it miseducates. An “ADA Defect” is something that can only be determined in a court of law, not a policy meeting. The WCAG isn't even recognized legal precedent in our country- “Defects” are ONLY determined by a judge or jury after a murky process based on the totality of objective, subjective, procedural, and spiritual merit.
In conclusion, while early involvement is here to stay as the SHIFT + LEFT process the accessibility world deserves, simulated experience managing a realistic scenario is the SHIFT + LAW process we actually need. Moving forward, executives would do well to heartily embrace Accessibility Penetration Testing as a new cornerstone of their threat solution- and to stop over-relying on I.T. risk management policies designed for software that don't apply to legal situations.
Raising Accessibility Awareness: The Voices of Ghosts
Time is a nebulous thing like pain. Just ask Twitch Streamer AccessibleGamer. He has plenty of both on his hands.
Gamer is disabled. There's nothing abstract about that. There are no do-overs. No chance to turn left instead of right, no turning back the clock, and no saying NO to that metaphorical sketchy car ride. It's pretty clear that his every single day, from yesterday to next year, is going to kind of suck.
One thing is equally crystal clear for all to see: his joint passions for gaming and for raising Disability Awareness.
It’s 5pm and I’m watching him play games through a straw, dumbfounded.
He’s not only playing Warframe, but winning. Even more importantly he is smiling. I ask about the device that appears to make this miracle possible.
He lightheartedly explains this machine he is blowing into is known as “Sip and Puff” technology. He’s employing a piece of medical equipment, with input reprogrammed to support consumer videogaming. However, they stopped making it 6 years ago- and stopped servicing it 4 years ago. The company told him they can’t service or repair the machine in any way because, as used medical equipment, it’s now considered hazardous waste- and as such, the State of California won’t allow them to service their own devices. So when Gamer’s retooled Sip and Puff video game controller finally does fall apart, he doesn’t even know what he will do.
Like me, Gamer is a C5. That’s an insider term used to describe catastrophic spinal injury. But unlike me, he didn’t merely break his bones- Gamer completely severed his spinal cord too. He is more paralyzed than not. He relies on a wheelchair just to get around, and adaptive ventilatory peripherals not just to game- but for everything.
The realization that our society holds people like Gamer in such contempt that he can’t even play Fallout with his friends begins to depress me.
He requires strong prescription pain medication just to tolerate his existence, yet it doesn’t stop him from delivering critical hits to digital enemies. He has mastered the game as well as the improvised third party tech- and he manages all of this with ease while he spits out uplifting platitudes.
A far finer man than myself, I think. Perhaps there is something here for me to learn as well, not just gawk.
Despite simply laying in bed with an adaptive workstation pulled over me, my own C5 is a little less forgiving today. I can’t get comfortable enough to play a game with him onstream- nor even keep socializing with him in Twitchchat much longer. Even though I myself had experienced a miraculous “recovery” from my own spinal injury, my back was starting to hurt again, relentlessly so, just like the dark days immediately after my own accident. A tear rolls down the corner of my eye in response to the pain, punctuating my black mood like a thunderclap. I log off Twitch, and amble away in search of strong drink.
I wonder, how did we get here?
                                                                             **********
About 5.5 million Americans live with some paralysis, and 20.5 percent of all gamers report varying disabilities. Experts at Business Insider forecast the gaming industry will produce $196 billion in revenue by 2022. The notion that the gaming industry, or any modern business for that matter, would aggressively alienate the disabled demographic is stunning. These consumers game longer and spend more- a sentiment I've heard from many of the highly disabled, of every stripe and flavor. A part of their Social Security checks goes to rent, some to groceries, and the rest to video games- so they have something to do and a way to interact with other people. But the reality is that the handicapped are severely under-catered to, at both the executive level and the programming one.
Companies owe a fiduciary duty to their stockholders. The duty of care requires directors on the board to exercise good business judgment when making decisions on behalf of the corporation. Ignoring this market is just bad business, compounded by the fact that few in the corporate world today will broach terminology-sensitive topics, just as much as they fear initiatives lacking Chargeability.
That reflects the fundamental misunderstanding that making games more accessible costs more money with little reward- when in fact it is generally the opposite. Removing barriers takes little more than an advocate on staff and some training in basic needs, like including a few extra Options Settings.
 When it comes to Corporate Social Responsibility, executives should be aware that the disabled rely on online gaming more than ever to combat Social Isolation, a specific type of Depression. And Pain Management studies from the University of Washington indicate that gaming may help treat chronic pain. So not just every economic indicator, but every social one as well, is aiming for massive expansion of Accessibility support in said $196 billion dollar gaming industry.
 But as time ticks down on Gamer’s paralysis controller-peripheral, there are only crickets from the gaming industry.
 There’s nothing about this situation that makes sense, but I do understand how we got here. And I know what we need to do as a whole to get out of this mess.
                                                                                **********
About a decade ago I began volunteering with an organization called the American Chronic Pain Association to help share my own story with the state legislature, and other bodies, to both lobby and educate. During a serious medical flare-up the quality of my own outreach began to noticeably suffer- and I had to step down. Probably for the best, as the frustration of attempting to influence corrupt state legislators to do the right thing (for free) was beginning to weigh on my shoulders heavily. Coinciding nearly simultaneously with my departure as the lone voice in the state capitol for the poor and disabled, the legislature enacted a series of draconian measures that effectively stripped pain management from State Medicaid coverage, devastating the disabled community, particularly those with painful disabilities like spinal trauma or nerve pain. I recently checked in with the ACPA and was told we haven’t had a local state lobbyist since I quit... but they aren’t really sure, because they don’t have enough staff or funding to confirm that. 
Taking a look around the disabled community, it's fair to say my experience as a failed activist was the norm, not the outlier. It's not hard to see why, when even the most noble representatives of the class, Wounded Veterans, currently have their “guaranteed” benefits on the chopping block; meanwhile American media trumpets disingenuous takes on who lives in inequity, while the severely disabled can barely work or leave the house- and commonly live on what's known internally as Social Security Poverty.
Again, part of the problem is visibility and the media. Disabled Americans comprise only 3.1% of all television characters, leaving us effectively hidden from sight, and more off of societal radar than on it. TV and the media aren't the only ones ignoring the issue- take an industry like software development, for example; in a workplace known for Crunch Time and intensive workloads, it should come as no shock that developers with severe handicaps can rarely hang in there long enough to reach executive management and ensure we have a voice there speaking for the rest of us.
That is exactly why the disabled community suffers greatly for lack of champions. We struggle to find and put forward strong public speakers whose health is consistent enough to first build a public base of support, and then sustain the engagement to push change through. Look at my idol, Christopher Reeve, for example- his brave attempts not only to overcome his own spinal trauma but to evangelize new treatments and charities as well ended less than a decade post-accident, as they do for many paralyzed Americans, with lethal sepsis from bedsores.
In the last few months, I've had the opportunity to have conversations with others in the Software Development and Quality Assurance communities. I actively plugged my push for Accessibility expansion, and was by and large ignored. Realizing that perhaps we don't speak the same language, I switched tongues and insinuated that their corporate PR team could “exploit this feel-good” measure to boost corporate image. No response. I tried again and talked about how dedicating just a few resources to programming support for disabled users of applications and consumer electronics ultimately increases profitability by removing barriers to full market share. Oh, and by the way, did you know you can get a tax break called WOTC for hiring more disabled workers? You realize disabled persons have, and spend, money too?... Nothing but crickets.
 This is what it’s like not to be seen or heard.
 And, like being a ghost, none of it makes sense. Like Patrick Swayze in the beginning of his own seminal take on that supernatural love story, I feel like I’m screaming at Demi Moore just to notice me- and I wonder if I’m losing my mind.
A penpal of mine in a disability-related subreddit described today that, like Gamer, he was gaming with one hand on an adaptive controller and one foot on a regular controller. There are things we can do, easily, for people like that with some additional programming, accessibility design, and advocacy for the continuing evolution of hardware peripherals. And it sounds like unless we step up soon, it's only a matter of time before AccessibleGamer isn't even able to play, live, and inspire others the way that he wants to live his life. We need every single one of him, every Steve Spohn, and every Christopher Reeve we can get. We barely have any leaders- and it feels like the ones we do get leave us too soon.
Right now the consistent elephants in the room of software development are tough topics like Crunch Time, gender disparity, and unequal treatment of persons of color in the workplace. But if those communities also feel that there is room for just one more voice at the table of negotiation, that would be the most welcome breath of fresh air. We don't always like to admit it in the disabled community... but goddamn could we really use a hand right now.
 New industry roles for positions such as Accessibility Content Designer, extra-flexible work hours for handicapped employees, and adoption of accessibility best practices (see IAAP & AbleGamers white papers) are sorely needed too. We need actual Accessibility Programs at major developers, run by and for the disabled- not listening sessions and contemplation. And we need basic implementations as cheap and as reasonable as Pause States or Colorblind mode to be around every time we load up a game. Most of these features are fairly painless to implement after initial training, remove barriers to full market access, and can exploitatively be trotted out by the marketing department as feelgood ESG (environmental, social, governance) social justice investments as well.
So I have to ask the question- just what are we waiting for? Let’s get moving on this, business leaders of today and tomorrow.
After all, it’s not just your ethical responsibility, it’s a fiduciary one.
Not every task is an appropriate candidate for automation- here is why
Automation. Artificial Intelligence. ROI. These buzzwords, and more, are now a daily part of life in software development. As inevitable as the sun rising in the morning, so too is the impending AI renaissance. We are not quite there yet, however. We are still unable to program a chatbot that doesn't devolve into offensive word salad, nor code self-driving vehicles that can recognize deadly edge case situations. But automation has a major place in our industry, as it should, and a rapidly growing network of professionals dedicated to moving the technology forward.
But it's time we all address the boogeyman in the room, an aspect that I consider the single greatest threat to Excellence and Integrity in the entire software industry today. I'll be the first to say out loud what everybody else here knows. Like a surgeon quick to perform a major operation on a live patient when pills would do, it's clear to insiders that an uncomfortably large percentage of automation is being done for personal profit and not because it is best for the patient.
 It’s not “just someone else’s money”, it’s not “just software”, it’s who we are as a human species. It’s about doing the right thing for the right reasons in a global community. So below I’m going to lay out what Unethical Automation is, and what we need to do about it to change the entire industry for the better.
                                      WHAT UNETHICAL AUTOMATION IS
“If you have to do something more than once, automate it.” Look no further than this statement, a disingenuous self-marketing hook that contaminates all of our LinkedIn feeds. While I've seen automation successfully put continental shelves on its back in enterprise development (Xbox/Battle.net) and major financial systems (FISERV), the ROI and Coverage Confidence start to crack apart at the midsize operation- and small IT firm contracting is a cesspool.
We all know what I'm talking about, and that is overselling automation to our clients and project managers as an essential Way of The Future. Paying someone $10,000-$30,000 to go away into a cave for a month or two and come out with an automation framework utterly reliant on that SDET to maintain, only to save half that amount in manual testing hours, is a ruse. This happens all the time. Nobody in the SDET community seems willing to speak up about it, however, for fear they will talk their employers into “demoting” them out of automation and “down” to Tools, or worse yet, replacing them with a manual tester for half the cost. But these are entirely irrational fears when SDET honesty is couched correctly using terms like Best Practices and True Return On Investment Analysis.
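To make that True ROI conversation concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it is a hypothetical placeholder, not a claim about any real project- swap in your own estimates before waving it at a Project Manager.

```python
# Minimal sketch of a "True ROI" sanity check for an automation proposal.
# All figures below are hypothetical placeholders; plug in your own estimates.

def automation_roi(build_cost, yearly_maintenance, manual_hours_saved_per_year,
                   manual_hourly_rate, years=3):
    """Return net savings (positive) or net loss (negative) over the horizon."""
    manual_cost_avoided = manual_hours_saved_per_year * manual_hourly_rate * years
    total_automation_cost = build_cost + yearly_maintenance * years
    return manual_cost_avoided - total_automation_cost

# Example: a $25,000 framework that saves 300 manual hours per year at $35/hour,
# but requires $8,000 per year of SDET time to maintain.
if __name__ == "__main__":
    print(automation_roi(build_cost=25_000, yearly_maintenance=8_000,
                         manual_hours_saved_per_year=300, manual_hourly_rate=35))
    # Prints -17500: on these numbers the framework never pays for itself.
```

If that number keeps coming back negative, the honest recommendation is manual or grey box coverage, not a framework.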
Many noncoders, like Project Managers and some manual QA Managers, think automation is something they simply have to do- and that is not the case. It's a theme continually being drilled into the industry, so even great supervisors can start to believe the hype. The issue is that the hype primarily originates from loud automation consulting firms, and has managed to permeate the entire industry. Because of this, it is now more important than ever that management get an honest assessment of every major task from someone willing to speak the truth and lay out the benefits of manual testing and grey box testing, not just highly expensive automation frameworks. Aside from more honesty, we need more coders equipped with, and happy to do business in, a second, lighter-weight language than, say, C#- something that lets them quickly prepare test scripts for Test Associates to use in simple grey box testing. Things like PowerShell and Python can actually be taught to manual testers on the job, without sending them away to bootcamps and universities for months or years at a time.
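As a flavor of what I mean, here is a minimal sketch of the kind of grey box helper script a Test Associate could run daily without any framework at all. It assumes a hypothetical health-check endpoint on a local test build; the URL and the expected field are placeholders, not anybody's real API.

```python
import json
import sys
from urllib.request import urlopen

# Hypothetical health-check endpoint exposed by the local test build.
BUILD_HEALTH_URL = "http://localhost:8080/health"

def check_build_health(url: str) -> bool:
    """Hit the build's health endpoint and confirm it reports status 'ok'."""
    try:
        with urlopen(url, timeout=10) as resp:
            payload = json.loads(resp.read())
    except (OSError, ValueError) as err:
        print(f"FAIL: could not reach or parse {url}: {err}")
        return False
    status = payload.get("status")
    print(f"Build reports status: {status!r}")
    return status == "ok"

if __name__ == "__main__":
    # Exit code 0 = healthy, 1 = investigate before starting the test pass.
    sys.exit(0 if check_build_health(BUILD_HEALTH_URL) else 1)
```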
The ultra-reliance on automation frameworks, as opposed to a mixed diet of manual testing, edge case testing, and a variety of QA personnel with different backgrounds, is damaging product quality across the board and moving us backwards. While I see Automation practiced quite nicely in banking and databases, where it crunches numbers with ease and confidence, that is NOT the case in the interactive entertainment and public safety sectors.
Unethical Automation, or honest misunderstanding, prevalent among many noncoding QA Managers, is seriously harming the AAA game community as well as website QA, where automation cannot account for complex user interfaces and cultural sensitivity concerns. Complex audiovisual bugs require human eyes to suss out, and an experienced mind to differentiate test environment artifacts from live bugs- and they always will. As a result of these trends, and the fact that nobody wants to speak up about this, we are contributing to a stunting of product quality and polish across the entire software industry.
Further making this worse is how we do things in QA today. Unfortunately a lot of management has been hoodwinked by what is really a sales-driven gestalt consciousness that we need to automate everything we can. For one thing, it leads to a false sense of confidence that we are getting great coverage when we’re not. And with management all in, we certainly aren’t going to get any pushback from the bottom of the totem pole.
I've never worked at a single agency where testers greyboxing with STE-issued materials are even remotely empowered to speak up about Test Engineers giving them wonky, ineffective scripts and automation that seems to do nothing but break itself. No manual tester is ever going to honestly say what kinds of test coverage they think we might be missing. This industry has been making false claims of empowered QA, and claims of professionalizing it, for about a decade now. Yet we still live in the Dark Ages and it costs us all.
As I alluded to above regarding autopiloted vehicles, automation testing and the associated AI development & machine learning are starting to take on vital Life Safety issues too. Automation is growing in the Commercial Airliner industry at 43% annually, faster than almost any other sector. We can no longer afford to throw a bunch of disingenuously produced automation test coverage at warning systems that protect human life and safety- and call it a day. It's wrong. It's time now for this entire industry to take one step back for an honest bit of self-reflection and a fearless moral self-inventory.
The very first step mankind must take, and NOW, before we start to lose lives over self-serving nonsense, is to push past this deadlock and start talking about it. It has to be done. We're no longer wasting “other people's” Monopoly Money at a Big 4 company; we're developing 911, Air Traffic Control, and vital infrastructure systems that lives depend on.
Secondly, I urge the creation of an “Ethical Automation Society” type advisory board. For starters, we can do this overnight by setting up a group to join on LinkedIn. It would welcome anybody interested in AI, Machine Learning, Automation Testing, and Software Quality Assurance. Here we could create our own community to rededicate ourselves to solid automation candidate analysis, ethical practices in software development, and educating ourselves with case files of Unethical Automation gone wrong. And this group on our LinkedIn profiles can serve as a badge of honor and an oath to do best by our clients on every single project we do from here on out.
 I kind of like the following motto as the tagline:
 “We’re the best at what we do not because we always automate everything- but because we always do the right thing.”
I.T. Defense Becomes the #1 Business Priority After Terrifying Spy Malware Hits The Black Market
According to a report published by USNews.com, internet hackers caused a $445 billion loss in 2016 for companies around the world- a number projected by Forbes Magazine to reach a staggering $2 trillion by 2019. Whilst it's difficult to ascertain exactly how much of that can be attributed to the tight-lipped interactive entertainment market, it's fair to say this trend is becoming an increasing worry for the gaming world. Steam publisher Valve, Nintendo, Blizzard, and EA all reported incidents in 2016 in excess of $10 million apiece. However, a widely under-reported incident in March of 2017 has industry insiders buzzing about a potential calamity about to affect not just the games market, but one with the potential to rock every major company in the world. On March 7th, 2017, WikiLeaks reported in its “Vault 7” disclosures that the American N.S.A. and C.I.A. had lost control of the United States' stockpile of cyberweapons, which is now available as copy-and-paste source code to every semi-skilled hacker on the planet. That is downright terrifying.
Despite operating under the highest levels of secrecy, it is universally accepted within the IT industry that both the NSA and the CIA have in their employ the best hackers in the world, supported by mind-numbing budgets, with zero ethical constraints. The result of this disastrous federal security breach is that the most dangerous cyberweapons ever created are currently spreading across the black market, giving cybercriminals access to tools that are lightyears ahead of what they could have concocted on their own. We're talking about not just the ability to take over every phone and computer on the planet, but computer malware engineered to cripple national infrastructure and cause loss of human life. Take a moment to let that sink in. Only time will tell just how widespread the fallout will be. The good news is that software developers from all walks of life can take action now to protect both the financial resources and the consumer information they are responsible for safeguarding.
Many developers, particularly in the gaming industry, have long taken a reactive approach to IT defense and the protection of assets. That includes establishing IT Defense sub-departments once a company reaches a certain size, erecting firewalls, and monitoring for intrusions. But the Vault 7 disclosure that American spy agencies had themselves hoarded a “treasure trove” of undisclosed zero-day consumer product vulnerabilities proves that most major developers have utterly failed to release secure products onto the market, and that a purely reactive approach to IT Defense is a thing of the past. Current affairs now simply demand that any company not wanting to get burned invest more resources and forethought into a proactive approach to asset protection.
This will require not just the aforementioned “time and money”, but a deep-seated institutional and philosophical shift in the way that security vulnerabilities are prioritized during software development. This means additional penetration testing during R&D, in addition to beefing up the IT Defense units that reactively safeguard assets post-release. I am a serious proponent of establishing a new industrial “best practice” mandating that all major projects include (at minimum) a white hat QA strike team conducting destructive penetration testing prior to any high-profile product release. That sounds like common sense, but of all the major projects I've worked on in the corporate world with development budgets in excess of $10 million, I can recall only one company that had the foresight to do this. Even then it was merely a single week-long sprint that we engaged in as a team of junior QA Analysts and full Test Engineers. Despite that proactive approach, I was chagrined to read an international news story that my project had still been hit by hackers who penetrated the inadequate defenses established by our colleagues on the network service side of development.
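To give a feel for what a white hat QA strike team's destructive pass can look like at its most basic, here is a minimal fuzzing sketch. The parse_player_profile function is a hypothetical stand-in for whatever parser or loader your product actually exposes to untrusted input; the point is the mutate-and-observe loop, not the target.

```python
import json
import random
import string

def parse_player_profile(raw: bytes) -> dict:
    """Hypothetical target: a parser your product exposes to untrusted input."""
    return json.loads(raw.decode("utf-8"))

def random_mutation(seed: bytes) -> bytes:
    """Flip, insert, or truncate bytes to simulate hostile or malformed input."""
    data = bytearray(seed)
    for _ in range(random.randint(1, 8)):
        op = random.choice(("flip", "insert", "truncate"))
        if op == "flip" and data:
            data[random.randrange(len(data))] ^= 0xFF
        elif op == "insert":
            data.insert(random.randrange(len(data) + 1),
                        ord(random.choice(string.printable)))
        elif op == "truncate" and len(data) > 1:
            del data[random.randrange(1, len(data)):]
    return bytes(data)

if __name__ == "__main__":
    seed = b'{"name": "tester", "level": 12}'
    unexpected = 0
    for _ in range(10_000):
        mutated = random_mutation(seed)
        try:
            parse_player_profile(mutated)
        except (ValueError, UnicodeDecodeError):
            pass  # clean rejection of bad input is the desired behavior
        except Exception as err:  # anything else is a defect worth a bug report
            unexpected += 1
            print(f"Unexpected failure: {err!r} on input {mutated!r}")
    print(f"Fuzz run complete: {unexpected} unexpected failures in 10,000 cases")
```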
I understand that nobody, nowhere, wants to pay more for car insurance- but you'd be a fool not to if a zoo full of dinosaurs had escaped from Jurassic Park and were rampaging across the streets and highways of the world. That is analogous to what has occurred with the NSA/CIA Vault 7 scandal. But the bright side is that each and every software developer has it within their own power to change the way they do things and simply put more effort into releasing a safe product. Otherwise they risk becoming dinosaurs themselves.
Guiding Premium Test Strategy Definition In Concert With Beta Access
Beta access (both closed & open) is now a mainstay of many AA and AAA budget interactive entertainment games & apps. It generally provides value to the developer, especially in the areas of testing server stress loads, generating buzz, gathering feedback to fine-tune playbalance, and it can crowdsource bughunting. But like any industrial process, there is always room for optimization, and an open minded yet methodical approach is crucial to generating the most bang for your buck.
Typically, many developers today are establishing an SDLC milestone that is several months down the pipeline- and then initiating the beta milestone once the game has reached a certain degree of semi-polish and server stability. This approach is netting developers results, but without conscientious planning many studios are failing to fully capitalize on the underutilized resource that is pre-release public feedback.
At many studios, beta access kickoff goes a little something like this on a Monday morning sprint: a Producer, Game Director, Lead Network Engineer, and a Test Manager sit around the conference table and decide whether or not they are ready to let a few thousand players give the game a go. The Reliability Engineer shares how many players they can let in, based on internal stress testing, and the QA boss scribbles down a few notes (i.e. Note to self: tell QA Lead to have someone scan the beta bug report forums). Then they are off to the races. But there are serious efficiencies to be gained by revising this traditional approach. Primarily, too much trust is placed in this standardized approach, which almost certainly leaves potentially catastrophic security loopholes open- and also fails to stay true to QA best practices.
This is exactly why the IT Security Lead needs a bigger role and voice in deciding when to kick off beta access. Many developers can and do deploy sophisticated firewall technology at all stages of the SDLC (Alpha, Beta, and Release) but they are simply deploying white & grey hat hacking strike teams too far down the SDLC to test as pre-emptively, and as iteratively, as possible. Unfortunately, today many highly sophisticated actors (i.e. Nation-States) are deploying extremely skilled teams of network engineers with explicit orders to steal commercial intellectual property. Got an anti-fraud/IT defense strike team working on your I.P. right before it goes live? Fantastic.
But disturbingly, I think many studios would be aghast if they had any idea that said sophisticated actors probably already have their alpha or beta builds. If this can (and did) happen to the Pentagon… then this has probably already happened to you. Test security early, and test often, especially on projects with multimillion-dollar budgets. Otherwise you could be looking at an unmitigated disaster when a clone of your proprietary engine hits the black market or The Pirate Bay before you even have time to pick up the phone and call PR for damage control.
As important as earlier security testing is to protecting your investment, it really is only one part of what should be a more holistic, halo test approach to deploying beta access for maximum gains. There really is a little more to this game than just testing server stress and scooping up some amateurish bug reports off the forums. QA Managers need to start asking themselves questions like: “What can we do in Beta to test upstream and identify future trending issues before we scale up? What kind of methods can we invent to help our beta testers identify the kinds of issues that our permanent QA guys tend to miss? How early can we start automating testing, and is this beta scale truly organic enough to deliver valid results, or will it simply waste time with false leads? Do we have an internally programmed tool or certification process in place to ensure that bug report forums are being methodically scrutinized, as opposed to one tester eyeballing the forums during lulls between their regular duties?”
It’s conscientiousness like this that is sadly lacking in many comprehensive test strategies. First of all, the decision to kickoff beta needs a little more input from a variety of specialists and test engineers, and more needs to be done to eliminate wasteful practices.
One solution is to more thoughtfully prioritize initial beta access. When it was recently publicly announced that a major, anonymous, AAA game currently under development is expanding the next wave of beta invites to top-ranked ladder players only, I couldn't help but wonder why. I personally know a few top ladder players who've had early beta access to games, and they aren't necessarily the most generous contributors to feedback & bug forums. Generally they just submit unactionable rants about how "broken" certain units are, because the demographic of ladder champions is very, very competitive. They are so ultracompetitive that you'd get more well-reasoned & logical suites of suggestions, feedback, & bug reports from casual & mid-ranked players instead. That's not to say that professional gamers aren't occasionally going to submit helpful playbalance suggestions (especially for games with a strong E-Sport community following)… but frankly, the top prioritization of beta wave invites should primarily go to players who consistently contribute actionable bug reports to the bug report forums!... along with popular streamers of course, who can spin coarse thread into gold and provide you an opportunity to guerilla market your way to the top of the sales chart.
Crowdsourcing is a double-edged sword. Initially inviting in only ladder champs is simply going to flood the board early with rants and feedback applicable to benefit the minority of purely hardcore gamers- not the casual masses whose consumer purchases will actually make up the bulk of your product’s revenue. Inviting these players is tantamount to negative marketing efforts. The strategy of prioritizing early beta access for frequent bug report forum contributors is much more likely to generate voluntary bugs from folks with relevant "industry experience", so to speak. Good, actionable, bug reports for free are pure gold.
This is why I opine that it should absolutely be a new, industrial best practice for large orgs to create and maintain a database dedicated to tracking who is, and who is not, contributing bug reports to the Bug Forums. These consumers should not only be embraced, but rewarded.
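A minimal version of that tracking database doesn't need to be anything fancier than a small sqlite3 store behind whatever forum-scraping or triage tooling you already run. The table layout and handles below are hypothetical placeholders, just to show the shape of the idea: log who filed what, flag which reports QA triaged as actionable, and rank contributors when it's time to cut the next invite wave.

```python
import sqlite3

def init_db(path: str = "beta_contributors.db") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS bug_reports (
            reporter   TEXT NOT NULL,     -- beta account / forum handle
            report_id  TEXT NOT NULL,     -- forum thread or ticket id
            actionable INTEGER NOT NULL   -- 1 if QA triaged it as actionable
        )""")
    return conn

def record_report(conn, reporter: str, report_id: str, actionable: bool) -> None:
    conn.execute("INSERT INTO bug_reports VALUES (?, ?, ?)",
                 (reporter, report_id, int(actionable)))
    conn.commit()

def top_contributors(conn, limit: int = 20):
    """Rank reporters by actionable reports, for the next beta invite wave."""
    return conn.execute("""
        SELECT reporter, SUM(actionable) AS actionable_reports
        FROM bug_reports GROUP BY reporter
        ORDER BY actionable_reports DESC LIMIT ?""", (limit,)).fetchall()

if __name__ == "__main__":
    db = init_db(":memory:")
    record_report(db, "caladbolg77", "forum-1042", actionable=True)
    record_report(db, "caladbolg77", "forum-1058", actionable=True)
    record_report(db, "ladder_champ", "forum-1100", actionable=False)
    print(top_contributors(db))
```

From there it is a small step to pipe the top of that ranking straight into the next beta key batch, or into the achievement idea discussed below.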
Coordinating and moderating volunteer playtest sessions have been, and always will be, a staple of guiding alpha-level and beta-level build feedback, and possibly even finding more bugs. That is especially true within the nascent Accessibility Testing field, where PWDs (Persons With Disabilities) can be extremely difficult to recruit, but can provide major insight and advancement into your accessibility efforts. Pre-release feedback sessions can open up their own can of worms, however. Moderating playtesters closely to prevent unwanted cell phone screenshots is one important security precaution to consider- obviously it's best for the marketing team and PMs to be the ones responsible for releasing screenshots, not overly excited (or even malicious) volunteers with no financial stake in your product whatsoever. Additionally, NDAs can, and always should, be utilized in playtest sessions, unless you want someone hitting Reddit with an "I plaid this game and it's so so crashy" type post before your PR team even has time to optimally market your product in the best light.
Finally, many popular games these days integrate increasingly popular achievement systems. I have yet to play a game wherein an achievement (or perhaps bonus DLC, special skins, etc.) is granted to players who have actively submitted original and actionable bug reports. Why not? Investing valuable developer time into creating a “cheev” for submitting 10 original bugs is only going to deliver fruitful results and save time & money for the QA department as a whole- but it is also a stealthy guerilla marketing tactic to subconsciously generate the notion that your organization is devoted to polish and quality. Heck, this could even catch that one devastating little exploit that you already know some annoying little kid just can't wait to put up on YouTube and embarrass your department with…
In conclusion, beta access shouldn’t just be focused on testing server stress and sharpening playbalance. When thoughtfully integrated with a comprehensive test strategy, it can decrease potentially expensive security liabilities, ease the workload and manhours spent on QA, use streamers to generate free publicity, and increase corporate revenue via increasing perceived value. And ultimately- $$ is the bottom line. $$.
A Primer On The Optimal Deployment Of Test Engineers For Interactive Entertainment Development Managers
This article will focus primarily on the best methods to strategically deploy Software Test Engineers (and their subordinates), on low, mid, and large budget interactive entertainment apps & games for both console & PC platforms (including browser and non-browser based games). With the emergence of increasingly popular low budget F2P games I thought it very timely to share some Selenium automation testing tips and primers for new DevManagers. Additionally, due to the vast budgets AAA games & new console development can cost these days (up to $250 million for a high profile game, and billions of dollars to produce a new console…) inefficiencies & inexperience in your QA Engineering department can unnecessarily cost you big money over time by needlessly performing manual test tasks, bungling your tools, and letting reputation-damaging bugs slip “into the wild”. So I’m going to explain “STE Engineering Best Practices” here to Producers and Development Managers- but this primer is particularly aimed towards those who achieved their management positions through SDE or business management experience, but have never worked in an SDET or gametester role before.
FIRST THINGS FIRST- PLAN OUT YOUR TEST MILESTONES, AND CHOOSE A CUSTOM TESTING STRATEGY THAT BEST FITS YOUR NEEDS
Sr. SDETs/STE-3s/Sr. Tools Programmers should ideally start planning out the internal tools architecture, in addition to identifying & designing requirements for future automation needs, under the direction of the Lead SDET. Once the groundwork is drafted out by the Senior STEs, development of the more advanced custom internal tools can begin in earnest. However, priority #1 should always be ensuring that a working bug report tool (i.e. JIRA and other defect-tracking analogs) plus a fresh database is ready to go for the entire test team on day one of the test cycle initiation milestone of the Software Development Life Cycle- although in some cases all that may be needed is a simple update and minor customization of pre-existing toolstacks, if they have been successful in past projects produced by that organization and will be compatible with the new project.
Another critical strategic decision to be made early on by the Lead SDET or QA Manager is how the lower-skilled Test Associates will go about their own testing duties. Generally, the question is: should we adopt a strategy of written test cases, directed ad hoc testing, free testing, or other alternatives? Based on my experience, I find the most effective strategy to be a mix of 40% written test cases, 50% directed ad hoc testing, and 10% "free time" edge case testing (edge cases being unexpected, unusual, or malicious consumer behavior).
Written test cases provide the assurance of certification for specific critical areas. Always remember to plan out at least some written test cases that cover broad "end to end" scenarios, though, not just miniature test cases that cover tiny specific actions. Generally, STE-1s and Assistant Test Lead Associates are good candidates to write up the test cases- this is a very effective way to utilize their upper-intermediate level of skills and experience. However, the slightly larger dose of directed ad hoc manual testing keeps the basic testers more alert & actively involved, whilst allowing flexible examination of whatever the problem area du jour happens to be. Teams that mandate 100% written test case execution tend to turn potentially talented Test Associates into mindless drones, more prone to zoning out and checking off boxes whilst daydreaming than catching any other random bugs they may accidentally come across during execution of the test case checklist.
Unfortunately, many organizations DO NOT allocate "free" manual testing time for creative edge case testing. However, I have found that just a little free time spent on edge case testing produces a great return on investment, and identifies & prevents potentially gamebreaking exploits that may only be discovered by pure tester creativity or even a little 15-minute brainstorm jam session at the end of the shift. It's better to let your testers bust these unexpected exploits before release, rather than let some punk kid discover it down the road and post it on YouTube for the gamer or consumer community to learn about & copycat, thereby forcing an emergency patch post-release. This is not only potentially bad PR, but could cause a serious production disruption risk by pulling resources away from your next project- just to deal with an emergency issue from last year's game!
GET STARTED
STE-2s at the intermediate level are best utilized starting work on automated test harness development early on as well, since it will take a few weeks to develop & customize the code, optimize it, and fully debug the harness setup. That way, early into the test cycle you've got a working partial-spectrum test harness ready to go and producing reliable automated OS (operating system) testing results. OS automated test results are among the most critical variety of test automata, because this data is MOST likely to detect and help debug Priority 1 crashes and integral architectural flaws. Unlike the bug reporting tools and database, which need to be fully functional on day 1 so the entire test team has at least a minimal bug reporting & tracking ability, you don't always need broad-spectrum automated test harnesses immediately- the game/app generally isn't even going to be completely functionally testable for a few months. So make sure to expand the automated test spectrum in a prioritized manner and adjust the harness accordingly as functionality and features grow in scope over the course of the SDLC.
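As a rough illustration of what "expand the spectrum in a prioritized manner" can look like in harness code, here is a minimal sketch of a prioritized check registry. The individual checks and milestone numbers are hypothetical stubs; the point is that Priority 1 OS/crash checks run from day one, and lower-priority suites are switched on as the build matures.

```python
# Minimal sketch of a prioritized test-harness registry. Check names are
# hypothetical; real checks would launch the build, parse logs, inspect
# crash dumps, etc.
from typing import Callable, NamedTuple

class Check(NamedTuple):
    name: str
    priority: int          # 1 = crash/OS level, higher numbers = later suites
    run: Callable[[], bool]

CHECKS = [
    Check("process_launches_and_exits_cleanly", 1, lambda: True),
    Check("no_crash_dump_written_during_bootflow", 1, lambda: True),
    Check("frontend_menu_loads", 2, lambda: True),
    Check("matchmaking_handshake_succeeds", 3, lambda: True),
]

def run_harness(max_priority: int) -> None:
    """Run every check at or below the current milestone's priority ceiling."""
    for check in sorted(CHECKS, key=lambda c: c.priority):
        if check.priority > max_priority:
            print(f"SKIP  {check.name} (enabled at a later milestone)")
            continue
        result = "PASS" if check.run() else "FAIL"
        print(f"{result}  {check.name}")

if __name__ == "__main__":
    run_harness(max_priority=1)   # early in the cycle: OS/crash checks only
```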
Jr. STEs not assigned to writing test cases should start whipping up elementary test scripts immediately, and then begin training the Senior Testers/AKA Tester-3s on how to deploy these minor preconfigured scripts early on. The earlier you develop consistent & habitual deployment of basic timesaving scripts like this, the quicker you will begin seeing returns on time & money saved by efficiently automating mundane manual tests & tasks. After the senior testers get the hang of deploying the miniscripts themselves, they should step up and show some leadership ability by instructing the intermediate testers/AKA Tester-2s on how to deploy these scripts as well.
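For the avoidance of doubt, "elementary test script" really can mean something this small. The sketch below, with hypothetical paths, just bundles the latest build logs and basic machine info into a timestamped zip so a Tester-2 or Tester-3 can attach evidence to a bug report in one step.

```python
# Minimal sketch of a tester-facing "mini-script": collect recent logs and basic
# system info into one zip for attachment to a bug report. Paths are hypothetical.
import datetime
import pathlib
import platform
import zipfile

LOG_DIR = pathlib.Path("C:/builds/current/logs")     # hypothetical build log folder
OUTPUT_DIR = pathlib.Path("C:/bugreports")           # where testers keep attachments

def collect_evidence() -> pathlib.Path:
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    archive = OUTPUT_DIR / f"evidence_{stamp}.zip"
    with zipfile.ZipFile(archive, "w") as zf:
        # Basic machine info so triage knows the exact test environment.
        zf.writestr("system_info.txt",
                    f"{platform.platform()}\n{platform.machine()}\n")
        # Grab every log in the build's log folder (if present on this machine).
        if LOG_DIR.exists():
            for log in LOG_DIR.glob("*.log"):
                zf.write(log, arcname=log.name)
    return archive

if __name__ == "__main__":
    print(f"Evidence bundle written to: {collect_evidence()}")
```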
Generally your greenhorn Test Associate-1s are going to be basically helpless new fish for the first few months on the job and are better left performing the easiest, most basic manual test tasks- at least until they develop further technical proficiency by gaining a little more experience. Whilst expecting excellence and high standards from ALL employees is a nice ideal… in my experience across multiple projects, it's simply a wasteful process to assign technical testing to raw recruits who are more likely to flub a task they don't fully understand. This ultimately just ends up wasting the time of the Lead Test Associate, who then has to correct their mistakes- when the TA Lead's time is most effectively spent directing and administrating the test associate squad as a whole and ensuring that all TAs understand the needs of the project and the SOPs of writing good bug reports. To analogize this point, the Lead Test Associate is akin to a football team's Assistant Defensive Coordinator- they need to get the entire defense on the same page, while backing up the Head Defensive Coordinator (the Lead SDET). The TA Lead cannot spend 100% of their time purely devoted to developing the third-string benchwarmers and practice squad- at the expense of their role as second-in-command responsible for supervising the entire defense.
This strategy is not meant to disregard the importance of developing the greenhorns, but in my experience brand spankin’ new Tester-1’s and Test Interns truly need several weeks on the job simply absorbing their new technical environment and the esoteric lingo that comes along with it, before they are prepared to take on any technical testing. Frankly, it can be a little bit overwhelming at first, especially if you’ve got no prior experience in CS or QA… Let them learn to walk, before allocating senior resources training them to run. Assistant TA Leads are particularly well suited for helping develop TA-1’s into TA-2’s.
Finally, one of the most critical lessons that PMs and DevManagers need to learn is that software test engineers and software development engineers of ALL levels tend to share one very, very bad habit that simply needs to be eliminated from this industry as a whole. This sad creature is known as the “DevBug”. It's basically a bug report that is only 10% complete, doesn't even meet the standards of a bug report written by an unpaid intern playtester, and is completely incomprehensible to anybody else stuck trying to interpret this abominable thing. It is generally a bug report titled something like this: “Bug: The Thingie is not wurking.” Repro steps? #1 Start the game. #2 Find the Thingie. #3 It's broken. Naturally, there's generally not even a screenshot or crash call stack attached either… While the gentleman who wrote the bug may know what he is referring to in his own mind, the Producer doesn't understand how to appropriately triage/prioritize this report, any tester trying to repro it finds himself stuck with an unactionable task, and it's simply unprofessional software development. Managers- please do the industry a favor and stop allowing this kind of bad behavior. Crack down on this from day zero of the entire Software Development Life Cycle.
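One cheap way to crack down is to make the minimum bar machine-checkable. The sketch below, with field names I've made up for illustration, lints an incoming report and refuses to accept it until the basics (numbered repro steps, expected vs. actual, a build ID, an attachment) are present. Most trackers can enforce something equivalent with required fields or a webhook.

```python
# Minimal sketch of a bug-report "lint" that rejects DevBug-quality submissions.
# Field names are hypothetical; map them onto whatever your tracker calls them.
REQUIRED_FIELDS = ("title", "build_id", "repro_steps", "expected", "actual")

def lint_bug_report(report: dict) -> list:
    """Return a list of problems; an empty list means the report is acceptable."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS if not report.get(f)]
    steps = report.get("repro_steps") or []
    if len(steps) < 3:
        problems.append("needs at least 3 numbered repro steps")
    if not report.get("attachments"):
        problems.append("needs a screenshot, video, or crash call stack attached")
    if len(report.get("title", "")) < 15:
        problems.append("title too vague; describe symptom, location, and build")
    return problems

if __name__ == "__main__":
    devbug = {"title": "Thingie broken", "repro_steps": ["Start game", "It's broken"]}
    for problem in lint_bug_report(devbug):
        print("REJECTED:", problem)
```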
APPROACHING THE MIDWAY POINT OF THE TEST CYCLE
(For web browser-based games): Selenium automation test tools are useful for web browser-based testing and come in two formats. Selenium IDE is easy-beans, Jr. STE level stuff suited to startup-level organizations working on F2P and low budget browser-based games. AAA-quality browser-based game projects require STE level 2 skill to best operate the more advanced Selenium WebDriver, which supports multiple programming languages. However, from time to time, problems with the automation will pop up, and it is a lot easier to "look under the hood" and debug your WebDriver automation if you've chosen to script it in a more traditional scripting language such as JavaScript or Python.
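For anyone who has only ever seen Selenium IDE recordings, this is roughly what a small WebDriver check looks like in Python (Selenium 4 style). The URL and element ID are hypothetical placeholders for whatever your browser game's front end actually exposes.

```python
# Minimal sketch of a Selenium WebDriver smoke check for a browser game's
# login page. URL and element ID are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

GAME_URL = "https://example.com/play"        # hypothetical front-end URL

def login_page_smoke() -> bool:
    driver = webdriver.Chrome()              # assumes chromedriver is available
    try:
        driver.get(GAME_URL)
        # Wait for the (hypothetical) login button to become clickable.
        WebDriverWait(driver, 15).until(
            EC.element_to_be_clickable((By.ID, "login-button")))
        title_ok = "Play" in driver.title
        print(f"Page title: {driver.title!r} -> {'PASS' if title_ok else 'FAIL'}")
        return title_ok
    finally:
        driver.quit()

if __name__ == "__main__":
    login_page_smoke()
```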
(For console games or PC games requiring network matchmaking): Remember to wait to develop and deploy “Walkers” (environmental gameplay automators) and “Load Stress Tests” until midway through the project. These aren't going to do much good while you are still in the super buggy Alpha stage we're discussing right now. Otherwise your environmental gameplay automation will constantly be “falling through the world” and spamming out unnecessary Walker bug reports… rather than mining beneficial analytics and actionable bug reports. Additionally, premature network log-in stress testing & automated matchmaking testing will not be super helpful here either. At this point in the SDLC, the game and server haven't been developed enough to deliver reliable performance yet- once again, premature deployment of automated matchmaking testing is simply more likely to spam your Network Engineers with unactionable synchronicity bug reports. Your Gameplay and UI engineers probably haven't even finished developing and debugging the multiplayer portals and matchmaking tooltabs yet. You've got nothing organic, nor reliable, to test here yet.
That's not to say that a little premature stress testing doesn't have its uses- if you have a talented, progressive team of server engineers, then at least they can start researching how to "test upstream" by identifying and predicting the future trending problem areas they are likely to encounter at the next testing milestone- sometimes asking an SDE/Gameplay Programmer to help them create a miniature MP level on a private build can be quite helpful in testing upstream. But do keep in mind just how inorganic this may be- you will NOT be testing in a true aurora environment yet.
HALFWAY INTO THE TEST CYCLE- IT’S TIME TO BUMP IT UP A NOTCH
Once you get to Beta on large budget games, the overnight log-in stress tests and Walkers should be fully developed and debugged. They should be preconfigured, and include basic concise instructions (or even a simple GUI), so that every Test Associate from level 1 through 3 can deploy the Walkers and log-in stress testers every evening at End of Shift via your preferred method, so that these two forms of automation are farming while you sleep and producing results to be evaluated & triaged first thing the next morning. These performance tools should be expected to be (mostly) functional by now, so assign an STE-1 to start running Tools BVTs just for the first week- Tools BVT is not rocket science, but the first few times you smoketest these tools, it's better to have a Jr. STE run this BVT rather than a Senior Test Associate- just for now.
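A preconfigured End-of-Shift launcher really can be as simple as the sketch below: a tester runs one script, it kicks off the overnight jobs, and the logs land in a dated folder for morning triage. The tool paths and arguments are hypothetical stand-ins for your own Walker and stress-test executables.

```python
# Minimal sketch of an End-of-Shift launcher for overnight automation.
# Tool paths and arguments are hypothetical placeholders.
import datetime
import pathlib
import subprocess

RESULTS_ROOT = pathlib.Path("overnight_results")

OVERNIGHT_JOBS = {
    "walker":       ["tools/walker.exe", "--map", "all", "--hours", "8"],
    "login_stress": ["tools/login_stress.exe", "--sessions", "5000"],
}

def launch_overnight_jobs() -> None:
    run_dir = RESULTS_ROOT / datetime.date.today().isoformat()
    run_dir.mkdir(parents=True, exist_ok=True)
    for name, cmd in OVERNIGHT_JOBS.items():
        log_path = run_dir / f"{name}.log"
        with open(log_path, "w") as log:
            # Fire and forget: the job keeps running after the tester goes home.
            subprocess.Popen(cmd, stdout=log, stderr=subprocess.STDOUT)
        print(f"Started {name}; results will be in {log_path}")

if __name__ == "__main__":
    launch_overnight_jobs()
```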
The reasoning for having a Jr. STE run the first couple of daily Tools BVTs is that these early runs are the most likely to reveal bugs in the functional toolstack your team needs and uses. “Tools” bug reports are a bit trickier to write up than your standard video game bug or web browser spelling-error report, so having an STE run the first couple of tool smokes helps establish how the Test Engineers prefer “their” bugs to be formatted by any Test Associates who later stumble across a bug in JIRA, Team Foundation Server, the database, etc.
The logic behind waiting to publish a Tools BVT until this point in a AAA-budget test cycle is this: you don’t need to waste payroll manhours or spam people’s inboxes with a department-wide report early on, when all you are rocking is a bug reporting tool and a database. The tools programmers sit next to each other anyway and will already know what’s working each day via scuttlebutt. However, once you get to the point where you’re incorporating JIRA, TFS, MTM, integrally designed console-PC interface command prompts, proprietary video capture, beta-user automated crash analytics, Walkers, overnight server stress testing, and scripts of all spices & flavors… at that point EVERYBODY needs to know what is working, every single day.
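For illustration, a daily Tools BVT can start out as simple as the sketch below- the tool list and status URLs are assumed placeholders, and a real toolstack check would obviously go deeper than an HTTP 200:

    # Sketch of a daily Tools BVT: ping each tool in the stack, publish one summary.
    import requests

    TOOLS = {
        "JIRA": "https://jira.example.com/status",              # placeholder URLs
        "TFS": "https://tfs.example.com/healthz",
        "Crash analytics": "https://crashdb.example.com/ping",
        "Video capture": "https://capture.example.com/ping",
    }

    def tools_bvt():
        results = {}
        for name, url in TOOLS.items():
            try:
                ok = requests.get(url, timeout=10).status_code == 200
            except requests.RequestException:
                ok = False
            results[name] = "PASS" if ok else "FAIL"
        return results

    if __name__ == "__main__":
        for tool, verdict in tools_bvt().items():
            print(f"{tool:16s} {verdict}")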
The intermediate test engineering tasks (i.e. harness maintenance and Walker/log-in stress testing and troubleshooting) should be assigned to an STE-2 for now, under the watchful direction of a Sr. SDET/STE-3, of course. That way a Jedi Master is keeping an eye on the Jedi Knight to help continue the Knight’s development, at least until the STE-2 is ready to graduate into Sr. STE-level work. Once the Tools BVT process is stable, find a strong Senior Tester or two and get them permanently assigned to handle Tools BVTs for the duration of this portion of the SDLC. Find another strong Senior Test Associate to start Build/OS BVTs as well.
STARTING TO WRAP UP TESTING
By now your test harness should be finely tuned. Automation Results Evaluation should initially be done by an STE-1 for the first couple of days just to make sure the procedures have been set up efficiently, but then he should pass the Automation Results BVT off to another strong Senior Test Associate to maximize efficiency and minimize waste of the more expensive engineering payroll hours. It doesn’t take a programming God like John McAfee or John Carmack to assess the automation passes and failures, triage them appropriately, and publish the Automation BVT report, so a Senior Tester should ultimately be able to take over this role for now- freeing up the STE-1 who trained him for higher engineering tasks and for further career development toward STE-2 level skills.
At this point in the game, there are only a few more weeks or months until product release. As a PM or DevManager, hold a confidential meeting with the project’s Lead SDET and review the performance of ALL of your STEs. If somebody is not making the grade, ask the Lead SDET to correct the substandard performance immediately… or replace the underperformer with a veteran Test Engineer from another team who won’t require any “ramp up” time. While test engineering mistakes can and will be made early on in every single project (hey, we’re all human), now is not the time for them- as we approach the home stretch there is zero room for critical errors that could potentially delay or embarrass the product launch, unless you want to get slaughtered on Metacritic.
Right around now it is also not uncommon for one specific team issue to arise- disagreements between “The Devs” and QA. SDEs under a backlog or in “bug jail” may become a bit testy, and increasingly sensitive and defensive towards testers and SDETs who are simply doing their job by continuing to find bugs. While it’s natural to raise the bug bar a bit towards the end of the SDLC, the veteran DevManager should generally be aware of this scenario already, and the smarter ones will generally side with, and advocate for, QA when possible. After all, would you rather your studio be known for releasing “half-baked potatoes”, or polished products? Two major studios (that I will not name) recently released grossly unpolished, megabuggy games that nearly destroyed their studios’ reputations singlehandedly- and it won’t come as a shock to any studio execs or shareholders when consumers are MUCH less eager to purchase the sequels to their (anonymous) franchises.
Finally, now is an excellent time to assign a small, temporary White Hat Hacking Strike Team, composed of one Senior Test Associate to try and crack the front end of your game or app, and one STE-2 or Network Engineer to attempt to penetrate the back end. This is an especially critical test task for fraud prevention and reducing criminal exposure if your program features any kind of subscription service, company store, or in-game monetary microtransactions whatsoever. Generally, the code infrastructure and any network features you may have planned are simply not functional enough to organically conduct advanced penetration testing any earlier in the SDLC than right here- which is why White Hat Hacking test coverage is ideally executed at this time.
Don’t wait until “crunch time” to conduct anti-malicious testing though- by that point in the project things are usually so frantic and backlogged that it may be too late to close any critical coding loopholes before release. If you wait until Crunch Time to discover a potentially critical malicious exploit in your code, the SDEs and Network Engineers may be forced into an undesirable situation where they have to decide between debugging the security risk OR cutting and/or delaying attractive features that aren’t complete yet. Don’t allow your team to get placed in this situation. You only get one chance to create a first impression, and holding popular consumer features until months after launch because you put off comprehensive IT defense testing until it placed you in a jam… is a disaster that could easily have been prevented by better foresight earlier on, and the adoption of more progressive “upstream testing” methodologies.
“CRUNCH TIME” AND THE FINAL HOME STRETCH OF THE SDLC
All BVT reporting should be fine-tuned by now, like a German automobile. It should be consolidated into a singular, all-inclusive report that includes the trifecta of Automation BVT results, Build/OS BVT results, and Tools BVT results… all on one big fat BVT or IVT report. At this point, this smoketest assignment should go to your strongest, most senior tester on the entire Test Associate squad- your Top Gun (you may even need two if you are running multibranch integration snaps on a MASSIVE project such as developing a new console or MMORPG). Note: This should NOT be the Lead Test Associate. They need to focus on supervising the test associates, not get bogged down smoketesting all day long at the expense of their primary job, which is TA supervision.
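Mechanically, the consolidated report itself can stay simple. Here is a minimal sketch that assumes each sub-BVT drops its results into a small JSON file- the file names and result format are assumptions for illustration, not a standard:

    # Sketch: roll Automation, Build/OS, and Tools BVT results into one go/no-go report.
    import json

    SOURCES = ["automation_bvt.json", "build_os_bvt.json", "tools_bvt.json"]  # assumed files

    def combined_ivt():
        sections, all_green = [], True
        for path in SOURCES:
            with open(path) as f:
                results = json.load(f)          # e.g. {"matchmaking smoke": "PASS", ...}
            failures = [name for name, verdict in results.items() if verdict != "PASS"]
            all_green = all_green and not failures
            sections.append((path, failures))
        lines = ["Daily IVT verdict: " + ("GREEN LIGHT" if all_green else "RED LIGHT")]
        for path, failures in sections:
            lines.append(f"  {path}: {'clean' if not failures else ', '.join(failures)}")
        return "\n".join(lines)

    if __name__ == "__main__":
        print(combined_ivt())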
The official smoketester, AKA “Top Gun”, is generally going to be in a world of hurt from here on- it’s probably one of the single most difficult jobs in the entire department. They will probably have to wake up at 4 or 5 AM to complete their smoketesting, so that when every other dev and tester casually rolls into the office at the crack of 9 AM, everybody has a fully comprehensive BVT or IVT waiting in their inbox with the Green Light or Red Light needed to decide whether to work on the new build all day (or stick with the last known build to have passed BVT). This person is also entrusted to produce accurate, uncannily reliable results- development can be halted for an entire day if they make a single mistake or typo, and everybody in the entire department will know about it. This is unlike SDEs, who screw up and get bug reports sent back daily with no repercussions, and SDETs, who break test harnesses weekly without reprisal. Additionally, the smoketester is the only Test Associate required to be in regular contact with Senior Devs, Sr. STEs, and the PMs… and everything he does is incredibly time-sensitive to boot.
Finally, it’s not uncommon for coworkers who’ve never had to smoketest a multimillion or multibillion dollar project to storm into the tester’s office and demand the BVT faster- which only disrupts the work and slows down the greenlight for everyone- so any Lead SDET or QA Manager should set official policy banning that kind of interruptive behavior. It’s also bad form for any SDE to ask the smoketester to BVT a “private build” as a personal favor, especially if there’s only a single overwhelmed smoketester rather than a team; the software development engineer can and should test his personal experiments himself. In conclusion, if you are developing a new console or MMORPG, pick your smoketester(s) well, and don’t forget to thank them for their hard work.
Just before product launch, your STE-1’s should generally be working with the Lead TA to certify compliance, and practicing intermediate level skills so they can make STE-2 on the next project. STE-2’s should be maintaining the automation harness vigilantly and feverishly testing matchmaking automation whilst personally overseeing a special task force of Multiplayer Test Associates. Generally by this late stage in the game right before launch, your STE-3/Sr. SDET’s may finally have some technical debt to catch up on and upgrade, but most importantly they can now expect to occasionally get called in to quickly troubleshoot traditional lower & middle tier issues that junior and intermediate test engineers would normally handle. That’s because if you only have 10 days before the Release Candidate or Compliance Certification milestone, and Junior Jimmy broke the test harness or can’t figure out what’s wrong with Team Foundation Server today… you need your QA Department’s very best programmer to fix that thing within the hour, not let someone dillydally over it for half a day or more.
POST RELEASE
You’ll likely need a Day 1 patch, but barring unforeseen catastrophe, these patch “bandaids” traditionally fall to the standard SDEs and the Network/Server Engineers (Error: 37, anyone?)- NOT the STEs. However, it won’t hurt to have an STE-1 plus the Lead TA oversee this single task, just to ensure that the standard Test Associates have properly tested the Day 1 Patch so the STE-1 can “certify” that it has purified the build for release day. This is your last chance to catch any bugs before they go out into the wild, so this phase is absolutely critical. There’s nothing worse than spending a year of your life proudly devoted to your work… only to find out about a debilitating bug, gamebreaking exploit, or customer data theft for the first time on CNN or Kotaku, with the eyes of the entire world on your SNAFU.
thetacticalproducersjournal · 11 years ago
Text
Effectively Managing Multi-Branch Build Integrations
Big Budget software development is becoming more advanced and complicated every single day- some projects even have hardware, software, cloud, and firmware components to produce and meld together. For a new Program Manager or Release Manager, this can be a tricky labyrinth to navigate as we pioneer new technological territory. While compartmentalized approaches to multi-branch development are being used more often, there’s no practical textbook on this, and most release PM's today tend to learn their most valuable lessons on the fly... only after a failure has cost the entire team time and money. Here are some of the key unwritten concepts and insider tips on the practical administration of integration mechanics- instruction free of charge, of course.

PLANNING

Remember to appropriately stagger the integration snap so that neither the devs nor the testers are overwhelmed. An “agile environment” doesn’t have to mean belching out garbage builds several times a day (i.e. Continuous Integration), and iterative development never meant spam. Publishing a one-change build during development is an absolute waste of the payroll manhours it will require of the release engineer and the smoketester, although post-release patches are a separate conversation. Plan the integration fluxes frequently (AKA Frequent Integration) but conservatively enough to focus on getting working code first, at least until “crunch time”, when some of your planning is going to go out the window anyway. Rushing the entire team in pursuit of an industrial ideal, as opposed to completing your actual mission, is counterproductive and will only open your organization to risky liabilities that cost more in the long run.

As a PM, it's easy to put horse blinders on and get primarily "Dev Oriented", but remember that if you are spamming out builds and integration fluxes, Test gets the short end of the stick by never having enough time for the deep, thorough, end-to-end testing that recreates actual consumer scenarios. Ill-advised haste and insufficient time between milestones also prey on the vulnerabilities created by the technical debt that accumulates during any long project, and can literally open up permanent seams in your system of checks and balances. This can allow what QA specialists refer to as “bug freefall”, where a defect in your planning lets bugs go undetected for the entire length of the project and make their way out into the wild. Remember, once your bugs get out of their respective quarantine pools, locating them is like asking somebody to figure out exactly which single ingredient is spoiled in a soup with 20 different ingredients in the recipe. Pretty soon you are spending X manhours to investigate and repair what should have been done properly the first time around for half that payroll- and this can cost immeasurably more in terms of stress and slashed features than just a black-and-white overtime number on a piece of paper.

Finally, when deciding on a launch day release date, the various project principal directors should keep in mind that American consumers are becoming increasingly tech savvy as a whole and are quicker to catch on to half-baked products. While sticking to a guaranteed release date is the status quo for most large consumer software companies, this approach will only offer decreasing returns as time goes on.
It's already commonplace for many consumers to wait half a year or more to buy products from certain companies that are notorious for shipping grossly unfinished products. The next generation of executives, crosstrained in both engineering AND marketing, is going to understand this concept and bring us both more iterations and longer SDLC phases for AAA projects than ever before. This of course means integration fluxes are here to stay.

EXECUTING

Those late check-ins are like drugs… just say NO. That’s all there is to that, and almost anybody reading this is already aware of their inconvenience. But I would like to touch on the increasingly prevalent use of private builds, AKA “Buddy Builds”. Buddy builds are just like your best friend was in college: a good resource to be involved with... provided you maintain good habits, lest this devolve into the two of you simply being partners in crime and engaging in activity detrimental to the project. One example of a very bad habit is second- or third-generation personal buddy building that has been divorced from the general integration pool for too long and no longer represents an organic or aurora environment. At that point it has become an assumption, based on a guess, supported by a maybe, and is now utterly unreliable for anything except laughs and giggles. Generally devs should test these private builds themselves, unless they are a project architect and you have a large pool of contingent staffing to help out. That’s because when an informal or “optional” request for QA to jam in an extra BVT of a private build hits the desk of a Test Lead or QA Manager, inexperienced test managers are going to mistakenly say “Yes” every single time, because they don't have much development experience themselves. This inexperience on the test side means Test Leads are not familiar with the esoteric bad habits programmers tend to employ- one of which is an ever-growing addiction to using QA for personal purposes, which will grow and grow until the Test Department becomes so dysfunctional that it is not getting its own primary work done.

ADMINISTERING

Unlogged changes can be a huge waste of time for devs AND test when investigating the issues that arise. You need to strictly discipline programmers who engage in this kind of behavior; this isn’t mickey mouse stuff when a single sneaky check-in can cost tens of thousands of dollars by Snafuing an entire day of development at the AAA level. At a smaller organization this kind of behavior is indicative of poor management direction and developer inexperience- at larger organizations (where everybody should know better) it is usually an intentionally deceptive act, generally perpetrated by sneaky engineers trapped in what the industry refers to as bug jail. As far as workplace sins go, this is analogous to a felony and should be treated as such.

TESTING

The best person to smoketest an integration snap (AKA IVT) is generally the senior-most allstar QA tester in the entire organization. Avoid using an SDET for this process, because full engineers have a tendency to consider themselves above intensive manual testing and are notorious for cutting corners on certification signoffs. Their most notorious cheats include secretly saving themselves time by whipping up unapproved/inorganic test scripts, or fraudulently certifying something simply because their "gut coder's instinct" tells them it should work.
The reason it is imperative to employ only a Senior Tester in this capacity is that this is the most critical test pass in the entire QA department. The tester is generally going to get rushed, rushed, rushed to complete it, the manual examination must be flawless, and any mistake or missed bug is going to be magnified one hundredfold for the entire organization to see. It’s generally the toughest job in the entire test department. The Senior Tester needs an oncall manager who is available 100% of the time during the smoketest should questions arise (and they will). Any test manager who tells his IVT smoketester that he’s busy and cannot help or answer questions immediately has a fundamental misunderstanding of modern software development (i.e. an entire team, or even the whole company, may be on hold waiting for the Pass or Fail BVT result!). Responsibility for test oversight of this smoketest is one of the single most important tasks of the QA Manager's job, and if they do not understand this- or are too busy Facebooking on the company clock- then this is a serious issue to identify and eliminate early on. Just as test managers and testers are policing the devs in a sense, QA Managers aren't infallible either, and sometimes they need to be held to account as well. While there often exists an invisible mental wall between "devs" and "test", both sides can ultimately get blamed when things go wrong... so solid teamwork, and acceptance of joint responsibility for foul-ups, will go a long way toward ultimately releasing a more successful product.
thetacticalproducersjournal · 11 years ago
Text
What QA Management Is Going To Do Wrong in 2014
Today’s topic in software management is critical. Wouldn’t you agree today in January, that if I told you every single mistake you were going to make on the job this year… that this would be the most valuable piece of information you could possibly have? I already know what QA management is going to fail on- and I’m here to tell you exactly what these mistakes will be and how to have the best year you’ve ever had by not making them.
Leading from behind.
Managers have their own office as a perk of their station. Unfortunately, almost all of you camp out in your office out of sight, and only pop out occasionally to tour the department or make your presence known via email directives. I’m all for walking around to see “the men” once in a while- but if you understand the psychology of the average tester that’s not enough. We imagine managers at their desk on Reddit or playing games on their computer (which is exactly what some bosses actually do)- and testers feel entitled to do the same. Trust me- productivity DROPS because of this.
The greatest leadership lesson I have ever learned was at an inconsequential high school job at a movie theatre. The G.M., who always came to work dressed incredibly well in a nice suit, noticed one day that his cleanup staff was behind due to a busy schedule. He took his suit jacket and tie off, rolled up his sleeves, and was right there with us on his hands and knees scrubbing soda off the floors and taking out the garbage. He really didn’t have to do that… But as you can guess, every time after that, when he asked me to clean up a blocked-up toilet or some trash, I eagerly hopped on the task, knowing that he wouldn’t ask us to do anything he wasn’t willing to do himself. I NEVER grumbled again.
I recommend to ALL software managers- and even execs- that at least once in a blue moon, they take the office trash out right in front of everyone (yes, you WILL see bulging eyeballs) and then plop down at an empty workstation for a couple hours of coding or testing in full view of everyone. This has an incredible effect on your staff and will earn more respect than any other single action a manager can take. They will pay you back by working harder than ever before.
Failure to initiate self-improvement and accept constructive criticism from your staff.
Too many managers treat their employees like a fiefdom and rule over them with an iron fist. Sure you’ve earned it, and you don’t want anybody stepping on your toes after you’ve paid your dues the hard way. So if you ever get criticized by an employee, there’s a 90% chance you take them to your office to chew them out.  However, this doesn’t make anybody a big man, it means YOU LOST. You lost a golden opportunity to make yourself better, all for free.
I recommend that on a quarterly basis every manager and exec take a hard look in the mirror, and send out an anonymous Survey Monkey to their department explicitly asking what they can do better. Leaders rise and fall not by themselves, but by the collective actions and morale of their entire department.
Underdeveloped analytics & crash reporting.
Last week I was playing a “AAA” game at home when it crashed. An amateurish UI popped up that said “Oops! This game has crashed. Please obtain the log from file C:/Program/Program Files/Game X/Temp/Logs and send it to the developer.” My first thought was… that was really weak. I don’t mind hitting one button for “Send Crash”, but this is going to take 5-10 minutes to locate the directory and attach the crash logs... and who do I even send it to? There's not even a specific email address or bug forum URL on this UI message, so I don't even know where to deliver it! By the way, did the Software Test Engineer who whipped up this customer-facing gem of an error message even take a look at his own work? It's really not that hard to hit the F5 key in your Visual Studio IDE and doublecheck your own code, guys... So here I was left thinking: "This game is from NinBlizzEpicSegaSoft, and really I expected better from their Test Engineers!"
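For contrast, a graceful one-click crash reporter does not have to be complicated. Here is a hedged Python sketch of the kind of "Send Crash" flow that dialog should have offered- the log directory and upload endpoint are hypothetical stand-ins, not any real studio's pipeline:

    # Sketch of a one-click crash reporter: grab the newest log, upload it, thank the player.
    import pathlib
    import requests

    LOG_DIR = pathlib.Path(r"C:\Program Files\Game X\Temp\Logs")      # hypothetical path
    INTAKE_URL = "https://crashes.example.com/api/upload"             # hypothetical endpoint

    def send_crash_report():
        logs = sorted(LOG_DIR.glob("*.log"), key=lambda p: p.stat().st_mtime)
        if not logs:
            return "No crash log found."
        with open(logs[-1], "rb") as f:
            resp = requests.post(INTAKE_URL, files={"log": f}, timeout=30)
        return "Crash report sent, thank you!" if resp.ok else "Upload failed, please retry."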
If you are a development manager, be sure to double check all your titles and make sure you aren’t putting out “ungraceful” failures like this that break immersion and reek of poor polish.
Utilizing general gametest staff for biometric and physical accessories QA.
Someone’s going to die, boys. In RL, not in a game. Many companies that test physical accessories or fitness/dance games simply assign general test associate staff to QA these titles and features without any forethought about ergonomics or orthopedics at all. We require high school athletes to get a physical before playing sports. That is done for a reason. Even with the annually required high school sports physicals, there are still several tragic, fatal instances of seemingly healthy teenagers collapsing during sports games anyway.
Game Testers are arguably the most out-of-shape profession in the entire world. They sit at a computer and play games all day, drive home, then sit at a computer or couch and play games all night. Why would we thoughtlessly assign this demographic to test sports and fitness biometric games without any foresight? It’s going to take a forward-thinking leader to instill new procedures before the first cardiac event happens and a life is lost to game testing. ANY manager overseeing testing of this sort should start considering a confidential medical history questionnaire (including light sensitivity/seizure activity), or maybe even sending testers to the company doctor if someone is going to be on a sports/dance project for an extended period. As a younger man before I got into professional software development, I once had to respond as a paramedic to a CPR call for a young, fit, healthy male who collapsed simply playing basketball with his buddies. Consider this true-life tragedy the next time you form your dance/fitness/biometrics QA roster.
Giving younger managers more leeway and less scrutiny than the staff of testers they supervise.
Every company considers lower management to be faultless whenever conflict arises between that layer of leadership and the contract staff. Beef between an A.M. and a tester? Surely it’s the tester’s fault…. Or is it?
Flashquiz: How many hours of mandatory leadership training did your Lead/Supervisor/SDET/Assistant Manager have to undergo when he was promoted? Oh, it’s zero?? Correct. Most new managers in our industry also do not have a degree in business administration, and even a Bachelor's in computer science spends no time whatsoever on practical administration training. This is a BIG problem here guys. Just because someone gametested for you for 4 years doesn’t mean he knows diddly about leadership, management, or psychology. That promotion up the totem pole based on solid individual work does not mean he’s to be trusted without question from that point on when it comes to matters of coaching a team. In fact, this is the MOST critical time to check them out! You’ve just promoted a former game tester to a leader and lieutenant. Unless they HAPPEN to have had former experience in military/police/fire, they are not prepared. I’ve worked at several software companies now and maybe only 1/5 require leadership training of any sort whatsoever for that first big promotional step up.
Imagine this is war, and your Jr. Lt. is taking 40 guys out on a critical mission but it's his first command ever. This is famously dangerous in the military- and it is equally so in Test. Now is the time to make sure your new leader doesn’t let any bugs slip through the cracks through unsound methodology, and that he has the management skills to effectively work with other personalities.
Phew! There you have it. This is your crystal ball for 2014. I can’t do anyone’s job for them, but here’s to good luck on doing the best job you’ve EVER done this year in QA Management.
thetacticalproducersjournal · 12 years ago
Text
Microtransacting Your Way To Wealth And Riches
Downloadable Content (DLC), Auction Houses, and other revenue-generating streams have been a mainstay of AAA interactive entertainment projects for several years now. However, despite having worked its way from premium fringe content to an industry "best-practice" over the last few years, this content is still poorly marketed, of mediocre design, and occasionally even reviled by consumers.
The key to successfully deploying DLC like map-packs, additional characters, items, etc. is integral design (or at least conceptualization) of this content early in the software development lifecycle. This truly has to happen in an Agile environment utilizing SCRUM teams, to ensure you've got Game Design, Programming, Marketing, and Production all on the same page. Otherwise you may end up with a notorious Waterfall Software Development Model issue, such as the deep-seated build engineering problems that typically arise when projects are too focused on milestones or intradepartmental production. Most importantly, the reason everyone needs to be onboard with microtransactions (MT's) early on is that they are the most challenging revenue streams to implement without critique, and nearly impossible to garner praise for. Unfortunately, many developers are now too eager to undercut their own project by intentionally releasing two-thirds of a game, with the other third tailored as premium content. The result? On release day, you've got a product wherein players race through the entire game in only 20 hours, and the consumer is confronted with the option to purchase basic skin-packs or map-packs that obviously should have been included in the game in the first place.
This really cheeses me off, and increasingly nuanced consumers are already onto this one. It hurts your branding from day one, and cuts into the bread and butter of any business- which is repeat clientele for your franchise or services. Your projects can quickly acquire a reputation for being skimpy and making obvious grabs for cash on basic content. I can't stand it when I go to Steam or the Xbox store and basic stuff like tactical skins and generic avatar outfits for sale litters up the place from the get-go.
That's why early, integral, and creative MT's utilizing input from interdisciplinary teams are essential. So for starters, instead of spamming your store with basic DLC that was conceived along with general build development, simply include that stuff in the game. Why? Because you should focus on making the best basic version of the game possible before anything else. Places like Reddit, Metacritic, and Gamasutra are going to review the base game, not delve deep into DLC, and it's not going to help you one bit in your initial reviews, which can be important for AAA studios. After eliminating this tempting bad habit, get a little creative with your microtransactions. Instead of placing generic content into the arcade store, place flashy and unique content solicitations (or store links) into the game or endgame at play milestones likely to entice impulse purchases. One great place to do this is after bossfights, when emotions are running high. Players are more likely to be interested in immediately purchasing creative content like zombie map-packs or trading card game (TCG) content spinoffs right after defeating the game, as opposed to at first sign-in. That's because when a player defeats the end game, they often want the experience to continue immediately with new material, so in-game placement or store links (easy on PC OS's with browsing) are superior to simply placed store wares alone.
Another key to successful MT's is being sure to keep them premium and creative, so as not to water down the brand and generate begrudging purchases. It's the difference between having the option to purchase Mexican Coca-Cola, and being strong-armed at the movie theatre into buying watered-down syrup-based fountain drinks. A good rule of thumb: if it seems like something that should be in the game in the first place, don't strong-arm your clients at the store or auction house. An example of this is the AH of one specific Action RPG superblockbuster. It has received constant grief and criticism because, just to successfully compete in the endgame, one is required to upgrade the toon with items from the AH. All at a (reasonable?) AH cut, of course. While a cash cow, this definitely alienated A LOT of people. Now as a result, the "Most Awaited PC Game In History" might just not be most awaited the next time a sequel or expansion pack rolls around.
One interesting factor on the horizon will be the rise of cryptocurrencies like Bitcoin (BTC for short). The winter and spring of 2014 will be a great time for developers and producers to start considering how they will incorporate BTC as an accepted method of payment for MT's (and general sales) in their store and account user interfaces. Summer of 2014 is going to see a surge in Bitcoin's popularity, so Devs should get this implemented live by then- or you'll miss out on a boatload of extra money (AKA impulse purchases). One potential headache in handling this new currency will be fluctuations in the dollar conversion rate. To handle this, one of the producers will have to ensure that the BTC price is carefully adjusted, either manually or through automated scraping, to always maintain the proper equivalent dollar cost in a fluid environment.
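As a rough sketch of that adjustment (the rate feed URL and its JSON shape are hypothetical assumptions, not a real exchange API), the repricing logic is only a few lines:

    # Sketch: keep a microtransaction's BTC price pegged to its dollar price.
    import requests

    RATE_URL = "https://rates.example.com/btcusd"   # hypothetical rate feed

    def btc_price(usd_price: float) -> float:
        usd_per_btc = float(requests.get(RATE_URL, timeout=10).json()["last"])
        # Re-run this on a schedule (or on storefront load) so the BTC figure tracks
        # the fluctuating conversion rate instead of drifting from the USD price.
        return round(usd_price / usd_per_btc, 8)

    # Example: a $4.99 map-pack repriced in BTC at the current rate.
    # print(btc_price(4.99))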
It's really not as hard as it sounds to roll back overcapitalized revenue streams from the realm of the mundane to premium or special content, without making less money. It simply takes a little creative brainstorming. It's like how Seattleites got sick of seeing two Starbucks on every block, and SB countered with increased grocery store placement like Via and other offerings. Think about it for a second. No, I don't want to buy a desert camo skin-pack (something that basic should be a standard option anyway). Do I want to buy a Landshark Gun or a Zombie-Virus-poisoned katana? Hell yes!
Custom maps and mods have always had a way of sneaking up through the ranks and exploding with widespread popularity, sometimes even outpacing the gameplay hours of the retail build itself. Don't have any content to deploy? Well, how about just a concept? Incorporate the best dev-approved mods into the store with a dev cut. Think of how many millions and millions of DOTA-style MOBA games have been played for almost ten years now, and how incredibly poorly it's been monetized. In-game product placement (IGPP) in DOTA could have generated millions of dollars by now, but everybody missed the boat on this one. IGPP is a double-edged sword, but it can be wielded effectively without cutting yourself. Simply remember to keep in-game advertising subtle and unique. Instead of the in-your-face Subway Sandwich product placement the NFL insists on displaying, how about easter eggs and slick incorporation? The Tanaris go-kart raceway easily could have been the Gatorade raceway- because racing is an area where sponsorship is expected anyway.
              It's really a tragedy our industry is stumbling with how to deal with this. Here's a perfect example of how you can make more money with your game without alienating anyone. Imagine a hidden Mountain Dew Extreme Shooting Range minigame with a highest score leaderboard on loadscreens for additional publicity and revenue for your FPS. Loading screens are actually one of the best ways ever to sneak in some subtle IGPP. That's because loading screens are required for the game to operate so the interruption is forgivable, and they suck anyways no matter what the heck is on them. Missed opportunities abound.
In conclusion, microtransactions and revenue streams such as DLC, AH's, and IGPP have grown to become something expected by most knowledgeable consumers of interactive entertainment. However, lazy implementation hurts your brand and risks alienating your client base, especially when it looks like you only put out half a game and expect the suckers to shell out $59.99 for it. That doesn't mean you can't generate just as much revenue with these strategies- if not more. You simply have to think smarter, not harder.
thetacticalproducersjournal · 12 years ago
Text
Managing QA The Right Way
Quality Assurance in the gaming industry has evolved in leaps and bounds. Initially, programmers did all their own testing themselves. Next, in the 80’s, we had playtest evaluation centers, leading to professional game testers in the 1990’s, and now in the new millennium we call them Quality Assurance Analysts. Just in the last few years we’ve raised the bar another notch by realizing the importance of QA in polishing games, and we’re now establishing a series of what are termed “best-practices” in an attempt to standardize this growing career field. Now let’s talk about just what best-practices are in QA.
Professionalizing analysts? Check. EOT’s? Check. Errr…. Wait. No. Requiring End Of Task reports from every tester for every single task is the single worst practice in the entire QA industry. Believe it or not, this is actually standard practice at some major developers. It can all be handled by a single EOS (End of Shift) report from a Specialist or Assistant Lead, covering all the tasks and bugs for the day. I’ve worked at three companies where QA spent 8 hours a day testing and bugging, and at one company where we spent only 4 hours a day testing and 4 hours a day writing reports – not bug reports, mind you, just reports about reports. This is a suicidal practice. Clearly, half of the entire payroll was being flushed down the toilet.
Another worst-practice to avoid is the implementation of a bug-regression pool in which testers regress other people’s bugs. Literally 50% of the time, the tester finds themselves pinging the RP (Reporting Party): “Hey bud, you got a second to put down what you are doing right now and help me repro your bug real quick? I’m not exactly sure how to trigger your steps.” [PAUSE] Stop all your work, and wait 2-3 minutes for a reply on how to repro it. Due to the difficulties of learning the nuances of technical writing, even a direct real-time office messenger reply from the RP may take some time for a Secondary Regressor to analyze and deduce whatever half-step was missing from the original bug report. This leads me to the happily pompous conclusion: TESTERS NEED TO REGRESS THEIR OWN BUGS. Because they know just how to do it, without fudging around. A good convention to establish is to have the producer sling implementations for verification right back to the RP, so the first thing QA does when they come in each morning is regress their own bugs. In addition to wasting tester labor hours, this practice also wastes the time of your QA Engineers, who are now building and supporting regression pool tools that need never have existed in the first place! You know those guys should be focused on automation anyway.
Do encourage interoffice communication as best-practice, and do not restrict any employee at any time from contacting a dev. At some studios this is encouraged, and at others it is directly prohibited. It is best-practice because when a programmer, designer, or artist is known to be addressing an issue that very day, sometimes the tester finds pertinent information that might affect the work being done down the hall at that very moment. So don’t have your analyst simply throw relevant information into the database or wait for an approved Specialist/Assistant Lead to contact the dev, because the critical information might not make it down the line until after the dev has already implemented what he thinks is a legit fix. Now you have just wasted tomorrow’s build! Conventionalize notation in the database, but also be OK with a brief ping on the office instant messenger. One day I saved a senior staffer several hours on a fix, but got busted for messaging him directly. Does that make any sense?
Generally, the reasons given in the past for discouraging QA-to-Dev communication include the fear that fanbois in Game Test will pester your designers with ideas- but if this is going on, it is a failure on the part of the QA Supervisor/Manager who hired that unprofessional tester. That in turn reflects a failure on your part as the Sr. Manager or Producer for hiring a lieutenant who doesn’t screen employment candidates carefully. It may be painful to autopsy your own failures, but as with all bad habits, taking a fearless moral self-inventory is critical to improving yourself and the organization you lead.
              Additionally, do not go cheap on your budget for technological infrastructure in your QA department, nor allow internally created tools to acquire excessive technical debt. Microsoft Game Studios does the best job I have ever seen in the industry in providing unbelievable help desk support and proprietary hardware, DevKits, and Network Security Protocols and defense. Blizzard stands out in absolutely amazing compat support- best in the industry there. However, there is a AAA studio out there who I will not name that does not even offer a single internal tool for either QA or CS. They do everything on paper! Can you believe that? That is a fully unacceptable business practice especially in the 21st century. It reminds me of an experience I once had as a 911 dispatcher when the power went out and we were taking calls on paper, not CAD (Computer Aided Dispatch) and running slips of paper out to an ambulance in the parking lot for a supervisor to dispatch calls from the mobile radio in the truck. It was total chaos, just like AAA (blank) is doing today. Time to change that, right now guys. 
Now let’s talk about what you need to look for in a tester. These guys don’t need a lot of technical training, just a solid work ethic (no “Offscreen Kings and Queens”), a passion for games, and a shred of professionalism. One already well-known example of being a professional analyst is never trading barbs over the database. However, there is one key exception to this. I was once working at a company I loved dearly wherein an Assistant Lead without a grasp of English grammar canceled my bug report on a typo I had caught. I pinged my Supervisor, who hadn’t graduated from college either, and they declined to intercede. I decided that I cared too much about the product and the team, and consciously chose to fall on a grenade to protect everyone in my squad, for the Greater Good of the company’s reputation. The only thing left for me to do was to force the issue by calling someone out (for all time and the whole world to see) in the database. I did this despite knowing the industry-standard response to this kind of behavior was a citation. Would you have done the same? I thought about this carefully, and remembered a valuable lifelong lesson I had learned while reading Stephen Ambrose’s seminal book “Band of Brothers”. Major Richard “Dick” Winters, one of the most influential leaders in the United States Army during World War II, once said:
             “Always do what’s right… because it’s right”.
And this brings up the criticality of having both solid leadership AND character in your QA department. The true key to getting the most out of your line testers is the manner in which you deploy your Assistant Leads and Specialists to supervise and lead them. Your sergeants should not be chained to their desks; they should be walking down the aisles selflessly helping testers, or at their station cutting the EOS report. If the Specs/AL’s are chained to their desks for the entire shift, then they are either A: YouTube Martinets themselves (get rid of these leaders fast) or B: too immersed in the work on their screen. They are going to bark at any analyst who walks up to ask a question, daring to interrupt their chain of thought with what is, in fact, their duty to respond to. Getting barked at is never cool in the office, and unless you are John Carmack or Mike Morhaime, if you yell at me for approaching you then I will not follow your lead. Not even into a (slightly) uncomfortably cold swimming pool, OK? Plus this kind of behavior is going to decrease pride in your workplace and thereby lower retention rates. Recruiting, hiring, and training cost much more than you think. I’ll touch more on this later.
The Manager or Supervisor should take a zen-like, Phil Jacksonish approach to inspiring both work ethic and creativity (to help with edge-case testing), without too much concern for technical instruction in QA. What that means is that you need to select a Manager based on intelligence, not necessarily organizational seniority. A good way to gauge managerial candidates is to ask them, in an interview, who Sun Tzu is. You may even find an exceptional candidate grinding it out obscurely as a game tester. Do not be afraid to spot-promote these diamonds in the rough. Field commission based upon merit is one habit often found in the Green Beret model of leadership. It will keep your organization adaptive and successful, for the greatest benefit of everyone in the company.
Now, most game industry professionals have not spent one iota of time studying Leadership Science, but that is why I am here to help you out with some advice. A good QA manager should drop in from time to time (surprise!), right into an empty cube on the line, and cut bug reports themselves for a few hours, right out in the open. Do not be a Foxhole Norman. And now, I’m going to let you in on a priceless secret. You want to know the very best tactic you can possibly use as a Commander or Manager to earn loyalty and respect? Take the office trash can out right in front of everyone on the very first day. I mean it. Then your employees know in their hearts that they never have the right to question the orders of the one who got their hands dirty first.
            Now, I am going to address a topic that occasionally arises in our field. What do you do with an overqualified tester? Dangle carrots in front of them so that they understand they do have a strong chance of upward mobility, and be quick to discuss internal mobility moves with other department heads such as design or production where these testers may be better suited. Otherwise they may have ideas and suggestions for processes superior to your own, which will open up an entire can of rotten worms, and you will typically end up playing “Whack-A-Mole” on your most promising prospects to protect your own position. Sadly, it is rare to find a gifted manager who understands it is better to let talented workers make the entire department look good- as opposed to the fear that enabling strong performance will make your own shaky leadership look bad.
          If you use the stick, not the carrot, intelligent employees with a shred of self-respect will become quickly disgruntled and cut out fast, leaving for better opportunities elsewhere. You don’t want to let this happen, especially in a specialty (game testing) with notoriously high turnover rates to begin with. Lowering your turnover rate in IT careers is critical because in our industry there is clearly a nasty learning curve for proprietary tools & conventions, wherein it generally takes several weeks for a new employee to get effective at testing, and possibly even months for a very smart engineer to get the hang of his tools. Don’t think it’s possible for the QA budget to drag down an entire company’s profits? It is. Increasing automation, stabilizing retention, and decreasing wasteful practices such as EOTs can make the difference between a studio that is breaking even and one that is showing a profit. After all, isn’t that what this is all about? Running a business?
thetacticalproducersjournal · 12 years ago
Text
Drowning In A WATERFALL Whirlpool
So just what is the Waterfall Software Development Model anyway? It is a sequential design process, beginning with requirements, then… awww Hell. I’m just going to stop there. This is what Waterfall really is… IT’S A TRAP! (Quotation originally belongs to Admiral Ackbar- may the Force be with him.)
While originally presented as a solid foundation for software design in 1956 by Herbert Benington, it was more formally described in a 1970 article by Winston Royce who, appropriately, cited it as a flawed model for software development cycles. Even Mr. Benington himself eventually backtracked on his earlier claims with a hearty dose of CYA years later. Despite the fact that this model has long been established as The Path To Destruction, I’m ashamed to say that I have been involved in MORE THAN ONE GAME in which development was based upon this outmoded model. To protect the identities of the guilty parties, and my own N.D.A., I will not be disclosing the Dev responsible for this atrocity.
In general terms, Waterfall is a process based mainly upon milestones; unlike tight-knit, cross-disciplinary SCRUM teams, the various departments often work simply amongst themselves to meet requirements, and throw everything together less often, at more widely spaced interval milestones. This approach fails to capture the benefits of incremental delivery- the rapid prototypes and daily builds you can come to expect with Agile Development using SCRUM teams.
The result? Builds are a bug-infested mess, with many even failing the basic smoke tests administered by internal test teams, so they don’t even make it out to your general QA department. And if they do get out to general QA, it can be quite ugly, with glaring defects that wouldn’t even pass Smoke on a normal SCRUM team. At one Agile-based studio where I worked, you could expect to spend 8 hours testing and maybe find a single unreported bug to cut a report on. The Waterfall Model studio? Try cutting 20 reports a day, typing nonstop, with unlimited minor bugs that you were eventually told to start ignoring. I would typically find a bug while simply reproing a bug yet to be entered, then maybe find a few more along the way, and end up 5 reports deep. No lie.
Even worse, later builds approaching the release deadline can be straight-up unstable. By this I mean gamebreaking problems from a single department can seriously infect every other department with just a few months to go, causing utter chaos for the entire team and extended Crunchtime for everyone involved. As you all may know by now, Crunchtime has been widely derided by everyone from respected game industry executives, to medical doctors, to social scientists, and should be kept to a minimum. You may think 60-hour-a-week Agile crunchtime is bad for a month or two, but just try to imagine working 100 hours a week for half a year on a major AAA game under tremendous pressure from the publisher to perform well. I was heartbroken to hear that many studio devs on that project actually got divorced during production of the game.
Just to drive this point home: we know it is industry tradition that every game reaches a point near the release deadline where known issues simply must be conscientiously triaged, with lesser priorities discarded. But late-stage prioritization meetings on a Waterfall Model game will blow your mind when you hear people say things like “The opening cutscene is shot to Hell. Do we fix that, or the terrain seams that are popping out of half of our game?” Or you may incredulously hear the statement: “It’s not pretty, but it’s finally stable. Nobody touch it!”
Fortunately, I think the facts here speak for themselves, and I will let the reader come to their own conclusion about the ultimate effectiveness of the Waterfall Development Model as applied to AAA studio video/computer games. If not, just remember Admiral Ackbar’s famous words.
thetacticalproducersjournal · 12 years ago
Text
The Dangers of Product Ownership
As anyone worth their salt knows by now, Agile Development, especially with SCRUM teams, is the new standard for developing software applications. This truth applies even more so to games, which are essentially high-performance applications. Web development, too, has become a multidisciplinary feat of engineering, art, quality assurance, and project management acumen. Building a career in I.T. Production and Program Management, regardless of the product being peddled, now entails much more than understanding iterative development and successfully shipping an Agile production. It also requires the understanding that (sadly) both personal and corporate politics are inevitably going to be involved. Sure, you know what Burndown Metrics are, but do you really know how to “Play The Game”?
Clearly, administrative professionals in the tech world are beginning to understand that obtaining Scrum Master and Product Owner certification (oftentimes along with Project Management certification) is a prerequisite for upward corporate mobility and engineering management know-how as a Producer or Manager. But you also have to know how to recognize and avoid certain career landmines that are simply not discussed in Scrum Alliance coursework. The most insidious and challenging landmine to avoid is the risk involved in being a Product Owner.
I like to describe the Product Owner role as akin to playing Devil’s Advocate, not merely organizing people and output. It is a technically and politically challenging position. The role of the Devil’s Advocate is to probe around and poke holes in your product or idea – but with purely the best of intentions. You simply want to make the best possible product for your organization, to gain the most consumer traction, thereby winning success and profit for everyone involved on the project. You should go into this role for the very first time thinking of yourself as the Hero! The backstop, the safety net, the enforcer of quality. Occasionally this involves pointing out defects and calling out rotten ideas... and sometimes you will come out of this polish and quality assurance process looking like a villain, no matter what you do. Generally, the way your improvement recommendations are perceived depends just as much on the maturity, technical skill, and professionalism of your colleagues as on your own- which we would all like to presume is a high-grade, unrelenting work ethic intertwined with selfless dedication to polish and ethical production, but… sometimes not. And there just may be nothing you can do about it.
Take this example: You point out a genuine defect in the product or in the industrial process being used to manufacture it. Generally you will be surrounded by solid professionals, wise enough to know that we are all human and make mistakes, who should be happy to have an opportunity to improve their product for the Greater Good of every employee and investor with their good name and future riding on the line of this project. This is what happens every time in a perfect world. Tragically though, from time to time the responsible party responds childishly, or takes offense because it has now been revealed to all that they are in a position they are simply not qualified for. They’ve been outed. And they think it’s all your fault, as if you woke up that morning purposely trying to siphon the honey right out of their honey jar. Soon they are questioning your qualifications and even making personal attacks. You may even have to hide out for a bit! And you were just doing the job that was assigned to you. That’s not right, but there are a few things you can do to prevent this from happening, or to protect yourself when it does.
Generally a Sr. Producer or Sr. Manager is going to be immune from the countercriticism of subordinates. However, the Production Managers are generally playing the Scrum Master role alone, or at least that is all they should be doing. When project resources allow, it is no-nonsense best-practice to have somebody else play Product Owner, for the obvious reasons that it’s clearly a conflict of interest to fill both roles, and it’s best to have someone helping you doublecheck your own work in case you missed something. So we get Associate Producers, and sometimes even Production Assistants, playing Product Owner. But this creates scenarios where the role is filled by an individual with less seniority and “power”.
             Thus, if your analysis points out multiple mistakes made by some kind of department manager, and they are a rotten egg (trust me, there is one in every carton), you just may be in for a spot of trouble. This is even more dangerous when you are acting in an interim, auditing, or consulting capacity. As an Interim Producer you have zero seniority, and as a Production Auditor or Consultant you may not even personally know the people involved. And they don’t know you from Adam, either, which makes them more likely to go into a state of denial about their performance- but also to talk smack about you around the office water cooler.
             The best ways to prevent pushback are to spend extra effort getting to know everyone on the team, and to take the time to carefully explain your good intentions and reasoning before undertaking any audit. If you think the target of the criticism is likely to take official remonstration poorly, it doesn’t hurt to briefly jump on a call or send them an email beforehand, letting them know to expect a little of what they might perceive as criticism before they find out via an email dispatched to a 50-person dev group. This tiny bit of additional communication, notifying a colleague of approaching choppy waters, can at times go a long way toward heading off interpersonal blowups. It can even instill the sense that you respect somebody enough to put effort into heading off at the pass any chance that this person could be surprised or feel humiliated in front of everybody during situations like code review and team emails.
           Additionally, you can protect yourself by not only obtaining but also visibly displaying professional credentials, certifications, and professional association memberships. Leave those expensive Project Management Institute textbooks visibly on your desk to help build the narrative that you are an educated and nuanced professional- not an individual beholden to issuing whimsical bug reports, or somebody inclined to build their career at the cost of others’. I’ve caught flak for critical safety audits and localization audits on multiple occasions, but getting criticized didn’t bother me in the instances when toxicity or denial was deployed by people selfishly trying to cover up their own mistakes while I was a card-carrying, experienced operator simply doing what’s right because it’s right. I will, however, always remember getting slimed simply for doing a damn good job in my Product Owner role.
            In conclusion, when you play the Product Owner role, you truly are the Hero, the backstop, and the enforcer of quality. Just be prepared for the eventuality that not everybody is always going to know it. So be sure to bring an umbrella.  
thetacticalproducersjournal · 12 years ago
Text
The Secrets of Localization Production
           Localization production is just like managing any other department… right? Wrong. This is the most complex unit to direct at any major AAA developer. Successfully shipping a polished regional build entails challenges that require not only your general skillset, but also foresight, critical thinking, and a strong knowledge of cultural customs. This is, in fact, the fastest-growing concern in our industry in this age of economic globalization and multinational corporations.
            Fortunately, I’ve devised a primer to help you avoid the pitfalls my colleagues and I have experienced in the past. Basically, this is the key concept to remember: good LOC QA and LOC Engineering does not simply amount to translating. It is all about understanding and applying a working knowledge of cultural anthropology. Simply throwing bilingual staff at the work no longer cuts it, given the increasing sensibilities of the nuanced consumer. You need an educated social scientist to implement documentable flow charts and develop a strong “Chain of Custody” process.
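           To make the “Chain of Custody” idea concrete, here is a minimal sketch (in Python, with entirely hypothetical field and role names, not any particular studio’s pipeline) of the kind of provenance record a LOC pipeline could attach to every translated string, so that translation, cultural review, and sign-off become documentable steps rather than assumptions:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class CustodyEvent:
    """One hand-off in the life of a localized string."""
    actor: str                      # e.g. "translator:jdupont", "cultural_review:amartin"
    action: str                     # e.g. "translated", "culturally_reviewed", "approved"
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    notes: Optional[str] = None

@dataclass
class LocalizedString:
    """A source string plus its translation and its full chain of custody."""
    key: str
    source_text: str
    target_locale: str
    translated_text: Optional[str] = None
    custody: List[CustodyEvent] = field(default_factory=list)

    def record(self, actor: str, action: str, notes: Optional[str] = None) -> None:
        self.custody.append(CustodyEvent(actor=actor, action=action, notes=notes))

    def is_signed_off(self) -> bool:
        # A string only ships once somebody has explicitly approved it.
        return any(e.action == "approved" for e in self.custody)

# Example: the record shows who touched the string, and in what order.
s = LocalizedString(key="ui.loading.title", source_text="Loading...", target_locale="fr-FR")
s.record("translator:jdupont", "translated", "Chargement...")
s.translated_text = "Chargement..."
s.record("cultural_review:amartin", "culturally_reviewed")
s.record("loc_producer:lead", "approved")
print(s.is_signed_off())  # True
```

           The value is auditability: when a faux pas does slip through, the custody log shows exactly which hand-off was missing a cultural reviewer.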
           Unfortunately, most major devs don’t realize this and have fleshed out their LOC command staff AND general roster with employees who are simply bilingual. Well, you can post a Craigslist ad to find immigrants from just about any country, and even headhunt native-born citizens with B.A.’s in a foreign language. That doesn’t mean you should always do that, though. Hypothetically, anyone can teach themselves French out of a book, or study French in Paris one summer- but that doesn’t mean they possess French sensibilities or understand the importance of avoiding faux pas (Français for “false step”) in the cross-disciplinary realm of international business.
          This is a danger especially if you work at a larger company, where major AND sub-regionalization are becoming increasingly important. Nonsensically though, the larger the company you work for, the more strictly the chain of command is going to be enforced, and the more the Green Beret model of self-initiative-driven leadership is discouraged. What that means is that if you have a strong critical thinker on your LOC team, he or she risks the wrath of mediocre managers who feel threatened by constructive criticism or any process-improvement suggestion that they didn’t conjure up themselves. That’s why it is becoming an increasing best practice to focus on hiring or promoting regional localization supervisors or producers who have backgrounds or degrees in International Business or Anthropology.
           Additionally, do not overlook the importance of having a certified Product Owner on your LOC staff! Although your localization department may not be large or complex enough to require a certified Scrum-Master, I am a serious proponent of tapping a Devil's Advocate to ensure the quality of your product. You may not need a Product Owner full time on the project, so ask the larger core dev team if you can borrow an Associate Producer on TA (Temporary Assignment) to undertake an audit. When your Product Owner does his job by pointing out defects and making recommendations, protect him from pushback.
           The importance of this concept was reinforced to me the night I was watching a British program "on the telly" wherein the actors naturally referred to cigarettes using a word found highly offensive in the States. Still think a one-dialect-per-regional-build approach works? Just how many customers can that model potentially alienate without at least one sharp eye overseeing the work? And even if your mistakes aren't offensive, they can break gameplay immersion and destroy the carefully crafted corporate image you have worked so hard to project. I go nuts whenever I read poorly backtranslated manuals from Ikea, or see laughably translated text in computer games that were developed in Japan.
           Finally, don’t overlook forethought and QA in localization engineering if you want to sell a polished product overseas. Otherwise, you are going to end up with a well-dubbed game in, say, Russian, but loading screens in English!
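           A lot of those “English loading screen” moments come down to silent fallbacks in the string table. As a rough illustration only (the locale codes, table layout, and keys here are hypothetical, not any particular engine’s API), a lookup that logs every fallback hands LOC QA a precise list of which keys never got translated for which sub-region:

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("loc")

# Hypothetical string tables; a real project would load these from data files.
STRING_TABLES = {
    "en":    {"loading.title": "Loading...", "menu.quit": "Quit"},
    "ru":    {"menu.quit": "Выход"},   # note: loading.title was never translated
    "ru-RU": {},                        # sub-regional overrides, if any
}

def fallback_chain(locale: str, default: str = "en"):
    """e.g. ru-RU -> ru -> en, so sub-regional builds inherit sensibly."""
    chain = [locale]
    if "-" in locale:
        chain.append(locale.split("-")[0])
    if default not in chain:
        chain.append(default)
    return chain

def localize(key: str, locale: str) -> str:
    for candidate in fallback_chain(locale):
        table = STRING_TABLES.get(candidate, {})
        if key in table:
            if candidate != locale:
                # This is the moment a Russian build quietly shows English text.
                log.warning("key %r missing for %s, fell back to %s", key, locale, candidate)
            return table[key]
    raise KeyError(f"untranslated key with no fallback: {key}")

print(localize("menu.quit", "ru-RU"))      # Выход (fell back ru-RU -> ru, logged)
print(localize("loading.title", "ru-RU"))  # Loading... (fell back to en, logged)
```

           The design point is simply that a fallback should be loud somewhere, even if the player never sees it; an untranslated loading screen then shows up in a log long before it shows up in a review.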